SMART AND COMPACT IMAGE CAPTURE DEVICES FOR IN VIVO IMAGING
20230042900 · 2023-02-09
Inventors
CPC classification
H04N23/555
ELECTRICITY
H04N23/651
ELECTRICITY
H04N23/74
ELECTRICITY
A61B1/05
HUMAN NECESSITIES
H04N23/667
ELECTRICITY
International classification
A61B1/05
HUMAN NECESSITIES
A61B1/04
HUMAN NECESSITIES
Abstract
A novel in-vivo image capture device for a capsule endoscope and its method of operation are described. The device includes a wafer-level camera module design, a high-sensitivity backside illumination pixel with high-definition image output, and LEDs that provide illumination synchronized with an image sensor strobe signal. A frame rate of the device can be adjusted based on angular motion detected by a gyroscope sensor: a high frame rate mode is maintained during fast motion, while a low frame rate is maintained during slow or no motion. The image capture device also includes a machine-learning-based SOC for image processing, enhancement, and compression. The SOC can compute and store zone averages of images. The image capture device also includes high-density flash storage to store images in the device, so no RF transmitter is needed, which makes the system more convenient to use.
Claims
1. A method of synchronizing an in vivo image capture device, comprising: providing an image sensor, wherein the image sensor comprises a plurality of rows of pixels R1 to Rn, with n being an integer; integrating the pixels row by row; reading the pixels row by row; setting an integration time between the resetting of the last row of pixels Rn and the reading of the first row of pixels R1; illuminating during the integration time; processing readouts from the plurality of rows of pixels in a control unit within the image capture device; and transferring an image from the image capture device; wherein the illuminating comprises providing a strobe signal from the image sensor and synchronizing the illuminating with the strobe signal.
2. The method in claim 1, wherein the illuminating comprises setting a pulse width of an LED.
3. The method in claim 1, further comprising detecting a velocity of the image capture device.
4. The method in claim 3, further comprising setting the integration time in proportion to the velocity.
5. The method in claim 4, wherein the detecting comprises providing a gyroscope for detecting the velocity of the image capture device.
6. The method in claim 5, comprising: setting an exposure time and a gain of the image sensor; setting the pulse width of the LED; obtaining the velocity of the image capture device; adjusting the exposure time and the gain of the image sensor according to the velocity; and adjusting the pulse width of the LED according to the velocity.
7. The method in claim 1, wherein the transferring of the image comprises transmitting the image by a radio frequency transmitter.
8. The method in claim 1, further comprising storing the image in a memory storage unit.
9. The method in claim 8, wherein the storing the image comprises providing a non-volatile memory.
10. The method in claim 1, wherein the illuminating is performed only during the integration time.
11. An in vivo image capture device, comprising: a housing; an optical window and an optical system separated from the optical window; a CMOS image sensor; a LED; a gyroscope; a system start switch; a battery; a power management unit; and a storage device.
12. The image capture device of claim 11, wherein the CMOS image sensor comprises an imaging area comprising an array of pixels, each pixel comprising a photodetector, a pixel readout transistor, and a correlated double sampling readout; row select circuitry to select one or a group of rows; column select circuitry to output one or a group of columns; one or more analog-to-digital converters to convert pixel output to digital output; and an output interface to output a digital signal to other chips.
13. The image capture device of claim 12, wherein the CMOS image sensor has configurable register settings to change an integration time.
14. The image capture device of claim 12, wherein the CMOS image sensor has a strobe control signal to synchronize a vertical blank readout period with other devices in the image capture device.
15. The image capture device of claim 11, wherein the image capture device comprises a wide-angle lens; integrated wafer-level optics; and a camera made by wafer level chip scale packaging.
16. A method of operating an in vivo image capture device, comprising: providing a CMOS image sensor having a plurality of pixels; allocating an imaging area of the CMOS image sensor into one or more zones, each zone having one or more pixels; taking readouts from the pixels; averaging readouts among neighboring pixels within the one or more zones; comparing an average of readouts from a first frame to an average of readouts from a second frame within the one or more zones and determining a difference; processing the readouts from the plurality of pixels in the image capture device; and transferring an image from the image capture device.
17. The method in claim 16, further comprising discarding the readouts from the second frame if the difference within the one or more zones is below a threshold value.
18. The method in claim 17, further comprising transferring the readouts from the first frame to a flash memory.
19. The method in claim 16, wherein the one or more zones are of equal size, each having an equal number of pixels.
20. The method in claim 16, wherein the one or more zones are overlapping.
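The zone-averaging frame comparison recited in claims 16-20 can be sketched as follows. This is an illustrative sketch only: the zone dimensions, threshold, and all function names are assumptions for demonstration and are not part of the claims.

```python
# Illustrative sketch of the zone-averaging comparison of claims 16-20.
# Zone size, threshold, and all names here are assumptions.

def zone_averages(frame, zone_h, zone_w):
    """Average pixel readouts within each zone of the imaging area."""
    rows, cols = len(frame), len(frame[0])
    averages = []
    for r0 in range(0, rows, zone_h):
        for c0 in range(0, cols, zone_w):
            zone = [frame[r][c]
                    for r in range(r0, min(r0 + zone_h, rows))
                    for c in range(c0, min(c0 + zone_w, cols))]
            averages.append(sum(zone) / len(zone))
    return averages

def keep_second_frame(first, second, zone_h, zone_w, threshold):
    """Per claim 17: discard the second frame if the zone-average
    difference is below the threshold in every zone."""
    a1 = zone_averages(first, zone_h, zone_w)
    a2 = zone_averages(second, zone_h, zone_w)
    return any(abs(x - y) >= threshold for x, y in zip(a1, a2))
```

A frame that is nearly identical to its predecessor is thus dropped before it reaches the flash memory, saving storage during slow or no motion.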
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0014] The disclosure will be readily understood by the following detailed description in conjunction with the accompanying drawings, wherein like reference numerals designate like structural elements, and in which:
DETAILED DESCRIPTION
[0027] Reference will now be made in detail to representative embodiments illustrated in the accompanying drawings. It should be understood that the following description is not intended to limit the embodiments to one preferred embodiment. To the contrary, it is intended to cover alternatives, modifications, and equivalents as may be included within the spirit and scope of the described embodiments as defined by the appended claims. Although references are made to an imaging device to be used in an endoscopy procedure, this is by no means limiting, and a person of ordinary skill in the art will appreciate that a similar device of the invention may be used for other in-vivo imaging as well.
[0028] Reference is now made to
[0031] The imaging area 310 may be in communication with a column select circuit 330 through one or more column select lines 332, and with a row select circuit 320 through one or more row select lines 322. The row select circuit 320 may selectively activate a particular pixel 312 or group of pixels, such as all of the pixels 312 in a certain row. The column select circuit 330 may selectively receive the data output from a selected pixel 312 or group of pixels 312 (e.g., all of the pixels in a particular row). The row select circuit 320 and/or column select circuit 330 may be in communication with the image processor 340, which may process data from the pixels 312 and output that data to another processor, such as a system on a chip (SOC) included on the printed circuit board 107.
[0033] Besides the photodetector 402, the pixel 400 also comprises four transistors (4T): a transfer gate (TX) 404, a reset transistor (RST) 406, a source follower (SF) amplifier 408, and a row-select (Row) transistor 410. The transfer gate 404 separates the floating diffusion (FD) node 416 from the photodiode node 402, which makes correlated double sampling (CDS) readout possible and thus lowers noise.
[0034] The readout timing diagram of a PIN photodiode is shown in
[0035] The integration time 512 is defined from a falling edge of the TX gate 404 during reset (time A2) to a falling edge of the TX gate 404 during charge transfer (time A7). Normally the pixel response increases linearly with the integration time 512 at a fixed light intensity.
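The linear response described above can be sketched as follows. This is a simplified illustration: the responsivity constant and full-well clipping value are assumptions, not device specifications from this disclosure.

```python
def pixel_response(light_intensity, integration_time_s, responsivity, full_well):
    """Simplified linear pixel model: collected signal (in electrons)
    grows linearly with integration time at a fixed light intensity,
    until the photodiode's full-well capacity clips it.
    `responsivity` (e- per intensity-unit per second) is an assumed
    lumped constant for illustration only."""
    signal = responsivity * light_intensity * integration_time_s
    return min(signal, full_well)
```

Doubling the integration time doubles the collected signal until saturation, which is why the integration time is the primary exposure control for the sensor.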
[0037] The image sensor is designed to include a strobe control signal output pin 662 as shown in
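The strobe-gated illumination of claim 1 (and the "illuminate only during integration" limitation of claim 10) can be sketched in control-logic form. This is a minimal sketch, assuming the sensor drives the strobe pin high during the common integration window and the LED driver samples it each tick; all names and the tick-based timing are assumptions for illustration.

```python
def led_drive(strobe_samples, pulse_width_max):
    """Gate the LED with the sensor strobe signal: the LED is on only
    while the strobe is high, and for at most `pulse_width_max` ticks
    per strobe pulse (the programmed LED pulse width)."""
    on_time = 0
    states = []
    for strobe_high in strobe_samples:
        if strobe_high and on_time < pulse_width_max:
            states.append(True)
            on_time += 1
        else:
            states.append(False)
            if not strobe_high:
                on_time = 0  # re-arm the pulse counter for the next strobe
    return states
```

Because the LED can only fire inside the strobe window, every row of the rolling-shutter array sees the same illumination, avoiding row-to-row exposure banding.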
[0038] Now reference is made to
[0039] Wafer-level optics and wafer-level chip-scale packaging technology may be used in order to make a compact camera system suitable for in-vivo endoscopy. In this invention, a highly integrated wafer-level camera cube is proposed, which may include a lens focus element 722 and an image sensor 724. All the lens components may be manufactured using wafer-level processing and stacked at the wafer level using wafer-level chip-scale packaging technology, integrating camera functionality in a small footprint and a low profile that fits in a tiny space. The wafer-level camera module may be soldered directly to the printed circuit board 104 with no socket or insertion required.
[0040] For in-vivo imaging, a wafer-level integrated camera offers a few advantages over traditionally designed cameras. For example, the wafer-level integrated camera features a large field of view: a greater than 120-degree wide-angle lens design is preferred to capture as much light as possible, so that critical information from an endoscopy procedure is not missed due to a limited field of view or poor image quality. For another example, the camera lens 722 of a wafer-level integrated camera may maintain sharp focus at near focal distances, e.g., within a 3-centimeter distance. The camera lens should have a low f-number and a large aperture to capture more light onto the image sensor and improve image quality in low-light situations.
[0041] For the purposes of capsule endoscopy, and of this application, the image sensor must work in low-light conditions most of the time, so low-light image quality is critical. Image sensor design choices should be made carefully to achieve optimal image quality with low power consumption, fast readout speed, and little image artifact or distortion. Since the camera lens design is circular and symmetric, it is preferred to design the image sensor with a square pixel array to fully use the lens optical power; that is, the pixel array has an equal number of rows and columns to maximize the light collection area. A square image sensor with 1280 rows and 1280 columns is recommended to obtain high-definition output in either the x-direction or the y-direction.
[0042] To achieve optimal system performance, pixel size and pixel design have been carefully considered. A large pixel provides better low-light performance but at higher cost, due to a larger die size and a larger footprint for the capsule image capture system. On the other hand, a smaller pixel results in a smaller array size, but image quality suffers in low-light conditions. Typically, a 1.0-1.4 μm pixel is a good balance between image quality and die size. A stacked-chip backside illumination (BSI) image sensor is chosen over a frontside illumination (FSI) image sensor for better low-light performance. In addition, a backside illumination sensor provides many benefits over a traditional frontside illumination image sensor, such as higher quantum efficiency (QE), lower cross-talk between pixels, a wider pixel acceptance angle, and less signal roll-off from array center to edge, and is thus ideal for this application. The micro-lens design and optical stack may be fully optimized to achieve higher QE, lower cross-talk, and less image flare and other artifacts.
[0043] Wafer bonding technology may be used to stack a logic wafer below a pixel wafer so that the die size may be reduced significantly compared with traditional frontside illumination image sensors. The logic wafer and the pixel wafer may be bonded at the wafer level, and connections between the wafers may be made through Cu—Cu hybrid bonding or TSVs (through-silicon vias). Another benefit of stacked-wafer technology is the use of different technology nodes for the pixel wafer and the logic wafer. The pixel wafer may be made separately for optimal pixel performance, while a more advanced process node may be adopted for the logic wafer to increase readout speed, reduce die size, add extra features, lower power consumption, and reduce cost. In addition, a memory wafer made of a dynamic random access memory or a NAND flash memory may also be attached by direct or hybrid wafer bonding to the logic wafer for image storage and local processing of the images.
[0044] To improve image quality at low light, a readout noise from the image sensor must be reduced as much as possible. Correlated double sampling readout may remove kTC noise from RST gate 406 and reduce the readout noise by at least an order of magnitude. A low noise circuit design is also required for the pixel source follower amplifier 408, the pixel bias circuit 412, and a column amplifier and comparator circuitry of analog to digital converters (ADC).
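The kTC-noise cancellation of CDS can be illustrated with a small sketch: the same random reset offset appears in both the reset sample and the signal sample, so subtracting the two removes it exactly. The numbers and names are illustrative assumptions, not measured device values.

```python
import random

def cds_readout(photo_signal, ktc_noise_sigma, rng):
    """One correlated-double-sampling read: sample the floating
    diffusion after reset (reset level + random kTC offset), then
    after charge transfer (signal + the same kTC offset), and
    subtract. The common kTC term cancels exactly."""
    ktc_offset = rng.gauss(0.0, ktc_noise_sigma)  # random reset noise, in e-
    reset_sample = ktc_offset
    signal_sample = photo_signal + ktc_offset
    return signal_sample - reset_sample
```

This is why the transfer gate 404 matters: it isolates the floating diffusion so both samples can be taken around the same reset event, making the offset truly common to both.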
[0045] A linear full-well capacity of the pixels defines the maximum signal-to-noise ratio of the image sensor and the sensor dynamic range. A typical linear full-well capacity is in the range of 6000 e− to 10,000 e− for a 1.0-1.4 μm pixel size, which provides a dynamic range of about 69-74 dB, assuming a 2 e− readout noise. Other pixel parameters also need to be fully optimized to achieve the best possible image quality with minimum power consumption.
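The dynamic range figures above follow from the standard ratio of full-well capacity to read noise, expressed in decibels:

```python
import math

def dynamic_range_db(full_well_e, read_noise_e):
    """Sensor dynamic range in dB: 20 * log10(linear full-well
    capacity / readout noise), both in electrons."""
    return 20.0 * math.log10(full_well_e / read_noise_e)
```

With a 2 e− read noise, a 6000 e− full well gives about 69.5 dB and a 10,000 e− full well about 74 dB, matching the 69-74 dB range stated above.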
[0046] Now reference is made to
[0047] The system start switch 801 may be controlled by an external magnet, which keeps the switch closed while the magnet is in proximity to the switch. When the storage box is opened and the external magnet is moved away, the system start switch 801 turns on, activating the SOC 804 and the camera 810, and the image capture device 800 starts its operation. The camera 810 captures images and sends them to the SOC 804 for processing, enhancement, and compression.
[0048] It is possible to integrate a high-speed, large-capacity flash drive 808 into the image capture device 800. The images taken during the endoscopic procedure may be stored in the flash drive 808 with time stamps, so an RF transmitter is not needed. At the end of the endoscopic procedure, an interface cable is used to transfer the images out of the flash drive 808 for diagnosis by a doctor.
[0049] A gyroscope sensor 814 typically measures the rate of angular motion of the image capture device 800, i.e., the rate of rotation. The gyroscope sensor, typically a microelectromechanical systems (MEMS) device, may measure three types of angular rate: yaw, pitch, and roll; the angular rate may then be converted into a linear velocity to detect the motion of the image capture device 800. The velocity of the image capture device 800, obtained from the gyroscope sensor 814, may be used to control the mode of operation of the image sensor 810. Reference is now made to
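The motion-dependent frame-rate control described in the abstract (high frame rate during fast motion, low frame rate during slow or no motion) can be sketched as a simple threshold on the gyroscope's angular-rate magnitude. The threshold and the two frame rates are illustrative assumptions, not values from this disclosure.

```python
import math

def select_frame_rate(yaw_dps, pitch_dps, roll_dps,
                      threshold_dps, high_fps, low_fps):
    """Pick the capture frame rate from the magnitude of the three
    gyroscope angular rates (yaw, pitch, roll, in degrees/second):
    a high frame rate during fast motion, a low frame rate during
    slow or no motion."""
    rate_magnitude = math.sqrt(yaw_dps**2 + pitch_dps**2 + roll_dps**2)
    return high_fps if rate_magnitude > threshold_dps else low_fps
```

Switching to the low frame rate while the capsule is stationary conserves battery and flash storage without losing coverage, since little changes between frames.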
[0050] Now reference is made to
[0051] Reference is made to
[0052] Once the image capture is done, the image capture device is collected and sent to a doctor's office for image transfer and analysis. The doctor's office may have special devices that connect to the I/O pins inside the image capture device to transfer the time-stamped images for analysis. A machine-learning-based algorithm may be run to identify images associated with high-risk areas for the doctors to focus on and to narrow down the locations of interest. This may reduce diagnosis time and increase diagnosis efficiency.