Imaging Apparatus and Video Endoscope Providing Improved Depth Of Field And Resolution
20220026725 · 2022-01-27
CPC classification
H04N23/45 (ELECTRICITY)
H04N23/555 (ELECTRICITY)
H04N23/951 (ELECTRICITY)
H04N23/55 (ELECTRICITY)
G02B27/1066 (PHYSICS)
A61B1/00059 (HUMAN NECESSITIES)
A61B1/042 (HUMAN NECESSITIES)
G02B27/0075 (PHYSICS)
International classification
A61B1/00 (HUMAN NECESSITIES)
A61B1/04 (HUMAN NECESSITIES)
A61B1/05 (HUMAN NECESSITIES)
G02B23/24 (PHYSICS)
Abstract
A dynamic imaging system for use with an endoscope, or as an element of a video endoscope, utilizes path length differences and/or a variable aperture size to expand the usable depth of field and/or improve image resolution in an area of interest in the image field. In some implementations, the imaging system utilizes a variable aperture in conjunction with unequally spaced image sensors placed downstream from a beam splitter. The imaging system captures multiple focal planes of an image scene on separate sensors. A variable aperture permits the capture of enhanced-resolution images or images with longer depths of field. These differently focused images, and/or images with different resolutions and depths of field, are then combined using image fusion techniques.
Claims
1. An imaging apparatus for an endoscope, comprising: an aperture stop receiving a light beam; a beamsplitter for separating the light beam received from the aperture stop; a first image sensor spaced apart from a first side of the beamsplitter at a first distance; a second image sensor spaced apart from a second side of the beamsplitter at a second distance, wherein the first and second distances are not the same; and an imaging processor receiving image data from the first and second image sensors, the imaging processor configured to combine the captured images from the first and second image sensors, such that in-focus regions of the captured images are combined to produce a resulting image with extended depth of field and/or improved resolution.
2. The imaging apparatus of claim 1, wherein the aperture stop is a variable aperture stop.
3. The imaging apparatus of claim 2, wherein the variable aperture stop is a temporally variable aperture.
4. The imaging apparatus of claim 3, wherein the imaging processor is further configured to process images captured by the first and second image sensors at a first acquisition period, and images captured by the first and second image sensors at a subsequent second acquisition period, into the resulting image.
5. The imaging apparatus of claim 2, wherein the variable aperture stop includes a first diameter defined by the variable aperture stop, wherein an annular polarized filter is disposed in the variable aperture stop defining a second diameter, and wherein the beamsplitter divides the light beam based on polarization.
6. The imaging apparatus of claim 1, further comprising a radio frequency identification (RFID) reader to detect an RFID identifier from the endoscope when the endoscope is attached to the imaging apparatus.
7. The imaging apparatus of claim 6, wherein the imaging processor identifies properties associated with the attached endoscope based on the detected RFID identifier.
8. The imaging apparatus of claim 2, further comprising a radio frequency identification (RFID) reader to detect an RFID identifier from the endoscope when the endoscope is attached to the imaging apparatus.
9. The imaging apparatus of claim 8, wherein the imaging processor identifies properties associated with the attached endoscope based on the detected RFID identifier.
10. The imaging apparatus of claim 9, wherein a diameter of the variable aperture stop is adjusted in response to the identified properties of the attached endoscope.
11. The imaging apparatus of claim 1, wherein the imaging apparatus is detachably connectable to an external endoscope.
12. The imaging apparatus of claim 1, wherein the imaging processor is configured to process the resulting image such that it has a depth of field greater than that of any of the individually collected images and a region of resolution at least equal to that of any of the individual images.
13. The imaging apparatus of claim 4, wherein the imaging processor is configured to process the resulting image such that it has a depth of field greater than that of any of the individually collected images, has a region of resolution at least equal to that of any of the individually collected images, and has a resolution greater than that of each of the images captured at the first acquisition period or each of the images collected at the second acquisition period.
14. The imaging apparatus of claim 1, wherein the imaging apparatus is an element of a video endoscope.
15. The imaging apparatus of claim 14, further comprising an objective lens capturing light from an image scene and directing the captured light through the aperture stop.
16. The imaging apparatus of claim 15, further comprising an aspheric or positive lens receiving the light beam after it passes from the aperture stop.
17. The imaging apparatus of claim 16, further comprising a collimating or carrier lens receiving light from the aspheric or positive lens and passing the beam to the beamsplitter.
18. The imaging apparatus of claim 14, wherein the aperture stop is a variable aperture stop.
19. The imaging apparatus of claim 18, wherein the variable aperture stop is a temporally variable aperture.
20. The imaging apparatus of claim 19, wherein the imaging processor is further configured to process images captured by the first and second image sensors at a first acquisition period, and images captured by the first and second image sensors at a subsequent second acquisition period, into the resulting image.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0013] The present invention will become more fully understood from the detailed description given hereinbelow and the accompanying drawings which are given by way of illustration only, and thus are not limitative of the present invention, and wherein:
DETAILED DESCRIPTION
[0029] The beam splitter of the first illustrated arrangement reflects the incoming light beam multiple times, directing each reflected beam onto a different portion of a single sensor 11.
[0030] Each reflection changes the path length and, as a result, the back focal length of each beam is different. The image formed on each portion of the sensor 11 captures a separate focal plane of the object being observed by the insertion portion of an endoscope. Alternatively, two separate sensors can be used in place of the single sensor 11, and the individual sensors can be placed at different distances from the beam splitter.
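To make this relationship concrete, the following sketch (my illustration, not taken from the patent; all numbers are hypothetical) uses the thin-lens equation to show how a small back focal length difference between two sensors shifts the in-focus object plane:

```python
# Illustrative sketch: two sensors at slightly different image distances
# behind the same lens are in focus for two different object planes.
# Thin-lens equation: 1/f = 1/s_o + 1/s_i.

def in_focus_object_distance(f_mm: float, image_distance_mm: float) -> float:
    """Object distance rendered sharp for a given lens-to-sensor distance."""
    return 1.0 / (1.0 / f_mm - 1.0 / image_distance_mm)

f = 5.0        # hypothetical focal length (mm)
s_i = 5.5      # image distance to the first sensor (mm)
delta = 0.2    # extra optical path length to the second sensor (mm)

print(in_focus_object_distance(f, s_i))          # ~55 mm: far focal plane
print(in_focus_object_distance(f, s_i + delta))  # ~41 mm: near focal plane
```

Fusing the two captures then covers both object planes in a single result.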
[0031] The beam splitter of another illustrated arrangement is assembled from prism elements (8, 10 and 14) whose partially reflective interfaces (9, 15) divide the incoming beam among separate sensors.
[0032] Each separate sensor 11 detects a differently focused beam providing an image including information at a particular depth. Each of the beam splitter elements or prisms (8, 10 and 14) can be made of crystal glass, polymers, acrylic, or other light transmissive materials. Also, the interfaces (9, 15) can be made partially reflective such that the intensity of each sensed beam is substantially equal. Alternatively, to compensate for surface losses or other differences, the reflective surfaces on the interfaces (9, 15) may divide the beams unequally.
[0033] The optical arrangement of a further example splits the incoming light into two polarization channels, a first channel and a second channel, that travel along paths of different lengths.
[0034] Both channels pass through a gap in which the variable liquid lens 20 is disposed. After passing through this gap, both channels enter another beam splitter 16, which recombines the two beams and passes them on to a variable wave plate 17 for changing the polarization of both beams. The variable wave plate 17 alternates between ON and OFF states such that, when it is ON, the polarization of the incoming light beam is rotated by 90 degrees.
[0035] After the variable wave plate 17, the combined beam enters a beam splitter 19 which once again separates the channels based on polarization such that they are imaged onto different sensors 18. Thus, on odd frames, one sensor 18 captures “s” polarized light and the other sensor 18 captures “p” polarized light. On even frames, the different sensors 18 are given the other channel. The collimating lens group 22 is disposed before the variable wave plate 17 for further beam manipulation and control.
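The alternating assignment of polarization channels to sensors can be summarized in a few lines. The sketch below is my illustration of the frame-to-channel mapping described above, not code from the patent:

```python
# On odd frames, sensor A captures "s" polarized light and sensor B "p";
# on even frames the variable wave plate swaps the assignment.

def channel_for(frame: int, sensor: str) -> str:
    """Return which polarization channel a sensor sees on a given frame."""
    swapped = frame % 2 == 0
    if sensor == "A":
        return "p" if swapped else "s"
    return "s" if swapped else "p"

for frame in (1, 2):
    print(frame, channel_for(frame, "A"), channel_for(frame, "B"))
# frame 1 -> A: s, B: p ; frame 2 -> A: p, B: s
```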
[0036] In this manner, four different images corresponding to four different focal planes can be acquired over the course of two frames. From the plurality of images, a processing unit (not shown) calculates an image with greater depth of field than would be possible with a single sensor and conventional optics. Alternatively, this imaging head could be integrated into a video endoscope that does not detach from a camera head.
[0037] The two illustrations referenced above depict the two states of the variable wave plate 17, in which each sensor 18 receives the opposite polarization channel on alternating frames.
[0038] The focal difference between the first and second channels due to the variable lens 20 and path length difference is also simultaneously provided to the sensors 18. This results in four unique focal planes over two capture periods. Furthermore, the variable lens can change position or focal power to increase the number of focal planes further or simply to adjust focus. The variable lens may have variable curvature or a variable index of refraction.
[0039] The camera head may also have a control unit to adjust the focal difference according to the content of the acquired image, the imaging environment, or other application specific needs. In addition, the camera head can be capable of identifying the specific endoscope or endoscope-type being used and adjust focus accordingly. The “s” and “p” polarization described above is exemplary and could be replaced with circular or elliptical polarization.
[0040] The camera head of a further illustrated arrangement operates on the same split-channel principle.
[0041] In the arrangement of that example, the differently focused images captured by the sensors are passed on for processing.
[0042] The arrangement is also connected to a control device for controlling the variable lens and a processor 71 that calculates depth from the captured images or segments the in-focus portions for recombination and display. The processor 71 is also able to model three-dimensional surfaces and build complex tissue models. These models and surfaces can be stored in memory such as RAM or transmitted to a display screen for display to a user.
[0043] Conventional methods of increasing depth of field fundamentally decrease resolution, so systems are typically forced to trade depth of field against resolution. Combining several image sensors to provide depth information, however, preserves resolution and can even improve it. Furthermore, the images can be segmented to isolate the in-focus portion of each captured image, and the in-focus segments recombined into a clearer image with more depth information and a greater depth of field.
[0044] Additionally, the camera head of this arrangement can be varied as described in the following examples.
[0045] The camera head can also be simplified by replacing the variable liquid lens 20 with a simple movable focusing lens 23, as shown in the corresponding figure.
[0046] Another optical arrangement for providing depth of field, as in the previous arrangements, is shown in a further figure.
[0047] The arrangement in that figure permits the focal offset between the captured images to be tuned, for example via the variable liquid lens 20 or the relative positions of the sensors 18.
[0048] The camera head can identify the endoscope being attached and, based on detection of the specific endoscope type, either store settings in memory or adjust automatically, with the variable liquid lens 20 or the relative positions of the sensors 18 being adjusted accordingly. In either case, the adjustment preferably optimizes the focal offset introduced by these elements. Furthermore, the ray bundles at the focal planes should be telecentric.
[0049] The larger system diagram referenced here shows how the camera head integrates with the control and processing components described above.
[0050] An alternative arrangement without variable liquid lenses 20 is provided in a further figure.
[0051] Additionally, variable apertures could be used to vary the attributes of the captured images, namely their depth of field and resolution, at a given focal plane from one acquisition period to the next. From a manufacturing perspective, fixed apertures, and even variable apertures, can be less expensive and faster to position than variable liquid lenses.
[0052] The alternate optical configuration of this example likewise captures differently focused images, which are then combined digitally as described next.
[0053] Digital image processing can combine each of the differently focused and separately captured images by selecting and extracting the sharp areas of each image and combining them into a single full resolution image. Additionally, the color information from the blurred areas can be reconstructed using the contrast information of the sharp areas or the combined image such that the colors are accurately reproduced.
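As a minimal sketch of the sharp-area selection step, assuming a per-pixel Laplacian magnitude as the sharpness measure (my choice; the patent does not name a specific measure):

```python
import numpy as np
from scipy import ndimage

def fuse_by_sharpness(stack: np.ndarray) -> np.ndarray:
    """Fuse an (N, H, W) grayscale focus stack by picking, per pixel,
    the source image with the strongest local Laplacian response."""
    sharpness = np.stack([np.abs(ndimage.laplace(img.astype(float)))
                          for img in stack])
    best = np.argmax(sharpness, axis=0)   # index of sharpest image per pixel
    return np.take_along_axis(stack, best[None], axis=0)[0]
```

In practice the per-pixel selection would be smoothed or blended to avoid visible seams, which is where the weight maps and pyramid fusion of the next paragraph come in.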
[0054] First, the fusion method generates a contrast weight map, a saturation weight map, and an exposure weight map for each captured image. These maps are then applied to select the best pixels from each image. Finally, the separate weighted images containing the selected or weighted pixels are combined with pyramid-based image fusion to generate a combined image.
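This recipe matches the well-known Mertens-style exposure fusion, of which OpenCV ships an implementation. The following is an illustrative usage sketch with hypothetical file names, not the patent's own code:

```python
import cv2
import numpy as np

# Load the differently focused captures as float32 images in [0, 1].
paths = ("focus_near.png", "focus_mid.png", "focus_far.png")
images = [cv2.imread(p).astype(np.float32) / 255.0 for p in paths]

# Mertens fusion: contrast/saturation/exposure weight maps + pyramid blending.
merger = cv2.createMergeMertens()
fused = merger.process(images)

cv2.imwrite("fused.png", np.clip(fused * 255, 0, 255).astype(np.uint8))
```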
[0055] Interpolating the color information slightly reduces both resolution and contrast. This should not present a problem, however, since the resolution of the sensors and of the combined image exceeds the resolution of the best endoscopes. On the other hand, the increased depth of focus allows certain optical errors, such as image field curvature, to be compensated. Image field curvature often occurs in endoscopes with a very long inversion system.
[0056] The extended camera head of this example is divided into segments. In a first segment, a first beam splitter deflects roughly one-third of the light beam onto a first sensor 18, and the remainder passes onward.
[0057] The second segment 28 is an inversion system carrying the remaining light beam to a second beam splitter 19 which splits half or some fraction of the remaining light onto another sensor 18 in a different focal plane. The remaining one-third of the light beam passes through the third segment 29 which is an inversion system like that in the second segment 28. The remaining light is deflected by mirror 30 and imaged by sensor 18, which is also in a different focal plane. Each inversion system flips the image or changes the parity of the image resulting in various captured image orientations which must be corrected optically or digitally.
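Since each inversion system flips the image, a digital correction simply needs to undo the accumulated parity per sensor. A small illustrative sketch, assuming a left-right flip per inversion (the patent does not specify the flip axis):

```python
import numpy as np

def correct_parity(img: np.ndarray, n_inversions: int) -> np.ndarray:
    """Undo the flip accumulated by an odd number of inversions."""
    return np.fliplr(img) if n_inversions % 2 else img
```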
[0058] The three sensors 18 in this arrangement therefore lie in three different focal planes, so that three differently focused images of the scene are captured simultaneously.
[0059] The loss of light due to distributing the light beam onto several sensors may be compensated in that the system can have a higher numerical aperture than an equivalent system, that is, a system covering the same depth of focus with a single sensor as this system does with multiple sensors.
[0060] With the higher numerical aperture, a higher overall resolution is achieved, whereas in conventional systems this resolution would come at the cost of a lower depth of field. Because the same image is captured by the various sensors at the same time on different focal planes, the sharp areas from the individual sensors can be combined into one image.
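The trade-off can be quantified with the standard diffraction formulas (lateral resolution ~ λ/2NA, diffraction-limited depth of focus ~ λ/NA²). The sketch below uses illustrative values of my choosing, not figures from the patent:

```python
wavelength_um = 0.55  # green light

def resolution_um(na: float) -> float:
    return wavelength_um / (2 * na)       # lateral diffraction limit

def depth_of_focus_um(na: float) -> float:
    return wavelength_um / (na ** 2)      # axial diffraction-limited DOF

for na in (0.05, 0.10):
    print(f"NA={na:.2f}: resolution ~{resolution_um(na):.1f} um, "
          f"DOF ~{depth_of_focus_um(na):.0f} um")
# Doubling NA halves the resolvable feature size but quarters the DOF;
# offset focal planes on multiple sensors recover the lost axial range.
```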
[0061] The camera head for an endoscope shown in a further figure detachably receives the proximal end of an endoscope 34.
[0062] A beam exiting the aperture 13 of the attached endoscope passes into the camera head optics described above.
[0063] The outlined device in that figure also includes means for identifying the attached endoscope, as described below.
[0064] Advantageously, one or more of the image sensors 18 can be connected to a small actuator 39 that can adjust the focal plane position. This allows the focal plane difference between the two sensors to be adjusted for a particular situation without a variable liquid lens. The actuator 39 can also be combined with these other modes to provide larger ranges of focal plane differences.
[0065] Upon the identification of the specific endoscope 34 from the tag 38 on the proximal end of the endoscope, the actuator 39 adjusts the focal planes of the sensors 18 to an optimal focal plane offset. Alternatively, the identification can be done via the camera head with a QR code, bar code, or a specific color scheme on the endoscope end. Additionally, the endoscope could be identified by direct connection via a data bus or by analysis of electrical resistance or a magnetic field direction of the endoscope end.
[0066] The actuator 39 can be a piezo-electric motor or other small motor. Upon identification of the endoscope tag 38, an RFID reader 36 of a camera head like that described above reads the tag and triggers the corresponding focal adjustment.
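A hypothetical sketch of the identification-to-adjustment step follows; the endoscope IDs, offsets, and the `actuator.move_to` interface are all assumptions for illustration, not details from the patent:

```python
# Map a detected endoscope identifier to a stored focal-plane offset and
# drive the sensor actuator accordingly.
FOCAL_OFFSETS_UM = {
    "SCOPE_4MM_30DEG": 120,   # hypothetical endoscope types and offsets
    "SCOPE_10MM_0DEG": 200,
}

def apply_scope_profile(tag_id: str, actuator) -> None:
    """Move the sensor to the optimal focal offset for the detected scope."""
    offset = FOCAL_OFFSETS_UM.get(tag_id)
    if offset is not None:
        actuator.move_to(offset)   # e.g., a piezo actuator behind the sensor
```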
[0067] It is also noted that any of the camera heads and optical arrangements disclosed herein may be implemented into the device described above.
[0068] The invention being thus described, it will be obvious that the same may be varied in many ways. For instance, capabilities, components or features from each of the optical arrangements above are combinable or transferrable to any of the other optical arrangements disclosed herein. Such variations are not to be regarded as a departure from the spirit and scope of the invention, and all such modifications as would be obvious to one skilled in the art are intended to be included within the scope of the following claims.