IMAGING METHOD FOR IMAGING A SCENE AND A SYSTEM THEREFOR
20230164287 · 2023-05-25
Assignee
Inventors
CPC classification
G06T7/246
PHYSICS
International classification
Abstract
Methods (10, 10′, 10″) for imaging a scene (110) during a change of a field of view (112) of an image sensor (114) relative to the scene (110) and a corresponding system (100) are disclosed.
Claims
1. An imaging method for imaging a scene during a change of a field of view of an image sensor relative to the scene, the method including the following steps: a) capturing a first image in the scene with a first illumination of the scene at a first time (t1), the first image predominantly containing first image information from the scene in a visible light range, b) following step a, capturing a plurality of second images in the scene with a second illumination of the scene at a respective second time (t2, t2′), the second images predominantly containing second image information from the scene in a non-visible light range, c) following step b, capturing a third image in the scene with the first illumination of the scene at a third time (t3), with the third image predominantly containing third image information from the scene in the visible light range, d) determining at least one reference feature in the scene, the reference feature imaged in the first and in the third image, e) determining a first change (A) of the field of view on the basis of the at least one reference feature, f) registering the second images while considering at least one second change (B), which arises as a first proportion of the first change (A), with the first proportion being the ratio of a first time difference between the second times (t2, t2′) of two second images to be registered and a second time difference between the third time (t3) and the first time (t1), g) processing the registered second images in order to obtain a resultant image, and h) outputting the resultant image.
2. The imaging method of claim 1, wherein the method includes the further steps of: i) following step b and before step c, capturing a fourth image in the scene with the first illumination of the scene at a fourth time, the fourth image predominantly containing fourth image information from the scene in the visible light range, j) following step i and before step c, capturing a plurality of fifth images in the scene with the second illumination of the scene at a respective fifth time, the fifth images predominantly containing fifth image information from the scene in the non-visible light range, k) following step e and before step g, registering the fifth images while considering at least one third change, that arises as a second proportion of the first change, with the second proportion being the ratio of a third time difference between the fifth times of two fifth images to be registered and the second time difference between the third time and the first time, and with the registered second images and the registered fifth images being processed in step g in order to obtain a resultant image.
3. The imaging method of claim 2, wherein step d is instead carried out as follows: determining at least one first reference feature in the scene, the first reference feature is imaged in the first and in the fourth image, and at least one second reference feature in the scene, the second reference feature is imaged in the fourth and in the third image; wherein step e is instead carried out as follows: determining a first change (A1) of the field of view on the basis of the at least one first reference feature and a further change (A2) of the field of view on the basis of the at least one second reference feature; wherein step f is instead carried out as follows: registering the second images while considering at least one second change (B1), that arises as a first proportion of the first change (A1), the first proportion being the ratio of a first time difference between the second times (t2, t2′) of two second images to be registered and a second time difference between the fourth time (t4) and the first time (t1); and wherein step k is instead carried out as follows: registering the fifth images while considering at least one third change (B2), which arises as a second proportion of the further change (A2), with the second proportion being the ratio of a third time difference between the fifth times (t5, t5′) of two fifth images to be registered and a fourth time difference between the third time (t3) and the fourth time (t4).
4. The imaging method of claim 3, wherein a first intensity of the first image and of the third image is greater in each case than a second intensity of each of the second images.
5. The imaging method of claim 1, wherein a first intensity of the first image and of the third image is greater in each case than a second intensity of each of the second images.
6. The imaging method of claim 2, wherein a first intensity of the first image and of the third image is greater in each case than a second intensity of each of the second images.
7. The imaging method of claim 1, wherein the second images predominantly contain second image information from the scene in a near infrared range.
8. The imaging method of claim 2, wherein the second images predominantly contain second image information from the scene in a near infrared range.
9. The imaging method of claim 3, wherein the second images predominantly contain second image information from the scene in a near infrared range.
10. The imaging method of claim 4, wherein the second images predominantly contain second image information from the scene in a near infrared range.
11. The imaging method of claim 5, wherein the second images predominantly contain second image information from the scene in a near infrared range.
12. The imaging method of claim 6, wherein the second images predominantly contain second image information from the scene in a near infrared range.
13. The imaging method of claim 1, wherein the second images are brought into correspondence with a reference image.
14. The imaging method of claim 7, wherein the second images are brought into correspondence with a reference image.
15. The imaging method of claim 8, wherein the second images are brought into correspondence with a reference image.
16. The imaging method of claim 9, wherein the second images are brought into correspondence with a reference image.
17. The imaging method of claim 1, wherein the processing of the second registered images includes the application of computational photography.
18. The imaging method of claim 2, wherein the processing of the second registered images includes the application of computational photography.
19. The imaging method of claim 3, wherein the processing of the second registered images includes the application of computational photography.
20. A system for imaging a scene during a change of a field of view of an image sensor relative to the scene, the system comprising: at least one illumination device, at least one imaging apparatus, a processing device which is designed to cause the system to carry out a method according to claim 1.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0040] Exemplary embodiments of the invention are depicted in the drawings and are described in more detail in the following description.
DETAILED DESCRIPTION OF THE INVENTION
[0051] In the context of the present invention, the conventional means of improving the image quality of the second images, which predominantly contain second image information from the scene in a non-visible light range, are stretched to their limits. By way of example, more sensitive image chips for recording the second images can be significantly more expensive and are thus not desirable. Another potential solution, lengthening the exposure time for the second images, leads to a blurring of contours on account of the change of the field of view of the image sensor, which can be caused by any movement of the image sensor relative to the scene. A further common solution, increasing the luminous intensity, has only a limited effect, especially in the field of fluorescence imaging, since it is not reflected light that is sensed but rather the fluorescence emission from the fluorophore triggered by the excitation light. The terms “visible” and “non-visible” here relate to human vision, with “visible” describing a spectrum to which the human eye is sensitive and “non-visible” describing a spectrum to which the human eye is insensitive.
[0052] In the context of the present invention, directly overlaying a plurality of second images likewise does not lead to a satisfactory solution, since this would also cause a “blurring” of each of the second images on account of the change in the field of view of the image sensor. Another considered solution is a direct registration (see, for example, https://de.wikipedia.org/wiki/Bildregistrierung or https://en.wikipedia.org/wiki/Image_registration) of the second images, but this was not found to be practical in all situations, especially if the second images cannot be accurately registered due to a low contrast of the second images, for example.
[0053] Some of the benefits of the present invention arise from the field of view of the image sensor being determined on the basis of image information from images predominantly collected in the visible light range. In practice, this image information has sufficiently clear features able to be identified, and with the aid of these clearly identifiable features, the change in the field of view can be determined. In this context, the change can be determined from two immediately successive images with image information in the visible light range, or else from images which follow one another in time without being immediately successive.
[0054] It is possible to determine the times at which the images predominantly containing image information in a non-visible light range are captured, for example by reference to a timing generator common in image capture systems. If the assumption is now made that the change in the field of view of the image sensor between the capture of images predominantly containing image information in the visible light range is at least substantially uniform, it is possible to calculate the extent to which the change has advanced at the time of recording of the images predominantly containing image information in the non-visible light range. Specifically, the change may include one or more elements of the group consisting of a translation, a rotation, a tilt, a change of an optical focal length, or a change of a digital magnification.
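The uniform-motion assumption above amounts to linearly scaling the determined change by the elapsed fraction of the capture interval. A minimal illustrative sketch (in Python; the class, its fields, and the restriction to a translation-plus-rotation change are assumptions for illustration, not part of the disclosure):

```python
from dataclasses import dataclass

@dataclass
class FieldOfViewChange:
    """A simple rigid change of the field of view: translation plus rotation."""
    dx: float     # horizontal translation in pixels
    dy: float     # vertical translation in pixels
    angle: float  # rotation in degrees

    def scaled(self, fraction: float) -> "FieldOfViewChange":
        """Return the share of this change accrued after `fraction` of the
        interval, assuming the change is at least substantially uniform."""
        return FieldOfViewChange(self.dx * fraction,
                                 self.dy * fraction,
                                 self.angle * fraction)

def fraction_elapsed(t1: float, t3: float, t2: float) -> float:
    """Fraction of the interval [t1, t3] that has elapsed at time t2."""
    return (t2 - t1) / (t3 - t1)

# Change A determined between the first image (t1 = 0 ms) and third image (t3 = 40 ms)
A = FieldOfViewChange(dx=10.0, dy=-4.0, angle=2.0)
f = fraction_elapsed(0.0, 40.0, 20.0)  # a second image captured at t2 = 20 ms
partial = A.scaled(f)                  # dx = 5.0, dy = -2.0, angle = 1.0
```

A more general implementation could scale any of the listed change types (tilt, focal length, digital magnification) in the same proportional manner.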
[0055] For example, if the assumption is made that a change A between a first time t1, at which time the first image is recorded, and a third time t3, at which time the third image is recorded, is T=t3−t1, and that a second image is recorded at a time t2=0.5 T, then the change between the first image and the second image is 0.5 A and it is also 0.5 A between the second image and the third image.
[0056] In another example, two second images are recorded at times t2=0.4 T and t2′=0.6 T. The time difference between the first-second image and the second-second image is therefore t2′−t2=0.2 T, and so the change between the second images can be calculated as 0.2 A. Expanding on this example, it becomes apparent that the second images can now be registered because the proportional change between the second images is known. Thus, it is not necessary to determine the change from the second images themselves. Although this can additionally be implemented in order to increase the accuracy or to test reliability, it is not required to examine the second images themselves with respect to a change. Knowledge of the change in the corresponding images which predominantly contain image information in the visible light range allows the proportional change of the second images to be determined.
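The proportional change of this example follows directly from the capture times. The following sketch (illustrative Python; the function name is an assumption) reproduces the 0.2 A result:

```python
def change_between_second_images(A, t2, t2_prime, t1, t3):
    """Change B between two second images captured at t2 and t2',
    taken as the time-proportional share of the change A determined
    between the first image (t1) and the third image (t3)."""
    return A * (t2_prime - t2) / (t3 - t1)

# Times as in the example: t2 = 0.4 T and t2' = 0.6 T with T = t3 - t1 = 5
B = change_between_second_images(A=8.0, t2=2.0, t2_prime=3.0, t1=0.0, t3=5.0)
# B = 8.0 * (3.0 - 2.0) / 5.0 = 1.6, i.e. 0.2 A
```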
[0057] Further, the second images can also be registered to the first and/or the third image, since it is known in the above example that the change between the first image and the first second image is 0.4 A and the change between the second-second image and the third image is also 0.4 A. Thus, for example, the first-second image can be registered with the second-second image, or vice versa, and the resultant image can be registered with the first and/or the third image.
[0058] For the registration of the second images, each second image is assigned a corresponding second time; for example, this may be the second time at which the corresponding second image was recorded. In the case of slow changes and, in particular, if the intention is to register a plurality of images which predominantly contain image information in the non-visible light range, but which are between different images containing predominantly image information in the visible light range, it may be sufficient to assign the second images that are between the same images containing predominantly image information in the visible light range a certain second time, without assigning each second image an individual second time.
[0059] It should be noted that it is not necessary to process all images that predominantly contain image information in the non-visible light range. Rather, a selection can be made, and so only a subset of the recorded images represents the second images which are brought into correspondence. Accordingly, the second frames for processing can also be a subset of all second frames. Finally, not all second times have to be different.
[0060] The proposed solution is the optimization of both the white light image and, in preferred circumstances, the fluorescence light image. The fact that both images correlate alternately in a defined chronological sequence may be exploited by using the effects of this bi-spectral time offset for image improvement. In this way, the conventional “brute force” methods of improving FI overlay imaging by amplification of the light source, particularly when LEDs are used, or by amplification of the camera sensor signal, with all the deleterious effects and costs that accompany such methods, can be avoided. Instead, the present invention makes innovative use of video processing algorithms for fluorescence light recordings, including a correction using the visible light range and over a plurality of frames.
[0061] In the specific case where visible light and non-visible light (usually NIR FI light) are used alternately, with two separate but correlated channels, which are correlated spatially and across the change, it is the same scene that is recorded, but registered in different spectra. The time offset is minor in this case. The weaker non-visible information is preferably not used to track a change during the recording of the images, since this information alone would be unnecessarily inaccurate: it is too weak and blurred, and contains too few details.
[0062] The movement correlation to the white light is now transferred, by extrapolated movement compensation data, from the visible light to the non-visible light and is used for the required image optimization in the non-visible light. In endoscopic observations, the movement compensation was found to be very effective, as movements are generally uniform overall on account of the relatively sluggish mass of the endoscopic system and, often, its additional attachment to holding elements. The image optimization is applied to the images in the non-visible light range in a movement-compensated manner over a plurality of past frames, on the basis of the change in the images in the visible light range described by white light vectors.
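The movement-compensated accumulation over a plurality of past frames can be sketched as follows (illustrative Python with NumPy; the restriction to integer-pixel translations and the function name are simplifying assumptions, since the disclosure also permits more general changes such as rotation or zoom):

```python
import numpy as np

def compensate_and_average(frames, shifts):
    """Average weak NIR frames after undoing the per-frame translation.

    frames: list of 2-D arrays (NIR frames).
    shifts: list of (dy, dx) integer translations of the field of view at
    each frame's capture time, extrapolated from the white-light motion
    estimate rather than measured in the NIR frames themselves.
    """
    aligned = [np.roll(f, (-dy, -dx), axis=(0, 1))
               for f, (dy, dx) in zip(frames, shifts)]
    return np.mean(aligned, axis=0)

# Synthetic example: the same weak spot drifts one pixel right per frame.
base = np.zeros((8, 8))
base[3, 2] = 1.0
frames = [np.roll(base, (0, k), axis=(0, 1)) for k in range(3)]
shifts = [(0, k) for k in range(3)]  # extrapolated white-light motion
result = compensate_and_average(frames, shifts)
# After compensation, the spot adds up coherently at position (3, 2).
```

Without the compensation, the plain average would smear the spot across three pixels; with it, the signal accumulates in place, which is the intended gain for low-contrast fluorescence frames.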
[0063] It should be noted that the first illumination and the second illumination are commonly provided by different light sources; however, in some embodiments they may be provided by a single light source. In the latter case, use can be made of a switchable filter, for example, which for an excitation frame blocks most white light, passing only the excitation wavelength of the fluorophore, and which passes white light for the visible frame. However, it is also possible to dispense with such a filter for the light source if two image sensors are used for the image recording, one of which is substantially sensitive to visible light while the other is substantially sensitive to non-visible light. In principle, it is also possible to use only one image sensor if the filter is arranged in front of it such that either visible light or non-visible light is substantially guided to the image sensor.
[0065] In a first step 12, a first image 41 in the scene 110 is captured with the first illumination 50 of the scene 110 at a first time t1, with the first image 41 predominantly containing first image information from the scene 110 in the visible light range.
[0066] In step 14, which follows step 12, a plurality of second images 42, 42′ in the scene 110 are captured with the second illumination 52 of the scene 110 at a respective second time t2, t2′, with the second images 42, 42′ predominantly containing second image information from the scene 110 in the non-visible light range.
[0067] In step 16, which follows step 14, a third image 43 in the scene 110 is captured with the first illumination 50 of the scene 110 at a third time t3, with the third image 43 predominantly containing third image information from the scene 110 in the visible light range.
[0068] In step 18, at least one reference feature 54 is determined in the scene 110, with the reference feature 54 being imaged in the first image 41 and in the third image 43.
[0069] In step 20, a first change A of the field of view 112 is determined on the basis of the at least one reference feature 54.
[0070] In step 22, at least one second change B is determined, which arises as a first proportion of the first change A, with the first proportion being the ratio of a first time difference between the second times t2, t2′ of two second images 42, 42′ to be registered and a second time difference between the third time t3 and the first time t1. This can be expressed as a formula as follows: B=A*(t2′−t2)/(t3−t1).
[0071] Another approach in this embodiment is as follows: For each second image 42, 42′ of the second images 42, 42′, the change A of the field of view 112 is interpolated to the second time t2, t2′ assigned to the second image 42, 42′ in order to obtain a partial change dA, dA′ of the field of view 112 for the second image 42, 42′. In particular, this partial change dA can be calculated as dA=A*(t2−t1)/(t3−t1) and dA′=A*(t2′−t1)/(t3−t1). Moreover, the second change B between the second images 42, 42′ can then be calculated as B=dA′−dA. Then, B=A*(t2′−t2)/(t3−t1) is also true.
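Both routes of this embodiment, interpolating partial changes dA, dA′ and differencing them, or applying the direct formula, yield the same second change B. A short sketch (illustrative Python; the function name is an assumption):

```python
def partial_change(A, t, t1, t3):
    """Interpolate the total change A, measured over [t1, t3], to time t."""
    return A * (t - t1) / (t3 - t1)

t1, t3 = 0.0, 50.0   # capture times of the first and third image
t2, t2p = 20.0, 30.0  # capture times of the two second images
A = 10.0              # change determined between first and third image

dA = partial_change(A, t2, t1, t3)    # change accrued at t2  -> 4.0
dAp = partial_change(A, t2p, t1, t3)  # change accrued at t2' -> 6.0
B = dAp - dA                          # second change between the images

# Equivalent to the direct formula B = A * (t2' - t2) / (t3 - t1)
assert abs(B - A * (t2p - t2) / (t3 - t1)) < 1e-12
```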
[0072] In a step 24, the second images 42, 42′ are registered to bring the second images 42, 42′ into correspondence while considering the second change B, or alternatively the respective obtained partial changes dA, dA′, and thus to obtain registered second images.
[0073] In a step 26, the registered second images are processed in order to obtain a resultant image 109.
[0074] In a step 28, the resultant image 109 is output to a monitor 108.
[0076] The movement of the reference feature 54 is used to symbolically depict how the field of view 112 of the image sensor 114 changes relative to the scene 110. A change A arises between the first image 41 and the third image 43, a change dA arises between the first image 41 and the first-second image 42, a change dA′ arises between the first image 41 and the second-second image 42′ and a second change B arises between the first-second image 42 and the second-second image 42′.
[0080] In a step 32, a fourth image 44 in the scene 110 is captured with the first illumination 50 of the scene 110 at a fourth time t4, with the fourth image 44 predominantly containing fourth image information from the scene 110 in the visible light range.
[0081] In a step 34, a plurality of fifth images 45, 45′ in the scene 110 are captured with the second illumination 52 of the scene 110 at a respective fifth time t5, t5′, with the fifth images 45, 45′ predominantly containing fifth image information from the scene 110 in the non-visible light range.
[0082] In a step 36, the fifth images 45, 45′ are registered while considering at least one third change B2, which arises as a second proportion of the first change A, with the second proportion being the ratio of a third time difference between the fifth times t5, t5′ of two fifth images 45, 45′ to be registered and the second time difference between the third time t3 and the first time t1.
[0083] In step 26, both the registered second images and the registered fifth images are processed in order to obtain a resultant image.
[0085] Instead of step 18, a step 18′ is now carried out, in which at least one first reference feature 54 is determined in the scene 110, which reference feature is imaged in the first and in the fourth image 41, 44. Moreover, at least one second reference feature 56 is determined in the scene 110, which reference feature is imaged in the fourth and in the third image 44, 43. It is possible for the first reference feature 54 to be the same as the second reference feature 56. However, the first reference feature 54 may also differ from the second reference feature 56.
[0086] Instead of step 20, a step 20′ is now carried out, in which a first change A1 of the field of view 112 is determined on the basis of the at least one first reference feature 54. Moreover, a further change A2 of the field of view 112 is determined on the basis of the at least one second reference feature 56.
[0087] Instead of step 22, a step 22′ is now carried out, in which the second images 42, 42′ are registered while considering at least one second change B1, which arises as a first proportion of the first change A1, with the first proportion being the ratio of a first time difference between the second times t2, t2′ of two second images 42, 42′ to be registered and a second time difference between the fourth time t4 and the first time t1. In one embodiment, this can be expressed as a formula as follows: B1=A1*(t2′−t2)/(t4−t1).
[0088] Instead of step 36, a step 36′ is now carried out, in which the fifth images 45, 45′ are registered while considering at least one third change B2, which arises as a second proportion of the further change A2, with the second proportion being the ratio of a third time difference between the fifth times t5, t5′ of two fifth images 45, 45′ to be registered and a fourth time difference between the third time t3 and the fourth time t4. In one embodiment, this can be expressed as a formula as follows: B2=A2*(t5′−t5)/(t3−t4).
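Under the two-segment scheme of steps 22′ and 36′, B1 and B2 are each a time-proportional share of the change of their own segment. A minimal sketch (illustrative Python; the capture times and the function name are assumptions):

```python
def proportional_change(A_seg, dt_pair, dt_seg):
    """Share of the segment change A_seg corresponding to the time
    difference dt_pair within a segment of total duration dt_seg."""
    return A_seg * dt_pair / dt_seg

# Hypothetical capture times (ms): t1 < t2 < t2' < t4 < t5 < t5' < t3
t1, t2, t2p, t4, t5, t5p, t3 = 0.0, 10.0, 20.0, 30.0, 40.0, 50.0, 60.0
A1, A2 = 6.0, 9.0  # changes determined over [t1, t4] and [t4, t3]

B1 = proportional_change(A1, t2p - t2, t4 - t1)  # = 6.0 * 10 / 30 = 2.0
B2 = proportional_change(A2, t5p - t5, t3 - t4)  # = 9.0 * 10 / 30 = 3.0
```

Splitting the interval in this way lets each group of non-visible images be registered against the motion estimate of its own bracketing pair of visible images, which tightens the uniform-motion assumption to shorter intervals.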
[0093] Although the invention and its advantages have been described in detail, it should be understood that various changes, substitutions and alterations can be made herein without departing from the scope of the invention as defined by the appended claims. The combinations of features described herein should not be interpreted to be limiting, and the features herein may be used in any working combination or sub-combination according to the invention. This description should therefore be interpreted as providing written support, under U.S. patent law and any relevant foreign patent laws, for any working combination or some sub-combination of the features herein.
[0094] Moreover, the scope of the present application is not intended to be limited to the particular embodiments of the process, machine, manufacture, composition of matter, means, methods and steps described in the specification. As one of ordinary skill in the art will readily appreciate from the disclosure of the invention, processes, machines, manufacture, compositions of matter, means, methods, or steps, presently existing or later to be developed that perform substantially the same function or achieve substantially the same result as the corresponding embodiments described herein may be utilized according to the invention. Accordingly, the appended claims are intended to include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or steps.