IMAGING METHOD WITH PULSED LIGHT
20220360726 · 2022-11-10
CPC classification
H04N23/125
ELECTRICITY
H04N23/745
ELECTRICITY
Abstract
The invention relates to a method for optimizing the use of the information accessible by fluorescence imaging. To this end, it combines a protocol for calibration and synchronization of a pulsed light for exciting a fluorescent marker with the operation in "rolling shutter" mode of a fluorescence camera. An appropriate correction factor allows the complete signal integrated by all of the photodiodes of the camera to be used, so that no image is lost.
Claims
1. An imaging method comprising: using a recorder to record an electrical signal that defines brightness of a pixel on an image, the electrical signal having been generated by at least one photosensor from a matrix of photosensors, the matrix including first and second photosensor groups, each including at least one photosensor; using a first controller, carrying out closed-loop control of the recorder for sequentially recording a signal generated by the first photosensor group before recording a signal generated by the second photosensor group, wherein a set that comprises the groups of photosensors allows reconstruction of an image of a region-of-interest that, while being illuminated by a second light source, is periodically illuminated by a first light source; using a second controller, controlling activation and extinction of the first light source over respective first and second time-periods, the second time-period succeeding the first time-period; for each photosensor of a photosensor group, subtracting a second value from a first value, wherein the second value corresponds to a signal recorded during the second time-period for the photosensor and the first value corresponds to a signal recorded during the first time-period for the photosensor; determining a correction factor based on calibration measurements; and applying the correction factor to at least one value selected from the group consisting of the first value, the second value, and the result of having subtracted the second value from the first value; wherein, during the calibration measurements, first and second images are formed, the first image being an image of a surface that is constantly reflecting light that is emitted by a calibration light source that emits in continuous mode throughout the exposure time of the matrix of photosensors, and the second image being an image of a surface that is constantly reflecting light that is emitted by the same calibration light source while the calibration light source emits light periodically over the first time-period with a period that is the sum of the first and second time-periods.
2. The method of claim 1, further comprising selecting the calibration light source to be the first light source.
3. The method of claim 1, further comprising selecting the calibration light source to be a light source other than the first light source.
4. The method of claim 1, wherein determining the correction factor comprises determining the correction factor using the value of the intensity of the signal measured for the photosensor for an image obtained by illuminating a fluorescent surface with the first light source alone.
5. The method of claim 1, wherein determining the correction factor comprises determining the correction factor based at least in part on an interval of time that passes between recording signals generated by a reference group and recording signals generated by a group to which the photosensor belongs.
6. The method of claim 1, further comprising carrying out the calibration measurements before recording the signals generated by the photosensors during the first and second time-periods.
7. The method of claim 1, wherein each photosensor group corresponds to a row of photosensors within the matrix and wherein the correction factor Cr(i, j) for the signal obtained from a photosensor situated on the row i and the column j is a function of the form
Cr(i, j) = (T_{2k}^{Xi%·L1+100%·L2}(i, j) − T_{2k+1}^{Yi%·L1+100%·L2}(i, j)) / T^{100%·L1}(i, j)
wherein T_{2k}^{Xi%·L1+100%·L2}(i, j) is the intensity of the measured signal corresponding to the photosensor of the i-th row and the j-th column for an image T_{2k} obtained by illuminating the region-of-interest with a percentage Xi% of the exposure time to the first light source and 100% of the exposure time to the second light source, wherein T_{2k+1}^{Yi%·L1+100%·L2}(i, j) is the intensity of the measured signal corresponding to the photosensor of the i-th row and the j-th column for an image T_{2k+1} obtained by illuminating the region-of-interest with a percentage Yi% of the exposure time to the first light source and 100% of the exposure time to the second light source, and wherein T^{100%·L1}(i, j) is the intensity of the signal measured for the photosensor of the i-th row and the j-th column for an image obtained by illuminating a uniform surface, during the entirety of the exposure time of the matrix of photosensors, with the first light source alone.
8. The method of claim 7, further comprising carrying out a gamma de-correction before subtracting T_{2k+1}^{Yi%·L1+100%·L2}(i, j) from T_{2k}^{Xi%·L1+100%·L2}(i, j).
9. The method of claim 1, wherein applying the correction factor comprises applying the correction factor to the result of having subtracted the second value from the first value.
10. The method of claim 1, wherein applying the correction factor comprises applying the correction factor to the first value.
11. The method of claim 1, wherein applying the correction factor comprises applying the correction factor to the second value.
12. The method of claim 1, further comprising causing the second time-period to be one that immediately succeeds the first time-period.
13. The method of claim 1, further comprising periodically exposing a photosensor group with an exposure time, wherein the exposure time equals the first time-period.
14. The method of claim 1, further comprising choosing a photosensor group to be a reference group, exposing the reference group for the entirety of the first time-period and, after having done so, exposing the reference group for the entirety of the second time-period.
15. The method of claim 1, further comprising choosing a photosensor group to be a reference group, wherein choosing the group comprises choosing a group that spans the middle of the matrix of photosensors.
16. The method of claim 1, wherein a third time-period corresponds to the sum of a first time-interval, a second time-interval, and a third time-interval, wherein the third time-period is selected from the group consisting of the first time-period and the second time-period, wherein the first time-interval is an exposure time of the at least one photosensor of the photosensor group to a third light source, the third light source being selected from the group consisting of the first light source and the second light source, wherein the second time-interval is a recording time for recording the signal acquired by each photosensor in said photosensor group, and wherein the third time-interval is a time interval required for resetting each photosensor in said photosensor group.
17. An apparatus comprising: a matrix of photosensor groups, each of which comprises at least one photosensor that is sensitive to a range of wavelengths that extends between at least seven hundred nanometers and nine hundred nanometers, said matrix being configured to be exposed in a rolling-shutter mode having a refresh frequency; a pulsed light source configured to excite a fluorophore in a region-of-interest; a synchronizer that is configured for synchronizing the activation and the extinction of the pulsed light source with a sequence for integrating the signal extracted from the photosensor groups and at an on/off frequency that is equal to half of the refresh frequency; a processor that is configured, for each photosensor of the at least one group: to generate a third value by subtracting a second value from a first value and to apply a correction factor to a value selected from the group consisting of the first value, the second value, and the third value; wherein said first value corresponds to a signal recorded during a first time-period for said photosensor and wherein said second value corresponds to a signal recorded during a second time-period for said photosensor.
18. The apparatus of claim 17, wherein said synchronizer is configured to execute a sequence that comprises turning on said light source no later than the start of an integration period of a photosensor group and turning off said light source no earlier than the end of the period of integration of said photosensor group, wherein said synchronizer is further configured to reproduce said sequence for said photosensor group at a frequency equal to half of said refresh frequency.
19. A manufacture comprising a non-transitory computer-readable medium having encoded thereon instructions that, when executed by a processor, execute the method recited in claim 1.
Description
BRIEF DESCRIPTION OF THE FIGURES
DETAILED DESCRIPTION
[0067] One exemplary embodiment of a device 10 for monitoring the fluorescence emitted at the surface of the biological tissue or any other region of interest 20 is shown in
[0068] The region of interest 20 to be observed is illuminated by a pulsed light source L1 and in continuous mode by a light source L2. The light source L2 is for example a light source designed to illuminate an operating theater, such as a scialytic, or a source incorporated into the housing of a probe 1 forming part of the device 10.
[0069] The pulsed light source L1 (a laser, for example) is designed to emit radiation for exciting a fluorescent marker, or fluorophore.
[0070] The probe 1 also comprises a camera, referred to as a “fluorescence” camera, for capturing fluorescence images (in the near infrared or, more generally, in wavelengths detected by this “fluorescence” camera). The fluorescence camera comprises a sensor sensitive to the fluorescence light emitted by the fluorophore at the surface of the region of interest 20. In other words, this fluorescence camera is equipped with at least one sensor designed to capture images in the near infrared or, more generally, in the wavelengths emitted by fluorescent markers (and notably between 700 and 900 nanometers). This sensor is also designed to capture images within other spectral bands and notably in the visible. It is also possible, according to one variant, to use a probe 1 comprising a first fluorescence camera for capturing fluorescence images (for example in the near infrared) and a second camera for capturing images in the visible.
[0071] The sensor comprises photodiodes distributed in a matrix of at least N rows and at least M columns of photodiodes. The sensor is for example a linear sensor of the CMOS (acronym for “Complementary Metal-Oxide Semiconductor”) type. Furthermore, the sensor has a mode of operation of the “rolling shutter” type.
[0072] The device 10 also comprises a computer 2. The computer 2 may be a generic computer; for example, it is equipped with a 2.7 GHz Intel® Core i3 microprocessor, with 4 GB of RAM and with a 500 GB hard disk. The probe 1 is for example connected to the computer 2 so as to control, in closed loop, the operation of the probe 1, and also to record and store the images captured by each camera. With the aid of a suitable computer program, the computer 2 therefore provides:
[0073] recording means for recording the electrical signal generated by each photodiode, the signal extracted for each photodiode being used to form one pixel on an image,
[0074] closed-loop control means for controlling the recording means, in other words for sequentially recording the signal integrated by one group of photodiodes of the matrix prior to extracting and recording the signal integrated by another group of photodiodes of the matrix,
[0075] means of synchronizing the pulsed light source L1 with a sequence for integration of the signal over the various groups of photodiodes, in other words means for controlling the activation and the extinction of the light source L1.
[0076] The computer 2 also allows processing of the images obtained, whether in the near infrared, for example, or in the visible. The device 10 also comprises viewing and display means 3 (a screen) for the images before and/or after processing.
[0077] The probe 1 is potentially held, by means of a support arm, in a stable manner and at a constant distance from the scene comprising the region of interest 20 to be observed and studied (there may however be a slight movement of the scene owing notably to the respiration of the patient for example). However, with a high enough acquisition frequency (for example of 25 images per second), it is possible to work in “hand-held” mode with no support arm, while still avoiding artifacts.
[0078] A fluorescent tracer, or fluorophore, is injected intravenously. Alternatively, no tracer is injected and an auto-fluorescence signal is detected. The emission signal of the fluorophore is captured by the sensor of the probe 1 and is recorded.
[0079] The signal captured by each of the photodiodes of the sensor is associated with a pixel.
[0080] By way of example and in order to simplify the explanations, the invention is described making reference to groups of photodiodes arranged according to the N rows of the sensor, but these groups could just as well be organized in columns, in blocks, etc.
[0081] As illustrated by
[0082] As also illustrated by
[0083] In order to obtain images that each correspond to a different illumination, each of the illuminations may be synchronized with the exposure, under a given illumination, of all the rows corresponding to each of the images. In other words, the illumination is synchronized so that all the rows of photodiodes implemented for the acquisition of an image are exposed when one or more sources (for example L1+L2) are turned on and when all the rows of photodiodes implemented for the acquisition of another image (acquired later) are exposed while one or more other sources (for example L2) are on.
[0084] This is illustrated by
[0085] This sequence may thus be periodically reproduced so that each image T_{4k} (with k a positive or negative integer value) corresponds to an illumination by the sources "L1+L2", whereas each image T_{4k+2} corresponds to an illumination by the source "L2". Nevertheless, according to this method, the images T_{2k+1} are never used, and a pair of images, each respectively corresponding to an illumination "L1+L2" and "L2", is therefore obtained only every 3 images.
[0086] For this reason, according to the invention, another method is provided allowing all the images or frames to be used. The method according to the invention also allows the rate of output of the images to be increased. The images may then be displayed at the acquisition frequency corresponding to the video frame.
[0087] According to one embodiment of the method according to the invention, the exposure times to each type of illumination (for example: a first illumination with the sources L1 and L2, and a second illumination with the source L2 alone) are synchronized with the acquisition sequence of at least one row of photodiodes.
[0088] This is illustrated in
[0089] Furthermore, these periods of illumination are advantageously synchronized in such a manner that the photodiodes of a row, in this document called reference row or group of photodiodes (for example that with index N/2 situated in the middle of the sensor) are illuminated by only one type of illumination at a time (for example either with the two sources L1 and L2, or with the source L2 alone), over the entirety of its integration time corresponding to each image. Thus, for example, the row of photodiodes of index N/2 is exposed with the first type of illumination (with the two sources L1 and L2 in the present example) for the signal corresponding to the image T.sub.2k and is exposed with the second type of illumination (with the source L2 in the present example) for the signal corresponding to the image T.sub.2k+1, and so on. In other words, the means of synchronizing the activation and the extinction of the pulsed light source L1 follow a sequence consisting in turning on this light source L1 at the latest at the start of the period of integration of the reference row of photodiodes, and turning off the pulsed light source L1 at the earliest at the end of the period of integration of this same reference row of photodiodes. This sequence is reproduced, for this same reference row of photodiodes, while skipping the following period of integration of the signal.
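As a sketch (all names and timing parameters here are illustrative, not from the patent), the synchronization rule described above can be expressed as: on every other frame, turn L1 on no later than the start of the reference row's integration and turn it off no earlier than the end of that integration.

```python
# Illustrative sketch of the L1 synchronization sequence. The function name,
# parameters and units (row-periods) are our own; the patent only specifies
# the on/off ordering relative to the reference row's integration window.

def l1_window_for_frame(frame_idx, row_period, exposure_rows, ref_row):
    """Return (t_on, t_off) for the pulsed source L1 during frame `frame_idx`,
    or None when L1 stays off (here: odd-indexed, L2-only frames).
    Times are measured in row-periods from the start of the frame read-out."""
    if frame_idx % 2 != 0:
        return None                                  # L2-only frame: L1 off
    t_on = ref_row * row_period                      # reference row starts integrating
    t_off = (ref_row + exposure_rows) * row_period   # reference row stops integrating
    return (t_on, t_off)

# Example: 480-row sensor, reference row N/2 = 240, full-frame exposure.
window = l1_window_for_frame(0, row_period=1.0, exposure_rows=480, ref_row=240)
```

With these illustrative numbers, L1 is on from row-period 240 to 720 of each even frame, covering the whole integration of the reference row, and off for every odd frame.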
[0090] Thus, the subtraction, pixel to pixel, of the signal extracted for each photodiode of the row of index N/2 of an image T.sub.2k+1, from the signal of these same photodiodes of an image T.sub.2k, allows the signal to be recovered that would have been obtained for each pixel of this row if the corresponding photodiodes had been illuminated by the source L1 alone.
[0091] More generally, one advantage of the invention resides in the possibility of generating an image resulting from a subtraction upon each new image received, the preceding image being stored in memory. The order of the operands in the pixel-to-pixel subtraction depends on the index of the new image received. Thus, upon receipt of an image of even index, T_{2k}, the calculation is performed by subtracting, from the newly received image T_{2k}, illuminated with the sources "L1+L2" on the row of index N/2, the preceding image T_{2k−1}, stored in memory and illuminated with the source "L2": "T_{2k} − T_{2k−1}". Conversely, if the new image received is an image of odd index, T_{2k+1}, then the subtraction "T_{2k} − T_{2k+1}" is effected (T_{2k} corresponding, in this case, to the image stored in memory).
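A minimal sketch of this parity-dependent subtraction, assuming hypothetical frame buffers held as NumPy arrays:

```python
import numpy as np

# Sketch of the running pixel-to-pixel subtraction: the operand order depends
# on the parity of the newly received frame index, so that the "L1+L2" frame
# (even index) is always the minuend. Frame contents are invented.

def difference_image(new_frame, new_index, stored_frame):
    """Subtract so that the even-indexed (L1+L2) frame is always the minuend."""
    if new_index % 2 == 0:
        return new_frame.astype(np.int32) - stored_frame.astype(np.int32)
    return stored_frame.astype(np.int32) - new_frame.astype(np.int32)

even = np.full((2, 2), 200, dtype=np.uint16)   # stands in for T_2k (L1+L2)
odd = np.full((2, 2), 150, dtype=np.uint16)    # stands in for T_2k+1 (L2 only)

d1 = difference_image(odd, 3, even)            # odd frame arrives: T_2k - T_2k+1
d2 = difference_image(even, 4, odd)            # even frame arrives: T_2k - T_2k-1
```

Both calls yield the same positive difference (50 here), which is what allows a new subtraction image to be produced at every frame rather than every other frame.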
[0092] It may be noted that the other rows (above and below that of index N/2) have not been illuminated by the source L1 during the whole of their integration time for the image T_{2k}, even if all the rows have been illuminated during this whole time with the source L2. The method according to the invention allows this problem to be solved.
[0093] Alternatively, according to one variant illustrated by
[0094] Therefore, by generalizing the above for any row of photodiodes:
[0095] The intensity of the signal associated with the j-th pixel (with j going from 0 up to the number of photodiodes on the row of the sensor) of the row N/2 of the image T of even index 2k is denoted T_{2k}(N/2, j), and the intensity of the signal associated with the j-th pixel of the row N/2 of the image T of odd index 2k+1 is denoted T_{2k+1}(N/2, j).
[0096] Even more generally speaking, the intensity of the signal associated with the j-th pixel of the i-th row of the image T of even index 2k is denoted T_{2k}(i, j) and, more precisely, the intensity of the signal associated with the j-th pixel of the i-th row of the image T of even index 2k, subsequent to an illumination by the source L_k (k going from 1 to 2 in the present example), is denoted T_{2k}^{Lk}(i, j).
[0097] In a simple case where the signals received by the camera are linear, it may be considered that the intensity of the signal measured on a photodiode, when the light sources L1 and L2 are on, is equal to the linear sum of the intensity of the signal that would have been generated by this photodiode when illuminated with the light source L1, and of the intensity of the signal that would have been generated by this photodiode when illuminated with the light source L2.
[0098] Accordingly, staying with the example where the period of illumination “L1+L2” covers, or is synchronized on, the acquisition of a row of the images of even indices T.sub.2k and the illumination “L2” covers, or is synchronized on, the acquisition of this same row of the images of odd indices T.sub.2k+1, the following relationships may be generalized:
T_{2k}(i, j) = T_{2k}^{Xi%·L1+100%·L2}(i, j) = Xi%·T_{2k}^{L1}(i, j) + 100%·T_{2k}^{L2}(i, j)
[0099] where Xi% corresponds, for the i-th row, to the percentage of the time during which the source L1 is on, with respect to the exposure time of the sensor, for the image T_{2k},
[0100] and:
T_{2k+1}(i, j) = T_{2k+1}^{Yi%·L1+100%·L2}(i, j) = Yi%·T_{2k+1}^{L1}(i, j) + 100%·T_{2k+1}^{L2}(i, j)
[0101] where Yi% corresponds, for the i-th row, to the percentage of the time during which the source L1 is on, with respect to the exposure time of the sensor, for the image T_{2k+1} (where this percentage may be different from that corresponding to the image T_{2k}).
[0102] Accordingly, upon receiving an image of odd index T_{2k+1}, the intensities associated with this image T_{2k+1} are subtracted, pixel to pixel, from the intensities of the preceding image T_{2k} received and saved, in the following manner:
T_{2k}(i, j) − T_{2k+1}(i, j) = (100%·T_{2k}^{L2}(i, j) + Xi%·T_{2k}^{L1}(i, j)) − (100%·T_{2k+1}^{L2}(i, j) + Yi%·T_{2k+1}^{L1}(i, j))
[0103] In a reciprocal manner, upon receiving an image of even index T_{2k}, the intensities associated with the preceding image received and saved, T_{2k−1}, are subtracted, pixel to pixel, from the intensities of the new image T_{2k}, in the following manner:
T_{2k}(i, j) − T_{2k−1}(i, j) = (100%·T_{2k}^{L2}(i, j) + Xi%·T_{2k}^{L1}(i, j)) − (100%·T_{2k−1}^{L2}(i, j) + Yi%·T_{2k−1}^{L1}(i, j))
[0104] The assumption is furthermore made that the signal extracted from a photodiode, with an illumination by the ambient light source (provided by L2), has not changed between the respective acquisitions of the images T.sub.2k−1 and T.sub.2k, and also between the images T.sub.2k and T.sub.2k+1 (which is all the more the case the shorter the interval of time between the respective acquisitions of the images). The assumption is also made that the movements of the scene are negligible during the acquisition time of the sensor (for example 40 milliseconds).
Then:
100%·T_{2k}^{L2}(i, j) = 100%·T_{2k+1}^{L2}(i, j)
and
100%·T_{2k−1}^{L2}(i, j) = 100%·T_{2k}^{L2}(i, j)
[0105] Similarly, the assumption is made that the signal extracted from a photodiode and corresponding to the fluorescence (caused by an illumination with the source L1) has not changed between the respective acquisitions of the images T_{2k−1} and T_{2k}, and also between the images T_{2k} and T_{2k+1} (preferably, this assumption is only used for processing the signal corresponding to two consecutive images obtained over a short time). Then:
Xi%·T_{2k}^{L1}(i, j) = Xi%·T_{2k+1}^{L1}(i, j)
and
Xi%·T_{2k−1}^{L1}(i, j) = Xi%·T_{2k}^{L1}(i, j)
and similarly
Yi%·T_{2k}^{L1}(i, j) = Yi%·T_{2k+1}^{L1}(i, j)
and
Yi%·T_{2k−1}^{L1}(i, j) = Yi%·T_{2k}^{L1}(i, j)
Therefore:
T_{2k}(i, j) − T_{2k+1}(i, j) = Xi%·T_{2k}^{L1}(i, j) − Yi%·T_{2k+1}^{L1}(i, j) = (Xi% − Yi%)·T_{2k}^{L1}(i, j)
and
T_{2k}(i, j) − T_{2k−1}(i, j) = Xi%·T_{2k}^{L1}(i, j) − Yi%·T_{2k−1}^{L1}(i, j) = (Xi% − Yi%)·T_{2k}^{L1}(i, j)
[0106] Since the illumination is synchronized with the integration time of the sensor, each row always receives the same proportion of light from the source L1 (Xi% in the image T_{2k} and Yi% in the image T_{2k+1}), and hence the value Zi% = Xi% − Yi% is fixed and given for each row of the pairs of images (T_{2k}, T_{2k+1}) and (T_{2k−1}, T_{2k}):
(Xi% − Yi%)·T_{2k}^{L1}(i, j) = Zi%·T_{2k}^{L1}(i, j)
[0107] Therefore, the intensity T_{2k}^{L1}(i, j) of the signal for the j-th pixel of the i-th row corresponding to the image produced by the illumination L1 alone, upon receiving an image T_{2k+1} of odd index, is equal to:
T_{2k}^{L1}(i, j) = T_{2k+1}^{L1}(i, j) = (T_{2k}(i, j) − T_{2k+1}(i, j)) / Zi%
[0108] and, upon receiving an image T_{2k} of even index, is equal to:
T_{2k}^{L1}(i, j) = T_{2k−1}^{L1}(i, j) = (T_{2k}(i, j) − T_{2k−1}(i, j)) / Zi%
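The derivation can be checked numerically; the per-row duty fractions, ambient level and L1-only signal below are invented for illustration:

```python
import numpy as np

# Numeric check of the derivation above: with Xi% and Yi% the per-row L1 duty
# fractions on even and odd frames, the L1-only signal is recovered as the
# frame difference divided by Zi% = Xi% - Yi%. All values are illustrative.

true_L1 = np.array([[40.0, 80.0], [120.0, 60.0]])   # hypothetical L1-only signal
ambient = 30.0                                      # constant L2 contribution
Xi = np.array([[0.9], [0.7]])                       # per-row L1 fraction, even frame
Yi = np.array([[0.1], [0.3]])                       # per-row L1 fraction, odd frame

T_even = Xi * true_L1 + ambient                     # T_2k    = Xi%*L1 + 100%*L2
T_odd = Yi * true_L1 + ambient                      # T_2k+1  = Yi%*L1 + 100%*L2

Zi = Xi - Yi                                        # Zi% per row
recovered = (T_even - T_odd) / Zi                   # equals true_L1 exactly here
```

The ambient (L2) term cancels in the subtraction and the division by Zi% undoes the row-dependent L1 duty fraction, recovering the L1-only image.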
[0109] The value Zi% for each row of index i may easily be calculated when the images are obtained by illuminating a uniform surface with the source L2 in continuous mode and by illuminating this same surface with the source L1 in the form of periodic pulses (these periodic pulses being, for example, as indicated hereinabove, synchronized with the acquisition of the N/2-th row of the sensor). In this case, the pixel-to-pixel ratio between the image resulting from the subtraction (T_{2k}(i, j) − T_{2k+1}(i, j)) or (T_{2k}(i, j) − T_{2k−1}(i, j)) and the true image T^{100%·L1} obtained with the source L1 alone illuminating a uniform surface allows the various values of Zi% to be obtained which, as indicated above, are fixed and given for each row of the pairs of images (T_{2k}, T_{2k+1}) and (T_{2k−1}, T_{2k}).
[0110] Furthermore, in practice, the problem of non-uniformities of the surface may also be overcome, and a test pattern such as that shown in the figures may be used.
[0111] If the surface is effectively uniform, or even perfectly uniform, knowledge of this true image T^{100%·L1} may be sufficient and, in this case, the value of the intensity obtained for each pixel of the image T^{100%·L1} is normalized by the value of the maximum intensity in this same image T^{100%·L1}. A normalized image T^{100%·L1} is obtained, and this normalized image is used to calculate the values Zi%.
[0112] However, more generally, in order to be able to use a test pattern or a non-uniform surface, it is preferable to use a true image corrected for the non-uniformities. For this purpose, at least one image of this surface (for example, of a test pattern) is acquired with the source L1 alone, in continuous mode over a certain period of time (for example during the entirety of the exposure time of the sensor), and at least one other image of this same surface is acquired with the source L1 in pulsed mode over this same period of time. It may be advantageous to produce several images, whether with the source L1 in continuous mode or with the source L1 in pulsed mode, in order to generate an average and thus to reduce the errors linked to random noise (such as Poisson noise, for example). The pixel-by-pixel ratio of the value of the intensity obtained in the image resulting from the pulsed illumination L1, over the value of the intensity obtained in the image resulting from the illumination L1 in continuous mode, yields an image T^{100%·L1} that is also normalized; this normalized image is used for calculating the values Zi%.
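A sketch of this normalization, with a synthetic non-uniform target and per-row duty fractions standing in for the pulsed illumination profile:

```python
import numpy as np

# Sketch of the calibration described above: averaging frames of a (possibly
# non-uniform) target under continuous L1 and under pulsed L1, then taking the
# pixel-by-pixel ratio, cancels the target non-uniformity and leaves only the
# per-row pulsed-illumination profile. All arrays are synthetic.

rng = np.random.default_rng(0)
reflectance = 0.5 + 0.5 * rng.random((4, 4))        # non-uniform target
duty = np.linspace(0.2, 1.0, 4)[:, None]            # per-row L1 duty fraction

continuous_stack = np.stack([reflectance] * 5)      # frames under continuous L1
pulsed_stack = np.stack([duty * reflectance] * 5)   # frames under pulsed L1

continuous_avg = continuous_stack.mean(axis=0)      # averaging reduces noise
pulsed_avg = pulsed_stack.mean(axis=0)

normalized_true = pulsed_avg / continuous_avg       # reflectance cancels out
```

Whatever the target's reflectance pattern, the ratio leaves only the row-dependent duty profile, i.e. a normalized true image free of surface non-uniformities.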
[0113] Thus, when an image T_{2k}^{Xi%·L1+100%·L2}(i, j), such as shown in
[0114] If the dynamic range of the sensor is linear and if the images have not undergone a gamma correction, then the values of the signal respectively associated with each pixel may be directly subtracted, pixel to pixel, from each other. Subsequently, the ratio is taken of the image resulting from the subtraction T_{2k} − T_{2k+1}, which must therefore correspond to an image obtained with an illumination by the source L1 alone, over the true (normalized) image T^{100%·L1} obtained by illuminating a uniform surface with the source L1 alone.
[0115] The values Zi% are obtained for each row of index i (in other words, by reasoning row by row). In order to allow a read-out by blocks or by groups of pixels, potentially distributed in a random manner for example, the calculations may be further generalized by reasoning pixel by pixel and calculating a correction matrix Cr(i, j) such that:
Cr(i, j) = (T_{2k}^{Xi%·L1+100%·L2}(i, j) − T_{2k+1}^{Yi%·L1+100%·L2}(i, j)) / T^{100%·L1}(i, j)
[0116] A first calibration step is then carried out by the acquisition of a series of images of a surface, advantageously uniform, with the source L1 illuminating in a pulsed manner and with the source L2 off (illumination by this source L2 would not, in any case, have any effect on the following calculation since it is constant). The following difference
(T_{2k}^{Xi%·L1+100%·L2}(i, j) − T_{2k+1}^{Yi%·L1+100%·L2}(i, j))
may then be calculated.
[0117] A second calibration step is also carried out by the acquisition of a series of images of a uniform surface, with the source L1 illuminating in a continuous manner and with the source L2 off. The following quantity
T^{100%·L1}(i, j)
may then be calculated.
[0118] The ratio of these two measurements (with L1 pulsed and L2 off, and with L1 in continuous mode and L2 off) allows the correction matrix Cr(i, j) to be determined.
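The two calibration steps and the final ratio can be sketched as follows (synthetic data; `Zi` stands for the per-row duty difference Xi% − Yi%):

```python
import numpy as np

# Sketch of the two calibration steps (L2 off throughout, data synthetic):
# step 1 forms the difference of consecutive frames under pulsed L1,
# step 2 acquires a frame under continuous L1; their pixel-wise ratio is
# the correction matrix Cr(i, j).

true_L1 = np.array([[50.0, 70.0], [90.0, 40.0]])    # hypothetical target response
Zi = np.array([[0.8], [0.4]])                       # per-row duty difference Xi% - Yi%

diff_pulsed = Zi * true_L1                          # step 1: T_2k - T_2k+1 under pulsed L1
continuous = true_L1                                # step 2: T^{100%·L1} under continuous L1

Cr = diff_pulsed / continuous                       # correction matrix, here == Zi per row
```

Because the target response appears in both numerator and denominator, Cr reduces to the per-row duty profile, which is why the calibration also corrects for target non-uniformities and photodiode efficiency variations.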
[0119] In other words, the correction matrix is readily obtained by means of calibration measurements which may be performed at the factory prior to delivering the device or at any moment during its use by placing the probe facing a substantially uniform element (which could be fluorescent). Taking a ratio of the values measured by calibration allows potential non-uniformities of the surface of the calibration target, together with the potential variations in efficiency of the photodiodes, to be corrected.
[0120] In order for the result of the calibration to be even more precise, it is possible to calculate an average of the signal obtained on each pixel over several images of the same type before performing the subtraction, then the division, indicated hereinabove. In other words, the images T_{2k}^{Xi%·L1+100%·L2}(i, j) and T_{2k+1}^{Yi%·L1+100%·L2}(i, j), in the calculation described hereinabove, are images formed from an average over several images (for example, the image T_{2k} corresponds to an average calculated on the images 2, 4, 6, 8, 10 and the image T_{2k+1} to an average calculated on the images 3, 5, 7, 9, 11). These averages allow the influence of the noise in the images to be reduced. Similarly, the image T^{100%·L1}(i, j) may correspond to an average.
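A sketch of this averaging step with synthetic noisy frames (noise level and frame counts are invented):

```python
import numpy as np

# Sketch of the averaging step: even-indexed and odd-indexed calibration
# frames are averaged separately before the subtraction, which reduces
# random (e.g. Poisson-like) noise. The frame stream here is synthetic.

rng = np.random.default_rng(1)
clean_even = np.full((3, 3), 120.0)                 # ideal T_2k level
clean_odd = np.full((3, 3), 80.0)                   # ideal T_2k+1 level

even_frames = [clean_even + rng.normal(0.0, 1.0, (3, 3)) for _ in range(5)]
odd_frames = [clean_odd + rng.normal(0.0, 1.0, (3, 3)) for _ in range(5)]

even_avg = np.mean(even_frames, axis=0)             # average of images 2,4,6,8,10
odd_avg = np.mean(odd_frames, axis=0)               # average of images 3,5,7,9,11
diff = even_avg - odd_avg                           # close to the ideal value 40
```

Averaging before subtracting shrinks the noise on the difference by roughly the square root of the number of frames per average.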
[0121] In general, the dynamic behavior of the sensors (CMOS or CCD) is non-linear because a correction for gamma (or contrast factor) is applied.
[0122] In this case, it is necessary first to de-correct the gamma so as to revert to a linear dynamic signal behavior prior to subtracting T_{2k+1}^{Yi%·L1+100%·L2}(i, j) from T_{2k}^{Xi%·L1+100%·L2}(i, j). The image T_{2k}^{Xi%·L1+100%·L2}(i, j) de-corrected for the gamma is denoted gT_{2k}^{Xi%·L1+100%·L2}(i, j).
[0123] A correction matrix Cr(i, j) may then be calculated such that:
Cr(i, j) = (gT_{2k}^{Xi%·L1+100%·L2}(i, j) − gT_{2k+1}^{Yi%·L1+100%·L2}(i, j)) / gT^{100%·L1}(i, j)
[0124] Subsequent to this calculation the correction matrix shown in
[0125] By considering a sensor as linear when not applying any correction for gamma, the correction matrix Cr(i, j) therefore corresponds to the matrix to be applied in a multiplicative fashion after each subtraction T.sub.2k−T.sub.2k+1 upon receiving an image of odd index T.sub.2k+1 and T.sub.2k−T.sub.2k−1 upon receiving an image of even index T.sub.2k, in order to recover the information for T.sub.2k.sup.100%.Math.L1(i, j):
$$T_{2k}^{100\%\cdot L1}(i,j)=\left(T_{2k}^{X_i\%\cdot L1+100\%\cdot L2}(i,j)-T_{2k+1}^{Y_i\%\cdot L1+100\%\cdot L2}(i,j)\right)\cdot Cr(i,j)$$
[0126] In the case of a non-linear sensor, the equation becomes:
$${}^{g}T_{2k}^{100\%\cdot L1}(i,j)=\left({}^{g}T_{2k}^{X_i\%\cdot L1+100\%\cdot L2}(i,j)-{}^{g}T_{2k+1}^{Y_i\%\cdot L1+100\%\cdot L2}(i,j)\right)\cdot Cr(i,j)$$
[0127] In order to bring the image ${}^{g}T_{2k}^{100\%\cdot L1}(i,j)$ back into the reference frame of the camera, in which a gamma correction is applied, it must be corrected by the inverse gamma in order to obtain the image $T_{2k}^{100\%\cdot L1}(i,j)$:
$$T_{2k}^{100\%\cdot L1}(i,j)=g'\!\left({}^{g}T_{2k}^{100\%\cdot L1}(i,j)\right)$$

where $g'$ denotes the inverse gamma correction.
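Putting paragraphs [0122] to [0127] together, the recovery of the pulsed-light image can be sketched as follows (the gamma value and the [0, 1] normalization are assumptions):

```python
import numpy as np

GAMMA = 2.2  # assumption: replace with the camera's actual gamma

def degamma(img):
    """g: de-correct the camera gamma (linearize the signal)."""
    return np.power(img, GAMMA)

def regamma(img):
    """g': inverse operation, back into the camera's gamma-corrected frame."""
    return np.power(np.clip(img, 0.0, None), 1.0 / GAMMA)

def recover_pulsed_image(T_even, T_odd, Cr):
    """T_2k^{100% L1} = g'((gT_even - gT_odd) * Cr), computed per pixel."""
    linear = (degamma(T_even) - degamma(T_odd)) * Cr
    return regamma(linear)

# Toy check: a uniform bright even image, a dark odd image and Cr = 1
# recover a uniform image of value 1.
out = recover_pulsed_image(np.ones((2, 2)), np.zeros((2, 2)), np.ones((2, 2)))
```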
[0131]
$${}^{g}T_{2k}^{X_i\%\cdot L1+100\%\cdot L2}(i,j)-{}^{g}T_{2k+1}^{Y_i\%\cdot L1+100\%\cdot L2}(i,j),$$
[0132] but before correction by means of the correction matrix.
[0136] It may be noted that the method explained hereinabove may be applied with the periodic pulses of the light L1 synchronized on rows other than the central row of index N/2 of the sensor. In this case, the correction profile (
[0137] Furthermore, under certain conditions (for example with a colorized fluorescence image superimposed in transparency mode on a contextual image), some weak fluorescence signals may be barely visible or not visible at all. In this case, the method according to the invention will advantageously comprise a calculation step designed to increase the intensity of the signal to be displayed over a certain range of its values (in practice, the lower values of this signal).
[0138] For example, a gamma correction may be applied as shown in
[0139] According to another example, a non-linear correction may be applied of the type of that shown in
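A hedged sketch of such an intensity boost (the exponent 0.45 is an illustrative assumption; any other look-up table would serve the same purpose):

```python
import numpy as np

def enhance_weak_signals(signal, exponent=0.45):
    """Apply a gamma-type look-up curve to a signal in [0, 1].
    An exponent < 1 expands the low values, making faint fluorescence
    visible, while leaving the endpoints 0 and 1 unchanged."""
    return np.power(np.clip(signal, 0.0, 1.0), exponent)

boosted = enhance_weak_signals(np.array([0.0, 0.01, 1.0]))
```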
[0140] The correction function does not necessarily depend, as in the example presented hereinabove, on the index of the row in which a photodiode is situated. Rather, it depends on the time at which the integration on a given photodiode is carried out. Thus, according to one variant of the method described hereinabove, the integration of the signal on the photodiodes of the sensor is carried out not row by row, but column by column and in a linear fashion. Only the correction matrix is modified. Instead of taking the form of that in
[0141] Or alternatively, the integration of the signal on the photodiodes of the sensor may be carried out over a specific time, but non-linear, with respect to the index of the row or of the column in question. The correction matrix may then take the form of that in
[0142] The integration of the signal on the photodiodes of the sensor may be carried out over a specific time for each photodiode or block of photodiodes, but in a non-linear fashion with respect to the position of the photodiode in a row or a column. The correction matrix may then take the form of that in
[0143] One example of implementation of the method according to the invention is described hereinbelow in relation to
[0144] According to this example, the method comprises:
[0145] A preliminary calibration step (not shown) for determining the correction matrix Cr(i, j), as explained hereinabove.
[0146] A step 100 or 100bis, during which the sensor is exposed to the light reflected or emitted at the surface of the region of interest, while this region is illuminated periodically and alternately, either with the sources L1 and L2 turned on at the same time, or with only the source L2 turned on. In practice, the source L2 illuminates the region of interest in continuous mode, whereas the source L1 illuminates it with a periodic pulsing superimposed on the illumination by the source L2. During this step 100 or 100bis, the signal integrated by the photodiodes of the sensor is extracted group of photodiodes by group of photodiodes, the integration period of at least one group, referred to as the reference group, being synchronized with the periodic pulsing of the illumination by the source L1 (superimposed on the continuous illumination by the source L2). As mentioned hereinabove, a group of photodiodes may be organized as a row, a column, a block, etc., and may potentially comprise only one photodiode. In the example chosen hereinabove for explaining the type of calculation implemented by the method according to the invention, a group of photodiodes corresponds to a row of photodiodes, and the row of index N/2 is chosen as the reference group.
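The reason each row integrates only a fraction of the L1 pulse can be sketched with a simplified rolling-shutter timing model (all timing parameters and names below are illustrative assumptions):

```python
import numpy as np

def interval_overlap(a0, a1, b0, b1):
    """Length of the intersection of the intervals [a0, a1] and [b0, b1]."""
    return max(0.0, min(a1, b1) - max(a0, b0))

def pulse_fractions(n_rows, t_row, t_exp, pulse_start, t_pulse):
    """Fraction of the L1 pulse integrated by each row of a rolling-shutter
    sensor: row i starts its exposure at i * t_row and integrates for t_exp."""
    return np.array([
        interval_overlap(i * t_row, i * t_row + t_exp,
                         pulse_start, pulse_start + t_pulse) / t_pulse
        for i in range(n_rows)
    ])

# With the pulse synchronized on the first row, later rows see less of it.
fractions = pulse_fractions(n_rows=4, t_row=0.5, t_exp=1.5,
                            pulse_start=0.0, t_pulse=1.0)
```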
[0147] At the step 100 or 100bis, the images $T_{2k}$ and $T_{2k+1}$ are stored in memory, in the form of matrices $T_{2k}^{X_i\%\cdot L1+100\%\cdot L2}(i,j)$ and $T_{2k+1}^{Y_i\%\cdot L1+100\%\cdot L2}(i,j)$ (with k a positive or negative integer), with a view to a processing allowing the matrix $T_{2k}^{100\%\cdot L1}(i,j)$ of an image to be determined, such as would have been obtained with an illumination by the source L1 alone.
[0148] It may be noted that, despite the use of a sensor operating in “rolling shutter” mode, there is no significant deformation of the images attached to the flow diagram in
[0149] At the steps 200 and 200bis, each of these matrices is corrected for the gamma (and potentially for a vignetting) in order to render the signal linear. In other words, after these steps have been applied, the signal corresponding to each pixel is proportional to the quantity of light received by the corresponding photodiode. This yields the matrices ${}^{g}T_{2k}^{X_i\%\cdot L1+100\%\cdot L2}(i,j)$ and ${}^{g}T_{2k+1}^{Y_i\%\cdot L1+100\%\cdot L2}(i,j)$.
[0150] At the steps 300 and 300bis, each of the matrices ${}^{g}T_{2k}^{X_i\%\cdot L1+100\%\cdot L2}(i,j)$ and ${}^{g}T_{2k+1}^{Y_i\%\cdot L1+100\%\cdot L2}(i,j)$ is respectively saved as the last image received, prior to carrying out, at the step 400, the pixel-by-pixel subtraction ${}^{g}T_{2k}^{X_i\%\cdot L1+100\%\cdot L2}(i,j)-{}^{g}T_{2k+1}^{Y_i\%\cdot L1+100\%\cdot L2}(i,j)$.
[0151] At the step 500, the image resulting from the subtraction carried out at the preceding step undergoes a pixel-by-pixel multiplication by the correction matrix $Cr(i,j)$.
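Steps 300 to 500 can be sketched as a small stateful helper (the class and variable names are assumptions). Each incoming linearized frame is subtracted from the previously stored one, with the sign chosen by parity so that the result is always "even image minus odd image", and a corrected image is produced at every frame:

```python
import numpy as np

class RollingSubtractor:
    """Keeps the last linearized frame received; on each new frame, performs
    the pixel-by-pixel subtraction (even-index image minus odd-index image)
    and multiplies by the correction matrix Cr."""

    def __init__(self, Cr):
        self.Cr = Cr
        self.last = None

    def push(self, frame, index):
        if self.last is None:
            self.last = frame
            return None  # first frame: nothing to subtract from yet
        if index % 2 == 0:   # received T_2k: compute T_2k - T_2k-1
            out = (frame - self.last) * self.Cr
        else:                # received T_2k+1: compute T_2k - T_2k+1
            out = (self.last - frame) * self.Cr
        self.last = frame
        return out

sub = RollingSubtractor(Cr=np.ones((2, 2)))
sub.push(np.full((2, 2), 2.0), index=0)        # store first frame
r1 = sub.push(np.full((2, 2), 0.5), index=1)   # T_0 - T_1
r2 = sub.push(np.full((2, 2), 2.0), index=2)   # T_2 - T_1
```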
[0152] Potentially, a step 600 allows certain values of the signal to be increased in order to enhance, as explained hereinabove, the intensity of the color displayed on the pixels corresponding to these values (in practice, either the gamma of the sensor may be re-applied, or a gamma different from that of the camera may be applied in order to further enhance certain weaker signals, or any other look-up table may be applied).
[0153] At the step 700, the image obtained at the preceding step is colorized and displayed in transparency mode on a background image (corresponding to the matrix $T_{2k}^{X_i\%\cdot L1+100\%\cdot L2}(i,j)$ obtained with an illumination combining the sources L1 and L2).
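Step 700 can be sketched as an alpha blend (the overlay color and transparency values are illustrative assumptions):

```python
import numpy as np

def colorize_and_overlay(background, fluo, color=(0.0, 1.0, 0.0), alpha=0.6):
    """Colorize the fluorescence image and superimpose it in transparency
    onto the contextual background (both H x W arrays in [0, 1]).
    The per-pixel transparency follows the fluorescence intensity, so
    non-fluorescent pixels show the background unchanged."""
    rgb = np.repeat(background[..., None], 3, axis=2)          # gray -> RGB
    fluo_rgb = fluo[..., None] * np.asarray(color)[None, None, :]
    a = alpha * fluo[..., None]                                 # per-pixel alpha
    return (1.0 - a) * rgb + a * fluo_rgb

bg = np.full((2, 2), 0.5)       # uniform contextual image
fluo = np.zeros((2, 2))         # no fluorescence anywhere
blended = colorize_and_overlay(bg, fluo)
```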
[0154] Each of the steps presented hereinabove is advantageously implemented by means of a computer program.
[0155] Generally speaking, the method according to the invention notably offers the following advantage. Combining the protocol for synchronizing the pulsed light exciting at least one fluorescent marker with the operation of the sensor in "rolling shutter" mode, and with the application of a suitable correction factor, allows the use of the information accessible by fluorescence imaging to be optimized. Indeed, the use of a sensor in "rolling shutter" mode already allows the blanking time of the photodiodes to be shortened; furthermore, as explained hereinabove, the method according to the invention allows no image to be lost. Each new image corresponding to an exposure under a pulsed illumination is calculated at the rate of the image acquisition. This results in a very good fluidity of the images (each calculated image is refreshed while conserving the same number of images per second as that provided by the camera). Furthermore, since each new image is analyzed, the time delay between two images is limited, and hence so are the artifacts that may be due to changes in illumination or to movement of the camera or of the scene.
[0156] Generally speaking, the method according to the invention notably allows:
[0157] an image or a set of images taken with an illumination A to be superimposed onto an image or a set of images taken with an illumination B;
[0158] information present on an image or a set of images taken with an illumination A to be corrected and/or improved with the aid of an image or a set of images taken with an illumination B; indeed, biological tissues may for example be illuminated with an illumination A in the near infrared (where A does not include B) in order to provide contextual information and information on the absorption of the biological tissues in this range of wavelengths; by adding a pulsed illumination B in a range of wavelengths that excites a fluorescent tracer, the information on the fluorescence of the tracer may be determined; by making the assumption that the absorption of the biological tissues at the wavelengths of the illumination A is close to that under the illumination B, the information on the fluorescence may be corrected by applying to it a correction factor that is a function of the absorption of the biological tissues, this correction factor having been obtained from the images resulting from the illumination A;
[0159] real-time multi-spectral imaging to be offered, by allowing the emission due to an illumination A to be dissociated from that due to an illumination B having wavelengths other than those of the illumination A (the illuminations A and/or B not necessarily being in the infrared, and/or being adapted to applications other than fluorescence imaging).