Method, image processor and device for observing an object containing a bolus of a fluorophore

11857164 · 2024-01-02

Abstract

The invention relates to a method, an image processor (26) and a medical observation device (1), such as a microscope or endoscope, for observing an object (4) containing a bolus of at least one fluorophore (12). The object (4) is preferably live tissue comprising several types (16, 18, 20) of tissue. According to the method, a set (34) of component signals (36) is provided. Each component signal (36) represents a fluorescence intensity development of the fluorophore (12) over time in a different type of tissue. A time series (8) of input frames (10) is accessed, one input frame (10) after the other. The input frames (10) represent electronically coded still images of the object (4) at subsequent times. Each input frame (10) contains at least one observation area (22) comprising at least one pixel (23). In the observation area (22) of the current input frame (10) of the time series (8), a fluorescent light intensity (I) is determined over at least one fluorescence emission wavelength (15) of the fluorophore (12). This fluorescent light intensity (I.sub.1) is joined with the fluorescence light intensities (I.sub.n) of the observation area (22) of preceding input frames (10) of the time series (8) to generate a time sequence (40) of fluorescent light intensities (I.sub.1, I.sub.n) of the observation area (22). This time sequence (40) is decomposed into a preferably linear combination (72) of at least some of the component signals (36) of the set (34). A new set (34) of component signals (36) is provided which includes only those component signals (36) which are present in the combination (72). An output frame (46) is generated, in which the observation area (22) is assigned a color from a color space depending on the combination (72) of component signals (36).

Claims

1. A method for observing an object containing a bolus of at least one fluorophore, the method comprising the steps of: providing a set of component signals; accessing a time series of input frames representing electronically coded still images of the object, each input frame containing at least one observation area, the observation area comprising at least one pixel; wherein the method further comprises iterative process steps of: automatically determining a fluorescent light intensity in the observation area of one of the input frames of the time series, the fluorescent light intensity being determined over at least one fluorescence emission wavelength of the fluorophore; automatically joining the fluorescent light intensity with respective fluorescence light intensities of the observation area from preceding input frames of the time series to generate a time sequence of fluorescence light intensities of the observation area; automatically decomposing the time sequence into a combination of at least some component signals of the set; and wherein an output frame is generated in which the observation area is assigned a color from a color space depending on the combination of component signals, wherein each component signal is a discrete or analytic time curve representing a fluorescence light intensity development of the fluorophore over time in a specific type of tissue and the method comprises the further iterative process step of automatically providing at least a subset of the component signals in the combination as a new set of component signals.

2. The method according to claim 1, wherein the iterative process steps are carried out only if the fluorescent light intensity is above a fluorescence threshold in the observation area of at least one input frame.

3. The method according to claim 1, wherein the individual component signals in the combination are weighted and wherein a component signal is only included in the new set if the weight of such component signal in the combination exceeds a weight threshold.

4. The method according to claim 1, wherein the number of component signals in the new set is smaller than the number in the set used for the decomposition.

5. The method according to claim 1, wherein a predetermined number of the strongest component signals in the combination is selected to be included in the new set.

6. The method according to claim 1, wherein each input frame comprises a plurality of observation areas and wherein the iterative process steps are carried out for all of the plurality of observation areas.

7. The method according to claim 6, wherein the iterative process steps further comprise the step of automatically including only those component signals in the set, of which a frequency of occurrence in the observation areas exceeds a prevalence threshold.

8. The method according to claim 1, wherein the iterative process steps are carried out while the input frames of the time series are received.

9. The method according to claim 1, wherein the iterative process steps are carried out between receiving two subsequent input frames of the time series.

10. The method according to claim 1, wherein output frames are generated at the same frame rate as the input frames are received.

11. The method according to claim 10, wherein the output frames are displayed on a monitor.

12. The method according to claim 11, wherein the output frames are displayed in real time as the input frames are received.

13. Non-transitory computer storage media storing a program causing a computer to execute the method according to claim 1.

14. An image processor for a medical observation device, the image processor comprising: an input section configured to receive a time series of input frames representing electronically coded still images of the object, each input frame containing at least one observation area, the observation area comprising at least one pixel; a memory section in which a set of component signals is stored; a computing section configured to determine the fluorescent light intensity in the observation area of one input frame of the time series over at least one fluorescence emission wavelength of the fluorophore in the observation area, to join the fluorescent light intensity of the observation area of the one input frame with fluorescent light intensities of previous input frames of the time series to generate a time sequence of fluorescent light intensity in the observation area, and to decompose the time sequence into a combination of the component signals in the set; an image generator section configured to generate an output frame by joining at least one input frame of the time series with the observation area which is assigned a pseudocolor from a color space, the pseudocolor depending on the combination; and an output section configured to output the output frame; wherein each component signal is a discrete or analytic time curve representing the course of fluorescence light intensity over time of the fluorophore in a specific type of tissue, and the computing section is further configured to compose a new set of component signals from the component signals of the combination as a replacement of the set.

15. A medical observation device comprising the image processor according to claim 14 and a camera system from which the input frames are derived.

16. The medical observation device according to claim 15, wherein the medical observation device is a microscope.

17. The medical observation device according to claim 15, wherein the medical observation device is an endoscope.

Description

BRIEF DESCRIPTION OF THE DRAWING VIEWS

(1) Throughout the figures, elements which are identical or similar with respect to function and/or design are assigned the same reference numeral.

(2) In the figures:

(3) FIG. 1 shows a schematic rendition of a medical observation device according to the invention;

(4) FIG. 2 shows a schematic representation of a set of component signals;

(5) FIG. 3 shows a schematic representation of the method according to the invention;

(6) FIG. 4 shows a schematic rendition of a decomposition of a time sequence of fluorescent light intensities.

DETAILED DESCRIPTION OF THE INVENTION

(7) First, the invention is described with reference to FIGS. 1 and 2.

(8) FIG. 1 shows a schematic representation of a medical observation device 1, such as a microscope or an endoscope. Only by way of example, a microscope is shown with a camera system 2, in particular, a multi- or hyperspectral camera, which is directed onto an object 4. The camera system 2 captures a field of view 6 and records electronically coded still images that form the basis of a time series 8 of input frames 10. An input frame 10 may result from a combination of more than one electronically coded still image or from a single such image. A combination of several images can be used to increase contrast or the depth of the field of view 6, e.g. by combining pictures of different layers of the object 4 as in z-stacking. Additional or alternative combinations may comprise stitching neighboring images, combining images recorded at different wavelengths, such as a visible-light image and an NIR-image, or joining images that have been filtered differently.

(9) The object 4 may in particular be live tissue. The object 4 has been provided with a bolus of at least one fluorophore 12 which, after application, starts to spread across the object 4. The fluorophore 12 may be degradable. An illumination system 13 illuminates at least the field of view 6 and includes fluorescence excitation wavelengths 14 that excite fluorescence of the fluorophore 12. The fluorescence of the at least one fluorophore 12 is emitted in fluorescence emission wavelengths 15 that are recorded by the camera system 2 preferably in addition to light in the visible-light range. If more than one fluorophore 12 is used, at least the emission wavelengths 15 should not overlap, so that the fluorophores 12 can be distinguished by their color.

(10) The time series 8 of input frames 10 represents the interaction of the fluorophore 12 with the object 4 over time. In live tissue, the fluorophore 12 will reach the field of view 6 after a specific time. The intensity of the fluorescent light emitted by the fluorophore 12 will peak and then decay. The times of arrival of the fluorophore 12, of its peak and of its decay are representative of different types of tissue. Typically, three types of blood compartments, namely arterial tissue 16, capillary tissue 18 and venous tissue 20, may be differentiated. In other applications, a different number of tissue types may need to be distinguished.

(11) Each input frame 10 contains at least one observation area 22 which may be a single pixel 23 or a preferably connected assembly of pixels. Across the input frames 10 of a time series 8, the observation area 22 is preferably fixed in location with respect to the input frame 10.

(12) Depending on the type of tissue 16, 18, 20 which is mapped onto the observation area 22 of the input frames 10, the fluorescent light intensity exhibits a different variation over time. This is schematically shown by the circular and rectangular areas in the input frames 10 of FIG. 1. Over time t, different areas become more visible at different times and then decay.

(13) If there is more than one observation area 22 in the input frames 10, the observation areas 22 preferably do not overlap. It is preferred that each input frame 10 consists of observation areas 22 that are tiled to cover the complete input frame 10.

(14) The time series 8 is analyzed by an image processor 26 which is part of the medical observation device 1 or may be used for upgrading an existing medical observation device 1. The image processor 26 is connected to the camera system 2 via a data transmission line 28 which may be wired, wireless, or a combination of both. The data transmission line 28 may be connected to an input section 30 of the image processor 26. The input section 30 is configured to receive the time series 8.

(15) The image processor 26 further comprises a memory section 32, in which a set 34 of component signals 36 is stored. The set 34 may comprise e.g. between 10 and 200 component signals, depending on the object, the fluorophore, the lighting conditions and the computational power available.

(16) Each component signal 36 is a discrete or analytic time curve representing the development of fluorescent light intensity I over time t in a specific type of tissue. At least one of time t and intensity I may be a dimensionless and/or normalized quantity. Each component signal 36 represents the reaction of a different type of tissue to the bolus of the at least one fluorophore 12 administered at time t.sub.0. The different component signals 36 represent e.g. arterial tissue having arteries of different diameters, venous tissue having veins of different diameters and capillary tissue with capillaries of different diameters. The diameter of the respective vessels, the number of vessels, and the flow cross section of the tissue in a specific compartment will determine the shape of the component signal 36, i.e. the time when the fluorophore 12 arrives and thus fluorescent light intensity increases, and the rate with which the fluorophore 12 is washed out from the tissue, i.e. fluorescent light intensity decreases.

(17) Each component signal 36 may have been empirically determined by previous measurements. Different sets 34 may be used for different fluorophores and/or for different objects 4, such as different types of organs. For example, a different set 34 may be used for brain tissue and for muscle tissue.

(18) The image processor 26 further comprises a computing section 38. The computing section 38 is configured to determine, for each observation area 22, the fluorescent light intensity I in one, current, input frame 10 of the time series 8. The fluorescent light intensity I is determined over at least one fluorescence emission wavelength 15 of the fluorophore 12 in the observation area 22. If, for example, indocyanine green is used as a fluorophore, the fluorescence wavelengths are located between 750 nm and 950 nm. The fluorescent light intensity may be determined in any part of this region and preferably includes the wavelengths between 780 nm and 850 nm where fluorescence is strongest. The fluorescent light intensity I may be computed by summing or integrating the fluorescent light intensity over several emission wavelengths 15. As a result, a fluorescent light intensity I.sub.1 is obtained for the input frame 10 at time t.sub.1. This fluorescent light intensity is also shown in FIG. 2 although it is not part of the set 34.

(19) Further, the computing section 38 is configured to join the fluorescent light intensity, here I.sub.1, of the current input frame 10 with the fluorescent light intensities I.sub.n of at least the previous input frames 10 of the time series 8. The fluorescent light intensities I.sub.n of the previous frames as well as the frames at a later time are also shown in FIG. 2 as dots, although, again, they are not part of the set 34. The fluorescent light intensities I.sub.n of the previous frames are surrounded by a phantom line 40 for easier identification. The computing section 38 is adapted to generate a time sequence 40 by logically joining the fluorescent light intensity I.sub.1 to the previously determined fluorescent light intensities I.sub.n in the observation area 22.

(20) The computing section 38 is further configured to decompose the time sequence 40 into a preferably linear combination of the component signals 36 in the set 34. Thus, the computing section 38 determines those component signals 36 which make up the time sequence 40 in the observation area 22. These component signals 36 are indicative of the type 16, 18, 20 of tissue which is located in the observation area 22.

(21) The computing section 38 is further configured to compose a new set 34 of component signals 36 from at least a subset of the component signals 36 in the combination which results in the time sequence 40.

(22) These steps are then repeated for each observation area 22 before work is started on the next input frame 10 using the new set of component signals 36.

(23) Each observation area 22 may be assigned a separate set 34 or a single set 34 may be used for the complete input frame 10. Alternatively, a set 34 may be shared among a group of observation areas 22, wherein each input frame 10 may comprise a plurality of such groups.

(24) At the end of this iterative process, when fluorescence has decayed in the object, a final set 42 ideally comprises only those component signals 36 which are indicative of the type 16, 18, 20 of tissue in the respective observation area 22. The weight of the component signals 36 of the final set 42 needed to build the time sequence 40 at a particular observation area 22 is indicative of the prevalence of the respective type 16, 18, 20 of tissue in the respective observation area 22.

(25) The image processor 26 further comprises an image generator section 44 which is configured to generate an output frame 46 from at least one input frame 10 of the time series 8, preferably the input frame 10 which has just been analyzed by the computing section 38, and from the observation area 22. A pseudocolor is assigned by the image generator section 44 to the observation area 22, the pseudocolor depending on the combination of component signals 36 in the respective observation area 22, or on their weights respectively. For example, using an RGB color space, the color red may be used for the component signal designating arterial tissue, the color green for the component signal designating capillary tissue and the color blue for the component signal designating venous tissue. The color of the observation area is then determined by the mixture of red, green and blue which corresponds to the respective weights of the three component signals.

(26) Finally, the image processor 26 may comprise an output section 48 for outputting the output frame 46.

(27) The medical observation device 1 may comprise a display 50, which is connected to the output section 48 and in which the output frame 46 may be displayed. In the output frame 46 in the display 50, the different pseudocolors of the type 16, 18, 20 of tissue are schematically depicted as a different filling and hatching.

(28) In FIG. 3, the steps that may be carried out by the various elements of the medical observation device 1 are shown.

(29) In a first step 54, the set 34 of component signals 36 is provided. The set 34 of component signals 36 may be generated before or during carrying out the method and is preferably stored.

(30) Next, in step 56, the time series 8 of input frames 10 is accessed sequentially. This access may be performed in real time, as explained above.

(31) In step 58, the fluorescent light intensity I in the observation area 22 is determined.

(32) The process may, in one configuration, only proceed to the next step 60 if the fluorescent light intensity I in the observation area 22 in the current input frame 10 exceeds a lower fluorescence threshold I.sub.T1 (FIG. 2) and/or is below an upper fluorescence intensity threshold I.sub.T2 (FIG. 2). Once this criterion is met, the observation area 22 may undergo the iterative process until the fluorescent light intensity I in the observation area 22 falls below a decay threshold I.sub.T3. The decay threshold I.sub.T3 may be the same as, or higher or lower than, the lower intensity threshold I.sub.T1. A threshold may be assumed to have been passed if the intensity exceeds or falls below the threshold in a single input frame, in a predetermined number of preferably subsequent previous input frames and/or if the average fluorescent light intensity I in the observation area computed over a predetermined number of input frames falls below or exceeds the respective threshold.

(33) If the threshold criterion with regard to I.sub.T1 and I.sub.T2 is met, the fluorescent light intensity I.sub.1 in the current input frame 10 is joined with the fluorescent light intensities I.sub.n in the observation area 22 of the preceding input frames 10 so that the time sequence 40 is generated. The time sequence 40 may, in step 62, undergo post processing, e.g. the time sequence 40 may be smoothed, a curve-fit may be computed, normalization and/or a band- or low-pass filtering may be carried out.

(34) In the next iterative process step 64, the time sequence 40 is decomposed into the best-fitting combination of component signals 36 of the set 34. The best fit may be computed for example using an RMS algorithm.

(35) Steps 54 to 64 may, in one variant, be repeated for all or several observation areas 22 in the input frames 10, before the next step is performed. Alternatively, the process first goes to the next step 66 before working on another observation area 22 of the input frame 10.

(36) In the next step 66, the combination of component signals 36 in one of the observation areas 22, some observation areas or all observation areas of the input frame 10 is analyzed. A new set 34 is composed only of those component signals 36 which are strongest in the particular combination of a single observation area 22 or in the combinations of a plurality of observation areas 22. For example, only those component signals are retained whose weight exceeds a weight threshold. If more than one observation area is used, the average weight across a plurality of observation areas may be used. The average may also be computed over time from previous input frames. Additionally or alternatively, only a predefined number of the strongest component signals may be retained.

(37) Further, only those component signals 36 may be retained in the new set, of which the frequency of occurrence P, across a plurality of observation areas 22, preferably all observation areas 22, exceeds a prevalence threshold T (FIG. 1).

(38) The aim in this step is to reduce the number of component signals in the set 34 for the next input frame. In this step, it is assumed that component signals that are weak and/or do not occur frequently result from noise and errors.

(39) The weight of each component signal 36 in at least the latest combination of component signals 36 of the new set 34 may be stored, e.g. in the memory section.

(40) The set 34 which has been provided at step 54 is then replaced by the new set 34 and the process starts again with the next input frame or the next observation area 22 in the current input frame 10.

(41) As already stated above, the method allows maintaining different sets 34 for different fluorophores 12 and/or observation areas 22. If a single set 34 is maintained for all the observation areas 22 in an input frame 10, then the number of component signals 36 will be high, as it can be expected that a variety of tissues is contained in an input frame 10. As each component signal 36 represents a different type of tissue, a larger number of component signals 36 is needed to represent the part of the object 4 which is mapped onto the input frames 10. In this case, it is the weight of the various component signals 36 in a single observation area 22 that is indicative of the type of tissue prevalent in this observation area 22. The computational effort needed for this method is comparatively low as there is only a single set 34. However, due to the large number of component signals 36 in the set 34, there is the risk that in some observation areas 22, the wrong combination of component signals 36 is computed.

(42) This source of error can be avoided at the expense of computational effort if a plurality of sets 34 is maintained for the input frames 10. In the extreme, every observation area 22 may have its own set 34. In this case, the component signals 36 in the set 34 are indicative of the tissue types 16, 18, 20 in the respective observation area 22 and the weight of the respective component signal 36 in the combination is indicative of the prevalence of the tissue in the observation area 22.

(43) In a balanced approach, a set of observation areas 22 may share a common set 34. Such groups of observation areas 22 may be e.g. classified by the time when the fluorescence intensity threshold I.sub.T1 (FIG. 2) has been exceeded. By defining two or more time intervals, these groups may be easily identified.

(44) As a further measure, a break-off or cut-off criterion may be defined for each observation area 22. When this criterion is met, the iterative process is stopped for this particular observation area 22 and the latest weights of the component signals 36 of the new set in the last combination of component signals are stored for later analysis.

(45) Further, the output frame 46 may be generated in a step 68 and, in step 70, displayed in the display 50.

(46) The steps 68 and 70 need not be carried out every time the iterative process steps 58 to 66 have been computed, i.e. for every input frame 10 of the time series 8, but can be performed at predetermined time intervals. Preferably, however, the iterative process steps 58 to 66 are carried out in real time, i.e. at the same rate as the input frames 10 are received. The access to the input frames 10 at step 56 preferably occurs at a frame rate which is faster than the flicker frame rate, i.e. the frame rate which is required for a smooth rendition of a video sequence for the human eye. Typically, the flicker frame rate is faster than 26 Hz. The output frames 46 are preferably also generated at the same frequency as the frame rate.

(47) Of course, the process described above may also run as a batch process on a recorded video sequence.

(48) FIG. 4 illustrates the decomposing of a time sequence 40 into two or more component signals 36 which are shown as discrete-time functions g(t), h(t), x(t), y(t), z(t). The time sequence 40 can be expressed as a combination 72 of these functions.

(49) For the functional decomposition of the time sequence 40, the closest approximation of f(t) is computed such that
f(t) ≈ a·g(t) + b·h(t) + c·x(t) + d·y(t) + e·z(t)
holds. As the time sequence 40 consists of a plurality of sample points 52, the weights a to e can be accurately determined by various standard algorithms such as an RMS approximation. In the left-hand part of FIG. 4, the time sequence 40 is shown to have been successfully decomposed into the two component signals g(t) and h(t), each having the same normalized weight, e.g. a=b=1. The functions y(t) and z(t) do not contribute to the time sequence, d=e=0. The function x(t) has only a very small weight which is below a weight threshold W, c<W, and will thus be considered as resulting from noise.

(50) In the middle part of FIG. 4, the weight of the function g(t) is half of the weight of the function h(t) in order to best approximate the function f(t). This situation is reversed in the right-hand part of FIG. 4, where the weight of the function h(t) is only half the weight of the function g(t).

REFERENCE NUMERALS

(51) 1 medical observation device 2 camera system 4 object 6 field of view 8 time series 10 input frame 12 fluorophore 13 illumination system 14 fluorescence excitation wavelengths 15 fluorescence emission wavelengths 16 arterial tissue 18 capillary tissue 20 venous tissue 22 observation area 23 pixel 26 image processor 28 data transmission line 30 input section 32 memory section 34 set of component signals 36 component signal 38 computing section 40 time sequence 42 final set of component signals 44 image generator section 46 output frame 48 output section 50 display 52 sample points 54, 56, 58, 60, 62, 64, 66, 68, 70 process steps 72 combination of component signals a, b, c, d, e weights f, g, h, x, y, z discrete time functions I fluorescent light intensity I.sub.1 fluorescent light intensity at time t.sub.1 in a specific observation area I.sub.T1, I.sub.T2, I.sub.T3 thresholds for the fluorescent light intensity I.sub.n time sequence of fluorescent light intensities in a specific observation area over time P frequency of occurrence of component signal T prevalence threshold t time t.sub.0 time of application of fluorophore to object t.sub.1 specific time W weight threshold