Method for observing a sample

10564602 · 2020-02-18

Abstract

A method for observing a sample includes illuminating the sample with a light source and forming a plurality of images, by an imager, the images representing the light transmitted by the sample in different spectral bands. From each image, a complex amplitude representative of the light wave transmitted by the sample is determined in the corresponding spectral band. The method further includes back-propagating each complex amplitude to a plane passing through the sample, determining a weighting function from the back-propagated complex amplitudes, propagating the weighting function to the plane along which the matrix-array photodetector extends, and updating each complex amplitude, in the detection plane, according to the propagated weighting function.

Claims

1. A method for observing a sample, comprising: i) illuminating the sample using a light source that produces a light wave that propagates along a propagation axis; ii) acquiring, using a photodetector, a plurality of images of the sample, the images being formed in a detection plane, the sample being placed between the light source and the photodetector, each image of the plurality being representative of a light wave, transmitted by the sample under effect of the illumination, called the transmitted light wave, and each image of the plurality being acquired in a spectral band that is different from that of other images of the plurality; iii) determining, based on each image respectively acquired in each spectral band, an initial complex amplitude of the transmitted light wave, in said each spectral band, in the detection plane; iv) selecting a sample plane, in which the sample lies, and back-propagating each complex amplitude established in the detection plane, in said each spectral band, in order to determine a complex amplitude of the transmitted wave, in said each spectral band, in the sample plane; v) calculating in said each spectral band, based on a plurality of complex amplitudes determined in step iv), a weighted sum of the complex amplitudes, or of logarithm functions thereof, or of argument functions thereof, in the sample plane, and calculating a weighting function, the weighting function being calculated using the weighted sum; vi) propagating the weighting function to the detection plane so as to obtain, for at least one spectral band, a weighting function in the detection plane; vii) updating at least one complex amplitude of the transmitted light wave, in a spectral band, in the detection plane, using the weighting function obtained, in the spectral band, in step vi); and viii) repeating steps iv) to vii) until a stop criterion is reached, wherein, in step vii), an argument function of the complex amplitude of the transmitted light wave, in a 
spectral band, in the detection plane, is calculated depending on an argument function of the weighting function determined, in the detection plane and in the spectral band, in step vi).

2. The method of claim 1, wherein, in step iii), a modulus of the complex amplitude of the transmitted light wave in a spectral band is determined by normalizing the intensity of the image acquired by the photodetector, in the spectral band, by a reference intensity measured by the photodetector in the absence of sample.

3. The method of claim 1, wherein, in step iv), the complex amplitude in a sample plane, in a spectral band, is determined by applying a propagation operator to the complex amplitude, defined in the same spectral band, in the detection plane.

4. The method of claim 1, wherein, in step vi), the weighting function, in the detection plane, is propagated by applying a propagation operator to the weighting function determined, in the sample plane, in step v).

5. The method of claim 1, wherein, in step vii), a modulus of the complex amplitude of the transmitted light wave, in a spectral band, in the detection plane, is calculated depending on a modulus of the initial complex amplitude, in the spectral band.

6. The method of claim 1, wherein, in step v), the weighting function is common to all the spectral bands.

7. The method of claim 1, wherein step v) further comprises determining a plurality of weighting functions, each weighting function of said plurality of weighting functions being associated with one spectral band.

8. The method of claim 1, further comprising, following step viii): ix) forming an image representative of a modulus or of an argument function of the complex amplitude of the wave transmitted by the sample, in the sample plane or in the detection plane, in at least one spectral band.

9. A device for observing a sample, comprising: a light source configured to illuminate the sample; a photodetector, the sample being disposed between the light source and the photodetector, the photodetector being configured to form a plurality of images, in a detection plane, of the light wave transmitted by the sample under effect of illumination by the light source, each image being obtained in a spectral band that is different from that of other images of the plurality; and a processor, configured to process the plurality of images by executing instructions, programmed into a memory, which, when executed, implement the method of claim 1.

Description

FIGURES

(1) FIG. 1 shows a first example of a device for implementing the invention, the analyzed sample being an anatomopathology slide.

(2) FIG. 2 shows a first example of a device for implementing the invention, the analyzed sample being a bodily liquid containing particles.

(3) FIG. 3 shows the detection plane, on which an image is formed, and the sample plane. This figure also illustrates the relationships between the main quantities implemented in the various described embodiments.

(4) FIG. 4 shows a flowchart illustrating the sequence of the main steps of an iterative reconstructing method.

(5) FIG. 5 shows a second example of a device for implementing the invention, the analyzed sample being an anatomopathology slide.

(6) FIGS. 6A, 6B and 6C show reconstructed images, in a sample plane, in a first spectral band, reconstructed using an iterative reconstruction algorithm, these images respectively being obtained following a number of iterations respectively equal to 1, 3 and 10.

(7) FIGS. 7A, 7B and 7C show reconstructed images, in a sample plane, in a second spectral band, reconstructed using an iterative reconstruction algorithm, these images respectively being obtained following a number of iterations respectively equal to 1, 3 and 10.

(8) FIGS. 8A, 8B and 8C show reconstructed images, in a sample plane, in a third spectral band, reconstructed using an iterative reconstruction algorithm, these images respectively being obtained following a number of iterations respectively equal to 1, 3 and 10.

(9) FIGS. 9A, 9B and 9C show composite images obtained by combining reconstructed images in the first, second and third spectral bands, these images respectively corresponding to 1, 3 and 10 iterations. These images are color images, here shown in black and white.

SUMMARY OF PARTICULAR EMBODIMENTS

(10) FIG. 1 shows an example of a device falling within the scope of the invention. A light source 11 is able to emit a light wave 12, called the incident light wave, in the direction of a sample 10, along a propagation axis Z.

(11) The sample 10 may be a biological sample that it is desired to characterize. It may for example be a tissue slide, or an anatomopathology slide, including a small thickness of tissue deposited on a transparent slide 15. By small thickness, what is meant is a thickness preferably smaller than 100 μm, and preferably smaller than 10 μm, typically a few microns. Such a sample is shown in FIG. 1. It may be seen that the sample lies in a plane P.sub.0, called the sample plane, perpendicular to the propagation axis Z.

(12) The sample 10 may also include a solid or liquid medium 14 containing particles 1, 2, 3, 4, 5 to be characterized, such a case being shown in FIG. 2. It may for example be a question of biological particles in a culture medium, or in a bodily liquid. By biological particle, what is meant is a cell, a bacterium or another microorganism, a fungus, a spore, etc. The term particles may also designate microbeads, for example metal microbeads, glass microbeads or organic microbeads, which are commonly implemented in biological protocols. It may also be a question of insoluble droplets submerged in a liquid medium, for example lipid droplets in an oil-in-water type emulsion. Thus, the term particle designates both endogenous particles, initially present in the examined sample, and exogenous particles, added to this sample before analysis.

(13) Generally, a particle has a size advantageously smaller than 1 mm, or even smaller than 500 μm, and preferably a size comprised between 0.5 μm and 500 μm.

(14) The distance between the light source and the sample is preferably larger than 1 cm. It is preferably comprised between 2 and 30 cm. Preferably, the light source, seen by the sample, may be considered to be a point source. This means that its diameter (or its diagonal) is preferably smaller than one tenth and better still one hundredth of the distance between the sample and the light source. Thus, preferably, the light reaches the sample in the form of plane waves, or waves that may be considered as such.

(15) The light source 11 is able to produce a plurality of incident light waves 12.sub.1 . . . 12.sub.n, each i.sup.th light wave 12.sub.i lying in an i.sup.th spectral band λ.sub.i. The spectral bands λ.sub.1 . . . λ.sub.n are different from one another, and, preferably, do not overlap.

(16) In the example device shown in FIGS. 1 and 2, the light source includes three elementary light sources, namely three light-emitting diodes (LEDs) 11.sub.1, 11.sub.2 and 11.sub.3 emitting in the spectral bands λ.sub.1=450 nm-465 nm; λ.sub.2=520 nm-535 nm; λ.sub.3=620 nm-630 nm, respectively. Preferably, there is no overlap between the various spectral bands; a negligible overlap, for example concerning less than 25% and better still less than 10% of the light intensity emitted, is however envisionable. In this example, the light source 11 includes a Cree (registered trademark) XLamp (registered trademark) MC-E multi-LED diode. This diode includes four individually addressable elementary light-emitting diodes, only three of which are implemented in the context of this invention, the fourth being a white LED. The elementary light sources may be temporally coherent sources, such as laser diodes. Other configurations of light sources 11 are possible and described below.

(17) The light source 11 is preferably a point source. It may in particular comprise a diaphragm 18, or spatial filter. The aperture of the diaphragm is typically comprised between 5 μm and 1 mm, preferably between 50 μm and 500 μm, and is for example 150 μm. The diaphragm may be replaced by an optical fiber, a first end of which is placed facing one elementary light source 11.sub.1, 11.sub.2 or 11.sub.3, and a second end of which is placed facing the sample.

(18) The light source 11 preferably includes a diffuser 17, placed between each elementary light source 11.sub.1, 11.sub.2 and 11.sub.3 and the diaphragm 18. The inventors have observed that the use of such a diffuser allows constraints on the centrality of each elementary light source with respect to the aperture of the diaphragm to be relaxed. In other words, the use of such a diffuser allows an elementary light source 11.sub.i, with 1≤i≤3, that is slightly off center with respect to the aperture of the diaphragm 18 to be used. In this example, the diaphragm is sold by Thorlabs under the reference P150S.

(19) Preferably, each elementary light source 11.sub.i is of small spectral width, for example smaller than 100 nm, or even than 20 nm. The term spectral width designates the full width at half maximum of the emission band of the light source in question.

(20) In this example, the diffuser implemented is a 40° diffuser (reference Light Shaping Diffuser 40°, manufactured by Luminit). The function of such a diffuser is to distribute the light beam, produced by an elementary light source 11.sub.i, over a cone of angle θ, θ being equal to 40° in the present case. Preferably, the scattering angle θ varies between 10° and 60°.

(21) The sample 10 is placed between the light source 11 and a matrix-array photodetector 16. The latter preferably lies parallel, or substantially parallel to the transparent slide 15 holding the sample.

(22) The term substantially parallel means that the two elements may not be rigorously parallel, an angular tolerance of a few degrees, smaller than 10°, being acceptable.

(23) The photodetector 16 is an imager, able to form an image in a detection plane P. In the example shown, it is a CCD or CMOS matrix-array photodetector including a pixel matrix-array. CMOS photodetectors are preferred, because the size of their pixels is smaller, allowing images with a more favorable spatial resolution to be acquired. In this example, the detector is a CMOS sensor sold by Omnivision under the reference OV5647. It is an RGB CMOS sensor comprising 2592×1944 pixels, with an inter-pixel pitch of 1.4 μm. The useful area of the photodetector is 3.6×2.7 mm.sup.2. The detection plane P preferably lies perpendicular to the propagation axis Z of the incident light wave 12.

(24) Preferably, the photodetector comprises a pixel matrix-array, above which matrix array is placed a transparent protective window. The distance between the pixel matrix-array and the protective window is generally comprised between a few tens of μm and 150 or 200 μm. Photodetectors, the inter-pixel pitch of which is smaller than 3 μm, are preferred, in order to improve the spatial resolution of the image. The photodetector may comprise a mirror-type system for redirecting images toward a pixel matrix-array, in which case the detection plane corresponds to the plane in which the image-redirecting system lies. Generally, the detection plane P corresponds to the plane in which an image is formed.

(25) The distance d between the sample 10 and the pixel matrix-array of the photodetector 16 is, in this example, equal to 300 μm. Generally, whatever the embodiment, the distance d between the sample and the pixels of the photodetector is preferentially comprised between 50 μm and 2 cm, and preferably comprised between 100 μm and 2 mm.

(26) The absence of magnifying optics between the photodetector 16 and the sample 10 will be noted. This does not prevent focusing micro-lenses optionally being present level with each pixel of the photodetector 16, these lenses not having the function of magnifying the image.

(27) FIG. 3 shows a sample 10, including diffracting objects 32 placed around non-diffracting or not very diffracting zones 31, which are referred to as poor zones below. The sample may be solid, for example in the case of a tissue deposited on an anatomopathology slide. It may also be liquid, for example in the case of a bodily liquid or a cell culture medium.

(28) The photodetector 16 is able to produce an image I.sub.i of a light wave 22.sub.i transmitted by the sample 10 when the latter is illuminated by an incident wave 12.sub.i, in the i.sup.th spectral band λ.sub.i. The spectral band of the transmitted light wave 22.sub.i includes all or some of the spectral band of the incident wave 12.sub.i. The light wave 22.sub.i, transmitted by the sample, in the spectral band λ.sub.i, results from the interaction of the sample 10 with the incident light wave 12.sub.i produced by the elementary light source 11.sub.i.

(29) Under the effect of the incident light wave 12.sub.i, the sample 10 may generate a diffracted wave that is liable to produce, level with the detection plane P, interference, in particular with a portion of the incident light wave 12.sub.i transmitted by the sample. This interference gives rise, in the image acquired by the photodetector, to a plurality of elementary diffraction patterns, each elementary diffraction pattern 36 including a central zone and a plurality of concentric diffraction rings. Each elementary diffraction pattern 36 is due to one diffracting object 32 in the sample.

(30) Moreover, the sample may absorb a portion of the incident light wave 12.sub.i. Thus, the light wave 22.sub.i, in a spectral band λ.sub.i, transmitted by the sample, and to which the matrix-array photodetector 16 is exposed, may comprise: a component resulting from the diffraction described above, this diffraction component possibly in particular resulting in the presence of elementary diffraction patterns on the photodetector 16, each elementary diffraction pattern possibly being associated with one diffracting element 32 of the sample, such a diffracting element possibly being a cell, or a particle, or any other diffracting object 32 present in the sample 10; and a component resulting from the absorption of the incident light wave 12.sub.i in the sample.

(31) A processor 20, for example a microprocessor, is able to process each image generated by the matrix-array photodetector 16. In particular, the processor is a microprocessor connected to a programmable memory 23 in which a sequence of instructions for carrying out the calculating and image-processing operations described in this description is stored. It may also be connected to a display screen 24.

(32) The steps of an iterative method for obtaining an image of the sample 10 will be described below with reference to FIGS. 3 and 4.

(33) 1.sup.st Step: Initialization

(34) In a first step 100 of acquiring images, each elementary light source 11.sub.i of the light source 11 is activated in succession, each light source emitting an incident light wave (12.sub.1, . . . 12.sub.N), in a spectral band (λ.sub.1, . . . λ.sub.N), along a propagation axis Z, in the direction of the sample 10.

(35) In each acquisition, the matrix-array photodetector captures an image I.sub.i corresponding to a spectral band λ.sub.i, the index i, relating to the spectral band, being comprised between 1 and N, N being the number of spectral bands in question. In the example shown in FIGS. 1 and 2, the light source 11 includes three elementary light sources 11.sub.1, 11.sub.2 and 11.sub.3. The photodetector captures three images I.sub.1, I.sub.2, I.sub.3, corresponding to the spectral bands λ.sub.1, λ.sub.2 and λ.sub.3, respectively.

(36) The sample is placed at an axial coordinate z=0, along the propagation axis Z. The letter r designates a radial coordinate, i.e. a coordinate in a plane perpendicular to the propagation axis Z. The plane z=d corresponds to the detection plane, whereas the plane z=0 corresponds to a plane passing through the sample, called the sample plane and denoted P.sub.0.

(37) If I.sub.i.sup.z=d(r)=I.sub.i.sup.d(r) designates the value of the intensity captured, in the spectral band λ.sub.i, by the pixel of the detector of radial coordinate r in the detection plane, it is possible to establish, using the image I.sub.i, a complex amplitude a.sub.i.sup.z=d(r)=a.sub.i.sup.d(r) of the wave 22.sub.i at said pixel of coordinate r, the modulus of which may be expressed by the expression:
|a.sub.i.sup.d(r)|={square root over (I.sub.i.sup.d(r))}

(38) The exponent d expresses the fact that the complex amplitude is determined in the detection plane P, of equation z=d. The complex amplitude a.sub.i.sup.d(r) includes a modulus and an argument, such that:
a.sub.i.sup.d(r)=M.sub.i.sup.d(r)e.sup.jφ.sub.i.sup.d.sup.(r)
where: M.sub.i.sup.d(r) is the modulus of the complex amplitude of the light wave detected by the photodetector, in the i.sup.th spectral band λ.sub.i, at a radial coordinate r in the detection plane; and φ.sub.i.sup.d(r) is the phase of the complex amplitude of the light wave detected by the photodetector, in the i.sup.th spectral band λ.sub.i, and at said radial coordinate r in the detection plane.

(39) However, the matrix-array photodetector delivers no information on the phase of the light wave. Thus, in step 100, e.sup.jφ.sub.i.sup.d.sup.(r) is considered to be equal to an arbitrary initial value, for example equal to 1.

(40) The complex amplitude a.sub.i.sup.d(r) may be expressed, normalized, by the expression:

(41) A.sub.i.sup.d(r)=a.sub.i.sup.d(r)/{square root over (I.sub.i.sup.mean)}
where: I.sub.i.sup.mean is the mean intensity of the light wave 12.sub.i emitted by the light source 11 in the i.sup.th spectral band λ.sub.i; this mean intensity may be determined experimentally, by placing the photodetector 16 facing the light source 11, without a sample placed therebetween, and by calculating the mean of the pixels of the image acquired by the photodetector 16. A.sub.i.sup.d(r) is the normalized complex amplitude of the light wave 22.sub.i detected by the matrix-array photodetector 16 in the i.sup.th spectral band λ.sub.i.

(42) The normalization may also be carried out by dividing the complex amplitude a.sub.i.sup.d(r) by {square root over (I.sub.i.sup.mean(r))}, the term I.sub.i.sup.mean(r) representing the light intensity, at the radial coordinate r, measured in the absence of sample.

(43) The normalized complex amplitude A.sub.i.sup.d(r) includes a modulus and an argument, such that:
A.sub.i.sup.d(r)=m.sub.i.sup.d(r)e.sup.jφ.sub.i.sup.d.sup.(r)
where: m.sub.i.sup.d(r) is the modulus of the normalized complex amplitude A.sub.i.sup.d(r); and φ.sub.i.sup.d(r) is the phase of the normalized complex amplitude, which is also the phase of the complex amplitude a.sub.i.sup.d(r).

(44) The first step 100 allows, on the basis of the image I.sub.i detected by the photodetector in the i.sup.th spectral band λ.sub.i, an initial value to be assigned to each complex amplitude a.sub.i.sup.d(r) or to each normalized complex amplitude A.sub.i.sup.d(r), such that:
a.sub.i,p=1.sup.d(r)=M.sub.i.sup.d(r)={square root over (I.sub.i.sup.d(r))}
or

(45) A.sub.i,p=1.sup.d(r)=m.sub.i.sup.d(r)={square root over (I.sub.i.sup.d(r)/I.sub.i.sup.mean)}.

(46) The index p corresponds to the rank of the iteration of the iterative method described below. Step 100 being an initialization step, the value 1 is attributed to this index.

(47) By addressing all or some of the pixels r of the photodetector 16, a complex image, or complex field, of the light wave 22.sub.i in the detector plane is obtained, this image containing the complex amplitudes a.sub.i.sup.d(r) or the normalized complex amplitudes A.sub.i.sup.d(r).

(48) In the rest of the description, only the normalized complex amplitude A.sub.i.sup.d(r) will be considered, though the reasoning also applies to the complex amplitude a.sub.i.sup.d(r).

(49) This first step is repeated for each spectral band (λ.sub.1 . . . λ.sub.N) detected by the photodetector.
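The initialization of step 100 can be sketched in a few lines of numpy; this is an illustrative sketch, and the image stack and reference intensities below are synthetic placeholders, not data from the patent.

```python
import numpy as np

# Synthetic stand-ins for the acquired images I_i^d(r), one per spectral band.
rng = np.random.default_rng(0)
images = [rng.uniform(0.2, 1.0, size=(256, 256)) for _ in range(3)]

# Mean intensity I_i^mean, normally measured without a sample in place;
# here the image mean serves as a placeholder.
I_mean = [float(im.mean()) for im in images]

# Initial normalized complex amplitudes A_{i,p=1}^d(r): the modulus is
# sqrt(I_i^d / I_i^mean) and the phase is arbitrarily set to zero, since
# the photodetector records no phase information.
A_d = [np.sqrt(im / Im).astype(complex) for im, Im in zip(images, I_mean)]
```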

(50) 2.sup.nd Step: Back Propagation to the Sample Plane P.sub.0

(51) During a second step 200, the normalized complex amplitude A.sub.i,p.sup.d(r) of the wave 22.sub.i to which the detector is exposed is estimated, in the sample plane P.sub.0. This estimation is made by back propagating the normalized complex amplitude A.sub.i,p.sup.d(r), determined in the detection plane P, from the detection plane P to the sample plane P.sub.0.

(52) The index p designates the rank of the iteration. In the first iteration (p=1), the initial normalized complex amplitude A.sub.i,p=1.sup.d(r)=A.sub.i.sup.d(r) obtained at the end of the first step 100 is used. In the following iterations (p>1), the complex amplitude resulting from the preceding iteration is used, as will be detailed below.

(53) According to well-known principles of digital holographic reconstruction, by determining the product of a convolution between the complex amplitude of the light wave 22.sub.i determined, for the spectral band λ.sub.i, in the detection plane z=d, and a propagation operator h(r,z), it is possible to reconstruct a complex amplitude of the same light wave at any point of spatial coordinates (r,z), and in particular in the sample plane P.sub.0.

(54) In other words, the normalized complex amplitude A.sub.i,p.sup.z(r) of the light wave 22.sub.i may be obtained, at a point of coordinates (r, z), on the basis of A.sub.i,p.sup.z=d(r), via the operation:
A.sub.i,p.sup.z(r)=A.sub.i,p.sup.z=d(r)*h.sub.i(r,z−d),
where h.sub.i is the propagation operator in the spectral band λ.sub.i.

(55) When the reconstruction is carried out in the direction of propagation of the light, for example from the sample to the photodetector, propagation is spoken of. When the reconstruction is carried out in the direction opposite the direction of propagation of the light, for example from the photodetector to the sample, back propagation is spoken of.

(56) The propagation operator may in particular be based on the Fresnel diffraction model. In this example, the propagation operator is the Fresnel-Helmholtz function:

(57) h(r,z)=(1/(jλz))e.sup.j2πz/λ exp(jπr.sup.2/(λz))
where λ is the wavelength.

(58) Thus,

(59) A.sub.i,p.sup.z=0(r′)=A.sub.i,p.sup.0(r′)=A.sub.i,p.sup.z=d(r)*h.sub.i(r,−d)=−(1/(jλ.sub.id))e.sup.−j2πd/λ.sub.i∫A.sub.i,p.sup.d(r)exp(−jπ(r−r′).sup.2/(λ.sub.id))dr
where r designates the radial coordinates in the plane of the photodetector (z=d); r′ designates the radial coordinates in the reconstruction plane (z=0); and λ.sub.i is the central wavelength of the spectral band in question.

(60) A.sub.i,p.sup.0(r) is therefore obtained by back propagating A.sub.i,p.sup.d(r) over the distance d separating the detection plane P from the sample plane P.sub.0.

(61) This second step is repeated for each spectral band (λ.sub.1 . . . λ.sub.N) emitted by the light source 11 or, more generally, for each spectral band (λ.sub.1 . . . λ.sub.N) respectively associated with each image (I.sub.1 . . . I.sub.N) detected by the photodetector 16.
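As an illustration, the back propagation of step 200 can be evaluated with a Fourier-space version of the Fresnel kernel; this FFT-based sketch is one common way of computing the convolution with h.sub.i, and the array size, pixel pitch and distance below are merely plausible values, not prescribed by the patent.

```python
import numpy as np

def fresnel_propagate(field, wavelength, dist, pitch):
    """Convolve `field` with the Fresnel kernel over `dist` metres,
    evaluated in Fourier space; a negative `dist` back-propagates,
    i.e. applies h(r, -d)."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)          # spatial frequencies along x
    fy = np.fft.fftfreq(ny, d=pitch)          # spatial frequencies along y
    FX, FY = np.meshgrid(fx, fy)
    # Fresnel transfer function: exp(j*2*pi*z/lambda) * exp(-j*pi*lambda*z*f^2)
    H = np.exp(1j * 2 * np.pi * dist / wavelength) \
        * np.exp(-1j * np.pi * wavelength * dist * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Back-propagate a detector-plane amplitude to the sample plane (z = 0).
A_d = np.ones((128, 128), dtype=complex)      # placeholder amplitude in plane z = d
A_0 = fresnel_propagate(A_d, wavelength=530e-9, dist=-300e-6, pitch=1.4e-6)
```

Propagating A_0 forward again over the same distance recovers A_d, since the transfer functions for +d and −d are exact inverses of one another.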

(62) It is possible, at this stage, to establish an image of the modulus or of the phase of the complex amplitude A.sub.i,p.sup.0(r) of each light wave 22.sub.i, in the sample plane P.sub.0, whether the complex amplitude be normalized or not, by calculating the value of A.sub.i,p.sup.0(r) at the various coordinates r in the sample plane.

(63) Each image of the modulus of the complex amplitude A.sub.i,p.sup.0(r) is representative of the intensity of the light wave level with the sample, whereas each image of the argument of the complex amplitude A.sub.i,p.sup.0(r) is representative of the phase of the light wave level with the sample.

(64) When, as in the present case, three spectral bands centered respectively on wavelengths in the blue, green and red, are used, the information contained in the three images allows a color image of the sample to be obtained.
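The color composition mentioned above amounts to stacking the three reconstructed band images as RGB channels; a minimal sketch, with synthetic moduli standing in for the reconstructed images:

```python
import numpy as np

# Synthetic placeholders for moduli reconstructed in the red, green and blue bands.
rng = np.random.default_rng(1)
mod_r, mod_g, mod_b = (rng.uniform(0.0, 2.0, size=(64, 64)) for _ in range(3))

# Stack the three band images as RGB channels and normalize to [0, 1] for display.
rgb = np.stack([mod_r, mod_g, mod_b], axis=-1)
rgb = rgb / rgb.max()
```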

(65) It will be noted that the normalized complex amplitude A.sub.i,p.sup.0(r) is equivalent to a transmission function describing transmission of the incident wave 12.sub.i by the sample 10 at the radial coordinate r.

(66) 3.sup.rd Step: Determining the Weighting Function

(67) In the step 300, a weighting function, denoted F.sub.p.sup.0(r), allowing the complex amplitude of the light wave transmitted by the sample in the various spectral bands λ.sub.i in question to be weighted, is determined, in the sample plane.

(68) According to this example, the weighting function F.sub.p.sup.0(r), in the sample plane, may be common to each spectral band. It is obtained by combining the normalized complex amplitudes A.sub.i,p.sup.0(r) of the light wave transmitted by the sample, in the sample plane P.sub.0 and in the various spectral bands λ.sub.i.

(69) According to one example, the weighting function is obtained via a weighted sum of each complex amplitude determined in step 200, in the sample plane P.sub.0, using the expression:

(70) F.sub.p.sup.0(r)=(1/Σ.sub.ik.sub.i)Σ.sub.ik.sub.iA.sub.i,p.sup.0(r)
where k.sub.i is a positive weighting factor associated with the i.sup.th spectral band λ.sub.i.

(71) The weighting factors may be equal to one another.
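With equal weights, expression (70) reduces to a plain average of the band amplitudes; a short numpy sketch (the amplitudes are synthetic placeholders):

```python
import numpy as np

# Synthetic sample-plane amplitudes A_{i,p}^0(r) for three spectral bands.
rng = np.random.default_rng(2)
A_0 = [rng.normal(size=(64, 64)) + 1j * rng.normal(size=(64, 64)) for _ in range(3)]

# F_p^0(r) = (1 / sum_i k_i) * sum_i k_i * A_{i,p}^0(r), with equal weights k_i.
k = np.ones(len(A_0))
F_0 = sum(ki * Ai for ki, Ai in zip(k, A_0)) / k.sum()
```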

(72) Other ways of determining the weighting function, in the sample plane, are detailed below.

(73) 4.sup.th Step: Propagation of the Weighting Function to the Detector Plane

(74) The step 400 aims to propagate, from the sample plane P.sub.0 to the detector plane P, the weighting function F.sub.p.sup.0(r) determined, in the preceding step, in the sample plane P.sub.0. Since the propagation operator is dependent on wavelength, this propagation is carried out for each spectral band λ.sub.i in question.

(75) Thus, for each spectral band λ.sub.i, F.sub.i,p.sup.d(r)=F.sub.p.sup.0(r)*h.sub.i(r,z=d).

(76) When the propagation operator is a Fresnel-Helmholtz operator such as defined above,

(77) F.sub.i,p.sup.d(r′)=(1/(jλ.sub.id))e.sup.j2πd/λ.sub.i∫F.sub.p.sup.0(r)exp(jπ(r′−r).sup.2/(λ.sub.id))dr

(78) Since the propagation operator is dependent on wavelength, as many weighting functions are determined, in the detection plane, as there are spectral bands. r designates the radial coordinates in the sample plane (z=0); r′ designates the radial coordinates in the reconstruction plane, i.e. in the detector plane (z=d); and λ.sub.i is the central wavelength of the spectral band in question.

5.sup.th Step: Update of the Complex Amplitude in the Detector Plane

(79) In the step 500, the value of the weighting function, in the detection plane z=d, is used to update the estimation of the normalized complex amplitude A.sub.i,p.sup.d(r) of the light wave 22.sub.i to which the photodetector 16 is exposed in the spectral band λ.sub.i.

(80) The updating formula is:

(81) A.sub.i,p.sup.d(r)=m.sub.i.sup.d(r)F.sub.i,p.sup.d(r)/|F.sub.i,p.sup.d(r)|=m.sub.i.sup.d(r)e.sup.j{tilde over (φ)}.sub.i,p.sup.d.sup.(r)
where: |F.sub.i,p.sup.d(r)| is the modulus of F.sub.i,p.sup.d(r); m.sub.i.sup.d(r) is the modulus of the normalized initial complex amplitude A.sub.i,p=1.sup.d(r) determined, on the basis of the image I.sub.i, in the first step 100, this term serving as a link to the measured data; {tilde over (φ)}.sub.i,p.sup.d is an estimation of the phase of the complex amplitude of the wave 22.sub.i in the i.sup.th spectral band λ.sub.i; and A.sub.i,p.sup.d(r) is the complex amplitude of the light wave 22.sub.i transmitted by the sample, in the plane of the photodetector 16, this complex amplitude forming the base of the following iteration.
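The updating formula (81) keeps the phase of the propagated weighting function and restores the measured modulus; sketched with synthetic arrays:

```python
import numpy as np

# Synthetic detector-plane weighting function and measured modulus.
rng = np.random.default_rng(3)
F_d = rng.normal(size=(64, 64)) + 1j * rng.normal(size=(64, 64))
m_d = np.sqrt(rng.uniform(0.2, 1.0, size=(64, 64)))  # from the acquired image I_i

# A_{i,p}^d(r) = m_i^d(r) * F_{i,p}^d(r) / |F_{i,p}^d(r)|:
# modulus reset to the measured one, phase taken from the weighting function.
A_d = m_d * F_d / np.abs(F_d)
```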

(82) Following this step, a new iteration may start, the input datum of this new iteration p+1 being A.sub.i,p+1.sup.d(r)=A.sub.i,p.sup.d(r), this new iteration starting with the back propagation of each normalized complex amplitude A.sub.i,p+1.sup.d(r), for the various spectral bands in question, to the sample plane P.sub.0, according to step 200.

(83) Steps 200 to 500 are carried out iteratively, either up to a preset number of iterations p.sub.max or until a convergence criterion is reached, the latter possibly being, for example, expressed in the form of a discrepancy between the estimations of a given quantity in two successive iterations. When this discrepancy is smaller than a given threshold ε, the convergence criterion is reached. For example, the process is stopped when one of these conditions is reached:

(84) |F.sub.i,p.sup.d(r)/|F.sub.i,p.sup.d(r)|−F.sub.i,p+1.sup.d(r)/|F.sub.i,p+1.sup.d(r)||&lt;ε; |F.sub.i,p.sup.0(r)−F.sub.i,p+1.sup.0(r)|&lt;ε; |A.sub.i,p.sup.0(r)−A.sub.i,p+1.sup.0(r)|&lt;ε; Arg(A.sub.i,p.sup.0(r)−A.sub.i,p+1.sup.0(r))&lt;ε;
this list is not limiting.
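Putting steps 100 to 500 together, the whole iteration can be sketched as follows. This is a schematic numpy rendering under simplifying assumptions (FFT-based Fresnel propagation, equal weights, a stop criterion on the change of the sample-plane amplitudes), run here on synthetic flat images rather than real acquisitions.

```python
import numpy as np

def fresnel_propagate(field, wavelength, dist, pitch):
    """FFT-based Fresnel propagation over `dist`; negative `dist` back-propagates."""
    ny, nx = field.shape
    FX, FY = np.meshgrid(np.fft.fftfreq(nx, d=pitch), np.fft.fftfreq(ny, d=pitch))
    H = np.exp(1j * 2 * np.pi * dist / wavelength) \
        * np.exp(-1j * np.pi * wavelength * dist * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(field) * H)

def reconstruct(images, I_mean, wavelengths, d, pitch, p_max=10, eps=1e-4):
    m = [np.sqrt(im / Im) for im, Im in zip(images, I_mean)]   # measured moduli
    A_d = [mi.astype(complex) for mi in m]                     # step 100: phase = 0
    A_0_prev = None
    for _ in range(p_max):
        # step 200: back-propagate each band to the sample plane
        A_0 = [fresnel_propagate(A, lam, -d, pitch) for A, lam in zip(A_d, wavelengths)]
        # step 300: weighting function common to all bands (equal weights)
        F_0 = sum(A_0) / len(A_0)
        # step 400: propagate the weighting function to the detector plane, per band
        F_d = [fresnel_propagate(F_0, lam, d, pitch) for lam in wavelengths]
        # step 500: keep the phase of F_d, reset the modulus to the measured one
        A_d = [mi * F / (np.abs(F) + 1e-12) for mi, F in zip(m, F_d)]
        # stop criterion: change of the sample-plane amplitudes between iterations
        if A_0_prev is not None and max(np.abs(a - b).max()
                                        for a, b in zip(A_0, A_0_prev)) < eps:
            break
        A_0_prev = A_0
    return A_0

# Demo on a synthetic, featureless sample (unit-intensity images in three bands).
imgs = [np.ones((64, 64)) for _ in range(3)]
A0 = reconstruct(imgs, [1.0] * 3, [460e-9, 530e-9, 625e-9], d=300e-6, pitch=1.4e-6)
```

For this featureless input the iteration reaches a fixed point after a couple of passes, and the reconstructed sample-plane amplitudes keep a unit modulus, as expected for a fully transparent sample.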

(85) At the end of the method, an estimation of the complex amplitude of the light wave 22.sub.i, transmitted by the sample, and to which the photodetector is exposed, in the detector plane P, of equation z=d, and/or in the sample plane P.sub.0, of equation z=0, is obtained, for each spectral band in question. Using the various complex amplitudes A.sub.i,p.sup.0(r) reconstructed in the sample plane, a precise representation of the latter is obtained, in each of the spectral bands in question, in particular by forming images on the basis of the modulus or of the phase of said complex amplitudes.

(86) As previously mentioned, when the spectral bands are spread over the visible spectrum, the modulus or phase images may be combined, for example superposed, so as to obtain representations in color.
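A minimal way to form such a color representation, assuming three reconstructed modulus images mapped to the R, G and B channels (the scaling to [0, 1] is an illustrative choice):

```python
import numpy as np

def rgb_from_moduli(mod_red, mod_green, mod_blue):
    # Superpose modulus images reconstructed in three spectral bands
    # into one H x W x 3 color image, scaled to [0, 1].
    stack = np.stack([mod_red, mod_green, mod_blue], axis=-1)
    return stack / max(stack.max(), 1e-12)

img = rgb_from_moduli(np.ones((2, 2)), 0.5 * np.ones((2, 2)), np.zeros((2, 2)))
```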

(87) It will be recalled that this algorithm, although described with reference to the normalized complex amplitude A.sub.i, also applies to the non-normalized complex amplitude.

(88) Contribution of the Weighting Function

(89) One of the important points of this iterative algorithm is the construction of the weighting function F.sup.0(r) in the sample plane. Specifically, it is generally insufficient to determine the complex amplitude of a light wave on the basis of a single image acquired by a photodetector, because no information as to the phase of the wave is recorded by the photodetector, the latter being sensitive only to intensity, which corresponds to the square of the modulus of the complex amplitude of the wave.

(90) Thus, as indicated in the description of step 100, the complex amplitude, or normalized complex amplitude A.sub.i.sup.d(r), determined in this step contains no information as to the phase of the light wave that it represents. This lack of information results, during the back-propagation from the detector plane P to the sample plane P.sub.0 that is the subject matter of step 200, in the formation of artefacts referred to as twin images.

(91) The inventors have observed that these artefacts mainly affect poor zones 31 located in the vicinity of diffracting elements 32, i.e., zones located between two adjacent diffracting elements 32. Furthermore, they have observed that these artefacts are liable to fluctuate as a function of wavelength. Thus, artefacts in the poor zones 31 may be averaged out statistically by combining, for various wavelengths, the complex amplitudes back propagated to the sample plane. This statistical smoothing then increases the signal-to-noise ratio in the complex image back propagated to the sample plane. Generally, the method amounts to: obtaining an initial estimation A.sub.i,p=1.sup.d(r) of the complex amplitude of the wave 22.sub.i transmitted by the sample, in the detector plane, and in a plurality of spectral bands (step 100); back propagating each of these complex amplitudes to the sample plane, in order to obtain, in each spectral band, a complex amplitude A.sub.i,p.sup.0(r) in the sample plane (step 200); calculating a weighting function F.sub.p.sup.0(r) weighting each complex amplitude in the sample plane (step 300), so as to decrease the influence of twin-image artefacts; propagating said weighting function to the detector plane, for at least one spectral band (step 400); and updating the estimation of the complex amplitude A.sub.i,p.sup.d(r) of the wave 22.sub.i transmitted by the sample, in the detector plane, and in a plurality of spectral bands, using the weighting function F.sub.i,p.sup.d(r) propagated to the detector plane (step 500).
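The five steps above can be sketched end to end. The angular-spectrum propagator and the equal-weight choice for the weighting function are assumptions made for illustration (the description leaves both the propagation operator and the weighting open); all names are illustrative:

```python
import numpy as np

def angular_spectrum(field, wavelength, dist, pitch):
    # Propagate a sampled complex field by `dist` metres (a negative
    # dist back-propagates); evanescent components are suppressed.
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    H = np.zeros_like(arg, dtype=complex)
    prop = arg > 0
    H[prop] = np.exp(2j * np.pi * dist * np.sqrt(arg[prop]) / wavelength)
    return np.fft.ifft2(np.fft.fft2(field) * H)

def reconstruct(intensities, wavelengths, d, pitch, n_iter=10):
    # Step 100: initial amplitudes = sqrt of measured intensity, zero phase.
    m_d = [np.sqrt(I) for I in intensities]
    A_d = [m.astype(complex) for m in m_d]
    for _ in range(n_iter):
        # Step 200: back-propagate each band to the sample plane P_0.
        A_0 = [angular_spectrum(A, lam, -d, pitch)
               for A, lam in zip(A_d, wavelengths)]
        # Step 300: equal-weight weighting function (one possible choice).
        F_0 = sum(A_0) / len(A_0)
        # Steps 400-500: propagate F to the detector plane, keep the
        # measured modulus and adopt the phase of the propagated F.
        for i, lam in enumerate(wavelengths):
            F_d = angular_spectrum(F_0, lam, d, pitch)
            A_d[i] = m_d[i] * F_d / (np.abs(F_d) + 1e-12)
    return A_0, A_d

A_0, A_d = reconstruct([np.ones((16, 16))] * 3,
                       [450e-9, 530e-9, 620e-9],
                       d=1e-3, pitch=2e-6, n_iter=3)
```

Statistical smoothing of the twin image comes from the averaging in step 300: artefacts fluctuating with wavelength partly cancel in F.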

(92) The updating formula of step 500 shows that, in each iteration, the modulus m.sub.i.sup.d(r) of the normalized complex amplitude A.sub.i,p.sup.d(r) (M.sub.i.sup.d(r) for the non-normalized complex amplitude, respectively), in the detection plane, corresponds to that determined, in step 100, from each image I.sub.i formed by the photodetector 16 in the spectral band λ.sub.i. In other words, over the various iterations, the modulus, in the detection plane, of the complex amplitude, whether normalized or not, does not vary and corresponds to that derived from the intensity measured by the photodetector.

(93) In contrast, the algorithm tends to cause, in each update, a variation in the argument of the complex amplitude A.sub.i,p.sup.d(r), whether normalized or not, and in particular in the estimation of the phase {tilde over (φ)}.sub.i,p.sup.d, the latter being considered to be equal to the phase of the weighting function F.sub.i,p.sup.d(r) propagated to the detector plane, at each wavelength λ.sub.i.

(94) Thus, in this algorithm, each iteration comprises: updating the complex amplitude A.sub.i,p.sup.0(r) of each light wave in the sample plane P.sub.0 (step 200); and updating the argument of each complex amplitude A.sub.i,p.sup.d(r), and in particular its phase, in the detection plane (step 500).
Generation of the Weighting Function

(95) A first way of calculating the weighting function consists in assigning an equal weight to the various spectral bands λ.sub.i in question.

(96) For example, the weighting function takes the form

(97)
$$F_{p}^{0}(r) = \frac{1}{\sum_i k_i} \sum_i k_i \, A_{i,p}^{0}(r),$$
where k.sub.i is the weighting factor, or weight, attributed to the i.sup.th spectral band λ.sub.i, as described above with reference to step 300. Each weighting factor k.sub.i is positive, and the factors may all have the same value.
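A sketch of this normalized weighted sum over the back-propagated amplitudes (equal weights by default; the function name is illustrative):

```python
import numpy as np

def weighting_function(A_list, k=None):
    # F_p^0(r) = (1 / sum_i k_i) * sum_i k_i * A_{i,p}^0(r)
    k = np.ones(len(A_list)) if k is None else np.asarray(k, dtype=float)
    return sum(ki * A for ki, A in zip(k, A_list)) / k.sum()

A = [np.full((2, 2), 1 + 0j), np.full((2, 2), 3 + 0j)]
F_equal = weighting_function(A)            # plain average
F_biased = weighting_function(A, k=[3, 1]) # under-weight the second band
```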

(98) According to one variant, which applies in particular when the analyzed sample is dyed, in a spectral range centered on a wavelength λ.sub.0, the moduli of the complex amplitudes of first light waves 22.sub.i, whose spectral bands λ.sub.i are close to λ.sub.0, have a higher value than the moduli of the complex amplitudes of second light waves, whose spectral bands are further from the wavelength λ.sub.0. In such a case, it is preferable to under-weight the complex amplitudes of the first light waves and to over-weight those of the second light waves.

(99) For example, if the sample is dyed using a blue dye, which corresponds in this example to the first spectral band λ.sub.1, the weighting factor k.sub.1 is lower than the weighting factors k.sub.2 and k.sub.3 associated with the spectral bands λ.sub.2 (green) and λ.sub.3 (red), respectively.

(100) According to another variant, the modulus and the argument of each complex amplitude are weighted by independent weighting factors, such that

(101)
$$\bigl|F_{p}^{0}(r)\bigr| = \frac{1}{\sum_i k_i} \sum_i k_i \bigl|A_{i,p}^{0}(r)\bigr|, \qquad \operatorname{Arg}\bigl(F_{p}^{0}(r)\bigr) = \frac{1}{\sum_i k'_i} \sum_i k'_i \operatorname{Arg}\bigl(A_{i,p}^{0}(r)\bigr),$$
k.sub.i and k′.sub.i being weighting factors respectively associated with the modulus and the argument of the complex amplitude of the light wave 22.sub.i, in the sample plane, in the spectral band λ.sub.i.
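A sketch of this variant, weighting moduli with k.sub.i and arguments with k′.sub.i; note that directly averaging wrapped phase angles is only safe away from the ±π discontinuity, a simplification made here:

```python
import numpy as np

def weighting_function_split(A_list, k_mod, k_arg):
    # |F| is a weighted mean of the moduli |A_i|; Arg(F) is a weighted
    # mean of the arguments, with independent weights k_i and k'_i.
    k_mod = np.asarray(k_mod, dtype=float)
    k_arg = np.asarray(k_arg, dtype=float)
    mod = sum(k * np.abs(A) for k, A in zip(k_mod, A_list)) / k_mod.sum()
    arg = sum(k * np.angle(A) for k, A in zip(k_arg, A_list)) / k_arg.sum()
    return mod * np.exp(1j * arg)

A = [np.full((2, 2), 2.0 * np.exp(0.2j)),
     np.full((2, 2), 4.0 * np.exp(0.4j))]
F = weighting_function_split(A, k_mod=[1, 1], k_arg=[1, 3])
```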

(102) According to another variant, the combination of the complex amplitudes A.sub.i,p.sup.0(r) takes the form of a sum of logarithms, according to the expression:

(103)
$$\ln\bigl(F_{p}^{0}(r)\bigr) = \frac{1}{\sum_i k_i} \sum_i k_i \ln\bigl[A_{i,p}^{0}(r)\bigr]$$
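This logarithmic combination amounts to a weighted geometric mean of the amplitudes; a sketch using the complex logarithm (the small offset `tiny` guards against a logarithm of zero and is an illustrative addition):

```python
import numpy as np

def weighting_function_log(A_list, k=None, tiny=1e-12):
    # ln F = (1 / sum_i k_i) * sum_i k_i * ln A_i  (complex logarithm)
    k = np.ones(len(A_list)) if k is None else np.asarray(k, dtype=float)
    log_sum = sum(ki * np.log(A + tiny) for ki, A in zip(k, A_list)) / k.sum()
    return np.exp(log_sum)

A = [np.full((2, 2), 1 + 0j), np.full((2, 2), 4 + 0j)]
F_log = weighting_function_log(A)  # geometric mean of 1 and 4
```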

(104) According to another variant, rather than a single weighting function F.sub.p.sup.0(r), a plurality of weighting functions F.sub.i,p.sup.0(r) are determined in the sample plane, each function being associated with one spectral band λ.sub.i.

(105) Each weighting function F.sub.i,p.sup.0(r) associated with an i.sup.th spectral band is obtained by combining a plurality of complex amplitudes A.sub.j,p.sup.0(r), respectively associated with the various spectral bands.

(106) In a first example, considering three spectral bands:

(107)
$$\begin{bmatrix} F_{1,p}^{0} \\ F_{2,p}^{0} \\ F_{3,p}^{0} \end{bmatrix} = \begin{bmatrix} k_{1,1} & k_{1,2} & k_{1,3} \\ k_{2,1} & k_{2,2} & k_{2,3} \\ k_{3,1} & k_{3,2} & k_{3,3} \end{bmatrix} \begin{bmatrix} A_{1,p}^{0}(r) \\ A_{2,p}^{0}(r) \\ A_{3,p}^{0}(r) \end{bmatrix}$$

(108) Thus, according to this embodiment, the weighting function takes the form of a vector {right arrow over (F.sub.p.sup.0)}(r), of dimension N, N being the number of spectral bands in question, each term F.sub.i,p.sup.0(r) of which is a weighting function associated with one spectral band λ.sub.i. This weighting function may be obtained via the following matrix product:
{right arrow over (F.sub.p.sup.0)}(r)=K{right arrow over (A.sub.p.sup.0)}

(109) Where K is a weighting matrix, each term k.sub.i,j of the weighting matrix representing the weight assigned to the complex amplitude A.sub.j,p.sup.0(r), associated with the spectral band λ.sub.j, in the calculation of the weighting function associated with the spectral band λ.sub.i.

(110) The matrix K is a square matrix of N by N size, N being the number of spectral bands in question.

(111) The weighting function is preferably normalized, such that each term F.sub.i,p.sup.0 may be expressed in the form:

(112)
$$F_{i,p}^{0}(r) = \frac{1}{\sum_j k_{i,j}} \sum_j k_{i,j}\, A_{j,p}^{0}(r),$$
the term $1/\sum_j k_{i,j}$ being a normalization term.
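The per-band weighting can be sketched as a row-normalized contraction over the band axis; `tensordot` and the stacked array layout are implementation choices, not from the text:

```python
import numpy as np

def per_band_weighting(A_stack, K):
    # F_i(r) = (1 / sum_j k_ij) * sum_j k_ij * A_j(r): one weighting
    # function per spectral band, each row of K normalized independently.
    K = np.asarray(K, dtype=float)
    K_norm = K / K.sum(axis=1, keepdims=True)
    # contract the band axis: F[i, y, x] = sum_j K_norm[i, j] * A[j, y, x]
    return np.tensordot(K_norm, A_stack, axes=(1, 0))

A = np.stack([np.full((2, 2), 1 + 0j),
              np.full((2, 2), 2 + 0j),
              np.full((2, 2), 3 + 0j)])
K = np.eye(3) + 1.0   # each band's own amplitude counts double
F = per_band_weighting(A, K)
```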

(114) According to a second example of this embodiment, again considering three spectral bands,

(115)
$$\begin{bmatrix} \bigl|F_{1,p}^{0}(r)\bigr| \\ \bigl|F_{2,p}^{0}(r)\bigr| \\ \bigl|F_{3,p}^{0}(r)\bigr| \\ \arg\bigl(F_{1,p}^{0}(r)\bigr) \\ \arg\bigl(F_{2,p}^{0}(r)\bigr) \\ \arg\bigl(F_{3,p}^{0}(r)\bigr) \end{bmatrix} = \begin{bmatrix} k_{1,1} & k_{1,2} & \cdots & k_{1,6} \\ k_{2,1} & k_{2,2} & \cdots & k_{2,6} \\ \vdots & & \ddots & \vdots \\ k_{6,1} & k_{6,2} & \cdots & k_{6,6} \end{bmatrix} \begin{bmatrix} \bigl|A_{1,p}^{0}(r)\bigr| \\ \bigl|A_{2,p}^{0}(r)\bigr| \\ \bigl|A_{3,p}^{0}(r)\bigr| \\ \arg\bigl(A_{1,p}^{0}(r)\bigr) \\ \arg\bigl(A_{2,p}^{0}(r)\bigr) \\ \arg\bigl(A_{3,p}^{0}(r)\bigr) \end{bmatrix}.$$

(116) Thus, according to this embodiment, the weighting function takes the form of a vector {right arrow over (F.sub.p.sup.0)}(r), of dimension 2N, N being the number of spectral bands in question, each term of which is either the modulus or the argument of a weighting function F.sub.i,p.sup.0(r) associated with one spectral band λ.sub.i. This weighting function may be obtained via the following matrix product:
{right arrow over (F.sub.p.sup.0)}(r)=K{right arrow over (A.sub.p.sup.0)}

(117) Where K is a weighting matrix of size 2N × 2N, each term k.sub.i,j of the weighting matrix representing the weight assigned either to the argument or to the modulus of the complex amplitude A.sub.j,p.sup.0(r) associated with the spectral band λ.sub.j.

(118) According to this embodiment, each coordinate of the vector {right arrow over (A.sub.p.sup.0)} represents either the modulus or the argument of a complex amplitude A.sub.j,p.sup.0(r), in a spectral band λ.sub.j.

(119) Just as in the preceding example, the weighting function is preferably normalized, such that each term F.sub.i,p.sup.0 may be expressed in the form:

(120)
$$\bigl|F_{i,p}^{0}(r)\bigr| = \frac{1}{\sum_{j=1}^{3} k_{i,j}} \sum_{j=1}^{3} k_{i,j} \bigl|A_{j,p}^{0}(r)\bigr|, \qquad \operatorname{Arg}\bigl(F_{i,p}^{0}(r)\bigr) = \frac{1}{\sum_{j=4}^{6} k_{i,j}} \sum_{j=4}^{6} k_{i,j} \arg\bigl(A_{j,p}^{0}(r)\bigr)$$

(121) Whatever the circumstances, the coefficients of a weighting matrix may be determined beforehand, either arbitrarily or on the basis of experimental trials.

(122) For example, it is possible to establish a linear regression coefficient ρ.sub.i,j between two components i and j of the vector {right arrow over (A.sub.p.sup.0)}(r), by considering a plurality of radial positions r in the sample plane, so as to obtain a statistically significant sample. The coefficient k.sub.i,j of the weighting matrix may then be determined depending on this linear regression coefficient ρ.sub.i,j, optionally assigned a term taking into account the dispersion around the linear regression model. In such a case, the diagonal of the weighting matrix may consist of coefficients k.sub.i,i equal to 1.
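Such coefficients might be estimated as follows; here the linear-regression coefficient is taken as the Pearson correlation of the band moduli over all pixel positions, one plausible reading, and the function name and the choice of moduli rather than full complex values are assumptions:

```python
import numpy as np

def regression_weighting_matrix(A_stack):
    # Estimate rho_ij as the correlation between the moduli of bands i
    # and j over all radial positions r, then use |rho_ij| as k_ij with
    # a unit diagonal, as suggested for this variant.
    n_bands = A_stack.shape[0]
    samples = np.abs(A_stack).reshape(n_bands, -1)
    K = np.abs(np.corrcoef(samples))
    np.fill_diagonal(K, 1.0)
    return K

rng = np.random.default_rng(1)
base = rng.standard_normal((4, 4))
A = np.stack([base + 0j, 2 * base + 0j,
              rng.standard_normal((4, 4)) + 0j])
K = regression_weighting_matrix(A)
```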

(123) This allows a weighting function F.sub.i,p.sup.0, associated with the spectral band λ.sub.i and taking into account the correlation between the various terms of the vector {right arrow over (A.sub.p.sup.0)}(r), to be established.

(124) Variants Regarding the Light Source or the Photodetector.

(125) In the examples given with reference to FIGS. 1 and 2, the light source 11, able to emit a light wave 12 in various spectral bands, includes three elementary light sources 11.sub.1, 11.sub.2, 11.sub.3, taking the form of light-emitting diodes emitting in a first spectral band λ.sub.1, a second spectral band λ.sub.2, and a third spectral band λ.sub.3, respectively, the spectral bands being different from one another and, preferably, non-overlapping.

(126) The light source 11 may also include a white light source 11.sub.w placed upstream of a filtering device 19, for example a filter wheel, able to place a filter of pass band λ.sub.i between the white light source and the sample, as shown in FIG. 5, such that the image I.sub.i formed by the photodetector 16 is representative of said pass band λ.sub.i. A plurality of filters, having pass bands that are different from one another, are then successively placed between the light source 11.sub.w and the sample 10.

(127) According to one variant, the filtering device 19 may also be a tri-band filter, defining a plurality of spectral bands. An example of a filter suitable for this application is the Edmund Optics 458, 530 & 628 nm tri-band filter, which defines spectral bands centered on the wavelengths of 458 nm, 530 nm and 628 nm, respectively. This allows the sample to be illuminated simultaneously at three wavelengths.

(128) The use of a diffuser 17, such as described above, between the light source and the diaphragm 18 is preferable, whatever the embodiment.

(129) The photodetector 16 may, as described above, be an RGB matrix-array photodetector, allowing the various images I.sub.1 . . . I.sub.i . . . I.sub.N to be acquired in the various spectral bands λ.sub.1 . . . λ.sub.i . . . λ.sub.N in succession or simultaneously. In this case, the light source may be a white light source 11.sub.w, in which case the various images may be acquired simultaneously.

(130) The photodetector 16 may also be monochromatic, in which case the light source 11 is able to generate, in succession, a light wave in the various spectral bands λ.sub.1 . . . λ.sub.i . . . λ.sub.N. In such a configuration, the light source includes either a plurality of elementary light sources 11.sub.1, 11.sub.2, 11.sub.3, or a filtering device 19, as described above. In such a case, the sample is exposed in succession to incident light waves 12.sub.1 . . . 12.sub.i . . . 12.sub.N, N being the number of spectral bands in question. An image I.sub.i (1 ≤ i ≤ N), representative of the light wave 22.sub.i transmitted by the sample, is then acquired on each exposure.

(131) Trials Carried Out.

(132) Trials were carried out in the configuration shown in FIG. 1 and described above. The sample was an anatomopathology slide including a cross section of colon stained with hematoxylin-eosin-saffron. The light source was placed at a distance of 5 cm from the sample, this distance separating the diaphragm 18 from the sample 10.

(133) FIGS. 6A, 6B and 6C show an image of the modulus |A.sub.1,p.sup.0(r)| of the complex amplitude A.sub.1,p.sup.0(r) of the wave 22.sub.1 transmitted by the sample, in the plane P.sub.0 of the sample, in the first spectral band λ.sub.1 extending between 450 and 465 nm, these images being obtained after a number of iterations p equal to 1, 3 and 10, respectively.

(134) FIGS. 7A, 7B and 7C show an image of the modulus |A.sub.2,p.sup.0(r)| of the complex amplitude A.sub.2,p.sup.0(r) of the wave 22.sub.2 transmitted by the sample, in the plane P.sub.0 of the sample, in the second spectral band λ.sub.2 extending between 520 and 535 nm, these images being obtained after a number of iterations p equal to 1, 3 and 10, respectively.

(135) FIGS. 8A, 8B and 8C show an image of the modulus |A.sub.3,p.sup.0(r)| of the complex amplitude A.sub.3,p.sup.0(r) of the wave 22.sub.3 transmitted by the sample, in the plane P.sub.0 of the sample, in the third spectral band λ.sub.3 extending between 620 and 630 nm, these images being obtained after a number of iterations p equal to 1, 3 and 10, respectively. It will be noted that the average grayscale level of these images is higher than that of the images of FIGS. 6A, 6B, 6C, 7A, 7B and 7C. This is due to the red-violet color of the sample.

(136) FIGS. 9A, 9B and 9C show the combination of the images of FIGS. 6A-7A-8A, 6B-7B-8B, and 6C-7C-8C, respectively. These figures allow a color representation of the sample to be obtained by simultaneously taking into account the three spectral bands λ.sub.1, λ.sub.2 and λ.sub.3.

(137) In each series of images, an increase in contrast as a function of the number of iterations may be seen. It may also be noted that images of satisfactory spatial resolution are obtained with ten iterations or fewer, which limits the calculation time to a few seconds.

(138) The method is therefore suitable for the high-rate, large-field observation of samples. It allows images to be obtained in one or more spectral bands, making it compatible with the staining methods commonly used in anatomical pathology and cytopathology.