Method and system for observing a sample under ambient lighting

11415505 · 2022-08-16

Abstract

A method for observing a sample placed between a light source and an image sensor comprising at least 10000 pixels. The light source emits an illuminating beam, which propagates to the sample, the beam being emitted in an illumination spectral band (Δλ.sub.12) lying above 800 nm. The method comprises the following steps: (a) illuminating the sample with the light source; (b) acquiring an image of the sample (I.sub.0) with the image sensor, no image-forming optics being placed between the sample and the image sensor; and (c) the image sensor being configured such that it has a detection spectral band (Δλ.sub.20) that blocks wavelengths in the visible spectral band, such that the image may be acquired in ambient light.

Claims

1. A method for observing a sample, comprising: placing the sample between a light source and an image sensor so that the light source is positioned on a first side of the sample and the image sensor is positioned on a second side of the sample opposite the first side, the image sensor having at least 10000 pixels; emitting an illumination beam from the light source, which propagates to the sample, the illumination beam being emitted in an illumination spectral band lying above 800 nm; illuminating the sample with the light source; and acquiring an image of the sample with the image sensor, no image-forming optics being placed between the sample and the image sensor, wherein: the image sensor is configured such that it has a detection spectral band that blocks wavelengths in a visible spectral band, lying at least between 400 nm and 750 nm, such that the image may be acquired in ambient light; the illumination spectral band has a bandwidth narrower than or equal to 100 nm; and the image sensor is placed less than 20 mm from the sample.

2. The method according to claim 1, wherein the detection spectral band is comprised between 800 nm and 1200 nm or between 800 nm and 1000 nm.

3. The method according to claim 1, wherein the illumination spectral band is comprised between 800 nm and 1200 nm or between 800 nm and 1000 nm.

4. The method according to claim 1, wherein: the illumination spectral band has a bandwidth narrower than or equal to 50 nm, and preferably narrower than 20 nm; and/or the detection spectral band has a bandwidth narrower than or equal to 50 nm, and preferably narrower than 20 nm.

5. The method according to claim 1, wherein the detection spectral band is defined by a high-pass or band-pass detection filter placed on the image sensor, the detection filter being configured to block wavelengths in the visible spectral band.

6. The method according to claim 1, wherein the illumination spectral band is defined by an illumination filter, coupled to the light source.

7. The method according to claim 1, wherein, in acquiring the image of the sample, the image sensor is exposed to an exposure light wave, the method further comprising: applying a holographic reconstruction algorithm to the acquired image, the holographic reconstruction algorithm using a propagation operator, the propagation operator describing the propagation of light between the image sensor and a reconstruction plane so as to determine a complex expression of the exposure light wave in different points of the reconstruction plane, the complex expression being a complex number, the argument and modulus of the complex number being respectively representative of the phase and intensity of the exposure light wave; and obtaining an image representative of the complex expression of the exposure light wave in the different points of the reconstruction plane.

8. The method according to claim 7, wherein the reconstruction plane is a plane in which the sample lies.

9. The method according to claim 1, wherein the sample is placed at a distance of 2 cm to 30 cm from the light source.

10. A device for observing a sample, comprising: a light source, configured to emit an illuminating beam that propagates toward the sample, in an illumination spectral band; a pixelated image sensor, comprising at least 10000 pixels, and configured to acquire an image in a detection spectral band; a holder, arranged to hold the sample between the light source and the image sensor so that the light source is positioned on a first side of the sample and the image sensor is positioned on a second side of the sample opposite the first side; the device being configured such that no image-forming optics are placed between the image sensor and the sample when the sample is held on the holder and such that the image sensor is placed less than 20 mm from the sample when the sample is held on the holder; wherein: the detection spectral band lies above 800 nm; the detection spectral band blocks wavelengths in a visible spectral band, lying at least between 400 nm and 750 nm; the illumination spectral band has a bandwidth narrower than or equal to 100 nm.

11. The device according to claim 10, wherein the detection spectral band is comprised between 800 nm and 1200 nm or between 800 nm and 1000 nm.

12. The device according to claim 10, wherein the image sensor is coupled to a detection filter, defining the detection spectral band.

13. The device according to claim 10, wherein the illumination spectral band is comprised between 800 nm and 1200 nm or between 800 nm and 1000 nm.

14. The device according to claim 10, wherein: the illumination spectral band has a bandwidth narrower than or equal to 50 nm, and preferably narrower than 20 nm; and/or the detection spectral band has a bandwidth narrower than or equal to 50 nm, and preferably narrower than 20 nm.

15. The device according to claim 10, wherein: the light source is a source of laser light; or the light source is a light-emitting diode coupled to an illumination filter, the illumination filter defining the illumination spectral band.

16. The device according to claim 10, comprising a processing unit configured to: apply a holographic reconstruction algorithm to the acquired image, the holographic reconstruction algorithm using a propagation operator, the propagation operator describing the propagation of light between the image sensor and a reconstruction plane so as to determine a complex expression of an exposure light wave to which the image sensor is exposed, in different points of the reconstruction plane, the complex expression being a complex number, the argument and modulus of the complex number being respectively representative of the phase and intensity of the exposure light wave; and obtain an image representative of the complex expression of the exposure light wave in the different points of the reconstruction plane.

17. The device according to claim 16, wherein the reconstruction plane is a plane in which the sample lies.

18. A device for observing a sample, comprising: a light source, configured to emit an illuminating beam that propagates toward the sample, in an illumination spectral band; a pixelated image sensor, comprising at least 10000 pixels, and configured to acquire an image in a detection spectral band; a holder, arranged to hold the sample between the light source and the image sensor so that the light source is positioned on a first side of the sample and the image sensor is positioned on a second side of the sample opposite the first side; an optical system, placed between the sample and the pixelated image sensor, the optical system having an object focal plane and an image focal plane; the device being configured such that: the object focal plane is offset with respect to a plane in which the sample lies, by an object offset; and/or the image focal plane is offset with respect to a detection plane, by an image offset, wherein the detection spectral band lies above 800 nm, wherein the detection spectral band blocks wavelengths in a visible spectral band, lying at least between 400 nm and 750 nm, and wherein the illumination spectral band has a bandwidth narrower than or equal to 100 nm.

19. The device according to claim 18, wherein the image offset or the object offset is comprised between 10 μm and 2 mm.

Description

FIGURES

(1) FIGS. 1A and 1B are examples of devices according to the invention.

(2) FIG. 2 shows the transmission spectral bands of a Bayer filter.

(3) FIGS. 3A and 3B are examples of images acquired using a reference device according to the prior art and according to the invention, respectively. FIGS. 3C and 3D are details of regions of interest delineated in FIGS. 3A and 3B, respectively. FIGS. 3E and 3F are profiles obtained from FIGS. 3C and 3D, respectively, along the lines drawn thereon.

(4) FIG. 4 shows another embodiment of a device.

DESCRIPTION OF PARTICULAR EMBODIMENTS

(5) FIG. 1A shows an example of a device 1 allowing the invention to be implemented. A light source 11 is configured to emit a light beam 12, called the illuminating beam, which propagates in the direction of a sample 10. The illuminating beam reaches the sample by propagating along a propagation axis Z.

(6) The illuminating beam is emitted in an illumination spectral band Δλ.sub.12. The illumination spectral band Δλ.sub.12 preferably lies outside of the visible spectral band. By visible spectral band, what is meant is a spectral band comprised between 400 nm and 750 nm, or between 400 and 780 nm. Preferably, the illumination spectral band Δλ.sub.12 lies between 750 nm or 780 nm and 10 μm, and preferably between 800 nm and 10 μm, and preferably between 750 nm or even 800 nm and 5 μm, and more preferably between 750 nm or even 800 nm and 2 μm, or between 750 nm or even 800 nm and 1200 nm, or between 750 nm or even 800 nm and 1000 nm.

(7) By "lies between m and n", m and n representing wavelength values, what is meant is that more than 80% of the intensity of the emitted light, or even more than 90% or 95% of the emitted intensity, is comprised between m and n. The term "lies between m and n" does not necessarily mean "extends from m to n".
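The 80% criterion above can be made concrete for a discretized spectrum; the following is a minimal sketch (the function name, the sampled-spectrum representation and the test values are illustrative assumptions, not part of the patent):

```python
import numpy as np

def lies_between(wavelengths, intensities, m, n, fraction=0.8):
    """Return True if more than `fraction` of the total emitted
    intensity falls between wavelengths m and n, per the definition
    of a spectral band "lying between m and n" given above."""
    wavelengths = np.asarray(wavelengths, dtype=float)
    intensities = np.asarray(intensities, dtype=float)
    inside = (wavelengths >= m) & (wavelengths <= n)
    return bool(intensities[inside].sum() > fraction * intensities.sum())

# A narrow LED line centred on 980 nm "lies between" 800 nm and 1000 nm,
# even though a small tail extends beyond 1000 nm.
wl = np.arange(900, 1061, 10)                       # wavelengths, nm
spec = np.exp(-0.5 * ((wl - 980) / 10.0) ** 2)      # Gaussian emission line
print(lies_between(wl, spec, 800, 1000))            # True
```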

(8) The sample 10 is a sample that it is desired to characterize. It notably comprises a medium 10.sub.m in which particles 10.sub.p bathe. The medium 10.sub.m may be a liquid medium. It may comprise a bodily liquid, for example obtained from blood or urine or lymph or cerebrospinal fluid. It may also be a question of a culture medium, comprising nutriments allowing microorganisms or cells to develop. By particle, what is notably meant, non-exhaustively, is: a cell, whether it be a question of a cultured cell or a bodily cell, a blood cell for example; a microorganism, for example a bacterium or a yeast or a microalga; a solid particle, for example a microsphere, the microsphere possibly being functionalized, so as to promote the grafting of an analyte; a particle forming an emulsion in the medium 10.sub.m, in particular a particle that is insoluble in the medium 10.sub.m, an example being a lipid droplet in an aqueous medium.

(9) A particle 10.sub.p may be solid or liquid.

(10) The sample 10 may be a thin slide of biological tissue, such as a pathology slide. The thickness of such a slide is of the order of a few tens of microns.

(11) The sample 10 is, in this example, contained in a fluidic chamber 15. The fluidic chamber 15 is for example a Gene Frame® fluidic chamber of thickness e=250 μm. The thickness e of the sample 10, along the propagation axis, typically varies between 10 μm and 1 cm, and is preferably comprised between 20 μm and 500 μm. The sample lies in a plane P.sub.10, called the sample plane. The sample plane P.sub.10 is preferably perpendicular to the propagation axis Z, or substantially perpendicular to the latter. By substantially perpendicular, what is meant is perpendicular to within an angular tolerance, for example to within ±10% or ±20%. The sample plane is defined by the axes X and Y shown in FIGS. 1A and 1B. The sample is held on a holder 10.sub.s at a distance d from an image sensor 20.

(12) The distance D between the light source 11 and the fluidic chamber 15 is preferably larger than 1 cm. It is preferably comprised between 2 and 30 cm. Advantageously, the light source 11, seen by the sample, may be considered to be point-like. This means that its diameter (or its diagonal) is preferably smaller than one tenth and better still one hundredth of the distance between the fluidic chamber 15 and the light source. In FIG. 1A, the light source is a light-emitting diode. It is generally associated with a diaphragm 18, or spatial filter. The aperture of the diaphragm is typically comprised between 5 μm and 1 mm, and preferably between 50 μm and 500 μm.

(13) The diaphragm may be replaced by an optical fibre, a first end of which is placed facing the light source 11 and a second end of which is placed facing the sample 10. The device shown in FIG. 1A also comprises a diffuser 17, placed between the light source 11 and the diaphragm 18. The use of such a diffuser allows constraints on the centeredness of the light source 11 with respect to the aperture of the diaphragm 18 to be relaxed, as described in EP3221688.

(14) Alternatively, the light source may be a laser source, such as a laser diode, as shown in FIG. 1B. In this case, it is not useful to associate therewith a spatial filter or a diffuser.

(15) Preferably, the illumination spectral band Δλ.sub.12 has a bandwidth narrower than 100 nm. By spectral bandwidth what is meant is a full width at half maximum of said spectral band. Preferably, the illumination spectral bandwidth Δλ.sub.12 is narrower than 50 nm, or even narrower than or equal to 20 nm.

(16) The sample 10 is placed between the light source 11 and the image sensor 20. The image sensor 20 defines a detection plane P.sub.0, preferably lying parallel, or substantially parallel to the plane P.sub.10 in which the sample lies. The expression substantially parallel means that the two elements may not be rigorously parallel, an angular tolerance of a few degrees, of the order of ±20° or ±10° being acceptable.

(17) The image sensor 20 is able to form an image I.sub.0 of the sample 10 in the detection plane P.sub.0. In the example shown, it is a question of a CCD or CMOS image sensor 20 comprising a matrix array of pixels. The image sensor comprises a number of pixels preferably higher than 10000, and more preferably higher than 100000. The detection plane P.sub.0 preferably lies perpendicular to the propagation axis Z. The distance d between the sample 10 and the matrix array of pixels of the image sensor is preferably comprised between 50 μm and 2 cm, and more preferably comprised between 100 μm and 2 mm.

(18) The absence of image-forming or magnifying optics between the image sensor 20 and the sample 10 in this embodiment will be noted. This does not preclude the possible presence of focusing micro-lenses level with each pixel of the image sensor 20, these micro-lenses not performing the function of magnifying the image acquired by the image sensor, their function being to optimize detection effectiveness.

(19) The image sensor 20 is configured to form an image in a detection spectral band Δλ.sub.20. Advantageously, the detection spectral band does not lie in the visible spectral band, or does so only negligibly. It preferably lies between 750 nm or 780 nm and 10 μm, and preferably between 800 nm and 10 μm, and more preferably between 750 nm or even 800 nm and 5 μm, and even more preferably between 750 nm or even 800 nm and 2 μm, or between 750 nm or even 800 nm and 1200 nm, or between 750 nm or even 800 nm and 1000 nm. Because it lies outside of the visible spectral band, the detection spectral band Δλ.sub.20 allows images to be acquired when the device 1, and notably the image sensor 20, is exposed to ambient light, in the visible spectral band. The detection spectral band is configured such that the image acquired by the image sensor 20 is not affected, or affected negligibly, by the ambient light. Thus, the device 1 may be used without it being necessary to place it in a chamber that is impermeable to light. It may be used in ambient light. The ambient-light level in which the device is able to operate depends on the fraction of the visible spectral band detected by the image sensor.

(20) Preferably, the detection spectral band Δλ.sub.20 has a bandwidth narrower than 100 nm. By spectral bandwidth, what is meant is a full width at half maximum of said spectral band. Preferably, the width of the detection spectral band Δλ.sub.20 is narrower than 50 nm, or even narrower than or equal to 20 nm.

(21) It will be understood that the detection spectral band Δλ.sub.20 and the illumination spectral band Δλ.sub.12 overlap at least partially.

(22) The detection spectral band Δλ.sub.20 may be defined by the intrinsic properties of the pixels. The image sensor then comprises pixels able to detect photons solely in the detection spectral band. More simply, the detection spectral band Δλ.sub.20 may be defined by a detection filter 29, of high-pass or band-pass type, placed between the image sensor 20 and the sample 10. Analogously, the illumination spectral band Δλ.sub.12 may be defined by the intrinsic properties of the light source 11. This is notably the case when the light source is a laser, as shown in FIG. 1B. The illumination spectral band may be defined by an illumination filter 19, placed between the light source and the sample. Use of an illumination filter 19 is conventional when the light source 11 is a white light source or a light-emitting diode.

(23) The image sensor 20 may be an RGB CMOS sensor comprising pixels the detection spectral band of which is defined by a Bayer filter. Thus, the pixels of the image sensor are sensitive in spectral bands corresponding to the colours red, green and blue of the visible spectral band, respectively. FIG. 2 shows the detection passbands defined by the Bayer filter. The x-axis corresponds to wavelength, expressed in nm, whereas the y-axis corresponds to the transmission, i.e. to the percentage of light flux transmitted. The dotted, dashed and solid curves correspond to the passbands in the blue, green and red, respectively. This type of curve is conventional in the field of standard RGB image sensors. It may be seen that beyond 850 nm, the transmission is equivalent in each spectral band. Beyond 1000 nm, the transmission decreases. Thus, when the image sensor is a standard RGB sensor, it is preferable for the detection spectral band to be comprised in the interval [750 nm-1100 nm], and preferably in the interval [850 nm-1000 nm]. The same goes for the illumination spectral band. The pixels then exhibit a uniform transmission that is sufficient to form an exploitable image. The image sensor 20 then behaves as a monochrome sensor. With this type of image sensor, i.e. one comprising a Bayer filter, the detection spectral band is defined by a high-pass or band-pass detection filter 29 defining the detection passband.
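Since, beyond about 850 nm, the three Bayer passbands transmit equivalently, an RGB frame acquired in the near infrared may simply be collapsed into a monochrome image. A minimal sketch of this channel averaging (the function name and array shapes are illustrative assumptions):

```python
import numpy as np

def rgb_to_monochrome_nir(frame):
    """Collapse an H x W x 3 RGB frame acquired in the near infrared,
    where the three Bayer passbands transmit equivalently (cf. FIG. 2),
    into a single H x W monochrome image by averaging the channels."""
    frame = np.asarray(frame, dtype=float)
    return frame.mean(axis=2)

# In the NIR the three channels carry essentially the same signal, so
# averaging them preserves the intensity while halving the noise spread.
nir_frame = np.stack([np.full((4, 4), 100.0)] * 3, axis=2)
mono = rgb_to_monochrome_nir(nir_frame)
print(mono.shape)   # (4, 4)
```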

(24) As mentioned in the patent applications cited with respect to the prior art, under the effect of the incident light wave 12, the particles 10.sub.p present in the sample may generate a diffracted wave 13, liable to produce, in the detection plane P.sub.0, interference, in particular with a portion 12′ of the incident light wave 12 transmitted by the sample. Moreover, the sample 10 may absorb some of the incident light wave 12. Thus, the light wave 14 transmitted by the sample, to which the image sensor 20 is exposed, is called the "exposure light wave". The exposure light wave 14 may comprise: a component 13 resulting from the diffraction of the incident light wave 12 by each particle of the sample; a component 12′ resulting from the transmission of the incident light wave 12 by the sample, some of the latter possibly being absorbed in the sample.

(25) These components form interference in the detection plane. Thus, the image I.sub.0 acquired by the image sensor comprises interference patterns (or diffraction patterns), each interference pattern possibly being associated with one particle 10.sub.p of the sample.

(26) A processing unit 21, for example a microprocessor, is able to process each image I.sub.0 acquired by the image sensor 20. In particular, the processing unit 21 is a microprocessor connected to a programmable memory 22 in which a sequence of instructions for performing the image-processing and computing operations described in this description is stored. The processing unit may be coupled to a screen 24 allowing images acquired by the image sensor 20 or computed by the processor 21 to be displayed.

(27) An image I.sub.0 acquired by the image sensor 20, also referred to as a hologram, may be subjected to a reconstruction, called a holographic reconstruction. As described with reference to the prior art, it is possible to apply, to the image I.sub.0 acquired by the image sensor 20, a holographic propagation operator h, so as to compute a complex amplitude A(x,y,z) representative of the exposure light wave 14, and to do so for every point of coordinates (x,y,z) of the space, and more particularly between the image sensor 20 and the sample 10. The coordinates (x,y) designate coordinates, called radial coordinates, parallel to the detection plane P.sub.0. The coordinate z is a coordinate along the propagation axis Z, expressing a distance between the sample 10 and the image sensor 20.

(28) The complex amplitude may be obtained using one of the following expressions:

(29) A(x,y,z)=I.sub.0(x,y,z)*h, * designating the convolution operator, or, preferably:

(30) A(x,y,z)=√(I.sub.0(x,y,z))*h, or even:

(31) A(x,y,z)=(I.sub.0(x,y,z)/Ī.sub.0)*h,
Ī.sub.0 being the mean of the acquired image.

(32) The function of the propagation operator h is to describe the propagation of light between the image sensor 20 and a point of coordinates (x,y,z), located at a distance |z| from the image sensor. The propagation operator is for example the Fresnel-Helmholtz function, such that:

(33) h(x,y,z) = (1/(jλz)) · e^(j2πz/λ) · exp(jπ(x² + y²)/(λz)).
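The Fresnel-Helmholtz kernel of expression (33) may be evaluated numerically on a pixel grid; a minimal sketch follows (the grid size, pixel pitch and function name are illustrative choices, not values prescribed by the patent):

```python
import numpy as np

def fresnel_kernel(nx, ny, pitch, wavelength, z):
    """Evaluate h(x, y, z) = (1/(j*lam*z)) * exp(j*2*pi*z/lam)
    * exp(j*pi*(x^2 + y^2)/(lam*z)) on an nx x ny grid of square
    pixels of side `pitch`, centred on the propagation axis Z."""
    x = (np.arange(nx) - nx / 2) * pitch
    y = (np.arange(ny) - ny / 2) * pitch
    xx, yy = np.meshgrid(x, y, indexing="ij")
    return (1.0 / (1j * wavelength * z)
            * np.exp(1j * 2 * np.pi * z / wavelength)
            * np.exp(1j * np.pi * (xx**2 + yy**2) / (wavelength * z)))

# 980 nm illumination, 1.67 um pixels, 1.5 mm sample-sensor distance
h = fresnel_kernel(64, 64, 1.67e-6, 980e-9, 1.5e-3)
print(h.shape)                      # (64, 64)
# |h| is constant, equal to 1/(lam*z): h is a pure phase kernel
print(np.allclose(np.abs(h), 1.0 / (980e-9 * 1.5e-3)))   # True
```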

(34) It is then possible to determine a property of the exposure light wave 14, for example the modulus M(x,y,z) and/or the phase φ(x,y,z), at the distance |z|, with:
M(x,y,z)=abs[A(x,y,z)];
φ(x,y,z)=arg[A(x,y,z)];

(35) The operators abs and arg respectively designate the modulus and argument.

(36) The distance |z| is a reconstruction distance.

(37) The complex expression A(x,y,z) of the light wave 14 at any point of coordinates (x,y,z) of the space is such that: A(x,y,z)=M(x,y,z)e^(jφ(x,y,z)).

(38) The complex expression A is a complex quantity the argument and modulus of which are respectively representative of the phase and intensity of the exposure light wave 14.

(39) By implementing holographic reconstruction algorithms, it is possible to determine the complex expression A in a reconstruction plane. The reconstruction plane is preferably parallel to the detection plane P.sub.0 and/or to the sample plane P.sub.10. A complex image A.sub.Z of the exposure light wave 14 in the reconstruction plane is then obtained. Advantageously, the reconstruction plane is the plane P.sub.10 in which the sample 10 lies. In order to obtain a holographic reconstruction of good quality, the image acquired by the image sensor may be subjected to an iterative reconstruction algorithm. Iterative reconstruction algorithms are for example described in WO2016189257 or in WO2017162985.

(40) It is possible to form images M.sub.Z and ϕ.sub.z respectively representing the modulus or the phase of a complex image A.sub.Z in a plane P.sub.Z located at a distance |z| from the detection plane P.sub.0, with M.sub.Z=mod(A.sub.Z) and ϕ.sub.z=arg(A.sub.Z). When the reconstruction plane P.sub.Z corresponds to a plane in which the sample lies, the images M.sub.Z and ϕ.sub.z allow the sample 10 to be observed with a correct spatial resolution.
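The single-pass reconstruction described above, using the third (mean-normalized) expression (31) and a convolution computed in Fourier space, can be sketched as follows. This is a simplified, non-iterative sketch, not the iterative algorithms of WO2016189257 or WO2017162985; the function name and the FFT-based circular convolution are implementation assumptions:

```python
import numpy as np

def reconstruct(hologram, pitch, wavelength, z):
    """Single-pass holographic reconstruction: normalize the acquired
    image I0 by its mean, convolve it with the Fresnel-Helmholtz kernel
    h(x, y, z), and return the modulus M_z and phase phi_z of the
    complex amplitude A in the reconstruction plane at distance z."""
    i0 = np.asarray(hologram, dtype=float)
    nx, ny = i0.shape
    x = (np.arange(nx) - nx / 2) * pitch
    y = (np.arange(ny) - ny / 2) * pitch
    xx, yy = np.meshgrid(x, y, indexing="ij")
    h = (1.0 / (1j * wavelength * z)
         * np.exp(1j * 2 * np.pi * z / wavelength)
         * np.exp(1j * np.pi * (xx**2 + yy**2) / (wavelength * z)))
    # A = (I0 / mean(I0)) * h; ifftshift recentres the kernel so the
    # circular convolution is evaluated about the optical axis
    a = np.fft.ifft2(np.fft.fft2(i0 / i0.mean())
                     * np.fft.fft2(np.fft.ifftshift(h)))
    return np.abs(a), np.angle(a)       # M_z and phi_z

# Toy hologram: a flat background with one diffraction-like perturbation
holo = np.ones((64, 64))
holo[32, 32] = 2.0
m_z, phi_z = reconstruct(holo, 1.67e-6, 980e-9, 1.5e-3)
print(m_z.shape, phi_z.shape)
```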

(41) Trials

(42) A trial was carried out using a reference device and a device according to the invention. Each device comprises: an infrared LED light source, emitting in a band centred on a wavelength of 980 nm, with a 20 nm bandwidth (±10 nm on either side of the central wavelength); an 8-bit IDS UI-1492LE-M CMOS image sensor composed of 3884×2764 square pixels of 1.67 μm side length; a diaphragm defining a 150 μm aperture placed next to the light source.

(43) The reference device was placed in a dark chamber, forming a chamber that was impermeable to light. The device according to the invention comprises a detection filter 29 placed directly on the image sensor, defining a detection spectral band centred on 980 nm and of spectral width equal to 10 nm. Thus, the detection spectral band lay between 975 nm and 985 nm. In this example, the device according to the invention is used in daylight.

(44) A sample, containing micron-sized particles in aqueous solution, was placed at a distance of 1.5 mm from the image sensor. FIGS. 3A and 3B are images acquired by the image sensor with the reference device and with the device according to the invention, respectively. In these figures, regions of interest have been delineated by dashed lines. FIGS. 3C and 3D correspond to zoomed-in images of the regions of interest. FIGS. 3E and 3F show intensity profiles produced for each figure, along a dashed line. These profiles show that the image quality was equivalent with both devices.

(45) According to another embodiment, schematically shown in FIG. 4, an image-forming optical system 16 is placed between the sample and the image sensor, the image sensor being arranged in what is called a defocused configuration. The image-forming optic 16 may comprise a lens or an objective. The image-forming optic 16 defines an object focal plane P.sub.obj and an image focal plane P.sub.m. In the defocused configuration: the object focal plane P.sub.obj is offset from the plane in which the sample lies by a distance called the defocus distance; and/or the image focal plane is offset from the detection plane by a distance called the defocus distance.

(46) The defocus distance may be comprised between 5 μm and 5 mm, and preferably between 10 μm and 2 mm. In the same way as in a lensless configuration, such a configuration allows an image to be obtained in which diffracting elements of the sample, particles for example, appear in the form of diffraction patterns, interference occurring between the light wave emitted by the light source and propagating to the image sensor and a diffracted wave generated by each diffracting element of the sample. In the example of FIG. 4, the object plane P.sub.obj is coincident with the sample plane P.sub.10. The image plane P.sub.m is offset with respect to the detection plane P.sub.0. The features described with reference to the embodiment shown in FIGS. 1A and 1B may be applied to the defocused configuration.

(47) However, a lensless-imaging configuration is preferred, because of the larger observation field that it procures.

(48) The invention will possibly be employed to observe samples in the field of biology or health, or in other industrial fields, for example food processing and/or environmental inspection.