Method for observing a sample by lensless imaging, with a spatial dispersion in the sample taken into account
10848683 · 2020-11-24
CPC classification
G03H1/0866
PHYSICS
G01N15/1468
PHYSICS
H04N23/55
ELECTRICITY
H04N23/74
ELECTRICITY
G02B21/367
PHYSICS
G03H1/0443
PHYSICS
International classification
G02B21/36
PHYSICS
Abstract
Method for observing a sample comprising the steps of (a) illuminating the sample using a light source, the light source emitting an incident light wave that propagates toward the sample along a propagation axis (Z); (b) acquiring, using an image sensor, an image of the sample, which image is formed in a detection plane; (c) forming a stack of images, called reconstructed images, from the image acquired in step (b), each reconstructed image being obtained by applying, for one reconstruction distance, a numerical propagation operator; and (d) from each image of the stack of images, computing a clearness indicator for various radial positions, each clearness indicator being associated with one radial position and with one reconstruction distance.
Claims
1. Method for observing a sample, the sample comprising particles, comprising the following steps: a) illuminating the sample using a light source, the light source emitting an incident light wave that propagates toward the sample along a propagation axis; b) acquiring, using an image sensor, an image of the sample, which image is formed in a detection plane, the sample being placed between the light source and the image sensor, the image being representative of an exposure light wave, to which the image sensor is exposed under the effect of the illumination, the image comprising pixels, each pixel corresponding to a defined radial position in a plane parallel to the detection plane; c) forming a stack of complex images, called reconstructed images, from the image acquired in step b), each reconstructed image being obtained by applying, for one reconstruction distance along the propagation axis, a numerical propagation operator, the stack of images comprising as many reconstructed images as there are different reconstruction distances, each reconstructed image being representative of an exposure light wave to which the image sensor is exposed; d) from each image of the stack of images, computing a clearness indicator for various radial positions, corresponding to different particles, each clearness indicator being associated with one radial position and with one reconstruction distance; e) taking into account a selection criterion; f) for each radial position, and depending on the selection criterion, selecting one clearness indicator among the various clearness indicators defined, at the radial position, from the various reconstructed images, the selected clearness indicator being associated with a reconstructed image that is optimal for the radial position in question; and g) forming an observation image of the sample, each pixel of which is associated with one radial position, the value of pixels of the observation image being determined depending on the value, at the radial position of the pixel, of the optimal reconstructed image that is associated therewith; wherein, in step d): each clearness indicator is computed from one elementary image, the elementary image being established from a modulus and/or an argument and/or a real part and/or an imaginary part of a reconstructed image; and each clearness indicator is established while considering a gradient of the elementary image in one or more gradient directions, and wherein: the method comprises, following step f), determining, for each radial position in question, an optimal distance, corresponding to the reconstruction distance of the optimal reconstructed image associated with the radial position, the optimal distances respectively determined with respect to at least two radial positions, corresponding to at least two different particles, being different, and in g), the observation image shows particles, lying in different radial positions, with different optimal distances.
2. Method according to claim 1, wherein step d) comprises, for each reconstructed image of the stack of images: computing a gradient image representative of a gradient of the reconstructed image in at least one gradient direction; and computing a norm of the gradient image; such that the clearness indicator, in each radial position, is obtained from the norm of the gradient image at the radial position.
3. Method according to claim 2, wherein step d) comprises: computing a gradient image respectively along two orthogonal axes defining a radial plane orthogonal to the propagation axis; and combining each computed gradient image so as to form a two-dimensional gradient image; such that each two-dimensional gradient image is representative of a two-dimensional gradient of one reconstructed complex image.
4. Method according to claim 1, wherein step f) comprises taking into account a range of validity, and rejecting a selected clearness indicator if the reconstruction distance that is associated therewith is not comprised in the range of validity.
5. Method according to claim 1, wherein, in step g), the observation image, at each radial position, is obtained from the optimal reconstructed image for the radial position in question.
6. Method according to claim 1, wherein no image-forming optic is placed between the sample and the image sensor.
7. Method according to claim 1, wherein an optical system is placed between the sample and the image sensor, the optical system conjugating an image plane with an object plane, the detection plane being offset with respect to the image plane and/or the sample being offset from the object plane, such that the image acquired by the image sensor is a defocused image of the sample.
8. Method according to claim 1, wherein the sample is held in or on a holding element, the sample being immobile with respect to the holding element.
9. Method according to claim 1, wherein the sample contains particles, the method comprising a step h) of characterizing the particles from the observation image of the sample, the characterization comprising: counting the particles; and/or determining a size and/or a shape of the particles; and/or counting particles depending on their size and/or their shape; and/or determining a three-dimensional position of the particles.
10. Device for identifying a sample, comprising: a light source configured to emit an incident light wave that propagates toward the sample; an image sensor configured to acquire an image of the sample; a holder, configured to hold the sample between the light source and the image sensor; and a processor, configured to receive an image of the sample from the image sensor, and to implement steps c) to g) of the method according to claim 1.
11. Device according to claim 10, wherein no image-forming optic is placed between the sample and the image sensor.
12. Device according to claim 10, comprising an optical system defining a focused configuration, in which the image sensor is conjugated with a plane passing through the sample, the device being such that the image sensor or the sample are offset with respect to the focused configuration, such that, in step b), the image sensor acquires a defocused image of the sample.
13. Method according to claim 1, comprising: obtaining a plurality of three-dimensional positions for which observation of the sample is considered to be clear; and forming the observation image of the sample using the stack of images and the plurality of three-dimensional positions.
14. Method for observing a sample, comprising the following steps: a) illuminating the sample using a light source, the light source emitting an incident light wave that propagates toward the sample along a propagation axis; b) acquiring, using an image sensor, an image of the sample, which image is formed in a detection plane, the sample being placed between the light source and the image sensor, the image being representative of an exposure light wave, to which the image sensor is exposed under the effect of the illumination, the image comprising pixels, each pixel corresponding to a defined radial position in a plane parallel to the detection plane; c) forming a stack of complex images, called reconstructed images, from the image acquired in step b), each reconstructed image being obtained by applying, for one reconstruction distance along the propagation axis, a numerical propagation operator, the stack of images comprising as many reconstructed images as there are different reconstruction distances, each reconstructed image being representative of an exposure light wave to which the image sensor is exposed; d) from each image of the stack of images, computing a clearness indicator for various radial positions, each clearness indicator being associated with one radial position and with one reconstruction distance; e) taking into account a selection criterion; f) for each radial position, and depending on the selection criterion, selecting one clearness indicator among the various clearness indicators defined, at the radial position, from the various reconstructed images, the selected clearness indicator being associated with a reconstructed image that is optimal for the radial position in question; g) forming an observation image of the sample, each pixel of which is associated with one radial position, the value of pixels of the observation image being determined depending on the value, at the radial position of the pixel, of the optimal reconstructed image 
that is associated therewith; wherein, in step d): each clearness indicator is computed from one elementary image, the elementary image being established from a modulus and/or an argument and/or a real part and/or an imaginary part of a reconstructed image; and each clearness indicator is established while considering a gradient of the elementary image in one or more gradient directions; the method further comprising: obtaining a plurality of three-dimensional positions for which observation of the sample is considered to be clear; and forming the observation image of the sample using the stack of images and the plurality of three-dimensional positions.
Description
FIGURES
SUMMARY OF PARTICULAR EMBODIMENTS
(12) The sample 10 is a sample that it is desired to characterize. The sample comprises a medium 10m, for example a liquid medium, in which particles 10p bathe. The particles may be cells, or microorganisms, or microalgae, or fragments thereof. By microorganism, what is notably meant is a yeast, a bacterium, a spore, or a fungus. The term particles may also designate solid particles, in particular microspheres, for example metal microspheres, glass microspheres or organic microspheres, which are commonly implemented in biological protocols. It may also be a question of insoluble droplets bathing in a liquid medium, for example lipid droplets in an oil-in-water emulsion. The particles may have a diameter or a diagonal smaller than 100 μm. They may be inscribed in a circle or a sphere the diameter of which is smaller than 100 μm.
(13) The medium 10m may be a solid medium or a medium forming a gel. The medium 10m may comprise a bodily liquid, for example, and nonlimitingly, blood, urine, lymph, or cerebrospinal fluid. It may be a question of a culture medium, propitious for the development of cells or microorganisms.
(14) The sample 10 is held by a holding element 15. The function of the holding element is to hold the sample facing the image sensor 16. It is preferably transparent or translucent. The holding element may be a transparent plate, for example a glass plate, or a fluidic chamber. In the example shown in
(15) The distance D between the light source 11 and the fluidic chamber 15 is preferably larger than 1 cm. It is preferably comprised between 2 and 30 cm. Advantageously, the light source, seen by the sample, may be considered to be point like. This means that its diameter (or its diagonal) is preferably smaller than one tenth, and better still one hundredth, of the distance between the fluidic chamber 15 and the light source. In
(16) Preferably, the emission spectral band of the incident light wave 12 has a bandwidth narrower than 100 nm. By spectral bandwidth, what is meant is a full width at half maximum of said spectral band.
(17) According to one embodiment, the light source 11 comprises a plurality of elementary light sources 11.sub.k, each being configured to emit an incident light wave 12.sub.k in a spectral band Δλ.sub.k. Such a light source is shown in
(18) The sample 10 is placed between the light source 11 and an image sensor 16. The image sensor 16 is configured to form an image I.sub.0 of the sample 10 in a detection plane P.sub.0. In the example shown, it is a question of an image sensor comprising a matrix array of CCD pixels, or a CMOS sensor. The pixel matrix array forms the detection plane P.sub.0. The detection plane P.sub.0 preferably lies perpendicular to the propagation axis Z of the incident light wave 12. The detection plane lies in a radial plane XY, defined by two axes X and Y, the radial plane being perpendicular to the propagation axis Z.
(19) The distance d between the sample 10 and the matrix array of pixels of the image sensor 16 is preferably comprised between 50 μm and 2 cm, and preferably comprised between 100 μm and 2 mm.
(20) In this embodiment, the absence of magnifying or image-forming optic between the image sensor 16 and the sample 10 will be noted. This does not prevent focusing micro-lenses possibly being present level with each pixel of the image sensor 16, said micro-lenses not performing the function of magnifying the image acquired by the image sensor, their function being to optimize the efficiency with which light is collected by the pixels.
(21) Under the effect of the incident light wave 12, the particles present in the sample may generate a diffracted wave 13, liable to generate, in the detection plane P.sub.0, interference, in particular with a portion of the incident light wave 12 transmitted by the sample. Moreover, the sample may absorb a portion of the incident light wave 12. Thus, the light wave 14, transmitted by the sample, and to which the image sensor 16 is exposed, which wave is designated by the term exposure wave, may comprise: a component 13 resulting from diffraction of the incident light wave 12 by each particle of the sample; a component 12 resulting from transmission of the incident light wave 12 by the sample, a portion of the latter possibly being absorbed in the sample.
(22) These components interfere in the detection plane. Thus, the image acquired by the image sensor contains interference patterns (or diffraction patterns), due to the various particles of the sample.
(23) A processor 20, for example a microprocessor, is configured to process each image I.sub.0 acquired by the image sensor 16, according to the steps described below. In particular, the processor is a microprocessor connected to a programmable memory 22 in which a sequence of instructions for carrying out the image-processing and computing operations described in this description is stored. The processor may be coupled to a screen 24 allowing the images acquired by the image sensor 16 or computed by the processor 20 to be displayed.
(24) Because of the absence of image-forming optic, an image I.sub.0 acquired by the image sensor 16, which image is also called a hologram, does not allow a sufficiently precise representation of the observed sample to be obtained. The acquired image I.sub.0 may notably comprise a high number of interference patterns, and may not be easily exploitable to identify the particles present in the sample.
(26) It is possible to apply, to the image I.sub.0 acquired by the image sensor, a holographic propagation operator h, so as to compute a quantity representative of the exposure light wave 14. It is then possible to reconstruct a complex expression A for the light wave 14 at any point of spatial coordinates (x,y,z), and in particular in a reconstruction plane P.sub.z located at a distance |z| from the image sensor 16, called the reconstruction distance, this reconstruction plane being for example the plane P.sub.10 in which the sample lies, with:
A(x,y,z)=I.sub.0(x,y,z)*h  (1)

* designating the convolution operator.
(27) The function of the propagation operator h is to describe the propagation of the light between the image sensor 16 and a point of coordinates (x,y,z), which point is located at a distance |z| from the image sensor. It is then possible to determine the modulus M(x,y,z) and/or the phase φ(x,y,z) of the light wave 14, at the distance |z|, with:
M(x,y,z)=abs[A(x,y,z)];
φ(x,y,z)=arg[A(x,y,z)].
(28) The operators abs and arg designate the modulus and argument, respectively.
(29) The propagation operator is for example the Fresnel-Helmholtz function, such that:

(30) h(x,y,z)=(1/(iλz))e.sup.i2πz/λ exp(iπ(x.sup.2+y.sup.2)/λz),

λ designating the central wavelength of the emission spectral band.
(31) In other words, the complex expression A of the light wave 14, at any point of spatial coordinates (x,y,z), is such that: A(x,y,z)=M(x,y,z)e.sup.iφ(x,y,z).
(32) In the rest of this description, the coordinates (x,y) designate a radial position in a radial plane XY parallel to the detection plane. The coordinate z designates a coordinate along the propagation axis Z.
(33) The complex expression A is a complex quantity the argument and the modulus of which are representative of the phase and of the intensity of the exposure light wave 14 detected by the image sensor 16, respectively. The product of convolution of the image I.sub.0 by the propagation operator h allows a complex image A.sub.z representing a spatial distribution of the complex expression A in a reconstruction plane P.sub.z lying at a distance |z| from the detection plane P.sub.0 to be obtained. In this example, the equation of the detection plane P.sub.0 is z=0. The complex image A.sub.z corresponds to a complex image of the sample in the reconstruction plane P.sub.z. The image A.sub.z is defined at radial coordinates (x,y), such that A.sub.z(x,y)=A(x,y,z). The image A.sub.z also represents a two-dimensional spatial distribution of the complex expression of the exposure wave 14. Such a method, designated by the term holographic reconstruction, notably allows an image of the modulus or of the phase of the exposure light wave 14 in the reconstruction plane to be reconstructed. To do this, images M.sub.z and φ.sub.z respectively representing the modulus or the phase of the complex image A.sub.z may be formed, with M.sub.z=mod(A.sub.z) and φ.sub.z=arg(A.sub.z).
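As an illustration, the reconstruction of Expression (1) can be sketched numerically. The following NumPy snippet (a sketch with illustrative pixel pitch, wavelength and reconstruction distance, not values taken from this patent) convolves a toy hologram with the sampled Fresnel kernel via FFTs and extracts the modulus and phase images M.sub.z and φ.sub.z:

```python
import numpy as np

def fresnel_kernel(shape, pitch, wavelength, z):
    """Sampled impulse response h(x, y, z) of the Fresnel propagation operator."""
    ny, nx = shape
    y = (np.arange(ny) - ny // 2) * pitch
    x = (np.arange(nx) - nx // 2) * pitch
    X, Y = np.meshgrid(x, y)
    return (np.exp(1j * 2 * np.pi * z / wavelength) / (1j * wavelength * z)
            * np.exp(1j * np.pi * (X ** 2 + Y ** 2) / (wavelength * z)))

# toy hologram and illustrative parameters (1.67 um sensor pitch, 650 nm source)
I0 = np.random.rand(64, 64)
pitch, wavelength, z = 1.67e-6, 650e-9, 1.5e-3

# A_z = I0 * h (Expression (1)), the convolution being evaluated with FFTs
h = fresnel_kernel(I0.shape, pitch, wavelength, z)
A_z = np.fft.fftshift(np.fft.ifft2(np.fft.fft2(I0) * np.fft.fft2(np.fft.ifftshift(h))))

M_z = np.abs(A_z)      # modulus image, M_z = mod(A_z)
phi_z = np.angle(A_z)  # phase image, phi_z = arg(A_z)
```

The FFT evaluation assumes circular boundary conditions, which is the usual trade-off for this kind of convolution; iterative reconstruction schemes such as those cited below refine this basic operation.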
(34) The application of a holographic reconstruction operator to an acquired image, according to Expression (1), may be accompanied by the formation of noise affecting the reconstructed images. In order to limit the appearance of this noise, the application of the holographic reconstruction operator is carried out using iterative holographic reconstruction algorithms. Such algorithms are for example described: in document WO2016189257, in particular in steps 100 to 150 described in the latter; or in document WO2017162985, and more precisely according to steps 110 to 160 described in the latter.
(35) The image shown in
(36) The inventors have developed a method allowing this to be remedied, the main steps of which are described below, with reference to
(37) Step 100: Acquiring an image I.sub.0 of the sample 10 with the image sensor 16, this image forming a hologram. One of the advantages of the lensless configuration shown in
(38) Step 110: Obtaining a stack of reconstructed images.
(39) This step comprises reconstructing a plurality of complex images by carrying out holographic reconstructions at various reconstruction distances z.sub.j from the detection plane P.sub.0. A stack of complex images A.sub.z.sub.j is thus obtained.
(40) Each index j is a natural integer indexing one reconstruction distance, with 1≤j≤J, J being the number of complex images A.sub.z.sub.j in the stack.
(41) Preferably, the reconstruction distances z.sub.j encompass the sample 10. If the sample lies, along the propagation axis Z, between two distances z.sub.min and z.sub.max, with respect to the image sensor 16, with z.sub.min<z.sub.max, the minimum and maximum reconstruction distances z.sub.j=1 and z.sub.j=J are such that z.sub.j=1≤z.sub.min<z.sub.max≤z.sub.j=J.
(42) The respective reconstruction distances z.sub.j, z.sub.j+1 of two adjacent reconstructed images A.sub.z.sub.j, A.sub.z.sub.j+1 may be separated by a distance comprised between 5 μm and 100 μm, this distance corresponding to a reconstruction pitch.
(43) Each complex image A.sub.z.sub.j may be obtained by applying the numerical propagation operator to the image I.sub.0 acquired in step 100.
(44) The stack of complex images may be established using the method described in WO2017178723.
(45) At the end of step 110, a stack of complex images A.sub.z.sub.j is available, each image being associated with one reconstruction distance z.sub.j.
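Step 110 can be sketched as a loop over the reconstruction distances. The snippet below is a minimal illustration (FFT-based Fresnel propagation written here in its transfer-function form; the distances, pitch and wavelength are illustrative assumptions, not the patent's values):

```python
import numpy as np

def propagate(field, pitch, wavelength, z):
    """Propagate a field over a distance z (Fresnel transfer function, a sketch)."""
    ny, nx = field.shape
    FX, FY = np.meshgrid(np.fft.fftfreq(nx, pitch), np.fft.fftfreq(ny, pitch))
    H = (np.exp(1j * 2 * np.pi * z / wavelength)
         * np.exp(-1j * np.pi * wavelength * z * (FX ** 2 + FY ** 2)))
    return np.fft.ifft2(np.fft.fft2(field) * H)

# illustrative values: distances z_j encompassing the sample, 20 um reconstruction pitch
pitch, wavelength = 1.67e-6, 650e-9
distances = np.arange(1.0e-3, 2.0e-3, 20e-6)          # z_1 ... z_J
I0 = np.random.rand(64, 64)                           # acquired hologram (toy)
stack = np.stack([propagate(I0, pitch, wavelength, z) for z in distances])
```

Because the transfer function has unit modulus, propagating forward then backward over the same distance recovers the input field, which is a convenient sanity check for such a stack builder.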
(47) Step 120: Computing a clearness indicator for various radial positions (x,y), along the propagation axis Z.
(48) In this step, a variation in a clearness indicator C.sub.j(x,y) along the propagation axis Z is determined for a plurality of radial positions (x,y). In other words, for each radial position (x,y), this step comprises computing a clearness indicator C.sub.j(x,y), at the radial position in question, of each image A.sub.z.sub.j of the stack.
(49) Each reconstructed image A.sub.z.sub.j is a complex image, from which one or more elementary images may be formed.
(50) Generally, an elementary image I.sub.z.sub.j is an image established from the modulus and/or the argument and/or the real part and/or the imaginary part of the reconstructed complex image A.sub.z.sub.j.
(51) The clearness indicator C.sub.j(x,y) of each complex image A.sub.z.sub.j may be computed by determining a gradient image G.sub.X along the axis X and a gradient image G.sub.Y along the axis Y, the two being combined into a two-dimensional gradient image G.sub.XY, for example such that:

G.sub.XY(x,y)=√(|G.sub.X(M.sub.z.sub.j)(x,y)|.sup.2+|G.sub.Y(M.sub.z.sub.j)(x,y)|.sup.2);

or G.sub.XY(x,y)=|G.sub.X(M.sub.z.sub.j)(x,y)|+|G.sub.Y(M.sub.z.sub.j)(x,y)|.
(52) The two-dimensional gradient image then forms, in each radial position (x,y), an indicator C.sub.j(x,y) of the clearness of the complex image A.sub.z.sub.j.
(53) The two-dimensional gradient image thus obtained is representative of a two-dimensional gradient of the reconstructed complex image A.sub.z.sub.j.
(54) The clearness indicator may be established, at each reconstruction distance and at each radial position, from an elementary image containing the modulus, and/or the phase, and/or the real part, and/or the imaginary part of the reconstructed complex images, by considering the intensity of these images. The clearness indicator may be established from the intensity of such images.
(55) Following step 120, a clearness indicator C.sub.j(x,y) is obtained for each radial position (x,y) corresponding to one pixel of the complex images A.sub.z.sub.j.
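One possible numerical form of step 120 uses a finite-difference gradient norm as the clearness indicator; in this sketch `np.gradient` stands in for whatever gradient operator is actually retained, and the stack of modulus images is synthetic:

```python
import numpy as np

def clearness(modulus_stack):
    """C_j(x, y): norm of the two-dimensional gradient of each elementary image M_zj."""
    gy, gx = np.gradient(modulus_stack, axis=(1, 2))   # gradients along Y then X
    return np.sqrt(gx ** 2 + gy ** 2)                  # shape (J, ny, nx)

# toy stack of J = 3 modulus images of 8 x 8 pixels
M = np.random.rand(3, 8, 8)
C = clearness(M)   # one clearness indicator per (j, x, y)
```

A uniform elementary image yields a zero indicator everywhere, which matches the intuition that a defocused, featureless region carries no sharp edges.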
(56) Step 130: Selecting an optimal clearness indicator C.sub.j(x,y) for each radial position (x,y).
(57) In this step, for each radial position (x,y), among the clearness indicators C.sub.j(x,y) (1≤j≤J) corresponding to the radial position, an optimal clearness indicator C.sub.j*(x,y) meeting a preset selection criterion is selected. The optimal clearness indicator C.sub.j*(x,y) thus selected may be the maximum clearness indicator, in which case C.sub.j*(x,y)=max C.sub.j(x,y). It may also be a question of the minimum clearness indicator, in which case C.sub.j*(x,y)=min C.sub.j(x,y).
(59) In the stack of the elementary images illustrated in
(60) Step 140: Determining an optimal reconstruction distance z.sub.j* for each radial position (x,y) in question.
(61) The selection of the optimal clearness indicator C.sub.j*(x,y) allows, for each radial position, a reconstruction distance, called the optimal reconstruction distance z.sub.j*(x,y), which distance is associated with the optimal clearness indicator, to be established. At this optimal reconstruction distance z.sub.j*(x,y), for the radial position (x,y) in question, the complex image A.sub.z.sub.j* is considered to be the clearest of the stack.
(62) Thus, if an image A.sub.z.sub.j is considered to be all the clearer the higher the clearness indicator at the radial position (x,y) in question, then:

j*=argmax(C.sub.j(x,y)).

(63) When an image A.sub.z.sub.j is considered to be all the clearer the lower the clearness indicator, then:

j*=argmin(C.sub.j(x,y)).
(64) At the end of this step, a list of three-dimensional positions (x,y,z.sub.j*(x,y)) for which the observation of the sample is considered to be clear is obtained.
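Steps 130 and 140 amount to a per-pixel argmax (or argmin) over the stack axis. A sketch with synthetic clearness indicators and illustrative distances:

```python
import numpy as np

# synthetic clearness indicators C_j(x, y) for J = 5 reconstructions of 8 x 8 pixels
C = np.random.rand(5, 8, 8)
distances = np.linspace(1.0e-3, 1.4e-3, 5)   # z_1 ... z_J (illustrative, in metres)

j_star = np.argmax(C, axis=0)   # selection criterion: maximum clearness per (x, y)
z_star = distances[j_star]      # optimal reconstruction distance z_j*(x, y)
```

The pair `(x, y, z_star[x, y])` then enumerates the three-dimensional positions at which observation is considered clear; swapping `np.argmax` for `np.argmin` covers the minimum-clearness criterion of paragraph (63).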
(65) Step 150: Forming an observation image I of the sample.
(66) In this step, a clear image of the sample is formed from the stack of complex images, by considering the three-dimensional positions (x,y,z.sub.j*(x,y)) at which a complex image A.sub.z.sub.j of the stack is considered to be clear.
(67) The observation image I of the sample may be formed from the modulus of the complex images of the stack of images. In this case, the observation image of the sample is such that I(x,y)=mod(A.sub.z.sub.j*(x,y))(x,y).
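Forming I(x,y)=mod(A.sub.z.sub.j*(x,y))(x,y) is a per-pixel gather over the stack axis; a minimal sketch on toy data (shapes and the index map are illustrative):

```python
import numpy as np

# toy stack of J = 5 complex reconstructions and a per-pixel optimal index j*(x, y)
A = np.random.rand(5, 8, 8) + 1j * np.random.rand(5, 8, 8)
j_star = np.random.randint(0, 5, size=(8, 8))

# for each pixel, keep the modulus of the reconstruction that is optimal there
I = np.take_along_axis(np.abs(A), j_star[None, ...], axis=0)[0]
```

`np.take_along_axis` requires the index array to have the same number of dimensions as the stack, hence the added leading axis that is squeezed away afterwards.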
(72) According to one embodiment, the method comprises an optional step 145 in which, for each radial coordinate (x,y) in question, the optimal distance z.sub.j*(x,y) obtained following steps 130 and 140 is selected. In this selection, when, for a radial position (x,y), the optimal distance is located outside of a range of validity Δz defined beforehand, it is invalidated. The range of validity Δz for example corresponds to a distance range [z.sub.min, z.sub.max] bounding the sample. Thus, if z.sub.j*(x,y)<z.sub.min or z.sub.j*(x,y)>z.sub.max, the optimal distance z.sub.j*(x,y) is invalidated. The clearness indicator C.sub.j(x,y) corresponding to such a distance is then invalidated. For this radial position, steps 130 and 140 may then be reiterated. Such an embodiment makes it possible to prevent elements located outside of the sample, for example dust on the holding element 15, or on the image sensor 16, from being considered to form part of the sample 10.
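The validity test of step 145 reduces to an interval check on the optimal distances; the bounds and distances below are illustrative values only:

```python
import numpy as np

z_min, z_max = 1.0e-3, 1.4e-3   # range of validity [z_min, z_max] bounding the sample

# optimal distances z_j*(x, y) found at three radial positions (illustrative)
z_star = np.array([0.8e-3, 1.1e-3, 1.6e-3])

# a position is kept only if its optimal distance lies inside the range of validity;
# the others (e.g. dust on the holding element or on the sensor) are invalidated
valid = (z_star >= z_min) & (z_star <= z_max)
```

For the invalidated positions, the corresponding clearness indicator would be discarded and the selection of steps 130 and 140 repeated on the remaining indicators.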
(73) According to one such embodiment, step 150 is optional. In this case, the method allows an optimal distance to be obtained at each radial position in question, without necessarily resulting in an observation image of the sample.
(74) According to one variant, an image-forming optic is placed between the sample and the image sensor. According to one variant, illustrated in
(75) Nonlimitingly, the invention may be implemented in the field of diagnostics, of biology, or in inspection of the environment, or even in the field of food processing or of control of industrial processes.