DEVICE AND METHOD FOR OBSERVING AN OBJECT BY LENSLESS IMAGING
20190120747 · 2019-04-25
Assignee
- Commissariat A L'energie Atomique Et Aux Energies Alternatives (Paris, FR)
- Biomerieux (Marcy-l'Etoile, FR)
Inventors
CPC classification
- G01N15/00 (PHYSICS)
- G01N2015/1454 (PHYSICS)
- G03H1/0443 (PHYSICS)
- G03H1/041 (PHYSICS)
- G03H1/0866 (PHYSICS)
- G01N15/1468 (PHYSICS)
- G03H2001/0471 (PHYSICS)
International classification
Abstract
A device and a method for observing an object by imaging, or by lensless imaging. The object is retained by a holder defining an object plane inserted between a light source and an image sensor, with no enlargement optics being placed between the object and the image sensor. An optical system is arranged between the light source and the holder and is configured to form a convergent incident wave from a light wave emitted by the light source, and for forming a secondary light source, conjugated with the light source, positioned in a half-space defined by the object plane and including the image sensor, such that the secondary source is closer to the image sensor than to the holder. This results in an image with a transversal enlargement factor having an absolute value of less than 1.
Claims
1-16. (canceled)
17. A device for observing an object, comprising: a light source, configured to generate an emission light wave, that propagates along a propagation axis; an image sensor; a holder configured to hold an object, the holder being placed between the image sensor and the light source such that the image sensor is configured to form an image of the object held on the holder; an optical system, placed between the light source and the holder, the optical system configured to form, from the emission light wave, a convergent incident light wave that propagates from the optical system to the holder; wherein: the holder defines an object plane, that is perpendicular to the propagation axis and that passes through the holder, the optical system being configured to conjugate the light source with a secondary source that is located in a half space defined by the object plane and that includes the image sensor; the optical system is configured such that the secondary source is located closer to the image sensor than to the holder, such that the image of the object, held on the holder, on the image sensor is affected by a magnification lower than 1.
18. The device of claim 17, wherein the device does not comprise magnifying optics between the holder and the image sensor.
19. The device of claim 17, wherein the optical system is configured such that the secondary source is located between the holder and the image sensor.
20. The device of claim 17, wherein the image sensor lies in a detection plane, the optical system being configured such that the secondary source is located in a half space that is defined by the detection plane and that does not include the light source.
21. The device of claim 17, wherein the optical system is parameterized by a parameter, such that the position of the secondary source is adjustable depending on the parameter.
22. The device of claim 21, wherein the parameter is a position of the optical system along the propagation axis or a focal length of the optical system.
23. A method for observing an object comprising: a) placing the object between a light source and an image sensor, the light source being coupled to an optical system that is placed between the light source and the object; b) activating the light source, the light source then emitting an emission light wave that propagates to the optical system, the optical system forming a convergent incident light wave that propagates to the object; and c) acquiring, using the image sensor, an image of the object thus exposed to the convergent incident light wave; wherein: the emission light wave is emitted along a propagation axis, the object defining an object plane, that passes through the object and that is perpendicular to the propagation axis, such that, in b), the optical system conjugates the light source with a secondary light source, the secondary light source being located in a half space that is defined by the object plane and that includes the image sensor; and the secondary source is located closer to the image sensor than to the object.
24. The method of claim 23, wherein no magnifying optics are placed between the object and the image sensor.
25. The method of claim 23, wherein the secondary light source is located between the object and the image sensor.
26. The method of claim 23, wherein the image sensor lies in a detection plane, and wherein the secondary light source is located in a half space that is defined by the detection plane and that does not include the light source.
27. The method of claim 23, wherein b) further comprises adjusting the position of the secondary source depending on a parameter of the optical system.
28. The method of claim 27, wherein the parameter of the optical system is: a focal length of the optical system; or a position of the optical system along the propagation axis.
29. The method of claim 23, wherein, in c), the image sensor is exposed to an exposure light wave including: a wave that is transmitted by the object and that results from the transmission, by the object, of the convergent incident light wave; and a diffraction wave that results from the diffraction, by the object, of the convergent incident light wave.
30. The method of claim 23, further comprising: d) applying a holographic reconstruction algorithm to the image formed on the image sensor in c).
31. The method of claim 23, wherein: the light source emits the emission light wave at a wavelength; the object is translucent or transparent at the wavelength.
32. The method of claim 23, wherein: the light source emits the emission light wave at a wavelength; the object includes particles that are dispersed in or on the surface of a medium, the latter being translucent or transparent at the wavelength.
Description
SUMMARY OF PARTICULAR EMBODIMENTS
[0050] The object 10 may be a sample that it is desired to characterize. It may comprise a solid or liquid medium 10a that is transparent or translucent to said wavelength λ, in which medium, or on the surface of which medium, particles 10b are dispersed.
[0051] The expression bodily fluid is understood to mean a fluid issued from an animal or human body, such as blood, urine, sweat, cerebrospinal fluid, lymph, etc. The expression culture medium is understood to mean a medium that lends itself well to the development of a biological species such as cells, bacteria or other microorganisms.
[0052] The object may also be a tissue slide or anatomo-pathology slide including a small thickness of tissue deposited on a transparent slide. It may also be a question of a slide obtained by applying a staining protocol suitable for allowing a microorganism to be identified in a sample, for example a Gram or Giemsa stain. By small thickness, what is meant is a thickness that is preferably smaller than 100 μm, and more preferably smaller than 10 μm, typically a few micrometers.
[0053] The distance between the light source 11 and the object 10 is preferably larger than 1 cm. It is preferably comprised between 2 and 30 cm. Preferably, the light source, seen by the object, may be considered to be point-like. This means that its diameter (or its diagonal) is preferably smaller than one tenth, and better still one hundredth of the distance between the object and the light source.
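The point-source criterion above can be expressed as a small check. This sketch is not part of the patent; the helper name and default ratio are illustrative:

```python
# Hypothetical helper illustrating the criterion of [0053]: the source
# diameter (or diagonal) should be at most one tenth, and better still
# one hundredth, of the source-to-object distance.

def is_point_like(source_diameter_m: float, distance_m: float,
                  ratio: float = 0.1) -> bool:
    """Return True if the source may be considered point-like, i.e. its
    diameter does not exceed ratio times the source-object distance."""
    return source_diameter_m <= ratio * distance_m

# A 1 mm fiber core seen from 5 cm satisfies the one-tenth criterion,
# but not the stricter one-hundredth criterion.
print(is_point_like(0.001, 0.05))              # one-tenth criterion
print(is_point_like(0.001, 0.05, ratio=0.01))  # one-hundredth criterion
```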
[0054] The light source may be a light-emitting diode or a source of laser light, such as a laser diode. It may preferably be a point source. In the example shown, the light source 11 is a light-emitting diode sold by Innovation Optics under the reference Lumibright 1700A-100-A-C0, the emission spectral band of which is centered on the wavelength of 450 nm. This light-emitting diode is placed facing a first end of an optical fiber 13, the second end of which is placed facing the object 10, or facing the holder 10s holding the object. The diameter of the core of the optical fiber is for example 1 mm. According to one variant, the optical fiber 13 may be replaced by a diaphragm, the aperture of which is typically comprised between 5 μm and 1 mm, and preferably between 50 μm and 500 μm, 150 μm for example. According to another variant, the optical fiber is coupled to an objective, allowing an image of its distal end to be formed so as to improve the point-like character of the source. This particular case will be described below. The optical fiber or the diaphragm, which are optionally coupled to an objective, form a spatial filter 13 allowing a point light source to be formed when the light source 11 is not judged to be sufficiently point-like.
[0055] The device also includes an image sensor 20, which is able to form an image I in a detection plane P.sub.20. In the example shown, it is a question of a matrix-array image sensor including a CCD or CMOS pixel matrix array. CMOS image sensors are preferred because the pixel size is smaller, this allowing images the spatial resolution of which is more favorable to be acquired. In this example, the image sensor is a CMOS sensor sold by Aptina under the reference Micron MT9P031. It is a question of a monochromatic CMOS sensor comprising 2592×1944 pixels of 2.2 μm side length, forming a detection surface, the area of which is 24.4 mm.sup.2. Image sensors the inter-pixel pitch of which is smaller than 3 μm are preferred, in order to improve the spatial resolution of the image. The detection plane P.sub.20 preferably lies perpendicular to the propagation axis Z of the emission light wave 12. The image sensor 20 may comprise a mirror-type system for redirecting images toward a pixel matrix array, in which case the detection plane corresponds to the plane in which the image-redirecting system lies. Generally, the detection plane P.sub.20 corresponds to the plane in which an image is formed.
[0056] The distance d between the object 10 and the pixel matrix array of the image sensor 20 is, in this example, equal to 2 cm. Generally, whatever the embodiment, the distance d between the object and the pixels of the image sensor is preferably comprised between 50 μm and 5 cm.
[0057] The absence of magnifying optics between the image sensor 20 and the object 10 will be noted, this being the preferred configuration. This does not prevent focusing micro-lenses optionally being present level with each pixel of the image sensor 20, the latter not having the function of magnifying the image.
[0058] The device 1 includes an optical system 15 that is placed between the light source 11 and the object 10. Its function is to collect the emission wave 12 propagating toward the object and to form a convergent wave 12.sub.c that propagates to the object, which wave is called the convergent incident wave. Some of the convergent incident wave 12.sub.c is then transmitted by the object, forming a transmitted wave 22, and propagates to the image sensor 20. Moreover, under the effect of exposure to the convergent incident wave 12.sub.c, the object may generate a diffraction wave 23 resulting from diffraction, by the object, of the convergent incident wave 12.sub.c. The image sensor is therefore exposed to a wave, called the exposure wave 24, comprising the transmitted wave 22 and the diffraction wave 23. Detection of the exposure wave 24 by the image sensor allows an image of a portion of the object to be formed, this portion corresponding to the field of observation. This image represents a spatial distribution of the amplitude of the exposure wave 24 in the detection plane P.sub.20. It may in particular include diffraction patterns resulting from interference between the transmitted wave 22 and the diffraction wave 23. These patterns may in particular take the form of a central core, around which concentric rings lie. It is a question of the diffraction patterns described in the section relating to the prior art.
[0059] When the object includes various particles 10b, the diffraction wave includes a plurality of elementary diffraction waves, each elementary diffraction wave resulting from diffraction of the convergent incident wave 12.sub.c by said particles. Appearance of these diffraction waves is promoted when the size of said particles is about the same as or larger than the wavelength λ emitted by the light source 11.
[0060] The optical system 15 allows a secondary image 11.sub.s of the source to be formed, above or below the object. The terms above and below are understood to mean along the propagation axis of the emission wave 12. Thus, by below the object, what is meant is in a half space defined by the plane P.sub.10 that passes through the holder able to hold the object 10 and that is perpendicular to the propagation axis Z, this half space including the image sensor 20 (and therefore not including the source 11). In the example shown in
[0061] The transverse magnification g.sub.x may be defined by Expression (1):

g.sub.x=∂x.sub.20/∂x.sub.10  (1)

where: [0062] x.sub.10 is a dimension in the object plane P.sub.10; [0063] x.sub.20 is the same dimension in the detection plane P.sub.20, i.e. in the image acquired by the image sensor; and [0064] the operator ∂ designates an elementary variation of the dimension concerned.
[0065] The expression transverse magnification is understood to mean a magnification along an axis that is perpendicular to the propagation axis of the light. In the rest of the text, the terms transverse magnification and magnification are used interchangeably.
[0066] In the configuration shown in
[0067] An incident wave 12.sub.AA, according to the prior art, is also shown in this figure. The incident wave is divergent from the light source to the object, from which a transmitted wave 22.sub.AA, which is also divergent, propagates to the image sensor 20. The transverse magnification is then higher than 1.
[0072] On the basis of Expression (1), it is possible to determine the transverse magnification of each of these configurations. [0073] When the secondary source is positioned between the image sensor 20 and the object 10, which position is referenced S in
[0075] The x-axis represents the position of the secondary source 11.sub.s along the propagation axis Z, and the y-axis represents the transverse magnification. It may be seen that: [0076] i) when the secondary source 11.sub.s is located closer to the image sensor 20 than to the object 10,
the absolute value of the magnification g.sub.x is strictly lower than 1: |g.sub.x|<1; the magnification is negative when the secondary source is placed between the image sensor 20 and the object 10, and positive when the secondary source is located below the image sensor; [0077] ii) when the secondary source 11.sub.s is placed closer to the object 10 than to the image sensor 20,
the absolute value of the magnification g.sub.x is strictly higher than 1: |g.sub.x|>1; [0078] iii) when the secondary source 11.sub.s is placed between the object and the image sensor,
the magnification g.sub.x is negative, this corresponding to an inversion of the image of the object; [0079] iv) when the secondary source 11.sub.s is placed below the image sensor, i.e. in a half space defined by the detection plane P.sub.20 and not containing the object 10 (or the source 11),
the magnification g.sub.x is strictly comprised between 0 and 1: 0<g.sub.x<1; and [0080] v) when the secondary source 11.sub.s is placed between the source 11 and the object 10,
the magnification is strictly higher than 1.
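The five cases above are consistent with a simple central-projection model of the illumination: with the object plane at z=0, the detection plane at z=d and the secondary source at z=s (z oriented along the propagation axis), the magnification is g=1-d/s. This model is an editorial reconstruction, not a formula quoted from the patent:

```python
# Sketch (not from the patent text): the image is modelled as a central
# projection of the object plane (z = 0) through the secondary source
# (z = s) onto the detection plane (z = d), which gives g_x = 1 - d/s.

def transverse_magnification(s: float, d: float) -> float:
    """Transverse magnification for a secondary source at z = s, with
    the object plane at z = 0 and the detection plane at z = d."""
    if s == 0:
        raise ValueError("secondary source in the object plane: |g_x| is infinite")
    return 1.0 - d / s

d = 0.02  # object-to-sensor distance, e.g. 2 cm (cf. [0056])
print(transverse_magnification(0.015, d))  # between object and sensor, closer to sensor: negative, |g| < 1
print(transverse_magnification(0.03, d))   # below the sensor: between 0 and 1
print(transverse_magnification(-0.02, d))  # above the object: higher than 1
```

One may check that this single expression reproduces cases i) to v): negative for 0<s<d, of absolute value lower than 1 when s>d/2, between 0 and 1 for s>d, higher than 1 for s<0, infinite at s=0 and zero at s=d.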
[0081] The configuration in which the secondary source is in the object plane, i.e.
corresponds to a configuration in which the magnification is infinite. However, in this configuration, that portion of the object which is illuminated by the convergent incident wave 12.sub.c is then infinitely small, and hence this configuration is of no interest. When the secondary source is brought closer to the object, the magnification tends toward: [0082] if the secondary source is located below the object, i.e.
[0084] The configuration in which the secondary source 11.sub.s is in the detection plane P.sub.20, i.e.
corresponds to a configuration in which the magnification is zero. This configuration is of no interest.
[0089] Thus, a magnification the absolute value of which is lower than 1 is obtained in configurations i) or iv). This is due to the fact that the wave 12.sub.c incident on the object 10 is convergent, and that the secondary source 11.sub.s is closer to the image sensor 20 than to the object 10. In this type of configuration, provided that the illuminated field on the object is sufficiently large, the field of observation of the image sensor is increased with respect to the prior art. The curly bracket shown in
[0090] Moreover, by interposing an optical system 15 between the light source 11 and the object 10, it is possible to make the position of the secondary light source 11.sub.s vary, for example using an optical system 15 of variable focal length or by moving said system. The magnification g.sub.x may be modulated depending on a parameter characterizing the optical system 15, for example its focal length or its position along the propagation axis Z. This allows, during observation of an object, images corresponding to a magnification lower than 1, and hence to a large field of observation, to be alternated with images corresponding to a magnification higher than 1, allowing, via a zoom effect, details to be better seen. Although the device does not include any magnifying optics between the source and the object, the invention allows the magnification of an image to be modulated.
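Paragraph [0090] notes that the position of the secondary source 11.sub.s may be varied via the focal length of the optical system 15 or via its position along the axis. Under a thin-lens assumption, which the patent does not specify, the conjugation may be sketched as follows; the 60 mm source distance is a hypothetical value, while 50 mm echoes the tube lens of the experimental trials:

```python
# Sketch under a thin-lens assumption (not stated in the patent): the
# secondary source is the image of the (point) light source through the
# optical system, given by the Gaussian lens formula 1/p + 1/v = 1/f,
# with p and v positive for a real source and a real image.

def image_distance(p: float, f: float) -> float:
    """Distance from a thin lens of focal length f to the image of a
    source located a distance p in front of it."""
    if p == f:
        raise ValueError("source at the focal point: image at infinity")
    return 1.0 / (1.0 / f - 1.0 / p)

# A point source 60 mm in front of a 50 mm lens is conjugated to a
# secondary source 300 mm behind the lens; moving the lens or changing
# f moves that conjugate point, hence the magnification, as in [0090].
print(image_distance(0.060, 0.050))
```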
[0091] The image obtained on the image sensor 20 may be exploited as such. Specifically, it has been shown that certain particles produce diffraction patterns the morphology of which is specific thereto. It is thus possible to count particles, and to identify them, as described in document WO2008090330, which was cited in the section relating to the prior art. As a variant, a reconstruction algorithm may be applied to this image so as to obtain a reconstructed image in a plane parallel to the detection plane P.sub.20, called the reconstruction plane P.sub.r, this plane being located at a known distance d.sub.r from the detection plane. It is then a question of using the well-known principles of digital holographic reconstruction, which principles are for example described in the publication Ryle et al., Digital in-line holography of biological specimens, Proc. of SPIE Vol. 6311 (2006), to calculate the product of convolution between the image intensity I(x,y) measured by each pixel 20.sub.(x,y) of the image sensor 20 and a propagation operator h(x,y,z). The function of the propagation operator h(x,y,z) is to describe the propagation of light between the image sensor 20 and a point of coordinates (x,y,z). The coordinates (x, y) are coordinates in the detection plane P.sub.20, whereas the coordinate z is a coordinate along the propagation axis Z. It is possible to reconstruct a complex expression I*(x,y,z) for the exposure wave 24 at every point of spatial coordinates (x,y,z) and in particular in a plane located at a reconstruction distance d.sub.r from the image sensor, i.e. a plane of equation z=d.sub.r. It is then possible to determine the amplitude u(x,y,z) and phase φ(x,y,z) of this exposure wave 24 at the reconstruction distance d.sub.r, with: [0092] u(x,y,z)=abs [I*(x,y,z=d.sub.r)]; and [0093] φ(x,y,z)=arg [I*(x,y,z=d.sub.r)],
abs and arg respectively designating the modulus and argument operators.
[0094] In this example, the detection plane P.sub.20 in which the image I is formed is assigned a coordinate z=0. The propagation operator may be such that:

h(x,y,z)=(z/(iλ))·(e.sup.i2πr/λ/r.sup.2)  (2)

where r={square root over (x.sup.2+y.sup.2+z.sup.2)}, and λ is the wavelength.
[0095] Such an operator was described in the publication Marathay A., On the usual approximation used in the Rayleigh-Sommerfeld diffraction theory, J. Opt. Soc. Am. A, Vol. 21, No. 4, April 2004. Other propagation operators are usable, for example an operator based on the Fresnel-Helmholtz function, such that:
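Paragraphs [0091] to [0095] describe the reconstruction as a convolution of the acquired image with a propagation operator. The sketch below, which is not taken from the patent, performs the equivalent operation in the frequency domain (angular-spectrum method) with NumPy; the square-root amplitude estimate and the parameter values in the comments are illustrative assumptions:

```python
import numpy as np

def reconstruct(intensity: np.ndarray, wavelength: float,
                pixel_pitch: float, z: float) -> np.ndarray:
    """Numerically propagate a hologram by a distance z along the
    optical axis using the angular-spectrum method, the frequency-domain
    equivalent of convolving with a propagation kernel h(x, y, z)."""
    ny, nx = intensity.shape
    fx = np.fft.fftfreq(nx, d=pixel_pitch)  # spatial frequencies (1/m)
    fy = np.fft.fftfreq(ny, d=pixel_pitch)
    FX, FY = np.meshgrid(fx, fy)
    # Keep propagating frequencies only; evanescent components are clipped.
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = (2.0 * np.pi / wavelength) * np.sqrt(np.maximum(arg, 0.0))
    transfer = np.exp(1j * kz * z)
    field = np.sqrt(intensity.astype(float))  # crude amplitude estimate
    return np.fft.ifft2(np.fft.fft2(field) * transfer)

# Illustrative call with the sensor parameters of [0055] and the 2 cm
# reconstruction distance of [0099]; with the sign convention chosen
# here, back-propagation toward the object uses a negative z:
# I_r = np.abs(reconstruct(I, wavelength=450e-9, pixel_pitch=2.2e-6, z=-0.02))
```

The sign convention for z and the clipping of evanescent waves are design choices of this sketch, not requirements of the method described in the patent.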
Experimental Trials
[0096] Experimental trials have been carried out using a configuration such as shown in
[0097] The device included an optical system 15 that was placed between the light source 11 and the object 10: it was a question of a tube lens of 50 mm focal length (reference AC254-050-A, manufacturer Thorlabs). This system was able to conjugate the light source 11 with a secondary source 11.sub.s, via the point source 11. This optical system was arranged such that the secondary source 11.sub.s was positioned between the object 10 and the image sensor 20, as shown in
[0098] The distance d.sub.15 between the optical system 15 and the objective 13 was varied, this distance being called the inter-optic distance, so as to move the position S of the secondary source 11.sub.s along the propagation axis Z of the light. The object 10 included a transparent reticle, which is shown in
[0099] Holographic reconstruction algorithms were applied to each of the images of FIGS. 6A, 6B and 6C. The holographic reconstruction algorithm implemented was based on the operator described by Expression (2), the reconstruction distance employed being z=2 cm, the coordinate z=0 corresponding to the detection plane P.sub.20.
[0100] Since the distance between two adjacent pixels was known, it was then possible to measure the transverse magnification generated by the optical system using the expression:

g.sub.x=(n.sub.pix·ℓ.sub.pix)/ℓ

where: [0101] ℓ is the distance between a preset number, here equal to 10, of successive graduations of the reticle; [0102] ℓ.sub.pix is the distance between two adjacent pixels of the image sensor 20; and [0103] n.sub.pix is the number of pixels between said preset number of successive graduations in the reconstructed image.
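The measurement of [0100]-[0103] can be sketched as follows; the function name, the symbol names and the numerical values are illustrative, not the patent's:

```python
# Sketch of the reticle-based magnification measurement: the ratio of
# the size measured on the sensor (n_pix pixels of known pitch) to the
# true size of the imaged graduations.

def measured_magnification(n_pix: int, pixel_pitch: float,
                           graduation_span: float) -> float:
    """Transverse magnification estimated from a reconstructed image of
    a reticle whose graduation span is known."""
    return (n_pix * pixel_pitch) / graduation_span

# Hypothetical numbers: 10 graduations spanning 1 mm on the reticle,
# covering 150 pixels of 2.2 um pitch, indicate |g_x| of about 0.33.
print(measured_magnification(150, 2.2e-6, 1e-3))
```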
[0107] These trials demonstrated the ability of the device to obtain an exploitable image of an object the size of which is larger than the size of the sensor, because of a magnification the absolute value of which is lower than 1. They also demonstrated that it is possible to easily modulate the position of the secondary source 11.sub.s and, therefore, to pass from a magnification higher than 1 to a magnification lower than 1, without modifying the position of the source, of the object and of the image sensor, and to do so with no magnifying optics placed between the object and the image sensor. Thus, while remaining within the field of lensless imaging, the magnification of the image of the object may be lower than 1, and may vary.
[0108] In another trial, the device of which is shown in
[0113] This once again demonstrates that it is possible to modulate the magnification of an image in a lensless-imaging type configuration, i.e. without magnifying optics placed between the object and the sensor.
[0114] The invention will possibly be used to observe samples, for example biological samples, or in other fields, for example the field of the food-processing industry.