Device and method for bimodal observation of an object
10724937 · 2020-07-28
Assignee
- Commissariat A L'energie Atomique Et Aux Energies Alternatives (Paris, FR)
- Biomerieux (Marcy-l'Etoile, FR)
Inventors
CPC classification
- G03H2001/005 (PHYSICS)
- G01N2015/1445 (PHYSICS)
- G03H1/0443 (PHYSICS)
- G01N2021/1738 (PHYSICS)
Abstract
A device including a light source, an image sensor, and a holder defining two positions between the light source and the image sensor. Each position is able to receive an object with a view to its observation. An optical system is placed between the two positions. Thus, when an object is placed in the first position, it may be observed, through the optical system, via a conventional microscopy modality. When an object is placed in the second position, it may be observed via a second, lensless imaging modality.
Claims
1. A device for observing an object, comprising: a light source and an image sensor, the light source being configured to emit an emission wave, along an emission axis, so that a light wave propagates, along an optical path, toward the image sensor, through an object; an optical system that is placed, on the optical path, between the light source and the image sensor; a holder that defines a first position and a second position, each position being configured to receive the object, the holder being configured such that: the first position is interposed, on the optical path, between the light source and the optical system, the optical system configured to conjugate the image sensor to the first position; and the second position is interposed, on the optical path, between the optical system and the image sensor, such that there is no magnifying optics between the second position and the image sensor.
2. The device of claim 1, wherein the optical path lies parallel to the emission axis so that the light source, the optical system, and the image sensor are aligned along the emission axis, the holder configured such that the first position and the second position are aligned along the emission axis.
3. The device of claim 2, wherein the holder is translationally movable in a direction that is perpendicular to the emission axis.
4. The device of claim 1, further comprising a first mirror placed, along the optical path, between the light source and the image sensor, the first mirror lying between the first and second positions and configured to reflect a light wave propagating from the first position to the second position.
5. The device of claim 4, further comprising: a second mirror placed between the first position and the second position, the first and second mirrors forming an assembly, the assembly configured to reflect a light wave propagating from the first position along an axis parallel to the emission axis, the first and second positions being offset along an axis that is perpendicular to the emission axis.
6. The device of claim 5, wherein the first and second positions are aligned along an offset axis, that is perpendicular to the emission axis.
7. The device of claim 1, wherein the holder includes a moving mechanism configured to move the object between the first and second positions.
8. The device of claim 6, wherein the holder comprises a moving mechanism configured to move the object between the first and second positions, and wherein the moving mechanism is configured to translate the object between the first and second positions, along the offset axis.
9. A method for observing an object using a device as claimed in claim 1, wherein the light source emits the emission wave at a wavelength, the object being transparent or translucent at the wavelength, the method comprising: a) positioning the object in the first position and obtaining a first image of the object using the image sensor, the object being conjugated with the image sensor by the optical system, the first image being formed using a conventional imaging modality; and/or b) positioning the object in the second position and obtaining a second image of the object using the image sensor, using a lensless imaging modality.
10. The method of claim 9, wherein a) and b) are carried out in succession, and wherein: the first image defines a first field of observation of the object; and the second image defines a second field of observation of the object; the second field of observation being larger than the first field of observation.
11. The method of claim 10, further comprising, following b): c) applying, to the second image, a propagation operator that takes into account a distance between the second position and the image sensor, to obtain a reconstructed image of the object placed in the second position.
12. The method of claim 9, wherein during a), a first object is positioned in the first position, the method further comprising: placing a second object in the second position; and obtaining, with the image sensor, a dual image, simultaneously showing the first object and the second object.
13. The method of claim 12, further comprising: d) performing a numerical reconstruction, which comprises applying a propagation operator to the dual image, to obtain a reconstructed dual image: in a plane extending through the first position, the reconstructed dual image corresponding to an image of the first object; and/or in a plane extending through the second position, the reconstructed dual image corresponding to an image of the second object.
14. The method of claim 13, further comprising: e) selecting, depending on their respective intensity, representative pixels in the reconstructed dual image, to form an image of the first object or the second object from the pixels selected.
15. The method of claim 14, wherein the reconstructed dual image comprises pixels, and wherein, in e), the selection of representative pixels includes: i) calculating, for each pixel of the reconstructed dual image, an indicator representing a dispersion of the intensity of adjacent pixels around the pixel; ii) comparing the indicator calculated for each pixel to a threshold; and iii) selecting the representative pixels on the basis of the comparison made in ii).
16. The method of claim 9, wherein the object occupies both the first position and the second position, the method further comprising: obtaining a first image defining a first field of observation of the object; and obtaining a second image defining a second field of observation of the object; the second field of observation being larger than the first field of observation.
17. A method for observing a first object and a second object, comprising: placing a light source and an image sensor such that the light source emits an emission light wave at a wavelength, the emission light wave propagating toward the image sensor, thereby defining an optical path; interposing a first object and a second object between the light source and the image sensor, the first and second objects being transparent or translucent at the wavelength, the first and second objects lying transversely to the optical path, such that the first and second objects are placed, on the optical path, on either side of an optical system, the optical system configured to conjugate a portion of the first object with the image sensor; and obtaining a dual image, on the image sensor, from the emission wave that propagates, from the light source, through the first and second objects, to the image sensor.
18. The method of claim 17, further comprising applying a numerical propagation operator to the dual image, to obtain a reconstructed dual image at a reconstruction distance from the sensor.
19. The method of claim 18, wherein the reconstructed dual image is representative of the second object, the reconstruction distance being a distance between the second object and the image sensor; and/or the reconstructed dual image is representative of the first object, the reconstruction distance being a distance between the first object and the image sensor.
Description
DESCRIPTION OF PARTICULAR EMBODIMENTS
(10) The object 10 may be a sample that it is desired to characterize. It may comprise a solid or liquid medium 10a that is transparent or translucent at said wavelength λ, in which medium, or on the surface of which medium, particles 10b are dispersed. By translucent, what is meant is that the object transmits all or some of a light wave that is incident thereon.
(11) The expression bodily liquid is understood to mean a liquid taken from an animal or human body, such as blood, urine, sweat, cerebrospinal fluid, lymph, etc. The term culture medium is understood to mean a medium conducive to the development of a biological species such as cells, bacteria or other microorganisms.
(12) The object may also be a tissue slide, or pathology slide, including a small thickness of tissue deposited on a transparent slide. It may also be a slide resulting from the application of a staining protocol suitable for finding a microorganism in a sample, for example a Gram or Giemsa stain. By small thickness, what is meant is a thickness that is preferably smaller than 100 μm, and preferably smaller than 10 μm, typically a few microns.
(13) The light source may be a light-emitting diode or a laser light source, such as a laser diode. It is preferably a point source.
(14) The device also includes an image sensor 20, which is able to form an image in a detection plane P_20. In the example shown, it is a matrix-array image sensor including a matrix array of CCD or CMOS pixels. CMOS image sensors are preferred because their pixels are smaller, which allows images of more favorable spatial resolution to be acquired. Image sensors whose inter-pixel pitch is smaller than 3 μm are preferred, in order to improve the spatial resolution of the image. The detection plane P_20 preferably lies perpendicular to the Z-axis along which the light wave 12 is emitted.
(15) The image sensor 20 is connected to an information-processing unit 30, for example a microprocessor, and to a screen 34. The microprocessor is connected to a memory 32, which includes instructions in particular allowing the numerical reconstruction algorithms described below to be implemented.
(16) The image sensor 20 may comprise a mirror-type system for redirecting images toward a pixel matrix array, in which case the detection plane corresponds to the plane in which the image-redirecting system lies. Generally, the detection plane P_20 corresponds to the plane in which an image is formed. Preferably, the detection plane P_20 is parallel to the planes P_10.1 and P_10.2 described above.
(17) The device includes an optical system 15 that is able to conjugate an object, positioned in the first position 10.1, with the image sensor 20. In other words, the optical system 15 allows a clear image of the object 10, positioned in the position 10.1, to be formed on the image sensor 20. Thus, when the object 10 is positioned in the first position 10.1, it is observed through the optical system 15, according to a conventional microscopy modality, a first image I_1 being formed on the image sensor 20.
(18) The second position 10.2 is located facing the image sensor 20, no magnifying or image-forming optics being placed between this position and the image sensor. This does not prevent focusing micro-lenses possibly being present level with each pixel of the image sensor 20, said lenses not having an image-magnifying function. Thus, when the object 10 is placed in this second position 10.2, it is observed according to a lensless imaging modality, a second image I_2 being formed on the image sensor 20.
(19) Thus, the holder 16 allows an object to be held in the first position 10.1, with a view to its observation in the first, conventional-microscopy modality, or in the second position 10.2, with a view to its observation in the second, lensless modality.
(20) In the second imaging modality, because magnifying optics are absent between the image sensor 20 and the second position, the second image I_2 obtained on the image sensor 20 represents an observation of the object 10 in a second field of observation that is preferably larger than the first field of observation.
(21) The second image I_2 obtained on the image sensor 20 may be exploited as such. Specifically, it is known that certain particles produce diffraction patterns the morphology of which is specific thereto. It is thus possible to count these particles, or even to identify them. This is for example described in document WO2008090330, which was cited with reference to the prior art. Thus, a user may make an observation of the object using this second modality, so as to obtain an observation with a large field of observation. He may then target certain zones of interest of the object, and obtain a more detailed image thereof, with a narrower field of observation, by placing the object in the first position 10.1. The device thus allows observation of a detail of the object with a narrow field of observation, through the optical system 15, by virtue of the first modality, to be alternated with an observation of the object, with a large field of observation, by virtue of the second modality.
(22) According to one variant, a reconstruction algorithm may be applied to the image I_2 obtained using the second modality, so as to obtain a reconstructed image in a plane parallel to the detection plane P_20, called the reconstruction plane P_z, placed at a known distance d_r, called the reconstruction distance, from the detection plane P_20, along the propagation axis Z. It is then a question of applying the principles of numerical holographic reconstruction, which are for example described in the publication Ryle et al., "Digital in-line holography of biological specimens", Proc. of SPIE Vol. 6311 (2006), i.e. of computing the convolution of the second image I_2 with a propagation operator h(x,y,z). The propagation operator h(x,y,z) describes the propagation of the light between the image sensor 20 and a point of coordinates (x,y,z). The coordinates (x,y) are the coordinates of pixels in the detection plane P_20, whereas the coordinate z is a coordinate along the propagation axis Z. The convolution of the image with the propagation operator allows a complex expression I_2,z(x,y) to be reconstructed for the exposure wave 24 at any point of spatial coordinates (x,y,z), and in particular in a plane located at a reconstruction distance d_r from the image sensor, having the equation z = d_r. A reconstructed image I_2,z=dr is then obtained. It is then possible to determine the amplitude u(x,y,z) and the phase φ(x,y,z) of the exposure wave 24 at the reconstruction distance d_r, where:
u(x,y,z=d_r) = abs[I_2,z=dr(x,y)]; and
φ(x,y,z=d_r) = arg[I_2,z=dr(x,y)].
(23) The operators abs and arg are the modulus and argument, respectively.
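By way of illustration, the modulus and argument operations of paragraph (23) map directly onto standard numerical primitives. The following sketch (Python; the array name I2_rec and the input file are hypothetical placeholders, not taken from the patent) extracts the amplitude and phase of a reconstructed complex field:

```python
import numpy as np

# I2_rec: hypothetical 2-D complex array holding the reconstructed
# complex expression I_2,z=dr(x, y) of the exposure wave.
I2_rec = np.load("reconstructed_field.npy")  # placeholder input

amplitude = np.abs(I2_rec)    # u(x, y, z = d_r) = abs[I_2,z=dr(x, y)]
phase = np.angle(I2_rec)      # phi(x, y, z = d_r) = arg[I_2,z=dr(x, y)]
```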
(24) In this example, the detection plane P_20 is assigned the coordinate z = 0. The propagation operator may be such that:

(25) h(x,y,z) = (1/(jλ)) · (z/r²) · e^(j2πr/λ),

where r = √(x² + y² + z²), and λ is the wavelength.
(26) Such an operator was described in the publication Marathay, A., "On the usual approximation used in the Rayleigh-Sommerfeld diffraction theory", J. Opt. Soc. Am. A, Vol. 21, No. 4, April 2004.
(27) Other propagation operators are usable, for example an operator based on the Fresnel-Helmholtz function, such that:

(28) h(x,y,z) = (1/(jλz)) · e^(j2πz/λ) · exp(jπ(x² + y²)/(λz)).
(29) When the reconstruction is carried out in the direction of propagation of the light, for example from an object plane P_10.1 or P_10.2 to the detection plane P_20, propagation is spoken of. When the reconstruction is carried out in the direction opposite to the propagation of the light, for example from the detection plane P_20 to a plane located between the light source and said detection plane, an object plane P_10.1 or P_10.2 for example, back propagation is spoken of. In the rest of the text, the axis of propagation of the light is considered to be oriented from the light source 11 to the image sensor 20, and the coordinate z = 0 is considered to correspond to the detection plane P_20, in which the image sensor 20 lies.
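As an illustration of paragraphs (24) to (29), the sketch below implements the convolution with the operator of paragraph (25) by multiplication in the Fourier domain. It is a minimal sketch, not the patent's implementation: the FFT-based circular convolution, the sampling of h on the sensor grid, and the use of the conjugate kernel for back propagation (z < 0) are implementation choices.

```python
import numpy as np

def rs_kernel(shape, pitch, wavelength, z):
    """Sample the propagation operator h(x, y, z) of paragraph (25)
    on a grid with the sensor's pixel pitch; r = sqrt(x^2 + y^2 + z^2)."""
    ny, nx = shape
    x = (np.arange(nx) - nx // 2) * pitch
    y = (np.arange(ny) - ny // 2) * pitch
    X, Y = np.meshgrid(x, y)
    r = np.sqrt(X**2 + Y**2 + z**2)
    h = abs(z) / (1j * wavelength * r**2) * np.exp(2j * np.pi * r / wavelength)
    # Back propagation (z < 0) is taken here as convolution with the
    # conjugate kernel, one common convention.
    return np.conj(h) if z < 0 else h

def propagate(img, pitch, wavelength, z):
    """Convolve an image with h(x, y, z) via FFTs (circular convolution)."""
    h = np.fft.ifftshift(rs_kernel(img.shape, pitch, wavelength, z))
    return np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(h)) * pitch**2
```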
(30) As already described, the device shown in
(32) The image sensor 20 is a CMOS sensor sold by Aptina under the reference Micron MT9P031. It is a monochromatic CMOS sensor comprising 2592×1944 pixels of 2.2 μm side length, forming a detection surface with an area of 24.4 mm².
(33) The optical system 15 comprises an objective 15.1, of Motic brand, of reference EF-N plan 40X, of 0.65 numerical aperture, and of focal length f_1 = 4.6 mm. This objective is placed at a distance equal to 10 cm from the distal end of the optical fiber 13. It is placed at a distance of about 300 μm from the first position 10.1 of the device, and is placed in contact with a tube lens 15.2 (reference AC 254-050-A, manufacturer Thorlabs) of 25 mm diameter and of 50 mm focal length. The image sensor is placed at a distance of 43.7 mm from the tube lens 15.2. The first position and the second position lie at distances d_1 = 26.2 mm and d_2 = 15.3 mm from the image sensor 20, respectively. This optical system allows a first image I_1 of an object 10 placed in the first position 10.1 to be formed on the image sensor, said image being assigned a magnification factor equal to the ratio of the focal lengths, i.e. about 10.8.
(34) The second image I_2 was subjected to a holographic reconstruction, by convolution with a propagation operator such that:

(35) h(x,y,z) = (1/(jλ)) · (z/r²) · e^(j2πr/λ),

where r = √(x² + y² + z²).
(36) The reconstructed image I_2,z=d2 corresponds well to the graduated test pattern. It will be noted that the second position 10.2 allows an image to be obtained whose field of observation is about 6.1 mm × 4.6 mm, to be compared with the field of observation obtained when the object is placed in the first position 10.1, which is 528 μm × 396 μm in extent.
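The two fields of observation quoted in paragraph (36) follow directly from the sensor geometry of paragraph (32) and the magnification of paragraph (33); a quick check (Python):

```python
# Field-of-view check from paragraphs (32), (33) and (36).
pixels = (2592, 1944)      # sensor definition, paragraph (32)
pitch_um = 2.2             # pixel side length, micrometres
magnification = 10.8       # ~ ratio of focal lengths 50 mm / 4.6 mm

sensor_mm = [n * pitch_um / 1000 for n in pixels]
print(sensor_mm)           # ~ [5.70, 4.28] mm -> detection area ~ 24.4 mm^2

fov_um = [n * pitch_um / magnification for n in pixels]
print(fov_um)              # ~ [528.0, 396.0] micrometres, as in paragraph (36)
# The lensless field (~ 6.1 mm x 4.6 mm) slightly exceeds the detection
# surface owing to the divergence of the illumination beam.
```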
(37) In the preceding trial, in succession, a first object, namely the USAF 1951 test pattern, was placed in the first position 10.1, then a second object, namely the graduated test pattern, was placed in the second position 10.2.
(38) Alternatively, a first object may be placed in the first position 10.1 at the same time as a second object is placed in the second position 10.2, the two objects then being observed simultaneously.
(39) The image sensor 20 then forms what is called a dual image I_3, representing a spatial distribution of the intensity of the exposure wave 24 in the detection plane P_20. This dual image I_3 is representative of the first object and of the second object. It is then necessary to distinguish, in said dual image I_3, a contribution I_3-1, corresponding to an image of the first object, and a contribution I_3-2, corresponding to an image of the second object.
(40) According to one embodiment, shown in
(41) The same effect may be obtained, using the device shown in
(42) According to another embodiment, the dual image I_3 is subjected to a numerical reconstruction algorithm allowing the respective contributions of the first object and of the second object to be distinguished as a function of their distances d_1, d_2 with respect to the image sensor 20, along the optical path 14.
(43) Thus, after a dual image I_3 has been obtained, the latter is subjected to a convolution with a propagation operator h(x,y,z), the z coordinate corresponding either to the distance d_1 (z = ±d_1) or to the distance d_2 (z = ±d_2). The indication z = ±d_1 corresponds to the fact that the propagation may take place in the direction of the propagation axis Z, or in the opposite direction, the latter case corresponding to a back propagation. In the following examples, back propagations (z < 0) are performed, though it will be understood that a propagation with positive values of z could also be envisioned. The coordinate z = 0 corresponds to the detection plane P_20. The back propagations of the dual image I_3 by the first distance d_1 and the second distance d_2, respectively, are referred to using the notations I_3,z=-d1 and I_3,z=-d2. They respectively correspond, discounting reconstruction noise, to an image I_1 that is representative of the first object, placed in the position 10.1, and to an image I_2 that is representative of the second object, placed in the position 10.2, with:
I_1 ≈ I_3,z=-d1 = I_3 * h_z=-d1; and
I_2 ≈ I_3,z=-d2 = I_3 * h_z=-d2.
(44) The propagation operator used is that described in the preceding example. The notation h_z=-d1 designates the propagation operator h(x,y,z) taken at the coordinate z = -d_1.
(45) As in the preceding example, the first and second positions are placed at distances d_1 = 26.2 mm and d_2 = 15.3 mm from the image sensor, respectively.
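Numerically, the two back propagations of paragraphs (43) to (45) amount to two calls to the propagate() sketch given after paragraph (29). In the snippet below the dual-image file and the illumination wavelength are assumptions (the wavelength is not restated in this passage); the distances and pixel pitch come from paragraphs (32) and (45):

```python
import numpy as np

WAVELENGTH = 405e-9         # assumed illumination wavelength (not given here)
PITCH = 2.2e-6              # pixel pitch, paragraph (32)
D1, D2 = 26.2e-3, 15.3e-3   # object-to-sensor distances, paragraph (45)

I3 = np.load("dual_image.npy")                  # hypothetical dual image I_3
I1_rec = propagate(I3, PITCH, WAVELENGTH, -D1)  # ~ image of the first object
I2_rec = propagate(I3, PITCH, WAVELENGTH, -D2)  # ~ image of the second object
```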
(47) However, the quality of the images of each object may be improved by implementing another reconstruction method, applied to the dual image I_3. The dual image is subjected to a plurality of reconstructions, in a range of distances d_r comprising both the first distance d_1 and the second distance d_2. Thus, d_1 ≥ d_r ≥ d_2. A stack of images I_z, reconstructed in various reconstruction planes P_z parallel to the detection plane P_20, is obtained, each reconstructed image I_z being such that:
(48) I_z=dr = I_3 * h_z=dr, with d_r comprised between z_min and z_max, z_min and z_max being the limits of the range in which the reconstruction is carried out, this range containing the distances d_1 and d_2.
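The reconstruction stack of paragraphs (47) and (48) can be obtained by sweeping the reconstruction distance between the two object planes; a short sketch reusing the names defined in the previous snippet (the number of planes is an arbitrary choice):

```python
import numpy as np

# Stack of reconstructions spanning both object planes, paragraph (48);
# negative z values follow the back-propagation convention of paragraph (43).
z_values = np.linspace(-D1, -D2, 23)   # 23 planes: an arbitrary sampling
stack = [propagate(I3, PITCH, WAVELENGTH, z) for z in z_values]
```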
(49) For all or some of the pixels I_z(x,y) of each image I_z reconstructed at the reconstruction distance z, a coefficient, called the Tamura coefficient, C_z(x,y), is determined, this coefficient being such that:

(50) C_z(x,y) = √( σ_z(x,y) / Ī_z(x,y) ),

where: σ_z(x,y) is the standard deviation of the intensity of the image I_z in a region of interest centered on the pixel (x,y); n is an odd integer, the Tamura coefficient relative to a pixel (x,y) being determined depending on a group of n × n pixels (i,j) of the image I_z, said pixels being located in the region of interest centered on the pixel (x,y) and extending n pixels along the X-axis and n pixels along the Y-axis, the axes X and Y being perpendicular to the propagation axis Z, such that the reconstructed image I_z lies in a plane parallel to the axes X and Y; n is for example equal to 7 and is usually comprised between 3 and 15; and Ī_z(x,y) is the average of the image I_z in said region of interest centered on the pixel (x,y), such that:

(51) Ī_z(x,y) = (1/n²) · Σ_(i,j) I_z(i,j),

the sum running over the n × n pixels (i,j) of the region of interest.
(52) To determine a Tamura coefficient on the border of an image, the reconstructed image I_z is extended, beyond each of its borders, with virtual pixels whose value is set to the average value of the pixels of this image.
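A possible implementation of the Tamura coefficient of paragraphs (49) to (52), including the mean-value padding of the borders, is sketched below (the sliding-window formulation is an implementation choice, not taken from the patent):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def tamura_map(Iz, n=7):
    """Tamura coefficient C_z(x, y) = sqrt(sigma / mean) over an n x n
    region of interest (n odd), computed on the modulus of the
    reconstructed image I_z; borders are padded with the image mean,
    as in paragraph (52)."""
    assert n % 2 == 1, "n must be an odd integer"
    amp = np.abs(Iz).astype(float)
    pad = n // 2
    padded = np.pad(amp, pad, mode="constant", constant_values=amp.mean())
    windows = sliding_window_view(padded, (n, n))   # shape (ny, nx, n, n)
    mean = windows.mean(axis=(-2, -1))
    std = windows.std(axis=(-2, -1))
    return np.sqrt(std / np.maximum(mean, 1e-12))   # guard against zero mean
```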
(53) In each reconstructed image I_z, each pixel (x,y) whose Tamura coefficient C_z(x,y) exceeds a threshold value s_z is considered to be significant and is assigned a preset intensity value i_z, depending on the coordinate z of the plane P_z in which the reconstructed image I_z lies. Each pixel (x,y) associated with a Tamura coefficient C_z(x,y) lower than the threshold value s_z is given a zero intensity value. In other words, each reconstructed image I_z is binarized: the significant pixels are assigned the intensity value i_z, depending on the reconstruction distance z, whereas the other pixels are assigned an intensity of zero. The threshold value s_z considered for each reconstructed image I_z may be preset, or set depending on the distribution of the Tamura coefficients C_z(x,y) in the image I_z: it may for example be the mean, the median or another fractile of this distribution.
(54) Binarized reconstructed images whose reconstruction distances correspond to the first position 10.1 (i.e. z = d_1) and to the second position 10.2 (i.e. z = d_2), respectively, may then be formed.
(55) It is also possible to form an overall image containing all of the values of the significant pixels I_z(x,y) of the stack of binarized reconstructed images, with z comprised between z_min and z_max. Since the intensity i_z of each significant pixel depends on the reconstruction distance z, it is possible to obtain, in a single image, a representation of the two observed objects.
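Binarization against a threshold s_z and the depth-coded overall image of paragraphs (53) to (55) can then be sketched as follows, reusing stack, z_values and tamura_map() from the snippets above (the median threshold is one of the options mentioned in paragraph (53); the intensity coding i_z is an arbitrary choice):

```python
import numpy as np

def binarize(Cz, iz, threshold=None):
    """Assign the preset intensity i_z to significant pixels, zero to
    the others; by default the threshold s_z is the median of C_z."""
    if threshold is None:
        threshold = np.median(Cz)
    return np.where(Cz > threshold, iz, 0.0)

# Overall image: the intensity of each significant pixel encodes its
# reconstruction distance, paragraph (55).
overall = np.zeros(np.shape(stack[0]))
for k, Iz in enumerate(stack):
    bz = binarize(tamura_map(Iz), iz=float(k + 1))
    overall = np.where(bz > 0, bz, overall)
```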
(56) Thus, according to this embodiment, on the basis of a dual image I_3, it is possible: to apply a numerical propagation algorithm, so as to obtain reconstructed images I_3,z=d1 and I_3,z=d2 in reconstruction planes corresponding to the positions 10.1 and 10.2, respectively; to select, in each reconstructed image, what are called significant pixels, the selection being carried out using the intensity of said pixels in the reconstructed image; and to form each reconstructed image using only the pixels thus selected.
(57) The selection may be made on the basis of an indicator, associated with each pixel I_3,z=d1(x,y), I_3,z=d2(x,y), this indicator representing a dispersion of the intensity of the pixels in a zone of interest centered on said pixel. This indicator may be normalized by an average value of the intensity of the image I_z in said zone of interest. This indicator may be a Tamura criterion C_z(x,y), as explained above. It will be noted that recourse to such an indicator, applied to holographic reconstruction, has been described in the publication Pitkäaho, Tomi, et al., "Partially coherent digital in-line holographic microscopy in characterization of a microscopic target", Applied Optics, Vol. 53, No. 15, 20 May 2014.
(58) The selection may also be made on the basis of a thresholding of the intensity of each pixel of the reconstructed image I_z, with respect to a preset intensity threshold s_z.
(59) This embodiment may also include a step in which each representative pixel is assigned an intensity value i_z that is dependent on the reconstruction distance z. In this case, an overall image including all of the representative pixels of all of the reconstructed images may be formed, in which image the intensity of the pixels indicates the distance between the object represented by said pixel and the image sensor.
(60) The obtainment of a dual image and the formation of two images that are representative of the first object and of the second object, respectively, are not tied to the device described above; they may also be implemented with the variants described below.
(63) As in the examples described above, the device 1 makes it possible to observe, using two different imaging modalities, an object placed either in the first position or in the second position. It also allows an object, called the first object, placed in the first position 10.1, and an auxiliary object, called the second object, placed in the second position 10.2, to be observed simultaneously.
(64) According to one variant, shown in
(66) The invention may be implemented in the observation of biological samples, or of samples from food processing or other industrial fields.