Method for observing a sample in two spectral bands, and at two magnifications
10866398 · 2020-12-15
Assignee
Inventors
CPC classification
G02B21/18
PHYSICS
G02B21/361
PHYSICS
International classification
G01J3/00
PHYSICS
G02B21/18
PHYSICS
Abstract
Device for observing a sample, comprising: a light source configured to emit an incident light wave in a first spectral band and in a second spectral band, the first spectral band being different from the second spectral band; an image sensor comprising a first group of pixels, which are sensitive to the first spectral band, and a second group of pixels, which are sensitive to the second spectral band; a holder configured to hold the sample in a holder plane, the holder plane lying between the light source and the image sensor, such that when the sample is held by the holder, under the effect of its illumination by the incident light wave, a light wave of interest propagates to the image sensor; the device being configured to direct, to the image sensor, a first component of the light wave of interest, in the first spectral band, and a second component of the light wave of interest, in the second spectral band.
Claims
1. A device for observing a sample, comprising: a light source configured to emit an incident light wave in a first spectral band and in a second spectral band, the first spectral band being different from the second spectral band; an image sensor comprising a first group of pixels, which are sensitive to the first spectral band, and a second group of pixels, which are sensitive to the second spectral band; a holder configured to hold the sample in a holder plane, the holder plane lying between the light source and the image sensor, such that when the sample is held by the holder, under the effect of an illumination by the incident light wave, a light wave of interest propagates to the image sensor, the light wave of interest being the result of the propagation of the incident light wave through the sample or from reflection of the incident light by the sample; a first splitter, placed between the holder plane and the image sensor, so as: to transmit a first component of the light wave of interest through a first optical channel, the first optical channel extending to a second splitter; to reflect a second component of the light wave of interest to a second optical channel that is different from the first optical channel, the second optical channel extending to the second splitter; a first filter and a second filter, which are placed downstream of the first splitter in order to filter the first component in the first spectral band and the second component in the second spectral band, respectively; a first optical system, which defines a first magnification and which at least partially lies in the first optical channel, and a second optical system, which defines a second magnification and which at least partially lies in the second optical channel; the second splitter being placed between the first splitter and the image sensor, and arranged so as to recombine the first component and the second component, such that downstream of the second splitter, the first 
component and the second component propagate to the image sensor parallel to one and the same propagation axis; wherein the first magnification is strictly higher or strictly lower than the second magnification.
2. The device according to claim 1, comprising a processing unit programmed to form: a first image from the first group of pixels; a second image from the second group of pixels.
3. The device according to claim 1, comprising an infinity corrected objective placed between the holder plane and the first splitter, and: a first tube lens lying in the first optical channel, the first tube lens forming, with the infinity corrected objective, the first optical system; a second tube lens lying in the second optical channel, the second tube lens forming, with the infinity corrected objective, the second optical system.
4. The device according to claim 1, wherein the first splitter comprises a half-silvered mirror and/or wherein the second splitter comprises a half-silvered mirror.
5. The device according to claim 1, wherein the second optical channel comprises mirrors, so as to direct the second component, i.e. the component reflected by the first splitter, to the second splitter.
6. The device according to claim 1, wherein the holder plane defines two half-spaces, such that: the light source and the image sensor belong to two different half-spaces, respectively; or the light source and the image sensor belong to the same half-space.
7. A method for observing a sample using a device according to claim 1, the method comprising: placing the sample on the holder; illuminating the sample using the light source; acquiring an image with the image sensor; from the acquired image, forming a first image of the sample using the first group of pixels, and forming a second image of the sample using the second group of pixels; such that the first image shows the sample at the first magnification defined by the first optical system; the second image shows the sample at the second magnification defined by the second optical system.
8. The method according to claim 7, wherein: the first optical system defines a first object plane and a first image plane, and wherein the sample lies in a sample plane; the image sensor defines a detection plane; the method being such that: the sample plane is offset with respect to the first object plane by a first object defocus distance smaller than 1 mm; and/or the detection plane is offset with respect to the first image plane by a first image defocus distance smaller than 1 mm; such that the first image is a defocused image of the sample.
9. The method according to claim 7, also comprising reconstructing a complex image of the first component from the defocused first image.
10. The method according to claim 7, wherein: the second optical system defines a second object plane and a second image plane, and wherein the sample lies in a sample plane; the image sensor defines a detection plane; the method being such that: the sample plane is offset with respect to the second object plane by a second object defocus distance smaller than 1 mm; and/or the detection plane is offset with respect to the second image plane by a second image defocus distance smaller than 1 mm; such that the second image is a defocused image of the sample.
11. The method according to claim 10, also comprising reconstructing a complex image of the second component from the defocused second image.
12. The method according to claim 11, wherein the light source simultaneously emits an incident light wave in the first spectral band and in the second spectral band, such that the first image and the second image are formed simultaneously from one and the same image acquired by the image sensor.
13. The method according to claim 7, wherein the light source successively emits an incident light wave in the first spectral band and in the second spectral band, such that the first image and the second image are formed successively from two images successively acquired by the image sensor.
Description
FIGURES
DESCRIPTION OF PARTICULAR EMBODIMENTS
(14) The light source 11 may be a white light source, or be composed of elementary light sources, each elementary source emitting in various spectral bands. The light source is configured to emit the incident light wave 12 in at least a first spectral band λ.sub.1 and a second spectral band λ.sub.2. The first spectral band λ.sub.1 and the second spectral band λ.sub.2 may be emitted simultaneously, this being a preferred configuration. This is notably the case when the light source is a white light source, or, generally, when the light source emits in a spectral band encompassing the first spectral band λ.sub.1 and the second spectral band λ.sub.2. The light source is for example an LED source (LED standing for light-emitting diode).
(15) The configuration shown in
(16) Under the effect of the illumination by the incident light wave 12 emitted by the light source 11, the sample 10 transmits a light wave 14, called the light wave of interest, the latter propagating to the image sensor 20, parallel to a first propagation axis Z. The sample 10 must be transparent or translucent enough for the light wave of interest 14 to be exploitable.
(17) The sample 10 may be a liquid, for example a biological liquid containing particles. The term particle designates, for example, biological particles, such as cells or microorganisms. It may also designate droplets of an emulsion, or microbeads used in certain biological protocols. It may also designate bubbles of air or of another gas suspended in a solid or liquid medium. It may also designate liquid or solid particles in suspension in a gas. Generally, the particles are micrometric or millimetric in size. This means that the particles are inscribed in a disc or a sphere the radius of which is smaller than or equal to a few microns, typically 10 μm or 100 μm, or a few millimetres, 1 or 2 mm for example. In one example of an application (see
(18) The sample may also be solid. It may be a question of a dry content of a biological fluid, blood for example. It may also be a thin slice of tissue, such as those of pathology slides. In this case, the slice is sufficiently thin to be observable in transmission. Its thickness may notably be comprised between 10 μm and 100 μm.
(19) The sample may also be a culture medium, such as agar jelly, on which bacteria or bacterial colonies are developing.
(20) Preferably, the incident light wave 12 illuminates an area of sample larger than 1 mm.sup.2. The area of sample illuminated may be comprised between 1 mm.sup.2 and 30 or 40 mm.sup.2. Hence, the sample may be observed with a large field of observation.
(21) The device 1 comprises a first splitter 13, here a half-silvered mirror, configured to transmit a first component 14.sub.1 of the light wave of interest 14 parallel to the first propagation axis Z. For example, the first splitter 13 allows 50% of the light wave of interest 14 to be transmitted along the first propagation axis Z. By transmit, what is meant is that the light wave propagates, upstream and downstream of the splitter, along the same propagation axis. The first splitter 13 reflects a second component 14.sub.2 of the light wave of interest 14 transversely to the axis Z, for example along a second axis X perpendicular to the first axis Z. In the example shown, the second component 14.sub.2 of the light wave of interest 14, i.e. the component reflected along the second axis X, corresponds to 50% of the light wave of interest 14.
(22) The device 1 is configured such that the first and second components of the light wave of interest that propagate downstream of the first splitter are contained in the first spectral band λ.sub.1 and in the second spectral band λ.sub.2, respectively. The terms upstream/downstream are to be interpreted with respect to the direction of propagation of the light, from the light source 11 to the image sensor 20. The device 1 comprises a first spectral filter 15.sub.1, and a second spectral filter 15.sub.2, which are placed downstream of the first splitter 13. In this example, the first filter 15.sub.1 and the second filter 15.sub.2 are bandpass filters, so as to transmit only the first spectral band λ.sub.1 and the second spectral band λ.sub.2, respectively. Thus: downstream of the first filter 15.sub.1, the first component 14.sub.1 of the light wave of interest 14 lies in the first spectral band λ.sub.1; downstream of the second filter 15.sub.2, the second component 14.sub.2 of the light wave of interest 14 lies in the second spectral band λ.sub.2.
(23) The bandwidth of the first spectral band λ.sub.1 and of the second spectral band λ.sub.2 is preferably narrower than or equal to 100 nm, or even to 50 nm. By bandwidth, what is meant is the full width at half maximum of a spectral band. Preferably, the first spectral band λ.sub.1 and the second spectral band λ.sub.2 do not overlap, or do so only marginally. This means that, in case of overlap, less than 5% or less than 1% of the intensity of each component is found in the range of overlap.
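The "marginal overlap" criterion above can be checked numerically. The sketch below is a non-authoritative illustration: it assumes two Gaussian spectral profiles with hypothetical centre wavelengths (450 nm and 550 nm) and 40 nm bandwidths (FWHM), and estimates the fraction of each band's intensity found in the range of overlap.

```python
import numpy as np

def gaussian_band(wl, center, fwhm):
    """Normalized Gaussian spectral intensity profile."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    return np.exp(-0.5 * ((wl - center) / sigma) ** 2)

# Hypothetical bands: centres 450 nm and 550 nm, 40 nm bandwidth (FWHM)
wl = np.linspace(400e-9, 700e-9, 30001)
s1 = gaussian_band(wl, 450e-9, 40e-9)
s2 = gaussian_band(wl, 550e-9, 40e-9)

# Range of overlap: wavelengths where both spectra are non-negligible
overlap = (s1 > 0.01 * s1.max()) & (s2 > 0.01 * s2.max())

# Fraction of each band's intensity found in the range of overlap
frac1 = s1[overlap].sum() / s1.sum()
frac2 = s2[overlap].sum() / s2.sum()
```

With these hypothetical values the overlapping fraction is well under the 5% bound; narrowing the bandwidths or spacing the centres further apart reduces it further.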
(24) In the example shown in
(25) The device 1 comprises a first optical system 16.sub.1 defining a first magnification G.sub.1. The first magnification may be lower or higher than 1, or equal to 1. In the example shown in
(26) The device 1 comprises a second optical system 16.sub.2, defining a second magnification G.sub.2, which is preferably different from the first magnification G.sub.1. In the examples shown in
(27) The fact that the first and second optical systems 16.sub.1, 16.sub.2 are each formed by an association of an infinity corrected objective and of a tube lens allows optical components to be placed between the objective and each tube lens. It is in this example a question of the first splitter 13 and of the filters 15.sub.1 and 15.sub.2.
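For an infinity corrected objective paired with a tube lens, the magnification of the resulting optical system is the ratio of the tube-lens focal length to the objective focal length. The sketch below assumes a hypothetical 20 mm objective focal length shared by both channels; the 75 mm and 200 mm tube-lens focal lengths are those cited later in the description.

```python
def channel_magnification(f_tube_mm: float, f_objective_mm: float) -> float:
    """Magnification of an infinity corrected objective + tube lens pair."""
    return f_tube_mm / f_objective_mm

# Hypothetical example: a 20 mm objective shared by both channels,
# combined with 75 mm and 200 mm tube lenses.
g1 = channel_magnification(75.0, 20.0)    # first optical system 16.1
g2 = channel_magnification(200.0, 20.0)   # second optical system 16.2
```

With these assumed values the first channel gives a magnification of 3.75 and the second a magnification of 10, illustrating how two different magnifications follow from one shared objective and two different tube lenses.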
(28) The device 1 comprises a second splitter 17, placed between the first optical system 16.sub.1 and the image sensor 20. The second splitter 17 is also placed between the first splitter 13 and the image sensor 20. The second splitter 17 may be similar to the first splitter 13. The second splitter 17 transmits a proportion, for example 50%, of the first component 14.sub.1 of the light wave of interest 14 transmitted by the first splitter 13 and propagating along the first propagation axis Z. The second splitter 17 reflects, to the image sensor 20, a proportion, for example 50%, of the second component 14.sub.2 of the light wave of interest 14. The second splitter 17 is configured such that, downstream of the latter, the first and second components of the light wave of interest propagate parallel to each other, about the same propagation axis, in the present case the axis Z, to the image sensor 20. Thus, downstream of the second splitter 17, the first and second components of the light wave of interest are coaxial.
(29) According to one variant shown in
(30) Thus, downstream of the second splitter 17, the first and second components of the light wave of interest propagate to the image sensor 20 along the same propagation axis, in the present case the axis X.
(31) The device 1 defines, between the first splitter 13 and the second splitter 17: a first optical channel C.sub.1, which in this example lies parallel to the axis Z, and through which the first component 14.sub.1 of the light wave of interest 14 propagates; a second optical channel C.sub.2, through which the second component 14.sub.2 of the light wave of interest 14 propagates.
(32) In the examples shown in
(33) Generally: the first optical channel C.sub.1 is configured to spectrally filter the first component 14.sub.1 of the light wave of interest, in the first spectral band λ.sub.1, and to magnify the first component 14.sub.1 by a first magnification G.sub.1; the second optical channel C.sub.2 is configured to spectrally filter the second component 14.sub.2 of the light wave of interest, in the second spectral band λ.sub.2, and to magnify the second component 14.sub.2 by a second magnification G.sub.2.
(34) Because of the spectral filtering carried out by the first filter 15.sub.1 and by the second filter 15.sub.2, when the first and second components of the light wave of interest 14 reach the second splitter 17, they lie in the first spectral band λ.sub.1 and in the second spectral band λ.sub.2, respectively.
(35) The image sensor 20 is placed downstream of the second splitter 17. It receives the first and second components of the light wave of interest 14. The image sensor is a polychromatic sensor comprising a matrix array of pixels. It is for example a question of a CCD or CMOS sensor. The pixels are placed in a detection plane P.sub.20. As shown in
(36) The image sensor 20 is placed at a distance equal to the first focal length (75 mm) from the first optical system 16.sub.1 and at a distance equal to the second focal length (200 mm) from the second optical system 16.sub.2. It will be understood that the use of a first optical channel C.sub.1 and of a second optical channel C.sub.2 allows this placement: the distance between the image sensor and each optical system is adjusted depending on their respective focal lengths.
(37) Each image I acquired by the image sensor may simultaneously contain: information on the first component 14.sub.1 of the light wave of interest, in the first spectral band λ.sub.1, distributed over the first group of pixels 20.sub.1; information on the second component 14.sub.2 of the light wave of interest, in the second spectral band λ.sub.2, distributed over the second group of pixels 20.sub.2.
(38) The image sensor is connected to a processing unit 22, a microprocessor for example, said processing unit being connected to a memory 23 and to a display 24. The processing unit 22 or the image sensor 20 is configured to form a first image I.sub.1, from the information collected by the first group of pixels 20.sub.1. The first image I.sub.1 allows the sample to be observed at the first magnification G.sub.1. The processing unit 22 or the image sensor 20 is configured to form a second image I.sub.2, from the information collected by the second group of pixels 20.sub.2. The second image I.sub.2 allows the sample to be observed at the second magnification G.sub.2.
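The splitting of one acquired image I into a first image I.sub.1 and a second image I.sub.2 can be sketched as follows. The pixel layout here is an assumption (a Bayer-like mosaic in which the two groups of pixels occupy interleaved positions); the actual sensor layout is not specified by this description.

```python
import numpy as np

def split_pixel_groups(raw: np.ndarray):
    """Split a raw mosaic frame into two sub-images from two pixel groups.

    Hypothetical layout: the first group of pixels occupies even rows and
    even columns, the second group odd rows and odd columns.
    """
    i1 = raw[0::2, 0::2]  # first group of pixels  -> first image I1
    i2 = raw[1::2, 1::2]  # second group of pixels -> second image I2
    return i1, i2

raw = np.arange(16, dtype=float).reshape(4, 4)  # toy 4x4 acquired image I
i1, i2 = split_pixel_groups(raw)
```

Each sub-image then shows the sample in one spectral band and at one magnification, both extracted from the same acquisition.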
(39) Thus, the first optical channel C.sub.1 allows the image sensor 20 to form a first image I.sub.1 of the sample, in the first spectral band λ.sub.1. This allows the sample 10 to be imaged at the first magnification G.sub.1. The second optical channel C.sub.2 allows the image sensor 20 to form a second image I.sub.2 of the sample, in the second spectral band λ.sub.2. This allows the sample to be imaged at the second magnification G.sub.2.
(40) Thus, the invention allows two images of the sample, at two different magnifications, to be formed simultaneously with one and the same image sensor, without moving the sample or any of the component elements of the device: neither the image sensor, nor the sample, nor the optical systems are moved. This is because the two images I.sub.1 and I.sub.2 are formed from the same acquired image I. In the example given above, the first magnification G.sub.1 is lower than the second magnification G.sub.2. Thus, the first image I.sub.1 privileges the extent of the field of observation, whereas the second image I.sub.2 privileges the observation of details within the field of observation of the first image I.sub.1.
(41) The simultaneous formation of two images is one preferred embodiment. It is not, however, necessary for the two images to be formed from the same acquired image I. The two images may be formed from images I acquired sequentially, if the sample is illuminated sequentially in all or some of the first spectral band λ.sub.1 or of the second spectral band λ.sub.2.
(42) According to one embodiment, which is shown in
(43) According to one embodiment, which is shown in
and/or the image plane P.sub.i,2 of the second optical system 16.sub.2 is offset from the detection plane P.sub.20 by a second image defocus distance d.sub.i,2, and/or the object plane P.sub.o,2 of the second optical system is offset from the sample plane P.sub.10 by a second object defocus distance d.sub.o,2.
(44) In the example shown in
(45) Generally, the defocus distance is smaller than 1 mm, and preferably smaller than 500 μm or even 100 μm. This may be referred to as a slightly defocused configuration. Such a defocus is appropriate for the observation of transparent particles, images of transparent particles taken in a focused configuration being unusable.
(46) An image obtained in a defocused configuration may form the subject of a holographic reconstruction. It is a question of applying a holographic propagation operator h that models the propagation of the light between the detection plane P.sub.20 and a reconstruction plane, so as to obtain a complex expression A of the light wave of interest in question, whether it be the first component 14.sub.1 of the light wave of interest or the second component 14.sub.2 of the light wave of interest. This may in particular allow a phase image of the light wave of interest in question to be obtained. An example of a holographic propagator h is a Fresnel operator, namely:
(47)
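A Fresnel kernel commonly used for this type of holographic propagation, given here as an assumption rather than as this patent's exact expression (λ being the central wavelength of the spectral band in question and z the propagation distance), is:

```latex
h(x, y, z) = \frac{1}{i \lambda z}\, e^{i \frac{2\pi}{\lambda} z} \exp\!\left( i \pi \, \frac{x^{2} + y^{2}}{\lambda z} \right)
```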
(48) Preferably, it is possible to obtain a complex image I* of the light wave in question. The complex image corresponds to a spatial distribution of the complex expression A of the light wave in the reconstruction plane.
(49) For example, if the first optical system 16.sub.1 is placed in a defocused configuration, a defocused first image I.sub.1, which shows the first component 14.sub.1 of the light wave of interest, is obtained. It is possible to obtain a reconstructed complex first image I.sub.1* by convoluting the first image I.sub.1 with the operator:
I*.sub.1=I.sub.1*h
(50) The reconstructed first image I.sub.1* shows the first component of the light wave of interest 14.sub.1 in the reconstruction plane. The reconstruction plane may for example be the plane P.sub.10 in which the sample lies. It is thus possible to form a phase image of the sample, by computing the argument of the first complex image I.sub.1*.
(51) The above is also valid when the second optical system 16.sub.2 is placed in a defocused configuration. From the second image I.sub.2, it is possible to obtain a complex second image I.sub.2* such that:
I*.sub.2=I.sub.2*h
(52) A complex image may be obtained, from an image acquired in a defocused configuration, by applying elaborate reconstruction algorithms. Such algorithms are for example described in WO2017162985.
(53) From a complex image, obtained by holographic reconstruction, it is possible to obtain an exploitable image of the sample, established on the basis of the modulus, the phase, the real part or the imaginary part of the complex image.
(54) A device such as that shown in
(57) The device was employed with a sample containing spermatozoa. As is known, characterization of spermatozoa is a commonplace application. Document WO2014021031 for example describes an example of tracking of the movement of spermatozoa by lensless imaging. The analysis of spermatozoa assumes that the path of each spermatozoon is tracked, so as to determine the type of path and/or the speed of movement. This requires a large field of observation, if it is desired to avoid moving the image sensor or the sample. The analysis of spermatozoa may also comprise an analysis of the morphology of the observed spermatozoa, or of certain of them. It is then a question of detecting potential morphological anomalies, a malformation for example. This assumes a precise representation of the spermatozoa, with a high magnification.
(58) During this trial, images I were acquired at a high acquisition frequency, comprised between 60 Hz and 100 Hz, using the image sensor 20, so as to allow the path of the spermatozoa to be tracked, in particular using first images I.sub.1 formed from each acquired image I. The first images I.sub.1 had a large field of observation, of the order of 3 mm.sup.2. From each acquired image I, second images I.sub.2, allowing the morphology of the spermatozoa to be observed individually, were formed. The field of observation corresponding to each second image I.sub.2 was 0.4 mm.sup.2. In order to obtain a better rendering, the second optical system 16.sub.2 was placed in a slightly defocused configuration, the image plane P.sub.i,2 of the second optical system being defocused by 5 μm with respect to the detection plane P.sub.20. Since the defocus distance was small, there was no need to apply a holographic reconstruction operator.
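Tracking the path of each spermatozoon from images acquired at 60 Hz to 100 Hz amounts to linking detected positions across successive frames. A minimal greedy nearest-neighbour linking step, under the assumption that detections are already available as coordinate arrays (the detection step itself is not shown, and this is an illustration rather than the method used in the trial), might look like:

```python
import numpy as np

def link_tracks(prev_pts, next_pts, max_step):
    """Greedily link particle positions between two successive frames;
    returns index pairs (i_prev, i_next) for matched detections."""
    links, used = [], set()
    for i, p in enumerate(prev_pts):
        d = np.linalg.norm(next_pts - p, axis=1)   # distances to all candidates
        j = int(np.argmin(d))
        if d[j] <= max_step and j not in used:     # accept only plausible moves
            links.append((i, j))
            used.add(j)
    return links

# Toy example: two particles, both moving by well under max_step
prev_pts = np.array([[0.0, 0.0], [10.0, 10.0]])
next_pts = np.array([[0.5, 0.2], [10.3, 9.8]])
links = link_tracks(prev_pts, next_pts, max_step=2.0)
```

Chaining such links over all frames yields one path per particle, from which path type and speed of movement can be estimated.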
(59)
(60) The field of application of the invention is not limited to the observation of spermatozoa and may be applied to any other type of cell, and, more generally, to the characterization of any type of particle moving in a medium, whether it be a question of a liquid or gaseous medium. Apart from biology or use in diagnostics, the invention may be applied to other fields, for example, and non-exhaustively, food processing, the inspection of fluids or the monitoring of processes.