Method for calibrating an analysis device, and associated device
11385164 · 2022-07-12
Assignee
- BIOMERIEUX (Marcy-l'Étoile, FR)
- Commissariat A L'energie Atomique Et Aux Energies Alternatives (Paris, FR)
Inventors
CPC classification
G03H1/0866
PHYSICS
G01N15/1468
PHYSICS
G03H2001/2655
PHYSICS
G03H2001/005
PHYSICS
G01N2015/1454
PHYSICS
G03H1/0443
PHYSICS
G03H2222/45
PHYSICS
International classification
Abstract
A method of calibration of a device for analyzing at least one element present in a sample, said device including: a detection assembly configured to acquire an image formed by the interference between a light source and said sample; and digital processing means configured to detect a digital position of at least one element in said sample based on said acquired image; said calibration method including the implementation of a plurality of predetermined displacements of said sample with respect to said detection assembly and, for each of said displacements, the detection of a digital position of a same element to determine the digital position and real position matching model according to the predetermined displacements and to the digital positions of said element after each displacement.
Claims
1. A method of calibration of a device for analyzing at least one element present in a sample, said device comprising: a detection assembly comprising a light source configured to illuminate said sample, an optical system configured to collect a light radiation originating from said sample, and a planar image sensor configured to acquire a holographic image formed by the interference between a reference wave originating from said light source and a wave diffracted by said radiation originating from the sample; and a digital processing computer system configured to detect a digital position of at least one element in said sample based on said acquired holographic image, and to calculate a real position of said element according to said digital position and to a digital position and real position matching model; wherein the calibration method comprises implementing a plurality of predetermined displacements of said sample with respect to said optical system and, for each of said displacements, detecting a digital position of a same element to determine said digital position and real position matching model according to the predetermined displacements and to the digital positions of said element after each displacement, said detection comprising: acquiring a holographic image; digitally constructing a series of electromagnetic matrices modeling, by digital propagation of said acquired holographic image, the electromagnetic wave in planes parallel to the plane of the image sensor and comprised in said sample for a plurality of deviations with respect to said plane; based on the series of electromagnetic matrices, determining an average focusing distance for said sample and determining the corresponding electromagnetic matrix; identifying said same element in the first corresponding electromagnetic matrix; and determining said digital position of said same element in said electromagnetic matrix; said digital position and real position matching model corresponding to a triaxial
matrix along three axes of a metric system, said step of determining said digital position of said same element in said electromagnetic matrix being carried out along an axis, modeling the position of said element in a depth of said sample, according to said electromagnetic matrix at the average focusing distance of said sample, wherein the predetermined displacements are performed in two opposite directions for each axis of said metric system.
2. The calibration method according to claim 1, wherein the determination of the digital position and real position matching model is performed via an average of the variations of the digital positions of a plurality of elements present in said holographic image.
3. The calibration method according to claim 1, wherein said optical system has an optical axis A.sub.OPT and performs the conjugation between a focusing plane and a focal plane, wherein the step of acquisition of said holographic image is carried out while said optical system is placed with respect to said sample so that said elements of said sample are not in said focusing plane.
4. The calibration method according to claim 1, wherein before the steps of acquisition of a holographic image to obtain said digital positions of said element, the method comprises a step of acquisition of a background image, the holographic images obtained during said acquisition steps being normalized by said background image.
5. The calibration method according to claim 1, wherein said element corresponds to a landmark of said sample.
6. The calibration method according to claim 1, wherein said element corresponds to a particle present in said sample.
7. The calibration method according to claim 6, wherein the step of determining said digital position of said particle in said electromagnetic matrix is performed by looking for the center of said particle.
8. The calibration method according to claim 6, wherein the matching model between digital positions and real positions is formed by considering a plurality of particles present in said sample, the digital positions of the particles between two consecutive images being determined by looking for the positions of the particles of said two images, by calculating vectors coupling the particles two by two and by determining a probable displacement vector corresponding to a most recurrent vector.
9. A device for analyzing at least one element present in a sample, said device comprising: a detection assembly comprising a light source configured to illuminate said sample, an optical system configured to collect the light radiation originating from said sample, and a planar image sensor configured to acquire a holographic image formed by the interference between a reference wave originating from said light source and a wave diffracted by said radiation originating from the sample; and a digital processing computer system configured to detect a digital position of at least one element in said sample based on said acquired holographic image and to calculate a real position of said element according to said digital position and to a digital position and real position matching model; a stage for displacing the sample with respect to the optical system, said stage being driven by the digital processing computer system; wherein the digital processing computer system is configured to control the stage to perform a plurality of predetermined displacements of said sample with respect to said optical system and, for each of said displacements, to detect a digital position of a same element to determine said digital position and real position matching model according to the predetermined displacements and to the digital positions of said element after each displacement, the detection of the digital position comprising: acquiring a holographic image; digitally constructing a series of electromagnetic matrices modeling, by digital propagation of said acquired holographic image, the electromagnetic wave in planes parallel to the plane of the image sensor and comprised in said sample for a plurality of deviations with respect to said plane; based on the series of electromagnetic matrices, determining an average focusing distance for said sample and determining the corresponding electromagnetic matrix; identifying said same element in the first corresponding
electromagnetic matrix; and determining said digital position of said same element in said electromagnetic matrix, said digital position and real position matching model corresponding to a triaxial matrix along three axes of a metric system, said step of determining said digital position of said same element in said electromagnetic matrix being carried out along an axis, modeling the position of said element in the depth of said sample, according to said electromagnetic matrix at the average focusing distance of said sample, wherein the predetermined displacements are performed in two opposite directions for each axis of said metric system.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) The manner of implementing the present invention, as well as the resulting advantages, will appear more clearly from the following description of a non-limiting embodiment, given by way of indication, with reference to the accompanying drawings.
DETAILED DESCRIPTION
(7) In the following, reference is made to the central emission wavelength of light source 13, for example, in the visible range. Light source 13 emits a coherent signal Sn directed onto a first surface of the sample, for example, conveyed by a waveguide such as an optical fiber.
(8) Sample 12 is a liquid such as water, a buffer solution, a culture medium or a reactive medium (comprising or not an antibiotic), containing the particles 11 to be observed.
(9) As a variation, sample 12 may appear in the form of a solid medium, preferably translucent, such as an agar gel, with particles 11 located inside it or on top of it. Sample 12 may also be a gaseous medium. Particles 11 may be located within the medium or at the surface of sample 12.
(10) Particles 11 may be microorganisms such as bacteria, fungi, or yeasts. They may also be cells, multicellular organisms, or any other particle such as a contaminant or dust. The size of the observed particles 11 ranges from 500 nanometers to several hundred micrometers, or even a few millimeters.
(11) Sample 12 is contained in an analysis chamber, vertically delimited by a lower plate and an upper plate, preferably parallel planar plates, for example, conventional microscope plates. The analysis chamber is laterally delimited by an adhesive or by any other tight material. As a variation, the sample may be deposited on a microscope plate or an equivalent support, without being imprisoned in an analysis chamber.
(12) The lower and upper plates are transparent to the wavelength of light source 13, the sample and the chamber for example transmitting more than 50% of the light at the wavelength of light source 13 under normal incidence on the lower plate.
(13) Preferably, particles 11 are arranged in sample 12 at the level of the upper plate. The lower surface of the upper plate for this purpose comprises ligands capable of binding the particles, for example, polycations (e.g. poly-L-lysine) or antibodies in the case of microorganisms. This makes it possible to contain particles 11 within a thickness equal to or close to the depth of field of the optical system, that is, within a thickness smaller than 1 mm (e.g. lens), and preferably smaller than 100 micrometers (e.g. microscope lens). Particles 11 may however move within sample 12.
(14) Preferably, the device comprises an optical system 18 for example formed of a microscope lens, for example, a fixed-focus lens, and of a tube lens, arranged in the air. Optical system 18 is optionally equipped with a filter capable of being located in front of the lens or between the lens and the tube lens. Optical system 18 is characterized by its optical axis A.sub.opt, its object plane, also called plane of focus, at a distance from the lens, and its image plane, which is the conjugate of the object plane relative to the optical system.
(15) In other words, an object located in the object plane has a corresponding sharp image of this object in the image plane, also called focal plane 23. The optical properties of optical system 18 are fixed (e.g. fixed focus optical system). The object and image planes are ideally orthogonal to the optical axis and assumed to be such for the mathematical models described hereafter.
(16) Image sensor 16 is planar and is located, opposite an upper surface 14 of sample 12, in the focal plane 23 or close thereto, to within assembly and construction errors. Image sensor 16, for example, a planar CCD or CMOS sensor, comprises a periodic two-dimensional grating of elementary sensitive sites, and a proximity electronic system which sets the exposure time and the resetting of the sites, in a way known per se. The output signal of an elementary site is a function of the quantity of radiation of the spectral range incident on said site during the exposure time. This signal is then converted, for example, by the proximity electronic system, into an image point or “pixel” of a digital image.
(17) Image sensor 16 thus generates a digital image in the form of a matrix of C columns and L rows. Each pixel of this matrix, of coordinates (c, l) in the matrix, corresponds in a way known per se to a position of Cartesian coordinates (x(c,l), y(c, l)) in the focal plane 23 of optical system 18, for example, the position of the center of the elementary sensitive site of rectangular shape. Image sensor 16 also comprises an axis A.sub.c normal to the planar surface of image sensor 16 for which the image Ih originating from image sensor 16 has a minimum deformation with respect to the observed scene.
(18) Preferably, the pitch and the filling factor of the periodic grating are selected to respect the Shannon-Nyquist criterion regarding the size of the observed particles, to define at least two pixels per particle. Thus, image sensor 16 acquires a transmission image of sample 12 in the spectral range of light source 13.
(19) The image Ih acquired by image sensor 16 comprises holographic information since it results from the interference between a wave Fi diffracted by particles 11 and a reference wave Fn having crossed sample 12 without having interacted therewith.
(20) As described hereabove, in the case of a CMOS or CCD sensor, the digital image Ih acquired and stored in digital processing means 25 is an intensity image, the phase information being thus here intensity-coded, with digital image Ih according to relation:
(21) Ih(r) = |Fn(r) + Fi(r)|.sup.2
(22) As a variation, it is possible to divide the coherent signal Sn originating from light source 13 into two components, for example, by means of a beam splitter. The first component is then used as a reference wave and the second component is diffracted by sample 12, the image in the image plane of optical system 18 resulting from the interference between the diffracted wave Fi and the reference wave Fn.
(23) The intensity image Ih acquired by image sensor 16 is not focused on a particle 11 to be observed and the obtaining of information focused on a particle 11 is digitally achieved by digital processing means 25 connected to image sensor 16 to receive the images Ih acquired by the latter.
(24) “Out of focus” here means that there is no intersection between the plane of focus and the particle 11 which is to be observed. Digital processing means 25 may correspond to a computer, a microcontroller, a touch tablet or a smartphone, or generally any computer system based on a processor capable of receiving data, of processing the data by implementing computer instructions stored in a computer memory, and of delivering and/or storing in a computer memory the result of the processing.
(25) Digital processing means 25 may be connected in wired or wireless fashion to image sensor 16 or by means of a wireless communication. Digital processing means 25 may be associated with a screen to display intermediate or final results of the method of the invention. As a variation, digital processing means 25 may correspond to an assembly of a plurality of computer systems with a system dedicated to image sensor 16 and other specific systems for the other elements coupled to digital processing means 25.
(26) Light source 13, image sensor 16, and digital processing means 25 form a device for analyzing at least one particle 11 in sample 12 to obtain the position of at least one particle of interest 11 in sample 12.
(27) To perform the analysis of a specific particle of interest 11 present in sample 12, the device also integrates a stage 20 for displacing sample 12. Stage 20 provides a triaxial displacement along axes X, Y, Z based on stepper motors controlled by a control unit, for example, digital processing means 25, according to a position or displacement instruction. Stage 20 makes it possible to displace sample 12 with respect to optical system 18 along an axis Z ideally parallel to optical axis A.sub.opt and along two other axes X, Y ideally defining a plane orthogonal to optical axis A.sub.opt. As a variation, stage 20 may be configured to displace optical system 18 and/or light source 13 while sample 12 remains fixed.
(28) Thus, displacements along axes X, Y make it possible to set the position of optical axis A.sub.opt on sample 12, and thus the position of a Raman laser on sample 12. Further, displacements along axis Z make it possible to set the distance between sample 12 and optical system 18, and accordingly the position of the focal plane 23 (or, equivalently, the position of the confocal volume) with respect to sample 12.
(29) Ideally, optical axis A.sub.opt, the mechanical axis Z of displacement of stage 20, illumination axis A.sub.ill, the axis A.sub.c, normal to the planar surface of sensor 16, and the axis A.sub.n, normal to the planar surface of sample 12 should be aligned, but construction errors and the presence of a clearance may make such alignments imperfect when a great accuracy is required. A real axis system, different from system X, Y, Z, linked to the image sensor such as X.sub.n, Y.sub.n and Z.sub.n where X.sub.n and Y.sub.n are in the planar surface of sensor 16 and Z.sub.n is collinear to axis A.sub.c, can be defined.
(30) The calculations for digitally reconstructing the wavefront described hereafter or displacement instructions are based on such an assumption. In reality, there exist misalignments which are corrected by the invention by means of the calibration of the matching model.
(31) The Raman spectroscopy analysis assembly for example comprises a monochromatic light source 17 and a spectrometer 19. Light source 17 preferably corresponds to a laser.
(32) The beam originating from the laser is directed into optical system 18, and thus onto sample 12, for example, by an assembly of mirrors and filters 28. Optical system 18 makes it possible to focus light source 17 on a specific position of the sample, the focusing position being settable by displacing sample 12 with respect to optical system 18 by means of stage 20. As known per se, stage 20 receives a positioning instruction for sample 12, which makes it possible to position the laser shot thereon, which instruction is for example communicated by digital processing means 25, particularly the position of a particle detected as described hereabove. In particular, digital processing means 25 determine a digital position of the particle by image processing and convert the digital position into a real position of the particle in the sample (for example, in a fixed reference frame linked to the device frame or to the stage in a way known per se), the real position being communicated to the displacement means as a position instruction. For this purpose, digital processing means 25 use a model for matching a digital position, noted Pn, and a real position, noted Pr. The calibration of the matching model will be described hereafter.
(33) The Raman diffusion of the laser beam in sample 12 is captured by spectrometer 19 by also passing through the assembly of mirrors and filters 28. The laser and spectrometer 19 are connected to digital processing means 25, which control the laser shot and receive the image of the diffusion. Of course, the different elements may be arranged differently without modifying the invention.
(34) The position of optical system 18 and the displacements of sample 12 aim at focusing the laser beam on a particle of interest 11 present in sample 12. For this purpose,
(35) A method of calculating wavefronts by digital propagation is explained in Sang-Hyuk Lee et al.'s article entitled “Holographic microscopy of holographically trapped three-dimensional structures” published in Optics Express, Vol. 15, No. 4, Feb. 19, 2007, pp. 1505-1512.
(36) More particularly, noting h.sub.z(r) the Rayleigh-Sommerfeld propagation function, that is:
(37) h.sub.z(r) = −(1/2π) ∂/∂z (e.sup.ikR/R)
(38) where:
(39) z is the so-called “defocusing” height, in other words the deviation with respect to the focusing plane 22,
(40) r=(|r|, θ) is the position in polar coordinates in the image plane, of radial coordinate |r| and of angular coordinate θ,
(41) R.sup.2=|r|.sup.2+z.sup.2, and
(42) k=2πn/λ is a wave number relative to the propagation medium of refraction index n at the wavelength λ of the light source.
(43) Based on this relation, electromagnetic wave a(r, z), of amplitude |a(r,z)| and of phase φ(r,z), in ordinate plane z, can be expressed as:
(44) a(r, z) = (1/4π.sup.2) ∫ B(q) H.sub.−z(q) e.sup.iq·r d.sup.2q
(45) where
(46) b(r) is the measured intensity, i.e. image Ih (the intensity of the reference wave is here assumed to be constant),
(47) B(q) is the Fourier transform of b(r),
(48) H.sub.−z(q) is the Fourier transform of h.sub.−z(r), and
(49) q is the dual variable of r in the Fourier transform.
(50) The above equations define an analytic formulation of amplitude a(r, z). Although this model is developed for a propagation in a homogeneous medium (and thus with no modification of the wave number, without the presence of an interface creating a reflection and/or a deviation of the wave, etc.), and accordingly with no relation with sample 12 and the enclosure (which comprise many interfaces and changes of index, for example), the inventors have noted that it enables to reconstruct rich electromagnetic information in relation with the observed particles, as will be described hereafter.
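The digital propagation described above can be illustrated numerically. The following is a minimal sketch, not the patented implementation: it uses the angular-spectrum form of the Rayleigh-Sommerfeld propagator, and the function name, hologram normalization, and parameter choices are assumptions for illustration.

```python
import numpy as np

def propagate(hologram, z, wavelength, pixel_size, n_medium=1.0):
    """Back-propagate a hologram intensity image b(r) to defocusing height z.
    Returns the complex electromagnetic matrix a(r, z).
    Assumes a homogeneous medium of index n_medium, as in the analytic model."""
    k = 2 * np.pi * n_medium / wavelength          # wave number k = 2*pi*n/lambda
    b = hologram.astype(float)
    b = b / b.mean() - 1.0                         # normalized, zero-mean hologram (assumed convention)
    B = np.fft.fft2(b)                             # B(q), Fourier transform of b(r)
    rows, cols = hologram.shape
    qy = 2 * np.pi * np.fft.fftfreq(rows, d=pixel_size)
    qx = 2 * np.pi * np.fft.fftfreq(cols, d=pixel_size)
    QX, QY = np.meshgrid(qx, qy)
    q2 = QX**2 + QY**2
    kz = np.sqrt(np.maximum(k**2 - q2, 0.0) + 0j)  # evanescent components clamped to zero
    H = np.exp(-1j * kz * z)                       # transfer function H_{-z}(q)
    return np.fft.ifft2(B * H)                     # a(r, z) by inverse Fourier transform
```

Repeating this call for a series of heights z.sub.1 to z.sub.N yields the stack of electromagnetic matrices I1-IN used below.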
(51) Thus, advantageously, digital processing means 25 store a single wave number, common to all the media involved, calculated for example with the index of air.
(52) As a variation, digital processing means 25 store the refraction indexes and thicknesses of the different media involved along the optical axis and construct matrices I1-IN step by step to take into account the phenomena at the interfaces.
(53) For bacteria, the sampling pitch in the z direction is preferably smaller than one tenth of the thickness of the bacterium, for example, smaller than 0.1 micrometer, and preferably smaller than 0.03 micrometer.
(54) It can thus be understood that a stack of electromagnetic matrices I1-IN can be constructed for ordinates z.sub.1, z.sub.2, . . . , z.sub.n, . . . , z.sub.N along the optical axis, the origin of ordinates (z=0) being taken at the axial focusing position, each matrix In being defined by a complex amplitude a(r, z.sub.n) according to relations:
(55)
(56) Digital processing means 25 then calculate on each matrix In a positive surjective mapping AS from the complex space ℂ.sup.C×L to the real space ℝ.sup.C×L:
(57)
(58) For example, digital processing means 25 calculate the hermitian norm (or its square) of components a(c,l).sub.z.sub.n.
(59) Without being bound by theory, matrices AS(I1)-AS(IN) do not necessarily represent a light intensity, but the inventors have noted their resemblance with intensity images obtained under a non-coherent illumination. Particularly, the particles are represented, as in a photograph, in their particle form.
(60) It is thus possible to apply any type of conventional image processing (segmentation, thresholding, detection of particles based on their morphology, etc.), and even for an operator to visually identify the particles (conversely to an image coding interferences, which are intensity-coded in the form of fringes). In the following, to simplify the notations, matrices AS(I1)-AS(IN) are noted I1-IN, and notation a(c,l).sub.z.sub.n is kept for their components.
(61) The method then comprises identifying particles in the sample according to matrices I1-IN, and for each identified particle, determining an optimal focusing distance z for this particle, and then determining, in the matrix of series I1-IN corresponding to this distance, a set of pixels belonging to this particle.
(62) Second unit 52 aims at determining an average focusing distance zfmoy from the series of matrices I1-IN and at selecting in this series the matrix, noted Ifmoy, having its distance z equal or the closest to distance zfmoy. As a variation, digital processing means 25 recalculate matrix Ifmoy for distance zfmoy.
(63) Average focusing distance zfmoy is that which best corresponds to the ideal conditions of focusing on the set of particles 11 in the sense of a predetermined focusing criterion. Such a distance can be determined by any known technique of signal processing or from the field of photography, for example, autofocus. The resulting electromagnetic matrix Ifmoy is sufficiently “focused” to be able to detect particles at different depths in matrix Ifmoy. The detected particles are particularly those comprised in a sample depth equal to the depth of field. As previously described, in a preferred embodiment, the particles are arranged in a volume having a thickness close or equal to this depth of field, so that all or almost all the particles of the sample can be detected in matrix Ifmoy.
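The search for the average focusing distance can be sketched as follows. This is an illustrative sketch only: the per-pixel optimum criterion (here a simple maximum of the real stack values) and the function names are assumptions, not the patented focusing criterion.

```python
import numpy as np

def average_focus_distance(stack, z_values):
    """stack: real matrices AS(I1)..AS(IN), shape (N, rows, cols).
    For each coordinate (c, l), find the z giving the optimum response,
    then average these distances to obtain zfmoy."""
    best = np.argmax(stack, axis=0)          # index of the per-pixel optimum along z
    return float(np.mean(z_values[best]))    # average focusing distance zfmoy

def matrix_at(stack, z_values, z):
    """Select in the series the matrix whose distance is closest to z (e.g. Ifmoy)."""
    return stack[int(np.argmin(np.abs(np.asarray(z_values) - z)))]
```

As the text notes, a variant restricts the average to the P coordinates with the greatest variations instead of all pixels.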
(65) Average focusing distance zfmoy can thus be looked for by digital processing means 25 by calculating, at 59, the average of the distances z obtained by detecting the optimum Ipm of each coordinate (c,l). Ifmoy is thus the matrix of the series of matrices I1-IN closest to the calculated distance z. As a variation, digital processing means 25 select the P coordinates, for example, the 10,000 coordinates, having the greatest variations of their values a(c,l).sub.z.sub.n.
(66) As a variation, the values a(c,l).sub.z.sub.
(67) Although the average focusing distance zfmoy illustrates a general focusing of the image, the focusing of each particle 11 is not optimal, particularly due to the depth variations of particles 11 relative to one another. To improve the focusing on a specific particle 11, the invention provides determining an optimum focusing distance for each particle and determining a focused matrix specific to the particle. For this purpose, digital processing means 25 comprise a particle selection unit 53 and a unit 54 for determining the coordinates of particles 11 in the image.
(68) The unit 53 for selection of the particles in matrix Ifmoy may take a plurality of image segmentation forms of the state of the art, such as a scanning of this matrix to detect the contours of a finite element. As a variation, digital processing means 25 apply a prior thresholding to matrix Ifmoy, the threshold value being for example equal to Moy(Ifmoy)+p×E(Ifmoy), where Moy(Ifmoy) is the average of the pixels of matrix Ifmoy, E(Ifmoy) is their standard deviation, and p is an integer greater than 1, for example, equal to 6.
(69) The values greater than this threshold are then determined as belonging to particles, and an image segmentation on the thresholded matrix is implemented. At the end of the identification, I sets of pixel coordinates (c,l), noted Part_1, . . . , Part_i, . . . Part_I are thus obtained.
(70) Each set, stored in a memory 26 associated with digital processing means 25, records the coordinates in plane X.sub.n, Y.sub.n of the pixels of image Ifmoy belonging to a same particle.
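The thresholding and segmentation just described can be sketched as follows. The 4-connected component labeling is only one of the segmentation forms mentioned, and the function name is hypothetical.

```python
import numpy as np
from collections import deque

def segment_particles(If, p=6):
    """Threshold matrix Ifmoy at Moy + p*E (mean + p standard deviations),
    then group the above-threshold pixels into 4-connected components.
    Returns the sets of pixel coordinates Part_1, ..., Part_I."""
    mask = If > If.mean() + p * If.std()
    seen = np.zeros_like(mask, dtype=bool)
    parts = []
    rows, cols = mask.shape
    for start in zip(*np.nonzero(mask)):
        if seen[start]:
            continue
        queue, part = deque([start]), []
        seen[start] = True
        while queue:                              # breadth-first flood fill
            r, c = queue.popleft()
            part.append((r, c))
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols and mask[rr, cc] and not seen[rr, cc]:
                    seen[rr, cc] = True
                    queue.append((rr, cc))
        parts.append(part)
    return parts
```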
(71) The method then determines, at 54, for each set of coordinates Part_i, which distance z provides the best focusing for the corresponding particle, and then determines what matrix in series I1-IN corresponds to this distance (or calculates a new image for this distance), and finally determines the coordinates of the particle 11 present in image Ifp.
(72) The calculation of the optimum focusing distance is for example performed similarly to the calculation of the average focusing distance. For each coordinate (c,l) of set Part_i, the distance corresponding to the optimum of a(c,l).sub.z.sub.n is determined, and these distances are averaged over the set.
(73) The optimal focusing distance corresponds to an average focusing on particle 11, usually resulting in a focusing on a median plane of particle 11. Such an average focusing makes it possible to obtain information relative to the depth, that is, along axis Z.sub.n, of particle 11 in sample 12. It is thus possible to accurately detect, without any mechanical refocusing, the position along X.sub.n, Y.sub.n, and Z.sub.n of a particle 11 in sample 12.
(74) Particles 11 may extend over a plurality of pixels of the image in plane X.sub.n, Y.sub.n and over a plurality of depth units along axis Z.sub.n. In this case, the X.sub.n, Y.sub.n, and Z.sub.n position of particle 11 in sample 12 will be determined according to the center of particle 11.
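The center determination for a particle spanning several pixels can be sketched as below. The choices made here, the centroid of the pixel set in the image plane and the average of the per-pixel optimal distances along depth, are illustrative assumptions, as is the function name.

```python
import numpy as np

def particle_center(part, z_values, stack):
    """part: set Part_i of (row, col) pixel coordinates of one particle.
    stack: real matrices I1-IN, shape (N, rows, cols); z_values: the ordinates z_n.
    Returns (xn, yn, zn), the digital position of the particle center."""
    coords = np.array(part)
    rc = coords.mean(axis=0)                        # centroid in the (Xn, Yn) plane
    profile = stack[:, coords[:, 0], coords[:, 1]]  # values along z for each pixel of the set
    z_best = np.asarray(z_values)[np.argmax(profile, axis=0)]
    return rc[1], rc[0], float(z_best.mean())       # depth: average optimal distance
```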
(75) To focus a laser shot on a particle 11 for Raman spectroscopy, it is conventional to detect a particle 11 in an image and to displace sample 12 along axes X, Y by applying a scale transformation between the position of particle 11 and a real displacement.
(76) The invention provides performing the displacement by means of a triaxial matrix Mc, which is more accurate. Triaxial matrix Mc is obtained, at step 55, by determining the coordinates of at least one particle 11 in a plurality of images Ifp according to a plurality of predetermined displacements 56. Triaxial matrix Mc aims at modeling the real displacement performed by sample 12 as a result of a displacement instruction transmitted by digital processing means 25 to displacement means 20 while integrating the opto-mechanical defects of device 10.
(77) As previously described, the different electromagnetic calculations and displacements are performed under the assumption of an ideal alignment of the different reference frames and axes, which induces a difference between the real displacements and the displacements desired for sample 12 with respect to the detection assembly. A correction is implemented to compensate for this error through a base change matrix Mc enabling to transform the digital position of the particle determined by digital processing means 25 into a real position thereof in the sample. Triaxial matrix Mc reflects the transformation between the digital space having its values xn, yn, and zn coded in pixels on image Ifp and the real metric system 21 of the sample displacements relative to the optical block, having its values xs, ys, and zs conventionally expressed in micrometers. Thus, triaxial base change matrix Mc can be expressed according to the following relation:
(78) (x.sub.s, y.sub.s, z.sub.s).sup.T = Mc·(x.sub.n, y.sub.n, z.sub.n).sup.T, with:
Mc = [ Cxx Cxy Cxz ; Cyx Cyy Cyz ; Czx Czy Czz ]
(79) To determine the coefficients of triaxial matrix Mc, a plurality of displacements are performed and the variation of the position of a same particle 11 is compared with the displacement instructions transmitted to displacement means 20.
(80) The number of displacements to be performed may depend on the system to be qualified and on its properties. At least one displacement per displacement axis, be it a rotation or a translation, is to be performed. It is also possible for the properties of the system to impose performing a displacement in each displacement direction if the system has behavior asymmetries according to the direction, for example, due to a clearance error of the drive system in a given direction.
(81) In particular, in the case of a system provided with three translation axes and having a behavior asymmetry along each of the axes, six displacements are performed with respect to a central position according to the instructions of the stepper motors, for example:
(82) a first displacement to coordinates (0, 0, −30).sub.s;
(83) a second displacement to coordinates (+15, 0, −30).sub.s;
(84) a third displacement to coordinates (−15, 0, −30).sub.s;
(85) a fourth displacement to coordinates (0, +15, −30).sub.s;
(86) a fifth displacement to coordinates (0, −15, −30).sub.s; and
(87) a sixth displacement to coordinates (0, 0, +30).sub.s.
(88) In the case of a system provided with three translation axes and with a drive system having no asymmetry, or an asymmetry considered as non-prejudicial, it is possible to more simply perform only 3 displacements, for example:
(89) a first displacement to coordinates (0, 0, −30)s;
(90) a second displacement to coordinates (+15, 0, −30)s; and
(91) a third displacement to coordinates (0, +15, −30)s.
(92) Further, another displacement may be performed to acquire a background image making it possible to verify the quality of the images acquired for each displacement, even in complex cases where many dust particles and debris mask the particles of interest. In particular, a displacement to coordinates (0, 0, 300)s may be performed to acquire a background image. As a variation, the background image is acquired only once if the system is deemed clean or not varying over time.
(93) In the following, to illustrate the use of the images thus obtained, reference is made to the more complex case of a system with three translation axes having an asymmetrical behavior, that is, six displacements.
(94) The second and third displacements only differ along axis x, the corresponding instructions amounting to a relative displacement of 30 micrometers along this axis.
(95) It is thus possible to simplify the determination of the coefficients of triaxial matrix Mc according to the following equation:
(96)

    Cxx = (xn,2 − xn,3)/30
    Cyx = (yn,2 − yn,3)/30
    Czx = (zn,2 − zn,3)/30

where (xn,i, yn,i, zn,i) designates the digital position of particle 11 after the i-th displacement.
(97) The determination of the positions of the circled particle 11 between these two displacements thus makes it possible to obtain coefficients Cxx, Cyx, and Czx.
(98) Similarly, the fourth and fifth displacements make it possible to obtain coefficients Cxy, Cyy, and Czy according to the following simplified relation:
(99)

    Cxy = (xn,4 − xn,5)/30
    Cyy = (yn,4 − yn,5)/30
    Czy = (zn,4 − zn,5)/30
(100) Similarly, the first and sixth displacements make it possible to obtain coefficients Cxz, Cyz, and Czz according to the following simplified relation:
(101)

    Cxz = (xn,6 − xn,1)/60
    Cyz = (yn,6 − yn,1)/60
    Czz = (zn,6 − zn,1)/60
(102) Solving these systems of equations thus yields the coefficients of triaxial matrix Mc. A matrix Md, the inverse of triaxial matrix Mc, models the transformation of the pixels into displacements. Inverse matrix Md is expressed according to the following relation:
(103)

    [xs]        [xn]
    [ys] = Md · [yn],  where Md = Mc^(−1)
    [zs]        [zn]
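The column-by-column estimation of Mc from the six displacements, followed by its inversion into Md, can be sketched as follows (this is an illustration, not the patent's implementation; the measured pixel positions are hypothetical):

```python
# Illustration: estimating triaxial matrix Mc from the six displacements,
# then inverting it into Md. The digital positions (pixels) of the tracked
# particle after each displacement are hypothetical measurements.
pos = {  # displacement index -> measured (xn, yn, zn) in pixels
    1: (10.0, 20.0, 5.0),    # instruction (0, 0, -30)s
    2: (25.2, 19.8, 5.1),    # instruction (+15, 0, -30)s
    3: (-5.2, 20.2, 4.9),    # instruction (-15, 0, -30)s
    4: (10.1, 35.3, 5.2),    # instruction (0, +15, -30)s
    5: (9.9, 4.7, 4.8),      # instruction (0, -15, -30)s
    6: (10.0, 20.1, 65.6),   # instruction (0, 0, +30)s
}

def column(p_a, p_b, delta_um):
    """One column of Mc: pixel variation per micrometer of displacement."""
    return [(a - b) / delta_um for a, b in zip(p_a, p_b)]

col_x = column(pos[2], pos[3], 30.0)   # Cxx, Cyx, Czx  (relation (96))
col_y = column(pos[4], pos[5], 30.0)   # Cxy, Cyy, Czy  (relation (99))
col_z = column(pos[6], pos[1], 60.0)   # Cxz, Cyz, Czz  (relation (101))
Mc = [[col_x[i], col_y[i], col_z[i]] for i in range(3)]

def invert_3x3(m):
    """Inverse of a 3x3 matrix via the adjugate (cofactor) formula."""
    a, b, c = m[0]; d, e, f = m[1]; g, h, i = m[2]
    det = a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)
    adj = [
        [e*i - f*h, c*h - b*i, b*f - c*e],
        [f*g - d*i, a*i - c*g, c*d - a*f],
        [d*h - e*g, b*g - a*h, a*e - b*d],
    ]
    return [[adj[r][col] / det for col in range(3)] for r in range(3)]

Md = invert_3x3(Mc)  # transforms pixel coordinates into real displacements
```

Each pure-axis displacement pair isolates one column of Mc exactly, which is why six (or three) well-chosen displacements suffice.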
(104) Thus, triaxial matrix Mc or inverse matrix Md captures rotations and proportional (scaling) transformations along all the possible axes. According to a non-limiting example, inverse matrix Md may comprise the following coefficients:
(105)
(106) Even though the typical values are small, they are sufficient to significantly alter the results of a Raman spectroscopy measurement, for which the desired accuracy is on the order of 0.07 micrometer along axes x and y, and 0.02 micrometer along axis z. In the above example, the coefficients are obtained by considering the displacements of a single particle 11, circled on images Ifp.
(107) As a variation, each coefficient may be obtained by averaging the variations of the coordinates of a plurality of particles 11 present across two displacements. Further, a single displacement may be performed in each direction to obtain triaxial matrix Mc.
(108) When triaxial matrix Mc is known, a displacement instruction may be transmitted by digital processing means 25 to displace sample 12 so as to focus laser 17 on a particle of interest 11 whose position has been determined as described hereabove. Thus, an accurate laser shot may be performed on particle of interest 11 while spectrometer 19 captures the scattering of the light originating from laser 17 to determine characteristics, particularly physiological, of particle of interest 11. Further, triaxial matrix Mc may be used several times to determine physiological characteristics of a plurality of particles of interest 11.
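Once Md (pixels to micrometers) is known, the displacement instruction bringing a particle of interest under the laser focus can be derived from digital positions alone. The following sketch is illustrative only; the coefficient values and pixel positions are hypothetical:

```python
# Illustration: converting a pixel offset into a stage displacement
# instruction using inverse matrix Md. All values are hypothetical.
Md = [
    [0.987, -0.007, 0.000],
    [0.013, 0.980, -0.002],
    [-0.007, -0.013, 0.990],
]

def displacement_instruction(md, particle_px, laser_px):
    """Stage displacement (micrometers) moving the particle, located at
    particle_px on image Ifp, to the digital position laser_px of the
    laser focus."""
    delta_px = [l - p for l, p in zip(laser_px, particle_px)]
    return [sum(md[i][j] * delta_px[j] for j in range(3)) for i in range(3)]

# Particle detected at (120, 85, 14) px; laser focus calibrated at (128, 96, 14) px:
move = displacement_instruction(Md, [120.0, 85.0, 14.0], [128.0, 96.0, 14.0])
```

The resulting vector is what would be sent to displacement means 20 (or indicated to an operator) before triggering the laser shot.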
(109) Matrix Mc, and thus matrix Md, may be regularly updated, for example on each introduction of a sample 12 into the analysis device (which may require the opening/closing of a drawer or of a door capable of inducing a displacement of the different parts of the system), or each time an analysis of a particle 11 is desired.
(110) The determination of the base change matrix by analysis of the position of a particle 11 in images has been described. A plurality of particles 11 may be analyzed to average the result and obtain a more robust determination of matrix Mc.
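The averaging over a plurality of particles mentioned above can be sketched as an element-wise mean of per-particle estimates of Mc (illustrative only; the estimate values are hypothetical):

```python
# Illustration: averaging per-particle estimates of Mc for a more robust
# determination of the base change matrix. Values are hypothetical.
estimates = [  # one estimated Mc per tracked particle 11
    [[1.010, 0.006, 0.000], [-0.012, 1.019, 0.002], [0.006, 0.012, 1.009]],
    [[1.016, 0.008, 0.000], [-0.014, 1.021, 0.002], [0.008, 0.014, 1.011]],
]

def average_matrices(mats):
    """Element-wise mean of a list of 3x3 matrices."""
    n = len(mats)
    return [[sum(m[i][j] for m in mats) / n for j in range(3)] for i in range(3)]

Mc_avg = average_matrices(estimates)
```

Averaging attenuates the detection noise on any single particle's digital position.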
(111) Similarly, the analysis of the position of a particle, for example, a bacterium in a biological sample, has been described. Any type of landmark, for example, a defect of the medium (for example, a bubble or a crack), may be used to perform the analysis. The description also mentions a stage 20 displacing sample 12 by means of stepper motors. As a variation, other displacement means may be implemented and the displacements may be performed manually by an operator. The matching matrix then indicates to the operator the accurate displacements to be performed to target a particle 11 in sample 12.