DEVICE AND METHOD FOR OBSERVING AN OBJECT BY LENSLESS IMAGING

20190120747 · 2019-04-25

Abstract

A device and a method for observing an object by imaging, in particular by lensless imaging. The object is held by a holder defining an object plane inserted between a light source and an image sensor, with no magnifying optics placed between the object and the image sensor. An optical system is arranged between the light source and the holder and is configured to form a convergent incident wave from a light wave emitted by the light source, and to form a secondary light source, conjugated with the light source, positioned in a half-space defined by the object plane and including the image sensor, such that the secondary source is closer to the image sensor than to the holder. This results in an image with a transverse magnification having an absolute value of less than 1.

Claims

1-16. (canceled)

17. A device for observing an object, comprising: a light source, configured to generate an emission light wave, that propagates along a propagation axis; an image sensor; a holder configured to hold an object, the holder being placed between the image sensor and the light source such that the image sensor is configured to form an image of the object held on the holder; an optical system, placed between the light source and the holder, the optical system configured to form, from the emission light wave, a convergent incident light wave that propagates from the optical system to the holder; wherein: the holder defines an object plane, that is perpendicular to the propagation axis and that passes through the holder, the optical system being configured to conjugate the light source with a secondary source that is located in a half space defined by the object plane and that includes the image sensor; the optical system is configured such that the secondary source is located closer to the image sensor than to the holder, such that the image of the object, held on the holder, on the image sensor is affected by a magnification lower than 1.

18. The device of claim 17, wherein the device does not comprise magnifying optics between the holder and the image sensor.

19. The device of claim 17, wherein the optical system is configured such that the secondary source is located between the holder and the image sensor.

20. The device of claim 17, wherein the image sensor lies in a detection plane, the optical system being configured such that the secondary source is located in a half space that is defined by the detection plane and that does not include the light source.

21. The device of claim 17, wherein the optical system is parameterized by a parameter, such that the position of the secondary source is adjustable depending on the parameter.

22. The device of claim 21, wherein the parameter is a position of the optical system along the propagation axis or a focal length of the optical system.

23. A method for observing an object comprising: a) placing the object between a light source and an image sensor, the light source being coupled to an optical system that is placed between the light source and the object; b) activating the light source, the light source then emitting an emission light wave that propagates to the optical system, the optical system forming a convergent incident light wave that propagates to the object; and c) acquiring, using the image sensor, an image of the object thus exposed to the convergent incident light wave; wherein: the emission light wave is emitted along a propagation axis, the object defining an object plane, that passes through the object and that is perpendicular to the propagation axis, such that, in b), the optical system conjugates the light source with a secondary light source, the secondary light source being located in a half space that is defined by the object plane and that includes the image sensor; and the secondary source is located closer to the image sensor than to the object.

24. The method of claim 23, wherein no magnifying optics are placed between the object and the image sensor.

25. The method of claim 23, wherein the secondary light source is located between the object and the image sensor.

26. The method of claim 23, wherein the image sensor lies in a detection plane, and wherein the secondary light source is located in a half space that is defined by the detection plane and that does not include the light source.

27. The method of claim 23, wherein b) further comprises adjusting the position of the secondary source depending on a parameter of the optical system.

28. The method of claim 27, wherein the parameter of the optical system is: a focal length of the optical system; or a position of the optical system along the propagation axis.

29. The method of claim 23, wherein, in c), the image sensor is exposed to an exposure light wave including: a wave that is transmitted by the object and that results from the transmission, by the object, of the convergent incident light wave; and a diffraction wave that results from the diffraction, by the object, of the convergent incident light wave.

30. The method of claim 23, further comprising: d) applying a holographic reconstruction algorithm to the image formed on the image sensor in c).

31. The method of claim 23, wherein: the light source emits the emission light wave at a wavelength; the object is translucent or transparent at the wavelength.

32. The method of claim 23, wherein: the light source emits the emission light wave at a wavelength; the object includes particles that are dispersed in or on the surface of a medium, the latter being translucent or transparent at the wavelength.

Description

FIGURES

[0040] FIG. 1A shows a first embodiment of the invention. FIGS. 1B and 1C show examples of objects capable of being observed by virtue of the invention.

[0041] FIG. 2 shows various embodiments of the invention.

[0042] FIG. 3 shows the variation in the magnification generated by the device as a function of the position of the secondary source with respect to the object.

[0043] FIGS. 4A to 4E illustrate certain configurations that are commented upon in the description with reference to FIG. 3.

[0044] FIG. 5A shows an experimental device. FIG. 5B shows a detailed view of a reticle used in this device. FIG. 5C shows another experimental device.

[0045] FIGS. 6A, 6B and 6C illustrate images acquired using the experimental device shown in FIG. 5A.

[0046] FIGS. 7A, 7B and 7C show images obtained by holographic reconstruction, on the basis of FIGS. 6A, 6B and 6C, respectively.

[0047] FIGS. 8A and 8B show images acquired using an experimental device similar to that shown in FIG. 5C.

[0048] FIGS. 9A and 9B show images obtained by holographic reconstruction, on the basis of FIGS. 8A and 8B, respectively.

SUMMARY OF PARTICULAR EMBODIMENTS

[0049] FIG. 1A shows an example of a device 1 that is one subject of the invention, according to a first embodiment. A light source 11 is able to produce a light wave 12, called the emission light wave, that propagates in the direction of a holder 10s, along a propagation axis Z. The light wave 12 is emitted at at least one wavelength λ. The holder 10s is able to hold an object 10 that it is desired to observe using the device 1. The holder allows the object 10 to be placed such that this object lies in a plane P.sub.10, called the object plane.

[0050] The object 10 may be a sample that it is desired to characterize. It may comprise a solid or liquid medium 10a that is transparent or translucent to said wavelength λ, in which medium, or on the surface of which medium, particles 10b are dispersed. FIGS. 1B and 1C show examples of such objects. The particles 10b may be biological particles. The medium 10a may be a culture medium, or a bodily fluid. By biological particle, what is meant is a cell, such as a eukaryote cell, a bacterium or another microorganism, a fungus, a spore, a virus, etc. The term particles may also designate microbeads, for example metal microbeads, glass microbeads or organic microbeads, which are commonly implemented in biological protocols. It may also be a question of insoluble droplets submerged in a liquid medium, for example lipid droplets in an oil-in-water type emulsion. Thus, the term particles designates both endogenous particles, which are initially present in the examined object, and exogenous particles, which are added to this object before its observation. This term may also designate particles generated by aggregating other particles present in the sample, for example a complex formed by antibodies with elements bearing an antigen. Generally, each particle has a size that is advantageously smaller than 1 mm, or even smaller than 500 µm, and preferably a size comprised between 0.5 µm and 500 µm. Preferably, each particle has a size larger than the emission wavelength λ of the light source, so as to cause a diffraction effect as described below. By particle size, what is meant is a diameter or a diagonal.

[0051] The expression bodily fluid is understood to mean a fluid issued from an animal or human body, such as blood, urine, sweat, cerebrospinal fluid, lymph, etc. The expression culture medium is understood to mean a medium that lends itself well to the development of a biological species such as cells, bacteria or other microorganisms.

[0052] The object may also be a tissue slide or anatomo-pathology slide including a small thickness of tissue deposited on a transparent slide. It may also be a question of a slide obtained by applying a staining protocol suitable for allowing a microorganism to be identified in a sample, for example a Gram or Giemsa stain. By small thickness, what is meant is a thickness that is preferably smaller than 100 µm, and more preferably smaller than 10 µm, typically a few micrometers.

[0053] The distance between the light source 11 and the object 10 is preferably larger than 1 cm. It is preferably comprised between 2 and 30 cm. Preferably, the light source, seen by the object, may be considered to be point-like. This means that its diameter (or its diagonal) is preferably smaller than one tenth, and better still one hundredth of the distance between the object and the light source.

[0054] The light source may be a light-emitting diode or a source of laser light, such as a laser diode. It may preferably be a point source. In the example shown, the light source 11 is a light-emitting diode sold by Innovation Optics under the reference Lumibright 1700A-100-A-C0, the emission spectral band of which is centered on the wavelength of 450 nm. This light-emitting diode is placed facing a first end of an optical fiber 13, the second end of which is placed facing the object 10, or facing the holder 10s holding the object. The diameter of the core of the optical fiber is for example 1 mm. According to one variant, the optical fiber 13 may be replaced by a diaphragm, the aperture of which is typically comprised between 5 µm and 1 mm, and preferably between 50 µm and 500 µm, 150 µm for example. According to another variant, the optical fiber is coupled to an objective, allowing an image of its distal end to be formed so as to improve the point-like character of the source. This particular case will be described below. The optical fiber or the diaphragm, which are optionally coupled to an objective, form a spatial filter 13 allowing a point light source to be formed when the light source 11 is not judged to be sufficiently point-like.

[0055] The device also includes an image sensor 20, which is able to form an image I in a detection plane P.sub.20. In the example shown, it is a question of a matrix-array image sensor including a CCD or CMOS pixel matrix array. CMOS image sensors are preferred because the pixel size is smaller, this allowing images the spatial resolution of which is more favorable to be acquired. In this example, the image sensor is a CMOS sensor sold by Aptina under the reference Micron MT9P031. It is a question of a monochromatic CMOS sensor comprising 2592×1944 pixels of 2.2 µm side length, forming a detection surface, the area of which is 24.4 mm.sup.2. Image sensors the inter-pixel pitch of which is smaller than 3 µm are preferred, in order to improve the spatial resolution of the image. The detection plane P.sub.20 preferably lies perpendicular to the propagation axis Z of the emission light wave 12. The image sensor 20 may comprise a mirror-type system for redirecting images toward a pixel matrix array, in which case the detection plane corresponds to the plane in which the image-redirecting system lies. Generally, the detection plane P.sub.20 corresponds to the plane in which an image is formed.

[0056] The distance d between the object 10 and the pixel matrix array of the image sensor 20 is, in this example, equal to 2 cm. Generally, whatever the embodiment, the distance d between the object and the pixels of the image sensor is preferably comprised between 50 µm and 5 cm.

[0057] The absence of magnifying optics between the image sensor 20 and the object 10 will be noted, this being the preferred configuration. This does not prevent focusing micro-lenses optionally being present level with each pixel of the image sensor 20, the latter not having the function of magnifying the image.

[0058] The device 1 includes an optical system 15 that is placed between the light source 11 and the object 10. Its function is to collect the emission wave 12 propagating toward the object and to form a convergent wave 12.sub.c that propagates to the object, which wave is called the convergent incident wave. Some of the convergent incident wave 12.sub.c is then transmitted by the object, forming a transmitted wave 22, and propagates to the image sensor 20. Moreover, under the effect of exposure to the convergent incident wave 12.sub.c, the object may generate a diffraction wave 23 resulting from diffraction, by the object, of the convergent incident wave 12.sub.c. The image sensor is therefore exposed to a wave, called the exposure wave 24, comprising the transmitted wave 22 and the diffraction wave 23. Detection of the exposure wave 24 by the image sensor allows an image of a portion of the object to be formed, this portion corresponding to the field of observation. This image represents a spatial distribution of the amplitude of the exposure wave 24 in the detection plane P.sub.20. It may in particular include diffraction patterns resulting from interference between the transmitted wave 22 and the diffraction wave 23. These patterns may in particular take the form of a central core, around which concentric rings lie. It is a question of the diffraction patterns described in the section relating to the prior art.

[0059] When the object includes various particles 10b, the diffraction wave includes a plurality of elementary diffraction waves, each elementary diffraction wave resulting from diffraction of the convergent incident wave 12.sub.c by one of said particles. The appearance of these diffraction waves is promoted when the size of said particles is about the same as or larger than the wavelength λ emitted by the light source 11.

[0060] The optical system 15 allows a secondary image 11.sub.s of the source to be formed, above or below the object. The terms above and below are understood to mean along the propagation axis of the emission wave 12. Thus, by below the object, what is meant is in a half space defined by the plane P.sub.10 that passes through the holder able to hold the object 10 and that is perpendicular to the propagation axis Z, this half space including the image sensor 20 (and therefore not including the source 11). In the example shown in FIG. 1A, the secondary source 11.sub.s is positioned below the image sensor 20, in the extension of the convergent incident wave 12.sub.c. It is therefore a question of a virtual source. By below the sensor, what is meant is in a half space defined by the detection plane P.sub.20 and not including the source 11.

[0061] In FIG. 1A, the positions of the object 10, of the image sensor 20 and of the secondary source 11.sub.s are O, C and S, respectively. If g.sub.X is the transverse magnification of the object along an axis X that is perpendicular to the propagation axis Z,

[00001] g.sub.X = x.sub.20/x.sub.10 = (SO + OC)/SO (1)

where [0062] x.sub.10 is a dimension in the object plane P.sub.10; [0063] x.sub.20 is the same dimension in the detection plane P.sub.20, i.e. in the image acquired by the image sensor; and [0064] SO and OC designate algebraic values, i.e. signed distances along the propagation axis Z.

[0065] The expression transverse magnification is understood to mean a magnification along an axis that is perpendicular to the propagation axis of the light. In the rest of the text, the terms transverse magnification and magnification are used interchangeably.

[0066] In the configuration shown in FIG. 1A, the transverse magnification is lower than 1. In other words, the dimensions of the imaged object are smaller in the detection plane P.sub.20 than in the object plane P.sub.10. It is therefore possible to take an image of an object the dimensions of which, in the object plane P.sub.10, are larger than the dimensions of the image sensor 20. The further the secondary source 11.sub.s is moved from the image sensor (and from the object), the closer the magnification gets to 1 while remaining below 1, hence the notation g.sub.x → 1⁻. When the secondary source 11.sub.s is brought closer to the image sensor, the magnification tends to 0 while remaining positive: g.sub.x → 0⁺. The term positive magnification designates the fact that there is no inversion between the image I formed on the image sensor 20 and the object 10.
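The behavior described above can be checked numerically. The sketch below implements Expression (1) with algebraic (signed) positions along the Z axis; the numerical positions are illustrative assumptions, not values taken from the description.

```python
def magnification(s, o, c):
    """Transverse magnification g_x = (SO + OC)/SO of Expression (1).

    s, o, c: algebraic positions of the secondary source S, the object O
    and the image sensor C along the propagation axis Z (increasing from
    the light source toward the sensor).
    """
    so = o - s  # algebraic value SO
    oc = c - o  # algebraic value OC
    return (so + oc) / so

# Illustrative positions (mm): object at 0, sensor at 20 (d = 2 cm).
# Virtual secondary source below the sensor (s > c): 0 < g_x < 1.
assert 0 < magnification(s=40.0, o=0.0, c=20.0) < 1
# Secondary source between object and sensor (o < s < c): g_x < 0 (inverted image).
assert magnification(s=5.0, o=0.0, c=20.0) < 0
# Secondary source above the object (s < o): g_x > 1, as in the prior art.
assert magnification(s=-10.0, o=0.0, c=20.0) > 1
```

Moving s further below the sensor drives g_x toward 1 from below, matching the notation g.sub.x → 1⁻.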

[0067] An incident wave 12.sub.AA, according to the prior art is also shown in this figure. The incident wave is divergent from the light source to the object, from which a transmitted wave 22.sub.AA, which is also divergent, propagates to the image sensor 20. The transverse magnification is then higher than 1.

[0068] FIG. 2 shows three configurations in which the secondary source 11.sub.s, 11′.sub.s, 11″.sub.s occupies the positions S, S′ and S″, respectively. The position S of the secondary source 11.sub.s is the position that was described above with reference to FIG. 1A. For each of these positions, the following have also been shown: [0069] the waves 12.sub.c, 12′.sub.c and 12″.sub.c incident on the object, which waves propagate between the optical system and the object, and the waves 22, 22′, 22″ transmitted by the object, which waves propagate to the image sensor 20; [0070] a transverse dimension x.sub.10, x′.sub.10, x″.sub.10 of the object in the object plane P.sub.10; and [0071] said transverse dimension x.sub.20, x′.sub.20, x″.sub.20 of the object in the detection plane P.sub.20.

[0072] On the basis of Expression (1), it is possible to determine the transverse magnification in each of these configurations. [0073] When the secondary source is positioned between the image sensor 20 and the object 10, which position is referenced S′ in FIG. 2, the magnification is negative. The negative value of the magnification indicates an inversion of the object in the image. The magnification tends toward 0 as the secondary source is brought closer to the image sensor and tends toward −∞ as the secondary source is brought closer to the object. The case where the secondary source is located between the sensor 20 and the object 10, at equal distance from each, corresponds to a configuration in which g.sub.x = −1. [0074] When the secondary source is positioned between the optical system 15 and the object 10, which position is referenced S″ in FIG. 2, the magnification is positive, and higher than 1: it tends toward +∞ as the secondary source is brought closer to the object and tends toward 1 as the secondary source is moved further from the object. The configuration is then similar to the prior-art configuration. In this case, the incident wave 12″.sub.c reaching the object is divergent.

[0075] FIG. 3 shows the variation in the transverse magnification as a function of the position S of the secondary source, along the propagation axis of the incident wave. In this figure, the x-axis represents the ratio

[00002] CS/CO

and the y-axis represents the transverse magnification. It may be seen that: [0076] i) when the secondary source 11.sub.s is located closer to the image sensor 20 than to the object 10,

[00003] (CS/CO < 1/2),

the absolute value of the magnification g.sub.x is strictly lower than 1: |g.sub.x|<1; the magnification is negative when the secondary source is placed between the image sensor 20 and the object 10, and positive when the secondary source is located below the image sensor; [0077] ii) when the secondary source 11.sub.s is placed closer to the object 10 than to the image sensor 20,

[00004] (CS/CO > 1/2),

the absolute value of the magnification g.sub.x is strictly higher than 1: |g.sub.x|>1; [0078] iii) when the secondary source 11.sub.s is placed between the object and the image sensor,

[00005] (0 < CS/CO < 1),

the magnification g.sub.x is negative, this corresponding to an inversion of the image of the object; [0079] iv) when the secondary source 11.sub.s is placed below the image sensor, i.e. in a half space defined by the detection plane P.sub.20 and not containing the object 10 (or the source 11),

[00006] (CS/CO < 0),

the magnification g.sub.x is strictly comprised between 0 and 1: 0 < g.sub.x < 1; and [0080] v) when the secondary source 11.sub.s is placed between the source 11 and the object 10

[00007] (CS/CO > 1),

the magnification is strictly higher than 1.

[0081] The configuration in which the secondary source is in the object plane, i.e.

[00008] CS/CO = 1

corresponds to a configuration in which the magnification is infinite. However, in this configuration, that portion of the object which is illuminated by the convergent incident wave 12.sub.c is then infinitely small, and hence this configuration is of no interest. When the secondary source is brought closer to the object, the magnification tends toward: [0082] −∞ if the secondary source is located below the object, i.e.

[00009] CS/CO → 1⁻;

[0083] +∞ if the secondary source is located above the object, i.e.

[00010] CS/CO → 1⁺.

[0084] The configuration in which the secondary source 11.sub.s is in the detection plane P.sub.20, i.e.

[00011] CS/CO = 0

corresponds to a configuration in which the magnification is zero. This configuration is of no interest.
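The curve of FIG. 3 can be reproduced from Expression (1). Writing r = CS/CO (algebraic values), a little algebra, under an assumption on the sign conventions of the description, gives g.sub.x = r/(r − 1); the sketch below checks cases i) to v) against this closed form.

```python
def magnification_from_ratio(r):
    """g_x as a function of r = CS/CO (algebraic values), derived from Expression (1)."""
    return r / (r - 1.0)

# i) secondary source closer to the sensor than to the object (r < 1/2): |g_x| < 1
assert all(abs(magnification_from_ratio(r)) < 1 for r in (-2.0, -0.5, 0.1, 0.4))
# ii) closer to the object than to the sensor (r > 1/2, r != 1 here): |g_x| > 1
assert all(abs(magnification_from_ratio(r)) > 1 for r in (0.6, 0.9, 1.1, 2.0))
# iii) between object and sensor (0 < r < 1): g_x < 0 (image inversion)
assert all(magnification_from_ratio(r) < 0 for r in (0.25, 0.75))
# iv) below the sensor (r < 0): 0 < g_x < 1
assert all(0 < magnification_from_ratio(r) < 1 for r in (-3.0, -0.1))
# v) between source and object (r > 1): g_x > 1
assert all(magnification_from_ratio(r) > 1 for r in (1.5, 4.0))
```

The closed form also reproduces the limiting cases: r = 1/2 gives g.sub.x = −1, r = 0 gives a zero magnification, and r → 1 gives an infinite magnification.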

[0085] FIGS. 4A to 4E show configurations associated with particular portions of the curve of FIG. 3: [0086] FIGS. 4A and 4B show two configurations in which the magnification is positive and lower than 1, the magnification tending toward 1 as the secondary source 11.sub.s is moved further from the image sensor 20; [0087] FIG. 4C shows the limiting case in which the absolute value of the magnification is equal to 1, the secondary source 11.sub.s being located at equal distance from the object 10 and from the image sensor 20:

[00012] CS/CO = 1/2;

and [0088] FIGS. 4D and 4E show two configurations in which the magnification is positive and higher than 1, and tends toward 1 as the secondary source 11.sub.s is moved further from the object 10.

[0089] Thus, a magnification the absolute value of which is lower than 1 is obtained in configurations i) or iv). This is due to the fact that the wave 12.sub.c incident on the object 10 is convergent, and that the secondary source 11.sub.s is closer to the image sensor 20 than to the object 10. In this type of configuration, provided that the illuminated field on the object is sufficiently large, the field of observation of the image sensor is increased with respect to the prior art. The curly bracket shown in FIG. 4C indicates the range of positions of the secondary source corresponding to this particular case.

[0090] Moreover, by interposing an optical system 15 between the light source 11 and the object 10, it is possible to make the position of the secondary light source 11.sub.s vary, for example using an optical system 15 of variable focal length or by moving said system. The magnification g.sub.x may be modulated depending on a parameter characterizing the optical system 15, for example its focal length or its position along the propagation axis Z. This allows, during observation of an object, images corresponding to a magnification lower than 1, and hence to a large field of observation, to be alternated with images corresponding to a magnification higher than 1, allowing, via a zoom effect, details to be better seen. Although the device does not include any magnifying optics between the object and the image sensor, the invention allows the magnification of an image to be modulated.
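The way a parameter of the optical system moves the secondary source, and hence the magnification, can be sketched with the thin-lens equation. All distances below are hypothetical, chosen only to illustrate the mechanism; they are not the dimensions of the experimental devices described later.

```python
def lens_image_position(u, f):
    """Thin-lens sketch: distance v of the image (the secondary source) behind
    a lens, for a point source at distance u in front of a lens of focal
    length f, from 1/v = 1/f - 1/u (v > 0 when the image is behind the lens)."""
    return u * f / (u - f)

# Hypothetical geometry (mm), not taken from the description:
u = 80.0                # light source to lens 15
f = 50.0                # focal length of lens 15
d_lens_object = 120.0   # lens 15 to object plane
d_object_sensor = 20.0  # object plane to detection plane

v = lens_image_position(u, f)      # image formed 133.33 mm behind the lens
s = v - d_lens_object              # secondary source, 13.33 mm below the object
g = (-s + d_object_sensor) / (-s)  # Expression (1) with the object at the origin
assert round(g, 2) == -0.5         # inverted, reduced image
```

Changing u (moving the lens) or f (a variable-focal-length lens) moves s, and with it the magnification, without touching the source, the object or the sensor.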

[0091] The image obtained on the image sensor 20 may be exploited as such. Specifically, it has been shown that certain particles produce diffraction patterns the morphology of which is specific thereto. It is thus possible to count particles, and to identify them, as described in document WO2008090330, which was cited in the section relating to the prior art. As a variant, a reconstruction algorithm may be applied to this image so as to obtain a reconstructed image in a plane parallel to the detection plane P.sub.20, called the reconstruction plane P.sub.r, this plane being located at a known distance d.sub.r from the detection plane. It is then a question of using the well-known principles of digital holographic reconstruction, which principles are for example described in the publication Ryle et al, Digital in-line holography of biological specimens, Proc. of SPIE Vol. 6311 (2006), to calculate the product of convolution between the image intensity I(x,y) measured by each pixel 20.sub.(x,y) of the image sensor 20 and a propagation operator h(x,y,z). The function of the propagation operator h(x,y,z) is to describe the propagation of light between the image sensor 20 and a point of coordinates (x,y,z). The coordinates (x, y) are coordinates in the detection plane P.sub.20, whereas the coordinate z is a coordinate along the propagation axis Z. It is possible to reconstruct a complex expression I*(x,y,z) for the exposure wave 24 at every point of spatial coordinates (x,y,z) and in particular in a plane located at a reconstruction distance d.sub.r from the image sensor, i.e. a plane of equation z=d.sub.r. It is then possible to determine the amplitude u(x,y,z) and phase φ(x,y,z) of this exposure wave 24 at the reconstruction distance d.sub.r, with: [0092] u(x,y,z)=abs [I*(x,y,z=d.sub.r)]; and [0093] φ(x,y,z)=arg [I*(x,y,z=d.sub.r)],
abs and arg respectively designating the modulus and argument operators.

[0094] In this example, the detection plane P.sub.20 in which the image I is formed is assigned a coordinate z=0. The propagation operator may be such that:

[00013] h(x,y,z) = (z/(2πr)) · (1 − j·2πr/λ) · e^(j2πr/λ)/r², (2)

where r = √(x² + y² + z²), and λ is the wavelength.

[0095] Such an operator was described in the publication Marathay A., On the usual approximation used in the Rayleigh-Sommerfeld diffraction theory, J. Opt. Soc. Am. A, Vol. 21, No. 4, April 2004. Other propagation operators are usable, for example an operator based on the Fresnel-Helmholtz function, such that:

[00014] h(x,y,z) = (1/(jλz)) · e^(j2πz/λ) · exp(jπ(x² + y²)/(λz)). (2′)
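A minimal sketch of such a reconstruction, based on the Fresnel operator of Expression (2′), with the convolution computed by FFT. The grid size and distances below are assumptions for illustration (sampling and aliasing issues are ignored); this is not the exact implementation used in the trials.

```python
import numpy as np

def fresnel_kernel(shape, pitch, z, wavelength):
    """Sampled Fresnel impulse response h(x, y, z) of Expression (2')."""
    ny, nx = shape
    x = (np.arange(nx) - nx // 2) * pitch
    y = (np.arange(ny) - ny // 2) * pitch
    xx, yy = np.meshgrid(x, y)
    return (np.exp(2j * np.pi * z / wavelength) / (1j * wavelength * z)
            * np.exp(1j * np.pi * (xx**2 + yy**2) / (wavelength * z)))

def reconstruct(intensity, pitch, d_r, wavelength):
    """Back-propagate the acquired image I(x, y) over a distance d_r by
    convolving it with h(x, y, -d_r), the convolution being computed by FFT."""
    h = fresnel_kernel(intensity.shape, pitch, -d_r, wavelength)
    transfer = np.fft.fft2(np.fft.ifftshift(h)) * pitch**2
    field = np.fft.ifft2(np.fft.fft2(intensity) * transfer)
    return np.abs(field), np.angle(field)  # amplitude u and phase phi

# Example: 2.2 um pixels, 450 nm source, d_r = 2 cm, as in the description.
amp, phase = reconstruct(np.ones((256, 256)), 2.2e-6, 0.02, 450e-9)
```

The amplitude and phase returned correspond to the quantities u(x,y,z) and φ(x,y,z) defined above, evaluated in the plane z = d.sub.r.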

Experimental Trials

[0096] Experimental trials have been carried out using a configuration such as shown in FIGS. 5A and 5B. The light source 11 was coupled to an optical fiber 13, a proximal end of which was placed facing the light source and a distal end of which formed a point light source. This distal end was placed at a large distance from an objective 13 able to form, at its focal point, a reduced image of the distal end of the optical fiber, so as to be able to be considered to be placed at infinity with respect to this objective. By large distance, what is meant here is at least 10 times, or even 100 times, the focal length of the objective. In this example, the objective 13 was a ×40 EF-N Plan Motic objective of 0.65 numerical aperture and of focal length f.sub.1=4.6 mm. The association of the optical fiber and the objective formed a spatial filter allowing a point light source 11 to be formed at the focal point of the objective 13. Alternatively, a diaphragm could have been used instead of the optical fiber or instead of the optical-fiber + objective assembly.

[0097] The device included an optical system 15 that was placed between the light source 11 and the object 10: it was a question of a tube lens of 50 mm focal length (reference AC254-050-A, manufacturer Thorlabs). This system was able to conjugate the light source 11 with a secondary source 11.sub.s, via the point source 11. This optical system was arranged such that the secondary source 11.sub.s was positioned between the object 10 and the image sensor 20, as shown in FIG. 5A.

[0098] The distance d.sub.15 between the optical system 15 and the objective 13 was varied, this distance being called the inter-optic distance, so as to move the position S of the secondary source 11.sub.s along the propagation axis Z of the light. The object 10 included a transparent reticle, which is shown in FIG. 5B, comprising opaque graduations that were spaced apart from one another by a distance equal to 100 µm. The object 10 was placed at a distance of 2 cm from the detection plane P.sub.20 of the CMOS image sensor 20, which was described above. FIGS. 6A, 6B and 6C show images obtained on the image sensor. Each image includes patterns, called diffraction patterns, resulting from interference between a diffraction wave 23, produced by diffracting elements of the object, and the wave 22 transmitted by the object. The diffracting elements of the object may for example be the opaque graduations of the reticle. Thus, as described above, the image sensor 20 was exposed to an exposure wave 24, including the wave 22 transmitted by the object and a wave 23 resulting from the diffraction, by the object, of the convergent wave 12.sub.c incident on the object. The images of FIGS. 6A, 6B and 6C show a spatial distribution of the amplitude of the exposure wave 24 in the detection plane P.sub.20.

[0099] Holographic reconstruction algorithms were applied to each of the images 6A, 6B and 6C. The holographic reconstruction algorithm implemented was based on the operator described by Expression (2), the reconstruction distance employed being z=2 cm, the coordinate z=0 corresponding to the detection plane P.sub.20. FIGS. 7A, 7B and 7C show the results of the reconstructions obtained on the basis of the images 6A, 6B and 6C, respectively. The reconstruction algorithms did not take into account the magnification induced by the optical system 15. Specifically, they were based on the assumption of a plane wave propagating parallel to the propagation axis. Thus, the reconstructed images had the same magnification as the images obtained in the detection plane. It will be noted that the graduations may clearly be seen in the reconstructed images, this attesting to the high quality of the reconstruction.

[0100] Since the distance between two adjacent pixels was known, it was then possible to measure the transverse magnification generated by the optical system using the expression:

[00015] g.sub.X = (Δ.sub.pix · n.sub.pix)/Δ, (3)

where: [0101] Δ is the distance between a preset number, here equal to 10, of successive graduations of the reticle; [0102] Δ.sub.pix is the distance between two adjacent pixels of the image sensor 20; and [0103] n.sub.pix is the number of pixels between said preset number of successive graduations in the reconstructed image.
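Expression (3) can be evaluated directly. The pixel pitch and graduation spacing below are those given in the description; the pixel count n_pix is a hypothetical reading from a reconstructed image (and Δ is taken as 10 intervals of 100 µm), used only to show the arithmetic.

```python
delta_pix = 2.2e-6              # inter-pixel pitch of the CMOS sensor (2.2 um)
n_graduations = 10              # preset number of successive graduations
delta = n_graduations * 100e-6  # graduations spaced 100 um apart -> delta = 1 mm

n_pix = 773                          # hypothetical pixel count spanning delta
g = (delta_pix * n_pix) / delta      # Expression (3)
assert round(g, 1) == 1.7            # magnitude of the magnification
```

Expression (3) yields the magnitude of the magnification; the sign (image inversion or not) is read from the orientation of the reconstructed image.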

[0104] FIGS. 6A and 7A correspond to an inter-optic distance of zero, the optical system 15 being placed against the objective 13. The secondary source 11s is then located, between the object and the image sensor, at a distance of 12.6 mm from the image sensor and at a distance of 7.4 mm from the object, and therefore closer to the object than to the sensor. The magnification is negative and its absolute value is higher than 1. Its estimation according to Expression (3) indicates g.sub.X = −1.7.

[0105] FIGS. 6B and 7B correspond to an inter-optic distance of 29.7 mm. The secondary source 11s is then located between the object 10 and the image sensor 20, at equal distance from both. It is then in a configuration such as shown in FIG. 4C. The magnification is negative, and its absolute value is equal to 1. In other words, g.sub.X = −1.

[0106] FIGS. 6C and 7C correspond to an inter-optic distance of 120 mm. The secondary source 11s is then located between the object 10 and the image sensor 20, at a distance of 7.2 mm from the image sensor and therefore closer to the image sensor than to the object. The magnification is negative, and its absolute value was estimated to be 0.57.

[0107] These trials demonstrated the ability of the device to obtain an exploitable image of an object the size of which is larger than the size of the sensor, because of a magnification the absolute value of which is lower than 1. They also demonstrated that it is possible to easily modulate the position of the secondary source 11s and, therefore, to pass from a magnification higher than 1 to a magnification lower than 1, without modifying the position of the source, of the object or of the image sensor, and to do so with no magnifying optics placed between the object and the image sensor. Thus, while remaining within the field of lensless imaging, the magnification of the image of the object may be lower than 1, and may vary.

[0108] In another trial, using the device shown in FIG. 5C, the lens 15 was replaced with an Optotune variable-focal-length lens (reference EL-10-30-VIS-LD). The focal length of such a lens may be controlled by application of an electrical current. The reticle was replaced with a reticle the graduations of which were not opaque, but formed of trenches etched by laser etching. Each graduation formed what is called a phase object that caused the convergent wave 12.sub.c incident on the object to diffract. The lens was arranged such that the secondary source 11s was positioned below the image sensor 20.

[0109] FIGS. 8A and 8B respectively show images obtained: [0110] when the current was zero: the magnification was positive, the secondary source 11s being located below the sensor, at a distance of 21.8 mm from the latter; and [0111] when the current was 292 mA: the magnification was positive, the secondary source 11s being located below the image sensor, at a distance of 12.2 mm from the latter.

[0112] FIGS. 9A and 9B show images reconstructed from FIGS. 8A and 8B, respectively, in the object plane, i.e. using a reconstruction distance of 2 cm. It is possible to estimate the magnification in each of these configurations, said magnification being estimated from FIGS. 9A and 9B to be 0.52 and 0.38, respectively. These trials illustrate the configurations shown in FIGS. 4A and 4B, and confirm the fact that the closer the secondary source is brought to the image sensor, the lower the magnification becomes.
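The reported magnifications are consistent with Expression (1). As a consistency check (not part of the original trials), taking the object plane as the origin and the detection plane 2 cm below it, the secondary source positions of 21.8 mm and 12.2 mm below the sensor give:

```python
def magnification(s, o, c):
    """g_x = (SO + OC)/SO, Expression (1), with algebraic positions along Z."""
    return ((o - s) + (c - o)) / (o - s)

o, c = 0.0, 20.0  # object plane and detection plane positions (mm)
g_zero_current = magnification(s=c + 21.8, o=o, c=c)  # source 21.8 mm below sensor
g_292_mA = magnification(s=c + 12.2, o=o, c=c)        # source 12.2 mm below sensor
assert round(g_zero_current, 2) == 0.52
assert round(g_292_mA, 2) == 0.38
```

Both values match the magnifications estimated from FIGS. 9A and 9B, and the check confirms that bringing the secondary source closer to the sensor lowers the magnification.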

[0113] This once again demonstrates that it is possible to modulate the magnification of an image in a lensless-imaging type configuration, i.e. without magnifying optics placed between the object and the sensor.

[0114] The invention will possibly be used to observe samples, for example biological samples, or in other fields, for example the field of the food-processing industry.