Method for measuring several wavefronts incoming from different propagation directions

11609124 · 2023-03-21

Abstract

A method for determining wavefront shapes of N angular channels C.sub.L of different propagation directions P.sub.L, said propagation directions P.sub.L being determined by mean propagation direction vectors u.sub.L, from a single signal image acquisition I(x,y) of a multi-angular signal light beam containing said angular channels, each angular channel C.sub.i being separated from the other angular channels C.sub.j by an angular separation Δα.sub.ij defined by Δα.sub.ij=arccos(u.sub.i·u.sub.j), where "·" stands for the inner product between u.sub.i and u.sub.j.
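Purely as an illustration (not part of the claimed subject-matter), the angular separation Δα.sub.ij=arccos(u.sub.i·u.sub.j) between two propagation-direction vectors can be sketched as follows; the function name `angular_separation` and the use of NumPy are assumptions of this sketch:

```python
import numpy as np

def angular_separation(u_i, u_j):
    """Return the angular separation (radians) between two
    propagation-direction vectors, normalizing them to unit length
    so that arccos of the inner product is well defined."""
    u_i = np.asarray(u_i, dtype=float) / np.linalg.norm(u_i)
    u_j = np.asarray(u_j, dtype=float) / np.linalg.norm(u_j)
    # Clip guards against round-off pushing the dot product outside [-1, 1]
    return np.arccos(np.clip(np.dot(u_i, u_j), -1.0, 1.0))
```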

Claims

1. A method for determining wavefront shapes of N angular channels C.sub.L of different propagation directions P.sub.L, said propagation directions P.sub.L being determined by mean propagation direction vectors u.sub.L, from a single signal image acquisition I(x,y) of a multi-angular signal light beam containing said angular channels, each angular channel C.sub.i being separated from the other angular channels C.sub.j by an angular separation Δα.sub.ij defined by Δα.sub.ij=arccos(u.sub.i·u.sub.j), where "·" stands for the inner product between u.sub.i and u.sub.j, with a device comprising an optical assembly made at least of an optical mask and an imaging sensor for generating and recording intensity patterns of incident beams, by having these beams reflect on, or propagate through, the optical mask, the optical mask having the optical properties: i) to cause the intensity pattern to depend on the wavefront shape, so that a tilt applied to the wavefront shape, with said tilt being smaller than an angular memory effect δα of said optical mask, results in a local displacement of the intensity pattern, and ii) for two incident beams M.sub.n, M.sub.m of respective propagation directions P.sub.n, P.sub.m determined by mean propagation direction vectors u.sub.n and u.sub.m, respectively, said incident beams M.sub.n, M.sub.m having a same wavefront shape and being separated from each other by a separation angle defined by Δα.sub.mn=arccos(u.sub.m·u.sub.n) larger than the angular memory effect δα, i.e. Δα.sub.mn>δα, to produce uncorrelated intensity patterns over a surface area A of the imaging sensor, two uncorrelated random intensity patterns being defined as statistically orthogonal relatively to a zero-mean cross-correlation product, the angular memory effect δα being smaller than the angular separations Δα.sub.ij of the angular channels C.sub.L: δα<Δα.sub.ij, for all i, j∈[1, N], with i≠j, the method comprising: a) providing several reference intensity patterns R.sub.L(x,y), wherein each reference intensity pattern R.sub.L(x,y) corresponds to a respective propagation direction P.sub.L, L varying between 1 and N, x and y being coordinates; b) recording one single signal image I(x,y) of the intensity pattern generated by said multi-angular signal light beam which comprises the N propagation directions P.sub.L, using the device, the single signal image I(x,y) being representative of light impinging on the surface area A; c) computing intensity-weight data W.sub.L.sup.I(x,y) and deformation data T.sub.L.sup.I(x,y), for all L varying from 1 to N, the intensity-weight data W.sub.L.sup.I(x,y) and the deformation data T.sub.L.sup.I(x,y) being representative of an intensity modulation and a diffeomorphism, respectively, of each given reference intensity pattern R.sub.L(x,y), at propagation direction P.sub.L, for the single signal image I(x,y), all the N intensity-weight data W.sub.L.sup.I(x,y) and the N deformation data T.sub.L.sup.I(x,y) being computed, for L varying from 1 to N, so as to minimize, for all sampling points (x, y) of the surface area A, from the single signal image I(x,y), a difference D.sub.A between the single signal image I(x,y) on the one hand, and the sum of the reference intensity patterns R.sub.L multiplied by the intensity-weight data W.sub.L.sup.I(x,y) and deformed by the deformation data T.sub.L.sup.I(x,y), on the other hand: D.sub.A=∥I(x,y)−Σ.sub.L=1.sup.N W.sub.L.sup.I(x,y) R.sub.L[(x,y)+T.sub.L.sup.I(x,y)]∥.sub.A, the symbol ∥.∥.sub.A designating a norm calculated for all (x, y) sampling points in the surface area A, each given reference intensity pattern R.sub.L(x,y) being, for the surface A, orthogonal to each other reference intensity pattern R.sub.K(x,y) relatively to the zero-mean cross-correlation product, K being a natural number different from L and chosen in [1; N]; d) generating data for each propagation direction P.sub.L representative of: the shape of the wavefront, by integrating the deformation data T.sub.L.sup.I(x,y), and the intensity map, based on the weight W.sub.L.sup.I(x,y).
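As an illustrative, non-limiting sketch of the quantity minimized in step c), the difference D.sub.A can be evaluated numerically as below. The function name `residual_norm`, the use of NumPy/SciPy, bilinear interpolation for applying the deformation, and a Frobenius norm for ∥.∥.sub.A are all assumptions of this sketch, not features of the claims:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def residual_norm(I, refs, weights, deforms):
    """Evaluate D_A = || I - sum_L W_L * R_L[(x,y) + T_L] ||_A.

    I       : (H, W) signal image
    refs    : list of N reference patterns R_L, each (H, W)
    weights : list of N weight maps W_L^I, each (H, W)
    deforms : list of N displacement fields T_L^I, each (2, H, W) as (dy, dx)
    """
    H, W = I.shape
    yy, xx = np.mgrid[0:H, 0:W].astype(float)
    model = np.zeros_like(I, dtype=float)
    for R, Wl, T in zip(refs, weights, deforms):
        # Sample R_L at the deformed coordinates (x, y) + T_L^I(x, y)
        warped = map_coordinates(R, [yy + T[0], xx + T[1]],
                                 order=1, mode='nearest')
        model += Wl * warped
    # Frobenius norm over all sampling points of the surface area A
    return np.linalg.norm(I - model)
```

For a perfect decomposition (correct weights and deformations), the returned residual is zero up to floating-point error.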

2. The method according to claim 1, comprising at step a), recording said reference intensity patterns R.sub.L(x,y) using the device, each reference intensity pattern R.sub.L(x,y) being generated by a respective reference incident beam L with propagation direction P.sub.L, L varying from 1 to N.

3. The method according to claim 1, the optical mask comprising a diffuser, an engineered pseudo-diffuser, a diffractive element, an optical fiber bundle, a metasurface, a freeform optical element, an array of micro-optical elements, each micro-optical element generating on the imaging sensor an intensity pattern different from the intensity pattern generated by the micro-optical elements located at least in its vicinity, an array of micro-lenses with random aberrations, or an array of micro-lenses randomly arranged spatially, or any combination thereof.

4. The method according to claim 1, wherein the imaging sensor is a matrix imaging sensor, the surface A being a sum of macro-pixels A.sub.i: A=Σ.sub.i A.sub.i, the method comprising in step c): computing intensity-weight data W.sub.L.sup.I(A.sub.i) and deformation data T.sub.L.sup.I(A.sub.i) which are constant over the macro-pixel of surface A.sub.i, for all L varying from 1 to N, by minimizing the difference D.sub.Ai; and updating the intensity-weight data W.sub.L.sup.I(x,y) and the deformation data T.sub.L.sup.I(x,y) for all coordinates (x,y) on the surface A, with W.sub.L.sup.I(x,y)=W.sub.L.sup.I(A.sub.i) and T.sub.L.sup.I(x,y)=T.sub.L.sup.I(A.sub.i) for all (x,y) belonging to A.sub.i.

5. The method according to claim 4, wherein an estimate of T.sub.L.sup.I(A.sub.i) and W.sub.L.sup.I(A.sub.i) and of the minimum of the difference D.sub.Ai, D.sub.Ai being defined relatively to the norm ∥.∥.sub.Ai, is obtained by computing, for each macro-pixel of surface A.sub.i, zero-mean cross-correlation product images between the signal image I(x,y) and each of the reference intensity patterns R.sub.L(x,y), with (x,y) coordinates of the portion A.sub.i, the zero-mean cross-correlation product image between the signal sub-image and each of the reference intensity patterns having a peak, the intensity-weight data W.sub.L.sup.I(A.sub.i) being the amplitude of the peak, and the deformation data T.sub.L.sup.I(A.sub.i) being the displacement vector of said peak, or of its centroid, from the center of the zero-mean cross-correlation product image.
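The zero-mean cross-correlation estimate of claim 5 can be sketched for one macro-pixel as follows. This is an illustrative sketch only: the function name `zncc_peak`, the use of SciPy's FFT convolution, and integer-pixel peak localization (no centroid refinement) are assumptions:

```python
import numpy as np
from scipy.signal import fftconvolve

def zncc_peak(sub_image, reference):
    """Estimate the weight W_L^I(A_i) (peak amplitude) and displacement
    T_L^I(A_i) (peak offset from the correlation-image center) from the
    zero-mean cross-correlation of a signal sub-image with a reference."""
    a = sub_image - sub_image.mean()
    b = reference - reference.mean()
    # Cross-correlation as convolution with the flipped reference
    corr = fftconvolve(a, b[::-1, ::-1], mode='same')
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    center = (corr.shape[0] // 2, corr.shape[1] // 2)
    weight = corr[peak]
    shift = (peak[0] - center[0], peak[1] - center[1])
    return weight, shift
```

Applied to a sub-image that is a shifted copy of the reference, the returned shift recovers the applied displacement.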

6. The method according to claim 4, wherein an estimate of T.sub.L.sup.I(A.sub.i) and W.sub.L.sup.I(A.sub.i) and of the difference D.sub.Ai, D.sub.Ai being defined relatively to the norm ∥.∥.sub.Ai, is obtained by computing a Wiener deconvolution for each macro-pixel of surface A.sub.i, the Wiener deconvolution possibly relating to: a Wiener deconvolution of the signal sub-images by the reference intensity patterns R.sub.L(x,y), or a Wiener deconvolution of the reference intensity patterns R.sub.L(x,y) by the signal sub-images.
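A minimal sketch of the Wiener-deconvolution variant of claim 6 (signal sub-image deconvolved by the reference) is given below. The function name `wiener_deconvolve`, the scalar `snr` regularization constant, and the circular (FFT-based) treatment of boundaries are assumptions of this sketch; the result peaks at the displacement of the reference pattern inside the sub-image:

```python
import numpy as np

def wiener_deconvolve(signal_img, reference, snr=100.0):
    """Wiener deconvolution of a (zero-meaned) signal sub-image by a
    reference pattern: returns an image whose maximum marks the
    displacement of the reference within the signal."""
    S = np.fft.fft2(signal_img - signal_img.mean())
    R = np.fft.fft2(reference - reference.mean())
    # Wiener filter: conj(R) / (|R|^2 + noise-to-signal ratio)
    H = np.conj(R) / (np.abs(R) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft2(S * H))
```

Because the FFT is circular, a displacement of −2 pixels along x shows up at column N−2 of the output.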

7. The method according to claim 4, wherein an estimate of T.sub.L.sup.I(A.sub.i) and W.sub.L.sup.I(A.sub.i) and of the difference D.sub.Ai, D.sub.Ai being defined relatively to the norm ∥.∥.sub.Ai, is obtained for each macro-pixel of surface A.sub.i by computing a matrix inversion algorithm, the matrices to be inverted being related to at least a transform of a sub-image of an intensity pattern R.sub.L(x,y), such as a Fourier transform.

8. The method according to claim 1, wherein an estimate of the difference D.sub.A is obtained by computing the intensity-weight data W.sub.L.sup.I(x,y) and the deformation data T.sub.L.sup.I(x,y) using an iterative optimization algorithm or a compressed sensing algorithm.
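A toy sketch of the iterative-optimization variant of claim 8 is shown below for the simplest possible case: a single channel, a spatially uniform weight w and a uniform shift (ty, tx). The function name `fit_channel`, the coarse integer-shift search followed by Powell refinement, and the restriction to a uniform deformation are all assumptions of this sketch, not the claimed method:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.ndimage import shift as nd_shift

def fit_channel(I, R, search=3):
    """Estimate (w, ty, tx) minimizing D_A = ||I - w * R[(x,y)+(ty,tx)]||
    for one channel: coarse integer-shift search, then iterative refinement."""
    # Coarse stage: best integer shift within +/- `search` pixels (w fixed to 1)
    best = min(
        (np.linalg.norm(I - nd_shift(R, (ty, tx), order=1, mode='nearest')), ty, tx)
        for ty in range(-search, search + 1)
        for tx in range(-search, search + 1))
    _, ty0, tx0 = best
    # Fine stage: jointly refine weight and sub-pixel shift
    def cost(p):
        w, ty, tx = p
        return np.linalg.norm(I - w * nd_shift(R, (ty, tx), order=1, mode='nearest'))
    res = minimize(cost, x0=[1.0, float(ty0), float(tx0)], method='Powell')
    return res.x
```

In the multi-channel case the same cost would sum over all N weighted, deformed references, with the deformation parameterized per sampling point or per macro-pixel.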

9. The method according to claim 1, wherein an estimate of T.sub.L.sup.I(x,y) and W.sub.L.sup.I(x,y) and of D.sub.A is computed using deep learning approaches based on neural networks that minimize D.sub.A, trained on a collection of signal images and reference images whose respective N wavefronts are known.

10. The method according to claim 1, wherein the intensity-weight data W.sub.L.sup.I(x,y) and the deformation data T.sub.L.sup.I(x,y) are computed using a genetic algorithm in which a collection of candidate solutions is tested, evaluated and selected so as to minimize the value of D.sub.A.

11. The method according to claim 1, wherein measured wavefront shapes are characteristic of some aberrations introduced by an optical system, a biological tissue or the atmosphere.

12. The method according to claim 1, wherein the multi-angular signal light beam is generated by illuminating a polarization assembly located before the optical assembly by an initial light beam containing at least one angular channel, the polarization assembly containing at least one polarization optic, notably a quarter-wave plate, a half-wave plate, a polarizer, a Wollaston prism, a Glan-Taylor polarizer, a Glan-Thompson polarizer, a Rochon prism, a Nomarski prism or any other polarization-dependent optic, so that different polarization states of the initial light beam are encoded into different angular channels C.sub.L of the multi-angular signal light beam.

13. A method for determining wavefront shapes of N angular channels C.sub.L of different propagation directions P.sub.L, said propagation directions P.sub.L being determined by mean propagation direction vectors u.sub.L, from a single signal image acquisition I(x,y) of a multi-angular signal light beam containing said angular channels, each angular channel C.sub.i being separated from the other angular channels C.sub.j by an angular separation Δα.sub.ij defined by Δα.sub.ij=arccos(u.sub.i·u.sub.j), where "·" stands for the inner product between u.sub.i and u.sub.j, with a device comprising an optical assembly made at least of an optical mask and an imaging sensor for generating and recording intensity patterns of incident beams, by having these beams reflect on, or propagate through, the optical mask, the optical mask having the optical properties: i) to cause the intensity pattern to depend on the wavefront shape, so that a tilt applied to the wavefront shape, with said tilt being smaller than an angular memory effect δα of said optical mask, results in a local displacement of the intensity pattern, and ii) for two incident beams M.sub.n and M.sub.m of respective propagation directions P.sub.n, P.sub.m determined by mean propagation direction vectors u.sub.n and u.sub.m, respectively, said incident beams M.sub.n and M.sub.m having a same wavefront shape and being separated from each other by a separation angle Δα.sub.mn=arccos(u.sub.m·u.sub.n) smaller than the angular memory effect δα, i.e. Δα.sub.mn<δα, to produce uncorrelated intensity patterns over at least one surface portion A.sub.n of the imaging sensor, the surface portion A.sub.n having a largest spatial extent L.sub.An smaller than Δα.sub.mn·d, with d being a separation distance between the optical mask and the imaging sensor, two uncorrelated random intensity patterns being defined as statistically orthogonal relatively to a zero-mean cross-correlation product, the angular memory effect δα being larger than the angular separations Δα.sub.ij of the angular channels C.sub.L: δα>Δα.sub.ij, for all i, j∈[1, N], with i≠j, the method comprising: a) providing several reference intensity patterns R.sub.L(x,y), wherein each reference intensity pattern R.sub.L(x,y) corresponds to a respective propagation direction P.sub.L, L varying from 1 to N, x and y being coordinates; b) recording one single signal image I(x,y) of the intensity pattern generated by said multi-angular signal light beam which comprises the N propagation directions, using the device, the single signal image I(x,y) being representative of light impinging on a surface area A of the imaging sensor, the surface A comprising the at least one surface portion A.sub.n; c) for the at least one surface portion A.sub.n: i) extracting a signal sub-image I′(x,y), representative of light impinging on the at least one surface portion A.sub.n, from the single signal image I(x,y): I′(x,y)=I(x,y) with (x,y)∈A.sub.n; ii) computing intensity-weight data W.sub.L.sup.I(x,y) and deformation data T.sub.L.sup.I(x,y), with (x, y) coordinates of the surface portion A.sub.n, for all L varying from 1 to N, the intensity-weight data W.sub.L.sup.I(x,y) and the deformation data T.sub.L.sup.I(x,y) being representative of an intensity modulation and a diffeomorphism, respectively, of each given reference intensity pattern R.sub.L(x,y), at propagation direction P.sub.L, for the signal sub-image I′(x,y), all the N intensity-weight data W.sub.L.sup.I(x,y) and the N deformation data T.sub.L.sup.I(x,y) being computed, for L varying from 1 to N, so as to minimize, for all sampling points (x, y) of the surface portion A.sub.n, from the signal sub-image I′(x,y), a difference D.sub.An between the signal sub-image I′(x,y) on the one hand, and the sum of the reference intensity patterns R.sub.L multiplied by the intensity-weight data W.sub.L.sup.I(x,y) and deformed by the deformation data T.sub.L.sup.I(x,y), on the other hand: D.sub.An=∥I′(x,y)−Σ.sub.L=1.sup.N W.sub.L.sup.I(x,y) R.sub.L[(x,y)+T.sub.L.sup.I(x,y)]∥.sub.An, the symbol ∥.∥.sub.An designating a norm calculated for all (x, y) sampling points in the surface portion A.sub.n, each given reference intensity pattern R.sub.L(x,y) being, for the surface portion A.sub.n, orthogonal to each other reference intensity pattern R.sub.K(x,y) relatively to the zero-mean cross-correlation product, K being a natural number different from L and chosen in [1; N]; iii) generating data for each propagation direction P.sub.L representative of: the shape of the wavefront, by integrating the deformation data T.sub.L.sup.I(x,y), and the intensity map, based on the weight W.sub.L.sup.I(x,y).
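The sub-image extraction I′(x,y)=I(x,y) with (x,y)∈A.sub.n can be sketched as a simple tiling of the sensor surface. This illustrative sketch assumes square, evenly dividing surface portions and the hypothetical function name `extract_subimages`:

```python
import numpy as np

def extract_subimages(I, n_tiles):
    """Split the sensor image I into n_tiles x n_tiles sub-images I'(x,y),
    one per surface portion A_n (assumes I divides evenly into tiles)."""
    H, W = I.shape
    h, w = H // n_tiles, W // n_tiles
    return [I[r * h:(r + 1) * h, c * w:(c + 1) * w]
            for r in range(n_tiles) for c in range(n_tiles)]
```

Each extracted tile then serves as the sub-image on which the per-portion minimization of D.sub.An is performed.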

14. The method according to claim 13, comprising at step a), recording said reference intensity patterns R.sub.L(x,y) using the device, each reference intensity pattern R.sub.L(x,y) being generated by a respective reference incident beam L with propagation direction P.sub.L, L varying from 1 to N.

15. The method according to claim 13, comprising at step a): recording at least one reference intensity pattern R.sub.K(x,y) using the device, K being between 1 and N; and, if necessary, generating numerically the other intensity patterns R.sub.L(x,y), L being between 1 and N and different from K, using the recorded intensity pattern R.sub.K(x,y).

16. The method according to claim 15, the other intensity patterns R.sub.L(x,y) being generated numerically by applying a simple translation transformation to the recorded reference intensity pattern R.sub.K(x,y).

17. The method according to claim 15, wherein, at step a), generating the reference intensity patterns comprises: computing a cross-correlation product of the single signal image I(x,y) and the recorded reference pattern R.sub.K(x,y), the cross-correlation product image between the signal image and the recorded intensity pattern R.sub.K(x,y) having N maxima, and shifting spatially the recorded reference pattern R.sub.K(x,y) to the position of each maximum to generate the corresponding reference intensity pattern R.sub.L(x,y).
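The reference-generation step of claim 17 can be sketched as below: locate the N maxima of the cross-correlation of the signal image with one recorded reference, then shift that reference to each maximum. The function name `generate_references`, integer-pixel shifts via `np.roll`, and the fixed-size suppression window around each found peak are assumptions of this sketch:

```python
import numpy as np
from scipy.signal import fftconvolve

def generate_references(I, R_k, n_channels):
    """Numerically generate N reference patterns from one recorded
    reference R_k, using the N maxima of its cross-correlation with
    the signal image I."""
    a = I - I.mean()
    b = R_k - R_k.mean()
    corr = fftconvolve(a, b[::-1, ::-1], mode='same')
    center = np.array(corr.shape) // 2
    refs = []
    remaining = corr.copy()
    for _ in range(n_channels):
        peak = np.array(np.unravel_index(np.argmax(remaining), remaining.shape))
        dy, dx = peak - center
        refs.append(np.roll(R_k, (dy, dx), axis=(0, 1)))
        # Suppress a neighborhood around the found peak before the next search
        y0, x0 = peak
        remaining[max(0, y0 - 4):y0 + 5, max(0, x0 - 4):x0 + 5] = -np.inf
    return refs
```

For a signal image built from two displaced copies of the reference, the two generated references coincide with those copies.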

18. The method according to claim 13, comprising at step a): recording a reference intensity pattern R.sub.K(x,y), corresponding to a propagation direction P.sub.K different from the propagation directions P.sub.L contained in the single signal image I(x,y), using the device, and generating numerically all the N intensity patterns R.sub.L(x,y), with L varying from 1 to N, using the recorded intensity pattern R.sub.K(x,y).

19. The method according to claim 18, wherein, at step a), generating the reference intensity patterns comprises: computing a cross-correlation product of the single signal image I(x,y) and the recorded reference pattern R.sub.K(x,y), the cross-correlation product image between the signal image and the recorded intensity pattern R.sub.K(x,y) having N maxima, and shifting spatially the recorded reference pattern R.sub.K(x,y) to the position of each maximum to generate the corresponding reference intensity pattern R.sub.L(x,y).

20. The method according to claim 13, the optical mask comprising a diffuser, an engineered pseudo-diffuser, a diffractive element, an optical fiber bundle, a metasurface, a freeform optical element, an array of micro-optical elements, each micro-optical element generating on the imaging sensor an intensity pattern different from the intensity pattern generated by the micro-optical elements located at least in its vicinity, an array of micro-lenses with random aberrations, or an array of micro-lenses randomly arranged spatially, or any combination thereof.

21. The method according to claim 13, wherein the imaging sensor is a matrix imaging sensor, the surface A being a sum of a plurality of surface portions A.sub.n, each surface portion A.sub.n having a largest spatial extent L.sub.An<Δα.sub.ij·d, for all i, j∈[1, N], with i≠j: A=Σ.sub.n A.sub.n, the method comprising, at step c), computing intensity-weight data W.sub.L.sup.I(A.sub.n) and deformation data T.sub.L.sup.I(A.sub.n) for each surface portion A.sub.n, for all L varying from 1 to N, by minimizing the difference D.sub.An for each surface portion A.sub.n.

22. The method according to claim 13, wherein the multi-angular signal light beam is generated by illuminating a polarization assembly located before the optical assembly by an initial light beam containing at least one angular channel, the polarization assembly containing at least one polarization optic, so that different polarization states of the initial light beam are encoded into different angular channels C.sub.L of the multi-angular signal light beam.

23. Use of a multi-angular wavefront sensor for a multi-angular signal light beam according to claim 22, in: optical metrology; diffraction tomography; quantitative phase microscopy (biology, chemistry, etc.); adaptive optics; or ophthalmology.

24. A wavefront sensor for a multi-angular signal light beam from a single image acquisition of said multi-angular signal light beam, said multi-angular light beam comprising N angular channels of different propagation directions P.sub.L, said propagation directions P.sub.L being determined by mean propagation direction vectors u.sub.L, each angular channel C.sub.i being separated from the other angular channels C.sub.j by an angular separation Δα.sub.ij defined by Δα.sub.ij=arccos(u.sub.i·u.sub.j), where "·" stands for the inner product between u.sub.i and u.sub.j, the wavefront sensor comprising an optical assembly made at least of: an optical mask and an imaging sensor for generating and recording intensity patterns of incident beams, by having these beams reflect on, or propagate through, the optical mask, the optical mask having the optical properties: i) to cause the intensity pattern to depend on the wavefront shape, so that a tilt applied to the wavefront shape, with said tilt being smaller than an angular memory effect δα of said optical mask, results in a local displacement of the intensity pattern, and ii) for two incident beams M.sub.n, M.sub.m of respective propagation directions P.sub.n, P.sub.m determined by mean propagation direction vectors u.sub.n and u.sub.m, respectively, said incident beams M.sub.n, M.sub.m having a same wavefront shape and being separated from each other by a separation angle defined by Δα.sub.mn=arccos(u.sub.m·u.sub.n) larger than the angular memory effect δα, i.e. Δα.sub.mn>δα, to produce uncorrelated intensity patterns over a surface area A of the imaging sensor, two uncorrelated random intensity patterns being defined as statistically orthogonal relatively to a zero-mean cross-correlation product, the angular memory effect δα being smaller than the angular separations Δα.sub.ij of the angular channels C.sub.L: δα<Δα.sub.ij, for all i, j∈[1, N], with i≠j, the imaging sensor recording: a) several reference intensity patterns R.sub.L(x,y), each reference intensity pattern R.sub.L(x,y) being generated by having a respective reference incident beam L with propagation direction P.sub.L reflect on or propagate through the optical mask, L varying from 1 to N, x and y being coordinates; b) one single signal image I(x,y) of the intensity pattern generated by having the multi-angular signal light beam which comprises the N propagation directions P.sub.L reflect on or propagate through the optical mask, the single signal image I(x,y) being representative of light impinging on the surface area A; and computing means for: c) computing intensity-weight data W.sub.L.sup.I(x,y) and deformation data T.sub.L.sup.I(x,y), the intensity-weight data W.sub.L.sup.I(x,y) and the deformation data T.sub.L.sup.I(x,y) being representative of an intensity modulation and a diffeomorphism, respectively, of each reference intensity pattern at propagation direction P.sub.L, the intensity-weight data W.sub.L.sup.I(x,y) and the deformation data T.sub.L.sup.I(x,y) being computed so as to minimize, for the surface area A, a quantity that depends on the difference D.sub.A between the single signal image I(x,y) on the one hand, and the sum of the reference intensity patterns R.sub.L(x,y) multiplied by the intensity-weight data W.sub.L.sup.I(x,y) and deformed by the deformation data T.sub.L.sup.I(x,y), on the other hand: D.sub.A=∥I(x,y)−Σ.sub.L W.sub.L.sup.I(x,y) R.sub.L[(x,y)+T.sub.L.sup.I(x,y)]∥.sub.A, the symbol ∥.∥.sub.A designating a norm calculated for all (x, y) sampling points in the surface area A; and d) generating data for each propagation direction P.sub.L representative of: the shape of the wavefront, by integrating the deformation data T.sub.L.sup.I(x,y), and the intensity map, based on the weight W.sub.L.sup.I(x,y).

25. The wavefront sensor according to claim 24, the optical mask comprising a diffuser, an engineered pseudo-diffuser, a diffractive element, an optical fiber bundle, a metasurface, a freeform optical element an array of micro-optical elements, each micro-optical element generating on the imaging sensor an intensity pattern different from the intensity pattern generated by the micro-optical elements located at least in its vicinity, an array of micro-lenses with random aberration or an array of micro-lenses randomly arranged spatially, or any combination thereof.

26. The wavefront sensor according to claim 24, the computing means being configured for: extracting signal sub-images, each representative of light impinging on a portion A′ of the at least one surface area, from the single signal image I(x,y); estimating deformation data T.sub.L.sup.I(x,y) for each sub-image and for every propagation direction P.sub.L by: i) computing the intensity-weight data W.sub.L.sup.I(x,y) and the deformation data T.sub.L.sup.I(x,y) between the signal sub-image and each of the reference intensity patterns R.sub.L(x,y) so as to minimize, for the corresponding surface area A′, a quantity that depends on the differences D.sub.A′; ii) updating the deformation data T.sub.L.sup.I(x,y) by storing the deformation data T.sub.L.sup.I(x,y) at at least one point (x,y) inside said surface area A′.

27. The wavefront sensor according to claim 24, the computing means being configured to estimate the deformation data T.sub.L.sup.I(x,y) by computing zero-mean cross-correlation product images, for each signal sub-image, between the signal sub-image and each of the reference intensity patterns R.sub.L(x,y), the plurality of reference incident beams L being uncorrelated relatively to the zero-mean cross-correlation product, the zero-mean cross-correlation product image between the signal sub-image and each of the reference intensity patterns R.sub.L(x,y) having a peak, and the displacement amount being the distance of said peak, or of its centroid, from the center of the zero-mean cross-correlation product image.

28. The wavefront sensor according to claim 24, the computing means being configured to compute the intensity-weight data W.sub.L.sup.I(x,y) and the deformation data T.sub.L.sup.I(x,y) using: a Wiener deconvolution of the signal sub-images by the reference intensity patterns R.sub.L(x,y), or a Wiener deconvolution of the reference intensity patterns R.sub.L(x,y) by the signal sub-images.

29. Use of a multi-angular wavefront sensor for a multi-angular signal light beam according to claim 28, in: optical metrology; diffraction tomography; quantitative phase microscopy (biology, chemistry, etc.); adaptive optics; or ophthalmology.

30. Optical device with: a wavefront sensor according to claim 24, a single light emitting source or an assembly of light emitting sources for generating a multi-angular signal light beam with specific propagation directions.

31. Optical device according to claim 30, comprising a polarization assembly located before the optical assembly, the polarization assembly comprising at least one polarization optic.

32. A wavefront sensor for a multi-angular signal light beam from a single image acquisition of said multi-angular signal light beam, said multi-angular light beam comprising N angular channels of different propagation directions P.sub.L, said propagation directions P.sub.L being determined by mean propagation direction vectors u.sub.L, each angular channel C.sub.i being separated from the other angular channels C.sub.j by an angular separation Δα.sub.ij defined by Δα.sub.ij=arccos(u.sub.i·u.sub.j), where "·" stands for the inner product between u.sub.i and u.sub.j, the wavefront sensor comprising an optical assembly made at least of: an optical mask and an imaging sensor for generating and recording intensity patterns of incident beams, by having these beams reflect on, or propagate through, the optical mask, the optical mask having the optical properties: i) to cause the intensity pattern to depend on the wavefront shape, so that a tilt applied to the wavefront shape, with said tilt being smaller than an angular memory effect δα of said optical mask, results in a local displacement of the intensity pattern, and ii) for two incident beams M.sub.n and M.sub.m of respective propagation directions P.sub.n, P.sub.m determined by mean propagation direction vectors u.sub.n and u.sub.m, respectively, said incident beams M.sub.n and M.sub.m having a same wavefront shape and being separated from each other by a separation angle Δα.sub.mn=arccos(u.sub.m·u.sub.n) smaller than the angular memory effect δα, i.e. Δα.sub.mn<δα, to produce uncorrelated intensity patterns over at least one surface portion A.sub.n of the imaging sensor, the surface portion A.sub.n having a largest spatial extent L.sub.An smaller than Δα.sub.mn·d, with d being a separation distance between the optical mask and the imaging sensor, two uncorrelated random intensity patterns being defined as statistically orthogonal relatively to a zero-mean cross-correlation product, the angular memory effect δα being larger than the angular separations Δα.sub.ij of the angular channels C.sub.L: δα>Δα.sub.ij, for all i, j∈[1, N], with i≠j, the imaging sensor recording: a) at least one reference intensity pattern R.sub.K(x,y), the reference intensity pattern R.sub.K(x,y) being generated by having a respective reference incident beam with propagation direction P.sub.K reflect on or propagate through the optical mask, x and y being coordinates; b) one single signal image I(x,y) of the intensity pattern generated by having the multi-angular signal light beam which comprises at least the N propagation directions P.sub.L reflect on or propagate through the optical mask, the single signal image I(x,y) being representative of light impinging on at least one surface area A comprising the at least one surface portion A.sub.n; and computing means for: c) if necessary, generating numerically one or several reference intensity patterns R.sub.L(x,y), L varying from 1 to N, using the recorded reference pattern R.sub.K(x,y); d) for the at least one surface portion A.sub.n: i) extracting a signal sub-image I′(x,y), representative of light impinging on the at least one surface portion A.sub.n, from the single signal image I(x,y): I′(x,y)=I(x,y) with (x,y)∈A.sub.n; ii) computing intensity-weight data W.sub.L.sup.I(x,y) and deformation data T.sub.L.sup.I(x,y), with (x, y) coordinates of the surface portion A.sub.n, for all L varying from 1 to N, the intensity-weight data W.sub.L.sup.I(x,y) and the deformation data T.sub.L.sup.I(x,y) being representative of an intensity modulation and a diffeomorphism, respectively, of each given reference intensity pattern R.sub.L(x,y), at propagation direction P.sub.L, for the signal sub-image I′(x,y), all the N intensity-weight data W.sub.L.sup.I(x,y) and the N deformation data T.sub.L.sup.I(x,y) being computed, for L varying from 1 to N, so as to minimize, for all sampling points (x, y) of the surface portion A.sub.n, from the signal sub-image I′(x,y), a difference D.sub.An between the signal sub-image I′(x,y) on the one hand, and the sum of the reference intensity patterns R.sub.L multiplied by the intensity-weight data W.sub.L.sup.I(x,y) and deformed by the deformation data T.sub.L.sup.I(x,y), on the other hand: D.sub.An=∥I′(x,y)−Σ.sub.L=1.sup.N W.sub.L.sup.I(x,y) R.sub.L[(x,y)+T.sub.L.sup.I(x,y)]∥.sub.An, the symbol ∥.∥.sub.An designating a norm calculated for all (x, y) sampling points in the surface portion A.sub.n, each given reference intensity pattern R.sub.L(x,y) being, for the surface portion A.sub.n, orthogonal to each other reference intensity pattern R.sub.K(x,y) relatively to the zero-mean cross-correlation product, K being a natural number different from L and chosen in [1; N]; and iii) generating data for each propagation direction P.sub.L representative of: the shape of the wavefront, by integrating the deformation data T.sub.L.sup.I(x,y), and the intensity map, based on the weight W.sub.L.sup.I(x,y).

33. The wavefront sensor according to claim 32, wherein the computing means are configured for: splitting the surface A of the imaging sensor into a plurality of surface portions A.sub.n having a largest spatial extent L.sub.An<Δα.sub.ij·d, for all i, j∈[1, N], with i≠j; and estimating deformation data T.sub.L.sup.I(x,y) for each sub-image and for every propagation direction P.sub.L by: i) computing the intensity-weight data W.sub.L.sup.I(x,y) and the deformation data T.sub.L.sup.I(x,y) between the signal sub-image and each of the reference intensity patterns R.sub.L(x,y) so as to minimize, for the corresponding surface portion A.sub.n, a quantity that depends on the difference D.sub.An; ii) updating the deformation data T.sub.L.sup.I(x,y) by storing the deformation data T.sub.L.sup.I(x,y) at at least one point (x,y) inside said surface portion A.sub.n.

34. Optical device with: a wavefront sensor according to claim 32, a single light emitting source or an assembly of light emitting sources for generating a multi-angular signal light beam with specific propagation directions.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

(1) The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate presently preferred embodiments of the invention, and together with the description, serve to explain the principles of the invention. In the drawings:

(2) FIG. 1A is a schematic illustrating the principle of adaptive optics with a large field of view,

(3) FIG. 1B is a schematic illustrating the principle of adaptive optics depicting diffractive tomography,

(4) FIG. 2 is an example related to prior art,

(5) FIG. 3A is a schematic illustration of an embodiment of an optical system according to the invention,

(6) FIG. 3B is a schematic illustration of an embodiment of an optical system according to the invention,

(7) FIG. 3C is a schematic illustration of an embodiment of an optical system according to the invention,

(8) FIG. 3D is a schematic illustration of an embodiment of an optical system according to the invention,

(9) FIG. 4 is a schematic illustration of an embodiment of an optical system according to the invention,

(10) FIG. 5 is a schematic illustration of optical mask properties according to the invention,

(11) FIG. 6 is a schematic illustration of optical mask properties according to the invention,

(12) FIG. 7 is a schematic illustration of optical mask properties according to the invention,

(13) FIG. 8 is a schematic illustration of optical mask properties according to the invention,

(14) FIG. 9 is a schematic illustration of optical mask properties according to the invention,

(15) FIG. 10 is a schematic of a different embodiment of a method according to the invention,

(16) FIG. 11 is a schematic of a different embodiment of a method according to the invention,

(17) FIG. 12 is a schematic of a different embodiment of a method according to the invention,

(18) FIG. 13 is a schematic of a different embodiment of a method according to the invention,

(19) FIG. 14 shows an example of reconstruction of wavefront shapes using a method according to the invention,

(20) FIG. 15 shows an example of reconstruction of wavefront shapes using a method according to the invention,

(21) FIG. 16 shows an example of reconstruction of wavefront shapes using a method according to the invention,

(22) FIG. 17A illustrates the application of the wavefront reconstruction method of some embodiments to adaptive optics,

(23) FIG. 17B illustrates the application of the wavefront reconstruction method of some embodiments to adaptive optics,

(24) FIG. 18A is a schematic illustration of an embodiment of an optical system according to the invention,

(25) FIG. 18B is a schematic illustration of an embodiment of an optical system according to the invention, and

(26) FIG. 18C is a schematic illustration of an embodiment of an optical system according to the invention.

DETAILED DESCRIPTION

(27) Whenever possible, the same reference numbers will be used throughout the drawings to refer to the same or like parts.

(28) In accordance with the invention, and as broadly embodied, in FIGS. 3A, 3B, 3C, and 4, an optical device 15 is provided.

(29) The optical device 15 may comprise a single light emitting source or an assembly of light emitting sources for generating a multi-angular signal light beam 12 with specific propagation directions. Depending on the application of the optical device 15, the light sources 11 may be a LED or an assembly of LEDs or lasers. The light sources can also be an assembly of scattering, fluorescent or luminescent emitters.

(30) For illustration purposes, the light source or the assembly of light emitting sources is shown as block 11.

(31) The signal light beam comprises N angular channels C.sub.L with different propagation directions P.sub.L determined by mean propagation direction vectors custom character. Each angular channel C.sub.i is separated from other angular channels C.sub.j by an angular separation Δα.sub.ij defined by Δα.sub.ij=arccoscustom character, where “.Math.” stands for the inner product between custom character and custom character.

(32) FIG. 4 illustrates an example in which the signal light beam 12 comprises two angular channels C.sub.1 and C.sub.2 of respective propagation directions P1 and P2 with respective mean propagation direction vectors custom character and custom character. The angular channels C1 and C2 have different wavefront shapes 13.

(33) The optical device 15 further comprises a wavefront sensor 10. The latter includes a device 16 comprising an optical mask 14 and an imaging sensor 18 for generating and recording intensity patterns of incident beams. For calculation purposes, computing means are also provided. The computing means may comprise a memory storing a set of instructions and a processor for executing said set of instructions, such as a PC or a dedicated microcontroller board. For purposes of illustration, the computing means are shown as block 17, which may be part of the wavefront sensor 10 or separate from the latter.

(34) As shown in FIGS. 3A to 3D and 4, the optical mask 14 is illuminated by the light beam 12 presenting wavefronts 13.

(35) FIGS. 3A and 4 display an embodiment of the wavefront sensor 10 where the optical mask works in transmission, preferably in the close vicinity of the imaging sensor 18, for example at a distance d ranging from 0 to 10 D/θ, where D is the size of the imaging sensor and θ the scattering angle of the optical mask.

(36) FIGS. 3B and 3D show an alternative embodiment where the optical mask 14 is working in reflection, preferably in a close vicinity of the imaging sensor.

(37) The light beam 22 emerging from the optical mask 14 is then captured using the imaging sensor 18, yielding an intensity pattern image I(x,y). The intensity pattern of the emergent light beam 22 is a superposition of individual intensity patterns I.sub.L weighted by their respective contributions to the signal beams constituting light beam 12.

(38) An example of signal intensity pattern generated with the optical mask 14 is shown in FIG. 4. In FIG. 4, the signal intensity pattern corresponds to an intensity pattern generated by a diffuser.

(39) Crossing the optical mask causes the intensity pattern to depend on the wavefront shape, so that a tilt applied to the wavefront shape results in a displacement of said intensity pattern, as can be seen in FIG. 5. This property is commonly referred to as the “memory effect” of the optical mask. This assumption remains valid up to a so-called memory effect angle, or angular memory effect, as described hereafter.

(40) Depending on the angular separation Δα.sub.ij of channels C.sub.i and C.sub.j and on the value of the memory effect angle δα, two cases can be distinguished: the optical mask 14 may exhibit a small memory effect angle δα<Δα.sub.ij. This case is illustrated in FIG. 6A through the example of two angular channels C1 and C2, but can be generalized to all N angular channels. As can be seen, the resulting individual intensity patterns I1 and I2 on the imaging sensor, corresponding to propagation directions P1 and P2 with mean propagation direction vectors custom character and custom character respectively, are uncorrelated over the surface of the imaging sensor. In particular, in a given area A of the imaging sensor, the two individual intensity patterns remain uncorrelated.

(41) Further, it is worth noting, as illustrated in FIG. 7A, that a small phase gradient, ie a tilt in a given wavefront, results in a simple translation of the corresponding intensity pattern, while a strong phase gradient, larger than the dashed circle, results in a completely different and uncorrelated speckle pattern. Hence, it is preferable for the memory effect angle δα of the optical mask 14 to be larger than the extent of the phase gradient distribution, while remaining smaller than the angular separation Δα.sub.ij between two different angular channels C.sub.i and C.sub.j. In other words, it is preferable for the tilt in the wavefront to be smaller than the angular memory effect of the optical mask. In a second embodiment, the optical mask may exhibit a large memory effect angle δα>Δα.sub.ij. This case is illustrated in FIGS. 6B and 7B. In FIG. 6B, two angular channels C1 and C2 are used. As shown, the resulting individual intensity patterns on the imaging sensor, corresponding to different propagation directions P1 and P2 with respective mean propagation direction vectors custom character and custom character, are similar but shifted by a quantity Δα.sub.ij·d, where d is the distance between the optical mask and the matrix sensor. However, in a given surface portion A.sub.n having a largest spatial extent L.sub.An<Δα.sub.ij·d, the two intensity patterns are locally uncorrelated.

(42) In the following, embodiments in relation with the case in which the optical mask 14 may exhibit a small memory effect angle will be described.

(43) In these embodiments, the optical mask may comprise an engineered pseudo-diffuser, a diffuser or a diffractive optical element. The optical mask comprises for example a plurality of diffusers or pseudo-diffusers, or a combination thereof, preferably diffusers or pseudo-diffusers.

(44) Otherwise, the optical mask may be any optical mask: i) causing the intensity pattern to depend on the wavefront shape, so that a tilt applied to the wavefront shape, with said tilt being smaller than an angular memory effect δα of said optical mask, results in a local displacement amount of the intensity pattern, said angular memory effect δα being smaller than each angular separation Δα.sub.ij, ii) producing uncorrelated intensity patterns over a surface area A of the imaging sensor, for a plurality of respective incident angular beams of different propagation directions having a same wavefront shape.

(45) The property ii) is evaluated relatively to a given measure of similarity.

(46) A measure of similarity between two intensity patterns can be mathematically characterized using correlation tools. For example, a measure may consist in localizing critical points of the intensity, such as intensity maxima, and looking for correlations between the intensity maxima locations of the two intensity patterns. This can be quantified using a Pearson correlation coefficient. It can also be quantified using the mean average distance to the nearest neighbor weighted by the critical point density.
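As an illustrative sketch (not part of the claimed method), the Pearson-based measure can be computed with NumPy; the function name `pearson_similarity` and the synthetic random patterns are assumptions made for this example:

```python
import numpy as np

def pearson_similarity(i1, i2):
    """Pearson correlation coefficient between two intensity patterns."""
    return np.corrcoef(i1.ravel(), i2.ravel())[0, 1]

rng = np.random.default_rng(0)
speckle = rng.random((64, 64))
other = rng.random((64, 64))

# A pattern is perfectly correlated with itself...
assert np.isclose(pearson_similarity(speckle, speckle), 1.0)
# ...while two independent random patterns are close to uncorrelated.
assert abs(pearson_similarity(speckle, other)) < 0.1
```

For uncorrelated patterns of N pixels the coefficient fluctuates around zero with a standard deviation of roughly 1/√N, which is why the threshold above is loose.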

(47) A measure of similarity between two intensity patterns may also be estimated by computing a zero-mean cross-correlation product. A statistical average of the zero-mean cross-correlation product between two uncorrelated random signals is zero. In contrast, a statistical average of the zero-mean cross-correlation product of an intensity pattern with itself is a function admitting an extremum, in particular a maximum. An illustration of the zero-mean cross-correlation product of the speckle of FIG. 8 with itself (also called auto-correlation product) is shown in FIG. 8. The result displayed in FIG. 8 exhibits an intense central bright spot demonstrating the high degree of correlation of an intensity pattern with itself.
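A minimal numerical sketch of the zero-mean cross-correlation product (assuming an FFT-based circular correlation and synthetic random patterns, which are not from the patent) reproduces both behaviors: an intense central peak for the auto-correlation, as in FIG. 8, and near-zero values for two uncorrelated patterns:

```python
import numpy as np

def zero_mean_xcorr(a, b):
    """Circular zero-mean cross-correlation product computed via FFT."""
    a = a - a.mean()
    b = b - b.mean()
    corr = np.real(np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))))
    return np.fft.fftshift(corr)  # put the zero-shift term at the center

rng = np.random.default_rng(1)
speckle = rng.random((64, 64))
uncorrelated = rng.random((64, 64))

auto = zero_mean_xcorr(speckle, speckle)
cross = zero_mean_xcorr(speckle, uncorrelated)

# The auto-correlation exhibits an intense central bright spot...
assert np.unravel_index(np.argmax(auto), auto.shape) == (32, 32)
# ...which dominates any value of the cross term between uncorrelated patterns.
assert auto.max() > 5 * cross.max()
```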

(48) Alternatively, a Wiener deconvolution can be used rather than a zero-mean cross-correlation product. The result of a Wiener deconvolution applied to an intensity pattern with itself is similar to what is displayed in FIG. 8.
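A Fourier-domain Wiener deconvolution can be sketched as follows; the regularization constant `eps` and the synthetic speckle are assumptions made for this example. Applied to a pattern with itself, it yields the sharp central peak described above:

```python
import numpy as np

def wiener_deconvolve(image, ref, eps=1e-3):
    """Fourier-domain Wiener deconvolution of `image` by the pattern `ref`."""
    I, R = np.fft.fft2(image), np.fft.fft2(ref)
    out = np.real(np.fft.ifft2(I * np.conj(R) / (np.abs(R) ** 2 + eps)))
    return np.fft.fftshift(out)  # put the zero-shift term at the center

rng = np.random.default_rng(2)
speckle = rng.random((64, 64))

peak = wiener_deconvolve(speckle, speckle)
# Deconvolving a pattern with itself gives a sharp central peak, as in FIG. 8.
assert np.unravel_index(np.argmax(peak), peak.shape) == (32, 32)
```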

(49) An example illustrating property ii) is shown in FIG. 9.

(50) FIG. 9 shows an illustration of correlation images corresponding to a surface A computed between reference intensity patterns R.sub.L(x,y) obtained with the optical mask 14 using five angular beams with propagation directions P.sub.L. The angular beams are separated by separation angles ranging from −458 mdeg to 458 mdeg.

(51) FIG. 9 shows a 5×5 matrix of images. Each image at position (i, j) represents the cross-correlation between the sub-images at propagation directions Pi and Pj.

(52) As shown, the diagonal of the matrix exhibits bright spots demonstrating the high degree of correlation of an intensity pattern with itself, whereas off-diagonal terms are almost perfectly zero, illustrating the absence of correlation between patterns obtained at different propagation directions.

(53) The wavefront sensor 10 mentioned above may be used to determine the wavefront shapes of the multi-angular signal light beam 12 from a single signal image acquisition of the said multi-angular signal light beam.

(54) FIG. 10 illustrates a method according to the present invention.

(55) First, at step 101, several reference intensity patterns R.sub.L(x,y) are recorded. Each reference intensity pattern R.sub.L(x,y) is generated by sending a reference incident monochromatic beam L with propagation directions P.sub.L having mean propagation direction vectors custom character, onto the wavefront sensor 10, L varying between 1 and N, and x and y denoting coordinates. The reference incident monochromatic beams may exhibit a controlled wavefront shape, such as a planar or a spherical wavefront.

(56) Then, at step 103, a single signal image I(x,y) of the intensity pattern is recorded using the imaging sensor 18. The latter is generated by sending the said multi-angular signal light beam 12 onto the optical mask 14. The light beam 12 comprises the N propagation directions P.sub.L.

(57) In a variant, the reference intensity patterns R.sub.L(x,y) are recorded after capturing the single signal image I(x,y) of the intensity pattern generated by said multi-angular signal light beam 12.

(58) In this embodiment, the optical mask exhibits a small angular memory effect.

(59) In order to determine the wavefront shape at all N propagation directions P.sub.L, deformation data T.sub.L.sup.I(x,y) are computed at step 109 using the computing means 17. Such data are representative of a diffeomorphism of each reference intensity pattern R.sub.L(x,y).

(60) At step 109, intensity-weight data W.sub.L.sup.I(x,y) is also computed. This feature is representative of an intensity modulation of each reference intensity pattern at propagation direction P.sub.L with mean propagation direction vector custom character.

(61) All N intensity-weight data W.sub.L.sup.I(x,y) and N deformation data T.sub.L.sup.I(x,y) are computed, for L varying from 1 to N, so as to minimize, for all sampling points (x,y) of the surface area A, from the single signal image I(x,y):

(62) a difference D.sub.A between the single signal image I(x,y) on the one hand, and the sum of reference intensity patterns R.sub.L multiplied by intensity-weight data W.sub.L.sup.I(x,y) and deformed by deformation data T.sub.L.sup.I(x,y), on the other hand:

(63) D.sub.A=∥I(x,y)−Σ.sub.L=1.sup.N W.sub.L.sup.I(x,y) R.sub.L[(x,y)+T.sub.L.sup.I(x,y)]∥.sub.A

(64) the symbol ∥.∥.sub.A designating a norm calculated for all (x,y) sampling points in the surface area A;

(65) The quantity may comprise a regularization term.

(66) An optimization method aimed at minimizing the difference D.sub.A is now described with reference to FIG. 11.

(67) As illustrated, the intensity-weight data W.sub.L.sup.I(x,y) and the deformation data T.sub.L.sup.I(x,y) are estimated by using a Digital Image Correlation (DIC) method. In a preferred embodiment, the DIC method is performed by computing zero-mean cross-correlation product images between the signal image I(x,y) and each of the reference intensity patterns. As mentioned previously, the plurality of reference incident beams L are uncorrelated relatively to the zero-mean cross-correlation product. Therefore, the zero-mean cross-correlation product image between the signal image and each one of the reference intensity patterns R.sub.L(x,y) exhibits a peak, the intensity-weight data W.sub.L.sup.I(x,y) being the amplitude of the peak and the deformation data T.sub.L.sup.I(x,y) being the displacement vector between the peak (or its centroid) and the center of the zero-mean cross-correlation product image.
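Under the simplifying assumption of a single rigid translation per reference (the helper names and synthetic data below are hypothetical, not the patent's implementation), the peak-extraction step of the DIC method can be sketched as:

```python
import numpy as np

def zero_mean_xcorr(a, b):
    """Circular zero-mean cross-correlation product, zero shift at the center."""
    a = a - a.mean()
    b = b - b.mean()
    return np.fft.fftshift(np.real(np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b)))))

def dic_peak(signal, reference):
    """Return (weight, displacement) from the zero-mean cross-correlation peak:
    the normalized peak amplitude plays the role of W_L, and the peak offset
    from the image center plays the role of a constant deformation T_L."""
    corr = zero_mean_xcorr(signal, reference)
    py, px = np.unravel_index(np.argmax(corr), corr.shape)
    cy, cx = corr.shape[0] // 2, corr.shape[1] // 2
    weight = corr[py, px] / ((reference - reference.mean()) ** 2).sum()
    return weight, (py - cy, px - cx)

rng = np.random.default_rng(3)
ref = rng.random((64, 64))
# Signal = reference shifted by (3, -5): a pure tilt in the memory-effect regime.
signal = np.roll(ref, (3, -5), axis=(0, 1))

w, t = dic_peak(signal, ref)
assert t == (3, -5)
assert np.isclose(w, 1.0)
```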

(68) Finally, at step 111 data are generated for each propagation directions P.sub.L representative of: the shape of the wavefront by integrating the deformation data T.sub.L.sup.I(x,y) over at least one direction of the intensity pattern image, preferably over the two directions of the intensity pattern image; and the intensity map based on the intensity-weight data W.sub.L.sup.I(x,y).
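The final integration step can be sketched with a Fourier-domain least-squares integrator (a Frankot-Chellappa-type scheme, chosen here for illustration; the patent does not specify a particular 2D integration algorithm):

```python
import numpy as np

def integrate_gradient(gx, gy):
    """Least-squares integration of a 2-D gradient field in the Fourier
    domain: recovers the surface up to a constant (piston) term, assuming
    a periodic field. gx is the derivative along axis 1, gy along axis 0."""
    ny, nx = gx.shape
    fx = np.fft.fftfreq(nx)[None, :]
    fy = np.fft.fftfreq(ny)[:, None]
    denom = 2j * np.pi * (fx ** 2 + fy ** 2)
    denom[0, 0] = 1.0            # avoid 0/0 at the zero frequency
    w_hat = (fx * np.fft.fft2(gx) + fy * np.fft.fft2(gy)) / denom
    w_hat[0, 0] = 0.0            # the piston term is not recoverable
    return np.real(np.fft.ifft2(w_hat))

# Check on an analytic wavefront made of pure Fourier modes.
n = 64
X, Y = np.meshgrid(np.arange(n), np.arange(n))
w = np.sin(2 * np.pi * X / n) + np.cos(4 * np.pi * Y / n)
gx = (2 * np.pi / n) * np.cos(2 * np.pi * X / n)        # dw/dx
gy = -(4 * np.pi / n) * np.sin(4 * np.pi * Y / n)       # dw/dy
assert np.allclose(integrate_gradient(gx, gy), w, atol=1e-8)
```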

(69) In some embodiments, the contrast of the single signal image I(x,y) may decrease as the number of angular channels C.sub.K increases. The accuracy of the wavefront reconstruction can thus be affected. It is then preferable to rely on a global optimization method to retrieve T.sup.I.sub.L(x,y) and W.sup.I.sub.L(x,y).

(70) An example of a global optimization method according to the invention will now be described with reference to FIG. 12.

(71) Said method consists in iteratively trying to improve the candidate solution of the optimization problem.

(72) At the first iteration M=1, T.sub.L.sup.I,1(x,y) and W.sub.L.sup.I,1(x,y) are obtained in a similar way as previously described.

(73) Herein, unlike the previous embodiment, T.sub.L.sup.I,1(x,y) and W.sub.L.sup.I,1(x,y) are not directly integrated. Instead, a second iteration is performed, in order to obtain new values for these data.

(74) To that end, at this iteration, for each propagation direction P.sub.L, the single signal image I(x,y) is replaced by I.sup.2(x,y)=I(x,y)−Σ.sub.K≠L W.sub.K.sup.I,1 R.sub.K((x,y)+T.sub.K.sup.I,1(x,y)).

(75) Then, a DIC method is used again to update T.sub.L.sup.I,2(x,y) and the weighted coefficient W.sub.L.sup.I,2(x,y).

(76) The same process is then repeated for the following iterations, as illustrated in FIG. 12. At a given iteration p, for each propagation direction P.sub.L: the single signal image I(x,y) is replaced by I.sup.p(x,y)=I(x,y)−Σ.sub.K≠L W.sub.K.sup.I,p-1 R.sub.K((x,y)+T.sub.K.sup.I,p-1(x,y)), then a DIC method is used to update T.sub.L.sup.I,p(x,y) and the weight coefficient W.sub.L.sup.I,p(x,y).

(77) After M iterations, the final transformation functions T.sub.L.sup.I,Mit(x,y) and weight coefficients W.sub.L.sup.I,Mit(x,y) are used to recover the wavefronts by using a 2D integration algorithm.
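The iterative scheme of FIG. 12 can be sketched on a toy two-channel example where each channel is a weighted, shifted copy of its reference pattern (all names, weights, and shifts below are assumptions made for this example):

```python
import numpy as np

def xcorr_peak(signal, ref):
    """Zero-mean cross-correlation peak: returns (weight, shift)."""
    s = signal - signal.mean()
    r = ref - ref.mean()
    corr = np.fft.fftshift(np.real(np.fft.ifft2(np.fft.fft2(s) * np.conj(np.fft.fft2(r)))))
    py, px = np.unravel_index(np.argmax(corr), corr.shape)
    c = np.array(corr.shape) // 2
    return corr[py, px] / (r ** 2).sum(), (py - c[0], px - c[1])

rng = np.random.default_rng(4)
refs = [rng.random((64, 64)) for _ in range(2)]
true_w, true_t = [1.0, 0.6], [(2, 1), (-4, 3)]
# Incoherent sum of the two shifted individual patterns (multiplexed signal).
signal = sum(w * np.roll(r, t, axis=(0, 1)) for w, r, t in zip(true_w, refs, true_t))

w_est = [0.0, 0.0]
t_est = [(0, 0), (0, 0)]
for _ in range(3):  # a few refinement iterations, as in FIG. 12
    for k in range(2):
        # Subtract the current estimate of the *other* channel, then re-run DIC.
        residual = signal - sum(w_est[l] * np.roll(refs[l], t_est[l], axis=(0, 1))
                                for l in range(2) if l != k)
        w_est[k], t_est[k] = xcorr_peak(residual, refs[k])

assert t_est == [(2, 1), (-4, 3)]
assert np.allclose(w_est, true_w, atol=0.05)
```

Each pass removes the current estimate of the competing channel from the signal, so the cross-correlation peak of the remaining channel becomes cleaner at every iteration.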

(78) The method described above takes advantage of the fact that the single signal image I(x,y) corresponds to the incoherent sum of the individual intensity patterns from different wavefronts. Hence, any individual intensity pattern I.sub.L(x,y) containing only one individual wavefront can be retrieved by subtracting the other individual intensity patterns from the single signal image I(x,y).

(79) In some applications, for instance for complex distorted wavefronts, it is preferable to work at a local scale rather than to perform the estimation on the global intensity pattern. One possibility consists in splitting the intensity pattern into signal sub-images, each representative of light impinging on a portion A′ and to estimate local deformation data T.sub.L.sup.I(x,y) for each sub-image and for every propagation direction P.sub.L.

(80) In this case, the method may comprise two additional steps 105 and 107 preceding step 109. At step 105, the single signal image is split into several sub-images of surface A′.

(81) At step 107, for every propagation direction P.sub.L and for each sub-image of surface A′, the intensity-weight data W.sub.L(x′,y′) and the deformation data T.sub.L(x′,y′) are estimated by: i) Assuming that W.sub.L.sup.I(x′,y′)=W′.sub.L and T.sub.L.sup.I(x′,y′)=T′.sub.L where W′.sub.L is a constant and positive scalar and T′.sub.L a constant two-dimensional translation vector, over the surface area A′, with (x′,y′) coordinates on A′, and ii) computing the intensity-weight data W′.sub.L and the deformation data T′.sub.L, between the signal sub-image I(x,y) and each of the reference intensity patterns R.sub.L(x,y) so as to minimize, for the corresponding surface area A′, the difference D.sub.A′;

(82) D.sub.A′=∥I(x,y)−Σ.sub.L=1.sup.N W′.sub.L R.sub.L[(x,y)+T′.sub.L]∥.sub.A′

(83) In the same way as for the global surface A, the intensity-weight data W.sub.L.sup.I and the deformation data T.sub.L.sup.I may be estimated by performing a Digital Image Correlation method, notably by computing zero-mean cross-correlation product images, for each signal sub-image, between the signal sub-image and each of the reference intensity patterns. As the plurality of reference incident angular beams L are uncorrelated relatively to the zero-mean cross-correlation product, the zero-mean cross-correlation product image between the signal sub-image and each of the reference intensity patterns R.sub.L(x,y) exhibits a peak. The intensity-weight data corresponds to the amplitude of the peak and the deformation data is the displacement vector between the said peak (or its centroid) and the center of the zero-mean cross-correlation product image.

(84) Finally, at step 109, deformation data T.sub.L.sup.I(x,y) and intensity data W.sub.L for all coordinates (x,y) on the surface A are obtained by applying the equality T.sub.L.sup.I(x,y)=T.sub.L.sup.I(x′,y′)=T′.sub.L and W.sub.L.sup.I(x,y)=W.sub.L.sup.I(x′,y′)=W′.sub.L with x=x′, y=y′.
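The local, tile-wise estimation can be sketched as follows, assuming one constant translation per sub-image (the synthetic piecewise shift below is an assumption made for this example):

```python
import numpy as np

def xcorr_shift(a, b):
    """Shift of `a` relative to `b` from the zero-mean cross-correlation peak."""
    a = a - a.mean()
    b = b - b.mean()
    corr = np.fft.fftshift(np.real(np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b)))))
    py, px = np.unravel_index(np.argmax(corr), corr.shape)
    return py - corr.shape[0] // 2, px - corr.shape[1] // 2

rng = np.random.default_rng(5)
ref = rng.random((64, 64))
# Build a signal whose local shift differs between the left and right halves,
# mimicking a spatially varying phase gradient.
signal = np.empty_like(ref)
signal[:, :32] = np.roll(ref, (2, 0), axis=(0, 1))[:, :32]
signal[:, 32:] = np.roll(ref, (-3, 0), axis=(0, 1))[:, 32:]

# Estimate the (constant) translation tile by tile, as in steps 105-107.
left = xcorr_shift(signal[:, :32], ref[:, :32])
right = xcorr_shift(signal[:, 32:], ref[:, 32:])
assert left == (2, 0) and right == (-3, 0)
```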

(85) FIG. 13 illustrates a method according to a further embodiment of the present invention. Unlike the previous embodiments, the optical mask in the following has a large memory effect angle δα>Δα.sub.ij, for all i,j∈[1,N], with i≠j.

(86) First, at step 201, as in the previous embodiments, the single signal image I(x,y) of an intensity pattern is recorded using the imaging sensor 18. The latter is generated by sending the said multi-angular signal light beam 12 onto the optical mask 14. The light beam 12 comprises the N channels C.sub.L each having a propagation direction P.sub.L.

(87) Then, at step 207, several reference intensity patterns R.sub.L(x,y) are provided, wherein each reference intensity pattern R.sub.L(x,y) corresponds to a respective propagation direction P.sub.L, L varying between 1 and N, and x and y denoting spatial coordinates.

(88) To that end, the method comprises, prior to step 207, a step 203 wherein at least one intensity pattern R.sub.K(x,y) is recorded using device 16, K being between 1 and N; and

(89) if the number of the recorded intensity patterns is smaller than N, the method further comprises a step 205 consisting in generating numerically the other intensity patterns R.sub.L(x,y), L being between 1 and N different from K, using the recorded pattern RK, so as to obtain N references patterns corresponding to propagation directions P.sub.L, L varying between 1 and N.

(90) Preferably, step 205 comprises first computing a cross-correlation of the single signal image I(x,y) and the at least one recorded reference pattern R.sub.K(x,y). As a result, N maxima are obtained. Then, the at least one recorded reference intensity pattern R.sub.K(x,y) is spatially shifted to the corresponding position based on each maximum position, which generates the other reference intensity patterns R.sub.L(x,y).

(91) This method takes advantage of the fact that the intensity patterns obtained for a plurality of respective incident angular beams of different propagation directions but having the same wavefront are identical but shifted by an amount Δα.Math.d, d being a separation distance between the optical mask and the imaging sensor 18.

(92) Alternatively, in step 203, a reference intensity pattern R.sub.K(x,y) is generated from an incident beam with a propagation direction P.sub.L different from the propagation directions contained in the single signal image I(x,y).

(93) Then, in step 205, all N references R.sub.L(x,y) are generated numerically using the recorded reference intensity pattern R.sub.K(x,y) in a similar way as described above, ie by: computing a cross-correlation of the single signal image I(x,y) and the recorded reference pattern R.sub.K(x,y) for identifying the N maxima, and spatially shifting the recorded reference intensity pattern R.sub.K(x,y) to the corresponding position based on each maximum position for generating the N reference intensity patterns R.sub.L(x,y).
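Steps 203 and 205 can be sketched numerically: the single recorded reference is cross-correlated with the multiplexed signal, the N strongest maxima are located with a greedy peak search (the helper `top_shifts` and its exclusion window are hypothetical, not specified in the patent), and shifted copies of the reference are generated:

```python
import numpy as np

def xcorr(a, b):
    """Circular zero-mean cross-correlation, zero shift at the center."""
    a = a - a.mean()
    b = b - b.mean()
    return np.fft.fftshift(np.real(np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b)))))

def top_shifts(corr, n, exclude=5):
    """Locate the n strongest correlation maxima (greedy search, with a
    small exclusion window around each detected peak)."""
    corr = corr.copy()
    c = np.array(corr.shape) // 2
    shifts = []
    for _ in range(n):
        p = np.array(np.unravel_index(np.argmax(corr), corr.shape))
        shifts.append(tuple(p - c))
        y0, x0 = np.maximum(p - exclude, 0)
        corr[y0:p[0] + exclude, x0:p[1] + exclude] = -np.inf
    return shifts

rng = np.random.default_rng(6)
r_k = rng.random((64, 64))                       # the one recorded reference
true_shifts = [(0, 0), (6, -8), (-10, 4)]        # one shift per guide star
signal = sum(np.roll(r_k, s, axis=(0, 1)) for s in true_shifts)

shifts = top_shifts(xcorr(signal, r_k), 3)
# Each remaining reference R_L is the recorded one shifted to a maximum.
refs = [np.roll(r_k, s, axis=(0, 1)) for s in shifts]
assert sorted(shifts) == sorted(true_shifts)
assert np.allclose(sum(refs), signal)
```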

(94) In order to determine the wavefront shape at the N propagation directions P.sub.L, deformation data T.sub.L.sup.I(x,y) are computed at step 213 using computing means, as well as intensity-weight data W.sub.L.sup.I(x,y).

(95) First, at a step 209, the intensity pattern I(x,y) is split into several sub-images I′(x,y), each representative of light impinging on a surface portion A.sub.n of the imaging sensor, with A.sub.n having a largest extent L.sub.An smaller than Δα.sub.ij·d.

(96) When the imaging sensor is a matrix, A.sub.n corresponds advantageously to the surface of a micropixel.

(97) At step 211, for each surface portion A.sub.n, the intensity-weight data W.sub.L.sup.I(x,y) and the deformation data T.sub.L.sup.I(x,y) are calculated between the signal sub-image I′ and each of the reference intensity patterns so as to minimize, for all sampling points (x,y) of each surface portion A.sub.n, from the single signal sub-image I′(x,y):

(98) a difference D.sub.An between the sub-image I′ on the one hand, and the sum of reference intensity patterns R.sub.L multiplied by intensity-weight data W.sub.L.sup.I(x,y) and deformed by deformation data T.sub.L.sup.I(x,y), on the other hand:

(99) D.sub.An=∥I′(x,y)−Σ.sub.L W.sub.L.sup.I(x,y) R.sub.L[(x,y)+T.sub.L.sup.I(x,y)]∥.sub.An

(100) the symbol ∥.∥.sub.A.sub.n designating a norm calculated for all (x,y) sampling points in the surface portion A.sub.n;

(101) The quantity may comprise a regularization term.

(102) In some embodiments, the intensity-weight data W.sub.L(x,y) and the deformation data T.sub.L(x,y) are estimated by: i) assuming that W.sub.L.sup.I(x,y)=W′.sub.L and T.sub.L.sup.I(x,y)=T′.sub.L, where W′.sub.L is a constant and positive scalar and T′.sub.L a constant two-dimensional translation vector, over the surface area A.sub.n, with (x,y) coordinates on A.sub.n, and ii) computing the intensity-weight data W′.sub.L and the deformation data T′.sub.L between the signal sub-image and each of the reference intensity patterns R.sub.L(x,y) so as to minimize, for the corresponding surface area A.sub.n, the difference D.sub.An:

(103) D.sub.An=∥I′(x,y)−Σ.sub.L W′.sub.L R.sub.L[(x,y)+T′.sub.L]∥.sub.An

(104) In the same way as previously described with respect to FIG. 11, the intensity-weight data W.sub.L.sup.I and the deformation data T.sub.L.sup.I may be estimated by a Digital Image Correlation method. To that end, a zero-mean cross-correlation product image is computed, for each surface portion A.sub.n, between the signal sub-image and each of the reference intensity patterns, the plurality of reference intensity patterns being uncorrelated in the sub-image I′ relatively to the zero-mean cross-correlation product. The zero-mean cross-correlation product image between the signal sub-image I′ and each of the reference intensity patterns R.sub.L(x,y) has a peak, the intensity-weight data being the amplitude of the peak and the deformation data T.sub.L.sup.I being the displacement vector between the said peak (or its centroid) and the center of the zero-mean cross-correlation product image.

(105) Finally, at step 213, data are generated for each propagation direction P.sub.L representative of: the shape of the wavefront, by integrating the deformation data T.sub.L.sup.I(x,y) over at least one direction of the intensity pattern image, preferably over the two directions of the intensity pattern image; and the intensity map, based on the intensity-weight data W.sub.L.sup.I(x,y).

(106) In some embodiments, the contrast of the single signal image I(x,y) may decrease as the number of angular channels C.sub.K increases. The accuracy of the wavefront reconstruction can thus be affected. It is then preferable to rely on a global optimization method to retrieve T.sub.L(x,y) and W.sub.L(x,y).

(107) An example of a global optimization method according to the invention, similar to the one of FIG. 12, will now be described.

(108) Said method consists in iteratively trying to improve the candidate solution of the optimization problem.

(109) For each surface portion A.sub.n,

(110) At the first iteration M=1, T.sub.L.sup.I,1(x,y) and W.sub.L.sup.I,1(x,y) are obtained in a similar way as previously described.

(111) Then a second iteration is performed, in order to obtain new values for these data.

(112) To that end, at this iteration, for each propagation direction P.sub.L, the signal sub-image I′(x,y) is replaced by I′.sup.2(x,y)=I′(x,y)−Σ.sub.K≠L W.sub.K.sup.I,1 R.sub.K((x,y)+T.sub.K.sup.I,1(x,y)).

(113) Then, a DIC method is used again to update T.sub.L.sup.I,2(x,y) and the weight coefficient W.sub.L.sup.I,2(x,y).

(114) The same process is then repeated for the following iterations. At a given iteration p, for each propagation direction P.sub.L: the signal sub-image I′(x,y) is replaced by I′.sup.p(x,y)=I′(x,y)−Σ.sub.K≠L W.sub.K.sup.I,p-1 R.sub.K((x,y)+T.sub.K.sup.I,p-1(x,y)), then a DIC method is used to update T.sub.L.sup.I,p(x,y) and the weight coefficient W.sub.L.sup.I,p(x,y).

(115) After M iterations, the final transformation functions T.sub.L.sup.I,Mit(x,y) and weight coefficients W.sub.L.sup.I,Mit(x,y) are used to recover the wavefronts by using a 2D integration algorithm.

(116) FIG. 14 displays an example of a reconstruction of a multi-angular light beam illuminating the imaging sensor 18. In the example illustrated herein, a group of 3×3 wavefronts is used as the input, each of them generated using a sum of Zernike modes with random coefficients; the wavefronts are distributed around the central wavefront with a fixed separation angle of 1 degree. The left column Col 1 represents the inputs. The middle column shows the results using a sequential method.

(117) The right column Col 3 illustrates the wavefront reconstruction performed with the method according to the present invention. The results show that the multiplexed mode, ie the simultaneous angular-wavefront measurement according to the invention, works as well as the sequential mode.

(118) FIGS. 15A to 15D show an experimental demonstration performed using two fluorescent guide stars S1 and S2. An excitation laser beam excites, through a spatial light modulator, two fluorescent beads independently. It can be adjusted to excite either or both of them, which makes it possible to switch between the sequential mode and the multiplexed mode. The wavefront sensor is located in a Fourier plane of a microscope objective.

(119) In the sequential mode, the two wavefronts are retrieved by sequentially exciting the two guide stars and recovering the wavefronts, as illustrated in FIGS. 15A and 15B. FIG. 15C illustrates the case where the two guide stars are simultaneously excited in the multiplexed mode. As illustrated, the two wavefronts are recovered from a single intensity pattern according to the present invention. As demonstrated in FIG. 15C by comparing their Zernike decompositions, the wavefront reconstruction according to the present invention performs as well as the sequential mode.

(120) FIGS. 16A to 16D refer to a reconstruction of wavefronts of a multi-angular light beam emitted by several guide stars.

(121) In the example illustrated in FIG. 16A, a single signal image I(x,y) corresponding to several guide stars S1, S2 and S3 located in the object plane is recorded using the device 16. The optical mask of the device used herein has a large angular memory effect. In the illustrated example, the optical mask is a diffuser.

(122) In order to reconstruct the wavefronts contained in the intensity pattern, a reference pattern R.sub.0 which is obtained from a plane wave is recorded, as shown in FIG. 16A.

(123) Thanks to the large memory effect range of the diffuser, the individual intensity patterns generated from the guide stars S1, S2 and S3 have a relative global shift in the case of aberrations, ie low phase gradients. Therefore, the cross-correlation between the reference R0 and the single signal image I(x,y) makes it possible to find several maxima, corresponding to the number of guide stars, as can be seen in FIG. 16B.

(124) Since the maximum of the autocorrelation is always located at the center, the relative shift between the reference intensity pattern R0 and the corresponding individual intensity pattern of each guide star can easily be recovered. Each wavefront contained in the single signal image can be retrieved by shifting the reference speckle to the corresponding position. Three wavefronts are retrieved in FIG. 16C with any one of the optimization methods described above.
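The peak search described in paragraphs (123) and (124) can be sketched numerically. The Python sketch below is illustrative only: a random array stands in for the recorded reference speckle R0, and each guide star's contribution is simulated as a globally shifted copy of it; all names and sizes are assumptions, not the patent's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 128
R0 = rng.random((n, n))                       # stand-in for the reference speckle R0
shifts = [(5, -9), (-12, 4), (20, 15)]        # one global shift per guide star
I = sum(np.roll(R0, s, axis=(0, 1)) for s in shifts)   # multiplexed signal image

# zero-mean cross-correlation between I(x,y) and R0, computed via FFT
i0, r0 = I - I.mean(), R0 - R0.mean()
c = np.real(np.fft.ifft2(np.fft.fft2(i0) * np.conj(np.fft.fft2(r0))))

# the three strongest peaks give the three global shifts (one per guide star)
found, cc = [], c.copy()
for _ in shifts:
    p = np.unravel_index(np.argmax(cc), cc.shape)
    found.append(tuple((q + n // 2) % n - n // 2 for q in p))   # signed shift
    cc[max(0, p[0]-3):p[0]+4, max(0, p[1]-3):p[1]+4] = -np.inf  # suppress this peak
print(sorted(found))  # the three recovered shifts, in sorted order
```

Because the speckle-like patterns are delta-correlated, each shifted copy contributes one sharp, well-separated correlation peak, which is why a simple argmax-and-suppress loop suffices here.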

(125) To demonstrate the validity of the wavefronts recovered from the single signal image, each retrieved wavefront is compared with the one obtained using sequential acquisition, i.e. corresponding to a single intensity pattern, as shown in FIG. 16C. In this figure, the left column (col 1) corresponds to the sequential mode, the middle column to the multiplexed acquisition according to the present invention, and the right column (col 3) to the point spread functions (PSFs).

(126) All of them are then decomposed into the first 16 Zernike modes, shown in FIG. 15D. As shown, there is good agreement between the sequential and multiplexed measurements. The corresponding point spread functions (PSFs) can be calculated from these wavefronts, as shown in FIG. 16D, and enable deconvolution of the images.
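The Zernike comparison can be illustrated by a least-squares projection of a sampled wavefront onto a small Zernike basis. This minimal sketch uses only six unnormalized low-order modes (piston, tip, tilt, defocus and two astigmatisms) rather than the 16 Noll modes of the figure; all names and sizes are illustrative assumptions.

```python
import numpy as np

def low_order_zernike_basis(n=64):
    """A few low-order Zernike modes on the unit disk (unnormalized):
    piston, tip, tilt, defocus, oblique and vertical astigmatism."""
    y, x = (np.mgrid[:n, :n] - (n - 1) / 2) / ((n - 1) / 2)
    r2 = x ** 2 + y ** 2
    mask = r2 <= 1.0
    modes = np.stack([np.ones_like(x), x, y, 2 * r2 - 1,
                      2 * x * y, x ** 2 - y ** 2])
    return modes, mask

def decompose(wavefront, modes, mask):
    """Least-squares Zernike coefficients of a wavefront sampled on the disk."""
    A = modes[:, mask].T                      # (n_pixels, n_modes) design matrix
    coeffs, *_ = np.linalg.lstsq(A, wavefront[mask], rcond=None)
    return coeffs

modes, mask = low_order_zernike_basis()
wf = 0.3 * modes[1] + 0.1 * modes[3]          # known tip + defocus combination
print(decompose(wf, modes, mask))             # coefficients ≈ [0, 0.3, 0, 0.1, 0, 0]
```

Because the test wavefront lies exactly in the span of the basis, the least-squares fit recovers the coefficients exactly; on real data the fit would simply be the best projection onto the chosen modes.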

(127) FIGS. 17A and 17B illustrate the need in adaptive optics for measuring several wavefronts incoming from different regions in the field of view.

(128) In adaptive optics, the wavefront is generally measured from a single guide star. An image, for instance acquired with a camera located in the image plane, can then be corrected for this aberration using either an active correction (e.g. a spatial light modulator in the pupil plane) or a passive correction (deconvolution). However, as the aberrations vary across the field of view, this correction is generally efficient only within an "isoplanatic patch" around the considered guide star. This effect usually limits the field of view in adaptive optics. The multiplexed wavefront measurement according to the present invention allows measuring the aberrations from different guide stars in different isoplanatic patches and thus allows enlarging the field of view after correction. Here, a deconvolution method (Richardson-Lucy algorithm) is used to illustrate this point. Many fluorescent beads are imaged and the aberrations of 3 beads located in different isoplanatic patches are acquired (Regions 1, 2 and 3 in FIG. 17A).
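The Richardson-Lucy step can be sketched as follows, assuming a known, shift-invariant PSF within one isoplanatic patch and periodic boundaries handled by FFTs; this is a minimal textbook implementation, not the code used for the figures.

```python
import numpy as np

def richardson_lucy(image, psf, iterations=30, eps=1e-12):
    """Basic Richardson-Lucy deconvolution (circular boundaries via FFT)."""
    psf_f = np.fft.fft2(np.fft.ifftshift(psf))       # centered PSF -> FFT form
    estimate = np.full_like(image, image.mean())     # flat initial estimate
    for _ in range(iterations):
        blurred = np.real(np.fft.ifft2(np.fft.fft2(estimate) * psf_f))
        ratio = image / np.maximum(blurred, eps)
        # multiplicative update: correlate the ratio with the PSF
        correction = np.real(np.fft.ifft2(np.fft.fft2(ratio) * np.conj(psf_f)))
        estimate = estimate * correction
    return estimate

# Demonstration on synthetic data: blur two bead-like points, then deconvolve.
n = 64
y, x = np.mgrid[:n, :n]
psf = np.exp(-((x - n // 2) ** 2 + (y - n // 2) ** 2) / (2 * 2.0 ** 2))
psf /= psf.sum()
scene = np.zeros((n, n)); scene[20, 20] = 1.0; scene[40, 44] = 1.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(scene)
                               * np.fft.fft2(np.fft.ifftshift(psf))))
restored = richardson_lucy(blurred, psf)
print(restored[20, 20] > blurred[20, 20])  # True: deconvolution sharpens the beads
```

In the multiplexed scheme, one such deconvolution would be run per region with the PSF measured from that region's guide star.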

(129) In FIG. 17A, the imaging results are compared with and without the aberration, which is caused here by inserting a diffuser between the sample and the objective. A circle in each region indicates the guide star from which the wavefront has been measured. Each recovered PSF (from each recovered wavefront) is used to specifically deconvolve one of the three selected areas.

(130) The results are shown in FIG. 17B. First, the wavefronts recovered from the single signal image I(x,y) can be used to decrease the aberration and improve the image quality; second, the multiplexed measurement can increase the FOV of adaptive-optics-based microscopy. Specifically, the deconvolution results for a given area using the corresponding measured PSF are always better than those using the other PSFs, which can even fail to recover the information of the selected area. The invention is not limited to the described embodiments, and various variations and modifications may be made without departing from its scope.

(131) For example, the multi-angular signal light beam 12 may be generated by illuminating a polarization assembly 60, located before the wavefront sensor 14, 18, with an initial light beam 50 containing at least one angular channel, as illustrated in FIG. 18A.

(132) As shown, a polarization assembly 60 is located before the optical mask 14 in order to encode several polarization states of one or several wavefronts into N angular channels C.sub.L.

(133) The polarization assembly 60 may contain at least one polarization optic, for example a quarter-wave plate, a half-wave plate, a polarizer, a Wollaston prism, a Glan-Taylor polarizer, a Glan-Thompson polarizer, a Rochon prism, a Nomarski prism or any other polarization-dependent optic, so that different polarization states of the initial light beam 50 are encoded into different angular channels Ci, Ck, Cj, Cl of the multi-angular signal light beam 12.

(134) FIGS. 18B and 18C relate to specific embodiments where the polarization assembly 60 is a Wollaston prism, which generates, for each incident wavefront, two separate linearly polarized outgoing wavefronts with orthogonal polarizations and distinct propagation directions.

(135) The invention is not limited to the described embodiments.

(136) The optical mask may be a diffractive optical element.

(137) The optical mask 14 may be an optical fiber bundle, a metasurface, or a freeform optical element.

(138) The optical mask 14 may be an array of micro-optical elements, each micro-optical element generating on the imaging sensor an intensity pattern different from the intensity pattern generated by the micro-optical elements located at least in its vicinity.

(139) The optical mask 14 may be an array of micro-lenses randomly arranged spatially.

ADDITIONAL EXAMPLE OF INVENTION

(140) In an embodiment, a method according to the present invention relies on the specific properties of random intensity patterns generated by diffusers or, more generally, by an optical mask having the following optical properties:

(141) i) to cause the intensity pattern to depend on the wavefront shape, so that a tilt applied to the wavefront shape results in a displacement amount of the intensity pattern,

(142) ii) to produce uncorrelated intensity patterns over at least one surface area A of the imaging sensor, for a plurality of respective incident angular beams of different propagation directions having a same wavefront shape.

(143) The principle is described below.

(144) If two beams exhibit the same wavefront and the same intensity distribution, and differ only in their propagation direction vector, the intensity patterns produced on the imaging sensor will be different. The degree of similarity between the two intensity patterns {R.sub.j}.sub.j=1,2 obtained at the two different propagation direction vectors depends on the separation angle Δα.sub.ij.

(145) The degree of similarity between two patterns can be mathematically characterized using several correlation tools. One solution for characterizing two intensity patterns consists in localizing critical points of the intensity, such as intensity maxima, and looking for correlations between the localizations of the intensity maxima of the two patterns. The localization correlation can be quantified using a Pearson correlation coefficient. It can also be quantified using the mean average distance to the nearest neighbor weighted by the critical point density. Such a correlation characterization between two different populations of critical points in different field patterns was achieved in Gateau et al., arXiv:1901.11497.

(146) Alternatively, the correlation between two intensity patterns can be characterized by computing the zero-mean cross-correlation product. The statistical average of the cross-correlation product between two uncorrelated random signals (of zero mean) is zero.

(147) For intensity patterns {R.sub.L}.sub.L=1,2:

(148) ⟨(R_1 − ⟨R_1⟩) ⋆ (R_2 − ⟨R_2⟩)⟩ = 0

(149) where ⟨·⟩ stands for a statistical average. This statistical average may be achieved by integrating the intensity pattern over a given surface area of the imaging sensor under an ergodic hypothesis. Conversely, the zero-mean cross-correlation product of a signal intensity pattern with itself, also called the auto-correlation function, is a peaked function:

(150) C(R_1, R_1) = ⟨(R_1 − ⟨R_1⟩) ⋆ (R_1 − ⟨R_1⟩)⟩ = δ(r)

(151) where δ designates the Dirac distribution, for example if R_1 is assumed to be of variance 1, and r is a spatial coordinate on the imaging sensor.

(152) An optical mask may be designed so as to satisfy orthogonality properties between patterns obtained at different propagation direction vectors.

(153) Alternatively, a diffuser generating propagation-direction-dependent random intensity patterns, or speckles, may be used.

(154) Mathematically, two uncorrelated random intensity patterns (or speckles) are thus statistically orthogonal relatively to the zero-mean cross-correlation product.

(155) In cases where the intensity pattern is a speckle, the integration surface area should at least contain one “speckle grain” and the larger the integration surface area, the better the validity of the orthogonality approximation. The zero-mean cross-correlation product thus provides a useful inner product.

(156) Noteworthy, the orthogonality property satisfied by speckles generated by a diffuser is in particular true in a statistical sense, or in other words, when averaged over a large amount of speckle grains.
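The statistical orthogonality discussed above can be checked numerically. In the sketch below, independent uniform random arrays stand in for speckles recorded at two directions separated by more than the memory effect (an assumption for illustration):

```python
import numpy as np

def corr_peak(a, b):
    """Peak of the zero-mean cross-correlation, normalized by the variances."""
    a0, b0 = a - a.mean(), b - b.mean()
    c = np.real(np.fft.ifft2(np.fft.fft2(a0) * np.conj(np.fft.fft2(b0))))
    return np.abs(c).max() / (a0.size * a0.std() * b0.std())

rng = np.random.default_rng(1)
r1 = rng.random((256, 256))   # speckle from direction 1 (stand-in)
r2 = rng.random((256, 256))   # speckle from direction 2, beyond the memory effect

print(round(corr_peak(r1, r1), 3))  # 1.0
print(corr_peak(r1, r2) < 0.1)      # True
```

The auto-correlation peak is exactly the pattern variance (normalized to 1), while the cross-correlation of the two independent patterns only fluctuates at the level expected for a finite number of speckle grains, illustrating that the orthogonality holds in a statistical sense and improves with the integration area.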

(157) In some embodiments, optical masks with tailored properties may be designed to optimize the orthogonality of patterns obtained at different propagation direction vectors. In this case, a regular diffuser would be replaced by specifically engineered diffractive optical elements or pseudo-diffusers. Optimizing the orthogonality of patterns not only reduces the cross-talk between angular channels but also provides a simple means to retrieve the wavefronts at each angular channel thanks to the simple zero-mean cross-correlation product or a Wiener deconvolution.

(158) Mathematical Grounding, Theoretical Principle

(159) Consider a signal light beam composed of a plurality of angular channels having propagation direction vectors {u⃗_1, . . . , u⃗_n}.

(160) When the diffuser is illuminated with such a multi-angular light beam, every individual propagation direction vector produces its own signal intensity pattern. The total signal intensity pattern of the multi-angular beam is then the sum of all the individual signal intensity patterns. In a calibration step, the reference intensity patterns R_j(r⃗) are obtained at each individual propagation direction vector u⃗_j, where r⃗ is the spatial coordinate vector on the imaging sensor. In an embodiment, the wavefront changes are, in particular, beam tilts, hence resulting in global shifts of the angular signal intensity patterns at the imaging sensor. The total single signal intensity pattern at the imaging sensor 18 will then be:

(161) I(r⃗) = Σ_{j=1}^{n} β_j R_j(r⃗ − r⃗_j)  (1)

where β_j is the weight of the contribution to the signal beam at propagation direction vector u⃗_j and r⃗_j is a translation vector resulting from the beam tilt.

(162) Mathematical treatments allow retrieving every individual β_j and r⃗_j.

(163) To achieve this, the zero-mean cross-correlation product with R_n may be performed:

(164) ⟨(I − ⟨I⟩) ⋆ (R_n − ⟨R_n⟩)⟩ = Σ_{j=1}^{n} β_j ⟨(R_j(r⃗ − r⃗_j) − ⟨R_j⟩) ⋆ (R_n − ⟨R_n⟩)⟩ = β_n δ(r⃗ − r⃗_n)

(165) where ⟨·⟩ stands for a statistical average. The result of this mathematical treatment may be a translation image exhibiting a peaked function of amplitude β_n, whose peak is centered at r⃗_n. The parameter r⃗_j gives access to the uniform deformation data T_L(x,y) = −r⃗_j while β_j gives access to the uniform intensity weight data W_L(x,y) = β_j of the beam at propagation direction vector u⃗_j.

(166) In the following, several possible algorithmic implementations are cited for retrieving these parameters. However, whatever the algorithm used, the orthogonality of the propagation direction-dependent reference intensity patterns relatively to the zero-mean cross-correlation product is the common theoretical grounding requirement.

(167) Practical Algorithmic Implementation

(168) In some embodiments, typical wavefront distortions are more complex than simple tilts. Integration of the displacement vector maps will then provide the wavefronts, as described above.
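One possible way to integrate a displacement (phase-gradient) map into a wavefront is a Fourier-domain least-squares integrator of the Frankot-Chellappa type, assuming periodic boundaries; the sketch below is one choice among the integration methods referred to above, not the method prescribed by the text.

```python
import numpy as np

def integrate_gradient(gx, gy):
    """Least-squares integration of a gradient map in the Fourier domain;
    returns the wavefront up to an arbitrary constant (piston)."""
    ny, nx = gx.shape
    KX, KY = np.meshgrid(2j * np.pi * np.fft.fftfreq(nx),
                         2j * np.pi * np.fft.fftfreq(ny))
    denom = np.abs(KX) ** 2 + np.abs(KY) ** 2
    denom[0, 0] = 1.0                      # avoid division by zero at DC
    num = np.conj(KX) * np.fft.fft2(gx) + np.conj(KY) * np.fft.fft2(gy)
    W = num / denom
    W[0, 0] = 0.0                          # piston is undetermined
    return np.real(np.fft.ifft2(W))

# Consistency check on a smooth periodic wavefront with known gradients.
n = 64
y, x = np.mgrid[:n, :n]
w0 = np.cos(2 * np.pi * x / n) + 0.5 * np.sin(2 * np.pi * y / n)
KX, KY = np.meshgrid(2j * np.pi * np.fft.fftfreq(n),
                     2j * np.pi * np.fft.fftfreq(n))
gx = np.real(np.fft.ifft2(KX * np.fft.fft2(w0)))
gy = np.real(np.fft.ifft2(KY * np.fft.fft2(w0)))
print(np.allclose(integrate_gradient(gx, gy), w0 - w0.mean()))  # True
```

For band-limited periodic test data the reconstruction is exact up to the piston term; on real displacement maps the result is the least-squares wavefront consistent with the measured local tilts.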

(169) Algorithms such as the Demon algorithm may be used in order to compute a distortion map directly (Berto, P., Rigneault, H., & Guillon, M. (2017), Wavefront sensing with a thin diffuser, Optics Letters, 42(24), 5117-5120). Deep-learning computational approaches based on neural networks can also be implemented in order to find the distortion maps.

(170) A simpler approach consists in splitting the original intensity map into macro-pixels. At a "local scale", the signal intensity pattern is merely translated and pattern deformation may be neglected, making the algorithms simpler. The "local" translation of the signal intensity pattern is proportional to the "local tilt" of the wavefront. One algorithmic difficulty consists in defining the "local scale", i.e. what the size of the macro-pixel should be. In practice, it depends on geometrical and physical parameters of the WFS. This point is thus discussed now.

(171) As a first simple algorithmic implementation, the zero-mean cross-correlation product, extensively discussed above, may be used. At the defined "local scale", the signal intensity pattern is just a translation of the reference intensity pattern. A crop of the pixelated full-field camera image of the signal intensity pattern must thus be performed. The crop size then defines a macro-pixel (composed of several camera pixels), which will correspond, in the end, to the phase pixel, i.e. the pixel of the final phase image. Although the simplest solution consists in considering a square macro-pixel of constant size, the macro-pixel size may vary depending on the camera region. The size of the macro-pixel is a balance between the spatial resolution of the final phase image (the larger the macro-pixels, the fewer of them there are) and the angular resolution of the WFS (the larger the macro-pixel, the larger the number of speckle grains per macro-pixel and the larger the number of orthogonal angular modes over a given angular width).

(172) Once the signal intensity pattern is split into macro-pixels, every macro-pixel can be projected onto the corresponding macro-pixel of the reference intensity pattern, thanks to the zero-mean cross-correlation product, as described above.
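The macro-pixel projection can be sketched as follows; the macro-pixel size, image size and the simulated pure tilt below are illustrative assumptions, not prescribed values.

```python
import numpy as np

def local_shift_map(signal, reference, macro=32):
    """Per-macro-pixel translation of the signal vs. the reference pattern,
    estimated with a zero-mean cross-correlation product on each crop."""
    ny, nx = signal.shape
    shifts = np.zeros((ny // macro, nx // macro, 2))
    for i in range(ny // macro):
        for j in range(nx // macro):
            s = signal[i*macro:(i+1)*macro, j*macro:(j+1)*macro]
            r = reference[i*macro:(i+1)*macro, j*macro:(j+1)*macro]
            s, r = s - s.mean(), r - r.mean()
            c = np.real(np.fft.ifft2(np.fft.fft2(s) * np.conj(np.fft.fft2(r))))
            p = np.unravel_index(np.argmax(c), c.shape)
            shifts[i, j] = [(q + macro // 2) % macro - macro // 2 for q in p]
    return shifts

rng = np.random.default_rng(2)
ref = rng.random((128, 128))                  # calibration (reference) speckle
sig = np.roll(ref, (3, -2), axis=(0, 1))      # a pure tilt: uniform local shift
print(np.median(local_shift_map(sig, ref), axis=(0, 1)))  # [ 3. -2.]
```

For a non-uniform wavefront, the per-macro-pixel shifts form the displacement vector map whose integration then yields the phase image, with the macro-pixel size setting the spatial/angular resolution trade-off described above.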

(173) Alternatively, a Wiener deconvolution can be used rather than a zero-mean cross-correlation product. Wiener deconvolution is very similar to a zero-mean cross-correlation product. This similarity appears when performing these operations in the Fourier domain.

(174) Consider a signal image I and a reference image R; their two-dimensional Fourier transforms are then written Î and R̂. The Fourier transform of their cross-correlation is then F = R̂*Î, where R̂* is the complex conjugate of R̂. By comparison, the Fourier transform of the Wiener deconvolution of I by R is:

(175) W = R̂* Î / (σ² + |R̂|²)

Here the term σ² is an average signal-to-noise ratio. Note that both the cross-correlation product and the Wiener deconvolution are slightly dissymmetric in R and I. For our data treatment, exchanging the roles of R and I does not significantly change the result. Our preliminary data presented below are obtained using Wiener deconvolution.
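The Wiener deconvolution above can be sketched numerically. In this illustrative special case the multiplexed image is built from a single reference pattern shifted with two different weights, and σ² is taken as a small constant; all names and values are assumptions.

```python
import numpy as np

def wiener_translation_image(I, R, sigma2=1e-2):
    """Wiener deconvolution of signal image I by reference R:
    W_hat = conj(R_hat) I_hat / (sigma2 + |R_hat|^2), per spatial frequency."""
    I_hat = np.fft.fft2(I - I.mean())
    R_hat = np.fft.fft2(R - R.mean())
    W_hat = np.conj(R_hat) * I_hat / (sigma2 + np.abs(R_hat) ** 2)
    return np.real(np.fft.ifft2(W_hat))

rng = np.random.default_rng(3)
R = rng.random((128, 128))
# Signal per equation (1): two weighted, globally shifted copies of R.
I = 1.0 * np.roll(R, (4, 7), axis=(0, 1)) + 0.5 * np.roll(R, (-11, 3), axis=(0, 1))
T = wiener_translation_image(I, R)
peak = np.unravel_index(np.argmax(T), T.shape)
print(peak)  # strongest peak at the shift of the dominant channel, (4, 7)
```

The resulting translation image shows one peak per channel, with heights close to the weights β; the σ² term merely regularizes frequencies where |R̂|² is small.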

(176) In practice, intensity patterns obtained at different propagation directions are only orthogonal in a statistical sense. Consequently, there may be some cross-talk between angular channels. To reduce it, iterative Wiener deconvolution can be implemented. In this implementation, estimates of β_j and r⃗_j are first deduced from the translation image obtained thanks to a Wiener deconvolution. Second, the expected signal intensity pattern is numerically rebuilt according to equation (1). As a third step, the expected signal intensity pattern is compared to the actual experimental signal intensity pattern and the differences are fed back as an input for the first step. Steps 1 to 3 can then be iterated.

(177) More elaborate compressed-sensing algorithms can also be used, taking into account that a given propagation direction can only be responsible for a single peak. Such algorithms are optimized to simultaneously minimize two quantities, one of which is the root mean squared error between the experimental data (the signal intensity pattern) and the rebuilt data. The other quantity to be minimized here is the number of non-zero coefficients in each translation image. More elaborate reconstruction algorithms may also be used, among which all techniques relying on matrix inversion: Moore-Penrose pseudo-inversion, singular value decomposition (also called principal component analysis), Tikhonov regularization, etc. For instance, principal component analysis was used in N. K. Metzger et al., Nat. Commun. 8:15610 (2017) to make a very sensitive spectrometer. Such a matrix pseudo-inversion can be achieved in the following way. In the Fourier domain, equation (1) can be written:

(178) F(I)(k⃗) = Σ_{j=1}^{n} β_j R̂_j(k⃗) e^{−i k⃗·r⃗_j}

(179) where R̂_j(k⃗) are the 2D Fourier transforms of the reference intensity patterns. We then have:

(180) R̂_n*(k⃗) F(I)(k⃗) = Σ_{j=1}^{n} β_j R̂_n*(k⃗) R̂_j(k⃗) e^{−i k⃗·r⃗_j}

(181) Which can be simply re-written as a matrix equation to invert:

(182) V_I(k⃗) = M(k⃗) V_O(k⃗)

where V_{I,n} = R̂_n* F(I), M_{n,j} = R̂_n* R̂_j and V_{O,j}(k⃗) = β_j e^{−i k⃗·r⃗_j}.
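Equation (182) can be illustrated numerically. Note that, at a single spatial frequency, the matrix M(k⃗) as written is rank-one, so the Moore-Penrose pseudo-inverse returns the minimum-norm solution; in this synthetic sketch that is still sufficient to localize the translation peaks. All patterns, weights and shifts below are synthetic assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n, N = 64, 3                                   # sensor size, angular channels
refs = [rng.random((n, n)) for _ in range(N)]  # reference patterns R_j
betas = [1.0, 0.7, 0.4]                        # channel weights beta_j
shifts = [(2, 5), (-6, 1), (9, -3)]            # beam-tilt translations r_j

# Signal per equation (1): weighted, shifted copies of each reference.
I = sum(b * np.roll(R, s, axis=(0, 1)) for b, R, s in zip(betas, refs, shifts))

F = np.fft.fft2(I - I.mean())
Rf = [np.fft.fft2(R - R.mean()) for R in refs]

# Solve V_I(k) = M(k) V_O(k) independently at each spatial frequency k,
# with M_{n,j} = conj(R_n) R_j and V_{I,n} = conj(R_n) F(I).
VO = np.zeros((N, n, n), dtype=complex)
for u in range(n):
    for v in range(n):
        col = np.array([Rf[a][u, v] for a in range(N)])
        M = np.outer(np.conj(col), col)        # M_{n,j} = conj(R_n) R_j
        VI = np.conj(col) * F[u, v]
        VO[:, u, v] = np.linalg.pinv(M) @ VI

# Each V_{O,j}(k) = beta_j exp(-i k . r_j): its inverse FFT peaks at r_j.
recovered = []
for j in range(N):
    t = np.real(np.fft.ifft2(VO[j]))
    p = np.unravel_index(np.argmax(t), t.shape)
    recovered.append(tuple((q + n // 2) % n - n // 2 for q in p))
print(recovered)  # recovers the three translations r_j
```

In practice, regularized inversion (e.g. Tikhonov) or averaging over frequencies would replace the bare pseudo-inverse to handle noise and the rank deficiency more robustly.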

(183) The present invention is not limited to these embodiments, and various variations and modifications may be made without departing from its scope.