Wavefront sensor and method of reconstructing distorted wavefronts
11874178 · 2024-01-16
Assignee
Inventors
CPC classification
G01J9/00
PHYSICS
International classification
G01J9/00
PHYSICS
Abstract
A wavefront sensor includes a mask and a sensor utilized to capture a diffraction pattern generated by light incident on the mask. A reference image is captured in response to a plane wavefront incident on the mask, and a measurement image is captured in response to a distorted wavefront incident on the mask. The distorted wavefront is reconstructed based on differences between the reference image and the measurement image.
Claims
1. A method of measuring a phase of a distorted wavefront, the method comprising: capturing a reference image based on a planar wavefront directed through a foreground mask to a sensor located behind the foreground mask; capturing a measurement image based on a distorted wavefront directed through the foreground mask to the sensor; forming a diffraction pattern image pair with the reference image and the measurement image, wherein a local slope of the distorted wavefront is encoded in the diffraction pattern image pair; and numerically solving a joint optimization problem, including comparing the reference image to the measurement image to decode the local slope of the distorted wavefront from the diffraction pattern image pair.
2. The method of claim 1, wherein the reference image is captured utilizing a collimated light source.
3. The method of claim 1, wherein the distorted wavefront is reconstructed utilizing an alternating direction method of multipliers (ADMM) methodology.
4. The method of claim 1, wherein the foreground mask is selected from the group comprising a binary mask, a uniform random binary mask, a gray scale mask, a diffuser, and a wavelength-dependent mask.
5. The method of claim 4, wherein the foreground mask is located a distance z in front of the sensor.
6. The method of claim 1, wherein the foreground mask is a uniform random pattern mask that modulates an incoming wavefront to create an encoded diffraction pattern.
7. The method of claim 1, wherein numerically solving the joint optimization problem includes a least squares method linearized around a scalar coordinate r.
8. A method of measuring a phase of a distorted wavefront, the method comprising: capturing a reference image I_0(r) based on a planar wavefront directed through a foreground mask to a sensor located behind the foreground mask; capturing a measurement image I(r) based on a distorted wavefront directed through the foreground mask to the sensor; and numerically solving a joint optimization problem using the reference image and the measurement image to obtain a local slope ∇φ(r) of the distorted wavefront, including linearizing a diffractive scalar field u_z(r) of the distorted wavefront around scalar coordinate r.
9. The method of claim 8, wherein the diffractive scalar field u_z(r) is represented as:
10. The method of claim 9, wherein linearizing the diffractive scalar field u_z(r) of the distorted wavefront around scalar coordinate r is represented as:
I(r) ≈ I_0(r) - z(∇I_0(r))^T ∇φ(r).
11. The method of claim 10, wherein the reference image I_0(r) and the measurement image I(r) form a diffraction pattern image pair [I_0(r), I(r)], and wherein the local slope ∇φ(r) of the distorted wavefront is encoded in the diffraction pattern image pair [I_0(r), I(r)].
12. The method of claim 11, wherein the foreground mask is a uniform random pattern mask that modulates an incoming wavefront to encode the diffraction pattern image pair [I_0(r), I(r)].
13. A method of measuring a phase of a distorted wavefront, the method comprising: capturing a reference image based on a planar wavefront directed through a foreground mask to a sensor located behind the foreground mask; capturing a measurement image based on a distorted wavefront directed through the foreground mask to the sensor; and numerically solving a joint optimization problem using the reference image and the measurement image to obtain a local slope of the distorted wavefront, wherein the foreground mask includes a uniform random pattern mask.
14. The method of claim 13, wherein the reference image and the measurement image form a diffraction pattern image pair, wherein the local slope of the distorted wavefront is encoded in the diffraction pattern image pair by the uniform random pattern mask.
15. The method of claim 13, wherein numerically solving the joint optimization problem includes a least squares method linearized around a scalar coordinate r.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION
(7) The present invention provides a wavefront sensor utilized to measure the phase of distorted wavefronts. In one embodiment, the wavefront sensor, described herein as a Coded Shack-Hartmann (CSH) wavefront sensor, utilizes a foreground mask that modulates the incoming wavefront at diffraction scale. The resulting wavefront information is encoded in the diffraction pattern, and is numerically decoded by a wavefront reconstruction algorithm that relies on a simple optimization method. In this way, the CSH wavefront sensor provides higher pixel utilization, full-sensor-resolution wavefront reconstruction, and efficient parallel computation and real-time performance as compared with traditional wavefront sensors such as the traditional Shack-Hartmann wavefront sensor.
(9) Mask 102 modulates the incoming wavefront, whose local slopes are encoded as local movements of the diffraction pattern, which is then captured by sensor 104. In the embodiment shown in
(10) In one embodiment, the accuracy of the CSH wavefront sensor depends, at least in part, on the distinctiveness of the diffraction pattern generated among neighboring regions, which in turn is determined in general by the mask 102 utilized, as well as the setup distance z. However, the requirement for distinctiveness over neighbors must be weighed against practical issues of light efficiency and ease of fabrication. In one embodiment, mask 102 utilizes a uniformly random pattern. However, in other embodiments the mask may be optimized to satisfy the distinctiveness requirements while maintaining the required light-efficiency and fabrication requirements. Also, in other embodiments the mask need not be binary; for example, a grayscale mask, or a colored mask capable of wavelength-variant amplitude modulation, may be used for specific applications.
(11) In one embodiment, the distance z depends on the specific requirements. If the distance z is too small, the diffraction of the mask is small, which weakens the distinctiveness between neighbors. This is best illustrated via a hypothetical in which the distance z is reduced to zero, in which case the captured image is always equal to the mask pattern itself. In addition, if the distance z is too small, then the warped displacement flow z∇φ(r) becomes too small to be sensed under the Horn-Schunck intensity-preserving assumption, and is more vulnerable to sensor pixel noise and quantization error. In contrast, a large z value makes the CSH wavefront sensor physically bulky and makes the unknown wavefront scalar field more diffractive, which weakens the basic assumption that the unknown wavefront phase is of low diffraction. Further, a large z value also enhances the diffraction of mask 102, possibly leading to a low-contrast diffraction pattern; this weakens the distinctiveness between neighbors as well, and consequently impairs the wavefront sensing ability of the CSH.
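As a rough numerical illustration of this tradeoff, the lateral shift of the diffraction pattern scales linearly with z. The sketch below uses made-up values for the local optical-path slope, the distance z, and the sensor pixel pitch; none of these numbers come from the described embodiments.

```python
# Illustrative sketch (hypothetical numbers): how the mask-to-sensor
# distance z scales the diffraction-pattern displacement z*grad(phi).
def displacement_pixels(z_m, slope, pixel_pitch_m):
    """Lateral pattern shift in sensor pixels for a local optical-path
    slope `slope` (dimensionless) at mask-sensor distance z_m (meters)."""
    return z_m * slope / pixel_pitch_m

# a slope of 1e-3 with a 3-micron pixel pitch, for three candidate z values
for z_mm in (0.5, 1.5, 5.0):
    shift = displacement_pixels(z_mm * 1e-3, 1e-3, 3e-6)
    print(f"z = {z_mm} mm -> shift = {shift:.2f} px")
```

A sub-pixel shift (small z) is hard to detect above sensor noise, while a multi-pixel shift (large z) strains the linearization, matching the tradeoff described above.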
(14) The following derivation is provided to illustrate the feasibility of the above-identified CSH wavefront sensor. Consider a general scalar field u_0(r) (coordinate r = [x y]^T) with wave number k = 2π/λ at the mask position:
u_0(r) = p_0(r)ψ_0(r),  (1)
where p_0(r) is the mask amplitude function (with sharp boundaries), and ψ_0(r) = exp[jφ(r)] is the scalar field with the distorted wavefront φ(r) representing the value to be measured. To simplify, the following assumptions are relied upon: the mask p_0(r) is of high frequency (and uniformly random binary), so that its Fourier transform P_0(ω) (where ω is the Fourier dual of r) is broadband; the distorted scalar field ψ_0(r) is smooth enough that its spectrum is bandlimited and decays sufficiently to zero in high-frequency regions; and the wavefront φ(r) is of low frequency and is first-order differentiable.
(15) Assuming the mask 102 is placed close to sensor 104 (shown in
(16) u_z(r) = h_z(r) ∗ [p_0(r)ψ_0(r)],  (2)
where h_z(r) is the Rayleigh-Sommerfeld propagation kernel for distance z and ∗ denotes convolution.
Equation (2) may be further simplified by ignoring the diffraction of the scalar field ψ_0 over the distance z. This is appropriate for the following reasons: (i) the distance z is small, so the diffraction effect is not pronounced; and (ii) the distorted scalar field ψ_0 is smooth and its spectrum decays sufficiently fast, so the corresponding Laplacian matrix exponential expansion terms are very small and negligible as well. Consequently, the following approximation can be relied upon:
(17)
Subsequently, the scalar field ψ_0 may be approximated by imposing a Taylor expansion and linearizing φ around r, which yields:
(18) φ(r′) ≈ φ(r) + (∇φ(r))^T (r′ - r).  (4)
(19) With these approximations, Eq. (2) can be simplified as:
(20) I(r) = |u_z(r)|^2 ≈ |p_z(r - (z/k)∇φ(r))|^2,  (5)
where p_z(r - (z/k)∇φ(r)) is the propagated diffractive scalar field of a linearly distorted mask p_0(r - (z/k)∇φ(r)), predicted by the strict Rayleigh-Sommerfeld diffraction formula. It is worth noting that, as indicated by Equation (5), in dispersion-negligible materials φ(r) = kφ_o(r), and thus the resultant intensity is independent of the wave number k, indicating the possibility of using incoherent illumination for optical path measurements. In the following, no distinction is made between optical path and wavefront, and as a result the optical path φ_o(r) = (1/k)φ(r) is written simply as φ(r).
(21) The CSH wavefront sensor is based on the result provided by Eq. (5). The reference image is captured under collimated illumination (i.e., where u_0(r) = p_0(r)), such that equation (5) above reduces to:
I_0(r) = |p_z(r)|^2,  (6)
which represents the diffraction pattern of the binary mask p_0(r). The measurement image is captured under distorted illumination ψ_0(r), such that equation (5) above reduces to:
I(r) = |p_z(r - z∇φ(r))|^2 = I_0(r - z∇φ(r)).  (7)
(22) In this way, the local slope ∇φ(r) is encoded in the diffraction pattern image pair I_0(r) and I(r), which can be decoded in a number of different ways. For example, in one embodiment decoding involves utilizing a simple least squares method linearized around r (as described below). In another embodiment, decoding involves an iterative nonlinear least squares update using warping theory. After obtaining the wavefront slope ∇φ(r), the unknown wavefront φ(r) can then be reconstructed by integrating over the previously obtained slopes.
(23) In some embodiments, however, the distance z is unknown and the displacements in the diffraction image pair depend on the physical scale of r. Under these conditions the reconstructed wavefront φ(r) can only be determined up to a scaling constant. In this embodiment, to correctly retrieve the wavefront φ(r), an additional constant needs to be computed. One method of computing this scaling constant is to determine it empirically, in a least squares sense, over all the wavefront samples against the ground truth wavefront (described in more detail below). In this sense, the CSH wavefront sensor is regarded as a first-order sensing method.
(24) To numerically decode the hidden phase information of the CSH wavefront sensor, equation (7) is further linearized around r, as follows:
I(r) ≈ I_0(r) - z(∇I_0(r))^T ∇φ(r),  (8)
which represents the final image formation model that contains the wanted phase information φ(r). In this embodiment, the phase information φ(r) is solved for directly. A benefit of this approach is that it takes advantage of the curl-free property of the flow, and allows a joint optimization problem to be solved simultaneously.
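The linearized model above is, in effect, a brightness-constancy equation of the Horn-Schunck/Lucas-Kanade type. The following minimal sketch decodes a constant flow v = z∇φ from a synthetic image pair via that linearization; the sinusoidal pattern, grid size, and global (rather than regularized) least squares are illustrative choices, not the actual solver described here.

```python
import numpy as np

# Toy decoding of Eq. (8): I(r) ≈ I0(r) - (∇I0)^T v, with v = z∇φ.
# Pattern, flow, and solver choices are illustrative stand-ins.
x, y = np.meshgrid(np.arange(64, dtype=float),
                   np.arange(64, dtype=float), indexing="ij")
v_true = np.array([0.4, -0.2])          # constant flow in pixels (ground truth)

I0 = np.sin(0.2 * x) + np.cos(0.3 * y)  # smooth "reference" pattern
I = np.sin(0.2 * (x - v_true[0])) + np.cos(0.3 * (y - v_true[1]))  # warped copy

gx, gy = np.gradient(0.5 * (I0 + I))    # average spatial gradients
gt = I - I0                             # temporal gradient

# Global least squares: minimize ||[gx gy] v + gt||^2 over the interior.
c = (slice(2, -2), slice(2, -2))
A = np.stack([gx[c].ravel(), gy[c].ravel()], axis=1)
v_est, *_ = np.linalg.lstsq(A, -gt[c].ravel(), rcond=None)
print(v_est)  # close to v_true
```

With sub-pixel flow and a smooth pattern, the first-order model recovers the shift to within a few percent, which is the regime the small-z linearization assumes.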
(25) In one embodiment, the decoding operation formulates the wavefront retrieval problem in discrete form. In this embodiment, let the total sensor pixel number be M, and the unknown wavefront φ ∈ ℝ^N, where N > M includes unobserved boundary values. By defining image gradient fields g_x ∈ ℝ^M and g_y ∈ ℝ^M as the average x and y gradients of the reference image and measured image, and g_t ∈ ℝ^M as the temporal gradient between the two, a concatenated matrix G = [diag(g_x) diag(g_y)] ∈ ℝ^(M×2M) is obtained, where diag(·) denotes the diagonal matrix formed by the corresponding vector. The gradient operator is defined as ∇ = [∇_x^T ∇_y^T]^T: ℝ^N → ℝ^(2N), where ∇_x and ∇_y are discrete gradient operators along the x and y directions, respectively. With a binary diagonal matrix M_0 ∈ {0,1}^(2M×2N) denoting the measurement positions, the problem is defined as:
(26) minimize over φ: ||G M_0 ∇φ + g_t||_2^2 + α||∇φ||_2^2,  (9)
where the classical Horn-Schunck intensity consistency term is imposed, with a regularization term on the phase smoothness. The parameter α > 0 is a tradeoff parameter between the two terms. Exploiting a proper splitting strategy, Eq. (9) can be efficiently solved using the Alternating Direction Method of Multipliers (ADMM).
(27) In one embodiment, a slack variable w ∈ ℝ^(2N) is introduced to allow the original objective function shown in Eq. (9) to be split into two parts, represented as:
(28) minimize over φ and w: f(φ) + g(w), subject to w = ∇φ,  (10)
where the two functions f(·) and g(·) are defined as:
f: ℝ^N → ℝ, f(φ) = α||∇φ||_2^2;
g: ℝ^(2N) → ℝ, g(w) = ||G M_0 w + g_t||_2^2, with w = ∇φ.
(29) Using the Alternating Direction Method of Multipliers (ADMM) yields Algorithm 1, where λ ∈ ℝ^(2N) is the dual variable. The updates of φ and w may be computed using the ADMM as follows:
(30) Algorithm 1. ADMM for solving Eq. (10)
(31) 1: Initialize w^0 and λ^0; set ρ > 0 and iteration number K
2: for k = 0, 1, . . . , K-1 do
3: φ^(k+1) ← argmin over φ of f(φ) + (ρ/2)||∇φ - w^k + λ^k||_2^2 (φ-update)
4: w^(k+1) ← prox_(g/ρ)(∇φ^(k+1) + λ^k) (w-update)
5: λ^(k+1) ← λ^k + ∇φ^(k+1) - w^(k+1) (dual update)
(32) 6: end for
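The loop structure of this scaled ADMM can be sketched on a one-dimensional toy analogue of Eq. (10). Here D stands in for the gradient operator ∇, the diagonal matrix of synthetic gradients g stands in for G M_0, and the values of α, ρ, and K are arbitrary illustration choices, not the actual operators or parameters of the sensor.

```python
import numpy as np

# Sketch of the ADMM loop on a 1-D toy version of Eq. (10):
#   minimize  alpha*||D phi||^2 + ||g*(D phi) + gt||^2,  split as w = D phi.
rng = np.random.default_rng(0)
N = 32
D = np.diff(np.eye(N), axis=0)          # forward-difference operator, (N-1) x N
g = rng.standard_normal(N - 1)          # diagonal "image gradients"
gt = rng.standard_normal(N - 1)         # "temporal gradient"
alpha, rho, K = 0.1, 1.0, 100

phi = np.zeros(N)
w = np.zeros(N - 1)                     # slack variable (w ~ D phi)
lam = np.zeros(N - 1)                   # scaled dual variable

for _ in range(K):
    # phi-update: solve (2*alpha + rho) D^T D phi = rho D^T (w - lam)
    phi, *_ = np.linalg.lstsq((2 * alpha + rho) * D.T @ D,
                              rho * D.T @ (w - lam), rcond=None)
    # w-update: closed-form proximal step, elementwise since g is diagonal
    u = D @ phi + lam
    w = (rho * u - 2 * g * gt) / (2 * g ** 2 + rho)
    # dual update
    lam = lam + D @ phi - w

s = D @ phi
print(np.max(np.abs(s + g * gt / (alpha + g ** 2))))  # ~0: matches direct solution
```

Because the toy objective depends only on s = Dφ, the optimum is known in closed form (s = -g·gt/(α + g²)), which makes the convergence of the three-step loop easy to check.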
(i) The φ-Update Step
(33) In one embodiment, the φ-update step (wavefront estimation) involves solving a Poisson equation, which usually requires proper boundary conditions in conventional approaches. However, in one embodiment, because of the existence of M_0, the unknown boundary values (i.e., the non-zero elements of (I - M_0^T M_0), where I is the identity matrix) are implicitly determined by minimizing the objective. When trivial boundary conditions are assumed, the resultant Poisson equation solution leads to non-trivial boundary values on the observed part of the wavefront φ, which allows for the estimation M_0 φ. In some embodiments, the Neumann boundary condition suffices to yield a good estimation constrained to the smallest possible N. In these embodiments, by assuming the Neumann boundary condition on the linear operators, and denoting F_DCT and F_DCT^(-1) as the forward and inverse Discrete Cosine Transforms (DCT) respectively, the φ-update is given by:
(34) φ^(k+1) = F_DCT^(-1)[ ρ F_DCT(∇^T(w^k - λ^k)) / ((2α + ρ) Λ) ],  (11)
where Λ contains the DCT-domain eigenvalues of the Laplacian ∇^T∇ and the division is element-wise. Note that the forward/inverse DCT can be efficiently implemented via forward/inverse Fast Fourier Transforms (FFT), respectively.
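A DCT-based Poisson solve of this kind can be sketched as follows. The grid size and test wavefront are illustrative; the property being used is that the 5-point Laplacian with replicated (Neumann) boundaries is diagonalized by the type-II DCT, with 1-D eigenvalues 4 sin²(πk/(2N)).

```python
import numpy as np
from scipy.fft import dctn, idctn

def neumann_laplacian(phi):
    """5-point Laplacian with replicated (Neumann) boundaries."""
    p = np.pad(phi, 1, mode="edge")
    return (4 * phi - p[:-2, 1:-1] - p[2:, 1:-1]
            - p[1:-1, :-2] - p[1:-1, 2:])

def poisson_dct(b):
    """Solve neumann_laplacian(phi) = b for a zero-mean phi via DCT-II."""
    n, m = b.shape
    lam = (4 * np.sin(np.pi * np.arange(n) / (2 * n))[:, None] ** 2
           + 4 * np.sin(np.pi * np.arange(m) / (2 * m))[None, :] ** 2)
    bh = dctn(b, norm="ortho")
    lam[0, 0] = 1.0                     # zero mode: fix the free constant
    bh[0, 0] = 0.0
    return idctn(bh / lam, norm="ortho")

# round-trip check on a smooth zero-mean test wavefront (illustrative)
x, y = np.meshgrid(np.arange(32), np.arange(32), indexing="ij")
phi_true = np.cos(np.pi * x / 32) * np.cos(2 * np.pi * y / 32)
phi_true = phi_true - phi_true.mean()
phi_rec = poisson_dct(neumann_laplacian(phi_true))
print(np.max(np.abs(phi_rec - phi_true)))  # near machine precision
```

Since both transforms are O(n log n), this is the step that gives the reconstruction its FFT-level efficiency.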
(ii) The w-Update Step
(35) In one embodiment, the w-update step (slack variable) involves the evaluation of prox_(g/ρ)(u), the proximal operator of g(w) with parameter ρ, which is defined and computed as:
(36) prox_(g/ρ)(u) = argmin over w of g(w) + (ρ/2)||w - u||_2^2 = (I + (2/ρ)(G M_0)^T(G M_0))^(-1) (u - (2/ρ)(G M_0)^T g_t),
wherein I + (2/ρ)(G M_0)^T(G M_0) is diagonal in blocks, and its inverse is also diagonal in blocks and in closed form, so prox_(g/ρ)(u) is as well. In this embodiment, the computation of prox_(g/ρ)(u) is element-wise. In one embodiment, to suppress noise, median filtering is imposed on the final gradient estimation. In one embodiment, the estimated wavefront φ_estimate is computed by solving Eq. (11) with the median-filtered (w^K + λ^K). In this embodiment, M_0 φ_estimate is used as the final output wavefront.
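The element-wise closed form can be sketched per pixel as follows, here with every position observed (i.e., ignoring M_0) and with synthetic gradients; all names and values are illustrative.

```python
import numpy as np

# Per-pixel proximal step (sketch): with G = [diag(gx) diag(gy)],
# rho*I + 2*G^T G is a 2x2 block per pixel, invertible in closed form.
def prox_g(ux, uy, gx, gy, gt, rho):
    """argmin_w ||gx*wx + gy*wy + gt||^2 + (rho/2)*||w - u||^2, per pixel."""
    bx = rho * ux - 2 * gx * gt         # right-hand side, x component
    by = rho * uy - 2 * gy * gt
    a11 = rho + 2 * gx ** 2             # 2x2 block entries
    a22 = rho + 2 * gy ** 2
    a12 = 2 * gx * gy
    det = a11 * a22 - a12 ** 2          # always > 0 for rho > 0
    wx = (a22 * bx - a12 * by) / det    # Cramer's rule, elementwise
    wy = (a11 * by - a12 * bx) / det
    return wx, wy

# quick check against the optimality condition on random data
rng = np.random.default_rng(1)
gx, gy, gt, ux, uy = rng.standard_normal((5, 100))
wx, wy = prox_g(ux, uy, gx, gy, gt, rho=2.0)
res = gx * wx + gy * wy + gt
# stationarity: 2*gx*res + rho*(wx - ux) = 0 (and likewise for y)
print(np.max(np.abs(2 * gx * res + 2.0 * (wx - ux))))  # ~0
```

Because every pixel is an independent 2×2 solve, this step vectorizes and parallelizes trivially, consistent with the efficiency claims below.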
(37) One of the benefits of the described decoding process is the ability to execute it in a highly efficient and parallelizable manner. For example, the update steps described above exploit Fast Fourier Transform (FFT) operations, element-wise operations, or simple convolutions. In addition, in one embodiment processing system 106 (shown in
(38) Several of the parameters described above may be modified, with each modification affecting the convergence speed of the decoding algorithm. For example, in one embodiment the proximal parameter ρ determines, at least in part, the convergence speed of the ADMM algorithm shown in
(41) At step 406, a thin layer 408 is deposited onto a side of wafer 410. In the embodiment shown in
(42) At step 412, a uniform photoresist layer 414 is deposited on top of the thin layer 408. In one embodiment, a spin-coating method is utilized to deposit the photoresist layer onto thin layer 408.
(43) At step 416, wafer 410 is aligned with photomask 404 on a contact aligner and the photoresist layer 414 is exposed to UV light. Depending on the type of photoresist layer utilized (positive or negative), either the areas exposed to UV light become soluble to the photoresist developer or the areas exposed to the UV light become insoluble to the photoresist developer.
(44) At step 418, a development solution is utilized to either remove areas exposed to UV light at step 416 or remove areas unexposed to UV light at step 416.
(45) At step 420, an etching process is utilized to etch portions of thin layer 408. After the exposed portions of thin layer 408 are etched, the remaining photoresist is removed, leaving the final binary mask formed on fused silica wafer 410.
(46) The accuracy of the proposed Coded Shack-Hartmann wavefront sensor is evaluated based on experimental results. The experimental setup relies on the generation of a plurality of different target wavefronts. In this embodiment, target wavefronts are generated using a reflective LCoS-based Spatial Light Modulator (SLM) (e.g., a PLUTO SLM by HOLOEYE) having a pixel pitch of approximately 8 μm and a maximum phase retardation of 2π. For collimated illumination, a faraway white point light source is employed, along with a polarizer placed in the optical path to allow for phase-only modulation of the SLM. A beamsplitter is employed, along with a Kepler telescope configuration utilized to keep the SLM and wavefront sensor in conjugate planes, to ensure that the distorted wavefront measured by the CSH wavefront sensor is the one produced by the SLM. In this experiment, the Kepler telescope is composed of two lenses, with focal lengths of 100 mm and 75 mm, respectively. The sensor exposure time is set to 5 ms in both the calibration and measurement steps.
(47) Using the above-mentioned optical setup, four types of wavefront samples were generated and acquired, each with a series of different scales: (i) cubic phases, (ii) spherical phases, (iii) single-mode Zernike phases, and (iv) customized Zernike phases. Each of these wavefronts is analyzed to determine how parameters may be tuned to measure and detect the desired wavefront, as well as the tradeoff expected between execution time and accuracy of the detected wavefront.
(48) Tuning of the Proximal Parameter ρ
(50) Tradeoff Between Time and Accuracy
(51) In addition to the proximal parameter ρ, there is also a tradeoff between time and accuracy in the decoding algorithm. For example, to suppress noise, median filtering of the estimated gradients is imposed at the last step. In practice, a median-filtered final gradient estimate can indeed yield a smoother reconstructed wavefront. In addition, lower energy, in this case achieved through a larger number of iterations and more computation time, does not necessarily yield better results. On the other hand, the original energy function may not converge sufficiently if given an inadequate number of iterations. Consequently, it is necessary to investigate the tradeoff between time and accuracy. In one embodiment, a value of K = 10 yields a good tradeoff.
(52) Scaling Constant
(53) As mentioned above, a scaling constant is also computed. In one embodiment, the scaling constant is determined by (i) performing mean normalization on the reconstructed wavefront; (ii) comparing with ground truth and determining a suitable scaling constant for each pixel position; (iii) taking the median as the scaling constant for each individual wavefront reconstruction; and (iv) obtaining the final scaling constant as the mean of the medians from step (iii). In one embodiment, the estimate for the scaling constant is set to 0.25.
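Steps (i)-(iv) can be sketched on synthetic data, where each reconstruction is a scaled copy of a ground-truth wavefront plus small noise; the scale 0.3, the noise level, and the sample counts are made-up illustration values (the estimate described above is 0.25).

```python
import numpy as np

# Sketch of the scaling-constant estimation on synthetic data.
rng = np.random.default_rng(2)
true_scale = 0.3
samples = [rng.standard_normal((16, 16)) for _ in range(5)]  # ground truths
recons = [true_scale * s + 0.001 * rng.standard_normal(s.shape)
          for s in samples]                                  # "reconstructions"

medians = []
for gt_wf, rec in zip(samples, recons):
    rec0 = rec - rec.mean()                 # (i) mean normalization
    gt0 = gt_wf - gt_wf.mean()
    ratio = rec0 / gt0                      # (ii) per-pixel scaling constants
    medians.append(np.median(ratio))        # (iii) median per wavefront
scale = np.mean(medians)                    # (iv) mean of the medians
print(scale)  # close to true_scale
```

The per-pixel ratios are wild wherever the ground truth is near zero, which is why the per-wavefront median (rather than a mean) is the robust choice in step (iii).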
(55) In addition to good error performance, the CSH wavefront sensor provides efficient run-time operation. In one embodiment, the run-time of the wavefront reconstruction algorithm is reduced mainly by reducing the required number of iterations. For example, in one embodiment the first iteration or update of the wavefront may be skipped when the slack variable w^0 and the dual variable λ^0 are set to zero vectors.
(56) The present invention provides a wavefront sensor utilized to measure the phase of distorted wavefronts. In one embodiment, the wavefront sensor, described herein as a Coded Shack-Hartmann wavefront sensor, utilizes a foreground mask to modulate the incoming wavefront at diffraction scale. The resulting wavefront information is encoded in the diffraction pattern, and is numerically decoded by a wavefront reconstruction algorithm that relies on a simple optimization method. In this way, the Coded Shack-Hartmann wavefront sensor provides higher pixel utilization, full-sensor-resolution wavefront reconstruction, and efficient parallel computation and real-time performance as compared with traditional wavefront sensors such as the traditional Shack-Hartmann wavefront sensor.
(57) While the invention has been described with reference to an exemplary embodiment(s), it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted for elements thereof without departing from the scope of the invention. In addition, many modifications may be made to adapt a particular situation or material to the teachings of the invention without departing from the essential scope thereof. Therefore, it is intended that the invention not be limited to the particular embodiment(s) disclosed, but that the invention will include all embodiments falling within the scope of the appended claims.