Rapid image correction method for a simplified adaptive optical system

10628927 · 2020-04-21

Abstract

A computer-aided image correction method for images with at least spatially partially coherent light, requiring only a phase-modulated image as input. It does not reconstruct the phases on the image sensor, but rather assumes that they should ideally all be the same. With this assumption, the invention formulates a constraint and an update rule. The result of the iteration is an amplitude distribution of the lightwave field, which could have been measured directly if aberrations in the form of wavefront distortions had not contributed to the actual image acquisition. Further disclosed is a device that, using the image correction method, can serve as an adaptive optic for imaging instruments.

Claims

1. A computer-assisted image correction method for images with predetermined optical aberrations, wherein at least spatially partially coherent light is guided onto an electronic, two-dimensional image sensor with light-sensitive pixels, comprising the steps: a) identifying the predetermined optical aberrations with the predetermined deformation of the wavefront of the incident lightwave field with respect to an even wave and providing a phase modulation representing the deformation as a function of the pixel coordinates of the image sensor; b) capturing an image on the image sensor; characterized by the following steps: c) calculating a first phase-modulated lightwave field from the image by calculating the amplitude from the detected intensities and initializing the phase components with φ₀, where the phase φ₀ has random or uniform initial values; d) calculating a corrected lightwave field by Fourier transforming the first phase-modulated lightwave field, removing the phase modulation and inverse Fourier transformation; e) assuming an even wavefront for the corrected lightwave field by equalizing the phase on all pixel coordinates; f) calculating a second phase-modulated lightwave field by Fourier transforming the corrected lightwave field, adding the phase modulation and inverse Fourier transformation; g) replacing the amplitudes of the second phase-modulated lightwave field with the amplitudes of the first phase-modulated lightwave field while maintaining the calculated phases of the second phase-modulated lightwave field at all pixel coordinates; h) accepting the result of step g as the first phase-modulated lightwave field and iteratively repeating the steps d to h until a convergence criterion is met; i) calculating a corrected image from the corrected lightwave field when the convergence criterion is fulfilled, by calculating the intensity from the amplitude by squaring.

2. The method according to claim 1, wherein the predetermining of the optical aberrations occurs by measuring the wavefront distortion of the incident lightwave field in a predetermined time frame of capturing the image.

3. The method according to claim 1, wherein during the iteration for the corrected lightwave field calculated in step d, the minimization of the pixel-by-pixel variance of the phases is used as the convergence criterion.

4. The method according to claim 1, wherein during the iteration for the second phase-modulated lightwave field calculated in step f, the minimization of the pixel-by-pixel difference of the amplitudes from the first phase-modulated lightwave field is used as the convergence criterion.

5. The method according to claim 1, wherein the predetermining of optical aberrations and the providing of the phase modulation are repeated at regular intervals.

6. The method according to claim 1, wherein a plurality of images are detected in temporal succession on the image sensor, and wherein iteration continues between two detection times until convergence.

7. A method according to claim 5, wherein a temporal sequence of images is iteratively corrected, and wherein the predetermining of the optical aberrations for the correction of individual images keeps pace in time with the sequence.

8. A device for use as an adaptive optic, comprising at least a wavefront sensor and a computing unit which is adapted to accept and process the data of an electronic image sensor and the measurement data of a wavefront sensor and, using these, to carry out the image correction method according to claim 1, wherein the measurement data are detected within a predetermined time interval of each other and the data acquisition is repeated at regular intervals.

9. A machine-readable storage medium comprising read-only, computer-executable instructions for performing the steps of the method of claim 1.

Description

(1) The image correction method according to the invention is surprisingly simple and will also be explained with reference to the following figures, which show:

(2) FIG. 1 a sketch of an adaptive optics according to the state of the art;

(3) FIG. 2 real images of a microscope test object: a) focused, b) defocused and c) numerically reconstructed;

(4) FIG. 3 a synthetic image distortion, where a) is the original and b) is the distorted image and c) is the wavefront deformation used for the distortion;

(5) FIG. 4 the numerical reconstruction of the picture from FIG. 3b) after a) 10 iterations and b) 500 iterations;

(6) FIG. 5 a sketch of a simplified adaptive optics, wherein the computing unit uses the image correction method.

(7) The distortion of an incoming, inherently planar wavefront of the lightwave field is nothing more than the total of the path length differences between the different local areas of the wavefront. Consequently, the phase of the light wave is modulated before focusing along a coordinate plane perpendicular to the optical axis. The location-dependent, phase-modulated wave function is then focused onto the image sensor, so that in the plane of the image sensor a lightwave field is present and detected which comprises a mixture of differently phase-modulated, and thus partially misaligned, light components and their interference on the pixels of the image sensor.

(8) The phase modulation before focusing, P(x, y) = exp(ikΔ(x, y)), can be derived from the path length differences Δ(x, y) of the wavefront distortion, with k = 2π/λ and λ the wavelength of the monochromatic light. A common wavefront sensor typically outputs Δ in units of wavelength and is often capable of interpolating to the pixel coordinates of the image sensor.
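As a plain illustration of this relation, the phase modulation can be computed from a wavefront map with NumPy. This is only a sketch: the function name and the example wavefront are hypothetical, and it simply assumes Δ is supplied in the same length unit as the wavelength.

```python
import numpy as np

def phase_modulation(delta, wavelength):
    """Phase modulation P(x, y) = exp(i*k*delta(x, y)) from path length
    differences delta, with k = 2*pi/wavelength (same length unit)."""
    k = 2.0 * np.pi / wavelength
    return np.exp(1j * k * delta)

# A wavefront sensor reporting delta in units of wavelength: scale first.
delta_in_waves = np.full((4, 4), 0.25)   # quarter-wave delay everywhere
wavelength = 633e-9                      # metres, illustrative value
P = phase_modulation(delta_in_waves * wavelength, wavelength)
# A quarter-wave delay corresponds to a phase of pi/2, so P = i at each pixel.
```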

(9) Taking advantage of the known fact that focusing corresponds to a Fourier transformation of the lightwave field, the phase-modulated lightwave field E_M can be described in terms of an undisturbed lightwave field E_0, which has no wavefront distortions, as follows:
E_M = F(E_0 · P)   (1)

(10) This means the wave function E_0(x, y) is first multiplied pixel by pixel by the location-dependent phase modulation P(x, y), i.e. both represented in pixel coordinates of the image sensor, even though E_0 is not present on the image sensor, and the Fourier transform of the product is taken thereafter.

(11) Preferably, a complex Fast Fourier Transform (FFT) algorithm in two dimensions may be used for this, and F( . . . ) and F⁻¹( . . . ) are briefly noted here as operators for the Fourier transformation and the inverse Fourier transformation, respectively. Executing both operators in succession corresponds to the identity, if necessary up to a constant factor.
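Assuming the normalized FFT pair of a numerics library is used (here NumPy, purely as an example), the round trip F⁻¹(F(E)) recovers the field exactly up to floating-point error:

```python
import numpy as np

F = np.fft.fft2        # operator F( ... )
F_inv = np.fft.ifft2   # operator F^-1( ... )

# An arbitrary complex test field on an 8x8 pixel grid.
rng = np.random.default_rng(0)
E = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))

# Executing both operators in succession reproduces the field.
E_roundtrip = F_inv(F(E))
```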

(12) An image correction can be expected if, instead of E_M, a lightwave field E_K had been present on the image sensor, as described by
E_K = F(E_0)   (2)
But because the complex lightwave field E_M cannot be measured, an approximation of E_K can only be obtained numerically and iteratively. From (1) and (2) follow
E_K = F(P⁻¹ · F⁻¹(E_M))   (3)
E_M = F(P · F⁻¹(E_K))   (4)

(13) with P⁻¹(x, y) = exp(−ikΔ(x, y)) as the conjugate phase modulation.

(14) Therefore, according to the invention, (3) and (4) are used to set up a numerical iteration loop. Here, (3) describes the double Fourier transformation of the phase-modulated lightwave field E_M with the phase modulation removed. Conversely, (4) describes the double Fourier transformation of the lightwave field E_K with the phase modulation added.
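Equations (3) and (4) can be sketched as a pair of helper functions (names hypothetical); since |P| = 1, the two maps are exact inverses of each other:

```python
import numpy as np

def remove_modulation(E_M, P):
    """Equation (3): E_K = F(P^-1 * F^-1(E_M))."""
    return np.fft.fft2(np.conj(P) * np.fft.ifft2(E_M))

def add_modulation(E_K, P):
    """Equation (4): E_M = F(P * F^-1(E_K))."""
    return np.fft.fft2(P * np.fft.ifft2(E_K))

# Check on random data: removing after adding (or vice versa) is the identity.
rng = np.random.default_rng(1)
E = rng.standard_normal((8, 8)) + 1j * rng.standard_normal((8, 8))
P = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, (8, 8)))
```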

(15) First, the intensity distribution I(x, y) is detected on the image sensor as an image, and from this a first phase-modulated lightwave field is calculated as
E_M^(1)(x, y) = √(I(x, y)) · e^(iφ₀(x, y))   (5)

(16) on the pixel coordinates x, y with arbitrary, i.e., for example, random or uniform initial values of the phase φ₀(x, y). With n as a natural number, a corrected lightwave field, i.e. one freed from the predetermined phase modulation, is then approximately determined as
E_K^(n) = F(P⁻¹ · F⁻¹(E_M^(n)))   (6)

(17) which should satisfy the additional condition that the same phase exists on all pixels of the image sensor. Therein

(18) E_K^(n)(x, y) = √(|E_K^(n)(x, y)|²) · e^(iφ^(n)(x, y))   (7)

(19) and the phases φ^(n)(x, y) are all set to an arbitrary constant value, for example and preferably zero. Expressed in other words, the pointers of the complex values of E_K^(n) are rotated onto a predetermined axis, preferably the real axis. The preferred update is thus
E_K^(n+1)(x, y) = √(|E_K^(n)(x, y)|²)   (8)

(20) The distribution thus obtained contains only positive real values. Practical testing of the algorithm has shown that determining an exclusively positive amplitude distribution does not visibly affect the result and actually leads to faster convergence.

(21) Furthermore, this results in a second phase-modulated lightwave field
E_M^(n+1) = F(P · F⁻¹(E_K^(n+1)))   (9)

(22) calculated according to (4). This in turn must yield the captured image as its intensity distribution, so the following should hold:
I(x, y) = |E_M^(n+1)(x, y)|² = |E_M^(1)(x, y)|²   (10)

(23) Therefore, the second phase-modulated lightwave field

(24) E_M^(n+1)(x, y) = √(|E_M^(n+1)(x, y)|²) · e^(iφ^(n+1)(x, y))   (11)

(25) is replaced in a further update step by

(26) E_M^(n+2)(x, y) = √(|E_M^(1)(x, y)|²) · e^(iφ^(n+1)(x, y))   (12)

(27) while maintaining the phases φ^(n+1)(x, y) which resulted from the calculation of (9). With the result of (12), the next iteration loop can be started at (6), and the iteration continues until convergence.

(28) Convergence is achieved when the phases φ^(n)(x, y) resulting from (7) show a predetermined minimization of the pixel-by-pixel variance, i.e. when they are already substantially a constant independent of (x, y), for example zero.

(29) Alternatively, convergence is achieved when the pixel-by-pixel difference in the amplitudes of the second and first phase modulated lightwave fields satisfies a predetermined minimization criterion, for example
Σ_(x,y) |√(|E_M^(n+1)(x, y)|²) − √(|E_M^(n)(x, y)|²)| < ε   (13)

(30) with a predetermined real numerical value ε > 0. The summation runs over all pixels. Here the output E_M^(n+1) from (9) is to be compared with the input E_M^(n) in (6).

(31) Both convergence criteria are suitable for achieving the same result of the iteration. One can also require that both must be fulfilled simultaneously.
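Both criteria can be sketched as follows; the function names and the naive use of the raw phase angle (no wrap-around handling) are illustrative choices, not taken from the source:

```python
import numpy as np

def phase_variance(E_K):
    """Pixel-by-pixel variance of the phases of E_K, cf. (7):
    near zero when the phases are substantially constant."""
    return np.var(np.angle(E_K))

def amplitude_deviation(E_M_new, E_M_old):
    """Left-hand side of (13): summed absolute difference of the
    amplitudes over all pixels."""
    return np.sum(np.abs(np.abs(E_M_new) - np.abs(E_M_old)))

# A field with constant phase and unchanged amplitudes satisfies both criteria.
E = 3.0 * np.exp(1j * 0.7) * np.ones((4, 4))
```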

(32) When convergence is reached, the last calculated corrected lightwave field E.sub.K is the actual result of the image correction process. Its absolute square as intensity distribution over the pixel coordinates represents an improved, in particular sharper image than the image originally captured with the image sensor. The aberrations associated with wavefront distortions are then largely eliminated.

(33) The computer-aided image correction method according to the invention for images with predetermined aberrations, wherein at least spatially partially coherent light is guided onto an electronic, two-dimensional image sensor with light-sensitive pixels, comprises the following steps:

(34) a. identifying the predetermined optical aberrations with the predetermined deformation of the wavefront of the incident lightwave field with respect to an even wave and providing a phase modulation representing the deformation as a function of the pixel coordinates of the image sensor;

(35) b. capturing an image on the image sensor;

(36) c. calculating a first phase-modulated lightwave field from the image by calculating the amplitude from the detected intensities and initializing the phase components with φ₀, wherein the phase φ₀ has random or uniform initial values;

(37) d. calculating a corrected lightwave field by Fourier transforming the first phase modulated lightwave field, removing the phase modulation and inverse Fourier transformation;

(38) e. assuming an even wavefront for the corrected lightwave field by equalizing the phase on all pixel co-ordinates;

(39) f. calculating a second phase modulated lightwave field by Fourier transforming the corrected lightwave field, adding the phase modulation and inverse Fourier transformation;

(40) g. replacing the amplitudes of the second phase modulated lightwave field with the amplitudes of the first phase modulated lightwave field while maintaining the calculated phases of the second phase modulated lightwave field at all pixel coordinates;

(41) h. accepting the result of step g as the first phase modulated lightwave field and iteratively repeating the steps d to h until a convergence criterion is met;

(42) i. calculating a corrected image from the corrected lightwave field when the convergence criterion is fulfilled, by calculating the intensity from the amplitude by squaring.
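The steps c to i above can be condensed into a short NumPy sketch. This is a minimal illustration only: the fixed iteration count stands in for the convergence criterion, and all names are assumptions of this sketch.

```python
import numpy as np

def correct_image(I, P, n_iter=500, rng=None):
    """Iteratively correct the captured intensities I using the known
    phase modulation P, both sampled on the pixel grid (steps c to i)."""
    if rng is None:
        rng = np.random.default_rng()
    # Step c: amplitudes from intensities, random initial phases phi_0.
    E_M = np.sqrt(I) * np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, I.shape))
    measured_amplitude = np.abs(E_M)
    for _ in range(n_iter):
        # Step d, eq. (6): remove the phase modulation.
        E_K = np.fft.fft2(np.conj(P) * np.fft.ifft2(E_M))
        # Step e, eq. (8): assume an even wavefront - keep only the amplitude.
        E_K = np.abs(E_K)
        # Step f, eq. (9): re-apply the phase modulation.
        E_M = np.fft.fft2(P * np.fft.ifft2(E_K))
        # Step g, eq. (12): measured amplitudes, calculated phases.
        E_M = measured_amplitude * np.exp(1j * np.angle(E_M))
    # Step i: corrected image as intensity (amplitude squared).
    return E_K ** 2
```

With P ≡ 1 (no aberration) the loop is stable and returns the input image unchanged, which is a quick sanity check of the update rules.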

(43) The method according to the invention reminds the person skilled in the art of the Gerchberg-Saxton algorithm (A Practical Algorithm for the Determination of Phase from Image and Diffraction Plane Pictures, Optik, Vol. 35, No. 2, (1972) pp. 237-246) for phase retrieval in a coherent-light image. However, such algorithms for phase recovery usually require a plurality of differently phase-modulated, possibly differently defocused, images, which are used within an iteration loop to fulfill constraints.

(44) The invention, on the other hand, requires only a phase-modulated image as input, and it does not reconstruct the phases on the image sensor, but rather assumes that they should ideally all be the same. With this assumption, the invention formulates a constraint (step e) and, by way of example, the update rule (8). The result of the iteration is an amplitude distribution of the lightwave field, which could have been measured directly if aberrations in the form of wavefront distortions had not contributed to the actual image acquisition.

(45) As one embodiment, the application of the image correction method to the digital imaging of a conventional microscopic test object is demonstrated. For this purpose, the flat test object is irradiated with light from a light source with a mean wavelength of 650 nm, and the transmitted light is imaged onto the image sensor of a digital camera mounted in place of the eyepiece. A series of images is recorded for different distances of the test object in increments of 50 micrometers. The sharpest image of the test object is shown in FIG. 2a), while FIG. 2b) shows a strongly defocused image which arises when the test object is shifted by 1950 micrometers along the optical axis.

(46) The effective aberration in FIG. 2b) is thus predetermined, quantified here as a defocus, and may be described by the formula

(47) P(x, y) = exp(ik(x² + y²)/(2r))   (14)

(48) Therein r is the radius of curvature of the wavefront, which can either be measured directly or calculated by means of an imaging simulation program by modeling the measurement setup, including the defocus.
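For illustration, the quadratic defocus phase from (14) can be sampled on the pixel grid as follows. The function name, the centering convention, and the pixel pitch, wavelength and radius of curvature are all made-up example values, not parameters from the source:

```python
import numpy as np

def defocus_modulation(shape, pixel_pitch, wavelength, r):
    """Equation (14): P(x, y) = exp(i*k*(x^2 + y^2) / (2*r)) with
    k = 2*pi/wavelength; x, y are metric pixel coordinates and r the
    radius of curvature of the wavefront (all lengths in metres)."""
    k = 2.0 * np.pi / wavelength
    ny, nx = shape
    # Integer pixel indices centered on the optical axis.
    y, x = np.mgrid[-(ny // 2):ny - ny // 2, -(nx // 2):nx - nx // 2]
    x = x * pixel_pitch
    y = y * pixel_pitch
    return np.exp(1j * k * (x ** 2 + y ** 2) / (2.0 * r))

# 256x256 sensor, 5 um pixels, 650 nm light, 5 cm radius of curvature.
P = defocus_modulation((256, 256), 5e-6, 650e-9, 0.05)
```

As a pure phase factor, P has unit magnitude everywhere and equals 1 on the optical axis.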

(49) With the predetermined phase modulation from (14) and the phase modulated image from FIG. 2b) as inputs, the image correction method according to the invention can be carried out and as a result delivers the image in FIG. 2c). It should be noted that it may be advantageous to first apply edge killing algorithms to the input image prior to image correction to reduce artifacts in the FFT operations.

(50) The image correction compared to the images in FIG. 2 validates the assumption that ordinary light emitting diodes generate light with sufficient spatial coherence for performing the image correction process presented herein.

(51) A second embodiment is intended to show that the image correction can also be applied to images which have been produced with more complicated aberrations than a defocus. For this purpose, an original image in FIG. 3a) is combined with a predetermined wavefront distortion into a simulated image. The result is the calculated image in FIG. 3b). The wavefront distortion used here is a combination of the aberrations defocus, astigmatism and coma, and is shown graphically in FIG. 3c) as path length differences Δ(x, y). With the phase modulation P(x, y) = exp(ikΔ(x, y)) and the image from FIG. 3b) as inputs, the image correction method according to the invention leads to the corrected image in FIG. 4a) after passing through only 10 iteration loops, and after about 500 iterations it achieves an almost perfect correction in FIG. 4b). A single iteration loop for a 512×512 pixel image on a 4 GHz PC takes about 14 milliseconds in the current implementation.

(52) The relatively rapid convergence to a significantly improved image, certainly sufficient for some purposes, with a computational requirement on the order of 1 second or less, suggests considering real-time image correction applications. In addition, in a real-time application it is not necessary to initialize the iteration loop anew with a random array for each image, which speeds up convergence tremendously.

(53) It is regarded as particularly advantageous that applications can be addressed in which the optical aberrations change over time, i.e. in particular those which must use a wavefront sensor.

(54) In a preliminary consideration, it is examined whether the data measurable with a commercially available Shack-Hartmann sensor provide sufficient information for the application of the method. Based on the data sheet of such a sensor, with 1913 lenslets, a specified measurement accuracy of λ/50 and a specified dynamic range of at least 50λ at the wavelength λ = 633 nanometers, different wavefront deformations are simulated as measured data sets with Gaussian noise and used for image correction. It turns out that a good restoration of the image can be achieved. Even with significantly reduced measurement accuracy, the results are still acceptable. The inventive method is thus also applicable with commercial wavefront sensors.

(55) Since a wavefront sensor allows the instantaneous measurement of the wavefront deformation, it is a preferred embodiment of the invention to repeat the predetermination of the optical aberrations and the provision of the phase modulation at regular time intervals.

(56) A further preferred embodiment of the invention is seen in that a plurality of images is detected in succession on the image sensor, wherein iterating continues between two detection times until convergence.

(57) As a combination of the aforementioned embodiments, it is advantageous that a temporal sequence of images is iteratively corrected, with the predetermining of the optical aberrations for the correction of individual images keeping pace in time with the sequence. For this, it is preferable to predetermine the optical aberrations by measuring the wavefront distortion of the incident lightwave field within a predetermined time frame of capturing the image. It is to be understood that the time frame is to be predetermined in such a way that a useful relationship between the time of image acquisition and the time of wavefront measurement can at least be assumed.

(58) A device for use as an adaptive optic is shown schematically in FIG. 5. The device comprises at least one wavefront sensor (4) and a computing unit (12) which is designed to accept and process the measurement data of an electronic image sensor (2) and the measurement data of the wavefront sensor (4) and to perform with them the image correction method presented here, wherein the measurement data are detected within a predetermined time interval of each other and the data acquisition is repeated at regular intervals. The apparatus may further include a beam splitter (6) which splits the incident light (10) and guides a respective partial beam to the image sensor (2) and to the wavefront sensor (4). The device may further comprise an imaging optic (3) and the image sensor (2).

(59) The image correction method according to the invention can be implemented as software in a conventional personal computer. In particular, it is also possible to provide a machine-readable storage medium which has read-only stored instructions executable by a computer unit for carrying out the image correction method presented here in one of its embodiments. All other mentioned components are also commercially available.

LIST OF REFERENCE NUMBERS

(60) 2 image sensor

(61) 4 wavefront sensor

(62) 6 beamsplitter

(63) 8 optical compensation element

(64) 10 light

(65) 12 computing unit