COMPUTATIONAL IMAGING WITH UNCALIBRATED PUPIL PHASE
20170287118 · 2017-10-05
CPC classification: G01J9/00 (PHYSICS)
Abstract
Systems and methods are disclosed for improving image quality by modifying received radiation wavefronts with one or more uncalibrated variable phase plates at the pupil plane of the optical system, to produce an atmospheric-like blurred image on the focal plane with an effective increase in the sampling parameter Q. Real-time image restoration algorithms may then be applied to data sets sampled from the blurred image formed on the detector array. Numerous phase plate embodiments are provided for modifying the wavefront.
Claims
1. A method of improving image quality of an optical imaging system including a pupil plane and a pixel array detector positioned at a focal plane, and having a focal plane image sampling parameter Q that is less than 1, comprising: receiving an optical radiation wavefront at an entrance aperture of the optical imaging system; modifying the optical radiation wavefront with at least one uncalibrated variable phase plate at the pupil plane so as to produce a blurred image on the focal plane with an effective increase in the sampling parameter Q; sampling the blurred image with the pixel array detector to produce a sampled data set; and applying a real-time image restoration algorithm to the sampled data set to produce an unblurred image.
2. The method of claim 1, wherein modifying the optical radiation wavefront comprises electronically varying at least one variable phase plate.
3. The method of claim 1, wherein modifying the optical radiation wavefront comprises mechanically actuating the at least one variable phase plate.
4. The method of claim 1, wherein the real-time restoration algorithm comprises a bispectral image restoration algorithm.
5. The method of claim 1, wherein modifying the optical radiation wavefront introduces an atmospheric-like blurring to the blurred image.
6. The method of claim 1, wherein modifying the optical radiation wavefront increases the sampling parameter Q to 1 or greater.
7. An optical imaging apparatus, comprising: an aperture positioned at the entrance of the optical imaging system configured to receive an optical radiation wavefront; an imaging detector positioned at a focal plane including an array of pixels having a pixel pitch and a focal plane image sampling parameter Q that is less than 1, where sampling parameter Q is defined as the ratio of a product of the wavelength of an incident optical radiation wavefront and the focal ratio of the system to the pixel pitch, the imaging detector configured to produce a sampled data set from the incident optical radiation wavefront; at least one uncalibrated variable phase plate positioned at a pupil plane optically coupled to the aperture and configured to modify the optical radiation wavefront so as to produce a blurred image on the focal plane with an effective increase in the sampling parameter Q; and a digital image processor coupled to the imaging detector and configured to apply a real-time image restoration algorithm to produce an unblurred image.
8. The optical imaging apparatus of claim 7, wherein the at least one uncalibrated variable phase plate comprises a fast steering mirror mechanism for shifting the optical radiation wavefront in two dimensions at the focal plane.
9. The optical imaging apparatus of claim 7, wherein the at least one uncalibrated variable phase plate comprises a mechanically actuated phase plate.
10. The optical imaging apparatus of claim 7, wherein the at least one uncalibrated variable phase plate comprises an electronically deformable phase plate.
11. The optical imaging apparatus of claim 7, wherein the real-time image restoration algorithm comprises a bispectral image restoration algorithm.
12. The optical imaging apparatus of claim 7, wherein the uncalibrated variable phase plate introduces an atmospheric-like blurring to the blurred image.
13. The optical imaging apparatus of claim 7, wherein the effective sampling parameter Q is increased to 1 or greater.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0009] The foregoing and other objects, features and advantages will be apparent from the following more particular description of the examples, as illustrated in the accompanying drawings, in which like reference characters refer to the same parts throughout the different views. The annexed drawings are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the examples:
DETAILED DESCRIPTION
[0016] In the description that follows, like components have been given the same reference numerals, regardless of whether they are shown in different examples. To illustrate an example(s) of the present disclosure in a clear and concise manner, the drawings may not necessarily be to scale and certain features may be shown in somewhat schematic form. Features that are described and/or illustrated with respect to one example may be used in the same way or in a similar way in one or more other examples and/or in combination with or instead of the features of the other examples.
[0017] As the term is used herein, “atmospheric distortion” includes optical distortion arising from turbulent air motion that drives variations in the index of refraction. These optical distortions can be modeled and predicted using theories such as those developed by A. N. Kolmogorov, V. I. Tatarskii, and D. L. Fried (Roggemann and Welsh, Imaging through Turbulence, CRC Press, Boca Raton, Fla., 1996, the contents of which are hereby incorporated by reference in their entirety).
[0018] Aspects and embodiments are directed to methods and systems for improving the sampling parameter Q in a detector array that involves modulating an optical radiation wavefront at an imaging system pupil plane. According to one embodiment, the modulation causes the point spread function of the lens to be deliberately blurred and spread over multiple pixels of the detector. Sub-detector information may be recovered by correlation filtering, as discussed in more detail below.
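As a rough illustration of this idea (an assumed sketch, not the disclosed system), applying a smooth random phase screen at a simulated pupil spreads the point spread function across many more detector pixels, lowering its peak while conserving total energy. All names and parameter values below are hypothetical:

```python
import numpy as np

def psf_from_pupil(n=256, aperture_radius=60, phase=None):
    """Compute the focal-plane PSF of a circular pupil via Fraunhofer
    propagation: PSF = |FFT of pupil function|^2, normalized to unit energy."""
    y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
    pupil = (x**2 + y**2 <= aperture_radius**2).astype(complex)
    if phase is not None:
        pupil *= np.exp(1j * phase)
    field = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(pupil)))
    psf = np.abs(field)**2
    return psf / psf.sum()

rng = np.random.default_rng(0)
# Smooth random phase screen standing in for the uncalibrated phase plate:
# white noise low-pass filtered in the frequency domain.
raw = rng.standard_normal((256, 256))
fx = np.fft.fftfreq(256)
kernel = np.exp(-(fx[:, None]**2 + fx[None, :]**2) / (2 * 0.02**2))
screen = np.fft.ifft2(np.fft.fft2(raw) * kernel).real
screen *= 4.0 / screen.std()   # scale to ~4 rad rms of aberration

psf_sharp = psf_from_pupil()              # diffraction-limited PSF
psf_blur = psf_from_pupil(phase=screen)   # deliberately blurred PSF

# Peak ratio (Strehl-like): well below 1, since the same energy is
# now spread over many detector pixels.
print(psf_blur.max() / psf_sharp.max())
```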
[0019] Referring to
[0020] With reference to
[0021] The blurring function can be represented using Fried's well-known long-exposure or short-exposure transfer function (Roggemann and Welsh, Imaging through Turbulence, CRC Press, 1996). In these models, the blurring function depends on the Fried parameter r.sub.0, which represents the diameter of a circular area within which phase errors are on the order of 1 radian. The Fried parameter is used as an input to the image restoration algorithm.
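As an illustrative sketch (not part of the disclosure), the long-exposure form of this transfer function, exp[−3.44 (λu/r0)^(5/3)] for angular spatial frequency u, can be evaluated directly. The function name and the example wavelength and seeing values are assumptions for demonstration:

```python
import numpy as np

def fried_long_exposure_mtf(u, wavelength, r0):
    """Fried's long-exposure atmospheric transfer function.

    u          : angular spatial frequency (cycles/radian)
    wavelength : observing wavelength (m)
    r0         : Fried parameter (m)
    """
    return np.exp(-3.44 * (wavelength * u / r0) ** (5.0 / 3.0))

# Example: r0 = 10 cm seeing at 500 nm; the MTF falls to exp(-3.44)
# at the frequency where wavelength * u equals r0 (u = r0 / wavelength).
wavelength, r0 = 500e-9, 0.10
u = np.linspace(0.0, 4e5, 5)   # cycles/radian
print(fried_long_exposure_mtf(u, wavelength, r0))
```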
[0022] The image restoration algorithm relies on a Fourier magnitude estimation and a speckle image phase reconstruction algorithm. There are a variety of well-known techniques to estimate or measure the Fourier magnitude. One embodiment uses the square root of the ratio of a power spectrum and a Labeyrie-Korff transfer function (Korff, “Analysis of a method for obtaining near-diffraction-limited information in the presence of atmospheric turbulence,” Journal of the Optical Society of America, August 1973, the contents of which are hereby incorporated by reference). Although a variety of speckle image phase reconstruction algorithms may be employed, in certain real-time embodiments a bispectrum algorithm is used, such as described in the work of Lawrence, et al., “Speckle imaging of satellites at the U.S. Air Force Maui Optical Station,” Applied Optics, October 1992, the contents of which are hereby incorporated by reference. In this algorithm, the object's phase spectrum is found by solving the following equation:
arg O(u+v) = arg O(u) + arg O(v) − arg I.sub.B,n(u, v)  Eqn. 1
where the image bispectrum is defined as:
I.sub.B,n(u, v) = I.sub.n(u) I.sub.n(v) I.sub.n(−u−v)  Eqn. 2
In the above equations, u and v are two-dimensional spatial frequency vectors, O is the Fourier transform of the object, and I.sub.n is the Fourier transform of the nth member of a set of images.
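The mechanics of Eqn. 1 and Eqn. 2 can be sketched in one dimension. This is a simplified illustration only (the cited algorithm operates on 2-D imagery and averages the bispectrum over many frames); the function names and the v = 1 recursion path are assumptions for demonstration. For a real signal, I(−u−v) is the conjugate of I(u+v), so the recursion recovers the object phase up to a linear tilt fixed by the arbitrary choice of arg O(1):

```python
import numpy as np

def bispectrum_1d(I, u, v):
    """Image bispectrum I_B(u, v) = I(u) I(v) I(-u-v)   (Eqn. 2),
    for a 1-D Fourier transform I and integer frequency indices u, v."""
    n = len(I)
    return I[u % n] * I[v % n] * I[(-u - v) % n]

def recover_phase_1d(I, kmax):
    """Recursive phase recovery along one axis using Eqn. 1 with v = 1:
    arg O(k+1) = arg O(k) + arg O(1) - arg I_B(k, 1).
    The result agrees with the true phase up to a linear tilt set by the
    (unknowable) convention arg O(1) := 0."""
    phase = np.zeros(kmax + 1)
    for k in range(1, kmax):
        phase[k + 1] = phase[k] + phase[1] - np.angle(bispectrum_1d(I, k, 1))
    return phase

# Demonstration on a single noiseless 1-D "image"
x = np.array([3.0, 1.0, 0.5, 2.0, 0.2, 1.5, 0.7, 0.9])
I = np.fft.fft(x)
rec = recover_phase_1d(I, 3)
```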
[0023] The blurring function H preferably approximates atmospheric blurring, as described above. Triple-correlation bispectral techniques were first applied in X-ray crystallography and were later developed for the restoration of atmospherically degraded astronomical imagery.
[0025] According to another embodiment shown in
[0026] In yet another embodiment illustrated in
[0027] With reference again to
[0028] Digital signal processor 430 (in Step 160) outputs image data 440 comprising an unblurred image effectively sampled with an increased sampling parameter Q (i.e., sub-pixel detector information is obtained). As noted, imaging system resolution, focal plane array sampling, and field of view are functions of the imaging sampling parameter Q, where Q=λ*(focal ratio)/(detector element size). Image aliasing caused by under-sampling an image may be reduced or eliminated by increasing the sampling parameter Q above the diffraction (Nyquist sampling) limit of Q=1, essentially transcending the limits of the optical system imposed by the number and size of pixels in detector array 255. Thus, the extracted image is enhanced to a higher resolution (i.e., “super-resolution,” reducing the minimum resolvable image feature size) than the system would otherwise be capable of delivering without the deliberate spatial blurring (sub-pixel shifting) and correlation-based image restoration disclosed herein.
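The Q relationship above can be sketched as a one-line helper; the function name and the example values (1 µm wavelength, f/8 optics, 15 µm pixels) are assumptions for illustration, and Q ≥ 1 is the alias-free threshold used in this disclosure:

```python
def sampling_parameter_q(wavelength, focal_ratio, pixel_pitch):
    """Focal-plane sampling parameter Q = wavelength * F# / pixel pitch.
    In this disclosure, Q >= 1 is treated as the Nyquist-sampled
    (alias-free) regime; Q < 1 means the detector under-samples the PSF."""
    return wavelength * focal_ratio / pixel_pitch

# Example: 1 um light, f/8 optics, 15 um pixels -> under-sampled system,
# the regime the phase-plate blurring and restoration chain is aimed at.
q = sampling_parameter_q(1.0e-6, 8.0, 15.0e-6)
print(q)  # ~0.53, i.e. Q < 1
```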
[0029] It will be appreciated by those skilled in the art, given the benefit of this disclosure, that the digital image processing discussed herein is linear in nature. Thus, according to certain embodiments, the blurring function convolved imagery may only require one pass through the digital image restoration processor 430 to recover the image resolution. This linearity is in striking contrast to conventional computational imaging techniques, which generally require many iterations to recover a high resolution image. Thus, embodiments of the techniques discussed herein may offer a significant reduction in computational burden.
[0030] Proof-of-concept experiments were performed in real time, i.e., at 49 frames per second on a 1280×720 (high-definition) pixel array, demonstrating the feasibility and efficacy of the method.
[0031] The systems, methods and components disclosed herein, including phase plate controller 410, detector circuitry 420 and/or digital signal processor 430, may be implemented in digital electronic circuitry, in computer hardware, firmware, and/or software. The implementation can be as a computer program product (i.e., a computer program tangibly embodied in an information carrier medium). The implementation can, for example, be in a machine-readable storage device for execution by, or to control the operation of, data processing apparatus. The implementation can, for example, be a programmable processor, a computer, and/or multiple computers.
[0032] In one example, a computer program can be written in any form of programming language, including compiled and/or interpreted languages, and the computer program can be deployed in any form, including as a stand-alone program or as a subroutine, element, and/or other unit suitable for use in a computing environment to carry out the features and functions of various examples discussed herein. A computer program can be deployed to be executed on one computer or on multiple computers at one site.
[0033] Method steps or operations can be performed as processes by one or more programmable processors executing a computer program to perform functions of various examples by operating on input data and generating output. Method steps can also be performed by, and an apparatus can be implemented as, special purpose logic circuitry. The circuitry can, for example, be a field programmable gate array (FPGA) and/or an application specific integrated circuit (ASIC). Modules, subroutines, and software agents can refer to portions of the computer program, the processor, the special circuitry, software, and/or hardware that implement that functionality.
[0034] The phase plate controller 410, detector circuitry 420 and/or digital signal processor 430 may comprise one or more processors suitable for the execution of a computer program, including, by way of example, both general and special purpose microprocessors, and any one or more processors of any kind of digital computer. Generally, a processor receives instructions and data from a read-only memory or a random access memory or both. The elements of a computer may comprise a processor for executing instructions and one or more memory devices for storing instructions and data. Generally, a computer can include, or be operatively coupled to receive data from and/or transfer data to, one or more mass storage devices (e.g., a memory module) for storing data (e.g., magnetic, magneto-optical disks, or optical disks). The memory may be a tangible non-transitory computer-readable storage medium having computer-readable instructions stored therein for processing images, which when executed by one or more processors cause the one or more processors to carry out or implement the features and functionalities of various examples discussed herein.
[0035] Information carriers suitable for embodying computer program instructions and data include all forms of non-volatile memory, including by way of example semiconductor memory devices. The information carriers can, for example, be EPROM, EEPROM, flash memory devices, magnetic disks, internal hard disks, removable disks, magneto-optical disks, CD-ROM, and/or DVD-ROM disks. The processor and the memory can be supplemented by, and/or incorporated in special purpose logic circuitry.
[0036] To provide for interaction with a user, the above described techniques can be implemented on a computing device having a display device. The display device can, for example, be a cathode ray tube (CRT), a liquid crystal display (LCD) monitor, and/or a light emitting diode (LED) monitor. The interaction with a user can, for example, be a display of information to the user and a keyboard and a pointing device (e.g., a mouse or a trackball) by which the user can provide input to the computing device (e.g., interact with a user interface element). Other kinds of devices can be used to provide for interaction with a user; for example, feedback can be provided to the user in any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback). Input from the user can, for example, be received in any form, including acoustic, speech, and/or tactile input.
[0037] The terms “comprise,” “include,” and/or plural forms of each are open ended and include the listed parts and can include additional parts that are not listed. “And/or” is open ended and includes one or more of the listed parts and combinations of the listed parts.
[0038] One skilled in the art will realize the invention may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The foregoing embodiments are therefore to be considered in all respects illustrative rather than limiting of the invention described herein. Scope of the invention is thus indicated by the appended claims, rather than by the foregoing description, and all changes that come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein.