SYSTEMS AND METHODS FOR IMAGE RECONSTRUCTION
20170168285 · 2017-06-15
Assignee
Inventors
CPC classification
G02B21/365
PHYSICS
G03H1/0866
PHYSICS
G02B21/0008
PHYSICS
G02B21/367
PHYSICS
G02B21/0056
PHYSICS
G03H1/0443
PHYSICS
International classification
G02B21/36
PHYSICS
G03H1/00
PHYSICS
Abstract
A method for obtaining a high resolution image of objects contained within a sample is disclosed that combines pixel super-resolution and phase retrieval techniques into a unified algorithmic framework, enabling new holographic image reconstruction methods with significantly improved data efficiency, i.e., using a much smaller number of raw measurements to obtain high-resolution and wide-field reconstructions of the sample. Using the unified algorithmic framework, twin image noise and spatial aliasing signals, along with other digital holographic artifacts, can be interpreted as noise terms modulated by digital phasors, which are all analytical functions of the imaging parameters including, e.g., the lateral displacement between the hologram and the sensor array planes (x, y shifts), the sample-to-image sensor distance (z), the illumination wavelength (λ), and the angle of incidence (θ, φ).
Claims
1. A method for obtaining a high resolution image of one or more objects contained within a sample comprising: illuminating the sample with a light source emitting partially coherent light or coherent light, wherein the sample is interposed between the light source and an image sensor with a sample-to-image sensor distance z.sub.k; obtaining a plurality of lower resolution hologram image frames of the objects with the image sensor, wherein the plurality of lower resolution hologram image frames are obtained at (1) different sample-to-image sensor distances z.sub.k or (2) different illumination angles θ.sub.k, φ.sub.k; generating from the plurality of lower resolution hologram image frames a high-resolution initial guess of the objects based on a summation of upsampled holograms in the lower resolution image frames; iteratively eliminating twin image noise, aliasing signal, and artifacts and retrieving phase information from the high-resolution initial guess, wherein the iterative process comprises: forward-propagating the high-resolution initial guess from an object plane to an image sensor plane to generate a high-resolution forward-propagated field; updating an amplitude of the high-resolution forward-propagated field using holograms in the lower resolution hologram image frames; back-propagating the updated high-resolution forward-propagated field to the object plane; updating a transmitted field of the object at the object plane in the spatial frequency domain; and outputting a phase retrieved high resolution image of the one or more objects contained within the sample based on the updated transmitted field of the one or more objects.
2. The method of claim 1, further comprising obtaining a plurality of additional lower resolution hologram image frames of the one or more objects with the image sensor, wherein the additional lower resolution hologram image frames are obtained using sub-pixel lateral shifts and wherein the additional lower resolution hologram image frames are used to generate the high-resolution initial guess of the objects.
3. The method of claim 2, wherein the sub-pixel lateral shifts are obtained at a single sample-to-image sensor distance z.sub.k.
4. The method of claim 2, wherein the sub-pixel lateral shifts are obtained at a single illumination angle θ.sub.k, φ.sub.k.
5. The method of claim 1, wherein the sample comprises a biological sample.
6. The method of claim 1, wherein the sample comprises a pathology sample.
7. The method of claim 1, wherein the plurality of lower resolution hologram image frames comprise less than fifty (50) image frames.
8. The method of claim 1, wherein the light source comprises a wavelength-tunable light source and wherein the plurality of lower resolution hologram image frames of the objects with the image sensor are obtained at different illumination wavelengths λ.sub.k.
9. The method of claim 8, wherein the different illumination wavelengths λ.sub.k are separated from one another by at least 2 nm.
10. The method of claim 8, wherein the different illumination wavelengths λ.sub.k are within a spectrum range of less than 30 nm.
11. A method for obtaining a high resolution image of one or more objects contained within a sample comprising: sequentially illuminating the sample at a plurality of different wavelengths with a light source emitting partially coherent light or coherent light, wherein the sample is interposed between the light source and an image sensor with a sample-to-image sensor distance z.sub.k; obtaining a plurality of lower resolution hologram image frames of the objects with the image sensor at the plurality of different wavelengths, wherein the plurality of lower resolution hologram image frames are also obtained at (1) different sample-to-image sensor distances z.sub.k or (2) different illumination angles θ.sub.k, φ.sub.k; generating from the plurality of lower resolution hologram image frames a high-resolution initial guess of the objects based on a summation of upsampled holograms in the lower resolution image frames; iteratively eliminating twin image noise, aliasing signal, and artifacts and retrieving phase information from the high-resolution initial guess, wherein the iterative process comprises: forward-propagating the high-resolution initial guess from an object plane to an image sensor plane to generate a high-resolution forward-propagated field; updating an amplitude of the high-resolution forward-propagated field using holograms in the lower resolution hologram image frames; back-propagating the updated high-resolution forward-propagated field to the object plane; updating a transmitted field of the object at the object plane in the spatial frequency domain; and outputting a phase retrieved high resolution image of the objects contained within the sample based on the updated transmitted field of the object.
12. The method of claim 11, wherein the plurality of different wavelengths are separated by one another by at least 2 nm.
13. The method of claim 11, wherein the plurality of different wavelengths span a wavelength range of less than 30 nm.
14. The method of claim 11, wherein the phase retrieved high resolution image comprises a phase unwrapped image obtained from the plurality of different wavelengths.
15. The method of claim 11, wherein the sample comprises a biological sample.
16. The method of claim 11, wherein the sample comprises a pathology sample.
17. A wavelength scanning pixel super-resolution microscope device for imaging a sample comprising: a sample holder configured to hold the sample; a wavelength-tunable light source or multiple different light sources configured to illuminate the sample at a plurality of different wavelengths λ.sub.k along an optical path; a lens or set of lenses disposed along the optical path; an image sensor configured to receive illumination passing through the sample and lens or set of lenses along the optical path, wherein at least one of the sample holder, lens, set of lenses, or image sensor is moveable along the optical path to introduce incremental defocusing conditions to the microscope device, wherein the image sensor obtains a plurality of images of the sample at the different wavelengths under the incremental defocusing conditions; and at least one processor configured to (1) generate a high-resolution initial guess of the sample image based on the plurality of images of the sample at the different wavelengths under the incremental defocusing conditions, (2) iteratively eliminate artifacts and retrieve phase information from the high-resolution initial guess of the sample image, and (3) output a phase-retrieved high resolution image of the sample.
18. The wavelength scanning pixel super-resolution microscope device of claim 17, wherein the lens or set of lenses comprises an objective lens and the objective lens is moved incrementally.
19. The wavelength scanning pixel super-resolution microscope device of claim 17, wherein the different illumination wavelengths λ.sub.k are separated from one another by at least 2 nm.
20. The wavelength scanning pixel super-resolution microscope device of claim 17, wherein the incremental defocusing conditions are created by moving one or more of the sample, image sensor, or the lens or set of lenses.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION OF THE ILLUSTRATED EMBODIMENTS
[0083] The configuration of an in-line holographic imaging system 10 according to one embodiment of the invention is seen with reference to
[0084] As seen in
[0085] Thus, some of the controllable parameters of the imaging system are illustrated in
[0086] Still referring to
[0087] The components of the system 10 described above may be incorporated into a small, benchtop unit that can be used to obtain high resolution images of a sample 12. In other embodiments, it may be possible to incorporate the imaging components into a small, portable device. While any type of sample 12 may be imaged using the system 10, it has particular suitability for imaging a sample 12 that includes a biological sample (e.g., bodily fluid such as blood or the like). The sample 12 may also include a pathological sample such as histological slide that may be unstained or stained depending on the particular application. The system 10 may also be used to image environmental samples containing small particulate matter or pathogens found in air or water sources.
[0089] Mathematical Formalism of Propagation Phasor Approach in Digital Holography
[0090] The concept of propagation phasors is established by deriving the analytical expressions that contain not only the holographic information of the sample 12, but also the twin image noise, spatial aliasing signal, and upsampling related spatial artifacts that are present. Throughout this analysis, plane wave illumination is assumed and it is also supported by the imaging set-up of
[0091] Under these different imaging configurations, each labeled with index k, the transmission properties of a two-dimensional (2D) sample 12 can be generally expressed as o.sub.k(x, y)=1+s.sub.k(x, y), where s.sub.k refers to the scattered object field that interferes with the background unscattered light. The frequency spectrum O.sub.k(f.sub.x, f.sub.y) of o.sub.k(x, y) can be written as:
O.sub.k(f.sub.x, f.sub.y)=δ(f.sub.x, f.sub.y)+S.sub.k(f.sub.x, f.sub.y)  (1)
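The decomposition in Eq. (1) can be checked numerically: the DFT of the unit background is a delta at zero frequency, so the object spectrum is that delta plus the spectrum of the scattered field. The following numpy sketch is illustrative only; the grid size and noise amplitude are assumptions, not values from this disclosure.

```python
import numpy as np

# Illustrative check of Eq. (1): o_k(x, y) = 1 + s_k(x, y) implies
# O_k = delta + S_k, since the DFT of a constant 1 is a delta at DC.
N = 64
rng = np.random.default_rng(0)
s = 0.01 * rng.standard_normal((N, N))   # weak scattered object field s_k
o = 1.0 + s                              # object transmission o_k

O = np.fft.fft2(o)                       # spectrum of o_k
S = np.fft.fft2(s)                       # spectrum of s_k
delta = np.zeros((N, N), dtype=complex)
delta[0, 0] = N * N                      # DFT of the constant 1 on an N x N grid

assert np.allclose(O, delta + S)         # Eq. (1): O_k = delta + S_k
```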
[0092] Similarly, one can write the 2D spatial frequency spectrum of the transfer function h.sub.k(x, y, z.sub.k, λ.sub.k, θ.sub.k, φ.sub.k) as:
H.sub.k(f.sub.x, f.sub.y, z.sub.k, λ.sub.k, θ.sub.k, φ.sub.k)≡FT{h.sub.k(x, y, z.sub.k, λ.sub.k, θ.sub.k, φ.sub.k)}  (2)
[0093] where FT refers to the Fourier Transform operation. From now on, the expressions of all the frequency spectra will be simplified in the equations by hiding the spatial frequency variables f.sub.x, and f.sub.y. The frequency spectrum of the field intensity i.sub.k(x, y) on the image sensor plane can then be expressed as:
I.sub.k=δ+T.sub.k·S.sub.k+(T.sub.k.sup.−·S.sub.k.sup.−)*+SS.sub.k  (3)
[0094] where the superscript − represents using the variable set (−f.sub.x, −f.sub.y) instead of (f.sub.x, f.sub.y) and the asterisk stands for the complex conjugate operation. SS.sub.k represents the self-interference terms, which can be written as an autocorrelation of the scattered field's spectrum over the spatial frequency variables, and T.sub.k is given by:
T.sub.k=H.sub.k*(f.sub.x,k, f.sub.y,k)·H.sub.k(f.sub.x+f.sub.x,k, f.sub.y+f.sub.y,k)  (4)
[0095] where f.sub.x,k=n.sub.k sin θ.sub.k cos φ.sub.k/λ.sub.k, f.sub.y,k=n.sub.k sin θ.sub.k sin φ.sub.k/λ.sub.k, and n.sub.k is the refractive index of the medium, which is assumed to be a function of only the illumination wavelength. It is important to notice that H.sub.k is a complex function with a unit magnitude, defining a phasor. Based on Eq. (4), as a product of H.sub.k*(f.sub.x,k, f.sub.y,k) and H.sub.k, the function T.sub.k is also a phasor, and the term T.sub.k is deemed a propagation phasor; its function in the reconstruction framework will become clearer below.
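The unit-magnitude property of H.sub.k and T.sub.k noted above can be verified with a short numpy sketch. The wavelength, propagation distance, grid, and illumination-offset index below are illustrative assumptions, not values taken from this disclosure.

```python
import numpy as np

# Sketch: the angular-spectrum transfer function H_k is a pure phasor for
# propagating spatial frequencies, and so is the product T_k of Eq. (4).
wavelength = 530e-9               # lambda_k [m] (illustrative)
z = 200e-6                        # sample-to-image sensor distance z_k [m]
n = 1.0                           # refractive index n_k

fx = np.fft.fftfreq(128, d=1e-6)  # spatial frequencies [1/m]
FX, FY = np.meshgrid(fx, fx)
arg = (n / wavelength) ** 2 - FX ** 2 - FY ** 2   # all propagating on this grid

H = np.exp(1j * 2 * np.pi * z * np.sqrt(arg))     # free-space transfer function
assert np.allclose(np.abs(H), 1.0)                # H_k has unit magnitude

# T_k = H_k*(fx_k, fy_k) . H_k(fx + fx_k, fy + fy_k): a product of unit
# phasors, hence itself a phasor (offset index 3 is illustrative).
T = np.conj(H[0, 3]) * np.roll(H, -3, axis=1)
assert np.allclose(np.abs(T), 1.0)
```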
[0096] When any intensity distribution i.sub.k(x, y) is sampled by an image sensor 14 with a pixel pitch of Δx and Δy in the lateral directions, the discrete Fourier transform (DFT) of the sensor's output can be expressed as:
I.sub.sampled,k=Σ.sub.u,v[I.sub.k(f.sub.x+u/Δx, f.sub.y+v/Δy)·P.sub.k(f.sub.x+u/Δx, f.sub.y+v/Δy)]  (5)
[0097] In Eq. (5), u and v are integers representing the aliasing orders, and (u, v)=(0, 0) denotes the non-aliased target signal of the object. P.sub.k(f.sub.x, f.sub.y) is the 2D FT of the pixel function that defines the responsivity distribution within each pixel of the image sensor 14. Originally, f.sub.x and f.sub.y in Eq. (5) are discrete frequency values confined within the Nyquist window. Based on the periodic nature of the DFT, Eq. (5) and all of the further derivations can be numerically extended to a broader frequency domain by simply upsampling the raw measurements. Therefore, without change of notation, I.sub.sampled,k refers to the DFT of the upsampled version of the raw measurements.
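The aliasing orders (u, v) in Eq. (5) can be illustrated with a 1-D toy example (my own example, not from this disclosure): after sampling at pitch Δx, frequency content offset by an integer multiple of 1/Δx becomes indistinguishable from the target order.

```python
import numpy as np

# Toy 1-D illustration of aliasing: two sinusoids whose frequencies differ
# by exactly 1/dx produce identical samples on a grid of pitch dx.
dx = 1.0                        # pixel pitch (arbitrary units)
n_coarse = np.arange(32)        # sample indices
f_true = 0.3                    # cycles per sample
f_alias = f_true - 1.0 / dx     # aliasing order u = -1

s1 = np.cos(2 * np.pi * f_true * n_coarse * dx)
s2 = np.cos(2 * np.pi * f_alias * n_coarse * dx)
assert np.allclose(s1, s2)      # indistinguishable after sampling
```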
[0098] Now, the lateral displacements between the holograms and the image sensor 14 are incorporated into Eq. (5). If one adds lateral shifts (x.sub.shift,k, y.sub.shift,k) to each hologram, then Eq. (5) can be re-written as:
I.sub.sampled,k=Σ.sub.u,v[I.sub.uv,k·P.sub.uv,k·e.sup.jφ.sub.shift,uv,k]  (6)
[0099] where the expression of the spatial aliasing order is simplified by using the subscript uv, and φ.sub.shift,uv,k represents the phase change caused by a lateral shift:
φ.sub.shift,uv,k=2π[(f.sub.x+u/Δx)·x.sub.shift,k+(f.sub.y+v/Δy)·y.sub.shift,k]  (7)
[0100] In Eq. (6), by replacing the expression of I.sub.uv,k with Eq. (3), one can obtain an expanded expression for I.sub.sampled,k:
I.sub.sampled,k=Σ.sub.u,v[δ.sub.uv+T.sub.uv,k·S.sub.uv,k+(T.sub.uv,k.sup.−·S.sub.uv,k.sup.−)*+SS.sub.uv,k]·P.sub.uv,k·e.sup.jφ.sub.shift,uv,k  (8)
[0101] On the right side of Eq. (8), one can see that, for each aliasing order (i.e., each combination of u and v, including the target signal: u=0, v=0), there are four items inside the square brackets. The first item, δ.sub.uv, represents the background light; the second item, T.sub.uv,k·S.sub.uv,k, represents the real image; the third item, (T.sub.uv,k.sup.−·S.sub.uv,k.sup.−)*, represents the twin image; and the last item, SS.sub.uv,k, is the self-interference term.
[0102] Two-Stage Holographic Reconstruction Algorithm
[0103] As explained herein, a generic, two-stage holographic reconstruction algorithm using propagation phasors is set forth which aims to recover the object term δ.sub.00+S.sub.00,k from a series of measured holograms.
[0104] Stage I of Propagation Phasor Based Holographic Reconstruction: Generation of an Initial Guess
[0105] With reference to
[0108] On the left side of Eq. (9), the pixel function P.sub.00,k is kept multiplied with δ.sub.00+S.sub.00,k; note, however, that it can later be removed using deconvolution techniques as the last step of the holographic reconstruction. The right side of Eq. (9) shows that in order to extract (δ.sub.00+S.sub.00,k)·P.sub.00,k, there are five terms that need to be eliminated from the back-propagated intensity (i.e., T.sub.00,k*·e.sup.−jφ.sub.shift,00,k·I.sub.sampled,k). The first term, T.sub.00,k*·T.sub.00,k.sup.−*·P.sub.00,k·S.sub.00,k.sup.−*, represents the twin image noise; the second and third terms, which contain S.sub.uv,k or S.sub.uv,k.sup.−* ((u, v)≠(0, 0)), represent the spatial aliasing signals for the real and twin images, respectively; the fourth term, with δ.sub.uv ((u, v)≠(0, 0)), is the high frequency artifacts generated during the upsampling process; and the last term, with SS.sub.uv,k, is the self-interference signal. [0109] Step 3: Summation of all the upsampled and back-propagated holograms T.sub.00,k*·e.sup.−jφ.sub.shift,00,k·I.sub.sampled,k to generate an initial guess. This initial summation can greatly suppress the twin image noise, aliasing signal, and the other artifact terms outlined above in Step 2. To better explain the impact of this summation step, one can simplify the expression of the phasor terms in Eq. (9) as:
[0110] where Q.sub.00,k.sup.twin=T.sub.00,k*·T.sub.00,k.sup.−*, Q.sub.uv,k.sup.real=T.sub.00,k*·T.sub.uv,k·e.sup.j(φ.sub.shift,uv,k−φ.sub.shift,00,k), and the remaining twin image and artifact phasors Q.sub.uv,k.sup.twin and Q.sub.uv,k.sup.artifact are defined analogously.
[0111] Also notice that, except for the illumination wavelength λ.sub.k, changes of the imaging parameters z.sub.k, θ.sub.k, φ.sub.k, x.sub.shift,k, and y.sub.shift,k do not affect the transmission properties of the 2D sample. During the imaging process, the illumination wavelengths are confined within a narrow spectral range, typically less than 10 nm, so that the transmission properties of the sample and the image sensor's pixel function can be approximately considered identical when generating an initial guess of the object, i.e., S.sub.uv,k≈S.sub.uv and P.sub.uv,k≈P.sub.uv. If one lists Eq. (10) for all the possible K imaging conditions (e.g., as a function of various illumination wavelengths, sub-pixel shifts, etc.), and then sums them up with a set of weighting factors {c.sub.k}, this produces:
[0112] By finding a set of weighting factors {c.sub.k} that satisfy Σ.sub.kc.sub.k=1 together with Σ.sub.kc.sub.k·Q.sub.00,k.sup.twin=0, Σ.sub.kc.sub.k·Q.sub.uv,k.sup.real=0 and Σ.sub.kc.sub.k·Q.sub.uv,k.sup.twin=0 for (u, v)≠(0, 0), and Σ.sub.kc.sub.k·Q.sub.uv,k.sup.artifact=0,
one can have complete elimination of the twin image noise, aliasing signals, and upsampling related spatial artifacts, while still maintaining the target object function (δ.sub.00+S.sub.00)·P.sub.00. However, considering the fact that Q.sub.uv,k.sup.twin, Q.sub.uv,k.sup.real, and Q.sub.uv,k.sup.artifact are also functions of the spatial frequencies (f.sub.x, f.sub.y), it is computationally expensive to obtain a set of ideal {c.sub.k} values. Therefore, an alternative strategy is adopted as shown in
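The intuition behind the Stage I summation can be sketched numerically: the target term is identical across the K measurements, while the noise terms are multiplied by phasors that vary with the imaging configuration, so a simple average suppresses them. The weights, values, and single-frequency toy setup below are illustrative assumptions, not the disclosure's exact procedure.

```python
import numpy as np

# Toy model of Stage I at one spatial frequency: averaging K measurements
# with c_k = 1/K keeps the configuration-independent target term and
# suppresses a noise term modulated by phasors Q_k spread over [0, 2*pi).
rng = np.random.default_rng(1)
K = 64
target = 1.0 + 0.5j                              # (delta_00 + S_00) . P_00 (fixed)
noise = 0.8 - 0.2j                               # e.g., a twin-image term
Q = np.exp(1j * rng.uniform(0, 2 * np.pi, K))    # configuration-dependent phasors

summed = np.mean(target + noise * Q)             # initial guess value
# Random phasor sums shrink roughly as 1/sqrt(K), so the residual is small:
assert abs(summed - target) < 0.5 * abs(noise)
```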
[0113] In fact, as illustrated in
[0114] This initial guess is then used as the input to an iterative algorithm (Stage II) to reconstruct and refine the object function/image, which will be detailed below.
[0115] Stage II of Propagation Phasor Based Holographic Reconstruction: Iterative Image Reconstruction
[0116] Using the initial guess defined by Eq. (13), an iterative process is then implemented as the second stage of the propagation phasor based holographic reconstruction algorithm to eliminate the remaining twin image noise, aliasing signal, and the upsampling related artifacts. Each iteration of Stage II comprises four steps (i.e., Steps 4 through 7; see
[0118] Wave propagation from the object plane to the image sensor is deemed forward-propagation, and the spatial form of the forward-propagated field is denoted as g.sub.forward,k(x, y). [0119] Step 5: On the image sensor plane, the raw measurements (i.e., the low-resolution, undersampled holograms) are used to update the amplitude of the high-resolution, forward-propagated field g.sub.forward,k(x, y). To do so, one first convolves the intensity of the field, |g.sub.forward,k(x, y)|.sup.2, with the pixel function of the image sensor, and shifts the convolved intensity by an amount of (x.sub.shift,k, y.sub.shift,k) to compensate for the corresponding lateral displacement. Next, this shifted intensity is downsampled to the same resolution as the raw measurement, and the difference between this downsampled intensity and the raw measurement is taken as a low-resolution correction map. In order to apply this low-resolution correction map to each shifted intensity, the correction map is upsampled by taking its Kronecker product with the pixel function, and the upsampled correction map is added to the shifted intensity with a relaxation factor (typically 0.5). This corrected intensity is then deconvolved with the pixel function using Wiener deconvolution, and shifted back in place by the amount of (x.sub.shift,k, y.sub.shift,k). The Wiener filter takes into account the measured noise level of the image sensor to avoid over-amplification of noise during each iteration. One then uses the square root of the deconvolved and shifted intensity to replace the amplitude of g.sub.forward,k(x, y), while keeping its phase unaltered. [0120] Step 6: Back-propagate the amplitude-updated, high-resolution field to the object plane, and remove the phase modulation caused by the illumination angle. [0121] Step 7: The back-propagated field is then used to update the transmitted field on the object plane.
This update differs from the one in Step 6, as it is carried out in the spatial frequency domain. The spatial frequency region for this update is a circular area centered at f.sub.x,k=n.sub.k·sin θ.sub.k·cos φ.sub.k/λ.sub.k, f.sub.y,k=n.sub.k·sin θ.sub.k·sin φ.sub.k/λ.sub.k, and the radius of the circle is chosen so that all the spatial frequencies within it experience less than 3 dB amplitude attenuation during wave propagation. This update in the spatial frequency domain is also smoothened using a relaxation factor of 0.5. In other words, the updated frequency region is the weighted sum of the old transmitted field and the back-propagated field, where the weighting factor (i.e., relaxation factor) for the back-propagated field is 0.5. After this update, the phase of the field is converted into an optical path length map of the object, and the amplitude of the field gives the object's final transmission image, i.e., the reconstruction.
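The Step 7 update described above can be sketched in numpy as follows. The function name, grid size, region center, and radius are illustrative assumptions; only the circular-region masking and the 0.5 relaxation follow the text.

```python
import numpy as np

# Sketch of the Step 7 spatial-frequency update: inside a circular region
# centered at the illumination frequency, replace the object spectrum by a
# relaxed weighted sum of the old spectrum and the back-propagated one.
def update_object_spectrum(O_old, O_back, center, radius, beta=0.5):
    N = O_old.shape[0]
    f = np.fft.fftfreq(N)                       # normalized spatial frequencies
    FX, FY = np.meshgrid(f, f)
    mask = (FX - center[0]) ** 2 + (FY - center[1]) ** 2 < radius ** 2
    O_new = O_old.copy()
    O_new[mask] = (1 - beta) * O_old[mask] + beta * O_back[mask]
    return O_new

rng = np.random.default_rng(2)
O_old = rng.standard_normal((32, 32)) + 1j * rng.standard_normal((32, 32))
O_back = rng.standard_normal((32, 32)) + 1j * rng.standard_normal((32, 32))
O_new = update_object_spectrum(O_old, O_back, center=(0.0, 0.0), radius=0.2)

f = np.fft.fftfreq(32)
FX, FY = np.meshgrid(f, f)
inside = FX ** 2 + FY ** 2 < 0.2 ** 2
assert np.allclose(O_new[inside], 0.5 * O_old[inside] + 0.5 * O_back[inside])
assert np.allclose(O_new[~inside], O_old[~inside])   # outside region unchanged
```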
[0122] These above outlined steps (Steps 4 to 7) are performed for every imaging configuration. One iteration cycle is completed when all of the K raw measurements have each been used once. Similar to the convergence condition defined in Allen, L. J. et al., Exit wave reconstruction at atomic resolution, Ultramicroscopy 100, 91-104 (2004), which is incorporated by reference, convergence of the iterations and the reconstruction is determined when the sum-squared error (SSE.sub.avg.sup.itr) between the raw measurement and the downsampled intensity map satisfies the following criterion:
|SSE.sub.avg.sup.itr−SSE.sub.avg.sup.itr-1|<ε  (15)
[0123] where itr is the index of the iteration cycle, and ε is the convergence constant, empirically defined as 0.2% of SSE.sub.avg.sup.itr.
EXPERIMENTAL
[0124] To demonstrate the propagation phasor approach for holographic image reconstruction, it was implemented using lens-free holographic microscopy, although it is broadly applicable to other holographic microscopy platforms (including those with lenses). As depicted in
[0125] Sample Preparation
[0126] Besides a standard 1951 USAF resolution test target, the propagation phasor approach was also tested by imaging biological samples, including unstained Papanicolaou (Pap) smear slides and blood smears. Pap smears were prepared using the ThinPrep method (Hologic, Massachusetts, USA). The blood smear samples were prepared using EDTA (ethylenediaminetetraacetic acid) anticoagulated human blood and stained with Wright's stain.
[0127] Computation Platform Used for Propagation Phasor Based Holographic Reconstructions
[0128] For proof-of-concept implementation, the propagation phasor approach based reconstruction algorithm has been implemented using MATLAB (Version R2012a, MathWorks, Massachusetts, USA) on a desktop computer with a 3.60-GHz central processing unit (Intel Xeon E5-1620) and 16 GB random-access memory. Of course, dedicated imaging software or other programs may be used besides MATLAB. Using an upsampling factor of seven, the computation time of one iteration in reconstruction Stage II is 1.2 seconds for a region-of-interest of 1×1 mm.sup.2. As for the total computation time including Stages I and II, assuming that the number of intensity distribution updates is 8-10 per iteration (see e.g.
[0129] Results and Discussion
[0130] The main challenges of wide field-of-view, high-resolution holographic imaging include: (1) phase retrieval, and (2) mitigating the undersampling caused by an image sensor chip. The propagation phasor approach described herein relies on the fact that in the digital hologram of a sample, the twin image noise and spatial aliasing signals vary under different imaging configurations. Such variations enable one to eliminate these unwanted noise terms (twin image noise and aliasing signal) and obtain phase-retrieved and high-resolution (i.e., super-resolved) reconstructions of the object. The imaging configuration in a holographic microscope can in general be changed by varying different parameters: (1) the lateral displacements between the holograms and the sensor-array (i.e., lateral relative shifts x.sub.shift,k and y.sub.shift,k), (2) the sample-to-image sensor distance (z.sub.k), (3) the illumination wavelength (λ.sub.k), and (4) the angle of incidence (θ.sub.k, φ.sub.k).
[0131] To better illustrate the inner workings of the propagation phasor approach, the dependencies of the twin image noise and the aliasing signal on these controllable imaging parameters are demonstrated followed by a summary of the combinations of these imaging parameters that can create phase-retrieved and high-resolution reconstructions while also improving the data efficiency of holographic imaging.
[0132] Dependency of Twin Image Noise and Aliasing Signal on Imaging Parameters
[0133] From Eq. (10), one can see that all the terms which need to be eliminated from an upsampled and back-propagated hologram T.sub.00,k*·e.sup.−jφ.sub.shift,00,k·I.sub.sampled,k are modulated by phasors, including: (1) the twin image term, modulated by Q.sub.00,k.sup.twin; (2) the aliasing signals, modulated by Q.sub.uv,k.sup.real and Q.sub.uv,k.sup.twin, (u, v)≠(0, 0); (3) the upsampling artifacts (δ.sub.uv terms), modulated by Q.sub.uv,k.sup.artifact, (u, v)≠(0, 0); and (4) the self-interference patterns (SS.sub.uv,k terms), modulated by Q.sub.uv,k.sup.artifact. From the perspective of the propagation phasor approach, it is desirable that the phasors that modulate these unwanted noise terms or artifacts exhibit sufficient variations across [0, 2π], so that they can be significantly suppressed during the initial summation in the reconstruction Stage I. Here, the focus of the discussion is on the twin image phasor Q.sub.00,k.sup.twin and the aliasing related phasors Q.sub.uv,k.sup.real and Q.sub.uv,k.sup.twin ((u, v)≠(0, 0)), where the conclusions would be broadly applicable to a wide range of holographic imaging systems (lens-based or lens-free). Meanwhile, the self-interference patterns/artifacts are much weaker in signal strength compared to the holographic interference terms and can be easily suppressed by the iterative reconstruction algorithm (Stage II).
[0134] To illustrate the dependencies of the twin image noise and the aliasing signal on the holographic imaging parameters, the twin image phasor e.sup.jφ.sub.twin≡Q.sub.00,k.sup.twin and one of the spatial aliasing phasors, i.e., e.sup.jφ.sub.alias≡Q.sub.uv,k.sup.real (u=1, v=1), were chosen as examples.
[0135] From
[0136] Lateral Shifts (x.sub.shift, k, y.sub.shift, k):
[0137] Since the twin image phasor e.sup.jφ.sub.twin≡Q.sub.00,k.sup.twin=T.sub.00,k*·T.sub.00,k.sup.−* (see Eq. 4) does not contain the variables x.sub.shift,k or y.sub.shift,k, the absolute value of its partial derivatives with respect to x.sub.shift,k and y.sub.shift,k is zero, i.e., |∂φ.sub.twin/∂x.sub.shift,k|=0 and |∂φ.sub.twin/∂y.sub.shift,k|=0 (
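This distinction can be illustrated numerically: sub-pixel lateral shifts modulate the phase of the higher aliasing orders (through the u/Δx term of the shift phase), but contribute no relative modulation at the target order, consistent with the twin image being insensitive to shifts. The pitch, shift values, and helper below are illustrative assumptions.

```python
import numpy as np

# Illustrative check: the shift-induced phase of aliasing order u, relative
# to the target order u = 0, is exp(j*2*pi*(u/dx)*x_shift). For u = 0 it is
# constant, so lateral shifts alone cannot separate out the twin image.
dx = 1.0
shifts = np.array([0.0, 0.25, 0.5, 0.75]) * dx   # sub-pixel shifts x_shift,k

def relative_phasor(u, x_shift):
    # phase of aliasing order u relative to the target order u = 0
    return np.exp(1j * 2 * np.pi * (u / dx) * x_shift)

q_alias = relative_phasor(1, shifts)    # order u = 1: swings across [0, 2*pi)
q_target = relative_phasor(0, shifts)   # order u = 0: no modulation

assert np.allclose(q_target, 1.0)
assert not np.allclose(q_alias, q_alias[0])
```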
[0138] Sample-to-Image Sensor Distance (z.sub.k):
[0139] Using the diversity of the sample-to-image sensor distance (z.sub.k) to eliminate the twin image noise has been one of the most widely-used phase retrieval techniques in holographic image reconstruction. For completeness, here the effect of z.sub.k on the twin image noise is analyzed from the perspective of the propagation phasor approach. As shown in
[0140] Wavelength (λ.sub.k):
[0141] The diversity of illumination wavelength can be used for twin image elimination (i.e., phase retrieval). As shown in
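A back-of-the-envelope calculation shows why even nanometer-scale wavelength steps provide useful phasor diversity: the accumulated propagation phase scales as z/λ, so at typical sample-to-sensor distances a 2 nm step swings the phase by more than a full cycle. The z and wavelength values below are illustrative assumptions consistent with the distances quoted elsewhere in this description.

```python
import numpy as np

# Rough check: the plane-wave propagation phase over z is 2*pi*n*z/lambda,
# so a 2 nm wavelength step at z ~ 200 um changes it by more than 2*pi.
z = 200e-6                      # sample-to-image sensor distance [m]
lam1, lam2 = 530e-9, 532e-9     # two illumination wavelengths [m]

dphi = 2 * np.pi * z * (1 / lam1 - 1 / lam2)   # phase difference [rad]
assert dphi > 2 * np.pi          # more than one full phasor cycle
```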
[0142] Angle of Incidence (θ.sub.k, φ.sub.k):
[0143] As shown in
[0144] The propagation phasor approach described herein: (1) provides a unique mathematical formalism that combines/merges various existing phase retrieval and pixel super-resolution techniques used in digital holography into the same unified framework, and (2) creates two new techniques to eliminate the aliasing signal in digital holography, namely using the diversity of the sample-to-image sensor distance, and the diversity of the illumination angle. For consistency with the previously used terminology, these two new methods are called multi-height based pixel super-resolution and multi-angle based pixel super-resolution, respectively.
[0145] Propagation Phasor Approach Using Multi-Height and Multi-Angle Holographic Data
[0146] Using this new propagation phasor based reconstruction framework, the diversities of sample-to-image sensor distance or illumination angle can enable not only twin image elimination, but also resolution enhancement, i.e., super-resolution. To demonstrate the resolution enhancement brought by the diversity of z.sub.k (i.e., multi-height based pixel super-resolution), holograms of a standard resolution test target were captured at eight different heights, where the values of z.sub.k are evenly distributed between 200 μm and 305 μm with a spacing of 15 μm. For comparison, the sample was first reconstructed using a previous technique: multi-height based phase retrieval algorithm as seen in
[0147] In addition to multi-height based pixel super-resolution, a similar resolution enhancement can also be achieved using the diversity of illumination angles (i.e., multi-angle based pixel super-resolution). As shown in
[0148] Improving the Data Efficiency in High-Resolution Holographic Reconstructions Using the Propagation Phasor Approach
[0149] It was also found that much higher resolution images can be reconstructed using the propagation phasor approach by simply adding lateral shift-based pixel super-resolution to only one of the measurement heights or angles, which is used as an initial guess at Stage I of the reconstruction algorithm. This approach is also quite efficient in terms of its data requirements compared to existing approaches.
[0150] Using the multi-height imaging configuration outlined earlier, a 4×4 lateral shift-based pixel super-resolution was performed at only one sample-to-image sensor distance (i.e., 190 μm), which added fifteen (15) extra raw measurements/holograms to the original data set that is composed of measurements at eight (8) heights. In the propagation phasor based reconstruction, the back-propagation of this super-resolved hologram at this height (190 μm) was used as the initial guess (Stage I of the algorithm). The resolution improvement that was obtained by using these additional fifteen (15) raw measurements in the propagation phasor approach is significant: a half-pitch resolution of 0.55 μm (corresponding to an NA of 0.48 at 530 nm illumination) was achieved, which is the same level of resolution that is achieved by performing lateral shift-based super-resolution at every height (see
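The data budget described above can be checked with simple arithmetic (an illustrative tally of the measurement counts quoted in the text):

```python
# A 4x4 lateral-shift grid at a single height adds 4*4 - 1 = 15 extra
# holograms to the 8 single-shot heights, i.e., 23 raw measurements in
# total, versus 8 * 16 = 128 if shifts were repeated at every height.
extra = 4 * 4 - 1            # additional shifted holograms at one height
total = 8 + extra            # full data set for the propagation phasor approach
every_height = 8 * 16        # shift-based super-resolution at all 8 heights

assert extra == 15
assert total == 23
assert every_height == 128
```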
[0151] A similar level of improvement in the data efficiency of the propagation phasor approach is also observed in the multi-angle imaging configuration: by simply performing 6×6 pixel super-resolution at only the vertical illumination angle, the propagation phasor based reconstruction can achieve a half-pitch resolution of 0.49 μm (corresponding to an NA of 0.53 at 530 nm illumination). As a comparison, the synthetic aperture approach achieves a half-pitch resolution of 0.44 μm; however, it uses 6×6 pixel super-resolution at every illumination angle (
[0152] Imaging Biological Samples Using the Propagation Phasor Approach
[0153] To demonstrate the success of the propagation phasor approach in imaging a biological sample, unstained Papanicolaou (Pap) smears were imaged (see
[0154] Based on these results, it can be seen that the propagation phasor approach would greatly increase the speed of high-resolution and wide-field holographic microscopy tools. In previously reported holographic imaging modalities, multiple laterally shifted images are captured to achieve pixel super-resolution at every one of the sample-to-image sensor distances or illumination angles. As demonstrated in
[0155] Wavelength-Scanning Pixel Super-Resolution
[0156] In this particular embodiment, pixel super-resolution is achieved by wavelength scanning, which significantly improves the resolution of an undersampled or pixelated imaging system without the use of any lateral shifts or displacements. In this technique, the sample is sequentially illuminated at a few wavelengths that are sampled from a rather narrow spectral range of 10-30 nm (although a larger wavelength range may also be used in some embodiments). Compared with sub-pixel displacement or lateral shift-based super-resolution techniques, wavelength scanning introduces a uniform resolution improvement across all the directions on the sample plane and requires significantly fewer raw measurements to be made. Making fewer measurements without sacrificing performance could greatly benefit high-speed wide-field imaging, field-portable microscopy and telemedicine applications, which are all sensitive to data transmission and storage.
[0157] Mathematical Formalism of Wavelength-Scanning Pixel Super-Resolution
[0158] An assumption is made that the sample is a thin object mounted on a plane parallel to the image sensor chip and that the sample is sequentially illuminated by multiple wavelengths {λ.sub.k}. At a given wavelength λ.sub.k, the object wave can be written as o.sub.k(x, y)=1+s.sub.k(x, y), where s.sub.k(x, y) represents the scattered object wave, immediately at the exit of the object plane (z=0). The 2D Fourier transform of o.sub.k(x, y) can be written as O.sub.k(f.sub.x, f.sub.y)=δ(f.sub.x, f.sub.y)+S.sub.k(f.sub.x, f.sub.y). At the image sensor plane (z=z.sub.0), the Fourier transform of the intensity distribution, i.sub.k(x, y), can be written as Equation (3), above. On the right-hand side of Eq. (3), the first item, δ, represents the background intensity; the second and third items are conjugate holographic terms, which represent the interference of the scattered object wave with the background wave at the sensor plane. The fourth item is the self-interference term, which can be considered negligible for weakly scattering objects. The expression for T.sub.k can be written as follows:
T.sub.k(f.sub.x,f.sub.y)=H.sub.k*(f.sub.x,k,f.sub.y,k).Math.H.sub.k(f.sub.x+f.sub.x,k,f.sub.y+f.sub.y,k)(16)
[0159] where H.sub.k(f.sub.x, f.sub.y) is the free space transfer function, and the frequency shifts f.sub.x,k and f.sub.y,k are determined by the illumination wavelength and the incident angle. After the object intensity is sampled by an image sensor array with a pixel pitch of Δx and Δy, the discrete Fourier transform (DFT) of the sensor's output can be expressed as follows:
[0160] where u and v are integers and f.sub.x and f.sub.y are discrete spatial frequency values. Note that I.sub.pix, k(f.sub.x, f.sub.y)=I.sub.k(f.sub.x, f.sub.y).Math.P.sub.k(f.sub.x, f.sub.y), where P.sub.k(f.sub.x, f.sub.y) represents the Fourier transform of the pixel function, i.e., the 2D responsivity distribution within each pixel: p.sub.k(x, y). Variables u and v represent the order of spatial aliasing due to pixelation, and (u, v)=(0, 0) corresponds to the non-aliased real (i.e., target) signal. The periodic nature of the DFT enables one to extend the expression of I.sub.sampled, k to a broader frequency space by upsampling. Based on these definitions, one can express the undersampled or pixelated lens-free hologram at a given wavelength λ.sub.k as follows:
[0161] The non-aliased target signal o.sub.k(x, y) or its spatial Fourier spectrum can be obtained under (u, v)=(0, 0), i.e., δ.sub.00+S.sub.00, k, which can be written as follows:
[0162] On the left side of Eq. (19), one retains the pixel function P.sub.00, k, which can be removed later in the last step of the image reconstruction, using, for example, spatial deconvolution with a Wiener filter. Eq. (19) also shows that to obtain the non-aliased object at (u, v)=(0, 0), one needs to eliminate or subtract four terms from the upsampled and back-propagated holographic term (i.e., T.sub.00, k*.Math.I.sub.sampled, k). To this end, the first item to eliminate, T.sub.00, k*.Math.T.sub.00, k*.Math.P.sub.00, k.Math.S.sub.00, k.sup.*, is the twin image noise, a characteristic artifact of in-line holography. The second term in Eq. (19), which contains S.sub.uv, k and S.sub.uv, k.sup.* (u≠0, v≠0) in the summation, represents the effects of spatial aliasing and undersampling for both the real and twin image terms. The third item, which contains δ.sub.uv in the summation, is the periodic background artifact generated during the upsampling process, and the last item is the self-interference term and its upsampling related artifacts.
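The free-space transfer function H.sub.k and the propagation phasor of Eq. (16) can be illustrated numerically. The sketch below assumes the standard angular-spectrum form of H.sub.k; the grid size, pixel pitch, height, and wavelength are example values of our own choosing, not taken from the experiments:

```python
import numpy as np

def free_space_tf(n, dx, z, wavelength):
    """Angular-spectrum free-space transfer function H_k(f_x, f_y)
    on an n-by-n frequency grid with spatial sampling dx, propagation
    distance z, at the given wavelength (all in meters)."""
    f = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(f, f, indexing="ij")
    # Squared longitudinal direction cosine; evanescent waves are zeroed.
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    return np.where(arg > 0,
                    np.exp(2j * np.pi * (z / wavelength)
                           * np.sqrt(np.maximum(arg, 0.0))),
                    0.0)

# For normal incidence (f_{x,k} = f_{y,k} = 0), Eq. (16) reduces to
# T_k = H_k*(0, 0) . H_k(f_x, f_y):
H = free_space_tf(n=256, dx=1.12e-6, z=190e-6, wavelength=530e-9)
T00 = np.conj(H[0, 0]) * H
```

For propagating frequencies the phasor is a pure phase term (unit magnitude), which is why it can redistribute but never amplify the aliasing terms it multiplies.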
[0163] As with the other embodiments, a two-stage reconstruction algorithm is used to eliminate all four of these items listed on the right side of Eq. (19), using wavelength scanning to enable super-resolved reconstructions of complex (i.e., phase and amplitude) object functions.
[0164] Reconstruction Stage 1: Generation of a High-Resolution Initial Guess of the Sample Using Wavelength Diversity
[0165] As depicted in
[0166] This expression indicates that by summing all the back propagations at different wavelengths (over a narrow spectral range of 10-30 nm, for example), the reconstructed image, i.e., δ.sub.00+S.sub.00, k or (δ.sub.00+S.sub.00, k).Math.P.sub.00, k, can be significantly enhanced, whereas the spatial aliasing and undersampling related terms with T.sub.00, k*.Math.T.sub.uv, k will be considerably suppressed. Therefore, in this first stage of the reconstruction process, a high-resolution initial guess of the sample is generated by summing all the upsampled and back-propagated raw measurements, i.e., low-resolution diffraction patterns. The artifact items {δ.sub.uv (u≠0, v≠0)} are subtracted before the back-propagation step to create a cleaner image.
[0167] It should be noted that modifying Eq. (20) into a weighted average at each spatial frequency point could achieve better suppression of spatial aliasing and undersampling-related artifacts. However, on the computation platform used for the experimental results, which is based on a central processing unit (CPU), the search for optimal weighting factors at each frequency point would significantly increase the total computation time. Therefore, in this implementation, a simpler summation approach was chosen to minimize the computation time for the generation of the initial object guess. The spatial aliasing and undersampling-related artifacts of this initial guess will be further eliminated and cleaned up during the second stage of the algorithm as explained below.
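Reconstruction Stage 1 can be sketched in a few lines: upsample each raw hologram, back-propagate it at its own wavelength, and sum. This is a minimal illustration assuming angular-spectrum back-propagation and pixel-replication upsampling; the helper names and numeric values are ours, not the patent's:

```python
import numpy as np

def _H(n, dx, z, wavelength):
    # Angular-spectrum free-space transfer function (propagating waves only).
    f = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(f, f, indexing="ij")
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    return np.where(arg > 0,
                    np.exp(2j * np.pi * (z / wavelength)
                           * np.sqrt(np.maximum(arg, 0.0))),
                    0.0)

def initial_guess(raw_holograms, wavelengths, up, dx_sensor, z):
    """Stage 1 sketch: sum the upsampled, back-propagated holograms
    (the summation over wavelengths described in the text)."""
    acc = None
    dx_up = dx_sensor / up
    for I, wl in zip(raw_holograms, wavelengths):
        Iu = np.kron(I, np.ones((up, up)))   # pixel-replication upsampling
        n = Iu.shape[0]
        # Back-propagation = multiplication by the conjugate transfer function.
        back = np.fft.ifft2(np.fft.fft2(Iu) * np.conj(_H(n, dx_up, z, wl)))
        acc = back if acc is None else acc + back
    return acc / len(wavelengths)
```

Because the aliasing terms carry wavelength-dependent phasors while the target term does not, the sum reinforces the object signal and averages the aliasing terms toward zero.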
[0168] Reconstruction Stage 2: Multi-Wavelength-Based Iterative Pixel Super-Resolution and Phase Retrieval
[0169] The second stage of the numerical reconstruction involves an iterative algorithm, which contains four sub-steps in each iteration:
[0170] (1) Knowing each raw measurement's corresponding wavelength and incidence angle, one applies the corresponding plane wave illumination on the initial guess of the sample (from Stage 1) and propagates the optical field from the object plane to the image sensor plane using the angular spectrum approach.
[0171] (2) The amplitude of the high-resolution field on the image sensor plane is updated using the low-resolution measurement at the corresponding wavelength. To this end, the intensity of the high-resolution field is convolved with the image sensor's pixel function and downsampled to the same grid size as the pixelated raw measurement. The difference between the raw measurement and the downsampled intensity map is considered a low-resolution correction map for each illumination wavelength. A high-resolution correction map can then be generated by taking the Kronecker product of this low-resolution map and the pixel function. To perform a smooth update, this high-resolution correction map is added to the high-resolution intensity distribution with a relaxation parameter, typically set to approximately 0.5. After the smoothed update, a Wiener deconvolution filter that incorporates the image sensor's noise level is applied to this updated intensity distribution. The square root of this filtered high-resolution intensity distribution is then applied to the amplitude of the field on the sensor plane while the phase map is kept unaltered.
[0172] (3) This updated field is then back-propagated to the object plane.
[0173] (4) The back-propagated field is used to update the transmission field on the object plane. This update is performed in the frequency domain within a circular area as seen in
[0174] The four steps described above are performed for every raw (i.e., undersampled) measurement captured by the image sensor array. It is considered one iteration cycle when each one of the raw measurements has been used for amplitude update. Typically after 5-10 iteration cycles, the reconstruction converges. The convergence condition for the iteration is defined using Equation (15) above.
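The four sub-steps above can be sketched as a single update pass per raw measurement. This is a simplified illustration under assumptions of our own: the pixel function is taken as an ideal box (so downsampling is block-averaging and the Kronecker lift uses a uniform kernel), and the Wiener deconvolution sub-step and the circular frequency-domain support are omitted:

```python
import numpy as np

def _H(n, dx, z, wl):
    # Angular-spectrum free-space transfer function (propagating waves only).
    f = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(f, f, indexing="ij")
    arg = 1.0 - (wl * FX) ** 2 - (wl * FY) ** 2
    return np.where(arg > 0,
                    np.exp(2j * np.pi * (z / wl) * np.sqrt(np.maximum(arg, 0.0))),
                    0.0)

def _prop(field, dx, z, wl):
    # z > 0: object plane -> sensor plane; z < 0: back-propagation.
    return np.fft.ifft2(np.fft.fft2(field) * _H(field.shape[0], dx, z, wl))

def stage2_update(obj, raw, wl, up, dx_up, z, beta=0.5):
    """One pass of the four sub-steps for a single raw measurement."""
    # (1) forward-propagate the current object guess to the sensor plane
    field = _prop(obj, dx_up, z, wl)
    inten = np.abs(field) ** 2
    # (2) amplitude update: block-average to the sensor grid, form the
    # low-resolution correction map, lift it back with a Kronecker
    # product, and apply it with relaxation parameter beta (~0.5)
    n = inten.shape[0]
    down = inten.reshape(n // up, up, n // up, up).mean(axis=(1, 3))
    corr = np.kron(raw - down, np.ones((up, up)))
    inten_new = np.maximum(inten + beta * corr, 0.0)
    field = np.sqrt(inten_new) * np.exp(1j * np.angle(field))  # phase kept
    # (3) back-propagate the updated field to the object plane
    # (4) and use it as the new object estimate (direct replacement here;
    #     the text applies the update within a circular frequency support)
    return _prop(field, dx_up, -z, wl)
```

Running this function once per raw measurement, over all wavelengths and heights/angles, constitutes one iteration cycle in the sense used above.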
[0175] Phase Retrieval Using Multi-Height and Synthetic Aperture Techniques
[0176] Multi-height and synthetic aperture techniques have been proven to be robust phase retrieval methods for lens-free on-chip imaging. In previously reported lens-free reconstructions, pixel super-resolution and phase retrieval are carried out sequentially: at each height or illumination angle, lateral shift-based pixel super-resolution is first performed to obtain high-resolution diffraction patterns on the image sensor plane. These super-resolved diffraction patterns are then used by an iterative phase retrieval algorithm, in which wave propagations between the object plane and the image sensor plane are executed repeatedly. However, in wavelength-scanning-based pixel super-resolution, raw measurements are essentially undersampled versions of different holograms. Therefore, the same iterative algorithm detailed in the previous subsection (i.e., Reconstruction Stage 2) is used to realize resolution enhancement and phase retrieval altogether. More specifically, in the multi-height configuration, the sample is illuminated sequentially at each wavelength, and the corresponding lens-free holograms are captured before the vertical scanning stage moves the sample or the image sensor to the next height. Each height will be labeled with index l; therefore, all the measurements {I.sub.sampled, k} and the corresponding transfer functions {H.sub.k} and {T.sub.uv, k} that are used in the previous derivations can be relabeled as {I.sub.sampled, kl}, {H.sub.kl} and {T.sub.uv, kl}, respectively. During the numerical reconstruction process, all the raw holograms are upsampled, back-propagated, and then summed together to generate the high-resolution initial guess at a given height. In Stage 2 of the reconstruction algorithm, the aforementioned four-step process is applied to each raw measurement. The same set of operations and processing also apply to the synthetic aperture technique, except that index l now refers to each illumination angle instead of sample height.
[0177] In general, for pathology slides such as blood smears and Pap smears, the optical path length difference between the sample (i.e., biological tissue) and the medium (i.e., air or the sealing glue) is rather small. Under these circumstances, phase unwrapping is not a concern; therefore, in the phase recovery process, one can use a scrambled order of {I.sub.sampled, kl} in each iteration cycle. However, when working with samples with larger optical path length differences, such as grating lines carved into a glass substrate, one extra step, i.e., phase unwrapping, must be added after the reconstruction, and the order of iterations must be modified accordingly.
[0178] Multi-Wavelength Phase Unwrapping
[0179] A robust phase unwrapping algorithm requires high-resolution and phase-retrieved reconstructions at multiple wavelengths; therefore, the raw measurements are divided into subsets, in which the wavelengths are identical or very similar (e.g., within 5 nm), and the four-step reconstruction process previously discussed (as part of the Reconstruction Stage 2) is performed on each subset separately. For example, reconstruction No. 1 uses subset {I.sub.sampled, kl|k=1, l=1, . . . L}, No. 2 uses {I.sub.sampled, kl|k=2, l=1, . . . L}, etc. When the iterations for all these subsets are completed, one obtains high-resolution (i.e., super-resolved) phase-retrieved reconstructions at multiple wavelengths, i.e., {O.sub.k}, whose phase maps {φ.sub.k, wrapped} need unwrapping. Using these wrapped phase maps {φ.sub.k, wrapped} at multiple wavelengths, phase unwrapping is performed to accurately reveal the optical path length differences between the sample and the surrounding medium. Assuming that the optical path length difference is L(x, y), the phase distribution at the object plane at each wavelength can be written as φ.sub.k(x, y)=2π.Math.L(x, y)/λ.sub.k. The wrapped phase can thus be expressed as φ.sub.k, wrapped(x, y)=φ.sub.k(x, y)−2πN, where −π<φ.sub.k, wrapped≤π and N is an integer. These resulting wrapped phase maps {φ.sub.k, wrapped} that are generated through super-resolved and phase-retrieved reconstructions at multiple wavelengths are then fed into an optimization algorithm that finds the optimum path length L.sub.opt(x, y) at each spatial point (x, y) by minimizing a cost function defined as follows:
[0180] To avoid convergence to a local minimum and reduce the computation cost/time, a search range of [L.sub.0−min{λ.sub.k}/2, L.sub.0+min{λ.sub.k}/2] is defined, where L.sub.0 is the initial guess of the optical path length:
[0181] where the total number of wavelengths (K) is typically 5-10. Within this search interval, the values are scanned to find the optical path length L.sub.opt(x, y) that minimizes the cost function, resulting in an unwrapped object phase image.
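The search described above can be sketched as a brute-force scan over candidate path lengths within the stated interval. Since the document's cost-function equation is not reproduced in this excerpt, the cost below (squared distance between candidate and measured phases on the unit circle) is our stand-in, and the function name and step count are our choices:

```python
import numpy as np

def unwrap_multiwavelength(wrapped, wavelengths, L0, n_steps=2001):
    """Find L_opt(x, y) by scanning [L0 - min(wl)/2, L0 + min(wl)/2].

    wrapped: list of wrapped phase maps (radians), one per wavelength.
    L0: scalar initial guess of the optical path length (meters)."""
    wls = np.asarray(wavelengths)
    half = wls.min() / 2.0
    candidates = np.linspace(L0 - half, L0 + half, n_steps)
    shape = wrapped[0].shape
    best_cost = np.full(shape, np.inf)
    best_L = np.full(shape, L0, dtype=float)
    for L in candidates:
        cost = np.zeros(shape)
        for phi, wl in zip(wrapped, wls):
            # Distance on the unit circle; insensitive to 2*pi*N wrapping.
            cost += np.abs(np.exp(2j * np.pi * L / wl) - np.exp(1j * phi)) ** 2
        better = cost < best_cost
        best_cost = np.where(better, cost, best_cost)
        best_L = np.where(better, L, best_L)
    return best_L
```

Because the beat length of wavelengths 10-30 nm apart is many micrometers, the cost has a single sharp minimum inside the half-wavelength search window, which is what makes the scan robust.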
[0182] In one embodiment, a fundamentally new pixel super-resolution method is described that utilizes wavelength scanning to significantly improve the resolution of an undersampled or pixelated imaging system, without the use of any lateral shifts or displacements. In this technique, the sample is sequentially illuminated at a few wavelengths that are sampled from a rather narrow spectral range of 10-30 nm. Compared with sub-pixel displacement or lateral shift-based super-resolution techniques, wavelength scanning introduces a uniform resolution improvement across all the directions on the sample plane and requires significantly fewer raw measurements to be made. Making fewer measurements without sacrificing performance could greatly benefit high-speed wide-field imaging, field-portable microscopy and telemedicine applications, which are all sensitive to data transmission and storage.
[0183] Using an experimental setup, the effectiveness of this new wavelength-scanning-based pixel super-resolution approach has been demonstrated using lens-free holographic microscopy (
[0184] Optical Setup
[0185] For wavelength scanning experiments, an in-line holographic imaging system 10 such as that disclosed in
[0186] Wavelength Calibration and Dispersion Compensation
[0187] Wavelength calibration of the light source is achieved using an optical spectrum analyzer (HR2000+, Ocean Optics, Amersham, UK). The intensity-weighted average wavelength of each measured spectrum is considered as the illumination wavelength. To achieve optimal resolution, the refractive index of the glass substrate (100 μm, N-BK7, Schott AG, Mainz, Germany) at each wavelength is also corrected using the dispersion formula for borosilicate glass.
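The dispersion correction above can be illustrated with the Sellmeier equation for N-BK7. A sketch assuming the commonly published N-BK7 Sellmeier coefficients (taken from standard optical-glass data, not from this document; wavelength in micrometers):

```python
import numpy as np

def n_bk7(wavelength_um):
    """Refractive index of N-BK7 borosilicate glass via the three-term
    Sellmeier equation with the widely published coefficient set."""
    B = (1.03961212, 0.231792344, 1.01046945)
    C = (0.00600069867, 0.0200179144, 103.560653)  # um^2
    w2 = wavelength_um ** 2
    n2 = 1.0 + sum(b * w2 / (w2 - c) for b, c in zip(B, C))
    return np.sqrt(n2)
```

Across a 520-540 nm scan the index changes only in the fourth decimal place, but over a 100 μm substrate even that shift alters the optical path length by a noticeable fraction of a wavelength, which is why the per-wavelength correction matters for optimal resolution.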
[0188] Sample Preparation
[0189] The grating lines used for resolution quantification are fabricated on an approximately 100-μm glass slide (N-BK7, Schott AG, Mainz, Germany) using focused ion beam (FIB) milling. Unstained Papanicolaou (Pap) smear slides are prepared through the ThinPrep method (Hologic, Massachusetts, USA). The blood smear samples are prepared using EDTA (ethylenediaminetetraacetic acid) anticoagulated human blood and stained with Wright's stain.
[0190] Computation Platform Used for Super-Resolved Image Reconstructions
[0191] The reconstructions were performed using MATLAB (Version R2012a, MathWorks, Massachusetts, USA) on a desktop computer equipped with a 3.60-GHz central processing unit (Intel Xeon E5-1620) and 16 GB of random-access memory. For a 1×1 mm.sup.2 sub-region with an upsampling factor of seven, one iteration of the wavelength-scanning super-resolution routine takes approximately 1.2 seconds. One cycle of the algorithm, which goes through all the undersampled measurements (e.g., seven wavelengths for each angle/height, and 22 angles/heights in total), takes approximately 3 minutes. In this implementation, the iterations did not use either GPU (graphics processing unit) or parallel computing, which could significantly improve the overall computation time. The total image reconstruction time could be further improved by implementing the algorithm in the C language rather than MATLAB.
[0192] Results and Discussion
[0193] The physical basis for wavelength-scanning pixel super-resolution is the strong wavelength dependence of the undersampled interference patterns in coherent or partially coherent diffraction imaging systems such as lens-free holographic microscopy or defocused lens-based imaging systems. When illuminated at slightly different wavelengths, the high-frequency interference fringes caused by object scattering will change, causing the undersampled output of the image sensor chip to change as well. In the spatial frequency domain, the aliasing signal caused by pixel-induced undersampling is modulated by a complex transfer function whose phase is rather sensitive to even small wavelength changes, which makes it possible to use wavelength diversity within a narrow spectral range (i.e., 10-30 nm) to cancel out the spatial aliasing term and enhance the resolution of the reconstructions beyond the pixel pitch.
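This wavelength sensitivity can be illustrated numerically. In the sketch below, the propagation distance, spatial frequency, and wavelengths are illustrative values of our choosing (z = 190 μm, a frequency near the Nyquist rate of a ~1 μm pixel); for these numbers, even a 1 nm wavelength change shifts the transfer function's phase at that frequency by several radians:

```python
import numpy as np

def phasor_phase(f, z, wl):
    """Phase (radians) of the angular-spectrum free-space transfer
    function H at spatial frequency f (cycles/m), distance z,
    wavelength wl (meters)."""
    return 2.0 * np.pi * (z / wl) * np.sqrt(1.0 - (wl * f) ** 2)

f = 0.5e6  # ~Nyquist frequency of a 1 um pixel pitch
dphi = (phasor_phase(f, 190e-6, 531e-9)
        - phasor_phase(f, 190e-6, 530e-9))  # 531 nm vs 530 nm
```

Because z spans hundreds of wavelengths, the z/wl factor makes the phase at high spatial frequencies swing strongly with small wavelength steps, which is exactly the diversity the reconstruction exploits to cancel the aliasing terms.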
[0194] Without pixel super-resolution, lens-free microscopy with a unit magnification on-chip imaging geometry (in which the sample FOV equals the active area of the sensor array) can achieve a half-pitch resolution close to the image sensor's pixel pitch (i.e., approximately 1 μm in
[0195] In addition to delivering a competitive resolution and NA, the wavelength-scanning-based super-resolution approach also offers better data efficiency compared with lateral shift-based pixel super-resolution techniques, i.e., fewer raw measurements are needed for the same resolution improvement. In lateral shift-based pixel super-resolution, the sub-pixel shifts between the raw measurements are obtained by moving the light source, image sensor and/or the sample with respect to each other, and the resolution improvement is direction dependent. Therefore, sub-pixel shifts that spread uniformly within a two-dimensional pixel area are preferred in lateral shift-based pixel super-resolution techniques to achieve optimal resolution enhancement. As a result, the number of raw measurements generally increases as a quadratic function of the pixel super-resolution factor. However, in the wavelength-scanning super-resolution approach, the resolution improvement caused by wavelength diversity is uniform in all lateral directions across the sensor array, which enables us to achieve competitive resolution with far fewer raw measurements compared with lateral shift-based super-resolution. To exemplify this important advantage of our approach, in a normal-illumination configuration, compared with the lateral shift technique, which needs nine measurements to achieve a half-pitch resolution of approximately 0.6 μm (
[0196] One should re-emphasize that wavelength-scanning super-resolution requires only a few wavelengths taken from a narrow spectral range (e.g., 10-30 nm). With this new super-resolution approach, one can obtain high-resolution amplitude reconstructions of not only colorless but also colored (i.e., stained/dyed) samples without further modifications in the reconstruction algorithm. This capability has been demonstrated by imaging various biological samples, including unstained Pap smears (
[0197] In addition to significantly improving resolution and mitigating undersampling, wavelength diversity also enables one to perform robust phase unwrapping and reveal the optical path length differences between a given sample and surrounding medium. The retrieved phase reconstruction from a single wavelength is constrained to its principal value (−π, π], and therefore large optical path length differences can cause polarity errors that may not be corrected even by using state-of-the-art phase unwrapping algorithms (see, for example,
[0198] The wavelength-scanning pixel super-resolution approach, together with phase retrieval methods, including multi-height, synthetic aperture, and object-support-based techniques, could constitute high-resolution imaging systems with greatly improved imaging speed. For a bench-top system, high-speed wavelength scanning can be realized using a fast tunable source (employing, for example, an acousto-optic tunable filter with a supercontinuum source) synchronized with the image sensor chip. Compared with lateral shift-based super-resolution setups, such a combination avoids motion blur and could increase the data acquisition speed to the maximum frame rate of the image sensor. Furthermore, the lateral shifts generated by the source-shifting approach are determined by both the sample-to-image sensor and sample-to-aperture distances, which can make it challenging to generate optimized lateral shifts for samples at different vertical heights. The wavelength-scanning approach, however, is performed with evenly distributed wavelengths regardless of sample height. Therefore, wavelength-scanning pixel super-resolution may be more favorable than lateral shifting techniques for building high-resolution wide-field microscopes with high imaging speeds. Additionally, the better data efficiency of the wavelength-scanning super-resolution approach can reduce the cost of data storage and transmission, benefiting telemedicine implementations and server-based remote reconstructions.
[0199] In addition to delivering competitive results on a bench-top system, the presented wavelength-scanning pixel super-resolution approach also holds great potential for field-portable microscopy applications. Compared with lateral shift-based pixel super-resolution, the wavelength-scanning approach does not require any mechanical motion or fiber bundle (although such components may optionally be integrated), which could make the mobile imaging platform more robust. Because the wavelength scanning range is narrow (i.e., 10-30 nm), the combination of a few light-emitting diodes (LEDs), each with a standard spectral bandwidth of 15-30 nm, and a variable optical thin-film filter to narrow the LED spectra can be used to implement wavelength-scanning super-resolution in a field-portable and cost-effective design.
[0200] It should further be emphasized that the wavelength-scanning pixel super-resolution approach, in addition to lens-free or holographic diffraction-based imaging systems, can also be applied to lens-based point-to-point imaging modalities. By introducing a simple defocus condition into a lens-based imaging system (by, for example, introducing a relative axial shift of the sensor array, object and/or the objective lens along the z direction of the optical path), the entire wavelength diversity framework described herein would be able to achieve pixel super-resolution. This was experimentally demonstrated with the wavelength-scanning pixel super-resolution approach using a conventional lens-based microscope with a 10× objective lens (NA=0.3) and a CMOS sensor chip with a pixel size of 3.75 μm (see
[0201] While embodiments of the present invention have been shown and described, various modifications may be made without departing from the scope of the present invention. For example, while several embodiments are described in the context of lens-free embodiments, the reconstruction methods described here may also be used with lens-based microscope systems. The invention, therefore, should not be limited, except to the following claims, and their equivalents.