OPTICAL PHASE RETRIEVAL SYSTEMS USING COLOR-MULTIPLEXED ILLUMINATION
20190107655 · 2019-04-11
Assignee
Inventors
CPC Classification
G02B21/365
PHYSICS
H04N2209/043
ELECTRICITY
H04N23/951
ELECTRICITY
International Classification
G02B21/36
PHYSICS
Abstract
Systems and methods are disclosed for recovering both the phase and amplitude of an arbitrary sample in an optical microscope from a single image, using patterned partially coherent illumination. This is realized through the use of an encoded light source which embeds several different illumination patterns into separate color channels. The sample is modulated by each illumination wavelength separately and independently, but all of the channels are sensed by the imaging device in a single step. The resulting color image contains information about the phase and amplitude of the sample encoded in each channel, and can be used to recover both amplitude and phase from this single image at the incoherent resolution limit. Further, extensions of this method are shown which allow the same recovery while the sample is moving during a single exposure, using a motion deblurring algorithm.
Claims
1. An apparatus for recovering phase and amplitude data from an image of a sample, comprising: (a) an encoded light source configured for providing a partially coherent illumination that embeds multiple illumination patterns into a plurality of color channels each at distinct illumination wavelengths; (b) one or more optical elements configured for directing said partially coherent illumination on the sample, wherein the sample is modulated by each illumination wavelength separately and independently of each other; (c) an optical imaging device configured for sensing all color channels simultaneously; (d) a processing unit; and (e) a non-transitory memory storing instructions executable by the processing unit; (f) wherein said instructions, when executed by the processing unit, perform steps comprising: (i) generating a color image of the sample containing information about both phase and amplitude of the sample.
2. The apparatus of claim 1, wherein said instructions, when executed by the processing unit, further perform steps comprising: extracting quantitative amplitude and phase data from the color image of the sample.
3. The apparatus of claim 2, wherein the amplitude and phase data are extracted by processing the image via a single deconvolution.
4. The apparatus of claim 3, wherein the deconvolution is performed via L2 regularization.
5. The apparatus of claim 3, wherein the deconvolution is performed via L1 regularization on the object or object gradient.
6. The apparatus of claim 3, wherein the deconvolution is performed via the equation:
$\min_{A,\phi}\ \sum_{m=1}^{N} \left\| \tilde{I} - \tilde{I}_0 - \tilde{H}_{\phi,m} \cdot \tilde{\phi} - \tilde{H}_{A,m} \cdot \tilde{A} \right\|_2^2 + R(\phi, A)$

wherein I is a color intensity measurement, I₀ is a background signal, N is the total number of wavelengths, A is amplitude, φ is phase, H̃_φ,m and H̃_A,m are transfer functions for phase and amplitude, respectively, for a given wavelength index m, and R(φ, A) is a regularizer function.
7. The apparatus of claim 6, wherein the regularizer R(φ, A) is selected based on a-priori information about the sample.
8. The apparatus of claim 1, wherein said encoded light source comprises a broadband light source coupled to a static multiple-color filter configured to separate the broadband light into the multiple illumination patterns and encode the illumination into different spectral bands.
9. The apparatus of claim 1, wherein said encoded light source comprises a multiple-color LED configured to generate the multiple illumination patterns and encode the illumination into different spectral bands.
10. The apparatus of claim 8: wherein the one or more optical elements comprises a microscope; and wherein the multiple-color filter is configured to be positioned adjacent a back focal plane of a microscope.
11. The apparatus of claim 10, wherein the multiple-color filter comprises a filter insert configured to be positioned at the back focal plane of the condenser of the microscope.
12. The apparatus of claim 1, wherein said encoded light source is configured to provide contrast in either phase or amplitude.
13. The apparatus of claim 1, wherein the amplitude and phase data are extracted from a single image by said optical imaging device.
14. The apparatus of claim 1, wherein said instructions, when executed by the processing unit, further perform steps comprising: single image phase and amplitude imaging of the sample with motion deblurring.
15. A method for recovering phase and amplitude data from an image of a sample, comprising: encoding a source of light into a partially coherent illumination that embeds multiple illumination patterns into a plurality of color channels each at distinct illumination wavelengths; directing said partially coherent illumination on the sample and modulating the sample by each illumination wavelength separately and independently of each other; sensing all color channels simultaneously; and generating a color image of the sample containing information about both phase and amplitude of the sample.
16. The method of claim 15, further comprising: extracting quantitative amplitude and phase data from the color image of the sample.
17. The method of claim 15, wherein the amplitude and phase data are extracted by processing the image via a single deconvolution.
18. The method of claim 17, wherein the deconvolution is performed via L2 regularization.
19. The method of claim 17, wherein the deconvolution is performed via L1 regularization on the object or object gradient.
20. The method of claim 17, wherein the deconvolution is performed via the equation:
$\min_{A,\phi}\ \sum_{m=1}^{N} \left\| \tilde{I} - \tilde{I}_0 - \tilde{H}_{\phi,m} \cdot \tilde{\phi} - \tilde{H}_{A,m} \cdot \tilde{A} \right\|_2^2 + R(\phi, A)$

wherein I is a color intensity measurement, I₀ is a background signal, N is the total number of wavelengths, A is amplitude, φ is phase, H̃_φ,m and H̃_A,m are transfer functions for phase and amplitude, respectively, for a given wavelength index m, and R(φ, A) is a regularizer function.
21. The method of claim 20, wherein the regularizer R(φ, A) is selected based on a-priori information about the sample.
22. The method of claim 15, wherein the amplitude and phase data are extracted from a single image by said optical imaging device.
23. The method of claim 15, further comprising: single image phase imaging of the sample with motion deblurring.
24. The method of claim 23, wherein motion deblurring comprises: applying motion to the sample during imaging of the sample; and applying a motion deblurring algorithm.
Description
BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWING(S)
[0018] The technology described herein will be more fully understood by reference to the following drawings which are for illustrative purposes only:
DETAILED DESCRIPTION
[0036] 1. Hardware Description
[0037] The general hardware components for practicing the methods of the present description are: 1) a color-encoded illumination source (either a programmable source or a static filter); 2) an existing imaging system (e.g., a microscope); and 3) a color imaging sensor. Two primary options for color-multiplexed Differential Phase Contrast (cDPC) systems are detailed in the drawings.
[0042] The illumination patterns are embodied in the spectral filters 16a and 16b shown in the drawings.
[0045] The color imaging sensor 28 is ideally sensitive to both the spatial location and the spectral content (temporal frequency) of the image. The spectral sensitivity need only be fine enough to distinguish between the spectral bands into which the illumination is encoded. For example, if the illumination is encoded into red, green, and blue color channels, the camera must be able to distinguish between signals with these three spectra. In this method, it is assumed that frequencies are not modified by the sample (such as by nonlinear materials), and that chromatic dispersion effects of the sample and optical system are minimal.
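For illustration, a minimal sketch in Python of the channel-separation step implied above, assuming a demosaicked RGB image with negligible spectral crosstalk (the function and variable names are hypothetical, not from the present disclosure):

```python
import numpy as np

# Demultiplex a demosaicked RGB image into three spectrally encoded
# measurements, one per illumination pattern. Assumes the camera color
# channels match the illumination bands with negligible crosstalk.
def split_channels(rgb_image: np.ndarray) -> list:
    """rgb_image: (H, W, 3) array -> list of three 2D measurements."""
    return [rgb_image[..., c].astype(np.float64) for c in range(3)]

# Example with a synthetic color image:
measurements = split_channels(np.random.rand(64, 64, 3))
```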
[0047] 2. Computational Methods
[0048] Conventional DPC microscopy converts the amplitude (A) and optical phase (φ) information of the sample into intensity measurements. Eq. 1 and Eq. 2 show the mathematical expressions of optical phase and amplitude, respectively:

$\phi = \frac{2\pi}{\lambda_0}\, n\, d$   (Eq. 1)

$A = \frac{2\pi}{\lambda_0}\, \mu\, d$   (Eq. 2)

where λ₀ is a reference wavelength of the optical field, d is the thickness of the sample, n represents the refractive index, and µ indicates the absorption coefficient. In conventional DPC microscopy using monochromatic (single-color) illumination, the amplitude and phase transfer functions are fully determined by the system's illumination pattern, pupil function, and illumination wavelength. Monochromatic DPC assumes a constant wavelength for all acquisitions, and solves for the phase given the common illumination spectrum.
[0049] However, in the color-multiplexed DPC (cDPC) system and method of the present description, the transfer functions must also account for the change in wavelength of each color channel. In addition, the choice of illumination wavelength is variable and arbitrary, so our definition of phase (φ) depends on which wavelength we use as our reference, since phase is defined in terms of the wavelength λ₀. To resolve this ambiguity, we note that the optical path length (OPL = nd) and the absorption length (AL = µd) are constant across all wavelengths (λ). Therefore, both amplitude and phase can be synthesized for any wavelength by simply multiplying the AL and OPL by the wave number (2π/λ₀) for a desired reference wavelength λ₀.
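This wavelength rescaling is a one-line operation; a minimal sketch, assuming OPL and AL maps in the same length units as λ₀ (names hypothetical):

```python
import numpy as np

# Synthesize phase and amplitude at a chosen reference wavelength from
# the wavelength-independent optical path length (OPL = n*d) and
# absorption length (AL = mu*d), per paragraph [0049].
def synthesize_at_wavelength(opl, al, lambda_0):
    k0 = 2 * np.pi / lambda_0   # wave number at the reference wavelength
    return k0 * al, k0 * opl    # (amplitude, phase)

# Example: express a reconstruction at a 550 nm reference wavelength.
opl = np.random.rand(64, 64) * 100e-9   # OPL in meters (synthetic)
al = np.random.rand(64, 64) * 10e-9     # AL in meters (synthetic)
A, phi = synthesize_at_wavelength(opl, al, 550e-9)
```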
[0050] Using the weak-object approximation (WOA), the point spread functions (PSFs) or transfer functions for each color channel can be formulated based on a-priori information about our system. This enables us to develop a forward model for the formation of the measured intensity image, which is the sum of convolutions between the color-dependent PSFs (H_A,m, H_φ,m), defined at their respective wavelengths λ_m, and the physical quantities (A, φ):
$I = I_0 + \sum_{m=1}^{N} \left( H_{A,m} * A + H_{\phi,m} * \phi \right)$   (Eq. 3)

where I is the color intensity measurement, I₀ is the background signal, * denotes the convolution operation, m is the wavelength index, N is the total number of wavelengths, and H_A,m and H_φ,m are the point spread functions for amplitude and phase, respectively.
[0051] If we express this forward model in Fourier space by performing the 2D Fourier transform operation on both sides of the equation, we arrive at the following expression:
$\tilde{I} = \tilde{I}_0 + \sum_{m=1}^{N} \left( \tilde{H}_{A,m} \cdot \tilde{A} + \tilde{H}_{\phi,m} \cdot \tilde{\phi} \right)$   (Eq. 4)

where the tilde denotes the Fourier transform of a function, ( · ) is the point-wise product, and H̃_φ,m and H̃_A,m are the transfer functions for phase and amplitude, respectively, for a given wavelength index m.
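A minimal sketch of this forward model, assuming precomputed Fourier-space transfer functions (names hypothetical; per Eq. 4, the convolutions are evaluated as point-wise products):

```python
import numpy as np
from numpy.fft import fft2, ifft2

# Forward model of Eq. 3 / Eq. 4: background plus a sum over
# wavelengths of transfer functions applied to amplitude and phase.
def forward_model(A, phi, H_A_list, H_phi_list, I_0=1.0):
    """A, phi: (H, W) object maps; H_*_list: per-wavelength transfer
    functions sampled on the same (H, W) Fourier grid."""
    A_f, phi_f = fft2(A), fft2(phi)
    I_f = sum(H_A * A_f + H_phi * phi_f
              for H_A, H_phi in zip(H_A_list, H_phi_list))
    return I_0 + np.real(ifft2(I_f))
```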
[0052] Considering the influence of wavelength on our variable source S(λ_m) and pupil P(λ_m), the transfer functions are defined as:

$\tilde{H}_{A,m}(\mathbf{u}) = F_m(\mathbf{u}) + F_m^{*}(-\mathbf{u})$   (Eq. 5)

$\tilde{H}_{\phi,m}(\mathbf{u}) = i \left[ F_m(\mathbf{u}) - F_m^{*}(-\mathbf{u}) \right]$   (Eq. 6)

with $F_m = \left[ S(\lambda_m)\, P(\lambda_m) \right] \star P(\lambda_m)$, where S(λ_m) and P(λ_m) are the wavelength-dependent source shape and pupil function, and ⋆ represents the cross-correlation operation. Candidates for S(λ_m) in a practical implementation using asymmetric differential phase contrast are illustrated in the drawings.
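These transfer functions can be evaluated numerically with FFT-based cross-correlations. A sketch under stated assumptions (a real binary pupil, a centered odd-sized frequency grid, and unspecified normalization; not necessarily the disclosure's exact conventions):

```python
import numpy as np
from numpy.fft import fft2, ifft2, fftshift, ifftshift

def cross_corr(f, g):
    """Cross-correlation of two centered 2D arrays, computed via FFT."""
    return fftshift(ifft2(np.conj(fft2(ifftshift(f))) * fft2(ifftshift(g))))

def wotf(S, P):
    """Weak-object transfer functions for one wavelength (sketch).
    S: source pattern, P: pupil, sampled on the same centered grid;
    odd dimensions make the flip below map u -> -u exactly."""
    F = cross_corr(S * P, P)           # F_m = [S P] star P (Eqs. 5-6)
    F_flip = np.conj(F[::-1, ::-1])    # F_m*(-u)
    H_A = F + F_flip                   # amplitude transfer function
    H_phi = 1j * (F - F_flip)          # phase transfer function
    return H_A, H_phi

# Example: asymmetric half-pupil source inside a circular pupil.
n = 129
ux, uy = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))
P = (ux**2 + uy**2 <= 0.5**2).astype(float)   # illustrative pupil radius
S = P * (uy > 0)                              # half-circle DPC source
H_A, H_phi = wotf(S, P)
```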
[0053] To retrieve the amplitude and phase of the object, we formulate our inverse problem as:
$\min_{A,\phi}\ \sum_{m=1}^{N} \left\| \tilde{I} - \tilde{I}_0 - \tilde{H}_{\phi,m} \cdot \tilde{\phi} - \tilde{H}_{A,m} \cdot \tilde{A} \right\|_2^2 + R(\phi, A)$   (Eq. 7)
[0054] This problem is linear and can be solved with a one-step deconvolution or an iterative algorithm, such as steepest descent, and can be incorporated into application programming 70, as appropriate. The choice of regularizer R(φ, A) depends on a-priori information about the object. For instance, if the sample is sparse (only a small number of pixels have non-zero values), one can use L1 regularization. Conversely, when the object has a limited amount of energy, L2 regularization helps avoid amplifying noise in the reconstruction. Similarly, other types of regularization, such as sparsity of gradients, can be applied based on different a-priori information. Application programming 70 may provide options for the regularizer and other input variables as appropriate.
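With L2 regularization, Eq. 7 admits a closed-form single-step solution: a 2×2 system is inverted at each spatial frequency. A minimal sketch, assuming background-subtracted channel images and illustrative regularization weights (names and defaults hypothetical):

```python
import numpy as np
from numpy.fft import fft2, ifft2

# One-step Tikhonov (L2) solution of Eq. 7 via point-wise inversion of
# the 2x2 normal equations in Fourier space.
def cdpc_deconvolve(I_list, H_A_list, H_phi_list, reg_A=1e-1, reg_phi=1e-3):
    """I_list: background-subtracted channel images; H_*_list: matching
    amplitude/phase transfer functions. Returns (A, phi)."""
    AA = sum(np.abs(H)**2 for H in H_A_list) + reg_A
    PP = sum(np.abs(H)**2 for H in H_phi_list) + reg_phi
    AP = sum(np.conj(Ha) * Hp for Ha, Hp in zip(H_A_list, H_phi_list))
    bA = sum(np.conj(Ha) * fft2(I) for Ha, I in zip(H_A_list, I_list))
    bp = sum(np.conj(Hp) * fft2(I) for Hp, I in zip(H_phi_list, I_list))
    det = AA * PP - AP * np.conj(AP)      # 2x2 determinant per frequency
    A_f = (PP * bA - AP * bp) / det
    phi_f = (AA * bp - np.conj(AP) * bA) / det
    return np.real(ifft2(A_f)), np.real(ifft2(phi_f))
```

An iterative solver would replace this point-wise inversion with repeated applications of the forward model, which also accommodates L1 or gradient-sparsity regularizers.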
[0055] 3. Results
[0056] A simulation of the cDPC phase retrieval technique was performed using 3 color channels and a deconvolution process with L2 regularization. For the partially coherent, asymmetric source patterns, color channels corresponding to blue (450 nm), green (550 nm), and red (650 nm) light were used. These patterns were similar to those illustrated in the drawings.
[0057] The numerical aperture (NA) of the system was set to 0.25, the image dimensions to 256×256, and the pixel size to 0.325 µm, which corresponds to the parameters of a commercial Nikon TE300 microscope with a 10× objective.
[0059] In addition to the simulation results, an experiment was performed to verify the capability of cDPC. We measured the phase and amplitude of a commercially available micro-lenslet array, where each lenslet has a side length of 130 µm, with an illumination pattern designed to replicate that used in our simulations.
[0062] 4. Phase Imaging with Motion Deblur
[0063] Throughput is a primary concern for many imaging applications; for example, the clinical laboratory of a large hospital may need to scan hundreds of histology slides per hour. Often, slide imaging or scanning large samples at high resolution requires the registration and tiling of many images acquired using a high-magnification objective. Acquiring these datasets is often very slow, owing to the precise positioning required as well as a necessary autofocusing step prior to each acquisition. Optimizing this process would clearly benefit hospitals and research centers studying datasets that require a large field of view at high resolution.
[0064] Most modern high-throughput imaging systems use a stop-and-stare imaging style, where images are acquired by moving the sample to many positions, stopping, autofocusing, and finally acquiring an image. The full-field image is then stitched together using a registration algorithm. As mentioned previously, this method is often prohibitively slow due to the time needed to stop and start the motion as well as to autofocus. A promising alternative to this scheme is strobed illumination, where the sample is moved constantly while the illumination is flashed once per exposure with a very short, high-intensity pulse. In this framework the image is still blurred, but the blur is designed to be smaller than the system PSF, causing no image degradation.
[0065] Strobed illumination is an ideal implementation in many applications. However, producing a very bright, very short pulse can often be difficult, particularly when a sample is moving fast, or in a low-resource setting such as a portable device where illumination power and intensity are restricted. A promising alternative to strobed illumination is motion compensation using a deblurring algorithm which incorporates hardware coding techniques.
[0066] Phase imaging using the Weak Object Transfer Function (WOTF) is highly compatible with motion deblur since both are modeled as linear convolutions on the same object. Our forward model for single image phase imaging with motion deblur is a combination of the existing motion deblur model with the WOA. In the case of a single image, we model the blurred intensity image as two separate convolutions, applied sequentially:
$I = B * \left[ H_{A} * A + H_{\phi} * \phi \right]$   (Eq. 8)
[0067] We can express the above equation as a block-wise matrix product in the Fourier domain, letting B̃, H̃_A and H̃_φ be the diagonalized Fourier transforms of the respective transfer functions, and Ĩ, Ã and φ̃ be the vectorized image and object components, respectively:
$\tilde{I} = \tilde{B} \cdot \begin{bmatrix} \tilde{H}_{A} & \tilde{H}_{\phi} \end{bmatrix} \begin{bmatrix} \tilde{A} \\ \tilde{\phi} \end{bmatrix}$   (Eq. 9)
[0068] Combining measurements from the three color channels, we model the full over-determined system as:

$\begin{bmatrix} \tilde{I}_1 \\ \tilde{I}_2 \\ \tilde{I}_3 \end{bmatrix} = \begin{bmatrix} \tilde{B}_1 \tilde{H}_{A,1} & \tilde{B}_1 \tilde{H}_{\phi,1} \\ \tilde{B}_2 \tilde{H}_{A,2} & \tilde{B}_2 \tilde{H}_{\phi,2} \\ \tilde{B}_3 \tilde{H}_{A,3} & \tilde{B}_3 \tilde{H}_{\phi,3} \end{bmatrix} \begin{bmatrix} \tilde{A} \\ \tilde{\phi} \end{bmatrix}$   (Eq. 10)

where the subscripts 1-3 index the color channels.
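A sketch of this blurred forward model, assuming per-channel blur spectra and transfer functions as inputs (names hypothetical):

```python
import numpy as np
from numpy.fft import fft2, ifft2

# Blurred forward model of Eqs. 8-10: each color channel is filtered by
# its WOTFs and then by its Fourier-domain blur kernel.
def blurred_forward(A, phi, B_f_list, H_A_list, H_phi_list):
    """B_f_list: per-channel blur kernel spectra; H_*_list: per-channel
    transfer functions; all on the same grid as A and phi."""
    A_f, phi_f = fft2(A), fft2(phi)
    return [np.real(ifft2(B_f * (H_A * A_f + H_phi * phi_f)))
            for B_f, H_A, H_phi in zip(B_f_list, H_A_list, H_phi_list)]
```

Because the blur simply multiplies each channel's transfer functions in Fourier space, the single-step deconvolution sketched earlier can be reused with the effective transfer functions B̃·H̃_A and B̃·H̃_φ.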
[0069] Our patterns are configured such that the condition number of the blur kernel is minimized. By combining motion deblurring with the linearized phase retrieval technique presented in the previous section, we can use our knowledge of the WOTF, as derived previously, to influence the choice of B and improve the overall phase and amplitude reconstruction from blurred data. The motion deblur problem as presented here will always degrade the result, even with an ideal B, due to the constraints placed on the optimization problem. Previously, degradation due to the blurring operation was minimized by solving for a blur kernel with an optimally flat Fourier spectrum. That method, however, did not take into account the additional attenuation due to the OTF of the optical system.
[0070] A unique aspect of the deblur methods of the present description is to consider the spectra of the cascaded filters in our system when designing the blur kernel. We can think of our sequential deconvolution problem as a single-step deconvolution which inverts the element-wise product of the blur kernel and the WOTFs in the Fourier domain. Therefore, the relative attenuation produced by the blur kernel at each frequency can be matched so as to reduce the degradation of highly attenuated frequencies in the WOTF, such as high frequencies in both the amplitude and phase WOTFs, as well as low frequencies in the phase WOTF. The exact structure of this transfer function depends on the pupil function of the optical system and the design of the source. In practice, we note that the phase transfer function is of higher order than the amplitude transfer function (OTF), which generally means there are more zero crossings and values close to zero in the phase transfer function. Therefore, we use the phase transfer function for optimizing the blur kernel.
[0071] To solve for the optimal blur kernel considering the WOTF, we modify the kernel-design problem (Eq. 11) to incorporate a 1D spectral reference q, which provides a measure of the attenuation imposed by the optical system at each spatial frequency of the blur kernel.
[0072] For linear kernels, we choose q to be the sum of the magnitude of the phase transfer function along the direction orthogonal to the blur direction, as shown in the drawings.
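A minimal sketch of this spectral reference, assuming a blur along one image axis and a 2D phase transfer function on a centered grid (axis conventions are assumptions):

```python
import numpy as np

# 1D spectral reference q per paragraph [0072]: magnitude of the phase
# WOTF summed along the direction orthogonal to the blur.
def spectral_reference(H_phi: np.ndarray, blur_axis: int = 1) -> np.ndarray:
    """H_phi: 2D phase transfer function; returns q along the blur
    axis, normalized to a peak of 1."""
    q = np.abs(H_phi).sum(axis=1 - blur_axis)
    return q / (q.max() + 1e-12)
```

The kernel-design search can then weight candidate blur spectra |B̃(u)| by q(u), so that frequencies the optical system already attenuates are not further suppressed by the blur.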
[0075] Simulation results for combined motion deblurring and quantitative phase imaging are shown in the drawings.
[0076] In this simulation we note that the normalized sum-squared error (N-SSE) in phase is reduced significantly by our method. The amplitude N-SSE did increase slightly, likely because we used the phase WOTF rather than the amplitude WOTF to generate our blur kernels. The choice of which WOTF to use could be application-dependent.
[0077] To verify our results experimentally, we developed a system consisting of a commercial Nikon AZ100 microscope with a 1×, 0.10 NA objective, a Bayer-patterned sCMOS camera (PCO edge 5.5), an XY stage with linear encoders (Prior H117), and illumination from 23 multi-channel LEDs (Jameco 2128500) arranged in a hexagonal pattern using a laser-cut holder for positioning. The LEDs were placed approximately 160 mm from the sample such that the outer LEDs illuminate the sample from angles just inside the NA of the microscope. This is done to ensure maximum phase contrast and bandwidth (resolution) of the system. The LEDs are controlled using a Teensy 3.2 microcontroller, which can be dynamically programmed. Camera exposures, stage movement, and illumination modulation are synchronized open-loop with 5 ms timing resolution, limited by the update speed of the LED array.
[0078] Our forward model considers the case where our LED illumination is incoherent and discrete, both spatially and temporally. We assume each emitter has three coincident sub-emitters for red (λ = 625 nm), green (λ = 525 nm), and blue (λ = 470 nm) wavelengths, which propagate through the optical system independently of each other and are detected separately by the Bayer filter of the color camera. We assume a sample which is non-dispersive and unstained. A velocity of 25 mm per second was used for sample movement, but this could be increased by improving hardware synchronization. Blur kernels were calculated using the calibrated phase WOTFs for each color channel separately, considering the spacing of k-space due to wavelength.
[0079] To test our method, we used a micro-lens array (Fresnel-Tech 605) as our sample due to its well-defined geometry. Reconstructions for the static case, previous method, and our method were performed. While the sample amplitude was relatively unchanged, the phase reconstructions clearly show that accounting for the spectral reference provides better results than optimizing the blur kernel alone. This supports our claim that image degradation from blurring can be reduced significantly, but not eliminated, using our method.
[0080] 5. Advantages
[0081] The cDPC deconvolution method of the present description needs only a single image to recover both amplitude and phase. Unlike the original Differential Phase Contrast inversion method, the cDPC deconvolution method uses an RGB color camera (e.g., Bayer or similar patterning) to acquire the same information in a single image. This is done by multiplexing different images corresponding to different asymmetric illumination patterns into the color channels, which propagate independently of each other through the optical system. Single-shot imaging is substantially more useful than multiple exposures, enabling camera-limited video frame rates and the use of standard file formats. The raw frames are useful qualitatively due to the embedded phase contrast, but can also be post-processed using a simple linear algorithm to recover the quantitative amplitude and phase.
[0082] The cDPC deconvolution method has minimal hardware requirements, which are comparable to existing phase imaging methods, but provides substantially more information. The proposed condenser insert 40 embodiment shown in the drawings exemplifies these minimal requirements.
[0083] The cDPC deconvolution method can be implemented cheaply using several hardware configurations, such as a programmable LED array or a color filter insert, and is compatible with most infinity-corrected microscopes which provide access to the back focal plane. Infinity-corrected systems have become commonplace in recent years, which makes the cDPC deconvolution method especially viable. The cDPC deconvolution method can recover phase using a simple condenser insert having several band-pass filters spanning the back focal plane in an asymmetric arrangement. This configuration is compatible with any current phase contrast or DIC microscope with removable condenser inserts. Furthermore, similar results can be achieved using a programmable multi-color LED array, making it compatible with complementary super-resolution methods.
[0084] The cDPC deconvolution method uses partially coherent illumination, which provides twice the resolution of coherent phase retrieval techniques. Coherent imaging methods such as interferometry or Transport of Intensity (TIE) phase imaging require coherent illumination, which limits the spatial resolution of the images. The cDPC deconvolution method uses a partially coherent illumination pattern, which provides resolution similar to bright-field imaging (a 2× improvement in resolution over coherent imaging). Moreover, the cDPC deconvolution method does not suffer from loss of field of view, as is the case for single-shot off-axis holography.
[0085] Since the cDPC deconvolution method is single-shot, it may be used in conjunction with a wide-variety of computational imaging techniques, such as recovering an image of a moving sample using a motion deconvolution algorithm or acquiring a high-speed quantitative phase video of a transparent sample.
[0086] Embodiments of the present technology may be described herein with reference to flowchart illustrations of methods and systems according to embodiments of the technology, and/or procedures, algorithms, steps, operations, formulae, or other computational depictions, which may also be implemented as computer program products. In this regard, each block or step of a flowchart, and combinations of blocks (and/or steps) in a flowchart, as well as any procedure, algorithm, step, operation, formula, or computational depiction can be implemented by various means, such as hardware, firmware, and/or software including one or more computer program instructions embodied in computer-readable program code. As will be appreciated, any such computer program instructions may be executed by one or more computer processors, including without limitation a general purpose computer or special purpose computer, or other programmable processing apparatus to produce a machine, such that the computer program instructions which execute on the computer processor(s) or other programmable processing apparatus create means for implementing the function(s) specified.
[0087] Accordingly, blocks of the flowcharts, and procedures, algorithms, steps, operations, formulae, or computational depictions described herein support combinations of means for performing the specified function(s), combinations of steps for performing the specified function(s), and computer program instructions, such as embodied in computer-readable program code logic means, for performing the specified function(s). It will also be understood that each block of the flowchart illustrations, as well as any procedures, algorithms, steps, operations, formulae, or computational depictions and combinations thereof described herein, can be implemented by special purpose hardware-based computer systems which perform the specified function(s) or step(s), or combinations of special purpose hardware and computer-readable program code.
[0088] Furthermore, these computer program instructions, such as embodied in computer-readable program code, may also be stored in one or more computer-readable memory or memory devices that can direct a computer processor or other programmable processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory or memory devices produce an article of manufacture including instruction means which implement the function specified in the block(s) of the flowchart(s). The computer program instructions may also be executed by a computer processor or other programmable processing apparatus to cause a series of operational steps to be performed on the computer processor or other programmable processing apparatus to produce a computer-implemented process such that the instructions which execute on the computer processor or other programmable processing apparatus provide steps for implementing the functions specified in the block(s) of the flowchart(s), procedure(s), algorithm(s), step(s), operation(s), formula(e), or computational depiction(s).
[0089] It will further be appreciated that the terms programming or program executable as used herein refer to one or more instructions that can be executed by one or more computer processors to perform one or more functions as described herein. The instructions can be embodied in software, in firmware, or in a combination of software and firmware. The instructions can be stored local to the device in non-transitory media, or can be stored remotely such as on a server, or all or a portion of the instructions can be stored locally and remotely. Instructions stored remotely can be downloaded (pushed) to the device by user initiation, or automatically based on one or more factors.
[0090] It will further be appreciated that, as used herein, the terms processor, hardware processor, computer processor, central processing unit (CPU), and computer are used synonymously to denote a device capable of executing the instructions and communicating with input/output interfaces and/or peripheral devices, and that the terms processor, hardware processor, computer processor, CPU, and computer are intended to encompass single or multiple devices, single core and multicore devices, and variations thereof.
[0091] From the description herein, it will be appreciated that the present disclosure encompasses multiple embodiments which include, but are not limited to, the following:
[0092] 1. An apparatus for recovering phase and amplitude data from an image of a sample, comprising: (a) an encoded light source configured for providing a partially coherent illumination that embeds multiple illumination patterns into a plurality of color channels each at distinct illumination wavelengths; (b) one or more optical elements configured for directing said partially coherent illumination on the sample, wherein the sample is modulated by each illumination wavelength separately and independently of each other; (c) an optical imaging device configured for sensing all color channels simultaneously; (d) a processing unit; and (e) a non-transitory memory storing instructions executable by the processing unit; (f)
[0093] wherein said instructions, when executed by the processing unit, perform steps comprising: (i) generating a color image of the sample containing information about both phase and amplitude of the sample.
[0094] 2. The apparatus of any preceding embodiment, wherein said instructions, when executed by the processing unit, further perform steps comprising: extracting quantitative amplitude and phase data from the color image of the sample.
[0095] 3. The apparatus of any preceding embodiment, wherein the amplitude and phase data are extracted by processing the image via a single deconvolution.
[0096] 4. The apparatus of any preceding embodiment, wherein the deconvolution is performed via L2 regularization.
[0097] 5. The apparatus of any preceding embodiment, wherein the deconvolution is performed via L1 regularization on the object or object gradient.
[0098] 6. The apparatus of any preceding embodiment, wherein the deconvolution is performed via the equation: $\min_{A,\phi}\ \sum_{m=1}^{N} \left\| \tilde{I} - \tilde{I}_0 - \tilde{H}_{\phi,m} \cdot \tilde{\phi} - \tilde{H}_{A,m} \cdot \tilde{A} \right\|_2^2 + R(\phi, A)$; wherein I is a color intensity measurement, I₀ is a background signal, N is the total number of wavelengths, A is amplitude, φ is phase, H̃_φ,m and H̃_A,m are transfer functions for phase and amplitude, respectively, for a given wavelength index m, and R(φ, A) is a regularizer function.
[0099] 7. The apparatus of any preceding embodiment, wherein the regularizer R(φ, A) is selected based on a-priori information about the sample.
[0100] 8. The apparatus of any preceding embodiment, wherein said encoded light source comprises a broadband light source coupled to a static multiple-color filter configured to separate the broadband light into the multiple illumination patterns and encode the illumination into different spectral bands.
[0101] 9. The apparatus of any preceding embodiment, wherein said encoded light source comprises a multiple-color LED configured to generate the multiple illumination patterns and encode the illumination into different spectral bands.
[0102] 10. The apparatus of any preceding embodiment: wherein the one or more optical elements comprises a microscope; and wherein the multiple-color filter is configured to be positioned adjacent a back focal plane of a microscope.
[0103] 11. The apparatus of any preceding embodiment, wherein the multiple-color filter comprises a filter insert configured to be positioned at the back focal plane of the condenser of the microscope.
[0104] 12. The apparatus of any preceding embodiment, wherein said encoded light source is configured to provide contrast in either phase or amplitude.
[0105] 13. The apparatus of any preceding embodiment, wherein the amplitude and phase data are extracted from a single image by said optical imaging device.
[0106] 14. The apparatus of any preceding embodiment, wherein said instructions, when executed by the processing unit, further perform steps comprising: single image phase and amplitude imaging of the sample with motion deblurring.
[0107] 15. A method for recovering phase and amplitude data from an image of a sample, comprising: encoding a source of light into a partially coherent illumination that embeds multiple illumination patterns into a plurality of color channels each at distinct illumination wavelengths; directing said partially coherent illumination on the sample and modulating the sample by each illumination wavelength separately and independently of each other; sensing all color channels simultaneously; and generating a color image of the sample containing information about both phase and amplitude of the sample.
[0108] 16. The method of any preceding embodiment, further comprising: extracting quantitative amplitude and phase data from the color image of the sample.
[0109] 17. The method of any preceding embodiment, wherein the amplitude and phase data are extracted by processing the image via a single deconvolution.
[0110] 18. The method of any preceding embodiment, wherein the deconvolution is performed via L2 regularization.
[0111] 19. The method of any preceding embodiment, wherein the deconvolution is performed via L1 regularization on the object or object gradient.
[0112] 20. The method of any preceding embodiment, wherein the deconvolution is performed via the equation: $\min_{A,\phi}\ \sum_{m=1}^{N} \left\| \tilde{I} - \tilde{I}_0 - \tilde{H}_{\phi,m} \cdot \tilde{\phi} - \tilde{H}_{A,m} \cdot \tilde{A} \right\|_2^2 + R(\phi, A)$; wherein I is a color intensity measurement, I₀ is a background signal, N is the total number of wavelengths, A is amplitude, φ is phase, H̃_φ,m and H̃_A,m are transfer functions for phase and amplitude, respectively, for a given wavelength index m, and R(φ, A) is a regularizer function.
[0113] 21. The method of any preceding embodiment, wherein the regularizer R(φ, A) is selected based on a-priori information about the sample.
[0114] 22. The method of any preceding embodiment, wherein the amplitude and phase data are extracted from a single image by said optical imaging device.
[0115] 23. The method of any preceding embodiment, further comprising: single image phase imaging of the sample with motion deblurring.
[0116] 24. The method of any preceding embodiment, wherein motion deblurring comprises: applying motion to the sample during imaging of the sample; and applying a motion deblurring algorithm.
[0117] Although the description herein contains many details, these should not be construed as limiting the scope of the disclosure but as merely providing illustrations of some of the presently preferred embodiments. Therefore, it will be appreciated that the scope of the disclosure fully encompasses other embodiments which may become obvious to those skilled in the art.
[0118] In the claims, reference to an element in the singular is not intended to mean one and only one unless explicitly so stated, but rather one or more. All structural, chemical, and functional equivalents to the elements of the disclosed embodiments that are known to those of ordinary skill in the art are expressly incorporated herein by reference and are intended to be encompassed by the present claims. Furthermore, no element, component, or method step in the present disclosure is intended to be dedicated to the public regardless of whether the element, component, or method step is explicitly recited in the claims. No claim element herein is to be construed as a means plus function element unless the element is expressly recited using the phrase means for. No claim element herein is to be construed as a step plus function element unless the element is expressly recited using the phrase step for.