IMAGING SYSTEM AND METHOD FOR IMAGING OBJECTS WITH REDUCED IMAGE BLUR
20230222629 · 2023-07-13
Assignee
Inventors
CPC classification
H04N23/81
ELECTRICITY
H04N23/951
ELECTRICITY
H04N23/55
ELECTRICITY
G02B13/001
PHYSICS
G02B27/0075
PHYSICS
International classification
H04N23/81
ELECTRICITY
H04N23/951
ELECTRICITY
G02B13/00
PHYSICS
Abstract
An imaging device is presented for use in an imaging system capable of improving the image quality. The imaging device has one or more optical systems defining an effective aperture of the imaging device. The imaging device comprises a lens system having an algebraic representation matrix of a diagonalized form defining a first Condition Number, and a phase encoder utility adapted to effect a second Condition Number of an algebraic representation matrix of the imaging device, smaller than said first Condition Number of the lens system.
Claims
1. An optical system comprising: a first lens system and a second lens system, the first lens system collecting light from an object and including a first optical axis and a first Numerical Aperture, the second lens system also collecting light from said object and including a second optical axis parallel to and spaced-apart from the first optical axis and a second Numerical Aperture smaller than the first Numerical Aperture, a first imaging detection device located on the first optical axis and configured to detect the light collected by the first lens system and to generate corresponding first image data bore-sight to an image plane, and a second imaging detection device located on the second optical axis and configured to detect the light collected by the second lens system and to generate corresponding second image data also bore-sight to said image plane; and a control system configured and operable to receive the first image data from the first imaging detection device and the second image data from the second imaging detection device, and process the first and second image data to computationally produce reconstructed image data.
2. The system of claim 1, wherein the reconstructed image data provides an improvement in signal-to-noise ratio (SNR).
3. The imaging system of claim 1, wherein the reconstructed image data provides an improvement in resolution.
4. The system of claim 1, wherein the reconstructed image data provides an improvement in distortion.
5. The system of claim 1, wherein the reconstructed image data provides an improvement in blur.
6. The system of claim 1, being implemented in a cellular phone or a video camera.
7. The system of claim 1, wherein the control system is configured to operate the first and the second imaging detection devices concurrently.
8. The system of claim 1, wherein the control system is configured to operate the first and the second imaging detection devices at different times.
9. The system of claim 1, wherein the reconstructed image data provides an improvement in the accuracy of greyscale rendition.
10. An optical system comprising: a plurality of lens systems, each lens system configured to capture at least an image corresponding to an object in a common object plane; wherein at least two of the plurality of lens systems have different numerical apertures; a control system configured and operable to receive and process the captured images from the plurality of lens systems, the control system also configured and operable to process two or more successive images captured using at least one of the plurality of lens systems; the processing including duplicating and shifting the images, producing a combined image and outputting a reconstructed image, wherein the reconstructed image is an image with improved image quality comprising a reduction in blur of the object in the common object plane.
11. The optical system of claim 10, wherein the improvement in image quality further comprises any one or more of an improved signal-to-noise ratio (SNR), an improved resolution, a reduced distortion and an improved dynamic range.
12. The system of claim 10, being implemented in a cellular phone or a video camera.
13. An optical system comprising: a plurality of lens systems, each lens system configured to capture at least an image corresponding to an object in a common object plane; wherein at least two of the plurality of lens systems have different numerical apertures; a control system configured and operable to receive and process the captured images from the plurality of lens systems, the control system also configured and operable to process two or more successive images captured using the same lens system; the processing including duplicating and shifting the images, producing a combined image and outputting a reconstructed image, wherein the reconstructed image is an image with improved image quality comprising a reduction in blur of the object in the common object plane.
14. The optical system of claim 13, wherein the improvement in image quality further comprises any one or more of an improved signal-to-noise ratio (SNR), an improved resolution, a reduced distortion and an improved dynamic range.
15. The system of claim 14, being implemented in a cellular phone or a video camera.
Description
BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS
[0062] In order to understand the invention and to see how it may be carried out in practice, embodiments will now be described, by way of non-limiting example only, with reference to the accompanying drawings, in which:
DETAILED DESCRIPTION OF EMBODIMENTS
[0097] Referring to
[0098] Imaging device 2 includes a lens system 10 (one or more lenses) defining an optical axis OA of light propagation; and a corrector utility 20 applying a correction function on light passing through the imaging device, which in the present example is constituted by phase encoder 20 for applying a phase correction function, the configuration of which will be described further below. Imaging device 2 is configured and operable according to the invention for reducing image blur created by the lens system. It should be understood that a lens may be any optical element having a lensing effect suitable for use in imaging, e.g. a lensing mirror.
[0099] The lens system and phase encoder are arranged together to define an effective aperture EA of the imaging device 2. The lens system 10 is at times referred to herein below as a main system or main lens or main optics or first system. The main system has an algebraic representation matrix of a diagonalized form defining a first Condition Number. The phase encoder 20 is at times referred to herein below as an auxiliary system or second system or auxiliary lens or auxiliary optics (considered together with a respective portion of the main lens). The phase encoder 20 is configured to produce a second Condition Number of an algebraic representation matrix of the imaging device 2, smaller than the first Condition Number of the main lens matrix.
[0100] As will be exemplified further below, the phase encoder 20 may be configured as a mask defining a first pattern located in a first region, which is aligned (along the optical axis of the lens system) with a part of the effective aperture, leaving a remaining part of the effective aperture free of the first pattern. The geometry of the first region and configuration of the first pattern therein are such as to define a predetermined first phase correction function induced by the first pattern onto light passing therethrough.
[0101] Control unit 6 typically includes a computer system including inter alia a processor utility, data input/output utilities, data presentation utility (display), etc. The phase encoder 20 and control unit 6 operate together to produce a high-quality image. The imaging device 2 equipped with the phase encoder 20 appropriately distorts (codes) the wavefront of light collected from the object, and data corresponding to such distorted image is then digitally processed by the control unit to restore a distortion-free image, with improved quality.
[0102] An algebraic representation of an imaging device with no phase encoder (i.e. an imaging lens system) can be described by a matrix H, the columns of which are the PSF for each field point in vector representation. Accordingly, said imaging device may be represented by matrix H consisting of L×L elements where L=m×n, for the n×m pixel representation of the image.
[0103] One way to diagonalize the matrix H is to apply to it a Singular Value Decomposition (SVD). According to this method, the singular values of H are the square roots of the eigenvalues of the matrix H·H^t, where H^t is the transposed matrix of H, and H may be represented as [2]:

H = U·S·V^t (3)

where the matrices U and V are found by solving the following eigenvalue problems:

H·H^t = U·Δ·U^t (4)

H^t·H = V·Δ·V^t (5)

U_(L×L) = [u_1, u_2, . . . , u_L] (6)

V_(L×L) = [v_1, v_2, . . . , v_L] (7)

(u_i and v_i are the column vectors of U and V, respectively). The singular value matrix S can then be obtained from the matrix Δ as S = Δ^(1/2).
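The SVD relations (3)-(7) can be checked numerically. This is a minimal sketch with a small random matrix standing in for the PSF matrix H; the 10×10 size is an arbitrary choice and NumPy is assumed:

```python
import numpy as np

# Minimal check of the SVD relations (3)-(7) on a random stand-in for
# the PSF matrix H; the 10x10 size is arbitrary.
rng = np.random.default_rng(0)
L = 10
H = rng.random((L, L))

U, s, Vt = np.linalg.svd(H)   # eq. (3): H = U·diag(s)·V^t
S = np.diag(s)

# The singular values are the square roots of the eigenvalues of H·H^t
# (eqs. (4)-(5)); eigvalsh returns them in ascending order.
eigvals = np.sort(np.linalg.eigvalsh(H @ H.T))[::-1]
assert np.allclose(s, np.sqrt(eigvals))

# Reconstruction check for eq. (3):
assert np.allclose(H, U @ S @ Vt)
```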
[0104] It should be noted that there are usually many ways, in addition to SVD, to diagonalize matrix H. The present invention may be employed in conjunction with any diagonalization procedure of the representing matrix H which results in a diagonal matrix S, and the principles of the invention are not limited to the example described herein.
[0105] Returning to the SVD-based approach, the positive square roots of the eigenvalues of Δ are the values σ_i in S, also termed the singular values of H. The rows of S are ordered such that the σ_i run from high to low:

σ_1 ≥ σ_2 ≥ . . . ≥ σ_L (8)

One of the figures of merit for the matrix H is a Condition Number k_(H), defined by a ratio as in (9) and (10) below:

k_(H) = ⟨σ_m⟩/⟨σ_n⟩ (9)

Here, ⟨ ⟩ denotes a moment of any order of weighted averaging, ⟨σ_m⟩ is such an averaging on a group of the selected high singular values, and ⟨σ_n⟩ is such an averaging on a group of selected low singular values. One example of a condition number is thus the ratio of the highest to the lowest singular value of H:

k_(H) = σ_1/σ_L (10)
It should be understood that when the matrix S is a result of a general diagonalization procedure of the matrix H (rather than SVD), then σ.sub.m and σ.sub.n denote the absolute values of the eigenvalues of S.
[0106] Considering the SVD-based approach, all the eigenvalues of S are real, non-negative and smaller than or equal to one; hence k_(S) is always greater than or equal to one. Generally, the higher the Condition Number, the worse the condition of the matrix, and the worse the system's immunity to additive noise with respect to matrix inversion. If, for example, H is not invertible (namely, at least one of its singular values is zero), then k is infinite. In an absolutely noise-free system, image inversion is always possible even for a very ill-conditioned (high condition number) system; however, introducing noise into an ill-conditioned system corrupts the results of restoration by image inversion. Thus, improving the matrix condition improves the system's immunity to noise and allows image restoration to be performed in the presence of noise. Performing image inversion in matrix notation provides blur reduction in the image. Hence, improving the matrix condition can be an effective indication of improving image quality by image reconstruction in actual optical systems.
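The noise-immunity point above can be illustrated numerically. In this sketch both "blur" matrices are synthetic toys (not the patent's PSF matrices): one is well conditioned, the other nearly singular, and the same small additive noise is amplified very differently by matrix-inversion restoration:

```python
import numpy as np

# Numerical illustration: the higher the condition number, the more
# matrix-inversion restoration amplifies additive noise.
# Both matrices are synthetic toys, not the patent's PSF matrices.
rng = np.random.default_rng(1)
L = 50
obj = rng.random(L)

def restore_error(H, noise_level):
    img = H @ obj + noise_level * rng.standard_normal(L)  # noisy image
    restored = np.linalg.solve(H, img)                    # matrix inversion
    return np.linalg.norm(restored - obj)

well = np.eye(L) + 0.01 * rng.random((L, L))       # condition number near 1
ill = well.copy()
ill[:, -1] = ill[:, -2] + 1e-6 * rng.random(L)     # nearly dependent columns

print(np.linalg.cond(well), np.linalg.cond(ill))
print(restore_error(well, 1e-3), restore_error(ill, 1e-3))
```

The ill-conditioned system turns the same 1e-3 noise into a restoration error orders of magnitude larger, which is exactly why reducing the condition number improves restoration in the presence of noise.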
[0107] Turning back to
[0108] As indicated above, the lens system may include a single lens, or one or more lenses and possibly one or more other optical elements. It should be understood that a phase encoder is constituted by appropriate one or more phase-affecting transition regions associated with the lens system, so as to introduce a pre-determined phase correction to the imaging device to improve the condition of the corresponding matrix. In other words, the phase encoder may affect the phase of light incident thereon as compared to that outside the phase encoder (e.g. within the lens aperture), and possibly also differently affect the phase of light portions passing through different regions within the phase encoder. As a result, the entire imaging device creates a certain phase profile in light passing therethrough.
[0109] Thus, for example, a phase encoder may be a stand-alone element (e.g. mask physically separated from the lens) located upstream or downstream of the lens system, or located in between the elements of the lens system (this being possible as long as the light portions associated with the lens and the phase encoder are spatially separated). In another example, the phase encoder may be configured as a phase-affecting element attached to the lens of the lens system (e.g. a phase-affecting patterned structure optically glued to the lens); or may be a phase-modifying element incorporated within the lens (generally within one or more elements of the lens system). For example, the imaging lens may be formed with a particular phase-pattern engraved on the lens surface, in the form of a surface relief (profile) and/or materials of different refractive indices. Yet other possible examples of the phase encoding element may include a mirrors structure in a reflecting mode of operation, a holographic phase encoding element, etc.
[0110] Reference is made to
[0111] In the example of
[0112] In the example of
[0113] The geometry of region R_2 and the configuration of the first pattern therein are selected such that the first phase correction function satisfies a predetermined condition with respect to a phase correction function induced by the lens region R_1 (generally, a remaining part of the effective aperture). In some examples, this predetermined condition is such that the value of the first phase correction function, all along the corresponding region R_2, is not smaller than the phase correction function corresponding to region R_1; and in some examples it is such that the value of the first correction function in region R_2 does not exceed the phase correction function corresponding to region R_1.
[0114] As will be shown further below, the geometry of said part of the effective aperture carrying the first pattern, as well as the configuration of the first pattern itself, define a phase correction function induced by the first pattern onto light passing therethrough. As for the second pattern that may be provided within an optical path overlapping with the remaining part of the effective aperture where the lens effect is applied, such a phase pattern may be aimed mainly at achieving another effect, e.g. extending the depth of focus. The use of phase-masks in the optical path overlapping with the lensing region is generally known and therefore need not be described in details.
[0115] The following is an example of how to select the appropriate configuration for the phase encoder, namely the geometry (shape and dimensions) and the features of the pattern to be used in a specific imaging system.
[0116] A phase encoder may be designed to be incorporated in a lens system to modify (typically, increase) a selected sub-group σ_j of the singular values of the lens system matrix H. For the sake of clarity and simplicity of the description, σ_i denotes herein the whole group of singular values of the matrix H, where i takes all the numbers from 1 to L, L being the size of H. Further, σ_j denotes said subgroup of singular values to be modified, usually to be increased. Also, the positions of the selected singular values σ_j in the matrix S will be denoted simply "positions j", avoiding the need for more complex formal mathematical notation.
[0117] A diagonal matrix ΔS, in the size of S, may therefore be generated, having positive numbers in the selected positions j (corresponding to the positions of the singular values of H which are to be modified). It therefore follows that the matrix sum of S and ΔS can yield an invertible matrix with improved condition. In other words, if S1 is the said sum
S1 = S + ΔS (11)
then the matrix H1, given by

H1 = U·S1·V^t = U·(S + ΔS)·V^t = U·S·V^t + U·ΔS·V^t = H + U·ΔS·V^t = H + O (12)

approximates a PSF matrix of the entire imaging device (lens and encoder), formed by the lens system represented by the matrix H and the phase encoder represented by the additional part, which is determined as follows:

O = U·ΔS·V^t. (13)
In various embodiments, the two systems H and O are generally different; however, they observe the same object and their images are mixed. Therefore, these embodiments are referred to herein generally as parallel optics.
[0118] It should be understood that a construction of an actual phase encoder (mask) may be subject to additional constraints, on top of those expressed in the formula (13) for O, as is further detailed below. Hence an actual phase encoder may be algebraically represented by a matrix approximately equal to O but not necessarily identical to it.
[0119] The approach of the invention to designing the phase encoder involves selecting the desired approximation for the phase encoder matrix that results in the image quality improvement. It should therefore be understood that the matrix O, referred to hereinbelow as being that of the phase encoder, relates also to the desired approximation thereof.
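Equations (11)-(13) can be sketched numerically: raising the weakest singular values of a synthetic H through a diagonal increment ΔS yields H1 = H + O with a smaller condition number. The matrix size, the choice of five weak positions and the 5% boost are all arbitrary illustrative choices:

```python
import numpy as np

# Sketch of eqs. (11)-(13): boost the weakest singular values of a
# synthetic H via ΔS, giving H1 = H + O with an improved condition.
rng = np.random.default_rng(2)
L = 20
H = rng.random((L, L))

U, s, Vt = np.linalg.svd(H)

dS = np.zeros(L)
dS[-5:] = 0.05 * s[0]          # raise the five weakest values (positions j)
S1 = np.diag(s + dS)           # eq. (11): S1 = S + ΔS

O = U @ np.diag(dS) @ Vt       # eq. (13): O = U·ΔS·V^t
H1 = U @ S1 @ Vt               # eq. (12): H1 = H + O

assert np.allclose(H1, H + O)
print(np.linalg.cond(H), np.linalg.cond(H1))   # the condition improves
```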
[0120] Both the lens system and the phase encoder represented by the matrices H and O "see" the same object and bore-sight to the same image plane. However, the lensing effect and the phase encoding effect are applied to different portions of the light incident onto the effective aperture, thus presenting a parallel-type optical processing. This effect can be taken into consideration by dividing the lens area into two parts (or more, in general). The division can be done, for example, by dividing an extended lens into a "lens zone" in the center and a "phase mask" in the lens rim zone (periphery), termed herein the "Rim-ring zone". The linear nature of diffraction allows such a separation, which is a division of the integration into two zones. The integration boundaries for the "lens zone" are those of an inner circular pupil function, while the integration boundaries for the "Rim-ring zone" are those of an annular pupil function. In simulation, these are continuous pupil functions with two different zones for two different phases.
[0121] It should be noted that under this approach the effective aperture can be divided into as many zones (parts of different patterns) as needed. These zones can have arbitrary shapes and transmission values to appropriately affect the phase of light (and possibly also amplitude) passing through these zones.
[0122] Following the above discussion, while the matrices H and O generate an incoherent impulse response for each field point, the coherent response is of the following form (see for example [3]):

h̃(x_img, y_img) = ∫∫ P̂(λS_img·x̃, λS_img·ỹ)·exp(−j2π(x_img·x̃ + y_img·ỹ)) dx̃ dỹ (14)

(x̃ = x_p/(λ·S_img), ỹ = y_p/(λ·S_img)) (15)

where (x_img, y_img) are the image point coordinates, (x_p, y_p) are the pupil-plane coordinates, and S_img is the image distance. The explicit form of the pupil function is:

P̂(λS_img·x̃, λS_img·ỹ) = P(λS_img·x̃, λS_img·ỹ)·exp(jKW(λS_img·x̃, λS_img·ỹ)) (16)

[0123] Here, P( ) is the amplitude, which is affected by the local transmittance, and KW( ) is the phase, affected by both the lens aberration and the phase elements (the latter affecting the "rim-ring zone" only). The system impulse response is a superposition of the two "optics" responses, each being also a function of its power, reflected both in the cross-section area (A) and in the transparency (T):

h̃_tot(x_img, y_img) = h̃_lens(x_img, y_img, A_lens, T_lens) + h̃_rim-ring(x_img, y_img, A_rim-ring, T_rim-ring) (17)

[0124] However, in regular photography, the optical system describes imaging in incoherent illumination; hence the matrix columns are the system PSFs. Thus:

PSF(x_img, y_img) = |h̃_tot(x_img, y_img)|² = |h̃_lens|² + |h̃_rim-ring|² + h̃*_lens·h̃_rim-ring + h̃_lens·h̃*_rim-ring ≈ |h̃_lens|² + |h̃_rim-ring|² (18)
[0125] In that case, there are potential cross-products between the “lens” and the “rim-ring” and therefore their contributions are not entirely parallel as in field response. Theoretically, the cross products damage the parallelism assumption. However, the nature of the problem is that the “rim-ring” PSF and “lens” PSF work in different zones of the FOV. Thus, the cross-products' power is relatively low so the parallelism assumption is generally reasonable.
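The decomposition in (18) can be checked on toy complex fields. Here the rim response is simply made weak so that the cross-terms are small; this is an illustrative stand-in for the patent's argument that the two PSFs act in different zones of the FOV:

```python
import numpy as np

# Sketch of eq. (18): |h_lens + h_rim|^2 splits into the two "parallel"
# intensities plus cross-terms. The coherent responses are toy complex
# fields, not computed from a real pupil function.
rng = np.random.default_rng(3)
h_lens = rng.standard_normal(64) + 1j * rng.standard_normal(64)
h_rim = 0.05 * (rng.standard_normal(64) + 1j * rng.standard_normal(64))

psf_exact = np.abs(h_lens + h_rim) ** 2
cross = 2 * np.real(h_lens * np.conj(h_rim))       # the two cross-products
psf_parallel = np.abs(h_lens) ** 2 + np.abs(h_rim) ** 2

# Exact identity behind (18):
assert np.allclose(psf_exact, psf_parallel + cross)
# When the cross-term power is small, the parallel approximation holds:
print(np.linalg.norm(cross) / np.linalg.norm(psf_exact))
```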
[0126] Thus, according to some embodiments of the invention, a phase encoder that induces a desired phase correction and thereby improves the condition of the matrix S (reduces the Condition Number) may be configured in accordance with the eigenvalues σ_j defined above and the resulting ΔS. An initial condition for the construction of the phase encoder can be set as follows: for each singular value σ_i, there exists an eigen matrix M_i defined by the outer product of the appropriate column vectors u and v as defined in (6) and (7):
M^i_(L×L) = u_i·v_i^t. (19)
Thus, the PSF matrix of the required phase encoder can be composed as a linear combination of the eigen matrices associated with the singular values σ_j (referred to as the matrices M_j), where the actual coefficients in the linear combination are the new, modified singular values. The direct sum of the matrices M_j corresponding to the singular values σ_j (corresponding to the desired O), as defined in (20) below, is termed herein the Binary Matrix Spot Diagram (BMSD):

BMSD = Σ_j M_j (20)
[0127] The BMSD is thus used as an initial condition for the construction of the phase encoder O.
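The eigen-matrix construction of (19) and the direct sum referenced in (20) can be sketched as follows; H is again a synthetic stand-in for the lens PSF matrix, and the choice of the three weakest positions j is arbitrary:

```python
import numpy as np

# Sketch of eqs. (19)-(20): eigen matrices M_i = u_i·v_i^t and their sum
# over selected weak positions j (the BMSD). H is synthetic toy data.
rng = np.random.default_rng(4)
L = 12
H = rng.random((L, L))
U, s, Vt = np.linalg.svd(H)
V = Vt.T

M = [np.outer(U[:, i], V[:, i]) for i in range(L)]   # eq. (19)
j = range(L - 3, L)                                  # three weakest positions
BMSD = sum(M[i] for i in j)                          # sum referenced in (20)

# Sanity check: scaling each M_i by its singular value recovers H exactly.
assert np.allclose(H, sum(s[i] * M[i] for i in range(L)))
```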
[0128] It should be noted that, since the phase encoder is a physical entity, its algebraic representation must contain only a real PSF for each field point. Further, in order to actually effect a change in the condition of the entire lens system represented by the matrix H, the phase encoder should be selected such that the modified system matrix H1 (that of the lens system and encoder) is invertible.
[0129] As a result of these constraints, construction of a phase encoder from the matrix O usually involves some necessary approximations. Since, on the one hand, the phase mask is common to all field points while, on the other hand, said linear combination of the eigen matrices M_j is space-variant, a compromise among the best pupil functions of the individual field points must be made in order to yield a common best pupil function.
[0130] Reference is made to
[0131] The overall power Power_ring that reaches the image plane from the phase encoder, relative to the power Power_lens that reaches it from the lens, is determined by the area of the encoder (that of part R_2 of the effective aperture) relative to the area of the lens part R_1, and by the relative transmission factors. Assuming that the ring transmission T_ring equals 1, the power ratio PR = Power_ring/Power_lens is

PR = 4·Δr/(T_lens²·D) (21)

and therefore

Δr = ¼·T_lens²·D·PR (22)
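Equation (22) is simple enough to evaluate directly. The numbers below are illustrative, not the patent's design values; PR is assumed to be the ring-to-lens power ratio, valid for T_ring = 1:

```python
# Sketch of eq. (22): rim width Δr from the lens transmission T_lens,
# the lens-region diameter D and an assumed power ratio PR
# (illustrative inputs, valid for T_ring = 1).
def rim_width(T_lens, D, PR):
    # Δr = 1/4 · T_lens^2 · D · PR
    return 0.25 * T_lens ** 2 * D * PR

print(rim_width(T_lens=0.9, D=0.4, PR=0.37))  # Δr in mm for D in mm
```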
[0132] Referring back to
where D is the dimension (e.g. diameter) of the lens region R_1.
[0133] It also follows from
R=Si/cos(β) (24)
[0134] The ray aberration Rim, acting as the blur radius, is related to the wavefront derivative. For the maximal ray aberration one obtains:
where W is the wavefront (the wavefront being related to the wave phase through phase = K·W, where K is the wave number), (xp_rim, yp_rim) are the coordinates of the ring 20, and n is the refraction index of the surroundings of the device in the optical path of light propagation through the imaging system. It follows that:
[0135] Assuming a quadratic form of the shape function F, the phase can be expressed as:
where F_0 is a constant. Differentiating (27) yields the ray aberration:
and applying the maximum value yields:
Combining (26) and (29) yields the relation between the geometrical parameter F_0 and the blur radius:
[0136] For example, the remaining part R_1 of the effective aperture associated with the imaging lens 10 may have a diameter D of 0.4 mm, and the distance Si (roughly equal to the focal length f) from the imaging device to the image plane is Si = 0.69 mm. The imaging lens 10 is further characterized by relatively high aberrations, having the Seidel sums S1 = 0.0123, S2 = 0.0130, S3 = 0.0199, S4 = 2.922·10⁻⁴ and S5 = 3.335·10⁻⁵. The exit pupil of the lens system is in the lens plane. The imaging system utilizing such an imaging device 2 is further associated with a square FOV having 10×10 pixels and a width of 0.113 mm. The optical characteristics of the imaging system can be calculated using an optical simulation tool, and the PSF matrix corresponding to the imaging system is also calculated. Further, a singular value decomposition (SVD) of the PSF matrix is obtained.
[0138] Next, in order to improve the system condition by reducing the condition number of the imaging device used in the system, the system's weak (low-value) singular values are to be enlarged. Thus, a group of weak singular values σ_j is selected to be modified. In this example, the six lowest singular values σ_95, . . . , σ_100 are selected to be modified to improve the performance of the lens system 10. Accordingly, the eigen matrices M_95, . . . , M_100 as defined in (19) and their sum BMSD as defined in (20) are considered for the construction of the phase mask, as further described below.
[0140] Referring back to
[0141] A preliminary set of parameter values is selected for the phase encoding element 20, and a new PSF matrix is found for the imaging device 2. From the new PSF matrix a new SVD matrix is calculated, and a new condition number is obtained from it. This completes the first iteration of the optimization process.
[0142] A second iteration starts with the selection of a second set of parameter values for Δr, F_0 and T_ring. A new condition number is re-calculated from the new SVD matrix obtained in the second iteration and is compared to the condition number obtained in the first iteration. This iterative process is repeated until a minimum, or alternatively a satisfactorily low level, of the condition number is achieved.
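The iterative selection of (Δr, F_0, T_ring) described above can be sketched as a plain grid search that keeps the parameter set giving the lowest condition number. Everything here is hypothetical: the parameter grids are arbitrary, and `psf_matrix` is a placeholder stand-in for the optical simulation that would produce the device's PSF matrix:

```python
import numpy as np

# Grid-search sketch of the iterative optimization: evaluate the
# condition number over candidate (Δr, F0, T_ring) values and keep the
# best. `psf_matrix` is a hypothetical placeholder model.
rng = np.random.default_rng(5)
base = rng.random((16, 16))

def psf_matrix(dr, F0, T_ring):
    # Placeholder: perturb a base PSF matrix by the three parameters.
    return base + dr * np.eye(16) + F0 * 0.01 * T_ring * np.diag(np.ones(15), 1)

best = None
for dr in (0.01, 0.03, 0.05):
    for F0 in (1.0, 2.0):
        for T_ring in (0.8, 0.9, 1.0):
            k = np.linalg.cond(psf_matrix(dr, F0, T_ring))
            if best is None or k < best[0]:
                best = (k, dr, F0, T_ring)

print(best)   # (condition number, Δr, F0, T_ring) of the best candidate
```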
[0143] The resulting parameter values of the phase encoding element 20 of the imaging device 2 in this example, under the assumption that the aberration coefficients of the system are the same with or without the rim, are Δr = 0.03 mm, F_0 = 2.235 μm and T_ring = 0.9. The new condition number associated with the imaging device 2 is found to be 2291, representing an improvement by a factor of about thirty-eight with respect to the condition number of the lens 10.
[0144] In another example, the resulting parameters of the phase encoding element 20 are Δr = 0.03 mm, F_0 = 1.56 μm and T_ring = 0.9. This example includes an aberration model in which the aberration coefficients grow due to the extension of the system diameter associated with the rim. The resulting condition number was found to be 3187, representing an improvement by a factor of about twenty-eight with respect to the condition number of the lens 10.
[0145] The image enhancement, expressed in the improved image quality obtained by the entire imaging device 2 compared to the lens 10, can be quantitatively described by a Mean Square Error Improvement Factor (MSEIF) [5] defined in (32):

MSEIF = 20·log₁₀(‖i^object − i^Image‖_2 / ‖i^object − i^Restored‖_2) (32)

where i is the column vector representation (L×1) of a matrix of size (N×M), corresponding to the number of pixels (N×M) of the image FOV, and comprising the grey-scale levels of the pixels.
[0146] Thus, i^object represents an ideal image of the object, i^Image corresponds to an image obtained from the optical system before restoration (for proper comparison, taken from the uncorrected optical system), and i^Restored corresponds to an image obtained after restoration. The operation ‖x‖_2 stands for the norm of the resulting vector x. An important limit is when the numerator and denominator norms are equal (corresponding to MSEIF = 0 dB). At that point the restored image i^Restored and the image before restoration i^Image have equal similarity to the object, i.e. there is no benefit in restoration. An MSEIF having a negative value indicates the case where the image restoration is worse than the unrestored image. The restored image in this example is provided by a simple matrix inversion according to (2), namely i^Restored = H1⁻¹·i^Image, where the i's denote the vector columns of the respective images and H1 is the PSF matrix of the imaging device 2, as defined in (12).
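The MSEIF figure of merit can be sketched as below. This assumes the dB form implied by the 0 dB discussion, namely the ratio of the pre-restoration to post-restoration error norms; the vectors are tiny toy examples:

```python
import numpy as np

# Sketch of the MSEIF figure of merit, assuming the dB ratio of the
# pre-restoration error norm to the post-restoration error norm.
def mseif_db(obj, image, restored):
    num = np.linalg.norm(obj - image)      # error before restoration
    den = np.linalg.norm(obj - restored)   # error after restoration
    return 20 * np.log10(num / den)

obj = np.array([1.0, 2.0, 3.0])
image = obj + 0.5                          # degraded (blurred) image
restored = obj + 0.05                      # restoration closer to the object

print(mseif_db(obj, image, restored))      # positive: restoration helps
print(mseif_db(obj, image, image))         # 0 dB: no benefit from restoration
```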
[0147] In the description given below, comparisons between the image quality obtainable from conventional imaging systems (with no phase-encoder-based imaging device) and that of the invention are presented by simulations.
[0148] Reference is made to
[0151] Reference is now made to
[0154] As also shown in
[0155] Referring to
[0156] More specifically,
[0159] Referring to
[0160] A PSF matrix H of the imaging lens arrangement (i.e. in the absence of the phase encoder) is obtained (step 801) from experimental measurements using the specific imaging lens arrangement (e.g. of certain lens diameter, curvatures, material, thickness, refraction index, as well as object and image distances), by simulating the same for this imaging lens arrangement, or as supplied by the manufacturer. Then, the SVD form S, V, U of the matrix H is calculated (step 802). A group of weak singular values of H, σ_j, is selected (step 803) and a new matrix ΔS with improved singular values is constructed (step 804).
[0161] In the PSF matrix ΔH (corresponding to the matrix ΔS according to (12)), each column represents the PSF of a different field point. Hence, the columns are converted to 2D PSF images and a representative PSF is chosen (step 805). Next, the phase at the exit pupil of the system (of the imaging lens device) is calculated (step 806) for a selected number of field points. In order to obtain a physical result for the calculated phase, a "Ping-Pong" iterative process [5], using the Fourier-transform and inverse-Fourier-transform relation between the image plane and the exit pupil plane, is carried out for each field point (step 806) until convergence of the phase function at the system exit pupil is obtained. This process is repeated (step 807) for all the selected field points (m,n) until a set of phase functions P_m,n(x,y) (one associated with each field point) is obtained. Generally, these phase functions differ from one another, so there cannot actually be a real phase encoder capable of inducing all of them. Therefore, selecting or generating one representative phase function from the group of phase functions is required.
[0162] According to some embodiments, a cross-correlation matrix ρ_m,n is thus calculated (step 808) between all the phase functions, and the phase function P̂(x, y) characterized by having the best accumulated cross-correlation to all the other phase functions is selected (step 810).
[0163] Considering the entire imaging device (lens arrangement and phase encoder) inducing a phase correction P̂(x,y) at the exit pupil of the device, the PSF matrix is calculated again, and the associated condition number is obtained (step 811). This completes a single iterative step of the phase mask construction. If the condition number is not satisfactory (step 812), the above described sequence of operations is repeated: the common correction phase P̂(x,y) is taken as the initial condition of the iterative process for all selected field points, and steps (806)-(812) are carried out again. The iterative process thus repeats until a satisfactory improvement in the condition number of the PSF matrix of the improved imaging device (optimal phase encoder) is obtained (step 813).
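The selection of a representative phase function (steps 808-810) can be sketched as follows. The phase maps are synthetic, and the zero-shift normalized cross-correlation used as the similarity measure is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(1)
base = rng.standard_normal((16, 16))
# Five mutually similar phase maps plus one dissimilar "outlier" map.
phases = [base + 0.05 * rng.standard_normal((16, 16)) for _ in range(5)]
phases.append(rng.standard_normal((16, 16)))

def ncc(a, b):
    """Normalized cross-correlation at zero shift."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))

n = len(phases)
rho = np.array([[ncc(phases[m], phases[k]) for k in range(n)] for m in range(n)])
score = rho.sum(axis=1) - 1.0        # accumulated correlation to all the others
best = int(np.argmax(score))         # index of the representative phase P_hat
print("selected phase index:", best)
```

The phase map with the largest accumulated correlation is, by construction, the one that best represents the whole group, so the outlier map is never chosen.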
[0164] Reference is made to
[0165]
[0166] Reference is now made to
[0167] Each of
[0168] In
[0169] As shown in
[0170] It should generally be noted that the image reconstruction step is not limited to the process of inverting the matrix H1 as described in the examples above. According to some other embodiments of the invention, image reconstruction can be done by any known method intended to remove the phase encoding introduced by the phase encoder while the image is being created by the imaging device, thereby also removing at least some of the image blur. For example, image reconstruction can be performed by the least squares method. According to the least squares method, the reconstructed image i^restored is obtained by minimizing the norm E in (33):
E = \|H1 \cdot i^{restored} - i^{Image}\|_2 \quad (33)
where i^{Image} is the image column vector associated with the PSF matrix H1 (namely, the image obtained from the imaging device before reconstruction).
[0171] According to another example, the so-called regularization method may be used, wherein the image i^{restored} is obtained by minimizing E in (34):
E = \|H1 \cdot i^{restored} - i\|_2 + \alpha \|I \cdot i^{restored}\|_2 \quad (34)
where α is a constant (usually 0 < α < 1) and I is the identity matrix. Such regularization yields:
i^{restored} = (H1^{t} \cdot H1 + \alpha \cdot I)^{-1} \cdot H1^{t} \cdot i \quad (35)
where I is the identity matrix and α is the regularization factor. For the simulations below, it was assumed that α = 1/SNR. This method will be exemplified more specifically further below.
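Equations (33)-(35) can be sketched as follows, using a synthetic tridiagonal blur matrix as a stand-in for H1 and the α = 1/SNR choice stated above; the matrix, object, and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 32
# Toy blur matrix: each pixel mixes with its two neighbours.
H1 = np.eye(n) + 0.4 * np.eye(n, k=1) + 0.4 * np.eye(n, k=-1)
i_obj = rng.random(n)
snr = 100.0
i_img = H1 @ i_obj + rng.standard_normal(n) / snr     # blurred, noisy image

# Eq. (33): minimize ||H1 @ i_restored - i_img||_2 (plain least squares)
i_ls, *_ = np.linalg.lstsq(H1, i_img, rcond=None)

# Eq. (35): i_restored = (H1^t H1 + alpha I)^-1 H1^t i_img (regularized)
alpha = 1.0 / snr
i_tik = np.linalg.solve(H1.T @ H1 + alpha * np.eye(n), H1.T @ i_img)

print("least-squares error:", np.linalg.norm(i_ls - i_obj))
print("regularized error:  ", np.linalg.norm(i_tik - i_obj))
```

Both restorations reduce the error relative to the blurred image; the regularized form is preferable when H1 is ill conditioned, since α damps the amplification of noise along the weak singular directions.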
[0172] The following is a description of another example of the technique of the invention. An imaging system captures images with a specific FOV. The BMSD matrix can be approximately decomposed into a series of transmission or trajectory matrices associated with shifting of the captured image relative to the imaging FOV; i.e. part of the image moves out of the FOV, and the transformation matrix between the object space and the image space thus changes. This technique is at times referred to herein below as the shifted lenses technique, shifted images technique, transformation technique, or trajectories technique.
[0173] Assuming a pixel confined imaging system is used, its transformation (PSF) matrix from object space to image space is a unit matrix. Following the lexicographic order of the vectors, the mapping of the object coordinate (l_in) to the image coordinate (l_out) is:
(l_{in}) \rightarrow (l_{out}) = (m-1) \cdot N_n + n \rightarrow (m-1) \cdot N_n + n \quad (36)
[0174] By determining a PSF matrix, the field of view (FOV) of the system is defined. Assuming a finite image is captured, if the image is shifted by (Δm, Δn) relative to the fixed FOV origin, the mapping of the object coordinate (l_in) to the image coordinate (l_out) changes to:
(l_{in}) \rightarrow (l_{out}) = (m-1) \cdot N_n + n \rightarrow (m+\Delta m-1) \cdot N_n + n + \Delta n \quad (37)
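The index mappings of eqs. (36) and (37) can be sketched directly. The FOV dimensions below are illustrative assumptions; the indices (m, n) are 1-based as in the equations.

```python
# Eqs. (36)-(37): lexicographic mapping of 2D pixel coordinates to vector
# indices, without and with a shift (dm, dn) relative to the FOV origin.
N_m, N_n = 4, 5                       # assumed FOV dimensions (rows, columns)

def lex_index(m, n, N_n):
    """Eq. (36): 1-based (m, n) -> lexicographic index (m-1)*N_n + n."""
    return (m - 1) * N_n + n

def shifted_index(m, n, dm, dn, N_n):
    """Eq. (37): index of the same pixel after shifting the image by (dm, dn)."""
    return (m + dm - 1) * N_n + (n + dn)

print(lex_index(2, 3, N_n))           # pixel (2,3) in a 4x5 FOV -> 8
print(shifted_index(2, 3, 1, 1, N_n)) # same pixel after a (1,1) shift -> 14
```

A zero shift reduces eq. (37) back to eq. (36), which is a quick sanity check on any implementation.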
[0175] With a series of shifts, a series of transfer matrices is created, each of which “draws” a different “Trajectory” over the 2D L×L empty matrix.
[0176] Referring to
[0177] To realize an auxiliary system with the required auxiliary PSF matrix O, the FOV is defined as the FOV of the H matrix, and O is decomposed into a series of weighted shifted "transformation" matrices as follows:
BMSD \approx O = \sum_{l=1}^{M} W_l \cdot O_l = \sum_{l=1}^{N} \hat{O}_l \quad (38)
[0178] For each "transformation", there is a different weight constant (W_l). The weight is defined by solving the following averaging equation:
where N_l is the number of instances in the specific "Trajectory", BMSD (see eq. (20) above) is the target PSF matrix for the auxiliary system O, and ∘ is the projection operator which serves to project the BMSD over O_l. Generalizing equations (38) and (39), the sum of all transformation matrices O_l can be used to approximate the BMSD (see eq. (20) above) of the lens 10. By minimizing the square difference E in (40), the approximation to the BMSD can be optimized.
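The decomposition of eq. (38) and the minimization of eq. (40) can be sketched as follows. Here the "trajectories" are shifted unit matrices over an L×L grid, the target matrix is a synthetic stand-in for the BMSD, and the weights are fitted by least squares, an illustrative realization of the minimization.

```python
import numpy as np

L = 6
shifts = range(-2, 3)
# Each O_l is a shifted unit matrix: a "trajectory" over the L x L empty matrix.
trajectories = [np.eye(L, k=k) for k in shifts]

rng = np.random.default_rng(3)
target = sum(rng.random() * T for T in trajectories)   # synthetic BMSD target

# Eq. (40): minimize ||target - sum_l W_l O_l||_F over the weights W_l.
A = np.stack([T.ravel() for T in trajectories], axis=1)
W, *_ = np.linalg.lstsq(A, target.ravel(), rcond=None)

approx = sum(w * T for w, T in zip(W, trajectories))   # eq. (38)
print("residual:", np.linalg.norm(target - approx))
```

Because the target here lies exactly in the span of the trajectory matrices, the fit recovers it to machine precision; for a real BMSD the residual measures how well the trajectory family can approximate it.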
[0179] Finally, the overall system response is:
Reference is now made to
[0180] It should also be noted that although the “shifted lenses” embodiment is exemplified above in the effective aperture configuration of
[0181] As further shown in
[0182] According to yet a further embodiment of the present invention, the "shifted images" approach for improving the matrix condition of an imaging device can be implemented by an image processing algorithm used with an imaging technique (not necessarily utilizing a phase encoder as a physical element). Such an imaging technique utilizes the creation of two images with different Numerical Apertures of light collection and different levels of blur. The system may include two separate imaging devices, referred to above as the main system and the auxiliary system. The main and auxiliary systems observe the same field of view, which for example might be similar to the image capture used in phase diversity systems. The principles of the phase diversity technique are known per se and need not be described in detail, except to note that it is aimed at reducing the amount of data to be processed. An image is the combination of two unknown quantities, the object and the aberrations; the aberrations are perturbed with a known difference in phase between two simultaneously collected images, for example by recording one of the images slightly out of focus [6-10].
[0183] Assuming an ill conditioned main system, the above target BMSD can be defined and decomposed by digital summation of N shifted and weighted reproductions of the auxiliary system image. When the main system image is finally added to that summation, a new image associated with the improved condition system is obtained. The governing equations (35)-(39) are applicable in this process, while the shifting and weighting are done digitally on a captured auxiliary system image.
[0184] According to some embodiments, a desired set of weight factors can be obtained, for example, by minimizing E in (39). Further, the weight factors W_l in (35)-(39) may assume both negative and positive values, since the summation is performed computationally and not physically. Accordingly, a relatively good approximation of the BMSD can be obtained by a proper summation of the matrices, and a considerable improvement of the matrix condition can consequently be achieved.
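The digital "shifted images" summation of paragraphs [0183]-[0184] can be sketched as follows. The images, shifts, and weights are synthetic placeholders; note that the weights are allowed to be negative, since the summation is computational.

```python
import numpy as np

rng = np.random.default_rng(6)
i_main = rng.random((8, 8))                      # main (ill conditioned) image
i_aux = rng.random((8, 8))                       # auxiliary (low NA) image

shifts = [(0, 0), (1, 0), (0, 1), (-1, 0)]
weights = [0.5, -0.2, 0.3, -0.1]                 # may be negative: summation is
                                                 # done in software, not optics
# Sum of shifted, weighted duplicates i_l of the auxiliary image.
i_sum = sum(w * np.roll(i_aux, s, axis=(0, 1))
            for w, s in zip(weights, shifts))
i_combined = i_main + i_sum                      # image of the improved system
print(i_combined.shape)
```

In the full technique, the shifts and weights would be chosen to decompose the target BMSD, and `i_combined` would then be restored with the improved-condition matrix.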
[0185] Reference is made to
[0186] According to some embodiments, the weak parts of the first imaging device 52 are improved by adding to the image (i) a combination of N shifted images (i_l) obtained by the second imaging device 54. The images (i_l) are obtained computationally from the auxiliary image i_aux by duplications and shifts. For a given BMSD of the first imaging device 52, a desired combination of the shifted images (i_l) is obtained. The sum (i_main + i) is then reconstructed, e.g. by multiplication by the inverted matrix H_1^{-1}, to generate an enhanced image of the object.
[0187] According to some examples of the invention, a desired set of weight factors W_l can be obtained in the way described below. An algebraic representation H of the first imaging device 52 (e.g. a PSF matrix) is obtained and presented in a modal form (e.g. SVD). The weak parts of the modal matrix are identified, and a BMSD matrix is generated accordingly, as described above.
[0188] Reference is made to
[0189]
[0190] Image 1317 is a weighted sum of a multitude of duplicate images of image 1316, shifted with respect to one another by essentially all possible shifts allowed in a 10×10 pixels FOV. The region of interest in image 1317, marked by the rectangle in the center 1317A, is used for the improvement of the system matrix condition further in the process. Image 1318 is the direct sum of image 1312 of the first imaging device 52, and image 1317A. Image 1319 is the result of image 1318 after restoration according to eq. (2).
[0191] It should be appreciated that the images 1312 and 1314, obtained from the imaging device 52 before and after restoration, respectively, have both poor resemblance to the object 1310. In contrast, the restored image 1319 obtained by the complete “shifted images” technique shows relatively high resemblance to the object 1310.
[0192]
[0193] Reference is made to
[0194] In the above described "shifted lenses" or "trajectories" examples, the auxiliary lens system was assumed to be of a so-called "perfect" or "pixel confined" configuration that does not induce image blur. It was also emphasized that, for a realistic demonstration, an auxiliary system formed by a low NA simple lens, which is almost pixel confined, is to be considered. It should also be understood that the principles of the "trajectories" or "transformation" technique may be used with linear systems other than optical ones, for example in mechanical system vibration measurement. In this connection, reference is made to
[0195] Turning back to the optical system design, the auxiliary system itself may have a blurred response. Moreover, the same lens system may function both as the "main system" and as the "auxiliary system", in which case the "main" and "auxiliary" functions are constituted by successive image acquisitions performed by the same lens system.
[0196] The following is an example of how to extend the “trajectories” or “transformation” model to a more realistic auxiliary lens with a blurred response (which will be at times referred to herein below as Blurred Trajectories or blurred transformations).
[0197] Extending the Trajectories or Transformations model to the general case of a blurred auxiliary lens, the auxiliary lens image is not pixel confined; hence, relating it to the perfect object, it is first blurred and then weighted and shifted. In this case, the O matrix is decomposed (eq. (38) above) into "blurred transformations" calculated by simple matrix multiplication:
\hat{O}1_l(i,j;\Delta m,\Delta n) = O_l(i,j;\Delta m,\Delta n) \cdot H_{aux}(i,j) \quad (42)
Following the "blurred transformation" approach, the normalization factor N_l is modified:
Finally, the weighting factor for eq. (37) is:
The overall decomposition is:
BMSD \approx O = \sum_{l=1}^{M} \hat{W}_l \cdot \hat{O}1_l(i,j;\Delta m,\Delta n) \quad (45)
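Equations (42) and (45) can be sketched as follows. Each shifted trajectory matrix is composed with a toy auxiliary blur matrix H_aux, and the target (a synthetic stand-in for the BMSD) is approximated by a weighted sum of the blurred versions; the least-squares weight fit is an illustrative assumption.

```python
import numpy as np

L = 6
# Toy auxiliary blur: a three-tap smoothing matrix standing in for H_aux.
H_aux = 0.5 * np.eye(L) + 0.25 * np.eye(L, k=1) + 0.25 * np.eye(L, k=-1)
trajectories = [np.eye(L, k=k) for k in range(-1, 2)]

blurred = [O_l @ H_aux for O_l in trajectories]       # eq. (42): O_l . H_aux

rng = np.random.default_rng(4)
target = sum(rng.random() * B for B in blurred)       # synthetic target BMSD

A = np.stack([B.ravel() for B in blurred], axis=1)    # fit the weights W_hat_l
W_hat, *_ = np.linalg.lstsq(A, target.ravel(), rcond=None)
approx = sum(w * B for w, B in zip(W_hat, blurred))   # eq. (45)
print("residual:", np.linalg.norm(target - approx))
```

The only change from the pixel confined case is the composition with H_aux in eq. (42); the weight-fitting machinery is otherwise identical.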
[0198]
[0199] For example, the main system matrix H may be as follows:
[0200] The matrix is ill conditioned with a condition number of κ=18625.
[0201] Using the above described "blurred transformation" technique, the system's matrix has 6 eigen matrices. For example, the last two eigen matrices can be arbitrarily determined as the target BMSD. Under the assumption of pixel confinement, a set of 15 "transformations" is obtained, as presented in
[0202] The following are simulation results for the “blurred transformation” using two different optical systems: a space variant imaging system and a highly defocused imaging system. The performance of both systems was tested in image restoration over an ensemble of 8 objects (see
[0203] The restoration performance is determined by the Mean Square Error Improvement Factor (MSEIF), equation (32) presented above:
In this function, the restoration error and the blur error are both relative to the ideal object (I^object). When MSEIF < 0 dB, the restored image (I^res) is worse than the blurred optical image (I^img), so there is no use in performing restoration. For each SNR, a set of 2400 measurements of the 8 objects was performed; the MSEIF was then calculated for each measurement, as well as the average value over the ensemble. The results were presented as a function of the main system SNR.
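The MSEIF criterion can be sketched as follows. Equation (32) itself is not reproduced in this excerpt; the dB-ratio form below is an assumption consistent with the description above, in which both errors are taken relative to the ideal object and MSEIF < 0 dB means restoration made the image worse.

```python
import numpy as np

def mseif_db(i_obj, i_img, i_res):
    """Assumed MSEIF: blur error over restoration error, in dB."""
    blur_err = np.linalg.norm(i_img - i_obj)   # blurred image vs. ideal object
    rest_err = np.linalg.norm(i_res - i_obj)   # restored image vs. ideal object
    return 20.0 * np.log10(blur_err / rest_err)

i_obj = np.array([0.0, 1.0, 0.0, 1.0])
i_img = np.array([0.2, 0.8, 0.2, 0.8])         # blurred measurement
i_res = np.array([0.05, 0.95, 0.05, 0.95])     # partially restored image
print("MSEIF [dB]:", mseif_db(i_obj, i_img, i_res))
```

With this form, a restoration identical to the blurred image gives exactly 0 dB, and any genuine improvement gives a positive value, matching the sign convention stated in the text.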
[0204] Since the object range is limited to a dynamic range of 256 gray levels, it was assumed that restoration gray levels below 0 and above 255 are out of the dynamic range, and hence they were rounded to 0 and 255, respectively. For post-processing, regularization following equation (35) was used:
\hat{I}_{L\times 1}^{res} = (H1^{t} \cdot H1 + \alpha \cdot I)^{-1} \cdot H1^{t} \cdot I_{L\times 1}^{image} \quad (46)
[0205] For demonstration of the "blurred transformation" capabilities, it was assumed that the auxiliary system used is blurred and yields such bad images that, if taken as restoration results, the resulting MSEIF would be negative. Equation (47) presents the auxiliary system PSF:
[0206] Considering the space variant ill conditioned imaging system, similar simulation conditions were used, while removing the assumption of a pixel confined auxiliary lens. The main system is generally similar to that described above with reference to
[0207] The auxiliary lens is a 0.16 mm diameter lens with a space invariant PSF as in eq. (47). The auxiliary system works with a lower NA than the main system. This affects the signal level and is reflected in an SNR level 16 dB lower than that of the main system. In the PSF calculation, a diffraction model was used, implemented using a 512×512 FFT operator matrix. Regarding the NA condition, it should be noted that the invention is not limited to any specific requirement for the NA of the auxiliary system, either in absolute terms or relative to that of the main system.
[0208] Since the "transformation" decomposition is of a diagonal form, a search was performed for a combination of eigen matrices which yields a BMSD with a "Toeplitz"-like shape. The search is done automatically; in each step an additional eigen matrix was added and the condition number of the resulting parallel optics was calculated. The inventors have found a few local minima and chose the solution with a condition number of κ=1212. Since the ability to decompose the target BMSD with high fidelity depends on both the choice of the target BMSD and the auxiliary system PSF profile, this is generally not an optimal solution of the system but a local minimum.
[0209]
[0210] Reference is made to
[0211] The following is an example of a deep defocused imaging system case. In this case, a standard 0.4 mm diameter double convex lens is used. It is assumed that monochrome imaging (λ=0.5875 μm) is performed at a 0.69 mm focal distance. The lens material is SCHOTT K10. The system's FOV, pixel size and image distance are the same as in the above described example (FOV is ±4.67 deg, pixel size is 11 μm). The lens is subject to nominal Seidel aberrations and to a 176 μm defocus. This is shown schematically in
[0212] First, the target BMSD was graphically determined.
[0213]
[0214]
[0215] Turning back to eq. (47), the size of the three-pixel blur in the auxiliary system is 33 μm. This could be the diameter of a diffraction limited Airy disk:
[0216] The result in (48) shows that the "blurred Trajectories" might be achieved by a microlens array (example of
[0217] Also, as indicated above, since the "blurred Trajectories" technique uses a mixture of two highly blurred images to create a relatively sharp image, an optical design may be selected which uses the "main" system images (generally at least two images) as the main and the auxiliary images, i.e. resolves the system by a so-called "double exposure".
[0218] The pixel confined "Trajectories" (or "transformation") model can be extended to the "blurred Trajectories" model. The simulations have shown that the "blurred Trajectories" can improve the system condition and image restoration, by Tikhonov regularization. The invented technique provides a significant improvement in the system matrix condition (from κ=87640 down to κ=1211 in the first study case, and from κ=6412.5 down to κ=238.7 in the second study case). Since the "blurred Trajectories" filtering is generally done in software rather than in hardware, it is much more flexible than the traditional optical filtering approach. The concept of combining two blurred images enables the "Blurred Trajectories" method to produce a system where the "main" and "auxiliary" images are produced by a single aperture system.
[0219] Thus, the "Image fusion" technique of the present invention provides for improving the matrix condition of the imaging system. For example, the "Rim-ring" or the "ping-pong" method can be used to arrive at a different auxiliary system (designated as O). All these methods obey the Parallel optics rule:
I^{image} = H \cdot I^{object} + O \cdot I^{object} \quad (49)
which is actually a fusion of two systems, the goal of which is to improve the system matrix condition. The main image is:
I^{image\_main} = H \cdot I^{object} \quad (50)
and the auxiliary image is:
I^{image\_Aux} = O \cdot I^{object} \quad (51)
The fused image is: I^{image} = I^{image\_main} + I^{image\_Aux} \quad (52)
where (52) is a fused image virtually captured by the matrix H1 of a system which has an improved matrix condition:
H1 = H + O \quad (53)
H being the main system and O being the auxiliary system.
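The fusion rule of eqs. (49)-(53) can be sketched as follows. The diagonal matrices below are synthetic stand-ins: H is deliberately ill conditioned, and O boosts only H's weak part, so the fused system H1 = H + O is well conditioned.

```python
import numpy as np

n = 8
H = np.diag(np.linspace(1.0, 1e-3, n))                       # ill conditioned main system
O = np.diag(np.r_[np.zeros(n // 2), 0.5 * np.ones(n // 2)])  # auxiliary boosts weak modes

i_obj = np.arange(1.0, n + 1.0)
i_main = H @ i_obj                                 # eq. (50): main image
i_aux = O @ i_obj                                  # eq. (51): auxiliary image
i_fused = i_main + i_aux                           # eq. (52): fused image

H1 = H + O                                         # eq. (53): virtual fused system
print("cond(H): ", np.linalg.cond(H))
print("cond(H1):", np.linalg.cond(H1))
i_restored = np.linalg.solve(H1, i_fused)          # restoration through H1
print("max restoration error:", np.abs(i_restored - i_obj).max())
```

Because imaging is linear, summing the two images is equivalent to imaging through H1, so the inversion is performed on a far better conditioned matrix than H alone.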
[0220] Turning back to
[0221] In this example, the main lens system is similar to the above-described one (10 in
[0222] Each row is converted into a 2D image, PSF(x,y,i). The rim-ring filter is a single filter; assuming the phase effect induced thereby is more dominant than that of the main lens aberrations in the rim-ring zone, it together with the main lens creates a point spread function which is common to all field points and approximately centered around the paraxial coordinate (the "filter's PSF"). The filter's PSF design is preferably such as to compromise between the different PSF(x,y,i) shapes. For example, such a filter may be constructed by a weighted superposition of PSF(x,y,i), one possible implementation being an average filter.
[0223] Then, in order to calculate the average filter, each PSF(x,y,i) is shifted such that its paraxial coordinate (x_pr, y_pr) is transferred to the center of the field (x=0, y=0). This is illustrated in
[0224] Negative values, which are not a physical response for an incoherent point spread function, are truncated, and a target PSF for the auxiliary system is defined as shown in
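The averaging procedure of paragraphs [0223]-[0224] can be sketched as follows: each field point PSF is shifted so its paraxial coordinate lands at the field center, the shifted PSFs are averaged, and negative values are truncated. The PSFs and their paraxial coordinates are synthetic placeholders.

```python
import numpy as np

size = 9
center = size // 2
rng = np.random.default_rng(5)

paraxial = [(2, 3), (6, 5), (4, 4)]                  # assumed paraxial coordinates
psfs = []
for (x_pr, y_pr) in paraxial:
    psf = np.zeros((size, size))
    psf[x_pr, y_pr] = 1.0                            # peak at the paraxial coordinate
    psf += 0.05 * rng.standard_normal((size, size))  # aberration-like residue
    psfs.append(psf)

# Shift each PSF so its paraxial coordinate moves to the field center.
centered = [np.roll(p, (center - x, center - y), axis=(0, 1))
            for p, (x, y) in zip(psfs, paraxial)]
avg_filter = np.mean(centered, axis=0)               # average filter PSF
avg_filter = np.clip(avg_filter, 0.0, None)          # truncate negative values
print("peak at:", np.unravel_index(avg_filter.argmax(), avg_filter.shape))
```

After centering, the peaks of all field point PSFs coincide, so the average retains a single dominant, physically non-negative peak at the field center.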
[0225] This method can serve for calculating a separate auxiliary system O such that its implementation will be suitable for use in an imaging device which includes two separate optical systems, such as in the above-described examples of the Trajectories method of
[0226] Reference is made to
[0227]
[0228] Those skilled in the art will readily appreciate that various modifications and changes can be applied to the embodiments of the invention as hereinbefore described without departing from its scope defined in and by the appended claims.
[0229] It is the intent of the applicant(s) that all publications, patents and patent applications referred to in this specification are to be incorporated in their entirety by reference into the specification, as if each individual publication, patent or patent application was specifically and individually noted when referenced that it is to be incorporated herein by reference. In addition, citation or identification of any reference in this application shall not be construed as an admission that such reference is available as prior art to the present invention. To the extent that section headings are used, they should not be construed as necessarily limiting. In addition, any priority document(s) of this application is/are hereby incorporated herein by reference in its/their entirety.