METHOD FOR DETERMINING THE COMPLEX AMPLITUDE OF THE ELECTROMAGNETIC FIELD ASSOCIATED WITH A SCENE

20180007342 · 2018-01-04

Abstract

A method for determining the complex amplitude of the electromagnetic field associated with a scene, comprising a) capturing a plurality of images of the scene by means of a photographic camera, the images being focused in planes of focus arranged at different distances, wherein the camera comprises a lens of focal length F and a sensor arranged at a certain distance from the lens in its image space, and b) taking at least one image pair from the plurality of images and determining the accumulated wavefront to the conjugate plane in the object space corresponding to the plane intermediate between the planes of focus of the two images of the pair.

Claims

1. A method for determining the complex amplitude of the electromagnetic field associated with a scene, comprising: a) capturing a plurality of images of the scene by a photographic camera, the images being focused in planes of focus arranged at different distances, wherein the camera comprises a lens or system of lenses of focal length F and a sensor arranged at a certain distance from the lens in the image space; b) taking at least one image pair from the plurality of images and determining the accumulated wavefront to a conjugate plane in an object space corresponding to an intermediate plane with respect to the planes of focus of the two images of the pair, determining the wavefront W(x,y) as:

$$W(x,y)=\sum_{p=0}^{N-1} d_p\,Z_p(x,y)$$

where $\{Z_p(x,y)\}$ is a predetermined set of polynomials and N is the number of polynomials used in the expansion, wherein the coefficients $d_p$ are determined by solving the system of equations:

$$\frac{u_{2X}(j)-u_{1X}(j)}{2z}=\left.\left(\sum_{p=0}^{N-1} d_p\,\frac{\partial}{\partial x}Z_p(x,y)\right)\right|_{x=\frac{u_{1X}(j)+u_{2X}(j)}{2},\;y=\frac{u_{1Y}(k)+u_{2Y}(k)}{2}}$$

$$\frac{u_{2Y}(k)-u_{1Y}(k)}{2z}=\left.\left(\sum_{p=0}^{N-1} d_p\,\frac{\partial}{\partial y}Z_p(x,y)\right)\right|_{x=\frac{u_{1X}(j)+u_{2X}(j)}{2},\;y=\frac{u_{1Y}(k)+u_{2Y}(k)}{2}}$$

where 2z is the distance between the planes of focus of the two images of the pair, where $\{(u_{1X}(j),u_{1Y}(k)),\ j,k=1\ldots T\}$ are points belonging to the first image of the pair, and $\{(u_{2X}(j),u_{2Y}(k)),\ j,k=1\ldots T\}$ are points belonging to the second image of the pair, such that for each 1 ≤ j, k ≤ T the following is verified:

$$\int_{-\infty}^{u_{1Y}(k)}\int_{-\infty}^{u_{1X}(j)} f_{1XY}(x,y)\,dx\,dy=s(j)\,s(k)$$

and

$$\int_{-\infty}^{u_{2Y}(k)}\int_{-\infty}^{u_{2X}(j)} f_{2XY}(x,y)\,dx\,dy=s(j)\,s(k)$$

where s(j) is a monotonically increasing sequence of real numbers with values between 0 and 1 for each 1 ≤ j ≤ T, and where $f_{XY}$ is the two-dimensional density function which takes into account the probability of occurrence of a photon and is determined in each case by the normalized intensity I(x,y) of the corresponding image of the pair, i.e.:

$$\int_{-\infty}^{u_{1Y}(k)}\int_{-\infty}^{u_{1X}(j)} I_1(x,y)\,dx\,dy=s(j)\,s(k)$$

$$\int_{-\infty}^{u_{2Y}(k)}\int_{-\infty}^{u_{2X}(j)} I_2(x,y)\,dx\,dy=s(j)\,s(k).$$

2. The method according to claim 1, wherein the wavefront is determined by the expression:

$$W(x,y)=\sum_{p=0}^{N-1}\sum_{q=0}^{N-1} d_{pq}\,Z_{pq}(x,y)$$

where

$$Z_{pq}(x,y)=\frac{1}{N}\,e^{\frac{2\pi i}{N}(px+qy)}$$

for each 0 ≤ p, q ≤ N−1.

3. The method according to claim 1, further comprising determining the accumulated wavefront for a plurality of image pairs.

4. The method according to claim 3, further comprising determining a phase shift between two planes of the object space as the subtraction of the accumulated wavefronts to said planes.

5. The method according to claim 4, further comprising determining the phase shift for a plurality of object planes.

6. The method according to claim 1, further comprising: determining, from P images selected from the plurality of captured images, the value of the light field (L) focused at distance F at M distinct values of u, M ≤ P, as the values of the light field verifying the system of equations:

$$\sum_{n=1}^{M} L_F\!\left(n+\left[(x-n)/\alpha_j\right],\,n\right)=\alpha_j^2 F^2 I_j(x),\quad \forall j\in\{1\ldots P\}\ \wedge\ \forall x\in\{1\ldots k\}$$

where P is the number of images considered for determining the light field, F is the focal length of the lens, $L_F$ is the value of the light field focused at distance F, $\alpha_j F$ is the focus distance of the image j and $I_j(x)$ is the intensity of the image j, and where [x] denotes the integer closest to x, obtaining as a result, for each image j with 1 ≤ j ≤ P, the light field $L_F(x)$ evaluated at the value of $u_j$ resulting from the fit, i.e., the view of the light field corresponding to the value $u_j$, where x and u are the two-dimensional vectors determining the position in the sensor and in the lens of the camera, respectively.

7. The method according to claim 6, wherein the value of the light field is determined by solving the system of equations by means of least squares, i.e., minimizing:

$$\left\|\sum_{n=1}^{M} L_F\!\left(n+\left[(x-n)/\alpha_j\right],\,n\right)-\alpha_j^2 F^2 I_j(x)\right\|_2.$$

8. The method according to claim 1, wherein the two images of each selected image pair are taken, respectively, on both sides of the focus.

9. The method according to claim 8, wherein the two images of each selected image pair are taken from symmetrical distances on both sides of the focus.

10. A device for determining the complex amplitude of the electromagnetic field associated with a scene, the device comprising: means for capturing images, comprising a lens of focal length F and an image sensor arranged parallel to the lens, at a certain distance from the lens in its image space; and processing means configured for carrying out step b) of the method according to claim 1.

11. The device according to claim 10, wherein the processing means are additionally configured for determining the wavefront by the expression:

$$W(x,y)=\sum_{p=0}^{N-1}\sum_{q=0}^{N-1} d_{pq}\,Z_{pq}(x,y)$$

where

$$Z_{pq}(x,y)=\frac{1}{N}\,e^{\frac{2\pi i}{N}(px+qy)}$$

for each 0 ≤ p, q ≤ N−1.

12. The device according to claim 10, wherein the processing means are additionally configured for determining the accumulated wavefront for a plurality of image pairs.

13. The device according to claim 12, wherein the processing means are additionally configured for determining a phase shift between two planes of the object space as the subtraction of the accumulated wavefronts to said planes.

14. The device according to claim 13, wherein the processing means are additionally configured for determining the phase shift for a plurality of object planes.

15. The device according to claim 10, wherein the processing means are additionally configured for: determining, from P images selected from the plurality of captured images, the value of the light field (L) focused at distance F at M distinct values of u, M ≤ P, as the values of the light field verifying the system of equations:

$$\sum_{n=1}^{M} L_F\!\left(n+\left[(x-n)/\alpha_j\right],\,n\right)=\alpha_j^2 F^2 I_j(x),\quad \forall j\in\{1\ldots P\}\ \wedge\ \forall x\in\{1\ldots k\}$$

where P is the number of images considered for determining the light field, F is the focal length of the lens, $L_F$ is the value of the light field focused at distance F, $\alpha_j F$ is the focus distance of the image j and $I_j(x)$ is the intensity of the image j, and where [x] denotes the integer closest to x, and obtaining as a result, for each image j with 1 ≤ j ≤ P, the light field $L_F(x)$ evaluated at the value of $u_j$ resulting from the fit, i.e., the view of the light field corresponding to the value $u_j$, where x and u are the two-dimensional vectors determining the position in the sensor and in the lens of the camera, respectively.

16. The device according to claim 15, wherein the processing means are additionally configured for determining the value of the light field by solving the system of equations by means of least squares, i.e., minimizing:

$$\left\|\sum_{n=1}^{M} L_F\!\left(n+\left[(x-n)/\alpha_j\right],\,n\right)-\alpha_j^2 F^2 I_j(x)\right\|_2.$$

17. The device according to claim 10, wherein the processing means are additionally configured for taking the two images of each selected image pair, respectively, on both sides of the focus.

18. The device according to claim 17, wherein the processing means are additionally configured for taking the two images of each selected image pair from symmetrical distances on both sides of the focus.

Description

DESCRIPTION OF THE DRAWINGS

[0035] To complement the description made below and for the purpose of helping to better understand the features of the invention according to a preferred practical embodiment thereof, a set of drawings is enclosed as an integral part of said description in which the following is depicted in an illustrative and non-limiting manner:

[0036] FIGS. 1 and 2 schematically depict part of the method of the invention.

[0037] FIG. 3 schematically depicts the light field between the lens and the sensor of a camera.

[0038] FIGS. 4 and 5 schematically exemplify a part of the method of the invention.

[0039] FIG. 6 schematically depicts obtaining the wavefront phase corresponding to different planes.

[0040] FIGS. 7 and 8 show image recompositions in transform domain and in measurement domain, respectively.

PREFERRED EMBODIMENT OF THE INVENTION

Two-Dimensional Wavefront Reconstruction

[0041] The method of the invention allows retrieving, from two or more defocused images, the Cartesian distribution of the horizontal and vertical slopes of the wavefront phase in a polynomial basis, which in turn allows using any method for phase recomposition from slopes, whether zonal (Hudgin, etc.) or modal. In the case of modal methods, the set of polynomials on which the wavefront phase map is expanded and fitted can be chosen according to the needs of the problem: Zernike polynomials (which coincide with the classic optical or Seidel aberrations), complex exponentials (which contain the Fourier transform kernel, the use of which accelerates computation), Karhunen-Loève functions (without any analytical form but constituting a basis on annular pupils, which are typical in telescopes), etc.

[0042] In general, the method for restoring the phase map from the expansion thereof in a set of polynomials Z.sub.j(x,y) comprises considering the wavefront phase at a point (x,y) as follows:


$$W(x,y)=\sum_{j=0}^{N-1} d_j\,Z_j(x,y)\qquad(1)$$

where N indicates the number of polynomials used in the expansion.

[0043] The horizontal and vertical Cartesian slopes, S.sup.x and S.sup.y respectively, correspond to the following partial derivatives of the wavefront:

$$S^x=\frac{\partial}{\partial x}W(x,y)=\sum_{j=0}^{N-1} d_j\,\frac{\partial}{\partial x}Z_j(x,y)\qquad(2)$$

$$S^y=\frac{\partial}{\partial y}W(x,y)=\sum_{j=0}^{N-1} d_j\,\frac{\partial}{\partial y}Z_j(x,y)\qquad(3)$$

[0044] A photon is assumed to propagate from a plane −z to a plane +z, and the wavefront at points (x, y) of the intermediate plane is estimated.

[0045] The propagated wavefront intensity is represented by a two-dimensional probability density function (PDF) taking into account the probability of occurrence of a photon, denoted $f_{XY}(x,y)$, together with the corresponding two-dimensional cumulative distribution function (CDF), denoted C(x, y).

[0046] The density function verifies:


$$\int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty} f_{XY}(x,y)\,dx\,dy=1$$

[0047] The marginal cumulative distribution function in the variable x is constructed as:


$$C_X(x)=\int_{-\infty}^{x} f_X(s)\,ds$$

where $f_X$ is the marginal density function constructed from the density function $f_{XY}$ as follows:


$$f_X(x)=\int_{-\infty}^{+\infty} f_{XY}(x,y)\,dy$$

[0048] The marginal density function conserves the property of defining a cumulative distribution function in the corresponding variable. Therefore,


$$\int_{-\infty}^{+\infty} f_X(x)\,dx=1$$
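The marginal construction just described can be sketched numerically. The following is a minimal numpy sketch (not part of the patent text): a small, hypothetical normalized image stands in for the discrete density $f_{XY}$, and the marginal density and its cumulative distribution are checked to conserve the normalization.

```python
import numpy as np

# Hypothetical 4x4 normalized image standing in for the density f_XY(x, y);
# rows index y, columns index x.
f_xy = np.arange(1.0, 17.0).reshape(4, 4)
f_xy /= f_xy.sum()                 # total probability sums to 1

# Marginal density f_X(x): integrate (here, sum) the density over y.
f_x = f_xy.sum(axis=0)

# Marginal cumulative distribution C_X(x): running integral of f_X.
c_x = np.cumsum(f_x)

assert np.isclose(f_x.sum(), 1.0)  # f_X is still a density
assert np.isclose(c_x[-1], 1.0)    # C_X increases to 1
assert np.all(np.diff(c_x) >= 0)   # C_X is non-decreasing
```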

[0049] Since there is data in planes −z and +z, corresponding to the two images considered, there are two cumulative distribution functions. The marginal cumulative distribution function in plane −z is denoted as C.sub.1X, and the marginal cumulative distribution function in plane +z is denoted as C.sub.2X.

[0050] Given that the method starts from the values of f.sub.XY in planes −z and +z, it is assumed that the data associated with plane −z is defined by f.sub.1XY and the data associated with plane +z is determined by f.sub.2XY:


$$f_{1X}(x)=\int_{-\infty}^{+\infty} f_{1XY}(x,y)\,dy,$$

$$f_{2X}(x)=\int_{-\infty}^{+\infty} f_{2XY}(x,y)\,dy,$$

and

$$C_{1X}(x)=\int_{-\infty}^{x} f_{1X}(s)\,ds,$$

$$C_{2X}(x)=\int_{-\infty}^{x} f_{2X}(s)\,ds.$$

[0051] A monotonically increasing sequence of real numbers s(j), 1 ≤ j ≤ T, with values between 0 and 1 is considered, i.e., 0 ≤ s(j) ≤ 1 for each 1 ≤ j ≤ T.

[0052] Histogram matching is performed on the marginal cumulative distribution functions to find the preimages, under each cumulative distribution function, of the values s(j). In other words, the value $u_{1X}(j)$ is sought that satisfies:


$$C_{1X}(u_{1X}(j))=s(j)$$

for each 1 ≤ j ≤ T, as well as the value $u_{2X}(j)$ that satisfies:


$$C_{2X}(u_{2X}(j))=s(j)$$

[0053] Therefore, $u_{1X}(j)$ and $u_{2X}(j)$ have been found for each fixed value of s(j). Graphically speaking, a search is conducted by scanning the x-axis for corresponding points, identifying all the ordinates, as schematically depicted in FIG. 1.
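The level matching just described amounts to a discrete inverse of the cumulative distribution. The following numpy sketch illustrates it under invented data (the profiles `i1`, `i2` and the levels `s` are hypothetical): for each level s(j) it returns the first grid position whose cumulative value reaches that level.

```python
import numpy as np

def match_levels(intensity, levels):
    """For each level s(j), return the smallest grid index u with
    C(u) >= s(j): a discrete inverse of the cumulative distribution."""
    f = np.asarray(intensity, dtype=float)
    f = f / f.sum()                 # normalize so f is a density
    cdf = np.cumsum(f)
    return np.searchsorted(cdf, levels)

# Two hypothetical defocused 1-D profiles (planes -z and +z).
i1 = np.array([0., 1., 4., 4., 1., 0.])
i2 = np.array([1., 2., 2., 2., 2., 1.])
s = np.array([0.25, 0.5, 0.75])     # monotone levels in (0, 1)

u1 = match_levels(i1, s)            # u_1X(j) in plane -z
u2 = match_levels(i2, s)            # u_2X(j) in plane +z
```

For each j, `u1[j]` and `u2[j]` are then treated as matched abscissae of the same ray in the two planes.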

[0054] More accurate values are obtained by now turning to the density function in the two variables: for each of these values, and for each value k from 1 to T, the values $u_{1Y}(k)$ and $u_{2Y}(k)$ are found that satisfy:


$$\int_{-\infty}^{u_{1Y}(k)}\int_{-\infty}^{u_{1X}(j)} f_{1XY}(x,y)\,dx\,dy=s(j)\,s(k)$$

and

$$\int_{-\infty}^{u_{2Y}(k)}\int_{-\infty}^{u_{2X}(j)} f_{2XY}(x,y)\,dx\,dy=s(j)\,s(k)$$

where the functions $f_{1XY}(x,y)$ and $f_{2XY}(x,y)$ correspond to the considered images $I_1(x,y)$ and $I_2(x,y)$, respectively.

[0055] Graphically, each corresponding value on the x-axis is associated with the ordinate that makes the preimages under the cumulative distribution function match up, as schematically depicted in FIG. 2.

[0056] The result is a two-dimensional mesh of points determined by


$$\{(u_{1X}(j),u_{1Y}(k)),\ j,k=1\ldots T\}\quad\text{at height }-z,$$

and

$$\{(u_{2X}(j),u_{2Y}(k)),\ j,k=1\ldots T\}\quad\text{at height }+z,$$

such that for each 1 ≤ j, k ≤ T, the points $(u_{1X}(j),u_{1Y}(k))$ and $(u_{2X}(j),u_{2Y}(k))$ are associated with the same ray of the wavefront.

[0057] The directional derivatives of the wavefront in the points of the intermediate plane can be considered determined by the expressions:

$$W_x\!\left(\frac{u_{1X}(j)+u_{2X}(j)}{2},\,\frac{u_{1Y}(k)+u_{2Y}(k)}{2}\right)=\frac{u_{2X}(j)-u_{1X}(j)}{2z},\qquad(4)$$

for each 1 ≤ j ≤ T, and

$$W_y\!\left(\frac{u_{1X}(j)+u_{2X}(j)}{2},\,\frac{u_{1Y}(k)+u_{2Y}(k)}{2}\right)=\frac{u_{2Y}(k)-u_{1Y}(k)}{2z},\qquad(5)$$

for each 1 ≤ k ≤ T.
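Equations (4) and (5) amount to a centered finite difference: the slope at the midpoint is the transverse displacement of the matched points divided by the plane separation 2z. A minimal sketch with invented matched abscissae:

```python
import numpy as np

# Hypothetical matched abscissae of the same rays in planes -z and +z,
# with half-separation z (so the plane separation is 2z = 1.0).
z = 0.5
u1x = np.array([0.0, 1.0, 2.0])     # u_1X(j) in plane -z
u2x = np.array([0.4, 1.2, 2.0])     # u_2X(j) in plane +z

x_mid = (u1x + u2x) / 2             # evaluation points of eq. (4)
slope_x = (u2x - u1x) / (2 * z)     # W_x at the midpoints

assert np.allclose(x_mid, [0.2, 1.1, 2.0])
assert np.allclose(slope_x, [0.4, 0.2, 0.0])
```

The vertical slopes of eq. (5) follow identically from the matched ordinates.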

[0058] Therefore, the system of equations (2) and (3) can be written as:

$$\frac{u_{2X}(j)-u_{1X}(j)}{2z}=\left.\left(\sum_{p=0}^{N-1} d_p\,\frac{\partial}{\partial x}Z_p(x,y)\right)\right|_{x=\frac{u_{1X}(j)+u_{2X}(j)}{2},\;y=\frac{u_{1Y}(k)+u_{2Y}(k)}{2}}$$

$$\frac{u_{2Y}(k)-u_{1Y}(k)}{2z}=\left.\left(\sum_{p=0}^{N-1} d_p\,\frac{\partial}{\partial y}Z_p(x,y)\right)\right|_{x=\frac{u_{1X}(j)+u_{2X}(j)}{2},\;y=\frac{u_{1Y}(k)+u_{2Y}(k)}{2}}$$

or in a simplified form:


$$S=A\,d\qquad(6)$$

where the unknown is the vector of coefficients d. Equation (6) represents an overdetermined system with more equations (2T²) than unknowns (N), the 2T² equations arising from the T² available points (x,y).

[0059] The coefficients d of the expansion can be found as the best fit on the plane in the sense of least squares. A preferred way to solve the preceding system is by means of least squares as:

$$d=(A^{T}A)^{-1}A^{T}S=A^{+}S\qquad(7)$$

[0060] Equation (7) can be solved by a number of techniques known to the person skilled in the art, depending on whether or not the matrix $A^{T}A$ is singular.
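The least-squares fit of equation (7) can be sketched as follows. The three-function basis {x, y, xy} and the synthetic slope data are assumptions made for the illustration; `np.linalg.lstsq` also covers the rank-deficient case that a direct inverse of $A^{T}A$ would not.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical modal basis Z_p(x, y) = {x, y, x*y}: its x- and y-derivatives
# are known analytically, so each measured slope gives one linear equation.
def dzdx(x, y):  # rows: points, columns: basis functions
    return np.stack([np.ones_like(x), np.zeros_like(x), y], axis=1)

def dzdy(x, y):
    return np.stack([np.zeros_like(x), np.ones_like(x), x], axis=1)

d_true = np.array([0.5, -1.0, 2.0])   # coefficients to recover
x = rng.uniform(-1, 1, 50)
y = rng.uniform(-1, 1, 50)

# Stack the x-slope and y-slope equations: S = A d (eq. 6), 100 eqs, 3 unknowns.
A = np.vstack([dzdx(x, y), dzdy(x, y)])
S = A @ d_true                        # noiseless synthetic slope measurements

# Least-squares solution d = (A^T A)^{-1} A^T S = A^+ S (eq. 7).
d_fit, *_ = np.linalg.lstsq(A, S, rcond=None)

assert np.allclose(d_fit, d_true)
```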

[0061] In a particular embodiment, the wavefront is expanded as a function of complex exponentials. The expansion is truncated at a certain N ≥ 1 such that it can be written as

$$W(x,y)=\sum_{p=0}^{N-1}\sum_{q=0}^{N-1} d_{pq}\,Z_{pq}(x,y)$$

where $(d_{pq})_{p,q\geq 0}$ is a doubly indexed family of coefficients, and where

$$Z_{pq}(x,y)=\frac{1}{N}\,e^{\frac{2\pi i}{N}(px+qy)}\qquad(8)$$

for each 0 ≤ p, q ≤ N−1.

[0062] At this point a least squares problem can be solved with the obtained data because, by differentiating the expansion with respect to x or y, the following is deduced:

$$\frac{\partial}{\partial x}W(x,y)=\sum_{p=0}^{N-1}\sum_{q=0}^{N-1} d_{pq}\,\frac{\partial}{\partial x}Z_{pq}(x,y),\qquad(9)$$

$$\frac{\partial}{\partial y}W(x,y)=\sum_{p=0}^{N-1}\sum_{q=0}^{N-1} d_{pq}\,\frac{\partial}{\partial y}Z_{pq}(x,y),\qquad(10)$$

[0063] Therefore, for each 0 ≤ p, q ≤ N−1:

$$\frac{\partial}{\partial x}Z_{pq}(x,y)=Z_{pq}\,\frac{2\pi i p}{N},\qquad \frac{\partial}{\partial y}Z_{pq}(x,y)=Z_{pq}\,\frac{2\pi i q}{N}.$$

[0064] By evaluating at the midpoints, taking into account expressions (4) and (5), and replacing these values in equations (9) and (10), the following overdetermined system is arrived at:

$$\frac{u_{2X}(j)-u_{1X}(j)}{2z}=\sum_{p=0}^{N-1}\sum_{q=0}^{N-1} d_{pq}\,\frac{2\pi i p}{N}\,Z_{pq}\!\left(\frac{u_{1X}(j)+u_{2X}(j)}{2},\,\frac{u_{1Y}(k)+u_{2Y}(k)}{2}\right)$$

$$\frac{u_{2Y}(k)-u_{1Y}(k)}{2z}=\sum_{p=0}^{N-1}\sum_{q=0}^{N-1} d_{pq}\,\frac{2\pi i q}{N}\,Z_{pq}\!\left(\frac{u_{1X}(j)+u_{2X}(j)}{2},\,\frac{u_{1Y}(k)+u_{2Y}(k)}{2}\right)$$

with N² unknowns and 2T² equations. The value of T is determined by the data and is assumed to be such that 2T² is much greater than the number of addends in the expansion of the phase in terms of exponentials.

[0065] In this case, the coefficients of the expansion can be obtained from the expression:

$$d_{pq}=\frac{-2\left[\,i\sin(\pi p/N)\,\mathrm{DF}\{S^x\}+i\sin(\pi q/N)\,\mathrm{DF}\{S^y\}\,\right]}{4\left[\sin^2(\pi p/N)+\sin^2(\pi q/N)\right]}$$

where DF denotes the discrete Fourier transform.
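A sketch of evaluating this closed-form expression with numpy's FFT follows. The slope maps are random placeholders, and setting the (p, q) = (0, 0) term to zero is an assumption: the expression above is singular there, and that term (the piston) does not affect the slopes.

```python
import numpy as np

def dpq_from_slopes(sx, sy):
    """Evaluate the closed-form d_pq from N x N slope maps S^x, S^y,
    with DF taken as the discrete Fourier transform (np.fft.fft2).
    The (0, 0) term has a vanishing denominator and is left at 0."""
    n = sx.shape[0]
    p = np.arange(n).reshape(-1, 1)       # row index p
    q = np.arange(n).reshape(1, -1)       # column index q
    num = -2 * (1j * np.sin(np.pi * p / n) * np.fft.fft2(sx)
                + 1j * np.sin(np.pi * q / n) * np.fft.fft2(sy))
    den = 4 * (np.sin(np.pi * p / n) ** 2 + np.sin(np.pi * q / n) ** 2)
    d = np.zeros_like(num)
    np.divide(num, den, out=d, where=den > 0)   # skip the singular (0,0) term
    return d

# Hypothetical 8x8 slope maps.
rng = np.random.default_rng(1)
d = dpq_from_slopes(rng.normal(size=(8, 8)), rng.normal(size=(8, 8)))
assert d.shape == (8, 8) and d[0, 0] == 0 and np.all(np.isfinite(d))
```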

Tomographic Restoration of the Image

[0066] The method of the invention provides a two-dimensional restoration of the wavefront phase from the defocused images. The obtained wavefront phase corresponds to the phase differences accumulated up to the conjugate position in the object space. In other words, if two defocused images are taken so far from the focus of the lens that they almost correspond to images taken in the pupil (or with very little separation from the entrance pupil of the optical system), the phase accumulated over the entire field of view of the scene up to its arrival at the objective would be obtained. As the defocused image pair used approaches the focus, the conjugate plane in the object space corresponds to a plane farther away from the entrance pupil, and it describes the phase accumulated in the scene up to that plane.

[0067] The difference between both accumulated phases provides the phase shift present between the farthest plane and the pupil plane of the optical system. Therefore, the greater the number of defocused images used, the more complete the discretization of the object space and the obtained tomographic distribution of the wavefront phase will be. This tomographic distribution of the wavefront phase will have the original two-dimensional optical resolution associated with the capture sensor and the three-dimensional resolution (in optical axis z) that the number of images used allows. It should be pointed out that the three-dimensional resolution does not strictly coincide with the number of planes or defocused images acquired, as it is possible to consider any pair of acquisition planes for obtaining subdiscretization of accumulated wavefront phases, as schematically depicted in FIG. 6.

[0068] With planes $I_a$ and $I_b$, the accumulated phase $W_1(x,y)$ to the pupil is found. With $I_{a'}$ and $I_{b'}$, the accumulated phase $W_2(x,y)$ is found. The difference between $W_2$ and $W_1$ provides the phase in the section indicated by the brace in FIG. 6. By using more planes (more captured images), the resolution of the phase along axis z is increased, and a three-dimensional map of the wavefront phase is obtained.

[0069] The method of the present invention can be applied in any technical field in which the wavefront associated with the observation of a scene is to be known, including computational photography and adaptive optics, particularly in applications relating to astronomical observations to obtain a three-dimensional map of turbulences (wavefront phases) associated with a column of the atmosphere, in applications in which it is necessary to correct the view through turbulent media (for example in augmented reality glasses, mobiles, microscopes, or endoscopes), in applications for the tomographic measurement of variations in refractive index in transparent organic tissue samples or in applications of optic communications through turbulent media (atmosphere, ocean, body fluids, etc.).

Image Intensity Recomposition

[0070] The light field L is a four-dimensional representation of the light rays going through the objective of a camera. For the sake of simplicity, a simplified two-dimensional notation will be used. Therefore, $L_F(x,u)$ represents the ray going through the main lens of the camera at position $u=(u_1,u_2)$ and arriving at the sensor at position $x=(x_1,x_2)$, for a camera of focal length F, as depicted in FIG. 3.

[0071] Therefore, there is a four-dimensional volume representing all the rays entering the camera and their positions of arrival at the sensor. Ng (Ng, R., Fourier slice photography, in ACM Transactions on Graphics (TOG), Vol. 24, No. 3, pp. 735-744, ACM, 2005, July) demonstrates that the image that would be projected onto the sensor, if said sensor were at distance αF, corresponds to a two-dimensional projection of the light field at angle $\theta=\tan^{-1}(1/\alpha)$:

$$I_\alpha(x)=\frac{1}{\alpha^2 F^2}\int L_F\!\left(u+\frac{x-u}{\alpha},\,u\right)du,$$

as schematically depicted in FIG. 4.

[0072] The method of the invention is based on interpreting $I_\alpha(x)$ as a sum of images at different values of u displaced with respect to one another, as schematically depicted in FIG. 5, and on estimating the images at different values of u by finding which set of images, displaced by a value α′ and added together, best approximates the input image captured with a focus distance Fα′. The displacement in the x dimension (in pixels) is therefore u+(x−u)/α′.
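The shift-and-add interpretation can be sketched as below; the small 1-D light field is invented for the illustration, and nearest-integer rounding stands in for whatever interpolation an implementation might use.

```python
import numpy as np

def refocus(light_field, alpha):
    """Shift-and-add refocusing of a 1-D light field L_F[x, u]:
    a discrete sketch of the projection above, where out-of-range
    shifted samples are simply skipped."""
    n_x, n_u = light_field.shape
    image = np.zeros(n_x)
    for x in range(n_x):
        for u in range(n_u):
            xs = u + round((x - u) / alpha)   # shifted sample position
            if 0 <= xs < n_x:
                image[x] += light_field[xs, u]
    return image

# Hypothetical 8x2 light field with two identical views: refocusing at
# alpha = 1 just sums the views sample by sample.
view = np.arange(8.0)
lf = np.stack([view, view], axis=1)
assert np.allclose(refocus(lf, 1.0), 2 * view)
```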

[0073] The method comprises estimating the value of the light field focused at distance F ($L_F$) at M distinct values of u, from P images ($I_1(x), I_2(x), \ldots, I_P(x)$) focused at distances $\alpha_1 F, \alpha_2 F, \ldots, \alpha_P F$ and captured with a conventional photographic camera. To that end, the method seeks the values of the light field such that the following is met:

$$\sum_{n=1}^{M} L_F\!\left(n+\left[(x-n)/\alpha_j\right],\,n\right)=\alpha_j^2 F^2 I_j(x),\quad\forall j\in\{1\ldots P\},\ \forall x\in\{1\ldots k\}$$

[0074] The preceding expression can be represented simply by a linear system of equations of the type Ax=b. This system can be solved by finding the x that minimizes $\|Ax-b\|^2$.

[0075] Up until now, single-channel images have been assumed. In the case of color images (having multiple channels), it is enough to generate the matrix A once; a new vector b containing the information of the images in the channel to be solved is then created for each channel.

[0076] The method for recomposing the intensity of the image according to the invention allows generating a single image that is completely focused and at full optical resolution (all-in-focus), generating an all-in-focus stereo pair, generating an all-in-focus multi-stereo image (light field), and generating a light field refocused at will, with applications in microscopy, photography, endoscopy, cinematography, etc.

Example

[0077] Assume two images of 8 elements, $I_1(x)$ and $I_2(x)$, focused at distances $\alpha_1=2$ and $\alpha_2=4$, with F=1 m. The summation in this case runs from n=1 to n=2 (M=2).

[0078] The equations for j=1 are:

$$L_F\!\left(1+\left[\tfrac{1-1}{2}\right],1\right)+L_F\!\left(2+\left[\tfrac{1-2}{2}\right],2\right)=2^2\,I_1(1)$$
$$L_F\!\left(1+\left[\tfrac{2-1}{2}\right],1\right)+L_F\!\left(2+\left[\tfrac{2-2}{2}\right],2\right)=2^2\,I_1(2)$$
$$L_F\!\left(1+\left[\tfrac{3-1}{2}\right],1\right)+L_F\!\left(2+\left[\tfrac{3-2}{2}\right],2\right)=2^2\,I_1(3)$$
$$\vdots$$
$$L_F\!\left(1+\left[\tfrac{8-1}{2}\right],1\right)+L_F\!\left(2+\left[\tfrac{8-2}{2}\right],2\right)=2^2\,I_1(8)$$

and for j=2

$$L_F\!\left(1+\left[\tfrac{1-1}{4}\right],1\right)+L_F\!\left(2+\left[\tfrac{1-2}{4}\right],2\right)=4^2\,I_2(1)$$
$$L_F\!\left(1+\left[\tfrac{2-1}{4}\right],1\right)+L_F\!\left(2+\left[\tfrac{2-2}{4}\right],2\right)=4^2\,I_2(2)$$
$$L_F\!\left(1+\left[\tfrac{3-1}{4}\right],1\right)+L_F\!\left(2+\left[\tfrac{3-2}{4}\right],2\right)=4^2\,I_2(3)$$
$$\vdots$$
$$L_F\!\left(1+\left[\tfrac{8-1}{4}\right],1\right)+L_F\!\left(2+\left[\tfrac{8-2}{4}\right],2\right)=4^2\,I_2(8)$$

[0079] Expanding:


$$L_F(1,1)+L_F(2,2)=2^2 I_1(1)$$
$$L_F(2,1)+L_F(2,2)=2^2 I_1(2)$$
$$L_F(2,1)+L_F(3,2)=2^2 I_1(3)$$
$$L_F(3,1)+L_F(3,2)=2^2 I_1(4)$$
$$L_F(3,1)+L_F(4,2)=2^2 I_1(5)$$
$$L_F(4,1)+L_F(4,2)=2^2 I_1(6)$$
$$L_F(4,1)+L_F(5,2)=2^2 I_1(7)$$
$$L_F(5,1)+L_F(5,2)=2^2 I_1(8)$$
$$L_F(1,1)+L_F(2,2)=4^2 I_2(1)$$
$$L_F(1,1)+L_F(2,2)=4^2 I_2(2)$$
$$L_F(2,1)+L_F(2,2)=4^2 I_2(3)$$
$$L_F(2,1)+L_F(3,2)=4^2 I_2(4)$$
$$L_F(2,1)+L_F(3,2)=4^2 I_2(5)$$
$$L_F(2,1)+L_F(3,2)=4^2 I_2(6)$$
$$L_F(3,1)+L_F(3,2)=4^2 I_2(7)$$
$$L_F(3,1)+L_F(4,2)=4^2 I_2(8)$$

[0080] In matrix form:

$$\begin{pmatrix}
1&0&0&0&0&0&0&0&0&1&0&0&0&0&0&0\\
0&1&0&0&0&0&0&0&0&1&0&0&0&0&0&0\\
0&1&0&0&0&0&0&0&0&0&1&0&0&0&0&0\\
0&0&1&0&0&0&0&0&0&0&1&0&0&0&0&0\\
0&0&1&0&0&0&0&0&0&0&0&1&0&0&0&0\\
0&0&0&1&0&0&0&0&0&0&0&1&0&0&0&0\\
0&0&0&1&0&0&0&0&0&0&0&0&1&0&0&0\\
0&0&0&0&1&0&0&0&0&0&0&0&1&0&0&0\\
1&0&0&0&0&0&0&0&0&1&0&0&0&0&0&0\\
1&0&0&0&0&0&0&0&0&1&0&0&0&0&0&0\\
0&1&0&0&0&0&0&0&0&1&0&0&0&0&0&0\\
0&1&0&0&0&0&0&0&0&0&1&0&0&0&0&0\\
0&1&0&0&0&0&0&0&0&0&1&0&0&0&0&0\\
0&1&0&0&0&0&0&0&0&0&1&0&0&0&0&0\\
0&0&1&0&0&0&0&0&0&0&1&0&0&0&0&0\\
0&0&1&0&0&0&0&0&0&0&0&1&0&0&0&0
\end{pmatrix}
\begin{pmatrix}
L_F(1,1)\\ L_F(2,1)\\ L_F(3,1)\\ L_F(4,1)\\ L_F(5,1)\\ L_F(6,1)\\ L_F(7,1)\\ L_F(8,1)\\
L_F(1,2)\\ L_F(2,2)\\ L_F(3,2)\\ L_F(4,2)\\ L_F(5,2)\\ L_F(6,2)\\ L_F(7,2)\\ L_F(8,2)
\end{pmatrix}
=
\begin{pmatrix}
2^2 I_1(1)\\ 2^2 I_1(2)\\ 2^2 I_1(3)\\ 2^2 I_1(4)\\ 2^2 I_1(5)\\ 2^2 I_1(6)\\ 2^2 I_1(7)\\ 2^2 I_1(8)\\
4^2 I_2(1)\\ 4^2 I_2(2)\\ 4^2 I_2(3)\\ 4^2 I_2(4)\\ 4^2 I_2(5)\\ 4^2 I_2(6)\\ 4^2 I_2(7)\\ 4^2 I_2(8)
\end{pmatrix}$$

[0081] Solving the preceding system provides the values of the light field $L_F$. The values of the light field that are not defined in any equation of the system take the value 0.
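Under the assumption that [·] resolves ties upward (consistent with the expanded equations of this example, where [0.5] = 1 and [−0.5] = 0), the example's system matrix can be rebuilt programmatically; this sketch only rebuilds and checks A.

```python
import numpy as np

def nearest_int(v):
    # Ties resolve upward: [0.5] -> 1, [-0.5] -> 0 (assumption matching
    # the expanded equations of the example).
    return int(np.floor(v + 0.5))

n_x, n_u = 8, 2
alphas = [2.0, 4.0]                     # alpha_1 = 2, alpha_2 = 4
rows = []
for alpha in alphas:
    for x in range(1, n_x + 1):         # 1-based indices, as in the text
        row = np.zeros(n_x * n_u)
        for u in range(1, n_u + 1):
            xs = u + nearest_int((x - u) / alpha)
            # Unknowns ordered L_F(1..8, 1) then L_F(1..8, 2).
            row[(u - 1) * n_x + (xs - 1)] = 1.0
        rows.append(row)
A = np.array(rows)

# Row 1 must read L_F(1,1) + L_F(2,2); row 16 must read L_F(3,1) + L_F(4,2).
assert A[0, 0] == 1 and A[0, 9] == 1 and A[0].sum() == 2
assert A[15, 2] == 1 and A[15, 11] == 1 and A[15].sum() == 2
```

Given the intensities, the right-hand side b stacks $2^2 I_1$ and $4^2 I_2$, and `np.linalg.lstsq(A, b, rcond=None)` yields the light-field values; since lstsq returns the minimum-norm solution, unknowns touched by no equation (all-zero columns) stay at 0, as stated above.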

[0082] FIG. 7 shows the image recomposition of a scene performed in the transform domain, according to a method from the state of the art. FIG. 8 shows the image recomposition of the same scene performed in the measurement domain, using the method of the present invention for obtaining the light field from defocused images. Although the images of FIGS. 7 and 8 are not normalized to the same signal level, it can be seen that the recomposition performed in the measurement domain is better defined and sharper at the edges of the resolution test figures. The boxed and enlarged area clearly illustrates the difference in quality between the two retrievals.