Process and apparatus for the capture of plenoptic images between arbitrary planes
11523097 · 2022-12-06
Assignee
Inventors
- Milena D'Angelo (Bari BA, IT)
- Augusto Garuccio (Bari BA, IT)
- Francesco Vincenzo Pepe (Bari BA, IT)
- Francesco Maria Di Lena (Casamassima BA, IT)
CPC classification
H04N13/239
ELECTRICITY
H04N13/122
ELECTRICITY
H04N13/254
ELECTRICITY
G02B27/0075
PHYSICS
International classification
H04N13/254
ELECTRICITY
H04N13/122
ELECTRICITY
H04N13/239
ELECTRICITY
Abstract
A process and an apparatus for the plenoptic capture of photographic or cinematographic images of an object or a 3D scene (10) of interest are based on a correlated light emitting source and on correlation measurements, along the lines of “Correlation Plenoptic Imaging” (CPI). A first image sensor (Da) and a second image sensor (Db) detect images along the paths of a first light beam (a) and a second light beam (b), respectively. A processing unit (100), which processes the intensities detected by the synchronized image sensors (Da, Db), is configured to retrieve the propagation direction of light by measuring spatio-temporal correlations between the light intensities detected in the image planes of at least two arbitrary planes (P′, P″; D′b, D″a) chosen in the vicinity of the object or within the 3D scene (10).
Claims
1. A process for the plenoptic capture of photographic, cinematographic, microscopic or stereoscopic images of an object or a 3D scene of interest, comprising the steps of: providing a light emitting source; generating a first light beam (a) and a second light beam (b) coming from said light emitting source; directing said first light beam (a) towards a first image sensor (Da) and said second light beam (b) towards a second image sensor (Db), said second light beam (b) being adapted to be either reflected by said object or 3D scene (10) or transmitted through said object or 3D scene (10); retrieving, via the second image sensor, a focused image of a first arbitrary plane chosen in a vicinity of the object or within the 3D scene (10), and retrieving a non-focused image of a second arbitrary plane chosen in the vicinity of the object or within the 3D scene (10), wherein the first arbitrary plane and the second arbitrary plane are planes other than a focusing element plane or a light source plane; retrieving, via the first image sensor, one of a focused image or a ghost image of the second arbitrary plane; and retrieving the propagation direction of light by measuring spatio-temporal correlations between the light intensities detected by said first and second image sensors (Da, Db) in the image planes of said first and second arbitrary planes (P′, P″; D′b, D″a).
2. The process according to claim 1, wherein said light emitting source is selected from an object or 3D scene (10) which reflects/transmits the illuminating chaotic light and a chaotic light emitting object or 3D scene (10).
3. The process according to claim 1, wherein the information about the propagation direction of light is obtained by measuring the correlation between the intensity fluctuations retrieved at points ρ.sub.a and ρ.sub.b on said image sensors (D.sub.a, D.sub.b) according to the correlation function
G(ρ.sub.a, ρ.sub.b)=⟨ΔI.sub.a(ρ.sub.a)ΔI.sub.b(ρ.sub.b)⟩
where i=a, b and ΔI.sub.i(ρ.sub.i)=I.sub.i(ρ.sub.i)−⟨I.sub.i(ρ.sub.i)⟩ are the intensity fluctuations at points ρ.sub.i on the image sensor D.sub.i, the symbol ⟨ . . . ⟩ denoting the statistical average over the emitted light.
4. The process according to claim 1, wherein the depth of field (DOF) of the original image is enhanced by reconstructing the direction of light between said two arbitrary planes (P′, P″; D′.sub.b, D″.sub.a) and by retracing the light paths to obtain refocused images.
5. The process according to claim 4, wherein the image of the object with enhanced depth of field or the 3D image of the scene is reconstructed by stacking said refocused images.
6. The process according to claim 1, wherein a primary beam (5) of chaotic light coming from said object or 3D scene (10) is collected by a main lens (L.sub.f), and wherein said first light beam (a) and said second light beam (b) are generated by splitting said primary light beam (5) by means of a splitting element (20).
7. The process according to claim 6, wherein said first image sensor (D.sub.a) is placed at a distance (z′.sub.a) from the back principal plane of said main lens (L.sub.f) and retrieves the focused image of a first plane (P′) at a distance (z.sub.a) from the front principal plane of said main lens (L.sub.f), and wherein said second image sensor (D.sub.b) is placed at a distance (z′.sub.b) from the back principal plane of said main lens (L.sub.f) and retrieves the focused image of a second plane (P″) at a distance (z.sub.b) from the front principal plane of said main lens (L.sub.f).
8. The process according to claim 1, wherein said first light beam (a) and said second light beam (b) are quantum entangled beams generated by a source of entangled photons or beams (30).
9. The process according to claim 8, wherein two identical lenses (L.sub.2) of focal length f.sub.2 are placed in such a way that their back principal planes are at distances (z′.sub.a) and (z′.sub.b) from the image sensors (D.sub.a) and (D.sub.b), respectively, and define two conjugate planes (D′.sub.a) and (D′.sub.b) at distances z.sub.a=(1/f.sub.2−1/z′.sub.a).sup.−1 and z.sub.b=(1/f.sub.2−1/z′.sub.b).sup.−1, respectively, from the front principal planes of said lenses (L.sub.2).
10. The process according to claim 8, wherein an additional lens (L.sub.1) with focal length (f.sub.1) collects the two correlated beams emitted by said entangled photon or beam source (30).
11. The process according to claim 8, wherein said object or 3D scene (10) is placed in the optical path of one of said light beams (a, b).
12. The process according to claim 8, wherein a plane (D″.sub.a) parallel to said planes (D′.sub.a, D′.sub.b) is defined along the optical path of said second light beam (b) at a distance (z.sub.a) from the relevant lens (L.sub.2), and wherein a “ghost image” of the plane (D″.sub.a) is reproduced in the plane (D′.sub.a) when correlation or coincidence measurements are performed between said image sensors (D.sub.a) and (D.sub.b).
13. An apparatus for the plenoptic capture of photographic, cinematographic, microscopic or stereoscopic images of an object or a 3D scene (10) of interest, comprising: a first image sensor (D.sub.a) to detect images coming from said object or 3D scene (10) along a path of a first light beam (a); a second image sensor (D.sub.b) to detect images coming from said object or 3D scene (10) along a path of a second light beam (b); a processing unit (100) of the intensities detected by said image sensors (D.sub.a, D.sub.b); wherein said processing unit is configured to: retrieve, via the second image sensor, a focused image of a first arbitrary plane chosen in a vicinity of the object or within the 3D scene, and retrieve a non-focused image of a second arbitrary plane chosen in the vicinity of the object or within the 3D scene, wherein the first arbitrary plane and the second arbitrary plane are planes other than a focusing element plane or a light source plane; retrieve, via the first image sensor, one of a focused image or a ghost image of the second arbitrary plane; and retrieve the propagation direction of light by measuring spatio-temporal correlations between the light intensities detected by said first and second image sensors (D.sub.a, D.sub.b) in the image planes of the first and second arbitrary planes (P′, P″; D′.sub.b, D″.sub.a).
14. The apparatus according to claim 13, further including a main lens (L.sub.f), wherein the focused images of said at least two arbitrary planes (P′, P″) are retrieved at different distances (z.sub.a) and (z.sub.b) from the front principal plane of said main lens (L.sub.f).
15. The apparatus according to claim 13, wherein said first image sensor (D.sub.a) is placed at a distance (z′.sub.a) from the back principal plane of said main lens (L.sub.f) and said second image sensor (D.sub.b) is placed at a distance (z′.sub.b) from the back principal plane of said main lens (L.sub.f).
16. The apparatus according to claim 13, further including a splitting element (20) placed between said main lens (L.sub.f) and said image sensors (D.sub.a, D.sub.b) so as to generate said first light beam (a) and said second light beam (b) from a primary light beam (5) coming from said object or 3D scene (10) through said main lens (L.sub.f).
17. The apparatus according to claim 13, further including a light emitting source, wherein said light emitting source is an object or 3D scene (10) which reflects/transmits the light from a chaotic source or it is the same object or 3D scene (10) which emits the chaotic light.
18. The apparatus according to claim 13 which is configured to detect said first light beam (a) and said second light beam (b) from an entangled photons or beams light source (30).
19. The apparatus according to claim 13, wherein two identical lenses (L.sub.2) of focal length (f.sub.2) are placed in such a way that their back principal planes are at distances (z′.sub.a) and (z′.sub.b) from the image sensors (D.sub.a) and (D.sub.b), respectively, along the respective paths of said first light beam (a) and said second light beam (b) and define two conjugate planes (D′.sub.a) and (D′.sub.b) at distances z.sub.a=(1/f.sub.2−1/z′.sub.a).sup.−1 and z.sub.b=(1/f.sub.2−1/z′.sub.b).sup.−1, respectively, from the front principal planes of said lenses (L.sub.2).
20. The apparatus according to claim 13, wherein the focused images of said at least two arbitrary planes (D′.sub.b, D″.sub.a) are retrieved at different distances (z′.sub.a) and (z′.sub.b) from the back principal planes of said lenses (L.sub.2).
21. The apparatus according to claim 13, further including an additional lens (L.sub.1) with focal length (f.sub.1) placed between said entangled photon or beam source (30) and said two identical lenses (L.sub.2).
22. The apparatus according to claim 13, wherein said first image sensor (D.sub.a) and said second image sensor (D.sub.b) are distinct, synchronized, image sensor devices.
23. The apparatus according to claim 13, wherein said first image sensor and said second image sensor are two disjoint parts of a same image sensor device.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) Further characteristics and advantages of the present invention will be more evident in the following description, given for illustrative purposes by referring to the attached figures, in which:
(2)
(3)
(4)
(5)
DETAILED DESCRIPTION
(6) In the schematic view of the apparatus shown in
(7) A primary light beam 5 coming from the object 10 is collected by a lens L.sub.f, supposed to be a thin lens in the figure, with positive focal length f and diameter D; the latter can also be the diameter of any other entrance pupil. After the lens L.sub.f, a beam splitter 20 separates the collected light into two beams, denoted as a and b, impinging on image sensors D.sub.a and D.sub.b, respectively. A processing unit 100 is connected to the image sensors D.sub.a, D.sub.b to process the intensities detected by the synchronized image sensors.
(8) The image sensors D.sub.a and D.sub.b, placed, respectively, at distances z′.sub.a and z′.sub.b from the thin lens L.sub.f (on the image side), retrieve the focused images of the planes P′ and P″ at distances z.sub.a and z.sub.b from the thin lens L.sub.f (on the object side), according to the equation:
1/z.sub.j+1/z′.sub.j=1/f,
with magnification
M.sub.j=−z′.sub.j/z.sub.j,
with image resolution Δx.sub.j on the two planes
Δx.sub.j=0.61λz.sub.j/D,
and with depth of field Δz.sub.j
Δz.sub.j=1.22λ(z.sub.j/D).sup.2
where j=a, b in all the equations above.
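The four relations above can be evaluated together for each arm. The following is a minimal illustrative sketch in Python; the numerical values (focal length, aperture, wavelength, plane distances) are hypothetical and chosen only for illustration, not taken from the patent:

```python
import math

def imaging_parameters(z, f, D, wavelength):
    """Thin-lens conjugate distance, magnification, resolution and depth
    of field for an object plane at distance z from a lens of focal
    length f and diameter D (all lengths in meters)."""
    z_prime = 1.0 / (1.0 / f - 1.0 / z)     # from 1/z + 1/z' = 1/f
    M = -z_prime / z                         # magnification M = -z'/z
    dx = 0.61 * wavelength * z / D           # resolution Δx = 0.61 λ z / D
    dz = 1.22 * wavelength * (z / D) ** 2    # depth of field Δz = 1.22 λ (z/D)²
    return z_prime, M, dx, dz

# Hypothetical example: f = 100 mm lens, D = 25 mm aperture, green light,
# with the two planes P' and P'' at z_a = 0.30 m and z_b = 0.40 m
for z in (0.30, 0.40):
    zp, M, dx, dz = imaging_parameters(z, f=0.100, D=0.025, wavelength=532e-9)
    print(f"z={z:.2f} m -> z'={zp:.3f} m, M={M:.2f}, "
          f"Δx={dx * 1e6:.1f} µm, Δz={dz * 1e6:.1f} µm")
```

When the separation between P′ and P″ exceeds the computed Δz.sub.j, the volume between the two planes falls outside the natural depth of field of either sensor, which is the regime addressed in the next paragraph.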
(9) In the interesting cases in which the distance between the two planes P′ and P″ is larger than this natural depth of field, the intensities at the two image sensors D.sub.a and D.sub.b do not encode any relevant information about the volume enclosed by the two planes. For performing “Correlation Plenoptic Imaging between Arbitrary Planes” (CPI-AP), the correlation is measured between the intensity fluctuations retrieved at points ρ.sub.a and ρ.sub.b on the two synchronized image sensors D.sub.a and D.sub.b, respectively; the correlation function is thus
G(ρ.sub.a,ρ.sub.b)=⟨ΔI.sub.a(ρ.sub.a)ΔI.sub.b(ρ.sub.b)⟩ (1)
where i=a, b and ΔI.sub.i(ρ.sub.i)=I.sub.i(ρ.sub.i)−⟨I.sub.i(ρ.sub.i)⟩ are the intensity fluctuations at points ρ.sub.i on the image sensor D.sub.i, and the symbol ⟨ . . . ⟩ denotes the statistical average over the emitted light. For light emitted by a stationary and ergodic source, the statistical average can be replaced by a time average over a sequence of frames.
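Under the stationarity and ergodicity assumption just stated, Equation (1) can be estimated from a stack of synchronized frames. A minimal sketch (the synthetic exponentially distributed frames stand in for chaotic-light intensities and are purely illustrative):

```python
import numpy as np

def correlation_function(frames_a, frames_b):
    """Estimate G(ρa, ρb) = ⟨ΔIa(ρa) ΔIb(ρb)⟩ from N synchronized frames.
    frames_a: (N, Ha, Wa) intensities on sensor Da
    frames_b: (N, Hb, Wb) intensities on sensor Db
    Returns G with shape (Ha, Wa, Hb, Wb)."""
    dIa = frames_a - frames_a.mean(axis=0)   # ΔIa = Ia − ⟨Ia⟩
    dIb = frames_b - frames_b.mean(axis=0)   # ΔIb = Ib − ⟨Ib⟩
    N = frames_a.shape[0]
    # time average of the fluctuation product, replacing the statistical average
    return np.einsum('nij,nkl->ijkl', dIa, dIb) / N

# Synthetic demonstration: identical frames on both sensors
rng = np.random.default_rng(0)
frames = rng.exponential(size=(500, 8, 8))   # chaotic-light-like intensity statistics
G = correlation_function(frames, frames)
# the "diagonal" G(ρ, ρ) is then the intensity variance at ρ, hence positive
```

In an actual CPI-AP measurement the two frame stacks come from the two distinct, synchronized sensors D.sub.a and D.sub.b, and the number of frames controls the statistical noise of the estimate.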
(10) By propagating the electromagnetic field in the setup described above, and indicating by A(ρ.sub.0) the aperture function of the object, placed at a distance z from the front principal plane of the lens L.sub.f, and by P(ρ.sub.l) the lens pupil function, one gets
(11)
with j=a, b, p.sub.a* is the complex conjugate of p.sub.a, ρ.sub.0 is the coordinate on the object plane and ρ.sub.l the coordinate on the lens plane. By integrating G(ρ.sub.a, ρ.sub.b) over one of the two image sensor coordinates, say ρ.sub.a (or ρ.sub.b), one would get the focused incoherent image of the plane placed at a distance z.sub.b (or z.sub.a) from the front principal plane of the lens L.sub.f. In particular, if the object is placed exactly at z=z.sub.b (or z=z.sub.a), such integral gives an image of the object (known as a “ghost image”) characterized by the same resolution, depth of field and magnification as the image directly retrieved by D.sub.b (or D.sub.a) through intensity measurements. In other words, the “ghost image” has the same characteristics as the standard image.
(12) If the object is placed in z≠z.sub.b (or z≠z.sub.a), such integral gives an out-of-focus image of the object.
(13) However, the dependence of G(ρ.sub.a, ρ.sub.b) on both the planar coordinates ρ.sub.a and ρ.sub.b is much more informative: it makes it possible to reconstruct the direction of light between the two planes, and beyond them, and thus to refocus the object aperture function independently of its location within the setup. This can be easily seen in the geometrical optics limit (λ→0), where the effects of diffraction are negligible:
(14)
where C is an irrelevant constant. A proper parametrization of the correlation function can be implemented to decouple the object aperture function A from the lens pupil function P, thus leading to several refocused images, one for each value of ρ.sub.s
(15)
where ρ.sub.r and ρ.sub.s are the points on the object and on the lens plane, respectively, that give the most relevant contribution to the correlation function. After the refocusing operation (5), applied to the measured correlation function, integration over the planar coordinate ρ.sub.s provides the refocused image of the object aperture, independent of its original position, i.e. of its displacements, z−z.sub.a and z−z.sub.b, from the planes conjugate to the image sensor planes:
Σ.sub.ref(ρ.sub.r)=∫d.sup.2ρ.sub.sG.sub.ref(ρ.sub.r, ρ.sub.s)≈C′|A(ρ.sub.r)|.sup.4 (6)
with C′ another irrelevant constant.
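The refocusing step can be sketched schematically. Since the explicit parametrization of Equations (4)-(5) is not reproduced in this text, the coordinate remapping below (`remap`) is a hypothetical placeholder for it; only the final integration over ρ.sub.s of Equation (6) is implemented literally, in one dimension for simplicity:

```python
import numpy as np

def refocus(G, remap):
    """Apply a coordinate remapping to a measured correlation function
    G(ρa, ρb), sampled here on 1D sensors as an (Na, Nb) array, then
    integrate over the lens-plane coordinate ρs, as in Equation (6):
    Σ_ref(ρr) = ∫ d²ρs G_ref(ρr, ρs)."""
    Na, Nb = G.shape
    G_ref = np.empty_like(G)
    for r in range(Na):
        for s in range(Nb):
            # 'remap' stands in for the (elided) parametrization that
            # decouples the aperture A from the pupil P; placeholder only
            ia, ib = remap(r, s)
            G_ref[r, s] = G[ia % Na, ib % Nb]
    return G_ref.sum(axis=1)                 # integration over ρs

# Trivial identity remap: Σ_ref reduces to the integral of G over ρb
identity = lambda r, s: (r, s)
```

With the actual CPI-AP parametrization in place of `identity`, each value of ρ.sub.s yields one refocused image, and the sum over ρ.sub.s combines them into the final Σ.sub.ref of Equation (6).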
(16) The limits to the refocusing operation do not appear in the geometrical optics regime; such limits can be obtained from the exact expression of the correlation function in Equations (2) and (3), which include the effects of interference and diffraction, as determined by the wave nature of light.
(17)
(18)
(19) In the standard images (
(20)
(21) The apparatus of
(22) A transmissive object 10 is placed in the optical path labelled by b. The planes D′.sub.a and D″.sub.a, both at a distance z.sub.a from the lens L.sub.2 (on the object side), are in the focal plane of an additional lens L.sub.1 (also supposed to be a thin lens for simplicity), with focal length f.sub.1>0. Lens L.sub.1 collects the two correlated beams emitted by the SPDC source 30. Due to the correlation between the two beams, a “ghost image” of the plane D″.sub.a is reproduced in the plane D′.sub.a when correlation (or coincidence) measurements are performed between D.sub.a and D.sub.b. Hence, in the optical path a, the lens L.sub.2 serves to focus the “ghost image” of the plane D″.sub.a onto the image sensor D.sub.a. The refocused image is still given by Equation (5), upon replacing M.sub.a=−z′.sub.a/z.sub.a with −M.sub.a. However, due to the finite aperture of the lenses L.sub.2, characterized by the pupil function P.sub.2, the refocused image is slightly complicated by the presence of an envelope function, namely:
(23)
(24) The envelope function is due to the asymmetry between the two light paths; in fact, unlike the setup illuminated by chaotic light previously disclosed for the embodiment of
(25) In the images of
(26) The refocused image of
(27) Various modifications can be made to the embodiments herein depicted for illustrative purposes, without departing from the scope of the present invention as defined by the attached claims. For example, the two beams along the light paths a and b can be either naturally diverging from each other or made divergent by the insertion of additional optical elements (beam splitters and mirrors). Each lens in all the embodiments can always be replaced by a more complex imaging system. In the embodiment of