System and method for diffuse imaging with time-varying illumination intensity
09759995 · 2017-09-12
Assignee
Inventors
CPC classification: G01J1/0437; G01J1/4228; G01J1/08 (PHYSICS)
International classification: G01J1/08 (PHYSICS)
Abstract
Diffuse image measurement system and digital image formation method. The system includes a source of light with time-varying intensity directed at a scene to be imaged. A time-resolved light meter is provided for receiving light reflected from the scene to generate time-resolved samples of the intensity of light incident at the light meter. The temporal variation in the intensity of light incident at the light meter is associated with a function of a radiometric property of the scene, such as a linear functional of reflectance, and a computer processes the samples to construct a digital image. The spatial resolution of the digital image is finer than the spatial support of the illumination on the scene and finer than the spatial support of the sensitivity of the light meter. Using appropriate light sources instead of impulsive illumination significantly improves signal-to-noise ratio and reconstruction quality.
Claims
1. A system comprising: a non-impulsive source of scene illumination having a low bandwidth of approximately 350 MHz, an intensity varying in time, and some spatial intensity characteristic; a time-resolved light meter, with some spatial sensitivity characteristic, to generate time-resolved samples, at a sampling frequency of approximately 750 MHz, of the intensity of light reflected from the scene and incident on the time-resolved light meter; and computer means to process the time-resolved light meter samples to form a digital image of the scene for a plurality of combinations of illumination spatial intensity characteristics and light meter spatial sensitivity characteristics.
2. The system of claim 1 wherein a plurality of light meter spatial sensitivity characteristics is obtained with a plurality of light meters, a plurality of positions of a light meter, or a plurality of states on a spatial light modulator.
3. The system of claim 2 wherein the light meters are arranged in a regular array.
4. The system of claim 1 wherein a plurality of illumination spatial intensity characteristics is obtained with a plurality of light sources, a plurality of positions of a light source, or a plurality of states on a spatial light modulator.
5. The system of claim 1 wherein the scene comprises a Lambertian surface.
6. The system of claim 1 wherein the spatial support of an illumination spatial intensity characteristic is greater than the spatial resolution of the digital image.
7. The system of claim 1 wherein the spatial support of a light meter spatial sensitivity characteristic is greater than the spatial resolution of the digital image.
8. The system of claim 1 wherein the light intensity variation is chosen to enhance the accuracy of the digital image.
Description
BRIEF DESCRIPTION OF THE DRAWING
DESCRIPTION OF THE PREFERRED EMBODIMENT
(10) The invention is described through one exemplary configuration. Those skilled in the art can generate many other configurations.
(11) Consider the imaging scenario depicted in
(12) We assume that the position, orientation, and dimensions (L-by-L) of the planar surface 10 are known. Many methods for estimating these geometric parameters are known to those skilled in the art; a method using diffuse illumination and time-resolved sensing was recently demonstrated [6].
(13) Formation of an ideal gray scale image is the recovery of the reflectance pattern on the surface, which can be modeled as a 2D function $f:[0,L]^2\rightarrow[0,1]$. Those skilled in the art can replace the gray scale reflectance with other radiometric scene properties, including but not limited to a set of 2D functions representing reflectance at various wavelengths. We assume the surface to be Lambertian so that its perceived brightness is invariant to the angle of observation [7]; those skilled in the art may incorporate any bidirectional reflectance distribution function (BRDF), including but not limited to the Phong model, the Blinn-Phong model, the Torrance-Sparrow model, the Cook-Torrance model, the Oren-Nayar model, the Ashikhmin-Shirley model, the He-Torrance-Sillion-Greenberg model, the Lebedev model, the fitted Lafortune model, or Ward's anisotropic model.
(14) The light incident at Light Meter k is a combination of the time-delayed reflections from all points on the planar surface. For any point $x=(x_1,x_2)\in[0,L]^2$, let $d^{(1)}(x)$ denote the distance from the illumination source to $x$, and let $d_k^{(2)}(x)$ denote the distance from $x$ to Light Meter k. Then $d_k(x)=d^{(1)}(x)+d_k^{(2)}(x)$ is the total distance traveled by the contribution from $x$. This contribution is attenuated by the reflectance $f(x)$, by square-law radial fall-off, and by $\cos(\theta(x))$ to account for foreshortening of the surface with respect to the illumination, where $\theta(x)$ is the angle between the surface normal at $x$ and a vector from $x$ to the illumination source. Thus, when the intensity of the omnidirectional illumination is abstracted as a unit impulse at time 0, denoted $s(t)=\delta(t)$, the contribution from point $x$ is the light intensity signal $a_k(x)\,f(x)\,\delta(t-d_k(x))$, where we have normalized to unit speed of light and
(15) $$a_k(x)=\frac{\cos(\theta(x))}{\left(d^{(1)}(x)\,d_k^{(2)}(x)\right)^{2}} \qquad (1)$$
is the light meter spatial sensitivity characteristic for Light Meter k. Examples of distance functions and light meter spatial sensitivity characteristics are shown in
(16) Combining contributions over the plane, the total light incident at Light Meter k is
$$g_k(t)=\int_0^L\!\!\int_0^L a_k(x)\,f(x)\,\delta\big(t-d_k(x)\big)\,dx_1\,dx_2. \qquad (2)$$
Thus, evaluating $g_k(t)$ at a fixed time $t$ amounts to integrating over $x\in[0,L]^2$ with $d_k(x)=t$. Define the isochronal curve $C_k^t=\{x : d_k(x)=t\}$. Then
$$g_k(t)=\int a_k(x(k,u))\,f(x(k,u))\,du \qquad (3)$$
where $x(k,u)$ is a parameterization of $C_k^t\cap[0,L]^2$ with unit speed. The intensity $g_k(t)$ thus contains the contour integrals of the desired function $f$ over the curves $C_k^t$. Each $C_k^t$ is a level curve of $d_k(x)$; as illustrated in
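The forward model of (1)-(3) can be exercised numerically. The sketch below (Python with NumPy; the plane size, grid resolution, and source and meter positions are illustrative assumptions, not values from this disclosure) evaluates $d_k(x)$ and $a_k(x)$ on a grid and accumulates a fine arrival-time histogram that approximates $g_k(t)$.

```python
import numpy as np

# Numerical sketch of the forward model (1)-(3). All geometry below is an
# illustrative assumption: an L-by-L Lambertian plane at z = 0, a point
# source, and one light meter, with the speed of light normalized to 1.
L = 1.0                       # side length of the planar surface
M = 64                        # integration grid resolution
dx = L / M

x1, x2 = np.meshgrid((np.arange(M) + 0.5) * dx, (np.arange(M) + 0.5) * dx)
surface = np.stack([x1, x2, np.zeros_like(x1)], axis=-1)

src = np.array([0.5, 0.5, 1.0])   # illumination source (assumed position)
det = np.array([0.2, 0.8, 1.0])   # Light Meter k (assumed position)

d1 = np.linalg.norm(surface - src, axis=-1)   # d^(1)(x): source to x
d2 = np.linalg.norm(surface - det, axis=-1)   # d_k^(2)(x): x to meter
dk = d1 + d2                                  # total path length d_k(x)

cos_theta = (src - surface)[..., 2] / d1      # foreshortening cos(theta(x))
a_k = cos_theta / (d1 * d2) ** 2              # attenuation as in (1)

f = np.ones((M, M))                           # flat reflectance for the demo

# Eqs. (2)-(3): g_k(t) integrates a_k * f over isochronal contours
# {x : d_k(x) = t}; a fine histogram of arrival times approximates it.
edges = np.linspace(dk.min(), dk.max(), 200)
g_k, _ = np.histogram(dk, bins=edges, weights=a_k * f * dx * dx)
print(g_k.shape)   # -> (199,): one nonnegative intensity per time bin
```

Each bin of `g_k` collects the a-weighted reflectance along one thin elliptical annulus, which is exactly the contour-integral structure of (3).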
(17) A digital system can use only samples of $g_k(t)$ rather than the continuous-time function itself. We now see how uniform sampling of $g_k(t)$ with a linear time-invariant (LTI) prefilter relates to linear functional measurements of $f$. This establishes the foundations of a Hilbert space view of diffuse imaging. Those skilled in the art can incorporate effects of non-LTI device characteristics.
(18) Suppose discrete samples are obtained at Light Meter k with sampling prefilter $h_k(t)$ and sampling interval $T_k$:
$$y_k[n]=\big(g_k(t)*h_k(t)\big)\big|_{t=nT_k}. \qquad (4)$$
(19) A sample $y_k[n]$ can be seen as a standard $L^2(\mathbb{R})$ inner product between $g_k$ and a time-reversed and shifted $h_k$ [8]:
$$y_k[n]=\langle g_k(t),\,h_k(nT_k-t)\rangle. \qquad (5)$$
Using (2), we can express (5) in terms of $f$ using the standard $L^2([0,L]^2)$ inner product:
$$y_k[n]=\langle f,\,\varphi_{k,n}\rangle, \qquad (6a)$$
where
$$\varphi_{k,n}(x)=a_k(x)\,h_k\big(nT_k-d_k(x)\big). \qquad (6b)$$
Over a set of sensors and sample times, $\{\varphi_{k,n}\}$ will span a subspace of $L^2([0,L]^2)$, and a sensible goal is to form a good approximation of $f$ in that subspace.
(20) For ease of illustration and interpretation, let $T_k=T$, meaning all light meters have the same time resolution, and
(21) $$h_k(t)=\begin{cases}1, & t\in[0,T],\\ 0, & \text{otherwise,}\end{cases} \qquad (7)$$
for all k, which corresponds to “integrate and dump” sampling. Now since $h_k(t)$ is nonzero only for $t\in[0,T]$, by (4) or (5), the sample $y_k[n]$ is the integral of $g_k(t)$ over $t\in[(n-1)T,\,nT]$. Thus, by (3), $y_k[n]$ is an $a$-weighted integral of $f$ between the contours $C_k^{(n-1)T}$ and $C_k^{nT}$. To interpret this as an inner product with $f$ as in (6a), we see that $\varphi_{k,n}(x)$ equals $a_k(x)$ between $C_k^{(n-1)T}$ and $C_k^{nT}$ and is zero otherwise.
(22) To express an estimate $\hat{f}$ of the reflectance $f$, it is convenient to fix an orthonormal basis for a subspace of $L^2([0,L]^2)$ and estimate the expansion coefficients in that basis. For an M-by-M pixel representation, let
(23) $$\psi_{i,j}(x)=\begin{cases}1/\Delta, & x\in[(i-1)\Delta,\,i\Delta]\times[(j-1)\Delta,\,j\Delta],\\ 0, & \text{otherwise,}\end{cases} \qquad (8)$$
so that $\hat{f}=\sum_{i=1}^{M}\sum_{j=1}^{M}c_{i,j}\,\psi_{i,j}$ in the span of $\{\psi_{i,j}\}$ is constant on $\Delta$-by-$\Delta$ patches, where $\Delta=L/M$. Those skilled in the art can generalize the reconstruction space to other finite-dimensional manifolds in $L^2([0,L]^2)$.
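The properties claimed for this basis, unit norm, orthogonality, and coefficients equal to $\Delta$ times the pixel value, can be checked numerically. The sketch below uses illustrative sizes (they are assumptions, not values from this disclosure).

```python
import numpy as np

# Numerical check of the basis in (8): psi_ij equals 1/Delta on one
# Delta-by-Delta patch and 0 elsewhere. Sizes here are illustrative.
L, M, G = 1.0, 4, 128        # M-by-M pixels, G-by-G fine evaluation grid
Delta = L / M
dA = (L / G) ** 2            # area element of the fine grid
s = G // M                   # fine-grid points per pixel side

def psi(i, j):
    """Basis function psi_{i,j} of (8) sampled on the fine grid."""
    p = np.zeros((G, G))
    p[(i - 1) * s:i * s, (j - 1) * s:j * s] = 1.0 / Delta
    return p

print(np.sum(psi(1, 1) * psi(1, 1)) * dA)   # -> 1.0  (unit norm)
print(np.sum(psi(1, 1) * psi(1, 2)) * dA)   # -> 0.0  (orthogonality)

# For f constant at v on every pixel, each coefficient <f, psi_ij> is
# Delta * v, matching the parenthetical remark after (9).
v = 0.7
f = v * np.ones((G, G))
print(np.sum(f * psi(2, 3)) * dA)           # Delta * v (up to rounding)
```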
(24) For $\hat{f}$ to be consistent with the value measured by Sensor k at time n, we must have
$$y_k[n]=\langle\hat{f},\,\varphi_{k,n}\rangle=\sum_{i=1}^{M}\sum_{j=1}^{M}c_{i,j}\,\langle\psi_{i,j},\,\varphi_{k,n}\rangle. \qquad (9)$$
Note that the inner products $\{\langle\psi_{i,j},\varphi_{k,n}\rangle\}$ depend exclusively on $\Delta$, the positions of illumination and sensors, the plane geometry, the sampling prefilters $\{h_k\}_{k=1}^{K}$, and the sampling intervals $\{T_k\}_{k=1}^{K}$, not on the unknown reflectance of interest $f$. Hence, we have a system of linear equations to solve for the coefficients $\{c_{i,j}\}$. (In the case of basis (8), the coefficients are the pixel values multiplied by $\Delta$.)
(25) When we specialize to the box sensor impulse response (7) and basis (8), many inner products $\langle\psi_{i,j},\varphi_{k,n}\rangle$ are zero, so the linear system is sparse. The inner product $\langle\psi_{i,j},\varphi_{k,n}\rangle$ is nonzero when reflection from the $(i,j)$ pixel affects the light intensity at Sensor k within the time interval $[(n-1)T,\,nT]$. Thus, for a nonzero inner product the $(i,j)$ pixel must intersect the elliptical annulus between $C_k^{(n-1)T}$ and $C_k^{nT}$. With reference to
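The sparsity of the system matrix can be illustrated with a crude pixel-center quadrature: under the box prefilter, each pixel center falls in exactly one elliptical annulus per meter. The sketch below is an illustrative approximation of the inner products, not the disclosure's exact computation; the geometry and sizes are assumptions.

```python
import numpy as np

# Sketch of the sparse system matrix of (9)-(10) under the box prefilter
# (7) and basis (8). Inner products are approximated at pixel centers
# (a crude quadrature, for illustration only); geometry is assumed.
L, M, T = 1.0, 16, 0.05
Delta = L / M
c = (np.arange(M) + 0.5) * Delta
x1, x2 = np.meshgrid(c, c)
surface = np.stack([x1, x2, np.zeros_like(x1)], axis=-1)

src = np.array([0.5, 0.5, 1.0])
meters = [np.array([0.2, 0.8, 1.0]), np.array([0.9, 0.1, 1.0])]

blocks = []
for det in meters:
    d1 = np.linalg.norm(surface - src, axis=-1)
    d2 = np.linalg.norm(surface - det, axis=-1)
    dk = d1 + d2
    a_k = ((src - surface)[..., 2] / d1) / (d1 * d2) ** 2
    n_idx = np.floor(dk / T).astype(int)      # annulus (time-bin) index
    A_k = np.zeros((n_idx.max() + 1, M * M))
    # Pixel-center rule: <psi_ij, phi_kn> ~ a_k(x) * Delta, one bin only.
    A_k[n_idx.ravel(), np.arange(M * M)] = (a_k * Delta).ravel()
    blocks.append(A_k)

A = np.vstack(blocks)
# Each pixel contributes to exactly one sample per meter, so A is sparse.
print(A.shape, np.count_nonzero(A) / A.size)
```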
(26) To express (9) with a matrix multiplication, replace double indexes with single indexes (i.e., vectorize, or reshape) as
$$y=Ac \qquad (10)$$
where $y\in\mathbb{R}^{KN}$ contains the data samples $\{y_k[n]\}$, the first N from Light Meter 1, the next N from Light Meter 2, etc.; and $c\in\mathbb{R}^{M^2}$ contains the correspondingly ordered coefficients $\{c_{i,j}\}$.
(27) Assuming that A has a left inverse (i.e., $\operatorname{rank}(A)=M^2$), one can form an image by solving (10). The portion of A from one sensor cannot have full column rank because of the collapse of information along elliptical annuli depicted in
(28) Those skilled in the art will appreciate that the rank condition is not necessary for producing a digital image. Any method of approximate solution of (10) could be employed. In the case that non-LTI effects are included, the analogue of (10) may be nonlinear, in which case rank would not apply. The invention applies to any method of approximate solution of the resulting system of equations.
(29) Those skilled in the art will recognize that the discretized linear system (10) is not the only way to process the measurements in (6) to form a digital image. For example, the measurements are a one-dimensional projection of an elliptical Radon transform, and methods such as those in [14,15,16] can be employed.
(30) A Dirac impulse illumination is an abstraction that cannot be realized in practice. One can use expensive, ultrafast optical lasers that achieve terahertz bandwidth as an approximation to impulsive illumination, as in [1]. The present invention allows practical, non-impulsive illuminations to improve upon impulsive illumination for typical scenes and sensors.
(31) Light transport is linear and time invariant. Hence, the effect of a general illumination intensity waveform $s(t)$ is the superposition of the effects of constant illuminations over infinitesimal intervals. This superposition changes the light incident at Light Meter k from $g_k(t)$ in (2) to $g_k(t)*s(t)$. Thus, the block diagram in
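This superposition is an ordinary convolution, which can be verified on a discrete time grid. The signals below are illustrative stand-ins, not the waveforms used in the disclosure.

```python
import numpy as np

# Because light transport is LTI in time, a general source waveform s(t)
# turns the impulse response g_k(t) into (g_k * s)(t). Both signals here
# are illustrative stand-ins on a uniform time grid.
dt = 0.01
t = np.arange(0.0, 4.0, dt)
g_k = np.exp(-((t - 2.0) / 0.1) ** 2)     # stand-in impulse response
s = np.where(t < 0.5, 1.0, 0.0)           # stand-in rectangular source

# Discrete superposition of delayed, scaled copies of g_k.
g_conv = np.convolve(g_k, s)[: t.size] * dt

# Convolution scales the total collected light by the source's integral.
print(np.allclose(g_conv.sum() * dt, g_k.sum() * dt * s.sum() * dt))
```

A longer-duration source delivers more energy than an impulse of the same peak amplitude, which is the SNR advantage exploited below.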
(32) A typical natural scene $f$ has a good bandlimited approximation. Integration over the elliptical contours $C_k^t$ further smooths the signal. Plotted in
(33) In [1], high-bandwidth illumination and sampling were used under the assumption that these would lead to the highest reconstruction quality. However, impulsive illumination severely limits the illumination energy, leading to poor SNR, especially due to the radial fall-off attenuations in (1).
(34) Here we compare impulsive and lowpass illumination. All image reconstructions are obtained with (10) regularized by the $\ell^2$ norm of the discrete Laplacian, with the regularization parameter optimized for $\ell^2$ error. This conventional technique for backprojection [9] mildly promotes smoothness of the reconstruction; additional prior information, such as sparsity with suitable $\{\psi_{i,j}\}$, is beneficial but would obscure the novelty of the invention. Results are for $M=50$ and several values of the sampling period T and sensor array extent W.
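The regularized inversion just described reduces to a linear solve of the normal equations. The sketch below uses a random stand-in for the system matrix and data (assumptions for illustration, not the simulated diffuse-imaging operator), with a 4-neighbor discrete Laplacian as the smoothness penalty.

```python
import numpy as np

# Sketch of the reconstruction step: solve (10) regularized by the l2
# norm of the discrete Laplacian, i.e. minimize
# ||A c - y||^2 + lam * ||Lap c||^2 via its normal equations.
# A, y, and lam are illustrative stand-ins.
rng = np.random.default_rng(1)
M = 8                                      # M-by-M pixel image
A = rng.normal(size=(3 * M * M, M * M))    # stand-in system matrix
c_true = rng.uniform(size=M * M)
y = A @ c_true + 0.01 * rng.normal(size=A.shape[0])   # noisy samples

# Discrete Laplacian on the M-by-M grid (4-neighbor, zero boundary).
Lap = -4.0 * np.eye(M * M)
for p in range(M * M):
    i, j = divmod(p, M)
    for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
        if 0 <= ni < M and 0 <= nj < M:
            Lap[p, ni * M + nj] = 1.0

lam = 1e-3
c_hat = np.linalg.solve(A.T @ A + lam * Lap.T @ Lap, A.T @ y)
print(np.linalg.norm(c_hat - c_true) / np.linalg.norm(c_true))  # small
```

In practice the regularization parameter would be swept and chosen to minimize the $\ell^2$ reconstruction error, as stated above.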
(35) In direct analogy with [1], we simulated short-pulsed, high-bandwidth illumination using a square-wave source with unit amplitude and time width equal to one-fifth of T.
(36) Using non-impulsive, low-bandwidth illumination, we show that high SNR can be achieved while improving the reconstruction resolution. We chose $s(t)$ to be the truncated impulse response of a third-order Butterworth filter, again with unit peak amplitude. As illustrated in
(37) The proof-of-concept experiments in [1] show that diffuse imaging can succeed in forming image reconstructions. In this patent application we have used signal processing abstractions to show that using lowpass time-varying illumination instead of impulsive sources improves the SNR and reconstruction quality.
(38) Assigning dimensions to our simulation enables the specification of required device capabilities for one instantiation of the exemplary configuration. Suppose the 50-by-50 pixel image reconstruction with W=4 and T=0.1 shown in
(39) The numbers in brackets refer to the references listed herein. The contents of all of these references are incorporated herein by reference as is the provisional application to which this application claims priority.
(40) It is recognized that modifications and variations of the present invention will be apparent to those of skill in the art—including but not limited to the variations mentioned in the description of the exemplary configuration—and it is intended that all such modifications and variations be included within the scope of the appended claims.
REFERENCES
(41) [1] A. Kirmani, A. Velten, T. Hutchison, M. E. Lawson, V. K. Goyal, M. Bawendi, and R. Raskar, “Reconstructing an image on a hidden plane using ultrafast imaging of diffuse reflections,” May 2011. [2] H. E. Edgerton and J. R. Killian, Jr., Flash! Seeing the Unseen by Ultra High-Speed Photography. Boston, Mass.: Hale, Cushman and Flint, 1939. [3] B. Schwarz, “LIDAR: Mapping the world in 3D,” Nature Photonics, vol. 4, no. 7, pp. 429-430, July 2010. [4] S. Foix, G. Alenyà, and C. Torras, “Lock-in time-of-flight (ToF) cameras: A survey,” IEEE Sensors J., vol. 11, no. 9, pp. 1917-1926, September 2011. [5] P. Sen, B. Chen, G. Garg, S. R. Marschner, M. Horowitz, M. Levoy, and H. P. A. Lensch, “Dual photography,” ACM Trans. Graphics, vol. 24, no. 3, pp. 745-755, July 2005. [6] A. Kirmani, T. Hutchison, J. Davis, and R. Raskar, “Looking around the corner using transient imaging,” in Proc. IEEE 12th Int. Conf. on Computer Vision, Kyoto, Japan, September-October 2009.
(42) [7] M. Oren and S. K. Nayar, “Generalization of the Lambertian model and implications for machine vision,” Int. J. Comput. Vis., vol. 14, no. 3, pp. 227-251, April 1995. [8] M. Unser, “Sampling—50 years after Shannon,” Proc. IEEE, vol. 88, no. 4, pp. 569-587, April 2000. [9] M. Elad and A. Feuer, “Restoration of a single superresolution image from several blurred, noisy and undersampled measured images,” IEEE Trans. Image Process., vol. 6, no. 12, pp. 1646-1658, December 1997. [10] P. Sen and S. Darabi, “Compressive dual photography,” Computer Graphics Forum, vol. 28, no. 2, pp. 609-618, April 2009. [11] P. K. Baheti and M. A. Neifeld, “Feature-specific structured imaging,” Applied Optics, vol. 45, no. 28, pp. 7382-7391, October 2006. [12] D. Takhar, J. N. Laska, M. B. Wakin, M. F. Duarte, D. Baron, S. Sarvotham, K. F. Kelly, and R. G. Baraniuk, “A new compressive imaging camera architecture using optical-domain compression,” in Computational Imaging IV, C. A. Bowman, E. L. Miller, and I. Pollak, eds., Proc. SPIE vol. 6065, 2006. [13] B. I. Erkmen and J. H. Shapiro, “Ghost imaging: from quantum to classical to computational,” Advances in Optics and Photonics, vol. 2, no. 4, pp. 405-450. [14] J. D. Coker and A. H. Tewfik, “Multistatic SAR image reconstruction based on an elliptical-geometry Radon transform,” in International Waveform Diversity and Design Conference 2007, pp. 204-208. [15] R. Gouia, “Some problems of integral geometry in advanced imaging,” doctoral dissertation, University of Texas-Arlington, 2011. [16] R. Gouia-Zarrad and G. Ambartsoumian, “Approximate inversion algorithm of the elliptical Radon transform,” in 8th International Symposium on Mechatronics and Its Applications, pp. 1-4, 2012.