Energy optimized imaging system with synchronized dynamic control of directable beam light source and reconfigurably masked photo-sensor
11747135 · 2023-09-05
Assignee
- Carnegie Mellon University (Pittsburgh, PA)
- The Governing Council Of The University Of Toronto (Toronto, Ontario, CA)
Inventors
- Srinivasa Narasimhan (McDonald, PA, US)
- Supreeth Achar (Seattle, WA, US)
- Matthew O'Toole (Palo Alto, CA, US)
- Kiriakos Neoklis Kutulakos (Toronto, CA)
Cpc classification
G03B7/16
PHYSICS
G01B11/2545
PHYSICS
G01S17/42
PHYSICS
H04N5/30
ELECTRICITY
H04N23/74
ELECTRICITY
G06T7/521
PHYSICS
H04N23/65
ELECTRICITY
H04N13/271
ELECTRICITY
International classification
G01B11/25
PHYSICS
G06T7/521
PHYSICS
H04N13/271
ELECTRICITY
H04N5/30
ELECTRICITY
Abstract
An energy optimized imaging system that includes a light source that has the ability to illuminate specific pixels in a scene, and a sensor that has the ability to capture light with specific pixels of its sensor matrix, temporally synchronized such that the sensor captures light only when the light source is illuminating pixels in the scene.
Claims
1. A method of detecting an image, comprising: enabling a first row of pixels of a two-dimensional pixel array to detect light reflected from at least one object illuminated by a first light pulse from a first scanning line of a directable light source, the first row of pixels in an epipolar configuration with the first scanning line; and enabling a second row of pixels of the two-dimensional pixel array to detect light reflected from the at least one object illuminated by a second light pulse from a second scanning line of the directable light source, the second row of pixels in an epipolar configuration with the second scanning line.
2. The method of claim 1, further comprising: generating detection signals corresponding to the detected light reflected from the at least one object when illuminated by each of the first and second light pulses.
3. The method of claim 2 further comprising: generating depth information based on the generated detection signals corresponding to the detected light reflected from the object by the first and second light pulses.
4. The method of claim 2, wherein enabling the two-dimensional pixel array to detect light reflected from the at least one object illuminated by the first or second light pulses comprises: determining an active region of the two-dimensional pixel array corresponding to a portion of the object being illuminated by the first or second light pulse; and enabling the determined active region during the first or second light pulses.
5. A device, comprising: a two-dimensional pixel array comprising a plurality of lines of pixels; a directable light source; and a controller to: enable a first row of pixels of the two-dimensional pixel array to detect light reflected from at least one object illuminated by a first light pulse from a first scanning line of the directable light source, the first row of pixels in an epipolar configuration with the first scanning line; and enable a second row of pixels of the two-dimensional pixel array to detect light reflected from the at least one object illuminated by a second light pulse from a second scanning line of the directable light source, the second row of pixels in an epipolar configuration with the second scanning line.
6. The device of claim 5 wherein the controller further: generates detection signals corresponding to the detected light reflected from the object when illuminated by each of the first and second light pulses; generates depth information based on the generated detection signals.
7. The device of claim 5 wherein the controller further: determines a sequence of portions of the object to be illuminated; directs the directable light source to sequentially illuminate the sequence of portions; and directs the two-dimensional pixel array to sequentially capture light from the sequence of illuminated portions.
8. The device of claim 7 wherein the controller further: temporally synchronizes the directable light source and the two-dimensional pixel array such that the two-dimensional pixel array captures light from a portion of the object illuminated by the directable light source.
9. The device of claim 7 wherein the controller further: spatially synchronizes the directable light source and the two-dimensional pixel array such that the directable light source illuminates a portion of the object that the two-dimensional pixel array is configured to capture.
10. The device of claim 9 wherein the sequence of portions of the object to be illuminated is chosen to maximize total energy transferred from the directable light source to the two-dimensional pixel array.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION OF THE INVENTION
(5) A widely known truth in the field of image capture is that capturing images with the most detail and the least noise requires optimizing the light throughput between the light source and the photosensor. This invention implements that maxim while also allowing selective blocking of light paths between the light source and the photosensor. The system topology that results from this optimization also enables previously unavailable imaging techniques and substantial energy efficiency.
(6) There are three main parts to the invention as currently implemented, interconnected as shown in
(7) As used herein, the term “directable light source” refers to a controllable light source that emits different amounts of light in different directions, where each pixel of the projector corresponds to a direction along which a slightly diverging beam is emitted. By changing the amount of light emitted along each direction, the projected pattern can be changed.
(8) There are two broad classes of projectors: spatial light modulator (SLM) based projectors and scanning projectors.
(9) SLM projectors are of the type shown in
(10) Scanning projectors are of the type shown in
(11) As used herein, the terms “light source”, “directable light source” and “projector” are used interchangeably.
(12) Also, in the preferred embodiments of the invention, various types of sensors may be used. Phase-measuring light sensors (for example, photonic mixing devices, or PMDs) can be used for measuring distance based on continuous-wave time-of-flight; dynamic vision sensors (DVS) are sensors that are sensitive to changes in light levels; and photodiode arrays and avalanche photodiode arrays are high-speed, high-sensitivity light sensors that are often used for impulse time-of-flight measurements (flash LIDARs). In addition, basic CMOS and CCD sensors may be used.
(13) In the preferred embodiment of the invention, a LASER-based scanning projector with a beam-steering mechanism, for example a MEMS mirror, is used as the directable light source, and the sensor is preferably a light-sensitive photosensor with a rolling shutter.
(14) With reference to
(15) The mathematical framework for this energy-optimized imaging system follows. If light source 10 is always on, and emits at the constant rate of Φ watts, illuminating a scene for exposure time T means that the total energy generated by light source 10 is ΦT.
(16) The illumination vector l is used to describe how the total energy of a projector is distributed over N individual pixels. In particular, each element of l measures the total energy emitted by the source through a specific projector pixel during the exposure time. The l.sub.1-norm of l is therefore equal to the total “useful” energy of the source, i.e., the energy actually used for scene illumination. This energy cannot be larger than the energy generated by the source:
0≤∥l∥.sub.1≤ΦT
where ∥ ∥.sub.1 is the l.sub.1-norm, giving the sum of all elements of a vector.
(17) The energy efficiency of a projector depends critically on its ability to direct a maximum amount of the energy generated by the light source 10 to individual pixels. This ability is expressed as an upper bound on the individual elements of l:
∥l∥.sub.∞≤ΦT/σ
where σ is a projector-specific parameter defined as the spatial spread. This parameter takes values between 1 and N and models energy redistribution: the larger its value, the lower the energy that can be sent through any one pixel, and the more energy is wasted when projecting a pattern with just a few pixels turned on.
(18) The specific value of σ depends on the projection technology. At the far end of the range, with σ=N, are conventional projectors, as shown in
(19) The l.sub.1 and l.sub.∞ constraints on l can be written more concisely as
(20)
∥l∥.sub.†σ=max{∥l∥.sub.1,σ∥l∥.sub.∞}≤ΦT
where ∥⋅∥.sub.†σ is the max of two norms and therefore also a norm. These constraints are useful in three ways. First, arrangements can be optimized with very different light redistribution properties by adjusting the spatial spread parameter. Second, the dependence on exposure time makes a distinction between systems that conserve energy and those that merely conserve power. Third, they explicitly account for timescale-dependent behavior, for example raster-scan laser projectors can act like a beam, light sheet, or point source depending on T.
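The two constraints and their combined †σ norm can be checked numerically. The following is a minimal sketch; the values of Φ, T, N, and σ are illustrative assumptions, not parameters from the specification.

```python
import numpy as np

# Assumed example values: source power Phi (watts), exposure T (seconds),
# N projector pixels, and spatial spread sigma.
Phi, T, N, sigma = 1.0, 0.01, 8, 2.0
budget = Phi * T  # total energy generated by the source during the exposure

def dagger_sigma_norm(l, sigma):
    """The max of the two norms: max{||l||_1, sigma * ||l||_inf}."""
    return max(np.linalg.norm(l, 1), sigma * np.linalg.norm(l, np.inf))

# An impulse-like pattern: all energy routed through two projector pixels,
# each at the per-pixel cap Phi*T/sigma.
l = np.zeros(N)
l[0] = l[1] = budget / sigma

assert np.linalg.norm(l, 1) <= budget + 1e-12        # l1 (total energy) bound
assert np.linalg.norm(l, np.inf) <= budget / sigma   # l_inf (per-pixel) bound
assert dagger_sigma_norm(l, sigma) <= budget + 1e-12 # combined norm bound
```

With σ=2, this pattern saturates both bounds simultaneously, which is exactly the condition for maximal energy utilization discussed below.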
(21) For masks that can control light attenuation at individual pixels on a sensor, we consider mask m, which is bounded from 0 to 1. The combined effect of the mask and illumination pattern can be represented as the outer product matrix of two vectors:
Π=ml.sup.T
Intuitively, matrix Π can be thought of as defining a non-uniform spatial light distribution that concentrates energy usable for imaging in some parts of space and not in others. Energy utilization is maximized when both the mask and the illumination pattern reach their norm upper bounds, i.e., ∥m∥.sub.∞=1 and ∥l∥.sub.†σ=ΦT.
(22) It is also possible to use more than one mask and illumination pattern for the frame exposure time. Suppose for instance that K masks and illuminations were used. The optimization equation could then be written as:
(23)
Π=Σ.sub.k=1.sup.K m.sub.k l.sub.k.sup.T, subject to ∥m.sub.k∥.sub.∞≤1 and Σ.sub.k=1.sup.K∥l.sub.k∥.sub.†σ≤ΦT
(24) There may be sequences that distribute light exactly like M and L but with greater total energy. Finding the most energy-efficient sequences requires solving a homogeneous factorization problem, where the goal is to produce a matrix Π with the largest possible scale factor:
(25)
max.sub.M,L,c c, subject to cΠ=ML.sup.T, 0≤M≤1, Σ.sub.k=1.sup.K∥l.sub.k∥.sub.†σ≤ΦT
where M=[m.sub.1 . . . m.sub.K] and L=[l.sub.1 . . . l.sub.K].
(26) The optimization equations above are hard to solve directly. But the equation can be relaxed into the following form:
(27)
min.sub.M,L,c∥cΠ−ML.sup.T∥.sup.2−λc, subject to 0≤M≤1, Σ.sub.k=1.sup.K∥l.sub.k∥.sub.†σ≤ΦT
where λ is a regularization parameter that balances energy efficiency and the reproduction of Π. This allows for finding M & L that will saturate their upper-bound constraints, and hence a fully illuminated matrix Π.
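One simple way to attack a relaxed objective of this shape is alternating minimization with projection onto the constraints. The following sketch is an illustrative solver under assumed values of K, λ, and the energy budget; it is not the procedure claimed in the specification.

```python
import numpy as np

# Minimal alternating-minimization sketch for the relaxed objective
#   minimize  ||c*Pi - M @ L.T||_F**2 - lam * c
# over M in [0, 1], L within an l1 energy budget, and scale c >= 0.
rng = np.random.default_rng(0)
Pi = np.outer([1.0, 0.0, 1.0], [0.0, 0.5, 0.5, 0.0])  # target distribution
K, lam, budget = 2, 0.1, 1.0  # assumed example values

M = rng.random((Pi.shape[0], K))
L = rng.random((Pi.shape[1], K))
c = 1.0
for _ in range(200):
    # Closed-form scale update from setting d/dc of the objective to zero.
    c = max(0.0, (np.sum(Pi * (M @ L.T)) + lam / 2) / np.sum(Pi * Pi))
    # Least-squares update for M, clipped to the [0, 1] mask bound.
    M = np.clip((c * Pi) @ L @ np.linalg.pinv(L.T @ L), 0.0, 1.0)
    # Least-squares update for L, rescaled into the energy budget.
    L = (c * Pi).T @ M @ np.linalg.pinv(M.T @ M)
    total = np.abs(L).sum()
    if total > budget:
        L *= budget / total
```

The regularization weight λ trades reconstruction error against the scale c; a larger λ pushes the factors toward saturating their bounds, as the text describes.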
(28) Illumination codes that maximize the energy efficiency are the impulse illuminations, like those of
(29) To capture the epipolar component, the exposure t.sub.e for each sensor row is matched to the time the projector stays on a scanline (t.sub.p) and the other timing parameters are chosen so that the line scanned by the projector is synchronized to the row being exposed in the sensor. Conversely, to capture non-epipolar light, the sensor exposure time is set to be t.sub.p less than the projector cycle time and the trigger is offset by t.sub.p so that every row is exposed for the entire projector cycle except during the time it is illuminated directly by the projector.
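The two trigger schedules described above can be sketched as simple timing computations. The function and variable names below are illustrative, not from the patent.

```python
def epipolar_schedule(n_rows, t_p):
    """Each row is exposed exactly while its epipolar scanline is lit."""
    # Row r starts exposing when the projector reaches scanline r, and
    # its exposure t_e equals the scanline dwell time t_p.
    return [(r * t_p, t_p) for r in range(n_rows)]  # (trigger, exposure)

def non_epipolar_schedule(n_rows, t_p):
    """Each row is exposed for the whole cycle except its own scanline."""
    cycle = n_rows * t_p
    t_e = cycle - t_p  # exposure is t_p less than the projector cycle
    # Offsetting each trigger by one scanline skips the interval in which
    # the row is illuminated directly by the projector.
    return [(((r + 1) % n_rows) * t_p, t_e) for r in range(n_rows)]

print(epipolar_schedule(4, 0.001))
print(non_epipolar_schedule(4, 0.001))
```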
(30) This energy optimized imaging system also has unique capabilities that are not possible in other imaging systems.
(31) Because the rolling shutter of sensor 15 is tuned by synchronization controller 20 to the impulse illuminations of light source 10, very little ambient light is let into the sensor. This allows the invention to image extremely bright objects and scenes under bright ambient illumination. In conventional imaging systems, light from a controlled light source would be overwhelmed by ambient light and would not be detectable at the photosensor.
(32) Also, since the rolling shutter of sensor 15 is aligned solely to the light source 10, reflections and scattered light that are caused by the object (such as if the object was mirrored, shiny, metallic, translucent, etc.) are not captured in the frame. Note that the rolling shutter of sensor 15 can purposely be offset from the source illumination so that only the reflections are captured.
(33) This ability to not image reflections, scattered light, and ambient light also gives the invention the ability to image and recover the shape of objects in challenging lighting conditions, such as smoke- or mist-filled surroundings. Using the source illumination-to-photosensor disparity offset can allow for three-dimensional reconstruction within such lighting-challenged areas.
(34) It should be understood by one of skill in the art that controller 20 could be implemented as circuitry, as an ASIC, as a microprocessor running software, or by any other means known in the art. The invention is not intended to be limited to one method of implementing the functions of the controller.
(35) Dual photography, a technique where the image generated is from the viewpoint of the light source rather than the photosensor, is also possible, even in a live video context, with no processing required.
(36) The illumination technique used in this invention can be expanded to multiple photosensors. This allows for highly power efficient active illumination stereo using two or more photosensors.
(37) The technique also extends naturally to configurations with multiple light sources. Different light sources interfere with each other minimally when used with the proposed technique. With inter-source synchronization, interference can be eliminated completely.
(38) The proposed technique can be realized with a time-of-flight (ToF) photosensor. A rolling shutter ToF photosensor combined with a modulated scanning laser light source using our technique would allow for a power efficient ToF depth sensor that works under bright ambient light conditions and suppresses indirect lighting effects.
(39) In other embodiments, the invention can be used with other imaging modalities including, but not limited to, light field imaging, microscopy, polarization, coherent, fluorescent, and nonlinear imaging.
(40) Although the invention is illustrated and described herein with reference to specific embodiments, the invention is not intended to be limited to the details shown. Rather, various modifications may be made in the implementation without departing from the invention.