Composite imaging systems using a focal plane array with in-pixel analog storage elements
11514594 · 2022-11-29
Assignee
Inventors
Cpc classification
H04N23/54
ELECTRICITY
G06T7/246
PHYSICS
H04N5/2226
ELECTRICITY
H04N13/254
ELECTRICITY
G01S17/894
PHYSICS
G06T7/521
PHYSICS
H04N25/771
ELECTRICITY
H04N25/75
ELECTRICITY
International classification
G06T7/521
PHYSICS
G06T7/246
PHYSICS
H04N13/254
ELECTRICITY
Abstract
Various embodiments of a 3D+ imaging system include a focal plane array with in-pixel analog storage elements. In embodiments, an analog pixel circuit is disclosed for use with an array of photodetectors in a sub-frame composite imaging system. In embodiments, a composite imaging system is capable of determining per-pixel depth, white point, and black point for a sensor and/or a scene that is stationary or in motion. Example applications for the 3D+ imaging system include advanced imaging for vehicles, as well as industrial and smart phone imaging. An extended dynamic range imaging technique is used to reproduce a greater dynamic range of luminosity.
Claims
1. An imaging system configured to generate a composite image depth map of a scene, the imaging system comprising: at least one emitter configured to emit an active light pulse toward the scene; an array of sub-frame imaging pixels, wherein each sub-frame imaging pixel includes a detector and at least three analog memory components configured to receive light that includes some of the active light pulse reflected from the scene for a field of view that includes at least a portion of the scene, each detector in the array of sub-frame imaging pixels configured to produce an analog response to a number of incident photons of light; control circuitry operably coupled to the at least one emitter and the array of sub-frame imaging pixels and configured to cause the at least one emitter to emit the active light pulse and to cause the array of sub-frame imaging pixels to receive light to store at least three successive sub-frames of analog stored charge values within the at least three analog memory components, wherein each sub-frame has a timing relationship of an emitter/detector cycle for that sub-frame that enables the at least three successive sub-frames to be utilized for range gating of an image of the scene in different distance bands from the array of sub-frame imaging pixels; and a processing system operably coupled to the control circuitry and the at least three analog memory components to generate the composite image depth map of the scene, the processing system configured to: analyze the at least three successive sub-frames of analog stored charge values to determine for a sub-frame imaging pixel a black point, a white point, and the one of the at least three successive sub-frames at which the white point occurs; and determine a distance for each sub-frame imaging pixel based on the one of the at least three successive sub-frames at which the white point occurs.
2. The imaging system of claim 1 wherein the distance for each sub-frame imaging pixel is defined by an overlap in a duration of the timing relationship of the emitter/detector cycle for that sub-frame imaging pixel.
3. The imaging system of claim 2 wherein a total distance range of the imaging system is equal to a number of sub-frames per sub-frame imaging pixel multiplied by the distance for each sub-frame pixel.
4. The imaging system of claim 1 wherein the imaging system is mounted in a vehicle capable of moving at speeds of more than 50 km/hour and all of the at least three successive sub-frames for each sub-frame imaging pixel are stored within an imaging window less than 250 μSec.
5. The imaging system of claim 1 wherein the imaging system is mounted in a handheld device and the at least three successive sub-frames for each sub-frame imaging pixel are stored within an imaging window of less than 2500 μSec.
6. The imaging system of claim 1 wherein the array of detectors, the control circuitry and the processing system are integrated on a single electronic device.
7. The imaging system of claim 1 wherein the array of detectors and the control circuitry are integrated on a single electronic device and the processing system is external to the single electronic device.
8. The imaging system of claim 1 wherein a distance to an object represented by a sub-frame imaging pixel is determined by an equation that is unique for a distance range for the one of the at least three successive sub-frames at which the white point occurs.
9. The imaging system of claim 1 wherein the active light pulse in a given emitter/detector cycle for a given sub-frame imaging pixel comprises: a number of pulses selected from the set consisting of a single pulse for each of the at least three successive sub-frames, a sequence of multiple pulses for each of the at least three successive sub-frames, a single pulse per sub-frame, or multiple pulses per sub-frame, and a frequency selected from the set consisting of a single frequency range or multiple frequency ranges.
10. The imaging system of claim 1 wherein the array of sub-frame imaging pixels is configured to accumulate light based on a single accumulation for the timing relationship of the emitter/detector cycle that is unique for each sub-frame.
11. The imaging system of claim 1 wherein the array of detectors is configured to accumulate light based on a plurality of accumulations for the timing relationship of the emitter/detector cycle that is the same for each sub-frame.
12. The imaging system of claim 1, wherein an analog memory component is selected from a set that includes a capacitor, a switched-current memory, and an analog shift register.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION OF THE EMBODIMENTS
(23) SI memory has limitations for data accuracy and current draw. It is, however, an effective way to show functional current-switched logic.
(24) In embodiments, sub-frame capture and processing produces composite images. An example of composite imagery created with sub-frames is disclosed in U.S. Pat. No. 9,866,816 (Retterath) for an Active Pulsed 4D Camera, and this patent is incorporated by reference herein.
(25) In accordance with various embodiments described herein, sub-frame capture may utilize varying photodetector integration times for sub-frames within a passive composite image and varying timing relationships between emitter active times and photodetector integration times. In some embodiments, the sub-frame processing techniques rely on photodetector responses that are linearized. For active camera system embodiments, multiple emitter wavelengths may be utilized for the various modes. Multiple wavelengths may be emitted during a sub-frame cycle, or a single wavelength may be emitted during one sub-frame cycle with a different wavelength emitted during subsequent sub-frames within a composite imaging window. Not all operational modes of the various embodiments utilize linearization; however, for operational modes that do, photodetectors that respond to multiple wavelengths must have a linearization capability for every wavelength modality of the emitter(s). As an example, for a photodetector array that contains a Bayer filter, the individual photodetectors have a green, blue, or red filter and will have differing responses to visible light (400-700 nm) and to narrowband NIR light such as 850 nm. Accordingly, photodetectors with a red, green, or blue filter that are used in multispectral composite image sub-frame processing in accordance with this embodiment would each require a linearization function for visible light and a separate linearization function for NIR light.
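In embodiments, the per-filter, per-band linearization described above can be organized as a table of correction functions keyed by (color filter, wavelength band). The sketch below is illustrative only: the exponents are hypothetical placeholders standing in for measured calibration curves, which this disclosure does not specify.

```python
# Hypothetical linearization curves, one per (color filter, wavelength band).
# Real curves would come from per-wavelength photodetector calibration.
LINEARIZATION = {
    ("red",   "visible"): lambda r: r ** 1.10,
    ("red",   "nir"):     lambda r: r ** 0.92,
    ("green", "visible"): lambda r: r ** 1.05,
    ("green", "nir"):     lambda r: r ** 0.95,
    ("blue",  "visible"): lambda r: r ** 1.08,
    ("blue",  "nir"):     lambda r: r ** 0.90,
}

def linearize(raw_values, color_filter, band):
    """Apply the (filter, band) linearization to a list of raw responses."""
    fn = LINEARIZATION[(color_filter, band)]
    return [fn(v) for v in raw_values]
```

A raw red-filter response captured under NIR illumination would then be corrected with the ("red", "nir") entry before sub-frame processing.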
(26) Sub-frame integration for a sub-frame composite imaging cycle will result in an intensity value for each pixel (m,n) in an imaging array.
CoM_WhitePoint(sf)=(IP2(sf)+IP1(sf))/2 Eq. 1
(27) Where IP2(sf) is the sub-frame value of IP2(i,sf) and IP1(sf) is the sub-frame value of IP1(i,sf).
(28) Alternatively, the center of mass of the trapezoid based on black point inflection points is defined according to:
CoM_BlackPoint(sf)=(IP3(sf)+IP0(sf))/2 Eq. 2
(29) Where IP3(sf) is the sub-frame value of IP3(i,sf) and IP0(sf) is the sub-frame value of IP0(i,sf).
(30) For an isosceles trapezoidal, sub-frame composite image pixel waveform, Eqs. 1 and 2 yield equivalent results. In embodiments, sub-frame composite image timing is implemented with thirty-two sub-frames, an emitter clock period of 8 nanoseconds, an integration time of twelve emitter clock periods, an emitter pulse width of eight emitter clock periods, a sub-frame 0 offset from detector end to emitter start of one emitter clock period, and a sub-frame period duration of 5 μSec. Based on these parameters, the shape, size, and horizontal location of isosceles trapezoidal waveforms is defined sufficiently to allow a trapezoidal descriptor to enable the computation of a distance parameter for every pixel in an array. A sub-frame trapezoid descriptor for a 32/8/8/12/1/5 configuration is shown:
(31) Sub-frame trapezoidal descriptor parameters:

(32) TABLE-US-00001
  # of sub-frames                               32
  Emitter clock period                          8 nSec
  Emitter pulse width                           8 emitter clock periods
  Integration width                             12 emitter clock periods
  Sub-frame 0 integration end to emitter start  1 emitter clock period
  Sub-frame period duration                     5 μSec
  Speed of light constant (in a vacuum)         0.299792 m/nSec

Sub-frame trapezoidal descriptor derived values:
  Trapezoid IP0(sf) at d = 0                    1
  Trapezoid IP1(sf) at d = 0                    9
  Trapezoid IP2(sf) at d = 0                    13
  Trapezoid IP3(sf) at d = 0                    21
  Trapezoid CoM(sf) at d = 0                    11
  Trapezoid lower base width                    20
  Trapezoid upper base width                    4
  Trapezoid width at mid-height                 12
  Range of camera                               24.0 meters
(33) Trapezoidal descriptor parameters are used to derive other trapezoid parameters, including inflection point "locations" and a CoM "location" at a distance of d=0, where location refers to the sub-frame number at which the point intersects the horizontal axis of an isosceles trapezoid pixel plot. In embodiments, sub-frame locations for points are specified as floating point values, thus yielding higher accuracy for pixel distance determinations. According to the trapezoidal descriptor derived values, the four inflection points IP0(sf), IP1(sf), IP2(sf), and IP3(sf) at d=0 are at sub-frames 1, 9, 13, and 21, respectively. The trapezoid CoM(sf) at d=0 is at sub-frame 11 and is computed using Eq. 1 or Eq. 2. For composite image post-processing, the distance for each pixel is determined by computing the delta between the CoM(sf) value for pixel (m,n) and the CoM(sf) for d=0, where:
ΔCoM(m,n)=CoM(m,n)(sf)−CoM_d=0(sf) Eq. 3

(34) Where CoM(m,n)(sf) is the CoM of the trapezoid for pixel (m,n) and CoM_d=0(sf) is the CoM at d=0 from the trapezoidal descriptor.
(35) The distance for pixel (m,n), defined as the distance from the camera to the object represented by pixel (m,n), is computed according to:
Distance(m,n)=(ΔCoM(m,n)×C×P_emitter)/2 Eq. 4

Where C is a constant for the speed of light in a medium and P_emitter is the emitter clock period.
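Using the example descriptor values (8 nSec emitter clock, CoM at d=0 of sub-frame 11, speed of light 0.299792 m/nSec), Eqs. 3 and 4 reduce to a short computation. A minimal sketch, with illustrative names:

```python
C_M_PER_NS = 0.299792   # speed of light in a vacuum, m/nSec
P_EMITTER_NS = 8.0      # emitter clock period from the example descriptor, nSec
COM_D0_SF = 11.0        # trapezoid CoM(sf) at d = 0, in sub-frames

def pixel_distance_m(com_sf):
    """Eq. 3 (delta CoM in sub-frames) followed by Eq. 4 (round trip halved)."""
    delta_com = com_sf - COM_D0_SF
    return (delta_com * C_M_PER_NS * P_EMITTER_NS) / 2.0
```

A pixel whose CoM lands one sub-frame past the d=0 location is therefore about 1.2 meters from the camera.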
(36) In embodiments, the range of a sub-frame composite imaging camera may be specified in various ways, depending on the shape and structure of the resulting waveform. For isosceles trapezoidal waveforms, the range is defined as the maximum pixel distance for which an isosceles trapezoidal waveform lies completely within the sub-frame range for a trapezoidal descriptor. Said another way, the maximum range of a distance-measuring camera that utilizes sub-frame collection and isosceles trapezoidal waveform processing is defined as the point at which IP3(sf) is equal to the maximum sub-frame number. In embodiments, the center of mass for a maximum range isosceles trapezoid is computed according to:
CoM_MaxRange=SF_max−(Width_LowerBase/2) Eq. 5

(37) Where SF_max is the maximum sub-frame number and Width_LowerBase is the width of the lower trapezoid base.
(38) In embodiments, the maximum device range for distance measurements is computed according to:
Range_max={[SF_max−(Width_LowerBase/2)−CoM_d=0(sf)]×C×P_emitter}/2 Eq. 6

(39) Where SF_max is the maximum sub-frame number, Width_LowerBase is the width of the lower trapezoid base, CoM_d=0(sf) is the sub-frame of the CoM at d=0, C is a constant for the speed of light in a medium, and P_emitter is the emitter clock period.
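As a worked sketch of Eqs. 5 and 6 under the example descriptor (lower base width 20, CoM at d=0 of sub-frame 11), assuming SF_max = 31 for 32 sub-frames numbered from zero:

```python
C_M_PER_NS = 0.299792   # speed of light in a vacuum, m/nSec
P_EMITTER_NS = 8.0      # emitter clock period, nSec

def max_range_m(sf_max, width_lower_base, com_d0_sf):
    """Eq. 6: maximum distance at which the full isosceles trapezoid still
    lies within the sub-frame range."""
    com_max = sf_max - width_lower_base / 2.0   # Eq. 5
    return ((com_max - com_d0_sf) * C_M_PER_NS * P_EMITTER_NS) / 2.0
```

With these assumed inputs the maximum measurable distance evaluates to roughly 12 meters; the 24.0 meter "range of camera" in the descriptor table appears to correspond to the lower base width (20 sub-frames) multiplied by the per-sub-frame distance (about 1.2 m), which is a different quantity from Eq. 6.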
(41) The use of inflection points for center of mass calculations for isosceles trapezoids leads to decreased precision for distance calculations when inflection points do not correspond to integer sub-frame numbers. In embodiments, this limitation is removed by utilizing waveform mid-height crossover points to determine the center of mass. The mid-height intensity for pixel (m,n) is computed according to:
I_mid-ht(m,n)=(I_BlackPoint(m,n)+I_WhitePoint(m,n))/2 Eq. 7
(42) The slope of the leading edge of the waveform is computed according to:
Slope_LeadingEdge=(I_WhitePoint(m,n)−I_BlackPoint(m,n))/(IP1(sf)−IP0(sf)) Eq. 8
(43) The slope of the trailing edge of the waveform is computed according to:
Slope_TrailingEdge=(I_BlackPoint(m,n)−I_WhitePoint(m,n))/(IP3(sf)−IP2(sf)) Eq. 9
(44) In embodiments, an algorithm for determining the mid-height crossover points 148, 150 for the leading edge 148 and trailing edge 150 consists of a process of incrementing sub-frame numbers and identifying the sub-frame number at which the leading and trailing edge waveforms cross over the computed mid-height intensity 146 value. The leading-edge remainder 152 is the intensity value difference between the leading edge sub-frame crossover point intensity value and the mid-height intensity 146 value. The sub-frame value at which the leading edge crosses over the mid-height intensity is computed according to Eq. 10 below:
SF_mid-ht-lead(m,n)=SF_mid-ht-exc-lead(m,n)−[(I_mid-ht-exc-lead(m,n)−I_mid-ht(m,n))/Slope_LeadingEdge] Eq. 10

Where SF_mid-ht-exc-lead(m,n) is the leading-edge sub-frame at which the intensity first exceeds the mid-height value for pixel (m,n), I_mid-ht-exc-lead(m,n) is the intensity value at that sub-frame, and Slope_LeadingEdge is the slope of the leading edge of the trapezoid.
(45) The trailing-edge remainder 154 is the intensity value difference between the trailing edge sub-frame crossover point intensity value and the mid-height intensity 146 value. The sub-frame value at which the trailing edge crosses over the mid-height intensity is computed according to Eq. 11 below:
SF_mid-ht-trail(m,n)=SF_mid-ht-exc-trail(m,n)−[(I_mid-ht-exc-trail(m,n)−I_mid-ht(m,n))/Slope_TrailingEdge] Eq. 11

Where SF_mid-ht-exc-trail(m,n) is the trailing-edge sub-frame at which the intensity first falls below the mid-height value for pixel (m,n), I_mid-ht-exc-trail(m,n) is the intensity value at that sub-frame, and Slope_TrailingEdge is the slope of the trailing edge of the trapezoid.
(46) The CoM 156 of the waveform 140 is the mid-point of the leading-edge crossover point 148 and the trailing-edge crossover point 150 and is computed according to:
CoM(m,n)=(SF_mid-ht-lead(m,n)+SF_mid-ht-trail(m,n))/2 Eq. 12
(47) The computed trapezoid mid-height width 158 is the offset (in sub-frames) between the leading-edge crossover point 148 and the trailing-edge crossover point 150 and is computed according to:
Width_MidHeight(m,n)=SF_mid-ht-trail(m,n)−SF_mid-ht-lead(m,n) Eq. 13
(48) For an isosceles trapezoid, the computed value of the mid-height width 158 will be equivalent to the mid-height width from the trapezoidal descriptor. Variations between the computed mid-height width 158 and the corresponding trapezoidal descriptor value are indications of scenarios like imaging in attenuating environments or motion of objects in a scene and/or motion of a camera.
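The mid-height crossover procedure of Eqs. 7 and 10 through 13 can be sketched per pixel as follows; function and variable names are illustrative, and the crossover interpolation uses the local per-sub-frame slope:

```python
def midheight_com(intensities):
    """Black point, white point, CoM (Eq. 12), and mid-height width (Eq. 13)
    of one pixel's sub-frame waveform from mid-height crossover points."""
    black, white = min(intensities), max(intensities)
    mid = (black + white) / 2.0                          # Eq. 7
    # Leading edge: first sub-frame whose intensity exceeds the mid-height.
    k = next(i for i, v in enumerate(intensities) if v > mid)
    lead = k - (intensities[k] - mid) / (intensities[k] - intensities[k - 1])
    # Trailing edge: first later sub-frame falling back below the mid-height.
    j = next(i for i in range(k, len(intensities)) if intensities[i] < mid)
    trail = j - (mid - intensities[j]) / (intensities[j - 1] - intensities[j])
    return black, white, (lead + trail) / 2.0, trail - lead
```

For an isosceles trapezoid with inflection points at sub-frames 1, 9, 13, and 21, this yields a CoM at sub-frame 11 and a mid-height width of 12, matching the descriptor.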
(49) In embodiments, an algorithm is specified for execution on one or more CPUs or GPUs for determining the black point, white point, and CoM for each pixel (m,n) in a sub-frame, composite imaging system, as follows:
(50) TABLE-US-00002 CPU/GPU Pseudocode

Constants:
  K = number of sub-frames per composite image
  M = number of columns of pixels in FPA
  N = number of rows of pixels in FPA

Begin
  m = 0                                          // initialize column counter
  n = 0                                          // initialize row counter
LoopMN
  k = 0                                          // initialize loop count for BP, WP
  BlackPoint[m,n] = 0x3FF                        // initialize BP to a high value
  WhitePoint[m,n] = 0                            // initialize WP to a low value
LoopWP
  Read i[m,n,k]                                  // read bit k from shift memory
  If i[m,n,k] < BlackPoint[m,n]                  // bit k lowest so far?
    BlackPoint[m,n] = i[m,n,k]                   // if yes, make bit k the new lowest
  endif
  If i[m,n,k] > WhitePoint[m,n]                  // bit k highest so far?
    WhitePoint[m,n] = i[m,n,k]                   // if yes, make bit k the new highest
  endif
  k = k + 1                                      // increment sub-frame counter
  If k < K, GoTo LoopWP                          // end of sub-frames?
  MidHeight[m,n] = (BlackPoint[m,n] + WhitePoint[m,n]) / 2   // mid-height intensity value (Eq. 7)
  LeadEdgeMidPassed[m,n] = FALSE                 // initialize leading edge CoM flag
  k = 0                                          // initialize loop count for CoM
  TrailEdgeActive[m,n] = FALSE                   // initialize search for trailing edge
  LastI[m,n] = BlackPoint[m,n]                   // initialize intensity value for k−1
LoopLead
  If LeadEdgeMidPassed[m,n] = FALSE, Do
    If i[m,n,k] > MidHeight[m,n]                 // leading edge crossed midpoint?
      LeadEdgeMidPassed[m,n] = TRUE
      LeadingCrossover[m,n] = k − {(i[m,n,k] − MidHeight[m,n]) / (i[m,n,k] − LastI[m,n])}
      TrailEdgeActive[m,n] = TRUE
    endif
  endif
  If TrailEdgeActive[m,n] = TRUE, Do
    If i[m,n,k] < MidHeight[m,n]                 // trailing edge crossed midpoint?
      TrailingCrossover[m,n] = k − {(MidHeight[m,n] − i[m,n,k]) / (LastI[m,n] − i[m,n,k])}
      TrailEdgeActive[m,n] = FALSE
    endif
  endif
  LastI[m,n] = i[m,n,k]                          // remember intensity for next pass
  k = k + 1                                      // increment sub-frame counter
  If k < K, GoTo LoopLead                        // end of sub-frames?
  CoM[m,n] = (LeadingCrossover[m,n] + TrailingCrossover[m,n]) / 2   // Eq. 12
  m = m + 1                                      // increment column counter
  If m ≠ M, GoTo LoopMN                          // end of column?
  m = 0                                          // if yes, reset column counter
  n = n + 1                                      // and increment row counter
  If n ≠ N, GoTo LoopMN                          // end of rows?
End                                              // CoM algorithm complete
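As a cross-check of the black point / white point sweep described for the CPU/GPU algorithm, a minimal Python rendering follows; the list-of-lists pixel layout and function name are assumptions for illustration:

```python
def black_white_points(frames):
    """frames[m][n][k] holds the intensity of pixel (m, n) at sub-frame k.
    Returns per-pixel black point (minimum) and white point (maximum),
    mirroring the LoopMN/LoopWP sweep."""
    M, N = len(frames), len(frames[0])
    black = [[0x3FF] * N for _ in range(M)]   # initialize BP to a high value
    white = [[0] * N for _ in range(M)]       # initialize WP to a low value
    for m in range(M):
        for n in range(N):
            for i_k in frames[m][n]:          # read bit k from shift memory
                if i_k < black[m][n]:
                    black[m][n] = i_k
                if i_k > white[m][n]:
                    white[m][n] = i_k
    return black, white
```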
(52) The overall throughput and composite image rate for a device is determined by the durations of the three stages 162, 162, 164. In embodiments, a duration for an imaging window establishes the time it takes for all K sub-frames to be integrated and shifted into the analog shift registers located at each pixel 166. In embodiments, considerations for an imaging window duration 174 are determined by the amount of motion expected in a scene, the amount of motion expected for a composite camera, and the desired maximum horizontal and vertical pixel movement for sub-frame zero through sub-frame K−1. In embodiments, an imaging window duration for optimal performance in forward-facing and rear-facing automotive camera applications is in the range from 50 μSec to 200 μSec. In embodiments, side-facing or oblique-angle automotive applications provide optimal performance with imaging window durations in the range from 25 μSec to 150 μSec. In embodiments, smart phone and industrial camera applications provide optimal performance with imaging window durations in the range from 50 μSec to 2000 μSec. Transfer duration 176 specifies the time it takes to transfer 162 all sub-frames off an imaging device. Sub-frame transfer duration is determined by the focal plane array 160 bus transfer 162 rate and is defined according to:
TransferDuration=(AD×M×N×K)/(R_Transfer×2^30×8) Eq. 14

(53) Where AD is the number of bits utilized in A/D conversion, M is the number of columns in a focal plane array, N is the number of rows in a focal plane array, K is the number of sub-frames per composite image, R_Transfer is the specified transfer rate of the bus in GB/sec, 2^30 is the number of bytes in a gigabyte, and 8 is the number of bits in a byte.
(54) As an example, the transfer duration 176 for a 16 megapixel composite imaging system with K=32 sub-frames per composite image is computed according to these parameters:
(55) TABLE-US-00003
  Focal Plane Array size                    16,777,216 pixels
  Bits per pixel for A/D conversion         12 bits/pixel
  Bytes per pixel                           1.5 Bytes/pixel
  Focal Plane Array bus transfer rate       5 GB/sec
  Number of sub-frames per composite image  32 sub-frames
(56) The resulting transfer duration according to Eq. 14 is 150 milliseconds (mSec).
(57) In embodiments, upon transfer of information to sub-frame memory 168, the CPU/GPU 170 performs sub-frame processing at the pixel level to determine black point, white point, and CoM for each pixel. In embodiments, utilizing multiple GPUs for processing will typically lead to a lower elapsed time for pixel processing. In embodiments, sub-frame pixel processing time for each pixel, expressed in microseconds, is computed according to:
t(m,n)_Sub-framePixelProcessing=OpNum/MFLOP Eq. 15

(58) Where OpNum is the number of operations per pixel to perform an algorithm and MFLOP (Mega-FLOPs) is the number of millions of floating point operations per second for a single GPU.
(59) In embodiments, for a CoM algorithm with OpNum equal to 500 operations running on a 50 MFLOP processor, Eq. 15 results in an elapsed time for processing of a single pixel of 10 microseconds. In embodiments, for a camera system with M×N pixels and a frame processor that includes multiple CPU/GPU cores, the processing duration is computed according to:
ProcessingDuration=(t(m,n)_Sub-framePixelProcessing×M×N)/NumPU Eq. 16

(60) Where M is the number of columns in a focal plane array, N is the number of rows in a focal plane array, and NumPU is the number of processing units used for algorithmic computation.
(61) In embodiments, for a camera system with M equal to 4096 pixels, N equal to 4096 pixels, and a frame processor that includes 1024 CPU/GPU cores with each core running at 50 MFLOPs, the resulting processing duration is 163.84 milliseconds. Having computed the durations for the stages of composite image collection, transfer, and processing, the overall elapsed time of the stages is:
(62) TABLE-US-00004
  Imaging Window        0.16 milliseconds
  Transfer Duration     150 milliseconds
  Processing Duration   163.84 milliseconds
  Total Elapsed Time    314.00 milliseconds
(63) The elapsed time of 314.00 milliseconds results in an overall performance specification for a 32 sub-frame processing, composite image-generating 3D+ camera of approximately 3.2 composite images per second. For applications that require 30 or 60 images per second, 3.2 images per second falls well short of the requirement. In embodiments, the camera architecture may be modified for pipelined processing, whereby sequential stages in a process are overlapped in time by utilizing extra storage and/or additional electronics, typically at the expense of higher component costs and higher electrical current requirements. In embodiments, the total elapsed time for pipelined operation may be reduced to 163.84 milliseconds, which is the elapsed time of the stage with the longest duration. The elapsed time of 163.84 milliseconds results in an overall performance specification for a 32 sub-frame processing, composite image-generating 3D+ camera of approximately 6.1 composite images per second, which is still well short of the desired throughput rate for many imaging applications.
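The stage accounting above follows directly from Eqs. 14 through 16; a minimal sketch with illustrative function names:

```python
def transfer_duration_s(ad_bits, m, n, k, rate_gb_per_s):
    """Eq. 14: time to move K A/D-converted sub-frames off the imaging device."""
    return (ad_bits * m * n * k) / (rate_gb_per_s * 2**30 * 8)

def processing_duration_s(op_num, mflops, m, n, num_pu):
    """Eq. 15 (per-pixel time) folded into Eq. 16 (array time over NumPU cores)."""
    t_pixel_s = op_num / (mflops * 1e6)   # Eq. 15
    return (t_pixel_s * m * n) / num_pu   # Eq. 16

# Example parameters from the text: 4096 x 4096 pixels, 12-bit A/D, 32
# sub-frames, 5 GB/sec bus, 500-operation CoM algorithm, 1024 cores at 50 MFLOPs.
transfer = transfer_duration_s(12, 4096, 4096, 32, 5)          # 0.15 s
processing = processing_duration_s(500, 50, 4096, 4096, 1024)  # 0.16384 s
total = 0.00016 + transfer + processing                        # + 0.16 ms imaging window
```

Without pipelining, the composite image rate is 1/total, approximately 3.2 images per second; with pipelining, the longest stage (163.84 ms) dominates, giving approximately 6.1.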
(64) According to Wong (https://www.imperial.ac.uk/media/imperial-college/faculty-of-engineering/computing/public/1718-pg-projects/WongM-Analog-Vision.pdf), Focal-Plane Sensor-Processor (FPSP) chips are a special class of imaging devices in which the sensor arrays and processor arrays are embedded together on the same silicon chip (Zarandy, 2011). Unlike traditional vision systems, in which sensor arrays send collected data to a separate processor for processing, FPSPs allow data to be processed in place on the imaging device itself. This unique architecture enables ultra-fast image processing even on small, low-power devices, because costly transfers of large amounts of data are no longer necessary.
(65) According to Wong, the SCAMP-5 Vision Chip is a Focal-Plane Sensor-Processor (FPSP) developed at the University of Manchester (Carey et al., 2013a).
(66) According to Wong, the fully-parallel interface coupled with the use of analog registers for arithmetic operations has allowed the SCAMP-5 to achieve superior outcomes on key performance metrics, particularly in terms of frame rate and power consumption. The SCAMP-5 architecture allows for the transfer of a complete image frame from the image sensor array to the processor array in one clock cycle (100 ns), which equates to a sensor processing bandwidth of 655 GB/s (Martel and Dudek, 2016). This allows for the implementation of vision algorithms at extremely high frame rates which are simply unattainable with traditional architectures. For example, Carey et al. (2013a) demonstrated an object-tracking algorithm running at 100,000 fps. On the other hand, when operating at lower frame rates, the SCAMP-5 can function at ultra-low power consumption rates. Carey et al. (2013b) demonstrated a vision system capable of carrying out loiterer detection, which operated continuously at 8 frames per second for 10 days powered by three standard AAA batteries. These superior performance characteristics have positioned the SCAMP-5 as an ideal device for implementing vision algorithms in low-power embedded computing systems (Martel and Dudek, 2016).
(67) SCAMP-5 and other FPSP chips are known as neighbor-in-space FPSP devices because they perform operations at the pixel level and will perform processing within a single frame of data. Each pixel processing element has the ability to reference and perform operations for neighboring pixels in space. To this point, sub-frame processing for composite image creation has not required neighbor-in-space processing and, as a result, has been unable to benefit significantly from a neighbor-in-space FPSP implementation. In contrast, sub-frame processing requires neighbor-in-time processing whereby pixel (m,n) in a sub-frame is processed along with pixels (m,n) from other sub-frames within a collection of sub-frames collected within an imaging window for a composite image or a collection of composite images.
(70) In embodiments, switched-current (SI) circuitry is used to convey basic functionality. In practice, more complex circuitry is used in order to reduce processing errors, to increase accuracy, and to reduce power dissipation.
I_Count<I_max/K Eq. 17

(74) Where I_max is the maximum current value for an analog storage register
(75) and K is the number of sub-frames for algorithms that require an analog loop counter.
(76) In embodiments, I.sub.Count is enabled onto the analog bus 288 when the ICount_Enbl 284 signal is activated. An exemplary analog count circuit for use in this embodiment may consist of a single stage amplifier with a large capacitive feedback that accumulates a charge that is proportional to the number of pulses counted for each event enabled by the Flag0_Set 276 signal. Other examples of analog counter circuits may also be used in various embodiments, such as are shown and described in U.S. Pat. No. 7,634,061, the contents of which are hereby incorporated by reference.
(77) When using a DμC for providing instructions to M×N NitAPP elements, all NitAPPs perform the same instruction simultaneously. In embodiments, conditional operations are handled by using information from the Flag0, Flag1, and Flag2 bits, which enable or disable operations for register banks. In embodiments, registers are used for storing intermediate results, for event counters, and for conditional instruction execution based on flag bits. In embodiments, a 32-bit digital instruction word is routed to each of the M×N NitAPP elements, whereby each instruction bit controls the gate input to a switching transistor or controls current flow from the source to the gate for an MOS transistor. In embodiments, the definition of the bits for the 32-bit digital instruction word is:
(78) TABLE-US-00005 NitAPP Instruction Bits
  Bit  Switch           Function
  0    SFSR_Xfer        Sub-frame Shift Register Transfer
  1    SFSR_Shift       Sub-frame Shift Register Bit Shift
  2    SFSR_PD_Sel      Sub-frame Shift Register Photodetector Select
  3    SFSR_Rd          Sub-frame Shift Register Bit 0 Enable to Analog Bus
  4    Mult_In_Wrt      Write Input Value to Multiplication Block
  5    Mult_Out_Wrt     Write Output Value from Multiplication Block
  6    Mult_Out_Rd      Enable Multiplication Output to Analog Bus
  7    Flag0_Latch      Latch Flag0 Based on Compare Circuit
  8    Flag0_Set        Set Flag0
  9    Flag1_Latch      Latch Flag1 Based on Compare Circuit
  10   Flag1_Set        Set Flag1
  11   Flag2_Latch      Latch Flag2 Based on Compare Circuit
  12   Flag2_Set        Set Flag2
  13   RA0_Rd           Enable Register A0 to Analog Bus
  14   RA0_Wrt          Write Analog Bus Value to Register A0
  15   RB0_Rd           Enable Register B0 to Analog Bus
  16   RB0_Wrt          Write Analog Bus Value to Register B0
  17   RC0_Rd           Enable Register C0 to Analog Bus
  18   RC0_Wrt          Write Analog Bus Value to Register C0
  19   RD0_Rd           Enable Register D0 to Analog Bus
  20   RD0_Wrt          Write Analog Bus Value to Register D0
  21   RA1_Rd           Enable Register A1 to Analog Bus
  22   RA1_Wrt          Write Analog Bus Value to Register A1
  23   RB1_Rd           Enable Register B1 to Analog Bus
  24   RB1_Wrt          Write Analog Bus Value to Register B1
  25   RA2_Rd           Enable Register A2 to Analog Bus
  26   RA2_Wrt          Write Analog Bus Value to Register A2
  27   RB2_Rd           Enable Register B2 to Analog Bus
  28   RB2_Wrt          Write Analog Bus Value to Register B2
  29   ICount_Enbl      Enable ICount Current to Analog Bus
  30   Result_SR_Xfer   Result Shift Register Transfer
  31   Result_SR_Shift  Result Shift Register Bit Shift
(79) In embodiments, pseudocode for DμC instructions that perform black point, white point, and CoM computations for each pixel, along with the associated NitAPP instruction values, is shown below:
(80) TABLE-US-00006 DuC Pseudocode (register-level NitAPP switch sequences are summarized as comments)
Constants:
K = number of sub-frames per composite image
M = number of columns of pixels in FPA
N = number of rows of pixels in FPA

// Black point / white point pass
for k = 0                                   // initialize loop count for BP, WP
Set Flag0                                   // Flag0_Set: enable Flag0 registers
Set Flag1                                   // Flag1_Set: enable Flag1 registers
Set Flag2                                   // Flag2_Set: enable Flag2 registers
BlackPoint[m,n] = 0x3FF                     // initialize BP to a high value
  Icount -> Mult                            // Mult_In_Wrt, Icount_Enbl: Icount to Mult input
  Mult * Icount -> Mult                     // Mult_Out_Wrt, Icount_Enbl: Mult = Icount squared
  Mult -> RC0                               // Mult_Out_Rd, RC0_Wrt: RC0 = BlackPoint[m,n]
WhitePoint[m,n] = 0                         // initialize WP to a low value
  Icount -> RA0                             // RA0_Wrt, Icount_Enbl
  RA0 + Icount -> RD0                       // RA0_Rd, Icount_Enbl, RD0_Wrt: RD0 = WhitePoint[m,n]
LoopWP:
  Read i[m,n,k] from shift register bit k   // read bit k
  If i[m,n,k] < BlackPoint[m,n]             // Flag0_Latch, SFSR_Rd, RC0_Rd: SFSR < RC0, check for new BlackPoint
    BlackPoint[m,n] = i[m,n,k]              // SFSR_Rd, RC0_Wrt: conditional SFSR -> RC0 update
  endif
  Set Flag0                                 // Flag0_Set: enable Flag0 registers
  If i[m,n,k] > WhitePoint[m,n]             // Flag0_Latch, SFSR_Rd, RD0_Rd: SFSR > RD0, check for new WhitePoint
    WhitePoint[m,n] = i[m,n,k]              // SFSR_Rd, RD0_Wrt: conditional SFSR -> RD0 update
  endif
  Set Flag0                                 // Flag0_Set: enable Flag0 registers
  k = k + 1                                 // increment sub-frame counter
  Circular Shift of SFShiftRegister[m,n]    // SFSR_Xfer, SFSR_Shift: SFSR circular transfer and shift
  If k < K, GoTo LoopWP                     // end of loop
RC0 -> Result_SR                            // Result_SR_Xfer, RC0_Rd: send BlackPoint[m,n] to output shift register
Shift Output SR                             // Result_SR_Shift
RD0 -> Result_SR                            // Result_SR_Xfer, RD0_Rd: send WhitePoint[m,n] to output shift register
Shift Output SR                             // Result_SR_Shift

// Center of mass (CoM) pass
MidHeight[m,n] = (WhitePoint[m,n] - BlackPoint[m,n]) / 2   // mid-height intensity value
  RC0 -> RB0                                // RC0_Rd, RB0_Wrt: negate BlackPoint[m,n]
  RC0 = (RB0 + RD0) / 2                     // RB0_Rd, RD0_Rd, RA0_Wrt, RC0_Wrt: RC0 = MidHeight[m,n]
LeadEdgeMidPassed[m,n] = FALSE              // initialize leading edge CoM flag
  Set Flag1                                 // Flag1_Set: Flag1 = LeadEdgeMidPassed[m,n]
for k = 0                                   // initialize loop count for CoM
SFCount[m,n] = 0                            // initialize NitAPP sub-frame counter
  Icount -> RA0                             // Icount_Enbl, RA0_Wrt
  RD0 = Icount + RA0                        // Icount_Enbl, RA0_Rd, RD0_Wrt: RD0 = SFCount[m,n] = 0
TrailEdgeActive[m,n] = FALSE                // initialize search for trailing edge
  RA0 > 0                                   // Flag2_Latch, RA0_Rd: Flag2 = TrailEdgeActive[m,n]
LastI[m,n] = BlackPoint[m,n]                // initialize intensity value for sf-1
  RC0 -> RA2                                // RC0_Rd, RA2_Wrt: RA2 = LastI[m,n]
LoopLead:
  If LeadEdgeMidPassed[m,n] = FALSE, Do
    If i[m,n,k] > MidHeight[m,n]            // leading edge crossed midpoint? Flag0_Latch, SFSR_Rd, RA0_Rd
      LeadEdgeMidPassed[m,n] = TRUE         // Flag1_Latch, RA0 > 0
      TrailEdgeActive[m,n] = TRUE           // Flag2_Set
      LeadingCrossover[m,n] = SFCount[m,n] + I.sub.Count*{(MidHeight[m,n] - LastI[m,n])/(i[m,n,k] - LastI[m,n])}
        // RA2_Rd, RA0_Wrt: negate LastI[m,n]; RB0 = SFSR - LastI[m,n];
        // invert RB0 through the multiplication block; RB2 = MidHeight[m,n] - LastI[m,n];
        // RB0 = Icount*(RB2/RB0); RB1 = RD0 + RB0 = LeadingCrossover[m,n]
    endif
  endif
  If TrailEdgeActive[m,n] = TRUE, Do
    If i[m,n,k] < MidHeight[m,n]            // trailing edge crossed midpoint? Flag1_Latch, RA0 > RC0
      TrailingCrossover[m,n] = SFCount[m,n] + I.sub.Count*{(MidHeight[m,n] - LastI[m,n])/(i[m,n,k] - LastI[m,n])}
        // same register sequence as the leading edge, with the result held in RB2
      TrailEdgeActive[m,n] = FALSE          // Flag2_Latch, RA0 > 0
    endif
  endif
  k = k + 1                                 // increment sub-frame counter
  Circular Shift of SFShiftRegister[m,n]    // SFSR_Xfer, SFSR_Shift
  SFCount[m,n] = SFCount[m,n] + I.sub.Count // RD0_Rd, RA0_Wrt: negate; RB0 = RA0 + Icount; RB0 -> RD0
  If k < K, GoTo LoopLead                   // end of loop
CoM[m,n] = TrailingCrossover[m,n] - LeadingCrossover[m,n]
  RB1 -> RA0                                // negate LeadingCrossover[m,n]
  RA1 = RA0 + RB2                           // RA1 = CoM[m,n]
RA1 -> Result_SR                            // Result_SR_Xfer, RA1_Rd: send CoM[m,n] to output shift register
Shift Output SR                             // Result_SR_Shift
(82) In embodiments, there are four sequential time durations of the system 290—an analog focal plane array imaging window 310, an on-FPA computation duration for NitAPP processing 312, an off-chip transfer and A/D conversion duration 314, and a digital processing duration 316. The overall throughput and composite image rate for a device is determined by the durations of the four stages 310, 312, 314 and 316. In embodiments, a duration for an imaging window establishes the time it takes for all K sub-frames to be integrated and shifted into analog shift registers located at each NitAPP pixel 300. In embodiments, considerations for an imaging window duration 174 are determined by the amount of motion expected in a scene, the amount of motion expected for a composite camera, and the desired maximum horizontal and vertical pixel movement from sub-frame zero through sub-frame K−1. In embodiments, an imaging window of 160 μSec for automotive applications meets the sub-frame horizontal and vertical alignment guidelines for forward-facing and rear-facing camera applications. In embodiments, NitAPP processing duration 312 is the amount of time required for the DuC 296 to issue all of the instructions to the NitAPP[m,n] elements for the desired algorithmic processing and control functionality for on-pixel, sub-frame processing. In embodiments, transfer duration 314 specifies the time it takes to read result information from all pixels and transfer all sub-frames off a focal plane array 292 and into NitAPP result memory 304. In embodiments, processing duration 316 is the time it takes to digitally produce composite images 308 from the information contained in the NitAPP result memory 304.
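The four-stage throughput model above can be sketched as follows. This is an illustrative model, not the patent's implementation; the function name and the example stage durations (taken from the NitAPP column of the comparison table later in this section) are for demonstration only.

```python
def composite_rate_ms(imaging, computation, transfer, processing, pipelined=False):
    """Composite images per second from the four stage durations (in ms).

    Sequential operation sums the stages; pipelined operation overlaps them,
    so throughput is limited by the slowest stage.
    """
    if pipelined:
        elapsed = max(imaging, computation, transfer, processing)
    else:
        elapsed = imaging + computation + transfer + processing
    return 1000.0 / elapsed

# Example NitAPP stage durations: 0.16 ms window, 0.10 ms on-FPA computation,
# 14.06 ms transfer, 3.28 ms digital processing.
rate = composite_rate_ms(0.16, 0.10, 14.06, 3.28)             # ~56.8 images/s
rate_pipe = composite_rate_ms(0.16, 0.10, 14.06, 3.28, True)  # ~71.1 images/s
```

The pipelined case shows why the transfer stage dominates: once stages overlap, only the 14.06 ms transfer limits throughput.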
(83) In embodiments, NitAPP architecture displays significant throughput advantages versus digital sub-frame-processing systems. As an example, a throughput comparison is presented for NitAPP-processed and digitally-processed images for a 16 megapixel composite imaging system with K=32 sub-frames per composite image according to these parameters:
(84) TABLE-US-00007
Focal Plane Array size | 16,777,216 pixels
Bits per pixel for A/D Conversion | 12 bits/pixel
Bytes per pixel | 1.5 Bytes/pixel
Focal Plane Array bus transfer rate | 5 GB/sec
Number of sub-frames per composite image | 32 sub-frames
(85) In embodiments, the duration comparisons are made for comparative algorithms to determine black point, white point and center of mass (CoM) for each of the 16 megapixels. The overall durations for two imaging systems are:
(86) TABLE-US-00008 (Stage | Digital Sub-frame | NitAPP)
Imaging Window | 0.16 ms | 0.16 ms
On-FPA Computation | 0 ms | 0.10 ms
Transfer Duration | 150 ms | 14.06 ms
Processing Duration | 163.84 ms | 3.28 ms
Total Elapsed Time | 314.0 ms | 17.6 ms
Images/second - no pipeline | 3.2 | 56.7
Images/second - pipelined | 6.1 | 71.1
(87) In embodiments, on-FPA computation for NitAPP consists of the duration required to execute the NitAPP instructions for a black point, white point and CoM algorithm. The duration (in μSec) of the algorithm is computed according to:
t(m,n).sub.NitAPP=OpNum.sub.NitAPP/MFLOP.sub.NitAPP Eq. 18
(88) Where:
OpNum.sub.NitAPP is the number of NitAPP instructions to perform an algorithm
MFLOP.sub.NitAPP stands for Mega-FLOPs and is the number of millions of floating point operations per second for a single NitAPP processing element
(89) For the WP/BP/CoM algorithm presented herein, the number of NitAPP operations is 1012: 9 instructions for BP/WP start, 256 instructions (8 instructions times 32 loops) for the WP/BP loop, 4 instructions for WP/BP end, 736 instructions (23 instructions times 32 loops) for the CoM loop, and 7 for CoM end. Utilizing a NitAPP instruction clock of 10 MHz results in a MFLOP.sub.NitAPP equal to 10. Eq. 18 results in an on-FPA computation time of 101.2 μs. For a camera system with M equal to 4096 pixels, N equal to 4096 pixels, and a frame processor that includes 1024 CPU/GPU cores with each core running at 50 MFLOPs, the digital sub-frame processing duration is 163.84 milliseconds based on a per-pixel algorithm of 500 instructions. When utilizing NitAPP for on-FPA processing, the digital back end has a reduced processing duration because fewer instructions are required per pixel. In embodiments, if the digital processing back end requires 10 instructions per pixel to perform composite image creation, the CPU/GPU processing duration is reduced to 3.28 milliseconds.
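The bookkeeping in this paragraph can be checked with a short sketch. The counts and clock rates are the ones stated in the text (10 MHz NitAPP instruction clock, 1024 cores at 50 MFLOPs); the function name is illustrative.

```python
K = 32                                  # sub-frames per composite image
PIXELS = 4096 * 4096                    # M x N focal plane array

# Eq. 18: NitAPP on-FPA computation time.
# BP/WP start + WP/BP loop + WP/BP end + CoM loop + CoM end
op_num = 9 + 8 * K + 4 + 23 * K + 7
t_nitapp_us = op_num / 10               # 10 MFLOPs per NitAPP element -> microseconds

def digital_ms(instr_per_pixel, cores=1024, mflops_per_core=50):
    """Digital back-end duration in ms for a per-pixel instruction count."""
    return PIXELS * instr_per_pixel / (cores * mflops_per_core * 1e6) * 1e3

full_digital = digital_ms(500)          # ~163.84 ms: all processing done digitally
nitapp_backend = digital_ms(10)         # ~3.28 ms: NitAPP handles the per-pixel work
```

The op count reproduces 1012 instructions and 101.2 μs, and the two digital durations match the 163.84 ms and 3.28 ms figures above.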
(90) The elapsed time of 17.6 milliseconds for NitAPP sub-frame processing results in an overall performance specification for a 32 sub-frame processing, composite image-generating 3D camera of approximately 56.7 composite images per second, which more than meets the requirements of applications that call for 30 images per second. In embodiments, camera architecture may be modified for pipelined processing whereby sequential stages in a process are overlapped in time by utilizing extra storage and/or additional electronics, typically at the expense of higher component costs and higher electrical current requirements. In embodiments, total elapsed time for pipelined operation for NitAPP processing may be reduced to 14.06 milliseconds, which is the elapsed time of the stage with the longest duration. The elapsed time of 14.06 milliseconds results in an overall performance specification for a 32 sub-frame processing, composite image-generating 3D camera of approximately 71.1 composite images per second, which is sufficient to meet the throughput rate for 60 image-per-second imaging applications.
(91) Digital CPUs and GPUs typically attempt to extract top performance out of a given technology, often at the expense of power consumption. The use of NitAPP processing for composite image creation offers the benefit of lower overall device power consumption because most of the processing is shifted from power-hungry digital processing to very-low-power analog computing. Utilizing a 10 nm feature size silicon fabrication process, the power consumption for various elements can be expressed as:
(92) TABLE-US-00009 (Function | Power | Units)
Photodetector accumulation | 80 pW | per accumulation per pixel
NitAPP instruction | 6 pW | per NitAPP element
Digital Memory Read/Write | 0.45 nW | per byte
GPU instruction | 0.08 nW | per instruction
FPA transfer and A/D Conversion | 0.85 nW | per byte
(93) In embodiments, a power consumption comparison for digital sub-frame processing and for NitAPP sub-frame processing for a sixteen megapixel, 32 sub-frame composite image utilizing a 10 nm process is:
(94) TABLE-US-00010 (Function | Digital #/img/pixel | Digital mW/image | NitAPP #/img/pixel | NitAPP mW/image)
FPA accumulations | 1 | 1.3 | 1 | 1.3
NitAPP instructions | 0 | 0 | 1012 | 101.9
FPA Xfer and A/D | 48 | 684.5 | 4.5 | 64.2
Digital Read/Write | 96 | 724.8 | 9 | 67.9
GPU instructions | 500 | 671.1 | 10 | 13.4
Total Power (mW) | | 2081.7 | | 248.8
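The per-image power entries above follow directly from the per-operation figures in the preceding table. The sketch below reproduces them; the helper name is illustrative, and the per-pixel counts (48 bytes transferred digitally = 32 sub-frames × 1.5 bytes, 4.5 bytes for NitAPP = 3 results × 1.5 bytes) are taken from the table.

```python
PIXELS = 16_777_216
PW, NW = 1e-12, 1e-9  # picowatts, nanowatts

def mw_per_image(per_pixel_count, unit_power_w):
    """Total power in mW for a per-pixel operation count across the array."""
    return per_pixel_count * unit_power_w * PIXELS * 1e3

nitapp_instr = mw_per_image(1012, 6 * PW)      # ~101.9 mW: 1012 NitAPP instructions
fpa_xfer_dig = mw_per_image(48, 0.85 * NW)     # ~684.5 mW: 48 B/pixel transferred
fpa_xfer_nit = mw_per_image(4.5, 0.85 * NW)    # ~64.2 mW: 4.5 B/pixel transferred
mem_rw_dig = mw_per_image(96, 0.45 * NW)       # ~724.8 mW: digital read/write
gpu_digital = mw_per_image(500, 0.08 * NW)     # ~671.1 mW: 500 GPU instr/pixel
gpu_nitapp = mw_per_image(10, 0.08 * NW)       # ~13.4 mW: 10 GPU instr/pixel
```

Each computed value lands within rounding of the table's column entries, confirming the table is a straight product of counts and per-operation power.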
(96) TABLE-US-00011 TABLE 1 (each cell lists I.sub.bp-le, I.sub.bp-te, I.sub.wp-le, I.sub.wp-te, SF.sub.le-mid, SF.sub.te-mid)
Row n+1:
(m−1, n+1): 80, 68, 256, 224, 12.5, 24.5
(m, n+1): 80, 68, 256, 224, 12.5, 24.5
(m+1, n+1): 80, 74, 256, 240, 12.5, 24.5
Row n:
(m−1, n): 40, 34, 128, 112, 6.5, 18.3
(m, n): 40, 34, 128, 112, 6.5, 18.3
(m+1, n): 80, 68, 256, 224, 12.5, 24.5
Row n−1:
(m−1, n−1): 40, 40, 120, 120, 6.5, 18.3
(m, n−1): 40, 34, 128, 112, 6.5, 18.3
(m+1, n−1): 80, 68, 256, 224, 12.5, 24.5
(97) Based on trapezoidal analysis, a slope is computed for each pixel for the white point portion of the trapezoid according to:
Slope.sub.wp(m,n)=[I.sub.te-wp(m,n)−I.sub.le-wp(m,n)]/ΔSF.sub.trapezoid-top Eq. 19
(98) Where:
I.sub.te-wp(m,n) is the trailing edge white point intensity value for pixel (m,n)
I.sub.le-wp(m,n) is the leading edge white point intensity value for pixel (m,n)
ΔSF.sub.trapezoid-top is the width of the top of a trapezoid in # of sub-frames
(99) White point intensity value analysis for a 3×3 pixel grouping yields the white point slope values shown in Table 2 below, along with computed distances for each pixel in accordance with Eq. 4:
(100) TABLE-US-00012 TABLE 2 (each cell lists Slope.sub.wp and Distance)
Row n+1: (m−1): −8, 22.18 m | (m): −8, 22.18 m | (m+1): −4, 22.18 m
Row n: (m−1): −4, 14.87 m | (m): −4, 14.87 m | (m+1): −8, 22.18 m
Row n−1: (m−1): 0, 14.87 m | (m): −4, 14.87 m | (m+1): −8, 22.18 m
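A short sketch shows how the Table 2 slopes follow from the Table 1 white point values via Eq. 19. The trapezoid-top width of 4 sub-frames is an assumption (not stated in the text), chosen because it reproduces the tabulated slopes.

```python
def slope_wp(i_te_wp, i_le_wp, dsf_trapezoid_top):
    """White point slope for one pixel (Eq. 19)."""
    return (i_te_wp - i_le_wp) / dsf_trapezoid_top

# Pixel (m, n+1): leading edge 256, trailing edge 224 -> slope -8
s1 = slope_wp(224, 256, 4)
# Pixel (m+1, n+1): leading edge 256, trailing edge 240 -> slope -4
s2 = slope_wp(240, 256, 4)
# Pixel (m-1, n-1): leading edge 120, trailing edge 120 -> slope 0 (stationary)
s3 = slope_wp(120, 120, 4)
```

A zero slope indicates a pixel whose white point intensity did not change across the trapezoid top, which is what makes pixel (m−1, n−1) the anchor of the motion triplets discussed next.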
(101) In embodiments, pixels (m−1, n−1), (m, n−1), and (m+1, n−1) form an m-motion pixel triplet whereby motion is detected along the m-axis due to the zero slope for pixel (m−1, n−1) and non-zero slopes for pixels (m, n−1) and (m+1, n−1) whereby the signs of the non-zero slopes are the same. The m-motion pixel triplet is the result of an object of high intensity migrating from the field of view (FOV) of pixel (m+1, n−1) into the FOV of pixel (m, n−1), or the m-motion pixel triplet is the result of an object of low intensity migrating from the field of view (FOV) of pixel (m, n−1) into the FOV of pixel (m+1, n−1). In embodiments, pixels (m−1, n+1), (m−1, n), and (m−1, n−1) form an n-motion pixel triplet whereby motion is detected along the n-axis due to the zero slope for pixel (m−1, n−1) and non-zero slopes for pixels (m−1, n) and (m−1, n+1) whereby the signs of the non-zero slopes are the same. The n-motion pixel triplet is the result of an object of high intensity migrating from the FOV of pixel (m−1, n+1) into the FOV of pixel (m−1, n), or the n-motion pixel triplet is the result of an object of low intensity migrating from the field of view (FOV) of pixel (m−1, n) into the FOV of pixel (m−1, n+1). In embodiments, the amplitude of the m-motion or the n-motion is computed by determining the sub-frame number at which the extrapolated high-intensity white point trapezoid slope crosses over the trailing edge black point intensity value for the other non-zero slope pixel in the m-motion or n-motion pixel triplet according to Eq. 20 below:
ΔSF.sub.motion=[(I.sub.te-bp(m,n)−I.sub.le-wp(m,n))*ΔSF.sub.trapezoid-top]/(I.sub.te-wp(m,n)−I.sub.le-wp(m,n)) Eq. 20
(102) Where:
I.sub.te-wp(m,n) is the trailing edge white point intensity value for the high-intensity pixel of a non-zero-sloped pixel triplet
I.sub.le-wp(m,n) is the leading edge white point intensity value for the high-intensity pixel of a non-zero-sloped pixel triplet
I.sub.te-bp(m,n) is the trailing edge black point intensity value for the low-intensity pixel of a non-zero-sloped pixel triplet
ΔSF.sub.trapezoid-top is the width of a trapezoid top, expressed in # of sub-frames, as determined from the derived parameters of a trapezoid descriptor
(103) In embodiments, m-axis or n-axis motion is expressed as the number of sub-frame periods required for the intensity value of a pixel to completely replace the intensity value of a neighboring pixel that shares a white point slope sign within a pixel triplet. In embodiments, the amplitude of m-axis or n-axis movement is converted to a length by determining the distance of the in-motion object from the camera and utilizing the angular offset between FOVs of neighboring pixels and is computed by:
Motion.sub.m-axis(m,n)=d(m,n)*sin Δφ(m,n) Eq. 21
(104) Where:
d(m,n) is the distance to the nearest pixel of an m-axis triplet
Δφ(m,n) is the angular offset between the centers of m-axis FOVs
Motion.sub.n-axis(m,n)=d(m,n)*sin Δθ(m,n) Eq. 22
(105) Where:
d(m,n) is the distance to the nearest pixel of an n-axis triplet
Δθ(m,n) is the angular offset between the centers of n-axis FOVs
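Eqs. 21 and 22 convert an angular step between neighboring pixel FOVs into a lateral motion length at the object's distance. A minimal sketch, assuming a hypothetical 0.5 mrad pixel-to-pixel angular offset (not a value from the text):

```python
import math

def motion_length(distance_m, angular_offset_rad):
    """Lateral (m-axis or n-axis) length of one pixel-to-pixel FOV step,
    per Eqs. 21-22: d * sin(delta-angle)."""
    return distance_m * math.sin(angular_offset_rad)

# At the 14.87 m distance from Table 2, a 0.5 mrad offset corresponds to
# roughly 7.4 mm of lateral motion per pixel FOV step.
step = motion_length(14.87, 0.0005)
```

For small angular offsets the sine is nearly linear, so the motion length scales almost proportionally with both distance and offset.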
(106) In embodiments, m-axis and n-axis motion is determined according to the identification of m-axis pixel triplets and n-axis pixel triplets. The amplitude of m-axis and n-axis motion is determined for same-signed, non-zero-sloped pixel pairs within pixel triplets. The determination of sign (indicating direction of motion) of the m-axis or n-axis motion on a pixel basis depends on a distance difference between same-signed, non-zero-sloped pixels. In embodiments, the direction of m-axis or n-axis movement is selected according to determining that the pixel with the shortest distance value is a pixel located on the in-motion object in a scene. Therefore, the direction of m-axis or n-axis movement will be from the pixel with the smaller distance parameter to the pixel with the larger distance parameter.
(107) In embodiments, for m-axis and n-axis motion whereby the same-slope pixel values are at the same distance from the sensor, the pixels likely represent differing intensity values from the same in-motion object. Therefore, the amplitude of the motion is determinable from pixel triplet processing, but the direction of the movement is determined from triplet processing for an in-motion triplet that is nearby in space whereby the distances of same-slope pixels are different.
(108) In embodiments, motion in the d axis is determined by computing the width of a trapezoid as determined by the distance (in sub-frames) between a leading edge midpoint and a trailing edge midpoint and comparing it to the width of an ideal trapezoid for a non-moving object. Pixels associated with objects moving toward a sub-frame processing, composite image camera will exhibit trapezoid widths that are less than the width of an ideal trapezoid, and pixels associated with objects moving away from a sub-frame processing, composite image camera will exhibit trapezoid widths that are greater than the width of an ideal trapezoid. D-axis motion is computed according to Eq. 23 below:
Motion.sub.d-axis(m,n)=({[SF.sub.te-mid(m,n)−SF.sub.le-mid(m,n)]−SF.sub.mid-height-width}*C*P.sub.emitter)/2 Eq. 23
(109) Where:
SF.sub.te-mid(m,n) is the sub-frame for the trailing edge midpoint for pixel (m,n)
SF.sub.le-mid(m,n) is the sub-frame for the leading edge midpoint for pixel (m,n)
SF.sub.mid-height-width is the width, in number of sub-frames, at the mid-height of an ideal trapezoid
C is a constant for the speed of light in a medium
P.sub.emitter is the emitter clock period
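Eq. 23 can be sketched with the Table 1 midpoints for pixel (m, n): leading edge midpoint at sub-frame 6.5 and trailing edge midpoint at 18.3. The ideal mid-height width of 12 sub-frames and the 50 ns emitter clock period are assumptions for illustration, not values from the text.

```python
C = 0.299792458  # speed of light in m/ns (vacuum value, used as the constant C)

def motion_d_axis(sf_te_mid, sf_le_mid, sf_mid_height_width, p_emitter_ns):
    """Eq. 23: d-axis motion in meters. Negative values indicate a trapezoid
    narrower than the ideal, i.e. motion toward the camera."""
    return ((sf_te_mid - sf_le_mid) - sf_mid_height_width) * C * p_emitter_ns / 2

# Measured width 18.3 - 6.5 = 11.8 sub-frames vs. an assumed ideal of 12:
# 0.2 sub-frames narrower, so the object moved toward the camera.
m = motion_d_axis(18.3, 6.5, 12.0, 50.0)
```

With these assumed parameters the 0.2 sub-frame narrowing maps to roughly 1.5 m of motion toward the sensor over the composite image.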
(110) In embodiments, d-axis motion is determined on a pixel basis and is not dependent on neighbor-in-space intensity values or neighbor-in-space distance values. Said another way, d-axis motion is detectable and measurable for each pixel in a sub-frame processing, composite imaging system.
(111) In embodiments, sub-frame processing in a composite imaging system interprets sub-frame intensity values to determine, within a single composite image, pixel parameters like intensity, radiance, luminance, distance, m-axis motion (horizontal motion relative to the sensor), n-axis motion (vertical motion relative to the sensor) and d-axis motion (relative motion toward or away from the sensor). In embodiments, sensor pixel parameters are determined from sub-frame intensity waveform parameter analysis according to Table 3 below:
(112) TABLE-US-00013 TABLE 3 (Waveform Type | Min. Sub-frames | Waveform Parameters | Pixel Properties)
WP/BP | 2 | I.sub.bp(m,n), I.sub.wp(m,n) | Luminance, Radiance
WP/BP | 3 | I.sub.le-wp(m,n), I.sub.te-wp(m,n), I.sub.bp(m,n) | Luminance, Radiance, m-axis motion, n-axis motion
Trapezoid | 5 | I.sub.wp(m,n), I.sub.bp(m,n), CoM(m,n) | Luminance, Radiance, Distance
Trapezoid | 6 | I.sub.le-wp(m,n), I.sub.te-wp(m,n), I.sub.bp(m,n), CoM(m,n) | Luminance, Radiance, Distance, m-axis motion, n-axis motion
Trapezoid | 8 | I.sub.le-wp(m,n), I.sub.te-wp(m,n), I.sub.bp(m,n), SF.sub.le-mid(m,n), SF.sub.te-mid(m,n) | Luminance, Radiance, Distance, m-axis motion, n-axis motion, d-axis motion
Non-overlapping Range Gating | 3 | I.sub.0(m,n), I.sub.1(m,n), I.sub.G−1(m,n) | Luminance, Radiance, Distance
eXtended Dynamic Range | 3 | I.sub.0(m,n), I.sub.1(m,n), I.sub.2(m,n) | XDR Intensity, Fill Rate
(113) Trapezoidal sub-frame collection and subsequent trapezoid parameter determination place high demands on digital-only processing systems. In embodiments, NitAPP architecture displays significant throughput advantages versus digital sub-frame-processing systems. As an example, a throughput comparison is presented for NitAPP-processed and digitally-processed images for a 16 megapixel composite imaging system with K=32 sub-frames per composite image according to these parameters:
(114) TABLE-US-00014
Focal Plane Array size | 16,777,216 pixels
Bits per pixel for A/D Conversion | 12 bits/pixel
Bytes per pixel | 1.5 Bytes/pixel
Focal Plane Array bus transfer rate | 5 GB/sec
Number of sub-frames per composite image | 32 sub-frames
(115) In embodiments, the duration comparisons are made in Table 4 below for comparative algorithms to determine luminance, radiance, distance, m-axis motion, n-axis motion, and d-axis motion, all within a single composite image, for each of the 16 megapixels.
(116) TABLE-US-00015 TABLE 4 (Function | Elapsed Time - Digital | Elapsed Time - NitAPP/Digital)
Sub-frame Capture (32 sub-frames) | 0.16 ms | 0.16 ms
Compute NitAPP I.sub.wp-le(m,n) | — | 0.02 ms
Compute NitAPP I.sub.wp-te(m,n) | — | 0.02 ms
Compute NitAPP I.sub.bp(m,n) | — | 0.02 ms
Determine NitAPP SF.sub.le-mid(m,n) | — | 0.06 ms
Determine NitAPP SF.sub.te-mid(m,n) | — | 0.06 ms
Transfer NitAPP sub-frames from FPA (5 sub-frames) | — | 23.43 ms
Transfer all sub-frames from FPA (32 sub-frames) | 150 ms | —
Compute Digital I.sub.wp-le(m,n) | 32 ms | —
Compute Digital I.sub.wp-te(m,n) | 32 ms | —
Compute Digital I.sub.bp(m,n) | 32 ms | —
Determine Digital SF.sub.le-mid(m,n) | 72 ms | —
Determine Digital SF.sub.te-mid(m,n) | 72 ms | —
Determine Slope.sub.wp(m,n) | 4 ms | 4 ms
Determine Distance(m,n) | 4 ms | 4 ms
Compute Luminance | 4 ms | 4 ms
Compute Radiance | 4 ms | 4 ms
Determine m-axis motion | 4 ms | 4 ms
Determine n-axis motion | 4 ms | 4 ms
Determine d-axis motion | 4 ms | 4 ms
Total Elapsed Time | 428.16 ms | 51.77 ms
Composite Images per Second | 2.34 | 19.32
(117) Digital CPUs and GPUs typically attempt to extract top performance out of a given technology, often at the expense of power consumption. The use of NitAPP processing for composite image creation offers the benefit of lower overall device power consumption because most of the processing is shifted from power-hungry digital processing to very-low-power analog computing. Utilizing a 10 nm feature size silicon fabrication process, the power consumption for various elements can be expressed as:
(118) TABLE-US-00016 (Function | Power | Units)
Photodetector accumulation | 80 pW | per accumulation per pixel
NitAPP instruction | 6 pW | per NitAPP element
Digital Memory Read/Write | 0.45 nW | per byte
GPU instruction | 0.08 nW | per instruction
FPA transfer and A/D Conversion | 0.85 nW | per byte
(119) In embodiments, a power consumption comparison for digital sub-frame processing and for NitAPP sub-frame processing for a sixteen megapixel, 32 sub-frame composite image utilizing a 10 nm process is shown in Table 5 below.
(120) TABLE-US-00017 TABLE 5 (Function | Power Usage - Digital | Power Usage - NitAPP/Digital)
Sub-frame Capture (32 sub-frames) | 1.3 mW | 1.3 mW
Compute NitAPP I.sub.wp-le(m,n) | — | 20.37 mW
Compute NitAPP I.sub.wp-te(m,n) | — | 20.37 mW
Compute NitAPP I.sub.bp(m,n) | — | 20.37 mW
Determine NitAPP SF.sub.le-mid(m,n) | — | 62.84 mW
Determine NitAPP SF.sub.te-mid(m,n) | — | 62.84 mW
Transfer NitAPP sub-frames from FPA (5 sub-frames) | — | 45.86 mW
Transfer all sub-frames from FPA (32 sub-frames) | 684.5 mW | —
Compute Digital I.sub.wp-le(m,n) | 279.2 mW | —
Compute Digital I.sub.wp-te(m,n) | 279.2 mW | —
Compute Digital I.sub.bp(m,n) | 279.2 mW | —
Determine Digital SF.sub.le-mid(m,n) | 837.5 mW | —
Determine Digital SF.sub.te-mid(m,n) | 837.5 mW | —
Determine Slope.sub.wp(m,n) | 34.9 mW | 34.9 mW
Determine Distance(m,n) | 34.9 mW | 34.9 mW
Compute Luminance | 34.9 mW | 34.9 mW
Compute Radiance | 34.9 mW | 34.9 mW
Determine m-axis motion | 34.9 mW | 34.9 mW
Determine n-axis motion | 34.9 mW | 34.9 mW
Determine d-axis motion | 34.9 mW | 34.9 mW
Total Power per Composite Image | 3442.7 mW | 470.25 mW
(121) Table 3 identifies a WP/BP waveform with a minimum of three sub-frames. In embodiments, a minimum of three sub-frames enables the determination of m-axis and n-axis motion within a single composite image. As an example of an embodiment, consider a WP/BP descriptor of 3/50, signifying a white point sub-frame, followed by a black point sub-frame, followed by a second white point sub-frame, with an elapsed time of 50 μSec from the start of one sub-frame to the start of the subsequent sub-frame. In embodiments, Eq. 19 is modified by replacing ΔSF.sub.trapezoid-top with ΔSF.sub.wp, and a white point slope is computed for each pixel according to:
Slope.sub.wp(m,n)=[I.sub.te-wp(m,n)−I.sub.le-wp(m,n)]/ΔSF.sub.wp Eq. 24
(122) Where:
I.sub.te-wp(m,n) is the trailing edge white point intensity value for pixel (m,n)
I.sub.le-wp(m,n) is the leading edge white point intensity value for pixel (m,n)
ΔSF.sub.wp is the # of sub-frames between the leading edge and trailing edge white point sub-frames
(123) In embodiments, the amplitude of the m-motion or the n-motion is computed by determining the sub-frame number at which the extrapolated high-intensity white point trapezoid slope crosses over the trailing edge black point intensity value for the other non-zero slope pixel in the m-motion or n-motion pixel triplet. Eq. 20 is modified by replacing ΔSF.sub.trapezoid-top with ΔSF.sub.wp, and the amplitude of m-axis or n-axis motion, expressed in terms of the # of sub-frames, is computed for each pixel according to Eq. 25 below:
ΔSF.sub.motion=[(I.sub.te-bp(m,n)−I.sub.le-wp(m,n))*ΔSF.sub.wp]/(I.sub.te-wp(m,n)−I.sub.le-wp(m,n)) Eq. 25
(124) Where:
I.sub.te-wp(m,n) is the trailing edge white point intensity value for the high-intensity pixel of a non-zero-sloped pixel triplet
I.sub.le-wp(m,n) is the leading edge white point intensity value for the high-intensity pixel of a non-zero-sloped pixel triplet
I.sub.te-bp(m,n) is the trailing edge black point intensity value for the low-intensity pixel of a non-zero-sloped pixel triplet
ΔSF.sub.wp is the # of sub-frames between the leading edge and trailing edge white point sub-frames
(125) Table 3 identifies a non-overlapping range gating waveform with a minimum of three sub-frames. In embodiments, a minimum of three sub-frames enables the determination of radiance, luminance, and distance within a single composite image.
(126) The Sub-frame 0 graph 354 illustrates the amount of emitter and detector overlap for various distances throughout the device range and signifies that: 1) emitter and detector experience 100% overlap for distances between 0 and 15 meters 360, 2) emitter and detector overlap decreases linearly from 100% to 0% for distances between 15 and 30 meters 362, 3) emitter and detector overlap is 0% for distances between 30 and 45 meters 364, and 4) emitter and detector overlap is 0% for distances between 45 and 60 meters 366. The Sub-frame 1 graph 356 illustrates the amount of emitter and detector overlap for various distances throughout the device range and signifies that: 1) emitter and detector overlap increases linearly from 0% to 100% for distances between 0 and 15 meters 360, 2) emitter and detector experience 100% overlap for distances between 15 and 30 meters 362, 3) emitter and detector overlap decreases linearly from 100% to 0% for distances between 30 and 45 meters 364, and 4) emitter and detector overlap is 0% for distances beyond 45 meters 366. The Sub-frame 2 graph 358 illustrates the amount of emitter and detector overlap for various distances throughout the device range and signifies that: 1) emitter and detector overlap is 0% for distances between 0 and 15 meters 360, 2) emitter and detector overlap increases linearly from 0% to 100% for distances between 15 and 30 meters 362, 3) emitter and detector experience 100% overlap for distances between 30 and 45 meters 364, and 4) emitter and detector overlap decreases linearly from 100% to 0% for distances between 45 and 60 meters 366.
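The three overlap profiles above can be captured in one trapezoid-shaped function: each sub-frame's overlap rises over one 15 m band, holds at 100% for one band, and falls over the next band, with the whole shape shifted by one band per sub-frame. This is a sketch of the described behavior, not the patent's circuitry; the function name and band width parameter are illustrative.

```python
def overlap(sub_frame, distance_m, band_m=15.0):
    """Fractional emitter/detector overlap (0..1) for one range gating
    sub-frame, per the three graph descriptions above."""
    u = distance_m / band_m  # distance expressed in band units
    # Rising edge: u - (sub_frame - 1); falling edge: (sub_frame + 2) - u;
    # the plateau clamps at 1.0 and negative values clamp at 0.0.
    return max(0.0, min(u - (sub_frame - 1), sub_frame + 2 - u, 1.0))

# Sub-frame 0 at 10 m: full overlap. Sub-frame 1 at 7.5 m: halfway up the
# rising edge. Sub-frame 2 at 52.5 m: halfway down the falling edge.
```

Note that sub-frame 0 never sees its rising edge (negative distances do not occur), which matches its graph holding at 100% from 0 to 15 meters.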
(127) In embodiments, the determination of distance for each pixel (m,n) for a non-overlapping range gating optical configuration with a descriptor of 3/100/1/2/0 is illustrated in Table 6 below.
(128) TABLE-US-00018 TABLE 6 (Condition Test | Distance Range | White Point | Black Point | Distance)
I.sub.0(m,n) > I.sub.1(m,n)? | 0 m < d(m,n) < 15 m | I.sub.0(m,n) | I.sub.2(m,n) | Eq. 26
I.sub.1(m,n) > I.sub.0(m,n) & I.sub.1(m,n) > I.sub.2(m,n)? | 15 m < d(m,n) < 30 m | I.sub.1(m,n) | Eq. 27 | Eq. 28
I.sub.2(m,n) > I.sub.1(m,n) & I.sub.1(m,n) > I.sub.0(m,n)? | 30 m < d(m,n) < 45 m | I.sub.2(m,n) | I.sub.0(m,n) | Eq. 29
I.sub.2(m,n) > I.sub.1(m,n) & I.sub.1(m,n) = I.sub.0(m,n)? | 45 m < d(m,n) < 60 m | n/a | I.sub.0(m,n) | Eq. 30
I.sub.2(m,n) = I.sub.1(m,n) = I.sub.0(m,n)? | d(m,n) > 60 m | n/a | I.sub.0(m,n) | Eq. 31
(129) In embodiments, when I.sub.0(m,n)>I.sub.1(m,n) the object at pixel (m,n) is in the range 0 m<d(m,n)<15 m and I.sub.1(m,n) determines the actual distance according to:
d(m,n)={[(I.sub.1(m,n)−I.sub.2(m,n))/(I.sub.0(m,n)−I.sub.2(m,n))]*C*P.sub.emitter}/2 Eq. 26
(130) Where:
I.sub.0(m,n) is the sub-frame 0 intensity value and the white point value for pixel (m,n)
I.sub.1(m,n) is the sub-frame 1 intensity value for pixel (m,n)
I.sub.2(m,n) is the sub-frame 2 intensity value and the black point value for pixel (m,n)
C is a constant for the speed of light in a medium
P.sub.emitter is the emitter clock period
(131) In embodiments, when I.sub.1(m,n)>I.sub.0(m,n) & I.sub.1(m,n)>I.sub.2(m,n) the object at pixel (m,n) is in the range 15 m<d(m,n)<30 m and the black point value is determined according to:
BP(m,n)=I.sub.1(m,n)−[(I.sub.1(m,n)−I.sub.0(m,n))+(I.sub.1(m,n)−I.sub.2(m,n))] Eq. 27
(132) Where:
I.sub.0(m,n) is the sub-frame 0 intensity value for pixel (m,n)
I.sub.1(m,n) is the sub-frame 1 intensity value and the white point value for pixel (m,n)
I.sub.2(m,n) is the sub-frame 2 intensity value for pixel (m,n)
(133) In embodiments, when I.sub.1(m,n)>I.sub.0(m,n) & I.sub.1(m,n)>I.sub.2(m,n) the object at pixel (m,n) is in the range 15 m<d(m,n)<30 m and the actual distance is computed according to:
d(m,n)={[1+(I.sub.2(m,n)−BP(m,n))/(I.sub.1(m,n)−BP(m,n))]*C*P.sub.emitter}/2 Eq. 28
(134) Where:
I.sub.1(m,n) is the sub-frame 1 intensity value and the white point value for pixel (m,n)
I.sub.2(m,n) is the sub-frame 2 intensity value for pixel (m,n)
BP(m,n) is the black point value for pixel (m,n) from Eq. 27
C is a constant for the speed of light in a medium
P.sub.emitter is the emitter clock period
(135) In embodiments, when I.sub.2(m,n)>I.sub.1(m,n) & I.sub.1(m,n)>I.sub.0(m,n) the object at pixel (m,n) is in the range 30 m<d(m,n)<45 m and the actual distance is computed according to:
d(m,n)={[2+(I.sub.2(m,n)−I.sub.1(m,n))/(I.sub.2(m,n)−I.sub.0(m,n))]*C*P.sub.emitter}/2 Eq. 29
(136) Where:
I.sub.0(m,n) is the sub-frame 0 intensity value and the black point value for pixel (m,n)
I.sub.1(m,n) is the sub-frame 1 intensity value for pixel (m,n)
I.sub.2(m,n) is the sub-frame 2 intensity value and the white point value for pixel (m,n)
C is a constant for the speed of light in a medium
P.sub.emitter is the emitter clock period
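The Table 6 decision logic with Eqs. 26, 28, and 29 can be combined into one distance classifier. This is a sketch under stated assumptions: the function name is illustrative, a 100 ns emitter clock is assumed (so C*P/2 is the ~15 m band width), and the middle-band black point is recovered as I0 + I2 − I1, the form consistent with Eq. 28 (at mid-band the rising and falling responses are symmetric about the white point).

```python
C_P_HALF = 0.299792458 * 100 / 2  # ~14.99 m band width for a 100 ns clock

def distance(i0, i1, i2):
    """Per-pixel distance in meters from three range gating intensities,
    or None when only a lower bound is determinable (45 m and beyond)."""
    if i0 > i1:                      # 0-15 m band (Eq. 26): i0 = WP, i2 = BP
        return (i1 - i2) / (i0 - i2) * C_P_HALF
    if i1 > i0 and i1 > i2:          # 15-30 m band (Eqs. 27-28): i1 = WP
        bp = i0 + i2 - i1            # black point recovered from the samples
        return (1 + (i2 - bp) / (i1 - bp)) * C_P_HALF
    if i2 > i1 > i0:                 # 30-45 m band (Eq. 29): i2 = WP, i0 = BP
        return (2 + (i2 - i1) / (i2 - i0)) * C_P_HALF
    return None                      # Eqs. 30-31: distance is only bounded

# With black point 0 and white point 100: (100, 50, 0) lands mid-way through
# the first band, (80, 100, 20) lands in the second band, and equal samples
# (50, 50, 50) give no distance.
```

Each branch anchors the fractional band position to a different pair of samples, which is why the white point and black point roles rotate through I0, I1, and I2 across the three bands.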
(137) In embodiments, when I.sub.2(m,n)>I.sub.1(m,n) & I.sub.1(m,n)=I.sub.0(m,n), the black point value is determined as I.sub.0(m,n) and the white point value is undetermined. Without knowledge of the white point, the distance to the object at pixel (m,n) is in the range 45 m<d(m,n)<60 m and is bounded according to:
(4*C*P.sub.emitter)/2>d(m,n)>(3*C*P.sub.emitter)/2 Eq. 30
(138) In embodiments, when I.sub.2(m,n)=I.sub.1(m,n)=I.sub.0(m,n), the black point value is determined as I.sub.0(m,n) and the white point value is undetermined. Without knowledge of the white point, the distance to the object at pixel (m,n) is in the range d(m,n)>60 m and is bounded according to:
d(m,n)>(4*C*P.sub.emitter)/2 Eq. 31
(139) Increasing the number of sub-frames in a non-overlapping range gating configuration increases the number of ranges for which distances are determined. Increasing the period of the emitter clock increases the range of each range gating cycle. In embodiments, the maximum ranges for which pixel distances are determined for varying numbers of range gating cycles at varying emitter clock periods is expressed as:
Range.sub.max=(N.sub.RG*C*P.sub.emitter)/2 Eq. 32
Where:
N.sub.RG is the number of non-overlapping range gating sub-frames
C is a constant for the speed of light in a medium
P.sub.emitter is the emitter clock period
(140) In embodiments, with a speed of light expressed as 0.299792 m/nSec, the maximum ranges for combinations of sub-frame numbers and emitter clock periods are shown in Table 7 below.
(141) TABLE-US-00019
TABLE 7
Number of Range Gate Sub-frames   Emitter Clock Period   Max Range
3                                  50 nSec               22.5 m
4                                  50 nSec               30.0 m
5                                  50 nSec               37.5 m
3                                 100 nSec               45.0 m
4                                 100 nSec               60.0 m
5                                 100 nSec               74.9 m
3                                 200 nSec               89.9 m
4                                 200 nSec              119.9 m
5                                 200 nSec              149.9 m
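Eq. 32 can be checked directly with a short script. Note that per Eq. 32, the 89.9 m, 119.9 m, and 149.9 m entries correspond to a 200 nSec emitter clock period. The function name is illustrative; C is the vacuum speed of light, and a medium-dependent value may differ.

```python
C = 0.299792  # speed of light, m/nSec (vacuum value assumed)

def max_range(n_rg, p_emitter_ns):
    """Eq. 32: maximum range for n_rg non-overlapping range gate
    sub-frames at emitter clock period p_emitter_ns (in nSec)."""
    return (n_rg * C * p_emitter_ns) / 2.0

for n, p in [(3, 50), (4, 50), (5, 50),
             (3, 100), (4, 100), (5, 100),
             (3, 200), (4, 200), (5, 200)]:
    print(f"{n} sub-frames @ {p} nSec -> {max_range(n, p):.1f} m")
```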
(142) High-dynamic-range (HDR) imaging is a technique used to reproduce a greater dynamic range of luminosity than is possible with standard digital imaging techniques, for example in real-world scenes that contain both very bright, direct sunlight and extreme shade. HDR is often achieved by capturing and then combining several different, narrower-range exposures of the same subject matter. Non-HDR cameras take images with a limited exposure range, referred to as low dynamic range (LDR), resulting in the loss of detail in highlights or shadows. HDR images typically require little or no motion by a camera or by objects within a scene. Table 3 identifies an eXtended Dynamic Range (XDR) waveform with a minimum of three sub-frames. In embodiments, sub-frames are collected at three different exposures with photodetectors that exhibit a linear response to an incident number of photons. Intensity levels for the three or more XDR sub-frames are expressed as I.sub.0, I.sub.1 through I.sub.n-1, where the intensity values are the responses to three or more exposure levels, typically measured in microseconds.
(143) In embodiments, the fill rate of an XDR cycle expresses how rapidly a pixel's intensity increases per unit increase in exposure time. For a three sub-frame XDR cycle, the fill rate between sub-frames one and two for each pixel is expressed as:
FillRate.sub.1-2=[I.sub.2(E.sub.2)−I.sub.1(E.sub.1)]/(E.sub.2−E.sub.1) Eq. 33
Where:
I.sub.2 is the intensity for sub-frame 2
E.sub.2 is the exposure time that produced I.sub.2
I.sub.1 is the intensity for sub-frame 1
E.sub.1 is the exposure time that produced I.sub.1
(144) For a three sub-frame XDR cycle, the fill rate between sub-frames zero and one for each pixel is expressed as:
FillRate.sub.0-1=[I.sub.1(E.sub.1)−I.sub.0(E.sub.0)]/(E.sub.1−E.sub.0) Eq. 34
(145) Where:
I.sub.1 is the intensity for sub-frame 1
E.sub.1 is the exposure time that produced I.sub.1
I.sub.0 is the intensity for sub-frame 0
E.sub.0 is the exposure time that produced I.sub.0
(146) The XDR intensity level for each pixel for sub-frames one and two is expressed as:
I.sub.XDR(E.sub.XDR)=FillRate.sub.1-2*(E.sub.XDR−E.sub.2) Eq. 35
(147) Where:
E.sub.XDR is the exposure level for which XDR is computed
E.sub.2 is the sub-frame 2 exposure time
(148) The XDR intensity level for each pixel for sub-frames zero and one is expressed as:
I.sub.XDR(E.sub.XDR)=FillRate.sub.0-1*(E.sub.XDR−E.sub.1) Eq. 36
(149) Where:
E.sub.XDR is the exposure level for which XDR is computed
E.sub.1 is the sub-frame 1 exposure time
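The fill-rate and extrapolation formulas of Eqs. 33 through 36 can be sketched as follows. The function names and all exposure and intensity values are hypothetical, chosen only to exercise the arithmetic; they are not from the specification.

```python
def fill_rate(i_hi, e_hi, i_lo, e_lo):
    """Eqs. 33/34: per-pixel fill rate between two sub-frame exposures."""
    return (i_hi - i_lo) / (e_hi - e_lo)

def xdr_intensity(rate, e_xdr, e_ref):
    """Eqs. 35/36: XDR intensity extrapolated from a reference exposure."""
    return rate * (e_xdr - e_ref)

# Hypothetical linear-response pixel sampled at three exposures (in uSec)
e0, e1, e2 = 10.0, 100.0, 1000.0
i0, i1, i2 = 50.0, 400.0, 3500.0

rate_12 = fill_rate(i2, e2, i1, e1)           # Eq. 33
rate_01 = fill_rate(i1, e1, i0, e0)           # Eq. 34
i_xdr = xdr_intensity(rate_12, 2000.0, e2)    # Eq. 35 at a longer exposure
```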
(150) For purposes of describing the various embodiments, the following terminology and references may be used with respect to reflective articles or materials in accordance with one or more embodiments as described.
(151) “Lighting-invariant imaging” describes a multi-frame, composite imaging system whereby maximum pixel intensity values and minimum pixel intensity values are determined for successive frames that constitute a composite image.
(152) “Black Point” refers to a frame pixel intensity value or a frame of pixels whereby there existed no active light source or a low level of active light projected onto a scene during the photodetector integration time. The term black point is equivalent to the minimum pixel intensity in a Lighting-invariant imaging system.
(153) “White Point” refers to a frame pixel intensity value or a frame of pixels whereby there existed an active light projected onto a scene during photodetector integration time, whereby the intensity of the light or the duration of the on time was greater than the intensity or the duration of the associated black point intensity or duration. The term white point is equivalent to the maximum pixel intensity in a Lighting-invariant imaging system.
(154) “Luminance” describes the amount of radiant flux emitted or reflected by a surface per unit projected area due to one or more ambient light sources, and is expressed in Watts/m.sup.2.
(155) “Radiance” describes the amount of radiant flux emitted or reflected by a surface per unit projected area due to a directed light source, and is expressed in Watts/m.sup.2.
(156) “Spherical Coordinate System” is a three-dimensional coordinate space used for description of locations relative to a known point on a vehicle or an imaging component. Spherical coordinates are specified as (ρ,θ,φ), where ρ specifies distance, θ specifies the vertical angle, and φ specifies the horizontal or azimuth angle.
(157) “Photodetector Accumulation Cycle” refers to accumulation of charge by a photodetector for an accumulation duration followed by the transfer of accumulated photodetector charge to a storage element.
(158) “Multiple Accumulation” refers to a process whereby more than one photodetector accumulation cycle is performed within a photodetector sub-frame event. The amplitude of collected charge at a storage element is the sum of the accumulated photodetector charges that are transferred to the storage element within a multiple accumulation cycle.
(159) “Frame” describes the electrical data produced by an imaging element like a focal plane array whereby optical information is converted to electrical information for a multi-pixel device or system. Frame information is post-processed in an imaging system to convert a single frame to an image. Focal plane arrays typically specify a capture and transfer rate by utilizing a term like frames per second.
(160) “Sub-frame” describes the electrical data produced by an imaging element like a focal plane array whereby optical information is converted to electrical information for a multi-pixel device or system. Sub-frame information is post-processed in an imaging system to convert multiple sub-frames to a composite image or multiple composite images.
(161) A “sub-frame trapezoidal descriptor” defines the electro-optical parameters of a sub-frame composite imaging cycle whereby the timing relationship of an emitter and a detector is different for subsequent sub-frames within an imaging duration, with the descriptor defined by a format:
(162) TABLE-US-00020
<# of sub-frames>/
<emitter clock period (in nSec)>/
<# of emitter clock periods for emitter pulses>/
<# of emitter clock periods for detector integration>/
<# of emitter clock periods between end of integration and start of emitter pulse for sub-frame 0>/
<sub-frame period duration, defined as the elapsed time from the start of a sub-frame to the start of a subsequent sub-frame within an imaging cycle (in μSec)>.
(163) A “sub-frame WP/BP descriptor” defines the electro-optical parameters of a sub-frame composite imaging cycle whereby white point sub-frames and black point sub-frames are produced alternately throughout the imaging window, with the descriptor defined by a format:
(164) TABLE-US-00021
<# of sub-frames>/
<sub-frame period duration, defined as the elapsed time from the start of a sub-frame to the start of a subsequent sub-frame within an imaging cycle (in μSec)>.
(165) “Range Gating” describes an active sensor imaging technique that allows for the imaging of an object within a distance band from a sensor. In range-gated imaging, a pulsed light source is used to illuminate a scene while reflected light is detected by a sensor with a short exposure time or a short integration time referred to as a gate. The gate is delayed so imaging occurs at a particular range from the sensor.
(166) “Non-overlapping range gating” describes the use of multiple range gates in a sub-frame, composite imaging system whereby the maximum distance of a range gate equates to the minimum distance of a subsequent range gate. Non-overlapping range-gating composite imagery requires a minimum of two sub-frames per composite image.
(167) A “sub-frame non-overlapping range gating descriptor” defines the electro-optical parameters of a sub-frame composite imaging cycle whereby the timing relationship of an emitter and a detector is different for subsequent sub-frames within an imaging duration, and whereby there exists no overlap between the range at which the maximum intensity of one sub-frame range overlaps with the maximum intensity of a previous or subsequent sub-frame within a composite image, with the descriptor defined by a format:
(168) TABLE-US-00022
<# of sub-frames>/
<emitter clock period (in nSec)>/
<# of emitter clock periods for emitter pulses>/
<# of emitter clock periods for detector integration>/
<# of emitter clock periods between start of integration and start of emitter pulse for sub-frame 0>.
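A descriptor of the form above can be unpacked into its fields as sketched below. The field names, the class, and the concrete example string "3/100/1/1/0" are illustrative assumptions; the specification defines the slash-separated field order, not any particular parsing code.

```python
from typing import NamedTuple

class RangeGateDescriptor(NamedTuple):
    num_sub_frames: int
    emitter_clock_period_ns: int
    emitter_pulse_periods: int
    integration_periods: int
    offset_periods: int  # clock periods between start of integration and emitter pulse, sub-frame 0

def parse_descriptor(text: str) -> RangeGateDescriptor:
    """Unpack a slash-separated range gating descriptor into its five fields."""
    return RangeGateDescriptor(*(int(f) for f in text.split("/")))

d = parse_descriptor("3/100/1/1/0")  # hypothetical 3 sub-frame cycle
```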
(169) Persons of ordinary skill in the relevant arts will recognize that embodiments may comprise fewer features than illustrated in any individual embodiment described above. The embodiments described herein are not meant to be an exhaustive presentation of the ways in which the various features of the embodiments may be combined. Accordingly, the embodiments are not mutually exclusive combinations of features; rather, embodiments can comprise a combination of different individual features selected from different individual embodiments, as understood by persons of ordinary skill in the art. Moreover, elements described with respect to one embodiment can be implemented in other embodiments even when not described in such embodiments unless otherwise noted. Although a dependent claim may refer in the claims to a specific combination with one or more other claims, other embodiments can also include a combination of the dependent claim with the subject matter of each other dependent claim or a combination of one or more features with other dependent or independent claims. Such combinations are proposed herein unless it is stated that a specific combination is not intended. Furthermore, it is intended also to include features of a claim in any other independent claim even if this claim is not directly made dependent to the independent claim.
(170) Any incorporation by reference of documents above is limited such that no subject matter is incorporated that is contrary to the explicit disclosure herein. Any incorporation by reference of documents above is further limited such that no claims included in the documents are incorporated by reference herein. Any incorporation by reference of documents above is yet further limited such that any definitions provided in the documents are not incorporated by reference herein unless expressly included herein.
(171) For purposes of interpreting the claims, it is expressly intended that the provisions of Section 112, sixth paragraph of 35 U.S.C. are not to be invoked unless the specific terms “means for” or “step for” are recited in a claim.