Intelligent control module for utilizing exterior lighting in an active imaging system
11477363 · 2022-10-18
CPC classification: H04N23/671 (Electricity); G01S17/894 (Physics); H04N13/271 (Electricity)
International classification: H04N13/271 (Electricity)
Abstract
Exterior lighting on vehicles and buildings is typically used to illuminate scenes for better vision. The same exterior lighting can be used as part of an active sensor at discrete times during the sensor's active imaging cycles. In embodiments, an intelligent control module enables emitter output in accordance with the imaging system during active imaging cycles and enables emitter output in accordance with the non-imaging lighting control unit during non-imaging cycles. Embodiments of intelligent control modules can be used in security applications, in Automatic Driving Alert Systems and Autonomous Control Systems for commercial and passenger vehicles, and in low-altitude aircraft applications.
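The gating behavior described above can be summarized with a short illustrative sketch. The Python below is not part of the disclosed embodiments; the signal names (camera_active, emitter_drive_pulse, non_camera_light_on) are assumptions chosen to mirror the camera control signal, emitter drive pulse signal, and non-camera light control signal discussed in the claims.

```python
def emitter_enable(camera_active: bool,
                   emitter_drive_pulse: bool,
                   non_camera_light_on: bool) -> bool:
    """Return True when the shared emitter (e.g., a headlamp) should be driven.

    camera_active       -- camera control signal: True during an active imaging cycle
    emitter_drive_pulse -- per-pulse drive signal from the detector control circuitry
    non_camera_light_on -- conventional lighting control signal (e.g., body controller)
    """
    if camera_active:
        # During an active imaging cycle the camera owns the emitter:
        # light is emitted only on the camera's drive pulses.
        return emitter_drive_pulse
    # Outside imaging cycles the emitter follows the ordinary lighting control.
    return non_camera_light_on


# Headlamps stay on between imaging cycles and pulse with the camera during them.
assert emitter_enable(camera_active=False, emitter_drive_pulse=False, non_camera_light_on=True)
assert emitter_enable(camera_active=True, emitter_drive_pulse=True, non_camera_light_on=False)
assert not emitter_enable(camera_active=True, emitter_drive_pulse=False, non_camera_light_on=True)
```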
Claims
1. An active camera system configured to generate an image of a scene, comprising: at least one emitter configured to emit light toward the scene when a non-camera light control signal is active; an array of detectors configured to receive light for a field of view that includes at least a portion of the scene; detector control circuitry operably coupled to the array of detectors and configured to generate: a camera control signal that specifies the times at which the camera is in an active imaging cycle; and an emitter drive pulse signal that specifies the times during the active imaging cycle at which the at least one emitter is activated; and emitter control circuitry operably coupled to the at least one emitter and configured to: emit light when the camera control signal and the emitter drive pulse signal are both in the active state; and emit light when the camera control signal is in the inactive state and the non-camera light control signal is in the active state.
2. The active camera system of claim 1 wherein the at least one emitter comprises at least one light-emitting diode.
3. The active camera system of claim 1 wherein the at least one emitter comprises a plurality of light-emitting diodes having multiple color components.
4. The active camera system of claim 1 wherein the at least one emitter comprises at least one vehicle headlamp.
5. The active camera system of claim 1 wherein a defined wavelength range of the at least one emitter is selected from the group consisting of: 100 nanometers (nm) to 400 nm; 400 nm to 700 nm; 700 nm to 1400 nm; 1400 nm to 8000 nm; 8 micrometers (μm) to 15 μm; 15 μm to 1000 μm; and 0.1 mm to 1 mm.
6. The active camera system of claim 1 wherein the non-camera light control signal is generated by an electronic control module on board a vehicle.
7. A lighting control module for use with an active camera system configured to generate an image of a scene, comprising: at least one emitter configured to emit light toward the scene when a non-camera light control signal is active; and emitter control circuitry operably coupled to the at least one emitter and configured to: emit light when a camera control signal for the active camera system and an emitter drive pulse signal for the at least one emitter are both in the active state; and emit light when the camera control signal is in the inactive state and the non-camera light control signal is in the active state.
8. The lighting control module of claim 7 wherein the at least one emitter comprises at least one light-emitting diode.
9. The lighting control module of claim 7 wherein the at least one emitter comprises a plurality of light-emitting diodes having multiple color components.
10. The lighting control module of claim 7 wherein the at least one emitter comprises at least one vehicle headlamp.
11. The lighting control module of claim 7 wherein a defined wavelength range of the at least one emitter is selected from the group consisting of: 100 nanometers (nm) to 400 nm; 400 nm to 700 nm; 700 nm to 1400 nm; 1400 nm to 8000 nm; 8 micrometers (μm) to 15 μm; 15 μm to 1000 μm; and 0.1 mm to 1 mm.
12. The lighting control module of claim 7 wherein the non-camera light control signal is generated by an electronic control module on board a vehicle.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION OF THE DRAWINGS
(26) For a detector array 50 with an in-focus lens 52, the individual fields of view 62 corresponding to each detector 58 should perfectly align with the fields of view for neighboring detectors. In practice, a lens 52 will almost never be perfectly in focus. Thus, the fields of view 62 of the detectors 58 in a lensed system will typically overlap, though the field of view of each detector 58 is different from that of any other detector 58 in the detector array 50. Detector arrays 50 may not have optimal density in their configuration due to semiconductor layout limitations, substrate heat considerations, electrical crosstalk avoidance, or other layout, manufacturing, or yield constraints. As such, sparse detector arrays 50 may experience a loss in photon detection efficiency within the device field of view 56 because reflected photons strike the unutilized spaces between successive detector elements 58.
(27) For non-lensed systems the field of view 62 of each detector 58 can be determined by a diffraction grating, an interferometer, a waveguide, a 2D mask, a 3D mask, or a variety of other aperture configurations designed to allow light within a specific field of view. These individual detector apertures will typically have overlapping fields of view 62 within the device field of view 56.
(28) An element of various embodiments is the determination of an angle 60 for each detector 58.
(29) Variations will occur in the fabrication of detector arrays 50 used in 4D cameras. In single-lens 52 detector array devices like that shown in
(30) Due to the importance of accurate determination of the optical path, in situ calibration may be desirable for devices according to various embodiments. As an example, a 4D camera device according to an embodiment may be used as a sensor in an autonomous vehicle. In order to protect the device it may be mounted inside a passenger vehicle affixed to the windshield behind the rear-view mirror. Since the device is facing in front of a vehicle, emitted light and reflected light will pass through the windshield on its way to and from external objects. Both components of light will undergo distortion when passing through the windshield due to reflection, refraction, and attenuation. In situ calibration for this autonomous vehicle 4D camera may include the device emitting pre-determined calibration patterns and measuring the intensity, location, and angle of the reflected signals. Device characterization parameters would be updated to account for a modified optical path of the incident and/or reflected light based on a calibration.
(32) In embodiments a photodetector array 72 is fabricated as a focal plane array that utilizes electrical connections 78 to interface with other camera 70 circuitry. This electrical interface 78 is typically of lower bandwidth than that required by the high-speed photodetection elements 72. The charge transfer array 80 is a collection of fast analog storage elements that takes information from the photodetector array 72 at a rate sufficient to allow the photodetector elements 72 to rapidly process subsequent emitter/detector events. The size of the charge transfer array 80 is typically M×N×K analog storage elements where M is the number of rows in the detector array 72, N is the number of columns in the detector array 72, and K is the number of emitter/detector cycles that constitute a 4D capture cycle for a single 4D camera 70 event.
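As an illustration of the buffering arrangement described above, the following Python sketch models a digitized 4D frame buffer as an M×N×K array; the dimensions and values are placeholders, not taken from any embodiment.

```python
import numpy as np

# Illustrative dimensions only: a small detector array and capture sequence.
M, N, K = 32, 32, 12   # rows, columns, emitter/detector stages per 4D capture

# The charge transfer array holds one integrated intensity per detector per stage,
# so a digitized 4D frame buffer can be modeled as an M x N x K array.
frame_buffer = np.zeros((M, N, K), dtype=np.uint16)

def store_stage(buffer: np.ndarray, stage: int, integrated_intensities: np.ndarray) -> None:
    """Transfer one stage of M x N integrated intensities into the 4D frame buffer."""
    buffer[:, :, stage] = integrated_intensities

# Fill the buffer with simulated integration results for each of the K stages.
rng = np.random.default_rng(0)
for k in range(K):
    store_stage(frame_buffer, k, rng.integers(0, 4096, size=(M, N)))
```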
(33) Information from a 4D frame buffer 74 is processed separately for color information and distance information. A controller 82 computes distance values from the TOF algorithm for each of the M×N pixels and stores the distance information in the depth map 84 memory. In embodiments a photodetector array 72 is fabricated with a color filter pattern like a Bayer pattern or some other red-green-blue (RGB) configuration. Each color from a detector filter pattern will require a corresponding color plane 86 in device 70 memory.
(34) A controller 82 will assemble separate color planes 86 into an output image format and store a resulting file in device memory 88. An output file may be in a format such as TIFF, JPEG, BMP or any other industry-standard or other proprietary format. Depth map 84 information for an image may be stored in the image file or may be produced in a separate file that is associated with the image file. After completion of the creation of the output file(s) the controller 82 transmits information via the I/O 90 interface to an upstream application or device. A controller 82 configures all of the sequencing control information for the emitters 92, the photodetector 72 integration, the 4D frame buffer 74 transformation to color 86 and depth 84 information, and device 70 communication to other devices. A controller 82 can be a single CPU element or can be a collection of microcontrollers and/or graphics processing units (GPUs) that carry out the various control functions for the device 70.
(36) Once the 4D frame buffer has been filled, a camera controller will create an M×N depth map 106 and will create the color plane(s) 108. In embodiments where a camera utilizes multiple color planes produced by multiple color filters on a detector array, a controller performs demosaicing for each of the sparse color planes to produce M×N color values for each color plane. A controller creates an output file for the present color image and formats 110 the file for transmission to the upstream device or application.
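As a hedged illustration of the demosaicing step, the sketch below fills one sparse color plane by averaging neighboring samples of the same filter color; it is a simple neighborhood-averaging approach chosen for brevity, not the demosaicing method of any particular embodiment.

```python
import numpy as np

def demosaic_plane(sparse_plane: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Fill an M x N color plane from sparse samples by neighborhood averaging.

    sparse_plane -- M x N array with valid intensities only where mask is True
    mask         -- M x N boolean array marking detectors covered by this color filter
    Assumes every 3 x 3 neighborhood contains at least one sample of this color,
    which holds for a Bayer-style tiling.
    """
    M, N = sparse_plane.shape
    full = sparse_plane.astype(float)
    for m in range(M):
        for n in range(N):
            if not mask[m, n]:
                m0, m1 = max(0, m - 1), min(M, m + 2)
                n0, n1 = max(0, n - 1), min(N, n + 2)
                window_mask = mask[m0:m1, n0:n1]
                full[m, n] = sparse_plane[m0:m1, n0:n1][window_mask].mean()
    return full

# Example: red plane of an RGGB Bayer layout (red detectors at even rows and columns).
M, N = 8, 8
red_mask = np.zeros((M, N), dtype=bool)
red_mask[0::2, 0::2] = True
red_full = demosaic_plane(np.where(red_mask, 100.0, 0.0), red_mask)
```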
(37) The frame rate of a 4D camera will typically be a function of the longest action in the processing sequence. For the
(39) During a detector 122 integration cycle the intensity of the charge collected at the capacitor 132 is proportional to the number of incident photons 126 present during the gating time of the integrator 130. During photodetector 130 integration the charge transfer switch 136 remains in the open position. Upon completion of an integration cycle the integration switch 128 is opened and collected charge remains at the integrator 130 stage. During the start of the charge transfer cycle charge is migrated from the integration capacitor 132 to the charge transfer stage 0 138 capacitor 140 by closing the charge transfer stage 0 138 gate switch 136. At the exit line from the charge transfer stage 0 138 another gate switch 142 enables the transfer of charge from stage 0 138 to stage 1 144. The input switch 136 and the output switch 142 for stage 0 are not in the “on” or closed position at the same time, thus allowing charge to be transferred to and stored at stage 0 138 prior to being transferred to stage 1 on a subsequent charge transfer cycle. Charge transfer stage K-1 144 represents the last charge transfer stage for K emitter/detector cycles. Charge is transferred from stage K-1 144 to a data bus 146 leading to a 4D frame buffer when the K-1 output switch 148 is closed. At the end of each of K detector integration cycles the grounding switch 149 can be closed to remove any excess charge that may have collected at the photodetector 124.
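The charge transfer chain described above behaves like an analog shift register. The following Python sketch is an idealized, lossless simulation of one detector's chain; the stage count and charge values are placeholders.

```python
from collections import deque

def simulate_charge_transfer(integrated_charges, k_stages):
    """Idealized simulation of one detector's K-stage charge transfer chain.

    integrated_charges -- iterable of integrated charge values, one per emitter/detector cycle
    k_stages           -- number of charge transfer stages (stage 0 .. stage K-1)
    Returns the values read out onto the data bus, in arrival order.
    """
    stages = deque([0.0] * k_stages)   # stage 0 at the left, stage K-1 at the right
    bus = []
    for charge in integrated_charges:
        # One charge transfer cycle: stage K-1 is read out to the data bus, every other
        # stage shifts one position toward the bus, and the freshly integrated charge
        # enters stage 0 (input and output switches of a stage never close together).
        bus.append(stages.pop())        # stage K-1 -> 4D frame buffer data bus
        stages.appendleft(charge)       # integrator -> stage 0
    return bus

# The pipeline takes K cycles to fill, so the first K readouts are the empty stages.
print(simulate_charge_transfer([10, 20, 30, 40], k_stages=3))   # [0.0, 0.0, 0.0, 10]
```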
(41) At the completion of the first 160 detector integration 156 period the integrated charge is transferred 168 from each of the M×N integration elements to each of the M×N charge transfer stage 0 elements. After the second detector integration 156 period is complete a second charge transfer 170 operation is performed that transfers charge from stage 0 to stage 1 and transfers charge from the integration stage to charge transfer stage 0. The detector input 172 signal shows times at which charge is being collected at integration stages for the M×N integration elements.
Distance = (TOF * c) / 2   (Eq. 1)
(43) where TOF = time of flight and c = speed of light in the medium.
(44) Using c = 0.3 m/ns as the speed of light, the Minimum Dist. (m) 200 and Maximum Dist. (m) 202 values are established as the lower and upper bounds of the range detected for each of the K stages in a 4D camera capture sequence. The Intensity (Hex) 204 column shows the hexadecimal value of the integrated intensity for each of the K stages. It is noted that each of the M×N elements in the detector array will have K intensity values corresponding to the integrated intensities of the K stages. The timing parameters and the TOF values from
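Eq. 1 can be applied per stage to bound the distances each detector integration window can observe. The Python below is a hedged example: the timing values and the convention of measuring the window relative to the emitter pulse start are assumptions for illustration, not values from the described capture sequence.

```python
C_M_PER_NS = 0.3   # speed of light used in the text, in metres per nanosecond

def stage_distance_window(window_start_ns: float, window_end_ns: float):
    """Convert a detector integration window (measured from the emitter pulse start)
    into minimum and maximum round-trip distances per Eq. 1: Distance = (TOF * c) / 2."""
    d_min = (window_start_ns * C_M_PER_NS) / 2.0
    d_max = (window_end_ns * C_M_PER_NS) / 2.0
    return d_min, d_max

# Example with made-up timing: K stages whose 40 ns windows each open 10 ns later.
for k in range(4):
    start_ns = 10.0 * k
    print(k, stage_distance_window(start_ns, start_ns + 40.0))  # stage 0 -> (0.0, 6.0) m
```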
(45) A review of the intensity values 204 shows a minimum value 206 of 0x28 and a maximum value 208 of 0xF0. These values are designated I_min[m,n] = 0x28 and I_max[m,n] = 0xF0. For embodiments that utilize constant pulse width timing for all detector/emitter stages in a capture sequence, the intensity value inserted in the color plane buffer is determined by:
I_color[m,n] = I_max[m,n] − I_min[m,n]   (Eq. 2)
(46) By utilizing Eq. 2 for color plane intensity, the effects of ambient light are eliminated by subtracting out the photonic component of intensity, I_min[m,n], that is due to ambient light on the scene or object. Eq. 2 is an effective approach for eliminating ambient light when the photodetector integration response has a linear relationship to the number of incident photons at the photodetector. For non-linear photonic/charge-collection relationships, Eq. 2 would be modified to account for the second-order or Nth-order relationship between incident photons and integrated charge intensity.
(47) For each photodetector m,n in embodiments that utilize multi-color filter elements, the I_color[m,n] value is stored at location m,n in the color plane that corresponds to the color of the filter. As an example, an embodiment with a Bayer filter pattern (RGBG) will have M×N/2 green filter detectors, M×N/4 blue filter detectors, and M×N/4 red filter detectors. At the completion of K integration stages, subsequent filling of the 4D frame buffer, and determination of the M×N color values, the controller will store the M×N/4 I_red[m,n] values at the appropriate locations in the red color plane memory. In turn the controller will determine and store the M×N/4 blue values in the appropriate locations in the blue color plane memory and the M×N/2 green values in the appropriate locations in the green color plane memory.
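A compact sketch of Eq. 2 combined with the color-plane scatter described above is shown below; it assumes a linear detector response (as noted for Eq. 2) and uses a made-up RGGB tiling and random stage intensities purely for illustration.

```python
import numpy as np

def build_color_planes(frame_buffer: np.ndarray, bayer: np.ndarray) -> dict:
    """Apply Eq. 2 per pixel and scatter the results into sparse color planes.

    frame_buffer -- M x N x K integrated intensities for one 4D capture
    bayer        -- M x N array of filter labels, e.g. 'R', 'G' or 'B'
    """
    # Eq. 2: I_color[m,n] = I_max[m,n] - I_min[m,n] removes the ambient contribution.
    i_color = frame_buffer.max(axis=2) - frame_buffer.min(axis=2)

    planes = {}
    for color in np.unique(bayer):
        plane = np.zeros_like(i_color)
        sel = bayer == color
        plane[sel] = i_color[sel]
        planes[str(color)] = plane   # sparse plane; demosaic later to fill the gaps
    return planes

# Example: RGGB tiling on a small array with random stage intensities.
M, N, K = 4, 4, 12
bayer = np.tile(np.array([['R', 'G'], ['G', 'B']]), (M // 2, N // 2))
rng = np.random.default_rng(1)
planes = build_color_planes(rng.integers(0, 256, size=(M, N, K)), bayer)
```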
(48) Referring again to
(50) where i is the stage at which the leading-edge-clipped signal is detected; j is the stage at which the trailing-edge-clipped signal is detected; I(i,m,n) is the intensity value for pixel m,n at stage i; I(j,m,n) is the intensity value for pixel m,n at stage j; I_min(m,n) is the minimum intensity value for the current emitter/detector sequence; I_max(m,n) is the maximum intensity value for the current emitter/detector sequence; TOF_min(i) is the minimum TOF for stage i of the detector sequence; and TOF_max(j) is the maximum TOF for stage j of the detector sequence.
(51) The embodiments from
(52) For ground-based vehicle-mounted and low-altitude aircraft-mounted applications the 4D imaging cycle time should be no longer than 50 microseconds. For higher-altitude aircraft at higher speeds the 4D imaging cycle time should be no longer than 10 microseconds. One skilled in the art can envision embodiments where the relative movement between scene objects and the camera exceeds 0.05 pixels. These longer-image-cycle-time embodiments will utilize inter-sample trajectory techniques to account for information from subsequent emitter/detector stages that do not align within the structure of the detector array grid.
(53) Embodiments described in
(54) An embodiment in
(56) Based on the selection of emitter and detector pulse widths for this embodiment, the control algorithm establishes that the intensity values transition from environmental values at stage 6 to ambient values at stage 12. Furthermore, the control algorithm determines that anywhere from one to three stages will contain an integrated signal that includes 100% of the object-reflected waveform. From the data in
(57)
    Stage #   Environment Signal %   Object Signal %    Ambient Signal %
    6         100                    0                  0
    7         E_0                    1 − E_0            0
    8         (not used for this computation)
    9         E_1                    1 − E_1 − A_1      A_1
    10        (not used for this computation)
    11        0                      1 − A_0            A_0
    12        0                      0                  100
I(clr,m,n,s−3) = I_env(clr,m,n)   (Eq. 5)
I(clr,m,n,s−2) = E_0*I_env(clr,m,n) + (1 − E_0)*I_obj(clr,m,n)   (Eq. 6)
I(clr,m,n,s) = E_1*I_env(clr,m,n) + (1 − E_1 − A_1)*I_obj(clr,m,n) + A_1*I_amb(clr,m,n)   (Eq. 7)
I(clr,m,n,s+2) = A_0*I_amb(clr,m,n) + (1 − A_0)*I_obj(clr,m,n)   (Eq. 8)
I(clr,m,n,s+3) = I_amb(clr,m,n)   (Eq. 9)
E_0 = E_1 + (2*t_emitter-clock-cycle)/D   (Eq. 10)
A_1 = A_0 + (2*t_emitter-clock-cycle)/D   (Eq. 11)
(58) where s is the stage number identifier for the detector stage with a 100% reflected signal; clr is the color of the pixel; m,n is the identifier of the pixel in the array; I_env( ) is the detected intensity for the stage with a 100% environmental signal; I_amb( ) is the detected intensity for the stage with a 100% ambient signal; I_obj( ) is the computed intensity for the object; E_0 is the percentage of the stage s−2 intensity that is due to the environmental signal; E_1 is the percentage of the stage s intensity that is due to the environmental signal; A_0 is the percentage of the stage s intensity that is due to the ambient signal; A_1 is the percentage of the stage s+2 intensity that is due to the ambient signal; t_emitter-clock-cycle is the period of the emitter clock; and D is the duty cycle of the emitter/detector for the stage of the sequence, defined as the emitter pulse width divided by the detector pulse width for the stage.
(59) Utilizing the five equations (Eqs. 6, 7, 8, 10 and 11) with five unknowns (I_obj( ), E_0, E_1, A_0 and A_1), the control algorithms determine I_obj( ) for each color and each pixel and assign the computed intensity values to the appropriate locations in the color frame buffers. The distance to the object is determined by computing the TOF to the object based on Eq. 7:
TOF(clr,m,n) = TOF_min(clr,m,n,s) + E_1*t_detector-pulse-width(s)   (Eq. 12)
(60) where TOF( ) is the time of flight for a particular pixel; TOF_min(s) is the minimum time of flight for stage s; E_1 is the percentage of the stage s intensity that is due to the environmental signal; and t_detector-pulse-width(s) is the width of the detector pulse for stage s.
(61) The identification of stage s for Eqs. 5-12 depends on knowledge of the emitter pulse width and the detector pulse width for each stage. The known pulse widths determine the duty cycle and determine how many stages are involved in the transition from environmental signals to ambient signals for each pixel. Eqs. 5-12 are applicable for embodiments where the emitter pulses are shorter in duration than the detector pulses. For embodiments where the emitter pulses are longer than the detector pulses, Eq. 7 will yield either E_1 or A_1 equal to zero. As a result, two more equations with two more unknowns are necessary to resolve the intensity values of the object. The first additional equation will describe two new unknowns (A_2 and E_2) as a function of the measured stage intensity, and the second additional equation will describe A_2 and E_2 as a function of the stage duty cycle.
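The text does not specify how the control algorithms solve Eqs. 6, 7, 8, 10 and 11; the sketch below simply feeds the five residuals to a generic numerical root finder and then applies Eq. 12. The starting guess, variable names, and use of scipy are illustrative assumptions.

```python
from scipy.optimize import fsolve

def solve_object_intensity(i_sm2, i_s, i_sp2, i_env, i_amb,
                           t_clk, duty, tof_min_s, t_det_pulse):
    """Solve Eqs. 6-8, 10 and 11 for (I_obj, E_0, E_1, A_0, A_1), then apply Eq. 12.

    i_sm2, i_s, i_sp2 -- measured intensities at stages s-2, s and s+2
    i_env, i_amb      -- intensities of the 100% environmental and 100% ambient stages
    t_clk, duty       -- emitter clock period and stage duty cycle D
    tof_min_s         -- minimum TOF for stage s; t_det_pulse is the stage s detector pulse width
    """
    delta = 2.0 * t_clk / duty   # common term in Eqs. 10 and 11

    def residuals(x):
        i_obj, e0, e1, a0, a1 = x
        return [
            e0 * i_env + (1 - e0) * i_obj - i_sm2,                   # Eq. 6
            e1 * i_env + (1 - e1 - a1) * i_obj + a1 * i_amb - i_s,   # Eq. 7
            a0 * i_amb + (1 - a0) * i_obj - i_sp2,                   # Eq. 8
            e0 - e1 - delta,                                         # Eq. 10
            a1 - a0 - delta,                                         # Eq. 11
        ]

    i_obj, e0, e1, a0, a1 = fsolve(residuals, x0=[i_s, 0.5, 0.25, 0.25, 0.5])
    tof = tof_min_s + e1 * t_det_pulse                               # Eq. 12
    return i_obj, tof
```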
(62) In various embodiments, it will be appreciated that the techniques for evaluating signal attenuation may require a minimum of five emitter/detector cycles: one cycle in which an environmental detected signal is determined, one cycle containing a leading-edge detected signal of the active pulsed signal, one cycle containing the full emitter pulse of the active pulsed signal, one cycle containing a trailing-edge detected signal, and one cycle containing an ambient detected signal. Depending upon timing, field of view, distances, and ambient and environmental conditions, additional emitter/detector cycles may be needed to obtain the information necessary to utilize the techniques for evaluating signal attenuation as described with respect to these embodiments.
(63) For uniform emitter pulses, Eq. 3 and Eq. 4 will produce the same TOF value for each pixel m,n. Due to signal noise and ambient light, TOF values based on higher integrated intensity values will produce more accurate distance computations than those based on lower integrated intensity values. In embodiments the controller will utilize only one of the values from Eq. 3 or Eq. 4 to establish the TOF for the pixel, with the preferred TOF value being selected from the equation that utilizes the larger-amplitude integrated intensity value.
(64) Objects farther from 4D cameras will receive less light from emitters than objects closer to the camera. As a result, reflected signals from far objects will have lower intensity than reflected signals from closer objects. One method to compensate for lower intensity return signals is to increase the emitter pulse width and to increase the detector integration time, thus increasing the intensity of the integrated signal for a given object distance.
(65) Stage 0 has a four-period emitter cycle 210 and a six-period detector integration cycle 212. Stage 1 has a five-period emitter cycle 214 and has a seven-period detector integration cycle 216. Stage 9 is a special cycle that has a very long emitter pulse 218 and a correspondingly long detector integration cycle 220. This special long emitter/detector cycle may not be used for distance determination but is used to establish accurate color values for objects that are not very retroreflective at the wavelengths of the camera emitter.
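One way to express the widening emitter/detector schedule described above is shown in the sketch below. The per-stage increments of one emitter-clock period match the stage 0 and stage 1 example in the text, but the number of stages and the length of the final long color stage are placeholders.

```python
def build_stage_schedule(n_stages=10, base_emitter=4, base_detector=6,
                         color_emitter=64, color_detector=66):
    """Stage schedule in emitter-clock periods: each ranging stage widens the emitter
    pulse and detector integration by one period; the final stage is a long
    emitter/detector cycle reserved for color capture of weakly reflective objects."""
    schedule = []
    for k in range(n_stages - 1):
        schedule.append({"stage": k,
                         "emitter_periods": base_emitter + k,
                         "detector_periods": base_detector + k})
    schedule.append({"stage": n_stages - 1,
                     "emitter_periods": color_emitter,      # placeholder "very long" pulse
                     "detector_periods": color_detector})
    return schedule

for entry in build_stage_schedule():
    print(entry)   # stage 0: 4/6 periods, stage 1: 5/7 periods, ..., stage 9: long cycle
```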
(67) In previous embodiments the distances to objects computed via TOF were dependent on distance ranges established by the multi-period detector integration cycles. It may be desirable to achieve greater precision for TOF distance measurements.
(69) where i is the stage at which the leading-edge-clipped signal is detected; j is the stage at which the trailing-edge-clipped signal is detected; I(i,m,n) is the intensity value for pixel m,n at stage i; I(j,m,n) is the intensity value for pixel m,n at stage j; I_min(m,n) is the minimum intensity value for the current emitter/detector sequence; I_max(m,n) is the maximum intensity value for the current emitter/detector sequence; TOF_min(i) is the minimum TOF for stage i of the detector sequence; TOF_max(j) is the maximum TOF for stage j of the detector sequence; and f^(−1)(t) is the inverse function of f(t) and expresses the point in time during the reflected pulse integration at which the cumulative intensity is equal to the non-integrated portion of the leading-edge signal, or at which the cumulative intensity is equal to the integrated portion of the trailing-edge signal.
(70) In practice f(t) will likely be a non-linear or higher-order relationship between cumulative intensity and time. As such, the inverse function f^(−1)(t) may be implemented in embodiments as a lookup table or some other numerical conversion function.
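A lookup-table implementation of f^(−1)(t) can be sketched with simple interpolation over a characterization table. The response curve below is a made-up monotonic placeholder standing in for a measured emitter/detector response.

```python
import numpy as np

# Characterization table for f(t): cumulative integrated intensity (as a fraction of the
# full pulse) versus normalized time within the pulse. A smooth placeholder curve is used.
t_samples = np.linspace(0.0, 1.0, 256)
f_samples = np.sin(0.5 * np.pi * t_samples) ** 2   # monotonic, non-linear placeholder

def f_inverse(cumulative_fraction: float) -> float:
    """Return the normalized time at which the cumulative intensity reaches the
    requested fraction, by interpolating the characterization table."""
    return float(np.interp(cumulative_fraction, f_samples, t_samples))

# Example: time at which half of the pulse energy has been integrated.
t_half = f_inverse(0.5)   # 0.5 for this symmetric placeholder response
```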
(72) In embodiments the intensity determination for separate color planes is achieved with an unfiltered detector array and selective use of multi-colored emitters.
(73) TABLE 1. Round-robin emitters within a single K-stage sequence
    Stage #   Emitter(s)   Emitter/Detector Offset
    0         Red          0
    1         Green        0
    2         Blue         0
    3         Red          1
    4         Green        1
    5         Blue         1
    6         Red          2
    7         Green        2
    8         Blue         2
    9         Red          3
    10        Green        3
    11        Blue         3
(74) An example in Table 2 below shows multiple emitter/detector stages for a K-stage sequence with K=12, whereby each emitter wavelength is utilized for K/3 sequential stages.
(75) TABLE 2. Sequential emitter events within a single K-stage sequence
    Stage #   Emitter(s)   Emitter/Detector Offset
    0         Red          0
    1         Red          1
    2         Red          2
    3         Red          3
    4         Green        0
    5         Green        1
    6         Green        2
    7         Green        3
    8         Blue         0
    9         Blue         1
    10        Blue         2
    11        Blue         3
(76) A K-stage sequence with K=12 can also be allocated to a single wavelength emitter, with subsequent K-stage sequences allocated to other wavelengths in a round-robin fashion as shown in Table 3 below.
(77) TABLE 3. Sequential emitter events in separate K-stage sequences
    Cycle #   Event #   Emitter(s)   Emitter/Detector Offset
    0         0         Red          0
    0         1         Red          1
    0         2         Red          2
    0         3         Red          3
    0         4         Red          4
    0         5         Red          5
    0         6         Red          6
    0         7         Red          7
    0         8         Red          8
    0         9         Red          9
    0         10        Red          10
    0         11        Red          11
    1         0         Green        0
    1         1         Green        1
    1         2         Green        2
    1         3         Green        3
    1         4         Green        4
    1         5         Green        5
    1         6         Green        6
    1         7         Green        7
    1         8         Green        8
    1         9         Green        9
    1         10        Green        10
    1         11        Green        11
    2         0         Blue         0
    2         1         Blue         1
    2         2         Blue         2
    2         3         Blue         3
    2         4         Blue         4
    2         5         Blue         5
    2         6         Blue         6
    2         7         Blue         7
    2         8         Blue         8
    2         9         Blue         9
    2         10        Blue         10
    2         11        Blue         11
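The emitter allocations of Tables 1 and 2 above follow simple patterns that can be generated programmatically; the short sketch below reproduces both orderings for illustration (the color names and K = 12 mirror the tables).

```python
def round_robin_schedule(k_stages=12, colors=("Red", "Green", "Blue")):
    """Emitter colors interleaved stage by stage, as in Table 1."""
    return [(k, colors[k % len(colors)], k // len(colors)) for k in range(k_stages)]

def blocked_schedule(k_stages=12, colors=("Red", "Green", "Blue")):
    """Each color used for K/3 consecutive stages, as in Table 2."""
    block = k_stages // len(colors)
    return [(k, colors[k // block], k % block) for k in range(k_stages)]

# Entries are (stage #, emitter, emitter/detector offset), matching the table columns.
print(round_robin_schedule()[:4])  # [(0,'Red',0), (1,'Green',0), (2,'Blue',0), (3,'Red',1)]
print(blocked_schedule()[:4])      # [(0,'Red',0), (1,'Red',1), (2,'Red',2), (3,'Red',3)]
```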
(78) Embodiments that utilize individual detector filters will have certain advantages and disadvantages over embodiments that utilize separate wavelength emitters to achieve multi-color detected signals. Table 4 below compares the relative advantages of embodiments.
(79) TABLE 4. Comparison of detector filter techniques for visible spectrum embodiments
    Det. Filter(s)   Emitter(s)    # of Rs    # of Gs    # of Bs    Advantage
    RGBG             400-700 nm    M × N/4    M × N/2    M × N/4    Increased range/precision
    None             RGB           M × N      M × N      M × N      Increased spatial resolution
(82) In-motion imaging applications have the advantage of imaging an object from multiple viewpoints and, more importantly, multiple angles. Physical objects possess light-reflecting characteristics that, when sensed properly, can be utilized to categorize objects and even uniquely identify objects and their surface characteristics.
(84) Upon completion of the processing for n=0 the processing algorithm obtains the next image 370 in a sequence. The image is analyzed to determine if point P0 is present 372 in the image. If P0 is present the loop counter is incremented 374 and the algorithm proceeds to the normal vector determination step 364. If P0 is not present the algorithm establishes whether there are enough points 376 to identify the object based on angular intensity characteristics. If the minimum requirements are not met the algorithm concludes 384 without identifying the object. If the minimum requirements are met the algorithm creates a plot in 3D space 378 for each color for the intensity information determined for all of the n points. The algorithm will define the object by comparing the collected angular intensity profile to reference characteristic profiles that are stored in a library. The characteristic profiles are retrieved from the library 380 and a correlation is determined 382 for each characteristic profile and the P0 profile. The characteristic profile with the highest correlation to P0 is used to determine the object type, class or feature for the object represented by P0.
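The profile-matching step can be illustrated with a small correlation search over a library of angular intensity profiles. The profiles, angles, and library entries below are fabricated placeholders; the correlation-based selection mirrors the comparison described above.

```python
import numpy as np

def best_matching_profile(observed: np.ndarray, library: dict) -> str:
    """Return the name of the library characteristic profile with the highest
    correlation to the observed angular intensity profile (one intensity per angle)."""
    best_name, best_corr = None, -np.inf
    for name, reference in library.items():
        corr = np.corrcoef(observed, reference)[0, 1]
        if corr > best_corr:
            best_name, best_corr = name, corr
    return best_name

# Made-up example: intensity versus viewing angle for two candidate surface profiles.
angles = np.linspace(0.0, 60.0, 13)                    # degrees from the surface normal
library = {
    "asphalt smoothness A": np.exp(-angles / 40.0),    # placeholder falloff curves
    "asphalt with surface moisture": np.exp(-angles / 10.0),
}
observed = np.exp(-angles / 12.0)
print(best_matching_profile(observed, library))        # "asphalt with surface moisture"
```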
(85) The algorithm from
(86) In practice the library of characteristic angular intensity profiles will contain hundreds or possibly thousands of profiles. Performing correlations on all profiles in real time is a computationally intensive operation. As a way of paring the challenge down to a more manageable size, the analysis functionality on the device can perform image analysis to classify detected objects. Once classified, angular intensity profiles from the detected objects can be compared to only the library profiles that are associated with the identified object class. As an example, the image analysis functionality in a vehicle-mounted application can identify roadway surfaces based on characteristics such as coloration, flatness, orientation relative to the direction of travel, etc. Having established that a profile for a point P0 is classified as a roadway surface point, the algorithm can access only those characteristic profiles from the library that are classified as road surface characteristics. Some road surface characteristic profiles could include, but are not limited to:
(87) Asphalt—smoothness rating A
(88) Asphalt—smoothness rating B
(89) Asphalt—smoothness rating C
(90) Asphalt with surface moisture
(91) Asphalt with surface ice
(92) Concrete—smoothness rating A
(93) Concrete—smoothness rating B
(94) Concrete—smoothness rating C
(95) Concrete with surface moisture
(96) Concrete with surface ice
(97) Road signs are another profile class that can be kept separate in the profile library. Some road sign characteristic profiles could include, but are not limited to:
(98) ASTM Type I
(99) ASTM Type III
(100) ASTM Type IV—manufacturer A
(101) ASTM Type IV—manufacturer M
(102) ASTM Type IV—manufacturer N
(103) ASTM Type VIII—manufacturer A
(104) ASTM Type VIII—manufacturer M
(105) ASTM Type IX—manufacturer A
(106) ASTM Type IX—manufacturer M
(107) ASTM Type XI—manufacturer A
(108) ASTM Type XI—manufacturer M
(109) The characteristic profile algorithm specifies correlation as the means to compare characteristic profiles and to select the most representative characteristic profile for the object represented by P0. Those reasonably skilled in the art can devise or utilize other methods to select the most representative characteristic profile based on the information collected and analyzed for the object represented by P0.
(111) The rear-view mirror 408 displays an unobstructed view from a rear-facing camera (not shown) mounted at the rear of the vehicle 400 or inside the vehicle 400 looking through the rear window. Environmental 416 obstructions at the side of the vehicle 400 are addressed with features in the side mirror 418. A rear oblique-angle camera 420 detects obstructed environmental conditions 416 and projects an obstruction-free image 422 on the mirror for use by the vehicle 400 operator. Alternately, or in addition, the obstruction-free image 422 is delivered to the vehicle control system for autonomous or semi-autonomous driving systems. An indicator 424 on the side mirror indicates the presence of objects within a certain space, thus assisting the vehicle 400 operator in maneuvers like lane changes.
(112) In other embodiments, the processing system can include various engines, each of which is constructed, programmed, configured, or otherwise adapted to autonomously carry out a function or set of functions. The term engine as used herein is defined as a real-world device, component, or arrangement of components implemented using hardware, such as an application specific integrated circuit (ASIC) or field-programmable gate array (FPGA), for example, or as a combination of hardware and software, such as a microprocessor system and a set of program instructions that adapt the engine to implement the particular functionality, which (while being executed) transform the microprocessor or controller system into a special-purpose device. An engine can also be implemented as a combination of the two, with certain functions facilitated by hardware alone and other functions facilitated by a combination of hardware and software. In certain implementations, at least a portion, and in some cases all, of an engine can be executed on the processor(s) of one or more computing platforms that are made up of hardware that execute an operating system, system programs, and/or application programs, while also implementing the engine using multitasking, multithreading, distributed processing where appropriate, or other such techniques.
(113) Accordingly, it will be understood that each processing system can be realized in a variety of physically realizable configurations and should generally not be limited to any particular implementation exemplified herein, unless such limitations are expressly called out. In addition, a processing system can itself be composed of more than one engine, sub-engine, or sub-processing system, each of which can be regarded as a processing system in its own right. Moreover, in the embodiments described herein, each of the various processing systems may correspond to a defined autonomous functionality; however, it should be understood that in other contemplated embodiments, each functionality can be distributed to more than one processing system. Likewise, in other contemplated embodiments, multiple defined functionalities may be implemented by a single processing system that performs those multiple functions, possibly alongside other functions, or distributed differently among a set of processing systems than specifically illustrated in the examples herein.
(114) Embodiments utilize high-speed components and circuitry whereby the relative movement of the device and/or scene could be defined as the movement of less than the inter-element spacing in the detector array. For embodiments wherein the relative movement is small the processing software can assume the axis of the 3D volumetric computations is normal to the detector elements in the array. For relative movement greater than the inter-element spacing in the detector array during the timeframe of the emitter cycles the frame buffer analysis software will need to perform 3D analysis of the sampled waveforms whereby the representations have an axis that is non-normal to the detector elements in the array.
(115) The electrical circuitry of embodiments is described utilizing semiconductor nomenclature. In other embodiments, circuitry and control logic that utilize optical computing, quantum computing, or similar miniaturized, scalable computing platforms may be used to perform part or all of the necessary high-speed logic, digital storage, and computing aspects of the systems described herein. The optical emitter elements are described utilizing fabricated semiconductor LED and laser diode nomenclature. In other embodiments the requirements for the various techniques described herein may be accomplished with the use of any controllable photon-emitting elements wherein the output frequency of the emitted photons is known or characterizable, is controllable with logic elements, and is of sufficient switching speed.
(116) In some embodiments, the light energy or light packet is emitted and received as near-collimated, coherent, or wide-angle electromagnetic energy, such as common laser wavelengths of 650 nm, 905 nm or 1550 nm. In some embodiments, the light energy can be in the wavelength ranges of ultraviolet (UV)—100-400 nm, visible—400-700 nm, near infrared (NIR)—700-1400 nm, infrared (IR)—1400-8000 nm, long-wavelength IR (LWIR)—8 um-15 um, far IR (FIR)—15 um-1000 um, or terahertz—0.1 mm-1 mm. Various embodiments can provide increased device resolution, higher effective sampling rates and increased device range at these various wavelengths.
(117) Detectors as utilized in the various embodiments refer to discrete devices or a focal plane array of devices that convert optical energy to electrical energy. Detectors as defined herein can take the form of PIN photodiodes, avalanche photodiodes, photodiodes operating at or near Geiger mode biasing, or any other devices that convert optical to electrical energy whereby the electrical output of the device is related to the rate at which target photons are impacting the surface of the detector.
(118) Persons of ordinary skill in the relevant arts will recognize that embodiments may comprise fewer features than illustrated in any individual embodiment described above. The embodiments described herein are not meant to be an exhaustive presentation of the ways in which the various features of the embodiments may be combined. Accordingly, the embodiments are not mutually exclusive combinations of features; rather, embodiments can comprise a combination of different individual features selected from different individual embodiments, as understood by persons of ordinary skill in the art. Moreover, elements described with respect to one embodiment can be implemented in other embodiments even when not described in such embodiments unless otherwise noted. Although a dependent claim may refer in the claims to a specific combination with one or more other claims, other embodiments can also include a combination of the dependent claim with the subject matter of each other dependent claim or a combination of one or more features with other dependent or independent claims. Such combinations are proposed herein unless it is stated that a specific combination is not intended. Furthermore, it is intended also to include features of a claim in any other independent claim even if this claim is not directly made dependent to the independent claim.
(119) Any incorporation by reference of documents above is limited such that no subject matter is incorporated that is contrary to the explicit disclosure herein. Any incorporation by reference of documents above is further limited such that no claims included in the documents are incorporated by reference herein. Any incorporation by reference of documents above is yet further limited such that any definitions provided in the documents are not incorporated by reference herein unless expressly included herein.
(120) For purposes of interpreting the claims, it is expressly intended that the provisions of Section 112, sixth paragraph of 35 U.S.C. are not to be invoked unless the specific terms “means for” or “step for” are recited in a claim.