High Dynamic Range Imaging of Environment with a High Intensity Reflecting/Transmitting Source
20170234976 · 2017-08-17
Inventors
- Yoav Grauer (Haifa, IL)
- Ofer David (Haifa, IL)
- Eyal Levi (Haifa, IL)
- Ezri Sonn (Haifa, IL)
- Haim Garten (Haifa, IL)
CPC classification
G01S7/4868
PHYSICS
Abstract
Active-gated imaging system and method for imaging an environment with at least one high-intensity source. A light source emits light pulses toward the environment, and an image sensor with a pixelated sensor array receives reflected pulses from a selected depth of field and generates a main image. The image sensor exposure mechanism includes a pixelated transfer gate synchronized with the emitted pulses. An image processor identifies oversaturated image portions of the main image resulting from a respective high-intensity source, and interprets the oversaturated image portions using supplementary image information acquired by the image sensor. The supplementary information may be obtained from a low-illumination secondary image having substantially fewer gating cycles than the main image; by accumulating reflected pulses from the high-intensity source after the reflected pulses undergo internal reflections between optical elements of the camera; or from a low-illumination secondary image acquired by residual photon accumulation during a non-exposure state of the image sensor.
Claims
1. An active-gated imaging system, for imaging an environment with the presence of at least one high-intensity source, the imaging system comprising: a light source, configured to emit light pulses toward said environment; a gated camera comprising an image sensor with a pixelated sensor array configured for digital image acquisition, said image sensor configured to receive reflected pulses from a selected depth of field (DOF) in said environment and to generate a main image, wherein the exposure mechanism of said image sensor comprises a pixelated transfer gate synchronized with the emitted pulses; a controller, configured to control the operation of said light source and said image sensor; and an image processor, configured to identify at least one oversaturated image portion of said main image resulting from a respective said high-intensity source, and to interpret said oversaturated image portion using supplementary image information acquired by said image sensor.
2. The imaging system of claim 1, wherein said image processor is further configured to generate a merged image by combining said main image with said supplementary image information.
3. The imaging system of claim 1, wherein said image sensor is configured to acquire at least one low-illumination secondary image of said DOF, wherein the number of gating cycles of said secondary image is substantially less than the number of gating cycles of said main image, said supplementary image information being obtained from said secondary image.
4. The imaging system of claim 1, wherein said image sensor is configured to generate said supplementary image information in said main image, by accumulating reflected pulses from said high-intensity source after said reflected pulses undergo internal reflections between optical elements of said camera.
5. The imaging system of claim 4, wherein said optical elements are selected from the group consisting of: an optical lens; an external spectral filter; and an optical element comprising an anti-reflection coating.
6. The imaging system of claim 1, wherein said image sensor is configured to acquire at least one low-illumination secondary image frame of said DOF, by residual photon accumulation when said image sensor is in a non-exposure state, said supplementary image information being obtained from said secondary image frame.
7. The imaging system of claim 6, wherein said controller is configured to apply said non-exposure state by closing said transfer gate, for at least one pixel of the sensor array.
8. The imaging system of claim 2, further comprising a display, configured to display at least one of: said main image; said supplementary image information; and said merged image.
9. The imaging system of claim 1, wherein said controller is further configured to adaptively control at least one gating parameter of said light source or said camera, in accordance with said supplementary image information.
10. The imaging system of claim 9, wherein said controller is configured to minimize the frame duration (T.sub.FRAME) of at least one image frame, to reduce ambient light accumulation in said image frame.
11. The imaging system of claim 1, wherein said processor is configured to provide an indication of at least one object of interest in said environment.
12. The imaging system of claim 11, wherein said indication comprises a driving assistance feature provided to an operator of a vehicle, said driving assistance feature being selected from the group consisting of: forward collision warning (FCW); lane departure warning (LDW); traffic sign recognition (TSR); pedestrian/vehicle detection; navigational instructions; and any combination thereof.
13. The imaging system of claim 1, wherein said image sensor further comprises an anti-blooming mechanism, configured to direct excess saturation from a respective pixel of said sensor array to neighboring pixels.
14. The imaging system of claim 13, wherein said anti-blooming mechanism is adaptively controllable.
15. The imaging system of claim 1, wherein said image sensor comprises a linear response image sensor.
16. The imaging system of claim 1, wherein said image sensor comprises an addressable switching mechanism, configured to selectively adjust the intensity level of at least one selected pixel by controlling the number of exposures of said selected pixel, to minimize intense reflections from said high-intensity sources incident on said selected pixel.
17. The imaging system of claim 1, further comprising an additional detector or imaging source selected from the group consisting of: a radar detector; a lidar detector; a stereoscopic camera; a rangefinder; a location data source; and a digital map.
18. A vehicle, comprising the imaging system of claim 1.
19. A method for active-gated imaging of an environment with the presence of at least one high-intensity source, the method comprising the procedures of: emitting light pulses toward said environment, using a light source; receiving reflected pulses from a selected DOF in said environment to generate a main image, using a gated camera comprising an image sensor with a pixelated sensor array configured for digital image acquisition, wherein the exposure mechanism of said image sensor comprises a pixelated transfer gate synchronized with the emitted pulses; identifying at least one oversaturated image portion of said main image resulting from a respective said high-intensity source; and interpreting said oversaturated image portion using supplementary image information acquired using said image sensor.
20. The method of claim 19, further comprising the procedure of generating a merged image by combining said main image with said supplementary image information.
21. The method of claim 19, wherein said procedure of acquiring supplementary image information comprises acquiring at least one low-illumination secondary image frame of said DOF, wherein the number of gating cycles of said secondary image is substantially less than the number of gating cycles of said main image.
22. The method of claim 19, wherein said procedure of acquiring supplementary image information comprises obtaining low-illumination image content in said main image, by accumulating reflected pulses from said high-intensity source after said reflected pulses undergo internal reflections between optical elements of said camera.
23. The method of claim 19, wherein said procedure of acquiring supplementary image information comprises acquiring at least one low-illumination secondary image frame of said DOF by residual photon accumulation when said image sensor is in a non-exposure state.
24. The method of claim 23, further comprising the procedure of closing said transfer gate, for at least one pixel of the sensor array, to apply said non-exposure state.
25. The method of claim 20, further comprising the procedure of displaying at least one of: said main image; said supplementary image information; and said merged image.
26. The method of claim 19, further comprising the procedure of adaptively controlling at least one gating parameter of said light source or said camera, in accordance with said supplementary image information.
27. The method of claim 26, wherein said procedure of adaptively controlling at least one gating parameter comprises minimizing the frame duration (T.sub.FRAME) of at least one image frame, to reduce ambient light accumulation in said image frame.
28. The method of claim 19, further comprising the procedure of providing an indication of at least one object of interest in said environment.
29. The method of claim 28, wherein said indication comprises a driving assistance feature provided to an operator of a vehicle, said driving assistance feature being selected from the group consisting of: forward collision warning (FCW); lane departure warning (LDW); traffic sign recognition (TSR); pedestrian/vehicle detection; navigational instructions; and any combination thereof.
30. The method of claim 19, further comprising the procedure of directing excess saturation from a respective pixel of said sensor array to neighboring pixels, with an anti-blooming mechanism of said image sensor.
31. An imaging system, for imaging an environment with the presence of at least one high-intensity source, the imaging system comprising: an image sensor, configured to generate a main image by receiving radiation emitted or reflected from objects in said environment, and further configured to acquire at least one low-illumination secondary image by residual photon accumulation when said image sensor is in a non-exposure state; and an image processor, configured to identify at least one oversaturated image portion of said main image resulting from a respective said high-intensity source, and to interpret said oversaturated image portion using said secondary image.
32. The imaging system of claim 31, wherein said non-exposure state of said image sensor is applied by closing the pixelated transfer gate that transfers charge from the photodiode to the floating diffusion node of the pixel, for at least one pixel of the sensor array.
33. The imaging system of claim 31, wherein said image sensor comprises a CMOS image sensor.
34. A method for imaging an environment with the presence of at least one high-intensity source, the method comprising the procedures of: receiving radiation emitted or reflected from objects in said environment to generate a main image, using an image sensor; acquiring at least one low-illumination secondary image using said image sensor, by residual photon accumulation when said image sensor is in a non-exposure state; identifying at least one oversaturated image portion of said main image resulting from a respective said high-intensity source; and interpreting said oversaturated image portion using said secondary image.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0015] The present invention will be understood and appreciated more fully from the following detailed description taken in conjunction with the drawings in which:
DETAILED DESCRIPTION OF THE EMBODIMENTS
[0023] The present invention overcomes the disadvantages of the prior art by providing an active-gated imaging system and method for imaging of an environment with the presence of at least one high-intensity source, to produce a high dynamic range (HDR) image, even when using a linear (non-logarithmic) imaging sensor. The term “high-intensity source”, as used herein, refers to any object or entity that reflects and/or emits a substantially high level of radiant intensity, such that the reflections received therefrom by an (active or passive) imaging system would result in an unclear or incomprehensible image portion (e.g., due to undesirable electro-optical phenomena, such as saturation or blooming effects). For example, a high-intensity source may be a “highly-reflective source”, such as a retro-reflector (e.g., a retro-reflective traffic sign or a retroreflective sheet on a vehicle rear bumper), and/or may be a “highly-transmitting source”, such as: sunlight, vehicle high beams, or a light source of another active imaging system (e.g., on an oncoming vehicle). Alternatively, a high-intensity source may be considered an object or entity from which the received reflection signal exceeds the dynamic range of the image sensor pixels, resulting in pixel saturation and perhaps also photon overflow into neighboring pixels. For example, a retro-reflective source may reflect light at least two orders of magnitude greater than would a diffusive source located at the same distance.
[0024] Reference is now made to
[0025] Imaging system 100 may operate using active imaging, in which an image of the scene is generated from accumulated light reflections (by image sensor 112) after the transmission of light (by light source 102) to illuminate the scene. Imaging system 100 is configured with a gated imaging capability, such that the activation of camera 104 is synchronized with the illumination pulses 122 in order to image a particular depth of field (DOF). For example, camera 104 is activated to accumulate photons when the reflected pulses 124 from a specific distance are due to arrive at camera 104, and is deactivated (prevented from accumulating photons) during other time periods. Imaging system 100 may also operate in a non-gated imaging mode. According to some embodiments of the present invention, imaging system 100 may operate using passive imaging, i.e., without actively illuminating the scene by light source 102, such that image sensor 112 receives emitted or reflected radiation with only the existing ambient light.
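The round-trip timing behind this gating scheme can be sketched as follows. This is an illustrative sketch only, not part of the patent disclosure: the function and parameter names are assumptions, and a real controller would operate in hardware timing units.

```python
# Speed of light in vacuum, m/s.
C = 299_792_458.0

def gate_timing(dof_near_m, dof_far_m, pulse_width_s):
    """Return (gate_delay_s, gate_open_s) so that the camera exposure
    captures only pulses reflected from the [near, far] depth of field.

    A pulse reflected at range R returns after a round trip of 2R/c, so
    the gate opens when the nearest echo arrives and stays open long
    enough to collect the full echo from the far edge of the slice.
    """
    t_near = 2.0 * dof_near_m / C
    t_far = 2.0 * dof_far_m / C
    gate_delay = t_near
    gate_open = (t_far - t_near) + pulse_width_s
    return gate_delay, gate_open

# Example: image a DOF slice from 45 m to 150 m with 100 ns pulses.
delay, width = gate_timing(45.0, 150.0, 100e-9)
```

For a 45 m near edge the gate delay works out to roughly 300 ns, which is why the synchronization between light source 102 and the pixelated transfer gate must operate at nanosecond resolution.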
[0026] Light source 102 emits a series of light pulses, such as light pulse 122, toward an area or environment to be imaged by system 100. Light source 102 may alternatively emit continuous wave (CW) radiation. The emitted light may be of any suitable wavelength, such as in the near infrared (NIR) or short wave infrared (SWIR) spectral ranges. Light source 102 may be embodied by a laser diode, such as an edge-emitting semiconductor laser or a vertical-cavity surface-emitting laser (VCSEL), or by a non-laser light source, such as a light-emitting diode (LED) or a gas discharge lamp. The particular characteristics of the emitted light may be selected in accordance with the imaged area and the environmental conditions. For example, the pulse width, the intensity (peak power), the polarization and/or the shape of the illumination pulse 122 may be controlled as a function of the distance to an object to be imaged (i.e., the desired DOF).
[0027] Camera 104 receives reflected light, such as reflected light pulse 124, reflected from objects illuminated by emitted light pulses 122. Camera 104 includes at least one image sensor 112 that accumulates the reflected light pulses 124 and generates a digital image of the scene. Image sensor 112 may be, for example, a CCD sensor or a CMOS sensor, such as an active pixel sensor (APS) array. Image sensor 112 may also be a hybrid sensor (e.g., an indium gallium arsenide (InGaAs) based photodetector or a mercury cadmium telluride (MCT) based photodetector), with or without gain. Camera 104 may also include an image intensifier coupled with the sensor array 112. The exposure mechanism of image sensor 112 involves a pixelated transfer gate that transfers charge from a photodiode to a floating diffusion node for each individual pixel in the sensor array (where each pixel may be associated with more than one transfer gate element and more than one floating diffusion node element). Image sensor 112 operates in a substantially similar spectral range as light source 102 (e.g., in the NIR, and/or SWIR spectrum). Image sensor 112 is operative to acquire at least one image frame, such as a sequence of consecutive image frames representing a video image, which may be converted into an electronic signal for subsequent processing and/or transmission. The image generated by image sensor 112 is referred to herein as a “reflection-based image” or a “main image”, interchangeably, which encompasses any optical or digital signal representation of a scene acquired at any spectral region, encompasses images obtained by either active illumination imaging or passive imaging, and encompasses both a single image frame and a sequence of image frames (i.e., a “video image”).
[0028] Camera 104 further includes optics 114, operative to direct reflected light pulses 124 to image sensor 112. Optics 114 may include: lenses, mirrors, fiber optics, waveguides, and the like. Camera 104 includes optional filters 116, operative to filter incoming light 124 according to particular filtering criteria. Filters 116 may be integrated with image sensor 112, and/or disposed adjacent to optics 114. For example, filters 116 may include at least one bandpass filter, which passes only wavelengths in the spectral range emitted by light source 102 (e.g., NIR light), while blocking light at other wavelengths. Such a bandpass filter may thus reduce the level of incoming light from certain high-intensity sources in the imaged scene, such as those that reflect/emit light in the visible spectrum (e.g., the headlights of oncoming vehicles). Filters 116 may also include a spectral filter, such as to direct selected wavelengths to different pixels of image sensor 112. For example, some pixels may be configured to receive light only in the NIR spectrum, while other pixels may be configured to receive light only in the SWIR spectrum. Filters 116 may further include a polarization filter, such as in conjunction with a light source 102 that emits polarized light, where the polarization filter is configured to block incoming light having a particular polarization from reaching image sensor 112. Generally, objects reflect light without preserving the polarization of the incident light, but certain highly-reflective objects, such as retroreflective traffic signs, do preserve the incident light polarization. Thus, a polarization filter may be configured to pass received pulses 124 having a polarization substantially perpendicular to that of the emitted pulses 122, thereby reducing intense reflections from high-intensity sources and mitigating potential saturation or blooming effects in the generated active image.
Imaging system 100 may adjust the degree by which the polarization is altered, such as by applying a partial rotation of the polarization (e.g., between 0-90° rotation) to reduce reflections from objects further away in the environment. Filters 116 may be implemented on the pixel array of image sensor 112 (i.e., such that different sensor array pixels are configured to only accumulate light pulses having different wavelength/spectral/polarization properties).
[0029] According to an embodiment of the present invention, image sensor 112 is a linear non-HDR sensor (i.e., having a linear/non-logarithmic pixel read-out scheme). Image sensor 112 may alternatively be embodied by a logarithmic HDR sensor, or a sensor with a combined linear/logarithmic response. The signals 124 received by image sensor 112 may be processed using an adaptive beamforming scheme to provide directional sensitivity. Imaging system 100 may optionally include multiple cameras 104 and/or image sensors 112, such that different cameras/sensors are configured to collect reflections of different transmitted laser pulses 122. For example, 3D information (i.e., a stereoscopic image) can be extracted using a triangulation and/or pulsing/gating scheme.
[0030] Controller 106 dynamically and adaptively controls the operation of light source 102 and/or camera 104. For example, controller 106 synchronizes the emission of laser pulses 122 by light source 102 with the exposure of camera 104 for implementing active-gated imaging. Controller 106 also sets the various parameters of the emitted light pulses 122, such as the pulse start time, the pulse duration (pulse width), the number of pulses per frame, and the pulse shape and pattern. Controller 106 may adjust the frame rate of camera 104, or other parameters relating to the image frames captured by cameras 104. For example, controller 106 may establish the illumination level for each acquired frame and for each portion or “slice” (i.e., DOF) of a frame, such as by controlling the number of emitted light pulses 122 and collected reflections 124 for each frame slice, by controlling the number of frame slices within each frame, and/or by controlling the exposure duration of camera 104 as well as the timing of the exposure with respect to the emitted light pulse 122.
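The per-slice illumination control described above can be represented as a simple data structure: each frame slice (DOF) is assigned a number of pulse/exposure gating cycles, which fixes its effective illumination level. This is a hedged, illustrative sketch; the function names, dictionary keys, and cycle counts are assumptions for demonstration, not values from the patent.

```python
def build_frame_schedule(slices, pulses_per_slice):
    """Pair each DOF slice (near_m, far_m) with the number of gating
    cycles the controller allots to it within one frame."""
    schedule = []
    for (near, far), n_pulses in zip(slices, pulses_per_slice):
        schedule.append({"dof": (near, far), "gating_cycles": n_pulses})
    return schedule

# A main (fully illuminated) frame accumulates many gating cycles per slice.
main = build_frame_schedule([(20, 80), (80, 150)], [400, 600])

# A low-illumination secondary frame (first embodiment) uses substantially
# fewer gating cycles over the same slices.
secondary = build_frame_schedule([(20, 80), (80, 150)], [10, 15])
```

The ratio between the two cycle counts is what leaves high-intensity sources legible in the secondary frame while ordinary diffusive objects remain dim.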
[0031] Controller 106 may also control the gain of image sensor 112, such as by using an automatic gain control (AGC) mechanism. In general, controller 106 may dynamically adjust any parameter as necessary during the course of operation of imaging system 100. Controller 106 may be integrated in a single unit together with camera 104 and/or with image processor 108.
[0032] Image processor 108 receives the main image captured by camera 104 and performs relevant image processing and analysis. Image processor 108 may merge or combine supplementary image information acquired by image sensor 112 with the main image to generate a fused image, as will be elaborated upon hereinbelow. Image processor 108 may also analyze the acquired images (and/or a fused image) to detect and/or identify at least one object of interest in the environment, as will be discussed further hereinbelow.
[0033] Display 110 displays images generated by imaging system 100. The displayed image may be combined with the ambient scenery, allowing a user to view both the display image and the ambient scene simultaneously, while maintaining external situational awareness. For example, display 110 may be a head-up display (HUD), such as a HUD integrated in a vehicle windshield of a vehicle-mounted night vision system. Display 110 may also be a wearable display, embedded within an apparatus worn by the user (e.g., a helmet, a headband, a visor, spectacles, goggles, and the like), or alternatively may be the display screen of a mobile or handheld device (e.g., a smartphone or tablet computer).
[0034] Imaging system 100 further includes a data communication channel 120, which allows for sending images, notifications, or other data between internal system components or to an external location. Data communication channel 120 may include or be coupled with an existing system communications platform, such as in accordance with the CAN bus and/or on-board diagnostics (OBD) protocols in a vehicle. For example, imaging system 100 may receive information relating to the current vehicle status, such as: velocity; acceleration; orientation; and the like, through the vehicle communication bus. Imaging system 100 may also receive information from external sources over communication channel 120, such as location coordinates from a global positioning system (GPS), and/or traffic information or safety warnings from other vehicles or highway infrastructure, using a vehicular communication system such as vehicle-to-vehicle (V2V) or vehicle-to-infrastructure (V2I).
[0035] Imaging system 100 may optionally include and/or be associated with additional components not shown in
[0036] The components and devices of imaging system 100 may be based in hardware, software, or combinations thereof. It is appreciated that the functionality associated with each of the devices or components of system 100 may be distributed among multiple devices or components, which may reside at a single location or at multiple locations. For example, the functionality associated with controller 106 or image processor 108 may be distributed between multiple controllers or processing units.
[0037] According to an embodiment of the present invention, imaging system 100 is mounted onto a vehicle. The term “vehicle” as used herein should be broadly interpreted to refer to any type of transportation device, including but not limited to: an automobile, a motorcycle, a truck, a bus, an aircraft, a boat, a ship, and the like. It is appreciated that the imaging system of the present invention may alternatively be mounted or integrated at least partially on a non-vehicular platform, such as a stationary, portable or moveable platform (e.g., a pole, fence or wall of a secured perimeter or surveillance zone), or further alternatively embedded within a stationary or mobile device (such as a smartphone, a computer, a camera, and the like).
[0038] Reference is now made to
[0039] System 100 images the environment in the vicinity of vehicle 130. In particular, light source 102 emits a series of light pulses 122 to illuminate the scene, at a FOI 132 generally spanning at least the width of the road (including various traffic signs at the side of the road), and camera 104 collects the light pulses 124 reflected from objects in the scene and generates a reflection-based image. Image processor 108 receives the acquired reflection-based image and identifies at least one oversaturated image portion in the reflection-based image, resulting from at least one high-intensity source in the imaged environment. The term “oversaturated image portion”, as used herein, refers to image content characterized by excessive brightness (i.e., appearing overexposed or “washed out”) that renders the image details unclear, ambiguous and/or indecipherable, as a result of the reflection signal exceeding the dynamic range of the associated sensor pixel and leading to pixel saturation and perhaps also photon overflow into neighboring pixels.
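The identification of oversaturated image portions can be sketched as a thresholding step over the sensor read-out. This sketch is not from the patent: the near-saturation fraction and the 8-bit full-well value are assumptions, and a production implementation would likely also group adjacent flagged pixels into connected regions.

```python
import numpy as np

def find_oversaturated(img, full_well=255, frac=0.98):
    """Return a boolean mask of pixels at or near the saturation level,
    i.e., pixels whose reflection signal exceeded the sensor's dynamic
    range (the condition described for 'oversaturated image portions')."""
    return img >= frac * full_well

# Toy 3x3 frame: a few pixels pinned at or near full scale.
frame = np.array([[10, 250, 255],
                  [30, 255, 254],
                  [12,  40,  20]], dtype=np.uint8)
mask = find_oversaturated(frame)
```

Pixels flagged by the mask are the candidates whose content image processor 108 must recover from supplementary image information.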
[0040] Image processor 108 uses supplementary image information acquired by image sensor 112, in order to decipher or interpret the oversaturated image portion in the reflected image. The term “interpret” is used herein broadly as obtaining any meaningful information from the oversaturated image portion, such as for example, identifying the type and/or the content of a road sign associated with the oversaturated image portion. The supplementary information may take on multiple forms, as elaborated upon hereinbelow. Image processor 108 may detect and identify the presence of an object of interest in the oversaturated image portion using the supplementary image information, and provide an indication of the detected object to a user of system 100.
[0041] According to a first embodiment of the present invention, the supplementary image information is obtained from a low-illumination secondary image frame acquired using a reduced illumination gating scheme. Reference is now made to
[0042] According to a second embodiment of the present invention, the supplementary image information is obtained from low-illumination secondary image content that appears in the acquired image frame due to internal reflections of the reflected pulses between optical elements of the camera. Reference is now made to
[0043] According to a third embodiment of the present invention, the supplementary image information is obtained from a low-illumination secondary image frame acquired by residual photon accumulation when image sensor 112 is in a non-exposure state. Reference is now made to
[0044] Image processor 108 processes the reduced illumination secondary image 200 resulting from the residual photon accumulation, and identifies image portions 202, 204, 206, 208 corresponding to the respective oversaturated image portions 192, 194, 196, 198 of first image frame 190. Image processor 108 interprets and identifies the features contained within image portions 202, 204, 206, 208 (as representing: a retroreflective sign (202); vehicle headlights (204, 206); and a vehicle license plate (208), respectively), including relevant details which were imperceptible in main image 190. Image processor 108 may optionally generate a merged image (not shown), by combining image portions 202, 204, 206, 208 with main image 190, using suitable image fusion techniques. Display 110 may then display the merged image frame (and/or main image frame 190 and/or reduced illumination image frame 200) to the user. The respective image portions 202, 204, 206, 208 (or an image fusion version thereof) will thus appear clearer and more easily discernible to the viewer, as compared to the corresponding oversaturated image portions in the first image 190. Instead of (or in addition to) being displayed, the merged image may be used directly by an automated night vision or driving assistance application. It is appreciated that the third embodiment of the present invention may also be implemented using passive imaging (rather than active illumination imaging), where image frames 190, 200 are captured without the use of light source 102.
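The merge step described above can be sketched as substituting the saturated regions of the main image with rescaled content from the low-illumination secondary image. This is an illustrative sketch, not the patented fusion method: the linear gain model (ratio of gating cycles) and all names are assumptions, and practical image fusion would also blend region boundaries.

```python
import numpy as np

def merge_hdr(main_img, secondary_img, main_cycles, sec_cycles, sat=255):
    """Replace saturated pixels of the main frame with secondary-frame
    content scaled by the ratio of gating cycles, so the substituted
    intensities are radiometrically comparable to the main frame."""
    main_f = main_img.astype(np.float64)
    gain = main_cycles / sec_cycles
    sec_f = secondary_img.astype(np.float64) * gain
    # Keep main-frame pixels where they are valid; substitute elsewhere.
    return np.where(main_img >= sat, sec_f, main_f)

main_img = np.array([[100, 255],
                     [255,  50]], dtype=np.uint8)
sec_img = np.array([[2, 4],
                    [6, 1]], dtype=np.uint8)
out = merge_hdr(main_img, sec_img, main_cycles=500, sec_cycles=10)
```

The merged array now carries values above the 8-bit range in the formerly saturated positions, i.e., an extended (HDR) representation even though the sensor read-out itself was linear.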
[0045] Image processor 108 may be configured to provide an indication of at least one object of interest in the environment, such as by detecting and identifying a high-intensity source associated with oversaturated image content in an acquired image. An object of interest may represent a unified physical object or entity located in the real-world environment, or may represent a general environmental feature or collection of features (and not necessarily a unified physical object). For example, processor 108 may detect obstacles or relevant objects located along the current path or route of vehicle 130, such as the presence of: a pedestrian, another vehicle, an animal, a traffic sign, and the like. Processor 108 may designate a detected object of interest in the environment for further investigation and/or to be brought to the attention of a user (e.g., a driver or passenger of vehicle 130). System 100 may generate an alert or notification relating to an object of interest, such as by providing a visual or audio indication thereof. For example, system 100 may present supplementary content (e.g., augmented reality) overlaid onto displayed images (e.g., fused image 170), such as text/graphics/symbols indicating information or characteristics associated with objects of interest in the imaged environment (e.g., type of object; distance from vehicle 130; level of potential danger; and the like). The alert or notification may be integrated with a driving assistance module in vehicle 130 configured to provide a driving assistance feature, such as: forward collision warning (FCW), lane departure warning (LDW), traffic sign recognition (TSR), high beam control, vehicle/pedestrian/animal detection, and any combination thereof.
[0046] Filters 116 may also be used to provide supplementary image information and to assist in identifying a high-intensity source associated with oversaturated image content. For example, camera 104 may include a spectral filter 116, configured to direct selected wavelengths to different pixels of image sensor 112. A spectral filter 116 may be embodied by a repeating cluster of a 2 by 2 pixel array, with the cluster repeating itself upon a portion (or the entirety) of the image sensor array, where for example, the first pixel is configured to receive light in the Blue spectrum, the second pixel is configured to receive light in the Green spectrum, the third pixel is configured to receive light in the Red spectrum, and the fourth pixel is configured to receive light in the NIR spectrum.
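The 2-by-2 filter cluster described above can be modeled as a tiled channel mosaic. The channel ordering follows the example in the text (Blue, Green, Red, NIR); the function itself is an illustrative sketch, not part of the disclosure, and assumes even sensor dimensions.

```python
import numpy as np

def filter_mosaic(rows, cols):
    """Return a per-pixel channel label array for a sensor covered by a
    repeating 2x2 spectral filter cluster (B, G / R, NIR)."""
    cluster = np.array([["B", "G"],
                        ["R", "NIR"]])
    return np.tile(cluster, (rows // 2, cols // 2))

# A toy 4x4 sensor patch covered by the repeating cluster.
m = filter_mosaic(4, 4)
```

Reading out only the NIR-labeled pixels yields the active-illumination channel, while the B/G/R pixels supply passive color context for identifying the high-intensity source.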
[0047] Image sensor 112 may include an anti-blooming mechanism, configured to direct excess saturation from a respective sensor array pixel to the neighboring pixels, in order to avoid or minimize a blooming (“halo”) effect in the sensor image. The anti-blooming mechanism may be embodied by an anti-blooming (AB) gate configured to reset the photodiode and direct excess saturation to neighboring pixels. Alternatively, blooming can be controlled by setting the voltage on the reset gate (instead of ground) during integration. Further alternatively, anti-blooming may be achieved by implanting a drain in the sensor pixel for drawing off excess photons from the reflection signal. The anti-blooming mechanism may be adaptively controlled, such as by using feedback from previous image frames to determine which sensor pixels to drain and by how much. An anti-blooming mechanism is particularly applicable for the third embodiment of the present invention (acquiring a secondary image frame by residual photon accumulation), but is generally applicable for the first and second embodiments as well.
[0048] In addition to supplementary imaging of high-intensity sources in accordance with at least one of the three aforementioned approaches, imaging system 100 may also adaptively control gating parameters in order to minimize excessively intense reflections from the high-intensity sources. For example, controller 106 may modify at least one parameter of light source 102 or camera 104, such as in real-time. Examples of such parameters include: the pulse width; the pulse intensity (peak power); the pulse shape; the number of emitted pulses; a gating cycle duration; a delay time of at least one gating cycle; the frame rate; at least one DOF; a maximum range to be imaged; the timing and duration of camera 104 activation (e.g., exposure rise and fall times); the supply voltage of the gating transfer gate of a sensor pixel; the gain of image sensor 112; intrinsic parameters of light source 102; intrinsic parameters of image sensor 112 or camera 104; the sensitivity of image sensor 112 (e.g., sensor control and/or gain voltages); and the like. For example, if a high-intensity source is known to be present at a particular distance from vehicle 130 (e.g., at approximately 50 meters), then the illumination pulses 122 and reflected pulses 124 may be established so as to image distance slices (DOFs) located before and after the high-intensity source, while “skipping over” the distance slice in the immediate vicinity of the high-intensity source (e.g., by imaging a first DOF of up to 45 meters, and a second DOF from 55 meters and beyond). This gating scheme can be updated (e.g., in real-time) to take into account the movements of vehicle 130 and/or the movements of the high-intensity source (i.e., by updating the DOFs accordingly). Controller 106 may take into account the environmental conditions in the imaged scene when adjusting the parameters, such as, for example, the weather and climate or the road conditions.
Imaging system 100 may implement a selected gating scheme so as to generate successive image frames at varying illumination levels, allowing for image fusion between low and high illumination frames, and the subsequent interpretation of image details associated with high-intensity sources that are indecipherable in the high illumination image frames.
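As an informal illustration of the "skipping over" gating scheme described above, the sketch below splits the imaged range into DOFs before and after a known high-intensity source, and converts a depth slice into gate timing via the round-trip relation t = 2R/c. The guard band and function names are assumptions for this sketch, not claimed subject matter:

```python
C = 299_792_458.0  # speed of light, m/s

def gate_timing(r_min_m, r_max_m):
    """Return (delay_s, gate_open_s) to image the depth slice [r_min, r_max]:
    open the gate after the round-trip time to r_min, and keep it open for
    the round-trip time across the slice."""
    delay = 2.0 * r_min_m / C
    gate = 2.0 * (r_max_m - r_min_m) / C
    return delay, gate

def skip_source_dofs(max_range_m, source_range_m, guard_m=5.0):
    """Split [0, max_range] into DOFs before and after the source,
    leaving a guard band of +/- guard_m around it unimaged."""
    dofs = []
    if source_range_m - guard_m > 0:
        dofs.append((0.0, source_range_m - guard_m))
    if source_range_m + guard_m < max_range_m:
        dofs.append((source_range_m + guard_m, max_range_m))
    return dofs
```

With a source at 50 meters and a 5-meter guard band, this reproduces the example in the text: a first DOF up to 45 meters and a second DOF from 55 meters onward.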
[0049] A further approach to minimizing excessively intense reflections from high-intensity sources located at known distances is the use of light polarization, by means of a polarization filter (as discussed hereinabove). For another example, the phenomenon of residual “ghost” images, which refers to the appearance of phantom copies of the high-intensity sources in the acquired image (generally caused by internal lens reflections), may be mitigated by tilting (i.e., adjusting the inclination of) a spectral filter 116 and/or optics 114 of camera 104.
[0050] Yet another approach for minimizing excessively intense reflections from high-intensity sources involves using the addressable switching mechanisms of image sensor 112. In particular, if image sensor 112 is configured such that individual pixels, or groups of pixel clusters, may be switched (gateable) independently, then after identifying oversaturated image portions in an acquired image frame, the internal switch circuitry of image sensor 112 may be configured so that the relevant sensor pixels (associated with the oversaturated image portions) will have fewer pulses/exposures (gates), thus accumulating a lower intensity level of the incident photons, relative to the other sensor pixels, which will remain at the default switching setting.
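A minimal sketch of this per-pixel gating idea follows: pixels flagged as oversaturated in the previous frame are assigned fewer gating cycles for the next frame, so they accumulate proportionally less incident light. The saturation threshold and gate counts are illustrative assumptions, not values specified in the disclosure:

```python
import numpy as np

DEFAULT_GATES = 1000  # assumed default pulses/exposures per frame
REDUCED_GATES = 10    # assumed reduced count for oversaturated pixels

def plan_gate_counts(prev_frame, saturation_level=4095):
    """Return a per-pixel map of gating cycles for the next frame:
    pixels at or above the saturation level get fewer gates, all
    other pixels keep the default switching setting."""
    gates = np.full(prev_frame.shape, DEFAULT_GATES, dtype=np.int32)
    gates[prev_frame >= saturation_level] = REDUCED_GATES
    return gates
```

In practice this map would drive the internal switch circuitry of the sensor, per pixel or per pixel cluster, depending on the addressing granularity the sensor supports.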
[0051] It is noted that the main image (e.g., image frames 150, 190) and/or the supplementary image information (e.g., image frames 160, 180, 200) may be pre-processed prior to fusion, such as by undergoing fixed-pattern noise (FPN) suppression, contrast enhancement, gamma correction, and the like. Image processor 108 may also implement image registration of the image frames if necessary, such as when vehicle 130 is in motion during the operation of imaging system 100, or if elements in the imaged environment are moving with respect to imaging system 100.
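For illustration, simple versions of two of the pre-processing steps mentioned above, assuming a dark-frame model of fixed-pattern noise and a power-law gamma curve (both common conventions, not details of the disclosure):

```python
import numpy as np

def suppress_fpn(frame, dark_frame):
    """Subtract a calibrated dark frame to remove fixed-pattern noise,
    clipping negative results to zero."""
    return np.clip(frame.astype(np.float64) - dark_frame, 0.0, None)

def gamma_correct(frame, gamma=2.2, max_val=4095.0):
    """Apply power-law gamma correction to a frame normalized to max_val."""
    return max_val * (np.clip(frame, 0.0, max_val) / max_val) ** (1.0 / gamma)
```

Both operations preserve the frame dimensions, so they can be applied to the main and secondary frames independently before registration and fusion.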
[0052] According to an embodiment of the present invention, image processor 108 may further perform character recognition of objects in the scene with text or numerical data, such as traffic signs, for example by using optical character recognition (OCR) techniques known in the art. Image processor 108 may also analyze textual or numerical content to provide supplemental driving assistance features, such as to identify potential driving hazards or for navigation purposes. For example, system 100 may notify the driver of vehicle 130 if he/she is turning onto the correct road by analyzing the content of traffic or street signs (representing a high-intensity source) in the vicinity of the vehicle 130, optionally in conjunction with available maps and real-time location information of vehicle 130. System 100 may determine the optimal illumination level for imaging, in order for the visibility of characters on the sign to be as high as possible, and control light source 102 accordingly. For example, controller 106 may adjust the operation parameters of light source 102 and/or camera 104 such as to acquire the lowest illumination image that will enable accurate pattern and text recognition (e.g., in order to conserve power and to minimize saturation effects). Following a general determination of the type of traffic or street sign (or other high-intensity source), such as based on the shape and/or image information associated with the sign (e.g., text/numerical data), image processor 108 may also add color information to the traffic signs on an acquired or fused image. Such color information may also be obtained from spectral filters 116 of camera 104. Active-gated imaging may also be applied for removing unwanted markings in the image frames, such as road tar marks or concrete grooves.
[0053] Imaging system 100 may optionally include additional detection/measurement units or imaging sources (not shown), such as: a radar detector; a lidar detector; stereoscopic cameras; and the like. The additional detection sources may be remotely located from at least some components of system 100, and may forward measurement data to system 100 via an external (e.g., wireless) communication link. The information obtained from the additional sources may be used to enhance an acquired or generated (fused) image, and/or to control the operation of light source 102 or camera 104. For example, system 100 may obtain distance information relative to potential high-intensity sources in the environment (e.g., from a laser rangefinder), and controller 106 may then adjust at least one gating parameter accordingly, such as to minimize or avoid excessive reflections from the DOF where the high-intensity source is located. System 100 may also utilize distance information for object detection and identification purposes. For another example, system 100 may obtain information relating to the environmental conditions in the imaged environment, such as for example: lighting conditions (e.g., sunny or overcast); weather or climate conditions (e.g., rain, fog, or snow); time of day (e.g., day or night); month of year or season; and the like. The obtained environmental conditions may be utilized for enhancing an acquired or generated (fused) image (e.g., adjusting the brightness level in the image); for controlling the operation of light source 102 and/or camera 104 (e.g., adjusting at least one gating parameter); and/or for enhancing object detection and identification. For yet another example, image processor 108 may use a digital map or other location data source to assist and enhance the interpretation of high-intensity sources (e.g., to navigate a driver of vehicle 130 based on character recognition of street signs in the image in conjunction with map analysis).
[0054] According to another embodiment of the present invention, the frame duration (T_FRAME) of the main image and/or the (low-illumination) secondary image may be selectively controlled. For example, the frame readout (i.e., reading the accumulated signal data for each pixel) may be performed immediately following the final gating cycle of that frame, or during the gating cycles of the subsequent image frame, so as to minimize the total frame duration, thus reducing the collected ambient light in that image frame (which may degrade image quality). For example, if the cycle duration (T_CYCLE) for each gating cycle is 5 μs, then a main image frame characterized by 1000 pulses/exposures would have a total frame duration of: T_FRAME-main=5 μs×1000 pulses/exposures=5 ms, while a secondary image frame characterized by 5 pulses/exposures would have a total frame duration of: T_FRAME-secondary=5 μs×5 pulses/exposures=0.025 ms (not including the frame readout time).
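The frame-duration arithmetic above can be restated as a trivial sketch (illustrative only; readout time is excluded, as in the text):

```python
def frame_duration_s(t_cycle_s, n_cycles):
    """Total frame duration: cycle duration times number of gating cycles."""
    return t_cycle_s * n_cycles

T_CYCLE = 5e-6  # 5 microseconds per gating cycle, as in the example

main_frame = frame_duration_s(T_CYCLE, 1000)    # 1000 pulses/exposures -> 5 ms
secondary_frame = frame_duration_s(T_CYCLE, 5)  # 5 pulses/exposures -> 0.025 ms
```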
[0055] It will be appreciated that the three disclosed approaches of supplementary imaging of high-intensity sources may substantially enhance the dynamic range of an acquired active-gated image. For example, the linear gain of a merged image (generated by fusing an acquired image with supplementary image information in accordance with an embodiment of the present invention), may be characterized by a high dynamic range of approximately 80-85 dB (or greater), as compared to a standard range of approximately 60 dB, for a linear response sensor.
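For illustration only, the dynamic-range figures quoted above can be related by standard decibel arithmetic (20·log10 of a signal ratio). The exposure-ratio bound below is a simplifying assumption for this sketch, not a statement of the disclosed performance:

```python
import math

def dynamic_range_db(full_well_e, noise_floor_e):
    """Dynamic range of a linear sensor, in dB, from its signal extremes."""
    return 20.0 * math.log10(full_well_e / noise_floor_e)

def fused_range_db(base_db, cycles_main, cycles_secondary):
    """Upper bound on the fused dynamic range: the base range plus the
    exposure (gating-cycle) ratio between the two frames, in dB."""
    return base_db + 20.0 * math.log10(cycles_main / cycles_secondary)
```

For instance, a sensor spanning a 1000:1 linear signal range yields the standard 60 dB figure, and fusing frames whose gating-cycle counts differ by a factor of 200 can, at best, add 20·log10(200) ≈ 46 dB on top of that; practical gains such as the 80-85 dB cited above fall below this bound due to noise and overlap requirements.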
[0056] It is further appreciated that imaging system 100 may be configured to operate during both day and night, and in variable weather and climate conditions (e.g., clear and sunny, overcast, rain, fog, snow, hail, etc.), allowing for effective imaging and object identification by system 100 in varying environmental conditions (e.g., whether driving through dark tunnels or in brightly lit outdoor environments).
[0057] According to another embodiment of the present invention, a plurality of imaging systems, similar to system 100 of
[0058] Reference is now made to
[0059] In procedure 246, at least one oversaturated image portion resulting from a respective high-intensity source is identified in the main image. With reference to
[0060] In procedure 248, the oversaturated image portion in the main image is interpreted using supplementary image information acquired by the image sensor. Procedure 248 may be implemented via any one of sub-procedures 250, 252, 254. In sub-procedure 250, a low illumination image frame is acquired using a reduced illumination gating scheme.
[0061] Referring to
[0062] In optional procedure 256, a merged image is generated by fusing the supplementary image information with the main image. With reference to
[0063] In optional procedure 260, the merged image is displayed. With reference to
[0064] In optional procedure 258, at least one high-intensity source is detected and identified in the main image and/or merged image using the supplemental image information. Referring to
[0065] In optional procedure 262, an alert or notification relating to a detected high-intensity source is provided. With reference to
[0066] The method of
[0067] While certain embodiments of the disclosed subject matter have been described, so as to enable one of skill in the art to practice the present invention, the preceding description is intended to be exemplary only. It should not be used to limit the scope of the disclosed subject matter, which should be determined by reference to the following claims.