Vehicular vision system
11518377 · 2022-12-06
Assignee
Inventors
- Yuesheng Lu (Farmington Hills, MI, US)
- Joel S. Gibson (Linden, MI, US)
- Duane W. Gebauer (Gregory, MI, US)
- Richard D. Shriner (Grand Blanc, MI, US)
- Patrick A. Miller (Grand Blanc, MI, US)
CPC classification
G08G1/168
PHYSICS
H04N25/61
ELECTRICITY
B60R1/00
PERFORMING OPERATIONS; TRANSPORTING
B60W30/09
PERFORMING OPERATIONS; TRANSPORTING
B62D15/0295
PERFORMING OPERATIONS; TRANSPORTING
B60R2300/305
PERFORMING OPERATIONS; TRANSPORTING
International classification
B60W30/09
PERFORMING OPERATIONS; TRANSPORTING
B62D15/02
PERFORMING OPERATIONS; TRANSPORTING
B60R1/00
PERFORMING OPERATIONS; TRANSPORTING
Abstract
A vehicular vision system includes a camera, a distance sensor and a controller having at least one processor. Image data captured by the camera and sensor data captured by the distance sensor are processed at the controller. The controller, responsive to processing of captured image data and of captured sensor data, detects an object. The controller determines the distance to the detected object based at least in part on difference between the positions of the detected object in captured image data and in captured sensor data. The controller, responsive to processing of captured image data and of captured sensor data, and responsive to the determined distance to the detected object, determines that the detected object represents a collision risk. The controller alerts a driver of the vehicle of the collision risk and/or controls the vehicle to mitigate the collision risk.
Claims
1. A vehicular vision system, said vehicular vision system comprising: a camera comprising a lens and an image sensor, wherein the camera is disposed at a vehicle equipped with said vehicular vision system and has a field of view exterior of the equipped vehicle; a distance sensor disposed at the equipped vehicle and having a field of sensing exterior of the equipped vehicle; wherein the distance sensor comprises a plurality of infrared light-emitting light sources, and wherein the distance sensor senses infrared light; a controller comprising at least one processor, wherein image data captured by the camera and sensor data captured by the distance sensor are processed at the controller; wherein the controller, responsive to processing at the controller of image data captured by the camera and of sensor data captured by the distance sensor, detects an object present in the field of view of the camera and in the field of sensing of the distance sensor; wherein the controller determines distance to the detected object based at least in part on difference between position of the detected object in image data captured by the camera and position of the detected object in sensor data captured by the distance sensor; wherein the controller, responsive to processing at the controller of image data captured by the camera and of sensor data captured by the distance sensor, and responsive to the determined distance to the detected object, determines that the detected object represents a collision risk; and wherein, responsive to determination that the detected object represents a collision risk, the controller controls the equipped vehicle to mitigate the collision risk.
2. The vehicular vision system as claimed in claim 1, comprising a display disposed in an interior of the equipped vehicle and viewable by a driver of the equipped vehicle, and wherein the display displays video images derived from image data captured by the camera, and wherein the displayed video images include images of the detected object.
3. The vehicular vision system as claimed in claim 2, wherein the display displays an overlay that highlights the displayed detected object.
4. The vehicular vision system as claimed in claim 2, wherein the camera is positioned at an actual viewing angle, and wherein said vehicular vision system has a bird's eye viewing mode in which the displayed video images appear to be at an apparent viewing angle that is more vertically oriented than the actual viewing angle of the camera.
5. The vehicular vision system as claimed in claim 4, wherein the controller compresses a lower portion of captured image data and stretches an upper portion of captured image data so that the apparent viewing angle is more vertically oriented than the actual viewing angle.
6. The vehicular vision system as claimed in claim 4, wherein the equipped vehicle has a towing hitch, and wherein the towing hitch is in the field of view of the camera, and wherein said vehicular vision system has a hitch viewing mode in which the towing hitch in the displayed video images is magnified to a magnification level that is greater than a magnification level provided in the bird's eye viewing mode.
7. The vehicular vision system as claimed in claim 2, wherein the controller processes image data captured by the camera and processes sensor data captured by the distance sensor to determine a projected trajectory for the equipped vehicle, and wherein the controller applies a projected path overlay at the displayed video images, and wherein the projected path overlay comprises a representation of the projected trajectory.
8. The vehicular vision system as claimed in claim 2, wherein the controller modifies raw image data captured by the camera to produce a processed image, and wherein the controller determines position of a selected feature in the field of view of the camera and applies an overlay on the selected feature in initial raw image data to produce an initial processed image, and wherein the controller thereafter holds the overlay in a fixed position in subsequent processed images regardless of movement of the selected feature in subsequent raw image data.
9. The vehicular vision system as claimed in claim 8, wherein the selected feature is a feature on a trailer being towed by the equipped vehicle.
10. The vehicular vision system as claimed in claim 1, wherein, responsive to determination that the detected object represents the collision risk, the controller alerts a driver of the equipped vehicle of the collision risk.
11. The vehicular vision system as claimed in claim 1, wherein, responsive to determination that the detected object represents the collision risk, the controller controls at least one vehicle component to mitigate collision by the equipped vehicle with the detected object.
12. The vehicular vision system as claimed in claim 11, wherein, responsive to determination that the detected object represents the collision risk, the controller controls braking of the equipped vehicle.
13. The vehicular vision system as claimed in claim 1, wherein image data captured by the camera has a first resolution and sensor data captured by the distance sensor has a second resolution that is lower than the first resolution.
14. The vehicular vision system as claimed in claim 1, wherein, prior to determining whether the detected object represents a collision risk, the controller determines whether the detected object falls within a category of objects that do not represent a collision risk based at least in part on processing of image data captured by the camera.
15. The vehicular vision system as claimed in claim 1, wherein, prior to determining whether the detected object represents a collision risk, the controller determines whether the detected object falls within a category of objects that represent objects of interest based at least in part on processing of image data captured by the camera.
16. The vehicular vision system as claimed in claim 15, wherein the category of objects that represent objects of interest include at least one selected from the group consisting of people, animals and other vehicles.
17. A vehicular vision system, said vehicular vision system comprising: a camera comprising a lens and an image sensor, wherein the camera is disposed at a vehicle equipped with said vehicular vision system and has a field of view exterior of the equipped vehicle; a distance sensor disposed at the equipped vehicle and having a field of sensing exterior of the equipped vehicle; wherein the distance sensor comprises a plurality of infrared light-emitting light sources, and wherein the distance sensor senses infrared light; a controller comprising at least one processor, wherein image data captured by the camera and sensor data captured by the distance sensor are processed at the controller; wherein the controller, responsive to processing at the controller of image data captured by the camera and of sensor data captured by the distance sensor, detects an object present in the field of view of the camera and in the field of sensing of the distance sensor; a display disposed in an interior of the equipped vehicle and viewable by a driver of the equipped vehicle, and wherein the display displays video images derived from image data captured by the camera, and wherein the displayed video images include images of the detected object; wherein the controller determines distance to the detected object based at least in part on difference between position of the detected object in image data captured by the camera and position of the detected object in sensor data captured by the distance sensor; wherein the controller, responsive to processing at the controller of image data captured by the camera and of sensor data captured by the distance sensor, and responsive to the determined distance to the detected object, determines that the detected object represents a collision risk; and wherein, responsive to determination that the detected object represents the collision risk, the controller alerts a driver of the equipped vehicle of the collision risk.
18. The vehicular vision system as claimed in claim 17, wherein the display displays an overlay that highlights the displayed detected object.
19. The vehicular vision system as claimed in claim 17, wherein the camera is positioned at an actual viewing angle, and wherein said vehicular vision system has a bird's eye viewing mode in which the displayed video images appear to be at an apparent viewing angle that is more vertically oriented than the actual viewing angle of the camera.
20. The vehicular vision system as claimed in claim 19, wherein the controller compresses a lower portion of captured image data and stretches an upper portion of captured image data so that the apparent viewing angle is more vertically oriented than the actual viewing angle.
21. The vehicular vision system as claimed in claim 19, wherein the equipped vehicle has a towing hitch, and wherein the towing hitch is in the field of view of the camera, and wherein said vehicular vision system has a hitch viewing mode in which the towing hitch in the displayed video images is magnified to a magnification level that is greater than a magnification level provided in the bird's eye viewing mode.
22. The vehicular vision system as claimed in claim 17, wherein the controller processes image data captured by the camera and processes sensor data captured by the distance sensor to determine a projected trajectory for the equipped vehicle, and wherein the controller applies a projected path overlay at the displayed video images, and wherein the projected path overlay comprises a representation of the projected trajectory.
23. The vehicular vision system as claimed in claim 17, wherein the controller modifies raw image data captured by the camera to produce a processed image, and wherein the controller determines position of a selected feature in the field of view of the camera and applies an overlay on the selected feature in initial raw image data to produce an initial processed image, and wherein the controller thereafter holds the overlay in a fixed position in subsequent processed images regardless of movement of the selected feature in subsequent raw image data.
24. The vehicular vision system as claimed in claim 23, wherein the selected feature is a feature on a trailer being towed by the equipped vehicle.
25. A vehicular vision system, said vehicular vision system comprising: a camera comprising a lens and an image sensor, wherein the camera is disposed at a vehicle equipped with said vehicular vision system and has a field of view exterior of the equipped vehicle; a distance sensor disposed at the equipped vehicle and having a field of sensing exterior of the equipped vehicle; wherein the distance sensor comprises a plurality of infrared light-emitting light sources, and wherein the distance sensor senses infrared light; a controller comprising at least one processor, wherein image data captured by the camera and sensor data captured by the distance sensor are processed at the controller; wherein the controller, responsive to processing at the controller of image data captured by the camera and of sensor data captured by the distance sensor, detects an object present in the field of view of the camera and in the field of sensing of the distance sensor; wherein the controller determines distance to the detected object based at least in part on difference between position of the detected object in image data captured by the camera and position of the detected object in sensor data captured by the distance sensor; wherein the controller determines whether the detected object falls within a category of objects that represent objects of interest based at least in part on processing of image data captured by the camera; wherein the controller, responsive to processing at the controller of image data captured by the camera and of sensor data captured by the distance sensor, and responsive to determination that the detected object falls within a category of objects that represent objects of interest, and responsive to the determined distance to the detected object, determines that the detected object represents a collision risk; wherein, responsive to determination that the detected object represents the collision risk, the controller controls braking of the equipped vehicle to mitigate collision by the equipped vehicle with the detected object; and wherein the detected object comprises a person.
26. The vehicular vision system as claimed in claim 25, wherein image data captured by the camera has a first resolution and sensor data captured by the distance sensor has a second resolution that is lower than the first resolution.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) The present invention will now be described by way of example only with reference to the attached drawings.
DETAILED DESCRIPTION OF THE INVENTION
(19) In an embodiment, the camera system 100 includes an image processor 102 and a microcontroller 104 having flash memory 110, which communicate over an I2C bus 106 and an SPI bus 108.
(20) The I2C bus 106 is used by the microcontroller 104 to send command data to the image processor 102, including for example memory instructions for where to draw application data including for example, overlay data. The SPI bus 108 is used to communicate the application data between the microcontroller 104 and the image processor (specifically between the flash memory 110 contained on the microcontroller 104 and the image processor 102).
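A minimal sketch of this split command/data path follows, assuming hypothetical HAL functions i2c_write() and spi_transfer() and an invented register address; none of these names appear in the patent itself:

    /* Hypothetical sketch of the I2C command / SPI data split described
     * above. i2c_write(), spi_transfer(), the device address and
     * REG_OVERLAY_DST are illustrative assumptions. */
    #include <stddef.h>
    #include <stdint.h>

    #define IMG_PROC_I2C_ADDR  0x48   /* assumed 7-bit image processor address */
    #define REG_OVERLAY_DST    0x10   /* assumed register: overlay destination */

    extern int i2c_write(uint8_t addr, uint8_t reg, const uint8_t *buf, uint32_t len);
    extern int spi_transfer(const uint8_t *tx, uint8_t *rx, uint32_t len);

    /* Tell the image processor where to draw the overlay (command path, I2C),
     * then stream the overlay data from the microcontroller's flash memory
     * (data path, SPI). */
    int send_overlay(uint32_t dst_offset, const uint8_t *overlay, uint32_t len)
    {
        uint8_t cmd[4] = {
            (uint8_t)(dst_offset >> 24), (uint8_t)(dst_offset >> 16),
            (uint8_t)(dst_offset >> 8),  (uint8_t)dst_offset
        };
        if (i2c_write(IMG_PROC_I2C_ADDR, REG_OVERLAY_DST, cmd, sizeof cmd) != 0)
            return -1;
        return spi_transfer(overlay, NULL, len);
    }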
(21) The implementation of the emulated flash provides a convenient method for the camera system 100 to access imager-specific data, including custom image settings, overlays, and digital correction algorithms. The imager-specific data is organized into a series of records, including custom register settings sets, digital correction algorithms, overlays, and imager patches. Each record is organized into tables indexed by a table of contents. Each table is in turn indexed by a master table of contents. Furthermore, initialization tables are made available for custom initialization routines.
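The record/table organization described above might be laid out as follows; the field names and sizes are assumptions for illustration only, not the patented format:

    /* Illustrative layout of the emulated-flash organization: records grouped
     * into tables, each table indexed by a table of contents, and the tables
     * indexed by a master table of contents. */
    #include <stdint.h>

    enum record_type {
        REC_REGISTER_SET,     /* custom register settings set */
        REC_CORRECTION_ALGO,  /* digital correction algorithm  */
        REC_OVERLAY,          /* overlay graphic               */
        REC_IMAGER_PATCH,     /* imager patch                  */
        REC_INIT_TABLE        /* custom initialization routine */
    };

    struct record_entry {
        uint8_t  type;        /* one of enum record_type */
        uint32_t offset;      /* byte offset of the record in emulated flash */
        uint32_t length;      /* record length in bytes */
    };

    struct table_of_contents {
        uint16_t            num_records;
        struct record_entry entry[];   /* one entry per record */
    };

    struct master_toc {
        uint16_t num_tables;
        uint32_t table_offset[];       /* offset of each table's TOC */
    };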
(22) Thus, other advantages flowing from the invention include the fact that flash drivers do not need to be developed to support an external device. Rather, the microcontroller has a means for performing flash-based operations on program memory. The bootloader also has reduced complexity, as the microcontroller does not need to maintain a separate flash driver for an external flash. The number of physical connections/communication channels is also reduced. With emulated flash, a single SPI communication channel exists between the microcontroller and the image processor. With an external flash, an additional SPI connection between the microcontroller and the external flash would be required in order to allow for re-flashing.
(23) Reference is made to the drawings, which show a camera 10 for a vehicle. The camera 10 includes a lens assembly 12, an imager 16 and a microcontroller 18.
(24) The lens assembly 12 includes a lens 22 and a lens barrel 24. The lens 22 may be held in the lens barrel 24 in any suitable way. The lens 22 preferably includes optical distortion correction features and is tailored to some extent for use as a rearview camera for a vehicle. The distance between the top of the lens 22 and the plane of the imager is preferably about 25 mm.
(25) The lens barrel 24 may have either a threaded or a threadless exterior wall. In embodiments wherein the exterior wall is threaded, the thread may be an M12×0.5 thread. The barrel 24 is preferably made from aluminum or plastic; however, other materials of construction are also possible. The lens barrel 24 preferably has a glue flange thereon.
(26) A lens retainer cap, if provided, preferably has a diameter of about 20 mm. The imager 16 is preferably a ¼ inch CMOS sensor with 640 × 480 pixels, with a pixel size of 5.6 micrometers × 5.6 micrometers. The imager preferably has an active sensor area of 3.584 mm horizontal × 2.688 mm vertical, which gives a diagonal length of 4.480 mm.
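The stated active area and diagonal follow directly from the pixel count and pitch, as the following quick check (illustrative only) confirms:

    /* Quick check of the imager geometry quoted above: 640 x 480 pixels at a
     * 5.6 um pitch gives the stated active area and diagonal. */
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        const double pitch_mm = 5.6e-3;      /* 5.6 micrometers in mm */
        double w = 640 * pitch_mm;           /* 3.584 mm horizontal   */
        double h = 480 * pitch_mm;           /* 2.688 mm vertical     */
        double diag = sqrt(w * w + h * h);   /* 4.480 mm diagonal     */
        printf("%.3f mm x %.3f mm, diagonal %.3f mm\n", w, h, diag);
        return 0;
    }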
(27) The field of view of the camera 10 may be about 123.4 degrees horizontally × about 100.0 degrees vertically, which gives a diagonal field of view of about 145.5 degrees. It will be understood that other fields of view are possible, and are described further below.
(28) The F number of the lens is preferably about 2.0 or lower.
(29) The relative illumination is preferably greater than about 50% inside an image circle having a diameter of about 4.480 mm.
(30) The geometrical distortion is preferably better than about −47%.
(31) The modulation transfer function (MTF) values for both tangential MTF and sagittal MTF are preferably greater than about 0.45 at 45 lp/mm on the lens axis, and greater than or equal to about 0.30 at 45 lp/mm off the lens axis between about 0 degrees and about 60 degrees. Exemplary curves for the MTF value at 0 degrees and at 60 degrees are shown in the drawings.
(32) The lens 22 preferably has an integrated infrared cutoff filter. The filter can be coated directly on one of the lens elements. Alternatively, the filter can be an add-on thin glass element with the coating thereon. The pass band of the infrared cutoff filter may extend from about 410 nm at the lower edge to about 690 nm at the upper edge.
(33) The infrared cutoff filter preferably has at least about 85% transmission over the pass-band spectrum, and at least about 50% transmission at both 410 nm and 690 nm.
(34) The lens 22 preferably has an anti-reflective coating on each surface of each lens element, except the surface with the infrared cutoff filter thereon and except any surfaces that are cemented to the lens barrel or to some other component.
(35) The image circle diameter of the lens is preferably less than 4.80 mm. The angle between the optical axis of the lens 22 and the reference diameter axis for the barrel 24 is preferably less than 1.0 degrees.
(36) Preferably, the lens 22 is substantially free from artificial optical effects, such as halo, veiling glare, lens flare and ghost images.
(37) Preferably, the lens 22 has a hydrophobic coating on its outer surfaces, for repelling water and for providing a self-cleaning capability to the lens 22.
(38) Preferably, the lens 22 is capable of withstanding the following conditions without any detrimental effects, such as peeling, cracking, crazing, voids or bubbles in the lens elements, dust/water/fluid ingress, moisture condensation, foreign objects, cropping of the field, non-uniform image field or distortion that did not exist prior to the conditions, any visible and irreversible change to the appearance of the lens 22 (including its color and surface smoothness), or any of the aforementioned lens properties changing outside a selected range.
(39) The conditions to be withstood by the lens 22 include:
- subjecting the lens 22 to a temperature of anywhere from −40 degrees Celsius to 95 degrees Celsius;
- enduring 1000 hours at 95 degrees Celsius;
- cycling the lens a selected number of times between −40 and 95 degrees Celsius, with a dwell time at each temperature of at least a selected period of time and a ramping time between temperatures of less than another selected period of time;
- exposing the lens 22 to 85 degrees Celsius and 85% relative humidity for 1200 hours;
- subjecting the lens 22 to 10 cycles of the test profile shown in the drawings; and
- exposing the lens 22 to a set of test chemicals/fluids, as described below.
(40) For each chemical tested, the test is preferably conducted for 24 hours of exposure, in accordance with the following procedure: 1. Place the test sample in a temperature chamber maintained at 40° C., on a test fixture representative of the in-vehicle position, with any protective surrounding structures. 2. Keep the test sample at 40° C. for one hour. Remove the sample from the chamber and apply 100 ml of the test chemical/fluid, by either spraying or pouring, to the front glass surface and upper exterior body. Store the sample at outside ambient temperature (RT) for 1 hour. 3. Return the sample to the chamber and keep it at 40° C. for one hour. Then ramp the chamber temperature up to 70° C. (60° C. for battery acid) within 30 minutes and keep the sample at that temperature for 4 hours (dwell time). Ramp the chamber temperature down to 40° C. within 30 minutes. 4. Repeat steps 2 and 3 for the same fluid, but prolong the dwell time at high temperature from 4 hours to 12 hours. 5. Repeat steps 2, 3 and 4 for the next fluid in the set, continuing the process up to a maximum of four fluids per sample. Further conditions include exposing the lens 22 to the test procedure laid out in IEC 60068-2-60 method 4, whereby the lens 22 is exposed to H₂S at a gas concentration of 10 ppb, SO₂ at 200 ppb, chlorine at 10 ppb and NO₂ at 200 ppb, and subjecting the lens 22 to a UV exposure test as referenced to SAE J1960v002, with a minimum exposure level of 2500 kJ/m². There must be no significant change of color or gloss level, or other visible detrimental surface deterioration, in any part of the lens upper body. The lens upper body includes, but is not limited to, the lens cap surface, the subsurface under the first glass element, and the top glass and its surface coating. Preferably, no cracks, crazing, bubbles, or other defects or particles appear after UV exposure in or on any of the glass or plastic lens elements or their coatings.
(41) The lens cap is preferably black and free of dings, digs, cracks, burrs, scratches and other visible defects. There must be no visible color variation on a lens cap or among lenses.
(42) The coating of the first glass element is preferably free of digs, scratches, peeling, cracks, bubbles and flaking. The color appearance of the AR coating should show no, or minimal, visible color variation among lenses.
(43) The interior of the lens 22 (at the mating surfaces of the constituent lens elements) preferably has little or no moisture inside it. Trapped moisture can cause water spots on the inside surfaces of the lens 22 after cycles of condensation and subsequent evaporation, and can leak out from the lens 22 into portions of the camera 10 containing electrical components, thereby causing problems with those components. Preferably, the lens 22 is manufactured in a controlled, low-humidity environment. Other optional steps that can be taken during lens manufacture include drying the interior surfaces of the lens 22 and vacuum packing the lens for transportation.
(44) The manufacture of the lens 22 using a plurality of lens elements that are joined together can optically correct for distortion that can otherwise occur. Such optical distortion correction is advantageous particularly for a lens with a field of view that approaches 180 degrees horizontally, but is also advantageous for a lens 22 with a lesser field of view, such as a 135 degree field of view. An example of the effects of optical distortion correction is shown in the drawings.
(45) Aside from optical distortion correction, the camera 10 preferably also provides other forms of distortion correction, and selected techniques may be employed to carry this out. One technique is to position the lens 22 so that the horizon line (shown at 28) in the field of view (shown at 30) lies near the optical axis of the lens 22 (shown at 32). As a result, there will be less distortion in the horizon line in the image sent from the camera 10 to the in-vehicle display. Aside from positioning the lens 22 so that the horizon line is closer to the optical axis, the microcontroller 18 preferably processes the image to straighten the horizon line digitally (i.e., by compressing and/or stretching selected vertically extending portions of the image). In some embodiments, the amount of distortion in the image will increase proportionally with horizontal distance from the optical axis, so the amount of compression or stretching of vertically extending strips of the image will vary depending on the horizontal position of the strip. The portions of the image to stretch or compress, and the amounts by which to do so, can be determined empirically by testing an example of the camera to determine the amount of distortion present in vertically extending portions (i.e., vertical strips) of the image that contain image elements of known shape. For example, the horizon line should appear as a straight horizontal line in camera images. Thus, when testing the camera, values can be calculated manually and used to compress or stretch vertically extending portions (e.g., strips) above and below the horizon line so that the horizon line appears straight in the image. These values can then be used in production versions of the camera that will have the same lens and the same orientation relative to the horizon line. Alternatively, instead of manually calculating the compression and stretch values to apply to vertical strips of the image, the microcontroller 18 may be programmed to carry out a horizon line detection routine, taking into account that the horizon line is vertically close to the optical axis in the center region of the image to assist the routine in finding the horizon line. It will be understood that other selected known elements could be positioned proximate the optical axis and used instead of the horizon line.
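A minimal sketch of such a strip-based correction follows, assuming an 8-bit greyscale image and a per-column scale table of the kind measured during the empirical calibration described above; both are illustrative assumptions:

    /* Sketch of the column-by-column correction described above: each
     * vertical strip is stretched or compressed about the horizon line by an
     * empirically determined factor. Nearest-neighbor sampling only. */
    #include <stdint.h>

    void straighten_horizon(const uint8_t *src, uint8_t *dst,
                            int width, int height,
                            int horizon_row,    /* row of horizon near optical axis */
                            const float *scale) /* per-column factor from calibration */
    {
        for (int x = 0; x < width; x++) {
            for (int y = 0; y < height; y++) {
                /* map each output row back to a source row, scaling about the horizon */
                int sy = horizon_row + (int)((y - horizon_row) / scale[x]);
                if (sy < 0) sy = 0;
                if (sy >= height) sy = height - 1;
                dst[y * width + x] = src[sy * width + x];
            }
        }
    }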
(46) The microcontroller 18 can apply similar digital correction to the portion of the image that contains the vehicle bumper, which occupies a fixed, known position in the field of view of the camera 10, thereby reducing the apparent warping of the bumper in the displayed image.
(47) Aside from reducing the amount of warping in the portions of the image containing the bumper and the horizon line, the microcontroller 18 can also modify the image in a way to make vertical elements of the image appear approximately vertical. These digital distortion correction steps can take place in a selected order. For example, the first step can be to straighten the vehicle bumper. A second step can be to straighten the horizon line. A third step can be to straighten vertical objects.
(48) In embodiments or applications wherein other artifacts are always in the image received by the camera 10, these artifacts may be used to determine the image modification that can be carried out by the microcontroller 18 to at least straighten out portions of the image that show the artifacts.
(50) In addition to the distortion correction, the microcontroller 18 is preferably capable of providing a plurality of different image types. For example, the microcontroller 18 can provide a standard viewing mode, which gives an approximately 135 degree field of view horizontally, a 'cross-traffic' viewing mode, which gives an approximately 180 degree field of view horizontally, and a bird's eye viewing mode, which gives a view that appears to be from a camera that is spaced above the vehicle and aimed directly downwards. The standard viewing mode (with optical and digital distortion correction) is shown in the drawings.
(51) The cross-traffic viewing mode is shown in the drawings.
(52) It will be understood that, in order to provide the cross-traffic viewing mode, the lens 22 preferably has a field of view of approximately 180 degrees horizontally.
(53) While a 180 degree lens 22 is preferable for the camera 10, it is alternatively possible for the lens 22 to be a 135 degree lens. In such a case, the camera 10 would not provide a cross-traffic viewing mode.
(54) In the bird's eye viewing mode, the microcontroller 18 compresses a lower portion of the captured image and stretches an upper portion of the captured image, so that the displayed video images appear to be at an apparent viewing angle that is more vertically oriented than the actual viewing angle of the camera 10.
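One simple way such a remap might be realized is sketched below; the power-law row mapping and the gamma parameter are illustrative assumptions, not the patented mapping:

    /* Sketch of the bird's eye remap described above: the upper portion of
     * the source image is stretched and the lower portion compressed, so the
     * apparent viewing angle becomes more vertically oriented. */
    #include <math.h>
    #include <stdint.h>

    void birds_eye_remap(const uint8_t *src, uint8_t *dst,
                         int width, int height, double gamma /* > 1.0 */)
    {
        for (int y = 0; y < height; y++) {
            double v = (double)y / (height - 1);          /* 0 = top, 1 = bottom */
            int sy = (int)(pow(v, gamma) * (height - 1)); /* stretch top, compress bottom */
            for (int x = 0; x < width; x++)
                dst[y * width + x] = src[sy * width + x];
        }
    }

With gamma greater than 1, a narrow band of source rows near the top is spread over many output rows (stretching), while source rows near the bottom advance quickly (compression).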
(55) The bird's eye viewing mode can be used, for example, to assist the driver when backing up the vehicle to connect it to a trailer hitch. The bird's eye viewing mode provides a view that appears to come from a viewpoint that is approximately directly above the tow ball on the vehicle. This viewing mode is discussed further below.
(56) In addition to providing a plurality of viewing modes, the camera 10 is preferably capable of providing graphical overlays on the images prior to the images being sent from the camera 10 to an in-vehicle display. Preferably, the camera 10 can provide both static overlays and dynamic overlays. Static overlays are overlays that remain constant in shape, size and position on the image. An example of a static overlay is shown at 36 in the drawings.
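The static/dynamic distinction might be captured by a descriptor along the following lines; all field names are assumptions for illustration:

    /* Illustrative overlay descriptor. A static overlay is drawn once at a
     * fixed position; a dynamic overlay is recomputed per frame from vehicle
     * inputs such as steering angle. */
    #include <stdbool.h>
    #include <stdint.h>

    struct overlay {
        bool     is_static;    /* constant shape, size and position */
        int16_t  x, y;         /* position in the output image */
        const uint8_t *bitmap; /* pre-rendered graphic (e.g., from flash) */
        /* for dynamic overlays: recompute geometry from vehicle state */
        void (*update)(struct overlay *self, float steering_angle_deg);
    };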
(57) Many different types of dynamic overlays can be provided for many different functions. A first example of a dynamic overlay is shown at 38 in the drawings.
(58) Another example of a dynamic overlay is shown at 40 in the drawings.
(59) Another application of the camera 10 that combines the features of overlays and the bird's eye viewing mode is a 360 degree view system, an example of which is shown at 50 in the drawings.
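A crude sketch of combining the processed bird's eye images from four cameras into one composite top view follows; the fixed paste rectangles are an illustrative simplification, since a real system would blend calibrated, overlapping regions:

    /* Sketch: paste four per-camera bird's eye images (front, rear, left,
     * right) into bands of a composite top view. 8-bit greyscale assumed. */
    #include <stdint.h>
    #include <string.h>

    #define TOP_W 640
    #define TOP_H 480

    static void paste(uint8_t *top, const uint8_t *img, int img_w, int img_h,
                      int dst_x, int dst_y)
    {
        for (int y = 0; y < img_h; y++)
            memcpy(top + (dst_y + y) * TOP_W + dst_x, img + y * img_w, img_w);
    }

    void compose_top_view(uint8_t *top,
                          const uint8_t *front, const uint8_t *rear,
                          const uint8_t *left,  const uint8_t *right)
    {
        paste(top, front, TOP_W, TOP_H / 4, 0, 0);                         /* top band    */
        paste(top, rear,  TOP_W, TOP_H / 4, 0, 3 * TOP_H / 4);             /* bottom band */
        paste(top, left,  TOP_W / 4, TOP_H / 2, 0, TOP_H / 4);             /* left band   */
        paste(top, right, TOP_W / 4, TOP_H / 2, 3 * TOP_W / 4, TOP_H / 4); /* right band  */
    }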
(60) Because the cameras 10 are each capable of dewarping the images they receive, of processing the images to provide a bird's eye view, and of adding graphic overlays on the images, the cameras 10 can be used for all of these functions, and the processed images produced by the cameras 10 can be sent to the central controller 52 using inexpensive electrical cables, such as shielded twisted (or untwisted) pair cables, shown schematically in the drawings.
(61) By contrast, if such a system were provided with cameras that were not themselves equipped with sufficiently powerful on-board microcontrollers 18 to carry out the aforementioned functions, the functions would have to be carried out externally, e.g., by the central controller 52. In such a situation, the raw images received by the cameras 10 would have to be sent to the central controller 52 for processing. A relatively great amount of care would need to be taken to ensure that the raw images were transmitted to the central controller 52 in relatively pristine condition, since the processing of the images results in some degradation of the images. In order to minimize the degradation of the images in their transmission from the cameras to the central controller, the electrical cables and any connectors would need to be relatively expensive (e.g., coaxial cables). In addition, such electrical cables would be relatively inflexible and difficult to route through the vehicle.
(64) Reference is made to the drawings, which show a camera system 200 that includes the camera 10 and a camera processor 202, a distance sensor system 204 and a distance sensor processor 206, a fusion controller 208, an overlay generator 210 and a human-machine interface (HMI) 212.
(65) The distance sensor system 204 preferably includes an infrared time-of-flight sensor 214 (which may be referred to as a TOF sensor) and a plurality of light sources 216. The light sources 216 emit modulated light, which reflects off any objects behind the vehicle. The reflected light is received by an imager 218 that is part of the TOF sensor 214. The image that is formed on the imager 218 is a greyscale image, which may be referred to as a second image. The second image has a second image resolution that depends on the resolution of the imager 218. In a typical application, the second image resolution will be lower than the resolution of the first image, captured with the camera 10. The first and second images may be processed by the fusion controller 208 to generate a stereo image, which provides the fusion controller 208 with depth information relating to objects in the two images. The fusion controller 208 uses the depth information to determine if any objects behind the vehicle represent a collision risk, in which case the fusion controller 208 determines what action, if any, to take. One action is for the fusion controller 208 to send a signal to a vehicle control system (shown at 219) to apply the parking brake or the regular vehicle brakes, or to prevent their release. Additionally, the fusion controller 208 communicates with the overlay generator 210 to apply an overlay on the image warning the driver of the object or objects that represent a collision risk. The overlay could, for example, be a box around any such objects in the image. The overlay generator 210 may communicate the overlay information back to the camera 10, which then sends the image with the overlay to the in-vehicle display, shown at 220, which makes up part of the HMI 212. The rest of the HMI 212 may be made up of a touch screen input 221 that is superimposed on the display 220. To override the system 200 and release the applied brake, the driver can press a selected on-screen button on the HMI 212 to indicate that the objects have been seen and do not, in the driver's judgment, represent a collision risk, at which time the system 200 can release the brake. In addition to the overlay, the driver of the vehicle can be notified of a collision risk by way of sound (e.g., a chime, a beep or a voice message) or by way of tactile feedback, such as vibration of the steering wheel or seat.
(66) Additionally, the image received by the imager 218 can be processed by the distance sensor processor 206 to determine the phase shift of the light at each pixel on the imager 218. The phase shift of the light is used to determine the distance from the TOF sensor 214 to the surface that reflected the light. Thus, the distance sensor processor 206 can generate a depth map relating to the image. Optionally, the distance sensor processor 206 processes the pixels in groups and not individually. For example, the processor 206 may obtain average phase shift data from groups of 2×2 pixels. In that case, the depth map has a third resolution that is lower than the second image resolution.
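A minimal sketch of this phase-to-distance conversion with 2×2 binning follows, using the standard continuous-wave time-of-flight relation d = c·φ/(4π·f_mod); the 20 MHz modulation frequency is an assumed example value, as the patent does not specify one:

    /* Sketch of phase-shift-to-distance conversion and 2x2 averaging as
     * described above. F_MOD is an illustrative assumption. */
    #include <stdint.h>

    #define C_MPS 299792458.0
    #define F_MOD 20.0e6        /* assumed 20 MHz modulation frequency */
    #define PI    3.14159265358979323846

    static double phase_to_distance_m(double phase_rad)
    {
        return (C_MPS * phase_rad) / (4.0 * PI * F_MOD);
    }

    /* Average phase over 2x2 pixel groups, producing a depth map at one
     * quarter of the TOF imager's pixel count. */
    void build_depth_map(const float *phase, int w, int h, float *depth)
    {
        for (int y = 0; y < h; y += 2) {
            for (int x = 0; x < w; x += 2) {
                double p = (phase[y * w + x]       + phase[y * w + x + 1] +
                            phase[(y + 1) * w + x] + phase[(y + 1) * w + x + 1]) / 4.0;
                depth[(y / 2) * (w / 2) + (x / 2)] = (float)phase_to_distance_m(p);
            }
        }
    }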
(67) The depth map is sent to the fusion controller 208, which can interpret it to determine if any objects shown therein represent a collision risk. The fusion controller 208 preferably works with both the depth map and the camera image to improve the determination of whether detected objects are collision risks. For example, the fusion controller 208 may determine from the depth map that there is an object within 2 meters of the vehicle towards the lower central portion of the depth map. However, the fusion controller 208 may determine from the camera processor 202 that the object in that region is a speed bump, in which case the fusion controller 208 may determine that it does not represent a collision risk, and so the system 200 would not warn the driver and would not apply the brake.
(68) In different situations, the fusion controller 208 gives greater weight to the information from either the depth map or the camera image. For example, for objects that are farther than 3 meters (or some other selected distance) away, the fusion controller 208 gives greater weight to information from the camera 10 and the camera processor 202. For objects that are closer than that distance, the fusion controller 208 gives greater weight to information from the distance sensor system 204 and the distance sensor processor 206.
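One simple way such a distance-dependent weighting might be realized is sketched below; the 3 m crossover comes from the text, while the 0.8/0.2 weights are illustrative assumptions:

    /* Sketch of the distance-dependent weighting described above: inside the
     * crossover distance the TOF estimate dominates, beyond it the camera
     * estimate dominates. */
    float fused_distance_m(float d_camera, float d_tof)
    {
        const float crossover_m = 3.0f;
        float w_tof = (d_tof < crossover_m) ? 0.8f : 0.2f;
        return w_tof * d_tof + (1.0f - w_tof) * d_camera;
    }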
(69) The camera processor 202 is configured to recognize certain types of objects in the images it receives from the camera 10, such as an adult, a child sitting, a toddler, a vehicle, a speed bump, a child on a bicycle, tall grass, and fog (e.g., fog from a sewer grating or a manhole, or fog from the exhaust of the vehicle itself). In some situations, the fusion controller 208 determines whether there is movement towards the vehicle by any object in the images it receives. In some situations, when the fusion controller 208 determines from the depth map that an object is too close to the vehicle, it uses information from the camera processor 202 to determine what the object is. This assists the fusion controller 208 in determining whether the object is something to warn the driver about (e.g., a child or a tree), or something to be ignored (e.g., exhaust smoke from the vehicle itself, or a speed bump). Additionally, this information can be used by the overlay generator 210 to determine the size and shape of the overlay to apply to the image. It will be understood that some of the elements recognized by the camera processor 202 belong to a category containing objects of interest, such as the adult, the child sitting, the toddler, the vehicle and the child on the bicycle. Other elements recognized by the camera processor 202 may belong to a category of elements that do not represent a collision risk, such as a speed bump, tall grass and fog.
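The two categories just described might be encoded as a simple filter of the following kind; the enum values are assumptions mirroring the examples in the text:

    /* Illustrative category filter: objects of interest can trigger a warning
     * or braking, while categories known not to represent a collision risk
     * are ignored. */
    #include <stdbool.h>

    enum object_class {
        OBJ_ADULT, OBJ_CHILD_SITTING, OBJ_TODDLER, OBJ_VEHICLE,
        OBJ_CHILD_ON_BICYCLE,                   /* objects of interest */
        OBJ_SPEED_BUMP, OBJ_TALL_GRASS, OBJ_FOG /* not collision risks */
    };

    static bool is_object_of_interest(enum object_class c)
    {
        switch (c) {
        case OBJ_ADULT: case OBJ_CHILD_SITTING: case OBJ_TODDLER:
        case OBJ_VEHICLE: case OBJ_CHILD_ON_BICYCLE:
            return true;
        default:
            return false;
        }
    }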
(70) After the TOF sensor 214 and the camera 10 are installed on the vehicle, a calibration procedure is preferably carried out. The calibration procedure includes displaying an image with high-contrast elements on it at a selected distance from the vehicle, and at a selected position vertically and laterally relative to the vehicle. The image can be, for example, a white rectangle immediately horizontally adjacent a black rectangle. The fusion controller 208 determines the relative positions of the mating line between the two rectangles on the two imagers 16 and 218. During use of the camera system 200, this information is used by the fusion controller 208 to determine the distances of other objects viewed by the camera 10 and the TOF sensor 214. The calibration procedure could alternatively be carried out using a checkerboard pattern of four or more rectangles, so that there are both vertical and horizontal mating lines between rectangles.
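For a roughly parallel imager pair, disparity is inversely proportional to distance (d = f·B/Z), so the reference target at a known distance yields a calibration constant from which other distances can be ranged. The sketch below assumes that simplified parallel-pair model and invented variable names; a real camera/TOF pair with different optics and resolutions would need a fuller calibration:

    /* Sketch of ranging from the calibration described above: the mating
     * line at a known distance yields a reference disparity, giving
     * k = f * baseline; thereafter Z = k / disparity. */
    static float calib_k;   /* f * baseline, in pixel-meters */

    void calibrate(float ref_disparity_px, float ref_distance_m)
    {
        calib_k = ref_disparity_px * ref_distance_m;
    }

    float distance_from_disparity_m(float disparity_px)
    {
        return (disparity_px > 0.0f) ? calib_k / disparity_px : -1.0f;
    }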
(71) Throughout this disclosure, the terms 'imager' and 'image processor' have been used. Both terms indicate a device that includes an image sensor and some control elements. The microcontroller and the portion of the imager that includes the control elements together make up a 'controller' for the camera. It will be understood that, in some embodiments and for some purposes, the control of the camera need not be split between the control elements in the imager and the microcontroller that is external to the imager. It will be understood that the term 'controller' is intended to include any device or group of devices that control the camera.
(72) While the above description constitutes a plurality of embodiments of the present invention, it will be appreciated that the present invention is susceptible to further modification and change without departing from the fair meaning of the accompanying claims.