Patent classifications
B60R2300/307
METHOD FOR OPERATING A DRIVER ASSISTANCE SYSTEM OF A MOTOR VEHICLE, DRIVER ASSISTANCE SYSTEM AND MOTOR VEHICLE
The invention relates to a method for operating a driver assistance system (2) of a motor vehicle (1), in which a rear image of an environmental region (11, 12, 14) of the motor vehicle (1) located substantially next to and/or behind the motor vehicle (1) is captured by at least one camera (3, 4) of the driver assistance system (2), the camera being provided on the vehicle, wherein at least one road marking (19) of a roadway (17) is recognized in the environmental region (11, 12, 14) based on the captured rear image.
AUTONOMOUS TRAVELING APPARATUS
In an autonomous traveling apparatus, a normal traveling area and a deceleration area are set for a monitoring area in an area setting unit. A speed control unit limits a traveling speed of an apparatus main body on the basis of the monitoring area set in the area setting unit and a distance from the apparatus main body to an obstacle within the monitoring area if an obstacle present within the monitoring area is detected. If the obstacle is a movable body, an area change unit changes the deceleration area within the monitoring area that is set in the area setting unit to a deceleration area for a movable body. This configuration makes it possible to limit the traveling speed in response to obstacle detection even in the case where the obstacle is a movable body.
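The speed-limiting behavior described above can be sketched as follows. The area boundaries, maximum speed, and linear ramp are illustrative assumptions, not values from the abstract; the only point carried over is that a movable body gets a larger deceleration area, so slowing begins earlier.

```python
# Hypothetical sketch of monitoring-area speed limiting. The boundary
# distances, stop distance, and max speed are assumed values.

def limit_speed(distance_m, is_movable, max_speed=2.0):
    """Return an allowed traveling speed given the obstacle distance.

    A larger deceleration area is assumed when the obstacle is a
    movable body, so the apparatus starts decelerating earlier.
    """
    decel_start = 5.0 if is_movable else 3.0  # deceleration-area boundary (m)
    stop_dist = 0.5                           # stop inside this range (m)
    if distance_m >= decel_start:
        return max_speed                      # normal traveling area
    if distance_m <= stop_dist:
        return 0.0                            # too close: stop
    # linear ramp-down inside the deceleration area
    return max_speed * (distance_m - stop_dist) / (decel_start - stop_dist)
```

At equal distance, the movable-body setting yields the lower speed, because the widened deceleration area places that distance deeper inside the ramp.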
VEHICLE POSITIONING BY VISIBLE LIGHT COMMUNICATION
A vehicle optical wireless data communication system includes a plurality of light sources disposed at a structure where vehicles travel. Each of the light sources emits visible light to illuminate the structure. Each of the light sources emits optical signals indicative of a location of the respective light source. A sensor is disposed at a vehicle and is operable to sense optical signals emitted by the light sources when the vehicle is in the vicinity of the light sources. Responsive to sensing by the sensor of optical signals emitted by at least one of the light sources, the sensor generates an output to a processor disposed at the vehicle. The processor processes the output of the sensor to determine a location of the vehicle relative to at least one of the light sources.
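The positioning step can be sketched as a lookup: each light's optical signal is assumed to carry an identifier, which the vehicle maps to a known installed position. The identifiers, coordinates, and averaging rule below are assumptions for illustration only.

```python
# Hypothetical map of installed light-source positions (meters within
# the structure); the IDs and coordinates are invented for this sketch.
LIGHT_POSITIONS = {
    0x01: (12.0, 4.0),
    0x02: (24.0, 4.0),
}

def locate_vehicle(sensed_ids):
    """Estimate the vehicle position as the mean of the known
    positions of all currently sensed light sources."""
    points = [LIGHT_POSITIONS[i] for i in sensed_ids if i in LIGHT_POSITIONS]
    if not points:
        return None  # no recognized light source in view
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)
```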
Imaging system for vehicle
An imaging system for a vehicle includes an imaging sensor and a video display device. The imaging system generates an overlay that is electronically superimposed on the displayed images to assist a driver of the vehicle when executing a backup maneuver. The overlay has first, second and third overlay zones, each zone indicative of a respective distance range extending from the rear of the vehicle to a respective first, second or third distance. As indicated to the driver viewing the video display screen when executing a backup maneuver, the first distance is closer to the rear of the vehicle than the second distance and the second distance is closer to the rear of the vehicle than the third distance. The first overlay zone may be a first color and the second overlay zone may be a second color and the third overlay zone may be a third color.
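The zone selection can be sketched as a simple distance-to-zone mapping. The boundary distances and colors below are illustrative assumptions; the abstract only fixes their ordering (first closest, third farthest).

```python
# Hypothetical zone boundaries (meters behind the vehicle) and colors.

def overlay_zone(distance_m):
    """Map a distance behind the vehicle to (zone, color)."""
    zones = [
        (1.0, ("zone1", "red")),     # first zone: closest to the vehicle
        (2.5, ("zone2", "yellow")),  # second zone: intermediate range
        (4.0, ("zone3", "green")),   # third zone: farthest range
    ]
    for limit, zone in zones:
        if distance_m <= limit:
            return zone
    return None  # beyond the third distance: no overlay zone
```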
MULTI-SENSOR USING A THERMAL CAMERA
An infrared thermal camera can be installed in a vehicle, such as a car, to provide different capabilities including night vision, passenger temperature monitoring, liveness detection, and avoidance of collisions with animals. The camera may be located inside the vehicle facing outwards through the front windshield. The camera may be used in multiple modes. In a first mode, the camera may be used to provide night vision or other thermal imaging of a scene outside the vehicle using a first field of view. In a second mode, the camera may be used to provide thermal imaging of a scene at least partially inside the vehicle using a second field of view, which may be useful, for example, for scanning the skin temperatures of occupants.
DRIVER ASSISTANCE SYSTEM, DRIVER ASSISTING METHOD, AND NON-TRANSITORY STORAGE MEDIUM
A driver assistance system has a periphery monitoring device, a drive recorder, and an illuminance detecting section. The periphery monitoring device includes an imaging section that is mounted at a vehicle and captures images of a vehicle periphery, a memory, a processor that is coupled to the memory and that serves as a color tone correction processing section that corrects color tone of an image captured by the imaging section, and a display portion that displays an image having color tone that has been corrected by the color tone correction processing section. The drive recorder includes the imaging section, the memory, the processor that serves as the color tone correction processing section, and a recording section that records an image having color tone that has been corrected by the color tone correction processing section. The processor is configured so that, in a case in which the illuminance detected by the illuminance detecting section at the time of imaging by the imaging section is less than a predetermined reference value, the color tone correction executed for recording in the recording section is dark as compared with the color tone correction executed for display at the display portion. In a case in which the detected illuminance is greater than or equal to the predetermined reference value, the color tone correction executed for recording in the recording section is bright as compared with the color tone correction executed for display at the display portion.
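The illuminance-dependent branch can be sketched as below: under the reference illuminance the recorded copy is corrected darker than the displayed copy, otherwise brighter. The reference value and the gain factors are illustrative assumptions, not figures from the patent.

```python
# Assumed reference illuminance (lux) and correction gains for this sketch.
REFERENCE_LUX = 100.0

def corrected_pixels(pixels, illuminance_lux):
    """Return (display_pixels, recording_pixels) for 8-bit gray values."""
    def apply_gain(values, gain):
        return [min(255, max(0, round(v * gain))) for v in values]

    display = apply_gain(pixels, 1.0)        # baseline correction for display
    if illuminance_lux < REFERENCE_LUX:
        recording = apply_gain(pixels, 0.8)  # darker than the displayed copy
    else:
        recording = apply_gain(pixels, 1.2)  # brighter than the displayed copy
    return display, recording
```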
IMAGE PROCESSING DEVICE, IN-VEHICLE DISPLAY SYSTEM, DISPLAY DEVICE, IMAGE PROCESSING METHOD, AND COMPUTER READABLE MEDIUM
In an image processing device (120), an extraction unit (121) extracts a plurality of objects from a captured image (101). A prediction unit (122) predicts a future distance between the plurality of objects extracted by the extraction unit (121). A classification unit (123) classifies into groups the plurality of objects extracted by the extraction unit (121) based on the future distance predicted by the prediction unit (122). A processing unit (124) processes the captured image (101) into a highlight image (102). The highlight image (102) is an image in which the plurality of objects classified by the classification unit (123) are highlighted separately for each group.
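The prediction and classification steps can be sketched as follows. A constant-velocity prediction and a fixed grouping threshold are assumptions of this sketch; the abstract only specifies that objects are grouped based on their predicted future distance.

```python
# Hedged sketch: objects predicted to come within `threshold` meters of
# each other end up in the same highlight group.

def predict_distance(a, b, horizon=1.0):
    """Predict the distance between two objects after `horizon` seconds,
    assuming each keeps its current velocity (positions/velocities in m)."""
    ax, ay = a["x"] + a["vx"] * horizon, a["y"] + a["vy"] * horizon
    bx, by = b["x"] + b["vx"] * horizon, b["y"] + b["vy"] * horizon
    return ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5

def classify(objects, threshold=2.0):
    """Greedily merge each object into the first group containing an
    object it is predicted to come close to; otherwise start a group."""
    groups = []
    for obj in objects:
        for group in groups:
            if any(predict_distance(obj, o) < threshold for o in group):
                group.append(obj)
                break
        else:
            groups.append([obj])
    return groups
```

Each resulting group would then be highlighted with its own style in the output image.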
ROAD FEATURE DETECTION USING A VEHICLE CAMERA SYSTEM
Examples of techniques for road feature detection using a vehicle camera system are disclosed. In one example implementation, a computer-implemented method includes receiving, by a processing device, an image from a camera associated with a vehicle on a road. The computer-implemented method further includes generating, by the processing device, a top view of the road based at least in part on the image. The computer-implemented method further includes detecting, by the processing device, lane boundaries of a lane of the road based at least in part on the top view of the road. The computer-implemented method further includes detecting, by the processing device, a road feature within the lane boundaries of the lane of the road using machine learning.
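The lane-boundary step can be sketched on an already-warped top view: in a bird's-eye image the lane lines run roughly vertically, so summing marking pixels per column and taking the strongest peak on each half approximates the left and right boundaries. The binary top-view input and the histogram heuristic are assumptions of this sketch; the abstract itself does not specify the detection method.

```python
# Simplified column-histogram lane boundary search on a binary top view.

def lane_boundaries(top_view):
    """top_view: 2-D list of 0/1 lane-marking pixels (rows x cols).
    Returns the column indices of the strongest peak in each half."""
    cols = len(top_view[0])
    histogram = [sum(row[c] for row in top_view) for c in range(cols)]
    mid = cols // 2
    left = max(range(mid), key=lambda c: histogram[c])        # left boundary
    right = max(range(mid, cols), key=lambda c: histogram[c]) # right boundary
    return left, right
```

A road feature detected between these two columns would then lie within the lane.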
Image processing apparatus, imaging apparatus and drive assisting method
An image processing apparatus includes an I/F, a synthesizer, and a color determinator. The I/F obtains a captured image, which is generated by imaging a subject in the vicinity of a vehicle. The synthesizer superimposes an indicator on the captured image. The color determinator, when the color of the captured image is similar to a first color, changes the color of the indicator from the first color.
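The color-determination step can be sketched as a similarity test: if the image's dominant color is close to the indicator's first color, the indicator switches to an alternate color so it remains visible. The Euclidean RGB distance and the threshold are illustrative assumptions.

```python
# Hypothetical color determinator: threshold and distance metric assumed.

def choose_indicator_color(image_avg_rgb, first_rgb, alternate_rgb,
                           threshold=60.0):
    """Return the color to draw the indicator in over the captured image."""
    dist = sum((a - b) ** 2 for a, b in zip(image_avg_rgb, first_rgb)) ** 0.5
    # similar background -> change away from the first color
    return alternate_rgb if dist < threshold else first_rgb
```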
Display control device
A display control device includes: an image acquisition unit configured to acquire captured image data from an imaging unit that captures an image of a peripheral area of a vehicle; a display control unit configured to cause image data based on the captured image data to be displayed on a screen; and an operation receiving unit configured to receive designation of an arbitrary point on the screen. When the operation receiving unit receives the designation of an arbitrary point on the screen, the display control unit enlarges the display on the screen about the designated point as the center of enlargement, such that the display position of the designated point does not move on the screen.
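Keeping the designated point fixed during enlargement reduces to a small coordinate computation: scale the image about that point, i.e. shift the image origin so the point maps back onto itself. The sketch below works in screen-pixel coordinates; the scale factor is an illustrative assumption.

```python
# Enlargement about a designated point: the transform
#   T(q) = point + scale * (q - point)
# fixes `point`, so the new image origin is T(old origin).

def zoom_about_point(image_origin, point, scale=2.0):
    """Return the new image origin after scaling by `scale` about `point`."""
    ox, oy = image_origin
    px, py = point
    return (px - (px - ox) * scale, py - (py - oy) * scale)
```

Because the designated point is the fixed point of the transform, its display position is unchanged while the surrounding content spreads outward from it.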