Patent classifications
G06T2207/30261
VEHICLE EXTERIOR MOVING OBJECT DETECTION SYSTEM
The vehicle exterior moving object detection system includes a surrounding-area image input section that captures images of areas surrounding its own vehicle and outputs them as surrounding-area images, a combined image constructing section that combines the surrounding-area images into a combined image including a result output from the detection of a moving object, and an output section that presents the combined image to the user. The system further includes a movable-component region setting section that sets the shapes and layout of movable components of the own vehicle in the surrounding-area images and calculates regions for those movable components, and a moving object detecting section that receives the movable-component regions calculated by the movable-component region setting section and the surrounding-area images from the surrounding-area image input section, performs a moving object detection process, and outputs a detection result.
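The masking idea in the abstract above can be sketched as follows. This is an illustrative stand-in, not the patent's actual detector: frame differencing substitutes for whatever detection process is really used, and all names are hypothetical. The key point it shows is that pixels inside the precomputed movable-component regions (doors, mirrors, wipers) are excluded, so the vehicle's own moving parts are not reported as detected objects.

```python
def detect_moving_pixels(prev_frame, curr_frame, component_mask, threshold=10):
    """Return (row, col) pixels whose intensity changed by more than
    `threshold`, skipping pixels inside any movable-component region."""
    moving = []
    for r, (prev_row, curr_row) in enumerate(zip(prev_frame, curr_frame)):
        for c, (p, q) in enumerate(zip(prev_row, curr_row)):
            if component_mask[r][c]:   # pixel belongs to a door/mirror etc.
                continue               # own-vehicle motion is not an object
            if abs(q - p) > threshold:
                moving.append((r, c))
    return moving
```

A masked pixel is skipped even when its intensity changes, which is exactly the role of the movable-component region setting section's output.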
SURROUNDING RISK DISPLAYING APPARATUS
A surrounding risk displaying apparatus includes an environment recognizer, a surrounding risk recognizer, and a display. The environment recognizer is capable of recognizing an environment around a vehicle. The surrounding risk recognizer is capable of extracting risk objects each having a risk potential not less than a predetermined risk potential, estimating a distribution of the risk potential around each of the risk objects, and calculating a risk approaching determination value that increases as the risk objects approach each other. The display is capable of displaying images in a superimposed fashion on the corresponding risk objects, each image indicating the distribution of the risk potential around a corresponding one of the risk objects. When the risk approaching determination value is not less than a predetermined threshold, the display is capable of displaying a passage risk display indicating that passing through a clearance between the risk objects involves a risk for the vehicle.
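The threshold logic above can be sketched in miniature. The abstract does not define how the risk approaching determination value is computed, so the model below is an assumption: the value is taken as the rate at which the clearance between two risk objects shrinks, scaled by how tight the gap already is relative to the vehicle's width. All names and the formula are illustrative.

```python
def passage_risk(clearance_now, clearance_before, dt, vehicle_width,
                 risk_threshold=1.0):
    """Illustrative passage-risk test. Returns True when a passage-risk
    display should be shown for the gap between two risk objects."""
    if clearance_now <= vehicle_width:
        return True  # gap already too narrow for the vehicle to pass
    # Modeled "risk approaching determination value": closing rate (m/s),
    # scaled by gap tightness (approaches 1.0 as the gap nears the width).
    closing_rate = (clearance_before - clearance_now) / dt
    tightness = vehicle_width / clearance_now
    risk_value = closing_rate * tightness
    return risk_value >= risk_threshold
```

The value increases with relative approach (larger closing rate) exactly as the abstract requires, and the display decision is a simple comparison against the predetermined threshold.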
VEHICLE CONTROL APPARATUS AND METHOD THEREOF
A vehicle control apparatus for a vehicle includes a display configured to display image information associated with the vehicle; at least one light source configured to emit light so as to form at least one reflected light in one region of a pupil and eyeball of a user gazing at one region on the display; a memory configured to store coordinate information on each region within the vehicle and on the at least one light source; at least one camera configured to obtain an image including the one region of the pupil and eyeball of the user; and a controller. The controller is configured to calculate a first coordinate from a center of the pupil and a second coordinate from the at least one reflected light included in the obtained image, to calculate, when a distance between the first coordinate and the second coordinate is less than a preset distance, a coordinate of one point within the vehicle as a reference coordinate from the prestored coordinate information of the at least one light source, and to perform calibration on the reference coordinate for a coordinate corresponding to a direction in which the user gazes.
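The reference-coordinate step above can be sketched as follows, under one assumption the abstract supports: when the pupil center nearly coincides with the glint (reflected light) of a particular light source, the user is gazing at that source, so its prestored in-vehicle coordinate can serve as the calibration reference. Function and variable names are illustrative.

```python
import math

def find_reference_coordinate(pupil_center, glints, light_coords,
                              preset_distance=2.0):
    """pupil_center: (x, y) pupil-center pixel (the 'first coordinate');
    glints: list of (x, y) reflected-light pixels (the 'second coordinates'),
    index-aligned with light_coords, the prestored in-vehicle coordinates
    of each light source. Returns the in-vehicle coordinate of the light
    source whose glint lies within `preset_distance` of the pupil center,
    or None when no glint is close enough."""
    for glint, light in zip(glints, light_coords):
        if math.dist(pupil_center, glint) < preset_distance:
            return light
    return None
```

A calibration routine would then relate this reference coordinate to the gaze direction estimated from the pupil-glint geometry.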
PRECEDING TRAFFIC ALERT SYSTEM AND METHOD
Various systems and methods for providing alerts of preceding traffic are described herein. A system for providing alerts of preceding traffic comprises a processor installed on a trailing vehicle operated by a user, the processor to: receive image data from a camera and identify a preceding vehicle in front of the trailing vehicle; receive data from a distance sensor to detect a change in relative velocity between the preceding vehicle and the trailing vehicle that exceeds a threshold; and cause augmented reality content to be displayed in a head-mounted display worn by the user, the augmented reality content alerting the user to the change in relative velocity.
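The trigger condition above reduces to a threshold test on successive relative-velocity readings. A minimal sketch, with illustrative names and an assumed sample structure:

```python
def should_alert(rel_velocity_samples, change_threshold=3.0):
    """rel_velocity_samples: successive relative-velocity readings (m/s)
    between the preceding and trailing vehicle, e.g. derived from a
    radar/LIDAR distance sensor. Returns True when the change between any
    two consecutive samples exceeds the threshold, i.e. when the AR
    head-mounted display should show the alert."""
    for before, after in zip(rel_velocity_samples, rel_velocity_samples[1:]):
        if abs(after - before) > change_threshold:
            return True
    return False
```

A sudden jump (for example, the preceding vehicle braking hard) trips the alert; gradual drift does not.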
Vehicle periphery monitoring device
The present invention relates to a vehicle periphery monitoring device. An object identification unit comprises: a first identifier requiring a relatively low computation volume for an object identification process; and a second identifier requiring a relatively high computation volume for the object identification process. A region-to-be-identified determination unit determines at least one region to be subjected to the identification process by the second identifier, by carrying out a clustering process relating to location and/or scale on a plurality of region candidates which the first identifier extracts as regions in which objects are present.
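The clustering step above can be sketched as a greedy grouping of candidate boxes by location and scale. The exact clustering process is not specified in the abstract, so the tolerances and the seed-as-representative choice below are assumptions; names are illustrative.

```python
def cluster_candidates(boxes, center_tol=20.0, scale_tol=0.3):
    """boxes: (cx, cy, w, h) region candidates from the cheap first
    identifier. A box joins an existing cluster when its center is within
    `center_tol` pixels of the cluster seed and its width is within
    `scale_tol` (relative) of the seed's width. One representative per
    cluster is returned for the expensive second identifier."""
    clusters = []  # each cluster is [seed_box, members...]
    for box in boxes:
        cx, cy, w, h = box
        for cluster in clusters:
            sx, sy, sw, sh = cluster[0]
            close = abs(cx - sx) <= center_tol and abs(cy - sy) <= center_tol
            similar = abs(w - sw) <= scale_tol * sw
            if close and similar:
                cluster.append(box)
                break
        else:
            clusters.append([box])
    return [cluster[0] for cluster in clusters]
```

Collapsing overlapping candidates this way is what keeps the high-computation second identifier's workload bounded.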
NAVIGATION SYSTEM WITH CAMERA ASSIST
One embodiment is a navigation system for an aircraft including a positioning system to generate information related to a position of the aircraft, a group of cameras mounted to a body of the aircraft, each camera of the group of cameras to simultaneously capture images of a portion of an environment that surrounds the aircraft, and a processing component coupled to the positioning system and the group of cameras, the processing component to determine a current position of the aircraft based on the information related to the position of the aircraft and the images.
Object recognition apparatus
An object recognition apparatus mounted on a vehicle comprises: a plurality of recognizers, each adapted to conduct object recognition ahead of the vehicle at intervals; and an object continuity determiner adapted to conduct object continuity determination based on a result of the object recognition conducted by the recognizers. When a first object recognized by any of the recognizers at time (N) is present at a position within a predetermined area defined by a position of a second object recognized by another of the recognizers at time (N−1), earlier than the time (N), the object continuity determiner determines that the first object and the second object are identical, i.e., one object that is kept recognized continuously for a time period ranging from at least the time (N−1) to the time (N).
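The continuity test above can be sketched directly. The abstract leaves the shape of the predetermined area open, so a simple square centred on the earlier position is assumed here; names and the half-width are illustrative.

```python
def is_same_object(pos_t, pos_t_minus_1, area_half_width=2.0):
    """Continuity determination: the object recognized at time N is taken
    to be the object recognized at time N-1 when its position falls inside
    a predetermined area (here a square of half-width `area_half_width`
    metres) centred on the earlier position."""
    dx = abs(pos_t[0] - pos_t_minus_1[0])
    dy = abs(pos_t[1] - pos_t_minus_1[1])
    return dx <= area_half_width and dy <= area_half_width
```

When the test passes, the two detections, even if they came from different recognizers, are treated as one continuously recognized object.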
AUTOMATIC SURROUND VIEW HOMOGRAPHY MATRIX ADJUSTMENT, AND SYSTEM AND METHOD FOR CALIBRATION THEREOF
An imaging system adjusts a display of images obtained of an area to compensate for one or more relationships of cameras on a vehicle relative to a peripheral area. The system comprises a processor, an image obtaining unit, a non-transient memory, a situational compensation unit, and a display unit. The image obtaining unit receives first image data representative of a first image, and the memory stores intrinsic image coordinate transformation data representative of an intrinsic mapping between the first image data and first display data representative of an uncompensated bird's eye view image of the peripheral area of the associated vehicle. The situational compensation unit selectively modifies the intrinsic mapping into an adjusted intrinsic mapping between the first image data and the first display data in accordance with a signal representative of the one or more relationships of the associated vehicle relative to the peripheral area.
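The mappings above are conventionally represented as 3×3 homography matrices, so the adjustment can be sketched as composing a correction matrix with the intrinsic one. This is a generic illustration of homography composition, not the patent's specific adjustment method; names are illustrative.

```python
def apply_homography(H, point):
    """Map an image pixel (u, v) through a 3x3 homography H (row-major
    nested lists) into bird's-eye-view coordinates, with the usual
    perspective divide."""
    u, v = point
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return (x / w, y / w)

def compose(A, B):
    """3x3 matrix product A @ B: applying the composed matrix is
    equivalent to applying B first, then A. The adjusted intrinsic mapping
    can be modeled as compose(adjustment, intrinsic)."""
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]
```

With the identity as the adjustment, the intrinsic mapping is returned unchanged; a nonzero adjustment (for example, a translation reflecting vehicle pitch or load) shifts the bird's-eye-view output accordingly.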
VEHICULAR CONTROL SYSTEM WITH ENHANCED LANE CENTERING
A vehicular control system includes a camera that captures image data. The system includes an electronic control unit (ECU) for processing image data captured by the camera. The ECU, via processing by an image processor of image data captured by the camera, determines lane information of a traffic lane along a road being traveled by the equipped vehicle. The ECU determines a lane quality value that represents a confidence in the determined lane information. When the lane quality value exceeds a threshold value, and based at least in part on the determined lane information, the ECU provides a steering command to a steering system of the equipped vehicle to adjust the vehicle's heading so as to center it within the traveled traffic lane.
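The quality-gated command above can be sketched as follows. The proportional controller and the gain are illustrative stand-ins (the abstract does not specify the control law); only the gating on the lane quality value comes from the text.

```python
def steering_command(lane_center_offset, lane_quality,
                     quality_threshold=0.7, gain=0.5):
    """Gate the lane-centering steering command on lane quality: when
    confidence in the detected lane information does not exceed the
    threshold, no command is issued (None). Otherwise a simple
    proportional command steers back toward the lane center.
    `lane_center_offset` is metres left (+) of the lane center."""
    if lane_quality <= quality_threshold:
        return None  # low confidence: do not steer on bad lane data
    return -gain * lane_center_offset  # steer opposite the offset
```

Suppressing the command at low confidence is the point of the lane quality value: the system would rather do nothing than center on poorly detected lane markings.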
MULTIPLE RESOLUTION DEEP NEURAL NETWORKS FOR VEHICLE AUTONOMOUS DRIVING SYSTEMS
Techniques for training multiple resolution deep neural networks (DNNs) for vehicle autonomous driving comprise obtaining a training dataset for training a plurality of DNNs for an autonomous driving feature of the vehicle, sub-sampling the training dataset to obtain a plurality of training datasets comprising the training dataset and one or more sub-sampled datasets each having a different resolution than a remainder of the plurality of training datasets, training the plurality of DNNs using the plurality of training datasets, respectively, receiving input data for the autonomous driving feature captured by a sensor device, determining a plurality of outputs for the autonomous driving feature using the plurality of trained DNNs and the input data, and determining a best output for the autonomous driving feature from the plurality of outputs.
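The dataset-preparation step above can be sketched in isolation: one full-resolution dataset plus one sub-sampled copy per factor, each destined for its own DNN. Decimation by row/column striding is assumed as the sub-sampling method; the abstract does not specify one, and all names are illustrative.

```python
def subsample(image, factor):
    """Keep every `factor`-th row and column of a nested-list image."""
    return [row[::factor] for row in image[::factor]]

def build_training_datasets(dataset, factors=(2, 4)):
    """Return the original dataset plus one sub-sampled copy per factor,
    each at a different resolution, mirroring the plurality of training
    datasets used to train the plurality of DNNs."""
    datasets = [dataset]
    for f in factors:
        datasets.append([subsample(img, f) for img in dataset])
    return datasets
```

At inference time, each trained DNN would process the sensor input at its own resolution, and a selection step would pick the best of the resulting outputs.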