G06T2207/30261

Systems and methods for navigating a vehicle among encroaching vehicles

Systems and methods use cameras to provide autonomous navigation features. In one implementation, a method for navigating a user vehicle may include acquiring, using at least one image capture device, a plurality of images of an area in a vicinity of the user vehicle; determining from the plurality of images a first lane constraint on a first side of the user vehicle and a second lane constraint on a second side of the user vehicle opposite to the first side of the user vehicle; enabling the user vehicle to pass a target vehicle if the target vehicle is determined to be in a lane different from the lane in which the user vehicle is traveling; and causing the user vehicle to abort the pass before completion of the pass, if the target vehicle is determined to be entering the lane in which the user vehicle is traveling.
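The pass/abort decision described in this abstract can be sketched as a simple rule. The function and its inputs (lane indices and an "entering" flag, assumed to come from the image analysis) are hypothetical names, not from the patent:

```python
def pass_decision(user_lane, target_lane, target_entering_user_lane):
    """Illustrative sketch of the passing logic in the abstract.

    Returns 'pass' when the target vehicle is in a different lane,
    'abort' when the target is entering the user vehicle's lane, and
    'hold' otherwise. All names are hypothetical.
    """
    if target_entering_user_lane:
        return "abort"   # target is cutting into the user's lane: abort the pass
    if target_lane != user_lane:
        return "pass"    # target occupies a different lane: pass is enabled
    return "hold"        # same lane and not entering: do not initiate the pass
```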

IMAGE CAPTURING DEVICE AND METHOD, PROGRAM, AND RECORD MEDIUM

An object having a high attention degree is selected from objects detected by a detection means, the brightness of a captured image is calculated using an attention region corresponding to the selected object as a detection frame, and exposure control is performed based on the calculated brightness. The attention degree is evaluated as higher as the distance to the object decreases. Alternatively, the attention degree is evaluated as higher as the object's direction approaches the traveling direction. The attention region is enlarged as the distance to the object decreases. It is also possible to judge the type of the object and determine the size of the attention region based on the result of that judgment. As a result, a subject requiring attention is made clearly visible.
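The selection, region sizing, and metering steps above can be sketched as follows. The attention weighting and the region-size constants are illustrative assumptions; the abstract only states that attention rises with decreasing distance and with alignment to the traveling direction, and that the region grows as distance decreases:

```python
def select_attention_object(objects):
    # Attention degree rises as distance falls and as the object's bearing
    # (radians from the traveling direction) approaches zero. Weights are
    # illustrative, not from the patent.
    def attention(obj):
        return 1.0 / (obj["distance"] + 1e-6) + 1.0 / (abs(obj["bearing"]) + 1.0)
    return max(objects, key=attention)

def attention_region_size(distance, base=64, scale=200.0):
    # The attention region is made larger as the distance decreases.
    return int(base + scale / max(distance, 1.0))

def region_brightness(image, cx, cy, half):
    # Mean luminance inside the attention region used as the metering frame.
    rows = image[max(0, cy - half): cy + half + 1]
    pixels = [px for row in rows for px in row[max(0, cx - half): cx + half + 1]]
    return sum(pixels) / len(pixels)
```

Exposure control would then adjust gain or shutter time toward a target brightness computed by `region_brightness`.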

MOVING OBJECT DETECTION DEVICE, IMAGE PROCESSING DEVICE, MOVING OBJECT DETECTION METHOD, AND INTEGRATED CIRCUIT

A moving object detection device includes: an image capturing unit with which a vehicle is equipped, and which is configured to obtain captured images by capturing views in a travel direction of the vehicle; a setting unit configured to set, for each of frames that are the captured images, a movement vanishing point at which movement of a stationary object in the captured images due to the vehicle traveling does not occur; a calculation unit configured to calculate, for each of unit regions of the captured images, a first motion vector indicating movement of an image in the unit region; and a detection unit configured to detect a moving object present in the travel direction, based on the movement vanishing points set by the setting unit and the first motion vectors calculated by the calculation unit.

MOVING OBJECT DETECTION DEVICE, IMAGE PROCESSING DEVICE, MOVING OBJECT DETECTION METHOD, AND INTEGRATED CIRCUIT
20180012368 · 2018-01-11 ·

A moving object detection device includes: an image capturing unit with which a vehicle is equipped, and which is configured to obtain a captured image by capturing a view in a travel direction of the vehicle; a calculation unit configured to calculate, for each of first regions which are unit regions of the captured image, a first motion vector indicating movement of an image in the first region; an estimation unit configured to estimate, for each of one or more second regions which are unit regions each including first regions, a second motion vector using first motion vectors, the second motion vector indicating movement of a stationary object which has occurred in the captured image due to the vehicle traveling; and a detection unit configured to detect a moving object present in the travel direction, based on a difference between a first motion vector and a second motion vector.
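The difference test in this abstract (observed region motion versus the motion a stationary object would show due to ego-motion) can be sketched as below. For simplicity the sketch uses a single estimated second motion vector for the whole frame, whereas the patent estimates one per second region; names and the threshold are assumptions:

```python
def detect_moving_regions(first_vectors, second_vector, threshold=2.0):
    """Flag unit regions whose motion deviates from the estimated ego-motion.

    first_vectors: dict mapping region id -> (dx, dy) observed first motion
    vector. second_vector: (dx, dy) motion a stationary object would exhibit
    in the captured image due to the vehicle traveling.
    """
    moving = []
    for region, (dx, dy) in first_vectors.items():
        ddx, ddy = dx - second_vector[0], dy - second_vector[1]
        # A large residual means the region moves independently of the ego-motion.
        if (ddx * ddx + ddy * ddy) ** 0.5 > threshold:
            moving.append(region)
    return moving
```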

STEREO IMAGE MATCHING APPARATUS AND METHOD REQUIRING SMALL CALCULATION
20180014001 · 2018-01-11 ·

A stereo image matching apparatus includes a processor which includes: a bit distributor distributing values of each pixel of stereo images into sequential N bits and outputting a plurality of stereo images including the sequential N bits; a plurality of cost calculators each receiving the plurality of stereo images and calculating matching cost values for each pixel of each of the stereo images; a confidence calculator calculating a matching confidence by using cost characteristics of the respective matching cost values calculated by the plurality of cost calculators; and a depth determiner determining, as a final depth value, a depth value for which the matching confidence is high and the matching cost values are relatively low.
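The depth-determination step (low cost plus high confidence) can be sketched per pixel as follows. The peak-ratio confidence measure is an illustrative choice; the abstract only says confidence is computed from cost characteristics:

```python
def depth_from_costs(cost_volume, confidence_threshold=0.2):
    """Pick a disparity/depth index per pixel from matching costs.

    cost_volume: one list of matching cost values per pixel, one cost per
    candidate depth. Confidence is an illustrative peak-ratio measure: how
    much lower the best cost is than the runner-up.
    """
    depths = []
    for costs in cost_volume:
        ranked = sorted(range(len(costs)), key=costs.__getitem__)
        best, second = ranked[0], ranked[1]
        confidence = (costs[second] - costs[best]) / (costs[second] + 1e-6)
        # Keep only depths with high confidence and relatively low cost.
        depths.append(best if confidence > confidence_threshold else None)
    return depths
```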

Computer Vision Based Driver Assistance Devices, Systems, Methods and Associated Computer Executable Code

The present invention includes computer vision based driver assistance devices, systems, methods and associated computer executable code (hereinafter collectively referred to as: “ADAS”). According to some embodiments, an ADAS may include one or more fixed image/video sensors and one or more adjustable or otherwise movable image/video sensors, characterized by different dimensions of fields of view. According to some embodiments of the present invention, an ADAS may include improved image processing. According to some embodiments, an ADAS may also include one or more sensors adapted to monitor/sense an interior of the vehicle and/or the persons within. An ADAS may include one or more sensors adapted to detect parameters relating to the driver of the vehicle and processing circuitry adapted to assess mental conditions/alertness of the driver and directions of driver gaze. These may be used to modify ADAS operation/thresholds.

SYSTEMS AND METHODS FOR MAPPING AN ENVIRONMENT
20180012370 · 2018-01-11 ·

A method for mapping an environment by an electronic device is described. The method includes obtaining a set of sensor measurements. The method also includes determining a set of voxel occupancy probability distributions respectively corresponding to a set of voxels based on the set of sensor measurements. Each of the voxel occupancy probability distributions represents a probability of occupancy of a voxel over a range of occupation densities. The range includes partial occupation densities.
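A per-voxel occupancy probability distribution over a range of occupation densities (including partial densities) can be updated with a standard Bayesian step, sketched below. The discretization and the measurement likelihoods are assumptions for illustration:

```python
def update_voxel_distribution(prior, likelihoods):
    """Bayesian update of one voxel's occupancy-density distribution.

    prior: probabilities over discretized occupation-density bins
    (e.g. 0.0, 0.25, 0.5, 0.75, 1.0), summing to 1.
    likelihoods: P(sensor measurement | density) for the same bins.
    """
    posterior = [p * l for p, l in zip(prior, likelihoods)]
    total = sum(posterior)
    return [p / total for p in posterior]  # renormalize to a distribution
```

Repeating this update over the set of sensor measurements yields one distribution per voxel, rather than a single binary occupied/free estimate.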

Neural network object detection

A first six degree-of-freedom (DoF) pose of an object from a perspective of a first image sensor is determined with a neural network. A second six DoF pose of the object from a perspective of a second image sensor is determined with the neural network. A pose offset between the first and second six DoF poses is determined. A first projection offset is determined for a first two-dimensional (2D) bounding box generated from the first six DoF pose. A second projection offset is determined for a second 2D bounding box generated from the second six DoF pose. A total offset is determined by combining the pose offset, the first projection offset, and the second projection offset. Parameters of a loss function are updated based on the total offset. The updated parameters are provided to the neural network to obtain an updated total offset.
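The combination of the three offsets into a total offset can be sketched as below. The elementwise L1 pose distance and the equal-weight sum are assumptions; the abstract states only that the pose offset and the two projection offsets are combined:

```python
def total_offset(pose_a, pose_b, box_a_offset, box_b_offset):
    """Combine pose and projection offsets into one training signal.

    pose_a, pose_b: six-DoF poses (x, y, z, roll, pitch, yaw) of the same
    object from the two image sensors. box_a_offset, box_b_offset: scalar
    projection offsets for the 2D bounding boxes generated from each pose.
    """
    pose_offset = sum(abs(a - b) for a, b in zip(pose_a, pose_b))
    return pose_offset + box_a_offset + box_b_offset
```

In training, this total offset would drive the loss-parameter update that is fed back to the neural network.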

DETERMINING ROAD LOCATION OF A TARGET VEHICLE BASED ON TRACKED TRAJECTORY
20230237689 · 2023-07-27 ·

Systems and methods are provided for navigating a host vehicle. In an embodiment, a processing device may be configured to receive images captured over a time period; analyze the images to identify a target vehicle; receive map information including a plurality of target trajectories; determine, based on analysis of the images, first and second estimated positions of the target vehicle within the time period; determine, based on the first and second estimated positions, a trajectory of the target vehicle over the time period; compare the determined trajectory to the plurality of target trajectories to identify a target trajectory traversed by the target vehicle; determine, based on the identified target trajectory, a position of the target vehicle; and determine a navigational action for the host vehicle based on the determined position.
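The trajectory-comparison step can be sketched as a nearest-trajectory search. The mean nearest-point distance used as the comparison metric is an illustrative assumption; the abstract does not specify how trajectories are compared:

```python
def match_trajectory(observed, target_trajectories):
    """Identify which mapped target trajectory the target vehicle traversed.

    observed: list of (x, y) positions estimated from the captured images.
    target_trajectories: dict mapping trajectory id -> list of (x, y) points.
    """
    def score(traj):
        # Mean squared distance from each observed point to its nearest
        # point on the candidate trajectory (illustrative metric).
        return sum(
            min((px - tx) ** 2 + (py - ty) ** 2 for tx, ty in traj)
            for px, py in observed
        ) / len(observed)

    return min(target_trajectories, key=lambda k: score(target_trajectories[k]))
```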

Multiple Stage Image Based Object Detection and Recognition

Systems, methods, tangible non-transitory computer-readable media, and devices for autonomous vehicle operation are provided. For example, a computing system can receive object data that includes portions of sensor data. The computing system can determine, in a first stage of a multiple stage classification using hardware components, one or more first stage characteristics of the portions of sensor data based on a first machine-learned model. In a second stage of the multiple stage classification, the computing system can determine second stage characteristics of the portions of sensor data based on a second machine-learned model. The computing system can generate an object output based on the first stage characteristics and the second stage characteristics. The object output can include indications associated with detection of objects in the portions of sensor data.
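The two-stage flow above (a first-stage model producing coarse characteristics, gating a second-stage model) can be sketched as a simple pipeline. The models, the gating score, and the output fields are placeholders; the abstract does not specify the stages' internals:

```python
def two_stage_detect(segments, first_model, second_model, gate=0.5):
    """Two-stage detection over portions of sensor data.

    first_model: inexpensive first-stage classifier (e.g. suitable for
    hardware execution) returning a coarse objectness score per segment.
    second_model: heavier second-stage classifier, run only on segments
    that pass the first stage.
    """
    outputs = []
    for seg in segments:
        score = first_model(seg)
        if score < gate:
            continue                     # rejected in the first stage
        label = second_model(seg)        # refined in the second stage
        outputs.append({"segment": seg, "score": score, "label": label})
    return outputs
```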