
3D position estimation of objects from a monocular camera using a set of known 3D points on an underlying surface
11281918 · 2022-03-22

Disclosed herein are methods and systems for determining a location of an object within an environment. An example method may include determining a three-dimensional (3D) location of a plurality of reference points in an environment, receiving a two-dimensional (2D) image of a portion of the environment that contains an object, selecting certain reference points from the plurality of reference points that form a polygon when projected into the 2D image that contains at least a portion of the object, determining an intersection point of a ray directed toward the object and a 3D polygon formed by the selected reference points, and based on the intersection point of the ray directed toward the object and the 3D polygon formed by the selected reference points, determining a 3D location of the object in the environment.
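
The core geometric step, intersecting the ray through the object's pixel with a triangle of the mesh formed by the reference points, can be sketched as follows. This is a minimal illustration using the standard Möller–Trumbore ray/triangle test; the abstract does not specify the intersection algorithm, and the triangulation of the projected polygon and the camera model are assumed.

```python
def sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def dot(a, b):
    return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]

def ray_triangle_intersect(origin, direction, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore ray/triangle test; returns the 3D hit point or None."""
    e1, e2 = sub(v1, v0), sub(v2, v0)
    h = cross(direction, e2)
    a = dot(e1, h)
    if abs(a) < eps:           # ray parallel to the triangle plane
        return None
    f = 1.0 / a
    s = sub(origin, v0)
    u = f * dot(s, h)
    if u < 0.0 or u > 1.0:     # hit point outside the triangle
        return None
    q = cross(s, e1)
    v = f * dot(direction, q)
    if v < 0.0 or u + v > 1.0:
        return None
    t = f * dot(e2, q)
    if t <= eps:               # intersection behind the ray origin
        return None
    return tuple(origin[i] + t * direction[i] for i in range(3))
```

Projecting the 3D reference points into the 2D image and back-projecting the object's pixel into a ray are standard pinhole-camera operations, omitted here; the intersection point is the estimated 3D location of the object on the underlying surface.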

DETECTOR FOR DETERMINING A POSITION OF AT LEAST ONE OBJECT

Described herein is a detector for determining a position of an object. The detector includes a sensor element having a matrix of optical sensors, wherein the sensor element is configured to determine a reflection image. The detector also includes an evaluation device configured to select a reflection feature of the reflection image at a first image position in the reflection image, determine a longitudinal coordinate z of the selected reflection feature by optimizing a blurring function f_a, and determine a reference feature in a reference image at a second image position in the reference image corresponding to the reflection feature. The reference image and the reflection image are determined at two different spatial configurations, wherein the spatial configurations differ by a relative spatial constellation, wherein the evaluation device is configured to determine the relative spatial constellation from the longitudinal coordinate z and the first and the second image positions.
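
The depth-from-defocus idea behind the longitudinal coordinate z can be illustrated with a deliberately simplified model. The abstract does not specify the blurring function f_a or how it is optimized; the thin-lens-style blur model, the constants, and the one-sided search range below are all assumptions for illustration only.

```python
def blur_sigma(z, k=500.0, z_focus=2.0):
    # assumed model: blur width grows with defocus |1/z - 1/z_focus|
    return k * abs(1.0 / z - 1.0 / z_focus)

def estimate_depth(sigma_obs, z_min=2.0, z_max=10.0, steps=10000):
    """Brute-force 1D search for the depth whose predicted blur matches the
    observed blur of the reflection feature. The search is restricted to
    z >= z_focus because the blur model is ambiguous between the near and
    far side of the focal plane."""
    best_z, best_err = z_min, float("inf")
    for i in range(steps + 1):
        z = z_min + (z_max - z_min) * i / steps
        err = (blur_sigma(z) - sigma_obs) ** 2
        if err < best_err:
            best_z, best_err = z, err
    return best_z
```

In practice the optimization would run over the pixel intensities of the feature patch rather than a precomputed blur width, but the structure, fit a parametric blur model and read off z, is the same.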

Apparatus for a driver assistance system
11292464 · 2022-04-05

A driver assistance apparatus is configured to determine an object position sequence for each of a plurality of objects and generate an object track approximating each respective object position sequence. The apparatus also sorts the object tracks into at least one object group according to the value of at least one parameter of each of the object tracks. For each object group, a swarm function is generated to approximate the object position sequences of the object tracks that are members of the respective object group. A swarm lane is generated according to each swarm function, each swarm lane representing a portion of a lane. A corresponding method is also provided.

PREDICTING THREE-DIMENSIONAL FEATURES FOR AUTONOMOUS DRIVING

A processor coupled to memory is configured to receive image data based on an image captured by a camera of a vehicle. The image data is used as a basis of an input to a trained machine learning model trained to predict a three-dimensional trajectory of a machine learning feature. The three-dimensional trajectory of the machine learning feature is provided for automatically controlling the vehicle.

SYSTEMS AND METHODS FOR DETECTING TRAILER ANGLE
20220092318 · 2022-03-24

Systems and methods for detecting trailer angle are provided. In one aspect, an in-vehicle control system includes an optical sensor configured to be mounted on a tractor so as to face a trailer coupled to the tractor, the optical sensor further configured to generate optical data indicative of an angle formed between the trailer and the tractor. The system further includes a processor and a computer-readable memory in communication with the processor and having stored thereon computer-executable instructions to cause the processor to receive the optical data from the optical sensor, determine at least one candidate plane representative of a surface of the trailer visible in the optical data based on the optical data, and determine an angle between the trailer and the tractor based on the at least one candidate plane.
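
The angle-from-plane step can be illustrated with a simplified 2D analogue: fitting the principal direction of a horizontal slice of points on the trailer's front face. The patent works with candidate planes in 3D; the slice, the closed-form principal-axis formula, and the variable names below are assumptions for illustration.

```python
import math

def trailer_angle(points):
    """Estimate the trailer's yaw relative to the sensor's x-axis from 2D
    points on the trailer face (e.g. one horizontal slice of lidar or depth
    data), via the principal axis of the centered second moments."""
    n = len(points)
    cx = sum(p[0] for p in points) / n
    cy = sum(p[1] for p in points) / n
    sxx = sum((p[0] - cx) ** 2 for p in points)
    syy = sum((p[1] - cy) ** 2 for p in points)
    sxy = sum((p[0] - cx) * (p[1] - cy) for p in points)
    # orientation of the major axis of the point distribution, in radians
    return 0.5 * math.atan2(2.0 * sxy, sxx - syy)
```

With full 3D optical data the same idea becomes a plane fit (e.g. least squares or RANSAC); the articulation angle then follows from the fitted plane's normal and the tractor's longitudinal axis.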

Detection of misalignment hotspots for high definition maps for navigating autonomous vehicles
11280609 · 2022-03-22

A high-definition map system receives sensor data from vehicles travelling along routes and combines the data to generate a high-definition map for use in driving vehicles, for example, for guiding autonomous vehicles. A pose graph is built from the collected data, each pose representing the location and orientation of a vehicle. The pose graph is optimized to minimize the error of the constraints between poses. Points associated with a surface are assigned a confidence measure determined using a measure of the hardness or softness of that surface. A machine-learning-based result filter detects bad alignment results and prevents them from entering the subsequent global pose optimization. The alignment framework is parallelizable for execution on a parallel/distributed architecture. Alignment hot spots are detected for further verification and improvement. The system supports incremental updates, allowing refinements of subgraphs that incrementally improve the high-definition map and keep it up to date.
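
Pose-graph optimization, adjusting poses so that the residuals of relative-pose constraints are jointly minimized, can be shown in a toy form. Real systems optimize full 6-DoF poses with sparse solvers; the 1D scalar poses, the gradient-descent solver, and the example constraints below are illustrative simplifications.

```python
def optimize_pose_graph(constraints, n_poses, iters=5000, lr=0.1):
    """Toy 1D pose-graph optimization. Poses are scalars; each constraint
    (i, j, d) says the measurement between poses i and j was x[j] - x[i] ~ d.
    Pose 0 is held fixed as the anchor; the rest are adjusted by gradient
    descent on the sum of squared residuals."""
    x = [0.0] * n_poses
    for _ in range(iters):
        grad = [0.0] * n_poses
        for i, j, d in constraints:
            r = (x[j] - x[i]) - d
            grad[j] += 2.0 * r
            grad[i] -= 2.0 * r
        for k in range(1, n_poses):   # pose 0 stays fixed
            x[k] -= lr * grad[k]
    return x

# odometry says each step is 1.0 m, but a loop-closure alignment says the
# total is 2.1 m; the optimizer spreads the discrepancy over both steps
poses = optimize_pose_graph([(0, 1, 1.0), (1, 2, 1.0), (0, 2, 2.1)], 3)
```

The result filter described in the abstract would act before this stage, dropping constraints whose alignment is judged bad so they never enter the optimization.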

Vehicle surroundings recognition apparatus

In a vehicle surroundings recognition apparatus that recognizes a specific target around a vehicle from an image captured by an imaging unit, a shadow detection unit is configured to detect a shadow region based on a difference, between a plurality of elements constituting the image, in intensity of a specific color component included in colors represented by the plurality of elements and a difference in luminance between the plurality of elements, the shadow region being a region in the image in which a shadow is cast on a surface of the road. A feature point detection unit is configured to detect feature points in the image. A recognition unit is configured to recognize the target based on the shadow region detected by the shadow detection unit and a group of feature points detected by the feature point detection unit.
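
A per-pixel version of the shadow test, darker than the surrounding road yet with the color balance a shadow would preserve, can be sketched as follows. The abstract does not name the specific color component or thresholds; the blue-fraction heuristic (cast shadows lit by skylight skew blue) and all constants below are assumptions.

```python
def is_shadow(pixel, road_ref, lum_drop=0.4):
    """Heuristic shadow test with illustrative thresholds: a shadow pixel is
    much darker than a reference road pixel, while its blue fraction does
    not fall. pixel and road_ref are (R, G, B) tuples."""
    r, g, b = pixel
    rr, rg, rb = road_ref
    lum = 0.299 * r + 0.587 * g + 0.114 * b
    ref_lum = 0.299 * rr + 0.587 * rg + 0.114 * rb
    if lum >= ref_lum * (1.0 - lum_drop):
        return False                      # not dark enough to be a shadow
    blue_frac = b / max(r + g + b, 1e-9)
    ref_blue_frac = rb / max(rr + rg + rb, 1e-9)
    return blue_frac >= ref_blue_frac     # darker but not less blue -> shadow
```

A dark object (e.g. a tire) typically fails the color test because it darkens all channels without the blue shift, which is what lets the recognition unit separate shadow regions from genuine targets.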

Demarcation line recognition device

The present invention addresses the problem of enabling an accurate determination of line type using a plurality of cameras. A demarcation line recognition device of the present invention includes a plurality of cameras (101); a line type determination unit (210) that senses demarcation line candidates from respective images obtained by the plurality of cameras and determines line types of the sensed demarcation line candidates; a demarcation line position calculation unit that calculates the positions of the sensed demarcation line candidates; a reliability level calculation unit that computes a reliability level of each demarcation line candidate by using positional relationships between the sensed demarcation line candidates; a probability computation unit (204) to which line types of demarcation line candidates for which the reliability level is equal to or greater than a threshold value are input and that determines the probability of each line type; and a line type decision unit (205) that finally decides the line types of the demarcation lines on the basis of the results of the probability computation unit.
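
The reliability-gated fusion of line-type votes from multiple cameras can be sketched as below. The patent's reliability and probability computations are not specified; the additive score aggregation, the threshold value, and the data layout here are assumptions for illustration.

```python
def decide_line_type(candidates, threshold=0.5):
    """candidates: (line_type, reliability) votes for the same demarcation
    line, e.g. one per camera. Votes below the reliability threshold are
    discarded; the rest accumulate into per-type scores that are normalized
    to probabilities, and the most probable line type is decided."""
    scores = {}
    for line_type, reliability in candidates:
        if reliability >= threshold:
            scores[line_type] = scores.get(line_type, 0.0) + reliability
    if not scores:
        return None, {}      # no sufficiently reliable candidate
    total = sum(scores.values())
    probs = {t: s / total for t, s in scores.items()}
    return max(probs, key=probs.get), probs
```

For example, two cameras voting "solid" with reliabilities 0.9 and 0.8 outweigh one "dashed" vote at 0.6, and a 0.3-reliability vote is ignored entirely.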

Machine learning enhanced vehicle merging

A method receives a first image set depicting a merging zone, the first image set including first image(s) associated with a first timestamp; determines, using a trained first machine learning logic, a first state describing a traffic condition of the merging zone at the first timestamp using the first image set; determines, from a sequence of states describing the traffic condition of the merging zone at a sequence of timestamps, using a trained second machine learning logic, second state(s) associated with second timestamp(s) prior to the first timestamp of the first state using a trained backward time distance; computes, using a trained third machine learning logic, impact metric(s) for merging action(s) using the first state, the second state(s), and the merging action(s); selects, from the merging action(s), a first merging action based on the impact metric(s); and provides a merging instruction including the first merging action to a merging vehicle.
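
The final selection step, scoring each candidate merging action by its impact metric and choosing the best, can be sketched as follows. In the patent the impact metric comes from a trained model over the first and second states; the hand-crafted score, the slot names, and the state layout below are hypothetical stand-ins.

```python
def impact_metric(state, action):
    """Hypothetical hand-crafted impact score (the patent learns this with a
    trained model): penalize small gaps and large speed mismatch for the
    merge slot chosen by the action."""
    gap_m, rel_speed_mps = state["slots"][action]
    return 1.0 / max(gap_m, 0.1) + abs(rel_speed_mps)

def select_merging_action(state):
    # evaluate every candidate merging action and pick the lowest-impact one
    return min(state["slots"], key=lambda a: impact_metric(state, a))

# illustrative traffic state: candidate merge slots as (gap in m, relative speed in m/s)
state = {"slots": {"front_of_truck": (8.0, 3.0),
                   "mid_gap": (15.0, 0.5),
                   "rear_gap": (5.0, 1.0)}}
```

The selected action would then be packaged into the merging instruction sent to the merging vehicle.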

Stereo camera device

An objective of the present invention is, in a stereo camera device, to determine an accurate image position along the direction of travel so as to detect an obstacle or a preceding vehicle on the road at an early stage. Provided is a stereo camera device for measuring the distance to a solid object from images photographed with a plurality of cameras, the device comprising: a wide-angle image cropping part for cropping a portion of the images; a distance image cropping part for cropping and enlarging a portion of the images; and a road shape determination part for determining the road shape, including slope information, of the road being traveled. The cropping position and/or range of the distance image cropping part is determined on the basis of the road shape at a prescribed distance derived by the road shape determination part.
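
Why the slope matters for choosing the crop can be shown with a pinhole projection of the road surface: an uphill grade raises the road point at a given distance, so it appears higher in the image and the distance-image crop must shift accordingly. The camera height, focal length, and principal-point values below are illustrative assumptions.

```python
def crop_row_for_distance(d, slope, cam_height=1.3, f=1400.0, cy=600.0):
    """Image row (pixels, y grows downward) where the road surface appears at
    longitudinal distance d metres, for a forward-looking pinhole camera
    mounted cam_height metres above the road. slope is the road grade
    (rise per metre of distance)."""
    road_elevation = slope * d   # road height at distance d, relative to the camera's ground point
    return cy + f * (cam_height - road_elevation) / d
```

Centering the distance-image crop on this row keeps the enlarged region on the road at the prescribed distance; on a flat-road assumption an uphill stretch would otherwise slide out of the crop.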