
SENSOR FUSION FOR AUTONOMOUS MACHINE APPLICATIONS USING MACHINE LEARNING

In various examples, a multi-sensor fusion machine learning model – such as a deep neural network (DNN) – may be deployed to fuse data from a plurality of individual machine learning models. As such, the multi-sensor fusion network may use outputs from a plurality of machine learning models as input to generate a fused output that represents data from the fields of view or sensory fields of each of the sensors supplying the machine learning models, while accounting for learned associations between boundary or overlap regions of the various fields of view of the source sensors. In this way, the fused output may be less likely to include duplicate, inaccurate, or noisy data with respect to objects or features in the environment, as the fusion network may be trained to account for multiple instances of the same object appearing in different input representations.
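The deduplication behavior the fusion network is trained to learn can be illustrated with a simple non-learned stand-in: merging per-sensor detections and dropping duplicates in overlap regions by intersection-over-union. `fuse_detections`, the box format, and the IoU threshold are illustrative assumptions, not the patented DNN:

```python
def iou(a, b):
    # a, b: (x1, y1, x2, y2) axis-aligned boxes in a common frame
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def fuse_detections(per_sensor_boxes, iou_thresh=0.5):
    """Merge detections from several sensor-specific models, dropping
    duplicates of the same object seen in overlapping fields of view."""
    fused = []
    for boxes in per_sensor_boxes:      # one list of boxes per source model
        for box in boxes:
            if all(iou(box, kept) < iou_thresh for kept in fused):
                fused.append(box)
    return fused
```

A learned fusion network would replace this hard threshold with associations learned from training data, which is the point of the approach described above.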

METHOD AND APPARATUS FOR DETECTING DRIVABLE AREA, MOBILE DEVICE AND STORAGE MEDIUM
20230278587 · 2023-09-07

A method for detecting a drivable area includes: collecting N consecutive video frames of a road when a vehicle is driving, where N is a positive integer greater than 1; determining a historical trajectory of a dynamic obstacle and position information of a static obstacle included in the N consecutive video frames by analyzing the N consecutive video frames with a 3D detection algorithm; correcting the historical trajectory and the position information based on a preset rule; determining a predicted trajectory of the dynamic obstacle based on the corrected historical trajectory; and determining the drivable area of the vehicle based on the predicted trajectory and the corrected position information.
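The final step – combining predicted dynamic-obstacle trajectories with corrected static-obstacle positions to carve out the drivable area – can be sketched on a cell grid. `drivable_area`, the Chebyshev safety margin, and the grid representation are illustrative assumptions, not the claimed method:

```python
def drivable_area(grid_w, grid_h, static_cells, predicted_trajectory, margin=1):
    """Return the set of free (drivable) cells: all grid cells minus static
    obstacle cells and cells within `margin` (Chebyshev distance) of a
    predicted dynamic-obstacle trajectory."""
    blocked = set(static_cells)
    for (x, y) in predicted_trajectory:
        for dx in range(-margin, margin + 1):
            for dy in range(-margin, margin + 1):
                blocked.add((x + dx, y + dy))
    return {(x, y) for x in range(grid_w) for y in range(grid_h)} - blocked
```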

Road plane output with lateral slope
11651595 · 2023-05-16

The present disclosure generally relates to processing visual data of a road surface that includes a vertical deviation with a lateral slope. In some embodiments, a system determines a path expected to be traversed by at least one wheel of the vehicle on a road surface. In some embodiments, a system determines, using at least two images captured by one or more cameras, a height of the road surface for at least one point along the path to be traversed by the wheel. In some embodiments, a system computes an indication of a lateral slope of the road surface at the at least one point along the path. In some embodiments, a system outputs, on a vehicle interface bus, an indication of the height of the point and an indication of the lateral slope at the at least one point along the path.
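The output described above – a height and a lateral slope per point along the wheel path – can be sketched as a simple per-point computation from left/right height samples. `road_profile` and the mean-height convention are illustrative assumptions; the disclosure's actual height estimation uses multi-view geometry on the captured images:

```python
def road_profile(heights_left, heights_right, track_width_m):
    """For each sampled point along the path, report the mean surface height
    and the lateral slope (rise over run across the path width), roughly the
    per-point indication a vehicle interface bus message could carry."""
    out = []
    for hl, hr in zip(heights_left, heights_right):
        out.append({"height": (hl + hr) / 2.0,
                    "lateral_slope": (hr - hl) / track_width_m})
    return out
```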

System and method for determining car to lane distance
11727811 · 2023-08-15

A system and method for determining car to lane distance is provided. In one aspect, the system includes a camera configured to generate an image, a processor, and a computer-readable memory. The processor is configured to receive the image from the camera, generate a wheel segmentation map representative of one or more wheels detected in the image, and generate a lane segmentation map representative of one or more lanes detected in the image. For at least one of the wheels in the wheel segmentation map, the processor is also configured to determine a distance between the wheel and at least one nearby lane in the lane segmentation map. The processor is further configured to determine a distance between a vehicle in the image and the lane based on the distance between the wheel and the lane.
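The distance step above can be sketched as a nearest-pixel search between the two binary segmentation maps; `min_pixel_distance` and the brute-force search are illustrative assumptions (a production system would convert pixel distance to metric distance and use a faster transform):

```python
import math

def min_pixel_distance(wheel_mask, lane_mask):
    """Minimum Euclidean distance, in pixels, between any wheel pixel and any
    lane pixel in two binary segmentation maps of the same shape."""
    wheels = [(r, c) for r, row in enumerate(wheel_mask)
              for c, v in enumerate(row) if v]
    lanes = [(r, c) for r, row in enumerate(lane_mask)
             for c, v in enumerate(row) if v]
    return min(math.hypot(wr - lr, wc - lc)
               for wr, wc in wheels for lr, lc in lanes)
```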

Method and apparatus for estimating a location of a vehicle

A method, apparatus and computer program product are provided to estimate the location of a vehicle based at least in part upon two or more road signs that are depicted by one or more images captured by one or more image capture devices onboard the vehicle. By relying at least in part upon the two or more road signs, the location of the vehicle may be refined or otherwise estimated with enhanced accuracy, such as in instances in which there is an inability to maintain a line-of-sight with the satellites of a satellite positioning system or otherwise in instances in which the location estimated based upon reliance on satellite or radio signals is considered insufficient. As a result, the vehicle may be navigated in a more informed and reliable manner and the relationship of the vehicle to other vehicles may be determined with greater confidence.
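One way a position can be refined from two road signs with known map positions is classic two-circle trilateration on measured ranges; `locate_from_two_signs` and the disambiguation rule are illustrative assumptions, not the claimed method (which may use bearings, map matching, or other geometry):

```python
import math

def locate_from_two_signs(p1, p2, d1, d2, prefer_y_positive=True):
    """Intersect two circles centred on known sign positions p1, p2 with
    measured ranges d1, d2; returns one of the two intersection points,
    chosen by a caller-supplied disambiguation preference."""
    (x1, y1), (x2, y2) = p1, p2
    d = math.hypot(x2 - x1, y2 - y1)
    a = (d1 ** 2 - d2 ** 2 + d ** 2) / (2 * d)   # distance from p1 along the baseline
    h = math.sqrt(max(0.0, d1 ** 2 - a ** 2))    # offset perpendicular to the baseline
    mx = x1 + a * (x2 - x1) / d
    my = y1 + a * (y2 - y1) / d
    sol = (mx + h * (y2 - y1) / d, my - h * (x2 - x1) / d)
    alt = (mx - h * (y2 - y1) / d, my + h * (x2 - x1) / d)
    return sol if (sol[1] >= alt[1]) == prefer_y_positive else alt
```

The two mirror solutions are the reason a second cue (heading, road geometry, or a third sign) is typically needed to disambiguate.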

IMAGE PROCESSING DEVICE, MOBILE OBJECT, IMAGE PROCESSING METHOD, AND STORAGE MEDIUM
20230132456 · 2023-05-04

To implement an image processing device capable of improving rear visibility of a mobile object, the image processing device includes: an acquisition unit configured to acquire video from an imaging device that captures images of the rear of a mobile object; a display control unit configured to cause a display unit to display a first range of the video acquired by the acquisition unit; and a detection unit configured to detect a predetermined target based on the acquired video. When the detection unit detects the predetermined target, the display control unit causes the display unit to display a second range of the video, different from the first range.
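The display-control behavior above amounts to switching the displayed crop on detection. A minimal sketch follows; the hold-off hysteresis (to avoid flicker between frames) is an assumption of this sketch, as the abstract only specifies the switch itself:

```python
class RearViewController:
    """Switch the displayed range of the rear-camera video when a predetermined
    target is detected, holding the alternate range for a few frames so the
    view does not flicker on intermittent detections."""
    def __init__(self, first_range, second_range, hold_frames=30):
        self.first, self.second = first_range, second_range
        self.hold, self.countdown = hold_frames, 0

    def update(self, target_detected):
        if target_detected:
            self.countdown = self.hold       # (re)arm the hold-off window
        elif self.countdown > 0:
            self.countdown -= 1              # decay toward the default range
        return self.second if self.countdown > 0 else self.first
```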

HIGHLY-ACCURATE AND SELF-ADJUSTING IMAGING SENSOR AUTO-CALIBRATION FOR IN-VEHICLE ADVANCED DRIVER ASSISTANCE SYSTEM (ADAS) OR OTHER SYSTEM
20230136214 · 2023-05-04

A method includes obtaining multiple images of a scene using an imaging sensor associated with a vehicle, where the images of the scene capture lane marking lines associated with a traffic lane in which the vehicle is traveling. The method also includes identifying, in each of at least some of the images, a vanishing point based on the lane marking lines captured in the image. The method further includes identifying an average position of the vanishing points in the at least some of the images. In addition, the method includes determining one or more extrinsic calibration parameters of the imaging sensor based on the average position of the vanishing points. The one or more extrinsic calibration parameters of the imaging sensor may include a pitch angle and a yaw angle of the imaging sensor.
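The final conversion – from an averaged vanishing point to pitch and yaw – can be sketched with a pinhole camera model. `extrinsics_from_vanishing_points` and the intrinsics (`fx`, `fy`, `cx`, `cy`) are illustrative assumptions; the claimed method additionally covers how the per-image vanishing points are found from the lane marking lines:

```python
import math

def extrinsics_from_vanishing_points(vps, fx, fy, cx, cy):
    """Average per-frame vanishing points (each the intersection of the lane
    marking lines in one image) and convert the mean into camera pitch and
    yaw under a pinhole model with principal point (cx, cy)."""
    mx = sum(x for x, _ in vps) / len(vps)
    my = sum(y for _, y in vps) / len(vps)
    yaw = math.atan((mx - cx) / fx)    # left/right rotation of the sensor
    pitch = math.atan((my - cy) / fy)  # up/down rotation of the sensor
    return pitch, yaw
```

Averaging over many frames is what makes the estimate self-adjusting: single-frame vanishing points are noisy, but their mean drifts only with the true mounting angles.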

Systems and methods for vehicle offset navigation

A system for a vehicle is provided. The system may include a memory and at least one processor configured to: access a plurality of images of a forward-facing view from the vehicle, the plurality of images corresponding to image data obtained by a camera; determine from the images a first lane marking on a first side of a lane, the lane through which the vehicle can navigate, and a second lane marking on a second side of the lane opposite the first side; navigate the vehicle autonomously, relatively centered between the first and second lane markings; determine from the plurality of images that an object is on the first side or the second side of the lane, with the object lying beyond the first or second lane marking; and navigate the vehicle autonomously to travel over a driving path that is offset from a center of the lane.
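The offset behavior above can be sketched as a lateral-target rule: hold lane center by default, and bias the path away from the side where an object lies beyond the marking. `lateral_target`, the side encoding, and the offset magnitude are illustrative assumptions:

```python
def lateral_target(left_marking_x, right_marking_x, object_side=None, offset=0.3):
    """Lateral target position inside the lane: the centre by default, or the
    centre shifted away from the side on which an object lies beyond the lane
    marking. `offset` (metres) is an illustrative tuning value."""
    center = (left_marking_x + right_marking_x) / 2.0
    if object_side == "left":
        return center + offset   # shift toward the right marking
    if object_side == "right":
        return center - offset   # shift toward the left marking
    return center
```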

SYSTEM AND METHOD FOR FREE SPACE ESTIMATION

A system and method for estimating free space, including: applying a machine learning model to camera images of a navigation area, where the navigation area is divided into cells; synchronizing point cloud data from the navigation area with the processed camera images; and associating, with each cell in the navigation area, a probability that the cell is occupied and classifications of objects that could occupy it, based on sensor data, sensor noise, and the machine learning model.
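The per-cell probability association can be sketched as an independent log-odds fusion of the camera model's output with a point-cloud-derived probability; `fuse_cell` is an illustrative assumption, with sensor noise assumed to be folded into each per-sensor probability:

```python
import math

def logodds(p):
    """Log-odds of a probability in (0, 1)."""
    return math.log(p / (1.0 - p))

def fuse_cell(prior_p, camera_p, lidar_p):
    """Fuse the camera model's occupancy probability with a point-cloud-derived
    probability for one grid cell via additive log-odds updates, then convert
    back to a probability with the logistic function."""
    l = logodds(prior_p) + logodds(camera_p) + logodds(lidar_p)
    return 1.0 / (1.0 + math.exp(-l))
```

Log-odds addition is the standard occupancy-grid trick: agreeing sensors sharpen the estimate, while a 0.5 (uninformative) reading leaves the cell unchanged.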

Lane detection and distance estimation using single-view geometry

Disclosed are methods, devices, and computer-readable media for detecting lanes and objects in image frames of a monocular camera. In one embodiment, a method is disclosed comprising receiving a sample set of image frames; detecting a plurality of markers in the sample set of image frames using a convolutional neural network (CNN); fitting lines based on the plurality of markers; detecting a plurality of vanishing points based on the lines; identifying a best fitting horizon for the sample set of image frames via a RANSAC algorithm; computing an inverse perspective mapping (IPM) based on the best fitting horizon; and computing a lane width estimate based on the sample set of image frames, using the IPM to obtain a rectified view and fitting parallel lines in that view.
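Two of the geometric steps above – intersecting fitted lane lines to get a vanishing point, and choosing a consensus horizon row – can be sketched as follows. `intersect`, `fit_horizon`, and the 1-D hypothesis sampling are illustrative assumptions in the spirit of RANSAC, not the disclosed algorithm:

```python
import random

def intersect(l1, l2):
    """Intersection of two non-parallel lines given as (slope, intercept)
    in image coordinates; candidate vanishing point of a lane-line pair."""
    (m1, b1), (m2, b2) = l1, l2
    x = (b2 - b1) / (m1 - m2)
    return x, m1 * x + b1

def fit_horizon(vanishing_points, n_iters=100, inlier_tol=2.0, seed=0):
    """Pick the horizon row that the most vanishing points agree on: sample
    one point's y as the hypothesis, count inliers within a pixel tolerance,
    and keep the hypothesis with the largest consensus."""
    rng = random.Random(seed)
    best_y, best_inliers = None, -1
    for _ in range(n_iters):
        y = rng.choice(vanishing_points)[1]
        inliers = sum(1 for _, vy in vanishing_points if abs(vy - y) <= inlier_tol)
        if inliers > best_inliers:
            best_y, best_inliers = y, inliers
    return best_y
```

With the horizon fixed, the IPM homography can be built to rectify the ground plane, where lane lines become parallel and lane width is a direct lateral measurement.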