G01S7/4802

Methods And System For Predicting Trajectories Of Actors With Respect To A Drivable Area
20230043601 · 2023-02-09 ·

Methods and systems for controlling navigation of a vehicle are disclosed. The system will first identify a plurality of goal points corresponding to a drivable area that a vehicle is traversing or will traverse, where the plurality of goal points are potential targets that an uncertain road user (URU) within the drivable area can use to exit the drivable area. The system will then receive perception information relating to the URU within the drivable area, and identify a target exit point from the plurality of goal points based on a score. The score is computed based on the received perception information and a loss function. The system will generate a trajectory of the URU from a current position of the URU to the target exit point, and control navigation of the vehicle to avoid collision with the URU.
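As a purely illustrative sketch of the exit-point selection described above: each goal point is scored with an assumed loss combining distance from the URU and misalignment with its perceived heading, the lowest-loss goal point becomes the target exit point, and a simple trajectory is generated toward it. The loss terms, weights, and straight-line trajectory are assumptions, not the patent's actual formulation.

```python
import math

def select_target_exit_point(uru_pos, uru_heading, goal_points,
                             w_dist=1.0, w_heading=2.0):
    """Return the goal point minimizing an assumed distance + heading loss."""
    def loss(goal):
        dx, dy = goal[0] - uru_pos[0], goal[1] - uru_pos[1]
        dist = math.hypot(dx, dy)
        bearing = math.atan2(dy, dx)
        # smallest angular difference between URU heading and bearing to goal
        dtheta = abs((bearing - uru_heading + math.pi) % (2 * math.pi) - math.pi)
        return w_dist * dist + w_heading * dtheta
    return min(goal_points, key=loss)

def linear_trajectory(start, goal, n_steps=5):
    """Straight-line URU trajectory from its current position to the exit point."""
    return [(start[0] + (goal[0] - start[0]) * t / n_steps,
             start[1] + (goal[1] - start[1]) * t / n_steps)
            for t in range(n_steps + 1)]
```

The vehicle's planner would then treat the generated trajectory as a predicted URU path to keep clear of.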

Deep learning-based feature extraction for LiDAR localization of autonomous driving vehicles

In one embodiment, a method for extracting point cloud features for use in localizing an autonomous driving vehicle (ADV) includes selecting a first set of keypoints from an online point cloud, the online point cloud generated by a LiDAR device on the ADV for a predicted pose of the ADV; and extracting a first set of feature descriptors from the first set of keypoints using a feature learning neural network running on the ADV. The method further includes locating a second set of keypoints on a pre-built point cloud map, each keypoint of the second set of keypoints corresponding to a keypoint of the first set of keypoints; extracting a second set of feature descriptors from the pre-built point cloud map; and estimating a position and orientation of the ADV based on the first set of feature descriptors, the second set of feature descriptors, and a predicted pose of the ADV.
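The final pose-estimation step can be illustrated with a closed-form 2-D rigid alignment of matched keypoint pairs; this least-squares solve is a simplified stand-in for the learned-descriptor pipeline in the abstract, and the function name and 2-D restriction are assumptions.

```python
import math

def estimate_pose_2d(scan_pts, map_pts):
    """Rigid transform (theta, tx, ty) aligning matched scan keypoints onto
    map keypoints, via the closed-form 2-D least-squares solution."""
    n = len(scan_pts)
    # centroids of both point sets
    csx = sum(p[0] for p in scan_pts) / n
    csy = sum(p[1] for p in scan_pts) / n
    cmx = sum(p[0] for p in map_pts) / n
    cmy = sum(p[1] for p in map_pts) / n
    # rotation from the cross- and dot-correlations of centered points
    num = sum((s[0] - csx) * (m[1] - cmy) - (s[1] - csy) * (m[0] - cmx)
              for s, m in zip(scan_pts, map_pts))
    den = sum((s[0] - csx) * (m[0] - cmx) + (s[1] - csy) * (m[1] - cmy)
              for s, m in zip(scan_pts, map_pts))
    theta = math.atan2(num, den)
    # translation that maps the rotated scan centroid onto the map centroid
    tx = cmx - (csx * math.cos(theta) - csy * math.sin(theta))
    ty = cmy - (csx * math.sin(theta) + csy * math.cos(theta))
    return theta, tx, ty
```

In the described system this refinement would start from the predicted pose and operate on the matched keypoints' descriptors rather than raw coordinates.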

Object detection in vehicles using cross-modality sensors

A system includes first and second sensors and a controller. The first sensor is of a first type and is configured to sense objects around a vehicle and to capture first data about the objects in a frame. The second sensor is of a second type and is configured to sense the objects around the vehicle and to capture second data about the objects in the frame. The controller is configured to down-sample the first and second data to generate down-sampled first and second data having a lower resolution than the first and second data. The controller is configured to identify a first set of the objects by processing the down-sampled first and second data having the lower resolution. The controller is configured to identify a second set of the objects by selectively processing the first and second data from the frame.
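A minimal sketch of the two-stage scheme, with plain 2-D occupancy grids standing in for the two sensors' frames: both modalities are down-sampled and fused for a coarse first pass, then full-resolution cells are re-examined only inside the coarse hits. The thresholds and additive fusion are illustrative assumptions, not the patent's detectors.

```python
def downsample(grid, factor):
    """Average-pool a 2-D grid by `factor` in each dimension."""
    h, w = len(grid), len(grid[0])
    return [[sum(grid[y * factor + dy][x * factor + dx]
                 for dy in range(factor) for dx in range(factor)) / factor ** 2
             for x in range(w // factor)]
            for y in range(h // factor)]

def coarse_detect(low_a, low_b, thresh):
    """First set of objects: low-res cells where the fused response exceeds thresh."""
    return [(y, x) for y, row in enumerate(low_a)
            for x, v in enumerate(row)
            if v + low_b[y][x] > thresh]

def refine(grid_a, grid_b, coarse_hits, factor, thresh):
    """Second set of objects: selectively re-process only the full-resolution
    cells that fall inside coarse hits."""
    hits = []
    for cy, cx in coarse_hits:
        for dy in range(factor):
            for dx in range(factor):
                y, x = cy * factor + dy, cx * factor + dx
                if grid_a[y][x] + grid_b[y][x] > thresh:
                    hits.append((y, x))
    return hits
```

The point of the split is that the expensive full-resolution pass touches only a fraction of the frame.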

Vehicle systems and methods utilizing LIDAR data for road condition estimation

A system and method for estimating road conditions ahead of a vehicle, including: a LIDAR sensor operable for generating a LIDAR point cloud; a processor executing a road condition estimation algorithm stored in a memory, the road condition estimation algorithm performing the steps including: detecting a ground plane or drivable surface in the LIDAR point cloud; superimposing an M×N matrix on at least a portion of the LIDAR point cloud; for each patch of the LIDAR point cloud defined by the M×N matrix, statistically evaluating a relative position, a feature elevation, and a scaled reflectance index; and, from the statistically evaluated relative position, feature elevation, and scaled reflectance index, determining a slipperiness probability for each patch of the LIDAR point cloud; and a vehicle control system operable for, based on the determined slipperiness probability for each patch of the LIDAR point cloud, affecting an operation of the vehicle.
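The per-patch evaluation might be sketched as follows, assuming the ground-plane points are already binned into an M×N grid of patches: each patch's slipperiness probability comes from its elevation spread and mean reflectance index squashed through a logistic. The weights and the logistic combination are assumptions for illustration; the patent does not specify the statistical model.

```python
import math
import statistics

def patch_slipperiness(points_mn):
    """points_mn: M x N nested lists; each patch is a list of
    (elevation, reflectance) samples. Returns an M x N grid of
    slipperiness probabilities in [0, 1]."""
    probs = []
    for row in points_mn:
        prow = []
        for patch in row:
            if not patch:
                prow.append(0.0)  # no returns in patch: assume dry
                continue
            elev = [p[0] for p in patch]
            refl = [p[1] for p in patch]
            elev_spread = statistics.pstdev(elev)  # flat ice -> low spread
            refl_index = sum(refl) / len(refl)     # specular ice -> high return
            score = 3.0 * refl_index - 5.0 * elev_spread  # assumed weighting
            prow.append(1.0 / (1.0 + math.exp(-score)))   # logistic squashing
        probs.append(prow)
    return probs
```

A vehicle control system could then, for example, reduce speed when any patch in the travel corridor exceeds a probability threshold.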

BALANCING COLORS IN A SCANNED THREE-DIMENSIONAL IMAGE
20180014002 · 2018-01-11 ·

A method of balancing colors of three-dimensional (3D) points measured by a scanner from a first location and a second location. The scanner measures 3D coordinates and colors of first object points from a first location and second object points from a second location. The scene is divided into local neighborhoods, each containing at least a first object point and a second object point. An adapted second color is determined for each second object point based at least in part on the colors of first object points in the local neighborhood.
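One hedged way to realize the adapted second color: shift each second-scan point's color by the offset between the mean first-scan and mean second-scan colors in its local neighborhood. Using a mean offset is an illustrative choice; the abstract only requires that the adaptation depend on the first-scan colors in the neighborhood.

```python
def adapt_second_colors(neighborhoods):
    """neighborhoods: list of (first_colors, second_points) pairs, where
    first_colors is a list of RGB tuples from the first scan location and
    second_points is a list of (point_id, rgb) from the second location.
    Returns {point_id: adapted_rgb}."""
    adapted = {}
    for first_colors, second_points in neighborhoods:
        if not first_colors or not second_points:
            continue  # neighborhood lacks points from one of the scans
        mean_first = tuple(sum(c[i] for c in first_colors) / len(first_colors)
                           for i in range(3))
        mean_second = tuple(sum(rgb[i] for _, rgb in second_points) / len(second_points)
                            for i in range(3))
        # per-channel offset bringing the second scan's colors toward the first's
        offset = tuple(mean_first[i] - mean_second[i] for i in range(3))
        for pid, rgb in second_points:
            adapted[pid] = tuple(rgb[i] + offset[i] for i in range(3))
    return adapted
```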

Method for detecting a living being on a seat of a vehicle, detection arrangement and vehicle

A method for detecting a living being on a seat of a vehicle is disclosed; the disclosure further relates to a detection arrangement and to a vehicle. The method may include emitting electromagnetic waves at a predetermined frequency or in a predetermined frequency band towards the seat by an electromagnetic radiator, receiving electromagnetic waves reflected on a surface by a sensor, detecting an object on the seat from a transit time of the emitted and reflected electromagnetic waves between the radiator, the surface and the sensor by a detection device, detecting movements of the object from the reflected electromagnetic waves by the detection device if an object has been detected, determining from the detected movements of the object whether the detected object is a living being, and outputting a detection signal by way of the detection device if it has been determined that the detected object is a living being.
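The decision logic can be sketched roughly: the transit time yields a range, an object is inferred when the range is shorter than the known empty-seat range, and small periodic range fluctuations (e.g. breathing) then indicate a living being. The margin and movement threshold below are assumed values, not from the patent.

```python
C = 299_792_458.0  # speed of light, m/s

def range_from_transit_time(t_seconds):
    """One-way range from a round-trip transit time."""
    return C * t_seconds / 2.0

def detect_living_being(transit_times, empty_seat_range_m, movement_thresh_m=0.001):
    """Return (object_present, living_being) from a series of transit times."""
    ranges = [range_from_transit_time(t) for t in transit_times]
    # object present if something sits closer than the empty seat (5 cm margin, assumed)
    object_present = min(ranges) < empty_seat_range_m - 0.05
    if not object_present:
        return False, False
    # range modulation over the series, e.g. chest motion from breathing
    movement = max(ranges) - min(ranges)
    return True, movement > movement_thresh_m
```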

DUAL FREQUENCY AUTOFOCUS SYSTEM

An apparatus, system, and method of focus compensation for a vehicle-mounted, downward-looking optical detection system. A first-stage compensator addresses high-frequency events needing rapid, small-displacement compensation. A second-stage compensator addresses lower-frequency events that can require larger-displacement compensation.

TIME-OF-FLIGHT SENSING FOR HORTICULTURE
20230003856 · 2023-01-05 ·

The invention provides a sensing system (1000), e.g. for agricultural application, comprising a radiation generator (100), a sensing apparatus (200), and a control system (300) functionally coupled to the radiation generator (100) and the sensing apparatus (200), wherein the sensing system (1000) has one or more time-of-flight sensing modes of operation, wherein the generator (100) is configured to generate a pulse of radiation (111) in the one or more time-of-flight sensing modes of operation, and wherein the sensing apparatus (200) is configured to sense wavelength dependent spectral intensities of radiation received by the sensing apparatus (200) as a function of time in the one or more time-of-flight sensing modes, to provide a sensing system signal; wherein the sensing system signal is indicative of the wavelength dependent spectral intensity distribution of the received radiation as a function of time in the one or more time-of-flight sensing modes.

AGRICULTURAL VEHICLE, CONTROL DEVICE, AND CONTROL METHOD

A control device includes a direction identifying data generator that generates direction identifying data including at least a portion of acquired point group data indicating positions in a region that includes a ridge ahead of an agricultural vehicle in its traveling direction, a direction identification part that identifies a direction of the ridge on the basis of the direction identifying data, and a travel control part that controls the agricultural vehicle such that the agricultural vehicle travels in the direction of the ridge identified by the direction identification part.
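A minimal sketch of the direction-identification step, assuming the point group data reduces to 2-D ground-plane coordinates of the ridge region: the principal axis of those points gives the ridge direction, and its difference from the vehicle heading gives a steering correction. PCA on the 2x2 covariance is an illustrative choice of method, not necessarily the patent's.

```python
import math

def ridge_direction(points):
    """points: list of (x, y) ridge points. Returns the principal-axis
    angle in radians, taken as the ridge direction."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    # 2x2 covariance of the centered points
    sxx = sum((p[0] - mx) ** 2 for p in points) / n
    syy = sum((p[1] - my) ** 2 for p in points) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    # principal-axis angle of a 2x2 symmetric matrix
    return 0.5 * math.atan2(2 * sxy, sxx - syy)

def steering_correction(ridge_angle, vehicle_heading):
    """Signed angle to steer so the vehicle travels along the ridge."""
    return (ridge_angle - vehicle_heading + math.pi) % (2 * math.pi) - math.pi
```

A travel control part would feed the correction into the steering loop each cycle.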

MOBILE PHOTOELECTRIC DETECTION AND IDENTIFICATION SYSTEM FOR LOW, SLOW AND SMALL TARGETS

The disclosure provides a mobile photoelectric detection and identification system for low, slow and small targets. The optical detection subsystem and the photoelectric parallel processing and identification subsystem are mounted on the servo subsystem, and the servo subsystem is carried on an installation platform of a vehicle. The optical detection subsystem is configured to collect multi-wavelength-band optical information from the target and the background. A co-processing module for each wavelength band is configured to perform single-frame detection and identification of the target from the image information of the corresponding wavelength band. The information processing main control module is configured to use JPEG image compression, track association and multi-frame combining methods to perform multi-frame detection and identification of the target. The servo subsystem is configured to complete target tracking according to the multi-frame detection and identification results.
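The track-association and multi-frame confirmation step might look like the following sketch: single-frame detections are associated to existing tracks by gated nearest neighbor, and a target is confirmed only after several hits, suppressing single-frame false alarms. The gate size and confirmation count are assumptions, and the JPEG compression and servo control steps are omitted.

```python
def associate_and_confirm(frames, gate=5.0, confirm_after=3):
    """frames: list of per-frame detection lists, each detection an (x, y).
    Returns the last positions of tracks confirmed by multi-frame evidence."""
    tracks = []  # each track: {"pos": (x, y), "hits": int}
    for detections in frames:
        for det in detections:
            # gated nearest-neighbor association to an existing track
            best = None
            for tr in tracks:
                d = ((tr["pos"][0] - det[0]) ** 2
                     + (tr["pos"][1] - det[1]) ** 2) ** 0.5
                if d < gate and (best is None or d < best[0]):
                    best = (d, tr)
            if best:
                best[1]["pos"] = det
                best[1]["hits"] += 1
            else:
                tracks.append({"pos": det, "hits": 1})  # start a new track
    # multi-frame confirmation: only persistent tracks survive
    return [tr["pos"] for tr in tracks if tr["hits"] >= confirm_after]
```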