
De-Aliased Imaging for a Synthetic Aperture Radar
20230018183 · 2023-01-19

This document describes techniques for enabling de-aliased imaging for a synthetic aperture radar. Radar signals processed by a synthetic aperture radar (SAR) system may include false detections in the form of aliasing induced by grating lobes. The techniques described herein can reduce the adverse effects of grating lobes by obtaining an initial SAR image using a back-projection algorithm. Aliasing effects (e.g., false detections) may be common in this initial image due to the limitations of a SAR system moving at non-uniform speeds. A refined image is produced by applying a de-aliasing filter to the initial image. The refined image may have reduced or eliminated the false detections attributable to aliasing, resulting in a better representation of the vehicle's environment.
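The abstract does not specify the de-aliasing filter itself. A minimal sketch of one plausible interpretation: after back-projection, responses that are much weaker than a response found at a known grating-lobe pixel offset are treated as aliased replicas and suppressed. The offset, the attenuation rule, and the `ratio` threshold are all assumptions for illustration.

```python
import numpy as np

def dealias(image, lobe_offset, ratio=0.5):
    """Suppress grating-lobe replicas in a back-projected intensity image.

    A pixel whose intensity is less than `ratio` times the intensity found
    `lobe_offset` pixels away (along the cross-range axis) is treated as an
    aliased copy of that stronger response and zeroed out.
    """
    out = image.copy()
    shifted_fwd = np.roll(image, lobe_offset, axis=1)   # replica to the right
    shifted_bwd = np.roll(image, -lobe_offset, axis=1)  # replica to the left
    stronger = np.maximum(shifted_fwd, shifted_bwd)
    out[image < ratio * stronger] = 0.0                 # kill weak replicas
    return out
```

A true target survives because nothing stronger sits one lobe offset away from it, while its weaker ghost is removed.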

SENSOR FUSION ARCHITECTURE FOR LOW-LATENCY ACCURATE ROAD USER DETECTION

Aspects described herein provide sensor data stream processing for enabling camera/radar sensor fusion, with application to road user detection in the context of Autonomous Driving/Assisted Driving (ADAS). In particular, a scheme to extract Regions of Interest (ROIs) from a high-resolution, high-dimensional radar data cube that can then be transmitted to a sensor fusion unit is described. The ROI scheme allows relevant information to be extracted, thus reducing the latency and data transmission rate to the sensor fusion module without trading off accuracy and detection rates. The sensor data stream processing comprises receiving a first data stream from a radar sensor, forming a point cloud by extracting 3D points from the 3D data cube, performing clustering on the point cloud to identify high-density regions representing one or more ROIs, and extracting one or more 3D bounding boxes from the 3D data cube corresponding to the one or more ROIs and classifying each ROI.
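The pipeline above (cube → points → clusters → bounding boxes) can be sketched as follows. Connected-component labeling is used here as a simple stand-in for the unspecified density-based clustering; the threshold value and cube layout (range, Doppler, azimuth) are assumptions.

```python
import numpy as np
from scipy import ndimage

def extract_rois(cube, threshold):
    """Extract 3D bounding boxes for high-density regions of a radar cube.

    Cells above `threshold` form the point cloud; connected groups of such
    cells are treated as clusters (one ROI each), and each ROI is returned
    as ((lo, hi), ...) index bounds per axis.
    """
    mask = cube > threshold
    labels, n_clusters = ndimage.label(mask)      # group adjacent points
    rois = []
    for slc in ndimage.find_objects(labels):      # bounding slice per cluster
        rois.append(tuple((s.start, s.stop) for s in slc))
    return rois
```

Only these small index boxes (plus the cube values inside them, if needed for classification) would be forwarded to the fusion unit, rather than the full cube.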

MULTI-CHANNEL OBJECT MATCHING
20230011829 · 2023-01-12

A method may include obtaining first sensor data captured by a first sensor system and second sensor data captured by a second sensor system of a different type from the first sensor system. The method may include detecting a first object included in the first sensor data and a second object included in the second sensor data. The method may include assigning a first label to the first object and a second label to the second object after comparing the first and the second sensor data. The first and second labels may indicate degrees to which the first and the second objects match. Responsive to the first and second labels indicating that the first and the second objects match, the method may include designating a matched object representative of the first object and the second object and sending the matched object to a downstream computing system of an autonomous vehicle.
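A minimal sketch of the matching step, assuming 2D axis-aligned boxes and intersection-over-union as the degree-of-match label (the abstract does not fix either choice); the union box stands in for the designated matched object.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def match_objects(dets_a, dets_b, thresh=0.5):
    """Compare detections from two sensor systems; pairs whose overlap
    label exceeds `thresh` are designated one matched object (here, the
    union box) for the downstream computing system."""
    matched = []
    for a in dets_a:
        for b in dets_b:
            if iou(a, b) >= thresh:
                matched.append((min(a[0], b[0]), min(a[1], b[1]),
                                max(a[2], b[2]), max(a[3], b[3])))
    return matched
```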

AUTOMOTIVE LOCALIZATION AND MAPPING IN LOW-LIGHT ENVIRONMENT

A localization and mapping system and method for a motor vehicle are disclosed and include at least one camera configured to obtain images of the environment surrounding the motor vehicle, at least one sensor configured to obtain location information for objects surrounding the motor vehicle, and a controller configured to receive the images captured by the at least one camera and the location information obtained by the at least one sensor. The controller enhances the captured images using a neural network and combines the enhanced images with the location information to localize the vehicle within the mapped environment.
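A toy sketch of the two stages. A gamma curve stands in for the enhancement neural network, and the fusion step is reduced to a brightness-weighted average of sensor-reported landmark positions; both simplifications, the landmark tuple layout, and the function names are assumptions, not the patent's design.

```python
import numpy as np

def enhance(image, gamma=0.4):
    """Low-light enhancement stand-in for the patent's neural network:
    a gamma curve that brightens dark regions of a [0, 1] image."""
    return np.clip(image, 0.0, 1.0) ** gamma

def localize(enhanced, landmarks):
    """Toy camera/sensor fusion: average the sensor-reported landmark
    world positions, weighted by enhanced-image brightness at each
    landmark's pixel. Landmarks are (u, v, x, y) tuples."""
    w = np.array([enhanced[v, u] for u, v, _, _ in landmarks])
    xy = np.array([(x, y) for _, _, x, y in landmarks])
    return (w[:, None] * xy).sum(axis=0) / w.sum()
```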

Learned state covariances
11537819 · 2022-12-27

Techniques are disclosed for a covariance model that may generate observation covariances based on observation data of object detections. Techniques may include determining observation data for an object detection of an object represented in sensor data, determining that track data of a track is associated with the object, and inputting the observation data associated with the object detection into a machine-learned model configured to output a covariance (a covariance model). The covariance model may output one or more observation covariance values for the observation data. In some examples, the techniques may include determining updated track data based on the track data, the one or more observation covariance values, and the observation data.
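Where the learned covariance plugs in can be illustrated with the standard Kalman measurement update: the covariance model's output takes the place of a hand-tuned observation covariance R. The update equations below are textbook; only the idea of sourcing R from a learned model comes from the abstract.

```python
import numpy as np

def kalman_update(x, P, z, R, H):
    """Standard Kalman measurement update producing updated track data.

    R is the observation covariance; per the technique described, it would
    be emitted by a machine-learned covariance model for each observation
    rather than fixed in advance.
    """
    y = z - H @ x                        # innovation
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x_new = x + K @ y                    # updated state
    P_new = (np.eye(len(x)) - K @ H) @ P # updated state covariance
    return x_new, P_new
```

A small learned R (the model is confident in this observation) pulls the track strongly toward the measurement; a large R leaves the track nearly unchanged.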

OBJECT DETECTION CIRCUITRY AND OBJECT DETECTION METHOD
20220406044 · 2022-12-22

The present disclosure generally pertains to an object detection circuitry configured to: obtain first feature data which are based on first sensing data of a first sensor; compare the first feature data to a first predetermined feature model being representative of a predefined object, wherein the first predetermined feature model is specific for the first sensor, thereby generating first object probability data; obtain second feature data which are based on second sensing data of a second sensor; compare the second feature data to a second predetermined feature model being representative of the predefined object, wherein the second predetermined feature model is specific for the second sensor, thereby generating second object probability data; and combine the first and the second object probability data, thereby generating combined probability data for detecting the predefined object.
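A sketch of the two steps, under stated assumptions: the feature-to-model comparison is taken to be a Gaussian-style squashing of squared distance (the disclosure does not specify it), and the combination step is naive-Bayes fusion of the two independent probabilities with a uniform prior.

```python
import math

def feature_probability(feature, model, scale=1.0):
    """Per-sensor object probability: squared distance between observed
    feature data and the sensor-specific feature model, squashed to
    (0, 1]. The squashing function is an assumption."""
    d2 = sum((f - m) ** 2 for f, m in zip(feature, model))
    return math.exp(-d2 / (2 * scale ** 2))

def combine(p1, p2):
    """Fuse two independent detection probabilities (naive Bayes,
    uniform prior) into combined probability data."""
    num = p1 * p2
    return num / (num + (1 - p1) * (1 - p2))
```

Note the fusion behaves sensibly: a neutral second sensor (p = 0.5) leaves the first sensor's probability unchanged, while two agreeing sensors reinforce each other.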

Autonomous aircraft sensor-based positioning and navigation system using markers

A system and method are disclosed for designing a suite of multispectral (MS) sensors and processing the enhanced data streams produced by those sensors for autonomous aircraft flight. The onboard suite of MS sensors is specifically configured to sense and use an MS variety of sensor-tuned objects, either strategically placed objects and/or surveyed, sensor-significant existing objects, to determine a position and verify position accuracy. The received MS sensor data enable an autonomous aircraft object identification and positioning system to correlate MS sensor data output with a priori information stored onboard to determine and verify the position and trajectory of the autonomous aircraft. Once position and trajectory are known, the object identification and positioning system commands the flight management system and autopilot control of the autonomous aircraft.
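The core geometric step, determining position from surveyed markers, can be sketched as linearized trilateration; the abstract does not commit to ranges as the measurement, so treating each marker observation as a range is an assumption for illustration.

```python
import numpy as np

def position_from_markers(markers, ranges):
    """Solve for position from measured ranges to surveyed markers with
    known (a priori, stored-onboard) coordinates.

    Each marker gives |x - p_i|^2 = r_i^2; subtracting the first equation
    from the rest removes the quadratic term, leaving a linear system
    2 (p_i - p_0) . x = r_0^2 - r_i^2 + |p_i|^2 - |p_0|^2.
    """
    p0, r0 = markers[0], ranges[0]
    A, b = [], []
    for pi, ri in zip(markers[1:], ranges[1:]):
        A.append(2 * (pi - p0))
        b.append(r0**2 - ri**2 + pi @ pi - p0 @ p0)
    pos, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return pos
```

With more markers than unknowns, the least-squares residual also provides the position-accuracy verification the abstract mentions.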

SEMANTIC UNDERSTANDING OF DYNAMIC IMAGERY USING BRAIN EMULATION NEURAL NETWORKS
20220391692 · 2022-12-08 ·

Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for receiving sensor data generated by one or more sensors that characterizes motion of an object over multiple time steps, providing the sensor data characterizing the motion of the object to a motion prediction neural network having a brain emulation sub-network with an architecture that is specified by synaptic connectivity between neurons in a brain of a biological organism, and processing the sensor data characterizing the motion of the object using the motion prediction neural network having the brain emulation sub-network to generate a network output that defines a prediction characterizing the motion of the object.
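The distinctive piece is the sub-network whose architecture is fixed by biological synaptic connectivity. A minimal sketch: a random binary matrix stands in for a mapped connectome (hypothetical here), and weights exist only where that matrix has a synapse, so the layer's sparsity pattern is specified by the connectivity rather than learned.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synaptic-connectivity matrix standing in for a mapped
# connectome: entry (i, j) is 1 iff neuron j synapses onto neuron i.
connectome = (rng.random((16, 16)) < 0.2).astype(float)

# Weights exist only where the biological brain has a synapse.
weights = rng.normal(size=(16, 16)) * connectome

def brain_emulation_layer(h):
    """One pass through the brain-emulation sub-network: a dense layer
    whose sparsity pattern is fixed by the connectivity mask."""
    return np.tanh(weights @ h)
```

In the described system this sub-network would sit inside a larger motion prediction network, consuming embedded sensor data from multiple time steps and feeding a readout that predicts the object's motion.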

METHOD AND DEVICE FOR ASSISTING IN LANDING AN AIRCRAFT UNDER POOR VISIBILITY CONDITIONS

A method and a device for assisting with landing an aircraft under poor visibility conditions are provided. The method receives sensor data during the phase of approach toward a runway, when the runway and/or the approach lighting system are not visible to the pilot from the cockpit; then determines, in the received sensor data, data of interest characteristic of the runway and/or the approach lighting system; then computes, on the basis of the data of interest, the coordinates of a target area; and displays, on a head-up display, a guiding symbol representative of the target area. The guiding symbol is displayed before the aircraft reaches the decision height, in order to provide the pilot with a visual cue in which to search for the runway and/or approach lighting system.
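The select-then-compute step can be sketched as follows. The return format, the boolean "runway feature" flag, and the centroid rule are all placeholders; the patent does not specify how the target-area coordinates are computed from the data of interest.

```python
def target_area(returns):
    """Target-area coordinates from sensor returns.

    Each return is (x, y, is_runway_feature): returns flagged as
    characteristic of the runway or approach lighting system are the
    'data of interest', and their centroid stands in for the computed
    target area fed to the head-up display symbol.
    """
    pts = [(x, y) for x, y, keep in returns if keep]
    n = len(pts)
    return (sum(x for x, _ in pts) / n, sum(y for _, y in pts) / n)
```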

Multipath ghost mitigation in vehicle radar system

Systems and methods involve detecting objects using a radar system of a vehicle. Tracks of the objects are initiated in a track database. The tracks store data, respectively, for the objects and are updated based on additional detections of the objects. The tracks of the objects are initially unclassified tracks. Two tracks corresponding to two of the objects are selected as a candidate pair. Criteria are applied to the candidate pair to determine whether one track is of a ghost object and another track is of a true object corresponding with the ghost object. The ghost object represents detection of the true object in an incorrect location. The candidate pair is classified as tracks of a true object and ghost object pair based on determining that the one track is of the ghost object and the other track is of the true object corresponding with the ghost object.
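One common multipath criterion (used here as an illustrative assumption, since the abstract leaves the criteria unspecified) is that a first-order ghost lies on nearly the same bearing as the true object but at a longer range. A sketch of classifying a candidate pair with that single criterion:

```python
def classify_candidate_pair(track_a, track_b, bearing_tol=0.05):
    """Apply a simple multipath criterion to a candidate pair of tracks.

    Tracks are (range_m, bearing_rad) tuples; `bearing_tol` (radians) is
    an assumed threshold. Returns (true_track, ghost_track) if the pair
    looks like a true/ghost pair, else None (tracks stay unclassified).
    """
    (r_a, th_a), (r_b, th_b) = track_a, track_b
    if abs(th_a - th_b) > bearing_tol or r_a == r_b:
        return None
    # The nearer track is the true object; the farther one, on the same
    # bearing, is the mirror-path detection in an incorrect location.
    return (track_a, track_b) if r_a < r_b else (track_b, track_a)
```

A production system would combine several such criteria (e.g., Doppler consistency and track-history correlation) before committing the classification to the track database.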