G01S17/006

Optical system to reduce local internal backscatter

A LADAR system includes a transmitter configured to emit a directed optical signal. The LADAR system includes a shared optical aperture through which the directed optical signal is emitted. The shared optical aperture includes a first pupil plane. The shared optical aperture receives a return optical signal that is based on the directed optical signal. The system includes a mirror with a hole through which the directed optical signal passes. The mirror also reflects the return optical signal towards an imager. The imager receives the return optical signal and generates an image. The image is based on a portion of the return optical signal. The system also includes a partial aperture obscuration at a second pupil plane. The partial aperture obscuration may block a portion of internal backscatter in the return optical signal. The system also includes a focal plane to record the image.

Target tracking method, system, device and storage medium
11821986 · 2023-11-21

The present invention provides a target tracking method, system, device, and storage medium. The method includes: determining a target area based on the current frame of a training sample; extracting and fusing histogram of oriented gradients (HOG), color naming (CN), and HSV color space features of the target area to obtain a target template; determining a target function according to the target template and a spatial regularization weight factor; introducing the Sherman-Morrison formula into the alternating direction method of multipliers (ADMM) to accelerate the solution of the target function and obtain a response value; and iterating the target tracking model while the response value meets a preset confidence threshold until training is completed, yielding a trained target tracking model that is used to track the target in the video to be observed.
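The speed-up the abstract attributes to the Sherman-Morrison formula can be illustrated in isolation. A minimal sketch (not the patent's actual solver): when an ADMM subproblem requires solving a linear system whose matrix is a previously inverted matrix plus a rank-one update, the identity avoids a full re-inversion.

```python
import numpy as np

def sherman_morrison_solve(A_inv, u, v, b):
    """Solve (A + u v^T) x = b given A^{-1}, via the Sherman-Morrison
    identity, avoiding a fresh matrix inversion per ADMM iteration."""
    Au = A_inv @ u
    # (A + u v^T)^{-1} = A^{-1} - (A^{-1} u)(v^T A^{-1}) / (1 + v^T A^{-1} u)
    correction = np.outer(Au, v @ A_inv) / (1.0 + v @ Au)
    return (A_inv - correction) @ b

# Check against a direct solve on a small well-conditioned system.
rng = np.random.default_rng(0)
A = np.eye(4) + 0.1 * rng.standard_normal((4, 4))
u, v, b = rng.standard_normal(4), rng.standard_normal(4), rng.standard_normal(4)
x_fast = sherman_morrison_solve(np.linalg.inv(A), u, v, b)
x_ref = np.linalg.solve(A + np.outer(u, v), b)
assert np.allclose(x_fast, x_ref)
```

The gain is that the update costs O(n²) instead of the O(n³) of re-inverting, which is why embedding it inside the per-iteration ADMM solve accelerates training.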

Systems and methods for generating synthetic sensor data via machine learning

The present disclosure provides systems and methods that combine physics-based systems with machine learning to generate synthetic LiDAR data that accurately mimics a real-world LiDAR sensor system. In particular, aspects of the present disclosure combine physics-based rendering with machine-learned models such as deep neural networks to simulate both the geometry and intensity of the LiDAR sensor. As one example, a physics-based ray casting approach can be used on a three-dimensional map of an environment to generate an initial three-dimensional point cloud that mimics LiDAR data. According to an aspect of the present disclosure, a machine-learned model can predict one or more dropout probabilities for one or more of the points in the initial three-dimensional point cloud, thereby generating an adjusted three-dimensional point cloud which more realistically simulates real-world LiDAR data.
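The adjustment step described above can be sketched in a few lines. This is a hypothetical stand-in, not the disclosed model: `apply_learned_dropout` and `toy_dropout_model` are illustrative names, with the toy model simply dropping distant points more often, whereas the disclosure uses a trained deep network.

```python
import numpy as np

def apply_learned_dropout(points, dropout_model, rng):
    """Drop points from a ray-cast point cloud according to per-point
    dropout probabilities predicted by a (here hypothetical) learned model."""
    p_drop = dropout_model(points)            # shape (N,), values in [0, 1]
    keep = rng.random(len(points)) >= p_drop  # Bernoulli draw per point
    return points[keep]

def toy_dropout_model(points):
    """Stand-in for the learned model: farther points drop out more often."""
    r = np.linalg.norm(points, axis=1)
    return np.clip(r / r.max(), 0.0, 0.9)

rng = np.random.default_rng(1)
cloud = rng.uniform(-50.0, 50.0, size=(1000, 3))  # initial ray-cast cloud
adjusted = apply_learned_dropout(cloud, toy_dropout_model, rng)
```

The adjusted cloud is a strict subset of the physics-based one, which matches the disclosure's framing: ray casting supplies geometry, and the learned model removes returns that a real sensor would fail to register.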

Methods to simulate continuous wave lidar sensors

The disclosure relates to a method for simulating sensor data of a continuous wave (CW) Light Detection and Ranging (lidar) sensor. The method includes generating a ray set comprising at least one ray, based on a CW signal, where each ray in the ray set has an emission starting time and an emission duration. The method further includes propagating, for each ray in the ray set, the ray through a simulated scene including at least one object; computing, for each ray in the ray set, a signal contribution of the propagated ray at a detection location in the simulated scene; generating an output signal, based on mixing the CW signal with the computed signal contributions of the rays in the ray set; and at least one of storing and outputting the output signal.
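The mixing step can be sketched for the unmodulated-CW case. A minimal sketch under stated assumptions (homodyne detection, one delayed and attenuated copy of the carrier per ray; function and parameter names are illustrative, not from the disclosure):

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def simulate_cw_return(ranges_m, reflectivities, f0=77e9, fs=1e6, duration=1e-3):
    """Mix a reference CW carrier with the summed signal contributions of
    the rays: each ray returns a delayed, attenuated copy of the carrier."""
    t = np.arange(0.0, duration, 1.0 / fs)
    reference = np.exp(2j * np.pi * f0 * t)
    received = np.zeros_like(reference)
    for r, rho in zip(ranges_m, reflectivities):
        tau = 2.0 * r / C                    # round-trip delay of the ray
        received += rho * np.exp(2j * np.pi * f0 * (t - tau))
    return reference.conj() * received       # mixed (down-converted) output

out = simulate_cw_return([10.0, 25.0], [1.0, 0.5])
```

For a pure unmodulated carrier the mixed output is constant in time, with each ray contributing a phase proportional to its round-trip delay; a modulated CW scheme (e.g. FMCW) would instead yield beat frequencies, which is where per-ray emission start times and durations matter.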

Method of modelling a scanning distance sensor for prototyping parameters of such sensor and/or for prototyping software processing the output of such sensor
11415686 · 2022-08-16

A method of modelling a scanning distance sensor determines a set of detections as if obtained by the sensor when scanning its field of view, wherein each detection corresponds to a different line of sight originating from the sensor and comprises information about the orientation of that line of sight and about the distance of a respective target point from the sensor, the target point being the point in space where the line of sight first crosses any of the objects at the respective point in time. The method includes modifying the set of detections by estimating the effect of sequentially scanning the field of view in discrete time steps and inversely applying the estimated effect to the set of detections.
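One plausible reading of the scan-effect step can be sketched as follows (an interpretation, not the patented method; names and the per-detection time model are assumptions): detections from an instantaneous snapshot are shifted by each object's velocity times the time offset at which that line of sight would actually be sampled during a sequential scan.

```python
import numpy as np

def apply_scan_time_effect(points, velocities, scan_period, sign=-1.0):
    """Shift each detection by sign * v * t_i, where t_i is the discrete
    time offset of its line of sight within one scan period. sign=-1.0
    applies the estimated scan skew inversely, as in the abstract."""
    n = len(points)
    t = np.linspace(0.0, scan_period, n, endpoint=False)  # one step per ray
    return points + sign * velocities * t[:, None]

# Three detections of an object moving at 2 m/s in x, scanned over 0.1 s.
pts = np.array([[10.0, 0.0], [10.0, 1.0], [10.0, 2.0]])
vel = np.array([[2.0, 0.0]] * 3)
corrected = apply_scan_time_effect(pts, vel, scan_period=0.1)
```

Later-scanned detections receive larger shifts, so the modified set reproduces (or, with `sign=-1.0`, compensates) the skew that sequential scanning imprints on moving objects.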

APPARATUS FOR SELECTING LIDAR TARGET SIGNAL, LIDAR SYSTEM HAVING THE SAME, AND METHOD THEREOF

A Light Detection and Ranging (LiDAR) target signal selection apparatus may include a processor configured to estimate a target signal among signals of a current frame N by use of a determined target signal of a previous frame N−1 among N LiDAR receiving signals, and to determine the estimated target signal based on deviations of previous frames 1 to N−1; and a storage configured to store data and algorithms driven by the processor.
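The selection logic can be sketched as a gating heuristic. This is an illustrative reading, not the patented algorithm: `select_target_signal` and the k-sigma gate are assumptions standing in for "estimate from the previous frame's target, then validate against deviations over frames 1 to N−1".

```python
import numpy as np

def select_target_signal(candidates, prev_target, history, k=3.0):
    """Estimate the current target as the candidate closest to the previous
    frame's target, accepting it only if the jump is consistent with the
    frame-to-frame deviations seen over the previous frames."""
    est = candidates[np.argmin(np.abs(candidates - prev_target))]
    deltas = np.diff(history)                 # deviations of frames 1..N-1
    if len(deltas) and abs(est - prev_target) > k * (np.std(deltas) + 1e-9):
        return prev_target                    # reject outlier jump
    return est

hist = np.array([20.0, 20.3, 20.5, 20.6])     # stored target ranges
sel = select_target_signal(np.array([5.0, 20.5, 55.0]), hist[-1], hist)
```

The stored history plays the role of the claimed storage: the processor both estimates from frame N−1 and checks the estimate against accumulated deviations before committing to it.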

SYNTHETIC GENERATION OF RADAR AND LIDAR POINT CLOUDS

A method for synthetically generating a point cloud of radar or LIDAR reflections, a reflection indicating at least one location at which radar or LIDAR interrogating radiation has been reflected. In the method, distribution functions which according to a random distribution provide samples in each case for at least one of the variables contained in the radar or LIDAR reflections are provided; synthetic reflections are generated by drawing samples in each case from the distribution functions for variables contained in the radar or LIDAR reflections, one of multiple distribution functions being selected according to at least one selection random distribution in order to draw each sample; the synthetic reflections are combined to form the sought point cloud.
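The two-level sampling scheme described above (a selection random distribution picks one of several per-variable distribution functions, then a sample is drawn from it) is a mixture model. A minimal sketch with assumed variable choices (range and intensity per reflection; all names illustrative):

```python
import numpy as np

def sample_synthetic_reflections(n, components, weights, rng):
    """Generate n synthetic reflections: for each sample, draw a component
    index from the selection distribution, then draw the reflection
    variables (range, intensity) from that component's distributions."""
    idx = rng.choice(len(components), size=n, p=weights)
    out = np.empty((n, 2))
    for i, comp in enumerate(components):
        mask = idx == i
        out[mask, 0] = rng.normal(comp["range_mu"], comp["range_sigma"], mask.sum())
        out[mask, 1] = rng.uniform(*comp["intensity"], mask.sum())
    return out

rng = np.random.default_rng(2)
comps = [
    {"range_mu": 15.0, "range_sigma": 0.5, "intensity": (0.6, 1.0)},  # target
    {"range_mu": 40.0, "range_sigma": 5.0, "intensity": (0.0, 0.3)},  # clutter
]
cloud = sample_synthetic_reflections(500, comps, [0.7, 0.3], rng)
```

Combining all draws into one array yields the sought point cloud; fitting the component distributions and selection weights to recorded sensor data is what makes the synthetic reflections statistically realistic.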

MODELING FOLIAGE IN A SYNTHETIC ENVIRONMENT
20220317301 · 2022-10-06

The subject disclosure relates to techniques for integrating synthetic plants into a virtual scene. A process of the disclosed technology can include receiving a digital asset comprising synthetic foliage, processing the digital asset to modify at least one parameter associated with the synthetic foliage to generate a modified digital asset, acquiring synthetic sensor data corresponding with the modified digital asset, and calculating a classification score for the modified digital asset based on the synthetic sensor data.

OBJECT TRACKING BY GENERATING VELOCITY GRIDS
20220315036 · 2022-10-06 ·

A computer-implemented method is provided for creating a velocity grid using a simulated environment for use by an autonomous vehicle. The method may include simulating a road scenario with simulated objects. The method may also include recording image data collected from a camera sensor, the image data comprising a first 2D image frame in which the simulated objects are made up of a plurality of pixels. The method may also include identifying a first 3D point on a first simulated object in a 3D view of the simulated road scenario, wherein the first 3D point corresponds to a first pixel in the first 2D image frame. The method may also include generating a velocity of the first 3D point based upon a velocity of the first simulated object, and projecting the velocity back into the first 2D image frame. The method may further include encoding, in a 2D velocity grid, the velocity for the first pixel prior to a simulated movement of the simulated object with respect to a second pixel after the simulated movement.
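The projection step above can be sketched with a simple pinhole camera model. A minimal sketch under stated assumptions (optical axis along z, focal lengths `fx`/`fy` in pixels; the function name and the finite-difference velocity projection are illustrative, not from the disclosure):

```python
import numpy as np

def project_velocity_to_image(p3d, v3d, fx, fy, dt=1.0):
    """Project a 3D point and its velocity into the image plane of a
    pinhole camera, returning the pixel position and the 2D velocity
    obtained by differencing projections one simulated step apart."""
    def proj(p):
        return np.array([fx * p[0] / p[2], fy * p[1] / p[2]])
    uv0 = proj(p3d)
    uv1 = proj(p3d + v3d * dt)        # position after one simulated step
    return uv0, (uv1 - uv0) / dt      # 2D velocity in pixels per frame

uv, vel2d = project_velocity_to_image(
    np.array([1.0, 0.0, 10.0]),       # point 10 m ahead, 1 m to the right
    np.array([0.0, 0.0, -1.0]),       # object approaching at 1 m/s
    fx=500.0, fy=500.0)
```

Repeating this for every visible point and writing each projected velocity at its pixel yields the 2D velocity grid: the grid associates the first pixel (before the simulated movement) with the displacement toward the second pixel (after it).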