Patent classifications
G01S7/417
LEVEL MEASURING DEVICE FOR MONITORING THE SURFACE TOPOLOGY OF A BULK MATERIAL
A level measuring device is provided. The level measuring device can be configured to monitor the surface topology of a bulk material, and can include a sensor unit for scanning several areas of the bulk material surface and an evaluation unit for calculating the volumes under these areas.
Methods and Systems for Radar Reflection Filtering During Vehicle Navigation
Example embodiments relate to radar reflection filtering using a vehicle sensor system. A computing device may detect a first object in radar data from a radar unit coupled to a vehicle and, responsive to determining that information corresponding to the first object is unavailable from other vehicle sensors, use the radar data to determine a position and a velocity for the first object relative to the radar unit. The computing device may also detect a second object aligned with a vector extending between the radar unit and the first object. Based on a geometric relationship between the vehicle, the first object, and the second object, the computing device may determine that the first object is a self-reflection of the vehicle caused at least in part by the second object and control the vehicle based on this determination.
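The geometric relationship described in this abstract can be illustrated with a small heuristic: a ghost produced by a vehicle self-reflection tends to appear along the same bearing as the real reflector, at roughly twice its range and with roughly twice its radial velocity. The following sketch is illustrative only; the thresholds and the detection representation are assumptions, not taken from the patent.

```python
def is_likely_self_reflection(ghost, reflector,
                              bearing_tol=0.05, range_ratio_tol=0.2,
                              velocity_tol=1.0):
    """Heuristic ghost-detection check (illustrative, not the patented
    method). Each detection is a dict with 'range' (m), 'bearing' (rad),
    and 'radial_velocity' (m/s) relative to the radar unit."""
    # A self-reflection lies on the vector from the radar through the reflector...
    same_bearing = abs(ghost["bearing"] - reflector["bearing"]) < bearing_tol
    # ...at roughly double the reflector's range...
    double_range = abs(ghost["range"] / reflector["range"] - 2.0) < range_ratio_tol
    # ...and with roughly double its radial velocity.
    double_speed = abs(ghost["radial_velocity"]
                       - 2.0 * reflector["radial_velocity"]) < velocity_tol
    return same_bearing and double_range and double_speed
```

A detection flagged this way could then be discarded before it influences vehicle control.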
Method and apparatus for predicting severe convection weather
Embodiments of the present disclosure provide a method and apparatus for predicting severe convection weather. The method may include: acquiring a current radar echo map sequence, the current radar echo map sequence being a radar echo map sequence within a current time period; generating, based on the current radar echo map sequence, a future radar echo map sequence, the future radar echo map sequence being a radar echo map sequence within a future time period; and inputting the future radar echo map sequence into a pre-trained severe convection weather predicting model to obtain a severe convection weather intensity predicting map, where the severe convection weather predicting model is used to predict the intensity of severe convection weather.
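The first stage of this pipeline, generating a future echo map sequence from the current one, can be sketched as follows. The abstract does not specify the generator, so this sketch substitutes naive linear extrapolation of the last two frames purely for illustration.

```python
import numpy as np

def extrapolate_echo_maps(current_seq, horizon=3):
    """Produce a future radar echo map sequence from the current sequence.
    Stand-in generator: linearly extrapolate the last two frames, an
    assumed placeholder for whatever model the disclosure actually uses."""
    prev, last = current_seq[-2], current_seq[-1]
    step = last - prev            # per-frame change in echo intensity
    frame = last
    future = []
    for _ in range(horizon):
        frame = frame + step
        future.append(np.clip(frame, 0.0, None))  # echo intensity is non-negative
    return future
```

The resulting sequence would then be fed to the pre-trained severe convection weather predicting model to obtain the intensity map.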
Auto labeler
Aspects of the disclosure relate to training a labeling model to automatically generate labels for objects detected in a vehicle's environment. In this regard, one or more computing devices may receive sensor data corresponding to a series of frames perceived by the vehicle, each frame being captured at a different time point during a trip of the vehicle. The computing devices may also receive bounding boxes generated by a first labeling model for objects detected in the series of frames. The computing devices may receive user inputs including an adjustment to at least one of the bounding boxes, wherein the adjustment corrects a displacement of the at least one of the bounding boxes caused by a sensing inaccuracy. The computing devices may train a second labeling model using the sensor data, the bounding boxes, and the adjustment to increase accuracy of the second labeling model when automatically generating bounding boxes.
Map creation and localization for autonomous driving applications
An end-to-end system for data generation, map creation using the generated data, and localization to the created map is disclosed. Mapstreams—or streams of sensor data, perception outputs from deep neural networks (DNNs), and/or relative trajectory data—corresponding to any number of drives by any number of vehicles may be generated and uploaded to the cloud. The mapstreams may be used to generate map data—and ultimately a fused high definition (HD) map—that represents data generated over a plurality of drives. When localizing to the fused HD map, individual localization results may be generated based on comparisons of real-time data from a sensor modality to map data corresponding to the same sensor modality. This process may be repeated for any number of sensor modalities and the results may be fused together to determine a final fused localization result.
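The final fusion step, combining per-modality localization results into one estimate, can be sketched minimally. The patent does not state the fusion rule; inverse-variance weighting of the individual results is an assumed stand-in for illustration.

```python
import numpy as np

def fuse_localization(results):
    """Fuse per-sensor-modality localization results into a single pose.
    `results` is a list of (position, variance) pairs, one per modality
    (e.g. camera, lidar, radar); inverse-variance weighting is an
    illustrative assumption, not the disclosed method."""
    weights = np.array([1.0 / var for _, var in results])
    positions = np.array([pos for pos, _ in results])
    # Weighted average: more certain modalities pull the result harder.
    return (weights[:, None] * positions).sum(axis=0) / weights.sum()
```

With equal variances this reduces to a plain average; a low-variance (high-confidence) modality dominates the fused result.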
Segmentation and classification of point cloud data
A system can include a computer including a processor and a memory, the memory storing instructions executable by the processor to receive point cloud data. The instructions further include instructions to generate a plurality of feature maps based on the point cloud data, each feature map of the plurality of feature maps corresponding to a parameter of the point cloud data. The instructions further include instructions to aggregate the plurality of feature maps into an aggregated feature map. The instructions further include instructions to generate, via a feedforward neural network, at least one of a segmentation output or a classification output based on the aggregated feature map.
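The per-parameter feature maps and their aggregation can be sketched as follows. The choice of parameters (height and intensity), the grid size, and max-pooling into cells are all illustrative assumptions; the abstract only specifies that each map corresponds to a parameter of the point cloud and that the maps are aggregated before the feedforward network.

```python
import numpy as np

def make_feature_maps(points, grid=(32, 32), extent=10.0):
    """Rasterize a point cloud (N x 4: x, y, z, intensity) into one 2D
    feature map per parameter. Parameters shown (height, intensity) are
    an assumed example set."""
    maps = {name: np.zeros(grid) for name in ("height", "intensity")}
    # Map x/y coordinates in [-extent, extent] onto grid cell indices.
    ix = np.clip(((points[:, 0] + extent) / (2 * extent) * grid[0]).astype(int),
                 0, grid[0] - 1)
    iy = np.clip(((points[:, 1] + extent) / (2 * extent) * grid[1]).astype(int),
                 0, grid[1] - 1)
    # Max-pool each parameter into its cell.
    np.maximum.at(maps["height"], (ix, iy), points[:, 2])
    np.maximum.at(maps["intensity"], (ix, iy), points[:, 3])
    return maps

def aggregate(maps):
    """Aggregate the per-parameter maps by channel-wise stacking."""
    return np.stack(list(maps.values()), axis=-1)
```

The aggregated map (here a 32x32x2 tensor) is what would then be passed to the feedforward network for segmentation or classification.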
NEURAL NETWORK BASED RADIOWAVE MONITORING OF PATIENT DEGENERATIVE CONDITIONS
A method and system of training a machine learning neural network (MLNN) in anatomical degenerative conditions in accordance with anatomical dynamics. The method comprises receiving, in a first input layer of the MLNN, from a millimeter wave (mmWave) radar sensing device, a first set of mmWave radar point cloud data representing a first gait characteristic of a subject in motion, comprising an arm swing velocity; receiving, in a second input layer, a second set of mmWave radar point cloud data representing a second gait characteristic comprising a measure of dynamic postural stability, the input layers being interconnected with an output layer of the MLNN via an intermediate layer; and training an MLNN classifier in accordance with a classification that increases a correlation between a degenerative condition of the subject as generated at the output layer and the sets of mmWave radar point cloud data.
Mobile device-based radar system for providing a multi-mode interface
This document describes techniques and systems that enable a mobile device-based radar system (104) for providing a multi-mode interface (114). A radar field (110) is used to enable a user device (102, 702) to accurately determine a presence or threshold movement of a user near the user device. The user device provides a multi-mode interface having at least first and second modes and providing a black display or a low-luminosity display in the first mode. The user device detects, based on radar data and during the first mode, a presence or threshold movement by the user relative to the user device and responsively changes the multi-mode interface from the first mode to the second mode. Responsive to the change to the second mode, the user device provides visual feedback corresponding to this implicit interaction by adjusting one or more display parameters of the black display or the low-luminosity display.
Method and apparatus for biometric authentication using face radar signal
An electronic device, a method, and computer readable medium are disclosed. The method includes transmitting radar signals via a radar transceiver. The method also includes identifying signals of interest that represent biometric information of a user based on reflections of the radar signals received by the radar transceiver. The method further includes generating an input based on the signals of interest that include the biometric information. The method additionally includes extracting a feature vector based on the input. The method also includes authenticating the user based on comparison of the feature vector to a threshold of similarity with preregistered user data.
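The final authentication step, comparing the extracted feature vector against preregistered user data under a similarity threshold, can be sketched briefly. The similarity metric (cosine similarity) and the threshold value are illustrative assumptions; the abstract does not specify either.

```python
import numpy as np

def authenticate(feature_vec, enrolled_vec, threshold=0.9):
    """Accept the user if the radar-derived feature vector is similar
    enough to the preregistered enrollment vector. Cosine similarity and
    the 0.9 threshold are assumed stand-ins for the disclosed comparison."""
    cos = np.dot(feature_vec, enrolled_vec) / (
        np.linalg.norm(feature_vec) * np.linalg.norm(enrolled_vec))
    return cos >= threshold
```

In practice the threshold trades off false accepts against false rejects and would be tuned on enrollment data.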
CAMERA-RADAR SENSOR FUSION USING LOCAL ATTENTION MECHANISM
Methods, computer systems, and apparatus, including computer programs encoded on computer storage media, for processing sensor data. In one aspect, a method includes obtaining image data representing a camera sensor measurement of a scene; obtaining radar data representing a radar sensor measurement of the scene; generating a feature representation of the image data; generating a respective initial depth estimate for each pixel in a subset of the pixels of the image data; generating a feature representation of the radar data; for each pixel in the subset, generating a respective adjusted depth estimate using the initial depth estimate for the pixel and the radar feature vectors for a corresponding subset of the radar reflection points; generating a fused point cloud that includes a plurality of three-dimensional data points; and processing the fused point cloud to generate an output that characterizes the scene.
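The depth-adjustment step, where each pixel attends over nearby radar reflection points, can be sketched as a single-head attention computation. The scoring function (dot product), feature dimensions, and the fixed 50/50 blend with the initial depth are all illustrative assumptions, not details from the abstract.

```python
import numpy as np

def adjust_depth(pixel_feat, init_depth, radar_feats, radar_depths):
    """Refine one pixel's initial depth estimate using its corresponding
    subset of radar reflection points. `pixel_feat` is (D,), `radar_feats`
    is (R, D), `radar_depths` is (R,). A minimal local-attention sketch."""
    scores = radar_feats @ pixel_feat          # similarity of pixel to each radar point
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                   # softmax over the radar points
    radar_depth = weights @ radar_depths       # attention-weighted radar depth
    return 0.5 * init_depth + 0.5 * radar_depth  # assumed fixed blend gate
```

A learned model would replace the dot-product scoring with trained projections and learn the blending, but the data flow is the same: radar points that look like the pixel pull its depth toward their measured range.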