Patent classifications
G01S13/867
Estimating three-dimensional target heading using a single snapshot
Provided herein is a system and method for determining the three-dimensional heading of a target. The system includes a radar sensor that obtains a three-dimensional snapshot of radar data comprising Doppler velocities and spatial positions of a plurality of detection points of a target, one or more processors, and a memory storing instructions that, when executed by the one or more processors, cause the system to: conduct a first estimation of the three-dimensional heading of the target based on the spatial positions; conduct a second estimation of the three-dimensional heading of the target based on the Doppler velocities; and obtain a combined estimation of the three-dimensional heading of the target as a weighted sum of the first estimation and the second estimation.
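The combination step described above can be sketched as follows. This is a minimal illustration, assuming both estimates are unit direction vectors and that the weights reflect confidence in each estimate; the function name and weighting scheme are illustrative, not taken from the patent.

```python
import numpy as np

def combine_headings(h_pos, h_dop, w_pos=0.5, w_dop=0.5):
    """Combine two 3D heading estimates by a weighted sum, renormalized.

    h_pos: heading estimated from spatial positions (assumed unit vector)
    h_dop: heading estimated from Doppler velocities (assumed unit vector)
    """
    h = w_pos * np.asarray(h_pos, dtype=float) + w_dop * np.asarray(h_dop, dtype=float)
    n = np.linalg.norm(h)
    if n == 0.0:
        raise ValueError("estimates cancel; inputs or weights are degenerate")
    return h / n  # combined unit heading vector
```

With equal weights, two orthogonal unit estimates combine to the diagonal direction between them.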
Autonomy first route optimization for autonomous vehicles
Embodiments herein can determine an optimal route for an autonomous electric vehicle. The system may score viable routes between the start and end locations of a trip on a numeric or other scale that denotes how suitable each route is for autonomy. The score is adjusted using a variety of factors, with a learning process that leverages both offline and online data. Rather than simply selecting the shortest path between the start and end points, the system determines the best route based on the driving context for the vehicle and the user.
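The scoring idea could be sketched as a weighted sum over per-route factors, with the highest-scoring route selected. The factor names and weights below are purely illustrative assumptions; the patent does not specify them.

```python
def autonomy_score(route, weights):
    """Hypothetical autonomy-viability score: weighted sum of route factors."""
    return sum(w * route[factor] for factor, w in weights.items())

def best_route(routes, weights):
    """Pick the route with the highest autonomy-viability score."""
    return max(routes, key=lambda r: autonomy_score(r, weights))

# Illustrative factors: higher mapped coverage is good, traffic complexity is penalized.
routes = [
    {"name": "A", "mapped_coverage": 0.9, "traffic_complexity": 0.7},
    {"name": "B", "mapped_coverage": 0.6, "traffic_complexity": 0.2},
]
weights = {"mapped_coverage": 1.0, "traffic_complexity": -0.5}
```

In a real system the weights would be tuned by the learning process described above rather than fixed by hand.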
PERSONAL PROTECTIVE EQUIPMENT FOR NAVIGATION AND MAP GENERATION WITHIN A VISUALLY OBSCURED ENVIRONMENT
- Nicholas T. Gabriel,
- John M. Kruse,
- Gautam Singh,
- Brian J. Stankiewicz,
- Jason L. Aveldson,
- Glenn E. Casner,
- Elisa J. Collins,
- Samuel J. Fahey,
- Haleh Hagh-Shenas,
- Frank T. Herfort,
- Ronald D. Jesme,
- Steven G. Lucht,
- Carolyn L. Nye,
- Adam C. Nyland,
- Jacob E. Odom,
- Antonia E. Schaefer,
- Justin Tungjunyatham
The disclosure describes systems (2) of navigating a hazardous environment (8). The system includes personal protective equipment (PPE) (13) and computing device(s) (32) configured to process sensor data from the PPE (13), generate pose data of an agent (10) based on the processed sensor data, and track the pose data as the agent (10) moves through the hazardous environment (8). The PPE (13) may include an inertial measurement device to generate inertial data and a radar device to generate radar data for detecting a presence or arrangement of objects in a visually obscured environment (8). The PPE (13) may include a thermal image capture device to generate thermal image data for detecting and classifying thermal features of the hazardous environment (8). The PPE (13) may include one or more sensors to detect a fiducial marker (21) in a visually obscured environment (8) for identifying features in the visually obscured environment (8). In these ways, the systems (2) may more safely navigate the agent (10) through the hazardous environment (8).
ACTIVE ALIGNMENT OF AN OPTICAL ASSEMBLY WITH INTRINSIC CALIBRATION
Provided are methods for active alignment of an optical assembly with intrinsic calibration. Some described methods include performing a first active alignment of a camera assembly using a multi-collimator assembly, determining a principal point of the camera assembly using a diffractive optical element (DOE) intrinsic calibration module, and adjusting the relative position of one or more of the lens and the image sensor to align the principal point of the camera assembly with the image center of the image sensor and to perform a second active alignment. Systems and computer program products are also provided.
AUTOMOTIVE SENSOR INTEGRATION MODULE
An automotive sensor integration module includes a plurality of sensors that differ in at least one of a sensing period or an output data format, and a signal processing unit. The signal processing unit simultaneously outputs, as sensing data, pieces of detection data respectively output from the plurality of sensors on the basis of the sensing period of any one of the plurality of sensors. It also determines, on the basis of the pieces of detection data, whether each region of an outer cover corresponding to the location of each of the plurality of sensors is contaminated, and outputs the determination result as contamination data.
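The synchronization behavior described above (buffering each sensor's latest output and emitting them together on one sensor's period) can be sketched as below. The class and field names are illustrative assumptions, and the contamination check is omitted for brevity.

```python
class SensorIntegrationModule:
    """Sketch: buffer the latest detection from each sensor and emit all
    of them together whenever the designated master sensor fires."""

    def __init__(self, sensor_ids, master):
        self.latest = {s: None for s in sensor_ids}
        self.master = master  # sensor whose period drives the output

    def on_detection(self, sensor_id, data):
        self.latest[sensor_id] = data
        if sensor_id == self.master:
            # Master period elapsed: output all buffered data simultaneously.
            return dict(self.latest)
        return None  # other sensors only update the buffer
```

Sensors with shorter periods simply overwrite their buffered value until the master's next output.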
Apparatus and Method for Controlling Mobile Body
Provided is an apparatus for controlling a mobile body that can adjust a detection result of a radar device according to the three-dimensional shape of each region of a three-dimensional map generated from an image captured by an image-capturing device. A mobile body control unit 105 controls the vehicle (mobile body), which includes an image-capturing device 101 and a millimeter wave radar device 102 (radar device). A three-dimensional map generation unit 203 generates a three-dimensional map of the vehicle's surroundings from an image captured by the image-capturing device 101. A radar weight map estimation unit 204 (weight estimation unit) estimates, from the three-dimensional shape of each region of the three-dimensional map, a weight for the detection result of the millimeter wave radar device 102 in that region. A weight adjustment unit 205 (adjustment unit) adjusts the detection result of the millimeter wave radar device 102 on the basis of the weight.
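The weight-adjustment step can be sketched as scaling each radar detection's confidence by the weight of the map region it falls in. The region key and confidence field are illustrative assumptions; the patent does not define the data layout.

```python
def adjust_radar_detections(detections, weight_map):
    """Scale each radar detection's confidence by the per-region weight
    estimated from the 3D map (region lookup is illustrative)."""
    adjusted = []
    for det in detections:
        w = weight_map.get(det["region"], 1.0)  # default: leave unchanged
        adjusted.append({**det, "confidence": det["confidence"] * w})
    return adjusted
```

Regions whose 3D shape makes radar returns unreliable would be given small weights, suppressing those detections.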
SENSOR FUSION
A plurality of images can be acquired from a plurality of sensors, and a plurality of flattened patches can be extracted from the plurality of images. To each flattened patch can be added its image location in the plurality of images and a sensor type token identifying the type of sensor used to acquire the image from which that patch was extracted. The flattened patches can be concatenated into a flat tensor, and a task token indicating a processing task can be added to the flat tensor, wherein the flat tensor is a one-dimensional array that includes two or more types of data. The flat tensor can be input to a first deep neural network that includes a plurality of encoder layers and a plurality of decoder layers and outputs transformer output. The transformer output can be input to a second deep neural network that determines an object prediction indicated by the task token, and the object prediction can be output.
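The construction of the flat tensor can be sketched as below: each flattened patch is tagged with its image location and a sensor-type token, the tagged patches are concatenated, and a task token is appended. The exact token encoding and ordering are illustrative assumptions.

```python
import numpy as np

def build_flat_tensor(patches, locations, sensor_tokens, task_token):
    """Concatenate flattened patches, each followed by its image location
    and sensor-type token, then append the task token (layout illustrative)."""
    parts = []
    for patch, loc, tok in zip(patches, locations, sensor_tokens):
        parts.append(np.concatenate([patch.ravel(),
                                     np.asarray(loc, dtype=float),
                                     [float(tok)]]))
    parts.append(np.asarray([float(task_token)]))
    return np.concatenate(parts)  # 1-D array mixing patch data and tokens
```

The result is a one-dimensional array suitable as input to a transformer-style encoder-decoder network.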
METHOD AND SYSTEM FOR PROVIDING INTELLIGENT CONTROL BY USING RADAR SECURITY CAMERA
An intelligent control method and system using a radar security camera are disclosed. A target is detected by 360° radar sensing, regardless of the rotation radius of the camera, using a security camera with a built-in radar. After the target is classified, sequentially according to a decision priority order, as a person or a vehicle, the camera tracks the target according to the target's moving direction and distinguishing features.
Cross-validating sensors of an autonomous vehicle
Methods and systems are disclosed for cross-validating a second sensor with a first sensor. Cross-validating the second sensor may include obtaining sensor readings from the first sensor and comparing them with sensor readings obtained from the second sensor. In particular, the comparison may include comparing state information about a vehicle detected by the first sensor and the second sensor. Comparing the sensor readings may also include obtaining a first image from the first sensor, obtaining a second image from the second sensor, and then comparing various characteristics of the images. One characteristic that may be compared is the object labels applied to the vehicle detected by the first and second sensors. The first and second sensors may be different types of sensors.
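The state-and-label comparison can be sketched as below. The field names, tolerances, and scalar speed representation are assumptions for illustration; the patent does not fix a data format.

```python
from math import dist

def cross_validate(first, second, pos_tol=0.5, speed_tol=0.5):
    """Sketch: agree-or-flag comparison of two sensors' reports about the
    same detected vehicle (field names and tolerances are assumptions)."""
    pos_ok = dist(first["position"], second["position"]) <= pos_tol
    speed_ok = abs(first["speed"] - second["speed"]) <= speed_tol
    label_ok = first["label"] == second["label"]  # e.g. object labels from images
    return pos_ok and speed_ok and label_ok
```

A disagreement on any compared characteristic would flag the second sensor for further diagnosis rather than prove it faulty.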
Method, device, and system for interference reduction in a frequency-modulated continuous-wave radar unit
A method for interference reduction in a stationary radar unit of a frequency-modulated continuous-wave (FMCW) type is provided. A sequence of beat signals is received, and a reference beat signal is calculated as an average or a median of one or more of the beat signals in the sequence. By comparing a difference between a beat signal and the reference beat signal, or a derivative of the difference, to one or more thresholds, a segment which is subject to interference is identified. The identified segment of the beat signal is replaced by a corresponding segment of an adjacent beat signal in the sequence, a corresponding segment of the reference beat signal, or a combination of the two.
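The detect-and-replace step can be sketched as below, using the median as the reference and replacement from the reference signal (replacement from an adjacent beat signal would work the same way). The thresholding on the raw difference, rather than its derivative, is one of the variants the method names.

```python
import numpy as np

def repair_beat_signal(beats, idx, threshold):
    """Sketch: median of the sequence serves as the reference beat signal;
    samples of beats[idx] deviating from it beyond the threshold are treated
    as interference and replaced by the reference samples."""
    beats = np.asarray(beats, dtype=float)
    ref = np.median(beats, axis=0)          # reference beat signal
    sig = beats[idx].copy()
    mask = np.abs(sig - ref) > threshold    # interference segment
    sig[mask] = ref[mask]                   # replace from the reference
    return sig
```

Because interference is sparse and uncorrelated across chirps, the median is largely unaffected by it and makes a robust reference.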