Patent classifications
G01S2013/9322
Automatic autonomous vehicle and robot LiDAR-camera extrinsic calibration
Extrinsic calibration of a Light Detection and Ranging (LiDAR) sensor and a camera can comprise constructing a first plurality of reconstructed calibration targets in a three-dimensional space based on physical calibration targets detected from input from the LiDAR, and a second plurality of reconstructed calibration targets in the three-dimensional space based on physical calibration targets detected from input from the camera. Reconstructed calibration targets in the first and second pluralities can be matched, and a six-degree-of-freedom rigid-body transformation between the LiDAR and camera can be computed based on the matched reconstructed calibration targets. A projection from the LiDAR to the camera can be computed based on the computed six-degree-of-freedom rigid-body transformation.
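The computation of a six-degree-of-freedom rigid-body transformation from matched 3D target positions is commonly solved in closed form with the Kabsch/SVD method; the sketch below is a minimal illustration of that standard technique, not the patent's specific procedure:

```python
import numpy as np

def rigid_transform(src, dst):
    """Estimate the rigid transform (R, t) that maps src points onto dst
    points in a least-squares sense (Kabsch algorithm). src and dst are
    (N, 3) arrays of matched reconstructed-target positions, e.g. from the
    LiDAR frame and the camera frame respectively."""
    src_c = src.mean(axis=0)
    dst_c = dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t
```

The recovered (R, t) can then be used to project LiDAR points into the camera frame via `R @ p + t` followed by the camera's intrinsic projection.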
Radar system for internal and external environmental detection
Examples disclosed herein relate to radar systems to coordinate detection of objects external to the vehicle and distractions within the vehicle. A method of environmental detection with a radar system includes detecting an object in an external environment of a vehicle with the radar system positioned on the vehicle. The method includes determining a distraction metric from measurements of user activity obtained within the vehicle with the radar system. The method includes adjusting one or more detection parameters of the radar system based at least on the detected object and the distraction metric. Other examples disclosed herein relate to a radar sensing unit for a vehicle that includes an internal distraction sensor, an external object detection sensor, a coordination sensor and a central controller for internal and external environmental detection.
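One way to picture the adjustment step is a simple rule that couples the external detection with the distraction metric. The thresholds, scale factors, and parameter names below are illustrative assumptions, not values from the patent:

```python
def adjust_parameters(external_object_detected, distraction_metric,
                      base_frame_rate_hz=10.0, base_range_m=100.0):
    """Hypothetical adjustment rule: when an external object is detected
    and the in-cabin distraction metric is high, raise the scan rate and
    extend the detection range so the system reacts earlier.
    All numbers here are assumed for illustration only."""
    frame_rate = base_frame_rate_hz
    det_range = base_range_m
    if external_object_detected and distraction_metric > 0.5:
        frame_rate *= 2.0    # scan more often while the driver is distracted
        det_range *= 1.5     # look farther ahead
    return frame_rate, det_range
```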
Vehicular forward-sensing system
A vehicular forward-sensing system includes a radar sensor and a forward-viewing image sensor disposed within a windshield electronics module that is removably installed within the vehicle cabin at the vehicle windshield. A control is responsive to an output of the radar sensor and to an output of the image sensor. When the image sensor views an object present in the path of forward travel of the vehicle and the radar sensor senses that same object, the control determines that the object is an object of interest by using an image processing chip to process image data of the object captured at the portion of the image sensor's image plane that is spatially related to the location of the object in the path of forward travel.
Fine-motion virtual-reality or augmented-reality control using radar
This document describes techniques for fine-motion virtual-reality or augmented-reality control using radar. These techniques enable small motions and displacements to be tracked, even in the millimeter or sub-millimeter scale, for user control actions even when those actions are small, fast, or obscured due to darkness or varying light. Further, these techniques enable fine resolution and real-time control, unlike conventional RF-tracking or optical-tracking techniques.
Antenna reference signals for distance measurements
The present invention provides a method of communicating vehicle positioning information, wherein signals are transmitted from at least one vehicle-mounted antenna for indicating a position of the vehicle to another entity, the signals including at least one of an identity of the at least one antenna and information providing a displacement between the at least one antenna and a boundary of the vehicle.
Systems and methods for virtual aperture radar tracking
A system for virtual aperture array radar tracking includes a transmitter that transmits first and second probe signals; a receiver array including a first plurality of radar elements positioned along a first radar axis; and a signal processor that calculates a target range from first and second reflected probe signals, corresponds signal instances of the first reflected probe signal to physical receiver elements of the radar array, corresponds signal instances of the second reflected probe signal to virtual elements of the radar array, calculates a first target angle between a first reference vector and a first projected target vector from the first reflected probe signal, and calculates a position of the tracking target relative to the radar array from the target range and first target angle.
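For the angle calculation, the standard uniform-linear-array model recovers a target angle from the phase difference between signal instances at adjacent receiver elements, whether those elements are physical or virtual. The function below is a sketch of that textbook relation, not the patent's processing chain:

```python
import numpy as np

def angle_of_arrival(phase_diff_rad, element_spacing_m, wavelength_m):
    """Estimate the target angle (radians, relative to the array broadside)
    from the phase difference between adjacent elements of a uniform linear
    array: sin(theta) = lambda * dphi / (2 * pi * d)."""
    s = wavelength_m * phase_diff_rad / (2.0 * np.pi * element_spacing_m)
    return np.arcsin(np.clip(s, -1.0, 1.0))   # clip guards numerical overshoot
```

With half-wavelength spacing (the common choice that avoids angle ambiguity), a phase difference of pi/2 corresponds to a target at 30 degrees off broadside.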
Super-resolution radar for autonomous vehicles
Examples disclosed herein relate to an autonomous driving system in a vehicle. The autonomous driving system includes a radar system configured to detect a target in a path and a surrounding environment of the vehicle and produce radar data with a first resolution that is gathered over a continuous field of view on the detected target. The system includes a super-resolution network configured to receive the radar data with the first resolution and produce radar data with a second resolution different from the first resolution using first neural networks. The system also includes a target identification module configured to receive the radar data with the second resolution and to identify the detected target from the radar data with the second resolution using second neural networks. Other examples disclosed herein include a method of operating the radar system in the autonomous driving system of the vehicle.
Method for determining the position of a vehicle
A method is described for determining the position of a vehicle equipped with a radar system that includes at least one radar sensor adapted to receive radar signals emitted from at least one radar emitter of the radar system and reflected back to the radar sensor. The method comprises: acquiring at least one radar scan comprising a plurality of radar detection points, wherein each radar detection point is evaluated from a radar signal received at the radar sensor and represents a location in the vicinity of the vehicle; determining, from a database, a predefined map, wherein the map comprises at least one element representing a static landmark in the vicinity of the vehicle; matching at least a subset of the plurality of radar detection points of the at least one scan to the at least one element of the map; and determining the position of the vehicle based on the matching.
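The match-then-localize steps can be illustrated with a minimal 2D iterative-closest-point loop over point landmarks. This is a sketch under the assumption of point-shaped map elements and a reasonable initial pose; the patent does not specify this particular matcher:

```python
import numpy as np

def icp_2d(scan, landmarks, iters=20):
    """Minimal 2D scan-to-map matching: repeatedly match each radar
    detection point to its nearest map landmark, then solve in closed form
    (2D Kabsch) for the rigid pose update that best aligns the matches.
    scan and landmarks are (N, 2) and (M, 2) arrays."""
    R, t = np.eye(2), np.zeros(2)
    for _ in range(iters):
        moved = scan @ R.T + t
        # nearest landmark for each detection point (brute force)
        d2 = ((moved[:, None, :] - landmarks[None, :, :]) ** 2).sum(-1)
        nn = landmarks[d2.argmin(axis=1)]
        # closed-form rigid alignment of moved -> nn
        sc, nc = moved.mean(0), nn.mean(0)
        H = (moved - sc).T @ (nn - nc)
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        dR = Vt.T @ np.diag([1.0, d]) @ U.T
        R = dR @ R
        t = dR @ t + (nc - dR @ sc)
    return R, t
```

The resulting (R, t) maps scan points into the map frame; the vehicle position follows from the inverse of that pose.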
Deep learning for object detection using pillars
Among other things, we describe techniques for detecting objects in the environment surrounding a vehicle. A computer system is configured to receive a set of measurements from a sensor of a vehicle. The set of measurements includes a plurality of data points that represent a plurality of objects in a 3D space surrounding the vehicle. The system divides the 3D space into a plurality of pillars. The system then assigns each data point of the plurality of data points to a pillar in the plurality of pillars. The system generates a pseudo-image based on the plurality of pillars. The pseudo-image includes, for each pillar of the plurality of pillars, a corresponding feature representation of data points assigned to the pillar. The system detects the plurality of objects based on an analysis of the pseudo-image. The system then operates the vehicle based upon the detecting of the objects.
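The pillar-division and pseudo-image steps can be sketched with simple grid binning. Here each pillar's feature is a hand-crafted pair (point count, max height) purely for illustration; in the described system a learned encoder produces the per-pillar feature representation, and the grid extents and cell size below are assumed values:

```python
import numpy as np

def pillarize(points, x_range=(0.0, 40.0), y_range=(-20.0, 20.0), cell=1.0):
    """Divide the ground plane into square pillars and build a pseudo-image
    with one feature vector per pillar. points is an iterable of (x, y, z)
    measurements in meters."""
    nx = int((x_range[1] - x_range[0]) / cell)
    ny = int((y_range[1] - y_range[0]) / cell)
    img = np.zeros((nx, ny, 2))
    for x, y, z in points:
        i = int((x - x_range[0]) / cell)
        j = int((y - y_range[0]) / cell)
        if 0 <= i < nx and 0 <= j < ny:          # drop points outside the grid
            img[i, j, 0] += 1.0                  # point count in this pillar
            img[i, j, 1] = max(img[i, j, 1], z)  # max height in this pillar
    return img
```

The resulting (nx, ny, features) array can be fed to a standard 2D convolutional detector, which is what makes the pillar representation convenient.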
System for detecting black ice on roads using beamforming array radar
Disclosed herein is a black ice detection system, and more particularly, a system for detecting black ice on roads. The system uses a reflector and a beamforming array radar installed along the road to measure the change in permittivity that accompanies the change of state between water and ice on the road surface, and, upon detecting freezing conditions, to warn of them and trigger an appropriate response.
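The physical basis for the permittivity measurement can be sketched with the normal-incidence reflection formula. The permittivity values below are typical textbook figures assumed for illustration (liquid water roughly 80 at low microwave frequencies, ice roughly 3.2); the patent's actual measurement geometry involves the reflector and beamformed paths:

```python
import math

def reflection_coefficient(eps_r):
    """Fraction of power reflected at normal incidence from air (eps_r = 1)
    onto a lossless surface with relative permittivity eps_r:
    Gamma = (1 - sqrt(eps_r)) / (1 + sqrt(eps_r)), power = Gamma**2."""
    n = math.sqrt(eps_r)
    g = (1.0 - n) / (1.0 + n)
    return g * g

# Assumed illustrative permittivities: water ~80, ice ~3.2. The sharp drop
# in reflected power as surface water freezes is the observable change the
# roadside radar can monitor.
```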