Patent classifications
G01S17/89
Method and apparatus for detecting obstacle
Embodiments of the present disclosure provide a method and apparatus for detecting an obstacle. The method may include: acquiring first point cloud data collected by a first vehicle-mounted laser radar and second point cloud data collected by a second vehicle-mounted laser radar, where the first vehicle-mounted laser radar is mounted higher above the ground than the second vehicle-mounted laser radar, and the number of laser beams (lines) of the first vehicle-mounted laser radar is greater than that of the second vehicle-mounted laser radar; performing ground estimation based on the first point cloud data; filtering out ground points in the second point cloud data according to the ground estimation result of the first point cloud data; and performing obstacle detection based on the second point cloud data after the ground points are filtered out.
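The pipeline in this abstract — estimate the ground from the higher-mounted, higher-resolution lidar, then subtract ground points from the lower lidar's cloud before detection — can be sketched as follows. This is a minimal illustration, not the patented implementation: the patent does not specify the ground model, so a least-squares plane fit and a fixed height threshold are assumed here, and both function names are hypothetical.

```python
import numpy as np

def fit_ground_plane(points):
    """Fit a plane z = a*x + b*y + c by least squares to the point
    cloud from the higher-mounted lidar (assumed to see mostly ground)."""
    A = np.column_stack([points[:, 0], points[:, 1], np.ones(len(points))])
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    return coeffs  # (a, b, c)

def filter_ground_points(points, coeffs, threshold=0.15):
    """Drop points within `threshold` metres of the estimated ground
    plane; what remains is passed on to obstacle detection."""
    a, b, c = coeffs
    predicted_z = a * points[:, 0] + b * points[:, 1] + c
    return points[np.abs(points[:, 2] - predicted_z) > threshold]
```

In practice the plane would be fitted robustly (e.g. with RANSAC) to tolerate obstacles in the first cloud, and the clouds would first be registered into a common vehicle frame.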
Modular ladar sensor
A lightweight, inexpensive LADAR sensor incorporating 3-D focal plane arrays is adapted specifically for modular manufacture and rapid field configurability and provisioning. The sensor generates, at high speed, 3-D image maps and object data at short to medium ranges. The techniques and structures described may be used to extend the range of long range systems as well, though the focus is on compact, short to medium range ladar sensors suitable for use in multi-sensor television production systems and 3-D graphics capture and moviemaking. 3-D focal plane arrays are used in a variety of physical configurations to provide useful new capabilities.
Hybrid sensor and compact Lidar sensor
The present exemplary embodiments provide a hybrid sensor, a Lidar sensor, and a moving object that generate composite data by mapping distance information on an obstacle, obtained through the Lidar sensor, onto image information on the obstacle obtained through an image sensor, and that predict the distance information of the composite data based on pixel intensity information, thereby generating precise composite data.
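The mapping step described above — associating each lidar range measurement with an image pixel — amounts to projecting the lidar points through the camera model. The sketch below assumes a simple pinhole camera with intrinsic matrix K and points already expressed in the camera frame; the function name is hypothetical, and the subsequent intensity-based distance prediction from the abstract is not shown.

```python
import numpy as np

def project_lidar_to_image(points_3d, K):
    """Project camera-frame lidar points (metres) through pinhole
    intrinsics K. Returns pixel coordinates and the matching depths,
    keeping only points in front of the camera (z > 0)."""
    in_front = points_3d[:, 2] > 0
    pts = points_3d[in_front]
    uvw = (K @ pts.T).T              # homogeneous image coordinates
    uv = uvw[:, :2] / uvw[:, 2:3]    # perspective divide
    return uv, pts[:, 2]
```

The resulting sparse (pixel, depth) pairs form the composite data; densifying them by predicting depth at in-between pixels from image intensity is the refinement the abstract claims.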
Enhanced pointing angle validation
Devices, systems, and methods are provided for enhanced pointing angle validation. A device may generate a collimated beam using a light source emitting a light beam through a fiducial target in an optical instrument. The device may capture an image of the fiducial target using a camera, wherein the camera is mounted on a mounting datum that is orthogonal to the collimated beam. The device may determine a pointing angle associated with the camera based on the captured image of the fiducial target. The device may compare a location of the fiducial target in the image to an optical center of the camera. The device may determine a validation status of the camera based on the location of the fiducial target in the image.
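The comparison described above reduces to simple geometry: the pixel offset between the imaged fiducial and the camera's optical center, divided through the focal length, gives the pointing angle. A minimal sketch under pinhole-camera assumptions follows; the function names and the tolerance value are illustrative, not taken from the patent.

```python
import math

def pointing_angle_deg(fiducial_px, optical_center_px, focal_length_px):
    """Angle between the camera boresight and the collimated beam,
    from the fiducial's pixel offset relative to the optical center."""
    dx = fiducial_px[0] - optical_center_px[0]
    dy = fiducial_px[1] - optical_center_px[1]
    offset = math.hypot(dx, dy)
    return math.degrees(math.atan2(offset, focal_length_px))

def validation_status(angle_deg, tolerance_deg=0.1):
    """Pass the camera if its pointing error is within tolerance."""
    return "PASS" if angle_deg <= tolerance_deg else "FAIL"
```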
Multi-channel lidar sensor module
The present invention relates to a multi-channel lidar sensor module capable of measuring at least two target objects using one image sensor. The multi-channel lidar sensor module according to an embodiment of the present invention includes at least one pair of light emitting units configured to emit laser beams and a light receiving unit formed between the at least one pair of light emitting units and configured to receive at least one pair of reflected laser beams which are emitted from the at least one pair of light emitting units and reflected by target objects.
Dynamic power throttling of spinning LIDAR
An autonomous vehicle having a LIDAR system that scans a field of view is described herein. More specifically, a computing system of the autonomous vehicle defines a region of interest in the field of view for a scan of the field of view by the LIDAR system. The region of interest is a portion of the field of view. Based on the region of interest, the computing system transmits a control signal to the LIDAR system that causes the LIDAR system to emit first light pulses with a first intensity within the region of interest during the scan and second light pulses with a second intensity outside the region of interest during the scan. The first intensity is different from the second intensity to provide different ranges for distance measurements inside and outside the region of interest.
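The per-pulse control logic this abstract describes — higher-intensity firings inside the region of interest, lower elsewhere — can be sketched as a simple lookup on the scan angle. This is an illustrative model only: the azimuth-interval ROI representation and the intensity values are assumptions, not details from the patent.

```python
def pulse_intensity(azimuth_deg, roi, inside=1.0, outside=0.4):
    """Return the normalised pulse intensity for one firing angle of a
    spinning lidar. `roi` is a (start, end) azimuth interval in degrees;
    firings inside it get the higher intensity (longer range)."""
    start, end = roi
    return inside if start <= azimuth_deg % 360.0 <= end else outside
```

A real controller would also respect eye-safety power budgets and could vary pulse rate as well as intensity, but the range asymmetry inside versus outside the ROI follows directly from this kind of per-angle throttling.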