Patent classifications
G01S7/4808
PRIMARY-SECONDARY TYPE INFRASTRUCTURE DISEASE DETECTION AND REPAIR SYSTEM AND METHOD
A surface disease repair system and method for infrastructure based on climbing robots are provided. The system includes a detection and marking climbing robot and a repair climbing robot. While moving on a surface of the infrastructure to be detected, the detection and marking climbing robot collects a front surface image in real time through a binocular camera arranged at a front end, detects a disease on the basis of the front surface image, and performs localization and map reconstruction at the same time; when a disease is detected, the position of the disease is recorded, and a marking device is controlled to mark the disease; after detection and marking are completed, the position of the disease and the map are sent to the repair climbing robot; and the repair climbing robot receives the map and the position of the disease, travels to the position of the disease, and repairs the disease according to the mark by using a repair device.
INFORMATION PROCESSING DEVICE, CONTROL METHOD, PROGRAM AND STORAGE MEDIUM
A control unit 15 of an in-vehicle device 1 is configured to acquire, from landmark data LD that is map data including position information of one or more features, plural pieces of position information of a feature which is drawn on a road surface and which exists at or around a vehicle. Then, the control unit 15 is configured to calculate a normal vector of an approximate plane determined from the acquired plural pieces of the position information. Then, the control unit 15 is configured to calculate at least one of a pitch angle or a roll angle of the vehicle based on the orientation of the vehicle and the normal vector.
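The computation this abstract describes can be sketched as a least-squares plane fit to the road-marking positions, followed by projecting the plane's slope onto the vehicle's heading and lateral axes. The function names, the 2D-plus-height frame, and the sign conventions below are illustrative assumptions, not taken from the patent:

```python
import math

def det3(m):
    # Determinant of a 3x3 matrix (expansion along the first row).
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def solve3(m, v):
    # Solve the 3x3 system m x = v by Cramer's rule.
    d = det3(m)
    out = []
    for i in range(3):
        mi = [row[:] for row in m]
        for r in range(3):
            mi[r][i] = v[r]
        out.append(det3(mi) / d)
    return out

def fit_plane_normal(points):
    """Least-squares fit of z = a*x + b*y + c to the feature positions;
    return the unit upward normal (-a, -b, 1) / norm of the fitted plane."""
    sxx = sxy = sx = syy = sy = n = sxz = syz = sz = 0.0
    for x, y, z in points:
        sxx += x * x; sxy += x * y; sx += x
        syy += y * y; sy += y; n += 1.0
        sxz += x * z; syz += y * z; sz += z
    a, b, _c = solve3([[sxx, sxy, sx], [sxy, syy, sy], [sx, sy, n]],
                      [sxz, syz, sz])
    norm = math.sqrt(a * a + b * b + 1.0)
    return (-a / norm, -b / norm, 1.0 / norm)

def pitch_roll_from_normal(normal, yaw):
    """Pitch/roll from the plane normal and the vehicle's heading (yaw).
    Sign convention (nose-up pitch positive) is an assumption here."""
    nx, ny, nz = normal
    c, s = math.cos(yaw), math.sin(yaw)
    pitch = math.atan(-(nx * c + ny * s) / nz)   # slope along the heading
    roll = math.atan(-(-nx * s + ny * c) / nz)   # slope along the lateral axis
    return pitch, roll
```

For a road surface rising 10 degrees along the direction of travel, the fitted normal tilts backward by 10 degrees and the recovered pitch is 10 degrees with zero roll.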
ADAPTIVE MOTION COMPENSATION OF PERCEPTION CHANNELS
A method may include obtaining sensor data describing a total measurable world around a motion sensor. The method may include processing the sensor data to generate a pre-compensation scan of the total measurable world around the motion sensor based on the sensor data. The method may include determining a delay between the obtaining the sensor data and the generation of the pre-compensation scan. The method may include obtaining motion data corresponding to motion of the motion sensor and generating a motion model of the motion sensor based on the motion data. The method may include generating an after-compensation scan of the motion sensor using the delay and the motion model to compensate for continued motion during the delay.
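A minimal sketch of the compensation step, assuming a 2D scan, a constant-velocity / constant-yaw-rate motion model, and points expressed in the sensor frame (all names and the rotation sign convention are illustrative, not from the patent): each pre-compensation point is re-expressed in the sensor's pose at output time by undoing the motion accumulated during the measured delay.

```python
import math

def compensate_scan(points, delay, speed, yaw_rate):
    """Re-express 2D scan points (x forward, y left) in the sensor's pose
    after it has moved for `delay` seconds at `speed` (m/s) while turning
    at `yaw_rate` (rad/s) — a simple constant-velocity motion model."""
    dx = speed * delay          # forward translation during the delay
    dtheta = yaw_rate * delay   # heading change during the delay
    c, s = math.cos(dtheta), math.sin(dtheta)
    out = []
    for x, y in points:
        # Undo the translation, then the rotation, accumulated over the delay.
        xs, ys = x - dx, y
        out.append((c * xs + s * ys, -s * xs + c * ys))
    return out
```

With no rotation, a point 10 m ahead of a sensor moving forward at 5 m/s over a 0.2 s delay lands 9 m ahead in the after-compensation scan; a pure rotation only re-orients the points and preserves their range.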
ELECTRONIC APPARATUS AND CONTROLLING METHOD THEREOF
An electronic apparatus is provided. The electronic apparatus includes: a memory configured to store an image; a sensor part including at least one sensor; a projection part including a projection lens configured to output the image to a projection surface; and a processor configured to acquire distance information from the electronic apparatus to the projection surface through the sensor part, acquire output size information of the image based on the acquired distance information, acquire projection surface information corresponding to a bend of the projection surface through the sensor part, acquire movement information of the projection lens based on the output size information of the image and the projection surface information, and control the projection part to output the image based on the movement information.
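The "output size information from distance information" step can be sketched with a simple pinhole projection model: the projected width grows linearly with the distance to the projection surface. The function name, the fixed horizontal field of view, and the aspect-ratio parameter are assumptions for illustration only:

```python
import math

def projected_size(distance, h_fov_rad, aspect=16.0 / 9.0):
    """Width/height of the output image on a flat projection surface at
    `distance`, given the projection lens's horizontal field of view
    (radians), under a simple pinhole model."""
    width = 2.0 * distance * math.tan(h_fov_rad / 2.0)
    return width, width / aspect
```

For example, a lens whose half-angle tangent is 0.5 projects a 2.0 m wide (1.125 m tall at 16:9) image onto a surface 2 m away.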
Deep learning-based feature extraction for LiDAR localization of autonomous driving vehicles
In one embodiment, a method for extracting point cloud features for use in localizing an autonomous driving vehicle (ADV) includes selecting a first set of keypoints from an online point cloud, the online point cloud generated by a LiDAR device on the ADV for a predicted pose of the ADV; and extracting a first set of feature descriptors from the first set of keypoints using a feature learning neural network running on the ADV. The method further includes locating a second set of keypoints on a pre-built point cloud map, each keypoint of the second set of keypoints corresponding to a keypoint of the first set of keypoints; extracting a second set of feature descriptors from the pre-built point cloud map; and estimating a position and orientation of the ADV based on the first set of feature descriptors, the second set of feature descriptors, and the predicted pose of the ADV.
Vehicle sensor fusion
A computer, including a processor and a memory, the memory including instructions to be executed by the processor to obtain velocity lidar point cloud data acquired with a frequency modulated continuous wave (FMCW) lidar sensor, wherein the velocity lidar point cloud data includes a speed with which a data point is moving with respect to the FMCW lidar sensor, and filter the velocity lidar point cloud data to select static velocity data points, wherein the static velocity data points are velocity data points that each correspond to a stationary point on a roadway around a vehicle. The instructions can include further instructions to determine FMCW lidar sensor accelerations in six degrees of freedom based on the static velocity lidar data points and determine FMCW lidar sensor rotations and translations in six degrees of freedom based on the FMCW lidar sensor accelerations in six degrees of freedom. The instructions can include further instructions to determine vehicle rotations and translations in six degrees of freedom based on inertial measurement unit (IMU) data, determine FMCW lidar sensor misalignment based on comparing the FMCW lidar sensor rotations and translations with the vehicle rotations and translations, and align the FMCW lidar sensor based on the FMCW lidar sensor misalignment. The instructions can include further instructions to operate the vehicle based on the aligned FMCW lidar sensor.
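The static-point filtering step can be sketched in 2D: for a stationary world point, the Doppler (radial) speed an FMCW lidar measures equals minus the dot product of the line-of-sight direction with the sensor's own velocity. Fitting the ego velocity by least squares and keeping only returns consistent with it separates static points from movers. The two-pass scheme, the threshold, and all names below are illustrative assumptions, not the patent's method:

```python
def fit_velocity(dirs, dopplers):
    """Least-squares ego velocity v = (vx, vy) from unit line-of-sight
    directions and measured Doppler speeds, using d_i = -(u_i . v).
    Assumes the directions span a reasonable range of angles."""
    sxx = sxy = syy = bx = by = 0.0
    for (ux, uy), d in zip(dirs, dopplers):
        sxx += ux * ux; sxy += ux * uy; syy += uy * uy
        bx += -d * ux; by += -d * uy
    det = sxx * syy - sxy * sxy
    return ((syy * bx - sxy * by) / det, (sxx * by - sxy * bx) / det)

def static_filter(dirs, dopplers, thresh):
    """Two-pass filter: fit the ego velocity on all returns, keep the
    returns whose Doppler matches the static-world prediction within
    `thresh` (m/s), then refit on the kept (static) returns."""
    v = fit_velocity(dirs, dopplers)
    keep = [i for i, ((ux, uy), d) in enumerate(zip(dirs, dopplers))
            if abs(ux * v[0] + uy * v[1] + d) < thresh]
    v_static = fit_velocity([dirs[i] for i in keep],
                            [dopplers[i] for i in keep])
    return keep, v_static
```

With five static returns spread over 0–120 degrees and one moving return, the mover's residual stands well clear of the statics', and the refit recovers the true ego velocity exactly.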
Object detection in vehicles using cross-modality sensors
A system includes first and second sensors and a controller. The first sensor is of a first type and is configured to sense objects around a vehicle and to capture first data about the objects in a frame. The second sensor is of a second type and is configured to sense the objects around the vehicle and to capture second data about the objects in the frame. The controller is configured to down-sample the first and second data to generate down-sampled first and second data having a lower resolution than the first and second data. The controller is configured to identify a first set of the objects by processing the down-sampled first and second data having the lower resolution. The controller is configured to identify a second set of the objects by selectively processing the first and second data from the frame.
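The coarse-to-fine idea in this abstract — detect on the low-resolution data first, then selectively process the full-resolution frame — can be sketched with a single intensity grid and a simple threshold standing in for a real detector; the pooling factor, thresholds, and function names are assumptions for illustration:

```python
def avg_pool(grid, k):
    """Down-sample a 2D grid by factor k using average pooling
    (grid dimensions are assumed divisible by k)."""
    h, w = len(grid), len(grid[0])
    return [[sum(grid[r * k + i][c * k + j]
                 for i in range(k) for j in range(k)) / (k * k)
             for c in range(w // k)]
            for r in range(h // k)]

def coarse_to_fine_detect(grid, k, coarse_thresh, fine_thresh):
    """First pass on the down-sampled grid flags candidate blocks; the
    full-resolution data is then processed only inside flagged blocks."""
    coarse = avg_pool(grid, k)
    hits = []
    for r, row in enumerate(coarse):
        for c, val in enumerate(row):
            if val >= coarse_thresh:  # candidate block found at low resolution
                for i in range(k):
                    for j in range(k):
                        if grid[r * k + i][c * k + j] >= fine_thresh:
                            hits.append((r * k + i, c * k + j))
    return hits
```

Only blocks that trip the coarse threshold are revisited at full resolution, which is the cost saving the selective second pass buys.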
Method of processing azimuth, elevation and range data from laser scanning an object
A method of generating point cloud data from a laser scanning device, retaining a scanner pattern based on the point cloud data, and generating an abbreviated mesh from the point cloud such that the original point cloud can be faithfully restored. The point cloud data must be structured such that azimuth, elevation, and range data can be extracted. The abbreviated mesh version of the point cloud is generated utilizing selected azimuth, elevation, and range data. Scanner patterns are generated utilizing the azimuth and elevation data. To faithfully regenerate the point cloud data from the abbreviated mesh, the mesh and the scanner pattern are cross-referenced such that the regenerated point cloud has minimal data loss.
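The structuring requirement above amounts to keeping each point in (azimuth, elevation, range) form, which converts to and from Cartesian coordinates without loss. A minimal sketch of that round trip (angle conventions are an assumption; the patent's own encoding is not specified in the abstract):

```python
import math

def aer_to_xyz(az, el, rng):
    """Spherical (azimuth, elevation, range) to Cartesian; angles in
    radians, azimuth in the x-y plane, elevation measured from it."""
    return (rng * math.cos(el) * math.cos(az),
            rng * math.cos(el) * math.sin(az),
            rng * math.sin(el))

def xyz_to_aer(x, y, z):
    """Inverse mapping, so a structured scan can be regenerated exactly
    (up to floating-point precision) from Cartesian points."""
    rng = math.sqrt(x * x + y * y + z * z)
    return math.atan2(y, x), math.asin(z / rng), rng
```

Because the two mappings are exact inverses for elevations in (-pi/2, pi/2), a scan stored this way regenerates with only floating-point rounding as loss.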
Real time gating and signal routing in laser and detector arrays for LIDAR application
A Light Detection and Ranging (LIDAR) system integrated in a vehicle includes a LIDAR transmitter configured to transmit laser beams into a field of view, the field of view having a center of projection, and the LIDAR transmitter including a laser to generate the laser beams transmitted into the field of view. The LIDAR system further includes a LIDAR receiver including at least one photodetector configured to receive a reflected light beam and generate electrical signals based on the reflected light beam. The LIDAR system further includes a controller configured to receive feedback information and modify the center of projection of the field of view in a vertical direction based on the feedback information.
PREDICTIVE SENSOR ARRAY CONFIGURATION SYSTEM FOR AN AUTONOMOUS VEHICLE
An autonomous vehicle (AV) can include a set of sensors generating sensor data corresponding to a surrounding environment of the AV. The AV can further include a control system that determines imminent lighting conditions for one or more cameras of the set of sensors, and executes a set of configurations for the one or more cameras to preemptively compensate for the imminent lighting conditions.