Patent classifications
G01S7/4808
Active power control of sensors
Sensors, including time-of-flight sensors, may be used to detect objects in an environment. In an example, a vehicle may include a time-of-flight sensor that images objects around the vehicle, e.g., so the vehicle can navigate relative to the objects. Sensor data generated by the time-of-flight sensor can return unreliable pixels, e.g., in the case of over- or under-exposure. In some examples, parameters associated with power of a time-of-flight sensor can be altered based on a number of unreliable pixels in measured data and/or based on intensity values of the measured data. For example, unreliable pixels can be determined using phase frame information captured at a receiver of the sensor.
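The control loop described above could be sketched as follows. This is a minimal illustration, not the patented method: the saturation and noise-floor thresholds, the adjustment step, and the function name are all assumptions.

```python
def adjust_integration_time(pixels, t_us, saturated=4095, noise_floor=50,
                            max_bad_fraction=0.05, step=1.25):
    """Raise or lower a ToF sensor's integration time based on the
    number of unreliable pixels in the measured data.

    A pixel is treated as unreliable when its intensity is saturated
    (over-exposed) or below the noise floor (under-exposed). All
    thresholds and the adjustment step are illustrative assumptions.
    """
    over = sum(1 for p in pixels if p >= saturated)
    under = sum(1 for p in pixels if p <= noise_floor)
    bad_fraction = (over + under) / len(pixels)
    if bad_fraction <= max_bad_fraction:
        return t_us  # measurement is reliable enough; keep power unchanged
    # Mostly over-exposed -> reduce exposure; mostly under-exposed -> raise it
    return t_us / step if over > under else t_us * step
```

In a real sensor the same idea would typically drive illumination power or modulation duty cycle rather than a single scalar, but the feedback structure is the same.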
Method for detecting an obstacle, detection device, detection system and associated vehicle
A method for detecting an obstacle includes the steps of: calculating, for each point of a space around a telemeter, a plurality of corresponding intermediate probabilities of presence, each intermediate probability of presence being associated with a respective orientation of the telemeter among a plurality of predetermined orientations around a current orientation of the telemeter, and being computed as if that orientation were exact; for each point of the space, calculating a probability of the presence of an obstacle from each corresponding intermediate probability of presence and from an uncertainty model on the orientation of the telemeter; and generating an alert if the probability of the presence of an obstacle in a predetermined zone with respect to the telemeter is greater than or equal to a predetermined alert threshold.
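The combination step can be read as marginalizing the per-orientation probabilities under the orientation uncertainty model. A sketch under that reading, with hypothetical names and a hypothetical alert threshold:

```python
def obstacle_probability(intermediate_probs, orientation_weights):
    """Combine per-orientation presence probabilities at one point using
    an uncertainty model on the telemeter's orientation.

    intermediate_probs[i] is the probability of presence computed as if
    orientation i were exact; orientation_weights[i] is the modeled
    probability that orientation i is the true one.
    """
    assert len(intermediate_probs) == len(orientation_weights)
    total = sum(orientation_weights)  # normalize in case weights don't sum to 1
    return sum(p * w for p, w in zip(intermediate_probs, orientation_weights)) / total

def alert(zone_probs, threshold=0.7):
    """Alert if any point of the predetermined zone meets the threshold."""
    return any(p >= threshold for p in zone_probs)
```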
Semantic segmentation of radar data
Systems, methods, tangible non-transitory computer-readable media, and devices associated with sensor output segmentation are provided. For example, sensor data can be accessed. The sensor data can include sensor data returns representative of an environment detected by a sensor across the sensor's field of view. Each sensor data return can be associated with a respective bin of a plurality of bins corresponding to the field of view of the sensor. Each bin can correspond to a different portion of the sensor's field of view. Channels can be generated for each of the plurality of bins and can include data indicative of a range and an azimuth associated with a sensor data return associated with each bin. Furthermore, a semantic segment of a portion of the sensor data can be generated by inputting the channels for each bin into a machine-learned segmentation model trained to generate an output including the semantic segment.
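The binning step before the machine-learned model can be sketched as below. The zero-filling of empty bins and the single-return-per-bin simplification are assumptions for illustration, not details from the abstract.

```python
def bin_returns(returns, num_bins, fov_deg=360.0):
    """Assign each sensor return to an azimuth bin spanning the field of
    view, then build a per-bin channel of (range, azimuth).

    Each return is a (range_m, azimuth_deg) pair; bin edges divide the
    field of view evenly. Unoccupied bins get a zero-filled channel, a
    common convention for producing a fixed-size model input.
    """
    bin_width = fov_deg / num_bins
    channels = [(0.0, 0.0)] * num_bins
    for rng, az in returns:
        idx = min(int((az % fov_deg) / bin_width), num_bins - 1)
        channels[idx] = (rng, az)
    return channels
```

The resulting fixed-length channel list is what a segmentation model could consume to emit one semantic label per bin.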
Partial point cloud-based pedestrians' velocity estimation method
A method, apparatus, and system for estimating a moving speed of a detected pedestrian at an autonomous driving vehicle (ADV) is disclosed. A pedestrian is detected in a plurality of frames of point clouds generated by a LIDAR device installed at an autonomous driving vehicle (ADV). In each of at least two of the plurality of frames of point clouds, a minimum bounding box enclosing points corresponding to the pedestrian excluding points corresponding to limbs of the pedestrian is generated. A moving speed of the pedestrian is estimated based at least in part on the minimum bounding boxes across the at least two of the plurality of frames of point clouds. A trajectory for the ADV is planned based at least on the moving speed of the pedestrian. Thereafter, control signals are generated to drive the ADV based on the planned trajectory.
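The two core operations, fitting a minimum bounding box that excludes limb points and estimating speed from box motion across frames, could look like the following sketch in 2D. The limb labels are assumed to come from an upstream classifier, and tracking the box center is one plausible choice the abstract does not specify.

```python
def torso_bounding_box(points, limb_labels):
    """Minimum axis-aligned box over the pedestrian's points, excluding
    points labeled as limbs.

    points: list of (x, y) LIDAR points; limb_labels: parallel list of
    booleans marking limb points (assumed from an upstream classifier).
    """
    core = [p for p, is_limb in zip(points, limb_labels) if not is_limb]
    xs = [x for x, _ in core]
    ys = [y for _, y in core]
    return (min(xs), min(ys), max(xs), max(ys))

def estimate_speed(box_t1, box_t2, dt):
    """Speed from the displacement of the box center between two frames."""
    cx1, cy1 = (box_t1[0] + box_t1[2]) / 2, (box_t1[1] + box_t1[3]) / 2
    cx2, cy2 = (box_t2[0] + box_t2[2]) / 2, (box_t2[1] + box_t2[3]) / 2
    return ((cx2 - cx1) ** 2 + (cy2 - cy1) ** 2) ** 0.5 / dt
```

Excluding limbs keeps the box centered on the torso, so swinging arms and legs do not inject spurious frame-to-frame motion into the speed estimate.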
Structure diagnosis system and structure diagnosis method
The disclosure provides a structure diagnosis system and a structure diagnosis method. The structure diagnosis system includes: a lidar scanner that scans a structure to generate point cloud data; an input interface that receives the point cloud data; and a processor that receives the point cloud data and generates a point cloud data set. The processor executes a surface degradation and geometry abnormality coupling diagnosis module to: mark a first point cloud range of a surface degradation area according to color space values of the point cloud data set; mark a second point cloud range of a geometry abnormality area according to coordinate values of the point cloud data set; and, when an abnormal area includes the first point cloud range and the second point cloud range at least partially overlapping each other, determine that surface degradation or a geometry abnormality occurs at the abnormal area and mark the abnormal area in a predetermined mode.
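The coupling test reduces to intersecting the two marked point ranges. A minimal sketch, assuming the ranges are represented as sets of point indices (a representation the abstract does not specify):

```python
def diagnose(degradation_ids, abnormal_ids):
    """Flag an abnormal area when the color-based range (surface
    degradation) and the coordinate-based range (geometry abnormality)
    at least partially overlap.

    degradation_ids: point indices marked from color space values;
    abnormal_ids: point indices marked from coordinate values.
    Returns the overlapping indices to be marked in a predetermined
    mode, or an empty set when no coupling is found.
    """
    return set(degradation_ids) & set(abnormal_ids)
```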
COMPENSATION METHOD AND APPARATUS FOR CONTINUOUS WAVE RANGING AND LIDAR
This application discloses a compensation method and apparatus for continuous wave ranging and a LiDAR. The compensation method includes: calculating a reflectivity of an object detected by a receiving unit, querying, based on a preset mapping relation, for a target distance response non-uniformity (DRNU) calibration compensation matrix associated with the reflectivity, and compensating, using the target DRNU calibration compensation matrix, for a distance of the object detected by the receiving unit.
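The compensation pipeline, compute reflectivity, look up the matching DRNU table, apply its correction, could be sketched as follows. The table structure, the nearest-neighbour lookup over calibrated reflectivities, and the subtraction-style correction are all illustrative assumptions.

```python
def compensate_distance(raw_distance, reflectivity, drnu_tables, pixel):
    """Compensate a continuous-wave ranging measurement using a DRNU
    calibration compensation matrix selected by reflectivity.

    drnu_tables maps a calibrated reflectivity value to a dict of
    per-pixel distance offsets (the 'compensation matrix'); the nearest
    calibrated reflectivity stands in for the preset mapping relation.
    """
    nearest = min(drnu_tables, key=lambda r: abs(r - reflectivity))
    offset = drnu_tables[nearest].get(pixel, 0.0)
    return raw_distance - offset
```

Distance response non-uniformity is a per-pixel, distance-dependent error of indirect ToF receivers, which is why the correction here is indexed both by pixel and by the object's estimated reflectivity.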
INFORMATION PROCESSING METHOD, NON-TRANSITORY STORAGE MEDIUM, AND INFORMATION PROCESSING SYSTEM
An information processing method includes a detection step including detecting a target based on a distance image of a monitoring region; and a stay decision step including making a stay decision, i.e., determining whether the target has stayed in place. The stay decision step makes the decision based on an index indicating the positional change of the target with the passage of time.
Method for Detecting Lost Image Information, Control Apparatus for Carrying Out a Method of this Kind, Detection Device Having a Control Apparatus of this Kind and Motor Vehicle Having a Detection Device of this Kind
A method for detecting lost image information uses a lighting device and an optical sensor. The lighting device and the optical sensor are controlled so as to be chronologically aligned with each other. A visible spacing region in an observation region of the optical sensor is determined from this chronological alignment of the control of the lighting device and the optical sensor. A recording of the observation region is generated with the optical sensor via the aligned control. Image information is then identified in regions of the recording lying outside the visible spacing region, so as to make this otherwise lost image information accessible.
INFORMATION PROCESSING DEVICE
The length of a moving body made of a material that reflects laser light poorly is measured with high accuracy.
First point cloud information based on three-dimensional point cloud information of a first region A1 of a moving body path RW in which a movement direction is set, second point cloud information based on three-dimensional point cloud information of a second region A2, and third point cloud information based on three-dimensional point cloud information of a third region A3 downstream of the second region A2 are acquired in a time series. A velocity VAM of a moving body AM is calculated based on a temporal change of the first point cloud information. A front end position FE of the moving body AM at a first time T1 is calculated based on the second point cloud information. A rear end position RE of the moving body AM at a second time T2 is calculated based on the third point cloud information. A length LAM of the moving body AM is calculated based on the velocity VAM, the front end position FE at the first time T1, and the rear end position RE at the second time T2.
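One plausible reading of the final calculation: propagate the front end forward from T1 to T2 at the measured velocity, then take its distance to the rear end measured at T2. This assumes constant velocity between the two times and a single coordinate axis along the movement direction, neither of which the abstract states explicitly.

```python
def body_length(v, fe_t1, re_t2, t1, t2):
    """Length of the moving body from measurements taken at two times.

    v: velocity VAM along the path (from region A1's temporal change);
    fe_t1: front end position FE measured at time t1 (region A2);
    re_t2: rear end position RE measured at time t2 (region A3).
    The front end advances v * (t2 - t1) between the measurements, so
    the length is the propagated front end minus the measured rear end.
    """
    front_at_t2 = fe_t1 + v * (t2 - t1)
    return front_at_t2 - re_t2
```

For example, a body moving at 10 m/s whose front end is at 100 m at t1 = 0 s and whose rear end is at 105 m at t2 = 2 s has a propagated front end of 120 m, giving a length of 15 m.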
FLEXIBLE MULTI-CHANNEL FUSION PERCEPTION
A method may include obtaining first sensor data from a first sensor system and second sensor data from a second sensor system. The first and the second sensor systems may capture sensor data from a total measurable world. The method may include identifying a first object included in the first sensor data and a second object included in the second sensor data and determining first parameters corresponding to the first object and second parameters corresponding to the second object. The first parameters may be compared with the second parameters, and whether the first object and the second object are a same object may be determined based on the comparison of the first parameters and the second parameters. Responsive to determining that the first object and the second object are the same object, a set of objects representative of objects in the total measurable world, including the same object, may be generated.
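The parameter comparison and the deduplicated world-object set could be sketched as below. The dict-of-parameters representation, the per-parameter tolerances, and the greedy merge are assumptions for illustration; the abstract does not specify how parameters are compared.

```python
def same_object(params_a, params_b, tolerances):
    """Decide whether detections from two sensor systems refer to the
    same object by comparing corresponding parameters.

    params_a / params_b: dicts of parameter name -> value (e.g. position
    or size); tolerances: per-parameter maximum absolute difference.
    """
    return all(abs(params_a[k] - params_b[k]) <= tol
               for k, tol in tolerances.items())

def fuse(objects_a, objects_b, tolerances):
    """Merge two detection lists into one set of objects representative
    of the total measurable world, keeping a single entry for pairs
    judged to be the same object."""
    fused = list(objects_a)
    for b in objects_b:
        if not any(same_object(a, b, tolerances) for a in fused):
            fused.append(b)
    return fused
```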