G01C21/1652

Aligning measured signal data with SLAM localization data and uses thereof
11506500 · 2022-11-22 ·

A method includes retrieving a map of a 3D geometry of an environment, the map including a plurality of non-spatial attribute values, each corresponding to one of a plurality of non-spatial attributes and indicative of a plurality of non-spatial sensor readings acquired throughout the environment; receiving a plurality of sensor readings from a device within the environment, wherein each of the sensor readings corresponds to at least one of the non-spatial attributes; and matching the plurality of received sensor readings to at least one location in the map to produce a determined sensor location.
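As an illustration, the matching step could be a nearest-neighbor search over the stored attribute vectors. The sketch below is a minimal example; the array names, Euclidean metric, and two-attribute setup are assumptions for illustration, not from the patent:

```python
import numpy as np

def localize_by_attributes(map_locations, map_attributes, readings):
    """Match a vector of non-spatial sensor readings (e.g. signal
    strengths, magnetic field magnitude) to the map location whose
    stored attribute values are closest in Euclidean distance.

    map_locations  -- (N, 3) array of 3D map coordinates
    map_attributes -- (N, K) array, one attribute vector per location
    readings       -- (K,) vector of current sensor readings
    """
    dists = np.linalg.norm(map_attributes - readings, axis=1)
    best = int(np.argmin(dists))
    return map_locations[best], float(dists[best])
```

A real system would match sequences of readings and fuse the result with SLAM pose estimates rather than taking a single nearest neighbor.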

Laser scanner with real-time, online ego-motion estimation
11585662 · 2023-02-21 ·

A mapping system, comprising an inertial measurement unit; a camera unit; a laser scanning unit; and a computing system in communication with the inertial measurement unit, the camera unit, and the laser scanning unit, wherein the computing system computes first measurement predictions based on inertial measurement data from the inertial measurement unit at a first frequency, second measurement predictions based on the first measurement predictions and visual measurement data from the camera unit at a second frequency, and third measurement predictions based on the second measurement predictions and laser ranging data from the laser scanning unit at a third frequency.
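The three-frequency cascade could be sketched as a chain of estimators, each stage running at a lower rate and refining the prediction handed down by the faster stage above it. The 1-D pose, blending weights, and class name below are illustrative assumptions, not the patent's actual filter design:

```python
class CascadedEstimator:
    """Coarse-to-fine cascade: IMU prediction at the highest rate,
    refined first by visual data, then by laser ranging data."""

    def __init__(self):
        self.pose = 0.0  # 1-D pose for illustration

    def imu_update(self, accel, dt):
        # first (fastest) stage: dead-reckon from inertial data
        self.pose += accel * dt * dt
        return self.pose

    def visual_update(self, visual_pose, weight=0.5):
        # second stage: blend the IMU prediction with a visual estimate
        self.pose = (1 - weight) * self.pose + weight * visual_pose
        return self.pose

    def laser_update(self, laser_pose, weight=0.8):
        # third (slowest, most accurate) stage: laser ranging correction
        self.pose = (1 - weight) * self.pose + weight * laser_pose
        return self.pose
```

The point of the cascade is that the system keeps producing high-rate motion estimates even when the slower, more accurate sensors have not yet delivered a new measurement.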

Method and apparatus for recognizing a stuck status as well as computer storage medium

The present disclosure proposes a method and an apparatus for recognizing a stuck status, as well as a computer storage medium, the method comprising: building an environmental map within a preset extent, taking the current position of the mobile robot as the center; monitoring, in real time, the travel information of the mobile robot and predicting whether the mobile robot is stuck; acquiring data from multiple sensors of the mobile robot if the mobile robot is predicted to be stuck; and recognizing the current stuck status of the mobile robot based on the data from the multiple sensors.
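The prediction step ("is the robot stuck?") might, for instance, compare commanded motion against measured displacement; the function name, signature, and threshold below are hypothetical:

```python
def is_stuck(commanded_speed, measured_displacement, dt, ratio_threshold=0.2):
    """Flag a possible stuck status when the robot moves much less
    than its commanded speed over the interval dt implies.

    commanded_speed       -- speed the controller requested (m/s)
    measured_displacement -- displacement observed by odometry (m)
    dt                    -- length of the observation window (s)
    """
    expected = commanded_speed * dt
    if expected <= 0.0:
        return False  # robot was not asked to move
    return measured_displacement / expected < ratio_threshold
```

In the patented method a positive prediction would then trigger acquisition of the multi-sensor data used to classify the specific stuck status.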

DRIVER ASSISTANCE SYSTEM AND METHOD
20230047404 · 2023-02-16 ·

A driver assistance system for an ego vehicle, and a method for a driver assistance system, are provided. The system is configured to refine a coarse geolocation measurement based on the detection of static features located in the vicinity of the ego vehicle. The system performs at least one measurement of the visual appearance of each of at least one static feature located in the vicinity of the ego vehicle. Using the at least one measurement, a position of the ego vehicle relative to the static feature is calculated. The real-world position of the static feature is identified, and the relative position is used, in turn, to calculate a static feature measurement of the vehicle location. The coarse geolocation measurement and the static feature measurement are combined to form a fine geolocation position. By combining the measurements, a more accurate location of the ego vehicle can be determined.
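One standard way to combine a coarse measurement with a feature-based one is inverse-variance weighting; the sketch below assumes 1-D positions and known variances, none of which are specified in the abstract:

```python
def fuse_positions(coarse, coarse_var, feature, feature_var):
    """Inverse-variance weighted fusion of a coarse geolocation fix
    and a static-feature-based position measurement (1-D for clarity).
    Returns the fused position and its (smaller) variance."""
    w_coarse = 1.0 / coarse_var
    w_feature = 1.0 / feature_var
    fused = (w_coarse * coarse + w_feature * feature) / (w_coarse + w_feature)
    fused_var = 1.0 / (w_coarse + w_feature)
    return fused, fused_var
```

The fused variance is always smaller than either input variance, which is the sense in which combining the two measurements yields a "fine" geolocation position.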

LIDAR and REM localization

A navigation system for a host vehicle may include a processor programmed to: receive, from an entity remotely located relative to the host vehicle, a sparse map associated with at least one road segment to be traversed by the host vehicle; receive point cloud information from a LIDAR system onboard the host vehicle, the point cloud information being representative of distances to various objects in an environment of the host vehicle; compare the received point cloud information with at least one of a plurality of mapped navigational landmarks in the sparse map to provide a LIDAR-based localization of the host vehicle relative to at least one target trajectory; determine at least one navigational action for the host vehicle based on the LIDAR-based localization of the host vehicle relative to the at least one target trajectory; and cause the at least one navigational action to be taken by the host vehicle.
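The comparison of point cloud data against mapped landmarks could, in a very reduced form, be a nearest-point association that estimates a position correction; the 2-D setup and simple averaging below are illustrative assumptions standing in for a full registration algorithm:

```python
import numpy as np

def localize_against_landmarks(points, landmarks):
    """Estimate a 2-D translation correction by matching each mapped
    landmark to its nearest LIDAR point and averaging the offsets.

    points    -- (N, 2) LIDAR points in the vehicle's current frame
    landmarks -- (M, 2) mapped landmark positions from the sparse map
    """
    correction = np.zeros(2)
    for lm in landmarks:
        dists = np.linalg.norm(points - lm, axis=1)
        nearest = points[np.argmin(dists)]
        correction += lm - nearest
    return correction / len(landmarks)
```

A production system would iterate this association (as in ICP) and solve for rotation as well, but the averaged offset already shows how landmark matching yields a localization update.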

AUGMENTATION OF GLOBAL NAVIGATION SATELLITE SYSTEM BASED DATA

A vehicle computing system validates location data received from a Global Navigation Satellite System receiver with other sensor data. In one embodiment, the system calculates velocities with the location data and the other sensor data. The system generates a probabilistic model for velocity with a velocity calculated with location data and variance associated with the location data. The system determines a confidence score by applying the probabilistic model to one or more of the velocities calculated with other sensor data. In another embodiment, the system implements a machine learning model that considers features extracted from the sensor data. The system generates a feature vector for the location data and determines a confidence score for the location data by applying the machine learning model to the feature vector. Based on the confidence score, the system can validate the location data. The validated location data is useful for navigation and map updates.
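The probabilistic model for velocity could, for example, be a Gaussian centered on the GNSS-derived velocity, with the confidence score taken from the likelihood; the function names and threshold below are assumptions for illustration:

```python
import math

def velocity_confidence(gnss_velocity, gnss_variance, sensor_velocity):
    """Confidence that a sensor-derived velocity is consistent with
    the GNSS-derived velocity, modeled as a Gaussian centered on the
    GNSS estimate. Returns a score in (0, 1]."""
    z = (sensor_velocity - gnss_velocity) / math.sqrt(gnss_variance)
    return math.exp(-0.5 * z * z)

def validate_location_data(gnss_velocity, gnss_variance, sensor_velocity,
                           threshold=0.1):
    """Accept the GNSS location data only if the independently
    measured velocity is plausible under the probabilistic model."""
    score = velocity_confidence(gnss_velocity, gnss_variance, sensor_velocity)
    return score >= threshold
```

The machine-learning variant described in the abstract would replace this closed-form score with a model applied to a feature vector extracted from the sensor data.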

ATTITUDE SYNCHRONOUS SONAR SYSTEM
20220350005 · 2022-11-03 ·

A marine sonar system comprises a sonar transducer, an attitude sensor, and a processing element. The sonar transducer is configured to transmit a sonar beam into a body of water according to a transmit electronic signal, receive reflections of the sonar beam, and output a receive electronic signal according to the reflections of the sonar beam. The attitude sensor is configured to determine an attitude angle of a marine vessel with which the marine sonar system is utilized and to output an attitude electronic signal whose value varies according to the attitude angle. The processing element is configured to receive the attitude electronic signal and, based on the attitude electronic signal, control the output of the transmit electronic signal to the sonar transducer.
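Holding the beam at a fixed angle relative to the water column despite hull motion reduces, in the simplest case, to an angular compensation; the sketch below is a generic illustration of that idea (the patent controls the transmit electronic signal itself, which this does not model):

```python
def compensated_beam_angle(attitude_angle_deg, desired_angle_deg=90.0):
    """Steering angle, relative to the hull, that keeps the sonar
    beam at the desired angle relative to the water surface while
    the vessel pitches or rolls.

    attitude_angle_deg -- vessel attitude from the attitude sensor
    desired_angle_deg  -- target beam angle (90 = straight down)
    """
    return desired_angle_deg - attitude_angle_deg
```

Synchronizing the transmit timing to the attitude signal in this way keeps successive soundings comparable even in rough water.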

SURVEY DEVICE, SYSTEM AND METHOD

A system includes a survey device and at least one processor. The survey device includes a support and a sensor attached to the support. The sensor is configured to capture measurement data. The at least one processor is coupled to the sensor to receive the measurement data. The at least one processor is configured to obtain a scene model corresponding to an initial set of the measurement data captured by the sensor when the support is located at an initial position, determine a location of the survey device relative to the scene model based on the initial set of the measurement data and the scene model, and update the location of the survey device relative to the scene model, based on subsequent sets of the measurement data captured by the sensor when the support is located at corresponding subsequent positions.
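The update loop could be sketched as accumulating per-station displacements obtained by registering each new set of measurement data against the scene model; the class below is a stand-in in which the registration result is assumed given:

```python
import numpy as np

class SurveyLocalizer:
    """Track the survey device's location relative to a scene model
    by accumulating displacement estimates between stations."""

    def __init__(self, initial_location):
        # location determined from the initial measurement set
        self.location = np.asarray(initial_location, dtype=float)

    def update(self, displacement):
        """Apply the displacement estimated by registering the newest
        measurement set against the scene model (registration itself
        is outside this sketch)."""
        self.location = self.location + np.asarray(displacement, dtype=float)
        return self.location
```

This mirrors the abstract's structure: an initial location from the first measurement set, then incremental updates as the support is moved to subsequent positions.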

DATA FUSION
20230093680 · 2023-03-23 ·

A data fusion method is provided. In one embodiment, the method comprises: acquiring a rotation angle of a laser emitter of a lidar; selecting, according to a predetermined correspondence between rotation angle intervals and image sensors, an image sensor corresponding to the rotation angle interval in which the acquired rotation angle of the laser emitter is located as a specified image sensor; sending a trigger signal to the specified image sensor to cause the specified image sensor to acquire an image; receiving the image and a point cloud that is acquired and returned by the lidar within the rotation angle interval in which the acquired rotation angle is located; and fusing information of pixels in the image with information of points in the point cloud according to pose change information of a vehicle in the process of acquiring the image and the point cloud.
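The interval-to-sensor selection could be a simple lookup table; the angle intervals and sensor names below are hypothetical:

```python
# Hypothetical mapping of lidar rotation-angle intervals (degrees) to
# the image sensor that covers that sector of the scene.
ANGLE_INTERVALS = [
    ((0.0, 120.0), "cam_front"),
    ((120.0, 240.0), "cam_left"),
    ((240.0, 360.0), "cam_right"),
]

def select_image_sensor(rotation_angle):
    """Return the image sensor whose interval contains the given
    rotation angle of the laser emitter (angle wrapped to [0, 360))."""
    angle = rotation_angle % 360.0
    for (lo, hi), sensor in ANGLE_INTERVALS:
        if lo <= angle < hi:
            return sensor
    raise ValueError("no sensor configured for angle %r" % rotation_angle)
```

Triggering only the camera that faces the sector currently being scanned keeps each image time-aligned with the point cloud segment it will be fused with.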

METHODS AND SYSTEMS FOR MODELING POOR TEXTURE TUNNELS BASED ON VISION-LIDAR COUPLING

The present disclosure provides a method and a system for modeling a poor-texture tunnel based on vision-lidar coupling. The method includes: obtaining point cloud information collected by a depth camera, laser information collected by a lidar, and motion information of an unmanned aerial vehicle (UAV); generating a raster map based on the laser information, and obtaining pose information of the UAV based on the motion information; obtaining a map model by fusing the point cloud information, the raster map, and the pose information using a Bayesian fusion method; and correcting the latest map model by feature matching against a previous map model.
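Bayesian fusion of map evidence is commonly carried out per grid cell in log-odds form; the sketch below is a generic illustration of that technique, not the patent's specific fusion method:

```python
import math

def logodds(p):
    """Convert a probability to log-odds form."""
    return math.log(p / (1.0 - p))

def fuse_cell(p_prior, observations):
    """Bayesian (log-odds) fusion of independent occupancy
    observations for a single grid cell: additions in log-odds
    space correspond to multiplying likelihood ratios."""
    l = logodds(p_prior)
    for p in observations:
        l += logodds(p)
    return 1.0 / (1.0 + math.exp(-l))  # back to a probability
```

Working in log-odds keeps the per-cell update to a single addition, which is why occupancy-grid systems favor it when fusing depth-camera and lidar evidence at high rates.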