G06T17/05

System and method for generating terrain maps

Fusing online and mapped terrain estimates by using weighted grid cells that scale the values returned from online terrain and mapped terrain is disclosed. Previously mapped terrain data and online terrain data are fused, and a grid having cells of a predetermined size is overlaid on the terrain map. Each cell may include terrain data based on weighted mapped terrain data and weighted online terrain data, where the weighting values for the mapped terrain data and for the online terrain data may be different. A fused terrain estimate may be the result of a weighted mean for each cell, smoothed to reduce noise.
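The per-cell fusion the abstract describes can be sketched as follows. This is an illustrative interpretation, not the patented implementation: the weights, the box-filter smoothing, and all function names are assumptions.

```python
import numpy as np

def fuse_terrain(mapped, online, w_mapped=0.3, w_online=0.7):
    """Weighted mean of two elevation grids; the two weights may differ."""
    return (w_mapped * mapped + w_online * online) / (w_mapped + w_online)

def smooth(grid):
    """Simple 3x3 box filter as a stand-in for the noise-reduction step."""
    m, n = grid.shape
    padded = np.pad(grid, 1, mode="edge")
    out = np.zeros((m, n), dtype=float)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            out += padded[1 + di:1 + di + m, 1 + dj:1 + dj + n]
    return out / 9.0
```

A fused estimate for a region would then be `smooth(fuse_terrain(mapped_grid, online_grid))`.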

MAP CONSTRUCTION METHOD, RELOCALIZATION METHOD, AND ELECTRONIC DEVICE
20220415010 · 2022-12-29 ·

Provided are a map construction method, a relocalization method, and an electronic device. The map construction method includes: acquiring a target keyframe, performing feature extraction on the target keyframe to obtain feature point information of the target keyframe, and determining semantic information corresponding to the feature point information of the target keyframe; acquiring feature point information of a previous keyframe of the target keyframe and semantic information corresponding to the feature point information of the previous keyframe; determining a feature matching result based on a matching of the semantic information and a matching of the feature point information between the target keyframe and the previous keyframe; and constructing a map based on the feature matching result.
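One plausible reading of the combined semantic-and-feature matching is a semantic gate over nearest-descriptor matching between the two keyframes. The sketch below is a guess at that logic; the descriptor type, threshold, and names are all assumptions.

```python
import numpy as np

def match_features(desc_a, labels_a, desc_b, labels_b, max_dist=0.5):
    """Match feature points between two keyframes.

    A candidate pair is considered only when its semantic labels agree
    (semantic matching); within that gate, the nearest descriptor under
    max_dist wins (feature point matching).
    """
    matches = []
    for i, (da, la) in enumerate(zip(desc_a, labels_a)):
        best_j, best_d = -1, max_dist
        for j, (db, lb) in enumerate(zip(desc_b, labels_b)):
            if la != lb:                      # semantic gate
                continue
            d = float(np.linalg.norm(da - db))
            if d < best_d:                    # descriptor distance
                best_j, best_d = j, d
        if best_j >= 0:
            matches.append((i, best_j))
    return matches
```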

Methods for Correcting and Encrypting Space Coordinates of Three-Dimensional Model

The present disclosure provides a method for correcting and encrypting space coordinates of a three-dimensional model. The method for correcting space coordinates of a three-dimensional model includes: step S1, reading information of an original coordinate frame of a three-dimensional model in a first format and the origin of coordinates of the model; reading information of nodes from three-dimensional model data in the first format, and calculating original coordinates of the nodes; step S2, calculating parameters of correction between the original coordinate frame and a target coordinate frame based on space coordinates of four or more control points in the original coordinate frame in the first format and corresponding space coordinates of the control points in the target coordinate frame in a second format, and constructing a space coordinate correction matrix; step S3, transforming and correcting the coordinates of the origin and nodes of the three-dimensional model in the first format one by one by using the space coordinate correction matrix to obtain information of coordinate points of the three-dimensional model in the second format; and step S4, storing a file of the three-dimensional model in the second format with corrected space coordinates. Thus, the production efficiency is improved.
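The correction-matrix step (S2) can be illustrated with a least-squares fit from four or more control-point pairs. The abstract does not specify the transform model; an affine model in homogeneous coordinates is assumed here, and all function names are hypothetical.

```python
import numpy as np

def fit_correction_matrix(src_pts, dst_pts):
    """Fit a 4x4 affine correction matrix from control points.

    src_pts: (N, 3) coordinates in the original coordinate frame, N >= 4.
    dst_pts: (N, 3) corresponding coordinates in the target frame.
    """
    src = np.asarray(src_pts, dtype=float)
    dst = np.asarray(dst_pts, dtype=float)
    src_h = np.hstack([src, np.ones((len(src), 1))])   # homogeneous coords
    # Solve src_h @ A = dst for the (4, 3) affine parameters A.
    A, *_ = np.linalg.lstsq(src_h, dst, rcond=None)
    M = np.eye(4)
    M[:3, :] = A.T
    return M

def apply_correction(M, pts):
    """Transform node coordinates (step S3) with the correction matrix."""
    pts_h = np.hstack([np.asarray(pts, dtype=float),
                       np.ones((len(pts), 1))])
    return (pts_h @ M.T)[:, :3]
```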

INFORMATION PROCESSING DEVICE, INFORMATION PROCESSING METHOD, AND PROGRAM
20220414983 · 2022-12-29 ·

An object detection unit 31 detects, from an input image, a moving object and a detection target object that coincides with an object registered in an object database. A map processing unit 32 updates information of an area corresponding to the detected object in a 3D map, which includes a signed distance, a weight parameter, and an object ID label, according to an object detection result from the object detection unit 31. For example, when the moving object is detected, the map processing unit 32 initializes information of the area corresponding to the moving object in the 3D map. The map processing unit 32 registers an object map of the detected moving object in the object database. When the detection target object is detected, the map processing unit 32 converts an object map of the registered object that coincides with the detection target object according to a posture of the detection target object and integrates it with the 3D map. Movement of the object may thus be quickly reflected in the map.
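The per-voxel bookkeeping described above (signed distance, weight, object ID, and re-initialization of a moving object's area) can be sketched as follows. The weighted running-average update is standard signed-distance fusion and is an assumption here, as are all names.

```python
from dataclasses import dataclass

@dataclass
class Voxel:
    sdf: float = 1.0      # signed distance (truncated)
    weight: float = 0.0   # weight parameter
    object_id: int = -1   # -1 means "no object label"

class Map3D:
    def __init__(self):
        self.voxels = {}  # (i, j, k) -> Voxel

    def integrate(self, coord, sdf, object_id):
        """Update one voxel with a new signed-distance observation."""
        v = self.voxels.setdefault(coord, Voxel())
        v.sdf = (v.sdf * v.weight + sdf) / (v.weight + 1.0)
        v.weight += 1.0
        v.object_id = object_id

    def reset_object(self, object_id):
        """Initialize the area of a detected moving object."""
        for coord in [c for c, v in self.voxels.items()
                      if v.object_id == object_id]:
            self.voxels[coord] = Voxel()
```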

APPROACHES OF OBTAINING GEOSPATIAL COORDINATES OF SENSOR DATA
20220412737 · 2022-12-29 ·

Systems and methods are provided, comprising one or more processors and memory storing instructions that, when executed by the one or more processors, cause the system to perform: receiving successive frames of sensor data, the successive frames comprising a first frame and a second frame; determining transformations, in sensor coordinates, between coordinates of corresponding elements in the successive frames; determining a mapping between the transformations in sensor coordinates and transformations in geospatial coordinates of the corresponding elements in the successive frames; and determining second geospatial coordinates of the corresponding elements of a third frame based on: a transformation between the second frame and the third frame, and the mapping.
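A loose sketch of the core idea: estimate a mapping from sensor coordinates to geospatial coordinates using corresponding elements of earlier frames, then obtain geospatial coordinates for a later frame through that mapping. A 2-D affine model is assumed here purely for illustration.

```python
import numpy as np

def fit_sensor_to_geo(sensor_pts, geo_pts):
    """Least-squares affine map from sensor (N, 2) to geospatial (N, 2)."""
    s = np.hstack([np.asarray(sensor_pts, dtype=float),
                   np.ones((len(sensor_pts), 1))])
    A, *_ = np.linalg.lstsq(s, np.asarray(geo_pts, dtype=float), rcond=None)
    return A  # (3, 2) affine parameters

def sensor_to_geo(A, sensor_pts):
    """Apply the fitted mapping to sensor-coordinate points."""
    s = np.hstack([np.asarray(sensor_pts, dtype=float),
                   np.ones((len(sensor_pts), 1))])
    return s @ A
```

In the claimed flow, the third frame's elements would first be carried through the second-to-third-frame transformation in sensor coordinates, then pushed through this mapping.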

SYSTEMS AND METHODS FOR BIRDS EYE VIEW SEGMENTATION
20220414887 · 2022-12-29 ·

Systems and methods for bird's eye view (BEV) segmentation are provided. In one embodiment, a method includes receiving an input image from an image sensor on an agent. The input image is a perspective space image defined relative to the position and viewing direction of the agent. The method includes extracting features from the input image. The method includes estimating a depth map that includes depth values for the plurality of pixels of the input image. The method includes generating a 3D point map including points corresponding to the pixels of the input image. The method includes generating a voxel grid by voxelizing the 3D point map into a plurality of voxels. The method includes generating a feature map by extracting feature vectors for pixels based on the points included in the voxels of the plurality of voxels, and generating a BEV segmentation based on the feature map.
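The back-projection and voxelization steps can be sketched as below. The camera intrinsics, voxel size, and scalar per-pixel features are stand-in assumptions; the actual method's feature extraction and segmentation head are not reproduced.

```python
import numpy as np

def backproject(depth, fx=500.0, fy=500.0, cx=None, cy=None):
    """Lift each pixel to a 3-D point using its depth (pinhole model)."""
    h, w = depth.shape
    cx = (w - 1) / 2 if cx is None else cx
    cy = (h - 1) / 2 if cy is None else cy
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)  # (H*W, 3)

def voxelize(points, feats, voxel_size=0.5):
    """Mean-pool per-pixel features into voxels; {voxel_index: feature}."""
    idx = np.floor(points / voxel_size).astype(int)
    pooled = {}
    for key, f in zip(map(tuple, idx), feats):
        cnt, acc = pooled.get(key, (0, 0.0))
        pooled[key] = (cnt + 1, acc + f)
    return {k: acc / cnt for k, (cnt, acc) in pooled.items()}
```

A BEV map would then be read off by collapsing the voxel grid along the height axis.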

SYSTEMS AND METHODS FOR TERRAIN MAPPING USING LIDAR

Systems and methods for generating ground-level terrain elevation models, preparing vector street data to assist in generating such models, and finding approximate elevation of any point using such terrain models are provided. Lidar data can be analyzed, and Lidar elevation values at roadway/street intersections can be used to determine a model of the ground-level elevation in an area or region. Outliers can be removed. The ground-level elevation at any point in the mapped area can be determined using elevation levels for nearby roadway intersections.
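The final lookup step can be illustrated with inverse-distance weighting over nearby intersection elevations. The abstract only says nearby roadway intersections are used; the weighting scheme and names here are assumptions.

```python
def estimate_elevation(point, intersections, power=2.0):
    """Approximate ground elevation at `point` from intersection elevations.

    intersections: list of ((x, y), elevation) pairs for nearby
    roadway/street intersections (outliers assumed already removed).
    """
    num = den = 0.0
    for (x, y), z in intersections:
        d2 = (x - point[0]) ** 2 + (y - point[1]) ** 2
        if d2 == 0.0:
            return z  # query point lies exactly at an intersection
        w = 1.0 / d2 ** (power / 2.0)  # inverse-distance weight
        num += w * z
        den += w
    return num / den
```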