Patent classifications
G01C11/12
Positioning method and apparatus
The present invention discloses a positioning method and apparatus. The method includes: acquiring a first image captured by an optical device, where the first image includes an observation object and a plurality of predetermined objects, and the predetermined objects are objects with known geographic coordinates; selecting a first predetermined object from the predetermined objects based on the first image; acquiring a second image, where the first predetermined object is located at the center of the second image; determining a first attitude angle of the optical device based on the first predetermined object in the second image and measurement data captured by an inertial navigation system; modifying the first attitude angle based on a positional relationship between the observation object and the first predetermined object in the second image, to obtain a second attitude angle; and calculating geographic coordinates of the observation object based on the second attitude angle. According to the present invention, the prior-art technical problem of high costs for accurately locating an observation object is resolved.
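As an illustrative sketch (not part of the patent text), the last two steps — correcting the attitude angle by the pixel offset between the observation object and the image-centered reference object, then converting the corrected angle into ground coordinates — might look as follows, assuming a flat local ground plane, a known camera height, and a fixed angular resolution per pixel; the function names (`corrected_attitude`, `locate_on_ground`) and all parameters are hypothetical:

```python
import math

def corrected_attitude(azimuth_deg, elevation_deg, dx_px, dy_px, deg_per_px):
    # Shift the coarse attitude by the pixel offset between the observation
    # object and the image-centered reference object, scaled by the angular
    # resolution of one pixel (small-angle approximation).
    return (azimuth_deg + dx_px * deg_per_px,
            elevation_deg - dy_px * deg_per_px)

def locate_on_ground(cam_east, cam_north, cam_height, azimuth_deg, depression_deg):
    # Intersect the corrected viewing ray with a flat ground plane in a local
    # east-north frame; depression is measured down from horizontal.
    if depression_deg <= 0:
        raise ValueError("ray never reaches the ground")
    ground_range = cam_height / math.tan(math.radians(depression_deg))
    az = math.radians(azimuth_deg)
    return (cam_east + ground_range * math.sin(az),
            cam_north + ground_range * math.cos(az))

# Coarse attitude (45°, 30°) corrected by a (40, -20) pixel offset at 0.05°/px.
az, dep = corrected_attitude(45.0, 30.0, dx_px=40, dy_px=-20, deg_per_px=0.05)
east, north = locate_on_ground(0.0, 0.0, 100.0, az, dep)
```

A real implementation would replace the flat-ground intersection with a geodetic model; the sketch only shows how the angular correction feeds the coordinate calculation.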
DISPLACEMENT MEASUREMENT DEVICE AND DISPLACEMENT MEASUREMENT METHOD
A displacement measurement device includes: a first machine learning model trained to generate, from one image which contains a subject and has noise, at least one image which contains the subject and which has noise or has had noise removed; a first obtainer that obtains a first image which contains the subject and has noise and a second image which contains the subject and has noise; a first generator that, using the first machine learning model, generates M template images containing the subject from the first image and generates M target images containing the subject from the second image, M being an integer of 2 or higher; a hypothetical displacement calculator that calculates M hypothetical displacements of the subject from the M template images and the M target images; and a displacement calculator that calculates a displacement of the subject by performing statistical processing on the M hypothetical displacements.
DISPLACEMENT MEASUREMENT DEVICE AND DISPLACEMENT MEASUREMENT METHOD
A displacement measurement device includes: an obtainer that obtains a first image which contains a subject and a second image which contains the subject; a generator that generates M template images which contain the subject and which have noise from the first image and generates M target images which contain the subject and which have noise from the second image, M being an integer of 2 or higher; a hypothetical displacement calculator that calculates M hypothetical displacements of the subject from the M template images and the M target images; and a displacement calculator that calculates a displacement of the subject by performing statistical processing on the M hypothetical displacements.
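A minimal sketch of the pipeline both abstracts describe — generate M noisy template/target pairs, compute M hypothetical displacements, then reduce them by statistical processing — assuming integer pixel shifts, FFT-based cross-correlation as the matcher, and the median as the statistic; the function names are hypothetical and this is one plausible reading, not the claimed implementation:

```python
import numpy as np

def shift_by_correlation(template, target):
    # FFT-based cross-correlation; the peak location gives the integer
    # (dy, dx) shift of `target` relative to `template`.
    corr = np.fft.ifft2(np.fft.fft2(target) * np.conj(np.fft.fft2(template))).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = corr.shape
    return (dy - h if dy > h // 2 else dy, dx - w if dx > w // 2 else dx)

def measure_displacement(first, second, M=8, noise_sigma=0.02, seed=0):
    # Generate M noisy template/target pairs from the two input images,
    # compute M hypothetical displacements, then apply the statistical
    # processing step (here: the per-axis median).
    rng = np.random.default_rng(seed)
    hypotheticals = []
    for _ in range(M):
        t = first + rng.normal(0.0, noise_sigma, first.shape)
        g = second + rng.normal(0.0, noise_sigma, second.shape)
        hypotheticals.append(shift_by_correlation(t, g))
    return tuple(np.median(np.asarray(hypotheticals), axis=0))

# Synthetic subject: a Gaussian bump that moves by (3, 5) pixels.
yy, xx = np.mgrid[0:32, 0:32]
first = np.exp(-((yy - 12) ** 2 + (xx - 12) ** 2) / 18.0)
second = np.roll(first, (3, 5), axis=(0, 1))
dy, dx = measure_displacement(first, second)
```

The median over the M hypothetical displacements is what makes the estimate robust to any single noisy pair; other statistics (mean, mode) would fit the same slot.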
Visual odometry and pairwise alignment for high definition map creation
As an autonomous vehicle moves through a local area, pairwise alignment may be performed to calculate changes in the pose of the vehicle between different points in time. The vehicle comprises an imaging system configured to capture image frames depicting a portion of the surrounding area. Features are identified from the captured image frames, and a 3-D location is determined for each identified feature. The features of different image frames corresponding to different points in time are analyzed to determine a transformation in the pose of the vehicle during the time period between the image frames. The determined poses of the vehicle are used to generate an HD map of the local area.
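The pairwise-alignment step — recovering the pose change from matched 3-D feature locations in two frames — can be sketched with the standard Kabsch/Procrustes solver, assuming noise-free, already-matched correspondences (the abstract does not prescribe a particular solver; names here are hypothetical):

```python
import numpy as np

def pairwise_alignment(pts_prev, pts_curr):
    # Kabsch/Procrustes: find the rigid transform (R, t) such that
    # pts_curr ≈ pts_prev @ R.T + t, i.e. the pose change between frames.
    c_prev, c_curr = pts_prev.mean(axis=0), pts_curr.mean(axis=0)
    H = (pts_prev - c_prev).T @ (pts_curr - c_curr)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_curr - R @ c_prev
    return R, t

# Ten random 3-D features seen in frame k, then rotated 30° about z and
# translated before being seen again in frame k+1.
rng = np.random.default_rng(1)
pts_prev = rng.normal(size=(10, 3))
a = np.radians(30.0)
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
t_true = np.array([1.0, -2.0, 0.5])
pts_curr = pts_prev @ R_true.T + t_true
R_est, t_est = pairwise_alignment(pts_prev, pts_curr)
```

Chaining such frame-to-frame transforms yields the vehicle trajectory used to place observations into the HD map; a production system would wrap this solver in an outlier-robust loop such as RANSAC.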
Method of controlling a portable device and a portable device
A method (100) of controlling a portable device comprising a first camera and a second camera facing in the same direction. The method comprises: selecting (110) one of the first camera and the second camera as a visualization camera; initializing (120) a localization algorithm having as an input image data representing images captured by one of the first camera and the second camera; determining (130) a respective focus score for at least one of the first camera and the second camera, said focus score indicating a focus quality of features identified from images captured by the respective camera; selecting (140) one of the first camera and the second camera as an enabled camera based on the at least one focus score; and generating a control signal configured to cause the selected camera to be enabled such that the image data representing images captured by the enabled camera are provided as the input to the localization algorithm.
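Steps (130) and (140) — scoring focus quality and selecting the enabled camera — might be sketched as follows, using the common Laplacian-variance focus measure (one plausible choice; the abstract does not name a specific measure). The function names are hypothetical, and a perfectly defocused frame is modeled as a featureless constant image purely for illustration:

```python
import numpy as np

def focus_score(img):
    # Variance of a 4-neighbour Laplacian response: in-focus frames retain
    # more high-frequency content, hence a higher score.
    lap = (-4.0 * img[1:-1, 1:-1] + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var())

def select_enabled_camera(frames):
    # Enable whichever camera's latest frame yields the highest focus score,
    # so the localization algorithm receives the sharpest features.
    return max(frames, key=lambda cam: focus_score(frames[cam]))

yy, xx = np.mgrid[0:64, 0:64]
sharp = ((yy // 4 + xx // 4) % 2).astype(float)   # crisp checkerboard frame
defocused = np.full((64, 64), 0.5)                # featureless, blurred frame
chosen = select_enabled_camera({"first": defocused, "second": sharp})
```

In the claimed method the score is computed on features identified from the images rather than on whole frames, but the selection logic is the same: the camera with the higher focus score feeds the localization algorithm.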
Detection of vertical structures based on LiDAR scanner data for high-definition maps for autonomous vehicles
A vehicle computing system enhances relatively sparse data collected by a LiDAR sensor by increasing the density of points in certain portions of the scan. For instance, the system generates 3D triangles based on a point cloud collected by the LiDAR sensor and filters the 3D triangles to identify a subset of 3D triangles that are proximate to the ground. The system interpolates points within the subset of 3D triangles to identify additional points on the ground. As another example, the system uses data collected by the LiDAR sensor to identify vertical structures and interpolate additional points on those vertical structures. The enhanced data can be used for a variety of applications related to autonomous vehicle navigation and HD map generation, such as detecting lane markings on the road in front of the vehicle or determining a change in the vehicle's position and orientation.
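The ground-densification example above can be sketched in two steps — filter triangles proximate to the ground, then interpolate extra points inside each kept triangle with barycentric weights. The names and the flat-ground proximity test are hypothetical simplifications (the system would estimate the ground surface rather than assume a plane):

```python
import numpy as np

def near_ground(triangle, ground_z=0.0, tol=0.3):
    # Keep only triangles whose three vertices all lie within `tol`
    # metres of the estimated ground plane.
    return bool(np.all(np.abs(triangle[:, 2] - ground_z) <= tol))

def densify_triangle(triangle, n=4):
    # Interpolate a regular grid of points inside a 3-D triangle using
    # barycentric coordinates (1 - u - v, u, v) with u = i/n, v = j/n.
    a, b, c = triangle
    pts = [(1 - i / n - j / n) * a + (i / n) * b + (j / n) * c
           for i in range(n + 1) for j in range(n + 1 - i)]
    return np.array(pts)

# One near-ground triangle from the sparse scan (vertices as rows, metres).
tri = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.1], [0.0, 2.0, -0.1]])
extra = densify_triangle(tri) if near_ground(tri) else np.empty((0, 3))
```

Each kept triangle contributes (n+1)(n+2)/2 interpolated points (15 for n=4), which is how a sparse scan gains the point density needed for tasks like lane-marking detection.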