G01C11/06

Sensor calibration and verification using induced motion

Motion can be induced at a vehicle, e.g., by actuating components of an active suspension system, and first sensor data and second sensor data representing an environment of the vehicle can be captured by a sensor at a first position and a second position, respectively, resulting from the induced motion. A second sensor can determine motion information associated with the first position and the second position. Calibration information about the first sensor, the first sensor data, and the motion information can be used to determine an expectation of sensor data at the second position. A calibration error can be determined as the difference between the second sensor data and the expected sensor data.
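
The expectation-versus-measurement comparison described above can be sketched in a few lines. The translation-only motion model, point-cloud sensor data, and function names below are illustrative assumptions, not the patented method:

```python
import numpy as np

def expected_measurement(first_scan, motion_translation):
    # Points measured at the first position, expressed in the sensor frame,
    # should appear shifted by the negative of the vehicle motion when seen
    # from the second position (rigid scene, translation-only motion assumed).
    return first_scan - motion_translation

def calibration_error(first_scan, second_scan, motion_translation):
    expected = expected_measurement(first_scan, motion_translation)
    # Mean Euclidean distance between expected and actual second-position data.
    return float(np.mean(np.linalg.norm(second_scan - expected, axis=1)))
```

A well-calibrated sensor yields a near-zero error, while a biased sensor produces a residual proportional to the miscalibration.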

Backup navigation system for unmanned aerial vehicles
11656638 · 2023-05-23

Described is a method that involves operating an unmanned aerial vehicle (UAV) to begin a flight, where the UAV relies on a navigation system to navigate to a destination. During the flight, the method involves operating a camera to capture images of the UAV's environment, and analyzing the images to detect features in the environment. The method also involves establishing a correlation between features detected in different images, and using location information from the navigation system to localize a feature detected in different images. Further, the method involves generating a flight log that includes the localized feature. Also, the method involves detecting a failure involving the navigation system, and responsively operating the camera to capture a post-failure image. The method also involves identifying one or more features in the post-failure image, and determining a location of the UAV based on a relationship between an identified feature and a localized feature.
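
The post-failure localization step, matching features identified in the post-failure image against localized features stored in the flight log, might look like the following sketch. The flight-log layout and the relative-offset observations (e.g., recovered from a downward-facing camera at known altitude) are assumptions for illustration:

```python
import numpy as np

def localize_uav(flight_log, observations):
    """flight_log: {feature_id: world_xy of a localized feature};
    observations: {feature_id: offset_xy of the feature relative to the UAV}.
    Returns the estimated UAV position as the mean over matched features."""
    estimates = [np.asarray(flight_log[fid]) - np.asarray(offset)
                 for fid, offset in observations.items() if fid in flight_log]
    if not estimates:
        raise ValueError("no post-failure feature matched the flight log")
    return np.mean(estimates, axis=0)
```

Averaging over several matched features reduces the effect of noise in any single feature observation.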

SURVEY SYSTEM, SURVEY METHOD, AND SURVEY PROGRAM
20230152094 · 2023-05-18

A survey system for accurately surveying an area includes: a coordinate acquisition section that acquires a set of three-dimensional coordinates of a survey point or a base station used for determining sets of coordinates of the area, as a set of measurement coordinates; a comparative coordinate acquisition section that acquires at least a height-direction coordinate value of a set of comparative coordinates indicating a position within a predetermined range from the acquired set of measurement coordinates; and a determining section that calculates the difference between the height-direction coordinate value of the set of measurement coordinates and that of the set of comparative coordinates, and determines that at least one of the two sets of coordinates is incorrect when the difference is larger than a predetermined value.
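
The determining section's test reduces to a simple threshold check on the height-direction difference. The function name and the 0.5 m default threshold below are illustrative assumptions:

```python
def coordinates_suspect(measured, comparative, threshold=0.5):
    """Flag a measurement as suspect when the height-direction (z) difference
    between a measured point and a nearby comparative point exceeds a
    predetermined value. Points are (x, y, z) tuples; threshold is in the same
    units as z."""
    dz = abs(measured[2] - comparative[2])
    return dz > threshold
```

When the check fires, either the measurement or the comparative reference may be wrong; the abstract deliberately flags "at least one" of the two.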

METHOD FOR GEOREFERENCING OF OPTICAL IMAGES
20230141795 · 2023-05-11

A method (100) for referencing an optical image (19), including: obtaining (110, 120) a stereoscopic image pair (19, 23) of the optical image (19) and a SAR image (35), the surface areas covered by the images (19, 23, 35) on the ground having an overlapping area (39); selecting (130) an area of interest (42) in the overlapping area (39); from the area of interest (42): obtaining (140) a 3D model (40); calculating (150) a simulated radar image (44); estimating (160) an offset (di, dj) between the simulated image (44) and the radar image (35); selecting (170) a reference point (46); projecting (180) the reference point (46) into the radar image (35) and shifting it by (di, dj) to obtain the corrected radar connection point (46‴); determining (175) a pair of connection points (46′, 46″) in the image pair; and referencing the optical image (19) based on the connection points (46′, 46″, 46‴).
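
The offset estimation (160) between a simulated radar image and a measured SAR image is commonly done by cross-correlation. This FFT-based sketch assumes integer pixel shifts and is not taken from the patent:

```python
import numpy as np

def estimate_offset(simulated, radar):
    """Estimate the integer pixel offset (di, dj) that best aligns the
    simulated radar image with the measured SAR image, via FFT-based
    circular cross-correlation."""
    spectrum = np.fft.fft2(radar) * np.conj(np.fft.fft2(simulated))
    corr = np.fft.ifft2(spectrum).real
    di, dj = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap offsets into the signed range expected for small shifts.
    h, w = corr.shape
    if di > h // 2:
        di -= h
    if dj > w // 2:
        dj -= w
    return int(di), int(dj)
```

Subpixel refinement (e.g., fitting a paraboloid around the correlation peak) and phase-correlation normalization are common extensions but are omitted here for brevity.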

GEO-POSITIONING
20170364250 · 2017-12-21

The invention is a method of geo-positioning geographic data for visualisation of a geographic area, particularly a mine site, and a device for carrying out the method. The method includes the steps of: importing two or more data sources having geographic data of the geographic area; selecting a first control in a first data source of the two or more data sources and the same first control in a second data source of the two or more data sources; selecting a second control in the first data source and the same second control in the second data source; and applying an algorithm in a processor to process the first control in the first data source, the first control in the second data source, the second control in the first data source and the second control in the second data source by overlaying, rotating and scaling the data sources until at least the first control in the first data source matches the first control in the second data source and the second control in the first data source matches the second control in the second data source.
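
With exactly two control pairs, the overlay/rotate/scale step has a closed-form solution as a 2D similarity transform. This sketch, with assumed tuple-based points and complex-number arithmetic, illustrates the idea rather than reproducing the patented algorithm:

```python
def align_two_controls(p1, p2, q1, q2):
    """Solve the transform that maps two controls (p1, p2) in one data source
    onto the same controls (q1, q2) in another, by overlaying, rotating and
    scaling. Points are (x, y) tuples; returns the transform as a function."""
    pa, pb = complex(*p1), complex(*p2)
    qa, qb = complex(*q1), complex(*q2)
    a = (qb - qa) / (pb - pa)   # rotation + uniform scale as one complex factor
    b = qa - a * pa             # translation
    def transform(p):
        z = a * complex(*p) + b
        return (z.real, z.imag)
    return transform
```

By construction the transform maps p1 exactly onto q1 and p2 onto q2; every other point in the first data source is carried along consistently.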

MAT FOR CARRYING OUT A PHOTOGRAMMETRY METHOD, USE OF THE MAT AND ASSOCIATED METHOD
20230194260 · 2023-06-22 ·

A mat for carrying out a photogrammetry method, having an upper side with a scanning surface on which an object to be captured by means of the photogrammetry method can be placed, wherein a plurality of markers, which can preferably be distinguished from one another, is arranged on the scanning surface, said markers being detectable when carrying out the photogrammetry method so as to be used during the creation of a 3D model of the object captured by means of the photogrammetry method. The invention further specifies the use of the mat for carrying out a photogrammetry method, a computer-implemented method for the photogrammetric creation of a 3D model of an object, and a computer-readable storage medium.
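
One typical use of mat markers with known, printed spacing is recovering metric scale for the reconstructed model, which photogrammetry alone cannot determine. This helper is an illustrative assumption, not part of the specification:

```python
import math

def model_scale(marker_a_model, marker_b_model, real_spacing):
    """Recover the metric scale factor for a photogrammetric 3D model from two
    detected mat markers whose real-world separation (real_spacing, in metres)
    is known from the mat layout. Marker positions are model-space (x, y, z)."""
    dist_model = math.dist(marker_a_model, marker_b_model)
    return real_spacing / dist_model
```

Multiplying all model coordinates by the returned factor puts the 3D model into metric units; using several marker pairs and averaging would make the estimate more robust.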

Coordination of multiple structured light-based 3D image detectors

Technologies are generally described for coordination of structured light-based image detectors. In some examples, one or more structured light sources may be configured to project sets of points onto a scene. The sets of points may be arranged into disjoint sets of geometrical shapes such as lines, where each geometrical shape includes a subset of the points projected by an illumination source. A relative position and/or a color of the points in each geometrical shape may encode an identification code with which each illumination source may be identified. Thus, even when the point clouds projected by each of the illumination sources overlap, the geometrical shapes may still be detected, and thereby a corresponding illumination source may be identified. A depth map may then be estimated based on stereovision principles or depth-from-focus principles by one or more image detectors.
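
One possible relative-position encoding, purely illustrative since the abstract leaves the scheme open: short and long gaps between consecutive points along a detected line encode the bits of the projecting source's identification code:

```python
def decode_line_id(xs, short=1.0, long=2.0, tol=0.2):
    """Decode a binary identification code from the relative spacing of points
    along a detected line: a gap near `short` encodes 0, a gap near `long`
    encodes 1 (hypothetical scheme). xs are sorted point positions along the
    line; returns the integer code, or None if a gap fits neither bit."""
    bits = []
    for a, b in zip(xs, xs[1:]):
        gap = b - a
        if abs(gap - short) <= tol:
            bits.append(0)
        elif abs(gap - long) <= tol:
            bits.append(1)
        else:
            return None  # gap outside tolerance: not a valid code line
    return sum(bit << i for i, bit in enumerate(reversed(bits)))
```

Because each line carries its own code, overlapping point clouds from different sources remain distinguishable as long as the individual lines can still be segmented.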
