G01C11/28

SITE SCANNING USING A WORK MACHINE WITH A CAMERA
20190073762 · 2019-03-07

A method of detecting a defect in a surface of an infrastructure includes providing a work machine having a controller, a plurality of sensors including an inertial measurement unit (IMU) and a global positioning sensor (GPS), and a camera oriented in a direction substantially perpendicular to the surface. The camera takes a first image of the surface at a first location, and information is collected with the IMU and the GPS at the first location. The method includes linking the first image with the information collected at the first location, and storing the first image and the information collected at the first location in a database.
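The linking step amounts to storing each image together with the pose data captured at the same location. A minimal sketch of such a record-and-store scheme, with an in-memory stand-in for the database and hypothetical field names (the abstract does not specify a schema):

```python
from dataclasses import dataclass

@dataclass
class SurfaceScan:
    """One image of the surface linked with IMU/GPS data at capture time."""
    image_id: str     # identifier of the captured image
    latitude: float   # GPS latitude at the capture location
    longitude: float  # GPS longitude at the capture location
    roll: float       # IMU roll (degrees)
    pitch: float      # IMU pitch (degrees)
    yaw: float        # IMU yaw (degrees)

class ScanDatabase:
    """In-memory stand-in for the database that stores linked records."""
    def __init__(self):
        self._records = {}

    def store(self, scan: SurfaceScan):
        self._records[scan.image_id] = scan

    def lookup(self, image_id: str) -> SurfaceScan:
        return self._records[image_id]

# First image taken at the first location, linked with IMU/GPS information.
db = ScanDatabase()
db.store(SurfaceScan("img_0001", 41.88, -87.63, 0.1, -0.4, 92.5))
```

A real implementation would persist these records (e.g. to a spatial database) rather than a dictionary, but the linkage between image and location metadata is the same.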

Approaches of obtaining geospatial coordinates of sensor data
12298134 · 2025-05-13

Systems and methods are provided that include one or more processors and memory storing instructions that, when executed by the one or more processors, cause the system to perform: receiving successive frames of sensor data, the successive frames comprising a first frame and a second frame; determining transformations, in sensor coordinates, between coordinates of corresponding elements in the successive frames; determining a mapping between the transformations in sensor coordinates and transformations in geospatial coordinates of the corresponding elements in the successive frames; and determining second geospatial coordinates of the corresponding elements of a third frame based on: a transformation between the second frame and the third frame, and the mapping.
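The core idea is that once a sensor-to-geospatial mapping is known, geospatial coordinates of elements in a later frame follow by chaining the frame-to-frame transformation with that mapping. A minimal 2-D sketch using homogeneous matrices; the specific transforms and numbers are illustrative assumptions, not values from the patent:

```python
import math

def rigid(theta, tx, ty):
    """3x3 homogeneous 2-D rigid transform (rotation by theta, then translation)."""
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, tx], [s, c, ty], [0.0, 0.0, 1.0]]

def mat_mul(a, b):
    """Compose two 3x3 homogeneous transforms."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def apply(m, p):
    """Apply a homogeneous transform to a 2-D point."""
    x, y = p
    return (m[0][0] * x + m[0][1] * y + m[0][2],
            m[1][0] * x + m[1][1] * y + m[1][2])

# Hypothetical mapping from sensor to geospatial coordinates, as would be
# estimated from corresponding elements in the first two frames.
sensor_to_geo = rigid(math.pi / 2, 100.0, 200.0)

# Transformation between the second and the third frame, in sensor coordinates.
t_23 = rigid(0.0, 5.0, 0.0)

# Geospatial coordinates of an element of the third frame, obtained by
# chaining the frame-to-frame transformation with the mapping.
p2_sensor = (1.0, 0.0)  # element as seen in the second frame
p3_geo = apply(mat_mul(sensor_to_geo, t_23), p2_sensor)
```

In practice the frames would be 3-D (e.g. lidar or camera data) and the transforms estimated from correspondences, but the chaining structure is the same.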

ROVER ORIENTATION MEASUREMENT FOR SURVEYING TILT ORIENTATION

An orientation of a rover, such as a surveying rod or a robot, is used for tilt-compensated surveying. A surveying device, separate from the rover, measures a position of a first part of the rover. A position of a second part of the rover is calculated from the measured position of the first part, the orientation of the rover, and a known relation between the first part and the second part of the rover.
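For a surveying rod, the "known relation" is typically the rod length between the measured prism (first part) and the tip on the ground (second part). A simplified sketch of the tilt-compensation geometry, assuming the tilt is described by an angle from vertical and a compass heading (a simplification of a full roll/pitch/yaw orientation):

```python
import math

def tip_position(prism_pos, tilt_deg, heading_deg, rod_length):
    """Position of the rod tip (second part) from the measured prism
    position (first part), the rover orientation, and the rod length.

    Simplifying assumption: the rod leans by tilt_deg from vertical in the
    direction heading_deg (compass azimuth, measured from north).
    prism_pos is (east, north, up)."""
    t = math.radians(tilt_deg)
    h = math.radians(heading_deg)
    dx = rod_length * math.sin(t) * math.sin(h)  # east offset of the tip
    dy = rod_length * math.sin(t) * math.cos(h)  # north offset of the tip
    dz = -rod_length * math.cos(t)               # tip sits below the prism
    x, y, z = prism_pos
    return (x + dx, y + dy, z + dz)

# A perfectly vertical 2 m rod: the tip is directly 2 m below the prism.
tip = tip_position((10.0, 20.0, 5.0), tilt_deg=0.0, heading_deg=0.0, rod_length=2.0)
```

With a nonzero tilt, the horizontal offset grows as rod_length * sin(tilt), which is exactly the error tilt compensation removes from a naively plumbed measurement.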

APPROACHES OF OBTAINING GEOSPATIAL COORDINATES OF SENSOR DATA
20250377200 · 2025-12-11

Systems and methods are provided that include one or more processors and memory storing instructions that, when executed by the one or more processors, cause the system to perform: receiving successive frames of sensor data, the successive frames comprising a first frame and a second frame; determining transformations, in sensor coordinates, between coordinates of corresponding elements in the successive frames; determining a mapping between the transformations in sensor coordinates and transformations in geospatial coordinates of the corresponding elements in the successive frames; and determining second geospatial coordinates of the corresponding elements of a third frame based on: a transformation between the second frame and the third frame, and the mapping.

Vision sensing device and method

Provided is a vision sensing device including a housing, a camera, a laser pattern generator, an inertial measurement unit, and at least one processor configured to project a laser pattern within the field of view of the camera, capture inertial data from the inertial measurement unit as a user moves the housing, capture visual data from the field of view with the camera as the user moves the housing, capture depth data with the laser pattern generator as the user moves the housing, and generate an RGB-D point cloud based on the visual data, the inertial data, and the depth data.
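Generating an RGB-D point cloud reduces to back-projecting each pixel with a depth measurement into 3-D and attaching its color. A minimal sketch using the standard pinhole camera model; the intrinsics (fx, fy, cx, cy) and sample values are hypothetical, and the inertial-data fusion that registers successive views is omitted:

```python
def backproject(u, v, depth, fx, fy, cx, cy):
    """Map a pixel (u, v) with measured depth to a 3-D camera-frame point
    using the pinhole model with focal lengths fx, fy and principal
    point (cx, cy)."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

def make_point_cloud(samples, fx=500.0, fy=500.0, cx=320.0, cy=240.0):
    """samples: iterable of (u, v, depth, (r, g, b)) tuples, pairing the
    camera's visual data with depth recovered from the laser pattern.
    Returns a list of (x, y, z, r, g, b) points."""
    cloud = []
    for u, v, depth, rgb in samples:
        cloud.append(backproject(u, v, depth, fx, fy, cx, cy) + rgb)
    return cloud

# One sample at the principal point back-projects onto the optical axis.
cloud = make_point_cloud([(320, 240, 1.5, (200, 180, 160))])
```

In the device described, the inertial data would additionally be used (e.g. via visual-inertial odometry) to express points captured while the housing moves in a common frame; the sketch covers only the per-pixel RGB-D step.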
