G01C11/025

HYPER CAMERA WITH SHARED MIRROR

An imaging system can include first and second cameras configured to capture first and second sets of oblique images along first and second scan paths, respectively, on an object area. A drive is coupled to a scanning mirror structure having at least one mirror surface and is configured to rotate the structure about a scan axis through a scan angle. The first and second cameras each have an optical axis set at an oblique angle to the scan axis and each include a lens that focuses first and second imaging beams, reflected from the mirror surface, onto an image sensor located in the respective camera. The imaging beams captured by the respective cameras vary according to the scan angle. Each image sensor captures its set of oblique images by sampling the imaging beams at first and second values of the scan angle.
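
The shared-mirror geometry above can be sketched in a few lines: each camera's beam is the camera axis reflected about the rotating mirror's normal, and each camera samples its own set of scan-angle values. The 45° camera tilt, the mirror geometry, and the function names are illustrative assumptions, not the patent's actual optical design.

```python
import math

def reflect(v, n):
    """Reflect direction vector v about unit mirror normal n: v - 2(v.n)n."""
    d = sum(vi * ni for vi, ni in zip(v, n))
    return tuple(vi - 2 * d * ni for vi, ni in zip(v, n))

def mirror_normal(scan_angle_deg):
    """Unit normal of a mirror surface rotated about the z (scan) axis.
    Hypothetical geometry for illustration only."""
    a = math.radians(scan_angle_deg)
    return (math.cos(a), math.sin(a), 0.0)

def capture_schedule(scan_angles_cam1, scan_angles_cam2):
    """Pair each sampled scan-angle value with the imaging-beam direction seen
    by that camera. Both cameras share the one mirror; each samples its own
    set of angle values, as in the abstract."""
    tilt = math.radians(45.0)  # assumed oblique angle between optical axis and scan axis
    cam_axis = (0.0, -math.sin(tilt), math.cos(tilt))
    events = []
    for cam, angles in (("cam1", scan_angles_cam1), ("cam2", scan_angles_cam2)):
        for ang in angles:
            beam = reflect(cam_axis, mirror_normal(ang))
            events.append((cam, ang, beam))
    return events
```

Sampling two angle values per camera yields four capture events, one beam direction per exposure.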

APPARATUS AND METHOD FOR FAULT-PROOF COLLECTION OF IMAGERY FOR UNDERWATER SURVEY
20210364289 · 2021-11-25

An apparatus and method are presented comprising one or more sensors or cameras configured to rotate about a central motor. In some examples, the motor is configured to travel at a constant linear speed while the one or more cameras face downward and collect a set of images over a predetermined region of interest. The apparatus and method are configured for image acquisition with non-sequential image overlap and eliminate gaps in image coverage, enabling fault-proof collection of imagery for an underwater survey. In some examples, long baseline (LBL) or ultra-short baseline (USBL) acoustic positioning is utilized for mapping detected images to a location. The apparatus and method are configured to utilize a simultaneous localization and mapping (SLAM) approach.
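
The "fault-proof" property above amounts to verifying that the union of image footprints covers the region of interest with no gaps, regardless of the (non-sequential) order in which overlapping images were captured. A minimal 1-D along-track sketch, with footprints reduced to intervals (the names and interval model are assumptions, not the patent's method):

```python
def covers_region(centers, footprint_len, region_start, region_end):
    """Gap check: does the union of image footprints (intervals centered on
    each along-track capture position) cover [region_start, region_end]?
    Order of capture is irrelevant - overlap may be non-sequential."""
    intervals = sorted((c - footprint_len / 2, c + footprint_len / 2)
                       for c in centers)
    reach = region_start
    for lo, hi in intervals:
        if lo > reach:
            return False  # uncovered gap before this footprint
        reach = max(reach, hi)
    return reach >= region_end
```

Footprints spaced 1 unit apart with 1.2-unit extent overlap and cover the strip; spacing them 2 units apart with 1-unit extent leaves gaps.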

Feature/ground height-based colored image generating apparatus and feature height-based colored image generating program
11181367 · 2021-11-23

Provided are a first DSM generating unit, a first DEM generating unit, a first DHM generating unit, a first inclination image generating unit, a first inclination image storing unit, a first red relief image generating unit, a first gradient-tinted image generating unit, a first feature height-based colored image generating unit, a first building height comparison image generating unit, a second red relief image generating unit, a first feature height comparison image generating unit, a first terrain/feature height-based colored image generating unit, and the like, to obtain a terrain/feature height-based colored image, in which a terrain is expressed in color in accordance with a height and an inclination thereof, and in which a feature is expressed in color in accordance with a height and an inclination thereof.
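
The feature-height input to the coloring stages above is conventionally a DHM computed per cell as DSM minus DEM (surface height minus bare-earth height). A minimal sketch of that step plus an illustrative height-to-color ramp; the ramp endpoints and `max_h` default are assumptions, not the program's actual palette:

```python
def feature_height_model(dsm, dem):
    """DHM = DSM - DEM per grid cell: the height of features (buildings,
    vegetation) above ground, computed from co-registered grids."""
    return [[s - e for s, e in zip(srow, erow)]
            for srow, erow in zip(dsm, dem)]

def height_to_color(h, max_h=30.0):
    """Map a feature height to an illustrative green-to-red RGB ramp,
    clamped to [0, max_h]."""
    t = max(0.0, min(1.0, h / max_h))
    return (int(255 * t), int(255 * (1 - t)), 0)
```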

Creation of a 3D city model from oblique imaging and lidar data

A method and a hybrid 3D-imaging device for surveying a cityscape to create a 3D city model. According to the invention, lidar data is acquired simultaneously with the imaging data for stereoscopic imaging, i.e. both data sets are acquired in one go during the same measuring process. The lidar data is combined with the imaging data to generate a 3D point cloud from which the 3D city model is extracted, wherein the lidar data is used to compensate for particular problem areas of generic stereoscopic image processing, in particular areas with unfavourable lighting conditions and areas where the accuracy and efficiency of stereoscopic point matching and point extraction are strongly reduced.
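
The compensation idea above can be reduced to a per-pixel rule: keep the stereo depth where matching succeeded with high confidence, and substitute the co-registered lidar depth in the problem areas. A minimal sketch over flat 1-D lists; the confidence threshold and `None`-for-failed-match convention are assumptions:

```python
def fuse_depth(stereo_depth, stereo_conf, lidar_depth, conf_threshold=0.5):
    """Per-pixel depth fusion: stereo where matching confidence is high,
    lidar where the match failed (None) or confidence is low - a stand-in
    for the lidar compensation described in the abstract."""
    fused = []
    for s, c, l in zip(stereo_depth, stereo_conf, lidar_depth):
        if s is None or c < conf_threshold:
            fused.append(l)  # lidar fills shadowed / texture-poor regions
        else:
            fused.append(s)
    return fused
```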

SENSOR APPARATUS

There is provided a portable sensor apparatus (10) for surveying within a room (30) of a building. The sensor apparatus (10) comprises: a sensor unit (12) for temporary insertion into a room (30), the sensor unit (12) being moveable in a scanning motion, and comprising a plurality of outwardly directed sensors (16, 20, 24) arranged to capture sensor data associated with an environment of the sensor apparatus (10) as the sensor unit is moved through the scanning motion. The plurality of sensors (16, 20, 24) comprises: a rangefinder sensor (16); a thermal imaging sensor (20); and a camera (24).

Integrated Visual Geo-Referencing Target Unit And Method Of Operation
20210341630 · 2021-11-04

A georeferencing target unit including: a generally planar top surface bearing a visual marker structure dimensioned to be observable at a distance by a remote visual capture device; an internal GPS tracking unit tracking the current position of the target unit; a microcontroller and storage means for storing GPS tracking data; a wireless network interconnection unit for interconnecting wirelessly with an external network for downloading of stored GPS tracking data; a power supply for driving the GPS tracking unit, microcontroller, storage and wireless network interconnection unit; and a user interface including an activation mechanism for activating the internal GPS tracking unit to track the current position of the target unit over an extended time frame and store the tracked GPS tracking data in the storage means.

Generating a point cloud capture plan

In some implementations, a device may receive a two-dimensional layout of an area to be captured by a scanner and scanner parameters associated with the scanner. The device may calculate a reduced field of view for the scanner based on the scanner parameters. The device may calculate, based on the two-dimensional layout and the reduced field of view, a point cloud capture plan identifying a minimum quantity of locations in the area at which to position the scanner to capture all of the area. The device may modify the point cloud capture plan, based on obstructions identified in the two-dimensional layout and based on the reduced field of view, to generate a modified point cloud capture plan, and may optimize the modified point cloud capture plan to generate a final point cloud capture plan. The device may perform one or more actions based on the final point cloud capture plan.
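
The core of the plan computation above is a coverage problem: given a reduced effective range, find a minimum set of scanner positions whose fields of view jointly cover the area. A 1-D greedy sketch for a corridor of known length (the greedy placement, function names, and symmetric +/- range model are assumptions, not the patent's algorithm):

```python
def reduced_fov_range(max_range, overlap_margin):
    """Shrink the scanner's nominal range by a margin (hypothetical
    parameterization) so that adjacent scans overlap enough to register."""
    return max_range - overlap_margin

def capture_plan_1d(length, scan_range):
    """Minimum scanner positions covering a corridor [0, length] when each
    position covers +/- scan_range: greedy placement, each scan pushed as
    far forward as coverage of the remaining gap allows."""
    positions = []
    covered = 0.0
    while covered < length:
        if length > scan_range:
            pos = min(covered + scan_range, length - scan_range)
        else:
            pos = length / 2  # a single scan suffices
        positions.append(pos)
        covered = pos + scan_range
    return positions
```

A 10-unit corridor with 3-unit reduced range needs two positions; a 4-unit corridor needs one.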

MOVABLE OBJECT FOR PERFORMING REAL-TIME MAPPING

Techniques are disclosed for real-time mapping in a movable object environment. A real-time mapping system can include at least an unmanned aerial vehicle (UAV) comprising a propulsion system, a main body coupled to the propulsion system, and a payload assembly coupled to the main body via a mounting assembly, wherein the payload assembly includes a payload comprising a scanning sensor and a positioning sensor, the payload assembly being configured to orient the scanning sensor at a plurality of angles relative to the main body.

Vehicle parking data collection system and method

A method of collecting vehicle parking data includes preprocessing images of a parking area, identifying a vehicle in a parking spot shown in one or more of the images, and detecting a vehicle change in the parking spot in the images collected at different times.
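
Detecting "a vehicle change in the parking spot in the images collected at different times" reduces to comparing each spot's occupancy across timestamps. A minimal sketch, assuming the upstream detector has already mapped each image to `(timestamp, spot_id, vehicle_id or None)` tuples (that interface and the event names are illustrative):

```python
def detect_changes(observations):
    """observations: (timestamp, spot_id, vehicle_id or None) tuples sorted
    by time. Emits (timestamp, spot_id, event) for arrivals, departures,
    and same-spot vehicle swaps between observations."""
    state = {}   # spot_id -> last observed vehicle_id (or None if empty)
    events = []
    for t, spot, vehicle in observations:
        prev = state.get(spot)
        if prev is None and vehicle is not None:
            events.append((t, spot, "arrival"))
        elif prev is not None and vehicle is None:
            events.append((t, spot, "departure"))
        elif prev is not None and vehicle is not None and prev != vehicle:
            events.append((t, spot, "swap"))
        state[spot] = vehicle
    return events
```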

USER INTERFACE FOR DISPLAYING POINT CLOUDS GENERATED BY A LIDAR DEVICE ON A UAV

Techniques are disclosed for real-time mapping in a movable object environment. A system for real-time mapping in a movable object environment may include at least one movable object including a computing device, a scanning sensor electronically coupled to the computing device, and a positioning sensor electronically coupled to the computing device. The system may further include a client device in communication with the at least one movable object, the client device including a visualization application which is configured to receive point cloud data from the scanning sensor and position data from the positioning sensor, record the point cloud data and the position data to a storage location, generate a real-time visualization of the point cloud data and the position data as they are received, and display the real-time visualization using a user interface provided by the visualization application.
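
The receive-record-visualize pipeline above can be sketched as a small recorder that logs each incoming point-cloud chunk with its matching position fix and keeps a running world-frame cloud for live display. The class name, the flat `(x, y, z)` chunk format, and the translation-only registration are illustrative assumptions, not the product's API:

```python
class PointCloudRecorder:
    """Receives point-cloud chunks with matching position data, records both,
    and accumulates a transformed cloud for the real-time view."""

    def __init__(self):
        self.log = []           # recorded (chunk, position) pairs
        self.world_points = []  # accumulated points for live display

    def on_data(self, chunk, position):
        """chunk: sensor-frame (x, y, z) points; position: sensor origin in
        the world frame. Records the raw data, then translates the points
        into the world frame (rotation omitted for brevity)."""
        self.log.append((chunk, position))
        px, py, pz = position
        self.world_points.extend((x + px, y + py, z + pz) for x, y, z in chunk)
        return len(self.world_points)  # size of the cloud currently displayed
```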