G01C11/10

Systems and methods for rapid alignment of digital imagery datasets to models of structures

Systems and methods for aligning digital image datasets to a computer model of a structure. The system receives a plurality of reference images from an input image dataset and identifies common ground control points (GCPs) in the reference images. The system then calculates virtual three-dimensional (3D) coordinates of the identified GCPs. Next, the system projects the virtual 3D coordinates into all of the images to obtain two-dimensional (2D) image coordinates. Finally, using the projected 2D image coordinates, the system performs spatial resection of all of the images in order to rapidly align them.
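
The projection step described above can be illustrated with a minimal pinhole-camera sketch (not the patented method; the focal length, camera poses, and GCP coordinates below are made-up values, and rotation is omitted for brevity). Spatial resection would then invert this mapping, recovering each camera's pose from many such 2D-3D correspondences.

```python
# Illustrative sketch: projecting a virtual 3D ground control point (GCP)
# into 2D image coordinates with a simple pinhole camera model.
# All numeric values are assumptions, not from the source document.

def project_point(point_3d, camera_pos, focal_len):
    """Project a 3D point into 2D image coordinates for a camera at
    camera_pos looking down the +Z axis (no rotation, for brevity)."""
    x, y, z = (p - c for p, c in zip(point_3d, camera_pos))
    if z <= 0:
        raise ValueError("point is behind the camera")
    return (focal_len * x / z, focal_len * y / z)

# One virtual GCP projected into two camera positions; resection solves
# the inverse problem: given many (GCP, pixel) pairs, recover the pose.
gcp = (10.0, 5.0, 100.0)
uv_a = project_point(gcp, camera_pos=(0.0, 0.0, 0.0), focal_len=1000.0)
uv_b = project_point(gcp, camera_pos=(20.0, 0.0, 0.0), focal_len=1000.0)
```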

DISTANCE DETERMINATION OF A SAMPLE PLANE IN A MICROSCOPE SYSTEM
20200200531 · 2020-06-25 ·

A distance determination system for coarse focus setting in a microscope system includes: a sample stage with a placement surface for holding a sample carrier, displaceable along at least one direction of extent of a sample plane; an overview camera with a non-telecentric objective, directed at the sample stage, for producing digital images; and an evaluation unit. The evaluation unit includes a storage system storing at least two recorded digital images of the sample stage taken at different viewing angles, a trained machine-learning-based system for identifying corresponding structures of a sample carrier placed on the sample stage in the two recorded digital images, and a distance determination unit adapted to determine the distance of a reference point of the sample carrier from a reference point of the overview camera based on the different viewing angles onto the sample stage and on a pixel distance between the two recorded digital images, using the associated corresponding structures contained in the recorded images.
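
Determining distance from corresponding structures in two views taken at different angles reduces, in the simplest translational case, to the classic stereo relation depth = f · B / d. The sketch below is illustrative only (not the patented system); the focal length, baseline, and disparity are assumed values.

```python
# Illustrative sketch: recovering the distance to a matched sample-carrier
# structure from the pixel disparity between two overview images, assuming
# a simple translational stereo model. Values are assumptions.

def distance_from_disparity(focal_len_px, baseline_mm, disparity_px):
    """Classic stereo relation: depth = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("corresponding structures must have positive disparity")
    return focal_len_px * baseline_mm / disparity_px

# A structure matched in both images 40 px apart, with a 20 mm baseline:
depth_mm = distance_from_disparity(focal_len_px=800.0, baseline_mm=20.0,
                                   disparity_px=40.0)
```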

Surveying Instrument And Photogrammetric Method
20200182614 · 2020-06-11 ·

There is provided a surveying instrument including a distance measuring unit configured to measure a distance to an object to be measured, a measuring direction image pickup module configured to acquire an observation image that includes the object to be measured, an attitude detector configured to detect a tilt of the surveying instrument main body, and an arithmetic control module, wherein the arithmetic control module is configured to extract common corresponding points from a first image acquired at a first installing point and a second image acquired at a second installing point, perform matching based on the corresponding points, and measure a positional relationship of the object to be measured with respect to the first installing point and the second installing point based on the matched images.
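
The corresponding-point extraction step can be sketched as nearest-neighbour matching of feature descriptors between the two images. This is a toy illustration, not the instrument's actual matching algorithm: the descriptors here are simple 2-tuples, whereas real photogrammetric matching uses far more robust features and outlier rejection.

```python
# Illustrative sketch: matching points between two images by finding, for
# each descriptor from the first image, the nearest descriptor in the
# second image. Descriptors and values are assumptions.

def match_points(desc_a, desc_b):
    """For each descriptor in desc_a, return the index of the closest
    descriptor in desc_b (squared Euclidean distance)."""
    def dist(p, q):
        return sum((x - y) ** 2 for x, y in zip(p, q))
    return [min(range(len(desc_b)), key=lambda j: dist(a, desc_b[j]))
            for a in desc_a]

first_image = [(0.1, 0.9), (0.8, 0.2)]
second_image = [(0.79, 0.21), (0.12, 0.88)]
matches = match_points(first_image, second_image)  # indices into second_image
```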

System and process of using photogrammetry for digital as-built site surveys and asset tracking
10665035 · 2020-05-26 ·

The invention relates to a system and process for generating a two-dimensional stitched and annotated digital image of a site having at least one as-built structure thereon. The process includes acquiring a plurality of digital images, still frames and/or video images of the site, the structure, or both, with each of the digital images including one or more reference objects positioned on or about the site, the structure, or both. The reference objects are configured to accurately scale and orient each of the digital images. The process photogrammetrically generates a three-dimensional point cloud from the digital images, and one or more reference objects and features of interest are identified in the three-dimensional point cloud. Based on the identified reference objects and features, the process and system generate the two-dimensional stitched and annotated digital image of the site and/or the structure.
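
The role of the reference objects can be sketched as follows: an object of known real-world size fixes the scale of the photogrammetric point cloud, which can then be flattened to a plan-view layout for the 2D as-built image. This is a minimal illustration under assumed values, not the patented process; the function name and the 1 m reference length are hypothetical.

```python
# Illustrative sketch: using a reference object of known real-world size
# to scale a photogrammetric point cloud, then flattening it to a
# top-down (plan-view) 2D layout. All values are assumptions.

def scale_and_flatten(points, measured_ref_len, true_ref_len):
    """Scale 3D points so the reference object matches its true length,
    then drop the height axis for a plan-view image layout."""
    s = true_ref_len / measured_ref_len
    return [(s * x, s * y) for x, y, _z in points]

# Reference object measured as 0.5 units in the cloud but truly 1.0 m:
cloud = [(1.0, 2.0, 0.3), (3.0, 4.0, 0.1)]
plan = scale_and_flatten(cloud, measured_ref_len=0.5, true_ref_len=1.0)
```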

Position and attitude determination method and system using edge images

A method of determining at least one of position and attitude in relation to an object is provided. The method includes capturing at least two images of the object with at least one camera, each image captured at a different position in relation to the object. The images are converted to edge images. The edge images of the object are converted into three-dimensional edge images using the positions at which the at least two images were captured. Overlapping edge pixels in the at least two three-dimensional edge images are located to identify overlap points. A three-dimensional candidate edge point image of the identified overlap points is built in an evidence grid. The three-dimensional candidate edge image in the evidence grid is compared with a model of the object to determine at least one of a then-current position and attitude in relation to the object.
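
The evidence-grid overlap step can be sketched by quantising the 3D edge points from each view into coarse grid cells and keeping only cells supported by both views. This is an illustrative toy (the grid resolution and points are assumptions, not from the patent), but it shows how spurious edges seen from only one position fall out of the candidate set.

```python
# Illustrative sketch: voting 3D edge points from two views into a coarse
# evidence grid; a cell becomes a candidate edge only when edge points
# from both views fall into it. All values are assumptions.

def overlap_cells(edges_view1, edges_view2, cell=1.0):
    """Quantise each 3D edge point to a grid cell and intersect the two
    views' occupied-cell sets."""
    def cells(points):
        return {tuple(int(c // cell) for c in p) for p in points}
    return cells(edges_view1) & cells(edges_view2)

view1 = [(0.2, 0.4, 1.1), (5.0, 5.0, 5.0)]   # second point seen only here
view2 = [(0.6, 0.9, 1.8), (9.0, 0.0, 0.0)]   # second point seen only here
candidates = overlap_cells(view1, view2)
```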

SELF-RELIANT AUTONOMOUS MOBILE PLATFORM
20200034620 · 2020-01-30 ·

A drone (105) and a method for stitching video data in three dimensions. The method comprises generating video data, localizing and mapping the video data, generating a three-dimensional stitched map, and wirelessly transmitting data for the stitched map. The data is generated using at least one camera (225) mounted on a drone (105), and includes multiple viewpoints of objects in an area. The data, including the multiple viewpoints, is localized and mapped by at least one processor (210) on the drone. The three-dimensional stitched map of the area is generated using the localized and mapped video data. The data for the stitched map is wirelessly transmitted by a transceiver (220) on the drone.
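
The stitching of localized video data into one three-dimensional map can be sketched as transforming each frame's observed points by that frame's estimated pose and accumulating them into a global map. This is a deliberately minimal illustration, not the patented pipeline: a pure translation pose is assumed, and all coordinates are made-up values.

```python
# Illustrative sketch: merging localized per-frame observations into a
# single stitched 3D map by applying each frame's estimated pose offset.
# A translation-only pose is an assumption made for brevity.

def stitch_frames(frames):
    """frames: list of (pose_offset, points) pairs, where pose_offset is
    the frame's estimated (x, y, z) position; returns one merged map."""
    merged = []
    for (ox, oy, oz), pts in frames:
        merged.extend((x + ox, y + oy, z + oz) for x, y, z in pts)
    return merged

world = stitch_frames([
    ((0.0, 0.0, 0.0), [(1.0, 0.0, 2.0)]),
    ((5.0, 0.0, 0.0), [(1.0, 0.0, 2.0)]),  # same relative view, 5 m later
])
```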
