G01C11/28

REMOTE SENSING METHOD TO MODEL TERRAIN SHAPE BY DETECTING RELIABLE GROUND POINTS

According to some embodiments, a system, method and non-transitory computer-readable medium are provided comprising an imagery data source storing image data from a plurality of images; a ground point module; a memory storing program instructions; and a ground point processor, coupled to the memory, and in communication with the ground point module and operative to execute the program instructions to: receive image data for an area of interest (AOI); generate a digital surface map from the received image data, wherein the digital surface map includes an elevation value for each of a plurality of points on the digital surface map; generate a ground point sampling based on the elevation values for the plurality of points on the digital surface map; generate an image boundary sampling based on elevation values for the plurality of points along a plurality of edges of the area of interest; and interpolate the generated ground point sampling and the image boundary sampling to generate a digital terrain map. Numerous other aspects are provided.
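The pipeline this abstract describes (digital surface map → ground point sampling → boundary sampling → interpolated digital terrain map) can be sketched in simplified form. This is purely an illustration, not the patented method: here "ground points" are taken to be local elevation minima, and the interpolation is plain inverse-distance weighting; all names and thresholds are invented.

```python
def ground_point_sampling(dsm):
    """Return (row, col, elev) for cells that are local minima of the DSM."""
    rows, cols = len(dsm), len(dsm[0])
    points = []
    for r in range(rows):
        for c in range(cols):
            neighbours = [dsm[r + dr][c + dc]
                          for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                          if (dr, dc) != (0, 0)
                          and 0 <= r + dr < rows and 0 <= c + dc < cols]
            if all(dsm[r][c] <= n for n in neighbours):
                points.append((r, c, dsm[r][c]))
    return points

def boundary_sampling(dsm):
    """Sample elevations along the four edges of the area of interest."""
    rows, cols = len(dsm), len(dsm[0])
    return [(r, c, dsm[r][c]) for r in range(rows) for c in range(cols)
            if r in (0, rows - 1) or c in (0, cols - 1)]

def interpolate_dtm(samples, rows, cols, power=2):
    """Inverse-distance-weighted interpolation of the combined samplings."""
    dtm = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            num = den = 0.0
            for (sr, sc, sz) in samples:
                d2 = (r - sr) ** 2 + (c - sc) ** 2
                if d2 == 0:
                    num, den = sz, 1.0   # cell coincides with a sample
                    break
                w = 1.0 / d2 ** (power / 2)
                num += w * sz
                den += w
            dtm[r][c] = num / den
    return dtm
```

On a toy 3×3 DSM with a single raised (non-ground) cell in the middle, the interpolated DTM restores the surrounding ground elevation at that cell, which is the intended effect of sampling only reliable ground points before interpolating.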

Workpiece-based setting of weld parameters

Various embodiments of welding systems that enable determination of suitable weld settings for a weld part are provided. In one embodiment, a welding system includes a weld part having at least one weld joint to be welded. The welding system also includes a visual acquisition system including an imaging device and being adapted to acquire a visual representation of the weld part and to convert the visual representation into a digital signal representative of the weld part features. The welding system further includes a part recognition system having processing circuitry and memory. The processing circuitry is adapted to receive the digital signal and to compare the digital signal to a database stored in the memory to identify the weld part, weld settings appropriate for welding the weld part, or both.
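The recognition step described above (compare the digital signal against a stored database to identify the part and its weld settings) might be sketched as a nearest-neighbour lookup over feature vectors. The feature values, part names, and settings below are invented for illustration and are not from the patent.

```python
def match_weld_part(signal_features, database):
    """Return (part_name, weld_settings) whose stored feature vector is
    nearest (Euclidean distance) to the acquired signal features."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    best = min(database,
               key=lambda part: dist(signal_features, database[part]["features"]))
    return best, database[best]["settings"]
```

A hypothetical database maps each known weld part to its feature vector and appropriate settings; the acquired signal is then matched against it to retrieve both the part identity and the weld parameters in one lookup.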

FULLY AUTOMATIC POSITION AND ALIGNMENT DETERMINATION METHOD FOR A TERRESTRIAL LASER SCANNER AND METHOD FOR ASCERTAINING THE SUITABILITY OF A POSITION FOR A DEPLOYMENT FOR SURVEYING

One aspect of the invention relates to a fully automatic method for calculating the current, geo-referenced position and alignment of a terrestrial scan-surveying device in situ on the basis of a current panoramic image recorded by the surveying device and at least one stored, geo-referenced 3D scan panoramic image.
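The core alignment idea (register a current panoramic image against a stored, geo-referenced panorama to recover orientation) can be illustrated with a deliberately minimal sketch: each panorama is reduced to a 1-D circular descriptor (e.g. per-column brightness), and the azimuth offset is the circular shift that maximises their correlation. This is an illustrative stand-in, not the patented procedure.

```python
def estimate_azimuth_shift(current, stored):
    """Return the circular shift (in descriptor columns) that best aligns
    the current panoramic descriptor with the stored, geo-referenced one."""
    n = len(stored)
    def score(shift):
        return sum(current[(i + shift) % n] * stored[i] for i in range(n))
    return max(range(n), key=score)
```

With an n-column descriptor, a shift of k columns corresponds to a heading offset of k * 360 / n degrees relative to the stored panorama's known alignment.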

System and process of using photogrammetry for digital as-built site surveys and asset tracking
10665035 · 2020-05-26

The invention relates to a system and process for generating a two-dimensional stitched and annotated digital image of a site having at least one as-built structure thereon. The process includes acquiring a plurality of digital images, still frames and/or video images of the site, the structure, or both, with each of the digital images including one or more reference objects positioned on or about the site, the structure, or both. The reference objects are configured to accurately scale and orient each of the digital images. The process photogrammetrically generates a three-dimensional point cloud from the digital images, and one or more reference objects and features of interest are identified in the three-dimensional point cloud. Based on the identified reference objects and features, the process and system generate the two-dimensional stitched and annotated digital image of the site and/or the structure.
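One role the reference objects play here, scaling the photogrammetric point cloud to real-world units before the 2-D plan image is produced, can be sketched as follows. Assuming two identified reference markers a known distance apart, the cloud is rescaled and projected to an (x, y) plan view; the function and its signature are illustrative only.

```python
def scale_and_project(points, ref_a, ref_b, true_distance):
    """Scale a 3-D point cloud so the two identified reference objects are
    true_distance apart, then project to a 2-D (x, y) plan view."""
    measured = sum((a - b) ** 2 for a, b in zip(ref_a, ref_b)) ** 0.5
    s = true_distance / measured
    return [(s * x, s * y) for (x, y, z) in points]
```

Because photogrammetric reconstruction from images alone is scale-ambiguous, a known-size reference object is what anchors the output to accurate real-world dimensions.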

SELF-RELIANT AUTONOMOUS MOBILE PLATFORM
20200034620 · 2020-01-30

Disclosed are a drone (105) and a method for stitching video data in three dimensions. The method comprises generating video data, localizing and mapping the video data, generating a three-dimensional stitched map, and wirelessly transmitting data for the stitched map. The data is generated using at least one camera (225) mounted on a drone (105), and includes multiple viewpoints of objects in an area. The data, including the multiple viewpoints, is localized and mapped by at least one processor (210) on the drone. The three-dimensional stitched map of the area is generated using the localized and mapped video data. The data for the stitched map is wirelessly transmitted by a transceiver (220) on the drone.
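The fusion step, combining already-localized observations from multiple viewpoints into one three-dimensional stitched map, can be illustrated with a simple voxel-averaging sketch. The voxel size and data layout are assumptions for illustration, not details from the publication.

```python
from collections import defaultdict

def stitch_viewpoints(frames, voxel=1.0):
    """Fuse localized 3-D points from multiple camera viewpoints into one
    stitched map: points are binned into voxels and averaged per voxel."""
    bins = defaultdict(list)
    for frame in frames:                 # each frame: list of (x, y, z) points
        for point in frame:
            key = tuple(int(c // voxel) for c in point)
            bins[key].append(point)
    return {key: tuple(sum(axis) / len(pts) for axis in zip(*pts))
            for key, pts in bins.items()}
```

Averaging observations of the same region seen from different viewpoints is one simple way a stitched map can reconcile overlapping coverage; a real system would typically weight by pose confidence rather than averaging uniformly.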
