G06T17/05

Waypoint creation in map detection

An augmented reality (AR) device can be configured to generate a virtual representation of a user's physical environment. The AR device can capture images of the user's physical environment to generate a mesh map. The AR device can project graphics at designated locations on a virtual bounding box to guide the user to capture images of the user's physical environment. The AR device can provide visual, audible, or haptic guidance to direct the user of the AR device to look toward waypoints to generate the mesh map of the user's environment.
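The abstract does not specify how the guidance cue is computed; one plausible sketch (all names and the field-of-view threshold are illustrative, not from the patent) compares the device's forward vector with the direction to the next waypoint to decide whether the user is looking at it:

```python
import numpy as np

def waypoint_guidance(head_pos, head_forward, waypoint, fov_deg=30.0):
    """Return the angle (degrees) between the user's gaze direction and a
    waypoint, and whether the waypoint is close enough to center to capture."""
    to_wp = np.asarray(waypoint, dtype=float) - np.asarray(head_pos, dtype=float)
    to_wp /= np.linalg.norm(to_wp)
    fwd = np.asarray(head_forward, dtype=float)
    fwd /= np.linalg.norm(fwd)
    # clip guards against floating-point values just outside [-1, 1]
    angle = np.degrees(np.arccos(np.clip(np.dot(fwd, to_wp), -1.0, 1.0)))
    return angle, angle <= fov_deg / 2.0

# user at head height 1.6 m, looking along -z, waypoint straight ahead
angle, centered = waypoint_guidance([0, 1.6, 0], [0, 0, -1], [0, 1.6, -5])
```

When the angle is large, the device could render an arrow (or play a spatialized sound) steering the user toward the waypoint; once `centered` is true, an image can be captured for the mesh map.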

Electrical power grid modeling

Methods, systems, and apparatus, including computer programs encoded on a storage device, for electric grid asset detection are disclosed. An electric grid asset detection method includes: obtaining overhead imagery of a geographic region that includes electric grid wires; identifying the electric grid wires within the overhead imagery; and generating a polyline graph of the identified electric grid wires. The method includes replacing curves in polylines within the polyline graph with a series of fixed lines and endpoints; identifying, based on characteristics of the fixed lines and endpoints, a location of a utility pole that supports the electric grid wires; detecting an electric grid asset from street level imagery at the location of the utility pole; and generating a representation of the electric grid asset for use in a model of the electric grid.
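The abstract does not name the algorithm used to replace curves with fixed lines and endpoints; Ramer-Douglas-Peucker simplification is one common choice for that step, sketched here (a minimal illustration, not the patented method):

```python
import numpy as np

def simplify_polyline(points, tol):
    """Ramer-Douglas-Peucker: replace a curved polyline with straight
    segments and endpoints, keeping every point within `tol` of the result."""
    pts = np.asarray(points, dtype=float)
    if len(pts) < 3:
        return pts.tolist()
    start, end = pts[0], pts[-1]
    seg = end - start
    seg_len = np.linalg.norm(seg)
    diffs = pts[1:-1] - start
    if seg_len == 0:
        dists = np.linalg.norm(diffs, axis=1)
    else:
        # perpendicular distance of each interior point to the chord
        dists = np.abs(seg[0] * diffs[:, 1] - seg[1] * diffs[:, 0]) / seg_len
    idx = int(np.argmax(dists)) + 1
    if dists[idx - 1] > tol:
        # farthest point exceeds tolerance: keep it and recurse on both halves
        left = simplify_polyline(pts[: idx + 1], tol)
        right = simplify_polyline(pts[idx:], tol)
        return left[:-1] + right
    return [start.tolist(), end.tolist()]
```

The surviving endpoints are natural candidates for pole locations, since a supported wire changes direction or sags at its supports.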

Passive wide-area three-dimensional imaging

Radar, lidar, and other active 3D imaging techniques require large, heavy sensors that consume substantial power. Passive 3D imaging techniques based on feature matching are computationally expensive and limited by the quality of the feature matching. Fortunately, there is a robust, computationally inexpensive way to generate 3D images from full-motion video acquired from a platform that moves relative to the scene. The full-motion video frames are registered to each other and mapped to the scene coordinates using data about the trajectory of the platform with respect to the scene. The time derivative of the registered frames equals the product of the height map of the scene, the projected angular velocity of the platform, and the spatial gradient of the registered frames. This relationship can be solved in (near) real time to produce the height map of the scene from the full-motion video and the trajectory.
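The stated relationship can be inverted pixel-by-pixel as a least-squares problem. A minimal numerical sketch, assuming registered frames and a 2D projected angular velocity per frame pair (shapes and names are illustrative, not from the patent):

```python
import numpy as np

def height_from_fmv(frames, omegas):
    """Per-pixel least-squares solve of  dI/dt = h * (omega . grad I)
    for the height map h, from registered full-motion video frames.

    frames: (T, H, W) registered intensity frames
    omegas: (T-1, 2) projected angular velocity per frame pair
    """
    dI_dt = np.diff(frames, axis=0)                 # temporal derivative
    gy, gx = np.gradient(frames[:-1], axis=(1, 2))  # spatial gradient per frame
    # scalar coefficient a = omega . grad I at each pixel and time
    a = omegas[:, 0, None, None] * gx + omegas[:, 1, None, None] * gy
    num = np.sum(a * dI_dt, axis=0)
    den = np.sum(a * a, axis=0) + 1e-12             # regularize flat regions
    return num / den
```

Because each pixel is solved independently from sums over time, the estimate can be updated incrementally as frames arrive, which is consistent with the near-real-time claim.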

DIGITAL REALITY PLATFORM PROVIDING DATA FUSION FOR GENERATING A THREE-DIMENSIONAL MODEL OF THE ENVIRONMENT

The present invention relates to three-dimensional reality capturing of an environment, wherein data of various kinds of measurement devices are fused to generate a three-dimensional model of the environment. In particular, the invention relates to a computer-implemented method for registration and visualization of a 3D model provided by various types of reality capture devices and/or by various surveying tasks.

HIGH-DEFINITION MAP CREATION METHOD AND DEVICE, AND ELECTRONIC DEVICE

A high-definition map creation method includes: obtaining point cloud data collected with respect to a target region, the point cloud data including K frames of point clouds and an initial pose of each frame of point cloud, K being an integer greater than 1; associating the K frames of point clouds with each other in accordance with the initial pose to obtain a first point cloud relation graph of the K frames of point clouds; performing point cloud registration on the K frames of point clouds in accordance with the first point cloud relation graph and the initial pose to obtain a target relative pose of each frame of point cloud in the K frames of point clouds; and splicing the K frames of point clouds in accordance with the target relative pose to obtain a point cloud map of the target region.
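The final splicing step can be sketched compactly, assuming each target relative pose is a 4x4 homogeneous transform into the map frame (names and shapes are illustrative; the registration itself is the hard part and is omitted):

```python
import numpy as np

def splice_point_clouds(frames, poses):
    """Transform each point-cloud frame into the map frame by its target
    relative pose, then concatenate the frames into one point cloud map.

    frames: list of (N_k, 3) point arrays
    poses:  list of (4, 4) homogeneous transform matrices
    """
    spliced = []
    for pts, T in zip(frames, poses):
        R, t = T[:3, :3], T[:3, 3]
        spliced.append(pts @ R.T + t)   # rotate then translate each point
    return np.vstack(spliced)
```

Errors in the poses show up directly as misaligned structure in the spliced map, which is why the method first builds the point cloud relation graph and refines the relative poses by registration before this step.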

METHOD OF PROCESSING IMAGE, ELECTRONIC DEVICE, AND STORAGE MEDIUM

A method of processing an image, an electronic device, and a storage medium, relating to the field of artificial intelligence, in particular to computer vision and intelligent transportation technologies. The method includes: determining at least one key frame image in a scene image sequence captured by a target camera; determining a camera pose parameter associated with each key frame image in the at least one key frame image, according to a geographic feature associated with the key frame image; and projecting each scene image in the scene image sequence to obtain a target projection image according to the camera pose parameter associated with the key frame image, so as to generate a scene map based on the target projection image. The geographic feature associated with any key frame image indicates localization information of the target camera at a time instant of capturing the corresponding key frame image.
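The projection step can be illustrated under a common simplifying assumption (not stated in the abstract) that the scene map is built on the ground plane z = 0: each pixel is back-projected through the camera intrinsics and the key frame's pose until its ray meets the ground. All names and conventions below are illustrative:

```python
import numpy as np

def project_to_ground(pixels, K, R, t):
    """Back-project pixels through intrinsics K and camera pose (R, t)
    onto the ground plane z = 0, for stitching a top-down scene map.

    pixels: (N, 2) pixel coordinates
    K:      (3, 3) camera intrinsic matrix
    R:      (3, 3) rotation mapping camera coordinates to world coordinates
    t:      (3,)   camera center in world coordinates
    """
    uv1 = np.hstack([pixels, np.ones((len(pixels), 1))])
    rays = (R @ np.linalg.solve(K, uv1.T)).T   # viewing rays in world frame
    s = -t[2] / rays[:, 2]                     # scale at which each ray hits z = 0
    return t + s[:, None] * rays               # 3D ground intersections
```

With the pose parameter taken from the geographic feature of the nearest key frame, projecting every scene image this way yields overlapping ground patches that can be accumulated into the scene map.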
