G06T17/05

Predicting terrain traversability for a vehicle

Embodiments of the present disclosure relate generally to generating and utilizing three-dimensional terrain maps for vehicular control. Other embodiments may be described and/or claimed.
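The abstract above describes predicting traversability from a three-dimensional terrain map. A minimal illustrative sketch (not the claimed method; the heightmap grid, cell size, and slope threshold are assumptions) classifies cells of a heightmap as traversable when the steepest rise to a neighbouring cell stays under a threshold:

```python
# Hypothetical sketch: classify heightmap cells as traversable by local slope.
def traversability(heightmap, cell_size=1.0, max_slope=0.5):
    """Return a grid of booleans; True where the steepest rise to any
    4-neighbour stays at or under max_slope (rise/run)."""
    rows, cols = len(heightmap), len(heightmap[0])
    result = []
    for r in range(rows):
        row = []
        for c in range(cols):
            steepest = 0.0
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols:
                    rise = abs(heightmap[nr][nc] - heightmap[r][c])
                    steepest = max(steepest, rise / cell_size)
            row.append(steepest <= max_slope)
        result.append(row)
    return result
```

A vehicle controller could then restrict path planning to cells marked True.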

Graphical user interface for controlling a solar ray mapping

Systems, methods, and computer-readable media are described herein to model divergent beam ray paths between locations on a roof (e.g., of a structure) and modeled locations of the sun at different times of the day and different days during a week, month, year, or another time period. A graphical user interface allows for visualization of the modeled ray paths and graphical manipulation of the resolution and parameters of the modeling process.
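The modeling loop the abstract describes — sun positions sampled at different times of day — can be sketched with a deliberately toy sun model (the rise/set times, south-facing arc, and function names here are assumptions, not the disclosed model):

```python
import math

# Toy sketch (names hypothetical): sample sun directions over a day so
# ray paths to roof locations can be tested against each direction.
def sun_direction(hour, peak_elevation=math.radians(60)):
    """Very simplified: sun rises at 6h, sets at 18h, tracing an arc."""
    t = (hour - 6.0) / 12.0            # 0..1 across daylight
    if not 0.0 <= t <= 1.0:
        return None                    # below the horizon
    elevation = math.sin(t * math.pi) * peak_elevation
    azimuth = math.pi * t              # east (0) to west (pi)
    return (math.cos(elevation) * math.cos(azimuth),
            math.cos(elevation) * math.sin(azimuth),
            math.sin(elevation))

def lit_hours(hours):
    """Count sampled hours at which the sun is above the horizon."""
    return sum(1 for h in hours if sun_direction(h) is not None)
```

A user interface could expose the sampling resolution (how many hours or days are evaluated) as the adjustable modeling parameter the abstract mentions.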

Some automated and semi-automated tools for linear feature extraction in two and three dimensions
11551439 · 2023-01-10 ·

A system for vector extraction comprising a vector extraction engine stored and operating on a network-connected computing device that loads raster images from a database stored and operating on a network-connected computing device, identifies features in the raster images, and computes a vector based on the features, and methods for feature and vector extraction.
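As a purely illustrative sketch of computing a vector from raster features (a drastic simplification of the claimed engine; the binary-raster input and endpoint heuristic are assumptions), one could take the extreme foreground pixels of a linear feature as the vector's endpoints:

```python
# Hedged sketch: derive a single line-segment "vector" from a binary
# raster by taking the extreme foreground pixels.
def extract_vector(raster):
    pts = [(x, y) for y, row in enumerate(raster)
                  for x, v in enumerate(row) if v]
    if not pts:
        return None
    start = min(pts)          # leftmost foreground pixel
    end = max(pts)            # rightmost foreground pixel
    return start, end
```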

Photorealistic three dimensional texturing using canonical views and a two-stage approach

Images of various views of objects can be captured. An object mesh structure can be created based at least in part on the object images. The object mesh structure represents the three-dimensional shape of the object and includes a mesh with mesh elements. The mesh elements are assigned views first from a subset of views to texture large contiguous portions of the object from relatively few views. Portions that are not textured from the subset views are textured using the full set of views, such that all mesh elements are assigned views. The views first assigned from the subset of views and the views then assigned from the full plurality of views can be packaged into a texture atlas. These texture atlas views can be packaged with mapping data to map the texture atlas views to their corresponding mesh elements. The texture atlas and the object mesh structure can be sent to a client device to render a representation of the object. The representation can be manipulated on the client device in an augmented reality setting.
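The two-stage assignment described above — canonical subset first, full set as fallback — can be sketched as follows (the `coverage` structure mapping view ids to the mesh elements they can texture is an assumption for illustration):

```python
# Illustrative two-stage view assignment (data shapes hypothetical).
def assign_views(coverage, canonical_views):
    """coverage: {view_id: set of mesh-element ids the view can texture}."""
    assignment = {}
    # Stage 1: prefer the small canonical subset, covering large
    # contiguous portions from relatively few views.
    for view in canonical_views:
        for elem in coverage.get(view, ()):
            assignment.setdefault(elem, view)
    # Stage 2: fall back to the full set so every element gets a view.
    for view, covered in coverage.items():
        for elem in covered:
            assignment.setdefault(elem, view)
    return assignment
```

The resulting element-to-view mapping is what would be packed, alongside the view images, into the texture atlas.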

METHOD AND SYSTEM FOR MAP TARGET TRACKING
20230215036 · 2023-07-06 ·

A method of tracking a map target according to one embodiment of the present disclosure, which tracks the map target through a map target tracking application executed by at least one processor of a terminal, includes: acquiring a basic image obtained by photographing a 3D space; acquiring a plurality of sub-images obtained by dividing the acquired basic image for respective sub-spaces in the 3D space; creating a plurality of sub-maps based on the plurality of acquired sub-images; determining at least one main key frame for each of the plurality of created sub-maps; creating a 3D main map by combining the plurality of sub-maps for which the at least one main key frame is determined; and tracking current posture information in the 3D space based on the created 3D main map.
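The sub-map construction steps can be sketched schematically (the tuple format for frames and the most-features heuristic for choosing a main key frame are assumptions, not the disclosed criteria):

```python
# Illustrative sketch (structures hypothetical): group captured frames by
# sub-space, pick a main key frame per sub-map, then combine into one map.
def build_main_map(frames):
    """frames: list of (sub_space_id, frame_id, n_features)."""
    sub_maps = {}
    for space, frame, n_feat in frames:
        sub_maps.setdefault(space, []).append((frame, n_feat))
    main_key_frames = {
        space: max(fs, key=lambda f: f[1])[0]   # frame with most features
        for space, fs in sub_maps.items()
    }
    return {"sub_maps": sub_maps, "main_key_frames": main_key_frames}
```

Posture tracking would then localize the current camera pose against the combined map rather than any single sub-map.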

METHOD AND SYSTEM FOR PROVIDING USER INTERFACE FOR MAP TARGET CREATION
20230215092 · 2023-07-06 ·

A method of providing a user interface for map target creation according to one embodiment of the present disclosure, in which a map target application executed by at least one processor of a terminal provides the user interface for map target creation, includes: acquiring an image captured by photographing a 3D space; extracting a key frame of the captured image; detecting feature points in the extracted key frame; generating a 3D map based on the detected feature points; generating object information including class information and object area information for at least one key object in the key frame; mapping the generated object information to the 3D map; displaying the object information mapped on the 3D map; and providing an object function setting interface based on the 3D map.
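The object-information mapping step — attaching class and area information for key objects onto the 3D map — can be sketched as a simple containment test (the flattened 2D point projections and rectangular object areas are assumptions made for illustration):

```python
# Hypothetical sketch of the mapping step: attach class labels for key
# objects in a key frame onto map points that fall inside each object area.
def map_objects(points, objects):
    """points: {point_id: (x, y)} image-plane projections (toy 2D).
    objects: list of {"class": str, "area": (xmin, ymin, xmax, ymax)}."""
    mapping = {}
    for pid, (x, y) in points.items():
        for obj in objects:
            xmin, ymin, xmax, ymax = obj["area"]
            if xmin <= x <= xmax and ymin <= y <= ymax:
                mapping[pid] = obj["class"]
                break
    return mapping
```

A creation interface could then display these labels on the 3D map and let the user attach per-object functions, as the abstract describes.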