Patent classifications
G06T2210/36
Hierarchical point cloud compression with smoothing
A system comprises an encoder configured to compress attribute information for a point cloud and/or a decoder configured to decompress compressed attribute information for the point cloud. To compress the attribute information, multiple levels of detail are generated based on spatial information, and attribute values are predicted based on the levels of detail. A decoder follows a similar prediction process based on the levels of detail. Additionally, attribute correction values may be determined to correct predicted attribute values and may be used by a decoder to decompress a point cloud compressed using level-of-detail attribute compression. In some embodiments, an update operation is performed to smooth the attribute correction values, taking into account the influence of points in a given level of detail on attributes in other levels of detail.
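The predict-then-correct scheme this abstract describes can be sketched as follows. The inverse-distance weighting, the function names, and the level layout are illustrative assumptions, not the patented method; the smoothing update is omitted.

```python
# Minimal sketch of level-of-detail predictive attribute coding.
# Prediction rule (inverse-distance weighting) is an illustrative assumption.

def predict(position, coded_points):
    """Predict an attribute from already-coded points, weighted by 1/distance."""
    weighted, total = 0.0, 0.0
    for pos, attr in coded_points:
        d = sum((a - b) ** 2 for a, b in zip(position, pos)) ** 0.5
        w = 1.0 / (d + 1e-9)
        weighted += w * attr
        total += w
    return weighted / total

def encode(levels):
    """For each level of detail, emit correction values (actual - predicted)."""
    coded, corrections = [], []
    for level in levels:
        for pos, attr in level:
            pred = predict(pos, coded) if coded else 0.0
            corrections.append(attr - pred)
            coded.append((pos, attr))
    return corrections

def decode(levels_positions, corrections):
    """Reconstruct attributes from positions plus correction values."""
    coded, out, i = [], [], 0
    for level in levels_positions:
        for pos in level:
            pred = predict(pos, coded) if coded else 0.0
            attr = pred + corrections[i]
            i += 1
            coded.append((pos, attr))
            out.append(attr)
    return out
```

Because the decoder runs the same prediction over the same ordering, transmitting only the corrections reproduces the attributes.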
SPATIAL PROCESSING FOR MAP GEOMETRY SIMPLIFICATION
A method of simplifying a digital map for display is disclosed. The method comprises receiving a digital map for a geographical region, the digital map being organized into a plurality of raw map tiles associated with a plurality of sub-regions of the geographical region; retrieving configuration data related to human visibility for simplifying the digital map; identifying one or more features from each of the plurality of raw map tiles, each feature corresponding to a cluster of pixels, with at least two features sharing a common pixel; and generating a modified map tile for a particular raw map tile based on the configuration data by assigning, to each common pixel that does not already hold it, the maximum of all values that the overlapping features of the particular raw map tile assign to that pixel.
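The core merge step, taking the maximum over pixels shared by overlapping features, can be sketched like this. The feature representation (a set of pixel coordinates plus a value) is an assumption for illustration.

```python
# Illustrative sketch: pixels shared by multiple features receive the
# maximum of the values those features assign.

def merge_features(features):
    """features: list of (set_of_pixel_coords, value).
    Returns a dict mapping pixel -> merged value, where a pixel covered by
    several features gets the maximum value among them."""
    out = {}
    for pixels, value in features:
        for p in pixels:
            out[p] = max(out.get(p, value), value)
    return out
```

Running it on two features that share pixel (0, 1) assigns that pixel the larger value, which matches the abstract's max-assignment rule.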
Systems and methods for dynamically rendering three-dimensional images with varying detail to emulate human vision
Disclosed is a system and associated methods for dynamically rendering an image with varying detail that emulates human vision and that provides a dynamic resolution or level of detail at each layer of the image that is equal to or greater than the resolvable detail that can be detected by human vision within each layer. The system may adjust a non-linear function based on one or more of a display size, a display resolution, and a viewer distance from a display. The system may determine a dynamic resolution or level of detail for each layer of the image based on the adjusted non-linear function. The system may render the image data at or greater than the dynamic resolution or level of detail determined for each layer.
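A per-layer dynamic resolution driven by viewing geometry might look like the sketch below. The exponential falloff, the 60 pixels-per-degree acuity figure, and all function names are illustrative assumptions standing in for the unspecified non-linear function.

```python
import math

# Hedged sketch: per-layer dynamic resolution floored at an estimate of
# humanly resolvable detail. Falloff shape and acuity constant are assumptions.

def resolvable_ppd(viewer_distance_m, display_width_m, display_width_px):
    """Pixels per degree the display offers at the viewer's distance."""
    deg = 2 * math.degrees(math.atan(display_width_m / (2 * viewer_distance_m)))
    return display_width_px / deg

def layer_resolution(layer_index, base_ppd, falloff=0.15):
    """Non-linear per-layer detail: exponential falloff with layer depth,
    never dropping below a nominal acuity limit (~60 px/deg here)."""
    acuity_floor = min(60.0, base_ppd)
    return max(acuity_floor, base_ppd * math.exp(-falloff * layer_index))
```

Adjusting `falloff` based on display size, resolution, and viewer distance plays the role of the adjusted non-linear function, and the floor keeps every layer at or above resolvable detail.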
Geometry-aware encoding of 2D elements
Techniques for a texture modification feature are described herein. First data identifying a position and a distance of an actor in an environment from a view point is obtained. The actor may correspond to a mesh comprised of a plurality of triangles. Second data identifying a location, an angle, and a size for each triangle of the plurality of triangles with respect to a spectrum of pre-defined viewpoints is obtained. A value for each triangle may be determined based on the first data and the second data. The value may represent a level of detail to optimize viewing of each triangle of the actor from a corresponding viewpoint of the spectrum of pre-defined viewpoints. One or more areas of a texture that corresponds to the mesh may be modified, based on the associated values for triangles of the mesh, prior to applying the texture to the mesh.
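A per-triangle detail value combining distance, facing angle, and size might be computed as below. The specific weighting is an assumption for illustration, not the patented rule.

```python
import math

# Illustrative per-triangle detail score: triangles that are closer, larger,
# and more directly facing the viewpoint get a higher score, so more texture
# detail is retained for them. Weighting is an assumption.

def triangle_lod(distance, angle_deg, area):
    """Higher score -> more texture detail kept for this triangle.
    angle_deg is the angle between the triangle normal and the view direction;
    90 degrees means the triangle is edge-on and contributes almost nothing."""
    facing = max(0.0, math.cos(math.radians(angle_deg)))
    return facing * area / max(distance, 1e-6) ** 2
```

A texture-modification pass could then reduce resolution (or skip filtering) in texture areas mapped to triangles whose score falls below a threshold.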
OPERATIONS USING SPARSE VOLUMETRIC DATA
A volumetric data structure models a particular volume at a plurality of levels of detail. A first entry in the volumetric data structure includes a first set of bits representing voxels at a first level of detail, the lowest level of detail in the structure; the values of the first set of bits indicate whether a corresponding voxel is at least partially occupied by respective geometry. The volumetric data structure further includes a number of second entries representing voxels at a second, higher level of detail. The voxels at the second level of detail represent subvolumes of the volumes represented by voxels at the first level of detail, and the number of second entries corresponds to the number of bits in the first set whose values indicate that the corresponding voxel volume is occupied.
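The key structural invariant, one second-level entry per occupied bit in the first-level mask, can be sketched with a plain integer bitmask. The 4x4x4 block size per entry is an illustrative assumption.

```python
# Sketch of a two-level sparse volumetric entry using a 64-bit occupancy
# mask (a 4x4x4 voxel block per entry, an assumed block size). The number
# of second-level entries equals the popcount of the first-level mask.

def child_entry_count(first_level_bits: int) -> int:
    """Number of second-level entries = popcount of the first-level mask."""
    return bin(first_level_bits).count("1")

def is_occupied(mask: int, x: int, y: int, z: int, dim: int = 4) -> bool:
    """Test occupancy of voxel (x, y, z) in a dim^3 bitmask,
    laid out x-fastest (bit index = x + dim*(y + dim*z))."""
    return bool(mask >> (x + dim * (y + dim * z)) & 1)
```

Because only occupied first-level voxels spawn second-level entries, empty space costs a single zero bit rather than a full child block, which is what makes the representation sparse.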
VIRTUAL REALITY SYSTEM FOR VIEWING POINT CLOUD VOLUMES WHILE MAINTAINING A HIGH POINT CLOUD GRAPHICAL RESOLUTION
A virtual reality (VR) system that includes a three-dimensional (3D) point cloud having a plurality of points, a VR viewer having a current position, a graphics processing unit (GPU), and a central processing unit (CPU). The CPU determines a field-of-view (FOV) based at least in part on the current position of the VR viewer, selects, using occlusion culling, a subset of the points based at least in part on the FOV, and provides the subset to the GPU. The GPU receives the subset of the plurality of points from the CPU and renders an image for display on the VR viewer based at least in part on the received subset. The selection of the subset of the plurality of points occurs at a first frames-per-second (FPS) rate, and the rendering occurs at a second FPS rate that is faster than the first FPS rate.
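The CPU-side selection step might be sketched as a view-cone test like the one below. This is only a stand-in: real occlusion culling also discards points hidden behind others, which is omitted here, and all names are assumptions.

```python
import math

# Illustrative CPU-side point selection: keep points inside a view cone
# derived from the viewer's position and forward direction. A stand-in for
# the FOV + occlusion-culling selection; occlusion itself is omitted.

def select_fov(points, eye, forward, half_angle_deg):
    """Return points whose direction from `eye` is within `half_angle_deg`
    of the (unit-length) `forward` vector."""
    cos_t = math.cos(math.radians(half_angle_deg))
    kept = []
    for p in points:
        v = tuple(a - b for a, b in zip(p, eye))
        n = math.sqrt(sum(c * c for c in v))
        if n == 0:
            continue  # point coincides with the eye; skip
        if sum(a * b for a, b in zip(v, forward)) / n >= cos_t:
            kept.append(p)
    return kept
```

Running this selection at a low rate while the GPU re-renders the last selected subset at a higher rate matches the two-rate design in the abstract: the expensive culling lags, but displayed frames stay fast.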
Point cloud attribute transfer algorithm
A system comprises an encoder configured to compress attribute and/or spatial information for a point cloud and/or a decoder configured to decompress compressed attribute and/or spatial information for the point cloud. A point cloud attribute transfer algorithm may be used to determine distortion between an original point cloud and a reconstructed point cloud. Additionally, the point cloud attribute transfer algorithm may be used to select attribute values for a reconstructed point cloud such that distortion between the original point cloud and the reconstructed version of the original point cloud is minimized.
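A simple distortion-minimizing transfer is to give each reconstructed point the attribute of its nearest original point, sketched below. The nearest-neighbor rule and brute-force search are illustrative assumptions standing in for the algorithm's actual selection criterion and a spatial index.

```python
# Hedged sketch of attribute transfer: each reconstructed point takes the
# attribute of the nearest original point. Brute force stands in for a k-d
# tree or similar spatial index.

def transfer_attributes(original, reconstructed):
    """original: list of (position, attribute) pairs.
    reconstructed: list of positions. Returns one attribute per
    reconstructed position, copied from the closest original point."""
    def nearest(q):
        return min(original,
                   key=lambda pa: sum((a - b) ** 2 for a, b in zip(pa[0], q)))
    return [nearest(q)[1] for q in reconstructed]
```

The same nearest-neighbor pairing can be reused in the other direction to measure distortion, e.g. by summing attribute differences over matched pairs.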
Spatial processing for map geometry simplification
A computer system and related computer-implemented methods are disclosed. The system is programmed to simplify one or more digital maps for a geographical region by reducing their sizes while maintaining their physical appearance to the human eye.
GENERATING MODIFIED DIGITAL IMAGES UTILIZING A GLOBAL AND SPATIAL AUTOENCODER
The present disclosure relates to systems, methods, and non-transitory computer readable media for generating a modified digital image from extracted spatial and global codes. For example, the disclosed systems can utilize a global and spatial autoencoder to extract spatial codes and global codes from digital images. The disclosed systems can further utilize the global and spatial autoencoder to generate a modified digital image by combining extracted spatial and global codes in various ways for various applications such as style swapping, style blending, and attribute editing.
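The style-swapping application, combining the spatial code of one image with the global code of another, can be illustrated with toy stand-ins for the autoencoder. The `encode`/`decode` functions below are deliberately trivial placeholders, not a trained network; only the recombination pattern reflects the abstract.

```python
# Conceptual sketch of style swapping with a global/spatial autoencoder.
# encode/decode are toy stand-ins: "spatial code" = layout pattern,
# "global code" = mean intensity. A real system uses learned networks.

def encode(img):
    """Split a 1-D toy image into a spatial code and a global code."""
    mean = sum(img) / len(img)
    spatial = [1 if p >= mean else 0 for p in img]  # layout: above/below mean
    return spatial, mean

def decode(spatial, global_code):
    """Recombine: layout from the spatial code, intensity from the global code."""
    return [global_code * (1.5 if s else 0.5) for s in spatial]

def style_swap(img_content, img_style):
    """Keep layout from the content image, take style from the style image."""
    spatial_a, _ = encode(img_content)
    _, global_b = encode(img_style)
    return decode(spatial_a, global_b)
```

Style blending would interpolate between the two global codes instead of replacing one outright, and attribute editing would modify selected components of a code before decoding.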