Patent classifications
G06T15/40
Tessellating patches of surface data in tile-based computer graphics rendering
A method and system for culling a patch of surface data from one or more tiles in a tile-based computer graphics system. A rendering space is divided into a plurality of tiles and a patch of surface data is read. At least a portion of the patch is then analysed to determine data representing a bounding depth value evaluated over at least one tile. This may comprise tessellating the patch of surface data to derive a plurality of tessellated primitives and analysing at least some of the tessellated primitives. For each tile within which the patch is located, the data representing the bounding depth value is then used to determine whether the patch is hidden in the tile, and at least a portion of the patch is rendered if the patch is determined not to be hidden in at least one tile.
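The abstract does not disclose an implementation, but the per-tile hidden test it describes can be sketched as follows. This is a hypothetical illustration, assuming depth increases away from the camera, that each tile stores the farthest depth of occluders already known to cover it, and that the tessellated primitives have been reduced to (tile_id, depth) samples; all names are invented for this sketch.

```python
# Hypothetical sketch of the per-tile patch-culling test described above.
# Assumes larger depth values are farther from the camera.

def patch_hidden_in_tile(patch_min_depth, tile_max_occluder_depth):
    """The patch is hidden in a tile if even its nearest point lies at or
    behind the farthest depth of the occluders covering that tile."""
    return patch_min_depth >= tile_max_occluder_depth

def cull_patch(tessellated_samples, tiles):
    """tessellated_samples: (tile_id, depth) pairs derived by tessellating
    the patch.  tiles: dict mapping tile_id -> max occluder depth.
    Returns the set of tiles in which the patch is not hidden."""
    # Bounding depth value per tile: the minimum depth the patch reaches.
    min_depth = {}
    for tile_id, depth in tessellated_samples:
        min_depth[tile_id] = min(depth, min_depth.get(tile_id, float("inf")))
    return {t for t, d in min_depth.items()
            if t in tiles and not patch_hidden_in_tile(d, tiles[t])}
```

With this sketch, a patch is rendered if the returned set is non-empty, matching the "not hidden in at least one tile" condition above.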
VIEWABILITY TESTING IN A COMPUTER-GENERATED ENVIRONMENT
A system configured to determine an extent to which an object in a computer-generated scene is visible from a virtual camera, including a rendering engine comprising a depth buffer and arranged to render the computer-generated scene, and a viewability testing module. The viewability testing module is configured to: generate a plurality of points distributed across a surface of the object; determine a depth map value for each point within a field of view of the virtual camera; determine whether each such point is visible from the perspective of the virtual camera based on a comparison between the determined depth map value for the point and a corresponding one or more depth map values stored in the depth buffer; and determine the extent to which the object is visible in dependence on which of the plurality of points are determined to be visible from the perspective of the virtual camera.
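The point-sampling test described in this abstract can be illustrated with a minimal sketch. Assumptions not in the abstract: points are already projected to integer screen coordinates with a depth, the depth buffer is modelled as a dict from pixel to nearest rendered depth, and a small epsilon absorbs precision error; the function name is hypothetical.

```python
def visible_fraction(points, depth_buffer, eps=1e-3):
    """points: (x, y, depth) samples distributed over the object's surface,
    in screen space.  depth_buffer: dict mapping (x, y) -> nearest rendered
    depth.  A point counts as visible if its depth is not farther than the
    stored depth at its pixel; points outside the buffer are out of view."""
    in_view = [p for p in points if (p[0], p[1]) in depth_buffer]
    if not in_view:
        return 0.0
    visible = sum(1 for x, y, d in in_view
                  if d <= depth_buffer[(x, y)] + eps)
    # Extent of visibility: fraction of in-view sample points that pass.
    return visible / len(in_view)
```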
Virtual reality system for viewing point cloud volumes while maintaining a high point cloud graphical resolution
A virtual reality (VR) system that includes a three-dimensional (3D) point cloud having a plurality of points, a VR viewer having a current position, a graphics processing unit (GPU), and a central processing unit (CPU). The CPU determines a field-of-view (FOV) based at least in part on the current position of the VR viewer, selects, using occlusion culling, a subset of the points based at least in part on the FOV, and provides them to the GPU. The GPU receives the subset of the plurality of points from the CPU and renders an image for display on the VR viewer based at least in part on the received subset of the plurality of points. The selection of the subset of points occurs at a first frames-per-second (FPS) rate, and the rendering occurs at a second FPS rate that is faster than the first.
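The FOV-based selection step in this abstract can be sketched as a simple angular test. This is only an illustration under assumed conventions (the camera's forward vector is unit length, and the occlusion-culling stage the patent also describes is omitted); the function name is invented here.

```python
import math

def select_points_in_fov(points, cam_pos, cam_forward, fov_degrees):
    """Crude FOV culling sketch: keep points whose direction from the
    camera lies within half the field-of-view angle of the view axis.
    cam_forward is assumed to be a unit vector."""
    cos_half_fov = math.cos(math.radians(fov_degrees) / 2.0)
    selected = []
    for p in points:
        v = [p[i] - cam_pos[i] for i in range(3)]
        norm = math.sqrt(sum(c * c for c in v))
        if norm == 0.0:
            continue  # point coincides with the camera
        cos_angle = sum(v[i] * cam_forward[i] for i in range(3)) / norm
        if cos_angle >= cos_half_fov:
            selected.append(p)
    return selected
```

In the system described above, a selection like this would run at the slower first FPS rate on the CPU, with the GPU rendering the most recent subset at the faster second rate.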
Image occlusion processing method, device, apparatus and computer storage medium
This disclosure provides a method and apparatus for processing occlusion in an image, a device, and a computer storage medium. The method includes: determining a current viewpoint parameter used for drawing a current image frame; obtaining a predicted depth map matching the current viewpoint parameter as a target depth map of the current image frame; and determining an occlusion culling result of an object in the current image frame according to the target depth map.
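The final step of this method, testing objects against the predicted (target) depth map, can be sketched as below. Assumptions not in the abstract: each object is reduced to its nearest depth at a representative pixel, and the depth map is modelled as a dict from pixel to predicted occluder depth; the names are hypothetical.

```python
def occlusion_cull(objects, target_depth_map, eps=1e-3):
    """objects: dict name -> ((x, y), nearest_depth).  target_depth_map:
    dict (x, y) -> predicted occluder depth for the current viewpoint.
    Returns dict name -> True if the object is occlusion-culled."""
    result = {}
    for name, (pixel, depth) in objects.items():
        occluder_depth = target_depth_map.get(pixel, float("inf"))
        # Culled if even the object's nearest depth lies behind the
        # predicted depth at that pixel.
        result[name] = depth > occluder_depth + eps
    return result
```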
Caching and updating of dense 3D reconstruction data
A method to efficiently update and manage outputs of real-time or offline 3D reconstruction and scanning on a mobile device with limited resources and limited connection to the Internet is provided. The method makes available to a wide variety of mobile XR applications fresh, accurate and comprehensive 3D reconstruction data, in either single-user applications or multi-user applications sharing and updating the same 3D reconstruction data. The method includes a block-based 3D data representation that allows local update while maintaining neighbor consistency, and a multi-layer caching mechanism that retrieves, prefetches, and stores 3D data efficiently for XR applications. Between sessions of an XR device, blocks may be persisted on the device or in remote storage in one or more cache layers. The device may, upon starting a new session, selectively use the blocks from one or more layers of the cache.
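The multi-layer cache behaviour described here can be illustrated with a minimal two-layer sketch: a fast in-memory layer backed by a slower persistent layer (modelled as a plain dict standing in for on-device or remote storage). This is an assumed simplification, not the patented mechanism, and omits prefetching and eviction; the class and method names are invented.

```python
class BlockCache:
    """Minimal two-layer cache sketch for block-based 3D reconstruction
    data: a session-local memory layer over a persistent backing store."""

    def __init__(self, backing_store):
        self.memory = {}            # fast, session-local layer
        self.backing = backing_store  # slower layer persisted between sessions

    def get(self, block_id):
        if block_id in self.memory:
            return self.memory[block_id]
        block = self.backing.get(block_id)
        if block is not None:
            self.memory[block_id] = block  # promote into the faster layer
        return block

    def put(self, block_id, block):
        self.memory[block_id] = block
        self.backing[block_id] = block     # persist for future sessions
```

On starting a new session, a device following this sketch would begin with an empty memory layer and selectively repopulate it from the backing layer on demand.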
Systems, methods, and media for generating visualization of physical environment in artificial reality
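The occlusion decision at the end of this abstract reduces, per pixel, to comparing the virtual object's depth against one of the two representative depths. The sketch below is a hypothetical illustration assuming smaller depth means nearer the camera; the function name and labels are invented here.

```python
def composite_pixel(pixel_kind, virtual_depth, object_depth, boundary_depth):
    """Per-pixel occlusion sketch: pixels classified as depicting the
    physical object use its representative depth, while padded-boundary
    pixels use a farther representative depth, so virtual content near
    the object's silhouette tends to win the depth test."""
    physical_depth = object_depth if pixel_kind == "object" else boundary_depth
    return "physical" if physical_depth < virtual_depth else "virtual"
```

Because boundary_depth is farther than object_depth, the same virtual object can be occluded by an "object" pixel yet drawn over a neighbouring "boundary" pixel, which is the behaviour the two representative depth values are meant to produce.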
In one embodiment for generating passthrough, a system may receive an image and depth measurements of an environment and generate a corresponding 3D model. The system identifies, in the image, first pixels depicting a physical object and second pixels corresponding to a padded boundary around the first pixels. The system associates the first pixels with a first portion of the 3D model representing the physical object and a first representative depth value computed based on the depth measurements. The system associates the second pixels with a second portion of the 3D model representing a region around the physical object and a second representative depth value farther than the first representative depth value. The system renders an output image depicting a virtual object and the physical object. Occlusions between the virtual object and the physical object are determined using the first representative depth value and the second representative depth value.