Patent classifications
G06T15/405
Hidden culling in tile-based computer generated images
A method and system are provided for culling hidden objects in a tile-based graphics system before they are indicated in a display list for a tile. A rendering space is divided into a plurality of regions, which may, for example, be a plurality of tiles or a plurality of areas into which one or more tiles are divided. Depth thresholds for the regions, which are used to identify hidden objects for culling, are updated when an object entirely covers a region, in dependence on a comparison between a depth value for the object and the depth threshold for the region. For example, if the depth threshold is a maximum depth threshold, it may be updated when an object entirely covers the region and the maximum depth value of the object is less than the maximum depth threshold.
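The threshold update and the resulting cull test can be sketched as follows. This is a minimal illustration, assuming the usual convention that smaller depth values are closer to the viewer; the function names are not the patent's terminology.

```python
def update_max_depth_threshold(threshold, object_covers_region, object_max_depth):
    """Tighten a region's maximum-depth threshold when an object entirely
    covers the region and its maximum depth is less than the threshold."""
    if object_covers_region and object_max_depth < threshold:
        return object_max_depth
    return threshold

def is_hidden(object_min_depth, threshold):
    """An object whose nearest point is deeper than the region's threshold
    lies entirely behind a covering object and can be culled."""
    return object_min_depth > threshold
```

A later object is then tested against the current threshold before it is added to the tile's display list.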
Systems and methods for training a machine-learning-based monocular depth estimator
Systems and methods described herein relate to training a machine-learning-based monocular depth estimator. One embodiment selects a virtual image in a virtual dataset, the virtual dataset including a plurality of computer-generated virtual images; generates, from the virtual image in accordance with virtual-camera intrinsics, a point cloud in three-dimensional space based on ground-truth depth information associated with the virtual image; reprojects the point cloud back to two-dimensional image space in accordance with real-world camera intrinsics to generate a transformed virtual image; and trains the machine-learning-based monocular depth estimator, at least in part, using the transformed virtual image.
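The back-projection and re-projection steps can be sketched in a few lines of NumPy. This is an illustrative sketch, not the patented training pipeline: the function name is invented, and it returns the re-projected pixel coordinates rather than a resampled image.

```python
import numpy as np

def transfer_intrinsics(depth, K_virtual, K_real):
    """Back-project a ground-truth depth map using virtual-camera intrinsics
    K_virtual into a 3-D point cloud, then re-project the points using
    real-world intrinsics K_real. Returns per-pixel (u, v) coordinates."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3 x N
    # Back-project to 3-D points: X = K_virtual^{-1} * pixel * depth.
    pts = np.linalg.inv(K_virtual) @ pix * depth.reshape(1, -1)
    # Re-project with the real-world intrinsics and dehomogenize.
    proj = K_real @ pts
    uv = proj[:2] / proj[2]
    return uv.T.reshape(h, w, 2)
```

When `K_virtual == K_real` the mapping is the identity on pixel coordinates, which is a convenient sanity check; with differing intrinsics the output coordinates define the transformed virtual image used for training.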
VIEWPOINT DEPENDENT BRICK SELECTION FOR FAST VOLUMETRIC RECONSTRUCTION
A method for culling parts of a 3D reconstruction volume is provided. The method makes fresh, accurate, and comprehensive 3D reconstruction data available to a wide variety of mobile XR applications with low use of computational resources and storage space. The method includes culling parts of the 3D reconstruction volume against a depth image. The depth image has a plurality of pixels, each of which represents a distance to a surface in a scene. In some embodiments, the method includes culling parts of the 3D reconstruction volume against a frustum. The frustum is derived from the field of view of the image sensor from which the image data used to create the 3D reconstruction is obtained.
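Culling a volume brick against a depth image amounts to checking whether the brick lies entirely behind the observed surfaces. A minimal sketch, assuming smaller depths are closer and that the caller supplies the pixels the brick projects onto (both the parameter names and the optional margin are illustrative):

```python
def cull_brick_against_depth(brick_min_depth, covering_pixels, depth_image, margin=0.0):
    """Cull the brick when every depth pixel it projects onto is closer than
    the brick's nearest point, i.e. observed surfaces fully occlude it."""
    return all(depth_image[v][u] + margin < brick_min_depth
               for (u, v) in covering_pixels)
```

Frustum culling would be the complementary test: a brick wholly outside the sensor's viewing frustum is skipped regardless of depth.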
CULLING OBJECTS FROM A 3-D GRAPHICS PIPELINE USING HIERARCHICAL Z BUFFERS
A shader in a graphics pipeline accesses an object that represents a portion of a model of a scene in object space and one or more far-z values that indicate a furthest distance of a previously rendered portion of one or more tiles from a viewpoint used to render the scene on a screen. The one or more tiles overlap a bounding box of the object in a plane of the screen. The shader culls the object from the graphics pipeline in response to the one or more far-z values being smaller than a near-z value that represents a closest distance of a portion of the object to the viewpoint.
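The core occlusion test reduces to a comparison between the tiles' far-z values and the object's near-z. A minimal sketch of that test, with larger z assumed to be farther from the viewpoint (names are illustrative):

```python
def cull_against_hierarchical_z(tile_far_z_values, object_near_z):
    """Cull the object when every overlapped tile's far-z is smaller than the
    object's near-z: geometry already rendered into those tiles is entirely
    closer to the viewpoint, so the object cannot be visible."""
    return all(far_z < object_near_z for far_z in tile_far_z_values)
```

If even one overlapped tile has a far-z beyond the object's nearest point, the object may be visible there and must continue down the pipeline.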
SYSTEMS AND METHODS FOR DYNAMIC OCCLUSION HANDLING
A computing system includes a processing system with at least one processing unit. The processing system is configured to receive a depth map with a first boundary of an object. The processing system is configured to receive a color image that corresponds to the depth map. The color image includes a second boundary of the object. The processing system is configured to extract depth edge points of the first boundary from the depth map. The processing system is configured to identify target depth edge points on the depth map. The target depth edge points correspond to color edge points of the second boundary of the object in the color image. In addition, the processing system is configured to snap the depth edge points to the target depth edge points such that the depth map is enhanced with an object boundary for the object.
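The snapping step can be illustrated with a plain nearest-neighbour assignment; this is an assumption for illustration, as the patent does not specify the matching rule:

```python
def snap_edge_points(depth_edge_points, target_edge_points):
    """Replace each depth edge point with its nearest target point (the
    corresponding color-image edge location), sharpening the object
    boundary in the depth map."""
    return [min(target_edge_points,
                key=lambda t: (t[0] - p[0]) ** 2 + (t[1] - p[1]) ** 2)
            for p in depth_edge_points]
```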
Single pass surface splatting
At least one processor may retrieve from a framebuffer a depth value associated with a pixel. The at least one processor may determine whether a fragment depth value associated with a fragment of a splat is within a non-zero offset of the depth value associated with the pixel. Responsive to determining that the fragment depth value associated with the fragment of the splat is within the non-zero offset of the depth value associated with the pixel, the at least one processor may output updated data for the pixel to the framebuffer based at least in part on data associated with the fragment of the splat.
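The per-fragment decision described above is a depth-band test rather than a strict depth comparison. A minimal sketch, assuming fragments within the band belong to the same splatted surface and should be accumulated (names are illustrative):

```python
def splat_fragment_accumulates(framebuffer_depth, fragment_depth, offset):
    """Accept a splat fragment for accumulation when its depth lies within
    the non-zero offset of the depth already stored for the pixel."""
    return abs(fragment_depth - framebuffer_depth) <= offset
```

Fragments outside the band are treated as belonging to a different surface, so the pixel is not blended with them in the same pass.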
Light field displays having synergistic data formatting, re-projection, foveation, tile binning and image warping technology
Systems, methods and apparatuses may provide for technology to reduce rendering overhead associated with light field displays. The technology may conduct data formatting, re-projection, foveation, tile binning and/or image warping operations with respect to a plurality of display planes in a light field display.
MULTIPLE-PASS RENDERING OF A DIGITAL THREE-DIMENSIONAL MODEL OF A STRUCTURE
A method is provided for rendering a scene including a digital three-dimensional (3D) model of a structure. The method includes traversing a scene graph composed of a hierarchical group of nodes representing respective 3D objects of the digital 3D model, and selecting nodes of the hierarchical group of nodes. The method includes adding a plurality of 3D objects represented by the selected nodes to a render queue and performing a multiple-pass rendering of the plurality of 3D objects from the render queue. This includes, in a pass of a plurality of passes, rendering a threshold portion, but not all, of the plurality of 3D objects to a framebuffer for output to a display device, with at least one of the plurality of 3D objects being left in the render queue after rendering the threshold portion. The method may also include mesh simplification and/or z-occlusion.
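The budgeted, multiple-pass draining of the render queue can be sketched as follows; the per-pass threshold and function names are illustrative, and a real renderer would present the framebuffer between passes:

```python
def render_in_passes(render_queue, threshold, render_fn):
    """Render at most `threshold` queued objects per pass, leaving the rest
    in the queue so partial output can reach the display between passes.
    Returns the number of passes taken."""
    passes = 0
    while render_queue:
        batch, render_queue = render_queue[:threshold], render_queue[threshold:]
        for obj in batch:
            render_fn(obj)
        passes += 1
    return passes
```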
Apparatus and method for controlling early depth processing and post depth processing
A depth processing apparatus includes a depth buffer, an early depth processing circuit, a post depth processing circuit, and a depth processing controller. The depth buffer stores depth information of a plurality of pixels of a screen space. The early depth processing circuit performs early depth processing based on at least a portion of the depth information before a pixel shading stage. The post depth processing circuit performs post depth processing based on at least a portion of the depth information after the pixel shading stage. The depth processing controller manages a plurality of dependency indication values corresponding respectively to a plurality of sub-regions in the screen space, and controls whether a first pixel undergoes the early depth processing and/or the post depth processing by referring to a first dependency indication value of the first sub-region in which the first pixel is located.
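One plausible reading of the per-sub-region dependency values is a counter of in-flight work that may still change depth in that sub-region. The sketch below assumes that interpretation, which is an illustration rather than the patent's definition: a non-zero count forces later pixels in the sub-region to defer their depth test until after shading.

```python
class DepthProcessingController:
    """Illustrative controller: a non-zero dependency count for a sub-region
    means earlier pixels there may still modify depth, so later pixels must
    use post depth processing instead of early depth processing."""
    def __init__(self, num_subregions):
        self.dependency = [0] * num_subregions

    def begin_depth_writing_pixel(self, region):
        self.dependency[region] += 1

    def end_depth_writing_pixel(self, region):
        self.dependency[region] -= 1

    def mode_for(self, region):
        return "post" if self.dependency[region] > 0 else "early"
```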
Using tiling depth information in hidden surface removal in a graphics processing system
A graphics processing system includes a tiling unit for performing tiling calculations and a hidden surface removal (HSR) unit for performing HSR on fragments of the primitives. Primitive depth information is calculated in the tiling unit and forwarded for use by the HSR unit in performing HSR on the fragments. This takes advantage of the fact that the tiling unit has access to the primitive data before the HSR unit processes the primitives, allowing some depth information to be determined that can simplify the HSR performed by the HSR unit. Therefore, the final values of a depth buffer determined in the tiling unit can be used in the HSR unit to determine that a particular fragment will subsequently be hidden by a fragment of a primitive yet to be processed in the HSR unit, so that the particular fragment can be culled.
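Because the tiling unit sees all of a tile's primitives before the HSR unit does, its final depth value at a sample is the depth of the eventual visible surface there. The per-fragment test this enables is a single comparison; a minimal sketch, assuming smaller depths are closer (names are illustrative):

```python
def cull_with_tiling_depth(fragment_depth, final_tile_depth):
    """Cull a fragment deeper than the final depth the tiling unit computed
    for its sample position: a primitive yet to be processed by the HSR
    unit will hide it anyway."""
    return fragment_depth > final_tile_depth
```

This lets the HSR unit discard fragments early that a conventional front-to-back depth test would only reject after the occluding primitive arrives.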