Patent classifications
G06T15/405
DYNAMIC LOW-RESOLUTION Z TEST SIZES
A graphics processing unit (GPU) may perform a binning pass to determine primitive-tile intersections for a plurality of primitives and a plurality of tiles making up a graphical scene, including performing low-resolution z-culling of representations of the plurality of primitives based at least in part on a first set of culling z-values each having a first test size to determine a first set of visible primitives from the plurality of primitives. The GPU may further perform a rendering pass to render the plurality of tiles based at least in part on performing low-resolution z-culling of representations of the first set of visible primitives based at least in part on a second set of culling z-values each having a second test size to determine a second set of visible primitives from the first set of visible primitives, wherein the first test size is greater than the second test size.
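As a rough illustration (not the patented implementation), the two-size culling idea can be sketched in Python: a "culling z-value" is a conservative per-tile maximum depth, and a primitive survives a pass if its nearest depth could still be visible in some covered tile. All names here are hypothetical.

```python
import numpy as np

def build_culling_z(depth, tile):
    """Conservative per-tile max depth (a low-resolution culling z-buffer)."""
    h, w = depth.shape
    th, tw = h // tile, w // tile
    return depth[:th * tile, :tw * tile].reshape(th, tile, tw, tile).max(axis=(1, 3))

def z_cull(prims, culling_z, tile):
    """Keep primitives that could still be visible in some covered tile.
    Each primitive is (x0, y0, x1, y1, zmin); smaller z = closer."""
    visible = []
    for prim in prims:
        x0, y0, x1, y1, zmin = prim
        block = culling_z[y0 // tile:y1 // tile + 1, x0 // tile:x1 // tile + 1]
        if (zmin <= block).any():   # not behind the stored depth everywhere
            visible.append(prim)
    return visible
```

A binning pass would run `z_cull` with a large `tile` (first, coarser test size) and the rendering pass would re-run it with a smaller `tile` (second, finer test size) on the survivors.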
Apparatus and method for depth-based image scaling of 3D visual content
A system for performing depth-based scaling of 3D content. The system comprises: 1) a content source configured to provide an input image comprising a plurality of input image objects; and 2) a processor configured to receive the input image and to receive a depth map comprising depth data associated with each of the plurality of input image objects. The processor generates an output image comprising a plurality of output image objects, wherein each of the plurality of output image objects corresponds to one of the plurality of input image objects. The processor scales a size of a first output image object relative to a size of a second output image object based on depth data associated with the first output image object and the second output image object.
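The relative-scaling step can be approximated with a simple perspective-style rule: an object's drawn size is inversely proportional to its depth. This is a minimal sketch under that assumption, not the system's actual scaling function.

```python
def depth_scale(size, depth, ref_depth=1.0):
    """Perspective-style scaling: the farther the object (larger depth value),
    the smaller it is drawn relative to an object at ref_depth."""
    return size * ref_depth / depth
```

Two objects of equal input size at depths 2.0 and 4.0 would then be output at sizes in a 2:1 ratio, realizing the "scale relative to another object based on depth data" behavior.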
DEPTH BUFFER DILATION FOR REMOTE RENDERING
Techniques for improving remote rendering and reprojection are disclosed herein. A color image is generated, where this color image includes overlapping content regions. A depth buffer is generated for the color image and includes depth values for the pixels in the color image. The depth buffer includes both essential and non-essential depth discontinuities. While preserving the essential depth discontinuities, the non-essential depth discontinuities are eliminated from the depth buffer. New non-essential discontinuities are prevented from being included in the final version of the depth buffer. The color image is encoded into a color image video stream, and the modified depth buffer is encoded into a depth buffer stream. The color image video stream and the depth buffer stream are then transmitted to a remotely located HMD. The HMD then reprojects the color image based on the depth values in the depth buffer.
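One way to picture "eliminate non-essential discontinuities while preserving essential ones" is a neighborhood filter that smooths a depth pixel only when the local depth jump is below an essentiality threshold. This is a hedged sketch of that classification idea, not the patented dilation algorithm; the threshold and 3x3 window are assumptions.

```python
import numpy as np

def smooth_nonessential(depth, essential_thresh):
    """Suppress small (non-essential) depth discontinuities while leaving
    large (essential) jumps untouched. Simple 3x3-neighborhood sketch."""
    out = depth.copy()
    h, w = depth.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            block = depth[y - 1:y + 2, x - 1:x + 2]
            if block.max() - block.min() < essential_thresh:
                out[y, x] = block.mean()   # minor jump: average it away
    return out
```

A real pipeline would run this (or a morphological dilation) before encoding the depth buffer stream, so the HMD's reprojection does not amplify noise at spurious edges.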
OCCLUSION QUERY APPARATUS AND METHOD FOR ACCELERATED RENDERING
An apparatus and method are described for occlusion queries for accelerated rendering. For example, one embodiment of a method comprises: performing an occlusion query for a plurality of tiles of an image, the occlusion query to determine whether one or more of the tiles are occluded; generating a bit mask in response to the occlusion query, the bit mask comprising data indicating which of the tiles are occluded; and reading the bit mask when rendering the image to remove work associated with those tiles which are occluded.
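The bit-mask mechanism described above is straightforward to sketch: pack one occlusion bit per tile, then consult the mask during rendering to skip occluded tiles. Function names are illustrative, not from the patent.

```python
def make_occlusion_mask(occluded_flags):
    """Pack per-tile occlusion query results into a bit mask
    (bit i set = tile i is occluded)."""
    mask = 0
    for i, occluded in enumerate(occluded_flags):
        if occluded:
            mask |= 1 << i
    return mask

def tiles_to_render(num_tiles, mask):
    """Read the mask when rendering: drop work for occluded tiles."""
    return [i for i in range(num_tiles) if not (mask >> i) & 1]
```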
PLANAR DEPTH REPRESENTATIONS FOR BLOCK COMPRESSION
Described herein are technologies that facilitate high precision and resolution of depth (Z) buffer storage during the process of rendering 3D scenes. More particularly, during the interpolation, encoding, and/or storing processes in a graphics pipeline for rendering the 3D scenes, a particular depth (Z) plane representation is configured to support un-normalized and floating-point depth formats that may be used to store Z values to the Z buffer storage.
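A planar depth representation for a block typically means storing the coefficients of z = a·x + b·y + c instead of per-pixel depths; a planar block then reconstructs exactly at full floating-point precision. This sketch fits and evaluates such a plane with a least-squares solve; it illustrates the representation, not the patent's encoder.

```python
import numpy as np

def encode_block_plane(zblock):
    """Fit z = a*x + b*y + c over a block of depth samples (least squares)."""
    h, w = zblock.shape
    ys, xs = np.mgrid[0:h, 0:w]
    A = np.column_stack([xs.ravel(), ys.ravel(), np.ones(h * w)])
    coeffs, *_ = np.linalg.lstsq(A, zblock.ravel(), rcond=None)
    return coeffs  # (a, b, c) — storable as un-normalized floats

def decode_block_plane(coeffs, h, w):
    """Reconstruct the block's depth values from the plane coefficients."""
    a, b, c = coeffs
    ys, xs = np.mgrid[0:h, 0:w]
    return a * xs + b * ys + c
```

Because triangles interpolate depth linearly, blocks covered by a single triangle compress to three coefficients with no precision loss.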
Automultiscopic display with viewpoint tracking and scalability for multiple views
In one aspect, a computer-implemented method for efficiently rendering and displaying multiple images on an electronic device having an automultiscopic display may generally include detecting, with the electronic device, a position of at least one eye relative to the automultiscopic display. The automultiscopic display may include an array of multipixels, with each multipixel including a plurality of sub-multipixels. In addition, the method may include rendering a viewpoint-specific image for each detected eye position and selectively coloring at least one sub-multipixel within one or more of the multipixels such that colors associated with the rendered viewpoint-specific image are only displayed within a multipixel display zone defined for each of the one or more multipixels with respect to each detected eye position.
Primitive Processing in a Graphics Processing System
A graphics processing system has a rendering space which is divided into tiles. Primitives within the tiles are processed to perform hidden surface removal and to apply texturing to the primitives. The graphics processing system includes a plurality of depth buffers, thereby allowing a processing module to process primitives of one tile by accessing one of the depth buffers while primitive identifiers of another, partially processed tile are stored in another one of the depth buffers. This allows the graphics processing system to have “multiple tiles in flight”, which can increase the efficiency of the graphics processing system.
GPU accelerated geospatial queries and geometric operations
A method including receiving a spatial query on spatial data. The spatial query has a spatial query extent including a sub-portion of the spatial data. A projection type is selected for the spatial query. A framebuffer is created for the selected projection type. Vertex buffers are established to hold a geometry of the selected projection type. The vertex buffers are passed from a CPU to a GPU. A spatial geometry of the spatial query extent is rendered into the framebuffer by projecting feature vertex data for features that fall at least partly within the spatial query extent into the vertex buffers. Rendering generates rendered framebuffer pixel values. Pixel values of the rendered framebuffer are retrieved as bytes on the CPU. A spatial query result is processed that includes or uses the pixel values.
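The core trick — answering a spatial query by rasterizing geometry into a framebuffer and reading back pixel values — can be sketched on the CPU with axis-aligned rectangular features. The extent/feature layout and function names are assumptions for illustration; a real implementation would render arbitrary feature geometry on the GPU.

```python
import numpy as np

def render_features(fb_shape, extent, features):
    """Rasterize rectangular features into a framebuffer covering `extent`.
    extent = (xmin, ymin, xmax, ymax); each feature = (x0, y0, x1, y1)."""
    h, w = fb_shape
    fb = np.zeros(fb_shape, dtype=np.uint8)
    xmin, ymin, xmax, ymax = extent
    sx, sy = w / (xmax - xmin), h / (ymax - ymin)
    for (x0, y0, x1, y1) in features:
        c0 = max(0, int((x0 - xmin) * sx))
        c1 = min(w, int(np.ceil((x1 - xmin) * sx)))
        r0 = max(0, int((y0 - ymin) * sy))
        r1 = min(h, int(np.ceil((y1 - ymin) * sy)))
        fb[r0:r1, c0:c1] = 1   # feature covers these framebuffer pixels
    return fb
```

Reading the rendered pixels back then yields the query result directly — e.g. summing set pixels times per-pixel area approximates the covered area within the query extent.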
GRAPHICS PROCESSING SYSTEMS
A graphics processing pipeline comprises vertex shading circuitry that operates to vertex shade position attributes of vertices of a set of vertices to be processed by the graphics processing pipeline, to generate, inter alia, a separate vertex shaded position attribute value for each of the plural different views. Tiling circuitry then determines, for the vertices that have been subjected to this first vertex shading operation, whether those vertices should be processed further. Vertex shading circuitry then performs a second vertex shading operation on the vertices determined to require further processing, to vertex shade the remaining vertex attributes of each such vertex and thereby generate, inter alia, a single vertex shaded attribute value for the set of plural views.
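The deferred second shading stage can be sketched as a two-pass filter: shade positions once per view, let a tiling test decide survival, and shade the remaining (view-independent) attributes only for survivors. The shader and tile-test callables below are toy stand-ins, not the pipeline's actual circuitry.

```python
def two_stage_vertex_shading(vertices, views, shade_position, shade_rest, in_any_tile):
    """First pass: shade only the position attribute, once per view.
    Tiling keeps vertices whose shaded position lands in some tile.
    Second pass: shade remaining attributes once per surviving vertex."""
    survivors = []
    for v in vertices:
        per_view_pos = [shade_position(v, view) for view in views]
        if any(in_any_tile(p) for p in per_view_pos):
            survivors.append(v)
    return [(v, shade_rest(v)) for v in survivors]
```

Vertices culled after the position-only pass never pay for full attribute shading, which is the efficiency win of splitting vertex shading this way.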
Multi-stage block mesh simplification
A method of operating a computing system to generate a model of an environment represented by a mesh is provided. The method allows 3D meshes to be updated for client applications in real time with low latency, supporting on-the-fly environment changes. The method provides 3D meshes adaptive to the different levels of simplification requested by various client applications. The method provides local updates, for example updating only the mesh parts that have changed since the last update. The method also provides 3D meshes with planarized surfaces to support robust physics simulations. The method includes segmenting a 3D mesh into mesh blocks. The method also includes performing a multi-stage simplification on selected mesh blocks. The multi-stage simplification includes a pre-simplification operation, a planarization operation, and a post-simplification operation.
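The three-stage per-block pipeline can be sketched with deliberately simple stand-ins: grid-snapping as pre-simplification, z-flattening of nearly flat blocks as planarization, and deduplication as post-simplification. These stage implementations are illustrative assumptions, not the patented operations.

```python
import numpy as np

def simplify_block(verts, grid=0.1, plane_tol=0.05):
    """Multi-stage simplification of one mesh block's vertices (N x 3 array)."""
    v = np.round(np.asarray(verts, dtype=float) / grid) * grid
    v = np.unique(v, axis=0)            # pre-simplification: snap + dedupe
    z = v[:, 2]
    if z.max() - z.min() < plane_tol:   # planarization: flatten near-flat block
        v[:, 2] = z.mean()
    return np.unique(v, axis=0)         # post-simplification: dedupe again
```

Running this per block supports the local-update property: only blocks whose contents changed need to be re-simplified.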