G06T15/40

Systems, methods, and media for generating visualization of physical environment in artificial reality

In one embodiment for generating passthrough, a system may receive an image and depth measurements of an environment and generate a corresponding 3D model. The system may identify, in the image, first pixels depicting a physical object and second pixels corresponding to a padded boundary around the first pixels. The system may associate the first pixels with a first portion of the 3D model representing the physical object and with a first representative depth value computed based on the depth measurements. The system may associate the second pixels with a second portion of the 3D model representing a region around the physical object and with a second representative depth value farther than the first representative depth value. The system may then render an output image depicting a virtual object and the physical object, with occlusions between the virtual object and the physical object determined using the first representative depth value and the second representative depth value.
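A minimal sketch of the occlusion decision described above, not the patented implementation: each pixel carries the representative depth of its region (object or padded boundary) rather than a per-pixel measured depth, and that value is compared against the virtual object's depth. The labels, depth values, and function names are illustrative assumptions.

```python
# Representative depth per region (illustrative values, not from the patent).
OBJECT_DEPTH = 1.5    # computed from depth measurements of the physical object
BOUNDARY_DEPTH = 4.0  # deliberately farther value for the padded boundary

def physical_depth(pixel_label):
    """Map a pixel's region label to its region's representative depth."""
    return OBJECT_DEPTH if pixel_label == "object" else BOUNDARY_DEPTH

def resolve_pixel(pixel_label, virtual_depth):
    """Return which layer wins the depth comparison at this pixel."""
    return "physical" if physical_depth(pixel_label) < virtual_depth else "virtual"
```

Because boundary pixels carry a farther depth than object pixels, a virtual object at an intermediate depth is occluded by the physical object itself but drawn over the padded halo around it.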

GRAPHICS PROCESSING

A method is disclosed of operating a graphics processor that executes a graphics processing pipeline including an early culling tester that can access plural different culling test data buffers. Information is maintained indicating which of the plural culling test data buffers is expected to be accessed, and that information is used to control the early culling tester, for example such that processing delays associated with waiting for dependencies to resolve are reduced.
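A hedged sketch of the idea, under the assumption (not stated in the abstract) that the tester only needs to stall on pending writes to the buffer it is actually expected to access. The buffer names, readiness record, and culling policy here are illustrative.

```python
# Readiness of each culling-test data buffer (illustrative state).
buffer_ready = {"depth": True, "stencil": False}

def early_cull(fragment_depth, stored_depth, expected_buffer):
    """Run the early culling test only if the buffer this draw is expected
    to access has no unresolved dependencies.

    Returns True to cull the fragment, False to pass it down the pipeline
    (including the conservative case where the test is deferred)."""
    if not buffer_ready.get(expected_buffer, False):
        return False  # defer to late testing rather than wait on the buffer
    return fragment_depth >= stored_depth  # cull if not closer than stored
```

The point of tracking the expected buffer is visible in the last case below: a fragment that would otherwise wait on the stencil buffer is simply passed through untested instead of stalling.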

VIRTUAL REALITY SYSTEM FOR VIEWING POINT CLOUD VOLUMES WHILE MAINTAINING A HIGH POINT CLOUD GRAPHICAL RESOLUTION
20230041314 · 2023-02-09

A virtual reality (VR) system that includes a three-dimensional (3D) point cloud having a plurality of points, a VR viewer having a current position, a graphics processing unit (GPU), and a central processing unit (CPU). The CPU determines a field-of-view (FOV) based at least in part on the current position of the VR viewer, selects, using occlusion culling, a subset of the points based at least in part on the FOV, and provides the subset to the GPU. The GPU receives the subset from the CPU and renders an image for display on the VR viewer based at least in part on the received subset. The subset is selected at a first frames-per-second (FPS) rate, and the rendering occurs at a second FPS rate that is faster than the first.
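The decoupled rates can be sketched as follows; this is a minimal simulation assuming the GPU simply re-renders the most recent subset it received whenever the CPU has not yet produced a new one. The rate ratio and subset labels are illustrative, not from the patent.

```python
def simulate(frames, cull_every):
    """Simulate CPU culling every `cull_every`-th frame while the GPU
    renders every frame from the most recently provided subset."""
    subset = None
    log = []
    for frame in range(frames):
        if frame % cull_every == 0:
            subset = f"subset@{frame}"  # CPU: occlusion-culled selection
        log.append((frame, subset))      # GPU: renders latest subset
    return log

# e.g. culling at 30 FPS while rendering at 90 FPS -> cull_every = 3
```

Rendering never waits on a fresh culling pass, so the display rate stays high while the point subset is refreshed at the slower CPU rate.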

Hierarchical depth buffer back annotation

Briefly, in accordance with one or more embodiments, a processor performs a coarse depth test on pixel data, and performs a final depth test on the pixel data. Coarse depth data is stored in a coarse depth cache, and per pixel depth data is stored in a per pixel depth cache. If a result of the coarse depth test is ambiguous, the processor is to read the per pixel depth data from the per pixel depth cache, and to update the coarse depth data with the per pixel depth data if the per pixel depth data has a smaller depth range than the coarse depth data.
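A hedged sketch of the back-annotation step: the coarse cache holds a (min, max) depth range per tile; an ambiguous coarse result triggers a read of the per-pixel cache, and the coarse range is tightened if the per-pixel extremes span a smaller range. Representing coarse data as a (min, max) pair is an assumption for illustration.

```python
def coarse_test(fragment_z, coarse_range):
    """Trivially pass/fail a fragment against the tile's coarse depth
    range; anything inside the range is ambiguous."""
    lo, hi = coarse_range
    if fragment_z < lo:
        return "pass"
    if fragment_z > hi:
        return "fail"
    return "ambiguous"

def back_annotate(coarse_range, per_pixel_depths):
    """On an ambiguous result, update the coarse range from the per-pixel
    cache if the per-pixel data has a smaller depth range."""
    tight = (min(per_pixel_depths), max(per_pixel_depths))
    lo, hi = coarse_range
    if (tight[1] - tight[0]) < (hi - lo):
        return tight
    return coarse_range
```

After back-annotation, fragments that previously fell inside the stale coarse range can be trivially rejected or accepted without touching per-pixel data again.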

Method and graphics processing system for rendering one or more fragments having shader-dependent properties

A graphics processing unit and method for processing fragments in a graphics processing system which includes: (i) hidden surface removal logic configured to perform hidden surface removal on fragments, and (ii) processing logic configured to execute shader programs for fragments. Initial processing of fragments is performed at the hidden surface removal logic. Some of the fragments have a shader-dependent property. A shader program for a particular fragment having the shader-dependent property is split into two stages. The initial processing comprises performing a depth test on the particular fragment. In response to the particular fragment passing the depth test of the initial processing in the hidden surface removal logic, a first stage, but not a second stage, of the shader program is executed for the particular fragment at the processing logic. The first stage of the shader program has instructions for determining the property of the particular fragment.
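A minimal sketch of the two-stage split, assuming (for illustration only) that the shader-dependent property is the fragment's depth and that fragments are plain dictionaries. Only the first stage, which computes the property, runs after the initial depth test; full shading is deferred to the second stage.

```python
def stage_one(fragment):
    """First stage: compute only the shader-dependent property
    (here, illustratively, a depth offset applied in the shader)."""
    fragment["depth"] = fragment["base_depth"] + fragment["depth_offset"]
    return fragment

def stage_two(fragment):
    """Second stage: full shading, run only once the property is known."""
    fragment["color"] = (fragment["depth"], 0.0, 0.0)  # placeholder shading
    return fragment

def process(fragment, stored_depth):
    """Hidden-surface-removal flow: initial depth test, then stage one,
    then a retest with the now-known property before stage two."""
    if fragment["base_depth"] >= stored_depth:  # initial depth test
        return None                             # culled: no stage runs
    fragment = stage_one(fragment)              # first stage only
    if fragment["depth"] >= stored_depth:       # retest with true depth
        return None                             # second stage never runs
    return stage_two(fragment)
```

Splitting the shader this way avoids executing the expensive second stage for fragments whose shader-computed property turns out to fail the depth test.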

Intersection testing in a ray tracing system using scaled minimum and maximum culling distances
11615577 · 2023-03-28

A method and intersection testing module in a ray tracing system for determining whether a ray intersects a 3D axis-aligned box that represents a volume defined by a front-facing plane and a back-facing plane for each dimension. Scaled inverse ray components are determined, and a scaled minimum culling distance is determined using a result of multiplying an unscaled minimum culling distance for the ray by a predetermined magnitude. Scaled intersection distances to the planes defining the box are determined using the scaled inverse ray components. The largest of the determined scaled intersection distances to a front-facing plane of the box is identified, and the smallest of the determined scaled intersection distances to a back-facing plane of the box is identified. It is determined that the ray intersects the box if all three determinations are satisfied, and that the ray misses the box if one or more of the three determinations are not satisfied.
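A hedged sketch of the underlying slab test: intersection distances to each axis-aligned slab are computed with inverse ray components, the largest front-facing (entry) distance and smallest back-facing (exit) distance are identified, and the ray hits the box only if the resulting interval is non-empty within the [min, max] culling distances. Scaling every distance by a fixed predetermined magnitude, as in the claim, preserves all of these comparisons, so it is omitted here for clarity; this is not the patented module itself.

```python
def ray_box_intersect(origin, inv_dir, box_min, box_max, t_min, t_max):
    """Slab test for a ray against an axis-aligned box, clipped to the
    minimum and maximum culling distances [t_min, t_max]."""
    entry, exit_ = t_min, t_max
    for o, inv, lo, hi in zip(origin, inv_dir, box_min, box_max):
        t0 = (lo - o) * inv
        t1 = (hi - o) * inv
        if t0 > t1:
            t0, t1 = t1, t0      # orient so t0 is the front-facing plane
        entry = max(entry, t0)   # largest front-facing intersection distance
        exit_ = min(exit_, t1)   # smallest back-facing intersection distance
    return entry <= exit_        # intersects only if every comparison holds
```

Clipping the interval to the culling distances is what lets the same test reject boxes that lie entirely before the minimum or beyond the maximum culling distance.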

Enhancing hierarchical depth buffer culling efficiency via mask accumulation
11615585 · 2023-03-28

Embodiments described herein provide a technique to improve the culling efficiency of coarse depth testing. One embodiment provides for a graphics processor that includes a depth pipeline configured to track a history of source fragments that are tested against a destination tile. When a combination of partial fragments sums to full coverage, the most conservative source far-depth value is used in place of the previous destination far-depth value. When the combination sums only to partial coverage, the previous destination far-depth value is retained.
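A minimal sketch of the mask-accumulation idea, assuming a 2x2 tile represented as a 4-bit coverage mask and "most conservative" meaning the farthest source far depth seen; these representation choices are illustrative, not from the patent.

```python
FULL_MASK = 0b1111  # full coverage of an illustrative 2x2 tile

class TileDepthTracker:
    """Accumulates source coverage masks and far depths for one tile."""

    def __init__(self, dest_far):
        self.dest_far = dest_far  # previous destination far-depth value
        self.acc_mask = 0         # accumulated coverage of partial fragments
        self.acc_far = 0.0        # most conservative (farthest) source far depth

    def add_fragment(self, mask, source_far):
        """Fold in one partial fragment; return the tile's far depth."""
        self.acc_mask |= mask
        self.acc_far = max(self.acc_far, source_far)
        if self.acc_mask == FULL_MASK:
            self.dest_far = self.acc_far  # full coverage: replace far depth
            self.acc_mask = 0
            self.acc_far = 0.0
        return self.dest_far              # partial coverage: value retained
```

Until the partial fragments jointly cover the tile, the stale (farther) destination value is kept; once they do, the tighter accumulated far depth lets subsequent coarse tests cull more fragments.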