G06T15/60

Frustum rendering in computer graphics

A graphics processing system includes a tiling unit configured to tile a first view of a scene into a plurality of tiles, a processing unit configured to identify a first subset of the tiles that are associated with regions of the scene that are viewable in a second view, and a rendering unit configured to render to a render target each of the identified tiles.
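The tiling, identification, and rendering stages described above can be sketched in miniature. The sketch below reduces tiles and views to 2-D screen-space rectangles; the names `Rect`, `tile_view`, and `tiles_visible_in` are illustrative assumptions, not the patented units.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Rect:
    x0: float
    y0: float
    x1: float
    y1: float

    def overlaps(self, other: "Rect") -> bool:
        # Two axis-aligned rectangles overlap iff they overlap on both axes.
        return (self.x0 < other.x1 and other.x0 < self.x1 and
                self.y0 < other.y1 and other.y0 < self.y1)

def tile_view(view: Rect, tile_size: float) -> list:
    """Tiling unit: split the first view into a grid of tiles."""
    tiles = []
    y = view.y0
    while y < view.y1:
        x = view.x0
        while x < view.x1:
            tiles.append(Rect(x, y,
                              min(x + tile_size, view.x1),
                              min(y + tile_size, view.y1)))
            x += tile_size
        y += tile_size
    return tiles

def tiles_visible_in(tiles: list, second_view: Rect) -> list:
    """Processing unit: keep only tiles whose region is viewable in the second view."""
    return [t for t in tiles if t.overlaps(second_view)]

first_view = Rect(0, 0, 64, 64)
second_view = Rect(40, 40, 80, 80)  # region of the scene seen by the second view
subset = tiles_visible_in(tile_view(first_view, 32), second_view)
# Only this subset would be rendered to the render target.
```

Culling whole tiles against the second view this way avoids re-rendering regions of the first view that the second view can never show.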

User authentication in a three-dimensional (3D) alternative reality software application
11500977 · 2022-11-15

Methods and apparatuses are described for user authentication in a three-dimensional (3D) alternative reality software application. A computing device coupled to an alternative reality viewing device generates a 3D virtual environment for display in the alternative reality viewing device, the 3D virtual environment comprising a plurality of 3D objects. The computing device identifies a subset of the plurality of 3D objects selected by the user of the alternative reality viewing device. The computing device captures a first set of actions of the user with respect to the subset of 3D objects, including recording a sequence of the first set of actions. The computing device generates a multidimensional authentication credential for the user based upon the first set of actions and stores the multidimensional authentication credential in a database.
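The credential-generation step above can be illustrated with a small sketch. The serialization and hashing scheme below (JSON plus SHA-256) is an assumption for illustration, not the patented method; the object and gesture names are likewise hypothetical.

```python
import hashlib
import json

def make_credential(user_id: str, actions: list) -> str:
    """Derive a credential from an ordered sequence of actions on 3-D objects.

    Each action records which object was acted on and how; the recorded
    order is part of the credential, so the same gestures performed in a
    different sequence produce a different value.
    """
    record = [{"order": i, **a} for i, a in enumerate(actions)]
    payload = json.dumps({"user": user_id, "actions": record}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

# Hypothetical captured actions on objects the user selected in the 3-D scene.
actions = [
    {"object": "red_cube", "gesture": "rotate", "axis": "y"},
    {"object": "blue_torus", "gesture": "grab"},
    {"object": "red_cube", "gesture": "stack_on", "target": "blue_torus"},
]
cred = make_credential("alice", actions)
# Re-performing the same actions in the same order reproduces the credential.
```

The stored digest can then be compared against a freshly computed one at login time, so the raw action sequence never needs to leave the device.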

Producing rendering outputs from a 3-D scene using volume element light transport data

A rendering system combines point sampling and volume sampling operations to produce rendering outputs. For example, to determine color information for a surface location in a 3-D scene, one or more point sampling operations are conducted in a volume around the surface location, and one or more sampling operations of volumetric light transport data are performed farther from the surface location. A transition zone between point sampling and volume sampling can be provided, in which both point and volume sampling operations are conducted. Data obtained from point and volume sampling operations can be blended in determining color information for the surface location. For example, point samples are obtained by tracing a ray for each sample to identify an intersection between the ray and another surface, which is then shaded, while volume samples are obtained from nested 3-D grids of volume elements expressing light transport data at different levels of granularity.
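The transition-zone blend described above can be sketched as follows. The linear weighting and the near/far bounds of the zone are assumptions for illustration; the abstract does not specify the blend function.

```python
def blend_weight(distance: float, near: float, far: float) -> float:
    """Weight given to the volume sample as distance from the surface grows.

    0.0 -> pure point sampling (close to the surface location),
    1.0 -> pure volume sampling (far from it),
    linear ramp inside the transition zone [near, far].
    """
    if distance <= near:
        return 0.0
    if distance >= far:
        return 1.0
    return (distance - near) / (far - near)

def shade(distance: float, point_sample: float, volume_sample: float,
          near: float = 1.0, far: float = 4.0) -> float:
    """Blend point- and volume-sampled radiance for one surface location."""
    w = blend_weight(distance, near, far)
    return (1.0 - w) * point_sample + w * volume_sample

# Inside the transition zone both sampling results contribute to the color.
color = shade(2.5, point_sample=0.8, volume_sample=0.2)
```

In practice `point_sample` would come from tracing a ray and shading the hit surface, and `volume_sample` from interpolating the nested volume-element grids at a granularity matched to the distance.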

Shadow rendering method and apparatus, computer device, and storage medium

This application discloses a shadow rendering method and apparatus, a computer device, and a storage medium, the method including: obtaining at least one rendering structure in a virtual scene according to an illumination direction in the virtual scene; obtaining model coordinates of a plurality of pixels according to a current viewing angle associated with the virtual scene and depth information of the plurality of pixels; sampling at least one shadow map according to the model coordinates of the plurality of pixels to obtain a plurality of sampling points corresponding to the plurality of pixels; and rendering the plurality of sampling points in the virtual scene to obtain at least one shadow associated with at least one virtual object in the virtual scene.
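The depth-comparison step behind shadow-map sampling can be shown in a minimal sketch. Real implementations recover each pixel's model coordinates from its screen depth and project them into light space; here light-space texel indices and depths are given directly, and the depth bias value is an assumption.

```python
# Nearest-occluder depth stored per light-space texel (a tiny 1-D shadow map).
shadow_map = {0: 2.0, 1: 5.0}

def in_shadow(texel: int, light_depth: float, bias: float = 1e-3) -> bool:
    """A point is shadowed if the shadow map recorded a nearer occluder
    at its light-space location. The bias suppresses self-shadowing acne."""
    return light_depth - bias > shadow_map.get(texel, float("inf"))

# A point at depth 4.0 behind the occluder at depth 2.0 is shadowed;
# along texel 1 the nearest occluder (5.0) is farther, so the point is lit.
shadowed = in_shadow(0, 4.0)
lit = not in_shadow(1, 4.0)
```

Each pixel that passes this test would then contribute a sampling point rendered into the scene's shadow.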

METHOD, APPARATUS AND DEVICE FOR PROCESSING SHADOW TEXTURE, COMPUTER-READABLE STORAGE MEDIUM, AND PROGRAM PRODUCT

A method for processing a shadow texture computes data concurrently to increase computation density, reduce the number of data transmission batches, and improve shadow texture processing efficiency. Model data and light source information of at least one object in a virtual scene are acquired. A shadow with a first resolution corresponding to each object is acquired. A second resolution is determined based on the model data of each object. A shadow edge formed by each object under the influence of the light source information is determined in parallel by utilizing rasterized pixels of the model data. The distance from the rasterized pixels to the shadow edge corresponding to each object is computed, and the corresponding distance data is stored.
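The edge-detection and distance steps above can be sketched on a binary shadow mask. A brute-force CPU loop stands in for the parallel GPU pass, and the 4-neighbour edge definition is an assumption; the sketch assumes the mask contains at least one edge pixel.

```python
import math

def shadow_edge_distances(mask: list) -> list:
    """For each rasterized pixel, compute the distance to the nearest shadow edge.

    mask[y][x] is truthy where the pixel is shadowed. An edge pixel is a
    shadowed pixel with at least one lit 4-neighbour inside the mask.
    """
    h, w = len(mask), len(mask[0])
    edges = [(y, x) for y in range(h) for x in range(w)
             if mask[y][x] and any(
                 0 <= y + dy < h and 0 <= x + dx < w and not mask[y + dy][x + dx]
                 for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)))]
    # Per-pixel nearest-edge distance (each pixel is independent, hence
    # naturally parallelizable on a GPU).
    return [[min(math.hypot(y - ey, x - ex) for ey, ex in edges)
             for x in range(w)] for y in range(h)]

mask = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
d = shadow_edge_distances(mask)
```

Storing such a distance field lets later passes soften or anti-alias the shadow boundary without re-rasterizing the geometry.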
