G06T15/55

Informed choices in primary sample space for light transport simulation

Systems, methods and articles of manufacture for rendering three-dimensional virtual environments using reversible jumps are disclosed herein. In one embodiment, mappings from random numbers to light paths are modeled as an explicit iterative random walk. Inverses of path construction techniques are employed to turn light transport paths back into the random numbers that produced them. In particular, such inverses may be used to extend the Multiplexed Metropolis Light Transport (MMLT) technique to perform path-invariant perturbations that produce a new path sample using a different path construction technique but preserve the path's geometry. To render an image, a rendering application in one embodiment may trace light paths through a virtual scene, with some path samples being generated by probabilistically selecting one or more techniques through technique perturbation and using inverses of the selected technique(s) to invert existing path(s), and with new paths being obtained by mutating or perturbing existing paths.
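
As an illustration of the inversion idea (this example is not from the patent text; the function names and the specific sampling technique are assumptions), the simplest per-bounce step of such a random walk is cosine-weighted hemisphere sampling: two uniform random numbers map to an outgoing direction, and that mapping can be inverted to recover the primary-sample-space numbers from a given direction. A minimal Python sketch:

    import math

    def sample_cosine_hemisphere(u1, u2):
        # Map two uniform random numbers in [0, 1) to a direction on the
        # upper hemisphere (local shading frame), cosine-weighted.
        r = math.sqrt(u1)
        phi = 2.0 * math.pi * u2
        return (r * math.cos(phi), r * math.sin(phi),
                math.sqrt(max(0.0, 1.0 - u1)))

    def invert_cosine_hemisphere(direction):
        # Inverse mapping: recover the primary-sample-space numbers (u1, u2)
        # that would have produced this direction under the technique above.
        x, y, z = direction
        u1 = x * x + y * y                 # r^2 = u1
        phi = math.atan2(y, x)
        if phi < 0.0:
            phi += 2.0 * math.pi
        return (u1, phi / (2.0 * math.pi))

    # Round trip: the direction determines the random numbers that produced it.
    u = (0.3, 0.7)
    d = sample_cosine_hemisphere(*u)
    print(invert_cosine_hemisphere(d))     # approximately (0.3, 0.7)

In the abstract's terms, a technique perturbation can then re-express an existing path through a different technique's inverse and re-run that technique forward, which changes the random numbers but leaves the path's geometry unchanged.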

Participating media baking

According to one embodiment, a method includes identifying a scene to be rendered, creating a plurality of light scattering tables within the scene, performing a computation of light extinction and light in-scattering within participating media of the scene, utilizing the plurality of light scattering tables, and during a ray tracing of the scene, determining a homogeneous scattering coefficient for spatially homogeneous media of the scene, and applying to the spatially homogeneous media of the scene one of the plurality of light scattering tables, where each of the plurality of light scattering tables corresponds to a single homogeneous scattering coefficient.
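
To make the role of a per-coefficient table concrete (a sketch under assumptions, not taken from the patent): for a spatially homogeneous medium, Beer-Lambert transmittance depends only on the extinction coefficient and the distance travelled, so it can be baked into a one-dimensional table per coefficient and looked up during ray tracing instead of being recomputed per ray. A minimal Python sketch with hypothetical names:

    import math

    def bake_transmittance_table(sigma_t, max_distance, num_entries=256):
        # Precompute Beer-Lambert transmittance exp(-sigma_t * d) for one
        # homogeneous extinction coefficient, sampled over [0, max_distance].
        step = max_distance / (num_entries - 1)
        return [math.exp(-sigma_t * (i * step)) for i in range(num_entries)]

    def lookup_transmittance(table, max_distance, d):
        # Nearest-entry lookup used during ray tracing in place of exp().
        last = len(table) - 1
        i = min(last, int(round(d / max_distance * last)))
        return table[i]

    # One table per homogeneous scattering/extinction coefficient in the scene.
    tables = {sigma: bake_transmittance_table(sigma, max_distance=100.0)
              for sigma in (0.01, 0.05, 0.2)}
    print(lookup_transmittance(tables[0.05], 100.0, d=12.5))   # ~exp(-0.625)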

System and method for computing reduced-resolution indirect illumination using interpolated directional incoming radiance

A system for, and method of, computing reduced-resolution indirect illumination using interpolated directional incoming radiance and a graphics processing subsystem incorporating the system or the method. In one embodiment, the system includes: (1) a cone tracing shader executable in a graphics processing unit to compute directional incoming radiance cones for sparse pixels and project the directional incoming radiance cones on a basis and (2) an interpolation shader executable in the graphics processing unit to compute outgoing radiance values for untraced pixels based on directional incoming radiance values for neighboring ones of the sparse pixels.
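
A minimal CPU-side sketch of the two-stage idea (the six-direction ambient-cube-like basis, the Lambertian surface model, and all names here are assumptions; the patent does not specify the basis): sparse, traced pixels store incoming radiance projected onto a small directional basis, and untraced pixels interpolate those coefficients from neighbouring sparse pixels and convert them to outgoing radiance:

    BASIS_DIRS = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]

    def interpolate_coeffs(neighbours, weights):
        # Weighted average of the per-direction incoming radiance stored at
        # neighbouring traced pixels; each neighbour is a 6-vector.
        out = [0.0] * len(BASIS_DIRS)
        for coeffs, w in zip(neighbours, weights):
            for i, c in enumerate(coeffs):
                out[i] += w * c
        return out

    def outgoing_diffuse_radiance(coeffs, normal, albedo):
        # Cosine-weighted sum of the interpolated directional incoming
        # radiance, giving outgoing radiance for a Lambertian surface.
        irradiance = 0.0
        weight_sum = 0.0
        for (dx, dy, dz), c in zip(BASIS_DIRS, coeffs):
            cos_theta = max(0.0, dx*normal[0] + dy*normal[1] + dz*normal[2])
            irradiance += cos_theta * c
            weight_sum += cos_theta
        return albedo * irradiance / max(weight_sum, 1e-6)

    # Untraced pixel between two sparse pixels, lit mostly from +Y.
    a = [0.1, 0.1, 2.0, 0.0, 0.1, 0.1]
    b = [0.2, 0.1, 1.0, 0.0, 0.1, 0.1]
    coeffs = interpolate_coeffs([a, b], [0.5, 0.5])
    print(outgoing_diffuse_radiance(coeffs, normal=(0.0, 1.0, 0.0), albedo=0.8))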

REAL-TIME COMPOSITING IN MIXED REALITY

A system may include a memory device to store instructions and data, and at least one processing device to execute the instructions stored in the memory device to: receive a background image and a digital object to be composited onto the background image in a mixed reality view, generate a 2D bounding region for the digital object, select a version of the background image at a pre-defined resolution, overlay the 2D bounding region on the selected version of the background image and obtain a set of samples of the colors of pixels of the selected version along a perimeter of the 2D bounding region, and determine a value for one or more digital lighting sources to illuminate the digital object in the mixed reality view, based, at least in part, on the set of samples.
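
A minimal sketch of the perimeter-sampling step (the names and the simple averaging rule are assumptions, not the claimed method): take the digital object's 2D bounding box over a reduced-resolution version of the background image, sample the background colour along the box's perimeter, and reduce the samples to a single light value for the composite:

    def perimeter_samples(image, bbox):
        # image: 2D list of (r, g, b) rows; bbox: (x0, y0, x1, y1), inclusive,
        # assumed to lie inside the image. Returns the border pixel colours.
        x0, y0, x1, y1 = bbox
        samples = []
        for x in range(x0, x1 + 1):                # top and bottom edges
            samples.append(image[y0][x])
            samples.append(image[y1][x])
        for y in range(y0 + 1, y1):                # left and right edges
            samples.append(image[y][x0])
            samples.append(image[y][x1])
        return samples

    def estimate_light_colour(samples):
        # Average of the perimeter samples, used here as the colour of one
        # digital lighting source illuminating the composited object.
        n = float(len(samples))
        return tuple(sum(c[i] for c in samples) / n for i in range(3))

    # 4x4 low-resolution background, warm on the right side.
    bg = [[(0.2, 0.2, 0.3), (0.2, 0.2, 0.3), (0.9, 0.6, 0.3), (0.9, 0.6, 0.3)]
          for _ in range(4)]
    print(estimate_light_colour(perimeter_samples(bg, (0, 0, 3, 3))))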

COMPRESSED RAY DIRECTION DATA IN A RAY TRACING SYSTEM
20240078740 · 2024-03-07

Ray tracing systems process rays through a 3D scene to determine intersections between rays and geometry in the scene, for rendering an image of the scene. Ray direction data for a ray can be compressed, e.g. into an octahedral vector format. The compressed ray direction data for a ray may be represented by two parameters (u,v) which indicate a point on the surface of an octahedron. In order to perform intersection testing on the ray, the ray direction data for the ray is unpacked to determine x, y and z components of a vector to a point on the surface of the octahedron. The unpacked ray direction vector is an unnormalised ray direction vector. Rather than normalising the ray direction vector, the intersection testing is performed on the unnormalised ray direction vector. This avoids the processing steps involved in normalising the ray direction vector.
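
A minimal sketch of the octahedral (u, v) format and of why normalisation can be skipped (the encode/decode below is the standard octahedral mapping; the plane-intersection example is an assumption used only to show that the hit point is unchanged when the direction is left unnormalised, with only the ray parameter t rescaled):

    def oct_encode(x, y, z):
        # Project a direction onto the octahedron |x|+|y|+|z| = 1 and fold
        # the lower hemisphere into the (u, v) square.
        s = abs(x) + abs(y) + abs(z)
        u, v = x / s, y / s
        if z < 0.0:
            u, v = ((1.0 - abs(v)) * (1.0 if u >= 0.0 else -1.0),
                    (1.0 - abs(u)) * (1.0 if v >= 0.0 else -1.0))
        return u, v

    def oct_decode_unnormalised(u, v):
        # Unpack (u, v) to an x, y, z point on the octahedron's surface.
        # The result is NOT unit length and is used directly for intersection.
        z = 1.0 - abs(u) - abs(v)
        x, y = u, v
        if z < 0.0:
            x = (1.0 - abs(v)) * (1.0 if u >= 0.0 else -1.0)
            y = (1.0 - abs(u)) * (1.0 if v >= 0.0 else -1.0)
        return x, y, z

    def intersect_plane(origin, direction, plane_z):
        # Ray vs. the plane z = plane_z. Works with an unnormalised direction:
        # t is scaled by 1/len(direction), but origin + t*direction is the same.
        if direction[2] == 0.0:
            return None
        t = (plane_z - origin[2]) / direction[2]
        return tuple(o + t * d for o, d in zip(origin, direction)) if t > 0 else None

    u, v = oct_encode(0.0, 0.6, 0.8)                  # a unit direction
    d = oct_decode_unnormalised(u, v)                 # shorter than unit length
    print(intersect_plane((0.0, 0.0, 0.0), d, 4.0))   # (0.0, 3.0, 4.0) either way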
