G06T15/55

Apparatus and method for generation of a light transport map with transparency information

A generator of an image processing apparatus may generate a light transport map (LTM) by sampling depth information from a light to an object based on a transparency of the object, wherein the LTM may be used to compute a visibility of the light with respect to a first point to be rendered.
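
The depth-plus-transparency sampling idea can be illustrated for a single light ray: each surface hit stores a depth and a transparency, and the visibility of a shading point is the product of the transparencies of every hit lying between the light and that point. A minimal sketch (the hit-list layout and helper name are hypothetical, not from the patent):

```python
def ltm_visibility(hits, depth):
    """Transmittance from the light to a point at `depth` along one light ray.

    `hits` is a list of (hit_depth, transparency) pairs sampled from the
    light, where transparency 1.0 is fully clear and 0.0 fully opaque.
    """
    visibility = 1.0
    for hit_depth, transparency in hits:
        if hit_depth < depth:           # surface lies between light and point
            visibility *= transparency  # attenuate by the surface's alpha
    return visibility

# One hypothetical LTM texel: glass pane at depth 1.0 (50% clear),
# tinted film at depth 2.0 (80% clear).
hits = [(1.0, 0.5), (2.0, 0.8)]
```

A fully opaque occluder (transparency 0.0) reduces this to an ordinary shadow-map query: visibility drops to zero behind the first hit.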

Efficiently determining an absorption coefficient of a virtual volume in 3D computer graphics
11200732 · 2021-12-14

Disclosed is a method to derive the absorption coefficient, transparency, and/or the scattering coefficient from the user-specified parameters including roughness, phase function, index of refraction (IOR), and color by performing the simulation once, and storing the results of the simulation in an easy-to-retrieve representation, such as a lookup table or an analytic function. To create the analytic function, one or more analytic functions can be fitted to the results of the simulation for the multiple parameters including roughness, phase function, IOR, and color. The lookup table can be combined with the analytic representation. For example, the lookup table can be used to represent the color, roughness, and phase function, while the IOR can be represented by an analytic function. For example, when the IOR is above 2, the lookup table becomes three-dimensional and the IOR is calculated using the analytic function.
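
The split between tabulated and analytic parameters can be sketched as follows. The grid resolution, the fabricated table contents (a stand-in for the one-time simulation), the Fresnel-style IOR fit, and the way the two factors are combined are all illustrative assumptions, not the patent's actual simulation or fit:

```python
import numpy as np

# Hypothetical: pretend a one-time simulation produced absorption samples
# over a (roughness, color-channel) grid; here a smooth stand-in is fabricated.
roughness_axis = np.linspace(0.0, 1.0, 8)
color_axis = np.linspace(0.0, 1.0, 8)
table = np.outer(1.0 - roughness_axis, color_axis)  # stand-in for simulated data

def lookup_bilinear(tab, r, c):
    """Bilinear fetch from the precomputed table (the easy-to-retrieve part)."""
    x = r * (tab.shape[0] - 1)
    y = c * (tab.shape[1] - 1)
    i = min(int(x), tab.shape[0] - 2)
    j = min(int(y), tab.shape[1] - 2)
    fx, fy = x - i, y - j
    return ((1 - fx) * (1 - fy) * tab[i, j] + fx * (1 - fy) * tab[i + 1, j]
            + (1 - fx) * fy * tab[i, j + 1] + fx * fy * tab[i + 1, j + 1])

def ior_factor(ior):
    """Hypothetical analytic fit for the IOR dependence (Schlick-style F0)."""
    return ((ior - 1.0) / (ior + 1.0)) ** 2

def absorption(roughness, color, ior):
    # Combine the two representations: the table covers roughness and color,
    # the analytic function covers the IOR (combination rule is illustrative).
    return lookup_bilinear(table, roughness, color) * (1.0 + ior_factor(ior))
```

Keeping the IOR out of the table is what keeps the table low-dimensional: each analytic parameter removes one table axis.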

VISIBILITY-BASED ENVIRONMENT IMPORTANCE SAMPLING FOR LIGHT TRANSPORT SIMULATION SYSTEMS AND APPLICATIONS
20230298257 · 2023-09-21

Systems and methods to implement a technique for determining an environment importance sampling function. An environment map may be provided where lighting information about the environment is known, but where certain pixels within a scene associated with the environment map are shaded. From these shaded pixels, rays may be drawn in random directions to determine whether the rays are occluded or can interact with the environment map, which provides an indication of a source of lighting that can be used for light transport simulations. A mask may be generated based on these occlusions and used to update the environment importance sampling function.
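
The occlusion-mask construction can be sketched on a small latitude-longitude environment map. The map contents, the stand-in visibility query (everything below the horizon blocked), and the sample count are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 8x16 latitude-longitude environment map of radiance values.
env = rng.uniform(0.1, 1.0, size=(8, 16))

def trace_occlusion(direction):
    """Stand-in for the renderer's visibility query: here, pretend every
    direction below the horizon is blocked by scene geometry."""
    return direction[2] < 0.0

# From a shaded point, draw rays in random directions and record which
# environment-map texels were reachable, building a visibility mask.
mask = np.zeros_like(env)
for _ in range(2000):
    d = rng.normal(size=3)
    d /= np.linalg.norm(d)
    if not trace_occlusion(d):
        theta = np.arccos(np.clip(d[2], -1.0, 1.0))   # polar angle
        phi = np.arctan2(d[1], d[0]) % (2 * np.pi)    # azimuth
        row = min(int(theta / np.pi * env.shape[0]), env.shape[0] - 1)
        col = min(int(phi / (2 * np.pi) * env.shape[1]), env.shape[1] - 1)
        mask[row, col] = 1.0

# Update the importance-sampling function: weight radiance by visibility
# and renormalise into a discrete PDF over texels.
pdf = env * mask
pdf /= pdf.sum()
```

Texels that no unoccluded ray can reach get zero probability, so light-transport samples are no longer wasted on environment directions the shaded point cannot see.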

IMAGE PROCESSING TO DETERMINE RADIOSITY OF AN OBJECT
20220005264 · 2022-01-06

The present disclosure provides a method (500) comprising receiving (510) images (e.g., 125A to 125G) of an object (110), the images (e.g., 125A to 125G) comprising first and second images. The method (500) then determines (530) feature points (810, 820) of the object (110) using the first images and determines (530, 540, 550) a three-dimensional reconstruction of a scene having the object (110). The method (500) then proceeds with aligning (560) the three-dimensional reconstruction with a three-dimensional mesh model of the object (110). The alignment can then be used to map (570) pixel values of pixels of the second images onto the three-dimensional mesh model. The directional radiosity of each mesh element of the three-dimensional mesh model can then be determined (580) and the hemispherical radiosity of the object (110) is determined (590) based on the determined directional radiosity.
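
The final step, going from directional radiosity to hemispherical radiosity, amounts to integrating the directional values over the hemisphere, B = ∫ L(ω) cos θ dω, which can be sketched as a Monte Carlo estimate. The constant stand-in for one mesh element's directional values is an assumption for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def directional_radiosity(directions):
    """Stand-in for one mesh element's directional radiosity, as mapped from
    the image pixels; a constant Lambertian-like value for illustration."""
    return np.full(len(directions), 0.7)

def hemispherical_radiosity(n_samples=50_000):
    # Uniformly sample directions on the upper hemisphere (pdf = 1 / (2*pi)).
    z = rng.uniform(0.0, 1.0, n_samples)            # cos(theta)
    phi = rng.uniform(0.0, 2 * np.pi, n_samples)
    r = np.sqrt(1.0 - z * z)
    w = np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)
    # Monte Carlo estimate of  B = integral of L(w) * cos(theta)  over the
    # hemisphere: average the cosine-weighted samples, divide by the pdf.
    return np.mean(directional_radiosity(w) * z) * 2 * np.pi
```

For a constant directional value L the estimate converges to L·π (about 2.199 here), the familiar Lambertian relation between radiance and radiosity.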

Compressed ray direction data in a ray tracing system

Ray tracing systems process rays through a 3D scene to determine intersections between rays and geometry in the scene, for rendering an image of the scene. Ray direction data for a ray can be compressed, e.g. into an octahedral vector format. The compressed ray direction data for a ray may be represented by two parameters (u,v) which indicate a point on the surface of an octahedron. In order to perform intersection testing on the ray, the ray direction data for the ray is unpacked to determine x, y and z components of a vector to a point on the surface of the octahedron. The unpacked ray direction vector is an unnormalised ray direction vector. Rather than normalising the ray direction vector, the intersection testing is performed on the unnormalised ray direction vector. This avoids the processing steps involved in normalising the ray direction vector.
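
The octahedral mapping and its unnormalised unpacking can be sketched directly. This is one common formulation of octahedral encoding; quantising (u, v) to a fixed bit width is omitted:

```python
import numpy as np

def _sign(x):
    return np.where(x >= 0.0, 1.0, -1.0)  # sign that treats 0 as positive

def oct_encode(d):
    """Compress a direction vector to (u, v) on the unit octahedron."""
    d = d / np.abs(d).sum()               # project onto |x| + |y| + |z| = 1
    u, v = d[0], d[1]
    if d[2] < 0.0:                        # fold the lower pyramid outward
        u, v = (1.0 - abs(d[1])) * _sign(d[0]), (1.0 - abs(d[0])) * _sign(d[1])
    return float(u), float(v)

def oct_decode_unnormalised(u, v):
    """Unpack (u, v) to x, y and z components. The result lies on the
    octahedron, not the unit sphere - i.e. it is an unnormalised direction,
    which is sufficient for intersection testing."""
    z = 1.0 - abs(u) - abs(v)
    if z < 0.0:                           # undo the lower-hemisphere fold
        u, v = (1.0 - abs(v)) * _sign(u), (1.0 - abs(u)) * _sign(v)
    return np.array([u, v, z])
```

Because scaling a ray direction only rescales the ray parameter t, a ray-triangle test run on the unnormalised vector reports the same hit point, which is why the normalisation step can be skipped as described.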

Rendering textured surface using surface-rendering neural networks

Methods and systems disclosed herein relate generally to surface-rendering neural networks that represent and render a variety of material appearances (e.g., textured surfaces) at different scales. The system receives image metadata for a texel that includes a position, incoming and outgoing radiance directions, and a kernel size. The system applies an offset-prediction neural network to this query to identify an offset coordinate for the texel. The system inputs the offset coordinate to a data structure to determine a feature vector for the texel of the textured surface. The feature vector is then processed using a decoder neural network to estimate a light-reflectance value of the texel, which is used to render the texel of the textured surface.
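
The query → offset → feature fetch → decode pipeline can be sketched with toy stand-ins for the learned components. The single-linear-layer "networks" with random weights, the 16 × 16 feature grid, and the nearest-texel fetch are illustrative assumptions, not the trained models of the disclosure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the learned components (random weights, not trained).
W_offset = 0.1 * rng.normal(size=(2, 9))     # offset-prediction net: 1 linear layer
feature_grid = rng.normal(size=(16, 16, 8))  # data structure of feature vectors
W_decoder = 0.1 * rng.normal(size=(1, 8))    # decoder net: 1 linear layer

def render_texel(position, wi, wo, kernel_size):
    # 1. Assemble the query: texel position, incoming/outgoing radiance
    #    directions, and the kernel size (the filtering scale).
    query = np.concatenate([position, wi, wo, [kernel_size]])  # shape (9,)
    # 2. The offset-prediction network maps the query to an offset coordinate.
    offset = np.tanh(W_offset @ query)
    uv = np.clip(np.asarray(position) + 0.1 * offset, 0.0, 1.0)
    # 3. Fetch the texel's feature vector from the grid (nearest texel here).
    i = min(int(uv[0] * 16), 15)
    j = min(int(uv[1] * 16), 15)
    features = feature_grid[i, j]
    # 4. The decoder estimates the light-reflectance value for the texel.
    logit = (W_decoder @ features)[0]
    return 1.0 / (1.0 + np.exp(-logit))      # sigmoid keeps it in (0, 1)
```

Conditioning the offset on the radiance directions and kernel size is what lets one grid of feature vectors reproduce view-dependent, scale-dependent appearance.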