G06T15/55

PARTICIPATING MEDIA BAKING
20190295314 · 2019-09-26 ·

According to one embodiment, a method includes identifying a scene to be rendered, creating a plurality of light scattering tables within the scene, performing a computation of light extinction and light in-scattering within participating media of the scene, utilizing the plurality of light scattering tables, and during a ray tracing of the scene, approximating spatially heterogeneous media of the scene as spatially homogeneous media of the scene by performing a volume intersection for each light ray associated with the spatially heterogeneous media of the scene to determine a homogeneous scattering coefficient for the light ray, and applying to the spatially heterogeneous media of the scene one of the plurality of light scattering tables, where each of the plurality of light scattering tables corresponds to a single homogeneous scattering coefficient.
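The baked-table idea above can be sketched in a few lines. This is an illustrative single-scattering model, not the patent's actual implementation: each table stores radiance versus path depth for one homogeneous scattering coefficient (here with `sigma_s = sigma_t`, i.e. albedo 1), and at trace time a heterogeneous ray is approximated by averaging its sampled coefficients and applying the nearest baked table. All function names and the closed-form integrand are assumptions.

```python
import numpy as np

def bake_scattering_table(sigma_s, sigma_t, depths):
    """Single-scattering radiance vs. path depth for a homogeneous medium.

    Extinction follows Beer-Lambert; in-scattering integrates
    sigma_s * exp(-sigma_t * t) over the traversed depth (closed form
    for constant coefficients).
    """
    return (sigma_s / sigma_t) * (1.0 - np.exp(-sigma_t * depths))

def build_tables(sigmas, depths):
    # One table per homogeneous scattering coefficient.
    return {s: bake_scattering_table(s, s, depths) for s in sigmas}

def shade_ray(tables, sigmas, sigma_samples, depth, depths):
    # Approximate the heterogeneous medium as homogeneous: average the
    # coefficients sampled along the ray's volume intersection...
    sigma_hom = float(np.mean(sigma_samples))
    # ...then apply the baked table with the nearest coefficient.
    nearest = min(sigmas, key=lambda s: abs(s - sigma_hom))
    return float(np.interp(depth, depths, tables[nearest]))
```

The averaging step stands in for the patent's "volume intersection" that determines the homogeneous scattering coefficient per ray; a production renderer would integrate density along the segment rather than average point samples.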

AUGMENTED REALITY LIGHTING EFFECTS
20190279430 · 2019-09-12 ·

The present invention embraces a system, device, and method for adding lighting effects to augmented reality (AR) content (i.e., virtual objects). Light sensors in an augmented reality (AR) system monitor an environment's lighting conditions to acquire lighting data that can be used to create (or update) virtual light sources. Depth sensors in the AR system sense the environment to acquire mapping data that can be used to create a 3D model of the environment while tracking the system's location within the environment. Algorithms running on a processor may then add the virtual light sources to the 3D model of the environment so that, when AR content is created, lighting effects corresponding to the virtual light sources can be added. The resulting AR content with virtual lighting effects appears more realistic to a user.
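A minimal sketch of the sensing-to-shading pipeline described above, under assumed simplifications: the sensor readings are fit to a single dominant directional light, and a virtual object's surface point is shaded with a Lambertian term. All names, the lux-weighted fit, and the single-light assumption are illustrative, not from the patent.

```python
import numpy as np

def estimate_light(sensor_dirs, sensor_lux):
    """Fit one dominant directional light from light-sensor readings:
    a lux-weighted average of the sensors' facing directions."""
    d = np.average(sensor_dirs, axis=0, weights=sensor_lux)
    return d / np.linalg.norm(d), float(np.max(sensor_lux))

def shade_point(normal, albedo, light_dir, intensity):
    # Lambertian term so the virtual object picks up the sensed lighting.
    n_dot_l = max(0.0, float(np.dot(normal, light_dir)))
    return albedo * intensity * n_dot_l
```

In a full system the estimated light would be registered into the depth-sensor-derived 3D model so the lighting stays consistent as the device moves through the environment.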

A RENDERING PROCESS AND SYSTEM

A rendering process and system that may be used to composite virtual objects into panoramic video to provide a virtual reality and augmented reality experience. The process includes receiving low dynamic range (LDR) video data, e.g. 360 video; generating radiance maps, such as diffuse and specular maps, from the LDR data; inverse tone mapping the LDR data of the maps to generate high dynamic range (HDR) data for the maps; and receiving at least one virtual object and applying image based lighting (IBL) to the virtual object using the HDR data of the maps. A perceptually based threshold is also applied to the radiance maps to detect prominent pixels, and the prominent pixels are used as salient lights for image based shadowing (IBS) associated with the virtual object. Objects are composited into 360 video in real time using IBL and IBS without precomputation, allowing user interaction with the objects.
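The two steps specific to this pipeline, inverse tone mapping and salient-light detection, can be sketched as follows. The expansion curve (inverse gamma plus a highlight boost) and the percentile threshold are assumptions standing in for the patent's unspecified operators; only the overall LDR-to-HDR-to-salient-lights flow comes from the abstract.

```python
import numpy as np

def inverse_tone_map(ldr, gamma=2.2, boost=4.0):
    """Expand [0, 1] LDR values toward HDR: linearize with an inverse
    gamma curve, then boost the brightest range so clipped highlights
    regain energy."""
    linear = np.clip(ldr, 0.0, 1.0) ** gamma
    return linear * (1.0 + boost * linear ** 2)

def salient_lights(hdr, percentile=99.0):
    """Threshold standing in for the perceptually based one: pixels above
    the given luminance percentile act as salient lights for IBS."""
    thresh = np.percentile(hdr, percentile)
    return np.argwhere(hdr >= thresh)
```

The expanded map would feed IBL directly, while the salient-light pixel positions define shadow-casting directions for IBS.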

Three-dimensional character rendering system using general purpose graphic processing unit, and processing method thereof
10403023 · 2019-09-03 ·

The present invention relates to a system for rendering a three-dimensional character and a method for processing thereof. The system renders a three-dimensional character model, for example, a skin having a multilayered structure such as a person's face, to enable realistic skin expressions according to reflection and scattering of light using a GPGPU. To this end, the system includes a plurality of GPGPU modules corresponding to a render pass. According to the present invention, an irradiance texture of an image for each layer of the skin is created and processed using the GPGPU without passing through a render pass of a rendering library, thereby reducing the load on the rendering system and enabling realistic skin expressions in real time.
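The per-layer irradiance-texture idea resembles the well-known sum-of-Gaussians approximation of skin subsurface scattering: each layer blurs the irradiance texture by a different radius, and a weighted blend of the blurred layers approximates light diffusing through skin. The sketch below uses NumPy as a stand-in for the GPGPU compute passes; the layer widths and weights are illustrative, not taken from the patent.

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur (rows, then columns) of a 2D texture."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    k /= k.sum()  # normalize so a uniform texture stays uniform
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)

def layered_skin_irradiance(irradiance,
                            layers=((0.5, 0.4), (2.0, 0.35), (6.0, 0.25))):
    """Blur the irradiance texture once per skin layer and blend; each
    (sigma, weight) pair stands in for one layer's diffusion profile."""
    return sum(w * gaussian_blur(irradiance, s) for s, w in layers)
```

On the GPGPU each blur would be one compute pass over the irradiance texture, which is what lets the system skip the rendering library's render pass.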

Participating media baking

According to one embodiment, a method includes identifying a scene to be rendered, pre-computing one or more lighting elements within the scene, including creating a plurality of light scattering tables, performing, during the pre-computing, a computation of light extinction and light in-scattering within participating media of the scene, utilizing the plurality of light scattering tables, and during a ray tracing of the scene, approximating spatially heterogeneous media of the scene as spatially homogeneous media of the scene by performing a volume intersection for each light ray associated with the spatially heterogeneous media of the scene to determine a homogeneous scattering coefficient for the light ray, and applying to the spatially heterogeneous media of the scene one of the plurality of light scattering tables, where each of the plurality of light scattering tables corresponds to a single homogeneous scattering coefficient, and a table lookup is adjusted for the one of the plurality of light scattering tables utilizing an analytic correction factor in order to apply the one of the plurality of light scattering tables with a different homogeneous scattering coefficient.
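One plausible form of the analytic correction factor mentioned above: for a homogeneous medium with unit albedo, single-scattered radiance depends only on the optical depth `sigma * d`, so a table baked for one coefficient can serve another by rescaling the lookup coordinate. This specific rescaling is an assumption illustrating the concept, not the patent's disclosed factor.

```python
import numpy as np

def bake_table(sigma, depths):
    # Single-scattering radiance for a homogeneous medium with
    # sigma_s = sigma_t = sigma: a function of sigma * depth only.
    return 1.0 - np.exp(-sigma * depths)

def lookup_with_correction(table, depths, sigma_table, sigma_query, depth):
    # Analytic correction: rescale the depth coordinate by
    # sigma_query / sigma_table so one baked table answers queries
    # for a different homogeneous scattering coefficient.
    return float(np.interp(depth * sigma_query / sigma_table, depths, table))
```

This avoids baking one table per coefficient encountered at trace time; a small set of tables plus the correction covers the range in between.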
