G06T15/55

Method and algorithm for simulating the influence of thermally coupled surface radiation in casting processes
10970429 · 2021-04-06

A method for simulating the influence of thermally coupled surface radiation on a solid body, which solid body has at least one surface capable of being exposed to radiation, by calculating the radiative exchange between grey, diffuse surfaces, characterized in that the surface or surfaces to be exposed to radiation are subdivided adaptively and hierarchically into radiation tiles of the same or virtually the same radiation intensity, and the surface temperature resulting from irradiation is obtained by means of a hierarchical view factor method, which view factor method comprises the evaluation of a solid angle integral using a primary solid angle subdivision, which primary solid angle subdivision comprises a homogeneous view factor discretization, wherein each solid angle subdivision is adaptively and hierarchically discretized into its partial areas by spherical projection, and wherein the total of all partial amounts of that solid angle integral can be determined by means of ray tracing.
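The numeric core of this claim, evaluating a solid angle integral by ray tracing to obtain a view factor, can be illustrated with a minimal Monte Carlo sketch. The geometry here (a differential element facing a coaxial disk, for which the analytic view factor r²/(r²+h²) is known) and all function names are illustrative assumptions, not the patent's adaptive hierarchical scheme:

```python
import math
import random

def view_factor_mc(radius, height, n_rays=200_000, seed=0):
    """Monte Carlo estimate of the view factor from a differential surface
    element at the origin (normal +z) to a coaxial disk of the given radius
    at the given height.  Rays are cosine-weighted over the hemisphere, so
    the view factor is simply the fraction of rays that hit the disk."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_rays):
        # Cosine-weighted hemisphere sample (Malley's method): pick a
        # uniform point on the unit disk, then project it up to the sphere.
        u1, u2 = rng.random(), rng.random()
        r = math.sqrt(u1)
        phi = 2.0 * math.pi * u2
        dx, dy = r * math.cos(phi), r * math.sin(phi)
        dz = math.sqrt(max(0.0, 1.0 - u1))
        if dz <= 0.0:
            continue  # grazing ray, never reaches the plane z = height
        # Trace the ray to the plane z = height and test the disk hit.
        t = height / dz
        if math.hypot(t * dx, t * dy) <= radius:
            hits += 1
    return hits / n_rays
```

For radius = height = 1 the analytic view factor is 1/(1+1) = 0.5, which the estimate approaches as the ray count grows; the patent's hierarchical subdivision would replace this flat sampling with adaptive refinement of the solid angle.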

IMAGE PROCESSING TO DETERMINE RADIOSITY OF AN OBJECT
20210104094 · 2021-04-08

The present disclosure provides a method (500) comprising receiving (510) images (e.g., 125A to 125G) of an object (110), the images (e.g., 125A to 125G) comprising first and second images. The method (500) then determines (530) feature points (810, 820) of the object (110) using the first images and determines (530, 540, 550) a three-dimensional reconstruction of a scene containing the object (110). The method (500) then proceeds with aligning (560) the three-dimensional reconstruction with a three-dimensional mesh model of the object (110). The alignment can then be used to map (570) pixel values of pixels of the second images onto the three-dimensional mesh model. The directional radiosity of each mesh element of the three-dimensional mesh model can then be determined (580), and the hemispherical radiosity of the object (110) is determined (590) based on the determined directional radiosity.
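The final step, aggregating directional radiosity into hemispherical radiosity, amounts to integrating the observed radiance over the hemisphere above a mesh element, B = ∫ L(ω) cosθ dω. The sketch below is a generic Monte Carlo estimator for that integral, with an illustrative `radiance_fn` standing in for the per-direction pixel lookups of the disclosed pipeline; none of the names come from the patent:

```python
import math
import random

def hemispherical_radiosity(radiance_fn, n_samples=100_000, seed=1):
    """Estimate hemispherical radiosity B = integral of L(w) * cos(theta)
    over the hemisphere for one mesh element, by Monte Carlo with directions
    sampled uniformly in solid angle.  `radiance_fn(direction)` returns the
    directional radiance seen from that direction (e.g. a camera pixel)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        # Uniform hemisphere direction: cos(theta) is uniform in [0, 1].
        cos_t = rng.random()
        sin_t = math.sqrt(1.0 - cos_t * cos_t)
        phi = 2.0 * math.pi * rng.random()
        d = (sin_t * math.cos(phi), sin_t * math.sin(phi), cos_t)
        total += radiance_fn(d) * cos_t
    # pdf of a uniformly sampled hemisphere direction is 1 / (2*pi)
    return 2.0 * math.pi * total / n_samples
```

As a sanity check, a constant radiance L over the hemisphere yields B = πL, the familiar Lambertian relation.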

AUGMENTED REALITY LIGHTING EFFECTS
20210082198 · 2021-03-18

The present invention embraces a system, device, and method for adding lighting effects to augmented reality (AR) content (i.e., virtual objects). Light sensors in an augmented reality (AR) system monitor an environment's lighting conditions to acquire lighting data that can be used to create (or update) virtual light sources. Depth sensors in the AR system sense the environment to acquire mapping data that can be used to create a 3D model of the environment while tracking the system's location within the environment. Algorithms running on a processor may then add the virtual light sources to the 3D model of the environment so that, when AR content is created, lighting effects corresponding to the virtual light sources can be added. The resulting AR content with virtual lighting effects appears more realistic to a user.
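Once virtual light sources have been placed in the 3D environment model, applying their effect to a virtual object can be as simple as diffuse (Lambertian) point-light shading. The sketch below shows one such shading step under that assumption; the `VirtualLight` type and inverse-square falloff are illustrative choices, not details taken from the patent:

```python
import math
from dataclasses import dataclass

@dataclass
class VirtualLight:
    """A point light estimated from the AR system's light-sensor data:
    a position in the 3D environment model and a scalar intensity."""
    position: tuple
    intensity: float

def lambert_shade(point, normal, lights):
    """Diffuse shading of one point on a virtual object: the sum over all
    virtual lights of intensity * max(0, n . l) / distance^2."""
    total = 0.0
    for light in lights:
        to_light = [light.position[i] - point[i] for i in range(3)]
        dist = math.sqrt(sum(c * c for c in to_light))
        if dist == 0.0:
            continue  # light coincides with the shaded point; skip it
        n_dot_l = sum((to_light[i] / dist) * normal[i] for i in range(3))
        total += light.intensity * max(0.0, n_dot_l) / (dist * dist)
    return total
```

For example, a light of intensity 4 placed two units directly above an upward-facing surface point contributes 4 · 1 / 2² = 1.0 to its shade; a full renderer would fold this term into the material's albedo per color channel.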

DEFORMABLE NEURAL RADIANCE FIELDS

Techniques of image synthesis using a neural radiance field (NeRF) include generating a deformation model of movement experienced by a subject in a non-rigidly deforming scene. For example, when an image synthesis system uses NeRFs, the system takes as input multiple poses of subjects for training data. In contrast to conventional NeRFs, the technical solution first expresses the positions of the subjects from various perspectives in an observation frame. The technical solution then involves deriving a deformation model, i.e., a mapping between the observation frame and a canonical frame in which the subject's movements are taken into account. This mapping is accomplished using latent deformation codes for each pose that are determined using a multilayer perceptron (MLP). A NeRF is then derived from positions and cast ray directions in the canonical frame using another MLP. New poses for the subject may then be derived using the NeRF.
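The two-MLP wiring described above, a deformation network that warps observation-frame points into a canonical frame, followed by a canonical NeRF queried with position and view direction, can be sketched with toy networks. The layer sizes, latent-code width, and activations below are arbitrary placeholders, not the architecture from the disclosure:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(sizes):
    """A toy MLP: a list of (W, b) layers, with ReLU between them."""
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    for i, (w, b) in enumerate(params):
        x = x @ w + b
        if i < len(params) - 1:
            x = np.maximum(x, 0.0)  # ReLU on hidden layers only
    return x

# Deformation MLP: (observation-frame position ++ latent code) -> offset
deform = mlp([3 + 8, 64, 3])
# Canonical NeRF MLP: (canonical position ++ view direction) -> (sigma, r, g, b)
nerf = mlp([3 + 3, 64, 4])

def render_sample(x_obs, view_dir, latent_code):
    """Warp one observation-frame sample point into the canonical frame via
    the deformation MLP, then query the canonical NeRF for density/color."""
    offset = forward(deform, np.concatenate([x_obs, latent_code]))
    x_canon = x_obs + offset
    out = forward(nerf, np.concatenate([x_canon, view_dir]))
    sigma = np.exp(out[0])                 # density constrained positive
    rgb = 1.0 / (1.0 + np.exp(-out[1:]))   # color constrained to (0, 1)
    return sigma, rgb
```

In training, one latent deformation code per input pose is optimized jointly with both networks, so the canonical NeRF only ever sees the subject in a single, motion-factored frame.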

Augmented reality lighting effects
10867450 · 2020-12-15

The present invention embraces a system, device, and method for adding lighting effects to augmented reality (AR) content (i.e., virtual objects). Light sensors in an augmented reality (AR) system monitor an environment's lighting conditions to acquire lighting data that can be used to create (or update) virtual light sources. Depth sensors in the AR system sense the environment to acquire mapping data that can be used to create a 3D model of the environment while tracking the system's location within the environment. Algorithms running on a processor may then add the virtual light sources to the 3D model of the environment so that, when AR content is created, lighting effects corresponding to the virtual light sources can be added. The resulting AR content with virtual lighting effects appears more realistic to a user.
