G06T15/506

Deep relightable appearance models for animatable face avatars

A method for providing a relightable avatar of a subject to a virtual reality application is provided. The method includes retrieving multiple images including multiple views of a subject and generating an expression-dependent texture map and a view-dependent texture map for the subject, based on the images. The method also includes generating, based on the expression-dependent texture map and the view-dependent texture map, a view of the subject illuminated by a light source selected from an environment in an immersive reality application, and providing the view of the subject to the immersive reality application running on a client device. A non-transitory, computer-readable medium storing instructions and a system that executes the instructions to perform the above method are also provided.
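The relighting step described above can be illustrated with a minimal sketch: blend an expression-dependent and a view-dependent texture map into an albedo, then shade each texel against a light direction chosen from the environment. All function and parameter names (`relit_view`, `blend`, the Lambertian shading model) are illustrative assumptions, not the patent's actual method.

```python
import numpy as np

def relit_view(expr_tex, view_tex, normals, light_dir, blend=0.5):
    """Blend expression- and view-dependent texture maps, then apply a
    simple Lambertian term for a directional light selected from the
    environment (a toy stand-in for the patent's learned model)."""
    albedo = blend * expr_tex + (1.0 - blend) * view_tex
    l = np.asarray(light_dir, dtype=float)
    l /= np.linalg.norm(l)
    # n·l shading per texel, clamped to zero for back-facing texels
    ndotl = np.clip(normals @ l, 0.0, None)
    return albedo * ndotl[..., None]

# toy 2x2-texel maps, all texels facing +z
expr = np.full((2, 2, 3), 0.8)
view = np.full((2, 2, 3), 0.4)
n = np.zeros((2, 2, 3)); n[..., 2] = 1.0
out = relit_view(expr, view, n, light_dir=(0.0, 0.0, 1.0))
```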

Acquisition of optical characteristics

An apparatus (1, 5, 6) is described which includes two or more colour displays (2) arranged to provide piece-wise continuous illumination of a volume. The apparatus (1, 5, 6) also includes one or more cameras (3). Each camera (3) is arranged to image the volume. The apparatus (1, 5, 6) is configured to control the two or more colour displays (2) and the one or more cameras (3) to illuminate the volume with each of two or more illumination conditions. The apparatus (1, 5, 6) is also configured to obtain two or more sets of images. Each set of images is obtained during illumination of the volume with one or more corresponding illumination conditions. The two or more sets of images include sufficient information for calculation of a reflectance map and a photometric normal map of an object or subject (4) positioned within the volume. When viewed from the volume, the apparatus (1, 5, 6) only provides direct illumination of the volume from angles within a zone of a hemisphere. The zone is less than a hemisphere and corresponds to a first range (Δα) of latitudinal angles and a second range (Δβ) of longitudinal angles. Each of the first (Δα) and second ranges (Δβ) is no more than 17π/18.
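The claim that the image sets contain "sufficient information" for a photometric normal map can be made concrete with one common display-based scheme: capture the subject under linear gradient illumination along each axis plus a full-on condition, and recover normals from the ratios. The specific illumination conditions and the function below are assumptions for illustration, not the patent's exact procedure.

```python
import numpy as np

def photometric_normals(i_x, i_y, i_z, i_full, eps=1e-8):
    """Recover per-pixel photometric normals from images captured under
    gradient illumination along x, y, z plus a full-on condition.
    A gradient pattern weights each incoming direction by its coordinate,
    so the ratio to the full-on image encodes one normal component."""
    nx = 2.0 * i_x / (i_full + eps) - 1.0
    ny = 2.0 * i_y / (i_full + eps) - 1.0
    nz = 2.0 * i_z / (i_full + eps) - 1.0
    n = np.stack([nx, ny, nz], axis=-1)
    return n / (np.linalg.norm(n, axis=-1, keepdims=True) + eps)

# a pixel whose true normal points straight at the rig (+z)
n = photometric_normals(0.5, 0.5, 1.0, 1.0)
```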

SYSTEMS AND METHODS FOR PHYSICALLY-BASED NEURAL FACE SHADER VIA VOLUMETRIC LIGHTMAPS

Methods and systems are provided for rendering photo-realistic images of a subject or an object using a differentiable neural network for predicting indirect light behavior. In one example, the differentiable neural network outputs a volumetric light map comprising a plurality of spherical harmonic representations. Further, using a reflectance neural network, roughness and scattering coefficients associated with the subject or the object are computed. The volumetric light map, as well as the roughness and scattering coefficients, are then utilized for rendering a final image under one or more of a desired lighting condition, desired camera view angle, and/or with a desired visual effect (e.g., expression change).
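A volumetric light map of spherical harmonic representations is typically queried by evaluating the SH basis in a direction and taking a dot product with stored coefficients. Below is a sketch of a standard 9-coefficient (band 0..2) real SH evaluation; the patent does not fix the SH order, so the band count here is an assumption.

```python
import numpy as np

def eval_sh9(coeffs, d):
    """Evaluate a 9-coefficient real spherical harmonic expansion in
    direction d — the kind of per-voxel query a volumetric light map
    built from SH representations would answer."""
    x, y, z = d / np.linalg.norm(d)
    basis = np.array([
        0.282095,                                   # l=0
        0.488603 * y, 0.488603 * z, 0.488603 * x,   # l=1
        1.092548 * x * y, 1.092548 * y * z,         # l=2
        0.315392 * (3 * z * z - 1),
        1.092548 * x * z, 0.546274 * (x * x - y * y),
    ])
    return float(coeffs @ basis)

# a DC-only lightmap entry is constant over all directions
dc_only = eval_sh9(np.array([1.0] + [0.0] * 8), np.array([0.0, 0.0, 1.0]))
```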

METHOD FOR GENERATING IMAGES OF A VEHICLE-INTERIOR CAMERA

A method for generating synthetic images, each image simulating an image of an individual acquired by a vehicle interior camera, including: generating a plurality of models of individuals, each model including a three-dimensional representation of an individual's head; receiving a set of variable parameters and a probability distribution associated with each parameter; generating a set of configurations, each configuration corresponding to a combination of values or states taken by each parameter, such that the set of configurations is representative of the probability distribution of each parameter; generating, for each model of an individual, a set of images simulating images of the model of an individual acquired by a vehicle interior camera, where each image corresponds to a configuration generated for a variable parameter, and where each image further includes the three-dimensional positions of a set of points characterizing the individual's head; and storing all the images in a memory.
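The step of generating configurations representative of each parameter's probability distribution can be sketched as weighted sampling over discrete parameter values. The parameter names (`glasses`, `head_yaw`) and the discrete-distribution form are illustrative assumptions.

```python
import random

def sample_configurations(params, n, seed=0):
    """Draw n configurations, each a dict of parameter values, so that
    empirical frequencies follow the per-parameter probability
    distributions supplied with the variable parameters."""
    rng = random.Random(seed)
    return [
        {name: rng.choices(values, weights=probs)[0]
         for name, (values, probs) in params.items()}
        for _ in range(n)
    ]

params = {
    "glasses":  ((True, False), (0.3, 0.7)),
    "head_yaw": ((-30, 0, 30),  (0.25, 0.5, 0.25)),
}
configs = sample_configurations(params, 1000)
```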

3-D graphics rendering with implicit geometry

Aspects relate to tracing rays in 3-D scenes that comprise objects that are defined by or with implicit geometry. In an example, a trapping element defines a portion of 3-D space in which implicit geometry exists. When a ray is found to intersect a trapping element, a trapping element procedure is executed. The trapping element procedure may comprise marching a ray through a 3-D volume and evaluating a function that defines the implicit geometry for each current 3-D position of the ray. An intersection detected with the implicit geometry may be found concurrently with intersections for the same ray with explicitly-defined geometry, and data describing these intersections may be stored with the ray and resolved.
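The trapping-element procedure of marching a ray and evaluating the implicit function at each position can be sketched with sphere tracing, where the implicit geometry is given as a signed distance function. Sphere tracing is one concrete marching strategy; the patent's procedure is more general.

```python
import math

def march_ray(origin, direction, sdf, t_max=100.0, eps=1e-4, max_steps=256):
    """Sphere-trace a ray against an implicit surface defined by a signed
    distance function: evaluate the function at the current position and
    advance by the returned distance, which is guaranteed surface-free."""
    t = 0.0
    for _ in range(max_steps):
        p = tuple(o + t * d for o, d in zip(origin, direction))
        dist = sdf(p)
        if dist < eps:
            return t          # hit: parametric distance along the ray
        t += dist             # safe step: no surface closer than dist
        if t > t_max:
            break
    return None               # ray leaves the trapped volume without a hit

# unit sphere at the origin as the implicit geometry
sphere = lambda p: math.sqrt(p[0]**2 + p[1]**2 + p[2]**2) - 1.0
t_hit = march_ray((0.0, 0.0, -3.0), (0.0, 0.0, 1.0), sphere)
```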

FULLY-FUSED NEURAL NETWORK EXECUTION

A fully-connected neural network may be configured for execution by a processor as a fully-fused neural network by limiting slow global memory accesses to reading and writing inputs to and outputs from the fully-connected neural network. The computational cost of a fully-connected neural network scales quadratically with its width, whereas its memory traffic scales linearly. Modern graphics processing units typically have much greater computational throughput compared with memory bandwidth, so that for narrow, fully-connected neural networks, the linear memory traffic is the bottleneck. The key to improving performance of the fully-connected neural network is to minimize traffic to slow “global” memory (off-chip memory and high-level caches) and to fully utilize fast on-chip memory (low-level caches, “shared” memory, and registers), which is achieved by the fully-fused approach. A real-time neural radiance caching technique for path-traced global illumination is implemented using the fully-fused neural network for caching scattered radiance components of global illumination.
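The quadratic-compute versus linear-traffic argument can be made concrete with a toy per-layer cost model: matrix-multiply FLOPs grow with width squared, while the global-memory bytes for activations grow only with width, and fusing keeps intermediate activations on-chip. The numbers and function name below are illustrative, not the patent's analysis.

```python
def layer_cost(width, batch, bytes_per_elem=2):
    """Toy cost model for one fully-connected layer of a fused network:
    returns (FLOPs, global-memory bytes for a naive layer-by-layer
    implementation, global-memory bytes when activations stay on-chip)."""
    flops = 2 * batch * width * width                  # multiply-adds of the matmul
    naive_bytes = 2 * batch * width * bytes_per_elem   # read inputs + write outputs
    fused_bytes = 0                                    # intermediates never leave chip
    return flops, naive_bytes, fused_bytes

# a narrow 64-wide layer: compute per byte is only width/bytes_per_elem,
# so the naive version is bandwidth-bound on typical GPUs
flops, naive_bytes, fused_bytes = layer_cost(64, 128)
```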

Systems and methods for visualization of building structures
11704866 · 2023-07-18

Methods and systems for real-time visualization of building structures are disclosed. A computing system may calculate physical illumination characteristics for each of a plurality of predefined virtual external building-surface elements layered in simulation at specified surface locations of a virtual three-dimensional (3D) model of a building structure, wherein the virtual 3D model is constructed based on data descriptive of the building structure. Each of the plurality of predefined virtual external building-surface elements may be associated with its calculated illumination characteristics in a database. A spatially-manipulable rendered image of the building structure may be displayed on an interactive display in real-time based on the virtual 3D model. On the interactive display device, one or more of the plurality of the predefined virtual external building-surface elements may be rendered in real-time at respectively specified locations on the rendered image. On the interactive display device, illumination of each of the one or more of the plurality of the predefined virtual external building-surface elements may be simulated in real-time based on its associated calculated illumination characteristics, its respectively specified location on the rendered image, and a specification of environmental illumination conditions.
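The precompute-then-associate pattern described above can be sketched as building a lookup table of illumination values keyed by surface element, so the interactive display only reads cached results. The Lambertian model and all names (`precompute_illumination`, the `env` keys) are illustrative assumptions.

```python
def precompute_illumination(elements, env):
    """Calculate an illumination value for each virtual building-surface
    element (here: ambient plus a Lambertian sun term from the element's
    normal) and associate it with the element in a database dict."""
    db = {}
    for elem_id, normal in elements.items():
        ndotl = max(0.0, sum(n * l for n, l in zip(normal, env["sun_dir"])))
        db[elem_id] = env["ambient"] + env["sun_intensity"] * ndotl
    return db

elements = {"roof_panel": (0.0, 0.0, 1.0), "north_wall": (0.0, -1.0, 0.0)}
env = {"sun_dir": (0.0, 0.0, 1.0), "ambient": 0.1, "sun_intensity": 0.9}
db = precompute_illumination(elements, env)  # real-time rendering reads db
```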

Three-dimensional environment analysis method and device, computer storage medium and wireless sensor system
11704916 · 2023-07-18

A three-dimensional environment analysis method is disclosed. The method includes (i) receiving original point cloud data of a working environment, (ii) processing a map constructed on the basis of the original point cloud data in order to separate out a ground surface, a wall surface and an obstacle in the working environment, (iii) pairing the ground surface with the wall surface according to the degree of proximity between the ground surface and wall surface that are separated out to form one or more adjacent ground-wall pair sets, and (iv) subjecting the one or more adjacent ground-wall pair sets to ray tracing analysis in order to obtain a line-of-sight zone and a non-line-of-sight zone in the working environment. A three-dimensional environment analysis device, a computer storage medium and a wireless sensor system are also disclosed.
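The ray-tracing analysis in step (iv) amounts to casting rays between candidate points and classifying the target as line-of-sight or non-line-of-sight depending on whether the ray crosses an occupied cell. The voxel-set representation below is a simplified stand-in for the ground-wall-pair geometry, and all names are illustrative.

```python
def line_of_sight(occupied, a, b, steps=64):
    """Classify point b as line-of-sight (True) or non-line-of-sight
    (False) from point a by stepping a ray through a set of occupied
    voxel cells separated out of the environment."""
    for i in range(1, steps):
        t = i / steps
        p = tuple(int(round(a[k] + t * (b[k] - a[k]))) for k in range(3))
        if p in occupied and p != a and p != b:
            return False      # ray blocked: non-line-of-sight zone
    return True               # unobstructed: line-of-sight zone

# a wall of occupied voxels at x = 5
wall = {(5, y, z) for y in range(10) for z in range(3)}
```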

METHOD AND APPARATUS FOR LIGHT ESTIMATION
20230019751 · 2023-01-19

A processor-implemented method for light estimation includes: estimating light information corresponding to an input image using a light estimation model; detecting a reference object in the input image; determining object information of the reference object and plane information of a reference plane supporting the reference object; rendering a virtual object corresponding to the reference object based on the light information, the object information, and the plane information; and training the light estimation model by updating the light estimation model based on a result of comparing the reference object and the rendered virtual object.
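The render-compare-update training loop above can be sketched in one dimension: render a virtual object under the current light estimate, compare against the observed reference object, and update the light model from the error. The patent's light estimation model is a neural network; a single intensity parameter with an analytic gradient stands in here, and all names are assumptions.

```python
def train_light_estimate(observed, albedo, lr=0.1, iters=200):
    """Toy version of the training loop: the 'renderer' is rendered =
    light * albedo, the loss is (rendered - observed)**2, and the light
    estimate is updated by gradient descent on that loss."""
    light = 0.0
    for _ in range(iters):
        rendered = light * albedo                   # render virtual object
        grad = 2 * albedo * (rendered - observed)   # d(loss)/d(light)
        light -= lr * grad                          # update the light model
    return light

# observed reference pixel 0.6 with albedo 0.5 implies light ~ 1.2
light = train_light_estimate(observed=0.6, albedo=0.5)
```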

METHOD FOR GENERATING AND RECOGNIZING DEFORMABLE FIDUCIAL MARKERS BASED ON ARTIFICIAL INTELLIGENCE IN END-TO-END MANNER AND SYSTEM THEREOF
20230016057 · 2023-01-19

The inventive concept relates to a technology that recognizes widely deformable markers with high accuracy in an end-to-end manner of message encoding and decoding, and that generates and recognizes deformable fiducial markers based on artificial intelligence. The technology includes generating, by a marker generator, a unique marker pattern as a fiducial marker from an input binary message; rendering, by an imaging simulator, an image by generating a training dataset of realistic scene images with the generated fiducial marker; and training a marker detector with the rendered images.
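The message-to-pattern direction of the pipeline can be illustrated with a toy generator that maps an input binary message to a square bit pattern. The patent's generator is a learned neural network; the tiling scheme and names below only illustrate the encoding interface, not the actual method.

```python
import numpy as np

def generate_marker(message_bits, size=8):
    """Toy marker generator: map an input binary message to a size x size
    bit pattern by tiling the message (a learned generator would produce
    a pattern that is robust to deformation instead)."""
    bits = np.resize(np.array(message_bits, dtype=np.uint8), size * size)
    return bits.reshape(size, size)

marker = generate_marker([1, 0, 1, 1])  # pattern to render into training scenes
```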