Patent classifications
G06T15/04
Virtual 3D communications with actual to virtual cameras optical axes compensation
A method for conducting a three-dimensional (3D) video conference between multiple participants. The method may include determining, for each participant, updated 3D participant representation information that represents the participant within the virtual 3D video conference environment, wherein the determining comprises compensating for a difference between the actual optical axis of the camera that acquires images of the participant and the desired optical axis of a virtual camera; and generating, for at least one participant, an updated representation of the virtual 3D video conference environment that reflects the updated 3D participant representation information for at least some of the multiple participants.
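The core of the claimed compensation is a rigid rotation that maps the actual camera's optical axis onto the desired virtual-camera axis. A minimal sketch of that step, using Rodrigues' rotation formula on a participant's point cloud (the function names and the point-cloud representation are assumptions, not the patent's implementation):

```python
import numpy as np

def axis_alignment_rotation(actual_axis, desired_axis):
    """Rotation matrix taking the actual camera optical axis onto the
    desired virtual-camera axis (Rodrigues' formula)."""
    a = actual_axis / np.linalg.norm(actual_axis)
    d = desired_axis / np.linalg.norm(desired_axis)
    v = np.cross(a, d)
    c = float(np.dot(a, d))
    if np.allclose(v, 0.0):
        if c > 0:
            return np.eye(3)  # axes already aligned
        # Anti-parallel axes: rotate 180 degrees about any axis orthogonal to `a`.
        u = np.cross(a, np.eye(3)[np.argmin(np.abs(a))])
        u /= np.linalg.norm(u)
        return 2.0 * np.outer(u, u) - np.eye(3)
    K = np.array([[0.0, -v[2], v[1]],
                  [v[2], 0.0, -v[0]],
                  [-v[1], v[0], 0.0]])
    return np.eye(3) + K + (K @ K) * ((1.0 - c) / (v @ v))

def compensate(points, actual_axis, desired_axis):
    """Re-pose a participant's 3D points as if acquired along the virtual axis."""
    R = axis_alignment_rotation(actual_axis, desired_axis)
    return points @ R.T
```

In practice the patent's "compensation" would operate on the full 3D participant representation, not a bare point array; the rotation above only illustrates the geometric relationship between the two optical axes.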
DIGITAL REALITY PLATFORM PROVIDING DATA FUSION FOR GENERATING A THREE-DIMENSIONAL MODEL OF THE ENVIRONMENT
The present invention relates to three-dimensional reality capture of an environment, wherein data from various kinds of measurement devices are fused to generate a three-dimensional model of the environment. In particular, the invention relates to a computer-implemented method for the registration and visualization of a 3D model provided by various types of reality capture devices and/or by various surveying tasks.
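Registration of heterogeneous capture data reduces, at its simplest, to expressing each device's measurements in one common model frame via that device's registered rigid pose. A minimal sketch under that assumption (the function names, and the simplification that each device pose is already known, are hypothetical; the patent covers far more of the pipeline):

```python
import numpy as np

def to_world(points, R, t):
    """Map device-local points into the common model frame (rigid transform)."""
    return points @ R.T + t

def fuse(scans):
    """Fuse scans from heterogeneous reality capture devices into one cloud.
    `scans` is a list of (points, R, t) tuples, one per registered device."""
    return np.vstack([to_world(points, R, t) for points, R, t in scans])
```

A real data-fusion platform would additionally estimate the poses themselves (e.g. by iterative alignment against shared geometry) and reconcile the differing noise characteristics of laser scanners, total stations, and cameras.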
METHOD AND APPARATUS FOR PROCESSING HUMAN BODY MODEL DATA, ELECTRONIC DEVICE AND STORAGE MEDIUM
A method and apparatus for processing human body model data, an electronic device and a storage medium are provided. The method includes: obtaining 3D human body model data, and classifying the 3D human body model data into multiple data sets according to a predetermined classification condition, wherein the predetermined classification condition includes medical anatomy category information and art resource category information; determining, for each of the data sets, a duplicate resource in the data set, and obtaining reorganized data sets from which the duplicate resource is removed; and packing each duplicate resource and each reorganized data set into a respective data package, and storing all of the data packages.
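The deduplication step can be sketched as: count how many classified data sets each resource appears in, pull out the shared ones as a separate package, and keep the remainder per category. A minimal illustration, treating resources as hashable identifiers (the function name and the dict-of-lists representation are assumptions for illustration):

```python
from collections import Counter

def split_duplicates(data_sets):
    """Split classified model data into a shared 'duplicate' set plus
    reorganized per-category sets with the duplicates removed.
    `data_sets` maps a category name to its list of resource identifiers."""
    # Count each resource once per category it appears in.
    counts = Counter(r for resources in data_sets.values() for r in set(resources))
    duplicates = {r for r, n in counts.items() if n > 1}
    reorganized = {category: [r for r in resources if r not in duplicates]
                   for category, resources in data_sets.items()}
    return duplicates, reorganized
```

Each returned collection would then be packed into its own data package, so a resource shared between the medical-anatomy and art-resource categories is stored only once.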
FACE IMAGE PROCESSING METHOD AND APPARATUS, FACE IMAGE DISPLAY METHOD AND APPARATUS, AND DEVICE
A face image processing method and apparatus, a face image display method and apparatus, and a device are provided, belonging to the technical field of image processing. The method includes: acquiring a first face image of a person; invoking an age change model to predict a texture difference map of the first face image at a specified age, the texture difference map being used for reflecting a texture difference between a face texture in the first face image and a face texture of a second face image of the person at the specified age; and performing image processing on the first face image based on the texture difference map to obtain the second face image.
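The final step of this method, applying the predicted texture-difference map to the first face image, amounts to a signed per-pixel addition with clipping into the valid intensity range. A minimal sketch under that reading (the function name is hypothetical; the age change model that produces the difference map is not shown):

```python
import numpy as np

def apply_texture_difference(face, diff):
    """Apply a predicted texture-difference map to a face image to obtain
    the image at the specified age; results are clipped to the valid range."""
    # Widen to a signed type so negative differences and overflow are handled.
    out = face.astype(np.int16) + diff.astype(np.int16)
    return np.clip(out, 0, 255).astype(np.uint8)
```

Predicting a difference map rather than the aged image directly means the model only has to learn the texture change (wrinkles, skin tone), while identity-carrying structure comes unchanged from the input photo.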
LARGE-SCALE GENERATION OF PHOTOREALISTIC 3D MODELS
A system and methods are provided for large-scale generation of photorealistic 3D models, including training texture-map and 3D-mesh encoder and decoder neural networks, and training a sampler neural network to convert random seeds into input vectors for the texture-map and 3D-mesh decoder networks. Training the sampler neural network may include feeding random seeds to the sampler neural network, generating training 3D models from the texture-map and 3D-mesh decoders, rendering 2D images from the training 3D models, and back-propagating the output of a realism classifier and of a uniqueness function of the 2D images to the sampler neural network; and providing the trained sampler neural network with additional random seed inputs to generate multiple respective input vectors for the texture-map and 3D-mesh decoders, and responsively generating, by the texture-map and 3D-mesh decoders, multiple new 3D models.
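The sampler-training loop described above can be sketched in miniature: random seeds pass through the sampler into a frozen decoder, a realism score is computed on the output, and its gradient is propagated back only into the sampler weights. The sketch below is a heavily simplified stand-in (linear sampler and decoder, a toy quadratic realism score, no 2D rendering, and the uniqueness term omitted; every name and shape is hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen stand-ins for the pre-trained pieces.
W_dec = rng.standard_normal((8, 16))    # decoder: latent vector -> model features
target = rng.standard_normal(16)        # features the toy "realism classifier" prefers

def realism(feats):
    # Differentiable realism score: higher means more "realistic".
    return float(np.mean(-np.sum((feats - target) ** 2, axis=1)))

# Sampler under training: linear map from random seeds to decoder inputs.
W_s = 0.1 * rng.standard_normal((4, 8))
seeds = rng.standard_normal((32, 4))    # batch of random seeds

before = realism(seeds @ W_s @ W_dec)
for _ in range(300):
    feats = seeds @ W_s @ W_dec
    # Chain rule through the frozen decoder -- the analogue of
    # back-propagating the classifier output to the sampler network.
    g_feats = -2.0 * (feats - target) / len(seeds)
    W_s += 1e-3 * seeds.T @ (g_feats @ W_dec.T)
after = realism(seeds @ W_s @ W_dec)
```

Only `W_s` is updated; the decoder stays fixed, mirroring the patent's separation between the pre-trained decoders and the sampler being trained. In the full method, a uniqueness function would additionally push different seeds toward distinct models.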