G06T15/205

Iterative synthesis of views from data of a multi-view video
20230053005 · 2023-02-16

Synthesis of an image of a view from data of a multi-view video. The synthesis includes an image processing phase as follows: generating image synthesis data from texture data of at least one image of a view of the multi-view video; calculating an image of a synthesised view from the generated synthesis data and at least one image of a view of the multi-view video; analysing the image of the synthesised view relative to a synthesis performance criterion; if the criterion is met, delivering the image of the synthesised view; and if not, iterating the processing phase. The calculation of an image of a synthesised view at a current iteration includes modifying, based on synthesis data generated in the current iteration, an image of the synthesised view calculated during a processing phase preceding the current iteration.
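The iterate-until-a-quality-criterion-is-met loop described above can be sketched as a simple fixed-point refinement. This is a hypothetical illustration only: the correction term, threshold, and function name below are placeholders, not the patent's actual synthesis-data generation.

```python
import numpy as np

def synthesize_view(texture, threshold=1e-3, max_iters=50):
    # Hypothetical sketch: the "synthesis data" here is a simple correction
    # term derived from the texture, standing in for the patent's method.
    view = np.zeros_like(texture)            # initial synthesized-view estimate
    for i in range(max_iters):
        data = 0.5 * (texture - view)        # generate synthesis data
        view = view + data                   # modify the PREVIOUS iteration's image
        if float(np.abs(data).mean()) < threshold:  # performance criterion
            return view, i + 1               # criterion met: deliver the image
    return view, max_iters                   # stop after max_iters regardless
```

The key claimed behaviour is visible in the loop body: each iteration modifies the image computed by the preceding processing phase rather than synthesizing from scratch.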

VOLUMETRIC VIDEO FROM AN IMAGE SOURCE

A method for generating one or more 3D models of at least one living object from at least one 2D image comprising the at least one living object. The one or more 3D models can be modified and enhanced. The resulting one or more 3D models can be transformed into at least one 2D display image; the point of view of the output 2D image(s) can be different from that of the input 2D image(s).

3D BUILDING GENERATION USING TOPOLOGY
20230046926 · 2023-02-16 ·

Embodiments provide systems and methods for three-dimensional building generation from machine learning and topological models. The method uses topology models that are converted into vertices and edges. A BGAN (building generative adversarial network) is used to create fake vertices and edges. The BGAN then generates random samples from seen samples of different building structures based on the relationships between vertices and edges. The resulting embeddings are fed into a machine-trained network to create a digital structure from the image.

Method and system for determining a current gaze direction
11579687 · 2023-02-14

A method for determining a current gaze direction of a user in relation to a three-dimensional ("3D") scene, the 3D scene being sampled by a rendering function to produce a two-dimensional ("2D") projection image of the 3D scene, the sampling being performed based on a virtual camera that is in turn associated with a camera position and camera direction in the 3D scene. The method includes determining, by a gaze direction detection means, a first gaze direction of the user related to the 3D scene at a first gaze time point. The method includes determining a time-dependent virtual camera 3D transformation representing a change of the virtual camera position and/or virtual camera direction between the first gaze time point and a second sampling. The method includes determining the current gaze direction as a modified gaze direction calculated based on the first gaze direction and an inverse of the time-dependent virtual camera 3D transformation.
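The final step, correcting a detected gaze direction by the inverse of the virtual camera's motion, can be illustrated for the special case where that motion is a pure rotation. The function names and the 30-degree yaw are illustrative assumptions, not details from the patent.

```python
import numpy as np

def rotation_z(deg):
    # Rotation about the camera's vertical axis by `deg` degrees.
    t = np.radians(deg)
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def compensate_gaze(first_gaze_dir, camera_rotation):
    # Apply the inverse of the virtual camera's rotation between the first
    # gaze time point and the second sampling (for a rotation matrix, the
    # inverse is simply the transpose).
    return camera_rotation.T @ first_gaze_dir

gaze = np.array([1.0, 0.0, 0.0])    # gaze detected at the first time point
R = rotation_z(30.0)                # camera yawed 30 degrees since then
current = compensate_gaze(gaze, R)  # compensated current gaze direction
```

Applying the camera rotation to the compensated direction recovers the originally detected gaze, which is exactly the invertibility the method relies on.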

Dynamic image capturing apparatus and method using arbitrary viewpoint image generation technology

Embodiments relate to a dynamic image capturing method and apparatus using an arbitrary viewpoint image generation technology, in which an image of background content, either displayed on a background content display unit or implemented in a virtual space through a chroma-key screen, is generated with a view matching the view of the subject as seen from the viewpoint of a camera, and a final image including the image of the background content and a subject area is obtained.

Virtual 3D communications with actual to virtual cameras optical axes compensation

A method for conducting a three-dimensional (3D) video conference between multiple participants. The method may include determining, for each participant, updated 3D participant representation information that represents the participant within the virtual 3D video conference environment, wherein the determining comprises compensating for a difference between the actual optical axis of a camera that acquires images of the participant and a desired optical axis of a virtual camera; and generating, for at least one participant, an updated representation of the virtual 3D video conference environment that represents the updated 3D participant representation information for at least some of the multiple participants.
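The optical-axis compensation can be illustrated as finding the rotation that maps the camera's actual optical axis onto the desired virtual-camera axis. Rodrigues' formula is an assumed concrete choice here; the abstract does not specify the underlying math.

```python
import numpy as np

def axis_alignment_rotation(actual_axis, desired_axis):
    # Rotation that maps the actual optical axis onto the desired virtual
    # optical axis (Rodrigues' formula; axes assumed not exactly opposite).
    a = np.asarray(actual_axis, dtype=float)
    d = np.asarray(desired_axis, dtype=float)
    a = a / np.linalg.norm(a)
    d = d / np.linalg.norm(d)
    v = np.cross(a, d)                       # rotation axis (unnormalised)
    c = float(np.dot(a, d))                  # cosine of the rotation angle
    K = np.array([[0.0, -v[2], v[1]],
                  [v[2], 0.0, -v[0]],
                  [-v[1], v[0], 0.0]])       # cross-product matrix of v
    return np.eye(3) + K + (K @ K) / (1.0 + c)
```

Applying the returned matrix to a participant's representation would re-render it as if the camera had been aimed along the desired axis.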

Method and device for carrying out eye gaze mapping

The invention relates to a device and a method for performing an eye gaze mapping (M), in which at least one point of vision (B) and/or a viewing direction of at least one person (10) in relation to at least one scene recording (S) of a scene (12) viewed by the at least one person (10) is mapped onto a reference (R). At least a part of an algorithm (A1, A2, A3) for performing the eye gaze mapping (M) is thereby selected from multiple predetermined algorithms (A1, A2, A3) as a function of at least one parameter (P), and the eye gaze mapping (M) is performed on the basis of the at least one part of the algorithm (A1, A2, A3).
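The parameter-dependent selection among multiple predetermined algorithms amounts to a dispatch on the parameter (P). A minimal sketch, with hypothetical parameter values and algorithm stand-ins not taken from the patent:

```python
def map_by_homography(point_of_vision):
    # Stand-in for one predetermined mapping algorithm (e.g. A1).
    return ("homography", point_of_vision)

def map_by_feature_match(point_of_vision):
    # Stand-in for another predetermined mapping algorithm (e.g. A2).
    return ("features", point_of_vision)

ALGORITHMS = {
    "static_scene": map_by_homography,     # parameter value -> algorithm
    "dynamic_scene": map_by_feature_match,
}

def eye_gaze_mapping(point_of_vision, parameter):
    # Select (at least part of) the mapping algorithm as a function of the
    # parameter, then perform the mapping with the selected algorithm.
    return ALGORITHMS[parameter](point_of_vision)
```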

METHOD, APPARATUS, SYSTEM, AND STORAGE MEDIUM FOR 3D RECONSTRUCTION
20230040550 · 2023-02-09

A method, device, computer system and computer readable storage medium for 3D reconstruction are provided. The method comprises: performing a 3D reconstruction of an original 2D image of a target object to generate an original 3D object corresponding to the original 2D image; selecting a complementary view of the target object from candidate views based on a reconstruction quality of the original 3D object at the candidate views; obtaining a complementary 2D image of the target object based on the complementary view; performing a 3D reconstruction of the complementary 2D image to generate a complementary 3D object corresponding to the complementary 2D image; and fusing the original 3D object and the complementary 3D object to obtain a 3D reconstruction result of the target object.
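Two of the steps above, selecting the complementary view where the original reconstruction is weakest and fusing the two reconstructions, can be sketched as follows. The quality scores and fusion-by-deduplication are illustrative assumptions, not the patent's actual measures.

```python
import numpy as np

def select_complementary_view(quality_by_view):
    # Pick the candidate view where the original 3D reconstruction scored
    # worst; quality_by_view maps view name -> quality score (hypothetical).
    return min(quality_by_view, key=quality_by_view.get)

def fuse(original_pts, complementary_pts):
    # Naive fusion of two reconstructed point clouds: concatenate and drop
    # duplicate points (a stand-in for the patent's fusion step).
    merged = np.vstack([original_pts, complementary_pts])
    return np.unique(merged, axis=0)
```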

SYSTEM AND METHOD FOR PROVIDING PERSONALIZED TRANSACTIONS BASED ON 3D REPRESENTATIONS OF USER PHYSICAL CHARACTERISTICS

The disclosed systems, components, methods, and processing steps are directed to determining user-item fit characteristics of an item for a user body part by: accessing a three-dimensional (3D) reconstructed model of the user body part; accessing information about one or more 3D reference models of the item, the information for each 3D reference model including respective dimensional measurement, spatial, and geometrical attributes; performing a 3D matching process based on the 3D reconstructed model and the accessed information of the one or more 3D reference models to determine a best-fitting 3D reference model from the one or more 3D reference models; integrating the best-fitting 3D reference model with the 3D reconstructed model to provide a 3D best fit representation; and displaying the 3D best fit representation along with visual indications of user-item fit characteristics.
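The 3D matching step can be sketched as scoring each reference model by the distance between its dimensional measurements and the user's reconstructed measurements. The measurement vectors and size names below are hypothetical, standing in for the disclosure's dimensional, spatial, and geometrical attributes.

```python
import numpy as np

def best_fitting_model(user_measurements, reference_models):
    # Score each 3D reference model by the L2 distance between its
    # dimensional measurements and the user's reconstructed measurements,
    # and return the best-fitting model with its score.
    def score(name):
        diff = (np.asarray(reference_models[name], dtype=float)
                - np.asarray(user_measurements, dtype=float))
        return float(np.linalg.norm(diff))
    best = min(reference_models, key=score)
    return best, score(best)
```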

Image synthesis for balanced datasets

A method may include obtaining a dataset including a target Action Unit (AU) combination and labeled images of the target AU combination with at least a first category of intensity for each AU of the target AU combination and a second category of intensity for each AU of the target AU combination. The method may also include determining that the first category of intensity for a first AU has a higher number of labeled images than the second category of intensity for the first AU, and based on the determination, identifying a number of new images to be synthesized in the second category of intensity for the first AU. The method may additionally include synthesizing the number of new images with the second category of intensity for the first AU, and adding the new images to the dataset.
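The counting step, determining how many images must be synthesized so the under-represented intensity category catches up, can be sketched directly. The label format is an assumed representation, not the dataset schema from the disclosure.

```python
from collections import Counter

def images_to_synthesize(labels, au, cat_low, cat_high):
    # Count labeled images per (AU, intensity-category) pair and return how
    # many images must be synthesized in the smaller category to balance it.
    counts = Counter((lab["au"], lab["intensity"]) for lab in labels)
    return max(0, counts[(au, cat_high)] - counts[(au, cat_low)])
```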