G06T17/205

Garment deformation method based on the human body's Laplacian deformation

A method of garment deformation based on Laplacian deformation of a human body, including the following steps: inputting polygonal mesh models of the human body and the garment; discretizing the non-homogeneous mesh models of the human body and the garment inputted in the first step; clustering all the discretized mesh vertices to reduce the number of vertices and form a set of homogeneous discrete vertices; constructing Laplacian matrices of the human body and the garment; preprocessing and solving inverse matrices; editing by using the human body mesh vertices as control vertices, to drive a real-time smooth deformation of the garment mesh; and mapping the deformed and simplified mesh back to a mesh space of the original resolution to obtain deformed human body and garment mesh models.
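The central editing step, build a Laplacian matrix and solve for new vertex positions with control vertices as constraints, can be sketched in a few lines. This is a minimal uniform-weight illustration under assumed inputs (the function name, the soft-constraint weight `w`, and the toy chain mesh are all hypothetical), not the patented pipeline, which additionally clusters vertices and precomputes inverse matrices for real-time performance:

```python
import numpy as np

def laplacian_deform(verts, edges, anchors, anchor_pos, w=10.0):
    """Uniform-weight Laplacian editing: preserve differential
    coordinates while satisfying soft positional constraints."""
    n = len(verts)
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, j] = L[j, i] = -1.0
    for i in range(n):
        L[i, i] = -L[i].sum()
    delta = L @ verts                      # differential coordinates
    # Stack soft positional constraints below the Laplacian system.
    C = np.zeros((len(anchors), n))
    for r, i in enumerate(anchors):
        C[r, i] = w
    A = np.vstack([L, C])
    b = np.vstack([delta, w * np.asarray(anchor_pos, dtype=float)])
    return np.linalg.lstsq(A, b, rcond=None)[0]

# Toy example: a straight chain of 5 vertices; pin both ends and
# lift the right end, letting the interior deform smoothly.
verts = np.array([[i, 0.0, 0.0] for i in range(5)])
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
out = laplacian_deform(verts, edges, [0, 4], [[0, 0, 0], [4, 2, 0]])
```

In the example the x-coordinates already satisfy the system exactly and are preserved, while the y-coordinates interpolate smoothly between the two control vertices.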

SYSTEMS AND METHODS OF CONTRASTIVE POINT COMPLETION WITH FINE-TO-COARSE REFINEMENT
20230019972 · 2023-01-19 ·

An electronic apparatus performs a method of recovering a complete and dense point cloud from a partial point cloud. The method includes: constructing a sparse but complete point cloud from the partial point cloud through a contrastive teacher-student neural network; and transforming the sparse but complete point cloud to the complete and dense point cloud. In some embodiments, the contrastive teacher-student neural network has a dual network structure comprising a teacher network and a student network both sharing the same architecture. The teacher network is a point cloud self-reconstruction network, and the student network is a point cloud completion network.
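The sparse-to-dense transformation stage can be illustrated with a purely geometric stand-in (the contrastive teacher-student network itself is a trained model and cannot be reproduced here). The sketch below, with hypothetical names and a k-nearest-neighbour midpoint rule, only demonstrates the coarse-cloud-in, dense-cloud-out interface of that stage:

```python
import numpy as np

def densify(points, k=4):
    """Toy sparse-to-dense step: for each point, emit midpoints to its
    k nearest neighbours (a geometric stand-in for the learned
    refinement from a sparse-but-complete to a dense point cloud)."""
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)            # ignore self-distances
    nn = np.argsort(d, axis=1)[:, :k]      # k nearest neighbours
    mids = (points[:, None, :] + points[nn]) / 2.0
    return np.unique(np.vstack([points, mids.reshape(-1, 3)]), axis=0)

sparse = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1.0]])
dense = densify(sparse, k=2)
```

Each call roughly multiplies the point count; repeated application gives progressively denser clouds while keeping the original points.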

Methods and systems for wireframes of a structure or element of interest and wireframes generated therefrom

Various examples are provided related to systems and processes for generating verified wireframes corresponding to at least part of a structure or element of interest from 2D images, 3D representations (e.g., a point cloud), or a combination thereof. The wireframe can include one or more features that correspond to a structural aspect of the structure or element of interest. The verification can comprise projecting or overlaying the generated wireframe over selected 2D images and/or a point cloud that incorporates the one or more features. The wireframe can be adjusted by a user and/or a computer to align with the 2D images and/or 3D representations, thereby generating a verified wireframe including at least a portion of the structure or element of interest. The verified wireframes can be used to generate wireframe models, measurement information, reports, construction estimates, or the like.
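The verification step, projecting the generated wireframe over selected 2D images, reduces to projecting each 3D edge into the image through the camera model. A minimal pinhole-camera sketch (the function name, intrinsics, and the toy square wireframe are illustrative assumptions):

```python
import numpy as np

def project_wireframe(vertices, edges, K, R, t):
    """Project 3D wireframe vertices into a 2D image with a pinhole
    camera (intrinsics K, rotation R, translation t), returning 2D
    segments that can be overlaid on the image for verification."""
    cam = R @ vertices.T + t[:, None]      # world -> camera frame
    uvw = K @ cam                          # camera -> homogeneous pixels
    uv = (uvw[:2] / uvw[2]).T              # perspective divide
    return [(uv[i], uv[j]) for i, j in edges]

# A unit square 5 m in front of an identity-pose camera.
K = np.array([[500.0, 0, 320], [0, 500, 240], [0, 0, 1]])
verts = np.array([[0, 0, 5.0], [1, 0, 5.0], [1, 1, 5.0], [0, 1, 5.0]])
segs = project_wireframe(verts, [(0, 1), (1, 2), (2, 3), (3, 0)],
                         K, np.eye(3), np.zeros(3))
```

Drawing the returned segments over the photograph lets a user (or an automated alignment step) judge and correct the fit.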

SYSTEM AND METHOD FOR ARTICULAR CARTILAGE THICKNESS MAPPING AND LESION QUANTIFICATION

Systems and methods for articular cartilage thickness mapping and lesion quantification operate on 3D medical image data to reconstruct cartilage surfaces, estimate surface normals, determine cartilage thickness, and identify regions of full-thickness cartilage loss (FCL). Reconstructed cartilage surfaces can be parcellated into subregions using a rule-based approach.
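The thickness and FCL steps can be approximated as follows: for each point on the reconstructed articular surface, take the offset to the nearest point on the opposing surface, project it onto the estimated surface normal, and flag sub-threshold values as full-thickness loss. A toy numpy sketch (the function name, threshold, and nearest-point pairing are assumptions; a real system would use proper surface correspondence):

```python
import numpy as np

def thickness_map(outer_pts, outer_normals, inner_pts, fcl_thresh=0.5):
    """Per-vertex cartilage thickness: distance from each outer-surface
    point to the nearest inner-surface point, measured along the
    (inward) surface normal; thin regions are flagged as FCL."""
    diff = inner_pts[None, :, :] - outer_pts[:, None, :]
    nearest = np.argmin(np.linalg.norm(diff, axis=-1), axis=1)
    offsets = inner_pts[nearest] - outer_pts
    thickness = np.abs(np.einsum('ij,ij->i', offsets, outer_normals))
    return thickness, thickness < fcl_thresh

# Two flat surfaces 2 mm apart, with one near-zero-thickness point.
outer = np.array([[0, 0, 2.0], [1, 0, 2.0], [2, 0, 0.1]])
normals = np.tile([0, 0, -1.0], (3, 1))
inner = np.array([[0, 0, 0.0], [1, 0, 0.0], [2, 0, 0.0]])
t, fcl = thickness_map(outer, normals, inner, fcl_thresh=0.5)
```

The per-vertex thickness values can then be parcellated into subregions and summarized per region, as the abstract describes.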

Method, system and computing device for reconstructing three-dimensional planes

A method, a system, and a computing device for reconstructing three-dimensional planes are provided. The method includes the following steps: obtaining a series of color information, depth information, and pose information of a dynamic scene by a sensing device; extracting a plurality of feature points according to the color information and the depth information, and marking part of the feature points as non-planar objects, including dynamic objects and fragmentary objects; computing a point cloud according to the unmarked feature points and the pose information, and instantly converting the point cloud to a three-dimensional mesh; and growing the three-dimensional mesh to fill vacancies corresponding to the non-planar objects according to the information of the three-dimensional mesh surrounding or adjacent to the non-planar objects.
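The point-cloud computation step rests on back-projecting depth pixels through the camera intrinsics and transforming them by the sensor pose. A minimal sketch of that back-projection (the intrinsics, depth values, and identity pose are illustrative placeholders):

```python
import numpy as np

def depth_to_points(depth, K, pose):
    """Back-project a depth image to a world-space point cloud using
    camera intrinsics K and a 4x4 camera-to-world pose matrix."""
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w]              # pixel coordinates
    z = depth.ravel()
    x = (u.ravel() - K[0, 2]) * z / K[0, 0]
    y = (v.ravel() - K[1, 2]) * z / K[1, 1]
    pts_cam = np.stack([x, y, z, np.ones_like(z)])
    return (pose @ pts_cam)[:3].T          # (N, 3) world points

K = np.array([[100.0, 0, 1], [0, 100, 1], [0, 0, 1]])
depth = np.full((3, 3), 2.0)               # flat wall 2 m away
pts = depth_to_points(depth, K, np.eye(4))
```

In the described method, points from marked non-planar (dynamic or fragmentary) features would be excluded before meshing, and the mesh later grown across the resulting holes.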

AUGMENTED REALITY PRODUCT RECOMMENDATIONS

Aspects of the present disclosure involve a system comprising a computer-readable storage medium storing a program and a method for performing operations comprising: receiving a video that includes a depiction of a real-world object in a real-world environment; determining a classification for the real-world environment by processing the real-world object depicted in the video; selecting an augmented reality (AR) item based on the classification of the real-world environment and the real-world object depicted in the video; modifying pixels corresponding to the real-world object depicted in the video to generate a modified video that excludes the depiction of the real-world object; and adding the AR item to the modified video at a display position corresponding to the modified pixels.
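The final two operations, modifying the real-world object's pixels and adding the AR item at the corresponding display position, amount to masked in-fill followed by alpha compositing. A toy sketch (a flat fill stands in for whatever inpainting the claimed method uses; all names and the tiny frames are hypothetical):

```python
import numpy as np

def replace_object(frame, mask, ar_item, fill):
    """'Erase' the masked real-world object with a flat fill, then
    alpha-composite the RGBA AR item at the masked position."""
    out = frame.astype(float).copy()
    out[mask] = fill                        # remove the object
    ys, xs = np.nonzero(mask)
    y0, x0 = ys.min(), xs.min()             # top-left of masked region
    h, w = ar_item.shape[:2]
    rgb, a = ar_item[..., :3], ar_item[..., 3:4]
    region = out[y0:y0 + h, x0:x0 + w]
    out[y0:y0 + h, x0:x0 + w] = a * rgb + (1 - a) * region
    return out

frame = np.zeros((4, 4, 3))                 # black 4x4 video frame
mask = np.zeros((4, 4), bool)
mask[1:3, 1:3] = True                       # detected object pixels
ar = np.ones((2, 2, 4))
ar[..., :3] = [1.0, 0.0, 0.0]               # opaque red AR item
out = replace_object(frame, mask, ar, fill=[0.5, 0.5, 0.5])
```

A production system would inpaint the erased region from surrounding context and scale the AR item to the object's footprint, but the compositing interface is the same.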

SYSTEM AND METHOD FOR ADAPTIVE VOLUME-BASED SCENE RECONSTRUCTION FOR XR PLATFORM APPLICATIONS
20230215108 · 2023-07-06 ·

A system and method for adaptive volume-based scene reconstruction for XR platform applications are provided. The system includes an image sensor and a processor to perform the method for scene reconstruction. The method includes determining a processor computation load. The method also includes, based on the determined computation load, adjusting one or more parameters of the 3D scene reconstruction to compensate for the determined computation load. The method further includes rendering a reconstructed 3D scene.
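The load-compensation step can be as simple as a monotone policy mapping measured processor load to a reconstruction parameter such as voxel size. A hypothetical linear policy for illustration (the function name, parameter, and range are assumptions, not the claimed method):

```python
def adapt_voxel_size(load, min_v=0.01, max_v=0.08):
    """Coarsen the reconstruction voxel size (in metres) as processor
    load rises, trading geometric detail for frame rate."""
    load = min(max(load, 0.0), 1.0)         # clamp load to [0, 1]
    return min_v + (max_v - min_v) * load
```

At idle the scene is reconstructed at full detail; as the load approaches saturation the volume is coarsened, keeping rendering within the frame budget.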

Systems and methods for real-time complex character animations and interactivity

Systems, methods, and non-transitory computer-readable media can identify a virtual character being presented to a user within a real-time immersive environment. A first animation to be applied to the virtual character is determined. A nonverbal communication animation to be applied to the virtual character simultaneously with the first animation is determined. The virtual character is animated in real-time based on the first animation and the nonverbal communication animation.
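Applying a nonverbal-communication animation simultaneously with a first animation is, at its simplest, per-joint layered blending. A toy sketch using Euler angles and a per-joint mask (a real character rig would blend quaternions; all names here are assumptions):

```python
import numpy as np

def blend_layers(base_pose, gesture_pose, gesture_mask, weight):
    """Layer a nonverbal-communication gesture over a base animation:
    masked joints blend toward the gesture pose, the rest keep the
    base animation. Poses are (joints, 3) Euler-angle arrays."""
    w = weight * gesture_mask[:, None]
    return (1 - w) * base_pose + w * gesture_pose

base = np.zeros((3, 3))                     # 3 joints, xyz rotations
gesture = np.array([[0, 0, 0], [0.0, 1.0, 0], [0, 0, 2.0]])
mask = np.array([0.0, 1.0, 1.0])            # gesture drives joints 1-2
pose = blend_layers(base, gesture, mask, weight=0.5)
```

Evaluating this blend every frame, with the weight animated over time, yields the simultaneous real-time layering the abstract describes.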

Techniques for training a machine learning model to modify portions of shapes when generating designs for three-dimensional objects

In various embodiments, a training application trains a machine learning model to modify portions of shapes when designing 3D objects. The training application converts first structural analysis data having a first resolution to first coarse structural analysis data having a second resolution that is lower than the first resolution. Subsequently, the training application generates one or more training sets based on a first shape, the first coarse structural analysis data, and a second shape that is derived from the first shape. Each training set is associated with a different portion of the first shape. The training application then performs one or more machine learning operations on the machine learning model using the training set(s) to generate a trained machine learning model. The trained machine learning model modifies at least a portion of a shape having the first resolution based on coarse structural analysis data having the second resolution.
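The resolution-conversion step, fine structural analysis data to coarse, can be illustrated with plain block averaging (the actual conversion in the patent is unspecified; the function name and pooling choice are assumptions):

```python
import numpy as np

def coarsen(field, factor):
    """Convert fine-resolution structural analysis data (e.g. a 2D
    stress field) to a lower resolution by block averaging."""
    h, w = field.shape
    return field.reshape(h // factor, factor,
                         w // factor, factor).mean(axis=(1, 3))

fine = np.arange(16.0).reshape(4, 4)        # toy 4x4 stress field
coarse = coarsen(fine, 2)                   # -> 2x2 coarse field
```

Training on coarse fields lets the trained model modify high-resolution shape portions while only ever consuming cheap, low-resolution analysis data.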

3D microgeometry and reflectance modeling

A system and method for three-dimensional (3D) microgeometry and reflectance modeling is provided. The system receives images comprising a first set of images of a face and a second set of images of the face. The faces in the first set of images and the second set of images are exposed to omni-directional lighting and directional lighting, respectively. The system generates a 3D face mesh based on the received images and executes a set of skin-reflectance modeling operations by using the generated 3D face mesh and the second set of images, to estimate a set of texture maps for the face. Based on the estimated set of texture maps, the system texturizes the generated 3D face mesh. The texturization includes an operation in which texture information, including microgeometry skin details and skin reflectance details, of the estimated set of texture maps is mapped onto the generated 3D face mesh.
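The role of the two capture conditions can be illustrated with a toy Lambertian factoring: treat the omni-directionally lit image as an albedo estimate, predict shading for the known directional light, and recover residual reflectance detail from the directionally lit image. This is a didactic sketch with assumed names, not the claimed skin-reflectance modeling operations:

```python
import numpy as np

def estimate_maps(omni_img, dir_img, normals, light_dir):
    """Toy reflectance factoring: omni-lit image ~ albedo (uniform
    illumination); Lambertian shading predicted from normals and the
    directional light; residual detail = observed / predicted."""
    albedo = omni_img
    shading = np.clip(normals @ light_dir, 0.0, None)
    predicted = albedo * shading
    detail = np.divide(dir_img, predicted,
                       out=np.ones_like(dir_img), where=predicted > 1e-6)
    return albedo, detail

# Two pixels facing the light: the second is darker than predicted.
normals = np.array([[0, 0, 1.0], [0, 0, 1.0]])
omni = np.array([0.8, 0.4])
direct = np.array([0.8, 0.2])
albedo, detail = estimate_maps(omni, direct, normals,
                               np.array([0, 0, 1.0]))
```

In the described system the resulting texture maps, including microgeometry and reflectance detail, are mapped onto the 3D face mesh via its UV parameterization.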