
RENDERER USING EXPLICIT OBJECT REPRESENTATION VIA RAY TRACING VOLUME DENSITY AGGREGATION
20230237728 · 2023-07-27 ·

The present disclosure describes techniques of rendering images using explicit object representation via ray tracing volume density aggregation. The techniques comprise reconstructing an object into a plurality of Gaussian ellipsoids; determining a volume density of each of the plurality of Gaussian ellipsoids along each of a plurality of viewing rays; determining a weight of each of the plurality of Gaussian ellipsoids based on the volume density; and synthesizing an image of the object using the determined weight on each pixel of the image to interpolate attributes of each of the plurality of Gaussian ellipsoids.
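As a rough illustration of the aggregation this abstract describes, not the patent's actual method, the following NumPy sketch integrates each Gaussian ellipsoid's density along a viewing ray, normalizes the results into per-pixel weights, and uses the weights to interpolate a color attribute. All function names and the sampling scheme are assumptions for illustration.

```python
import numpy as np

def integrated_density(origin, direction, mu, cov_inv, n_samples=64, t_max=10.0):
    # Sample points along the viewing ray and accumulate the Gaussian
    # ellipsoid's (unnormalized) volume density by a simple Riemann sum.
    t = np.linspace(0.0, t_max, n_samples)
    pts = origin[None, :] + t[:, None] * direction[None, :]
    d = pts - mu[None, :]
    dens = np.exp(-0.5 * np.einsum('ni,ij,nj->n', d, cov_inv, d))
    return dens.sum() * (t[1] - t[0])

def render_pixel(origin, direction, gaussians):
    # Weight each ellipsoid by its integrated density along the ray,
    # then blend the ellipsoids' attributes (here, RGB colors).
    dens = np.array([integrated_density(origin, direction, mu, np.linalg.inv(cov))
                     for mu, cov, _ in gaussians])
    w = dens / max(dens.sum(), 1e-8)
    colors = np.array([c for _, _, c in gaussians])
    return w @ colors
```

An ellipsoid centered on the ray receives a much larger weight than one lying off the ray, so the blended pixel color is dominated by the on-ray ellipsoid's attribute.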

DATA-DRIVEN PHYSICS-BASED MODELS WITH IMPLICIT ACTUATIONS

One embodiment of the present invention sets forth a technique for generating actuation values based on a target shape such that the actuation values cause a simulator to output a simulated soft body that matches the target shape. The technique includes inputting a latent code that represents a target shape and a point on a geometric mesh into a first machine learning model. The technique further includes generating, via execution of the first machine learning model, one or more simulator control values that specify a deformation of the geometric mesh, where each of the simulator control values is based on the latent code and corresponds to the input point, and generating, via execution of the simulator, a simulated soft body based on the one or more simulator control values and the geometric mesh. The technique further includes causing the simulated soft body to be outputted to a computing device.
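The mapping this abstract describes, from a latent code plus a mesh point to per-point simulator control values, can be sketched as a toy two-layer MLP forward pass. This is a hypothetical stand-in for the first machine learning model, not the patented network; the weight names and dimensions are invented for illustration.

```python
import numpy as np

def actuation_model(latent, point, params):
    # Toy two-layer MLP: concatenate the latent code (target shape) with a
    # point on the geometric mesh, and map the pair to simulator control
    # (actuation) values for that point.
    x = np.concatenate([latent, point])
    h = np.tanh(params['W1'] @ x + params['b1'])
    return params['W2'] @ h + params['b2']
```

Evaluating the model once per mesh point yields the field of control values that the simulator would then consume to deform the mesh into a simulated soft body.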

OBJECT DEFORMATION DETERMINATION

Examples of methods for object deformation determination are described herein. In some examples, a method includes aligning a first bounding box of a three-dimensional (3D) object model with a second bounding box of a scan. In some examples, the method includes determining a deformation between the 3D object model and the scan based on the alignment.
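A minimal sketch of the two steps above, assuming an axis-aligned bounding-box alignment (translate plus per-axis scale) and a nearest-point residual as the deformation measure; both choices are assumptions, since the abstract does not fix either.

```python
import numpy as np

def align_by_bounding_box(model_pts, scan_pts):
    # Map the 3D model's bounding box onto the scan's bounding box
    # with a translation and a per-axis scale.
    m_min, m_max = model_pts.min(0), model_pts.max(0)
    s_min, s_max = scan_pts.min(0), scan_pts.max(0)
    scale = (s_max - s_min) / np.maximum(m_max - m_min, 1e-8)
    return (model_pts - m_min) * scale + s_min

def deformation(model_pts, scan_pts):
    # Per-point deformation: distance from each aligned model point
    # to its nearest scan point.
    aligned = align_by_bounding_box(model_pts, scan_pts)
    d = np.linalg.norm(aligned[:, None, :] - scan_pts[None, :, :], axis=-1)
    return d.min(axis=1)
```

When the scan is just a scaled and shifted copy of the model, the alignment cancels the difference and the measured deformation is zero everywhere.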

AUGMENTED REALITY ARTIFICIAL INTELLIGENCE TO ENHANCE THE WAYS USERS PERCEIVE THEMSELVES
20230025585 · 2023-01-26 ·

Methods and systems are provided for generating augmented reality (AR) scenes where the AR scenes can be adjusted to modify at least part of an image of the physical features of a user to produce a virtual mesh of the physical features. The method includes generating an augmented reality (AR) scene for rendering on a display for a user wearing AR glasses, the AR scene including a real-world space and virtual objects overlaid in the real-world space. The method includes analyzing a field of view into the AR scene from the AR glasses; the analyzing is configured to detect images of physical features of the user when the field of view is directed toward at least part of said physical features of the user. The method includes adjusting the AR scene, in substantial real-time, to modify at least part of the images of the physical features of the user when the physical features of the user are detected to be in the AR scene as viewed from the field of view of the AR glasses, wherein said modifying includes detecting depth data and original texture data from said physical features to produce a virtual mesh of said physical features; the virtual mesh is changed in size and shape and rendered using modified texture data that blends with said original texture data. In one embodiment, the modified physical features of the user appear to the user when viewed via the AR glasses as existing in the real-world space. In this way, when the physical features of a user are detected to be in the AR scene, the physical features are augmented in the AR scene, which can improve the user's self-perception and, in turn, give the user confidence to overcome challenging tasks or obstacles during gameplay.

Systems and methods for planning and performing image free implant revision surgery

Systems and methods for planning and performing image free implant revision surgery are discussed. For example, a method for generating a revision plan can include collecting pre-defined parameters characterizing a target bone, generating a 3D model, collecting a plurality of surface points, and generating a reshaped 3D model. Generating the 3D model of the target bone can be based on a first portion of the pre-defined parameters. Generating the reshaped 3D model can be done based on the plurality of surface points collected from a portion of the surface of the target bone.

PRIOR BASED GENERATION OF THREE-DIMENSIONAL MODELS
20230230331 · 2023-07-20 ·

The present disclosure generally relates to systems and techniques for constructing three-dimensional (3D) models. Certain aspects of the present disclosure provide an apparatus for model generation. The apparatus generally includes a memory, and one or more processors coupled to the memory. The one or more processors and the memory may be configured to receive one or more images depicting an object to be modeled, determine a category associated with the object to be modeled, select a shape of a space based on the category, and generate a 3D model of the object at least in part by carving one or more points associated with the space based on the one or more images depicting the object.
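The category-prior plus carving idea above can be sketched on a voxel grid: pick an initial occupancy volume from the object's category, then carve away voxels inconsistent with the input images. The category table, silhouette test, and resolution are all illustrative assumptions, not the patent's method.

```python
import numpy as np

# Hypothetical category -> prior-shape mapping (illustrative only).
CATEGORY_PRIORS = {
    'ball': lambda g: (g ** 2).sum(-1) <= 1.0,   # spherical prior
    'box':  lambda g: np.abs(g).max(-1) <= 1.0,  # cubic prior
}

def initial_volume(category, res=32):
    # Select the starting occupancy volume (the "shape of a space")
    # from the object's category.
    ax = np.linspace(-1.0, 1.0, res)
    grid = np.stack(np.meshgrid(ax, ax, ax, indexing='ij'), axis=-1)
    return CATEGORY_PRIORS[category](grid), grid

def carve(volume, grid, silhouette_fn):
    # Keep only voxels consistent with an image-derived silhouette test;
    # everything outside the silhouette is carved away.
    return volume & silhouette_fn(grid)
```

Starting from a category prior rather than a full bounding volume means the carving step only has to remove points the prior got wrong, which is the efficiency argument behind shape-from-category approaches.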

Site specifying device, site specifying method, and storage medium
11562532 · 2023-01-24 ·

A site specifying device, includes a memory; and a processor coupled to the memory and the processor configured to: store three-dimensional model data indicating a three-dimensional model of an object, display the three-dimensional model based on the three-dimensional model data, and select from the three-dimensional model a site in a range of a depth specified toward an inner side of the three-dimensional model from a region surrounded by a closed curve on a surface of the three-dimensional model according to an input of the closed curve to the surface of the displayed three-dimensional model and an input to specify the depth from the surface of the three-dimensional model.
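A simplified sketch of the selection step: with the model surface flattened to the plane z = 0, the closed curve becomes a membership test on (x, y), and the site is every model point inside that footprint down to the specified depth. The flat-surface setup is an assumption to keep the example small; the patent operates on a general 3D model surface.

```python
import numpy as np

def select_site(points, curve_contains, depth):
    # Simplified stand-in: the model surface is the plane z = 0, the closed
    # curve is given as a membership test on (x, y), and the site extends
    # from the surface toward the inner side down to `depth` along -z.
    inside = curve_contains(points[:, :2])
    within_depth = (points[:, 2] <= 0.0) & (points[:, 2] >= -depth)
    return points[inside & within_depth]
```

A point is selected only when it passes both tests: inside the curve's footprint and within the specified depth band below the surface.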

VOLUMETRIC CAPTURE AND MESH-TRACKING BASED MACHINE LEARNING 4D FACE/BODY DEFORMATION TRAINING
20230230304 · 2023-07-20 ·

Mesh-tracking based dynamic 4D modeling for machine learning deformation training includes: using a volumetric capture system for high-quality 4D scanning, using mesh-tracking to establish temporal correspondences across a 4D scanned human face and full-body mesh sequence, using mesh registration to establish spatial correspondences between a 4D scanned human face and full-body mesh and a 3D CG physical simulator, and training surface deformation as a delta from the physical simulator using machine learning. The deformation for natural animation can be predicted and synthesized using the standard MoCAP animation workflow. Machine learning based deformation synthesis and animation using the standard MoCAP animation workflow includes using single-view or multi-view 2D videos of MoCAP actors as input, solving 3D model parameters (3D solving) for animation (deformation not included), and, given 3D model parameters solved by 3D solving, predicting 4D surface deformation from the machine learning training.

Digital block out of digital preparation

A system and method include performing digital block-out of one or more digital preparation teeth.

Method, system and device for combining models in virtual scene, and medium
11704884 · 2023-07-18 ·

The present invention relates to the technical field of two-dimensional (2D)/three-dimensional (3D) modeling, and in particular to a method, system, and device for combining models in a virtual scene, and a medium. The method of the present invention includes: placing a first model into a second model; determining a filling space and a removing space of the first model; filling an overlapping space between the first model and the second model with the second model, and filling the filling space of the first model with the second model; and removing the second model with which the removing space of the first model is filled, wherein when the overlapping space between the first model and the second model is filled with the second model, and the filling space of the first model is filled with the second model, the removing space of the first model is filled with the second model. The present invention simplifies a workflow of a scene designer, reduces repetitive work, and achieves a desired effect of the models.
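One plausible reading of the fill-then-remove procedure above, sketched on boolean occupancy arrays: the overlap and the first model's filling space take on the second model's material, after which the first model's removing space is carved out. The voxel representation and function signature are assumptions; the patent does not specify a data structure.

```python
import numpy as np

def combine(second_model, filling_space, removing_space):
    # Fill the overlap and the first model's filling space with the
    # second model's material, then carve out the removing space,
    # i.e. remove the second-model material that was filled there.
    combined = second_model | filling_space
    combined &= ~removing_space
    return combined
```

Because the removing space is filled before it is carved, the carve leaves a clean cavity regardless of how the two models originally overlapped, which is what spares the scene designer the manual cleanup work.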