Patent classifications
G06T2219/2021
Storage medium, shape data output method, and information processing device
A non-transitory computer-readable storage medium storing a shape data output program that causes at least one computer to execute a process, the process including: normalizing each of a plurality of pieces of shape data for each component in each coordinate axis direction to create unit shape data; classifying the plurality of pieces of shape data based on the created unit shape data of each piece; specifying, based on dimensions of sites of each piece of shape data in a classified group, a dimensional relationship between different sites of the shape data in the group; and outputting information indicating the specified dimensional relationship in association with the unit shape data of the shape data in the group.
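The normalizing step described above can be pictured as scaling a component's point cloud independently along each coordinate axis into a unit bounding box, so that shapes differing only in scale produce identical "unit shape data" and land in the same group. The sketch below is a minimal illustration of that idea; the function name and the example shapes are hypothetical, not taken from the patent.

```python
import numpy as np

def normalize_shape(points: np.ndarray) -> np.ndarray:
    """Scale a point cloud independently along each coordinate axis so it
    fits the unit bounding box (a hypothetical 'unit shape data')."""
    mins = points.min(axis=0)
    spans = points.max(axis=0) - mins
    spans[spans == 0] = 1.0  # avoid division by zero for flat axes
    return (points - mins) / spans

# Two boxes that differ only in per-axis scale normalize to the same unit
# shape, so a classifier comparing unit shapes would group them together.
a = np.array([[0.0, 0.0, 0.0], [2.0, 4.0, 8.0]])
b = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]])
assert np.allclose(normalize_shape(a), normalize_shape(b))
```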
Skin 3D model for medical procedure
The present disclosure provides a method of performing a medical procedure using augmented reality to superimpose a patient's medical images (e.g., CT or MRI) over a real-time camera view of the patient. Prior to the medical procedure, the patient's medical images are processed to generate a 3D model that represents a skin contour of the patient's body. The 3D model is further processed to generate a skin marker that comprises only selected portions of the 3D model. At the time of the medical procedure, 3D images of the patient's body are captured using a camera and registered with the skin marker. The patient's medical images can then be superimposed over the real-time camera view presented to the person performing the medical procedure.
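The registration step above, aligning camera-captured 3D points with the skin marker, is commonly realized as a rigid point-set alignment. The following sketch uses the Kabsch (SVD-based) method as one plausible choice; the abstract does not specify the algorithm, and the point sets are purely illustrative.

```python
import numpy as np

def rigid_register(camera_pts: np.ndarray, marker_pts: np.ndarray):
    """Return rotation R and translation t mapping camera_pts onto
    marker_pts via the Kabsch method (an assumed, not stated, algorithm)."""
    ca, cm = camera_pts.mean(axis=0), marker_pts.mean(axis=0)
    H = (camera_pts - ca).T @ (marker_pts - cm)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cm - R @ ca
    return R, t
```

With the transform (R, t) in hand, the pre-operative images can be placed in the live camera view by applying the same transform to the 3D model.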
Virtual object kit
In some implementations, a method includes obtaining a virtual object kit that includes a set of virtual object templates of a particular virtual object type. In some implementations, the virtual object kit includes a plurality of groups of components. In some implementations, each of the plurality of groups of components is associated with a particular portion of a virtual object. In some implementations, the method includes receiving a request to assemble a virtual object. In some implementations, the request includes a selection of components from at least some of the plurality of groups of components. In some implementations, the method includes synthesizing the virtual object in accordance with the request.
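The kit described above can be modeled as a mapping from component groups (each tied to a portion of the virtual object) to candidate components, with synthesis selecting one component per group. This is a hedged structural sketch; the class, group, and component names are invented for illustration and do not come from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class VirtualObjectKit:
    """Hypothetical kit: each group of components corresponds to one
    portion of a virtual object of a particular type."""
    object_type: str
    groups: dict = field(default_factory=dict)   # group name -> components

    def synthesize(self, selection: dict) -> dict:
        """Assemble a virtual object from one selected component per group."""
        obj = {}
        for group, component in selection.items():
            if component not in self.groups.get(group, []):
                raise ValueError(f"{component!r} is not in group {group!r}")
            obj[group] = component
        return obj

# Illustrative request: select components from some of the groups.
kit = VirtualObjectKit("chair", {"legs": ["wooden", "metal"],
                                 "seat": ["padded", "flat"]})
chair = kit.synthesize({"legs": "metal", "seat": "padded"})
assert chair == {"legs": "metal", "seat": "padded"}
```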
Systems and methods for designing and manufacturing custom immobilization molds for use in medical procedures
Described herein are systems and methods of processing immobilization molds for application of treatment. A computing system may generate a three-dimensional mold model of an immobilization mold within which a subject is to be positioned for application of a treatment. The computing system may subtract a three-dimensional scan of at least a portion of the subject from the three-dimensional mold model to define an opening therein. The computing system may remove, from the three-dimensional mold model, a first portion to define an imprint in the opening from a first axis along which the subject is to enter. The computing system may remove, from a second portion of the three-dimensional mold model remaining after the removal of the first portion, inward protrusions into the imprint relative to a second axis intersecting the first axis.
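The first two removal steps, subtracting the subject's scan to define the imprint and clearing material along the entry axis, can be sketched on a voxel grid with boolean operations. All grid sizes and the choice of axis 0 as the entry axis are illustrative assumptions, not details from the patent.

```python
import numpy as np

mold = np.ones((8, 8, 8), dtype=bool)      # solid mold model (voxels)
subject = np.zeros_like(mold)
subject[2:6, 2:6, 2:6] = True              # 3D scan of the subject

# Subtract the scan from the mold model to define the opening (imprint).
imprint = mold & ~subject

# Remove the first portion along the entry axis (axis 0) so the subject
# can enter the imprint: clear all voxels that sit before the subject's
# first voxel within the subject's silhouette.
entry_sil = subject.any(axis=0)                        # silhouette (y, z)
before_subject = (np.cumsum(subject, axis=0) == 0)     # above first voxel
opened = imprint.copy()
opened[before_subject & entry_sil[None, :, :]] = False

assert not imprint[3, 3, 3] and imprint[0, 3, 3]   # cavity cut, lid intact
assert not opened[0, 3, 3] and opened[0, 0, 0]     # entry cleared, wall kept
```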
Processing an object representation
A method of adjusting a three-dimensional representation of an object to be manufactured in an additive manufacturing process comprises determining a processing operation to be applied to the object, and adjusting the three-dimensional representation of the object based on adjustment parameters associated with the processing operation.
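One plausible reading of the adjustment described above: each processing operation (say, polishing that removes material, or coating that adds it) carries an offset parameter, and the object's dimensions are pre-compensated before manufacture. The operation names and offset values below are hypothetical, chosen only to make the idea concrete.

```python
# Hypothetical adjustment parameters: a per-surface offset (mm) associated
# with each post-processing operation (values are illustrative, not real).
ADJUSTMENTS = {"polish": +0.2, "coat": -0.1}

def adjust_dimensions(dims, operation):
    """Pre-compensate each dimension of the 3D representation: two opposing
    surfaces are affected, so each dimension changes by twice the offset."""
    offset = ADJUSTMENTS[operation]
    return tuple(d + 2 * offset for d in dims)

# A 10 x 20 x 5 part destined for polishing is grown by 0.4 mm per axis.
grown = adjust_dimensions((10.0, 20.0, 5.0), "polish")
assert all(abs(g - e) < 1e-9 for g, e in zip(grown, (10.4, 20.4, 5.4)))
```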
Three-dimensional shape data editing device, and non-transitory computer readable medium storing three-dimensional shape data editing program
A three-dimensional shape data editing device including a processor configured to, with regard to an object represented by a first three-dimensional shape configured using multiple forming surfaces of at least one of flat surfaces and curved surfaces, a second three-dimensional shape in which the object is configured using multiple three-dimensional elements, and a third three-dimensional shape that is converted from the second three-dimensional shape such that the object is represented by the multiple forming surfaces, specify edge forming surfaces that correspond to edges of the object extracted from the first three-dimensional shape, with respect to the third three-dimensional shape, and configure the edge forming surfaces in the third three-dimensional shape such that shapes of the edges of the object represented by the first three-dimensional shape are reproduced.
Hierarchies to generate animation control rigs
An animation system is provided for generating an animation control rig for character development, configured to manipulate a skeleton of an animated character. A hierarchical representation of puppets includes groups of functions, related in a hierarchy according to character specialization, that are derived using base functions of a core component node to create the animation rig. The hierarchical nodes may include an archetype node, at least one appendage node, and at least one feature node. In some implementations, portions of a hierarchical node, including the functions from the core component node, may be shared to generate different animation rigs for a variety of characters. In some implementations, portions of a hierarchical node, including the component node functions, may be reused to build similar appendages of a same animation rig.
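The node hierarchy described above can be sketched as classes in which archetype and appendage nodes inherit base functions from a core component node, and a node is reused to build similar appendages of the same rig. This is a structural sketch under those assumptions; the class and limb names are invented for illustration.

```python
class CoreComponent:
    """Core component node providing base functions shared by all nodes."""
    def attach(self, skeleton: list, name: str) -> None:
        skeleton.append(name)   # base function inherited down the hierarchy

class Archetype(CoreComponent):
    """Archetype node: the root of one character's rig hierarchy."""
    def __init__(self):
        self.appendages = []

class Appendage(CoreComponent):
    """Appendage node specializing the core component for one limb."""
    def __init__(self, name: str):
        self.name = name

biped = Archetype()
# Reuse the same appendage node type to build similar limbs of one rig.
for side in ("left_arm", "right_arm"):
    biped.appendages.append(Appendage(side))

skeleton = []
for app in biped.appendages:
    app.attach(skeleton, app.name)   # shared core function does the work
assert skeleton == ["left_arm", "right_arm"]
```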
Adaptive augmented reality system for dynamic processing of spatial component parameters based on detecting accommodation factors in real time
Embodiments of the invention are directed to systems, methods, and computer program products for adaptive augmented reality for dynamic processing of spatial component parameters based on detecting accommodation factors in real time. The system is further configured for dynamic capture, analysis and modification of spatial component parameters in a virtual reality (VR) space and real-time transformation to composite plan files. Moreover, the system comprises one or more composite credential sensor devices, comprising one or more VR spatial sensor devices configured for capture and imaging of VR spatial movement and position credentials. The system is also configured to dynamically transform and adapt a first immersive virtual simulation structure associated with the first physical location sector, in real-time, based on detecting and analyzing mobility assist devices associated with users.
DYNAMIC FACIAL HAIR CAPTURE OF A SUBJECT
Embodiments of the present disclosure are directed to methods and systems for generating three-dimensional (3D) models and facial hair models representative of subjects (e.g., actors or actresses) using facial scanning technology. Methods according to embodiments may be useful for performing facial capture on subjects with dense facial hair. Initial subject facial data, including facial frames and facial performance frames (e.g., images of the subject collected from a capture system), can be used to accurately predict the structure of the subject's face underneath their facial hair to produce a reference 3D facial shape of the subject. Likewise, image processing techniques can be used to identify facial hairs and generate a reference facial hair model. The reference 3D facial shape and reference facial hair model can subsequently be used to generate performance 3D facial shapes and a performance facial hair model corresponding to a performance by the subject (e.g., reciting dialog).
METHOD, DEVICE, AND COMPUTER PROGRAM PRODUCT FOR GENERATING VIRTUAL IMAGE
Embodiments of the present disclosure relate to a method, an electronic device, and a computer program product for generating a virtual image. The method includes extracting an audio feature of an audio input of a target object; and acquiring an expression parameter and a pose parameter associated with the target object based on the audio feature. The method further includes generating, based on the audio feature, auxiliary information related to a texture for at least a portion of the target object and a geometric shape of at least a portion of the target object. The method further includes generating a virtual image of the target object based on the expression parameter, the pose parameter, and the auxiliary information.