Patent classifications
G06V2201/033
DATA-DRIVEN EXTRACTION AND COMPOSITION OF SECONDARY DYNAMICS IN FACIAL PERFORMANCE CAPTURE
A modeling engine generates a prediction model that quantifies and predicts secondary dynamics associated with the face of a performer enacting a performance. The modeling engine generates a set of geometric representations that represent the face of the performer enacting different facial expressions under a range of loading conditions. For a given facial expression and specific loading condition, the modeling engine trains a machine learning model to predict how soft tissue regions of the face of the performer change in response to external forces applied to the performer during the performance. The modeling engine combines different expression models associated with different facial expressions to generate a prediction model. The prediction model can be used to predict and remove secondary dynamics from a given geometric representation of a performance, or to generate and add secondary dynamics to a given geometric representation of a performance.
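The combine-then-add/remove idea in the abstract above can be sketched as follows. This is a hypothetical illustration, not the patent's method: each "expression model" is reduced to a linear map from an external-force vector to per-vertex soft-tissue displacements, and the combined prediction model blends the per-expression outputs by expression weights.

```python
import numpy as np

rng = np.random.default_rng(0)
N_VERTS, F_DIM = 5, 3  # toy mesh size and force dimensionality (illustrative)

class ExpressionModel:
    """Linear map from applied force to vertex displacements
    (a stand-in for a trained machine learning model)."""
    def __init__(self, weights):
        self.weights = weights  # shape (N_VERTS * 3, F_DIM)

    def predict(self, force):
        return (self.weights @ force).reshape(N_VERTS, 3)

def combine(models, expr_weights, force):
    """Blend per-expression predictions into one secondary-dynamics estimate."""
    out = np.zeros((N_VERTS, 3))
    for model, w in zip(models, expr_weights):
        out += w * model.predict(force)
    return out

models = [ExpressionModel(rng.standard_normal((N_VERTS * 3, F_DIM)))
          for _ in range(2)]
force = np.array([0.0, 0.0, -9.8])           # e.g. gravity acting on the performer
dynamics = combine(models, [0.7, 0.3], force)

neutral_mesh = rng.standard_normal((N_VERTS, 3))
with_dynamics = neutral_mesh + dynamics       # "add" secondary dynamics
recovered = with_dynamics - dynamics          # "remove" them again
```

Because the predicted dynamics are an additive displacement field, the same model supports both directions described in the abstract: adding dynamics to a clean capture and subtracting them from a dynamic one.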
ELECTRONIC DEVICE AND PROGRAM
An electronic device may include: an acquisition section configured to acquire captured image data of a hand of an operator; a presumption section configured to presume, in accordance with the captured image data, skeleton data corresponding to the hand; and a determination section configured to determine, in accordance with the skeleton data, a cursor position for operating the electronic device.
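The determination section described above can be sketched minimally: given presumed skeleton data (hand keypoints in normalized image coordinates), map one keypoint to screen pixels. Using the index fingertip, and index 8 for it, follows a common hand-landmark convention; both are assumptions, not details from this abstract.

```python
SCREEN_W, SCREEN_H = 1920, 1080  # assumed display resolution
INDEX_TIP = 8                    # assumed fingertip index in the skeleton data

def cursor_position(skeleton, screen_w=SCREEN_W, screen_h=SCREEN_H):
    """Map the index-fingertip keypoint (x, y in [0, 1]) to pixel coordinates."""
    x, y = skeleton[INDEX_TIP]
    # Clamp so a fingertip slightly outside the frame still yields a valid cursor.
    x = min(max(x, 0.0), 1.0)
    y = min(max(y, 0.0), 1.0)
    return round(x * (screen_w - 1)), round(y * (screen_h - 1))

skeleton = {INDEX_TIP: (0.5, 0.25)}  # toy skeleton: only the fingertip matters here
cursor = cursor_position(skeleton)
```

A real implementation would smooth the position over time (e.g. an exponential moving average) to keep the cursor from jittering with per-frame keypoint noise.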
Graphical user interface for displaying automatically segmented individual parts of anatomy in a surgical navigation system
A surgical navigation system includes a source of patient anatomy data, wherein the patient anatomy data comprises a three-dimensional reconstruction of a segmented model comprising at least two sections representing parts of the anatomy. A surgical navigation image generator is configured to generate a surgical navigation image comprising the patient anatomy. A 3D display system is configured to show the surgical navigation image, wherein the display of the patient anatomy is selectively configurable such that at least one section of the anatomy is displayed and at least one other section of the anatomy is not displayed.
Automated tooth localization, enumeration, and diagnostic system and method
A system and method for automated localization, enumeration, and diagnosis of a tooth condition. The system detects a condition for at least one localized and enumerated tooth structure within a cropped image from a full mouth series based on a pixel-level prediction, wherein said condition is detected by detecting or segmenting the condition on at least one of the enumerated tooth structures within the cropped image using a 2D R-CNN.
HAND SKELETON LEARNING, LIFTING, AND DENOISING FROM 2D IMAGES
A processor identifies keypoints on a hand in a two-dimensional image that is captured by a camera. A three-dimensional pose of the hand is determined using locations of the keypoints to access lookup tables (LUTs) that represent potential poses of the hand as a function of the locations of the keypoints. In some embodiments, the keypoints include locations of tips of fingers and a thumb, joints that connect phalanxes of the fingers and the thumb, palm knuckles that represent a point of attachment of the fingers and the thumb to a palm, and a wrist location that indicates a point of attachment of the hand to a forearm. Some embodiments of the LUTs represent 2D coordinates of the fingers and the thumb in corresponding finger pose planes as a function of the locations of the tips of the fingers or thumb relative to the corresponding palm knuckles.
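The LUT idea above can be sketched for one finger: precompute a table that maps the quantized 2D fingertip position (relative to the palm knuckle, in the finger's pose plane) to a candidate pose, then index the table directly with the observed offset at runtime. The resolution, reach, and the single-angle pose entry are illustrative assumptions, not the patent's parameters.

```python
import math

BINS = 16    # assumed quantization resolution per axis
REACH = 0.2  # assumed max fingertip distance from the knuckle (normalized units)

def quantize(dx, dy):
    """Map a fingertip offset in [-REACH, REACH]^2 to a LUT cell."""
    ix = min(BINS - 1, max(0, int((dx + REACH) / (2 * REACH) * BINS)))
    iy = min(BINS - 1, max(0, int((dy + REACH) / (2 * REACH) * BINS)))
    return ix, iy

# Build a toy LUT: store the in-plane angle of the cell-center offset as a
# stand-in for the precomputed joint angles of a candidate finger pose.
lut = {}
for ix in range(BINS):
    for iy in range(BINS):
        cx = (ix + 0.5) / BINS * 2 * REACH - REACH
        cy = (iy + 0.5) / BINS * 2 * REACH - REACH
        lut[(ix, iy)] = math.atan2(cy, cx)

def lookup_pose(tip, knuckle):
    """Return the precomputed pose entry for an observed fingertip/knuckle pair."""
    return lut[quantize(tip[0] - knuckle[0], tip[1] - knuckle[1])]

angle = lookup_pose(tip=(0.55, 0.30), knuckle=(0.50, 0.30))
```

The table trades memory for speed: pose recovery becomes a constant-time lookup instead of an iterative inverse-kinematics solve per frame.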
SYSTEMS AND METHODS FOR MODELING DENTAL STRUCTURES
The present disclosure provides a method for generating a three-dimensional (3D) model of a dental structure of a subject. The method comprises: capturing image data about the dental structure of the subject using a camera of a mobile device; constructing a first 3D model of the dental structure from the image data; registering the first 3D model with an initial 3D surface model to determine a transformation for at least one element of the dental structure; and updating the initial 3D surface model by (i) applying the transformation to update a position of the at least one element and/or (ii) deforming a surface of a local area of the at least one element using a deformation algorithm.
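The update step (i) above can be sketched as applying a rigid transformation, recovered by registration, to the vertices of one element of the initial surface model. The 4x4 homogeneous-matrix representation is an assumption; the patent does not specify the transform's form.

```python
import numpy as np

def apply_transform(vertices, T):
    """Apply a 4x4 homogeneous transform to an (N, 3) vertex array."""
    homo = np.hstack([vertices, np.ones((len(vertices), 1))])
    return (homo @ T.T)[:, :3]

# Example: the registration found that one tooth moved 1 mm along x.
T = np.eye(4)
T[0, 3] = 1.0
tooth = np.array([[0.0, 0.0, 0.0],
                  [0.5, 0.2, 0.1]])  # toy tooth element with two vertices
moved = apply_transform(tooth, T)
```

Step (ii) would then blend the surrounding surface toward the moved element, e.g. with a distance-weighted deformation, so the updated model stays watertight.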
SYSTEMS AND METHODS FOR HUMAN POSE AND SHAPE RECOVERY
The pose and shape of a human body may be recovered based on joint location information associated with the human body. The joint location information may be derived based on an image of the human body or from an output of a human motion capture system. The recovery of the pose and shape of the human body may be performed by a computer-implemented artificial neural network (ANN) trained to perform the recovery task using training datasets that include paired joint location information and human model parameters. The training of the ANN may be conducted in accordance with multiple constraints designed to improve the accuracy of the recovery and by artificially manipulating the training data so that the ANN can learn to recover the pose and shape of the human body even with partially observed joint locations.
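The training-data manipulation mentioned above can be sketched as randomly dropping joints from each sample so the network learns to recover pose and shape from partially observed joint locations. The visibility channel and drop rate here are illustrative assumptions, not details from the abstract.

```python
import numpy as np

rng = np.random.default_rng(42)

def mask_joints(joints, drop_prob=0.3, rng=rng):
    """Zero out random joints and append a per-joint visibility flag.

    joints: (J, 3) array of 3D joint locations.
    Returns a (J, 4) array: masked xyz plus a 0/1 visibility channel.
    """
    visible = (rng.random(len(joints)) >= drop_prob).astype(joints.dtype)
    masked = joints * visible[:, None]
    return np.concatenate([masked, visible[:, None]], axis=1)

joints = rng.standard_normal((17, 3))  # e.g. a 17-joint body skeleton
sample = mask_joints(joints)
```

Feeding the visibility flag alongside the masked coordinates lets the network distinguish "joint at the origin" from "joint unobserved", which is what makes recovery from partial observations learnable.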
SYSTEMS AND METHODS FOR GENERATING AND DISPLAYING AN IMPLEMENTABLE TREATMENT PLAN BASED ON 2D INPUT IMAGES
A method of visualizing a treatment of teeth includes maintaining a final position model trained to output movement of teeth from initial to final positions; receiving from a user device a first representation comprising teeth of a user in an initial position, where the first representation is based on a 2D image transmitted from the user device; determining tooth movements of the teeth from the initial position to a final position using the final position model; receiving by the user device a graphical visualization of a treatment plan for moving the teeth of the user, where the graphical visualization is generated based on the final position and the treatment plan is suitable for correcting the user's malocclusion; and displaying by the user device the graphical visualization, where the graphical visualization comprises a three-dimensional (3D) representation corresponding to the teeth of the user.
PERSONALIZED REGISTRATION METHOD FOR TEMPLATE LIBRARY OF ANATOMICAL MORPHOLOGY AND MECHANICAL PROPERTIES OF MATERIALS OF BONE CT IMAGES
The present invention provides a personalized registration method for a template library of the anatomical morphology and material mechanical properties of bone CT images. In the method, a large number of bone CT images of healthy persons are used to build a statistical model containing the anatomical morphology and material mechanical properties of bones. A parameterized description of a patient's bones is obtained by personalized registration between the statistical model and the patient's bone CT images. A prosthesis template library is built from the patient images and registration parameters, and the template library is matched with the CT images of the patient by the same personalized registration method to retrieve template images and prosthesis models similar to the bone condition of the patient, which serve as the initial reference template for the design of personalized prosthesis implants.
CLASSIFICATION IN HIERARCHICAL PREDICTION DOMAINS
There is a need for classification solutions in hierarchical prediction domains. In one embodiment, this need can be addressed by, for example, performing one or more of online machine learning, co-occurrence analysis machine learning, structured fusion machine learning, and/or unstructured fusion machine learning. In one particular example, structured prediction inputs are processed in accordance with an online machine learning analysis to generate structurally hierarchical predictions, and in accordance with a co-occurrence analysis machine learning analysis to generate structurally non-hierarchical predictions. The structurally hierarchical predictions and the structurally non-hierarchical predictions are then processed by a structured fusion model to generate structure-based predictions. Afterward, the structure-based predictions and non-structure-based predictions can be processed in accordance with an unstructured fusion model to generate one or more unstructured-fused predictions.
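The two-stage fusion described above can be sketched with fixed weights standing in for the learned fusion models: a structured fusion step combines the hierarchical and non-hierarchical per-label scores, and an unstructured fusion step merges the result with non-structure-based predictions. The label names, scores, and weights are all illustrative assumptions.

```python
def fuse(scores_a, scores_b, weight_a=0.6):
    """Weighted combination of two per-label score dicts over the same labels."""
    return {k: weight_a * scores_a[k] + (1 - weight_a) * scores_b[k]
            for k in scores_a}

hierarchical = {"diag:A": 0.9, "diag:B": 0.2}      # online ML output
non_hierarchical = {"diag:A": 0.5, "diag:B": 0.6}  # co-occurrence analysis output

# Stage 1: structured fusion over the two structure-derived prediction sets.
structure_based = fuse(hierarchical, non_hierarchical)

# Stage 2: unstructured fusion with predictions from non-structured inputs.
non_structure_based = {"diag:A": 0.4, "diag:B": 0.8}
final = fuse(structure_based, non_structure_based, weight_a=0.5)
```

In the claimed system the fusion weights would themselves be learned models rather than constants; the staging, not the arithmetic, is the point of the sketch.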