
METHODS AND SYSTEMS FOR OBTAINING A SCALE REFERENCE AND MEASUREMENTS OF 3D OBJECTS FROM 2D PHOTOS
20230052613 · 2023-02-16

Disclosed are systems and methods for obtaining a scale factor and 3D measurements of objects from a series of 2D images. An object to be measured is selected from a menu of an Augmented Reality (AR) based measurement application being executed by a mobile computing device. Measurement instructions corresponding to the selected object are retrieved and used to generate a series of image capture screens that assist the user in positioning the device relative to the object in a plurality of imaging positions to capture the series of 2D images. The images are used to determine one or more scale factors and to build a complete scaled 3D model of the object in virtual 3D space. The 3D model is used to generate one or more measurements of the object.
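Not part of the abstract; as a rough illustration of the scale-factor step it describes, assuming the AR session supplies one known real-world distance (e.g. between two tracked anchors) and the same two points are known in the unscaled reconstruction. Function and parameter names are hypothetical:

```python
import numpy as np

def scale_factor(anchor_a, anchor_b, model_a, model_b):
    """Ratio of a known real-world AR anchor distance to the same
    distance measured in the unscaled 3D reconstruction."""
    real = np.linalg.norm(np.asarray(anchor_a, float) - np.asarray(anchor_b, float))
    model = np.linalg.norm(np.asarray(model_a, float) - np.asarray(model_b, float))
    return real / model

def measure(p, q, s):
    """Distance between two model-space points, converted to real units."""
    return s * np.linalg.norm(np.asarray(p, float) - np.asarray(q, float))
```

Once the factor is known, any pair of model-space points can be reported in real-world units, which is how a single reference distance can scale the whole model.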

MODEL-BASED IMAGE SEGMENTATION

Presented are concepts for initialising a model for model-based segmentation of an image which use specific landmarks (e.g. detected using other techniques) to initialise the segmentation mesh. Using such an approach, embodiments need not be limited to predefined model transformations, but can initialise a segmentation mesh with arbitrary shape. In this way, embodiments may provide an image segmentation algorithm that not only delivers a robust surface-based segmentation result but also does so for strongly varying target structure shapes.
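Not from the patent; one minimal way to realise a landmark-driven initialisation is a least-squares affine fit mapping the model's landmark positions onto the detected landmarks, then applying that transform to the full mesh. All names here are hypothetical:

```python
import numpy as np

def landmark_affine_init(mesh_landmarks, detected_landmarks):
    """Least-squares affine transform taking the model's landmark
    positions onto detected image landmarks; applying it to every
    mesh vertex gives the initial segmentation surface."""
    src = np.asarray(mesh_landmarks, dtype=float)
    dst = np.asarray(detected_landmarks, dtype=float)
    A = np.hstack([src, np.ones((len(src), 1))])  # homogeneous source points
    T, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return T  # apply as: np.hstack([verts, ones]) @ T
```

Because the fit is unconstrained, the initial mesh can take an arbitrary affine shape rather than being limited to a fixed set of predefined transformations, which is the point the abstract emphasises.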

HEAD MODELING FOR A THERAPEUTIC OR DIAGNOSTIC PROCEDURE

A model of a human subject's head may be generated to assist in a therapeutic and/or diagnostic procedure. A treatment and/or diagnostic system may generate a fitted head model using a predetermined head model and a plurality of points. The plurality of points may include facial feature information and may be determined using a sensor, for example, an IR or optical sensor. One or more anatomical landmarks may be determined and registered in association with the fitted head model using the facial feature information, for example, without the use of additional image information, such as an MRI image. The fitted head model may include visual aids, for example, anatomical landmarks, reference points, marking of the human subject's MT location, and/or marking of the human subject's treatment location. The visual aids may assist a technician in performing the therapeutic and/or diagnostic procedure on the human subject.

Real-time context-based emoticon generation system and method thereof

A method for generating a real-time context-based emoticon may include receiving conversation information associated with a conversation between a set of users. The method may include identifying an attribute associated with the conversation. The method may include generating a base emoticon. The method may include generating an output shape based on a fiducial point of the base emoticon. The method may include transforming the output shape of the base emoticon with an accessory. The method may include generating the real-time context-based emoticon for the conversation, based on transforming the output shape of the base emoticon with the accessory.

METHOD AND DEVICE FOR THREE-DIMENSIONAL RECONSTRUCTION OF A FACE WITH TOOTHED PORTION FROM A SINGLE IMAGE
20230222750 · 2023-07-13 ·

Disclosed is a 3D reconstruction method for obtaining, from a 2D colour image of a human face with a visible toothed portion (1), a single reconstructed 3D surface of the toothed portion and of the facial portion (4) without toothed portion. The method comprises segmenting the 2D image into a first part (22) corresponding to the toothed portion (1) and a second part corresponding to the facial portion (4) without said toothed portion, enhancing the first part of the 2D image in order to modify its photometric characteristics, and generating a 3D surface of the face reconstructed from the enhanced first part of the 2D image and from the second part of the 2D image. The obtained 3D surface of the face is suitable for simulating a dental treatment, by substituting, on the area of the 3D surface corresponding to the toothed portion (1), another 3D surface corresponding to the toothed portion after the planned treatment.

AVATAR GENERATION IN A VIDEO COMMUNICATIONS PLATFORM

Methods, systems, and apparatus, including computer programs encoded on computer storage media, relate to a method for generating an avatar within a video communication platform. The system may receive a selection of an avatar model from a group of one or more avatar models. The system receives a first video stream and audio data of a first video conference participant. The system analyzes image frames of the first video stream to determine a group of pixels representing the first video conference participant. The system determines a plurality of facial expression parameter values associated with the determined group of pixels. Based on the determined plurality of facial expression parameter values, the system generates a first modified video stream depicting a digital representation of the first video conference participant in an avatar form.
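Not from the patent; a common way to drive an avatar from facial expression parameter values is a blendshape model, in which each parameter weights a per-vertex delta added to a neutral mesh. This is a generic sketch of that idea, with hypothetical names:

```python
import numpy as np

def apply_blendshapes(neutral, deltas, weights):
    """Deform a neutral avatar mesh (N x D vertex array) by a weighted
    sum of expression blendshape deltas, one weight per facial
    expression parameter (e.g. 'smile', 'jaw_open')."""
    v = np.asarray(neutral, dtype=float).copy()
    for name, w in weights.items():
        v += w * np.asarray(deltas[name], dtype=float)
    return v
```

Re-evaluating the weights per frame from the tracked pixel group would yield the modified video stream the abstract describes.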

Generating gaze corrected images using bidirectionally trained network

An example apparatus for adjusting eye gaze in images includes one or more processors to execute instructions to bidirectionally train a neural network; access a target angle and an input image, the input image including an eye in a first position; generate a vector field with the neural network; and generate a gaze-adjusted image based on the vector field, the gaze-adjusted image including the eye in a second position.
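Not part of the abstract; once a per-pixel displacement field has been predicted, the gaze-adjusted image is produced by resampling the input through that field. A minimal nearest-neighbour version of such a warp (the patent's network and sampling details are not specified here) could look like:

```python
import numpy as np

def warp_with_vector_field(img, flow):
    """Warp a grayscale image with a per-pixel displacement field,
    flow[y, x] = (dy, dx): each output pixel samples the input at
    the displaced location, clamped to the image bounds."""
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            dy, dx = flow[y, x]
            sy = min(max(int(round(y + dy)), 0), h - 1)
            sx = min(max(int(round(x + dx)), 0), w - 1)
            out[y, x] = img[sy, sx]
    return out
```

In practice bilinear sampling would be used so the warp stays differentiable for training, but the geometry is the same.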

Techniques for patient-specific morphing of virtual boundaries

Systems, methods, software and techniques are disclosed for morphing a generic virtual boundary into a patient-specific virtual boundary for an anatomical model. The generic virtual boundary comprises one or more morphable faces. An intersection of the generic virtual boundary and the anatomical model is computed to define a cross-sectional contour of the anatomical model. One or more faces of the generic virtual boundary are morphed to conform to the cross-sectional contour of the anatomical model to produce the patient-specific virtual boundary. In some cases, the morphed faces are spaced apart from the cross-sectional contour by an offset distance that accounts for a geometric feature of a surgical tool.
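Not from the patent; the morphing step it describes can be approximated by snapping each morphable-face vertex to its nearest point on the cross-sectional contour, then backing off by the offset distance (e.g. a tool radius). A simplified 2D sketch with assumed names:

```python
import numpy as np

def morph_to_contour(face_vertices, contour, offset=0.0):
    """Move each morphable-face vertex to its nearest sampled point
    on the cross-sectional contour, stopping short of the contour
    by `offset` along the vertex-to-contour direction."""
    contour = np.asarray(contour, dtype=float)
    out = []
    for v in np.asarray(face_vertices, dtype=float):
        d = np.linalg.norm(contour - v, axis=1)
        nearest = contour[np.argmin(d)]
        direction = nearest - v
        n = np.linalg.norm(direction)
        unit = direction / n if n > 0 else np.zeros_like(v)
        out.append(nearest - offset * unit)  # spaced apart by the offset
    return np.array(out)
```

The offset term corresponds to the abstract's point that the morphed faces account for a geometric feature of the surgical tool rather than lying exactly on the anatomy.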

PROTECTING IMAGE FEATURES IN STYLIZED REPRESENTATIONS OF A SOURCE IMAGE

Systems and methods herein describe an image stylization system. The image stylization system accesses a set of images corresponding to a target domain style, generates a set of paired images using a first machine learning model, analyzes the generated set of paired images using a second machine learning model trained to analyze the generated set of paired images based on a plurality of protected feature criteria, determines a set of image transformations for the generated set of paired images, generates a transformed set of paired images by performing the set of image transformations on the set of paired images, and generates stylized images corresponding to the target domain style using a supervised image translation model trained on the transformed set of paired images.

SYSTEMS AND METHODS OF USING THREE-DIMENSIONAL IMAGE RECONSTRUCTION TO AID IN ASSESSING BONE OR SOFT TISSUE ABERRATIONS FOR ORTHOPEDIC SURGERY

Systems and methods for calculating external bone loss for alignment of pre-diseased joints comprising: generating a three-dimensional (“3D”) computer model of an operative area from at least two two-dimensional (“2D”) radiographic images, wherein at least a first radiographic image is captured at a first position, and wherein at least a second radiographic image is captured at a second position, and wherein the first position is different than the second position; identifying an area of bone loss on the 3D computer model; and applying a surface adjustment algorithm to calculate an external missing bone surface fitting the area of bone loss.
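Not part of the patent; the reconstruction step relies on the two radiographs being captured at different, known positions. In the idealized case of parallel-projection AP and lateral views, a landmark visible in both can be lifted to 3D by combining the coordinates each view provides. Names and the orthogonal-view assumption are illustrative only:

```python
def triangulate_orthogonal(ap_point, lateral_point):
    """Recover a 3D landmark from two orthogonal radiographs under an
    idealized parallel-projection model: the AP view supplies (x, z),
    the lateral view supplies (y, z); z is averaged between views."""
    x, z1 = ap_point
    y, z2 = lateral_point
    return (x, y, (z1 + z2) / 2.0)
```

Real systems use full projective camera models and many correspondences, but this is the geometric core of building a 3D model from two 2D views at different positions.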