G06T2210/44

METHOD FOR PROCESSING IMAGES, ELECTRONIC DEVICE, AND STORAGE MEDIUM
20220392253 · 2022-12-08 ·

Provided is a method for processing images. The method includes: acquiring an initial morphed state of a target object in response to a change of an object recognition result for the target object in an image; acquiring a target morphed state of the target object; and displaying the target object progressively morphing from the initial morphed state to the target morphed state.
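The progressive display described above can be sketched as a per-parameter linear interpolation between the two morphed states. This is a minimal stand-in: the parameter names and the dictionary representation are illustrative assumptions, not part of the claimed method.

```python
def progressive_morph(initial_state, target_state, num_frames):
    # Linearly interpolate each morph parameter from the initial
    # morphed state to the target morphed state, one state per frame.
    frames = []
    for i in range(num_frames):
        t = i / (num_frames - 1) if num_frames > 1 else 1.0
        frames.append({k: (1 - t) * initial_state[k] + t * target_state[k]
                       for k in initial_state})
    return frames

# Display-loop stand-in: each entry is the state drawn for one frame.
states = progressive_morph({"scale": 1.0, "tilt": 0.0},
                           {"scale": 2.0, "tilt": 90.0}, 5)
```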

METHOD AND APPARATUS FOR GENERATING OBJECT MODEL, ELECTRONIC DEVICE AND STORAGE MEDIUM

A method for generating an object model includes: obtaining an initial morphable model; obtaining a plurality of initial images of an object, and depth images corresponding to the plurality of initial images; obtaining a plurality of target topological images by processing the plurality of initial images based on the depth images; obtaining a plurality of models to be synthesized by processing the initial morphable model based on the plurality of target topological images; and generating a target object model based on the plurality of models to be synthesized.
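The abstract leaves open how the models to be synthesized are combined into the target model. A minimal sketch, assuming the models share one topology (which processing against a common morphable model provides) and are blended by weighted vertex averaging:

```python
def synthesize_target_model(models, weights):
    # Blend the models to be synthesized into one target model by
    # weighted-averaging vertex positions. Assumes every model has the
    # same vertex count and ordering (a shared topology).
    total = sum(weights)
    num_vertices = len(models[0])
    return [
        tuple(sum(w * m[v][c] for m, w in zip(models, weights)) / total
              for c in range(3))
        for v in range(num_vertices)
    ]

model_a = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
model_b = [(0.0, 2.0, 0.0), (2.0, 2.0, 0.0)]
target = synthesize_target_model([model_a, model_b], [1.0, 1.0])
```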

METHOD AND SYSTEM FOR IMAGE RETARGETING

A method of image retargeting is provided. The method includes obtaining a source image, obtaining a target size for a retargeted image based on the source image, generating a two-dimensional importance map for the source image, generating, based on the two-dimensional importance map and the target size, a warping mesh having a distortion metric below a threshold value, determining whether a size of the warping mesh corresponds to the target size, and based on the size of the warping mesh being determined to correspond to the target size, rendering the retargeted image by applying the warping mesh to the source image.
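A one-dimensional analogue of the warping-mesh step can illustrate the idea: each source column receives a slice of the target width proportional to its importance, and the final size check mirrors the "size of the warping mesh corresponds to the target size" determination. The distortion-threshold iteration of the full method is omitted here.

```python
def retarget_widths(importance, target_width):
    # 1-D analogue of the warping mesh: each source column receives a
    # slice of the target width proportional to its importance, so
    # salient columns are compressed less than unimportant ones.
    total = sum(importance)
    widths = [target_width * imp / total for imp in importance]
    # The "warping mesh size corresponds to the target size" check.
    assert abs(sum(widths) - target_width) < 1e-9
    return widths

# Columns 1 and 3 are four times as important, so they keep more width.
widths = retarget_widths([1.0, 4.0, 1.0, 4.0], 5.0)
```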

FRAME INTERPOLATION FOR RENDERED CONTENT

One embodiment of the present invention sets forth a technique for performing frame interpolation. The technique includes generating (i) a first set of feature maps based on a first set of rendering features associated with a first key frame, (ii) a second set of feature maps based on a second set of rendering features associated with a second key frame, and (iii) a third set of feature maps based on a third set of rendering features associated with a target frame. The technique also includes applying one or more neural networks to the first, second, and third set of feature maps to generate a set of mappings from a first set of pixels in the first key frame to a second set of pixels in the target frame. The technique further includes generating the target frame based on the set of mappings.
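The final step, generating the target frame from the set of mappings, can be sketched without the neural-network stages by taking the pixel mappings as given. The dictionary encoding of the mappings is an assumption for illustration.

```python
def apply_mappings(key_frame, mappings, height, width, fill=0):
    # Forward-map ("splat") pixels from a key frame into the target
    # frame using per-pixel mappings. Unmapped target pixels keep the
    # fill value; a full system would fill them from the other key frame.
    target = [[fill] * width for _ in range(height)]
    for (sy, sx), (ty, tx) in mappings.items():
        if 0 <= ty < height and 0 <= tx < width:
            target[ty][tx] = key_frame[sy][sx]
    return target

key = [[10, 20], [30, 40]]
# Content moves one pixel to the right between key frame and target frame.
mapping = {(0, 0): (0, 1), (1, 0): (1, 1)}
target = apply_mappings(key, mapping, 2, 2)
```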

LATE WARPING TO MINIMIZE LATENCY OF MOVING OBJECTS
20220375026 · 2022-11-24 ·

A method for minimizing latency of moving objects in an augmented reality (AR) display device is described. In one aspect, the method includes determining an initial pose of a visual tracking device, identifying an initial location of an object in an image that is generated by an optical sensor of the visual tracking device, the image corresponding to the initial pose of the visual tracking device, rendering virtual content based on the initial pose and the initial location of the object, retrieving an updated pose of the visual tracking device, tracking an updated location of the object in an updated image that corresponds to the updated pose, and applying a time warp transformation to the rendered virtual content based on the updated pose and the updated location of the object to generate transformed virtual content.
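The time warp transformation can be sketched, under heavy simplification, as a screen-space shift computed from the pose and location deltas. Reducing poses to 2-D translations is an assumption here; a real late-warp stage would apply a full reprojection (e.g. a homography) rather than a translation.

```python
def late_warp_offset(initial_pose, updated_pose,
                     initial_location, updated_location):
    # Screen-space shift for already-rendered virtual content: follow
    # the object's tracked motion and compensate for the camera's own
    # motion since rendering (camera panning right shifts content left).
    # A pure-translation stand-in for a full time-warp reprojection.
    dx = ((updated_location[0] - initial_location[0])
          - (updated_pose[0] - initial_pose[0]))
    dy = ((updated_location[1] - initial_location[1])
          - (updated_pose[1] - initial_pose[1]))
    return dx, dy

# Object drifted 4 px right; camera panned 1 px right in the meantime.
offset = late_warp_offset((0, 0), (1, 0), (10, 5), (14, 5))
```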

Full Body Virtual Reality Utilizing Computer Vision From a Single Camera and Associated Systems and Methods
20220366653 · 2022-11-17 ·

Methods and systems for constructing a three-dimensional (3D) model of a user in a virtual environment for full body virtual reality (VR) applications are described. The method includes receiving an image of the user captured using an RGB camera; detecting a body bounding box associated with the user using a first trained neural network; determining a segmentation map of the user, based on the body bounding box; determining a two-dimensional (2D) contour of the user from the segmentation map; forming a 3D extrusion model by extruding the 2D contour; and constructing the 3D model of the user in the virtual environment by applying a geometric transformation to the 3D extrusion model. Applications of full body VR include physical training and fitness sessions, games, control of computing devices, manipulation and display of data, interactive social media with VR, and the like.
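The extrusion step (forming the 3D extrusion model from the 2D contour) can be sketched directly: duplicate the closed contour at two depths and stitch the rims. The quad-based side faces are one common convention, not mandated by the abstract.

```python
def extrude_contour(contour, depth):
    # Form a 3-D extrusion model from a closed 2-D contour: duplicate
    # the contour at z=0 and z=depth, then stitch the two rims with
    # one quad per contour edge.
    front = [(x, y, 0.0) for x, y in contour]
    back = [(x, y, depth) for x, y in contour]
    n = len(contour)
    side_quads = [(i, (i + 1) % n, n + (i + 1) % n, n + i)
                  for i in range(n)]
    return front + back, side_quads

square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
vertices, quads = extrude_contour(square, 0.3)
```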

ADAPTIVE BOUNDING FOR THREE-DIMENSIONAL MORPHABLE MODELS
20230035282 · 2023-02-02 ·

Systems and techniques are provided for generating one or more models. For example, a process can include obtaining a plurality of input images corresponding to faces of one or more people during a training interval, and obtaining an initial bounding value for a coefficient representing at least a portion of a facial expression. The process can include determining a value of the coefficient for each of the plurality of input images during the training interval. The process can include determining, from the determined values, an extremum value of the coefficient during the training interval. The process can include generating an updated bounding value for the coefficient based on the initial bounding value and the extremum value.
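The adaptive-bounding update can be sketched as follows. The abstract only states that the updated bound combines the initial bounding value and the observed extremum; the blend rate used here is an assumed update rule.

```python
def update_bounding_value(initial_bound, coefficient_values, rate=0.5):
    # Relax the expression coefficient's upper bound toward the extremum
    # (here the maximum) observed over the training interval. The blend
    # rate is an assumption; the abstract leaves the combination rule open.
    extremum = max(coefficient_values)
    return (1 - rate) * initial_bound + rate * extremum

# Observed expression coefficients exceeded the initial bound of 1.0,
# so the bound adapts upward.
new_bound = update_bounding_value(1.0, [0.2, 1.6, 0.9])
```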

Head modeling for a therapeutic or diagnostic procedure

A model of a human subject's head may be generated to assist in a therapeutic and/or diagnostic procedure. A treatment and/or diagnostic system may generate a fitted head model using a predetermined head model and a plurality of points. The plurality of points may include facial feature information and may be determined using a sensor, for example, an IR or optical sensor. One or more anatomical landmarks may be determined and registered in association with the fitted head model using the facial feature information, for example, without the use of additional image information, such as an MRI image. The fitted head model may include visual aids, for example, anatomical landmarks, reference points, marking of the human subject's MT location, and/or marking of the human subject's treatment location. The visual aids may assist a technician in performing the therapeutic and/or diagnostic procedure on the human subject.
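The fitting step can be sketched as a similarity transform estimated from the sensed facial-feature points. This sketch recovers only translation and uniform scale from centroids and RMS spread; a full registration would also solve for rotation (e.g. via the Kabsch algorithm), which is omitted here.

```python
def fit_head_model(model_points, sensor_points):
    # Fit a predetermined head model to sensed facial-feature points
    # with a similarity transform estimated from centroids and RMS
    # spread (translation + uniform scale only; rotation omitted).
    def centroid(pts):
        n = len(pts)
        return tuple(sum(p[i] for p in pts) / n for i in range(3))

    def rms_spread(pts, c):
        return (sum((p[i] - c[i]) ** 2 for p in pts for i in range(3))
                / len(pts)) ** 0.5

    cm, cs = centroid(model_points), centroid(sensor_points)
    scale = rms_spread(sensor_points, cs) / rms_spread(model_points, cm)
    return [tuple(scale * (p[i] - cm[i]) + cs[i] for i in range(3))
            for p in model_points]

model = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
# Sensed points: the same shape, scaled by 2 and shifted by (1, 1, 1).
sensed = [(1.0, 1.0, 1.0), (3.0, 1.0, 1.0), (1.0, 3.0, 1.0), (1.0, 1.0, 3.0)]
fitted = fit_head_model(model, sensed)
```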

CONTROLLABLE IMAGE-BASED VIRTUAL TRY-ON SYSTEM
20230086880 · 2023-03-23 ·

The invention concerns a method and system for generating high-resolution digital try-on images of human models wearing arbitrary combinations of garments and shoes, with faithfully represented spatial interrelationships and transformations, using a system of neural networks. The method allows for a realistic representation and combination of neutral garment images from different sources on a human body model and has potential for commercial use in online shopping. The inputs to the system are 2D human body, garment, and shoe images. The method involves adjusting the human body to the position of the shoes, creating a controllable intermediate representation that predicts the garments' position and deformation on the body, and creating a semantic layout of the body wearing the garments. The method allows for adjusting the position and dimensions of every garment, including the creation of tucked-in tops and open or closed outerwear.

Artificially rendering images using viewpoint interpolation and extrapolation

Various embodiments of the present invention relate generally to mechanisms and processes relating to artificially rendering images using viewpoint interpolation and extrapolation. According to particular embodiments, a method includes applying a transform to estimate a path outside the trajectory between a first frame and a second frame, where the first frame includes a first image captured from a first location and the second frame includes a second image captured from a second location. The process also includes generating an artificially rendered image corresponding to a third location positioned on the path. The artificially rendered image is generated by interpolating a transformation from the first location to the third location and from the third location to the second location, gathering image information from the first frame and the second frame by transferring first image information from the first frame to the third frame and second image information from the second frame to the third frame, and combining the first image information and the second image information.
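The combination step can be sketched as a blend of image information transferred from both frames, weighted by where the third location sits along the path. Reducing capture locations to scalars along the path is an assumption for illustration; real pipelines interpolate per-pixel transforms before combining.

```python
def render_intermediate(frame1, frame2, loc1, loc2, loc3):
    # Artificially render a view at loc3 between capture locations loc1
    # and loc2: derive a blend weight from the relative position of loc3
    # along the path, then combine pixel information transferred from
    # both frames.
    t = (loc3 - loc1) / (loc2 - loc1)
    return [[(1 - t) * a + t * b for a, b in zip(r1, r2)]
            for r1, r2 in zip(frame1, frame2)]

left = [[0.0, 0.0], [0.0, 0.0]]
right = [[100.0, 100.0], [100.0, 100.0]]
middle = render_intermediate(left, right, 0.0, 1.0, 0.5)
```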