Patent classifications
G06T15/02
Techniques for inferring three-dimensional poses from two-dimensional images
In various embodiments, a training application generates training items for three-dimensional (3D) pose estimation. The training application generates multiple posed 3D models based on multiple 3D poses and a 3D model of a person wearing a costume that is associated with multiple visual attributes. For each posed 3D model, the training application performs rendering operation(s) to generate synthetic image(s). For each synthetic image, the training application generates a training item based on the synthetic image and the 3D pose associated with the posed 3D model from which the synthetic image was rendered. The synthetic images are included in a synthetic training dataset that is tailored for training a machine-learning model to compute estimated 3D poses of persons from two-dimensional (2D) input images. Advantageously, the synthetic training dataset can be used to train the machine-learning model to accurately infer the orientations of persons across a wide range of environments.
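The pipeline in this abstract — pose a 3D model, render one or more synthetic images per posed model, and pair each image with its ground-truth 3D pose — can be sketched as follows. This is a minimal illustration, not the patented implementation: `toy_render` is a hypothetical stand-in for the posing-plus-rendering step, and poses are represented as simple joint-angle arrays.

```python
import numpy as np

def generate_training_items(poses, render_fn, views_per_pose=2):
    """Pair each synthetic rendering with the 3D pose it was rendered from.

    `poses` is a list of 3D poses (here, arrays of joint angles);
    `render_fn` stands in for the posed-3D-model -> synthetic-image
    rendering operation described in the abstract.
    """
    items = []
    for pose in poses:
        for view in range(views_per_pose):
            image = render_fn(pose, view)                 # synthetic 2D image
            items.append({"image": image, "pose": pose})  # ground-truth label
    return items

# Hypothetical "renderer": maps joint angles to a fake 4x4 image per view.
def toy_render(pose, view):
    rng = np.random.default_rng(view)
    return np.outer(pose, rng.random(4))

poses = [np.array([0.1, 0.2, 0.3, 0.4]), np.array([0.5, 0.6, 0.7, 0.8])]
dataset = generate_training_items(poses, toy_render)
print(len(dataset))  # 2 poses x 2 views = 4 training items
```

The resulting list of (image, pose) pairs is exactly the shape of dataset a supervised 3D-pose estimator expects: the synthetic image is the input, the pose used to generate it is the label.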
ADAPTIVE DEPTH-GUIDED NON-PHOTOREALISTIC RENDERING METHOD, CORRESPONDING COMPUTER PROGRAM PRODUCT, COMPUTER-READABLE CARRIER MEDIUM AND DEVICE
A method for rendering a non-photorealistic (NPR) content from a set (SI) of at least one image of a same scene is provided. The set of images (SI) is associated with a depth image comprising a set of regions. Each region corresponds to a region of a given depth. The method for rendering a non-photorealistic content includes generation of a segmented image having at least one segmented region generated with a given segmentation scale. The at least one segmented region corresponds to at least one region of the set of regions. A binary edge image is generated in which at least one binary edge region is generated with a given edge extraction scale, the at least one binary edge region corresponding to at least one region of the set of regions. The non-photorealistic content is rendered by combining the segmented image and the binary edge image.
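The per-depth-region combination of a segmented image and a binary edge image can be sketched as below. This is a toy interpretation under stated assumptions: "segmentation" is modeled as intensity quantization whose step is the region's segmentation scale, and "edge extraction" as a gradient-magnitude threshold set by the region's edge scale; the patent's actual segmentation and edge operators are not specified here.

```python
import numpy as np

def npr_render(image, depth_regions, seg_scales, edge_scales):
    """Depth-guided NPR sketch: for each depth region, build a segmented
    image (here: intensity quantization, coarser at larger scales) and a
    binary edge image (here: gradient-magnitude threshold), then combine
    them by drawing edges over the flattened regions."""
    seg = np.zeros_like(image, dtype=float)
    edges = np.zeros(image.shape, dtype=bool)
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    for rid in np.unique(depth_regions):
        mask = depth_regions == rid
        # Segmentation at this region's scale: quantize intensities.
        q = seg_scales[rid]
        seg[mask] = np.round(image[mask] / q) * q
        # Edge extraction at this region's scale.
        edges |= mask & (mag > edge_scales[rid])
    out = seg.copy()
    out[edges] = 0.0  # black edge strokes over flat color regions
    return out
```

Because both the segmentation scale and the edge-extraction scale are indexed by depth region, near and far parts of the scene can be stylized with different levels of abstraction, which is the adaptive aspect the title refers to.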
Systems and Methods to Generate Comic Books or Graphic Novels from Videos
Systems and methods which auto-create a comic book from a movie, TV show, or user-generated video. The comic book can be read in eBook or print format. This gives the user an alternate way of consuming video content by “reading” it instead of watching and listening to it.
METHODS FOR OBJECT RECOGNITION AND RELATED ARRANGEMENTS
Methods and arrangements involving portable user devices such as smartphones and wearable electronic devices are disclosed, as well as other devices and sensors distributed within an ambient environment. Some arrangements enable a user to perform an object recognition process in a computationally- and time-efficient manner. Other arrangements enable users and other entities to, either individually or cooperatively, register or enroll physical objects into one or more object registries on which an object recognition process can be performed. Still other arrangements enable users and other entities to, either individually or cooperatively, associate registered or enrolled objects with one or more items of metadata. A great variety of other features and arrangements are also detailed.
Filter configuration for software based image path
A method, non-transitory computer readable medium and apparatus for generating an output image are disclosed. For example, the method includes receiving an image, applying a first filter to the image, applying a second filter to the image, calculating a difference between the second-filtered image and the first-filtered image, and transmitting the difference to a sharpening module and a segmentation module to generate the output image.
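The filter-difference step at the heart of this image path resembles a difference-of-filters construction (as in unsharp masking or difference-of-Gaussians). The sketch below is an illustration under assumed filters, not the patent's configuration: both filters are taken to be box blurs of hypothetical sizes `k1` and `k2`, and the "sharpening module" is modeled as a simple unsharp-mask-style add-back.

```python
import numpy as np

def box_blur(img, k):
    """k x k mean filter with edge padding (k=1 is the identity)."""
    if k == 1:
        return img.astype(float)
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def filtered_difference(image, k1=1, k2=3):
    """Apply two filters, take their difference, and feed that difference
    to downstream stages (here, a toy sharpening step)."""
    first = box_blur(image, k1)
    second = box_blur(image, k2)
    diff = second - first                 # sent to sharpening + segmentation
    sharpened = image + (first - second)  # unsharp-mask-style enhancement
    return diff, sharpened
```

On a perfectly flat image the two filtered versions coincide, so the difference is zero and sharpening leaves the image unchanged; the difference signal only activates around intensity transitions, which is what makes it useful to both a sharpening module and a segmentation module.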
Generation Of A Personalised Animated Film
Method of generating a personalized animation film, which method is carried out by computing means, comprising: receiving a photograph in digital form; displaying at least one personalized pattern; associating the at least one personalized pattern with a basic pattern belonging to a set of previously stored basic patterns; and generating an animation film from the at least one personalized pattern, the associated basic pattern, and a scenario comprising a pre-defined environment.
Face Reconstruction from a Learned Embedding
The present disclosure provides systems and methods that perform face reconstruction based on an image of a face. In particular, one example system of the present disclosure combines a machine-learned image recognition model with a face modeler that uses a morphable model of a human's facial appearance. The image recognition model can be a deep learning model that generates an embedding in response to receipt of an image (e.g., an uncontrolled image of a face). The example system can further include a small, lightweight, translation model structurally positioned between the image recognition model and the face modeler. The translation model can be a machine-learned model that is trained to receive the embedding generated by the image recognition model and, in response, output a plurality of facial modeling parameter values usable by the face modeler to generate a model of the face.
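The translation-model idea — a small learned map from a recognition embedding to morphable-model parameters, which the face modeler then turns into geometry — can be sketched as follows. This is a minimal illustration under stated assumptions: the translation model is shown as a single linear layer with placeholder random weights (in practice it would be trained), and the morphable model is the standard mean-plus-basis linear form; dimensions are arbitrary.

```python
import numpy as np

class TranslationModel:
    """Lightweight model mapping a face-recognition embedding to facial
    modeling parameters. A single linear layer is shown for illustration;
    the weights here are random placeholders, not trained values."""
    def __init__(self, embed_dim, n_params, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=0.01, size=(n_params, embed_dim))
        self.b = np.zeros(n_params)

    def __call__(self, embedding):
        return self.W @ embedding + self.b  # facial modeling parameters

def reconstruct_face(params, mean_shape, basis):
    """Morphable model: mean shape plus parameter-weighted basis vectors."""
    return mean_shape + basis @ params

# Hypothetical dimensions: 128-d embedding, 80 model parameters,
# 300 shape coordinates (100 3D vertices).
tm = TranslationModel(embed_dim=128, n_params=80)
embedding = np.ones(128)                 # would come from the recognition model
params = tm(embedding)
rng = np.random.default_rng(1)
face = reconstruct_face(params, np.zeros(300), rng.normal(size=(300, 80)))
```

Structurally, the translation model sits between two fixed components: it consumes whatever embedding the image recognition model produces and emits exactly the parameter vector the face modeler expects, so neither of those larger models needs retraining.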