G06T15/02

FREEHAND SKETCH IMAGE GENERATING METHOD AND SYSTEM FOR MACHINE LEARNING

Provided is a method, executed by one or more processors, for generating freehand sketch data of a 3D model for machine learning. The method includes receiving a 3D model of a target object, and generating a plurality of different freehand sketch images of the target object based on the 3D model.
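As a concrete illustration of the claimed two steps (receive a 3D model, generate several different freehand sketch images of it), the following sketch perturbs projected edge polylines with random jitter to produce distinct freehand-style variants. The jitter model, function names, and parameters are illustrative assumptions, not the patent's actual method:

```python
import numpy as np

def jitter_polyline(points, amplitude=2.0, seed=None):
    # Perturb a projected 3D-edge polyline to mimic freehand stroke wobble.
    # `points` is an (N, 2) array of 2D vertices (hypothetical input format).
    rng = np.random.default_rng(seed)
    return points + rng.normal(scale=amplitude, size=points.shape)

def generate_sketch_variants(polylines, n_variants=4):
    # Produce several *different* freehand-style renderings of the same
    # projected edges by re-seeding the jitter for each variant.
    return [
        [jitter_polyline(p, seed=1000 * v + i) for i, p in enumerate(polylines)]
        for v in range(n_variants)
    ]

# One projected edge of the target object (toy data).
edges = [np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 10.0]])]
variants = generate_sketch_variants(edges, n_variants=3)
```

In a full pipeline the polylines would come from a contour/crease-edge extraction pass over the rendered 3D model; here they are toy data.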

Controlling Patch Usage in Image Synthesis

Techniques for controlling patch-usage in image synthesis are described. In implementations, a curve is fitted to a set of sorted matching errors that correspond to potential source-to-target patch assignments between a source image and a target image. Then, an error budget is determined using the curve. In an example, the error budget is usable to identify feasible patch assignments from the potential source-to-target patch assignments. Using the error budget along with uniform patch-usage enforcement, source patches from the source image are assigned to target patches in the target image. Then, at least one of the assigned source patches is assigned to an additional target patch based on the error budget. Subsequently, an image is synthesized based on the source patches assigned to the target patches.
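The curve fit and error budget above can be illustrated with a simple stand-in: sort the candidate matching errors and take the value at the knee of the resulting curve as the budget, below which patch assignments are considered feasible. Locating the knee as the point farthest from the chord joining the curve's endpoints substitutes here for an explicit analytic curve fit; everything below is a hypothetical sketch, not the patented procedure:

```python
import numpy as np

def error_budget(matching_errors):
    # Sort the potential source-to-target matching errors, then locate the
    # knee of the curve: the point farthest from the straight line joining
    # its endpoints (a stand-in for fitting an analytic curve).
    e = np.sort(np.asarray(matching_errors, dtype=float))
    x = np.linspace(0.0, 1.0, len(e))
    y = (e - e[0]) / (e[-1] - e[0] + 1e-12)  # normalize errors to [0, 1]
    knee = int(np.argmax(np.abs(y - x)))     # farthest from the diagonal chord
    return e[knee]

def feasible(assignments, budget):
    # Keep only potential (source, target, error) patch assignments whose
    # matching error fits within the budget.
    return [(s, t, err) for s, t, err in assignments if err <= budget]
```

Patches with errors beyond the budget would then be candidates for reassignment rather than forced matches, which is how the budget interacts with uniform patch-usage enforcement.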

Illumination-Guided Example-Based Stylization of 3D Renderings

Techniques for illumination-guided example-based stylization of 3D renderings are described. In implementations, a source image and a target image are obtained, where each image includes a multi-channel image having at least a style channel and multiple light path expression (LPE) channels having light propagation information. Then, the style channel of the target image is synthesized to mimic a stylization of individual illumination effects from the style channel of the source image. As part of the synthesizing, the light propagation information is applied as guidance for synthesis of the style channel of the target image. Based on the guidance, the stylization of individual illumination effects from the style channel of the source image is transferred to the style channel of the target image. Based on the transfer, the style channel of the target image is then generated for display of the target image via a display device.
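One way to picture how the LPE channels guide the synthesis: fold the guidance channels into the patch-matching error, so a source patch is only considered similar to a target patch when both its style content and its per-light-path illumination agree. The error form, patch structure, and weight below are illustrative assumptions, not the patent's formulation:

```python
import numpy as np

def guided_patch_error(src_patch, tgt_patch, w_guide=2.0):
    # Each patch is a dict with a 'style' array (H, W, 3) and an 'lpe' array
    # (H, W, K): K light-path-expression channels (e.g. diffuse, specular)
    # holding light-propagation information. The structure is hypothetical.
    style_err = np.mean((src_patch['style'] - tgt_patch['style']) ** 2)
    guide_err = np.mean((src_patch['lpe'] - tgt_patch['lpe']) ** 2)
    # The guidance term steers matching toward patches lit the same way,
    # so each illumination effect keeps its own stylization.
    return style_err + w_guide * guide_err
```

With a large `w_guide`, patches are matched chiefly by their illumination signature, which is what transfers the stylization of individual illumination effects rather than an averaged look.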

Display Engine for Post-Rendering Processing
20220366644 · 2022-11-17

In one embodiment, a computing system may access surfaces and texel data of an artificial reality scene. The surfaces may be generated based on a first viewing position of a viewer. The system may determine tiles on a display to test for visibility of the surfaces from a second viewing position. The tiles may include first tiles that need more computational resources and second tiles that need fewer computational resources. The system may determine a tile order that interleaves the first and second tiles. The system may generate rays based on the tile order. The system may determine the visibility of the surfaces from the second viewing position based on ray-surface intersections. The system may generate color values of a subframe based on the surface visibility and the texel data. The system may provide the color values to the display.
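The tile-ordering step can be illustrated with a simple interleaving: rank tiles by an estimated ray-casting cost, then alternate the most and least expensive remaining tiles so each batch of generated rays carries a mix of heavy and light work. The cost model and ordering scheme are illustrative assumptions, not the embodiment's actual policy:

```python
def interleave_tiles(tiles, cost):
    # `cost(tile)` estimates a tile's expense, e.g. the number of surfaces
    # overlapping it (a hypothetical metric). Returns an order that
    # alternates expensive and cheap tiles.
    ranked = sorted(tiles, key=cost, reverse=True)
    order = []
    i, j = 0, len(ranked) - 1
    while i <= j:
        order.append(ranked[i])      # most expensive tile still unplaced
        if i != j:
            order.append(ranked[j])  # cheapest tile still unplaced
        i += 1
        j -= 1
    return order
```

Pairing heavy and light tiles this way keeps per-batch load roughly level, which matters when subframes must be produced at a fixed display cadence.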

3D DIGITAL PAINTING
20170309057 · 2017-10-26

A method of continuous, simultaneous three-dimensional digital painting and drawing, comprising: providing a digital electronic canvas having at least one display and capable of presenting separate pictures for the right eye and the left eye; providing means for creating a continuous 3D virtual canvas by digitally changing the value and sign of the horizontal disparity between the right-eye and left-eye images, and their scaling on the digital electronic canvas, according to the instant virtual distance between the painter and the instant image within the virtual 3D canvas; providing at least one multi-axis input control device that allows digital painting or drawing on the digital electronic canvas; and painting within the virtual 3D canvas by causing a similar stroke to appear simultaneously on the right-eye and left-eye images on the digital electronic canvas.
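The "value and sign of horizontal disparity" step follows standard stereo geometry: a stroke on the screen plane has zero disparity, one behind the screen has positive (uncrossed) disparity, and one in front has negative (crossed) disparity. A minimal sketch under that geometry, with illustrative parameter values for eye separation and viewing distance:

```python
def stroke_disparity(virtual_distance, screen_distance=0.6, eye_sep=0.065):
    # Horizontal disparity on the screen plane (same units as the distances)
    # for a stroke placed `virtual_distance` from the viewer's eyes.
    # Similar triangles: d = eye_sep * (z - z_screen) / z.
    return eye_sep * (virtual_distance - screen_distance) / virtual_distance

def stroke_eye_positions(x, virtual_distance):
    # Draw the same stroke in the left-eye and right-eye images, offset by
    # half the disparity in opposite directions (simultaneous appearance).
    d = stroke_disparity(virtual_distance)
    return x - d / 2.0, x + d / 2.0
```

Changing `virtual_distance` continuously changes both the magnitude and the sign of the disparity, which is what produces a continuous 3D virtual canvas in front of and behind the display.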

Techniques for inferring three-dimensional poses from two-dimensional images

In various embodiments, a training application generates training items for three-dimensional (3D) pose estimation. The training application generates multiple posed 3D models based on multiple 3D poses and a 3D model of a person wearing a costume that is associated with multiple visual attributes. For each posed 3D model, the training application performs rendering operation(s) to generate synthetic image(s). For each synthetic image, the training application generates a training item based on the synthetic image and the 3D pose associated with the posed 3D model from which the synthetic image was rendered. The synthetic images are included in a synthetic training dataset that is tailored for training a machine-learning model to compute estimated 3D poses of persons from two-dimensional (2D) input images. Advantageously, the synthetic training dataset can be used to train the machine-learning model to accurately infer the orientations of persons across a wide range of environments.
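The generation loop described above — pose the costumed 3D model, render one or more synthetic images per posed model, and pair each image with its ground-truth 3D pose — can be sketched as below. `apply_pose` and `render` are stand-ins for a real posing and rendering pipeline; all names and the random-viewpoint choice are assumptions:

```python
import random

def build_training_set(poses, apply_pose, render, views_per_pose=2, seed=0):
    # For each 3D pose: pose the model, render `views_per_pose` synthetic
    # images from varied camera yaws, and emit (image, pose) training items.
    rng = random.Random(seed)
    items = []
    for pose in poses:
        posed_model = apply_pose(pose)
        for _ in range(views_per_pose):
            yaw = rng.uniform(0.0, 360.0)   # hypothetical camera variation
            image = render(posed_model, yaw)
            items.append((image, pose))     # image labeled with its 3D pose
    return items
```

Because every synthetic image is generated from a known pose, the ground-truth label is exact by construction — the usual advantage of synthetic training data over manually annotated 2D images.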