Patent classifications
G06T2219/2024
GENERATING GROUND TRUTHS FOR MACHINE LEARNING
A messaging system processes three-dimensional (3D) models to generate ground truths for training machine learning models for applications of the messaging system. A method of generating ground truths for machine learning includes generating a plurality of first rendered images from a first 3D base model, where each first rendered image comprises the first 3D base model modified by first augmentations of a plurality of augmentations. The method further includes determining, for a second 3D base model, incompatible augmentations of the plurality of augmentations, where the incompatible augmentations indicate changes to fixed features of the second 3D base model, and generating a plurality of second rendered images from the second 3D base model, each second rendered image comprising the second 3D base model modified by second augmentations, the second augmentations corresponding to the first augmentations of a corresponding first rendered image, where the second augmentations comprise augmentations of the first augmentations that are not incompatible augmentations.
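The filtering step above can be sketched in a few lines of Python. This is a minimal illustration, not the claimed implementation; the augmentation names and variable names are hypothetical.

```python
# A minimal sketch of the augmentation-filtering step: the second renders
# reuse the first renders' augmentation sets minus any augmentation that
# would change a fixed feature of the second base model.

# Augmentation sets used to render each image of the first 3D base model.
first_augmentation_sets = [
    {"hat", "glasses", "beard"},
    {"hat", "earrings"},
]

# A beard would alter a fixed feature of the second base model, so it is
# marked incompatible and dropped.
incompatible = {"beard"}

second_augmentation_sets = [
    sorted(augs - incompatible) for augs in first_augmentation_sets
]
print(second_augmentation_sets)  # [['glasses', 'hat'], ['earrings', 'hat']]
```

Each second rendered image thus keeps a one-to-one correspondence with a first rendered image, differing only by the removed incompatible augmentations.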
SYSTEM AND METHOD FOR PRODUCT DESIGN, SIMULATION AND ORDERING
Disclosed is a system and method for dynamically designing a custom product. The method is initiated using custom software, which permits user interaction via a graphical user interface. An HTML drawing element is provided in a display area of the graphical user interface, configured to display graphics drawn using a program executable by the computing device, the program comprising instructions to draw the custom product as a plurality of line segments. The program is executed to render the plurality of line segments, which are displayed as a first displayed version of the custom product in the display area. A length dimension value is received in an entry field of the graphical user interface, to update one or more of the line segments. The program is re-executed to render the updated line segments, which are displayed as an updated displayed version of the custom product in the display area.
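The draw/update/re-execute loop can be illustrated with a Python analogue (the disclosure uses an HTML drawing element; the function and parameter names here are illustrative only).

```python
# Sketch of the re-render cycle: the product is drawn as line segments,
# a new length dimension arrives, and the program is re-executed to
# produce the updated segments.

def draw_rectangle(width, height):
    """Return the custom product as line segments ((x1, y1), (x2, y2))."""
    return [
        ((0, 0), (width, 0)),
        ((width, 0), (width, height)),
        ((width, height), (0, height)),
        ((0, height), (0, 0)),
    ]

segments = draw_rectangle(width=40, height=20)  # first displayed version
segments = draw_rectangle(width=55, height=20)  # user entered a new length
print(segments[0])  # ((0, 0), (55, 0))
```

In the disclosed system the same cycle runs in the browser: the entry-field value updates the segment endpoints, and re-executing the drawing program refreshes the display area.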
Voice driven modification of sub-parts of assets in computer simulations
A computer simulation object, such as a chair, is described by voice or photo input, from which a 2D image is rendered. Machine learning may be used to convert the voice input to the 2D image. The 2D image is converted to a 3D asset, and the 3D asset, or portions thereof, is used in the computer simulation, such as a computer game, as the object (e.g., the chair).
Techniques for generating stylized quad-meshes from tri-meshes
In various embodiments, a stylization subsystem automatically modifies a three-dimensional (3D) object design. In operation, the stylization subsystem generates a simplified quad mesh based on an input triangle mesh that represents the 3D object design, a preferred orientation associated with at least a portion of the input triangle mesh, and one or more mesh complexity constraints. The stylization subsystem then converts the simplified quad mesh to a simplified T-spline. Subsequently, the stylization subsystem creases one or more edges included in the simplified T-spline to generate a stylized T-spline. Notably, the stylized T-spline represents a stylized design that is more convergent with the preferred orientation(s) than the 3D object design. Advantageously, relative to prior art approaches, the stylization subsystem can more efficiently modify the 3D object design to improve overall aesthetics and manufacturability.
System and method for manipulating two-dimensional (2D) images of three-dimensional (3D) objects
The present disclosure describes an image processing system and method for manipulating two-dimensional (2D) images of three-dimensional (3D) objects of a predetermined class (e.g., human faces). A 2D input image of a 3D object of the predetermined class is manipulated by manipulating physical properties of the 3D object, such as a 3D shape of the 3D input object, an albedo of the 3D input object, a pose of the 3D input object, and lighting illuminating the 3D input object. The physical properties are extracted from the 2D input image using a neural network that is trained to reconstruct the 2D input image. The 2D input image is reconstructed by disentangling the physical properties from pixels of the 2D input image using multiple subnetworks. The disentangled physical properties produced by the multiple subnetworks are combined into a 2D output image using a differentiable renderer.
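The recombination of disentangled properties can be pictured with a toy Lambertian shading step. This is a simplified stand-in for the differentiable renderer, assuming per-pixel albedo, a surface normal, and a light direction as the subnetwork outputs; none of the names below come from the disclosure.

```python
# Toy recomposition: pixel intensity from disentangled albedo, surface
# normal, and light direction, using a Lambertian shading model.

def shade(albedo, normal, light):
    """Lambertian shading: intensity = albedo * max(0, n . l)."""
    dot = sum(n * l for n, l in zip(normal, light))
    return albedo * max(0.0, dot)

# One pixel: albedo 0.8, normal facing directly toward the light.
print(shade(0.8, (0.0, 0.0, 1.0), (0.0, 0.0, 1.0)))  # 0.8
```

Because each such operation is differentiable, gradients from the reconstruction loss can flow back through the renderer into the subnetworks, which is what lets the network learn to disentangle the properties in the first place.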
METHOD AND SYSTEM FOR CREATING AVATAR CONTENT
A method for creating avatar content includes receiving, from a user, a first user input selecting a first content including time-series actions of a first avatar and a second avatar, displaying a first user interface for selecting one of the first avatar and the second avatar in the first content, receiving, from the user through the first user interface, a second user input selecting the first avatar, and in response to receiving the second user input, generating a user-customized content by adding, to the first content, a third avatar associated with the user that performs a same action as the selected first avatar.
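The customization step reduces to copying the selected avatar's time-series actions onto the user's avatar. A minimal sketch, with hypothetical content and avatar names:

```python
# Content maps each avatar to its time-series actions (action, timestamp).
first_content = {
    "avatar_1": [("wave", 0.0), ("jump", 1.5)],
    "avatar_2": [("sit", 0.0)],
}

def customize(content, selected, user_avatar):
    """Add the user's avatar performing the same actions as the selection."""
    customized = dict(content)
    customized[user_avatar] = list(content[selected])
    return customized

result = customize(first_content, "avatar_1", "user_avatar")
print(result["user_avatar"])  # [('wave', 0.0), ('jump', 1.5)]
```

The original avatars are left in place; the user-customized content simply gains a third avatar that mirrors the selected one's actions.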
Smart render design tool and method
A smart render design tool includes: (a) a designer side plug-in enabling a designer to generate credentials for a client and associate the credentials with a model for the client, add one or more camera views to the model, select one or more surfaces in the one or more camera views to add in the model, specify one or more materials for each surface of the one or more surfaces of the model, and publish the model including the specified materials for the one or more surfaces of the model; and (b) a client side portal associated with the credentials and the model enabling the client to access the published model using the generated credentials, select desired materials from among the materials specified by the designer for each surface of the published model, and save the desired materials selections of the client for review by the designer using a synchronization function of the designer side plug-in.
Methods and Systems for an Automated Design, Fulfillment, Deployment and Operation Platform for Lighting Installations
A platform for design of a lighting installation generally includes an automated search engine for retrieving and storing a plurality of lighting objects in a lighting object library and a lighting design environment providing a visual representation of a lighting space containing lighting space objects and lighting objects. The visual representation is based on properties of the lighting space objects and lighting objects obtained from the lighting object library. A plurality of aesthetic filters is configured to permit a designer in a design environment to adjust parameters of the plurality of lighting objects handled in the design environment to provide a desired collective lighting effect using the plurality of lighting objects.
METHOD OF GENERATING VIRTUAL CHARACTER, ELECTRONIC DEVICE, AND STORAGE MEDIUM
Provided are a method of generating a virtual character, an electronic device, and a storage medium. A specific implementation solution includes: determining, in response to a first speech command for adjusting an initial virtual character, a target adjustment object corresponding to the first speech command; determining a plurality of character materials related to the target adjustment object; determining a target character material from the plurality of character materials in response to a second speech command for determining the target character material; and adjusting the initial virtual character by using the target character material, so as to generate a target virtual character.
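The two-step speech flow can be sketched as a pair of lookups: the first command selects the adjustment target, the second selects a material for that target. All names and the keyword-matching scheme below are illustrative assumptions, not the disclosed speech-recognition method.

```python
# Hypothetical catalogue: each adjustment target maps to its character
# materials.
MATERIALS = {"hair": ["short", "long", "curly"], "eyes": ["blue", "brown"]}

def handle_first_command(command):
    """Map the first speech command to the adjustment target it mentions."""
    return next(t for t in MATERIALS if t in command)

def handle_second_command(target, command):
    """Pick the character material named in the second speech command."""
    return next(m for m in MATERIALS[target] if m in command)

target = handle_first_command("change the hair please")
material = handle_second_command(target, "make it curly")
print(target, material)  # hair curly
```

Applying the selected material to the initial virtual character then yields the target virtual character.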