Patent classifications
G06T2219/2024
VOICE DRIVEN 3D STATIC ASSET CREATION IN COMPUTER SIMULATIONS
A 3D scene consisting of one or more objects is generated from a natural language description, which may be text or voice. Relevant keywords, such as asset attributes and placement, are extracted from the description. Using these keywords, a 2D image is generated with a generative model, and another neural model reconstructs the 3D objects from the 2D image. The 3D objects can then be assembled to meet the placement specifications. Alternatively, a 3D object is generated either by transforming existing 3D objects or by using a 3D generative model to meet the specifications in the description.
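The keyword-extraction step above can be sketched as follows. The vocabularies and rule-based token matching here are illustrative stand-ins; a system like the one described would likely use a learned language model rather than fixed word lists.

```python
import re

# Hypothetical vocabularies for illustration; a production system
# would use a trained model instead of fixed word lists.
ATTRIBUTES = {"red", "wooden", "large", "small", "metal"}
PLACEMENTS = {"on", "under", "beside", "behind", "above", "below"}

def extract_keywords(description: str) -> dict:
    """Split a natural-language description into asset, attribute,
    and placement keywords with naive token matching."""
    tokens = re.findall(r"[a-z]+", description.lower())
    attrs = [t for t in tokens if t in ATTRIBUTES]
    placements = [t for t in tokens if t in PLACEMENTS]
    # Remaining tokens are treated as candidate asset names.
    assets = [t for t in tokens if t not in ATTRIBUTES and t not in PLACEMENTS]
    return {"attributes": attrs, "placements": placements, "assets": assets}

spec = extract_keywords("A red wooden chair beside a small table")
```

The resulting `spec` dictionary would then drive the 2D image generation and, later, the placement of the reconstructed 3D objects.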
VOICE DRIVEN MODIFICATION OF SUB-PARTS OF ASSETS IN COMPUTER SIMULATIONS
A computer simulation object, such as a chair, is described by voice or photo input to render a 2D image. Machine learning may be used to convert the voice input to the 2D image. The 2D image is converted to a 3D asset, and the 3D asset or portions thereof are used in the computer simulation, such as a computer game, as the described object (e.g., the chair).
GENERATING MODIFIED DIGITAL IMAGES UTILIZING A GLOBAL AND SPATIAL AUTOENCODER
The present disclosure relates to systems, methods, and non-transitory computer readable media for generating a modified digital image from extracted spatial and global codes. For example, the disclosed systems can utilize a global and spatial autoencoder to extract spatial codes and global codes from digital images. The disclosed systems can further utilize the global and spatial autoencoder to generate a modified digital image by combining extracted spatial and global codes in various ways for various applications such as style swapping, style blending, and attribute editing.
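A toy illustration of the spatial/global split and the style-swapping application, using plain array operations in place of a trained autoencoder. All functions here are hypothetical stand-ins: the "spatial code" is a downsampled layout and the "global code" is a pooled style vector, which mirrors the roles the abstract assigns to the two codes without implementing the actual networks.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(image: np.ndarray):
    """Toy encoder: the spatial code keeps per-location structure
    (a downsampled feature map); the global code is a pooled vector
    of overall style statistics."""
    spatial = image[::4, ::4, :]            # structure / layout
    global_code = image.mean(axis=(0, 1))   # global appearance
    return spatial, global_code

def decode(spatial: np.ndarray, global_code: np.ndarray) -> np.ndarray:
    """Toy decoder: re-impose the global style on the spatial layout."""
    centered = spatial - spatial.mean(axis=(0, 1))
    return centered + global_code

content = rng.random((32, 32, 3))
style = rng.random((32, 32, 3))
s_content, _ = encode(content)   # keep the content image's layout
_, g_style = encode(style)       # take the style image's global code
swapped = decode(s_content, g_style)
```

Style blending and attribute editing would follow the same pattern, interpolating or editing the global code before decoding.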
SIMULATING AND EDITING OF GARMENT OF HIERARCHICAL STRUCTURE
A method and apparatus for simulating a garment according to an example embodiment receive a user input selecting, in a simulated three-dimensional (3D) garment worn on an avatar, any one of a plurality of objects corresponding to the depth level next to the currently displayed depth level, among depth levels for hierarchically displaying one or more pieces included in the 3D garment and the two-dimensional (2D) patterns forming each of the pieces. The method and apparatus then display the next depth level by controlling visualization of the plurality of objects such that the selected object is not covered by any of the remaining objects; deform the selected object based on an input for editing it at the next depth level; deform the 3D garment by reflecting the deformation of the selected object; and output the deformed 3D garment.
Generation and implementation of 3D graphic object on social media pages
Disclosed herein is a digital object generator that builds unique digital objects based on user-specific input. The unique digital objects are part of a graphic presentation to users. The user-specific input is positioned on pre-configured regions of a 3D object such as a polygon. Examples of the pre-configured regions include faces of the 3D object, orbits around the 3D object, or identifiable regions associated with the 3D object. The 3D object is rendered as a part of a social media page and enables social interactions between users. In the social media page, the 3D object rotates, displaying its regions or faces to page visitors. In some embodiments, the 3D object is implemented as a pet or companion of a user avatar in a virtual, augmented, or extended reality space.
STYLE TRANSFER PROGRAM AND STYLE TRANSFER METHOD
A style transfer program causes a server to implement an acquisition function of acquiring buffer data from a buffer used for rendering, a style transfer function of applying style transfer based on one or more style images to the buffer data, and an output function of outputting data after the style transfer is applied.
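The three functions named in the abstract (acquisition, style transfer, output) can be sketched as below. The mean-shift "transfer" is a deliberately simple stand-in for a neural style model, and all names and values are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def acquire_buffer(width: int, height: int) -> np.ndarray:
    """Stand-in for reading back a render buffer (e.g. a color attachment);
    here it just produces a random RGB frame."""
    rng = np.random.default_rng(1)
    return rng.random((height, width, 3))

def apply_style(buffer: np.ndarray, style_image: np.ndarray,
                strength: float = 0.5) -> np.ndarray:
    """Illustrative 'style transfer': shift the buffer's per-channel mean
    toward the style image's. A real system would run a neural model here."""
    shift = style_image.mean(axis=(0, 1)) - buffer.mean(axis=(0, 1))
    return np.clip(buffer + strength * shift, 0.0, 1.0)

def output_frame(buffer: np.ndarray) -> np.ndarray:
    """Stand-in for presenting or saving the stylized frame."""
    return buffer

frame = acquire_buffer(64, 64)
style = np.ones((8, 8, 3)) * 0.9   # a bright, uniform style image
stylized = output_frame(apply_style(frame, style))
```

Operating on the rendering buffer, rather than the final displayed image, is what lets the transfer slot into an existing render pipeline as one more pass.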
VIRTUAL REALITY SYSTEM AND METHOD
A method comprises displaying in virtual reality a computer-generated scene; obtaining a movement command from a real-world physical movement of a user, the movement command corresponding to a movement of a virtual body; and adjusting the movement of the virtual body in dependence on an effect of gravity in the computer-generated scene and/or in dependence on the presence of at least one object within the computer-generated scene that inhibits the movement of the virtual body, wherein the adjusting of the movement is such that the adjusted movement of the virtual body does not correspond with the real-world physical movement of the user.
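A minimal sketch of the adjustment idea, assuming a 2D position, a single blocking obstacle, and a fixed per-update gravity term. All names and numbers are illustrative; the point is only that the adjusted motion of the virtual body no longer matches the physical input.

```python
GRAVITY_STEP = 0.1  # downward adjustment per update (arbitrary units)

def adjust_movement(position, requested_step, obstacle_x=None):
    """Return the adjusted (x, y) position for a requested (dx, dy) step.

    The requested step comes from the user's real-world movement; the
    returned movement is reduced where an obstacle blocks it and pulled
    down by gravity, so it deliberately diverges from the input."""
    x, y = position
    dx, dy = requested_step
    new_x = x + dx
    # An obstacle at obstacle_x inhibits forward movement past it.
    if obstacle_x is not None and new_x > obstacle_x:
        new_x = obstacle_x
    # Gravity modifies the vertical component of the move (floor at 0).
    new_y = max(0.0, y + dy - GRAVITY_STEP)
    return (new_x, new_y)

# The user physically moved (2.0, 0.0), but the virtual body is stopped
# at the obstacle and sinks slightly under gravity.
pos = adjust_movement((0.0, 1.0), (2.0, 0.0), obstacle_x=1.5)
```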
Providing 3D data for messages in a messaging system
The subject technology receives, at a client device, a selection of a selectable graphical item from a plurality of selectable graphical items, the selectable graphical item comprising an augmented reality content generator including a 3D effect. The subject technology captures image data using at least one camera of the client device. The subject technology generates depth data using a machine learning model based at least in part on the captured image data. The subject technology applies, to the image data and the depth data, the 3D effect based at least in part on the augmented reality content generator.
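The capture → depth → effect pipeline might be sketched as follows. The fake depth gradient stands in for the machine-learning model, and the row-shift parallax is just one illustrative 3D effect; none of these functions are from the actual system.

```python
import numpy as np

def predict_depth(image: np.ndarray) -> np.ndarray:
    """Stand-in for the machine-learning depth model: a synthetic
    gradient that treats lower image rows as closer to the camera."""
    h, w = image.shape[:2]
    return np.tile(np.linspace(1.0, 0.0, h)[:, None], (1, w))

def apply_3d_effect(image: np.ndarray, depth: np.ndarray,
                    max_shift: int = 3) -> np.ndarray:
    """Parallax-style 3D effect: shift each row horizontally in
    proportion to its depth, a simple augmented-reality layer."""
    out = np.zeros_like(image)
    for y in range(image.shape[0]):
        shift = int(round((1.0 - depth[y, 0]) * max_shift))
        out[y] = np.roll(image[y], shift, axis=0)
    return out

# Captured image data (toy 8x8 RGB frame with distinct pixel values).
img = np.arange(8 * 8 * 3, dtype=float).reshape(8, 8, 3)
depth = predict_depth(img)
effect = apply_3d_effect(img, depth)
```

Rows at full depth are left in place while nearer rows are displaced, which is the kind of image-plus-depth compositing an augmented reality content generator would perform per frame.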
System and method of providing a customizable virtual environment
A system and method for generating a virtual environment of an imaged space are disclosed. In one aspect, the system comprises a communication circuit configured to communicate via a network with one or more data sources and a memory configured to store instructions. The system further comprises one or more hardware processors configured to execute the instructions to receive a request to display a virtual environment, request an environment image for the imaged space, and receive the environment image. The one or more hardware processors are further configured to receive at least one item image of at least one item to be placed in the virtual environment, request generation of the virtual environment, receive the virtual environment comprising virtual representations for the imaged space and the at least one item, and provide the virtual environment for interaction by a user with the at least one item and the imaged space.
COMPUTER IMPLEMENTED METHOD AND SYSTEM FOR NAVIGATION AND DISPLAY OF 3D IMAGE DATA
A computer-implemented method and system for navigation and display of 3D image data is described. In the method, a 3D image dataset to be displayed is retrieved and a highlight position is identified within the 3D image dataset. A scalar opacity map is calculated for the 3D image dataset, the opacity map having a value for each of a plurality of positions in the 3D image dataset, the respective value being dependent on the respective position relative to the highlight position, and on the value of the 3D image at the respective position relative to the value of the 3D image at the highlight position. The opacity map is applied to the 3D image dataset to generate a modified 3D image view.
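The scalar opacity map can be illustrated with Gaussian falloffs in both distance to the highlight position and difference from the highlight value. The particular falloff functions and parameters are assumptions for illustration, not taken from the patent.

```python
import numpy as np

def opacity_map(volume: np.ndarray, highlight: tuple,
                sigma_d: float = 4.0, sigma_v: float = 0.2) -> np.ndarray:
    """Scalar opacity per voxel: high near the highlight position and
    for values similar to the value at the highlight. The Gaussian
    falloffs and sigmas are illustrative choices."""
    hv = volume[highlight]
    idx = np.indices(volume.shape)
    # Squared Euclidean distance of each voxel to the highlight position.
    dist2 = sum((idx[i] - highlight[i]) ** 2 for i in range(volume.ndim))
    spatial = np.exp(-dist2 / (2 * sigma_d ** 2))
    value = np.exp(-((volume - hv) ** 2) / (2 * sigma_v ** 2))
    return spatial * value

rng = np.random.default_rng(2)
vol = rng.random((16, 16, 16))
alpha = opacity_map(vol, (8, 8, 8))
modified = vol * alpha  # apply the opacity to form the highlighted view
```

Opacity peaks at exactly 1.0 at the highlight voxel and decays with both distance and value difference, so structures similar to the highlighted one stay visible while unrelated tissue fades out.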