Intuitive editing of three-dimensional models

Embodiments of the present invention are directed towards intuitive editing of three-dimensional models. In embodiments, salient geometric features associated with a three-dimensional model defining an object are identified. Thereafter, feature attributes associated with the salient geometric features are determined. A feature set including a plurality of salient geometric features related to one another is generated based on the determined feature attributes (e.g., properties, relationships, distances). An editing handle can then be generated and displayed for the feature set, enabling each of the salient geometric features within the feature set to be edited in accordance with a manipulation of the editing handle. The editing handle can be displayed in association with one of the salient geometric features of the feature set.
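The grouping-and-handle idea above can be illustrated with a minimal sketch. The `Feature` class, the radius-based grouping rule, and the `EditingHandle` class are all hypothetical stand-ins for the claimed feature attributes and handle mechanics, not the patent's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Feature:
    name: str
    center: tuple   # (x, y, z) position of the salient feature
    radius: float   # one illustrative feature attribute

def group_related(features, radius_tol=0.1):
    """Group features whose attributes (here: radius) match within a tolerance."""
    groups = []
    for f in features:
        for g in groups:
            if abs(g[0].radius - f.radius) <= radius_tol:
                g.append(f)
                break
        else:
            groups.append([f])
    return groups

class EditingHandle:
    """One handle per feature set; manipulating it edits every member."""
    def __init__(self, feature_set):
        self.feature_set = feature_set
        # display the handle on one salient feature of the set
        self.anchor = feature_set[0].center

    def scale(self, factor):
        for f in self.feature_set:
            f.radius *= factor
```

For example, two holes of nearly equal radius would land in one feature set, so dragging the single handle resizes both at once while a differently sized slot is left untouched.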

Generating modified digital images utilizing a global and spatial autoencoder

The present disclosure relates to systems, methods, and non-transitory computer readable media for generating a modified digital image from extracted spatial and global codes. For example, the disclosed systems can utilize a global and spatial autoencoder to extract spatial codes and global codes from digital images. The disclosed systems can further utilize the global and spatial autoencoder to generate a modified digital image by combining extracted spatial and global codes in various ways for various applications such as style swapping, style blending, and attribute editing.
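The code-recombination idea can be sketched with toy encode/decode functions. The real system uses a learned global and spatial autoencoder; here the "spatial code" is simply the per-pixel structure and the "global code" is the image mean, which are assumptions made purely for illustration:

```python
import numpy as np

def encode(image):
    """Stand-in encoder: split an image into a spatial code (layout)
    and a global code (overall appearance). A real model learns both."""
    spatial = image - image.mean()          # per-pixel structure
    global_code = np.array([image.mean()])  # scene-wide statistic
    return spatial, global_code

def decode(spatial, global_code):
    """Stand-in decoder: recombine the two codes into an image."""
    return spatial + global_code[0]

def style_swap(content_img, style_img):
    """Keep the content image's spatial code, take the style image's global code."""
    spatial, _ = encode(content_img)
    _, g = encode(style_img)
    return decode(spatial, g)

def style_blend(content_img, style_a, style_b, alpha=0.5):
    """Interpolate two global codes over one spatial code."""
    spatial, _ = encode(content_img)
    _, ga = encode(style_a)
    _, gb = encode(style_b)
    return decode(spatial, alpha * ga + (1 - alpha) * gb)
```

Swapping preserves the content image's structure while adopting the style image's global statistic; blending interpolates between two global codes, mirroring the style-swapping and style-blending applications described above.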

METHOD FOR ADJUSTING POINT CLOUD DENSITY, ELECTRONIC DEVICE, AND STORAGE MEDIUM
20220414987 · 2022-12-29 ·

A method for adjusting point cloud density, an electronic device, and a storage medium are provided. In the method, an initial point cloud map and a distance determination threshold of a robot are obtained. A plurality of target regions in the initial point cloud map are determined, and an environmental complexity value of each target region is calculated. The initial point cloud map is divided into submaps, and a point cloud density coefficient of each submap is determined. The initial point cloud map is adjusted according to the point cloud density coefficients, and a target point cloud map is obtained. By utilizing this method, the efficiency and accuracy of point cloud density adjustment can be improved.
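The per-submap adjustment loop can be sketched as below. The variance-based complexity measure, the linear density coefficient, and random thinning are all illustrative assumptions; the patent does not specify these particular formulas:

```python
import numpy as np

def environmental_complexity(points):
    """Illustrative proxy for region complexity: spread of the points."""
    return float(np.var(points)) if len(points) else 0.0

def density_coefficient(complexity, base=0.2, scale=0.1):
    """More complex regions keep more points; simple regions are thinned.
    Returns the fraction of points to keep, clipped to [base, 1.0]."""
    return float(np.clip(base + scale * complexity, base, 1.0))

def adjust_submap(points, coeff, rng):
    """Randomly keep a `coeff` fraction of the submap's points."""
    keep = max(1, int(round(coeff * len(points))))
    idx = rng.choice(len(points), size=keep, replace=False)
    return points[idx]

def adjust_point_cloud(submaps, rng=None):
    """Adjust each submap's density according to its density coefficient."""
    if rng is None:
        rng = np.random.default_rng(0)
    out = []
    for pts in submaps:
        c = environmental_complexity(pts)
        out.append(adjust_submap(pts, density_coefficient(c), rng))
    return out
```

A flat (low-complexity) submap is thinned aggressively, while a geometrically rich submap keeps its full density, which is the behavior the abstract describes.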

INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM
20220392174 · 2022-12-08 ·

Template AR content can be favorably reused in an application scene different from the scene at the time of its creation.

An environment map of an augmented reality application scene is generated. An abstract representation of the augmented reality application scene is generated on the basis of the environment map. This abstract representation is compared with an abstract representation of a template augmented reality scene generated on the basis of a template augmented reality environment map, and template augmented reality content is mapped to the augmented reality application scene on the basis of a result of the comparison. Thus, augmented reality content for display is generated. For example, the abstract representation of the augmented reality application scene or the abstract representation of the template augmented reality scene can be edited.
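The compare-and-map step can be sketched as follows. Representing each scene as `{surface_id: (label, area)}` descriptors and matching by label plus closest area is an assumed simplification of the abstract representations the patent describes:

```python
def match_surfaces(template_abs, scene_abs):
    """Map each template surface to the scene surface with the same
    label and the closest area. Both arguments are dicts of the form
    {surface_id: (label, area)} -- a hypothetical abstract representation."""
    mapping = {}
    for t_id, (t_label, t_area) in template_abs.items():
        candidates = [(abs(area - t_area), s_id)
                      for s_id, (label, area) in scene_abs.items()
                      if label == t_label]
        if candidates:
            mapping[t_id] = min(candidates)[1]
    return mapping

def map_content(template_content, mapping):
    """Re-anchor template AR content onto the matched scene surfaces."""
    return {mapping[s]: item for s, item in template_content.items()
            if s in mapping}
```

A virtual lamp authored on the template's table surface would thus be re-anchored onto whichever table in the new application scene best matches the template table's descriptor.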

REAL-TIME TEMPORALLY CONSISTENT OBJECT SEGMENTED STYLE TRANSFER IN MEDIA AND GAMING

One embodiment provides a method comprising, at a runtime library executed by a processor of a data processing system, receiving an input frame having objects to be stylized via a style transfer network associated with the runtime library, wherein the style transfer network is a neural network model trained to apply one or more visual styles to an input frame, performing instance segmentation on the input frame to generate one or more instance masks to identify one or more objects to be stylized, generating one or more stylized frames for each style to transfer to the input frame, and merging, via the one or more instance masks, stylized objects from one or more stylized frames with un-stylized content from the input frame to generate an output frame with per-instance stylization.
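The final merging step, compositing stylized pixels over un-stylized content via instance masks, can be sketched directly with array indexing. The function name and signature are illustrative, not the runtime library's actual API:

```python
import numpy as np

def merge_stylized(input_frame, stylized_frames, instance_masks):
    """Composite per-instance stylized pixels over the un-stylized frame.
    stylized_frames[i] supplies pixels wherever instance_masks[i] is True;
    everywhere else the original input frame shows through."""
    output = input_frame.copy()
    for styled, mask in zip(stylized_frames, instance_masks):
        output[mask] = styled[mask]
    return output
```

Each mask routes one object's pixels from its stylized frame into the output, producing the per-instance stylization the claim describes while the background stays untouched.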

SYSTEM AND METHOD OF PROVIDING A CUSTOMIZABLE VIRTUAL ENVIRONMENT
20230058236 · 2023-02-23 ·

A system and method for generating a virtual environment of an imaged space are disclosed. In one aspect, the system comprises a communication circuit configured to communicate via a network with one or more data sources and a memory configured to store instructions. The system further comprises one or more hardware processors configured to execute the instructions to receive a request to display a virtual environment, request an environment image for the imaged space, and receive the environment image. The one or more hardware processors are further configured to receive at least one item image of at least one item to be placed in the virtual environment, request generation of the virtual environment, receive the virtual environment comprising virtual representations of the imaged space and the at least one item, and provide the virtual environment for interaction by a user with the at least one item and the imaged space.

Method and device for generating avatar on basis of corrected image
11587205 · 2023-02-21 ·

Disclosed in various embodiments of the present invention are a method and a device, the device comprising a camera, a display, and a processor, wherein the processor is configured to display, on the display in a preview state, a user's face acquired from the camera, correct the user's face on the basis of a configuration related to face correction, acquire an image including the corrected user's face when an avatar generation request is received, and generate an avatar by using the acquired image. Various embodiments are possible.

Multi-dimensional rendering
11589024 · 2023-02-21 ·

A photo filter (e.g., multi-dimensional) light field effect system includes an eyewear device having a frame, a temple connected to a lateral side of the frame, and a depth-capturing camera. Execution of programming by a processor configures the system to create an image in each of at least two dimensions and create a multi-dimensional light field effect image with an appearance of a spatial rotation or movement and transitional change, by blending together a left photo filter image and a right photo filter image in each dimension and blending the blended images from all dimensions.
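The two-stage blend, left/right within each dimension, then across dimensions, can be sketched with simple linear weighting. The `rotation` parameter and uniform averaging across dimensions are assumptions for illustration, not the patented blending functions:

```python
import numpy as np

def light_field_effect(left_imgs, right_imgs, rotation=0.5):
    """Blend a left and a right photo-filter image in each dimension,
    then blend the per-dimension results into one output image.
    `rotation` in [0, 1] shifts weight from the left to the right view,
    giving the appearance of spatial rotation as it is animated."""
    per_dim = [(1.0 - rotation) * l + rotation * r
               for l, r in zip(left_imgs, right_imgs)]
    return np.mean(per_dim, axis=0)
```

Sweeping `rotation` from 0 to 1 over successive frames would morph the result from the left views toward the right views, approximating the rotation-and-transition effect described.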

Method and system for generating user driven adaptive object visualizations using generative adversarial network models

A method and system for generating user driven adaptive object visualizations using Generative Adversarial Network (GAN) models is disclosed. The method includes the steps of generating a first set of object vectors for an object based on at least one input received from a user. The first set of vectors corresponds to a first set of visualizations for the object. The method further includes capturing at least one tacit reaction type of the user in response to user interaction with each of the first set of visualizations, computing a score for each portion of each of the first set of visualizations, identifying a plurality of portions from at least one of the first set of object visualizations, generating a second set of object vectors, and processing the second set of object vectors sequentially through a plurality of GAN models to generate a final object visualization of the object.
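The scoring and sequential-refinement pipeline can be sketched as below. The reaction weights, the threshold, and the `refine` stand-in for a GAN stage are all hypothetical; a real system would run actual generator networks:

```python
import numpy as np

# Illustrative weights for tacit reaction types (assumed, not from the patent)
REACTION_WEIGHTS = {"gaze": 1.0, "click": 2.0, "ignore": -1.0}

def score_portions(reactions):
    """reactions: {portion_id: [reaction types]} -> {portion_id: score}."""
    return {p: sum(REACTION_WEIGHTS.get(r, 0.0) for r in rs)
            for p, rs in reactions.items()}

def select_portions(scores, threshold=1.0):
    """Identify the portions whose score clears the threshold."""
    return [p for p, s in scores.items() if s >= threshold]

def refine(vector, portion):
    """Stand-in for one GAN stage: nudge the object vector along a
    portion-specific latent direction (here, a fixed unit shift)."""
    v = vector.copy()
    v[hash(portion) % len(v)] += 1.0
    return v

def generate_final(vector, portions):
    """Pass the object vector sequentially through one stage per portion."""
    for p in portions:
        vector = refine(vector, p)
    return vector
```

Portions the user dwelled on or clicked score highly and drive one refinement stage each, so the final visualization emphasizes exactly the regions that drew tacit interest.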

Methods and Systems for an Automated Design, Fulfillment, Deployment and Operation Platform for Lighting Installations

A platform for design of a lighting installation generally includes an automated search engine for retrieving and storing a plurality of lighting objects in a lighting object library and a lighting design environment providing a visual representation of a lighting space containing lighting space objects and lighting objects. The visual representation is based on properties of the lighting space objects and lighting objects obtained from the lighting object library. A plurality of aesthetic filters is configured to permit a designer in a design environment to adjust parameters of the plurality of lighting objects handled in the design environment to provide a desired collective lighting effect using the plurality of lighting objects.