Patent classifications
G06T15/50
Rendering virtual environments using container effects
In one embodiment, a computer-implemented method for rendering virtual environments is disclosed. The method includes associating, by a computing system, an object with a container effect by receiving information regarding an object category for the object and matching the object category to a category associated with the container effect, where the container effect defines virtual effects for objects associated therewith. The method also includes generating, by the computing system, a virtual environment including the object by retrieving a model of the object and utilizing the model and the container effect to render a virtual object.
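The category-matching step above can be pictured with a short sketch, assuming a simple ContainerEffect record and dictionary-style virtual effects; the names and structure below are illustrative only and not taken from the patent.

```python
# Minimal sketch of category matching for container effects. ContainerEffect,
# virtual_effects, and the effect parameters are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class ContainerEffect:
    category: str                                         # category the effect is registered for
    virtual_effects: dict = field(default_factory=dict)   # e.g. {"shadow": True}

def associate_effect(object_category, effects):
    """Match the object's category to a container effect with the same category."""
    for effect in effects:
        if effect.category == object_category:
            return effect
    return None

def render_virtual_object(model, effect):
    """Combine the retrieved model with the effects defined by the container effect."""
    return {"model": model, "effects": effect.virtual_effects if effect else {}}

# Usage: a "furniture" object picks up the effect registered for that category.
effects = [ContainerEffect("furniture", {"shadow": True}),
           ContainerEffect("lighting", {"glow": 0.8})]
virtual_chair = render_virtual_object("chair_model.obj",
                                      associate_effect("furniture", effects))
```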
Augmented reality content rendering via Albedo models, systems and methods
Methods for rendering augmented reality (AR) content are presented. An a priori defined 3D albedo model of an object is leveraged to adjust AR content so that it appears as a natural part of a scene. Disclosed devices recognize a known object having a corresponding albedo model. The devices compare the observed object to the known albedo model to determine a content transformation referred to as an estimated shading (environmental shading) model. The transformation is then applied to the AR content to generate adjusted content, which is then rendered and presented for consumption by a user.
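The core idea of estimating an environmental shading model can be illustrated with a rough per-pixel sketch: divide the observed appearance by the a priori albedo to estimate shading, then apply that shading to the AR content. The array shapes and the simple multiplicative model below are assumptions for illustration, not the patented method.

```python
# Rough per-pixel sketch: estimated shading = observed appearance / known albedo,
# then the shading is applied to the AR content.
import numpy as np

def estimate_shading(observed, albedo, eps=1e-6):
    """Estimate environmental shading by dividing the observed object by its albedo."""
    return observed / (albedo + eps)

def adjust_ar_content(ar_content, shading):
    """Apply the estimated shading so the AR content matches the scene lighting."""
    return np.clip(ar_content * shading, 0.0, 1.0)

# Usage with toy 2x2 RGB images in [0, 1].
albedo   = np.full((2, 2, 3), 0.8)             # a priori albedo of the known object
observed = np.full((2, 2, 3), 0.4)             # object as actually seen in the scene
shading  = estimate_shading(observed, albedo)  # ~0.5 everywhere: a dim environment
adjusted = adjust_ar_content(np.full((2, 2, 3), 0.9), shading)
```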
Generative latent textured proxies for object category modeling
Systems and methods are described for generating a plurality of three-dimensional (3D) proxy geometries of an object, generating, based on the plurality of 3D proxy geometries, a plurality of neural textures of the object, the neural textures defining a plurality of different shapes and appearances representing the object, providing the plurality of neural textures to a neural renderer, receiving, from the neural renderer and based on the plurality of neural textures, a color image and an alpha mask representing an opacity of at least a portion of the object, and generating a composite image based on the pose, the color image, and the alpha mask.
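The final compositing step, combining a color image with an alpha mask, can be shown with a small sketch; the neural textures and neural renderer themselves are not reproduced, and the names below are illustrative.

```python
# Minimal compositing sketch: alpha-blend the renderer's color image over a
# background using the opacity mask.
import numpy as np

def composite(color, alpha, background):
    """Blend the rendered object (color, alpha) over the background image."""
    alpha = alpha[..., None]                   # broadcast (H, W) -> (H, W, 1)
    return alpha * color + (1.0 - alpha) * background

# Usage with toy 4x4 RGB data.
h, w = 4, 4
color      = np.random.rand(h, w, 3)           # color image from the renderer
alpha      = np.random.rand(h, w)              # opacity of the rendered object
background = np.zeros((h, w, 3))               # target-view background
image = composite(color, alpha, background)
```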
Predictive virtual reconstruction of physical environments
Embodiments of the present invention describe predictively reconstructing a physical event using augmented reality. Embodiments describe identifying relative states of objects located in a physical event area by using video analysis to analyze collected video feeds from the physical event area before and after a physical event involving at least one of the objects, creating a knowledge corpus including the video analysis and the collected video feeds associated with the physical event and historical information, and capturing data, by a computing device, of the physical event area. Additionally, embodiments describe identifying possible precursor events based on the captured data and the knowledge corpus, generating a virtual reconstruction of the physical event using the possible precursor events, and displaying, by the computing device, the generated virtual reconstruction of the predicted physical event, wherein the displayed virtual reconstruction of the predicted physical event overlays an image of the physical event area.
CORRECTION OF A HALO IN A DIGITAL IMAGE AND DEVICE FOR IMPLEMENTING SAID CORRECTION
The object of the invention is a method (400) for correcting a halo (H) in a digital image (I) captured using photogrammetry in a 3-D modeling studio, the halo being generated through the interaction of light originating from a light source (L3, L4, L5, L6) in the studio with the optics of the shooting device, and manifesting as a local lightening of the digital image, the method comprising the steps of generating (410) a light intensity map (M) characterizing the light source in terms of spatial distribution and light intensity, providing (420) a convolution kernel specific to the shooting device, calculating (430) a convolution product of the light intensity map and the kernel to obtain a corrective value map (CVM), and subtracting the corrective value map from the digital image pixel by pixel to produce a corrected image (Icorr) in which the halo is not present.
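The described pipeline, convolving a light intensity map with a device-specific kernel to obtain a corrective value map and subtracting it pixel by pixel, maps naturally to a short sketch. The Gaussian kernel below stands in for the real, device-calibrated kernel and is purely an assumption.

```python
# Sketch of the correction: corrective value map (CVM) = intensity map convolved
# with a device-specific kernel; corrected image = image - CVM, pixel by pixel.
import numpy as np
from scipy.signal import convolve2d

def gaussian_kernel(size=9, sigma=2.0):
    """Simple normalized 2-D Gaussian used here as a placeholder kernel."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def correct_halo(image, intensity_map, kernel):
    """Subtract the corrective value map from the image to remove the halo."""
    cvm = convolve2d(intensity_map, kernel, mode="same", boundary="symm")
    return np.clip(image - cvm, 0.0, 1.0)

# Usage: a toy grayscale image with one bright source recorded in the map.
image = np.full((32, 32), 0.5)
intensity_map = np.zeros((32, 32))
intensity_map[16, 16] = 1.0                    # position and intensity of the source
corrected = correct_halo(image, intensity_map, gaussian_kernel())
```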
Lossless Compression for Multisample Render Targets Alongside Fragment Compression
Described herein is a data processing system having a multisample antialiasing compressor coupled to a texture unit and shader execution array. In one embodiment, the data processing system includes a memory device to store a multisample render target, the multisample render target to store color data for a set of sample locations of each pixel in a set of pixels; and a general-purpose graphics processor comprising a multisample antialiasing compressor to apply multisample antialiasing compression to color data generated for the set of sample locations of a first pixel in the set of pixels and a multisample render cache to store color data generated for the set of sample locations of the first pixel in the set of pixels, wherein color data evicted from the multisample render cache is to be stored to the multisample render target.
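One common lossless multisample compression idea, storing a single color plus a flag when every sample of a pixel matches, can illustrate the concept in a few lines. This generic sketch is not the hardware format described in the abstract.

```python
# Generic illustration of lossless multisample compression: when every sample of
# a pixel has the same color (the common case away from triangle edges), store
# the color once plus a flag instead of per-sample data.
def compress_pixel(sample_colors):
    """Return ('uniform', color) if all samples match, otherwise ('raw', samples)."""
    if all(c == sample_colors[0] for c in sample_colors):
        return ("uniform", sample_colors[0])
    return ("raw", list(sample_colors))

def decompress_pixel(record, num_samples):
    """Expand a compressed record back to per-sample colors."""
    kind, payload = record
    return [payload] * num_samples if kind == "uniform" else payload

# Usage: a 4x MSAA pixel fully covered by one triangle compresses to one color.
samples = [(255, 0, 0)] * 4
record = compress_pixel(samples)
assert decompress_pixel(record, 4) == samples  # round-trip is lossless
```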
BEAUTIFICATION TECHNIQUES FOR 3D DATA IN A MESSAGING SYSTEM
The subject technology receives a selection of a selectable graphical item from a plurality of selectable graphical items, the selectable graphical item comprising an augmented reality content generator for applying a 3D effect, the 3D effect including at least one beautification operation. The subject technology captures image data and depth data using a camera. The subject technology applies, to the image data and the depth data, the 3D effect including the at least one beautification operation based at least in part on the augmented reality content generator, the beautification operation being performed as part of applying the 3D effect. The subject technology generates a 3D message based at least in part on the applied 3D effect including the at least one beautification operation. The subject technology renders a view of the 3D message based at least in part on the applied 3D effect including the at least one beautification operation.
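As a loose illustration of applying an effect to paired image and depth data, the sketch below smooths the image only where depth marks a nearby subject. The depth threshold and box blur are arbitrary illustrative choices, not the beautification operations of the subject technology.

```python
# Loose sketch: smooth the image only where depth indicates a nearby subject,
# standing in for a beautification operation on paired image and depth data.
import numpy as np
from scipy.ndimage import uniform_filter

def apply_beautification(image, depth, near_threshold=1.0, blur_size=5):
    """Blend a smoothed copy of the image over the original where depth is near."""
    smoothed = uniform_filter(image, size=(blur_size, blur_size, 1))
    mask = (depth < near_threshold).astype(float)[..., None]
    return mask * smoothed + (1.0 - mask) * image

# Usage with toy data: the nearer half of the frame receives the smoothing.
h, w = 8, 8
image = np.random.rand(h, w, 3)
depth = np.linspace(0.5, 2.0, w)[None, :].repeat(h, axis=0)
result = apply_beautification(image, depth)
```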