Patent classifications
G06T2215/12
GEOMETRY-AWARE AUGMENTED REALITY EFFECTS WITH REAL-TIME DEPTH MAP
Techniques for introducing virtual objects into the physical environment of an AR system include displacing the vertices of a mesh representing that environment based on a live depth map. For example, an AR system generates a mesh template, i.e., an initial mesh whose vertices represent the physical environment, and a depth map that indicates the geometry of real objects within the physical environment. The AR system is configured to represent the real objects in the physical environment by displacing the vertices of the mesh based on depth values of the depth map and parameter values of a pinhole camera model. The depth values may be taken from the perspective of an illumination source in the physical environment.
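The displacement step described above can be sketched as follows. This is a minimal NumPy back-projection through a pinhole camera model, assuming one mesh vertex per depth-map pixel; the intrinsics `fx, fy, cx, cy` and the grid layout are illustrative assumptions, not the patent's exact pipeline:

```python
import numpy as np

def displace_mesh_vertices(depth, fx, fy, cx, cy):
    """Back-project each depth sample through a pinhole camera model,
    turning a flat H x W vertex grid into a 3D mesh of the scene."""
    h, w = depth.shape
    us, vs = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    x = (us - cx) * depth / fx
    y = (vs - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1)  # (H, W, 3) displaced vertices

# 2x2 depth map, unit focal lengths, principal point at the origin
depth = np.array([[1.0, 2.0], [1.0, 2.0]])
verts = displace_mesh_vertices(depth, fx=1.0, fy=1.0, cx=0.0, cy=0.0)
```

Each vertex ends up at the 3D point the camera would have observed at that pixel and depth, so nearer and farther real objects deform the mesh accordingly.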
Indoor scene illumination
Techniques for illuminating an indoor scene. A directional distribution associated with the indoor scene is received. The indoor scene has a first scene element and a first quadrilateral, and the first scene element has a first shading point disposed thereon. The directional distribution is reparametrized such that the first quadrilateral, as viewed from the first shading point, corresponds to an axis-aligned rectangular region in the reparametrized directional distribution. The first scene element is illuminated using one or more samples drawn from the first shading point by performing importance sampling based on the reparametrized directional distribution.
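The benefit of the reparametrization is that importance sampling can then be restricted to an axis-aligned rectangle. A generic sketch of that final sampling step, assuming the distribution has already been reparametrized into a tabulated 2D grid over [0, 1] x [0, 1] (the table-based inverse-CDF approach here is an illustrative stand-in, not the patent's specific parametrization):

```python
import numpy as np

def sample_rect_importance(dist, u_range, v_range, n, rng):
    """Draw n samples from a tabulated 2D directional distribution,
    restricted to an axis-aligned rectangle of the reparametrized domain.
    dist: (H, W) non-negative table; u runs across columns, v down rows."""
    h, w = dist.shape
    u0, u1 = u_range
    v0, v1 = v_range
    cols = slice(int(u0 * w), max(int(u0 * w) + 1, int(np.ceil(u1 * w))))
    rows = slice(int(v0 * h), max(int(v0 * h) + 1, int(np.ceil(v1 * h))))
    sub = dist[rows, cols].astype(float)
    # inverse-CDF sampling: marginal over u, then conditional over v
    marg = sub.sum(axis=0)
    marg_cdf = np.cumsum(marg) / marg.sum()
    us = np.searchsorted(marg_cdf, rng.random(n))
    samples = []
    for ui in us:
        cond = sub[:, ui]
        cond_cdf = np.cumsum(cond) / cond.sum()
        vi = np.searchsorted(cond_cdf, rng.random())
        samples.append((cols.start + ui, rows.start + vi))
    return samples

# all probability mass on one texel: every sample should land there
rng = np.random.default_rng(0)
dist = np.zeros((4, 4))
dist[1, 1] = 1.0
samples = sample_rect_importance(dist, (0.0, 1.0), (0.0, 1.0), 8, rng)
```

Because the rectangle is axis-aligned in the reparametrized domain, the restriction is just a 2D array slice rather than a per-sample visibility or clipping test.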
APPLICATION PROGRAMMING INTERFACE TO CREATE AND MODIFY GRAPHICS OBJECTS
Apparatuses, systems, and techniques to enable image processing methods on a graphics processing unit (GPU). In at least one embodiment, seamless cubemapping is enabled with a flag contained within a function of an application programming interface (API).
TEXTURE MAPPING WITH RENDER-BAKED ANIMATION
A virtual-reality computing device comprises a pose sensor, a rendering tool, and a display. The pose sensor is configured to measure a current pose of the virtual-reality computing device in a physical space. The rendering tool is configured to receive a holographic animation of a 3D model that includes a sequence of holographic image frames, and to receive a render-baked dynamic lighting animation that includes a sequence of lighting image frames corresponding to the sequence of holographic image frames. The rendering tool is also configured to derive a 2D view of the 3D model with a virtual perspective based on the current pose, and to texture map a corresponding lighting image frame to the 2D view of the 3D model to generate a rendered image frame of the 2D view with texture-mapped lighting. The display is configured to visually present the rendered image frame.
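The pairing of holographic frames with their baked lighting frames can be sketched as a simple per-frame modulation. This assumes both sequences are stored as RGB arrays in [0, 1] and that modulation is the compositing operation; the actual texture-mapping stage in the patent may differ:

```python
import numpy as np

def render_frame(holo_frames, light_frames, frame_idx):
    """Pair the holographic image frame with the render-baked lighting
    frame at the same index and modulate to get the lit rendered frame."""
    view = holo_frames[frame_idx]    # 2D view of the 3D model, (H, W, 3)
    light = light_frames[frame_idx]  # matching baked lighting, (H, W, 3)
    return np.clip(view * light, 0.0, 1.0)

# mid-gray view under half-intensity baked lighting
holo = [np.full((2, 2, 3), 0.5)]
light = [np.full((2, 2, 3), 0.5)]
frame = render_frame(holo, light, 0)
```

Because the lighting is baked per animation frame, the runtime cost is a texture fetch and multiply rather than dynamic light evaluation.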
Methods and systems for an automated design, fulfillment, deployment and operation platform for lighting installations
A platform for design of a lighting installation generally includes an automated search engine for retrieving and storing a plurality of lighting objects in a lighting object library and a lighting design environment providing a visual representation of a lighting space containing lighting space objects and lighting objects. The visual representation is based on properties of the lighting space objects and lighting objects obtained from the lighting object library. A plurality of aesthetic filters is configured to permit a designer in a design environment to adjust parameters of the plurality of lighting objects handled in the design environment to provide a desired collective lighting effect using the plurality of lighting objects.
Sampling shadow maps at an offset
Disclosed herein is a web-based videoconference system that allows for video avatars to navigate within a virtual environment. Various methods for efficient modeling, rendering, and shading are disclosed herein.
Analysis and manipulation of panoramic surround views
Various embodiments of the present invention relate generally to systems and methods for analyzing and manipulating images and video. According to particular embodiments, the spatial relationship between multiple images and video is analyzed together with location information data, for purposes of creating a representation referred to herein as a surround view. In particular embodiments, a surround view can be generated by combining a panoramic view of an object with a panoramic view of a distant scene, such that the object panorama is placed in a foreground position relative to the distant scene panorama. Such combined panoramas can enhance the interactive and immersive viewing experience of the surround view.
SYSTEM AND METHOD FOR CONTENT CREATION VIA INTERACTIVE LAYERS
A system and method for content creation via interactive layers is provided. A mutable general object on which to build an artefact is stored. The mutable general object includes a plurality of n-dimensional data units capable of being rendered in a multi-dimensional display. An environment represented by the artefact is displayed. The artefact includes layers that each represent a different characteristic of the environment. Each layer includes a generator and layer parameters. A unique identifier is assigned to each layer. The identifiers for the layers of the artefact are composited and the composited identifiers are stored. Upon accessing the composited identifiers, the artefact is reconfigured for display using the generator and layer parameters from each of the layers.
Interactive Path Tracing on the Web
A method renders photorealistic images in a web browser. The method is performed at a computing device having a general purpose processor and a graphics processing unit (GPU). The method includes obtaining an environment map and images of an input scene. The method also includes computing textures for the input scene including by encoding an acceleration structure of the input scene. The method further includes transmitting the textures to shaders executing on a GPU. The method includes generating samples of the input scene, by performing at least one path tracing algorithm on the GPU, according to the textures. The method also includes lighting or illuminating a sample of the input scene using the environment map, to obtain a lighted scene, and tone mapping the lighted scene. The method includes drawing output on a canvas, in the web browser, based on the tone-mapped scene to render the input scene.
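The step of encoding an acceleration structure into textures can be sketched as follows. The idea is to pack structure nodes into a float RGBA texture so GPU shaders can fetch them by texel index; packing two texels per bounding box, as done here, is an illustrative layout rather than the method's actual encoding:

```python
import numpy as np

def encode_bvh_texture(aabbs, width=4):
    """Pack axis-aligned bounding boxes (min.xyz, max.xyz) into a flat
    RGBA float texture, two texels per node, padded to a full rectangle."""
    data = []
    for mn, mx in aabbs:
        data.extend([*mn, 0.0, *mx, 0.0])  # min texel, then max texel
    texels = np.array(data, dtype=np.float32).reshape(-1, 4)
    rows = int(np.ceil(len(texels) / width))
    tex = np.zeros((rows, width, 4), dtype=np.float32)
    tex.reshape(-1, 4)[: len(texels)] = texels
    return tex

# a single bounding box becomes the first two texels of row 0
tex = encode_bvh_texture([((0.0, 0.0, 0.0), (1.0, 2.0, 3.0))])
```

Transmitting the structure as textures is what lets the path-tracing shaders on the GPU traverse it with ordinary texture fetches, since fragment shaders in a browser context have no general memory access.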