Patent classifications
G06T15/506
SYSTEM FOR PHOTO-REALISTIC REFLECTIONS IN AUGMENTED REALITY
The present disclosure describes a system for fast generation of ray-traced reflections of virtually augmented objects in a real-world image, specifically on reflective surfaces. The system utilizes a standard raster graphics pipeline.
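The abstract does not spell out the reflection math, but the step it relies on is standard: mirror the view ray about the surface normal at a reflective pixel and trace the mirrored ray against the augmented (virtual) geometry. The sketch below is a CPU illustration under assumptions (a sphere stands in for the virtual object, and all names are hypothetical); the patent itself targets a raster graphics pipeline.

```python
# Minimal sketch: reflect the view ray at a reflective surface point and test
# the mirrored ray against a virtual object (a sphere stand-in, an assumption).
import numpy as np

def reflect(direction, normal):
    """Mirror an incoming unit direction about a unit surface normal."""
    return direction - 2.0 * np.dot(direction, normal) * normal

def ray_sphere_hit(origin, direction, center, radius):
    """Return the nearest positive hit distance along a unit ray, or None."""
    oc = origin - center
    b = np.dot(oc, direction)
    c = np.dot(oc, oc) - radius * radius
    disc = b * b - c
    if disc < 0.0:
        return None
    t = -b - np.sqrt(disc)
    return t if t > 1e-6 else None

# A reflective floor point seen from the camera, mirroring a virtual sphere:
view_dir = np.array([0.0, -0.7071, 0.7071])   # ray from camera toward the floor point
floor_normal = np.array([0.0, 1.0, 0.0])
refl_dir = reflect(view_dir, floor_normal)
hit = ray_sphere_hit(np.zeros(3), refl_dir, np.array([0.0, 1.0, 1.0]), 0.5)
```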
Inserting three-dimensional objects into digital images with consistent lighting via global and local lighting information
This disclosure describes methods, non-transitory computer readable storage media, and systems that generate realistic shading for three-dimensional objects inserted into digital images. The disclosed system utilizes a light encoder neural network to generate a representation embedding of lighting in a digital image. Additionally, the disclosed system determines points of the three-dimensional object visible within a camera view. The disclosed system generates a self-occlusion map for the digital three-dimensional object by determining whether fixed sets of rays uniformly sampled from the points intersect with the digital three-dimensional object. The disclosed system utilizes a generator neural network to determine a shading map for the digital three-dimensional object based on the representation embedding of lighting in the digital image and the self-occlusion map. Additionally, the disclosed system generates a modified digital image with the three-dimensional object inserted into the digital image with consistent lighting of the three-dimensional object and the digital image.
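A minimal sketch of the self-occlusion step as described: from each visible point, cast a fixed set of uniformly sampled rays and record the fraction that re-intersect the object's own mesh. The helper names and the brute-force triangle loop are assumptions for illustration, not the disclosed implementation.

```python
# Sketch of building a self-occlusion map from uniformly sampled rays.
import numpy as np

def sample_directions(n_rays, rng):
    """Uniformly sample unit directions on the sphere."""
    v = rng.normal(size=(n_rays, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def ray_hits_triangle(origin, direction, v0, v1, v2, eps=1e-6):
    """Moller-Trumbore ray/triangle intersection test."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:
        return False
    inv = 1.0 / det
    t_vec = origin - v0
    u = np.dot(t_vec, p) * inv
    if u < 0.0 or u > 1.0:
        return False
    q = np.cross(t_vec, e1)
    v = np.dot(direction, q) * inv
    if v < 0.0 or u + v > 1.0:
        return False
    t = np.dot(e2, q) * inv
    return t > eps          # epsilon also rejects self-hits at the ray origin

def self_occlusion_map(points, triangles, n_rays=64, seed=0):
    """Fraction of sampled rays from each visible point that re-intersect the mesh."""
    rng = np.random.default_rng(seed)
    dirs = sample_directions(n_rays, rng)
    occlusion = np.zeros(len(points))
    for i, p in enumerate(points):
        hits = sum(any(ray_hits_triangle(p, d, *tri) for tri in triangles)
                   for d in dirs)
        occlusion[i] = hits / n_rays
    return occlusion
```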
Apparatus, method and computer program for rendering a visual scene
An apparatus for rendering a visual scene includes a content visualization stage configured to: obtain, as a first input, a set of images of one or more objects and, as a second input, a geometry representation of the one or more objects in 3D space; obtain a final image representing the visual scene from the perspective of a target position, the visual scene including the one or more objects; and consider at least one of a lighting effect and an object interaction effect between the one or more objects and one or more further objects contained in the visual scene. The content visualization stage is configured to obtain a target view image from the set of images irrespective of the geometry representation, and the apparatus is configured to map the target view image onto the geometry representation under consideration of the target position.
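One way to read the two-step behaviour, sketched below under assumptions: the target view image is first chosen purely from the capture positions, with no geometry involved, and is then available to be mapped onto the geometry representation for the target position. The nearest-view selection rule is an assumption for illustration, not the claimed method.

```python
# Sketch: pick a target-view image from the set of images using only positions.
import numpy as np

def select_target_view(images, capture_positions, target_position):
    """Choose the capture whose position is closest to the target viewpoint."""
    d = np.linalg.norm(np.asarray(capture_positions) - target_position, axis=1)
    return images[int(np.argmin(d))]   # later mapped onto the geometry representation
```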
SYSTEMS AND METHODS FOR RENDERING A VIRTUAL ENVIRONMENT USING LIGHT PROBES
Methods, systems, and computer-readable media for rendering light probes in a virtual environment are disclosed. Noisy lighting data is accessed in a data structure associated with a light probe in a set of light probes in an environment. The noisy lighting data is provided as an input to a neural network. The neural network is trained to output an estimate of non-noisy lighting data based on the input. The noisy lighting data is replaced in the data structure with the estimated non-noisy lighting data.
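A minimal sketch of the replace-in-place loop the abstract describes, assuming the lighting data is stored as spherical-harmonic coefficients per probe and that `denoiser` is the trained network; both the data layout and the names are assumptions for illustration.

```python
# Sketch: replace each probe's noisy lighting data with the network's estimate.
import numpy as np

def denoise_light_probes(probe_grid, denoiser):
    """probe_grid: dict of probe id -> noisy SH coefficients, shape (9, 3).
    denoiser: trained model mapping noisy coefficients to a non-noisy estimate."""
    for probe_id, noisy_sh in probe_grid.items():
        estimate = denoiser(noisy_sh.reshape(-1)).reshape(noisy_sh.shape)
        probe_grid[probe_id] = estimate        # noisy data replaced in the data structure
    return probe_grid

# Usage with a placeholder "network" (identity) just to show the data flow:
probes = {0: np.random.rand(9, 3), 1: np.random.rand(9, 3)}
denoise_light_probes(probes, denoiser=lambda x: x)
```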
Methods for collecting and processing image information to produce digital assets
Paired images of substantially the same scene are captured with the same freestanding sensor. The paired images include reflected light illuminated with controlled polarization states that are different between the paired images. Information from the images is applied to a convolutional neural network (CNN) configured to derive a spatially varying bi-directional reflectance distribution function (SVBRDF) for objects in the paired images. Alternatively, the sensor is fixed and oriented to capture images of an object of interest in the scene while a light source traverses a path that intersects the sensor's field of view. Information from the paired images of the scene and from the images captured of the object of interest as the light source traverses the field of view is applied to a CNN to derive an SVBRDF for the object of interest. The image information and the SVBRDF are used to render a representation with artificial lighting conditions.
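The abstract leaves the network internals open; the sketch below only illustrates the assumed data flow: stack the two polarization captures and let a trained CNN (the hypothetical `svbrdf_cnn`) predict per-pixel SVBRDF parameters. The output layout is also an assumption.

```python
# Sketch of the data flow only: paired polarized captures -> CNN -> SVBRDF maps.
import numpy as np

def predict_svbrdf(img_polarized_a, img_polarized_b, svbrdf_cnn):
    """img_*: (H, W, 3) captures under different polarization states.
    svbrdf_cnn: trained model returning a (H, W, 10) parameter stack."""
    x = np.concatenate([img_polarized_a, img_polarized_b], axis=-1)   # (H, W, 6)
    params = svbrdf_cnn(x)
    return {
        "diffuse":   params[..., 0:3],
        "normal":    params[..., 3:6],
        "roughness": params[..., 6:7],
        "specular":  params[..., 7:10],
    }
```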
Graphical user interface for controlling a solar ray mapping
Systems, methods, and computer-readable media are described herein to model divergent beam ray paths between locations on a roof (e.g., of a structure) and modeled locations of the sun at different times of the day and different days during a week, month, year, or another time period. A graphical user interface allows for visualization of the modeled ray paths and graphical manipulation of the resolution and parameters of the modeling process.
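A small sketch of the ray-modelling core (the graphical interface aside): because the sun is modelled at finite locations rather than at infinity, the rays from each roof sample point to each modelled sun position diverge. Array names are assumptions.

```python
# Sketch: divergent rays from sampled roof locations to modelled sun positions.
import numpy as np

def divergent_rays(roof_points, sun_positions):
    """roof_points: (N, 3) sampled locations on the roof.
    sun_positions: (M, 3) modelled sun locations over the chosen time period.
    Returns (N, M, 3) unit ray directions from each roof point to each sun position."""
    rays = sun_positions[None, :, :] - roof_points[:, None, :]
    return rays / np.linalg.norm(rays, axis=-1, keepdims=True)
```

Changing the number of roof sample points or sun positions is one way the modelling resolution described in the abstract could be exposed to the interface.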
GENERATING AND MODIFYING REPRESENTATIONS OF HANDS IN AN ARTIFICIAL REALITY ENVIRONMENT
A method includes receiving an image of a real environment captured using a camera worn by a user, the image comprising a hand of the user, and determining a pose of the hand based on the image. The method further includes, based on a three-dimensional model of the hand having the determined pose, generating a two-dimensional surface representing the hand as viewed from a first viewpoint of the user, and positioning the two-dimensional surface representing the hand and one or more virtual-object representations in a three-dimensional space. The method further includes determining that a portion of the two-dimensional surface representing the hand is visible from a second viewpoint in the three-dimensional space, and generating an output image, wherein a set of image pixels of the output image corresponding to the visible portion of the two-dimensional surface is configured to cause a display to turn off a set of corresponding display pixels.
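A minimal numpy sketch of the final compositing step: where the hand's two-dimensional surface is the closest thing visible from the second viewpoint, the corresponding display pixels are switched off so the real hand remains visible. Buffer names and the "turn off = zero" convention are assumptions.

```python
# Sketch: turn off display pixels where the hand surface occludes virtual content.
import numpy as np

def compose_output(virtual_rgb, virtual_depth, hand_depth):
    """virtual_rgb: (H, W, 3) rendered virtual content.
    virtual_depth, hand_depth: (H, W) depth from the second viewpoint,
    with np.inf where nothing is present."""
    hand_visible = hand_depth < virtual_depth     # hand surface is in front
    out = virtual_rgb.copy()
    out[hand_visible] = 0.0                       # display pixels switched off
    return out
```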
White balance and color correction for interior vehicle camera
An image is received from a camera built into the cabin of a vehicle. The image is demosaiced and its noise is reduced. A segmentation algorithm is applied to the image, and the global illumination for the image is solved for. Based on the segmentation of the image and the global illumination, a bidirectional reflectance distribution function (BRDF) for the color and/or reflectance of materials in the cabin area of the vehicle is solved for. A white balance matrix and a color correction matrix for the image are computed based on the BRDF. The white balance matrix and the color correction matrix are applied to the image, which is then displayed or stored for additional image processing.
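A minimal sketch of the last stage only: applying the solved white balance and color correction matrices to the cabin image. The diagonal white-balance form and the example values are assumptions for illustration, not values derived from a BRDF.

```python
# Sketch: apply a white-balance matrix followed by a color-correction matrix.
import numpy as np

def correct_image(image, wb_matrix, cc_matrix):
    """image: (H, W, 3) linear RGB; wb_matrix, cc_matrix: (3, 3)."""
    flat = image.reshape(-1, 3)
    corrected = flat @ wb_matrix.T @ cc_matrix.T   # white balance, then color correction
    return np.clip(corrected, 0.0, 1.0).reshape(image.shape)

# Example matrices: per-channel gains for white balance, a full 3x3 for color correction.
wb = np.diag([1.8, 1.0, 1.4])
cc = np.array([[ 1.6, -0.4, -0.2],
               [-0.3,  1.5, -0.2],
               [-0.1, -0.5,  1.6]])
```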
IMAGE PROCESSING METHODS AND SYSTEMS FOR TRAINING A MACHINE LEARNING MODEL TO PREDICT ILLUMINATION CONDITIONS FOR DIFFERENT POSITIONS RELATIVE TO A SCENE
An image processing method generates a training dataset for training a machine learning model to predict illumination conditions for different positions relative to a scene, the training dataset including training images and reference data. The method includes: obtaining a training image of a training scene acquired by a first camera having an associated first coordinate system; determining local illumination maps, each associated with a respective position in the training scene in a respective second coordinate system and representing the illumination received from different directions around that position; transforming the position of each local illumination map from the second to the first coordinate system; and, responsive to determining that the transformed position of a local illumination map is visible, transforming the local illumination map from the second to the first coordinate system and including the transformed local illumination map and its transformed position in the reference data associated with the training image.
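A sketch, under assumptions, of the position transform and visibility test: a rigid transform `T_cam_from_scene` carries each illumination-map position into the first (camera) coordinate system, and a pinhole projection with intrinsics `K` decides whether the transformed position is visible in the training image. Both names and the pinhole model are assumptions.

```python
# Sketch: transform illumination-map positions and keep only the visible ones.
import numpy as np

def to_camera_frame(p_scene, T_cam_from_scene):
    """Transform a 3-D point from the second (scene) to the first (camera) frame."""
    p = np.append(p_scene, 1.0)
    return (T_cam_from_scene @ p)[:3]

def is_visible(p_cam, K, width, height):
    """Visible if the point lies in front of the camera and projects inside the image."""
    if p_cam[2] <= 0.0:
        return False
    u, v, w = K @ p_cam
    u, v = u / w, v / w
    return 0 <= u < width and 0 <= v < height

def build_reference_data(illum_maps, T_cam_from_scene, K, width, height):
    """Keep only illumination maps whose transformed position is visible."""
    reference = []
    for pos_scene, illum_map in illum_maps:
        pos_cam = to_camera_frame(pos_scene, T_cam_from_scene)
        if is_visible(pos_cam, K, width, height):
            # The map itself would also be re-expressed in the first coordinate system.
            reference.append((pos_cam, illum_map))
    return reference
```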
Light volume rendering
Systems, apparatuses, and methods for implementing light volume rendering techniques are disclosed. A processor is coupled to a memory. The processor renders the geometry of a scene into a geometry buffer. For a given light source in the scene, the processor initiates two shader pipeline passes to determine which pixels in the geometry buffer to light. On the first pass, the processor renders the front side of a light volume corresponding to the light source. Any pixels of the geometry buffer that lie in front of the front side of the light volume are marked as pixels to be discarded. Then, during the second pass, only those pixels which were not marked to be discarded are sent to the pixel shader. This approach reduces the overhead of applying a lighting effect to the scene by reducing the amount of work performed by the pixel shader.
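A CPU sketch of the two-pass idea, using depth buffers in place of the GPU's stencil and early-depth hardware: pass one marks geometry pixels lying in front of the light volume's front faces as discarded, and pass two runs the lighting math only on the remaining pixels. Buffer names and the simple multiply standing in for the lighting computation are assumptions.

```python
# Sketch of the two-pass light-volume culling on the CPU.
import numpy as np

def light_volume_mask(gbuffer_depth, light_front_depth):
    """Pass 1: geometry pixels in front of the light volume's front faces cannot be
    lit by this light, so they are marked for discard; the rest keep shading."""
    discard = gbuffer_depth < light_front_depth
    return ~discard                                # pixels the pixel shader still sees

def shade_pass(gbuffer_albedo, mask, light_color):
    """Pass 2: run the (expensive) lighting math only where the mask allows.
    A per-pixel multiply stands in for the real lighting computation."""
    lit = gbuffer_albedo.copy()
    lit[mask] = lit[mask] * light_color
    return lit
```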