Patent classifications
G06T15/506
Multi-level lighting system
A system and method for facilitating the lighting of objects during interactive gameplay by users on client computing platforms distinguishes between activities performed prior to interactive gameplay and those performed during it. Different client computing platforms may have different levels of graphics performance. Lighting may be defined by characteristics of one or more light sources that illuminate one or more objects in a multi-dimensional volume in a virtual space. Different lighting techniques or lighting features may be combined to create lighting during interactive gameplay. Some lighting techniques or features may only be available and/or supported on high-performance computing platforms, whereas other lighting features may be available even on low-performance computing platforms.
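The tiered availability of lighting features described above can be sketched as a small capability table. The tier names and feature lists below are illustrative assumptions, not taken from the patent:

```python
# Hypothetical sketch: selecting lighting features by platform capability tier.
# Tier names and feature assignments are assumptions for illustration.

LIGHTING_FEATURES = {
    "baseline": ["ambient", "directional_diffuse"],    # available everywhere
    "mid":      ["specular_highlights", "lightmaps"],  # mid-tier platforms
    "high":     ["dynamic_shadows", "screen_space_ao"] # high-performance only
}

TIER_ORDER = ["baseline", "mid", "high"]

def features_for_platform(tier: str) -> list:
    """Return the cumulative lighting feature set enabled for a platform tier."""
    if tier not in TIER_ORDER:
        raise ValueError(f"unknown tier: {tier!r}")
    enabled = []
    for t in TIER_ORDER[: TIER_ORDER.index(tier) + 1]:
        enabled.extend(LIGHTING_FEATURES[t])
    return enabled
```

Features accumulate upward, so a high-performance platform supports everything a low-performance platform does plus its own additions.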
3D microgeometry and reflectance modeling
A system and method for three-dimensional (3D) microgeometry and reflectance modeling is provided. The system receives images comprising a first set of images of a face and a second set of images of the face. The faces in the first set of images and the second set of images are exposed to omni-directional lighting and directional lighting, respectively. The system generates a 3D face mesh based on the received images and executes a set of skin-reflectance modeling operations by using the generated 3D face mesh and the second set of images, to estimate a set of texture maps for the face. Based on the estimated set of texture maps, the system texturizes the generated 3D face mesh. The texturization includes an operation in which texture information, including microgeometry skin details and skin reflectance details, of the estimated set of texture maps is mapped onto the generated 3D face mesh.
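The final texturization step, mapping texel values from the estimated texture maps onto the mesh, can be sketched with a minimal nearest-texel UV lookup. The data layout (a 2D list as the texture, per-vertex UV coordinates) is an assumption for illustration, not the patent's representation:

```python
# Minimal sketch (assumed representation): mapping per-vertex UV coordinates
# into a texture map with nearest-texel lookup, as in the texturization step.

def sample_texture(texture, u, v):
    """Nearest-texel lookup; (u, v) in [0, 1], v measured from the top row."""
    h, w = len(texture), len(texture[0])
    x = min(w - 1, int(u * w))
    y = min(h - 1, int(v * h))
    return texture[y][x]

def texturize(mesh_uvs, texture):
    """Assign each mesh vertex the texel its UV coordinate maps to."""
    return [sample_texture(texture, u, v) for u, v in mesh_uvs]
```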
Method and apparatus for determining ambient illumination in AR scene
An apparatus for determining ambient illumination in an AR scene is configured to: set virtual light source points in the AR scene; predict reference illumination parameters of all of the virtual light source points for a current image frame using a neural network; configure a reference space confidence and a reference time confidence for the virtual light source points; acquire a reference comprehensive confidence by fusing the reference space confidence and the reference time confidence; acquire a fused current comprehensive confidence by comparing the reference comprehensive confidence with the comprehensive confidence of a previous image frame; acquire illumination parameters of the current frame by correcting the predicted illumination parameters of the current image frame according to the current comprehensive confidence, the previous-frame comprehensive confidence, and the previous-frame illumination parameters; and perform illumination rendering of a virtual object in the AR scene according to the illumination parameters of the current frame.
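The confidence fusion and temporal correction steps above can be sketched as follows. The fusion weights and blend formula are assumptions; the abstract does not specify exact equations:

```python
# Illustrative sketch of confidence-weighted temporal correction of
# illumination parameters. The linear fusion and blending formulas
# below are assumed, not taken from the patent.

def fuse_confidence(space_conf, time_conf, w=0.5):
    """Fuse spatial and temporal confidences into one comprehensive confidence."""
    return w * space_conf + (1.0 - w) * time_conf

def correct_illumination(curr_params, prev_params, curr_conf, prev_conf):
    """Blend current-frame and previous-frame illumination parameters,
    weighting each frame by its comprehensive confidence."""
    total = curr_conf + prev_conf
    if total == 0.0:
        return list(curr_params)
    a = curr_conf / total
    return [a * c + (1.0 - a) * p for c, p in zip(curr_params, prev_params)]
```

A high previous-frame confidence pulls the result toward the previous frame's parameters, which damps flicker when the current prediction is unreliable.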
TECHNOLOGIES FOR RENDERING ITEMS AND ELEMENTS THEREOF WITHIN A DESIGN STUDIO
Systems and methods for rendering items in a user interface are described. According to certain aspects, an electronic device may initiate, on a display of the electronic device, a user interface for displaying a rendering of an item. In embodiments, the user interface may include selectable options for editing design elements of the item. As a user of the electronic device moves a pointer on the user interface, the electronic device may automatically and dynamically configure a lighting effect simulating a virtual light source by setting a position of the virtual light source to the location of the pointer. A rendering of the item can then be generated by applying the lighting effect to a digital image of the item, where the lighting effect can be updated responsive to movement of the pointer.
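A minimal sketch of the pointer-driven lighting effect follows, assuming a grayscale image stored as nested lists and a simple linear distance falloff (the falloff model and parameters are illustrative, not from the patent):

```python
# Minimal sketch (assumed details): brightening an image based on distance
# from a virtual light source placed at the pointer location.
import math

def lighting_weight(px, py, lx, ly, radius=100.0):
    """Falloff in [0, 1] for pixel (px, py) given a light at pointer (lx, ly)."""
    d = math.hypot(px - lx, py - ly)
    return max(0.0, 1.0 - d / radius)

def apply_pointer_light(image, pointer, radius=100.0, strength=0.5):
    """Return a copy of a grayscale image (list of rows) brightened near the pointer."""
    lx, ly = pointer
    out = []
    for y, row in enumerate(image):
        out.append([
            min(255, int(v * (1 + strength * lighting_weight(x, y, lx, ly, radius))))
            for x, v in enumerate(row)
        ])
    return out
```

Re-running `apply_pointer_light` with an updated pointer position gives the dynamic update responsive to pointer movement described above.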
Dynamically estimating light-source-specific parameters for digital images using a neural network
This disclosure relates to methods, non-transitory computer readable media, and systems that can render a virtual object in a digital image by using a source-specific-lighting-estimation-neural network to generate three-dimensional (“3D”) lighting parameters specific to a light source illuminating the digital image. To generate such source-specific-lighting parameters, for instance, the disclosed systems utilize a compact source-specific-lighting-estimation-neural network comprising both common network layers and network layers specific to different lighting parameters. In some embodiments, the disclosed systems further train such a source-specific-lighting-estimation-neural network to accurately estimate spatially varying lighting in a digital image based on comparisons of predicted environment maps from a differentiable-projection layer with ground-truth-environment maps.
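The "common network layers plus parameter-specific layers" architecture can be sketched with a tiny NumPy forward pass. Layer sizes, parameter names, and the choice of heads (direction, color, intensity) are assumptions for illustration only:

```python
# Hypothetical sketch of a shared trunk feeding per-parameter heads,
# mirroring the compact source-specific architecture described above.
# All sizes and head names are arbitrary assumptions.
import numpy as np

rng = np.random.default_rng(0)

def dense_relu(x, w, b):
    """One fully connected layer with ReLU activation."""
    return np.maximum(0.0, x @ w + b)

# Shared trunk: image features -> common representation.
W_trunk, b_trunk = rng.normal(size=(64, 32)), np.zeros(32)

# One lightweight linear head per source-specific lighting parameter.
heads = {
    "direction": (rng.normal(size=(32, 3)), np.zeros(3)),
    "color":     (rng.normal(size=(32, 3)), np.zeros(3)),
    "intensity": (rng.normal(size=(32, 1)), np.zeros(1)),
}

def estimate_lighting(features):
    """Run the shared trunk once, then each head on the shared representation."""
    shared = dense_relu(features, W_trunk, b_trunk)
    return {name: shared @ w + b for name, (w, b) in heads.items()}
```

Sharing the trunk is what keeps the network compact: the expensive feature extraction is computed once, and each 3D lighting parameter only adds a small head.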
System and method for interplay between elements of an augmented reality environment
An augmented reality system is disclosed. The system receives values of parameters of real-world elements of an augmented reality environment from various sensors and creates a three-dimensional textual matrix representing the sensed real-world environment based on those parameters. The system determines a context of a specific virtual object with respect to the real-world environment based on the three-dimensional textual matrix. The system then models the specific virtual object based on the context to place it in the augmented reality environment.
METHOD FOR RENDERING ON BASIS OF HEMISPHERICAL ORTHOGONAL FUNCTION
The invention discloses a method for rendering on the basis of a hemispherical orthogonal function, the method comprising the following steps: selecting rendering fragments and establishing a local coordinate system; acquiring a bidirectional reflectance distribution function (BRDF) of a material; if the global illumination is an orthogonal function, determining a rotation matrix of the orthogonal function coefficients according to the rotation angles between the global coordinate system and the local coordinate system, and calculating a local orthogonal function illumination coefficient; converting the local orthogonal function illumination coefficient into a hemispherical orthogonal function illumination coefficient; sampling to obtain the spatial distribution of the BRDF of the rendered material; obtaining a hemispherical orthogonal function of the BRDF of the rendered material; and taking the dot product of the hemispherical orthogonal function coefficients of the illumination and the hemispherical orthogonal function coefficients of the BRDF of the rendered material and accumulating to obtain the light intensity in the reflection direction. A hemispherical harmonic (HSH) function is used to fit measured or theoretically derived BRDF data, which may avoid the fitting difficulty that occurs for a spherical harmonic function due to data being missing in the lower hemisphere.
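The final accumulation step above reduces, once both illumination and BRDF are projected onto the same hemispherical basis, to a dot product of their coefficient vectors. A minimal sketch of that last step (the basis projection itself is omitted):

```python
# Sketch of the final step only: with illumination and BRDF each expressed
# as coefficient vectors in a common hemispherical orthogonal basis, the
# reflected light intensity is the accumulated product of coefficients.

def reflected_intensity(light_coeffs, brdf_coeffs):
    """Dot product of hemispherical-function coefficient vectors."""
    if len(light_coeffs) != len(brdf_coeffs):
        raise ValueError("coefficient vectors must have equal length")
    return sum(l * b for l, b in zip(light_coeffs, brdf_coeffs))
```

This reduction from an integral over the hemisphere to a coefficient dot product is the standard payoff of projecting both factors onto the same orthogonal basis.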
Temporal techniques of denoising Monte Carlo renderings using neural networks
A modular architecture is provided for denoising Monte Carlo renderings using neural networks. The temporal approach extracts and combines feature representations from neighboring frames rather than building a temporal context using recurrent connections. A multiscale architecture includes separate single-frame or temporal denoising modules for individual scales, and one or more scale compositor neural networks configured to adaptively blend individual scales. An error-predicting module is configured to produce adaptive sampling maps for a renderer to achieve more uniform residual noise distribution. An asymmetric loss function may be used for training the neural networks, which can provide control over the variance-bias trade-off during denoising.
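One way an asymmetric loss can provide control over the variance-bias trade-off is to penalize denoised residuals that overshoot past the reference more heavily than those that stay on the noisy input's side. The exact form below is an assumption, not the patent's formula:

```python
# Illustrative sketch of an asymmetric loss (exact form assumed): residuals
# whose sign disagrees with the noisy-input residual are penalized more,
# biasing the denoiser against "inventing" detail past the reference.

def asymmetric_loss(pred, target, noisy, slope=2.0):
    """Mean per-element L1 loss, scaled by `slope` when (pred - target) and
    (noisy - target) have opposite signs, i.e. the denoiser overshot."""
    total = 0.0
    for p, t, n in zip(pred, target, noisy):
        err = p - t
        penalty = slope if err * (n - t) < 0 else 1.0
        total += penalty * abs(err)
    return total / len(pred)
```

With `slope > 1` the network prefers leaving some residual noise (variance) over introducing bias in the opposite direction, matching the trade-off control described above.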
METHOD FOR FORMING AN IMAGE OF AN OBJECT, COMPUTER PROGRAM PRODUCT AND IMAGE FORMING SYSTEM FOR CARRYING OUT THE METHOD
A method for forming an image of an object, a computer program product, and an image forming system for carrying out the method are provided. In the method, data about the object are provided by a first data processing device, and a first data record with first data is provided. The first data record is loaded from the first data processing device into a second data processing device. A second data record is loaded from a data memory into the second data processing device in dependence on the first data record loaded into the second data processing device. A processing data record is generated or detected based on the second data record. A two-dimensional output image of the object is generated by processing the data about the object with the processing data record, the output image having a predeterminable number of output image pixels.
IMAGE RENDERING METHOD AND APPARATUS, ELECTRONIC DEVICE, AND STORAGE MEDIUM
This application discloses an image rendering method and apparatus, an electronic device, and a storage medium, and belongs to the field of image processing technologies. The method includes: acquiring a lightmap of a target three-dimensional model in response to a light rendering instruction; acquiring a seam edge of the target three-dimensional model; determining, in the lightmap, a target pixel corresponding to the seam edge; and updating an initial pixel value of the target pixel in the lightmap to obtain a repaired lightmap. In this application, the seam edge of the target three-dimensional model is identified automatically and the pixel value of the target pixel is updated automatically, so that recognition and repair of a seam problem can be completed with one click, thereby avoiding the labor cost of manually troubleshooting and repairing seams, reducing image rendering time, and improving image rendering efficiency.
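The pixel-update step can be sketched as averaging the texel values that map to the same 3D seam edge. The seam-texel pairing is assumed given here; the seam-detection step is omitted:

```python
# Hypothetical sketch: repairing a lightmap seam by averaging the texel
# values on either side of each seam edge, so shading matches across the
# seam. The pairing of seam texels is assumed to be known.

def repair_seam(lightmap, seam_pairs):
    """Replace each pair of seam texels with their average.

    lightmap:   2D list of scalar texel values.
    seam_pairs: list of ((x0, y0), (x1, y1)) texel coordinates that map to
                the same point on the model's 3D seam edge.
    """
    repaired = [row[:] for row in lightmap]  # work on a copy
    for (x0, y0), (x1, y1) in seam_pairs:
        avg = (lightmap[y0][x0] + lightmap[y1][x1]) / 2.0
        repaired[y0][x0] = avg
        repaired[y1][x1] = avg
    return repaired
```

Because both sides of the seam receive the same value, the visible discontinuity along the UV island boundary disappears after the update.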