Patent classifications
G06T15/50
METHOD FOR RECONSTRUCTING THREE-DIMENSIONAL MODEL, METHOD FOR TRAINING THREE-DIMENSIONAL RECONSTRUCTION MODEL, AND APPARATUS
This application provides a method for reconstructing a three-dimensional model, a method for training a three-dimensional reconstruction model, an apparatus, a computer device, and a storage medium. The method for reconstructing a three-dimensional model includes: obtaining an image feature coefficient of an input image; respectively obtaining, according to the image feature coefficient, a global feature map and an initial local feature map based on a texture and a shape of the input image; performing edge smoothing on the initial local feature map, to obtain a target local feature map; respectively splicing the global feature map and the target local feature map based on the texture and the shape, to obtain a target texture image and a target shape image; and performing three-dimensional model reconstruction according to the target texture image and the target shape image, to obtain a target three-dimensional model.
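The pipeline above — smooth the edges of the local feature map, then splice it with the global feature map — can be illustrated with a minimal sketch. The choice of a 3x3 mean filter restricted to a border strip, and channel-wise concatenation as the "splicing", are assumptions for illustration, not the patent's specific operators.

```python
def smooth_edges(local_map, border=1):
    """Replace each border pixel of the local feature map with the mean of
    its in-bounds 3x3 neighbourhood (a hypothetical edge-smoothing choice);
    interior pixels are left untouched."""
    h, w = len(local_map), len(local_map[0])
    out = [row[:] for row in local_map]
    for y in range(h):
        for x in range(w):
            on_edge = y < border or y >= h - border or x < border or x >= w - border
            if not on_edge:
                continue
            nbrs = [local_map[ny][nx]
                    for ny in range(max(0, y - 1), min(h, y + 2))
                    for nx in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = sum(nbrs) / len(nbrs)
    return out

def splice(global_map, target_local_map):
    """Concatenate the global map and the smoothed local map row by row,
    standing in for the splicing step that yields the target image."""
    return [g_row + l_row for g_row, l_row in zip(global_map, target_local_map)]

g = [[1.0] * 4 for _ in range(4)]
l = [[0.0, 0.0, 0.0, 0.0],
     [0.0, 4.0, 4.0, 0.0],
     [0.0, 4.0, 4.0, 0.0],
     [0.0, 0.0, 0.0, 0.0]]
target = splice(g, smooth_edges(l))
```

The same two steps would be applied once for the texture branch and once for the shape branch before the final reconstruction.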
ANISOTROPIC TEXTURE FILTERING USING WEIGHTS OF AN ANISOTROPIC FILTER THAT MINIMIZE A COST FUNCTION
A method of performing anisotropic texture filtering includes generating one or more parameters describing an elliptical footprint in texture space; performing isotropic filtering at each sampling point of a set of sampling points in an ellipse to be sampled to produce a plurality of isotropic filter results, the ellipse to be sampled based on the elliptical footprint; selecting, based on one or more parameters of the set of sampling points and one or more parameters of the ellipse to be sampled, weights of an anisotropic filter that minimize a cost function that penalises high frequencies in the filter response of the anisotropic filter under a constraint that the variance of the anisotropic filter is related to an anisotropic ratio squared, the anisotropic ratio being the ratio of a major radius of the ellipse to be sampled to a minor radius of the ellipse to be sampled; and combining the plurality of isotropic filter results using the selected weights of the anisotropic filter to generate at least a portion of a filter result.
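A minimal sketch of the weight-selection step: for a fixed variance, a Gaussian minimises high-frequency response, so Gaussian weights over the sampling points, with variance proportional to the squared anisotropic ratio, stand in for the cost-function minimisation described above. The proportionality constant `c` and the sample spacing are assumptions, not values from the claim.

```python
import math

def anisotropic_weights(num_samples, major_radius, minor_radius, c=0.25):
    """Normalized Gaussian weights over sampling points along the major
    axis; the variance is tied to the squared anisotropic ratio."""
    ratio = major_radius / minor_radius           # anisotropic ratio
    variance = c * ratio * ratio                  # variance ~ ratio^2
    # sample positions spread symmetrically about the ellipse centre
    positions = [i - (num_samples - 1) / 2.0 for i in range(num_samples)]
    raw = [math.exp(-p * p / (2.0 * variance)) for p in positions]
    total = sum(raw)
    return [w / total for w in raw]               # weights sum to 1

w = anisotropic_weights(5, major_radius=4.0, minor_radius=1.0)
```

Each weight would multiply one isotropic filter result; the weighted sum gives the anisotropic filter result.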
METHOD AND APPARATUS FOR THE AUTOMATION OF VARIABLE RATE SHADING IN A GPU DRIVER CONTEXT
A system and a method are disclosed for varying a pixel-rate functionality of a GPU as an optional feature without an explicit implementation from within an application. User interface (UI) content may be detected in a draw call of an application and a variable-rate shader lookup map may be generated based on the detected UI content. A pixel rate of 3D content may be increased using the variable-rate shader lookup map. Additionally or alternatively, other conditions may be detected for increasing the pixel rate, such as using information in an application profile, detecting high or low luminance values, detecting motion and/or detecting temporal anti-aliasing.
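The lookup-map generation can be sketched as a tile-granular shading-rate map built from a UI-detection mask. The tile size and the "1x1"/"2x2" rate codes are assumptions for illustration, not any specific GPU's encoding.

```python
def build_vrs_lookup_map(ui_mask, tile=4):
    """Build a variable-rate-shading lookup map: tiles overlapping
    detected UI pixels get the full 1x1 rate, the rest a coarser 2x2."""
    h, w = len(ui_mask), len(ui_mask[0])
    rates = []
    for ty in range(0, h, tile):
        row = []
        for tx in range(0, w, tile):
            has_ui = any(
                ui_mask[y][x]
                for y in range(ty, min(ty + tile, h))
                for x in range(tx, min(tx + tile, w))
            )
            row.append("1x1" if has_ui else "2x2")
        rates.append(row)
    return rates

# UI detected in the top-left 4x4 corner of an 8x8 frame
mask = [[x < 4 and y < 4 for x in range(8)] for y in range(8)]
vrs_map = build_vrs_lookup_map(mask)
```

The same map-building step could be driven by the other triggers the abstract lists (application profile, luminance, motion, temporal anti-aliasing) instead of a UI mask.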
System and method for three-dimensional scanning and for capturing a bidirectional reflectance distribution function
A method for generating a three-dimensional (3D) model of an object includes: capturing images of the object from a plurality of viewpoints, the images including color images; generating a 3D model of the object from the images, the 3D model including a plurality of planar patches; for each patch of the planar patches: mapping image regions of the images to the patch, each image region including at least one color vector; and computing, for each patch, at least one minimal color vector among the color vectors of the image regions mapped to the patch; generating a diffuse component of a bidirectional reflectance distribution function (BRDF) for each patch of planar patches of the 3D model in accordance with the at least one minimal color vector computed for each patch; and outputting the 3D model with the BRDF for each patch.
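The minimal-color-vector step rests on the observation that specular reflection only adds energy on top of the diffuse term, so the darkest observation of a patch across viewpoints approximates its diffuse colour. A sketch, assuming RGB colour vectors and a simple channel-wise minimum (the patent may use a per-vector minimum instead):

```python
def diffuse_albedo(observations):
    """Per-patch diffuse BRDF colour: the channel-wise minimum over all
    colour vectors of image regions mapped to the patch. Views containing
    a specular highlight are brighter and therefore never the minimum."""
    r = min(c[0] for c in observations)
    g = min(c[1] for c in observations)
    b = min(c[2] for c in observations)
    return (r, g, b)

# three views of one planar patch; the last contains a specular highlight
patch_colors = [(0.30, 0.20, 0.10), (0.32, 0.22, 0.12), (0.90, 0.85, 0.80)]
diffuse = diffuse_albedo(patch_colors)
```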
Adaptive sampling for structured light scanning
A system to process images includes a light source configured to emit a first illumination pattern onto one or more first portions of a scene. The system also includes an image sensor configured to capture light reflected from the scene in response to the emitted first illumination pattern. The system also includes an optimizer configured to perform raytracing of the light reflected from the scene. The system further includes a processor operatively coupled to the optimizer. The processor is configured to determine a parameter of a surface of the scene based on the raytracing, cause the light source to emit a second illumination pattern onto one or more second portions of the scene based at least in part on the parameter of the surface, and refine the parameter of the surface of the scene based on additional raytracing performed on reflected light from the second illumination pattern.
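The adaptive loop — coarse pattern, raytrace, refine only where needed — can be sketched as a selection of scene regions whose estimated surface parameter is still uncertain after the first pattern. The per-region uncertainty values and the threshold are assumptions for illustration.

```python
def next_pattern(uncertainty, threshold=0.1):
    """Select the regions whose surface-parameter uncertainty (estimated
    from raytracing the first pattern) exceeds a threshold; only those
    regions receive the denser second illumination pattern."""
    return [i for i, u in enumerate(uncertainty) if u > threshold]

# per-region uncertainty after raytracing light from the first pattern
uncertainty = [0.02, 0.35, 0.08, 0.50]
regions_to_rescan = next_pattern(uncertainty)
```

Iterating this loop concentrates projector samples on surfaces that are hard to reconstruct, which is the point of adaptive structured light.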
White Balance and Color Correction for Interior Vehicle Camera
An image is received from a camera built into a cabin of a vehicle. The image is demosaiced and its noise is reduced. A segmentation algorithm is applied to the image. A global illumination for the image is solved. Based on the segmentation of the image and the global illumination, a bidirectional reflectance distribution function (BRDF) for color and/or reflectance information of material in the cabin area of the vehicle is solved for. A white balance matrix and a color correction matrix for the image are computed based on the BRDF. The white balance matrix and the color correction matrix are applied to the image, which is then displayed or stored for additional image processing.
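The final application step is standard: a diagonal white-balance gain followed by a 3x3 colour correction matrix, per pixel. A sketch with made-up matrix values (in practice both would come from the BRDF solve described above):

```python
def apply_matrices(pixel, wb, ccm):
    """Apply a diagonal white-balance matrix, then a 3x3 colour
    correction matrix, to one RGB pixel."""
    balanced = [pixel[i] * wb[i] for i in range(3)]   # per-channel WB gains
    return tuple(
        sum(ccm[r][c] * balanced[c] for c in range(3)) for r in range(3)
    )

wb = [1.8, 1.0, 1.5]              # illustrative gains, not solved values
ccm = [[1.0, 0.0, 0.0],           # identity CCM keeps the sketch readable
       [0.0, 1.0, 0.0],
       [0.0, 0.0, 1.0]]
out = apply_matrices((0.2, 0.4, 0.1), wb, ccm)
```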
VOLUMETRIC DYNAMIC DEPTH DELINEATION
A method for visualizing two-dimensional data with three-dimensional volume enables the end user to easily view abnormalities in sequential data. The two-dimensional data can be in the form of a tiled texture with the images in a set row and column, a media file with the images displayed at certain images in time, or any other way to depict a set of two-dimensional images. The disclosed method takes in each pixel of the images and evaluates the density, usually represented by color, of the pixel. The disclosed method evaluates and renders the opacity and color of each of the pixels within the volume. The disclosed method also calculates and creates dynamic shadows within the volume in real time. This evaluation allows the user to set threshold values and return exact representations of the data presented.
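The density-to-opacity-and-color evaluation can be sketched as standard front-to-back compositing along one ray through the volume. Mapping density to opacity via an exponential, and using the density itself as a grey value, are assumptions for illustration.

```python
import math

def composite_ray(densities, step=1.0):
    """Front-to-back alpha compositing of density samples along one ray:
    each density maps to an opacity, and the density doubles as a grey
    colour value for this sketch."""
    color, transmittance = 0.0, 1.0
    for d in densities:
        alpha = 1.0 - math.exp(-d * step)   # opacity from density
        color += transmittance * alpha * d  # accumulate shaded colour
        transmittance *= 1.0 - alpha
        if transmittance < 1e-4:            # early ray termination
            break
    return color, 1.0 - transmittance

col, opacity = composite_ray([0.0, 0.5, 2.0])
```

A threshold, as described above, would simply zero out densities below the user-set value before compositing.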
Image processing apparatus, image capturing apparatus, control method, and storage medium
An image processing apparatus includes obtaining distance information and subject information with respect to a captured image, specifying, based on the obtained distance information, a first subject region in which images of subjects that exist in a predetermined distance range are distributed in the captured image, specifying, based on the subject information and independently of the obtained distance information, a second subject region that includes a region from which a predetermined subject has been detected in the captured image, and determining a target region for which the image processing is executed with reference to at least one of the first subject region and the second subject region. A subject region varies depending on at least one of an image capture condition of the captured image and a state of the captured image.
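The target-region determination can be sketched as combining the two masks, with the combination rule depending on a capture condition. The specific policy below (union in low light, where the distance map is noisy; intersection otherwise) is an assumed example of a condition-dependent choice, not the patent's rule.

```python
def target_region(distance_mask, detection_mask, low_light=False):
    """Combine the distance-derived first subject region and the
    detector-derived second subject region into one target region."""
    op = (lambda a, b: a or b) if low_light else (lambda a, b: a and b)
    return [[op(a, b) for a, b in zip(ra, rb)]
            for ra, rb in zip(distance_mask, detection_mask)]

d = [[1, 1, 0], [0, 1, 0]]   # first region: subjects in the distance range
s = [[0, 1, 0], [0, 1, 1]]   # second region: detected predetermined subject
region = target_region(d, s)
```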
AUGMENTED REALITY CONTENT RENDERING VIA ALBEDO MODELS, SYSTEMS AND METHODS
Methods for rendering augmented reality (AR) content are presented. An a priori defined 3D albedo model of an object is leveraged to adjust AR content so that it appears as a natural part of a scene. Disclosed devices recognize a known object having a corresponding albedo model. The devices compare the observed object to the known albedo model to determine a content transformation referred to as an estimated shading (environmental shading) model. The transformation is then applied to the AR content to generate adjusted content, which is then rendered and presented for consumption by a user.
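The compare-then-apply idea reduces, per channel, to dividing the observed colour of the known object by its a priori albedo to recover the environmental shading, then multiplying the AR content by that shading. A sketch under that simple Lambertian assumption:

```python
def estimate_shading(observed, albedo, eps=1e-6):
    """Estimated environmental shading of a known object: the observed
    colour divided channel-wise by its a priori albedo."""
    return tuple(o / max(a, eps) for o, a in zip(observed, albedo))

def shade_content(content, shading):
    """Apply the estimated shading model to AR content so it matches
    the scene's illumination."""
    return tuple(c * s for c, s in zip(content, shading))

# the known object looks half as bright as its albedo: shading is 0.5
shading = estimate_shading((0.25, 0.20, 0.10), (0.50, 0.40, 0.20))
adjusted = shade_content((0.8, 0.8, 0.8), shading)
```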