Patent classifications
G06T15/60
SPATIOTEMPORAL SELF-GUIDED SHADOW DENOISING IN RAY-TRACING APPLICATIONS
In examples, a filter used to denoise shadows for a pixel(s) may be adapted based at least on variance in temporally accumulated ray-traced samples. A range of filter values for a spatiotemporal filter may be defined based on the variance and used to exclude temporal ray-traced samples that are outside of the range. Data used to compute a first moment of a distribution used to compute variance may be used to compute a second moment of the distribution. For binary signals, such as visibility, the first moment (e.g., accumulated mean) may be equivalent to a second moment (e.g., the mean of the squared samples). In further respects, spatial filtering of a pixel(s) may be skipped based on comparing the mean of variance of the pixel(s) to one or more thresholds and based on the accumulated number of values for the pixel(s).
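The variance-driven filtering described above can be sketched as follows. This is an illustrative outline, not the patented implementation; the accumulation cap, the sigma multiplier `k`, and the skip thresholds are assumed parameters. The key identity it exploits is that for a binary signal x in {0, 1}, x² = x, so the second moment equals the first and no separate squared accumulation is needed.

```python
import math

def temporal_accumulate(prev_mean, prev_count, sample, max_count=32):
    """Fold one binary visibility sample into a running (clamped) mean."""
    count = min(prev_count + 1, max_count)
    mean = prev_mean + (sample - prev_mean) / count
    return mean, count

def binary_variance(mean):
    # For x in {0, 1}: E[x^2] == E[x], so Var = m2 - m1^2 = m1 - m1^2.
    return mean - mean * mean

def filter_range(mean, k=2.0):
    """Range of acceptable temporal samples: mean +/- k standard deviations.
    Samples outside this range would be excluded from the filter."""
    sigma = math.sqrt(binary_variance(mean))
    return mean - k * sigma, mean + k * sigma

def should_skip_spatial(mean_variance, count, var_threshold=1e-4, min_count=16):
    # Skip the spatial pass when enough history has accumulated and the
    # local variance is negligible (the signal has converged).
    return count >= min_count and mean_variance < var_threshold
```

A denoiser would run `temporal_accumulate` per pixel each frame, derive the clamp range from `filter_range`, and consult `should_skip_spatial` before paying for the spatial filter.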
PROCESSING METHOD AND APPARATUS WITH AUGMENTED REALITY
A method and apparatus for processing augmented reality (AR) are disclosed. The method includes determining a compensation parameter to compensate for light attenuation of visual information caused by a display area of an AR device as the visual information corresponding to a target scene is displayed through the display area, generating a background image without the light attenuation by capturing the target scene using a camera of the AR device, generating a compensation image by reducing brightness of the background image using the compensation parameter, generating a virtual object image to be overlaid on the target scene, generating a display image by synthesizing the compensation image and the virtual object image, and displaying the display image in the display area.
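The compositing pipeline in the abstract can be sketched as below. This is a minimal assumed model: `comp_param` stands in for the compensation parameter, and the virtual object is overlaid with a simple boolean mask; the real device would derive the parameter from measured display attenuation.

```python
import numpy as np

def build_display_image(background, virtual_rgb, virtual_mask, comp_param=0.3):
    """Synthesize the image shown on the see-through display.

    background   -- camera capture of the target scene (H, W, 3), no attenuation
    virtual_rgb  -- rendered virtual object image (H, W, 3)
    virtual_mask -- boolean (H, W) mask where the virtual object is drawn
    comp_param   -- assumed compensation parameter scaling down brightness
    """
    # Compensation image: the background with reduced brightness, so that
    # displayed light plus the attenuated real scene reads as uniform.
    compensation = background * comp_param
    # Synthesize: virtual object where masked, compensation elsewhere.
    return np.where(virtual_mask[..., None], virtual_rgb, compensation)
```

`virtual_mask[..., None]` broadcasts the 2-D mask across the RGB channels so `np.where` selects per pixel.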
EFFICIENT STORAGE, REAL-TIME RENDERING, AND DELIVERY OF COMPLEX GEOMETRIC MODELS AND TEXTURES OVER THE INTERNET
A method for real-time compositing, rendering and delivery of complex geometric models and textures includes storing a plurality of three-dimensional models of at least two sub-parts of a whole three-dimensional object, storing a plurality of image textures for each of the plurality of three-dimensional models, receiving instructions from a user, the instructions including a selection of at least two of the plurality of three-dimensional models, each of the at least two of the plurality of three-dimensional models being one of the at least two sub-parts of the whole three-dimensional object, and generating the whole three-dimensional object including at least one of the plurality of image textures for each of the at least two of the plurality of three-dimensional models applied according to the instructions to the at least two of the plurality of three-dimensional models.
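The storage and selection scheme can be sketched as a small catalog of sub-part meshes, each with its own set of candidate textures; the user's instructions pick a sub-part and a texture per slot. The catalog entries and names here are hypothetical examples, not from the patent.

```python
# Hypothetical catalog: each sub-part of the whole object has a mesh
# and a list of textures that may be applied to it.
MODELS = {
    "frame": {"mesh": "frame.obj", "textures": ["oak.png", "walnut.png"]},
    "seat":  {"mesh": "seat.obj",  "textures": ["leather.png", "fabric.png"]},
}

def compose_object(selection):
    """selection: {part_name: texture_name} taken from user instructions.

    Returns the composed object as a list of (mesh, texture) bindings,
    validating that each requested texture is stored for that sub-part.
    """
    composed = []
    for part, texture in selection.items():
        entry = MODELS[part]
        if texture not in entry["textures"]:
            raise ValueError(f"{texture!r} is not stored for sub-part {part!r}")
        composed.append({"part": part, "mesh": entry["mesh"], "texture": texture})
    return composed
```

Keeping meshes and textures as separate stored assets lets the server deliver only the selected combination rather than every pre-baked variant.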
JSON-Based Translation of Software Programming Language Into an Accessible Drag and Drop Web-based Application for Content Creation in Spatial Computing
A method of creating an animation file for a virtual reality, augmented reality, extended reality or mixed reality story file on a computing device includes selecting an animation start; selecting a number of loops or repetitions to be performed for the animation file; selecting a rotation at origin and a position of origin for the animation file; selecting a rotation at destination and a position of destination for the animation file; and selecting a body type for the animation file. The method further comprises generating JavaScript Object Notation (JSON) parameters for the animation file based at least in part on the selected animation start, the selected number of loops or repetitions, the rotation at origin, the position of origin, the position of destination, the rotation at destination or the selected body type for the animation file; and storing the generated JSON parameters for the animation file in a database.
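The JSON-generation step can be sketched as follows. The key names (`animationStart`, `bodyType`, etc.) are assumptions chosen for illustration; the patent does not specify a schema.

```python
import json

def build_animation_json(start, loops, rot_origin, pos_origin,
                         rot_dest, pos_dest, body_type):
    """Serialize the user's selections into JSON parameters for storage.

    Key names are hypothetical; only the grouping of origin/destination
    rotation and position follows the method described above.
    """
    params = {
        "animationStart": start,
        "loops": loops,
        "origin": {"rotation": rot_origin, "position": pos_origin},
        "destination": {"rotation": rot_dest, "position": pos_dest},
        "bodyType": body_type,
    }
    return json.dumps(params)
```

The resulting string is what would be persisted to the database and later parsed back by the drag-and-drop front end.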
IMAGE PROCESSING DEVICE, IMAGING APPARATUS, IMAGE PROCESSING METHOD, AND RECORDING MEDIUM
An image processing device includes a shape acquisition unit configured to acquire shape information of a subject, a first region detection unit configured to detect a first region generating a shadow of the subject, a second region detection unit configured to detect a second region onto which the shadow is projected, a virtual light source direction setting unit configured to determine a direction of a virtual light source in which the first region projects the shadow onto the second region on the basis of the shape information, the first region, and the second region, and an image generation unit configured to generate an image with the shadow on the basis of the shape information and the determined direction of the virtual light source.
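The geometric core of the method above can be sketched as: choose a virtual light direction along the axis from the shadow-casting region toward the receiving region, then project the caster's shape points along that direction onto the receiver. This is a simplified flat-ground sketch under assumed conditions (the receiver is the plane y = ground_y and the light is directional), not the device's actual algorithm.

```python
import numpy as np

def virtual_light_direction(caster_center, receiver_center):
    # A directional light aimed along the caster -> receiver axis makes
    # the first region cast its shadow onto the second region.
    d = np.asarray(receiver_center, float) - np.asarray(caster_center, float)
    return d / np.linalg.norm(d)

def project_shadow(points, light_dir, ground_y=0.0):
    """Project 3-D shape points along light_dir onto the plane y = ground_y.

    Assumes light_dir has a nonzero y component (the light is not
    parallel to the receiving plane).
    """
    points = np.asarray(points, float)
    t = (ground_y - points[:, 1]) / light_dir[1]   # ray parameter per point
    return points + t[:, None] * light_dir          # shadow footprint
```

An image generation stage would then darken the pixels covered by the projected footprint, consistent with the acquired shape information.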