G06T2210/36

Adaptive model updates for dynamic and static scenes

In one embodiment, a computing system may update a first 3D model of a region of an environment based on comparisons between the first 3D model and first depth measurements of the region generated during a first time period. The computing system may determine that the region is static by comparing the first 3D model to second depth measurements of the region generated during a second time period. In response to determining that the region is static, the computing system may detect whether the region changed after the second time period based on comparisons between a second 3D model of the region and third depth measurements of the region generated after the second time period, the second 3D model having a lower resolution than the first 3D model. In response to detecting a change in the region, the computing system may update the first 3D model of the region.
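
A minimal sketch of this update loop, assuming depth arrives as numpy arrays and treating each "model" as a stored depth image per region; the resolution factor, the static threshold, and the running-average update are illustrative assumptions, not the patent's method.

```python
import numpy as np

STATIC_THRESHOLD = 0.01   # mean depth delta (meters) below which a region is "static"
LOW_RES_FACTOR = 4        # second (low-res) model is 4x coarser per axis

def downsample(depth, factor=LOW_RES_FACTOR):
    h, w = depth.shape
    return depth[:h - h % factor, :w - w % factor] \
        .reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

class RegionModel:
    def __init__(self, first_depth):
        self.high_res = first_depth.copy()          # first 3D model (full resolution)
        self.low_res = downsample(first_depth)      # second 3D model (lower resolution)
        self.static = False

    def update(self, depth):
        if not self.static:
            # Dynamic phase: compare new measurements against the high-res model
            # and fold them in (here: a simple running average).
            self.high_res = 0.5 * (self.high_res + depth)
            self.low_res = downsample(self.high_res)
            # Declare the region static once measurements stop disagreeing.
            if np.abs(depth - self.high_res).mean() < STATIC_THRESHOLD:
                self.static = True
        else:
            # Static phase: only the cheap low-res comparison runs per frame.
            delta = np.abs(downsample(depth) - self.low_res).mean()
            if delta > STATIC_THRESHOLD:
                # Change detected: fall back to updating the high-res model.
                self.static = False
                self.update(depth)
```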

Method for Generating a Hierarchical Data Structure, Hierarchical Data Structure, and Method for Streaming Three-Dimensional Objects
20230042578 · 2023-02-09

The present invention relates to a method for generating a hierarchical data structure of a three-dimensional object, to such a hierarchical data structure, and to a method for streaming three-dimensional objects. In the method according to the invention, a hierarchical data structure is generated from a three-dimensional object, which comprises, and may consist of, three-dimensional object data and a texture mapped onto the object data, by first converting the three-dimensional object data into multiple detail levels and then segmenting the detail levels, with the texture mapped onto each segment at a corresponding resolution.
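
The generation step could look roughly like the following sketch, where the stride-based vertex subsampling, the grid segmentation, and the texture-size halving per level are hypothetical stand-ins for the invention's actual procedures.

```python
from dataclasses import dataclass, field

@dataclass
class Segment:
    vertices: list            # geometry of this spatial segment
    texture_size: int         # texture resolution assigned to this segment

@dataclass
class DetailLevel:
    level: int
    segments: list = field(default_factory=list)

def build_hierarchy(mesh_vertices, base_texture_size=2048, num_levels=4, grid=2):
    """Return a list of DetailLevel objects, coarsest first."""
    levels = []
    for lod in range(num_levels):
        # Convert the object data into this detail level (placeholder:
        # keep every k-th vertex; real systems would use mesh decimation).
        verts = mesh_vertices[:: 2 ** (num_levels - 1 - lod)]
        # Segment the level spatially (placeholder: equal-sized chunks).
        chunk = max(1, len(verts) // (grid * grid))
        segs = [verts[i:i + chunk] for i in range(0, len(verts), chunk)]
        # Map the texture onto each segment at a corresponding resolution:
        # coarser levels get proportionally smaller textures.
        tex = base_texture_size >> (num_levels - 1 - lod)
        levels.append(DetailLevel(lod, [Segment(s, tex) for s in segs]))
    return levels
```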

Temporal Approximation Of Trilinear Filtering
20230039787 · 2023-02-09

In one embodiment, a method includes receiving instructions to render a snapshot of a scene for a video, where the snapshot is to be displayed using a sequence of N frames, computing a mipmap-level determining factor for a texture appearing in the scene based on a scale of the texture on a pixel grid, selecting a mipmap level of the texture for each of the N frames based on the mipmap-level determining factor, where the mipmap levels selected for the N frames are non-uniform and temporally approximate the mipmap-level determining factor, rendering each of the N frames by sampling the mipmap level of the texture selected for that frame, and displaying the rendered N frames sequentially to represent the snapshot of the scene.
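
The selection step lends itself to a short sketch: given a fractional mipmap-level determining factor, dither between the two neighboring integer levels across the N frames so that their temporal average approximates the factor. The error-accumulation scheme below is an illustrative choice, not necessarily the patented one.

```python
import math

def select_mip_levels(lam: float, n_frames: int) -> list[int]:
    """Pick one integer mip level per frame, non-uniform across frames,
    so the sequence temporally approximates the fractional factor lam."""
    lo, frac = math.floor(lam), lam - math.floor(lam)
    levels, err = [], 0.0
    for _ in range(n_frames):
        err += frac
        if err >= 0.5:            # dither up to the next mip level
            levels.append(lo + 1)
            err -= 1.0
        else:
            levels.append(lo)
    return levels

# Example: lam = 2.3 over 10 frames -> mostly level 2, with level 3 mixed
# in so the temporal average is 2.3 instead of snapping to level 2.
print(select_mip_levels(2.3, 10))
```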

CONTINUOUS AND DYNAMIC LEVEL OF DETAIL FOR EFFICIENT POINT CLOUD OBJECT RENDERING
20180012400 · 2018-01-11

Rendering real-time three-dimensional computer models is a resource-intensive task, even more so for point cloud objects. Level of detail is traditionally handled using a small number of fixed-size independent models. A new system for rendering point cloud objects with efficient dynamic level of detail is presented. Several novel point cloud dynamic level of detail techniques are described that are fairly simple to implement and significantly more efficient in terms of managing rendering load, data reduction, and memory consumption. These techniques can be employed to optimize or otherwise improve the efficiency of rendering point cloud objects.
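
One way such a continuous level of detail could work, sketched under the assumption of a pre-shuffled point cloud (so any prefix is a uniform subsample) and an illustrative inverse-square falloff:

```python
import numpy as np

def visible_point_count(num_points: int, distance: float,
                        full_detail_dist: float = 2.0) -> int:
    # Fraction of points decays smoothly with distance; at or below
    # full_detail_dist the whole cloud is drawn.
    fraction = min(1.0, (full_detail_dist / max(distance, 1e-6)) ** 2)
    return max(1, int(num_points * fraction))

# Shuffling the points once up front means any prefix is a uniform
# subsample, so the LOD can vary per frame without re-sorting the data.
rng = np.random.default_rng(0)
points = rng.random((100_000, 3))
points = points[rng.permutation(len(points))]

for d in (1.0, 5.0, 20.0):
    n = visible_point_count(len(points), d)
    frame_points = points[:n]    # render only this prefix
    print(d, n)
```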

Volumetric representation of digital objects from depth renderings

An image processing system includes a computing platform having processing hardware, a display, and a system memory storing a software code. The processing hardware executes the software code to receive a digital object, surround the digital object with virtual cameras oriented toward the digital object, render, using each one of the virtual cameras, a depth map identifying a distance of that one of the virtual cameras from the digital object, and generate, using the depth map, a volumetric perspective of the digital object from a perspective of that one of the virtual cameras, resulting in multiple volumetric perspectives of the digital object. The processing hardware further executes the software code to merge the multiple volumetric perspectives of the digital object to form a volumetric representation of the digital object, and to convert the volumetric representation of the digital object to a renderable form.
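
The merge step might look like the following sketch, which assumes six axis-aligned orthographic virtual cameras and a binary occupancy grid, and fuses the per-camera volumetric perspectives by intersection (a visual hull); the camera layout and the fusion rule are illustrative simplifications.

```python
import numpy as np

def depth_map_along_axis(occupied, axis, flip):
    # Depth = index of the first occupied voxel seen along `axis`.
    occ = np.flip(occupied, axis=axis) if flip else occupied
    hit = occ.argmax(axis=axis)                    # first True along each ray
    any_hit = occ.any(axis=axis)
    return np.where(any_hit, hit, occupied.shape[axis])  # misses -> max depth

def volumetric_perspective(depth, shape, axis, flip):
    # Everything at or behind the first hit (from this camera) may be solid.
    idx = np.arange(shape[axis])
    idx = idx.reshape([-1 if a == axis else 1 for a in range(3)])
    vol = idx >= np.expand_dims(depth, axis)
    return np.flip(vol, axis=axis) if flip else vol

def fuse(object_voxels):
    # Merge the six volumetric perspectives by intersection (visual hull).
    vols = [volumetric_perspective(depth_map_along_axis(object_voxels, a, f),
                                   object_voxels.shape, a, f)
            for a in range(3) for f in (False, True)]
    return np.logical_and.reduce(vols)

# Toy digital object: a solid sphere on a 32^3 grid.
g = np.indices((32, 32, 32)) - 16
sphere = (g ** 2).sum(axis=0) < 10 ** 2
print(fuse(sphere).sum(), sphere.sum())   # the hull contains the sphere
```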

Depth based foveated rendering for display systems

Methods and systems for depth-based foveated rendering in display systems are disclosed. The display system may be an augmented reality display system configured to provide virtual content on a plurality of depth planes using different wavefront divergence. Some embodiments include determining a fixation point of a user's eyes. Location information associated with a first virtual object to be presented to the user via a display device is obtained. A resolution-modifying parameter of the first virtual object is obtained. A particular resolution at which to render the first virtual object is identified based on the location information and the resolution-modifying parameter of the first virtual object. The particular resolution is based on a resolution distribution specifying resolutions for corresponding distances from the fixation point. The first virtual object rendered at the identified resolution is presented to the user via the display system.
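
A minimal sketch of identifying a render resolution from such a resolution distribution; the exponential falloff, the constants, and the way the resolution-modifying parameter is applied are assumptions for illustration.

```python
import math

def render_resolution(object_pos, fixation_point,
                      resolution_modifier: float = 1.0,
                      full_res: float = 1.0, min_res: float = 0.25,
                      falloff: float = 0.15) -> float:
    """Return a resolution scale in [min_res, full_res]."""
    d = math.dist(object_pos, fixation_point)     # 3D distance, so depth counts too
    scale = full_res * math.exp(-falloff * d)     # resolution distribution over distance
    scale *= resolution_modifier                  # per-object resolution-modifying parameter
    return max(min_res, min(full_res, scale))

# An object near the fixation point renders at full resolution; one far
# away (in angle or depth) is rendered coarser.
print(render_resolution((0, 0, 1), (0, 0, 1)))        # ~1.0
print(render_resolution((3, 2, 5), (0, 0, 1)))        # reduced
```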

Virtual camera for 3-d modeling applications

A user interface to a virtual camera for a 3-D rendering application provides various features. A rendering engine can continuously refine the image being displayed through the virtual camera, and the user interface can contain an element for indicating capture of the image as currently displayed, which causes saving of the currently displayed image. Autofocus (AF) and autoexposure (AE) reticles can allow selection of objects in a 3-D scene, from which an image will be rendered, for each of AE and AF. A focal distance can be determined by identifying a 3-D object visible at a pixel overlapped by the AF reticle, and a current viewpoint. The AF reticle can be hidden in response to a depth of field selector being set to infinite depth of field. The AF and AE reticles can be linked and unlinked, allowing different 3-D objects for each of AF and AE.
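
The autofocus step described above could be sketched as a scene ray cast: find the 3D object visible at the pixel under the AF reticle and take its distance from the current viewpoint as the focal distance. The sphere intersection below is a hypothetical stand-in for the renderer's actual scene query.

```python
import numpy as np

def focal_distance(viewpoint, ray_dir, spheres):
    """spheres: list of (center, radius). Returns distance to the nearest
    object hit by the ray through the AF reticle's pixel, or None."""
    ray_dir = np.asarray(ray_dir) / np.linalg.norm(ray_dir)
    best = None
    for center, radius in spheres:
        oc = np.asarray(viewpoint) - np.asarray(center)
        b = np.dot(oc, ray_dir)
        disc = b * b - (np.dot(oc, oc) - radius * radius)
        if disc >= 0:
            t = -b - np.sqrt(disc)               # nearest intersection along the ray
            if t > 0 and (best is None or t < best):
                best = t
    return best                                   # feeds the AF depth-of-field setting

# Ray from the current viewpoint through the pixel overlapped by the AF reticle:
print(focal_distance((0, 0, 0), (0, 0, 1), [((0, 0, 5), 1.0), ((0, 0, 12), 2.0)]))
# -> 4.0 (the front sphere wins), used as the camera's focal distance
```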

Cost-driven framework for progressive compression of textured meshes
11568575 · 2023-01-31

Techniques of compressing level of detail (LOD) data involve sharing a texture image LOD among different mesh LODs for single-rate encoding. That is, a first texture image LOD corresponding to a first mesh LOD may be derived by refining a second texture image LOD corresponding to a second mesh LOD. This sharing is possible when texture atlases of LOD meshes are compatible.
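
A minimal sketch of the sharing idea as a data structure: several mesh LODs reference the same texture-image LOD, and a finer texture LOD is stored as a refinement of a coarser one rather than encoded from scratch. Class names, fields, and byte counts are illustrative, not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class TextureLOD:
    level: int
    base: "TextureLOD | None"    # coarser texture this one refines (None = single-rate base)
    refinement_bytes: int        # size of this refinement layer only

@dataclass
class MeshLOD:
    level: int
    texture: TextureLOD          # possibly shared with other mesh LODs

# Two mesh LODs with compatible texture atlases share one texture LOD;
# only the finest mesh LOD pays for an extra texture refinement.
tex0 = TextureLOD(0, None, refinement_bytes=40_000)
tex1 = TextureLOD(1, tex0, refinement_bytes=15_000)   # derived by refining tex0
lods = [MeshLOD(0, tex0), MeshLOD(1, tex0), MeshLOD(2, tex1)]

def stream_cost(mesh_lod: MeshLOD) -> int:
    # Texture bytes needed up to this LOD (walk the refinement chain).
    cost, t = 0, mesh_lod.texture
    while t is not None:
        cost += t.refinement_bytes
        t = t.base
    return cost

print([stream_cost(l) for l in lods])   # [40000, 40000, 55000]
```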

Anisotropic texture filtering for sampling points in screen space
11715243 · 2023-08-01

Anisotropic texture filtering applies a texture at a sampling point in screen space. Texture-filter parameters are calculated to configure a filter that performs the filtering of the texture for the sampling point. The texture for the sampling point is filtered using a filtering kernel having a footprint in texture space determined by the texture-filter parameters. The texture-filter parameters are calculated by generating a first pair and a second pair of screen-space basis vectors that are rotated relative to each other. First and second pairs of texture-space basis vectors are calculated that correspond to the first and second pairs of screen-space basis vectors transformed to texture space under a local approximation of a mapping between screen space and texture space. An angular displacement is determined between a selected pair of the first and second pairs of screen-space basis vectors and the screen-space principal axes of the local approximation of the mapping, which indicate the maximum and minimum scale factors of the mapping. The angular displacement and the selected pair of screen-space basis vectors are used to generate texture-space principal axes comprising a major axis associated with the maximum scale factor of the mapping and a minor axis associated with the minimum scale factor of the mapping.
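
A worked numpy sketch of the two-basis-pair construction: from the texture-space images of an axis-aligned pair and a 45-degree-rotated pair, one can recover the angular displacement of the screen-space principal axes, and from it the texture-space major and minor footprint axes. The notation (`J`, `theta`) and the choice of a 45-degree second pair are assumptions for illustration.

```python
import numpy as np

def anisotropic_axes(J):
    # First pair of screen-space basis vectors, and a second pair rotated 45 deg.
    e0, e1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
    d0, d1 = (e0 + e1) / np.sqrt(2), (e0 - e1) / np.sqrt(2)

    # Their texture-space counterparts under the local approximation J of the
    # screen-to-texture mapping give the entries of J^T J: E, G, and F.
    E, G = np.dot(J @ e0, J @ e0), np.dot(J @ e1, J @ e1)
    F = (np.dot(J @ d0, J @ d0) - np.dot(J @ d1, J @ d1)) / 2.0

    # Angular displacement between the first basis pair and the screen-space
    # principal axes (directions of maximum/minimum scale of the mapping):
    # |J u(theta)|^2 is extremized where 2*theta = atan2(2F, E - G).
    theta = 0.5 * np.arctan2(2.0 * F, E - G)

    # Texture-space principal axes: major (max scale) and minor (min scale)
    # footprint axes for the filtering kernel.
    u_major = np.array([np.cos(theta), np.sin(theta)])
    u_minor = np.array([-np.sin(theta), np.cos(theta)])
    return J @ u_major, J @ u_minor

# Example: a mapping that rotates by 30 degrees in screen space, then
# stretches one texture axis by 4.
c, s = np.cos(np.pi / 6), np.sin(np.pi / 6)
R = np.array([[c, -s], [s, c]])
J = np.diag([4.0, 1.0]) @ R
major, minor = anisotropic_axes(J)
print(np.linalg.norm(major), np.linalg.norm(minor))   # ~4.0 and ~1.0
```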

DATA-DRIVEN PHYSICS-BASED MODELS WITH IMPLICIT ACTUATIONS

One embodiment of the present invention sets forth a technique for generating actuation values based on a target shape such that the actuation values cause a simulator to output a simulated soft body that matches the target shape. The technique includes inputting a latent code that represents a target shape and a point on a geometric mesh into a first machine learning model. The technique further includes generating, via execution of the first machine learning model, one or more simulator control values that specify a deformation of the geometric mesh, where each of the simulator control values is based on the latent code and corresponds to the input point, and generating, via execution of the simulator, a simulated soft body based on the one or more simulator control values and the geometric mesh. The technique further includes causing the simulated soft body to be outputted to a computing device.
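
A minimal numpy sketch of the first machine learning model: an MLP that maps a latent code for the target shape, concatenated with a point on the geometric mesh, to a simulator control (actuation) value for that point. The sizes, the activation, and the random weights standing in for trained parameters are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM, POINT_DIM, HIDDEN, ACT_DIM = 16, 3, 64, 1

# Randomly initialized weights stand in for a trained model.
W1 = rng.normal(0, 0.1, (LATENT_DIM + POINT_DIM, HIDDEN))
W2 = rng.normal(0, 0.1, (HIDDEN, ACT_DIM))

def actuation(latent_code, mesh_point):
    """Simulator control value for one point, conditioned on the target shape."""
    x = np.concatenate([latent_code, mesh_point])   # latent code + input point
    h = np.tanh(x @ W1)
    return h @ W2                                   # actuation for this point

# Query the implicit actuation field at every mesh point, then hand the
# values to the simulator to produce the simulated soft body.
latent = rng.normal(size=LATENT_DIM)                # encodes the target shape
mesh_points = rng.random((100, POINT_DIM))
controls = np.stack([actuation(latent, p) for p in mesh_points])
print(controls.shape)                               # (100, 1) control values
```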