G06T2210/21

Physics engine with collision detection neighbor welding

A computing device is provided, comprising a processor configured to execute a physics engine. The physics engine is configured to, during narrowphase collision detection of a collision detection phase, identify a set of convex polyhedron pairs, each including a first convex polyhedron from a first rigid body and a second convex polyhedron from a second rigid body. The physics engine is further configured to, for each convex polyhedron pair, determine a separating plane. The physics engine is further configured to perform neighbor welding on pair combinations of the convex polyhedron pairs during the narrowphase collision detection to thereby modify the separating planes of at least a subset of the convex polyhedron pairs. The physics engine is further configured to determine collision manifolds for the convex polyhedron pairs, including for the subset of convex polyhedron pairs having the modified separating planes.
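The per-pair separating-plane step can be illustrated with a minimal separating-axis (SAT-style) search over candidate face normals. This is a hedged sketch in Python: the function and parameter names are hypothetical, and the neighbor-welding modification of the planes described in the abstract is not reproduced here.

```python
def separating_plane(verts_a, verts_b, candidate_normals):
    """SAT-style search: for each candidate normal, compare the extent of
    hull A against the extent of hull B along that axis and keep the axis
    with the largest gap. A positive gap means the plane separates the
    two convex polyhedra; the least-negative gap is the best candidate
    separating plane for a touching or overlapping pair."""
    def dot(u, v):
        return sum(ui * vi for ui, vi in zip(u, v))

    best_axis, best_gap = None, float("-inf")
    for n in candidate_normals:
        a_max = max(dot(v, n) for v in verts_a)   # farthest extent of A along n
        b_min = min(dot(v, n) for v in verts_b)   # nearest extent of B along n
        gap = b_min - a_max
        if gap > best_gap:
            best_axis, best_gap = n, gap
    return best_axis, best_gap
```

For two unit cubes offset by 2 along x, the search selects the x axis with a gap of 1.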

GRAPHICS PROCESSING SYSTEMS
20230043630 · 2023-02-09

A method of operating a graphics processor when rendering a frame representing a view of a scene using a ray tracing process in which part of the processing for a ray tracing operation is offloaded to a texture mapper unit of the graphics processor. Thus, when the graphics processor's execution unit is executing a program to perform a ray tracing operation, the execution unit is able to message the texture mapper unit to perform one or more processing operations for the ray tracing operation. This operation can be triggered by including an appropriate instruction to message the texture mapper unit within the ray tracing program.

System and method to convert two-dimensional video into three-dimensional extended reality content

System and method are provided to detect objects in a scene frame of two-dimensional (2D) video using image processing and determine object image coordinates of the detected objects in the scene frame. The system and method deploy a virtual camera in a three-dimensional (3D) environment to create a virtual image frame in the environment and generate a floor in the environment in a plane below the virtual camera. The system and method adjust the virtual camera to change its height and angle relative to the virtual image frame. The system and method generate an extended reality (XR) coordinate location relative to the floor for placing the detected object in the environment. The XR coordinate location is the point on the floor where a ray cast from the virtual camera through the virtual image frame, at the point that translates to the object image coordinates, intersects the floor.
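The final step, finding where the camera ray meets the floor, reduces to a standard ray–plane intersection. A minimal sketch (names are hypothetical; assumes a y-up coordinate system with the floor at y == floor_y):

```python
def floor_intersection(cam, frame_pt, floor_y=0.0):
    """Cast a ray from the virtual camera position `cam` through a point
    `frame_pt` on the virtual image frame and return the point where it
    meets the floor plane y == floor_y, or None if it never does."""
    d = tuple(f - c for f, c in zip(frame_pt, cam))
    if d[1] >= 0:                        # ray points level or upward: no floor hit
        return None
    t = (floor_y - cam[1]) / d[1]        # parameter where the ray reaches floor_y
    return tuple(c + t * di for c, di in zip(cam, d))
```

A camera at height 2 looking through a frame point one unit lower and one unit ahead lands on the floor two units ahead.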

THREE-DIMENSIONAL POINT-IN-POLYGON OPERATION TO FACILITATE VISUALIZING DATA POINTS BOUNDED BY 3D GEOMETRIC REGIONS
20180012405 · 2018-01-11

A system, a method and instructions embodied on a non-transitory computer-readable storage medium that solve a 3D point-in-polygon (PIP) problem are presented. This system projects polygons that comprise a set of polyhedra onto projected polygons in a reference plane. Next, the system projects a data point onto the reference plane, and performs a 2D PIP operation in the reference plane to determine which projected polygons the projected data point falls into. For each projected polygon the projected data point falls into, the system performs a 3D crossing number operation by counting intersections between a ray projected from the corresponding data point in a direction orthogonal to the reference plane and polyhedral faces corresponding to projected polygons, to identify polyhedra the data point falls into. The system then generates a visual representation of the set of polyhedra, wherein each polyhedron is affected by data points that fall into it.
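The 2D stage of this pipeline is a conventional crossing-number point-in-polygon test: cast a ray from the point and count edge crossings, with an odd count meaning inside. A sketch, assuming polygons are given as vertex lists in the reference plane (names are illustrative):

```python
def point_in_polygon(pt, poly):
    """2D crossing-number test: count crossings of a ray cast in the +x
    direction from pt against each polygon edge; an odd crossing count
    means pt lies inside poly (a list of (x, y) vertices)."""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):                      # edge spans the ray's y
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:                           # crossing is to the right
                inside = not inside
    return inside
```

The 3D crossing-number stage in the abstract applies the same odd/even logic along a ray orthogonal to the reference plane, counting intersections with polyhedral faces instead of edges.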

3D MULTI-OBJECT SIMULATION
20230237210 · 2023-07-27

An occlusion metric is computed for a target object in a 3D multi-object simulation. The target object is represented in 3D space by a collision surface and a 3D bounding box. In a reference surface defined in 3D space, a bounding box projection is determined for the target object with respect to an ego location. The bounding box projection is used to determine a set of reference points in 3D space. For each reference point of the set of reference points, a corresponding ray is cast based on the ego location, and it is determined whether the ray is an object ray that intersects the collision surface of the target object. For each such object ray, it is determined whether the object ray is occluded. The occlusion metric conveys an extent to which the object rays are occluded.
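The metric itself can be framed as the fraction of object rays that are occluded. A minimal sketch, with the ray-surface intersection tests stood in by hypothetical callbacks (the abstract does not specify the geometry of the collision surface or the occluders):

```python
def occlusion_metric(ego, reference_points, hits_target, is_occluded):
    """For each reference point, cast a ray based on the ego location;
    rays that intersect the target's collision surface are object rays.
    Return the fraction of object rays that are occluded, or None when
    no ray reaches the target at all. `hits_target` and `is_occluded`
    are hypothetical callbacks standing in for ray intersection tests."""
    object_rays = [p for p in reference_points if hits_target(ego, p)]
    if not object_rays:
        return None                      # metric undefined: target not visible via any ray
    blocked = sum(1 for p in object_rays if is_occluded(ego, p))
    return blocked / len(object_rays)
```

With ten reference points, five object rays, and two of those occluded, the metric is 0.4.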

Intersection testing in a ray tracing system using ray coordinate system basis vectors

A method and an intersection testing module for performing intersection testing of a ray with a box in a ray tracing system. The ray and the box are defined in a 3D space using a space-coordinate system, and the ray is defined with a ray origin and a ray direction. A ray-coordinate system is used to perform intersection testing, wherein the ray-coordinate system has an origin at the ray origin, and the ray-coordinate system has three basis vectors. A first of the basis vectors is aligned with the ray direction. A second and a third of the basis vectors: (i) are both orthogonal to the first basis vector, (ii) are not parallel with each other, and (iii) have a zero as one component when expressed in the space-coordinate system. A result of performing the intersection testing is outputted for use by the ray tracing system.
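One construction satisfying the stated properties of the second and third basis vectors can be written directly from the ray direction. This is a sketch of such a construction, not necessarily the one used in the patent; it assumes the x component of the direction is non-zero (axes can be permuted first so the largest-magnitude component lands in x):

```python
def ray_basis(direction):
    """Build the three basis vectors of a ray-coordinate system: the
    first aligned with the ray direction, the second and third each
    orthogonal to the first, not parallel to each other, and each with
    a zero component when expressed in the space-coordinate system."""
    dx, dy, dz = direction
    b1 = (dx, dy, dz)           # aligned with the ray direction
    b2 = (-dy, dx, 0.0)         # b1 . b2 == 0; zero z component
    b3 = (-dz, 0.0, dx)         # b1 . b3 == 0; zero y component
    return b1, b2, b3
```

The zero components mean that projecting box vertices onto b2 and b3 costs one fewer multiply-add per vertex than a general change of basis, which is the kind of saving such a coordinate system enables.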

DYNAMIC FACIAL HAIR CAPTURE OF A SUBJECT

Embodiments of the present disclosure are directed to methods and systems for generating three-dimensional (3D) models and facial hair models representative of subjects (e.g., actors or actresses) using facial scanning technology. Methods accord to embodiments may be useful for performing facial capture on subjects with dense facial hair. Initial subject facial data, including facial frames and facial performance frames (e.g., images of the subject collected from a capture system) can be used to accurately predict the structure of the subject's face underneath their facial hair to produce a reference 3D facial shape of the subject. Likewise, image processing techniques can be used to identify facial hairs and generate a reference facial hair model. The reference 3D facial shape and reference facial hair mode can subsequently be used to generate performance 3D facial shapes and a performance facial hair model corresponding to a performance by the subject (e.g., reciting dialog).

INTERSECTION TESTING IN A RAY TRACING SYSTEM
20230023323 · 2023-01-26

A ray tracing unit and method for processing a ray in a ray tracing system perform intersection testing for the ray by performing one or more intersection testing iterations. Each intersection testing iteration includes: (i) traversing an acceleration structure to identify the nearest intersection of the ray with a primitive that has not been identified as the nearest intersection in any previous intersection testing iterations for the ray; and (ii) if, based on a characteristic of the primitive, a traverse shader is to be executed in respect of the identified intersection: executing the traverse shader in respect of the identified intersection; and if the execution of the traverse shader determines that the ray does not intersect the primitive at the identified intersection, causing another intersection testing iteration to be performed. When the intersection testing for the ray is complete, an output shader is executed to process a result of the intersection testing for the ray.
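The iteration structure described above can be sketched as a loop that keeps querying the acceleration structure for the next-nearest untested primitive until a hit is accepted or traversal is exhausted. All names below are hypothetical callbacks standing in for the hardware traversal and shader stages:

```python
def trace_ray(ray, find_next_nearest, needs_traverse_shader,
              traverse_shader, output_shader):
    """One intersection-testing loop: each iteration asks the
    acceleration structure for the nearest hit not yet rejected; if the
    primitive requires a traverse shader and that shader rejects the
    hit, the hit is marked rejected and another iteration runs."""
    rejected = set()
    while True:
        hit = find_next_nearest(ray, rejected)
        if hit is None:
            return output_shader(ray, None)      # ray missed all remaining primitives
        if needs_traverse_shader(hit) and not traverse_shader(ray, hit):
            rejected.add(hit)                    # shader says no intersection here
            continue                             # cause another testing iteration
        return output_shader(ray, hit)           # accepted nearest intersection
```

The rejected set is what guarantees each iteration finds a hit "not identified in any previous iteration".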

TECHNIQUES FOR INTRODUCING ORIENTED BOUNDING BOXES INTO BOUNDING VOLUME HIERARCHY

Described herein is a technique for modifying a bounding volume hierarchy. The technique includes combining preferred orientations of child nodes of a first bounding box node to generate a first preferred orientation; based on the first preferred orientation, converting one or more child nodes of the first bounding box node into one or more oriented bounding box nodes; combining preferred orientations of child nodes of a second bounding box node to generate a second preferred orientation; and based on the second preferred orientation, maintaining one or more children of the second bounding box node as non-oriented bounding box nodes.

3-D graphics rendering with implicit geometry

Aspects relate to tracing rays in 3-D scenes that comprise objects that are defined by or with implicit geometry. In an example, a trapping element defines a portion of 3-D space in which implicit geometry exist. When a ray is found to intersect a trapping element, a trapping element procedure is executed. The trapping element procedure may comprise marching a ray through a 3-D volume and evaluating a function that defines the implicit geometry for each current 3-D position of the ray. An intersection detected with the implicit geometry may be found concurrently with intersections for the same ray with explicitly-defined geometry, and data describing these intersections may be stored with the ray and resolved.
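One concrete form of "marching a ray and evaluating a function at each position" is sphere tracing against a signed-distance function, shown here as an illustrative stand-in for the trapping element procedure (the patent does not commit to this specific marching scheme):

```python
def march_ray(origin, direction, sdf, t_max=100.0, eps=1e-4):
    """Sphere-trace a ray: at each current 3-D position, evaluate the
    signed-distance function `sdf`; a value below eps means the implicit
    surface was reached, otherwise the value is a safe step size (no
    surface lies closer than that distance). Returns the hit distance
    along the ray, or None if no hit occurs within t_max."""
    t = 0.0
    while t < t_max:
        p = [origin[i] + t * direction[i] for i in range(3)]
        d = sdf(p)
        if d < eps:
            return t                 # implicit surface reached at distance t
        t += d                       # advance by the safe step
    return None                      # ray left the trapped volume unhit
```

The returned distance can then be compared against hits with explicitly-defined geometry for the same ray, matching the concurrent-resolution behavior the abstract describes.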