Patent classifications
G06T17/005
DEVICE AND METHOD FOR PROCESSING POINT CLOUD DATA
A method for processing point cloud data according to embodiments may encode and transmit point cloud data, and may receive and decode point cloud data.
Method and apparatus for point cloud coding
A method of point cloud geometry decoding in a point cloud decoder is provided. In the method, first signaling information is received from a coded bitstream for a point cloud that includes a set of points in a three-dimensional (3D) space. The first signaling information indicates partition information of the point cloud. Second signaling information is determined based on the first signaling information indicating a first value. The second signaling information is indicative of a partition mode of the set of points in the 3D space. Further, the partition mode of the set of points in the 3D space is determined based on the second signaling information, and the point cloud is subsequently reconstructed based on the partition mode.
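The two-level signaling described above can be sketched as a small parsing routine: a first syntax element gates whether a second, mode-carrying element is present at all. The names (`parse_partition_signaling`, the flag semantics) are illustrative assumptions, not taken from any codec specification.

```python
def parse_partition_signaling(bitstream):
    """Read partition information from a coded point cloud bitstream.

    bitstream is modeled as a list of already-entropy-decoded syntax
    elements, consumed front to back (a stand-in for a real bit reader).
    """
    first = bitstream.pop(0)      # first signaling info: partitioning enabled?
    if first == 1:                # the "first value" gates the second element
        mode = bitstream.pop(0)   # second signaling info: the partition mode
    else:
        mode = None               # no partitioning: treat the cloud as one slice
    return mode
```

The key design point mirrored here is conditional presence: the decoder only reads the partition mode when the first flag says it exists, which keeps the bitstream compact in the unpartitioned case.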
Quotidian scene reconstruction engine
A stored volumetric scene model of a real scene is generated from data defining digital images of a light field in a real scene containing different types of media. The digital images have been formed by a camera from opposingly directed poses and each digital image contains image data elements defined by stored data representing light field flux received by light sensing detectors in the camera. The digital images are processed by a scene reconstruction engine to form a digital volumetric scene model representing the real scene. The volumetric scene model (i) contains volumetric data elements defined by stored data representing one or more media characteristics and (ii) contains solid angle data elements defined by stored data representing the flux of the light field. Adjacent volumetric data elements form corridors, and at least one of the volumetric data elements in at least one corridor represents media that is partially light transmissive. The constructed digital volumetric scene model data is stored in a digital data memory for subsequent uses and applications.
Collaborative 3-D environment map for computer-assisted or autonomous driving vehicles
Disclosures herein may be directed to a method, technique, or apparatus for a computer-assisted or autonomous driving (CA/AD) vehicle that includes a system controller, disposed in a first CA/AD vehicle, to manage a collaborative three-dimensional (3-D) map of an environment around the first CA/AD vehicle. The system controller is to receive, from another CA/AD vehicle proximate to the first CA/AD vehicle, an indication of at least a portion of another 3-D map of an environment around both vehicles, and to incorporate that portion into the 3-D map of the environment of the first CA/AD vehicle managed by the system controller.
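A minimal sketch of the "incorporate a received map portion" step, assuming the collaborative map is keyed by voxel coordinate. The fill-gaps-only merge policy below is an invented simplification; a real system would fuse observations by confidence or timestamp.

```python
def incorporate(own_map, shared_portion):
    """Merge a map portion received from a nearby vehicle into this
    vehicle's own 3-D environment map.

    Both maps are dicts of voxel coordinate -> occupancy value.
    Received cells only fill voxels this vehicle has not observed,
    so local observations are never overwritten.
    """
    for voxel, occupancy in shared_portion.items():
        own_map.setdefault(voxel, occupancy)
    return own_map

# A vehicle's own map, plus a portion shared by a proximate vehicle:
own_map = {(0, 0, 0): 1.0}
shared = {(0, 0, 0): 0.2, (1, 0, 0): 0.9}
merged = incorporate(own_map, shared)
```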
Hybrid tile-based and element-based visualization of 3D models in interactive editing workflows
In example embodiments, techniques are provided for visualizing a 3D model in an interactive editing workflow. A user modifies one or more elements of the 3D model by inserting one or more new elements having geometry, changing the geometry of one or more existing elements, and/or deleting one or more existing elements having geometry. An updated view of the 3D model is then rendered to reflect the modification, in part by obtaining, for each new or changed element visible in the view, a polygon mesh that represents the geometry of that individual element; obtaining a set of tiles, each including a polygon mesh that represents the collective geometry of the elements intersecting the tile's volume; displaying the polygon mesh for each new or changed element; and displaying the set of tiles while hiding any deleted or changed elements therein.
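The hybrid display decision above can be sketched as a small planning step: new or changed elements get fresh per-element meshes, while their stale baked-in copies (and deleted elements) are masked out of the tile meshes. The data shapes and names here are illustrative assumptions.

```python
def plan_update(tiles, changed, deleted):
    """Decide what to draw for an updated view.

    tiles:   dict of tile id -> set of element ids baked into that
             tile's collective polygon mesh.
    changed: ids of new or geometry-changed elements.
    deleted: ids of deleted elements.

    Returns (elements drawn with individual meshes,
             per-tile sets of elements to hide inside the tile mesh).
    """
    draw_individually = set(changed)           # fresh meshes for new/changed
    hide = {tid: elems & (set(changed) | set(deleted))
            for tid, elems in tiles.items()}   # stale copies inside tiles
    return draw_individually, hide

tiles = {"t0": {"a", "b"}, "t1": {"c"}}
draw, hide = plan_update(tiles, changed={"b"}, deleted={"c"})
```

The point of the split is that only the edited elements pay the cost of per-element meshing, while unmodified geometry continues to render from the precomputed tiles.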
In-tree geometry quantization of point clouds
An example method includes receiving (502) a plurality of points that represent a point cloud; representing a position of each point in each dimension of a three-dimensional space as a sequence of bits (504), where the position of the point is encoded according to a tree data structure; partitioning (506) at least one of the sequences of bits into a first portion of bits and a second portion of bits; quantizing (508) each of the second portions of bits according to a quantization step size, where the quantization step size is determined according to an exponential function having a quantization parameter value as an input and the quantization step size as an output; and generating (510) a data structure representing the point cloud and including the quantized second portions of bits.
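The partition-and-quantize steps can be sketched for a single coordinate as follows. The particular exponential step-size formula (a `2^(qp/6)` form, echoing video-codec convention) is an illustrative assumption; the abstract only requires that the step size be an exponential function of the quantization parameter.

```python
def quantize_position(pos_bits, split, qp):
    """Quantize the low-order portion of one coordinate of a point.

    pos_bits: integer position along one axis, as encoded in the tree.
    split:    number of low-order bits forming the "second portion".
    qp:       quantization parameter; mapped to a step size by an
              exponential function (illustrative 2^(qp/6) form below).
    """
    step = 2.0 ** (qp / 6.0)               # exponential QP -> step size
    high = pos_bits >> split               # first portion: kept unquantized
    low = pos_bits & ((1 << split) - 1)    # second portion: to be quantized
    q_low = int(round(low / step))         # quantize by the step size
    return high, q_low

# Position 182 (0b1011_0110), low 4 bits quantized with qp = 12 (step 4):
high, q_low = quantize_position(0b1011_0110, split=4, qp=12)
```

Keeping the high-order bits exact preserves the tree structure of the geometry, while the quantized low-order bits absorb the rate/distortion trade-off.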
Method and system for image processing to determine blood flow
Embodiments include a system for determining cardiovascular information for a patient. The system may include at least one computer system configured to receive patient-specific data regarding a geometry of the patient's heart, and create a three-dimensional model representing at least a portion of the patient's heart based on the patient-specific data. The at least one computer system may be further configured to create a physics-based model relating to a blood flow characteristic of the patient's heart and determine a fractional flow reserve within the patient's heart based on the three-dimensional model and the physics-based model.
ACCELERATED PROCESSING VIA A PHYSICALLY BASED RENDERING ENGINE
One embodiment of a computer-implemented method for compiling a material graph into a set of instructions for execution within an execution unit includes receiving a first material graph having a plurality of nodes, wherein each node included in the plurality of nodes represents a different surface property of a material; parsing the material graph to generate an expression tree that includes one or more expressions for each node included in the plurality of nodes; and generating a set of byte code instructions corresponding to the material graph based on the expression tree, wherein the byte code instructions are executable by a plurality of processing cores included within the execution unit.
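The parse-then-emit pipeline above can be illustrated with a toy compiler pass: an expression tree is walked post-order so that each operator instruction follows the instructions producing its operands, yielding a linear stream for a stack machine. The node layout and opcode names are invented for illustration.

```python
def compile_expr(node, code):
    """Emit byte code for an expression tree node (post-order).

    Leaves are numeric constants; interior nodes are (op, lhs, rhs)
    tuples. Instructions are appended to `code` as (opcode, operand).
    """
    if isinstance(node, (int, float)):
        code.append(("PUSH", node))     # constant operand
    else:
        op, left, right = node
        compile_expr(left, code)        # operands first...
        compile_expr(right, code)
        code.append((op, None))         # ...then the operator

# A surface-property expression such as "0.8 * 0.5 + 0.1" as a tree:
tree = ("ADD", ("MUL", 0.8, 0.5), 0.1)
code = []
compile_expr(tree, code)
# `code` is now a linear instruction stream a stack machine can execute
```

Post-order emission is what makes the result directly executable by the processing cores: every instruction's inputs are already on the stack when it runs.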
Hybrid hierarchy of bounding and grid structures for ray tracing
Methods and ray tracing units are provided for performing intersection testing for use in rendering an image of a 3-D scene. A hierarchical acceleration structure may be traversed by traversing one or more upper levels of nodes of the hierarchical acceleration structure according to a first traversal technique, the first traversal technique being a depth-first traversal technique; and traversing one or more lower levels of nodes of the hierarchical acceleration structure according to a second traversal technique, the second traversal technique not being a depth-first traversal technique. Results of traversing the hierarchical acceleration structure are used for rendering the image of the 3-D scene. The upper levels of the acceleration structure may be defined according to a spatial subdivision structure, whereas the lower levels of the acceleration structure may be defined according to a bounding volume structure.
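The two-phase traversal can be sketched as follows, using breadth-first order for the lower levels as one example of a non-depth-first technique (the abstract does not mandate breadth-first specifically). The dict-based node layout and `upper_depth` cutoff are illustrative assumptions.

```python
from collections import deque

def traverse(root, upper_depth, intersects):
    """Collect leaf nodes whose bounds the ray intersects.

    Upper levels (depth < upper_depth) are walked depth-first with an
    explicit stack; once the cutoff is reached, the remaining subtree
    is walked breadth-first with a queue.
    """
    hits = []
    stack = [(root, 0)]                        # phase 1: depth-first
    while stack:
        node, depth = stack.pop()
        if not intersects(node):
            continue
        if not node.get("children"):
            hits.append(node)                  # leaf reached in upper levels
        elif depth < upper_depth:
            for child in node["children"]:
                stack.append((child, depth + 1))
        else:                                  # phase 2: breadth-first
            queue = deque(node["children"])
            while queue:
                n = queue.popleft()
                if not intersects(n):
                    continue
                if n.get("children"):
                    queue.extend(n["children"])
                else:
                    hits.append(n)
    return hits
```

Depth-first traversal keeps working-set memory small across the wide upper spatial subdivision, while a queue-based order over the lower bounding-volume levels exposes batches of nodes that can be intersection-tested in parallel.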