Patent classifications
G06T2210/08
RENDER TARGET COMPRESSION SCHEME COMPATIBLE WITH VARIABLE RATE SHADING
A disclosed technique includes reading, from a compressed render target, a set of unique color values for a coarse pixel, wherein the coarse pixel includes multiple render target pixels; reading, from the compressed render target, pointers to the unique color values for the coarse pixel; and generating colors for the multiple render target pixels based on the unique color values and the pointers.
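The palette-plus-pointer decode described above can be sketched as follows. This is a hypothetical illustration of the idea, not the patent's actual format: each coarse pixel stores a small set of unique colors and one index ("pointer") per render target pixel, and decompression resolves each index through the palette.

```python
def decode_coarse_pixel(unique_colors, pointers, width, height):
    """Reconstruct per-pixel colors for one coarse pixel from a compressed
    render target: a small palette of unique colors plus one palette
    index ("pointer") per render target pixel."""
    assert len(pointers) == width * height
    return [[unique_colors[pointers[y * width + x]] for x in range(width)]
            for y in range(height)]

# A 2x2 coarse pixel shaded at one sample (as under 2x2 variable rate
# shading): a single unique color, with all four pointers referencing it.
palette = [(255, 0, 0)]
indices = [0, 0, 0, 0]
pixels = decode_coarse_pixel(palette, indices, 2, 2)
```

When all pointers reference the same palette entry, the coarse pixel compresses to one color plus four small indices rather than four full color values.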
Information processing apparatus, information processing method and storage medium
An information processing apparatus (encoding apparatus) acquires first polygon data representing a shape of an object, acquires geometry data relating to the geometry of second polygon data whose resolution is higher than that of the first polygon data, and outputs encoded data including the geometry data and topology data relating to the first polygon data.
VIRTUAL REALITY ENVIRONMENT
A three-dimensional virtual reality environment.
OPERATIONS USING SPARSE VOLUMETRIC DATA
A volumetric data structure models a particular volume at a plurality of levels of detail. A first entry in the volumetric data structure includes a first set of bits representing voxels at a first level of detail, which is the lowest level of detail in the structure; the value of each bit indicates whether the corresponding voxel is at least partially occupied by respective geometry. The volumetric data structure further includes a number of second entries representing voxels at a second level of detail higher than the first. The voxels at the second level of detail represent subvolumes of the volumes represented by voxels at the first level of detail, and the number of second entries corresponds to the number of bits in the first set whose values indicate that the corresponding voxel volume is occupied.
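The two-level scheme above can be sketched as a coarse occupancy bitmask with child masks emitted only for occupied bits. This is an illustrative encoding of the idea for a tiny 4x4x4 grid, not the patent's on-disk format; the bit-ordering convention is an assumption.

```python
def encode_two_level(grid):
    """Encode a 4x4x4 set of occupied voxel coordinates as a two-level
    sparse structure: `level0` has one bit per 2x2x2 coarse voxel, and
    `children` holds one 8-bit entry per set bit of `level0`, in bit
    order, marking which of that coarse voxel's 8 sub-voxels are occupied."""
    def bit(x, y, z):  # bit index within a 2x2x2 block (assumed ordering)
        return x + 2 * y + 4 * z

    level0, children = 0, {}
    for (x, y, z) in grid:
        cb = bit(x // 2, y // 2, z // 2)      # which coarse voxel
        level0 |= 1 << cb
        children[cb] = children.get(cb, 0) | (1 << bit(x % 2, y % 2, z % 2))
    # Second-level entries appear only for occupied first-level bits.
    return level0, [children[b] for b in range(8) if level0 & (1 << b)]

mask, kids = encode_two_level({(0, 0, 0), (1, 1, 0), (3, 3, 3)})
```

Note how the count of second-level entries equals the popcount of the first-level mask, which is the indexing property the abstract describes.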
Mixed rendering system and mixed rendering method for reducing latency in VR content transmission
The disclosure provides a mixed rendering system and a mixed rendering method. The mixed rendering system includes a client device configured to perform: determining at least one user-interactable object of a virtual environment; rendering the at least one user-interactable object; receiving a background scene frame of the virtual environment; blending the at least one rendered user-interactable object with the background scene frame as the visual content of the virtual environment; and providing the visual content of the virtual environment.
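The client-side blend step can be sketched as a standard source-over composite of the locally rendered object layer over the streamed background frame. This is a minimal sketch under assumed conventions (frames as flat lists of 8-bit RGBA tuples, foreground alpha 0 meaning "show background"); the disclosure does not mandate a particular blend formula.

```python
def blend_over(background, foreground):
    """Composite a locally rendered interactable-object layer over a
    received background scene frame (source-over alpha blending)."""
    out = []
    for (br, bgc, bb, _), (fr, fg, fb, fa) in zip(background, foreground):
        a = fa / 255.0
        out.append((round(fr * a + br * (1 - a)),
                    round(fg * a + bgc * (1 - a)),
                    round(fb * a + bb * (1 - a)),
                    255))
    return out

# A blue background; the first pixel is covered by an opaque red object,
# the second is untouched (fully transparent foreground).
bg = [(0, 0, 255, 255), (0, 0, 255, 255)]
fg = [(255, 0, 0, 255), (0, 0, 0, 0)]
blended = blend_over(bg, fg)
```

Rendering only the interactable objects locally and streaming the rest is what lets the client hide transmission latency for the content the user actually manipulates.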
PARALLEL RENDERERS FOR ELECTRONIC DEVICES
Aspects of the subject technology relate to electronic devices having multiple renderers. The multiple renderers may include a system renderer that renders system content and application content generated by some applications at the electronic device, and one or more application renderers that render application content generated by one or more other corresponding applications. The electronic device may include a compositor that receives rendered content from the system renderer and one or more application renderers, and generates a composite display environment that concurrently includes the rendered content from the system renderer and one or more application renderers.
METHOD, AN APPARATUS AND A COMPUTER PROGRAM PRODUCT FOR VIDEO ENCODING AND VIDEO DECODING
The embodiments relate to a method comprising: establishing a three-dimensional conversational interaction with one or more receivers; generating a point cloud relating to a user and capturing audio from one or more audio sources; and generating a conversational scene description comprising at least a first dynamic object describing a virtual space for the three-dimensional conversational interaction, wherein the first dynamic object refers to one or more objects specific to the three-dimensional conversational interaction, said one or more objects comprising at least data relating to the transformable point cloud, audio obtained from said one or more audio sources, and input obtained from one or more connected devices controlling at least the point cloud, wherein said objects are linked to each other for seamless manipulation. The conversational scene description is applied to metadata, and the metadata is transmitted with the respective audio in real time to said one or more receivers.
MINIMAL VOLUMETRIC 3D ON DEMAND FOR EFFICIENT 5G TRANSMISSION
A minimal volumetric 3D transmission implementation enables efficient transmission of a 3D model to a client device. A volumetric 3D model is generated using a camera rig to capture frames of a subject, and a viewer is able to select a view of the subject. The system determines an optimal subset of the rig's cameras to capture frames for generating the volumetric 3D model based on the viewer's selected view, and the model is transmitted to the client device. If the viewer changes the view, the process repeats and a new subset of cameras is selected to generate the volumetric 3D model at a different angle.
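The view-dependent camera selection can be sketched with a simple angular-distance heuristic: pick the k rig cameras closest (around the subject) to the viewer's selected view. The metric and the function below are hypothetical; the abstract does not specify how the optimal subset is determined.

```python
def select_cameras(view_angle_deg, camera_angles_deg, k=2):
    """Pick the k rig cameras whose angular position (degrees around the
    subject) is closest to the viewer's selected view angle."""
    def angdist(a, b):
        d = abs(a - b) % 360
        return min(d, 360 - d)  # wrap-around distance on the circle

    return sorted(camera_angles_deg,
                  key=lambda a: angdist(a, view_angle_deg))[:k]

# Viewer looks from 10 degrees; cameras at 0 and 350 degrees flank that view.
nearest = select_cameras(10, [0, 90, 180, 270, 350], k=2)
```

When the viewer changes the view, re-running the selection yields the new subset, and only those cameras' frames feed the next volumetric reconstruction.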
INFORMATION PROCESSING APPARATUS, 3D DATA GENERATION METHOD, AND PROGRAM
When an operation detection unit (31) (detection unit) of a mobile terminal (30a) (information processing apparatus) detects an operation instruction given while a 3D model (90M) (3D object) is being observed, a texture information selection unit (33) (decision unit) selects, according to the detected operation instruction, which of the texture information (Ta or Tb), acquired by a 3D model acquisition unit (32) and expressing the texture of the 3D model (90M) in a plurality of different formats, to use when drawing the 3D model (90M). A rendering processing unit (34) (drawing unit) then draws the 3D model (90M) by rendering the selected texture information (Ta or Tb) onto the 3D model (90M) reconstructed on the basis of mesh information (M) (shape information).