Patent classifications
G06T2210/61
INFORMATION PROCESSING DEVICE AND METHOD
A scene description file describing a scene of 3D object content is generated. In the scene description file, timed metadata identification information, indicating that metadata of an associated external file changes in the time direction, is stored in an MPEG_media extension, and timed metadata access information associating a camera object with the metadata is stored in the camera object. Furthermore, timed metadata that changes in the time direction is acquired on the basis of the timed metadata identification information and the timed metadata access information stored in the scene description file, and a display image of the 3D object content is generated on the basis of the acquired timed metadata. The present disclosure is applicable to, for example, an information processing device, an information processing method, or the like.
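The two pieces of signaling the abstract names can be pictured with a small glTF-style JSON sketch. Everything below is an illustrative assumption: field names such as `timedMetadata` and `timedMetadataAccess` are invented for this sketch and are not the normative MPEG-I scene description schema.

```python
import json

# Hedged sketch of a scene description carrying (a) timed-metadata
# identification information in an MPEG_media extension and (b) access
# information stored on a camera object. Field names are hypothetical.
scene_description = {
    "extensions": {
        "MPEG_media": {
            "media": [{
                "uri": "camera_metadata.mp4",
                # identification: the external file's metadata changes over time
                "timedMetadata": True,
            }]
        }
    },
    "cameras": [{
        "type": "perspective",
        "extensions": {
            # access information linking this camera to media entry 0 above
            "timedMetadataAccess": {"media": 0},
        }
    }],
}
doc = json.dumps(scene_description)
```

A client would read the identification flag to learn that the metadata is timed, then follow the camera's access information to fetch and apply it frame by frame.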
Centralized rendering
A method is disclosed, the method comprising the steps of receiving, from a first client application, first graphical data comprising a first node; receiving, from a second client application independent of the first client application, second graphical data comprising a second node; and generating a scenegraph, wherein the scenegraph describes a hierarchical relationship between the first node and the second node according to visual occlusion relative to a perspective from a display.
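As a rough illustration of the idea, the sketch below merges nodes received from two independent clients into one scenegraph whose children are kept sorted front-to-back, so occlusion order relative to the display's perspective is expressed by the hierarchy. The `Node` and `Scenegraph` classes and the `depth` field are assumptions made for this sketch, not the disclosed implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str     # identifier supplied by the client application
    depth: float  # distance from the display's perspective

@dataclass
class Scenegraph:
    children: list = field(default_factory=list)

    def add(self, node: Node) -> None:
        self.children.append(node)
        # Nearer nodes occlude farther ones, so keep children ordered
        # front-to-back relative to the display's viewpoint.
        self.children.sort(key=lambda n: n.depth)

graph = Scenegraph()
graph.add(Node("client_a_cube", depth=2.0))    # from the first client
graph.add(Node("client_b_sphere", depth=1.0))  # from the second client
front_to_back = [n.name for n in graph.children]
```

The point of the centralized step is that neither client needs to know about the other; the server alone resolves their occlusion relationship.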
MEDIA DISTRIBUTION DEVICE, MEDIA DISTRIBUTION METHOD, AND PROGRAM
The present disclosure relates to a media distribution device, a media distribution method, and a program enabling to generate a guide voice more appropriately. A media distribution device includes a guide voice generation unit that generates a guide voice describing a rendered image viewed from a viewpoint in a virtual space by using a scene description as information describing a scene in the virtual space and a user viewpoint information indicating a position and a direction of the viewpoint of a user; and an audio encoding unit that mixes the guide voice with original audio, and encodes the guide voice. The present technology can be applied to, for example, a media distribution system that distributes 6DoF media.
CROSS REALITY SYSTEM WITH BUFFERING FOR LOCALIZATION ACCURACY
A system for localizing an electronic device with dynamic buffering identifies, from the buffer, a first set of features that is extracted from a first image captured by the electronic device and receives, at the system, a second set of features that is extracted from a second image captured by the electronic device. The system further determines a first characteristic for the first set of features and a second characteristic for the second set of features and determines whether a triggering condition for dynamically changing a size of the buffer is satisfied based at least in part upon the first characteristic for the first set of features and the second characteristic for the second set of features.
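A minimal sketch of the buffering idea, under loose assumptions: the "characteristic" of a feature set is modeled here simply as the set itself, and the triggering condition is low overlap between the buffered set and the newly received one. The class, threshold, and doubling policy are all hypothetical.

```python
class FeatureBuffer:
    """Illustrative dynamically sized buffer of extracted feature sets."""

    def __init__(self, size: int = 2):
        self.size = size
        self.items: list[set] = []

    def push(self, features: set, threshold: float = 0.5) -> None:
        if self.items:
            prev = self.items[-1]
            # Jaccard overlap between the buffered set and the new set.
            overlap = len(prev & features) / len(prev | features)
            if overlap < threshold:  # triggering condition satisfied
                self.size *= 2       # dynamically grow the buffer
        self.items.append(features)
        self.items = self.items[-self.size:]  # evict oldest entries

buf = FeatureBuffer(size=2)
buf.push({1, 2, 3})
buf.push({7, 8, 9})  # no overlap with the previous set: buffer grows
```

Growing the buffer when consecutive images diverge keeps more history available for localization exactly when the scene is changing fast.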
SYSTEM AND METHOD FOR GENERATING A LIMITLESS PATH IN VIRTUAL REALITY ENVIRONMENT FOR CONTINUOUS LOCOMOTION
A method for generating a limitless path in a virtual reality (VR) environment for continuous locomotion within a real physical space using a head-mounted display (HMD) device associated with a user is provided. The method includes determining a line segment between two points that corresponds to an initial path travelled by the user. The method includes detecting a boundary of the VR environment to generate a next line segment. The method includes generating a new line segment and adding it to the end of the initial path. The method includes adding the new line segment to the end of the next line segment. The method includes generating an updated path by adding the new line segment in a direction at a shift angle to the direction of the next line segment. The method includes outputting the updated path as two-dimensional points to render the updated path into the VR environment.
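The core step, appending a new segment rotated by a shift angle from the direction of the last segment, can be sketched as below. Representing the path as a list of 2-D points and the fixed segment length are assumptions for this sketch.

```python
import math

def extend_path(path, length, shift_angle_deg):
    """Append a new line segment to the end of the path, rotated by a
    shift angle from the direction of the last segment, so the user can
    keep walking while staying inside a bounded physical space."""
    (x0, y0), (x1, y1) = path[-2], path[-1]
    heading = math.atan2(y1 - y0, x1 - x0) + math.radians(shift_angle_deg)
    path.append((x1 + length * math.cos(heading),
                 y1 + length * math.sin(heading)))
    return path

path = [(0.0, 0.0), (1.0, 0.0)]  # initial straight segment travelled by the user
extend_path(path, length=1.0, shift_angle_deg=90)
```

Each boundary detection would trigger another `extend_path` call, and the accumulated two-dimensional points are what gets rendered into the VR environment.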
Methods and apparatus to generate a three-dimensional (3D) model for 3D scene reconstruction
Methods, apparatus, systems and articles of manufacture for generating a three-dimensional (3D) model for 3D scene reconstruction are disclosed. An example apparatus includes a 3D scene generator to generate a 3D model for digital image scene reconstruction based on a trained generative model and a digital image captured in a real environment. An image simulator is to generate a simulated image based on the 3D model, the simulated image corresponding to the captured image. A discriminator is to apply a discriminative model to the simulated image to determine whether the simulated image is simulated.
Enhanced visibility system for work machines
An enhanced visibility system for a work machine includes an image capture device, a sensor, one or more control circuits, and a display. The image capture device is configured to obtain image data of an area surrounding the work machine. The sensor is configured to obtain data regarding physical properties of the area surrounding the work machine. The control circuits are configured to receive the image data and the data regarding the physical properties, and augment the image data with the data regarding the physical properties to generate augmented image data. The display is configured to display the augmented image data to provide an enhanced view of the area surrounding the work machine.
ACCELERATED PROCESSING VIA A PHYSICALLY BASED RENDERING ENGINE
One embodiment of a computer-implemented method for compiling a material graph into a set of instructions for execution within an execution unit includes receiving a first material graph having a plurality of nodes, wherein each node included in the plurality of nodes represents a different surface property of a material; parsing the material graph to generate an expression tree that includes one or more expressions for each node included in the plurality of nodes; and generating a set of byte code instructions corresponding to the material graph based on the expression tree, wherein the byte code instructions are executable by a plurality of processing cores included within the execution unit.
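The parse-then-emit pipeline described above can be pictured with a toy compiler: an expression tree is walked bottom-up and flattened into stack-machine byte code. The tuple-based tree encoding, the `PUSH`/`ADD`/`MUL` opcodes, and the interpreter are assumptions for this sketch, not the patent's actual instruction set.

```python
def compile_graph(node):
    """Walk an expression tree (nested tuples) and emit stack-machine
    byte code: ('PUSH', v) for constants, ('ADD',) / ('MUL',) for ops."""
    code = []
    def emit(n):
        if isinstance(n, (int, float)):
            code.append(("PUSH", n))
        else:
            op, left, right = n
            emit(left)
            emit(right)
            code.append((op,))
    emit(node)
    return code

def run(code):
    """Execute the byte code on a simple operand stack."""
    stack = []
    for instr in code:
        if instr[0] == "PUSH":
            stack.append(instr[1])
        else:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b if instr[0] == "ADD" else a * b)
    return stack[-1]

# e.g. a node blending two surface properties: 0.5 * (0.2 + 0.3)
bytecode = compile_graph(("MUL", 0.5, ("ADD", 0.2, 0.3)))
```

Emitting byte code rather than interpreting the graph directly is what lets many processing cores execute the same compiled material in parallel.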
ALLOCATION OF PRIMITIVES TO PRIMITIVE BLOCKS
An application sends primitives to a graphics processing system so that an image of a 3D scene can be rendered. The primitives are placed into primitive blocks for storage and retrieval from a parameter memory. Rather than simply placing the first primitives into a primitive block until the primitive block is full and then placing further primitives into the next primitive block, multiple primitive blocks can be “open” such that a primitive block allocation module can allocate primitives to one of the open primitive blocks to thereby sort the primitives into primitive blocks according to their spatial positions. By grouping primitives together into primitive blocks in accordance with their spatial positions, the performance of a rasterization module can be improved. For example, in a tile-based rendering system this may mean that fewer primitive blocks need to be fetched by a hidden surface removal module in order to process a tile.
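A minimal sketch of the multiple-open-blocks idea, with simplifying assumptions: primitive positions are 1-D scalars, each primitive is routed to the open block whose centre is nearest, and a fixed `spread` cost stands in for the decision to start filling an empty block instead.

```python
def allocate(primitives, num_open_blocks=2, capacity=4, spread=5.0):
    """Route each primitive to the nearest non-full open block so that
    blocks end up spatially coherent; full blocks are closed in bulk."""
    blocks = [[] for _ in range(num_open_blocks)]
    closed = []
    for pos in primitives:
        candidates = [b for b in blocks if len(b) < capacity]
        if not candidates:  # all open blocks full: close them, reopen
            closed.extend(blocks)
            blocks = [[] for _ in range(num_open_blocks)]
            candidates = blocks
        # Distance to a block is the distance to its centre; an empty
        # block costs a fixed 'spread', so nearby primitives cluster.
        best = min(candidates,
                   key=lambda b: abs(pos - sum(b) / len(b)) if b else spread)
        best.append(pos)
    return closed + [b for b in blocks if b]

blocks = allocate([0.0, 0.1, 100.0, 100.2])
```

Because the two clusters land in separate blocks, a tile-based hidden surface removal pass over the region around 0 never needs to fetch the block holding the primitives near 100.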
Generating three-dimensional virtual scene
A method and system for generating a three-dimensional (3D) virtual scene are disclosed. The method includes: identifying a two-dimensional (2D) object in a 2D picture and the position of the 2D object in the 2D picture; obtaining a 3D model of the 3D object corresponding to the 2D object; calculating the corresponding position, in the horizontal plane of the 3D scene, of the 3D object corresponding to the 2D object according to the position of the 2D object in the picture; and simulating the falling of the model of the 3D object onto the 3D scene from a predetermined height above the 3D scene, wherein the position of the landing point of the model of the 3D object in the horizontal plane is the corresponding position of the 3D object in the horizontal plane of the 3D scene.
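The position-calculation step can be sketched as a mapping from the object's pixel coordinates to a point on the scene's horizontal plane; the physics of the fall itself is omitted. The linear centre-relative mapping and all parameter names are assumptions for this sketch.

```python
def drop_position(obj_px, image_size, plane_size):
    """Map a 2-D object's pixel position in the picture to the landing
    point (x, z) on the horizontal plane of the 3-D scene. The model
    would then be dropped from a predetermined height and settle here."""
    (u, v), (w, h) = obj_px, image_size
    plane_w, plane_d = plane_size
    x = (u / w - 0.5) * plane_w  # horizontal offset from scene centre
    z = (v / h - 0.5) * plane_d  # depth offset from scene centre
    return (x, z)

# An object at the centre of a 640x480 picture lands at the scene origin.
landing = drop_position((320, 240), (640, 480), (10.0, 10.0))
```

Dropping the model under simulated gravity, rather than placing it directly, lets it settle naturally against the ground and other objects at that landing point.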