Patent classifications
G06T15/40
Light volume rendering
Systems, apparatuses, and methods for implementing light volume rendering techniques are disclosed. A processor is coupled to a memory. The processor renders the geometry of a scene into a geometry buffer. For a given light source in the scene, the processor initiates two shader pipeline passes to determine which pixels in the geometry buffer to light. On the first pass, the processor renders a front side of a light volume corresponding to the light source. Any pixels of the geometry buffer which are in front of the front side of the light volume are marked as pixels to be discarded. Then, during the second pass, only those pixels which were not marked to be discarded are sent to the pixel shader. This approach reduces the overhead involved in applying a lighting effect to the scene by reducing the amount of work performed by the pixel shader.
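The two-pass culling described above can be sketched as follows. This is a minimal, hypothetical Python model (not the patented implementation): depths are per-pixel values where smaller means closer to the camera, and the function names are illustrative only.

```python
# Hypothetical sketch of two-pass light-volume culling.
# A G-buffer pixel lying in front of the light volume's front faces
# cannot be lit, so pass 1 marks it for discard and pass 2 skips it.

def pass1_mark_discards(gbuffer_depth, light_front_depth):
    """Pass 1: mark pixels whose geometry is in front of the light volume."""
    marks = []
    for g, f in zip(gbuffer_depth, light_front_depth):
        marks.append(g < f)  # geometry closer than the front face -> discard
    return marks

def pass2_shade(gbuffer_depth, marks, shade_pixel):
    """Pass 2: run the pixel shader only on unmarked pixels."""
    lit = {}
    for i, (depth, discard) in enumerate(zip(gbuffer_depth, marks)):
        if not discard:
            lit[i] = shade_pixel(depth)
    return lit
```

In a usage sketch, only the pixels at or behind the light volume's front face reach the (here trivial) shading function, which is the work reduction the abstract claims.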
EFFICIENT STORAGE, REAL-TIME RENDERING, AND DELIVERY OF COMPLEX GEOMETRIC MODELS AND TEXTURES OVER THE INTERNET
A method for real-time compositing, rendering and delivery of complex geometric models and textures includes storing a plurality of three-dimensional models of at least two sub-parts of a whole three-dimensional object, storing a plurality of image textures for each of the plurality of three-dimensional models, receiving instructions from a user, the instructions including a selection of at least two of the plurality of three-dimensional models, each of the at least two of the plurality of three-dimensional models being one of the at least two sub-parts of the whole three-dimensional object, and generating the whole three-dimensional object including at least one of the plurality of image textures for each of the at least two of the plurality of three-dimensional models applied according to the instructions to the at least two of the plurality of three-dimensional models.
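The storage-and-compositing flow can be illustrated with a small sketch. This is an assumption-laden Python toy: the model identifiers, texture identifiers, and the shape of the `instructions` structure are all invented for illustration.

```python
# Hypothetical sketch: sub-part models stored per identifier, image
# textures stored per model, user instructions selecting a sub-part
# and a texture for it, and assembly of the whole object.

def generate_whole_object(models, textures, instructions):
    """Assemble the whole 3-D object from the selected sub-part models,
    applying the texture chosen for each per the instructions."""
    whole = []
    for model_id, texture_id in instructions["selection"].items():
        model = models[model_id]                   # stored 3-D sub-part model
        texture = textures[model_id][texture_id]   # stored image texture
        whole.append({"model": model, "texture": texture})
    return whole
```

A use of this sketch would pass, say, a chair seat and a chair back with a wood texture each, and receive the composited whole object back for rendering and delivery.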
IMAGING SYSTEMS AND METHODS INCORPORATING IMPROVED CULLING OF VIRTUAL OBJECTS
An imaging system including visible-light camera(s), pose-tracking means, and processor(s). The processor(s) is/are configured to: control visible-light camera(s) to capture visible-light image, whilst processing pose-tracking data to determine pose of camera(s); obtain three-dimensional model of real-world environment; create occlusion mask, using three-dimensional model; cull part of virtual object(s) to generate culled virtual object(s), wherein virtual object(s) is to be embedded at given position in visible-light image; detect whether width of culled part or remaining part of virtual object(s) is less than predefined percentage of total width of virtual object(s); if width of culled part is less than predefined percentage, determine new position and embed entirety of virtual object(s) at new position to generate extended-reality image; and if width of remaining part is less than predefined percentage, cull entirety of virtual object(s).
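The width-threshold decision at the end of the abstract can be captured in a few lines. This is a minimal Python sketch under stated assumptions: widths are in pixels, the threshold is a percentage of the total width, and the returned action names are invented labels.

```python
# Hypothetical sketch of the culling decision: compare the culled part's
# width and the remaining part's width against a predefined percentage
# of the virtual object's total width.

def resolve_occlusion(total_width, culled_width, threshold_pct):
    """Decide how to handle a partially culled virtual object."""
    remaining_width = total_width - culled_width
    threshold = threshold_pct / 100.0 * total_width
    if culled_width < threshold:
        # Barely occluded: move the whole object to a new position.
        return "embed_whole_at_new_position"
    if remaining_width < threshold:
        # Barely visible: drop the object from the frame entirely.
        return "cull_entirely"
    return "embed_culled_object"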
GENERATING AND MODIFYING AN ARTIFICIAL REALITY ENVIRONMENT USING OCCLUSION SURFACES AT PREDETERMINED DISTANCES
A method includes generating a depth map of a real environment as seen from a viewpoint that comprises pixels having corresponding depth values of one or more physical objects. Based on the depth map, a two-dimensional occlusion surface is generated representing at least a visible portion of the one or more physical objects that are located within a predetermined depth range defined relative to the viewpoint. The two-dimensional occlusion surface is posed in a three-dimensional coordinate system such that the two-dimensional occlusion surface is located at a predetermined distance from the viewpoint. The visibility of a virtual object is determined relative to the one or more physical objects by comparing a model of the virtual object with the two-dimensional occlusion surface, and an output image is generated based on the visibility of the virtual object.
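The visibility comparison can be sketched per pixel. This is a hypothetical Python toy, not the patented method: the occlusion surface is reduced to a coverage mask at one fixed distance, the virtual object to a coverage mask at one depth, and all names are illustrative.

```python
# Hypothetical per-pixel visibility test against a 2-D occlusion surface
# placed at a predetermined distance from the viewpoint. A virtual-object
# pixel is hidden where the occlusion surface covers it and the surface
# sits closer to the viewpoint than the object.

def visible_pixels(object_depth, surface_distance, object_mask, occlusion_mask):
    out = []
    for covered_obj, covered_occ in zip(object_mask, occlusion_mask):
        occluded = covered_occ and surface_distance < object_depth
        out.append(covered_obj and not occluded)
    return out
```

The output mask would then drive compositing of the virtual object into the final extended-reality image.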
MESH PROCESSING FOR VIEWABILITY TESTING
A computer-implemented method includes obtaining an input polygon mesh representing at least part of a three-dimensional scene and comprising a plurality of input polygons, and obtaining mapping data for mapping at least part of an image to a region of the input polygon mesh when the three-dimensional scene is rendered. Said region extends at least partway across the plurality of input polygons. The method includes using the mapping data to generate one or more test polygons to match or approximate said region of the input polygon mesh. Each of the generated test polygons is distinct from each of said plurality of input polygons.
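One simple way to approximate a mapped region with a single test polygon is an axis-aligned bounding quad over the mapped points. This is only a hedged illustration of the idea of a test polygon distinct from the input polygons; the actual generation method in the patent is not specified here.

```python
# Hypothetical sketch: approximate the image-mapped region with one test
# quad, the axis-aligned bounding box of the mapped points in a 2-D
# parameter space of the mesh. Returns the quad's corners in order.

def region_bounding_quad(mapped_points):
    xs = [p[0] for p in mapped_points]
    ys = [p[1] for p in mapped_points]
    x0, x1 = min(xs), max(xs)
    y0, y1 = min(ys), max(ys)
    return [(x0, y0), (x1, y0), (x1, y1), (x0, y1)]
```

Such a test polygon can then be used for viewability testing in place of the many input polygons the region spans.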
System and method for modifying content of a virtual environment
A system for modifying data representing a virtual environment includes: an environment navigation unit operable to control navigation within the virtual environment to generate one or more viewpoints within the virtual environment, an environment identification unit operable to identify one or more aspects of the geometry of the virtual environment visible in the one or more viewpoints, a geometry evaluation unit operable to evaluate the visibility of one or more aspects of the geometry based upon the identification for each of one or more viewpoints, and a data modification unit operable to modify one or more elements of data representing the virtual environment.
Reprojecting holographic video to enhance streaming bandwidth/quality
Improved video compression and video streaming systems and methods are disclosed for environments where camera motion is common, such as cameras incorporated into head-mounted displays. This is accomplished by combining a 3D representation of the shape of the user's environment (walls, floor, ceiling, furniture, etc.), image data, and data representative of changes in the location and orientation (pose) of the camera between successive image frames, thereby reducing data bandwidth needed to send streaming video in the presence of camera motion.
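The bandwidth saving comes from predicting each frame from the previous one plus the camera pose change, then transmitting only the residual. The following is a deliberately toy 1-D Python model of that idea: real systems reproject using the 3-D environment representation and full 6-DoF pose, whereas here the pose delta is reduced to a single pixel shift.

```python
# Hypothetical sketch of pose-based prediction plus residual coding.

def predict_frame(prev_frame, pixel_shift):
    """Predict the new frame by shifting the previous frame according to
    the camera pose change (toy 1-D stand-in for 3-D reprojection)."""
    n = len(prev_frame)
    return [prev_frame[(i + pixel_shift) % n] for i in range(n)]

def encode_residual(actual, predicted):
    """Only this residual needs to be streamed, alongside the pose delta."""
    return [a - p for a, p in zip(actual, predicted)]

def decode(predicted, residual):
    """Receiver reconstructs the frame from its own prediction + residual."""
    return [p + r for p, r in zip(predicted, residual)]
```

When the prediction is good (small camera motion against known geometry), the residual is mostly zeros and compresses far better than the raw frame.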
METHODS FOR AUGMENTED REALITY GAMING
Disclosed are methods for augmenting progressive meter information presented in association with a gaming device. The methods include controlling a camera on a mobile device using an augmented reality gaming assistance component and enabling a user to employ the camera to capture an image of a game screen display, the image including one or more progressive meters associated with a respective progressive award. The captured image is sent via a network to a server that, for each of one or more progressive meters, determines a location of the progressive meters within the captured image, determines a value for each associated progressive award based on the captured image at each respective location, and determines a progressive rating for the associated progressive award based on the value. Content based on the one or more progressive ratings is received at the mobile device and displayed to the user via a display.
Tessellating patches of surface data in tile based computer graphics rendering
A method and system for culling a patch of surface data from one or more tiles in a tile based computer graphics system. A rendering space is divided into a plurality of tiles and a patch of surface data is read. Then, at least a portion of the patch is analysed to determine data representing a bounding depth value evaluated over at least one tile. This may comprise tessellating the patch of surface data to derive a plurality of tessellated primitives and analysing at least some of the tessellated primitives. For each tile within which the patch is located, the data representing the bounding depth value is then used to determine whether the patch is hidden in the tile, and at least a portion of the patch is rendered if the patch is determined not to be hidden in at least one tile.
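The per-tile hidden test can be sketched with a hierarchical-depth-style comparison. This is a hedged Python illustration under assumptions the abstract does not fix: larger depth means farther away, the bounding depth is taken as the minimum (nearest) depth of the tessellated primitives in each tile, and each tile's depth buffer is assumed fully covered so its maximum stored depth can act as an occluder bound.

```python
# Hypothetical sketch: a patch is hidden in a tile if even its nearest
# tessellated point lies behind everything already stored in that tile.

def tiles_where_patch_visible(tess_depths, tile_max_depths, patch_tiles):
    """Return the tiles (of those the patch overlaps) where it survives
    the bounding-depth test; render the patch only if this is non-empty."""
    visible_tiles = []
    for t in patch_tiles:
        patch_min = min(tess_depths[t])  # bounding (nearest) depth in tile t
        if patch_min <= tile_max_depths[t]:
            visible_tiles.append(t)      # not provably hidden in this tile
    return visible_tiles
```

Culling whole patches before full tessellation and rasterisation is the point: tiles in which the test fails never pay for the patch at all.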