Patent classifications
G09G2360/121
METHOD OF AND APPARATUS FOR PROVIDING AN OUTPUT SURFACE IN A DATA PROCESSING SYSTEM
An apparatus for compositing an output surface (10) from a plurality of input surfaces (1, 2, 3, 4) includes processing circuitry and a composition processor. The processing circuitry is configured to determine whether two or more input surfaces of the plurality of input surfaces (1, 2, 3, 4) can be combined into a single secondary surface for provision to the composition processor. When it is determined that two or more input surfaces of the plurality of input surfaces (1, 2, 3, 4) can be combined into a single secondary surface for provision to the composition processor, the processing circuitry is configured to provide data representing the secondary surface to the composition processor, the data indicating the input surfaces that contribute to the secondary surface.
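The abstract discloses no code; the decision it describes (whether two or more input surfaces can be merged into one secondary surface, and recording which surfaces contribute) can be illustrated with a minimal sketch. The combinability rule used here (same pixel format, mutually non-overlapping rectangles) is an assumption for illustration, not the patented criterion.

```python
# Illustrative sketch only: merge-eligibility check and secondary-surface
# descriptor. The can_combine() policy is an assumed example, not the claim.
from dataclasses import dataclass

@dataclass(frozen=True)
class Surface:
    surface_id: int
    x: int          # top-left position on the output surface
    y: int
    width: int
    height: int
    pixel_format: str

def overlaps(a: Surface, b: Surface) -> bool:
    """True if the two surfaces' rectangles intersect on the output."""
    return not (a.x + a.width <= b.x or b.x + b.width <= a.x or
                a.y + a.height <= b.y or b.y + b.height <= a.y)

def can_combine(surfaces: list[Surface]) -> bool:
    """Example policy: combine only same-format, non-overlapping surfaces."""
    same_format = len({s.pixel_format for s in surfaces}) == 1
    disjoint = all(not overlaps(a, b)
                   for i, a in enumerate(surfaces)
                   for b in surfaces[i + 1:])
    return same_format and disjoint

def make_secondary_surface(surfaces: list[Surface]) -> dict:
    """Data for the composition processor, recording the contributors."""
    if not can_combine(surfaces):
        raise ValueError("surfaces cannot be combined")
    return {
        "bounds": (min(s.x for s in surfaces), min(s.y for s in surfaces),
                   max(s.x + s.width for s in surfaces),
                   max(s.y + s.height for s in surfaces)),
        "contributors": [s.surface_id for s in surfaces],
    }
```

The `contributors` list corresponds to the abstract's "data indicating the input surfaces that contribute to the secondary surface".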
DISPLAY CONTROL DEVICE, DISPLAY CONTROL METHOD, AND DISPLAY CONTROL PROGRAM
A display control device (10) detects a change of a screen and, depending on the detected change, executes either or both of two processes. In provisional update processing, the device generates a strip-shaped rectangular region extended in a vertical or horizontal direction based on the position of an icon displayed on the screen, searches the screen for a region similar to a specified image template using the rectangular region as the search range, and performs overlay display of an icon in the similar region. In definitive update processing, the device searches for a region similar to the image template using a search range larger than that of the provisional update processing, and performs overlay display of an icon at a position corresponding to the similar region.
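The two search ranges can be sketched as rectangle computations. Everything here (strip width, margin, clamping behavior) is an assumed illustration of the narrow-strip vs. larger-range distinction, not the patented method.

```python
# Illustrative sketch: provisional pass searches a narrow strip through the
# icon's position; definitive pass uses a larger range. All sizes assumed.

def provisional_range(icon_x, icon_y, screen_w, screen_h,
                      strip=64, vertical=True):
    """Strip-shaped rectangle through the icon, clamped to the screen.
    Returns (left, top, right, bottom)."""
    if vertical:
        left = max(0, icon_x - strip // 2)
        return (left, 0, min(screen_w, left + strip), screen_h)
    top = max(0, icon_y - strip // 2)
    return (0, top, screen_w, min(screen_h, top + strip))

def definitive_range(icon_x, icon_y, screen_w, screen_h, margin=256):
    """A larger rectangle around the icon (here: icon position +/- margin)."""
    return (max(0, icon_x - margin), max(0, icon_y - margin),
            min(screen_w, icon_x + margin), min(screen_h, icon_y + margin))
```

A template matcher (e.g. normalized cross-correlation) would then be run only inside the returned rectangle, which is what makes the provisional pass cheap.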
CONSOLIDATION OF DATA COMPRESSION USING COMMON SECTORED CACHE FOR GRAPHICS STREAMS
A mechanism is described for facilitating consolidated compression/de-compression of graphics data streams of varying types at computing devices. A method of embodiments, as described herein, includes generating a common sector cache relating to a graphics processor. The method may further include performing a consolidated compression of multiple types of graphics data streams associated with the graphics processor using the common sector cache.
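The key idea, one sector-granular cache shared across graphics stream types rather than a per-type cache, can be modeled in a few lines. The sector size, keying scheme, and use of `zlib` are assumptions standing in for the hardware compression the patent contemplates.

```python
# Illustrative model: a single sector cache serving multiple stream types
# (e.g. color, depth). zlib stands in for the real codec; details assumed.
import zlib

class CommonSectorCache:
    def __init__(self, sector_size=64):
        self.sector_size = sector_size
        self.sectors = {}   # (stream_type, sector_index) -> compressed bytes

    def put(self, stream_type: str, data: bytes) -> int:
        """Split a stream into sectors, compress each into the shared cache.
        Returns the total compressed size."""
        total = 0
        for i in range(0, len(data), self.sector_size):
            blob = zlib.compress(data[i:i + self.sector_size])
            self.sectors[(stream_type, i // self.sector_size)] = blob
            total += len(blob)
        return total

    def get(self, stream_type: str, sector_index: int) -> bytes:
        """Decompress one sector of one stream from the shared cache."""
        return zlib.decompress(self.sectors[(stream_type, sector_index)])
```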
GENERATION OF A MODIFIED UI ELEMENT TREE
A computing device comprises an electronic paper display, a processor and a memory. The memory is arranged to store platform software and application software for at least one application that is not adapted to work with an electronic paper display. The platform software comprises a UI conversion module comprising device-executable instructions, which when executed by the processor, cause the processor to: access a UI element tree for the application; generate a modified UI element tree for the application by removing and/or re-styling at least one UI element; and render data from the application using the modified UI element tree for display on the electronic paper display.
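The UI conversion step, walking the application's UI element tree and removing and/or re-styling elements, can be sketched as a recursive transform. The element kinds deemed unsuitable for e-paper and the monochrome re-styling rule are assumptions chosen for illustration.

```python
# Illustrative sketch of the UI conversion module: produce a modified copy of
# the UI element tree, dropping some kinds and re-styling others for e-paper.
from dataclasses import dataclass, field

@dataclass
class UIElement:
    kind: str
    style: dict = field(default_factory=dict)
    children: list = field(default_factory=list)

# Assumed example set of element kinds unsuitable for an e-paper display.
REMOVE_KINDS = {"animation", "video_overlay"}

def convert(node):
    """Return a modified copy of the tree, or None if the node is removed."""
    if node.kind in REMOVE_KINDS:
        return None
    new_style = dict(node.style)
    if "color" in new_style:
        new_style["color"] = "black"   # re-style to monochrome (assumed rule)
    new_children = [c for c in (convert(ch) for ch in node.children) if c]
    return UIElement(node.kind, new_style, new_children)
```

The application itself is untouched; only the tree handed to the renderer changes, matching the abstract's "render data from the application using the modified UI element tree".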
METHOD AND SYSTEM OF DOWNLOADING IMAGE TILES ONTO A CLIENT DEVICE
There is disclosed a method of downloading image tiles onto a client device. A server stores a plurality of image tiles organized in a hierarchical structure, each level of the structure storing a sub-set of the image tiles associated with a particular resolution level. The method comprises, when downloading image tiles of a first resolution level, generating a second set of image tiles having a second resolution level lower than the first resolution level, each image tile of the second set having four child image tiles among the image tiles of the first resolution level as prescribed by the hierarchical structure, and preloading the second set of image tiles to the client device for use in generating a transition view that is displayed while the actual image tiles required for a newly requested image view are downloaded from the server to the client device.
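The four-children-per-tile relationship is a quadtree, so the lower-resolution set to preload is just the set of parent tiles of the requested view. The tile addressing scheme below (level, x, y with integer halving) is an assumption; the patent does not specify one.

```python
# Illustrative sketch of the quadtree relationship: each tile at level n-1
# has four children at level n. Addressing scheme (level, x, y) is assumed.

def parent_tile(level: int, x: int, y: int) -> tuple[int, int, int]:
    """Parent of tile (x, y) one resolution level down."""
    return (level - 1, x // 2, y // 2)

def parents_to_preload(level, tiles):
    """Lower-resolution tiles covering the requested view; these serve as the
    transition view while the full-resolution tiles download."""
    return {parent_tile(level, x, y) for (x, y) in tiles}
```

A 2x2 block of tiles at one level collapses to a single parent, so the preloaded transition set is roughly a quarter the size of the requested view.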
SYSTEMS AND METHOD FOR GPU BASED VIRTUAL REALITY VIDEO STREAMING SERVER
Systems and methods of processing and streaming a virtual reality video using a graphics processing unit (GPU) are provided. A video server is configured to cause a processor to read, from a video data source, source video data including multiple spherical image frame data and store the source video data in a first memory. The video server is further configured to cause the GPU to convert, in response to storing first spherical image frame data in a first frame buffer of a second memory, the first spherical image frame data to first equirectangular image frame data that corresponds to a portion of the spherical image represented by the first spherical image frame data, encode the converted first equirectangular image frame data, and store the encoded first equirectangular image frame data in an encoded frame buffer of the second memory.
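The "portion of the spherical image" is determined by the viewer's field of view; the mapping from a yaw/pitch FOV to a pixel region of a W×H equirectangular frame can be sketched with the standard equirectangular projection. The angle conventions and parameter names below are assumptions, not the patent's.

```python
# Illustrative math only: which pixel rectangle of a W x H equirectangular
# frame covers a given field of view. Conventions assumed:
# yaw in [-180, 180), pitch in [-90, 90], angles in degrees.

def fov_to_region(yaw_deg, pitch_deg, fov_h_deg, fov_v_deg, width, height):
    """Pixel rectangle (x0, y0, x1, y1) covering the field of view."""
    x0 = int((yaw_deg - fov_h_deg / 2 + 180) / 360 * width)
    x1 = int((yaw_deg + fov_h_deg / 2 + 180) / 360 * width)
    y0 = int((90 - (pitch_deg + fov_v_deg / 2)) / 180 * height)
    y1 = int((90 - (pitch_deg - fov_v_deg / 2)) / 180 * height)
    return (x0, y0, x1, y1)
```

Converting and encoding only this region, rather than the full sphere, is what keeps the streamed bitrate proportional to what the headset can actually display.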
SUB-FRAME SCANOUT FOR LATENCY REDUCTION IN VIRTUAL REALITY APPLICATIONS
A system, computer readable medium, and method for sub-frame scan-out are disclosed. The method includes the steps of dividing a frame into a plurality of slices. For each slice in the plurality of slices, the steps further include sampling a sensor associated with a head mounted display to generate sample data corresponding to the slice; adjusting one or more parameters associated with rendering operations for the slice based on the sample data; and rendering primitive data associated with a model according to the rendering operations to generate image data for the slice. Each slice is a portion of the frame and corresponds to different sample data from the sensor. Thus, adjusting of the parameters is different for each slice of the frame.
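The per-slice loop in the abstract (sample the sensor, adjust parameters, render the slice) can be sketched directly. The sensor and renderer here are injected stand-ins, not a real HMD API; horizontal slicing is an assumption.

```python
# Illustrative sketch of sub-frame scan-out: re-sample the head-pose sensor
# for every slice so later slices render with fresher data.

def render_frame_in_slices(frame_height, num_slices, sample_sensor,
                           render_slice):
    """Divide the frame into horizontal slices; each slice gets its own
    sensor sample, so rendering parameters differ per slice."""
    slice_h = frame_height // num_slices
    images = []
    for i in range(num_slices):
        pose = sample_sensor()                  # fresh sample for this slice
        y0, y1 = i * slice_h, (i + 1) * slice_h
        images.append(render_slice(y0, y1, pose))
    return images
```

Because the last slice samples the sensor almost a full frame later than the first, motion-to-photon latency for the bottom of the display shrinks accordingly.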
APPARATUS AND METHOD FOR EFFICIENT GRAPHICS VIRTUALIZATION
An apparatus and method are described for allocating local memories to virtual machines. For example, one embodiment of an apparatus comprises: a command streamer to queue commands from a plurality of virtual machines (VMs) or applications, the commands to be distributed from the command streamer and executed by graphics processing resources of a graphics processing unit (GPU); a tile cache to store graphics data associated with the plurality of VMs or applications as the commands are executed by the graphics processing resources; and tile cache allocation hardware logic to allocate a first portion of the tile cache to a first VM or application and a second portion of the tile cache to a second VM or application; the tile cache allocation hardware logic to further allocate a first region in system memory to store spill-over data when the first portion of the tile cache and/or the second portion of the tile cache becomes full.
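A software model of the allocation policy (fixed per-VM cache portions, overflow to a system-memory spill region) helps make the hardware description concrete. Sizes, the fill-cache-first rule, and the error behavior are all assumptions.

```python
# Illustrative software model of the tile cache allocation hardware logic.
# Policy assumed: fill the VM's cache portion first, then spill to memory.

class TileCacheAllocator:
    def __init__(self, cache_size, spill_size):
        self.cache_size = cache_size
        self.spill_size = spill_size
        self.portions = {}   # vm_id -> allocated cache bytes
        self.used = {}       # vm_id -> cache bytes written so far
        self.spilled = {}    # vm_id -> bytes spilled to system memory

    def allocate(self, vm_id, portion):
        """Give a VM or application a fixed portion of the tile cache."""
        if sum(self.portions.values()) + portion > self.cache_size:
            raise MemoryError("tile cache exhausted")
        self.portions[vm_id] = portion
        self.used[vm_id] = 0
        self.spilled[vm_id] = 0

    def write(self, vm_id, nbytes):
        """Writes beyond the VM's cache portion go to the spill region."""
        free = self.portions[vm_id] - self.used[vm_id]
        in_cache = min(free, nbytes)
        overflow = nbytes - in_cache
        if self.spilled[vm_id] + overflow > self.spill_size:
            raise MemoryError("spill region exhausted")
        self.used[vm_id] += in_cache
        self.spilled[vm_id] += overflow
```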
TECHNIQUES FOR VIDEO PLAYBACK DECODING SURFACE PREDICTION
Techniques are disclosed for video playback decoding surface prediction. For instance, in some embodiments, video content may be parsed for information that can be used to predict what surfaces (e.g., computer graphics shapes to be rendered, as defined by vertices specifying the location and possibly other attributes of the shape) are most likely to be accessed, for example, by a display or a graphics processing unit (GPU) in the near future. In accordance with some embodiments, these surfaces may be pre-loaded, for example, into cache memory or other desired high-bandwidth memory in advance to minimize or otherwise reduce memory access latency. In some cases, these surfaces may be entered in a list that is kept updated with each new input frame, and the surfaces in that list may be kept inside the cache (or other high-bandwidth memory) for future display or GPU access.
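One way (details entirely assumed) to realize "a list that is kept updated with each new input frame" is to score surfaces by recent references and pin the top scorers in high-bandwidth memory, letting the rest become evictable.

```python
# Illustrative sketch: per-frame update of a predicted-surface list whose
# members are kept pinned in cache. Scoring policy is an assumption.
from collections import Counter

class SurfacePredictor:
    def __init__(self, capacity):
        self.capacity = capacity   # how many surfaces the cache can pin
        self.scores = Counter()
        self.pinned = set()

    def on_frame(self, referenced_surfaces):
        """Fold the new frame's surface references into the scores and
        re-pin the top-scoring surfaces."""
        self.scores.update(referenced_surfaces)
        top = [s for s, _ in self.scores.most_common(self.capacity)]
        self.pinned = set(top)
```

A real implementation would decay old scores and fold in hints parsed from the video content, per the abstract; this sketch shows only the list-maintenance shape.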
Caching of adaptively sized cache tiles in a unified L2 cache with surface compression
One embodiment of the present invention includes techniques for adaptively sizing cache tiles in a graphics system. A device driver associated with a graphics system sets a cache tile size associated with a cache tile to a first size. The device driver detects a change from a first render target configuration that includes a first set of render targets to a second render target configuration that includes a second set of render targets. The device driver sets the cache tile size to a second size based on the second render target configuration. One advantage of the disclosed approach is that the cache tile size is adaptively sized, resulting in fewer cache tiles for less complex render target configurations. Adaptively sizing cache tiles leads to more efficient processor utilization and reduced power requirements. In addition, a unified L2 cache tile allows dynamic partitioning of cache memory between cache tile data and other data.
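The driver decision can be sketched as: pick the largest tile whose per-pixel footprint across all render targets fits a fixed byte budget, so simpler configurations get larger (and therefore fewer) tiles. The budget, the power-of-two edge, and the bounds are assumptions for illustration.

```python
# Illustrative sketch of the driver's adaptive tile-size choice. The byte
# budget and power-of-two tile edges are assumed, not the patented values.

def cache_tile_size(render_targets, budget_bytes=128 * 1024):
    """Largest power-of-two square tile edge whose pixel footprint across
    all render targets fits the per-tile byte budget."""
    bytes_per_pixel = sum(rt["bpp"] for rt in render_targets)
    edge = 256
    while edge > 16 and edge * edge * bytes_per_pixel > budget_bytes:
        edge //= 2
    return edge
```

With one RGBA8 target the tile stays large; adding more targets (raising bytes per pixel) shrinks the tile so the working set still fits the cache partition.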