SELECTIVELY WRITING BACK DIRTY CACHE LINES CONCURRENTLY WITH PROCESSING

A graphics pipeline includes a cache having cache lines that are configured to store data used to process frames in a graphics pipeline. The graphics pipeline is implemented using a processor that processes frames for the graphics pipeline using data stored in the cache. The processor processes a first frame and writes back a dirty cache line from the cache to a memory concurrently with processing of the first frame. The dirty cache line is retained in the cache and marked as clean subsequent to being written back to the memory. In some cases, the processor generates a hint that indicates a priority for writing back the dirty cache line based on a read command occupancy at a system memory controller.
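The write-back scheme described above can be modeled in a few lines. This is an illustrative sketch, not the patented implementation: the names (`CacheLine`, `WriteBackCache`, `writeback_dirty`) and the occupancy threshold are assumptions introduced for the example.

```python
# Hypothetical model: dirty lines are flushed to memory while frame
# processing continues, then RETAINED in the cache and marked clean.
# A hint gates the write-back on the memory controller's read occupancy.

class CacheLine:
    def __init__(self, addr, data):
        self.addr = addr
        self.data = data
        self.dirty = False

class WriteBackCache:
    def __init__(self):
        self.lines = {}      # addr -> CacheLine
        self.memory = {}     # backing store

    def write(self, addr, data):
        line = self.lines.setdefault(addr, CacheLine(addr, None))
        line.data = data
        line.dirty = True    # now modified relative to memory

    def writeback_dirty(self, read_occupancy, threshold=4):
        # Hint: defer write-backs while read-command occupancy is high,
        # so they do not delay demand reads. Threshold is illustrative.
        if read_occupancy >= threshold:
            return 0
        flushed = 0
        for line in self.lines.values():
            if line.dirty:
                self.memory[line.addr] = line.data  # copy to memory
                line.dirty = False                  # retained, marked clean
                flushed += 1
        return flushed
```

Because the line stays resident and clean after the flush, a later eviction of that line needs no memory write, which is the latency the concurrent write-back buys.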

Ray tracing system architectures and methods

Aspects comprise systems implementing 3-D graphics processing functionality in a multiprocessing system. Control flow structures are used in scheduling instances of computation in the multiprocessing system, where different points in the control flow structure serve as points where deferral of some instances of computation can be performed in favor of scheduling other instances of computation. In some examples, the control flow structure identifies particular tasks, such as intersection testing of a particular portion of an acceleration structure, and a particular element of shading code. In some examples, the aspects are used in 3-D graphics processing systems that can perform ray tracing based rendering.
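One common motivation for deferring instances at a control-flow point is to batch rays that target the same acceleration-structure node. The following is a minimal sketch under that assumption; `DeferralScheduler`, the node keys, and the batch-size trigger are all illustrative, not the patent's API.

```python
from collections import defaultdict, deque

# Hypothetical deferral point: instead of testing each ray immediately,
# park it under the acceleration-structure node it targets and schedule
# a whole batch once enough instances share that node.

class DeferralScheduler:
    def __init__(self, batch_size=2):
        self.batch_size = batch_size
        self.deferred = defaultdict(deque)  # node -> waiting instances

    def submit(self, node, instance):
        # Deferral: enqueue the instance rather than running it now.
        self.deferred[node].append(instance)
        if len(self.deferred[node]) >= self.batch_size:
            # Schedule the batch together; the node data is fetched once
            # and amortized across all deferred instances.
            return list(self.deferred.pop(node))
        return None  # still deferred in favor of other work
```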

Separately processing regions or objects of interest from a render engine to a display engine or a display panel
11200717 · 2021-12-14

Video or graphics, received by a render engine within a graphics processing unit, may be segmented into a region of interest, such as foreground, and a region of less interest, such as background. In other embodiments, an object of interest may be segmented from the rest of the depiction, as in a video game or graphics processing workload. Each segmented portion of a frame may make up a separate surface that is sent separately from the render engine to the display engine of a graphics processing unit. In one embodiment, the display engine combines the two surfaces and sends them over a display link to a display panel. The display controller in the display panel displays the combined frame, which is stored in a buffer and refreshed periodically. In accordance with another embodiment, video or graphics may be segmented by a render engine into regions or objects of interest and objects not of interest, and again each of the separate regions or objects may be transferred to the display engine as a separate surface. The display engine may then transfer the separate surfaces to a display controller of a display panel over a display link. At the display panel, a separate frame buffer may be used for each of the separate surfaces.
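The split-and-recombine step can be illustrated with a toy representation: a frame as a 2-D grid of pixels plus a boolean mask marking the region of interest. The surface encoding (`None` where a surface holds no pixel) and the function names are assumptions made for this sketch.

```python
# Hypothetical model of segmentation into two surfaces and recombination
# by the display engine. ROI pixels win wherever both surfaces overlap.

def split_surfaces(frame, roi_mask):
    """Split one frame into an ROI surface and a background surface."""
    roi = [[p if m else None for p, m in zip(row, mrow)]
           for row, mrow in zip(frame, roi_mask)]
    bg = [[None if m else p for p, m in zip(row, mrow)]
          for row, mrow in zip(frame, roi_mask)]
    return roi, bg

def combine_surfaces(roi, bg):
    """Display-engine side: merge the two surfaces back into one frame."""
    return [[r if r is not None else b for r, b in zip(rrow, brow)]
            for rrow, brow in zip(roi, bg)]
```

Sending the two surfaces separately lets the pipeline update (or compress) the region of interest independently of the background; recombination reproduces the original frame.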

Method and apparatus for processing pixel data of a video frame

Embodiments of the present disclosure provide a method and apparatus for processing a video frame. A specific embodiment of the method includes: receiving a video frame set; selecting a video frame from the video frame set and performing the following processing: creating a new pixel buffer object; reading pixel data of the selected video frame from a frame buffer corresponding to a central processing unit, and writing the read pixel data into the newly created pixel buffer object; storing the pixel buffer object containing the written data into a pixel buffer object queue; determining whether an unselected video frame remains in the video frame set; and storing the video frame set in response to determining that no unselected video frame remains.
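The per-frame loop reads naturally as pseudocode. This sketch stands in for the real GL-side objects: `PixelBufferObject` and the list-copy "read" are placeholders for an actual pixel buffer object and a frame-buffer readback, introduced only for illustration.

```python
from collections import deque

# Hypothetical rendering-side loop: for each frame in the set, create a
# new pixel buffer object (PBO), copy the frame's pixel data into it,
# and enqueue it on the PBO queue.

class PixelBufferObject:
    def __init__(self):
        self.data = None

def process_frames(frames):
    queue = deque()
    for frame in frames:             # select each not-yet-selected frame
        pbo = PixelBufferObject()    # newly created PBO
        pbo.data = list(frame)       # "read pixels" from the frame buffer
        queue.append(pbo)            # store into the PBO queue
    return queue                     # no unselected frame remains
```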

Smooth image scrolling with dynamic scroll extension
11194461 · 2021-12-07

A system and method for performing image scrolling are disclosed. In one embodiment, a system for image scrolling determines the scroll rate for image scrolling. The scroll rate is based on a scroll rate range selected through a user input device and on movement indicated by that device. The system writes a sequence of images from the image cache to the frame buffer for image scrolling on the display at the determined scroll rate.
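One plausible reading of "range plus movement" is that the selected range caps the rate and the input movement scales within it. The formula, the normalization constant, and both function names below are assumptions made for this sketch.

```python
# Hypothetical scroll-rate computation: the chosen range sets a maximum
# rate (images per second) and normalized input movement scales it.

def scroll_rate(range_max_ips, movement, movement_max=100.0):
    # Clamp movement into [0, movement_max], then scale into the range.
    movement = max(0.0, min(movement, movement_max))
    return range_max_ips * (movement / movement_max)

def frames_to_write(image_cache, rate_ips, seconds):
    # Images pushed from the cache to the frame buffer in the interval.
    n = int(rate_ips * seconds)
    return image_cache[:n]
```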

Data Display Method and Device, and Readable Storage Medium

A data display method and device based on an ARM micro-controller, and a readable storage medium are provided. The data display method includes receiving data signals of a display image, and storing the data signals; extending the stored data signals into multiple data signal sets in a preset sequence, and synchronously caching the multiple data signal sets in a rising edge and a falling edge of a clock signal; and controlling the multiple data signal sets to be respectively output to multiple output ports to control a display unit to display the image.
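The extension-and-caching step can be sketched as a data reshaping problem. The round-robin port assignment and the even/odd rising-edge/falling-edge pairing below are assumptions about the scheme, introduced only to make the double-edge idea concrete.

```python
# Hypothetical model: stored data signals are extended into one set per
# output port, and within each set values are split between those latched
# on the rising edge and those latched on the falling edge of the clock.

def extend_signals(signals, num_ports):
    # Round-robin the signal stream across the output ports.
    sets = [signals[i::num_ports] for i in range(num_ports)]
    # DDR-style pairing (assumed): even positions go out on the rising
    # edge, odd positions on the falling edge.
    return [{"rising": s[0::2], "falling": s[1::2]} for s in sets]
```

Latching on both clock edges doubles the effective output bandwidth per port for a given clock frequency, which is the usual motivation for this style of scheme on a modest ARM microcontroller.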

GRAPHICS PROCESSING UNITS WITH POWER MANAGEMENT AND LATENCY REDUCTION

The graphics processing unit (GPU) of a processing system transitions to a low-power state between frame rendering operations according to an inter-frame power-off process, in which GPU state information is stored on retention hardware. The retention hardware can include retention random access memory (RAM) or retention flip-flops. The retention hardware is operable in an active mode and a retention mode: read/write operations are enabled at the retention hardware in the active mode and disabled in the retention mode, but data stored on the retention hardware is still retained in the retention mode. The retention hardware is placed in the retention mode between frame rendering operations. The GPU transitions from its low-power state to its active state upon receiving an indication that a new frame is ready to be rendered, and is restored using the GPU state information stored at the retention hardware.
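The active/retention contract is easy to model in software. This is an illustrative behavioral model, not hardware: `RetentionRAM`, the mode constants, and the helper functions are all names invented for the sketch.

```python
# Hypothetical model of retention hardware: reads/writes are only legal
# in active mode; in retention mode accesses are disabled but contents
# persist, so state survives the inter-frame power-off.

ACTIVE, RETENTION = "active", "retention"

class RetentionRAM:
    def __init__(self):
        self.mode = ACTIVE
        self._cells = {}

    def write(self, key, value):
        if self.mode != ACTIVE:
            raise RuntimeError("write disabled in retention mode")
        self._cells[key] = value

    def read(self, key):
        if self.mode != ACTIVE:
            raise RuntimeError("read disabled in retention mode")
        return self._cells[key]

def inter_frame_power_off(gpu_state, ram):
    ram.write("gpu_state", dict(gpu_state))  # save before powering down
    ram.mode = RETENTION                     # ports off, contents retained

def wake_for_frame(ram):
    ram.mode = ACTIVE                        # new frame ready to render
    return ram.read("gpu_state")             # restore saved GPU state
```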

Systems and method for GPU based virtual reality video streaming server
11375172 · 2022-06-28

Systems and methods of processing and streaming a virtual reality video using a graphics processing unit (GPU) are provided. A video server is configured to cause a processor to read, from a video data source, source video data including multiple spherical image frame data and store the source video data in a first memory. The video server is further configured to cause the GPU, in response to storing first spherical image frame data in a first frame buffer of a second memory, to convert the first spherical image frame data to first equirectangular image frame data that corresponds to a portion of the spherical image represented by the first spherical image frame data, to encode the converted first equirectangular image frame data, and to store the encoded first equirectangular image frame data in an encoded frame buffer of the second memory.
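At the heart of the conversion is the standard equirectangular mapping (this is the textbook projection, not a formula taken from the patent): longitude maps linearly to the horizontal axis and latitude to the vertical axis of the output image.

```python
import math

# Standard equirectangular projection: lon in [-pi, pi] -> x,
# lat in [-pi/2, pi/2] -> y (top of the image is the north pole).

def sphere_to_equirect(lon, lat, width, height):
    x = (lon + math.pi) / (2 * math.pi) * (width - 1)
    y = (math.pi / 2 - lat) / math.pi * (height - 1)
    return round(x), round(y)
```

On a GPU this per-pixel mapping is trivially parallel, which is why the abstract routes the conversion (and the subsequent encode) through the GPU rather than the CPU.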

Cache replacement mechanism

An apparatus to facilitate cache replacement is disclosed. The apparatus includes a cache memory and cache replacement logic to manage data in the cache memory. The cache replacement logic includes tracking logic to track addresses accessed at the cache memory and replacement control logic to monitor the tracking logic and apply a replacement policy based on information received from the tracking logic.
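The tracking-logic/replacement-control split can be sketched with a concrete policy. LRU is chosen here purely for illustration; the abstract does not say which policy the replacement control applies, and all class names are assumptions.

```python
from collections import OrderedDict

# Hypothetical split: tracking logic records accessed addresses in order;
# replacement control consults it to pick a victim (LRU, for the sketch).

class TrackingLogic:
    def __init__(self):
        self.order = OrderedDict()   # address -> None, in access order

    def record(self, addr):
        self.order.pop(addr, None)
        self.order[addr] = None      # most recently used at the end

class Cache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = {}
        self.tracker = TrackingLogic()

    def access(self, addr, value):
        self.tracker.record(addr)
        if addr not in self.data and len(self.data) >= self.capacity:
            # Replacement control: evict the least recently used line.
            victim = next(iter(self.tracker.order))
            del self.tracker.order[victim]
            del self.data[victim]
        self.data[addr] = value
```

Keeping the tracker separate from the storage mirrors the claim structure: the same tracking information could drive a different policy without touching the cache array itself.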

APPARATUS AND METHOD FOR POWER MANAGEMENT OF A COMPUTING SYSTEM
20220187898 · 2022-06-16 ·

A multiple graphics processing unit (GPU) based parallel graphics system comprising multiple graphics processing pipelines with multiple GPUs supporting a parallel graphics rendering process having an object division mode of operation. Each GPU comprises video memory, a geometry processing subsystem and a pixel processing subsystem. According to the principles of the present invention, pixel (color and z depth) data buffered in the video memory of each GPU is communicated to the video memory of a primary GPU, and the video memory and the pixel processing subsystem in the primary GPU are used to carry out the image recomposition process, without the need for dedicated or specialized apparatus.
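The image recomposition step is essentially a per-pixel depth compare across the GPUs' buffered outputs. This sketch assumes a simple (color, z) pixel representation with an infinite depth marking an empty pixel; the representation and function name are illustrative, not the patent's.

```python
# Hypothetical object-division recomposition: each GPU renders its subset
# of objects into color + z buffers; the primary GPU keeps, per pixel,
# the color with the smaller (nearer) depth.

INF = float("inf")  # z == INF means this GPU drew nothing at that pixel

def recompose(primary, secondary):
    # Each buffer is a flat list of (color, z) pixels of equal length.
    return [pc if pz <= sz else sc
            for (pc, pz), (sc, sz) in zip(primary, secondary)]
```

Because this is just a compare-and-select over data already sitting in the primary GPU's video memory, the pixel processing subsystem can perform it directly, which is what lets the scheme avoid dedicated compositing hardware.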