Patent classifications
G06F12/0857
Mirroring multiple writeable storage arrays
Systems, methods, and computer program products for mirroring two or more writeable storage arrays are provided. Various embodiments provide configurations including two or more mirrored storage arrays that can each be written to by different hosts. When commands to write data to corresponding mirrored data blocks within the respective storage arrays are received from different hosts at substantially the same time, write priority for writing data to the mirrored data blocks is given to one of the storage arrays based on one or more predetermined criteria.
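As a rough illustration of the conflict-resolution idea, here is a minimal Python sketch. The class name MirrorPair and the lowest-host-id tie-break are assumptions for illustration; the abstract leaves the actual predetermined criteria open.

```python
# Minimal sketch of simultaneous-write resolution between mirrored
# writeable arrays. The "predetermined criterion" here is simply that
# the lowest-numbered host wins (an assumed policy, not the patent's).

class MirrorPair:
    def __init__(self, num_blocks):
        self.arrays = [[None] * num_blocks, [None] * num_blocks]

    def write(self, requests):
        """requests: list of (host_id, block, data) arriving together."""
        by_block = {}
        for host_id, block, data in requests:
            by_block.setdefault(block, []).append((host_id, data))
        for block, writers in by_block.items():
            # Predetermined criterion: lowest host id gets write priority.
            host_id, data = min(writers, key=lambda w: w[0])
            for array in self.arrays:          # mirror to both arrays
                array[block] = data

pair = MirrorPair(8)
pair.write([(1, 3, "from host 1"), (0, 3, "from host 0")])
assert pair.arrays[0][3] == pair.arrays[1][3] == "from host 0"
```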
METHOD AND SYSTEM FOR ACCELERATING STORAGE OF DATA IN WRITE-INTENSIVE COMPUTER APPLICATIONS
A method of optimising a service rate of a buffer in a computer system having memory stores of a first and a second type is described. The method selectively services the buffer by routing data to each of the memory stores of the first type and the second type based on the read/write capacity of the memory store of the first type.
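A hedged sketch of the selective-servicing idea follows: the buffer is drained by routing each item to the first-type store while that store has spare write capacity, and spilling to the second-type store otherwise. The outstanding-write budget used as the capacity model is an assumption, not something the abstract specifies.

```python
# Drain a shared buffer across two memory stores, gated by the
# first store's (assumed) write-capacity budget.

from collections import deque

def service_buffer(buffer, first_capacity):
    first_store, second_store = [], []
    outstanding = 0
    while buffer:
        item = buffer.popleft()
        if outstanding < first_capacity:   # first store has headroom
            first_store.append(item)
            outstanding += 1
        else:                              # fall back to second store
            second_store.append(item)
    return first_store, second_store

fast, slow = service_buffer(deque(range(10)), first_capacity=4)
print(fast, slow)   # first 4 items go to the fast store, rest spill over
```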
SELECTIVELY WRITING BACK DIRTY CACHE LINES CONCURRENTLY WITH PROCESSING
A graphics pipeline includes a cache having cache lines that are configured to store data used to process frames in a graphics pipeline. The graphics pipeline is implemented using a processor that processes frames for the graphics pipeline using data stored in the cache. The processor processes a first frame and writes back a dirty cache line from the cache to a memory concurrently with processing of the first frame. The dirty cache line is retained in the cache and marked as clean subsequent to being written back to the memory. In some cases, the processor generates a hint that indicates a priority for writing back the dirty cache line based on a read command occupancy at a system memory controller.
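The following is an illustrative sketch, not the patented implementation, of eager writeback: dirty lines are flushed to memory while processing continues, then retained and marked clean. Modeling the hint as a simple occupancy threshold is an assumption.

```python
# Eager writeback sketch: flush dirty lines only when the memory
# controller's read command occupancy is low, then keep the lines
# resident and mark them clean.

class Line:
    def __init__(self, data):
        self.data, self.dirty = data, False

class Cache:
    def __init__(self):
        self.lines, self.memory = {}, {}

    def store(self, addr, data):
        self.lines[addr] = Line(data)
        self.lines[addr].dirty = True

    def eager_writeback(self, read_occupancy, threshold=0.5):
        if read_occupancy > threshold:     # controller busy with reads
            return
        for addr, line in self.lines.items():
            if line.dirty:
                self.memory[addr] = line.data  # write back...
                line.dirty = False             # ...retain and mark clean

c = Cache()
c.store(0x10, "pixel data")
c.eager_writeback(read_occupancy=0.2)
assert c.memory[0x10] == "pixel data" and not c.lines[0x10].dirty
```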
Pre-decompressing a compressed form of data that has been pre-fetched into a cache to facilitate subsequent retrieval of a decompressed form of the data from the cache
Pre-decompressing a compressed form of data that has been pre-fetched into a cache to facilitate subsequent retrieval of a decompressed form of the data from the cache is presented herein. In response to a first cache hit in a first portion of a cache, a system retrieves from that portion a compression chunk comprising compressed data blocks representing a compressed form of a group of data blocks, decompresses the compression chunk to obtain a decompressed chunk comprising uncompressed data blocks representing an uncompressed form of the group of data blocks, and inserts the uncompressed data blocks into a second portion of the cache. Further, in response to a second cache hit in the second portion of the cache, the system retrieves an uncompressed data block of the uncompressed data blocks from that portion.
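A minimal two-tier sketch of this flow, with zlib standing in for the unspecified compression scheme: a hit in the compressed portion triggers decompression and insertion of all blocks into the second portion, so later hits are served uncompressed.

```python
# Two-portion cache sketch: compressed chunks in one portion,
# decompressed blocks promoted into the other on first hit.

import zlib

compressed_cache = {}    # chunk_id -> compressed chunk (group of blocks)
uncompressed_cache = {}  # block_id -> uncompressed block

def lookup(chunk_id, block_id, block_size=4):
    if block_id in uncompressed_cache:           # second-portion hit
        return uncompressed_cache[block_id]
    chunk = zlib.decompress(compressed_cache[chunk_id])  # first-portion hit
    for i in range(0, len(chunk), block_size):   # insert every block
        uncompressed_cache[(chunk_id, i // block_size)] = chunk[i:i + block_size]
    return uncompressed_cache[block_id]

compressed_cache["c0"] = zlib.compress(b"AAAABBBBCCCC")
print(lookup("c0", ("c0", 1)))  # first hit decompresses: b'BBBB'
print(lookup("c0", ("c0", 2)))  # later hit served uncompressed: b'CCCC'
```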
VIRTUAL NETWORK PRE-ARBITRATION FOR DEADLOCK AVOIDANCE AND ENHANCED PERFORMANCE
A device includes a data path, a first interface configured to receive a first memory access request from a first peripheral device, and a second interface configured to receive a second memory access request from a second peripheral device. The device further includes an arbiter circuit configured to, in a first clock cycle, select a pre-arbitration winner between the first memory access request and the second memory access request based on a first number of credits allocated to a first destination device and a second number of credits allocated to a second destination device. The arbiter circuit is further configured to, in a second clock cycle, select a final arbitration winner from among the pre-arbitration winner and a subsequent memory access request based on a comparison of a priority of the pre-arbitration winner and a priority of the subsequent memory access request.
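A sketch of the two-stage arbitration under assumed data shapes: cycle 1 picks a pre-arbitration winner by destination credits, cycle 2 compares its priority against a request that arrived in the meantime.

```python
# Two-cycle arbitration sketch. Request and credit representations
# are illustrative assumptions.

def pre_arbitrate(req_a, req_b, credits):
    # Cycle 1: the request whose destination holds more credits wins.
    return req_a if credits[req_a["dest"]] >= credits[req_b["dest"]] else req_b

def final_arbitrate(pre_winner, subsequent):
    # Cycle 2: the higher-priority request wins final arbitration.
    return pre_winner if pre_winner["prio"] >= subsequent["prio"] else subsequent

credits = {"mem0": 3, "mem1": 1}
a = {"id": "periph-A", "dest": "mem0", "prio": 2}
b = {"id": "periph-B", "dest": "mem1", "prio": 5}
pre = pre_arbitrate(a, b, credits)           # periph-A (more credits)
late = {"id": "periph-C", "dest": "mem1", "prio": 7}
print(final_arbitrate(pre, late)["id"])      # periph-C (higher priority)
```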
Apparatus and method for improved cache utilization and efficiency on a many-core processor
Apparatus and method for improved cache utilization and efficiency on a many-core processor. The apparatus comprises a plurality of execution units to generate cache access requests responsive to executing instructions; a pending request queue to store pending cache access requests generated by the execution units; and pending queue management circuitry to compare a current cache access request with entries in the pending request queue to determine whether the current cache access request can be merged with an entry in the pending request queue and, if so, to merge the current cache access request with that entry.
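A hedged sketch of request merging: a new access that targets the same cache line as a pending entry is folded into that entry rather than enqueued again. Merging at cache-line granularity is an assumption.

```python
# Pending-request-queue merging sketch.

LINE = 64     # assumed cache line size in bytes

pending = []  # entries: {"line": line_number, "requesters": [...]}

def enqueue(addr, requester):
    line = addr // LINE
    for entry in pending:
        if entry["line"] == line:          # mergeable: same cache line
            entry["requesters"].append(requester)
            return entry
    entry = {"line": line, "requesters": [requester]}
    pending.append(entry)
    return entry

enqueue(0x100, "eu0")
enqueue(0x120, "eu1")   # same 64-byte line as 0x100 -> merged
print(pending)          # one entry with requesters ['eu0', 'eu1']
```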
Adaptive cache reconfiguration via clustering
A method of dynamic cache configuration includes determining, for a first clustering configuration, whether a current cache miss rate exceeds a miss rate threshold. The first clustering configuration includes a plurality of graphics processing unit (GPU) compute units clustered into a first plurality of compute unit clusters. The method further includes clustering, based on the current cache miss rate exceeding the miss rate threshold, the plurality of GPU compute units into a second clustering configuration having a second plurality of compute unit clusters fewer than the first plurality of compute unit clusters.
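An illustrative sketch of the reclustering trigger: when the measured miss rate exceeds the threshold, the compute units are regrouped into fewer, larger clusters. The halving policy is an assumption; the abstract only requires that the second configuration have fewer clusters.

```python
# Miss-rate-driven reclustering sketch for GPU compute units.

def recluster(compute_units, num_clusters, miss_rate, threshold=0.3):
    if miss_rate > threshold and num_clusters > 1:
        num_clusters = max(1, num_clusters // 2)   # fewer clusters
    size = -(-len(compute_units) // num_clusters)  # ceiling division
    return [compute_units[i:i + size]
            for i in range(0, len(compute_units), size)]

cus = [f"cu{i}" for i in range(8)]
print(recluster(cus, num_clusters=4, miss_rate=0.5))  # -> 2 clusters of 4
print(recluster(cus, num_clusters=4, miss_rate=0.1))  # -> stays at 4 clusters
```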
Method and apparatus for managing page cache for multiple foreground applications
Provided are a method and apparatus for managing a page cache for multiple foreground applications. A method of managing a page cache includes identifying an application accessing data stored in storage; allocating a page used by the application for the accessed data to a page cache; setting a page variable corresponding to a type of the identified application for the allocated page; and managing demotion of the allocated page based on the set page variable when the allocated page is a demotion target.
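A minimal sketch of type-tagged demotion, with assumed application types: each page is tagged with the type of the app that faulted it in, and the tag decides whether a demotion target is actually evicted.

```python
# Page-cache demotion gated by an app-type page variable.

FOREGROUND, BACKGROUND = "foreground", "background"

page_cache = {}   # page_id -> app-type tag (the "page variable")

def allocate(page_id, app_type):
    page_cache[page_id] = app_type

def demote(page_id):
    # Assumed policy: pages of foreground apps resist demotion.
    if page_cache.get(page_id) == FOREGROUND:
        return False            # keep the page cached
    page_cache.pop(page_id, None)
    return True                 # demoted

allocate("p1", FOREGROUND)
allocate("p2", BACKGROUND)
print(demote("p1"), demote("p2"))   # False True
```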
Cache management system and method
A method, computer program product, and computing system for receiving a plurality of data streams on an SSD cache memory system associated with a backend storage system and writing a first of the plurality of data streams to a first portion of the SSD cache memory system.
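A short sketch of per-stream placement: each incoming stream is written to its own portion of the SSD cache so that streams do not interleave. The fixed stream-to-portion map is an assumption for illustration.

```python
# Stream-to-portion placement sketch for an SSD cache.

ssd_cache = {"portion0": [], "portion1": []}
stream_to_portion = {"stream-A": "portion0", "stream-B": "portion1"}

def write_stream(stream_id, blocks):
    portion = stream_to_portion[stream_id]
    ssd_cache[portion].extend(blocks)   # first stream -> first portion

write_stream("stream-A", ["a0", "a1"])
write_stream("stream-B", ["b0"])
print(ssd_cache)   # each stream kept in its own portion
```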
ACCELERATED PROCESSING OF STREAMS OF LOAD-RESERVE REQUESTS
A processing unit for a data processing system includes a processor core that issues memory access requests and a cache memory coupled to the processor core. The cache memory includes a reservation circuit that tracks reservations established by the processor core via load-reserve requests and a plurality of read-claim (RC) state machines for servicing memory access requests of the processor core. The cache memory, responsive to receipt from the processor core of a store-conditional request specifying a store target address, allocates an RC state machine among the plurality of RC state machines to process the store-conditional request and transfers responsibility for tracking a reservation for the store target address from the reservation circuit to the RC state machine.
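A hedged sketch of the reservation hand-off: load-reserve establishes a reservation in the reservation circuit, and a later store-conditional allocates a read-claim (RC) machine, which takes over tracking the reserved address. The object model and completion sequence are illustrative only.

```python
# Load-reserve / store-conditional hand-off sketch.

class CacheUnit:
    def __init__(self, num_rc):
        self.reservations = {}               # core reservation circuit
        self.rc_machines = [None] * num_rc   # read-claim state machines

    def load_reserve(self, core, addr):
        self.reservations[core] = addr

    def store_conditional(self, core, addr):
        if self.reservations.get(core) != addr:
            return False                     # reservation lost -> fail
        idx = self.rc_machines.index(None)   # allocate a free RC machine
        self.rc_machines[idx] = addr         # RC machine now tracks addr
        del self.reservations[core]          # reservation circuit released
        self.rc_machines[idx] = None         # store completes, RC freed
        return True

cu = CacheUnit(num_rc=2)
cu.load_reserve("core0", 0x40)
print(cu.store_conditional("core0", 0x40))   # True: reservation held
```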