G06F12/0824

PIPELINE ARBITRATION
20230023242 · 2023-01-26 ·

A method includes receiving, by a first stage in a pipeline, a first transaction from a previous stage in the pipeline; in response to the first transaction comprising a high-priority transaction, processing the high-priority transaction by sending it to a buffer; receiving a second transaction from the previous stage; in response to the second transaction comprising a low-priority transaction, processing the low-priority transaction by monitoring a full signal from the buffer while sending the low-priority transaction to the buffer; in response to the full signal being asserted and no high-priority transaction being available from the previous stage, pausing processing of the low-priority transaction; in response to the full signal being asserted and a high-priority transaction being available from the previous stage, stopping processing of the low-priority transaction and processing the high-priority transaction; and in response to the full signal being de-asserted, processing the low-priority transaction by sending it to the buffer.
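
The claimed flow maps naturally onto a small arbitration loop. Below is a minimal, self-contained C model of that loop; the buffer, the previous-stage queue, and the reserved high-priority slot are illustrative assumptions, not the claimed hardware.

#include <stdbool.h>
#include <stdio.h>

typedef enum { TXN_LOW, TXN_HIGH } txn_kind;
typedef struct { txn_kind kind; int id; } txn;

#define BUF_CAP 3
static txn buf[BUF_CAP];
static int buf_len = 0;

/* "Full" is asserted while there is no room for low-priority work; one
 * slot is held in reserve so a preempting high-priority transaction can
 * still be accepted (a simplifying assumption of this model). */
static bool full_signal(void)      { return buf_len >= BUF_CAP - 1; }
static void buffer_send(txn t)     { buf[buf_len++] = t; }
static void buffer_drain_one(void) { if (buf_len > 0) buf_len--; }

/* The previous stage, modeled as an array we can peek for high-priority work. */
static txn pending[2] = { {TXN_HIGH, 3}, {TXN_LOW, 4} };
static int next_idx = 0, n_pending = 2;

static bool prev_high_ready(void)
{
    return next_idx < n_pending && pending[next_idx].kind == TXN_HIGH;
}

static void process(txn t)
{
    if (t.kind == TXN_HIGH) {          /* high priority: send unconditionally */
        buffer_send(t);
        return;
    }
    while (full_signal()) {            /* low priority: watch the full signal */
        if (prev_high_ready() && buf_len < BUF_CAP)
            buffer_send(pending[next_idx++]);   /* stop low, service high */
        else
            buffer_drain_one();        /* pause; downstream makes room */
    }
    buffer_send(t);                    /* full de-asserted: send the low txn */
}

int main(void)
{
    process((txn){TXN_HIGH, 1});
    process((txn){TXN_HIGH, 2});
    process((txn){TXN_LOW,  5});       /* buffer is full: yields to txn 3 first */
    printf("buffer occupancy: %d of %d\n", buf_len, BUF_CAP);
    return 0;
}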

Remote atomic operations in multi-socket systems

Disclosed embodiments relate to remote atomic operations (RAO) in multi-socket systems. In one example, a method, performed by a cache control circuit of a requester socket, includes: receiving an RAO instruction addressing a cache line from a requester CPU core; determining a home agent in a home socket for the addressed cache line; providing a request for ownership (RFO) of the addressed cache line to the home agent; waiting for the home agent either to invalidate and retrieve the latest copy of the addressed cache line from a cache, or to fetch the addressed cache line from memory; receiving an acknowledgement and the addressed cache line; executing the RAO instruction on the received cache line atomically; subsequently receiving multiple local RAO instructions to the addressed cache line from one or more requester CPU cores; and executing the multiple local RAO instructions on the received cache line independently of the home agent.
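
In software terms, the requester-side flow amounts to: pay the home-agent round trip once, then satisfy later RAOs to the same line locally. A rough C sketch follows, where rfo_fetch() is a stand-in for the ownership handshake and an atomic add stands in for whatever operation the RAO performs.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Cache control circuit state for one line, heavily simplified. */
typedef struct {
    uint64_t addr;
    int64_t  value;
    bool     owned;   /* this socket owns the line after the RFO completes */
} cache_line;

/* Stand-in for the home-agent handshake: the home agent invalidates and
 * retrieves the latest copy from a cache, or fetches it from memory,
 * then acknowledges with the line. */
static int64_t rfo_fetch(uint64_t addr)
{
    printf("RFO sent to home agent for line 0x%llx\n", (unsigned long long)addr);
    return 0;   /* latest value of the line (stubbed) */
}

/* One RAO, with atomic add standing in for the operation itself. */
static void rao_add(cache_line *line, uint64_t addr, int64_t delta)
{
    if (!line->owned || line->addr != addr) {
        line->value = rfo_fetch(addr);   /* first RAO: home-agent round trip */
        line->addr  = addr;
        line->owned = true;
    }
    line->value += delta;   /* owned locally: execute without the home agent */
}

int main(void)
{
    cache_line line = {0, 0, false};
    rao_add(&line, 0x1000, 5);   /* triggers the RFO */
    rao_add(&line, 0x1000, 7);   /* local, no home-agent traffic */
    rao_add(&line, 0x1000, 1);   /* local, no home-agent traffic */
    printf("line value = %lld\n", (long long)line.value);
    return 0;
}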

Cache Replacement Based on Traversal Tracking
20230012199 · 2023-01-12 ·

Techniques are disclosed relating to controlling cache replacement. In some embodiments, a computing system performs multiple searches of a data structure, where one or more of the searches traverse multiple links between elements of the data structure. The system may cache, in a traversal cache, traversal information that is usable by searches to skip one or more links traversed by one or more prior searches. The system may store tracking information that indicates a location in the traversal cache at which prior traversal information for a first search is stored. The system may select, based on the tracking information, an entry in the traversal cache for new traversal information generated by the first search. The selection may override a default replacement policy for the traversal cache, e.g., to select the location in the traversal cache to replace the prior traversal information with the new traversal information.
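
One way to picture the override: the traversal cache has a default victim-selection policy, but a search that already stored traversal information reuses its own slot. A toy C model follows, with round-robin standing in for whatever the real default policy is; all names are invented.

#include <stdio.h>

#define TC_SLOTS 4
#define MAX_SEARCHES 16
#define NO_SLOT (-1)

typedef struct { int search_id; int skip_link; } tc_entry;

static tc_entry tcache[TC_SLOTS];          /* the traversal cache            */
static int      rr_next = 0;               /* default policy: round-robin    */
static int      track[MAX_SEARCHES];       /* tracking info: search -> slot  */

static void tc_insert(int search_id, int skip_link)
{
    int slot = track[search_id];
    if (slot == NO_SLOT) {                 /* no prior entry: default policy */
        slot = rr_next;
        rr_next = (rr_next + 1) % TC_SLOTS;
    }
    /* else: override the default policy and replace this search's own
     * prior traversal information in place. */
    tcache[slot] = (tc_entry){ search_id, skip_link };
    track[search_id] = slot;
}

int main(void)
{
    for (int i = 0; i < MAX_SEARCHES; i++) track[i] = NO_SLOT;
    tc_insert(1, 100);   /* search 1: placed by the default policy       */
    tc_insert(2, 200);   /* search 2: placed by the default policy       */
    tc_insert(1, 150);   /* search 1 again: overwrites its prior entry   */
    for (int s = 0; s < TC_SLOTS; s++)
        printf("slot %d: search %d, link %d\n",
               s, tcache[s].search_id, tcache[s].skip_link);
    return 0;
}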

MAINTAINING AND RECOMPUTING REFERENCE COUNTS IN A PERSISTENT MEMORY FILE SYSTEM

Techniques are provided for maintaining and recomputing reference counts in a persistent memory file system of a node. Primary reference counts are maintained for pages within persistent memory of the node. In response to receiving a first operation to link a page into a persistent memory file system of the persistent memory, a primary reference count of the page is incremented before the page is linked into the persistent memory file system. In response to receiving a second operation to unlink the page from the persistent memory file system, the page is unlinked from the persistent memory file system before the primary reference count is decremented. Upon the node recovering from a crash, the persistent memory file system is traversed in order to update shadow reference counts for the pages with correct reference count values, which are then used to overwrite the primary reference counts.
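
The ordering is the crux: incrementing before linking and unlinking before decrementing means a crash can only leave a primary count that is too high, which the post-crash traversal then corrects. A schematic C rendering follows, with persist() standing in for real persistent-memory flush/fence primitives and a single page standing in for the file-system tree.

#include <stdio.h>

typedef struct page {
    int refcount;        /* primary reference count (persisted)  */
    int linked;          /* 1 if reachable from the file system  */
} page;

static void persist(void *p) { (void)p; /* e.g., flush + fence on real PMEM */ }

static void link_page(page *pg)
{
    pg->refcount++;      /* step 1: increment first */
    persist(pg);
    pg->linked = 1;      /* step 2: then link       */
    persist(pg);
}

static void unlink_page(page *pg)
{
    pg->linked = 0;      /* step 1: unlink first    */
    persist(pg);
    pg->refcount--;      /* step 2: then decrement  */
    persist(pg);
}

/* Crash recovery: recompute a shadow count by traversing the file system,
 * then overwrite the (possibly stale-high) primary count with it. */
static void recover(page *pg)
{
    int shadow = pg->linked ? 1 : 0;   /* traversal result for this page */
    pg->refcount = shadow;
    persist(pg);
}

int main(void)
{
    page pg = {0, 0};
    link_page(&pg);
    printf("after link: refcount=%d linked=%d\n", pg.refcount, pg.linked);
    /* Suppose a crash hits between the two steps of an unlink: the
     * primary count can only be too high, and recovery corrects it. */
    unlink_page(&pg);
    recover(&pg);
    printf("after recovery: refcount=%d\n", pg.refcount);
    return 0;
}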

IMPLIED DIRECTORY STATE UPDATES
20230367481 · 2023-11-16 ·

A request for a particular line in memory is received over a link. A directory state record that identifies a directory state of the particular line is identified in memory. A type of the request is identified from the request. It is determined, based on the directory state of the particular line and the type of the request, that the directory state of the particular line is to change from its current state to a new state. The directory state record is changed, in response to receipt of the request, to reflect the new state. A copy of the particular line is sent in response to the request.
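
The point is that the new directory state is implied by the pair (current state, request type), so the record can be updated at request time without a separate directory-update message. A toy transition function in C follows; the states and transition table are simplified assumptions.

#include <stdio.h>

typedef enum { DIR_INVALID, DIR_SHARED, DIR_EXCLUSIVE } dir_state;
typedef enum { REQ_READ_SHARED, REQ_READ_OWN } req_type;

/* The next state is implied by (current state, request type). */
static dir_state next_state(dir_state cur, req_type req)
{
    if (req == REQ_READ_OWN) return DIR_EXCLUSIVE;  /* ownership implied  */
    if (cur == DIR_INVALID)  return DIR_SHARED;     /* first reader       */
    return cur;                                     /* stays shared, etc. */
}

int main(void)
{
    dir_state line_state = DIR_INVALID;

    /* A request for the line arrives over the link. */
    req_type req = REQ_READ_OWN;
    line_state = next_state(line_state, req);  /* update the record ...   */
    printf("directory state now %d; sending copy of line\n", line_state);
    /* ... and send a copy of the line in response. */
    return 0;
}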

IN-KERNEL CACHE REQUEST QUEUING FOR DISTRIBUTED CACHE
20230367713 · 2023-11-16 ·

A node includes at least one memory for use as a shared cache in a distributed cache. One or more other nodes on a network each provide a respective shared cache for the distributed cache. A request to access data in the shared cache is received by a kernel of the node, and an Input/Output (I/O) queue is identified, from among a plurality of I/O queues in a kernel space of the at least one memory, for queuing the received request based on at least one of a priority indicated by the received request and an application that initiated the request. In another aspect, each I/O queue of the plurality of I/O queues corresponds to at least one of different respective priorities for requests to access data in the shared cache and different respective applications initiating requests to access data in the shared cache.
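
A minimal sketch of the queue-selection step in C; the per-(priority, application) queue layout and the fallback policy are assumptions for illustration, not the actual kernel data structures.

#include <stdio.h>

#define N_PRIO 4
#define N_APPS 8

typedef struct { int depth; } io_queue;

/* One I/O queue per (priority class, application) pair, all resident in
 * kernel space alongside the shared-cache request path. */
static io_queue queues[N_PRIO][N_APPS];

/* Pick the queue for a shared-cache request based on the priority the
 * request indicates and the application that initiated it. */
static io_queue *select_queue(int prio, int app_id)
{
    if (prio < 0 || prio >= N_PRIO)
        prio = N_PRIO - 1;            /* unknown priority: lowest class */
    return &queues[prio][app_id % N_APPS];
}

int main(void)
{
    io_queue *q = select_queue(0, 42);  /* high-priority request, app 42 */
    q->depth++;
    printf("request queued; queue depth now %d\n", q->depth);
    return 0;
}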

CACHE SYSTEMS
20230195630 · 2023-06-22 ·

A method of operating a cache system is disclosed. Different entries in the same cache of the cache system are addressed using different address domains. The different address domains may be associated with different data types that are cached by the cache.
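
Concretely, entries can be keyed by a (domain, address) pair so the same numeric address lives in the cache once per domain. A toy C lookup follows; the DOM_TEXTURE / DOM_VERTEX domain names are invented examples of "different data types", and the hash is arbitrary.

#include <stdint.h>
#include <stdio.h>

typedef enum { DOM_TEXTURE, DOM_VERTEX } addr_domain;

#define SETS 64

typedef struct { int valid; addr_domain dom; uint64_t addr; } tag;

static tag tags[SETS];

static int cache_lookup(addr_domain dom, uint64_t addr)
{
    unsigned set = (unsigned)((addr ^ ((uint64_t)dom * 0x9E3779B9u)) % SETS);
    tag *t = &tags[set];
    if (t->valid && t->dom == dom && t->addr == addr)
        return 1;                          /* hit                   */
    *t = (tag){ 1, dom, addr };            /* miss: fill this entry */
    return 0;
}

int main(void)
{
    printf("%d\n", cache_lookup(DOM_TEXTURE, 0x40));  /* 0: miss, fills   */
    printf("%d\n", cache_lookup(DOM_TEXTURE, 0x40));  /* 1: hit           */
    printf("%d\n", cache_lookup(DOM_VERTEX,  0x40));  /* 0: same address, */
    return 0;                                         /*    other domain  */
}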

CACHE SYSTEMS
20230195631 · 2023-06-22 ·

A method of operating a cache system is disclosed. An update to an entry in one cache of the cache system triggers updates to plural related entries in another cache of the cache system. The entries may be related to each other by virtue of caching data for the same compression block.
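
Schematically: when a cache A entry for a compression block is updated, the system fans the update out to every cache B entry tied to the same block. A toy C model follows, with invalidation chosen as the "update" for simplicity; the structures are illustrative.

#include <stdio.h>

#define B_ENTRIES 8

typedef struct { int valid; int block_id; } b_entry;

static b_entry cache_b[B_ENTRIES];

static void update_a_entry(int block_id)
{
    /* ... update the cache A entry for this compression block ... */
    for (int i = 0; i < B_ENTRIES; i++)           /* then fan out to B */
        if (cache_b[i].valid && cache_b[i].block_id == block_id)
            cache_b[i].valid = 0;                 /* related-entry update */
}

int main(void)
{
    cache_b[0] = (b_entry){1, 7};
    cache_b[1] = (b_entry){1, 7};
    cache_b[2] = (b_entry){1, 9};
    update_a_entry(7);                            /* touches entries 0 and 1 */
    for (int i = 0; i < 3; i++)
        printf("B[%d] valid=%d\n", i, cache_b[i].valid);
    return 0;
}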

Scalable System on a Chip

A system including a plurality of processor cores, a plurality of graphics processing units, a plurality of peripheral circuits, and a plurality of memory controllers is configured to support scaling of the system using a unified memory architecture.
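
Little mechanism is given in the abstract, but the unified-memory idea can be sketched: however many cores, GPUs, peripherals, and memory controllers are instantiated, software sees one address space, with physical addresses distributed across whatever controllers exist. A speculative C illustration follows; the component counts and the page-granular interleaving rule are pure assumptions.

#include <stdint.h>
#include <stdio.h>

typedef struct {
    int      n_cpu_cores, n_gpus, n_peripherals, n_mem_ctrls;
    uint64_t mem_size;                    /* one unified address space */
} soc_config;

/* Physical addresses interleave across memory controllers, so adding
 * controllers scales bandwidth without changing the software's view. */
static int home_controller(const soc_config *soc, uint64_t addr)
{
    return (int)((addr >> 12) % (uint64_t)soc->n_mem_ctrls);
}

int main(void)
{
    soc_config small = { 8, 2, 16, 2, 1ull << 34 };
    soc_config big   = { 32, 8, 64, 8, 1ull << 37 };   /* scaled-up variant */
    uint64_t addr = 0x123456000ull;
    printf("small SoC: addr 0x%llx -> controller %d\n",
           (unsigned long long)addr, home_controller(&small, addr));
    printf("big SoC:   addr 0x%llx -> controller %d\n",
           (unsigned long long)addr, home_controller(&big, addr));
    return 0;
}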
