G06F12/0811

DYNAMICALLY ALLOCATABLE PHYSICALLY ADDRESSED METADATA STORAGE

In examples, there is a computing device comprising a processor, the processor having a memory management unit. The computing device also has a memory that stores instructions that, when executed by the processor, cause the memory management unit to receive a memory access instruction comprising a virtual memory address; translate the virtual memory address to a physical memory address of the memory; and obtain permission information associated with the physical memory address. Responsive to the permission information indicating that metadata is permitted to be associated with the physical memory address, a metadata summary table stored in the physical memory is checked to determine whether metadata is compatible with the physical memory address. Responsive to the check being unsuccessful, a trap is sent to system software of the computing device in order to trigger dynamic allocation of physical memory for storing metadata associated with the physical memory address.
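
The check-then-trap flow described above can be sketched as a toy model. This is not the patented implementation; the dict-backed page table, permission map, and `MetadataTrap` exception are all invented for illustration.

```python
# Hypothetical sketch of the metadata-check flow: translate, check
# permissions, consult the metadata summary table, trap on a miss.

class MetadataTrap(Exception):
    """Raised to let system software dynamically allocate physical
    memory for metadata (stands in for a hardware trap)."""

class MMU:
    def __init__(self, page_table, permissions, summary_table):
        self.page_table = page_table        # virtual page -> physical page
        self.permissions = permissions      # physical page -> {"metadata": bool}
        self.summary_table = summary_table  # physical pages with metadata backing

    def access(self, vaddr, page_size=4096):
        vpage, offset = divmod(vaddr, page_size)
        ppage = self.page_table[vpage]
        paddr = ppage * page_size + offset              # address translation
        if self.permissions[ppage].get("metadata"):     # metadata permitted?
            if ppage not in self.summary_table:
                # Check unsuccessful: trap so system software can allocate
                # physical memory for the metadata, then retry the access.
                raise MetadataTrap(paddr)
        return paddr
```

After system software handles the trap (here, by adding the page to the summary table), the same access completes normally.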

APPROACH FOR SUPPORTING MEMORY-CENTRIC OPERATIONS ON CACHED DATA
20230021492 · 2023-01-26 ·

A technical solution to the problem of supporting memory-centric operations on cached data uses a novel memory-centric memory operation that invokes write-back functionality on cache controllers and memory controllers. The write-back functionality selectively flushes dirty (i.e., modified) cached data needed by the memory-centric memory operation from the caches down to the operation's completion level, and updates the coherence state appropriately at each cache level. The solution ensures that the commands implementing the selective cache flushing are ordered before the memory-centric memory operation at its completion level.
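
A minimal sketch of this ordering, with invented data structures: each cache level is a dict of `addr -> (value, dirty)` entries, memory is the completion level, and coherence is "updated" by simply invalidating flushed lines. Only the addresses the operation touches are flushed, and the flushes complete before the operation runs.

```python
# Selective flush ordered before a memory-centric operation (toy model).

def memory_centric_op(caches, memory, addresses, op):
    """caches: list of {addr: (value, dirty)} dicts, ordered from L1 downward;
    memory: dict acting as the operation's completion level."""
    # Ordered first: selectively flush only the lines the operation needs.
    for level in caches:
        for addr in addresses:
            if addr in level:
                value, dirty = level.pop(addr)  # invalidate: coherence update
                if dirty:
                    memory[addr] = value        # write back modified data
    # Only after the flush commands does the operation execute at the
    # completion level, so it sees the freshest data.
    for addr in addresses:
        memory[addr] = op(memory[addr])
```

Lines outside the operation's footprint are left untouched, which is the point of *selective* flushing versus a full cache flush.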

Bottom-up Pre-emptive Cache Update in a Multi-level Redundant Cache System
20230022351 · 2023-01-26 ·

Embodiments provide cache updates in a hierarchical multi-node system through a service component between a lower level component and a next higher level component. The service component maintains a ledger storing an incrementing entry number that indicates the present state of the datasets in a cache of the lower level component. The service component receives a data request to the lower level component from the higher level component, including an appended last entry number accessed by the higher level component, and determines whether the appended last entry number matches the current entry number in the ledger for each requested dataset; no match indicates that at least some data in the higher level component's cache is stale. In that case, the service component sends updated data information for the stale data to the higher level component, which invalidates its cache entries and updates the appended last entry number to match the current entry number in the ledger.
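
The ledger protocol can be sketched as two small classes. Class and method names here are invented; the essential mechanism is the monotonically increasing entry number per dataset, compared against the requester's last-seen number to detect staleness.

```python
# Ledger-based staleness detection between cache levels (illustrative sketch).

class ServiceComponent:
    def __init__(self):
        self.ledger = {}   # dataset -> current entry number
        self.data = {}     # dataset -> latest value at the lower level

    def update(self, dataset, value):
        # Each change at the lower level increments the ledger entry.
        self.ledger[dataset] = self.ledger.get(dataset, 0) + 1
        self.data[dataset] = value

    def request(self, dataset, last_entry_seen):
        current = self.ledger.get(dataset, 0)
        if last_entry_seen == current:
            return None, current             # requester's cached copy is fresh
        return self.data[dataset], current   # stale: send updated data

class HigherLevel:
    def __init__(self, service):
        self.service = service
        self.cache = {}    # dataset -> (value, last entry number seen)

    def read(self, dataset):
        _, seen = self.cache.get(dataset, (None, 0))
        fresh, current = self.service.request(dataset, seen)
        if fresh is not None:
            # Invalidate the stale entry and record the new entry number.
            self.cache[dataset] = (fresh, current)
        return self.cache[dataset][0]
```

Because staleness is detected on the request path, the higher level never serves data older than the ledger's current entry for that dataset.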

HOST TRAINING INDICATION FOR MEMORY ARTIFICIAL INTELLIGENCE

A host can determine whether to train an AI accelerator of a memory sub-system. Responsive to determining to train the AI accelerator, the host can determine a training category corresponding to a memory access request. The host can also provide an indication to the memory sub-system that causes training of the AI accelerator to be performed based on the training category.
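
The host-side decision flow might look like the following sketch. The category names, message format, and classification rule are all invented for illustration; the abstract specifies only that the host classifies the request and attaches an indication that triggers training.

```python
# Host decides whether to train, classifies the request, and attaches an
# indication for the memory sub-system (all names hypothetical).

def classify_request(request):
    """Map a memory access request to a training category."""
    if request["op"] == "read" and request.get("sequential"):
        return "prefetch-pattern"
    return "general"

def host_issue(request, should_train, send):
    """Send the request, optionally with an indication that causes the
    memory sub-system to train its AI accelerator on the category."""
    message = {"request": request}
    if should_train:
        message["train"] = classify_request(request)
    send(message)
```

The sub-system would inspect the optional `train` field and, when present, perform training based on the named category.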

Allocation of spare cache reserved during non-speculative execution and speculative execution
11561903 · 2023-01-24 ·

A cache system has cache sets, a connection to a line identifying an execution type, a connection to a line identifying a status of speculative execution, and a logic circuit. The logic circuit can allocate a first subset of cache sets when the execution type is a first type indicating non-speculative execution, allocate a second subset when the execution type changes from the first type to a second type indicating speculative execution, and reserve at least one cache set when the execution type is the second type. When the execution type changes from the second type to the first type and the status of speculative execution indicates that a result of speculative execution is to be accepted, the logic circuit can reconfigure the second subset while the execution type is the first type, and allocate the at least one reserved cache set when the execution type changes from the first type back to the second type.
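
One way to read the allocation policy is as the following toy model, assuming four cache sets with one held as a spare and using `"N"`/`"S"` for the two execution types; the reconfiguration rule on accepted speculation is a simplification of the claimed behavior.

```python
# Toy model of spare-set allocation across execution-type changes.

class CacheSetAllocator:
    def __init__(self, num_sets=4):
        sets = list(range(num_sets))
        self.first_subset = sets[:-1]  # allocated for non-speculative use
        self.spare = sets[-1]          # reserved while execution is type "N"
        self.second_subset = []

    def on_type_change(self, new_type, accept_result=False):
        if new_type == "S":
            # Speculative fills go to the spare set, so a misspeculation
            # cannot pollute the non-speculative subset.
            self.second_subset = [self.spare]
        else:  # back to non-speculative execution
            if accept_result:
                # Accepted speculation: fold the speculative subset into
                # the normal allocation and reserve a different spare.
                self.first_subset += self.second_subset
                self.spare = self.first_subset.pop(0)
            self.second_subset = []
```

On a rejected speculation the spare simply stays reserved, so the misspeculated fills are never merged into the non-speculative working set.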

Selectively writing back dirty cache lines concurrently with processing

A graphics pipeline includes a cache having cache lines configured to store data used to process frames. The pipeline is implemented using a processor that processes frames using data stored in the cache. The processor processes a first frame and writes back a dirty cache line from the cache to a memory concurrently with processing of the first frame. The dirty cache line is retained in the cache and marked as clean after being written back to the memory. In some cases, the processor generates a hint that indicates a priority for writing back the dirty cache line based on a read command occupancy at a system memory controller.
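
The retain-and-mark-clean behavior, gated by the occupancy hint, can be sketched as a single pass over the cache. The watermark threshold and the dict-of-tuples cache representation are illustrative assumptions.

```python
# Write back dirty lines opportunistically, keeping them resident and clean.

def writeback_pass(cache, memory, read_occupancy, high_watermark=8):
    """cache: {addr: (value, dirty)}; returns number of lines written back."""
    # Hint: high read-command occupancy at the memory controller means
    # write-backs would contend with reads, so skip this pass.
    if read_occupancy >= high_watermark:
        return 0
    written = 0
    for addr, (value, dirty) in cache.items():
        if dirty:
            memory[addr] = value          # write back to memory
            cache[addr] = (value, False)  # retain the line, now clean
            written += 1
    return written
```

Because the line stays resident and clean, a later eviction costs nothing, while a later write simply re-dirties it.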

Distribution of injected data among caches of a data processing system

A data processing system includes a plurality of processor cores, each supported by a respective one of a plurality of vertical cache hierarchies. Based on receiving on a system fabric a cache injection request requesting injection of data into a cache line identified by a target real address, the data is written into a cache in a first vertical cache hierarchy among the plurality of vertical cache hierarchies. Based on a value in a field of the cache injection request, a distribute field is set in a directory entry of the first vertical cache hierarchy. Upon eviction of the cache line from the first vertical cache hierarchy, a determination is made whether the distribute field is set. Based on determining that the distribute field is set, a lateral castout of the cache line from the first vertical cache hierarchy to a second vertical cache hierarchy is performed.
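
The inject/evict behavior can be sketched with invented structures: each vertical hierarchy is modeled as a flat dict of directory entries keyed by real address, and the distribute field decides between a lateral castout and a normal eviction.

```python
# Distribute-field-driven lateral castout (illustrative sketch).

def inject(hierarchies, target, addr, data, distribute):
    """Write injected data into one vertical hierarchy, recording the
    distribute field from the cache injection request in the entry."""
    hierarchies[target][addr] = {"data": data, "distribute": distribute}

def evict(hierarchies, source, addr, lateral_target):
    entry = hierarchies[source].pop(addr)
    if entry["distribute"]:
        # Distribute field set: cast the line out laterally to a peer
        # hierarchy instead of dropping it back to system memory.
        hierarchies[lateral_target][addr] = entry
        return "lateral-castout"
    return "normal-eviction"
```

The effect is that injected data marked for distribution migrates between peer hierarchies on eviction rather than making a round trip through memory.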