G06F2212/6082

Dynamic allocation of cache memory as RAM

An apparatus includes a cache controller circuit and a cache memory circuit that further includes cache memory having a plurality of cache lines. The cache controller circuit may be configured to receive a request to reallocate a portion of the cache memory circuit that is currently in use. This request may identify an address region corresponding to one or more of the cache lines. The cache controller circuit may be further configured, in response to the request, to convert the one or more cache lines to directly-addressable, random-access memory (RAM) by excluding the one or more cache lines from cache operations.
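
As a rough illustration of the mechanism described above, the C sketch below models a controller that pins cache lines out of normal cache operation and maps them to an address region as directly addressable RAM. It assumes a simplified direct-mapped cache; all names (convert_region_to_ram, pinned_as_ram, and so on) are hypothetical and not taken from the patent.

```c
/* Hypothetical sketch: converting cache lines to directly addressable RAM
 * by excluding them from normal cache operations. A direct-mapped cache is
 * assumed; names and fields are illustrative, not taken from the patent. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define NUM_LINES 1024
#define LINE_SIZE 64

typedef struct {
    uint64_t tag;
    bool     valid;
    bool     pinned_as_ram;   /* excluded from lookups, fills, and evictions */
    uint64_t ram_base;        /* address the line now backs directly */
    uint8_t  data[LINE_SIZE];
} cache_line_t;

static cache_line_t lines[NUM_LINES];

/* Reallocate the lines covering [base, base + size) as directly addressable RAM. */
void convert_region_to_ram(uint64_t base, size_t size)
{
    for (uint64_t addr = base; addr < base + size; addr += LINE_SIZE) {
        size_t idx = (size_t)((addr / LINE_SIZE) % NUM_LINES);
        cache_line_t *ln = &lines[idx];
        /* A real controller would first write back dirty data; omitted here. */
        ln->valid = false;
        ln->pinned_as_ram = true;
        ln->ram_base = addr;
    }
}

/* Cache lookups skip lines that have been converted to RAM. */
bool cache_lookup(uint64_t addr, cache_line_t **out)
{
    size_t idx = (size_t)((addr / LINE_SIZE) % NUM_LINES);
    cache_line_t *ln = &lines[idx];
    if (ln->pinned_as_ram || !ln->valid || ln->tag != addr / LINE_SIZE)
        return false;                    /* miss, or line excluded from caching */
    *out = ln;
    return true;
}
```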

Prediction confirmation for cache subsystem

A cache subsystem is disclosed. The cache subsystem includes a cache configured to store information in cache lines arranged in a plurality of ways. A requestor circuit generates a request to access a particular cache line in the cache. A prediction circuit is configured to generate a prediction of which of the ways includes the particular cache line. A comparison circuit verifies the prediction by comparing a particular address tag associated with the particular cache line to a cache tag corresponding to a predicted one of the ways. Responsive to determining that the prediction was correct, a confirmation indication is stored indicating the correct prediction. For a subsequent request for the particular cache line, the cache is configured to forgo verification of the prediction that the particular cache line is included in the predicted way, based on the confirmation indication.
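
The C sketch below illustrates the confirmation idea under simplifying assumptions: one prediction and one confirmation bit per set, the line address used as the tag, and hypothetical names throughout (access_line, confirmed, and so on); it is not the patented circuit.

```c
/* Hypothetical sketch of way prediction with a confirmation bit: once a
 * prediction for a line has been verified against the address tag, later
 * accesses skip the tag comparison. Illustrative only. */
#include <stdbool.h>
#include <stdint.h>

#define NUM_SETS   256
#define NUM_WAYS   8
#define LINE_SHIFT 6          /* 64-byte cache lines */

typedef struct {
    uint64_t tag;
    bool     valid;
} way_t;

typedef struct {
    way_t   ways[NUM_WAYS];
    uint8_t predicted_way;    /* prediction produced by a predictor circuit */
    bool    confirmed;        /* set once the prediction has been verified */
} set_t;

static set_t sets[NUM_SETS];

/* Returns the way holding addr, verifying the prediction only when needed.
 * A real design would clear `confirmed` when the line is evicted. */
int access_line(uint64_t addr)
{
    uint64_t tag = addr >> LINE_SHIFT;            /* line address used as the tag */
    set_t *s = &sets[(addr >> LINE_SHIFT) % NUM_SETS];

    if (s->confirmed)                             /* prior confirmation: skip check */
        return s->predicted_way;

    way_t *w = &s->ways[s->predicted_way];
    if (w->valid && w->tag == tag) {              /* compare address tag to cache tag */
        s->confirmed = true;                      /* remember for later requests */
        return s->predicted_way;
    }
    return -1;                                    /* misprediction: full lookup needed */
}
```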

Method and apparatus for controlling cache line storage in cache memory
11636038 · 2023-04-25

A method and apparatus physically partition clean and dirty cache lines into separate memory partitions, such as one or more banks, so that during low-power operation a cache memory controller can reduce the power consumption of the cache memory that contains only clean data. The cache memory controller controls refresh operation so that refresh does not occur for clean-only banks, or so that the refresh rate of clean-only banks is reduced. Partitions that store dirty data can also store clean data; other partitions, however, are designated for storing only clean data so that their refresh rate can be reduced or their refresh stopped for periods of time. When multiple DRAM dies or packages are employed, the partitioning can occur at the die or package level rather than at the bank level within a die.
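
The C sketch below illustrates the partitioning idea under simplified assumptions: two banks are designated to accept dirty lines, the remaining banks hold only clean data, and clean-only banks get a longer refresh interval in low-power operation. The bank count, intervals, and names (select_bank, refresh_interval_ms) are illustrative, not from the patent.

```c
/* Hypothetical sketch: clean and dirty cache lines are kept in separate DRAM
 * banks so that clean-only banks can have their refresh slowed (or stopped)
 * during low-power operation. Layout and rates are illustrative. */
#include <stdbool.h>
#include <stdint.h>

#define NUM_BANKS        8
#define FULL_REFRESH_MS  64
#define SLOW_REFRESH_MS  512

typedef struct {
    bool may_hold_dirty;      /* designated for dirty (and clean) data */
    bool low_power;           /* set when entering a low-power state */
} bank_t;

static bank_t banks[NUM_BANKS] = {
    [0] = { .may_hold_dirty = true },   /* banks 0-1 accept dirty lines */
    [1] = { .may_hold_dirty = true },
    /* remaining banks are clean-only by default */
};

/* Pick a bank for a line: dirty lines go only to designated banks. */
int select_bank(bool dirty, uint64_t addr)
{
    if (dirty)
        return (int)(addr % 2);         /* dirty-capable banks only */
    return (int)(addr % NUM_BANKS);     /* clean lines may go anywhere */
}

/* Refresh interval for a bank: clean-only banks refresh less often while the
 * controller is in a low-power state (a real controller might stop entirely). */
int refresh_interval_ms(const bank_t *b)
{
    if (b->low_power && !b->may_hold_dirty)
        return SLOW_REFRESH_MS;         /* reduced refresh for clean-only banks */
    return FULL_REFRESH_MS;
}
```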

Limiting allocation of ways in a cache based on cache maximum associativity value
11604733 · 2023-03-14

An apparatus has processing circuitry to perform data processing; at least one architectural register to store at least one partition identifier selection value which is programmable by software processed by the processing circuitry; a set-associative cache comprising a plurality of sets each comprising a plurality of ways; and partition identifier selecting circuitry to select, based on the at least one partition identifier selection value stored in the at least one architectural register, a selected partition identifier to be specified by a cache access request for accessing the set-associative cache. The set-associative cache comprises: selecting circuitry responsive to the cache access request to select, based on the selected partition identifier, a selected cache maximum associativity value; and allocation control circuitry to limit the number of ways allocated in a same set for information associated with the selected partition identifier to a maximum number of ways determined based on the selected cache maximum associativity value.
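
The C sketch below models the allocation limit under simplifying assumptions: a small software-programmed table of per-partition maximum associativity values and a naive victim search. All names (allocate_way, max_ways, and so on) are hypothetical, not from the patent.

```c
/* Hypothetical sketch: a per-partition maximum associativity value limits how
 * many ways in one set a given partition identifier may occupy. Table sizes
 * and the victim-selection policy are illustrative. */
#include <stdbool.h>
#include <stdint.h>

#define NUM_SETS       128
#define NUM_WAYS       16
#define NUM_PARTITIONS 8

typedef struct {
    uint64_t tag;
    bool     valid;
    uint8_t  partition_id;    /* partition that allocated this way */
} way_t;

static way_t sets[NUM_SETS][NUM_WAYS];

/* Per-partition cache maximum associativity values, programmed by software. */
static uint8_t max_ways[NUM_PARTITIONS] = { 16, 8, 8, 4, 4, 2, 2, 2 };

/* Choose a victim way for an allocation by partition `pid` in set `set`. */
int allocate_way(unsigned set, uint8_t pid)
{
    unsigned owned = 0;
    for (unsigned w = 0; w < NUM_WAYS; w++)
        if (sets[set][w].valid && sets[set][w].partition_id == pid)
            owned++;

    if (owned >= max_ways[pid]) {
        /* At the limit: replace one of the partition's own ways so the
         * partition never exceeds its maximum number of ways in this set. */
        for (unsigned w = 0; w < NUM_WAYS; w++)
            if (sets[set][w].valid && sets[set][w].partition_id == pid)
                return (int)w;
    }
    /* Below the limit: prefer an invalid way; real hardware would otherwise
     * apply its normal replacement policy. */
    for (unsigned w = 0; w < NUM_WAYS; w++)
        if (!sets[set][w].valid)
            return (int)w;
    return 0;
}
```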

Set associative cache memory with heterogeneous replacement policy

A set-associative cache memory comprises an array of storage elements arranged as M sets by N ways, and an allocation unit that allocates the storage elements in response to memory accesses that miss in the cache memory, where each memory access selects a set. For each parcel of a plurality of parcels, a parcel specifier specifies a subset of the N ways included in the parcel and a replacement scheme associated with the parcel from among a plurality of predetermined replacement schemes; the subsets of ways of the parcels associated with a selected set are mutually exclusive. For each memory access, the allocation unit selects the parcel specifier in response to the memory access and uses the replacement scheme associated with the parcel to allocate into the subset of ways of the selected set included in the parcel.
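
The C sketch below illustrates parcels under simplifying assumptions: each parcel is described by a way mask and one of two replacement schemes (LRU or round-robin), and the masks for a set are disjoint. The two-parcel layout and all names are illustrative only.

```c
/* Hypothetical sketch: each "parcel" owns a mutually exclusive subset of the
 * N ways in a set and carries its own replacement scheme. Illustrative only. */
#include <stdint.h>

#define NUM_WAYS    8
#define NUM_PARCELS 2

typedef enum { REPL_LRU, REPL_ROUND_ROBIN } repl_scheme_t;

typedef struct {
    uint8_t       way_mask;   /* subset of the N ways belonging to this parcel */
    repl_scheme_t scheme;     /* replacement scheme used within that subset */
    uint8_t       rr_next;    /* state for the round-robin scheme */
} parcel_specifier_t;

/* Example layout: ways 0-3 use LRU, ways 4-7 use round-robin; masks disjoint. */
parcel_specifier_t parcels[NUM_PARCELS] = {
    { .way_mask = 0x0F, .scheme = REPL_LRU },
    { .way_mask = 0xF0, .scheme = REPL_ROUND_ROBIN },
};

/* Allocate a way for a miss that selected parcel `p`. lru_order[w] counts
 * accesses since way w was last used (larger means less recently used).
 * Returns a way index inside the parcel's subset of ways. */
int allocate_in_parcel(parcel_specifier_t *p, const uint8_t lru_order[NUM_WAYS])
{
    if (p->scheme == REPL_ROUND_ROBIN) {
        for (int i = 0; i < NUM_WAYS; i++) {
            int w = (p->rr_next + i) % NUM_WAYS;
            if (p->way_mask & (1u << w)) {        /* stay inside the parcel */
                p->rr_next = (uint8_t)((w + 1) % NUM_WAYS);
                return w;
            }
        }
    }
    /* LRU: pick the least recently used way that lies inside the parcel. */
    int victim = -1;
    for (int w = 0; w < NUM_WAYS; w++)
        if ((p->way_mask & (1u << w)) &&
            (victim < 0 || lru_order[w] > lru_order[victim]))
            victim = w;
    return victim;
}
```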

Information processing apparatus and non-transitory computer-readable storage medium
20220058128 · 2022-02-24

An information processing apparatus includes a processor. The processor is configured to allocate, to a process, a first number of first divided regions from among a plurality of divided regions obtained by dividing a cache, and to determine, based on the address of each data block corresponding to the process and the first number, a storage destination for the data block from among the first divided regions. The processor is further configured to determine a second number that is a divisor of the first number, to identify, for each of the first divided regions remaining after the allocation is reduced to the second number, corresponding second divided regions from among the first divided regions before the reduction, and to determine the data blocks to be stored in each of the remaining first divided regions by allocating data blocks from the corresponding second divided regions in ascending order of purging order.
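
Because the second number divides the first, a block's region index after the reduction is simply its address modulo the second number, so each new region gathers blocks only from the old regions congruent to it. The C sketch below demonstrates that property; the modulo-based mapping and the function names are assumptions for illustration, not taken from the application.

```c
/* Hypothetical sketch: data blocks map to one of n divided cache regions by
 * address; when the allocation shrinks to a divisor m of n, every block from
 * old region r still maps to new region r % m. Illustrative only. */
#include <assert.h>
#include <stdint.h>

/* Storage destination among the first divided regions. */
unsigned region_before(uint64_t block_addr, unsigned n)
{
    return (unsigned)(block_addr % n);
}

/* Storage destination after the region count is reduced to m (m divides n). */
unsigned region_after(uint64_t block_addr, unsigned n, unsigned m)
{
    assert(m != 0 && n % m == 0);         /* the second number divides the first */
    unsigned old_region = region_before(block_addr, n);
    unsigned new_region = (unsigned)(block_addr % m);
    /* Because m divides n, old_region % m == new_region: each new region
     * gathers blocks only from old regions congruent to it modulo m. */
    assert(old_region % m == new_region);
    return new_region;
}
```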

Methods and apparatus for data cache way prediction based on classification as stack data

A method of way prediction for a data cache having a plurality of ways is provided. Responsive to an instruction to access a stack data block, the method accesses identifying information associated with a plurality of most recently accessed ways of a data cache to determine whether the stack data block resides in one of the plurality of most recently accessed ways of the data cache, wherein the identifying information is accessed from a subset of an array of identifying information corresponding to the plurality of most recently accessed ways; and when the stack data block resides in one of the plurality of most recently accessed ways of the data cache, the method accesses the stack data block from the data cache.
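
The C sketch below illustrates the idea under simplified assumptions: a per-set list of the most recently accessed way indices is probed against the corresponding subset of the tag array before any full lookup. The MRU depth and names (stack_way_predict, mru) are hypothetical, not from the patent.

```c
/* Hypothetical sketch: for a stack-data access, only the identifying
 * information (tags) of the most recently accessed ways is consulted before
 * falling back to a full lookup. Sizes and names are illustrative. */
#include <stdbool.h>
#include <stdint.h>

#define NUM_WAYS  8
#define MRU_DEPTH 2            /* how many most recently accessed ways to probe */

typedef struct {
    uint64_t tag;
    bool     valid;
} way_info_t;

typedef struct {
    way_info_t ways[NUM_WAYS]; /* identifying information for every way */
    uint8_t    mru[MRU_DEPTH]; /* indices of the most recently accessed ways */
} set_t;

/* Probe only the MRU subset of the identifying-information array for a
 * stack-data block. Returns the hit way, or -1 to trigger a full lookup. */
int stack_way_predict(const set_t *s, uint64_t tag)
{
    for (int i = 0; i < MRU_DEPTH; i++) {
        const way_info_t *w = &s->ways[s->mru[i]];
        if (w->valid && w->tag == tag)
            return s->mru[i];  /* stack block resides in a recently used way */
    }
    return -1;                 /* not found among the MRU ways */
}
```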

Systems and methods for supporting a plurality of load and store accesses of a cache

Systems and methods for supporting a plurality of load and store accesses of a cache are disclosed. Responsive to a request of a plurality of requests to access a block of a plurality of blocks of a load cache, the block of the load cache and a logically and physically paired block of a store coalescing cache are accessed in parallel. The data that is accessed from the block of the load cache is overwritten by the data that is accessed from the block of the store coalescing cache by merging on a per-byte basis. Access is provided to the merged data.
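
The C sketch below models the per-byte merge under simplified assumptions: the store-coalescing block carries a byte-valid bitmap, and valid store bytes overwrite the corresponding load bytes. The structures and names (merge_blocks, byte_valid) are illustrative, not from the patent.

```c
/* Hypothetical sketch: a load-cache block and its paired store-coalescing
 * block are read together, and bytes that are valid in the store block
 * overwrite the load data on a per-byte basis. Illustrative only. */
#include <stdint.h>
#include <string.h>

#define BLOCK_SIZE 64

typedef struct {
    uint8_t data[BLOCK_SIZE];
} load_block_t;

typedef struct {
    uint8_t  data[BLOCK_SIZE];
    uint64_t byte_valid;       /* one bit per byte written by a pending store */
} store_block_t;

/* Merge the two blocks, giving priority to store-coalescing data. */
void merge_blocks(const load_block_t *ld, const store_block_t *st,
                  uint8_t out[BLOCK_SIZE])
{
    memcpy(out, ld->data, BLOCK_SIZE);      /* start from the load-cache data */
    for (int b = 0; b < BLOCK_SIZE; b++)
        if (st->byte_valid & (1ull << b))   /* byte written by a newer store */
            out[b] = st->data[b];           /* store data overwrites load data */
}
```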