G06F2212/283

Extension Write Buffering
20170308327 · 2017-10-26

In various examples, a memory may comprise a subarray having an associated write extension buffer, and request logic to receive a write request associated with the subarray and execute the write request. The request logic may further determine that the write request has not completed within an allocated number of write cycles and, responsive to that determination, store the write request in the write extension buffer.
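
As a rough illustration of the request-logic flow described above, the C++ sketch below executes a write for a fixed cycle budget and parks it in the extension buffer if it does not complete in time. The cycle budget, the unbounded buffer, and the step_write_cycle() subarray interface are assumptions, not details from the abstract.

#include <cstdint>
#include <deque>

struct WriteRequest { uint64_t addr; uint64_t data; };

// Placeholder for the subarray: one call advances one write cycle and
// reports whether the cell array has finished committing the write.
struct Subarray {
    bool step_write_cycle(const WriteRequest&) { return false; } // stub
};

class RequestLogic {
    Subarray& subarray_;
    std::deque<WriteRequest> extension_buffer_;      // the write extension buffer
    static constexpr int kAllocatedWriteCycles = 4;  // assumed budget
public:
    explicit RequestLogic(Subarray& s) : subarray_(s) {}

    void handle(const WriteRequest& req) {
        for (int cycle = 0; cycle < kAllocatedWriteCycles; ++cycle)
            if (subarray_.step_write_cycle(req))
                return;                       // completed within budget
        extension_buffer_.push_back(req);     // did not complete: park it
    }

    // Retry one parked write when the subarray is otherwise idle.
    void drain_one() {
        if (extension_buffer_.empty()) return;
        WriteRequest req = extension_buffer_.front();
        extension_buffer_.pop_front();
        handle(req);
    }
};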

ACCESSING PARTIAL CACHELINES IN A DATA CACHE

Accessing partial cachelines in a data cache, including: storing a first portion of a cacheline in a cache entry of the data cache; relaunching a load instruction targeting a second portion of the cacheline that is not stored in the data cache; determining that the load instruction targets a portion of the cacheline not stored in the cache entry; storing the second portion of the cacheline in the data cache; and reading the second portion of the cacheline from the data cache according to the load instruction.
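
One way to picture this flow is a sectored cache entry whose portions can be valid independently; the two-portion split, 64-byte line, and fetch_portion() hook in the C++ sketch below are assumptions made for illustration.

#include <array>
#include <cstdint>
#include <functional>

// A sectored cache entry: each portion of the line is valid on its own.
struct CacheEntry {
    uint64_t tag = 0;
    bool line_valid = false;
    std::array<bool, 2> portion_valid{};    // first and second portions
    std::array<uint8_t, 64> data{};
};

// On a load that hits the entry but misses the requested portion, fetch
// just that portion; the relaunched load then reads it from the cache.
bool load_portion(CacheEntry& e, uint64_t tag, int portion,
                  const std::function<void(CacheEntry&, int)>& fetch_portion) {
    if (!e.line_valid || e.tag != tag)
        return false;                       // ordinary line miss
    if (!e.portion_valid[portion]) {        // entry present, portion absent
        fetch_portion(e, portion);          // store the missing portion
        e.portion_valid[portion] = true;    // relaunched load now hits
    }
    return true;
}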

Multiple data channel memory module architecture

According to one example of the present disclosure, a system includes a computing element configured to provide requests for memory access operations and a memory module comprising a plurality of memories, a plurality of independent data channels, each coupled to one of the plurality of memories, a plurality of internal address/control channels, each coupled to one of the plurality of memories, and control logic coupled to the plurality of internal address/control channels. The control logic is configured to receive and decode address and control information for a memory access operation, and to selectively provide the decoded address and control information to a selected internal address/control channel for a selected independent data channel of the plurality of independent data channels, based on the received address and control information for the memory access operation.
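
A minimal C++ sketch of the decode-and-steer step follows, assuming four channels and a simple 64-byte address interleave; the actual bit slicing is not specified by the abstract.

#include <cstdint>

constexpr int kNumChannels = 4;            // assumed channel count

// Address/control information after decoding, bound for one internal channel.
struct DecodedOp {
    int      channel;
    uint64_t row;
    uint64_t column;
    bool     is_write;
};

// Control logic: decode the incoming address and steer the operation to
// the internal address/control channel for the memory that owns it.
DecodedOp decode_and_route(uint64_t addr, bool is_write) {
    DecodedOp op;
    op.channel  = static_cast<int>((addr >> 6) & (kNumChannels - 1)); // 64 B interleave
    op.column   = (addr >> 8) & 0x3FF;
    op.row      = addr >> 18;
    op.is_write = is_write;
    return op;
}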

QoS-CLASS BASED SERVICING OF REQUESTS FOR A SHARED RESOURCE

Systems and methods are directed to managing access to a shared memory. A request for access to the shared memory, received at a memory controller from a client of one or more clients configured to access the shared memory, is placed in at least one queue in the memory controller. A series of one or more timeout values is assigned to the request based, at least in part, on a priority associated with the client that generated the request. The priority may be fixed or based on a Quality-of-Service (QoS) class of the client. A timer is incremented while the request remains in the queue. As the timer traverses each of the one or more timeout values in the series, a criticality level of the request is incremented. A request with a higher criticality level may be prioritized for servicing over a request with a lower criticality level.
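
A compact C++ sketch of the timeout series and criticality escalation is given below; the per-class values and the three-step series are invented for illustration.

#include <cstdint>
#include <vector>

// Timeout series assigned from the client's QoS class; higher-priority
// classes escalate sooner. The values themselves are assumptions.
std::vector<uint32_t> timeouts_for(int qos_class) {
    return qos_class == 0 ? std::vector<uint32_t>{16, 32, 64}
                          : std::vector<uint32_t>{64, 128, 256};
}

struct QueuedRequest {
    std::vector<uint32_t> timeouts;  // the series of timeout values
    uint32_t timer = 0;
    uint32_t criticality = 0;
};

// Called every cycle while the request sits in the queue: each time the
// timer traverses the next timeout in the series, criticality rises.
void tick(QueuedRequest& r) {
    ++r.timer;
    if (r.criticality < r.timeouts.size() &&
        r.timer >= r.timeouts[r.criticality])
        ++r.criticality;
}
// The arbiter then services the highest-criticality request first.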

SELECTIVE BYPASSING OF ALLOCATION IN A CACHE

Systems and methods are directed to selectively bypassing allocation of cache lines in a cache. A bypass predictor table is provided with reuse counters to track reuse characteristics of cache lines, based on memory regions to which the cache lines belong in memory. A contender reuse counter provides an indication of a likelihood of reuse of a contender cache line in the cache pursuant to a miss in the cache for the contender cache line, and a victim reuse counter provides an indication of a likelihood of reuse for a victim cache line that will be evicted if the contender cache line is allocated in the cache. A decision whether to allocate the contender cache line in the cache or bypass allocation of the contender cache line in the cache is based on the contender reuse counter value and the victim reuse counter value.
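
The C++ sketch below models the per-region reuse counters and the allocate-or-bypass decision; the region granularity, counter width, and update events are assumptions, not details from the abstract.

#include <cstdint>
#include <unordered_map>

// Bypass predictor table: one saturating reuse counter per memory region.
class BypassPredictor {
    std::unordered_map<uint64_t, uint8_t> reuse_;      // region -> counter
    static constexpr uint8_t kMax = 7;                 // 3-bit saturation (assumed)
    static uint64_t region(uint64_t addr) { return addr >> 12; } // 4 KiB (assumed)
public:
    void on_hit(uint64_t addr) {                       // line reused: strengthen region
        uint8_t& c = reuse_[region(addr)];
        if (c < kMax) ++c;
    }
    void on_evict_without_reuse(uint64_t addr) {       // dead line: weaken region
        uint8_t& c = reuse_[region(addr)];
        if (c > 0) --c;
    }
    // On a miss: allocate the contender only if its region looks at least
    // as reusable as the victim's region; otherwise bypass allocation.
    bool should_allocate(uint64_t contender, uint64_t victim) {
        return reuse_[region(contender)] >= reuse_[region(victim)];
    }
};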

Apparatus and method for controlling level 0 cache

Disclosed herein are an apparatus and method for controlling level 0 caches, capable of delivering data to a processor without errors and storing error-free data in the caches even when soft errors occur in the processor and caches. The apparatus includes: a level 0 cache #0 connected to the load/store unit of a first processor; a level 0 cache #1 connected to the load/store unit of a second processor; and a fault detection and recovery unit for reading from and writing to the tag memory, data memory, and valid bit memory of the level 0 cache #0 and the level 0 cache #1, performing write-back and flush of the level 0 cache #0 and the level 0 cache #1 based on information stored therein, and instructing the load/store units of the first and second processors to stall a pipeline and restart an instruction #n.
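
In spirit, the recovery unit can be pictured as comparing the two replicated level 0 caches and repairing state on a mismatch; the line-by-line comparison and the stubbed recovery actions in the C++ sketch below are illustrative assumptions.

#include <cstdint>

struct L0Line { bool valid = false; uint64_t tag = 0; uint64_t data = 0; };

// Fault detection and recovery unit, reduced to one line pair: a mismatch
// between the replicated caches signals a soft error.
struct FaultUnit {
    void writeback_and_flush(L0Line&, L0Line&) { /* restore clean state */ }
    void stall_and_restart(int /*instruction_n*/) { /* redo instruction #n */ }

    bool check(L0Line& c0, L0Line& c1, int instruction_n) {
        bool mismatch = c0.valid != c1.valid || c0.tag != c1.tag ||
                        (c0.valid && c0.data != c1.data);
        if (mismatch) {
            writeback_and_flush(c0, c1);
            stall_and_restart(instruction_n);
        }
        return mismatch;
    }
};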

Cache lookup bypass in multi-level cache systems

Techniques described herein are generally related to retrieval of data in computer systems having multi-level caches. The multi-level cache may include at least a first cache and a second cache. The first cache may be configured to receive a request for a cache line. The request may be associated with an instruction executing on a tile of the computer system. A first cache controller may determine a suppression status of the instruction and, based on the determined suppression status, whether look-up of the first cache is suppressed. When look-up of the first cache is suppressed, the first cache controller may forward the request for the cache line directly to the second cache.
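
A minimal C++ sketch of the suppression check follows, assuming a boolean suppression status per instruction and simple lookup() probes; both are assumptions made for illustration.

#include <cstdint>

struct Cache {
    bool lookup(uint64_t) { return false; }  // stub: hit/miss probe
};

// First cache controller: when the instruction's suppression status says
// the first-cache look-up is suppressed, forward the request straight to
// the second cache.
bool serve_request(uint64_t addr, bool lookup_suppressed,
                   Cache& first, Cache& second) {
    if (!lookup_suppressed && first.lookup(addr))
        return true;                         // first-cache hit
    return second.lookup(addr);              // suppressed or missed: go deeper
}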

Computer including cache used in plural different data sizes and control method of computer
11669450 · 2023-06-06

A computer includes a memory and a cache that holds a part of the data stored in the memory in any of a plurality of data regions. When replacing first data of a first data size held in the cache with second data of a second data size larger than the first data size, allocation of the cache's data regions is changed in units of the second data size by referring to two lists: a first management list, which includes a plurality of first entries corresponding to the plurality of data regions, respectively, and manages the priorities of the data regions for each of a plurality of processes; and a second management list, which includes a plurality of second entries corresponding to the first entries for a process that uses the first data size and manages the priorities of first data held in the data regions.
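
As a heavily simplified C++ sketch: regions are sized for the first data size, with kRatio adjacent regions forming one second-size unit, and two LRU lists stand in for the first and second management lists. The ratio, the grouping, and the eviction policy are all assumptions.

#include <list>

constexpr int kRatio = 4;     // second data size / first data size (assumed)

std::list<int> first_list;    // first management list: priority of every region
std::list<int> second_list;   // second management list: regions holding small data

// To replace first-size data with one second-size datum, free the whole
// group of kRatio regions containing the lowest-priority small entry,
// updating both lists, then allocate the large datum in that group.
// Precondition: second_list is non-empty.
int evict_group_for_large() {
    int victim = second_list.back();        // least-priority small data
    int group  = victim / kRatio;           // enclosing second-size unit
    for (int r = group * kRatio; r < (group + 1) * kRatio; ++r) {
        first_list.remove(r);
        second_list.remove(r);
    }
    return group;
}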

Implementation of Reserved Cache Slots in Computing System Having Inclusive/Non-Inclusive Tracking and Two-Level System Memory

Electronic circuitry of a computing system is described, where the computing system includes a multi-level system memory with a near memory cache. The computing system directs system memory access requests whose addresses map to a same near memory cache slot to a same home caching agent, so that the agent can characterize individual cache lines as inclusive or non-inclusive before forwarding the requests to a system memory controller; other system memory access requests are directed to the system memory controller without passing through a home caching agent. The electronic circuitry modifies the respective original addresses of these other requests to include a special code that causes them to map to a specific pre-determined set of slots within the near memory cache.
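
The address rewrite can be pictured as forcing the slot-select bits to a reserved value, as in the C++ sketch below; the bit positions and the reserved value are assumptions, not details from the abstract.

#include <cstdint>

constexpr int      kSetShift    = 6;      // 64-byte cache lines (assumed)
constexpr uint64_t kSetMask     = 0x3FF;  // 10 slot-select bits (assumed)
constexpr uint64_t kReservedSet = 0x3FF;  // the pre-determined slot set (assumed)

// For a request that bypasses the home caching agent, overwrite the bits
// of the address that select the near memory cache slot with the special
// code, so the request lands in the reserved set of slots.
uint64_t redirect_to_reserved_slots(uint64_t addr) {
    addr &= ~(kSetMask << kSetShift);      // clear the original slot-select bits
    addr |=  (kReservedSet << kSetShift);  // insert the special code
    return addr;
}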

Prefetch command optimization for tiered storage systems
20170286305 · 2017-10-05

A system is provided. The system includes a storage controller configured to receive a prefetch command from a host interface, a read cache memory in the storage controller that stores prefetch data in response to the prefetch command, and a plurality of storage tiers coupled to the storage controller that provide the prefetch data. The plurality of storage tiers includes a fastest storage tier, which stores the prefetch data if the read cache memory discards the prefetch data after storing it.
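
A small C++ sketch of that fallback path follows: if the read cache discards the prefetched data, a copy is kept in the fastest tier so a later host read still avoids the slower tiers. The interfaces are invented for illustration.

#include <cstdint>
#include <vector>

using Block = std::vector<uint8_t>;

struct StorageTier {
    void store(uint64_t /*lba*/, const Block&) { /* stub */ }
};

struct ReadCache {
    // Returns false when the cache cannot keep (discards) the data.
    bool insert(uint64_t /*lba*/, const Block&) { return false; }
};

// On a prefetch command: try the read cache first; if it discards the
// data, keep a copy in the fastest tier (tiers.front()).
void handle_prefetch(uint64_t lba, const Block& data,
                     ReadCache& cache, std::vector<StorageTier>& tiers) {
    if (!cache.insert(lba, data))
        tiers.front().store(lba, data);
}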