G06F2201/885

Using multi-tiered cache to satisfy input/output requests

A computer-implemented method, according to one approach, includes: receiving a stream of incoming I/O requests, all of which are satisfied using one or more buffers in a primary cache. However, in response to determining that the available capacity of the one or more buffers in the primary cache is outside a predetermined range: one or more buffers in a secondary cache are allocated. These one or more buffers in the secondary cache are used to satisfy at least some of the incoming I/O requests, while the one or more buffers in the primary cache are used to satisfy a remainder of the incoming I/O requests. Moreover, in response to determining that the available capacity of the one or more buffers in the primary cache is not outside the predetermined range: the one or more buffers in the primary cache are again used to satisfy all of the incoming I/O requests.
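A minimal sketch of the two-tier routing described above. All identifiers (`TieredCache`, `min_free`, etc.) are illustrative assumptions, not taken from the patent; the "predetermined range" is modeled as a single minimum-free-capacity bound.

```python
class TieredCache:
    """Routes I/O requests to primary or secondary cache buffers."""

    def __init__(self, primary_capacity, secondary_capacity, min_free):
        self.primary_free = primary_capacity
        self.secondary_free = secondary_capacity
        self.min_free = min_free          # lower bound of the "predetermined range"
        self.secondary_active = False

    def satisfy(self, request_size):
        """Return which tier services this I/O request."""
        # Enter two-tier mode when primary free capacity falls outside the range.
        if self.primary_free < self.min_free:
            self.secondary_active = True
        # Return to primary-only mode once the primary has recovered.
        elif self.secondary_active and self.primary_free >= self.min_free:
            self.secondary_active = False

        if self.secondary_active and self.secondary_free >= request_size:
            self.secondary_free -= request_size
            return "secondary"
        self.primary_free -= request_size
        return "primary"

    def release_primary(self, size):
        """Free primary buffer space when a request completes."""
        self.primary_free += size
```

In this sketch the mode decision is re-evaluated per request, so the system drains back to primary-only service as soon as capacity re-enters the range.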

DYNAMIC CHUNK SIZE ADJUSTMENT FOR CACHE-AWARE LOAD BALANCING
20230120010 · 2023-04-20 ·

A method in one embodiment comprises separating logical block addresses of one or more storage devices of a storage system into a plurality of ranges of logical block addresses using a designated chunk size, the chunk size denoting a particular number of logical block addresses. The method further comprises assigning different ones of the ranges of logical block addresses to different ones of a plurality of cache entities of the storage system, selecting paths for delivery of respective input-output operations from a host device to the storage system based at least in part on the assigning, detecting particular ones of the input-output operations that each overlap with two or more adjacent ranges of the plurality of ranges, and responsive to the detected input-output operations exceeding a threshold, modifying the chunk size and repeating at least portions of the separating, assigning, selecting and detecting utilizing the modified chunk size.
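The chunked LBA-to-cache-entity mapping and the boundary-crossing check can be sketched as follows. The modulo mapping, the doubling policy, and all names are assumptions for illustration; the patent does not specify how the chunk size is modified.

```python
def cache_entity_for(lba, chunk_size, num_entities):
    """Map a logical block address to a cache entity by chunking the LBA space."""
    return (lba // chunk_size) % num_entities

def spans_ranges(lba, length, chunk_size):
    """Does an I/O of `length` blocks starting at `lba` cross a chunk boundary?"""
    return lba // chunk_size != (lba + length - 1) // chunk_size

def rebalance(ios, chunk_size, threshold):
    """Grow the chunk size when too many I/Os straddle adjacent ranges."""
    straddlers = sum(1 for lba, length in ios
                     if spans_ranges(lba, length, chunk_size))
    if straddlers > threshold:
        return chunk_size * 2   # illustrative modification: double the chunk
    return chunk_size
```

After `rebalance` returns a new chunk size, the separating/assigning steps would be repeated with it, as the abstract describes.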

CONTROLLER FOR MANAGING METRICS AND TELEMETRY
20220326874 · 2022-10-13 ·

Systems, apparatuses, and methods related to a controller for managing metrics and telemetry are described. A controller includes a front end portion, a central controller portion, a back end portion, and a management unit. The central controller portion can include a cache to store data associated with the performance of the memory operations, metric logic configured to collect metrics related to performance of the memory operations, load telemetry logic configured to collect load telemetry associated with performance of the memory operations within a threshold time, and a storage area to store the collected metrics and the collected load telemetry. The management unit memory of the controller can store metrics and load telemetry associated with monitoring the characteristics of the memory controller, and based on the stored metrics and load telemetry, alter at least one characteristic of the computing system.
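The collect-store-adjust loop can be illustrated with a small sketch: per-operation latency metrics are kept in a bounded window (standing in for the storage area), and a management-unit-style decision derives a system adjustment from them. The window size, latency signal, and queue-depth adjustment are all assumptions, not the patent's design.

```python
from collections import deque

class MetricUnit:
    """Bounded metric store with a telemetry-driven adjustment decision."""

    def __init__(self, window, latency_limit):
        self.metrics = deque(maxlen=window)   # storage area for collected metrics
        self.latency_limit = latency_limit

    def record(self, op_latency):
        """Metric logic: collect one per-operation latency sample."""
        self.metrics.append(op_latency)

    def load_telemetry(self):
        """Mean latency over the bounded window (the load signal)."""
        return sum(self.metrics) / len(self.metrics) if self.metrics else 0.0

    def adjust_queue_depth(self, depth):
        """Alter a system characteristic when the stored telemetry is high."""
        if self.load_telemetry() > self.latency_limit:
            return max(1, depth - 1)   # shed load: shrink the request queue
        return depth
```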

Methods and Systems for Memory Bandwidth Control

Resources of an electronic device are partitioned into a plurality of resource portions to be utilized by a plurality of clients. Each resource portion is assigned to a respective client, has a respective partition identifier (ID), and corresponds to a plurality of memory bandwidth usage states tracked for a plurality of memory blocks. For each resource portion, each of the memory bandwidth usage states is associated with a respective memory block and indicates at least how much of a memory access bandwidth assigned to the respective partition ID to access the respective memory block is used. A usage level is determined for each resource portion based on the memory bandwidth usage states, and applied to adjust a credit count. When the credit count is adjusted beyond a request issue threshold, a next data access request is issued from a memory access request queue for the respective partition ID.
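A simplified sketch of the per-partition credit accounting: under-use of the assigned bandwidth earns credits, over-use spends them, and a request is only released from the queue once the count passes the issue threshold. The identifiers and the earn/spend rule are illustrative assumptions.

```python
class PartitionCredits:
    """Credit-count gate for one partition ID's memory request queue."""

    def __init__(self, assigned_bandwidth, issue_threshold):
        self.assigned = assigned_bandwidth   # bandwidth granted to this partition
        self.threshold = issue_threshold
        self.credits = 0
        self.queue = []                      # pending memory access requests

    def enqueue(self, request):
        self.queue.append(request)

    def update_usage(self, used_bandwidth):
        """Adjust the credit count from the observed usage level.

        Returns the next request to issue, or None if the gate stays closed.
        """
        self.credits += self.assigned - used_bandwidth
        if self.credits >= self.threshold and self.queue:
            return self.queue.pop(0)   # threshold crossed: issue next request
        return None
```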

Storage system and method for data recovery after detection of an uncorrectable error

A storage system caches, in volatile memory, data read from non-volatile memory. After detecting an uncorrectable error in the data cached in the volatile memory, the storage system replaces the cached data with data re-read from the non-volatile memory and updated to reflect any changes made to the data after it was stored in the non-volatile memory. The storage system can also analyze a pattern in data adjacent to the uncorrectable error and predict corrected data based on the pattern.
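The recovery flow can be illustrated in a few lines: on an uncorrectable error in the volatile copy, re-read the non-volatile copy and replay the changes made since it was written. Representing the pending changes as an `(offset, byte)` journal is an assumption for this sketch, not the patent's structure.

```python
def recover_cached_data(nvm_data, change_journal):
    """Rebuild the cached copy from NVM data plus post-write updates."""
    data = bytearray(nvm_data)        # re-read from non-volatile memory
    for offset, new_byte in change_journal:
        data[offset] = new_byte       # reapply each change made after the write
    return bytes(data)
```

The pattern-based prediction mentioned in the abstract would be an alternative path when no clean non-volatile copy is recoverable; it is not modeled here.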

CLASSIFICATION OF DIFFERENT TYPES OF CACHE MISSES
20230161678 · 2023-05-25 ·

Various examples are provided related to cache miss classification. In one example, a method for classification of cache misses includes detecting a susceptible instruction of a program with frequent cache misses based upon performance monitoring unit (PMU)-based coarse-grain sampling; collecting a memory access pattern of the susceptible instruction using breakpoint-based fine-grain sampling; and classifying a type of cache miss associated with the susceptible instruction. The type of cache miss can be classified as a capacity miss, a conflict miss, or a coherence miss using the memory access pattern of the susceptible instruction.
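The three-way classification can be sketched with textbook-style heuristics: a miss caused by another core's invalidation is a coherence miss; a line that would have fit (short reuse distance) but mapped to a contended set is a conflict miss; otherwise the working set exceeds the cache. These rules stand in for the patent's actual method, and compulsory (first-touch) misses are omitted.

```python
def classify_miss(invalidated_by_other_core, reuse_distance,
                  cache_lines, set_conflict):
    """Label one cache miss from features of the instruction's access pattern."""
    if invalidated_by_other_core:
        return "coherence"   # line was lost to another core's write
    if reuse_distance is not None and reuse_distance <= cache_lines and set_conflict:
        return "conflict"    # would have fit, but mapped to a hot set
    return "capacity"        # working set simply exceeds the cache
```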

Criticality-Informed Caching Policies

A cache may store critical cache lines and non-critical cache lines, and may attempt to retain critical cache lines in the cache by, for example, favoring the critical cache lines in replacement data updates, retaining the critical cache lines with a certain probability when victim cache blocks are being selected, etc. Criticality values may be retained at various levels of the cache hierarchy. Additionally, accelerated eviction may be employed if the threads previously accessing the critical cache blocks are viewed as dead.
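One of the retention mechanisms above, probabilistic retention of critical lines during victim selection, can be sketched as follows. The scan order, the retention probability, and all names are assumptions for illustration.

```python
import random

def choose_victim(lines, criticality, retain_prob, rng=random.random):
    """Pick an eviction victim, preferring non-critical lines.

    `lines` is ordered least- to most-recently used; `criticality` maps a
    line to True if it is marked critical.
    """
    for line in lines:                    # scan LRU-first
        if not criticality.get(line, False):
            return line                   # non-critical line: evict freely
        if rng() >= retain_prob:
            return line                   # critical line kept only with probability
    return lines[0]                       # everything retained: fall back to LRU
```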

Mergeable counter system and method

A system includes a first counter configured to increment or decrement in response to a triggering event. The first counter is sized to overflow. The system also includes a second counter configured to increment or decrement in response to a triggering event. The first counter and the second counter are merged to form a third counter in response to detecting an overflow triggering event for the first counter. A merge bit indicative of whether the first counter and the second counter are merged changes value in response to merging the first counter and the second counter.
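A sketch of the merge-on-overflow behavior: a narrow first counter overflows, the merge bit is set, and the pair then behaves as one counter twice as wide, with the second counter holding the high bits. The field width and the assumption that the second counter is idle at merge time are illustrative choices, not the patent's.

```python
class MergeableCounter:
    """Two narrow counters that fuse into one wide counter on overflow."""

    def __init__(self, width=4):
        self.width = width
        self.max = (1 << width) - 1
        self.first = 0
        self.second = 0           # assumed idle until the merge occurs
        self.merge_bit = 0        # 1 once the pair acts as one wide counter

    def increment(self):
        if self.merge_bit:
            # Merged mode: second holds high bits, first holds low bits.
            combined = (self.second << self.width) + self.first + 1
            self.first = combined & self.max
            self.second = combined >> self.width
        elif self.first == self.max:
            # Overflow triggering event: merge and carry into the second counter.
            self.merge_bit = 1
            self.first = 0
            self.second = 1
        else:
            self.first += 1

    def value(self):
        if self.merge_bit:
            return (self.second << self.width) + self.first
        return self.first
```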

Access frequency caching hardware structure
11467960 · 2022-10-11 ·

An access frequency caching hardware structure has entries each storing an access frequency counter indicative of a frequency of accesses to a corresponding page of a memory address space. Access frequency tracking circuitry is responsive to a given memory access request requesting access to a target page, to determine whether the access frequency caching hardware structure already includes a corresponding entry which is valid and corresponds to the target page. When the structure includes the corresponding entry, a corresponding access frequency counter specified by the corresponding entry is incremented. In response to a counter writeback event associated with a selected access frequency counter corresponding to a selected page, an update is made to a global access frequency counter corresponding to the selected page within a global access frequency tracking data structure stored in the memory system.
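A hedged sketch of the structure: a small table of per-page counters, spilled into a global in-memory table on a writeback event (modeled here as eviction of the least-frequent entry when the structure is full). All names and the eviction-as-writeback choice are assumptions.

```python
class AccessFrequencyCache:
    """Small per-page counter structure backed by a global frequency table."""

    def __init__(self, capacity, global_table):
        self.capacity = capacity
        self.entries = {}                  # page -> access frequency counter
        self.global_table = global_table   # global tracking structure in memory

    def access(self, page):
        if page in self.entries:
            self.entries[page] += 1        # valid entry found: bump its counter
            return
        if len(self.entries) >= self.capacity:
            # Counter writeback event: fold one entry into its global counter.
            victim = min(self.entries, key=self.entries.get)
            count = self.entries.pop(victim)
            self.global_table[victim] = self.global_table.get(victim, 0) + count
        self.entries[page] = 1             # allocate a fresh entry for this page
```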

MULTI-ADAPTIVE CACHE REPLACEMENT POLICY

Techniques for performing cache operations are provided. The techniques include tracking performance events for a plurality of test sets of a cache, detecting a replacement policy change trigger event associated with a test set of the plurality of test sets, and in response to the replacement policy change trigger event, operating non-test sets of the cache according to a replacement policy associated with the test set.
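This resembles set-dueling-style policy selection, which can be sketched as: each candidate replacement policy runs in its own test sets, misses are tracked per policy, and a trigger event switches the non-test sets to the winner. The margin-based trigger and all names are illustrative assumptions.

```python
def pick_policy(test_set_misses, current_policy, trigger_margin):
    """Return the replacement policy the non-test sets should use.

    `test_set_misses` maps policy name -> misses observed in that policy's
    test sets; a switch triggers only when another policy wins by at least
    `trigger_margin` misses.
    """
    best = min(test_set_misses, key=test_set_misses.get)
    if (best != current_policy and
            test_set_misses[current_policy] - test_set_misses[best]
            >= trigger_margin):
        return best              # trigger event: adopt the winning test policy
    return current_policy        # no trigger: keep operating as before
```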