G06F12/0897

Using request class and reuse recording in one cache for insertion policies of another cache
11704250 · 2023-07-18

Systems and methods are disclosed for maintaining insertion policies of a lower-level cache. Techniques are described for selecting, based on metadata of an evicted data block received from an upper-level cache, one of the insertion policies, and then determining, based on the selected insertion policy, whether to insert the data block into the lower-level cache. If insertion is warranted, the data block is inserted into the lower-level cache according to the selected insertion policy. Techniques for dynamically updating the insertion policies of the lower-level cache are also disclosed.
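
As a rough illustration of the technique this abstract describes, the sketch below selects an insertion policy for a lower-level cache from the request class and reuse bit carried in an evicted block's metadata, then decides whether (and at what recency position) to insert the block. The policy table, metadata fields, and set size are illustrative assumptions, not taken from the patent claims.

```python
# Hypothetical sketch of metadata-driven insertion into a lower-level cache.
from dataclasses import dataclass

@dataclass
class EvictedBlock:
    address: int
    request_class: str   # e.g. "demand" or "prefetch" (assumed classes)
    reused_in_l2: bool   # reuse bit recorded while the block lived in L2

# One insertion policy per (request class, reuse) combination.
# Each policy: (insert?, recency position to insert at; 0 = MRU).
POLICIES = {
    ("demand", True): (True, 0),         # reused demand blocks -> MRU
    ("demand", False): (True, 3),        # not reused -> near LRU
    ("prefetch", True): (True, 1),
    ("prefetch", False): (False, None),  # dead-on-arrival: bypass L3
}

def handle_l2_eviction(l3_set: list, block: EvictedBlock) -> None:
    insert, position = POLICIES[(block.request_class, block.reused_in_l2)]
    if insert:
        l3_set.insert(position, block.address)  # place at chosen recency slot
        if len(l3_set) > 4:                     # 4-way set: evict from LRU end
            l3_set.pop()

l3_set: list = []
handle_l2_eviction(l3_set, EvictedBlock(0x1000, "demand", True))
handle_l2_eviction(l3_set, EvictedBlock(0x2000, "prefetch", False))  # bypassed
print(l3_set)  # [4096]
```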

Method and storage system with a layered caching policy

A storage system has volatile memory for use as a cache and can extend the available caching space by using a host memory buffer (HMB) in a host. However, because accesses to the HMB go through a host interface, there may be latencies in accessing the HMB. To reduce these access latencies, the storage system views the volatile memory and the HMB as a two-level cache. In one use case, the storage system decides whether to store a logical-to-physical address table in the volatile memory or in the HMB based on a prediction of the likelihood that the table will be updated. If the likelihood of an update is above a threshold, the table is stored in the volatile memory, thereby eliminating the access latencies that would be encountered if the table needed to be updated while stored in the HMB.
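
A minimal sketch of the placement decision described above, assuming a stand-in update-likelihood predictor and threshold; the class, method names, and values are hypothetical:

```python
# Two-level placement: hot logical-to-physical (L2P) tables stay in local
# volatile memory, cold ones go to the host memory buffer (HMB).
UPDATE_LIKELIHOOD_THRESHOLD = 0.5   # assumed cutoff

class TwoLevelCache:
    def __init__(self):
        self.volatile = {}   # on-controller memory: fast, small
        self.hmb = {}        # host memory buffer: larger, slower to reach

    def predict_update_likelihood(self, table_id: int) -> float:
        # Stand-in predictor; a real device might track recent write
        # rates per L2P table region.
        return 0.8 if table_id % 2 == 0 else 0.2

    def place_l2p_table(self, table_id: int, table: bytes) -> str:
        # Frequently updated tables stay local, avoiding host-interface
        # latency on every update.
        if self.predict_update_likelihood(table_id) > UPDATE_LIKELIHOOD_THRESHOLD:
            self.volatile[table_id] = table
            return "volatile"
        self.hmb[table_id] = table
        return "hmb"

cache = TwoLevelCache()
print(cache.place_l2p_table(0, b"...L2P entries..."))  # volatile
print(cache.place_l2p_table(1, b"...L2P entries..."))  # hmb
```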

Method and apparatus for using a storage system as main memory
11556469 · 2023-01-17

A data access system includes a processor, multiple cache modules that function as main memory, and a storage drive. Each cache module includes an FLC controller and a main-memory cache. The processor sends read/write requests (with a physical address) to the cache module. The cache module includes two or more stages, each with an FLC controller and DRAM (with an associated controller). If the first-stage FLC module does not contain the physical address, the request is forwarded to a second-stage FLC module. If the second-stage FLC module does not contain the physical address, the request is forwarded to a partition of the storage drive reserved for main memory. The first-stage FLC module provides high-speed, lower-power operation, while the second-stage FLC module is a low-cost implementation. Multiple FLC modules may connect to the processor in parallel.
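
The cascaded lookup the abstract describes can be sketched as follows; the class names, fill policy, and line size are assumptions for illustration:

```python
# A read request falls through FLC stages and finally to a storage-drive
# partition reserved for main memory.
class FLCStage:
    def __init__(self, name: str):
        self.name = name
        self.dram = {}            # physical address -> cache line

    def lookup(self, paddr: int):
        return self.dram.get(paddr)

class FLCSystem:
    def __init__(self):
        self.stage1 = FLCStage("FLC1")   # fast, low-power stage
        self.stage2 = FLCStage("FLC2")   # larger, low-cost stage
        self.drive_partition = {}        # drive partition reserved as backing store

    def read(self, paddr: int):
        for stage in (self.stage1, self.stage2):
            line = stage.lookup(paddr)
            if line is not None:
                return line, stage.name
        # Miss in both stages: fetch from the reserved drive partition
        # and fill the stages on the way back.
        line = self.drive_partition.get(paddr, b"\x00" * 64)
        self.stage2.dram[paddr] = line
        self.stage1.dram[paddr] = line
        return line, "drive"

flc = FLCSystem()
flc.drive_partition[0x40] = b"A" * 64
print(flc.read(0x40)[1])  # drive (first access)
print(flc.read(0x40)[1])  # FLC1 (now cached)
```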

Integrated semi-inclusive hierarchical metadata predictor

Embodiments are provided for an integrated semi-inclusive hierarchical metadata predictor. A hit in a second-level structure is determined, the hit being associated with a line of metadata in the second-level structure. Responsive to determining that a victim line of metadata in a first-level structure meets at least one condition, the victim line of metadata is stored in the second-level structure. The line of metadata from the second-level structure is stored in the first-level structure, where it is used to facilitate performance of a processor; the line of metadata includes entries for a plurality of instructions.
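
The semi-inclusive exchange described above might look like the sketch below: on a second-level hit, the metadata line is promoted to the first level, and the first-level victim is written back to the second level only if it meets a condition. The condition used here (a "useful" bit) and the structure sizes are hypothetical.

```python
from collections import OrderedDict

class MetadataPredictor:
    def __init__(self, l1_capacity: int = 2):
        self.l1 = OrderedDict()   # tag -> (line of metadata, useful bit)
        self.l2 = {}              # tag -> line of metadata
        self.l1_capacity = l1_capacity

    def l2_hit(self, tag: str) -> None:
        line = self.l2.pop(tag)                 # line of metadata hit in L2
        if len(self.l1) >= self.l1_capacity:
            victim_tag, (victim, useful) = self.l1.popitem(last=False)
            if useful:                          # condition for write-back
                self.l2[victim_tag] = victim    # semi-inclusive: keep in L2
        self.l1[tag] = (line, False)            # promoted line feeds prediction

pred = MetadataPredictor()
pred.l1["a"] = (["br0", "br1"], True)   # useful victim -> survives in L2
pred.l1["b"] = (["br2"], False)
pred.l2["c"] = ["br3", "br4"]
pred.l2_hit("c")
print("a" in pred.l2, list(pred.l1))    # True ['b', 'c']
```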

SYSTEM AND METHOD FOR OPTIMIZING CACHED MEMORY COMPRISING VARYING DEGREES OF SLA AND CRG

A system and method for optimizing cached memory comprising varying degrees of Service Level Agreements (SLA) and Consistency Requirement Grades (CRG). The system receives one or more requests from one or more client devices to store information in a cache memory, and determines the degrees of SLA and CRG for the information from the requests or from system configurations. The system then stores the information once, as a master record, in a cache layer of the cache memory, based on the determined degrees of SLA and CRG. Furthermore, the system stores graded entries referencing the master record in different layers of the cache memory, each grade of entry having a different Time-To-Live (TTL). Thereafter, the system outputs the information stored in the master record to the client devices, based on the SLA and consistency requirements.
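
A rough sketch of graded cache entries with distinct TTLs all referencing a single master record; the grade names, TTL values, and API are assumptions, not the patent's terminology:

```python
import time

TTL_BY_GRADE = {"strict": 1.0, "relaxed": 30.0, "best_effort": 300.0}

class GradedCache:
    def __init__(self):
        self.master = {}                             # key -> value, stored once
        self.layers = {g: {} for g in TTL_BY_GRADE}  # grade -> {key: expiry}

    def put(self, key: str, value, grades=("strict", "relaxed")) -> None:
        self.master[key] = value            # one-time master record
        now = time.monotonic()
        for grade in grades:                # graded references, not copies
            self.layers[grade][key] = now + TTL_BY_GRADE[grade]

    def get(self, key: str, grade: str):
        expiry = self.layers[grade].get(key)
        if expiry is None or time.monotonic() > expiry:
            return None                     # entry expired for this grade
        return self.master[key]             # all grades read the master

cache = GradedCache()
cache.put("user:42", {"name": "Ada"})
print(cache.get("user:42", "strict"))       # served within strict TTL
print(cache.get("user:42", "best_effort"))  # None: grade was not populated
```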

Interleaving in multi-level data cache on memory bus
11698873 · 2023-07-11

This invention provides a system having a processor assembly and a memory-storage combine, each interconnected to a memory bus. The memory-storage combine is adapted to allow access, through the memory bus, to a combination of random access memory (RAM) based data storage and non-volatile mass data storage. A controller is arranged to address both the RAM-based data storage and the non-volatile mass data storage as part of a unified address space, in the manner of RAM.
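
A simplified sketch of a controller exposing RAM-based storage and non-volatile mass storage as one flat address space; the split point and sizes below are illustrative assumptions:

```python
RAM_SIZE = 1 << 20            # first 1 MiB of the space backed by RAM
TOTAL_SIZE = 1 << 24          # 16 MiB unified address space

class UnifiedController:
    def __init__(self):
        self.ram = bytearray(RAM_SIZE)
        self.nvm = bytearray(TOTAL_SIZE - RAM_SIZE)  # stand-in for mass storage

    def _backing(self, addr: int):
        # Route each address to its backing medium, invisibly to the host.
        if addr < RAM_SIZE:
            return self.ram, addr
        return self.nvm, addr - RAM_SIZE

    def read(self, addr: int) -> int:
        mem, offset = self._backing(addr)
        return mem[offset]

    def write(self, addr: int, value: int) -> None:
        mem, offset = self._backing(addr)
        mem[offset] = value

ctrl = UnifiedController()
ctrl.write(0x10, 0xAA)          # lands in RAM
ctrl.write(RAM_SIZE + 5, 0xBB)  # lands in non-volatile storage
print(hex(ctrl.read(0x10)), hex(ctrl.read(RAM_SIZE + 5)))  # 0xaa 0xbb
```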

Three tiered hierarchical memory systems

Systems, apparatuses, and methods related to three tiered hierarchical memory systems are described herein. A three tiered hierarchical memory system can leverage persistent memory to store data that is generally stored in a non-persistent memory, thereby increasing an amount of storage space allocated to a computing system at a lower cost than approaches that rely solely on non-persistent memory. An example apparatus may include a persistent memory, and one or more non-persistent memories configured to map an address associated with an input/output (I/O) device to an address in logic circuitry prior to the apparatus receiving a request from the I/O device to access data stored in the persistent memory, and map the address associated with the I/O device to an address in a non-persistent memory subsequent to the apparatus receiving the request and accessing the data.
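
One way to read the remapping step described above is sketched below: before a request arrives, the I/O device's address points at logic circuitry (so the first access can be intercepted); after the request is received, the address is remapped to non-persistent memory holding data staged from persistent memory. The tier names and mapping scheme are assumptions for illustration.

```python
class ThreeTierMemory:
    def __init__(self):
        self.persistent = {0x100: b"cold data"}  # cheap, large tier
        self.non_persistent = {}                  # DRAM staging tier
        self.io_map = {}                          # I/O addr -> (tier, addr)

    def register_io(self, io_addr: int, backing_addr: int) -> None:
        # Initially map the device address to logic circuitry so the
        # first access can be intercepted.
        self.io_map[io_addr] = ("logic", backing_addr)

    def access(self, io_addr: int) -> bytes:
        tier, addr = self.io_map[io_addr]
        if tier == "logic":
            # Request received: stage the data out of persistent memory
            # and remap the device address to non-persistent memory.
            self.non_persistent[addr] = self.persistent[addr]
            self.io_map[io_addr] = ("dram", addr)
        return self.non_persistent[self.io_map[io_addr][1]]

mem = ThreeTierMemory()
mem.register_io(0x8000, 0x100)
print(mem.access(0x8000))  # triggers staging, then reads from DRAM
print(mem.io_map[0x8000])  # ('dram', 256)
```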