G06F12/0802

AGGRESSIVE WRITE FLUSH SCHEME FOR A VICTIM CACHE
20230004500 · 2023-01-05 ·

A caching system including a first sub-cache and a second sub-cache in parallel with the first sub-cache, wherein the second sub-cache includes: line type bits configured to store an indication that a corresponding cache line of the second sub-cache is configured to store write-miss data, and an eviction controller configured to evict a cache line of the second sub-cache storing write-miss data based on an indication that the cache line has been fully written.
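The mechanism described above can be sketched in a few lines: a per-line "line type" bit marks write-miss allocations, a per-byte written mask tracks coverage, and the eviction controller flushes the line as soon as it is fully written. This is an illustrative model only; the class and method names (`VictimLine`, `VictimCache`, `allocate_write_miss`) are assumptions, not the patent's terminology.

```python
class VictimLine:
    """One line of the second (victim) sub-cache."""
    def __init__(self, tag, num_bytes):
        self.tag = tag
        self.is_write_miss = False            # the "line type" bit
        self.written = [False] * num_bytes    # per-byte written mask
        self.data = bytearray(num_bytes)

    def write(self, offset, data):
        for i, b in enumerate(data):
            self.data[offset + i] = b
            self.written[offset + i] = True

    def fully_written(self):
        return all(self.written)


class VictimCache:
    def __init__(self, line_size=4):
        self.line_size = line_size
        self.lines = {}      # tag -> VictimLine
        self.flushed = []    # tags aggressively written back to next level

    def allocate_write_miss(self, tag):
        line = VictimLine(tag, self.line_size)
        line.is_write_miss = True
        self.lines[tag] = line

    def write(self, tag, offset, data):
        line = self.lines[tag]
        line.write(offset, data)
        # Aggressive flush: evict a write-miss line the moment it is
        # fully written, freeing the line without a later dirty eviction.
        if line.is_write_miss and line.fully_written():
            self.flushed.append(tag)
            del self.lines[tag]
```

For example, two partial writes that together cover a 4-byte line trigger the flush on the second write, while a partially written line stays resident.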

MEMORY MANAGEMENT METHOD AND APPARATUS
20230236971 · 2023-07-27 ·

This disclosure provides a memory management method and apparatus for refined management of process access status in a physical host, virtual machine, or container, and of the traffic occupied by each control group (cgroup). In the method, a first record table corresponding to a target cache page records every cgroup that has read target data from, or written target data into, the target cache page, together with the number of times the target data was read or written by those cgroups, improving the accuracy of per-cgroup traffic statistics. Throttling is then performed on a first cgroup based on the updated first record table, making the throttling fairer and more accurate.
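A minimal sketch of the per-page record table and count-based throttling follows. The names (`PageRecordTable`, `TrafficAccountant`) and the simple fixed limit are assumptions for illustration; the disclosure does not specify this interface.

```python
from collections import defaultdict


class PageRecordTable:
    """Per-cache-page record: which cgroups touched the page, how often."""
    def __init__(self):
        self.counts = defaultdict(int)  # cgroup id -> read/write count

    def record_access(self, cgroup):
        self.counts[cgroup] += 1


class TrafficAccountant:
    def __init__(self, limit_per_cgroup):
        self.limit = limit_per_cgroup
        self.tables = defaultdict(PageRecordTable)  # page id -> record table

    def access(self, page, cgroup):
        # Update the record table on every read/write of the target page
        self.tables[page].record_access(cgroup)

    def total_traffic(self, cgroup):
        # Accurate per-cgroup traffic: sum the counts across all pages
        return sum(t.counts.get(cgroup, 0) for t in self.tables.values())

    def should_throttle(self, cgroup):
        return self.total_traffic(cgroup) > self.limit
```

Because each access is attributed to the cgroup that actually performed it, a cgroup that merely shares a page is not charged for another cgroup's traffic.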

MEMORY MEDIA ROW ACTIVATION-BIASED CACHING
20230236968 · 2023-07-27 ·

A cache memory with a caching policy biased by memory media device row activations is described. The policies biased by row activation counts include at least one of a cache line eviction policy, which determines which cache lines are the most evictable from the cache memory, and a cache line storage policy, which determines which row data is allocated cache lines for storage. A memory controller including a row activation-biased cache memory is also described. The memory media device may be DRAM.
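Both biased policies can be illustrated together: the storage policy admits only rows whose activation count has crossed a threshold, and the eviction policy names the line whose row has the fewest activations as the most evictable. The class name, threshold value, and dictionary-based model are illustrative assumptions, not the patent's design.

```python
class RowBiasedCache:
    """Toy cache whose storage and eviction policies are biased by
    per-row activation counts (assumed interface, for illustration)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.lines = {}        # row id -> cached row data
        self.activations = {}  # row id -> activation count

    def note_activation(self, row):
        self.activations[row] = self.activations.get(row, 0) + 1

    def should_cache(self, row, threshold=2):
        # Storage policy: allocate a line only for frequently activated rows
        return self.activations.get(row, 0) >= threshold

    def evict_victim(self):
        # Eviction policy: the line whose row has the fewest activations
        # is the most evictable
        return min(self.lines, key=lambda r: self.activations.get(r, 0))

    def insert(self, row, data):
        if not self.should_cache(row):
            return
        if len(self.lines) >= self.capacity:
            del self.lines[self.evict_victim()]
        self.lines[row] = data
```

Keeping heavily activated rows resident means subsequent accesses hit the cache instead of re-activating the DRAM row.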

Method to opportunistically reduce the number of SSD IOs, and reduce the encryption payload, in an SSD based cache in a deduplication file system

Disclosed is a method for a storage system, comprising: receiving a first data segment and first metadata associated with the first data segment to be stored in the storage system; storing the first data segment and the first metadata in a persistent storage device of the storage system; compressing the first data segment using a predetermined compression algorithm to generate a first compressed data segment; and storing the first metadata and the first compressed data segment in a solid state drive (SSD) cache device of the storage system, including aligning the first metadata and the first compressed data segment to a page boundary of the SSD cache device to reduce the number of input and output (IO) operations required to access them from the SSD cache device.
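The IO saving from page alignment comes down to simple arithmetic: a record that straddles a page boundary costs two page reads, while the same record padded to start on a boundary costs one. The helper names and the 4 KiB page size below are assumptions for illustration.

```python
PAGE_SIZE = 4096  # assumed SSD page size in bytes


def io_count(offset, length, page=PAGE_SIZE):
    """Number of page-sized IOs needed to read [offset, offset + length)."""
    return (offset + length - 1) // page - offset // page + 1


def place_aligned(cursor, length, page=PAGE_SIZE):
    """Advance the write cursor to the next page boundary when the record
    would otherwise straddle one (and could fit within a single page)."""
    if length <= page and io_count(cursor, length, page) > 1:
        cursor = (cursor // page + 1) * page
    return cursor
```

For example, a 200-byte metadata-plus-segment record written at offset 4000 spans two pages (two IOs); placing it at the next boundary, offset 4096, lets a single IO fetch it.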

Hinting Mechanism for Efficient Accelerator Services
20230236969 · 2023-07-27 ·

Solid State Drive (SSD) devices with hardware accelerators, and methods for apportioning storage resources in the SSD, are disclosed. SSDs typically comprise an array of non-volatile memory devices and a controller that manages access to them. The controller may also comprise one or more accelerators, either to improve the performance of the SSD itself or to offload specialized computation workloads from a host computing device. Different accelerators may be dynamically assigned portions of the non-volatile memory array according to the type of data being accessed and/or the throughput required. Provision is also made for the accelerators to access the data directly, bypassing the controller, and for a hinting mechanism that improves accelerator performance.
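The dynamic apportioning can be sketched as a controller that carves the non-volatile array into regions and grants them to accelerators in response to hints. The region granularity and the `hint`/`release` interface are illustrative assumptions, not the disclosed API.

```python
class SSDController:
    """Sketch of hint-driven apportioning of non-volatile memory regions
    among on-controller accelerators (assumed interface)."""
    def __init__(self, num_regions):
        self.free = set(range(num_regions))
        self.assigned = {}  # accelerator name -> set of region ids

    def hint(self, accelerator, regions_needed):
        # A host-supplied hint declares how much capacity an accelerator's
        # workload will need; the controller carves it out of the free pool.
        if regions_needed > len(self.free):
            raise MemoryError("not enough free non-volatile regions")
        grant = {self.free.pop() for _ in range(regions_needed)}
        self.assigned.setdefault(accelerator, set()).update(grant)
        return grant

    def release(self, accelerator):
        # Return an accelerator's regions to the shared pool when its
        # workload completes
        self.free |= self.assigned.pop(accelerator, set())
```

A compression accelerator with a large working set would hint for more regions than, say, a small checksum engine, and release them when done.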

CACHE-ASSISTED ROW HAMMER MITIGATION
20230236739 · 2023-07-27 ·

A system comprising row hammer mitigation circuitry and a cache memory that collaborate to mitigate row hammer attacks on a memory media device is described. The cache memory biases its cache policy based on row access count information maintained by the row hammer mitigation circuitry. The row hammer mitigation circuitry may be implemented in a memory controller. The memory media device may be DRAM. Corresponding methods are also described.
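The collaboration can be illustrated as follows: the mitigation circuitry's per-row access counters feed the cache, which pins heavily accessed (potential aggressor) rows so further accesses hit the cache instead of re-activating the DRAM row. The class name, threshold, and eviction rule are assumptions for illustration only.

```python
class RowHammerMitigator:
    """Sketch: row-hammer access counters bias the cache toward
    pinning hot (aggressor) rows (assumed interface)."""
    THRESHOLD = 3  # illustrative per-row access threshold

    def __init__(self, cache_capacity):
        self.counts = {}    # row id -> DRAM activation count
        self.cache = {}     # row id -> cached row data
        self.capacity = cache_capacity

    def access(self, row, fetch_from_dram):
        if row in self.cache:
            # Cache hit: no DRAM row activation, so no hammer pressure
            return self.cache[row]
        self.counts[row] = self.counts.get(row, 0) + 1
        data = fetch_from_dram(row)
        if self.counts[row] >= self.THRESHOLD:
            # Bias the cache policy: pin the hot row, evicting the
            # resident line with the fewest activations if needed
            if len(self.cache) >= self.capacity:
                victim = min(self.cache, key=lambda r: self.counts.get(r, 0))
                del self.cache[victim]
            self.cache[row] = data
        return data
```

Once a row crosses the threshold it is served from the cache, capping the number of DRAM activations an attacker can drive against it.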