Adaptive Cache Partitioning

Described apparatuses and methods partition a cache memory based, at least in part, on a metric indicative of prefetch performance. The amount of cache memory allocated to metadata related to prefetch operations, versus cache storage, can be adjusted based on operating conditions. To this end, the cache memory can be partitioned into a first portion allocated for metadata pertaining to an address space (prefetch metadata) and a second portion allocated for data associated with the address space (cache data). The amount of cache memory allocated to the first portion can be increased under workloads that are suitable for prefetching and decreased otherwise. The first portion may include one or more cache units, cache lines, cache ways, cache sets, or other resources of the cache memory.
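
As a rough illustration of this kind of way-based repartitioning, here is a minimal sketch assuming a 16-way cache and a single prefetch-accuracy metric. The thresholds, bounds, and names (repartition, partition_t) are illustrative assumptions, not taken from the abstract.

```c
/* Minimal sketch of way-based adaptive partitioning, assuming a 16-way
 * cache and a prefetch-accuracy metric in [0.0, 1.0]. All names and
 * thresholds are illustrative, not taken from the patent. */
#include <stdio.h>

#define TOTAL_WAYS      16
#define MIN_META_WAYS    1   /* always keep some room for prefetch metadata */
#define MAX_META_WAYS    8   /* never starve the cache-data partition */

typedef struct {
    int metadata_ways;  /* first portion: prefetch metadata */
    int data_ways;      /* second portion: cache data */
} partition_t;

/* Grow the metadata partition when the workload prefetches well,
 * shrink it otherwise. */
static void repartition(partition_t *p, double prefetch_accuracy)
{
    if (prefetch_accuracy > 0.6 && p->metadata_ways < MAX_META_WAYS)
        p->metadata_ways++;
    else if (prefetch_accuracy < 0.3 && p->metadata_ways > MIN_META_WAYS)
        p->metadata_ways--;
    p->data_ways = TOTAL_WAYS - p->metadata_ways;
}

int main(void)
{
    partition_t p = { 4, TOTAL_WAYS - 4 };
    double samples[] = { 0.8, 0.7, 0.2, 0.1 };  /* per-interval accuracy */
    for (int i = 0; i < 4; i++) {
        repartition(&p, samples[i]);
        printf("accuracy=%.1f -> metadata=%d ways, data=%d ways\n",
               samples[i], p.metadata_ways, p.data_ways);
    }
    return 0;
}
```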

Cache address mapping method and related device

This application discloses a cache address mapping method and a related device. The method includes: obtaining a binary file, the binary file including a first hot section; obtaining alignment information of a second hot section, the second hot section being a hot section that has already been loaded into the cache, where the alignment information includes the set index of the last cache set occupied by the second hot section; and performing an offset operation on the first hot section based on the alignment information. According to embodiments of the present invention, the problem of conflict misses in a cache with an N-way set-associative structure can be resolved without increasing physical hardware overhead, thereby improving the cache hit rate.
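
The offset operation can be pictured as padding the first hot section so that it begins in the cache set immediately after the last set occupied by the already-loaded second hot section. Below is a minimal sketch assuming 64-byte lines and 1024 sets; the geometry and the alignment_offset helper are hypothetical.

```c
/* Minimal sketch of the set-alignment offset, assuming a cache with
 * 64-byte lines and 1024 sets. Names are illustrative. */
#include <stdio.h>
#include <stdint.h>

#define LINE_SIZE  64u
#define NUM_SETS   1024u

static uint32_t set_index(uint32_t addr)
{
    return (addr / LINE_SIZE) % NUM_SETS;
}

/* Return how many bytes to shift the first hot section so it starts in
 * the set right after the last set the loaded (second) hot section uses. */
static uint32_t alignment_offset(uint32_t section_addr, uint32_t last_set)
{
    uint32_t target_set = (last_set + 1) % NUM_SETS;
    uint32_t current_set = set_index(section_addr);
    uint32_t delta_sets = (target_set + NUM_SETS - current_set) % NUM_SETS;
    return delta_sets * LINE_SIZE;
}

int main(void)
{
    uint32_t addr = 0x40000;  /* start of the first hot section */
    uint32_t last_set = 37;   /* from the alignment information */
    uint32_t off = alignment_offset(addr, last_set);
    printf("shift section by %u bytes (set %u -> %u)\n",
           (unsigned)off, (unsigned)set_index(addr),
           (unsigned)set_index(addr + off));
    return 0;
}
```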

Heterogeneous unified memory
09792227 · 2017-10-17

Inventive aspects include a heterogeneous unified memory section, which includes an extended unified memory space across a plurality of physical heterogeneous memory modules. A cold page reclamation logic section can receive and prioritize cold pages from a system memory. The cold pages can include a first subset of memory pages having a first type of memory data and a second subset of memory pages having a second type of memory data. For example, the cold pages can include anon-type memory pages and file-type memory pages. A dynamic tuning logic section can manage space allocation within the extended unified memory space. An intelligent page sort logic section can distribute the cold pages among different pools of physical heterogeneous memory modules based on varying characteristics of the pools, and based on the assigned priorities.
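
A toy version of the sorting step might look like the following; the two pools, the priority rule (anon pages over file pages), and all names are assumptions made for illustration.

```c
/* Minimal sketch of priority-based cold-page placement across two
 * hypothetical pools (e.g., fast NVM vs. slower flash-backed memory).
 * Pool names, priorities, and the placement rule are assumptions. */
#include <stdio.h>

typedef enum { PAGE_ANON, PAGE_FILE } page_type_t;
typedef enum { POOL_FAST_NVM, POOL_SLOW_FLASH } pool_t;

typedef struct {
    unsigned long pfn;
    page_type_t type;
    int priority;  /* higher = keep in the faster pool */
} cold_page_t;

/* Anon pages are costlier to refault (no file backing), so give them
 * higher priority than file-backed pages. */
static int prioritize(const cold_page_t *p)
{
    return p->type == PAGE_ANON ? 2 : 1;
}

static pool_t place(const cold_page_t *p)
{
    return p->priority >= 2 ? POOL_FAST_NVM : POOL_SLOW_FLASH;
}

int main(void)
{
    cold_page_t pages[] = {
        { 0x1000, PAGE_ANON, 0 },
        { 0x2000, PAGE_FILE, 0 },
    };
    for (int i = 0; i < 2; i++) {
        pages[i].priority = prioritize(&pages[i]);
        printf("pfn=0x%lx type=%s -> pool %s\n", pages[i].pfn,
               pages[i].type == PAGE_ANON ? "anon" : "file",
               place(&pages[i]) == POOL_FAST_NVM ? "fast-nvm" : "slow-flash");
    }
    return 0;
}
```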

ADAPTIVE BLOCK CACHE MANAGEMENT METHOD AND DBMS APPLYING THE SAME
20170293441 · 2017-10-12

An adaptive block cache management method and a DBMS applying the same are provided. A DB system according to an exemplary embodiment of the present disclosure includes: a cache configured to temporarily store DB data; a disk configured to permanently store the DB data; and a processor configured to determine whether to operate the cache according to a state of the DB system. Accordingly, the high-speed cache is adaptively managed according to the current state of the DBMS, such that DB processing speed can be improved.
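
The abstract leaves the decision rule open. As one hedged sketch, the cache could be toggled on the basis of its recent hit ratio and disk pressure; the state fields and thresholds below are assumptions.

```c
/* Minimal sketch of state-dependent cache enablement. The state fields
 * and thresholds are illustrative; the abstract only says the processor
 * decides whether to operate the cache according to the DB system state. */
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    double cache_hit_ratio;   /* recent block-cache hit ratio */
    double disk_queue_depth;  /* pending disk I/Os */
} db_state_t;

/* Enable the cache only when it is paying off, or when the disk is the
 * bottleneck and any hit helps. */
static bool should_operate_cache(const db_state_t *s)
{
    return s->cache_hit_ratio > 0.2 || s->disk_queue_depth > 32.0;
}

int main(void)
{
    db_state_t s = { .cache_hit_ratio = 0.05, .disk_queue_depth = 4.0 };
    printf("cache %s\n", should_operate_cache(&s) ? "on" : "off");
    return 0;
}
```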

SYSTEM PROBE AWARE LAST LEVEL CACHE INSERTION BYPASSING
20220050785 · 2022-02-17

Systems, apparatuses, and methods for employing system probe filter aware last level cache insertion bypassing policies are disclosed. A system includes a plurality of processing nodes, a probe filter, and a shared cache. The probe filter monitors the rate at which recall probes are generated, and if the rate exceeds a first threshold, the system initiates a cache partitioning and monitoring phase for the shared cache, partitioning the cache into two portions. If the hit rate of the first portion is greater than a second threshold, the second portion uses a non-bypass insertion policy, since the cache is relatively useful in this scenario. However, if the hit rate of the first portion is less than or equal to the second threshold, the second portion uses a bypass insertion policy, since the cache is less useful in this case.
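
The decision structure described above condenses to a few lines. In the sketch below, the two threshold values and the per-interval sampling interface are assumptions; only the branching logic follows the abstract.

```c
/* Sketch of the two-threshold policy described in the abstract. The
 * threshold values and sampling interface are assumptions; the decision
 * structure follows the text. */
#include <stdbool.h>
#include <stdio.h>

#define PROBE_RATE_THRESHOLD  1000.0  /* recall probes per interval (T1) */
#define HIT_RATE_THRESHOLD    0.5     /* first-portion hit rate (T2) */

typedef enum { INSERT_NORMAL, INSERT_BYPASS } insertion_policy_t;

static bool partitioning_phase = false;
static insertion_policy_t second_portion_policy = INSERT_NORMAL;

static void on_interval(double recall_probe_rate, double first_portion_hit_rate)
{
    if (!partitioning_phase) {
        /* High recall-probe traffic triggers the partition-and-monitor phase. */
        if (recall_probe_rate > PROBE_RATE_THRESHOLD)
            partitioning_phase = true;
        return;
    }
    /* In the monitoring phase, the first portion's hit rate decides the
     * second portion's insertion policy. */
    second_portion_policy = (first_portion_hit_rate > HIT_RATE_THRESHOLD)
                                ? INSERT_NORMAL   /* cache is useful */
                                : INSERT_BYPASS;  /* cache is not helping */
}

int main(void)
{
    on_interval(1500.0, 0.0);   /* triggers partitioning */
    on_interval(1500.0, 0.3);   /* low hit rate -> bypass */
    printf("second portion policy: %s\n",
           second_portion_policy == INSERT_BYPASS ? "bypass" : "non-bypass");
    return 0;
}
```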

Caching systems and methods for hard disk drives and hybrid drives
09733841 · 2017-08-15

A system includes a read/write module and a caching module. The read/write module is configured to access a first portion of a recording surface of a rotating storage device, where data is stored at a first density. The caching module is configured to cache data on a second portion of the recording surface at a second density. The second portion is separate from the first portion, and the second density is less than the first density.
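
One way to picture the arrangement is a write path that steers small random writes to the lower-density cache region and everything else to the main region. The zone boundaries and routing rule below are illustrative assumptions; the abstract specifies only the two regions and their densities.

```c
/* Minimal sketch of routing writes either to the main (high-density)
 * region or to the on-platter cache (lower-density) region. Zone
 * boundaries and the routing rule are illustrative. */
#include <stdbool.h>
#include <stdio.h>
#include <stdint.h>

#define MAIN_REGION_START   0ull          /* first portion, density d1 */
#define CACHE_REGION_START  1000000000ull /* second portion, density d2 < d1 */

/* Small random writes go to the low-density cache region first; they
 * can be flushed to the main region later in larger sequential batches. */
static uint64_t route_write(uint64_t lba, size_t len, bool sequential)
{
    if (!sequential && len < 64 * 1024)
        return CACHE_REGION_START + lba;  /* cache it */
    return MAIN_REGION_START + lba;       /* write in place */
}

int main(void)
{
    printf("4 KiB random write -> physical %llu\n",
           (unsigned long long)route_write(42, 4096, false));
    return 0;
}
```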

Method and apparatus for history-based snooping of last level caches

A method and apparatus for snooping caches is disclosed. In one embodiment, a system includes a number of processing nodes and a cache shared by the processing nodes. The cache is partitioned such that each processing node utilizes only its assigned partition. If a query by a processing node to its assigned partition of the cache results in a miss, a cache controller may determine whether to snoop other partitions in search of the requested information. The determination may be based on a history of where requested information was found in response to previous misses in that partition.
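
A saturating counter per partition is one plausible realization of that history; the counter width and decision threshold below are assumptions.

```c
/* Sketch of a history-based snoop decision, assuming a saturating
 * counter per partition that tracks how often past misses were served
 * by another partition. All structure here is an assumption. */
#include <stdbool.h>
#include <stdio.h>

#define NUM_PARTITIONS 4
#define COUNTER_MAX    7

/* hits_elsewhere[p] saturates up when a miss in partition p was later
 * found in another partition, and decays when it was not. */
static int hits_elsewhere[NUM_PARTITIONS];

static bool should_snoop(int partition)
{
    return hits_elsewhere[partition] >= 4;  /* history says snooping pays off */
}

static void record_miss_outcome(int partition, bool found_in_other_partition)
{
    if (found_in_other_partition) {
        if (hits_elsewhere[partition] < COUNTER_MAX)
            hits_elsewhere[partition]++;
    } else if (hits_elsewhere[partition] > 0) {
        hits_elsewhere[partition]--;
    }
}

int main(void)
{
    for (int i = 0; i < 5; i++)
        record_miss_outcome(0, true);   /* misses kept hitting elsewhere */
    printf("snoop on next miss in partition 0? %s\n",
           should_snoop(0) ? "yes" : "no");
    return 0;
}
```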

Memory system with first cache for storing uncompressed look-up table segments and second cache for storing compressed look-up table segments

A memory system is connectable to a host. The memory system includes: a nonvolatile first memory; a second memory storing a plurality of pieces of first information, each correlating a logical address indicating a location in a logical address space of the memory system with a physical address indicating a location in the first memory; a volatile third memory including a first cache and a second cache; a compressor configured to compress the pieces of first information; and a memory controller. The memory controller stores first information not compressed by the compressor in the first cache, stores first information compressed by the compressor in the second cache, and controls the ratio between a first capacity, which is the capacity of the first cache, and a second capacity, which is the capacity of the second cache.
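
How the controller picks the ratio is not stated; one hedged sketch is to grow whichever cache the workload rewards, within a fixed RAM budget. The budget, step size, and tuning rule below are assumptions.

```c
/* Minimal sketch of controlling the capacity ratio between an
 * uncompressed lookup-table cache and a compressed one inside a fixed
 * RAM budget. The adjustment rule is an assumption. */
#include <stdio.h>

#define TOTAL_BYTES (64u * 1024u * 1024u)  /* volatile third-memory budget */

typedef struct {
    unsigned uncompressed_bytes;  /* first cache: fast, larger per entry */
    unsigned compressed_bytes;    /* second cache: slower, denser */
} cache_budget_t;

/* Favor the uncompressed cache when it is hit often (latency matters),
 * and the compressed cache otherwise (coverage matters). */
static void tune_ratio(cache_budget_t *b, double uncompressed_hit_rate)
{
    unsigned step = TOTAL_BYTES / 16;
    if (uncompressed_hit_rate > 0.8 &&
        b->uncompressed_bytes + step <= TOTAL_BYTES - step)
        b->uncompressed_bytes += step;
    else if (uncompressed_hit_rate < 0.4 && b->uncompressed_bytes > step)
        b->uncompressed_bytes -= step;
    b->compressed_bytes = TOTAL_BYTES - b->uncompressed_bytes;
}

int main(void)
{
    cache_budget_t b = { TOTAL_BYTES / 2, TOTAL_BYTES / 2 };
    tune_ratio(&b, 0.9);  /* high hit rate -> grow the uncompressed cache */
    printf("uncompressed=%u MiB, compressed=%u MiB\n",
           b.uncompressed_bytes >> 20, b.compressed_bytes >> 20);
    return 0;
}
```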

Method and apparatus for controlling cache line storage in cache memory
11237972 · 2022-02-01

A method and apparatus physically partition clean and dirty cache lines into separate memory partitions, such as one or more banks, so that during low power operation a cache memory controller reduces power consumption of the cache memory containing only clean data. The cache memory controller controls the refresh operation so that no data refresh occurs for the clean-only banks, or the refresh rate is reduced for those banks. Partitions that store dirty data can also store clean data; however, other partitions are designated for storing only clean data so that their refresh rate can be reduced, or refresh stopped, for periods of time. When multiple DRAM dies or packages are employed, the partitioning can occur at the die or package level rather than at the bank level within a die.
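
The refresh-side behavior reduces to a per-bank decision. In the sketch below, the dirty-designation flag and the interval values are illustrative assumptions.

```c
/* Sketch of the refresh decision for clean-only banks, assuming a
 * per-bank dirty-designation flag. Refresh-rate values are illustrative. */
#include <stdbool.h>
#include <stdio.h>

#define NUM_BANKS        8
#define NORMAL_RATE_MS   64   /* standard DRAM refresh interval */
#define REDUCED_RATE_MS  256  /* relaxed interval for clean-only banks */

typedef struct {
    bool may_hold_dirty;  /* partition designated for dirty (and clean) lines */
} bank_t;

/* In low-power mode, clean-only banks are refreshed slowly (or not at
 * all): their contents can be refetched from backing memory if lost. */
static int refresh_interval_ms(const bank_t *b, bool low_power)
{
    if (low_power && !b->may_hold_dirty)
        return REDUCED_RATE_MS;  /* or 0 to skip refresh entirely */
    return NORMAL_RATE_MS;
}

int main(void)
{
    bank_t banks[NUM_BANKS] = { { true }, { true } };  /* rest are clean-only */
    for (int i = 0; i < NUM_BANKS; i++)
        printf("bank %d: refresh every %d ms\n", i,
               refresh_interval_ms(&banks[i], true));
    return 0;
}
```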

Semiconductor apparatus and operating method thereof

A semiconductor apparatus may include: a buffer configured to store write data received in response to a write request from a host; a memory device configured to store data evicted from the buffer; and a controller configured to control the buffer and the memory device to process the write request.
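
The abstract only fixes the buffer-evict-store flow; a minimal sketch, assuming a tiny FIFO buffer and a stubbed memory-device write, follows.

```c
/* Minimal sketch of the buffer/eviction flow the abstract outlines:
 * writes land in a small buffer, and evicted entries are written to the
 * memory device. Sizes and the FIFO policy are assumptions. */
#include <stdio.h>

#define BUF_ENTRIES 4
#define ENTRY_SIZE  16

static char buffer[BUF_ENTRIES][ENTRY_SIZE];
static int head;  /* next slot to fill (FIFO eviction) */

static void memory_device_write(const char *data)
{
    printf("evict -> memory device: %s\n", data);
}

/* Controller: accept a write; if the target slot is occupied, evict it
 * to the memory device first. */
static void handle_write(const char *data)
{
    if (buffer[head][0] != '\0')
        memory_device_write(buffer[head]);
    snprintf(buffer[head], ENTRY_SIZE, "%s", data);
    head = (head + 1) % BUF_ENTRIES;
}

int main(void)
{
    const char *reqs[] = { "w0", "w1", "w2", "w3", "w4" };
    for (int i = 0; i < 5; i++)
        handle_write(reqs[i]);  /* "w4" evicts "w0" */
    return 0;
}
```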