Patent classifications
G06F12/126
Cache grouping for increasing performance and fairness in shared caches
A method includes monitoring one or more metrics for each of a plurality of cache users sharing a cache, and assigning each of the plurality of cache users to one of a plurality of groups based on the monitored one or more metrics.
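As a rough illustration of the kind of grouping step this abstract describes, the following minimal sketch assigns cache users to groups from two hypothetical monitored metrics (miss rate and access count) using fixed thresholds; the metric names, group names, and threshold values are illustrative assumptions, not taken from the patent.

    # Sketch: assign cache users to groups from monitored metrics
    # (hypothetical metric names and thresholds).
    def assign_groups(user_metrics, miss_rate_threshold=0.5, access_threshold=1000):
        """user_metrics maps user_id -> {"miss_rate": float, "accesses": int}."""
        groups = {"high_reuse": [], "streaming": [], "low_priority": []}
        for user_id, m in user_metrics.items():
            if m["accesses"] < access_threshold:
                groups["low_priority"].append(user_id)   # rarely touches the cache
            elif m["miss_rate"] > miss_rate_threshold:
                groups["streaming"].append(user_id)      # misses often: little reuse
            else:
                groups["high_reuse"].append(user_id)     # benefits most from cache space
        return groups

    print(assign_groups({"core0": {"miss_rate": 0.1, "accesses": 5000},
                         "core1": {"miss_rate": 0.9, "accesses": 8000}}))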
Machine learning to improve caching efficiency in a storage system
A system and method improve caching efficiency in a data storage system by performing machine learning processes on metadata relating to extents of data blocks, rather than on individual blocks themselves. Thus, once the storage devices are divided into extents, various metadata regarding access to the blocks within each extent are aggregated, and per-extent features are extracted. These features are used to train a data regression model that is subsequently used to infer a most likely “hotness” value for each extent at a future time. These predicted values, which may be further classified as, e.g., “hot”, “warm”, and “cold” using thresholds, are used to implement the cache replacement policy. Embodiments scale to large and multi-layered caches, and may avoid common caching problems such as thrashing by adjusting the extent size. Policy goal functions may be optimized by dynamically adjusting the classification thresholds.
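A minimal sketch of this pipeline, assuming a single aggregated feature per extent (total block accesses), a plain least-squares predictor in place of whatever regression model an implementation would actually use, and arbitrary “hot”/“warm” thresholds:

    # Sketch: per-extent feature aggregation, a simple least-squares "hotness"
    # predictor, and threshold classification (all values illustrative).
    def aggregate_extent_features(block_accesses, blocks_per_extent):
        """block_accesses: access counts per block; returns per-extent totals."""
        return [sum(block_accesses[i:i + blocks_per_extent])
                for i in range(0, len(block_accesses), blocks_per_extent)]

    def fit_line(xs, ys):
        """Ordinary least squares for y = a*x + b (one feature per extent)."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        var = sum((x - mx) ** 2 for x in xs) or 1.0
        a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / var
        return a, my - a * mx

    def classify(hotness, hot=100.0, warm=10.0):
        return "hot" if hotness >= hot else "warm" if hotness >= warm else "cold"

    # Train on last epoch's features vs. the accesses observed in the next epoch,
    # then predict the coming epoch and drive the replacement policy from labels.
    prev = aggregate_extent_features([5, 9, 0, 1, 80, 70, 2, 3], blocks_per_extent=2)
    observed_next = [20, 1, 160, 4]
    a, b = fit_line(prev, observed_next)
    predicted = [a * x + b for x in prev]
    print([classify(h) for h in predicted])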
Memory system and data processing system including the same
A data processing system includes a compute blade generating a write command to store data and a read command to read the data, and a memory blade. The compute blade has a memory that stores information about performance characteristics of each of a plurality of memories, and determines priority information through which eviction of a cache line is carried out based on the stored information.
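One way to read the eviction step above is that the stored performance characteristics of the backing memories feed a victim-selection priority. The sketch below assumes a simple “cheapest refill first” rule and made-up latency figures; both are illustrative, not the claimed policy.

    # Sketch: pick an eviction victim from stored memory performance data.
    MEMORY_PROFILE = {"dram": {"read_latency_ns": 80},
                      "pmem": {"read_latency_ns": 300}}

    def pick_victim(cache_lines):
        """cache_lines: dicts with 'tag', 'backing_memory', 'last_access'."""
        def refill_cost(line):
            return MEMORY_PROFILE[line["backing_memory"]]["read_latency_ns"]
        # Prefer lines that are cheap to bring back, breaking ties by age.
        return min(cache_lines, key=lambda l: (refill_cost(l), l["last_access"]))

    victim = pick_victim([
        {"tag": 0x1A, "backing_memory": "pmem", "last_access": 10},
        {"tag": 0x2B, "backing_memory": "dram", "last_access": 50},
    ])
    print(hex(victim["tag"]))  # 0x2b: the DRAM-backed line is cheaper to re-fetch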
Adjustable memory operation settings based on memory sub-system operating requirements
A priority for each operating requirement of a set of operating requirements of a memory sub-system can be determined. A programming operation setting for a programming operation to be performed at the memory sub-system can be determined based on the priority for each operating requirement. A request to perform the programming operation at the memory sub-system can be received. Responsive to receiving the request to perform the programming operation, the programming operation can be performed at the memory sub-system based on the programming operation setting.
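A minimal sketch of the flow this abstract describes, assuming hypothetical requirement names, settings, and a rule that the highest-priority requirement selects the programming setting; none of these specifics come from the patent.

    # Sketch: derive a programming-operation setting from prioritized
    # operating requirements (names and settings are illustrative).
    REQUIREMENT_TO_SETTING = {
        "low_latency": {"program_step": "coarse", "verify_passes": 1},
        "high_endurance": {"program_step": "fine", "verify_passes": 3},
        "low_power": {"program_step": "fine", "verify_passes": 2},
    }

    def select_programming_setting(requirement_priorities):
        """Maps requirement name -> priority (lower value = more important)."""
        top = min(requirement_priorities, key=requirement_priorities.get)
        return REQUIREMENT_TO_SETTING[top]

    def program(data, setting):
        # Placeholder for issuing the programming operation with the setting.
        print(f"programming {len(data)} bytes with {setting}")

    setting = select_programming_setting({"low_latency": 2, "high_endurance": 1})
    program(b"\x00" * 4096, setting)   # performed per the pre-determined setting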
TECHNOLOGY FOR EARLY ABORT OF COMPRESSION ACCELERATION
An integrated circuit includes a compression accelerator to process a request from software to compress source data into an output file. The compression accelerator includes early-abort circuitry to provide for early abort of compression operations. In particular, the compression accelerator uses a predetermined sample size to compute an estimated size for a portion of the output file. The sample size specifies how much of the source data is to be analyzed before computing the estimated size. The compression accelerator also determines whether the estimated size reflects an acceptable amount of compression, based on a predetermined early-abort threshold. The compression accelerator aborts the request if the estimated size does not reflect the acceptable amount of compression. The compression accelerator may complete the request if the estimated size reflects the acceptable amount of compression. Other embodiments are described and claimed.
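A software sketch of the early-abort idea, using zlib as a stand-in for the hardware accelerator: compress only a predetermined sample of the source, extrapolate an estimated output size, and abort if the estimate misses a ratio threshold. The sample size and threshold values are assumptions.

    import zlib

    # Sketch: estimate compressibility from a sample before doing the full job.
    def compress_with_early_abort(source, sample_size=4096, min_ratio=1.2):
        sample = source[:sample_size]
        estimated_total = len(zlib.compress(sample)) * (len(source) / max(len(sample), 1))
        if len(source) / max(estimated_total, 1) < min_ratio:
            return None                       # abort: not enough compression expected
        return zlib.compress(source)          # complete the request

    result = compress_with_early_abort(b"abc" * 100000)
    print("aborted" if result is None else f"compressed to {len(result)} bytes")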
COMPUTER-READABLE RECORDING MEDIUM STORING DATA PLACEMENT PROGRAM, PROCESSOR, AND DATA PLACEMENT METHOD
A data placement program causes a computer to execute a process of data placement in a main memory and a cache memory. When an operation is performed using a plurality of first data groups and a plurality of second data groups to generate a plurality of pieces of operation result data representing results of the operation, the process determines, based on a size of one piece of the operation result data and a size of an operation result area in the cache memory that stores some of the pieces of operation result data, a number of the first data groups and a number of the second data groups corresponding to those pieces of operation result data, and places the first data groups and the second data groups in the main memory based on the determined numbers.
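Read this way, the determination resembles choosing a tile size so that a block of result pieces fits in the cache's result area, then laying the input groups out accordingly. The sketch below assumes a square tile, one result piece per (first, second) pair, and arbitrary byte sizes; it is an interpretation, not the claimed method.

    import math

    # Sketch: pick n1 x n2 result pieces that fit the cache's result area.
    def choose_group_counts(result_piece_bytes, result_area_bytes):
        max_pieces = result_area_bytes // result_piece_bytes
        n = max(1, int(math.isqrt(max_pieces)))   # near-square tile of result pieces
        return n, n

    def place_in_main_memory(first_groups, second_groups, n1, n2):
        """Interleave groups tile by tile so each tile's inputs sit adjacent in memory."""
        layout = []
        for i in range(0, len(first_groups), n1):
            for j in range(0, len(second_groups), n2):
                layout.extend(first_groups[i:i + n1])
                layout.extend(second_groups[j:j + n2])
        return layout

    n1, n2 = choose_group_counts(result_piece_bytes=64, result_area_bytes=32 * 1024)
    print(n1, n2)   # e.g. 22 x 22 result pieces per tile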
Method To Use Flat Relink Table In HMB
A data storage device includes a non-volatile memory (NVM) device and a controller coupled to the NVM device. The controller is configured to create a bad block table that tracks bad blocks of the NVM device, send the bad block table to a host memory location, and check the bad block table to determine whether a block to be read or written to is bad. The controller is further configured to request information on a bad block from the bad block table located in the host memory location, determine that the requested information is not available from the host memory location, and retrieve the requested information from a location separate from the host memory location. The sum of the times to generate a request to check the flat relink table, execute the request, and retrieve the requested information is less than the time to process a host command.
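A minimal sketch of the lookup path described above: check the relink (bad block) table copy held in host memory first, and fall back to the controller's local copy when the entry is unavailable there. The class name, table layout, and methods are illustrative assumptions.

    # Sketch: host-memory-buffer (HMB) lookup with local fallback.
    class FlatRelinkTable:
        def __init__(self, local_table):
            self.local_table = local_table        # authoritative copy on the device
            self.host_copy = dict(local_table)    # copy pushed to host memory

        def is_bad(self, block):
            entry = self.host_copy.get(block)     # fast path: host memory buffer
            if entry is None:                     # info not available from the host
                entry = self.local_table.get(block, False)
            return entry

    table = FlatRelinkTable({42: True, 43: False})
    table.host_copy.pop(42)                       # simulate a missing HMB entry
    print(table.is_bad(42), table.is_bad(7))      # True False (block 7 never relinked)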