Patent classifications
G06F2212/261
REDUCING CONCURRENCY OF GARBAGE COLLECTION OPERATIONS
Methods, computing systems and computer program products implement embodiments of the present invention that include identifying, in a storage system including multiple storage devices having respective sets of storage regions, respective default low storage region thresholds that are used for garbage collection. For each given storage device, a time threshold and an alternative low storage region threshold greater than the default low storage region threshold for the given storage device are defined. While processing input/output operations for each given storage device, a count of unused storage regions in the given storage device is maintained and a timer is initialized; upon the timer matching the time threshold for the given storage device and the count of unused storage regions being less than or equal to the alternative low storage region threshold, a garbage collection operation is initiated. In some embodiments, processing the input/output operations includes using a log-structured array format.
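The timer-plus-threshold trigger described above can be sketched as follows. This is a minimal illustration, not the patented implementation; the class and method names (`StorageDevice`, `should_collect`) and the numeric values are invented for the example.

```python
class StorageDevice:
    """Toy model of one device's garbage-collection trigger state."""

    def __init__(self, total_regions, default_low, alt_low, time_threshold):
        self.unused_regions = total_regions
        self.default_low = default_low        # default low storage region threshold
        self.alt_low = alt_low                # alternative threshold (> default)
        self.time_threshold = time_threshold  # time before the alternative applies
        self.timer = 0                        # initialized while processing I/O

    def tick(self, seconds=1):
        self.timer += seconds

    def should_collect(self):
        # Before the timer matches the time threshold, only the stricter
        # default threshold triggers collection; afterward, the larger
        # alternative threshold makes collection start earlier.
        if self.timer >= self.time_threshold:
            return self.unused_regions <= self.alt_low
        return self.unused_regions <= self.default_low
```

For example, a device with 15 unused regions, a default threshold of 5 and an alternative threshold of 20 would not collect until its timer reaches the time threshold, at which point the alternative threshold applies.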
METHOD AND APPARATUS FOR IMPROVING PERFORMANCE OF SEQUENTIAL LOGGING IN A STORAGE DEVICE
In one embodiment, an apparatus comprises a storage device to receive, from a computing host, a request to append data to a data log. The storage device is further to identify a memory location after a last segment of the data log, append the data to the data log by writing the data to the memory location after the last segment of the data log, and provide, to the computing host, a key comprising an identification of the memory location at which the data was appended to the data log.
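The append interface above can be modeled with a small in-memory sketch: the device writes at the location after the last segment of the log and returns a key identifying that location. The names (`LogDevice`, `append`, `read`) are assumptions for illustration, not drawn from the patent.

```python
class LogDevice:
    """In-memory stand-in for a storage device exposing an append-to-log API."""

    def __init__(self):
        self.log = bytearray()

    def append(self, data: bytes) -> int:
        offset = len(self.log)   # memory location after the last segment
        self.log.extend(data)    # append by writing at that location
        return offset            # key identifying where the data landed

    def read(self, key: int, length: int) -> bytes:
        return bytes(self.log[key:key + length])
```

Returning the key to the computing host lets it address the appended record later without tracking log offsets itself.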
MEMORY PROTOCOL
The present disclosure includes apparatuses and methods related to a memory protocol. An example apparatus can perform operations on a number of block buffers of a memory device based on commands received from a host using a block configuration register, wherein the operations can read data from the number of block buffers and write data to the number of block buffers on the memory device.
Data-relationship-based fast cache system
A data-relationship-based FAST cache system includes a storage controller that is coupled to first storage device(s) and second storage device(s). The storage controller identifies a relationship between first data stored in the first storage device(s) and second data stored in the first storage device(s), with the relationship based on a difference between a first number of accesses of the first data associated with a first time period and a second number of accesses of the second data associated with the first time period being within an access difference threshold range. Subsequent to identifying the relationship, the storage controller determines that the first data has been accessed in the first storage device(s) a number of times within a second time period that exceeds a FAST cache threshold and, in response, moves both the first data and the second data to the second storage device(s) based on the relationship.
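The relationship test and joint promotion can be sketched in a few lines. The thresholds, function names, and block representation below are invented for the example; the patent does not specify them.

```python
ACCESS_DIFF_THRESHOLD = 2   # access-difference threshold range (+/-)
FAST_CACHE_THRESHOLD = 3    # accesses needed to trigger promotion


def related(accesses_a, accesses_b):
    # Two blocks are related if their access counts over the same
    # time period differ by no more than the threshold range.
    return abs(accesses_a - accesses_b) <= ACCESS_DIFF_THRESHOLD


def maybe_promote(first, second, first_accesses, fast_cache):
    # When the first block's later access count exceeds the FAST cache
    # threshold, move BOTH blocks to the faster tier based on the relationship.
    if first_accesses > FAST_CACHE_THRESHOLD:
        fast_cache[first[0]] = first[1]
        fast_cache[second[0]] = second[1]


fast = {}
a, b = ("blk_a", b"A"), ("blk_b", b"B")
if related(10, 9):             # similar access counts in period one
    maybe_promote(a, b, 10, fast)  # period-two count exceeds the threshold
```

The point of the relationship is that the second block rides along: it is promoted because of its correlation with the first, not because of its own access count.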
CACHE ALLOCATION TECHNIQUES
Methods, systems, and devices for cache allocation techniques are described. The memory system may receive a write command that includes a stream identification (ID) associated with performance constraints for data streams. The memory system may determine an application identification (ID) that indicates a type of data based on the stream ID of the write command. In some cases, the memory system may assign a level associated with the application ID that indicates an amount of data to be written for the write operation and allocate an amount of space available to be written for the write operation in a single-level cell cache based on the assigned level. The memory system may write the data to the single-level cell cache within the amount of space of the single-level cell cache.
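A minimal sketch of that allocation path follows: a stream ID maps to an application ID, the application ID maps to a level, and the level bounds how much single-level cell (SLC) cache space the write may use. Both lookup tables and all names here are invented for illustration.

```python
STREAM_TO_APP = {7: "camera", 3: "logs"}   # stream ID -> application ID
APP_LEVEL = {"camera": 64, "logs": 8}      # application ID -> level (MiB to reserve)


class SlcCache:
    """Toy single-level cell cache with per-stream space allocation."""

    def __init__(self, capacity_mib):
        self.free = capacity_mib
        self.allocations = {}

    def handle_write(self, stream_id, size_mib):
        app = STREAM_TO_APP.get(stream_id, "default")
        level = APP_LEVEL.get(app, 4)
        grant = min(level, self.free)       # allocate space available to be written
        self.free -= grant
        self.allocations[stream_id] = grant
        # Data is written within the allocated amount of SLC cache space.
        return min(size_mib, grant)
```

Keying the level off an application ID rather than the raw stream ID lets many streams of the same data type share one allocation policy.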
Caching and deduplication of data blocks in cache memory
Techniques for deduplicating data in cache memory include determining that a first data block stored in the cache memory matches a second data block stored in the cache memory. It is further determined that a number of accesses associated with at least one of the first data block or the second data block is equal to or greater than a threshold number of accesses. In response to determining that the number of accesses is equal to or greater than the threshold number of accesses, the first data block is deduplicated in the cache memory.
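The threshold-gated deduplication above can be sketched as a single pass over the cache: identical blocks are collapsed only once at least one of the pair is hot enough. The data structures and the threshold value are assumptions for the example.

```python
THRESHOLD_ACCESSES = 3


def dedupe(cache, accesses):
    """Deduplicate matching blocks whose access count meets the threshold.

    cache: block id -> block content; accesses: block id -> access count.
    """
    seen = {}  # content -> id of the surviving copy
    for blk_id in sorted(cache):  # sorted() snapshots keys, so deletion is safe
        content = cache[blk_id]
        if content in seen:
            keeper = seen[content]
            hot = max(accesses.get(blk_id, 0), accesses.get(keeper, 0))
            if hot >= THRESHOLD_ACCESSES:
                del cache[blk_id]  # collapse the duplicate onto the keeper
        else:
            seen[content] = blk_id
    return cache
```

Gating on the access count means cold duplicates are left alone, avoiding dedup bookkeeping for blocks unlikely to be read again.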
Systems and methods for expanding memory for a system on chip
Systems and methods are disclosed for expanding memory for a system on chip (SoC). A memory card loaded in an expandable memory socket is electrically coupled to the SoC via an expansion bus. The memory card comprises a first volatile memory device. In response to detecting the memory card, an expanded virtual memory map is configured. The expanded virtual memory map comprises a first virtual memory space associated with the first volatile memory device and a second virtual memory space associated with a second volatile memory device electrically coupled to the SoC via a memory bus. One or more peripheral images associated with the second virtual memory space are relocated to a first portion of the first virtual memory space. A second portion of the first virtual memory space is configured as a block device for performing swap operations associated with the second virtual memory space.
MANAGING PREFETCHING OPERATIONS FOR DRIVES IN DISTRIBUTED STORAGE SYSTEMS
Systems and methods are provided for managing prefetching operations for read requests for drives in a distributed storage system. For example, a system can determine that a first drive of a plurality of drives is powered on. Prior to receiving a read request for reading a first set of data from the first drive, the system can enable a prefetching operation for prefetching the first set of data from the first drive to be written to a cache. The system may power off the first drive. The system may then receive a read request for reading the first set of data from the first drive. In response to receiving the read request, the system may read the first set of data from the cache.
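The power-state-aware flow can be sketched as follows: while a drive is powered on, its data is staged into a cache, so a later read can be served even after the drive is powered off. All class and method names here are invented for the illustration.

```python
class PrefetchManager:
    """Toy model of prefetching drive data into a cache before power-off."""

    def __init__(self):
        self.cache = {}    # (drive_id, key) -> data
        self.powered = {}  # drive_id -> bool

    def power_on(self, drive_id, drive_data):
        self.powered[drive_id] = True
        # Prefetching is enabled before any read request arrives:
        # stage the drive's data into the cache while it is powered on.
        self.cache.update({(drive_id, k): v for k, v in drive_data.items()})

    def power_off(self, drive_id):
        self.powered[drive_id] = False

    def read(self, drive_id, key):
        # A read request is served from the cache, so it succeeds
        # even when the backing drive is powered off.
        return self.cache.get((drive_id, key))
```

Serving the read from cache lets the system keep rarely used drives spun down without paying a power-on latency on every read.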
Vector processor storage
A method comprising: receiving, at a vector processor, a request to store data; performing, by the vector processor, one or more transforms on the data; and directly instructing, by the vector processor, one or more storage devices to store the data; wherein performing one or more transforms on the data comprises: erasure encoding the data to generate n data fragments configured such that any k of the data fragments are usable to regenerate the data, where k is less than n; and wherein directly instructing one or more storage devices to store the data comprises: directly instructing the one or more storage devices to store the n data fragments.
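The k-of-n property can be demonstrated with the simplest possible erasure code, single XOR parity with n = k + 1: any k of the n fragments regenerate the data. A production system would use a stronger code such as Reed-Solomon; this sketch (all names invented) only illustrates the property claimed above.

```python
def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))


def encode(data, k):
    """Split data into k fragments plus one XOR parity fragment (n = k + 1)."""
    size = -(-len(data) // k) if data else 1   # fragment size, rounded up
    padded = data.ljust(size * k, b"\0")
    frags = [padded[i * size:(i + 1) * size] for i in range(k)]
    parity = frags[0]
    for f in frags[1:]:
        parity = xor_bytes(parity, f)
    return frags + [parity]                    # n fragments in total


def decode(fragments, k, length):
    """Regenerate data from any k of the n fragments.

    fragments: list of (index, bytes) pairs; index k is the parity fragment.
    """
    present = dict(fragments)
    missing = [i for i in range(k) if i not in present]
    if missing:
        (m,) = missing                         # single parity tolerates one loss
        acc = present[k]                       # start from parity, XOR the rest
        for i in range(k):
            if i != m:
                acc = xor_bytes(acc, present[i])
        present[m] = acc
    return b"".join(present[i] for i in range(k))[:length]
```

Losing any one fragment (data or parity) still leaves k fragments, which is enough to reconstruct the original bytes.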
Integrating host-side storage device management with host-side non-volatile memory
The present disclosure relates to the field of solid-state data storage, and particularly to improving the speed performance and reducing the cost of solid-state data storage devices. A host-managed data storage system according to embodiments includes a set of storage devices, each storage device including a write buffer and memory; and a host coupled to the set of storage devices, the host including: a storage device management module for managing data storage functions for each storage device; memory including: a front-end write buffer; a first mapping table for data stored in the front-end write buffer; and a second mapping table for data stored in the memory of each storage device.
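The host-side layout above, a front-end write buffer with its own mapping table plus a second table for data already flushed to device memory, can be sketched as follows. The class, fields, and flush interface are assumptions for the example, not the patent's design.

```python
class HostManagedStore:
    """Toy host-side view: front-end write buffer plus two mapping tables."""

    def __init__(self):
        self.write_buffer = {}  # front-end write buffer: logical addr -> data
        self.buffer_map = {}    # first mapping table: data in the write buffer
        self.device_map = {}    # second mapping table: logical -> (device, addr)
        self.devices = {}       # device id -> {addr: data}

    def write(self, logical, data):
        # New writes land in the host's front-end write buffer first.
        self.write_buffer[logical] = data
        self.buffer_map[logical] = logical

    def flush(self, logical, dev_id, addr):
        # Move buffered data into a device's memory and update the tables.
        data = self.write_buffer.pop(logical)
        self.buffer_map.pop(logical)
        self.devices.setdefault(dev_id, {})[addr] = data
        self.device_map[logical] = (dev_id, addr)

    def read(self, logical):
        if logical in self.buffer_map:   # first table: still in the buffer
            return self.write_buffer[logical]
        dev_id, addr = self.device_map[logical]  # second table: on a device
        return self.devices[dev_id][addr]
```

Checking the buffer-side table before the device-side table ensures reads always see the newest copy of the data.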