Patent classifications
G06F2212/461
COMPUTER INCLUDING CACHE USED IN PLURAL DIFFERENT DATA SIZES AND CONTROL METHOD OF COMPUTER
A computer includes a memory and a cache that holds part of the data stored in the memory in any of a plurality of data regions. When first data of a first data size held in the cache is replaced with second data of a second data size larger than the first data size, allocation of the data regions of the cache is changed in units of the second data size by referring to a first management list and a second management list. The first management list includes a plurality of first entries that correspond to the respective data regions and manage the priorities of the data regions for each of a plurality of processes. The second management list includes a plurality of second entries that correspond to the first entries for a process that uses the first data size and manage the priorities of the first-data-size data held in the data regions.
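As a rough illustration of the two-list bookkeeping in the abstract above, the following Python sketch keeps one priority entry per data region (the first management list) and a separate entry only for regions currently holding small-size data (the second management list). The class and method names, the LRU-style ordering, and the assumption that the large size is a multiple of the small size are illustrative choices, not taken from the patent.

    from collections import OrderedDict

    class TwoSizeCache:
        # first_list: one entry per data region, ordered by priority (LRU here)
        # second_list: entries only for regions currently holding small-size data
        def __init__(self, num_regions, small_size, large_size):
            self.small_size = small_size
            self.large_size = large_size          # assumed to be a multiple of small_size
            self.regions = [None] * num_regions   # cached payloads
            self.first_list = OrderedDict((i, None) for i in range(num_regions))
            self.second_list = OrderedDict()      # region index -> key of small-size data

        def _touch(self, region):
            self.first_list.move_to_end(region)   # raise the region's priority

        def insert_small(self, key, payload, region):
            self.regions[region] = (key, payload)
            self.second_list[region] = key        # tracked in the second management list
            self._touch(region)

        def insert_large(self, key, payload):
            # replace small-size data in units of the large size: free as many
            # low-priority small-data regions as one large item needs
            needed = self.large_size // self.small_size
            victims = [r for r in self.first_list if r in self.second_list][:needed]
            for r in victims:
                del self.second_list[r]           # region no longer holds small-size data
                self.regions[r] = (key, payload)  # the large data spans the freed regions
                self._touch(r)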
VECTOR PROCESSOR STORAGE
A method comprising: receiving, at a vector processor, a request to store data; performing, by the vector processor, one or more transforms on the data; and directly instructing, by the vector processor, one or more storage devices to store the data; wherein performing one or more transforms on the data comprises: erasure encoding the data to generate n data fragments configured such that any k of the data fragments are usable to regenerate the data, where k is less than n; and wherein directly instructing one or more storage devices to store the data comprises: directly instructing the one or more storage devices to store the n data fragments.
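The k-of-n property in the claim above can be shown with the simplest possible erasure code: split the data into k fragments and append one XOR parity fragment, so n = k + 1 and any k fragments recover the data. The Python sketch below only illustrates the idea; a real vector-processor implementation would typically use a Reed-Solomon or similar code, and the function names here are hypothetical.

    from functools import reduce

    def _xor(a: bytes, b: bytes) -> bytes:
        return bytes(x ^ y for x, y in zip(a, b))

    def encode(data: bytes, k: int) -> list[bytes]:
        # split into k equal fragments, then append one XOR parity fragment (n = k + 1)
        frag_len = -(-len(data) // k)                    # ceil(len / k)
        padded = data.ljust(frag_len * k, b"\x00")
        frags = [padded[i * frag_len:(i + 1) * frag_len] for i in range(k)]
        frags.append(reduce(_xor, frags))                # parity fragment
        return frags

    def decode(frags: dict[int, bytes], k: int, length: int) -> bytes:
        # frags holds any k of the n fragments, keyed by fragment index (parity is index k)
        if all(i in frags for i in range(k)):
            data_frags = [frags[i] for i in range(k)]
        else:
            recovered = reduce(_xor, frags.values())     # parity XOR surviving data fragments
            data_frags = [frags.get(i, recovered) for i in range(k)]
        return b"".join(data_frags)[:length]

    frags = encode(b"vector processor storage", k=4)     # 5 fragments, any 4 suffice
    restored = decode({i: f for i, f in enumerate(frags) if i != 2}, k=4, length=24)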
Persistent storage device management
A method comprising: receiving a request to write data at a virtual location; writing the data to a physical location on a persistent storage device; and recording a mapping from the virtual location to the physical location; wherein the physical location corresponds to a next free block in a sequence of blocks on the persistent storage device.
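A minimal Python sketch of the mapping scheme described above, in which every write consumes the next free block in the sequence and a virtual-to-physical mapping is recorded; the in-memory list standing in for the persistent device and the 4 KiB block size are assumptions for illustration.

    class LogStructuredDevice:
        def __init__(self, block_size=4096):
            self.block_size = block_size
            self.blocks = []          # stand-in for the persistent medium
            self.mapping = {}         # virtual location -> physical block index

        def write(self, virtual_loc, data: bytes):
            physical = len(self.blocks)             # next free block in the sequence
            self.blocks.append(data[:self.block_size])
            self.mapping[virtual_loc] = physical    # record (or update) the mapping
            return physical

        def read(self, virtual_loc) -> bytes:
            return self.blocks[self.mapping[virtual_loc]]

Rewriting a virtual location simply remaps it to a newer physical block, which is the usual behaviour of such append-style allocation.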
Block device interface using non-volatile pinned memory
A method comprising: receiving, at a block device interface, an instruction to write data, the instruction comprising a memory location of the data; copying the data to pinned memory; performing, by a vector processor, one or more invertible transforms on the data; and writing the data from the pinned memory to one or more storage devices asynchronously; wherein the memory location of the data corresponds to a location in pinned memory, the pinned memory being accessible by the vector processor and one or more other processors.
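Real pinned (page-locked) memory is allocated through device-specific APIs, so the Python sketch below only mirrors the flow of the claim: copy the caller's data into a staging buffer, apply an invertible transform (an XOR mask stands in for the vector-processor transform), and flush to the storage devices asynchronously via a thread pool. All names and the transform are illustrative assumptions.

    from concurrent.futures import ThreadPoolExecutor

    _pool = ThreadPoolExecutor(max_workers=2)

    def invertible_transform(buf: bytes) -> bytes:
        # stand-in for the vector-processor transform; XOR with a constant is its own inverse
        return bytes(b ^ 0x5A for b in buf)

    def write_block(data: bytes, devices) -> None:
        pinned = bytearray(data)                      # stand-in for real pinned memory
        transformed = invertible_transform(bytes(pinned))
        for dev in devices:
            _pool.submit(dev.write, transformed)      # asynchronous flush to each device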
Methods and apparatuses for managing page cache in virtualization service
Provided are a method and an apparatus for managing a page cache in a virtualization service. A method for managing a page cache in a virtualization service according to an exemplary embodiment of the present disclosure includes: comparing a weight value of a container with a weight variable of a page possessed by a process operated by the container in the virtualization service, changing the weight variable of the page based on the comparison result, and managing pages of the page cache using the changed weight variable of the page.
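One way to read the weight comparison above is as a feedback rule that nudges each page's weight variable toward its container's weight value and then orders evictions by page weight. The concrete rule in this Python sketch is an illustrative guess at such a policy, not the patent's actual formula.

    def update_page_weight(container_weight: int, page_weight: int) -> int:
        # pull the page's weight variable toward its container's weight value
        if page_weight < container_weight:
            return page_weight + 1      # pages of higher-weight containers gain weight
        if page_weight > container_weight:
            return page_weight - 1      # pages of lower-weight containers lose weight
        return page_weight

    def eviction_order(page_weights: dict) -> list:
        # manage the page cache by evicting the lowest-weight pages first
        return sorted(page_weights, key=page_weights.get)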
Cached volumes at storage gateways
Methods and apparatus for supporting cached volumes at storage gateways are disclosed. A storage gateway appliance is configured to cache at least a portion of a storage object of a remote storage service at local storage devices. In response to a client's write request, directed to at least a portion of a data chunk of the storage object, the appliance stores a data modification indicated in the write request at a storage device, and asynchronously uploads the modification to the storage service. In response to a client's read request, directed to a different portion of the data chunk, the appliance downloads the requested data from the storage service to the storage device, and provides the requested data to the client.
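The write-back and read-through behaviour described above can be sketched as below, assuming a dict-like local store and a remote service object exposing upload and download methods (both hypothetical): writes are satisfied after the local store is updated and uploaded in the background, while reads of uncached data are first downloaded from the service.

    from concurrent.futures import ThreadPoolExecutor

    class GatewayCache:
        def __init__(self, local_store, remote_service):
            self.local = local_store          # dict-like: chunk_id -> bytes
            self.remote = remote_service      # exposes upload(id, data) / download(id)
            self._uploader = ThreadPoolExecutor(max_workers=4)

        def write(self, chunk_id, data):
            self.local[chunk_id] = data       # store the modification locally
            self._uploader.submit(self.remote.upload, chunk_id, data)   # async upload

        def read(self, chunk_id):
            if chunk_id not in self.local:    # miss: download from the storage service
                self.local[chunk_id] = self.remote.download(chunk_id)
            return self.local[chunk_id]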
Compression of host I/O data in a storage processor of a data storage system with selection of data compression components based on a current fullness level of a persistent cache
A storage processor in a data storage system includes a compression selection component that selects a data compression component to be used to compress host I/O data that is flushed from a persistent cache of the storage processor based on a current fullness level of the persistent cache. The compression selection component selects compression components implementing compression algorithms having relatively lower compression ratios for relatively higher current fullness levels of the persistent cache, and selects compression components implementing compression algorithms having relatively higher compression ratios for relatively lower current fullness levels of the persistent cache.
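A small Python sketch of that selection policy, using standard-library compressors as stand-ins for the compression components: when the persistent cache is nearly full, a fast, lower-ratio compressor is chosen so flushing can keep up; when it is relatively empty, a slower, higher-ratio compressor is chosen. The fullness thresholds are arbitrary assumptions.

    import lzma
    import zlib

    def select_compressor(fullness: float):
        # fullness is the current fill level of the persistent cache, 0.0 .. 1.0
        if fullness > 0.8:
            return lambda data: zlib.compress(data, 1)   # fast, lower compression ratio
        if fullness > 0.5:
            return lambda data: zlib.compress(data, 6)   # balanced
        return lzma.compress                             # slow, higher compression ratio

    compress = select_compressor(fullness=0.9)
    blob = compress(b"host I/O data flushed from the persistent cache" * 100)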
Method, apparatus, and computer program product for managing storage system
Storage system management is provided. Metadata in a first version at a first time point of the storage system is obtained, where the metadata in the first version describes reference relations, at the first time point, between at least one data block in a chunk included in the storage system and at least one object stored in the storage system. Metadata in a second version at a second time point of the storage system is obtained, the second time point being after the first time point. The chunk included in the storage system is managed based on a determined difference between the metadata in the first version and the metadata in the second version. With this technical solution, chunks in the storage system can be managed more effectively and chunk reclamation efficiency can be increased.
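The difference-based management can be illustrated by comparing the reference relations recorded in the two metadata versions: blocks whose reference set was non-empty at the first time point but empty at the second are candidates for reclaiming. The Python sketch below assumes each version is a mapping from data block to the set of referencing objects; the function name is hypothetical.

    def reclaimable_blocks(refs_v1: dict, refs_v2: dict) -> set:
        # refs_vN: data block -> set of objects referencing it at that time point
        candidates = set()
        for block, old_refs in refs_v1.items():
            new_refs = refs_v2.get(block, set())
            if old_refs and not new_refs:      # referenced before, unreferenced now
                candidates.add(block)
        return candidates

    # block "b1" was referenced by object "o1" at t1 but by nothing at t2
    print(reclaimable_blocks({"b1": {"o1"}}, {"b1": set()}))   # {'b1'}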
Container-based flash cache with a survival queue
Use of a survival queue to manage a container-based flash cache is disclosed. In various embodiments, a corresponding survival time is associated with each of a plurality of containers stored in a flash cache, each container comprising a plurality of data blocks. The survival time may be determined based at least in part on a calculated proportion of relatively recently accessed data blocks associated with the container. A container to evict from the flash cache is selected based at least in part on a determination that the corresponding survival time of the selected container has expired.
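A possible reading of the survival-time rule, sketched in Python: the larger the calculated fraction of recently accessed blocks in a container, the longer its survival time, and a container whose survival time has expired becomes an eviction victim. The base and scale constants are invented for illustration.

    import time

    def survival_time(recent_fraction: float, base=60.0, scale=240.0) -> float:
        # a higher proportion of recently accessed blocks -> longer survival time (seconds)
        return base + scale * recent_fraction

    def pick_eviction_victim(expiries: dict):
        # expiries: container id -> timestamp at which its survival time expires
        now = time.time()
        expired = [cid for cid, t in expiries.items() if t <= now]
        return min(expired, key=expiries.get) if expired else None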
PERSISTENT READ CACHE IN A SCALE OUT STORAGE SYSTEM
Methods, apparatuses, systems, and media for implementing a persistent read cache in a scale-out storage system are disclosed to reduce access latency and achieve higher performance. Both the cached data blocks and the distributed data placements are referenced by their unique content identifiers and are deduplicated. The persistent read cache survives node reboots and is inherently coherent across all storage nodes without a distributed lock manager. The cached data blocks share the same storage pool as the distributed data placements without consuming additional storage capacity. A cached data block can become a distributed data placement, or vice versa, without moving the physical data block. Methods are also disclosed to reduce time to performance for logical device mobility.
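The content-identifier indexing and deduplication described above can be sketched as a pool keyed by a hash of each block's content, so a cached read block and a distributed data placement with identical content occupy a single physical copy; SHA-256 is used here only as an example content identifier, and the class name is hypothetical.

    import hashlib

    class ContentAddressedPool:
        def __init__(self):
            self.pool = {}          # content identifier -> data block (shared storage pool)

        @staticmethod
        def content_id(block: bytes) -> str:
            return hashlib.sha256(block).hexdigest()

        def put(self, block: bytes) -> str:
            cid = self.content_id(block)
            self.pool.setdefault(cid, block)    # deduplicated: stored only once
            return cid

        def get(self, cid: str) -> bytes:
            return self.pool[cid]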