G06F2212/285

METHOD AND APPARATUS TO REDUCE CACHE STAMPEDING
20230046354 · 2023-02-16

An apparatus comprises a memory having a data cache stored therein and a control circuit operably coupled thereto. The control circuit is configured to update the data cache in accordance with a scheduled update time. By one approach, the control circuit computes selected entries for the data cache prior to the scheduled update time pursuant to a prioritization scheme to provide a substitute data cache. At the scheduled update time, the control circuit switches the substitute data cache for the data cache such that data queries made subsequent to the scheduled update time access the substitute data cache and not the data cache.
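The precompute-then-swap approach described in this abstract can be sketched as follows. This is a minimal illustration, not the patented implementation: the class name, the priority list, and the single-lock swap are all assumptions made for the example.

```python
import threading
import time

class SwappableCache:
    """Illustrative sketch: a substitute cache is precomputed in priority
    order before a scheduled update, then swapped in atomically so queries
    never hit a cold or half-rebuilt cache (avoiding a stampede)."""

    def __init__(self, compute_entry, priority_order):
        self._compute_entry = compute_entry
        self._priority_order = priority_order  # keys, highest priority first
        self._active = {}
        self._lock = threading.Lock()

    def warm_substitute(self, deadline):
        """Precompute entries in priority order until the deadline passes."""
        substitute = {}
        for key in self._priority_order:
            if time.monotonic() >= deadline:
                break  # lower-priority entries would be filled lazily later
            substitute[key] = self._compute_entry(key)
        return substitute

    def swap(self, substitute):
        """At the scheduled update time, switch caches in one step."""
        with self._lock:
            self._active = substitute

    def get(self, key):
        with self._lock:
            return self._active.get(key)
```

Because the swap replaces a single reference under a lock, readers see either the old cache or the fully prepared substitute, never a partially updated one.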

TECHNOLOGIES FOR MANAGING REPLICA CACHING IN A DISTRIBUTED STORAGE SYSTEM
20170251073 · 2017-08-31

Technologies for managing replica caching in a distributed storage system include a storage manager device. The storage manager device is configured to receive a data write request to store replicas of data. Additionally, the storage manager device is configured to designate one of the replicas as a primary replica, select a first storage node to store the primary replica of the data in a cache storage and at least a second storage node to store a non-primary replica of the data in a non-cache storage. The storage manager device is further configured to include a hint in a first replication request to the first storage node that the data is to be stored in the cache storage of the first storage node as the primary replica. Further, the storage manager device is configured to transmit replication requests to the respective storage nodes. Other embodiments are described and claimed.
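The replica-placement logic in this abstract can be sketched as below. The function name and the first-node-is-primary selection policy are assumptions for illustration; the patent does not specify how nodes are chosen.

```python
def plan_replication(data_id, nodes, replica_count):
    """Illustrative sketch: designate one replica as primary with a hint
    that it belongs in cache storage; the remaining replicas are hinted
    toward non-cache storage on other nodes."""
    selected = nodes[:replica_count]
    requests = []
    for i, node in enumerate(selected):
        requests.append({
            "node": node,
            "data_id": data_id,
            "primary": i == 0,  # first selected node holds the primary replica
            "hint": "cache" if i == 0 else "non-cache",
        })
    return requests
```

Each dictionary stands in for one replication request transmitted to a storage node; only the primary replica's request carries the cache-storage hint.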

DATA STORAGE DEVICE

A memory system includes a plurality of volatile memory modules to temporarily store data in a distributed manner, a V storing place management unit included in each of the volatile memory modules, a plurality of nonvolatile memory modules to store the data stored in each of the volatile memory modules in a distributed manner, and a NV storing place management unit included in each of the nonvolatile memory modules. Each V storing place management unit and each NV storing place management unit communicate and determine the destination nonvolatile memory module for each volatile memory module. The data is transmitted to the determined destination nonvolatile memory module and stored in the destination nonvolatile memory module.
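The destination negotiation between the V and NV storing place management units can be reduced to a mapping like the one below. The round-robin policy is purely an assumption standing in for whatever negotiation the units actually perform.

```python
def assign_destinations(volatile_ids, nonvolatile_ids):
    """Illustrative sketch: map each volatile memory module to a
    destination nonvolatile module. Round-robin here is a placeholder
    for the negotiation between the management units."""
    return {v: nonvolatile_ids[i % len(nonvolatile_ids)]
            for i, v in enumerate(volatile_ids)}
```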

Storage device and computer system
09766824 · 2017-09-19

When computers, and virtual machines operating in those computers, both attempt to allocate a cache in their respective primary storage devices for data in a secondary storage device, identical data is prevented from being stored independently in multiple computers or virtual machines. An integrated cache management function in the computer arbitrates which computer or virtual machine should cache the data of the secondary storage device. When a computer or virtual machine executes input/output of data of the secondary storage device, the computer queries the integrated cache management function, which retains the cache in only a single computer and instructs the other computers to delete their copies. Thus, it is possible to prevent identical data from being cached in a duplicated manner in multiple locations of the primary storage device, and to enable efficient capacity usage of the primary storage device.
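The single-owner arbitration described here can be sketched as a small manager that tracks which host caches each block and issues delete instructions to the previous owner. Class and method names are illustrative, not taken from the patent.

```python
class IntegratedCacheManager:
    """Illustrative sketch of the arbitration above: at most one computer
    or VM may cache a given block of the secondary storage device."""

    def __init__(self):
        self._owner = {}          # block -> host currently caching it
        self.invalidations = []   # (host, block) delete instructions issued

    def request_cache(self, host, block):
        """Grant the cache to `host`; tell any previous owner to delete."""
        current = self._owner.get(block)
        if current is not None and current != host:
            self.invalidations.append((current, block))
        self._owner[block] = host

    def owner_of(self, block):
        return self._owner.get(block)
```

The invariant this enforces is exactly the one the abstract claims: a block is cached in one place at a time, so primary-storage capacity is not wasted on duplicates.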

EXTRA-RESILIENT CACHE FOR RESILIENT STORAGE ARRAY

A data storage array is configured for m-way resiliency across a first plurality of storage nodes. The m-way resiliency causes the data storage array to direct each top-level write to at least m storage nodes within the first plurality, for committing data to a corresponding capacity region allocated on each storage node to which each write operation is directed. Based on the data storage array being configured for m-way resiliency, an extra-resilient cache is allocated across a second plurality of storage nodes comprising at least s storage nodes (where s>m), including allocating a corresponding cache region on each of the second plurality for use by the extra-resilient cache. Based on determining that a particular top-level write has not been acknowledged by at least n of the first plurality of storage nodes (where n≤m), the particular top-level write is redirected to the extra-resilient cache.
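The redirection rule in this abstract reduces to a simple decision on acknowledgement counts. The function below is a sketch under the abstract's own constraints (n ≤ m for the capacity tier, s > m for the cache tier); the names are illustrative.

```python
def route_write(acks_received, n_required, capacity_nodes, cache_nodes):
    """Illustrative sketch: a top-level write that fails to gather at
    least n acknowledgements from the m capacity nodes is redirected to
    the extra-resilient cache spanning s > m nodes."""
    if acks_received >= n_required:
        return ("capacity", capacity_nodes)
    return ("cache", cache_nodes)
```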

Protocol for processing requests that assigns each request received by a node a sequence identifier, stores data written by the request in a cache page block, stores a descriptor for the request in a cache page descriptor, and returns a completion acknowledgement of the request

Processing requests may include: receiving a write request from a host at a first node of a system; and servicing the write comprising assigning, by the first node, a sequence identifier to the write request, wherein the sequence identifier is included in a subsequence of identifiers only assignable by the first node, performing in parallel a first operation that stores first data written by the write request in a cache, a second operation that stores a descriptor for the write request in the cache, and a third operation that sends the descriptor (including the sequence identifier) to a peer node of the system; determining by the first node that the first, second and third operations have successfully completed; and responsive to determining the first, second and third operations have successfully completed, sending an acknowledgement from the first node to a host indicating successful completion of the write request.
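The write path in this abstract (node-local sequence identifiers, three parallel operations, acknowledgement only after all succeed) can be sketched as below. The class names and the use of a thread pool to model the parallelism are assumptions for the example.

```python
import concurrent.futures
import itertools

class PeerNode:
    """Illustrative stand-in for the peer that receives descriptors."""
    def __init__(self):
        self.descriptors = []

    def receive_descriptor(self, desc):
        self.descriptors.append(desc)

class FirstNode:
    """Illustrative sketch of the described protocol: assign a sequence
    identifier from a subsequence only this node uses, run the three
    stores in parallel, and ack the host only after all three complete."""

    def __init__(self, peer):
        self._seq = itertools.count(1)  # subsequence assignable only here
        self.cache_data = {}            # cache page blocks
        self.cache_desc = {}            # cache page descriptors
        self.peer = peer

    def write(self, data):
        seq = next(self._seq)
        desc = {"seq": seq, "len": len(data)}
        with concurrent.futures.ThreadPoolExecutor(max_workers=3) as pool:
            ops = [
                pool.submit(self.cache_data.__setitem__, seq, data),
                pool.submit(self.cache_desc.__setitem__, seq, desc),
                pool.submit(self.peer.receive_descriptor, desc),
            ]
            for op in ops:
                op.result()  # raises if any of the three operations failed
        return {"status": "ok", "seq": seq}  # acknowledgement to the host
```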

Cluster based hard drive SMR optimization

Technologies are provided for storing data by alternating the performance of data write operations across multiple clusters of storage devices. Data is written to internal buffers of storage devices in one cluster while data stored in buffers of storage devices in another cluster is transferred to those devices' permanent storage. When available buffer capacity in a cluster falls below a specified threshold, data write commands are no longer sent to the cluster, and the storage devices in the cluster transfer data from their buffers to their permanent storage. While the data is being transferred, data write commands are transmitted to other clusters. When the data transfer is complete, the storage devices in the cluster can be scheduled to receive data write commands again. A cluster can be selected for a given data write request by matching the attributes of the cluster to parameters of the request.
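The cluster-selection step can be sketched as a filter over buffer headroom plus an attribute match. The dictionary shape and the prefer-most-free-buffer tie-break are assumptions; the patent only requires that draining clusters stop receiving writes.

```python
def pick_cluster(clusters, threshold, request_attrs=None):
    """Illustrative sketch: clusters whose free buffer capacity is below
    the threshold are draining (flushing buffers to permanent storage,
    as on SMR drives) and are skipped; among the rest, optionally match
    request attributes and prefer the most free buffer."""
    eligible = [c for c in clusters if c["free_buffer"] >= threshold]
    if request_attrs:
        # dict items support subset tests, so this checks that every
        # requested attribute is present on the cluster with equal value
        eligible = [c for c in eligible
                    if request_attrs.items() <= c.get("attrs", {}).items()]
    if not eligible:
        return None  # every cluster is draining; the caller must wait
    return max(eligible, key=lambda c: c["free_buffer"])["name"]
```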

METHODS FOR CACHE REWARMING IN A FAILOVER DOMAIN AND DEVICES THEREOF
20220121538 · 2022-04-21

Methods, non-transitory machine readable media, and computing devices that facilitate cache rewarming in a failover domain are disclosed. With this technology, a tag is inserted into a local tagstore. The tag includes a location of data in a cache hosted by a failover computing device and is retrieved from a snapshot of a remote tagstore for the cache. An invalidation log for an aggregate received from the failover computing device is replayed subsequent to mounting a filesystem that is associated with the aggregate and comprises the data. The data is retrieved from the cache following determination of the location from the tag in the local tagstore in order to service a received storage operation associated with the data. Takeover nodes do not have to wait for a cache to repopulate organically, and can leverage the contents of a cache of a failover node to thereby improve performance following takeover events.
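The rewarming sequence in this abstract (import tags from the failover node's tagstore snapshot, then replay the invalidation log after mounting) can be sketched as below. Representing tagstores as plain dictionaries is a simplification made for the example.

```python
def rewarm(local_tagstore, remote_snapshot, invalidation_log):
    """Illustrative sketch: copy tags (data locations in the failover
    node's cache) from the remote tagstore snapshot, then replay the
    invalidation log so stale locations are dropped before use."""
    for key, location in remote_snapshot.items():
        local_tagstore[key] = location
    for invalidated_key in invalidation_log:
        local_tagstore.pop(invalidated_key, None)
    return local_tagstore
```

After this, a storage operation for a surviving key can be served from the failover node's still-warm cache instead of waiting for the local cache to repopulate organically.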

Populating a second cache with tracks from a first cache when transferring management of the tracks from a first node to a second node

Provided are a computer program product, system, and method for populating a second cache with tracks from a first cache when transferring management of the tracks from a first node to a second node. Management of a first group of tracks in the storage managed by the first node is transferred to the second node managing access to a second group of tracks in the storage. After the transferring the management of the tracks, the second node manages access to the first and second groups of tracks and caches accessed tracks from the first and second groups in the second cache of the second node. The second cache of the second node is populated with the tracks in a first cache of the first node.
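The transfer described in this abstract can be sketched as below. Modeling nodes as dictionaries and keying cached tracks by `(group_id, track_no)` are assumptions made to keep the example self-contained.

```python
def transfer_management(node1, node2, group_ids):
    """Illustrative sketch: node2 takes over the listed track groups and
    its cache is populated with node1's cached tracks for those groups,
    so reads stay warm after the transfer."""
    for gid in group_ids:
        node2["groups"].add(gid)
        node1["groups"].discard(gid)
        # Move cached tracks belonging to the transferred group.
        for track in [t for t in node1["cache"] if t[0] == gid]:
            node2["cache"][track] = node1["cache"].pop(track)
```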

Methods for cache rewarming in a failover domain and devices thereof
11221928 · 2022-01-11

Methods, non-transitory machine readable media, and computing devices that facilitate cache rewarming in a failover domain are disclosed. With this technology, a tag is inserted into a local tagstore. The tag includes a location of data in a cache hosted by a failover computing device and is retrieved from a snapshot of a remote tagstore for the cache. An invalidation log for an aggregate received from the failover computing device is replayed subsequent to mounting a filesystem that is associated with the aggregate and comprises the data. The data is retrieved from the cache following determination of the location from the tag in the local tagstore in order to service a received storage operation associated with the data. Takeover nodes do not have to wait for a cache to repopulate organically, and can leverage the contents of a cache of a failover node to thereby improve performance following takeover events.