Patent classifications
G06F2212/286
REPLICATION OF MEMORY IMAGE FOR EFFICIENT SIMULTANEOUS USES
An apparatus includes a computing architecture having multiple memories including a first memory and a second memory. The multiple memories are configured to store multiple copies of a memory image including a first copy and a second copy, where the memory image contains instructions to be executed by the computing architecture and data to be used by the computing architecture. The computing architecture can be configured to perform multiple functions including a first function and a second function. The first memory can be positioned in the computing architecture so that the first copy of the memory image is located in a first position that is more efficient for the first function. The second memory can be positioned in the computing architecture so that the second copy of the memory image is located in a second position that is more efficient for the second function.
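The abstract's idea of placing one copy of the same image near each consumer can be illustrated with a small sketch (all class and bank names here are invented for illustration, not taken from the patent):

```python
# Illustrative sketch: two memory banks hold identical copies of one
# memory image; each function is served from the copy positioned
# closest to it, so neither function's fetches cross to the other bank.

class ReplicatedImage:
    def __init__(self, image_bytes, banks):
        # Store one identical copy of the image per memory bank.
        self.copies = {bank: bytes(image_bytes) for bank in banks}
        self.affinity = {}  # function name -> its nearby bank

    def assign(self, function, bank):
        # Position a function next to a particular copy.
        self.affinity[function] = bank

    def fetch(self, function, offset, length):
        # Serve the read from the copy local to the function.
        copy = self.copies[self.affinity[function]]
        return copy[offset:offset + length]

img = ReplicatedImage(b"\x90\x90\xc3", banks=["bank0", "bank1"])
img.assign("decode", "bank0")
img.assign("render", "bank1")
# Both functions see the same content, each from its own copy.
assert img.fetch("decode", 0, 2) == img.fetch("render", 0, 2)
```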
Data copy avoidance across a storage
Embodiments of the present disclosure relate to methods and apparatuses for data copy avoidance. After a data access request is received from a first storage node, a second storage node sends to the first storage node not an address of a second storage space in a second mirrored cache, but an address of a first storage space, in a first cache on the first storage node, that corresponds to the second storage space. In this way, data access may be implemented directly in the first cache on the first storage node, which reduces data communication across storage nodes, eliminates potential system performance bottlenecks, and enhances data access performance.
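The address-redirection idea can be sketched as follows (a minimal model with invented names, not the patent's implementation): the second node keeps a mapping from its mirrored-cache addresses to the corresponding first-node cache addresses and answers requests with the first-node address, so the first node reads locally.

```python
# Sketch: the second node resolves a mirrored-cache address to the
# matching address in the first node's own cache, avoiding a
# cross-node data copy.

class SecondNode:
    def __init__(self):
        self.mirror_to_first = {}  # second-node address -> first-node address

    def mirror(self, first_addr, second_addr):
        self.mirror_to_first[second_addr] = first_addr

    def resolve(self, second_addr):
        # Return the first node's local address, not our own.
        return self.mirror_to_first[second_addr]

class FirstNode:
    def __init__(self):
        self.cache = {}

    def read(self, addr):
        return self.cache[addr]  # purely local access

first, second = FirstNode(), SecondNode()
first.cache[0x10] = b"payload"
second.mirror(first_addr=0x10, second_addr=0x80)
# The first node accesses the data in its own cache.
assert first.read(second.resolve(0x80)) == b"payload"
```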
ENHANCED DUPLICATE WRITE DATA TRACKING FOR CACHE MEMORY
Data is stored at a cache portion of a cache memory of a memory sub-system responsive to a request to perform a write operation to write the data. A duplicate copy of the data is stored at a write buffer portion of the cache memory. The cache memory is partitioned into the cache portion and the write buffer portion. An entry that maps a location of the duplicate copy of the data stored at the write buffer portion of the cache memory to a location of the data stored at the cache portion of the cache memory is recorded in a write buffer record.
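A compact sketch of this partitioned layout (class and field names are invented): each write lands in the cache portion, a duplicate lands in the write buffer portion, and a record entry maps the duplicate back to the cache location.

```python
# Sketch: one cache memory partitioned into a cache portion and a
# write buffer portion, with a record mapping each write-buffer slot
# to the cache slot that holds the same data.

class PartitionedCache:
    def __init__(self):
        self.cache_portion = {}
        self.write_buffer = {}
        self.record = {}      # write-buffer slot -> cache slot
        self._next_slot = 0

    def write(self, cache_slot, data):
        self.cache_portion[cache_slot] = data
        wb_slot = self._next_slot
        self._next_slot += 1
        self.write_buffer[wb_slot] = data   # duplicate copy
        self.record[wb_slot] = cache_slot   # mapping entry
        return wb_slot

pc = PartitionedCache()
slot = pc.write(cache_slot=7, data=b"abc")
# The record lets the duplicate be located from either side.
assert pc.write_buffer[slot] == pc.cache_portion[pc.record[slot]]
```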
Methods for improved data replication across hybrid cloud volumes using data tagging and devices thereof
Methods, non-transitory computer readable media, and computing devices that receive data from a primary storage node. The data is stored in a primary volume within a primary composite aggregate hosted by the primary storage node. A determination is made as to whether the data is tagged to indicate that it is stored in the primary volume on a remote data storage device of the primary composite aggregate. When the determination indicates that the data is so tagged, the data is stored on another remote data storage device without being stored in a local data storage device. Accordingly, this technology allows data placement to remain consistent across primary and secondary volumes and facilitates efficient operation of secondary storage nodes by eliminating two-phase writes for data stored on cloud storage devices.
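The tag-driven placement decision reduces to a short conditional; this sketch (function and parameter names invented) shows how a secondary node might skip its local tier for cloud-resident data:

```python
# Sketch: placement on the secondary follows the tag, so data that was
# cloud-resident on the primary goes straight to remote storage with a
# single write (no local staging, no two-phase write).

def place_on_secondary(data, tagged_remote, local_tier, remote_tier):
    if tagged_remote:
        remote_tier.append(data)  # direct write to the remote device
    else:
        local_tier.append(data)   # normal local placement

local, remote = [], []
place_on_secondary(b"blk", tagged_remote=True,
                   local_tier=local, remote_tier=remote)
assert remote == [b"blk"] and local == []
```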
Optimizations to avoid intersocket links
Described are techniques for processing read and write requests in a system having a NUMA (non-uniform memory access) configuration. Such techniques may include: receiving, at a front end adapter of the system, a write request to write first data to a first storage device; storing a first copy of the first data in a first memory local to a first domain; copying, using a first inter-storage processor communication connection, the first data from the first memory to a third memory of a third domain, thereby creating a second copy of the first data in the third memory; and determining, in accordance with a first heuristic and first criteria, whether to use the first copy of the first data stored in the first memory or the second copy of the first data stored in the third memory as a source when writing the first data to the first storage device.
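The abstract leaves the heuristic unspecified; one plausible criterion (invented here for illustration, not taken from the patent) is to use whichever copy already sits in the NUMA domain of the target device, avoiding an inter-socket transfer during the back-end write:

```python
# Illustrative heuristic sketch: pick the data copy local to the
# device's NUMA domain so the back-end write stays off the
# inter-socket links.

def choose_source(first_domain, third_domain, device_domain):
    if device_domain == third_domain:
        return "second_copy"   # copy already local to the device
    if device_domain == first_domain:
        return "first_copy"
    return "first_copy"        # tie-break: the original copy

assert choose_source(0, 2, device_domain=2) == "second_copy"
assert choose_source(0, 2, device_domain=0) == "first_copy"
```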
Content based cache failover
Methods and systems are disclosed for populating a fail-over cache. When host computer systems in a system each have a content based read cache, the methods and systems provide several functions, applied in different orders, for determining which blocks are to be included in the fail-over cache. Each function attempts a different strategy for combining the contents of the caches of each host computer system into the fail-over cache. If any strategy is successful, the fail-over cache is placed into service. If all of the strategies fail, an eviction strategy is employed in which blocks are evicted from each cache until the combination of caches meets a requirement of the fail-over cache, which, in one embodiment, is the size of the fail-over cache.
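The try-strategies-then-evict flow can be sketched in a few lines (strategy and capacity handling are simplified assumptions, not the patent's specifics):

```python
# Sketch: each strategy tries to merge the per-host caches into one
# failover cache; the first merge that fits the capacity requirement
# wins. If none fits, blocks are evicted until the union fits.

def build_failover(host_caches, capacity, strategies):
    for strategy in strategies:
        merged = strategy(host_caches)
        if len(merged) <= capacity:
            return merged                 # this strategy succeeded
    # All strategies failed: evict from the fullest cache until the
    # combined contents meet the size requirement.
    caches = [list(c) for c in host_caches]
    while True:
        union = set().union(*caches)
        if len(union) <= capacity:
            return union
        max(caches, key=len).pop()        # evict one block

union_all = lambda caches: set().union(*caches)
fc = build_failover([{1, 2}, {2, 3}], capacity=3, strategies=[union_all])
assert fc == {1, 2, 3}
```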
Read and Write Load Sharing in a Storage Array Via Partitioned Ownership of Data Blocks
A system shares I/O load between controllers in a high availability system. For writes, a controller determines, based on one or more factors, which controller will flush batches of data from write-back cache, to better distribute the I/O burden. The determination occurs after the local storage controller caches the data, mirrors it, and confirms write completion to the host. Once the flushing storage controller is determined, the flush occurs and that controller updates the corresponding metadata at a second layer of indirection, whether or not it is identified to the host as the owner of the corresponding volume; the volume owner updates metadata at a first layer of indirection. For a host read, the controller that owns the volume accesses the metadata from whichever controller(s) flushed the data previously and reads the data, regardless of which controller performed the flush.
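The abstract does not name the "one or more factors"; a simple stand-in (invented for this sketch) is current I/O load, with the flusher also recording the second-layer metadata update:

```python
# Sketch with an invented load metric: after the data is cached,
# mirrored, and acknowledged, the less-loaded controller flushes the
# batch and updates the second-layer metadata, regardless of which
# controller owns the volume.

def pick_flusher(load_a, load_b):
    return "A" if load_a <= load_b else "B"

def flush(batch, load_a, load_b, second_layer_metadata):
    flusher = pick_flusher(load_a, load_b)
    # The determined controller performs the flush and records, at the
    # second layer of indirection, where the batch landed.
    second_layer_metadata[batch] = flusher
    return flusher

meta = {}
assert flush("batch-1", load_a=3, load_b=9, second_layer_metadata=meta) == "A"
assert meta["batch-1"] == "A"
```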
Duplicate-copy cache using heterogeneous memory types
A method for demoting data from a cache comprising heterogeneous memory types is disclosed. The method maintains, for a data element in the cache, a write access count that is incremented each time the data element is updated in the cache. The cache includes a higher performance portion and a lower performance portion. The method removes the data element from the higher performance portion in accordance with a cache demotion algorithm. If the data element also resides in the lower performance portion and the write access count is below a first threshold, the method leaves the data element in the lower performance portion. If the data element also resides in the lower performance portion and the write access count is at or above the first threshold, the method removes the data element from the lower performance portion. A corresponding system and computer program product are also disclosed.
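The stated demotion rule is concrete enough to sketch directly (data structures and the threshold value are illustrative assumptions): a write-hot element is dropped from the lower performance portion too, while a write-cold one keeps its duplicate copy there.

```python
# Sketch of the described rule: on demotion from the higher
# performance portion, an element whose write access count meets the
# threshold is also removed from the lower performance portion
# (frequent updates are a poor fit for the slower memory); otherwise
# its duplicate copy stays.

def demote(element, higher, lower, write_counts, threshold):
    higher.discard(element)  # removed per the cache demotion algorithm
    if element in lower and write_counts.get(element, 0) >= threshold:
        lower.discard(element)

higher, lower = {"a", "b"}, {"a", "b"}
counts = {"a": 5, "b": 1}
demote("a", higher, lower, counts, threshold=3)
demote("b", higher, lower, counts, threshold=3)
assert lower == {"b"}  # "a" was write-hot and left both portions
```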
DUAL CLASS OF SERVICE FOR UNIFIED FILE AND OBJECT MESSAGING
A storage system has priority queues for real time-class file system messaging and backup-class file system messaging. The storage system includes servers, coupled as a storage cluster, storage devices and a network coupling the servers and the storage devices. The servers have priority queues. The servers operate the priority queues for messaging from the servers to the storage devices via the network in accordance with a real time-class file system and a backup-class file system. A first subset of the priority queues has higher priority on the network for real time-class file system messaging of at least one type. A second subset of the priority queues has lower priority on the network for backup-class file system messaging of at least one type.
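The two-class queueing behavior can be modeled with a standard-library heap (class values and message names are invented for illustration): real time-class messages drain ahead of backup-class messages, with FIFO order preserved within a class.

```python
# Sketch: a single priority queue where real time-class file system
# messages always drain before backup-class messages, mimicking the
# higher- and lower-priority queue subsets on the network.

import heapq

REALTIME, BACKUP = 0, 1  # lower value = higher network priority

queue, seq = [], 0
for cls, msg in [(BACKUP, "snapshot"), (REALTIME, "read"),
                 (REALTIME, "write")]:
    heapq.heappush(queue, (cls, seq, msg))
    seq += 1  # sequence number keeps FIFO order within a class

drained = [heapq.heappop(queue)[2] for _ in range(len(queue))]
assert drained == ["read", "write", "snapshot"]
```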
Single-copy cache using heterogeneous memory types
A method for demoting data from a cache comprising heterogeneous memory types is disclosed. The method maintains, for a data element in the cache, a write access count that is incremented each time the data element is updated in the cache. The cache includes a higher performance portion and a lower performance portion. The method also maintains, for the data element, a read access count that is incremented each time the data element is read in the cache. The method removes the data element from the higher performance portion of the cache in accordance with a cache demotion algorithm. If the write access count is below a first threshold and the read access count is above a second threshold, the method places the data element in the lower performance portion. A corresponding system and computer program product are also disclosed.
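This rule differs from the duplicate-copy variant above in that the element is *placed* into the lower performance portion only when it is read-hot but write-cold; a sketch with illustrative data structures and thresholds:

```python
# Sketch of the stated rule: on demotion from the higher performance
# portion, keep a single copy in the lower performance portion only
# for elements that are read often but written rarely; drop the rest.

def demote(element, higher, lower, writes, reads, w_thresh, r_thresh):
    higher.discard(element)  # removed per the cache demotion algorithm
    if writes.get(element, 0) < w_thresh and reads.get(element, 0) > r_thresh:
        lower.add(element)   # worth retaining in the slower memory

higher, lower = {"x", "y"}, set()
demote("x", higher, lower, {"x": 1}, {"x": 10}, w_thresh=3, r_thresh=5)
demote("y", higher, lower, {"y": 7}, {"y": 10}, w_thresh=3, r_thresh=5)
assert lower == {"x"}  # "x" is read-hot/write-cold; "y" is write-hot
```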