G06F2201/855

Write invalidation of a remote location cache entry in a networked storage system

Methods and systems for a networked storage system are provided. One method includes: receiving, by a first storage node, a request to modify data stored using a logical storage object presented by the first storage node, the first storage node communicating with a second storage node configured as a failover partner of the first storage node; transmitting, by the first storage node, an invalidation request to the second storage node to invalidate an entry in a storage location cache of the second storage node, the entry indicating a storage location where the data is stored by the first storage node before modification; and responding, by the first storage node, to the request after modifying the data and upon receiving a response from the second storage node indicating successful invalidation of the entry.
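
The ordering constraint here (invalidate first, modify second, respond last) can be sketched in a few lines of Python. The names below (LocationCache, PrimaryNode, handle_write) are illustrative assumptions, not the patent's terminology, and a direct method call stands in for the RPC between the failover partners:

    class LocationCache:
        """Partner-side cache mapping logical objects to physical locations."""
        def __init__(self):
            self._entries = {}  # object_id -> storage location

        def put(self, object_id, location):
            self._entries[object_id] = location

        def invalidate(self, object_id):
            # Drop the (possibly stale) location; True signals success.
            self._entries.pop(object_id, None)
            return True

    class PrimaryNode:
        def __init__(self, partner_cache):
            self._partner_cache = partner_cache  # stands in for the RPC channel
            self._data = {}

        def handle_write(self, object_id, new_data):
            # 1. Ask the failover partner to invalidate its cached location
            #    entry for this object before the data is modified.
            if not self._partner_cache.invalidate(object_id):
                raise RuntimeError("partner failed to invalidate cache entry")
            # 2. Modify the data only after the stale entry is gone, so the
            #    partner can never serve a read from the old location.
            self._data[object_id] = new_data
            # 3. Only now respond to the original request.
            return "ok"

    cache = LocationCache()
    cache.put("obj-1", "disk0:block42")
    node = PrimaryNode(cache)
    print(node.handle_write("obj-1", b"new bytes"))  # -> ok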

Data recovery using bitmap data structure

Examples of the present disclosure describe implementing bitmap-based data replication when a primary form of data replication between a source device and a target device cannot be used. According to one example, a temporal identifier may be received from the target device. If the source device determines that the primary replication method is unable to be used to replicate data associated with the temporal identifier, a secondary replication method may be initiated. The secondary replication method may utilize a recovery bitmap identifying data blocks that have changed on the source device since a previous event.
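
A minimal sketch of the fallback decision, assuming the primary method is snapshot-based and keyed by the temporal identifier; the names (SourceDevice, recovery_bitmap, replicate) are illustrative, not from the disclosure:

    class SourceDevice:
        BLOCK_COUNT = 8

        def __init__(self):
            self.blocks = [b"\x00"] * self.BLOCK_COUNT
            self.snapshots = {}                    # temporal id -> full copy
            self.recovery_bitmap = [False] * self.BLOCK_COUNT

        def write(self, index, data):
            self.blocks[index] = data
            self.recovery_bitmap[index] = True     # changed since a previous event

        def replicate(self, temporal_id):
            snapshot = self.snapshots.get(temporal_id)
            if snapshot is not None:
                return ("primary", snapshot)       # primary method is usable
            # The primary method cannot serve this temporal identifier, so
            # fall back to shipping only the blocks the bitmap flags as changed.
            changed = {i: self.blocks[i]
                       for i, dirty in enumerate(self.recovery_bitmap) if dirty}
            self.recovery_bitmap = [False] * self.BLOCK_COUNT
            return ("secondary", changed)

    source = SourceDevice()
    source.write(3, b"\x07")
    method, payload = source.replicate("t-1234")   # identifier from the target
    print(method, payload)                         # secondary {3: b'\x07'}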

JOURNAL BARRIER CONSISTENCY DETERMINATION
20220358018 · 2022-11-10 ·

In one embodiment, an apparatus comprises a source system comprising a processing device coupled to memory. The processing device is configured to obtain an IO operation corresponding to an address of the source system. The IO operation comprises first user data. The processing device is further configured to store metadata associated with the IO operation in a first journal barrier of a replication journal of the source system and to close the first journal barrier. The processing device is further configured to determine that the first user data associated with the IO operation is missing from the first journal barrier and to obtain second user data from the address. The processing device is further configured to identify an interval from the first journal barrier to a second journal barrier and to provide the first journal barrier and the interval to a destination system.
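
The repair flow (journal the metadata, close the barrier, detect the missing first user data, re-read second user data from the address, and report the interval of barriers the re-read value may belong to) could look roughly like this in Python; Barrier, SourceSystem, and close_and_repair are hypothetical names:

    from dataclasses import dataclass, field

    @dataclass
    class Barrier:
        barrier_id: int
        entries: list = field(default_factory=list)  # (address, user_data or None)
        closed: bool = False

    class SourceSystem:
        def __init__(self):
            self.storage = {}        # address -> current user data
            self.barriers = []
            self.next_id = 0

        def open_barrier(self):
            barrier = Barrier(self.next_id)
            self.barriers.append(barrier)
            self.next_id += 1
            return barrier

        def record_io(self, barrier, address, user_data, data_journaled=True):
            self.storage[address] = user_data
            # Metadata is always journaled; the user data itself may be lost.
            barrier.entries.append((address, user_data if data_journaled else None))

        def close_and_repair(self, barrier):
            barrier.closed = True
            repaired = []
            for address, data in barrier.entries:
                if data is None:
                    # First user data is missing from the barrier: obtain
                    # second user data by re-reading the address as it is now.
                    data = self.storage[address]
                repaired.append((address, data))
            barrier.entries = repaired
            # The re-read value may reflect writes from any barrier up to the
            # most recently opened one, so report that interval alongside.
            interval = (barrier.barrier_id, self.next_id - 1)
            return barrier, interval

    src = SourceSystem()
    b0 = src.open_barrier()
    src.record_io(b0, address=0x10, user_data=b"v1", data_journaled=False)
    src.open_barrier()                      # a later barrier exists
    src.storage[0x10] = b"v2"               # the address was overwritten meanwhile
    barrier, interval = src.close_and_repair(b0)
    print(barrier.entries, interval)        # [(16, b'v2')] (0, 1)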

Technique for replicating oplog index among nodes of a cluster

A technique replicates an index of an operations log (oplog) from a primary node to a secondary node of a cluster in the event of failure. The oplog functions as a staging area to coalesce random write operations directed to a virtual disk (vdisk) stored on a backend storage tier. The oplog temporarily caches write data as well as metadata describing the write data. The metadata includes descriptors to the write data corresponding to offset ranges of the vdisk, which are used to identify the ranges of write data for the vdisk that are cached in the oplog. To facilitate fast lookups of whether write data is cached in the oplog, an oplog index records the state of the latest data for each offset range of the vdisk; replicating the metadata used to construct this index enables fast failover, rebuilding the oplog index in memory on the secondary node without downtime or significant metadata replay.
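
One plausible reading of the oplog index is an in-memory map from vdisk offset ranges to the latest cached record, rebuilt on the secondary purely from replicated metadata descriptors. The sketch below assumes that reading; OplogIndex and replay_metadata are illustrative names, and the linear-scan lookup is a simplification:

    from bisect import insort

    class OplogIndex:
        """Maps [offset, offset + length) vdisk ranges to the latest record."""
        def __init__(self):
            self._ranges = []   # sorted (offset, length, record_id) tuples

        def apply_descriptor(self, offset, length, record_id):
            # Each replicated metadata descriptor names one cached write's range.
            insort(self._ranges, (offset, length, record_id))

        def lookup(self, offset):
            # Is this vdisk offset cached in the oplog? (Linear scan for brevity.)
            for start, length, record_id in self._ranges:
                if start <= offset < start + length:
                    return record_id
            return None

    def replay_metadata(descriptors):
        # Failover path: rebuild the in-memory index on the secondary from the
        # replicated descriptors alone, without touching the cached write data.
        index = OplogIndex()
        for descriptor in descriptors:
            index.apply_descriptor(*descriptor)
        return index

    primary_descriptors = [(0, 4096, "rec-1"), (8192, 4096, "rec-2")]
    secondary_index = replay_metadata(primary_descriptors)
    print(secondary_index.lookup(8200))   # rec-2: served from the oplog
    print(secondary_index.lookup(4096))   # None: read from the backend tier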

Storage System and Method Using Persistent Memory
20220342562 · 2022-10-27 ·

A method, computer program product, and computing system for sensing a failure within a system of a computing device. The system may include a cache memory system and a vaulted memory comprising a random access memory (RAM) having a plurality of independent persistent areas. A primary node and a secondary node may be provided. The primary node may occupy a first independent persistent area of the RAM of the vaulted memory. The secondary node may occupy a second independent persistent area of the RAM of the vaulted memory. Data within the vaulted memory may be written to persistent media using an iterator. The data may include at least one dirty page. Writing data within the vaulted memory to the persistent media may include flushing the at least one dirty page to the persistent media.
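
A toy model of the vaulted-memory layout, assuming one independent persistent area per node and an iterator that flushes dirty pages on a sensed failure; VaultedMemory and flush_on_failure are invented names for illustration:

    class VaultedMemory:
        def __init__(self, areas=2):
            # One independent persistent area per node.
            self.areas = [dict() for _ in range(areas)]  # page_id -> (data, dirty)

        def write_page(self, area, page_id, data):
            self.areas[area][page_id] = (data, True)     # page is now dirty

        def iter_dirty(self, area):
            # Iterator over the dirty pages in one node's area.
            for page_id, (data, dirty) in self.areas[area].items():
                if dirty:
                    yield page_id, data

    def flush_on_failure(vault, area, media):
        # On a sensed failure, walk the node's area with the iterator and
        # flush each dirty page to the persistent media.
        for page_id, data in vault.iter_dirty(area):
            media[page_id] = data
            vault.areas[area][page_id] = (data, False)   # page is clean again

    vault = VaultedMemory()
    vault.write_page(0, page_id=7, data=b"cached")       # primary node's area
    disk = {}
    flush_on_failure(vault, area=0, media=disk)
    print(disk)                                          # {7: b'cached'}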

Cloud image replication of client devices
11599559 · 2023-03-07 ·

Systems and methods for replicating a device image to storage such as the cloud. The cloud is seeded with a base image that corresponds to the device. Changes between the contents of the device and the base image are identified, uploaded to the cloud, and applied to the image. The changes are tracked continuously, so the image in the cloud can be used to restore the device to any point in time. The cloud image can also be used in a cloud-based virtual machine that provides a user of the device with access to the device's contents via the cloud-based image.
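
The seed-then-track scheme reduces to a base image plus a time-ordered stream of block deltas, with point-in-time restore as replay. A minimal sketch, with CloudImage and the block granularity as assumptions:

    class CloudImage:
        def __init__(self, base_blocks):
            self.base = dict(base_blocks)   # cloud seeded with the base image
            self.deltas = []                # (timestamp, block_id, data)

        def upload_change(self, timestamp, block_id, data):
            # Continuously tracked change identified on the device.
            self.deltas.append((timestamp, block_id, data))

        def restore(self, point_in_time):
            # Replay deltas up to the requested time onto the base image.
            image = dict(self.base)
            for timestamp, block_id, data in self.deltas:
                if timestamp <= point_in_time:
                    image[block_id] = data
            return image

    cloud = CloudImage(base_blocks={0: b"boot", 1: b"data-v1"})
    cloud.upload_change(100, 1, b"data-v2")
    cloud.upload_change(200, 1, b"data-v3")
    print(cloud.restore(point_in_time=150))   # {0: b'boot', 1: b'data-v2'}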

ACHIEVING NEAR-ZERO ADDED LATENCY FOR ANY POINT IN TIME OS KERNEL-BASED APPLICATION REPLICATION
20230123049 · 2023-04-20 ·

One example method includes intercepting an IO issued by an application, writing the IO and IO metadata to a splitter journal in NVM, forwarding the IO to storage, and, asynchronously with operations occurring along an IO path between the application and storage, evacuating the splitter journal by sending the IO and IO metadata from the splitter journal to a replication site. In this example, sending the IO and IO metadata from the journal to the replication site does not increase the latency associated with the operations on the IO path.
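
The near-zero added latency follows from keeping replication work off the IO path: the intercepted IO only appends to the journal and goes to storage, while a separate thread drains the journal to the replica. In the sketch below a Python queue stands in for the NVM splitter journal, and all names are illustrative:

    import queue
    import threading

    splitter_journal = queue.Queue()   # stands in for the journal in NVM
    storage, replica_site = {}, {}

    def intercept_io(address, data, metadata):
        # On the IO path: journal the IO plus metadata, forward IO to storage.
        splitter_journal.put((address, data, metadata))
        storage[address] = data        # no replication work on this path

    def evacuate():
        # Off the IO path: drain the journal to the replication site.
        while True:
            address, data, metadata = splitter_journal.get()
            replica_site[address] = (data, metadata)
            splitter_journal.task_done()

    threading.Thread(target=evacuate, daemon=True).start()
    intercept_io(0x20, b"payload", {"seq": 1})
    splitter_journal.join()            # demo only: wait for the evacuation
    print(storage[0x20], replica_site[0x20])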

IN-MEMORY DATABASE-MANAGED CONTAINER VOLUME REPLICATION
20230161794 · 2023-05-25 ·

In an example embodiment, a solution provides container volume replication via a container storage replication log and volume buffer synchronization, built on top of a container cloud platform whose container metadata and replication runtime configuration are managed by a storage manager (a service orchestrated by its job scheduler and service orchestrator). This container volume replication ensures data security for a long-running service in the container. In the case of a disaster, the in-memory database and application data inside the container can be recovered via volume replication. This provides container volume replication for long-running containerized applications whose states keep changing.
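
A rough sketch of the replication-log-plus-buffer-sync idea: volume writes append to a log, the unsynced log tail is periodically shipped to a replica, and disaster recovery replays any surviving tail on top of the replica. ContainerVolume, sync_buffer, and recover are assumed names:

    class ContainerVolume:
        def __init__(self):
            self.data = {}               # volume contents
            self.replication_log = []    # append-only container storage log
            self.synced_upto = 0         # log offset already on the replica

        def write(self, key, value):
            self.data[key] = value
            self.replication_log.append((key, value))

        def sync_buffer(self, replica):
            # Volume buffer synchronization: ship the unsynced log tail.
            for key, value in self.replication_log[self.synced_upto:]:
                replica[key] = value
            self.synced_upto = len(self.replication_log)

    def recover(replica, log, synced_upto):
        # Disaster path: rebuild state from the replica plus any surviving
        # log entries written after the last synchronization.
        state = dict(replica)
        for key, value in log[synced_upto:]:
            state[key] = value
        return state

    volume, replica = ContainerVolume(), {}
    volume.write("db/page1", b"v1")
    volume.sync_buffer(replica)
    volume.write("db/page1", b"v2")      # written after the last sync
    print(recover(replica, volume.replication_log, volume.synced_upto))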

In-memory database-managed container volume replication
11625418 · 2023-04-11 ·

In an example embodiment, a solution provides container volume replication via a container storage replication log and volume buffer synchronization, built on top of a container cloud platform whose container metadata and replication runtime configuration are managed by a storage manager (a service orchestrated by its job scheduler and service orchestrator). This container volume replication ensures data security for a long-running service in the container. In the case of a disaster, the in-memory database and application data inside the container can be recovered via volume replication. This provides container volume replication for long-running containerized applications whose states keep changing.