G06F2212/28

Stream level uninterrupted backup operation using data probe

Methods and systems for backing up data to a target device are described. According to some embodiments, the method receives a first set of data packets for backup, where the first set of data packets includes a plurality of data chunks. The method then captures footprints of the first set of data packets in a cache disk array. In response to receiving an acknowledgement from the cache disk array indicating that the footprints have been captured, the method initiates a write operation to write each data chunk of the first set of data packets to the target device. In response to receiving an acknowledgement indicating that a data chunk has been successfully written, the method flushes that chunk's footprint from the cache disk array.
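The capture-then-write-then-flush flow described above can be sketched in a few lines. This is a toy illustration, not the patented implementation; all class and function names, and the use of a hash as the "footprint", are assumptions.

```python
# Illustrative sketch of the described flow: capture footprints in a cache,
# write chunks to the target, then flush each footprint on write acknowledgement.

class CacheDiskArray:
    """Stands in for the cache disk array that holds chunk footprints."""
    def __init__(self):
        self.footprints = {}

    def capture(self, chunk_id, footprint):
        self.footprints[chunk_id] = footprint
        return True  # acknowledgement that the footprint is captured

    def flush(self, chunk_id):
        self.footprints.pop(chunk_id, None)


class TargetDevice:
    """Stands in for the backup target."""
    def __init__(self):
        self.stored = {}

    def write(self, chunk_id, data):
        self.stored[chunk_id] = data
        return True  # acknowledgement of a successful write


def backup(packets, cache, target):
    # Step 1: capture a footprint (here, a simple hash) of every chunk first.
    for chunk_id, data in packets:
        ack = cache.capture(chunk_id, hash(data))
        assert ack  # proceed only once the cache acknowledges the capture
    # Step 2: write each chunk; step 3: flush its footprint once acknowledged.
    for chunk_id, data in packets:
        if target.write(chunk_id, data):
            cache.flush(chunk_id)


packets = [(0, b"alpha"), (1, b"beta")]
cache, target = CacheDiskArray(), TargetDevice()
backup(packets, cache, target)
print(len(target.stored), len(cache.footprints))  # → 2 0
```

Keeping a footprint in the cache until the write is acknowledged is what lets an interrupted backup resume: any chunk whose footprint survives was not confirmed on the target.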

Mitigating DRAM cache metadata access overhead with SRAM metadata cache and bloom filter

A system and method for mitigating overhead for accessing metadata for a cache in a hybrid memory module are disclosed. The method includes: providing a hybrid memory module including a DRAM cache, a flash memory, and an SRAM for storing a metadata cache; obtaining a host address including a DRAM cache tag and a DRAM cache index; and obtaining a metadata address from the DRAM cache index, wherein the metadata address includes a metadata cache tag and a metadata cache index. The method further includes determining a metadata cache hit based on a presence of a matching metadata cache entry in the metadata cache stored in the SRAM; in a case of a metadata cache hit, obtaining a cached copy of data included in the DRAM cache and skipping access to metadata included in the DRAM cache; and returning the data obtained from the DRAM cache to a host computer. The SRAM may further store a Bloom filter, and a potential DRAM cache hit may be determined based on a result of a Bloom filter test. A cache controller of the hybrid memory module may disable the Bloom filter when a metadata cache hit ratio is higher than a threshold.
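The lookup order in this abstract (SRAM metadata cache first, Bloom filter next, DRAM metadata only as a fallback) can be sketched as follows. Everything here is a simplification under assumed parameters: the address split, filter sizing, and all names are illustrative, not from the patent.

```python
# Sketch: a small "SRAM" metadata cache consulted before the "DRAM" metadata,
# plus a Bloom filter that can rule out DRAM cache residency without a
# metadata access. A Bloom filter has no false negatives, so a negative
# test proves the address is not cached.

import hashlib

class BloomFilter:
    def __init__(self, size=1024, hashes=3):
        self.size, self.hashes, self.bits = size, hashes, 0

    def _positions(self, key):
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(digest[:4], "big") % self.size

    def add(self, key):
        for p in self._positions(key):
            self.bits |= 1 << p

    def might_contain(self, key):
        return all(self.bits >> p & 1 for p in self._positions(key))


class HybridModule:
    def __init__(self):
        self.sram_metadata = {}   # metadata cache (fast path)
        self.dram_metadata = {}   # full metadata (slow path)
        self.dram_data = {}
        self.bloom = BloomFilter()

    def read(self, host_addr):
        tag, index = host_addr >> 6, host_addr & 0x3F  # assumed address split
        # Fast path: a metadata cache hit skips the DRAM metadata access.
        if self.sram_metadata.get(index) == tag:
            return self.dram_data[host_addr], "sram-hit"
        # Bloom filter test: a negative proves the line is not DRAM-cached.
        if not self.bloom.might_contain(host_addr):
            return None, "bloom-miss"
        # Slow path: consult DRAM metadata, then refill the metadata cache.
        if self.dram_metadata.get(index) == tag:
            self.sram_metadata[index] = tag
            return self.dram_data[host_addr], "dram-hit"
        return None, "miss"


mod = HybridModule()
mod.dram_metadata[36], mod.dram_data[100] = 1, b"payload"  # addr 100 → tag 1, index 36
mod.bloom.add(100)
print(mod.read(100)[1])  # dram-hit (refills the SRAM metadata cache)
print(mod.read(100)[1])  # sram-hit
```

The abstract's final point maps onto this structure naturally: once the metadata cache hit ratio is high, the Bloom filter test adds little, so the controller may disable it.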

Shared data cache for kernel bypass applications
10235298 · 2019-03-19

Techniques for implementing a shared data cache for kernel bypass applications are provided. In one set of embodiments, a shared data caching (SDC) service associated with an instance of a kernel bypass application can create a named shared memory region in user space, where the kernel bypass application is configured to use a user-level Input/Output (I/O) stack for accessing a physical I/O device. The SDC service can further map the named shared memory region into a virtual memory address space of the instance. Then, at a time the instance issues an I/O read request to the physical I/O device, the SDC service can process the I/O read request by accessing the named shared memory region as a data cache.
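The named-shared-memory idea can be demonstrated with Python's standard library: one process creates a named region, and any other instance can attach to it by name and serve reads from it instead of touching the device. The region name, block layout, and helper functions below are assumptions for illustration, not the SDC service's actual interface.

```python
# Minimal sketch: a named shared memory region used as a user-space data
# cache that a second mapping can attach to by name.

from multiprocessing import shared_memory

BLOCK = 16  # assumed fixed block size for the toy cache

def create_region(name, blocks):
    return shared_memory.SharedMemory(name=name, create=True, size=blocks * BLOCK)

def cache_write(region, block_no, data):
    start = block_no * BLOCK
    region.buf[start:start + len(data)] = data

def cache_read(region, block_no, length):
    start = block_no * BLOCK
    return bytes(region.buf[start:start + length])

region = create_region("sdc_demo", blocks=4)
try:
    cache_write(region, 1, b"cached-block")
    # Another instance could attach by name instead of issuing a device read:
    attached = shared_memory.SharedMemory(name="sdc_demo")
    result = cache_read(attached, 1, 12)  # b'cached-block'
    attached.close()
finally:
    region.close()
    region.unlink()
```

Because the region lives in user space and is mapped into each instance's virtual address space, cache hits never cross into the kernel, which is the point of pairing it with a kernel-bypass I/O stack.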

Latency sensitive metadata object persistence operation for storage device

Apparatus and method for managing metadata in a data storage device. In some embodiments, a metadata object has entries that describe data sets stored in a non-volatile write cache. During an archival (persistence) operation, the metadata object is divided into portions, and the portions are copied in turn to a non-volatile memory at a rate that maintains a measured latency within a predetermined threshold. A journal is formed of time-ordered entries that describe changes to the metadata object after the copying of the associated portions to the non-volatile memory. The journal is subsequently stored to the non-volatile memory, and may be subsequently combined with the previously stored portions to recreate the metadata object in a local memory. The measured performance latency may be related to a specified customer command completion time (CCT) for host commands.
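The two halves of this scheme, rate-limited portion copying and journal replay, can be sketched briefly. The latency probe, portion size, and all names below are illustrative assumptions, not the device's actual mechanism.

```python
# Toy sketch: copy the metadata object to "NVM" in portions, backing off
# while measured latency exceeds a threshold, then recreate the object by
# replaying a time-ordered journal over the stored portions.

def persist(metadata, portion_size, measure_latency, threshold, nvm,
            sleep=lambda: None):
    items = sorted(metadata.items())
    for i in range(0, len(items), portion_size):
        while measure_latency() > threshold:
            sleep()  # back off so host command completion time stays bounded
        nvm.extend(items[i:i + portion_size])

def recreate(nvm, journal):
    # Replay time-ordered journal entries over the previously stored portions.
    obj = dict(nvm)
    for key, value in journal:
        obj[key] = value
    return obj

metadata = {0: "a", 1: "b", 2: "c", 3: "d"}
nvm = []
persist(metadata, portion_size=2,
        measure_latency=lambda: 0.1, threshold=1.0, nvm=nvm)
journal = [(1, "b2")]  # a change made after its portion was already copied
restored = recreate(nvm, journal)
print(restored)  # {0: 'a', 1: 'b2', 2: 'c', 3: 'd'}
```

The journal is what keeps the archived copy consistent: entries that change after their portion has been written are captured there and reapplied on recreation, so the persistence pass never has to stop host I/O.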

Methods and systems for reducing churn in a caching device

A storage device includes a flash memory-based cache for a hard disk-based storage device and a controller configured to limit the rate of cache updates through a variety of mechanisms, including: determining that data is not likely to be read back from the storage device within a time period that justifies its storage in the cache; compressing data prior to its storage in the cache; precluding storage of sequentially accessed data in the cache; and/or throttling storage of data to the cache within predetermined write periods and/or according to user instruction.
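An admission policy combining those mechanisms might look like the sketch below. The specific heuristics (simple LBA adjacency for "sequential", a fixed write budget per window) and all names are assumptions for illustration.

```python
# Sketch of churn-limiting cache admission: skip data unlikely to be
# re-read, bypass sequential streams, throttle writes per time window,
# and compress what is admitted to reduce flash wear.

import time
import zlib

class ChurnLimitedCache:
    def __init__(self, max_writes_per_period, period_s=1.0):
        self.cache = {}
        self.max_writes = max_writes_per_period
        self.period_s = period_s
        self.window_start = time.monotonic()
        self.writes_in_window = 0
        self.last_lba = None

    def admit(self, lba, data, likely_reread=True):
        # 1) Skip data unlikely to be read back soon.
        if not likely_reread:
            return False
        # 2) Preclude sequentially-accessed data (simple adjacency check).
        sequential = self.last_lba is not None and lba == self.last_lba + 1
        self.last_lba = lba
        if sequential:
            return False
        # 3) Throttle: cap cache writes within each time window.
        now = time.monotonic()
        if now - self.window_start >= self.period_s:
            self.window_start, self.writes_in_window = now, 0
        if self.writes_in_window >= self.max_writes:
            return False
        # 4) Compress before storing to stretch flash capacity and endurance.
        self.cache[lba] = zlib.compress(data)
        self.writes_in_window += 1
        return True


c = ChurnLimitedCache(max_writes_per_period=2)
print(c.admit(10, b"x" * 64))                       # True  (admitted)
print(c.admit(11, b"y" * 64))                       # False (sequential)
print(c.admit(20, b"z" * 64, likely_reread=False))  # False (won't be re-read)
```

Each rejection is cheap for the hard disk path but avoids a flash write, which is the churn the patent is trying to eliminate.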

Shared Data Cache for Kernel Bypass Applications
20180357176 · 2018-12-13


MITIGATING DRAM CACHE METADATA ACCESS OVERHEAD WITH SRAM METADATA CACHE AND BLOOM FILTER
20180232310 · 2018-08-16

According to one embodiment, the method includes: providing a hybrid memory module including a DRAM cache, a flash memory, and an SRAM for storing a metadata cache; obtaining a host address by decoding a data access request received from a host computer, wherein the host address includes a DRAM cache tag and a DRAM cache index; obtaining a metadata address from the DRAM cache index, wherein the metadata address includes a metadata cache tag and a metadata cache index; determining a metadata cache hit based on a presence of a matching metadata cache entry in the metadata cache of the SRAM; in a case of the metadata cache hit, obtaining the data from the DRAM cache and skipping an access to the metadata of the DRAM cache; and returning the data obtained from the DRAM cache to the host computer.

Host device caching of business process data

A data storage subsystem includes a data storage array and a host device in communication with the data storage array. Applications on servers and user terminals communicate with the host to access data maintained by the storage array. To enhance performance, the host includes a cache resource and a computer program with cache configuration logic that determines whether an IO received from an application is associated with a predetermined type of business process and, if so, configures the cache resource to store the data associated with the received IO. This makes the data available directly from the host, without accessing the storage subsystem, in response to a subsequent read request.
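The configuration logic reduces to a classify-then-cache decision on each IO. The sketch below is a hypothetical illustration: the process types, classification input, and all names are assumptions, not the patent's actual logic.

```python
# Sketch: cache data at the host only when the IO is classified as
# belonging to a predetermined business process type.

CACHED_PROCESS_TYPES = {"billing"}  # assumed predetermined types

class HostCache:
    def __init__(self, storage):
        self.storage = storage  # backing storage array (a dict here)
        self.cache = {}

    def handle_io(self, key, process_type):
        if key in self.cache:
            return self.cache[key], "host-cache"
        data = self.storage[key]  # fall through to the storage array
        if process_type in CACHED_PROCESS_TYPES:
            self.cache[key] = data  # configure the cache for this IO's data
        return data, "array"


host = HostCache(storage={"inv-42": b"invoice"})
print(host.handle_io("inv-42", "billing")[1])  # array (first read populates)
print(host.handle_io("inv-42", "billing")[1])  # host-cache (served locally)
```

Only IOs tied to the configured business process consume host cache space; everything else continues to be served by the storage array on every read.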