G06F3/0664

Method for reassembling local disk managers and array groups

A method of reassembling a local disk manager (LDM) and array group (AGRP) includes starting a physical extent manager (PEM) configured to run on a number of nodes. The PEM on each node is configured to manage an AGRP running on the same node. A number of LDMs are reassembled, and each LDM is configured to manage virtual disks on a respective one of the nodes. Once enough LDMs are reassembled, an AGRP can be reassembled.
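The ordering described above can be sketched in a few lines. This is a minimal illustration, not the patented implementation: the class names, the `reassemble` helper, and the `AGRP_QUORUM` threshold are all assumptions made for the example.

```python
# Hypothetical sketch of the reassembly ordering: start a PEM per node,
# reassemble the LDMs, and only reassemble an AGRP once enough LDMs exist.
# All names (Pem, Ldm, AGRP_QUORUM, reassemble) are illustrative.

AGRP_QUORUM = 2  # assumed minimum number of LDMs before an AGRP can form

class Pem:
    """Physical extent manager; one instance runs per node."""
    def __init__(self, node):
        self.node = node

class Ldm:
    """Local disk manager; manages virtual disks on its node."""
    def __init__(self, node):
        self.node = node

def reassemble(nodes):
    pems = [Pem(n) for n in nodes]    # step 1: start a PEM on each node
    ldms = [Ldm(n) for n in nodes]    # step 2: reassemble the LDMs
    if len(ldms) >= AGRP_QUORUM:      # step 3: enough LDMs -> build the AGRP
        return {"agrp_nodes": [l.node for l in ldms], "pems": len(pems)}
    return None                       # not enough LDMs yet; no AGRP

reassemble(["node0", "node1", "node2"])
```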

Delegating low priority tasks to a passive storage controller

A computer program product and a data storage device including first and second storage controllers operating in active-passive mode with a shared disk. Each storage controller includes a storage device storing program instructions and a processor to process the program instructions and perform various operations. The operations include receiving a task to be performed by the storage device containing the first and second storage controllers, wherein the first storage controller is currently operating as an active storage controller and the second storage controller is currently operating as a passive storage controller. The operations further include determining whether the received task has a high priority or a low priority, performing the received task in response to determining that the received task has a high priority, and delegating the received task to the second storage controller for performance in response to determining that the received task has a low priority.
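The dispatch rule the abstract claims reduces to a simple branch on task priority. The sketch below assumes a two-level priority and invented `Controller` objects; it shows the routing decision only, not a real active-passive controller stack.

```python
# Minimal sketch of the claimed dispatch: high-priority work stays on the
# active controller, low-priority work is delegated to the passive peer
# sharing the same disk. Controller and priority names are illustrative.

HIGH, LOW = "high", "low"

class Controller:
    def __init__(self, name):
        self.name = name
        self.performed = []
    def perform(self, task):
        self.performed.append(task)

def receive_task(active, passive, task, priority):
    if priority == HIGH:
        active.perform(task)      # performed locally on the active controller
        return active.name
    passive.perform(task)         # delegated to the passive controller
    return passive.name

a, p = Controller("active"), Controller("passive")
receive_task(a, p, "host-io", HIGH)   # stays on the active controller
receive_task(a, p, "scrub", LOW)      # offloaded to the passive controller
```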

MANAGING AN ENTERPRISE DATA STORAGE SYSTEM
20230016745 · 2023-01-19 ·

The present disclosure describes a method to manage an enterprise data storage system, the method including: dividing storage disks of the enterprise data storage system into multiple virtual storage subsystems, wherein each virtual storage subsystem hosts a non-overlapping subset of the storage disks, and wherein each virtual storage subsystem includes a level-2 cache memory dedicated thereto; establishing a communication path between the level-2 cache memory dedicated to each virtual storage subsystem and a main cache of the enterprise data storage system; and maintaining a copy of transaction data from the non-overlapping subset of the storage disks hosted by each virtual storage subsystem in the level-2 cache memory dedicated thereto such that when the main cache searches for the copy of the transaction data, the main cache fetches, over the communication path, the copy of the transaction data from the level-2 cache memory of the virtual storage subsystem.
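The lookup path can be illustrated compactly: the main cache finds the subsystem that hosts a disk and fetches from that subsystem's dedicated L2 cache. The classes and dictionary-backed caches below are stand-ins invented for the example, not the disclosed design.

```python
# Illustrative sketch: each virtual storage subsystem keeps an L2 cache of
# transaction data for its own (non-overlapping) disks; the main cache
# fetches from the owning subsystem's L2 cache. Names are assumed.

class VirtualSubsystem:
    def __init__(self, disks):
        self.disks = set(disks)   # non-overlapping subset of storage disks
        self.l2_cache = {}        # dedicated level-2 cache

    def record(self, disk, key, value):
        assert disk in self.disks
        self.l2_cache[(disk, key)] = value   # copy of transaction data

class MainCache:
    def __init__(self, subsystems):
        self.subsystems = subsystems

    def fetch(self, disk, key):
        # locate the subsystem hosting this disk; read over its "path"
        for ss in self.subsystems:
            if disk in ss.disks:
                return ss.l2_cache.get((disk, key))
        return None

ss0 = VirtualSubsystem(["d0", "d1"])
ss1 = VirtualSubsystem(["d2"])
ss0.record("d1", "txn42", b"payload")
main = MainCache([ss0, ss1])
main.fetch("d1", "txn42")   # served from ss0's dedicated L2 cache
```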

Data transmission method and apparatus used in virtual switch technology
11556491 · 2023-01-17 ·

A data transmission method and an apparatus used in a virtual switch technology are provided. The method includes: receiving an input/output (IO) request of a virtual machine (VM) for accessing a file or a disk; and when the IO request is to be sent to a physical NIC by using a user-mode Open vSwitch (OVS), converting the IO request into an Internet Small Computer Systems Interface (iSCSI) command in a user mode and then sending the iSCSI command to the user-mode OVS, where the user-mode OVS sends the iSCSI command to the physical NIC.
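The data path amounts to: convert the NIC-bound IO request into an iSCSI command in user mode, then hand it to the user-mode OVS for forwarding. The sketch below uses invented dictionary shapes and a toy `UserModeOvs` class; a 512-byte block size is assumed for the LBA arithmetic.

```python
# Hedged sketch of the claimed path: a VM IO request bound for the physical
# NIC is converted to an iSCSI command in user mode and handed to the
# user-mode OVS. All fields and classes are illustrative assumptions.

BLOCK = 512  # assumed logical block size for LBA conversion

def to_iscsi(io_request):
    # user-mode conversion of a file/disk IO request to an iSCSI command
    return {"op": "iscsi-" + io_request["op"],
            "lba": io_request["offset"] // BLOCK,
            "blocks": io_request["length"] // BLOCK}

class UserModeOvs:
    def __init__(self):
        self.sent_to_nic = []
    def send(self, cmd):
        self.sent_to_nic.append(cmd)   # OVS forwards to the physical NIC

def handle_io(io_request, ovs, bound_for_nic=True):
    if bound_for_nic:
        ovs.send(to_iscsi(io_request))

ovs = UserModeOvs()
handle_io({"op": "read", "offset": 4096, "length": 512}, ovs)
```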

PERFORMANCE EVALUATION OF AN APPLICATION BASED ON DETECTING DEGRADATION CAUSED BY OTHER COMPUTING PROCESSES

Performance degradation of an application that is caused by another computing process sharing infrastructure with the application is detected. The application and the other computing process may execute via different virtual machines hosted on the same computing device. To detect the performance degradation attributable to the other computing process, certain storage segments of a data storage (e.g., a cache) shared by the virtual machines are written with data. A pattern of read operations is then performed on the segments to determine whether an increase in read access time has occurred; such an increase is attributable to the other computing process. After detecting the degradation, a metric that quantifies the detected degradation is provided to a machine learning (ML) model, which determines the actual performance of the application absent the degradation attributable to the other computing process.
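The write-then-timed-read probe can be sketched as follows. The segment sizes, read count, and the 2x detection threshold are assumptions for illustration; a real detector would compare against a calibrated baseline rather than a back-to-back second run.

```python
# Sketch of the probing idea: write a set of segments into shared storage,
# then time a pattern of reads; a rise in read latency over a baseline
# suggests a co-located process is evicting the shared cache.

import time

def probe_read_time(segments, reads=1000):
    start = time.perf_counter()
    total = 0
    for i in range(reads):
        total += segments[i % len(segments)][0]   # touch each segment
    return time.perf_counter() - start

segments = [bytearray(4096) for _ in range(64)]   # data written to the cache
baseline = probe_read_time(segments)              # quiet-period measurement
observed = probe_read_time(segments)              # later measurement
degraded = observed > 2.0 * baseline              # assumed threshold
```

In the abstract's scheme, the resulting degradation metric (e.g. `observed / baseline`) would then be fed to an ML model that estimates the application's undisturbed performance.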

MIGRATION OF VIRTUALIZED COMPUTING INSTANCE WITH MULTIPLE DISK CHAINS IN VIRTUALIZED COMPUTING ENVIRONMENT

Example methods and systems are disclosed for migrating a virtualized computing instance and its first snapshot hierarchy from a first object store to a second object store. One example method includes identifying a first disk chain of the first snapshot hierarchy having an object running point, identifying a second disk chain of the first snapshot hierarchy different from the first disk chain, and migrating the second disk chain from the first object store to the second object store to form a first branch of a second snapshot hierarchy in the second object store. After the migrating, the example method includes instructing that a first native snapshot be taken on the object running point in the second object store, instructing that the object running point be reverted along the first branch, and migrating the first disk chain from the first object store to the second object store.
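The step ordering is the substance of the claim, so it is worth laying out explicitly. The functions below are stand-ins that only record the sequence; they are not a hypervisor or object-store API.

```python
# Step-by-step sketch of the migration order described above; the snapshot
# and revert calls are illustrative stand-ins that log the sequence.

log = []

def migrate_chain(chain, src, dst):
    log.append(f"migrate {chain} {src}->{dst}")

def take_native_snapshot(running_point):
    log.append(f"snapshot {running_point}")

def revert(running_point, branch):
    log.append(f"revert {running_point} along {branch}")

# chain1 holds the object running point; chain2 does not
migrate_chain("chain2", "store1", "store2")  # forms branch1 at destination
take_native_snapshot("running-point")        # native snapshot in 2nd store
revert("running-point", "branch1")           # revert along the new branch
migrate_chain("chain1", "store1", "store2")  # finally move the first chain
```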

Efficient backup after a restore operation

A request to restore a specific backup instance is received. In response to the received request, a new reference backup instance based on the specific backup instance is created at a storage controlled by a backup system. Data associated with the specific backup instance is provided to a recipient system from the storage controlled by the backup system. A constructive incremental backup snapshot of the recipient system is then performed based on the new reference backup instance.
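The point of the reference instance is that the next incremental backup only has to capture post-restore changes. A dictionary-backed sketch, with all structures and names invented for illustration:

```python
# Hedged sketch of the flow: on restore, clone the restored backup instance
# as a new reference, then base the next incremental snapshot on it so only
# post-restore changes are captured. All structures are illustrative.

backups = {"b1": {"f": 1, "g": 2}}   # backup storage, keyed by instance id

def restore(instance_id, recipient):
    data = dict(backups[instance_id])
    backups["ref-" + instance_id] = dict(data)  # new reference instance
    recipient.clear()
    recipient.update(data)                      # data sent to the recipient
    return "ref-" + instance_id

def incremental_backup(ref_id, recipient):
    ref = backups[ref_id]
    # constructive incremental: only keys changed since the reference
    return {k: v for k, v in recipient.items() if ref.get(k) != v}

recipient = {}
ref = restore("b1", recipient)
recipient["g"] = 99                             # post-restore change
delta = incremental_backup(ref, recipient)      # only the changed key
```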

Low-latency shared memory channel across address spaces without system call overhead in a computing system

Examples provide a method of communication between a client application and a filesystem server in a virtualized computing system. The client application executes in a virtual machine (VM) and the filesystem server executes in a hypervisor. The method includes: allocating, by the client application, first shared memory in a guest virtual address space of the client application; creating a guest application shared memory channel between the client application and the filesystem server upon request by the client application to a driver in the VM, the driver in communication with the filesystem server, the guest application shared memory channel using the first shared memory; sending authentication information associated with the client application to the filesystem server to create cached authentication information at the filesystem server; and submitting a command in the guest application shared memory channel from the client application to the filesystem server, the command including the authentication information.
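The channel mechanics can be summarized as: authenticate once, cache the credentials server-side, then submit commands through shared memory with the token attached, no system call per command. The classes below are invented for the sketch; a Python list stands in for the shared-memory ring.

```python
# Illustrative sketch: the client registers auth info once with the
# filesystem server, then submits commands on a shared-memory channel
# carrying that token; the server validates against its cache. All names
# and the list-as-ring are assumptions made for this example.

class FsServer:
    def __init__(self):
        self.cached_auth = {}
        self.handled = []

    def cache_auth(self, client_id, token):
        self.cached_auth[client_id] = token     # cached authentication info

    def poll(self, channel):
        for cmd in channel:                     # server drains the ring
            if self.cached_auth.get(cmd["client"]) == cmd["token"]:
                self.handled.append(cmd["op"])
        channel.clear()

server = FsServer()
channel = []                            # stands in for the shared-memory ring
server.cache_auth("vm1-app", "tok")     # one-time authentication step
channel.append({"client": "vm1-app", "token": "tok", "op": "open"})
server.poll(channel)
```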

DISTRIBUTED DATA STORAGE SYSTEM USING ERASURE CODING ON STORAGE NODES FEWER THAN DATA PLUS PARITY FRAGMENTS AND HEALING FAILED WRITE ATTEMPTS

A distributed data storage system using erasure coding (EC) provides advantages of EC data storage while retaining high resiliency for EC data storage architectures having fewer data storage nodes than the number of EC data-plus-parity fragments. An illustrative embodiment is a three-node data storage system with EC 4+2. Incoming data is temporarily replicated to ameliorate the effects of certain storage node outages or fatal disk failures, so that read and write operations can continue from/to the storage system. The system is equipped to automatically heal failed EC write attempts in a manner transparent to users and/or applications: when all storage nodes are operational, the distributed data storage system automatically converts the temporarily replicated data to EC storage and reclaims storage space previously used by the temporarily replicated data. Individual hardware failures are healed through migration techniques that reconstruct and re-fragment data blocks according to the governing EC scheme.
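The healing rule can be stated as a small state machine: replicate while degraded, convert to EC fragments and reclaim the replicas once all nodes are back. The sketch below uses string-splitting as a placeholder for real erasure coding (e.g. Reed-Solomon); the class and constants are invented for the example.

```python
# Sketch of the healing rule for EC 4+2 on three nodes: when the cluster is
# degraded, keep writes replicated; once all nodes return, convert replicas
# to EC fragments and reclaim the replica space. Names are assumed, and
# fragment() is a placeholder for a real erasure code.

DATA, PARITY, NODES = 4, 2, 3

def fragment(block):
    # placeholder fragmenting; real systems use Reed-Solomon coding
    return [f"{block}:frag{i}" for i in range(DATA + PARITY)]

class Cluster:
    def __init__(self):
        self.replicas, self.ec = {}, {}
        self.up = NODES

    def write(self, block):
        if self.up < NODES:
            self.replicas[block] = [block] * 3   # temporary replication
        else:
            self.ec[block] = fragment(block)     # normal EC write

    def heal(self):
        if self.up == NODES:                     # all nodes operational
            for block in list(self.replicas):
                self.ec[block] = fragment(block) # convert to EC storage
                del self.replicas[block]         # reclaim replica space

c = Cluster()
c.up = 2
c.write("blk")     # node outage: write is temporarily replicated
c.up = 3
c.heal()           # outage over: converted to EC transparently
```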

MULTIPLE READER/WRITER MODE FOR CONTAINERS IN A VIRTUALIZED COMPUTING ENVIRONMENT
20230214249 · 2023-07-06 ·

Multiple stateful virtualized computing instances (e.g., containers) are provided with concurrent access (e.g., read and/or write access) to a shared persistent storage location, such as a persistent volume (PV). This multiple-access capability is provided by a container volume driver that generates and maintains an interval tree data structure for purposes of tracking and managing attempts by containers to simultaneously read/write to the PV.
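The tracking idea reduces to interval-overlap checks on the PV's byte ranges. For brevity, a flat list stands in here for the interval tree the abstract describes (a tree makes the overlap query logarithmic rather than linear); the driver API is invented for the sketch.

```python
# Minimal sketch of the tracking idea: the driver records each container's
# in-flight byte range on the PV and refuses a request that conflicts with
# an active writer. A flat list stands in for the interval tree.

class VolumeDriver:
    def __init__(self):
        self.intervals = []   # (start, end, container, mode)

    def try_access(self, container, start, end, mode):
        for s, e, _, m in self.intervals:
            overlap = start < e and s < end
            if overlap and (mode == "write" or m == "write"):
                return False                      # conflicting access
        self.intervals.append((start, end, container, mode))
        return True

    def done(self, container):
        self.intervals = [i for i in self.intervals if i[2] != container]

drv = VolumeDriver()
drv.try_access("c1", 0, 4096, "read")     # granted: readers may share
drv.try_access("c2", 0, 4096, "read")     # granted
drv.try_access("c3", 100, 200, "write")   # refused: overlaps active readers
```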