Patent classifications
G06F3/0617
DISTRIBUTED RESOURCE CACHING
Embodiments are directed to distributed resource caching. A file system that includes cache volumes and agents that may be associated with clients of the file system may be provided. A cache allocation for each agent may be determined based on a capacity of the cache volumes and a number of the agents such that each cache allocation is associated with tokens that each represent a reserved portion of free space in the cache volumes. Storage jobs may be provided to the agents. Data associated with the storage jobs may be stored in the cache volumes. The cache allocation for each agent may be reduced based on the data stored for each agent.
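The token-based allocation scheme above can be sketched as follows. This is a minimal illustration, not the patented implementation; the class and method names, the even per-agent split, and the ceiling-division token accounting are all assumptions.

```python
class CacheAllocator:
    """Sketch of token-based cache allocation: each token represents a reserved
    portion of free space in the cache volumes (names are illustrative)."""

    def __init__(self, capacity_bytes, token_size, agent_ids):
        total_tokens = capacity_bytes // token_size
        per_agent = total_tokens // len(agent_ids)   # allocation based on capacity and agent count
        self.token_size = token_size
        self.allocations = {a: per_agent for a in agent_ids}

    def store(self, agent_id, num_bytes):
        """Store data for a storage job; reduce the agent's allocation accordingly."""
        tokens_needed = -(-num_bytes // self.token_size)   # ceiling division
        if self.allocations[agent_id] < tokens_needed:
            raise MemoryError("cache allocation exhausted for agent")
        self.allocations[agent_id] -= tokens_needed
        return tokens_needed
```

With a 1000-byte capacity, 10-byte tokens, and two agents, each agent starts with 50 tokens, and storing 25 bytes consumes 3 of them.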
Memory controller and operating method thereof
A memory controller for controlling a memory device includes a host interface and a background controller. The host interface communicates with a host through a link, determines whether quality of the link has been degraded by monitoring the quality of the link, and performs a link recovery operation on the link when it is determined that the quality of the link is degraded. The background controller controls the memory device to perform a background operation, while the link recovery operation is being performed.
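The two cooperating roles can be modeled roughly as below. The 0.5 quality threshold, the method names, and the garbage-collection example of a background operation are illustrative assumptions; the point shown is only that background work continues while link recovery is in progress.

```python
class HostInterface:
    """Sketch: monitors link quality and starts recovery when it degrades
    (the threshold value is illustrative)."""
    DEGRADED_THRESHOLD = 0.5

    def __init__(self):
        self.recovering = False

    def monitor(self, quality):
        # quality in [0, 1]; degradation triggers a link recovery operation
        if quality < self.DEGRADED_THRESHOLD:
            self.recovering = True
        return self.recovering


class BackgroundController:
    """Sketch: background work is not blocked by link recovery."""

    def tick(self, host):
        # perform one background step (e.g. garbage collection) and report
        # whether it ran while link recovery was in progress
        return ("gc-step", host.recovering)
```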
Resolving erred IO flows
A method for resolving an erred input/output (IO) flow, the method may include (i) sending, over a path, a remote direct write request associated with a certain address range, wherein the path is formed between a compute node of a storage system and a storage drive of the storage system; (ii) receiving, by the compute node, an error message related to the remote direct write request, wherein the error message does not indicate whether an execution of the remote direct write request failed or is only temporarily delayed; (iii) responding, by the compute node, to the error message by (a) refraining from sending one or more IO requests through the path, (b) refraining from sending at least one IO request aimed at the certain address range, and (c) requesting, using a management communication link, to force an execution of pending IO requests that are related to the path; and (iv) reusing the path, by the compute node, following an indication that there are no pending IO requests related to the path.
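The fencing-and-reuse logic of steps (iii) and (iv) can be sketched as a small state holder. The `ManagementLink` stub, the class and method names, and the address-range representation are all assumptions for illustration; only the overall flow follows the abstract.

```python
class ManagementLink:
    """Stub management communication link (illustrative)."""

    def __init__(self):
        self.flushed = []

    def force_flush(self, path):
        # ask the drive to force execution of pending IOs on this path
        self.flushed.append(path)


class PathManager:
    """Sketch of the compute node's response to an ambiguous write error."""

    def __init__(self, mgmt_link):
        self.mgmt = mgmt_link
        self.quarantined_paths = set()
        self.quarantined_ranges = set()

    def on_ambiguous_error(self, path, addr_range):
        # the error does not say failed vs. delayed, so fence both
        # the path and the target address range
        self.quarantined_paths.add(path)
        self.quarantined_ranges.add(addr_range)
        self.mgmt.force_flush(path)

    def can_send(self, path, addr_range):
        return (path not in self.quarantined_paths
                and addr_range not in self.quarantined_ranges)

    def on_no_pending(self, path):
        # reuse the path once the drive reports no pending IOs for it
        self.quarantined_paths.discard(path)
```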
MON service migration method, apparatus, and device, and readable storage medium
A MON service migration method, apparatus, and device, and a readable storage medium, for use in any node of a distributed storage system. The method comprises: acquiring historical data of a MON service on a current node; determining, within the node, a target disk to which the MON service is to be migrated, and migrating the historical data to the target disk; creating mount information for the MON service in a configuration file of the distributed storage system; and restarting the MON service according to the configuration file, such that the MON service is migrated to the target disk. The method does not need to remove nodes from the distributed storage system, so the MON service migration process does not affect front-end services, improving the service capability and reliability of the distributed storage system.
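The migration steps can be sketched end to end. This models disks and the configuration file as plain dicts to keep the example self-contained; a real system would copy files on actual disks and restart the service via a service manager, and every name here is an assumption.

```python
def migrate_mon(node_disks, source_disk, target_disk, config):
    """Sketch of the MON migration flow within one node:
    disk contents and the config are modeled as dicts (illustrative)."""
    # 1. acquire historical data of the MON service on the current node
    history = node_disks[source_disk].pop("mon_data")
    # 2. migrate the historical data to the chosen target disk
    node_disks[target_disk]["mon_data"] = history
    # 3. create mount information for the MON service in the cluster config
    config["mon_mount"] = target_disk
    # 4. restarting the MON service against the new config is left to the caller
    return config
```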
MAINTAINING QUEUES FOR MEMORY SUB-SYSTEMS
Methods, systems, and devices for data stream processing for maintaining queues for memory sub-systems are described. A number of commands included in a queue of a plurality of queues of a memory die of a memory sub-system can be determined. Each queue can be associated with a respective priority level and can be configured to maintain a respective set of commands. A command can be assigned to the queue based on a number of commands included in the queue. One or more commands can be issued from the queues based on the respective priority levels of the queues.
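The queue maintenance described above can be sketched as follows. The fall-back-to-lower-priority assignment policy and the fixed per-queue depth are assumptions; the abstract only says assignment depends on the number of commands already in a queue and that issuing follows priority levels.

```python
from collections import deque


class DieQueues:
    """Sketch of per-die command queues with priority levels
    (policy details are illustrative)."""

    def __init__(self, priorities, depth):
        self.depth = depth
        # keep highest priority first, so issue() drains urgent queues before others
        self.queues = {p: deque() for p in sorted(priorities, reverse=True)}

    def assign(self, cmd, priority):
        # place the command based on how many commands each queue already holds:
        # fall back to the next lower-priority queue when the preferred one is full
        for p, q in self.queues.items():
            if p <= priority and len(q) < self.depth:
                q.append(cmd)
                return p
        raise OverflowError("all eligible queues are full")

    def issue(self):
        # issue from the highest-priority non-empty queue
        for q in self.queues.values():
            if q:
                return q.popleft()
        return None
```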
Handling Urgent Commands in a Data Storage Device
A data storage device includes a non-volatile memory coupled to a controller. The controller is configured to transmit data between the non-volatile memory and an external electronic device and to receive one or more commands to read or write data to the non-volatile memory. The controller is further configured to identify an urgent command in the one or more commands, transmit the urgent command to a negative index of an input queue of the data storage device, and execute a plurality of commands in the input queue.
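The effect of placing an urgent command at a "negative index" can be sketched with a simple list-backed queue, where inserting at the head stands in for the patent's negative-index placement. The class and method names are illustrative.

```python
class InputQueue:
    """Sketch: urgent commands go to the head of the input queue
    (head insertion here stands in for the 'negative index'), so they
    execute before previously queued commands."""

    def __init__(self):
        self.cmds = []

    def submit(self, cmd, urgent=False):
        if urgent:
            self.cmds.insert(0, cmd)   # jumps ahead of pending commands
        else:
            self.cmds.append(cmd)

    def execute(self):
        # drain the queue in order and return the execution order
        executed = list(self.cmds)
        self.cmds.clear()
        return executed
```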
METHOD FOR WRITING DATA IN APPEND MODE, DEVICE AND STORAGE MEDIUM
The present disclosure provides a method and apparatus for writing data in an append mode, a device and a storage medium. The present disclosure relates to the field of cloud storage technology, and can be applied to a cloud platform. The method includes: acquiring to-be-written data, and writing the to-be-written data into a magnetic disk; writing first index information of the to-be-written data in a memory; storing, in response to determining that the number of pieces of second index information is greater than a first preset threshold, the second index information into storage hardware, the second index information including the first index information; and writing first identifier information corresponding to the second index information in the memory.
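The memory/disk indexing flow can be sketched as below. The `IndexStore` stub, the threshold value, and the way the identifier replaces the flushed entries in memory are assumptions made to keep the example concrete and self-contained.

```python
class IndexStore:
    """Stub for the storage hardware that persists index batches (illustrative)."""

    def __init__(self):
        self.batches = {}

    def persist(self, entries):
        ident = len(self.batches)
        self.batches[ident] = entries
        return ident


class AppendWriter:
    """Sketch of append-mode writing with threshold-driven index flushing."""

    def __init__(self, index_store, threshold=4):
        self.index_store = index_store   # storage hardware for index batches
        self.threshold = threshold       # the "first preset threshold"
        self.disk = []                   # the append-only data log
        self.mem_index = []              # index information held in memory

    def append(self, record):
        offset = len(self.disk)
        self.disk.append(record)                   # write the data to disk
        self.mem_index.append((offset, record))    # write first index info in memory
        if len(self.mem_index) > self.threshold:
            # persist the accumulated index information to storage hardware,
            # then keep only identifier information in memory
            ident = self.index_store.persist(list(self.mem_index))
            self.mem_index = [("flushed", ident)]
```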
CROSS-SITE HIGH-AVAILABILITY DISTRIBUTED CLOUD STORAGE SYSTEM TO PROVIDE MULTIPLE VIRTUAL CHANNELS BETWEEN STORAGE NODES
Systems and methods are described for a cross-site high-availability distributed storage system. According to one embodiment, a computer-implemented method includes providing a remote direct memory access (RDMA) request for an RDMA stream and generating, with an interconnect (IC) layer of a first storage node, multiple IC channels and associated IC requests for the RDMA request. The method further includes mapping, using an IC transport layer of the first storage node, an IC channel to a group of multiple transport-layer sessions to split data traffic of the IC channel into multiple packets across the group, and assigning, with the IC transport layer, a unique transaction identifier (ID) to each IC request and a different data offset to each packet of a transport-layer session.
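The splitting-and-tagging step can be sketched as a single function. The round-robin spread across sessions, the MTU-sized packets, and all names are illustrative assumptions; the abstract only requires a unique transaction ID per IC request and a distinct data offset per packet.

```python
import itertools

_txn_ids = itertools.count(1)   # unique transaction ID per IC request

def split_ic_request(data, num_sessions, mtu):
    """Sketch: split one IC request's payload into packets spread over a group
    of transport-layer sessions (round-robin policy is an assumption)."""
    txn = next(_txn_ids)
    packets = []
    for i, off in enumerate(range(0, len(data), mtu)):
        packets.append({
            "txn": txn,                   # same ID for every packet of this request
            "session": i % num_sessions,  # spread traffic over the session group
            "offset": off,                # distinct data offset per packet
            "payload": data[off:off + mtu],
        })
    return packets
```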
INTERCONNECT LAYER SEND QUEUE RESERVATION SYSTEM
Systems and methods for an interconnect layer send queue reservation system are provided. In one example, a method involves performing a transfer of data (e.g., an NVLog) from a storage system to a secondary storage system. A send queue having a fixed number of slots is maintained within an interconnect layer interposed between a file system and a Remote Direct Memory Access (RDMA) layer of the storage system. The interconnect layer implements an application programming interface (API) for the reservation system. A deadlock situation is avoided by, during a suspendable phase of a write transaction, making a reservation for slots within the send queue via the reservation system for the transfer of data. When the reservation is successful, the write transaction proceeds with a modify phase, during which the reservation is consumed and the interconnect layer is caused to perform an RDMA operation to carry out the transfer of data.
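The reserve-then-consume discipline that avoids the deadlock can be sketched as a small accounting class. This is not the vendor's implementation; the API shape and names are assumptions, and the key property shown is that a reservation can fail (and be retried) only during the suspendable phase.

```python
class SendQueue:
    """Sketch of a fixed-slot send queue with a reservation API."""

    def __init__(self, slots):
        self.free = slots     # slots not consumed by in-flight sends
        self.reserved = 0     # slots promised to transactions in their suspendable phase

    def reserve(self, n):
        # suspendable phase: a failed reservation is simply retried later,
        # so the transaction never blocks inside the non-suspendable modify phase
        if self.free - self.reserved < n:
            return False
        self.reserved += n
        return True

    def consume(self, n):
        # modify phase: the earlier reservation guarantees these slots exist
        self.reserved -= n
        self.free -= n

    def complete(self, n):
        # the RDMA operation finished; slots return to the pool
        self.free += n
```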
Storage device and method of operating the same
Provided herein may be a storage device configured to check a status of a memory device based on read data, without output of a status check command, and to determine a subsequent command to be generated. The storage device may include a memory device and a memory controller configured to control the memory device. The memory device may include a read data generator configured to generate new read data including both read data corresponding to a read command received from the memory controller and information indicating a status of the memory device. The memory controller may include: a status information determiner configured to determine the status of the memory device based on the new read data received from the read data generator and generate status information, and a command generator configured to generate a command to be output to the memory device based on the status information.
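The status-piggybacking idea can be sketched with one function per side of the interface. The single trailing status byte, its encoding, and the `WAIT`/`ISSUE_NEXT` command choice are illustrative assumptions.

```python
BUSY, READY = 0x01, 0x00   # illustrative status encoding

def generate_read_data(payload, busy):
    """Memory-device side: append a status byte to the read data, so the
    controller learns device status without a separate status-check command."""
    return payload + bytes([BUSY if busy else READY])

def determine_next_command(new_read_data):
    """Controller side: split payload from status and pick the subsequent command."""
    payload, status = new_read_data[:-1], new_read_data[-1]
    return payload, ("WAIT" if status == BUSY else "ISSUE_NEXT")
```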