Patent classifications
G06F2211/1038
IMPLEMENTING QUEUES (FIFO) AND STACKS (FILO) ON TOP OF DISPERSED STORAGE
A computing device includes an interface to communicate with a dispersed storage network (DSN), a memory, and a processing module. The computing device receives, from another computing device, a write queue entry request to facilitate storage of one or more queue entries of a queue in a set of storage units (SUs). The computing device dispersed error encodes at least a portion of the write queue entry request to generate a set of queue entry encoded slices (QEESs). Based on the write queue entry request, the computing device generates a write request that includes a slice name corresponding to a QEES of the set; the slice name contains a queue entry identifier (ID) field that includes a timestamp field and/or an entry number of the write queue entry request. The computing device transmits the write request to the set of SUs to facilitate distributed storage of the set of QEESs.
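A minimal sketch of the slice-naming idea described above, in Python. The class and field names here are hypothetical, and the "encoding" step simply splits the payload into slices as a stand-in for a real information dispersal algorithm (which would add redundancy, e.g. Reed-Solomon). The point illustrated is that embedding a timestamp and entry number in each slice name makes lexicographic name order match FIFO arrival order:

```python
import time
from dataclasses import dataclass

@dataclass(frozen=True)
class SliceName:
    # Hypothetical layout: queue ID plus a queue entry ID composed of a
    # timestamp field and a per-request entry number, so that sorting
    # slice names reproduces the FIFO order of queue entries.
    queue_id: str
    timestamp_us: int
    entry_number: int
    slice_index: int

    def key(self) -> str:
        return (f"{self.queue_id}/"
                f"{self.timestamp_us:020d}-{self.entry_number:06d}/"
                f"{self.slice_index}")

def encode_queue_entry(queue_id, entry_number, payload, width=4):
    """Stand-in for dispersed error encoding: splits the payload into
    `width` named slices (no parity/redundancy in this toy version)."""
    ts = time.time_ns() // 1000
    step = -(-len(payload) // width)        # ceiling division
    slices = []
    for i in range(width):
        name = SliceName(queue_id, ts, entry_number, i)
        slices.append((name, payload[i * step:(i + 1) * step]))
    return slices

slices = encode_queue_entry("q1", 1, b"hello queue entry", width=4)
for name, data in slices:
    print(name.key(), data)
```

Because the zero-padded timestamp precedes the entry number in the key, a storage unit (or a reader listing the namespace) can recover queue order without any separate index.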
Automatic adaptation of parameters controlling database savepoints
Each of a plurality of database transactions is logged (i.e., recorded) in a log. Concurrent with the logging, one or more characteristics of the log are monitored. Thereafter, a savepoint is triggered when a pre-defined condition is met as indicated by the monitoring. The triggered savepoint can override or accelerate a savepoint that would have otherwise been triggered based on pre-specified parameters.
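A short sketch of the adaptive trigger described above, assuming a hypothetical scheduler API. A savepoint normally fires every `interval_s` seconds (the pre-specified parameter), but monitoring the log's growth can accelerate it when the log passes a size threshold:

```python
import time

class SavepointScheduler:
    """Sketch of adaptive savepoint triggering (names hypothetical):
    a time-based savepoint every `interval_s` seconds, accelerated
    when log growth since the last savepoint exceeds `max_log_bytes`."""

    def __init__(self, interval_s=300.0, max_log_bytes=64 * 2**20):
        self.interval_s = interval_s
        self.max_log_bytes = max_log_bytes
        self.bytes_since_savepoint = 0      # monitored log characteristic
        self.last_savepoint = time.monotonic()

    def record(self, entry_bytes: int) -> bool:
        """Log one entry; return True if a savepoint was triggered."""
        self.bytes_since_savepoint += entry_bytes
        due = time.monotonic() - self.last_savepoint >= self.interval_s
        accelerated = self.bytes_since_savepoint >= self.max_log_bytes
        if due or accelerated:
            self.bytes_since_savepoint = 0
            self.last_savepoint = time.monotonic()
            return True
        return False

sched = SavepointScheduler(interval_s=300, max_log_bytes=1024)
fired = [sched.record(256) for _ in range(5)]
print(fired)  # [False, False, False, True, False]
```

The fourth 256-byte entry crosses the 1024-byte threshold, so the savepoint fires long before the 300-second timer would have, which is exactly the "override or accelerate" behavior the abstract describes.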
I/O accelerator for striped disk arrays using parity
Disclosed herein is an enhanced volume manager (VM) for a storage system that accelerates input/output (I/O) performance for random write operations to a striped disk array using parity. More specifically, various implementations are directed to accelerating random writes (writes comprising less than a complete stripe of data) by consolidating several random writes together to create a sequential write (a full-stripe write), thereby eliminating one or more read operations and/or increasing the volume of new/updated data stored per write operation. Several such implementations comprise functionality in the VM for identifying random write I/O requests, queuing them locally in a journal, and then periodically flushing the journal to the disk array as a sequential write request.
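A toy model of the journal-and-flush scheme above, with hypothetical class names and deliberately tiny chunk sizes. It shows why consolidation helps: once the journal holds a full stripe of data, parity can be computed (XOR, as in RAID-5) from the queued chunks alone, so no old data or old parity needs to be read back (no read-modify-write):

```python
from functools import reduce

CHUNK = 4          # bytes per chunk (toy size)
DATA_DISKS = 4     # a full stripe = DATA_DISKS data chunks + 1 parity chunk

def xor_parity(chunks):
    """RAID-5 style parity: bytewise XOR across the data chunks."""
    return bytes(reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)),
                        chunks))

class WriteJournal:
    """Sketch (hypothetical API): queue sub-stripe random writes locally,
    then flush them to the array as one sequential full-stripe write."""

    def __init__(self):
        self.pending = []          # journaled random writes
        self.stripes_written = []  # stand-in for the disk array

    def write(self, data: bytes):
        assert len(data) == CHUNK, "toy model: one chunk per random write"
        self.pending.append(data)
        if len(self.pending) == DATA_DISKS:
            self.flush()

    def flush(self):
        stripe = list(self.pending)
        stripe.append(xor_parity(stripe))    # parity from queued data only
        self.stripes_written.append(stripe)  # one sequential full-stripe write
        self.pending.clear()

j = WriteJournal()
for b in (b"aaaa", b"bbbb", b"cccc", b"dddd"):
    j.write(b)
print(len(j.stripes_written))  # 1
```

Four sub-stripe writes, which would each have cost a read-modify-write cycle individually, are absorbed into a single full-stripe write with freshly computed parity.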
NVRAM data organization using self-describing entities for predictable recovery after power-loss
In one embodiment, a node coupled to a plurality of storage devices executes a storage input/output (I/O) stack having a plurality of layers including a persistence layer. A portion of non-volatile random access memory (NVRAM) is configured as one or more logs. The persistence layer cooperates with the NVRAM to employ the log to record write requests received from a host and to acknowledge successful receipt of the write requests to the host. The log has a set of entries, each entry including (i) write data of a write request and (ii) a previous offset referencing a previous entry of the log. After a power loss, the acknowledged write requests are recovered by replay of the log in reverse sequential order, using the previous offset in each entry to traverse the log.
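The self-describing entry format and reverse replay can be sketched as follows (an illustrative in-memory model; the class name and header layout are assumptions, not the patent's actual format). Each entry carries its own header with the data length and a back-pointer to the previous entry's offset, so recovery only needs the offset of the newest entry:

```python
import struct

class NvLog:
    """Toy in-memory 'NVRAM' log. Each self-describing entry stores:
       prev_offset (8 bytes, signed) | data_len (4 bytes) | write data
       A prev_offset of -1 marks the oldest entry."""
    HDR = struct.Struct("<qI")

    def __init__(self):
        self.buf = bytearray()
        self.tail = -1                 # offset of the most recent entry

    def append(self, data: bytes) -> None:
        off = len(self.buf)
        self.buf += self.HDR.pack(self.tail, len(data)) + data
        self.tail = off                # entry durable; host write acknowledged

    def replay_reverse(self):
        """Walk back-pointers from the newest entry (post-'power-loss')."""
        off = self.tail
        while off != -1:
            prev, n = self.HDR.unpack_from(self.buf, off)
            start = off + self.HDR.size
            yield bytes(self.buf[start:start + n])
            off = prev

log = NvLog()
for d in (b"w1", b"w2", b"w3"):
    log.append(d)
print(list(log.replay_reverse()))  # [b'w3', b'w2', b'w1']
```

Because each entry describes itself (length plus back-pointer), replay needs no external index: the chain of previous offsets yields the acknowledged writes newest-first, matching the reverse sequential recovery the abstract describes.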