Patent classifications
G06F2211/1054
SMART MEMORY BUFFERS
An example method involves receiving, at a first memory node, data to be written at a memory location in the first memory node. The data is received from a device. At the first memory node, old data is read from the memory location, without sending the old data to the device. The data is written to the memory location. The data and the old data are sent from the first memory node to a second memory node to store parity information in the second memory node without the device determining the parity information. The parity information is based on the data stored in the first memory node.
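The read-modify-write flow in this abstract can be sketched as follows. The class and method names are illustrative assumptions, and the parity rule (new parity = old parity XOR old data XOR new data) is the standard XOR update that the described data flow supports; it is not quoted from the patent:

```python
# Hypothetical sketch: the host writes to the first memory node, which reads
# the old data locally and forwards (new, old) to the parity node. The host
# never sees the old data and never computes parity.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

class ParityNode:
    """Second memory node: holds parity and updates it itself."""
    def __init__(self, size: int):
        self.parity = bytearray(size)

    def update(self, offset: int, new_data: bytes, old_data: bytes) -> None:
        # new_parity = old_parity XOR old_data XOR new_data
        delta = xor_bytes(new_data, old_data)
        for i, b in enumerate(delta):
            self.parity[offset + i] ^= b

class DataNode:
    """First memory node: performs the read-modify-write locally."""
    def __init__(self, size: int, parity_node: ParityNode):
        self.mem = bytearray(size)
        self.parity_node = parity_node

    def write(self, offset: int, data: bytes) -> None:
        old = bytes(self.mem[offset:offset + len(data)])  # read old data locally
        self.mem[offset:offset + len(data)] = data        # write new data
        self.parity_node.update(offset, data, old)        # forward both to parity node
```

Offloading the old-data read and the parity XOR to the memory nodes removes two host round trips per write, which is the point of the claimed method.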
ALLOCATING REBUILDING QUEUE ENTRIES IN A DISPERSED STORAGE NETWORK
A method for execution by a processing system, including a processor, in a dispersed storage and task network (DSTN) includes: identifying a slice name of a slice in error of a set of slices stored in a set of dispersed storage (DS) units; identifying a number of slice errors of the set of slices; generating a queue entry that includes the slice name of the slice in error, a rebuilding task indicator, an identity of the set of slices, and the number of slice errors; identifying a rebuilding queue based on the number of slice errors, wherein the rebuilding queue is associated with one of: the set of DS units or another set of DS units; and facilitating storing the queue entry in the identified rebuilding queue.
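The queue-entry flow above can be sketched as follows. The `QueueEntry` fields mirror the abstract; the queue names and the error-count threshold are assumptions for illustration, not the patent's terminology:

```python
# Illustrative sketch: build a queue entry for a slice in error, then pick a
# rebuilding queue based on how many slices in the set are in error.
from dataclasses import dataclass

@dataclass
class QueueEntry:
    slice_name: str    # name of the slice in error
    task: str          # rebuilding task indicator
    slice_set_id: str  # identity of the set of slices
    error_count: int   # number of slice errors in the set

def select_rebuilding_queue(queues: dict, error_count: int, threshold: int = 2) -> list:
    # More errors -> more urgent queue (e.g. one hosted on another set of DS units).
    return queues["urgent"] if error_count >= threshold else queues["normal"]

queues = {"normal": [], "urgent": []}
entry = QueueEntry("slice-17", "rebuild", "set-A", error_count=3)
select_rebuilding_queue(queues, entry.error_count).append(entry)
```

Routing by error count lets the rebuilder drain sets closest to data loss first, which is the rationale for keeping more than one rebuilding queue.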
Storing data objects in a storage network with multiple memory types
A processing system of a storage network operates by: selecting a queue memory type of a plurality of memory types to store a data object, based on a size parameter associated with the data object; storing the data object in a queue memory device having the queue memory type, when the queue memory type is selected; selecting a main memory type of the plurality of memory types to store the data object, when the queue memory type is not selected; and storing the data object in a main memory device having the main memory type, when the queue memory type is not selected; wherein the data object is dispersed error encoded and stored as a plurality of encoded data slices.
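The size-based selection can be sketched as below. The 4 KiB threshold and the list-backed "devices" are assumptions for illustration only; the abstract does not state a cutoff:

```python
# Minimal sketch: small objects go to queue memory, everything else to main
# memory. (In the described network the object would additionally be dispersed
# error encoded into slices before storage; that step is omitted here.)

QUEUE_SIZE_LIMIT = 4096  # bytes: assumed largest object eligible for queue memory

def store_data_object(data: bytes, queue_dev: list, main_dev: list) -> str:
    """Select a memory type by size and store the object there."""
    if len(data) <= QUEUE_SIZE_LIMIT:
        queue_dev.append(data)   # queue memory type selected
        return "queue"
    main_dev.append(data)        # queue memory type not selected
    return "main"
```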
NON-VOLATILE STORAGE DEVICE OFFLOADING IN A MULTI-DATA NODE ENVIRONMENT
Various examples, controllers, and methods are disclosed relating to parity checking. One controller of a first storage device can receive a plurality of data segments from a compute node via an interface. Further, the controller can determine at least one intermediate parity by performing at least one XOR operation on the plurality of data segments, the at least one intermediate parity being stored in at least one device buffer of the first storage device. Further, the controller can transmit the at least one intermediate parity of the at least one device buffer to at least one parity storage device, wherein the at least one intermediate parity corresponds to one of a plurality of intermediate parities used to determine at least one partial parity of a redundant array of independent disks (RAID) volume. Further, the controller can store the plurality of data segments in at least the first storage device and a second storage device.
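The XOR pipeline described above can be sketched as follows: equal-length data segments fold into an intermediate parity in the device buffer, and intermediate parities from several devices fold, by the same XOR, into a partial parity. The function names are illustrative, not the patent's terminology:

```python
# Sketch of intermediate- and partial-parity computation via XOR folding.
from functools import reduce

def xor_fold(blocks: list[bytes]) -> bytes:
    """XOR a list of equal-length byte strings together."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

def intermediate_parity(segments: list[bytes]) -> bytes:
    return xor_fold(segments)       # held in the storage device's buffer

def partial_parity(intermediates: list[bytes]) -> bytes:
    return xor_fold(intermediates)  # combined at the parity storage device
```

Because XOR is associative and commutative, folding per-device first and combining later gives the same result as one global XOR, which is what lets the compute node stay out of the parity calculation.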
Storing Data in Multiple Types of Storage Network Memory
A processing system of a storage network operates by: selecting a queue memory type of a plurality of memory types to store a data object, based on a size parameter associated with the data object; storing the data object in a queue memory device having the queue memory type, when the queue memory type is selected; selecting a main memory type of the plurality of memory types to store the data object, when the queue memory type is not selected; and storing the data object in a main memory device having the main memory type, when the queue memory type is not selected; wherein the data object is dispersed error encoded and stored as a plurality of encoded data slices.