Patent classifications
G06F11/2017
Front End Traffic Handling In Modular Switched Fabric Based Data Storage Systems
Systems, methods, apparatuses, and software for data storage systems are provided herein. In one example, a data storage system is provided that includes storage drives each comprising a PCIe interface, and configured to store data and retrieve the data stored on associated storage media responsive to data transactions received over a switched PCIe fabric. The data storage system includes processors configured to each manage only an associated subset of the storage drives over the switched PCIe fabric. A first processor is configured to identify first data packets received over a network interface associated with the first processor within a network buffer of the first processor as comprising a storage operation associated with at least one of the plurality of storage drives managed by a second processor, and responsively transfer the first data packets into a network buffer of the second processor.
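The forwarding behavior in this abstract can be sketched as a toy model: each processor owns a subset of drives, and a packet received for a peer-owned drive is transferred into that peer's network buffer. All class and field names here are illustrative assumptions, not the patent's actual implementation.

```python
# Hypothetical sketch of the front-end traffic handling described above:
# each processor manages only its own subset of storage drives; packets
# targeting a peer-owned drive are moved into that peer's network buffer.
from collections import deque

class ProcessorNode:
    def __init__(self, name, owned_drives):
        self.name = name
        self.owned_drives = set(owned_drives)   # drives this processor manages
        self.network_buffer = deque()           # inbound packet queue

    def receive(self, packet, peers):
        """Keep the packet if we own the target drive; otherwise
        transfer it into the buffer of the peer that manages it."""
        drive = packet["drive"]
        if drive in self.owned_drives:
            self.network_buffer.append(packet)
            return self
        for peer in peers:
            if drive in peer.owned_drives:
                peer.network_buffer.append(packet)  # transfer to peer's buffer
                return peer
        raise KeyError(f"no processor manages drive {drive}")

a = ProcessorNode("A", {0, 1})
b = ProcessorNode("B", {2, 3})
handler = a.receive({"drive": 2, "op": "write"}, peers=[b])
```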
Input-output path selection using switch topology information
Switch topology-aware path selection in an information processing system is provided. For example, an apparatus comprises a host device comprising a processor coupled to a memory. The host device is configured to communicate with a storage system over a network with a plurality of switches. The host device is further configured to obtain topology information associated with the plurality of switches in the network, and select a path from the host device to the storage system through one or more of the plurality of switches based at least in part on the obtained topology information.
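One simple reading of "select a path based on topology information" is a fewest-hops search over the switch graph. The sketch below uses breadth-first search over a hypothetical adjacency map; the node names and topology are assumptions for illustration only.

```python
# Illustrative topology-aware path selection: given switch adjacency
# obtained from the fabric, pick the host-to-storage path with the
# fewest switch hops via breadth-first search.
from collections import deque

def select_path(topology, host, storage):
    """Return the shortest host-to-storage path as a list of nodes,
    or None if the storage system is unreachable."""
    parents = {host: None}
    queue = deque([host])
    while queue:
        node = queue.popleft()
        if node == storage:
            path = []
            while node is not None:
                path.append(node)
                node = parents[node]
            return path[::-1]
        for nbr in topology.get(node, ()):
            if nbr not in parents:
                parents[nbr] = node
                queue.append(nbr)
    return None

topology = {
    "host": ["sw1", "sw2"],
    "sw1": ["sw3"],
    "sw2": ["storage"],
    "sw3": ["storage"],
}
path = select_path(topology, "host", "storage")
```

Here the two-hop route through `sw2` is preferred over the three-hop route through `sw1` and `sw3`.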
Memory system, memory controller, and method of operating memory system
Embodiments of the present disclosure relate to a memory system, a memory controller, and a method of operating the memory system. According to the embodiments, when a random data unit is derandomized based on a seed and the result of derandomizing the data in a flag area differs from reference data, an error in the seed can be detected during derandomization. Firmware malfunction is then prevented in advance by searching for a target seed and derandomizing the random data unit based on that target seed.
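The seed-error check above can be sketched with a toy XOR derandomizer: compare the derandomized flag area against known reference data, and on a mismatch search candidate seeds for one that reproduces the reference. The XOR scheme, field layout, and seed search space are assumptions for illustration, not the disclosed design.

```python
# Hedged sketch of seed-error detection and recovery: derandomize the
# flag area with the current seed, compare against reference data, and
# on mismatch search for a target seed that yields the reference.
def derandomize(data, seed):
    """Toy derandomizer: XOR every byte with the seed."""
    return bytes(b ^ seed for b in data)

def recover_unit(unit, flag_area, reference, seed, candidate_seeds):
    if derandomize(flag_area, seed) == reference:
        return derandomize(unit, seed)           # seed is valid
    for target in candidate_seeds:               # seed error detected:
        if derandomize(flag_area, target) == reference:
            return derandomize(unit, target)     # use the target seed instead
    raise ValueError("no target seed matches the reference data")

reference = b"\x00\x00"
good_seed = 0x5A
unit = derandomize(b"hi", good_seed)
flag = derandomize(reference, good_seed)
# The controller holds a corrupted seed (0x13) but recovers via search:
out = recover_unit(unit, flag, reference, seed=0x13, candidate_seeds=range(256))
```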
CONTROL DEVICE AND METHOD FOR REWRITING CONTROL PROGRAM
A control unit is configured to determine the success or failure of a rewrite process after the rewrite process ends, and to write success-or-failure display data into a success-or-failure determination result storage area accordingly. After startup of an electronic device, the control unit reads the display data present in that storage area and, when the display data indicate that the rewrite process succeeded, executes a control program in lieu of carrying out the rewrite process.
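The flag-based flow above reduces to a small state machine: record the rewrite result in a dedicated storage area, and on the next startup let that flag decide between re-running the rewrite and executing the control program directly. All names below are illustrative.

```python
# Minimal sketch of the success/failure display-data flow: the rewrite
# result is persisted in a determination-result storage area, and the
# startup path branches on it.
SUCCESS, FAILURE = b"\x01", b"\x00"

class ControlUnit:
    def __init__(self):
        # success-or-failure determination result storage area
        self.result_area = FAILURE

    def finish_rewrite(self, ok):
        """Record the outcome once the rewrite process ends."""
        self.result_area = SUCCESS if ok else FAILURE

    def startup(self):
        """Return the action taken after the device powers up."""
        if self.result_area == SUCCESS:
            return "execute control program"     # skip the rewrite
        return "retry rewrite process"

unit = ControlUnit()
unit.finish_rewrite(ok=True)
action = unit.startup()
```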
Active-active architecture for distributed ISCSI target in hyper-converged storage
A method is provided for a hyper-converged storage-compute system to implement an active-active failover architecture for providing Internet Small Computer System Interface (iSCSI) target service. The method intelligently selects multiple hosts to become storage nodes that process iSCSI input/output (I/O) for a target. The method further enables iSCSI persistent reservation (PR) to handle iSCSI I/Os from multiple initiators.
BIOS UPDATES
In example implementations, a computing device is provided. The computing device includes a processor, a multiplexer, a first memory, a second memory, and a controller. The processor is to execute an operating system (OS). The multiplexer is coupled to the processor. The first memory is coupled to the multiplexer to store current basic input/output system (BIOS) instructions. The second memory is coupled to the multiplexer. The controller is coupled to the multiplexer to control connections of the multiplexer to allow the processor to store updated BIOS instructions in the second memory in the background while the OS is executed by the processor.
FAULT TOLERANCE USING SHARED MEMORY ARCHITECTURE
Examples provide a fault tolerant virtual machine (VM) using pooled memory. When fault tolerance is enabled for a VM, a primary VM is created on a first host in a server cluster. A secondary VM is created on a second host in the server cluster. Memory for the VMs is maintained on a shared partition in pooled memory. The pooled memory is accessible to all hosts in the cluster. The primary VM has read and write access to the VM memory in the pooled memory. The secondary VM has read-only access to the VM memory. If the second host fails, a new secondary VM is created on another host in the cluster. If the first host fails, the secondary VM becomes the new primary VM and a new secondary VM is created on another host in the cluster.
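The failover roles in this abstract can be modeled as a toy cluster: the primary VM has read/write access to the shared memory partition, the secondary is read-only, and a host failure either promotes the secondary or re-creates it on another host. The cluster model below is an assumption for illustration, not the actual implementation.

```python
# Rough sketch of the pooled-memory fault-tolerance roles: primary (rw),
# secondary (ro), with promotion and re-creation on host failure.
class Cluster:
    def __init__(self, hosts):
        self.hosts = set(hosts)
        self.primary, self.secondary = None, None

    def enable_ft(self):
        """Place the primary and secondary VMs on two distinct hosts."""
        free = sorted(self.hosts)
        self.primary, self.secondary = free[0], free[1]

    def host_failed(self, host):
        self.hosts.discard(host)
        if host == self.primary:
            self.primary = self.secondary      # secondary becomes new primary
            self.secondary = self._spare()     # new secondary elsewhere
        elif host == self.secondary:
            self.secondary = self._spare()     # re-create secondary elsewhere

    def _spare(self):
        spares = self.hosts - {self.primary}
        return sorted(spares)[0] if spares else None

cluster = Cluster(["h1", "h2", "h3"])
cluster.enable_ft()           # primary=h1, secondary=h2
cluster.host_failed("h1")     # h2 is promoted; new secondary lands on h3
```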
High availability for container based control execution
In an industrial automation system, a control device adapted to a container-based architecture is developed. The control device may comprise one or more containers instantiated with a control execution application, a communication application, and/or a redundancy management application.
SEQUENTIAL RESETS OF REDUNDANT SUBSYSTEMS WITH RANDOM DELAYS
Example implementations relate to sequential resets of redundant subsystems. For example, in an implementation, a controller may receive a maintenance activity instruction and may perform the maintenance activity on the redundant subsystems. After performing the maintenance activity on the redundant subsystems, the controller may sequentially reset each of the redundant subsystems. The controller may wait a random delay between sequential resets of the redundant subsystems.
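The reset sequence above can be sketched directly: reset each redundant subsystem in turn, sleeping a random interval between resets so the units never restart simultaneously. The subsystem representation and delay bound are assumptions for illustration.

```python
# Sketch of sequentially resetting redundant subsystems with a random
# delay between resets, so redundancy is never lost all at once.
import random
import time

def sequential_reset(subsystems, max_delay_s=2.0, sleep=time.sleep):
    """Reset each subsystem in order, waiting a random delay between
    consecutive resets. Returns the reset order."""
    order = []
    for i, sub in enumerate(subsystems):
        sub["up"] = False                 # subsystem goes down for reset
        sub["up"] = True                  # reset completes
        order.append(sub["name"])
        if i < len(subsystems) - 1:
            sleep(random.uniform(0, max_delay_s))   # random stagger
    return order

subs = [{"name": "psu-A", "up": True}, {"name": "psu-B", "up": True}]
order = sequential_reset(subs, sleep=lambda s: None)  # skip real waits in the demo
```

Injecting `sleep` keeps the demo instantaneous while the default still performs a real random wait.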
System and Method for Coordinating Use of Multiple Coprocessors
An interface software layer is interposed between at least one application and a plurality of coprocessors. A data and command stream issued by the application(s) to an API of an intended one of the coprocessors is intercepted by the layer, which also acquires and stores the execution state information for the intended coprocessor at a coprocessor synchronization boundary. At least a portion of the intercepted data and command stream is stored in a replay log associated with the intended coprocessor. The replay log associated with the intended coprocessor is then read out, along with the stored execution state information, and is submitted to and serviced by at least one different one of the coprocessors other than the intended coprocessor.
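The intercept-and-replay idea can be sketched with stand-in coprocessor objects: the interface layer records commands in a replay log, snapshots execution state at a synchronization boundary, and later replays both onto a different coprocessor. Everything below is a toy assumption, not the patented layer.

```python
# Toy sketch of interception and replay: commands and a state snapshot
# recorded for the intended coprocessor are later serviced by another.
class Coprocessor:
    def __init__(self):
        self.log = []                     # commands executed so far
    def execute(self, cmd):
        self.log.append(cmd)
    def state(self):
        return list(self.log)             # snapshot of execution state
    def restore(self, state):
        self.log = list(state)

class InterfaceLayer:
    def __init__(self):
        self.replay_log = []              # intercepted command stream
        self.saved_state = None

    def submit(self, coprocessor, command):
        self.replay_log.append(command)   # intercept and record
        coprocessor.execute(command)

    def checkpoint(self, coprocessor):
        # state capture at a coprocessor synchronization boundary
        self.saved_state = coprocessor.state()

    def replay_on(self, other):
        other.restore(self.saved_state)
        for command in self.replay_log:
            other.execute(command)        # service the log elsewhere

layer = InterfaceLayer()
gpu0, gpu1 = Coprocessor(), Coprocessor()
layer.checkpoint(gpu0)                    # empty state at the boundary
layer.submit(gpu0, "kernel_launch")
layer.submit(gpu0, "memcpy")
layer.replay_on(gpu1)                     # gpu1 now mirrors gpu0
```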
An interface software layer is interposed between at least one application and a plurality of coprocessors. A data and command stream issued by the application(s) to an API of an intended one of the coprocessors is intercepted by the layer, which also acquires and stores the execution state information for the intended coprocessor at a coprocessor synchronization boundary. At least a portion of the intercepted data and command stream data is stored in a replay log associated with the intended coprocessor. The replay log associated with the intended coprocessor is then read out, along with the stored execution state information, and is submitted to and serviced by at least one different one of the coprocessors other than the intended coprocessor.