Patent classifications
G06F3/0664
Using a smart network interface controller with storage systems
A backup data storage system includes non-volatile memory units, a disk interface coupled to at least some of the non-volatile memory units, a connection component that facilitates exchanging data with the backup data storage system, and a smart network interface controller, coupled to the disk interface and the connection component to provide tape emulation to a host coupled to the backup data storage system. The disk interface, the connection component, and the smart network interface controller may be coupled using a PCIe bus. Tape data written to the backup storage device may be stored on the non-volatile memory units. A processor coupled to the smart network interface controller and the disk interface may receive the data from the smart network interface controller and may provide the data to the disk interface to store the data on the non-volatile memory units. The connection component may be a FICON connection component.
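As a rough illustration of the write path this abstract describes, here is a minimal Python sketch (all class and method names are hypothetical, and the PCIe coupling is reduced to plain method calls): the smart NIC presents a tape target to the host and forwards writes, via the processor and the disk interface, onto the non-volatile memory units.

```python
class NonVolatileMemoryUnit:
    """Simulated non-volatile memory unit, addressed by block id."""
    def __init__(self):
        self.blocks = {}

class DiskInterface:
    """Stripes incoming tape blocks across the attached NVM units."""
    def __init__(self, nvm_units):
        self.nvm_units = nvm_units

    def store(self, block_id, data):
        unit = self.nvm_units[block_id % len(self.nvm_units)]
        unit.blocks[block_id] = data

class Processor:
    """Receives tape data from the smart NIC and hands it to the disk interface."""
    def __init__(self, disk_interface):
        self.disk_interface = disk_interface

    def on_tape_write(self, block_id, data):
        self.disk_interface.store(block_id, data)

class SmartNIC:
    """Presents an emulated tape device to the host; forwards writes onward."""
    def __init__(self, processor):
        self.processor = processor

    def host_tape_write(self, block_id, data):
        # Tape emulation: the host believes it is writing to a tape drive,
        # but the data lands on the non-volatile memory units.
        self.processor.on_tape_write(block_id, data)
```

In the patented system the three components would sit on a shared PCIe bus and the host-facing side would speak a FICON channel protocol; the sketch only models the direction of data flow.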
Systems and methods for intercycle gap refresh and backpressure management
A system may include a synchronization device and an emulation chip including a processor and a memory. The processor may evaluate, during a first cycle, at least one of a set of one or more execution instructions in the memory or evaluation primitives configured to emulate a circuit, and evaluate, during a second cycle, at least one of the set of one or more execution instructions or a set of configured logic primitives. The synchronization device may interpose a gap period between the first cycle and the second cycle such that, during the gap period, the processor does not evaluate one or more instructions from the set of one or more execution instructions or re-evaluate primitives. The synchronization device may cause, during the gap period, the emulation chip to perform refreshes on the memory of the emulation chip.
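The cycle/gap interleaving can be sketched as a simple loop (a hypothetical stand-in for the synchronization device; `evaluate` and `refresh` are caller-supplied callbacks, not part of the patent):

```python
def run_emulation(cycles, evaluate, refresh):
    """Run emulation cycles, interposing a gap period between consecutive
    cycles.  During a gap the processor evaluates nothing; instead the
    emulation chip refreshes its memory."""
    log = []
    for cycle in range(cycles):
        log.append(("eval", cycle))
        evaluate(cycle)          # evaluate instructions / primitives
        if cycle < cycles - 1:
            # Gap period: no instruction evaluation, memory refresh only.
            log.append(("refresh", cycle))
            refresh(cycle)
    return log
```

The returned log makes the alternation visible: every pair of evaluation cycles is separated by exactly one refresh gap.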
Storage assisted virtual machine backups using Storage VMotion and XCOPY
Embodiments for transferring data directly from primary storage to secondary storage in a virtualized network including virtual machine (VM) based storage, by exposing a source volume in the primary storage to a hypervisor host of the virtualized network, preparing a destination volume of the secondary storage as an empty volume and exporting it to the hypervisor host so that the host can access the destination volume along with the source volume, and moving, in the hypervisor host, data from the exposed source volume to the exported empty destination volume using a combination of Storage Direct, Storage VMotion, and XCOPY or enhanced XCOPY technologies, wherein the XCOPY technology provides a direct transfer of data from the primary storage to the secondary storage.
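The key idea of an XCOPY-style offload is that the host sends only extent descriptors; the block data moves array-to-array without being buffered on the host. A toy Python sketch (volumes modeled as plain lists; the function name is hypothetical):

```python
def xcopy(source_volume, dest_volume, extents):
    """Offloaded copy: the 'host' supplies only (src_lba, dst_lba, length)
    descriptors; the data path is entirely between the two volumes."""
    for src_lba, dst_lba, length in extents:
        dest_volume[dst_lba:dst_lba + length] = \
            source_volume[src_lba:src_lba + length]
```

In the real SCSI EXTENDED COPY command the descriptors are binary structures and the copy is executed by the storage arrays themselves; the sketch only shows the descriptor-driven shape of the transfer.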
Dynamic fail-safe redundancy in aggregated and virtualized solid state drives
A solid state drive having a drive aggregator and a plurality of component solid state drives, including a first component solid state drive and a second component solid state drive. The drive aggregator has at least one host interface, and a plurality of drive interfaces connected to the plurality of component solid state drives. The drive aggregator is configured to generate, in the second component solid state drive, a copy of a dataset that is stored in the first component solid state drive. In response to a failure of the first component solid state drive, the drive aggregator is configured to substitute a function of the first component solid state drive with respect to the dataset with a corresponding function of the second component solid state drive, based on the copy of the dataset generated in the second component solid state drive.
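The mirror-and-failover behavior can be sketched in a few lines of Python (component drives modeled as dicts keyed by LBA; names are hypothetical):

```python
class DriveAggregator:
    """Mirrors writes into a second component drive and substitutes it
    for the first on failure."""
    def __init__(self, drives):
        self.drives = drives          # component SSDs as {lba: data} dicts
        self.primary, self.mirror = 0, 1

    def write(self, lba, data):
        # Generate a copy of the dataset in the second component drive.
        self.drives[self.primary][lba] = data
        self.drives[self.mirror][lba] = data

    def read(self, lba):
        return self.drives[self.primary].get(lba)

    def fail(self, index):
        # Substitute the failed drive's function with the mirror's copy.
        if index == self.primary:
            self.primary = self.mirror
```

After `fail(0)` reads are served from the copy in the second component drive, which is the substitution the abstract describes.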
Distributed write buffer for storage systems
A computer-based system and method for providing a distributed write buffer in a storage system, including: obtaining a write request at a primary storage server to store data associated with the write request in a non-volatile storage of the primary storage server; and storing the data associated with the write request in a persistent memory of the primary storage server or in a persistent memory of an auxiliary storage server based on presence of persistent memory space in the primary storage server. The write request may be acknowledged by the primary storage server after storing the data associated with the write request in the persistent memory of the primary storage server or in the persistent memory of the auxiliary storage server.
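The spill decision the abstract describes is simple: persist into the primary server's persistent memory when space remains, otherwise into the auxiliary server's, and acknowledge only after the data is persisted. A minimal sketch (persistent memories modeled as lists; the function name and capacity parameter are hypothetical):

```python
def handle_write(data, primary_pmem, aux_pmem, primary_capacity):
    """Store the write in the primary server's persistent memory if it
    has free space, otherwise spill to the auxiliary server's persistent
    memory; acknowledge only after the data has been stored."""
    if len(primary_pmem) < primary_capacity:
        primary_pmem.append(data)
        placed = "primary"
    else:
        aux_pmem.append(data)
        placed = "auxiliary"
    return ("ACK", placed)   # ack follows persistence in either location
```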
Managing lifecycle of virtualization software running in a standalone host
Virtualization software installed in a standalone host is remediated according to a desired state model, using a desired image of the virtualization software that is used to remediate virtualization software running in hosts which are logically grouped as a cluster of hosts not including the standalone host. The method of remediating the virtualization software installed in the standalone host includes the steps of generating a desired image of the virtualization software of the standalone host from a desired image of the virtualization software of the hosts in the cluster, and upon detecting a difference between an image of the virtualization software currently running in the standalone host and the desired image of the virtualization software of the standalone host, instructing the standalone host to remediate the image of the virtualization software currently running therein to match the desired image of the virtualization software of the standalone host.
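The desired-state flow above has two steps: derive the standalone host's desired image from the cluster's, then compute the difference against what is currently running. A Python sketch (images modeled as component-to-version dicts; all names are hypothetical):

```python
def derive_standalone_image(cluster_image, overrides):
    """Generate the standalone host's desired image from the cluster's
    desired image; overrides carry host-specific components."""
    image = dict(cluster_image)
    image.update(overrides)
    return image

def remediation_delta(current_image, desired_image):
    """Return the components that must change so the running image
    matches the desired image (empty dict means already compliant)."""
    return {k: v for k, v in desired_image.items()
            if current_image.get(k) != v}
```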
Technologies for switching network traffic in a data center
Technologies for switching network traffic include a network switch. The network switch includes one or more processors and communication circuitry coupled to the one or more processors. The communication circuitry is capable of switching network traffic of multiple link layer protocols. Additionally, the network switch includes one or more memory devices storing instructions that, when executed, cause the network switch to receive, with the communication circuitry through an optical connection, network traffic to be forwarded, and determine a link layer protocol of the received network traffic. The instructions additionally cause the network switch to forward the network traffic as a function of the determined link layer protocol. Other embodiments are also described and claimed.
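Forwarding "as a function of the determined link layer protocol" amounts to classify-then-dispatch. A toy Python sketch (a real switch would inspect the physical framing; here the first byte of the frame stands in for the protocol, and all names are hypothetical):

```python
def classify(frame):
    """Toy link-layer classification keyed on the first frame byte."""
    return "ethernet" if frame[0] == 0x01 else "infiniband"

def forward(frame, ports):
    """Forward the frame to the port queue for its link layer protocol."""
    proto = classify(frame)
    ports[proto].append(frame)   # forward as a function of the protocol
    return proto
```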
System and method for secure access to a distributed virtual firmware network drive
An information handling system includes a virtual network access module configured to access a virtual network drive that has a first partition in a local storage resource and a second partition in a remote storage resource. In response to detection of an exception, a processor may trigger an exception handler that directs a service processor to initialize a network stack. The processor initializes a mailbox to transmit a mailbox request to retrieve network configuration settings to be used in the initialization of the network stack. The service processor transmits a request to the processor to initialize the mailbox, and initializes the network stack based on the network configuration settings. Subsequent to the initialization of the network stack, a universal network device interface request may be sent to mount and secure communication with the virtual network drive.
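The mailbox exchange between the host processor and the service processor can be sketched as follows (the request string, class names, and the returned stack dict are all hypothetical; a real implementation would run over a hardware mailbox register interface):

```python
class Mailbox:
    """Shared mailbox between the host processor and the service processor."""
    def __init__(self):
        self.request = None
        self.response = None

class ServiceProcessor:
    def __init__(self, network_config):
        self.network_config = network_config

    def poll(self, mailbox):
        # Answer a pending request for network configuration settings.
        if mailbox.request == "GET_NETWORK_CONFIG":
            mailbox.response = self.network_config

def handle_exception(mailbox, service_processor):
    """Exception handler: fetch network settings via the mailbox, then
    bring up a (simulated) network stack from them."""
    mailbox.request = "GET_NETWORK_CONFIG"
    service_processor.poll(mailbox)
    return {"stack_initialized": True, **mailbox.response}
```

Once the stack is up, the system would issue the universal network device interface request to mount the virtual network drive; that step is omitted here.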
Virtualized append-only storage device
An interface receives storage requests for storing data in a software-defined storage network using an append-only storage scheme. The requests include an identifier of a data object to be stored. The requests are agnostic of hardware-specific details of the storage devices. A virtualization layer accesses space allocation data for the storage devices and policies for prioritizing performance. Based on the data and policies, a physical storage location at the plurality of storage devices is selected for storing the data object. Metadata is generated for the data object indicating that the data object is an append-only object and mapping the physical storage location of the data object to the identifier. The request is translated to instructions for storing the data object at the physical storage location using the append-only storage scheme. The data object is stored at the physical storage location using the append-only storage scheme.
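A compact Python sketch of the virtualization layer (devices modeled as append-only lists; picking the shortest log stands in for the combination of space-allocation data and performance policy, and all names are hypothetical):

```python
class AppendOnlyVirtualizer:
    """Maps hardware-agnostic object ids to physical append-only locations."""
    def __init__(self, devices):
        self.devices = devices      # name -> list acting as an append log
        self.metadata = {}          # object id -> placement metadata

    def store(self, object_id, data):
        # Placement policy stand-in: choose the device with the shortest log.
        name = min(self.devices, key=lambda n: len(self.devices[n]))
        log = self.devices[name]
        offset = len(log)
        log.append(data)            # append-only: never overwrite in place
        # Metadata marks the object append-only and maps id -> location.
        self.metadata[object_id] = {"append_only": True,
                                    "location": (name, offset)}
        return name, offset
```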
Host computing systems determination to deploy virtual machines based on disk specifications
Techniques for determining host computing systems to deploy virtual machines based on disk specifications are disclosed. In one example, a blueprint to deploy a virtual machine in a cloud computing environment may be received. Further, disk specifications required to deploy the virtual machine may be retrieved from the blueprint. Furthermore, candidate storage entities that support the retrieved disk specifications may be determined. A host computing system that has connectivity to the candidate storage entities may be determined. The determined host computing system may be recommended to deploy the virtual machine.
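The selection pipeline reduces to two set filters: find storage entities supporting the blueprint's disk specifications, then pick a host that can see all of them. A Python sketch (specs and connectivity modeled as sets; all names are hypothetical):

```python
def recommend_host(blueprint, storage_entities, hosts):
    """Recommend a host that has connectivity to every candidate storage
    entity supporting the blueprint's disk specifications.

    blueprint        -- {"disk_specs": set of required capabilities}
    storage_entities -- {entity name: set of supported capabilities}
    hosts            -- {host name: set of visible entity names}
    """
    specs = blueprint["disk_specs"]
    # Candidate entities: those supporting every required disk spec.
    candidates = {name for name, supported in storage_entities.items()
                  if specs <= supported}
    # First host with connectivity to all candidate entities.
    for host, visible in hosts.items():
        if candidates and candidates <= visible:
            return host
    return None
```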