Patent classifications
H04L69/32
Technologies for managing errors in a remotely accessible memory pool
Technologies for managing errors in a remotely accessible memory pool include a memory sled. The memory sled includes a memory pool having one or more byte-addressable memory devices and a memory pool controller coupled to the memory pool. The memory sled is to write test data to a byte-addressable memory region in the memory pool. The memory region is to be accessed by a remote compute sled. The memory sled is also to read data from the memory region to which the test data was written, compare the read data to the test data to determine whether a threshold number of errors are present in the read data, and send, in response to a determination that the threshold number of errors are present in the read data, a notification to the remote compute sled that the memory region is faulty.
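The write-then-verify flow described in this abstract can be sketched in a few lines. This is a minimal illustration, not the patented implementation; the function and threshold names (`check_memory_region`, `ERROR_THRESHOLD`) and the simulated stuck-byte region are assumptions for demonstration.

```python
# Illustrative sketch of the error-threshold check: write test data to a
# memory region, read it back, count mismatches, and flag the region as
# faulty when the error count reaches a threshold.

ERROR_THRESHOLD = 4  # assumed tolerance before the region is reported faulty

def check_memory_region(write_fn, read_fn, test_data, threshold=ERROR_THRESHOLD):
    """Write test_data, read it back, and return True if the region is faulty."""
    write_fn(test_data)
    read_back = read_fn(len(test_data))
    errors = sum(1 for a, b in zip(test_data, read_back) if a != b)
    return errors >= threshold  # True -> notify the remote compute sled

# Simulated byte-addressable region with two stuck bytes (offsets 3 and 7)
region = bytearray(16)
def write_fn(data):
    region[:len(data)] = data
    region[3] ^= 0xFF  # simulate a faulty cell
    region[7] ^= 0xFF  # simulate a second faulty cell
def read_fn(n):
    return bytes(region[:n])

faulty = check_memory_region(write_fn, read_fn, bytes(range(16)), threshold=2)
```

With two corrupted bytes and a threshold of 2, the check reports the region faulty; raising the threshold to 3 would pass it.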
Communication node with digital plane interface
A solution is provided that can be used in a communication node. A case is provided that supports a first circuit board that is connected to a second circuit board via a connection. The first circuit board supports an integrated circuit module and the second circuit board supports a PHY module. A transformer box is mounted on the first circuit board and supports terminals for engagement with a mating connector. A third circuit board can be provided that is parallel to the second circuit board and is mounted on the transformer box so that termination of signals provided to the terminals can take place on a different circuit board than the second circuit board. The third circuit board can also provide power over Ethernet (POE) circuitry.
Virtual dispersive networking systems and methods
A method for network communications from a first device to a second device includes communicating data from the first device to the second device by spawning a first virtual machine for a first network connection that virtualizes network capabilities of the first device, and using the virtualized network capabilities of the first virtual machine, transmitting a plurality of packets for communication to a first network address and port combination associated with the second device. The method further includes repeatedly changing to a respective another network address and port combination by repeatedly spawning a respective another virtual machine for a respective another network connection that virtualizes network capabilities of the first device, and using the virtualized network capabilities of the spawned respective another virtual machine, transmitting a plurality of packets for communication to the respective another network address and port combination associated with the second device.
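The rotating address-and-port behavior can be modeled as a toy dispatcher: each batch of packets goes out through a fresh endpoint, as if a new virtual network stack were spawned per connection. All names here (`dispersive_send`, the endpoints, the batch size) are illustrative assumptions, and actual VM spawning is elided.

```python
import itertools

def dispersive_send(packets, endpoints, batch_size=2):
    """Toy model: rotate to a new (addr, port) endpoint for each batch of
    packets, standing in for spawning a fresh virtual machine per connection."""
    endpoint_cycle = itertools.cycle(endpoints)
    log = []
    for i in range(0, len(packets), batch_size):
        addr, port = next(endpoint_cycle)  # "spawn" a new virtual connection
        for pkt in packets[i:i + batch_size]:
            log.append((addr, port, pkt))   # record which endpoint carried it
    return log

log = dispersive_send(["p1", "p2", "p3", "p4", "p5"],
                      [("10.0.0.2", 4000), ("10.0.0.2", 4001)])
```

The log shows traffic alternating across the two endpoints, which is the dispersion property the abstract describes.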
Technologies for dividing memory across socket partitions
Technologies for dividing resources across partitions include a compute sled. The compute sled is to determine partitions among sockets of the compute sled. Each socket is associated with a corresponding processor. The compute sled is also to establish a separate memory space for each determined partition, obtain, from an application executed in one of the sockets, a request to access a logical memory address, identify the partition associated with the memory access request, determine a corresponding physical memory address as a function of the identified partition and the logical memory address, and access a memory of the compute sled at the determined physical memory address. Other embodiments are also described and claimed.
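The address translation step of this abstract reduces to computing a physical address as a function of the partition and the logical address. A minimal sketch, assuming a per-partition base-offset scheme (the base addresses below are invented for illustration):

```python
# Assumed per-socket partition memory spaces; each partition owns a
# disjoint physical range starting at its base address.
PARTITION_BASE = {0: 0x0000_0000, 1: 0x4000_0000}

def to_physical(partition, logical_addr):
    """Physical address as a function of the identified partition
    and the logical memory address."""
    return PARTITION_BASE[partition] + logical_addr
```

A memory access from an application in socket 1 at logical address 0x10 would resolve to 0x4000_0010 under this scheme.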
In-situ data verification for the cloud
Example methods and apparatus asynchronously verify data stored in a cloud data storage system. One embodiment comprises a monitoring circuit that determines if a data auditing condition associated with a cloud storage system or archived data stored in the cloud storage system has been met, a metadata mirror circuit that controls a metadata mirror to provide metadata, including a first checksum, associated with the archived data to the apparatus, a checksum circuit that computes a second checksum based on the archived data, a verification circuit that generates an audit of the first checksum and the second checksum by comparing the second checksum with the first checksum, and a reporting circuit that generates a log of the audit, that provides the log to the data storage system, and that provides a notification of a data integrity failure to a user associated with the archived data.
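The checksum-comparison core of this audit is straightforward to sketch. This example assumes SHA-256 as the checksum (the abstract does not name one) and uses invented names (`audit`) for illustration.

```python
import hashlib

def audit(archived_data, metadata_checksum):
    """Compute a second checksum over the archived data and compare it
    against the first checksum retrieved from the metadata mirror."""
    second = hashlib.sha256(archived_data).hexdigest()
    ok = (second == metadata_checksum)
    # The log would be provided to the data storage system; a failure
    # would additionally trigger a user notification.
    return {"first": metadata_checksum, "second": second, "ok": ok}

data = b"archived object"
good = hashlib.sha256(data).hexdigest()  # checksum stored in the metadata mirror
result = audit(data, good)
```

A mismatch between the two checksums marks the audit as failed, which corresponds to the data-integrity-failure notification in the abstract.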
Technologies for dynamically allocating data storage capacity for different data storage types
Technologies for allocating data storage capacity on a data storage sled include a plurality of data storage devices communicatively coupled to a plurality of network switches through a plurality of physical network connections and a data storage controller connected to the plurality of data storage devices. The data storage controller is to determine a target storage resource allocation to be used by one or more applications to be executed by one or more sleds in a data center, determine data storage capacity available for each of a plurality of different data storage types on the data storage sled, wherein each data storage type is associated with a different level of data redundancy, determine an amount of data storage capacity for each data storage type to be allocated to satisfy the target storage resource allocation, and adjust the amount of data storage capacity allocated to each data storage type.
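The allocation decision can be illustrated with a small planner that converts a usable-capacity target per storage type into raw capacity, scaled by that type's redundancy overhead. The type names and redundancy factors below are assumptions, not values from the patent.

```python
# Assumed redundancy overheads: raw capacity = usable target * factor.
REDUNDANCY_FACTOR = {"none": 1.0, "parity": 1.25, "mirror": 2.0}

def allocate(targets, available):
    """Return raw capacity (GB) to allocate per data storage type to
    satisfy the target, or None if any type lacks sufficient capacity."""
    plan = {}
    for dtype, usable_gb in targets.items():
        raw = usable_gb * REDUNDANCY_FACTOR[dtype]
        if raw > available.get(dtype, 0):
            return None  # target storage resource allocation cannot be met
        plan[dtype] = raw
    return plan
```

For example, 100 GB of mirrored storage needs 200 GB raw, so it fits in a 250 GB pool but not in a 150 GB one.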
Link management method, device and system in virtual machine environment
At least some embodiments of the invention provide a method, device and system for link management in a Virtual Machine (VM) environment. The method includes: a heartbeat handshake link is established with a VM. After the heartbeat handshake link is successfully established, Link Aggregation Control Protocol (LACP) state information of a Physical Function (PF) of each of a plurality of Network Interface Cards (NICs) is acquired. The LACP state information is sent to the VM through the heartbeat handshake link.
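The handshake-then-sync sequence can be outlined with stub objects. Everything here (the `Link` and `PF` classes, the `sync_lacp_state` name, the state strings) is an illustrative assumption; real LACP state would come from the NIC driver.

```python
class PF:
    """Stub Physical Function of a NIC carrying its LACP state."""
    def __init__(self, name, lacp_state):
        self.name, self.lacp_state = name, lacp_state

class Link:
    """Stub heartbeat handshake link to the VM."""
    def __init__(self):
        self.sent = None
    def handshake(self):
        return True           # assume the handshake succeeds
    def send(self, msg):
        self.sent = msg       # deliver the message over the link

def sync_lacp_state(vm_link, nic_pfs):
    """After the heartbeat handshake succeeds, push each PF's LACP state
    to the VM over the heartbeat link; returns the number of PFs synced."""
    if not vm_link.handshake():
        return 0
    states = {pf.name: pf.lacp_state for pf in nic_pfs}
    vm_link.send(states)
    return len(states)

link = Link()
count = sync_lacp_state(link, [PF("eth0", "active"), PF("eth1", "passive")])
```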
Electronic apparatus and external apparatus controlling method thereof
An electronic apparatus is disclosed, the electronic apparatus including a communicator comprising communication circuitry configured to communicate with an external apparatus, a memory configured to store knowledge information including information regarding the external apparatus and a plurality of action templates that define an operation of the external apparatus and a controller configured to receive identification information of the external apparatus through the communication circuitry of the communicator, to acquire knowledge information and an action template corresponding to the external apparatus based on the identification information from the memory, to generate a command to operate the external apparatus based on the knowledge information and the action template and to transmit the command to the external apparatus through the communication circuitry of the communicator.
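The command-generation step, combining stored knowledge information with an action template, can be sketched as a template fill constrained by device knowledge. The knowledge store, template, and device identifiers below are invented for illustration.

```python
# Assumed knowledge information and action templates keyed by device type.
KNOWLEDGE = {"tv-01": {"type": "TV", "max_volume": 50}}
TEMPLATES = {"TV": "SET_VOLUME {level}"}

def build_command(device_id, level):
    """Generate a command for the external apparatus from its knowledge
    information and the matching action template."""
    info = KNOWLEDGE[device_id]                 # looked up via identification info
    level = min(level, info["max_volume"])      # clamp using device knowledge
    return TEMPLATES[info["type"]].format(level=level)
```

A request for volume 80 on a device whose knowledge entry caps volume at 50 yields `SET_VOLUME 50`, showing how the knowledge information constrains the templated command.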
Technologies for dynamic accelerator selection
Technologies for dynamic accelerator selection include a compute sled. The compute sled includes a network interface controller to communicate with a remote accelerator of an accelerator sled over a network, where the network interface controller includes a local accelerator and a compute engine. The compute engine is to obtain network telemetry data indicative of a level of bandwidth saturation of the network. The compute engine is also to determine whether to accelerate a function managed by the compute sled. The compute engine is further to determine, in response to a determination to accelerate the function, whether to offload the function to the remote accelerator of the accelerator sled based on the telemetry data. The compute engine is also to assign, in response to a determination not to offload the function to the remote accelerator, the function to the local accelerator of the network interface controller.
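The selection logic reduces to a two-step decision: accelerate or not, then remote versus local based on bandwidth saturation. A minimal sketch, where the saturation cutoff and function names are assumptions for illustration:

```python
SATURATION_LIMIT = 0.8  # assumed bandwidth-saturation cutoff

def place_function(needs_accel, saturation, limit=SATURATION_LIMIT):
    """Decide where a function runs: on the CPU if it needs no acceleration,
    on the remote accelerator if the network has headroom, otherwise on the
    local accelerator in the network interface controller."""
    if not needs_accel:
        return "cpu"
    return "remote" if saturation < limit else "local"
```

Under this policy, a saturated network keeps accelerated work on the NIC's local accelerator rather than pushing it across the fabric.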