G06F15/161

Technologies for providing shared memory for accelerator sleds

Technologies for providing shared memory for accelerator sleds include an accelerator sled to receive, with a memory controller, a memory access request from an accelerator device to access a region of memory. The request is to identify the region of memory with a logical address. Additionally, the accelerator sled is to determine, from a map of logical addresses and associated physical addresses, the physical address associated with the region of memory. In addition, the accelerator sled is to route the memory access request to a memory device associated with the determined physical address.
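
The translation-and-routing step this abstract describes can be sketched as a simple lookup table. This is an illustrative sketch only, not the patent's implementation; the `MemoryController` class and its `route` method are hypothetical names.

```python
class MemoryController:
    """Maps logical addresses to (memory device, physical address) pairs."""

    def __init__(self, address_map):
        # address_map: {logical_address: (device_id, physical_address)}
        self.address_map = address_map

    def route(self, logical_address):
        """Resolve a logical address from a memory access request and
        return the target memory device and physical address."""
        if logical_address not in self.address_map:
            raise KeyError(f"unmapped logical address {logical_address:#x}")
        return self.address_map[logical_address]


controller = MemoryController({0x1000: ("mem-dev-0", 0x8000_0000)})
device, phys = controller.route(0x1000)  # request is routed to mem-dev-0
```

A real controller would maintain this map per sled and forward the access over the sled's fabric rather than returning a tuple.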

Connectors for a networking device with orthogonal switch bars

Connectors for a networking device may be provided. A networking device may comprise a first plurality of switch bars each comprising a first switch type arranged parallel to one another, a second plurality of switch bars each comprising a second switch type arranged parallel to one another, and a third plurality of switch bars each comprising a third switch type arranged parallel to one another. The first plurality of switch bars, the second plurality of switch bars, and the third plurality of switch bars may be arranged orthogonally. A first one of the first plurality of switch bars may be connected to a first one of the second plurality of switch bars via a retractable mechanical connector mechanism.

Technologies for assigning workloads to balance multiple resource allocation objectives

Technologies for allocating resources of managed nodes to workloads to balance multiple resource allocation objectives include an orchestrator server to receive resource allocation objective data indicative of multiple resource allocation objectives to be satisfied. The orchestrator server is additionally to determine an initial assignment of a set of workloads among the managed nodes and receive telemetry data from the managed nodes. The orchestrator server is further to determine, as a function of the telemetry data and the resource allocation objective data, an adjustment to the assignment of the workloads to increase an achievement of at least one of the resource allocation objectives without decreasing an achievement of another of the resource allocation objectives, and apply the adjustments to the assignments of the workloads among the managed nodes as the workloads are performed. Other embodiments are also described and claimed.
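
The adjustment rule described above (improve at least one objective without decreasing another) is a Pareto-improvement check. The following is a minimal sketch under that reading; the function name and the objective dictionaries are illustrative, not from the patent.

```python
def is_pareto_improvement(current_scores, candidate_scores):
    """Accept a candidate workload reassignment only if it raises at
    least one objective's achievement without lowering any other.

    Both arguments map objective name -> achievement value (higher is
    better), e.g. derived from telemetry data."""
    improved = any(candidate_scores[k] > current_scores[k] for k in current_scores)
    regressed = any(candidate_scores[k] < current_scores[k] for k in current_scores)
    return improved and not regressed


current = {"throughput": 0.70, "power_efficiency": 0.60}
candidate = {"throughput": 0.75, "power_efficiency": 0.60}
accepted = is_pareto_improvement(current, candidate)  # True: throughput up, power unchanged
```

An orchestrator could apply this test to candidate reassignments as telemetry arrives, keeping only adjustments that pass.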

Computer Implemented Method And Distributed Computing Infrastructure For Automated Plug And Play Configuration
20220398106 · 2022-12-15

Various embodiments of the teachings herein include a computer-implemented method for automated configuration of a joining computing device into a computing system. The method may include: using a device management service to listen for messages from joining devices; connecting via secure shell (SSH) and factory default credentials to a discovered device; configuring the joining device based on device descriptions, including: downloading a description from the joining device; creating new security certificates which enable secure communication; closing the default SSH service and triggering a reboot; reading the description from the joining device; using the description to identify the set of connectors required for the container runtime environment to be deployed; and receiving and executing containerized software in a deployed container runtime environment on the joining computing device.
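
The onboarding sequence above can be sketched as an ordered series of steps. This is a hypothetical illustration: the `Device` class is an in-memory stand-in that records each step, and real SSH, certificate, and container-runtime handling is elided.

```python
class Device:
    """Stand-in for a joining device; records onboarding steps in order."""

    def __init__(self, description):
        self.description = description  # device description, e.g. parsed JSON
        self.steps = []

    def connect_ssh(self, user, password):
        self.steps.append("connect")

    def download_description(self):
        self.steps.append("describe")
        return self.description

    def install_certificates(self, certs):
        self.steps.append("certs")

    def close_default_ssh(self):
        self.steps.append("close-ssh")

    def reboot(self):
        self.steps.append("reboot")

    def deploy_runtime(self, connectors):
        self.steps.append("deploy")


def onboard(device):
    """Run the configuration sequence from the description above."""
    device.connect_ssh(user="factory", password="default")  # factory credentials
    description = device.download_description()
    device.install_certificates(["new-cert"])               # new security certificates
    device.close_default_ssh()                              # close default SSH ...
    device.reboot()                                         # ... and trigger a reboot
    connectors = description.get("connectors", [])          # connectors for the runtime
    device.deploy_runtime(connectors)
    return connectors


dev = Device({"connectors": ["mqtt", "opc-ua"]})
connectors = onboard(dev)
```

The ordering matters: default credentials are revoked (new certificates, default SSH closed) before the runtime is deployed.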

Technologies for switching network traffic in a data center

Technologies for switching network traffic include a network switch. The network switch includes one or more processors and communication circuitry coupled to the one or more processors. The communication circuitry is capable of switching network traffic of multiple link layer protocols. Additionally, the network switch includes one or more memory devices storing instructions that, when executed, cause the network switch to receive, with the communication circuitry through an optical connection, network traffic to be forwarded, and determine a link layer protocol of the received network traffic. The instructions additionally cause the network switch to forward the network traffic as a function of the determined link layer protocol. Other embodiments are also described and claimed.
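
"Forward the network traffic as a function of the determined link layer protocol" can be read as classify-then-dispatch. The sketch below assumes a very rough classifier (Ethernet frames carry a 2-byte EtherType at offset 12; anything else is treated as a generic fabric protocol here) purely for illustration; frame layouts, names, and the two-protocol split are not from the patent.

```python
def classify_frame(frame: bytes) -> str:
    """Guess the link layer protocol of a received frame.

    Ethernet II frames have an EtherType >= 0x0600 at bytes 12-13;
    everything else is lumped under a hypothetical 'fabric' protocol."""
    if len(frame) >= 14 and int.from_bytes(frame[12:14], "big") >= 0x0600:
        return "ethernet"
    return "fabric"


def forward(frame: bytes, forwarding_tables):
    """Dispatch the frame to the forwarding routine for its protocol."""
    proto = classify_frame(frame)
    return forwarding_tables[proto](frame)


forwarding_tables = {
    "ethernet": lambda f: ("eth-port", f),
    "fabric": lambda f: ("fabric-port", f),
}

frame = bytes(12) + b"\x08\x00" + bytes(46)  # dst+src MACs, IPv4 EtherType, padding
port, _ = forward(frame, forwarding_tables)  # dispatched to "eth-port"
```

In the patent's setting the classification and dispatch would happen in the communication circuitry rather than in software, but the control flow is the same.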

TECHNOLOGIES FOR DYNAMIC ACCELERATOR SELECTION
20230050698 · 2023-02-16 ·

Technologies for dynamic accelerator selection include a compute sled. The compute sled includes a network interface controller to communicate with a remote accelerator of an accelerator sled over a network, where the network interface controller includes a local accelerator and a compute engine. The compute engine is to obtain network telemetry data indicative of a level of bandwidth saturation of the network. The compute engine is also to determine whether to accelerate a function managed by the compute sled. The compute engine is further to determine, in response to a determination to accelerate the function, whether to offload the function to the remote accelerator of the accelerator sled based on the telemetry data. The compute engine is additionally to assign, in response to a determination not to offload the function to the remote accelerator, the function to the local accelerator of the network interface controller.
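
The decision chain above reduces to a small selection function. This is an illustrative sketch: the saturation threshold and the function/parameter names are hypothetical, not values from the patent.

```python
def select_accelerator(accelerate, bandwidth_saturation, threshold=0.8):
    """Return where an accelerated function should run.

    accelerate: whether the compute engine decided to accelerate at all.
    bandwidth_saturation: telemetry-derived network saturation in [0, 1].
    Returns None (no acceleration), "remote", or "local"."""
    if not accelerate:
        return None
    if bandwidth_saturation < threshold:
        return "remote"   # accelerator sled reachable with network headroom
    return "local"        # NIC-local accelerator avoids the saturated link


choice = select_accelerator(True, 0.95)  # "local": network too saturated
```

The key property from the abstract is preserved: the local accelerator is the fallback when telemetry argues against offloading.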

CRYPTOCURRENCY MINER WITH MULTIPLE POWER DOMAINS
20230095559 · 2023-03-30 ·

A cryptocurrency miner includes a control power supply, a compute power supply, a compute module, and a controller. The compute module includes control circuitry powered based on first power supplied by the control power supply and a compute engine powered based on second power supplied by the compute power supply. The controller causes the control power supply to apply the first power to the control circuitry. The controller further causes the compute power supply to apply the second power to the compute engine after initialization of the control circuitry.
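
The power-up ordering described above (control power first, compute power only after the control circuitry initializes) can be sketched as a gated sequence. The class and method names below are hypothetical illustrations, not from the patent.

```python
class MinerController:
    """Sequences the two power domains of the compute module."""

    def __init__(self):
        self.events = []

    def apply_control_power(self):
        """Have the control power supply energize the control circuitry;
        return True once the control circuitry reports initialization."""
        self.events.append("control-power")
        return True  # assume initialization succeeds in this sketch

    def apply_compute_power(self):
        """Have the compute power supply energize the compute engine."""
        self.events.append("compute-power")

    def power_up(self):
        initialized = self.apply_control_power()
        if initialized:                 # compute power is gated on init
            self.apply_compute_power()
        return self.events


events = MinerController().power_up()
```

Splitting the domains this way lets the (small) control supply run management logic while the (large) compute supply stays off until it is actually needed.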

MODULAR SYSTEM (SWITCHBOARDS AND MID-PLANE) FOR SUPPORTING 50G OR 100G ETHERNET SPEEDS OF FPGA+SSD
20230085624 · 2023-03-23 ·

A chassis front-end is disclosed. The chassis front-end may include a switchboard including an Ethernet switch, a Baseboard Management Controller, and a mid-plane connector. The chassis front-end may also include a mid-plane including at least one storage device connector and a speed logic to inform at least one storage device of an Ethernet speed of the chassis front-end. The Ethernet speeds may vary.
