
Technologies for providing shared memory for accelerator sleds

Technologies for providing shared memory for accelerator sleds include an accelerator sled to receive, with a memory controller, a memory access request from an accelerator device to access a region of memory. The request identifies the region of memory with a logical address. Additionally, the accelerator sled is to determine, from a map of logical addresses and associated physical addresses, the physical address associated with the region of memory. In addition, the accelerator sled is to route the memory access request to the memory device associated with the determined physical address.
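The logical-to-physical translation and routing described above can be sketched as follows. This is a minimal illustrative model, not the patented implementation; the class names (MemoryController, MemoryDevice) and the map-per-address granularity are assumptions.

```python
class MemoryDevice:
    """A memory device addressed by physical addresses."""
    def __init__(self, name):
        self.name = name
        self.cells = {}

    def write(self, phys_addr, value):
        self.cells[phys_addr] = value

    def read(self, phys_addr):
        return self.cells.get(phys_addr)


class MemoryController:
    """Translates logical addresses in accelerator requests to physical
    addresses and routes each request to the owning memory device."""
    def __init__(self):
        self.addr_map = {}  # logical address -> (memory device, physical address)

    def map_region(self, logical_addr, device, phys_addr):
        self.addr_map[logical_addr] = (device, phys_addr)

    def write(self, logical_addr, value):
        device, phys = self.addr_map[logical_addr]  # look up the mapping
        device.write(phys, value)                   # route to the device

    def read(self, logical_addr):
        device, phys = self.addr_map[logical_addr]
        return device.read(phys)


dev_a, dev_b = MemoryDevice("dimm0"), MemoryDevice("dimm1")
mc = MemoryController()
mc.map_region(0x1000, dev_a, 0x0)
mc.map_region(0x2000, dev_b, 0x0)   # same physical offset, different device
mc.write(0x1000, "payload-A")
mc.write(0x2000, "payload-B")
print(mc.read(0x1000))  # payload-A, served by dimm0
```

Two logical regions can share a physical offset without colliding, because the map keys routing to distinct devices keep the accesses separate.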

CABLE BACKPLANE SYSTEM HAVING INDIVIDUALLY REMOVABLE CABLE CONNECTOR ASSEMBLIES
20180014424 · 2018-01-11 ·

A cable backplane system includes cable backplanes each including a tray configured to be coupled to a chassis and a plurality of cable connector assemblies mounted to the tray. The tray has a plate extending between a front and a rear with mounting locations receiving corresponding cable connector assemblies. The trays are oriented parallel to each other with front openings between the fronts of the plates and rear openings between the rears of the plates. Each cable connector assembly has a housing holding contacts terminated to corresponding cables. Each cable connector assembly has a holder mounted to the corresponding mounting location of the plate. The holder is mounted to the plate and removable from the plate through the rear opening at the rear of the plate.

Technologies for assigning workloads to balance multiple resource allocation objectives

Technologies for allocating resources of managed nodes to workloads to balance multiple resource allocation objectives include an orchestrator server to receive resource allocation objective data indicative of multiple resource allocation objectives to be satisfied. The orchestrator server is additionally to determine an initial assignment of a set of workloads among the managed nodes and receive telemetry data from the managed nodes. The orchestrator server is further to determine, as a function of the telemetry data and the resource allocation objective data, an adjustment to the assignment of the workloads to increase an achievement of at least one of the resource allocation objectives without decreasing an achievement of another of the resource allocation objectives, and apply the adjustments to the assignments of the workloads among the managed nodes as the workloads are performed. Other embodiments are also described and claimed.
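The adjustment rule in this abstract is a Pareto-improvement criterion: change the assignment only if at least one objective improves and none gets worse. The sketch below illustrates that rule under assumed objectives (load balance and data locality) and an assumed adjustment of moving one workload at a time; none of these specifics come from the patent.

```python
def objective_scores(assignment, telemetry):
    """Score two example objectives from telemetry; higher is better."""
    loads = {}
    for workload, node in assignment.items():
        loads[node] = loads.get(node, 0) + telemetry[workload]["cpu"]
    balance = -max(loads.values(), default=0)          # objective 1: balance load
    locality = sum(1 for w, n in assignment.items()    # objective 2: data locality
                   if telemetry[w]["data_node"] == n)
    return (balance, locality)

def pareto_improves(new, old):
    """True if every objective is at least as good and one is strictly better."""
    return all(n >= o for n, o in zip(new, old)) and any(n > o for n, o in zip(new, old))

def adjust(assignment, nodes, telemetry):
    """Try moving each workload to each node; keep the first Pareto improvement."""
    base = objective_scores(assignment, telemetry)
    for workload in assignment:
        for node in nodes:
            candidate = dict(assignment, **{workload: node})
            if pareto_improves(objective_scores(candidate, telemetry), base):
                return candidate
    return assignment  # no move improves one objective without hurting another

telemetry = {"w1": {"cpu": 8, "data_node": "n2"},
             "w2": {"cpu": 2, "data_node": "n1"}}
new_assignment = adjust({"w1": "n1", "w2": "n1"}, ["n1", "n2"], telemetry)
```

Here moving w1 to n2 both balances load and improves locality, so it is accepted; a move that helped one objective at the other's expense would be rejected.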

Base module and functional module for a switch-cabinet system, and switch-cabinet system

A base module for a switch-cabinet system has a plurality of communication units and connection elements for a plurality of functional modules. The connection elements are configured to engage in module-connection elements of the functional modules. Each connection element has at least one data connection. Each communication unit is connected to at least one data connection of a connection element. The communication units are connected to one another by a data bus. The base module has a first field-bus connection, and the data bus is connected to the first field-bus connection to connect the communication units to a field-bus.

MODULAR HIGH-POWER OFF-BOARD CHARGER

The disclosure relates to a modular high-power off-board charger. The charger includes an AC module, a DC module, a rectifier module and a wire harness module. The AC module is provided with an AC module connector, the DC module is provided with a DC module connector, the rectifier module is provided with a rectifier module connector, and the wire harness module is provided with an AC wire harness connector docked with the AC module connector, a DC wire harness connector docked with the DC module connector, and a rectifier wire harness connector docked with the rectifier module connector. The AC wire harness connector, the DC wire harness connector and the rectifier wire harness connector are electrically connected in the wire harness module.

CABLE CONNECTION STRUCTURE, SINGLE-BOARD ASSEMBLY, SINGLE-BOARD ASSEMBLY CONNECTION STRUCTURE
20220386494 · 2022-12-01 ·

Embodiments of the disclosure provide a cable connection structure, including: a bearing member, the bearing member being provided with at least one cable connector, and each cable connector having a first port connected to a cable and a second port electrically connected to the first port; and a sliding structure connected to the bearing member, the bearing member being configured to be connected to a single board through the sliding structure, the bearing member enabling the single board connected to the bearing member to slide in a first direction which is a direction close to or away from the second port. Embodiments of the disclosure also provide a single-board assembly and a single-board assembly connection structure.

Technologies for switching network traffic in a data center

Technologies for switching network traffic include a network switch. The network switch includes one or more processors and communication circuitry coupled to the one or more processors. The communication circuitry is capable of switching network traffic of multiple link layer protocols. Additionally, the network switch includes one or more memory devices storing instructions that, when executed, cause the network switch to receive, with the communication circuitry through an optical connection, network traffic to be forwarded, and determine a link layer protocol of the received network traffic. The instructions additionally cause the network switch to forward the network traffic as a function of the determined link layer protocol. Other embodiments are also described and claimed.
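The classify-then-forward behavior can be sketched as below. This is a minimal illustration that assumes each frame arriving over the optical connection carries a one-byte tag identifying its link layer protocol; the tag values, protocol names, and port table are assumptions, not details from the patent.

```python
# Hypothetical mapping from a leading tag byte to a link layer protocol.
PROTOCOL_TAGS = {
    0x01: "ethernet",
    0x02: "omni-path",
}

def determine_protocol(frame: bytes) -> str:
    """Determine the link layer protocol of the received network traffic."""
    return PROTOCOL_TAGS.get(frame[0], "unknown")

def forward(frame: bytes, ports_by_protocol: dict) -> str:
    """Forward the traffic as a function of the determined protocol."""
    protocol = determine_protocol(frame)
    return ports_by_protocol.get(protocol, "drop")

ports = {"ethernet": "uplink-0", "omni-path": "fabric-1"}
egress = forward(b"\x01payload", ports)   # an Ethernet-tagged frame
```

Traffic whose protocol is not in the table falls through to a drop (or default) path, which is the usual design choice when one switch must carry multiple fabrics.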

Cloud-based scale-up system composition

Technologies for composing a managed node with multiple processors on multiple compute sleds to cooperatively execute a workload include a compute sled with a memory, one or more processors connected to the memory, and an accelerator. The accelerator includes a coherence logic unit that is configured to receive a node configuration request to execute a workload. The node configuration request identifies the compute sled and a second compute sled to be included in a managed node. The coherence logic unit is further configured to modify, with the one or more processors of the compute sled, a portion of the local working data associated with the workload in the memory, determine coherence data indicative of the modification made by the one or more processors to the local working data, and send the coherence data to the second compute sled of the managed node.
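The modify/determine/send flow above amounts to tracking which parts of the local working data changed and shipping only that delta to the peer sled. The sketch below illustrates that flow; the ComputeSled class, its method names, and the key-level dirty tracking are illustrative assumptions.

```python
class ComputeSled:
    """A sled holding a local copy of working data for a shared workload."""
    def __init__(self, name):
        self.name = name
        self.working_data = {}
        self.dirty = set()          # regions modified since the last sync

    def modify(self, key, value):
        """Processor-side modification of local working data."""
        self.working_data[key] = value
        self.dirty.add(key)

    def coherence_data(self):
        """Determine data indicative of the modifications made locally."""
        delta = {k: self.working_data[k] for k in self.dirty}
        self.dirty.clear()
        return delta

    def apply_coherence_data(self, delta):
        """Peer sled applies the delta so both copies agree."""
        self.working_data.update(delta)


sled1, sled2 = ComputeSled("sled1"), ComputeSled("sled2")
sled1.modify("row:42", [1, 2, 3])
sled2.apply_coherence_data(sled1.coherence_data())
```

Sending only the modified regions, rather than the whole working set, is what keeps the inter-sled coherence traffic proportional to the changes.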

Techniques to configure physical compute resources for workloads via circuit switching

Embodiments are generally directed to apparatuses, methods, techniques, and so forth to select two or more processing units of a plurality of processing units to process a workload, and to configure a circuit switch to link the two or more processing units to process the workload, the two or more processing units each linked to the others via communication paths and the circuit switch.
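The select-and-link step can be sketched as programming point-to-point circuits between every pair of selected units, so each unit has a path to every other through the switch. The CircuitSwitch class and unit names below are illustrative assumptions, not details from the patent.

```python
from itertools import combinations

class CircuitSwitch:
    """Holds point-to-point circuits between processing-unit ports."""
    def __init__(self):
        self.circuits = set()

    def link(self, a, b):
        self.circuits.add(frozenset((a, b)))

    def connected(self, a, b):
        return frozenset((a, b)) in self.circuits

def configure_for_workload(switch, units):
    """Link every selected unit to every other selected unit."""
    for a, b in combinations(units, 2):
        switch.link(a, b)

sw = CircuitSwitch()
selected = ["cpu0", "gpu1", "fpga2"]   # two or more units chosen for the workload
configure_for_workload(sw, selected)
```

Because circuits are stored as unordered pairs, the link is symmetric, and units not selected for the workload remain unconnected.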