Interlocking transportation totes

A system may include first and second apparatuses. The first and second apparatuses may each include: an enclosure portion including a plurality of mounting features that are configured to receive information handling systems, wherein dimensions of the enclosure portion define a footprint; a base portion disposed below the enclosure portion and coupled to the enclosure portion; a plurality of casters coupled to the base portion and mounted in respective positions that are laterally displaced from the base portion such that the positions are outside the footprint of the enclosure portion; and a locking mechanism. The first and second apparatuses may be operable to be coupled together via their respective locking mechanisms for transport.

Power and temperature management of devices

Examples described herein relate to an interface and a network interface device coupled to the interface and comprising circuitry to: control power utilization by a first set of one or more devices based on power available to a system that includes the first set of one or more devices, wherein the system is communicatively coupled to the network interface device; and control cooling applied to the first set of one or more devices.
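
The abstract above leaves the control policy unspecified. As a toy illustration only, one could apportion the available system power across the devices and scale cooling with the power actually allocated. Every function name, number, and policy below is a hypothetical sketch, not the patent's method:

```python
import math

def apportion_power(available_watts, demands):
    """Scale per-device power caps down proportionally whenever total
    demand exceeds the power available to the system (one simple
    policy; the abstract does not prescribe one)."""
    total = sum(demands.values())
    if total <= available_watts:
        return dict(demands)
    scale = available_watts / total
    return {device: watts * scale for device, watts in demands.items()}

def cooling_level(allocation, watts_per_level=100.0, max_level=10):
    """Pick a cooling (e.g. fan) level roughly proportional to the
    total power allocated to the devices."""
    return min(max_level, math.ceil(sum(allocation.values()) / watts_per_level))
```

For example, with 300 W available and 600 W of demand, each device's cap is halved and the cooling level follows the 300 W actually granted.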

Server rack flood shroud
11711903 · 2023-07-25

Systems and techniques for deploying a deployable water barrier to protect server racks and components housed thereon are provided. The deployable water barrier is deployed in response to water detected at the server rack. The deployable water barrier is deployed by a barrier deployment mechanism, causing the barrier to expand and cover a front and/or a back side of the server rack.

Connectors for a networking device with orthogonal switch bars

Connectors for a networking device may be provided. A networking device may comprise a first plurality of switch bars each comprising a first switch type arranged parallel to one another, a second plurality of switch bars each comprising a second switch type arranged parallel to one another, and a third plurality of switch bars each comprising a third switch type arranged parallel to one another. The first plurality of switch bars, the second plurality of switch bars, and the third plurality of switch bars may be arranged orthogonally. A first one of the first plurality of switch bars may be connected to a first one of the second plurality of switch bars via a retractable mechanical connector mechanism.

Technologies for assigning workloads to balance multiple resource allocation objectives

Technologies for allocating resources of managed nodes to workloads to balance multiple resource allocation objectives include an orchestrator server to receive resource allocation objective data indicative of multiple resource allocation objectives to be satisfied. The orchestrator server is additionally to determine an initial assignment of a set of workloads among the managed nodes and receive telemetry data from the managed nodes. The orchestrator server is further to determine, as a function of the telemetry data and the resource allocation objective data, an adjustment to the assignment of the workloads to increase an achievement of at least one of the resource allocation objectives without decreasing an achievement of another of the resource allocation objectives, and apply the adjustments to the assignments of the workloads among the managed nodes as the workloads are performed. Other embodiments are also described and claimed.
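
The claimed adjustment, improving at least one objective without decreasing another, is what optimization literature calls a Pareto improvement. A minimal sketch of one such adjustment step, with the objectives, scoring model, and all names invented for illustration rather than taken from the patent:

```python
def objective_scores(assignment, telemetry, nodes):
    """Score a {workload: node} assignment against two toy objectives:
    mean node load (higher utilization is better) and load balance
    (negated max-min spread, so higher is also better)."""
    loads = {node: 0 for node in nodes}
    for workload, node in assignment.items():
        loads[node] += telemetry[workload]
    values = list(loads.values())
    return (sum(values) / len(values), -(max(values) - min(values)))

def pareto_adjust(assignment, telemetry, nodes):
    """Return the first single-workload move that raises at least one
    objective score without lowering any other; otherwise keep the
    current assignment."""
    base = objective_scores(assignment, telemetry, nodes)
    for workload, current in assignment.items():
        for node in nodes:
            if node == current:
                continue
            candidate = {**assignment, workload: node}
            scores = objective_scores(candidate, telemetry, nodes)
            if (any(s > b for s, b in zip(scores, base))
                    and all(s >= b for s, b in zip(scores, base))):
                return candidate
    return assignment
```

Starting with two equal workloads on one node, the sketch moves one workload to the idle node: balance improves while mean utilization is unchanged, so no objective regresses.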

Application and integration of a GPU server system

A graphics processing unit (GPU) server having a GPU host head with one or more host graphics processing units (GPUs). The GPU server further has a GPU system with a plurality of system GPUs that are separate from the host GPUs, and that are configured to accelerate creation of images for output to a display device. The GPU server also has a mounting assembly that integrates the GPU host head and the GPU system into a single GPU server unit. The GPU host head is independently movable relative to the GPU system.

Technologies for switching network traffic in a data center

Technologies for switching network traffic include a network switch. The network switch includes one or more processors and communication circuitry coupled to the one or more processors. The communication circuitry is capable of switching network traffic of multiple link layer protocols. Additionally, the network switch includes one or more memory devices storing instructions that, when executed, cause the network switch to receive, with the communication circuitry through an optical connection, network traffic to be forwarded, and determine a link layer protocol of the received network traffic. The instructions additionally cause the network switch to forward the network traffic as a function of the determined link layer protocol. Other embodiments are also described and claimed.
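
The abstract does not say how the link layer protocol is determined; one common mechanism a reader might picture is classifying each frame by its EtherType and dispatching to a protocol-specific forwarding path. The tag values below are standard EtherTypes, but the handler table and the assumption of Ethernet II framing are illustrative, not claimed by the patent:

```python
# Forwarding "as a function of the determined link layer protocol":
# classify a frame by EtherType, then pick a pipeline.
ETHERTYPE_HANDLERS = {
    0x0800: "ipv4_pipeline",   # IPv4
    0x86DD: "ipv6_pipeline",   # IPv6
    0x8906: "fcoe_pipeline",   # Fibre Channel over Ethernet
}

def classify(frame: bytes) -> int:
    """Read the 2-byte EtherType that follows the two 6-byte MAC
    addresses in an Ethernet II frame (byte offset 12)."""
    return int.from_bytes(frame[12:14], "big")

def forward(frame: bytes) -> str:
    """Choose a forwarding pipeline for the frame; frames with an
    unrecognized protocol are dropped."""
    return ETHERTYPE_HANDLERS.get(classify(frame), "drop")
```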

Electronic devices for expansion

The present disclosure is related to electronic devices. At least some embodiments of the present disclosure relate to an electronic device comprising a circuit board, a first connector, and a second connector. The first connector and the second connector are disposed on the circuit board. The first connector is different from the second connector. The second connector is adjacent to the first connector. The first connector is arranged along a reference line in a first direction, and the second connector is adjacent to the reference line in the first direction.

Application and integration of a GPU server system
20230094401 · 2023-03-30

A graphics processing unit (GPU) server having a GPU host head with one or more host graphics processing units (GPUs). The GPU server further has a GPU system with a plurality of system GPUs that are separate from the host GPUs, and that are configured to accelerate creation of images for output to a display device. The GPU server also has a mounting assembly that integrates the GPU host head and the GPU system into a single GPU server unit. The GPU host head is independently movable relative to the GPU system.