H05K7/1485

COMPUTE SLED PROVIDING HIGH SPEED STORAGE ACCESS THROUGH A PCI EXPRESS FABRIC BETWEEN COMPUTE NODES AND A STORAGE SERVER
20230273889 · 2023-08-31 ·

A network architecture including a streaming array that includes a plurality of compute sleds, wherein each compute sled includes one or more compute nodes. The network architecture includes network storage of the streaming array. The network architecture further includes a PCIe fabric of the streaming array configured to provide direct access to the network storage from a plurality of compute nodes of the streaming array. The PCIe fabric includes one or more array-level PCIe switches, wherein each array-level PCIe switch is communicatively coupled to corresponding compute nodes of corresponding compute sleds and communicatively coupled to the network storage. The network storage is shared by the plurality of compute nodes of the streaming array.

Technologies for dynamically managing resources in disaggregated accelerators

Technologies for dynamically managing resources in disaggregated accelerators include an accelerator. The accelerator includes acceleration circuitry with multiple logic portions, each capable of executing a different workload. Additionally, the accelerator includes communication circuitry to receive a workload to be executed by a logic portion of the accelerator and a dynamic resource allocation logic unit to identify a resource utilization threshold associated with one or more shared resources of the accelerator to be used by a logic portion in the execution of the workload, limit, as a function of the resource utilization threshold, the utilization of the one or more shared resources by the logic portion as the logic portion executes the workload, and subsequently adjust the resource utilization threshold as the workload is executed. Other embodiments are also described and claimed.
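The throttle-and-adjust behavior described above can be illustrated with a minimal sketch. This is not code from the patent; the class name, the contention signal, and the 0.1 adjustment step are all assumptions chosen for illustration of a cap that limits a logic portion's use of a shared resource and is tuned while the workload runs.

```python
# Illustrative sketch (hypothetical names and step sizes, not from the
# patent): a dynamic resource allocation unit that caps one logic
# portion's share of a shared accelerator resource and adjusts the cap
# as the workload executes.

class DynamicResourceAllocator:
    def __init__(self, threshold: float):
        # threshold: fraction (0..1) of a shared resource (e.g. memory
        # bandwidth) the logic portion may currently use
        self.threshold = threshold

    def limit(self, requested: float) -> float:
        """Clamp a utilization request to the current threshold."""
        return min(requested, self.threshold)

    def adjust(self, observed_contention: float) -> None:
        """Relax the cap when contention is low, tighten it when high."""
        if observed_contention < 0.5:
            self.threshold = min(1.0, self.threshold + 0.1)
        else:
            self.threshold = max(0.1, self.threshold - 0.1)


alloc = DynamicResourceAllocator(threshold=0.6)
granted = alloc.limit(0.9)              # request clamped to the 0.6 cap
alloc.adjust(observed_contention=0.2)   # low contention: cap relaxed
```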

Application and integration of a GPU server system

A graphics processing unit (GPU) server having a GPU host head with one or more host graphics processing units (GPUs). The GPU server further has a GPU system with a plurality of system GPUs that are separate from the host GPUs, and that are configured to rapidly accelerate creation of images for output to a display device. The GPU server also has a mounting assembly that integrates the GPU host head and the GPU system into a single GPU server unit. The GPU host head is independently movable relative to the GPU system.

Network architecture providing high speed storage access through a PCI express fabric between a compute node and a storage server

A network architecture including network storage. The network architecture includes a plurality of streaming arrays, each streaming array including a plurality of compute sleds, wherein each compute sled includes one or more compute nodes. The network architecture includes a PCI Express (PCIe) fabric configured to provide direct access to the network storage from compute nodes of each of the plurality of streaming arrays, the PCIe fabric including a plurality of array-level PCIe switches, each array-level PCIe switch communicatively coupled to compute nodes of compute sleds of a corresponding streaming array and communicatively coupled to the storage server. The network storage is shared by the plurality of streaming arrays.
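The topology in this abstract can be modeled in a few lines: each array-level PCIe switch couples the compute nodes of one streaming array to storage shared by all arrays. This is an illustrative sketch only; the class and node names are hypothetical, not from the patent.

```python
# Illustrative sketch (hypothetical names, not from the patent): each
# array-level PCIe switch couples the compute nodes of one streaming
# array to network storage shared across all arrays.

from dataclasses import dataclass, field


@dataclass
class ArrayLevelSwitch:
    array_id: int
    compute_nodes: list = field(default_factory=list)


@dataclass
class PCIeFabric:
    storage: str
    switches: list = field(default_factory=list)

    def path_to_storage(self, node: str) -> list:
        """Return the PCIe hops from a compute node to network storage."""
        for sw in self.switches:
            if node in sw.compute_nodes:
                return [node, f"array-switch-{sw.array_id}", self.storage]
        raise KeyError(node)


fabric = PCIeFabric(
    storage="network-storage",
    switches=[
        ArrayLevelSwitch(0, ["array0/sled0/node0", "array0/sled1/node0"]),
        ArrayLevelSwitch(1, ["array1/sled0/node0"]),
    ],
)
# Every compute node reaches the same shared storage via its array's switch.
path = fabric.path_to_storage("array1/sled0/node0")
# → ['array1/sled0/node0', 'array-switch-1', 'network-storage']
```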

FLOATING DEVICE LOCATION IDENTIFICATION SYSTEM

A floating device location identification system includes a chassis defining floating device housings and including respective chassis location identification features adjacent each floating device housing that identify the relative location of that floating device housing. A floating device may be positioned in a first floating device housing and adjacent a first chassis location identification feature. The first floating device includes floating device cabling connector(s) that are connected via a cabling subsystem to a device location identification subsystem, and chassis engagement elements that are coupled to the floating device cabling connector(s) and that engage the first chassis location identification feature. The floating device transmits floating device location identifying information to the device location identification subsystem that is based on the engagement of the chassis engagement elements and the first chassis location identification feature, and that identifies a relative location of the first floating device housing in the chassis.

Technologies for data center multi-zone cabling

Technologies for connecting data cables in a data center are disclosed. In the illustrative embodiment, racks of the data center are grouped into different zones based on the distance from the racks in a given zone to a network switch. All of the racks in a given zone are connected to the network switch using data cables of the same length. In some embodiments, certain physical resources such as storage may be placed in racks that are in zones closer to the network switch and therefore use shorter data cables with lower latency. An orchestrator server may, in some embodiments, schedule workloads or create virtual servers based on the different zones and corresponding latency of different physical resources.
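The zoning scheme described above can be sketched directly: zones are assigned by distance to the switch, each zone uses one cable length, and latency-sensitive resources are placed in the closest zone. The 5-meter zone width and the rack distances below are assumptions for illustration, not values from the patent.

```python
# Illustrative sketch (assumed zone width and distances, not from the
# patent): racks are grouped into zones by distance to the network
# switch, all racks in a zone use one cable length, and an orchestrator
# prefers the lowest-latency zone for storage.

def zone_for(distance_m: float, zone_width_m: float = 5.0) -> int:
    """Zone index grows with distance from the network switch."""
    return int(distance_m // zone_width_m)


def cable_length_for_zone(zone: int, zone_width_m: float = 5.0) -> float:
    """One cable length per zone: long enough for the zone's far edge."""
    return (zone + 1) * zone_width_m


racks = {"rack-a": 2.0, "rack-b": 7.5, "rack-c": 12.0}  # distances in meters
zones = {name: zone_for(d) for name, d in racks.items()}

# Storage is placed in the zone closest to the switch (shortest cables,
# lowest latency), as the abstract suggests.
storage_rack = min(racks, key=lambda r: zones[r])
```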

Techniques to configure physical compute resources for workloads via circuit switching

Embodiments are generally directed to apparatuses, methods, techniques and so forth to select two or more processing units of a plurality of processing units to process a workload, and configure a circuit switch to link the two or more processing units to process the workload, the two or more processing units each linked to each other via paths of communication and the circuit switch.
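The select-then-link flow can be sketched as follows. This is an illustrative sketch under stated assumptions, not the patented method: the least-loaded selection policy and the full-mesh link layout are choices made here for concreteness.

```python
# Illustrative sketch (assumed selection policy, not from the patent):
# pick processing units for a workload, then configure a circuit switch
# with one link per pair so every selected unit can reach every other.

from itertools import combinations


def select_units(units, workload_demand):
    """Pick the least-loaded units until their capacity covers demand
    (an assumed policy; the patent does not specify one)."""
    chosen, capacity = [], 0
    for u in sorted(units, key=lambda u: u["load"]):
        chosen.append(u["id"])
        capacity += u["capacity"]
        if capacity >= workload_demand:
            return chosen
    raise RuntimeError("insufficient capacity for workload")


def circuit_links(selected):
    """Circuit-switch configuration: one link per pair of selected units."""
    return list(combinations(sorted(selected), 2))


units = [
    {"id": "gpu0", "load": 0.2, "capacity": 10},
    {"id": "gpu1", "load": 0.8, "capacity": 10},
    {"id": "gpu2", "load": 0.1, "capacity": 10},
]
selected = select_units(units, workload_demand=20)  # two least-loaded units
links = circuit_links(selected)                     # the pair to link
```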

Cooling system for a networking device with orthogonal switch bars

A cooling system for a networking device may be provided. The networking device may comprise a first plurality of switch bars each comprising a first switch type arranged parallel to one another, a second plurality of switch bars each comprising a second switch type arranged parallel to one another, and a third plurality of switch bars each comprising a third switch type arranged parallel to one another. The first plurality of switch bars, the second plurality of switch bars, and the third plurality of switch bars may be arranged orthogonally. A plurality of cooling passages may be configured to supply a coolant to the networking device and to exhaust the coolant from the networking device. The coolant may pass through the first plurality of switch bars, the second plurality of switch bars, and the third plurality of switch bars.

MULTI-DEVICE CHASSIS AIR FILTER CHARACTERIZATION SYSTEM

A multi-device chassis air filter characterization system includes a multi-device chassis, an air filter that is included on the multi-device chassis, and a plurality of computing devices that are housed in the multi-device chassis. Each of the computing devices determines that a current time corresponds to a predetermined air filter characterization time period and, in response, operates a cooling system in that computing device at a predetermined cooling system operating level for the predetermined air filter characterization time period. A first computing device that is included in the plurality of computing devices measures an air filtering characteristic provided by the air filter during the predetermined air filter characterization time period and, based on the air filtering characteristic, determines whether to generate an air filter replacement alert.
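The two behaviors in this abstract, pinning every device's cooling level during a scheduled window and deciding whether to raise a replacement alert, can be sketched briefly. The window hour, fan level, and the 70% airflow threshold below are hypothetical values for illustration; the patent specifies none of them.

```python
# Illustrative sketch (hypothetical window and thresholds, not from the
# patent): during a scheduled characterization window every device pins
# its fans to a known level, and one device judges the filter from the
# measured airflow against a new-filter baseline.

CHARACTERIZATION_HOUR = 3       # assumed nightly window (hour of day)
FAN_LEVEL_DURING_TEST = 0.8     # predetermined cooling system level
CLOGGED_AIRFLOW_RATIO = 0.7     # alert below 70% of baseline airflow


def fan_level(current_hour: int, normal_level: float) -> float:
    """Pin the fan level during the characterization window, otherwise
    run at the device's normal level."""
    if current_hour == CHARACTERIZATION_HOUR:
        return FAN_LEVEL_DURING_TEST
    return normal_level


def filter_alert(measured_airflow: float, baseline_airflow: float) -> bool:
    """True when the filter has degraded enough to warrant replacement."""
    return measured_airflow < CLOGGED_AIRFLOW_RATIO * baseline_airflow
```

Pinning every device to the same predetermined level matters because the measurement is only comparable to the baseline if the chassis airflow conditions are reproduced each time.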

SLOTFILLER, COMPUTER-RACK INSERTABLE COMPONENT, AND COMPUTER RACK
20230320020 · 2023-10-05 ·

The present disclosure concerns a slotfiller (1) for closing a slot (21) in a computer-rack-insertable component (20), wherein the slotfiller (1) comprises at least two finger-grippable cup elements (2) and a hinge (3) connecting the cup elements (2). The slotfiller (1) is configured to be insertable into the slot (21) by pressing the cup elements (2) towards each other, and is configured to at least partially cover an open surface area of the slot (21) by releasing the cup elements (2) in the inserted state. The present disclosure also concerns a computer-rack-insertable component (20) and a computer rack (100), each comprising at least one slotfiller (1).