Patent classifications
H05K7/1421
Printed circuit board orientations
An example computing device enclosure can include a first printed circuit board (PCB) that includes a first plurality of components, where a first portion of the first plurality of components that are shorter than a threshold height are positioned on a first side of the first PCB and a second portion of the first plurality of components that are taller than the threshold height are positioned on a second side of the first PCB, and a second printed circuit board (PCB) that includes a second plurality of components, where a first portion of the second plurality of components that are shorter than the threshold height are positioned on a first side of the second PCB and a second portion of the second plurality of components that are taller than the threshold height are positioned on a second side of the second PCB.
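The placement rule in the abstract above is a simple partition by a height threshold. A minimal sketch follows; the component names, the `Component` type, and the millimeter units are illustrative assumptions, not from the patent.

```python
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    height_mm: float

def place_components(components, threshold_mm):
    """Partition components onto the two sides of a PCB by height:
    shorter than the threshold -> first side, taller -> second side."""
    first_side = [c for c in components if c.height_mm < threshold_mm]
    second_side = [c for c in components if c.height_mm >= threshold_mm]
    return first_side, second_side

parts = [Component("resistor", 0.6), Component("capacitor", 2.5),
         Component("heatsink", 12.0)]
low, high = place_components(parts, threshold_mm=5.0)
```

With mirrored boards mounted tall-side-out (or tall-side-in), this rule lets two PCBs share an enclosure at a tighter pitch than uniform component placement would allow.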
Technologies for assigning workloads to balance multiple resource allocation objectives
Technologies for allocating resources of managed nodes to workloads to balance multiple resource allocation objectives include an orchestrator server to receive resource allocation objective data indicative of multiple resource allocation objectives to be satisfied. The orchestrator server is additionally to determine an initial assignment of a set of workloads among the managed nodes and receive telemetry data from the managed nodes. The orchestrator server is further to determine, as a function of the telemetry data and the resource allocation objective data, an adjustment to the assignment of the workloads to increase an achievement of at least one of the resource allocation objectives without decreasing an achievement of another of the resource allocation objectives, and apply the adjustments to the assignments of the workloads among the managed nodes as the workloads are performed. Other embodiments are also described and claimed.
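The acceptance criterion described above — increase at least one objective's achievement without decreasing another's — is a Pareto-improvement test. A hedged sketch of that check, with objective scores as plain tuples (the scoring itself is assumed, not specified by the abstract):

```python
def is_pareto_improvement(current_scores, candidate_scores):
    """Accept a workload-assignment adjustment only if it raises at
    least one objective score without lowering any other
    (higher score = better achievement of that objective)."""
    improved = any(c > p for p, c in zip(current_scores, candidate_scores))
    regressed = any(c < p for p, c in zip(current_scores, candidate_scores))
    return improved and not regressed

# Example objective tuples, e.g. (power efficiency, throughput, thermal headroom)
ok = is_pareto_improvement((0.6, 0.7, 0.5), (0.8, 0.7, 0.5))
tradeoff = is_pareto_improvement((0.6, 0.7, 0.5), (0.9, 0.6, 0.5))
```

An orchestrator applying this test would keep proposing candidate reassignments from telemetry and commit only those that pass, so no stated objective regresses as workloads run.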
Technologies for switching network traffic in a data center
Technologies for switching network traffic include a network switch. The network switch includes one or more processors and communication circuitry coupled to the one or more processors. The communication circuitry is capable of switching network traffic of multiple link layer protocols. Additionally, the network switch includes one or more memory devices storing instructions that, when executed, cause the network switch to receive, with the communication circuitry through an optical connection, network traffic to be forwarded, and determine a link layer protocol of the received network traffic. The instructions additionally cause the network switch to forward the network traffic as a function of the determined link layer protocol. Other embodiments are also described and claimed.
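Forwarding "as a function of the determined link layer protocol" amounts to classify-then-dispatch. The sketch below assumes a hypothetical one-byte protocol tag at the start of each frame and illustrative handler names; the abstract does not specify the classification mechanism or the protocols involved.

```python
def forward_ethernet(frame):
    # Placeholder for protocol-specific forwarding logic.
    return ("ethernet", frame)

def forward_omnipath(frame):
    return ("omnipath", frame)

# Hypothetical protocol-tag -> handler table.
HANDLERS = {0x01: forward_ethernet, 0x02: forward_omnipath}

def switch_frame(frame: bytes):
    """Determine the frame's link layer protocol, then forward with
    the matching handler."""
    protocol_id = frame[0]                 # assumed 1-byte protocol tag
    handler = HANDLERS.get(protocol_id)
    if handler is None:
        raise ValueError(f"unsupported link layer protocol {protocol_id:#x}")
    return handler(frame[1:])
```

The point of the design is that one optical connection can carry traffic of several link layer protocols, with the switch sorting frames to the right forwarding path after reception.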
Technologies for blind mating for sled-rack connections
Technologies for blind mating of optical connectors in a rack of a data center are disclosed. In the illustrative embodiment, a sled can be slid into a rack and an optical connector on the sled will blindly mate with a corresponding optical connector on the rack. The illustrative optical connector on the sled includes two guide post receivers which mate with corresponding guide posts on the optical connector on the rack such that, when mated, optical fibers of the optical connector on the rack will be aligned and optically coupled with corresponding optical fibers on the optical connector of the sled.
Technologies for dynamic remote resource allocation
Technologies for dynamically allocating resources among a set of managed nodes include an orchestrator server to receive telemetry data from the managed nodes indicative of resource utilization and workload performance by the managed nodes as the workloads are executed, generate a resource allocation map indicative of allocations of resources among the managed nodes, determine, as a function of the telemetry data and the resource allocation map, a dynamic adjustment to allocation of resources to at least one of the managed nodes to improve performance of at least one of the workloads executed on the at least one of the managed nodes, and apply the adjustment to the allocation of the resources among the managed nodes as the workloads are executed. Other embodiments are also described and claimed.
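The loop described above — telemetry in, allocation map consulted, adjustment out — can be sketched with a deliberately simple rebalancing rule (move capacity from the least-utilized node to the most-utilized one). The data shapes and the rule itself are assumptions for illustration; the abstract does not prescribe a specific policy.

```python
def propose_adjustment(telemetry, allocation_map, step=1):
    """telemetry: node -> utilization in [0, 1];
    allocation_map: node -> allocated resource units.
    Returns a new map that shifts `step` units from the least-loaded
    node to the most-loaded one, or the map unchanged if no move helps."""
    hottest = max(telemetry, key=telemetry.get)
    coolest = min(telemetry, key=telemetry.get)
    if hottest == coolest or allocation_map[coolest] <= step:
        return allocation_map          # nothing useful to move
    adjusted = dict(allocation_map)    # leave the input map intact
    adjusted[coolest] -= step
    adjusted[hottest] += step
    return adjusted

alloc = propose_adjustment({"n1": 0.9, "n2": 0.2}, {"n1": 4, "n2": 4})
```

In the patented system this adjustment is applied while workloads continue to execute, so the orchestrator iterates: fresh telemetry produces the next proposed adjustment.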
Accelerator resource allocation and pooling
Examples may include techniques to allocate physical accelerator resources from pools of accelerator resources. In particular, virtual computing devices can be composed from physical resources and physical accelerator resources dynamically allocated to the virtual computing devices. The present disclosure provides that physical accelerator resources can be dynamically allocated, or composed, to a virtual computing device despite not being physically coupled to other components in the virtual device.
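Composing a virtual computing device from a shared accelerator pool can be sketched as a checkout/return ledger; the class and device names below are illustrative assumptions, not from the disclosure.

```python
class AcceleratorPool:
    """Tracks which physical accelerators are free and which are
    composed into which virtual computing device."""

    def __init__(self, device_ids):
        self.free = set(device_ids)
        self.assigned = {}             # virtual device -> set of device ids

    def compose(self, vdevice, count):
        """Dynamically allocate `count` accelerators to a virtual device."""
        if len(self.free) < count:
            raise RuntimeError("accelerator pool exhausted")
        picked = {self.free.pop() for _ in range(count)}
        self.assigned.setdefault(vdevice, set()).update(picked)
        return picked

    def release(self, vdevice):
        """Return a virtual device's accelerators to the pool."""
        self.free |= self.assigned.pop(vdevice, set())

pool = AcceleratorPool({"fpga0", "fpga1", "gpu0"})
pool.compose("vm-a", 2)
```

The key property the disclosure claims is that the allocated accelerators need not be physically coupled to the virtual device's other components; the pool abstracts their location away.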
System and Method for Mechanical Release of Sleds in Enclosures
An enclosure for installation in a server rack has front panel access to multiple release mechanisms. The front panel access allows separate and safe dual tasking of both hot and cold swaps. A hot swap mechanism allows a sled to be ejected while retaining a connection to electrical power. A separate cold swap mechanism releases the sled and also disconnects the electrical connection. When the hot swap mechanism is engaged or enabled, one or more mechanical linkages operate to lock out access to the cold swap mechanism. When the cold swap mechanism is engaged or enabled, the mechanical linkages interfere with or block manual operation of the hot swap mechanism. Neither the hot swap mechanism nor the cold swap mechanism can therefore be operated inadvertently or simultaneously.
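The mechanical lockout above is a mutual-exclusion interlock: engaging either swap mechanism blocks the other. A hedged software analogue of that state logic (the mechanism itself is mechanical linkages, not code):

```python
class SwapInterlock:
    """Models the mutual lockout: while one swap mechanism is engaged,
    the other cannot be operated."""

    def __init__(self):
        self.engaged = None            # None, "hot", or "cold"

    def engage(self, mechanism):
        assert mechanism in ("hot", "cold")
        if self.engaged is not None and self.engaged != mechanism:
            raise RuntimeError(
                f"{self.engaged} swap engaged; {mechanism} swap locked out")
        self.engaged = mechanism

    def release(self):
        self.engaged = None

lock = SwapInterlock()
lock.engage("hot")                     # cold swap is now locked out
```

Either mechanism can be engaged from the idle state, but never both at once — the same invariant the mechanical linkages enforce.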