Patent classifications
H05K7/1421
Techniques to verify and authenticate resources in a data center computer environment
Embodiments are generally directed to apparatuses, methods, techniques, and so forth to receive a sled manifest comprising identifiers for physical resources of a sled; receive results of authentication and validation operations performed to authenticate and validate the physical resources of the sled; and determine whether the results of the authentication and validation operations indicate the physical resources are authenticated or not authenticated. Further, in response to a determination that the results indicate the physical resources are authenticated, permit the physical resources to process a workload; and in response to a determination that the results indicate the physical resources are not authenticated, prevent the physical resources from processing the workload.
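The gating behavior this abstract describes can be sketched as follows. This is an illustrative model only, not the patented implementation: the manifest format, the "authenticated" status string, and the function names are all assumptions made for the example.

```python
# Hypothetical sketch: resource identifiers from a sled manifest are
# checked against authentication/validation results, and a workload is
# dispatched only when every listed resource is authenticated.

def authenticated_resources(manifest, results):
    """Return the subset of manifest ids whose results mark them authenticated."""
    return {rid for rid in manifest if results.get(rid) == "authenticated"}

def dispatch_workload(manifest, results, workload):
    ok = authenticated_resources(manifest, results)
    if ok == set(manifest):
        # All physical resources passed: permit processing.
        return f"processing {workload} on {sorted(ok)}"
    # At least one resource failed: prevent processing.
    blocked = sorted(set(manifest) - ok)
    return f"blocked: unauthenticated resources {blocked}"
```

For example, a sled whose manifest lists `cpu0` and `nvme1` would only receive the workload once both ids report an authenticated status.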
TECHNOLOGIES FOR DYNAMICALLY MANAGING RESOURCES IN DISAGGREGATED ACCELERATORS
Technologies for dynamically managing resources in disaggregated accelerators include an accelerator. The accelerator includes acceleration circuitry with multiple logic portions, each capable of executing a different workload. Additionally, the accelerator includes communication circuitry to receive a workload to be executed by a logic portion of the accelerator and a dynamic resource allocation logic unit to identify a resource utilization threshold associated with one or more shared resources of the accelerator to be used by a logic portion in the execution of the workload, limit, as a function of the resource utilization threshold, the utilization of the one or more shared resources by the logic portion as the logic portion executes the workload, and subsequently adjust the resource utilization threshold as the workload is executed. Other embodiments are also described and claimed.
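The threshold-based limiting described above can be sketched in a few lines. The class name, the bandwidth-fraction interpretation of the threshold, and the exponential-smoothing adjustment policy are assumptions for illustration; the abstract does not specify how the threshold is adjusted.

```python
# Illustrative sketch (not the claimed implementation): each logic
# portion's use of a shared resource is clamped to a threshold, and the
# threshold is re-evaluated as the workload executes.

class DynamicResourceAllocator:
    def __init__(self, threshold):
        self.threshold = threshold  # e.g. fraction of shared memory bandwidth

    def limit(self, requested):
        """Clamp a utilization request to the current threshold."""
        return min(requested, self.threshold)

    def adjust(self, observed_demand):
        """Hypothetical policy: smooth the threshold toward observed demand."""
        self.threshold = 0.5 * self.threshold + 0.5 * observed_demand
```

A logic portion requesting 80% of a shared resource under a 50% threshold would be held to 50%; subsequent calls to `adjust` would let the cap drift toward the demand actually observed during execution.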
Technologies for switching network traffic in a data center
Technologies for switching network traffic include a network switch. The network switch includes one or more processors and communication circuitry coupled to the one or more processors. The communication circuitry is capable of switching network traffic of multiple link layer protocols. Additionally, the network switch includes one or more memory devices storing instructions that, when executed, cause the network switch to receive, with the communication circuitry through an optical connection, network traffic to be forwarded, and determine a link layer protocol of the received network traffic. The instructions additionally cause the network switch to forward the network traffic as a function of the determined link layer protocol. Other embodiments are also described and claimed.
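A toy model of the protocol-aware forwarding described above: the switch classifies each received frame's link layer protocol and forwards it via the handler registered for that protocol. The protocol names, the registry, and the one-byte classification rule are purely illustrative assumptions; real classification would inspect actual frame headers.

```python
# Hedged sketch of multi-protocol switching: classify, then forward via
# the handler registered for the determined link layer protocol.

HANDLERS = {}

def register(protocol):
    """Decorator registering a forwarding handler for a protocol name."""
    def wrap(fn):
        HANDLERS[protocol] = fn
        return fn
    return wrap

@register("ethernet")
def forward_ethernet(frame):
    return ("ethernet", frame)

@register("fabric")
def forward_fabric(frame):
    return ("fabric", frame)

def classify(frame):
    # Assumption for this toy model: the first byte encodes the protocol.
    return "ethernet" if frame[0] == 0 else "fabric"

def switch(frame):
    """Forward network traffic as a function of its link layer protocol."""
    return HANDLERS[classify(frame)](frame)
```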
Cabinet and slide rail kit thereof
A cabinet and a slide rail kit thereof are disclosed. The cabinet includes an equipment body and a slide rail mechanism. The equipment body includes a first wall and a second wall. One of the first wall and the second wall is provided with a guiding path. The guiding path includes a blocking feature. The slide rail mechanism includes a supporting rail and a stop. The stop can be moved with respect to the supporting rail. When at a particular position, the stop can be blocked by the blocking feature of the guiding path to prevent the supporting rail from being displaced with respect to the equipment body from a predetermined position in a certain direction.
Locking device and chassis using locking device
A locking device includes a housing, a first linking member, a second linking member, a push-pull member, and a stopper. The push-pull member defines a first sliding groove. A first end portion of the first linking member is slidably received in the first sliding groove, and a second end portion of the first linking member is rotationally fixed on the housing. The housing defines a second sliding groove. The stopper is slidably received in the second sliding groove. A second end portion of the second linking member is rotationally mounted on the first linking member. The first end portion of the first linking member is driven by the push-pull member to move along the first sliding groove, which drives the second linking member to rotate, which drives the stopper to move along the second sliding groove.
Suspended fan modules
A system includes a drawer and a fan assembly. The drawer includes a mounting structure with a support surface and an aperture. The fan assembly extends through the aperture and includes a fan module and a top cover. The top cover is coupled to the support surface.
TECHNOLOGIES FOR ASSIGNING WORKLOADS TO BALANCE MULTIPLE RESOURCE ALLOCATION OBJECTIVES
Technologies for allocating resources of managed nodes to workloads to balance multiple resource allocation objectives include an orchestrator server to receive resource allocation objective data indicative of multiple resource allocation objectives to be satisfied. The orchestrator server is additionally to determine an initial assignment of a set of workloads among the managed nodes and receive telemetry data from the managed nodes. The orchestrator server is further to determine, as a function of the telemetry data and the resource allocation objective data, an adjustment to the assignment of the workloads to increase an achievement of at least one of the resource allocation objectives without decreasing an achievement of another of the resource allocation objectives, and apply the adjustments to the assignments of the workloads among the managed nodes as the workloads are performed. Other embodiments are also described and claimed.
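The adjustment rule in this abstract (improve at least one objective without decreasing another) is a Pareto-improvement condition, which can be sketched directly. The scoring function and candidate-generation step are placeholders; the abstract does not specify how candidate reassignments are produced from telemetry data.

```python
# Illustrative sketch: accept a candidate workload assignment only if it
# weakly Pareto-dominates the current one, i.e. it raises at least one
# objective score and lowers none.

def is_improvement(old_scores, new_scores):
    """True iff new_scores improves some objective and worsens none."""
    return (all(n >= o for n, o in zip(new_scores, old_scores))
            and any(n > o for n, o in zip(new_scores, old_scores)))

def adjust_assignment(current, candidates, score):
    """Walk candidate assignments, keeping each Pareto improvement found."""
    best = current
    for candidate in candidates:
        if is_improvement(score(best), score(candidate)):
            best = candidate
    return best
```

Under this rule, moving from objective scores (1, 1) to (2, 1) is accepted, while moving to (2, 0) is rejected because the second objective regresses.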
TECHNIQUES TO SUPPORT MULTIPLE INTERCONNECT PROTOCOLS FOR A COMMON SET OF INTERCONNECT CONNECTORS
Embodiments may be generally directed to apparatuses, systems, methods, and techniques to determine a configuration for a plurality of connectors, the configuration to associate a first interconnect protocol with a first subset of the plurality of connectors and a second interconnect protocol with a second subset of the plurality of connectors, where the first interconnect protocol and the second interconnect protocol are different interconnect protocols, each comprising one of a serial link protocol, a coherent link protocol, and an accelerator link protocol; cause processing of data for communication via the first subset of the plurality of connectors in accordance with the first interconnect protocol; and cause processing of data for communication via the second subset of the plurality of connectors in accordance with the second interconnect protocol.
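The connector configuration described above amounts to a mapping from connector ids to one of the named protocol classes. The sketch below is an assumption-laden illustration: the connector naming, the split-by-index assignment, and the helper signature are not from the patent.

```python
# Hypothetical sketch: assign one interconnect protocol to a first
# subset of connectors and a different protocol to the remainder.

PROTOCOLS = {"serial", "coherent", "accelerator"}

def configure_connectors(connectors, first, second, split):
    """Map connectors[:split] to `first` and the rest to `second`.

    `first` and `second` must be distinct protocols drawn from PROTOCOLS.
    """
    if first not in PROTOCOLS or second not in PROTOCOLS:
        raise ValueError("unknown interconnect protocol")
    if first == second:
        raise ValueError("the two protocols must differ")
    return {c: (first if i < split else second)
            for i, c in enumerate(connectors)}
```

For example, configuring connectors `c0..c2` with `split=2` associates a serial link protocol with `c0` and `c1` and an accelerator link protocol with `c2`, after which data on each subset would be processed per its assigned protocol.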