Patent classifications
H05K7/1421
CABINET AND SLIDE RAIL KIT THEREOF
A cabinet and a slide rail kit thereof are disclosed. The cabinet includes an equipment body and a slide rail mechanism. The equipment body includes a first wall and a second wall. One of the first wall and the second wall is provided with a guiding path. The guiding path includes a blocking feature. The slide rail mechanism includes a supporting rail and a stop. The stop is movable with respect to the supporting rail. When the stop is at a particular position, it is blocked by the blocking feature of the guiding path, which prevents the supporting rail from being displaced from a predetermined position in a certain direction with respect to the equipment body.
TECHNOLOGIES FOR SWITCHING NETWORK TRAFFIC IN A DATA CENTER
Technologies for switching network traffic include a network switch. The network switch includes one or more processors and communication circuitry coupled to the one or more processors. The communication circuitry is capable of switching network traffic of multiple link layer protocols. Additionally, the network switch includes one or more memory devices storing instructions that, when executed, cause the network switch to receive, with the communication circuitry through an optical connection, network traffic to be forwarded, and determine a link layer protocol of the received network traffic. The instructions additionally cause the network switch to forward the network traffic as a function of the determined link layer protocol. Other embodiments are also described and claimed.
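The forwarding flow this abstract describes (receive traffic, determine its link layer protocol, forward as a function of that protocol) can be illustrated with a minimal Python sketch. The protocol set, the EtherType check, and the egress names below are assumptions made for illustration, not the patent's actual mechanism:

```python
from enum import Enum, auto

class LinkLayer(Enum):
    ETHERNET = auto()
    OTHER = auto()

def classify(frame: bytes) -> LinkLayer:
    # Toy classifier: treat a frame carrying a known EtherType (IPv4 or
    # IPv6) as Ethernet. A real multi-protocol switch would inspect the
    # link layer framing itself; this check is purely illustrative.
    if len(frame) >= 14 and frame[12:14] in (b"\x08\x00", b"\x86\xdd"):
        return LinkLayer.ETHERNET
    return LinkLayer.OTHER

def forward(frame: bytes) -> str:
    # Forward as a function of the determined link layer protocol,
    # via a dispatch table (egress names are hypothetical).
    routes = {
        LinkLayer.ETHERNET: "ethernet-egress",
        LinkLayer.OTHER: "fallback-egress",
    }
    return routes[classify(frame)]

# A minimal IPv4-over-Ethernet frame selects the Ethernet path.
frame = bytes(12) + b"\x08\x00" + bytes(46)
assert forward(frame) == "ethernet-egress"
```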
Disaggregated physical memory resources in a data center
Examples may include sleds for a rack in a data center, the sleds including physical compute resources and memory for the physical compute resources. The memory can be disaggregated, or organized into near and far memory. A first sled can comprise the physical compute resources and a first set of physical memory resources, while a second sled can comprise a second set of physical memory resources. The first set of physical memory resources can be coupled to the physical compute resources via a local interface, while the second set of physical memory resources can be coupled to the physical compute resources via a fabric.
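The near/far split described here lends itself to a small model: memory on the compute sled is reached over a local interface (near memory) and memory on a second sled is reached over the fabric (far memory), with an allocator preferring near memory. This is a sketch under those assumptions; the pool fields, capacities, and preference policy are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class MemoryPool:
    sled: str           # which sled hosts these physical memory resources
    capacity_gb: int
    attachment: str     # "local" (near memory) or "fabric" (far memory)

def allocate(pools, size_gb, prefer="local"):
    # Prefer near (locally attached) memory; fall back to far memory
    # reachable over the fabric.
    for pool in sorted(pools, key=lambda p: p.attachment != prefer):
        if pool.capacity_gb >= size_gb:
            pool.capacity_gb -= size_gb
            return pool
    raise MemoryError("no pool can satisfy the request")

pools = [
    MemoryPool("sled-1", 64, "local"),    # first set: local interface
    MemoryPool("sled-2", 512, "fabric"),  # second set: via the fabric
]
print(allocate(pools, 32).sled)  # -> sled-1 (near memory preferred)
```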
Technologies for adaptive processing of multiple buffers
Technologies for adaptive processing of multiple buffers are disclosed. A compute device may establish a buffer queue to which applications can submit buffers to be processed, such as by hashing the submitted buffers. The compute device monitors the buffer queue and determines an efficient way of processing it based on the number of buffers present. The compute device may process the buffers serially with a single processor core or may process them in parallel with single-instruction, multiple-data (SIMD) instructions. The compute device may determine which method to use by comparing the throughput of serial processing against that of parallel processing, a comparison that may depend on the number of buffers in the queue.
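The selection step can be sketched as a throughput comparison. Python has no SIMD multi-buffer hashing, so the parallel path below is a stand-in, and the throughput constants are invented for illustration; only the shape of the decision follows the abstract:

```python
import hashlib

def process_serial(buffers):
    # A single core hashes each buffer in turn.
    return [hashlib.sha256(b).hexdigest() for b in buffers]

def process_parallel(buffers):
    # Placeholder for SIMD multi-buffer hashing, in which several
    # buffers are hashed in lockstep across SIMD lanes.
    return [hashlib.sha256(b).hexdigest() for b in buffers]

def process_queue(queue, serial_tput=1.0, lane_tput=0.35, lanes=8):
    # Estimate throughput for the current queue depth and pick the
    # faster method. The constants are illustrative, not measured.
    parallel_tput = lane_tput * min(len(queue), lanes)
    if len(queue) > 1 and parallel_tput > serial_tput:
        return process_parallel(queue)
    return process_serial(queue)

digests = process_queue([b"alpha", b"beta", b"gamma", b"delta"])
```

With one queued buffer the serial path wins; once enough buffers are pending to fill the SIMD lanes, the estimated parallel throughput overtakes it.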
Out-of-band management techniques for networking fabrics
Out-of-band management techniques for networking fabrics are described. In an example embodiment, an apparatus may comprise a packet-switched network interface to deconstruct a packet received via an out-of-band management network and control circuitry to execute an out-of-band management agent, and the out-of-band management agent may be operative to identify a configuration command comprised in the received packet and control an optical circuit-switched network interface based on the configuration command. Other embodiments are described and claimed.
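As a rough sketch of the agent's behavior, assume the management packet carries a JSON-encoded command (an assumed format, not one given in the abstract) that the agent uses to reconfigure the cross-connects of an optical circuit switch:

```python
import json

class OpticalCircuitSwitch:
    # Stand-in for the optical circuit-switched network interface.
    def __init__(self):
        self.cross_connects = {}

    def connect(self, in_port: int, out_port: int):
        self.cross_connects[in_port] = out_port

def handle_management_packet(payload: bytes, ocs: OpticalCircuitSwitch):
    # Deconstruct a packet received via the out-of-band management
    # network, identify the configuration command it carries, and
    # control the optical interface accordingly.
    cmd = json.loads(payload)
    if cmd.get("op") == "cross-connect":
        ocs.connect(cmd["in_port"], cmd["out_port"])

ocs = OpticalCircuitSwitch()
handle_management_packet(
    b'{"op": "cross-connect", "in_port": 3, "out_port": 7}', ocs)
assert ocs.cross_connects[3] == 7
```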
Technologies for dynamically managing resources in disaggregated accelerators
Technologies for dynamically managing resources in disaggregated accelerators include an accelerator. The accelerator includes acceleration circuitry with multiple logic portions, each capable of executing a different workload. Additionally, the accelerator includes communication circuitry to receive a workload to be executed by a logic portion of the accelerator and a dynamic resource allocation logic unit to identify a resource utilization threshold associated with one or more shared resources of the accelerator to be used by a logic portion in the execution of the workload, limit, as a function of the resource utilization threshold, the utilization of the one or more shared resources by the logic portion as the logic portion executes the workload, and subsequently adjust the resource utilization threshold as the workload is executed. Other embodiments are also described and claimed.
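The threshold mechanism could look roughly like the following sketch: each logic portion's use of a shared resource (say, memory bandwidth) is capped at its threshold, and the thresholds are subsequently re-adjusted from observed demand. The class, its methods, and all numbers are hypothetical:

```python
class DynamicResourceAllocator:
    def __init__(self, capacity: float):
        self.capacity = capacity   # total shared resource, e.g. GB/s
        self.thresholds = {}       # logic portion id -> allowed share

    def set_threshold(self, portion: str, share: float):
        self.thresholds[portion] = share

    def grant(self, portion: str, requested: float) -> float:
        # Limit utilization as a function of the portion's threshold.
        allowed = self.thresholds.get(portion, 0.0) * self.capacity
        return min(requested, allowed)

    def rebalance(self, demand: dict):
        # Subsequently adjust thresholds in proportion to observed demand.
        total = sum(demand.values()) or 1.0
        for portion, d in demand.items():
            self.thresholds[portion] = d / total

alloc = DynamicResourceAllocator(capacity=100.0)
alloc.set_threshold("kernel-A", 0.5)
print(alloc.grant("kernel-A", 80.0))   # capped at 50.0
alloc.rebalance({"kernel-A": 30.0, "kernel-B": 10.0})
print(alloc.grant("kernel-A", 80.0))   # threshold grew, now capped at 75.0
```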
SUBSTRATE MAGAZINE, SUBSTRATE MAGAZINE SYSTEM AND SUBSTRATE PLACEMENT SYSTEM
A substrate magazine for a substrate insertion system has a frame in which several drawers, each for receiving at least one flat substrate, are arranged one above the other. Each drawer is formed by two guide rails arranged parallel to and at a distance from one another at the same height on the frame, each with a sliding surface on which a substrate lying on its edge can be displaced. At least one elastically displaceable latching element is assigned to each drawer. In a first, unloaded state, the at least one latching element extends at least partially over the sliding surface of one of the guide rails of the drawer; in a second, elastically deformed state, it releases the sliding surface.
Retractable guide features for data storage device carriers
Data storage device mounting systems that provide alignment and mounting of data storage devices are described herein. A carrier is configured to couple to a data storage device and hang the data storage device in a vertical orientation in a data storage assembly from retractable mounting pins coupled to the carrier. The carrier also includes retractable alignment features that guide the data storage device and carrier into the mounting system. Finger grips are coupled to the retractable alignment features and the mounting pins; when actuated by a user squeezing them, the finger grips extend the retractable alignment features beyond the carrier while concurrently retracting the mounting pins. When de-actuated, the finger grips extend the mounting pins.
Systems and methods for damping a storage system
In an embodiment, an apparatus (e.g., for damping the motion of a drawer in a storage system) comprises a plate to pivotally attach to a first wall of a drawer, the plate comprising a pivot point about which the plate can pivot; a damped gear coupled to the plate, the damped gear having a plurality of gear teeth; and a spring to facilitate pivoting the plate about the pivot point to engage at least one of the plurality of gear teeth with at least one tooth on a rack. In some embodiments, the spring is to pivot the plate from a first configuration to an angular position relative to the first wall in a second configuration, wherein the at least one of the plurality of gear teeth and the at least one tooth on the rack are fully engaged with one another in both the first configuration and the second configuration.
Techniques to support multiple interconnect protocols for a common set of interconnect connectors
Embodiments may be generally directed to apparatuses, systems, methods, and techniques to determine a configuration for a plurality of connectors, the configuration associating a first interconnect protocol with a first subset of the plurality of connectors and a second interconnect protocol with a second subset of the plurality of connectors, the first interconnect protocol and the second interconnect protocol being different interconnect protocols, each comprising one of a serial link protocol, a coherent link protocol, and an accelerator link protocol; to cause processing of data for communication via the first subset of the plurality of connectors in accordance with the first interconnect protocol; and to cause processing of data for communication via the second subset of the plurality of connectors in accordance with the second interconnect protocol.
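The configuration step can be sketched by assigning disjoint connector subsets to protocols and building a per-connector dispatch table. Connector ids, the Protocol names, and the validation below are illustrative assumptions:

```python
from enum import Enum

class Protocol(Enum):
    SERIAL = "serial link"
    COHERENT = "coherent link"
    ACCELERATOR = "accelerator link"

def configure(connectors, assignment):
    # Associate each connector with exactly one interconnect protocol;
    # 'assignment' maps protocol -> subset of connector ids.
    connectors = set(connectors)
    table = {}
    for proto, subset in assignment.items():
        for c in subset:
            if c not in connectors:
                raise ValueError(f"unknown connector {c}")
            if c in table:
                raise ValueError(f"connector {c} assigned to two protocols")
            table[c] = proto
    return table

table = configure(range(8), {
    Protocol.SERIAL:   {0, 1, 2, 3},  # first subset of the connectors
    Protocol.COHERENT: {4, 5, 6, 7},  # second subset of the connectors
})
print(table[5])  # data on connector 5 is processed per the coherent protocol
```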