Patent classifications
H05K7/1442
Technologies for providing power to a rack
A rack for supporting sleds includes a pair of elongated support posts and pairs of elongated support arms that extend from the elongated support posts. Each pair of the elongated support arms defines a sled slot to receive a corresponding sled. A power supply is attached to an elongated support arm of each pair of elongated support arms to provide power to a corresponding sled. The power supply may include a chassis-less circuit board substrate that is removable from a power supply housing coupled to the corresponding elongated support arm.
Robotically serviceable computing rack and sleds
Examples may include racks for a data center and sleds for the racks, the sleds arranged to house physical resources for the data center. The sleds and racks can be arranged to be autonomously manipulated, such as, by a robot. The sleds and racks can include features to facilitate automated installation, removal, maintenance, and manipulation by a robot.
TECHNOLOGIES FOR ADAPTIVE PROCESSING OF MULTIPLE BUFFERS
Technologies for adaptive processing of multiple buffers are disclosed. A compute device may establish a buffer queue to which applications can submit buffers to be processed, such as by hashing the submitted buffers. The compute device monitors the buffer queue and determines an efficient way of processing the buffer queue based on the number of buffers present. The compute device may process the buffers serially with a single processor core of the compute device or may process the buffers in parallel with single-instruction, multiple-data (SIMD) instructions. The compute device may determine which method to use based on a comparison of the throughput of serially processing the buffers against the throughput of parallel processing, which may depend on the number of buffers in the buffer queue.
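The serial-versus-parallel dispatch described in this abstract can be sketched in Python. This is an illustrative sketch, not the patented implementation: `SIMD_LANES`, `parallel_threshold`, and the lane-width batching stand-in for a real multi-buffer SIMD hash are all assumed names and values.

```python
import hashlib

SIMD_LANES = 8  # hypothetical lane width of the parallel hash path


def process_serial(buffers):
    # One buffer at a time on a single processor core.
    return [hashlib.sha256(b).hexdigest() for b in buffers]


def process_parallel(buffers):
    # Stand-in for a SIMD multi-buffer hash: group buffers into
    # lane-width batches that a real implementation would hash in
    # lockstep with SIMD instructions.
    digests = []
    for i in range(0, len(buffers), SIMD_LANES):
        batch = buffers[i:i + SIMD_LANES]
        digests.extend(hashlib.sha256(b).hexdigest() for b in batch)
    return digests


def process_queue(buffers, parallel_threshold=4):
    # Heuristic from the abstract: choose the path with the higher
    # expected throughput, which depends on how many buffers are queued.
    if len(buffers) < parallel_threshold:
        return process_serial(buffers)
    return process_parallel(buffers)
```

In practice the threshold would be derived from measured throughput of the two paths rather than fixed, since the crossover point depends on the hardware's SIMD width and the buffer sizes.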
Storage sled and techniques for a data center
Examples may include a sled for a rack of a data center including physical storage resources. The sled comprises an array of storage devices and an array of memory. The storage devices and memory are directly coupled to storage resource processing circuits, which are in turn directly coupled to dual-mode optical network interface circuitry. The circuitry can store data on the storage devices and metadata associated with the data on non-volatile memory in the memory array.
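The data/metadata split described in this abstract can be illustrated with a small sketch. The class, the metadata fields, and the dictionaries standing in for the storage device array and the non-volatile memory array are all illustrative assumptions, not the sled's actual design.

```python
import hashlib


class StorageSled:
    # Sketch of the split: payloads go to the storage device array,
    # per-object metadata goes to non-volatile memory.
    def __init__(self):
        self.storage_devices = {}  # stands in for the storage device array
        self.nv_memory = {}        # stands in for the non-volatile memory array

    def put(self, key, data):
        # Store the data on a storage device and its metadata in NVM.
        self.storage_devices[key] = data
        self.nv_memory[key] = {
            'length': len(data),
            'checksum': hashlib.sha256(data).hexdigest(),
        }

    def get(self, key):
        # Read the data back and validate it against the metadata
        # held in non-volatile memory.
        data = self.storage_devices[key]
        meta = self.nv_memory[key]
        assert hashlib.sha256(data).hexdigest() == meta['checksum']
        return data
```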
APPARATUS FOR MOUNTING A PROCESSOR FOR CLUSTER COMPUTING
A bracket for mounting a processor and a support structure for receiving bracket-supported processors for cluster computing are provided. In some embodiments, a bracket may be configured to receive a processor and fasten the processor to the bracket. The bracket may be configured to mount the processor to a support structure. The support structure may be configured to receive an array of brackets. The support structure may be configured to be stacked in combination with additional support structures.
APPARATUS FOR MOUNTING PROCESSORS FOR CLUSTER COMPUTING
A bracket for mounting a processor and a support structure for receiving bracket-supported processors for cluster computing are provided. In some embodiments, a bracket may be configured to receive a processor and fasten the processor to the bracket. The bracket may be configured to mount the processor to a support structure. The support structure may be configured to receive an array of brackets. The support structure may be configured to be stacked in combination with additional support structures.
Technologies for performing partially synchronized writes
Technologies for managing partially synchronized writes include a managed node. The managed node is to issue a write request to write a data block, on behalf of a workload, to multiple data storage devices connected to a network, pause execution of the workload, receive an initial acknowledgement associated with one of the multiple data storage devices, wherein the initial acknowledgement is indicative of successful storage of the data block, and resume execution of the workload after receipt of the initial acknowledgement and before receipt of subsequent acknowledgements associated with any of the other data storage devices. Other embodiments are also described and claimed.
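The resume-on-first-acknowledgement behavior described in this abstract can be sketched with Python threads. This is a minimal sketch, not the patented implementation: the stores are modeled as plain callables, and the thread pool stands in for the network paths to the data storage devices.

```python
import concurrent.futures


def replicate_write(data, stores):
    # Issue the write to every store concurrently (the workload is
    # "paused" while we wait), then resume as soon as the first
    # acknowledgement arrives; the remaining replicas complete in
    # the background.
    executor = concurrent.futures.ThreadPoolExecutor(max_workers=len(stores))
    futures = [executor.submit(store, data) for store in stores]
    done, _pending = concurrent.futures.wait(
        futures, return_when=concurrent.futures.FIRST_COMPLETED)
    first_ack = next(iter(done)).result()
    executor.shutdown(wait=False)  # don't block on the slower replicas
    return first_ack
```

The point of the technique is latency: the workload sees the write latency of the fastest replica rather than the slowest, while still eventually achieving full replication.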
Apparatus for mounting processors for cluster computing
A bracket for mounting a processor and a support structure for receiving bracket-supported processors for cluster computing are provided. In some embodiments, a bracket may be configured to receive a processor and fasten the processor to the bracket. The bracket may be configured to mount the processor to a support structure. The support structure may be configured to receive an array of brackets. The support structure may be configured to be stacked in combination with additional support structures.
Technologies for performing speculative decompression
Technologies for performing speculative decompression include a managed node to decode a variable size code at a present position in compressed data with a deterministic decoder and concurrently perform speculative decodes over a range of subsequent positions in the compressed data, determine the position of the next code, determine whether the position of the next code is within the range, and output, in response to a determination that the position of the next code is within the range, a symbol associated with the deterministically decoded code and another symbol associated with a speculatively decoded code at the position of the next code.
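The speculative decode described in this abstract can be sketched with a toy prefix code. The code table, the `lookahead` range, and the single-step speculation are illustrative assumptions; a real implementation would speculate across many positions in hardware-friendly form rather than with Python dictionaries.

```python
# Toy prefix code: '0' -> 'A', '10' -> 'B', '11' -> 'C'.
CODES = {'0': 'A', '10': 'B', '11': 'C'}


def decode_at(bits, pos):
    # Decode the single variable-size code starting at `pos`;
    # returns (symbol, next_position), or None if no code matches
    # (e.g. the speculative position was past the end of a code).
    for code, symbol in CODES.items():
        if bits.startswith(code, pos):
            return symbol, pos + len(code)
    return None


def speculative_decode(bits, pos, lookahead=2):
    # Deterministically decode the code at `pos` while speculatively
    # decoding every position in the next `lookahead` bits. If the
    # true next position falls within that range, its symbol is
    # already available and can be output in the same step.
    speculative = {p: decode_at(bits, p)
                   for p in range(pos + 1, pos + 1 + lookahead)}
    symbol, next_pos = decode_at(bits, pos)  # assumes valid input at `pos`
    out = [symbol]
    if next_pos in speculative and speculative[next_pos] is not None:
        next_symbol, next_pos = speculative[next_pos]
        out.append(next_symbol)
    return out, next_pos
```

In a sequential decoder each step depends on the previous one (the next position is unknown until the current code's length is known); speculating over a range of candidate positions breaks that dependency chain so the work can proceed concurrently.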
Technologies for switching network traffic in a data center
Technologies for switching network traffic include a network switch. The network switch includes one or more processors and communication circuitry coupled to the one or more processors. The communication circuitry is capable of switching network traffic of multiple link layer protocols. Additionally, the network switch includes one or more memory devices storing instructions that, when executed, cause the network switch to receive, with the communication circuitry through an optical connection, network traffic to be forwarded, and determine a link layer protocol of the received network traffic. The instructions additionally cause the network switch to forward the network traffic as a function of the determined link layer protocol. Other embodiments are also described and claimed.
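The protocol-dependent forwarding described in this abstract can be sketched as a dispatch table. The one-byte protocol tag and the two handler names are illustrative assumptions; a real switch would parse actual frame headers to determine the link layer protocol.

```python
def forward_ethernet(frame):
    # Hypothetical forwarding path for an Ethernet-like protocol.
    return ('ethernet', frame)


def forward_fabric(frame):
    # Hypothetical forwarding path for a second link layer protocol.
    return ('fabric', frame)


# Hypothetical one-byte protocol tags mapping to forwarding paths.
HANDLERS = {0x01: forward_ethernet, 0x02: forward_fabric}


def switch(frame):
    # Determine the link layer protocol of the received frame, then
    # forward it as a function of that protocol.
    protocol_id = frame[0]
    handler = HANDLERS.get(protocol_id)
    if handler is None:
        raise ValueError(f'unknown link layer protocol {protocol_id:#x}')
    return handler(frame[1:])
```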