H05K7/1442

Storage sled for a data center

Examples may include a sled for a rack of a data center including physical storage resources. The sled comprises mounting flanges to enable robotic insertion and removal from a rack and storage device mounting slots to enable robotic insertion and removal of storage devices into the sled. The storage devices are coupled to an optical fabric through storage resource controllers and a dual-mode optical network interface.

Retainer for Electronic Modules
20180206358 · 2018-07-19

A retainer can be configured to secure an electronic module to a body. A wedge arrangement of the retainer can extend along a wedge axis and can include expansion wedges and actuation wedges. Brackets can be secured at least partly around the wedge arrangement. An actuator can compress the wedge arrangement along the wedge axis so that the actuation wedges urge the expansion wedges in opposite lateral directions, relative to the wedge axis, and the expansion wedges urge the brackets perpendicularly to the opposite lateral directions and the wedge axis.

TECHNOLOGIES FOR PERFORMING SPECULATIVE DECOMPRESSION
20180205392 · 2018-07-19 ·

Technologies for performing speculative decompression include a managed node to decode a variable-size code at a present position in compressed data with a deterministic decoder while concurrently performing speculative decodes over a range of subsequent positions in the compressed data. The managed node determines the position of the next code and whether that position is within the range, and, in response to a determination that it is, outputs a symbol associated with the deterministically decoded code and another symbol associated with a speculatively decoded code at the position of the next code.
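
The decode-ahead idea lends itself to a small sketch. Below is a toy, sequential Python model of the scheme; the prefix code table, window size, and helper names are illustrative assumptions for this sketch, not the claimed hardware. In hardware, the speculative decodes would run concurrently with the deterministic one.

```python
# Toy canonical prefix code: bit string -> (symbol, code length).
# CODE_TABLE, MAX_LEN, and WINDOW are assumptions for this sketch.
CODE_TABLE = {"0": ("A", 1), "10": ("B", 2), "110": ("C", 3), "111": ("D", 3)}
MAX_LEN = 3   # longest code, in bits
WINDOW = 4    # speculative range of subsequent bit positions

def decode_at(bits, pos):
    """Deterministically decode one variable-size code starting at pos."""
    for length in range(1, MAX_LEN + 1):
        entry = CODE_TABLE.get(bits[pos:pos + length])
        if entry:
            return entry  # (symbol, code length)
    raise ValueError("no valid code at bit %d" % pos)

def speculative_decode(bits, pos):
    """Return (symbols, next position) for one deterministic decode step."""
    # Speculatively decode at every position in the window; in hardware
    # these would run concurrently with the deterministic decode below.
    speculative = {}
    for p in range(pos + 1, min(pos + 1 + WINDOW, len(bits))):
        try:
            speculative[p] = decode_at(bits, p)
        except ValueError:
            pass  # misaligned guesses simply fail
    symbol, length = decode_at(bits, pos)   # deterministic result
    out = [symbol]
    next_pos = pos + length
    if next_pos in speculative:             # hit: emit two symbols at once
        spec_symbol, spec_length = speculative[next_pos]
        out.append(spec_symbol)
        next_pos += spec_length
    return out, next_pos
```

Decoding "110100" from position 0 yields ["C", "B"] in a single step, because the speculative decode at bit 3 happens to begin exactly where the deterministic code ends.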

Technologies for heuristic huffman code generation
09973207 · 2018-05-15

Technologies for heuristic Huffman code generation include a computing device that generates a weighted list of symbols for a data block. The computing device determines a threshold weight and identifies one or more lightweight symbols in the list that have a weight less than or equal to the threshold weight. The threshold weight may be the average weight of all symbols with non-zero weight in the list. The computing device generates a balanced sub-tree of nodes for the lightweight symbols, with each lightweight symbol associated with a leaf node. The computing device adds the remaining symbols and the root of the balanced sub-tree to a heap and generates a Huffman code tree by processing the heap. The threshold weight may be adjusted to tune performance and compression ratio. Other embodiments are described and claimed.
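
The construction above can be sketched in a few lines of Python. This is a simplified model under assumed data structures (a node is a (weight, tiebreak, payload) tuple, and the helper names are inventions for this sketch), not the claimed implementation:

```python
import heapq
from itertools import count

def balanced_subtree(nodes, tie):
    """Pairwise-merge leaf nodes into a balanced sub-tree; return its root."""
    while len(nodes) > 1:
        merged = []
        for i in range(0, len(nodes) - 1, 2):
            a, b = nodes[i], nodes[i + 1]
            merged.append((a[0] + b[0], next(tie), (a, b)))
        if len(nodes) % 2:
            merged.append(nodes[-1])  # odd leaf carries to the next level
        nodes = merged
    return nodes[0]

def heuristic_huffman(weights):
    """weights: dict of symbol -> nonzero weight. Returns the tree root."""
    tie = count()  # tiebreaker so heap comparisons never reach payloads
    threshold = sum(weights.values()) / len(weights)  # average weight
    light = [(w, next(tie), s) for s, w in weights.items() if w <= threshold]
    heavy = [(w, next(tie), s) for s, w in weights.items() if w > threshold]

    # Lightweight symbols enter as one balanced sub-tree, not as leaves.
    heap = heavy
    if light:
        heap.append(balanced_subtree(sorted(light), tie))
    heapq.heapify(heap)

    # Standard Huffman merging over the (much shorter) heap.
    while len(heap) > 1:
        a = heapq.heappop(heap)
        b = heapq.heappop(heap)
        heapq.heappush(heap, (a[0] + b[0], next(tie), (a, b)))
    return heap[0]

def code_lengths(node, depth=0, out=None):
    """Walk the tree and report each symbol's code length in bits."""
    out = {} if out is None else out
    payload = node[2]
    if isinstance(payload, tuple):
        code_lengths(payload[0], depth + 1, out)
        code_lengths(payload[1], depth + 1, out)
    else:
        out[payload] = max(depth, 1)
    return out
```

Collapsing all lightweight symbols into one balanced sub-tree keeps the heap short, trading a slightly suboptimal code for faster generation, which mirrors the performance/compression-ratio tuning the abstract describes.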

Technologies for performing low-latency decompression with tree caching

Technologies for performing low-latency decompression include a managed node to parse, in response to a determination that a read tree descriptor does not match a cached tree descriptor, the read tree descriptor to construct one or more tables indicative of codes in compressed data. Each code corresponds to a different symbol. The managed node is further to decompress the compressed data with the one or more tables and store the one or more tables in association with the read tree descriptor in a cache memory for subsequent use.
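
The caching step can be sketched directly: key the constructed tables by the raw tree-descriptor bytes, so that any block whose read descriptor matches a cached one skips table construction entirely. The class and parameter names below are illustrative assumptions, and parse_descriptor stands in for the expensive parsing step:

```python
class TreeCache:
    """Cache decode tables keyed by the raw tree-descriptor bytes."""

    def __init__(self):
        self._tables = {}

    def tables_for(self, descriptor, parse_descriptor):
        # Hit: the read descriptor matches a cached descriptor, so the
        # previously constructed tables are reused and parsing is skipped.
        if descriptor in self._tables:
            return self._tables[descriptor]
        # Miss: parse the descriptor into code tables and store them in
        # association with the descriptor for subsequent use.
        tables = parse_descriptor(descriptor)
        self._tables[descriptor] = tables
        return tables
```

Compressed streams often reuse a small set of code trees across blocks, so even a modest cache avoids most descriptor parses.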

TECHNIQUES TO CONTROL SYSTEM UPDATES AND CONFIGURATION CHANGES VIA THE CLOUD

Embodiments are generally directed to apparatuses, methods, techniques, and so forth to determine an access level of operation based on an indication received via one or more network links from a pod management controller, and to enable or disable a firmware update capability for a firmware device based on the access level of operation, the firmware update capability to change firmware for the firmware device. Embodiments may also include determining one or more configuration settings of a plurality of configuration settings to enable for configuration based on the access level of operation, and enabling configuration of the one or more configuration settings.
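
As a rough illustration of the gating logic: the access levels, setting names, and thresholds below are assumptions made for this sketch, not the claimed protocol or message format.

```python
# Assumed policy: a minimum access level for firmware updates, and a
# per-setting minimum level for configuration changes.
FIRMWARE_UPDATE_LEVEL = 2
CONFIGURABLE_AT_LEVEL = {
    "boot_order": 1,
    "fan_curve": 1,
    "secure_boot": 2,
}

class FirmwareDevice:
    def __init__(self):
        self.update_enabled = False
        self.enabled_settings = set()

    def apply_access_level(self, level):
        """Apply an access level indicated by the pod management controller."""
        # Enable or disable the firmware update capability.
        self.update_enabled = level >= FIRMWARE_UPDATE_LEVEL
        # Determine which configuration settings to enable at this level.
        self.enabled_settings = {
            name for name, required in CONFIGURABLE_AT_LEVEL.items()
            if level >= required
        }
```
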

Technologies for rack architecture

A rack for supporting sleds includes a pair of elongated support posts and pairs of elongated support arms that extend from the elongated support posts. Each pair of the elongated support arms defines a sled slot to receive a corresponding sled. To do so, each elongated support arm includes a circuit board guide to receive a chassis-less circuit board substrate of the corresponding sled. The rack may include a cross-member arm associated with each sled slot and an optical connector mounted to each cross-member arm. Additional elongated support posts may be used to provide additional sled slots.

Technologies for high-performance single-stream LZ77 compression

Technologies for high-performance single-stream data compression include a computing device that updates an index data structure based on an input data stream. The input data stream is divided into multiple chunks. Each chunk has a predetermined length, such as 136 bytes, and overlaps the previous chunk by a predetermined amount, such as eight bytes. The computing device processes multiple chunks in parallel using the index data to generate multiple token streams. The tokens include literal tokens and reference tokens that refer to matching data from earlier in the input data stream. The computing device thus searches for matching data in parallel. The computing device merges the token streams to generate a single output token stream. The computing device may merge a pair of tokens from two different chunks to generate one or more synchronized tokens that are output to the output token stream. Other embodiments are described and claimed.
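
A scaled-down, sequential model of the chunking and merging scheme is sketched below. The chunk length and overlap are shrunk from the 136-byte/8-byte figures above, the matcher is a naive stand-in for the index-based search, the straddle-truncating merge plays the role of the synchronized tokens, and all names are illustrative:

```python
CHUNK = 17     # stands in for the 136-byte chunk length
OVERLAP = 4    # stands in for the 8-byte overlap

def chunks(data):
    """Divide data into fixed-length chunks, each overlapping the previous."""
    step = CHUNK - OVERLAP
    return [(start, data[start:start + CHUNK])
            for start in range(0, len(data), step)]

def tokenize(data, start, chunk, min_match=4):
    """Emit (position, token) pairs for one chunk: ("lit", byte) literal
    tokens or ("ref", offset, length) references to earlier input data."""
    tokens = []
    i, end = start, start + len(chunk)
    while i < end:
        best_len, best_off = 0, 0
        for off in range(1, i + 1):   # naive stand-in for the index search
            length = 0
            while (i + length < end
                   and data[i + length] == data[i - off + length]):
                length += 1
            if length > best_len:
                best_len, best_off = length, off
        if best_len >= min_match:
            tokens.append((i, ("ref", best_off, best_len)))
            i += best_len
        else:
            tokens.append((i, ("lit", data[i])))
            i += 1
    return tokens

def merge(token_streams):
    """Merge per-chunk streams into one output stream; references that
    straddle a chunk boundary are truncated into synchronized tokens."""
    out, pos = [], 0
    for stream in token_streams:
        for p, tok in stream:
            if tok[0] == "ref":
                _, off, length = tok
                if p + length <= pos:
                    continue          # entirely covered already
                if p < pos:           # straddles: emit a synchronized token
                    length -= pos - p
                    p = pos
                out.append(("ref", off, length))
                pos = p + length
            elif p >= pos:
                out.append(tok)
                pos = p + 1
    return out

def decode(tokens):
    """Expand a token stream back into bytes (for checking round trips)."""
    out = bytearray()
    for tok in tokens:
        if tok[0] == "lit":
            out.append(tok[1])
        else:
            _, off, length = tok
            for _ in range(length):
                out.append(out[-off])
    return bytes(out)
```

In the real design, each chunk's tokenization runs in parallel against the shared index; the overlap gives each chunk's token stream a window in which to re-synchronize with the previous one during the merge.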

TECHNOLOGIES FOR BLIND MATING FOR SLED-RACK CONNECTIONS

Technologies for blind mating of optical connectors in a rack of a data center are disclosed. In the illustrative embodiment, a sled can be slid into a rack and an optical connector on the sled will blindly mate with a corresponding optical connector on the rack. The illustrative optical connector on the sled includes two guide post receivers which mate with corresponding guide posts on the optical connector on the rack such that, when mated, optical fibers of the optical connector on the rack will be aligned and optically coupled with corresponding optical fibers on the optical connector of the sled.

Memory Sharing for Physical Accelerator Resources in a Data Center
20180024739 · 2018-01-25

Examples may include sleds for a rack in a data center including physical accelerator resources and memory for the accelerator resources. The memory can be shared between the accelerator resources. One or more memory controllers can be provided to couple the accelerator resources to the memory to provide memory access to all the accelerator resources. Each accelerator resource can include a memory controller to access a portion of the memory while the accelerator resources can be coupled via an out-of-band channel to provide memory access to the other portions of the memory.