Patent classifications
H04Q2213/13527
Techniques to process packets in a dual-mode switching environment
Various embodiments are generally directed to an apparatus, method and other techniques to receive a packet via an optical fabric, the packet comprising a switch mode indicator, determine a switch mode for the packet based on the switch mode indicator, and process the packet in accordance with a first protocol or a second protocol based on the switch mode.
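The receive-inspect-dispatch flow described in the abstract can be pictured in a few lines of Python. This is a minimal sketch under stated assumptions: the indicator's position (first header byte), the mode values, and the handler names are all illustrative, not taken from the patent.

```python
# Assumed layout: the switch-mode indicator is the first header byte.
MODE_FIRST = 0x00   # assumed indicator value for the first protocol
MODE_SECOND = 0x01  # assumed indicator value for the second protocol

def handle_first_protocol(payload: bytes) -> str:
    # placeholder for processing under the first protocol
    return f"protocol-1:{len(payload)} bytes"

def handle_second_protocol(payload: bytes) -> str:
    # placeholder for processing under the second protocol
    return f"protocol-2:{len(payload)} bytes"

def process_packet(packet: bytes) -> str:
    """Read the switch-mode indicator and dispatch to the matching protocol."""
    mode, payload = packet[0], packet[1:]
    if mode == MODE_FIRST:
        return handle_first_protocol(payload)
    if mode == MODE_SECOND:
        return handle_second_protocol(payload)
    raise ValueError(f"unknown switch mode {mode:#x}")
```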
Memory sharing for physical accelerator resources in a data center
Examples may include sleds for a rack in a data center including physical accelerator resources and memory for the accelerator resources. The memory can be shared between the accelerator resources. One or more memory controllers can be provided to couple the accelerator resources to the memory to provide memory access to all the accelerator resources. Each accelerator resource can include a memory controller to access a portion of the memory while the accelerator resources can be coupled via an out-of-band channel to provide memory access to the other portions of the memory.
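One way to picture this arrangement is as address-range routing: an address inside an accelerator's own region is served by its local memory controller, while addresses in other regions are forwarded to the owning peer over the out-of-band channel. The sketch below is purely illustrative; the class, field names, and byte-addressed model are assumptions, not the patent's design.

```python
class Accelerator:
    """Each accelerator owns one region of the shared memory behind its
    local memory controller; accesses outside that region are forwarded
    to the owning peer over an out-of-band channel (here, a shared dict)."""

    def __init__(self, index: int, region_size: int, peers: dict):
        self.index = index
        self.region_size = region_size
        self.local = bytearray(region_size)  # region behind the local controller
        self.peers = peers                   # stand-in for the out-of-band channel

    def read(self, addr: int) -> int:
        owner, offset = divmod(addr, self.region_size)
        if owner == self.index:
            return self.local[offset]            # local controller access
        return self.peers[owner].local[offset]   # out-of-band access to a peer

    def write(self, addr: int, value: int) -> None:
        owner, offset = divmod(addr, self.region_size)
        if owner == self.index:
            self.local[offset] = value
        else:
            self.peers[owner].local[offset] = value
```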
Configurable computing resource physical location determination
Examples may include techniques to determine locations of a physical resource in a data center. A data center can include a number of racks having sled spaces. The sled spaces accommodate sleds having one or more physical resources disposed on each sled. The racks and sleds can include a beacon and beacon sensor, respectively, operable to determine a location of the sleds within the data center. Beacons and beacon sensors can exchange signals, and a pod controller can receive an information element including indications of the exchanged signals and determine a location of the physical resource within the data center.
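The pod controller's decision can be reduced to a simple rule for illustration: attribute the sled to the beacon it hears most strongly. This is a hypothetical sketch; the reading format and the strongest-signal heuristic are assumptions, and a real controller could use any signal-based inference.

```python
def locate_sled(readings):
    """Infer a sled's location from beacon readings reported by its sensor.

    `readings` is assumed to be a list of (rack_id, slot_id, signal_strength)
    tuples, where signal_strength is in dBm (less negative = stronger).
    """
    rack, slot, _ = max(readings, key=lambda r: r[2])
    return rack, slot
```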
Technologies for cooling rack mounted sleds
Technologies for rack cooling include monitoring a temperature of a sled mounted in a rack and controlling a cooling system of the rack based on the temperature of the sled. The cooling system includes a cooling fan array, which may be controlled to cool the sled. Additionally, if needed, one or more cooling fan arrays located adjacent to the controlled cooling fan array may be adjusted to provide additional cooling to the sled.
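The spill-to-adjacent-arrays behavior can be sketched as a proportional controller that saturates: each array is driven by its own sled's temperature excess, and any demand beyond 100% duty is shared with neighboring arrays. The gain, target, and halving of the excess are illustrative assumptions.

```python
def fan_duties(temps, target=60.0, gain=10.0, max_duty=100.0):
    """Return per-array fan duty cycles (0-100%).

    Each array is driven proportionally to its sled's temperature excess
    over `target`; when an array saturates at `max_duty`, the leftover
    demand spills to the adjacent arrays (half to each side that exists).
    All constants are assumed values for illustration.
    """
    duties = [min(max_duty, max(0.0, (t - target) * gain)) for t in temps]
    for i, t in enumerate(temps):
        demand = max(0.0, (t - target) * gain)
        excess = demand - duties[i]
        if excess > 0:
            for j in (i - 1, i + 1):           # adjacent fan arrays
                if 0 <= j < len(duties):
                    duties[j] = min(max_duty, duties[j] + excess / 2)
    return duties
```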
Technologies for providing power to a rack
A rack for supporting sleds includes a pair of elongated support posts and pairs of elongated support arms that extend from the elongated support posts. Each pair of the elongated support arms defines a sled slot to receive a corresponding sled. A power supply is attached to an elongated support arm of each pair of elongated support arms to provide power to a corresponding sled. The power supply may include a chassis-less circuit board substrate that is removable from a power supply housing coupled to the corresponding elongated support arm.
Robotically serviceable computing rack and sleds
Examples may include racks for a data center and sleds for the racks, the sleds arranged to house physical resources for the data center. The sleds and racks can be arranged to be autonomously manipulated, such as, by a robot. The sleds and racks can include features to facilitate automated installation, removal, maintenance, and manipulation by a robot.
Technologies for adaptive processing of multiple buffers
Technologies for adaptive processing of multiple buffers are disclosed. A compute device may establish a buffer queue to which applications can submit buffers to be processed, such as by hashing the submitted buffers. The compute device monitors the buffer queue and determines an efficient way of processing the buffer queue based on the number of buffers present. The compute device may process the buffers serially with a single processor core of the compute device or may process the buffers in parallel with single-instruction, multiple data (SIMD) instructions. The compute device may determine which method to use based on a comparison of the throughput of serially processing the buffers as compared to parallel processing the buffers, which may depend on the number of buffers in the buffer queue.
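The depth-based strategy choice can be sketched as follows. Python has no SIMD multi-buffer hashing, so the parallel path below is a fixed-width batched loop standing in for SIMD lanes; the crossover threshold and lane width are assumed values, where a real implementation would derive them from measured throughput.

```python
import hashlib

PARALLEL_THRESHOLD = 4  # assumed crossover point between the two strategies
LANE_WIDTH = 4          # assumed number of SIMD lanes

def process_buffers(buffer_queue):
    """Hash every buffer in the queue, choosing a strategy by queue depth.

    Few buffers: plain serial hashing, as a single core would do it.
    Many buffers: process in fixed-width batches, standing in for SIMD
    multi-buffer hashing (each batch's buffers would occupy one lane each).
    """
    if len(buffer_queue) < PARALLEL_THRESHOLD:
        return [hashlib.sha256(b).hexdigest() for b in buffer_queue]
    digests = []
    for start in range(0, len(buffer_queue), LANE_WIDTH):
        batch = buffer_queue[start:start + LANE_WIDTH]
        digests.extend(hashlib.sha256(b).hexdigest() for b in batch)
    return digests
```

Either path produces the same digests; only the processing schedule differs, which is the point of choosing by throughput rather than by result.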
Storage sled and techniques for a data center
Examples may include a sled for a rack of a data center including physical storage resources. The sled comprises an array of storage devices and an array of memory. The storage devices and memory are directly coupled to storage resource processing circuits, which are themselves directly coupled to dual-mode optical network interface circuitry. The circuitry can store data on the storage devices and metadata associated with the data on non-volatile memory in the memory array.
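The data/metadata split can be modeled as two stores written on every put: blocks go to the storage-device array, while a record of where the block lives goes to a structure standing in for the non-volatile memory array. The class, placement policy, and metadata fields below are illustrative assumptions.

```python
import zlib

class StorageSled:
    """Sketch of the split: data blocks on the storage-device array,
    per-block metadata in a dict standing in for the NVM array."""

    def __init__(self, num_devices: int = 4):
        self.devices = [dict() for _ in range(num_devices)]  # storage array
        self.metadata = {}  # stand-in for non-volatile memory

    def put(self, key: str, data: bytes) -> None:
        dev = hash(key) % len(self.devices)  # assumed placement policy
        self.devices[dev][key] = data
        self.metadata[key] = {"device": dev,
                              "length": len(data),
                              "crc": zlib.crc32(data)}

    def get(self, key: str) -> bytes:
        meta = self.metadata[key]            # metadata lookup first
        data = self.devices[meta["device"]][key]
        if zlib.crc32(data) != meta["crc"]:
            raise IOError(f"checksum mismatch for {key!r}")
        return data
```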
Technologies for performing partially synchronized writes
Technologies for managing partially synchronized writes include a managed node. The managed node is to issue a write request to write a data block, on behalf of a workload, to multiple data storage devices connected to a network, pause execution of the workload, receive an initial acknowledgement associated with one of the multiple data storage devices, wherein the initial acknowledgement is indicative of successful storage of the data block, and resume execution of the workload after receipt of the initial acknowledgement and before receipt of subsequent acknowledgements associated with any of the other data storage devices. Other embodiments are also described and claimed.
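The issue/pause/resume-on-first-ack pattern maps naturally onto threads and a blocking queue: the write fans out to all devices, the workload blocks on the first acknowledgement, and the remaining acknowledgements drain in the background. This is a minimal sketch; real devices would be network endpoints, here simulated as callables.

```python
import queue
import threading

def write_replicated(data, devices):
    """Issue `data` to every device concurrently, pause until the first
    acknowledgement arrives, then resume.

    `devices` is assumed to be a list of callables simulating network
    writes; each call returning counts as that device's acknowledgement.
    Returns the device that acknowledged first.
    """
    acks = queue.Queue()

    def send(dev):
        dev(data)      # simulated network write to one device
        acks.put(dev)  # the device's acknowledgement

    for dev in devices:
        threading.Thread(target=send, args=(dev,), daemon=True).start()

    return acks.get()  # the workload is "paused" here until the first ack
```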
Technologies for performing speculative decompression
Technologies for performing speculative decompression include a managed node to decode a variable size code at a present position in compressed data with a deterministic decoder and concurrently perform speculative decodes over a range of subsequent positions in the compressed data, determine the position of the next code, determine whether the position of the next code is within the range, and output, in response to a determination that the position of the next code is within the range, a symbol associated with the deterministically decoded code and another symbol associated with a speculatively decoded code at the position of the next code.
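The mechanism can be illustrated with a toy prefix code: decode deterministically at the current position while speculatively decoding at each position in a small range ahead; if the deterministic decode lands on a speculated position, that symbol is already available and two symbols are emitted in one step. The code table, range size, and serial "concurrency" below are illustrative assumptions (hardware would run the speculative decoders in parallel).

```python
# Toy variable-size prefix code standing in for the compressed format.
CODES = {"0": "A", "10": "B", "110": "C"}
MAX_CODE_LEN = 3

def decode_at(bits: str, pos: int):
    """Decode one variable-size code starting at pos; return (symbol, next_pos)."""
    for length in range(1, MAX_CODE_LEN + 1):
        sym = CODES.get(bits[pos:pos + length])
        if sym is not None:
            return sym, pos + length
    raise ValueError(f"no valid code at position {pos}")

def speculative_step(bits: str, pos: int, spec_range: int = 3):
    """Deterministically decode at pos while speculatively decoding at
    pos+1 .. pos+spec_range; if the true next position falls inside the
    speculated range, emit both symbols in a single step."""
    speculative = {}
    for p in range(pos + 1, pos + spec_range + 1):  # concurrent in hardware
        if p < len(bits):
            try:
                speculative[p] = decode_at(bits, p)
            except ValueError:
                pass                                 # misaligned guess
    symbol, next_pos = decode_at(bits, pos)          # deterministic decode
    if next_pos in speculative:
        symbol2, next_pos2 = speculative[next_pos]
        return [symbol, symbol2], next_pos2
    return [symbol], next_pos
```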