Patent classifications
H03M7/6017
ADAPTIVE INLINE POLLING OF DATA COMPRESSION WITH HARDWARE ACCELERATOR
A computer-implemented method of data compression using a hardware accelerator includes submitting a request to compress or decompress a data segment using a compression or decompression thread. The method also includes compressing or decompressing the data segment using a hardware accelerator, and performing inline polling of the hardware accelerator to determine whether the hardware accelerator has completed compressing or decompressing the data segment. The inline polling and the compressing or decompressing are performed in a single thread. The method also includes submitting a wakeup command to a segment thread in response to determining that the hardware accelerator has completed compressing or decompressing the data segment.
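The submit-poll-wake flow described above can be sketched in software. This is an illustrative model only: the accelerator class, timings, and function names are invented stand-ins, with the hardware simulated by a background timer.

```python
import threading
import time

class FakeAccelerator:
    """Stand-in for a hardware compression accelerator (simulated)."""
    def __init__(self):
        self._done = False

    def submit(self, data: bytes) -> None:
        self._done = False
        # Simulate asynchronous hardware work completing on a timer.
        threading.Timer(0.01, self._finish).start()

    def _finish(self) -> None:
        self._done = True

    def poll(self) -> bool:
        return self._done

def compress_segment(accel: FakeAccelerator, data: bytes,
                     wakeup: threading.Event) -> None:
    accel.submit(data)
    # Inline polling: the submitting thread itself spins on the
    # accelerator's status -- polling and compression share one thread.
    while not accel.poll():
        time.sleep(0.001)
    # Wake the segment thread once the hardware reports completion.
    wakeup.set()

wakeup = threading.Event()
compress_segment(FakeAccelerator(), b"example segment", wakeup)
print(wakeup.is_set())
```

The point of the single-thread arrangement is that no interrupt handler or separate polling thread is involved; only the final wakeup crosses a thread boundary.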
TECHNOLOGIES FOR DIVIDING WORK ACROSS ACCELERATOR DEVICES
Technologies for dividing work across one or more accelerator devices include a compute device. The compute device is to determine a configuration of each of multiple accelerator devices of the compute device, receive a job to be accelerated from a requester device remote from the compute device, and divide the job into multiple tasks for a parallelization of the multiple tasks among the one or more accelerator devices, as a function of a job analysis of the job and the configuration of each accelerator device. The compute device is further to schedule the tasks to the one or more accelerator devices based on the job analysis and execute the tasks on the one or more accelerator devices for the parallelization of the multiple tasks to obtain an output of the job.
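One simple way to divide a job as a function of each accelerator's configuration is to weight the split by device capacity. The sketch below is a hypothetical illustration; the capacity model and function names are not from the abstract.

```python
def divide_job(job_items, device_capacities):
    """Split job_items into per-device task lists, weighted by capacity."""
    total = sum(device_capacities)
    tasks, start = [], 0
    for i, cap in enumerate(device_capacities):
        # The last device takes the remainder so no item is dropped.
        if i == len(device_capacities) - 1:
            end = len(job_items)
        else:
            end = start + round(len(job_items) * cap / total)
        tasks.append(job_items[start:end])
        start = end
    return tasks

items = list(range(10))
plan = divide_job(items, [1, 1, 2])  # third device has twice the capacity
print(plan)
```

A real scheduler would also account for task data dependencies and accelerator type, not just raw capacity.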
Technologies for lifecycle management with remote firmware
Technologies for lifecycle management include multiple computing devices in communication with a lifecycle management server. On boot-up, a computing device loads a lightweight firmware boot environment. The lightweight firmware boot environment connects to the lifecycle management server and downloads one or more firmware images for controllers of the computing device. The controllers include baseboard management controllers, network interface controllers, solid-state drive controllers, or other controllers. The lifecycle management server selects firmware images and/or versions of firmware images based on the controllers or the computing device. The computing device installs each firmware image to a controller memory device coupled to a controller, and in use, each controller accesses the firmware image in the controller memory device.
TECHNOLOGIES FOR MILLIMETER WAVE RACK INTERCONNECTS
Racks and rack pods to support a plurality of sleds are disclosed herein. Switches for use in the rack pods are also disclosed herein. A rack comprises a plurality of sleds and a plurality of electromagnetic waveguides. The plurality of sleds are vertically spaced from one another. The plurality of electromagnetic waveguides communicate data signals between the plurality of sleds.
Technologies for providing manifest-based asset representation
Technologies for generating manifest data for a sled include a sled to generate manifest data indicative of one or more characteristics of the sled (e.g., hardware resources, firmware resources, a configuration of the sled, or a health of sled components). The sled is also to associate an identifier with the manifest data. The identifier uniquely identifies the sled from other sleds. Additionally, the sled is to send the manifest data and the associated identifier to a server. The sled may also detect a change in the hardware resources, firmware resources, the configuration, or component health of the sled. The sled may also generate an update of the manifest data based on the detected change, where the update specifies the detected change in the hardware resources, firmware resources, the configuration, or component health of the sled. The sled may also send the update of the manifest data to the server.
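The manifest scheme above amounts to a tagged description plus delta updates. A minimal sketch, with hypothetical field and function names, might look like this:

```python
def make_manifest(sled_id, hardware, firmware, health):
    """Build manifest data tagged with an identifier unique to this sled."""
    manifest = {"hardware": hardware, "firmware": firmware, "health": health}
    return {"id": sled_id, "manifest": manifest}

def manifest_update(old, new):
    """Return only the manifest fields that changed between two snapshots."""
    return {k: v for k, v in new["manifest"].items()
            if old["manifest"].get(k) != v}

m1 = make_manifest("sled-42", {"cpus": 2}, {"bios": "1.0"}, "ok")
m2 = make_manifest("sled-42", {"cpus": 2}, {"bios": "1.1"}, "ok")
print(manifest_update(m1, m2))
```

Sending only the delta, as the abstract describes, keeps update traffic to the server proportional to what actually changed on the sled.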
Compression and decompression engines and compressed domain processors
Compressed domain processors are configured to perform operations on data compressed in a format that preserves order. The compressed domain processors may include operations such as addition, subtraction, multiplication, division, sorting, and searching. In some cases, compression engines for compressing the data into the desired formats are provided.
COMPUTERIZED SYSTEMS AND METHODS OF DATA COMPRESSION
A computerized system and method of compressing symbolic information organized into a plurality of documents, each document having a plurality of symbols, the system and method including: (i) automatically identifying a plurality of sequential (also referred to as adjacent) and/or non-sequential (also referred to as non-adjacent) symbol pairs in an input document; (ii) counting the number of appearances of each unique symbol pair; and (iii) producing a compressed document that includes a replacement symbol at each position associated with one of the plurality of symbol pairs, at least one of which corresponds to a non-sequential symbol pair. For each non-sequential pair the compressed document includes corresponding indicia indicating a distance between locations of the non-sequential symbols of the pair in the input document.
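The pairing idea can be sketched as follows: count symbol pairs within a small window (distance 1 = sequential, distance > 1 = non-sequential), replace the most frequent pair with a replacement token, and attach the gap distance as indicia for non-sequential pairs. This is an illustrative reconstruction under those assumptions, not the patented encoder; the window, token names, and single-pass scope are invented.

```python
from collections import Counter

def compress_once(symbols, window=3, replacement="R0"):
    # Count every (symbol, symbol, distance) triple within the window.
    counts = Counter()
    for i, a in enumerate(symbols):
        for d in range(1, window + 1):
            if i + d < len(symbols):
                counts[(a, symbols[i + d], d)] += 1
    (a, b, d), _ = counts.most_common(1)[0]
    out, i = [], 0
    while i < len(symbols):
        if (i + d < len(symbols) and symbols[i] == a
                and symbols[i + d] == b):
            # Non-sequential pairs carry the distance d as indicia.
            out.append((replacement, d) if d > 1 else replacement)
            # Keep any symbols lying between the pair's two members.
            out.extend(symbols[i + 1:i + d])
            i += d + 1
        else:
            out.append(symbols[i])
            i += 1
    return out

doc = ["a", "z", "b", "a", "y", "b", "a", "w", "b"]
print(compress_once(doc))
```

Here the dominant pair ("a", "b") is non-adjacent at distance 2, so each replacement token is emitted with the distance needed to restore the original positions.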
METHODS, DEVICES AND SYSTEMS FOR EFFICIENT COMPRESSION AND DECOMPRESSION FOR HIGHER THROUGHPUT
A decompression system (800; 1100; 1300) for decompressing a compressed data block that comprises a plurality of compressed data values is presented. The decompression system has a plurality of decompression devices (700; 1200A-B) in an array or chain layout (820a-820m−1; 1120a-1120m−1; 1320a-1320m−1) for decompressing respective compressed data values of the compressed data block. A first decompression device (820a; 1120a; 1320a) is connected to a next decompression device (820b; 1120b; 1320b), and a last decompression device (820m−1; 1120m−1; 1320m−1) is connected to a preceding decompression device (820m−2; 1120m−2; 1320m−2). The first decompression device (820a; 1120a; 1320a) decompresses a compressed data value of the compressed data block and reduces the compressed data block by extracting a codeword of the compressed data value and removing the compressed data value from the compressed data block, retrieving a decompressed data value out of the extracted codeword, and passing the reduced compressed data block to the next decompression device (820b; 1120b; 1320b). The last decompression device (820m−1; 1120m−1; 1320m−1) receives a reduced compressed data block as reduced by the preceding decompression device (820m−2; 1120m−2; 1320m−2) and decompresses another compressed data value of the compressed data block by extracting a codeword of said another compressed data value, and retrieving another decompressed data value out of the extracted codeword.
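The chained layout can be modeled in software: each "device" extracts one codeword from the front of the block, decodes it, and hands the reduced block to the next device. The codebook and bit framing below are invented for the sketch; real codewords would typically be variable-length.

```python
# Fixed 2-bit codewords standing in for a real (variable-length) code.
CODEBOOK = {"00": 1, "01": 2, "10": 3, "11": 4}

def device_stage(block):
    """One device: extract a codeword, decode it, return the reduced block."""
    codeword, reduced = block[:2], block[2:]
    return CODEBOOK[codeword], reduced

def decompress_chain(block, num_devices):
    values = []
    for _ in range(num_devices):
        # Each stage consumes one codeword and passes the rest along.
        value, block = device_stage(block)
        values.append(value)
    return values

print(decompress_chain("01001011", 4))  # -> [2, 1, 3, 4]
```

In hardware the stages run concurrently on successive blocks, which is where the throughput gain of the chain comes from; this sequential loop only shows the data flow.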
Computerized methods of data compression and analysis
A computerized method and apparatus compresses symbolic information, such as text. Symbolic information is compressed by recursively identifying pairs of symbols (e.g., pairs of words or characters) and replacing each pair with a respective replacement symbol. The number of times each symbol pair appears in the uncompressed text is counted, and pairs are only replaced if they appear more than a threshold number of times. In recursive passes, each replaced pair can include a previously substituted replacement symbol. The method and apparatus can achieve high compression especially for large datasets. Metadata, such as the number of times each pair appears, generated during compression of the documents can be used to analyze the documents and find similarities between two documents.
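The recursive scheme above is closely related to byte-pair-encoding-style compression, and can be sketched as follows. Threshold, pass limit, and token names here are illustrative choices, not values from the abstract.

```python
from collections import Counter

def compress(symbols, threshold=1, max_passes=10):
    table = {}  # replacement symbol -> the pair it stands for
    for n in range(max_passes):
        # Count how often each adjacent pair appears in this pass.
        counts = Counter(zip(symbols, symbols[1:]))
        pair, freq = counts.most_common(1)[0] if counts else (None, 0)
        if freq <= threshold:
            break  # no pair appears more than the threshold; stop recursing
        rep = f"R{n}"
        table[rep] = pair
        out, i = [], 0
        while i < len(symbols):
            if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == pair:
                out.append(rep)
                i += 2
            else:
                out.append(symbols[i])
                i += 1
        symbols = out  # later passes can pair up earlier replacement symbols
    return symbols, table

compressed, table = compress(list("abababab"))
print(compressed, table)
```

Note how the second pass pairs two first-pass replacement symbols, which is the recursion the abstract describes; the pair-count metadata in `table` is also what the analysis claims reuse for comparing documents.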
TECHNOLOGIES FOR OFFLOADING ACCELERATION TASK SCHEDULING OPERATIONS TO ACCELERATOR SLEDS
Technologies for offloading acceleration task scheduling operations to accelerator sleds include a compute device to receive a request from a compute sled to accelerate the execution of a job, which includes a set of tasks. The compute device is also to analyze the request to generate metadata indicative of the tasks within the job, a type of acceleration associated with each task, and a data dependency between the tasks. Additionally, the compute device is to send an availability request, including the metadata, to one or more micro-orchestrators of one or more accelerator sleds communicatively coupled to the compute device. The compute device is further to receive availability data from the one or more micro-orchestrators, indicative of which of the tasks each micro-orchestrator has accepted for acceleration on the associated accelerator sled. Additionally, the compute device is to assign the tasks to the one or more micro-orchestrators as a function of the availability data.
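The metadata-then-availability handshake can be sketched as below. All names are hypothetical, and availability is reduced to a static set of accepted acceleration types per micro-orchestrator rather than a negotiated response.

```python
def build_metadata(job):
    """Derive per-task metadata (task id and acceleration type) from a job."""
    return [{"task": t, "accel_type": a} for t, a in job]

def assign_tasks(metadata, orchestrators):
    """orchestrators: name -> set of acceleration types it accepts."""
    assignments = {name: [] for name in orchestrators}
    for entry in metadata:
        for name, accepted in orchestrators.items():
            # Availability data: a micro-orchestrator accepts a task
            # only if it can accelerate that task's type.
            if entry["accel_type"] in accepted:
                assignments[name].append(entry["task"])
                break
    return assignments

job = [("t1", "fpga"), ("t2", "gpu"), ("t3", "fpga")]
meta = build_metadata(job)
print(assign_tasks(meta, {"sled-a": {"fpga"}, "sled-b": {"gpu"}}))
```

The key property being illustrated is that the compute device never inspects the sleds directly; it assigns purely as a function of what each micro-orchestrator reported it would accept.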