Patent classification: H03M7/6017
TECHNOLOGIES FOR COORDINATING DISAGGREGATED ACCELERATOR DEVICE RESOURCES
A compute device to manage workflow to disaggregated computing resources is provided. The compute device comprises a compute engine to receive a workload processing request, the workload processing request defined by at least one request parameter, determine at least one accelerator device capable of processing a workload in accordance with the at least one request parameter, transmit the workload to the at least one accelerator device, receive a work product produced by the at least one accelerator device from the workload, and provide the work product to an application.
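The claimed flow — receive a parameterized request, select a capable accelerator, dispatch the workload, and return the work product — can be sketched roughly as follows. The class and method names here are illustrative, not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class Accelerator:
    name: str
    capabilities: set  # e.g. {"compression", "crypto"}

    def process(self, workload):
        # Stand-in for real device execution.
        return f"result({workload})"

class ComputeEngine:
    def __init__(self, accelerators):
        self.accelerators = accelerators

    def handle_request(self, workload, required_capability):
        # Determine an accelerator device capable of processing the
        # workload in accordance with the request parameter.
        for acc in self.accelerators:
            if required_capability in acc.capabilities:
                # Transmit the workload, receive the work product,
                # and provide it back to the application.
                return acc.process(workload)
        raise RuntimeError("no capable accelerator available")
```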
Technologies for accelerator interface
Technologies for an accelerator interface over Ethernet are disclosed. In the illustrative embodiment, a network interface controller of a compute device may receive a data packet. If the network interface controller determines that the data packet should be pre-processed (e.g., decrypted) with a remote accelerator device, the network interface controller may encapsulate the data packet in an encapsulating network packet and send the encapsulating network packet to a remote accelerator device on a remote compute device. The remote accelerator device may pre-process the data packet (e.g., decrypt the data packet) and send it back to the network interface controller. The network interface controller may then send the pre-processed packet to a processor of the compute device.
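The decision path described above might look like the following sketch, with a toy XOR "decryption" standing in for the remote accelerator's pre-processing; all names and header fields are hypothetical.

```python
class RemoteAccelerator:
    # Toy stand-in: "decrypts" by XOR-ing each byte with a fixed key.
    def preprocess(self, payload: bytes) -> bytes:
        return bytes(b ^ 0x2A for b in payload)

def encapsulate(packet: bytes, accel_addr: str) -> dict:
    # Wrap the original packet in an encapsulating network packet
    # addressed to the remote accelerator (fields are illustrative).
    return {"dst": accel_addr, "payload": packet}

def nic_receive(packet: bytes, needs_preprocess, accel) -> bytes:
    if needs_preprocess(packet):
        outer = encapsulate(packet, "accel-host:9000")
        packet = accel.preprocess(outer["payload"])  # e.g. decryption
    return packet  # then forwarded to the host processor
```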
Decompression Engine for Decompressing Compressed Input Data that Includes Multiple Streams of Data
An electronic device includes a decompression engine with N decoders and a decompressor, and decompresses compressed input data that includes N streams of data. Upon receiving a command to decompress the compressed input data, the decompression engine causes each of the N decoders to decode a respective one of the N streams from the compressed input data separately and substantially in parallel with the other decoders. Each decoder outputs a stream of decoded data of a respective type used for generating commands associated with a compression standard for decompressing the compressed input data. The decompressor then generates, from the streams of decoded data output by the N decoders, commands for decompressing the data using the compression standard to recreate the original data. The decompressor executes these commands to recreate the original data and stores the original data in a memory or provides it to another entity.
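The decode-in-parallel-then-execute structure can be sketched as the following skeleton, with trivial stand-ins for the decoders and the command generator (none of these names come from the patent):

```python
from concurrent.futures import ThreadPoolExecutor

def decompress_streams(streams, decoders, build_commands, execute):
    # Decode each of the N streams with its own decoder, separately
    # and substantially in parallel.
    with ThreadPoolExecutor(max_workers=len(decoders)) as pool:
        decoded = list(pool.map(lambda pair: pair[0](pair[1]),
                                zip(decoders, streams)))
    # Generate commands from the decoded streams, then execute them
    # to recreate the original data.
    commands = build_commands(decoded)
    return execute(commands)
```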
SHAPE AND DATA FORMAT CONVERSION FOR ACCELERATORS
A method for converting a shape and a format of tensor data to meet a specific data format of a hardware accelerator is provided. The method receives input tensors L_1 and L_2, each being constants having a data format of <X x Y x Z>, and each further having an n-dimension input tensor shape <X_n x X_{n-1} x X_{n-2} x ... x X_1>. The method stores the input tensor shape. The method calculates an n-dimension modified shape of the input tensors by (a) setting S_1 to the largest divisor of (X_n x X_{n-1} x ... x X_1) that is ≤ L_1, (b) setting S_2 to the largest divisor of ((X_n x X_{n-1} x ... x X_1) / S_1) that is ≤ L_2, (c) setting S_3 to ((X_n x X_{n-1} x ... x X_1) / (S_1 x S_2)), and (d) returning the n-dimension modified shape as <S_3 x S_2 x S_1>.
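Steps (a)-(d) reduce the n-dimensional shape to three dimensions whose element count is unchanged; a direct sketch (function names are mine, not the patent's):

```python
import math

def largest_divisor_at_most(n, limit):
    # Largest divisor of n that does not exceed limit.
    for d in range(min(n, limit), 0, -1):
        if n % d == 0:
            return d

def modified_shape(shape, l1, l2):
    total = math.prod(shape)                       # X_n * ... * X_1
    s1 = largest_divisor_at_most(total, l1)        # step (a)
    s2 = largest_divisor_at_most(total // s1, l2)  # step (b)
    s3 = total // (s1 * s2)                        # step (c)
    return (s3, s2, s1)                            # step (d)
```

For example, a <4 x 3 x 2> shape with L_1 = 8 and L_2 = 4 maps to <1 x 3 x 8>, preserving the element count of 24.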
Methods and apparatus for compressing data streams
Methods and apparatus for compressing data streams. In an embodiment, a method includes calculating a probability distribution function (PDF) for scalar data, matching the PDF against PDF templates to determine the closest matching PDF template, and selecting an encoder corresponding to the closest matching PDF template, whereby a corresponding encoder identifier is determined. The method also includes encoding the scalar data with the encoder to generate an encoded stream, and transmitting the encoded stream and the encoder identifier.
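The template-matching step can be sketched as a nearest-PDF search. Sum-of-squared-differences is used here as an assumed distance metric; the abstract does not specify one.

```python
def closest_template_id(pdf, templates):
    # templates maps encoder identifier -> reference PDF; return the
    # identifier whose template is nearest to the measured PDF.
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(templates, key=lambda tid: dist(pdf, templates[tid]))
```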
Adaptive inline polling of data compression with hardware accelerator
A computer-implemented method of data compression using a hardware accelerator includes submitting a request to compress or decompress a data segment using a compression or decompression thread. The method also includes compressing or decompressing the data segment using a hardware accelerator, and performing inline polling of the hardware accelerator to determine whether the hardware accelerator has completed compressing or decompressing the data segment. The inline polling and the compressing or decompressing are performed in a single thread. The method also includes submitting a wakeup command to a segment thread in response to determining that the hardware accelerator has completed compressing or decompressing the data segment.
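A minimal single-thread sketch of the submit-then-inline-poll pattern, using a fake accelerator that completes after a few polls (all names hypothetical):

```python
import time

class FakeAccelerator:
    # Toy device: "compresses" by reversing bytes, ready after 3 polls.
    def __init__(self):
        self.jobs = {}

    def submit(self, segment):
        job_id = len(self.jobs)
        self.jobs[job_id] = {"polls_left": 3, "data": segment[::-1]}
        return job_id

    def is_done(self, job_id):
        job = self.jobs[job_id]
        job["polls_left"] -= 1
        return job["polls_left"] <= 0

    def result(self, job_id):
        return self.jobs[job_id]["data"]

def compress_inline(accel, segment, poll_interval=0.0):
    # Submit, then poll in the SAME thread until completion; no
    # separate polling thread is spawned.
    job = accel.submit(segment)
    while not accel.is_done(job):
        time.sleep(poll_interval)
    return accel.result(job)
```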
TECHNOLOGIES FOR PROVIDING MANIFEST-BASED ASSET REPRESENTATION
Technologies for generating manifest data for a sled include a sled to generate manifest data indicative of one or more characteristics of the sled (e.g., hardware resources, firmware resources, a configuration of the sled, or a health of sled components). The sled is also to associate an identifier with the manifest data. The identifier uniquely identifies the sled from other sleds. Additionally, the sled is to send the manifest data and the associated identifier to a server. The sled may also detect a change in the hardware resources, firmware resources, the configuration, or component health of the sled. The sled may also generate an update of the manifest data based on the detected change, where the update specifies the detected change in the hardware resources, firmware resources, the configuration, or component health of the sled. The sled may also send the update of the manifest data to the server.
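The manifest/update flow above can be sketched with plain dictionaries; the field names are illustrative, not from the patent.

```python
def build_manifest(sled_id, hardware, firmware, config, health):
    # Manifest data indicative of the sled's characteristics, tagged
    # with an identifier that uniquely identifies this sled.
    return {"id": sled_id,
            "hardware": hardware, "firmware": firmware,
            "config": config, "health": health}

def manifest_update(old, new):
    # The update specifies only the detected changes.
    return {k: v for k, v in new.items() if old.get(k) != v}
```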
System and method for increasing logical space for native backup appliance
One embodiment provides a computer-implemented method of data compression including segmenting user data into data segments; deduplicating the data segments to form deduped data segments; compressing the deduped data segments into compression units using a hardware accelerator; packing the compression units into compression regions; and packing the compression regions into one or more containers.
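The pipeline — segment, deduplicate, compress into units, pack into regions and containers — can be sketched with zlib standing in for the hardware accelerator; segment size and packing structure here are illustrative.

```python
import hashlib
import zlib

def ingest(user_data: bytes, segment_size: int = 4) -> list:
    # Segment the user data.
    segments = [user_data[i:i + segment_size]
                for i in range(0, len(user_data), segment_size)]
    # Deduplicate segments by fingerprint.
    seen, deduped = set(), []
    for seg in segments:
        fp = hashlib.sha256(seg).digest()
        if fp not in seen:
            seen.add(fp)
            deduped.append(seg)
    # Compress deduped segments into compression units (zlib stands
    # in for the hardware accelerator).
    units = [zlib.compress(seg) for seg in deduped]
    # Pack units into a compression region, regions into a container.
    region = b"".join(units)
    return [region]  # one container holding one region
```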
STORAGE DEVICE
The storage device includes a first memory, a process device that stores data in the first memory and reads the data from the first memory, and an accelerator that includes a second memory different from the first memory. The accelerator stores, in the second memory, compressed data held in one or more storage drives; decompresses the compressed data stored in the second memory to generate plaintext data; extracts the data designated by the process device from the plaintext data; and transmits the extracted data to the first memory.
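A sketch of the accelerator-side path — decompress into the accelerator's own buffer, extract only the designated data, and return just that to host memory — with zlib and a line-filter predicate as stand-ins for the real decompression and designation logic:

```python
import zlib

def offload_extract(compressed_blocks, predicate):
    extracted = []
    for block in compressed_blocks:
        # Decompress into the accelerator's second memory (here just
        # a Python bytes object) to obtain plaintext data.
        plaintext = zlib.decompress(block)
        # Extract only the data designated by the process device.
        extracted.extend(row for row in plaintext.split(b"\n")
                         if predicate(row))
    # Only the extracted data is transmitted to the first memory.
    return b"\n".join(extracted)
```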