Patent classifications
H03M7/6023
Compression And Decompression In Hardware for Data Processing
Methods, systems, and apparatus, including computer-readable storage media for hardware compression and decompression. A system can include a decompressor device coupled to a memory device and a processor. The decompressor device can be configured to receive, from the memory device, compressed data that has been compressed using an entropy encoding, process the compressed data using the entropy encoding to generate uncompressed data, and send the uncompressed data to the processor. The system can also include a compressor device configured to generate, from uncompressed data, a probability distribution of codewords, generate a code table from the probability distribution, and compress incoming data using the generated code table.
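The abstract's compressor generates a probability distribution of codewords and derives a code table from it. A minimal Python sketch of that idea, using Huffman coding as one concrete entropy encoding (the patent abstract does not name a specific code, so Huffman is an assumption for illustration):

```python
import heapq
from collections import Counter

def huffman_code_table(data: bytes) -> dict:
    """Build a prefix-code table from the symbol distribution of `data`."""
    freq = Counter(data)
    # Heap entries: (frequency, tiebreak id, tree); tree is a symbol or a pair.
    heap = [(f, i, sym) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                     # degenerate single-symbol input
        return {heap[0][2]: "0"}
    next_id = len(heap)
    while len(heap) > 1:                   # merge the two least-frequent trees
        f1, _, t1 = heapq.heappop(heap)
        f2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, next_id, (t1, t2)))
        next_id += 1
    table = {}
    def walk(tree, prefix):                # assign 0/1 paths as codewords
        if isinstance(tree, tuple):
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:
            table[tree] = prefix
    walk(heap[0][2], "")
    return table

table = huffman_code_table(b"abracadabra")
compressed = "".join(table[b] for b in b"abracadabra")
```

The most frequent symbol receives the shortest codeword, so `compressed` is well under the 88 bits of the raw 11-byte input; a hardware decompressor would invert the same table to recover the original bytes.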
Methods and apparatus to parallelize data decompression
Methods and apparatus to parallelize data decompression are disclosed. An example method includes selecting initial starting positions in a compressed data bitstream; adjusting a first one of the initial starting positions to determine a first adjusted starting position by decoding the bitstream starting at a training position in the bitstream, the decoding including traversing the bitstream from the training position as though first data located at the training position is a valid token; outputting first decoded data generated by decoding a first segment of the bitstream starting from the first adjusted starting position; and merging the first decoded data with second decoded data generated by decoding a second segment of the bitstream, the decoding of the second segment starting from a second position in the bitstream and being performed in parallel with the decoding of the first segment, and the second segment preceding the first segment in the bitstream.
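The speculative-start idea above can be sketched with a toy token format (this format, and the choice of zero-filled payloads that makes the speculative walk resynchronize at the next real token, are assumptions for illustration, not the patent's actual bitstream):

```python
from concurrent.futures import ThreadPoolExecutor

def encode(run_lengths):
    """Toy format: each token is one header byte n followed by n zero bytes."""
    return b"".join(bytes([n]) + bytes(n) for n in run_lengths)

def walk(stream, pos, target):
    """Traverse from `pos` as though every byte reached is a valid token
    header; return the first chained position at or past `target`.
    Zero payload bytes parse as zero-length tokens, so the walk
    resynchronizes onto a true token boundary."""
    while pos < target:
        pos += 1 + stream[pos]
    return pos

def decode(stream, start, end):
    out, pos = bytearray(), start
    while pos < end:
        n = stream[pos]
        out += stream[pos + 1:pos + 1 + n]
        pos += 1 + n
    return bytes(out)

def parallel_decode(stream, train_back=8):
    guess = len(stream) // 2                # initial starting position
    train = max(0, guess - train_back)      # training position
    adjusted = walk(stream, train, guess)   # adjusted starting position
    with ThreadPoolExecutor() as ex:        # decode both segments in parallel
        front = ex.submit(decode, stream, 0, adjusted)
        back = ex.submit(decode, stream, adjusted, len(stream))
        return front.result() + back.result()   # merge in stream order
```

The key property is that `adjusted` lands on a genuine token boundary even though the walk began mid-token, so the two segments decode independently and concatenate cleanly.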
Technologies for switching network traffic in a data center
Technologies for switching network traffic include a network switch. The network switch includes one or more processors and communication circuitry coupled to the one or more processors. The communication circuitry is capable of switching network traffic of multiple link layer protocols. Additionally, the network switch includes one or more memory devices storing instructions that, when executed, cause the network switch to receive, with the communication circuitry through an optical connection, network traffic to be forwarded, and determine a link layer protocol of the received network traffic. The instructions additionally cause the network switch to forward the network traffic as a function of the determined link layer protocol. Other embodiments are also described and claimed.
INTERLEAVING OF VARIABLE BITRATE STREAMS FOR GPU IMPLEMENTATIONS
Interleaving of variable bitrate streams for GPU implementations is described. An example of an apparatus includes one or more processors including a graphics processor, the graphics processor including a super-compression encoder pipeline to provide variable-width interleaved coding; and memory for storage of data, wherein the graphics processor is to perform parallel dictionary encoding on a bitstream of symbols by one of multiple workgroups, the workgroup to employ a plurality of encoders to generate a plurality of token-streams of variable lengths; create a histogram including at least tokens from the plurality of token-streams for the workgroup to generate an optimized entropy code; entropy code each of the plurality of token-streams for the workgroup into an encoded bitstream; and variably interleave the encoded bitstreams to generate an interleaved bitstream and bookkeep a size of the interleaved bitstream.
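The histogram-then-interleave pipeline can be sketched in Python. The rank-based Elias-gamma code and the fixed round-robin chunking below are illustrative assumptions; a real implementation would use an optimized entropy code and record lane offsets so a GPU decoder can locate each stream:

```python
from collections import Counter

def elias_gamma(n: int) -> str:
    """Elias-gamma codeword for n >= 1 -- a simple, valid prefix code."""
    b = bin(n)[2:]
    return "0" * (len(b) - 1) + b

def interleave(token_streams, chunk_bits=8):
    # One histogram across all of the workgroup's token-streams yields one
    # shared code: more frequent tokens get shorter (lower-rank) codewords.
    hist = Counter(t for s in token_streams for t in s)
    code = {tok: elias_gamma(rank + 1)
            for rank, (tok, _) in enumerate(hist.most_common())}
    encoded = ["".join(code[t] for t in s) for s in token_streams]
    # Variable interleaving: round-robin chunks of up to chunk_bits bits.
    out, cursors = [], [0] * len(encoded)
    while any(c < len(e) for c, e in zip(cursors, encoded)):
        for i, e in enumerate(encoded):
            piece = e[cursors[i]:cursors[i] + chunk_bits]
            cursors[i] += len(piece)
            out.append(piece)
    bitstream = "".join(out)
    return bitstream, len(bitstream)   # bookkept size of the interleaved stream
```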
SYSTEMS, METHODS, AND APPARATUS FOR DIVIDING AND ENCRYPTING DATA
A method for data encryption may include receiving input data, finding a delimiter in the input data, generating, based on a position of the delimiter in the input data, a portion of data using a part of the input data, and encrypting the portion of data. The input data may include a record, the delimiter indicates a boundary of the record, and the portion of data may include the record. The position of the delimiter may be in the part of the input data. Generating the portion of data may include generating the portion of data based on a subset of the part of the input data. The part of the input data may be a first part of the input data, and the position of the delimiter may be in a second part of the input data.
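The case the abstract emphasizes, where a delimiter sits in a later part than the data it terminates, can be sketched as follows. The newline delimiter, part size, and XOR "cipher" are illustrative assumptions; the XOR stands in for a real encryption algorithm only to keep the sketch self-contained:

```python
def split_on_records(data: bytes, part_size: int, delim: bytes = b"\n"):
    """Yield portions that each end on a record boundary; a record whose
    delimiter falls in a later part is held back until it arrives."""
    portions, held = [], b""
    for start in range(0, len(data), part_size):
        part = held + data[start:start + part_size]
        cut = part.rfind(delim)             # position of the last delimiter
        if cut == -1:
            held = part                     # no delimiter: keep accumulating
            continue
        portions.append(part[:cut + 1])     # complete record(s)
        held = part[cut + 1:]               # partial trailing record
    if held:
        portions.append(held)
    return portions

def xor_encrypt(portion: bytes, key: int = 0x5A) -> bytes:
    # Placeholder for a real cipher (e.g. an AEAD mode); XOR is used only
    # so the sketch runs without external dependencies.
    return bytes(b ^ key for b in portion)
```

With 6-byte parts over `b"rec1\nrec2\nrec3\n"`, the second record's bytes begin in one part while its delimiter arrives in the next, yet each emitted portion is a whole record ready for independent encryption.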
HARDWARE CHANNEL-PARALLEL DATA COMPRESSION/DECOMPRESSION
A multichannel data packer includes a plurality of two-input multiplexers and a controller. The plurality of two-input multiplexers is arranged in 2^N rows and N columns in which N is an integer greater than 1. Each input of a multiplexer in a first column receives a respective bit stream of 2^N channels of bit streams. Each respective bit stream includes a bit-stream length based on data in the bit stream. The multiplexers in a last column output 2^N channels of packed bit streams each having a same bit-stream length. The controller controls the plurality of multiplexers so that the multiplexers in the last column output the 2^N channels of bit streams that each has the same bit-stream length.
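The external behavior of such a packer, turning 2^N variable-length channel bitstreams into 2^N equal-length lanes, can be modeled in a few lines of Python. This is only a behavioral sketch of the input/output contract; it does not model the multiplexer columns or the controller:

```python
def pack_channels(streams, N=2):
    """Behavioral model: pack 2**N variable-length bitstreams (strings of
    '0'/'1') into 2**N lanes that all have the same bit-stream length."""
    lanes = 2 ** N
    assert len(streams) == lanes
    total = "".join(streams)
    lane_len = -(-len(total) // lanes)            # ceiling division
    total = total.ljust(lane_len * lanes, "0")    # zero-pad to a multiple
    return [total[i * lane_len:(i + 1) * lane_len] for i in range(lanes)]
```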
Cloud computing data compression for allreduce in deep learning
For data compression for allreduce in deep learning, a gradient may be compressed for synchronization in data-parallel deep neural network training by sharing a consensus vector between each node in a plurality of nodes, ensuring identical indexing in each of the plurality of nodes prior to performing sparse encoding.
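The consensus-vector idea can be sketched as follows. Taking the union of each node's top-k magnitude indices as the consensus vector is an assumption for illustration (the abstract only requires that all nodes agree on identical indexing before sparse encoding):

```python
def topk_indices(grad, k):
    """Indices of the k largest-magnitude gradient components on one node."""
    return sorted(range(len(grad)), key=lambda i: -abs(grad[i]))[:k]

def sparse_allreduce(node_grads, k):
    # Consensus vector: the union of every node's top-k indices, shared with
    # all nodes so each node encodes exactly the same positions.
    consensus = sorted(set().union(*(set(topk_indices(g, k)) for g in node_grads)))
    encoded = [[g[i] for i in consensus] for g in node_grads]   # sparse encoding
    reduced = [sum(vals) for vals in zip(*encoded)]             # allreduce (sum)
    return consensus, reduced
```

Because every node encodes the same index set, the sparse vectors line up element-wise and the reduction never has to reconcile mismatched indices.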
DATA COMPRESSION TECHNOLOGIES
Examples described herein relate to performing data compression by performing dictionary matching of data using hardware circuitry to generate dictionary matched results and post-processing of dictionary matched results using software executed by a processor. In some examples, dictionary matching includes LZ77 dictionary matching. In some examples, dictionary matching occurs on multiple segments of data in parallel.
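The hardware/software split described above can be sketched in Python, with threads standing in for parallel hardware matching engines. The greedy window search, the minimum match length of 3, and the concatenation step as "post-processing" are all simplifying assumptions:

```python
from concurrent.futures import ThreadPoolExecutor

def match_segment(data, start, end, window=64):
    """Greedy LZ77-style dictionary matching confined to one segment
    (a software stand-in for the hardware matching stage)."""
    tokens, i = [], start
    while i < end:
        best_len, best_off = 0, 0
        for j in range(max(start, i - window), i):
            l = 0
            while i + l < end and j + l < i and data[j + l] == data[i + l]:
                l += 1
            if l > best_len:
                best_len, best_off = l, i - j
        if best_len >= 3:
            tokens.append(("match", best_off, best_len))
            i += best_len
        else:
            tokens.append(("lit", data[i]))
            i += 1
    return tokens

def parallel_match(data, nseg=2):
    """Match multiple segments of the data in parallel, then post-process."""
    size = -(-len(data) // nseg)                  # ceiling division
    spans = [(s, min(s + size, len(data))) for s in range(0, len(data), size)]
    with ThreadPoolExecutor() as ex:
        per_segment = ex.map(lambda sp: match_segment(data, *sp), spans)
    # Software post-processing: stitch per-segment results in order.
    return [t for seg in per_segment for t in seg]

def expand(tokens):
    """Decoder, to check that the matched results reproduce the input."""
    out = bytearray()
    for t in tokens:
        if t[0] == "lit":
            out.append(t[1])
        else:
            for _ in range(t[2]):
                out.append(out[-t[1]])
    return bytes(out)
```

Confining matches to their own segment keeps the segments independent, which is what allows them to be processed in parallel.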
DOUBLE-PASS LEMPEL-ZIV DATA COMPRESSION WITH AUTOMATIC SELECTION OF STATIC ENCODING TREES AND PREFIX DICTIONARIES
A method includes receiving an input data stream at a processor, and for each byte sequence from a plurality of byte sequences of the input data stream, a hash is generated and compared to a hash table to determine whether a match exists. If a match exists, that byte sequence is incrementally expanded to include one or more additional adjacent bytes from the input data stream, to produce multiple expanded byte sequences. Each of the expanded byte sequences is compared to the hash table to identify a maximum-length matched byte sequence from a set that includes the byte sequence and the plurality of expanded byte sequences. A representation of the maximum-length matched byte sequence is stored in the memory. If a match does not exist, a representation of that byte sequence is stored as a byte sequence literal in the memory.
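The hash-then-expand matching described above can be sketched in Python. Only the matching pass is shown; the second pass (selecting static encoding trees and prefix dictionaries) is not sketched, and Python's built-in `hash` stands in for whatever hash function the processor uses:

```python
def lz_match_pass(data: bytes, min_len: int = 4):
    """Hash each min_len byte sequence, look it up, and on a hit expand the
    match incrementally with adjacent bytes to find the maximum length."""
    table = {}                      # hash of a byte sequence -> earliest position
    tokens, i = [], 0
    while i + min_len <= len(data):
        seq = data[i:i + min_len]
        h = hash(seq)               # stand-in for the hardware hash function
        j = table.setdefault(h, i)
        if j < i and data[j:j + min_len] == seq:   # verify: hashes can collide
            length = min_len        # incrementally expand with adjacent bytes
            while i + length < len(data) and data[j + length] == data[i + length]:
                length += 1
            tokens.append(("match", i - j, length))   # maximum-length match
            i += length
        else:
            tokens.append(("lit", data[i]))           # byte-sequence literal
            i += 1
    tokens.extend(("lit", b) for b in data[i:])       # trailing bytes
    return tokens

def expand(tokens):
    """Decoder, to check the representation reproduces the input stream."""
    out = bytearray()
    for t in tokens:
        if t[0] == "lit":
            out.append(t[1])
        else:
            for _ in range(t[2]):
                out.append(out[-t[1]])
    return bytes(out)
```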