Patent classifications
H03M7/40
System and method for compressing activation data
A method for adapting a trained neural network is provided. Input data is input to the trained neural network and a plurality of filters are applied to generate a plurality of channels of activation data. Differences between corresponding activation values in the plurality of channels of activation data are calculated and an order of the plurality of channels is determined based on the calculated differences. The neural network is adapted so that it will output channels of activation data in the determined order. The ordering of the channels of activation data is subsequently used to compress activation data values by taking advantage of a correlation between activation data values in adjacent channels.
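The channel-reordering idea above can be sketched in Python: measure how similar each pair of channels is, greedily chain the most similar channels together, and then store only residuals between adjacent channels. The greedy nearest-neighbor ordering and mean-absolute-difference metric are illustrative assumptions, not details taken from the patent.

```python
import numpy as np

def order_channels(acts):
    """Greedily order channels so adjacent channels are similar.

    acts: array of shape (C, H, W), one activation map per channel.
    Uses mean absolute difference as the (assumed) similarity metric.
    """
    C = acts.shape[0]
    diff = np.array([[np.abs(acts[i] - acts[j]).mean() for j in range(C)]
                     for i in range(C)])
    order = [0]
    remaining = set(range(1, C))
    while remaining:
        # Append the remaining channel closest to the last one placed.
        nxt = min(remaining, key=lambda j: diff[order[-1], j])
        order.append(nxt)
        remaining.remove(nxt)
    return order

def delta_encode(acts, order):
    """Store the first channel verbatim, then adjacent-channel residuals."""
    first = acts[order[0]]
    residuals = [acts[order[i]] - acts[order[i - 1]]
                 for i in range(1, len(order))]
    return first, residuals
```

If adjacent channels are indeed correlated, the residuals concentrate near zero and compress better than the raw activations under a downstream entropy coder.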
Technologies for providing shared memory for accelerator sleds
Technologies for providing shared memory for accelerator sleds include an accelerator sled to receive, with a memory controller, a memory access request from an accelerator device to access a region of memory. The request identifies the region of memory with a logical address. Additionally, the accelerator sled is to determine, from a map of logical addresses and associated physical addresses, the physical address associated with the region of memory. In addition, the accelerator sled is to route the memory access request to a memory device associated with the determined physical address.
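The translate-then-route step described above can be sketched as a small controller. The region granularity, map layout, and device representation here are illustrative assumptions for the sketch, not details from the patent.

```python
REGION_SIZE = 4096  # hypothetical region granularity (bytes)

class MemoryController:
    """Translates logical addresses and routes accesses to memory devices."""

    def __init__(self, logical_to_physical, devices):
        # logical_to_physical: {logical region index: (device id, physical base)}
        # devices: {device id: bytearray standing in for a memory device}
        self.map = logical_to_physical
        self.devices = devices

    def access(self, logical_addr, length):
        region = logical_addr // REGION_SIZE
        offset = logical_addr % REGION_SIZE
        # Translate the logical region to (device, physical base), then route.
        device_id, phys_base = self.map[region]
        mem = self.devices[device_id]
        return mem[phys_base + offset : phys_base + offset + length]
```

Because accelerators address only logical regions, the map can place regions on any device in the sled without the requesting accelerator being aware of the physical layout.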
SYSTEMS AND METHODS OF DATA COMPRESSION
There is provided a computer-implemented method of compressing a baseline dataset comprising a sequence of a plurality of instances of a plurality of unique data elements. The method comprises: providing a weight function that calculates an increasing value for a weight for each instance of each unique data element in the baseline dataset, as a function of the number of previously processed sequential locations of the instances of the respective unique data element relative to a current sequential location in the baseline dataset; computing an encoding for the baseline dataset according to a distribution of the weight function computed for the plurality of unique data elements in the baseline dataset; and creating a compressed dataset according to the encoding.
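A minimal sketch of the idea: accumulate a position-dependent weight per unique element (later occurrences count more), then derive a prefix code from the resulting weight distribution. The linear weight function and the use of Huffman coding as the encoder are illustrative choices; the claim only requires that the weight increase with sequential position and that the encoding follow the weight distribution.

```python
import heapq
from collections import defaultdict

def recency_weights(data, weight=lambda pos: pos + 1):
    """Sum a position-dependent weight per unique element.

    The linear weight (pos + 1) is an assumed example of an increasing
    weight function.
    """
    totals = defaultdict(float)
    for pos, sym in enumerate(data):
        totals[sym] += weight(pos)
    return dict(totals)

def huffman_code(weights):
    """Standard Huffman construction over the weight distribution."""
    heap = [(w, i, {sym: ""}) for i, (sym, w) in enumerate(sorted(weights.items()))]
    heapq.heapify(heap)
    if len(heap) == 1:
        return {sym: "0" for sym in heap[0][2]}
    counter = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c1.items()}
        merged.update({s: "1" + b for s, b in c2.items()})
        heapq.heappush(heap, (w1 + w2, counter, merged))
        counter += 1
    return heap[0][2]

def compress(data):
    code = huffman_code(recency_weights(data))
    return code, "".join(code[s] for s in data)
```

Weighting recent occurrences more heavily lets the code adapt toward elements that dominate the part of the dataset currently being processed.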
MULTI-CONTEXT ENTROPY CODING FOR COMPRESSION OF GRAPHS
Example embodiments relate to using a multi-context entropy coder for encoding adjacency lists. A system may obtain a graph having data (or multiple graphs) and may compress the data of the graph using a multi-context entropy coder. The multi-context entropy coder may encode adjacency lists within the data such that each integer is assigned to a different probability distribution. For example, operating the multi-context entropy coder may involve using a combination of arithmetic coding, Huffman coding, and ANS. The assignment of integers to the probability distributions may depend on each integer's role and/or previous values of a similar kind. By using multi-context entropy coding, the computing system may increase the compression ratio while maintaining similar processing speed.
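The benefit of routing each integer to a per-role model can be sketched by comparing model costs in bits. The adaptive add-one-smoothed model, the 256-symbol alphabet, and the index-parity context function in the usage below are all illustrative assumptions standing in for the arithmetic/Huffman/ANS back end.

```python
import math
from collections import defaultdict

class AdaptiveModel:
    """Adaptive frequency model with add-one (Laplace) smoothing."""

    ALPHABET = 256  # assumed symbol range for the sketch

    def __init__(self):
        self.counts = defaultdict(int)
        self.total = 0

    def cost_bits(self, sym):
        # -log2 P(sym) under the counts seen so far, then update the model.
        p = (self.counts[sym] + 1) / (self.total + self.ALPHABET)
        self.counts[sym] += 1
        self.total += 1
        return -math.log2(p)

def code_length(stream, context_of):
    """Total cost when each integer is routed to the model for its context."""
    models = defaultdict(AdaptiveModel)
    return sum(models[context_of(i, v)].cost_bits(v)
               for i, v in enumerate(stream))
```

On a stream whose even and odd positions follow different distributions, splitting by role costs fewer bits than a single shared model, which is the gain the multi-context coder exploits for adjacency-list fields.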
SYSTEM AND METHOD FOR DATA COMPACTION UTILIZING MISMATCH PROBABILITY ESTIMATION
A system and method for compacting data that uses mismatch probability estimation to improve entropy encoding methods to account for, and efficiently handle, previously-unseen data in data to be compacted. Training data sets are analyzed to determine the frequency of occurrence of each sourceblock in the training data sets. A mismatch probability estimate is calculated comprising an estimated frequency at which any given data sourceblock received during encoding will not have a codeword in the codebook. Entropy encoding is used to generate codebooks comprising codewords for data sourceblocks based on the frequency of occurrence of each sourceblock. A “mismatch codeword” is inserted into the codebook based on the mismatch probability estimate to represent those cases when a block of data to be encoded does not have a codeword in the codebook. During encoding, if a mismatch occurs, a secondary encoding process is used to encode the mismatched sourceblock.
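A sketch of the mismatch-codeword idea: build a Huffman codebook from training-set frequencies, reserve one codeword whose weight reflects the estimated mismatch probability, and fall back to a secondary encoding (here, an assumed fixed-width literal) whenever an unseen sourceblock appears. The fixed 5% mismatch estimate and 8-bit literal are illustrative assumptions.

```python
import heapq
from collections import Counter

MISMATCH = object()  # sentinel for "sourceblock not in codebook"

def build_codebook(training_blocks, mismatch_prob=0.05):
    """Huffman codebook over training frequencies plus a mismatch codeword.

    mismatch_prob is the estimated fraction of future blocks absent from
    the codebook (a fixed illustrative estimate here).
    """
    freq = Counter(training_blocks)
    weights = dict(freq)
    weights[MISMATCH] = max(1, round(mismatch_prob * sum(freq.values())))
    heap = [(w, i, {sym: ""}) for i, (sym, w) in enumerate(weights.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c1.items()}
        merged.update({s: "1" + b for s, b in c2.items()})
        heapq.heappush(heap, (w1 + w2, counter, merged))
        counter += 1
    return heap[0][2]

def encode(blocks, codebook, literal_bits=8):
    """Emit each block's codeword, or the mismatch codeword plus a literal."""
    out = []
    for b in blocks:
        if b in codebook:
            out.append(codebook[b])
        else:
            # Secondary encoding: mismatch codeword followed by a raw literal.
            out.append(codebook[MISMATCH] + format(b, f"0{literal_bits}b"))
    return "".join(out)
```

Giving the mismatch codeword a weight proportional to its estimated frequency keeps its code length appropriately short when unseen blocks are common and long when they are rare.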
Encoding and decoding with differential encoding size
In accordance with an embodiment, the method includes determining a second sequence of numbers of digits for encoding the respective integer coefficient values of the first sequence, the second sequence including, as first element, a first number of digits for encoding the first integer coefficient value of the first sequence, and as second and subsequent elements, constrained numbers of digits that are greater than or equal to respective minimum required numbers of digits for encoding the second and subsequent integer coefficient values of the first sequence. The constrained numbers of digits are such that any two successive elements of the second sequence do not differ from each other by more than a given threshold value. The method further includes encoding difference values between the successive elements of the second sequence; and encoding the integer coefficient values of the first sequence using the respective numbers of digits of the second sequence.
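The constrained digit counts can be computed with two passes over the minimum per-coefficient sizes: a forward pass stops a count from dropping by more than the threshold, and a backward pass stops it from rising by more than the threshold. The nonnegative-integer coefficients and bit-based digit counts are assumptions for the sketch.

```python
def min_bits(v):
    """Minimum bits to represent a nonnegative integer (at least 1)."""
    return max(1, v.bit_length())

def constrained_bit_counts(coeffs, threshold):
    """Smallest per-coefficient bit counts such that each count covers its
    coefficient and successive counts differ by at most `threshold`."""
    need = [min_bits(v) for v in coeffs]
    # Forward pass: a count may not drop by more than `threshold`.
    for k in range(1, len(need)):
        need[k] = max(need[k], need[k - 1] - threshold)
    # Backward pass: a count may not rise by more than `threshold`.
    for k in range(len(need) - 2, -1, -1):
        need[k] = max(need[k], need[k + 1] - threshold)
    return need

def encode(coeffs, threshold):
    """First count verbatim, then the bounded differences, then the values."""
    counts = constrained_bit_counts(coeffs, threshold)
    diffs = [counts[k] - counts[k - 1] for k in range(1, len(counts))]
    payload = [format(v, f"0{n}b") for v, n in zip(coeffs, counts)]
    return counts[0], diffs, payload
```

Because every difference lies in [-threshold, threshold], each one fits in a small fixed number of bits, which is what makes differential encoding of the sizes cheap.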