Patent classifications
H03M7/3066
DATA COMPRESSION SYSTEM AND METHOD OF USING
A system includes a non-transitory computer readable medium configured to store instructions thereon; and a processor connected to the non-transitory computer readable medium. The processor is configured to execute the instructions for generating a mask based on received data from a sensor, wherein the mask includes a plurality of importance values, and each region of the received data is designated a corresponding importance value of the plurality of importance values. The processor is configured to execute the instructions for encoding the received data based on the mask; and transmitting the encoded data to a decoder for defining reconstructed data. The processor is configured to execute the instructions for computing a loss based on the reconstructed data, the received data and the mask. The processor is configured to execute the instructions for providing training to an encoder for encoding the received data based on the computed loss.
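The loss described above can be sketched as a mask-weighted reconstruction error, where regions assigned higher importance values contribute proportionally more to encoder training. This is a minimal illustration with hypothetical names; the patent does not fix a specific loss formula.

```python
def masked_loss(received, reconstructed, mask):
    """Hypothetical mask-weighted squared error: each element's error is
    scaled by the importance value of its region before averaging, so
    training penalizes distortion in important regions more heavily."""
    total = 0.0
    count = 0
    for x, y, w in zip(received, reconstructed, mask):
        total += w * (x - y) ** 2
        count += 1
    return total / count

# Flattened 2x2 example: the third element sits in a region the mask
# marks as four times as important as the rest.
data  = [1.0, 2.0, 3.0, 4.0]
recon = [1.1, 2.0, 2.5, 4.0]
mask  = [1.0, 1.0, 4.0, 1.0]
loss = masked_loss(data, recon, mask)
```

With this weighting, the 0.5 error in the important region dominates the loss, while a perfect reconstruction yields zero regardless of the mask.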
COMMUNICATION SYSTEM, TRANSMISSION APPARATUS, RECEPTION APPARATUS, MATRIX GENERATION APPARATUS, COMMUNICATION METHOD, TRANSMISSION METHOD, RECEPTION METHOD, MATRIX GENERATION METHOD AND RECORDING MEDIUM
A communication system SYS includes a transmission apparatus 1 and a reception apparatus 2. The transmission apparatus includes: a conversion unit 111 for converting a bit stream Z having a bit length b into a bit stream Y that includes w−1 bits equal to 1 (w is an integer equal to or larger than 2) and that has a bit length n (n>b); a conversion unit 112 for converting the bit stream Y into a bit stream X having a bit length t (t<n); and a Neural Network 113 that has t input nodes and that outputs a value relating to a feature of a transmission signal Tx when the bit stream X is inputted thereto. The reception apparatus includes: a Neural Network 212 that has t output nodes and that outputs a numerical data stream U including t numerical data when a feature of the reception signal is inputted thereto; a conversion unit 213 for converting the numerical data stream U into a numerical data stream Y′ including n numerical data; and a generation unit 214 for generating a bit stream Z′ having the bit length b by performing, on the numerical data stream Y′, an inverse conversion of the conversion processing performed by the conversion unit 111.
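One standard way to realize a conversion like unit 111 (mapping a b-bit value to a longer word with a fixed number of one-bits) is enumerative coding of constant-weight words. The patent does not specify this construction, so the following is a sketch under that assumption, with hypothetical function names.

```python
from math import comb

def to_constant_weight(z, n, ones):
    """Sketch of a unit-111-style conversion: map the integer z to the
    z-th n-bit word (lexicographic order) containing exactly `ones`
    one-bits. Valid for 0 <= z < comb(n, ones)."""
    bits, k = [], ones
    for pos in range(n):
        rest = n - pos - 1
        zeros_first = comb(rest, k)  # words that place a 0 at this position
        if z < zeros_first:
            bits.append(0)
        else:
            bits.append(1)
            z -= zeros_first
            k -= 1
    return bits

def from_constant_weight(bits):
    """Inverse conversion (the unit-214 direction): recover the integer
    index from a constant-weight word."""
    n, k, z = len(bits), sum(bits), 0
    for pos, b in enumerate(bits):
        if b == 1:
            z += comb(n - pos - 1, k)
            k -= 1
    return z

# Round trip for n = 6, two one-bits (comb(6, 2) = 15 codewords).
word = to_constant_weight(10, 6, 2)
assert from_constant_weight(word) == 10
```

The bijection guarantees that every b-bit input maps to a distinct fixed-weight word, which is what lets the receiver invert the conversion exactly.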
Encoding / Decoding System and Method
A computer-implemented method, computer program product and computing system for: processing an unencoded data file to identify a plurality of file segments, wherein the unencoded data file is a dataset for use with a long-range wireless communication platform; mapping each of the plurality of file segments to a portion of a dictionary file to generate a plurality of mappings, wherein each of the plurality of mappings includes a starting location and a length; generating a related encoded data file based, at least in part, upon the plurality of mappings; and transmitting the related encoded data file from a first location to a second location using the long-range wireless communication platform.
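The (starting location, length) mapping against a shared dictionary can be sketched with a greedy matcher. The names and the greedy strategy are assumptions for illustration; the patent leaves the segmentation policy open.

```python
def encode(data: str, dictionary: str):
    """Greedy sketch of segment-to-dictionary mapping: each output entry
    is a (start, length) pair pointing into the shared dictionary file."""
    mappings, i = [], 0
    while i < len(data):
        best_start, best_len = -1, 0
        for start in range(len(dictionary)):
            length = 0
            while (i + length < len(data)
                   and start + length < len(dictionary)
                   and dictionary[start + length] == data[i + length]):
                length += 1
            if length > best_len:
                best_start, best_len = start, length
        if best_len == 0:
            raise ValueError("segment not present in dictionary")
        mappings.append((best_start, best_len))
        i += best_len
    return mappings

def decode(mappings, dictionary: str) -> str:
    """Rebuild the file by copying the referenced dictionary portions."""
    return "".join(dictionary[s:s + n] for s, n in mappings)

d = "the quick brown fox"
msg = "the fox"
m = encode(msg, d)
```

Since both ends hold the dictionary, only the compact (start, length) pairs travel over the bandwidth-constrained long-range link.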
MEMORY ALLOCATION TECHNOLOGIES FOR DATA COMPRESSION AND DE-COMPRESSION
Examples described herein relate to a manner of determining a number of bits to encode compression data. Some examples include: compressing pixel data of a region of pixels in a frame; determining a number of bits associated with at least two partitions; utilizing the determined number of bits to encode residual values generated from compressing the pixel data; and storing the encoded residual values. In some examples, the at least two partitions comprise a first partition and a second partition. Some examples include: encoding residuals in the first partition using a number of bits associated with the first partition and encoding residuals in the second partition using a number of bits associated with the second partition. Some examples include: determining a distribution of bins of residuals, wherein each different bin represents a number of bits used to encode a residual value, and determining a midpoint of the total number of residuals as the bin that stores the residual at approximately the 50th percentile of the total number of residuals in the distribution.
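The bin-distribution and midpoint step can be sketched as follows: bucket each residual by the bit width needed to encode it, then walk the cumulative counts until the bin holding the ~50th-percentile residual is reached. The bit-width rule used here is an assumption for illustration.

```python
def median_bin(residuals):
    """Sketch: bin residuals by the number of bits needed to encode
    their magnitude, then return the bin containing the residual at
    approximately the 50th percentile of the distribution."""
    bins = {}
    for r in residuals:
        # Assumed width rule: bit length of |r|, with a 1-bit minimum.
        width = max(1, abs(r).bit_length())
        bins[width] = bins.get(width, 0) + 1
    midpoint = len(residuals) / 2
    running = 0
    for width in sorted(bins):
        running += bins[width]
        if running >= midpoint:
            return width

# Small residuals dominate, so the midpoint lands in a narrow bin.
b = median_bin([0, 1, 1, 2, 3, 8, 20, 40])
```

A midpoint bin chosen this way adapts the encoding width to the typical residual rather than to the worst case.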
WEIGHT DATA COMPRESSION METHOD, WEIGHT DATA DECOMPRESSION METHOD, WEIGHT DATA COMPRESSION DEVICE, AND WEIGHT DATA DECOMPRESSION DEVICE
A weight data compression method includes: generating a 4-bit data string of 4-bit data items each expressed as any one of nine 4-bit values, by dividing ternary weight data into data items each having 4 bits; and generating first compressed data including a first flag value string and a first non-zero value string by (i) generating the first flag value string by assigning one of 0 and 1 as a first flag value of a 1-bit flag to a 4-bit data item 0000 and assigning the other of 0 and 1 as a second flag value of the 1-bit flag to a 4-bit data item other than 0000 among the 4-bit data items in the 4-bit data string and (ii) generating the first non-zero value string by converting the 4-bit data item other than 0000 into a 3-bit data item having any one of eight 3-bit values.
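The flag-plus-non-zero-string scheme can be sketched directly: each 4-bit item contributes one flag bit, and only the eight possible non-zero items additionally contribute a 3-bit code. The particular set of non-zero values and their 3-bit codes below are assumptions; the patent only requires that the eight non-0000 values map bijectively to 3-bit codes.

```python
# Assumed table: any eight distinct non-zero 4-bit values work, since the
# scheme only needs a bijection between them and the eight 3-bit codes.
NONZERO_VALUES = [1, 2, 3, 4, 5, 6, 7, 8]
TO_3BIT = {v: i for i, v in enumerate(NONZERO_VALUES)}
FROM_3BIT = {i: v for i, v in enumerate(NONZERO_VALUES)}

def compress(items):
    """items: 4-bit values, each 0000 (i.e. 0) or one of NONZERO_VALUES.
    Returns the flag value string and the non-zero value string."""
    flags, nonzeros = [], []
    for v in items:
        if v == 0:
            flags.append(0)            # flag only: item was 0000
        else:
            flags.append(1)            # flag plus a 3-bit code
            nonzeros.append(TO_3BIT[v])
    return flags, nonzeros

def decompress(flags, nonzeros):
    """Rebuild the 4-bit data string from the two strings."""
    it = iter(nonzeros)
    return [0 if f == 0 else FROM_3BIT[next(it)] for f in flags]

items = [0, 3, 0, 0, 7, 1]
flags, nz = compress(items)
```

Each zero item costs 1 bit instead of 4 and each non-zero item costs 4 bits (1 + 3), so sparse ternary weight data compresses well.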
Homogenizing data sparsity using a butterfly multiplexer
A data-sparsity homogenizer includes a plurality of multiplexers and a controller. The plurality of multiplexers receives 2^N bit streams of non-homogeneous sparse data in which the non-zero values are clumped together. The plurality of multiplexers is arranged in 2^N rows and N columns. Each input of a multiplexer in the first column receives a respective bit stream of the 2^N bit streams of non-homogeneous sparse data, and the multiplexers in the last column output 2^N bit streams of sparse data that is more homogeneous than the non-homogeneous input. The controller controls the plurality of multiplexers so that the multiplexers in the last column output the 2^N bit streams of more homogeneous sparse data.
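The routing fabric above is a butterfly network: in column k, each row can either keep its own input or take the input from the row whose index differs in bit k, under controller-supplied select bits. This is a behavioral sketch of that topology, not the patented controller policy.

```python
def butterfly_route(streams, controls):
    """Behavioral sketch of an N-column butterfly of 2-to-1 multiplexers.
    streams: 2**N inputs (one per row); controls[k][r] is the select bit
    for the row-r multiplexer in column k: 0 keeps the straight input,
    1 takes the input of the row whose index differs in bit k.
    Assumes len(streams) is a power of two."""
    n_rows = len(streams)
    n_cols = n_rows.bit_length() - 1   # N, for 2**N rows
    cur = list(streams)
    for k in range(n_cols):
        nxt = list(cur)
        for r in range(n_rows):
            partner = r ^ (1 << k)     # row differing in bit k
            if controls[k][r]:
                nxt[r] = cur[partner]
        cur = nxt
    return cur
```

By choosing the select bits per column, the controller can permute clumped non-zero streams across the 2^N outputs, spreading the non-zero data more evenly.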
Neural network processor for compressing featuremap data and computing system including the same
Provided is a neural network device including at least one processor configured to implement: an arithmetic circuit configured to generate third data including a plurality of pixels, based on a neural network configured to perform an arithmetic operation on first data and second data; and a compressor configured to generate compressed data by compressing the third data, wherein the compressor is further configured to generate, as the compressed data, bitmap data comprising location information about non-zero pixels among the plurality of pixels, based on a quad-tree structure.
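A quad-tree occupancy bitmap of this kind can be sketched recursively: emit 0 for an all-zero quadrant (pruning its subtree), and 1 for a quadrant containing a non-zero pixel, followed by the bits of its four sub-quadrants. The exact bit layout is an assumption for illustration.

```python
def quadtree_bitmap(tile):
    """Sketch: occupancy bitmap of a 2^k x 2^k featuremap tile using a
    quad-tree. A 0 bit means the quadrant is entirely zero (no children
    follow); a 1 bit means it holds a non-zero pixel, and the bits of
    its four sub-quadrants (TL, TR, BL, BR) follow, down to single pixels."""
    bits = []

    def visit(r, c, size):
        nonzero = any(tile[i][j]
                      for i in range(r, r + size)
                      for j in range(c, c + size))
        bits.append(1 if nonzero else 0)
        if nonzero and size > 1:
            h = size // 2
            for dr, dc in ((0, 0), (0, h), (h, 0), (h, h)):
                visit(r + dr, c + dc, h)

    visit(0, 0, len(tile))
    return bits
```

Because whole zero quadrants collapse to a single bit, sparse activation maps, which ReLU layers commonly produce, yield very short bitmaps.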
Method, system and program product for mask-based compression of a sparse matrix
A method, system, and program product accesses chunks of data identifying data elements. A mask identifies the positions of the data elements that have zero values and of those that have non-zero values, and the data elements are processed based on the mask. For compression, the data elements in the chunks having zero values are removed and the data elements having non-zero values are packed into the chunks to form the compressed data. For decompression, zero-value data elements are re-inserted into the chunks at the positions indicated by the mask to form the uncompressed data.
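The mask-based scheme can be sketched in a few lines: the mask records which positions are non-zero, and only those values are packed. Function names are illustrative.

```python
def compress_chunk(chunk):
    """Sketch: build a position mask (1 = non-zero, 0 = zero) and pack
    only the non-zero elements of the chunk."""
    mask = [1 if x != 0 else 0 for x in chunk]
    packed = [x for x in chunk if x != 0]
    return mask, packed

def decompress_chunk(mask, packed):
    """Sketch: re-insert zeros at the positions the mask marks as zero."""
    it = iter(packed)
    return [next(it) if m else 0 for m in mask]

chunk = [0, 7, 0, 0, 3, 0, 9, 0]
mask, packed = compress_chunk(chunk)
```

For a sparse matrix, the compressed form is the mask (one bit per element) plus the packed non-zero values, and decompression is exact.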
NEURAL NETWORK PROCESSOR USING COMPRESSION AND DECOMPRESSION OF ACTIVATION DATA TO REDUCE MEMORY BANDWIDTH UTILIZATION
A deep neural network (“DNN”) module can compress and decompress neuron-generated activation data to reduce the utilization of memory bus bandwidth. The compression unit can receive an uncompressed chunk of data generated by a neuron in the DNN module. The compression unit generates a mask portion and a data portion of a compressed output chunk. The mask portion encodes the presence and location of the zero and non-zero bytes in the uncompressed chunk of data. The data portion stores truncated non-zero bytes from the uncompressed chunk of data. A decompression unit can receive a compressed chunk of data from memory in the DNN processor or memory of an application host. The decompression unit decompresses the compressed chunk of data using the mask portion and the data portion. This can reduce memory bus utilization, allow a DNN module to complete processing operations more quickly, and reduce power consumption.
METHODS AND DEVICES FOR MULTI-POINT DIRECT CODING IN POINT CLOUD COMPRESSION
Methods and devices for coding point clouds using direct coding mode to code coordinates of a point within a sub-volume associated with a current node instead of a pattern of occupancy for child nodes. When direct coding is applied to two or more points in the sub-volume, the points are ordered based on one of their respective coordinate values and pairwise coding of those coordinate values is carried out on a bit-by-bit basis. The pairwise coding includes coding whether the bits are the same and, if so, the bit value.
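The pairwise bit-by-bit coding can be sketched as follows. With the two points ordered on the coded coordinate (a ≤ b), each bit plane first signals whether the two bits are the same; while they agree, one shared bit value is coded. At the first plane where they differ, the ordering itself fixes a's bit to 0 and b's bit to 1, and the remaining bits are coded independently. This is a flat-bit-list sketch with hypothetical names, not the standardized entropy-coded bitstream.

```python
def pairwise_encode(a, b, nbits):
    """Sketch of pairwise coordinate coding for two ordered points
    (requires a <= b). Returns a flat list of coded bits."""
    assert a <= b, "points must be ordered on this coordinate"
    out, diverged = [], False
    for i in range(nbits - 1, -1, -1):
        xa, xb = (a >> i) & 1, (b >> i) & 1
        if not diverged:
            same = int(xa == xb)
            out.append(same)            # are the two bits equal?
            if same:
                out.append(xa)          # if so, code the shared bit value
            else:
                diverged = True         # ordering implies xa = 0, xb = 1
        else:
            out.extend([xa, xb])        # after divergence: code both bits
    return out

def pairwise_decode(bits, nbits):
    """Inverse of pairwise_encode; recovers the ordered pair (a, b)."""
    it = iter(bits)
    a = b = 0
    diverged = False
    for i in range(nbits - 1, -1, -1):
        if not diverged:
            if next(it):                # bits were equal: read shared value
                v = next(it)
                a |= v << i
                b |= v << i
            else:                       # first difference: a gets 0, b gets 1
                diverged = True
                b |= 1 << i
        else:
            a |= next(it) << i
            b |= next(it) << i
    return a, b
```

The saving comes from the divergence plane: once the bits differ, sorting the points makes their values at that plane implicit, so no bit needs to be sent for it.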