Patent classifications
H03M7/6047
METHODS AND APPARATUS FOR SPARSE TENSOR STORAGE FOR NEURAL NETWORK ACCELERATORS
Methods, apparatus, systems, and articles of manufacture are disclosed for sparse tensor storage for neural network accelerators. An example apparatus includes sparsity map generating circuitry to generate a sparsity map corresponding to a tensor, the sparsity map to indicate whether a data point of the tensor is zero; static storage controlling circuitry to divide the tensor into one or more storage elements; and a compressor to perform a first compression of the one or more storage elements to generate one or more compressed storage elements, the first compression to remove zero points of the one or more storage elements based on the sparsity map, and to perform a second compression of the one or more compressed storage elements, the second compression to store the one or more compressed storage elements contiguously in memory.
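The two-stage scheme in this abstract can be illustrated with a minimal sketch (the function names, the fixed storage-element size, and the use of NumPy are assumptions for illustration, not the patented implementation):

```python
import numpy as np

def compress_sparse_tensor(tensor, element_size=4):
    """Sketch of the two-stage compression: (1) build a sparsity map and
    drop zero points from each storage element, (2) store the compressed
    storage elements contiguously."""
    flat = tensor.flatten()
    sparsity_map = (flat != 0)  # one flag per data point: True where non-zero

    # Divide the tensor into fixed-size "storage elements".
    elements = [flat[i:i + element_size]
                for i in range(0, len(flat), element_size)]

    # First compression: remove zero points from each storage element.
    compressed_elements = [e[e != 0] for e in elements]

    # Second compression: pack the compressed elements contiguously.
    packed = np.concatenate(compressed_elements) if compressed_elements else flat[:0]
    return sparsity_map, packed

def decompress_sparse_tensor(sparsity_map, packed, shape):
    """Reverse the compression by scattering packed values back into
    the positions the sparsity map marks as non-zero."""
    flat = np.zeros(len(sparsity_map), dtype=packed.dtype)
    flat[sparsity_map] = packed
    return flat.reshape(shape)
```

The sparsity map costs one bit per point, so the scheme pays off whenever the tensor is sparse enough that the removed zeros outweigh that overhead.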
TECHNIQUES FOR ACCESSING AND UTILIZING COMPRESSED DATA AND ITS STATE INFORMATION
Some systems compress data utilized by user-mode software without the user-mode software being aware of any compression taking place. To maintain that illusion, such systems prevent user-mode software from being aware of and/or accessing the underlying compressed states of the data. While such an approach protects the proprietary compression techniques used in such systems from being deciphered, such restrictions limit the ability of user-mode software to use the underlying compressed forms of the data in new ways. Disclosed herein are various techniques for allowing user-mode software to access the underlying compressed states of data either directly or indirectly. Such techniques can be used, for example, to allow various user-mode software on a single system or on multiple systems to exchange data in the underlying compression format of the system(s) even when the user-mode software is unable to decipher the compression format.
TRELLIS BASED RECONSTRUCTION ALGORITHMS AND INNER CODES FOR DNA DATA STORAGE
Techniques are disclosed for reducing the cost of, and the errors in, the encoding and decoding operations used in DNA data storage systems, while accounting for the code structure used during encoding and decoding, by constructing and using insertion-deletion-substitution (IDS) trellises for multiple traces. A DNA sequencing channel is used to randomly sample and sequence DNA strands to generate noisy traces. A trellis is independently constructed for each noisy trace. A forward-backward algorithm is run on each trellis to compute posterior marginal probabilities for the vertices included in that trellis. An estimate of the data message sequence is then computed.
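The forward-backward step on each per-trace trellis can be sketched on a simple Markov-chain trellis (this toy version omits the insertion/deletion/substitution branches of a real IDS trellis; all names and the matrix conventions are assumptions):

```python
import numpy as np

def forward_backward(init, trans, emit, observations):
    """Compute posterior marginal probabilities for each trellis vertex
    (state s at step t) given one noisy observation sequence.

    init:  (S,)  initial state probabilities
    trans: (S,S) transition probabilities, trans[i, j] = P(j | i)
    emit:  (S,O) emission probabilities, emit[s, o] = P(o | s)
    """
    S, T = len(init), len(observations)
    alpha = np.zeros((T, S))  # forward messages
    beta = np.zeros((T, S))   # backward messages

    alpha[0] = init * emit[:, observations[0]]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ trans) * emit[:, observations[t]]

    beta[T - 1] = 1.0
    for t in range(T - 2, -1, -1):
        beta[t] = trans @ (emit[:, observations[t + 1]] * beta[t + 1])

    posterior = alpha * beta
    posterior /= posterior.sum(axis=1, keepdims=True)  # normalize each step
    return posterior
```

Running this independently on each trace's trellis yields per-vertex posteriors that can then be combined to estimate the data message sequence.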
METHODS AND APPARATUS TO COMPRESS DATA
Methods, apparatus, systems and articles of manufacture to compress data are disclosed. An example apparatus includes a data slicer to split a dataset into a plurality of blocks of data; a data processor to select a first compression technique for a first block of the plurality of blocks of data based on first characteristics of the first block; and select a second compression technique for a second block of the plurality of blocks of data based on second characteristics of the second block; a first compressor to compress the first block using the first compression technique to generate a first compressed block of data; a second compressor to compress the second block using the second compression technique to generate a second compressed block of data; and a header generator to generate a first header identifying the first compression technique and a second header identifying the second compression technique.
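The per-block selection with per-block headers can be sketched as follows (the choice of DEFLATE vs. raw as the two techniques, and the header layout of one technique byte plus a 4-byte length, are illustrative assumptions):

```python
import zlib

def compress_blocks(data: bytes, block_size: int = 1024) -> bytes:
    """Split the dataset into blocks, pick a technique per block based on
    its characteristics, and prefix each block with an identifying header."""
    out = bytearray()
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        packed = zlib.compress(block)
        if len(packed) < len(block):  # block's characteristics favour DEFLATE
            out += bytes([1]) + len(packed).to_bytes(4, "big") + packed
        else:                         # incompressible: store the block raw
            out += bytes([0]) + len(block).to_bytes(4, "big") + block
    return bytes(out)

def decompress_blocks(stream: bytes) -> bytes:
    """Walk the headers to recover each block with its own technique."""
    out, i = bytearray(), 0
    while i < len(stream):
        technique = stream[i]
        size = int.from_bytes(stream[i + 1:i + 5], "big")
        payload = stream[i + 5:i + 5 + size]
        out += zlib.decompress(payload) if technique == 1 else payload
        i += 5 + size
    return bytes(out)
```

Because every block carries its own header, a mixed dataset (e.g. text followed by already-compressed media) never pays for a technique that fits only part of it.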
Compression and Decompression of Downlink Channel Estimates
A network node (501) determines parameters (503) indicating a compression function for compressing downlink channel estimates, and a decompression function. The network node transmits the parameters, receives compressed downlink channel estimates (504), and decompresses the compressed downlink channel estimates using the decompression function. A terminal device (502) receives the parameters, forms the compression function, compresses downlink channel estimates using the compression function, and transmits the compressed downlink channel estimates. The compression function comprises a first function formed based on at least some of the parameters, a second function which is non-linear, and a quantizer. The first function is configured to receive input data, and to reduce a dimension of the input data. The decompression function comprises a first function configured to receive input data and provide output data in a higher dimensional space than the input data, and a second function which is non-linear.
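The structure of the compression function (dimension-reducing first function, non-linearity, quantizer) and its mirror-image decompression function can be sketched numerically (the matrices stand in for the parameters the network node transmits; tanh, the dimensions, and the uniform quantizer step are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
W_c = rng.standard_normal((4, 16))   # first function: reduces dimension 16 -> 4
W_d = rng.standard_normal((16, 4))   # decompression: expands 4 -> 16

def quantize(x, step=0.25):
    """Uniform scalar quantizer (the abstract's quantizer stage)."""
    return np.round(x / step) * step

def compress(h):
    """Linear dimension reduction, then a non-linearity, then quantization."""
    z = np.tanh(W_c @ h)
    return quantize(z)

def decompress(z):
    """First function maps to the higher-dimensional space; then a
    non-linearity, mirroring the compression side."""
    return np.tanh(W_d @ z)
```

In the patent's setting the terminal device forms `compress` from parameters the network node determined, so both ends agree on the functions without ever exchanging raw channel estimates.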
Hybrid edge-cloud compression of volumetric 3D data for efficient 5G transmission
A hybrid implementation enables sharing the processing of 3D data locally and remotely based on processing and bandwidth factors. The hybrid implementation is flexible in determining what information to process locally, what information to transmit to a remote system, and what information to process remotely. Based on the available bandwidth and the computing power/availability both locally and remotely, the hybrid implementation is able to direct the processing of the data. By performing some of the processing locally and some of it remotely, the data can be processed more efficiently.
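The local-vs-remote decision can be sketched as a simple cost comparison (all names, units, and the cost model are hypothetical; a real system would also weigh availability and partial splits):

```python
def plan_processing(data_mb, local_flops, remote_flops,
                    bandwidth_mbps, cost_flops_per_mb):
    """Pick where to process a chunk of 3D data: compare the time to
    process it locally against the time to ship it over the link plus
    the time to process it remotely."""
    local_time = data_mb * cost_flops_per_mb / local_flops
    remote_time = (data_mb * 8 / bandwidth_mbps            # transmission
                   + data_mb * cost_flops_per_mb / remote_flops)  # remote compute
    return "local" if local_time <= remote_time else "remote"
```

On a fast 5G link with a powerful remote system the comparison tips toward "remote"; when bandwidth collapses, the same chunk is kept local.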
DATA COMPRESSION METHOD AND DATA DECOMPRESSION METHOD FOR ELECTRONIC DEVICE, AND ELECTRONIC DEVICE
A data compression method and a data decompression method for an electronic device, and an electronic device, are provided to produce smaller compressed data, thereby reducing the overheads caused by data storage and by receiving/sending. Each of one or more matching rules includes one or more matching entries; each matching entry is used to perform matching on one or more pieces of to-be-matched data in a to-be-matched data group, and each matching entry includes a preset field and a matching rule field. The method includes: receiving a to-be-matched data group; obtaining a target matching rule by performing matching based on the preset field and the matching rule field in each matching entry of the one or more matching rules; and performing processing based on a compression rule field in each matching entry of the target matching rule.
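One way to read the entry structure is as (preset bits, mask bits, compression rule) triples; the sketch below uses that reading, with entirely hypothetical rule names and field encodings:

```python
# Hypothetical matching entries: the preset field holds the expected bit
# pattern, the matching rule field says which bits must match, and the
# compression rule field names the processing applied on a hit.
ENTRIES = [
    # (preset, mask, compression_rule) -- all values illustrative
    (0x00, 0xFF, "zero_byte"),   # byte equals 0x00 exactly
    (0x80, 0x80, "high_half"),   # high bit set, low bits don't matter
]

def match_group(group):
    """Return the compression rule of the first entry whose preset and
    mask match every piece of to-be-matched data in the group, or None."""
    for preset, mask, rule in ENTRIES:
        if all((b & mask) == preset for b in group):
            return rule
    return None
```

Masked matching lets one short entry cover a whole family of byte values, which is what lets the compressed output shrink relative to per-value rules.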
CACHE ARRANGEMENTS FOR DATA PROCESSING SYSTEMS
A data processing system is provided comprising a cache system configured to transfer data between a processor and a memory system. The cache system comprises a cache. When a block of data that is stored in memory in a compressed form is to be loaded into the cache, the block of data is stored into a group of one or more cache lines of the cache, and the associated compression metadata for the compressed block of data is provided as separate side-band data.
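The arrangement of a cache-line group with its compression metadata kept out-of-band can be sketched structurally (the class layout, the 64-byte line size, and the metadata keys are illustrative assumptions, not the patented hardware design):

```python
from dataclasses import dataclass

@dataclass
class CacheLineGroup:
    """A group of cache lines holding one decompressed block, with the
    compression metadata carried as separate side-band data rather than
    stored inside the lines themselves."""
    lines: list      # the cache lines holding the block's data
    sideband: dict   # compression metadata, e.g. scheme and compressed size

class Cache:
    LINE_SIZE = 64

    def __init__(self):
        self.groups = {}

    def load_compressed_block(self, addr, compressed, decompress, scheme):
        """Decompress a block from memory into a group of cache lines,
        keeping the compression metadata alongside as side-band data."""
        data = decompress(compressed)
        lines = [data[i:i + self.LINE_SIZE]
                 for i in range(0, len(data), self.LINE_SIZE)]
        self.groups[addr] = CacheLineGroup(
            lines=lines,
            sideband={"scheme": scheme, "compressed_size": len(compressed)},
        )
```

Keeping the metadata side-band means the cache lines hold only plain data, so hits are serviced without touching the compression state, while write-back can still consult the side-band record.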