Patent classifications
H03M7/6058
Methods and Apparatuses for Managing Compression of Information in a Wireless Network
A method performed by a network node (110) is described herein. The method is for managing compression of information to be transmitted by a transmitting device (130) in a set of packets. The network node (110) determines (304) whether or not to apply a first compression algorithm to the information comprised in the set of packets. The determining (304) is based on at least one of: i) a compression efficiency of the first compression algorithm applied to a first information comprised in a first set of packets, and ii) a computational complexity of the first algorithm. The information is a second information and the set of packets is a second set of packets. Each of the first and the second sets of packets comprises at least one packet. The network node (110) then initiates (305) providing, based on a result of the determination, an indication of the result to the transmitting device (130).
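The determination described above can be sketched as a simple decision function. The `compress` callable and the `max_cost_us_per_byte` threshold are hypothetical stand-ins: the patent specifies neither the compression algorithm nor the cost metric.

```python
import time

def should_compress(first_packets, compress, max_cost_us_per_byte=1.0):
    """Decide, from a first set of packets, whether to apply the
    compression algorithm to a second set of packets, based on
    (i) compression efficiency and (ii) a proxy for computational
    complexity (measured wall-clock time per input byte)."""
    original = b"".join(first_packets)
    start = time.perf_counter()
    compressed = compress(original)
    elapsed_us = (time.perf_counter() - start) * 1e6
    ratio = len(compressed) / max(len(original), 1)  # efficiency: lower is better
    cost = elapsed_us / max(len(original), 1)        # complexity proxy
    # Compress the second set only if compression shrinks the data
    # and its per-byte cost stays under the budget.
    return ratio < 1.0 and cost < max_cost_us_per_byte
```

The result of this decision would then be signaled to the transmitting device as the indication mentioned in the abstract.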
Queue management for direct memory access
A direct memory access (DMA) engine may be responsible for enabling and controlling DMA data flow within a computing system. The DMA engine moves blocks of data, associated with descriptors in a plurality of queues, from a source to a destination memory location or address, autonomously from control by the computer system's processor. Based on analysis of the data blocks linked to the descriptors in the queues, the DMA engine and its associated DMA fragmenter ensure that data blocks linked to descriptors in the queues do not remain idle for an exorbitant period of time. The DMA fragmenter may divide large data blocks into smaller data blocks to ensure that the processing of large data blocks does not preclude the timely processing of smaller data blocks associated with one or more descriptors in the queues. The data blocks may be two-dimensional data blocks.
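The fragmenter's splitting step can be illustrated with a minimal sketch. The `Descriptor` fields and the fixed `max_fragment` policy are assumptions for illustration; the patent does not fix the fragment-size rule.

```python
from dataclasses import dataclass

@dataclass
class Descriptor:
    src: int      # source address
    dst: int      # destination address
    length: int   # bytes to move

def fragment(desc, max_fragment):
    """Split a large DMA descriptor into smaller ones so that no single
    transfer monopolizes the engine and smaller blocks in other queues
    are processed in a timely manner."""
    fragments = []
    offset = 0
    while offset < desc.length:
        chunk = min(max_fragment, desc.length - offset)
        fragments.append(Descriptor(desc.src + offset, desc.dst + offset, chunk))
        offset += chunk
    return fragments
```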
LOSSLESS COMPRESSION OF LARGE DATA SETS FOR SYSTEMS ON A CHIP
A system on a chip (SoC) includes a first subsystem, a second subsystem and a compression block connected to the first and second subsystems, wherein the compression block includes a decoder and an encoder. The compression block receives spill data generated by a compute element in one of the first and second subsystems, compresses the spill data using the encoder and stores the compressed spill data in a data block in local memory of one of the compute elements.
Method for compressing digital signal data and signal compressor module
A method of compressing digital signal data obtained from a signal is described. The method includes: receiving digital signal data associated with a signal and/or generating digital signal data based on a signal; transforming the digital signal data into a transform domain, thereby generating transformed digital signal data; determining at least one characteristic parameter based on the transformed digital signal data by an artificial intelligence circuit; detecting and/or classifying at least one wanted signal portion based on the at least one characteristic parameter by the artificial intelligence circuit; and storing only a subset of the digital signal data that is associated with the at least one wanted signal portion. Further, a signal compressor circuit for compressing digital signal data obtained from a signal and a computer program are described.
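The pipeline above can be sketched block by block. A naive DFT stands in for the transform, and a fixed magnitude threshold stands in for the artificial intelligence circuit's detection/classification step; both are assumptions, since the patent does not pin down either component.

```python
import cmath, math

def compress_signal(samples, block, threshold):
    """Transform each block of samples to a spectral domain, derive a
    characteristic parameter (peak bin magnitude), classify the block
    as wanted or unwanted, and store only the wanted blocks."""
    kept = []
    for start in range(0, len(samples), block):
        seg = samples[start:start + block]
        n = len(seg)
        # magnitude spectrum of the block (naive O(n^2) DFT)
        mags = [abs(sum(seg[t] * cmath.exp(-2j * math.pi * k * t / n)
                        for t in range(n))) for k in range(n)]
        if max(mags) > threshold:       # detection/classification step
            kept.append((start, seg))   # store only the wanted portion
    return kept
```

Blocks whose spectrum never rises above the threshold are discarded, so only the subset of the signal data associated with the wanted portions is retained.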
Neural network processor using compression and decompression of activation data to reduce memory bandwidth utilization
A deep neural network (DNN) module compresses and decompresses neuron-generated activation data to reduce the utilization of memory bus bandwidth. The compression unit receives an uncompressed chunk of data generated by a neuron in the DNN module. The compression unit generates a mask portion and a data portion of a compressed output chunk. The mask portion encodes the presence and location of the zero and non-zero bytes in the uncompressed chunk of data. The data portion stores truncated non-zero bytes from the uncompressed chunk of data. A decompression unit receives a compressed chunk of data from memory in the DNN processor or memory of an application host. The decompression unit decompresses the compressed chunk of data using the mask portion and the data portion.
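The mask/data scheme can be sketched at byte granularity. This is a simplified sketch: it models the zero-byte mask and the packed non-zero bytes, but not the truncation of non-zero bytes described in the abstract.

```python
def compress_chunk(chunk):
    """Encode a bit mask of non-zero byte positions plus the non-zero
    bytes themselves. Zero bytes (common in activation data) cost only
    one mask bit each."""
    mask = bytearray((len(chunk) + 7) // 8)
    data = bytearray()
    for i, b in enumerate(chunk):
        if b != 0:
            mask[i // 8] |= 1 << (i % 8)  # record presence and location
            data.append(b)                 # store the non-zero byte
    return bytes(mask), bytes(data)

def decompress_chunk(mask, data, length):
    """Rebuild the uncompressed chunk from its mask and data portions."""
    out = bytearray(length)
    it = iter(data)
    for i in range(length):
        if mask[i // 8] & (1 << (i % 8)):
            out[i] = next(it)
    return bytes(out)
```

For sparse activation data, the mask plus data portions are smaller than the original chunk, which is how the scheme reduces memory bus bandwidth.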
COMPRESSING PROBABILITY TABLES FOR ENTROPY CODING
This disclosure provides methods, devices, and systems for data compression. The present implementations more specifically relate to encoding techniques for compressing probability tables used for entropy coding. In some aspects, an entropy encoder may encode a probability table so that one or more contexts are represented by fewer bits than would otherwise be needed to represent the frequency of each symbol as a proportion of the total frequency of all symbols associated with such contexts. For example, if a given row of the probability table (prior to encoding) includes a number (M) of entries each having a binary value represented by a number (K) of bits, the same row of entries may be represented by fewer than M*K bits in the encoded probability table.
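One simple way to achieve the "fewer than M*K bits" property is to observe that the frequencies in a row sum to a known total, so the last entry is redundant and can be omitted, leaving (M-1)*K bits. The patent's actual encoding is more elaborate; this sketch only demonstrates the counting argument.

```python
def encode_row(freqs):
    """Drop the last frequency in a row of the probability table: it can
    be recovered from the known total, so M entries of K bits each are
    represented in (M-1)*K bits."""
    return freqs[:-1]

def decode_row(encoded, total):
    """Rebuild the full row by deriving the omitted last entry."""
    return encoded + [total - sum(encoded)]
```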
DATA COMPRESSION ALGORITHM
A method for augmenting a dictionary of a data compression scheme. For each input string, the result of a sliding window search is compared to the result of a dictionary search. The dictionary is augmented with the sliding window search result if the sliding window search result is longer than the dictionary search result. An embodiment of the disclosure implements multiple sliding windows, each sliding window having an associated size that depends on a corresponding match length. For one embodiment, each sliding window has a corresponding hash function based upon the match length.
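The core augmentation rule can be sketched with a single window (the multi-window, hashed variants are omitted). The linear window scan and the `set`-based dictionary are illustrative choices, not the patent's data structures.

```python
def longest_window_match(data, pos, window):
    """Longest prefix of data[pos:] also found in the preceding window."""
    best = b""
    for i in range(max(0, pos - window), pos):
        length = 0
        while (i + length < pos and pos + length < len(data)
               and data[i + length] == data[pos + length]):
            length += 1
        if length > len(best):
            best = data[pos:pos + length]
    return best

def longest_dict_match(data, pos, dictionary):
    """Longest dictionary entry matching at the current position."""
    best = b""
    for entry in dictionary:
        if data.startswith(entry, pos) and len(entry) > len(best):
            best = entry
    return best

def augment(data, dictionary, window=32):
    """At each position, compare the sliding-window match to the
    dictionary match; add the window match to the dictionary when it
    is the longer of the two."""
    pos = 0
    while pos < len(data):
        w = longest_window_match(data, pos, window)
        d = longest_dict_match(data, pos, dictionary)
        if len(w) > len(d):
            dictionary.add(w)
        pos += max(len(w), len(d), 1)
```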
Managing memory fragmentation in hardware-assisted data compression
Systems, devices, and methods for managing fragmentation in hardware-assisted compression of data in physical computer memory, which may reduce internal fragmentation. An example computer-implemented method comprises: providing, by a memory management program to compression hardware, a compression command including an address in physical computer memory of data to be compressed and a list of at least two available buffers for storing compressed data; using, by the compression hardware, the address included in the compression command to retrieve uncompressed data; compressing the uncompressed data; and selecting, by the compression hardware, from the list of at least two available buffers, at least two buffers for storing compressed data based on an amount of space that would remain if the compressed data were stored in the at least two buffers, wherein each of the at least two selected buffers differs in size from at least one other of the selected buffers.
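The selection criterion can be sketched as a brute-force search over buffer pairs. The exhaustive search is an illustrative assumption; the patent specifies the criterion (least remaining space, differently-sized buffers), not the search strategy.

```python
from itertools import combinations

def select_buffers(buffer_sizes, compressed_size):
    """Pick two differently-sized buffers whose combined capacity holds
    the compressed data while leaving the least unused space, or None
    if no pair is large enough."""
    best = None
    for pair in combinations(buffer_sizes, 2):
        if pair[0] == pair[1]:
            continue  # selected buffers must differ in size
        leftover = pair[0] + pair[1] - compressed_size
        if leftover >= 0 and (best is None
                              or leftover < best[0] + best[1] - compressed_size):
            best = pair
    return list(best) if best is not None else None
```

Choosing the tightest-fitting pair is what reduces internal fragmentation: less unused space is stranded inside the buffers that hold the compressed data.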
Electronic device and method for compressing sampled data
An electronic device for compressing sampled data comprises a memory element and a processing element. The memory element is configured to store sampled data points and sampled times. The processing element is in electronic communication with the memory element and is configured to receive a plurality of sampled data points, determine a slope for each sampled data point in succession, the slope being a value of a change between the sampled data point and its successive sampled data point, and store the sampled data point in the memory element when the slope changes in value from a previous sampled data point.
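The slope-change rule above can be sketched over (time, value) pairs. The exact-equality slope comparison and the retention of the first and last samples are illustrative choices; a practical device would likely compare slopes within a tolerance.

```python
def compress_samples(points):
    """Keep a sampled point only when the slope to its successor differs
    from the previous slope, so runs of linearly-changing samples
    collapse to their endpoints. `points` is a list of (time, value)."""
    if len(points) <= 2:
        return list(points)

    def slope(a, b):
        return (b[1] - a[1]) / (b[0] - a[0])

    kept = [points[0]]
    prev = slope(points[0], points[1])
    for i in range(1, len(points) - 1):
        s = slope(points[i], points[i + 1])
        if s != prev:            # slope changed: this point is a breakpoint
            kept.append(points[i])
        prev = s
    kept.append(points[-1])
    return kept
```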
Overwriting compressed data extents
A technique for overwriting compressed data tests whether new data compressed with a first compression procedure fits within spaces provided for previous data. If the compressed new data does not fit, the technique compresses the new data using a second compression procedure. Assuming the second compression procedure reduces the compressed size of the new data to fit the available space, the technique stores the new data in the same location as the previous data. In this manner, overwrites can be accommodated in place without the need to create new mapping metadata.
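The two-stage overwrite can be sketched as follows. zlib at levels 1 and 9 stands in for the first (fast) and second (stronger) compression procedures, which the patent leaves unspecified.

```python
import zlib

def overwrite_in_place(new_data, slot_size):
    """Try the first compression procedure; if the result does not fit
    the space left by the previous data, retry with the second, stronger
    procedure. Returns (compressed_bytes, procedure_name), or None if
    neither fits (new mapping metadata would then be required)."""
    first = zlib.compress(new_data, 1)       # fast procedure
    if len(first) <= slot_size:
        return first, "first"
    second = zlib.compress(new_data, 9)      # stronger, slower procedure
    if len(second) <= slot_size:
        return second, "second"
    return None                              # cannot overwrite in place
```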