H03M7/6058

TECHNIQUES TO ENABLE STATEFUL DECOMPRESSION ON HARDWARE DECOMPRESSION ACCELERATION ENGINES
20220405142 · 2022-12-22

A hardware decompression acceleration engine including: an input buffer for receiving to-be-decompressed data from a software layer of a host computer; a decompression processing unit coupled to the input buffer for decompressing the to-be-decompressed data, the decompression processing unit further receiving first and second flags from the software layer of the host computer, wherein the first flag is indicative of a location of the to-be-decompressed data in a to-be-decompressed data block and the second flag is indicative of a presence of an intermediate state; and an output buffer for storing decompressed data from the decompression processing unit.
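The streaming behaviour described above can be illustrated in software. A minimal Python sketch, using zlib's stateful decompressor as a stand-in for the engine: the decompressor object plays the role of the saved intermediate state, and the chunk's position within the block stands in for the first flag. The function names and chunking scheme are illustrative, not from the patent.

```python
import zlib

def decompress_chunks(chunks):
    """Stateful streaming decompression: the decompressor object carries
    the intermediate state between calls (analogue of the second flag)."""
    d = zlib.decompressobj()                    # holds the intermediate state
    out = bytearray()
    for i, chunk in enumerate(chunks):
        is_last = (i == len(chunks) - 1)        # analogue of the first flag:
        out += d.decompress(chunk)              # position within the data block
        if is_last:
            out += d.flush()                    # finalize once the block ends
    return bytes(out)

# The compressed block may be split into arbitrary pieces; state carries over.
data = zlib.compress(b"hello stateful world" * 100)
chunks = [data[i:i + 7] for i in range(0, len(data), 7)]
assert decompress_chunks(chunks) == b"hello stateful world" * 100
```

Because the state persists across calls, each piece of the block can be submitted independently, mirroring how the engine resumes from a saved intermediate state between submissions.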

File system reorganization in the presence of inline compression

A method for file system reorganization in the presence of inline compression includes obtaining a virtual block pointer for an original compressed segment to be reorganized, the original compressed segment comprising compressed allocation units of data stored in a storage system, wherein the virtual block pointer comprises an extent list identifying the compressed allocation units in the original compressed segment and a pointer to where the original compressed segment is stored; copying only the referenced compressed allocation units in the original compressed segment to a new compressed segment in a substantially contiguous manner; updating the extent list to identify the referenced compressed allocation units in the new compressed segment, and the pointer to where the new compressed segment is stored; and freeing the original compressed segment.
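The reorganization steps can be sketched with a hypothetical in-memory model. The names (`storage`, `vbp`, `reorganize_segment`) and data layout are illustrative, not taken from the patent; the sketch shows the sequence of operations only: copy referenced units contiguously, update the extent list and pointer, free the original.

```python
def reorganize_segment(storage, vbp, referenced_ids):
    """Copy only the referenced allocation units of the original compressed
    segment into a new contiguous segment, update the virtual block
    pointer's extent list and segment pointer, then free the original."""
    old_ptr = vbp["segment_ptr"]
    old_seg = storage[old_ptr]
    new_seg = bytearray()
    new_extents = []
    for ext in vbp["extent_list"]:
        if ext["au_id"] in referenced_ids:           # skip unreferenced units
            au = old_seg[ext["offset"]:ext["offset"] + ext["length"]]
            new_extents.append({"au_id": ext["au_id"],
                                "offset": len(new_seg),  # contiguous packing
                                "length": len(au)})
            new_seg += au
    new_ptr = max(storage) + 1
    storage[new_ptr] = bytes(new_seg)
    vbp["extent_list"] = new_extents                 # updated extent list
    vbp["segment_ptr"] = new_ptr                     # updated pointer
    del storage[old_ptr]                             # free original segment

storage = {0: b"AAAABBBBCCCC"}
vbp = {"segment_ptr": 0,
       "extent_list": [{"au_id": 1, "offset": 0, "length": 4},
                       {"au_id": 2, "offset": 4, "length": 4},
                       {"au_id": 3, "offset": 8, "length": 4}]}
reorganize_segment(storage, vbp, {1, 3})             # unit 2 is unreferenced
assert storage[vbp["segment_ptr"]] == b"AAAACCCC"
```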

STORAGE DEVICE AND OPERATING METHOD OF THE STORAGE DEVICE

A memory device may include a data receiver configured to receive a plurality of read data chunks from a plurality of memory areas which transmit and receive data through one channel, a data compressor configured to generate a plurality of compressed data chunks from each of the plurality of read data chunks, and a data output unit configured to simultaneously output the plurality of compressed data chunks through the channel in response to a data output command.
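A software sketch of this data path (not the patent's hardware): several read chunks are compressed individually, then emitted over the single shared "channel" as one length-prefixed burst, so the receiver can split them apart again. The framing format is an assumption for illustration.

```python
import struct
import zlib

def compress_and_frame(read_chunks):
    """Compress each read chunk and emit all of them as one burst."""
    frames = []
    for chunk in read_chunks:
        comp = zlib.compress(chunk)
        frames.append(struct.pack(">I", len(comp)) + comp)  # length prefix
    return b"".join(frames)          # one burst over the shared channel

def deframe_and_decompress(burst):
    """Receiver side: split the burst back into chunks and decompress."""
    out, pos = [], 0
    while pos < len(burst):
        (n,) = struct.unpack_from(">I", burst, pos)
        pos += 4
        out.append(zlib.decompress(burst[pos:pos + n]))
        pos += n
    return out

chunks = [b"x" * 100, b"y" * 200, b"z" * 50]
assert deframe_and_decompress(compress_and_frame(chunks)) == chunks
```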

NEURAL NETWORK PROCESSOR USING COMPRESSION AND DECOMPRESSION OF ACTIVATION DATA TO REDUCE MEMORY BANDWIDTH UTILIZATION

A deep neural network (“DNN”) module can compress and decompress neuron-generated activation data to reduce the utilization of memory bus bandwidth. The compression unit can receive an uncompressed chunk of data generated by a neuron in the DNN module. The compression unit generates a mask portion and a data portion of a compressed output chunk. The mask portion encodes the presence and location of the zero and non-zero bytes in the uncompressed chunk of data. The data portion stores truncated non-zero bytes from the uncompressed chunk of data. A decompression unit can receive a compressed chunk of data from memory in the DNN processor or memory of an application host. The decompression unit decompresses the compressed chunk of data using the mask portion and the data portion. This can reduce memory bus utilization, allow a DNN module to complete processing operations more quickly, and reduce power consumption.
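The mask-plus-data scheme described above is easy to sketch in Python. One bit per byte records whether that byte is non-zero; the data portion keeps only the non-zero bytes. (The patent also truncates the stored non-zero bytes; that step is omitted here for simplicity.)

```python
def compress_chunk(chunk):
    """Split a chunk into a zero/non-zero bitmask and the non-zero bytes."""
    mask, data = bytearray(), bytearray()
    for i in range(0, len(chunk), 8):
        bits = 0
        for j, b in enumerate(chunk[i:i + 8]):
            if b != 0:
                bits |= 1 << j           # mark position of a non-zero byte
                data.append(b)           # keep only non-zero bytes
        mask.append(bits)
    return bytes(mask), bytes(data)

def decompress_chunk(mask, data, length):
    """Rebuild the chunk: zeros where the mask says zero, data bytes elsewhere."""
    out = bytearray(length)
    it = iter(data)
    for i, bits in enumerate(mask):
        for j in range(8):
            if bits & (1 << j) and i * 8 + j < length:
                out[i * 8 + j] = next(it)
    return bytes(out)

raw = b"\x00\x07\x00\x00\x05\x00\x01\x00\x00\x00\x02"
mask, data = compress_chunk(raw)
assert data == b"\x07\x05\x01\x02"               # only non-zero bytes stored
assert decompress_chunk(mask, data, len(raw)) == raw
```

For sparse activation tensors, where most bytes are zero, the mask overhead (one bit per byte) is quickly repaid by the bytes no longer transferred over the memory bus.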

VLSI EFFICIENT HUFFMAN ENCODING APPARATUS AND METHOD
20170366198 · 2017-12-21 ·

A compression algorithm based on Huffman coding is disclosed that is adapted to be readily implemented using VLSI design. A data file may be processed to replace duplicate data with copy commands including an offset and length, such as according to the LZ algorithm. A Huffman code may then be generated for parts of the file. The Huffman code may be generated according to a novel method that generates Huffman code lengths for literals in a data file without first sorting the literal statistics. The Huffman code lengths may be constrained to be no longer than a maximum length, and the Huffman code may be modified to provide an acceptable overflow probability and be in canonical order. Literals, offsets, and lengths may be separately encoded. The different values for these data sets may be assigned to a limited number of bins for the purpose of generating usage statistics used for generating Huffman codes.
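The canonical ordering mentioned above has a standard construction, sketched below; note this shows only the conventional assignment of canonical codes from given code lengths, not the patent's unsorted length-generation method.

```python
def canonical_codes(lengths):
    """Assign canonical Huffman codes from a symbol -> code-length mapping.
    Symbols are processed in (length, symbol) order; each code is the
    previous code plus one, left-shifted when the length increases."""
    code, prev_len, codes = 0, 0, {}
    for sym, ln in sorted(lengths.items(), key=lambda kv: (kv[1], kv[0])):
        code <<= (ln - prev_len)                 # widen when length grows
        codes[sym] = format(code, "0{}b".format(ln))
        code += 1
        prev_len = ln
    return codes

# Lengths 1, 2, 2 yield the canonical prefix-free codes 0, 10, 11.
assert canonical_codes({"a": 1, "b": 2, "c": 2}) == {"a": "0", "b": "10", "c": "11"}
```

A canonical code is fully described by its code lengths alone, which is why hardware (and formats such as DEFLATE) transmit only the lengths and regenerate the codes with this rule.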

EXPLOITING LOCALITY OF PRIME DATA FOR EFFICIENT RETRIEVAL OF DATA THAT HAS BEEN LOSSLESSLY REDUCED USING A PRIME DATA SIEVE
20230198549 · 2023-06-22

An amount of memory needed to hold prime data elements during reconstitution may be determined by examining the creation and usage of prime data elements and their spatial and temporal characteristics during data distillation.

In-memory data compression complementary to host data compression
09836248 · 2017-12-05

A storage infrastructure, device and associated method for storing compressed data is provided. Included is a method for compressing data on a storage device in a storage infrastructure, including: receiving a compressed extent from a host, wherein the compressed extent includes data compressed with entropy-coding-less data compression; receiving logical identification information about the compressed extent from the host; performing in-memory entropy encoding on the compressed extent to generate a compressed unit; storing the compressed unit in a physical memory; and in a case where the host is aware of the in-memory entropy encoding, reporting size information of the compressed unit back to the host.
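A hedged sketch of the two-stage flow. As a stand-in for the host's entropy-coding-less compressor, a trivial run-length encoder is used here (the patent does not specify one); the "device" then applies zlib as its in-memory entropy-encoding stage and reports the final size back only when the host is aware of it.

```python
import zlib

def host_rle(data):
    """Stage 1, host side: run-length encoding, no entropy coding."""
    out, i = bytearray(), 0
    while i < len(data):
        j = i
        while j < len(data) and j - i < 255 and data[j] == data[i]:
            j += 1
        out += bytes([j - i, data[i]])           # (run length, byte) pairs
        i = j
    return bytes(out)

def device_store(extent, lba, memory, host_aware=True):
    """Stage 2, device side: in-memory entropy encoding, then store."""
    unit = zlib.compress(extent)                 # entropy-encoding stage
    memory[lba] = unit
    return len(unit) if host_aware else None     # report size if host aware

memory = {}
size = device_store(host_rle(b"a" * 100 + b"b" * 50), 7, memory)
assert size == len(memory[7])
```

Splitting the pipeline this way keeps the compute-light dictionary/run stage on the host while the device absorbs the entropy-coding work, which is the complementarity the title refers to.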

Encoder supporting multiple code rates and code lengths

An encoder that supports multiple code rates and code lengths is disclosed. A shift register utilized by the encoder may be scaled in size based on a selected code rate or code length. The shift register shifts a bit series for the matrix without requiring fixed feedback points within the register. The sizes of the matrix and bit series are based on the selected code rate or code length, and the encoder loads the bit series into a first portion of the shift register, and a division of the bit series into a second portion of the shift register located adjacent to the first portion. The encoder periodically repopulates the shift register from memory to simulate circular shifting of the bit series without feedback points. Accordingly, complexity of the encoder is reduced.
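A loose software analogue of the repopulation idea, under the assumption that the key point is avoiding a wired feedback path: instead of feeding the register's output bit back to its input, each step's register contents are re-read directly from "memory" (the stored bit series), which simulates circular shifting for any chosen register width.

```python
def circular_windows(bits, width, steps):
    """Simulate circular shifting of a stored bit series without a feedback
    path: repopulate the register window from memory at every step."""
    n = len(bits)
    out = []
    for s in range(steps):
        # the register for this step, read straight from the stored series
        window = [bits[(s + k) % n] for k in range(width)]
        out.append(window)
    return out

# A width-3 register over the series 1,0,1,1 advances one position per step.
assert circular_windows([1, 0, 1, 1], 3, 2) == [[1, 0, 1], [0, 1, 1]]
```

Since `width` is a parameter rather than a fixed wiring, the same loop serves any code rate or code length, which is the scaling property the abstract describes.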

Compression/decompression using index correlating uncompressed/compressed content

Compression of data that permits direct reconstruction of arbitrary portions of the uncompressed data. Also, the direct reconstruction of arbitrary portions of the uncompressed data. Conventional compression is done such that decompression has to begin either at the very beginning of the data, or at particular intervals (e.g., at block boundaries—every 64 kilobytes) within the data. However, the principles described herein permit decompression to begin at any point within the compressed data, without having to decompress any prior portion of the file. Thus, the principles described herein permit random access of the compressed data. In accordance with the principles described herein, this is accomplished by using an index that correlates positions within the uncompressed data with positions within the compressed data.
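The index idea can be demonstrated with independently compressed blocks: the index records, for each block, the pair (uncompressed offset, compressed offset), so a read at any position needs to decompress only the block(s) covering it. The fixed block size and index layout here are assumptions for illustration.

```python
import bisect
import zlib

BLOCK = 1024

def compress_with_index(data):
    """Compress in independent blocks; index maps uncompressed offsets
    to compressed offsets, enabling random access."""
    out, index = bytearray(), []
    for i in range(0, len(data), BLOCK):
        index.append((i, len(out)))              # (uncompressed, compressed)
        out += zlib.compress(data[i:i + BLOCK])
    index.append((len(data), len(out)))          # sentinel marks the end
    return bytes(out), index

def read_at(compressed, index, pos, length):
    """Read uncompressed bytes at pos without decompressing prior data."""
    starts = [u for u, _ in index]
    b = bisect.bisect_right(starts, pos) - 1     # block containing pos
    u0, c0 = index[b]
    _, c1 = index[b + 1]
    block = zlib.decompress(compressed[c0:c1])   # decompress one block only
    chunk = block[pos - u0:pos - u0 + length]
    if len(chunk) < length and b + 1 < len(index) - 1:
        # the read spans into the next block; fetch the remainder
        chunk += read_at(compressed, index, u0 + len(block), length - len(chunk))
    return chunk

data = bytes(range(256)) * 16                    # 4096 bytes of sample data
comp, idx = compress_with_index(data)
assert read_at(comp, idx, 1000, 100) == data[1000:1100]
```

Smaller blocks give finer-grained random access at the cost of a larger index and slightly worse compression, which is the trade-off any such index design navigates.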

DATA COMPRESSION USING REDUCED NUMBERS OF OCCURRENCES
20220060196 · 2022-02-24

Systems, apparatus and methods are provided for compressing data. A method may comprise receiving an input data block to be compressed, determining numbers of occurrences for distinct symbols in the input data block, generating reduced numbers of occurrences for the distinct symbols based on the numbers of occurrences for the distinct symbols, and encoding the input data block using the reduced numbers of occurrences as the probability distribution of the distinct symbols in the input data block.
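The count-reduction step can be sketched as frequency normalization, a common preparation for range or ANS coders: raw symbol counts are scaled down to a small fixed total (a power of two here, by assumption) while keeping every observed symbol's count at least 1. The target total and drift-correction rule are illustrative choices, assuming the alphabet is much smaller than the target total.

```python
from collections import Counter

def reduce_counts(data, target_total=256):
    """Scale symbol counts down so they sum to target_total, keeping every
    observed symbol's count >= 1 so it remains encodable."""
    counts = Counter(data)
    total = len(data)
    reduced = {sym: max(1, c * target_total // total)
               for sym, c in counts.items()}
    # rounding leaves a small drift; absorb it into the most frequent symbol
    drift = target_total - sum(reduced.values())
    top = max(reduced, key=reduced.get)
    reduced[top] += drift
    return reduced

r = reduce_counts(b"aaaabbc")
assert sum(r.values()) == 256                    # usable as a fixed-size table
assert all(v >= 1 for v in r.values())
```

With the totals fixed and small, the encoder's probability table fits in a compact hardware structure, while the relative frequencies (and hence most of the coding efficiency) are preserved.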