H03M7/4006

MULTI-CONTEXT ENTROPY CODING FOR COMPRESSION OF GRAPHS
20230042018 · 2023-02-09 ·

Example embodiments relate to using a multi-context entropy coder for encoding adjacency lists. A system may obtain a graph having data (or multiple graphs) and may compress the data of the graph using a multi-context entropy coder. The multi-context entropy coder may encode adjacency lists within the data such that each integer is assigned to a different probability distribution. For example, operating the multi-context entropy coder may involve using a combination of arithmetic coding, Huffman coding, and ANS. The assignment of integers to the probability distributions may depend on each integer’s role and/or previous values of a similar kind. By using multi-context entropy coding, the computing system may increase the compression ratio while maintaining similar processing speed.
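
The context-selection idea can be sketched as follows. This is a minimal illustration, not the patented method: the context rule (role plus a coarse bucket of the previous value) and the class names are assumptions, and the per-context adaptive frequency tables stand in for whichever backend coder (arithmetic, Huffman, or ANS) would consume them.

```python
import math
from collections import defaultdict

def context_of(role, prev_value):
    # hypothetical context rule: the integer's role combined with a coarse
    # bucket of the previous value of a similar kind
    return (role, min(prev_value.bit_length(), 4))

class MultiContextModel:
    """One adaptive frequency table per context (illustrative sketch)."""

    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))

    def update(self, ctx, symbol):
        self.counts[ctx][symbol] += 1

    def code_length(self, ctx, symbol):
        # ideal entropy-coded length in bits under this context's distribution
        total = sum(self.counts[ctx].values())
        return math.log2(total / self.counts[ctx][symbol])
```

Because each context keeps its own distribution, a symbol that is frequent in its context costs fewer bits than a rare one, which is where the compression-ratio gain comes from.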

VARIABLE LENGTH CODING METHOD AND VARIABLE LENGTH DECODING METHOD
20180007358 · 2018-01-04 ·

A variable length coding method comprises: a coefficient value scanning step in which an RL sequence generation unit, a reordering unit, and a binarization unit scan coefficient values within a block in a predetermined scanning order starting at a higher-frequency component toward a lower-frequency component; and an arithmetic coding step in which an arithmetic coding unit and a table storage unit perform arithmetic coding on the absolute values of the coefficient values according to the scanning order used in the coefficient value scanning step, by switching between probability tables for use, wherein, in the arithmetic coding step, a probability table to be used is switched to another probability table in one direction, when the arithmetic-coded absolute values of the coefficient values include an absolute value exceeding a predetermined threshold value.
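
The one-directional table switching can be illustrated with a short sketch. The function below is hypothetical (the threshold, table count, and return value are assumptions): it records which probability table would be used for each coefficient's absolute value, advancing to the next table whenever a value exceeds the threshold and never moving back.

```python
def assign_probability_tables(abs_values, threshold=1, num_tables=4):
    # one-directional switching: once an absolute value exceeds the
    # threshold, advance to the next probability table; never return
    table = 0
    assignments = []
    for v in abs_values:
        assignments.append(table)          # table used to code this value
        if v > threshold and table < num_tables - 1:
            table += 1
    return assignments
```

Because the switch is one-directional, the sequence of table indices is monotone non-decreasing, which a decoder can reproduce without side information.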

Adaptive Stochastic Entropy Coding
20180007361 · 2018-01-04 ·

Adaptive stochastic entropy encoding may include identifying a current portion of an input video stream, and identifying a current probability distribution, which may be an adapted probability distribution associated with a previously encoded portion of the video stream. Adaptive stochastic entropy encoding may include identifying a forward update probability distribution based on the current portion, generating a modified probability distribution for the current portion based on the forward update probability distribution and the current probability distribution, generating an encoded portion based on the current portion and the modified probability distribution, and generating an adapted probability distribution based on the current probability distribution and the forward update probability distribution.
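
The interplay of the three distributions can be sketched numerically. The linear blend and the alpha weights below are assumptions for illustration; the abstract does not specify the mixing rule. The forward-update distribution is taken here as the empirical frequencies of the current portion.

```python
def mix(current, forward, alpha):
    # hypothetical linear blend of two probability distributions;
    # alpha weights the forward-update distribution
    return [(1 - alpha) * c + alpha * f for c, f in zip(current, forward)]

def encode_portion(portion, current, num_symbols, alpha=0.25):
    # forward-update distribution: empirical frequencies of this portion
    forward = [portion.count(s) / len(portion) for s in range(num_symbols)]
    modified = mix(current, forward, 0.5)   # used to encode this portion
    adapted = mix(current, forward, alpha)  # carried to the next portion
    return modified, adapted
```

The modified distribution drives the entropy coder for the current portion, while the adapted distribution becomes the "current" distribution for the next portion.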

HIGH-DENSITY COMPRESSION METHOD AND COMPUTING SYSTEM

Certain implementations of the disclosed technology may include methods and computing systems for performing high-density data compression, particularly on numerical data that demonstrates various patterns, and patterns of patterns. According to an example implementation, a method is provided. The method may include extracting a data sample from a data set, compressing the data sample using a first compression filter configuration, and calculating a compression ratio associated with the first compression filter configuration. The method may also include compressing the data sample using a second compression filter configuration and calculating a compression ratio associated with the second compression filter configuration. A particular compression filter configuration to utilize in compressing the entire data set may be selected based on a comparison of the compression ratio associated with the first compression filter configuration and a compression ratio associated with the second compression filter configuration.
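
The sample-then-select procedure can be sketched in a few lines. This is an illustrative stand-in, not the patented filters: zlib at different compression levels substitutes for the two "compression filter configurations", and the sample size is an arbitrary assumption.

```python
import zlib

def compression_ratio(data, compress):
    # ratio of uncompressed to compressed size; higher is better
    return len(data) / len(compress(data))

def select_configuration(data_set, configurations, sample_size=4096):
    # compress only a small sample under each candidate configuration and
    # pick the one with the best ratio for compressing the entire data set
    sample = data_set[:sample_size]
    return max(configurations, key=lambda c: compression_ratio(sample, c))
```

Usage: `select_configuration(data, [lambda d: zlib.compress(d, 1), lambda d: zlib.compress(d, 9)])` returns whichever stand-in filter compresses the sample better; only the winner is then applied to the full data set.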

ARITHMETIC ENCODER FOR ARITHMETICALLY ENCODING AND ARITHMETIC DECODER FOR ARITHMETICALLY DECODING A SEQUENCE OF INFORMATION VALUES, METHODS FOR ARITHMETICALLY ENCODING AND DECODING A SEQUENCE OF INFORMATION VALUES AND COMPUTER PROGRAM FOR IMPLEMENTING THESE METHODS

The invention describes an encoding scheme for arithmetically encoding a sequence of information values into an arithmetic coded bitstream, by providing the bitstream with entry point information that allows arithmetic decoding of the bitstream to be resumed from a predetermined entry point onward. A respective decoding scheme is also provided. These encoding and decoding schemes provide a more efficient coding concept with respect to decoding speed.
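
The entry-point mechanism can be sketched with independently decodable chunks. This is an assumption-laden stand-in: zlib replaces the arithmetic coder, and the chunking granularity is arbitrary; the point is only that recorded byte offsets let a decoder resume mid-stream without processing earlier data.

```python
import zlib

def encode_with_entry_points(values, chunk_size, encode_chunk):
    # each chunk is independently decodable; the recorded byte offsets act
    # as entry points from which decoding can be resumed
    stream = bytearray()
    entry_points = []
    for i in range(0, len(values), chunk_size):
        entry_points.append(len(stream))
        stream += encode_chunk(values[i:i + chunk_size])
    return bytes(stream), entry_points
```

With zlib as the stand-in coder, `zlib.decompressobj()` can start at `stream[entry_points[k]:]` and recover the k-th chunk directly, which is what makes parallel or random-access decoding faster.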

Low-latency encoding using a bypass sub-stream and an entropy encoded sub-stream
11705924 · 2023-07-18 ·

A system comprises an encoder configured to entropy encode a bitstream comprising both compressible and non-compressible symbols. The encoder parses the bitstream into a compressible symbol sub-stream and a non-compressible symbol sub-stream. The non-compressible symbol sub-stream bypasses an entropy encoding component of the encoder while the compressible symbol sub-stream is entropy encoded. When a quantity of bytes of entropy encoded symbols and bypass symbols is accumulated, a chunk of fixed or known size is formed using the accumulated entropy encoded symbol bytes and the bypass bytes, without waiting for the full bitstream to be processed by the encoder. In a complementary manner, a decoder reconstructs the bitstream from the packets or chunks.
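
The chunk-forming behavior can be sketched as follows. Everything here is a simplification for illustration: the identity `entropy_encode` stands in for the real entropy coder, and the predicate and chunk layout (coded bytes first, then bypass bytes) are assumptions.

```python
def form_chunks(symbols, is_bypass, entropy_encode, chunk_size):
    # accumulate entropy-coded and bypass bytes separately; emit a
    # fixed-size chunk as soon as enough bytes exist, without waiting
    # for the full bitstream to be processed
    coded, bypass, chunks = bytearray(), bytearray(), []
    for s in symbols:
        if is_bypass(s):
            bypass.append(s)              # raw bytes skip the entropy coder
        else:
            coded.extend(entropy_encode(s))
        while len(coded) + len(bypass) >= chunk_size:
            take = min(len(coded), chunk_size)
            chunk = bytes(coded[:take]) + bytes(bypass[:chunk_size - take])
            del coded[:take]
            del bypass[:chunk_size - take]
            chunks.append(chunk)
    return chunks, bytes(coded), bytes(bypass)
```

Emitting a chunk as soon as `chunk_size` bytes are available is what bounds the latency: downstream consumers never wait for the encoder to finish the whole bitstream.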

MULTI-THREADED CABAC DECODING
20220394284 · 2022-12-08 ·

A method, system, and computer readable medium for improved decoding of CABAC-encoded media are described. A decoding loop includes decoding an encoded binary element from a sequence of encoded binary elements to generate a decoded binary element using a context probability. A next context probability for a next encoded binary element in the sequence is determined from the decoded binary element, and the next context probability for decoding the next encoded binary element is provided to the decoding loop for a next iteration.
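
The loop's data dependency can be made concrete with a toy sketch. The decode and context-update functions below are placeholders, not CABAC: the point is only the pipeline pattern in which the next context is derived from the just-decoded bin and handed to the next iteration.

```python
def decode_sequence(encoded_bins, decode_bin, next_context, ctx):
    # pipeline pattern: as soon as a bin is decoded, the context for the
    # next bin can be computed and handed to the next loop iteration
    decoded = []
    for enc in encoded_bins:
        bit = decode_bin(enc, ctx)
        ctx = next_context(bit, ctx)   # ready before the next bin arrives
        decoded.append(bit)
    return decoded
```

Splitting "decode with current context" from "derive next context" is what lets the two stages run on separate threads in the multi-threaded variant.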

QUALITY SCORE COMPRESSION
20230040143 · 2023-02-09 ·

Methods, systems, and computer programs for compressing nucleic acid sequence data. A method can include obtaining nucleic acid sequence data representing: (i) a read sequence, and (ii) a plurality of quality scores; determining whether the read sequence includes at least one “N” base; based on a determination that the read sequence includes at least one “N” base, generating, by one or more computers, a first encoded data set by using a first encoding process to encode each set of four quality scores of the read sequence into a single byte of memory; and using a second encoding process to encode the first encoded data set, thereby compressing the nucleic acid sequence data.
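
Packing four quality scores into one byte implies 2 bits per score, which can be sketched as follows. The quantization thresholds and the choice of zlib as the second encoding process are assumptions for illustration; the abstract names neither.

```python
import zlib

def pack_quality_scores(scores, thresholds=(10, 20, 30)):
    # first encoding process (hypothetical sketch): quantize each quality
    # score to 2 bits, then pack four 2-bit codes into a single byte
    def quantize(s):
        return sum(s >= t for t in thresholds)  # yields 0..3
    packed = bytearray()
    for i in range(0, len(scores), 4):
        group = [quantize(s) for s in scores[i:i + 4]]
        group += [0] * (4 - len(group))         # pad the final group
        packed.append(group[0] | group[1] << 2 | group[2] << 4 | group[3] << 6)
    return bytes(packed)

def compress_quality_scores(scores):
    # second encoding process: a general-purpose coder over the packed
    # bytes (zlib is a stand-in; the patent does not name the coder)
    return zlib.compress(pack_quality_scores(scores))
```

The first pass alone yields a fixed 4:1 reduction versus one byte per score; the second pass then exploits the redundancy that remains in the packed bytes.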

Techniques for parameter set and header design for compressed neural network representation

Systems and methods for encoding and decoding neural network data are provided. A method includes: receiving a neural network representation (NNR) bitstream including a group of NNR units (GON) that represents an independent neural network with a topology, the GON including an NNR model parameter set unit, an NNR layer parameter set unit, an NNR topology unit, an NNR quantization unit, and an NNR compressed data unit; and reconstructing the independent neural network with the topology by decoding the GON.