H03M7/4006

EFFICIENT UPDATE OF CUMULATIVE DISTRIBUTION FUNCTIONS FOR IMAGE COMPRESSION
20230085142 · 2023-03-16 ·

Updating cumulative distribution functions (CDFs) during arithmetic encoding can be a challenge because the final element of the CDF should remain fixed during the update calculations. If the probabilities were floating-point numbers, this would not be much of a challenge; however, the probabilities, and hence the CDFs, are represented as integers to take advantage of exact, finite-precision integer arithmetic. Some of these difficulties may be alleviated by introducing a “mixing” CDF along with the active CDF being updated; the mixing CDF provides nonlocal context for updating the CDF due to the introduction of a particular symbol in the encoding. Improved techniques of performing arithmetic encoding include updating the CDF using two one-dimensional mixing CDF arrays: a symbol-independent array and a symbol-dependent array. The symbol-dependent array is a subarray of a larger, fixed array such that the subarray selected depends on the symbol being used.
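
A minimal sketch of integer CDF adaptation in this spirit is shown below; the update rule, the adaptation rate, and the subarray offset are illustrative assumptions, not the patent's exact formulation.

    def update_cdf(cdf, mixing_cdf, rate=5):
        """Nudge an integer CDF toward a mixing CDF expressed on the same
        scale, keeping the final element (the total probability mass) fixed."""
        # Only interior elements move; cdf[-1] never changes by construction.
        for i in range(len(cdf) - 1):
            cdf[i] += (mixing_cdf[i] - cdf[i]) >> rate
        return cdf

    def symbol_dependent_mixing_cdf(fixed_table, symbol, alphabet_size):
        """Select the symbol-dependent mixing CDF as a subarray of a larger,
        fixed array, with the window chosen by the symbol just coded."""
        start = symbol  # hypothetical offset rule
        return fixed_table[start:start + alphabet_size + 1]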

METHOD AND SYSTEMS FOR GENOME SEQUENCE COMPRESSION
20230076603 · 2023-03-09 ·

Systems and methods for genome sequence compression and decompression are provided. The method for compression encoding of a genome sequence includes partitioning a genome sequence into a plurality of Groups of Bases (GoBs) and processing each of the plurality of GoBs independently to encode the genome sequence into a bit stream. Processing each of the plurality of GoBs includes dividing each of the plurality of GoBs into a first part and a second part, the first part including an initial context part and the second part including a learning-based inference part. Processing each of the plurality of GoBs further includes encoding the first part in accordance with a Markov model, encoding the second part in accordance with a learning-based model, and encoding the encoded first part and the encoded second part into the bit stream with an arithmetic encoder. The learning-based model may include Long Short-Term Memory (LSTM)-based neural networks.
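
The control flow might look like the sketch below; the Markov, LSTM, and arithmetic-coder objects are hypothetical stand-ins, since the abstract does not specify their interfaces.

    def encode_genome(sequence, gob_size, context_len, markov, lstm, coder):
        """Partition a genome sequence into Groups of Bases (GoBs) and encode
        each GoB independently: the initial context part with a Markov model,
        the remaining learning-based inference part with an LSTM model."""
        encoded = []
        for start in range(0, len(sequence), gob_size):
            gob = sequence[start:start + gob_size]
            first, second = gob[:context_len], gob[context_len:]
            # First part: Markov-model probabilities drive the arithmetic coder.
            for i, base in enumerate(first):
                coder.encode(base, markov.probabilities(first[:i]))
            # Second part: probabilities inferred by the learning-based model.
            for i, base in enumerate(second):
                coder.encode(base, lstm.probabilities(gob[:context_len + i]))
            encoded.append(coder.flush())
        return b"".join(encoded)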

COMPUTER-IMPLEMENTED METHODS AND SYSTEMS RELATING TO ARITHMETIC CODING FOR SERIALISED ARITHMETIC CIRCUITS
20230122761 · 2023-04-20 ·

Techniques described herein may be utilized to implement methods and systems for lossless compression and serialization of arithmetic circuits to a bit stream using compression techniques such as arithmetic coding. An arithmetic circuit representing a smart contract may be compressed using arithmetic coding, thereby generating a compressed arithmetic circuit that can be stored or broadcast to a blockchain network using less computational resources (e.g., data storage resources) than would otherwise be needed to store the arithmetic circuit. The arithmetic circuit can be efficiently compressed using entropy coding based on the frequency of elements in the data structure, such as the arithmetic operator types. Instructions for de-serialization and de-compression can also be embedded in the bit stream, and can be used (e.g., by another computer system) to reconstruct the original circuit in a lossless manner.
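
One way to read the frequency-based entropy-coding step is sketched below: tally the operator types in the circuit and derive per-type code lengths, which an arithmetic coder approaches on average. The gate representation and function names are assumptions.

    import math
    from collections import Counter

    def operator_frequencies(gates):
        """gates: iterable of (operator_type, operands) pairs describing the
        arithmetic circuit (a hypothetical representation)."""
        counts = Counter(op for op, _ in gates)
        total = sum(counts.values())
        return {op: n / total for op, n in counts.items()}

    def ideal_code_lengths(freqs):
        """Shannon information content per operator type, in bits; an
        arithmetic coder driven by these frequencies approaches this cost."""
        return {op: -math.log2(p) for op, p in freqs.items()}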

TECHNIQUES FOR PARAMETER SET AND HEADER DESIGN FOR COMPRESSED NEURAL NETWORK REPRESENTATION

Systems and methods for encoding and decoding neural network data are provided. A method includes: obtaining an independent neural network with a topology; encoding the independent neural network with the topology to obtain a neural network representation (NNR) bitstream; and sending the NNR bitstream to a decoder, wherein the NNR bitstream includes a group of NNR units (GON) that represents the independent neural network with the topology, and the GON includes an NNR model parameter set unit, an NNR layer parameter set unit, an NNR topology unit, an NNR quantization unit, and an NNR compressed data unit.
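
As a rough illustration only, a GON covering the five unit types listed above could be modelled as in the sketch below; the field names, types, and the flat concatenation are assumptions and omit the real NNR unit headers.

    from dataclasses import dataclass

    @dataclass
    class GroupOfNNRUnits:
        model_parameter_set: bytes   # NNR model parameter set unit
        layer_parameter_set: bytes   # NNR layer parameter set unit
        topology: bytes              # NNR topology unit
        quantization: bytes          # NNR quantization unit
        compressed_data: bytes       # NNR compressed data unit

        def to_bitstream(self) -> bytes:
            # Real NNR units also carry size and header fields, omitted here.
            return b"".join((self.model_parameter_set, self.layer_parameter_set,
                             self.topology, self.quantization, self.compressed_data))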

Arithmetic Encoders, Arithmetic Decoders, Video Encoder, Video Decoder, Methods for Encoding, Methods for Decoding and Computer Program

An arithmetic encoder for encoding a plurality of symbols having symbol values is configured to derive an interval size information for an arithmetic encoding of one or more symbol values to be encoded based on a plurality of state variable values representing statistics of a plurality of previously encoded symbol values with different adaptation time constants. The arithmetic encoder is configured to map a first state variable value, or a scaled and/or rounded version thereof, using a lookup table and to map a second state variable value, or a scaled and/or rounded version thereof, using the lookup table, in order to obtain the interval size information describing an interval size for the arithmetic encoding of one or more symbols to be encoded. Further arithmetic encoders, arithmetic decoders, video encoders, video decoders, methods for encoding, methods for decoding and computer programs are also disclosed which are based on the same concept and on other concepts.
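
A sketch of the two-rate state update and table lookup is given below; the adaptation rates, scaling, and table contents are illustrative assumptions rather than values from the disclosure.

    LUT_BITS = 6
    # Hypothetical table mapping a scaled probability state to a sub-interval
    # size for a 9-bit range; a real codec would use carefully derived entries.
    INTERVAL_LUT = [max(1, (512 * s) >> LUT_BITS) for s in range(1 << LUT_BITS)]

    class TwoRateEstimator:
        def __init__(self, bits=15):
            self.bits = bits
            self.fast = 1 << (bits - 1)  # short adaptation time constant
            self.slow = 1 << (bits - 1)  # long adaptation time constant

        def update(self, bin_val):
            """Update both state variables from one previously coded bin."""
            target = bin_val << self.bits
            self.fast += (target - self.fast) >> 4   # adapts quickly
            self.slow += (target - self.slow) >> 7   # adapts slowly

        def interval_size(self):
            """Map each scaled/rounded state through the same lookup table and
            combine the two results into one interval-size value."""
            top = len(INTERVAL_LUT) - 1
            f = INTERVAL_LUT[min(self.fast >> (self.bits - LUT_BITS), top)]
            s = INTERVAL_LUT[min(self.slow >> (self.bits - LUT_BITS), top)]
            return (f + s) >> 1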

Methods and apparatus for improved entropy encoding and decoding

Methods and apparatus are provided for improved entropy encoding and decoding. An apparatus includes a video encoder (200) for encoding at least a block in a picture by transforming a residue of the block to obtain transform coefficients, quantizing the transform coefficients to obtain quantized transform coefficients, and entropy coding the quantized transform coefficients. The quantized transform coefficients are encoded using a flag to indicate that a current one of the quantized transform coefficients being processed is a last non-zero coefficient for the block having a value greater than or equal to a specified value.
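
The flag-based signaling can be pictured as in the sketch below; the syntax element names and the emit() callback are made up for illustration and do not reproduce the codec's actual syntax.

    def code_coefficients(quantized, emit, specified_value=1):
        """quantized: coefficients of one block in scan order; emit(name, value)
        is a hypothetical callback into the entropy coder."""
        big = [i for i, c in enumerate(quantized) if abs(c) >= specified_value]
        last_big = big[-1] if big else -1
        for i, c in enumerate(quantized):
            emit("significant_flag", int(c != 0))
            if c == 0:
                continue
            # Flag: this coefficient is the block's last non-zero one whose
            # magnitude reaches the specified value.
            emit("last_coeff_flag", int(i == last_big))
            emit("level", abs(c))
            emit("sign", int(c < 0))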

METHOD AND DEVICE FOR ARITHMETIC ENCODING OR ARITHMETIC DECODING
20170338833 · 2017-11-23 ·

A method and a device are provided for arithmetic encoding of a current spectral coefficient using preceding spectral coefficients. The method comprises processing the preceding spectral coefficients, using the processed preceding spectral coefficients for determining a context class being one of at least two different context classes, using the determined context class and a mapping from the at least two different context classes to at least two different probability density functions for determining the probability density function, and arithmetic encoding the current spectral coefficient based on the determined probability density function, wherein processing the preceding spectral coefficients comprises non-uniformly quantizing absolutes of the preceding spectral coefficients for use in determining the context class.
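
The context derivation could look like the sketch below: non-uniformly quantize the magnitudes of a few preceding coefficients and combine the quantized values into one context-class index. The thresholds and the number of predecessors are illustrative assumptions.

    NONUNIFORM_THRESHOLDS = (1, 2, 4, 8)  # step size grows with magnitude

    def quantize_magnitude(x):
        """Non-uniform quantization of a coefficient magnitude to a small level."""
        return sum(1 for t in NONUNIFORM_THRESHOLDS if abs(x) >= t)

    def context_class(preceding):
        """Combine the quantized magnitudes of the preceding coefficients into
        one context-class index; a separate mapping from this index selects
        the probability density function used by the arithmetic coder."""
        base = len(NONUNIFORM_THRESHOLDS) + 1
        idx = 0
        for c in preceding:
            idx = idx * base + quantize_magnitude(c)
        return idx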

Low-Latency Encoding Using a Bypass Sub-Stream and an Entropy Encoded Sub-Stream
20220360280 · 2022-11-10 ·

A system comprises an encoder configured to entropy encode a bitstream comprising both compressible and non-compressible symbols. The encoder parses the bitstream into a compressible symbol sub-stream and a non-compressible symbol sub-stream. The non-compressible symbol sub-stream bypasses an entropy encoding component of the encoder while the compressible symbol sub-stream is entropy encoded. When a quantity of bytes of entropy encoded symbols and bypass symbols has accumulated, a chunk of fixed or known size is formed from the accumulated entropy encoded symbol bytes and the bypass bytes without waiting for the full bitstream to be processed by the encoder. In a complementary manner, a decoder reconstructs the bitstream from the packets or chunks.
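
The chunking idea can be sketched as below: once enough entropy-coded and bypass bytes have accumulated, emit a chunk of known size instead of waiting for the whole bitstream. The chunk size and layout (a 2-byte length field for the entropy part) are assumptions for illustration.

    CHUNK_SIZE = 256  # bytes; an assumed fixed chunk size

    def emit_chunks(entropy_bytes, bypass_bytes):
        """entropy_bytes / bypass_bytes: bytearrays filled by the encoder's two
        paths. Emits fixed-size chunks as soon as enough bytes have accumulated,
        with a 2-byte length field saying where the entropy part ends."""
        chunks = []
        while len(entropy_bytes) + len(bypass_bytes) >= CHUNK_SIZE - 2:
            n_entropy = min(len(entropy_bytes), CHUNK_SIZE - 2)
            n_bypass = CHUNK_SIZE - 2 - n_entropy
            chunk = bytes((n_entropy >> 8, n_entropy & 0xFF))
            chunk += bytes(entropy_bytes[:n_entropy]) + bytes(bypass_bytes[:n_bypass])
            del entropy_bytes[:n_entropy]
            del bypass_bytes[:n_bypass]
            chunks.append(chunk)
        return chunks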

Method and apparatus for range derivation in context adaptive binary arithmetic coding
11265561 · 2022-03-01 ·

A method and apparatus for entropy coding of coding symbols using a Context-Based Adaptive Binary Arithmetic Coder (CABAC) are disclosed. According to the present invention, CABAC encoding or decoding is applied to a current bin of binary data of a current coding symbol according to a current probability for a binary value of the current bin and a current range associated with the current state of the arithmetic coder. An LPS probability index corresponding to an inverted current probability or the current probability is derived depending on whether the current probability is greater than 0.5. A range index is derived for identifying one range interval containing the current range. An LPS range is then derived using one or more mathematical operations comprising calculating a multiplication of a first value related to the LPS probability index and a second value related to the range index.
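
A minimal sketch of this multiplication-based LPS range derivation is given below; the bit widths, index shifts, and rounding are illustrative assumptions, not the patent's exact parameters.

    PROB_BITS = 15   # probability state precision (illustrative)
    RANGE_BITS = 9   # arithmetic-coder range precision (illustrative)

    def lps_range(prob, range_):
        """prob: probability of the current bin value, scaled to [0, 2**PROB_BITS);
        range_: current range in [2**(RANGE_BITS - 1), 2**RANGE_BITS)."""
        # Invert the probability when it exceeds one half, so that we always
        # work with the LPS (less probable symbol) probability.
        p_lps = prob if prob < (1 << (PROB_BITS - 1)) else (1 << PROB_BITS) - prob
        prob_idx = p_lps >> 6     # LPS probability index (high bits)
        range_idx = range_ >> 6   # range index identifying the range interval
        # One multiplication of the index-related values; the final shift makes
        # the total downshift equal PROB_BITS (6 + 6 + 3 = 15), approximating
        # (p_lps * range_) >> PROB_BITS.
        return max(1, (prob_idx * range_idx) >> 3)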