Patent classifications
H03M7/6052
Arithmetic encoder for arithmetically encoding and arithmetic decoder for arithmetically decoding a sequence of information values, methods for arithmetically encoding and decoding a sequence of information values and computer program for implementing these methods
An encoding scheme is provided for arithmetically encoding a sequence of information values into an arithmetically coded bitstream by providing the bitstream with entry-point information, allowing arithmetic decoding of the bitstream to be resumed from a predetermined entry point onward. A corresponding decoding scheme is also provided. Together, these schemes provide more efficient encoding for a given decoding speed.
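The entry-point mechanism can be illustrated with a short sketch. Below, independently flushed zlib segments stand in for the restartable arithmetic coder described in the abstract; the segment size, function names, and index layout are illustrative assumptions, not details taken from the patent. The encoder records a byte offset at each segment boundary, and the decoder can resume from any recorded offset without decoding the preceding bytes:

    import zlib

    SEGMENT = 4  # values per independently decodable segment (assumed, not from the patent)

    def encode_with_entry_points(values: bytes):
        """Compress segment by segment, recording an entry point
        (a byte offset into the bitstream) at each segment boundary."""
        stream, entry_points = b"", []
        for i in range(0, len(values), SEGMENT):
            entry_points.append(len(stream))        # decoding can resume here
            stream += zlib.compress(values[i:i + SEGMENT])
        return stream, entry_points

    def decode_from_entry_point(stream: bytes, entry_points, k: int) -> bytes:
        """Resume decoding at the k-th entry point, ignoring earlier bytes."""
        out, buf = b"", stream[entry_points[k]:]
        while buf:
            d = zlib.decompressobj()
            out += d.decompress(buf)                # decode one self-contained segment
            buf = d.unused_data                     # bytes belonging to later segments
        return out

    data = bytes(range(16))
    stream, eps = encode_with_entry_points(data)
    assert decode_from_entry_point(stream, eps, 2) == data[2 * SEGMENT:]

Flushing the coder at each entry point costs some compression efficiency, which is the trade-off against decoding speed that the abstract alludes to.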
ONLINE ADAPTIVE LOSSLESS COMPRESSION WITH LOCKSTEP-TRAINED PREDICTIVE MODELS
A method and system are disclosed for adaptive lossless data compression using a predictive model that is updated deterministically in lockstep at both encoder and decoder. An encoder generates symbol probability distributions from a frozen base model with a small set of updatable parameters, encodes input blocks via entropy coding, and applies deterministic parameter updates using the observed data. A decoder entropy-decodes the compressed bitstream, reconstructs the same blocks, and applies identical updates, thereby maintaining synchronization without transmission of model parameters. Optional features include periodic beacons for state verification and resynchronization, error-control metadata, and modality-specific tokenization. In preferred embodiments, parameter adaptation is confined to adapter modules within a largely fixed neural network, ensuring computational efficiency and reproducibility across platforms. Over time, the model specializes to the stream, reducing average coding rate while guaranteeing exact reconstruction of the original data.
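The lockstep idea can be made concrete with a small sketch. Here a count-based adaptive bit model stands in for the frozen neural model with updatable adapter parameters, and a standard integer arithmetic coder carries the entropy coding; the fixed-point precision, beacon format, and all names are illustrative assumptions, not details from the application. Encoder and decoder run the same deterministic update after each coded bit, so their model states stay synchronized without any parameters being transmitted, and the beacon hash provides the optional state verification mentioned above:

    import hashlib

    HALF, QUARTER, THREEQ = 1 << 31, 1 << 30, 3 << 30
    MASK = (1 << 32) - 1

    class LockstepModel:
        """Adaptive bit model; a stand-in for the patent's adapter parameters."""
        def __init__(self):
            self.c0, self.c1 = 1, 1                 # Laplace-smoothed bit counts
        def p0(self):                               # P(bit=0) in 16-bit fixed point
            return max(1, min(0xFFFF, (self.c0 << 16) // (self.c0 + self.c1)))
        def update(self, bit):                      # deterministic, identical on both sides
            if bit: self.c1 += 1
            else:   self.c0 += 1
        def beacon(self):                           # optional state-verification hash
            return hashlib.sha256(b"%d,%d" % (self.c0, self.c1)).hexdigest()[:8]

    def encode_bits(bits):
        model, out, low, high, pending = LockstepModel(), [], 0, MASK, 0
        def emit(b):
            nonlocal pending
            out.append(b); out.extend([b ^ 1] * pending); pending = 0
        for bit in bits:
            split = low + (((high - low + 1) * model.p0()) >> 16) - 1  # top of '0' range
            if bit == 0: high = split
            else:        low = split + 1
            while True:                             # renormalize (standard E1/E2/E3 cases)
                if high < HALF:            emit(0)
                elif low >= HALF:          emit(1); low -= HALF; high -= HALF
                elif low >= QUARTER and high < THREEQ:
                    pending += 1; low -= QUARTER; high -= QUARTER
                else: break
                low <<= 1; high = (high << 1) | 1
            model.update(bit)                       # lockstep update from observed data
        pending += 1                                # flush: disambiguate the final interval
        emit(0) if low < QUARTER else emit(1)
        return out, model.beacon()

    def decode_bits(code, n):
        model, low, high = LockstepModel(), 0, MASK
        code = code + [0] * 40                      # zero padding for reads past the end
        value = 0
        for b in code[:32]: value = (value << 1) | b
        pos, out = 32, []
        for _ in range(n):
            split = low + (((high - low + 1) * model.p0()) >> 16) - 1
            bit = 0 if value <= split else 1
            if bit == 0: high = split
            else:        low = split + 1
            while True:                             # mirror the encoder's renormalization
                if high < HALF:            pass
                elif low >= HALF:          low -= HALF; high -= HALF; value -= HALF
                elif low >= QUARTER and high < THREEQ:
                    low -= QUARTER; high -= QUARTER; value -= QUARTER
                else: break
                low <<= 1; high = (high << 1) | 1
                value = (value << 1) | code[pos]; pos += 1
            model.update(bit)                       # identical update keeps both sides in sync
            out.append(bit)
        return out, model.beacon()

    data = b"ABABABAB" * 32
    bits = [(byte >> i) & 1 for byte in data for i in range(8)]
    code, enc_beacon = encode_bits(bits)
    decoded, dec_beacon = decode_bits(code, len(bits))
    assert decoded == bits and enc_beacon == dec_beacon  # exact reconstruction, states agree

As the counts adapt to the repetitive input, later bits are coded at a lower rate, which is the stream specialization the abstract describes, scaled down to a toy model.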
Systems and methods for decompressing neural network coefficients
A method for decompressing data may include receiving a first sequence of bits and performing a plurality of iterations. Each iteration may include scanning bits of the first sequence, starting from a starting point, to search for either a variable-length codeword or a bypass indicator, the starting point being either the start of the first sequence or a starting point defined in a previous iteration. The method may also include, for at least one of the iterations, when a bypass indicator is found, outputting a neural network coefficient related value (NNCRV) that is non-compressed and follows the bypass indicator, and defining a starting point that follows the NNCRV as the starting point for the next iteration.
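The iterative scan can be sketched as follows. The prefix-code table, the bypass bit pattern, and the 8-bit raw width are invented for illustration, since the abstract does not specify them; the bypass path lets values with no short codeword be embedded raw in the bitstream:

    # Illustrative prefix-free codebook and bypass marker (not from the patent).
    CODEBOOK = {"0": 0, "10": 1, "110": 2}   # variable-length codewords -> small NNCRVs
    BYPASS = "111"                            # bypass indicator: a raw 8-bit NNCRV follows

    def decode_nncrvs(bits: str):
        """Scan `bits` iteratively, emitting one NNCRV per iteration."""
        out, start = [], 0                    # start = starting point for this iteration
        while start < len(bits):
            for end in range(start + 1, len(bits) + 1):
                chunk = bits[start:end]
                if chunk == BYPASS:           # bypass: next 8 bits are non-compressed
                    out.append(int(bits[end:end + 8], 2))
                    start = end + 8           # next iteration starts after the raw value
                    break
                if chunk in CODEBOOK:         # variable-length codeword found
                    out.append(CODEBOOK[chunk])
                    start = end               # next iteration starts after the codeword
                    break
            else:
                raise ValueError("no codeword or bypass indicator found")
        return out

    bits = "10" + "111" + format(200, "08b") + "0"
    assert decode_nncrvs(bits) == [1, 200, 0]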