H03M7/4006

METHODS AND APPARATUS FOR IMPROVED ENTROPY ENCODING AND DECODING
20190075321 · 2019-03-07 ·

Methods and apparatus are provided for improved entropy encoding and decoding. An apparatus includes a video encoder (200) for encoding at least a block in a picture by transforming a residue of the block to obtain transform coefficients, quantizing the transform coefficients to obtain quantized transform coefficients, and entropy coding the quantized transform coefficients. The quantized transform coefficients are encoded using a flag to indicate that a current one of the quantized transform coefficients being processed is the last non-zero coefficient for the block having a value greater than or equal to a specified value.
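
A minimal Python sketch of the flag described in this abstract; the scan order, symbol layout, and threshold are illustrative assumptions, not the patent's actual bitstream syntax:

```python
def encode_block(levels, threshold=2):
    # Toy sketch: pair each non-zero quantized level with a flag that is 1
    # only for the last coefficient in scan order whose magnitude is at
    # least `threshold` (the "specified value" of the abstract).
    last = max((i for i, c in enumerate(levels) if abs(c) >= threshold),
               default=None)
    return [(c, 1 if i == last else 0)
            for i, c in enumerate(levels) if c != 0]

# Levels in scan order for one block.
print(encode_block([5, 0, -2, 1, 0, 0, 1, 0]))  # [(5, 0), (-2, 1), (1, 0), (1, 0)]
```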

Data Processing System and Method for Protecting Data in a Data Memory Against an Undetected Change

A method for protecting data in a data memory against an undetected change, wherein a functional variable x is encoded into a coded variable via an integer value, an input constant, an input signature, and a timestamp D, where the functional variable is normalized relative to a base to form the integer value.
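
The abstract suggests an AN+B+D-style arithmetic code; a minimal sketch under that assumption (the constants A and M, and the residue check, are illustrative and not taken from the patent):

```python
A, M = 2**16 - 5, 2**16        # assumed code constant and normalization base

def encode(x, B_x, D):
    v = x % M                  # normalize the functional variable to an integer value
    return A * v + B_x + D     # coded variable: value, constant, signature, timestamp

def is_valid(x_c, B_x, D):
    # An undetected change is unlikely to keep the residue at zero.
    return (x_c - B_x - D) % A == 0

x_c = encode(1234, B_x=7, D=42)
assert is_valid(x_c, 7, 42) and not is_valid(x_c + 1, 7, 42)
```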

Compression/Decompression Method and Apparatus for Genomic Variant Call Data
20190057185 · 2019-02-21 ·

Methods and apparatus for compressing and decompressing genetic information from an individual. In one arrangement, a data compression method generates a compressed representation of at least a portion of an individual's genome by receiving an input file having a representation of the genome as a sequence of variants defined relative to a reference genome. A reference database having a plurality of reference lists of genetic variants from other individuals is accessed. Each reference list has a sequence of genetic variants from a single, phased haplotype. Two mosaics of segments from the reference lists are identified which match the genome to within a threshold accuracy. Each mosaic represents a single one of the two haplotypes of the individual's genome and includes a portion of the sequence of genetic variants from one of the reference lists. The compressed representation is generated by encoding the two mosaics and deviations from the two mosaics.
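
A greedy Python sketch of the mosaic idea for a single haplotype (the method identifies two mosaics and uses its own matching criterion; `build_mosaic` and the 0/1 variant flags are illustrative assumptions):

```python
def build_mosaic(target, refs):
    # Walk the variant sequence, extending a segment from whichever
    # reference haplotype matches longest; positions no reference matches
    # are stored as explicit deviations.
    segments, deviations = [], []
    i = 0
    while i < len(target):
        best_ref, best_len = 0, 0
        for r, ref in enumerate(refs):
            n = 0
            while i + n < len(target) and target[i + n] == ref[i + n]:
                n += 1
            if n > best_len:
                best_ref, best_len = r, n
        if best_len == 0:                      # no reference matches here:
            deviations.append((i, target[i]))  # record a deviation instead
            best_len = 1
        segments.append((best_ref, i, i + best_len))  # (ref list, start, end)
        i += best_len
    return segments, deviations

# 0/1 flags: variant absent/present at each site.
target = [1, 0, 1, 1, 0, 1]
refs = [[1, 0, 1, 0, 0, 0], [0, 0, 0, 1, 0, 1]]
print(build_mosaic(target, refs))  # ([(0, 0, 3), (1, 3, 6)], [])
```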

ENCODER, DECODER, ENCODING METHOD, AND DECODING METHOD
20190052883 · 2019-02-14 ·

An encoder includes processing circuitry, a block memory, and a frame memory. The processing circuitry defines at least one parameter for each of plural types of segment_ids, splits an image into blocks, assigns to each of the blocks a segment_id according to the type of the block, among the plural types of segment_ids, and sequentially encodes the blocks. In encoding the blocks, the processing circuitry identifies the segment_id of a current block to be encoded and encodes the current block using the at least one parameter defined for the identified segment_id. The at least one parameter includes seg_context_idx, which identifies the probability information associated with a context used in context-based adaptive binary arithmetic coding (CABAC).
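
A sketch of the per-segment parameter lookup described above; the names segment_id and seg_context_idx mirror the abstract, while the dataclass, the table, and the encode call are illustrative, not the patent's API:

```python
from dataclasses import dataclass

@dataclass
class SegmentParams:
    seg_context_idx: int    # selects the CABAC probability information
    qp_offset: int = 0      # an assumed example of another per-segment parameter

SEGMENT_TABLE = {           # one parameter set per segment_id type
    0: SegmentParams(seg_context_idx=0),
    1: SegmentParams(seg_context_idx=1, qp_offset=-2),
}

def encode_block(block, segment_id):
    params = SEGMENT_TABLE[segment_id]   # identify the current block's parameters
    ctx = params.seg_context_idx         # context/probability set for CABAC
    # ... CABAC-encode `block` with context set `ctx` (entropy coder omitted) ...
    return ctx
```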

CIRCUITRY FOR LOW-PRECISION DEEP LEARNING

The present disclosure relates generally to techniques for improving the implementation of certain operations on an integrated circuit. In particular, deep learning techniques, which may use a deep neural network (DNN) topology, may be implemented more efficiently using low-precision weights and activation values by efficiently performing down conversion of data to a lower precision and by preventing data overflow during suitable computations. Further, by more efficiently mapping multipliers to programmable logic on the integrated circuit device, the resources used by the DNN topology to perform, for example, inference tasks may be reduced, resulting in improved integrated circuit operating speeds.
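
A minimal Python sketch of the down-conversion idea: rescale higher-precision activations and saturate into int8 so the narrow datatype cannot silently wrap. The scale choice is an illustrative assumption:

```python
import numpy as np

def downconvert_to_int8(x, scale):
    q = np.rint(x / scale)                        # quantize onto an integer grid
    return np.clip(q, -128, 127).astype(np.int8)  # saturate: no wrap-around overflow

acts = np.array([0.9, -3.2, 250.0])
print(downconvert_to_int8(acts, scale=0.05))      # 250.0/0.05 saturates to 127
```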

Bin string coding based on a most probable symbol
10194153 · 2019-01-29 ·

Bins of a bin string representative of binarized video data are processed to determine whether each bin stores a most probable symbol of a probability model available for coding the binarized video data. If the symbol stored in each bin of the bin string is the most probable symbol, the probability model is updated based on a size of the bin string to determine a first number of bits to use to code the binarized video data. However, if the symbol stored in each bin of the bin string is not the most probable symbol, the probability model is updated based on a number of bins of the bin string storing a symbol that is not the most probable symbol to determine a second number of bits to use to code the binarized video data.
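
A toy sketch of the two update paths in this abstract; the concrete update rules and bit counts below are placeholders, not the patent's formulas:

```python
def code_bin_string(bins, mps, state):
    non_mps = sum(1 for b in bins if b != mps)
    if non_mps == 0:
        state += len(bins)       # all bins hold the MPS: update by string size
        bits = 1                 # first number of bits, e.g. one "all MPS" codeword
    else:
        state -= non_mps         # otherwise: update by the count of non-MPS bins
        bits = 1 + len(bins)     # second number of bits, e.g. escape + raw bins
    return bits, state
```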

Memory Compression Method and Apparatus
20190028115 · 2019-01-24 ·

Methods and systems for encoding of integers are discussed. For example, various methods and systems may utilize Huffman coding, Tunstall coding, arithmetic coding, LZ77 coding, LZ78 coding, LZW coding, or Shannon-Fano-Elias coding to encode the integers.
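
One of the listed schemes, sketched in Python: Huffman coding of a small integer alphabet. Any of the other codes (Tunstall, LZW, ...) could stand in its place:

```python
import heapq
from collections import Counter

def huffman_code(values):
    # Build the Huffman tree bottom-up; each heap entry carries the partial
    # code table for the symbols under that subtree.
    heap = [(n, i, {v: ""}) for i, (v, n) in enumerate(Counter(values).items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        na, _, a = heapq.heappop(heap)
        nb, _, b = heapq.heappop(heap)
        merged = {v: "0" + c for v, c in a.items()}   # prepend 0 to left subtree
        merged.update({v: "1" + c for v, c in b.items()})  # 1 to right subtree
        heapq.heappush(heap, (na + nb, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]              # integer -> bit-string codeword

print(huffman_code([3, 3, 3, 7, 7, 42]))  # the most frequent integer gets the shortest code
```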

ENTROPY ENCODING AND DECODING SCHEME
20190013822 · 2019-01-10 ·

Decomposing the value range of the respective syntax elements into a sequence of n partitions, and coding the components of z lying within the respective partitions separately, at least one by VLC coding and at least one by PIPE or entropy coding, greatly increases the compression efficiency at a moderate coding overhead, since the coding scheme used may be better adapted to the syntax element statistics. Accordingly, syntax elements are decomposed into a respective number n of source symbols s.sub.i with i=1 . . . n, the number n depending on which of the n partitions, into which the value range of the respective syntax element is sub-divided, the value z of that syntax element falls into, so that a sum of the values of the source symbols s.sub.i yields z and, if n>1, for all i=1 . . . n-1, the value of s.sub.i corresponds to the range of the i.sup.th partition.
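
A short Python sketch of the decomposition itself; the partition boundaries are illustrative, and the subsequent choice of VLC versus PIPE coding per symbol is omitted:

```python
def decompose(z, boundaries):      # boundaries, e.g. [4, 20], split the value range
    # z is written as a sum of source symbols s_i: for i < n the symbol
    # saturates at the width of its partition, the last carries the rest.
    symbols, prev = [], 0
    for b in boundaries:
        if z < b:
            break
        symbols.append(b - prev)   # s_i equals the full width of partition i
        prev = b
    symbols.append(z - prev)       # remainder lies in the last used partition
    return symbols

print(decompose(27, [4, 20]))      # [4, 16, 7]; the symbols sum back to 27
```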

Lossless compression of a content item using a neural network trained on content item cohorts

A computing system includes a neural network that is used to train a plurality of symbol prediction models, each trained on a corresponding cohort of content items. A particular symbol prediction model is selected based on an intrinsic characteristic of the content item to be losslessly compressed, such as its type or file extension. The content item is then losslessly compressed by feeding an arithmetic coder a set of symbol predictions generated using the selected model.
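
An illustrative Python sketch of the selection step: pick a cohort-trained predictor by file extension and drive an arithmetic coder with its per-symbol probabilities. Here `models` and `coder` are assumed placeholders, not the patent's interfaces:

```python
def compress(path, data, models, coder):
    ext = path.rsplit(".", 1)[-1].lower()          # intrinsic characteristic
    predict = models.get(ext, models["default"])   # cohort-trained predictor
    history = []
    for symbol in data:
        p = predict(history)        # probability distribution over the next symbol
        coder.encode(symbol, p)     # one arithmetic-coding step
        history.append(symbol)
    return coder.finish()
```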