H03M7/3082

Lossy compression techniques

Techniques are disclosed relating to compression of pixel data using different quantization for different regions of a block of pixels being compressed. In some embodiments, compression circuitry is configured to determine, for multiple components included in pixels of the block of pixels being compressed, respective smallest and greatest component values in respective regions of the block of pixels. The compression circuitry may determine, based on the determined smallest and greatest component values, to use a first number of bits to represent delta values relative to a base value for a first component in a first region and a second, different number of bits to represent delta values relative to a base value for a second component in the first region. The compression circuitry may then quantize delta values for the first and second components of pixels in the first region of the block of pixels using the determined first and second numbers of bits. In some embodiments, the compression circuitry determines whether to provide cross-component bit sharing within a region.
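The per-region, per-component quantization described above can be sketched in a few lines: find the smallest and greatest component values in a region, derive a bit budget for the deltas, and drop low-order bits when the budget is short. This is a minimal illustrative sketch, not the patented circuitry; all function names and the bit-allocation policy are hypothetical.

```python
def bits_needed(lo, hi):
    """Bits needed to represent deltas in [0, hi - lo] exactly."""
    return (hi - lo).bit_length()  # 0 when all values in the region are equal

def quantize_region(component_values, alloc_bits):
    """Quantize one component of one region as (base, shift, deltas).

    base  -- smallest component value in the region
    shift -- number of LSBs dropped when alloc_bits is less than needed
    """
    base, top = min(component_values), max(component_values)
    need = bits_needed(base, top)
    shift = max(0, need - alloc_bits)          # lossy only when budget is short
    deltas = [(v - base) >> shift for v in component_values]
    return base, shift, deltas

def dequantize_region(base, shift, deltas):
    """Reconstruct approximate component values from the quantized form."""
    return [base + (d << shift) for d in deltas]
```

With a sufficient budget the round trip is lossless; with a smaller budget the error is bounded by the dropped LSBs, which is why different components in the same region can reasonably be given different numbers of bits.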

DATA COMPRESSION SYSTEM USING CONCATENATION IN STREAMING

The present disclosure refers to a data compression system developed to serve several areas, providing a compressed form of information that occupies fewer bytes than the original form. As a result, transmitting and maintaining the compressed information requires less time and space than performing the same functions with the original form. The system operates on files already compressed by traditional methods and reorders their data in order to achieve new bit gains, breaking the compression limit of universally known methods. For this purpose it is constituted by an encoding process, a streaming concatenation process, a decompression process, and a deconcatenation process.

Structural data matching using neural network encoders
11468024 · 2022-10-11

Implementations of the present disclosure include methods, systems, and computer-readable storage media for matching structured data. First and second data sets are received, each including structured data in a plurality of columns. For each of the first and second data sets, each column is input into an encoder specific to the column type of the respective column, the encoder providing encoded data for the first and second data sets, respectively. A first multi-dimensional vector is provided based on the encoded data of the first data set, and a second multi-dimensional vector is provided based on the encoded data of the second data set. The first and second multi-dimensional vectors are output to a loss function, which processes them to provide an output representing matched data points between the first and second data sets.
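The pipeline shape (type-specific column encoders, concatenation into one vector per data set, a loss over the pair) can be illustrated with a deliberately tiny sketch. The encoders below (summary statistics for numeric columns, a bag-of-characters histogram for text) and the cosine-based loss are placeholders chosen for brevity, not the neural encoders or loss function of the disclosure.

```python
import math

def encode_numeric(col):
    # Hypothetical numeric encoder: mean and standard deviation.
    n = len(col)
    mean = sum(col) / n
    var = sum((x - mean) ** 2 for x in col) / n
    return [mean, math.sqrt(var)]

def encode_text(col):
    # Hypothetical text encoder: normalized bag-of-characters over a-z.
    hist = [0.0] * 26
    for cell in col:
        for ch in cell.lower():
            if "a" <= ch <= "z":
                hist[ord(ch) - 97] += 1.0
    total = sum(hist) or 1.0
    return [h / total for h in hist]

ENCODERS = {"numeric": encode_numeric, "text": encode_text}

def encode_dataset(columns):
    """columns: list of (col_type, values). Each column goes through the
    encoder for its type; results are concatenated into one vector."""
    vec = []
    for col_type, values in columns:
        vec.extend(ENCODERS[col_type](values))
    return vec

def match_loss(v1, v2):
    """1 - cosine similarity between the two data-set vectors;
    lower means the data sets look more alike."""
    dot = sum(a * b for a, b in zip(v1, v2))
    n1 = math.sqrt(sum(a * a for a in v1))
    n2 = math.sqrt(sum(b * b for b in v2))
    return 1.0 - dot / (n1 * n2)
```

Identical data sets yield a loss near zero; dissimilar ones yield a larger loss, which is the signal a matching system would act on.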

Methods and apparatus for thread-based scheduling in multicore neural networks
11625592 · 2023-04-11

Systems, apparatus, and methods for thread-based scheduling within a multicore processor. Neural networking uses a network of connected nodes (aka neurons) to loosely model the neuro-biological functionality found in the human brain. Various embodiments of the present disclosure use thread dependency graph analysis to decouple scheduling across many distributed cores. Rather than using thread dependency graphs to generate a sequential ordering for a centralized scheduler, the individual thread dependencies define a count value for each thread at compile-time. Threads and their thread dependency counts are distributed to each core at run-time. Thereafter, each core can dynamically determine which threads to execute based on fulfilled thread dependencies without requiring a centralized scheduler.
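The count-based scheme above amounts to a decentralized topological ordering: each thread carries a compile-time count of unfulfilled dependencies, and a thread becomes runnable when its count reaches zero. A minimal single-process sketch follows (in hardware, each core would hold its own counts and ready queue; the names here are hypothetical):

```python
from collections import deque

def compile_counts(deps):
    """deps: thread -> list of threads it waits on.
    At compile-time, each thread's count is just its dependency total."""
    return {t: len(d) for t, d in deps.items()}

def run_schedule(deps):
    """Simulate run-time execution: completing a thread decrements the
    counts of its dependents; zero-count threads are immediately runnable."""
    counts = compile_counts(deps)
    dependents = {t: [] for t in deps}
    for t, waits_on in deps.items():
        for parent in waits_on:
            dependents[parent].append(t)
    ready = deque(t for t, c in counts.items() if c == 0)
    order = []
    while ready:
        t = ready.popleft()
        order.append(t)                 # "execute" the thread
        for child in dependents[t]:
            counts[child] -= 1          # one dependency fulfilled
            if counts[child] == 0:
                ready.append(child)     # now runnable on any core
    return order
```

Because the decision "is my count zero?" is purely local, cores need not consult a centralized scheduler or a global sequential ordering.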

Communication compression method based on model weight distribution in federated learning

A communication compression method based on model weight distribution in federated learning is disclosed, belonging to the technical field of wireless communication. Building on the existing federated averaging idea in federated learning, the method counts the distribution of the model weight information to be transmitted between nodes during each communication round, performs scalar quantization and compression with a Lloyd-Max quantizer according to the distribution characteristics, encodes the quantized values with the Huffman coding method, and finally sends the codes to the target node, whereby the minimum mean-square quantization error is realized and the number of bits required for communication is reduced.
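The two stages named above are both classical and can be sketched compactly: a Lloyd-Max (k-means-style) scalar quantizer that alternates nearest-level assignment with centroid updates to minimize mean-square error, followed by Huffman coding of the level indices. This is a generic textbook sketch under simplifying assumptions (1-D samples, fixed iteration count), not the patented method.

```python
import heapq

def lloyd_max(samples, levels, iters=50):
    """1-D Lloyd-Max: alternate nearest-level assignment and centroid
    update; converges to a local minimum of mean-square error."""
    lo, hi = min(samples), max(samples)
    reps = [lo + (hi - lo) * (i + 0.5) / levels for i in range(levels)]
    for _ in range(iters):
        buckets = [[] for _ in range(levels)]
        for x in samples:
            j = min(range(levels), key=lambda k: (x - reps[k]) ** 2)
            buckets[j].append(x)
        reps = [sum(b) / len(b) if b else reps[j]
                for j, b in enumerate(buckets)]
    return reps

def huffman_codes(freqs):
    """Symbol -> bitstring for a frequency map (needs >= 2 symbols);
    frequent symbols receive shorter codes."""
    heap = [[f, [s, ""]] for s, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        for pair in lo[1:]:
            pair[1] = "0" + pair[1]
        for pair in hi[1:]:
            pair[1] = "1" + pair[1]
        heapq.heappush(heap, [lo[0] + hi[0]] + lo[1:] + hi[1:])
    return {s: code for s, code in heap[0][1:]}
```

In the federated setting, each node would fit the quantizer to the observed weight distribution for the round, then transmit Huffman codes of the level indices instead of raw weights.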

Methods and devices for vector segmentation for coding

A method for partitioning of input vectors for coding is presented. The method comprises obtaining of an input vector. The input vector is segmented, in a non-recursive manner, into an integer number, N.sup.SEG, of input vector segments. A representation of a respective relative energy difference between parts of the input vector on each side of each boundary between the input vector segments is determined, in a recursive manner. The input vector segments and the representations of the relative energy differences are provided for individual coding. Partitioning units and computer programs for partitioning of input vectors for coding, as well as positional encoders, are presented.
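The two-phase structure (non-recursive segmentation into N segments, recursive computation of relative energy differences at the boundaries) can be illustrated as below. The binary-split recursion and the log-domain energy ratio are assumptions made for the sketch; the actual representation in the method may differ.

```python
import math

def segment(vec, n_seg):
    """Split vec non-recursively into n_seg near-equal segments."""
    n = len(vec)
    bounds = [round(i * n / n_seg) for i in range(n_seg + 1)]
    return [vec[bounds[i]:bounds[i + 1]] for i in range(n_seg)]

def energy(seg):
    return sum(x * x for x in seg)

def energy_diffs(segs):
    """Recursively emit a relative energy difference (here: a log ratio)
    for each internal boundary, halving the segment range at each step."""
    out = []
    def rec(lo, hi):                  # half-open range of segment indices
        if hi - lo <= 1:
            return
        mid = (lo + hi) // 2
        e_left = sum(energy(s) for s in segs[lo:mid]) + 1e-12
        e_right = sum(energy(s) for s in segs[mid:hi]) + 1e-12
        out.append(math.log2(e_left / e_right))
        rec(lo, mid)
        rec(mid, hi)
    rec(0, len(segs))
    return out
```

The segments and the boundary energy representations would then be handed to individual coders, as the abstract describes.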

METHODS AND APPARATUS FOR THREAD-BASED SCHEDULING IN MULTICORE NEURAL NETWORKS
20230153596 · 2023-05-18

Systems, apparatus, and methods for thread-based scheduling within a multicore processor. Neural networking uses a network of connected nodes (aka neurons) to loosely model the neuro-biological functionality found in the human brain. Various embodiments of the present disclosure use thread dependency graph analysis to decouple scheduling across many distributed cores. Rather than using thread dependency graphs to generate a sequential ordering for a centralized scheduler, the individual thread dependencies define a count value for each thread at compile-time. Threads and their thread dependency counts are distributed to each core at run-time. Thereafter, each core can dynamically determine which threads to execute based on fulfilled thread dependencies without requiring a centralized scheduler.

Compressing and decompressing image data using compacted region transforms
11647234 · 2023-05-09

A method of compressing image data comprising a set of image data items, each representing a position in image-value space so as to define an occupied region thereof. The method comprises selectively applying a series of compression transforms to subsets of the image data items to generate a transformed set of image data items occupying a compacted region of value space. The method further comprises identifying a set of one or more reference data items that quantizes the compacted region in value space. For each image data item in the set of image data items, a sequence of decompression transforms from a fixed set of decompression transforms is identified that generates an approximation of that image data item when applied to a selected one of the one or more reference data items. Each image data item in the set of image data items is encoded as a representation of the identified sequence of decompression transforms for that image data item. The encoded image data items, the set of reference data items, and the fixed set of decompression transforms are stored as compressed image data.
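The core idea (encode each item as a reference plus a short sequence drawn from a fixed set of decompression transforms) can be demonstrated with a toy 1-D version. The two transforms, the exhaustive search, and the scalar "image values" are all simplifications for illustration; a real implementation would work in a multi-dimensional value space with a richer transform set.

```python
from itertools import product

# A hypothetical fixed set of decompression transforms on scalar values.
TRANSFORMS = {
    "a": lambda v: v + 1,   # increment
    "b": lambda v: v * 2,   # double
}

def encode_item(value, refs, max_len=5):
    """Return (ref_index, transform sequence, error) whose application
    best approximates `value`; exhaustive over sequences up to max_len."""
    best = None
    for ri, ref in enumerate(refs):
        for n in range(max_len + 1):
            for seq in product("ab", repeat=n):
                v = ref
                for t in seq:
                    v = TRANSFORMS[t](v)
                err = abs(v - value)
                if best is None or err < best[2]:
                    best = (ri, "".join(seq), err)
    return best

def decode_item(ref_index, seq, refs):
    """Decompression: replay the transform sequence on the reference."""
    v = refs[ref_index]
    for t in seq:
        v = TRANSFORMS[t](v)
    return v
```

Only the reference items, the (small) transform set, and the per-item sequences need to be stored, which is where the compression comes from when many items cluster around few references.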

Arithmetic Encoders, Arithmetic Decoders, Video Encoder, Video Decoder, Methods for Encoding, Methods for Decoding and Computer Program

An arithmetic encoder for encoding a plurality of symbols having symbol values is configured to derive an interval size information for an arithmetic encoding of one or more symbol values to be encoded based on a plurality of state variable values representing statistics of a plurality of previously encoded symbol values with different adaptation time constants. The arithmetic encoder is configured to map a first state variable value, or a scaled and/or rounded version thereof, using a lookup-table, and to map a second state variable value, or a scaled and/or rounded version thereof, using the lookup-table, in order to obtain the interval size information describing an interval size for the arithmetic encoding of one or more symbols to be encoded. Further arithmetic encoders, arithmetic decoders, video encoders, video decoders, methods for encoding, methods for decoding and computer programs are also disclosed which are based on the same concept and on other concepts.
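The key ingredient, two state variables tracking the same symbol statistics with different adaptation time constants, can be sketched for a binary symbol as below. The fast variable reacts quickly to local statistics while the slow one stays stable, and their combination drives the interval subdivision. The shift values, the plain averaging, and the class name are assumptions for the sketch; the disclosure maps the state variables through a lookup-table instead.

```python
class TwoRateEstimator:
    """Two state variables track the probability of a '1' bit with
    fast and slow exponential adaptation; their combination would
    determine the arithmetic coder's interval size."""
    SCALE = 1 << 15

    def __init__(self, fast_shift=4, slow_shift=7):
        self.s_fast = self.SCALE // 2   # start at p = 0.5
        self.s_slow = self.SCALE // 2
        self.fast_shift = fast_shift    # small shift -> short time constant
        self.slow_shift = slow_shift    # large shift -> long time constant

    def prob_one(self):
        # Combine both adaptation speeds (here: a plain average).
        return (self.s_fast + self.s_slow) / (2 * self.SCALE)

    def update(self, bit):
        # Exponential moving average toward the observed bit.
        target = self.SCALE if bit else 0
        self.s_fast += (target - self.s_fast) >> self.fast_shift
        self.s_slow += (target - self.s_slow) >> self.slow_shift
```

After a run of identical bits the estimate moves decisively toward that bit, while the slow variable keeps the estimator from overreacting to short bursts.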

System and method of data compression

This disclosure relates to systems and methods for adaptively compressing data based on compression parameters. In one embodiment, a method for compressing a dataset is disclosed, including filtering a dataset based on occurrence of an event, and determining a quality of information index indicating a measure of quality of the dataset based on a quality of information estimation function. The method may include comparing the quality of information index with a list of indices stored in a lookup table to identify a target quality of information index and corresponding compression parameters, wherein the target quality of information index is indicative of a reference measure of quality of the dataset applicable for deriving statistical inferences based on analysis of the dataset. Also, the method may include inputting the compression parameters in a compression algorithm for compressing the dataset to achieve the target quality of information index for the analysis.
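The lookup step, mapping a computed quality-of-information index to a target index and its stored compression parameters, can be sketched as a nearest-match search over a small table. The table contents, the nearest-match rule, and the parameter names are hypothetical; the disclosure does not specify them.

```python
# Hypothetical lookup table: (target quality index, compression parameters).
QOI_TABLE = [
    (0.95, {"quantizer_step": 1}),
    (0.85, {"quantizer_step": 4}),
    (0.70, {"quantizer_step": 16}),
]

def select_params(quality_index):
    """Compare the computed index with the stored indices and return the
    closest target index together with its compression parameters."""
    target, params = min(QOI_TABLE,
                         key=lambda row: abs(row[0] - quality_index))
    return target, params
```

The selected parameters would then be fed to the compression algorithm so that the compressed data set still supports the statistical inferences the target index represents.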