H03M7/3071

Guaranteed data compression using intermediate compressed data
10985776 · 2021-04-20

Methods for converting an n-bit number into an m-bit number for situations where n>m and also for situations where n<m, where n and m are integers. The methods use truncation or bit replication followed by the calculation of an adjustment value which is applied to the replicated number.
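The two conversion paths the abstract names can be sketched as follows. This shows only the plain truncation (n>m) and bit-replication (n<m) stages; the patent's adjustment-value calculation is specific to the claims and is omitted here.

```python
def replicate_bits(x: int, n: int, m: int) -> int:
    """Widen an n-bit value x to m bits (n < m) by repeating its bit
    pattern from the most significant bit downward, the standard
    bit-replication rule (e.g. 5-bit colour widened to 8 bits)."""
    result, bits = 0, m
    while bits > 0:
        shift = bits - n
        if shift >= 0:
            result |= x << shift      # whole copy of the pattern fits
        else:
            result |= x >> -shift     # partial copy fills the tail
        bits -= n
    return result

def truncate_bits(x: int, n: int, m: int) -> int:
    """Narrow an n-bit value x to m bits (n > m) by dropping the
    least significant bits."""
    return x >> (n - m)
```

For example, widening the 5-bit value `10101` to 8 bits yields `10101101`: the pattern is copied once, then its top bits fill the remaining positions.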

BANDWIDTH COMPRESSION FOR NEURAL NETWORK SYSTEMS
20210120248 · 2021-04-22

Techniques and systems are provided for compressing data in a neural network. For example, output data can be obtained from a node of the neural network, and re-arranged output data can be generated by re-arranging the output data into a re-arranged scanning pattern. One or more residual values can be determined for the re-arranged output data by applying a prediction mode to it. The one or more residual values can then be compressed using a coding mode.
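A minimal sketch of that pipeline, with assumed stand-ins for each stage: the re-arranged scanning pattern is taken to be a column-major scan, the prediction mode a previous-sample (DPCM) predictor, and the coding mode a generic entropy coder (zlib).

```python
import zlib

def compress_activations(block):
    """Re-arrange node output into a new scanning pattern, predict each
    sample from its predecessor, then entropy-code the residuals."""
    h, w = len(block), len(block[0])
    # Re-arrange: column-major scan instead of row-major (an assumption).
    scanned = [block[r][c] for c in range(w) for r in range(h)]
    # Prediction mode: previous-sample predictor; residual wraps to a byte.
    residuals, prev = [], 0
    for x in scanned:
        residuals.append((x - prev) & 0xFF)
        prev = x
    # Coding mode: zlib as a stand-in entropy coder.
    return zlib.compress(bytes(residuals))

def decompress_activations(data, h, w):
    """Invert the coding mode, the prediction mode, and the scan."""
    vals, prev = [], 0
    for r in zlib.decompress(data):
        prev = (prev + r) & 0xFF
        vals.append(prev)
    out, i = [[0] * w for _ in range(h)], 0
    for c in range(w):
        for r in range(h):
            out[r][c] = vals[i]
            i += 1
    return out
```

Smooth activation maps produce small, repetitive residuals, which is what makes the final coding stage effective.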

Compression of machine learned models
10970470 · 2021-04-06

Devices and techniques are generally described for compression of natural language processing models. A first index value to a first address of a weight table may be stored in a hash table. The first address may store a first weight associated with a first feature of a natural language processing model. A second index value to a second address of the weight table may be stored in the hash table. The second address may store a second weight associated with a second feature of the natural language processing model. A first code associated with the first feature and comprising a first number of bits may be generated. A second code may be generated associated with the second feature and comprising a second number of bits greater than the first number of bits based on a magnitude of the second weight being greater than a magnitude of the first weight.
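The data structures the abstract describes can be sketched as below: weights live in a weight table, a hash table maps each feature to its index into that table, and each feature receives a code whose bit-length grows with the magnitude of its weight. The specific rule for choosing the code length is an assumption; the patent only requires that larger magnitudes get more bits.

```python
def build_codes(feature_weights):
    """feature_weights: list of (feature, weight) pairs.
    Returns the weight table, the hash table of indices into it,
    and a per-feature code length in bits."""
    weight_table = [w for _, w in feature_weights]
    # Hash table: feature -> index of its weight in the weight table.
    hash_table = {feat: idx for idx, (feat, _) in enumerate(feature_weights)}
    # Code length grows with |weight| (assumed rule: bits needed for
    # the integer part of the magnitude, plus a sign bit).
    code_bits = {feat: max(1, int(abs(w)).bit_length() + 1)
                 for feat, w in feature_weights}
    return weight_table, hash_table, code_bits
```

Under this rule a small-magnitude feature weight gets a short code and a large-magnitude one a longer code, matching the first-code/second-code relationship in the abstract.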

Sample array coding for low-delay based on position information

The entropy coding of a current part of a predetermined entropy slice is based not only on the probability estimations of that slice, as adapted using its previously coded part, but also on the probability estimations used in entropy coding a spatially neighboring entropy slice, preceding in entropy slice order, at a neighboring part thereof. The probability estimations used in entropy coding thereby track the actual symbol statistics more closely, reducing the loss in coding efficiency that low-delay concepts normally cause. Temporal interrelationships may be exploited additionally or alternatively.
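The cross-slice probability inheritance can be illustrated with a toy adaptive model (no actual arithmetic coding): each slice adapts its own estimate of P(bit = 1), but instead of starting from a uniform prior, a slice inherits the state its neighboring, preceding slice had reached after a few symbols. The choice of inheriting after the second symbol is an assumption for illustration.

```python
class BitModel:
    """Minimal adaptive probability estimator: tracks P(bit = 1)
    and is updated after every coded symbol."""
    def __init__(self, ones=1, total=2):
        self.ones, self.total = ones, total
    def p_one(self):
        return self.ones / self.total
    def update(self, bit):
        self.ones += bit
        self.total += 1
    def snapshot(self):
        return BitModel(self.ones, self.total)

def code_slices(slices):
    """Each entropy slice adapts its own model, seeded from the state
    the preceding slice had after its second symbol (assumed point of
    inheritance), rather than from the uniform prior."""
    models, inherited = [], None
    for bits in slices:
        model = inherited.snapshot() if inherited else BitModel()
        for i, b in enumerate(bits):
            model.update(b)
            if i == 1:  # neighbour's state after its second symbol
                inherited = model.snapshot()
        models.append(model)
    return models
```

Because later slices start from an already-adapted estimate, their probabilities match the symbol statistics sooner than with a cold start, which is the efficiency gain the paragraph describes.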

LEARNING APPARATUS AND METHOD, AND PROGRAM
20210073645 · 2021-03-11

The present technology relates to a learning apparatus, method, and program that allow speech recognition with sufficient recognition accuracy and response speed. The learning apparatus includes a model learning unit that learns a model for recognition processing on the basis of both features extracted from learning data and the output produced, when those features are input, by a decoder for the recognition processing that forms part of a conditional variational autoencoder. The present technology can be applied to learning apparatuses.

Decompression of model parameters using functions based upon cumulative count distributions

A predictive model utilizes a set of coefficients for processing received input data. To reduce the memory used to store the coefficients, a compression circuit compresses the set of coefficients prior to storage by generating a cumulative count distribution of the coefficient values and identifying a distribution function approximating that cumulative count distribution. Function parameters for the identified function are stored in a memory and used by a decompression circuit, which applies the function to the compressed coefficients to determine the decompressed coefficient values. Storing the function parameters may consume less memory than storing a look-up table for decompression, and may reduce the number of memory look-ups required during decompression.
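A sketch of the idea, under assumptions the abstract does not fix: the cumulative count distribution is approximated by a two-parameter logistic CDF fitted by moment matching, so only (mu, s) need storing instead of a per-value look-up table, and decompression inverts the fitted function.

```python
import math

def fit_logistic_cdf(coeffs):
    """Approximate the cumulative count distribution of the coefficient
    values with a logistic CDF; moment matching (assumed fitting
    method) recovers location mu and scale s from mean and variance."""
    n = len(coeffs)
    mu = sum(coeffs) / n
    var = sum((c - mu) ** 2 for c in coeffs) / n
    s = math.sqrt(3 * var) / math.pi  # logistic scale from variance
    return mu, s

def decompress(rank, mu, s):
    """Map a stored quantile rank in (0, 1) back to a coefficient
    value via the inverse of the fitted CDF (the logistic quantile
    function), replacing a decompression look-up table."""
    return mu + s * math.log(rank / (1 - rank))
```

Two stored parameters then stand in for however many table entries the look-up approach would need, which is the memory saving the abstract claims.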

Compensation table compression method, display manufacturing apparatus, and memory

The present invention discloses a compensation table compression method, a display manufacturing apparatus, and a memory. The method includes: obtaining a reference frame compensation table and a current frame compensation table; dividing the reference frame compensation table and the current frame compensation table into a plurality of coding blocks, wherein each coding block is separately processed by using multiple prediction modes to obtain a residual coding block in the corresponding prediction mode; and compressing the residual coding block. By using the above method, the invention can save resources, reduce costs, and improve work efficiency.
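The per-block mode decision described above can be sketched for one coding block. The mode set is an assumption: an inter mode predicting from the co-located reference-frame block, and an intra mode predicting each entry from its left neighbour; the mode with the smaller absolute residual sum wins.

```python
def block_residuals(ref_block, cur_block):
    """Try two candidate prediction modes for one coding block and
    return the winning mode name with its residual block."""
    # Inter mode: predict from the reference-frame compensation table.
    inter = [c - r for c, r in zip(cur_block, ref_block)]
    # Intra mode: predict each entry from its left neighbour (0 first).
    intra, prev = [], 0
    for c in cur_block:
        intra.append(c - prev)
        prev = c
    def cost(res):
        return sum(abs(v) for v in res)
    return ("inter", inter) if cost(inter) <= cost(intra) else ("intra", intra)
```

When consecutive frame compensation tables differ only slightly, the inter residuals are near zero and compress far better than the raw table, which is where the resource saving comes from.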

Compression and/or decompression of activation data

A method for compressing activation data of a neural network to be written to a storage is provided. The activation data is formed into a plurality of groups, and a first state indicator indicates whether any data elements within each group have a non-zero value. A second state indicator indicates, for each group having a non-zero value, whether sub-groups within the group contain a data element having a non-zero value. A sub-group state indicator indicates, for each sub-group having a non-zero value, which data elements within that sub-group have a non-zero value. Non-zero values of data elements in the activation data are encoded, and a compressed data set is formed comprising the first state indicators, any second state indicators, any sub-group state indicators, and the encoded non-zero values.
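The three-level indicator hierarchy can be sketched as follows; the group and sub-group sizes (8 and 4) are assumptions, and the non-zero values are kept unencoded here rather than passed through a value coder.

```python
def compress_sparse(data, group=8, sub=4):
    """Hierarchical zero-masking: one first state indicator per group,
    one second state indicator per sub-group of a non-zero group, a
    per-sub-group bitmap of non-zero positions, plus the values."""
    first, second, bitmaps, values = [], [], [], []
    for g in range(0, len(data), group):
        grp = data[g:g + group]
        if not any(grp):
            first.append(0)       # whole group is zero: nothing else stored
            continue
        first.append(1)
        for s in range(0, len(grp), sub):
            sg = grp[s:s + sub]
            if not any(sg):
                second.append(0)  # whole sub-group is zero
                continue
            second.append(1)
            bitmaps.append([1 if v else 0 for v in sg])
            values.extend(v for v in sg if v)
    return first, second, bitmaps, values
```

An all-zero group costs a single indicator bit, so the scheme pays off on the highly sparse activation maps typical after ReLU layers.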

Lossless integer compression scheme
11854235 · 2023-12-26

Decompressing a compressed image to obtain a decompressed image includes receiving, in a compressed stream, compressed pixel values of the compressed image; decompressing, from the compressed stream, a first compressed pixel value of the compressed pixel values using a lossy floating-point decompression scheme to obtain a floating-point pixel value; rounding the floating-point pixel value to a nearest integer to obtain a pixel value of the decompressed image; and displaying or storing the decompressed image.
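A minimal sketch of the final decode stage: rounding the lossy floating-point result to the nearest integer. The upstream floating-point codec is not shown; the overall scheme is lossless only when that codec keeps the per-pixel error strictly below 0.5, so the rounding snaps each value back to the original integer.

```python
def decode_pixel(f: float) -> int:
    """Round a lossily decompressed floating-point pixel value to the
    nearest integer (ties away from zero), recovering the original
    integer whenever the codec error is below 0.5."""
    return int(f + 0.5) if f >= 0 else -int(-f + 0.5)

def decompress_image(floats):
    """Apply the rounding stage across a decompressed pixel stream."""
    return [decode_pixel(f) for f in floats]
```

Usage: a pixel stored as 127 might decompress to 127.3 or 126.7; both round back to exactly 127.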

GUARANTEED DATA COMPRESSION USING INTERMEDIATE COMPRESSED DATA
20210211138 · 2021-07-08

Methods for converting an n-bit number into an m-bit number for situations where n>m and also for situations where n<m, where n and m are integers. The methods use truncation or bit replication followed by the calculation of an adjustment value which is applied to the replicated number.