Patent classifications
H03M7/3082
Compression for sparse data structures utilizing mode search approximation
Embodiments are generally directed to compression for sparse data structures utilizing mode search approximation. An embodiment of an apparatus includes one or more processors including a graphics processor to process data; and a memory for storage of data, including compressed data. The one or more processors are to provide for compression of a data structure, including identification of a mode in the data structure, the data structure including a plurality of values and the mode being the most repeated value in the data structure, wherein identification of the mode includes application of a mode approximation operation, and encoding of an output vector to include the identified mode, a significance map to indicate locations at which the mode is present in the data structure, and remaining uncompressed data from the data structure.
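The abstract leaves the mode-approximation operation unspecified. Below is a minimal Python sketch of the overall scheme, assuming a sampling-based approximation; all names are illustrative, not from the patent.

```python
import random
from collections import Counter

def approximate_mode(values, sample_size=64):
    # stand-in for the patent's mode approximation operation: the mode of a
    # small random sample is very likely the global mode when one value dominates
    sample = random.sample(values, min(sample_size, len(values)))
    return Counter(sample).most_common(1)[0][0]

def compress(values):
    mode = approximate_mode(values)
    significance = [v == mode for v in values]     # where the mode occurs
    residue = [v for v in values if v != mode]     # remaining uncompressed data
    return mode, significance, residue

def decompress(mode, significance, residue):
    it = iter(residue)
    return [mode if hit else next(it) for hit in significance]

data = [0, 0, 7, 0, 3, 0, 0, 9]
assert decompress(*compress(data)) == data
```

Decompression walks the significance map, emitting the mode where the map is set and the next residue value elsewhere, so the round trip is lossless even when the approximation misses the true mode (only the compression ratio suffers).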
ENCODING DEVICE, DECODING DEVICE, ENCODING METHOD, AND DECODING METHOD
An encoding device is provided with: a quantizing circuit which generates quantization parameters including first information on a vector quantization codebook and second information on code vectors included in the codebook; and a control circuit which uses a second number of bits, obtained from the difference between a first number of bits available for encoding a sub-vector in the vector quantization and the number of bits consumed by the sub-vector's quantization parameters, to control encoding of the first information for the sub-vector.
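One plausible reading of this bit-budget control, sketched in Python with invented names and bit widths:

```python
def encode_codebook_info(available_bits, param_bits, codebook_id,
                         codebook_id_bits=3):
    # the "second number of bits": what remains of the sub-vector's budget
    # once its quantization parameters are accounted for
    spare = available_bits - param_bits
    if spare >= codebook_id_bits:
        # enough room: encode the first information (the codebook) explicitly
        return "explicit", format(codebook_id, f"0{codebook_id_bits}b")
    # too few bits: the decoder falls back to an implicit default codebook
    return "implicit", ""

print(encode_codebook_info(available_bits=16, param_bits=12, codebook_id=5))
print(encode_codebook_info(available_bits=16, param_bits=15, codebook_id=5))
```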
Nonlinear, decentralized processing unit and related systems or methodologies
Disclosed is a processor chip that operates with on-chip and off-chip software. The chip is optimized for hyperdimensional, fixed-point vector algebra to efficiently store, process, and retrieve information. A specialized on-chip data-embedding algorithm uses algebraic logic gates to convert off-chip normal data, such as images and spreadsheets, into a discrete, abstract vector space where information is processed with off-chip software and on-chip accelerated computation via a desaturation method. Information is retrieved using an on-chip optimized decoding algorithm. Additional software provides an interface between a CPU and the processor chip to manage information-processing instructions for efficient data transfer on- and off-chip, and provides intelligent processing that associates input information to allow for suggestive outputs.
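For readers unfamiliar with hyperdimensional computing, here is a generic sketch of the kind of fixed-point vector algebra the abstract alludes to (binary hypervectors, XOR binding, majority bundling); none of this code comes from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 10_000  # hypervectors are very wide, so random vectors are quasi-orthogonal

def random_hv():
    return rng.integers(0, 2, DIM, dtype=np.uint8)

def bind(a, b):
    return a ^ b              # XOR binding associates two hypervectors

def bundle(hvs):
    # bitwise majority vote superposes several hypervectors into one
    return (np.sum(hvs, axis=0) > len(hvs) / 2).astype(np.uint8)

def similarity(a, b):
    return 1.0 - np.count_nonzero(a ^ b) / DIM   # normalized Hamming similarity

# embed a key-value record into a single hypervector
keys = {k: random_hv() for k in ("shape", "color")}
vals = {v: random_hv() for v in ("square", "red")}
record = bundle([bind(keys["shape"], vals["square"]),
                 bind(keys["color"], vals["red"])])

# retrieve: unbinding a key leaves a vector close to its value
probe = bind(record, keys["color"])
print(max(vals, key=lambda v: similarity(probe, vals[v])))  # "red"
```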
Parallel processing circuits for neural networks
The present disclosure provides an integrated circuit chip device and a related product. The integrated circuit chip device includes a primary processing circuit and a plurality of basic processing circuits. The primary processing circuit, or at least one of the plurality of basic processing circuits, includes a compression mapping circuit configured to compress data of a neural network operation. The technical solution provided by the present disclosure has the advantages of a reduced amount of computation and low power consumption.
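The abstract does not say which compression is performed; a common choice for neural-network data, and one consistent with the claimed computation and power savings, is zero-skipping. A hypothetical sketch:

```python
import numpy as np

def compress_row(row):
    # drop zeros; the mask lets a basic processing circuit rebuild positions
    mask = row != 0
    return mask, row[mask]

def sparse_dot(mask, vals, b):
    # multiply-accumulate only where the compressed row carries data,
    # which is where the computation and power savings would come from
    return float(np.dot(vals, b[mask]))

a_row = np.array([0.0, 2.0, 0.0, 0.0, -1.0, 0.0])
b = np.arange(6, dtype=float)
mask, vals = compress_row(a_row)
print(sparse_dot(mask, vals, b))   # -2.0, same as np.dot(a_row, b)
```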
SYSTEM AND METHOD FOR A CONTENT-AWARE AND CONTEXT-AWARE COMPRESSION ALGORITHM SELECTION MODEL FOR A FILE SYSTEM
A method for managing a file system includes obtaining, by a compression optimizing manager, a compression algorithm selection request for the file system, determining a set of selection inputs based on a set of file system parameters of the file system, applying a compression selection model to the set of selection inputs to obtain a compression algorithm selection, and initiating a file system compression implementation of the file system using the compression algorithm selection.
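A toy Python stand-in for such a selection model, built on stdlib codecs; the selection inputs and thresholds are invented for illustration:

```python
import bz2
import lzma
import zlib

def select_algorithm(avg_file_size, cpu_budget, entropy_estimate):
    """Toy stand-in for a compression selection model applied to
    inputs derived from file system parameters."""
    if entropy_estimate > 0.95:        # near-random data barely compresses
        return None                    # store uncompressed
    if cpu_budget == "low" or avg_file_size < 4096:
        return zlib.compress           # fast, modest ratio
    if cpu_budget == "high":
        return lzma.compress           # slow, strongest ratio
    return bz2.compress                # middle ground

compress = select_algorithm(avg_file_size=1 << 20,
                            cpu_budget="high",
                            entropy_estimate=0.6)
blob = compress(b"example file contents " * 100)
print(len(blob))
```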
Variable bit rate LPC filter quantizing and inverse quantizing device and method
A device and a method for quantizing an LPC filter in the form of an input vector in a quantization domain comprise a calculator of a first-stage approximation of the input vector, a subtractor of the first-stage approximation from the input vector to produce a residual vector, a calculator of a weighting function from the first-stage approximation, a weighter of the residual vector with the weighting function, and a quantizer of the weighted residual vector to supply a quantized weighted residual vector. A device and a method for inverse quantizing of an LPC filter comprise means for receiving coded indices representative of a first-stage approximation of a vector representative of the LPC filter in a quantization domain and of a quantized weighted residual version of the vector, a calculator of an inverse weighting function from the first-stage approximation, an inverse quantizer of the quantized weighted residual version of the vector to produce a weighted residual vector, a multiplier of the weighted residual vector by the inverse weighting function to produce a residual vector, and an adder of the first-stage approximation with the residual vector to produce the vector representative of the LPC filter in the quantization domain.
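The two-stage structure translates naturally into code. A NumPy sketch follows; the weighting formula is invented, and the essential property taken from the abstract is that the weights are computed from the first-stage approximation alone, so the decoder can recompute them without side information.

```python
import numpy as np

def quantize(x, codebook, step=0.05):
    # first stage: nearest codebook vector approximates the input vector
    idx = int(np.argmin(np.linalg.norm(codebook - x, axis=1)))
    approx = codebook[idx]
    residual = x - approx                           # subtractor
    # weighting function computed from the first-stage approximation only
    w = 1.0 / (1.0 + np.abs(approx))
    q = np.round(residual * w / step).astype(int)   # quantized weighted residual
    return idx, q

def inverse_quantize(idx, q, codebook, step=0.05):
    approx = codebook[idx]
    w = 1.0 / (1.0 + np.abs(approx))    # recomputed; no side information needed
    residual = (q * step) / w           # inverse weighting
    return approx + residual            # adder

codebook = np.array([[0.2, 0.5, 1.1],
                     [0.4, 0.9, 1.6]])
x = np.array([0.25, 0.55, 1.15])
idx, q = quantize(x, codebook)
print(inverse_quantize(idx, q, codebook))   # close to x
```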
Compression assist instructions
In an embodiment, a processor supports one or more compression assist instructions which may be employed in compression software to improve the performance of the processor when performing compression/decompression. That is, the compression/decompression task may be performed more rapidly and consume less power when the compression assist instructions are employed than when they are not. In some cases, the cost of a more effective, more complex compression algorithm may be reduced to the cost of a less effective, less complex compression algorithm.
Methods and apparatus systems for unified speech and audio decoding improvements
The present disclosure relates to an apparatus for decoding an encoded Unified Audio and Speech stream. The apparatus comprises a core decoder for decoding the encoded Unified Audio and Speech stream. The core decoder includes a fast Fourier transform, FFT, module implementation based on a Cooley-Tukey algorithm. The FFT module is configured to determine a discrete Fourier transform, DFT. Determining the DFT involves recursively breaking down the DFT into small FFTs based on the Cooley-Tukey algorithm, using radix-4 if the number of points of the FFT is a power of 4 and using mixed radix if the number is not a power of 4. Performing the small FFTs involves applying twiddle factors. Applying the twiddle factors involves referring to pre-computed values for the twiddle factors. The present disclosure further relates to an apparatus for decoding an encoded Unified Audio and Speech stream, in which the core decoder is configured to decode an LPC filter that has been quantized using a line spectral frequency, LSF, representation from the Unified Audio and Speech stream. Decoding the LPC filter from the Unified Audio and Speech stream comprises computing a first-stage approximation of an LSF vector; reconstructing a residual LSF vector; if an absolute quantization mode has been used for quantizing the LPC filter, determining inverse LSF weights for inverse weighting of the residual LSF vector by referring to pre-computed values for the inverse LSF weights or their respective corresponding LSF weights; inverse weighting the residual LSF vector by the determined inverse LSF weights; and calculating the LPC filter based on the inversely-weighted residual LSF vector and the first-stage approximation of the LSF vector. The present disclosure further relates to corresponding methods and storage media.
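A minimal recursive sketch of the described radix selection with a precomputed twiddle table, simplified to pick radix 4 whenever 4 divides the size; this is generic Cooley-Tukey, not the patent's optimized implementation.

```python
import cmath

def fft(x):
    n = len(x)
    if n == 1:
        return list(x)
    # pre-computed twiddle factors for this size
    tw = [cmath.exp(-2j * cmath.pi * t / n) for t in range(n)]
    # radix-4 when 4 divides the size, otherwise the smallest factor (mixed radix)
    r = 4 if n % 4 == 0 else next(p for p in range(2, n + 1) if n % p == 0)
    m = n // r
    subs = [fft(x[i::r]) for i in range(r)]    # r smaller DFTs of size m
    # combine: X[k] = sum_i tw[i*k mod n] * Sub_i[k mod m]
    return [sum(subs[i][k % m] * tw[(i * k) % n] for i in range(r))
            for k in range(n)]

# check against a direct DFT for a size that is not a power of two
vals = [complex(v) for v in range(6)]
direct = [sum(vals[t] * cmath.exp(-2j * cmath.pi * t * k / 6) for t in range(6))
          for k in range(6)]
assert all(abs(a - b) < 1e-9 for a, b in zip(fft(vals), direct))
```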
Convolution acceleration with embedded vector decompression
Techniques and systems are provided for implementing a convolutional neural network. One or more convolution accelerators are provided, each including a feature line buffer memory, a kernel buffer memory, and a plurality of multiply-accumulate (MAC) circuits arranged to multiply and accumulate data. In a first operational mode the convolution accelerator stores feature data in the feature line buffer memory and stores kernel data in the kernel buffer memory. In a second mode of operation, the convolution accelerator stores kernel decompression tables in the feature line buffer memory.
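A plain NumPy sketch of what table-based kernel decompression feeding a MAC loop could look like; the table shape and sizes are invented for illustration.

```python
import numpy as np

# kernel decompression table: each stored index expands to 3 kernel weights
table = np.array([[0.0, 0.5, -0.5],
                  [1.0, 0.25, -0.25],
                  [0.5, 0.5, 0.5],
                  [0.0, 0.0, 1.0]])
indices = np.array([0, 1, 3])              # compressed kernel: 3 indices
kernel = table[indices].reshape(3, 3)      # decompressed 3x3 kernel

def conv2d(feature, kernel):
    h, w = feature.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            # one multiply-accumulate per output pixel over the 3x3 window
            out[i, j] = np.sum(feature[i:i+3, j:j+3] * kernel)
    return out

feature = np.arange(25, dtype=float).reshape(5, 5)
print(conv2d(feature, kernel).shape)       # (3, 3)
```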
Codebook subset restriction signaling
A network node signals to a wireless communication device which precoders in a codebook are restricted from being used. The network node in this regard generates codebook subset restriction signaling that, for each of one or more groups of precoders, jointly restricts the precoders in the group by restricting a certain component (e.g., a certain beam precoder) that the precoders in the group have in common. This signaling may be, for instance, rank-agnostic signaling that jointly restricts the precoders in a group without regard to the precoders' transmission rank. Regardless, the network node sends the generated signaling to the wireless communication device.
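A toy model of this joint, rank-agnostic restriction (the codebook structure is invented for illustration): each precoder is built on a beam index, and a single restriction bit per beam removes every precoder using that beam, at any rank.

```python
# toy codebook: a precoder is identified by (beam, co_phase, rank)
codebook = [(beam, cp, rank)
            for beam in range(4)
            for cp in range(2)
            for rank in (1, 2)]

def restricted(precoder, beam_restriction_bitmap):
    # one bit per beam group jointly covers all ranks and co-phasings
    beam, _, _ = precoder
    return bool(beam_restriction_bitmap & (1 << beam))

bitmap = 0b0100          # restrict beam 2: 4 bits instead of one per precoder
allowed = [p for p in codebook if not restricted(p, bitmap)]
assert all(p[0] != 2 for p in allowed)
```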