Patent classifications
H03M7/6005
Deflate compression using sub-literals for reduced complexity Huffman coding
A literal element that has a plurality of bits is received. The plurality of bits in the literal element is divided into a first sub-literal comprising a first set of bits and a second sub-literal comprising a second set of bits. The first sub-literal is encoded using a first Huffman code tree to obtain a first sub-literal codeword; the second sub-literal is encoded using a second Huffman code tree to obtain a second sub-literal codeword. Encoded data that includes information associated with the first Huffman code tree, information associated with the second Huffman code tree, the first sub-literal codeword, and the second sub-literal codeword is output.
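The split described above can be sketched in a few lines of Python: each 8-bit literal is divided into a high and a low 4-bit sub-literal, and each half is encoded with its own Huffman tree built over a 16-symbol alphabet instead of a single 256-symbol alphabet. The 4/4 split, the tree builder, and the bit-string output format are illustrative assumptions, not the patent's fixed design:

```python
import heapq
from collections import Counter
from itertools import count

def huffman_codes(symbols):
    """Build a Huffman code table (symbol -> bit string) from a symbol sequence."""
    freq = Counter(symbols)
    tie = count()  # tie-breaker so the heap never compares dicts
    heap = [(f, next(tie), {s: ""}) for s, f in freq.items()]
    heapq.heapify(heap)
    if len(heap) == 1:  # degenerate case: a single distinct symbol
        (_, _, table), = heap
        return {s: "0" for s in table}
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)
        f2, _, t2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in t1.items()}
        merged.update({s: "1" + c for s, c in t2.items()})
        heapq.heappush(heap, (f1 + f2, next(tie), merged))
    return heap[0][2]

def encode_literals(literals):
    """Split each 8-bit literal into two 4-bit sub-literals and encode
    each half with its own Huffman tree (two 16-entry alphabets
    instead of one 256-entry alphabet)."""
    highs = [b >> 4 for b in literals]
    lows = [b & 0x0F for b in literals]
    high_tree = huffman_codes(highs)
    low_tree = huffman_codes(lows)
    bits = "".join(high_tree[h] + low_tree[l] for h, l in zip(highs, lows))
    return high_tree, low_tree, bits
```

The complexity reduction comes from the tree sizes: two trees over 16 symbols are far smaller to build, store, and signal than one tree over 256 symbols.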
SYSTEM AND METHOD FOR LOW-DISTORTION COMPACTION OF FLOATING-POINT NUMBERS
A system and method for low-distortion compaction of floating-point numbers comprising a pre-encoder, a data deconstruction engine, a library manager, a codeword storage, and a data reconstruction engine. A pre-encoder may receive a plurality of data sourcepackets which may contain one or more floating-point numbers, and the received data sourcepackets are scanned to identify floating-point numbers. Identified floating-point numbers may be pre-encoded into binary string representations which are low-distortion embeddings of real numbers into a Hamming space. The binary string representation may be indexed to indicate that it represents a floating-point number before being compacted by a data deconstruction engine and library manager. The pre-encoding of floating-point numbers located within a sourcepacket enables the system to maximize the benefit of the compaction capabilities of the data deconstruction engine.
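One concrete way to embed floats into a Hamming space with low distortion, sketched below, is to map the IEEE-754 bit pattern to a monotone unsigned ordering (the standard sign-flip trick) and then Gray-code it, so that numerically adjacent values differ in few bits. This particular embedding is an illustrative assumption; the abstract does not specify which embedding is used:

```python
import struct

def float_to_hamming(x: float) -> str:
    """Embed a float into a 32-bit string so numerically close values
    tend to be close in Hamming distance: make the IEEE-754 pattern
    order-preserving over the reals, then apply a binary-reflected
    Gray code so consecutive encodings differ in exactly one bit."""
    (u,) = struct.unpack(">I", struct.pack(">f", x))
    # Sign-flip trick: negative floats are bit-inverted, positives get
    # the sign bit set, yielding an unsigned integer order that matches
    # the numeric order of the floats.
    u = u ^ 0xFFFFFFFF if u & 0x80000000 else u | 0x80000000
    gray = u ^ (u >> 1)  # binary-reflected Gray code
    return format(gray, "032b")
```

Because consecutive integers have Gray codes at Hamming distance 1, two adjacent representable float32 values map to strings that differ in a single bit.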
SYSTEM AND METHOD FOR MULTIPLE PASS DATA COMPACTION UTILIZING DELTA ENCODING
The inventor has conceived, and reduced to practice, a system and method for data compaction that applies delta encoding to entropy encoding methods to improve the compaction achieved by entropy encoding under certain conditions and when compacting data having certain characteristics. Delta encoding may be applied to entropy encoding methods to further compact data sets by reducing the number of sourceblocks included in a codebook to those most commonly encountered in the data to be encoded and, where mismatches occur during encoding, using delta encoding of bit differences with existing sourceblocks in the codebook rather than adding new sourceblocks to the codebook.
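The mismatch-handling idea can be sketched as follows: a sourceblock that matches a codebook entry exactly is emitted as a reference, while a near-miss is emitted as a reference plus the XOR of the differing bits, so the codebook never grows. The token format and the "closest entry = most zero bytes in the XOR" heuristic are hypothetical choices for illustration:

```python
def encode_block(block: bytes, codebook: list) -> tuple:
    """Encode a fixed-size sourceblock as either an exact codebook
    reference or as a delta (XOR against the closest codebook entry)
    rather than adding a new sourceblock to the codebook.
    Hypothetical token format: ('ref', i) or ('delta', i, xor_bytes)."""
    best_i, best_delta = None, None
    for i, ref in enumerate(codebook):
        delta = bytes(a ^ b for a, b in zip(block, ref))
        if best_delta is None or delta.count(0) > best_delta.count(0):
            best_i, best_delta = i, delta
    if not any(best_delta):
        return ("ref", best_i)
    return ("delta", best_i, best_delta)

def decode_block(token, codebook) -> bytes:
    """Invert encode_block: apply the stored XOR delta, if any."""
    if token[0] == "ref":
        return codebook[token[1]]
    _, i, delta = token
    return bytes(a ^ b for a, b in zip(codebook[i], delta))
```

Keeping the codebook limited to common sourceblocks keeps the entropy codes short; the occasional delta token pays a small cost for rare blocks instead of inflating the codebook for everyone.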
TECHNIQUES FOR PARAMETER SET AND HEADER DESIGN FOR COMPRESSED NEURAL NETWORK REPRESENTATION
Systems and methods for encoding and decoding neural network data are provided. A method includes: obtaining an independent neural network with a topology; encoding the independent neural network with the topology so as to obtain a neural network representation (NNR) bitstream; and sending the NNR bitstream to a decoder, wherein the NNR bitstream includes a group of NNR units (GON) that represents the independent neural network with the topology, and the GON includes an NNR model parameter set unit, an NNR layer parameter set unit, an NNR topology unit, an NNR quantization unit, and an NNR compressed data unit.
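The GON structure the abstract enumerates can be modeled as a simple container of the five unit types. The class names and the length-prefixed framing below are illustrative stand-ins; the actual NNR syntax element names and serialization are defined by the MPEG NNR specification, not reproduced here:

```python
from dataclasses import dataclass

@dataclass
class NNRUnit:
    unit_type: str       # e.g. "model_parameter_set", "topology"
    payload: bytes = b""

@dataclass
class GroupOfNNRUnits:
    """One GON: the five unit types the abstract lists for an
    independent neural network with its topology."""
    model_parameter_set: NNRUnit
    layer_parameter_set: NNRUnit
    topology: NNRUnit
    quantization: NNRUnit
    compressed_data: NNRUnit

    def serialize(self) -> bytes:
        # Toy framing: 4-byte big-endian length prefix per unit payload.
        out = b""
        for u in (self.model_parameter_set, self.layer_parameter_set,
                  self.topology, self.quantization, self.compressed_data):
            out += len(u.payload).to_bytes(4, "big") + u.payload
        return out
```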
System and method for data compaction and security using multiple encoding algorithms
A system and method for encoding data using a plurality of encoding libraries. Portions of the data are encoded by different encoding libraries, depending on which library provides the greatest compaction for a given portion of the data. This methodology not only provides substantial improvements in data compaction over use of a single data compaction algorithm with the highest average compaction, but also provides substantial additional security in that multiple decoding libraries must be used to decode the data. In some embodiments, each portion of data may further be encoded using different sourceblock sizes, providing further security enhancements as decoding requires multiple decoding libraries and knowledge of the sourceblock size used for each portion of the data. In some embodiments, encoding libraries may be randomly or pseudo-randomly rotated to provide additional security.
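The per-portion selection can be sketched with stdlib codecs standing in for the encoding libraries: each portion is compressed by every candidate, the smallest result wins, and the chosen library's name is recorded so the matching decoder can be applied. The zlib/bz2/lzma codecs and the 4096-byte portion size are stand-in assumptions; the patent's encoding libraries are codebook-based, not these codecs:

```python
import zlib, bz2, lzma

ENCODERS = {"zlib": zlib.compress, "bz2": bz2.compress, "lzma": lzma.compress}
DECODERS = {"zlib": zlib.decompress, "bz2": bz2.decompress, "lzma": lzma.decompress}

def encode_portions(data: bytes, portion_size: int = 4096):
    """For each portion, keep whichever library yields the smallest
    output, tagging it with the library name the decoder must use."""
    tokens = []
    for off in range(0, len(data), portion_size):
        portion = data[off:off + portion_size]
        name, blob = min(((n, f(portion)) for n, f in ENCODERS.items()),
                         key=lambda kv: len(kv[1]))
        tokens.append((name, blob))
    return tokens

def decode_portions(tokens) -> bytes:
    """Decoding requires knowing which library encoded each portion."""
    return b"".join(DECODERS[name](blob) for name, blob in tokens)
```

Note how the security claim falls out of the structure: without the per-portion library tags (and, in the fuller scheme, the per-portion sourceblock sizes), an attacker cannot even pick the right decoder for each portion.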
Methods, devices and systems for efficient compression and decompression for higher throughput
A decompression system has a plurality of decompression devices in an array or chain layout for decompressing respective compressed data values of a compressed data block. A first decompression device is connected to a next decompression device, and a last decompression device is connected to a preceding decompression device. The first decompression device decompresses a compressed data value and reduces the compressed data block by extracting a codeword of the compressed data value and removing the compressed data value from the compressed data block, retrieving a decompressed data value out of the extracted codeword, and passing the reduced compressed data block to the next decompression device. The last decompression device receives a reduced compressed data block from the preceding decompression device and decompresses another compressed data value by extracting a codeword of the other compressed data value, and retrieving another decompressed data value out of the extracted codeword. Elected for publication; FIG. 8.
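The chain behavior can be modeled in software: each "device" extracts the leading codeword of the block, emits its decompressed value, and hands the reduced block to the next device. The one-byte codewords and lookup-table decode below are toy simplifications (real devices extract variable-length codewords), used only to show the extract-reduce-pass structure:

```python
def chain_decompress(block: bytes, table: dict, chain_len: int):
    """Simulate a chain of decompression devices. Each stage extracts
    one codeword, removes it from the block, looks up its decompressed
    value, and passes the reduced block on. Returns the decompressed
    values and whatever remains of the block."""
    out = []
    for _ in range(chain_len):
        if not block:
            break
        codeword, block = block[:1], block[1:]  # extract and remove
        out.append(table[codeword])             # decode the codeword
    return out, block
```

Throughput comes from pipelining: while the last device decodes codeword N, the first device can already be extracting codeword N + chain_len from the next block.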
System and method for data-layout aware decompression and verification using a hardware accelerator chain
A computer-implemented method of data decompression and verification includes decompressing a compressed data segment to generate a decompressed data region. The method also includes generating a segment vector array (SVA) including a number of segment vectors corresponding to data segments within the decompressed data region, each segment vector indicating a location and a size of a corresponding data segment. The method also includes transmitting the SVA to a chain plugin module and transmitting segment vector array data to an SVA-based message constructor. The method also includes constructing an SVA-based message including the location and size of data segments within the decompressed data region, and transmitting the SVA-based message to a hardware accelerator. The method also includes performing verification sessions at the hardware accelerator, each verification session corresponding to a specific data segment indicated by the SVA-based message.
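The segment vector array itself is just a list of (location, size) pairs over the decompressed region, which is what lets the accelerator verify each segment independently without re-parsing the region. The delimiter-based segmentation below is purely illustrative; in the patent, segment boundaries come from the data layout, not from a delimiter:

```python
def build_sva(region: bytes, delimiter: bytes = b"\n"):
    """Build a segment vector array for a decompressed region:
    one (offset, size) vector per data segment, so a downstream
    verifier can address each segment directly."""
    sva, off = [], 0
    for seg in region.split(delimiter):
        sva.append((off, len(seg)))
        off += len(seg) + len(delimiter)
    return sva
```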
Backward-compatible integration of high frequency reconstruction techniques for audio signals
A method for decoding an encoded audio bitstream is disclosed. The method includes receiving the encoded audio bitstream and decoding the audio data to generate a decoded lowband audio signal. The method further includes extracting high frequency reconstruction metadata and filtering the decoded lowband audio signal with an analysis filterbank to generate a filtered lowband audio signal. The method also includes extracting a flag indicating whether either spectral translation or harmonic transposition is to be performed on the audio data and regenerating a highband portion of the audio signal using the filtered lowband audio signal and the high frequency reconstruction metadata in accordance with the flag.
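The flag-controlled dispatch at the heart of the method can be sketched as below. The two regenerators are deliberately toy models over (band, value) pairs; the actual spectral translation and harmonic transposition involve filterbank math not reproduced here, and the flag semantics (0 = translation, 1 = transposition) and metadata keys are illustrative assumptions:

```python
def regenerate_highband(filtered_lowband, metadata, flag: int):
    """Dispatch on the extracted bitstream flag: choose spectral
    translation (copy-up) or harmonic transposition to regenerate
    the highband from the filtered lowband and the HFR metadata."""
    if flag == 0:
        return spectral_translation(filtered_lowband, metadata)
    return harmonic_transposition(filtered_lowband, metadata)

def spectral_translation(lowband, metadata):
    # Copy-up: shift each lowband band up by a fixed band offset (toy model).
    return [(band + metadata["offset"], value) for band, value in lowband]

def harmonic_transposition(lowband, metadata):
    # Transpose: scale each band index by the transposition factor (toy model).
    return [(band * metadata["factor"], value) for band, value in lowband]
```

Carrying the choice as an explicit flag is what makes the integration backward-compatible: a legacy decoder can ignore the flag and the HFR metadata while still decoding the lowband.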
Method and device for encoding and decoding an image by splitting blocks into areas
A method for encoding or decoding at least one image, an image being split into blocks of elements. The method includes, for at least one block: splitting the block into at least two areas; and processing at least one of the areas. The processing includes scanning the elements of the area according to a predetermined scanning order, and for at least one scanned element, called a current element: selecting at least one predictor element previously encoded or decoded according to a prediction function; and predicting the current element: from the at least one predictor element, if the at least one predictor element belongs to the area; or from at least one replacement value, otherwise.
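The per-element rule can be sketched over a 2-D block with a boolean area mask: an element is predicted from a neighbour only when that neighbour lies inside the same area, and from a replacement value otherwise. Left-neighbour prediction, raster-order scanning, and the constant default are illustrative choices; the patent allows other prediction functions, scan orders, and replacement values:

```python
def predict_area(block, area_mask, default=128):
    """Scan an area's elements in raster order; predict each from its
    left neighbour if that neighbour belongs to the same area, else
    fall back to a replacement value. Returns the residuals an
    encoder would transmit."""
    h, w = len(block), len(block[0])
    residuals = []
    for y in range(h):
        for x in range(w):
            if not area_mask[y][x]:
                continue  # element belongs to another area
            if x > 0 and area_mask[y][x - 1]:
                pred = block[y][x - 1]   # predictor element inside the area
            else:
                pred = default           # replacement value
            residuals.append(block[y][x] - pred)
    return residuals
```

Confining predictors to the current area is what makes each area independently decodable: no residual ever depends on an element from the other area of the block.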
Techniques to configure physical compute resources for workloads via circuit switching
Embodiments are generally directed to apparatuses, methods, techniques and so forth to select two or more processing units of a plurality of processing units to process a workload, and configure a circuit switch to link the two or more processing units to process the workload, the two or more processing units each linked to each other via paths of communication and the circuit switch.