Patent classifications
H03M7/6005
METHOD AND SYSTEM FOR COMPRESSING APPLICATION DATA FOR OPERATIONS ON MULTI-CORE SYSTEMS
A system and method to compress application control data, such as weights for a layer of a convolutional neural network, is disclosed. A multi-core system for executing at least one layer of the convolutional neural network includes a storage device storing a compressed weight matrix of a set of weights of the at least one layer of the convolutional neural network and a decompression matrix. The compressed weight matrix is formed by matrix factorization and quantization of the floating point value of each weight to a reduced-precision floating point format. A decompression module is operable to obtain an approximation of the weight values by decompressing the compressed weight matrix through the decompression matrix. A plurality of cores executes the at least one layer of the convolutional neural network with the approximation of the weight values to produce an inference output.
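The factorization-plus-quantization pipeline described above can be sketched as follows. The use of SVD as the factorization, the rank, the matrix shapes, and the float16 target format are illustrative assumptions, not details taken from the patent:

```python
# Sketch: compress a weight matrix by low-rank factorization, then quantize
# both factors to a reduced-precision floating point format (float16 here).
import numpy as np

def compress_weights(W, rank):
    """Factor W (m x n) into a compressed weight matrix and a decompression matrix."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    U_r = (U[:, :rank] * s[:rank]).astype(np.float16)  # compressed weight matrix
    V_r = Vt[:rank, :].astype(np.float16)              # decompression matrix
    return U_r, V_r

def decompress_weights(U_r, V_r):
    """Recover an approximation of the original weight values."""
    return U_r.astype(np.float32) @ V_r.astype(np.float32)

W = np.random.default_rng(0).standard_normal((64, 64)).astype(np.float32)
U_r, V_r = compress_weights(W, rank=16)
W_approx = decompress_weights(U_r, V_r)
```

With rank 16, the two float16 factors occupy a quarter of the storage of the original float32 matrix, at the cost of an approximation error that a network's layers must tolerate.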
SYSTEM AND METHOD FOR DATA COMPACTION AND SECURITY USING MULTIPLE ENCODING ALGORITHMS
A system and method for encoding data using a plurality of encoding libraries. Portions of the data are encoded by different encoding libraries, depending on which library provides the greatest compaction for a given portion of the data. This methodology not only provides substantial improvements in data compaction over the use of a single data compaction algorithm with the highest average compaction, but also provides substantial additional security, in that multiple decoding libraries must be used to decode the data. In some embodiments, each portion of data may further be encoded using different sourceblock sizes, providing further security enhancements as decoding requires multiple decoding libraries and knowledge of the sourceblock size used for each portion of the data. In some embodiments, encoding libraries may be randomly or pseudo-randomly rotated to provide additional security.
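The per-portion selection idea can be sketched with standard library codecs standing in for the encoding libraries; the chunk size, the three codecs, and the tagging format are illustrative choices, not the patent's libraries:

```python
# Sketch: encode each portion with whichever codec yields the smallest output,
# tagging each portion with the codec id needed to decode it.
import bz2, lzma, zlib

CODECS = {
    0: (zlib.compress, zlib.decompress),
    1: (bz2.compress, bz2.decompress),
    2: (lzma.compress, lzma.decompress),
}

def encode(data, chunk_size=4096):
    out = []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        # Pick the codec that compacts this portion best.
        cid, blob = min(
            ((cid, comp(chunk)) for cid, (comp, _) in CODECS.items()),
            key=lambda t: len(t[1]),
        )
        out.append((cid, blob))
    return out

def decode(portions):
    return b"".join(CODECS[cid][1](blob) for cid, blob in portions)

data = (b"abc" * 3000) + bytes(range(256)) * 20
restored = decode(encode(data))
```

A decoder that lacks even one of the codecs (or, per the abstract, the sourceblock size used for a portion) cannot reconstruct the stream, which is the source of the claimed security benefit.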
INTERRUPTABLE BSDIFF DELTA DECOMPRESSION
A method includes inputting at least one compressed image in a computing system. The method also includes an in-place patching process in which another image is decompressed over the compressed image by a processor. Local variables are stored periodically so that, when power is restored after an interruption to the in-place patching, execution of the in-place patching is resumed at a later time by the processor by restoring the local variables. The method also includes completing the in-place patching process of decompressing the image over the inputted compressed image after restoring the local variables.
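The checkpoint-and-resume idea can be sketched as below. The delta format (per-byte additions, loosely in the spirit of bsdiff's diff block) and the checkpoint dictionary are simplifying assumptions; a real implementation would persist the state to nonvolatile storage:

```python
# Sketch: apply a delta to an image buffer in place, checkpointing the loop
# state ("local variables") so patching can resume after a power interruption.
def patch_inplace(image, delta, checkpoint, stop_at=None):
    """Patch image in place, resuming from checkpoint['i'] if present."""
    i = checkpoint.get("i", 0)
    while i < len(delta):
        if stop_at is not None and i >= stop_at:
            checkpoint["i"] = i      # persist locals before the interruption
            return False             # interrupted; resume later
        image[i] = (image[i] + delta[i]) % 256
        i += 1
    checkpoint["i"] = i
    return True                      # patching complete

old = bytearray(b"interruptible patching demo")
target = b"INTERRUPTIBLE PATCHING DEMO"
delta = bytes((t - o) % 256 for o, t in zip(old, target))

ckpt = {}
done = patch_inplace(old, delta, ckpt, stop_at=10)  # simulated power loss
done = patch_inplace(old, delta, ckpt)              # power restored: resume
```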
SYSTEM AND METHOD FOR DATA COMPRESSION WITH ENCRYPTION
A system and method for data compression with encryption that produces a conditioned data stream by replacing data blocks within an input data stream to bring the frequency of each data block closer to an ideal value, produces an error stream comprising the differences between the original data and the encrypted data, and compresses the conditioned data stream.
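A loose sketch of the conditioning step follows: over-represented byte values are replaced to pull each value's frequency toward the ideal (uniform) count, and every replacement is recorded in an error stream so the original data can be recovered. The substitution policy is an illustrative guess, not the patent's algorithm, and the encryption and compression stages are not shown:

```python
# Sketch: flatten the block-frequency histogram, recording replacements
# (position, original byte) in an error stream for exact recovery.
from collections import Counter

def condition(data):
    ideal = len(data) / 256           # ideal count per byte value
    counts = Counter()
    conditioned = bytearray()
    errors = []                       # error stream: (position, original byte)
    for pos, b in enumerate(data):
        if counts[b] >= ideal:
            sub = min(range(256), key=lambda v: counts[v])  # rarest value so far
            errors.append((pos, b))
            b = sub
        counts[b] += 1
        conditioned.append(b)
    return bytes(conditioned), errors

def restore(conditioned, errors):
    out = bytearray(conditioned)
    for pos, original in errors:
        out[pos] = original
    return bytes(out)

data = b"aaaaabbbbbcccccaaaaa" * 50
conditioned, errors = condition(data)
recovered = restore(conditioned, errors)
```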
Technologies for assigning workloads to balance multiple resource allocation objectives
Technologies for allocating resources of managed nodes to workloads to balance multiple resource allocation objectives include an orchestrator server to receive resource allocation objective data indicative of multiple resource allocation objectives to be satisfied. The orchestrator server is additionally to determine an initial assignment of a set of workloads among the managed nodes and receive telemetry data from the managed nodes. The orchestrator server is further to determine, as a function of the telemetry data and the resource allocation objective data, an adjustment to the assignment of the workloads to increase an achievement of at least one of the resource allocation objectives without decreasing an achievement of another of the resource allocation objectives, and apply the adjustments to the assignments of the workloads among the managed nodes as the workloads are performed. Other embodiments are also described and claimed.
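The Pareto-style adjustment rule, "improve at least one objective without decreasing another", can be sketched with a toy greedy search. The two objectives (load balance and peak load) and the single-workload moves are illustrative assumptions; the patent's orchestrator works from telemetry and objective data rather than known costs:

```python
# Sketch: repeatedly apply single-workload moves between nodes, accepting a
# move only if it improves >= 1 objective score and worsens none.
def objective_scores(assignment, costs):
    loads = [sum(costs[w] for w in node) for node in assignment]
    balance = -(max(loads) - min(loads))   # objective 1: spread load evenly
    peak = -max(loads)                     # objective 2: keep peak load low
    return (balance, peak)

def one_move(assignment, costs):
    base = objective_scores(assignment, costs)
    for src in assignment:
        for w in list(src):
            for dst in assignment:
                if dst is src:
                    continue
                src.remove(w); dst.append(w)           # tentative move
                new = objective_scores(assignment, costs)
                if (all(n >= b for n, b in zip(new, base))
                        and any(n > b for n, b in zip(new, base))):
                    return True                        # Pareto improvement: keep it
                dst.remove(w); src.append(w)           # revert the move
    return False

def pareto_adjust(assignment, costs):
    while one_move(assignment, costs):
        pass
    return assignment

costs = {"a": 4, "b": 3, "c": 2, "d": 1}
nodes = [["a", "b", "c", "d"], []]   # initial assignment: everything on node 0
pareto_adjust(nodes, costs)
loads = sorted(sum(costs[w] for w in n) for n in nodes)
```

Each accepted move strictly improves the score tuple, so the loop terminates; here it ends with both nodes carrying equal load.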
COMPRESSION DEVICE AND DECOMPRESSION DEVICE
According to one embodiment, an interleaving unit divides a symbol string into first and second symbols. A first coding unit converts the first symbols to first codewords. A first packet generating unit generates first packets including the first codewords. A first request generating unit generates first packet requests including sizes of variable length packets. A second coding unit converts the second symbols to second codewords. A second packet generating unit generates second packets including the second codewords. A second request generating unit generates second packet requests including sizes of variable length packets. A multiplexer outputs a compressed stream including the first and second variable length packets cut out from the first and second packets.
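The interleave / dual-coder / multiplex pipeline can be sketched as below, with zlib standing in for the two coding units and a 2-byte size header as an illustrative packet format (the embodiment's request/grant handshake is not modeled):

```python
# Sketch: split symbols into two substreams, code each into variable length
# packets carrying their own sizes, and multiplex the packets into one stream.
import struct, zlib

def compress(symbols, packet_symbols=8):
    streams = (symbols[0::2], symbols[1::2])  # interleaving unit: even/odd split
    out = bytearray()
    for i in range(0, len(streams[0]), packet_symbols):
        for s in streams:
            payload = zlib.compress(s[i:i + packet_symbols])
            out += struct.pack(">H", len(payload)) + payload  # size + codewords
    return bytes(out)

def decompress(stream):
    parts = ([], [])
    pos, which = 0, 0
    while pos < len(stream):
        (size,) = struct.unpack_from(">H", stream, pos)  # packet size field
        pos += 2
        parts[which].append(zlib.decompress(stream[pos:pos + size]))
        pos += size
        which ^= 1                                       # alternate substreams
    even, odd = b"".join(parts[0]), b"".join(parts[1])
    out = bytearray()
    for i in range(len(even)):                           # re-interleave symbols
        out.append(even[i])
        if i < len(odd):
            out.append(odd[i])
    return bytes(out)

data = bytes(range(251)) * 3
roundtrip = decompress(compress(data))
```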
Method and system for processing image data
A method for processing image data and a system thereof are provided. The method is operated in the system including an encoding system and a decoding system. In the decoding system, multiple image data packages are received from the encoding system. The image data packages include multiple encoded data formed by encoding the pixels of an image, the pixels having been rearranged beforehand according to an arrangement order. The arrangement order is made, for example, based on the quantity of encoding circuits of the encoding system. In the decoding system, the encoded data received from the encoding system are sequentially stored in a memory according to the arrangement order. The decoding circuits start to decode the encoded data from an initial code synchronously to enhance decoding performance. The method can be applied to decoding of high resolution images. The image is reproduced after the decoding process.
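The rearrangement idea can be sketched as dealing pixels out round-robin to N encoders (N being the quantity of encoding circuits), so that N decoding circuits can work on independent substreams in parallel. The round-robin order is an illustrative choice, and the entropy coding of each substream is omitted:

```python
# Sketch: rearrange pixels into one substream per encoding circuit, then
# restore the original order after the substreams are decoded.
def rearrange(pixels, n_circuits):
    """Split pixels into n substreams, one per encoding circuit."""
    return [pixels[i::n_circuits] for i in range(n_circuits)]

def restore(substreams):
    """Interleave decoded substreams back into the original pixel order."""
    n = len(substreams)
    total = sum(len(s) for s in substreams)
    out = [0] * total
    for i, s in enumerate(substreams):
        out[i::n] = s
    return out

pixels = list(range(23))
recovered = restore(rearrange(pixels, 4))
```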
TECHNIQUES TO ENABLE STATEFUL DECOMPRESSION ON HARDWARE DECOMPRESSION ACCELERATION ENGINES
A hardware decompression acceleration engine including: an input buffer for receiving to-be-decompressed data from a software layer of a host computer; a decompression processing unit coupled to the input buffer for decompressing the to-be-decompressed data, the decompression processing unit further receiving first and second flags from the software layer of the host computer, wherein the first flag is indicative of a location of the to-be-decompressed data in a to-be-decompressed data block and the second flag is indicative of a presence of an intermediate state; and an output buffer for storing decompressed data from the decompression processing unit.
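A software analogue of the stateful interface described above is zlib's decompressor object, which carries intermediate state between calls so a data block can be fed in fragments; the hardware engine and its two flags are not modeled here:

```python
# Sketch: a stateful decompressor consumes a block fragment by fragment,
# preserving intermediate state (dictionary, bit position) between calls.
import zlib

def stateful_decompress(fragments):
    engine = zlib.decompressobj()    # holds intermediate state between calls
    out = b"".join(engine.decompress(f) for f in fragments)
    return out + engine.flush()

blob = zlib.compress(b"stateful " * 1000)
fragments = [blob[i:i + 100] for i in range(0, len(blob), 100)]
restored = stateful_decompress(fragments)
```

In the abstract's terms, the first flag would tell the engine whether a fragment starts a new block (fresh state) and the second whether saved intermediate state must be reloaded before processing.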
Methods and apparatus systems for unified speech and audio decoding improvements
The present disclosure relates to an apparatus for decoding an encoded Unified Audio and Speech stream. The apparatus comprises a core decoder for decoding the encoded Unified Audio and Speech stream. The core decoder includes a fast Fourier transform, FFT, module implementation based on a Cooley-Tukey algorithm. The FFT module is configured to determine a discrete Fourier transform, DFT. Determining the DFT involves recursively breaking down the DFT into small FFTs based on the Cooley-Tukey algorithm and using radix-4 if the number of points of the FFT is a power of 4 and using mixed radix if the number is not a power of 4. Performing the small FFTs involves applying twiddle factors. Applying the twiddle factors involves referring to pre-computed values for the twiddle factors. The present disclosure further relates to an apparatus for decoding an encoded Unified Audio and Speech stream, in which the core decoder is configured to decode an LPC filter that has been quantized using a line spectral frequency, LSF, representation from the Unified Audio and Speech stream. Decoding the LPC filter from the Unified Audio and Speech stream comprises computing a first-stage approximation of an LSF vector, reconstructing a residual LSF vector, if an absolute quantization mode has been used for quantizing the LPC filter, determining inverse LSF weights for inverse weighting of the residual LSF vector by referring to pre-computed values for the inverse LSF weights or their respective corresponding LSF weights, inverse weighting the residual LSF vector by the determined inverse LSF weights, and calculating the LPC filter based on the inversely-weighted residual LSF vector and the first-stage approximation of the LSF vector. The present disclosure further relates to corresponding methods and storage media.
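The recursive break-down with pre-computed twiddle factors can be illustrated with a minimal Cooley-Tukey sketch. The radix-4 / mixed-radix case analysis of the abstract is simplified here to radix-2 (power-of-2 lengths only), so this shows the structure, not the disclosed implementation:

```python
# Sketch: radix-2 Cooley-Tukey FFT that looks up twiddle factors
# exp(-2*pi*i*k/m) from a pre-computed table instead of recomputing them.
import cmath

def make_twiddles(n):
    """Pre-compute twiddle factors for every stage size m = n, n/2, ..., 2."""
    tw, m = {}, n
    while m >= 2:
        tw[m] = [cmath.exp(-2j * cmath.pi * k / m) for k in range(m // 2)]
        m //= 2
    return tw

def fft(x, tw):
    n = len(x)
    if n == 1:
        return x
    even = fft(x[0::2], tw)           # recursively break down into small FFTs
    odd = fft(x[1::2], tw)
    out = [0] * n
    for k in range(n // 2):
        t = tw[n][k] * odd[k]         # twiddle factor from the table
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

# Cross-check against a direct DFT evaluation.
x = [complex(i) for i in range(8)]
y = fft(x, make_twiddles(8))
ref = [sum(x[j] * cmath.exp(-2j * cmath.pi * j * k / 8) for j in range(8))
       for k in range(8)]
err = max(abs(a - b) for a, b in zip(y, ref))
```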
Convolution acceleration with embedded vector decompression
Techniques and systems are provided for implementing a convolutional neural network. One or more convolution accelerators are provided that each include a feature line buffer memory, a kernel buffer memory, and a plurality of multiply-accumulate (MAC) circuits arranged to multiply and accumulate data. In a first operational mode the convolutional accelerator stores feature data in the feature line buffer memory and stores kernel data in the kernel data buffer memory. In a second mode of operation, the convolutional accelerator stores kernel decompression tables in the feature line buffer memory.
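The buffer-repurposing idea can be sketched schematically: one on-chip memory holds feature lines in the first mode and a kernel decompression (lookup) table in the second, with compressed kernels stored as indices into that table. The class, mode names, and codebook-style decompression are illustrative assumptions about the scheme, not details from the patent:

```python
# Sketch: a single shared buffer serves as a feature line buffer in mode 1
# and as a kernel decompression table in mode 2.
FEATURE_MODE, DECOMP_MODE = 1, 2

class ConvAccelerator:
    def __init__(self, buffer_size=16):
        self.line_buffer = [None] * buffer_size   # shared on-chip memory
        self.mode = FEATURE_MODE

    def load_feature_lines(self, lines):
        self.mode = FEATURE_MODE
        self.line_buffer[:len(lines)] = lines     # buffer holds feature data

    def load_decompression_table(self, table):
        self.mode = DECOMP_MODE
        self.line_buffer[:len(table)] = table     # buffer reused for the table

    def decompress_kernel(self, indices):
        assert self.mode == DECOMP_MODE
        # Embedded vector decompression: indices select table entries.
        return [self.line_buffer[i] for i in indices]

acc = ConvAccelerator()
acc.load_decompression_table([0.0, 0.5, -0.5, 1.0])  # codebook of kernel values
kernel = acc.decompress_kernel([3, 1, 1, 0, 2])
```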