Patent classifications
H03M7/702
Method and apparatus for compressing an application
The size of a source application is reduced by compressing a plurality of invoked files, such as SO files, in the source application with a compression algorithm that has a higher compression rate compared to a default compression rate. A decompression file that corresponds with the compression algorithm is inserted into a plurality of invoking files in the source application so that the source application itself can decompress the invoked files that were compressed with the compression algorithm.
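As an illustrative sketch (not the patented method), the idea can be shown by recompressing a library blob with LZMA, a commonly higher-ratio algorithm than default zlib/DEFLATE, and bundling the matching decompression routine the way the abstract's "decompression file" would be inserted into the invoking files. The payload bytes and function names are stand-ins.

```python
# Sketch (assumption): recompress an invoked-file blob with LZMA and bundle
# the corresponding decompressor so the application can restore it itself.
import lzma
import zlib

payload = b"example .so contents " * 2048   # stand-in for an invoked SO file

default_form = zlib.compress(payload)       # default compression algorithm
high_ratio_form = lzma.compress(payload)    # higher-compression-rate algorithm

# The "decompression file" inserted into the invoking code corresponds to
# the chosen algorithm, so the app can decompress the invoked file at load time.
def bundled_decompressor(blob: bytes) -> bytes:
    return lzma.decompress(blob)

assert bundled_decompressor(high_ratio_form) == payload
```

The essential point is the pairing: whichever algorithm compresses the invoked files, its matching decompressor ships inside the application so no external tool is needed.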
DISTRIBUTED DATA STORAGE
According to an example aspect of the present invention, there is provided a method, comprising: receiving an input ordered set of transactions after a genesis block or a preceding compressed block in a chain of blocks, generating a compressed block on the basis of the input ordered set of transactions, wherein processing of the compressed block results in an equivalent final state as processing of the input ordered set of transactions, and providing the compressed block to a distributed network for establishing a new chain epoch and replacing a set of uncompressed blocks associated with the input ordered set of transactions.
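A minimal sketch of the equivalence property, under the simplifying assumption that transactions are key-value assignments: the "compressed block" is modeled as the net state diff of the ordered set, so replaying it from the epoch's starting state reaches the same final state as replaying every transaction. All names here are illustrative.

```python
# Sketch (assumption): transactions are (key, value) assignments; a compressed
# block keeps only each key's net final value for the epoch.
def apply_transactions(state, txs):
    state = dict(state)
    for key, value in txs:
        state[key] = value
    return state

def compress_block(start_state, txs):
    # Net effect of the ordered set: only keys whose value actually changed.
    final = apply_transactions(start_state, txs)
    return {k: v for k, v in final.items() if start_state.get(k) != v}

genesis = {"alice": 10, "bob": 5}
txs = [("alice", 7), ("bob", 9), ("alice", 3)]

compressed = compress_block(genesis, txs)

# Processing the compressed block results in an equivalent final state.
assert apply_transactions(genesis, compressed.items()) == apply_transactions(genesis, txs)
```

The compressed block can then replace the uncompressed blocks for the epoch, since any node replaying it ends in the same state.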
Decompression engine for executable microcontroller code
A code decompression engine reads compressed code from a memory containing a compressed code part and a dictionary part. The compressed code part contains a series of instructions, each being either an uncompressed instruction preceded by an uncompressed code bit, or a compressed instruction having a compressed code bit followed by a number-of-segments field, followed by segments, followed by a dictionary index indicating a dictionary location to read. Each segment consists of a mask type, a mask offset, and a mask.
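A hedged sketch of how such a segment format might reconstruct an instruction: a dictionary entry is fetched by index and each (mask type, mask offset, mask) segment patches it. The concrete mask semantics below (XOR vs. nibble overwrite) and the 32-bit dictionary entries are assumptions for illustration, not the patented encoding.

```python
# Sketch (assumption): rebuild a 32-bit instruction from a dictionary entry by
# applying mask segments; mask_type 0 XORs the shifted mask, mask_type 1
# overwrites a 4-bit field at the given bit offset.
dictionary = [0xE5900000, 0xE2800001]   # frequently occurring instruction words

def decompress_instruction(dict_index, segments):
    word = dictionary[dict_index]
    for mask_type, offset, mask in segments:
        if mask_type == 0:               # XOR-style patch
            word ^= mask << offset
        else:                            # overwrite the masked 4-bit field
            word = (word & ~(0xF << offset)) | ((mask & 0xF) << offset)
        word &= 0xFFFFFFFF
    return word

# e.g. patch the low nibble of dictionary entry 1 from 1 to 7
assert decompress_instruction(1, [(1, 0, 0x7)]) == 0xE2800007
```

Instructions close to a dictionary entry thus compress to an index plus a few short segments, while rarely seen instructions are stored uncompressed behind the uncompressed code bit.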
Multi-pixel caching scheme for lossless encoding
Systems and methods are provided for a multi-pixel caching scheme for lossless encoders. The systems and methods can include obtaining a sequence of pixels; determining repeating sub-sequences of the sequence of pixels, consisting of a single repeated pixel, and non-repeating sub-sequences of the sequence of pixels; and, responsive to the determination, encoding the repeating sub-sequences using a run-length of the repeated pixel and encoding the non-repeating sub-sequences using a multi-pixel cache. The encoding using the multi-pixel cache comprises encoding non-repeating sub-sequences stored in the multi-pixel cache as the location of the non-repeating sub-sequences in the multi-pixel cache, and encoding non-repeating sub-sequences not stored in the multi-pixel cache using the values of the pixels in the non-repeating sub-sequences.
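A minimal sketch of the scheme, with assumed token names and boundary handling: runs of one repeated pixel become run-length tokens, and non-repeating stretches go through a small cache, emitting a cache location on a hit and the literal pixel values on a miss.

```python
# Sketch (assumption): encode runs as ("run", pixel, length); non-repeating
# sub-sequences hit the multi-pixel cache ("cached", index) or miss and emit
# their literal pixel values ("literal", pixels), populating the cache.
def encode(pixels, cache):
    out, i = [], 0
    while i < len(pixels):
        j = i
        while j + 1 < len(pixels) and pixels[j + 1] == pixels[i]:
            j += 1
        if j > i:                                    # repeating sub-sequence
            out.append(("run", pixels[i], j - i + 1))
            i = j + 1
        else:                                        # non-repeating sub-sequence
            k = i
            while k + 1 < len(pixels) and pixels[k + 1] != pixels[k]:
                k += 1
            seq = tuple(pixels[i:k + 1])
            if seq in cache:                         # cache hit: emit location
                out.append(("cached", cache.index(seq)))
            else:                                    # cache miss: emit literals
                cache.append(seq)
                out.append(("literal", seq))
            i = k + 1
    return out

cache = []
enc1 = encode([7, 7, 7, 1, 2, 3], cache)   # run + literal (cache miss)
enc2 = encode([1, 2, 3], cache)            # same sub-sequence: cache hit
```

Repeated non-run patterns (dithering, UI tiles) thus shrink to a single cache index after their first occurrence, while pure runs stay as cheap run-length tokens.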
Compression of deep neural networks
In an approach for compressing a neural network, a processor receives a neural network, wherein the neural network has been trained on a set of training data. A processor receives a compression ratio. A processor compresses the neural network based on the compression ratio using an optimization model to solve for sparse weights. A processor re-trains the compressed neural network with the sparse weights. A processor outputs the re-trained neural network.
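As a hedged stand-in for the optimization step, the sparsity implied by a compression ratio can be illustrated with simple magnitude pruning (ratio 4 keeps roughly 1/4 of the weights). The patent solves for sparse weights with an optimization model; thresholding here is only a sketch.

```python
# Sketch (assumption): reach the sparsity implied by the compression ratio by
# zeroing the smallest-magnitude weights; re-training would follow.
def compress_weights(weights, compression_ratio):
    keep = max(1, len(weights) // compression_ratio)
    threshold = sorted((abs(w) for w in weights), reverse=True)[keep - 1]
    return [w if abs(w) >= threshold else 0.0 for w in weights]

sparse = compress_weights([0.9, -0.05, 0.4, 0.01, -0.7, 0.02, 0.3, -0.1], 4)
assert sparse == [0.9, 0.0, 0.0, 0.0, -0.7, 0.0, 0.0, 0.0]
```

The zeroed positions need not be stored, which is where the memory reduction comes from; the subsequent re-training pass recovers accuracy lost to pruning.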
ACCELERATED STARTUP THROUGH PARALLEL DECOMPRESSION OF RAM DISKS
A system for accelerated startup including a primary processing core of a computer, a plurality of secondary processing cores of the computer connected to the primary processing core, and a non-volatile memory connected to the primary processing core and to the plurality of secondary processing cores. The non-volatile memory may include an initial program load and an initial RAM disk containing a compressed operating system kernel image, where the primary processing core decompresses the operating system kernel image upon execution of the initial program load. The system may include a plurality of compressed RAM disks, where the plurality of compressed RAM disks may be decompressed in parallel by the secondary processing cores. The system may include applications stored on the plurality of RAM disks that may be executed in parallel by the secondary processing cores after the decompression in parallel has been completed.
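The startup sequence can be sketched as follows, with threads standing in for processing cores and zlib standing in for the image compressor; both substitutions are assumptions for illustration.

```python
# Sketch (assumption): the primary core decompresses the kernel image during
# the initial program load, then the secondary cores decompress the remaining
# RAM disks in parallel before launching the applications stored on them.
import zlib
from concurrent.futures import ThreadPoolExecutor

kernel_image = zlib.compress(b"kernel" * 1000)
ram_disks = [zlib.compress(bytes([i]) * 4096) for i in range(4)]

kernel = zlib.decompress(kernel_image)            # primary core, during IPL

with ThreadPoolExecutor(max_workers=4) as secondary_cores:
    disks = list(secondary_cores.map(zlib.decompress, ram_disks))

assert kernel == b"kernel" * 1000
assert disks[2] == bytes([2]) * 4096
```

Decompressing the RAM disks concurrently rather than serially is what shortens the startup path once the kernel is up.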
Bounds checking
A data processing apparatus is provided for performing a determination of whether a value falls within a boundary defined by a lower limit between 0 and 2^m and an upper limit between 0 and 2^m. The apparatus includes storage circuitry that stores each of the lower limit and the upper limit in a compressed form as a mantissa of q<m bits and a shared exponent e. The most significant m-q-e bits of said lower limit and said upper limit are equal to the most significant m-q-e bits of said value. Adjustment circuitry performs adjustments to the lower limit and the upper limit in the compressed form, and boundary comparison circuitry performs the determination on the value using the lower limit and the upper limit in the compressed form.
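A rough sketch of the compressed representation, under assumed bit widths: each limit is stored as a q-bit mantissa plus the shared exponent e, and its top m-q-e bits are recovered from the value itself, so a limit decompresses to the value's top bits combined with the shifted mantissa.

```python
# Sketch (assumption): m = 16-bit values, q = 6-bit mantissas; a limit
# decompresses to (value's top m-q-e bits) | (mantissa << e).
Q = 6   # mantissa width in bits (m = 16 is implied by the example values)

def decompress_limit(value, mantissa, e):
    top = value & ~((1 << (Q + e)) - 1)   # most significant bits, shared with the value
    return top | (mantissa << e)

def in_bounds(value, lo_mant, hi_mant, e):
    lo = decompress_limit(value, lo_mant, e)
    hi = decompress_limit(value, hi_mant, e)
    return lo <= value <= hi

# With e = 4: limits decompress to 0x1200 and 0x13C0 around value 0x1234.
assert in_bounds(0x1234, 0x20, 0x3C, 4)
assert not in_bounds(0x11F0, 0x20, 0x3C, 4)
```

Storing only q+e-bit compressed limits instead of two full m-bit bounds is what the adjustment and comparison circuitry operate on directly.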
NEURAL NETWORK ACTIVATION COMPRESSION WITH NON-UNIFORM MANTISSAS
Apparatus and methods for training a neural network accelerator using quantized precision data formats are disclosed, and in particular for storing activation values from a neural network in a compressed format having lossy or non-uniform mantissas for use during forward and backward propagation training of the neural network. In certain examples of the disclosed technology, a computing system includes processors, memory, and a compressor in communication with the memory. The computing system is configured to perform forward propagation for a layer of a neural network to produce first activation values in a first block floating-point format. In some examples, activation values generated by forward propagation are converted by the compressor to a second block floating-point format having a non-uniform and/or lossy mantissa. The compressed activation values are stored in the memory, where they can be retrieved for use during back propagation.
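A simplified sketch of block floating-point conversion with a lossy mantissa: one exponent is shared per block and each value keeps only a few mantissa bits. The uniform rounding below is an assumption; the disclosed format uses non-uniform mantissas.

```python
# Sketch (assumption): compress a block of activations to one shared exponent
# plus truncated low-bit mantissas (lossy, uniform rounding for illustration).
import math

def to_block_fp(values, mantissa_bits):
    shared_exp = max(math.frexp(v)[1] for v in values)   # one exponent per block
    scale = 2 ** (mantissa_bits - shared_exp)
    mantissas = [round(v * scale) for v in values]       # lossy quantization
    return shared_exp, mantissas

def from_block_fp(shared_exp, mantissas, mantissa_bits):
    scale = 2 ** (mantissa_bits - shared_exp)
    return [m / scale for m in mantissas]

exp, mants = to_block_fp([0.52, 0.26, 0.13, 0.9], 4)
restored = from_block_fp(exp, mants, 4)
```

Storing 4-bit mantissas plus one shared exponent in place of full-precision activations is what frees memory between the forward and backward passes; small values in the block lose the most precision, which motivates the non-uniform mantissas of the disclosure.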
Deep learning numeric data and sparse matrix compression
An apparatus to facilitate deep learning numeric data and sparse matrix compression is disclosed. The apparatus includes a processor comprising a compression engine to: receive a data packet comprising a plurality of cycles of data samples, and for each cycle of the data samples: pass the data samples of the cycle to a compressor dictionary; identify, from the compressor dictionary, tags for each of the data samples, wherein the compressor dictionary comprises at least a first tag for data having a value of zero and a second tag for data having a value of one; and compress the data samples into compressed cycle data by storing the tags as compressed data, wherein the data samples identified with the first tag are compressed using the first tag and the data samples identified with the second tag are compressed using the second tag, while the values of the data samples identified with the first tag or the second tag are excluded from the compressed cycle data.
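The tag scheme can be sketched as follows, with an assumed third "raw" tag for all other values: zero and one samples compress to their tag alone, with their values excluded from the compressed cycle data.

```python
# Sketch (assumption): dedicated tags for zero and one drop those values from
# the compressed cycle; other samples carry a "raw" tag plus their value.
TAG_ZERO, TAG_ONE, TAG_RAW = 0, 1, 2

def compress_cycle(samples):
    tags, residuals = [], []
    for s in samples:
        if s == 0:
            tags.append(TAG_ZERO)      # value excluded from compressed data
        elif s == 1:
            tags.append(TAG_ONE)       # value excluded from compressed data
        else:
            tags.append(TAG_RAW)
            residuals.append(s)
    return tags, residuals

def decompress_cycle(tags, residuals):
    it = iter(residuals)
    return [0 if t == TAG_ZERO else 1 if t == TAG_ONE else next(it) for t in tags]

tags, res = compress_cycle([0, 0, 5, 1, 0, 1, 9])
assert decompress_cycle(tags, res) == [0, 0, 5, 1, 0, 1, 9]
```

Since sparse matrices and quantized deep-learning tensors are dominated by zeros (and often ones), eliding those values and keeping only short tags yields most of the compression.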
APPLICATION ACTIVATION METHOD AND APPARATUS
An application activation method is provided. The method includes obtaining a first compressed file, where the first compressed file contains activation information of an application and compressed content of a code package of the application. The method also includes extracting the compressed content from the first compressed file; generating a second compressed file by using the compressed content without decompressing the compressed content; and loading the second compressed file, and activating the application according to the activation information in the first compressed file.
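A minimal sketch of the key step, with an assumed container layout (length-prefixed activation info followed by the compressed code package): the second compressed file is generated by copying the already-compressed bytes as-is, never running a decompressor on them.

```python
# Sketch (assumption): extract the compressed content from the first file and
# repackage it into a second compressed file without decompressing it.
import struct
import zlib

activation_info = b'{"license": "demo"}'
code_package = zlib.compress(b"app code " * 1000)

# first compressed file: length-prefixed activation info + compressed content
first = struct.pack(">I", len(activation_info)) + activation_info + code_package

# extract the compressed content and generate the second compressed file
n = struct.unpack(">I", first[:4])[0]
info, content = first[4:4 + n], first[4 + n:]
second = b"PKG2" + content      # repackaged; content was never decompressed

# only when the application is finally loaded is the content decompressed
assert zlib.decompress(second[4:]) == b"app code " * 1000
```

Skipping the decompress-recompress round trip is the point: activation only needs the metadata from the first file, so the bulk of the code package moves between containers as opaque compressed bytes.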