Patent classifications
H03M7/702
Application process context compression and replay
Application state data from a main memory may be compressed and the compressed data written to a first location in a mass storage. Updated application state data is generated and compressed from the main memory, and the compressed updated data is written to a second location in the mass storage. Processing on the application state data and the updated application state data may then be paused. The compressed application state data and compressed updated application state data stored in the mass storage are scanned, and information corresponding to them is displayed using information from the scanned data.
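As a rough illustration of this flow, here is a minimal Python sketch; all names, the zlib codec, and the JSON state format are assumptions for illustration, not the patented implementation. Snapshots of in-memory state are compressed and appended to successive locations in a file standing in for mass storage, and the stored snapshots are later scanned to produce display information.

```python
import json
import zlib

class AppStateStore:
    """Hypothetical store: compressed snapshots appended to one file."""

    def __init__(self, path):
        self.path = path
        self.locations = []  # (offset, length) of each compressed snapshot

    def write_snapshot(self, state):
        """Compress application state from memory; append to mass storage."""
        blob = zlib.compress(json.dumps(state).encode("utf-8"))
        with open(self.path, "ab") as f:
            offset = f.tell()
            f.write(blob)
        self.locations.append((offset, len(blob)))

    def scan(self):
        """Scan the stored compressed snapshots and yield display info."""
        with open(self.path, "rb") as f:
            for i, (offset, length) in enumerate(self.locations):
                f.seek(offset)
                state = json.loads(zlib.decompress(f.read(length)))
                yield {"snapshot": i, "offset": offset, "fields": sorted(state)}

store = AppStateStore("app_state.bin")
store.write_snapshot({"frame": 0, "score": 0})    # first location
store.write_snapshot({"frame": 1, "score": 10})   # updated state, second location
for info in store.scan():                          # pause processing, then scan
    print(info)
```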
Neural network compression
A neural network model is trained, where the training includes multiple training iterations. Weights of a particular layer of the neural network are pruned during a forward pass of a particular one of the training iterations. During the same forward pass of the particular training iteration, values of weights of the particular layer are quantized to determine a quantized-sparsified subset of weights for the particular layer. A compressed version of the neural network model is generated from the training based at least in part on the quantized-sparsified subset of weights.
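A minimal sketch of the idea, assuming magnitude-based pruning and uniform quantization (the abstract specifies neither): during one forward pass, the layer's weights are pruned and the survivors quantized, and the resulting quantized-sparsified weights compute the layer's output while the dense weights remain the trained state. The toy gradient below is a stand-in, not real backpropagation.

```python
import numpy as np

def prune_and_quantize(weights, sparsity=0.5, num_bits=8):
    """Prune small-magnitude weights, then uniformly quantize the survivors."""
    threshold = np.quantile(np.abs(weights), sparsity)
    mask = np.abs(weights) >= threshold
    sparse = weights * mask
    max_abs = np.abs(sparse).max()
    scale = max_abs / (2 ** (num_bits - 1) - 1) if max_abs > 0 else 1.0
    return np.round(sparse / scale) * scale * mask, mask

# One toy training iteration: the forward pass uses the compressed weights;
# the gradient step updates the dense weights (straight-through style).
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4))        # dense weights of one layer
x = rng.normal(size=4)
w_q, mask = prune_and_quantize(w)
y = w_q @ x                        # forward pass with quantized-sparsified weights
grad = np.outer(y, x)              # stand-in for a real backpropagated gradient
w -= 0.01 * grad
```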
Encoding and decoding variable length instructions
Methods of encoding and decoding are described that use a variable number of instruction words to encode instructions from an instruction set, such that different instructions within the instruction set may be encoded using different numbers of instruction words. To encode an instruction, the bits within the instruction are reordered and formed into instruction words based on their variance, as determined using empirical or simulation data. The bits in the instruction words are compared to corresponding predicted values, and some or all of the instruction words that match the predicted values are omitted from the encoded instruction.
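A hedged Python sketch of one possible realization, assuming 8-bit instruction words, an instruction length that is a multiple of the word size, a precomputed variance-based bit order, and one predicted value per word; all names are hypothetical.

```python
WORD_BITS = 8

def encode(instr_bits, order, predicted):
    """Reorder bits (high-variance first), pack into words, and omit
    trailing words that match their predicted values."""
    assert len(instr_bits) % WORD_BITS == 0  # simplifying assumption
    reordered = [instr_bits[i] for i in order]
    words = [reordered[i:i + WORD_BITS] for i in range(0, len(reordered), WORD_BITS)]
    packed = [int("".join(map(str, w)), 2) for w in words]
    while packed and packed[-1] == predicted[len(packed) - 1]:
        packed.pop()  # the decoder restores this word from the prediction
    return packed

def decode(packed, order, predicted, num_bits):
    """Restore omitted words from predictions, unpack, and undo the reorder."""
    words = list(packed) + list(predicted[len(packed):num_bits // WORD_BITS])
    bits = [int(b) for w in words for b in format(w, f"0{WORD_BITS}b")]
    original = [0] * num_bits
    for pos, src in enumerate(order):
        original[src] = bits[pos]
    return original

order = [3, 0, 1, 2, 7, 6, 5, 4]   # toy variance-derived bit order
predicted = [0b00000000]           # one predicted value per instruction word
instr = [1, 0, 0, 0, 0, 0, 0, 1]
assert decode(encode(instr, order, predicted), order, predicted, 8) == instr
assert encode([0] * 8, order, predicted) == []   # fully predicted: zero words
```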
Distributable hash filter for nonprobabilistic set inclusion
In certain embodiments, a method recursively performs a procedure that uses an allowed set of object identifiers and a hash function to update a bit array, and then uses a disallowed set of object identifiers and the same hash function to further update the bit array where collisions occur. The procedure repeats with a new hash function and a new allowed set, containing the object identifiers from the original allowed set that collided with the disallowed set, until reaching a round in which no collisions occur. A data structure that includes the bit arrays created during each recursive round is then generated and compressed.
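One plausible reading of this recursion (an interpretation, not the patented design) treats it as a layered exact-membership filter: each round sets a bit per allowed identifier, clears any bit a disallowed identifier also hashes to (a collision), and retries the displaced allowed identifiers in the next round under a new salted hash. The salted SHA-256 hash and all names below are assumptions.

```python
import hashlib

def _h(identifier, salt, m):
    """Round-specific hash of an object identifier to a bit position."""
    digest = hashlib.sha256(f"{salt}:{identifier}".encode()).digest()
    return int.from_bytes(digest[:8], "big") % m

def build_filter(allowed, disallowed, m=64):
    """Recursively build per-round bit arrays until no collisions occur."""
    rounds = []
    salt = 0
    while allowed:
        bits = [0] * m
        for a in allowed:
            bits[_h(a, salt, m)] = 1
        # A collision: a disallowed identifier hashing to a set bit.
        collided = {_h(d, salt, m) for d in disallowed if bits[_h(d, salt, m)]}
        for b in collided:
            bits[b] = 0  # clearing the bit excludes the disallowed identifier
        rounds.append((salt, bits))
        # Allowed identifiers whose bits were cleared retry with a new hash.
        allowed = [a for a in allowed if _h(a, salt, m) in collided]
        salt += 1
    return rounds  # the bit arrays from every round; compressible, e.g. with zlib

def query(rounds, identifier):
    """Exact inclusion test for ids from the allowed/disallowed universe."""
    return any(bits[_h(identifier, salt, len(bits))] for salt, bits in rounds)

rounds = build_filter(["alice", "bob", "carol"], ["mallory", "eve"], m=16)
assert all(query(rounds, a) for a in ["alice", "bob", "carol"])
assert not any(query(rounds, d) for d in ["mallory", "eve"])
```

Within the known allowed/disallowed universe the answer is exact, with no false positives or negatives; identifiers outside that universe may still receive hash-dependent answers.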
Continual learning system and continual learning method
A continual learning system learns a prediction model that performs prediction on input data. The system acquires additional data, relearns the prediction model, calculates information on past data to be used in the next learning stage, and stores the learned prediction model together with the calculated information on the past data. The calculated information includes statistics of the past data, and those statistics provide a learning result equivalent to the result that would be obtained if the acquisition unit again used the past data it previously acquired as additional data.
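A concrete, hedged reading of statistics that give an equivalent learning result: for least-squares models, storing the Gram matrix X'X and moment vector X'y of past data is sufficient, so each stage refits exactly as if all past data had been re-acquired. The class name and the small ridge term below are illustrative assumptions.

```python
import numpy as np

class ContinualLeastSquares:
    """Sketch: sufficient statistics (X'X, X'y) of past data replace the
    past data itself in every subsequent learning stage."""

    def __init__(self, dim, ridge=1e-6):
        self.xtx = np.zeros((dim, dim))
        self.xty = np.zeros(dim)
        self.ridge = ridge

    def learn(self, X, y):
        """Fold the additional data into the statistics, then refit."""
        self.xtx += X.T @ X
        self.xty += X.T @ y
        dim = len(self.xty)
        return np.linalg.solve(self.xtx + self.ridge * np.eye(dim), self.xty)

rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0, 0.5])
X1, X2 = rng.normal(size=(50, 3)), rng.normal(size=(30, 3))
y1, y2 = X1 @ true_w, X2 @ true_w

model = ContinualLeastSquares(3)
model.learn(X1, y1)                 # first stage: past data
w_stream = model.learn(X2, y2)      # next stage: only statistics of X1 remain

# Equivalent result from retraining on all data at once:
X, y = np.vstack([X1, X2]), np.concatenate([y1, y2])
w_batch = np.linalg.solve(X.T @ X + 1e-6 * np.eye(3), X.T @ y)
assert np.allclose(w_stream, w_batch)
```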