Patent classifications
H03M7/6058
METHODS AND DEVICES FOR SOURCE-CODING AND DECODING OF DATA INVOLVING SYMBOL COMPRESSION
A method of encoding input data in an encoder to generate corresponding encoded data includes splitting and/or transforming the input data into data chunks, analyzing symbols present in the input data and compressing the symbols as a function of occurrence of the symbols in the data chunks; generating code tables, frequency tables, and/or length of code word tables for the symbols present in the data chunks; computing sets of indices relating the symbols in each data chunk and/or the compressed symbols to entries in the code tables, the frequency tables, and/or the length of code word tables; and assembling the sets of indices, together with the frequency tables, the code tables, and/or information indicative of such tables, for generating the encoded data. An encoder that utilizes the method, together with a corresponding decoder, wherein the encoder and the decoder in combination form a codec.
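The chunk-splitting and table-indexing steps described above can be sketched as a minimal, assumption-laden example: the chunking, the per-chunk frequency and code tables, and the index sets are illustrative stand-ins for the patented method, and the function and field names are invented for this sketch.

```python
from collections import Counter

def encode_chunks(data: bytes, chunk_size: int = 4):
    """Hypothetical sketch: split the input into data chunks, build a
    per-chunk frequency table and code table, and compute the set of
    indices relating each symbol in the chunk to a code-table entry."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    encoded = []
    for chunk in chunks:
        freq = Counter(chunk)                    # frequency table for the chunk
        # code table: symbols ordered by occurrence, most frequent first
        code_table = [sym for sym, _ in freq.most_common()]
        # set of indices relating symbols in the chunk to code-table entries
        indices = [code_table.index(sym) for sym in chunk]
        encoded.append({"freq": dict(freq), "codes": code_table,
                        "indices": indices})
    return encoded  # assembled index sets together with table information
```

A decoder only needs each chunk's code table and index set to recover the original symbols, which mirrors the codec pairing described in the abstract.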
DATA COMPRESSION AND DECOMPRESSION METHOD
A data compression and decompression algorithm performs data compression and decompression using the steps of: dividing a main data stream into sub data streams; calculating the frequency of occurrence of the sub data streams in the main data stream; repeating the frequency-of-occurrence calculation while changing the number of digits in the sub data stream and the starting digit position in the main data stream; assigning codes to the sub data streams based on their frequency-of-occurrence values; calculating a group dimension index for each group; and selecting the group with the lowest group dimension index and placing that group's codes in a multi-dimensional space, wherein vector placement is utilized to eliminate the need to use digits that are common to neighboring codes, thereby providing additional compression.
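The frequency-counting step, repeated over varying sub-stream lengths and starting positions, can be illustrated with a short sketch (the length bounds and function name are assumptions for illustration; the grouping and vector-placement steps are not shown):

```python
from collections import Counter

def substream_frequencies(stream: str, min_len: int = 2, max_len: int = 4):
    """Illustrative sketch: count how often each sub data stream occurs in
    the main data stream, repeating the count for every sub-stream length
    and every starting digit position."""
    freq = Counter()
    for length in range(min_len, max_len + 1):         # vary number of digits
        for start in range(len(stream) - length + 1):  # vary starting position
            freq[stream[start:start + length]] += 1
    return freq
```

Codes would then be assigned by rank, with the most frequent sub-streams receiving the shortest codes.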
Managing data records
Data records may be managed in a relational database by monitoring a record length for a first data record in a page of memory, an amount of free space in the page, and a page length. In response to receiving an operator command to replace the first data record with a second data record, a database management system may determine whether an estimated record length of a compressed second data record is outside of the amount of free space in the page. In response to determining the estimated record length of a compressed second data record is outside of the amount of free space in the page, the database management system may determine whether an estimated length of a compressed page is outside of the page length. In response to determining the estimated length of a compressed page is within the page length, the page may be compressed.
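The two-stage length check described above can be summarized as a small decision function; the action names and the fallback branch are illustrative assumptions, since the abstract only specifies the compress-page outcome:

```python
def plan_replace(est_record_len: int, free_space: int,
                 est_compressed_page_len: int, page_len: int) -> str:
    """Hedged sketch of the decision flow: first compare the estimated
    compressed record length against the page's free space, then the
    estimated compressed page length against the page length."""
    if est_record_len <= free_space:
        return "replace-in-place"   # record fits in the available free space
    # estimated record length is outside the free space; check the page
    if est_compressed_page_len <= page_len:
        return "compress-page"      # compressing the page makes room
    return "fallback"               # behavior not specified by the abstract
```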
DATA PROCESSING METHODS AND APPARATUS FOR USE WITH FEATURE MAPS IN SPARSE CONVOLUTIONAL NEURAL NETWORKS
A convolutional neural network (CNN) system is provided that includes a flexible accelerator configured to convert an input feature map into a set of input sub-feature maps, each having a similar amount of sparsity. The system allows each of the sub-feature maps to be processed independently while taking advantage of the sparsity. In some aspects, the CNN system is configured with an index processor that receives data value indexes and weight indexes and generates data path processor commands for processing by a separate data path processor. In other aspects, unroll circuitry is configured to unroll feature maps to provide index-value compression. The unroll/compression scheme allows an input feature map to be read sequentially (tile-by-tile) so that an accumulate buffer can be implemented with a single read-only path and single write-only path. This can simplify memory control design, eliminating requirements for expensive cache-like structures while also reducing power.
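The index-value compression mentioned above can be pictured with a toy sketch that stores only the nonzero entries of a feature-map row together with their indexes; the function names and flat-row representation are simplifying assumptions, not the hardware scheme:

```python
def index_value_compress(row):
    """Sketch of index-value compression for a sparse feature-map row:
    keep only the nonzero values paired with their positions."""
    return [(i, v) for i, v in enumerate(row) if v != 0]

def index_value_decompress(pairs, length):
    """Rebuild the dense row from (index, value) pairs."""
    row = [0] * length
    for i, v in pairs:
        row[i] = v
    return row
```

In the described system, the value indexes (together with weight indexes) would feed an index processor that issues commands to a separate data path processor.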
Data inspection for compression/decompression configuration and data type determination
Distribution of data in a neural network data set is used to determine an optimal compressor configuration for compressing the neural network data set and/or the underlying data type of the neural network data set. By using a generalizable optimization of examining the data prior to compressor invocation, the example non-limiting technology herein makes it possible to tune a compressor to better target the incoming data. For sparse data compression, this step may involve examining the distribution of data (e.g., in one example, zeros in the data). For other algorithms, it may involve other types of inspection. This changes the fundamental behavior of the compressor itself. By inspecting the distribution of data (e.g., zeros in the data), it is also possible to very accurately predict the data width of the underlying data. This is useful because this data type is not always known a priori, and lossy compression algorithms useful for deep learning depend on knowing the true data type to achieve good compression rates.
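The inspection step for sparse data can be sketched as measuring the fraction of zeros before invoking the compressor; the threshold value and configuration labels here are assumptions for illustration only:

```python
def inspect_and_configure(values, sparse_threshold=0.5):
    """Hypothetical pre-compression inspection: examine the distribution
    of zeros in the data and pick a compressor configuration accordingly.
    The 0.5 threshold is an illustrative assumption."""
    zero_fraction = sum(1 for v in values if v == 0) / len(values)
    config = "sparse" if zero_fraction >= sparse_threshold else "dense"
    return zero_fraction, config
```

A similar inspection of the value distribution could be used to estimate the underlying data width before a lossy compressor is configured.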
METHOD FOR ADJUSTING COMPRESSION RATE AND ELECTRONIC DEVICE
A method for adjusting compression rate is provided. The method includes compressing data at a compression rate and storing the data into a memory using a processing circuit. The method further includes calculating a compression rate adjustment parameter based on each time duration for reading or writing the memory in a first specific period of time and a time threshold using the processing circuit. The method further includes adjusting the compression rate based on the compression rate adjustment parameter using the processing circuit.
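One plausible reading of the adjustment loop can be sketched as follows; the counting rule, step size, and clamping are assumptions for illustration, not the patented parameter calculation:

```python
def adjust_compression_rate(rate, access_durations, time_threshold, step=0.05):
    """Hedged sketch: compare each memory read/write duration in the
    period against a time threshold, derive an adjustment parameter from
    the balance of fast vs. slow accesses, and nudge the rate.
    The step size and clamping to [0, 1] are illustrative assumptions."""
    over = sum(1 for d in access_durations if d > time_threshold)
    under = len(access_durations) - over
    # adjustment parameter: positive when most accesses beat the threshold
    adjustment = (under - over) * step / len(access_durations)
    return max(0.0, min(1.0, rate + adjustment))
```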
Application process context compression and replay
Application state data from a main memory may be compressed and the compressed data may be written to a first location in a mass storage. Updated application state data is generated, and the updated application state data is compressed from the main memory. The updated application state data is then written to a second location in the mass storage. Processing may then be paused on the application state data and updated application state data. The compressed application state data and compressed updated application state data stored in the mass storage are scanned, and information corresponding to that data is displayed using information from the scanned data.
Low-Latency Decompressor
An example method of low-latency decompression includes receiving a data read request to read data stored, in a compressed storage format, in a memory, and responsive to receiving the data read request, accessing compressed data sequences, splitting the compressed data sequences into three separate streams for parallel processing, the three separate streams including (i) a literal stream, (ii) a history cache stream, and (iii) a history buffer stream, for each data sequence in the literal stream, determining a literal decompressed block offset for the data sequence, for each data sequence in the history cache stream, determining a decompressed block offset using one or more history cache pointers associated with the data sequence, for each data sequence in the history buffer stream, determining the decompressed block offset via a history buffer, and generating a data output responsive to the data read request.
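The stream-splitting idea can be illustrated with a simplified, sequential sketch: tokens are routed into a literal stream and a single history stream, each entry recording its decompressed block offset before the output is assembled. This collapses the patent's separate history cache and history buffer streams into one, and the token format is an assumption; the real design processes three streams in parallel.

```python
def decompress(tokens):
    """Simplified sketch of stream splitting for low-latency decompression.
    tokens: ("lit", bytes) for literals, ("copy", (back, length)) for
    history references. Offsets are computed per stream, then resolved."""
    literals, history = [], []
    offset = 0
    for kind, payload in tokens:
        if kind == "lit":
            literals.append((offset, payload))   # literal stream entry
            offset += len(payload)
        else:
            history.append((offset, payload))    # history stream entry
            offset += payload[1]
    out = bytearray(offset)
    for pos, data in literals:                   # literals resolve independently
        out[pos:pos + len(data)] = data
    for pos, (back, length) in sorted(history):  # history resolves via the buffer
        for i in range(length):
            out[pos + i] = out[pos + i - back]
    return bytes(out)
```

Because every token's decompressed block offset is known before any bytes are copied, the literal and history streams could in principle be resolved concurrently, which is the point of the split.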