Patent classifications
H03M7/6035
Techniques for general-purpose lossless data compression using a recurrent neural network
Techniques for general-purpose lossless data compression using a neural network including compressing an original content item to a baseline lossless compressed data format. The baseline lossless compressed data format is binarized to a binarized format. The binarized format is arithmetically coded based on probability estimates from a neural network probability estimator. The neural network probability estimator generates the probability estimates for current symbols of the binarized format to be arithmetically coded based on symbols of the binarized format that have already been arithmetically coded.
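The claimed three-stage pipeline (baseline compress, binarize, arithmetic-code from adaptive probability estimates) can be sketched as below. This is an illustration only: zlib stands in for the unspecified baseline format, and a simple count-based model stands in for the neural network probability estimator; like the claimed estimator, it conditions its estimate for the current symbol only on symbols already coded, so encoder and decoder stay in sync.

```python
import zlib

TOP, HALF, QTR = 1 << 32, 1 << 31, 1 << 30

class CountModel:
    """Adaptive bit-frequency model standing in for the neural estimator."""
    def __init__(self):
        self.c = [1, 1]
    def p1(self):
        p = self.c[1] / (self.c[0] + self.c[1])
        return min(max(p, 1 / 4096), 1 - 1 / 4096)   # keep away from 0 and 1
    def update(self, bit):
        self.c[bit] += 1

class Encoder:
    def __init__(self):
        self.low, self.high, self.pending, self.out = 0, TOP, 0, []
    def _emit(self, bit):
        self.out.append(bit)
        self.out.extend([1 - bit] * self.pending)    # flush carry-pending bits
        self.pending = 0
    def encode(self, bit, p1):
        split = self.low + int((self.high - self.low) * p1)
        if bit:
            self.high = split                        # '1' takes the lower sub-interval
        else:
            self.low = split
        while True:                                  # renormalize
            if self.high <= HALF:
                self._emit(0)
            elif self.low >= HALF:
                self._emit(1); self.low -= HALF; self.high -= HALF
            elif self.low >= QTR and self.high <= 3 * QTR:
                self.pending += 1; self.low -= QTR; self.high -= QTR
            else:
                break
            self.low *= 2; self.high *= 2
    def finish(self):
        self.pending += 1
        self._emit(0 if self.low < QTR else 1)
        return self.out

class Decoder:
    def __init__(self, bits):
        self.bits, self.pos = bits, 0
        self.low, self.high, self.code = 0, TOP, 0
        for _ in range(32):
            self.code = (self.code << 1) | self._read()
    def _read(self):
        b = self.bits[self.pos] if self.pos < len(self.bits) else 0
        self.pos += 1
        return b
    def decode(self, p1):
        split = self.low + int((self.high - self.low) * p1)
        bit = 1 if self.code < split else 0
        if bit:
            self.high = split
        else:
            self.low = split
        while True:                                  # mirror the encoder's renorm
            if self.high <= HALF:
                pass
            elif self.low >= HALF:
                self.low -= HALF; self.high -= HALF; self.code -= HALF
            elif self.low >= QTR and self.high <= 3 * QTR:
                self.low -= QTR; self.high -= QTR; self.code -= QTR
            else:
                break
            self.low *= 2; self.high *= 2
            self.code = (self.code << 1) | self._read()
        return bit

def compress(data):
    baseline = zlib.compress(data)                   # 1. baseline lossless format
    bits = [(byte >> i) & 1 for byte in baseline for i in range(8)]  # 2. binarize
    model, enc = CountModel(), Encoder()
    for b in bits:                                   # 3. code from running estimates
        enc.encode(b, model.p1()); model.update(b)
    return enc.finish(), len(bits)

def decompress(code_bits, nbits):
    model, dec = CountModel(), Decoder(code_bits)
    bits = []
    for _ in range(nbits):
        b = dec.decode(model.p1()); model.update(b); bits.append(b)
    raw = bytes(sum(bits[i + j] << j for j in range(8)) for i in range(0, nbits, 8))
    return zlib.decompress(raw)
```

A real implementation would replace `CountModel` with the recurrent network's per-symbol output; the arithmetic-coding core is unchanged by that substitution.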
SYSTEM AND METHOD FOR DYADIC DISTRIBUTION-BASED COMPRESSION AND ENCRYPTION
A system and method for simultaneous compression and encryption of data. The system analyzes input data to determine its properties and creates a transformation matrix based on these properties. Using this matrix, the input data is transformed into a modified distribution, generating a main data stream of transformed data and a secondary stream of transformation information. The main data stream is compressed, and both streams are combined into a single output. The system implements security measures to protect against various attacks, including side-channel vulnerabilities. By using a dyadic distribution algorithm, the system achieves both compression and encryption in a single pass over the data, offering significant efficiency gains. The system can operate in both lossless and lossy modes, providing flexibility for different application requirements. This approach offers a unique solution for data transmission and storage scenarios where both data reduction and security are critical concerns.
Hamming distance based binary representations of numbers
Technology is described herein for encoding and decoding numbers. In one aspect, floating point numbers are represented as binary strings. The binary strings may be encoded in a manner such that if one bit flips, the average and maximum distortion in the number that is represented by the binary string is relatively small. In one aspect, 2^n binary strings are ordered across an interval [a, b) in accordance with their Hamming weights. Numbers in the interval may be uniformly quantized into one of 2^n sub-intervals. For example, floating point numbers in the interval [a, b) may be uniformly quantized into 2^n sub-intervals. These 2^n sub-intervals may be mapped to the 2^n binary strings. Thus, the number may be assigned to one of the 2^n binary strings. Doing so may reduce the distortion in the number in the event that there is a bit flip in the assigned binary string.
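The weight-ordered mapping described above is small enough to sketch directly. The midpoint reconstruction and the tie-break by numeric value within each weight class are choices of this sketch, not taken from the patent; the key property is that a single bit flip changes a codeword's Hamming weight by exactly one, so the corrupted codeword lands in an adjacent weight class and the decoded index tends to stay nearby.

```python
def hamming_ordered_codes(n):
    # all 2^n n-bit strings, ordered by Hamming weight (ties broken by value)
    return sorted(range(1 << n), key=lambda v: (bin(v).count("1"), v))

def encode(x, a, b, n):
    # uniformly quantize [a, b) into 2^n sub-intervals, then map the
    # sub-interval index to the weight-ordered codeword
    idx = min(int((x - a) * (1 << n) / (b - a)), (1 << n) - 1)
    return hamming_ordered_codes(n)[idx]

def decode(code, a, b, n):
    # reconstruct as the midpoint of the sub-interval the codeword names
    idx = hamming_ordered_codes(n).index(code)
    return a + (idx + 0.5) * (b - a) / (1 << n)
```

Compare with natural binary order, where flipping the most significant bit of an n-bit string moves the index by 2^(n-1), the full half of the interval.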
Information processing device, data compression method and data compression program
An information processing device includes: a memory; and a processor coupled to the memory and the processor configured to: generate compressed data, in sets of a prescribed size, in respect of one set of object data, in accordance with each of a plurality of compression methods; and select compressed data of the compression method which has completed compression of the object data first, among the plurality of compression methods.
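The selection rule (compress one object with several methods, keep whichever finishes first) can be sketched with standard library compressors; zlib, bz2, and lzma are illustrative stand-ins for the claimed plurality of compression methods, and a real device would also cancel or discard the slower jobs and work in sets of the prescribed size.

```python
import bz2
import lzma
import zlib
import concurrent.futures as cf

METHODS = {"zlib": zlib.compress, "bz2": bz2.compress, "lzma": lzma.compress}

def first_done_compress(data):
    """Run each compression method concurrently on the same object data
    and return (method name, compressed bytes) for the first to finish."""
    with cf.ThreadPoolExecutor(max_workers=len(METHODS)) as ex:
        futures = {ex.submit(fn, data): name for name, fn in METHODS.items()}
        done, _ = cf.wait(futures, return_when=cf.FIRST_COMPLETED)
        winner = next(iter(done))
        return futures[winner], winner.result()
```

Note that `ThreadPoolExecutor` cannot forcibly stop the losing compressors; the context manager simply waits them out, which a production system would avoid.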
NETWORK UTILIZATION IMPROVEMENT BY DATA REDUCTION BASED MIGRATION PRIORITIZATION
Methods and systems for data transfer include adding data chunks to a priority queue in an order based on utilization priority. A reducibility score is determined for each data chunk. A data reduction operation is performed on the data chunk having the highest reducibility score in the priority queue using a processor if sufficient resources are available. The data chunk having the lowest reducibility score is moved from the priority queue to a transfer queue for transmission if the transfer queue is not full.
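One scheduling step of the described policy can be sketched as follows. The sample-based reducibility estimate and the choice to place reduced chunks directly onto the transfer queue are simplifying assumptions of this sketch, not details from the claims.

```python
import zlib

def reducibility(chunk, sample=1024):
    # estimate reducibility from a small sample: 1 minus the compressed ratio
    s = chunk[:sample] or b"\x00"
    return 1.0 - len(zlib.compress(s)) / len(s)

def migrate_step(pending, transfer, transfer_cap, resources_free):
    """One step over the pending priority queue: reduce the most reducible
    chunk when resources allow; otherwise drain the least reducible chunk
    straight to the transfer queue if it is not full."""
    if not pending:
        return
    if resources_free:
        c = max(pending, key=reducibility)
        pending.remove(c)
        transfer.append(zlib.compress(c))   # reduced before transmission
    elif len(transfer) < transfer_cap:
        c = min(pending, key=reducibility)
        pending.remove(c)
        transfer.append(c)                  # transmitted as-is
```

The effect is that scarce compute is spent where it saves the most bytes, while barely compressible chunks do not block the network.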
DEEP LEARNING-BASED DATA COMPRESSION WITH PROTOCOL ADAPTATION
A system and method for data compression with protocol adaptation, that utilizes a codebook generator which leverages one or more machine/deep learning algorithms trained on at least a plurality of protocol policies in order to generate a protocol appendix and codebook, wherein original data is encoded by an encoder according to the codebook and sent to a decoder, but instead of just decoding the data according to the codebook to reconstruct the original data, data manipulation rules such as mapping and transformation are applied at the decoding stage to transform the decoded data into protocol formatted data.
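The decode-side transformation can be sketched minimally. The codebook contents, integer codewords, and the framing rule below are illustrative assumptions (the patent's codebook is generated by trained machine/deep learning models); the point shown is that the decoder applies mapping rules to emit protocol-formatted data rather than the raw original.

```python
# toy codebook: sourceblocks -> short integer codewords (illustrative only)
CODEBOOK = {b"GET": 0, b"PUT": 1, b"/data": 2, b"/logs": 3}
INVERSE = {v: k for k, v in CODEBOOK.items()}

def encode(blocks):
    # encoder replaces each sourceblock with its codeword
    return [CODEBOOK[b] for b in blocks]

def decode_to_protocol(codewords, rule):
    # decode via the codebook, then apply the mapping/transformation rule
    # so the output is already protocol-formatted data
    return [rule(INVERSE[c]) for c in codewords]

def frame(block):
    # hypothetical protocol rule: wrap each sourceblock in a framed record
    return {"field": block.decode("ascii"), "length": len(block)}
```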
PARAMETER UPDATE METHOD FOR ENTROPY CODING AND DECODING OF CONVERSION COEFFICIENT LEVEL, AND ENTROPY CODING DEVICE AND ENTROPY DECODING DEVICE OF CONVERSION COEFFICIENT LEVEL USING SAME
A video decoding apparatus including a parser which obtains bit strings corresponding to current transformation coefficient level information by arithmetically decoding a bitstream based on a context model; a parameter determiner which determines a current binarization parameter by updating or maintaining a previous binarization parameter based on a comparison of a threshold and a size of a previous transformation coefficient; and a syntax element restorer which obtains the current transformation coefficient level information by performing de-binarization of the bit strings using the determined current binarization parameter and generates a size of a current transformation coefficient using the current transformation coefficient level information, wherein the current binarization parameter has a value equal to or smaller than a predetermined value.
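The update-or-maintain rule and the binarization it parameterizes can be sketched as below. The specific threshold (3 · 2^k) and the cap of 4 mirror the Golomb-Rice parameter update used in HEVC residual coding and are assumptions of this sketch, not figures quoted from the claims; the cap illustrates the "equal to or smaller than a predetermined value" limitation.

```python
def update_rice_param(prev_param, prev_level, cap=4):
    # keep or increment the binarization parameter by comparing the previous
    # coefficient's size against a threshold that grows with the parameter
    if prev_level > (3 << prev_param):
        return min(prev_param + 1, cap)     # never exceeds the predetermined value
    return prev_param

def rice_binarize(level, k):
    # Golomb-Rice binarization: unary quotient prefix, k-bit remainder suffix
    q, r = level >> k, level & ((1 << k) - 1)
    return "1" * q + "0" + (format(r, "0{}b".format(k)) if k else "")

def rice_debinarize(bits, k):
    # de-binarization: count the unary prefix, then read the k-bit remainder
    q = bits.index("0")
    r = int(bits[q + 1:q + 1 + k] or "0", 2)
    return (q << k) | r
```

A larger parameter shortens the unary prefix for large levels, so adapting it to the sizes of previously decoded coefficients keeps the bit strings short.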
SYSTEM AND METHOD FOR ENCRYPTED DATA COMPACTION
A system and method for data compression with encryption, that produces a conditioned data stream by replacing data blocks within an input data stream to bring the frequency of each data block closer to an ideal value, produces an error stream comprising the differences between the original data and the encrypted data, and compresses the conditioned data.
Deep learning using large codeword model with homomorphically compressed data
A system and method for deep learning using a large codeword model with homomorphically compressed and dyadically encrypted data is disclosed. The system preprocesses input data, applies homomorphic-dyadic compression and encryption, tokenizes the compressed data into sourceblocks, and assigns codewords using a codebook. These codewords are processed through a machine learning core, which can be either a conventional transformer-based architecture or a latent transformer core utilizing a variational autoencoder. The system enables secure operations on encrypted data, preserving privacy while allowing complex computations. The processed output is decrypted, decompressed, and translated to match the input modality. A neural upsampler may further enhance the output. The machine learning core is continuously trained using the processed data and additional training data, improving performance over time.
System and method for data storage, transfer, synchronization, and security using automated model monitoring and training with dyadic distribution-based simultaneous compression and encryption
A system and method for efficient data storage, transfer, synchronization, and security using automated model monitoring and training. The system analyzes test datasets to detect data drift, retraining encoding and decoding algorithms as needed. New data sourceblocks are created and assigned codewords, compiling an updated codebook for distribution to connected devices. A novel dyadic distribution subsystem simultaneously compresses and encrypts data by transforming input streams into a dyadic distribution. This process generates a compressed main data stream and a secondary stream of transformation information, which are combined into a secure output. The system includes a network device manager for optimizing codebook distribution based on device resource usage. Operating in both lossless and lossy modes, the system offers flexible, efficient, and secure data handling across various network configurations.