Patent classifications
H03M7/6011
Controlling compression of input/output (I/O) operations
Embodiments of the present disclosure measure a state of a storage group within a storage array. The embodiments also increase or decrease a compression ratio corresponding to input/output (I/O) operations on the storage group based on a target data reduction ratio (DRR) of the storage array, an expected performance envelope, and a compressibility factor of the storage group.
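The feedback loop the abstract describes can be illustrated with a minimal sketch. The function name, thresholds, and level scale below are hypothetical, not taken from the patent; real implementations would weigh the performance envelope far more carefully:

```python
def adjust_compression_level(current_level, measured_drr, target_drr,
                             compressibility, max_level=9):
    """Toy controller: raise the compression level when the measured data
    reduction ratio (DRR) falls short of the target, provided the storage
    group's data is compressible enough to justify the added CPU cost;
    relax the level when the target is exceeded, to protect performance."""
    if measured_drr < target_drr and compressibility > 0.5:
        return min(current_level + 1, max_level)
    if measured_drr > target_drr:
        return max(current_level - 1, 0)
    return current_level
```

For example, a group at level 3 with a measured DRR of 1.5 against a target of 2.0 and high compressibility would step up to level 4.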
Discretization of numerical values with adaptive accuracy
An encoder, connectable to a data-memory, for storing numerical values that lie in a value range between a predefined minimum value and a predefined maximum value. The encoder includes an assignment instruction according to which the value range is subdivided into multiple discrete intervals, the intervals varying in width on the scale of the numerical values. The encoder is configured to classify a numerical value to be stored into exactly one interval and to output an identifier of this interval. A decoder for numerical values stored in a data-memory using such an encoder is configured to assign, according to the assignment instruction, to an identifier of a discrete interval retrieved from the data-memory a fixed numerical value belonging to this interval, and to output it. Also described are an AI module including an ANN, an encoder, and a decoder; a method for manufacturing the AI module; and an associated computer program.
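A minimal sketch of such a variable-width quantizer, assuming a hand-chosen boundary list as the assignment instruction and the interval midpoint as the decoder's fixed representative value (the patent leaves both choices open):

```python
import bisect

class AdaptiveQuantizer:
    """Encoder/decoder over a value range split into intervals of varying
    width; each value maps to the identifier (index) of its interval, and
    the decoder returns a fixed representative: the interval midpoint."""

    def __init__(self, boundaries):
        # boundaries: sorted edges [b0, ..., bn]; interval i = [b_i, b_{i+1})
        self.boundaries = boundaries

    def encode(self, value):
        # classify the value into exactly one interval, clamping out-of-range
        i = bisect.bisect_right(self.boundaries, value) - 1
        return max(0, min(i, len(self.boundaries) - 2))

    def decode(self, index):
        # fixed numerical value belonging to this interval
        return (self.boundaries[index] + self.boundaries[index + 1]) / 2
```

With boundaries `[0.0, 0.1, 0.25, 0.5, 1.0]`, small values get narrow intervals (high accuracy) while large values share wide ones, which is the "adaptive accuracy" the title refers to.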
TRACING ENGINE-BASED SOFTWARE LOOP ESCAPE ANALYSIS AND MIXED DIFFERENTIATION EVALUATION
A method for loop escape analysis includes receiving a set of executable computer instructions stored on a storage medium, and determining a number of inputs to a loop associated with a data structure, storage space that would be saved by compressing the data structure, and a size of new elements required to compress the data structure. Upon reaching an end of the loop, the method determines whether to compress the data structure based on a comparison between the size of the new elements and the saved storage space. In response to determining to compress the data structure, the method compresses the data structure.
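The end-of-loop decision can be sketched with run-length encoding standing in for the compression scheme; the element-size constants and function names are illustrative assumptions, not from the patent:

```python
def rle_compress(values):
    """Run-length encode a list into (value, run_length) pairs."""
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return [tuple(r) for r in runs]

def maybe_compress(data, element_size=8, run_entry_size=12):
    """After the loop that filled `data`, compare the storage the new
    run-length elements would need against the space the flat form frees,
    and compress only when the saving wins."""
    runs = rle_compress(data)
    flat_cost = len(data) * element_size      # storage space saved if replaced
    run_cost = len(runs) * run_entry_size     # size of the new elements
    if run_cost < flat_cost:
        return runs, True
    return data, False
```

Ten repeated values compress (one 12-byte run beats 80 flat bytes), while three distinct values do not (36 bytes of runs would exceed 24 flat bytes).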
CONTENT-ADAPTIVE TILING SOLUTION VIA IMAGE SIMILARITY FOR EFFICIENT IMAGE COMPRESSION
Techniques are provided herein for more efficiently storing images that have a common subject, such as product images that share the same product in the image. Each image undergoes an adaptive tiling procedure to split the image into a plurality of tiles, with each tile identifying a region of the image having pixels with the same content. The tiles across multiple images can then be clustered together and those tiles having identical content are removed. Once all duplicate tiles have been removed from the set of all tiles across the images, the tiles are once again clustered based on their encoding scheme and certain encoding parameters. Tiles within each cluster are compressed using the best compression technique for the tiles in each corresponding cluster. By removing duplicative tile content between numerous images of the same subject, the total amount of data that needs to be stored is reduced.
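The tile-deduplication step can be sketched by hashing tile content across images; fixed-size tiles stand in here for the patent's content-adaptive tiling, and the function names are assumptions:

```python
import hashlib

def tile_image(pixels, tile_w, tile_h):
    """Split a 2-D grid of pixel values into fixed-size tiles (an
    illustrative stand-in for content-adaptive tiling)."""
    h, w = len(pixels), len(pixels[0])
    tiles = []
    for y in range(0, h, tile_h):
        for x in range(0, w, tile_w):
            tile = tuple(tuple(row[x:x + tile_w])
                         for row in pixels[y:y + tile_h])
            tiles.append(tile)
    return tiles

def dedup_tiles(images, tile_w=2, tile_h=2):
    """Keep one copy of each distinct tile across all images, keyed by a
    content hash, so regions duplicated between images are stored once."""
    store = {}
    for img in images:
        for tile in tile_image(img, tile_w, tile_h):
            key = hashlib.sha256(repr(tile).encode()).hexdigest()
            store.setdefault(key, tile)
    return store
```

Two product images sharing identical background tiles would contribute those tiles to the store only once; the surviving tiles would then be clustered and compressed per cluster as the abstract describes.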
METHOD AND SYSTEMS FOR GENOME SEQUENCE COMPRESSION
Systems and methods for genome sequence compression and decompression are provided. The method for compression encoding of a genome sequence includes partitioning the genome sequence into a plurality of Groups of Bases (GoBs) and processing each GoB independently to encode the genome sequence into a bit stream. Processing each GoB includes dividing it into a first part and a second part, the first part including an initial context part and the second part including a learning-based inference part. Processing each GoB further includes encoding the first part in accordance with a Markov model, encoding the second part in accordance with a learning-based model, and encoding the encoded first part and the encoded second part into the bit stream with an arithmetic encoder. The learning-based model may include Long Short-Term Memory (LSTM)-based neural networks.
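The GoB partitioning and the Markov statistics for the initial-context part can be sketched as below; the arithmetic coder and the LSTM inference stage are omitted, and the function names and the order-k context are assumptions for illustration:

```python
from collections import defaultdict

def partition_gobs(sequence, gob_size):
    """Split a genome string into independently processed Groups of
    Bases (GoBs)."""
    return [sequence[i:i + gob_size]
            for i in range(0, len(sequence), gob_size)]

def markov_probs(gob, order=2):
    """Order-k Markov model over a GoB's initial-context part:
    conditional next-base probabilities, i.e. the statistics an
    arithmetic coder would consume."""
    counts = defaultdict(lambda: defaultdict(int))
    for i in range(order, len(gob)):
        counts[gob[i - order:i]][gob[i]] += 1
    return {ctx: {base: n / sum(nxt.values()) for base, n in nxt.items()}
            for ctx, nxt in counts.items()}
```

Because each GoB is encoded independently, GoBs can be compressed or decompressed in parallel, which is a common motivation for this kind of partitioning.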
DATA COMPRESSION DEVICE, MEMORY SYSTEM AND METHOD
According to one embodiment, a data compression device includes a dictionary match determination unit, an extended matching generator, a match selector, and a match connector. The dictionary match determination unit searches for first past input data matching first new input data. The extended matching generator compares second past input data, subsequent to the first past input data, with second new input data, subsequent to the first new input data. The match selector generates compressed data by replacing a part of the input data with match information output from the dictionary match determination unit or the extended matching generator. The match connector replaces a plurality of pieces of match information in the compressed data with a single piece of match information.
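The dictionary-match and extended-matching stages correspond to classic LZ77-style coding. A minimal sketch, with the match-connector stage omitted and the window and token sizes chosen arbitrarily:

```python
def lz_compress(data, window=64, min_match=3):
    """Simplified LZ coder: find past input matching the new input
    (dictionary match), extend the match forward byte by byte (extended
    matching), and emit ('match', offset, length) tokens when a match is
    long enough, literals otherwise."""
    out, i = [], 0
    while i < len(data):
        best_len, best_off = 0, 0
        for j in range(max(0, i - window), i):
            k = 0
            # extend the match as far as the new input keeps agreeing
            while i + k < len(data) and data[j + k] == data[i + k]:
                k += 1
            if k > best_len:
                best_len, best_off = k, i - j
        if best_len >= min_match:
            out.append(('match', best_off, best_len))
            i += best_len
        else:
            out.append(('lit', data[i]))
            i += 1
    return out
```

On "abcabcabc" the coder emits three literals and then a single self-overlapping match covering the remaining six characters.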
Quantization of spatial audio parameters
There is disclosed inter alia an apparatus for spatial audio signal encoding which determines at least one spatial audio parameter comprising a direction parameter with an elevation component and an azimuth component. The elevation component and azimuth component of the direction parameter are then converted to an index value.
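The conversion of a direction parameter to an index can be sketched with uniform quantization of each component; practical spatial-audio codecs use spherical grids whose azimuth resolution varies with elevation, so the grid and function names below are simplifying assumptions:

```python
def direction_to_index(elevation_deg, azimuth_deg,
                       elev_steps=16, azim_steps=64):
    """Quantize elevation in [-90, 90] and azimuth in [0, 360) onto a
    uniform grid and pack both components into a single index value."""
    e = min(int((elevation_deg + 90.0) / 180.0 * elev_steps), elev_steps - 1)
    a = int(azimuth_deg % 360.0 / 360.0 * azim_steps) % azim_steps
    return e * azim_steps + a

def index_to_direction(index, elev_steps=16, azim_steps=64):
    """Recover a representative direction (cell center) from the index."""
    e, a = divmod(index, azim_steps)
    elevation = (e + 0.5) / elev_steps * 180.0 - 90.0
    azimuth = (a + 0.5) / azim_steps * 360.0
    return elevation, azimuth
```

A round trip lands within one grid cell of the original direction, i.e. within 180/16 degrees of elevation and 360/64 degrees of azimuth.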
DATA COMPRESSION SYSTEM AND METHOD OF USING
A system includes a non-transitory computer-readable medium configured to store instructions and a processor connected to the non-transitory computer-readable medium. The processor is configured to execute the instructions for generating a mask based on data received from a sensor, wherein the mask includes a plurality of importance values and each region of the received data is designated a corresponding importance value. The processor is configured to execute the instructions for encoding the received data based on the mask and transmitting the encoded data to a decoder for defining reconstructed data. The processor is configured to execute the instructions for computing a loss based on the reconstructed data, the received data, and the mask. The processor is configured to execute the instructions for providing training to an encoder for encoding the received data based on the computed loss.
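The mask-dependent loss can be sketched as an importance-weighted squared error; the exact loss in the patent is unspecified, so the weighting scheme and function name here are assumptions:

```python
def masked_loss(reconstructed, original, mask):
    """Importance-weighted reconstruction loss: the squared error of each
    region is scaled by that region's importance value, so training the
    encoder penalizes errors in important regions more heavily and lets
    unimportant regions be compressed more aggressively."""
    assert len(reconstructed) == len(original) == len(mask)
    total = sum(m * (r - o) ** 2
                for r, o, m in zip(reconstructed, original, mask))
    return total / len(original)
```

With a mask of zeros over a region, reconstruction errors there contribute nothing to the loss, which is what allows the encoder to spend fewer bits on it.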
CLUSTER-BASED DATA COMPRESSION FOR AI TRAINING ON THE CLOUD FOR AN EDGE NETWORK
A disclosed information handling system includes an edge device communicatively coupled to a cloud computing resource. The edge device is configured to respond to receiving, from an internet of things (IoT) unit, a numeric value for a parameter of interest by determining a compressed encoding for the numeric value in accordance with a non-lossless (lossy) compression algorithm. The edge device transmits the compressed encoding of the numeric value to the cloud computing resource. The cloud computing resource includes a decoder communicatively coupled to the edge device's encoder and configured to respond to receiving the compressed encoding by generating a surrogate for the numeric value. The surrogate may be generated in accordance with a probability distribution applicable to the parameter of interest. The compression algorithm may be a clustering algorithm such as a k-means clustering algorithm.
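A minimal sketch of the k-means variant: the centroids act as a shared codebook, the edge device sends only a cluster index, and the cloud decodes it to the centroid as the surrogate (the patent's probability-distribution surrogate is simplified here to the centroid itself; all names are illustrative):

```python
def kmeans_1d(values, k, iters=20):
    """Tiny 1-D k-means: the resulting centroids form the codebook the
    edge encoder shares with the cloud decoder."""
    centroids = sorted(values)[::max(1, len(values) // k)][:k]
    for _ in range(iters):
        buckets = [[] for _ in centroids]
        for v in values:
            buckets[min(range(len(centroids)),
                        key=lambda i: abs(v - centroids[i]))].append(v)
        centroids = [sum(b) / len(b) if b else c
                     for b, c in zip(buckets, centroids)]
    return centroids

def encode(value, centroids):
    """Lossy compressed encoding: the index of the nearest centroid."""
    return min(range(len(centroids)),
               key=lambda i: abs(value - centroids[i]))

def decode(index, centroids):
    """Cloud-side surrogate for the original numeric value."""
    return centroids[index]
```

Transmitting a small integer index instead of a full sensor reading is what makes this attractive on a bandwidth-constrained edge network.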
Method and system for compressing application data for operations on multi-core systems
A system and method to compress application control data, such as weights for a layer of a convolutional neural network, is disclosed. A multi-core system for executing at least one layer of the convolutional neural network includes a storage device storing a compressed weight matrix of a set of weights of the at least one layer of the convolutional network and a decompression matrix. The compressed weight matrix is formed by matrix factorization and quantization of a floating point value of each weight to a floating point format. A decompression module is operable to obtain an approximation of the weight values by decompressing the compressed weight matrix through the decompression matrix. A plurality of cores executes the at least one layer of the convolutional neural network with the approximation of weight values to produce an inference output.