H03M7/3079

Storage system and storage control method

A storage system that performs irreversible compression on time-series data using a machine-learning-based compressor/decompressor calculates, for each of one or more kinds of statistics based on one or more kinds of parameters, a statistic value in relation to the original data (the time-series data input to the compressor/decompressor) and a statistic value in relation to the decompressed data (the time-series data output from the compressor/decompressor) corresponding to that original data. The machine learning of the compressor/decompressor is performed based on the statistic values calculated for the original data and the statistic values calculated for the decompressed data.
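The training objective described above can be sketched as follows. The particular statistics (mean, standard deviation, min, max) and the squared-difference penalty are illustrative assumptions, not the patent's specific formulation:

```python
import numpy as np

def stats(window):
    """One value per kind of statistic for a time-series window."""
    return np.array([window.mean(), window.std(), window.max(), window.min()])

def stats_loss(original, decompressed):
    """Squared distance between the statistic vectors of the two windows;
    this term would drive the compressor/decompressor's training."""
    return float(np.sum((stats(original) - stats(decompressed)) ** 2))

rng = np.random.default_rng(0)
x = rng.normal(size=256)
assert stats_loss(x, x) == 0.0        # identical data -> zero penalty
assert stats_loss(x, 1.5 * x) > 0.0   # distorted data -> positive penalty
```

A lossy codec trained this way is rewarded for preserving the chosen statistics rather than for reconstructing every sample exactly.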

SYSTEM AND METHOD FOR DATA COMPACTION UTILIZING MISMATCH PROBABILITY ESTIMATION

A system and method for compacting data that uses mismatch probability estimation to improve entropy encoding methods to account for, and efficiently handle, previously unseen data in the data to be compacted. Training data sets are analyzed to determine the frequency of occurrence of each sourceblock in the training data sets. A mismatch probability estimate is calculated comprising an estimated frequency at which any given data sourceblock received during encoding will not have a codeword in the codebook. Entropy encoding is used to generate codebooks comprising codewords for data sourceblocks based on the frequency of occurrence of each sourceblock. A “mismatch codeword” is inserted into the codebook based on the mismatch probability estimate to represent those cases when a block of data to be encoded does not have a codeword in the codebook. During encoding, if a mismatch occurs, a secondary encoding process is used to encode the mismatched sourceblock.
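The codebook construction described above can be sketched with a Huffman code as the entropy coder. The fixed 5% mismatch estimate and raw-bit fallback as the secondary encoding are illustrative assumptions:

```python
import heapq
from collections import Counter

def huffman_codebook(freqs):
    """Huffman codes from a symbol -> probability map (>= 2 symbols)."""
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(sorted(freqs.items()))]
    heapq.heapify(heap)
    n = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, n, merged))
        n += 1
    return heap[0][2]

def build_codebook(training_blocks, mismatch_prob=0.05):
    """Scale observed sourceblock frequencies down and reserve the
    mismatch probability mass for an escape ("mismatch") codeword."""
    counts = Counter(training_blocks)
    total = sum(counts.values())
    freqs = {b: (c / total) * (1 - mismatch_prob) for b, c in counts.items()}
    freqs["<MISMATCH>"] = mismatch_prob
    return huffman_codebook(freqs)

def encode(blocks, codebook):
    out = []
    for b in blocks:
        if b in codebook:
            out.append(codebook[b])
        else:  # secondary encoding: escape codeword + raw block bits
            out.append(codebook["<MISMATCH>"]
                       + "".join(f"{ord(ch):08b}" for ch in b))
    return "".join(out)

cb = build_codebook(["ab", "ab", "ab", "cd", "cd", "ef"])
assert "<MISMATCH>" in cb               # escape entry always present
bits = encode(["ab", "zz"], cb)         # "zz" was never seen in training
assert set(bits) <= {"0", "1"}
```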

PROBABILISTIC MODEL FOR FILE-SPECIFIC COMPRESSION SELECTION UNDER SLA-CONSTRAINTS

One example method includes file-specific compression selection. Compression metrics are generated for a chunk of a file using a reference compressor. Compression metrics for other compressors are determined from the metrics of the reference compressor. A compressor is then selected to compress the file.
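One way this selection could look, using zlib as the reference compressor. The scaling factors, throughput figures, and the throughput-floor SLA constraint are invented for illustration; real values would be learned or measured offline:

```python
import zlib

# Hypothetical mapping from the reference compressor's ratio to estimated
# ratios for other compressors, plus assumed throughputs for an SLA check.
RATIO_FACTORS = {"zlib": 1.00, "bz2": 1.10, "lzma": 1.25}
SPEED_MBPS    = {"zlib": 80.0, "bz2": 12.0, "lzma": 4.0}

def select_compressor(chunk, min_mbps):
    """Measure the reference compressor once on a chunk, extrapolate ratios
    for the others, and pick the best estimate that meets the SLA floor."""
    ref_ratio = len(chunk) / len(zlib.compress(chunk))
    estimates = {name: ref_ratio * f for name, f in RATIO_FACTORS.items()}
    feasible = {n: r for n, r in estimates.items() if SPEED_MBPS[n] >= min_mbps}
    return max(feasible, key=feasible.get) if feasible else "zlib"

data = b"abc" * 10_000
assert select_compressor(data, min_mbps=50.0) == "zlib"  # only zlib fast enough
assert select_compressor(data, min_mbps=1.0) == "lzma"   # best estimated ratio
```

The point of the technique is that only the reference compressor ever runs on the data; everything else is estimated.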

MULTI-THREADED CABAC DECODING
20220394284 · 2022-12-08

A method, system, and computer readable medium for improved decoding of CABAC-encoded media are described. A decoding loop includes decoding an encoded binary element from a sequence of encoded binary elements to generate a decoded binary element using a context probability. A next context probability for a next encoded binary element in the sequence is determined from the decoded binary element, and the next context probability for decoding the next encoded binary element is provided to the decoding loop for a next iteration.
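The dependency structure of that loop can be sketched as follows. `decode_bin` is a stand-in for the real arithmetic decode (here it simply returns the element), and the exponential context update is an illustrative assumption; the point is that each iteration consumes the current context probability and produces the next one from the bin it just decoded:

```python
def decode_bin(encoded_elem, ctx_prob):
    """Stub for arithmetic decoding of one bin under a context probability."""
    return encoded_elem

def next_context(ctx_prob, decoded_bin, rate=0.05):
    """Adaptive context update: move the probability toward the decoded bin."""
    return ctx_prob + rate * (decoded_bin - ctx_prob)

def decode_sequence(encoded, ctx_prob=0.5):
    decoded = []
    for elem in encoded:
        b = decode_bin(elem, ctx_prob)        # decode with current context
        ctx_prob = next_context(ctx_prob, b)  # compute context for next bin
        decoded.append(b)
    return decoded

assert decode_sequence([1, 1, 0, 1]) == [1, 1, 0, 1]
```

Because the only value carried between iterations is the next context probability, its computation can be hoisted out of the decode step, which is what enables splitting the loop across threads.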

METHODS AND DEVICES FOR TREE SWITCHING IN POINT CLOUD COMPRESSION

Methods and devices for coding point cloud data using volume trees and predicted-point trees. In one embodiment of the disclosure, a method of encoding point cloud data to generate a bitstream of compressed point cloud data representing a three-dimensional location of a physical object is provided, the point cloud data being located within a volumetric space. The method includes compressing a first part of the point cloud data represented by a first tree of a first type; determining, for a given node of the first tree, whether an assignation to a second type of tree is enabled, said given node still being processed for the first tree; and, when the assignation is enabled, compressing a second part of the point cloud data represented by a second tree of the second type, wherein features associated with a root node of the second tree are at least partially obtained from the given node.
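A toy sketch of the switching idea, with an octree as the first tree and a delta-coded predicted-point tree as the second. The switching criterion (point count) and the inherited feature (the node's origin) are illustrative assumptions, not the patent's actual tests:

```python
def predicted_point_code(points):
    """Second tree (sketch): first point absolute, the rest as deltas."""
    codes = [points[0]]
    for prev, cur in zip(points, points[1:]):
        codes.append(tuple(c - p for c, p in zip(cur, prev)))
    return codes

def compress(points, origin, size, max_pts=2):
    """Octree-style recursion (the "first tree"); once few points remain in
    a node, hand them to a predicted-point tree (the "second tree") whose
    root inherits the node's origin -- a feature hand-off from the given node."""
    if len(points) <= max_pts:
        return ("pred_tree", origin, predicted_point_code(sorted(points)))
    half = size / 2
    children = {}
    for p in points:
        key = tuple(int(p[i] >= origin[i] + half) for i in range(3))
        children.setdefault(key, []).append(p)
    return ("octree", {k: compress(v,
                                   tuple(origin[i] + k[i] * half for i in range(3)),
                                   half, max_pts)
                       for k, v in children.items()})

node = compress([(0, 0, 0), (1, 1, 1), (7, 7, 7)], origin=(0, 0, 0), size=8)
assert node[0] == "octree"                      # root stays in the first tree
assert node[1][(0, 0, 0)][0] == "pred_tree"     # sparse child switched trees
```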

Coefficient context modeling in video coding
11593968 · 2023-02-28

In some embodiments, a method includes analyzing a first set of values for a first bin plane in a plurality of bin planes. The plurality of bin planes are used to determine a context model for entropy coding of a current block in a video. The method determines whether to use a second set of values from a second bin plane based on the analyzing. When it is determined to use the second set of values, information is calculated for the context model using the first set of values and the second set of values. When it is determined to not use the second set of values, information is calculated for the context model using the first set of values.
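A minimal sketch of that decision, assuming the "analysis" is a count of significant values in the first plane and that the context information is a weighted sum; the threshold and formula are invented for illustration:

```python
def context_index(plane1_vals, plane2_vals, threshold=2):
    """Fold the second bin plane into the context model only when the
    first plane's analysis clears a threshold."""
    use_second = sum(plane1_vals) >= threshold  # analysis of first plane
    if use_second:
        return sum(plane1_vals) + 2 * sum(plane2_vals)
    return sum(plane1_vals)

assert context_index([1, 0, 0], [1, 1, 0]) == 1  # second plane skipped
assert context_index([1, 1, 1], [1, 0, 0]) == 5  # both planes used
```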

SYSTEM AND METHOD FOR EFFECTIVE COMPRESSION, REPRESENTATION AND DECOMPRESSION OF DIVERSE TABULATED DATA
20220368347 · 2022-11-17

A method for controlling compression of data includes accessing genomic annotation data in one of a plurality of first file formats, extracting attributes from the genomic annotation data, dividing the genomic annotation data into multiple chunks, and processing the extracted attributes and chunks into correlated information. The method also includes selecting different compressors for the attributes and chunks identified in the correlated information and generating a file in a second file format that includes the correlated information and information indicative of the different compressors for the chunks and attributes indicated in the correlated information. The information indicative of the different compressors is processed into the second file format to allow selective decompression of the attributes and chunks indicated in the correlated information.
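The per-attribute compressor selection and selective decompression could look something like this sketch, which tries three standard-library codecs per attribute and records the winner so each attribute can be decompressed on its own; the codec set and container layout are assumptions:

```python
import bz2
import lzma
import zlib

CODECS = {"zlib": (zlib.compress, zlib.decompress),
          "bz2":  (bz2.compress, bz2.decompress),
          "lzma": (lzma.compress, lzma.decompress)}

def pack(attributes):
    """Compress each attribute with whichever codec yields the smallest
    output, recording the codec name alongside the blob."""
    container = {}
    for name, data in attributes.items():
        best = min(CODECS, key=lambda c: len(CODECS[c][0](data)))
        container[name] = (best, CODECS[best][0](data))
    return container

def unpack_one(container, name):
    """Selective decompression: only the requested attribute is touched."""
    codec, blob = container[name]
    return CODECS[codec][1](blob)

cols = {"chrom": b"chr1\n" * 1000, "score": bytes(range(256)) * 40}
packed = pack(cols)
assert unpack_one(packed, "chrom") == cols["chrom"]
assert unpack_one(packed, "score") == cols["score"]
```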

Electronic device for efficiently saving historic data of ambient sensors and associated method
11587378 · 2023-02-21

A device includes sensing circuitry, compression circuitry, and a memory. The sensing circuitry, in operation, generates sensor data. The compression circuitry is coupled to the sensing circuitry, and, in operation, determines environmental contexts based on variation rates of sensor data and compresses sensor data based on determined environmental contexts. The compressed data is stored in the memory.
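The variation-rate-driven compression could be sketched as follows; the two contexts ("stable"/"changing"), the range-based variation rate, and the mean-collapse strategy are illustrative assumptions:

```python
def variation_rate(samples):
    """Simple variation measure for a window of sensor readings."""
    return max(samples) - min(samples)

def compress_window(samples, stable_threshold=0.5):
    """Classify the window's environmental context from its variation rate
    and compress accordingly: stable windows collapse to one mean value,
    changing windows keep every sample."""
    if variation_rate(samples) < stable_threshold:
        return ("stable", sum(samples) / len(samples))
    return ("changing", list(samples))

assert compress_window([20.0, 20.1, 20.05])[0] == "stable"
assert compress_window([20.0, 25.0, 18.0])[0] == "changing"
```

For ambient sensors that sit at a near-constant reading most of the day, almost every window falls into the cheap "stable" branch.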

METHODS AND SYSTEMS FOR DYNAMIC COMPRESSION AND TRANSMISSION OF APPLICATION LOG DATA

Certain aspects of the present disclosure provide techniques for committing log data in an application to a log data repository. An example method generally includes receiving, from an application, data to be committed to a remote storage location. A type of the received data is determined. The type of the received data is generally associated with a prioritization level and a compression mechanism to be used in committing the data to the remote storage location. An application execution context associated with the received data is determined. At a dispatch time associated with the prioritization level of the received data and the application execution context associated with the received data, a compressed data payload is generated and transmitted to the remote storage location. To generate the compressed data payload, at least the received data is compressed based on the determined compression mechanism.
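The type-to-policy mapping could be sketched like this; the type names, priority levels, and codec choices are invented for illustration, not taken from the disclosure:

```python
import bz2
import zlib

# Hypothetical policy table: each log-data type maps to a prioritization
# level and a compression mechanism.
POLICY = {"error":   {"priority": 0, "compress": zlib.compress},  # ship fast
          "metrics": {"priority": 1, "compress": bz2.compress}}   # ratio first

def make_payload(record_type, data, context):
    """Bundle the execution context with the data and compress the pair
    using the mechanism associated with the record's type."""
    policy = POLICY[record_type]
    payload = policy["compress"](context + b"\n" + data)
    return policy["priority"], payload

prio, payload = make_payload("error", b"stack trace...", b"req-42")
assert prio == 0
assert zlib.decompress(payload) == b"req-42\nstack trace..."
```

A dispatcher would then drain priority-0 payloads immediately and batch higher-numbered ones.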

Neural network model compression with block partitioning
11496151 · 2022-11-08

An apparatus of neural network model decompression includes processing circuitry. The processing circuitry can be configured to receive, from a bitstream of a compressed neural network representation, one or more first syntax elements associated with a 3-dimensional coding unit (CU3D) partitioned from a 3-dimensional coding tree unit (CTU3D). The CTU3D can be partitioned from a tensor in the neural network. The one or more first syntax elements can indicate that the CU3D is partitioned based on a 3D pyramid structure that includes multiple depths. Each depth corresponds to one or more nodes, and each node has a node value. Second syntax elements corresponding to the node values of the nodes in the 3D pyramid structure can be received from the bitstream in a breadth-first scan order for scanning the nodes in the 3D pyramid structure. Model parameters of the tensor can be reconstructed based on the received second syntax elements.
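The breadth-first scan order can be sketched by representing the pyramid as one list of node values per depth and emitting syntax elements depth by depth, shallowest first; the flat per-depth representation is an illustrative simplification:

```python
def bfs_scan(pyramid):
    """pyramid: list of depths, each a list of node values.
    Returns (depth, value) pairs in breadth-first scan order."""
    order = []
    for depth, nodes in enumerate(pyramid):
        for v in nodes:
            order.append((depth, v))
    return order

pyr = [[7], [3, 5], [1, 0, 2, 4]]  # depth-0 root, then its descendants
assert bfs_scan(pyr)[0] == (0, 7)                            # root first
assert [v for _, v in bfs_scan(pyr)] == [7, 3, 5, 1, 0, 2, 4]
```

Scanning shallow nodes first lets the decoder begin reconstructing coarse structure of the CU3D before the deepest node values arrive.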