Patent classifications
H03M7/6094
MULTI-CONTEXT ENTROPY CODING FOR COMPRESSION OF GRAPHS
Example embodiments relate to using a multi-context entropy coder for encoding adjacency lists. A system may obtain a graph (or multiple graphs) having data and may compress the data of the graph using a multi-context entropy coder. The multi-context entropy coder may encode adjacency lists within the data such that each integer is assigned to a different probability distribution. For example, operating the multi-context entropy coder may involve using a combination of arithmetic coding, Huffman coding, and ANS (asymmetric numeral systems). The assignment of integers to the probability distributions may depend on each integer's role and/or on previous values of a similar kind. By using multi-context entropy coding, the computing system may increase the compression ratio while maintaining similar processing speed.
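As a rough illustration of the idea (not the patented coder), the sketch below assigns each integer in a flattened adjacency-list stream to a context based on its role ("degree" vs. "gap") and keeps one adaptive, Laplace-smoothed frequency table per context as an idealized stand-in for arithmetic coding/ANS. The context names, record layout, and model are illustrative assumptions.

```python
import math
from collections import defaultdict

def adaptive_bits(symbols, context_of):
    """Estimated bits to encode `symbols` with one adaptive,
    Laplace-smoothed frequency table per context (an idealized
    stand-in for a multi-context arithmetic/ANS coder)."""
    alphabet = len(set(symbols))
    counts = defaultdict(lambda: defaultdict(int))
    totals = defaultdict(int)
    bits = 0.0
    for i, s in enumerate(symbols):
        ctx = context_of(i)
        p = (counts[ctx][s] + 1) / (totals[ctx] + alphabet)
        bits += -math.log2(p)  # ideal code length for symbol s in ctx
        counts[ctx][s] += 1
        totals[ctx] += 1
    return bits

# Adjacency lists flattened as (degree, gap, gap, gap) records:
# degrees and gaps follow very different distributions.
record = [3, 1, 2, 1]
stream = record * 50

multi  = adaptive_bits(stream, lambda i: "degree" if i % 4 == 0 else "gap")
single = adaptive_bits(stream, lambda i: "shared")
```

Separating roles lets each model converge to a sharper distribution, so `multi` comes out below `single` on this stream.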
USING DRIVE COMPRESSION IN UNCOMPRESSED TIER
In a storage system such as a SAN, NAS, or storage array that implements hierarchical performance tiers based on rated drive access latency, on-drive compression is used on data stored on a first tier and off-drive compression is used on data stored on a second tier. Off-drive compression is more processor-intensive and may introduce some data access latency but reduces storage requirements. On-drive compression is performed at or near line speed but generally yields lower size-reduction ratios than off-drive compression. On-drive compression may be implemented at a higher performance tier, whereas off-drive compression may be implemented at a lower performance tier. Further, space savings realized from on-drive compression may be applied to over-provisioning.
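A minimal sketch of the tiering decision, using zlib compression levels as stand-ins for on-drive (fast, near line-speed) and off-drive (slower, higher-ratio) compression; the tier names and the mapping to zlib levels are illustrative assumptions, not the patented system.

```python
import zlib

def store(data, tier):
    """Pick the compression path by performance tier: a fast,
    on-drive-style codec for the high-performance tier, a heavier
    off-drive-style codec for the capacity tier."""
    if tier == "performance":
        return ("on-drive", zlib.compress(data, 1))   # near line speed
    return ("off-drive", zlib.compress(data, 9))      # better ratio

data = b"sensor-reading,42;" * 4096
where_hot, hot = store(data, "performance")
where_cold, cold = store(data, "capacity")
```

The heavier codec spends more CPU per byte but never produces a larger output here, matching the ratio/latency trade-off the abstract describes.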
Methods and devices using direct coding in point cloud compression
Methods and devices for coding point clouds using a direct coding mode to code the coordinates of a point within a sub-volume associated with a current node, instead of a pattern of occupancy for child nodes. Eligibility for use of direct coding is based on occupancy data from another node. If eligible, a flag is included in the bitstream to signal whether direct coding is applied to points in the sub-volume.
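The sketch below shows the core idea in simplified form: an octree encoder that, when a node holds few points, emits their coordinates directly instead of recursing through child-occupancy patterns. A fixed point-count threshold stands in for the patent's neighbour-occupancy eligibility test, and the token stream is symbolic rather than an actual bitstream.

```python
def encode_octree(points, origin, size, thresh=2):
    """Encode integer (x, y, z) points in a cube of side `size`
    anchored at `origin`. Small nodes use direct coding mode;
    larger nodes emit an occupancy pattern and recurse."""
    if len(points) <= thresh or size == 1:
        # Direct coding mode: emit the raw coordinates.
        return [("direct", sorted(points))]
    half = size // 2
    children = {}
    for x, y, z in points:
        idx = ((x >= origin[0] + half) << 2 |
               (y >= origin[1] + half) << 1 |
               (z >= origin[2] + half))
        children.setdefault(idx, []).append((x, y, z))
    out = [("occupancy", sorted(children))]
    for idx in sorted(children):
        child = (origin[0] + half * ((idx >> 2) & 1),
                 origin[1] + half * ((idx >> 1) & 1),
                 origin[2] + half * (idx & 1))
        out += encode_octree(children[idx], child, half, thresh)
    return out

pts = [(0, 0, 0), (7, 7, 7), (7, 6, 7), (1, 0, 0), (0, 1, 0)]
tokens = encode_octree(pts, (0, 0, 0), 8)
```

Every point surfaces in exactly one `direct` token, so collecting those tokens recovers the cloud without decoding occupancy patterns.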
PROBABILISTIC MODEL FOR FILE-SPECIFIC COMPRESSION SELECTION UNDER SLA-CONSTRAINTS
One example method includes file-specific compression selection. Compression metrics are generated for a chunk of a file using a reference compressor. Compression metrics for other compressors are derived from those of the reference compressor. A compressor is then selected to compress the file.
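A toy version of the selection flow: compress one chunk with a reference compressor (zlib here), map its ratio to predicted ratios for other compressors via pre-fitted linear models, and pick the best predicted compressor that still meets an SLA-style minimum ratio. The compressor names and model coefficients are made up for illustration.

```python
import zlib

# Hypothetical linear models: predicted_ratio = a * reference_ratio + b,
# assumed fitted offline, once, per compressor.
MODELS = {"fast-lz": (0.85, 0.10), "balanced": (1.00, 0.00), "max": (1.15, -0.05)}

def select_compressor(chunk, min_ratio):
    """Return the compressor with the best predicted ratio that
    satisfies the SLA constraint, or None if none qualifies."""
    ref = len(chunk) / max(1, len(zlib.compress(chunk, 6)))
    preds = {name: a * ref + b for name, (a, b) in MODELS.items()}
    ok = {n: r for n, r in preds.items() if r >= min_ratio}
    return max(ok, key=ok.get) if ok else None
```

Only the reference compressor ever runs on the data; every other candidate is evaluated purely from the predicted metrics, which is what keeps the selection cheap.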
Data processing apparatus, data processing method, medium, and trained model
There is provided a data processing apparatus. An acquisition unit acquires feature plane data of a layer included in a neural network. A control unit outputs a first control signal corresponding to the layer for controlling first compression processing and a second control signal corresponding to the layer for controlling second compression processing. A first compression unit performs the first compression processing corresponding to the first control signal on the feature plane data. A second compression unit performs the second compression processing corresponding to the second control signal on the feature plane data after the first compression processing. The type of the second compression processing differs from that of the first compression processing.
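A schematic of the two-stage pipeline: per-layer control signals select a quantization step (a first, lossy stage) and whether a lossless second stage runs afterwards (modeled here with zlib). The signal format and the choice of stages are illustrative assumptions; the point is only that the two stages are of different types and are controlled independently per layer.

```python
import zlib

def compress_feature_plane(values, ctrl1, ctrl2):
    """ctrl1: quantization step for the first compression stage (lossy).
       ctrl2: whether the second, lossless stage runs afterwards.
       The two stages are of different processing types."""
    stage1 = bytes(min(255, int(v / ctrl1)) for v in values)  # quantize
    return zlib.compress(stage1) if ctrl2 else stage1

plane = [0.5] * 4096                       # a flat activation map
small = compress_feature_plane(plane, ctrl1=0.1, ctrl2=True)
raw   = compress_feature_plane(plane, ctrl1=0.1, ctrl2=False)
```

Feature maps are often flat or sparse, so the lossless stage removes most of the redundancy the quantizer leaves behind.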
Statistical and neural network approach for data characterization to reduce storage space requirements
A data model is trained to determine whether data is raw, compressed, and/or encrypted. The data model may also be trained to recognize which compression algorithm was used to compress data and predict compression ratios for the data using different compression algorithms. A storage system uses the data model to independently identify raw data. The raw data is grouped based on similarity of statistical features and group members are compressed with the same compression algorithm and may be encrypted after compression with the same encryption algorithm. The data model may also be used to identify sub-optimally compressed data, which may be uncompressed and grouped for compression using a different compression algorithm.
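A much-simplified stand-in for the trained data model: a single statistical feature (byte entropy) used to flag data that is likely raw rather than already compressed or encrypted, plus a coarse bucketing rule so similar raw buffers can share one compression algorithm. The 7.5-bit cutoff and the bucketing rule are illustrative assumptions, not learned parameters.

```python
import math, os

def byte_entropy(data):
    """Shannon entropy of the byte histogram, in bits per byte."""
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

def looks_raw(data, cutoff=7.5):
    """Compressed or encrypted data has near-uniform bytes (~8 bits
    per byte); raw text/structured data sits well below the cutoff."""
    return byte_entropy(data) < cutoff

def group_key(data):
    """Coarse similarity bucket: entropy quantized to half-bit steps.
    Buffers sharing a bucket would be compressed with one algorithm."""
    return round(byte_entropy(data) * 2) / 2
```

A real system would use many more statistical features (and a trained classifier) rather than a single threshold, but the routing structure is the same.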
STORAGE SYSTEM AND DATA PROCESSING METHOD IN STORAGE SYSTEM
A storage system includes an interface and a data compression system configured to compress reception data from the interface before the data is stored in a storage device. The data compression system compresses the reception data using a first compression algorithm to generate first compressed data, and uses the number of appearances of each of a set of predetermined code categories in the first compressed data to estimate the decompression time if a second compression algorithm were also applied. When the estimated decompression time is equal to or less than a threshold value, the system selects a second compression method that includes compression using the second compression algorithm; when it is greater than the threshold value, the system selects a first compression method that does not include compression using the second compression algorithm.
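A sketch of the selection rule with hypothetical numbers: count occurrences of each code category in the first-stage output, estimate decompression time from per-category costs, and only add the second algorithm when the estimate stays within the threshold. The category names and cost table are invented for illustration.

```python
# Hypothetical per-category decode costs, in microseconds per occurrence.
CATEGORY_COST_US = {"literal": 0.01, "short-match": 0.02, "long-match": 0.05}

def estimated_decompression_us(category_counts):
    """Weighted sum of code-category appearance counts."""
    return sum(CATEGORY_COST_US[c] * n for c, n in category_counts.items())

def select_method(category_counts, threshold_us):
    """Apply the second (heavier) algorithm only if the estimated
    decompression time is within the threshold; otherwise keep the
    first-algorithm-only method."""
    if estimated_decompression_us(category_counts) <= threshold_us:
        return "first+second"
    return "first-only"
```

Because the estimate is computed from the first-stage output alone, the expensive second algorithm never has to run just to discover that it would be too slow to decompress.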
INFORMATION PROCESSING METHOD, INFORMATION PROCESSING SYSTEM, AND INFORMATION PROCESSING PROGRAM
The information processing system includes a processing unit that executes processing in cooperation with a memory, and a storage unit that stores: a compression technology for a data volume to be applied to a Hamiltonian of the optimization problem; a compressible condition indicating whether the compression technology can be applied; and compression-method judgment information associated with a compression format indicating a feature quantity of the Hamiltonian when the Hamiltonian is compressed by applying the compression technology. The processing unit refers to the compression-method judgment information and judges which part of the Hamiltonian satisfies the compressible condition; it then extracts the feature quantity of that part by means of the compression technology corresponding to the compressible condition and compresses the extracted feature quantity into the compression format.
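An illustrative reduction of the idea to code: a QUBO-style Hamiltonian stored as a coefficient map, one hypothetical "compressible condition" (all quadratic coefficients equal), and a compression format that keeps only the feature quantity (the shared value plus the term list) when the condition holds. The condition and format are assumptions for the sketch, not the patent's actual judgment information.

```python
def compress_hamiltonian(coeffs):
    """coeffs maps index pairs (i, j) to quadratic coefficients.
    If the compressible condition holds (all coefficients equal),
    store only the feature quantity; otherwise keep the raw terms."""
    values = set(coeffs.values())
    if len(values) == 1:  # compressible condition satisfied
        return {"format": "uniform", "value": values.pop(),
                "terms": sorted(coeffs)}
    return {"format": "raw", "coeffs": coeffs}

def expand(compressed):
    """Recover the full coefficient map from either format."""
    if compressed["format"] == "uniform":
        return {t: compressed["value"] for t in compressed["terms"]}
    return dict(compressed["coeffs"])
```

The uniform format stores one coefficient value instead of one per term, which is the kind of feature-quantity saving the abstract describes for structured Hamiltonians.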