Patent classifications
H03M7/6041
DEEP LEARNING-BASED CHANNEL BUFFER COMPRESSION
A method and system are provided. The method includes performing channel estimation on a reference signal (RS), compressing, with a neural network, the channel estimation of the RS, decompressing, with the neural network, the compressed channel estimation, and interpolating the decompressed channel estimation.
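As a rough illustration (not the patented design), the Python sketch below compresses a pilot-tone channel estimate with a small autoencoder, decompresses it, and interpolates to the full subcarrier grid. The model shape, latent size, pilot spacing, and the name ChannelAE are all assumptions for the sketch.

import numpy as np
import torch
import torch.nn as nn

class ChannelAE(nn.Module):
    # illustrative autoencoder; sizes are assumptions, not from the patent
    def __init__(self, n_pilot=64, latent=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(2 * n_pilot, 64), nn.ReLU(),
                                 nn.Linear(64, latent))
        self.dec = nn.Sequential(nn.Linear(latent, 64), nn.ReLU(),
                                 nn.Linear(64, 2 * n_pilot))

    def forward(self, h):
        z = self.enc(h)            # compressed estimate held in the channel buffer
        return self.dec(z), z

n_pilot = 64
h_ls = torch.randn(1, 2 * n_pilot)      # stand-in RS channel estimate (re/im stacked)

model = ChannelAE(n_pilot)
h_hat, z = model(h_ls)                  # decompress after buffering

# interpolate decompressed pilot-tone estimates onto all 256 subcarriers
pilot_idx = np.arange(0, 256, 4)        # assumed pilot spacing of 4
full = np.interp(np.arange(256), pilot_idx, h_hat.detach().numpy()[0, :n_pilot])

In practice the encoder and decoder would be trained on channel realizations so that the latent vector, rather than the raw estimate, is what occupies the buffer.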
Input Encoding for Classifier Generalization
Techniques for classifier generalization in a supervised learning process using input encoding are provided. In one aspect, a method for classification generalization includes: encoding original input features from at least one input sample x⃗_S with a uniquely decodable code using an encoder E(·) to produce encoded input features E(x⃗_S), wherein the at least one input sample x⃗_S comprises uncoded input features; feeding the uncoded input features and the encoded input features E(x⃗_S) to a base model to build an encoded model; and learning a classification function C̃_E(·) using the encoded model, wherein the classification function C̃_E(·) learned using the encoded model is more general than that learned using the uncoded input features alone.
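A minimal sketch of the idea, under assumptions of my own: fixed-length binary codes (which are uniquely decodable) stand in for E(·), a 4-bit quantizer is the assumed encoder, and a logistic regression is the base model.

import numpy as np
from sklearn.linear_model import LogisticRegression

def encode(x, bits=4):
    # fixed-length binary code per quantized feature; fixed-length codes
    # are uniquely decodable, which is all the method requires of E(.)
    q = np.clip(np.round(x * (2 ** bits - 1)), 0, 2 ** bits - 1).astype(np.uint8)
    code_bits = np.unpackbits(q[..., None], axis=-1)[..., -bits:]
    return code_bits.reshape(x.shape[0], -1).astype(float)

X = np.random.rand(200, 5)                     # uncoded input features
y = (X.sum(axis=1) > 2.5).astype(int)
X_enc = np.hstack([X, encode(X)])              # uncoded + encoded -> "encoded model"
clf = LogisticRegression(max_iter=1000).fit(X_enc, y)

The key move is feeding both representations side by side; the claim in the abstract is that the classifier learned this way generalizes better than one trained on X alone.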
POOLING BLOCKS FOR ERASURE CODING WRITE GROUPS
A technique provides efficient data protection, such as erasure coding, for data blocks of volumes served by storage nodes of a cluster. Data blocks associated with write requests of unpredictable client workload patterns may be compressed. A set of the compressed data blocks may be selected to form a write group and an erasure code may be applied to the group to algorithmically generate one or more encoded blocks in addition to the data blocks. Due to the unpredictability of the data workload patterns, the compressed data blocks may have varying sizes. A pool of the various-sized compressed data blocks may be established and maintained from which the data blocks of the write group are selected. Establishment and maintenance of the pool enables selection of compressed data blocks that are substantially close to the same size and, thus, that require minimal padding.
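The pooling-and-padding step can be sketched in a few lines. This is a toy: a single XOR parity stands in for the erasure code, and the bucket width, group size k=4, and workload generator are assumptions.

import random
import zlib
from collections import defaultdict

BUCKET = 256                                   # pool granularity in bytes (assumed)
pool = defaultdict(list)                       # size bucket -> compressed blocks

def admit(block: bytes):
    c = zlib.compress(block)
    pool[len(c) // BUCKET].append(c)           # pool blocks by compressed size

def form_write_group(k=4):
    # select k compressed blocks of substantially the same size (same bucket)
    bucket = max(pool, key=lambda b: len(pool[b]))
    if len(pool[bucket]) < k:
        return None
    group = [pool[bucket].pop() for _ in range(k)]
    width = max(len(b) for b in group)
    padded = [b.ljust(width, b"\0") for b in group]   # minimal padding needed
    parity = bytearray(width)                         # single XOR parity as a
    for blk in padded:                                # stand-in erasure code
        for i, byte in enumerate(blk):
            parity[i] ^= byte
    return padded, bytes(parity)

random.seed(0)
for _ in range(32):                            # unpredictable client workload stand-in
    admit(bytes([random.randrange(4)]) * random.randint(1000, 8000))
group = form_write_group()

Because the group members come from the same size bucket, the ljust padding added before computing parity stays small, which is the efficiency claim of the abstract.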
MULTI-PAGE PARITY DATA STORAGE IN A MEMORY DEVICE
A processing device of a memory device stores parity data at a parity data storage location. Responsive to determining that a first size of the stored parity data satisfies a first condition, the processing device initiates execution of a compression algorithm to compress the stored parity data. Responsive to determining that a second size of the parity data resulting from the execution of the compression algorithm satisfies a second condition, the processing device performs a scan operation to release at least a subset of the stored parity data.
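A compact sketch of the two-threshold flow, with assumed threshold values and a stubbed scan that stands in for verifying the protected pages:

import os
import zlib

FIRST_LIMIT = 64 * 1024        # first condition threshold (assumed value)
SECOND_LIMIT = 48 * 1024       # second condition threshold (assumed value)

stored = {page: os.urandom(2048) * 2 for page in range(32)}   # parity per page stripe

def total_size(d):
    return sum(len(v) for v in d.values())

def page_data_verified(page):
    # stand-in for a media scan confirming the protected pages read back cleanly
    return page % 2 == 0

if total_size(stored) >= FIRST_LIMIT:                          # first condition
    stored = {p: zlib.compress(v) for p, v in stored.items()}  # compress parity
if total_size(stored) >= SECOND_LIMIT:                         # second condition
    for p in list(stored):                                     # scan operation
        if page_data_verified(p):
            del stored[p]                                      # release that parity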
Data amount compressing method, apparatus, program, and IC chip
A data amount compressing method compresses the data amount corresponding to a learned model obtained by training a learning model on a predetermined data group. The learning model has a tree structure in which multiple nodes, each associated with a hierarchically divided state space, are hierarchically arranged, and each node in the learned model is associated with an error amount that is generated in the process of the learning and corresponds to prediction accuracy. The method includes: a reading step of reading the error amount associated with each node; and a node deleting step of deleting a part of the nodes of the learned model according to the error amount read in the reading step, thereby compressing the data amount corresponding to the learned model.
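One plausible deletion rule, purely as an assumption for illustration: drop a subtree when its recorded error improves on its parent's by less than a threshold, so only subdivisions that meaningfully sharpen prediction survive.

from dataclasses import dataclass, field

@dataclass
class Node:
    error: float                 # error amount recorded while learning
    children: list = field(default_factory=list)

def prune(node: Node, threshold: float) -> Node:
    kept = []
    for child in node.children:
        # assumed rule: keep a child only if it improves on the parent by
        # more than `threshold`; otherwise delete the whole subtree
        if node.error - child.error > threshold:
            kept.append(prune(child, threshold))
    node.children = kept
    return node

def count(node):
    return 1 + sum(count(c) for c in node.children)

root = Node(0.30, [Node(0.28), Node(0.10, [Node(0.09), Node(0.02)])])
prune(root, 0.05)
assert count(root) == 3          # Node(0.28) and Node(0.09) were deleted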
Data Compression Method, Data Decompression Method, and Related Apparatus
A data compression method includes obtaining N to-be-compressed data blocks and N pieces of protection information (PI), where the N to-be-compressed data blocks are in a one-to-one correspondence with the N pieces of PI, and N is a positive integer greater than or equal to 2, compressing the N to-be-compressed data blocks to obtain a compressed data block, and compressing the N pieces of PI to obtain compressed PI.
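A small sketch of the separation, assuming T10-DIF-style 8-byte PI fields (the CRC-32-derived guard below is a stand-in, not the spec's CRC-16):

import struct
import zlib

blocks = [bytes([i]) * 4096 for i in range(4)]     # N = 4 to-be-compressed blocks

# one 8-byte PI field per block: guard, application tag, reference tag
pi = [struct.pack(">HHI", zlib.crc32(b) & 0xFFFF, 0, i)
      for i, b in enumerate(blocks)]

compressed_block = zlib.compress(b"".join(blocks)) # N blocks -> one compressed block
compressed_pi = zlib.compress(b"".join(pi))        # N PI pieces -> compressed PI

# decompression restores the one-to-one correspondence by fixed-size slicing
data = zlib.decompress(compressed_block)
meta = zlib.decompress(compressed_pi)
out_blocks = [data[i * 4096:(i + 1) * 4096] for i in range(4)]
out_pi = [meta[i * 8:(i + 1) * 8] for i in range(4)]
assert out_blocks == blocks and out_pi == pi

Keeping the PI stream separate from the data stream lets each compress on its own statistics while the fixed sizes preserve the block-to-PI pairing.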
Systems and Methods for Performing Lossless Source Coding
Systems and methods in accordance with various embodiments of the invention perform lossless source coding. In several embodiments, a nested code structure is utilized to perform Random Access Source Coding (RASC), where the number of active encoders is initially unknown. In several embodiments, the decoder can attempt to decode using a number of Slepian-Wolf decoders corresponding to an estimated number of sources. One embodiment includes multiple source encoders configured to receive a start message and transmit a portion of a codeword selected by encoding data from a source until an end of epoch message is received. A source decoder can transmit at least one start message, and receive codeword portions transmitted by the plurality of source encoders. When a decoding rule is satisfied, the source decoder can decode data from multiple source encoders based upon received codeword portions, and cause the broadcast transmitter to transmit an end of epoch message.
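A heavily simplified toy of the decode-until-unique loop, under my own assumptions: pseudorandom parity bits replace the nested code structure, two known encoders replace the unknown active set, and an exhaustive joint search over a tiny alphabet replaces the Slepian-Wolf decoders.

import itertools
import random

N_BITS = 4                       # each active source emits one 4-bit block per epoch
SOURCES = [0b1010, 0b0111]       # two active encoders in this toy

def parity_bit(x, i):
    # i-th pseudorandom parity of block x; encoders and decoder share the masks
    mask = random.Random(1000 + i).getrandbits(N_BITS)
    return bin(x & mask).count("1") & 1

# decoder: after the start message, keep every joint hypothesis consistent
# with all parity bits received so far (exhaustive toy "decoder")
candidates = list(itertools.product(range(2 ** N_BITS), repeat=len(SOURCES)))
bits_sent = 0
while len(candidates) > 1:
    rx = [parity_bit(s, bits_sent) for s in SOURCES]    # one bit per encoder
    candidates = [c for c in candidates
                  if all(parity_bit(c[j], bits_sent) == rx[j]
                         for j in range(len(SOURCES)))]
    bits_sent += 1
# decoding rule satisfied: a unique hypothesis remains, so the decoder would
# now broadcast the end-of-epoch message to stop the encoders
assert candidates[0] == tuple(SOURCES)

The point the toy preserves is the protocol shape: encoders stream codeword portions of a priori unknown length, and the decoder, not the encoders, decides when the epoch ends.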
Increasing storage capacity and data transfer speed in genome data backup
Methods and systems for storing data include compressing data inflated from a first compression format into a second format using a processor and verifying contents of the data concurrently with compressing the data. Compression is aborted responsive to a failure of the content verification, but an output of the compression is stored to a tape drive until the compression is aborted. The tape drive is rolled back to a file start position after the compression is aborted and compression of any remaining uncompressed data is skipped after the compression is aborted. The data is stored to the tape drive after rolling the tape drive back.
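A sketch of the streaming pipeline, with assumed specifics: gzip as the first format, bz2 as the second, a per-chunk CRC manifest as the concurrent content check, and a seekable file standing in for the tape drive.

import bz2
import gzip
import zlib

def recompress_to_tape(src_gz, tape, chunk_crcs, start_pos, chunk=1 << 20):
    # inflate from the first format (gzip) and deflate into the second (bz2),
    # verifying each inflated chunk against a precomputed CRC manifest
    comp = bz2.BZ2Compressor()
    tape.seek(start_pos)
    with gzip.open(src_gz, "rb") as f:
        for expected in chunk_crcs:
            data = f.read(chunk)
            if zlib.crc32(data) != expected:   # content verification failed
                tape.seek(start_pos)           # roll back to the file start
                tape.truncate()                #   and discard partial output
                return False                   # remaining data is skipped
            tape.write(comp.compress(data))    # compression output streams out
    tape.write(comp.flush())
    return True

On a False return the caller would store the data unmodified after the rollback, matching the abort-and-roll-back behavior in the abstract.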
Reducing error in data compression
Systems and methods are provided for reducing error in data compression and decompression when data is transmitted over low bandwidth communication links, such as satellite links. Embodiments of the present disclosure provide systems and methods for variable block size compression for gridded data, efficiently storing null values in gridded data, and eliminating growth of error in compressed time series data.
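The error-growth point is the easiest to make concrete. The sketch below (my construction, not the patent's) quantizes each time-series delta against the reconstructed previous value rather than the original one, so quantization error stays bounded by half a step instead of accumulating.

import numpy as np

STEP = 0.01   # quantization step; worst-case error is STEP / 2 (assumed parameter)

def compress_series(x, step=STEP):
    # quantize each delta against the *reconstructed* previous sample, not the
    # original one, so quantization error cannot grow over the series
    codes = np.empty(len(x), dtype=np.int64)
    recon = 0.0
    for i, v in enumerate(x):
        codes[i] = round((v - recon) / step)
        recon += codes[i] * step
    return codes                 # small integers, friendly to entropy coding

def decompress_series(codes, step=STEP):
    return np.cumsum(codes) * step

x = np.cumsum(np.random.randn(10_000)) * 0.1
y = decompress_series(compress_series(x))
assert np.abs(x - y).max() <= STEP / 2 + 1e-9   # bounded, non-growing error

Quantizing deltas against the original samples instead would let the reconstruction drift, which is exactly the error growth the abstract says the system eliminates.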