H03M7/6041

METHOD AND APPARATUS FOR COMPRESSING WEIGHTS OF NEURAL NETWORK
20220166444 · 2022-05-26

A method of compressing weights of a neural network includes compressing a weight set including the weights of the neural network, determining modified weight sets by changing at least one of the weights, calculating compression efficiency values for the determined modified weight sets based on a result of compressing the weight set and results of compressing the determined modified weight sets, determining, among the weights, a target weight satisfying a compression efficiency condition based on the calculated compression efficiency values, and determining a final compression result by compressing the weights based on a result of replacing the determined target weight.
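A toy sketch of the selection loop described above, assuming byte-valued quantized weights and using zlib as a stand-in for the patent's compressor; the function names, the zero/constant replacement value, and the "largest compression gain" efficiency condition are illustrative assumptions, not the claimed method:

```python
import zlib

def compressed_size(weights):
    """Stand-in compressor: byte length of the zlib-compressed weight bytes."""
    return len(zlib.compress(bytes(weights)))

def find_target_weight(weights, replacement=0):
    """Score each single-weight replacement by its compression gain."""
    base = compressed_size(weights)
    best_gain, target = 0, None
    for i in range(len(weights)):
        modified = list(weights)
        modified[i] = replacement                    # modified weight set
        gain = base - compressed_size(modified)      # compression efficiency value
        if gain > best_gain:                         # condition: largest gain
            best_gain, target = gain, i
    return target

weights = [7] * 10 + [3] + [7] * 53                  # mostly-constant quantized weights
target = find_target_weight(weights, replacement=7)
final = list(weights)
if target is not None:
    final[target] = 7                                # replace the target weight
final_compressed = zlib.compress(bytes(final))       # final compression result
```

Replacing the single outlier weight makes the byte stream uniform, so the final compression result is smaller than compressing the original weight set.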

MEMORY DEVICE AND MEMORY SYSTEM
20220164143 · 2022-05-26

A memory device includes: a plurality of memory cells; soft read logic configured to generate soft data by reading data from the plurality of memory cells in response to a soft read command from a controller, the soft data including at least a major symbol and at least a minor symbol; a compressor configured to generate compressed data by: encoding, into a code alphabet having a second length, a major source alphabet including repetitions of the major symbol by a first length among a plurality of source alphabets included in the soft data, and encoding, into a code alphabet having a longer length than the second length, a minor source alphabet including repetitions of the major symbol by a shorter length than the first length and ending with one minor symbol; and an interface configured to provide the compressed data to the controller.
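The described encoding resembles a Golomb-style run-length code: a full-length run of the major symbol maps to a short codeword, while a shorter run terminated by one minor symbol maps to a longer codeword. A minimal sketch under that reading, where the symbol values, code lengths, and dropping of an unterminated trailing run are my illustrative assumptions:

```python
def compress_soft(bits, major=1, run_len=4):
    """Encode soft-read bits as variable-length codewords.

    - run_len consecutive major symbols          -> 1-bit code "0"  (short code)
    - k < run_len majors ending with one minor   -> code "1" + k as 2 bits (longer)

    An unterminated trailing run of majors is dropped for brevity.
    """
    out = []
    count = 0
    for b in bits:
        if b == major:
            count += 1
            if count == run_len:
                out.append("0")                    # frequent alphabet, short code
                count = 0
        else:
            out.append("1" + format(count, "02b")) # rare alphabet, longer code
            count = 0
    return "".join(out)
```

Because soft data from healthy cells is dominated by the major symbol, the frequent full-length runs collapse to single bits, which is what makes the compressed data cheaper to move over the interface.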

Methods, decoder and encoder for handling a data stream for transmission between a remote unit and a base unit of a base station system

A method performed by an encoder of a base station system, for handling a data stream for transmission over a transmission connection between a remote unit and a base unit of the base station system, the remote unit being arranged to transmit wireless signals to, and receive wireless signals from, mobile stations. The method comprises quantizing a plurality of IQ samples, converting the quantized plurality of IQ samples to IQ predictions, and calculating per sample a difference between the quantized plurality of IQ samples and the IQ predictions in order to create IQ prediction errors. The method further comprises quantizing the IQ predictions or the IQ prediction errors, entropy encoding the IQ prediction errors, and sending the entropy encoded IQ prediction errors over the transmission connection to a decoder of the base station system. A corresponding method is performed by a decoder of the base station system.
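A rough model of the encoder/decoder pipeline, with two placeholders of mine: a previous-sample predictor stands in for the converter that produces IQ predictions, and zlib stands in for the entropy encoder; the quantization step size is likewise illustrative:

```python
import zlib

def encode(iq, step=0.25):
    """Quantize samples, predict each from the previous one, entropy-encode residuals."""
    quantized = [round(s / step) for s in iq]        # quantized IQ samples
    errors, prev = [], 0
    for s in quantized:
        errors.append(s - prev)                      # per-sample prediction error
        prev = s                                     # previous-sample prediction
    payload = ",".join(map(str, errors)).encode()
    return zlib.compress(payload)                    # stand-in entropy encoder

def decode(blob, step=0.25):
    """Decoder mirrors the predictor to rebuild the quantized samples."""
    errors = [int(t) for t in zlib.decompress(blob).decode().split(",")]
    out, prev = [], 0
    for e in errors:
        prev += e                                    # prediction + error
        out.append(prev * step)
    return out
```

Because consecutive IQ samples are correlated, the prediction errors cluster near zero and entropy-encode far more compactly than the raw samples, which is what reduces the required fronthaul bit rate.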

DATA COMPRESSION APPARATUS AND DATA COMPRESSION METHOD
20220131555 · 2022-04-28

A compression engine calculates replacement CRC codes, in predetermined data lengths, for DIF-in cleartext data including cleartext data and multiple CRC codes based on the cleartext data. The compression engine generates headered compressed-text data in which a header including the replacement CRC codes is added to compressed-text data in which the cleartext data is compressed, and generates code-in compressed-text data by calculating multiple CRC codes based on the headered compressed-text data to add the calculated CRC codes to the headered compressed-text data.
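A structural sketch of the two-stage CRC handling; CRC-32 from zlib stands in for the T10-DIF CRC-16, and the 512-byte chunking, header layout, and field widths are my illustrative assumptions:

```python
import zlib

CHUNK = 512  # per-interval protection, in the style of T10 DIF

def crcs_per_chunk(data, chunk=CHUNK):
    """One replacement CRC per fixed-length interval of the input."""
    return [zlib.crc32(data[i:i + chunk]) for i in range(0, len(data), chunk)]

def compress_with_dif(cleartext):
    """Carry the cleartext's CRCs in a header, then re-protect the whole result."""
    replacement_crcs = crcs_per_chunk(cleartext)       # CRCs over the cleartext
    body = zlib.compress(cleartext)                    # compressed-text data
    header = b"".join(c.to_bytes(4, "big") for c in replacement_crcs)
    headered = len(replacement_crcs).to_bytes(2, "big") + header + body
    outer_crcs = crcs_per_chunk(headered)              # CRCs over headered data
    trailer = b"".join(c.to_bytes(4, "big") for c in outer_crcs)
    return headered + trailer                          # code-in compressed-text data
```

The point of the double protection: the header preserves the original per-chunk CRCs so end-to-end integrity of the cleartext survives compression, while the outer CRCs protect the compressed representation itself in flight.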

Pooling blocks for erasure coding write groups

A technique provides efficient data protection, such as erasure coding, for data blocks of volumes served by storage nodes of a cluster. Data blocks associated with write requests of unpredictable client workload patterns may be compressed. A set of the compressed data blocks may be selected to form a write group and an erasure code may be applied to the group to algorithmically generate one or more encoded blocks in addition to the data blocks. Due to the unpredictability of the data workload patterns, the compressed data blocks may have varying sizes. A pool of the various-sized compressed data blocks may be established and maintained from which the data blocks of the write group are selected. Establishment and maintenance of the pool enables selection of compressed data blocks that are substantially close to the same size and, thus, that require minimal padding.

SYSTEMS AND METHODS FOR APPROXIMATE COMMUNICATION FRAMEWORK FOR NETWORK-ON-CHIPS
20210344617 · 2021-11-04

Systems and methods are disclosed for reducing latency and power consumption of on-chip movement through an approximate communication framework for network-on-chips (“NoCs”). The technology leverages the fact that big data applications (e.g., recognition, mining, and synthesis) can tolerate modest error and transfers data with the necessary accuracy, thereby improving the energy-efficiency and performance of multi-core processors.
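One common way to transfer data "with the necessary accuracy" is to truncate low-order mantissa bits of floating-point payloads before they enter the network; this small sketch of that idea is mine, not the disclosed framework:

```python
import struct

def approximate(value, drop_bits):
    """Zero the low drop_bits mantissa bits of a float32 payload, trading a
    bounded accuracy loss for fewer significant bits to move across the NoC."""
    raw = struct.unpack("<I", struct.pack("<f", value))[0]
    raw &= 0xFFFFFFFF & ~((1 << drop_bits) - 1)
    return struct.unpack("<f", struct.pack("<I", raw))[0]
```

An approximate-communication controller would choose drop_bits per message class, truncating aggressively for error-tolerant traffic (e.g., recognition or mining data) and not at all for control traffic.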

Multi-page parity data storage in a memory device

A processing device, operatively coupled with a memory device, is configured to perform a write operation on a page of a plurality of pages of a data unit of a memory device, to store host data in the page of the data unit. The processing device further generates a parity page for the host data stored in the page of the data unit and adds the parity page to parity data stored at a parity data storage location. Responsive to determining that a first size of the stored parity data satisfies a first condition, the processing device initiates execution of a compression algorithm to compress the stored parity data. Responsive to determining that a second size of the parity data resulting from the execution of the compression algorithm satisfies a second condition, the processing device performs a scan operation to release at least a subset of the stored parity data.
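The two-condition flow can be sketched as follows; the bitwise-complement "parity", the threshold values, and the class name are all stand-ins of mine for the device's actual parity generation and conditions:

```python
import zlib

PAGE = 4096
COMPRESS_THRESHOLD = 8 * PAGE   # first condition: raw parity data has grown too large
RELEASE_THRESHOLD = 2 * PAGE    # second condition: compressed parity is small enough

class ParityAccumulator:
    """Accumulates one parity page per written host page, compressing and
    releasing the raw parity pages once the thresholds are crossed."""

    def __init__(self):
        self.parity_data = []   # parity data storage location

    def write_page(self, host_data):
        # Toy per-page parity (bitwise complement) stands in for the device's
        # real parity generation.
        self.parity_data.append(bytes(b ^ 0xFF for b in host_data))
        if sum(map(len, self.parity_data)) >= COMPRESS_THRESHOLD:   # condition 1
            blob = zlib.compress(b"".join(self.parity_data))
            if len(blob) <= RELEASE_THRESHOLD:                      # condition 2
                # The "scan" releases the raw parity pages, keeping only the
                # compressed copy.
                self.parity_data = [blob]
```

Deferring compression until the first threshold, and releasing raw pages only when the compressed result actually fits under the second, keeps the parity storage footprint bounded without compressing on every write.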

REDUCING ERROR IN DATA COMPRESSION

Systems and methods are provided for reducing error in data compression and decompression when data is transmitted over low bandwidth communication links, such as satellite links. Embodiments of the present disclosure provide systems and methods for variable block size compression for gridded data, efficiently storing null values in gridded data, and eliminating growth of error in compressed time series data.
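The "eliminating growth of error" idea for time series is typically achieved by quantizing each delta against the *reconstructed* previous value rather than the raw one, so quantization error never compounds. A minimal closed-loop sketch under that assumption (the tolerance value and function names are mine):

```python
def compress_series(values, tolerance=0.05):
    """Quantize each delta against the reconstructed previous value, so the
    per-sample error stays within tolerance/2 and never accumulates."""
    deltas, recon = [], 0.0
    for v in values:
        q = round((v - recon) / tolerance)
        deltas.append(q)
        recon += q * tolerance        # mirror exactly what the decoder will see
    return deltas

def decompress_series(deltas, tolerance=0.05):
    out, recon = [], 0.0
    for q in deltas:
        recon += q * tolerance
        out.append(recon)
    return out
```

An open-loop variant (deltas of raw values) would let each sample's quantization error add to the next; the closed loop above caps the error of every reconstructed sample at half the tolerance, independent of series length.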

DATA STORAGE METHOD, DATA ACQUISITION METHOD AND DEVICE THEREOF
20220261433 · 2022-08-18

Embodiments of the present application provide a data storage method, a data acquisition method, and a device thereof. The method includes allocating an N-dimensional first parameter vector for N pieces of to-be-stored data; performing N-dimensional permutation on the first parameter vector to obtain N second parameter vectors each having N dimensions; constructing a neural network model that maps the current second parameter vectors to expected data samples of the N pieces of to-be-stored data; adjusting model parameters of the neural network model and/or the first parameter vector until the expected data samples regress to the N pieces of to-be-stored data, the expected data samples being obtained from the current second parameter vectors based on the trained neural network model; and storing the current first parameter vector. Storing the first parameter vector is thus equivalent to storing the N pieces of to-be-stored data, which reduces high-dimensional data to low-dimensional data for storage and greatly reduces the required storage space.
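A minimal sketch of the storage idea, with two simplifications that are mine rather than the application's: the N pieces of data are scalars, an exactly-solvable linear model (Gauss-Jordan elimination) stands in for the neural network and its iterative training, and the N permutations are taken to be the cyclic shifts of the first parameter vector:

```python
def cyclic_permutations(p):
    """N second parameter vectors: the N cyclic shifts of the first vector."""
    n = len(p)
    return [p[i:] + p[:i] for i in range(n)]

def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting (stand-in for training)."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col]:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

data = [3.0, 1.0, 4.0, 1.5]              # N pieces of to-be-stored data
first = [1.0, 0.5, 0.25, 0.125]          # N-dimensional first parameter vector
seconds = cyclic_permutations(first)     # N second parameter vectors
weights = solve(seconds, data)           # "trained" model maps seconds -> data
restored = [sum(w * v for w, v in zip(weights, s)) for s in seconds]
```

Once the model regresses the second parameter vectors onto the data, storing the first parameter vector (plus the model) suffices to regenerate all N pieces, which is the claimed equivalence between storing the vector and storing the data.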

Compression in Lattice-Based Cryptography
20220303133 · 2022-09-22

Compiling a compression function of a lattice-based cryptographic mechanism by (i) basing the compression function on a lossy compression function, (ii) determining an error based on a loss introduced by an integer division, and (iii) determining an output of the compression function based on the error.
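A hedged sketch of the lossy compress/decompress pair common to lattice schemes; the constants are Kyber's, chosen here only as a concrete example, and the error helper simply exposes the loss introduced by the integer division:

```python
Q = 3329          # modulus (Kyber's q, as a concrete example)
D = 10            # number of retained bits

def compress(x, d=D, q=Q):
    """Lossy compression: round(2^d / q * x) mod 2^d. The integer division
    discards information; that loss is the error term."""
    return ((x * (1 << d) + q // 2) // q) % (1 << d)

def decompress(y, d=D, q=Q):
    """Approximate inverse: round(q / 2^d * y)."""
    return (y * q + (1 << (d - 1))) >> d

def compression_error(x, d=D, q=Q):
    """Signed difference between x and decompress(compress(x)), centered mod q."""
    e = (decompress(compress(x, d, q), d, q) - x) % q
    return e - q if e > q // 2 else e
```

The error is bounded by roughly q / 2^(d+1) plus the decompression rounding, so for these parameters every coefficient is recovered to within 2; a compiler in the patent's sense can determine the compression output from this tracked error rather than recomputing the division.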