Patent classifications
H03M7/3077
Method and system for sampling and converting vehicular network data
A method for sampling and converting vehicular network data is executed by a vehicle host. The vehicle host selects one of multiple data signals from an original signal, and establishes a data table. The vehicle host further determines whether the original signal includes any data signal remaining unselected. When the original signal does not include any data signal remaining unselected, the vehicle host differentially samples data in the data table corresponding to other time sequences by using the data in the data table corresponding to a first time sequence as a reference to generate a differential data table, and compresses the differential data table. The method can reduce the amount of data by performing differential sampling, so that the compression ratio of the data can be effectively improved, and the delay of data transmission can be avoided.
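The differential-sampling step above can be sketched as follows: the row for the first time sequence is kept as a reference, later rows are replaced by their deltas against it, and the resulting differential table is compressed. This is a minimal illustration assuming numeric samples; the function names are not the patent's.

```python
import json
import zlib

def differential_compress(table):
    """Keep the first time sequence as a reference row, store only
    deltas for the remaining rows, then compress the result."""
    reference = table[0]
    diff_table = [list(reference)]
    for row in table[1:]:
        diff_table.append([v - r for v, r in zip(row, reference)])
    return zlib.compress(json.dumps(diff_table).encode())

def differential_decompress(blob):
    """Invert the differential sampling to restore the original table."""
    diff_table = json.loads(zlib.decompress(blob).decode())
    reference = diff_table[0]
    return [reference] + [[d + r for d, r in zip(row, reference)]
                          for row in diff_table[1:]]

# Slowly drifting vehicular signals yield small deltas, which
# compress far better than the raw values.
table = [[1000, 2000, 3000],
         [1001, 2002, 2999],
         [1003, 2001, 3002]]
blob = differential_compress(table)
```

Because the deltas cluster near zero, the entropy coder inside zlib sees a much narrower value distribution than in the raw table, which is where the improved compression ratio comes from.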
BEZIER VOLUME REPRESENTATION OF POINT CLOUD ATTRIBUTES
The systems and methods discussed herein implement a volumetric approach to point cloud representation, compression, decompression, communication, or any suitable combination thereof. The volumetric approach can be used for both geometry and attribute compression and decompression, and both geometry and attributes can be represented by volumetric functions. To create a compressed representation of the geometry or attributes of a point cloud, a suitable set of volumetric functions are transformed, quantized, and entropy-coded. When decoded, the volumetric functions are sufficient to reconstruct the corresponding geometry or attributes of the point cloud.
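The transform/quantize/entropy-code pipeline for the volumetric-function coefficients can be sketched as below. The Bezier volume fitting itself is out of scope here; the coefficients are assumed to be the transformed representation already, the quantization step size is illustrative, and zlib stands in for a real entropy coder.

```python
import struct
import zlib

def encode_coefficients(coeffs, step=0.01):
    """Uniformly quantize volumetric-function coefficients and
    entropy-code the quantized integers (zlib as a stand-in)."""
    quantized = [round(c / step) for c in coeffs]
    packed = struct.pack(f"{len(quantized)}i", *quantized)
    return zlib.compress(packed)

def decode_coefficients(blob, step=0.01):
    """Decode and dequantize; reconstruction error is bounded by step/2."""
    raw = zlib.decompress(blob)
    quantized = struct.unpack(f"{len(raw) // 4}i", raw)
    return [q * step for q in quantized]
```

The decoded coefficients are sufficient to re-evaluate the volumetric functions, and thus to reconstruct the corresponding geometry or attributes, up to the chosen quantization error.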
GENERATING COMPRESSED REPRESENTATIONS OF SORTED ARRAYS OF IDENTIFIERS
A method includes obtaining an array of sorted identifiers to be stored in a designated portion of a memory of a given computing system, determining a segment size for splitting elements of the array into a plurality of segments, splitting the array into the plurality of segments based at least in part on the determined segment size, and compressing the plurality of segments to create a plurality of compressed segments. The method also includes generating a balanced binary search tree comprising a plurality of nodes each identifying a range of elements of the array corresponding to a given one of the segments and comprising a pointer to a given compressed segment corresponding to the given segment. The method further includes maintaining the balanced binary search tree and the compressed segments in the designated portion of the memory, and processing queries to the array utilizing the balanced binary search tree.
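The structure above can be sketched as follows. A sorted list of per-segment first elements stands in for the balanced binary search tree (both support the same range lookup), zlib stands in for the compressor, and the class name is hypothetical.

```python
import bisect
import struct
import zlib

class CompressedSortedArray:
    """Split a sorted array of identifiers into fixed-size segments,
    compress each segment, and answer membership queries by locating
    the one segment whose range may contain the query, decompressing
    only that segment."""

    def __init__(self, ids, segment_size=1024):
        self.first_elems = []   # range start of each segment
        self.segments = []      # compressed segments
        for i in range(0, len(ids), segment_size):
            chunk = ids[i:i + segment_size]
            self.first_elems.append(chunk[0])
            packed = struct.pack(f"{len(chunk)}q", *chunk)
            self.segments.append(zlib.compress(packed))

    def __contains__(self, ident):
        # Locate the segment whose range may contain ident.
        idx = bisect.bisect_right(self.first_elems, ident) - 1
        if idx < 0:
            return False
        raw = zlib.decompress(self.segments[idx])
        chunk = struct.unpack(f"{len(raw) // 8}q", raw)
        pos = bisect.bisect_left(chunk, ident)
        return pos < len(chunk) and chunk[pos] == ident
```

Only one segment is ever decompressed per query, so memory in the designated portion stays bounded by the compressed size plus one working segment.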
MEMORY SYSTEM AND INFORMATION PROCESSING SYSTEM
A memory system includes a nonvolatile memory, an interface circuit, and a controller configured to, upon receipt of a plurality of write commands for storing write data in the nonvolatile memory via the interface circuit, acquire compression-ratio information about the write data associated with each write command, determine a compression ratio of each write data based on the acquired compression-ratio information, and determine an execution order of the write commands based on the determined compression ratios.
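The ordering idea can be sketched as below. An actual controller would use compression-ratio hints delivered with the commands; here, as an assumption, the ratio is estimated by trial-compressing a prefix of each command's data.

```python
import zlib

def estimate_ratio(data, sample=4096):
    """Acquire compression-ratio information by trial-compressing a
    prefix; ratio = original size / compressed size."""
    chunk = data[:sample]
    return len(chunk) / max(1, len(zlib.compress(chunk)))

def order_writes(write_data_list):
    """Determine an execution order for write commands based on each
    command's compression ratio, most compressible first."""
    return sorted(write_data_list, key=estimate_ratio, reverse=True)
```

Scheduling highly compressible writes first lets the controller free up buffer and flash bandwidth sooner, which is the motivation for ordering on the ratio.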
HYBRID DATA REDUCTION
An information handling system may include at least one processor and a memory coupled to the at least one processor. The information handling system may be configured to receive data comprising a plurality of data chunks; perform deduplication on the plurality of data chunks to produce a plurality of unique data chunks; determine a compression ratio for respective pairs of the unique data chunks; determine a desired compression order for the plurality of unique data chunks based on the compression ratios; combine the plurality of unique data chunks in the desired compression order; and perform data compression on the combined plurality of unique data chunks.
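The pipeline above, deduplicate, order by pairwise compressibility, then compress the combination, can be sketched as follows. The greedy chaining rule (always append the chunk that pairs best with the current tail) is an assumption standing in for whatever ordering the system actually derives from the pairwise ratios.

```python
import zlib

def hybrid_reduce(chunks):
    """Deduplicate data chunks, greedily order the unique chunks by
    pairwise compressed size, then compress the concatenation."""
    # Deduplication, preserving first-seen order.
    unique = list(dict.fromkeys(chunks))

    def pair_size(a, b):
        # Compressed size of a pair approximates its joint redundancy.
        return len(zlib.compress(a + b))

    # Greedy ordering: repeatedly append the remaining chunk whose
    # pairing with the current tail compresses smallest.
    ordered = [unique[0]]
    remaining = unique[1:]
    while remaining:
        tail = ordered[-1]
        best = min(remaining, key=lambda c: pair_size(tail, c))
        ordered.append(best)
        remaining.remove(best)
    return zlib.compress(b"".join(ordered))
```

Placing mutually similar chunks adjacently lets the compressor's match window exploit cross-chunk redundancy that deduplication alone cannot remove.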
GENE SEQUENCING QUALITY LINE DATA COMPRESSION PRE-PROCESSING AND DECOMPRESSION AND RESTORATION METHODS, AND SYSTEM
This invention relates to methods and a system for gene sequencing quality line data compression pre-processing, decompression, and restoration. The basic principle is to extract several columns from an input quality line document or data block to act as index columns, and then rearrange all quality line data so that quality lines sharing the same index column form one group and are placed together, preserving their relative positions in the original data block. Since quality line data sharing the same index column are usually more similar, this data reorganization arranges similar gene sequencing data together, increasing the local similarity of the data.
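The rearrangement step can be sketched as below. The choice of columns 0 and 1 as index columns is illustrative; a stable sort guarantees that lines with the same index columns keep their original relative order, as the abstract requires.

```python
def rearrange_quality_lines(lines, index_cols=(0, 1)):
    """Group quality lines by a few extracted index columns; lines
    sharing the same index-column characters are placed together,
    in their original relative order (stable sort)."""
    def index_key(line):
        return tuple(line[c] for c in index_cols)
    return sorted(lines, key=index_key)

# Lines with the same leading quality characters end up adjacent,
# raising local similarity for the downstream compressor.
lines = ["FFII", "!!II", "FFJJ", "!!JJ"]
grouped = rearrange_quality_lines(lines)
```

For lossless restoration, the permutation applied here (or equivalently the original line positions) must also be recorded so the decompressor can undo the reordering.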
COMPRESSION TECHNIQUES FOR DATA STRUCTURES SUITABLE FOR ARTIFICIAL NEURAL NETWORKS
In artificial neural networks, and other similar applications, there is typically a large amount of data involved that is considered sparse data. Due to the large size of the data involved in such applications, it is helpful to compress the data to save bandwidth resources when transmitting the data and save memory resources when storing the data. Introduced herein is a compression technique that selects elements with significant values from data and restructures them into a structured sparse format. By generating metadata that enforces the structured sparse format and organizing the data according to the metadata, the introduced technique not only reduces the size of the data but also consistently places the data in a particular format. As such, hardware can be simplified and optimized to process the data much faster and much more efficiently than the conventional compression techniques that rely on a non-structured sparsity format.
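One common structured sparse format is 2:4 sparsity (keep the two largest-magnitude elements in every group of four), used here purely as an illustration of selecting significant values and generating position metadata; the abstract does not fix a particular pattern.

```python
def compress_2of4(values):
    """Restructure dense data into 2:4 structured sparse form: from
    every group of four, keep the two largest-magnitude elements
    plus a metadata mask recording their positions."""
    kept, masks = [], []
    for i in range(0, len(values), 4):
        group = values[i:i + 4]
        # Indices of the two largest-magnitude elements, position order.
        top2 = sorted(sorted(range(len(group)),
                             key=lambda j: abs(group[j]),
                             reverse=True)[:2])
        kept.extend(group[j] for j in top2)
        masks.append(top2)
    return kept, masks

def decompress_2of4(kept, masks, group=4):
    """Scatter the kept values back into dense groups of four."""
    out = []
    it = iter(kept)
    for top2 in masks:
        g = [0.0] * group
        for j in top2:
            g[j] = next(it)
        out.extend(g)
    return out
```

Because every group contributes exactly two values and a fixed-width mask, hardware can fetch and decode the data with a regular access pattern, which is the efficiency argument made above.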
COMPUTING SYSTEM AND COMPRESSING METHOD FOR NEURAL NETWORK PARAMETERS
A computing system and a compressing method for neural network parameters are provided. In the method, multiple neural network parameters used by a neural network algorithm are obtained. The parameters are grouped into encoding combinations of at least two parameters each, with the same number of parameters in every encoding combination. The encoding combinations are each compressed independently to the same compression target bit number, which is not larger than the bit number of each encoding combination. Thereby, storage space is saved and excessive power consumption for accessing the parameters is prevented.
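One way to compress each encoding combination independently to a fixed target bit number is per-group uniform quantization against the group's maximum magnitude, sketched below. The group size, bit budget, and quantization scheme are all illustrative assumptions, not the patent's specific encoding.

```python
def compress_groups(params, group_size=2, target_bits=8):
    """Group parameters into fixed-size encoding combinations and
    compress each combination independently into the same target
    bit budget via per-group uniform quantization."""
    bits_per_value = target_bits // group_size
    levels = (1 << bits_per_value) - 1
    encoded = []
    for i in range(0, len(params), group_size):
        group = params[i:i + group_size]
        scale = max(abs(v) for v in group) or 1.0
        # Pack the per-value codes into one target_bits-wide integer.
        code = 0
        for v in group:
            q = round((v / scale) * (levels // 2)) + levels // 2
            code = (code << bits_per_value) | q
        encoded.append((code, scale))
    return encoded
```

Because every combination decodes independently with a fixed bit width, a hardware accelerator can fetch exactly the groups it needs without decompressing neighbors, which is what saves access power.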
GENERATING A DATA STREAM WITH CONFIGURABLE COMPRESSION
One example method includes receiving a mixed data stream that was created using a first data stream and a second data stream, the mixed data stream having a compressibility of N, where N is a compressibility merging parameter, such that the compressibility of the mixed data stream is between the compressibility of the first data stream and that of the second data stream; providing the mixed data stream to an application and/or hardware; observing and recording a response of the application and/or hardware to the mixed data stream; and analyzing that response.
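One way to create such a mixed stream is block-level interleaving, where a fraction n of the output blocks come from the first stream and the rest from the second, so the output's compressibility lands between the two inputs'. The block size and the interleaving scheme are assumptions; the abstract only fixes the merging parameter N (taken here as n in [0, 1]).

```python
def mix_streams(stream_a, stream_b, n, block=64):
    """Create a mixed data stream whose compressibility lies between
    those of the inputs: a fraction n of the blocks are drawn from
    stream_a, the remainder from stream_b, with n in [0, 1]."""
    num_blocks = min(len(stream_a), len(stream_b)) // block
    out = bytearray()
    ia = ib = 0
    acc = 0.0
    for _ in range(num_blocks):
        acc += n
        if acc >= 1.0:
            out += stream_a[ia * block:(ia + 1) * block]
            ia += 1
            acc -= 1.0
        else:
            out += stream_b[ib * block:(ib + 1) * block]
            ib += 1
    return bytes(out)
```

Sweeping n from 0 to 1 yields test streams of graded compressibility, which is what lets the observed application or hardware responses be compared across a controlled range.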