Patent classifications
H03M7/34
Dynamic data compression selection
Aspects of dynamic data compression selection are presented. In an example method, as uncompressed data chunks of a data stream are compressed, at least one performance factor affecting selection of one of multiple compression algorithms for the uncompressed data chunks of the data stream may be determined. Each of the multiple compression algorithms may facilitate a different expected compression ratio. One of the multiple compression algorithms may be selected separately for each uncompressed data chunk of the data stream based on the at least one performance factor. Each uncompressed data chunk may be compressed using the selected one of the multiple compression algorithms for the uncompressed data chunk.
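A minimal Python sketch of per-chunk algorithm selection as described above. The two codecs are stand-ins (zlib at levels 1 and 9), and the "performance factor" is modeled as a hypothetical queue-depth callback; the abstract leaves both abstract.

```python
import zlib

def fast(chunk):
    # Low-latency codec with a lower expected compression ratio.
    return zlib.compress(chunk, 1)

def dense(chunk):
    # Slower codec with a higher expected compression ratio.
    return zlib.compress(chunk, 9)

def compress_stream(chunks, queue_depth):
    """Select a codec separately for every uncompressed chunk,
    re-evaluating the performance factor each time."""
    out = []
    for chunk in chunks:
        algo = fast if queue_depth() > 8 else dense
        out.append(algo(chunk))
    return out
```

Because selection is re-evaluated per chunk, a stream can switch codecs mid-flight as load changes; each compressed chunk still decompresses independently.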
Method and apparatus for efficient deflate decompression using content-addressable data structures
Apparatus and method for efficient compression block decoding using content-addressable structure for header processing. For example, one embodiment of an apparatus comprises: a header parser to extract a sequence of tokens and corresponding length values from a header of a compression block, the tokens and corresponding length values associated with a type of compression used to compress a payload of the compression block; and a content-addressable data structure builder to construct a content-addressable data structure based on the tokens and length values, the content-addressable data structure builder to write an entry in the content-addressable data structure comprising a length value and a count value, the count value indicating a number of times the length value was previously written to an entry in the content-addressable data structure.
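The (length, count) entries described above can be illustrated in a few lines: for each token's code length, the entry stores that length together with how many earlier entries already used the same length. Under canonical Huffman coding (as in DEFLATE), that pair uniquely determines the symbol's code. The list-of-tuples representation here is a software stand-in for the hardware content-addressable structure.

```python
from collections import defaultdict

def build_cam(code_lengths):
    """One entry per symbol: (length value, count value), where the count
    is the number of times that length was previously written."""
    seen = defaultdict(int)
    cam = []
    for length in code_lengths:
        cam.append((length, seen[length]))
        seen[length] += 1
    return cam
```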
Generalized neural network architectures based on frequency and/or time division
Certain aspects of the present disclosure provide techniques for measurement encoding and decoding using neural networks to compress and decompress measurement data. One example method generally includes: generating, via each of a plurality of neural network encoders operating on measurement data, a compressed measurement based on a respective portion of the measurement data, wherein each of the neural network encoders is based on the same neural network model; generating at least one message indicative of the measurement data based on the compressed measurements; and transmitting the at least one message.
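A toy sketch of the encoder arrangement above: several encoder instances share the same parameters (the "same neural network model"), each compresses its own portion of the measurement vector, and the compressed pieces form one message. The linear "encoder" and the sizes are illustrative stand-ins for a trained network.

```python
W = [0.5, -0.25, 0.125, 1.0]  # shared encoder weights: 4 inputs -> 1 output

def encode_portion(portion):
    # Every encoder applies the same shared model to its slice.
    return sum(w * x for w, x in zip(W, portion))

def build_message(measurements, n_encoders=4):
    size = len(measurements) // n_encoders
    portions = [measurements[i * size:(i + 1) * size] for i in range(n_encoders)]
    return [encode_portion(p) for p in portions]  # the transmitted message
```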
Systems and methods for data compression
The transmission of broadcast data, such as financial data and news feeds, is accelerated over a communication channel using data compression and decompression to provide secure transmission and transparent multiplication of communication bandwidth, as well as to reduce latency. Broadcast data may include packets having fields, and encoders associated with particular fields may be selected to compress those particular fields.
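A sketch of per-field encoder selection as described above. The field names and codec choices are hypothetical; the abstract only requires that encoders be associated with particular fields. A toy run-length coder handles repetitive fields, zlib handles free text.

```python
import zlib

def rle(data):
    """Toy run-length coder: emits (run length, byte) pairs."""
    out = bytearray()
    i = 0
    while i < len(data):
        j = i
        while j < len(data) and j - i < 255 and data[j] == data[i]:
            j += 1
        out += bytes([j - i, data[i]])
        i = j
    return bytes(out)

# Hypothetical mapping from field name to its associated encoder.
FIELD_ENCODERS = {"symbol": rle, "headline": zlib.compress}

def compress_packet(packet):
    # Unmapped fields pass through uncompressed.
    return {f: FIELD_ENCODERS.get(f, bytes)(v) for f, v in packet.items()}
```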
Instrumental analysis data processing method and device
After setting intensity values at or below a predetermined level in mass spectrum data as invalid data, an uncompressed data array in which intensity values are arrayed in order of m/z is divided into blocks, each containing a predetermined number of data values. When significant intensity values are consecutive from the start of a block, the run length and the respective intensity values are stored as compressed data; when invalid data values are consecutive, only the run length is stored. Sequence numbers of the data at the start of each block after compression are then collected to create an index, which is stored together with the compressed data.
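A sketch of the block-wise run-length scheme above. One assumption is loudly non-canonical: the abstract distinguishes significant runs (length plus values) from invalid runs (length only) but does not say how they are tagged, so this sketch marks invalid runs with a negative run length.

```python
def compress_blocks(intensities, threshold, block_size=8):
    """Values at or below `threshold` become invalid; each fixed-size block
    is stored as alternating runs. The index records where each block's
    compressed data begins."""
    data = [v if v > threshold else None for v in intensities]
    compressed, index = [], []
    for s in range(0, len(data), block_size):
        index.append(len(compressed))   # start position of this block
        block = data[s:s + block_size]
        i = 0
        while i < len(block):
            j = i
            if block[i] is not None:    # run of significant values
                while j < len(block) and block[j] is not None:
                    j += 1
                compressed.append(j - i)
                compressed.extend(block[i:j])
            else:                       # run of invalid data: length only
                while j < len(block) and block[j] is None:
                    j += 1
                compressed.append(-(j - i))  # assumed tag: negative length
            i = j
    return compressed, index
```

The index lets a reader seek directly to any block's compressed data without decoding the preceding blocks.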
Compression and/or encryption of a file
A computing device includes a memory and a controller. The controller is configured to encrypt and/or compress a file by transforming at least a portion of said file to a number and transforming the number to an exponent vector comprising at least one exponent, wherein each exponent corresponds to a base in a base vector, whereby the file is represented by the exponent vector and a family constant. The family constant is configured to align the number to be compressed and/or encrypted into a table family number, and the table family number represents a number family which is evenly divisible by the number.
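A sketch of the exponent-vector representation only: a number is expressed as exponents over a fixed base vector (small primes here, as an illustrative choice), so the number reduces to a short vector when it factors over those bases. The family-constant alignment mechanism is not modeled.

```python
BASES = [2, 3, 5, 7]  # illustrative base vector

def to_exponent_vector(n):
    """Factor n over BASES; the residue carries any remaining factor."""
    exps = []
    for b in BASES:
        e = 0
        while n % b == 0:
            n //= b
            e += 1
        exps.append(e)
    return exps, n

def from_exponent_vector(exps, residue=1):
    n = residue
    for b, e in zip(BASES, exps):
        n *= b ** e
    return n
```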
Computer-readable recording medium, encoding apparatus, and encoding method
An encoding apparatus reads text data of an encoding target and encodes each character or word in the text data by using a bit map type index in which an appearance position is associated, as bit map data, with each of the encoded characters or words appearing in the text data. The apparatus updates the bit map type index with respect to each encoded character or word.
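The index maintenance above can be sketched briefly: as each word is encoded, the bit at its position in the text is set in that word's bitmap, so the index stays current alongside the encoded output. The sequential code assignment is a toy stand-in; the abstract does not fix a particular code.

```python
def encode_text(words):
    codes, index, encoded = {}, {}, []
    for pos, word in enumerate(words):
        code = codes.setdefault(word, len(codes))      # assign next code id
        encoded.append(code)
        index[word] = index.get(word, 0) | (1 << pos)  # set appearance bit
    return encoded, index
```

A bitmap per word makes positional queries ("where does this word appear?") a matter of reading set bits, with no scan of the encoded stream.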
Decompression using cascaded history windows
The following description is directed to decompression using cascaded history buffers. In one example, an apparatus can include a decompression pipeline configured to decompress compressed data comprising code words that reference a history of decompressed data generated from the compressed data. The apparatus can include a first-level history buffer configured to store a more recent history of the decompressed data received from the decompression pipeline. The apparatus can include a second-level history buffer configured to store a less recent history of the decompressed data received from the first-level history buffer.
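A sketch of the two-level cascade above: decompressed bytes enter a small first-level history, bytes evicted from it cascade into a larger second-level history, and a back-reference at distance d reads from whichever level covers that distance. The buffer sizes are illustrative.

```python
from collections import deque

class CascadedHistory:
    def __init__(self, l1=4, l2=12):
        self.l1 = deque(maxlen=l1)  # more recent history
        self.l2 = deque(maxlen=l2)  # less recent history

    def push(self, byte):
        if len(self.l1) == self.l1.maxlen:
            self.l2.append(self.l1[0])  # evicted byte cascades to level 2
        self.l1.append(byte)

    def at_distance(self, d):
        # Small distances resolve in the first level, larger ones spill
        # over into the second level.
        if d <= len(self.l1):
            return self.l1[-d]
        return self.l2[-(d - len(self.l1))]
```

Splitting the history this way lets the small first-level buffer sit in fast storage close to the pipeline while the bulk of the window lives in a larger, slower buffer.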
Computer architecture for emulating an asynchronous correlithm object processing system
A device that includes a node engine configured to emulate a first node, a second node, and a third node. The first node is configured to receive a first correlithm object, fetch a second correlithm object based on the first correlithm object, and output the second correlithm object to the second node and the third node. Each correlithm object is a point in an n-dimensional space represented by a binary string. The second node is configured to receive the second correlithm object, fetch a third correlithm object based on the second correlithm object, and output the third correlithm object to the third node. The third node is configured to receive the second correlithm object, receive the third correlithm object, fetch a fourth correlithm object based on the second correlithm object and the third correlithm object, and output the fourth correlithm object.
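A toy sketch of the node wiring above: a correlithm object is an n-bit binary string, a node's table maps input objects to output objects, and "fetch" resolves an input to the closest stored key by Hamming distance. Combining the third node's two inputs by XOR is an assumption made purely so the example stays small.

```python
import random

N = 64  # dimensionality of the correlithm-object space

def hamming(a, b):
    return bin(a ^ b).count("1")

class Node:
    def __init__(self, table):
        self.table = table  # {input correlithm object: output object}

    def fetch(self, obj):
        # Resolve a (possibly noisy) input to the nearest stored key.
        key = min(self.table, key=lambda k: hamming(k, obj))
        return self.table[key]

rng = random.Random(0)
co1, co2, co3, co4 = (rng.getrandbits(N) for _ in range(4))
node1, node2 = Node({co1: co2}), Node({co2: co3})
node3 = Node({co2 ^ co3: co4})   # assumed combiner for the two inputs

out2 = node1.fetch(co1)          # node1 outputs co2 to node2 and node3
out3 = node2.fetch(out2)         # node2 outputs co3 to node3
out4 = node3.fetch(out2 ^ out3)  # node3 fetches co4 from both inputs
```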
High speed data weighted averaging architecture
Data weighted averaging of a thermometric coded input signal is accomplished by controlling the operation of a crossbar switch matrix to generate a current cycle of a data weighted averaging output signal using a control signal generated in response to feedback of a previous cycle of the data weighted averaging output signal. The control signal specifies a bit location for a beginning logic transition of the data weighted averaging output signal in the current cycle based on detection of an ending logic transition of the data weighted averaging output signal in the previous cycle.
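A behavioral sketch of the rotation above: each cycle asserts v of the unit elements, starting at the bit just after the previous cycle's ending logic transition and wrapping around the array. The pointer feedback plays the role of the control signal; the crossbar switch matrix is modeled as simple index arithmetic.

```python
def dwa(values, n_elems=8):
    """Data weighted averaging over a thermometric-coded element array."""
    ptr = 0
    cycles = []
    for v in values:
        sel = [0] * n_elems
        for i in range(v):
            sel[(ptr + i) % n_elems] = 1  # barrel-shifted selection
        ptr = (ptr + v) % n_elems  # next cycle starts after ending transition
        cycles.append(sel)
    return cycles
```

Rotating the starting element each cycle uses all unit elements equally over time, which is what pushes element-mismatch error out of the signal band.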