H03M7/34

Methods and apparatus to parallelize data decompression

Methods and apparatus to parallelize data decompression are disclosed. An example method includes selecting initial starting positions in a compressed data bitstream; adjusting a first one of the initial starting positions to determine a first adjusted starting position by decoding the bitstream starting at a training position in the bitstream, the decoding including traversing the bitstream from the training position as though first data located at the training position is a valid token; outputting first decoded data generated by decoding a first segment of the bitstream starting from the first adjusted starting position; and merging the first decoded data with second decoded data generated by decoding a second segment of the bitstream, the decoding of the second segment starting from a second position in the bitstream and being performed in parallel with the decoding of the first segment, and the second segment preceding the first segment in the bitstream.
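The start-position adjustment can be illustrated with a self-synchronizing code. The sketch below is a loose Python analogue, not the patented DEFLATE scheme: UTF-8 continuation bytes stand in for mid-token positions, and moving a guessed split point forward to the next character boundary plays the role of decoding from a training position until a valid token boundary is reached. All names are illustrative.

```python
def adjust_start(data: bytes, guess: int) -> int:
    """Move a guessed split point forward to the next UTF-8 character
    boundary (any byte that is not a continuation byte 0b10xxxxxx)."""
    pos = guess
    while pos < len(data) and (data[pos] & 0xC0) == 0x80:
        pos += 1
    return pos

def parallel_decode(data: bytes, nsegments: int = 2) -> str:
    """Split the stream at adjusted boundaries, decode each segment
    independently, and merge the decoded pieces in stream order."""
    bounds = [adjust_start(data, i * len(data) // nsegments)
              for i in range(nsegments)]
    bounds.append(len(data))
    # each segment could run on its own worker; kept serial here for clarity
    parts = [data[bounds[i]:bounds[i + 1]].decode("utf-8")
             for i in range(nsegments)]
    return "".join(parts)
```

In the patented scheme the segments are decoded concurrently and the adjusted boundaries come from speculative token-by-token decoding rather than a byte-class test, but the split/adjust/merge shape is the same.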

Analog-to-digital converter with dynamic range enhancer

A circuit includes a programmable gain amplifier (PGA) having a PGA output. The circuit further includes a delta-sigma modulator having an input coupled to the PGA output. The circuit also includes a digital filter and a dynamic range enhancer (DRE) circuit. The digital filter is coupled to the delta-sigma modulator output. The DRE circuit is coupled to the delta-sigma modulator output and to the PGA. The DRE circuit is configured to monitor a signal level of the delta-sigma modulator output. Responsive to the signal level being less than a DRE threshold, the DRE circuit is configured to program the PGA for a gain level greater than unity gain and to cause the digital filter to implement an attenuation of a same magnitude as the gain level to be programmed into the PGA.
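The gain/attenuation bookkeeping the DRE performs can be sketched in a few lines. This is a hypothetical software model, assuming a normalized full scale of 1.0 and an invented set of PGA gain steps; the actual circuit implements this in hardware.

```python
GAIN_STEPS = [1.0, 2.0, 4.0, 8.0]   # hypothetical PGA gain settings

def dre_update(signal_level, dre_threshold, current_gain=1.0):
    """If the modulator output level drops below the DRE threshold, pick a
    PGA gain > 1 and return the matching digital-filter attenuation so the
    end-to-end transfer function stays at unity."""
    if signal_level < dre_threshold:
        # choose the largest gain that keeps the amplified signal in range
        for g in reversed(GAIN_STEPS):
            if g > 1.0 and signal_level * g <= 1.0:  # 1.0 = assumed full scale
                return g, 1.0 / g   # (PGA gain, digital attenuation)
    return current_gain, 1.0 / current_gain
```

The key invariant from the abstract is that the returned attenuation always has the same magnitude as the gain programmed into the PGA, so the signal path gain is unchanged while the small signal rides higher above the modulator's noise floor.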

Dynamic data compression selection

Aspects of dynamic data compression selection are presented. In an example method, as uncompressed data chunks of a data stream are compressed, at least one performance factor affecting selection of one of multiple compression algorithms for the uncompressed data chunks of the data stream may be determined. Each of the multiple compression algorithms may facilitate a different expected compression ratio. One of the multiple compression algorithms may be selected separately for each uncompressed data chunk of the data stream based on the at least one performance factor. Each uncompressed data chunk may be compressed using the selected one of the multiple compression algorithms for the uncompressed data chunk.
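A minimal software sketch of per-chunk selection, assuming the performance factor is an available per-chunk CPU budget; the thresholds and the algorithm table are illustrative, not taken from the patent.

```python
import lzma
import zlib

# hypothetical algorithm table; each entry trades speed for compression ratio
ALGORITHMS = {
    "zlib-1": lambda d: zlib.compress(d, 1),   # fast, lower expected ratio
    "zlib-9": lambda d: zlib.compress(d, 9),   # slower, better ratio
    "lzma":   lambda d: lzma.compress(d),      # slowest, best expected ratio
}

def select_algorithm(cpu_budget_ms):
    """Pick an algorithm from a single performance factor, here an assumed
    per-chunk CPU budget in milliseconds (thresholds are illustrative)."""
    if cpu_budget_ms < 1:
        return "zlib-1"
    if cpu_budget_ms < 10:
        return "zlib-9"
    return "lzma"

def compress_stream(chunks, cpu_budget_ms):
    """Select separately for each uncompressed chunk, as the abstract
    describes, and emit (algorithm name, compressed chunk) pairs."""
    for chunk in chunks:
        name = select_algorithm(cpu_budget_ms)
        yield name, ALGORITHMS[name](chunk)
```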

Apparatus and method for data compression in a wearable device

Described is an apparatus and method for data compression using compressive sensing in a wearable device. Also described is a machine-readable storage medium having instructions stored thereon that, when executed, cause one or more processors to perform an operation comprising: receive an input signal from a sensor; convert the input signal to a digital stream; and symmetrically pad either end of the digital stream with a portion of the digital stream to form a padded digital stream.
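The padding step, read as mirror reflection (one common choice; the abstract only says a portion of the stream itself is used), might look like:

```python
def symmetric_pad(stream, pad_len):
    """Pad both ends of a digital stream with a reflected portion of the
    stream itself (mirror padding is an assumption; the abstract only
    requires that the padding come from the stream)."""
    if pad_len >= len(stream):
        raise ValueError("pad length must be shorter than the stream")
    head = stream[:pad_len][::-1]    # reflection of the leading samples
    tail = stream[-pad_len:][::-1]   # reflection of the trailing samples
    return head + stream + tail
```

Symmetric padding of this kind is commonly used before transform or filtering stages to avoid edge discontinuities.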

Real-time history-based byte stream compression
10651871 · 2020-05-12

Systems and methods for stream-based compression include an encoder of a first device that receives an input stream of bytes including a first byte preceded by one or more second bytes. The encoder may determine to identify a prefix code for the first byte. The encoder may select a prefix code table using the one or more second bytes. The encoder may identify, from the selected prefix code table, the prefix code of the first byte. The encoder may generate an output stream of bytes by replacing the first byte in the input stream with the prefix code of the first byte. The encoder may transmit the output stream from the encoder of the first device to a decoder of a second device. The output stream may have a fewer number of bits than the input stream.
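A toy illustration of selecting a prefix code table from the preceding byte; the two tables and the raw-byte escape are invented for the example, and a real encoder would carry many history-indexed tables.

```python
# Two invented prefix-code tables; the previous byte selects which table
# codes the current byte (a tiny stand-in for history-indexed tables).
TABLE_AFTER_VOWEL = {ord('n'): '0', ord('t'): '10', ord('s'): '110'}
TABLE_DEFAULT     = {ord('e'): '0', ord('a'): '10', ord('o'): '110'}

def encode(data: bytes) -> str:
    bits, prev = [], 0
    for b in data:
        # the one-byte history selects the code table for the current byte
        table = TABLE_AFTER_VOWEL if prev in b"aeiou" else TABLE_DEFAULT
        # '111' escapes a raw byte when the table has no entry for it,
        # keeping the whole code set prefix-free
        bits.append(table.get(b, '111' + format(b, '08b')))
        prev = b
    return ''.join(bits)
```

Frequent bytes in each context get short codes, so the output stream carries fewer bits than the input whenever the history is predictive, matching the abstract's claim.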

General-purpose processor instruction to perform compression/decompression operations

A DEFLATE Conversion Call general-purpose processor instruction is provided. The instruction is obtained by a general-purpose processor of the computing environment and is a single architected instruction of an instruction set architecture that complies with an industry standard for compression. The instruction is executed, and the executing includes transforming, based on whether the function to be performed by the instruction is a compression function or a decompression function, a state of input data between an uncompressed form of the input data and a compressed form of the input data to provide a transformed state of data. The transformed state of the data is provided as output to be used in performing a task.
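In software terms, the single-instruction transform corresponds to one compress-or-decompress entry point against the DEFLATE standard (RFC 1951), which zlib can emulate with raw streams; this is only an analogue of the architected instruction, not the instruction itself.

```python
import zlib

def deflate_transform(data, decompress):
    """Software analogue of a single DEFLATE conversion call: one entry
    point, a function selector choosing the direction, input state in,
    transformed state out. Raw DEFLATE (wbits=-15) omits the zlib header
    so the payload matches the RFC 1951 bitstream."""
    if decompress:
        return zlib.decompress(data, wbits=-15)
    c = zlib.compressobj(wbits=-15)
    return c.compress(data) + c.flush()
```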

Information processing apparatus, information processing method, and recording medium storing program
10601444 · 2020-03-24

An information processing apparatus includes: a processor; and a processing circuit coupled to the processor, wherein the processing circuit is configured to: generate compressed data by compressing send data; and determine whether to transmit the compressed data or the send data before the compression to a network, based on a size of the compressed data, and wherein the processor is configured to transmit the compressed data or the send data before the compression to the network, based on a result of the determination.
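The size-based send decision is straightforward to sketch; zlib stands in for whatever compressor the apparatus uses, and the boolean flag is an invented framing detail telling the receiver which form it got.

```python
import zlib

def frame_for_send(send_data):
    """Compress the send data, then transmit the compressed form only when
    it is actually smaller; otherwise fall back to the original data."""
    compressed = zlib.compress(send_data)
    if len(compressed) < len(send_data):
        return True, compressed       # (is_compressed, payload)
    return False, send_data
```

This guards against incompressible data, where the compressed form plus its header overhead can exceed the original size.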

Multi-mode compression acceleration

A computer system includes a plurality of hardware processors and a hardware accelerator. A first processor among the plurality of processors runs an application that issues a data compression request to compress or decompress a data stream. The hardware accelerator selectively operates in different modes to compress or decompress the data stream. Based on a selected mode, the hardware accelerator can utilize a different number of processors among the plurality of hardware processors to compress or decompress the data stream.
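A rough software model of mode-dependent parallelism, with threads standing in for the accelerator's processors and an invented mode table:

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

# hypothetical mode table: mode name -> number of workers to use
MODES = {"low-power": 1, "balanced": 2, "throughput": 4}

def compress_stream(chunks, mode="balanced"):
    """Compress independent chunks of a stream with a mode-dependent
    number of workers (threads model the accelerator's processors)."""
    with ThreadPoolExecutor(max_workers=MODES[mode]) as pool:
        return list(pool.map(zlib.compress, chunks))
```

The design point mirrored here is that the caller's request stays the same across modes; only the resource commitment behind it changes.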

Area efficient decompression acceleration

An embodiment of a semiconductor package apparatus may include technology to load compressed symbols in a data stream into a first content accessible memory, break a serial dependency of the compressed symbols in the compressed data stream, and decode more than one symbol per clock. Other embodiments are disclosed and claimed.
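A toy table-driven decoder hints at the idea: a lookup keyed on the upcoming bits plays the role of the content accessible memory, and two cascaded lookups per loop iteration stand in for decoding more than one symbol per clock. The code table is invented for the example.

```python
CODES = {'0': 'a', '10': 'b', '11': 'c'}   # hypothetical prefix code

def lookup(bits, pos):
    """One table lookup: match a code at pos, return (symbol, next pos)."""
    for code, sym in CODES.items():
        if bits.startswith(code, pos):
            return sym, pos + len(code)
    raise ValueError("bad bitstream")

def decode_two_per_step(bits):
    """Emit up to two symbols per loop iteration (per modeled 'clock') by
    cascading lookups, rather than one symbol per serial step."""
    out, pos = [], 0
    while pos < len(bits):
        sym, pos = lookup(bits, pos)       # first symbol this "clock"
        out.append(sym)
        if pos < len(bits):
            sym, pos = lookup(bits, pos)   # second symbol, same "clock"
            out.append(sym)
    return ''.join(out)
```

In hardware, the second lookup is speculative on every possible length of the first symbol, which is how the serial dependency between consecutive symbols is broken.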

Signal encoder, decoder and methods using predictor models
10530388 · 2020-01-07

A signal encoder divides the signal into segments and uses prediction models to approximate the samples of each segment. Each local prediction model, applicable to one segment, is applied in its own translated axis system within the segment, and the offset is given by the last predicted value for the previous segment. When the signal is reasonably continuous, this alleviates the need to parameterize the offset for each local predictor model, as each local predictor model can build on the last predicted sample value of the previous segment. As a consequence, the encoder does not suffer from a build-up of error even though the offset is not transmitted; instead, the last predicted value of the last sample of the previous segment is used. Prediction errors are obtained for the approximated samples and transmitted to the decoder, together with the predictor model parameters and a seed value, to allow accurate reconstruction of the signal by the decoder.
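The offset hand-off between segments can be sketched with a crude linear predictor per segment; the model fit below is illustrative, as the abstract does not specify a particular predictor.

```python
def encode(signal, seg_len):
    """Per-segment linear predictor: each segment's model runs in an axis
    system offset by the last predicted value of the previous segment, so
    only a slope per segment plus residuals need be transmitted."""
    offset, out = signal[0], []                 # first sample is the seed
    for s in range(0, len(signal), seg_len):
        seg = signal[s:s + seg_len]
        slope = (seg[-1] - offset) / len(seg)   # crude fit (illustrative)
        residuals = [seg[i] - (offset + slope * (i + 1))
                     for i in range(len(seg))]
        out.append((slope, residuals))
        offset = offset + slope * len(seg)      # last predicted value carries over
    return signal[0], out

def decode(seed, params):
    """Rebuild the signal from the seed, per-segment slopes, and residuals,
    reproducing the same offset hand-off as the encoder."""
    offset, signal = seed, []
    for slope, residuals in params:
        for i, r in enumerate(residuals):
            signal.append(offset + slope * (i + 1) + r)
        offset = offset + slope * len(residuals)
    return signal
```

Because the decoder derives each segment's offset from its own predictions rather than from a transmitted value, encoder and decoder stay in lock-step and no offset error accumulates across segments.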