H03M7/3084

METHODS AND DEVICES FOR COMPRESSING SIGNED MEDIA DATA
20230208614 · 2023-06-29 ·

A signed media bitstream comprises data units and signature units. Each signature unit is associated with one or more nearby data units and includes at least one fingerprint derived from the associated data units and a digital signature of the at least one fingerprint. A storing method comprises: receiving a segment of the media bitstream; identifying N≥2 instances of a repeating data unit in the received segment; pruning up to N−1 of the identified instances of the repeating data unit; and storing the received segment after pruning. A validation method comprises: receiving a segment of the media bitstream stored in accordance with the storing method; and validating a signature unit using a digital signature contained therein. Despite the pruning of the repeating data unit, the received associated data units can still be successfully validated, either directly or indirectly, by means of the different embodiments described herein.
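A minimal software sketch of the pruning step, assuming hash digests serve as fingerprints; the names `fingerprint` and `prune_segment` are illustrative, not from the patent:

```python
import hashlib

# Hypothetical sketch of the storing method; names and structures are
# illustrative assumptions, not the patent's actual bitstream layout.

def fingerprint(data_unit: bytes) -> bytes:
    """Derive a fingerprint of a data unit (here, a SHA-256 digest)."""
    return hashlib.sha256(data_unit).digest()

def prune_segment(data_units: list[bytes]) -> list[bytes]:
    """Identify N >= 2 instances of a repeating data unit and prune up to
    N - 1 of them, keeping the first occurrence so that signatures over
    the remaining fingerprints can still be validated."""
    seen, kept = set(), []
    for unit in data_units:
        fp = fingerprint(unit)
        if fp in seen:
            continue              # prune a repeated instance
        seen.add(fp)
        kept.append(unit)
    return kept
```

Because the surviving first instance carries the same fingerprint as the pruned copies, a signature computed over the fingerprints can still be checked after pruning.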

Techniques to configure physical compute resources for workloads via circuit switching

Embodiments are generally directed to apparatuses, methods, techniques, and so forth to select two or more processing units of the plurality of processing units to process a workload, and configure a circuit switch to link the two or more processing units to process the workload, the two or more processing units each linked to each other via paths of communication and the circuit switch.
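A toy sketch of the selection-and-linking flow; the least-loaded selection policy and full-mesh link list are assumptions for illustration only:

```python
# Hypothetical sketch: pick processing units for a workload and derive the
# circuit-switch paths linking them. Policy and data shapes are assumed.

def select_units(units: list[dict], required: int) -> list[dict]:
    """Select the `required` least-loaded processing units (assumed policy)."""
    return sorted(units, key=lambda u: u["load"])[:required]

def configure_switch(selected: list[dict]) -> list[tuple]:
    """Return the full mesh of circuit-switch links among selected units,
    so each unit is linked to each other via the switch."""
    ids = [u["id"] for u in selected]
    return [(a, b) for i, a in enumerate(ids) for b in ids[i + 1:]]
```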

VLSI EFFICIENT HUFFMAN ENCODING APPARATUS AND METHOD
20170366198 · 2017-12-21 ·

A compression algorithm based on Huffman coding is disclosed that is adapted to be readily implemented using VLSI design. A data file may be processed to replace duplicate data with copy commands, each including an offset and a length, such as according to the LZ algorithm. A Huffman code may then be generated for parts of the file. The Huffman code may be generated according to a novel method that generates Huffman code lengths for literals in a data file without first sorting the literal statistics. The Huffman code lengths may be constrained to be no longer than a maximum length and the Huffman code may be modified to provide an acceptable overflow probability and be in canonical order. Literals, offsets, and lengths may be separately encoded. The different values for these data sets may be assigned to a limited number of bins for the purpose of generating usage statistics used for generating Huffman codes.
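A sketch of the canonical-code assignment the abstract refers to. Note this uses a conventional heap-based construction of code lengths; the patent's sort-free length computation, maximum-length constraint, and overflow handling are not reproduced here:

```python
import heapq

# Illustrative only: conventional Huffman length computation plus
# canonical code assignment, not the patent's novel VLSI-oriented method.

def code_lengths(freqs: dict[str, int]) -> dict[str, int]:
    """Compute Huffman code lengths from symbol frequencies."""
    heap = [(f, i, {s: 0}) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tie = len(heap)               # unique tiebreaker so dicts never compare
    while len(heap) > 1:
        f1, _, d1 = heapq.heappop(heap)
        f2, _, d2 = heapq.heappop(heap)
        merged = {s: depth + 1 for s, depth in {**d1, **d2}.items()}
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

def canonical_codes(lengths: dict[str, int]) -> dict[str, str]:
    """Assign codes in canonical order: symbols sorted by (length, symbol),
    codes incremented and left-shifted as the length grows."""
    code, prev_len, out = 0, 0, {}
    for sym, ln in sorted(lengths.items(), key=lambda kv: (kv[1], kv[0])):
        code <<= (ln - prev_len)
        out[sym] = format(code, f"0{ln}b")
        code += 1
        prev_len = ln
    return out
```

Canonical ordering matters in hardware because the full code table can be reconstructed from the code lengths alone.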

Database replication using adaptive compression

Methods, computer program products, and/or systems are provided that perform the following operations: in a data replication environment, analyzing a database workload to generate a knowledge base of information related to compression; dividing a transfer data stream into different segments based, at least in part, on the knowledge base; obtaining candidate compression types for the transfer data stream based, at least in part, on the knowledge base; assigning respective compression types of the candidate compression types to the different segments; generating compressed segments based, at least in part, on the respective compression types assigned to the different segments; and providing the compressed segments to a replication target.
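A minimal sketch of the assignment step, with the knowledge base reduced to a trivial heuristic: try candidate codecs on a sample of each segment and assign whichever compresses best. All names and the candidate set are assumptions:

```python
import lzma
import zlib

# Hypothetical per-segment adaptive compression; the real knowledge base
# would be built by analyzing the database workload, not by sampling alone.

CANDIDATES = {
    "zlib": lambda b: zlib.compress(b, 6),
    "lzma": lambda b: lzma.compress(b, preset=1),
    "none": lambda b: b,
}

def assign_compression(segment: bytes, sample_size: int = 1024) -> str:
    """Assign the candidate codec that compresses a sample of the segment best."""
    sample = segment[:sample_size]
    return min(CANDIDATES, key=lambda name: len(CANDIDATES[name](sample)))

def compress_segments(segments: list[bytes]) -> list[tuple[str, bytes]]:
    """Produce (codec, payload) pairs for transfer to a replication target."""
    out = []
    for seg in segments:
        codec = assign_compression(seg)
        out.append((codec, CANDIDATES[codec](seg)))
    return out
```

The "none" candidate captures the case where a segment is already incompressible and raw transfer is cheapest.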

APPARATUS AND METHOD FOR CONSTANT DETECTION DURING COMPRESS OPERATIONS

Apparatus and method for detecting a constant data block are described herein. An apparatus embodiment includes compression circuitry to perform compression operations on a memory block; constant detection circuitry to, concurrently with performance of the compression operations on the memory block, determine that the memory block is a constant data block comprised of only repeat instances of a constant value; and controller circuitry to associate a first indication with the memory block based on the determination, the first indication usable for controlling whether to abort the compression operations or whether to discard a compressed memory block generated from the compression operations.
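A minimal software sketch of the idea (sequential here, where the hardware embodiment runs detection concurrently with compression); function names are illustrative:

```python
import zlib

# Sketch: compression of a memory block proceeds while a separate check
# determines whether the block is only repeat instances of one constant
# value; if so, the compressed output is discarded in favor of a compact
# constant-block indication.

def is_constant_block(block: bytes, value_size: int = 4) -> bool:
    """True if the block consists only of one repeated value_size-byte value."""
    if not block or len(block) % value_size:
        return False
    return block == block[:value_size] * (len(block) // value_size)

def store_block(block: bytes):
    compressed = zlib.compress(block)      # compression runs regardless
    if is_constant_block(block):           # detection decides what to keep
        return ("constant", block[:4])     # discard the compressed output
    return ("compressed", compressed)
```

Storing only the constant value plus an indication is both smaller and faster to reconstruct than any general-purpose compressed form of the block.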

COMPUTING SYSTEM WITH DATA TRANSFER BASED UPON DEVICE DATA FLOW CHARACTERISTICS AND RELATED METHODS
20220385718 · 2022-12-01 ·

A computing system may include a server, and a client computing device in communication with the server. The server may be configured to provide a corresponding virtual desktop instance for the client computing device. The computing system may include a local device to be coupled to a given client computing device and to be operable in a given virtual desktop instance associated with the given client computing device, thereby generating client initialization packets. The server may be configured to generate a server mapping table. The given client computing device may be configured to generate a client mapping table, replace a client packet with a client mapping ID number to define compressed client initialization packets, and send the compressed client initialization packets to the server. The server may be configured to replace the client mapping ID number with the client packet in the compressed client initialization packets based upon the server mapping table.
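A toy sketch of the mapping-table substitution, assuming (for illustration only) that the client and server tables are kept synchronized; the table layout and ID scheme are not the patent's format:

```python
# Hypothetical mapping table: repeated client packets are replaced by small
# mapping ID numbers on the wire and restored from the table on the server.

class MappingTable:
    def __init__(self):
        self.packet_to_id: dict[bytes, int] = {}
        self.id_to_packet: dict[int, bytes] = {}

    def add(self, packet: bytes) -> int:
        """Return the mapping ID for a packet, registering it if new."""
        if packet not in self.packet_to_id:
            new_id = len(self.packet_to_id)
            self.packet_to_id[packet] = new_id
            self.id_to_packet[new_id] = packet
        return self.packet_to_id[packet]

def client_compress(packets: list[bytes], table: MappingTable) -> list[int]:
    """Replace each client packet with its mapping ID number."""
    return [table.add(p) for p in packets]

def server_expand(ids: list[int], table: MappingTable) -> list[bytes]:
    """Replace each mapping ID number with the original client packet."""
    return [table.id_to_packet[i] for i in ids]
```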

PARALLEL DECOMPRESSION OF COMPRESSED DATA STREAMS
20220376701 · 2022-11-24 ·

In various examples, metadata may be generated corresponding to compressed data streams that are compressed according to serial compression algorithms—such as arithmetic encoding, entropy encoding, etc.—in order to allow for parallel decompression of the compressed data. As a result, modification to the compressed data stream itself may not be required, and bandwidth and storage requirements of the system may be minimally impacted. In addition, by parallelizing the decompression, the system may benefit from faster decompression times while also reducing or entirely removing the adoption cycle for systems using the metadata for parallel decompression.
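A sketch of metadata-assisted parallel decompression. Here the independently decodable restart points are created with DEFLATE's `Z_FULL_FLUSH` (which resets the compressor's history) and the metadata records each segment's byte range; the patent targets serial streams more generally, including arithmetic and entropy coding, without requiring such flush points:

```python
import zlib
from concurrent.futures import ThreadPoolExecutor

# Illustrative sketch: metadata (segment offsets and lengths) allows each
# segment of one compressed stream to be decompressed in parallel.

def compress_with_metadata(chunks: list[bytes]):
    comp = zlib.compressobj(6, zlib.DEFLATED, -15)   # raw DEFLATE
    stream, metadata, pos = b"", [], 0
    for chunk in chunks:
        # Z_FULL_FLUSH resets the history, so the next segment is decodable
        # on its own; record where each segment lives in the stream.
        seg = comp.compress(chunk) + comp.flush(zlib.Z_FULL_FLUSH)
        stream += seg
        metadata.append((pos, len(seg)))
        pos += len(seg)
    return stream, metadata

def parallel_decompress(stream: bytes, metadata) -> bytes:
    def decode(entry):
        off, n = entry
        return zlib.decompressobj(-15).decompress(stream[off:off + n])
    with ThreadPoolExecutor() as pool:
        return b"".join(pool.map(decode, metadata))
```

Because the metadata travels alongside an otherwise ordinary stream, a consumer without the metadata can still decompress serially, which matches the abstract's point about a minimal adoption cycle.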

INITIALIZING A PSEUDO-DYNAMIC DATA COMPRESSION SYSTEM WITH PREDETERMINED HISTORY DATA TYPICAL OF ACTUAL DATA

In at least one embodiment, a history data structure of a Lempel-Ziv compressor is preloaded with fixed predetermined history data typical of actual data of a workload of the Lempel-Ziv compressor. The Lempel-Ziv compressor then compresses each of multiple data pages in a sequence of data pages by reference to the fixed predetermined history data.
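zlib's preset-dictionary feature illustrates the same idea in software: the compressor's history is preloaded with data typical of the workload before any input is seen, so early input can be coded as matches against it. The patent describes preloading a hardware Lempel-Ziv compressor's history data structure; `zdict` here is only an analogy, and the sample history is invented:

```python
import zlib

# Assumed "typical of actual data" history for a JSON-like workload.
HISTORY = b'{"status": "ok", "user_id": 0, "timestamp": 0}' * 8

def compress_page(page: bytes) -> bytes:
    # Preload the compressor's history with the fixed predetermined data.
    comp = zlib.compressobj(zdict=HISTORY)
    return comp.compress(page) + comp.flush()

def decompress_page(blob: bytes) -> bytes:
    # The decompressor must be preloaded with the same history.
    return zlib.decompressobj(zdict=HISTORY).decompress(blob)
```

Each page is compressed by reference to the same fixed history, so pages remain independently decompressible, unlike a scheme that chains each page's history to the previous page.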

CLOUD-BASED SCALE-UP SYSTEM COMPOSITION

Technologies for composing a managed node with multiple processors on multiple compute sleds to cooperatively execute a workload include a memory, one or more processors connected to the memory, and an accelerator. The accelerator further includes a coherence logic unit that is configured to receive a node configuration request to execute a workload. The node configuration request identifies the compute sled and a second compute sled to be included in a managed node. The coherence logic unit is further configured to modify a portion of local working data associated with the workload on the compute sled in the memory with the one or more processors of the compute sled, determine coherence data indicative of the modification made by the one or more processors of the compute sled to the local working data in the memory, and send the coherence data to the second compute sled of the managed node.
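A toy sketch of the coherence flow: a sled modifies its local working data, records which entries changed (the "coherence data"), and ships just those deltas to the second sled. All names and the key-value representation are illustrative assumptions:

```python
# Hypothetical sled-level coherence tracking; real coherence data would
# describe memory modifications, not dictionary entries.

class Sled:
    def __init__(self, working: dict):
        self.working = dict(working)
        self.dirty: dict = {}

    def modify(self, key, value):
        """Modify local working data and track the modification."""
        self.working[key] = value
        self.dirty[key] = value

    def coherence_data(self) -> dict:
        """Return data indicative of modifications since the last call."""
        deltas, self.dirty = self.dirty, {}
        return deltas

    def apply(self, deltas: dict):
        """Apply coherence data received from another sled of the node."""
        self.working.update(deltas)
```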

Data reduction in block-based storage systems using content-based block alignment
11507273 · 2022-11-22 ·

A method of data reduction in a block-based data storage system includes selecting a starting position in a block based on a deterministic function of block data content. Then for an unaligned block beginning at the selected starting position, a block digest (e.g., block hash) is generated and compared with stored block digests of stored data blocks. If there is a match, and the stored block matches the unaligned block, then a reference to the stored block is stored in place of the unaligned block, and otherwise the unaligned block and a corresponding digest are stored. The storing of references to already stored blocks, without the constraint of observing aligned-block boundaries, realizes increased savings of physical storage space.
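A hypothetical sketch of the flow: the anchor function, chunk size, and store layout below are illustrative stand-ins, not the patented mechanisms:

```python
import hashlib

CHUNK = 64     # unaligned-block size (toy value; real systems use e.g. 4 KiB)
WINDOW = 8

def anchor(data: bytes) -> int:
    """Deterministic function of content: the offset (within the first CHUNK
    bytes) whose WINDOW-byte hash is smallest, so identical content yields
    the same starting position regardless of external alignment."""
    limit = min(CHUNK, len(data) - WINDOW)
    return min(range(limit),
               key=lambda i: hashlib.sha256(data[i:i + WINDOW]).digest())

def reduce_blocks(data: bytes, store: dict) -> list:
    """Digest unaligned blocks beginning at the content-defined start;
    store each distinct block once and record only references (digests)."""
    refs, start = [], anchor(data)
    for off in range(start, len(data) - CHUNK + 1, CHUNK):
        block = data[off:off + CHUNK]
        digest = hashlib.sha256(block).digest()
        # On a digest match, verify the stored block really matches before
        # relying on the reference; otherwise store the block and digest.
        if digest not in store or store[digest] != block:
            store[digest] = block
        refs.append(digest)
    return refs
```

Because the starting position is derived from content rather than fixed block boundaries, shifted copies of the same data can still resolve to identical unaligned blocks and deduplicate.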