H03M7/6064

COMPRESSION OF NEURAL NETWORK ACTIVATION DATA

A processor arranged to compress neural network activation data comprises an input module for obtaining the neural network activation data. The processor also comprises a block creation module arranged to split the neural network activation data into a plurality of blocks, and a metadata generation module for generating metadata associated with at least one of the plurality of blocks. Based on the generated metadata, a selection module selects a compression scheme for each of the plurality of blocks, and a compression module applies the selected compression scheme to the corresponding block to produce compressed neural network activation data. An output module is also provided for outputting the compressed neural network activation data.
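The metadata-driven selection described above can be sketched as follows. The block size, metadata fields, scheme names, and thresholds here are illustrative assumptions, not details from the abstract.

```python
import zlib

def split_into_blocks(data: bytes, block_size: int = 64) -> list[bytes]:
    """Split activation data into fixed-size blocks."""
    return [data[i:i + block_size] for i in range(0, len(data), block_size)]

def block_metadata(block: bytes) -> dict:
    """Cheap per-block statistics used to pick a scheme."""
    return {
        "zero_ratio": block.count(0) / len(block),
        "distinct": len(set(block)),
    }

def select_scheme(meta: dict) -> str:
    if meta["zero_ratio"] > 0.5:
        return "zero-rle"   # sparse activations: run-length encode zeros
    if meta["distinct"] <= 16:
        return "deflate"    # low-entropy block: dictionary coder
    return "raw"            # likely incompressible: store as-is

def zero_rle(block: bytes) -> bytes:
    """Encode zero runs as (0, run_length) pairs; literals as (1, value)."""
    out = bytearray()
    i = 0
    while i < len(block):
        if block[i] == 0:
            j = i
            while j < len(block) and block[j] == 0 and j - i < 255:
                j += 1
            out += bytes([0, j - i])
            i = j
        else:
            out += bytes([1, block[i]])
            i += 1
    return bytes(out)

def compress_block(block: bytes) -> tuple[str, bytes]:
    """Select a scheme from the block's metadata, then apply it."""
    scheme = select_scheme(block_metadata(block))
    if scheme == "zero-rle":
        return scheme, zero_rle(block)
    if scheme == "deflate":
        return scheme, zlib.compress(block)
    return scheme, block
```

Under these assumed thresholds, an all-zero 64-byte block collapses to a single two-byte run token, while a high-entropy block is passed through untouched.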

BLENDSHAPE COMPRESSION SYSTEM
20210019916 · 2021-01-21

The systems and methods described herein can pre-process a blendshape matrix via a global clusterization process and a local clusterization process. The pre-processing can cause the blendshape matrix to be divided into multiple blocks. The techniques can further apply a matrix compression technique to each block of the blendshape matrix to generate a compression result. The matrix compression technique can comprise a matrix approximation step, an accuracy verification step, and a recursive compression step. The compression result for each block may be combined to generate a compressed blendshape matrix for rendering a virtual entity.
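The approximate-verify-recurse loop on a single block can be sketched with truncated SVD standing in for the matrix approximation step; the tolerance and rank schedule are assumptions, not details from the abstract.

```python
import numpy as np

def compress_block(block: np.ndarray, rank: int = 1, tol: float = 1e-6):
    """Low-rank approximation with accuracy verification and recursion."""
    u, s, vt = np.linalg.svd(block, full_matrices=False)
    factors = (u[:, :rank] * s[:rank], vt[:rank])
    approx = factors[0] @ factors[1]
    # Accuracy verification: if the error is too large, recurse with a
    # higher rank; store the block densely once full rank is reached.
    if np.max(np.abs(approx - block)) > tol:
        if rank < min(block.shape):
            return compress_block(block, rank + 1, tol)
        return "dense", block
    return "lowrank", factors
```

The per-block results (factor pairs, or dense fallbacks for incompressible blocks) would then be combined into the compressed blendshape matrix.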

Hierarchical point cloud compression

A system comprises an encoder configured to compress attribute information for a point cloud and/or a decoder configured to decompress compressed attribute information for the point cloud. Attribute values for at least one starting point are included in a compressed attribute information file, and attribute correction values used to correct predicted attribute values are included in the compressed attribute information file. Attribute values are predicted based, at least in part, on attribute values of neighboring points and distances between a particular point for which an attribute value is being predicted and the neighboring points. The predicted attribute values are compared to attribute values of a point cloud prior to compression to determine attribute correction values. A decoder follows a similar prediction process as an encoder and corrects predicted values using attribute correction values included in a compressed attribute information file.
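The prediction-and-correction scheme can be sketched minimally: points are visited in order, each attribute is predicted by inverse-distance weighting over the k nearest already-coded neighbors, and only the starting attribute plus correction values are stored. The value of k and the weighting function are illustrative choices, not details from the abstract.

```python
import math

def _predict(pos, coded_positions, coded_attrs, k=3):
    """Inverse-distance-weighted prediction from nearest coded neighbors."""
    nbrs = sorted(range(len(coded_positions)),
                  key=lambda j: math.dist(pos, coded_positions[j]))[:k]
    weights = [1.0 / (math.dist(pos, coded_positions[j]) + 1e-9) for j in nbrs]
    return sum(w * coded_attrs[j] for w, j in zip(weights, nbrs)) / sum(weights)

def encode(positions, attrs):
    stream = [attrs[0]]  # starting point: attribute stored verbatim
    for i in range(1, len(positions)):
        pred = _predict(positions[i], positions[:i], attrs[:i])
        stream.append(attrs[i] - pred)  # attribute correction value
    return stream

def decode(positions, stream):
    attrs = [stream[0]]
    for i in range(1, len(positions)):
        # The decoder runs the same prediction and applies the correction.
        attrs.append(_predict(positions[i], positions[:i], attrs) + stream[i])
    return attrs
```

Because the decoder replays exactly the prediction the encoder used, applying the stored corrections recovers the original attributes.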

Grouping objects of a compound document and compressing the group using a compression method corresponding to the group

A method of encoding print data performed by a host device is described in which a print request for a compound document is received; objects included in the compound document are classified into predetermined groups based on object attribute information; each of the predetermined groups is compressed according to a preset compression method; and the compressed groups are merged and transmitted to an image forming apparatus.
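The group-then-compress step can be sketched as below. The mapping of object types to codecs is an illustrative assumption; the abstract does not specify the grouping rules or the preset methods.

```python
import lzma
import zlib

# Hypothetical preset: one compression method per object group.
METHOD_FOR_GROUP = {
    "text": zlib.compress,   # text compresses well with deflate
    "image": lzma.compress,  # raster data gets the heavier codec
}

def encode_print_data(objects):
    """objects: list of (type, payload) pairs from the compound document."""
    groups: dict[str, bytearray] = {}
    for kind, payload in objects:
        groups.setdefault(kind, bytearray()).extend(payload)
    # Compress each group with its preset method; the results would then be
    # merged and transmitted to the image forming apparatus.
    return {kind: METHOD_FOR_GROUP[kind](bytes(buf))
            for kind, buf in groups.items()}
```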

Guaranteed data compression
10868565 · 2020-12-15

A method of compressing data is described in which the compressed data is generated by either or both of a primary compression unit or a reserve compression unit in order that a target compression threshold is satisfied. If a compressed data block generated by the primary compression unit satisfies the compression threshold, that block is output. However, if the compressed data block generated by the primary compression unit is too large, such that the compression threshold is not satisfied, a compressed data block generated by the reserve compression unit using a lossy compression technique is output.
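The primary/reserve arrangement can be sketched as follows. The reserve unit's lossy scheme here (repeatedly drop every other byte, then deflate) is a toy stand-in chosen so the size target is always eventually met; the abstract only requires that the reserve path satisfies the threshold.

```python
import zlib

def compress_guaranteed(block: bytes, target_size: int):
    # Primary unit: lossless deflate at maximum effort.
    primary = zlib.compress(block, 9)
    if len(primary) <= target_size:
        return "lossless", primary
    # Reserve unit: lossy fallback. Halving the data each pass guarantees
    # the loop terminates with a block under the target.
    data = block
    while True:
        data = data[::2]  # discard every other byte (lossy)
        candidate = zlib.compress(data, 9)
        if len(candidate) <= target_size or len(data) <= 1:
            return "lossy", candidate
```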

Byte select cache compression

Techniques are disclosed for designing cache compression algorithms that control how data in caches are compressed. The techniques generate a custom byte select algorithm by repeatedly applying transforms to an initial compression algorithm until a set of suitability criteria is met. The suitability criteria include that the cost is below a threshold and that a metadata constraint is met. The cost is the number of blocks that can be compressed by an algorithm as compared with the ideal algorithm. The metadata constraint is the number of bits required for metadata.
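A byte select compressor and its coverage cost can be sketched minimally: a cache line compresses if its distinct byte values fit a small dictionary, with each byte replaced by a short select index. The dictionary bound and cost definition here are illustrative assumptions.

```python
def byte_select_compress(line: bytes, max_dict: int = 16):
    """Compress a line as (dictionary, per-byte select indices), or None."""
    dictionary = sorted(set(line))
    if len(dictionary) > max_dict:
        return None  # line is not compressible under this algorithm
    index = {b: i for i, b in enumerate(dictionary)}
    selects = [index[b] for b in line]  # 4-bit selects when max_dict == 16
    return dictionary, selects

def coverage_cost(lines, algorithm):
    """Cost in the abstract's sense: blocks the candidate fails to compress,
    relative to an ideal algorithm that compresses all of them."""
    return sum(1 for line in lines if algorithm(line) is None)
```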

Dynamic handwriting verification, handwriting-based user authentication, handwriting data generation, and handwriting data preservation
10846510 · 2020-11-24

Handwriting verification methods and related computer systems, and handwriting-based user authentication methods and related computer systems are disclosed. A handwriting verification method comprises obtaining a handwriting test sample containing a plurality of available parameters, extracting geometric parameters, deriving geometric features comprising an x-position value and a y-position value for each of a plurality of feature points in the test sample, performing feature matching between geometric features of the test sample and a reference sample, determining a handwriting verification result based at least in part on the feature matching, and outputting the handwriting verification result. Techniques and tools for generating and preserving electronic handwriting data also are disclosed. Raw handwriting data is converted to a streamed format that preserves the original content of the raw handwriting data. Techniques and tools for inserting electronic handwriting data into a digital image also are disclosed.
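The geometric feature-matching step can be sketched minimally: each test feature point's (x, y) position is compared against its nearest reference feature point, and verification passes if the mean displacement stays under a tolerance. The scoring rule and threshold are illustrative assumptions; the actual matching is more elaborate.

```python
import math

def feature_match(test_points, reference_points, threshold=5.0):
    """Return (verification_result, score) from geometric feature matching."""
    distances = [min(math.dist(p, r) for r in reference_points)
                 for p in test_points]
    score = sum(distances) / len(distances)
    return score <= threshold, score
```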

METHOD AND DEVICE FOR TRANSMITTING/RECEIVING SIGNAL IN WIRELESS COMMUNICATION SYSTEM
20200359261 · 2020-11-12

A radio frequency (RF) unit, a digital unit, and methods of transmitting and receiving data in a wireless communication system are provided. The digital unit may include: a transceiver configured to receive compressed data from an RF unit, a processor configured to divide a frequency domain and a time domain into a plurality of blocks, set a compression parameter to be applied to each of the plurality of blocks, and expand the received data in units of the blocks based on the set compression parameter, and a memory storing the compression parameter.
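The block-wise expansion can be sketched with a per-block quantization step standing in for the stored compression parameter; the dictionary layout keyed by (frequency, time) block indices is an assumption.

```python
def compress(values, steps):
    """Quantize each block's samples by that block's own step."""
    return {key: [round(x / steps[key]) for x in vals]
            for key, vals in values.items()}

def expand(blocks, steps):
    """Expand received data in units of blocks using the stored parameters."""
    return {key: [q * steps[key] for q in data]
            for key, data in blocks.items()}
```

Each (frequency, time) block can thus carry a different parameter, so blocks with fine-grained content keep a small step while coarse blocks are quantized more aggressively.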

Process aware data compression

Determining an expected compression rate for a prospective process in a federated system includes obtaining compression rate data for existing processes in the federated system, compiling the compression rate data into a plurality of entries in a process name table according to process identifier, client, and industry, determining a specific entry in the process name table for an existing process that most closely matches the prospective process, and determining an expected compression rate of the prospective process based on the compression rate data for the specific entry. Compression rate data may be provided by a driver at host systems that sends compression rate information to a central repository. The central repository may be provided by a host system at a data center of the federated system. The compression rate data may use a sliding average that is weighted to favor more recent data.
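The recency-weighted sliding average can be sketched as an exponentially weighted moving average; the abstract only says recent data is weighed more heavily, so the smoothing factor alpha is an illustrative choice.

```python
def sliding_average(samples, alpha=0.3):
    """EWMA over compression-rate samples, oldest first."""
    avg = samples[0]
    for s in samples[1:]:
        avg = alpha * s + (1 - alpha) * avg  # newer samples dominate over time
    return avg
```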

METHODS FOR SELECTIVELY COMPRESSING DATA AND DEVICES THEREOF
20200344315 · 2020-10-29

Methods, non-transitory computer readable media, and computing devices that assist with selectively compressing data are described, including identifying one or more data stream characteristics in a data stream received from a client. A data processing operation to perform on the received data stream is determined based on stored compression instruction data obtained using the identified one or more characteristics. The determined data processing operation is then performed on the received data stream.
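The characteristic-then-lookup flow can be sketched as below. The characteristic names, the magic-byte heuristic, and the instruction table are illustrative assumptions, not details from the abstract.

```python
import zlib

# Hypothetical "stored compression instructions": map an identified stream
# characteristic to a data processing operation.
INSTRUCTIONS = {
    "already_compressed": lambda data: data,          # pass through untouched
    "compressible": lambda data: zlib.compress(data), # compress before storing
}

def identify(data: bytes) -> str:
    """Crude characteristic: streams starting with common compressed-format
    magic bytes are treated as already compressed."""
    if data[:2] in (b"\x1f\x8b", b"PK", b"\x78\x9c"):
        return "already_compressed"
    return "compressible"

def process_stream(data: bytes) -> bytes:
    """Determine the operation from the stream's characteristics, apply it."""
    return INSTRUCTIONS[identify(data)](data)
```

This avoids wasting cycles recompressing data that a client already compressed, while still compressing compressible streams.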