Patent classifications
H03M7/3079
DYNAMIC CONTENT ENCODING
A method for encoding text includes grouping text as a sequence of bytes, the text comprising a string of characters, each byte corresponding to a character in the text. For each byte of the sequence of bytes: (a) each bit is processed from most significant bit to least significant bit to generate a context; and (b) a subsequent bit is predicted, using a prediction model, based on the context generated based on previously processed bits, prediction of the prediction model being a combination of predictions of a plurality of sub-models. An encoded bitstream is output based on the predicted bits. The encoded bitstream includes encoded data corresponding to the text.
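As a rough illustration of the flow described above (not the patented implementation), the sketch below walks each byte from most significant bit to least significant bit, predicts each bit from a context, and combines the predictions of two simple counting sub-models by averaging. All class and function names (CountModel, predict_bits) are invented for this example:

```python
class CountModel:
    """Sub-model: predicts P(next bit = 1) from bit counts seen in a context."""
    def __init__(self):
        self.counts = {}  # context -> [zero count, one count]

    def predict(self, ctx):
        c0, c1 = self.counts.get(ctx, [1, 1])  # Laplace-smoothed counts
        return c1 / (c0 + c1)

    def update(self, ctx, bit):
        c = self.counts.setdefault(ctx, [1, 1])
        c[bit] += 1


def predict_bits(data: bytes):
    """Walk each byte MSB -> LSB, emitting a mixed prediction before each bit."""
    order0 = CountModel()   # context: bits of the current partial byte
    order1 = CountModel()   # context: previous byte plus the partial byte
    predictions = []
    prev = 0
    for byte in data:
        ctx = 1  # leading 1 encodes how many bits have been consumed so far
        for i in range(7, -1, -1):
            bit = (byte >> i) & 1
            # Combine the sub-model predictions (simple average as the mixer)
            p = 0.5 * (order0.predict(ctx) + order1.predict((prev, ctx)))
            predictions.append(p)
            order0.update(ctx, bit)
            order1.update((prev, ctx), bit)
            ctx = (ctx << 1) | bit
        prev = byte
    return predictions
```

In a full encoder these per-bit probabilities would drive an arithmetic coder to produce the encoded bitstream; here only the prediction loop is shown.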
SIGNALING OF CODING TREE UNIT BLOCK PARTITIONING IN NEURAL NETWORK MODEL COMPRESSION
A method of neural network decoding includes receiving a first syntax element in a model parameter set from a bitstream of a compressed neural network representation (NNR) of a neural network. The first syntax element indicates whether a coding tree unit (CTU) block partitioning is enabled for a tensor in an NNR aggregate unit. The method also includes reconstructing the tensor in the NNR aggregate unit based on the first syntax element.
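A minimal sketch of the decoding-side branching this abstract describes, assuming a flag is the first syntax element read. The bit-reader and the placeholder reconstruction are invented for illustration and are not NNR specification syntax:

```python
class BitReader:
    """Minimal MSB-first bit reader over a byte buffer."""
    def __init__(self, data: bytes):
        self.data, self.pos = data, 0

    def read_bit(self) -> int:
        byte, off = divmod(self.pos, 8)
        self.pos += 1
        return (self.data[byte] >> (7 - off)) & 1


def decode_model_parameter_set(reader: BitReader) -> dict:
    """Read the first syntax element: whether CTU block partitioning is
    enabled for the tensor in the NNR aggregate unit."""
    return {"ctu_partition_flag": reader.read_bit()}


def reconstruct_tensor(reader: BitReader, params: dict, rows: int, cols: int):
    """Branch reconstruction on the flag. The body is a placeholder that
    shapes zeros; a real decoder would parse coefficients here, block by
    block when the CTU mode is enabled."""
    mode = "blockwise" if params["ctu_partition_flag"] else "flat"
    return mode, [[0] * cols for _ in range(rows)]
```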
Data compression method and apparatus, and computer device
A data compression method includes: obtaining a to-be-compressed object; searching a recommendation record for a recommended compression coding rule that meets a compression rate condition, the recommendation record being configured to record a compression coding rule of a historical compressed object and corresponding compression rate information, and the historical compressed object being of a same type as the to-be-compressed object; and if the recommended compression coding rule that meets the compression rate condition is found, compressing the to-be-compressed object by using the recommended compression coding rule; and if the recommended compression coding rule that meets the compression rate condition is not found, starting a regular compression coding process to obtain estimated compression rates of a plurality of compression coding rules for the to-be-compressed object, selecting a target compression coding rule based on at least the estimated compression rates, and compressing the to-be-compressed object by using the target compression coding rule.
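The recommendation-record flow above can be sketched with standard-library codecs standing in for the compression coding rules. The class name, the dictionary-based record, and the choice of zlib/bz2/lzma as candidate rules are all assumptions for this example:

```python
import bz2
import lzma
import zlib

CODECS = {"zlib": zlib.compress, "bz2": bz2.compress, "lzma": lzma.compress}


class CompressionRecommender:
    """Sketch: reuse a rule that met the target compression rate for
    same-type objects; otherwise measure all candidate rules."""

    def __init__(self, target_rate: float = 0.5):
        self.target_rate = target_rate  # required compressed/original ratio
        self.record = {}                # object type -> (rule name, rate)

    def compress(self, obj_type: str, data: bytes):
        hit = self.record.get(obj_type)
        if hit and hit[1] <= self.target_rate:
            # Recommended rule meets the compression rate condition: reuse it
            rule = hit[0]
            return rule, CODECS[rule](data)
        # Regular process: estimate rates for every candidate rule
        rates = {name: len(fn(data)) / len(data) for name, fn in CODECS.items()}
        rule = min(rates, key=rates.get)  # best estimated compression rate
        self.record[obj_type] = (rule, rates[rule])
        return rule, CODECS[rule](data)
```

On the second call for the same object type, the recorded rule is reused without re-measuring, which is the cost the patent's recommendation record is meant to avoid.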
SYSTEM AND METHOD FOR COMPUTER DATA TYPE IDENTIFICATION
A system and method for file type identification involving extraction of a file-print of a file, the file-print being a unique or practically-unique representation of statistical characteristics associated with the distribution of bits in the binary contents of the file, similar to a fingerprint. The file-print is then passed to a machine learning algorithm that has been trained to recognize file types from their file-prints. The machine learning algorithm returns a predicted file type and, in some cases, a probability of correctness of the prediction. The file may then be encoded using an encoding algorithm chosen based on the predicted file type.
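A toy version of the file-print idea, assuming the print is a normalized byte-value histogram and using a nearest-centroid rule as a stand-in for the trained machine learning algorithm. Both simplifications are assumptions of this sketch, not the patent's feature set or model:

```python
import math
from collections import Counter


def file_print(data: bytes, n: int = 256):
    """A simple 'file-print': normalized byte-value histogram, one possible
    stand-in for the bit-distribution statistics in the abstract."""
    counts = Counter(data)
    total = len(data) or 1
    return [counts.get(b, 0) / total for b in range(n)]


class NearestCentroid:
    """Tiny stand-in for the trained classifier."""
    def __init__(self):
        self.centroids = {}

    def fit(self, prints_by_type):
        for ftype, prints in prints_by_type.items():
            k = len(prints)
            self.centroids[ftype] = [sum(col) / k for col in zip(*prints)]

    def predict(self, fp):
        scores = {t: math.dist(fp, c) for t, c in self.centroids.items()}
        best = min(scores, key=scores.get)
        # Crude confidence: this type's inverse-distance share
        inv = {t: 1 / (d + 1e-9) for t, d in scores.items()}
        return best, inv[best] / sum(inv.values())
```

The predicted type (and its probability) could then select the encoding algorithm, as the abstract describes.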
Compression context setup for data transmission for IOT devices
A method for enabling compression context setup for Internet-of-Things (IoT) devices in a communication network is presented. The method is performed in an application server node for IoT devices and includes sending a get context message to a gateway node, the get context message requesting a compression context setup that includes compression details for an IoT device, receiving an indication of the requested compression context setup for the IoT device from the gateway node, and compressing and decompressing messages sent to and from the IoT device based on the received indication. An IoT device, a gateway node, an application node, a computer program, and a computer program product thereof are also presented.
METHOD, DEVICE, AND STORAGE MEDIUM FOR DATA ENCODING/DECODING
A data encoding method includes obtaining an attribute residual of a current point cloud point, binarizing the attribute residual to obtain a binary code of the current point cloud point that includes a first binary code indicating a first flag bit and a second binary code indicating a second flag bit, selecting a first context model from a context model list according to a first condition, selecting a second context model from the context model list according to a second condition, encoding the first binary code using the first context model, and encoding the second binary code using the second context model. The first condition and the second condition are different for the first context model and the second context model corresponding to a same index in the context model list.
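The two-flag binarization and the point that different selection conditions can land on the same list index can be sketched as below. The concrete flags (residual non-zero, magnitude greater than one) and conditions are invented for illustration:

```python
class ContextModel:
    """Adaptive binary model: tracks counts to estimate P(bit = 1)."""
    def __init__(self):
        self.c = [1, 1]

    def p1(self):
        return self.c[1] / sum(self.c)

    def code(self, bit):
        # A real coder would arithmetic-code `bit` against p1() here
        self.c[bit] += 1
        return bit


def encode_residual(residual, ctx_list, neighbor_sum):
    # Binarize: first flag bit = "residual is non-zero",
    #           second flag bit = "magnitude exceeds 1"
    flag1 = int(residual != 0)
    flag2 = int(abs(residual) > 1)
    # Two DIFFERENT conditions may select the SAME index in the list
    idx1 = 0 if neighbor_sum == 0 else 1    # first condition
    idx2 = 0 if abs(residual) <= 2 else 1   # second, different condition
    bits = [ctx_list[idx1].code(flag1)]
    if flag1:
        bits.append(ctx_list[idx2].code(flag2))
    return bits
```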
OCCUPANCY INFORMATION PREDICTION METHOD, ENCODER, DECODER, AND STORAGE MEDIUM
Embodiments of the present application provide an occupancy information prediction method, an encoder, a decoder, and a storage medium. The occupancy information prediction method comprises: when an encoder encodes geometrical information on the basis of an octree, determining encoding information corresponding to neighboring nodes of a node to be predicted and a distance parameter between a child node of the node to be predicted and the neighboring nodes, wherein the encoding information corresponding to the neighboring nodes comprises occupancy information; determining an occupancy weight corresponding to the child node of the node to be predicted according to the distance parameter and the encoding information corresponding to the neighboring nodes; and performing prediction processing on the child node according to the occupancy weight and a preset occupancy threshold set to obtain a node type corresponding to the child node.
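One plausible reading of the distance-weighted occupancy prediction is sketched below, with inverse-distance weighting and a single threshold assumed for illustration (the patent's actual weight formula and threshold set are not specified in the abstract):

```python
import math


def predict_child_occupancy(child_pos, neighbors, threshold=0.5):
    """neighbors: list of (position, occupied_flag) for already-coded nodes.
    Weight each neighbor's occupancy by inverse distance to the child node,
    then threshold the normalized weight to obtain a predicted node type."""
    num = den = 0.0
    for pos, occupied in neighbors:
        d = math.dist(child_pos, pos)
        w = 1.0 / (d + 1e-9)  # closer neighbors count more
        num += w * occupied
        den += w
    weight = num / den if den else 0.0
    node_type = "predicted-occupied" if weight >= threshold else "predicted-empty"
    return weight, node_type
```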
Compression of high dynamic ratio fields for machine learning
Various embodiments include methods and devices for implementing decompression of compressed high dynamic ratio fields. Various embodiments may include receiving compressed first and second sets of data fields, decompressing the first and second compressed sets of data fields to generate first and second decompressed sets of data fields, receiving a mapping for mapping the first and second decompressed sets of data fields to a set of data units, and aggregating the first and second decompressed sets of data fields using the mapping to generate a compression block comprising the set of data units.
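The decompress-then-aggregate flow can be sketched with zlib standing in for the field codec and a simple (set index, field index) list as the mapping; both choices, and the scalar-gather aggregation, are assumptions of this example:

```python
import struct
import zlib


def pack_fields(fields):
    """Compress a set of float data fields (zlib as a stand-in codec)."""
    return zlib.compress(struct.pack(f"{len(fields)}f", *fields))


def unpack_fields(blob, n):
    """Decompress a set of n float data fields."""
    return list(struct.unpack(f"{n}f", zlib.decompress(blob)))


def aggregate(blob1, n1, blob2, n2, mapping):
    """mapping[i] = (set_id, index): which decompressed field feeds data
    unit i. For example, one set might hold exponents and the other
    mantissas of a high-dynamic-range tensor; here we just gather scalars."""
    sets = [unpack_fields(blob1, n1), unpack_fields(blob2, n2)]
    return [sets[s][j] for s, j in mapping]
```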
COMPUTATION APPARATUS AND COMPRESSION METHOD
The computational load of computation using a neural network can be lowered. A computation apparatus has a prediction device, an encoder, and a decoder, and encodes and decodes data by using a probability density distribution. Of a learning process and a compression process, at least the compression process can be executed. A probability distribution table, created by performing learning with a neural network in the learning process, associates a parameter with a symbol value probability distribution. In the compression process, the prediction device calculates the parameter from input data, and the encoder compresses the input data by using the symbol value probability distribution on the basis of the calculated parameter and the probability distribution table.
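The predictor -> table -> encoder pipeline can be illustrated as below. The hand-written table, the two-parameter predictor, and the ideal-code-length "encoder" are all assumptions of this sketch; in the patent the table is learned with a neural network and the encoder emits actual compressed data:

```python
import math

# Probability-distribution table: parameter -> P(symbol) over a tiny alphabet.
PROB_TABLE = {
    "after_vowel":     {"a": 0.1, "b": 0.45, "c": 0.45},
    "after_consonant": {"a": 0.8, "b": 0.1,  "c": 0.1},
}


def predictor(prev_symbol):
    """Stand-in for the prediction device: derive the table parameter."""
    return "after_vowel" if prev_symbol == "a" else "after_consonant"


def ideal_code_length(data):
    """Encoder sketch: sum of -log2 P(symbol | parameter), in bits. A real
    encoder would drive an arithmetic coder with these probabilities."""
    bits, prev = 0.0, None
    for sym in data:
        dist = PROB_TABLE[predictor(prev)]
        bits += -math.log2(dist[sym])
        prev = sym
    return bits
```

Because the table replaces a per-symbol neural-network evaluation at compression time, only the cheap parameter prediction runs in the inner loop, which matches the abstract's claim of lowering the computation load.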
SYSTEMS AND METHODS FOR CODING
The present disclosure relates to systems and methods for coding. The methods may include receiving at least two contexts, for each of the at least two contexts, obtaining at least one coding parameter corresponding to the context from at least one lookup table, determining a probability interval value corresponding to the context based on a previous probability interval value and the at least one coding parameter, determining a normalized probability interval value corresponding to the context by performing a normalization operation on the probability interval value, determining a probability interval lower limit corresponding to the context based on a previous probability interval lower limit and the at least one coding parameter, determining a normalized probability interval lower limit corresponding to the context by performing the normalization operation on the probability interval lower limit, and outputting at least one byte based on the normalized probability interval lower limit.
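A highly simplified binary range coder showing the same moving parts: a per-context coding parameter from a lookup table, interval-value and lower-limit updates, normalization, and byte output. Carry propagation is deliberately omitted, so this illustrates the flow rather than producing a decodable bitstream; the table layout and scaling are assumptions of this sketch:

```python
TOP = 1 << 24  # renormalization threshold


class RangeEncoder:
    """Sketch of context-driven range coding with table-based parameters."""

    def __init__(self, prob_table):
        self.prob = prob_table   # context -> P(bit = 0), scaled to 0..255
        self.low = 0             # probability interval lower limit
        self.range = 0xFFFFFFFF  # probability interval value
        self.out = bytearray()

    def encode_bit(self, ctx, bit):
        p0 = self.prob[ctx]                 # coding parameter from the LUT
        split = (self.range >> 8) * p0      # sub-interval assigned to bit 0
        if bit == 0:
            self.range = split              # interval value update
        else:
            # Lower-limit update (carry into emitted bytes is ignored here)
            self.low = (self.low + split) & 0xFFFFFFFF
            self.range -= split
        while self.range < TOP:             # normalization operation
            self.out.append((self.low >> 24) & 0xFF)  # output a byte
            self.low = (self.low << 8) & 0xFFFFFFFF
            self.range <<= 8
```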