Patent classifications
H03M7/3066
COMPRESSING DEVICE AND METHOD USING PARAMETERS OF QUADTREE METHOD
A device configured to compress a tensor including a plurality of cells includes: a quadtree generator configured to generate a quadtree by searching for non-zero cells included in the tensor and to extract at least one parameter value from the quadtree; a mode selector configured to determine a compression mode based on the at least one parameter value; and a bitstream generator configured to generate a bitstream by compressing the tensor based on the compression mode.
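As a rough illustration of the idea above, the sketch below builds a quadtree over a small square tensor, derives one parameter from it (the count of non-zero leaves), and selects a mode from that parameter. The function names, the "sparse"/"dense" mode labels, and the 0.25 density threshold are illustrative assumptions, not details taken from the abstract.

```python
import numpy as np

def quadtree_nonzero_leaves(t, r0, c0, size, leaves):
    """Recursively subdivide; record leaf cells inside quads that contain
    any non-zero value, pruning all-zero quads whole."""
    block = t[r0:r0 + size, c0:c0 + size]
    if not block.any():
        return  # all-zero quad: prune the entire subtree
    if size == 1:
        leaves.append((r0, c0))
        return
    h = size // 2
    for dr, dc in ((0, 0), (0, h), (h, 0), (h, h)):
        quadtree_nonzero_leaves(t, r0 + dr, c0 + dc, h, leaves)

def choose_mode(tensor):
    """Pick a compression mode from a quadtree-derived parameter.
    Assumes a square tensor whose side is a power of two."""
    leaves = []
    quadtree_nonzero_leaves(tensor, 0, 0, tensor.shape[0], leaves)
    density = len(leaves) / tensor.size   # parameter extracted from the tree
    return "sparse" if density < 0.25 else "dense"

t = np.zeros((4, 4), dtype=np.int8)
t[0, 0] = 7
mode = choose_mode(t)   # one non-zero cell out of 16: low density
```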
METHOD AND DEVICE FOR FAST LOSSLESS COMPRESSION
A computer-implemented method for compressing digital data includes obtaining a sequence of digital data values; mapping the sequence of digital data values to a sequence of code words having non-uniform bit lengths; packing the sequence of code words into a sequence of storage words having a uniform bit length and corresponding to a fixed-size piece of data handled as a unit by the instruction set or the hardware of a processor; and outputting the sequence of storage words together with a first bitmask indicating the bit length of each code word, wherein the method is implemented using special-purpose vector instructions.
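A minimal scalar sketch of the packing step described above (the abstract calls for vector instructions; plain Python integers stand in here). Variable-length codes are concatenated into fixed-width storage words, and the per-code bit lengths are emitted alongside so the decoder can split the words back apart. The choice of `bit_length` as the code-length rule is an assumption for illustration.

```python
def pack(values, word_bits=32):
    """Pack variable-length codes into fixed-size storage words,
    returning the words plus the bit length of each code."""
    lengths = [max(v.bit_length(), 1) for v in values]  # non-uniform lengths
    buf, nbits, words = 0, 0, []
    for v, n in zip(values, lengths):
        buf = (buf << n) | v          # append the n-bit code word
        nbits += n
        while nbits >= word_bits:     # a full storage word is ready
            nbits -= word_bits
            words.append((buf >> nbits) & ((1 << word_bits) - 1))
    if nbits:                         # left-align the final partial word
        words.append((buf << (word_bits - nbits)) & ((1 << word_bits) - 1))
    return words, lengths

def unpack(words, lengths, word_bits=32):
    """Recover the original values using the per-code bit lengths."""
    big = 0
    for w in words:
        big = (big << word_bits) | w
    total, out, pos = len(words) * word_bits, [], 0
    for n in lengths:
        out.append((big >> (total - pos - n)) & ((1 << n) - 1))
        pos += n
    return out

words, lengths = pack([5, 1, 300, 7])
```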
REGION-ADAPTIVE HIERARCHICAL TRANSFORM AND ENTROPY CODING FOR POINT CLOUD COMPRESSION, AND CORRESPONDING DECOMPRESSION
Innovations in compression and decompression of point cloud data are described. For example, an encoder is configured to encode point cloud data, thereby producing encoded data. In particular, the encoder applies a region-adaptive hierarchical transform (“RAHT”) to attributes of occupied points, thereby producing transform coefficients. The encoder can also quantize the transform coefficients and perform adaptive entropy coding of the quantized transform coefficients. For corresponding decoding, a decoder is configured to decode the encoded data to reconstruct point cloud data. In particular, the decoder applies an inverse RAHT to transform coefficients for attributes of occupied points. The decoder can also perform adaptive entropy decoding and inverse quantization of the quantized transform coefficients. The adaptive entropy coding/decoding can use estimates of the distribution of values for the quantized transform coefficients. In this case, the encoder calculates the estimates and signals them to the decoder.
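The core RAHT operation is a weight-adaptive two-point butterfly applied hierarchically: attributes of two occupied sibling nodes are combined into a DC coefficient (carrying the summed weight upward) and an AC coefficient. A sketch of one such butterfly and its inverse, assuming scalar attributes; the full hierarchical traversal, quantization, and entropy coding are omitted.

```python
import math

def raht_pair(a1, w1, a2, w2):
    """One RAHT butterfly: weighted orthonormal transform of the
    attributes of two occupied sibling nodes."""
    s = math.sqrt(w1 + w2)
    dc = (math.sqrt(w1) * a1 + math.sqrt(w2) * a2) / s
    ac = (-math.sqrt(w2) * a1 + math.sqrt(w1) * a2) / s
    return dc, ac, w1 + w2   # the DC term carries the combined weight upward

def raht_pair_inv(dc, ac, w1, w2):
    """Inverse butterfly: the transform matrix is orthonormal, so the
    inverse is its transpose."""
    s = math.sqrt(w1 + w2)
    a1 = (math.sqrt(w1) * dc - math.sqrt(w2) * ac) / s
    a2 = (math.sqrt(w2) * dc + math.sqrt(w1) * ac) / s
    return a1, a2

dc, ac, w = raht_pair(10.0, 1.0, 20.0, 3.0)
```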
Hardware channel-parallel data compression/decompression
A multichannel data packer includes a plurality of two-input multiplexers and a controller. The plurality of two-input multiplexers is arranged in 2^N rows and N columns in which N is an integer greater than 1. Each input of a multiplexer in a first column receives a respective bit stream of 2^N channels of bit streams. Each respective bit stream includes a bit-stream length based on data in the bit stream. The multiplexers in a last column output 2^N channels of packed bit streams each having a same bit-stream length. The controller controls the plurality of multiplexers so that the multiplexers in the last column output the 2^N channels of bit streams that each has the same bit-stream length.
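The abstract describes a hardware multiplexer network, so any software rendering is necessarily loose; the sketch below models only the packer's end result, not the mux columns or their control: 2^N variable-length input bit streams are redistributed into 2^N packed output streams of equal length.

```python
def pack_channels(streams):
    """Software model of the packer's net effect: redistribute 2**N
    variable-length bit strings into 2**N equal-length packed strings.
    (The patent does this with N columns of two-input multiplexers.)"""
    n = len(streams)
    allbits = "".join(streams)
    allbits += "0" * (-len(allbits) % n)   # pad so the total divides evenly
    step = len(allbits) // n
    return [allbits[i * step:(i + 1) * step] for i in range(n)]

packed = pack_channels(["101", "1", "11010", "0011"])   # 2**2 channels
```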
Electronic device subscription
Broadly speaking, embodiments of the present technique provide methods, apparatuses and systems for controlling device resource subscriptions by an LwM2M server, comprising receiving at said LwM2M server a registration request message from a LwM2M client device, the message comprising an enumeration of a plurality of subscribable elements of an object hierarchy of the device; storing, using the LwM2M server, an association between the device and the plurality of subscribable elements; and sending from the LwM2M server to the LwM2M client device a subscription message comprising a unitary compressed expression representing plural ones of said plurality of subscribable elements associated with said device.
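One plausible reading of the "unitary compressed expression" is a single bitmask over the enumerated subscribable elements, so that one message subscribes to many elements at once. A minimal sketch under that assumption; the element paths and encoding are illustrative, not the LwM2M wire format.

```python
# Hypothetical subscribable element paths enumerated at registration time
elements = ["/3/0/0", "/3/0/1", "/3/0/9", "/3303/0/5700"]

def compress_subscription(wanted):
    """Unitary compressed expression: one integer bitmask whose set bits
    select plural elements of the enumerated hierarchy."""
    mask = 0
    for i, path in enumerate(elements):
        if path in wanted:
            mask |= 1 << i
    return mask

def expand_subscription(mask):
    """Client-side expansion of the bitmask back into element paths."""
    return [p for i, p in enumerate(elements) if mask >> i & 1]

msg = compress_subscription({"/3/0/0", "/3303/0/5700"})
```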
USING PREDICATES IN CONDITIONAL TRANSCODER FOR COLUMN STORE
A storage device is disclosed. The storage device may comprise storage for input encoded data. A controller may process read requests and write requests from a host computer on the data in the storage. An in-storage compute controller may receive a predicate from the host computer to be applied to the input encoded data. A transcoder may include an index mapper to map an input dictionary to an output dictionary, with one entry in the input dictionary mapped to an entry in the output dictionary, and another entry in the input dictionary mapped to a “don't care” entry in the output dictionary.
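The index-mapper idea above can be shown with plain dictionaries: each input-dictionary code is mapped either to an output-dictionary code or, when the decoded value fails the pushed-down predicate, to a "don't care" entry. The sentinel value and helper names are assumptions for illustration.

```python
DONT_CARE = -1  # sentinel "don't care" entry in the output dictionary

def build_index_map(input_dict, output_dict, predicate):
    """Map each input-dictionary code to an output code, or to DONT_CARE
    when the dictionary value fails the predicate."""
    return [output_dict.index(v) if predicate(v) and v in output_dict
            else DONT_CARE
            for v in input_dict]

input_dict = ["apple", "banana", "cherry"]       # code -> value
predicate = lambda v: v.startswith(("a", "c"))   # predicate from the host
output_dict = [v for v in input_dict if predicate(v)]
index_map = build_index_map(input_dict, output_dict, predicate)

encoded_column = [0, 1, 2, 1, 0]                 # codes stored in the column
transcoded = [index_map[c] for c in encoded_column]
```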
Methods and devices for lossy coding of point cloud occupancy
Methods and devices for lossy encoding of point clouds. Rate-distortion optimization is used in coding an occupancy pattern for a sub-volume to determine whether to invert any of the bits of the occupancy pattern. The assessment may be a greedy evaluation of whether to invert bits in the coding order. Inverting a bit of the occupancy pattern amounts to adding or removing a point from the point cloud. A distortion metric may measure distance between the point added or removed and its nearest neighbouring point.
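A toy sketch of the greedy evaluation described above: bits of the occupancy pattern are visited in coding order, and a bit is inverted only when doing so lowers the rate-distortion cost J = R + λD. The rate and distortion models below (a fixed rate per occupied point, a fixed per-point distance) are stand-in assumptions; the patent's actual rate comes from the entropy coder and its distortion from nearest-neighbour distances.

```python
def greedy_rdo(pattern, rate_fn, dist_fn, lam):
    """Greedily invert occupancy bits in coding order when the inversion
    lowers J = rate + lam * distortion."""
    bits, dist = list(pattern), 0.0
    for i in range(len(bits)):
        flipped = bits[:]
        flipped[i] ^= 1                # add or remove point i
        d = dist_fn(i)                 # geometric error of that change
        if rate_fn(flipped) + lam * (dist + d) < rate_fn(bits) + lam * dist:
            bits, dist = flipped, dist + d
    return bits

# Toy models: 2 rate units per occupied point; fixed per-point distances
rate_fn = lambda b: 2 * sum(b)
dist_fn = lambda i: [0.5, 3.0, 0.5, 3.0, 0.5, 3.0, 0.5, 3.0][i]
out = greedy_rdo([1, 1, 0, 1, 0, 1, 1, 0], rate_fn, dist_fn, lam=1.0)
```

With these models, occupied points whose removal costs less than the 2-unit rate saving (distance 0.5) are dropped, while the rest survive.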
Dynamic sequencing of data partitions for optimizing memory utilization and performance of neural networks
Optimized memory usage and management is crucial to the overall performance of a neural network (NN) or deep neural network (DNN) computing environment. Using various characteristics of the input data dimension, an apportionment sequence is calculated for the input data to be processed by the NN or DNN that optimizes the efficient use of the local and external memory components. The apportionment sequence can describe how to parcel the input data (and its associated processing parameters—e.g., processing weights) into one or more portions as well as how such portions of input data (and its associated processing parameters) are passed between the local memory, external memory, and processing unit components of the NN or DNN. Additionally, the apportionment sequence can include instructions to store generated output data in the local and/or external memory components so as to optimize the efficient use of the local and/or external memory components.
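A deliberately simplified sketch of computing an apportionment sequence: given the input size and a local-memory budget, decide how many rows (plus their weights) each step can hold locally and where each portion's output goes. All sizes, field names, and the local/external policy are illustrative assumptions.

```python
def apportion(total_rows, row_bytes, weight_bytes, local_mem_bytes):
    """Compute an apportionment sequence: how many input rows fit in local
    memory per step alongside the weights, and where each output lands."""
    rows_per_step = max((local_mem_bytes - weight_bytes) // row_bytes, 1)
    seq, start = [], 0
    while start < total_rows:
        n = min(rows_per_step, total_rows - start)
        seq.append({"rows": (start, start + n),
                    "weights": "local",
                    # assumed policy: spill full portions, keep the tail local
                    "output": "external" if n == rows_per_step else "local"})
        start += n
    return seq

plan = apportion(total_rows=10, row_bytes=100, weight_bytes=300,
                 local_mem_bytes=1000)
```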
Method, System and Program Product for Mask-Based Compression of a Sparse Matrix
A method, system and program product includes examining elements of a first matrix in a sequential fashion. Values of the examined elements are determined. A corresponding bit of a first mask is set to a first value if a determined value is zero. A corresponding bit of a first mask is set to a second value if a determined value is non-zero. The non-zero values are packed in a first vector, wherein bits of at least the first mask determine operations on packed values.
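The mask-based scheme above is straightforward to sketch: scan the matrix sequentially, set one mask bit per element (here 0 for zero, 1 for non-zero), and pack the non-zero values into a vector. Function names are illustrative.

```python
import numpy as np

def mask_compress(m):
    """Examine elements sequentially; a 0 bit marks a zero element, a 1 bit
    marks a non-zero element whose value is packed into the values vector."""
    flat = m.ravel()
    mask = (flat != 0).astype(np.uint8)
    values = flat[flat != 0]
    return mask, values

def mask_decompress(mask, values, shape):
    """Scatter the packed values back to the positions the mask selects."""
    out = np.zeros(mask.size, dtype=values.dtype)
    out[mask == 1] = values
    return out.reshape(shape)

m = np.array([[0, 3, 0], [4, 0, 5]])
mask, vals = mask_compress(m)
```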
Memory efficient dropout, with reordering of dropout mask elements
A method for selectively dropping out feature elements from a tensor is disclosed. The method includes generating a mask that has a plurality of mask elements arranged in a first order. A compressed mask is generated, which includes a plurality of compressed mask elements arranged in a second order that is different from the first order. For example, each mask element of the plurality of mask elements of the mask is compressed to generate a corresponding compressed mask element of the plurality of compressed mask elements of the compressed mask. Each compressed mask element of the plurality of compressed mask elements indicates whether a corresponding feature element of the tensor output by a neural network layer is to be dropped out or retained. Feature elements are selectively dropped from the tensor based on the compressed mask.
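A sketch of the reordering idea above: the dropout mask is generated in a first order, stored in a permuted second order, and the permutation is undone before the mask gates the tensor. For clarity the sketch keeps one byte per mask element rather than bit-packing, and the reversal permutation is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_compressed_mask(shape, p_drop, perm):
    """Generate a keep/drop mask in a first (row-major) order, then store
    its elements in a second order given by `perm`.  A real implementation
    would also bit-pack; only the reordering is shown here."""
    keep = (rng.random(np.prod(shape)) >= p_drop).astype(np.uint8)
    return keep[perm]                     # second order

def apply_dropout(x, compressed, perm):
    """Undo the reordering, then selectively drop feature elements."""
    keep = np.empty_like(compressed)
    keep[perm] = compressed               # back to the first order
    return x * keep.reshape(x.shape)

x = np.ones((2, 4), dtype=np.float32)
perm = np.arange(8)[::-1]                 # illustrative second order: reversed
cm = make_compressed_mask(x.shape, 0.5, perm)
y = apply_dropout(x, cm, perm)
```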