Patent classifications
H03M7/3066
Bandwidth compression for neural network systems
Techniques and systems are provided for compressing data in a neural network. For example, output data can be obtained from a node of the neural network. Re-arranged output data having a re-arranged scanning pattern can be generated. The re-arranged output data can be generated by re-arranging the output data into the re-arranged scanning pattern. One or more residual values can be determined for the re-arranged output data by applying a prediction mode to the re-arranged output data. The one or more residual values can then be compressed using a coding mode.
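A minimal Python sketch of the pipeline this abstract outlines, with illustrative choices not taken from the patent: a column-major scan as the re-arranged scanning pattern, previous-value (DPCM) prediction as the prediction mode, and deflate as the coding mode:

```python
import zlib

def rearrange(block, width):
    # Illustrative "vertical" scanning pattern: read the flat node
    # output column by column instead of row by row.
    height = len(block) // width
    return [block[r * width + c]
            for c in range(width)
            for r in range(height)]

def dpcm_residuals(values):
    # Prediction mode: predict each value from its predecessor; the
    # residual is the difference.
    prev = 0
    out = []
    for v in values:
        out.append(v - prev)
        prev = v
    return out

def compress_output(block, width):
    scanned = rearrange(block, width)
    residuals = dpcm_residuals(scanned)
    # Coding-mode stand-in: serialize the residuals and deflate them.
    raw = b"".join(r.to_bytes(2, "little", signed=True) for r in residuals)
    return zlib.compress(raw)
```

Smooth activation maps produce small, repetitive residuals after prediction, which is what makes the final coding step effective.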
Data Storage Using Roaring Binary-Tree Format
Techniques are disclosed relating to managing virtual data sources (VDSs), including creating and using VDSs. A virtual data source manager (VDSM) that is executing on a computer system may receive a request to generate a bitmap index for a dataset. The VDSM may then generate a bitmap index by ingesting the dataset into a data format of the bitmap index. The VDSM may further generate the bitmap index by performing a compression procedure on the ingested dataset to generate a plurality of data containers, where a given data container includes a respective compressed portion of the ingested dataset. After compressing the ingested dataset, the VDSM may then store the plurality of data containers in a set of binary trees (b-trees), where the set of b-trees is usable to respond to data requests for data of the bitmap index.
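A toy Python sketch of the roaring-style container layout the abstract describes; a plain dict keyed on the high 16 bits stands in for the set of b-trees, and the 4096-entry array/bitmap cutoff follows the classic Roaring heuristic (the class and names are illustrative, not from the patent):

```python
from bisect import insort

ARRAY_LIMIT = 4096  # classic Roaring array/bitmap cutoff

class BitmapIndex:
    """Values are split on their high 16 bits; each bucket becomes a
    data container: a sorted-array container while sparse, converted
    to a fixed-size bitmap container once it grows dense."""

    def __init__(self):
        self.containers = {}  # stand-in for the set of b-trees

    def add(self, value):
        hi, lo = value >> 16, value & 0xFFFF
        c = self.containers.get(hi)
        if c is None:
            c = self.containers[hi] = []          # array container
        if isinstance(c, list):
            if lo not in c:
                insort(c, lo)
            if len(c) > ARRAY_LIMIT:              # convert to bitmap
                bm = bytearray(8192)              # 65536 bits
                for v in c:
                    bm[v >> 3] |= 1 << (v & 7)
                self.containers[hi] = bm
        else:
            c[lo >> 3] |= 1 << (lo & 7)

    def __contains__(self, value):
        hi, lo = value >> 16, value & 0xFFFF
        c = self.containers.get(hi)
        if c is None:
            return False
        if isinstance(c, list):
            return lo in c
        return bool(c[lo >> 3] & (1 << (lo & 7)))
```

Each container is independently compressed relative to a raw bitmap, and the key-ordered container map mirrors how a b-tree would serve range and point lookups for the index.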
Compressed versions of image data based on relationships of data
Methods of image compression are described. A stream of color image data is filtered with a prediction routine using a pixel neighborhood. The filtered stream of color image data is then sorted with a block-sorting routine. A compressed version of the color image data is generated based on the sorted, filtered stream.
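A Python sketch of the two stages the abstract names: a left-neighbor prediction filter (one simple choice of pixel neighborhood) followed by a block-sorting step, here the Burrows-Wheeler transform; the sorted residual stream would then feed an entropy coder:

```python
def left_predict(row):
    # Neighborhood prediction: each pixel is predicted by its left
    # neighbor; only the (mod-256) residual is kept.
    prev = 0
    out = bytearray()
    for p in row:
        out.append((p - prev) & 0xFF)
        prev = p
    return bytes(out)

def block_sort(block):
    # Block-sorting routine (Burrows-Wheeler transform): sort all
    # rotations of the block, keep the last column plus the index of
    # the original rotation so the transform is invertible.
    rotations = sorted(range(len(block)),
                       key=lambda i: block[i:] + block[:i])
    last = bytes(block[(i - 1) % len(block)] for i in rotations)
    return last, rotations.index(0)
```

Prediction turns smooth gradients into runs of small residuals, and block sorting clusters similar bytes together, so a simple back-end coder compresses the result well.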
HARDWARE CHANNEL-PARALLEL DATA COMPRESSION/DECOMPRESSION
A multichannel data packer includes a plurality of two-input multiplexers and a controller. The plurality of two-input multiplexers is arranged in 2^N rows and N columns, in which N is an integer greater than 1. Each input of a multiplexer in the first column receives a respective bit stream of 2^N channels of bit streams, each bit stream having a bit-stream length based on the data in the bit stream. The controller controls the plurality of multiplexers so that the multiplexers in the last column output 2^N channels of packed bit streams, each having the same bit-stream length.
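The multiplexer array itself is hardware, but its end-to-end behavior can be modeled in a few lines of Python: all input bits are redistributed so that every output channel carries the same number of bits (the tail padding is an assumption; the abstract does not say how a non-divisible total is handled):

```python
def pack_channels(streams):
    """Behavioral model of the packer, not the mux array itself:
    concatenate the 2**N variable-length channel bit streams and deal
    the bits back out so every output channel has equal length."""
    n = len(streams)  # number of channels, 2**N
    all_bits = [b for s in streams for b in s]
    if len(all_bits) % n:
        # Assumed tail padding so the total divides evenly.
        all_bits += [0] * (n - len(all_bits) % n)
    per = len(all_bits) // n
    return [all_bits[i * per:(i + 1) * per] for i in range(n)]
```

Equal-length output streams let downstream memory or link hardware consume all channels in lockstep, which is the point of the packer.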
Cloud computing data compression for allreduce in deep learning
In data-parallel deep neural network training, a gradient may be compressed for allreduce synchronization by sharing a consensus vector among a plurality of nodes, ensuring identical indexing in each of the plurality of nodes prior to performing sparse encoding.
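A sketch of the consensus step in Python, assuming each node proposes its top-k gradient indices and the consensus vector is their intersection (the abstract does not specify the rule; intersection is one illustrative choice):

```python
def local_topk_indices(grad, k):
    # Each node proposes the indices of its k largest-magnitude entries.
    return set(sorted(range(len(grad)), key=lambda i: -abs(grad[i]))[:k])

def consensus_vector(proposals):
    # Shared consensus: keep only indices every node proposed, so all
    # nodes use identical indexing before sparse encoding.
    return sorted(set.intersection(*proposals))

def sparse_encode(grad, indices):
    # With a shared index list, only the values need to travel.
    return [(i, grad[i]) for i in indices]
```

Because every node encodes the same index set, the allreduce can sum value vectors positionally without exchanging per-node index metadata.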
STORAGE DEVICE AND OPERATING METHOD OF THE STORAGE DEVICE
A memory device may include a data receiver configured to receive a plurality of read data chunks from a plurality of memory areas that transmit and receive data through one channel, a data compressor configured to generate a plurality of compressed data chunks from each of the plurality of read data chunks, and a data output unit configured to simultaneously output the plurality of compressed data chunks through the channel in response to a data output command.
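A software sketch of the compress-then-share-one-channel idea, using deflate as a stand-in for the device's data compressor and a 4-byte length prefix so the chunks can be separated again on the receiving side (the framing is an assumption, not from the abstract):

```python
import zlib

def compress_chunks(chunks):
    # Each read data chunk is compressed independently, so chunks can
    # be decompressed on their own; each compressed chunk is framed
    # with a little-endian length so all of them can share one channel.
    out = bytearray()
    for chunk in chunks:
        c = zlib.compress(chunk)
        out += len(c).to_bytes(4, "little") + c
    return bytes(out)
```

Compressing before the shared channel reduces the bytes each memory area must push through the single link, which is what makes the simultaneous output feasible.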
System and method for compressing controller area network (CAN) messages
A system for compressing Controller Area Network (CAN) messages, the system comprising a processing resource configured to: obtain a CAN message sequence including a plurality of CAN messages intercepted in a given order by at least one device adapted to monitor messages transmitted via communication channel(s) of a vehicle; group the CAN messages of the sequence into MID groups by the CAN MID field of the CAN messages; for each given MID group: split the CAN messages of the MID group into field groups, wherein each field group comprises a respective field of a plurality of fields of the CAN messages of the MID group; employ at least one compression scheme on at least one of the field groups; generate a data structure comprising the field groups; and compress the data structure using a lossless compression algorithm, giving rise to a compressed data structure.
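The grouping and field-transposition steps can be sketched in Python; deflate stands in for the lossless compression algorithm, per-field schemes (e.g., delta coding) are omitted, and fixed-length payloads per MID are assumed:

```python
import zlib
from collections import defaultdict

def compress_can(messages):
    """messages: list of (mid, payload_bytes) in capture order.
    Groups by MID, transposes each group's payloads into per-byte
    field columns (a column holding one field across many messages
    compresses far better than interleaved messages), then deflates
    the whole data structure."""
    groups = defaultdict(list)
    for mid, payload in messages:
        groups[mid].append(payload)
    structure = bytearray()
    for mid in sorted(groups):
        payloads = groups[mid]
        structure += mid.to_bytes(2, "big")
        structure += len(payloads).to_bytes(4, "big")
        # Field groups: byte column i across every message of this MID.
        for col in zip(*payloads):
            structure += bytes(col)
    return zlib.compress(bytes(structure))
```

CAN payload fields are typically slow-moving (counters, near-constant status bytes), so columns of the same field are highly redundant and the final lossless pass exploits that.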
Lossless data compression for sensors
Systems or methods for losslessly compressing data received from sensors, such as photon counters, are disclosed. An integer representation of a sensor reading is received from a sensor. The integer representation is combined with additional integer representations from each of a plurality of additional sensors into a single integer value. The single integer value is then stored as an element of an integer array that represents a predefined sample interval.
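A Python sketch of combining per-sensor integer readings into a single integer with fixed-width bit fields (the 8-bit field width is an assumption; real photon counters may need wider fields):

```python
def pack_readings(readings, bits=8):
    # Combine one reading per sensor into a single integer by giving
    # each sensor its own fixed-width bit field.
    value = 0
    for i, r in enumerate(readings):
        assert 0 <= r < (1 << bits), "reading exceeds field width"
        value |= r << (i * bits)
    return value

def unpack_readings(value, count, bits=8):
    # Lossless inverse: mask out each sensor's field in turn.
    mask = (1 << bits) - 1
    return [(value >> (i * bits)) & mask for i in range(count)]
```

Each packed integer becomes one element of the sample-interval array, so a full interval of multi-sensor readings is stored without loss in a single integer array.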
System and Method for Smart NVMeOF Disk Array Enclosure Deep Background Data Reduction Offload
A method, computer program product, and computer system for identifying, by a computing device, storage containers that contain cold data. At least a portion of the storage containers may be processed to determine whether a first compression technique will exceed the level of compression achieved by a second compression technique by more than a threshold level. The storage containers may be processed using the first compression technique based upon, at least in part, determining that the first compression technique yields that higher level of compression.
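One way to realize the comparison, sketched in Python with illustrative codecs: a fast deflate pass versus a heavier LZMA pass, with the heavier codec chosen only when its ratio advantage clears a threshold (the codecs, threshold, and names are assumptions, not from the abstract):

```python
import zlib
import lzma

def choose_technique(sample, threshold=1.1):
    # Trial-compress a sample of the cold container with both
    # candidate techniques and compare the resulting sizes.
    fast = len(zlib.compress(sample, 1))   # cheap first technique
    deep = len(lzma.compress(sample))      # heavier second technique
    # Pick the heavy codec only when it beats the cheap one by more
    # than the threshold factor; otherwise the extra CPU isn't worth it.
    if fast / max(deep, 1) > threshold:
        return "deep"
    return "fast"
```

Running the trial only on cold containers keeps the expensive comparison off the hot path, which is the point of the background offload.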
ZERO CODER COMPRESSION
The subject technology groups received data into data blocks having a predetermined number of bytes. For each received data block, a compressed data block is written to an output buffer. The compressed data block includes a mask block having the same number of bits as the predetermined number of bytes, followed by a subsequent block. The mask block includes, in the same order as the bytes within the corresponding data block, a zero corresponding to each zero byte and a one corresponding to each non-zero byte. The subsequent block includes the non-zero bytes of the corresponding data block in the same order as they appear within the data block.
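A direct Python sketch of the mask-plus-payload layout the abstract describes, using 8-byte blocks (bit i of the mask flags byte i; that bit ordering is an assumption):

```python
def zero_code(data, block=8):
    # Each block becomes a 1-byte mask (bit i set when byte i is
    # non-zero) followed by the non-zero bytes in their original order.
    out = bytearray()
    for off in range(0, len(data), block):
        chunk = data[off:off + block]
        mask = 0
        tail = bytearray()
        for i, b in enumerate(chunk):
            if b:
                mask |= 1 << i
                tail.append(b)
        out.append(mask)
        out += tail
    return bytes(out)
```

An all-zero block collapses to a single mask byte, so sparse data (e.g., zero-heavy tensors or page caches) shrinks dramatically while dense blocks cost only one extra byte each.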