Patent classifications
H03M7/6005
Decoding device, decoding method, and program
A decoding device comprising a decoding unit configured to decode a tactile signal encoded for each of a plurality of frequency bands. A decoding method comprising decoding a tactile signal encoded for each of a plurality of frequency bands. A non-transitory storage medium encoded with instructions that, when executed by a computer, cause the computer to execute processing comprising decoding a tactile signal encoded for each of a plurality of frequency bands.
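The abstract above does not specify the per-band codec, only that each frequency band is decoded separately. A minimal sketch, assuming each band was independently quantized with its own step size, so the decoder dequantizes every band and sums the bands back into one tactile waveform (the codec and recombination rule are assumptions for illustration):

```python
# Hypothetical band-wise decoder: the patent does not fix a codec.
# Each band's integer codes are dequantized with that band's step size,
# then the bands are summed sample by sample.

def decode_band(codes, step):
    """Dequantize one band: integer codes -> sample values."""
    return [c * step for c in codes]

def decode_tactile_signal(encoded_bands):
    """encoded_bands: list of (codes, step) pairs, one per frequency band."""
    bands = [decode_band(codes, step) for codes, step in encoded_bands]
    n = len(bands[0])
    # Recombine: per-sample sum across bands (assumes equal band lengths).
    return [sum(band[i] for band in bands) for i in range(n)]

encoded = [
    ([1, 2, 3], 0.5),    # low-frequency band, coarse quantization step
    ([4, 0, -4], 0.25),  # high-frequency band, finer quantization step
]
print(decode_tactile_signal(encoded))  # [1.5, 1.0, 0.5]
```

Per-band steps are the usual motivation for this structure: tactile perception tolerates different error in different bands, so each band can be coded at its own precision.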
TECHNOLOGIES FOR SWITCHING NETWORK TRAFFIC IN A DATA CENTER
Technologies for switching network traffic include a network switch. The network switch includes one or more processors and communication circuitry coupled to the one or more processors. The communication circuitry is capable of switching network traffic of multiple link layer protocols. Additionally, the network switch includes one or more memory devices storing instructions that, when executed, cause the network switch to receive, with the communication circuitry through an optical connection, network traffic to be forwarded, and determine a link layer protocol of the received network traffic. The instructions additionally cause the network switch to forward the network traffic as a function of the determined link layer protocol. Other embodiments are also described and claimed.
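The core control flow is classify-then-dispatch: determine the link layer protocol of a received frame, then forward it as a function of that protocol. A purely illustrative model (the real switch does this in communication circuitry; the one-byte protocol tag and handler names below are hypothetical):

```python
# Toy model of "determine link layer protocol, then forward accordingly".
# A hypothetical one-byte tag stands in for whatever framing the circuitry
# actually inspects.

PROTOCOLS = {0x01: "ethernet", 0x02: "other-fabric"}

def determine_protocol(frame: bytes) -> str:
    """Classify the frame by its (hypothetical) protocol tag byte."""
    return PROTOCOLS.get(frame[0], "unknown")

def switch_frame(frame: bytes) -> str:
    proto = determine_protocol(frame)
    if proto == "unknown":
        return "dropped"
    # Forward "as a function of the determined link layer protocol".
    return f"forwarded-via-{proto}"

print(switch_frame(b"\x01payload"))  # forwarded-via-ethernet
print(switch_frame(b"\xffpayload"))  # dropped
```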
Neural network model compression with block partitioning
An apparatus of neural network model decompression includes processing circuitry. The processing circuitry can be configured to receive, from a bitstream of a compressed neural network representation, one or more first syntax elements associated with a 3-dimensional coding unit (CU3D) partitioned from a 3-dimensional coding tree unit (CTU3D). The CTU3D can be partitioned from a tensor in a neural network. The one or more first syntax elements can indicate that the CU3D is partitioned based on a 3D pyramid structure that includes multiple depths. Each depth corresponds to one or more nodes, and each node has a node value. Second syntax elements corresponding to the node values of the nodes in the 3D pyramid structure can be received from the bitstream in a breadth-first scan order for scanning the nodes in the 3D pyramid structure. Model parameters of the tensor can be reconstructed based on the received second syntax elements.
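The CU3D syntax itself is standard-specific and not reproduced here, but the breadth-first scan order the abstract describes can be sketched: node values for a multi-depth pyramid are read from the stream depth by depth, shallowest first, left to right within a depth (the binary branching factor below is chosen for readability; a real CU3D pyramid branches in three dimensions):

```python
# Hedged sketch of the breadth-first read order only, not the NNR/CU3D syntax.

def bfs_read(stream, children_per_node, num_depths):
    """Read node values in breadth-first (depth-by-depth) order.

    stream: iterable of decoded syntax-element values.
    Returns a list of per-depth lists of node values.
    """
    it = iter(stream)
    pyramid = [[next(it)]]  # depth 0: the single root node
    for d in range(1, num_depths):
        width = children_per_node ** d  # full pyramid: nodes per depth
        pyramid.append([next(it) for _ in range(width)])
    return pyramid

# 3 depths, binary branching: 1 + 2 + 4 = 7 node values in scan order.
values = [7, 1, 2, 3, 4, 5, 6]
print(bfs_read(values, 2, 3))  # [[7], [1, 2], [3, 4, 5, 6]]
```

Reading in breadth-first order lets the decoder prune whole subtrees early: if a shallow node value signals an all-zero region, the deeper node values for that region never need to be coded.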
Discretization of numerical values with adaptive accuracy
An encoder, connectable to a data memory, for storing numerical values in the data memory that lie in a value range between a predefined minimum value and a predefined maximum value. The encoder includes an assignment instruction according to which the value range is subdivided into multiple discrete intervals, and is configured to classify a numerical value to be stored into exactly one interval and to output an identifier of that interval, the intervals varying in width on the scale of the numerical values. A decoder for numerical values stored in the data memory using such an encoder is configured to assign, according to the assignment instruction, a fixed numerical value belonging to the interval to an identifier of a discrete interval retrieved from the data memory, and to output it. Also described are an AI module including an ANN, an encoder, and a decoder; a method for manufacturing the AI module; and an associated computer program.
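A minimal sketch of the encoder/decoder pair. The specific interval layout is an assumption: the abstract only requires that interval widths vary across the value range. Here narrow intervals sit near zero and widths grow outward, so small values keep higher accuracy:

```python
import bisect

# Assumed layout over [min, max] = [0, 100]: widths 1, 4, 15, 80 -> 4 intervals.
BOUNDARIES = [0.0, 1.0, 5.0, 20.0, 100.0]

def encode(value):
    """Classify the value into exactly one interval; output its identifier."""
    if not BOUNDARIES[0] <= value <= BOUNDARIES[-1]:
        raise ValueError("value outside predefined range")
    i = bisect.bisect_right(BOUNDARIES, value) - 1
    return min(i, len(BOUNDARIES) - 2)  # top edge belongs to the last interval

def decode(identifier):
    """Assign the fixed numerical value (here: midpoint) of the interval."""
    lo, hi = BOUNDARIES[identifier], BOUNDARIES[identifier + 1]
    return (lo + hi) / 2

print(encode(0.4), decode(encode(0.4)))    # 0 0.5
print(encode(50.0), decode(encode(50.0)))  # 3 60.0
```

With 4 intervals each stored value needs only 2 bits, and the reconstruction error is bounded by half the width of the interval the value fell into, which is why varying the widths lets accuracy adapt to where values cluster.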
DATA DECOMPRESSION DEVICE, DATA COMPRESSION DEVICE, AND MEMORY SYSTEM
According to one embodiment, a data decompression device includes: a detection circuit configured to detect a boundary between a header and a payload in a compressed stream, based on boundary information in the header; a separation circuit configured to separate the header and the payload; a first decompression circuit configured to decompress a compressed coding table in the header; and a second decompression circuit configured to decompress the payload, based on an output of the first decompression circuit.
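A toy model of the pipeline (the byte layout is an assumption; the patent does not fix a concrete format). Byte 0 carries the boundary information, namely the header length; the header body carries the coding table, stored uncompressed here where the patent's first decompression circuit would decompress it; the payload is then decoded with that table:

```python
# Toy compressed stream: [header_len][coding table pairs...][payload codes...]

def split_stream(stream: bytes):
    """Detect the header/payload boundary from info in the header, separate."""
    header_len = stream[0]
    return stream[1:1 + header_len], stream[1 + header_len:]

def decode_table(header: bytes):
    """Recover the coding table: (code byte, symbol byte) pairs."""
    return {header[i]: header[i + 1] for i in range(0, len(header), 2)}

def decompress(stream: bytes) -> bytes:
    header, payload = split_stream(stream)
    table = decode_table(header)          # role of the first circuit
    return bytes(table[c] for c in payload)  # role of the second circuit

# Table: 0 -> 'a', 1 -> 'b'; payload codes: 0 1 1 0
stream = bytes([4, 0, ord("a"), 1, ord("b"), 0, 1, 1, 0])
print(decompress(stream))  # b'abba'
```

Splitting the work this way is what lets the two circuits pipeline: the table circuit can start on the next block's header while the payload circuit drains the current block.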
Techniques for parameter set and header design for compressed neural network representation
Systems and methods for encoding and decoding neural network data are provided. A method includes: receiving a neural network representation (NNR) bitstream including a group of NNR units (GON) that represents an independent neural network with a topology, the GON including an NNR model parameter set unit, an NNR layer parameter set unit, an NNR topology unit, an NNR quantization unit, and an NNR compressed data unit; and reconstructing the independent neural network with the topology by decoding the GON.
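Illustrative only: the actual NNR unit syntax is defined in the compressed neural network representation standard and is not reproduced here. This toy stream encodes each unit of a GON as a (type tag, length, payload) triple, and the parser collects the five unit kinds the abstract lists, as a decoder would before reconstruction:

```python
# Hypothetical type tags; real NNR unit headers differ.
UNIT_TYPES = {0: "model_parameter_set", 1: "layer_parameter_set",
              2: "topology", 3: "quantization", 4: "compressed_data"}

def parse_gon(stream: bytes):
    """Walk a type-length-value stream and collect the GON's units."""
    units, i = {}, 0
    while i < len(stream):
        utype, ulen = stream[i], stream[i + 1]
        units[UNIT_TYPES[utype]] = stream[i + 2:i + 2 + ulen]
        i += 2 + ulen
    return units

stream = bytes([0, 1, 9]) + bytes([2, 2, 5, 6]) + bytes([4, 1, 7])
gon = parse_gon(stream)
print(sorted(gon))  # ['compressed_data', 'model_parameter_set', 'topology']
```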
CONVOLUTION ACCELERATION WITH EMBEDDED VECTOR DECOMPRESSION
Techniques and systems are provided for implementing a convolutional neural network. One or more convolution accelerators are provided, each including a feature line buffer memory, a kernel buffer memory, and a plurality of multiply-accumulate (MAC) circuits arranged to multiply and accumulate data. In a first operational mode, the convolution accelerator stores feature data in the feature line buffer memory and kernel data in the kernel buffer memory. In a second operational mode, the convolution accelerator stores kernel decompression tables in the feature line buffer memory.
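A behavioral sketch of the buffer-reuse idea, with no claim to the hardware's timing or layout: the same buffer either holds feature lines (mode 1) or a kernel decompression table (mode 2). In mode 2 the kernel weights arrive as table indices and are expanded by lookup before the multiply-accumulate loop:

```python
def mac(feature, kernel):
    """Dot-product multiply-accumulate over one window."""
    return sum(f * k for f, k in zip(feature, kernel))

def run(feature, kernel_data, mode, shared_buffer=None):
    if mode == 2:
        # shared_buffer holds the decompression (lookup) table:
        # kernel_data is a list of indices into it.
        kernel = [shared_buffer[i] for i in kernel_data]
    else:
        # Mode 1: shared_buffer would hold feature lines; kernel is literal.
        kernel = kernel_data
    return mac(feature, kernel)

feature = [1, 2, 3]
print(run(feature, [0.5, 0.5, 0.5], mode=1))                       # 3.0
print(run(feature, [1, 1, 1], mode=2, shared_buffer=[0.0, 0.5]))   # 3.0
```

The payoff of mode 2 is bandwidth: a table index can be far narrower than a full-precision weight, so compressed kernels stream in faster at the cost of repurposing buffer space for the table.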
DATA COMPRESSION SYSTEM AND METHOD OF USING
A system includes a non-transitory computer readable medium configured to store instructions thereon and a processor connected to the non-transitory computer readable medium. The processor is configured to execute the instructions for generating a mask based on data received from a sensor, wherein the mask includes a plurality of importance values and each region of the received data is designated a corresponding importance value of the plurality of importance values. The processor is further configured to execute the instructions for encoding the received data based on the mask; transmitting the encoded data to a decoder to define reconstructed data; computing a loss based on the reconstructed data, the received data, and the mask; and training an encoder to encode the received data based on the computed loss.
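A sketch under stated assumptions: the importance rule, the encoding rule, and the loss form below are all hypothetical. The abstract only requires a per-region importance mask, mask-driven encoding, and a loss computed from the reconstruction, the input, and the mask:

```python
def make_mask(data, threshold=5):
    """Per-element importance: 1.0 for salient regions, 0.1 elsewhere."""
    return [1.0 if abs(x) >= threshold else 0.1 for x in data]

def encode(data, mask):
    """Coarser quantization where importance is low (assumed rule)."""
    return [round(x) if m >= 1.0 else round(x / 4) * 4
            for x, m in zip(data, mask)]

def masked_loss(reconstructed, original, mask):
    """Importance-weighted squared error used to train the encoder."""
    return sum(m * (r - o) ** 2
               for r, o, m in zip(reconstructed, original, mask))

data = [0.6, 7.2, 1.9, 9.1]
mask = make_mask(data)
recon = encode(data, mask)  # stands in for an encode -> decode round trip
print(mask)                 # [0.1, 1.0, 0.1, 1.0]
print(masked_loss(recon, data, mask))
```

Weighting the loss by the mask is what steers training: large errors in low-importance regions are cheap, so the encoder learns to spend its bits on the regions the mask marks as important.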
CLUSTER-BASED DATA COMPRESSION FOR AI TRAINING ON THE CLOUD FOR AN EDGE NETWORK
A disclosed information handling system includes an edge device communicatively coupled to a cloud computing resource. The edge device is configured to respond to receiving, from an internet of things (IoT) unit, a numeric value for a parameter of interest by determining a compressed encoding for the numeric value in accordance with a non-lossless compression algorithm. The edge device transmits the compressed encoding of the numeric value to the cloud computing resource. The cloud computing resource includes a decoder communicatively coupled to the edge device's encoder and configured to respond to receiving the compressed encoding by generating a surrogate for the numeric value. The surrogate may be generated in accordance with a probability distribution applicable to the parameter of interest. The compression algorithm may be a clustering algorithm such as a k-means clustering algorithm.
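A sketch of the edge/cloud split: the edge encodes a sensor reading as the index of its nearest cluster centroid (lossy, as the abstract notes), and the cloud-side decoder emits a surrogate for the original value. Drawing the surrogate from a per-cluster probability distribution is an option the abstract mentions; for determinism this toy simply returns the centroid itself. The centroid values are assumed:

```python
CENTROIDS = [2.0, 10.0, 50.0]  # e.g., k-means centroids over past readings

def encode(value: float) -> int:
    """Edge side: compressed encoding = index of the nearest centroid."""
    return min(range(len(CENTROIDS)), key=lambda i: abs(CENTROIDS[i] - value))

def decode(index: int) -> float:
    """Cloud side: surrogate for the original numeric value."""
    return CENTROIDS[index]

reading = 11.3
idx = encode(reading)
print(idx, decode(idx))  # 1 10.0
```

With k clusters the edge transmits only ceil(log2(k)) bits per reading, which is the point of clustering-based compression on a bandwidth-constrained edge link.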
Method and system for compressing application data for operations on multi-core systems
A system and method to compress application control data, such as weights for a layer of a convolutional neural network, is disclosed. A multi-core system for executing at least one layer of the convolutional neural network includes a storage device storing a compressed weight matrix of a set of weights of the at least one layer of the convolutional neural network and a decompression matrix. The compressed weight matrix is formed by matrix factorization and quantization of the floating point value of each weight to a floating point format. A decompression module is operable to obtain an approximation of the weight values by decompressing the compressed weight matrix through the decompression matrix. A plurality of cores executes the at least one layer of the convolutional neural network with the approximation of the weight values to produce an inference output.
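A minimal sketch with pure-Python matrices; a real implementation would factorize large weight tensors (e.g., via SVD). Here the weight matrix is approximated as a rank-1 outer product u·vT (the matrix factorization), and u is quantized to one decimal place to stand in for reduced-precision floating point storage. The stored pair plays the roles of the compressed weight matrix and the decompression matrix; multiplying them back out yields the approximate weights the cores execute with. All concrete values are assumptions:

```python
def outer(u, v):
    """Rank-1 reconstruction: W_approx[i][j] = u[i] * v[j]."""
    return [[ui * vj for vj in v] for ui in u]

# Original rank-1-ish weights, W ~= u vT with u=[1.04, 2.06], v=[3, 1, 2]:
W = [[3.12, 1.04, 2.08], [6.18, 2.06, 4.12]]

u = [1.04, 2.06]
v = [3.0, 1.0, 2.0]
u_q = [round(x, 1) for x in u]  # quantization step -> [1.0, 2.1]

W_approx = outer(u_q, v)        # decompression through the second factor
max_err = max(abs(a - b)
              for ra, rb in zip(W, W_approx)
              for a, b in zip(ra, rb))
print(W_approx)
print(round(max_err, 2))  # 0.12
```

The storage win is that a rank-r factorization of an m-by-n matrix keeps r·(m+n) values instead of m·n, and quantizing the factors shrinks each stored value further, at the cost of the bounded approximation error shown above.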