Patent classifications
H03M13/1191
Sparse graph creation device and sparse graph creation method
A selective PEG (progressive edge-growth) algorithm creates a sparse matrix while maintaining row weight and column weight at arbitrary multiple levels. In the process, it inactivates arbitrary edges so that the minimum loop formed between arbitrary nodes is enlarged, or performs constrained interleaving, thereby improving encoding efficiency when the matrix space is narrow.
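The core PEG idea referenced above can be sketched as follows: each new edge connects a variable node to a check node that is as far as possible in the current graph, which tends to enlarge the shortest cycles. This is a minimal illustrative sketch of plain PEG, not the patent's selective variant; the function name and parameters are assumptions.

```python
def peg_construct(n_vars, n_checks, col_weight):
    """Minimal progressive edge-growth (PEG) sketch: for each new edge,
    prefer a check node unreachable from the variable node in the current
    bipartite graph, breaking ties by lowest check-node degree."""
    var_edges = [set() for _ in range(n_vars)]    # checks attached to each variable
    chk_edges = [set() for _ in range(n_checks)]  # variables attached to each check

    def reachable_checks(v):
        """BFS from variable v over the current bipartite graph."""
        seen_v, seen_c = {v}, set(var_edges[v])
        frontier = set(var_edges[v])
        while frontier:
            next_vars = {u for c in frontier for u in chk_edges[c]} - seen_v
            seen_v |= next_vars
            new_checks = {c for u in next_vars for c in var_edges[u]} - seen_c
            if not new_checks:
                break
            seen_c |= new_checks
            frontier = new_checks
        return seen_c

    for v in range(n_vars):
        for _ in range(col_weight):
            # prefer checks not yet reachable from v (edge cannot close a cycle)
            candidates = set(range(n_checks)) - reachable_checks(v)
            if not candidates:
                # graph saturated: fall back to any check not already attached
                candidates = set(range(n_checks)) - var_edges[v]
            c = min(candidates, key=lambda c: (len(chk_edges[c]), c))
            var_edges[v].add(c)
            chk_edges[c].add(v)
    return var_edges
```

Maintaining the multi-level row/column weight targets and the selective edge inactivation described in the abstract would sit on top of this basic edge-placement loop.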
Polar encoder, communication unit, integrated circuit and method therefor
A polar encoder kernel, a communication unit, an integrated circuit and a method of polar encoding are described. The polar encoder kernel is configured to receive one or more bits from a kernel information block having a kernel block size of N, and to output one or more bits from a kernel encoded block having a block size that matches the kernel block size N. The polar encoder kernel comprises a decomposition of a polar code graph having multiple columns that are processed by a reused single datapath, wherein at least one of said multiple columns contains two or more stages. Each column of the multiple columns is further decomposed into one or more polar code sub-graphs, and the kernel is configured to process encoded bits one polar code sub-graph at a time.
METHODS AND PROCEDURES FOR FLEXIBLE AND HIGHLY-PARALLEL POLAR ENCODING AND DECODING
A method performed by a WTRU may comprise generating a polar factor graph and pruning the polar factor graph to generate a pruned factor graph. The pruned factor graph may include input variable nodes, check nodes and output variable nodes. The method may further comprise initializing the input variable nodes. For each of a plurality of encoding levels of the pruned factor graph, values from the input variable nodes may be transferred to the check nodes. Operations, for example XOR additions, may be performed on the values of the check nodes. Check nodes having a single connection to another node not used in a previous transfer may be identified. Values from the identified check nodes may be transferred to the input variable nodes. Binary values from the input variable nodes may be transferred to the output variable nodes for transmission to a receiver.
ELECTRONIC DEVICE
Provided herein may be an electronic device using an artificial neural network. The electronic device may include a training data generator configured to determine an input vector corresponding to a trapping set, detected during error correction decoding corresponding to a codeword, and a target vector corresponding to the input vector, and a training component configured to train an artificial neural network based on supervised learning by inputting the input vector to an input layer of the artificial neural network and by inputting the target vector to an output layer of the artificial neural network.
Hyper-Graph Network Decoders for Algebraic Block Codes
In one embodiment, a method includes inputting an encoded message with noise to a neural-network model comprising variable and check layers of nodes, each node being associated with at least one weight and a hyper-network node; updating the weights associated with the variable layer of nodes by processing the encoded message using the hyper-network nodes associated with the variable layer; generating a first set of outputs by processing the encoded message using the variable layer of nodes and their respective updated weights; updating the weights associated with the check layer of nodes by processing the first set of outputs using the hyper-network nodes associated with the check layer; and generating a decoded message without noise using the neural-network model, based on at least the first set of outputs and the check layer of nodes with their respective updated weights.
LEARNING DEVICE
According to one embodiment, a learning device includes a noise generation unit, a decoding unit, a generation unit, and a learning unit. The noise generation unit outputs a second code word which corresponds to a first code word to which noise has been added. The decoding unit decodes the second code word and outputs a third code word. The generation unit generates learning data for learning a weight in message passing decoding in which the weight and a message to be transmitted are multiplied, based on whether or not decoding of the second code word into the third code word has been successful. The learning unit determines a value for the weight in the message passing decoding by using the learning data.
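The weighted message-passing update that such a learning device would tune can be illustrated with a single check-node step of a weighted min-sum decoder. This is a generic sketch of one common weighted update, not the patent's specific rule; the function name and the single scalar weight are assumptions.

```python
import numpy as np

def weighted_min_sum_check(msgs_in, weight):
    """One check-node update in weighted min-sum form: each outgoing
    message is the product of the signs of the other incoming messages
    times their minimum magnitude, scaled by a trainable weight."""
    out = []
    for i in range(len(msgs_in)):
        others = [m for j, m in enumerate(msgs_in) if j != i]
        sign = np.prod(np.sign(others))
        out.append(weight * sign * min(abs(m) for m in others))
    return out
```

In the scheme described, the value of `weight` would be fitted from learning data labelled by whether decoding of the noisy codeword succeeded.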
Soft output decoding of polar codes
According to certain embodiments, a method is provided for generating soft information for code bits of polar codes. The method includes receiving, by a decoder of a receiver, soft information associated with coded bits from a first module of the receiver and using a tree structure of the polar code to generate updated soft information. The updated soft information is output by the decoder for use by a second module of the receiver.
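Decoders that traverse the polar code tree update soft information (LLRs) at each branch with two standard functions. A minimal sketch in the common min-sum approximation, assumed here for illustration rather than taken from the patent:

```python
import numpy as np

def f_msg(a, b):
    """Upper-branch LLR update (check-node combination) in min-sum form,
    used when descending to the left child of a polar code tree node."""
    return np.sign(a) * np.sign(b) * np.minimum(np.abs(a), np.abs(b))

def g_msg(a, b, u):
    """Lower-branch LLR update, incorporating the hard decision u from
    the left child when descending to the right child."""
    return b + (1.0 - 2.0 * u) * a
```

Propagating such quantities back up the tree is what lets the decoder emit updated soft information for the coded bits rather than only hard decisions.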
NEURAL NETWORKS FOR DECODING
Methods and apparatus for training a Neural Network to recover a codeword of a Forward Error Correction code are provided. Trainable parameters of the Neural Network are optimised to minimise a loss function. The loss function is calculated by representing an estimated value of the message bit output from the Neural Network as a probability of the value of the bit in a predetermined real number domain and multiplying the representation of the estimated value of the message bit by a representation of a target value of the message bit. Examples of the method for training a neural network may be implemented via the proposed loss function.
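One plausible instantiation of a multiplicative bit-wise loss of this shape is sketched below: the estimated bit is mapped to a soft value in (-1, 1), the target bit to a sign, and their product determines the per-bit loss. The exact mappings used in the patent are not specified here; this sketch is an assumption for illustration.

```python
import numpy as np

def soft_bit_loss(logits, targets):
    """Hedged sketch of a multiplicative bit-wise loss. The network output
    (a logit) is mapped to a soft bit in (-1, 1); the target bit {0, 1} is
    mapped to {+1, -1}; their product is shifted so the loss is 0 when the
    estimate agrees with the target at full confidence."""
    soft = np.tanh(logits / 2.0)           # real-domain soft-bit representation
    signed_targets = 1.0 - 2.0 * targets   # target bit 0 -> +1, 1 -> -1
    return np.mean(1.0 - soft * signed_targets)
```

Because the loss is built from the product of the two representations, it is differentiable in the logits and can be minimised with standard gradient-based optimisation.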
NEURAL NETWORKS FOR FORWARD ERROR CORRECTION DECODING
Methods and apparatus for training a neural network to recover a codeword and for decoding a received signal using a neural network are disclosed. According to examples of the disclosed methods, a syndrome check is introduced at even layers of the neural network during the training, testing and online phases. During training, optimisation of trainable parameters of the neural network is ceased after optimisation at the layer at which the syndrome check is satisfied. Examples of the method for training a neural network may be implemented via a proposed loss function. During testing and online phases, propagation through the neural network is ceased at the layer at which the syndrome check is satisfied.
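The syndrome check used as the stopping criterion above is standard: a hard-decision estimate c satisfies all parity checks exactly when Hc = 0 (mod 2). A minimal sketch (function name assumed):

```python
import numpy as np

def syndrome_satisfied(H, hard_bits):
    """Return True iff the hard-decision vector satisfies every parity
    check of H, i.e. H @ c = 0 (mod 2). A layered decoder can stop
    propagation at the first layer where this holds."""
    return not np.any((H @ hard_bits) % 2)
```

Ceasing propagation (and, during training, parameter optimisation) at the first layer where this test passes is what saves computation in the described scheme.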
Sparse-coded ambient backscatter communication method and system
The present disclosure relates to a sparse-coded ambient backscatter communication method and system. In an ambient backscatter system including an access point and a plurality of sensor nodes, each sensor node transmits a codeword in a non-orthogonal multiple access (NOMA) manner, exploiting the sparsity of the signal produced by a duty-cycling operation, and the access point detects the superimposed NOMA signal using an iterative decoding method that accounts for the dyadic channel and intersymbol interference. The present disclosure may reduce implementation cost by reducing the number of impedances required to modulate the data of a batteryless sensor node in an Internet of Things environment, and may exploit the dyadic backscatter channel for signal detection, thereby providing massive connectivity at the access point.