H03M13/6597

Forward error correction encoding using binary clustering

Embodiments of the present disclosure relate to a binary clustered forward error correction encoding scheme. Systems and methods are disclosed that define binary clustered encodings of the media packets from which forward error correction (FEC) packets are computed. The different encodings specify which media packets in a frame are used to compute each FEC packet (a frame includes M media packets). The different encodings may be defined based on the quantity of media packets in a frame, M ≤ floor(2^N), where each bit of the binary representation of N is associated with a different cluster pair encoding of the media packets. Each cluster pair includes a cluster for which the bit=0 and a cluster for which the bit=1. Computing FEC packets using at least two cluster pair encodings provides redundancy for each media packet, thereby improving media packet recovery rates.
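A minimal sketch of the idea, assuming XOR-based FEC packets and reading "cluster pair" as a split on one bit of the packet index (an interpretation of the abstract, not taken verbatim from it); `xor_packets` and `cluster_fec` are hypothetical helpers:

```python
import math

def xor_packets(packets):
    """XOR equal-length packets (bytes) together."""
    out = bytearray(packets[0])
    for p in packets[1:]:
        for i, byte in enumerate(p):
            out[i] ^= byte
    return bytes(out)

def cluster_fec(media_packets, bit_positions=(0, 1)):
    """One FEC packet per cluster of each selected cluster pair.

    For bit position b, the cluster pair is {packets whose index has bit
    b == 0} and {packets whose index has bit b == 1}. Using at least two
    bit positions places every media packet in two different clusters, so
    each packet is protected by at least two FEC packets.
    """
    M = len(media_packets)
    N = max(1, math.ceil(math.log2(M)))   # number of index bits, M <= 2**N
    fec = {}
    for b in bit_positions:
        assert b < N, "bit position outside the index width"
        for bit_value in (0, 1):
            cluster = [p for i, p in enumerate(media_packets)
                       if (i >> b) & 1 == bit_value]
            if cluster:
                fec[(b, bit_value)] = xor_packets(cluster)
    return fec

# Example: a frame of M = 6 four-byte packets, encoded with two cluster pairs,
# yields four FEC packets keyed (0,0), (0,1), (1,0), (1,1).
frame = [bytes([i] * 4) for i in range(6)]
fec_packets = cluster_fec(frame)
```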

Neural networks for decoding

Methods and apparatus for training a Neural Network to recover a codeword of a Forward Error Correction (FEC) code are provided. Trainable parameters of the Neural Network are optimised to minimise a loss function. The loss function is calculated by representing an estimated value of a message bit output from the Neural Network as a probability of the value of the bit in a predetermined real number domain, and multiplying that representation by a representation of a target value of the message bit. Examples of the training method may be implemented via the proposed loss function.
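One plausible reading of such a loss, as a hedged sketch: map the network's LLR outputs to soft bit estimates in [-1, 1], map the target bits to ±1, and penalise the product when the two disagree. The tanh mapping and the exact penalty shape are assumptions, not taken from the abstract:

```python
import numpy as np

def soft_bit_loss(llr_out, target_bits):
    """Per-bit loss: soft estimate in [-1, 1] multiplied by the +/-1 target.

    llr_out     : log-likelihood ratios output by the network, shape (n,)
    target_bits : transmitted message bits in {0, 1}, shape (n,)
    """
    soft_est = np.tanh(llr_out / 2.0)   # estimated bit value in the real domain [-1, 1]
    targets = 1.0 - 2.0 * target_bits   # map 0 -> +1, 1 -> -1
    # The product is close to +1 when estimate and target agree and -1 when
    # they disagree; rescale so perfect agreement gives zero loss.
    return np.mean((1.0 - soft_est * targets) / 2.0)

# Example: two confident correct bits and one wrong bit give a non-zero loss.
print(soft_bit_loss(np.array([4.0, -4.0, 3.0]), np.array([0, 1, 1])))
```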

Mixing coefficient data for processing mode selection
11528048 · 2022-12-13

Examples described herein include wireless devices and systems that mix input data and delayed versions of at least a portion of respective processing results with coefficient data specific to a processing mode selection. For example, a computing system with processing units may mix the input data and delayed versions of respective outputs of various layers of multiplication/accumulation processing units (MAC units) for a transmission in a radio frequency (RF) wireless domain with the coefficient data to generate output data that is representative of the transmission being processed according to a wireless processing mode selection. In another example, such mixing of input data with delayed versions of processing results may be used to receive and process noisy wireless input data. Examples of systems and methods described herein may facilitate the processing of data for 5G wireless communications in a power-efficient and time-efficient manner.
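A toy sketch of the mixing step, with hypothetical per-mode coefficient tables (`MODE_COEFFS` and the mode names are invented for illustration): input samples and delayed copies of earlier processing results are each weighted by mode-specific coefficient data and summed.

```python
import numpy as np

# Hypothetical coefficient tables keyed by processing mode; real coefficient
# data would be derived or trained per wireless processing mode.
MODE_COEFFS = {
    "low_latency":     {"input": np.array([0.9, 0.1]),      "delayed": np.array([0.2])},
    "high_throughput": {"input": np.array([0.7, 0.2, 0.1]), "delayed": np.array([0.3, 0.1])},
}

def mix(input_data, prev_results, mode):
    """Mix input samples and delayed copies of earlier processing results,
    each weighted by the coefficient data for the selected mode."""
    c = MODE_COEFFS[mode]
    n = len(input_data)
    mixed   = np.convolve(input_data,   c["input"])[:n]    # taps over delayed inputs
    delayed = np.convolve(prev_results, c["delayed"])[:n]  # taps over delayed prior outputs
    return mixed + delayed

samples = np.random.randn(16)   # stand-in for RF-domain input data
previous = np.zeros(16)         # prior layer outputs (none yet)
out = mix(samples, previous, "low_latency")
```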

Hardware architecture for local erasure correction in SSD/UFS via maximally recoverable codes

A hardware architecture for systematic erasure encoding includes a first matrix constructor circuit that receives the parity-check matrix H for a codeword C and the erased part of codeword C, and outputs a matrix H₁ of the columns of H located on the erased coordinates of code C; a second matrix constructor circuit that receives matrix H and the erased part of codeword C, and outputs a matrix H₂ of the columns of H located on the non-erased coordinates of code C; and a neural network that calculates a matrix J₁ that is an approximate inverse of matrix H₁. The matrix J₁ is used to determine new erasures in the parity matrix H and new erased coordinates. Matrices H₁ and H₂ are updated, and the updated H₁ is provided as feedback to the first matrix constructor circuit. A calculator circuit restores the erased coordinates of codeword C from the matrix J₁, the matrix H₂, and the non-erased part of codeword C.
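Ignoring the hardware pipelining and substituting an exact GF(2) inverse for the neural approximation J₁, the underlying algebra is: H·c = 0 splits into H₁·c_e = H₂·c_k over GF(2) (since -1 ≡ 1), so the erased part is c_e = J₁·(H₂·c_k). A sketch under those assumptions:

```python
import numpy as np

def gf2_inv(A):
    """Exact GF(2) inverse via Gauss-Jordan elimination; in the patent this
    role is played by a neural network producing an approximate inverse J1."""
    n = A.shape[0]
    aug = np.concatenate([A % 2, np.eye(n, dtype=int)], axis=1)
    for col in range(n):
        pivot = next(r for r in range(col, n) if aug[r, col])  # assumes full rank
        aug[[col, pivot]] = aug[[pivot, col]]
        for r in range(n):
            if r != col and aug[r, col]:
                aug[r] ^= aug[col]
    return aug[:, n:]

def restore_erasures(H, codeword, erased):
    """Solve H1 @ c_e = H2 @ c_k over GF(2) for the erased coordinates c_e."""
    known = [i for i in range(H.shape[1]) if i not in erased]
    H1, H2 = H[:, erased], H[:, known]   # columns on erased / non-erased coordinates
    J1 = gf2_inv(H1)                     # assumes H1 is square and invertible
    restored = codeword.copy()
    restored[erased] = (J1 @ (H2 @ codeword[known])) % 2
    return restored
```

The patent's feedback loop (updating H₁, H₂ and re-running the constructor circuits) handles precisely the cases this sketch assumes away, namely when the approximate inverse exposes new erasures.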

Neural networks for forward error correction decoding

Methods and apparatus for training a neural network to recover a codeword and for decoding a received signal using a neural network are disclosed. According to examples of the disclosed methods, a syndrome check is introduced at even layers of the neural network during the training, testing and online phases. During training, optimisation of trainable parameters of the neural network is ceased after optimisation at the layer at which the syndrome check is satisfied. Examples of the method for training a neural network may be implemented via a proposed loss function. During testing and online phases, propagation through the neural network is ceased at the layer at which the syndrome check is satisfied.
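A sketch of the early-stopping rule, assuming an unrolled decoder exposed as a list of per-iteration layers (a hypothetical interface): the syndrome H·ĉ mod 2 is checked at even layers, and propagation ceases once it vanishes.

```python
import numpy as np

def decode_with_syndrome_stop(layers, H, llr):
    """Run the unrolled decoder, checking the syndrome at even layers and
    ceasing propagation as soon as a valid codeword is reached.

    layers : per-iteration callables mapping LLRs to refined LLRs
             (a hypothetical interface for the unrolled network).
    """
    for depth, layer in enumerate(layers):
        llr = layer(llr)
        if depth % 2 == 0:                    # syndrome check at even layers only
            hard = (llr < 0).astype(int)      # hard decision: negative LLR -> bit 1
            if not ((H @ hard) % 2).any():    # H @ c == 0 -> codeword found
                break                         # in training, optimisation of deeper
                                              # layers would likewise cease here
    return (llr < 0).astype(int)
```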

INTERLEAVER DESIGN AND PAIRWISE CODEWORD DISTANCE DISTRIBUTION ENHANCEMENT FOR TURBO AUTOENCODER
20220385307 · 2022-12-01

A symmetric interleaver for a Convolutional Neural Network (CNN) and Recurrent Neural Network (RNN) encoder and a circular padding mode are disclosed. The interleaver interleaves elements of an input block to form an output block in which the output neighborhood of elements for each element of the output block is symmetric to the input neighborhood of elements for each element of the input block. A position of an element of the input block is interleaved based on the index i of the position multiplied by a parameter δ, modulo K, where the parameter δ is relatively prime to K. A loss function may be used to train the encoder that includes a Binary Cross Entropy (BCE) loss function plus a function that minimizes a number of codeword pairs based on their Euclidean distance. The RNN encoder may be implemented as part of a Turbo Autoencoder (TurboAE) encoder.
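The interleaving rule itself is compact enough to state directly. Taking K as the input block length (consistent with, though not stated in, the abstract), π(i) = (i·δ) mod K is a permutation exactly when gcd(δ, K) = 1:

```python
import math

def symmetric_interleave(block, delta):
    """Interleave positions by pi(i) = (i * delta) mod K, with delta
    relatively prime to the block length K so that pi is a permutation."""
    K = len(block)
    assert math.gcd(delta, K) == 1, "delta must be relatively prime to K"
    out = [None] * K
    for i, x in enumerate(block):
        out[(i * delta) % K] = x
    return out

# pi is a bijection because gcd(delta, K) = 1, so every output slot is filled.
print(symmetric_interleave(list(range(8)), delta=3))  # [0, 3, 6, 1, 4, 7, 2, 5]
```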

NEURAL SELF-CORRECTED MIN-SUM DECODER AND AN ELECTRONIC DEVICE COMPRISING THE DECODER
20220385305 · 2022-12-01

An electronic device and an operating method of an electronic device are provided. The operating method includes configuring a self-correction condition for adjusting an information deletion and dropout rate, performing iterative decoding on received information using decoding factors and a self-correction technique, determining whether decoding of the codeword succeeds or fails based on a result of the decoding, storing the received signal and the codeword that are successfully decoded based on the determination result, and optimizing the decoding factors based on the stored received signal and codeword.
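The self-correction technique is not spelled out in the abstract; a common rule from self-corrected min-sum decoding, and a plausible stand-in here, erases a variable-to-check message whose sign flipped since the previous iteration, with a configurable rate playing the role of the deletion/dropout rate:

```python
import numpy as np

def self_correct(msg_new, msg_prev, deletion_rate=1.0, rng=None):
    """Erase (zero out) variable-to-check messages whose sign flipped since
    the previous iteration; deletion_rate < 1 erases only a fraction of them,
    standing in for the configurable deletion/dropout rate."""
    rng = rng or np.random.default_rng()
    flipped = np.sign(msg_new) * np.sign(msg_prev) < 0    # sign changed -> unreliable
    erase = flipped & (rng.random(msg_new.shape) < deletion_rate)
    return np.where(erase, 0.0, msg_new)

# A message flipping from +1.5 to -0.7 is erased; stable messages pass through.
print(self_correct(np.array([-0.7, 2.0]), np.array([1.5, 1.8])))  # [0.0, 2.0]
```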

SYSTEMS FOR ERROR REDUCTION OF ENCODED DATA USING NEURAL NETWORKS
20220368356 · 2022-11-17

Examples described herein utilize multi-layer neural networks, such as multi-layer recurrent neural networks to estimate an error-reduced version of encoded data based on a retrieved version of encoded data (e.g., data encoded using one or more encoding techniques) from a memory. The neural networks and/or recurrent neural networks may have nonlinear mapping and distributed processing capabilities which may be advantageous in many systems employing a neural network or recurrent neural network to estimate an error-reduced version of encoded data for an error correction coding (ECC) decoder, e.g., to facilitate decoding of the error-reduced version of encoded data at the decoder. In this manner, neural networks or recurrent neural networks described herein may be used to improve or facilitate aspects of decoding at ECC decoders, e.g., by reducing errors present in encoded data due to storage or transmission.
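A skeleton of the two-stage arrangement, with the denoising network and the ECC decoder left as hypothetical callables, makes the data flow explicit:

```python
import numpy as np

def error_reduced_decode(retrieved, denoiser, ecc_decode):
    """Two-stage flow from the abstract: a trained (recurrent) network first
    estimates an error-reduced version of the encoded data retrieved from
    memory, then a conventional ECC decoder decodes the cleaned data."""
    cleaned = denoiser(retrieved)   # NN stage: reduce storage/transmission errors
    return ecc_decode(cleaned)      # ECC stage: decode as usual

# Toy instantiation with placeholder stages (the real denoiser is a trained
# multi-layer or recurrent network; the real decoder is e.g. BCH/LDPC):
noisy = np.array([0.9, -0.2, 1.1, -0.8])
bits = error_reduced_decode(noisy,
                            denoiser=lambda x: np.clip(x, -1, 1),
                            ecc_decode=lambda x: (x < 0).astype(int))
```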

DECODERS AND SYSTEMS FOR DECODING ENCODED DATA USING NEURAL NETWORKS
20220368349 · 2022-11-17

Examples described herein utilize multi-layer neural networks, such as multi-layer recurrent neural networks to estimate message probability compute data based on encoded data (e.g., data encoded using one or more encoding techniques). The neural networks and/or recurrent neural networks may have nonlinear mapping and distributed processing capabilities which may be advantageous in many systems employing a neural network or recurrent neural network to estimate message probability compute data for a message probability compute (MPC) decoder. In this manner, neural networks or recurrent neural networks described herein may be used to implement aspects of error correction coding (ECC) decoders, e.g., an MPC decoder that iteratively decodes encoded data.
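Structurally this differs from the previous entry in where the network sits: here it produces the per-bit probabilities inside the iterative decoder rather than cleaning the data before it. A sketch, with the probability-estimating network as a hypothetical callable:

```python
import numpy as np

def nn_mpc_decode(encoded, estimate_probs, n_iters=5):
    """Iterative decode where the message probability compute stage is a
    trained network (estimate_probs is a hypothetical callable taking the
    encoded data and the current per-bit probabilities)."""
    probs = np.full(len(encoded), 0.5)    # start from an uninformative prior
    for _ in range(n_iters):
        probs = estimate_probs(encoded, probs)   # NN replaces the MPC stage
    return (probs > 0.5).astype(int)      # hard decision on the final probabilities
```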

METHOD AND DEVICE FOR DECODING DATA

A method for decoding data by an electronic device (100) is provided. The method includes receiving, by the electronic device (100), encoded data. The method includes determining, by the electronic device (100), a sparsity of a plurality of Machine Learning (ML) models (301, 302) of a turbo decoder (150) of the electronic device (100) for decoding the encoded data based on Quality-of-Service (QoS) parameters. The method includes decoding, by the electronic device (100), the encoded data using the turbo decoder (150) based on the determined sparsity.
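The mapping from QoS parameters to model sparsity is not specified in the abstract; a hypothetical policy (thresholds and parameter names invented for illustration) might trade pruning level against latency and reliability targets:

```python
def select_sparsity(qos):
    """Hypothetical QoS-to-sparsity policy: tighter latency budgets favour
    sparser, cheaper ML models; stricter error-rate targets favour denser,
    more accurate ones. All thresholds below are invented."""
    if qos["latency_ms"] <= 1.0:       # latency-critical traffic
        return 0.9                     # heavily pruned models, fastest decoding
    if qos["target_bler"] <= 1e-5:     # reliability-critical traffic
        return 0.3                     # mostly dense models, best accuracy
    return 0.6                         # balanced default

print(select_sparsity({"latency_ms": 5.0, "target_bler": 1e-3}))  # 0.6
```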