Patent classifications
H03M13/6577
Storage device including error correction decoder and operating method of error correction decoder
An operating method of an error correction decoder includes receiving data, setting initial log-likelihood values of variable nodes, and decoding the received data by updating the log-likelihood value of a selected variable node using a minimum value and a minimum candidate value associated with the selected variable node. The minimum value is the smallest of the absolute values of the log-likelihood values of first variable nodes that share a check node with the selected variable node and that include the selected variable node. The minimum candidate value is the smallest absolute value, among the log-likelihood values of second variable nodes, that is greater than the minimum value. The second variable nodes are selected later than the one of the first variable nodes that corresponds to the minimum value.
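As a rough illustration of the min/min-candidate bookkeeping described above, the following Python sketch tracks the smallest and second-smallest absolute LLR among the nodes attached to one check node; the function name, the single-pass scan, and the min-sum-style usage note are assumptions for illustration, not the claimed method.

```python
# Hypothetical sketch of tracking the minimum and minimum-candidate
# (often called min1/min2) of |LLR| values among variable nodes
# sharing a check node.
def min_and_candidate(llrs):
    """Return (min1, min2): the smallest and second-smallest |LLR|."""
    min1 = float("inf")   # minimum of the absolute LLR values
    min2 = float("inf")   # smallest absolute LLR value greater than min1
    for llr in llrs:
        mag = abs(llr)
        if mag < min1:
            min1, min2 = mag, min1
        elif mag < min2:
            min2 = mag
    return min1, min2

# Example: LLRs of variable nodes connected to one check node.
llrs = [2.5, -0.75, 1.25, -3.0]
min1, min2 = min_and_candidate(llrs)
# In a min-sum style update, the message back to the node holding min1
# would use min2, while all other nodes use min1.
print(min1, min2)  # 0.75 1.25
```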
Non-concatenated FEC codes for ultra-high speed optical transport networks
A communication system includes a transmitter having an encoder configured to encode input data using FEC codewords and a receiver including a decoder configured to decode the FEC codewords using a parity check matrix. The decoder includes check node processing units each configured to perform a check node computation on an FEC codeword using a different row of the parity check matrix. Each of the check node processing units includes an input computation stage configured to compute initial computation values, a pipelined message memory configured to shift the initial computation values at a predefined clock interval, an output computation stage configured to generate a plurality of check node output messages, a plurality of variable node processing units each configured to perform variable node update computations to generate the variable node messages, and an output circuit configured to generate a decoded codeword based on the variable node messages.
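A minimal Python sketch of a per-row check node computation of the kind each check node processing unit performs; the min-sum update and array layout are assumptions, and the pipelined message memory and clocked shifting of the claimed hardware are not modeled.

```python
import numpy as np

def check_node_messages(H_row, llrs):
    """Min-sum check node computation for one parity-check row.

    H_row: binary row of the parity check matrix (1 where a variable node
    participates in this check); llrs: current variable-to-check LLRs.
    Returns one output message per participating variable node.
    """
    idx = np.flatnonzero(H_row)
    msgs = {}
    for i in idx:
        others = [llrs[j] for j in idx if j != i]
        sign = np.prod(np.sign(others))            # extrinsic sign
        msgs[i] = sign * min(abs(v) for v in others)  # extrinsic magnitude
    return msgs

# Toy example: one row of H and LLRs for 6 variable nodes.
H_row = np.array([1, 0, 1, 1, 0, 1])
llrs = np.array([1.2, -0.4, -2.0, 0.6, 3.1, -1.5])
print(check_node_messages(H_row, llrs))
```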
DECODING MODULE WITH LOGARITHM CALCULATION FUNCTION
A decoding module for a communication device includes a first calculation circuit, outputting the larger of a first parameter and a second parameter as a first output parameter; a first arithmetic circuit, calculating a first product of a third parameter and a first slope, and a first difference between a first constant and the first product; a second arithmetic circuit, calculating a second product of the third parameter and a second slope, and a second difference between a second constant and the second product; a second calculation circuit, selecting the largest among a third constant, the first difference, and the second difference, and generating a second output parameter, wherein the third constant is zero; and an addition circuit, adding the first output parameter and the second output parameter to generate output information, according to which the communication device determines a data bit.
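The circuit reads like a piecewise-linear approximation of the Jacobian logarithm max*(a, b) = max(a, b) + ln(1 + e^{-|a-b|}); below is a small Python sketch under that assumption, with illustrative slopes and constants rather than the values the patent would specify.

```python
import math

def max_star_approx(a, b, c1=0.6931, s1=0.5, c2=0.35, s2=0.15):
    """Approximate max*(a, b) = ln(e^a + e^b).

    Computes max(a, b) plus a correction max(0, c1 - s1*d, c2 - s2*d),
    where d = |a - b|; the constants and slopes here are illustrative.
    """
    d = abs(a - b)                               # third parameter
    first = max(a, b)                            # first calculation circuit
    corr = max(0.0, c1 - s1 * d, c2 - s2 * d)    # second calculation circuit
    return first + corr                          # addition circuit

a, b = 1.0, 0.2
exact = math.log(math.exp(a) + math.exp(b))
print(max_star_approx(a, b), exact)  # approximation vs. exact log-sum
```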
Design and Training of Binary Neurons and Binary Neural Networks with Error Correcting Codes
A data processing system having a neural network architecture for receiving a binary network input and, in dependence on the binary network input, propagating signals via a plurality of processing nodes, in accordance with respective binary weights, to form a network output, the data processing system being configured to train a node by implementing an error correcting function to identify a set of binary weights which minimize, for a given input to the node, any error between an output of the node when formed in accordance with current binary weights of the node and a preferred output from the node and to update the binary weights of the node to be the identified weights. This training is performed without storing and/or using any higher arithmetic precision weights or other components.
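A toy Python sketch of selecting binary weights that minimize the error between a binary neuron's output and a preferred output, with no higher-precision weights kept anywhere; the exhaustive search and sign activation stand in for the patented error correcting function and are assumptions for illustration.

```python
from itertools import product

def train_binary_neuron(x, target):
    """Pick binary weights w in {-1, +1}^n minimizing |sign(w.x) - target|.

    x: binary input in {-1, +1}^n; target: preferred output in {-1, +1}.
    An exhaustive search stands in for the error-correcting selection;
    no higher-arithmetic-precision weights are stored at any point.
    """
    best_w, best_err = None, float("inf")
    for w in product((-1, 1), repeat=len(x)):
        s = sum(wi * xi for wi, xi in zip(w, x))
        out = 1 if s >= 0 else -1
        err = abs(out - target)
        if err < best_err:
            best_w, best_err = w, err
    return best_w

print(train_binary_neuron(x=(1, -1, 1), target=-1))
```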
Electronic device
Provided herein may be an electronic device using an artificial neural network. The electronic device may include a training data generator configured to determine an input vector corresponding to a trapping set, detected during error correction decoding corresponding to a codeword, and a target vector corresponding to the input vector, and a training component configured to train an artificial neural network based on supervised learning by inputting the input vector to an input layer of the artificial neural network and by inputting the target vector to an output layer of the artificial neural network.
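A brief Python sketch of how an (input vector, target vector) training pair might be assembled once a trapping set is detected; the indexing scheme and variable names are assumptions, not the claimed training data generator.

```python
# Hypothetical sketch: build a supervised (input, target) pair when a
# trapping set is detected during error correction decoding of a codeword.
def make_training_pair(hard_decisions, trapping_set_bits, codeword):
    """input vector = current hard decisions on the trapping-set bits;
    target vector = the corresponding bits of the correct codeword."""
    input_vector = [hard_decisions[i] for i in trapping_set_bits]
    target_vector = [codeword[i] for i in trapping_set_bits]
    return input_vector, target_vector

# Toy example: bits 2, 5, and 7 form a trapping set.
hard = [0, 1, 1, 0, 1, 0, 0, 1]
code = [0, 1, 0, 0, 1, 1, 0, 0]
pair = make_training_pair(hard, trapping_set_bits=[2, 5, 7], codeword=code)
print(pair)  # ([1, 0, 1], [0, 1, 0]) -> fed to the input/output layers
```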
Non-Concatenated FEC Codes For Ultra-High Speed Optical Transport Networks
A decoder for a receiver in a communication system includes an interface configured to receive encoded input data via a communication channel. The encoded input data includes forward error correction (FEC) codewords. A processor is configured to decode the FEC codewords using low density parity check (LDPC) codes defined by a parity check matrix. The parity check matrix is defined by both regular column partition (RCP) constraints and quasi-cyclic (QC) constraints. An output circuit is configured to output a decoded codeword based on the FEC codewords decoded by the processor.
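A small Python sketch of expanding a quasi-cyclic (QC) base matrix of circulant shifts into a parity check matrix; the toy base matrix and lifting factor are illustrative, and the regular column partition (RCP) constraints are only noted in a comment rather than enforced.

```python
import numpy as np

def qc_parity_check(base, z):
    """Expand a base matrix of shift values into a QC parity check matrix.

    base[i][j] = -1 denotes an all-zero z x z block; any other value is a
    cyclic shift of the z x z identity. RCP constraints would additionally
    fix how the non-zero blocks are distributed across column groups.
    """
    I = np.eye(z, dtype=int)
    blocks = [[np.zeros((z, z), dtype=int) if s < 0 else np.roll(I, s, axis=1)
               for s in row] for row in base]
    return np.block(blocks)

# Toy base matrix (2 block rows x 4 block columns), lifting factor z = 4.
H = qc_parity_check(base=[[0, 1, -1, 2],
                          [3, -1, 0, 1]], z=4)
print(H.shape)  # (8, 16)
```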
Low-density parity-check (LDPC) decoder using a reconstruction-computation-quantization (RCQ) approach for a storage device
A device is disclosed. The device may include an input buffer to receive a first low bit width message. A reconstruction circuit may implement a reconstruction function on the first low bit width message, producing a first high bit width message. A computation circuit may implement a computation function on the first high bit width message, producing a second high bit width message. A quantization circuit may implement a quantization function on the second high bit width message, producing a second low bit width message. A decision buffer may then store the second low bit width message. The reconstruction function and the quantization function may vary depending on the iteration and the layer of the device.
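A minimal Python sketch of one reconstruction-computation-quantization step on low-bit-width messages; the lookup table, thresholds, and the min-sum stand-in for the computation stage are assumptions for illustration.

```python
def rcq_step(low_bit_msgs, recon_table, quant_thresholds):
    """One RCQ-style step on low-bit-width messages.

    recon_table maps a low-bit index to a high-precision value
    (reconstruction); the computation stage here is a min-sum magnitude;
    quant_thresholds are the boundaries mapping the result back to a
    low-bit index. Both tables would typically differ per iteration
    and per layer.
    """
    high = [recon_table[m] for m in low_bit_msgs]   # reconstruction
    result = min(abs(v) for v in high)              # computation
    for idx, t in enumerate(quant_thresholds):      # quantization
        if result < t:
            return idx
    return len(quant_thresholds)

# 2-bit messages with iteration-specific tables (illustrative values).
recon = {0: 0.25, 1: 0.9, 2: 1.8, 3: 3.5}
thresholds = [0.5, 1.3, 2.5]
print(rcq_step([2, 1, 3], recon, thresholds))  # high=[1.8, 0.9, 3.5] -> 0.9 -> index 1
```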
Method and apparatus for LDPC decoding using indexed messages
A low-density parity check (LDPC) decoder includes a variable node unit (VNU) comprising a plurality of variable nodes configured to perform sums. A first message mapper of the LDPC decoder receives first n1-bit indices from log-likelihood ratio (LLR) input and maps the first n1-bit indices to first numerical values that are input to the variable nodes of the VNU. A second message mapper of the LDPC decoder receives second n2-bit indices from a check node unit (CNU) and maps the second n2-bit indices to second numerical values that are input to the variable nodes of the VNU. The CNU includes a plurality of check nodes that perform parity check operations. The first and second numerical values have ranges that are larger than what can be represented in n1-bit and n2-bit binary, respectively.
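A short Python sketch of the indexed-message idea: small-bit-width indices are mapped to numerical values drawn from a wider range before the variable node sums them; the tables and bit widths below are illustrative assumptions.

```python
# Hypothetical n-bit index -> numerical value tables. The mapped values
# span ranges wider than what 2-bit two's-complement could represent.
LLR_MAP = {0: -12.0, 1: -4.0, 2: 4.0, 3: 12.0}   # 2-bit channel LLR indices
CNU_MAP = {0: -6.5, 1: -1.5, 2: 1.5, 3: 6.5}     # 2-bit check node indices

def variable_node_sum(llr_index, cnu_indices):
    """Map the indices to numerical values, then perform the variable node sum."""
    return LLR_MAP[llr_index] + sum(CNU_MAP[i] for i in cnu_indices)

print(variable_node_sum(llr_index=2, cnu_indices=[3, 0, 1]))  # 4.0 - 1.5 = 2.5
```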
ITERATIVE DECODER FOR DECODING A CODE COMPOSED OF AT LEAST TWO CONSTRAINT NODES
An iterative decoder comprising:
N variable nodes (VNs) v_n, n = 1 … N, configured to receive an LLR I_n defined on an alphabet A_l of q_ch quantization bits, q_ch ≥ 2;
M constraint nodes (CNs) c_m, m = 1 … M, 2 ≤ M < N;
v_n and c_m exchanging messages along the edges of a Tanner graph;
each v_n sending messages m_{v_n → c_m};
each c_m sending messages m_{c_m → v_n};
the LLR I_n and the messages m_{v_n → c_m} and m_{c_m → v_n} …;
each variable node v_n, for each iteration l, computing:
sign-preserving factors: …
where ξ is a positive or a null integer;
and …
LLR estimation for soft decoding
A method of soft decoding received signals. The method comprises defining quantisation intervals for a signal value range; determining the number of bits in each quantisation interval that are connected to unsatisfied constraints; providing that number of bits for each quantisation interval as an input to a trained model, wherein the trained model has been trained to cover an operational range of a device for soft decoding of signals; determining, using the trained model, a log likelihood ratio for each quantisation interval; and performing soft decoding using the log likelihood ratios.
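A compact Python sketch of the described flow: count, for each quantisation interval, the bits connected to unsatisfied checks, feed those counts to a model, and use the returned LLRs for soft decoding; the placeholder model and toy parity checks are assumptions, not the trained model of the claim.

```python
import numpy as np

def interval_unsatisfied_counts(values, hard_bits, H, intervals):
    """For each quantisation interval, count bits connected to unsatisfied checks."""
    syndrome = (H @ hard_bits) % 2                   # 1 = unsatisfied check
    unsat_bits = (H[syndrome == 1].sum(axis=0) > 0)  # bits touching an unsatisfied check
    bins = np.digitize(values, intervals)            # interval index of each bit
    return [int(np.sum(unsat_bits & (bins == k))) for k in range(len(intervals) + 1)]

def placeholder_model(counts):
    """Stand-in for the trained model: fewer flagged bits -> larger |LLR|."""
    return [4.0 / (1 + c) for c in counts]

# Toy parity checks and received signal values (illustrative).
H = np.array([[1, 1, 0, 1],
              [0, 1, 1, 1]])
values = np.array([0.8, -0.1, 0.3, 0.9])
hard = (values < 0).astype(int)
counts = interval_unsatisfied_counts(values, hard, H, intervals=np.array([-0.5, 0.0, 0.5]))
llrs_per_interval = placeholder_model(counts)
print(counts, llrs_per_interval)
```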