DECODING METHOD ADOPTING ALGORITHM WITH WEIGHT-BASED ADJUSTED PARAMETERS AND DECODING SYSTEM

20220399903 · 2022-12-15


    Abstract

    A decoding method adopting an algorithm with weight-based adjusted parameters and a decoding system are provided. The decoding method is applied to a decoder. M×N low density parity check codes (LDPC codes) having N variable nodes and M check nodes are generated from input signals. In the decoding method, information of the variable nodes and the check nodes is initialized. The information passed from the variable nodes to the check nodes is formed after multiple iterations. After excluding a connection to be calculated, a product of the remaining connections between the variable nodes and the check nodes is calculated. Next, an estimated first minimum or an estimated second minimum can be calculated with multi-dimensional parameters. The information passed from the check nodes to the variable nodes can be updated for making a decision.

    Claims

    1. A decoding method adopting an algorithm with weight-based adjusted parameters, wherein the decoding method is applied to a decoder having “N” variable nodes and “M” check nodes, in which input signals generate “M*N” low density parity check codes, and the decoding method comprises: initializing information of the variable nodes and the check nodes; updating the variable nodes, wherein each of the variable nodes is updated based on the information of the check nodes connected thereto, and wherein the information that each of the variable nodes provides to the check nodes is formed by multiple iterations, and a sum of connections among the variable nodes and the check nodes is calculated after excluding the connections to be calculated; and updating the check nodes, wherein each of the check nodes is updated according to the information of the variable nodes connected thereto, wherein the information that each of the check nodes provides to the variable nodes is formed by the multiple iterations, a product of connections among the variable nodes and the check nodes is calculated after excluding the connections to be calculated, and a dot product is then calculated according to an estimated first minimum or an estimated second minimum, so as to obtain the information that the check nodes provide to the variable nodes for making a decision, and wherein: searching for a minimum of the updated variable nodes, so as to obtain a first minimum; obtaining a false second minimum from data accompanied with the first minimum; multiplying the first minimum by a first parameter (α) for obtaining the estimated first minimum; and multiplying the first minimum by a second parameter (β) and adding a result of the false second minimum multiplied by a third parameter (γ), so as to obtain the estimated second minimum.

    2. The decoding method according to claim 1, wherein the first parameter (α), the second parameter (β) and the third parameter (γ) satisfy a relation expressed by (β+γ)≥α.

    3. The decoding method according to claim 1, wherein, in the step of initializing the information of the variable nodes and the check nodes, intrinsic information is one-by-one written into the multiple variable nodes before the iterations are performed.

    4. The decoding method according to claim 1, wherein whether the estimated first minimum or the estimated second minimum is used is determined according to whether or not the number of the variable node in the information that the check node provides to the variable node corresponds to the position of the smallest information that the variable node provides to the check node.

    5. The decoding method according to claim 4, wherein the first parameter (α), the second parameter (β) and the third parameter (γ) satisfy a relational expression (β+γ)≥α.

    6. The decoding method according to claim 1, wherein intrinsic information of the variable node is summed up with the information that the check node provides to the multiple variable nodes and is updated via a connection between the check node and the other variable nodes, so as to obtain the information of the variable node for making the decision.

    7. The decoding method according to claim 6, wherein the first parameter (α), the second parameter (β) and the third parameter (γ) satisfy a relational expression (β+γ)≥α.

    8. The decoding method according to claim 1, wherein, in a pre-processing step of the decoder, a decode scaling method is used to control a log-likelihood ratio for adjusting a weight value that is inputted to the log-likelihood ratio.

    9. The decoding method according to claim 8, wherein the first parameter (α), the second parameter (β) and the third parameter (γ) satisfy a relational expression (β+γ)≥α.

    10. The decoding method according to claim 9, wherein an equation for obtaining the information that the check node provides to the variable node is as follows:

    for m ∈ {1, …, M} and n ∈ N_m:

    c_{m,n}^{(i)} = ( ∏_{n′∈N_m\n} sign(v_{n′,m}^{(i)}) ) · { min1^est, when the n of c_{m,n}^{(i)} is not the n where the minimum v2c is located; min2^est, otherwise };

    min1 = min_{n∈N_m} ( |v_{n,m}^{(i)}| );

    min1^est = α · min1;

    min2^est = β · min1 + γ · min2′″;

    wherein “N” denotes a number of the variable nodes; “M” denotes a number of the check nodes; “c_{m,n}^{(i)}” denotes information that the m-numbered check node sends to the n-numbered variable node; “n′” denotes the number(s) of the remaining variable node(s) after excluding the connection to be calculated; “v_{n′,m}^{(i)}” denotes the information (i.e., the v2c information) that the n′-numbered variable node(s) send to the m-numbered check node after excluding the connection to be calculated; the sign function “sign( )” returns a value “0”, “1” or “−1” according to whether its argument is “0”, “a positive number” or “a negative number”; “min1” is the first minimum; min_{n∈N_m}( |v_{n,m}^{(i)}| ) is the function used to acquire the minimum; “min1^est” is the estimated first minimum; “min2^est” is the estimated second minimum; and “min2′″” denotes the false second minimum.

    11. A decoding system, comprising a decoder disposed at a receiving end of the decoding system, in which a decoding method adopting an algorithm with weight-based adjusted parameters is performed according to steps as follows: generating M*N low density parity check codes having N variable nodes and M check nodes from input signals; initializing information of the variable nodes and the check nodes; updating the variable nodes, wherein each of the variable nodes is updated based on the information of the check nodes connected thereto, and wherein the information that each of the variable nodes provides to the check nodes is formed by multiple iterations, and a sum of connections among the variable nodes and the check nodes is calculated after excluding the connections to be calculated; and updating the check nodes, wherein each of the check nodes is updated according to the information of the variable nodes connected thereto, wherein the information that each of the check nodes provides to the variable nodes is formed by the multiple iterations, a product of connections among the variable nodes and the check nodes is calculated after excluding the connections to be calculated, and a dot product is then calculated according to an estimated first minimum or an estimated second minimum so as to obtain the information that the check nodes provide to the variable nodes for making a decision, and wherein: searching for a minimum of the updated variable nodes so as to obtain a first minimum; obtaining a false second minimum from data accompanied with the first minimum; multiplying the first minimum by a first parameter (α) for obtaining the estimated first minimum; and multiplying the first minimum by a second parameter (β) and adding a result of the false second minimum multiplied by a third parameter (γ) so as to obtain the estimated second minimum.

    12. The decoding system according to claim 11, wherein the first parameter (α), the second parameter (β) and the third parameter (γ) satisfy a relational expression (β+γ)≥α.

    13. The decoding system according to claim 11, wherein, in the step of initializing the information of the variable nodes and the check nodes, intrinsic information is one-by-one written into the multiple variable nodes before the iterations are performed.

    14. The decoding system according to claim 11, wherein whether the estimated first minimum or the estimated second minimum is used is determined according to whether or not the number of the variable node in the information that the check node provides to the variable node corresponds to the position of the smallest information that the variable node provides to the check node.

    15. The decoding system according to claim 14, wherein the first parameter (α), the second parameter (β) and the third parameter (γ) satisfy a relational expression (β+γ)≥α.

    16. The decoding system according to claim 11, wherein intrinsic information of the variable node is summed up with the information that the check node provides to the multiple variable nodes and is updated via a connection between the check node and the other variable nodes so as to obtain the information of the variable node for making the decision.

    17. The decoding system according to claim 16, wherein the first parameter (α), the second parameter (β) and the third parameter (γ) satisfy a relational expression (β+γ)≥α.

    18. The decoding system according to claim 11, wherein, in a pre-processing step of the decoder, a decode scaling method is used to control a log-likelihood ratio for adjusting a weight value that is inputted to the log-likelihood ratio.

    19. The decoding system according to claim 18, wherein the first parameter (α), the second parameter (β) and the third parameter (γ) satisfy a relational expression (β+γ)≥α.

    20. The decoding system according to claim 19, wherein an equation for obtaining the information that the check node provides to the variable node is as follows:

    for m ∈ {1, …, M} and n ∈ N_m:

    c_{m,n}^{(i)} = ( ∏_{n′∈N_m\n} sign(v_{n′,m}^{(i)}) ) · { min1^est, when the n of c_{m,n}^{(i)} is not the n where the minimum v2c is located; min2^est, otherwise };

    min1 = min_{n∈N_m} ( |v_{n,m}^{(i)}| );

    min1^est = α · min1;

    min2^est = β · min1 + γ · min2′″;

    wherein “N” denotes a number of the variable nodes; “M” denotes a number of the check nodes; “c_{m,n}^{(i)}” denotes information that the m-numbered check node sends to the n-numbered variable node; “n′” denotes the number(s) of the remaining variable node(s) after excluding the connection to be calculated; “v_{n′,m}^{(i)}” denotes the information (i.e., the v2c information) that the n′-numbered variable node(s) send to the m-numbered check node after excluding the connection to be calculated; the sign function “sign( )” returns a value “0”, “1” or “−1” according to whether its argument is “0”, “a positive number” or “a negative number”; “min1” is the first minimum; min_{n∈N_m}( |v_{n,m}^{(i)}| ) is the function used to acquire the minimum; “min1^est” is the estimated first minimum; “min2^est” is the estimated second minimum; and “min2′″” denotes the false second minimum.

    Description

    BRIEF DESCRIPTION OF THE DRAWINGS

    [0017] The described embodiments may be better understood by reference to the following description and the accompanying drawings, in which:

    [0018] FIG. 1 is a schematic diagram depicting a circuit framework of a decoding system according to one embodiment of the present disclosure;

    [0019] FIG. 2 is a schematic diagram exemplarily showing a Tanner graph that illustrates a decoding process with a low density parity check code;

    [0020] FIG. 3 is a schematic diagram exemplarily showing a Tanner graph that illustrates a sum calculation during the decoding process with the low density parity check code;

    [0021] FIG. 4 is a schematic diagram exemplarily showing a Tanner graph that illustrates a product calculation during the decoding process with the low density parity check code;

    [0022] FIG. 5 to FIG. 8 show block diagrams of logic circuits that are used to calculate a first minimum and a false second minimum according to embodiments of the present disclosure;

    [0023] FIG. 9 is a schematic block diagram of logic circuits that implement a modified SMAMSA according to one embodiment of the present disclosure; and

    [0024] FIG. 10 is a flow chart describing a decoding method adopting an algorithm with weight-based adjusted parameters according to one embodiment of the present disclosure.

    DETAILED DESCRIPTION OF THE EXEMPLARY EMBODIMENTS

    [0025] The present disclosure is more particularly described in the following examples that are intended as illustrative only since numerous modifications and variations therein will be apparent to those skilled in the art. Like numbers in the drawings indicate like components throughout the views. As used in the description herein and throughout the claims that follow, unless the context clearly dictates otherwise, the meaning of “a”, “an”, and “the” includes plural reference, and the meaning of “in” includes “in” and “on”. Titles or subtitles can be used herein for the convenience of a reader, which shall have no influence on the scope of the present disclosure.

    [0026] The terms used herein generally have their ordinary meanings in the art. In the case of conflict, the present document, including any definitions given herein, will prevail. The same thing can be expressed in more than one way. Alternative language and synonyms can be used for any term(s) discussed herein, and no special significance is to be placed upon whether a term is elaborated or discussed herein. A recital of one or more synonyms does not exclude the use of other synonyms. The use of examples anywhere in this specification including examples of any terms is illustrative only, and in no way limits the scope and meaning of the present disclosure or of any exemplified term. Likewise, the present disclosure is not limited to various embodiments given herein. Numbering terms such as “first”, “second” or “third” can be used to describe various components, signals or the like, which are for distinguishing one component/signal from another one only, and are not intended to, nor should be construed to impose any substantive limitations on the components, signals or the like.

    [0027] The present disclosure is related to a decoding method adopting an algorithm with weight-based adjusted parameters and a decoding system. A modified single-min-sum algorithm (hereinafter referred to as “modified SMAMSA”) is provided. Compared with conventional algorithms, the modified SMAMSA adopts variables with more dimensions to improve calculation performance. Furthermore, the modified SMAMSA has an expanded range of application, and can achieve lower error rates, reduced hardware complexity, and lower power consumption.

    [0028] Reference is made to FIG. 1, which is a schematic diagram of a framework of the decoding system that implements the decoding method adopting an algorithm with weight-based adjusted parameters according to one embodiment of the present disclosure. In a signal transmission process of a communication system that uses the decoding system, data may be damaged by interference that reduces the reliability of signal transmission over the transmission medium. Therefore, additional information such as an error correction code (e.g., an LDPC code) can be added into the signals to be transmitted. The error correction code allows a receiving end to infer the correct information based on the received information, so as to restore the damaged data.

    [0029] The decoding system includes a decoder disposed at the receiving end. The decoder receives signals, such as communication signals, via an input circuit 101. After being initialized, the signals are inputted to a log-likelihood ratio (LLR) operator 103, and the LLR operator 103 provides a log-likelihood ratio, which allows the decoding system to achieve a lower error rate and higher performance. Furthermore, a decode scaling method can be used to control the log-likelihood ratio, so as to provide a correct log-likelihood ratio to a low density parity checker (LDPC) 105. In the decoding process, multiple updates and iterations can be performed according to the connection relationships among the check nodes and the variable nodes. Lastly, the content of the signals can be verified based on a probability, and the decoded signals are outputted via an output circuit 107.
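    As an illustration of the computation performed at the LLR operator 103, the following Python sketch computes a per-symbol log-likelihood ratio. The BPSK-over-AWGN channel model, the function name `bpsk_llr`, and the `scale` factor standing in for the decode scaling weight are assumptions made for illustration, not details taken from the disclosure.

```python
def bpsk_llr(y, noise_var, scale=1.0):
    """LLR of a received BPSK symbol y over an AWGN channel with noise
    variance noise_var: LLR(y) = 2*y / sigma^2.  The 'scale' factor is a
    hypothetical stand-in for the decode scaling weight applied before the
    LDPC decoder; a positive LLR suggests bit 0, a negative LLR bit 1."""
    return scale * 2.0 * y / noise_var
```

    The scaled LLR values are what the variable nodes receive as intrinsic information during initialization.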

    [0030] To explain the technical features of the decoding method adopting an algorithm with weight-based adjusted parameters and the decoding system according to certain embodiments of the present disclosure, the differences between the conventional single minimum algorithm and the min-sum algorithm will be described. An exemplary decoding process of the min-sum algorithm is as follows.

    [0031] The various algorithms provided in different phases of the present disclosure (such as the modified SMAMSA) can be applied to a decoder. In the min-sum algorithm, M×N low density parity check codes including N variable nodes and M check nodes are provided. It should be noted that the complexity of computation can be determined according to a degree of the check nodes, i.e., a quantity of the variable nodes included in a check equation. Reference is made to FIG. 2, which is a schematic diagram depicting an exemplary Tanner graph used to illustrate a decoding process with the low density parity check codes.

    [0032] In FIG. 2, N variable nodes 21 and M check nodes 22 are shown. The decoding process of a decoder is based on a concept of message passing, e.g., probabilities calculated at each of the variable nodes 21 and the check nodes 22 being transmitted to each other. In the diagram, the probability that a specific variable node (e.g., an n.sup.th variable node 211) transmits a message to one of the check nodes (e.g., an m.sup.th check node 221) is calculated. With exclusion of the connection between the n.sup.th variable node 211 and the m.sup.th check node 221, the probability is determined based on the connections between the n.sup.th variable node 211 and the other check nodes.

    [0033] In the decoding equation with the low density parity check codes, “n” denotes a number of the variable node, “m” denotes a number of the check node, “N.sub.m” denotes all of the variable nodes 21 that participate in the current (m.sup.th) check equation, and “M.sub.n” denotes all of the check nodes 22 whose check equations the current (n.sup.th) variable node participates in.

    [0034] FIG. 2 also shows the multiple connections between the multiple variable nodes 21 and the multiple check nodes 22. These connections denote a decoding process with the low density parity check codes. In the decoding process, the information is transmitted between the two types of nodes after their probabilities are calculated. Here, the connection between the n.sup.th variable node 211 and the m.sup.th check node 221 is taken as an example. During an initialization process, intrinsic information (such as a log-likelihood ratio (LLR) of the decoder) is written into the multiple variable nodes 21. It should be noted that the log-likelihood ratio indicates whether a value of the variable node is close to 0 or 1. “i=k” denotes a k.sup.th iteration in an LDPC decoding process. “v.sub.n,m.sup.(i=k)” denotes the v2c information 201 that the n.sup.th variable node 211 provides to the m.sup.th check node 221 in the k.sup.th iteration, in which “v2c” denotes the information that the variable node provides to the check node. “c.sub.m,n.sup.(i=k)” denotes the c2v information 202 that the m.sup.th check node 221 provides to the n.sup.th variable node 211 in the k.sup.th iteration, in which “c2v” denotes the information that the check node provides to the variable node. “I.sub.n” denotes the intrinsic information of the n.sup.th variable node, i.e., the original information when the signals enter the system. “α” indicates a normalization factor. For example, in a min-sum algorithm (MS), “α=1”; in a normalized min-sum algorithm (NMS), “α≠1”.

    [0035] The calculation steps of the min-sum algorithm are described as follows.

    [0036] In an initialization stage, the intrinsic information is one-by-one written into the multiple variable nodes. For the check nodes, “i=0” denotes an initial state before any iteration (a 0.sup.th iteration) is performed. In equation 1, “c.sub.m,n.sup.(i=0)” is 0, which indicates that the initial state of each check node is 0. The symbol “∀” means “for any”, “∈” means “belonging to”, and “∀n∈N.sub.m” denotes any “n” belonging to “N.sub.m”.


    c_{m,n}^{(i=0)} = 0, ∀m ∈ {1, …, M}, ∀n ∈ N_m.  (Equation 1)

    [0037] The first step is to update the information of the variable nodes. In equation 2, the information (i.e., the v2c information) that the variable node provides to the check node is updated, in which “N” is a quantity of the variable nodes, “M” is a quantity of the check nodes, “n” is a number of the variable node, and “m” is a number of the check node.


    For n ∈ {1, …, N} and m ∈ M_n:

    v_{n,m}^{(i)} = I_n + Σ_{m′∈M_n\m} c_{m′,n}^{(i−1)}.  (Equation 2)

    [0038] After the k.sup.th iteration, “v.sub.n,m.sup.(i=k)” forms the information that the variable node provides to the m.sup.th check node. After excluding the connection to be calculated, a sum over the remaining connections between the variable node and the check nodes is calculated. The remaining connections are those between the variable node and the other check nodes. Reference is made to FIG. 3, which is an exemplary Tanner graph used to illustrate the sum calculation during the LDPC decoding process. For example, to form the information (v.sub.n,m.sup.(i=k)) that the n.sup.th variable node 211 provides to the m.sup.th check node 221 after multiple iterations, the connection indicative of the v2c information 201 is excluded from the connections (301, 302 and 303) between the check nodes 311, 312 and 313 and the n.sup.th variable node 211. A sum of the information transmitted from the check nodes 311, 312 and 313 to the n.sup.th variable node 211 (i.e., the c2v information) and the intrinsic information (I.sub.n) of the n.sup.th variable node 211 needs to be obtained, so that the information of the variable node can be updated to “v.sub.n,m.sup.(i)”.
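    The variable-node update of equation 2 can be sketched in Python as follows. The dictionary-based message representation, the edge list, and the function name `update_v2c` are illustrative assumptions, not structures taken from the disclosure.

```python
def update_v2c(intrinsic, c2v, edges):
    """Variable-node update of Equation 2, as a minimal sketch.

    intrinsic : dict n -> I_n, the intrinsic LLR written in at initialization
    c2v       : dict (m, n) -> message from check node m to variable node n
    edges     : list of (m, n) Tanner-graph connections
    Returns v2c : dict (n, m) -> message from variable node n to check node m.
    """
    v2c = {}
    for (m, n) in edges:
        # Sum the c2v messages from every check node connected to n,
        # excluding the connection (m, n) currently being calculated.
        total = sum(c2v[(m2, n2)] for (m2, n2) in edges if n2 == n and m2 != m)
        v2c[(n, m)] = intrinsic[n] + total
    return v2c
```

    Excluding the edge being calculated keeps the message extrinsic, which is what makes the iterative exchange converge.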

    [0039] The second step is to update the information of the check nodes. In equation 3, the c2v information 202 that the m.sup.th check node 221 provides to the n.sup.th variable node 211 is updated. The sign function “sign( )” returns 0, 1 or −1 according to whether its argument is 0, a positive number or a negative number.

    [00003] For m ∈ {1, …, M} and n ∈ N_m:

    c_{m,n}^{(i)} = α · ( ∏_{n′∈N_m\n} sign(v_{n′,m}^{(i)}) ) · ( min_{n′∈N_m\n} |v_{n′,m}^{(i)}| ).  (Equation 3)

    [0040] In the k.sup.th iteration, “c.sub.m,n.sup.(i=k)” forms the information that the check node provides to the variable node. After excluding the connection to be calculated, a product over the remaining connections between the variable nodes and the check node is calculated, and a minimum is obtained. Reference is made to FIG. 4, which shows a Tanner graph that is used to exemplarily illustrate the product calculation during the LDPC decoding process. Here, the information that the m.sup.th check node 221 provides to the n.sup.th variable node 211 is taken as an example. In the process of forming the information (c.sub.m,n.sup.(i=k)) that the check node provides to the variable node after multiple iterations, the connection indicative of the c2v information 202 is excluded, and a product of the v2c information formed from the connections (401, 402 and 403) between the remaining variable nodes 411, 412 and 413 and the m.sup.th check node 221 is calculated. A minimum “c.sub.m,n.sup.(i)” is then obtained, and is used to update the c2v information that the m.sup.th check node 221 provides to the n.sup.th variable node 211.
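    The check-node update of equation 3 can be sketched as follows, using the same hypothetical dictionary-based messages as above; the function name `update_c2v_minsum` and the treatment of sign(0) as +1 are simplifying assumptions for illustration.

```python
import math

def update_c2v_minsum(v2c, edges, alpha=1.0):
    """Check-node update of Equation 3 (min-sum), as a sketch.

    v2c   : dict (n, m) -> message from variable node n to check node m
    edges : list of (m, n) connections
    alpha : normalization factor (alpha = 1 gives MS, alpha != 1 gives NMS)
    """
    c2v = {}
    for (m, n) in edges:
        sign, minimum = 1.0, math.inf
        for (m2, n2) in edges:
            if m2 == m and n2 != n:      # exclude the connection being calculated
                x = v2c[(n2, m2)]
                sign *= 1.0 if x >= 0 else -1.0   # simplified: sign(0) taken as +1
                minimum = min(minimum, abs(x))
        c2v[(m, n)] = alpha * sign * minimum
    return c2v
```

    Note that this direct form searches for a minimum once per excluded edge; the single-min approach described later avoids exactly this repeated search.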

    [0041] In a decision stage, the information obtained in the above steps can be summed up for making a final decision. For example, a hard-decision algorithm is first used to decode, and each input signal and each output signal can be expressed as 1 or 0. In equation 4, the intrinsic information (I.sub.n) of the variable node and the updated information that the check node provides to the multiple variable nodes via the connections between the check node and the other variable nodes are summed up as (Σ.sub.m′∈M.sub.nc.sub.m′,n.sup.i), so as to obtain the information (v.sub.n) of the variable node and make a final decision (D.sub.n), in which D.sub.n is 0 or 1.

    [00004] v_n = I_n + Σ_{m′∈M_n} c_{m′,n}^{(i)};

    D_n = { 1, if v_n < 0; 0, if v_n ≥ 0 }.  (Equation 4)
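    The decision stage of equation 4 can be sketched as follows; the function name `hard_decision` and the message dictionaries are the same illustrative assumptions used above.

```python
def hard_decision(intrinsic, c2v):
    """Decision stage of Equation 4, as a sketch: v_n sums the intrinsic
    information I_n and every c2v message arriving at variable node n;
    the hard decision is D_n = 1 if v_n < 0, and D_n = 0 otherwise."""
    decisions = {}
    for n, i_n in intrinsic.items():
        v_n = i_n + sum(val for (m, n2), val in c2v.items() if n2 == n)
        decisions[n] = 1 if v_n < 0 else 0
    return decisions
```

    In a full decoder this decision would be checked against the parity equations, and further iterations performed if any check fails.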

    [0042] In a next step, a single-min-sum algorithm (SMAMSA) can be used instead of the second step described above, and the check node is updated as in equation 5.

    [0043] In equation 5, the check node is updated. That is, the c2v information is updated. There are N variable nodes and M check nodes, in which “n” denotes a number of the variable node and “m” denotes a number of the check node.

    [00005] For m ∈ {1, …, M} and n ∈ N_m:  (Equation 5)

    c_{m,n}^{(i)} = ( ∏_{n′∈N_m\n} sign(v_{n′,m}^{(i)}) ) · { min1^est, when the n of c_{m,n}^{(i)} is not the n where the minimum v2c is located; min2^est, otherwise };

    min1 = min_{n∈N_m} ( |v_{n,m}^{(i)}| );

    min1^est = α · min1;

    min2^est = β · min1 + γ · min2′″.

    [0044] “c.sub.m,n.sup.(i)” denotes the information that the check node provides to the variable node. In the SMAMSA, “c.sub.m,n.sup.(i)” is obtained by selecting either an estimated first minimum (min1.sup.est) or an estimated second minimum (min2.sup.est). The estimated first minimum (min1.sup.est) is used if the number (“n”) of the variable node in “c.sub.m,n.sup.(i)” is not at the position of the minimum v2c information that the variable nodes provide to the check node; otherwise, the estimated second minimum (min2.sup.est) is used. Accordingly, a dot product is performed between the estimated first minimum (min1.sup.est) or the estimated second minimum (min2.sup.est) and the updated variable node information for updating the information (c.sub.m,n.sup.(i)) of the check node. “α” and “γ” are operational parameters in the equations for obtaining the estimated first minimum and the estimated second minimum.
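    The selection rule above can be sketched as follows. This is a minimal illustration, not the patented implementation: the messages are the same hypothetical dictionaries used earlier, and the exact second-smallest magnitude stands in for the false second minimum min2′″ that the comparison tree of FIG. 5 to FIG. 8 would provide.

```python
def update_c2v_smamsa(v2c, edges, alpha, beta, gamma):
    """Check-node update with single-min estimation (Equation 5), as a sketch.

    Only one true minimum (min1) and its position are searched per check node;
    the exact second-smallest value is used here as a stand-in for the false
    second minimum min2''' (an assumption for illustration).
    """
    checks = {}
    for (m, n) in edges:                  # group variable nodes per check node
        checks.setdefault(m, []).append(n)
    c2v = {}
    for m, ns in checks.items():
        vals = sorted((abs(v2c[(n, m)]), n) for n in ns)
        min1, argmin = vals[0]
        min2_false = vals[1][0]           # stand-in for min2'''
        total_sign = 1.0
        for n in ns:
            total_sign *= 1.0 if v2c[(n, m)] >= 0 else -1.0
        min1_est = alpha * min1                        # estimated first minimum
        min2_est = beta * min1 + gamma * min2_false    # estimated second minimum
        for n in ns:
            # multiplying by the sign of edge n again removes it from the product
            s = total_sign * (1.0 if v2c[(n, m)] >= 0 else -1.0)
            # min1_est when n is not where the minimum v2c is located
            c2v[(m, n)] = s * (min2_est if n == argmin else min1_est)
    return c2v
```

    With α=1, β=0 and γ=1 this reduces to the ordinary two-minimum min-sum update, which makes the role of the three weights easy to see.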

    [0045] In the above-mentioned single-min algorithm, in order to estimate the second minimum (min2, i.e., the second smallest value), the second minimum originally used in the min-sum algorithm (MS) is substituted by the estimated second minimum (min2.sup.est). The estimated second minimum is obtained from a false second minimum (min2′″) that can be acquired while searching for the first minimum (min1).

    [0046] Any of the circuits depicted in FIG. 5 to FIG. 8 can be used to search for the first minimum (min1), although the first minimum (min1) can also be obtained by other methods. In particular, apart from the first minimum, any of the circuits depicted in FIG. 5 to FIG. 8 also yields additional information. Since this additional information is calculated from actual signals, it has a certain credibility, and can therefore serve as the false second minimum (min2′″) that estimates the second minimum. The false second minimum (min2′″) can thus be obtained by the SMAMSA without any additional circuit. Accordingly, there is no need to use any additional hardware to obtain the second minimum (min2).

    [0047] FIG. 5 to FIG. 8 show block diagrams of logic circuits whose check node degree is 16 according to certain embodiments of the present disclosure.

    [0048] In FIG. 5, 16 input signals are inputted to 4 calculation units M41. After comparison operations, 4 minimums are obtained from the 4 calculation units M41 and are then inputted to a calculation unit M42. After the 4 minimums are compared, a first minimum (min1) is obtained. It should be noted that a second minimum is not calculated; instead, a false second minimum (min2′″) is obtained from data accompanied with the first minimum. In FIG. 6, 16 input signals are inputted to 8 calculation units M21. Every 4 calculation units M21 generate a minimum that is inputted to the next 2 calculation units M41. Afterwards, a next calculation unit M22 generates a first minimum (min1) and a false second minimum (min2′″). In FIG. 7, 16 input signals are correspondingly inputted to 8 calculation units M21. Every 2 calculation units M21 generate a minimum that is inputted to the next 4 calculation units M21. Afterwards, the minimums obtained from the next 2 calculation units M21 are inputted to a calculation unit M22 for obtaining a first minimum (min1) and a false second minimum (min2′″) through a comparison operation. In FIG. 8, 16 input signals are correspondingly inputted to 8 calculation units M21. Every 2 calculation units M21 generate a minimum that is inputted to the next 4 calculation units M21. Afterwards, a next calculation unit M42 generates a first minimum (min1) and a false second minimum (min2′″).
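    The comparison-tree idea behind these circuits can be sketched in software as a pairwise tournament. The function name `tree_min_with_false_second` and the choice of the final comparison's loser as min2′″ are illustrative assumptions about one plausible realization; the figures may wire the tree differently.

```python
def tree_min_with_false_second(values):
    """Sketch of a comparison tree: find the first minimum (min1) by a
    tournament of pairwise comparisons, and take the value that loses the
    final comparison as the false second minimum (min2''').  That loser is
    always >= the true second minimum, and it accompanies min1 for free --
    no dedicated second-minimum search is needed."""
    assert len(values) >= 2
    layer = list(values)
    loser_of_final = None
    while len(layer) > 1:
        nxt = []
        for i in range(0, len(layer) - 1, 2):
            a, b = layer[i], layer[i + 1]
            nxt.append(min(a, b))
            if len(layer) == 2:           # the final comparison of the tree
                loser_of_final = max(a, b)
        if len(layer) % 2:                # carry an unpaired value upward
            nxt.append(layer[-1])
        layer = nxt
    return layer[0], loser_of_final
```

    For the input `[1, 2, 10, 20]` the true second minimum is 2, but min2′″ comes out as 10, because 2 was eliminated in the same subtree as min1; this is exactly the "scrambled" behavior the weighted estimate later compensates for.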

    [0049] For example, when the check node degree is 16, each of the 16 input signals is the v2c information that the n.sup.th variable node provides to the m.sup.th check node, which can be expressed by “v.sub.n,m.sup.(i)”. Afterwards, the first minimum (min1) is obtained, and the false second minimum (min2′″) can be obtained from data accompanied with the first minimum. The false second minimum (min2′″) can be regarded as the actual second minimum plus noise, i.e., a scrambled second minimum.

    [0050] The decoding method and the decoding system of the present disclosure improve the conventional SMAMSA, and equation 6 for the estimated second minimum is provided. Equation 6 implements a modified SMAMSA (referred to as “M-SMAMSA”).

    [0051] In equation 6, the function min_{n′∈N_m}( ) is used to obtain a minimum. Specifically, min_{n′∈N_m}( |v_{n′,m}^{(i)}| ) obtains the minimum of the v2c information “v.sub.n′,m.sup.(i)” (that is, the information that the variable nodes provide to the check node, with exclusion of the connection to be calculated), from which the estimated first minimum and the estimated second minimum can be obtained. Based on the information (v.sub.n′,m.sup.(i)) and with exclusion of the connection to be calculated, a product over the other connections (n′) between the variable nodes and the check node is calculated. It should be noted that equation 6 estimates the information of a node based on the adjacent connections. After a dot product between this product and the estimated first minimum or the estimated second minimum is calculated, the information (c.sub.m,n.sup.(i)) that the check node provides to the variable node is obtained.

    [00008] For $m \in \{1, \ldots, M\}$ and $n \in N_m$: (Equation 6)

$$c_{m,n}^{(i)} = \left( \prod_{n' \in N_m \setminus n} \operatorname{sign}\!\left(v_{n',m}^{(i)}\right) \right) \cdot \begin{cases} \mathrm{min1}^{est}, & \text{when } n \text{ of } c_{m,n}^{(i)} \text{ is not where the minimum v2c is located;} \\ \mathrm{min2}^{est}, & \text{otherwise;} \end{cases}$$

$$\mathrm{min1} = \min_{n' \in N_m}\left(\left|v_{n',m}^{(i)}\right|\right); \quad \mathrm{min1}^{est} = \alpha \cdot \mathrm{min1}; \quad \mathrm{min2}^{est} = \beta \cdot \mathrm{min1} + \gamma \cdot \mathrm{min2}'''.$$
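Equation 6 can be sketched in code as follows. This is an illustrative reading, not the patent's implementation: the names (v2c, alpha, beta, gamma, min2_false) are assumptions, and min2_false is assumed to be supplied by the minimum-search circuitry of FIG. 5 to FIG. 8.

```python
import math

def check_node_update(v2c, alpha, beta, gamma, min2_false):
    """Equation 6 (M-SMAMSA) check-node update for one check node m.

    v2c: list of v_{n,m}^{(i)} for all n in N_m.
    Returns the list of c_{m,n}^{(i)} values sent back to the variable nodes.
    """
    mags = [abs(v) for v in v2c]
    min1 = min(mags)
    argmin = mags.index(min1)            # location of the minimum v2c
    min1_est = alpha * min1              # min1^est = alpha * min1
    min2_est = beta * min1 + gamma * min2_false  # min2^est = beta*min1 + gamma*min2'''
    c2v = []
    for n in range(len(v2c)):
        # product of the signs of the other connections, excluding n
        sign = 1.0
        for k in range(len(v2c)):
            if k != n:
                sign *= math.copysign(1.0, v2c[k])
        # min1^est when n is not where the minimum is located, else min2^est
        mag = min2_est if n == argmin else min1_est
        c2v.append(sign * mag)
    return c2v
```

Note that only min1 is actually searched for; min2_est is synthesized from min1 and the false second minimum, which is the hardware saving the modified SMAMSA aims for.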

    [0052] Compared to the conventional SMAMSA, the estimated first minimum (min1.sup.est) obtained from the modified SMAMSA of equation 6 is consistent with the first minimum obtained in equation 5 before the modification. “α”, “β” and “γ” are weights in equation 6. In the modified SMAMSA, the first minimum (min1) is multiplied by “α” (a first parameter) to obtain the estimated first minimum (min1.sup.est). To obtain the estimated second minimum (min2.sup.est), the first minimum (min1) is multiplied by “β” (a second parameter), and the false second minimum (min2′″) multiplied by “γ” (a third parameter) is added thereto.

    [0053] Accordingly, the modified SMAMSA changes the number of dimensions used for generating the estimated second minimum (min2.sup.est). For example, the two dimensions in the original SMAMSA (e.g., in equation 5) are changed to three dimensions (e.g., in equation 6). Therefore, the modified SMAMSA is able to expand its range of use, so as to cooperate with more modulations and support a wider range of fixed points of the decoder. In the present disclosure, the modified SMAMSA is also required to comply with the following rules when the parameters “α”, “β” and “γ” are added to equation 6.

    [0054] min2.sup.est≥min1.sup.est.

    [0055] min2′″≥min1.

    [0056] Accordingly, the lower bound of min2′″ is min1. In addition, if min2′″=min1 and “min2.sup.est” is at its lower bound, a relationship of min2.sup.est=(β+γ)·min1≥min1.sup.est=α·min1 can be derived. Therefore, the parameters “α”, “β” and “γ” must comply with the relationship of (β+γ)≥α.
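The derivation above can be checked numerically. The sketch below verifies the parameter rule: at the lower bound min2′″ = min1, the estimated second minimum becomes (β+γ)·min1, so min2.sup.est ≥ min1.sup.est holds for every nonnegative min1 exactly when (β+γ) ≥ α. The parameter values are illustrative only.

```python
def parameters_valid(alpha, beta, gamma):
    """Rule derived above: (beta + gamma) >= alpha."""
    return (beta + gamma) >= alpha

def estimates(min1, min2_false, alpha, beta, gamma):
    """min1^est = alpha*min1; min2^est = beta*min1 + gamma*min2'''."""
    return alpha * min1, beta * min1 + gamma * min2_false

# Illustrative weights satisfying the rule: (0.75 + 0.5) >= 1.0.
alpha, beta, gamma = 1.0, 0.75, 0.5
assert parameters_valid(alpha, beta, gamma)
# At the lower bound min2''' = min1, min2^est must not fall below min1^est.
min1_est, min2_est = estimates(0.5, 0.5, alpha, beta, gamma)
assert min2_est >= min1_est
```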

    [0057] Reference is made to FIG. 9, which is a block diagram of logic circuits that implement the modified SMAMSA according to one embodiment of the present disclosure. FIG. 10 is a flow chart describing the decoding method adopting an algorithm with weight-based adjusted parameters according to one embodiment of the present disclosure.

    [0058] In FIG. 10, after a decoder receives signals, M×N low density parity check codes including N variable nodes and M check nodes are generated and expressed by input signals 901 and 902 (step S101). The input signals are, for example, the information (v.sub.n′,m.sup.(i)) that the multiple variable nodes provide to the check nodes. To update the information of the check nodes, via a calculation unit 90 (references are made to FIG. 5 to FIG. 8), equation 6 is used to calculate a first minimum (min1) (step S103). The data accompanied with the first minimum can be used to obtain a false second minimum (min2′″) (step S105).

    [0059] In equation 6, in compliance with the relationship of (β+γ)≥α, the first minimum is multiplied by a first parameter (α) for obtaining an estimated first minimum (min1.sup.est) (step S107). An estimated second minimum (min2.sup.est) is obtained by multiplying the first minimum by a second parameter (β) and adding the false second minimum multiplied by a third parameter (γ) (step S109). In the meantime, based on the information (v.sub.n′,m.sup.(i)) that the variable nodes provide to the check nodes, after excluding the connection to be calculated, the signs of the remaining connections are multiplied for determining a sign value of 0 or 1 (step S111). Afterwards, whether or not the value “n” (a number of the variable node) of “c.sub.m,n.sup.(i)” is at the connection with the minimum (v.sub.n,m.sup.(i)) that the n.sup.th variable node provides to the m.sup.th check node is determined. In accordance with equation 6, the estimated first minimum (min1.sup.est) is used if the value “n” of “c.sub.m,n.sup.(i)” is not at the connection with the minimum; on the contrary, the estimated second minimum (min2.sup.est) is used if the value “n” is at that connection. The sign value (0 or 1) obtained in step S111 is then used to perform a dot product with the selected minimum, so as to obtain the information (c.sub.m,n.sup.(i)) that the check nodes provide to the variable nodes (step S113).

    [0060] Furthermore, in the decoding method, a pre-processing process can be performed before the signals are inputted to the decoder. In the pre-processing process, in view of hardware limitations, a decode-scaling method can be performed to adjust the signals, such that the decoder is able to identify the features of the signals. The decode scaling can be determined by comparing a noise-power ratio with a threshold set by the decoding system. The decode-scaling method can mitigate a bandwidth limitation caused by the fixed points of the decoder, thereby alleviating the problem of reduced performance due to the bandwidth limitation.

    [0061] For example, for the LLR entering the LDPC decoder, the decode-scaling method is used to adjust a weight of the LLR inputted to the decoder since the information of an inverse of noise power (1/σ.sup.2) is required by the decoder when performing an LLR calculation. It should be noted that the inverse of noise power (1/σ.sup.2) provides information of a signal-noise ratio (SNR), and the inverse of noise power (1/σ.sup.2) can be used to adjust the strength of signals in each channel. Therefore, a correct LLR is provided to the decoder.

    [0062] In a practical application, the signal-noise ratio operated in the decoding system may cross an 8-9 dB interval, such that a dynamic range of the inverse of noise power required when the LLR is inputted to the LDPC decoder falls in a range of 2 to 8. To completely cover the whole dynamic range, the fixed points of the decoder should increase a bit width for maintaining the performance of the decoder. However, the additional bit width may increase hardware area and power consumption. Accordingly, the decode-scaling method for the LLR is provided for suppressing changes such as the bit width of the fixed points.
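The pre-processing described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the patent's decode-scaling method: the threshold value, the halving of the weight, and the fixed-point bit width are all hypothetical choices showing how an LLR weight derived from the inverse noise power (1/σ²) could be compared with a threshold and kept within the decoder's fixed-point range.

```python
def decode_scale_llr(channel_values, sigma2, threshold=4.0, bits=6):
    """Scale channel observations into LLRs weighted by 2/sigma^2.

    When the inverse noise power exceeds the (illustrative) threshold,
    the weight is halved so that the scaled LLRs stay within the
    fixed-point dynamic range; values are then saturated to `bits` bits.
    """
    inv_noise = 1.0 / sigma2           # 1/sigma^2 carries the SNR information
    weight = 2.0 * inv_noise           # AWGN LLR weight: 2/sigma^2
    if inv_noise > threshold:          # decode scaling: suppress large weights
        weight *= 0.5
    lo, hi = -(2 ** (bits - 1)), 2 ** (bits - 1) - 1
    return [max(lo, min(hi, round(y * weight))) for y in channel_values]
```

The point of the comparison with the threshold is that the full dynamic range of 1/σ² (2 to 8 in the example above) need not be covered by extra fixed-point bits; the scaling absorbs part of the range instead.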

    [0063] In summation, according to the above embodiments of the decoding method adopting an algorithm with weight-based adjusted parameters and the decoding system, the modified SMAMSA, which is a new single-sum framework with a layered decoding technology, is provided. Compared with the conventional algorithms, the modified SMAMSA provides variables with more dimensions, and its range of application is expanded. Better performance than the conventional NMS is also provided. Further, the modified SMAMSA provides a lower error rate when the single minimum is used with the scrambled second minimum. Since only the first minimum is searched for, the modified SMAMSA can reduce hardware complexity and power consumption. At the input signal end, the decoding method uses various decode-scaling methods for different signal-noise ratios, and therefore provides optimized performance while reducing hardware area through the reduced bit width.

    [0064] The foregoing description of the exemplary embodiments of the disclosure has been presented only for the purposes of illustration and description and is not intended to be exhaustive or to limit the disclosure to the precise forms disclosed. Many modifications and variations are possible in light of the above teaching.

    [0065] The embodiments were chosen and described in order to explain the principles of the disclosure and their practical application so as to enable others skilled in the art to utilize the disclosure and various embodiments and with various modifications as are suited to the particular use contemplated. Alternative embodiments will become apparent to those skilled in the art to which the present disclosure pertains without departing from its spirit and scope.