DECODING METHOD ADOPTING ALGORITHM WITH WEIGHT-BASED ADJUSTED PARAMETERS AND DECODING SYSTEM
20220399903 · 2022-12-15
Inventors
CPC classification
H03M13/6516 · H03M13/1125 · H03M13/458 (ELECTRICITY)
International classification
Abstract
A decoding method adopting an algorithm with weight-based adjusted parameters and a decoding system are provided. The decoding method is applied to a decoder. M×N low density parity check codes (LDPC codes) having N variable nodes and M check nodes are generated from input signals. In the decoding method, information of the variable nodes and the check nodes is initialized. The information passed from the variable nodes to the check nodes is formed after multiple iterations. After excluding a connection to be calculated, a product of the remaining connections between the variable nodes and the check nodes is calculated. Next, an estimated first minimum or an estimated second minimum can be calculated with multi-dimensional parameters. The information passed from the check nodes to the variable nodes can be updated for making a decision.
Claims
1. A decoding method adopting an algorithm with weight-based adjusted parameters, wherein the decoding method is applied to a decoder having “N” variable nodes and “M” check nodes, in which input signals generate “M*N” low density parity check codes, and the decoding method comprises: initializing information of the variable nodes and the check nodes; updating the variable nodes, wherein each of the variable nodes is updated based on the information of the check nodes connected thereto, and wherein the information that each of the variable nodes provides to the check nodes is formed by multiple iterations, and a sum of connections among the variable nodes and the check nodes is calculated after excluding the connections to be calculated; and updating the check nodes, wherein each of the check nodes is updated according to the information of the variable nodes connected thereto, wherein the information that each of the check nodes provides to the variable nodes is formed by the multiple iterations, a product of connections among the variable nodes and the check nodes is calculated after excluding the connections to be calculated, and a dot product is then calculated according to an estimated first minimum or an estimated second minimum, so as to obtain the information that the check nodes provide to the variable nodes for making a decision, and wherein: searching for a minimum of the updated variable nodes, so as to obtain a first minimum; obtaining a false second minimum from data accompanied with the first minimum; multiplying the first minimum by a first parameter (α) for obtaining the estimated first minimum; and multiplying the first minimum by a second parameter (β) and adding a result of the false second minimum multiplied by a third parameter (γ), so as to obtain the estimated second minimum.
2. The decoding method according to claim 1, wherein the first parameter (α), the second parameter (β) and the third parameter (γ) satisfy a relation expressed by (β+γ)≥α.
3. The decoding method according to claim 1, wherein, in the step of initializing the information of the variable nodes and the check nodes, intrinsic information is one-by-one written into the multiple variable nodes before the iterations are performed.
4. The decoding method according to claim 1, wherein the estimated first minimum or the estimated second minimum is determined according to a determination result of whether or not a number of the variable node in the information that the check node provides to the variable node is a position of the information that the smallest variable node provides to the check node.
5. The decoding method according to claim 4, wherein the first parameter (α), the second parameter (β) and the third parameter (γ) satisfy a relational expression (β+γ)≥α.
6. The decoding method according to claim 1, wherein intrinsic information of the variable node is summed up with the information that the check node provides to the multiple variable nodes and is updated via a connection between the check node and the other variable nodes, so as to obtain the information of the variable node for making the decision.
7. The decoding method according to claim 6, wherein the first parameter (α), the second parameter (β) and the third parameter (γ) satisfy a relational expression (β+γ)≥α.
8. The decoding method according to claim 1, wherein, in a pre-processing step of the decoder, a decode scaling method is used to control a log-likelihood ratio for adjusting a weight value that is inputted to the log-likelihood ratio.
9. The decoding method according to claim 8, wherein the first parameter (α), the second parameter (β) and the third parameter (γ) satisfy a relational expression (β+γ)≥α.
10. The decoding method according to claim 9, wherein an equation for obtaining the information that the check node provides to the variable node is as follows:
11. A decoding system, comprising a decoder disposed at a receiving end of the decoding system, in which a decoding method adopting an algorithm with weight-based adjusted parameters is performed according to steps as follows: generating M*N low density parity check codes having N variable nodes and M check nodes from input signals; initializing information of the variable nodes and the check nodes; updating the variable nodes, wherein each of the variable nodes is updated based on the information of the check nodes connected thereto, and wherein the information that each of the variable nodes provides to the check nodes is formed by multiple iterations, and a sum of connections among the variable nodes and the check nodes is calculated after excluding the connections to be calculated; and updating the check nodes, wherein each of the check nodes is updated according to the information of the variable nodes connected thereto, wherein the information that each of the check nodes provides to the variable nodes is formed by the multiple iterations, a product of connections among the variable nodes and the check nodes is calculated after excluding the connections to be calculated, and a dot product is then calculated according to an estimated first minimum or an estimated second minimum so as to obtain the information that the check nodes provide to the variable nodes for making a decision, and wherein: searching for a minimum of the updated variable nodes so as to obtain a first minimum; obtaining a false second minimum from data accompanied with the first minimum; multiplying the first minimum by a first parameter (α) for obtaining the estimated first minimum; and multiplying the first minimum by a second parameter (β) and adding a result of the false second minimum multiplied by a third parameter (γ) so as to obtain the estimated second minimum.
12. The decoding system according to claim 11, wherein the first parameter (α), the second parameter (β) and the third parameter (γ) satisfy a relational expression (β+γ)≥α.
13. The decoding system according to claim 11, wherein, in the step of initializing the information of the variable nodes and the check nodes, intrinsic information is one-by-one written into the multiple variable nodes before the iterations are performed.
14. The decoding system according to claim 11, wherein the estimated first minimum or the estimated second minimum is determined according to a determination result of whether or not a number of the variable node in the information that the check node provides to the variable node is a position of the information that the smallest variable node provides to the check node.
15. The decoding system according to claim 14, wherein the first parameter (α), the second parameter (β) and the third parameter (γ) satisfy a relational expression (β+γ)≥α.
16. The decoding system according to claim 11, wherein intrinsic information of the variable node is summed up with the information that the check node provides to the multiple variable nodes and is updated via a connection between the check node and the other variable nodes so as to obtain the information of the variable node for making the decision.
17. The decoding system according to claim 16, wherein the first parameter (α), the second parameter (β) and the third parameter (γ) satisfy a relational expression (β+γ)≥α.
18. The decoding system according to claim 11, wherein, in a pre-processing step of the decoder, a decode scaling method is used to control a log-likelihood ratio for adjusting a weight value that is inputted to the log-likelihood ratio.
19. The decoding system according to claim 18, wherein the first parameter (α), the second parameter (β) and the third parameter (γ) satisfy a relational expression (β+γ)≥α.
20. The decoding system according to claim 19, wherein an equation for obtaining the information that the check node provides to the variable node is as follows:
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0017] The described embodiments may be better understood by reference to the following description and the accompanying drawings, in which:
DETAILED DESCRIPTION OF THE EXEMPLARY EMBODIMENTS
[0025] The present disclosure is more particularly described in the following examples that are intended as illustrative only since numerous modifications and variations therein will be apparent to those skilled in the art. Like numbers in the drawings indicate like components throughout the views. As used in the description herein and throughout the claims that follow, unless the context clearly dictates otherwise, the meaning of “a”, “an”, and “the” includes plural reference, and the meaning of “in” includes “in” and “on”. Titles or subtitles can be used herein for the convenience of a reader, which shall have no influence on the scope of the present disclosure.
[0026] The terms used herein generally have their ordinary meanings in the art. In the case of conflict, the present document, including any definitions given herein, will prevail. The same thing can be expressed in more than one way. Alternative language and synonyms can be used for any term(s) discussed herein, and no special significance is to be placed upon whether a term is elaborated or discussed herein. A recital of one or more synonyms does not exclude the use of other synonyms. The use of examples anywhere in this specification including examples of any terms is illustrative only, and in no way limits the scope and meaning of the present disclosure or of any exemplified term. Likewise, the present disclosure is not limited to various embodiments given herein. Numbering terms such as “first”, “second” or “third” can be used to describe various components, signals or the like, which are for distinguishing one component/signal from another one only, and are not intended to, nor should be construed to impose any substantive limitations on the components, signals or the like.
[0027] The present disclosure relates to a decoding method adopting an algorithm with weight-based adjusted parameters and to a decoding system. A modified single-min-sum algorithm (hereinafter referred to as the “modified SMAMSA”) is provided. Compared with conventional algorithms, the modified SMAMSA adopts variables with more dimensions to improve calculation performance. Furthermore, the modified SMAMSA has an expanded range of application, and can also achieve lower error rates, reduced hardware complexity, and efficient power consumption.
[0028] Reference is made to
[0029] The decoding system includes a decoder disposed at the receiving end. The decoder receives signals, for example, communication signals, via an input circuit 101. After being initialized, the signals are inputted to a log-likelihood ratio (LLR) operator 103, which provides a log-likelihood ratio. Therefore, the decoding system can achieve a lower error rate and higher performance. Furthermore, decode scaling can be used to control the log-likelihood ratio, so as to provide a correct log-likelihood ratio to a low density parity checker (LDPC) 105. In a decoding process, multiple updates and iterations can be performed according to connection relationships among the check nodes and the variable nodes. Lastly, the content of the signals can be verified based on a probability, and the decoded signals are outputted via an output circuit 107.
[0030] To explain the technical features of the decoding method adopting an algorithm with weight-based adjusted parameters and the decoding system according to certain embodiments of the present disclosure, the differences between the conventional single minimum algorithm and the min-sum algorithm will be described. An exemplary decoding process of the min-sum algorithm is as follows.
[0031] According to the various algorithms provided in different phases in the present disclosure (such as the modified SMAMSA), the algorithms can be applied to a decoder. In the min-sum algorithm, M×N low density parity check codes including N variable nodes and M check nodes are provided. It should be noted that the complexity of computation can be determined according to a degree of the check nodes, or a quantity of the variable nodes included in a check equation. Reference is made to
[0032] In
[0033] In the decoding equation with the low density parity check codes, “n” denotes a number of the variable node, “m” denotes a number of the check node, “N_m” denotes all of the variable nodes (N) 21 that participate in the current (m-th) check equation, and “M_n” denotes all of the check equations of the check nodes (M) 22 in which the current (n-th) variable node participates.
[0035] The calculation steps of the min-sum algorithm are described as follows.
[0036] In an initialization stage, the intrinsic information is written one-by-one into the multiple variable nodes. For the check nodes, “i=0” denotes an initial state before any iteration (a 0-th iteration) is performed. In equation 1, “c_{m,n}^{(i=0)}” is 0, which indicates that the initial state of the check node is 0. The symbol “∀” means “for any”, “∈” means “belonging to”, and “∀n∈N_m” denotes any “n” belonging to “N_m”.

c_{m,n}^{(i=0)} = 0, ∀m ∈ {1, . . . , M}, ∀n ∈ N_m.  (Equation 1)
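As an illustrative, non-limiting sketch (the function and variable names are hypothetical and not part of the claims), the initialization of equation 1 can be expressed in Python by setting the c2v message of every edge (m, n) of the parity-check matrix H to 0:

```python
# Illustrative sketch: initialize the check-to-variable (c2v) messages to 0
# for every edge (m, n) where the parity-check matrix H has a 1 (equation 1).
def init_c2v(H):
    """H is an M x N list of 0/1 rows; returns {(m, n): 0.0} for each edge."""
    return {(m, n): 0.0
            for m, row in enumerate(H)
            for n, bit in enumerate(row) if bit}

H = [[1, 1, 0, 1],
     [0, 1, 1, 1]]
c2v = init_c2v(H)  # every c_{m,n}^{(i=0)} starts at 0
```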
[0037] The first step is to update the information of the variable nodes. In equation 2, the information (i.e., the v2c information) that the variable node provides to the check node is updated, in which “N” is a quantity of the variable nodes, “M” is a quantity of the check nodes, “n” is a number of the variable node, and “m” is a number of the check node.

For n ∈ {1, . . . , N} and m ∈ M_n:

v_{n,m}^{(i)} = I_n + Σ_{m′∈M_n\m} c_{m′,n}^{(i−1)}.  (Equation 2)
[0038] After the k-th iteration, “v_{n,m}^{(i=k)}” forms the information that the variable node provides to the m-th check node. After excluding the connection to be calculated, a sum of the remaining connections among the variable nodes and the check nodes is calculated. The remaining connections correspond to the other check equations in which the n-th variable node participates. Reference is made to
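The variable-node update of equation 2 can be sketched as follows (an illustrative Python fragment; all names are hypothetical), where the sum excludes the connection (n, m) being updated:

```python
# Illustrative sketch of equation 2: the v2c message v_{n,m} is the intrinsic
# information I_n plus the c2v messages from all OTHER check nodes connected
# to variable node n (the connection (n, m) being updated is excluded).
def update_v2c(n, m, I, c2v, M_n):
    """M_n: check nodes connected to variable n; c2v: {(m, n): value}."""
    return I[n] + sum(c2v[(mp, n)] for mp in M_n if mp != m)

I = [0.5, -1.0]                    # intrinsic information per variable node
c2v = {(0, 0): 0.2, (1, 0): -0.4}  # messages from check nodes 0 and 1 to n=0
v = update_v2c(0, 0, I, c2v, M_n=[0, 1])  # 0.5 + (-0.4) = 0.1
```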
[0039] The second step is to update the information of the check nodes. In equation 3, the c2v information 201 that the m-th check node 221 provides to the n-th variable node 211 is updated. The sign function “sign( )” returns 0, 1 or −1 depending on whether the value of its argument is zero, positive or negative.
[0040] In the k-th iteration, “c_{m,n}^{(i=k)}” forms the information that the check node provides to the variable node. After excluding the connection to be calculated, a product of the remaining connections between the variable nodes and the check nodes is calculated, and a minimum is obtained. Reference is made to
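The min-sum check-node update described above can be sketched as follows (illustrative only; names are hypothetical): the sign is the product of the signs of the other v2c messages on the check equation, and the magnitude is the minimum of their absolute values:

```python
import math

# Illustrative sketch of the min-sum check-node update: excluding the
# connection (m, n) being updated, take the product of the signs of the
# remaining v2c messages and the minimum of their magnitudes.
def update_c2v(m, n, v2c, N_m):
    """N_m: variable nodes in check equation m; v2c: {(n, m): value}."""
    others = [v2c[(np_, m)] for np_ in N_m if np_ != n]
    sign = math.prod(1 if v >= 0 else -1 for v in others)
    return sign * min(abs(v) for v in others)

v2c = {(0, 0): -0.8, (1, 0): 0.3, (2, 0): -0.5}
c = update_c2v(0, 0, v2c, N_m=[0, 1, 2])  # sign: (+)(-) = -1; min(0.3, 0.5) = 0.3
```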
[0041] In a decision stage, the information obtained in the above steps can be summed up for making a final decision. For example, a hard-decision algorithm is first used to decode. Each input signal and each output signal can be expressed as 1 or 0. In equation 4, the intrinsic information (I_n) of the variable node and the updated information that the check node provides to the multiple variable nodes, via the connections between the check node and the other variable nodes, are summed up as Σ_{m′∈M_n} c_{m′,n}^{(i)}, so as to obtain the information of the variable node for making the decision.
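The decision stage can be sketched as follows (an illustrative Python fragment, assuming the common convention that a negative total LLR decodes to bit 1; the disclosure does not mandate this convention):

```python
# Illustrative sketch of the decision stage: sum the intrinsic information
# with all incoming c2v messages; a non-negative total decodes to bit 0 and
# a negative total decodes to bit 1 (a common LLR sign convention).
def decide(n, I, c2v, M_n):
    total = I[n] + sum(c2v[(m, n)] for m in M_n)
    return 0 if total >= 0 else 1

bit = decide(0, [0.5], {(0, 0): -0.2, (1, 0): -0.6}, M_n=[0, 1])  # 0.5 - 0.8 < 0, so 1
```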
[0042] In the single-minimum min-sum algorithm (SMAMSA), the second step described above (i.e., updating the check nodes) is replaced. The check node is updated as in equation 5.
[0043] In equation 5, the check node is updated. That is, the c2v information is updated. There are N variable nodes and M check nodes, in which “n” denotes a number of the variable node and “m” denotes a number of the check node.
[0044] “c_{m,n}^{(i)}” denotes the information that the check node provides to the variable node. In the SMAMSA, “c_{m,n}^{(i)}” is used to confirm an estimated first minimum (min1^est) or an estimated second minimum (min2^est). The estimated first minimum (min1^est) is used if the number (“n”) of the variable node in “c_{m,n}^{(i)}” is not at the position of the minimum v2c information that the variable node provides to the check node; otherwise, the estimated second minimum (min2^est) is used. Accordingly, a dot product is performed between the estimated first minimum (min1^est) or the estimated second minimum (min2^est) and the updated variable node for updating the information (c_{m,n}^{(i)}) of the check node. “α” and “γ” are operational parameters in the equation for obtaining the estimated first minimum or the estimated second minimum.
[0045] In the above-mentioned single-minimum algorithm, the second minimum (min2, i.e., the second smallest value) originally used in the min-sum algorithm (MS) is substituted by the estimated second minimum (min2^est). The estimated second minimum is obtained from a false second minimum (min2′″) that can be obtained while searching for the first minimum (min1).
[0046] Any of the circuits depicted in
[0048] In
[0049] For example, when the check node degree is 16, each of the 16 input signals is the v2c information that the n-th variable node (no. n) provides to the m-th check node (no. m). The v2c information can be expressed by “v_{n,m}^{(i)}”. Afterwards, the first minimum (min1) is obtained, and the false second minimum (min2′″) can be obtained from data accompanied with the first minimum. The false second minimum (min2′″) can be the actual second minimum plus noise, i.e., a scrambled second minimum.
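One possible single-pass search that yields a first minimum together with such an accompanying false second minimum can be sketched as follows (illustrative only; the disclosure does not mandate this particular scheme): whenever the running minimum is displaced, the displaced value is retained as min2′″, so min2′″ may exceed the true second minimum.

```python
# Illustrative single-pass search (one possible scheme, not mandated by the
# disclosure): only the first minimum is tracked; whenever the running
# minimum is displaced, the displaced value is kept as the false second
# minimum (min2'''), which may be larger than the true second minimum.
def single_min_search(magnitudes):
    min1 = float("inf")
    min2_false = float("inf")
    min1_pos = -1
    for pos, v in enumerate(magnitudes):
        if v < min1:
            min2_false = min1          # the displaced minimum accompanies min1
            min1, min1_pos = v, pos
    return min1, min2_false, min1_pos

m1, m2f, pos = single_min_search([0.9, 0.4, 0.6, 0.2])  # 0.2, 0.4, position 3
```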
[0050] The decoding method and the decoding system of the present disclosure have improved the conventional SMAMSA, and an equation 6 for the estimated second minimum is provided. The equation 6 implements a modified SMAMSA (referred to as M-SMAMSA).
[0051] In equation 6, a minimum function is used to obtain a minimum of “v_{n′,m}^{(i)}” (that is, the information that the variable node provides to the check node, with exclusion of the connection to be calculated). The estimated first minimum and the estimated second minimum can thereby be obtained. Based on the information (v_{n′,m}^{(i)}) and with exclusion of the connection to be calculated, a product of the other connections (n′) between the variable nodes and the check nodes is calculated. It should be noted that equation 6 estimates the information of a node based on the adjacent connections. After a dot product between the information and the estimated first minimum or the estimated second minimum is calculated, the information (c_{m,n}^{(i)}) that the check node provides to the variable node can be obtained.
[0052] Compared to the conventional SMAMSA, the estimated first minimum (min1^est) obtained from the modified SMAMSA of equation 6 is consistent with the first minimum obtained in equation 5 before the modification. “α”, “β” and “γ” are weights in equation 6. When the modified SMAMSA is used, the estimated first minimum (min1^est) is obtained by multiplying the first minimum (min1) by “α” (a first parameter), and the estimated second minimum (min2^est) is obtained by multiplying the first minimum (min1) by “β” (a second parameter) and adding the false second minimum (min2′″) multiplied by “γ” (a third parameter).
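The two weight-based estimates can be sketched as follows (the parameter values below are hypothetical examples only, chosen merely to satisfy the disclosed rule (β+γ) ≥ α):

```python
# Illustrative sketch of the weight-based estimates of the modified SMAMSA:
#   min1_est = alpha * min1
#   min2_est = beta * min1 + gamma * min2_false
# The example weights are hypothetical and satisfy (beta + gamma) >= alpha.
def estimate_minima(min1, min2_false, alpha, beta, gamma):
    min1_est = alpha * min1
    min2_est = beta * min1 + gamma * min2_false
    return min1_est, min2_est

m1e, m2e = estimate_minima(0.2, 0.4, alpha=0.75, beta=0.25, gamma=0.75)
# m1e = 0.75*0.2 = 0.15; m2e = 0.25*0.2 + 0.75*0.4 = 0.35
```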
[0053] Accordingly, the modified SMAMSA changes dimensions for generating the estimated second minimum (min2.sup.est). For example, two dimensions in the original SMAMSA (e.g., in equation 5) are changed to three dimensions (e.g., in equation 6). Therefore, the modified SMAMSA is able to increase a range of use, so as to cooperate with more modulations and have a wider range of fixed points of the decoder. In the present disclosure, the modified SMAMSA is also required to comply with the following rules when adding the parameters “α”, “β” and “γ” to the equation 6.
[0054] min2^est ≥ min1^est.

[0055] min2′″ ≥ min1.

[0056] Accordingly, the minimum possible value of min2′″ is equal to min1. In addition, if min2′″ = min1 and “min2^est” is at its lower bound, a relationship min2^est = (β+γ)·min1 ≥ min1^est = α·min1 can be derived. Therefore, the parameters “α”, “β” and “γ” comply with the relationship (β+γ) ≥ α.
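A small numeric sanity check of this bound can be sketched as follows (the weight sets are hypothetical and each satisfies (β+γ) ≥ α):

```python
# Illustrative numeric check of the lower-bound argument: with non-negative
# weights, if min2''' >= min1 and (beta + gamma) >= alpha, then
# min2_est = beta*min1 + gamma*min2''' >= alpha*min1 = min1_est.
def bound_holds(min1, min2_false, alpha, beta, gamma):
    return beta * min1 + gamma * min2_false >= alpha * min1

# Hypothetical weight sets, each satisfying (beta + gamma) >= alpha:
assert all(
    bound_holds(m1, m2, a, b, g)
    for m1 in (0.1, 0.5)
    for m2 in (0.5, 0.9)             # min2''' >= min1 for every pair here
    for (a, b, g) in ((0.75, 0.25, 0.75), (1.0, 0.5, 0.5))
)
```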
[0057] Reference is made to
[0058] In
[0059] In equation 6, in compliance with the relationship of (β+γ)≥α, the first minimum is multiplied by a first parameter (α) to obtain an estimated first minimum (min1^est) (step S107). An estimated second minimum (min2^est) is obtained by multiplying the first minimum by a second parameter (β) and adding the false second minimum multiplied by a third parameter (γ) (step S109). In the meantime, based on the information that the variable nodes provide to the check nodes, and after excluding the connection to be calculated, a sum of the remaining connections is calculated for determining a value of 0 or 1 (v_{n′,m}^{(i)}) (step S111). Afterwards, whether or not the value “n” (a number of the variable node) of “c_{m,n}^{(i)}” is at the connection with the minimum (v_{n,m}^{(i)}) that the n-th variable node provides to the m-th check node is determined. The estimated second minimum (min2^est) is used if the value “n” is at that connection; otherwise, the estimated first minimum (min1^est) is used. The result (0 or 1) in step S111 is used to perform a dot product, so as to obtain the information (c_{m,n}^{(i)}) that the check nodes provide to the variable nodes (step S113).
[0060] Furthermore, in the decoding method, a pre-processing process can be performed before the signals are inputted to the decoder. In the pre-processing process, in view of hardware limitations, a decode-scaling method can be performed to adjust the signals, such that the decoder is able to identify the features of the signals. Decode scaling can be determined by comparing a noise-power ratio with a threshold set by the decoding system. The decode-scaling method can mitigate a bandwidth limitation caused by the fixed points of the decoder, thereby solving a problem of reduced performance due to the bandwidth limitation.
[0061] For example, for the LLR entering the LDPC decoder, the decode-scaling method is used to adjust a weight of the LLR inputted to the decoder, since the information of an inverse of the noise power (1/σ²) is required by the decoder when performing an LLR calculation. It should be noted that the inverse of the noise power (1/σ²) provides information of a signal-to-noise ratio (SNR) and can be used to adjust the strength of the signals in each channel. Therefore, a correct LLR is provided to the decoder.
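As an illustrative sketch only (assuming BPSK over an AWGN channel, a common setting that the disclosure does not specify; the `scale` factor is a hypothetical stand-in for decode scaling), the channel LLR is proportional to the inverse noise power:

```python
# Illustrative sketch (BPSK over AWGN assumed): the channel LLR of a received
# sample y is 2*y/sigma^2, so the inverse noise power 1/sigma^2 weights the
# input; a hypothetical decode-scaling factor can then fit the LLR to the
# decoder's fixed-point range.
def channel_llr(y, sigma2, scale=1.0):
    return scale * (2.0 * y / sigma2)

llr = channel_llr(0.8, sigma2=0.5)  # 2*0.8/0.5 = 3.2
```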
[0062] In a practical application, the signal-to-noise ratio at which the decoding system operates may span an interval of 8 to 9 dB, such that a dynamic range of the inverse of the noise power required when the LLR is inputted to the LDPC decoder falls in a range of 2 to 8. To completely cover the whole dynamic range, the fixed points of the decoder would have to increase in bit width for maintaining the performance of the decoder. However, the additional bit width may increase hardware area and power consumption. Accordingly, the decode-scaling method for the LLR is provided for suppressing changes such as an increased bit width of the fixed points.
[0063] In summation, according to the above embodiments of the decoding method adopting an algorithm with weight-based adjusted parameters and the decoding system, the modified SMAMSA, which is a new single-min-sum framework with a layered decoding technology, is provided. Compared with the conventional algorithms, the modified SMAMSA provides variables with more dimensions, and the range of application is expanded. Better performance than that of the conventional NMS is also provided. Further, the modified SMAMSA provides a lower error rate when the single minimum is used with the scrambled second minimum. Since only the first minimum is searched for, the modified SMAMSA can reduce hardware complexity and power consumption. At the input signal end, the decoding method uses various decode-scaling methods for different signal-to-noise ratios, and therefore provides optimized performance, since the reduced bit width reduces the hardware area.
[0064] The foregoing description of the exemplary embodiments of the disclosure has been presented only for the purposes of illustration and description and is not intended to be exhaustive or to limit the disclosure to the precise forms disclosed. Many modifications and variations are possible in light of the above teaching.
[0065] The embodiments were chosen and described in order to explain the principles of the disclosure and their practical application so as to enable others skilled in the art to utilize the disclosure and various embodiments and with various modifications as are suited to the particular use contemplated. Alternative embodiments will become apparent to those skilled in the art to which the present disclosure pertains without departing from its spirit and scope.