Decoding method using dynamic scaler factor
09647798 · 2017-05-09
Assignee
Inventors
CPC classification
H04L25/03171
ELECTRICITY
H04L1/005
ELECTRICITY
International classification
Abstract
A decoding method applied to a convolutionally coded signal is provided. The method includes: a) adjusting first input information according to a first scaling factor to generate first a-priori information; b) decoding the convolutionally coded signal according to systematic information and the first a-priori information to generate first extrinsic information; c) adjusting second input information according to a second scaling factor to generate second a-priori information, wherein the second scaling factor is generated according to the first extrinsic information and the first a-priori information; and d) decoding the convolutionally coded signal according to the systematic information and the second a-priori information to generate second extrinsic information. One of step (b) and step (d) further generates a-posteriori information as a decoding result.
Claims
1. A decoding method applied to a convolutionally coded signal, comprising: a) adjusting first input information according to a first scaling factor to generate first a-priori information; b) decoding the convolutionally coded signal according to systematic information and the first a-priori information to generate first extrinsic information; c) adjusting second input information according to a second scaling factor to generate second a-priori information, wherein the second scaling factor is calculated based on the first extrinsic information and the first a-priori information; and d) decoding the convolutionally coded signal according to the systematic information and the second a-priori information to generate second extrinsic information; wherein, one of step (b) and step (d) further generates a-posteriori information as a decoding result.
2. The decoding method according to claim 1, decoding the convolutionally coded signal by an iteration loop method, each iteration loop comprising: a1) performing a first half iteration; and b1) performing a second half iteration; wherein, the step of generating the first a-priori information and the step of generating the first extrinsic information are the first half iteration; and the step of generating the second a-priori information and the step of generating the second extrinsic information are the second half iteration.
3. The decoding method according to claim 2, wherein the second scaling factor is calculated based on the first scaling factor.
4. The decoding method according to claim 2, wherein the second scaling factor is calculated based on a previous second scaling factor.
5. The decoding method according to claim 1, decoding the convolutionally coded signal by an iteration loop method, each iteration loop comprising: a1) performing a first half iteration; and b1) performing a second half iteration; wherein, the step of generating the first a-priori information and the step of generating the first extrinsic information are the first half iteration of an Nth iteration loop; and the step of generating the second a-priori information and the step of generating the second extrinsic information are the first half iteration of an (N+1)th iteration loop.
6. The decoding method according to claim 5, wherein the second scaling factor is generated according to the first scaling factor.
7. The decoding method according to claim 5, wherein the second scaling factor is generated according to a previous scaling factor of the second half iteration of the Nth iteration loop.
8. The decoding method according to claim 1, wherein the step of adjusting the first input information according to the first scaling factor to generate the first a-priori information comprises: interleaving the adjusted first input information to generate the first a-priori information, wherein the first input information is extrinsic information.
9. The decoding method according to claim 1, wherein the first input information is extrinsic information having undergone interleaving.
10. The decoding method according to claim 1, wherein the second scaling factor is greater than or equal to the first scaling factor.
11. The decoding method according to claim 1, wherein the second scaling factor is limited to be smaller than or equal to a predetermined value.
12. The decoding method according to claim 1, wherein the first a-priori information and the first extrinsic information determine a variance, and the second scaling factor is generated according to the variance and the first scaling factor.
13. The decoding method according to claim 1, wherein the convolutionally coded signal comprises a block code length, and the second scaling factor is generated further according to the block code length.
14. A decoding apparatus applied to a convolutionally coded signal, comprising: a first soft-in-soft-out (SISO) decoder, decoding the convolutionally coded signal according to systematic information and first a-priori information to generate first extrinsic information; a second data converter, converting the first extrinsic information to second a-priori information according to a second scaling factor; a second SISO decoder, decoding the convolutionally coded signal according to the systematic information and the second a-priori information to generate second extrinsic information; a first data converter, converting the second extrinsic information to the first a-priori information according to a first scaling factor; a first scaling factor generator, calculating one of the first scaling factor and the second scaling factor according to the first a-priori information and the first extrinsic information; and a second scaling factor generator, calculating the other of the first scaling factor and the second scaling factor according to the second a-priori information and the second extrinsic information; wherein, one of the first and second SISO decoders further generates a-posteriori information as a decoding result.
15. The decoding apparatus according to claim 14, wherein the first data converter comprises a first multiplier and a de-interleaver connected in series, and the first multiplier is controlled by the first scaling factor; and the second data converter comprises a second multiplier and an interleaver connected in series, and the second multiplier is controlled by the second scaling factor.
16. The decoding apparatus according to claim 15, wherein the first multiplier receives an output of the de-interleaver.
17. The decoding apparatus according to claim 15, wherein the de-interleaver receives an output of the first multiplier.
18. The decoding apparatus according to claim 14, wherein the first a-priori information and the first extrinsic information determine a variance, and the first scaling factor generator provides one of the first scaling factor and the second scaling factor according to the variance.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1)
(2)
(3)
(4)
DETAILED DESCRIPTION OF THE INVENTION
(5) The present invention is capable of improving the converging speed of a soft-decision decoder in an iteration process. In a decoder according to embodiments of the present invention, a scaling factor is introduced for adjusting a-priori information generated from interleaving or de-interleaving extrinsic information. The scaling factor is not a constant value, but dynamically changes along with a result generated from previous iteration decoding and a previous value of the scaling factor.
(6) In general, convolutional codes and turbo codes may be represented by a trellis, as shown in
(7) In
(8) The turbo decoder in
(9) Given that the foregoing MAP algorithm operates on a received message Y, the log-likelihood ratio of the message bit being 1 or 0 at a step k, referred to as the a-posteriori log-likelihood ratio L(u.sub.k|Y), is defined as below.
(10) L(u.sub.k|Y)=ln[P(u.sub.k=+1|Y)/P(u.sub.k=−1|Y)]
(11) The MAP algorithm calculates L(uk|Y) of each step k through forward and backward recursive operations on the trellis. L(uk|Y) is organized and represented as:
(12) L(u.sub.k|Y)=ln[Σ.sub.(n,m):u.sub.k=+1 α.sub.k−1(n)γ.sub.k(n,m)β.sub.k(m)/Σ.sub.(n,m):u.sub.k=−1 α.sub.k−1(n)γ.sub.k(n,m)β.sub.k(m)] (1)
(13) In equation (1), the branch metric γ.sub.k(n,m) represents the probability of reaching state m at step k under the conditions that the state is n at step k−1 and the received message is Y; the forward state metric α.sub.k−1(n) represents the probability that the state is n at step k−1 under the condition that the received message is Y; and the backward state metric β.sub.k(m) represents the probability that the state is m at step k under the condition that the received message is Y. The summation (Σ) in the numerator is taken over all branches that can generate u.sub.k=+1. Similarly, the summation (Σ) in the denominator is taken over all branches that can generate u.sub.k=−1. The forward state metric α.sub.k(m), the backward state metric β.sub.k(m) and the branch metric γ.sub.k(n,m) may be respectively represented as:
(14) α.sub.k(m)=Σ.sub.n α.sub.k−1(n)γ.sub.k(n,m) (2)
β.sub.k−1(n)=Σ.sub.m β.sub.k(m)γ.sub.k(n,m) (3)
γ.sub.k(n,m)=A.sub.k·P(m|n)·exp(2/N.sub.0·(y.sub.k.sup.s·x.sub.k.sup.s(i)+y.sub.k.sup.p·x.sub.k.sup.p(i,n,m))) (4)
(15) In the above, A.sub.k is a constant number, and i=+1 or −1. It is known from equation (2) that, to calculate the forward state metric α.sub.k(m), the forward state metrics prior to step k need to be first obtained; it is known from equation (3) that, to calculate the backward state metric β.sub.k(m), the backward state metrics subsequent to step k need to be first obtained. Thus, the forward state metric α.sub.k(m) and the backward state metric β.sub.k(m) are generally calculated through recursion, with the directions of the two recursions being opposite. It is known from equation (4) that calculating the branch metric γ.sub.k(n,m) requires exponential and multiplication calculations, which consume immense resources and costs regardless of whether implemented in hardware or software.
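As an illustration of the recursive structure described above, the forward recursion of equation (2) and the backward recursion of equation (3) can be sketched on a toy trellis. The state count, branch-metric values, and boundary conditions below are hypothetical and chosen only for illustration, not taken from the patent:

```python
import numpy as np

# Hypothetical 2-state toy trellis: gamma[k][n][m] is the branch metric of
# moving from state n at step k-1 to state m at step k. The values are
# random placeholders, not derived from a real convolutional code.
K = 4          # number of trellis steps (illustrative)
S = 2          # number of states (illustrative)
rng = np.random.default_rng(0)
gamma = rng.random((K, S, S))

# Forward recursion, equation (2): alpha_k(m) = sum_n alpha_{k-1}(n) * gamma_k(n, m)
alpha = np.zeros((K + 1, S))
alpha[0, 0] = 1.0                      # assume decoding starts from state 0
for k in range(1, K + 1):
    alpha[k] = alpha[k - 1] @ gamma[k - 1]

# Backward recursion, equation (3): beta_{k-1}(n) = sum_m beta_k(m) * gamma_k(n, m)
# Note the opposite iteration direction, as stated in paragraph (15).
beta = np.zeros((K + 1, S))
beta[K] = 1.0 / S                      # assume a uniform termination condition
for k in range(K, 0, -1):
    beta[k - 1] = gamma[k - 1] @ beta[k]
```

In a practical decoder these products underflow quickly, which is one motivation for the logarithm-domain algorithms discussed next.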
(16) A Log-MAP algorithm simplifies the calculation process by operating in the logarithm domain. Further, the exponential and logarithm calculations may be simplified through the approximation method below.
max*(x,y)≡ln(e.sup.x+e.sup.y)=max(x,y)+ln(1+e.sup.−|y−x|) (5)
(17) The last logarithm term may be obtained through a look-up table. A Max-Log-MAP algorithm further removes the last term in equation (5) to approximate the operator max*, as below:
max*(x,y)=ln(e.sup.x+e.sup.y)≈max(x,y) (6)
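The relationship between the exact max* operator of equation (5) and the Max-Log-MAP approximation of equation (6) can be sketched as follows; this is a minimal illustration, not the patent's implementation:

```python
import math

def max_star_logmap(x, y):
    """Exact max* from equation (5): ln(e^x + e^y) = max(x, y) + ln(1 + e^(-|y-x|))."""
    return max(x, y) + math.log1p(math.exp(-abs(y - x)))

def max_star_maxlogmap(x, y):
    """Max-Log-MAP approximation from equation (6): the correction term is dropped."""
    return max(x, y)

# The dropped correction term is largest (ln 2) when x == y, and vanishes
# as |y - x| grows, which is why the approximation works well for well
# separated metrics. In a Log-MAP decoder the correction term is typically
# read from a small look-up table, as paragraph (17) notes.
exact = max_star_logmap(1.0, 1.0)       # 1.0 + ln(2)
approx = max_star_maxlogmap(1.0, 1.0)   # 1.0
```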
(18) The decoding calculation in the SISOs 20 and 22 adopts the maximum a-posteriori probability (MAP) algorithm for decoding. The MAP algorithm may be the Log-MAP or Max-Log-MAP algorithm.
(19) When using the Log-MAP or Max-Log-MAP algorithm for processing, equation (4) may be organized as:
Γ.sub.k(n,m)=ln(γ.sub.k(n,m))=2/N.sub.0·(y.sub.k.sup.s·x.sub.k.sup.s(i)+y.sub.k.sup.p·x.sub.k.sup.p(i,n,m))+ln(P(m|n))+K (7)
(20) In equation (7), K is a constant value, and P(m|n) is input information transmitted from a previous half iteration, e.g., a-priori information. The input information exchanged between the two half iterations may be slightly adjusted, e.g., multiplied by a scaling factor s.sub.d, to increase the converging speed of the iteration loop of the turbo decoder. Here, d is 1 or 2, respectively corresponding to the SISOs 20 and 22, i.e., the first and second half iterations.
(21) The scaling factor s.sub.d may be set as a fixed value, e.g., 0.7. However, it is usually challenging to select the most appropriate fixed value.
(22)
(23) In one embodiment of the present invention, for example, the scaling factor generators 38 and 34 generate s.sub.1 and s.sub.2 according to equations (8a) and (8b), respectively:
s.sub.1(I+1)=s.sub.1(I)+STEP*DIF.sub.1(I)/LEN (8a)
s.sub.2(I+1)=s.sub.2(I)+STEP*DIF.sub.2(I)/LEN (8b)
(24) In the equations above, STEP represents a constant number and may be pre-set according to requirements, and s.sub.1(I) represents the scaling factor adopted in the first half iteration of the I.sup.th iteration loop. The variance DIF.sub.1(I) is the number of differing bits between the extrinsic information L.sub.e1 generated by the first half iteration of the I.sup.th iteration loop and the a-priori information L.sub.a1 generated by the second half iteration of the (I−1).sup.th iteration loop. That is, the variance DIF.sub.1(I) may be learned from the a-priori information L.sub.a1 and the extrinsic information L.sub.e1 of the same half iteration. For example, after the first half iteration of the 2.sup.nd iteration loop, there is a difference of 3 bits between the extrinsic information L.sub.e1 and the a-priori information L.sub.a1, and so the variance DIF.sub.1(2) is 3. LEN is a constant number associated with the block code length K. In one embodiment, LEN is directly equal to the block code length K, e.g., 2080. In another embodiment, the relationship between LEN and the block code length K is LEN=K*bit/step, where bit/step is the number of information bits output in each trellis step. In one embodiment, K=2080, bit/step=2, and LEN=4160.
(25) s.sub.2(I) and the variance DIF.sub.2(I) respectively represent the scaling factor adopted in the second half iteration of the I.sup.th iteration loop, and the variance generated after the decoding of the second half iteration of the I.sup.th iteration loop.
(26) In some embodiments, initial values s.sub.1(0) and s.sub.2(0) of the scaling factors are both set to 0.25. In some embodiments, the scaling factors s.sub.1 and s.sub.2 are limited to a maximum value 1. In other words, if the scaling factors s.sub.1 and s.sub.2 obtained from equations (8a) and (8b) exceed 1, the scaling factors s.sub.1 and s.sub.2 are set to 1.
(27) It is discovered from equations (8a) and (8b) that the scaling factor adopted in a specific half iteration is determined by the scaling factor and the decoding result of the same half iteration of the previous iteration loop. Further, if the variance DIF.sub.d(I) generated by the previous iteration loop is not 0, the scaling factor adopted in the current iteration loop is larger than that adopted in the previous iteration loop. It follows from equations (8a) and (8b) that the scaling factors s.sub.1 and s.sub.2 used in each iteration loop are never smaller than those used in the previous iteration loop.
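A minimal sketch of the dynamic update of equations (8a) and (8b), including the initial value 0.25 and the cap at 1 described above. The STEP value and the realization of the variance DIF as a count of hard-decision disagreements are assumptions made for illustration:

```python
import numpy as np

STEP = 0.1      # assumed step constant; the patent leaves STEP to design requirements
LEN = 4160      # e.g., K = 2080 information bits with bit/step = 2
S_MAX = 1.0     # scaling factors are capped at 1, per paragraph (26)

def hard_bits(llr):
    """Hard decision on a vector of log-likelihood ratios."""
    return llr >= 0

def next_scaling_factor(s_prev, la, le):
    """Equations (8a)/(8b): s_d(I+1) = s_d(I) + STEP * DIF_d(I) / LEN,
    where DIF_d(I) counts bit disagreements between the a-priori
    information and the extrinsic information of the same half iteration
    (an assumed realization of the variance DIF)."""
    dif = int(np.sum(hard_bits(la) != hard_bits(le)))
    return min(S_MAX, s_prev + STEP * dif / LEN)

# Example: 3 disagreeing bits, matching the text's DIF_1(2) = 3 example.
la = np.array([ 1.2, -0.4,  0.9, -2.0])
le = np.array([-1.0,  0.3,  0.8,  2.0])
s_new = next_scaling_factor(0.25, la, le)   # 0.25 + 0.1 * 3 / 4160
```

Because DIF is never negative, the factor never decreases, and once the half iterations agree (DIF = 0) the factor stays put, matching the monotonic behavior noted above.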
(28) It is proven through computerized simulations that the scaling factors of equations (8a) and (8b) cause the iteration loop to converge quickly, i.e., quickly reduce the variance DIF.sub.d(I). Thus, the decoding efficiency of the turbo decoder can be enhanced.
(29)
s.sub.2(I)=s.sub.1(I)+STEP*DIF.sub.1(I)/LEN (9a)
s.sub.1(I+1)=s.sub.2(I)+STEP*DIF.sub.2(I)/LEN (9b)
(30) The scaling factors in
(31)
(32)
(33) In
s.sub.1(I+1)=s.sub.2(I)+STEP*DIF.sub.1(I)/LEN (10a)
s.sub.2(I)=s.sub.1(I)+STEP*DIF.sub.2(I)/LEN (10b)
(34) In
s.sub.2(I+1)=s.sub.2(I)+STEP*DIF.sub.1(I)/LEN (11a)
s.sub.1(I+1)=s.sub.1(I)+STEP*DIF.sub.2(I)/LEN (11b)
(35) In short, each time the scaling factor is adjusted, either the scaling factor used by the same half iteration in the previous iteration loop or the scaling factor used by the previous half iteration may be referenced. Likewise, either the variance generated by the same half iteration in the previous iteration loop or the variance generated by the previous half iteration may be referenced. Thus, there are at least four combinations, all of which are encompassed within the scope of the present invention.
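The chained variant of equations (9a) and (9b), in which each update references the previous half iteration rather than the previous loop, can be sketched as follows; the STEP value and the variance values are illustrative assumptions:

```python
# Each update has the form s_next = s_ref + STEP * dif_ref / LEN; the four
# schemes of equations (8)-(11) differ only in which previous scaling factor
# (s_ref) and which previous variance (dif_ref) are referenced:
#   (8a)/(8b): same half iteration of the previous loop for both references.
#   (9a)/(9b): previous half iteration for both references (factors chain).
#   (10)/(11): mixed cases crossing the factor and variance references.
STEP, LEN = 0.1, 4160   # assumed constants, as before

def update(s_ref, dif_ref):
    return min(1.0, s_ref + STEP * dif_ref / LEN)

s1, s2 = 0.25, 0.25              # initial values from paragraph (26)
dif1, dif2 = 3, 5                # illustrative variances from a decoded loop
s2_next = update(s1, dif1)       # (9a): s2(I) = s1(I) + STEP*DIF1(I)/LEN
s1_next = update(s2_next, dif2)  # (9b): s1(I+1) = s2(I) + STEP*DIF2(I)/LEN
```

Chaining in this way lets information from the most recent half iteration influence the very next one, rather than waiting a full loop.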
(36) After the iteration loop repetition is complete, a-posteriori information is generated by the SISO 22 in
(37) While the invention has been described by way of example and in terms of the preferred embodiments, it is to be understood that the invention is not limited thereto. On the contrary, it is intended to cover various modifications and similar arrangements and procedures, and the scope of the appended claims therefore should be accorded the broadest interpretation so as to encompass all such modifications and similar arrangements and procedures.