METHOD AND DEVICE FOR POLAR CODE ENCODING AND DECODING
20220123767 · 2022-04-21
Inventors
CPC classification
International classification
Abstract
The disclosure relates to generating a polar code and also to encoding and decoding data using a polar code. A method of generating a polar code includes obtaining a first matrix as an m-fold Kronecker product of a 2×2 binary lower triangular matrix where m=log2(M/2), M<N, and N is the length of a polar code to be generated. A second matrix may be obtained, where the inverse of the second matrix is a lower triangular band matrix. A transformation matrix may be generated for the polar code by calculating a Kronecker product of the second matrix with the first matrix. An information set I identifying reliable bit channels for the polar code may be determined. A polar codeword of length N may be obtained using the polar code that is decodable by iteratively applying a sliding decoding window of length M to the polar codeword.
Claims
1. A method of decoding a received signal comprising: applying, at a first position, a window of length M to a received signal containing N signal values, where M<N; decoding a first sub-input vector using a polar code and first channel likelihoods L based on signal values obtained from the window at the first position; shifting the window position to a second position; obtaining second channel likelihoods L based on the signal values from the window at the second position and the decoded first sub-input vector; and decoding a second sub-input vector using the polar code and the second channel likelihoods.
2. The method according to claim 1, wherein the number of first and second channel likelihoods obtained is M/2 and the polar code has a polar transformation matrix of size M/2.
3. The method according to claim 1, wherein the polar code used to decode the first and second sub-input vector is a sub-information set (I.sub.t), of an information set (I) of a polar code of length N used to encode an input vector comprising the first and second sub-input vectors.
4. The method according to claim 1, further comprising dividing the windowed M signal values into a first sub-channel and second sub-channel of M/2 likelihood values, and using the first and second sub-channel likelihoods to generate the first and second channel likelihoods.
5. The method according to claim 4, wherein obtaining the second channel likelihoods comprises: updating a likelihood buffer (L0) of M/2 likelihood values using the decoded first sub-input vector and the first sub-channel likelihood values, and using the buffer together with those of the first and second sub-channel likelihoods at the second window position to generate the second channel likelihoods.
6. The method according to claim 1 wherein the steps of shifting the window, obtaining second channel likelihoods and decoding the second sub-input vector are performed iteratively.
7. An encoding method, the method comprising: obtaining a first matrix as an m-fold Kronecker product of a 2×2 binary lower triangular matrix where m=log2(M/2), M<N, and N is the length of a polar code to be generated; obtaining a second matrix of dimension 2S×2S, where S=N/M and the inverse of the second matrix is a lower triangular band matrix; generating a transformation matrix for the polar code by calculating a Kronecker product of the second matrix with the first matrix; and determining an information set I identifying reliable bit channels for the polar code, and obtaining a polar codeword of length N using the polar code.
8. The method according to claim 7, wherein the polar codeword is decodable by iteratively applying a sliding decoding window of length M to the polar codeword.
9. The method according to claim 8 where a successive decoding process using a polar code of size M/2 is applied to the windowed M values of the polar codeword during each iteration.
10. The method according to claim 7, wherein determining the information set comprises: estimating bit-error probability and/or log-likelihood ratios of first and second kernels having i bit channels, corresponding to the first and second matrices.
11. The method according to claim 7, wherein the second matrix is a full binary lower triangular matrix.
12. The method according to claim 7, wherein obtaining a polar codeword of length N using the polar code comprises: inserting K message bits into an input vector u according to the reliable channels identified by the information set I; and generating a polar codeword using the input vector u by calculating the product of the input vector and the transformation matrix.
13. The method according to claim 7, wherein obtaining a polar codeword of length N using the polar code comprises: inserting K message bits into an input vector u according to the reliable bit channels identified by the information set I of a polar code of length N; dividing the input vector u into 2S sub-input vectors of size M/2; encoding the sub-input vectors using the first matrix; and iteratively adding the respective bits of one or more encoded sub-input vectors to an immediately preceding encoded sub-input vector.
14. An encoding apparatus, comprising one or more processors and a memory, wherein the one or more processors are configured to execute the instructions in the memory to: obtain a first matrix as an m-fold Kronecker product of a 2×2 binary lower triangular matrix where m=log2(M/2), M<N, and N is the length of a polar code to be generated; obtain a second matrix of dimension 2S×2S, where S=N/M and the inverse of the second matrix is a lower triangular band matrix; generate a transformation matrix for the polar code by calculating a Kronecker product of the second matrix with the first matrix; and determine an information set I identifying reliable bit channels for the polar code, and obtain a polar codeword of length N using the polar code.
15. The apparatus according to claim 14, wherein the polar codeword is decodable by iteratively applying a sliding decoding window of length M to the polar codeword.
16. The apparatus according to claim 15 where a successive decoding process using a polar code of size M/2 is applicable to the windowed M values of the polar codeword during each iteration.
17. The apparatus according to claim 14, wherein the one or more processors are configured to execute the instructions in the memory to: estimate bit-error probability and/or log-likelihood ratios of first and second kernels having i bit channels, corresponding to the first and second matrices.
18. The apparatus according to claim 14, wherein the second matrix is a full binary lower triangular matrix.
19. The apparatus according to claim 14, wherein the one or more processors are configured to execute the instructions in the memory to: insert K message bits into an input vector u according to the reliable channels identified by the information set I; generate a polar codeword using the input vector u by calculating the product of the input vector and the transformation matrix.
20. The apparatus according to claim 14, wherein the one or more processors are configured to execute the instructions in the memory to: insert K message bits into an input vector u according to the reliable bit channels identified by the information set I of a polar code of length N; divide the input vector u into 2S sub-input vectors of size M/2; encode the sub-input vectors using the first matrix; and iteratively add the respective bits of one or more encoded sub-input vectors to an immediately preceding encoded sub-input vector.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0065] Embodiments will now be described, by way of example only, with reference to the accompanying drawings.
DESCRIPTION
[0093] Example embodiments are described below in sufficient detail to enable those of ordinary skill in the art to embody and implement the systems and processes herein described. It is important to understand that embodiments can be provided in many alternate forms and should not be construed as limited to the examples set forth herein.
[0094] Accordingly, while embodiments can be modified in various ways and take on various alternative forms, specific embodiments thereof are shown in the drawings and described in detail below as examples. There is no intent to limit to the particular forms disclosed. On the contrary, all modifications, equivalents, and alternatives falling within the scope of the appended claims should be included. Elements of the example embodiments are consistently denoted by the same reference numerals throughout the drawings and detailed description where appropriate.
[0095] The terminology used herein to describe embodiments is not intended to limit the scope. The articles “a,” “an,” and “the” are singular in that they have a single referent, however the use of the singular form in the present document should not preclude the presence of more than one referent. In other words, elements referred to in the singular can number one or more, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises,” “comprising,” “includes,” and/or “including,” when used herein, specify the presence of stated features, items, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, items, steps, operations, elements, components, and/or groups thereof.
[0096] Unless otherwise defined, all terms (including technical and scientific terms) used herein are to be interpreted as is customary in the art. It will be further understood that terms in common usage should also be interpreted as is customary in the relevant art and not in an idealized or overly formal sense unless expressly so defined herein.
[0099] We consider communication between a transmitter and a receiver having different computational capabilities, namely when the receiver is less powerful than the transmitter, e.g. the downlink in the wireless communication system 200. In the following embodiments, the transmitter is able to encode message data to create polar codewords of length N, while the receiver can process and decode only polar codewords of length M<N.
Code Design
[0100] According to an embodiment, we describe how to design a polar code of length N and dimension K such that it is decodable through a sliding window of size M. To design a polar code means to provide a transformation matrix T and a frozen set F (or, equivalently, the information set I).
[0101] The process of generating a polar code according to an embodiment will be described with reference to the flow chart of
[0102] The transformation matrix T may be designed as follows. In a first step 301, we obtain a first kernel matrix T.sub.M/2. Given S=N/M, where N is the length of the polar codeword to be generated at the encoder and M is the length of codeword that can be processed at a target decoder, T.sub.M/2=T.sub.2.sup.⊗m with m=log.sub.2(M/2), and the fundamental T.sub.2 polar code matrix is the 2×2 binary lower triangular matrix with rows [1 0] and [1 1]. Thus, T.sub.M/2 is the transformation matrix of a classical polar code of length M/2.
[0103] The next stage 302 is to obtain a second kernel matrix W.sub.2S. The kernel W.sub.2S is defined by a full binary lower triangular matrix of size 2S×2S, with S=N/M as given before. The W.sub.2S kernel matrix is illustrated for an arbitrary value of 2S in
[0104] The transformation matrix is then determined according to the definition T=W.sub.2S.Math.T.sub.M/2. In other words, the transformation matrix is defined as the Kronecker (tensor) product of the W.sub.2S kernel obtained in 302 with the classical transformation matrix of a polar code of length M/2. The W.sub.2S matrix is a square matrix of size 2S×2S having ones on and below the diagonal, and zeros above the diagonal, as depicted in
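As a concrete sketch of this construction (illustrative, numpy-based helper names, not taken from the patent), the transformation matrix can be assembled as the Kronecker product of the full binary lower triangular W.sub.2S with the m-fold Kronecker power of T.sub.2:

```python
import numpy as np

def build_T(N, M):
    """T = W_{2S} (x) T_{M/2}, with W_{2S} the full binary lower triangular matrix."""
    S = N // M
    m = int(np.log2(M // 2))
    T2 = np.array([[1, 0], [1, 1]], dtype=np.uint8)
    T_half = np.array([[1]], dtype=np.uint8)
    for _ in range(m):
        T_half = np.kron(T_half, T2)       # m-fold Kronecker power of T_2
    W = np.tril(np.ones((2 * S, 2 * S), dtype=np.uint8))  # ones on/below diagonal
    return np.kron(W, T_half)

T = build_T(16, 8)                          # N = 16, M = 8: a 16 x 16 matrix
```

For N=16 and M=8 this yields the 16×16 matrix used in the worked example later in the description; since both factors are lower triangular, T itself is lower triangular.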
[0105] The Tanner graph 600 comprises a series of input channels or rows over which the values of an input vector 601 are received. In a first stage of the graph there are a series of T.sub.M/2 encoding units 602-1 to 602-2S to which the input rows receiving the values of an input vector u are sequentially connected. The input vector can be considered as a series of sub-input vectors u.sub.1 to u.sub.2S, each received at the inputs of a corresponding T.sub.M/2 encoding unit which encodes the M/2 inputs according to a classical polar code kernel, i.e. for a sub-input vector u.sub.n the encoded bits are equal to u.sub.n.T.sub.M/2. These output bits are then spread evenly across the W.sub.2S coding units 604-1 to 604-2S, according to permutation network 602, such that each output is received at a corresponding one of the inputs of respective W.sub.2S coding units 604-1 to 604-2S in a second coding stage. Accordingly, the outputs of the first T.sub.M/2 encoding unit are received by the first inputs of each W.sub.2S coding unit respectively, the outputs of the second T.sub.M/2 unit are received at the second inputs of each W.sub.2S coding unit respectively, and so on. The outputs of the W.sub.2S coding units are then reordered according to the permutation connections (reordering network) 605 to output an encoded codeword x. The permutations (reordering) are such that a partial vector consisting of the first M/2 values of the codeword x corresponds to the first outputs of the 2S W.sub.2S coding units 604-1 to 604-2S respectively, a second partial vector consisting of the next M/2 values of x corresponds to their second outputs, and so on.
[0106] In the above embodiment, the second kernel matrix is selected as a full binary lower triangular matrix of size 2S×2S. However, other choices are possible for the second kernel matrix. In particular, the key property that enables the received codeword x to be sequentially decoded in portions, with the decoding result of each portion being fed back into the decoding of the next portion (i.e. by applying a sliding window), is that the inverse W.sup.−1 of the second kernel matrix W is a lower triangular band matrix. As will be illustrated by way of a later embodiment, it is this property that allows each set of M received values to be iteratively decoded as vectors of M/2 values according to existing successive decoding update rules. The absence of ‘1’s in each column of W.sup.−1 below a certain point ensures that only a subset of the N received LLRs needs to be used for the decoding of a particular input bit u.sub.i.
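For the full binary lower triangular choice of W, this property can be checked directly: over GF(2) the inverse of W is the identity plus the first subdiagonal, i.e. a lower triangular band matrix. A small numpy sanity check (illustrative names, not from the patent):

```python
import numpy as np

# Over GF(2), the inverse of the full binary lower triangular matrix W is
# bidiagonal: identity plus the first subdiagonal. This banded inverse is
# the property that enables sliding-window decoding.
n = 8                                         # 2S = 8, i.e. S = 4
W = np.tril(np.ones((n, n), dtype=np.uint8))
W_inv = np.eye(n, dtype=np.uint8) + np.eye(n, k=-1, dtype=np.uint8)
assert np.array_equal(W @ W_inv % 2, np.eye(n, dtype=np.uint8))
```

Each column of W.sup.−1 has at most two ones, on the diagonal and immediately below it, so each coded value influences the recovery of at most two adjacent blocks.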
[0107] A frozen set can be designed according to the multi-kernel polar code mechanism. Reliabilities are determined for each output of the kernel matrix W.sub.2S and then propagated from right to left along the Tanner graph through the T.sub.M/2 kernel matrices to determine the reliability of each input bit channel. The most reliable channels are selected from the resulting values, and the frozen channel positions are the remaining, unreliable channels.
[0108] Accordingly, we need to determine the polarization equations of the kernels W.sub.2S and T.sub.M/2. Under a BEC, the bit error probability can be calculated exactly, while under an AWGN channel the DE/GA method can be used [5]. This algorithm estimates the distribution of the log-likelihood ratios (LLRs) of the polarized channels by tracking their mean at each stage of the SC decoding tree. Given the block decoder representation of kernel W.sub.2S depicted in
where δ is the error probability of the input channels, while the LLR mean μ.sub.i for a bit channel u.sub.i can be calculated as
where μ is the input LLR mean and function ϕ can be defined as
and can be approximated through curve-fitting. The curve fitting may be performed using methods known to those in the art, for example, as described in J. Ha, J. Kim, and S. W. McLaughlin, “Rate-compatible puncturing of low-density parity-check codes,” IEEE Transactions on Information Theory, vol. 50, no. 11, pp. 2824-2836, 2004.
[0109] Using the above metrics, the reliability of each bit of the input vector can be calculated; the K best bits will form the information set I, while the indices of the remaining N−K bit-channels form the frozen set F of the code.
[0110] The bit error probabilities and log-likelihood ratio means of the classical polar code kernel can be determined in an existing manner that would be known to those skilled in the art.
[0111] With equations for both the T.sub.M/2 and W.sub.2S matrices, given a known error probability or LLR mean at the output we can work back to determine a value that is a measure of the reliability of each bit channel in the transformation matrix.
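By way of illustration of the reliability computation for the classical kernel only, under a BEC the erasure probabilities of the bit channels of T.sub.M/2 follow the well-known polar recursion. This is a sketch of that standard computation; it does not model the W.sub.2S stage or the DE/GA mean tracking described above:

```python
def bec_erasure_probs(m, eps):
    """Erasure probabilities of the 2^m bit channels of T_2^{(x)m} on a BEC."""
    z = [eps]
    for _ in range(m):
        # each channel splits into a degraded (2z - z^2) and an upgraded (z^2) one
        z = [v for p in z for v in (2 * p - p * p, p * p)]
    return z

probs = bec_erasure_probs(2, 0.5)   # T_4 bit channels for erasure probability 0.5
```

For T.sub.4 this ranks bit channel u.sub.4 as most reliable and u.sub.1 as least reliable, consistent with the LLR-mean ordering used in the example below.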
Encoding
[0112] The K message bits are inserted in the input vector u according to the information I set previously calculated, namely storing their values in the indices listed in I, while the remaining bits of u are set to zero. Codeword x is then calculated as x=u.Math.T, where T is the transformation matrix of the code calculated as previously described. Codeword x is then transmitted through the channel as shown in
[0113] Alternatively, codeword x can be calculated only on the basis of the transformation matrix T.sub.M/2 of a polar code, i.e. without the need to implement matrix W.sub.2S. In fact, given the sub-information set I.sub.t for t=1, . . . , 2S, calculated from the information set I as the set of entries of I comprised between (t−1).Math.M/2+1 and t.Math.M/2, reduced by (t−1).Math.M/2, input vectors u.sub.1, . . . , u.sub.2S are created accordingly on the basis of the message bits. Each partial input vector is encoded independently through matrix multiplication by T.sub.M/2, obtaining partial codewords x.sub.1, . . . , x.sub.2S. Finally, codeword x is obtained by backward accumulating the partial codewords starting from the last one, i.e. x=[x.sub.1⊕ . . . ⊕x.sub.2S, x.sub.2⊕ . . . ⊕x.sub.2S, . . . , x.sub.2S-1⊕x.sub.2S, x.sub.2S], where ⊕ denotes the bitwise XOR of binary partial codewords.
Example of a Polar Code
[0114] As an example, we will now describe the generation of a polar code according to the above embodiment when N=16, M=8 and, thus, S=N/M=2.
[0115] Given M/2=4, the first kernel matrix 701 is selected as the classical polar transformation matrix of dimension M/2, i.e. T.sub.4, as shown in
[0116] The Tanner graph of the matrix 703 may be constructed using the coding blocks for T.sub.4
[0117] and W.sub.4 shown in
[0118] In this embodiment, LLR mean values are calculated as the basis for determining the reliability of each bit channel to which the bits of the input vector are applied. Thus, for the W.sub.4 block equation (2) may be applied giving the expressions for μ.sub.1, μ.sub.2, μ.sub.3, μ.sub.4 as shown in
[0119] If we start at the right hand side of the Tanner graph and take the input mean LLR value to be μ=2 then the resulting output mean LLRs for each bit channel i=1 . . . 16 are given as μ.sub.i={0.01, 0.40, 0.60, 3.28, 0.06, 0.85, 1.24, 5.26, 0.11, 1.17, 1.66, 6.42, 3.78, 11.5, 13.4, 32}. Low values correspond to unreliable channels and the order of the bit channels in terms of reliability is shown as the column 1100 in
[0120] This can be illustrated by the following encoding example which uses the polar code of
x=u.Math.T=[0 1 0 1 1 0 1 0 0 1 0 1 1 1 1 1]
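This direct encoding of the N=16 example can be reproduced with a short numpy sketch (variable names are illustrative; the information set I={4, 8, 11, 12, 13, 14, 15, 16} is rewritten with 0-based indices):

```python
import numpy as np

# Direct encoding x = u . T (mod 2) for the N = 16, M = 8 example.
T2 = np.array([[1, 0], [1, 1]], dtype=np.uint8)
T4 = np.kron(T2, T2)                                   # classical T_4
W4 = np.tril(np.ones((4, 4), dtype=np.uint8))          # full lower triangular W_4
T16 = np.kron(W4, T4)                                  # T = W_4 (x) T_4

u = np.zeros(16, dtype=np.uint8)
u[[3, 7, 10, 11, 12, 13, 14, 15]] = [1, 1, 1, 0, 0, 0, 0, 1]  # K = 8 message bits
x = u @ T16 % 2
```

Running this reproduces the input vector u and codeword x stated above.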
[0121] Alternatively, the alternative encoding method already described above can be used, which doesn't explicitly require generating the transformation matrix T. According to this process the input vector u=[0 0 0 1 0 0 0 1 0 0 1 0 0 0 0 1] may be divided into sub-input vectors u.sub.1 to u.sub.4 of length M/2=4 such that u.sub.1=[0 0 0 1], u.sub.2=[0 0 0 1], u.sub.3=[0 0 1 0], u.sub.4=[0 0 0 1]. Equivalently, sub-information sets I.sub.1 to I.sub.4 may be obtained from the information set I for the full polar code such that I.sub.1={4}, I.sub.2={4}, I.sub.3={3,4}, I.sub.4={1,2,3,4}, and the four sub-input vectors u.sub.1 to u.sub.4 populated accordingly. Each of the sub-input vectors u.sub.1 to u.sub.4 may then be encoded using the T.sub.4 classical polar transformation matrix. The resulting encoded vectors are x.sub.1=[1 1 1 1], x.sub.2=[1 1 1 1], x.sub.3=[1 0 1 0], x.sub.4=[1 1 1 1]. In order to generate the slidably decodable codeword, the following operation is performed:
x=[x.sub.1⊕x.sub.2⊕x.sub.3⊕x.sub.4, x.sub.2⊕x.sub.3⊕x.sub.4, x.sub.3⊕x.sub.4, x.sub.4]
where ⊕ denotes a bitwise XOR operation applied to binary partial codewords x.sub.1 to x.sub.4.
[0122] This is further illustrated in
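The alternative encoding just illustrated, per-block multiplication by T.sub.M/2 followed by backward accumulation of the partial codewords, can be sketched as follows (illustrative names; the backward XOR chain is implemented as a reversed cumulative sum modulo 2):

```python
import numpy as np

def encode_sliding(u, M):
    """Encode u block-by-block with T_{M/2}, then backward-accumulate (mod 2)."""
    T2 = np.array([[1, 0], [1, 1]], dtype=np.uint8)
    T_half = np.array([[1]], dtype=np.uint8)
    while T_half.shape[0] < M // 2:
        T_half = np.kron(T_half, T2)
    subs = u.reshape(-1, M // 2)              # the 2S sub-input vectors u_t
    parts = subs @ T_half % 2                 # partial codewords x_t = u_t . T_{M/2}
    # backward accumulation: x_t <- x_t XOR x_{t+1} XOR ... XOR x_{2S}
    acc = np.flip(np.cumsum(np.flip(parts, axis=0), axis=0) % 2, axis=0)
    return acc.reshape(-1)

u = np.array([0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1], dtype=np.uint8)
x = encode_sliding(u, 8)
```

Applied to the N=16 example, this produces the same codeword x as the direct multiplication by T.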
Sliding Window Decoding
[0123] Sliding window decoding of a polar codeword generated using a polar code designed according to the above embodiment is performed such that 2S polar decoding steps are used, each one using M channel signals (e.g. received signal values upon which LLRs are based). Each step outputs M/2 bits of the input vector, using the M/2 input bits decoded at the previous step to steer half of the LLRs used in the decoding. We can consider the W.sub.2S block, as shown in
[0124] In general, the decoding proceeds as shown in
[0125] Then, at a second step 1502, a sub-input vector is calculated from the windowed values. The sub-input vector u.sub.t, where t is the number of the decoding step, comprises M/2 bits and is calculated from M/2 likelihood values (LLRs) derived from the windowed values. As will be seen, these are derived by combining the values according to the Tanner graph representation of the polar code as previously shown in
[0126] Once the sub-input vector u.sub.1 is calculated at the first window position, at 1503 the window is shifted to a further (e.g. second) position 1420. In particular, the window is shifted to the right from the initial position by M/2 values. Further likelihood values are determined at 1504 corresponding to the second position, in a similar manner to the first, but also taking into account the LLR values that were discarded in moving the window position. A further sub-input vector u.sub.2 is then decoded based on the derived likelihoods (step 1505).
[0127] At 1506, a determination is made as to whether all the received signal values have been decoded; in other words, whether all the sub-input vectors that make up the input vector have been decoded from the received signal values. If the answer is ‘No’ then the process returns to step 1503, the window is shifted by M/2 values to a further position 1430, and the decoding process continues. At 1504, in obtaining the further likelihood values, not only are the likelihoods discarded from the immediately preceding window taken into account, but also all earlier discarded values. This may be achieved by maintaining a buffer that is updated at the end of each decoding stage using the values that are about to be discarded. A specific embodiment of this process will be described subsequently.
[0128] If the answer is “yes” at step 1506 then the process moves to step 1507 in which message/information bits are determined from the sub-input vectors which when concatenated together comprise the full input vector into which the information bits to be decoded have been inserted. The information bits can be extracted using the information set (i.e. reliability sequence) which specifies the bit positions containing information (good channels) and those containing frozen bit values (noisy/bad channels). The information set is the full-length information set corresponding to the polar code generated above having the length N.
[0129] A decoding example is depicted in
[0130] In embodiments, the received signal values y are log likelihood ratios (LLRs) and the decoding process is based on a successive cancellation (SC) decoding scheme. However, it is noted that other existing polar decoding schemes (e.g. successive cancellation list (SCL) decoding) may alternatively be used to iteratively determine the values of the input vector by evaluating and updating received values as they propagate through the Tanner graph, making hard decisions on the input bits based on the propagated received values and knowledge of the positions of the frozen bits according to the polar code. Further, although log-likelihood ratio (LLR) values are used here, another measure of likelihood based on received signal values (e.g. from a demodulated signal) may be used. LLR values are computationally convenient because they avoid the numerical underflow that can occur when the algorithm is implemented on a processor.
[0131] In general, the log-likelihood ratios (LLRs) are propagated along the Tanner graph from right to left, and hard decisions on the decoded bits of the input vector u are passed from left to right and used to update the LLR values in subsequent branches for consistency with the decoded bits. Initially, the LLRs of the coded bits x based on the received vector y are calculated at the receiver. The received signal is decoded bit-by-bit using LLR propagation through the graph to retrieve the transmitted input vector u (i.e. the transmitted message). For every bit u.sub.i, the position i is checked against the information set, which indicates the bit positions of the input vector that contain frozen bits and those that contain information bits. If the position i of the bit u.sub.i corresponds to a frozen bit then its value is decoded as the predetermined value u.sub.i=0, and the decoder moves on to evaluating the next bit. If the information set indicates that u.sub.i is an information bit, then a corresponding LLR is recursively calculated for that bit position. A decision is then taken, based on the calculated LLR, as to the value of the bit u.sub.i at that position. This is typically done according to a threshold, where negative LLR values are indicative of a ‘1’ and positive values indicative of a ‘0’. The determination of the LLR for the bit u.sub.i generally involves receiving LLR values from a preceding stage in the multi-kernel Tanner graph and updating the values according to the update rules for that kernel block. Each kernel block consists of recursively connected iterations of the fundamental T.sub.2 polar code block and uses the existing decoding rules for the classical polar code kernel, where (u.sub.0 u.sub.1).Math.T.sub.2=(x.sub.0 x.sub.1), with
and where λ.sub.i and l.sub.i denote LLRs at the input vector side and output side (i.e. received LLR values) respectively, and u.sub.i and x.sub.i denote the hard decision on the bit values being decoded. The hard decision update rules dictate that
x.sub.0=u.sub.0+u.sub.1   (3)
x.sub.1=u.sub.1   (4)
[0132] Further, the inverse update rules (i.e. going from right to left in the Tanner graph) are u.sub.0=x.sub.0+x.sub.1 and u.sub.1=x.sub.1=u.sub.0+x.sub.0, which correspond to the message update equations:
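These T.sub.2 message update rules are commonly implemented with the min-sum approximation of the boxplus (check-node) operation; a sketch with illustrative names (the exact tanh-based rule may be substituted):

```python
import numpy as np

def f_update(l0, l1):
    """LLR of u0 given L(x0)=l0 and L(x1)=l1 (min-sum approximation)."""
    return np.sign(l0) * np.sign(l1) * np.minimum(np.abs(l0), np.abs(l1))

def g_update(l0, l1, u0):
    """LLR of u1 once u0 has been decided: l1 + (1 - 2*u0) * l0."""
    return l1 + (1 - 2 * u0) * l0
```

Both functions also operate element-wise on numpy vectors of LLRs, which is how they are applied stage by stage across the Tanner graph.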
[0133] A further embodiment of the decoding process is provided in
[0134] In an initialization step 1701, upper LLRs L.sub.0 (the LLR buffer) are initialized to zero. We call these upper LLRs because they relate to the LLRs that propagate downwards from an upper branch in the Tanner graph, derived in a previous decoding window t. The vector y is initialized with LLR values corresponding to the values of a signal received at the decoder. An information set I is initialized with the reliability sequence of the full multi-kernel polar code by which the received signal was encoded. The step counter t is initialized to t=1.
[0135] At step 1702, a sub-information set I.sub.t is calculated from the information set I for the values in the current decoding window defined by t. The values of I.sub.t are the values of I between (t−1).Math.M/2+1 and t.Math.M/2, reduced by (t−1).Math.M/2; obviously, K.sub.t=|I.sub.t|. This sub-information set will be used as the information set of a polar code having the classical polar code transformation matrix T.sub.M/2.
[0136] The next step 1703 is to extract sub-channel LLRs L.sub.1 and L.sub.2 from the received signal values y. The vector L.sub.1 of length M/2, consisting of the values of y between (t−1).Math.M/2+1 and t.Math.M/2, is extracted from y, while a second vector L.sub.2 of length M/2 is formed from the values of y between t.Math.M/2+1 and (t+1).Math.M/2 (taken as infinity for the last decoding step t=2S).
[0137] The first and second sub-channel LLRs L.sub.1 and L.sub.2 are then used in step 1704 to derive the channel LLRs L. The channel LLRs L to be used for the current decoding step are calculated on the basis of these two vectors as L=(L.sub.0+L.sub.1)⊞L.sub.2, where ⊞ denotes the element-wise boxplus (check-node) LLR combination.
[0138] This is derivable from the update rules for existing successive cancellation decoding according to equation (5) set out above when applied to the branches of the Tanner graph of the decoding box W.sub.2S.
[0139] Next, at steps 1705 and 1706, the (M/2, K.sub.t) polar code defined by I.sub.t is decoded via SC decoding using L as channel LLRs, based on the Tanner graph of the classical polar code T.sub.M/2 block. Successive cancellation decoding results in a sub-input vector u.sub.t (step 1706). In SC decoding, the hard decisions made on the bits of the sub-input vector u.sub.t are further used to calculate the partial sums x.sub.t used in the SC decoding, such that x.sub.t=u.sub.t.Math.T.sub.M/2. Accordingly, SC decoding provides both u.sub.t and x.sub.t as outputs.
[0140] In step 1707, the partial sums x.sub.t are then used to update the upper LLRs L.sub.0 as
L.sub.0=(L.sub.0+L.sub.1).Math.(1−2x.sub.t)   (10)
[0141] Again, this is based on the classical successive decoding update rules when applied to the nodes of the Tanner graph of the W.sub.2S block, specifically update equation 6 mentioned above.
[0142] Further, at step 1708 it is determined if t=2S. If t=2S, decoding is concluded, and input vector u is calculated at step 1709 by appending all the decoded sub input vectors to form u=[u.sub.1 u.sub.2 . . . u.sub.2S].
[0143] If at step 1708 it is determined that t is not equal to 2S, then the process returns to step 1702: t is incremented by 1 at step 1710 and another decoding step is performed. The increment of the value of t by 1 has the effect of shifting the decoding window by M/2 values to the right, other than for the last position, where the L.sub.1 values are the last M/2 values of the received signal and the L.sub.2 values are taken as infinity. Accordingly, as will be appreciated, a standard successive cancellation decoder may be used to decode received signal values of a codeword encoded according to the earlier described embodiments of a multi-kernel polar code.
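The complete procedure of steps 1701 to 1710 can be sketched end to end as follows. This is an illustrative numpy implementation assuming min-sum SC decoding and the 1-based information-set indexing used in the text; it is a sketch under those assumptions, not the patent's reference implementation:

```python
import numpy as np

def f(a, b):
    """Min-sum check-node (boxplus) combination of two LLR vectors."""
    return np.sign(a) * np.sign(b) * np.minimum(np.abs(a), np.abs(b))

def sc_decode(llr, frozen):
    """SC decoding of a classical polar code; returns (u, x) with x = u.T."""
    if len(llr) == 1:
        u = 0 if frozen[0] else int(llr[0] < 0)
        return [u], np.array([u])
    half = len(llr) // 2
    left, right = llr[:half], llr[half:]
    u_a, c_a = sc_decode(f(left, right), frozen[:half])
    u_b, c_b = sc_decode(right + (1 - 2 * c_a) * left, frozen[half:])
    return u_a + u_b, np.concatenate([(c_a + c_b) % 2, c_b])

def sliding_window_decode(y, M, info_set):
    """Steps 1701-1710: decode N channel LLRs y with a window of length M."""
    h = M // 2
    two_s = len(y) // h
    L0 = np.zeros(h)                       # step 1701: upper-LLR buffer
    u = []
    for t in range(1, two_s + 1):
        L1 = y[(t - 1) * h : t * h]        # step 1703: sub-channel LLRs
        if t < two_s:
            L2 = y[t * h : (t + 1) * h]
            L = f(L0 + L1, L2)             # step 1704: channel LLRs
        else:
            L = L0 + L1                    # last window: L2 taken as +infinity
        # step 1702: sub-information set for this window (1-based indices)
        I_t = [i - (t - 1) * h for i in info_set if (t - 1) * h < i <= t * h]
        frozen = [j + 1 not in I_t for j in range(h)]
        u_t, x_t = sc_decode(L, frozen)    # steps 1705-1706
        u += u_t
        L0 = (L0 + L1) * (1 - 2 * x_t)     # step 1707: buffer update, eq. (10)
    return u
```

Applied to the N=16 worked example below, this sketch recovers the transmitted input vector u from the noisy channel LLRs.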
[0144] An example of a decoding process using successive decoding will now be described when applied to a polar code where N=16 as per the example code generation and encoding example shown in
u=[0 0 0 1 0 0 0 1 0 0 1 0 0 0 0 1]
x=u.Math.T=[0 1 0 1 1 0 1 0 0 1 0 1 1 1 1 1]
[0145] In the initialization step the following are received as the channel LLRs,
y={1.3, −0.2, 0.5, 0.2, −1.1, 0.7, −0.5, 1.1, 0.8, 0.5, 1.0, −0.4, −1.0, −0.8, −1.4, −0.6}
and the following as the information set,
I={4, 8, 11, 12, 13, 14, 15, 16}
further the M/2=4 LLR buffer values (upper LLR values) are set to zero such that
L.sub.0=[0 0 0 0]
[0146] The channel LLR values shown in underline (the fourth and tenth values, 0.2 and 0.5) have a sign error due to the noise in the channel. As will be demonstrated, the error correcting properties of the polar code allow the correct input vector u and encoded bit values x to be decoded from the channel LLRs.
[0147] At the first decoding step t=1, the sub-channel LLRs extracted from y are L.sub.1={1.3, −0.2, 0.5, 0.2} and L.sub.2={−1.1, 0.7, −0.5, 1.1}, and, since L.sub.0 is zero, the channel LLRs are L=(L.sub.0+L.sub.1)⊞L.sub.2={−1.1, −0.2, −0.5, 0.2}. The channel LLRs L are propagated across the permutation network 903 such that they are provided at the outputs of the first T.sub.4 polar coding block 902-1. Thus, these values can be successively decoded across the polar coding block using the sub-information set I.sub.1={4}. The resulting decoded sub-input vector is u.sub.1=[0 0 0 1] and the partial sum values are x.sub.1=[1 1 1 1]. The partial sum values propagate from left to right and are used to update the upper LLR buffer according to L.sub.0=(L.sub.0+L.sub.1).Math.(1−2x.sub.1)=L.sub.1.Math.(1−2x.sub.1)={−1.3, 0.2, −0.5, −0.2}.
[0148] At the next stage t=2 shown in
[0149] Thus, the channel LLRs are L=(L.sub.0+L.sub.1)⊞L.sub.2={−1.1, −0.2, −0.5, 0.2}. The channel LLRs L are propagated across the permutation network 903 such that they are provided at the outputs of the second T.sub.4 polar coding block 902-2. Again, these values are successively decoded across the polar coding block using the sub-information set I.sub.2={4}, derived from the information set I values that correspond to the bit positions of the second sub-input vector. The resulting decoded sub-input vector is u.sub.2=[0 0 0 1] and the partial sum values are x.sub.2=[1 1 1 1]. The update of the upper LLR buffer proceeds according to L.sub.0=(L.sub.0+L.sub.1)·(1−2x.sub.2)={−1.3, 0.2, −0.5, −0.2}.
[0150] The third stage t=3 is shown in
[0151] Thus, the channel LLRs are derived as L=(L.sub.0+L.sub.1)⊞L.sub.2={−1.0, 0.4, −1.4, 0.6}. The channel LLRs L are propagated across the permutation network 903 such that they are provided at the outputs of the third T.sub.4 polar coding block 902-3. Again, these LLR values are used to perform successive decoding across the polar coding block using the sub-information set I.sub.3={3, 4}, derived from the information set I values that correspond to the bit positions of the third sub-input vector. The resulting decoded sub-input vector is u.sub.3=[0 0 1 0] and the partial sum values are x.sub.3=[1 1 1 1]. The update of the upper LLR buffer proceeds according to L.sub.0=(L.sub.0+L.sub.1)·(1−2x.sub.3)={−3.2, −0.4, −2.0, −1.3}.
[0152] Decoding then proceeds to the final stage t=4; effectively, the window is shifted so that only the last four values are within the decoding window. This means that the sub-channel LLRs become L.sub.1={−1.0, −0.8, −1.4, −0.6} and L.sub.2={∞, ∞, ∞, ∞}. The L.sub.1 values are propagated across the permutation network 904 such that they are provided to the fourth rows of the W4 decoding blocks 904-1 . . . 904-4 respectively. The buffer L.sub.0 is {−3.2, −0.4, −2.0, −1.3} and, thus, L.sub.0+L.sub.1={−4.2, −1.2, −3.4, −1.9}.
[0153] Thus, the channel LLRs are derived as L=(L.sub.0+L.sub.1)⊞L.sub.2={−4.2, −1.2, −3.4, −1.9}. The channel LLRs L are propagated across the permutation network 903 such that they are provided at the outputs of the fourth T.sub.4 polar coding block 902-4. Again, these LLR values are used to perform successive decoding across the polar coding block using the sub-information set I.sub.4={1, 2, 3, 4}, derived from the information set I values that correspond to the bit positions of the fourth sub-input vector. The resulting decoded sub-input vector is u.sub.4=[0 0 0 1]. As this is the final decoding step, the steps of determining the partial sum values x.sub.4 and updating the buffer L.sub.0 are redundant and may be omitted.
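That the infinite L.sub.2 values of the final stage simply pass L.sub.0+L.sub.1 through can be checked directly, again assuming (as an illustrative assumption) that the combination is the min-sum check-node operation:

```python
import math

L0 = [-3.2, -0.4, -2.0, -1.3]
L1 = [-1.0, -0.8, -1.4, -0.6]
L2 = [math.inf] * 4   # upper half of the final window lies outside the signal

s = [a + b for a, b in zip(L0, L1)]   # L0 + L1, approx. [-4.2, -1.2, -3.4, -1.9]
# Min-sum check-node combination: with |L2| infinite, min() always selects
# |L0 + L1|, and the +inf sign leaves the sign of L0 + L1 unchanged, so L = s.
L = [math.copysign(min(abs(a), abs(b)), a * b) for a, b in zip(s, L2)]
```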
[0154] The derived sub-input vectors u.sub.1=[0 0 0 1], u.sub.2=[0 0 0 1], u.sub.3=[0 0 1 0], u.sub.4=[0 0 0 1] may be concatenated to give the decoded input vector u:
[0 0 0 1 0 0 0 1 0 0 1 0 0 0 0 1]
[0155] From the information set I, the decoded message is thus,
m=[1 1 1 0 0 0 0 1]
and matches the message as originally encoded using the generated multi-kernel polar code according to this embodiment.
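The extraction of the message from the decoded input vector via the information set can be sketched as follows, using the 1-based bit positions of the text:

```python
# Decoded input vector and information set from the worked example
u = [0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0, 1]
I = [4, 8, 11, 12, 13, 14, 15, 16]   # 1-based reliable bit positions

# The message is the input vector restricted to the information set
m = [u[i - 1] for i in I]
# m -> [1, 1, 1, 0, 0, 0, 0, 1]
```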
[0156]
[0157] The first obtaining unit 1901 obtains a first matrix as an m-fold Kronecker product of a 2×2 binary lower triangular matrix where m=log2(M/2), M<N, and N is the length of a polar code to be generated.
[0158] The second obtaining unit 1902 obtains a second matrix of dimension 2S×2S, where S=N/M and the inverse of the second matrix is a lower triangular band matrix.
[0159] The generating unit 1903 generates a transformation matrix for the polar code by calculating a Kronecker product of the second matrix with the first matrix.
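A minimal sketch of the Kronecker constructions performed by the first obtaining unit 1901 and the generating unit 1903, assuming binary (GF(2)) arithmetic. The function names are illustrative, and the identity matrix used below is only a shape placeholder for the second matrix, not the matrix of the disclosure (which is constrained only by its inverse being a lower triangular band matrix):

```python
import numpy as np

def inner_kernel(M):
    """m-fold Kronecker power of the 2x2 binary lower triangular kernel,
    with m = log2(M/2), giving the (M/2 x M/2) first matrix."""
    T2 = np.array([[1, 0], [1, 1]], dtype=np.uint8)
    T = np.array([[1]], dtype=np.uint8)
    for _ in range(int(np.log2(M // 2))):
        T = np.kron(T, T2)
    return T

def transformation_matrix(second, M):
    """Kronecker product of the (2S x 2S) second matrix with the inner
    kernel, yielding the N x N polar transformation matrix over GF(2)."""
    return np.kron(second, inner_kernel(M)) % 2

# Shape illustration for N = 16, M = 8 (so S = N/M = 2 and the second
# matrix is 4x4); the identity is a placeholder for the second matrix.
T = transformation_matrix(np.eye(4, dtype=np.uint8), 8)
# T.shape -> (16, 16)
```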
[0160] The information set unit 1904 determines an information set I identifying reliable bit channels for the polar code.
[0161] The selection by the first and second selecting units is such that a polar codeword of length N may be obtained using the polar code, the polar codeword being decodable by iteratively applying a sliding decoding window of length M to the polar codeword, where M<N.
[0162] Additionally, an encoder 1910 may be provided that receives the polar code from the apparatus 1900 and uses it to encode a message to be transmitted on a communications channel. Further, a transmitter 1920 may be provided (that may include an antenna) that is capable of transmitting the encoded message data across a channel, e.g. by modulating a signal and transmitting it via an antenna.
[0163] The apparatus 1900, 1910 and 1920 shown in
[0164]
[0165] The window unit 2001 applies at a first position, a window of length M to a received signal containing N signal values, where M<N.
[0166] The first decoding unit 2002 decodes a first sub-input vector using a polar code and first channel likelihoods L based on signal values obtained from the window at the first position.
[0167] The shifting unit 2003 shifts the window position to a second position.
[0168] The channel likelihood obtaining unit 2004 obtains second channel likelihoods L based on the signal values from the window at the second position and the decoded first sub-input vector.
[0169] The second decoding unit 2005 decodes a second sub-input vector using the polar code and the second channel likelihoods.
[0170] A receiver 2020 may be provided that receives a signal to be decoded e.g. via a communications network and provides it to the apparatus 2000. A demodulator 2010 may be provided that demodulates the signal received at the receiver 2020 before providing it to the apparatus 2000 for decoding.
[0171] The apparatus 2000, 2010 and 2020 shown in
[0172]
[0173] The method disclosed in the embodiments of the present disclosure may be applied in the processing unit 2101. In a process of implementation, each step of the method may be completed by using an integrated logic circuit of hardware in the processing unit 2101 or by instructions in a software form. These instructions may be implemented and controlled by the processing unit 2101. The foregoing processing unit, configured to execute the method disclosed in the embodiments of the present disclosure, may include a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and can implement or execute each disclosed method, step, and logic block diagram in the embodiments of the present disclosure. The general-purpose processor may be a microprocessor, or the processor may be any common processor or decoder, and so on. The steps of the method disclosed in the embodiments of the present disclosure may be directly executed and completed by a hardware decoding processor, or executed and completed by a combination of hardware and a software module in a decoding processor. The software module may be located in a storage medium mature in the art, such as a random-access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory 2102; the processing unit 2101 reads the information in the memory 2102 and completes the steps of the method in combination with its hardware. For example, the memory 2102 may store information about an obtained polar code, or a frozen set or information set, for the processing unit 2101 to use during encoding or decoding.
[0174] A communications system or a communications apparatus according to an embodiment of the present disclosure may include the apparatus 1900, the apparatus 2000, or the apparatus 2100.
[0175] The block error rate (BLER) performance of the sliding window design and decoding of polar codes in embodiments of the disclosure may be compared with independent block transmission and optimal full polar code transmission. Specifically, we consider the scenario where the transmitter has to send K bits to the receiver at a rate R=K/N, i.e. it should transmit N bits; however, the receiver can handle only M<N bits per reception due to limited decoding capabilities.
[0176] We compare three strategies:

[0177] State-of-the-art independent transmission (IND): A transmitter divides the K message bits into S=N/M messages of K′=K/S bits, which are encoded and transmitted independently using S polar codes of length M and dimension K′. Transmission is successful if all S blocks are decoded correctly.

[0178] Best case full polar code (FULL): A transmitter ignores the limitations at the receiver and transmits a codeword obtained using the full (N, K) polar code. This case is used as a benchmark of the best possible BLER performance attainable by polar codes in the transmission.

[0179] A sliding window decoding (SW) process according to the above described embodiments: A transmitter designs and encodes a polar codeword according to the already described code generation and encoding embodiments. A receiver uses a decoding process according to the above embodiment, i.e.
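The block structure of the three strategies can be sketched for the parameters of the earlier worked example; the variable names are illustrative:

```python
N, M, K = 16, 8, 8    # code length, receiver window limit, message bits (R = 1/2)
S = N // M            # number of length-M blocks for IND transmission
K_ind = K // S        # dimension K' of each independent (M, K') polar code

# IND sends S = 2 independent (8, 4) codes; FULL sends one (16, 8) code;
# SW sends one (16, 8) codeword decoded with a length-8 sliding window.
```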
[0180] In the following, we show performance results under SC (SCL-1 in the figures) and SCL decoding.
[0181] A person of ordinary skill in the art may be aware that, in combination with the examples described in the embodiments disclosed in this specification, units and algorithm steps may be implemented by electronic hardware, or a combination of computer software and electronic hardware. Whether the functions are performed by hardware or software depends on particular applications and design constraint conditions of the technical solutions. A person skilled in the art may use different methods to implement the described functions for each particular application, but it should not be considered that such implementation goes beyond the scope of the present disclosure.
[0182] It may be clearly understood by a person skilled in the art that, for the purpose of convenient and brief description, for a detailed working process of the foregoing system, apparatus, and unit, reference may be made to the corresponding process in the foregoing method embodiments, and details are not described herein again.
[0183] In the embodiments provided in the present application, it should be understood that the disclosed system, apparatus, and method may be implemented in other manners. For example, the described apparatus embodiment is merely exemplary. For example, the unit division is merely logical function division and may be other division in actual implementation. For example, a plurality of units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the displayed or discussed mutual couplings or direct couplings or communications connections may be implemented through some interfaces. The indirect couplings or communications connections between the apparatuses or units may be implemented in electronic, mechanical, or other forms.
[0184] The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on a plurality of network units. A part or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
[0185] In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each of the units may exist alone physically, or two or more units are integrated into one unit.
[0186] When the functions are implemented in the form of a software functional unit and sold or used as an independent product, the functions may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of the present disclosure essentially, or the part contributing to the prior art, or a part of the technical solutions may be implemented in the form of a software product. The computer software product is stored in a storage medium, and includes several instructions for instructing a computer device (which may be a personal computer, a server, or a network device) to perform all or a part of the steps of the methods described in the embodiments of the present disclosure. The foregoing storage medium includes: any medium that can store program codes, such as a USB flash drive, a removable hard disk, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk, or an optical disc.
[0187] The present disclosure can be embodied in other specific apparatus and/or methods. The described embodiments are to be considered in all respects as illustrative and not restrictive. In particular, the scope of the disclosure is indicated by the appended claims rather than by the description and figures herein. All changes that come within the meaning and range of equivalency of the claims are to be embraced within their scope.