Generalized polar codes
11336300 · 2022-05-17
Assignee
Inventors
CPC classification
H04L1/005
ELECTRICITY
H03M13/458
ELECTRICITY
International classification
Abstract
A method for determining the n best positions of frozen bits in a channel decoder for a noisy communication channel. A decoding method and decoding processing unit for implementing the channel decoder having frozen bits at the n worst positions. A method and system that iteratively, for each bit i from the n bits, determines a probability vector for the bit i by traversing a logical graph using contraction identities simplified to specific values, indexes the specific values from the contraction identities newly computed during the determination of the probability vector for subsequent reference during a following iteration based on corresponding contraction identities, fixes the bit i from the probability vector and moves to bit i+1 until all n bits are fixed.
Claims
1. A method of decoding data received over a noisy channel having a channel bandwidth defining channel bits, the method comprising: receiving channel bits transmitted over the noisy channel and encoded using a polar code encoding scheme having input positions, the received channel bits encoding non-frozen message bits and frozen bits, the non-frozen message bits applied to encoder input positions having a highest probability of successful decoding after transmission; selecting a polar code decoding scheme corresponding to the polar code encoding scheme; decoding the received channel bits to output the non-frozen message bits by iteratively, for each bit position: if the bit at a bit position is a frozen bit, fixing the decoded bit at the bit position to a fixed frozen bit value; if the bit at the bit position is a non-frozen bit, fixing the decoded bit at the bit position to a respective non-frozen value determined using the selected polar code decoding scheme and the previously fixed decoded bits by: determining a probability vector for the decoded bit by traversing a logical graph using contraction identities simplified to specific values; and fixing the bit from the probability vector; indexing the specific values from the contraction identities newly computed during the determination of the probability vector for subsequent reference during a following iteration based on corresponding contraction identities; and increasing to a next bit position and processing the next bit until all channel bits are fixed; and outputting as the decoded message the non-frozen bits of the fixed decoded bits.
2. The method of claim 1, wherein the noisy communication channel is modelled as an erasure channel with an erasure probability.
3. The method of claim 1, wherein the noisy channel is modelled as an erasure channel that presents correlated noise characteristics and is further defined by: a bad-state probability of erasure; a good-state probability of erasure, where the bad-state probability of erasure is greater than or equal to the good-state probability of erasure; a probability of transition between the good state and the bad state; and a probability of transition between the bad state and the good state.
4. A non-transitory computer readable medium storing instructions which, when executed by a processor of a computing device, configure the computing device to perform a method of decoding data received over a noisy channel having a channel bandwidth defining channel bits, the method comprising: receiving the channel bits transmitted over the noisy channel and encoded using a polar code encoding scheme having input positions, the received channel bits encoding non-frozen message bits and frozen bits, the non-frozen message bits applied to encoder input positions having a highest probability of successful decoding after transmission; selecting a polar code decoding scheme corresponding to the polar code encoding scheme; decoding the received channel bits to output the non-frozen message bits by iteratively, for a bit: if the bit is a frozen bit, fixing the decoded bit to a fixed frozen bit value; and if the bit is a non-frozen bit, fixing the decoded bit to a respective non-frozen value determined using the selected polar code decoding scheme and the previously fixed decoded bits by: determining a probability vector for the decoded bit by traversing a logical graph using contraction identities simplified to specific values; and fixing the bit from the probability vector; indexing the specific values from the contraction identities newly computed during the determination of the probability vector for subsequent reference during a following iteration based on corresponding contraction identities; and outputting as the decoded message the non-frozen bits of the fixed decoded bits.
5. The computer readable medium of claim 4, wherein the noisy communication channel is modelled as an erasure channel with an erasure probability.
6. The computer readable medium of claim 4, wherein the noisy channel is modelled as an erasure channel that presents correlated noise characteristics and is further defined by: a bad-state probability of erasure; a good-state probability of erasure, where the bad-state probability of erasure is greater than or equal to the good-state probability of erasure; a probability of transition between the good state and the bad state; and a probability of transition between the bad state and the good state.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) Further features and exemplary advantages of the present invention will become apparent from the following detailed description, taken in conjunction with the appended drawings.
DETAILED DESCRIPTION
(25) The phenomenon of channel polarization, discovered by Arikan, can be produced by a controlled-not (CNOT) gate. Because the control bit is added to the target bit, it becomes redundantly encoded and thus more robust to noise. On the other hand, the information of the target bit is partially washed away because its value is modified in a way that depends on the value of the possibly unknown control bit. We thus say that the channels have partially polarized into a better and a worse channel. The encoding circuit of a Polar code is obtained by iterating this polarization procedure, and asymptotically produces a perfect polarization, where a fraction of the channels are error-free and the complement are completely randomizing.
(26) Because of this recursive nature, the encoding circuit takes the geometric form of a spectral transformation where CNOT gates follow a hierarchical arrangement on different length scales, and, much like the (fast) Fourier transform, the linear encoding matrix can be decomposed into a Kronecker product of small matrices. In this case, the polarization is defined with respect to the successive cancellation decoder, where the marginal probability of input bit i is calculated with the prior knowledge of the bits 1, . . . , i−1.
(27) A broad family of codes building on the powerful graphical calculus developed in the field of quantum many-body physics has been previously developed. In this field, the encoding circuits associated with Polar codes are a restricted form of branching multi-scale entanglement renormalization ansatz (branching MERA) tensor networks. More precisely, they correspond to branching MERA networks with half of the tensors being trivial identity gates, resulting in an object that could be called a 'branching tree'. In the present disclosure, focus is put on the branching MERA code family, which contains all the codes obtained from an encoding circuit with the topology of the branching MERA, and includes the Polar code as a special case with many trivial gates. While Polar codes use a sequence of polarization steps, each composed of a product of gates operating on non-intersecting, finite-size blocks, the generalization considered here goes beyond this scheme by using a convolutional structure at every polarization step (leading to Convolutional Polar codes).
(28) With Polar codes, Arikan was able to give the first concrete example of a provably efficient and capacity-achieving code for symmetric channels, generating significant interest and stimulating further work on improvements and generalizations. In particular, the limitations of Polar codes have been investigated, with the notable result that, while keeping the decomposability into Kronecker products, the asymptotic block error performance of Polar codes is optimal among codes whose underlying matrices have small dimension. In fact, the dimension has to be raised to at least 15 to improve the asymptotic error exponent. Convolutional Polar codes re-open the investigation of optimal behavior by abandoning the block-structure-limited polarization in favor of a more general polarization structure.
(29) Convolutional Polar codes form one example of a generalization of Polar codes, and inherit many of their properties, including a successive cancellation decoder that produces a tensor network contractible in log-linear time. While the decoding algorithm is slower by a small, constant numerical factor, it has been observed that a significant improvement in both the channel polarization and the error-correction performance is achievable. While an important practical limitation of Polar codes is their significant finite-size effects, it has been observed that Convolutional Polar codes display a steeper waterfall region, thus suppressing such finite-size effects.
(30) Abstractly, a gate, such as a CNOT, can be viewed as a tensor A.sup.αβγ . . . with a certain number of indices denoted α, β, γ, . . . , each taking values in a finite set, which we will assume henceforth to be $\mathbb{Z}_2$. The number of indices is the rank of the tensor. For instance, the CNOT gate is a rank-four tensor N.sup.αβγδ with indices α and β representing the two input bits and γ and δ representing the two output bits. The value of the tensor is given by: N.sup.αβγδ=1 when γ=α and δ=α⊕β, and N.sup.αβγδ=0 otherwise. A tensor can be graphically represented as a vertex and its indices as edges, with the degree of the vertex equal to the rank of the tensor. In that setting, an edge linking two vertices represents a tensor contraction defined by the following equation:
(31) $C^{\alpha_1\cdots\alpha_5\,\beta_1\cdots\beta_4}=\sum_{\sigma\in\mathbb{Z}_2}A^{\alpha_1\cdots\alpha_5\,\sigma}\,B^{\sigma\,\beta_1\cdots\beta_4}$ (1)
(32) Tensor contraction is a generalization of matrix multiplication. A (closed) graph represents a tensor network (TN) with all edges contracted and, hence, a scalar. A graph with one open edge (an edge with one end not connected to a vertex) represents a tensor network with one uncontracted index, and thus a vector; and so forth. Such objects, and particularly their graphical representations, are also called factor graphs, where some vertices may be identified as variables. Normal factor graphs provide a better analogy to open tensor networks.
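By way of non-limiting illustration only, the following Python sketch builds the CNOT tensor of paragraph (30) and performs an edge contraction in the sense of Eq. (1) with numpy.einsum; it is an expository aid, not part of the claimed subject matter.

```python
# Illustrative sketch: the CNOT gate as a rank-four tensor over Z_2,
# and an edge contraction in the sense of Eq. (1) via numpy.einsum.
import numpy as np

# N[a, b, g, d] = 1 when g = a and d = a XOR b, and 0 otherwise
# (a, b are the input indices; g, d are the output indices).
N = np.zeros((2, 2, 2, 2))
for a in range(2):
    for b in range(2):
        N[a, b, a, a ^ b] = 1.0

# Rank-one tensors for fixed bit values, as in paragraph (37).
zero = np.array([1.0, 0.0])  # "0" = (1, 0)
one = np.array([0.0, 1.0])   # "1" = (0, 1)

# Contract both input edges with the value 1: the resulting rank-two
# tensor is supported on outputs (g, d) = (1, 0), i.e. (1, 1 XOR 1).
out = np.einsum('abgd,a,b->gd', N, one, one)
print(out)
```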
(33) Reference is now made to the appended drawings.
(34) Different encoding circuits can be built from the levels of CNOT gates depicted in the figures.
(35) The presented generalization extends to higher dimensional fields and to codes with a larger branching ratio. In many of these cases, this opens up the possibility of using nonlinear gates as the kernel. Efficient coding and decoding is expected for all of these extensions. It should be noted that not all branching MERA codes, according to the above definition, exhibit channel polarization; whether they do will depend on the choice of gates used.
(36) Viewing the encoding circuit of a code, such as the Convolutional Polar code encoding circuit shown in the figures, as a tensor network allows the decoding problem to be phrased as a tensor network contraction, as detailed below.
(37) The encoding circuit G is a rank-2N tensor, with N indices representing the N input bits and N indices representing the N output bits, where some of the input bits are fixed (frozen) to 0. It is to be noted that freezing the bits to 1 would also work, and deciding to freeze the bits to 0 or 1 might be justified in the presence or expectation of non-symmetric noise over the channel. Finally, the probability distribution over the value of a single bit can be represented as a rank-one tensor, with the tensor "0"=(1, 0) representing the bit value 0 and the tensor "1"=(0, 1) representing the bit value 1. Given these tensors, the probability of the input bit string x=(x.sub.1, . . . , x.sub.N), given the observed output y=(y.sub.1, . . . , y.sub.N), can be represented as the TN shown in the figures.
(38) In general, not all TNs can be efficiently contracted. Referring to Eq. (1), where tensor A has rank 6 and tensor B has rank 5, the resulting tensor C from their contraction has rank 6+5−2=9. Thus, while tensor A is specified by 2.sup.6 entries and tensor B is specified by 2.sup.5 entries, tensor C contains 2.sup.9>>2.sup.6+2.sup.5 entries. A TN composed of bounded-rank tensors (e.g., a circuit with only two-bit gates) can be specified efficiently. However, the tensors obtained at intermediate steps of the TN contraction schedule can be of very high rank r, and their contraction will produce an intractable amount of data 2.sup.r. The contraction schedule that minimizes the intermediate tensor rank defines the treewidth of the graph, so generally the cost of contracting a TN is exponential in its treewidth.
(39) Encoding circuits that produce TNs with finite treewidth can therefore be efficiently decoded. This is the case, for instance, of Convolutional Polar codes, whose corresponding TN is simply a chain and therefore has a constant treewidth. However, it can sometimes be possible to efficiently decode even when the encoding circuit has a large treewidth, by making use of special circuit identities that simplify the TN, which are depicted in the figures.
(40) An equivalence exists between several TN families developed in the context of quantum many-body physics and the encoding circuits of various classical and quantum error correcting codes. In particular, the computational techniques applicable in physics and coding theory to evaluate quantities of interest (e.g., magnetization in physics, bit likelihood in coding theory) are often identical. Of particular interest here are TNs called branching MERA, which are applied in physics both as a conceptual and numerical tool to understand the properties of potentially highly entangled physical systems which exhibit separation of degrees of freedom at low energies. The graph representing this TN has a richer structure than the encoding circuit of the Polar code, but it remains efficiently contractible. This is the key observation which enables a generalization of Polar codes.
(41) The definition of a branching MERA code relies purely on the topology of the encoding circuit. For the special case of Polar and Convolutional Polar codes with two sublayers (or levels), these definitions are:
(42) Definition 1: A code is said to be a Polar or Convolutional Polar code over N=2.sup.n bits (labelled 1 . . . N) if and only if its encoding circuit can be constructed recursively using the following rules. If N=1, we begin with the trivial code. An encoding circuit for a code of size N=2.sup.n can be constructed from those of two codes of size 2.sup.n−1 by interleaving the logical channels and adding the following two-bit gates to the beginning of the circuit.
(43) As mentioned previously, these recursive constructions are depicted in the appended drawings.
(44) For the Convolutional Polar code, we begin with a sublayer (or level) of 2.sup.n−1 CNOT gates, where the ith gate connects bit 2i to bit 2i+1 (for i≠2.sup.n−1). The final gate connects bit N with bit 1. If this gate is non-trivial, the Convolutional Polar code is said to be periodic (otherwise it is described as non-periodic or as having open boundary conditions). For either the Polar or the Convolutional Polar code, we next apply 2.sup.n−1 CNOT gates, where the ith gate connects bit 2i−1 to bit 2i. Adding more sublayers (or levels) leads to further generalizations of the convolutional polar code.
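The recursive construction of Definition 1 can be sketched in Python as follows. This is an illustration only: the CNOT orientation (which bit of each listed pair acts as control) and the interleaving convention are assumptions of the sketch, since the text above does not fix them.

```python
def encode(bits, convolutional=True, periodic=True):
    """Sketch of Definition 1 / paragraph (44). `bits` is a list of 0/1 of
    length N = 2**n. Gates added at the beginning of the circuit act first,
    then the two half-size codes act on the interleaved channels."""
    N = len(bits)
    if N == 1:
        return list(bits)
    b = list(bits)
    if convolutional:
        # First sublayer: the ith gate connects bit 2i to bit 2i+1
        # (1-indexed), plus a wrap-around gate from bit N to bit 1
        # in the periodic code.
        for i in range(1, N // 2):
            b[2 * i] ^= b[2 * i - 1]
        if periodic:
            b[0] ^= b[N - 1]
    # Second sublayer (Polar and Convolutional Polar alike): the ith gate
    # connects bit 2i-1 to bit 2i (1-indexed).
    for i in range(N // 2):
        b[2 * i + 1] ^= b[2 * i]
    # Interleave the logical channels and recurse on the two halves.
    out = [0] * N
    out[0::2] = encode(b[0::2], convolutional, periodic)
    out[1::2] = encode(b[1::2], convolutional, periodic)
    return out

# Example: encode 8 bits with the periodic Convolutional Polar circuit.
print(encode([1, 0, 0, 0, 0, 0, 0, 0]))
```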
(45) For the sake of simplicity, only the example of the Convolutional Polar code with two levels (or sublayers) will be discussed further in terms of encoding and decoding. As will be shown, efficient decoding is performed by contracting the tensor network, and examples will be provided for the Convolutional Polar code. It will become apparent to skilled persons that the generalized encoding circuits can be decoded efficiently by contracting the corresponding tensor network. The contraction rules for the generalized encoding circuits are obtained by a straightforward generalization of the ones applicable to the Convolutional Polar code.
(46) The encoding circuit for a Convolutional Polar code over N bits can be applied with a computation cost scaling as N log.sub.2 N, and when parallel processing is considered, only takes time scaling as log.sub.2 N. The total number of two-bit gates to encode using the Polar code is simply N/2 log.sub.2 N. The periodic Convolutional Polar code has precisely twice as many gates introduced at each layer, making for a total of N log.sub.2 N. From the construction, it is clear that only one (Polar) or two (Convolutional Polar) layers of gates are applied every time the code size doubles, so the circuit depth is logarithmic: log.sub.2 N for the Polar code and 2 log.sub.2 N for the Convolutional Polar code.
(47) The Convolutional Polar codes can be defined with periodic or open boundary conditions. The open boundary code is obtained from the periodic boundary code simply by removing the gate which connects bit 2.sup.n to bit 1 in the n-th level of polarization. While such boundary conditions lead to two distinct encoding circuits, it has been noted that, under successive cancellation decoding, the periodic and the open boundary Convolutional Polar codes become almost identical. Skilled persons will appreciate that minor modifications need to be applied for the open-boundary Convolutional Polar code when compared to the periodic Convolutional Polar code.
(48) The resolution of the decoding problem is significantly simplified by a successive cancellation scheme. In a successive cancellation decoder, the goal is to determine a single bit at a time, moving from right to left, by assuming complete ignorance of the input bits to the left, and total confidence in the value of the input bits to the right (either because they are frozen in the code, or because they have been previously decoded). A generic, optimal decoder will locate the codeword with maximal likelihood, that is, the most probable input x=(x.sub.1, . . . , x.sub.N), given the observed output y=(y.sub.1, . . . , y.sub.N), error model W, and a set of frozen bits A.sup.c.
(49) However, for many codes determining the most probable codeword exactly is a hard problem, and a range of (usually iterative) approximations are used. The successive cancellation decoder begins with the rightmost non-frozen bit at position i, and determines its value by maximizing the probability:
max.sub.xi P(x.sub.i|y,{x.sub.k,k=i−1,i−2, . . . 1})
(50) For the purpose of the above calculation, the bits to the left of i (i.e. i+1,i+2, . . . N) are considered unknown, even if they are frozen bits. The bits to the right of i (i.e. i−1,i−2, . . . 1) are known, either because they are frozen or because they were decoded in a previous step. In this sense, the successive cancellation decoder is not an optimal decoder, because it does not take advantage of all the available information at every step. Once a bit is determined, the successive cancellation decoder proceeds to the next non-frozen bit, and so on, until the complete message has been determined.
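The schedule of paragraphs (48)-(50) can be summarized by the following Python sketch. The function marginal() is a placeholder for the tensor-network contraction that yields the bit probabilities; it is an assumption of this illustration, not an interface defined above.

```python
def successive_cancellation_decode(N, y, frozen, marginal):
    """Sketch of successive cancellation decoding. `frozen` is the set of
    frozen positions; `marginal(i, y, fixed)` is assumed to return the pair
    (P(x_i = 0 | ...), P(x_i = 1 | ...)) given the already-fixed bits."""
    fixed = {}
    for i in range(N):                 # one bit at a time
        if i in frozen:
            fixed[i] = 0               # frozen bits are known in advance
        else:
            p0, p1 = marginal(i, y, fixed)
            fixed[i] = 0 if p0 >= p1 else 1   # maximize the bit probability
    return [fixed[i] for i in range(N)]
```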
(51) The CNOT gate has a very simple action on some states, such that it does not introduce any correlations to the joint distribution. Circuit identities exploiting this property have been previously introduced and are depicted in the figures.
(52) For the Polar code, probing a single bit in the code results in effectively probing a single bit in two smaller codes, one layer below. This is not the case for the Convolutional Polar code, but a similar structure emerges when more bits are considered—probing a three-bit distribution in one layer can be mapped to probing three-bit distributions in two smaller codes below. To decode bits i, i+1 and i+2 using successive cancellation on the Convolutional Polar code of N bits, strictly less than 5N gates remain in the tensor network diagram.
(53) Due to the construction of the Convolutional Polar code, the odd and even sites can be split into two effective Convolutional Polar codes, each of which has three bits connected to the CNOTs above. The bits to the left are unknown, and the bits to the right are fully determined, and so the process iterates. Another 5 gates will remain in both the 'odd' and 'even' sub-code. For the bottom two layers, the number of bits in each code becomes fewer than 6, so the iteration ends. At the second-bottom layer, there are only four gates in total, while there are just two in the lowest; in both cases fewer than five. The total number of gates is upper-bounded by 5(N−1).
(54) Now that the tensor network diagram has been simplified, it remains to be shown that it can be contracted efficiently. While the gate cancellation is done from top to bottom, the tensor network contraction is done from bottom to top. To see how this is done, it is useful to highlight a few contraction identities, which are depicted in the figures.
(55) The Convolutional Polar code typically combines two bits at the lowest layer. The distributions can then be combined into a 3-bit distribution. The 3-bit to 3-bit transformation is a natural fixed point of the branching MERA. When the input bit is near the boundary, the lower layers may make several 2-bit to 2-bit transformations before moving to the 3-bit ones. A single step of successive cancellation decoding in the N-bit Convolutional Polar code has computational cost linear in N. A full sweep of successive cancellation decoding can therefore be performed with computational cost N.sup.2 for both the Polar and Convolutional Polar codes.
(56) A full sweep of successive cancellation decoding can be performed at a reduced cost N log.sub.2 N by storing in memory the contraction steps of the successive cancellation decoder. Indeed, the TNs representing the successive cancellation decoding of bit i and of bit i−1 only differ at log.sub.2 N locations. Thus, if the results of the computations leading to the decoding of bit i−1 are stored in memory, the complexity of decoding bit i is only log.sub.2 N, leading to a total complexity of N log.sub.2 N to decode all N bits.
(57) It has been shown that the CNOT gate used in a polarization step transforms two erasure channels with erasure rates ϵ.sub.L and ϵ.sub.R (on the left and right, respectively) into two new effective erasure channels with erasure rates ϵ′.sub.L=ϵ.sub.Lϵ.sub.R and ϵ′.sub.R=ϵ.sub.L+ϵ.sub.R−ϵ.sub.Lϵ.sub.R. The transformation is slightly more complicated for the Convolutional Polar code, but can nonetheless be performed efficiently.
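For the (non-convolutional) Polar case, the single-step update of paragraph (57) can be iterated to obtain the erasure rates of all effective channels. The following sketch is illustrative only; the position ordering it produces is an assumption of the sketch.

```python
def polar_erasure_rates(n, eps):
    """Erasure rates of the 2**n effective channels of a Polar code over an
    erasure channel with erasure rate eps, by iterating the update of
    paragraph (57): (eL, eR) -> (eL*eR, eL + eR - eL*eR)."""
    rates = [eps]
    for _ in range(n):
        rates = [r for e in rates for r in (e * e, 2 * e - e * e)]
    return rates

# Example: the 8 effective channels for eps = 0.5.
print(polar_erasure_rates(3, 0.5))
```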
(58) Under the erasure channel, the value of a bit is either known or not. Slightly more complex situations are to be considered: for instance, the sum of two bit values may be known while neither is known individually. In general, the state of knowledge can be summarized by linear constraints. In the above example, the constraint would be expressed as (1, 1)·(x.sub.1, x.sub.2)=a. In general, for a collection of n bits, the state of knowledge can be written Cx=a, where C is a k×n matrix with k≤n and a is a k-bit vector. Note that the actual value of the vector a is not important in our analysis, since the purpose is to determine whether a quantity is known or not, and not its actual value.
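A state of knowledge Cx=a is conveniently kept in a canonical form under row manipulations. The following generic GF(2) elimination sketch illustrates this; it is an expository aid, not code from the present disclosure.

```python
import numpy as np

def canonical_knowledge(C):
    """Reduce a k x n state-of-knowledge matrix C over GF(2) (paragraph (58))
    to reduced row-echelon form by row manipulations, dropping zero rows."""
    C = np.array(C, dtype=np.uint8) % 2
    rank = 0
    for col in range(C.shape[1]):
        pivots = [r for r in range(rank, C.shape[0]) if C[r, col]]
        if not pivots:
            continue
        C[[rank, pivots[0]]] = C[[pivots[0], rank]]   # move the pivot row up
        for r in range(C.shape[0]):
            if r != rank and C[r, col]:
                C[r] ^= C[rank]                        # eliminate the column
        rank += 1
    return C[:rank]

# "x1 + x2 is known" over two bits: C = (1, 1), already canonical.
print(canonical_knowledge([[1, 1]]))
```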
(59) Consider the convolutional polar transforms illustrated in the figures.
(60)
(61) In these row manipulations, the subscript indicates the dimension of the matrix, u=0 or 1, and 1.sub.u denotes the u×u identity matrix. The matrix B represents the new state of knowledge for bits y.sub.3, y.sub.4 and y.sub.5. A similar reasoning applies to the transformation on the right of the figure.
(62) There are 16 matrices C of dimensions k×3 for k≤3 which are distinct under row manipulations. Our study of the erasure channel is based on assigning probabilities to these 16 possible states of knowledge, and evolving these distributions through convolutional polar transforms as described above. Initially, each bit is erased with probability p, so a collection of 3 bits has the following distribution of states of knowledge:
(63) p.sub.1=p.sup.3; p.sub.2=p.sub.3=p.sub.4=p.sup.2(1−p); p.sub.9=p.sub.10=p.sub.11=p(1−p).sup.2; p.sub.16=(1−p).sup.3
(64) All 8 other p.sub.j=0. Our technique proceeds by combining pairs of states of knowledge B.sub.a and B.sub.b into a 6-bit state of knowledge C=B.sub.a⊕B.sub.b (their direct sum) with probability p.sub.ap.sub.b, and applying the procedure corresponding to either the left or the right of the figure.
(65) In the example of the Convolutional Polar code under successive cancellation, decoding can be represented as a tensor network with treewidth equal to 3. States of knowledge over 3 bits are therefore to be addressed. For convenience, all 16 states are enumerated hereinafter:
s.sub.1=∅
s.sub.2={x.sub.1}
s.sub.3={x.sub.2}
s.sub.4={x.sub.3}
s.sub.5={x.sub.1+x.sub.2}
s.sub.6={x.sub.1+x.sub.3}
s.sub.7={x.sub.2+x.sub.3}
s.sub.8={x.sub.1+x.sub.2+x.sub.3}
s.sub.9={x.sub.1,x.sub.2}
s.sub.10={x.sub.1,x.sub.3}
s.sub.11={x.sub.2,x.sub.3}
s.sub.12={x.sub.1,x.sub.2+x.sub.3}
s.sub.13={x.sub.2,x.sub.1+x.sub.3}
s.sub.14={x.sub.3,x.sub.1+x.sub.2}
s.sub.15={x.sub.1+x.sub.2,x.sub.2+x.sub.3}
s.sub.16={x.sub.1,x.sub.2,x.sub.3}
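Under this enumeration, the initial distribution of paragraph (63) can be written down directly. The following Python sketch is illustrative only.

```python
def initial_distribution(p):
    """Probabilities of the 16 three-bit states of knowledge s_1 ... s_16
    (paragraphs (62)-(65)) when each bit is independently erased with
    probability p. Returned 1-indexed: dist[i] is the probability of s_i."""
    q = 1.0 - p
    dist = [0.0] * 17
    dist[1] = p ** 3                            # nothing known
    dist[2] = dist[3] = dist[4] = q * p * p     # exactly one bit known
    dist[9] = dist[10] = dist[11] = q * q * p   # exactly two bits known
    dist[16] = q ** 3                           # all three bits known
    return dist

# Sanity check: the probabilities sum to one.
assert abs(sum(initial_distribution(0.5)) - 1.0) < 1e-12
```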
(66) The transformation of the probabilities p.sub.i associated with the states under each layer of the Convolutional Polar code in the bulk can be determined (e.g., following the procedure described above) and reads:
p′.sub.1=p.sub.1p.sub.1+p.sub.1p.sub.2+p.sub.1p.sub.5+p.sub.1p.sub.6+p.sub.1p.sub.8+p.sub.2p.sub.1+p.sub.5p.sub.1+p.sub.6p.sub.1+p.sub.8p.sub.1
p′.sub.2=p.sub.2p.sub.2+p.sub.5p.sub.8+p.sub.6p.sub.6+p.sub.8p.sub.5
p′.sub.3=p.sub.2p.sub.8+p.sub.5p.sub.2+p.sub.6p.sub.5+p.sub.8p.sub.6
p′.sub.4=p.sub.1p.sub.4+p.sub.1p.sub.10+p.sub.1p.sub.14+p.sub.2p.sub.4+p.sub.4p.sub.1+p.sub.4p.sub.2+p.sub.4p.sub.4+p.sub.4p.sub.5+p.sub.4p.sub.6+p.sub.4p.sub.8+p.sub.4p.sub.10+p.sub.4p.sub.14+p.sub.5p.sub.4+p.sub.6p.sub.4+p.sub.8p.sub.4+p.sub.10p.sub.1+p.sub.10p.sub.4+p.sub.14p.sub.1+p.sub.14p.sub.4
p′.sub.5=p.sub.1p.sub.7+p.sub.1p.sub.12+p.sub.1p.sub.15+p.sub.2p.sub.7+p.sub.3p.sub.1+p.sub.3p.sub.2+p.sub.3p.sub.5+p.sub.3p.sub.6+p.sub.3p.sub.7+p.sub.3p.sub.8+p.sub.3p.sub.12+p.sub.3p.sub.15+p.sub.5p.sub.7+p.sub.6p.sub.7+p.sub.8p.sub.7+p.sub.9p.sub.1+p.sub.9p.sub.7+p.sub.13p.sub.1+p.sub.13p.sub.7
p′.sub.6=p.sub.2p.sub.6+p.sub.5p.sub.5+p.sub.6p.sub.2+p.sub.8p.sub.8
p′.sub.7=p.sub.2p.sub.5+p.sub.5p.sub.6+p.sub.6p.sub.8+p.sub.8p.sub.2
p′.sub.8=p.sub.1p.sub.3+p.sub.1p.sub.9+p.sub.1p.sub.13+p.sub.2p.sub.3+p.sub.5p.sub.3+p.sub.6p.sub.3+p.sub.7p.sub.1+p.sub.7p.sub.2+p.sub.7p.sub.3+p.sub.7p.sub.5+p.sub.7p.sub.6+p.sub.7p.sub.8+p.sub.7p.sub.9+p.sub.7p.sub.13+p.sub.8p.sub.3+p.sub.12p.sub.1+p.sub.12p.sub.3+p.sub.15p.sub.1+p.sub.15p.sub.3
p′.sub.9=p.sub.2p.sub.12+p.sub.5p.sub.12+p.sub.6p.sub.15+p.sub.8p.sub.15+p.sub.9p.sub.2+p.sub.9p.sub.8+p.sub.9p.sub.12+p.sub.13p.sub.5+p.sub.13p.sub.6+p.sub.13p.sub.15
p′.sub.10=p.sub.2p.sub.10+p.sub.5p.sub.14+p.sub.6p.sub.10+p.sub.8p.sub.14+p.sub.10p.sub.2+p.sub.10p.sub.6+p.sub.10p.sub.10+p.sub.14p.sub.5+p.sub.14p.sub.8+p.sub.14p.sub.14
p′.sub.11=p.sub.2p.sub.14+p.sub.5p.sub.10+p.sub.6p.sub.14+p.sub.8p.sub.10+p.sub.10p.sub.5+p.sub.10p.sub.8+p.sub.10p.sub.14+p.sub.14p.sub.2+p.sub.14p.sub.6+p.sub.14p.sub.10
p′.sub.12=p.sub.2p.sub.9+p.sub.5p.sub.13+p.sub.6p.sub.13+p.sub.8p.sub.9+p.sub.12p.sub.2+p.sub.12p.sub.5+p.sub.12p.sub.9+p.sub.15p.sub.6+p.sub.15p.sub.8+p.sub.15p.sub.13
p′.sub.13=p.sub.2p.sub.13+p.sub.5p.sub.9+p.sub.6p.sub.9+p.sub.8p.sub.13+p.sub.12p.sub.6+p.sub.12p.sub.8+p.sub.12p.sub.13+p.sub.15p.sub.2+p.sub.15p.sub.5+p.sub.15p.sub.9
p′.sub.14=p.sub.1p.sub.11+p.sub.1p.sub.16+p.sub.2p.sub.11+p.sub.3p.sub.3+p.sub.3p.sub.4+p.sub.3p.sub.9+p.sub.3p.sub.10+p.sub.3p.sub.11+p.sub.3p.sub.13+p.sub.3p.sub.14+p.sub.3p.sub.16+p.sub.4p.sub.3+p.sub.4p.sub.7+p.sub.4p.sub.9+p.sub.4p.sub.11+p.sub.4p.sub.12+p.sub.4p.sub.13+p.sub.4p.sub.15+p.sub.4p.sub.16+p.sub.5p.sub.11+p.sub.6p.sub.11+p.sub.7p.sub.4+p.sub.7p.sub.7+p.sub.7p.sub.10+p.sub.7p.sub.11+p.sub.7p.sub.12+p.sub.7p.sub.14+p.sub.7p.sub.15+p.sub.7p.sub.16+p.sub.8p.sub.11+p.sub.9p.sub.3+p.sub.9p.sub.4+p.sub.9p.sub.11+p.sub.10p.sub.3+p.sub.10p.sub.7+p.sub.10p.sub.11+p.sub.11p.sub.1+p.sub.11p.sub.2+p.sub.11p.sub.3+p.sub.11p.sub.4+p.sub.11p.sub.5+p.sub.11p.sub.6+p.sub.11p.sub.7+p.sub.11p.sub.8+p.sub.11p.sub.9+p.sub.11p.sub.10+p.sub.11p.sub.11+p.sub.11p.sub.12+p.sub.11p.sub.13+p.sub.11p.sub.14+p.sub.11p.sub.15+p.sub.11p.sub.16+p.sub.12p.sub.4+p.sub.12p.sub.7+p.sub.12p.sub.11+p.sub.13p.sub.3+p.sub.13p.sub.4+p.sub.13p.sub.11+p.sub.14p.sub.3+p.sub.14p.sub.7+p.sub.14p.sub.11+p.sub.15p.sub.4+p.sub.15p.sub.7+p.sub.15p.sub.11+p.sub.16p.sub.1+p.sub.16p.sub.3+p.sub.16p.sub.4+p.sub.16p.sub.7+p.sub.16p.sub.11
p′.sub.15=p.sub.2p.sub.15+p.sub.5p.sub.15+p.sub.6p.sub.12+p.sub.8p.sub.12+p.sub.9p.sub.5+p.sub.9p.sub.6+p.sub.9p.sub.15+p.sub.13p.sub.2+p.sub.13p.sub.8+p.sub.13p.sub.12
p′.sub.16=p.sub.2p.sub.16+p.sub.5p.sub.16+p.sub.6p.sub.16+p.sub.8p.sub.16+p.sub.9p.sub.9+p.sub.9p.sub.10+p.sub.9p.sub.13+p.sub.9p.sub.14+p.sub.9p.sub.16+p.sub.10p.sub.9+p.sub.10p.sub.12+p.sub.10p.sub.13+p.sub.10p.sub.15+p.sub.10p.sub.16+p.sub.12p.sub.10+p.sub.12p.sub.12+p.sub.12p.sub.14+p.sub.12p.sub.15+p.sub.12p.sub.16+p.sub.13p.sub.9+p.sub.13p.sub.10+p.sub.13p.sub.13+p.sub.13p.sub.14+p.sub.13p.sub.16+p.sub.14p.sub.9+p.sub.14p.sub.12+p.sub.14p.sub.13+p.sub.14p.sub.15+p.sub.14p.sub.16+p.sub.15p.sub.10+p.sub.15p.sub.12+p.sub.15p.sub.14+p.sub.15p.sub.15+p.sub.15p.sub.16+p.sub.16p.sub.2+p.sub.16p.sub.5+p.sub.16p.sub.6+p.sub.16p.sub.8+p.sub.16p.sub.9+p.sub.16p.sub.10+p.sub.16p.sub.12+p.sub.16p.sub.13+p.sub.16p.sub.14+p.sub.16p.sub.15+p.sub.16p.sub.16
(67) Evaluation of the erasure probability iterates these transformations following the contraction schedule of the tensor network associated to the code.
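For illustration, two of the sixteen updates of paragraph (66) are transcribed verbatim below; pa and pb are the 1-indexed distributions of the two sub-blocks being combined (paragraph (64)), for instance two copies of initial_distribution() from the earlier sketch. This is an expository aid only.

```python
def transformed_p2_p6(pa, pb):
    """Verbatim transcription of the updates for p'_2 and p'_6 from
    paragraph (66); the remaining fourteen updates follow the same pattern."""
    p2 = pa[2] * pb[2] + pa[5] * pb[8] + pa[6] * pb[6] + pa[8] * pb[5]
    p6 = pa[2] * pb[6] + pa[5] * pb[5] + pa[6] * pb[2] + pa[8] * pb[8]
    return p2, p6

# Combine two identical blocks with erasure probability 1/2.
pa = pb = initial_distribution(0.5)
print(transformed_p2_p6(pa, pb))
```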
(68) As such, similar procedures can be derived for the other combinations shown in the figures.
(69) A decoding procedure can also be provided for Convolutional Polar codes in the presence of correlated errors. The suggested technique is applicable to Polar codes, Convolutional Polar codes and their generalizations based on the branching MERA tensor network. The technique can be applied to any finite-state memory channel. Equivalently, it is applicable to any noise model which can be described by a tensor network having the topology of a chain. Our model involves two different types of tensors: one representing the transition probabilities (a stochastic matrix), which is a rank-2 tensor, and one representing the states of the Markov process, described by a rank-3 tensor (rank-2 at the boundary).
(70) Some noise models are correlated, meaning that when a given bit is affected, it is likely that neighboring bits are also affected (e.g., a burst-noise channel). The proposed decoding technique under correlated noise is based on a tensor network contraction algorithm that is schematically represented by a contraction of the noise model tensor network and the code encoding circuit using a successive cancellation scheme, an example of which is depicted in the figures.
(71) The first step in the technique is propagation of the bit states at the top of the circuit (a tree tensor network attached to the noise model). In the second step, the simple contractions detailed above are applied.
(72) An example of the decoding technique is illustrated in the appended drawings.
(73) The decoding strategy in the presence of correlated noise is adapted for any finite-state memory error model. For concreteness, a two-state model is illustrated in the figures.
(74) Given the transition probabilities, a stochastic matrix can be defined and it becomes possible to find the fraction of time spent in states G and B. To do so, the stationary distribution of the process is needed; it corresponds to the eigenvector associated with the eigenvalue λ=1. Let P(B) be the fraction of time spent in state B on average. Then the average error probability of the channel is simply given by hP(B). Since P(B)=P.sub.G→B/(P.sub.G→B+P.sub.B→G), the average error probability is given by P.sub.e=hP.sub.G→B/(P.sub.G→B+P.sub.B→G).
(75) Some metric may be used to quantify the correlation strength. In this noise model, l.sub.B is defined as a random variable that measures the length of a consecutive stay in state B, referred to herein as the burst-length. That random variable follows a geometric distribution, so its mean is ⟨l.sub.B⟩=1/P.sub.B→G.
(76) Another useful metric is the average number of transitions N.sub.t between states G and B. This quantity is given by ⟨N.sub.t⟩=2N·P.sub.G→B·P.sub.B→G/(P.sub.G→B+P.sub.B→G).
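These quantities follow directly from the transition probabilities. The sketch below computes them, assuming N=1024 so as to match the ⟨N.sub.t⟩ column of Table 1 below; the value of N is not stated in paragraph (76) itself, so this is an assumption of the illustration.

```python
def burst_channel_metrics(p_gb, p_bg, h, N=1024):
    """Metrics of the two-state noise model of paragraphs (73)-(76):
    stationary fraction of time in state B, average error probability,
    mean burst length and mean number of transitions over N channel uses."""
    P_B = p_gb / (p_gb + p_bg)                 # stationary distribution
    P_e = h * P_B                              # average error probability
    l_B = 1.0 / p_bg                           # mean burst length
    N_t = 2.0 * N * p_gb * p_bg / (p_gb + p_bg)
    return P_B, P_e, l_B, N_t

# First row of Table 1: P_GB = 0.01, P_BG = 0.04 -> l_B = 25.0, N_t = 16.384.
print(burst_channel_metrics(0.01, 0.04, h=0.5))
```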
(77) TABLE 1: Noise model parameters

P.sub.G→B   P.sub.B→G   ⟨l.sub.B⟩   ⟨N.sub.t⟩
0.01        0.04        25.00        16.384
0.02        0.08        12.50        32.768
0.03        0.12         8.33        49.152
0.04        0.16         6.25        65.536
0.05        0.20         5.00        81.920
0.06        0.24         4.17        98.304
0.07        0.28         3.57       114.688
0.08        0.32         3.13       131.072
0.09        0.36         2.78       147.456
0.10        0.40         2.50       163.840
(78) During numerical simulations, the burst noise model has been measured in both a low-noise and a high-noise regime. In the low-noise regime, the probability of erasure in the Bad state is set to h=0.1. In the high-noise regime, it is set to h=0.5. In both regimes, the decoding technique proposed herein significantly outperforms the standard polar code constructions as well as the interleaved construction. Indeed, the observed frame error rate (FER) and bit error rate (BER) are always significantly lower when using the decoding technique presented herein.
(80) A Convolutional Polar code similar to the one depicted in the example of the figures has been generalized over 1024 bits. The following table provides the worst positions when coding a message of 512 bits from the 1024 bits, over a channel having the erasure characteristics indicated in the table heading. The position numbers have been ordered and are presented over multiple columns, for easier reference:
(81) TABLE-US-00002 TABLE 2 Ordered worst positions for the convolutional polar code; erasure = ½, h = 0.5 1 26 51 76 101 126 151 176 379 511 633 707 759 815 855 886 913 938 963 988 1013 2 27 52 77 102 127 152 177 381 527 635 709 761 816 856 887 914 939 964 989 1014 3 28 53 78 103 128 153 178 383 535 637 711 763 817 857 888 915 940 965 990 1015 4 29 54 79 104 129 154 191 399 539 639 715 765 819 859 889 916 941 966 991 1016 5 30 55 80 105 130 155 253 415 541 643 717 767 821 860 891 917 942 967 992 1017 6 31 56 81 106 131 156 255 431 543 647 719 771 823 861 892 918 943 968 993 1018 7 32 57 82 107 132 157 287 439 559 651 723 773 824 862 893 919 944 969 994 1019 8 33 58 83 108 133 158 303 443 567 653 725 775 825 863 894 920 945 970 995 1020 9 34 59 84 109 134 159 311 445 571 655 727 779 827 864 895 921 946 971 996 1021 10 35 60 85 110 135 160 315 447 573 663 729 781 828 865 896 922 947 972 997 1022 11 36 61 86 111 136 161 317 463 575 665 731 783 829 867 897 923 948 973 998 1023 12 37 62 87 112 137 162 319 471 583 667 733 787 831 869 899 924 949 974 999 1024 13 38 63 88 113 138 163 335 475 591 669 735 789 832 871 900 925 950 975 1000 14 39 64 89 114 139 164 343 477 599 671 737 791 833 872 901 926 951 976 1001 15 40 65 90 115 140 165 347 479 603 679 739 793 835 873 902 927 952 977 1002 16 41 66 91 116 141 166 349 487 605 683 741 795 837 875 903 928 953 978 1003 17 42 67 92 117 142 167 351 491 607 685 743 797 839 876 904 929 954 979 1004 18 43 68 93 118 143 168 359 493 611 687 745 799 841 877 905 930 955 980 1005 19 44 69 94 119 144 169 363 495 615 691 747 801 843 878 906 931 956 981 1006 20 45 70 95 120 145 170 365 499 619 693 749 803 845 879 907 932 957 982 1007 21 46 71 96 121 146 171 367 501 621 695 751 805 847 880 908 933 958 983 1008 22 47 72 97 122 147 172 371 503 623 697 752 807 848 881 909 934 959 984 1009 23 48 73 98 123 148 173 373 505 627 699 753 809 849 883 910 935 960 985 1010 24 49 74 99 124 149 174 375 507 629 701 755 811 851 884 911 936 961 986 1011 25 50 75 100 125 150 175 377 509 631 703 757 813 853 885 912 937 962 987 1012
(82) It is also important to note that, in all likelihood, certain bits listed herein have a probability of error very close to that of some of the bits that are indicated as unfrozen, and that interchanging these bits would mostly have a negligible effect on the overall performance. To illustrate this, the same position numbers as provided above are provided again, but this time from worst position number to best position number. Again, they are presented over multiple columns, for easier reference:
(83) TABLE-US-00003 TABLE 3 Unordered worst positions for the convolutional polar code; erasure = ½ 1007 995 951 939 883 942 731 851 447 904 864 505 487 399 17 42 67 92 117 142 167 1015 959 943 765 735 909 699 797 821 729 849 287 906 303 18 43 68 93 118 143 168 1019 1000 976 970 703 755 783 507 918 697 679 894 930 311 19 44 69 94 119 144 169 1021 975 1002 879 944 383 575 835 819 781 629 737 828 335 20 45 70 95 120 145 170 1022 993 994 863 958 827 867 897 934 833 878 914 902 343 21 46 71 96 121 146 171 1023 1014 767 763 877 948 727 807 367 892 860 805 475 709 22 47 72 97 122 147 172 1024 1010 969 966 929 940 695 795 813 573 824 793 371 253 23 48 73 98 123 148 173 1017 979 972 925 861 917 637 753 745 779 627 583 443 665 24 49 74 99 124 149 174 1013 1006 974 889 639 928 901 543 932 896 876 501 463 773 25 50 75 100 125 150 175 1011 1004 893 931 761 907 845 503 916 725 856 541 801 1 26 51 76 101 126 151 176 991 981 963 960 903 839 719 381 351 693 653 787 643 2 27 52 77 102 127 152 177 1003 985 953 945 847 749 926 946 319 571 621 499 365 3 28 53 78 103 128 153 178 999 998 927 759 875 823 687 825 811 723 651 539 439 4 29 54 79 104 129 154 1020 996 986 964 859 924 635 938 669 817 619 832 471 5 30 55 80 105 130 155 1018 977 982 831 962 815 873 880 711 691 415 789 848 6 31 56 81 106 131 156 1005 973 891 952 871 747 843 905 775 591 886 377 816 7 32 57 82 107 132 157 1016 992 965 923 950 936 857 791 741 567 605 493 363 8 33 58 83 108 133 158 1012 957 968 956 921 671 631 495 667 717 615 535 431 9 34 59 84 109 134 159 1009 895 949 978 855 881 655 379 255 685 771 900 317 10 35 60 85 110 135 160 1001 971 941 911 757 920 623 912 908 559 872 752 349 11 36 61 86 111 136 161 983 955 980 919 511 733 869 865 739 633 603 862 359 12 37 62 87 112 137 162 989 988 947 751 829 743 607 922 910 647 707 491 611 13 38 63 88 113 138 163 987 984 887 885 799 913 853 837 663 715 527 373 809 14 39 64 89 114 139 164 997 967 935 933 915 701 888 479 803 683 884 445 315 15 40 65 90 115 140 165 1008 990 961 937 954 899 509 375 191 841 599 477 347 16 41 66 91 116 141 166
(84) It can be noted that the first 178 bits (up to bit number 178) and the last 126 bits (from bit number 899 onwards) are frozen to 0, while the remaining frozen bits are scattered therebetween. However, the first 178 bits have a probability of error that is difficult to discriminate compared to the other remaining unfrozen bits and have therefore been chosen in numerical order rather than by varying probability of error.
(85) A second example is provided in the following table for the previously discussed “2-1-2 code” (composed of the levels of CNOT gates 10, 30, 40 and 50) generalized over 1024 bits. Again, the worst positions are provided when coding a message of 512 bits from the 1024 bits for the generalized “2-1-2 code” over a channel having a symmetrical probability of error equal, for every bit, to 0.5. Again, the position numbers have been ordered and are presented over multiple columns, for easier reference:
(86) TABLE-US-00004 TABLE 4 Ordered worst positions for the “2-1-2 code”; erasure = ½ 191 257 307 357 407 457 507 557 607 657 707 757 807 838 863 888 913 938 963 988 1013 193 259 309 359 409 459 509 559 609 659 709 759 809 839 864 889 914 939 964 989 1014 195 261 311 361 411 461 511 561 611 661 711 761 811 840 865 890 915 940 965 990 1015 197 263 313 363 413 463 513 563 613 663 713 763 813 841 866 891 916 941 966 991 1016 199 265 315 365 415 465 515 565 615 665 715 765 815 842 867 892 917 942 967 992 1017 201 267 317 367 417 467 517 567 617 667 717 767 817 843 868 893 918 943 968 993 1018 203 269 319 369 419 469 519 569 619 669 719 769 819 844 869 894 919 944 969 994 1019 205 271 321 371 421 471 521 571 621 671 721 771 820 845 870 895 920 945 970 995 1020 223 273 323 373 423 473 523 573 623 673 723 773 821 846 871 896 921 946 971 996 1021 225 275 325 375 425 475 525 575 625 675 725 775 822 847 872 897 922 947 972 997 1022 227 277 327 377 427 477 527 577 627 677 727 777 823 848 873 898 923 948 973 998 1023 229 279 329 379 429 479 529 579 629 679 729 779 824 849 874 899 924 949 974 999 1024 231 281 331 381 431 481 531 581 631 681 731 781 825 850 875 900 925 950 975 1000 233 283 333 383 433 483 533 583 633 683 733 783 826 851 876 901 926 951 976 1001 235 285 335 385 435 485 535 585 635 685 735 785 827 852 877 902 927 952 977 1002 237 287 337 387 437 487 537 587 637 687 737 787 828 853 878 903 928 953 978 1003 239 289 339 389 439 489 539 589 639 689 739 789 829 854 879 904 929 954 979 1004 241 291 341 391 441 491 541 591 641 691 741 791 830 855 880 905 930 955 980 1005 243 293 343 393 443 493 543 593 643 693 743 793 831 856 881 906 931 956 981 1006 245 295 345 395 445 495 545 595 645 695 745 795 832 857 882 907 932 957 982 1007 247 297 347 397 447 497 547 597 647 697 747 797 833 858 883 908 933 958 983 1008 249 299 349 399 449 499 549 599 649 699 749 799 834 859 884 909 934 959 984 1009 251 301 351 401 451 501 551 601 651 701 751 801 835 860 885 910 935 960 985 1010 253 303 353 403 453 503 553 603 653 703 753 803 836 861 886 911 936 961 986 1011 255 305 355 405 455 505 555 605 655 705 755 805 837 862 887 912 937 962 987 1012
(87) A third example is provided in the following table for the previously discussed Polar Code depicted in the figures, this time under correlated noise. Again, the position numbers have been ordered and are presented over multiple columns, for easier reference:
(88) TABLE-US-00005 TABLE 5 Ordered worst positions for the polar code under correlated noise; h = 0.5 in the bad state; h = 0 in the good state; Pgb = 0.01 and Pbg = 0.04 1 26 51 120 239 351 428 475 506 573 638 712 751 792 828 874 904 935 963 988 1013 2 27 52 122 240 352 430 476 507 574 639 716 752 794 829 875 906 936 964 989 1014 3 28 53 123 244 360 431 477 508 575 640 718 754 795 830 876 907 938 965 990 1015 4 29 54 124 246 364 432 478 509 576 656 719 755 796 831 877 908 939 966 991 1016 5 30 55 125 247 366 436 479 510 592 664 720 756 797 832 878 909 940 967 992 1017 6 31 56 126 248 367 438 480 511 600 668 724 757 798 840 879 910 941 968 993 1018 7 32 57 127 250 368 439 484 512 603 670 726 758 799 844 880 911 942 969 994 1019 8 33 60 128 251 372 440 486 528 604 671 727 759 800 846 882 912 943 970 995 1020 9 34 62 160 252 374 442 487 536 605 672 728 760 807 847 883 914 944 971 996 1021 10 35 63 176 253 375 443 488 540 606 680 730 761 808 848 884 915 946 972 997 1022 11 36 64 184 254 376 444 490 542 607 684 731 762 810 852 885 916 947 973 998 1023 12 37 80 188 255 378 445 491 543 608 686 732 763 811 854 886 917 948 974 999 1024 13 38 88 189 256 379 446 492 544 616 687 733 764 812 855 887 918 949 975 1000 14 39 92 190 272 380 447 493 552 620 688 734 765 813 856 888 919 950 976 1001 15 40 94 191 288 381 448 494 556 622 692 735 766 814 858 889 920 951 977 1002 16 41 95 192 304 382 456 495 558 623 694 736 767 815 859 890 921 952 978 1003 17 42 96 208 312 383 460 496 559 624 695 740 768 816 860 891 922 953 979 1004 18 43 104 216 316 384 462 498 560 628 696 742 776 820 861 892 923 954 980 1005 19 44 108 220 318 400 463 499 564 630 698 743 780 821 862 893 924 955 981 1006 20 45 110 222 319 408 464 500 566 631 699 744 782 822 863 894 925 956 982 1007 21 46 111 223 320 412 468 501 567 632 700 746 783 823 864 895 926 957 983 1008 22 47 112 224 336 414 470 502 568 634 701 747 784 824 868 896 927 958 984 1009 23 48 116 232 344 415 471 503 570 635 702 748 788 825 870 900 928 959 985 1010 24 49 118 236 348 416 472 504 571 636 703 749 790 826 871 902 932 960 986 1011 25 50 119 238 350 424 474 505 572 637 704 750 791 827 872 903 934 962 987 1012
(89) Of course, skilled persons will readily understand that these three tables represent only a selected few from thousands of possibilities. However, the method for determining the position of the frozen bits can be generalized. Firstly, the channel bandwidth needs to be determined. The generalized convolutional polar codes exemplified herein are best conveyed over 2.sup.n bits, but an adapted encoder and decoder could be designed for any specific number of bits. For instance, given the tolerance of the generalized convolutional polar codes to bit loss, one could design an encoder that is "wider" than the channel bandwidth and consequently consider the "extra" bits as lost during transmission (e.g., a 1024-bit encoder for a 1000-bit-wide channel or an 8192-bit encoder for a 6500-bit-wide channel). Secondly, apart from the channel bandwidth, the number of bits for the message needs to be determined (i.e., how many message bits). Considering the channel bandwidth, the number of frozen bits can then be computed. A determination of the probability of erasure for the channel is also required (it can be arbitrary, but should be as close as possible to the actual channel behavior). In some cases, the actual channel is best represented using the memory effect, which then provides different probabilities of erasure for the channel depending on the (good or bad) state and on the probabilities of transition between the good and the bad state and between the bad and the good state. The actual channel may or may not be best characterized as an erasure channel, but will nevertheless be characterized as such for the purpose of the present embodiment. Thirdly, once the (generalized) polar encoding circuit and an equivalent decoder are determined, together with erasure characteristics for the channel, the worst positions can be orderly determined for a given number of message bits.
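As a minimal illustration of this three-step procedure for the memoryless erasure case, the sketch below ranks the input positions of a (non-convolutional) Polar code by their polarized erasure rate, reusing polar_erasure_rates() from the earlier sketch; the tie-breaking and index ordering are assumptions of the illustration, not the claimed determination method.

```python
def worst_positions(n, eps, num_frozen):
    """Return the num_frozen input positions (0-indexed) of a 2**n-bit Polar
    code with the highest polarized erasure rates; these are the positions
    to freeze (paragraph (89))."""
    rates = polar_erasure_rates(n, eps)
    order = sorted(range(len(rates)), key=lambda i: rates[i], reverse=True)
    return sorted(order[:num_frozen])

# 1024-bit code, erasure probability 1/2, 512 message bits -> 512 frozen.
frozen = worst_positions(10, 0.5, 512)
```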
(90) The polar code and, more generally, the generalized convolutional polar code, may be particularly suited as a channel coding mechanism for the control channel (e.g., in the 5G standard). It is conceivable that different (generalized) polar code encoding and decoding schemes may be supported and/or made available for communications between mobile devices and the core network. A single (symmetrical) coding and decoding scheme may be used for uplink and downlink transmissions or, e.g., given the different physical capabilities of a mobile node compared to the core network, different schemes may be used in different directions. The mobile device and the core network may also be able to negotiate the selection of the proper scheme (e.g., a default scheme complemented with the capacity to update to a better suited scheme). Alternatively or in addition, the scheme may be imposed by the core network or may be imposed by the mobile device (e.g., physical limitations for coding/decoding capabilities). The scheme may also be selected based on characteristics measured by the core network and/or the mobile device (e.g., radio characteristics and/or responsiveness metrics, etc.). That is, the core network may be responsible for dynamically determining the best coding scheme based on measured performance with a specific mobile node or, generally, with the mobile nodes surrounding a given base station or, even more generally, with the mobile nodes surrounding the base stations for one or more base station controllers.
(92) In the depicted example, the computer telecommunications system 1000 comprises a connected device 1100 able to communicate over a network 1200.
(93) A bus 1170 is depicted as an example of means for exchanging data between the different modules of the connected device 1100. The present invention is not affected by the way the different modules exchange information between them. For instance, the memory module 1120 and the processor module 1130 could be connected by a parallel bus, but could also be connected by a serial connection or involve an intermediate module (not shown) without affecting the teachings of the present invention.
(94) Likewise, even though explicit mentions of the memory module 1120 and/or the processor module 1130 are not made throughout the description of the various embodiments, persons skilled in the art will readily recognize that such modules are used in conjunction with other modules of the connected device 1100 to perform routine as well as innovative steps related to the present invention.
(95) The connected device 1100 may also comprise a Graphical User Interface (GUI) module 1150 comprising one or more display screen(s). The display screens of the GUI module 1150 could be split into one or more touch or passive screens, but could also be any combination of flat and/or curved touch or passive screens made accessible to a user.
(96) The computer telecommunications system 1000 comprises a storage system 1500 that comprises data related to various systems and subsystems of the system 1000 and that may further log dynamic data while one or more communications are being handled.
(97) In the depicted example, the connected device 1100 communicates with the network 1200 via one or more network links.
(98) In the context of the depicted embodiments, runtime execution, real-time execution or real-time priority processing execution corresponds to operations executed during a communication between the connected device 1100 and the network 1200 that may have an impact on actual and/or perceived quality of the communication from a system or from a user perspective. An operation performed at runtime, in real-time or using real-time priority processing thus typically needs to meet certain performance constraints that may be expressed, for instance, in terms of maximum time, maximum number of frames, and/or maximum number of processing cycles. For instance, different telecommunications standards provide for different performance requirements. Skilled persons will readily recognize that real-time processing may not actually be achievable in absolutely all circumstances in which it would be best. The real-time priority processing required for the purpose of the disclosed embodiments relates to the quality of the communication that the connected device sets out to respect and/or the quality of service as codified within the network and/or as perceived by the user of the connected device, and does not require absolute real-time processing.
(99) A control network (e.g., the network 1400 itself or another network overlaid on the network 1400) may be used to exchange control-related information. For instance, handoff procedures, address management, cell and/or channel attribution for the connected device 1100 and events related to interactions of a user of the connected device 1100 (e.g., data usage or the like) may be shared through the control network. Likewise, network-wide events (e.g., related to the status of the network, etc.) may be shared through the control network from one or more centralized computer systems (not shown). In addition, the storage module 1500 (e.g., a networked database system) may be accessible to different components of the computer telecommunications system 1000.
(101) The polar code encoding scheme may have a number of bits k as a power of 2 and k may therefore be selected to be equal to or greater than the number of channel bits (considering resilience of the polar codes to erasure over the channel).
(102) The polar code encoding scheme may alternatively comprise more than one sub-polar code, each comprising a number of coding bits expressed as a power of 2 (e.g., a 2.sup.6 code and a 2.sup.7 code for a total of 64+128=192 bits). The number of coding bits in each of the sub-polar codes may then be selected such that the sum thereof is equal to or greater than the number of channel bits.
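One way to realize this sizing rule is sketched below; the greedy decomposition and the minimum block size are illustration choices, not mandated by the text above.

```python
def sub_code_sizes(channel_bits, min_block=64):
    """Pick power-of-two sub-polar code sizes whose sum is equal to or
    greater than the number of channel bits (paragraphs (101)-(102))."""
    sizes, remaining = [], channel_bits
    while remaining > min_block:
        k = 1 << (remaining.bit_length() - 1)   # largest power of 2 <= remaining
        sizes.append(k)
        remaining -= k
    if remaining > 0:
        sizes.append(min_block)                 # round the tail up
    return sizes

print(sub_code_sizes(192))   # -> [128, 64], the example of paragraph (102)
```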
(103) As previously discussed, the noisy communication channel may present correlated noise characteristics characterized by a good-state probability of erasure p.sub.2, the probability of erasure p corresponding to a bad-state probability p.sub.1≥p.sub.2. The channel also provides a probability of transition between the good state and the bad state Pgb and between the bad state and the good state Pbg, and computing the n worst positions further considers the probabilities p.sub.2, Pgb and Pbg.
(104) Another aspect of this exemplary embodiment may comprise a decoding method, building upon the exemplified method 2000, for implementing the polar code decoder scheme selected therein and having frozen bits at the n worst positions determined thereby. Likewise, a decoding processing unit supporting the polar code decoder scheme as per the method 2000 and having frozen bits at the n worst positions determined thereby may be provided.
(106) The newly computed contraction identities correspond to one or more sections of the logical graph (e.g., see the example depicted in the figures).
(107) Various network links may be implicitly or explicitly used in the context of the present invention. While a link may be depicted as a wireless link, it could also be embodied as a wired link using a coaxial cable, an optical fiber, a category 5 cable, and the like. A wired or wireless access point (not shown) may be present on the link. Likewise, any number of routers (not shown) may be present and part of the link, which may further pass through the Internet.
(109) A method is generally conceived to be a self-consistent sequence of steps leading to a desired result. These steps require physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic/electromagnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It is convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, parameters, items, elements, objects, symbols, characters, terms, numbers, or the like. It should be noted, however, that all of these terms and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. The description of the present invention has been presented for purposes of illustration but is not intended to be exhaustive or limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art. The embodiments were chosen to explain the principles of the invention and its practical applications and to enable others of ordinary skill in the art to understand the invention in order to implement various embodiments with various modifications as might be suited to other contemplated uses.
(110) It is interesting to study channel polarization of the erasure channel under Polar and Convolutional Polar coding to get a better understanding of their comparative performance.
(112) From these results, a simple upper bound on the probability of at least one error in the block, or frame-error rate (FER), can also be deduced. By simply summing the probability of erasures over the k data channels and noting that the chance of an error is at most the sum of probabilities that a given data bit is the first to be decoded incorrectly, an (over)-estimate of the FER is obtained.
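Under the erasure-channel approximation, this union bound can be evaluated from the polarized rates computed earlier. The sketch below assumes the k data channels are simply the k most reliable ones; this assumption is an illustration choice.

```python
def fer_upper_bound(n, eps, k):
    """(Over-)estimate of the frame-error rate of paragraph (112): the sum of
    erasure probabilities over the k best (data) channels of a 2**n-bit
    Polar code, using polar_erasure_rates() from the earlier sketch."""
    rates = sorted(polar_erasure_rates(n, eps))
    return sum(rates[:k])    # union bound over the k data channels

print(fer_upper_bound(10, 0.5, 512))
```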
(114) The performance of the Polar and Convolutional Polar codes at protecting data from a variety of channels has been numerically compared, with particular focus on finite code-length effects on codes between 256 and 8192 bits. For all the simulations performed, a simplified channel selection scheme has been used that is independent of the details of the error model. The selected scheme uses the symmetric bit flip channel with probability p and evaluates, for each position j, the probability that bit x.sub.j is the first to be decoded incorrectly. It works by using an all-zero input and an output where the decoder believes each bit has an independent probability p of being a 1, and 1−p of being a 0. For each bit x.sub.j, a corresponding tensor-network diagram is contracted, with x.sub.i=0 for i<j and x.sub.i random for i>j. A more accurate estimate of the logical channel error rate for both the Polar code and the Convolutional Polar code could be obtained by sampling, i.e. by sampling over the possible bit values x.sub.i with i<j instead of fixing them to 0. Alternatively, more sophisticated techniques could also be used for Convolutional Polar codes. However, it has been observed that this simplified procedure gives adequate results over all investigated channels (for instance, performing better for the bit-flip channel than the data presented in the figures).
(115) The results for the binary erasure channel with code rate 1/2 are given in the figures.
(117) Finally, performance under the more realistic additive white Gaussian noise channel is depicted in the figures.