DECODING APPARATUS, DECODING METHOD AND PROGRAM

20240056102 · 2024-02-15


    Abstract

    A decoding device includes a memory and a processor configured to execute inputting a code word encoded by a polar code from an original message; decoding the original message from the code word based on a conditional probability expressed by a symmetric parameterization and having observation information as a condition; and outputting the decoded original message.

    Claims

    1. A decoding device, comprising: a memory; and a processor configured to execute inputting a code word encoded by a polar code from an original message; decoding the original message from the code word based on a conditional probability expressed by a symmetric parameterization and having observation information as a condition; and outputting the decoded original message.

    2. The decoding device according to claim 1, wherein, in a case where the decoding device is applied to decoding for an information source code, the processor further executes inputting information of a frozen bit as the code word from a noise-free communication channel or a storage medium, and decoding the original message by using auxiliary information as the observation information.

    3. The decoding device according to claim 1, wherein, in a case where the decoding device is applied to decoding for a communication channel code, the processor further executes inputting, as the code word, a communication channel output from a communication channel having noise, and decoding the original message by using the communication channel output as the observation information.

    4. The decoding device according to claim 3, wherein, in a case where the decoding device is applied to decoding for a systematic communication channel code, information of a frozen bit is shared between an encoding side and a decoding side, and the processor further executes inputting the communication channel output in which the original message obtained by polar conversion on the encoding side and additional information are used as communication channel inputs, and decoding the original message by using the information of the frozen bit and the communication channel output.

    5. The decoding device according to claim 3, wherein, in a case where the decoding device is applied to decoding for a non-systematic communication channel code, information of a frozen bit is shared between an encoding side and a decoding side, and the processor further executes inputting the communication channel output in which information of a non-frozen bit and information obtained by polar conversion from the information of the frozen bit on the encoding side are used as communication channel inputs, and decoding the original message by storing a determination result of 0 or 1 in a memory area corresponding to a position of a non-frozen bit based on the information of the frozen bit and the communication channel output.

    6. The decoding device according to claim 1, wherein the processor further executes list decoding for selecting a sequence of a most probable path from among a number (a predetermined list size) of survival paths.

    7. A decoding method executed by a decoding device including a memory and a processor, the method comprising: inputting a code word encoded by a polar code from an original message; decoding the original message from the code word based on a conditional probability expressed by a symmetric parameterization and having observation information as a condition; and outputting the decoded original message.

    8. A non-transitory computer-readable recording medium having computer-readable instructions stored thereon, which when executed, cause a computer to function as the decoding device according to claim 1.

    Description

    BRIEF DESCRIPTION OF DRAWINGS

    [0010] FIG. 1 is a configuration diagram illustrating a communication system according to an embodiment of the present invention.

    [0011] FIG. 2 is a configuration diagram illustrating the communication system according to the embodiment of the present invention.

    [0012] FIG. 3 is a configuration diagram of a decoding device.

    [0013] FIG. 4 is a hardware configuration diagram of the decoding device.

    [0014] FIG. 5 is a diagram illustrating Algorithm 1.

    [0015] FIG. 6 is a diagram illustrating Algorithm 2.

    [0016] FIG. 7 is a diagram illustrating Algorithm 3.

    [0017] FIG. 8 is a diagram illustrating Algorithm 4.

    [0018] FIG. 9 is a diagram illustrating Algorithm 5.

    [0019] FIG. 10 is a diagram illustrating Algorithm 6.

    [0020] FIG. 11 is a diagram illustrating Algorithm 7.

    [0021] FIG. 12 is a diagram illustrating Algorithm 8.

    [0022] FIG. 13 is a diagram illustrating Algorithm 9.

    [0023] FIG. 14 is a diagram illustrating Algorithm 2.

    [0024] FIG. 15 is a diagram illustrating Algorithm 2.

    [0025] FIG. 16 is a diagram illustrating Algorithm 7-1.

    [0026] FIG. 17 is a diagram illustrating Algorithm 7-2.

    [0027] FIG. 18 is a diagram illustrating Algorithm 7-3.

    [0028] FIG. 19 is a diagram illustrating Algorithm 7-4.

    [0029] FIG. 20 is a diagram illustrating Algorithm 1.

    [0030] FIG. 21 is a diagram illustrating Algorithm 6.

    [0031] FIG. 22 is a diagram illustrating Algorithm 7.

    [0032] FIG. 23 is a diagram illustrating Algorithm 4.

    [0033] FIG. 24 is a diagram illustrating Algorithm 7.

    [0034] FIG. 25 is a diagram illustrating Algorithm 7.

    [0035] FIG. 26 is a diagram illustrating a table used for application to an erasure channel.

    [0036] FIG. 27 is a diagram illustrating a table used for application to the erasure channel.

    [0037] FIG. 28 is a diagram illustrating a table used for application to the erasure channel.

    [0038] FIG. 29 is a diagram illustrating a table used for application to the erasure channel.

    [0039] FIG. 30 is a diagram illustrating a table used for application to the erasure channel.

    [0040] FIG. 31 is a diagram illustrating a table used for application to the erasure channel.

    [0041] FIG. 32 is a diagram illustrating a table used for application to the erasure channel.

    [0042] FIG. 33 is a diagram illustrating Algorithm 1.

    DESCRIPTION OF EMBODIMENTS

    [0043] Hereinafter, embodiments of the present invention (present embodiments) will be described with reference to the drawings. The embodiments described below are merely examples, and embodiments to which the present invention is applied are not limited to the following embodiments.

    [0044] Note that, in the body text of the present description, a hat {circumflex over ()} is added in front of a character for convenience of description; {circumflex over ()}M is an example. Furthermore, in the body text of the present specification, in a case where a superscript or a subscript C is attached to a subscript B of a certain character A, the subscript B of the character A is not typeset as a subscript but is written using an underscore, as in A_B.sub.C.

    System Configuration Example

    [0045] FIG. 1 illustrates a configuration example of a communication system in the present embodiment. FIG. 1 illustrates an example in which the present invention is applied to a communication channel code (error correction code). As illustrated in FIG. 1, the communication system includes an encoding device 100 and a decoding device 200, which are connected by a communication channel. Note that the encoding device may be referred to as an encoder or the like. The decoding device may be referred to as a decoder or the like.

    [0046] In the present embodiment, the encoding device 100 encodes an input message M with a polar code, and outputs a code word X as a communication channel input. The decoding device 200 receives Y, which is a communication channel output in which noise is mixed into the code word, performs polar code decoding processing, and outputs a reproduction message {circumflex over ()}M. As will be described later, the decoding device 200 performs decoding by a successive cancellation decoding method or a successive cancellation list decoding method. The successive cancellation decoding method is a method of sequentially decoding bits of a message one by one. The successive cancellation list decoding method is a method in which, when the bits are decoded, L sequences (L is a list size) having high likelihood are set as survival paths, and only the maximum likelihood sequence is finally output as a decoding result. The processing (algorithm) of the decoding device 200 will be described in detail later.

    [0047] The decoding algorithm in the present embodiment can uniformly handle decoding of a communication channel code and decoding of an information source code. FIG. 2 is a system configuration diagram in a case where the present invention is applied to an information source code (information compression code).

    [0048] In the configuration illustrated in FIG. 2, it is assumed that the encoding device 100 and the decoding device 200 are connected by a noise-free communication channel or exchange information via a noise-free storage medium. X is input to the encoding device 100 as a message, and a code word encoded by a polar code is output from the encoding device 100. The code word and auxiliary information Y correlated with X are input to the decoding device 200. The decoding device 200 performs polar code decoding processing and outputs a reproduction message {circumflex over ()}M. The auxiliary information is, for example, information of a frame preceding a frame to be decoded in a video. Note that the auxiliary information Y may be input to the decoding device 200 from the outside, may be stored in the decoding device 200 in advance, or a past decoding result may be used as the auxiliary information.

    [0049] FIG. 3 is a functional configuration diagram of the decoding device 200. As illustrated in FIG. 3, the decoding device 200 includes an input unit 210, a computation unit 220, an output unit 230, and a data storage unit 240. The input unit 210 inputs information from a communication channel, a recording medium, or the like. The computation unit 220 executes an algorithm related to the decoding processing. The output unit 230 outputs a computation result (for example, a reproduction message) obtained by the computation unit 220. The data storage unit 240 corresponds to a memory referenced by an algorithm of the decoding processing executed by the computation unit 220. In addition, the data storage unit 240 stores known information and the like used for processing.

    [0050] In the present embodiment, an algorithm may also be called a processing procedure, a program, software, or the like.

    Hardware Configuration Example

    [0051] The decoding device 200 according to the present embodiment can be implemented, for example, by causing a computer to execute a program in which processing contents described in the present embodiment are described. Note that the computer may be a physical machine or a virtual machine on a cloud. In a case where a virtual machine is used, hardware described herein is virtual hardware.

    [0052] The above program can be stored and distributed by having it recorded on a computer-readable recording medium (portable memory or the like). Furthermore, the above program can also be provided through a network such as the Internet or e-mail.

    [0053] FIG. 4 is a diagram illustrating a hardware configuration example of the computer. The computer in FIG. 4 includes a drive device 1000, an auxiliary storage device 1002, a memory device 1003, a CPU 1004, an interface device 1005, a display device 1006, an input device 1007, an output device 1008, and the like which are connected to each other by a bus B.

    [0054] The program for implementing the processing in the computer is provided by, for example, a recording medium 1001 such as a CD-ROM or a memory card. When the recording medium 1001 storing the program is set in the drive device 1000, the program is installed from the recording medium 1001 to the auxiliary storage device 1002 via the drive device 1000. However, the program is not necessarily installed from the recording medium 1001, and may be downloaded from another computer via a network. The auxiliary storage device 1002 stores the installed program and also stores necessary files, data, and the like.

    [0055] In a case where an instruction to start the program is made, the memory device 1003 reads and stores the program from the auxiliary storage device 1002. The CPU 1004 implements a function related to the decoding device 200 in accordance with a program stored in the memory device 1003. The interface device 1005 is used as an interface for connecting to the network. The display device 1006 displays a graphical user interface (GUI) or the like by the program. The input device 1007 includes a keyboard and mouse, buttons, a touch panel, or the like, and is used to input various operation instructions. The output device 1008 outputs a computation result. Note that the display device 1006 and the input device 1007 may not be provided.

    [0056] Hereinafter, algorithms (corresponding to the above program) executed by the decoding device 200 will be described in detail. First, definitions and basic related techniques will be described, and then each algorithm will be described. In the following description, Reference Literature is indicated as [1], [2], and the like. The names of the Reference Literature are described at the end of the specification. Note that the polar information source code/communication channel code is introduced in Arikan [1], [2], and [3], and Arikan proposes a successive cancellation decoding method that can be implemented with a calculation amount of O(N log.sub.2 N) for a block length N.

    [0057] In the present embodiment, an algorithm of the successive cancellation decoding method and an algorithm for the successive cancellation list decoding method based on what is introduced in Tal and Vardy [9] will be described. As described above, the technique according to the present embodiment can be applied to both the polar information source code and the polar communication channel code. Furthermore, the decoding algorithm in the present embodiment can reduce the memory amount and the calculation amount as compared with the conventional technique such as [9].

    Regarding Definitions and Notations

    [0058] In the description of the algorithm of the present embodiment, the following definitions and notations will be used.

    [0059] For a certain n, N=2.sup.n indicates the block length. n is given as a constant, and all the algorithms of the present embodiment can refer to this n. In the present embodiment, the bit indexing introduced in [1] is used: an N-dimensional vector is indexed by n-bit strings and expressed as X.sup.N=(X_0.sup.n, . . . , X_1.sup.n). Here, 0.sup.n and 1.sup.n represent the n-bit all-0 sequence and the n-bit all-1 sequence, respectively. To represent integer intervals of bit strings, the following notations are used.


    $$[0^n : b^n] = \{0^n, \ldots, b^n\}$$

    $$[0^n : b^n) = [0^n : b^n] \setminus \{b^n\}$$

    $$[b^n : 1^n] = [0^n : 1^n] \setminus [0^n : b^n)$$

    $$(b^n : 1^n] = [0^n : 1^n] \setminus [0^n : b^n]$$ [Math. 1]

    [0060] For a certain subset I of $[0^n:1^n]$, a subsequence of $X^N$ is defined as follows.

    $$X_I = \{X_{b^n} : b^n \in I\}$$ [Math. 2]

    [0061] It is assumed that $c^l b^k \in \{0,1\}^{l+k}$ is the concatenation of $b^k \in \{0,1\}^k$ and $c^l \in \{0,1\}^l$. For given $b^k \in \{0,1\}^k$ and $c^l \in \{0,1\}^l$, the subsets $c^l[0:b^k]$ and $c^l[0:b^k)$ of $\{0,1\}^{k+l}$ can be defined as [0062] $c^l[0:b^k] = \{c^l d^k : d^k \in [0^k:b^k]\}$ and [0063] $c^l[0:b^k) = \{c^l d^k : d^k \in [0^k:b^k)\}$.

    Bipolar Binary Conversion of b ∈ {0, 1}

    [0064] The bipolar binary conversion $\tilde{b}$ [Math. 3] of b is defined as

    $$\tilde{b} = \begin{cases} -1 & \text{if } b = 1 \\ +1 & \text{if } b = 0 \end{cases}$$ [Math. 4]

    Binary Polar Code

    [0065] Here, the binary polar information source/communication channel code introduced in the existing techniques [1], [2], [3], and [7] will be described. It is assumed that {0,1} is a field of characteristic 2. For a given positive integer n, the polar conversion G is defined as follows.

    $$G = \begin{pmatrix} 1 & 0 \\ 1 & 1 \end{pmatrix}^{\otimes n} \Pi_{\mathrm{BR}}$$ [Math. 5]

    [0066] Here, the symbol $\otimes n$ on the right shoulder of the matrix represents the n-th Kronecker power, and $\Pi_{\mathrm{BR}}$ is the bit-reversal permutation matrix [1]. A vector $u \in \{0,1\}^N$ is then defined as u=xG for a given vector $x \in \{0,1\}^N$. It is assumed that $\{I_0, I_1\}$ is a partition of $[0^n:1^n]$, that is, $I_0 \cap I_1 = \emptyset$ and $I_0 \cup I_1 = [0^n:1^n]$ are satisfied. $\{I_0, I_1\}$ will be defined later.

    [0067] It is assumed that X=(X_0.sup.n, . . . , X_1.sup.n) and Y=(Y_0.sup.n, . . . , Y_1.sup.n) are random variables, and U=(U_0.sup.n, . . . , U_1.sup.n) is a random variable defined as U=XG.
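    The polar conversion above can be made concrete in a few lines. The following is a minimal sketch (not the patent's implementation; the function names are hypothetical), building $G$ as the n-th Kronecker power of the base matrix followed by a bit-reversal column permutation, and computing u = xG over the field of characteristic 2:

```python
import numpy as np

def polar_conversion_matrix(n: int) -> np.ndarray:
    """G = (1 0; 1 1)^{(x)n} followed by the bit-reversal permutation,
    over the field {0,1} of characteristic 2."""
    f = np.array([[1, 0], [1, 1]], dtype=np.uint8)
    G = np.array([[1]], dtype=np.uint8)
    for _ in range(n):
        G = np.kron(G, f) % 2  # n-th Kronecker power
    N = 1 << n
    # column permutation that reverses the n-bit index of each position
    perm = [int(format(i, f"0{n}b")[::-1], 2) for i in range(N)]
    return G[:, perm]

def polar_convert(x, n: int):
    """u = xG over the field of characteristic 2."""
    return (np.asarray(x, dtype=np.uint8) @ polar_conversion_matrix(n)) % 2
```

    Since the base matrix is its own inverse over {0,1} and the Kronecker power commutes with the bit-reversal permutation, this G satisfies G = G⁻¹, so applying `polar_convert` twice returns the original vector.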

    Joint Distribution

    [0068] A joint distribution $P_{U_{I_1} U_{I_0} Y}$ of $(U_{I_1}, U_{I_0}, Y)$ [Math. 6] is defined, using the joint distribution $P_{XY}$ of (X, Y), as

    $$P_{U_{I_1} U_{I_0} Y}(u_{I_1}, u_{I_0}, y) = P_{XY}((u_{I_1}, u_{I_0})G^{-1}, y).$$ [Math. 7]

    [0069] In the above formula, the elements of $(u_{I_1}, u_{I_0})$ are sorted in the order of indices before $G^{-1}$ is applied. $u_{I_1}$ and $u_{I_0}$ are referred to as a frozen bit and a non-frozen bit, respectively.

    [0070] It is assumed that

    $$P_{U_{b^n} \mid U_{[0^n:b^n)} Y}$$ [Math. 8]

    is the conditional probability distribution defined as follows.

    $$P_{U_{b^n} \mid U_{[0^n:b^n)} Y}(u_{b^n} \mid u_{[0^n:b^n)}, y) = \frac{\sum_{u_{(b^n:1^n]}} P_{U_{I_1} U_{I_0} Y}(u_{I_1}, u_{I_0}, y)}{\sum_{u_{[b^n:1^n]}} P_{U_{I_1} U_{I_0} Y}(u_{I_1}, u_{I_0}, y)}$$ [Math. 9]

    [0071] For the vector $u_{I_1}$ and the auxiliary information $y \in \mathcal{Y}^N$, a successive cancellation (SC) decoder f that outputs $\hat{u} = f(u_{I_1}, y)$ is recursively defined as

    $$\hat{u}_{b^n} = \begin{cases} f_{b^n}(\hat{u}_{[0^n:b^n)}, y) & \text{if } b^n \in I_0 \\ u_{b^n} & \text{if } b^n \in I_1 \end{cases}$$ [Math. 10]

    [0072] The set of functions $\{f_{b^n}\}_{b^n \in I_0}$ used in the above formula is defined as

    $$f_{b^n}(u_{[0^n:b^n)}, y) = \arg\max_{u \in \{0,1\}} P_{U_{b^n} \mid U_{[0^n:b^n)} Y}(u \mid u_{[0^n:b^n)}, y)$$ [Math. 11]

    which is the maximum posterior probability decision rule after the observation $(u_{[0^n:b^n)}, y)$.

    [0073] In the case of a polar information source code (with auxiliary information), $x \in \{0,1\}^N$ is the information source output, $u_{I_1}$ is the code word, and $y \in \mathcal{Y}^N$ is the auxiliary information source output. The decoder reproduces the information source output $f(u_{I_1}, y)G^{-1}$ from the code word $u_{I_1}$ and $y$. The (block) decoding error probability is given as $\mathrm{Prob}(f(U_{I_1}, Y)G^{-1} \neq X)$.

    [0074] In the systematic polar communication channel code [3], $\mathcal{I}_0$ and $\mathcal{I}_1$ for a given $(I_0, I_1)$ are defined as follows.

    $$\mathcal{I}_0 = \{b_0 b_1 \cdots b_{n-1} : b_{n-1} \cdots b_1 b_0 \in I_0\} \quad (1)$$

    $$\mathcal{I}_1 = \{b_0 b_1 \cdots b_{n-1} : b_{n-1} \cdots b_1 b_0 \in I_1\} \quad (2)$$ [Math. 12]

    [0075] It is assumed that the encoder and the decoder share a vector u_I.sub.1. The encoder calculates (x_I.sub.1, u_I.sub.0) satisfying (x_I.sub.1, x_I.sub.0)=(u_I.sub.1, u_I.sub.0)G.sup.−1 from a message x_I.sub.0 and the shared vector u_I.sub.1. Here, the elements of (x_I.sub.1, x_I.sub.0) and (u_I.sub.1, u_I.sub.0) are sorted in the order of indices before G.sup.−1 is applied.

    [0076] The encoder generates a communication channel input x=(x_I.sub.0, x_I.sub.1). Here, the elements of (x_I.sub.0, x_I.sub.1) are sorted in the order of indices. The decoder reproduces the communication channel input {circumflex over ()}x=f(u_I.sub.1, y)G.sup.−1 from the communication channel output y ∈ 𝒴.sup.N and the shared vector u_I.sub.1. {circumflex over ()}x_I.sub.0 is the reproduction of the message. The (block) decoding error probability is also given as Prob(f(U_I.sub.1, Y)G.sup.−1 ≠ X).

    [0077] In a non-systematic polar communication channel code, u_I.sub.0 is a message, and a vector u_I.sub.1 is shared by an encoder and a decoder. The encoder generates a communication channel input x ∈ {0,1}.sup.N with x=(u_I.sub.1, u_I.sub.0)G.sup.−1. Here, the elements of (u_I.sub.1, u_I.sub.0) are rearranged in the order of indices before G.sup.−1 is applied. The decoder generates a pair of vectors (u_I.sub.1, {circumflex over ()}u_I.sub.0)=f(u_I.sub.1, y) from the communication channel output y ∈ 𝒴.sup.N and the shared vector u_I.sub.1. Then, {circumflex over ()}u_I.sub.0 is the reproduction of the message. The (block) decoding error probability is given as Prob(f(U_I.sub.1, Y) ≠ (U_I.sub.0, U_I.sub.1)).

    [0078] Lemmas will be described below.

    [0079] Lemma 1 ([2, Theorem 2], [7, Theorem 4.10]): $I_0$ is defined as follows.

    $$I_0 = \{b^n \in [0^n:1^n] : Z(U_{b^n} \mid U_{[0^n:b^n)}, Y_{[0^n:1^n]}) \leq 2^{-2^{\beta n}}\}$$ [Math. 13]

    where Z is a function [2] that gives the Bhattacharyya parameter. For any $\beta \in [0, 1/2)$,

    $$\lim_{n \to \infty} \frac{|I_0|}{2^n} = 1 - H(X \mid Y), \qquad \lim_{n \to \infty} \frac{|I_1|}{2^n} = H(X \mid Y)$$ [Math. 14]

    are obtained.

    [0080] There is the following lemma, which can be proven in the same manner as the proof in [1].

    [0081] Lemma 2 ([6, Lemma 2], [8, Formula (1)], [7, Proposition 2.7]):

    $$\mathrm{Prob}(f(U_{I_1}, Y)G^{-1} \neq X) = \mathrm{Prob}(f(U_{I_1}, Y) \neq (U_{I_0}, U_{I_1})) \leq \sum_{b^n \in I_0} \mathrm{Prob}(f_{b^n}(U_{[0^n:b^n)}, Y) \neq U_{b^n}) \leq \sum_{b^n \in I_0} Z(U_{b^n} \mid U_{[0^n:b^n)}, Y_{[0^n:1^n]})$$ [Math. 15]

    [0082] From the above lemma, it follows that the rate of the polar code reaches the fundamental limit and that the decoding error probability approaches zero as n → ∞. For example, I.sub.0 can be obtained by using a technique introduced in [8]. In the following, it is assumed that such an I.sub.0 is given.

    Symmetric Parameterization

    [0083] Here, a polar conversion based on the symmetric parameterization used in the expression of the conditional probability in the present embodiment will be described. For a given probability distribution $P_U$ of a binary random variable U, $\theta$ is defined as follows.

    $$\theta = P_U(0) - P_U(1)$$

    Then,

    $$P_U(u) = \frac{1 + \tilde{u}\theta}{2}$$ [Math. 16]

    is obtained, where $\tilde{u}$ [Math. 17] is the bipolar binary conversion of u.

    [0084] In the basic polar conversion, a pair of binary random variables $(U_0, U_1)$ is converted as follows.

    $$U_0' = U_0 \oplus U_1$$

    $$U_1' = U_1$$ [Math. 18]

    where $\oplus$ [Math. 19] indicates addition in a field of characteristic 2. It is assumed that the random variables $U_0, U_1 \in \{0,1\}$ are independent. For each $i \in \{0,1\}$, $\theta_i$ is defined as follows.

    $$\theta_i = P_{U_i}(0) - P_{U_i}(1)$$ [Math. 20]

    [0085] Here, the following equalities are obtained.

    $$P_{U_0'}(0) = P_{U_0}(0)P_{U_1}(0) + P_{U_0}(1)P_{U_1}(1) = \frac{1+\theta_0}{2}\cdot\frac{1+\theta_1}{2} + \frac{1-\theta_0}{2}\cdot\frac{1-\theta_1}{2} = \frac{1+\theta_0\theta_1}{2} \quad (3)$$ [Math. 21]

    [0086] The first equality is obtained from the definition of $U_0'$ and the fact that $U_0$ and $U_1$ are independent of each other.

    [0087] Furthermore, the following equalities are obtained.

    $$P_{U_0'}(1) = 1 - P_{U_0'}(0) = \frac{1-\theta_0\theta_1}{2} \quad (4)$$ [Math. 22]

    [0088] From Formula (3) and Formula (4),

    $$P_{U_0'}(u_0') = \frac{1 + \tilde{u}_0'\,\theta_0\theta_1}{2} \quad (5)$$ [Math. 23]

    is obtained.


    Here, $\tilde{u}_0'$ [Math. 24] is the bipolar binary conversion of $u_0'$. $\theta_0'$ is defined as

    $$\theta_0' = P_{U_0'}(0) - P_{U_0'}(1) \quad (6)$$ [Math. 25]

    [0089] From Formula (3) to Formula (6),

    $$\theta_0' = \theta_1\theta_0 \quad (7)$$

    is obtained.

    [0090] Since the symmetric parameterization is a binary version of the Fourier transform of a probability distribution [5, Definitions 24 and 25], the right side of Formula (7) corresponds to the Fourier transform of the convolution.

    [0091] Here,

    $$P_{U_0'U_1'}(u_0', 0) = P_{U_0}(u_0')P_{U_1}(0) = \frac{1+\tilde{u}_0'\theta_0}{2}\cdot\frac{1+\theta_1}{2} = \frac{1+\tilde{u}_0'\theta_0+\theta_1+\tilde{u}_0'\theta_0\theta_1}{4} \quad (8)$$ [Math. 26]

    is obtained. The first equality is derived from the definition of $U_0'$ and $U_1'$ and the fact that $U_0$ and $U_1$ are independent of each other. Further,

    $$P_{U_1' \mid U_0'}(0 \mid u_0') = \frac{P_{U_1'U_0'}(0, u_0')}{P_{U_0'}(u_0')} = \frac{1 + [\theta_1+\tilde{u}_0'\theta_0]/[1+\tilde{u}_0'\theta_0\theta_1]}{2} \quad (9)$$

    $$P_{U_1' \mid U_0'}(1 \mid u_0') = 1 - P_{U_1' \mid U_0'}(0 \mid u_0') = \frac{1 - [\theta_1+\tilde{u}_0'\theta_0]/[1+\tilde{u}_0'\theta_0\theta_1]}{2} \quad (10)$$ [Math. 27]

    is obtained.

    [0092] $\theta_1'$ is defined as follows.

    $$\theta_1' = P_{U_1' \mid U_0'}(0 \mid u_0') - P_{U_1' \mid U_0'}(1 \mid u_0') \quad (11)$$

    [0093] From Formula (9) to Formula (11),

    $$\theta_1' = \frac{\theta_1 + \tilde{u}_0'\theta_0}{1 + \tilde{u}_0'\theta_1\theta_0} = \frac{\theta_1 + \tilde{u}_0'\theta_0}{1 + \tilde{u}_0'\theta_0'} \quad (12)$$ [Math. 29]

    is obtained. Here, the second equality is obtained from Formula (7).
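    Formulas (7) and (12) can be checked numerically. The following is a minimal sketch (the helper names are hypothetical; θ is written `theta`) that builds the joint distribution of (U0', U1') by brute force from independent U0 and U1 and verifies both parameter updates:

```python
from itertools import product

def bipolar(b: int) -> int:
    """Bipolar binary conversion: 0 -> +1, 1 -> -1."""
    return 1 - 2 * b

def joint_after_conversion(theta0: float, theta1: float) -> dict:
    """Joint distribution of (U0', U1') = (U0 + U1, U1) for independent
    binary U0, U1 with parameters theta_i = P(0) - P(1)."""
    p = {0: (1 + theta0) / 2, 1: (1 - theta0) / 2}  # P_{U0}
    q = {0: (1 + theta1) / 2, 1: (1 - theta1) / 2}  # P_{U1}
    joint = {(a, b): 0.0 for a, b in product((0, 1), repeat=2)}
    for u0, u1 in product((0, 1), repeat=2):
        joint[(u0 ^ u1, u1)] += p[u0] * q[u1]
    return joint

theta0, theta1 = 0.3, -0.6
joint = joint_after_conversion(theta0, theta1)

# Formula (7): the parameter of U0' is the product theta1 * theta0.
p0 = joint[(0, 0)] + joint[(0, 1)]  # P_{U0'}(0)
assert abs((p0 - (1 - p0)) - theta1 * theta0) < 1e-12

# Formula (12): the conditional parameter of U1' given U0' = u0'.
for u0p in (0, 1):
    pu = joint[(u0p, 0)] + joint[(u0p, 1)]
    cond0 = joint[(u0p, 0)] / pu  # P_{U1'|U0'}(0|u0')
    lhs = cond0 - (1 - cond0)
    rhs = (theta1 + bipolar(u0p) * theta0) / (1 + bipolar(u0p) * theta1 * theta0)
    assert abs(lhs - rhs) < 1e-12
```

    The check-node update (7) and the variable-node update (12) are exactly the two operations applied per level in the decoding algorithms described below.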

    Successive Cancellation Decoding

    [0094] Next, an algorithm of successive cancellation decoding executed by the decoding device 200 according to the present embodiment will be described.

    [0095] Note that the notations in the algorithms illustrated in the drawings are standard by themselves. For example, "for A do B" means executing B on each element defined by A. "if C then D else E" means that D is executed in a case where the condition C is satisfied (in a case of true), and E is executed in a case where the condition is not satisfied.

    [0096] The decoding device 200 performs decoding processing of the polar code by executing Algorithm 1 of successive cancellation decoding illustrated in FIG. 5. This algorithm is based on a technique introduced in [9]. Processing contents of update(Θ, U, n, b.sup.n) in line 3 of Algorithm 1 are illustrated in FIG. 6 as Algorithm 2. In addition, processing contents of updateU(U, n, b.sup.n−1) in line 9 of Algorithm 1 are illustrated in FIG. 7 as Algorithm 3.

    [0097] Here, each element of Θ is a conditional probability of 0 and 1 expressed by a single parameter obtained by applying the symmetric parameterization, and corresponds to a conditional probability parameter expressing how far the distribution of the bit to be decoded deviates from the uniform distribution. This is updated by update(Θ, U, n, b.sup.n).

    [0098] As illustrated in line 7 of Algorithm 1, whether the value of the bit is 0 or 1 is determined by comparing Θ[n][c.sup.0] with 0. Note that "0 or 1" in line 7 of Algorithm 1 indicates that the value of the bit may be set to either 0 or 1.

    [0099] The computation unit 220 that executes Algorithms 1 to 3 can access (refer to) the number n of times of polar conversion, the frozen bit u.sup.(n)_I.sub.1, and the following memory space. The memory space corresponds to the data storage unit 240 of the decoding device 200. The successive cancellation decoding processing executed by Algorithms 1 to 3 will be described below.

    $$\Theta = \{\Theta[k][c^{n-k}] : k \in \{0,\ldots,n\},\; c^{n-k} \in [0^{n-k}:1^{n-k}]\}$$

    $$U = \{U[k][c^{n-k}][b] : k \in \{0,\ldots,n\},\; c^{n-k} \in [0^{n-k}:1^{n-k}],\; b \in \{0,1\}\}$$ [Math. 30]

    [0100] In the above formula, $\Theta[k][c^{n-k}]$ is a real variable, $U[k][c^{n-k}][b]$ is a binary variable, and $c^0$ is the null character string. Note that $\Theta$ has $\sum_{k=0}^{n} 2^{n-k} = 2^{n+1}-1$ variables, and U has $2\sum_{k=0}^{n} 2^{n-k} = 2^{n+2}-2$ variables.
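    The variable counts stated above can be sketched as a quick check (the function name is hypothetical):

```python
def memory_sizes(n: int) -> tuple:
    """Number of variables in the two memory spaces: one real variable per
    (k, c^{n-k}) pair, and two binary variables per such pair."""
    theta_vars = sum(2 ** (n - k) for k in range(n + 1))  # = 2^(n+1) - 1
    u_vars = 2 * theta_vars                               # = 2^(n+2) - 2
    return theta_vars, u_vars
```

    For example, n = 3 (block length N = 8) gives 15 real variables and 30 binary variables, linear in the block length rather than quadratic.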

    [0101] In the following,

    $$\{U^{(0)}_{c^n}\}_{c^n \in [0^n:1^n]}$$ [Math. 31]

    is a memoryless information source. That is,

    $$P_{U^{(0)}_{[0^n:1^n]}}$$ [Math. 32]

    is defined as

    $$P_{U^{(0)}_{[0^n:1^n]}}(u^{(0)}_{[0^n:1^n]}) = \prod_{c^n \in [0^n:1^n]} P_{U^{(0)}_{c^n}}(u^{(0)}_{c^n}),$$ [Math. 33]

    where

    $$\{P_{U^{(0)}_{c^n}}\}_{c^n \in [0^n:1^n]}$$ [Math. 34]

    is given depending on the context. In addition,

    $$\{U^{(0)}_{c^n}\}_{c^n \in [0^n:1^n]}$$ [Math. 35]

    is allowed to be non-stationary.

    [0102] $U^{(n)}_{b^n}$ is recursively defined (calculated) as

    $$U^{(k)}_{c^{n-k}b^{k-1}0} = U^{(k-1)}_{c^{n-k}0b^{k-1}} \oplus U^{(k-1)}_{c^{n-k}1b^{k-1}} \quad (13)$$

    $$U^{(k)}_{c^{n-k}b^{k-1}1} = U^{(k-1)}_{c^{n-k}1b^{k-1}} \quad (14)$$ [Math. 36]

    for a certain $b^n \in \{0,1\}^n$ and $c^{n-k} \in \{0,1\}^{n-k}$.

    [0103] At this time,

    $$U^{(n)}_{[0^n:1^n]} = U^{(0)}_{[0^n:1^n]} G$$ [Math. 37]

    is obtained. This is the polar conversion of $U^{(0)}_{[0^n:1^n]}$. Starting from the following Formula (16), update(Θ, U, n, b.sup.n) in line 3 of Algorithm 1 recursively calculates

    $$\theta^{(n)}_{b^n} = P_{U^{(n)}_{b^n} \mid U^{(n)}_{[0^n:b^n)}}(0 \mid u^{(n)}_{[0^n:b^n)}) - P_{U^{(n)}_{b^n} \mid U^{(n)}_{[0^n:b^n)}}(1 \mid u^{(n)}_{[0^n:b^n)}) \quad (15)$$ [Math. 38]

    $$\theta^{(0)}_{b^n} = P_{U^{(0)}_{b^n}}(0) - P_{U^{(0)}_{b^n}}(1) \quad (16)$$ [Math. 39]

    [0104] Processing contents of update(Θ, U, n, b.sup.n) in line 3 of Algorithm 1 are shown in Algorithm 2 of FIG. 6. [0105] In Algorithm 2, parameters defined as

    $$\theta^{(k)}_{c^{n-k}b^k} = P_{U^{(k)}_{c^{n-k}b^k} \mid U^{(k)}_{c^{n-k}[0^k:b^k)}}(0 \mid u^{(k)}_{c^{n-k}[0^k:b^k)}) - P_{U^{(k)}_{c^{n-k}b^k} \mid U^{(k)}_{c^{n-k}[0^k:b^k)}}(1 \mid u^{(k)}_{c^{n-k}[0^k:b^k)}) \quad (17)$$ [Math. 40]

    are calculated for each $c^{n-k}$ for a given $b^n \in \{0,1\}^n$. By using Formulas (7), (12), (13), (14), and (17), the relations

    $$\theta^{(k)}_{c^{n-k}b^{k-1}0} = \theta^{(k-1)}_{c^{n-k}1b^{k-1}}\, \theta^{(k-1)}_{c^{n-k}0b^{k-1}} \quad (18)$$

    $$\theta^{(k)}_{c^{n-k}b^{k-1}1} = \frac{\theta^{(k-1)}_{c^{n-k}1b^{k-1}} + \tilde{u}\,\theta^{(k-1)}_{c^{n-k}0b^{k-1}}}{1 + \tilde{u}\,\theta^{(k)}_{c^{n-k}b^{k-1}0}} \quad (19)$$ [Math. 41]

    can be obtained. Here, $\tilde{u}$ [Math. 42] is the bipolar binary conversion of $u = u^{(k)}_{c^{n-k}b^{k-1}0}$. The purpose of updateU(U, n, b.sup.n−1) in line 9 of Algorithm 1 is to calculate $u^{(k)}_{c^{n-k}b^{k-1}0}$ from $u^{(n)}_{[0^n:b^{n-1}0]}$ using the relations


    $$u^{(k-1)}_{c^{n-k}0b^{k-1}} = u^{(k)}_{c^{n-k}b^{k-1}0} \oplus u^{(k)}_{c^{n-k}b^{k-1}1} \quad (20)$$

    $$u^{(k-1)}_{c^{n-k}1b^{k-1}} = u^{(k)}_{c^{n-k}b^{k-1}1} \quad (21)$$ [Math. 43]

    [0106] Formulas (20) and (21) are obtained from Formulas (13) and (14). Here, it is assumed that $u^{(n)}_{[0^n:b^{n-1}0]}$ has been correctly decoded.

    [0107] Formulas (18) and (19) correspond to lines 5 and 7 of Algorithm 2, respectively, and formulas (20) and (21) correspond to lines 2 and 3 of Algorithm 3, respectively.

    [0108] After lines 3 and 9 of Algorithm 1 are applied, respectively, the relations

    $$\Theta[k][c^{n-k}] = \theta^{(k)}_{c^{n-k}b^k} \quad (22)$$

    $$U[k][c^{n-k}][b_{k-1}] = u^{(k)}_{c^{n-k}b^k} \quad (23)$$ [Math. 44]

    are obtained.

    [0109] Further, line 7 of Algorithm 1 corresponds to a maximum posterior probability decision defined as follows.

    $$\hat{u}_{b^n} = \arg\max_{u \in \{0,1\}} P_{U^{(n)}_{b^n} \mid U^{(n)}_{[0^n:b^n)}}(u \mid \hat{u}_{[0^n:b^n)})$$ [Math. 45]
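    Algorithms 1 to 3 themselves are given only as figures. The following is a hypothetical sketch of the same successive cancellation recursion in recursive (rather than array-indexed) form, assuming the natural bit ordering and omitting the bit-reversal permutation; all function names are assumptions for illustration. The check-node step corresponds to Formula (18), the variable-node step to Formula (19), and the partial re-encoding plays the role of Formulas (20) and (21):

```python
def polar_encode(u):
    """x = uG over the field of characteristic 2
    (Kronecker-power form; bit reversal omitted in this sketch)."""
    if len(u) == 1:
        return list(u)
    half = len(u) // 2
    a = [u[i] ^ u[i + half] for i in range(half)]  # U0' = U0 + U1 per pair
    return polar_encode(a) + polar_encode(u[half:])

def sc_decode(theta, frozen, offset=0):
    """Successive cancellation decoding over the symmetric parameterization.
    theta[i] = P(X_i = 0 | y_i) - P(X_i = 1 | y_i); frozen maps a frozen
    position to its known bit; returns the decoded vector u with x = uG."""
    N = len(theta)
    if N == 1:
        if offset in frozen:
            return [frozen[offset]]
        # maximum posterior probability decision from the sign of theta
        return [0 if theta[0] >= 0 else 1]
    half = N // 2
    # check-node step, cf. Formula (18): product of the two parameters
    ta = [theta[i] * theta[i + half] for i in range(half)]
    ua = sc_decode(ta, frozen, offset)
    z = polar_encode(ua)  # re-encode decoded bits, cf. Formulas (20)/(21)
    # variable-node step, cf. Formula (19), conditioned on the decoded z
    tb = []
    for i in range(half):
        s = 1 - 2 * z[i]  # bipolar binary conversion
        tb.append((theta[i + half] + s * theta[i])
                  / (1 + s * theta[i] * theta[i + half]))
    ub = sc_decode(tb, frozen, offset + half)
    return ua + ub
```

    With noiseless observations (theta entries equal to ±1) this recursion inverts `polar_encode` exactly; with noisy observations, the frozen positions supply the shared vector and the remaining bits are decided by the sign of the recursively updated parameter.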

    Regarding Application of Algorithm 1 to Information Source Code

    [0110] In a case where Algorithm 1 is used in a decoder of a polar information source code in which the code word u.sup.(n)_I.sub.1 and the auxiliary information vector y_[0.sup.n:1.sup.n] are used as inputs,

    [00030] [ Math . 46 ] P U c n ( 0 ) ( x ) = P X c n .Math. Y c n ( x .Math. y c n ) for x { 0 , 1 } ( 24 )

    is defined,

    [Math. 47]

    x̂_{c^n} = U[0][c^n][b_{−1}] (25)

    is defined, and the reproduction

    {x̂_{c^n}}_{c^n ∈ [0^n : 1^n]} [Math. 48]

    is obtained. Here, b_{−1} is a null character string.

    Regarding Application of Algorithm 1 to Systematic Polar Communication Channel Code

    [0111] In a case where Algorithm 1 is used for a decoder of a systematic polar communication channel code in which the communication channel output vector y_{[0^n : 1^n]} and the shared vector u^(n)_{I_1} are used as inputs, [0112] with respect to a given communication channel distribution


    {P_{Y_{c^n} | X_{c^n}}}_{c^n ∈ [0^n : 1^n]}, [Math. 49]

    an input distribution

    {P_{X_{c^n}}}_{c^n ∈ [0^n : 1^n]} with alphabet {0, 1}, [Math. 50]

    and y_{c^n} ∈ Y,

    [00031] [Math. 51]

    P^(0)_{U_{c^n}}(x) = P_{Y_{c^n} | X_{c^n}}(y_{c^n} | x) P_{X_{c^n}}(x) / Σ_{x′ ∈ {0,1}} P_{Y_{c^n} | X_{c^n}}(y_{c^n} | x′) P_{X_{c^n}}(x′) (26)

    is defined. Then, the reproduction {x̂_{c^n}}_{c^n ∈ I_0} defined by Formula (25) is obtained. Here, I_0 is defined by Formula (1).

    Regarding Application of Algorithm 1 to Non-Systematic Polar Communication Channel Code

    [0113] In a case where Algorithm 1 is applied to a decoder of a non-systematic polar communication channel code, a series of binary variables {M[b^n]}_{b^n ∈ I_0} is prepared, and immediately after U[n][c^0][b_{n−1}] (line 7 of Algorithm 1) is updated,

    M[b^n] ← U[n][c^0][b_{n−1}]

    is inserted. Algorithm 1 with this modification is shown in FIG. 20.

    [0114] As a result, the reproduction û_{I_0} defined by û_{b^n} = M[b^n] can be obtained.

    [0115] Remark 1: In a case where Algorithm 1 is applied to a binary erasure channel, the value that can be taken by λ[k][c^{n−k}] is limited to {1, 0, −1}, and

    [00032] [Math. 52]

    λ[0][b^n] ← P^(0)_{U_{b^n}}(0) − P^(0)_{U_{b^n}}(1),

    which is the value set in line 1 of Algorithm 1, can be replaced by

    [00033] [Math. 53]

    λ[0][b^n] ← 1 if y_{b^n} = 0; 0 if y_{b^n} is the erasure symbol; −1 if y_{b^n} = 1

    for a given communication channel output y_{[0^n : 1^n]}.
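The erasure-channel case of Remark 1 admits a compact, self-contained sketch of successive cancellation decoding with ternary λ values. The recursive encode/decode convention, the function names, and the dictionary-of-frozen-bits interface below are all assumptions for illustration; the patent's Algorithm 1 instead works in place on the fixed memory spaces λ and U.

```python
def polar_encode(u):
    # One standard recursive convention for the polar transform:
    # x = (enc(uA) XOR enc(uB), enc(uB)), uA / uB the two halves.
    n = len(u)
    if n == 1:
        return list(u)
    h = n // 2
    xa = polar_encode([a ^ b for a, b in zip(u[:h], u[h:])])
    return xa + polar_encode(u[h:])

def bec_lambda(y):
    # Math 53: channel output -> lambda in {1, 0, -1}; None = erasure.
    return 0 if y is None else 1 - 2 * y

def sc_decode_bec(lams, frozen, base=0):
    # Successive cancellation over the BEC with ternary lambdas.
    # frozen maps each frozen position in I_1 to its known bit.
    n = len(lams)
    if n == 1:
        if base in frozen:
            return [frozen[base]]
        return [0 if lams[0] >= 0 else 1]   # MAP decision
    h = n // 2
    la, lb = lams[:h], lams[h:]
    # Formula (18): check-node stage (lambda product)
    ua = sc_decode_bec([x * y for x, y in zip(la, lb)], frozen, base)
    sa = polar_encode(ua)                   # partial re-encoding
    # Formula (19): bit-node stage in ternary arithmetic
    lg = []
    for x, y, s in zip(la, lb, sa):
        ut = 1 - 2 * s
        num, den = y + ut * x, 1 + ut * x * y
        lg.append(0 if den == 0 else num // den)
    return ua + sc_decode_bec(lg, frozen, base + h)
```

With consistent observations, num and den always have matching parity, so the integer division stays exact and every intermediate λ remains in {1, 0, −1}.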

    [0116] As described below, Algorithm 2 can be improved in this case (Algorithms 2′ and 2″). Furthermore, this technique can also be used with systematic communication channel codes, as introduced in [3]. There, for a given message vector x_{I_0}, y_{[0^n : 1^n]} is defined as

    [00034] [Math. 54]

    y_{b^n} = x_{b^n} if b^n ∈ I_0; the erasure symbol if b^n ∉ I_0.

    Successive Cancellation List Decoding

    [0117] Next, an algorithm of successive cancellation list decoding executed by the decoding device 200 according to the present embodiment will be described. This algorithm is based on a technique introduced in [9].

    [0118] The technique according to the present embodiment uses a fixed-address memory space instead of the stack memory space of [9]. Since the size of the memory space for calculating the conditional probability is approximately half of that used in [9], the calculation amount of the algorithm according to the present embodiment is also approximately half of that of [9].

    [0119] The decoding device 200 performs decoding processing of the polar code by executing Algorithm 4 of successive cancellation list decoding illustrated in FIG. 8. The processing contents of update(λ[], U[], n, b^n) in line 5 of Algorithm 4 are illustrated in FIG. 6 as Algorithm 2. The processing contents of extendPath(b^n, ·) in line 7 of Algorithm 4 are illustrated in FIG. 9 as Algorithm 5. The processing contents of splitPath(b^n, ·) in line 10 of Algorithm 4 are illustrated in FIG. 10 as Algorithm 6. The processing contents of prunePath(b^n, ·) in line 13 of Algorithm 4 are illustrated in FIG. 11 as Algorithm 7. The processing contents of copyPath(·, ·) in line 25 of Algorithm 7 are illustrated in FIG. 12 as Algorithm 8. The processing contents of magnifyP(·) in line 17 of Algorithm 4 are illustrated in FIG. 13 as Algorithm 9. The processing contents of updateU(U[], n, b^{n−1}) in line 19 of Algorithm 4 are illustrated in FIG. 7 as Algorithm 3.

    [0120] In the calculation by the procedure of Algorithms 2 to 9, the computation unit 220 of the decoding device 200 shown in FIG. 3 accesses the number of times n of the polar conversion, the list size L, the frozen bits u^(n)_{I_1}, and the memory spaces {λ[ℓ]}^{L−1}_{ℓ=0}, {U[ℓ]}^{L−1}_{ℓ=0}, {P[ℓ]}^{2L−1}_{ℓ=0}, and {Active[ℓ]}^{2L−1}_{ℓ=0}, and performs the processing. The memory space corresponds to the data storage unit 240.

    [0121] Note that λ[ℓ] and U[ℓ] are calculated in Algorithms 2 and 3. {P[ℓ]}^{2L−1}_{ℓ=0} is a series of real variables, and {Active[ℓ]}^{2L−1}_{ℓ=0} is a series of binary variables.

    [0122] After applying Algorithm 4, the calculation result is stored in {U[ℓ]}^{L−1}_{ℓ=0} and {P[ℓ]}^{2L−1}_{ℓ=0} so as to satisfy

    [00035] [Math. 55]

    P[ℓ]/2^N = ∏_{b^n ∈ [0^n : 1^n]} P^(n)_{U_{b^n} | U_{[0^n : b^n)}}(û^(n)_{b^n}(ℓ) | û^(n)_{[0^n : b^n)}(ℓ)) = P^(n)_{U_{[0^n : 1^n]}}(û^(n)_{[0^n : 1^n]}(ℓ)) (27)

    and

    [Math. 56]

    U[ℓ][0][c^n][b_{−1}] = û^(0)_{c^n}(ℓ), û^(n)_{[0^n : 1^n]}(ℓ) = û^(0)_{[0^n : 1^n]}(ℓ) G,

    where

    û^(n)_{[0^n : 1^n]}(ℓ) [Math. 57]

    is the ℓ-th survival path. In line 5 of Algorithm 7, which performs the processing of selecting the survival paths, the L paths û^(n)_{[0^n : b^n]} with the highest probability (likelihood)

    [00036] [Math. 58]

    P[ℓ]/2^{|[0^n : b^n]|} = ∏_{d^n ∈ [0^n : b^n]} P^(n)_{U_{d^n} | U_{[0^n : d^n)}}(û^(n)_{d^n}(ℓ) | û^(n)_{[0^n : d^n)}(ℓ)) = P^(n)_{U_{[0^n : b^n]}}(û^(n)_{[0^n : b^n]}(ℓ)) (28)

    are selected, where

    û^(n)_{[0^n : b^n]}(ℓ) [Math. 59]

    is the ℓ-th survival path.
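The selection of the L highest-likelihood paths can be sketched as follows. This is only an illustrative rendering using a heap (function name and flag layout are assumptions); the patent's Algorithm 7 performs the equivalent marking with markPath/selectPath over fixed memory, as described later.

```python
import heapq

def prune_paths(P, L):
    # Keep the L most probable paths: return Active flags with
    # exactly L ones, placed at the indices of the L largest P[l].
    keep = heapq.nlargest(L, range(len(P)), key=lambda i: P[i])
    active = [0] * len(P)
    for i in keep:
        active[i] = 1
    return active
```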

    Regarding Application of Algorithm 4 to Information Source Code

    [0123] In a case where Algorithm 4 is used in a decoder of a polar information source code, accessing the code word u_{I_1} and the auxiliary information vector {y_{b^n}}_{b^n ∈ [0^n : 1^n]},

    [00037] [Math. 60] P^(0)_{U_{b^n}}

    is defined by Formula (24), and the reproduction

    {x̂_{c^n}(l)}_{c^n ∈ [0^n : 1^n]} [Math. 61]

    is obtained. The above reproduction is defined below.


    [Math. 62]

    x̂_{c^n}(l) = U[l][0][c^n][b_{−1}] (29)

    where

    [00038] [Math. 63]

    l = argmax_{ℓ} P[ℓ] (30)

    [0124] That is, l in Formula (30) is the index of the most probable path (sequence).

    [0125] In a case where s = parity(x^n) is added to the code word u_{I_1} by using an external parity check function parity, the reproduction is defined as in Formula (29) with respect to l satisfying

    parity({x̂_{c^n}(l)}_{c^n ∈ [0^n : 1^n]}) = s. [Math. 64]

    Regarding Application of Algorithm 4 to Systematic Polar Communication Channel Code

    [0126] In a case where Algorithm 4 is used in a decoder of a systematic polar communication channel code, accessing the communication channel output vector {y_{b^n}}_{b^n ∈ [0^n : 1^n]} and the shared vector u_{I_1}, the reproduction defined by Formulas (29) and (30)

    {x̂_{c^n}(l)}_{c^n ∈ I_0} [Math. 65]

    is obtained. Here, I_0 is defined by Formula (1).

    [0127] In a case where a code word (for example, a polar code with CRC [9]) having a check vector s satisfying s = parity(x^n) for all communication channel inputs x^n is used, with the external parity check function parity, the reproduction

    {x̂_{c^n}(l)}_{c^n ∈ I_0} [Math. 67]

    is defined as in Formula (29) with respect to l satisfying

    parity({x̂_{c^n}(l)}_{c^n ∈ [0^n : 1^n]}) = s. [Math. 66]

    Regarding Application of Algorithm 4 to Non-Systematic Polar Communication Channel Code

    [0128] In a case where Algorithm 4 is used for a decoder of a non-systematic polar communication channel code, the binary variables

    {M[ℓ][b^n]} [Math. 68]

    are prepared, and immediately after U[ℓ][n][c^0][b_{n−1}] and U[ℓ+L̂][n][c^0][b_{n−1}] are updated (lines 5 and 6 of Algorithm 6, and lines 8, 11, 15, and 26 of Algorithm 7), respectively,

    M[ℓ][b^n] ← U[ℓ][n][c^0][b_{n−1}]

    M[ℓ+L̂][b^n] ← U[ℓ+L̂][n][c^0][b_{n−1}] [Math. 69]

    is inserted, where L̂ denotes the size of the valid (active) list.


    [Math. 70]

    û_{b^n}(l) = M[l][b^n] (31)

    is defined, and the reproduction

    {û_{b^n}(l)}_{b^n ∈ I_0} [Math. 71]

    is obtained. Here, l is the maximum likelihood sequence index defined by Formula (30). The modified Algorithm 6 is shown in FIG. 21, and the modified Algorithm 7 is shown in FIG. 22.

    [0129] In a case where a code word (for example, a polar code with CRC [9]) having a check vector s satisfying s = parity(x^n) for all communication channel inputs x^n is used, with the external parity check function parity, with respect to l for which the corresponding communication channel input defined by Formula (29),

    {x̂_{c^n}(l)}_{c^n ∈ [0^n : 1^n]}, [Math. 72]

    satisfies

    parity({x̂_{c^n}(l)}_{c^n ∈ [0^n : 1^n]}) = s, [Math. 73]

    the reproduction of the non-systematic code

    {û_{b^n}(l)}_{b^n ∈ I_0} [Math. 74]

    is defined as in Formula (31).

    [0130] Remark 2: Line 17 of Algorithm 4 is unnecessary in a case where real variables with infinite precision are used. In the above description, it is assumed that real variables with finite precision (floating point) are used, in order to avoid P[ℓ] becoming 0 due to truncation as b^n increases. Regardless of whether P[ℓ] has infinite precision or not, while b^n ∈ I_1 is continuously satisfied from the beginning (b^n = 0^n), line 17 of Algorithm 4 and line 2 of Algorithm 5 can be skipped.

    [0131] Note that this type of technique is used in [9, Algorithm 10, lines 20 to 25], where it is repeated N·n times. In contrast, Algorithm 4 in the present embodiment uses this technique outside the update of the parameters {λ[ℓ]}^{L−1}_{ℓ=0} (Algorithm 2), and magnifyP is repeated only N times.

    [0132] Remark 3: Assuming that L is a power of 2, L̂ = L is always satisfied in line 13 of Algorithm 4, where L̂ is the size of the valid (active) list. Therefore, since ℓ + L̂ never exceeds 2L, line 14 of Algorithm 4 and lines 9 to 12 of Algorithm 7 can be omitted. FIG. 23 illustrates Algorithm 4 after the skip processing of Remark 2 and the omission of Remark 3 are applied. In addition, Algorithm 7 after the omission of Remark 3 is illustrated in FIG. 24. Note that line 9 of Algorithm 4 in FIG. 23 corresponds to the skip processing of Remark 2 applied to Algorithm 5 in FIG. 9; no skip is performed in the extendPath called in line 18 of Algorithm 4 in FIG. 23, and the processing of Algorithm 5 in FIG. 9 is executed as it is. Further, FIG. 25 illustrates a modification in which Algorithm 7 illustrated in FIG. 24 is applied to the non-systematic communication channel code.

    [0133] Remark 4: Since Algorithm 6 and Algorithm 7 have the same lines 2 to 3, lines 2 to 3 of Algorithm 6 can be omitted, and lines 1 to 4 of Algorithm 7 can then be moved between lines 8 and 9 of Algorithm 4.

    Implementation of Binary Representation of Index

    [0134] The description so far has used a binary representation of the index. Implementation of this representation will now be described.

    [0135] In an implementation, the binary representation b^k ∈ [0^k : 1^k] may be represented by the corresponding integer in {0, …, 2^k − 1}. It is assumed that i(b^k) is the integer corresponding to b^k.

    [0136] For a given b^k ∈ [0^k : 1^k], b_{k−1} is the least significant bit of b^k, and the corresponding integer can be obtained as i(b_{k−1}) = i(b^k) mod 2. This is used in lines 5, 7, and 9 of Algorithm 1 and in lines 2, 3, and 5 of Algorithms 2 and 3.

    [0137] In line 9 of Algorithm 1 (FIG. 5), line 2 of Algorithm 2 (FIG. 6), and line 5 of Algorithm 3 (FIG. 7), b^{k−1} is used for a given b^k. b^{k−1} can be obtained by a 1-bit right shift of b^k. Alternatively, the corresponding integer can be obtained by using the relational expression i(b^{k−1}) = floor(i(b^k)/2). Here, floor is a function that outputs the integer value obtained by rounding down the decimal places of a real-number input.

    [0138] In lines 5 and 7 of Algorithm 2 and lines 2 and 3 of Algorithm 3, c^{n−k}0 can be obtained by a 1-bit left shift of c^{n−k}, and c^{n−k}1 can be obtained by additionally setting the least significant bit to 1. Alternatively, the corresponding integers can be obtained using the following relational expressions.

    i(c^{n−k}0) = 2 i(c^{n−k})

    i(c^{n−k}1) = 2 i(c^{n−k}) + 1 [Math. 75]

    [0139] In lines 5 and 7 of Algorithm 1, line 3 of Algorithm 5, lines 5 and 6 of Algorithm 6, and lines 8, 11, 15, and 26 of Algorithm 7, the corresponding integer of the null character string c^0 can be defined as i(c^0) = 0. When k = n, this also appears in lines 5 and 7 of Algorithm 2 and lines 2 and 3 of Algorithm 3. Here, the null character string is distinguished from the binary symbol 0 according to the value k. In lines 2 and 3 of Algorithm 3, the null character string b_{−1} appears when k = 0; i(b_{−1}) = 0 can be defined and is distinguished from the binary symbol 0 according to the value k. The following relational expressions are obtained.

    i(c^0 0) = 2 i(c^0)

    i(c^0 1) = 2 i(c^0) + 1

    i(b_{−1}) = floor(i(b^0)/2) [Math. 76]
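The index arithmetic above maps directly to bitwise integer operations. A small sketch (the helper names are assumptions for illustration):

```python
def lsb(i):
    # i(b_{k-1}) = i(b^k) mod 2: the least significant bit
    return i & 1

def parent(i):
    # i(b^{k-1}) = floor(i(b^k) / 2): drop the least significant bit
    return i >> 1

def child(i, b):
    # i(c^{n-k} 0) = 2 i(c^{n-k}); i(c^{n-k} 1) = 2 i(c^{n-k}) + 1
    return (i << 1) | b
```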

    Regarding Improvement of Algorithm 2 in Case of Assuming λ[k][c^{n−k}] ∈ {1, 0, −1}

    [0140] As described in Remark 1, in a case where Algorithm 1 is applied to the binary erasure channel, λ[k][c^{n−k}] ∈ {1, 0, −1} holds. Given this, two refinements of Algorithm 2 can be introduced.

    [0141] First, FIG. 14 illustrates Algorithm 2′. In Algorithm 2′, it is assumed that λ[k][c^{n−k}] is represented by a 3-bit signed integer including a sign bit and two bits representing an absolute value.

    [0142] FIG. 15 illustrates Algorithm 2″. In Algorithm 2″, it is assumed that

    λ[k][c^{n−k}] = (λ±[k][c^{n−k}], λ1[k][c^{n−k}]) [Math. 77]

    is a 2-bit signed integer including a sign bit

    λ±[k][c^{n−k}] [Math. 78]

    and an absolute value bit λ1[k][c^{n−k}]. Further, line 7 of Algorithm 1 can be replaced with

    [00039] [Math. 79]

    U[n][c^0][b_{n−1}] ← −λ±[n][0] if λ1[n][0] = 1; 0 or 1 otherwise

    or simply with

    U[n][c^0][b_{n−1}] ← −λ±[n][0]. [Math. 80]

    [0143] In Algorithm 2″, ∧ represents an AND computation, ∨ represents an OR computation, and

    ⊕ [Math. 81]

    represents an XOR computation.

    Algorithm of Line 5 of Algorithm 7

    [0144] Here, the content of the algorithm in line 5 of Algorithm 7 shown in FIG. 11 will be described. In the present embodiment, line 5 of Algorithm 7 can be implemented by Algorithm 7-1 (markPath) illustrated in FIG. 16. The algorithm of selectPath(0, 2L̂−1) in line 5 of Algorithm 7-1 is shown in FIG. 17 as Algorithm 7-2. The algorithm of partition(left, right) in line 2 of Algorithm 7-2 is shown in FIG. 18 as Algorithm 7-3. The algorithm of swapIndex in Algorithm 7-3 is shown in FIG. 19 as Algorithm 7-4.

    [0145] Algorithms 7-1 to 7-4 can access the memory spaces {P[ℓ]}^{2L−1}_{ℓ=0}, {Index[ℓ]}^{2L−1}_{ℓ=0}, and {Active[ℓ]}^{2L−1}_{ℓ=0}. Here, Index[ℓ] ∈ {0, …, 2L−1} is an integer variable. The calculation result is stored in {Active[ℓ]}^{2L−1}_{ℓ=0}.

    [0146] Remark 5: As mentioned in [9], instead of adopting selectPath(0, 2L̂−1) in line 5 of Algorithm 7-1, {Index[ℓ]}^{2L̂−1}_{ℓ=0} may simply be sorted so that P[Index[0]] ≥ P[Index[1]] ≥ … ≥ P[Index[2L̂−1]]. The time complexity of sorting is O(L̂ log L̂), but in a case where L̂ is small, sorting may be faster than selectPath(0, 2L̂−1).

    [0147] Remark 6: Line 1 of Algorithm 7-3 (which can be omitted) guarantees that the average time complexity of selectPath(0, 2L̂−1) is O(L̂). By replacing this line with a selection of the index corresponding to the median of {P[ℓ]}^{right}_{ℓ=left}, the worst-case time complexity O(L̂) can be guaranteed (refer to [4]).
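The quickselect-style partitioning of Algorithms 7-2/7-3 can be sketched as follows. The function names, the in-place index array, and the random-pivot choice are assumptions for illustration; the sketch rearranges an index array so that its first L entries point at the L largest values, in average linear time.

```python
import random

def select_top(P, idx, L):
    # Partially order idx so that its first L entries index the L
    # largest values of P (order within each half is arbitrary).
    left, right = 0, len(idx) - 1
    while left < right:
        # random pivot: the role of the omissible line 1 of Algorithm 7-3
        p = random.randint(left, right)
        idx[p], idx[right] = idx[right], idx[p]
        pivot, store = P[idx[right]], left
        for i in range(left, right):
            if P[idx[i]] > pivot:
                idx[i], idx[store] = idx[store], idx[i]
                store += 1
        idx[store], idx[right] = idx[right], idx[store]
        if store == L - 1:
            break
        if store >= L:
            right = store - 1
        else:
            left = store + 1
    return idx
```

Replacing the random pivot with a median-of-medians choice yields the worst-case linear bound mentioned in Remark 6.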

    Summary of Each Algorithm

    [0148] Hereinafter, as a summary of each algorithm, the input, output, and the like of each algorithm will be described. Note that the number of times n of the polar conversion is a constant and can be referred to from all the algorithms. The block length is N = 2^n.

    [0149] For example, the input information described below is input from the input unit 210 in the decoding device 200 shown in FIG. 3, stored in the data storage unit 240, and referred to by the computation unit 220 that executes the algorithm. Reference to information by the computation unit 220 that executes the algorithm may be expressed as input of information to the algorithm. A computation result by the computation unit 220 is output to and stored in the data storage unit 240. The computation result (original message) is read from the data storage unit 240 by the computation unit 220 and output from the output unit 230.

    Algorithm 1: FIG. 5

    Input

    [0150] The input to Algorithm 1 is as follows. [0151] Position information I_1 of the frozen bits. This is a subset of {0^n, …, 1^n}. [0152] Information u_{I_1} of the frozen bits. In the case of the information source code, this is transmitted as a code word from the encoding device 100 to the decoding device 200.

    [0153] The code word in the case of the information source code is more specifically as follows.

    [0154] The encoding device 100 defines u^(0)_{b^n} = x_{b^n} for the information source output x_{[0^n : 1^n]}, and applies a known polar conversion algorithm to obtain {u[lsb(n)][b^n]}_{b^n ∈ [0^n : 1^n]}. At this time,

    {u[lsb(n)][b^n]}_{b^n ∈ I_1}

    is a code word.

    Series of Probability Distributions

    [0155] [00040] [Math. 82] {P^(0)_{U_{b^n}}}_{b^n ∈ [0^n : 1^n]}

    [0156] This is set as follows by the computation unit 220 depending on the object to which it is applied. [0157] In the case of the information source code, the information observed by the encoding device 100 is (X_1, …, X_N), and the auxiliary information observed by the decoding device 200 is (Y_1, …, Y_N). The conditional probability distribution of X_{b^n} after the observation of Y_{b^n} is

    P_{X_{b^n} | Y_{b^n}}. [Math. 83]

    After the observation of the auxiliary information y_{[0^n : 1^n]},

    [00041] [Math. 84] P^(0)_{U_{b^n}}

    is set by the above-described Formula (24). [0158] In the case of the communication channel code, the communication channel input is (X_1, …, X_N), and the communication channel output observed by the decoding device 200 is (Y_1, …, Y_N). The conditional probability distribution of the communication channel is

    P_{Y_{b^n} | X_{b^n}}. [Math. 85]

    At this time, after observing the communication channel output y_{[0^n : 1^n]},

    [00042] [Math. 86] P^(0)_{U_{b^n}}

    is set by the above-described Formula (26).

    [0159] The communication channel input for the systematic communication channel code is more specifically as follows.

    [0160] The encoding device 100 sets x_I.sub.0 as a message vector and u.sup.(0)_I.sub.1 as information shared by the encoding device 100 and the decoding device 200 (for example, all are set to 0). Then, a known polar code algorithm is applied to obtain (x_I.sub.1, u_I.sub.0) from (x_I.sub.0, u_I.sub.1). Here, the relationship of I.sub.0, I.sub.1, I.sub.0, and I.sub.1 is defined by the above-described Formulas (1) and (2). At this point, the communication channel input (also referred to as a code word) that is the output of encoding device 100 is obtained by aligning (x_I.sub.0, x_I.sub.1) in the order of indices.

    [0161] The communication channel input for the non-systematic communication channel code is more specifically as follows.

    [0162] The encoding device 100 sets u.sup.(0)_I.sub.0 as a message vector and u.sup.(0)_I.sub.1 as information shared by the encoding device 100 and the decoding device 200 (for example, all are set to 0). By applying a known polar code algorithm to this,


    {u[lsb(n)][b^n]}_{b^n ∈ [0^n : 1^n]}

    is obtained and set as a communication channel input (also referred to as a code word) which is an output of the encoding device 100.

    Output

    [0163] As described above, the output from Algorithm 1 is stored in the memory space shown in [Math. 30].

    [0164] The reproduction information as the decoding result is obtained by the following interpretation depending on the object to which the algorithm is applied. [0165] In the case of the information source code, the computation unit 220 sets x̂_{[0^n : 1^n]} defined in the above-described Formula (25) as the reproduced information source sequence with reference to the memory space of U shown in [Math. 30]. [0166] In the case of the systematic communication channel code, the computation unit 220 refers to the memory space of U shown in [Math. 30], and sets x̂_{I_0} defined in the above-described Formula (25) as the reproduced message. Here, I_0 is as described above. [0167] In the case of the non-systematic communication channel code, the modified Algorithm 1 shown in FIG. 20 is used as described above. That is, the memory area {M[b^n] : b^n ∈ I_0} corresponding to the non-frozen bit positions is prepared, and the result is stored there. The reproduced message becomes û_{I_0} defined by û_{b^n} = M[b^n].

    Algorithm 2: FIGS. 6, 14, and 15

    Input

    [0168] The input to Algorithm 2 is as follows. [0169] Reference is made to the memories λ and U used in Algorithm 1. [0170] Integer k. [0171] Binary representation b^n of the index. As described previously, implementations may also handle b^n as an integer.

    Output

    [0172] When its condition is satisfied, Algorithm 2 is recursively called, and the content of λ is rewritten.

    Algorithm 3: FIG. 7

    Input

    [0173] Reference is made to the memory U used in Algorithm 1. [0174] Integer k. [0175] Binary representation b^n of the index. Implementations may also treat b^n as an integer.

    Output

    [0176] After the content of U is rewritten, when the condition is satisfied, Algorithm 3 is recursively called.

    Algorithm 4: FIG. 8

    Input

    [0177] The same information as the input to Algorithm 1. [0178] List size L.

    Output

    [0179] The output from Algorithm 4 is stored in the memory spaces (memory areas) of , U, and P as described above. Specifically, this is as follows.

    [00043] [Math. 87]

    {λ[ℓ][k][c^{n−k}] : ℓ ∈ {0, …, L−1}, k ∈ {0, …, n}, c^{n−k} ∈ [0^{n−k} : 1^{n−k}]}

    [Math. 88]

    {U[ℓ][k][c^{n−k}][b] : ℓ ∈ {0, …, L−1}, k ∈ {0, …, n}, c^{n−k} ∈ [0^{n−k} : 1^{n−k}], b ∈ {0, 1}}

    [Math. 89]

    {P[ℓ] : ℓ ∈ {0, …, L−1}}

    [0180] The reproduction information as the decoding result is obtained by the following interpretation depending on the object to which the algorithm is applied. [0181] In the case of the information source code, the computation unit 220 obtains l by the above-described Formula (30) with reference to the above P, and, with reference to the above U,

    x̂_{[0^n : 1^n]}(l), [Math. 90]

    [0182] which is defined by Formula (25), is set as the reproduced information source sequence. [0183] In the case of the systematic communication channel code, the computation unit 220 obtains l by the above-described Formula (30) with reference to the above P, and, with reference to the above U, x̂_{I_0} defined by Formula (25) is set as the reproduced message. Here, I_0 is as described above. [0184] In a case where Algorithms 4 to 7 are applied to a non-systematic communication channel code, the memory area shown in [Math. 68] is prepared and the result is stored there, as described above. The accompanying modified Algorithms 6 and 7 are as shown in FIGS. 21, 22, and 25. The reproduced message is as shown in [Math. 70] and [Math. 71].

    Algorithm 5: FIG. 9

    Input

    [0185] Reference is made to the above memories λ, U, and P. [0186] The information u_{I_1} of the frozen bits described in the input of Algorithm 4 (Algorithm 1) is referred to. [0187] Binary representation b^n of the index. Implementations may also treat b^n as an integer. [0188] Size of the valid (active) list.

    Output

    [0189] The content of the above memories U and P is rewritten.

    Algorithm 6: FIG. 10

    Input

    [0190] Reference is made to the above memories λ, U, and P. [0191] Binary representation b^n of the index. Implementations may also treat b^n as an integer. [0192] Size of the valid (active) list.

    Output

    [0193] The content of the above memories U and P is rewritten.

    Algorithm 7: FIGS. 11, 22, 24, and 25

    [0194] Here, the memory


    {Active[ℓ] : ℓ ∈ {0, …, 2L−1}}

    is used.

    Input

    [0195] Reference is made to the above memories λ, U, P, and Active. [0196] Binary representation b^n of the index. Implementations may also treat b^n as an integer. [0197] Size of the valid (active) list.

    Output

    [0198] The content of the above memories U, P, and Active is rewritten. In Algorithm 7 illustrated in FIG. 25, the content of M is further rewritten.

    Algorithm 8: FIG. 12

    [0199] Algorithm 8 is an algorithm corresponding to copyPath(ℓ, ℓ′) in line 4 of Algorithm 6 (FIG. 10) and lines 10 and 25 of Algorithm 7 (FIG. 11), and implements a function of copying the state of the ℓ′-th (copy source) list to the memory that stores the state of the ℓ-th (copy destination) list.

    Input

    [0200] Reference is made to the above memories λ and U. [0201] Index ℓ of the copy destination list. [0202] Index ℓ′ of the copy source list.

    Output

    [0203] The content of the above memories λ and U is rewritten.

    Algorithm 9: FIG. 13

    [0204] Algorithm 9 is an algorithm that implements the rewriting of the content of the above memory P (scaling so that the maximum value becomes 1 without changing the ratios of the real numbers stored in the memory).
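The rescaling can be sketched as follows. This is an illustrative rendering only (the function name and the choice to rescale by the maximum over active entries are assumptions); its purpose is to prevent the path likelihoods P[ℓ] from underflowing to 0 under finite-precision arithmetic while preserving their ratios.

```python
def magnify_p(P, active):
    # Scale all entries so the maximum active entry becomes 1;
    # ratios between entries are preserved.
    m = max(P[i] for i in range(len(P)) if active[i])
    if m > 0:
        for i in range(len(P)):
            P[i] /= m
    return P
```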

    Input

    [0205] Reference is made to the above memory P. [0206] Size of the valid (active) list.

    Output

    [0207] The content of the above memory P is rewritten.

    Algorithms 2′ and 2″: FIGS. 14 and 15

    [0208] Algorithms 2′ and 2″ illustrated in FIGS. 14 and 15 speed up Algorithm 2 when Algorithm 1 is applied to a polar code for an erasure channel. Their input/output is the same as that of Algorithm 2.

    Algorithms 7-1 to 7-4: FIGS. 16 to 19

    [0209] Algorithms 7-1 to 7-4 are algorithms that set Active[ℓ] = 1 only for the ℓ with the top L values of P[ℓ], as described for line 5 of Algorithm 7 (FIG. 11).

    Simple Configuration

    [0210] As described above, in order to simplify the configuration, Algorithm 4 and Algorithm 7 under the assumptions that the set I_0 is not an empty set, that L is a power of 2, and that only real-number computation with finite accuracy is possible (in the unrealistic case of infinite accuracy, line 27 of Algorithm 4 in FIG. 23 can be deleted) are as shown in FIGS. 23 to 25. Note that, in a case where I_0 is an empty set, L = 1 is set, the processing of lines 5 to 8 of Algorithm 4 illustrated in FIG. 23 is skipped, and the processing is finished at line 14.

    Configuration of Decoding Device 200 for Erasure Channel

    [0211] As described in Remark 1, in the case of the decoding device 200 for the erasure channel, it can be assumed that λ[k][c^{n−k}] takes a value in {1, 0, −1}.

    [0212] When line 7 of Algorithm 1 (FIG. 5) is replaced with

    [00044] [Math. 91]

    U[n][c^0][b_{n−1}] ← 0 if λ[n][0] = 1; 1 if λ[n][0] = −1; 0 or 1 if λ[n][0] = 0,

    then, regardless of the method of encoding {1, 0, −1} into λ[k][c^{n−k}], the processing in lines 4 to 8 of Algorithm 1 (FIG. 5) may be any processing as long as the values in the tables illustrated in FIGS. 26 to 28 are substituted into λ[k][c^{n−k}]. Two examples (Example A and Example B) are described below.

    [0213] In a case of being applied to Algorithms 4 to 9, λ[ℓ][k][c^{n−k}] can be a variable taking a value in {1, 0, −1}, and P[ℓ] can be an N-bit unsigned integer variable. In this case, the processing of magnifyP in line 17 of Algorithm 4 is unnecessary. Further, since

    [0214] 1 − λ[ℓ][n][b^n] ∈ {0, 1, 2} [Math. 92]

    is established, in line 2 of Algorithm 5, lines 2 to 3 of Algorithm 6, and lines 2 to 3 of Algorithm 7, the case of

    1 − λ[ℓ][n][b^n] = 2 [Math. 93]

    can be calculated using a 1-bit left shift.

    Example A: Method Using Integer Addition/Subtraction

    [0215] The division can be omitted by using Algorithm 2′ illustrated in FIG. 14 instead of Algorithm 2 in FIG. 6. Since carries occur, three bits including a sign bit are required for λ[k][c^{n−k}].

    [0216] Note that, when this method is applied to successive cancellation list decoding (Algorithm 4), the only change required is to regard the real variables as integer variables.

    Example B: Method of Using Sign Bit and Absolute Value Bit

    [0217] Further, λ[k][c^{n−k}] may be represented as illustrated in FIG. 29 and encoded with the two bits

    λ±[k][c^{n−k}], λ1[k][c^{n−k}]. [Math. 94]

    [0218] In this case, the tables illustrated in FIGS. 30 to 32 may be implemented in lines 4 to 8 of Algorithm 2.

    [0219] This can be implemented by replacing line 7 of Algorithm 1 (FIG. 5) with line 7 illustrated in FIG. 33, and using Algorithm 2″ illustrated in FIG. 15 instead of Algorithm 2 in FIG. 6. Line 7 of Algorithm 1 may also be

    [00045] [Math. 95]

    U[n][c^0][b_{n−1}] ← −λ±[n][0] if λ1[n][0] = 1; 0 or 1 otherwise.

    [0220] Note that, when this method is applied to successive cancellation list decoding (Algorithm 4),

    λ[k][c^{n−k}] = (λ±[k][c^{n−k}], λ1[k][c^{n−k}]) [Math. 96]

    is interpreted as a 2-bit signed integer with a 1-bit sign part and a 1-bit numerical part.

    Regarding Effects of Embodiment

    [0221] When the technique according to the present embodiment (referred to as the proposed method) is compared with the technique of Reference Literature [9] (referred to as the conventional method), the two values P_ℓ[0] and P_ℓ[1] held in P in the conventional method are expressed by the single value λ[k][c^{n−k}] obtained by applying the symmetric parameterization in the proposed method (Algorithm 2); thus, the proposed method uses half the memory of the conventional method. As a result, the calculation amount can also be reduced. In particular, in list decoding, the processing time for copying the states of candidates can be greatly reduced.

    [0222] In addition, the conventional method uses a stack memory in its processing, whereas the proposed method directly operates on a memory space with fixed addresses. Thus, the stack-memory operations required in the conventional method are simplified in the proposed method.

    [0223] Furthermore, with the technique according to the present embodiment, it is possible to uniformly perform decoding for the communication channel code and decoding for the information source code.

    Summary of Embodiment

    [0224] In the present specification, at least a decoding device, a decoding method, and a program described in clauses described below are described.

    Clause 1

    [0225] A decoding device, including: [0226] an input unit that inputs a code word encoded by a polar code from an original message; [0227] a computation unit that decodes the original message from the code word based on a conditional probability expressed by a symmetric parameterization and having observation information as a condition; and [0228] an output unit that outputs the decoded original message.
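    The unit structure of Clause 1 can be sketched as a minimal skeleton. The class and method names below are illustrative, not taken from the specification, and the computation unit is left as a placeholder function:

```python
class DecodingDevice:
    """Skeleton of the device in Clause 1: input unit, computation unit,
    output unit."""

    def __init__(self, compute):
        # `compute` stands in for the computation unit: it decodes the
        # original message from the code word based on a conditional
        # probability conditioned on the observation information.
        self.compute = compute

    def input_codeword(self, codeword, observation):
        # Input unit: receives the polar-encoded code word and the
        # observation information.
        self.codeword, self.observation = codeword, observation

    def decode(self):
        self.message = self.compute(self.codeword, self.observation)

    def output(self):
        # Output unit: returns the decoded original message.
        return self.message
```

    The later clauses then vary only what is fed to the input unit (frozen-bit information, a noisy channel output) and what serves as the observation information.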

    Clause 2

    [0229] The decoding device according to clause 1, in which [0230] in a case where the decoding device is applied to decoding for an information source code, [0231] the input unit inputs information of a frozen bit as the code word from a noise-free communication channel or a storage medium, and [0232] the computation unit decodes the original message by using auxiliary information as the observation information.

    Clause 3

    [0233] The decoding device according to clause 1, in which [0234] in a case where the decoding device is applied to decoding for a communication channel code, [0235] the input unit inputs, as the code word, a communication channel output from a communication channel having noise, and [0236] the computation unit decodes the original message by using the communication channel output as the observation information.

    Clause 4

    [0237] The decoding device according to clause 3, in which [0238] in a case where the decoding device is applied to decoding for a systematic communication channel code, [0239] information of a frozen bit is shared between an encoding side and a decoding side, [0240] the input unit inputs the communication channel output in which the original message obtained by polar conversion on the encoding side and additional information are used as communication channel inputs, and [0241] the computation unit decodes the original message by using the information of the frozen bit and the communication channel output.

    Clause 5

    [0242] The decoding device according to clause 3, in which [0243] in a case where the decoding device is applied to decoding for a non-systematic communication channel code, [0244] information of a frozen bit is shared between an encoding side and a decoding side, [0245] the input unit inputs the communication channel output in which information of a non-frozen bit and information obtained by polar conversion from the information of the frozen bit on the encoding side are used as communication channel inputs, and [0246] the computation unit decodes the original message by storing a determination result of 0 or 1 in a memory area corresponding to a position of a non-frozen bit based on the information of the frozen bit and the communication channel output.
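    The bookkeeping described in Clause 5 can be sketched as follows. The decision function is a stand-in for the actual per-bit successive-cancellation computation (which depends on the channel output and is not reproduced here); only the frozen/non-frozen memory layout is illustrated:

```python
def decode_nonfrozen(frozen_mask, frozen_values, decide):
    """Fill a memory area u: frozen positions take the frozen-bit
    information shared with the encoding side; each non-frozen position
    stores a determination result of 0 or 1.

    `decide(i, prefix)` is a placeholder for the per-bit decision rule
    based on the channel output and the bits decided so far."""
    u = [0] * len(frozen_mask)
    it = iter(frozen_values)
    for i, frozen in enumerate(frozen_mask):
        u[i] = next(it) if frozen else decide(i, u[:i])
    return u
```

    A real decoder would derive `decide` from the conditional probabilities of Clause 1; here it is kept abstract so the memory-area handling stands out.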

    Clause 6

    [0247] The decoding device according to any one of clauses 1 to 5, in which [0248] the computation unit executes list decoding for selecting a sequence of a most probable path from among a predetermined number (list size) of survival paths.
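    The path-selection step of Clause 6 can be sketched as below. The representation of a path as a (score, bit sequence) pair and the function names are assumptions made for illustration:

```python
import heapq

def prune_paths(paths, list_size):
    """Keep at most `list_size` survival paths, highest score first.
    Each path is a (score, bit_sequence) pair; a higher score means a
    more probable path."""
    return heapq.nlargest(list_size, paths, key=lambda p: p[0])

def best_path(paths):
    """After the final bit, select the sequence of the most probable path."""
    return max(paths, key=lambda p: p[0])[1]
```

    In an actual list decoder this pruning runs once per decoded bit, after each survival path has been extended by both candidate bit values.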

    Clause 7

    [0249] A decoding method executed by a decoding device, including: [0250] an input step of inputting a code word encoded by a polar code from an original message; [0251] a computation step of decoding the original message from the code word based on a conditional probability expressed by a symmetric parameterization and having observation information as a condition; and [0252] an output step of outputting the decoded original message.

    Clause 8

    [0253] A program for causing a computer to function as each unit in the decoding device according to any one of clauses 1 to 6.

    [0254] Although the present embodiment has been described above, the present invention is not limited to such a specific embodiment, and various modifications and changes can be made within the scope of the gist of the present invention described in the claims.

    REFERENCE LITERATURE

    [0255] [1] E. Arikan, Channel polarization: a method for constructing capacity-achieving codes for symmetric binary-input memoryless channels, IEEE Trans. Inform. Theory, vol. IT-55, no. 7, pp. 3051-3073, Jul. 2009.
    [0256] [2] E. Arikan, Source polarization, Proc. 2010 IEEE Int. Symp. Inform. Theory, Austin, U.S.A., Jun. 13-18, 2010, pp. 899-903. (corresponding to Non-Patent Literature 2)
    [0257] [3] E. Arikan, Systematic polar coding, IEEE Communications Letters, vol. 15, no. 8, pp. 860-862, Aug. 2011.
    [0258] [4] M. Blum, R. W. Floyd, V. Pratt, R. L. Rivest, and R. E. Tarjan, Time bounds for selection, J. Computer and System Sciences, vol. 7, no. 4, pp. 448-461, 1973.
    [0259] [5] R. Mori and T. Tanaka, Source and channel polarization over finite fields and Reed-Solomon matrices, IEEE Trans. Inform. Theory, vol. IT-60, no. 5, pp. 2720-2736, May 2014.
    [0260] [6] J. Muramatsu, Successive-cancellation decoding of linear source code, Proceedings of the 2019 IEEE Information Theory Workshop, Visby, Sweden, Aug. 25-28, 2019. Extended version available at arXiv:1903.11787 [cs.IT], 2019.
    [0261] [7] E. Şaşoğlu, Polarization and polar codes, Found. Trends Commun. Inf. Theory, vol. 8, no. 4, pp. 259-381, Oct. 2012.
    [0262] [8] I. Tal and A. Vardy, How to construct polar codes, IEEE Trans. Inform. Theory, vol. IT-59, no. 10, pp. 6563-6582, Oct. 2013.
    [0263] [9] I. Tal and A. Vardy, List decoding of polar codes, IEEE Trans. Inform. Theory, vol. IT-61, no. 5, pp. 2213-2226, May 2015. (corresponding to Non-Patent Literature 1)

    REFERENCE SIGNS LIST

    [0264] 100 Encoding device
    [0265] 200 Decoding device
    [0266] 210 Input unit
    [0267] 220 Computation unit
    [0268] 230 Output unit
    [0269] 240 Data storage unit
    [0270] 1000 Drive device
    [0271] 1001 Recording medium
    [0272] 1002 Auxiliary storage device
    [0273] 1003 Memory device
    [0274] 1004 CPU
    [0275] 1005 Interface device
    [0276] 1006 Display device
    [0277] 1007 Input device
    [0278] 1008 Output device