Method of codifying data including generation of a quasi-cyclic code
09543985 · 2017-01-10
Assignee
Inventors
- Jorge Vicente Blasco Claret (Valencia, ES)
- Salvador Iranzo Molinero (Betera, ES)
- Agustin Badenes Corella (Valencia, ES)
Cpc classification
H03M13/116
ELECTRICITY
H03M13/1188
ELECTRICITY
H03M13/118
ELECTRICITY
International classification
H03M13/00
ELECTRICITY
Abstract
A method including selecting a factor based on a number of bits in a codeword and a natural number and generating a model matrix including first and second matrices having data and parity bits. Hamming weights of the model matrix are not constant and Hamming weights of columns of the model matrix follow a statistical distribution dependent upon a codification rate of a channel. A compact matrix is generated by replacing elements of the model matrix equal to: 1 with a pseudo-random positive whole number; and 0 with −1. A quasi-cyclic code is generated by replacing in the compact matrix: positive elements with identity matrices; and elements equal to −1 with null matrices. A number of rows and columns in each of the identity and null matrices is equal to the factor. The quasi-cyclic code is applied to a word to generate a codeword, which is transmitted on the channel.
Claims
1. A method comprising: selecting a factor at a first device, wherein the factor is equal to a number of bits in a codeword divided by a natural number; generating a model matrix, wherein the model matrix includes (i) a first matrix comprising data, and (ii) a second matrix comprising parity bits, wherein Hamming weights of columns and rows of the model matrix are not constant and the Hamming weights of the columns correspond to a codification rate of a channel such that the model matrix is irregular; generating a compact matrix by replacing (i) each element of the model matrix that is equal to 1 with a pseudo-random positive whole number, and (ii) each element of the model matrix that is equal to 0 with a −1, wherein the pseudo-random positive whole number is greater than or equal to 0 and less than the factor; generating a quasi-cyclic code including a parity matrix by replacing (i) each positive element of the compact matrix with an identity matrix, and (ii) each element of the compact matrix equal to −1 with a null matrix, wherein each of the identity matrices is rotated cyclically a number of times indicated by a corresponding one of the positive elements of the compact matrix, wherein a number of rows in each of the identity matrices and each of the null matrices is equal to the factor, and wherein a number of columns in each of the identity matrices and each of the null matrices is equal to the factor; applying the quasi-cyclic code to a word of the data via a codification device to generate the codeword; and transmitting the codeword on the channel from the first device to a second device.
2. The method of claim 1, wherein: a number of rows in the parity matrix is equal to a number of bits in the quasi-cyclic code minus a number of bits in the word of the data; and a number of columns in the parity matrix is equal to the number of bits in the quasi-cyclic code.
3. The method of claim 1, wherein: a first portion of the second matrix of the model matrix includes a column vector; a number of elements in the column vector is based on a difference between a first parameter and a second parameter; the first parameter is equal to a number of bits in the quasi-cyclic code divided by the factor; and the second parameter is equal to a number of bits in the word of the data divided by the factor.
4. The method of claim 3, wherein: a second portion of the second matrix of the model matrix comprises a triple diagonal structure; the triple diagonal structure includes (i) a first central diagonal, (ii) a second central diagonal, and (iii) a last row diagonal; and bits in the first central diagonal, the second central diagonal, and the last row diagonal are equal to 1 and each of a remainder of bits in the triple diagonal structure is equal to 0.
5. The method of claim 1, wherein: a portion of the second matrix of the model matrix comprises a triple diagonal structure; the triple diagonal structure includes (i) a first central diagonal, (ii) a second central diagonal, and (iii) a last row diagonal; and bits in the first central diagonal, the second central diagonal, and the last row diagonal are equal to 1 and each of a remainder of bits in the triple diagonal structure is equal to 0.
6. The method of claim 1, wherein: a first portion of the second matrix of the model matrix comprises a pseudo-random column vector having a Hamming weight greater than 2; a second portion of the second matrix of the model matrix comprises a double diagonal structure; the double diagonal structure includes (i) a first central diagonal, and (ii) a second central diagonal; and bits in the first central diagonal and the second central diagonal are equal to 1 and each of a remainder of bits in the double diagonal structure is equal to 0.
7. The method of claim 6, wherein the pseudo-random positive whole numbers that replace the elements of the model matrix that are equal to 1 are a same pseudo-random positive whole number.
8. The method of claim 1, wherein the pseudo-random positive whole numbers that replace the elements of the model matrix that are equal to 1 are different pseudo-random positive whole numbers.
9. The method of claim 1, wherein: the quasi-cyclic code is a low density parity check code; and the codification rate is equal to a number of bits in the word of the data divided by a number of bits in the quasi-cyclic code.
10. The method of claim 1, wherein a transpose of the quasi-cyclic code multiplied by the codeword is equal to 0.
11. The method of claim 1, wherein: a transpose of the quasi-cyclic code multiplied by a binary matrix is equal to 0; a number of rows in the binary matrix is equal to a number of bits in the word of the data; and a number of columns in the binary matrix is equal to a number of bits in the codeword.
12. The method of claim 1, wherein the model matrix comprises:
13. The method of claim 1, wherein the compact matrix comprises:
14. The method of claim 1, wherein the compact matrix comprises:
15. The method of claim 1, wherein the compact matrix comprises:
16. The method of claim 1, wherein the compact matrix comprises:
17. The method of claim 1, wherein the compact matrix comprises:
18. The method of claim 1, wherein the compact matrix comprises:
19. The method of claim 1, wherein the compact matrix comprises:
20. The method of claim 1, wherein the compact matrix comprises:
21. The method of claim 1, wherein the compact matrix comprises:
22. The method of claim 1, wherein the compact matrix comprises:
23. The method of claim 1, further comprising puncturing the codeword to remove bits of the codeword prior to the codeword being transmitted from the first device.
24. The method of claim 23, wherein the puncturing includes the following puncturing pattern:
25. The method of claim 23, wherein the puncturing includes the following puncturing pattern:
26. The method of claim 23, wherein the puncturing includes the following puncturing pattern:
27. The method of claim 23, wherein the puncturing includes the following puncturing pattern:
Description
BRIEF DESCRIPTION OF DRAWINGS
(1) The present disclosure will become more fully understood from the detailed description and the accompanying drawings, wherein:
(2)
(3)
(4)
(5)
DESCRIPTION
(6) Following is the description of one example of implementation of the invention, making reference to the numbering adopted in the figures.
(7) The problem that the procedure of the invention is intended to solve, from a theoretical point of view, consists in optimizing the correction of errors in data communication using low-cost hardware implementations and LDPC codes.
(8) An LDPC code is a linear code that operates on blocks of data. The code is defined by its parity matrix H. In this implementation example the codes are binary codes, but it is possible to generalize the invention to codes on any Galois field GF(q), where q is a power of 2.
(9) In transmission, blocks of data are composed of K bits. Said data block is designated as u=[u(0), u(1), . . . , u(K−1)]. After applying the method of the invention, a word of a linear code v=[v(0), v(1), . . . , v(N−1)] with N bits is generated (where N>K). Said code word is generated through the product v=u·G, where G is a K×N binary matrix, the generator of the LDPC code. The set of possible code words generated is called the set C, and the codification rate of the code will be R=K/N.
(10) Therefore, it is possible to define a code C of codification rate R as the set of vectors v ∈ C generated by applying the generator matrix G to all the possible 2^K binary vectors. An equivalent definition is that C is the vector subspace of the space of N-bit vectors spanned by the K rows of the matrix G. Another alternative form of defining code C is through its parity matrix H, which is the form most used in the state of the art. This matrix, of size (N−K)×N, has as rows a basis of the dual space of C, and therefore G·H^T=0. Any vector of the code satisfies
v·H^T=0
(where T is the transposition operator).
(11) When these codes are utilized from a practical viewpoint, it is preferable to consider systematic codes, that is, those in which the data bits appear unaltered within the code word. Without loss of generality, this example is centered on the case where v=[u|p], where p=[p(0), p(1), . . . , p(N−K−1)] is a vector composed of the parity bits, u is the data block that is to be transmitted and v is the code word actually transmitted (after applying the LDPC code).
(12) Below is shown an example of implementation in which the relation between the parity matrix and a code word can be observed. In this example, the code has a codification rate of R=1/2 and is defined by the following parity matrix:
(13)
(14) In this matrix, the left section corresponds to the K=5 data bits, while the right section corresponds to the N−K=5 parity bits. By applying the equation v·H^T=0 to the matrix H, the following system of equations is obtained:
(15)
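Since the example parity matrix of paragraph (13) and the resulting equation system of paragraph (15) are not reproduced in this text, the parity-check relation can be sketched in Python with a small stand-in matrix. The matrix H and the parity equations below are hypothetical illustrations, not the patent's example:

```python
# Hypothetical toy parity-check matrix (the patent's example is not
# reproduced here).  A vector v is a code word iff v * H^T = 0 over
# GF(2), i.e. every row of H overlaps v in an even number of 1s.
H = [
    [1, 0, 1, 1, 0, 0],   # parity equation: p0 = u0 + u2
    [1, 1, 0, 0, 1, 0],   # parity equation: p1 = u0 + u1
    [0, 1, 1, 0, 0, 1],   # parity equation: p2 = u1 + u2
]

def is_codeword(v, H):
    """Return True when v * H^T = 0 over GF(2)."""
    return all(sum(h * x for h, x in zip(row, v)) % 2 == 0 for row in H)

u = [1, 0, 1]                                 # data bits
p = [u[0] ^ u[2], u[0] ^ u[1], u[1] ^ u[2]]   # parity bits from the rows above
v = u + p                                     # systematic code word v = [u | p]
print(is_codeword(v, H))                      # True
print(is_codeword([1, 0, 0, 0, 0, 0], H))     # False
```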
(16) An LDPC code can also be represented in graphic form with a bipartite graph called a Tanner graph. In a Tanner graph the vertices or nodes are classified in two separate groups or sets: the variable nodes, which represent the bits of the code word, and the check nodes, which represent the parity relationships. The edges between the two sets of nodes define the parity equations. In the case of the code defined in the preceding example, its corresponding graph is represented in
(17) On the other hand, cycles can be defined on LDPC codes, where a cycle of length 2c is defined as a path of 2c edges that traverses c check nodes and c variable nodes in the Tanner graph representing the code before returning to the starting node. To optimize the features of the code, it can be demonstrated that it is of vital importance that the number of short cycles be the minimum possible. The length of the shortest cycle is called the girth. It is particularly desirable that the girth be greater than 4 in order to avoid degrading the features of an iterative decodifier.
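A length-4 cycle exists exactly when two check nodes (rows of H) both involve the same pair of variable nodes, so the girth > 4 condition can be tested with a short sketch (the matrices H_bad and H_ok are invented examples):

```python
from itertools import combinations

def has_length4_cycle(H):
    """A 4-cycle in the Tanner graph corresponds to two parity rows
    that both check the same pair of variable nodes, i.e. two rows
    sharing 1s in at least two common columns."""
    supports = [{j for j, bit in enumerate(row) if bit} for row in H]
    return any(len(a & b) >= 2 for a, b in combinations(supports, 2))

# Rows 0 and 1 share columns 0 and 1 -> girth 4 (undesirable).
H_bad = [[1, 1, 0, 0],
         [1, 1, 1, 0],
         [0, 0, 1, 1]]
# No two rows share more than one column -> girth greater than 4.
H_ok  = [[1, 1, 0, 0],
         [1, 0, 1, 0],
         [0, 1, 0, 1]]
print(has_length4_cycle(H_bad), has_length4_cycle(H_ok))  # True False
```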
(18) In the original description, R. Gallager presented codes whose parity matrices were generated randomly. In the state of the art it is generally known that in order to obtain good features, near the Shannon limit, the size of the code must be relatively large, and as a consequence the parity matrix must be large. The problem is that matrices that are large and generated randomly cause difficulty in the implementation of both the codifier and the decodifier. One way to avoid this difficulty is to use matrices with a regular structure. Following are the necessary steps for generating such a structure:
1. First, one generates a binary model matrix H_0 of size (n−k)×n where n<N, k<K and R=k/n=K/N. If the Hamming weight of the columns and rows of H_0 is constant, the generated code is called regular LDPC. However, better features can be obtained if the matrix is irregular, that is, if the weights of the columns follow a statistical distribution dependent upon the codification rate and upon the channel where the data transmission will finally be done.
2. Once the binary model matrix H_0 has been obtained, the compact matrix H_1 is generated by replacing each element of H_0 that is equal to 1 with a pseudo-random positive whole number x, where 0 ≤ x < b (with b=N/n), and each element equal to 0 with the value −1.
3. To obtain the parity matrix H, the positive elements of H_1 are replaced by an identity submatrix rotated cyclically the number of times indicated by the value of the element of H_1 in question, and the elements equal to −1 are replaced by a null submatrix of the same size. The size of these submatrices will be b×b.
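Steps 2 and 3 above can be sketched as follows. This is a minimal illustration, assuming that entries ≥ 0 count as "positive elements" (consistent with the stated range 0 ≤ x < b) and that "rotated cyclically" means a right cyclic shift of the identity's rows; the 2×2 compact matrix H1 below is an invented toy example:

```python
def expand(H1, b):
    """Expand a compact matrix H1 into the parity matrix H: each entry
    x >= 0 becomes the b x b identity rotated cyclically x times
    (row r gets its 1 at column (r + x) mod b), and each entry -1
    becomes the b x b null submatrix."""
    rows = len(H1) * b
    cols = len(H1[0]) * b
    H = [[0] * cols for _ in range(rows)]
    for i, row in enumerate(H1):
        for j, x in enumerate(row):
            if x >= 0:
                for r in range(b):
                    H[i * b + r][j * b + (r + x) % b] = 1
    return H

H1 = [[1, -1],
      [0,  2]]      # hypothetical 2 x 2 compact matrix
H = expand(H1, 3)   # parity matrix of size 6 x 6
for row in H:
    print(row)
```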
(19) The result is a parity matrix H of size (N−K)×N which defines an LDPC code with codification rate R=K/N. The degree of sparseness of the matrix will depend upon the size of b. Generally, the larger b is, the better the features obtained by using an iterative decodifier.
(20) If the cycles of the generated matrix (Tanner graph) turn out to be very short, step 2 (or even step 1 if necessary) should be repeated for the purpose of improving these properties.
(21) In order to facilitate the implementation of the codifier, it is necessary to generate the binary model matrix H_0 in a specific form. First, said matrix is divided into two parts, H_0=[H_a|H_b], where the submatrix H_a corresponds to the positions of the data bits and H_b to the parity bits. The first submatrix is generated pseudo-randomly in the way described previously. However, the second section is usually deterministic.
(22) This second section H_b, according to the state of the art and intended to facilitate the design of an efficient codifier, takes one of the following two forms. The first form is
(23)
where the first section is a pseudo-random column vector having a Hamming weight greater than 2 and H_b1 is a double-diagonal matrix whose elements h_b1(i,j) are equal to 1 when i=j or i=j+1 and equal to 0 in the remaining positions.
(24) The second way of generating H_b is totally double diagonal:
(25)
where the elements of the submatrix, h_b(i,j), are equal to 1 when i=j or i=j+1 and equal to zero in the remaining positions.
(26) Once this base matrix structure has been generated, the compact matrix H_1 is generated in the form previously described, with the sole exception that in the double-diagonal part of H_b the 1s are replaced by the same positive whole number and the 0s by −1. Then, the final parity matrix is obtained by replacing the positive whole numbers with cyclically rotated identity submatrices and the negative elements with null submatrices. The procedure can be observed graphically in
(27) The method and device of the invention together modify the structure of the parity matrix known in the state of the art in order to facilitate the final implementation of the codification and decodification and to improve the features. For this, the proposed structure consists in that the section of the binary model matrix H_0 corresponding to the parity bits has the following form:
(28)
where the structure H_b1 is triple diagonal; that is, in addition to the elements of the two central diagonals, h_b1(i,i) and h_b1(i+1,i), the element in the last row, h_b1(n−k−1,0), is also equal to 1. The compact matrix is generated in the way described previously, except that the element of the last row equal to 1 is replaced by a strictly positive whole number w > 0.
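The triple-diagonal structure can be sketched as follows. The shape (one more row than columns, matching a double diagonal with indices (i,i) and (i+1,i)) is an assumption for illustration:

```python
def triple_diagonal(m):
    """Build the m x (m-1) triple-diagonal structure: 1s on the two
    central diagonals, positions (i, i) and (i+1, i), plus the extra
    1 at position (m-1, 0) in the last row."""
    T = [[0] * (m - 1) for _ in range(m)]
    for i in range(m - 1):
        T[i][i] = 1       # first central diagonal
        T[i + 1][i] = 1   # second central diagonal
    T[m - 1][0] = 1       # last-row element that makes it "triple"
    return T

for row in triple_diagonal(4):
    print(row)
```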
(29) A binary model matrix H_0 for R=1/2 with n=24 and k=12 might have the following structure:
(30)
(31) A compact matrix H_1 derived from the preceding for a block size N=336, and therefore having an expansion factor b=14, would be the following:
(32)
or as preferred, the following matrix can be used as an alternative:
(33)
(34) For a different block size we can define a different compact matrix that can derive from the same binary model matrix or from another, different one.
(35) To obtain code words of 1920 bits with a codification rate of 1/2, the following matrix can be used:
(36)
or else, preferably, the matrix:
(37)
(38) A compact matrix for N=8640 bits with expansion factor 360 derived from a different binary model matrix would be the following:
(39)
(40) Another preferential alternative for obtaining code words of 8640 bits with a codification rate of 1/2 is the matrix:
(41)
(42) The step from compact matrix to binary model matrix is unambiguous, but the opposite step is not; that is to say, different compact matrices can be obtained from one binary model matrix. The binary model matrix is introduced as a step that facilitates the description of the invention. It is possible to develop a compact matrix directly without going through the binary model matrix. In that case, for that compact matrix, its corresponding binary model matrix can be obtained, which, if it is triple diagonal, will be in accord with the method of the invention.
(43) The error correction codes can be punctured, wherein the technique of puncturing consists in removing elements of the code word so that they will not be transmitted. In place of transmitting a code word v=[v(0), v(1), . . . , v(N−1)], a word w=[w(0), w(1), . . . , w(M−1)] will be transmitted, where M<N. The puncturing must be done in a controlled way, so that the necessary redundancy is transmitted in order for the decodifier of the receiver to be able to evaluate the transmitted data. This puncturing is applied to both the data bits and the parity bits. A puncturing pattern can be defined as the sequence indicating which bits are to be transmitted and which are to be punctured, where said bit pattern can be periodic or aperiodic. In case there is no regularity, said pattern can be described with a vector pp of N positions, indicating with a 1 the bits to be transmitted and with a 0 the bits to be eliminated (punctured). Thanks to the puncturing technique, the data rate of the communication can be increased, since less redundancy is being sent. If the Hamming weight of the pattern pp is M, the total rate of the error codification system is R=K/M.
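The puncturing of a code word by a pattern vector pp can be sketched directly (the code word and pattern below are invented toy values, not from the patent's examples):

```python
def puncture(v, pp):
    """Keep bit v[i] when pp[i] == 1, drop it when pp[i] == 0, yielding
    the transmitted word w of M bits (M = Hamming weight of pp)."""
    return [bit for bit, keep in zip(v, pp) if keep]

v  = [1, 0, 1, 1, 0, 1]          # code word of N = 6 bits
pp = [1, 1, 1, 0, 1, 0]          # puncturing pattern, Hamming weight M = 4
w  = puncture(v, pp)             # transmitted word of M = 4 bits
print(w)                         # [1, 0, 1, 0]
```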
(44) For example, if we have a code with R=5/6 and block size N=5184 and we want to perform a puncturing in order to increase the rate to R=16/18, we can use the following puncturing pattern, which will yield a block with N=4860 bits:
(45)
(46) In order to implement LDPC codes, an electronic device is used, whether it be a program executed on a microprocessor or an FPGA or ASIC hardware implementation. Said device receives a block of data, calculates the parity bits, concatenates them with the information bits and submits them to the following phase of the transmitter to be adequately modulated and transmitted through the corresponding channel. The calculation of the parity bits may be done by means of the product with the generator matrix G or the solution of the system of equations presented previously.
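The generator-matrix route for calculating the code word can be sketched as follows; the systematic generator G below is a hypothetical toy, not one of the patent's matrices:

```python
def encode(u, G):
    """Code word v = u * G over GF(2); with a systematic G = [I | P]
    the first K bits of v are the data bits themselves, followed by
    the parity bits."""
    K, N = len(G), len(G[0])
    return [sum(u[i] * G[i][j] for i in range(K)) % 2 for j in range(N)]

# Hypothetical systematic generator for a toy K = 3, N = 6, R = 1/2 code.
G = [[1, 0, 0, 1, 1, 0],
     [0, 1, 0, 0, 1, 1],
     [0, 0, 1, 1, 0, 1]]
u = [1, 0, 1]
v = encode(u, G)
print(v)  # [1, 0, 1, 0, 1, 1]
```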
(47) The decodifier of LDPC codes is usually based on an iterative decodifier. A possible decodifier, among several state-of-the-art options, consists of an estimator which, upon receiving the word corresponding to the transmitted code, r=v+z, where z is additive channel noise, makes a noise estimation ẑ such that (r−ẑ)·H^T=0. In the case where the system makes use of the puncturing technique, before the decodifier there will be a unit that inserts an indicator in the punctured positions. This indicator directs the decodifier to estimate the proper value in those positions.
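The unit that re-expands a punctured word before the decodifier can be sketched as follows; representing the indicator as `None` is an implementation assumption, and the received word and pattern are invented examples:

```python
def depuncture(w, pp, erasure=None):
    """Re-expand a received punctured word to full length N, inserting
    an erasure indicator at every punctured position so the iterative
    decodifier treats those bits as unknown and estimates them."""
    it = iter(w)
    return [next(it) if keep else erasure for keep in pp]

pp = [1, 1, 1, 0, 1, 0]           # puncturing pattern of N = 6 positions
w  = [1, 0, 1, 0]                 # the M = 4 received bits
print(depuncture(w, pp))          # [1, 0, 1, None, 0, None]
```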
(48)
(49)