Abstract
A method for generating a binary GTP codeword, comprised of N structure stages wherein each stage comprises at least one BCH codeword with error correction capability greater than a prior stage and smaller than a next stage, includes: receiving a syndrome vector s of a new stage 0 binary BCH codeword y over a field GF(2.sup.m) that comprises t syndromes of length m bits, wherein the syndrome vector s comprises l-th Reed-Solomon (RS) symbols of t RS codewords whose information symbols are delta syndromes of all BCH codewords from stage 0 until stage n−1; and multiplying s by a right submatrix {tilde over (U)} of a matrix U, wherein U is an inverse of a parity matrix of a BCH code defined by t.sub.n, wherein the new binary BCH codeword is y={tilde over (U)}·s.
Claims
1. A computer implemented method for generating a binary Generalized Tensor Product (GTP) codeword, comprised of N structure stages wherein N is an integer greater than 1 and each stage is comprised of at least one BCH codeword with error correction capability greater than a prior stage and smaller than a next stage, the method executed by the computer comprising the steps of: receiving a new stage 0 binary BCH codeword y over a field GF(2.sup.m) from a communication channel; receiving a syndrome vector s of the new stage 0 binary BCH codeword y that comprises t syndromes of length m bits, wherein t=t.sub.n−t.sub.0, t.sub.0 is the error correction capability of the stage 0 BCH codeword, t.sub.n is an error correction capability of a stage n BCH codeword to which a new binary BCH codeword y will be added, wherein the syndrome vector s comprises l-th Reed-Solomon (RS) symbols of t RS codewords whose information symbols are delta syndromes of all BCH codewords from stage 0 until stage n−1, wherein l indexes the BCH codeword to which y will be added; and multiplying s by a right submatrix {tilde over (U)} of a matrix U, wherein U is an inverse of a parity matrix of a BCH code defined by t.sub.n, wherein the submatrix is of size mt.sub.n×mt, wherein the new binary BCH codeword is y={tilde over (U)}·s.
2. The method of claim 1, wherein multiplying s by the right submatrix of matrix U comprises multiplying each component of the syndrome vector s by a component of the submatrix by a binary logic function in a single hardware cycle to yield a component product, wherein the submatrix is calculated before receiving the syndrome vector s of the new binary BCH codeword y, and multiplexing the component products into a single output that represents the new binary BCH codeword y.
3. The method of claim 2, wherein the syndrome vector s is demultiplexed into separate matrices.
4. The method of claim 1, wherein multiplying s by the right submatrix of matrix U further comprises: multiplying each component of the syndrome vector s by a component of a reduced submatrix {circumflex over (U)} by a binary logic function in a single hardware cycle to yield a component product, wherein the reduced submatrix {circumflex over (U)} is defined by {circumflex over (u)}.sub.k(x)={tilde over (u)}.sub.k(x)/g.sub.0(x), wherein the columns of submatrices {tilde over (U)} and {circumflex over (U)} are represented as polynomials and each column {circumflex over (u)}.sub.k(x) of {circumflex over (U)} is the corresponding column {tilde over (u)}.sub.k(x) of {tilde over (U)} divided by g.sub.0(x), and are calculated before receiving the syndrome vector s of the new binary BCH codeword y; multiplexing the component products into a temporary output; and convolving the temporary output with a common multiplier g.sub.0(x) to yield the single output that represents the new binary BCH codeword y, wherein g.sub.0(x) is a common multiplier of all columns of the submatrix {tilde over (U)} represented as polynomials and is calculated before receiving the syndrome vector s of the new binary BCH codeword y.
5. The method of claim 4, wherein the syndrome vector s is demultiplexed into separate matrices.
6. The method of claim 4, wherein convolving the temporary output with a common multiplier g.sub.0(x) is performed over multiple clock cycles.
7. The method of claim 1, wherein multiplying s by a right submatrix of a matrix U further comprises: calculating a polynomial h.sub.j(x) by multiplying the syndrome vector s by a matrix H.sub.j formed by concatenating the m polynomials h.sub.j,l as columns, wherein m.sub.i(x) is an i-th minimal polynomial of the BCH code C.sub.1 with correction capability of t.sub.1, and the polynomials h.sub.j,l and M.sub.j(x) are calculated before receiving the syndrome vector s of the new binary BCH codeword y; multiplying h.sub.j(x) by M.sub.j(x), and summing over j=0 to t−1; multiplexing the sums of the products h.sub.j(x)·M.sub.j(x) into a temporary output; and convolving the temporary output with a common multiplier g.sub.0(x) to yield the single output that represents the new binary BCH codeword y, wherein g.sub.0(x) is a common multiplier of all columns of the submatrix represented as polynomials and is calculated before receiving the syndrome vector s of the new binary BCH codeword y.
8. The method of claim 7, wherein the syndrome vector s is demultiplexed into separate sets of H.sub.j and M.sub.j.
9. The method of claim 7, wherein convolving the temporary output with a common multiplier g.sub.0(x) is performed over multiple clock cycles.
10. A computer processor configured to execute a program of instructions to perform method steps for generating a binary Generalized Tensor Product (GTP) codeword, comprised of N structure stages wherein N is an integer greater than 1 and each stage is comprised of at least one BCH codeword with error correction capability greater than a prior stage and smaller than a next stage, the method comprising the steps of: receiving a new stage 0 binary BCH codeword y over a field GF(2.sup.m) from a communication channel; receiving a syndrome vector s of the new stage 0 binary BCH codeword y that comprises t syndromes of length m bits, wherein t=t.sub.n−t.sub.0, t.sub.0 is the error correction capability of the stage 0 BCH codeword, t.sub.n is an error correction capability of a stage n BCH codeword to which a new binary BCH codeword y will be added, wherein the syndrome vector s comprises l-th Reed-Solomon (RS) symbols of t RS codewords whose information symbols are delta syndromes of all BCH codewords from stage 0 until stage n−1, wherein l indexes the BCH codeword to which y will be added; and multiplying s by a right submatrix {tilde over (U)} of a matrix U, wherein U is an inverse of a parity matrix of a BCH code defined by t.sub.n, wherein the submatrix is of size mt.sub.n×mt, wherein the new binary BCH codeword is y={tilde over (U)}·s.
11. The computer processor of claim 10, wherein multiplying s by the right submatrix of matrix U comprises multiplying each component of the syndrome vector s by a component of the submatrix by a binary logic function in a single hardware cycle to yield a component product, wherein the submatrix is calculated before receiving the syndrome vector s of the new binary BCH codeword y, and multiplexing the component products into a single output that represents the new binary BCH codeword y.
12. The computer processor of claim 11, wherein the syndrome vector s is demultiplexed into separate matrices.
13. The computer processor of claim 10, wherein multiplying s by the right submatrix of matrix U further comprises: multiplying each component of the syndrome vector s by a component of a reduced submatrix {circumflex over (U)} by a binary logic function in a single hardware cycle to yield a component product, wherein the reduced submatrix {circumflex over (U)} is defined by {circumflex over (u)}.sub.k(x)={tilde over (u)}.sub.k(x)/g.sub.0(x), wherein the columns of submatrices {tilde over (U)} and {circumflex over (U)} are represented as polynomials and each column {circumflex over (u)}.sub.k(x) of {circumflex over (U)} is the corresponding column {tilde over (u)}.sub.k(x) of {tilde over (U)} divided by g.sub.0(x), and are calculated before receiving the syndrome vector s of the new binary BCH codeword y; multiplexing the component products into a temporary output; and convolving the temporary output with a common multiplier g.sub.0(x) to yield the single output that represents the new binary BCH codeword y, wherein g.sub.0(x) is a common multiplier of all columns of the submatrix {tilde over (U)} represented as polynomials and is calculated before receiving the syndrome vector s of the new binary BCH codeword y.
14. The computer processor of claim 13, wherein the syndrome vector s is demultiplexed into separate matrices.
15. The computer processor of claim 13, wherein convolving the temporary output with a common multiplier g.sub.0(x) is performed over multiple clock cycles.
16. The computer processor of claim 10, wherein multiplying s by a right submatrix of a matrix U further comprises: calculating a polynomial h.sub.j(x) by multiplying the syndrome vector s by a matrix H.sub.j formed by concatenating the m polynomials h.sub.j,l as columns, wherein m.sub.i(x) is an i-th minimal polynomial of the BCH code C.sub.1 with correction capability of t.sub.1, and the polynomials h.sub.j,l and M.sub.j(x) are calculated before receiving the syndrome vector s of the new binary BCH codeword y; multiplying h.sub.j(x) by M.sub.j(x), and summing over j=0 to t−1; multiplexing the sums of the products h.sub.j(x)·M.sub.j(x) into a temporary output; and convolving the temporary output with a common multiplier g.sub.0(x) to yield the single output that represents the new binary BCH codeword y, wherein g.sub.0(x) is a common multiplier of all columns of the submatrix represented as polynomials and is calculated before receiving the syndrome vector s of the new binary BCH codeword y.
17. The computer processor of claim 16, wherein the syndrome vector s is demultiplexed into separate sets of H.sub.j and M.sub.j.
18. The computer processor of claim 16, wherein convolving the temporary output with a common multiplier g.sub.0(x) is performed over multiple clock cycles.
19. The computer processor of claim 10, wherein the computer processor is one or more of an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), and firmware.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) FIG. 1 depicts an exemplary encoding/decoding, according to embodiments of the disclosure.
(2) FIG. 2 illustrates information data encoded using a BCH code to obtain a codeword, according to embodiments of the disclosure.
(3) FIG. 3 illustrates exemplary GTP word and delta syndromes (DS) tables, according to embodiments of the disclosure.
(4) FIG. 4 illustrates an exemplary hardware realization of the y_vec calculation, according to embodiments of the disclosure.
(5) FIG. 5 illustrates an exemplary hardware realization of an embodiment of a y_vec calculation divided into two phases, according to embodiments of the disclosure.
(6) FIG. 6 illustrates an exemplary hardware realization of an embodiment of a y_vec calculation divided into three phases, according to embodiments of the disclosure.
(7) FIG. 7 is a block diagram of a system for efficiently encoding generalized tensor product codes, according to an embodiment of the disclosure.
DETAILED DESCRIPTION OF EXEMPLARY EMBODIMENTS
(8) Exemplary embodiments of the disclosure as described herein generally provide systems and methods for efficiently encoding Generalized Tensor Product (GTP) codes. While embodiments are susceptible to various modifications and alternative forms, specific embodiments thereof are shown by way of example in the drawings and will herein be described in detail. It should be understood, however, that there is no intent to limit the disclosure to the particular forms disclosed; on the contrary, the disclosure is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the disclosure.
(9) A scheme according to an embodiment of the disclosure for encoding GTP codes concatenates several BCH code words (frames) with several different correction capabilities into a single code word, referred to herein as a GTP codeword, where those frames are connected using several Reed-Solomon code words to achieve mutual correction capability. A binary sequence added to each frame, i.e., to a single BCH code word, to form the connection between the above-mentioned BCH code words is referred to herein as a y_vec. In GF(2), the binary sequence can be XOR-ed to each frame.
(10) For a linear code C of length n and dimension k over a finite field F, if H∈F.sup.(n−k)×n is a parity-check matrix of C and y∈F.sup.n is some vector, then the syndrome of y with respect to H is defined as Hy∈F.sup.n−k. Write BCH(m, t) for the primitive binary BCH code of length 2.sup.m−1 and designed correction radius t. When C is either BCH(m, t) or a shortened BCH(m, t), then typically n−k=mt, so that syndromes are typically of length mt. In fact, for BCH codes, it is useful to calculate syndromes with respect to a very particular parity check matrix. Let α∈GF(2.sup.m) be a primitive element. Let C be as above, and let n be the length of C. For a vector y=(y.sub.0, . . . , y.sub.n−1).sup.T∈GF(2).sup.n, let y(X):=y.sub.0+y.sub.1X+ . . . +y.sub.n−1X.sup.n−1 be the associated polynomial. For odd j∈{1, . . . , 2t−1}, define S.sub.j:=y(α.sup.j)∈GF(2.sup.m). The syndrome of y is defined as S=S.sup.(t):=(S.sub.1, S.sub.3, . . . , S.sub.2t−1).sup.T. Note that y∈C if and only if S=0.
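The syndrome definition above can be made concrete with a short sketch. The following Python example is illustrative only and is not taken from the patent: it assumes m=4 and the primitive polynomial x.sup.4+x+1 (an arbitrary choice for demonstration), builds GF(2.sup.4) exp/log tables, and evaluates S.sub.j=y(α.sup.j) for odd j.

```python
# Illustrative sketch: syndromes of a binary word for a BCH code over
# GF(2^m), with m = 4 and primitive polynomial x^4 + x + 1 (assumed
# values for demonstration; a real design fixes its own m and polynomial).

M = 4
PRIM_POLY = 0b10011  # x^4 + x + 1

# Build exp/log tables for GF(2^4) with alpha represented as 0b0010.
exp_table = [0] * (2 ** M - 1)
log_table = [0] * (2 ** M)
val = 1
for i in range(2 ** M - 1):
    exp_table[i] = val
    log_table[val] = i
    val <<= 1
    if val & (1 << M):
        val ^= PRIM_POLY

def gf_mul(a, b):
    """Multiply two GF(2^m) elements via the log/exp tables."""
    if a == 0 or b == 0:
        return 0
    return exp_table[(log_table[a] + log_table[b]) % (2 ** M - 1)]

def syndrome(y_bits, t):
    """S_j = y(alpha^j) for odd j in {1, 3, ..., 2t-1}."""
    syndromes = []
    for j in range(1, 2 * t, 2):
        s, x = 0, 1                      # x runs over alpha^(j*i)
        alpha_j = exp_table[j % (2 ** M - 1)]
        for bit in y_bits:               # y(X) = y_0 + y_1 X + ...
            if bit:
                s ^= x                   # addition in GF(2^m) is XOR
            x = gf_mul(x, alpha_j)
        syndromes.append(s)
    return syndromes
```

As expected from the definition, the all-zero word has an all-zero syndrome, and a word with a single 1 at position i yields S.sub.j=α.sup.ij.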
(11) Suppose now that there are two correction radii, t.sub.1<t.sub.2. Then for any y, S.sup.(t.sup.1.sup.) appears as the first t.sub.1 entries of S.sup.(t.sup.2.sup.). The remaining t.sub.2−t.sub.1 entries of S.sup.(t.sup.2.sup.) will be referred to as the delta syndrome vector. Each entry of the delta syndrome vector is referred to as a (single) delta syndrome. Note that when y∈BCH(m, t.sub.1), the first t.sub.1 entries of S.sup.(t.sup.2.sup.) are zero. Note also that each element of GF(2.sup.m), and in particular each entry of a syndrome, can be represented as an m-bit binary vector once a basis for GF(2.sup.m) over GF(2) is fixed.
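As a small illustration (not from the patent; the numeric values are placeholders), the split of S.sup.(t.sup.2.sup.) into S.sup.(t.sup.1.sup.) and the delta syndrome vector, together with the m-bit representation of a single entry, can be sketched as:

```python
# Illustrative sketch of the delta-syndrome split for radii t1 < t2.
# Syndrome entries are GF(2^m) elements encoded as plain integers.

def split_syndrome(s_t2, t1):
    """Return (S^(t1), delta syndrome vector) from S^(t2)."""
    return s_t2[:t1], s_t2[t1:]

def to_bits(elem, m):
    """Represent one GF(2^m) syndrome entry as an m-bit binary vector."""
    return [(elem >> i) & 1 for i in range(m)]
```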
(12) Each GTP frame is a BCH frame, and each may have a different correction capability. According to embodiments, t.sub.i is the correction capability of a BCH frame belonging to stage number i, where stage defines a consecutive number of BCH frames with the same correction capability.
(13) A GTP projected encoding scheme according to an embodiment of the disclosure may require that additional data, also referred to herein as projected data, be encoded within the BCH frames, along with the information and parity data. This is done by calculating a binary vector, herein called y_vec, from the delta syndrome vector and XOR-ing it with the corresponding BCH frame. The trivial matrices used for the y_vec calculation y=U·s are large and require a large hardware area, where U will be defined below. Each internal t requires its own matrix, so supporting more code rates and coding stages increases the complexity of an encoder.
(14) According to embodiments, an added y_vec fulfills three conditions: (1) it is a BCH code word in the BCH code of the first stage; (2) the calculation of the delta syndrome of a y_vec yields a parity symbol of a Reed-Solomon (RS) code word calculated on the lower-stage frames; and (3) the y_vec is all zeros in the first k.sub.i bits to maintain the systematic structure of the BCH frame, k.sub.i being the number of information bits of the i.sup.th BCH frame.
(15) FIG. 3 illustrates exemplary GTP word and delta syndromes (DS) tables, according to embodiments of the disclosure. In particular, the left table presents an example of a GTP code word, having 11 frames spanned over 4 stages. Starting from the top of the table, the first 3 frames belong to stage 0, thus each frame is an n-bit BCH code word with a correction capability of t.sub.0 bits. Each BCH codeword at this stage comprises K.sub.0 information bits 11 and L.sub.0=mt.sub.0 parity bits 12. For higher stages, the correction capability is higher and thus there are more parity bits and fewer information bits per frame. Thus, as illustrated in the table, stage 1 has K.sub.1 information bits and L.sub.1 parity bits, stage 2 has K.sub.2 information bits and L.sub.2 parity bits, and stage 3 has K.sub.3 information bits and L.sub.3 parity bits. Note that K.sub.0>K.sub.1>K.sub.2>K.sub.3, L.sub.0<L.sub.1<L.sub.2<L.sub.3, and that K.sub.i+L.sub.i=n for i=0 to 3. The right table is a delta syndrome (DS) table for those 11 frames. In this example, there are 4 delta syndromes S.sub.j,i for each frame, where j is the frame number and i is the delta syndrome number (indices start from zero). The DSs 13 are calculated over the frame, and the DSs 14 are forced DSs, calculated by RS (Reed-Solomon) encoding. For example, for the first DS there are 3 DSs calculated on the first 3 frames, and those are used as the information of the RS code, which has 8 parity symbols (used as forced DSs for next frames).
(16) According to embodiments, as described above, the input to the y_vec calculator module per frame is a plurality of DSs. A y_vec according to an embodiment can force the frame to which it is added to be a code word in code C.sub.0 with t.sub.0 correction capability and to have its DSs comply with the corresponding parity symbols of the RS codes. For a simple structure in which there are only two stages, i.e., frames of code C.sub.0 with correction capability t.sub.0 and frames of code C.sub.1 with correction capability t.sub.1, where Δt.sub.1=t.sub.1−t.sub.0, this can be viewed as:
(17) H.sub.1·(x+y)=[0, . . . , 0, s.sub.0, . . . , s.sub.Δt.sub.1.sub.−1].sup.T,(1)
where the number of rows in H.sub.0 is m·t.sub.0, the number of rows in H.sub.1 is m·t.sub.1, H.sub.0 comprises the first m·t.sub.0 rows of H.sub.1, s is a vector of forced delta syndromes, i.e., RS parity symbols, and each s.sub.j has m bits. x is a code word in C.sub.1 and thus has zero syndrome for the entire H.sub.1 matrix, and y is only in C.sub.0 and thus has zero syndromes for H.sub.0, but non-zero delta syndromes for the remaining rows of H.sub.1. In addition, adding y to x must not change the systematic nature of x; that is, y must be all zeros, except for the last m·t.sub.1 bits: y=[0, . . . , 0, {tilde over (y)}].sup.T.
(18) Embodiments of the disclosure can find y that complies with the above formula. A method of calculating y can find a matrix U that fulfills:
(19) {tilde over (y)}=U·[0, . . . , 0, s].sup.T.(2)
(20) According to an embodiment, a matrix U can be found as follows. First, since a code according to an embodiment is cyclic, there exists a matrix U such that:
U·H.sub.1=[A;I],(3)
where I is the identity matrix of size mt.sub.1×mt.sub.1, A is a matrix of size mt.sub.1×(n−mt.sub.1), and the symbol ; denotes concatenation. Multiplying both sides of EQ. (1) on the left by U produces the following:
(21) [A;I]·(x+y)=U·[0, . . . , 0, s].sup.T.
Hence, since H.sub.1·x=0 and y=[0, . . . , 0, {tilde over (y)}].sup.T:
(22) [A;I]·[0, . . . , 0, {tilde over (y)}].sup.T=U·[0, . . . , 0, s].sup.T.
(23) It can be seen that A multiplies only zeros, and thus its actual values do not matter, and this equation becomes:
(24) {tilde over (y)}=U·[0, . . . , 0, s].sup.T,
which is the desired result. Going back to EQ. (3), knowing that A is irrelevant, the equation can be formulated as:
U·{tilde over (H)}.sub.1=I,(4)
where {tilde over (H)}.sub.1 is the right-most mt.sub.1×mt.sub.1 submatrix of H.sub.1. Thus, according to embodiments, U can be found by inverting the matrix {tilde over (H)}.sub.1.
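Since U is obtained by inverting the binary matrix {tilde over (H)}.sub.1 over GF(2), the offline step can be sketched with plain Gauss-Jordan elimination using only XOR row operations. This is an illustrative sketch, not the patent's implementation; the list-of-lists 0/1 representation is an assumption for demonstration.

```python
# Illustrative sketch: invert a square 0/1 matrix over GF(2), as needed
# to compute U from the right-most square submatrix of H_1. Rows are
# lists of 0/1 bits; all row operations are XORs.

def gf2_inverse(mat):
    """Invert a square 0/1 matrix over GF(2); raises if singular."""
    n = len(mat)
    # Augment [mat | I] and row-reduce with XOR operations only.
    aug = [row[:] + [1 if i == j else 0 for j in range(n)]
           for i, row in enumerate(mat)]
    for col in range(n):
        pivot = next((r for r in range(col, n) if aug[r][col]), None)
        if pivot is None:
            raise ValueError("matrix is singular over GF(2)")
        aug[col], aug[pivot] = aug[pivot], aug[col]
        for r in range(n):
            if r != col and aug[r][col]:
                aug[r] = [a ^ b for a, b in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]
```

Because this runs offline, its cost does not affect the encoder datapath; only the resulting constant matrix is realized in hardware.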
(25) At this point, the matrix U is an mt.sub.1×mt.sub.1 binary matrix. According to embodiments, U can be calculated offline, and can be realized in hardware by a network of XOR gates; that is, each bit in the vector {tilde over (y)} can be generated by XOR-ing several bits of s. However, U may be very large. Taking another look at EQ. (2), it can be seen that U is multiplying mt.sub.0 zeros, thus there is no need to hold the full U matrix: it is enough to hold only a matrix of size mt.sub.1×m·Δt.sub.1, which is the right side of U and is denoted herein {tilde over (U)}. From a HW point of view, this is a reduction of complexity, or more explicitly, a reduction of hardware area (gate count) and power:
{tilde over (y)}={tilde over (U)}·s.(5)
(26) FIG. 4 illustrates an exemplary hardware realization of the y_vec calculation, {tilde over (y)}={tilde over (U)}·s. For the example in FIG. 3, the first y_vec is calculated for the 4.sup.th frame, and s=s.sub.3,0. FIG. 4 is a schematic illustration of a y_vec calculation block. The inputs are the forced DSs, denoted as S_vec, which are de-multiplexed by demultiplexer 22 to different multiplication modules 23a, . . . , 23n according to the current stage, and the outputs from the multiplication modules are multiplexed by multiplexer 24 back to a single output. Note that the S_vec is different for each frame, both by size and by data. Each multiplication module 23a, . . . , 23n is a logic function realizing one component of the matrix multiplication.
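Each multiplication module in FIG. 4 computes one output bit as an XOR of selected bits of s. A behavioral Python sketch of this GF(2) matrix-vector product follows; the matrix and vector values in the test are illustrative placeholders, not taken from a real code.

```python
# Illustrative sketch of y~ = U~ . s over GF(2): each row of the matrix
# selects which bits of s are XOR-ed together, which is exactly what a
# fixed XOR-gate network computes in hardware.

def gf2_matvec(rows, s_bits):
    """Multiply a 0/1 matrix (list of rows) by a 0/1 vector over GF(2)."""
    out = []
    for row in rows:
        bit = 0
        for r, s in zip(row, s_bits):
            bit ^= r & s        # AND selects the bit, XOR accumulates
        out.append(bit)
    return out
```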
(27) According to embodiments, the complexity of EQ. (5) can be further reduced. Since, from EQ. (4), U·{tilde over (H)}.sub.1=I, it can be shown that the following is also valid:
{tilde over (H)}.sub.1·U=I.
(28) Since H.sub.1 is the syndrome matrix, the multiplication {tilde over (H)}.sub.1·U can be viewed as checking whether the (zero-padded) columns of U are code words in code C.sub.1. Looking at a single column of U as the coefficients of a polynomial, such a polynomial would be a code word in BCH code C.sub.1 if it were a multiple of all the minimal polynomials generating this code. But, since the multiplication {tilde over (H)}.sub.1·U does not result in a zero matrix (there is a single 1 in each result column), none of the columns of U is a code word in C.sub.1. Nevertheless, since every m rows of H.sub.1 check whether a polynomial is divisible by a specific minimal polynomial, every column of U, treated as a polynomial, is divisible by all the minimal polynomials generating the code except a single minimal polynomial. The multiplication of {tilde over (H)}.sub.1 by {tilde over (U)} has the same effect, except that the result is a matrix of the form:
{tilde over (H)}.sub.1·{tilde over (U)}=[0;I].sup.T.
(29) The zero matrix is of size mt.sub.0×m·Δt.sub.1, which means that all columns of {tilde over (U)}, treated as polynomials, are multiples of all minimal polynomials up to t.sub.0, and of all higher minimal polynomials but one. According to embodiments, the product of all minimal polynomials up to t.sub.0 will be referred to as g.sub.0(x), which is a common multiplier of all the columns of {tilde over (U)}. From EQ. (5):
{tilde over (y)}={tilde over (U)}·s=Σ.sub.k=0.sup.m·Δt.sup.1.sup.−1s.sub.k·{tilde over (u)}.sub.k,
where {tilde over (u)}.sub.k is the k-th column of {tilde over (U)}, or in a polynomial representation:
{tilde over (y)}(x)=Σ.sub.k=0.sup.m·Δt.sup.1.sup.−1s.sub.k·{tilde over (u)}.sub.k(x)=g.sub.0(x)·Σ.sub.k=0.sup.m·Δt.sup.1.sup.−1s.sub.k·{circumflex over (u)}.sub.k(x),(6)
where:
{circumflex over (u)}.sub.k(x)={tilde over (u)}.sub.k(x)/g.sub.0(x),(7)
and as described above, {tilde over (u)}.sub.k(x) is divisible by g.sub.0(x).
(30) So, according to embodiments, the matrix {tilde over (U)} can be replaced with a new matrix, {circumflex over (U)}, of size m·Δt.sub.1×m·Δt.sub.1, whose columns can be calculated offline using EQ. (7). This matrix has a much reduced complexity relative to U. To calculate {tilde over (y)}, another multiplication, i.e., a polynomial coefficients convolution, is performed after the matrix multiplication:
{tilde over (y)}(x)=({circumflex over (U)}·s)(x)*g.sub.0(x).
(31) The additional convolution can be performed with a very low complexity by a multi-cycle implementation, thus its added complexity is negligible.
(32) Thus, according to embodiments, the result {tilde over (y)}={tilde over (U)}·s can be achieved in two phases: (1) tmp={circumflex over (U)}·s; and (2) {tilde over (y)}=conv(tmp, g.sub.0(x)). This result has the following properties: (1) the {circumflex over (U)} matrices are significantly smaller than the original U matrices; (2) g.sub.0(x) is a polynomial common to all {circumflex over (U)} matrices; and (3) multiplication by g.sub.0(x) can also be realized as a matrix, i.e., an XOR gates network. Moreover, multiplication by g.sub.0(x) can be realized in a multi-cycle format, which enables reuse of a small HW over multiple clock periods, thus further decreasing the matrix size.
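The two-phase computation can be sketched in Python as follows. The reduced matrix and g.sub.0(x) used in the test are small placeholder values, not derived from a real BCH code; polynomials are coefficient lists, lowest degree first.

```python
# Illustrative sketch of the two-phase y_vec computation:
# phase 1 multiplies s by the reduced matrix over GF(2), and phase 2
# convolves the result with the common multiplier g_0(x) (binary
# polynomial multiplication, i.e., carry-less).

def gf2_matvec(rows, s_bits):
    """GF(2) matrix-vector product (matrix as list of 0/1 rows)."""
    return [sum(r & s for r, s in zip(row, s_bits)) % 2 for row in rows]

def gf2_conv(a_bits, g_bits):
    """Multiply two binary polynomials given as coefficient lists."""
    out = [0] * (len(a_bits) + len(g_bits) - 1)
    for i, a in enumerate(a_bits):
        if a:
            for j, g in enumerate(g_bits):
                out[i + j] ^= g          # shift-and-XOR accumulation
    return out

def y_vec(reduced_u, s_bits, g0):
    tmp = gf2_matvec(reduced_u, s_bits)  # phase 1: tmp = U^ . s
    return gf2_conv(tmp, g0)             # phase 2: y~ = tmp * g_0(x)
```

For example, with the identity as the placeholder reduced matrix and g.sub.0(x)=1+x, the two phases reduce to multiplying the syndrome polynomial by 1+x.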
(33) FIG. 5 illustrates an exemplary hardware realization of an embodiment of a y_vec calculation divided into two phases as described above. The hardware configuration of FIG. 5 is substantially similar to that of FIG. 4, and thus only differences between the configurations will be described. The output of the multiplexer 24 is tmp, and a convolution {tilde over (y)}=conv(tmp, g.sub.0(x)) according to an embodiment is implemented by the block 35. A convolution according to an embodiment is a finite impulse response (FIR) filter, and thus can be performed over several clock cycles using a state (memory) register, instead of in a single clock cycle, which enables using a smaller logic function.
(34) Embodiments of the disclosure described above utilize only the zero part of {tilde over (H)}.sub.1·{tilde over (U)}. However, as described above, since the result is of the form [0;I].sup.T, there exist more common characteristics of the columns of {tilde over (U)}: every batch of m columns is a multiple of g.sub.0(x), as above, but also of all other minimal polynomials, except one. According to embodiments, this can be formulated as follows:
(35) {tilde over (y)}(x)=g.sub.0(x)·Σ.sub.j=0.sup.Δt.sup.1.sup.−1Σ.sub.l=0.sup.m−1s.sub.j,l·M.sub.j(x)·h.sub.j,l(x),
where s.sub.j,l is defined as s.sub.mj+l, the l-th bit in the j-th delta syndrome,
(36) M.sub.j(x)=Π.sub.i≠jm.sub.i(x),
where the product runs over the minimal polynomials of C.sub.1 above t.sub.0, m.sub.i(x) is the i-th minimal polynomial, and h.sub.j,l(x)={circumflex over (u)}.sub.mj+l(x)/M.sub.j(x). As before, all of the polynomials h.sub.j,l(x) and M.sub.j(x) can be calculated offline.
(37) According to embodiments, by defining an m×m matrix H.sub.j to be the concatenation of the m polynomials h.sub.j,l, l=0, . . . , m−1, as columns, it can be seen that:
Σ.sub.l=0.sup.m−1s.sub.j,l·h.sub.j,l(x)=H.sub.j·s.sub.j=h.sub.j(x),
and a final formula according to an embodiment is obtained:
{tilde over (y)}(x)=g.sub.0(x)·Σ.sub.j=0.sup.Δt.sup.1.sup.−1M.sub.j(x)·h.sub.j(x).
(38) According to another embodiment of the disclosure, the calculation is divided into three phases: 1. calculating h.sub.j(x); 2. multiplying by M.sub.j(x) and summing; 3. multiplying by g.sub.0(x).
(39) In a first section, each delta syndrome is multiplied by its own offline-calculated matrix H.sub.j. A third section is exactly as described in an embodiment as illustrated in FIG. 5. A second section according to another embodiment combines both multiplication and summation. Note that each hardware stage requires its own H and M matrices for all delta syndromes that participate in that stage; that is, there is different hardware for a given DS when it is used in stage i and in stage j. The multiplication of two polynomials represented by coefficient vectors can be implemented in a multi-cycle operation as follows:
h(x)=h.sub.n−1x.sup.n−1+h.sub.n−2x.sup.n−2+ . . . +h.sub.1x+h.sub.0,
and
h(x)·M(x)=h.sub.0·M(x)+x·h.sub.1·M(x)+ . . . +h.sub.n−1·x.sup.n−1·M(x).
(40) Since for binary polynomials, multiplication by x.sup.j is a shift left by j, the multiplication of the two polynomials can be realized by adding each bit of one polynomial multiplied by the other polynomial with the corresponding shift. Thus, if the coefficient of x.sup.m−1 (MSB) of h.sub.j(x) is multiplied by M.sub.j(x) in the first cycle, and it is added to the result that was shifted by one bit in the multiplication of the coefficient of x.sup.m−2 of h.sub.j(x) by M.sub.j(x) in the second cycle, and so on, the result of M.sub.j(x)·h.sub.j(x) is obtained. Since a sum over j is desired, the same can be performed in a matrix format; that is, define a matrix G which is a concatenation of all M.sub.j as columns, and in each cycle i, over m cycles, multiply G by a vector v.sub.i=[h.sub.0.sup.(i), h.sub.1.sup.(i), . . . , h.sub.Δt.sub.1.sub.−1.sup.(i)].sup.T and add it to the result shifted by i bits, where h.sub.j.sup.(i) is defined as the coefficient of x.sup.i of h.sub.j(x).
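The shift-and-XOR schedule described above can be sketched as follows; polynomials are coefficient lists (lowest degree first), the accumulator is a plain integer whose bits are the result coefficients, and one coefficient of every h.sub.j is consumed per "cycle", MSB first, with one shift between cycles. The helper is illustrative, not the patent's hardware.

```python
# Illustrative sketch of the multi-cycle computation of
# sum_j h_j(x) . M_j(x) over GF(2), MSB-first with one shift per cycle.

def multicycle_sum(h_list, m_list):
    """h_list: coefficient lists of the h_j (all the same length);
    m_list: coefficient lists of the M_j. Returns the GF(2) sum of
    products as an integer whose bit i is the coefficient of x^i."""
    deg = len(h_list[0])                  # cycles = number of h coefficients
    acc = 0                               # accumulator, as in a HW register
    m_ints = [sum(c << k for k, c in enumerate(m)) for m in m_list]
    for i in range(deg - 1, -1, -1):      # one cycle per coefficient, MSB first
        acc <<= 1                         # shift the previous partial result
        for h, m_int in zip(h_list, m_ints):
            if h[i]:
                acc ^= m_int              # XOR in M_j(x) when h_j,i = 1
    return acc
```

This is Horner's scheme over GF(2): after the last cycle the accumulator holds Σ.sub.j h.sub.j(x)·M.sub.j(x), matching the shifted-sum expansion above.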
(41) FIG. 6 illustrates an exemplary hardware realization of an embodiment of a y_vec calculation divided into three phases as described above. However, embodiments of the disclosure can encompass any number of phases. In the figure, N is the number of stages, the binary BCH codes are defined over GF(2.sup.m), and dt.sub.n=t.sub.n−t.sub.0. In addition, for clarity, only blocks on the left-hand side have reference numbers. The figure depicts a HW configuration where multiplication by G is done in m cycles, where the input for each cycle is one bit from each register, and the bits are chosen according to the cycle number. For each cycle, the output bits are summed with one shift relative to a previous cycle. Referring to the figure, blocks 41 represent the components of each of the dt.sub.N−1 syndrome vectors, blocks 42 represent the sums Σ.sub.l=0.sup.m−1s.sub.j,l·h.sub.j,l(x)=H.sub.j·s.sub.j=h.sub.j(x), whose outputs are stored in registers 43, blocks 44 represent the sums Σ.sub.j=0.sup.Δt.sup.1.sup.−1M.sub.j(x)·h.sub.j(x) with the m shifts, as discussed above, whose results are summed and stored in registers 46, block 47 is a multiplexer, and block 48 represents the final multiplication by g.sub.0(x), which, as described above, may be done in a multi-cycle operation.
(42) System Implementations
(43) It is to be understood that embodiments of the present disclosure can be implemented in various forms of hardware, software, firmware, special purpose processes, or a combination thereof. In one embodiment, the present disclosure can be implemented in hardware as an application-specific integrated circuit (ASIC), or as a field programmable gate array (FPGA). In another embodiment, the present disclosure can be implemented in software as an application program tangibly embodied on a computer readable program storage device. The application program can be uploaded to, and executed by, a machine comprising any suitable architecture.
(44) FIG. 7 is a block diagram of a system for efficiently encoding generalized tensor product codes, according to an embodiment of the disclosure. Referring now to FIG. 7, a computer system 51 for implementing an embodiment of the present disclosure can comprise, inter alia, a central processing unit (CPU) 52, a memory 53 and an input/output (I/O) interface 54. The computer system 51 is generally coupled through the I/O interface 54 to a display 55 and various input devices 56 such as a mouse and a keyboard. The support circuits can include circuits such as cache, power supplies, clock circuits, and a communication bus. The memory 53 can include random access memory (RAM), read only memory (ROM), disk drive, tape drive, etc., or a combination thereof. The present disclosure can be implemented as a routine 57 that is stored in memory 53 and executed by the CPU 52 to process the signal from the signal source 58. As such, the computer system 51 is a general purpose computer system that becomes a specific purpose computer system when executing the routine 57 of the present disclosure. Alternatively, as described above, embodiments of the present disclosure can be implemented as an ASIC or FPGA 57 that is in signal communication with the CPU 52 to process the signal from the signal source 58.
(45) The computer system 51 also includes an operating system and micro instruction code. The various processes and functions described herein can either be part of the micro instruction code or part of the application program (or combination thereof) which is executed via the operating system. In addition, various other peripheral devices can be connected to the computer platform such as an additional data storage device and a printing device.
(46) It is to be further understood that, because some of the constituent system components and method steps depicted in the accompanying figures can be implemented in software, the actual connections between the systems components (or the process steps) may differ depending upon the manner in which an embodiment of the present disclosure is programmed. Given the teachings of the present disclosure provided herein, one of ordinary skill in the related art will be able to contemplate these and similar implementations or configurations of the present disclosure.
(47) While embodiments of the present disclosure have been described in detail with reference to exemplary embodiments, those skilled in the art will appreciate that various modifications and substitutions can be made thereto without departing from the spirit and scope of the disclosure as set forth in the appended claims.