Convolutional de-interleaver and convolutional de-interleaving method
10025709 · 2018-07-17
Abstract
A convolutional de-interleaver for processing multiple groups of convolutional interleaved data is provided. The groups of convolutional interleaved data include multiple sets of convolutional interleaved data formed by performing a convolutional interleaving process on multiple groups of non-interleaved data. Each group of non-interleaved data includes L sets of data, where L is a positive integer. The convolutional de-interleaver includes: an input data buffer, which buffers the groups of convolutional interleaved data; a memory controller, which accesses the convolutional interleaved data buffered in the input data buffer with a memory to perform a convolutional de-interleaving process, the memory address of each stored set of convolutional interleaved data being determined according to a corresponding delay depth, the value L and a delay depth difference corresponding to the set of data; and an output data buffer, which buffers the multiple groups of convolutional de-interleaved data read from the memory.
Claims
1. A convolutional de-interleaver, applied to process a plurality of groups of convolutional interleaved data, the groups of convolutional interleaved data comprising a plurality of sets of convolutional interleaved data formed by performing a convolutional interleaving process on a plurality of groups of non-interleaved data, each of the groups of non-interleaved data comprising L sets of data, two successive sets of data in each group of non-interleaved data corresponding to a delay depth difference after the convolutional interleaving process, L being a positive integer, the convolutional de-interleaver comprising: an input data buffer, that buffers the groups of convolutional interleaved data; a memory controller, that accesses the groups of convolutional interleaved data buffered in the input data buffer with a memory to perform a convolutional de-interleaving process; wherein, a memory address of each set of convolutional interleaved data stored is determined according to a corresponding delay depth, the value L and the delay depth difference; and an output data buffer, that buffers a plurality of groups of convolutional de-interleaved data read from the memory, wherein the L sets of data of one of the groups correspond to L different delay depths, respectively; wherein the memory is arranged in a plurality of L×L tiles; wherein each of the groups of non-interleaved data corresponds to one group transmitting sequence J, J being a positive integer, and the memory address of each set of convolutional interleaved data stored is determined further according to the value J, such that at least one of the L×L tiles comprises a part of the L sets of data corresponding to a same J and the part of the L sets of data are continuous.
2. The convolutional de-interleaver according to claim 1, wherein the delay depth difference is I operation clocks, I is a positive integer, a maximum of the delay depths of the convolutional interleaved data is [(L−1)×I+Q] operation clocks, and Q is an integer not smaller than 0 and represents a minimum of the delay depths.
3. The convolutional de-interleaver according to claim 1, wherein the memory controller reads (L−1) sets of convolutional interleaved data by using N same row accessing memory units in the memory to serve as (L−1) sets of one of the groups of convolutional de-interleaved data; in the (L−1) sets of convolutional interleaved data, [(L/N)−1] sets of the convolutional interleaved data are stored in one of the N same row accessing memory units, and (N−1)×L/N sets of the convolutional interleaved data are stored in (N−1) same row accessing memory units of the N same row accessing memory units, where N is a positive integer.
4. A convolutional de-interleaver, applied to process a plurality of groups of convolutional interleaved data, the groups of convolutional interleaved data comprising a plurality of sets of convolutional interleaved data, the convolutional de-interleaver comprising: an input data buffer, that buffers the groups of convolutional interleaved data; a memory controller, that accesses the groups of convolutional interleaved data buffered in the input data buffer with a memory to perform a convolutional de-interleaving process to obtain a plurality of groups of convolutional de-interleaved data, and stores a plurality of sets of data of the sets of convolutional interleaved data corresponding to a same group of convolutional de-interleaved data of the groups of convolutional de-interleaved data in a plurality of same row accessing memory units in the memory; and an output data buffer, that buffers the convolutional de-interleaved data read from the memory, wherein the sets of convolutional interleaved data are formed by performing a convolutional interleaving process on a plurality of groups of non-interleaved data, each of the groups of non-interleaved data comprises L sets of data, two successive sets of data in each group of the non-interleaved data correspond to a delay depth difference after the convolutional interleaving process, the delay depth difference is I operation clocks, L and I are positive integers, a maximum of delay depths of the convolutional interleaved data is [(L−1)×I+Q] operation clocks, and Q is an integer not smaller than 0 and represents a minimum of the delay depths, wherein the L sets of data of one of the groups correspond to L different delay depths, respectively, wherein the memory is arranged in a plurality of L×L tiles, and wherein each of the groups of non-interleaved data corresponds to one group transmitting sequence J, J being a positive integer, a memory address of each set of the convolutional interleaved data is determined according to a corresponding delay depth of the set of data, the value L, the delay depth difference and the value J such that at least one of the L×L tiles comprises a part of the L sets of data corresponding to a same J and the part of the L sets of data are continuous.
5. A convolutional de-interleaving method, applied to process a plurality of groups of convolutional interleaved data, the groups of convolutional interleaved data comprising a plurality of sets of convolutional interleaved data, the convolutional de-interleaving method comprising: accessing the groups of convolutional interleaved data with a memory to perform a convolutional de-interleaving process to obtain a plurality of groups of convolutional de-interleaved data; wherein, a plurality of sets of data of the sets of convolutional interleaved data corresponding to a same group of convolutional de-interleaved data of the groups of convolutional de-interleaved data are stored in a plurality of same row accessing memory units in the memory, wherein the sets of convolutional interleaved data are formed by performing a convolutional interleaving process on a plurality of groups of non-interleaved data, each of the groups of non-interleaved data comprises L sets of data, two successive sets of data in each group of the non-interleaved data correspond to a delay depth difference after the convolutional interleaving process, the delay depth difference is I operation clocks, L and I are positive integers, a maximum of delay depths of the convolutional interleaved data is [(L−1)×I+Q] operation clocks, and Q is an integer not smaller than 0 and represents a minimum of the delay depths, wherein the L sets of data of one of the groups correspond to L different delay depths, respectively, wherein the memory is arranged in a plurality of L×L tiles, wherein in the step of accessing the groups of convolutional interleaved data with the memory to perform the convolutional de-interleaving process, a memory address of each set of the convolutional interleaved data stored is determined according to a delay depth, the value L and the value I, and wherein each of the groups of non-interleaved data corresponds to one group transmitting sequence J, J being a positive integer, and the memory address of each set of the convolutional interleaved data stored is determined further according to the value J, such that at least one of the L×L tiles comprises a part of the L sets of data corresponding to a same J and the part of the L sets of data are continuous.
6. The convolutional de-interleaving method according to claim 5, wherein each of the groups of convolutional de-interleaved data corresponds to L sets of convolutional interleaved data of the sets of convolutional interleaved data; (L−1) sets of convolutional interleaved data of the L sets of convolutional interleaved data are stored in N same row accessing memory units in the memory; in the (L−1) sets of convolutional interleaved data, [(L/N)−1] sets of the convolutional interleaved data are stored in one of the N same row accessing memory units, and the remaining sets of the convolutional interleaved data are distributed and stored in (N−1) same row accessing memory units of the N same row accessing memory units, where L and N are positive integers.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION OF THE INVENTION
(14) The present invention discloses a convolutional de-interleaver and a convolutional de-interleaving method, which perform de-interleaving using a dynamic random access memory (DRAM) to save hardware costs, and reduce the number of row access changes of the memory by appropriately selecting the memory addresses used for data access, further improving performance.
(16) For example, referring to
(17) As previously described, in order to restore the continuity of the data (i.e., to restore the convolutional interleaved data to non-interleaved (de-interleaved) data), the memory access controller 520 performs de-interleaving in response to the four delay depths of 3, 2, 1 and 0 units corresponding to the convolutional interleaving processing unit 600 (or an equivalent unit).
(18) Referring to
(19) In a 2nd time unit T2, the 2nd-column data (x, x, x) in the 3×3 data storage units is read out and aligned with the 1st set of data x of the 2nd group of interleaved data in the inputted data, and together they serve as a 2nd group of de-interleaved data (x, x, x, x) and are outputted. The remaining sets of data (x, C_0, D_1) of the 2nd group of interleaved data in the inputted data are written to the rows of the predetermined storage positions of the sets of data B, C and D.
(20) In a 3rd time unit T3, the 3rd-column data (x, x, x) in the 3×3 data storage units is read out and aligned with the 1st set of data x of the 3rd group of interleaved data in the inputted data, and together they serve as a 3rd group of de-interleaved data (x, x, x, x) and are outputted. The remaining sets of data (B_0, C_1, D_2) of the 3rd group of interleaved data in the inputted data are written to the rows of the predetermined storage positions of the sets of data B, C and D.
(21) In a 4th time unit T4, the 4th-column data (B_0, C_0, D_0) in the 3×3 data storage units is read out and aligned with the 1st set of data A_0 of the 4th group of interleaved data in the inputted data, and together they serve as a 4th group of de-interleaved data (A_0, B_0, C_0, D_0) and are outputted. The remaining sets of data (B_1, C_2, D_3) of the 4th group of interleaved data in the inputted data are written to the rows of the predetermined storage positions of the sets of data B, C and D. The remaining 5th to 7th time units T5 to T7 and the access details of subsequent time units can be deduced from the above description.
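The T1 to T4 walkthrough above is the classic branch-delay model of a convolutional interleaver/de-interleaver pair. The following is a minimal Python sketch of that model (the helper names and the use of FIFO `deque`s are illustrative, not taken from the patent); with L=4 and I=1 it reproduces the x placeholders and the restored group (A_0, B_0, C_0, D_0) described above.

```python
from collections import deque

def make_lines(depths, fill="x"):
    # One FIFO delay line per branch, pre-filled with don't-care values.
    return [deque([fill] * d) for d in depths]

def shift(lines, group):
    # Push one symbol into each branch and pop its (delayed) output symbol.
    out = []
    for line, sym in zip(lines, group):
        line.append(sym)
        out.append(line.popleft())
    return out

L, I = 4, 1
# Interleaver delays A, B, C, D by 3, 2, 1, 0 units; the de-interleaver
# applies the complementary delays 0, 1, 2, 3 so every branch totals (L-1)*I.
interleaver = make_lines([(L - 1 - i) * I for i in range(L)])
deinterleaver = make_lines([i * I for i in range(L)])

outputs = []
for t in range(8):
    group = [f"{name}{t}" for name in "ABCD"]
    outputs.append(shift(deinterleaver, shift(interleaver, group)))

# After the pipeline latency of (L-1)*I = 3 groups, original groups reappear:
print(outputs[3])  # ['A0', 'B0', 'C0', 'D0']
```

The earlier outputs consist of the don't-care x values, matching the partially filled groups read out in T1 to T3.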
(22) In continuation, the data A_J corresponding to a delay depth of 3 is directly outputted, the storage address of the data B_J corresponding to a delay depth of 2 is located at the {[J mod (L−1)]×I+1}th position (where mod represents a remainder operation) in the 1st row of the 3×3 data storage units, the storage position of the data C_J corresponding to a delay depth of 1 is located at the {[J mod (L−1)]×I+1}th position in the 2nd row of the 3×3 data storage units, and the storage position of the data D_J corresponding to a delay depth of 0 is located at the {[J mod (L−1)]×I+1}th position in the 3rd row of the 3×3 data storage units. The number of the row at which each set of data is located is the maximum delay depth (3 in this example) minus the delay depth of that set of data. According to the above, in this embodiment, the data at each access address is read out before being overwritten, the behavior of a delay buffering element is simulated through the additionally reserved storage units and the interleaved read and write addresses, and the read addresses are aligned with the write addresses of the same column. It should be noted that the descriptive terms column, row and position are for indicating relationships among the access addresses, and are not to be construed as limitations on the physical circuit relationships of the memory 530.
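The row and position rules just stated can be condensed into a hypothetical helper (the function name and the row-0-means-direct-output convention are mine; the formulas are the ones given above):

```python
def storage_position(depth, J, L, I, max_depth):
    """Row and 1-indexed position for a set with the given interleaver delay depth.

    Row 0 means the set (whose delay depth equals max_depth) is outputted directly.
    """
    row = max_depth - depth          # row number = maximum delay depth minus this depth
    col = (J % (L - 1)) * I + 1      # the {[J mod (L-1)] * I + 1}-th position in that row
    return row, col

# L = 4, I = 1, maximum delay depth 3, as in the example above:
print(storage_position(2, 0, 4, 1, 3))  # B_0 -> row 1, position 1
print(storage_position(0, 2, 4, 1, 3))  # D_2 -> row 3, position 3
```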
(23) It should be noted that, before the memory access controller 520 stores first convolutional interleaved data (e.g., the data C_2 in
(24) To better understand the differences between the present invention and the prior art,
(25) In practice, the number of sets of data in each group of convolutional interleaved/de-interleaved data (alternatively, the Jth group of convolutional interleaved/de-interleaved data) is usually greater than the number of sets in the foregoing example. Taking
(26) It should be noted that, in the above example, L×L=32×32 data storage units form M (4×8=32) tiles, with each tile storing (L×L/M)=32×32/32=32 sets of convolutional interleaved data. Wherein, N=32/8=4 tiles can be utilized by the memory access controller (e.g., the controller 520 in
(27) In continuation, under the same settings of the number of groups of data, the delay depths and the delay depth differences, an access condition of a conventional solution is as shown in
(28) In addition to the above device, the present invention further discloses a convolutional de-interleaving method for processing a plurality of groups of convolutional interleaved data by M same row accessing memory units of a DRAM. Similarly, the plurality of groups of convolutional interleaved data is formed by performing a convolutional interleaving process on a plurality of groups of non-interleaved data. Each of the groups of non-interleaved data includes L sets of data. The Jth group of the plurality of groups of non-interleaved data includes L consecutive sets of data, the L sets of data of the Jth group correspond to L different delay depths, respectively, L and J are positive integers, and the value of J corresponds to a group transmitting sequence of the plurality of groups of non-interleaved data. Under the above setting, a method according to an embodiment includes the following steps, as shown in
(29) In step S1110, received data of the plurality of groups of convolutional interleaved data is written to the M same row accessing memory units in the DRAM. Wherein, the memory addresses at which two successive sets of convolutional interleaved data are written are not consecutive, and (L−1) sets of the L sets of convolutional interleaved data of the plurality of groups of convolutional interleaved data are written to Nw same row accessing memory units of the M same row accessing memory units, where M is an integer greater than 1, and Nw is a positive integer not greater than M. In this example, the (L−1) sets of convolutional interleaved data are distributed and stored in the Nw same row accessing memory units, and the number of sets of data of the Jth group of convolutional interleaved data stored in each of the same row accessing memory units is not greater than [int((L−1)/Nw)+1], where int represents an integer (floor) operation. Further, a delay depth difference corresponding to two successive sets of convolutional interleaved data is I operation clocks, and the memory address of each set of convolutional interleaved data is determined according to the values of L, J and I.
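One distribution satisfying the per-unit bound of step S1110 is a simple round-robin over the Nw units; the sketch below (the helper name and the round-robin choice are assumptions, not the patent's actual address mapping) checks the bound int((L−1)/Nw)+1 for L=32 and Nw=4:

```python
def distribute(L, Nw):
    # Round-robin assignment of the (L-1) delayed sets to Nw same-row units.
    units = [[] for _ in range(Nw)]
    for k in range(L - 1):
        units[k % Nw].append(k)
    return units

L, Nw = 32, 4
per_unit = [len(u) for u in distribute(L, Nw)]
bound = (L - 1) // Nw + 1  # int((L-1)/Nw) + 1 = int(31/4) + 1 = 8
print(per_unit, bound)  # [8, 8, 8, 7] 8
```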
(30) In step S1120, one set of the L sets of convolutional interleaved data is outputted as one set of data of one group of convolutional de-interleaved data, and the (L−1) sets of convolutional interleaved data in Nr same row accessing memory units are accessed through not more than Nr row access changes to serve as the (L−1) sets of data of that group of convolutional de-interleaved data. The Nr same row accessing memory units are included in the M same row accessing memory units, where Nr is a positive integer not greater than M. In this example, Nw is greater than Nr.
(31)
(32) In step S1210, received data of the plurality of groups of convolutional interleaved data is written to the M same row accessing memory units in the DRAM. Wherein, the memory addresses at which two successive sets of convolutional interleaved data are written are not consecutive, where M is an integer greater than 1. In this example, a delay depth difference corresponding to two successive sets of convolutional interleaved data is I operation clocks. The group transmitting sequence of the plurality of groups of non-interleaved data is J (alternatively, each group/set of convolutional interleaved data corresponds to a group receiving sequence J), and the memory address of each set of convolutional interleaved data is determined according to the values of L, J and I, where I and J are positive integers.
(33) In step S1220, (L−1) sets of convolutional interleaved data stored in the M same row accessing memory units are read through Nr row access changes to serve as (L−1) sets of data of one group of convolutional de-interleaved data, where Nr is a positive integer not greater than M. In the (L−1) sets of data, [(L/Nr)−1] sets of data are stored in one same row accessing memory unit, and the remaining (Nr−1)×L/Nr sets of data are evenly stored in (Nr−1) same row accessing memory units.
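The two counts in step S1220 together account for all (L−1) delayed sets, since [(L/Nr)−1] + (Nr−1)×L/Nr = L−1. A quick arithmetic check with example sizes consistent with the earlier 32×32 discussion (L=32, Nr=4 are my assumed values):

```python
L, Nr = 32, 4
in_one_unit = L // Nr - 1           # [(L/Nr) - 1] = 7 sets in one unit
in_rest = (Nr - 1) * (L // Nr)      # (Nr-1) * L/Nr = 24 sets in the other Nr-1 units
assert in_one_unit + in_rest == L - 1  # together: all 31 delayed sets
print(in_one_unit, in_rest)  # 7 24
```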
(34) In conclusion, the convolutional de-interleaver and the convolutional de-interleaving method perform de-interleaving with a DRAM to save hardware costs, and reduce the number of row access changes of a memory through appropriately selecting memory addresses for data access to further improve the performance.
(35) While the invention has been described by way of example and in terms of the preferred embodiments, it is to be understood that the invention is not limited thereto. On the contrary, it is intended to cover various modifications and similar arrangements and procedures, and the scope of the appended claims therefore should be accorded the broadest interpretation so as to encompass all such modifications and similar arrangements and procedures.