Apparatus and Methods of Providing an Efficient Radix-R Fast Fourier Transform
20180373676 · 2018-12-27
Assignee
Inventors
CPC classification
G06F17/142
PHYSICS
International classification
Abstract
In some embodiments, an apparatus can include a memory configured to store data at a plurality of addresses and a generalized radix-r fast Fourier transform (FFT) processor configured to determine a plurality of FFTs for any positive integer Discrete Fourier Transform (DFT) by utilizing three counters to access the data and the coefficient multipliers at each stage of the FFT processor.
Claims
1. An apparatus comprising: a memory configured to store data at a plurality of addresses; and a generalized radix-r fast Fourier transform (FFT) processor configured to determine a plurality of FFTs for any positive integer Discrete Fourier Transform (DFT) by utilizing three counters to access the data and the coefficient multipliers at each stage of the FFT processor.
2. The apparatus of claim 1, wherein the positive integer DFT is a prime number.
3. The apparatus of claim 1, wherein the generalized radix-r fast FFT processor performs a Decimation in Frequency (DIF) operation.
4. The apparatus of claim 1, wherein the generalized radix-r fast FFT processor performs a Decimation in Time (DIT) operation.
5. The apparatus of claim 1, wherein the generalized radix-r fast FFT processor includes an address generator configured to reduce memory accesses to coefficient multipliers of the FFTs stored by the plurality of addresses of the memory by regrouping data with their corresponding coefficient multipliers.
6. The apparatus of claim 5, wherein the regrouping of the data with their corresponding coefficient multipliers avoids trivial multiplication by one operations during the FFT calculation.
7. The apparatus of claim 5, wherein the regrouping of the data with their corresponding coefficient multipliers ensures that zero-padding within the FFT calculation does not contribute to computational load.
8. An apparatus comprising: an input configured to receive input data having a size that is a multiple of an arbitrary integer a; a memory configured to store data at a plurality of addresses; and a generalized radix-R fast Fourier transform (FFT) processor coupled to the input and to the memory, the generalized radix-R FFT processor configured to determine an FFT of the input data using three counters to access data and coefficient multipliers at each stage of the FFT processor.
9. The apparatus of claim 8, wherein the generalized radix-R FFT processor is configured to apply an interlaced decomposition to the input data to separate even and odd samples.
10. The apparatus of claim 8, wherein the generalized radix-R FFT processor is configured to determine an 8-point decimation in time discrete Fourier transform in three stages.
11. The apparatus of claim 8, wherein the generalized radix-R FFT processor is configured to determine an 8-point decimation in frequency discrete Fourier transform in three stages.
12. The apparatus of claim 8, wherein the generalized radix-R FFT processor is configured to iteratively divide a discrete Fourier transform (DFT) into a predetermined number of smaller DFTs.
13. The apparatus of claim 12, wherein an address generator of the generalized radix-R FFT processor is configured to provide a simple mapping of an FFT stage, a butterfly stage, and an element to addresses of the coefficient multipliers.
14. The apparatus of claim 8, wherein the generalized radix-R FFT processor includes an address generator configured to reduce memory accesses to the coefficient multipliers of the FFTs stored by the plurality of addresses of the memory by regrouping data with their corresponding coefficient multipliers.
15. The apparatus of claim 14, wherein the regrouping of the data with their corresponding coefficient multipliers avoids trivial multiplication by one operations during the FFT calculation.
16. The apparatus of claim 14, wherein the regrouping of the data with their corresponding coefficient multipliers ensures that zero-padding within the FFT calculation does not contribute to computational load.
17. An apparatus comprising: a memory configured to store data at a plurality of addresses; and a generalized radix-r fast Fourier transform (FFT) processor configured to determine a plurality of FFTs for any positive integer Discrete Fourier Transform (DFT) by utilizing three counters to access the data and the coefficient multipliers at each stage of a plurality of stages of the FFT processor, the plurality of stages including an FFT stage and at least one butterfly stage.
18. The apparatus of claim 17, wherein the generalized radix-R FFT processor is configured to apply an interlaced decomposition to the input data to separate even and odd samples.
19. The apparatus of claim 17, wherein the generalized radix-R FFT processor is configured to determine an 8-point decimation in time discrete Fourier transform in three stages.
20. The apparatus of claim 17, wherein the generalized radix-R FFT processor is configured to determine an 8-point decimation in frequency discrete Fourier transform in three stages.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0029] In the following discussion, the same reference numbers are used in the various embodiments to indicate the same or similar elements.
DETAILED DESCRIPTION OF ILLUSTRATIVE EMBODIMENTS
[0030] Despite many new technologies, the Fourier transform may remain the workhorse of signal processing analysis. The Fast Fourier Transform (FFT) is an algorithm that can be applied to compute the Discrete Fourier Transform (DFT) and its inverse, both of which can be optimized to remove redundant calculations. These optimizations can be made when the number of samples to be transformed is an exact power of two; if it is not, the samples can be zero padded to the nearest power of two.
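As a minimal sketch of the zero-padding step described above (the helper names are illustrative and not part of the disclosure), the sample buffer can be extended to the nearest power of two as follows:

```python
def next_pow2(n):
    # Smallest power of two greater than or equal to n.
    p = 1
    while p < n:
        p *= 2
    return p

def zero_pad(samples):
    # Append zeros so the sample count becomes an exact power of two.
    target = next_pow2(len(samples))
    return samples + [0.0] * (target - len(samples))
```

For example, a 5-sample input would be padded to 8 samples before a radix-2 FFT could be applied; the generalized radix-r approach of this disclosure aims to avoid that padding entirely.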
[0031] The present disclosure may be embodied in one or more address generators that can be used in conjunction with one or more butterfly processing elements. The one or more address generators can be configured to support a generalized radix-r FFT that may allow the efficient calculation of discrete Fourier transform of arbitrary sizes, including prime sizes. In some embodiments, the embodiments of the present disclosure may utilize a computing device including an interface coupled to a processor and configured to receive data. The processor may be configured to apply a butterfly computation, which may include a simple multiplication of input data with an appropriate coefficient multiplier. In the context of an FFT computation, a butterfly computation is a portion of the DFT computation that combines the results of smaller DFTs into a larger DFT (or vice versa) or segments a larger DFT into smaller DFTs. These smaller DFTs may be written to or read from memory, and such read/write operations contribute to the overall speed of the DFT computation. Embodiments of a system in accordance with the present disclosure may include one or more simple address generators (AGs), which can compute address sequences from a small parameter set that describes the address pattern.
[0032] A processor may be configured to implement a butterfly operation (or may be configured to compute the mathematical transformations), and dataflow may be controlled by an independent device or by another processor of the device. In an embodiment, peripheral devices may be used to control data transfers between an I/O (Input/Output) subsystem and a memory subsystem in the same manner that a processor can control such transfers, which can reduce core processor interrupt latencies and conserve digital signal processor (DSP) cycles for other tasks, leading to increased performance. Embodiments described herein may present a generalized radix-r FFT that allows the efficient calculation of DFTs of arbitrary size, including prime sizes.
[0033] Referring now to
[0034] In some embodiments, the one or more CPU cores 102 can include internal memory 114, such as registers and memory management. Further, the one or more CPU cores 102 can include an address generator 116 including a plurality of counters 118. In some embodiments, the one or more CPU cores 102 can be coupled to a floating-point unit (FPU) processor 104.
[0035] The one or more CPU cores 102 can be configured to process data using FFT DIF operations or FFT DIT operations. Embodiments of the present disclosure utilize an address generator 116 including a plurality of counters 118 to provide generalized radix-r FFTs, which allow for the efficient calculation of discrete Fourier transforms of arbitrary sizes, including prime sizes. The address generator 116 and the counters 118 can be used to reduce the overall number of memory accesses (read operations and write operations) for the various FFT calculations, thereby enhancing the overall efficiency, speed and performance of the one or more CPU cores 102.
[0036] It should be appreciated that the FFT operations may be managed using a dedicated processor or processing circuit. In some embodiments, the FFT operations may be implemented as CPU instructions that can be executed by the individual processing cores of the one or more CPU cores 102 in order to manage memory accesses and various FFT computations.
[0037] In order to appreciate the improvements to the processing cores provided by the present disclosure, it is important to understand at least one possible implementation of the FFT computations. In the following discussion of
[0038]
[0039] Further, the four signals can be decomposed into eight signals using the interlace decomposition at a fourth stage 208. The eight signals can be decomposed into sixteen signals using the interlace decomposition at a fifth stage 210.
[0040] Each of the stages uses an array of a size that is a power of two. If the data size is not a power of two, it can be zero padded to the nearest power of two. As used herein, the term zero padded refers to the insertion of a plurality of zeros at the beginning or end of a number in order to fill the array to form an array having a size that is a power of two. All of the above-cited algorithms require data sizes that are a power of two. Zero-padding from a natural computation size to the nearest power-of-two size introduces increased computational complexity and memory requirements and reduces accuracy, especially in multidimensional problems.
[0041] The definition of the DFT is represented by the following equation:
[0042] where x(n) represents the input sequence, X(k) represents the output sequence, N represents the transform length, and w_N represents the N-th root of unity, w_N = e^(-j2π/N). Both the input sequence x(n) and the output sequence X(k) are complex-valued sequences of length N = r^S, where the variable r represents the radix and the variable S represents the number of stages.
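The DFT definition above can be evaluated directly, which makes the O(N^2) cost that the FFT eliminates concrete. The following sketch (function name illustrative) is a literal transcription of equation (1):

```python
import cmath

def dft(x):
    # Direct evaluation of the DFT definition (equation (1)):
    # X(k) = sum over n of x(n) * w_N^(n*k), with w_N = exp(-j*2*pi/N).
    # This costs O(N^2) operations; the FFT reduces it to O(N log N).
    N = len(x)
    w_N = cmath.exp(-2j * cmath.pi / N)
    return [sum(x[n] * w_N ** (n * k) for n in range(N)) for k in range(N)]
```

For instance, a unit impulse [1, 0, 0, 0] transforms to a flat spectrum [1, 1, 1, 1], and a constant input concentrates all energy in the k = 0 bin.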
[0043] The decimation-in-time (DIT) FFT first rearranges the input elements into bit-reversed order, then builds up the output transform in log_2 N iterations. The DIT FFT computes an 8-point DIT DFT in three stages as depicted in
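The bit-reversed ordering mentioned above can be sketched as a small index-permutation routine (the function name is illustrative):

```python
def bit_reverse_indices(n_bits):
    # Index permutation applied by the DIT FFT before the first stage:
    # entry i holds the value of i with its n_bits binary digits reversed.
    size = 1 << n_bits
    out = []
    for i in range(size):
        rev = 0
        for b in range(n_bits):
            rev = (rev << 1) | ((i >> b) & 1)
        out.append(rev)
    return out
```

For an 8-point transform (3 bits) this yields the familiar order [0, 4, 2, 6, 1, 5, 3, 7]: input element x(1) = x(001b) moves to position 4 (100b), and so on.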
[0044]
[0045] In the embodiments of
[0046] In general, higher radix butterfly implementations can reduce the communication burden. For example, a sixteen-point DFT can be determined in two stages of radix-4 butterflies, as shown in
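The stage-count saving from a higher radix follows directly from S = log_r N, as a quick numerical check shows (helper name illustrative; N is assumed to be an exact power of r):

```python
import math

def num_stages(N, r):
    # Number of butterfly stages for a radix-r FFT of size N = r**S.
    return int(round(math.log(N, r)))
```

For a 16-point DFT, a radix-2 decomposition needs 4 stages while a radix-4 decomposition needs only 2, halving the number of global communication/memory-access passes over the data.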
[0047] It is also possible to derive FFT algorithms that first go through a set of log_2 N iterations on the input data and then rearrange the output values into bit-reversed order. These are called decimation-in-frequency (DIF) FFTs. One possible example of a three-stage eight-point DIF FFT process is described below with respect to
[0048]
[0049] The integers n and k in equation (1) (for the case N = 2^μ) can be expressed as binary numbers as depicted in the following equations:
n = 2^(μ-1) n_(μ-1) + 2^(μ-2) n_(μ-2) + . . . + n_0, (2)
and
k = 2^(μ-1) k_(μ-1) + 2^(μ-2) k_(μ-2) + . . . + k_0, (3)
in which the binary digits n_i and k_i can take only the values 0 and 1. Accordingly, equation (1) can be rewritten as follows:
[0050] Based on equation (4), the μ-fold sum can be divided into separate summations as follows:
[0051] The computation of equation (1) can be divided into log_2 N = μ stages, where each stage has a computational complexity of order N. As a result, the total computational complexity can be decreased from N^2 to N log_2 N. If the result needs to be in natural order, an unscrambling stage for X(k) can be included. The signal flow graph for an 8-point radix-2 DIF FFT is described below with respect to
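The radix-2 DIF decomposition just described can be sketched as a short iterative routine (function names illustrative, not from the disclosure). Input is taken in natural order; per the text, the output emerges in bit-reversed order unless an unscrambling stage is added:

```python
import cmath

def fft_dif_radix2(x):
    # Iterative radix-2 decimation-in-frequency FFT for len(x) a power of two.
    # Each stage halves the butterfly span; the sum path needs no twiddle
    # factor, while the difference path is multiplied by one.
    x = list(x)
    N = len(x)
    span = N
    while span > 1:
        half = span // 2
        w_step = cmath.exp(-2j * cmath.pi / span)
        for start in range(0, N, span):
            w = 1 + 0j
            for i in range(start, start + half):
                a, b = x[i], x[i + half]
                x[i] = a + b                 # sum path (no twiddle)
                x[i + half] = (a - b) * w    # difference path times twiddle
                w *= w_step
        span = half
    return x
```

Reordering the result by the 3-bit bit-reversal permutation [0, 4, 2, 6, 1, 5, 3, 7] recovers the natural-order 8-point DFT, matching the N log_2 N complexity claim above.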
[0052]
[0053] In the illustrated example of
[0054] The radix-2 butterfly can include two complex additions and one complex multiplication. A conceptual representation of the radix-2 butterfly is described below with respect to
[0055]
[0056] The basis of the radix-r FFT is that a DFT can be divided into r smaller DFTs, each of which is divided into r smaller DFTs, in a continuing process that results in a combination of r point DFTs. By properly dividing the DFT into partial DFTs, the system can control the number of multiplications and stages. In some embodiments, the number of stages may correspond to the amount of global communication, the amount of memory accesses, or any combination thereof. Thus, advantages can be achieved by reducing the number of stages.
[0057] Conceptually, the FFT address generator can provide a simple mapping of the three indices (FFT stage, butterfly, and element) to the addresses of the multiplier coefficients. At the outset, equation (1) can be expressed in compact form as depicted in equation (8) below:
for k = 0, 1, . . . , N-1, with p = 0, 1, . . . , (N/r)-1 and q = 0, 1, . . . , r-1, and with
X = [X(p), X(p+N/r), X(p+2N/r), . . . , X(p+(r-1)N/r)]^T, (9)
W_N = diag(w_N^0, w_N^p, w_N^(2p), . . . , w_N^((r-1)p)), (10)
[0058] Therefore, by defining [T_r]_(l,m) as the element at the l-th line and m-th column of the matrix T_r, equation (11) can be rewritten as follows:
[T_r]_(l,m) = w_N^(⟨lmN/r⟩_N), (12)
where l = 0, 1, . . . , r-1, m = 0, 1, . . . , r-1, ⟨x⟩_N represents the operation x modulo N, and W_N(m, v, s) represents the set of twiddle factor matrices as follows:
[W_N]_(l,m)(v, s) = diag(w_N(0, v, s), w_N(1, v, s), . . . , w_N(r-1, v, s)), (13)
where the index r represents the FFT's radix, the value v = 0, 1, . . . , V-1 indexes the words of size r (V = N/r), and the value s = 0, 1, . . . , S indexes the stages (or iterations, with S = log_r N - 1). Further, equation (13) can be expressed for the different stages in an FFT process as follows:
for the DIF process. Equation (14) can be expressed as follows:
for the DIT process, where l = 0, 1, . . . , r-1 is the l-th butterfly's output, m = 0, 1, . . . , r-1 is the m-th butterfly's input, and ⌊x⌋ represents the integer part of x.
[0059] As a result, the l-th transform output during each stage can be illustrated according to the following equation:
for the DIF process and
for the DIT process.
[0060] The read address generator (RAG), the write address generator (WAG), and the coefficient address generator (CAG) can be used for both DIF and DIT processes. The m-th butterfly's input data of the v-th word x(m) at the s-th stage (s-th iteration) is fed by equations (12) and (13) for the DIF process and by equation (14) for the DIT process of the RAG as follows:
and for s>0
and for the DIT process
where the butterfly's input m = 0, 1, . . . , r-1, v = 0, 1, . . . , V-1, and s = 0, 1, . . . , S, with S = log_r N - 1.
[0061] For both cases, the l-th processed butterfly's output X(l, v, s) (l = 0, 1, . . . , r-1) for the v-th word at the s-th stage should be stored into the memory address location given by the WAG as follows:
WAG(l, v, s) = l(N/r) + v, (21)
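Equation (21) is simple enough to sanity-check numerically. The sketch below (function name illustrative) confirms that, at every stage, the write addresses produced over all butterfly outputs l and words v tile the range 0 . . . N-1 exactly once:

```python
def wag(l, v, N, r):
    # Write-address generator of equation (21): the l-th butterfly output of
    # the v-th word is written to address l*(N/r) + v at every stage.
    return l * (N // r) + v

# All (l, v) pairs for an 8-point radix-2 example: addresses cover 0..7 once.
N, r = 8, 2
addrs = sorted(wag(l, v, N, r) for l in range(r) for v in range(N // r))
```

Because each stage writes every memory location exactly once, the stage output can be stored in natural order, which underpins the ordered-input ordered-output property discussed next.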
[0062] It should be noted that, for both algorithms, the input and output data are in natural order during each stage of the FFT process; such algorithms are known as Ordered Input Ordered Output (OIOO) algorithms. The coefficient multipliers (twiddle factors or twiddle coefficients), which are used during each stage and which are fed to the m-th butterfly's input of the v-th word x(m) at the s-th stage (s-th iteration), are provided as follows:
for the DIF process and
for the DIT process. Based on equations (15), (20), (21), and (23), the generalized radix-r FFT can be implemented in a field-programmable gate array (FPGA), a circuit, or software that can execute on a processor. Regardless of how the mathematical processes are implemented, the generalized radix-r FFT can be used with a variety of different circuits, devices, and systems.
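As a software illustration of the generalized radix-r decomposition referred to above, the following recursive sketch (names illustrative; it assumes N is an exact power of r and is not the address-generator formulation of the disclosure) splits an N-point DFT into r interleaved (N/r)-point DFTs and recombines them with an r-point butterfly:

```python
import cmath

def fft_radix_r(x, r):
    # Recursive radix-r decimation-in-time sketch for N = r**S points:
    # the N-point DFT is split into r interleaved (N/r)-point DFTs, whose
    # outputs are combined by an r-point butterfly (the adder-tree T_r with
    # the twiddle factors folded in).
    N = len(x)
    if N == 1:
        return list(x)
    subs = [fft_radix_r(x[q::r], r) for q in range(r)]
    w_N = cmath.exp(-2j * cmath.pi / N)
    X = [0j] * N
    for p in range(N // r):          # position within each sub-transform
        for l in range(r):           # which of the r butterfly outputs
            k = p + l * (N // r)
            X[k] = sum(subs[q][p] * w_N ** (q * k) for q in range(r))
    return X
```

With r = 3 this computes, for example, a 9-point DFT without any zero-padding to 16 points, which is the motivating case for arbitrary (including prime) radices.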
[0063]
[0064] At 706, the method 700 can include computing the first stage. At 708, the method 700 may include computing the remaining S-1 stages. At 710, the method 700 may include executing the butterfly computations with trivial multiplication using unitary twiddle factors. At 712, the method 700 can include executing the butterfly computations with non-trivial multiplications using the complex twiddle factors.
[0065] At 714, if the selected stage is not greater than the total number of stages minus one, the method 700 can include incrementing the stage counter at 716. The method 700 then returns to 708 to compute the S-1 stages. Returning to 714, if the selected stage is greater than the total number of stages minus one, the method 700 can terminate at 718.
[0066]
[0067] In the source code 800, a plurality of for loops are nested to iteratively determine the read data addresses and the twiddle (coefficient) factor addresses and to determine the integer part values for the butterfly FFT. The illustrative source code 800 may correspond to equations (17), (20), and (23) above.
[0068] In some embodiments, the generalized radix-r FFT operations and the associated address generator and counters disclosed herein take advantage of the occurrences of multiplication by one. For example, the elements of the twiddle factor matrix illustrated in equation (13) that are equal to one can be easily predicted when the shifting counter in both cases is equal to zero (i.e., v < r^s or v < r^(S-s)). The trivial multiplication by one (w^0) during the entire FFT process is consequently avoided. Thus, embodiments of the present disclosure may take advantage of this mathematical equivalence to ensure that the zero-padding does not contribute to the computational load.
[0069] Additionally, as can be seen in the source code 800 of
However, the modulo (MOD) operation is more costly in terms of processor operations than multiplication and thus can be more computationally intensive.
[0070]
[0071] Many FFT users may prefer natural-order outputs of the computed FFT, which is why many developers have concentrated their efforts on reducing the computational-time impact of the bit-reversal stage, the first stage of the DIT process, known as the bit-reversal data shuffling technique. The DIT FFT has been attractive in fixed-point implementations because DIT processes executed in fixed-point arithmetic have been shown to be more accurate than decimation-in-frequency (DIF) processes. Furthermore, for many hardware architectures, it is highly recommended to reorder the intermediate stages of the FFT algorithm in order to facilitate operation on consecutive data elements. To these ends, a number of alternative implementations have been proposed. One such alternative implementation may adopt an out-of-place algorithm, in which the output array is distinct from the input array.
[0072] For example, in the bit-reversal technique developed by Rius and De Porrata-Doria (J. M. Rius and R. De Porrata-Doria, "New FFT Bit-Reversal Algorithm," IEEE Transactions on Signal Processing, Vol. 43, No. 4, April 1995, pp. 991-994), the operation count, excluding the index calculations for each stage, is as follows:
N-2 integer additions,
2(N-2) integer increments,
(log_2 N)-1 multiplications by 2,
(log_2 N)-1 divisions by 2, (25)
plus two more divisions, N/2 and N/4. In equation (25), the multiplications and divisions can be efficiently implemented using bit-shift operations. Further, this Rius implementation uses a storage table of N/2 index numbers. In contrast, a faster bit-reversal permutation is described by Prado (J. Prado, "A New Fast Bit-Reversal Permutation Algorithm Based on Symmetry," IEEE Signal Processing Letters, Vol. 11, No. 12, December 2004, pp. 933-936). An even faster implementation was described by Pei and Chang (S. Pei and K. Chang, "Efficient Bit and Digit Reversal Algorithm Using Vector Calculation," IEEE Transactions on Signal Processing, Vol. 55, No. 3, March 2007, pp. 1173-1175). The implementation described by Pei and Chang provides a significant improvement in the operation count, which includes N shifts, N additions, and an index adjustment, but requires the use of O(N) memory.
[0073] However, embodiments of the radix-r implementation described herein do not utilize memory to store a table of index numbers. Thus, the overall memory accesses can be reduced as compared to the prior implementations. Table 1 below depicts the memory storage for the three implementations described above and for the FFT algorithm herein.
TABLE 1
Memory for table index number
Method                          Memory
Rius and De Porrata-Doria       N/2
Prado                           √N (S even), √(N/2) (S odd)
Pei and Chang                   N
FFT algorithm herein            0
[0074] By examining equations (16) and (17), it can be determined that the data in both algorithms are grouped with their corresponding coefficient multipliers at each stage, because the m-th coefficient multiplier of the l-th butterfly's output shifts if, and only if, v (v = 0, 1, . . . , V-1) is equal to r^(S-s) in the DIF process or to r^s in the DIT process. As a result, and since V = N/r = r^S, the total number of shifts during each stage in the DIT process is r^s, and the total number of shifts during each stage in the DIF process is r^(S-s). Therefore, by implementing the word counter r^(S-s) (word-counter = 0, 1, . . . , r^(S-s)-1) and the shifting counter r^s (shift-counter = 0, 1, . . . , r^s-1) in the DIT process, or the word counter r^s and the shifting counter r^(S-s) in the DIF process, embodiments of systems, methods and devices can achieve highly efficient, self-sorting DIT/DIF radix-r processes through which accesses to the coefficient multiplier's memory are reduced as compared with conventional radix-r DIT/DIF processes.
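The counter bookkeeping above can be checked with a minimal numerical sketch (counter and function names are illustrative, assuming the DIT arrangement: shifting counter over r^s values, word counter over r^(S-s) values at stage s):

```python
def words_per_stage(r, S):
    # Enumerate the DIT stage counters: at stage s the shifting counter runs
    # over r**s values and the word counter over r**(S-s) values, so every
    # stage still touches all V = r**S words, while the coefficient set only
    # changes when the word counter wraps (r**s coefficient shifts per stage).
    V = r ** S
    counts = []
    for s in range(S + 1):
        n = 0
        for shift in range(r ** s):           # coefficient-memory accesses
            for word in range(r ** (S - s)):  # words reusing those coefficients
                n += 1
        counts.append(n)
    return V, counts
```

For r = 2, S = 2 (an 8-point transform, V = 4 words), every one of the three stages processes all 4 words, while the number of coefficient-memory accesses grows as 1, 2, 4 across the stages rather than once per word.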
[0075] The DIF FFT can be derived based on the above equations and the discussion below. For the first iteration (i.e., s = 0), equation (20) may equal equation (21) because the second term of the equation may be equal to v and the third term may be equal to zero. Thus, for the first iteration, the RAG and the WAG may have the same structure.
[0076] In fact, when s=0, the third term of equation (20) can be determined as follows:
and since r.sup.S=V, equation (26) can be determined as follows:
[0077] Since v = 0, 1, . . . , V-1, this quotient is always equal to zero. Similarly, the second term ⟨v⟩_(r^S) = ⟨v⟩_V reduces to v.
[0078] Also, for the first iteration, when s = 0, the Coefficient Address Generator (CAG) illustrated in equation (23) can be expressed as for a conventional radix-r butterfly, where the term ⟨mlV⟩_N represents the adder tree matrix T_r, as follows:
As a result, the first iteration involves no twiddle factor multiplication.
[0079] For s > 1, modulo and integer part operations dominate the workload in the read and coefficient address generators. The notation ⟨A⟩_B denotes A modulo B, which is equal to the residue (remainder) of the division of A by B, and ⌊A/B⌋ denotes the quotient (integer part) of the division of A by B. The modulo operation, in a hardware implementation, can be represented by a resettable counter. During each stage, V words (v = 0, 1, . . . , V-1) may be processed. Thus, the third term of equations (20) and (23) is a function of r^s and can be replaced by the modulo operation. In fact, since v varies between 0 and V-1, the third term can be expressed as follows:
and will vary between 0 and r^s - 1. As a result, the integer part operation in equations (20) and (23) can be simplified as follows:
for l = 0, 1, . . . , r^s - 1, s = 0, 1, . . . , S, and S = log_r N - 1, where S is the number of stages.
[0080] Based on equations (31) and (33), for s > 1, r^(S-s) words may encounter trivial multiplication (i.e., w^0 = 1). As a result, the proposed simplified algorithm can be based on three simple counters as follows:
[0081] 1. Stage or Iteration Counter
s = 0, 1, . . . , S, S = log_r N - 1; (32)
2. Shifting Counter
[0082]
I = 0, 1, . . . , r^s - 1; (33)
and
[0083] 3. Word Counter
M = 0, 1, . . . , r^(S-s) - 1. (34)
One possible implementation of the DIT radix-r address generator, which uses some of the above equations, is described below with respect to
[0084]
[0085] At 1008, the method 1000 can include determining a read address generator for each word. The method 1000 can also include executing the butterfly Radix-r, at 1010. At 1012, the method 1000 may include determining a write address generator for each word. At 1014, if the current word counter (v) is not greater than the total number of words minus one, the method 1000 may include incrementing the word counter, at 1016. The method 1000 may then return to 1008 to determine the read address generator.
[0086] Otherwise, at 1014, if the word counter is greater than the total number of words minus one, the method 1000 may include initializing a plurality of parameters, at 1018. At 1020, the method 1000 can include initializing a plurality of additional parameters. At 1022, the method 1000 may include determining the read address generator. At 1024, the method 1000 can include executing the butterfly Radix-r. At 1026, the method 1000 may include determining the write address generator. At 1028, if the word counter (v) is not greater than the total number of words (B) minus one, the method 1000 may include incrementing the word counter, at 1030. The method may then return to 1022 to determine the read address generator.
[0087] Otherwise, at 1028, if the word counter (v) is greater than the total number of words minus one, the method 1000 may include initializing a plurality of parameters, at 1032. At 1034, the method 1000 may include initializing further parameters. At 1036, the method 1000 can include determining a read address generator. At 1038, the method 1000 may include executing the Radix-r butterfly. At 1040, the method 1000 can include determining the write address generator.
[0088] At 1042, if the iteration counter (L) is not greater than a total number of words (B) minus one, the method 1000 may include incrementing the iteration counter 1044. The method 1000 may return to 1036 to determine the read address generator.
[0089] Returning to 1042, if the iteration counter (L) is greater than the total number of words (B) minus one, the method 1000 may advance to 1046. If, at 1046, the word counter (v) is not greater than a total number of words minus two, the method 1000 may include incrementing the word counter at 1048. The method 1000 may then advance to 1034 to initialize a plurality of parameters.
[0090] Returning to 1046, if the word counter (v) is greater than the total number of words minus two, the method 1000 can include advancing to 1050. If, at 1050, the stage counter (s) is not greater than the total number of stages minus one, the method 1000 may include incrementing the stage counter, at 1052. The method 1000 may then return to 1020 to initialize a plurality of parameters. Otherwise, at 1050, if the stage counter (s) is greater than the total number of stages minus one, the method 1000 may terminate, at 1054.
[0091]
[0092] Returning to 1111, if the word counter (v) is greater than the total number of words minus one, the method 1100 may include initializing a plurality of parameters, at 1114. At 1116, the method 1100 may include initializing additional parameters. At 1118, the method 1100 may include determining the read address generator. At 1120, the method 1100 can include executing the butterfly Radix-r. At 1122, the method 1100 can include determining the write address generator. At 1124, if the word counter (v) is not greater than the total number of words minus one, the method 1100 may include incrementing the word counter at 1126. The method 1100 may then return to 1118 to determine the read address generator.
[0093] Returning to 1124, if the word counter (v) is greater than the total number of words minus one, the method 1100 may initialize a plurality of parameters at 1128 and 1130. The method 1100 may include determining a read address generator at 1132, executing the butterfly Radix-r at 1134, and determining a write address generator at 1136. At 1138, if the word counter (L) is not greater than the number of words minus one, the method 1100 may include incrementing the word counter at 1140 and then returning to 1130 to initialize some of the parameters.
[0094] Returning to 1138, if the word counter (L) is greater than the total number of words minus one, the method 1100 may include setting the input (Xin) equal to the output (Xout), at 1142. At 1144, if the shifting counter (v) is not greater than the total number of shifts minus two, the method 1100 may include incrementing the shifting counter at 1146 and returning to 1130 to initialize some of the parameters.
[0095] Returning to 1144, if the shifting counter (v) is greater than the total number of shifts minus two, the method 1100 may advance to 1148. At 1148, if the stage counter (s) is not greater than the total number of stages minus one, the method 1100 may increment the stage counter (s) at 1150. The method 1100 may then return to 1116 to initialize some of the parameters. Returning to 1148, if the stage counter (s) is greater than the total number of stages minus one, the method 1100 may terminate at 1152.
[0096] In general, the method 1000 in
[0097] It should be appreciated that the method 1000 of
[0098] In some embodiments, an apparatus, such as a processor, a central processing unit, or other data processing circuit, can be configured to implement the methods described with respect to at least one of
[0099]
[0100]
[0101]
[0102]
[0103] In the illustrated example, the butterfly input (Bin) includes the input function. The butterfly Radix-r 1502 may be configured to generate a butterfly output (Bout) as well as the butterfly write-address output (Bw).
[0104] It should be appreciated that, while the examples above utilized Matlab for the purpose of demonstrating the functionality, the generalized radix-R FFT functionality may be programmed using other programming languages, or using software modules implemented in a variety of different programming languages and configured to share information. The examples provided are for illustrative purposes only and are not intended to be limiting.
[0105] In conjunction with the methods, devices, and systems described above with respect to
[0106] The processes, machines, and manufactures (and improvements thereof) described herein are particularly useful improvements for computers that process complex data. Further, the embodiments and examples herein provide improvements in the technology of image processing systems. In addition, embodiments and examples herein provide improvements to the functioning of a computer by enhancing the speed of the processor in handling complex mathematical computations by reducing the overall number of memory accesses (read and write operations) performed in order to complete the computations. Thus, the improvements provided by the FFT implementations described herein provide for technical advantages, such as providing a system in which real-time signal processing and off-line spectral analysis are performed more quickly than in conventional devices, because the overall number of memory accesses (which can introduce delays) is reduced. Further, the radix-r FFT can be used in a variety of data processing systems to provide faster, more efficient data processing. Such systems may include speech, satellite, and terrestrial communications; wired and wireless digital communications; multi-rate signal processing; target tracking and identification; radar and sonar systems; machine monitoring; seismology; biomedicine; encryption; video processing; gaming; convolutional neural networks; digital signal processing; image processing; speech recognition; computational analysis; autonomous cars; deep learning; and other applications. For example, the systems and processes described herein can be particularly useful to any systems in which it is desirable to process large amounts of data in real time or near real time. Further, the improvements herein provide additional technical advantages, such as providing a system in which the number of memory accesses can be reduced.
While technical fields, descriptions, improvements, and advantages are discussed herein, these are not exhaustive and the embodiments and examples provided herein can apply to other technical fields, can provide further technical advantages, can provide for improvements to other technologies, and can provide other benefits to technology. Further, each of the embodiments and examples may include any one or more improvements, benefits and advantages presented herein.
[0107] The illustrations, examples, and embodiments described herein are intended to provide a general understanding of the structure of various embodiments. The illustrations are not intended to serve as a complete description of all of the elements and features of apparatus and systems that utilize the structures or methods described herein. Many other embodiments may be apparent to those of skill in the art upon reviewing the disclosure. Other embodiments may be utilized and derived from the disclosure, such that structural and logical substitutions and changes may be made without departing from the scope of the disclosure. For example, in the flow diagrams presented herein, in certain embodiments, blocks may be removed or combined without departing from the scope of the disclosure. Further, structural and functional elements within the diagram may be combined, in certain embodiments, without departing from the scope of the disclosure. Moreover, although specific embodiments have been illustrated and described herein, it should be appreciated that any subsequent arrangement designed to achieve the same or similar purpose may be substituted for the specific embodiments shown.
[0108] This disclosure is intended to cover any and all subsequent adaptations or variations of various embodiments. Combinations of the examples, and other embodiments not specifically described herein, will be apparent to those of skill in the art upon reviewing the description. Additionally, the illustrations are merely representational and may not be drawn to scale. Certain proportions within the illustrations may be exaggerated, while other proportions may be reduced. Accordingly, the disclosure and the figures are to be regarded as illustrative and not restrictive.