Convolutional neural networks based computationally efficient method for equalization in FBMC-OQAM system
11368349 · 2022-06-21
Inventors
- Muhammad Moinuddin (Jeddah, SA)
- Ubaid M. Al-Saggaf (Jeddah, SA)
- Abdulrahman U. Alsaggaf (Jeddah, SA)
- Shujaat Khan (Daejeon, KR)
- Syed Sajjad Hussain Rizvi (Karachi, PK)
Cpc classification
H04L27/26416
H04L27/2698
Abstract
A filter bank multi-carrier (FBMC)-offset quadrature amplitude modulation (OQAM) system is disclosed. The FBMC-OQAM system includes a processing circuitry which is configured to receive a signal over a transmission medium, equalize the signal by a convolution neural network (CNN) equalizer, wherein the CNN equalizer is configured to estimate the received signal without performing channel estimation, and output the estimated signal as a bit stream.
Claims
1. A method of performing an equalization in filter bank multi-carrier (FBMC)-offset quadrature amplitude modulation (OQAM) system, comprising: receiving a signal over a transmission medium; equalizing the signal by a convolution neural network (CNN) equalizer, wherein the CNN equalizer is configured to estimate the received signal without performing channel estimation; and outputting the estimated signal as a bit stream, wherein the CNN equalizer is trained on inputted pilot signal sets as training samples and the inputted pilot signal sets include currently received signal sets.
2. The method of claim 1, wherein the CNN includes a plurality of layers which include a plurality of convolution layers, a dropout layer, and a plurality of activation function layers.
3. The method of claim 2, wherein the plurality of activation function layers include a plurality of rectified linear unit activation function layers and a plurality of linear layers.
4. The method of claim 1, wherein inputted pilot signal sets include previously transmitted or known pilot signal sets.
5. The method of claim 1, wherein separate sets of training samples are used which correspond to different noise ratios.
6. The method of claim 2, wherein the CNN equalizer uses skip connections by allowing at least one input layer to by-pass adjacent layers on route to a target layer.
7. The method of claim 5, wherein the CNN equalizer uses a max pooling technique as a pooling operation.
8. A method of performing an equalization in filter bank multi-carrier (FBMC)-offset quadrature amplitude modulation (OQAM) system, comprising: receiving a signal over a transmission medium; equalizing the signal by a convolution neural network (CNN) equalizer, wherein the CNN equalizer is configured to estimate the received signal without performing channel estimation; and outputting the estimated signal as a bit stream, wherein the estimated signal is converted from a parallel format to a serial format prior to being output as a bit stream; and after receiving the signal over the transmission medium, converting the signal from a serial format to a parallel format, decomposing the signal by a plurality of filter banks; and converting the signal by a first algorithm.
9. The method of claim 8, wherein the first algorithm is a fast Fourier transform.
10. A filter bank multi-carrier (FBMC)-offset quadrature amplitude modulation (OQAM) system, comprising: processing circuitry configured to: receive a signal over a transmission medium; equalize the signal by a convolution neural network (CNN) equalizer, wherein the CNN equalizer is configured to estimate the received signal without performing channel estimation; and output the estimated signal as a bit stream, wherein the CNN equalizer is trained on inputted pilot signal sets as training samples and the inputted pilot signal sets include currently received signal sets.
11. The system of claim 10, wherein the CNN includes a plurality of layers which include a plurality of convolution layers, a dropout layer, and a plurality of activation function layers.
12. The system of claim 11, wherein the plurality of activation function layers include a plurality of rectified linear unit activation function layers and a plurality of linear layers.
13. The system of claim 10, wherein inputted pilot signal sets include previously transmitted or known pilot signal sets.
14. The system of claim 10, wherein separate sets of training samples are used which correspond to different noise ratios.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) A more complete appreciation of this disclosure and many of the attendant advantages thereof will be readily obtained as the same becomes better understood by reference to the following detailed description when considered in connection with the accompanying drawings, wherein:
DETAILED DESCRIPTION
(11) In the drawings, like reference numerals designate identical or corresponding parts throughout the several views. Further, as used herein, the words “a,” “an” and the like generally carry a meaning of “one or more,” unless stated otherwise.
(12) Furthermore, the terms “approximately,” “approximate,” “about,” and similar terms generally refer to ranges that include the identified value within a margin of 20%, 10%, or preferably 5%, and any values therebetween.
(13) Aspects of this disclosure are directed to a Filter Bank Multi-Carrier (FBMC)-Offset Quadrature Amplitude Modulation (OQAM) system and a method of performing an equalization in the FBMC-OQAM system. In the present disclosure, a Convolutional Neural Network (CNN) is adapted for equalization without requiring channel estimation, which reduces the complexity of the FBMC-OQAM system.
(15) As described in
(16) The OQAM pre-processing block 102 is configured to maintain orthogonality between subcarriers. The OQAM pre-processing block 102 separates each complex symbol into its real part and imaginary part and staggers them by half a symbol period to form the transmission symbols. Accordingly, the staggered real and imaginary parts are mapped onto the subcarriers. The OQAM post-processing block 122 is configured to take the real part of the signal demodulated from each subcarrier and then reconstruct a complex signal by recombining the staggered real-valued components. The IFFT block 104 is configured to perform an IFFT operation on the transmission symbols. The poly-phase filter 106 and the poly-phase filter 118 are configured to decompose a signal. The companding block 108 is configured to attenuate high peaks of a signal and amplify low amplitudes of the signal. The inverse companding block 116 is configured to recover the original signal. The P/S converter 110 is configured to convert a signal from a parallel format to a serial format. The S/P converter 114 is configured to convert a signal from a serial format to a parallel format. The FFT block 120 is configured to convert a signal using an FFT. The AWGN block 112 is configured to add white Gaussian noise to a signal.
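The staggering performed by the OQAM pre- and post-processing blocks can be sketched in a few lines of NumPy. This is a simplified illustration under stated assumptions: the per-subcarrier phase factors used in a full FBMC-OQAM chain are omitted, and the function names are hypothetical, not from the disclosure.

```python
import numpy as np

def oqam_preprocess(qam_symbols):
    """Stagger each complex symbol's real and imaginary parts by half a
    symbol period (per-subcarrier phase factors omitted in this sketch)."""
    out = np.empty(2 * len(qam_symbols))
    out[0::2] = qam_symbols.real   # real parts at even half-symbol slots
    out[1::2] = qam_symbols.imag   # imaginary parts at odd half-symbol slots
    return out

def oqam_postprocess(real_stream):
    """Recombine the staggered real-valued stream into complex symbols."""
    return real_stream[0::2] + 1j * real_stream[1::2]

x = np.array([1 + 2j, -3 + 0.5j])
restored = oqam_postprocess(oqam_preprocess(x))
```

The round trip recovers the original complex symbols, which is the essence of the pre-processing block 102 and post-processing block 122 pair.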
(17) For the FBMC-OQAM system 100, the complex orthogonality condition ⟨g.sub.l1,k1(t), g.sub.l2,k2(t)⟩=δ.sub.(l2−l1),(k2−k1) is substituted by the less strict real orthogonality condition R{⟨g.sub.l1,k1(t), g.sub.l2,k2(t)⟩}=δ.sub.(l2−l1),(k2−k1). The transmitted basis pulse for the l.sup.th frequency and k.sup.th time symbol, denoted by g.sub.l,k(t), can be defined as:
(18)
where g.sub.l,k represents the sampled version of the basis pulse g.sub.l,k(t). Further, g.sub.l,k is used to denote the N-sample basis pulse vector for the l.sup.th frequency and k.sup.th time symbol. In an example, there may be a total of L frequency sub-carriers and K time symbols. Accordingly, all the basis pulse vectors are stacked in a large transmit matrix G∈C.sup.N×LK as:
G=[g.sub.1,1 . . . g.sub.L,1 g.sub.1,2 . . . g.sub.L,K] (2)
and all data symbols in a large transmit symbol vector x∈C.sup.LK×1 as:
x=[x.sub.1,1 . . . x.sub.L,1 x.sub.1,2 . . . x.sub.L,K].sup.T. (3)
(19) The sampled transmit signal s∈C.sup.N×1 is expressed as:
s=Gx. (4)
(20) Also, multipath propagation over time-variant channels is modeled by a time-variant impulse response denoted as h[m.sub.τ, n], where m.sub.τ represents the delay and n represents the time position. In an example, the impulse response in a time-variant convolution matrix H∈C.sup.N×N may be defined as:
[H].sub.i,j=h[i−j,i] (5)
(21) Accordingly, the received signal can be expressed as:
r=HGx+ñ (6)
where, ñ is a zero mean complex white Gaussian noise vector with correlation matrix P.sub.nI and P.sub.n is the noise power.
(22) Further, the sampled receive basis pulses q.sub.l,k∈C.sup.N×1 can be stacked in a matrix as:
Q=[q.sub.1,1 . . . q.sub.L,1 q.sub.1,2 . . . q.sub.L,K]. (7)
(23) According to an aspect, the receiver of the FBMC-OQAM system 100 uses a matched filter, that is, Q=G. Thus, the received signal of equation (6), after pulse de-shaping with Q=G, can be written as:
y=G.sup.Hr=G.sup.HHGx+n (8)
where, n˜CN (0, P.sub.nG.sup.HG). The received signal may be processed to obtain an estimate of the transmitted signal.
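Equations (2) through (8) can be exercised end to end with a toy NumPy sketch. Note the assumptions: random unit-scaled columns stand in for the actual Hermite prototype-filter basis pulses, the toy impulse response is random, and all sizes are illustrative rather than taken from the disclosure.

```python
import numpy as np

rng = np.random.default_rng(0)
N, L, K = 64, 4, 3   # samples, subcarriers, time symbols (toy sizes)

# Transmit matrix G (eq. 2): columns are basis-pulse vectors g_{l,k}.
# Random columns stand in for the actual prototype-filter pulses.
G = (rng.standard_normal((N, L * K)) + 1j * rng.standard_normal((N, L * K))) / np.sqrt(2 * N)

x = rng.standard_normal(L * K) + 1j * rng.standard_normal(L * K)  # symbols (eq. 3)
s = G @ x                                                          # transmit signal (eq. 4)

# Time-variant convolution matrix (eq. 5): [H]_{i,j} = h[i-j, i].
h = 0.1 * rng.standard_normal((N, N))   # toy impulse response h[m_tau, n]
H = np.zeros((N, N), dtype=complex)
for i in range(N):
    for j in range(i + 1):              # delay m_tau = i - j must be >= 0
        H[i, j] = h[i - j, i]

Pn = 1e-3                               # noise power
n_tilde = np.sqrt(Pn / 2) * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
r = H @ s + n_tilde                     # received signal (eq. 6)
y = G.conj().T @ r                      # matched-filter output (eq. 8), Q = G
```

The final line implements the pulse de-shaping y = G.sup.H r, giving one observation per transmitted symbol, which is the input the CNN equalizer operates on.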
(24) According to aspects of the present disclosure, a Convolution Neural Network (CNN) equalizer is applied for performing an equalization in an FBMC-OQAM system. The CNN is a specialized type of neural network. Unlike conventional fully connected dense neural network models, where every neuron is hard-wired (i.e., connected) to every neuron of the adjacent layer, in a CNN each layer is connected to the next layer through convolution pathways. The convolution pathways are defined by the number of convolution filters in that layer. This sparse, weight-shared connectivity allows the CNN to learn spatially invariant features and reduces the computational cost. The CNN includes a plurality of layers which include a plurality of convolution layers, a dropout layer, and a plurality of activation function layers. In an example, the plurality of activation function layers include a plurality of rectified linear unit activation function layers and a plurality of linear layers. According to an aspect, the CNN equalizer is trained on inputted pilot signal sets as training samples. In an example, the inputted pilot signal sets include previously transmitted or known pilot signal sets. For example, the inputted pilot signal sets include currently received signal sets. Further, separate sets of training samples are used which correspond to different noise ratios.
(25) The CNN equalizer uses skip connections by allowing at least one input layer to by-pass adjacent layers en route to a target layer. In an example, during a testing phase, the skip connections allow an input layer to by-pass the adjacent layers with the help of a pooling operation, which improves the decision; during a training phase, the skip connections allow back-propagation of the error gradient, which helps fast learning. In an example, with a linear increase in skip connections, the representation power of the CNN equalizer can be increased exponentially. Examples of pooling techniques include a max pooling technique, a min pooling technique, and an average pooling technique. A pooling technique filters the essential information passed from one layer of the CNN to the next. In an example, according to the max pooling technique, the maximum signal is selected; that is, only those filter outputs that are largest among the selected set of signals are passed to the next layer. The selected set is defined by a pooling window size. Similarly, the min pooling technique and the average pooling technique pass the minimum and average values, respectively.
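The pooling operation just described can be sketched in NumPy. This is a minimal non-overlapping 1-D max pooling, with the window size playing the role of the pooling window mentioned above; the function name and the example values are illustrative.

```python
import numpy as np

def max_pool_1d(x, window):
    """Non-overlapping max pooling: keep only the largest value in each
    pooling window (stride equal to the window size)."""
    trimmed = x[: len(x) // window * window]   # drop any ragged tail
    return trimmed.reshape(-1, window).max(axis=1)

signal = np.array([0.1, 0.9, 0.3, 0.2, 0.8, 0.4])
pooled = max_pool_1d(signal, window=2)   # -> [0.9, 0.3, 0.8]
```

Replacing `.max(axis=1)` with `.min(axis=1)` or `.mean(axis=1)` yields the min pooling and average pooling variants, respectively.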
(26) The present disclosure describes a deep FBMC method that utilizes the CNN equalizer. The deep FBMC method employing the CNN equalizer may interchangeably be referred to as the CNN based FBMC equalizer or deep FBMC CNN. The CNN based FBMC equalizer as described herein uses the max pooling technique as its pooling operation, as it is computationally less expensive than the min pooling and average pooling techniques. The present disclosure uses the Rectified Linear Unit (ReLU) activation function, according to which the function output is rectified (set to zero) for negative inputs. In an example, the non-linear rectification operation combined with linear combinations creates a combinatorial set of decision spaces, where one decision space is separated from another in a piece-wise fashion. The linear operation for positive inputs gives a constant gradient value, which further reduces computation and allows rapid learning. An architecture 302 of the CNN based FBMC equalizer is depicted in
(27) As shown in
(28) According to aspects of the present disclosure, the CNN based FBMC equalizer may be a pre-trained model that evaluates the input signal based on previous analysis and makes a decision in a single iteration. In an example, multiple convolution modules are used to construct a deep architecture of the CNN based FBMC equalizer. Each convolution module includes two convolution layers, a dropout layer, and ReLU activation function layers, except the final module, which uses a linear layer. The CNN based FBMC equalizer can be trained on pre-stored pilot signals and received signals to estimate the channel behavior. After estimating the weights of the neurons that provide the desired performance, the trained CNN based FBMC equalizer can be used for equalization in a test scenario.
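A single-channel NumPy rendering of such a convolution module — two convolutions, dropout, ReLU activations, and a skip connection — might look as follows. This is a hypothetical sketch: the kernel values, the single-channel simplification, and the exact placement of dropout are assumptions, while the actual multi-channel wiring is given in the figures.

```python
import numpy as np

def relu(x):
    """ReLU activation: output is rectified (zeroed) for negative inputs."""
    return np.maximum(x, 0.0)

def conv_module(x, k1, k2, drop_mask=None):
    """One convolution module: conv -> dropout -> ReLU -> conv -> ReLU,
    with a skip connection adding the module input back onto its output.
    Single-channel for clarity; the patented design uses many filters."""
    h = np.convolve(x, k1, mode="same")
    if drop_mask is not None:          # dropout is active only during training
        h = h * drop_mask
    h = relu(h)
    h = relu(np.convolve(h, k2, mode="same"))
    return x + h                        # skip connection

x = np.sin(np.linspace(0, 2 * np.pi, 16))
k = np.array([0.25, 0.5, 0.25])        # illustrative smoothing kernel
y = conv_module(x, k, k)
```

The skip connection (`x + h`) is what lets the input by-pass the module's layers en route to the next module, and during training it provides a direct path for the error gradient.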
(29) According to aspects of the present disclosure, the FBMC-OQAM system 100 is configured to receive a signal over a transmission medium. In an example, the signal may be received from a plurality of channels. Further, the FBMC-OQAM system 100 is configured to equalize the signal using the CNN based FBMC equalizer. The CNN based FBMC equalizer is configured to estimate the received signal without performing channel estimation. The FBMC-OQAM system 100 then outputs the estimated signal as a bit stream. Also, the FBMC-OQAM system 100 is configured to convert the estimated signal from a parallel format to a serial format before outputting the estimated signal as a bit stream.
(30) In an aspect, after receiving the signal over the transmission medium, the FBMC-OQAM system 100 is configured to convert the signal from a serial format to a parallel format. Also, the FBMC-OQAM system 100 is configured to decompose the signal by a plurality of filter banks and convert the signal by a first algorithm. In an example, the first algorithm may be a fast Fourier transform.
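The receive-side steps just listed — serial-to-parallel conversion followed by the first algorithm (an FFT) — can be sketched as below. The block size `M` and the stand-in sample stream are illustrative assumptions, and the polyphase filter-bank decomposition is omitted.

```python
import numpy as np

M = 8                                    # FFT block size (assumed, illustrative)
serial = np.arange(32, dtype=complex)    # stand-in received sample stream

parallel = serial.reshape(-1, M)         # serial-to-parallel conversion
spectrum = np.fft.fft(parallel, axis=1)  # "first algorithm": per-block FFT
# (filter-bank decomposition and CNN equalization would follow here)
restored = np.fft.ifft(spectrum, axis=1).reshape(-1)  # round-trip sanity check
```

The inverse transform restores the original stream exactly, confirming that the S/P conversion and FFT steps are lossless reshaping and transform operations.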
(31) Overall system design for the CNN based FBMC equalizer is depicted in
Examples and Experiments
(32) The following examples are provided to illustrate further and to facilitate the understanding of the present disclosure.
(33) Experimental Data and Analysis
(34) For the experiment, seven sets were generated for different signal to noise ratios (SNRs), i.e., 10 dB, 15 dB, 20 dB, 25 dB, 30 dB, 35 dB, and 40 dB. For each SNR value, one thousand (1000) samples were produced, and each sample included thirty (30) time representations and twenty-four (24) frequency representations. Of the one thousand (1000) samples, eight hundred (800) were used for training, one hundred (100) for validation, and the remaining one hundred (100) for testing.
(35) Further, MATLAB code (MATLAB 2018) was used to generate input and output data for the FBMC-OQAM system 100 in a doubly selective channel. As a result, multiple signal sets were generated, representing the pilot signals. Also, the FBMC-OQAM system 100 was tested with the Keras TensorFlow framework using Python 3 on different subsets of data, and performance statistics were recorded.
(36) In an aspect, four performance statistics were used to evaluate the performance of the FBMC-OQAM system 100. The four performance statistics include (1) computation time, (2) mean-absolute error in recovered and transmitted signal, (3) model complexity in terms of learning parameters, and (4) bit error rate (BER) between transmitted and received signal.
(37) The FBMC-OQAM system 100 is a machine learning-based approach that requires training a deep neural network model using a limited amount of training data and computational resources. Further, since the FBMC-OQAM system 100 is trained multiple times for different channel conditions, a fast training approach is used.
(38) The effective training of the CNN based FBMC equalizer depends on batch size, regularization technique, learning rate, optimization algorithm, stopping conditions, data shuffling, and loss function. An Adam optimization algorithm with default learning rate (step size) and decay rates was used for training the CNN based FBMC equalizer. The maximum number of allowed epochs was set to one thousand (1000), while training of the CNN based FBMC equalizer was conditioned on reducing the mean absolute error. For every epoch, the training dataset was shuffled randomly to improve generalization, and on completion of the epoch, its final mean absolute error value was compared with the previous best error value. If the newly updated weights did not improve the error, the epoch was counted as a patience step. In an example, the training is terminated after one thousand (1000) epochs or one hundred (100) patience steps. On every successful improvement in error, the weights of the CNN based FBMC equalizer were stored as the best model weights. Upon completion of the training, the final weights were loaded from the best model weights to test the CNN based FBMC equalizer. Further, the same CNN based FBMC equalizer is used for all signal-to-noise ratios and channel conditions. Also, the use of the CNN based FBMC equalizer significantly reduces the requirement for large memory or high-power computing hardware. Further, the use of the CNN based FBMC equalizer in the FBMC-OQAM system 100 results in low computational complexity, as the CNN based FBMC equalizer does not employ the large matrix inversions used in conventional Minimum Mean Squared Error (MMSE) equalizers and Zero Forcing (ZF) equalizers.
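The stopping rule described above — terminate after one thousand epochs or one hundred non-improving patience steps, keeping the weights of the best epoch — can be sketched framework-independently. The function name and the toy error sequence are hypothetical; in practice this logic corresponds to an early-stopping callback in Keras.

```python
def train_with_patience(epoch_errors, max_epochs=1000, patience=100):
    """Early stopping: track the best mean absolute error seen so far and
    stop after `patience` consecutive epochs without improvement."""
    best_err, best_epoch, waited = float("inf"), -1, 0
    for epoch, err in enumerate(epoch_errors[:max_epochs]):
        if err < best_err:
            best_err, best_epoch, waited = err, epoch, 0  # store best weights here
        else:
            waited += 1                  # one more patience step consumed
            if waited >= patience:
                break                    # patience exhausted: terminate
    return best_err, best_epoch

# Toy error trace: improvement up to epoch 3, then a 150-epoch plateau.
errors = [1.0, 0.8, 0.9, 0.7] + [0.75] * 150
best_err, best_epoch = train_with_patience(errors)
```

On this trace, training stops one hundred epochs into the plateau and reports the epoch-3 weights as best, mirroring how the final weights are loaded from the best model weights before testing.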
(39) In the simulation setup, twenty-four (24) sub-carriers (i.e., L=24) and thirty (30) time symbols (i.e., K=30) were used, and five hundred (500) Monte Carlo simulation runs were used. The subcarrier spacing was set to 15 kHz. For OQAM implementation, the PAM modulation order was set to sixteen (16), which is equivalent to 256 QAM. Further, the Hermite prototype filter was used for waveform shaping at both transmitter and receiver. The SNR range 1 dB to 45 dB was used. The zero mean complex circular Gaussian channel vector ‘h’ was generated using the Jakes model for 2.5 GHz carrier frequency and 500 km/h vehicular speed, corresponding to a maximum Doppler shift of 1.16 kHz.
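The quoted maximum Doppler shift can be checked directly from the standard relation f.sub.d = v·f.sub.c/c using the simulation parameters above (the variable names are illustrative):

```python
# Maximum Doppler shift f_d = v * f_c / c for the stated simulation setup.
v = 500 / 3.6      # vehicular speed: 500 km/h converted to m/s
f_c = 2.5e9        # carrier frequency, Hz
c = 3e8            # speed of light, m/s
f_d = v * f_c / c  # ~1157 Hz, i.e. about 1.16 kHz as stated
```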
(40) It was found that the CNN based FBMC equalizer with thirty two (32) channels in each layer produces the best results. A summary of computational complexity of the CNN based FBMC equalizer in terms of a number of learnable and non-learnable parameters (complexity) for different design choices (number of filters) is provided in table 1.
(41) TABLE 1: Computational Complexity of the CNN based FBMC Equalizer

Channels   Total Parameters   Trainable Parameters   Non-Trainable Parameters
8×             11729              11569                   160
16×            45345              45025                   320
32×           178241             177601                   640
64×           706689             705409                  1280
128×         2814209            2811649                  2560
256×        11231745           11226625                  5120
(42) It can be seen in Table 1 that increasing the number of channels (i.e., the number of filters) increases the number of both trainable and non-trainable parameters. The total number of parameters used in the CNN based FBMC equalizer with thirty-two (32) channels is 178241, in comparison with 1245800 for the conventional equalizer. Hence, the computational complexity of the CNN based FBMC equalizer is much lower than that of the conventional equalizer.
(46) The CNN based FBMC equalizer is selected based on optimal validation performance and minimal complexity.
(48) It can be observed in
(50) At step 802, the method 800 includes receiving a signal over a transmission medium.
(51) At step 804, the method 800 includes equalizing the signal by a CNN equalizer, where the CNN equalizer is configured to estimate the received signal without performing channel estimation. The CNN equalizer includes a plurality of layers which include a plurality of convolution layers, a dropout layer, and a plurality of activation function layers. Further, the plurality of activation function layers include a plurality of rectified linear unit activation function layers and a plurality of linear layers. The CNN equalizer is trained on inputted pilot signal sets as training samples. The inputted pilot signal sets include previously transmitted or known pilot signal sets. In an example, the inputted pilot signal sets include currently received signal sets. Further, separate sets of training samples are used which correspond to different noise ratios. In an aspect, the CNN equalizer uses skip connections by allowing at least one input layer to by-pass adjacent layers en route to a target layer. Further, the CNN equalizer uses a max pooling technique as a pooling operation.
(52) At step 806, the method 800 includes outputting the estimated signal as a bit stream. In an aspect, the estimated signal is converted from a parallel format to a serial format prior to being output as a bit stream.
(53) According to an aspect, after receiving the signal over the transmission medium, the signal is converted from a serial format to a parallel format. Thereafter, the signal is decomposed by a plurality of filter banks. Further, the signal is converted by a first algorithm. In an example, the first algorithm is a fast Fourier transform.
(54) Obviously, numerous modifications and variations of the present disclosure are possible in light of the above teachings. It is therefore to be understood that within the scope of the appended claims, the invention may be practiced otherwise than as specifically described herein.