FAST MODULATION RECOGNITION METHOD FOR MULTILAYER PERCEPTRON BASED ON MULTIMODALLY-DISTRIBUTED TEST DATA FUSION

20220014401 · 2022-01-13


    Abstract

    The present invention discloses a fast modulation recognition method for a multilayer perceptron (MLP) based on multimodally-distributed test data fusion. The method sequentially includes: preprocessing a received signal, obtaining a signal feature sequence, generating a matrix of decision statistics, generating an MLP input feature by fusing the decision statistics, recognizing a modulation mode by using the MLP, and matching the output with a corresponding classification label. The present invention has low algorithm complexity compared with a classical likelihood algorithm, and at the same time improves the recognition precision of a single distribution-test algorithm.

    Claims

    1. A fast modulation recognition method for a multilayer perceptron based on multimodally-distributed test data fusion, comprising the following steps:

    step 1) preprocessing a received signal, a used normalization formula being

    r_k = [Re(r_k) − mean(Re(r))] / σ(Re(r)) + j·[Im(r_k) − mean(Im(r))] / σ(Im(r)),

    and obtaining a signal feature sequence {z_k}_{k=1}^N from the received signal {r_k}_{k=1}^N, wherein Re(r_k) and Im(r_k) represent the real part and the imaginary part of the kth complex signal r_k respectively, mean(Re(r)) and mean(Im(r)) represent the mean values of the real part and the imaginary part of the complex signal respectively, and σ(Re(r)) and σ(Im(r)) represent the standard deviations of the real part and the imaginary part of the complex signal respectively;

    step 2) obtaining decision statistics t*_{hj} by using four distribution test algorithms: a Kolmogorov-Smirnov (KS) test, a Cramer-von Mises (CVM) test, an Anderson-Darling (AD) test, and a variance (Var) test, the statistics for a candidate modulation mode being defined as

    t_mod^KS = max_{1≤n≤N} |F_1(z_n) − F_0(z_n|M)|,

    t_mod^CVM = ∫_{−∞}^{+∞} [F_1(z_n) − F_0(z_n|M)]² dF_0(z_n|M),

    t_mod^AD = ∫_{−∞}^{+∞} [F_1(z_n) − F_0(z_n|M)]² / {F_0(z_n|M)·[1 − F_0(z_n|M)]} dF_0(z_n|M),

    t_mod^Var = (1/N) Σ_{i=1}^{N} (d_i − μ)², with d_i = F_1(z_i) − F_0(z_i|M) and μ = (1/N) Σ_{i=1}^{N} d_i,

    wherein F_1(z_n) is the empirical cumulative distribution of the received signal, F_0(z_n|M) is the theoretical cumulative distribution of a candidate modulation mode, and M is the candidate modulation mode;

    step 3) obtaining the 4×n matrix of the decision statistics t*_{hj}, wherein row h corresponds to a test and column j to a candidate modulation mode:

    ( t_{mod 1}^KS   t_{mod 2}^KS   …   t_{mod n}^KS
      t_{mod 1}^CVM  t_{mod 2}^CVM  …   t_{mod n}^CVM
      t_{mod 1}^AD   t_{mod 2}^AD   …   t_{mod n}^AD
      t_{mod 1}^Var  t_{mod 2}^Var  …   t_{mod n}^Var );

    step 4) generating an input feature of an MLP classifier by using the matrix, the input feature being defined as

    t*_{mod i} = t_{mod i}^KS + t_{mod i}^CVM + t_{mod i}^AD + t_{mod i}^Var, i = 1, 2, …, n,

    wherein the input feature of each modulation mode is inputted into the MLP classifier; and

    step 5) obtaining the input features of the different modulation modes after the fusion of the decision statistics, recognizing the modulation modes by the MLP classifier with the input features, and matching the output with a corresponding classification label:

    label = {0, 1, 2, …, n−1}

    2. The fast modulation recognition method for a multilayer perceptron based on multimodally-distributed test data fusion according to claim 1, wherein {z_k}_{k=1}^N is a real part, an imaginary part, an amplitude, or a phase taken from a received complex signal.

    3. The fast modulation recognition method for a multilayer perceptron based on multimodally-distributed test data fusion according to claim 1, wherein the step of generating the matrix of the decision statistics t*_{hj} is as follows: first, calculating the empirical cumulative distribution F_1(z_n) of the received signal; next, calculating the theoretical cumulative distribution F_0(z_n|M) of the candidate modulation mode; and finally, calculating the decision statistics in different modulation modes based on the four distribution test algorithms.

    4. The fast modulation recognition method for a multilayer perceptron based on multimodally-distributed test data fusion according to claim 1, wherein data fusion is performed on the matrix of the decision statistics t*_{hj} in a column addition manner.

    5. The fast modulation recognition method for a multilayer perceptron based on multimodally-distributed test data fusion according to claim 1, wherein the MLP classifier uses forward propagation and backward propagation to train a model and update weights.

    6. The fast modulation recognition method for a multilayer perceptron based on multimodally-distributed test data fusion according to claim 5, wherein recognition precision is calculated after the training ends: the recognition precision P_acc = N_c / N_total × 100% is calculated for the classification result of the MLP classifier, wherein N_c is the quantity of accurately classified samples, and N_total is the total quantity of test signal samples.

    Description

    BRIEF DESCRIPTION OF THE DRAWINGS

    [0023] FIG. 1 is a flowchart of a DTE algorithm according to the present invention.

    [0024] FIG. 2 is a diagram of experimental simulation results in a Gaussian noise channel of a DTE algorithm according to the present invention.

    [0025] FIG. 3 is a diagram of experimental simulation results in a phase shift channel of a DTE algorithm according to the present invention.

    [0026] FIG. 4 is a diagram of experimental simulation results in a frequency shift channel of a DTE algorithm according to the present invention.

    DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

    [0027] The present invention is further described below with reference to the accompanying drawings and specific embodiments, to enable a person skilled in the art to better understand and implement the present invention. However, the embodiments are not used to limit the present invention.

    [0028] Referring to FIG. 1, an embodiment of a fast modulation recognition method for a MIMO communication system based on multimodally-distributed test data fusion of the present invention is provided. The method sequentially includes the steps of preprocessing a received signal, obtaining a signal feature sequence, generating a matrix of decision statistics t*_{hj}, generating an input feature of an MLP classifier, obtaining an output result of the MLP classifier (which requires training before use), and calculating recognition precision.

    [0029] In the present invention, the experimental simulation stage includes four modulation signals M = {BPSK, 8-PSK, 4-QAM, 16-QAM}. Amplitude components of the received signal are used. The length N of a signal sample is 128, the quantity N_T of transmit antennas is 2, and the quantity N_R of receive antennas is 4.

    [0030] The invention has good recognition performance under different channel conditions. In a Gaussian noise channel, when the SNR is greater than 16 dB, the method in the present invention can reach a recognition accuracy of 90%. In a fading channel, the method in the present invention is robust to both frequency shift and phase shift, and the recognition rate is nearly unaffected.

    [0031] Specifically, in the step of preprocessing the received signal, the received signal is preprocessed before the recognition of a modulation mode: the distributions of the real parts and imaginary parts of the complex signal are normalized. The normalization formula is:

    [00005] r_k = [Re(r_k) − mean(Re(r))] / σ(Re(r)) + j·[Im(r_k) − mean(Im(r))] / σ(Im(r)).
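As a concrete sketch, the normalization above can be implemented as follows (a minimal NumPy version; the randomly generated complex signal is a stand-in for the received signal, and the function name is illustrative):

```python
import numpy as np

def normalize(r):
    """Normalize the real and imaginary parts of a complex signal
    to zero mean and unit standard deviation, per the formula above."""
    re = (r.real - r.real.mean()) / r.real.std()
    im = (r.imag - r.imag.mean()) / r.imag.std()
    return re + 1j * im

# Synthetic received signal of length N = 128 (stand-in, for illustration).
rng = np.random.default_rng(0)
r = rng.normal(3.0, 2.0, 128) + 1j * rng.normal(-1.0, 5.0, 128)
z = normalize(r)
```

After normalization, the real and imaginary parts each have zero mean and unit standard deviation, so feature sequences extracted from signals of different power become comparable.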

    [0032] In the step of obtaining the signal feature sequence, because the real part component, imaginary part component, amplitude, and phase of a signal all carry information of the signal to different degrees, the signal feature sequence {z_k}_{k=1}^N of a distribution test can be extracted from any of them. In the present invention, the amplitude component z_k = |r_k| = √(Re{r_k}² + Im{r_k}²) of the signal is selected in the experimental simulation stage.

    [0033] The step of generating the matrix of the decision statistics t*_{hj} is as follows: the empirical cumulative distribution F_1(z_n) of the received signal is first calculated; next, the theoretical cumulative distribution F_0(z_n|M) of each candidate modulation mode is calculated; the decision statistics in the different modulation modes are then calculated based on the four distribution test algorithms (the KS test, the CVM test, the AD test, and the Var test) to construct the matrix of the decision statistics:

    [00006]
    ( t_{mod 1}^KS   t_{mod 2}^KS   …   t_{mod n}^KS
      t_{mod 1}^CVM  t_{mod 2}^CVM  …   t_{mod n}^CVM
      t_{mod 1}^AD   t_{mod 2}^AD   …   t_{mod n}^AD
      t_{mod 1}^Var  t_{mod 2}^Var  …   t_{mod n}^Var ).
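A minimal sketch of the four statistics and the matrix construction, assuming an amplitude feature sequence. The integrals of the CVM and AD statistics are approximated by discrete sums over the sorted samples, and the two candidate amplitude CDFs are hypothetical stand-ins (not the invention's actual candidate set):

```python
import numpy as np

def decision_stats(z, F0):
    """KS, CVM, AD, and Var statistics comparing the empirical CDF F1
    of the feature sequence z with a candidate theoretical CDF F0."""
    z = np.sort(z)
    N = len(z)
    F1 = np.arange(1, N + 1) / N               # empirical CDF at sorted samples
    f0 = np.clip(F0(z), 1e-12, 1 - 1e-12)      # guard the AD denominator
    d = F1 - f0                                # pointwise CDF difference
    t_ks = np.max(np.abs(d))                   # Kolmogorov-Smirnov
    t_cvm = np.mean(d ** 2)                    # Cramer-von Mises (discretized)
    t_ad = np.mean(d ** 2 / (f0 * (1 - f0)))   # Anderson-Darling (discretized)
    t_var = np.var(d)                          # variance test
    return np.array([t_ks, t_cvm, t_ad, t_var])

# Amplitude feature sequence of a synthetic unit-power complex signal.
rng = np.random.default_rng(0)
z = np.abs(rng.normal(0, np.sqrt(0.5), 128) + 1j * rng.normal(0, np.sqrt(0.5), 128))

# Hypothetical candidate amplitude CDFs, for illustration only.
candidates = [lambda x: 1 - np.exp(-x ** 2),   # Rayleigh (true model here)
              lambda x: 1 - np.exp(-x)]        # exponential (mismatched)
T = np.column_stack([decision_stats(z, F0) for F0 in candidates])
print(T.shape)   # 4 tests x n candidate modes
```

Each column of `T` holds the four test statistics for one candidate modulation mode, matching the 4×n matrix above.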

    [0034] In the step of generating the input feature of the MLP classifier, for the 4×n matrix of the decision statistics obtained in the foregoing step, a column addition manner is used for data fusion in the present invention,

    [0035] to obtain the n fused features t*_{mod i} = t_{mod i}^KS + t_{mod i}^CVM + t_{mod i}^AD + t_{mod i}^Var, i = 1, 2, …, n.
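The column-addition fusion can be sketched as follows (the matrix values are hypothetical, for illustration only):

```python
import numpy as np

# Hypothetical 4 x n matrix of decision statistics
# (rows: KS, CVM, AD, Var; columns: candidate modulation modes).
T = np.array([[0.12, 0.31, 0.08],
              [0.02, 0.09, 0.01],
              [0.25, 0.60, 0.15],
              [0.01, 0.04, 0.01]])

t_star = T.sum(axis=0)   # column addition: one fused feature per mode
```

Each entry of `t_star` is the sum of the four test statistics for one candidate mode, giving the n-dimensional input feature vector of the MLP classifier.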

    [0036] An input feature of each modulation mode is inputted into the MLP classifier. Finally, the modulation mode is recognized by using the MLP classifier, and the output is matched with a corresponding classification label:

    label = {0, 1, 2, …, n−1}

    [0037] Before formal use, the MLP classifier requires training. Forward propagation and backward propagation are used to train the model and update the weights. In the present invention, a quasi-Newton method is used, the activation function is set to the hyperbolic tangent function (tanh), the neural network has 3 layers with 8 hidden neurons, there are 1000 training samples and 1000 test samples, and the maximum quantity of iterations is 300 epochs.
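A sketch of such a classifier using scikit-learn's `MLPClassifier` is shown below; the `lbfgs` solver is a quasi-Newton method, matching the stated configuration. The training data is a synthetic stand-in, since the actual fused features are produced by the preceding steps:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_modes, n_samples = 4, 1000

# Synthetic stand-in for fused input features: each sample is noise plus a
# bump on the coordinate of its true class (for illustration only).
y = rng.integers(0, n_modes, n_samples)
X = rng.normal(0.0, 0.5, (n_samples, n_modes))
X[np.arange(n_samples), y] += 2.0

# Hyperparameters mirroring the text: quasi-Newton solver, tanh activation,
# one hidden layer of 8 neurons, at most 300 iterations.
clf = MLPClassifier(hidden_layer_sizes=(8,), activation="tanh",
                    solver="lbfgs", max_iter=300, random_state=0)
clf.fit(X, y)
labels = clf.predict(X)   # outputs matched to labels {0, 1, ..., n-1}
```

The predicted outputs are integer labels in {0, 1, …, n−1}, one per candidate modulation mode, as in the claim.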

    [0038] In the step of calculating the recognition precision, the recognition precision

    [00007] P_acc = N_c / N_total × 100%

    is calculated for a classification result of the MLP classifier, where N_c is the quantity of accurately classified samples, and N_total is the total quantity of test signal samples.
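For example, the precision computation reduces to the following (the label vectors are hypothetical, for illustration):

```python
import numpy as np

y_true = np.array([0, 1, 2, 3, 1, 0, 2, 3])   # ground-truth modulation labels
y_pred = np.array([0, 1, 2, 1, 1, 0, 2, 3])   # classifier outputs (one error)
N_c = int((y_true == y_pred).sum())           # accurately classified samples
P_acc = N_c / len(y_true) * 100               # recognition precision in percent
print(P_acc)   # 87.5
```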

    [0039] The method of the present invention fully considers that, in the case of a high-order modulation mode and a large-scale MIMO system, a common likelihood-based modulation recognition method designed for a single-antenna system has very high calculation complexity. To resolve this problem, the method of the present invention uses a distribution test-based algorithm to solve the modulation recognition problem in a MIMO environment.

    [0040] FIG. 2 shows a comparison of the recognition performance of different recognition algorithms at different SNRs. The algorithm in the present invention, referred to as DTE for short, is better than the classical distribution test algorithms over the entire SNR range. In particular, at a high SNR of 16 dB, its recognition rate is higher than that of the AD test algorithm by 2%.

    [0041] FIG. 3 and FIG. 4 show the robustness of the algorithm in the present invention under different fading channel conditions. It is clear that the ML algorithm is highly susceptible to the impact of a frequency shift and a phase shift: when the shift is large, the recognition performance of the ML algorithm is greatly reduced.

    [0042] As shown in Table 1:

    TABLE 1

    Classifier   Addition                                Multiplication                          Logarithm        Exponent
    ML           6N·M^{N_T}·N_R·Σ_{m=1}^{M^{N_T}} I_m    5N·M^{N_T}·N_R·Σ_{m=1}^{M^{N_T}} I_m    N·M^{N_T}·N_R    N·M^{N_T}·N_R·Σ_{m=1}^{M^{N_T}} I_m
    K-S test     M·N_R·(log₂N + 2N)                      0                                       0                0
    C-v-M test   M·N_R·(log₂N + 3N)                      N·M·N_R                                 0                0
    A-D test     M·N_R·(log₂N + 3N)                      2N·M·N_R                                0                0
    Var test     M·N_R·(log₂N + N)                       0                                       0                0
    DTE          M·N_R·(4·log₂N + 9N)                    3N·M·N_R                                0                0
    MLP          Σ_{k=2}^{l} n_{k−1}·n_k                 Σ_{k=2}^{l} n_{k−1}·n_k                 0                0

    [0043] A comparison of the calculation complexity of different algorithms is given in Table 1. The classical ML algorithm includes exponential operations, which greatly increase the complexity of the algorithm. By contrast, the algorithm in the present invention only involves addition and multiplication and has a clear complexity advantage.
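As a rough check, the addition counts of the distribution-test classifiers in Table 1 can be evaluated at the simulation parameters of this embodiment (N = 128, M = 4, N_R = 4); the ML and MLP rows are omitted because I_m and the layer widths n_k are not specified in this excerpt:

```python
import numpy as np

N, M, N_R = 128, 4, 4   # sample length, candidate modes, receive antennas

# Addition counts per the Table 1 formulas for the distribution-test rows.
adds = {
    "KS":  M * N_R * (np.log2(N) + 2 * N),
    "CvM": M * N_R * (np.log2(N) + 3 * N),
    "AD":  M * N_R * (np.log2(N) + 3 * N),
    "Var": M * N_R * (np.log2(N) + N),
    "DTE": M * N_R * (4 * np.log2(N) + 9 * N),
}
print({k: int(v) for k, v in adds.items()})
```

Note that the DTE count equals the sum of the four individual tests (4208 + 6256 + 6256 + 2160 = 18880), consistent with DTE fusing all four statistics.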

    [0044] The foregoing embodiments are merely preferred embodiments used to fully describe the present invention, and the protection scope of the present invention is not limited thereto. Equivalent replacements or variations made by a person skilled in the art to the present invention all fall within the protection scope of the present invention. The protection scope of the present invention is as defined in the claims.