Bridge damage identification method considering uncertainty
11709979 · 2023-07-25
Inventors
- Wenyu He (Hefei, CN)
- Zhidong Li (Hefei, CN)
- Jing Zhang (Hefei, CN)
- Dong Yang (Hefei, CN)
- Zuocai Wang (Hefei, CN)
CPC classification
G06F30/23
PHYSICS
International classification
G06F30/23
PHYSICS
Abstract
A bridge damage identification method considering uncertainty performs damage identification with a convolutional neural network. A domain classifier is added to form a domain adversarial transfer network; a finite element model of the bridge and time-domain acceleration signals of the real structure serve as input, and the parameters of the feature extractor are continuously updated in the adversarial process between the domain classifier and the feature extractor. This yields a newly designed feature extractor whose extracted features are sensitive only to damage. The method addresses the problem that model-based bridge damage identification is affected by environmental uncertainty and modeling error, which create a discrepancy between the finite element model and the real structure and degrade identification performance in practical applications.
Claims
1. A bridge damage identification method considering uncertainty, comprising:
step 1: constructing a bridge acceleration response dataset;
step 1.1: determining geometric and material parameters of a bridge, establishing a bridge finite element model, dividing the bridge finite element model into units numbered $[1, 2, 3, \ldots, n]$ in sequence, numbering the nodes between the units $[1, 2, 3, \ldots, n-1]$ in sequence, and arranging accelerometers at the nodes between the units;
step 1.2: constructing a bridge lossless dataset in a bridge lossless state: firstly, applying a Gaussian distributed random load once to node 1, and obtaining an acceleration response signal matrix $[a_1, \ldots, a_c, \ldots, a_C]^T$ of each node between the units through the Newmark-$\beta$ method, wherein $a_c$ is an acceleration response signal of length $w$ at the $c$-th node between the units, $C$ is the number of the accelerometers, and $C = n - 1$;
step 1.3: normalizing the acceleration response signals of all the nodes to obtain a normalized acceleration response signal matrix $[\bar{a}_1, \ldots, \bar{a}_c, \ldots, \bar{a}_C]^T$, wherein $\bar{a}_c$ represents the normalized acceleration response signal at the $c$-th node;
step 1.4: randomly intercepting the normalized acceleration response signal matrix $[\bar{a}_1, \ldots, \bar{a}_c, \ldots, \bar{a}_C]^T$ according to a length $\tilde{w}$ to obtain an acceleration response matrix segment $A = [\tilde{a}_1, \ldots, \tilde{a}_c, \ldots, \tilde{a}_C]^T$ of the bridge when the random load is applied to node 1, and taking the acceleration response matrix segment as a sample, wherein $\tilde{a}_c$ represents the normalized acceleration response signal of length $\tilde{w}$ at the $c$-th node;
step 1.5: repeatedly applying the Gaussian distributed random load $m$ times to node 1 between the units and performing steps 1.2-1.4 to obtain $m$ samples; and
step 1.6: processing the nodes between the units in sequence according to steps 1.2-1.5 to obtain the acceleration response matrix segment of the bridge when the random load is applied to each node between the units, wherein a total of $j_1 = m \times (n-1)$ samples form the bridge lossless dataset;
step 2: constructing a bridge damage dataset: simulating unit damage by reducing the stiffness of one unit, and setting $t$ different reduction coefficients to simulate damage grades;
step 2.1: selecting a damage grade at unit 1 for damage simulation, and processing each node between the units according to steps 1.2-1.6 while unit 1 is at the current damage grade, to obtain an acceleration response matrix segment of the bridge under the current damage grade of unit 1;
step 2.2: obtaining, according to step 2.1, the acceleration response matrix segments of the bridge under all the damage grades of unit 1, with a total of $t \times (n-1) \times m$ damage samples; and
step 2.3: processing all the units in sequence according to steps 2.1-2.2 to obtain the acceleration response matrix segment samples of the bridge under the different damage grades of each unit, with a total of $j_2 = t \times (n-1) \times m \times n$ damage samples forming the bridge damage dataset;
step 3: setting labels $Y = [y_1, \ldots, y_l, \ldots, y_L]$ for the bridge lossless dataset and the bridge damage dataset, wherein $y_l$ represents the label corresponding to the $l$-th sample, $L$ is the total number of the samples, $L = j_1 + j_2$, and $y_l \in [0, 1, 2, \ldots, n]$, wherein $y_l = 0$ represents the bridge lossless state corresponding to the $l$-th sample, and $y_l = 1, 2, \ldots, n$ represents the serial number of the damaged unit of the bridge corresponding to the $l$-th sample;
step 4: combining the bridge lossless dataset and its label with the bridge damage dataset and its label to obtain a labeled source domain dataset $S(X, Y)$, wherein $X$ represents the union of the bridge lossless dataset and the bridge damage dataset, $X = [A_1, \ldots, A_l, \ldots, A_L]$, and $A_l$ represents the $l$-th combined sample;
step 5: simulating a real structure of the bridge by adding uncertainty to the bridge finite element model;
step 5.1: taking uncertainty of the stiffness of the bridge as working condition (1), and simulating working condition (1) by multiplying the stiffness of each unit of the bridge finite element model by a random factor $\delta$ obeying a Gaussian distribution, whereupon a target domain dataset $T_1(X)$ is obtained according to steps 1 and 2, the target domain dataset $T_1(X)$ being free of labels;
step 5.2: taking a geometric error and a material error of the bridge as working condition (2), simulating the material error by changing the density and elastic modulus parameters of the bridge finite element model, and simulating the geometric error by changing the length and the cross-section width and height of the bridge in the finite element model, to obtain a target domain dataset $T_2(X)$ according to steps 1 and 2 under working conditions (1) and (2), the target domain dataset $T_2(X)$ being free of labels; and
step 5.3: taking the influence of environmental noise in actual measurement as working condition (3), and simulating working condition (3) by adding Gaussian noise $D_{noise} \sim N(0, \sigma^2)$ with mean 0 and variance $\sigma^2$ to the bridge finite element model, that is, adding the noise $D_{noise} \sim N(0, \sigma^2)$ to the normalized acceleration response signal matrix $[\bar{a}_1, \ldots, \bar{a}_c, \ldots, \bar{a}_C]^T$, to obtain a target domain dataset $T_3(X)$ according to steps 1 and 2 under working conditions (1), (2) and (3), the target domain dataset $T_3(X)$ being free of labels;
step 6: constructing a domain adversarial transfer learning neural network comprising a feature extractor $G_f$, a label predictor $G_y$ and a domain classifier $G_q$, wherein the feature extractor $G_f$ comprises $e_1$ convolution layers, a first LeakyReLU layer follows each convolution layer, a first normalization layer and a maximum pooling layer are added between every two convolution layers, the convolution kernels of the convolution layers have a size of $k_1$, a number of $h_1$ and a stride of $s_1$, and the kernels of the maximum pooling layers have a size of $k_2$ and a stride of $s_2$; the label predictor $G_y$ comprises $e_2$ fully connected layers with a second LeakyReLU layer between successive layers; and the domain classifier $G_q$ comprises $e_3$ fully connected layers with a Rectified Linear Unit (ReLU) layer and a second normalization layer between successive layers;
step 7: preprocessing data, dividing the source domain dataset $S(X, Y)$ and the target domain datasets $T_i(X)$, $i = 1, 2, 3$, into a source domain training set $Ds_{tra}$, a source domain verification set $Ds_{val}$, a target domain training set $Dt_{tra}$ and a target domain verification set $Dt_{val}$ separately according to a proportion; and
step 8: performing a training and verification stage;
step 8.1: randomly extracting $P$ source domain samples $X_s = (A_{s1}, \ldots, A_{sp}, \ldots, A_{sP})$, $Y_s = (y_1, \ldots, y_p, \ldots, y_P)$ and target domain samples $X_t = (A_{t1}, \ldots, A_{tp}, \ldots, A_{tP})$ from the source domain training set $Ds_{tra}$ and the target domain training set $Dt_{tra}$ each time as a small batch to be input into a domain adversarial network and trained, until all samples of $Ds_{tra}$ and $Dt_{tra}$ are extracted, wherein $A_{sp}$ represents the $p$-th source domain sample in the small batch, $y_p$ is the label corresponding to the $p$-th source domain sample, and $A_{tp}$ represents the $p$-th target domain sample in the small batch;
step 8.2: mapping the $p$-th source domain sample $A_{sp}$ and target domain sample $A_{tp}$ in the small batch into a source domain feature vector $f_{sp}$ and a target domain feature vector $f_{tp}$ by the feature extractor $G_f(A, \theta_f)$ respectively, wherein $A$ is a source domain or target domain sample and $\theta_f$ represents the parameter vectors of all layers in the mapping;
step 8.3: mapping the source domain feature vector $f_{sp}$ by the label predictor $G_y$ to obtain a prediction label $\hat{y}_p = G_y(f_{sp}, \theta_y)$, wherein $\theta_y$ represents the mapping parameter of the label predictor $G_y$;
step 8.4: computing a loss $L_y(\hat{y}_p, y_p)$ of the label predictor by using equation (1):
$L_y(\hat{y}_p, y_p) = -y_p \log(\hat{y}_p)$  (1)
step 8.5: setting a domain label $Q_s = (q_{s1}, \ldots, q_{sp}, \ldots, q_{sP})$ for the source domain samples $X_s = (A_{s1}, \ldots, A_{sp}, \ldots, A_{sP})$, and setting a domain label $Q_t = (q_{t1}, \ldots, q_{tp}, \ldots, q_{tP})$ for the target domain samples $X_t = (A_{t1}, \ldots, A_{tp}, \ldots, A_{tP})$, wherein $q_{sp}$ and $q_{tp}$ represent the domain labels corresponding to the $p$-th source domain sample $A_{sp}$ and the $p$-th target domain sample $A_{tp}$ respectively, with $q_{sp} = 0$ and $q_{tp} = 1$;
step 8.6: inputting the source domain feature vector $f_{sp}$ and the target domain feature vector $f_{tp}$ into the domain classifier $G_q(f, \theta_q)$ for mapping to obtain a prediction domain label $\hat{q}_p$, wherein $f$ represents a source domain or target domain feature vector and $\theta_q$ represents the mapping parameter of the domain classifier $G_q$;
step 8.7: computing a loss $L_q(\hat{q}_p, q_p)$ of the domain classifier by using equation (2):
$L_q(\hat{q}_p, q_p) = -q_p \log(\hat{q}_p)$  (2)
wherein in equation (2), $q_p$ is the domain label corresponding to the $p$-th sample in the small batch; and
step 8.8: establishing a global objective function $E(\theta_f, \theta_y, \theta_q)$ by using equation (3):
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION OF THE EMBODIMENTS
(6) A structure of the domain adversarial transfer network in the present invention is shown in the accompanying drawings.
(7) step 1: construct a bridge acceleration response dataset;
(8) step 1.1: determine geometric and material parameters of a bridge, establish a bridge finite element model, divide the bridge finite element model into elements numbered $[1, 2, 3, \ldots, 10]$ in sequence, number the nodes between the elements $[1, 2, 3, \ldots, 9]$ in sequence, and arrange accelerometers at the nodes between the elements;
(9) step 1.2: construct a bridge lossless dataset in a bridge lossless state:
(10) firstly, apply a Gaussian distributed random load once to node 1, where the random load obeys a Gaussian distribution with a mean of 0 and a standard deviation of 200, and obtain an acceleration response signal matrix $[a_1, \ldots, a_c, \ldots, a_C]^T$ of each node between the elements through the Newmark-$\beta$ method, where $a_c$ is the acceleration response signal of length $w$ at the $c$-th node between the elements and $C$ is the number of the accelerometers, $C = n - 1$; in this example the acceleration signal has length $w = 5120$ and $C = 9$;
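For concreteness, the Newmark-$\beta$ integration used above can be sketched as follows in NumPy (average-acceleration form, $\beta = 1/4$, $\gamma = 1/2$). The matrices M, C, K and the load history F would come from the finite element model; this is a textbook formulation offered as an illustration, not code from the patent itself.

```python
import numpy as np

def newmark_beta(M, C, K, F, dt, beta=0.25, gamma=0.5):
    """Integrate M*u'' + C*u' + K*u = F(t) and return the acceleration history.

    M, C, K : (n, n) mass, damping and stiffness matrices
    F       : (n, steps) load history; dt is the time step."""
    n, steps = F.shape
    u, v = np.zeros(n), np.zeros(n)
    a = np.linalg.solve(M, F[:, 0] - C @ v - K @ u)
    acc = np.zeros((n, steps))
    acc[:, 0] = a
    # Effective stiffness is constant for a linear system
    K_eff = K + gamma / (beta * dt) * C + M / (beta * dt**2)
    for i in range(1, steps):
        # Right-hand side assembled from the previous step's state
        rhs = (F[:, i]
               + M @ (u / (beta * dt**2) + v / (beta * dt) + (1 / (2 * beta) - 1) * a)
               + C @ (gamma / (beta * dt) * u + (gamma / beta - 1) * v
                      + dt * (gamma / (2 * beta) - 1) * a))
        u_new = np.linalg.solve(K_eff, rhs)
        a_new = (u_new - u) / (beta * dt**2) - v / (beta * dt) - (1 / (2 * beta) - 1) * a
        v = v + dt * ((1 - gamma) * a + gamma * a_new)
        u, a = u_new, a_new
        acc[:, i] = a
    return acc
```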
(11) step 1.3: normalize the acceleration response signals of all the nodes to obtain a normalized acceleration response signal matrix $[\bar{a}_1, \ldots, \bar{a}_c, \ldots, \bar{a}_9]^T$, where $\bar{a}_c$ represents the normalized acceleration response signal at the $c$-th node;
(12) step 1.4: randomly intercept the normalized acceleration response signal matrix $[\bar{a}_1, \ldots, \bar{a}_c, \ldots, \bar{a}_C]^T$ according to the length $\tilde{w} = 1024$ to enhance data diversity, so as to obtain an acceleration response matrix segment $A = [\tilde{a}_1, \ldots, \tilde{a}_c, \ldots, \tilde{a}_9]^T$ of the bridge when the random load is applied to node 1, and take the segment as a sample, where $\tilde{a}_c$ represents the normalized acceleration response signal of length $\tilde{w}$ at the $c$-th node;
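The patent does not state the normalization formula, so the sketch below assumes per-channel standardization before the random interception of a $\tilde{w} = 1024$ window; the function name make_sample is the editor's.

```python
import numpy as np

rng = np.random.default_rng()

def make_sample(signals, win=1024):
    """signals: (C, w) raw accelerations, one row per sensor (here C=9, w=5120).
    Normalize each channel, then cut one random window of length win."""
    norm = (signals - signals.mean(axis=1, keepdims=True)) / \
           signals.std(axis=1, keepdims=True)            # assumed normalization
    start = rng.integers(0, signals.shape[1] - win + 1)  # random interception point
    return norm[:, start:start + win]                    # shape (C, win)
```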
(13) step 1.5: repeatedly apply the Gaussian distributed random load $m$ times to node 1 between the elements and perform steps 1.2-1.4 to obtain $m$ samples; to balance the samples in this example, the computation is repeated $m = 80$ times at each random load position in the lossless state; and
(14) step 1.6: process the nodes between the elements in sequence according to steps 1.2-1.5 to obtain the acceleration response matrix segment of the bridge when the random load is applied to each node between the elements, where a total of $j_1 = m \times (n-1) = 80 \times 9 = 720$ samples form the bridge lossless dataset;
(15) step 2: construct a bridge damage dataset:
(16) simulate element damage by reducing the stiffness of one element, and set $t$ different reduction coefficients to simulate damage grades; in this embodiment, $t = 5$ damage grades are set, that is, the stiffness of the element is reduced by 10%, 20%, 30%, 40% and 50%;
(17) step 2.1: select a damage grade at element 1 for damage simulation, and process each node between the elements according to steps 1.2-1.6 while element 1 is at the current damage grade, where the computation is repeated $m = 16$ times at each random load position for each damage case, so as to obtain the acceleration response matrix segments of the bridge under the current damage grade of element 1;
(18) step 2.2: obtain, according to step 2.1, the acceleration response matrix segments of the bridge under all damage grades of element 1, with a total of $t \times (n-1) \times m = 5 \times 9 \times 16 = 720$ damage samples; and
(19) step 2.3: process all the elements in sequence according to steps 2.1-2.2 to obtain the acceleration response matrix segment samples of the bridge under the different damage grades of each element, with a total of $j_2 = t \times (n-1) \times m \times n = 720 \times 10 = 7200$ damage samples forming the bridge damage dataset;
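One plausible realization of the stiffness-reduction damage model is to scale a single element's contribution to the global stiffness matrix before rerunning steps 1.2-1.6; the names below (damaged_stiffness, element_matrices) are illustrative, not the patent's.

```python
import numpy as np

GRADES = [0.9, 0.8, 0.7, 0.6, 0.5]   # 10% .. 50% stiffness reduction

def damaged_stiffness(element_matrices, elem, factor):
    """element_matrices: per-element stiffness contributions already expanded
    to global (n_dof, n_dof) size; scale one element's contribution by factor."""
    K = sum(element_matrices)
    return K + (factor - 1.0) * element_matrices[elem]

# Looping elem over the 10 elements, factor over GRADES, the load position over
# the 9 nodes and m = 16 repeats yields the 7200 damage samples counted above.
```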
(20) step 3: set labels $Y = [y_1, \ldots, y_l, \ldots, y_L]$ for the bridge lossless dataset and the bridge damage dataset, where $y_l$ represents the label corresponding to the $l$-th sample, $L$ is the total number of samples, $L = j_1 + j_2 = 720 + 7200 = 7920$, and $y_l \in [0, 1, 2, \ldots, 10]$; $y_l = 0$ represents the bridge lossless state for the $l$-th sample, and $y_l = 1, 2, \ldots, n$ represents the serial number of the damaged element of the bridge for the $l$-th sample;
(21) step 4: combine the bridge lossless dataset and its label with the bridge damage dataset and its label to obtain a labeled source domain dataset $S(X, Y)$, where $X$ represents the union of the bridge lossless dataset and the bridge damage dataset, $X = [A_1, \ldots, A_l, \ldots, A_L]$, and $A_l$ represents the $l$-th combined sample;
(22) step 5: simulate the real bridge structure by adding uncertainty to the bridge finite element model;
(23) step 5.1: take the uncertainty of the stiffness of the bridge as working condition (1), and simulate it by multiplying the stiffness of each element of the bridge finite element model by a random factor $\delta$ obeying a Gaussian distribution with a mean of 1 and a standard deviation of 0.02; a target domain dataset $T_1(X, Y)$ is then obtained according to steps 1 and 2; the target domain is nominally free of labels, but for later verification of the method a label is also set here, with $Y$ defined as in the source domain;
(24) step 5.2: take a geometric error and a material error of the bridge as working condition (2); simulate the material error by changing the density and elastic modulus parameters of the bridge finite element model (in this example the density is increased by 2% and the elastic modulus reduced by 2%), and simulate the geometric error by changing the bridge length and the cross-section width and height of the finite element model (in this example the length is set to 1.98 m, a relative error of 1%; the cross-section width to 0.29 m, a relative error of 3.33%; and the height to 0.098 m, a relative error of 2%), so as to obtain a target domain dataset $T_2(X, Y)$ according to steps 1 and 2 under working conditions (1) and (2); the target domain is nominally free of labels, but for later verification of the method a label is also set here, with $Y$ defined as in the source domain; and
(25) step 5.3: take the influence of environmental noise in actual measurement as working condition (3), and simulate it by adding noise $D_{noise} \sim N(0, \sigma^2)$ obeying a Gaussian distribution with a mean of 0 and a variance of $\sigma^2$, where in this example $D_{noise} \sim N(0, 0.1^2)$; that is, add the noise $D_{noise}$ to the normalized acceleration response signal matrix $[\bar{a}_1, \ldots, \bar{a}_c, \ldots, \bar{a}_C]^T$, so as to obtain a target domain dataset $T_3(X, Y)$ according to steps 1 and 2 under working conditions (1), (2) and (3); the target domain is nominally free of labels, but for later verification of the method a label is also set here, with $Y$ defined as in the source domain;
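The three working conditions can be injected as small perturbations of the nominal model and signals. The sketch below folds them into one hypothetical helper; drawing one $\delta$ per element and the 2% material shifts follow this example's numbers, and all names are the editor's.

```python
import numpy as np

rng = np.random.default_rng()

def perturb(element_matrices, rho, E, norm_signals, sigma=0.1):
    """Apply working conditions (1)-(3) to nominal inputs (names illustrative)."""
    deltas = rng.normal(1.0, 0.02, size=len(element_matrices))         # condition (1)
    element_matrices = [d * Ke for d, Ke in zip(deltas, element_matrices)]
    rho, E = 1.02 * rho, 0.98 * E                                      # condition (2)
    noisy = norm_signals + rng.normal(0.0, sigma, norm_signals.shape)  # condition (3)
    return element_matrices, rho, E, noisy
```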
(26) step 6: construct a domain adversarial transfer learning neural network, which includes a feature extractor $G_f$, a label predictor $G_y$ and a domain classifier $G_q$;
(27) the feature extractor $G_f$ includes $e_1$ convolution layers, a LeakyReLU layer follows each convolution layer, and a normalization layer and a maximum pooling layer are added between every two convolution layers; the convolution kernels have a size of $k_1$, a number of $h_1$ and a stride of $s_1$, and the kernels of the maximum pooling layers have a size of $k_2$ and a stride of $s_2$; in this embodiment, $e_1$ is 6, the kernel size is $k_1 = 16$, the number $h_1$ of kernels is 32, 32, 64, 64, 128 and 128 in sequence, the stride is $s_1 = 1$, and the maximum pooling layers have a kernel size of $k_2 = 4$ and a stride of $s_2 = 4$;
(28) the label predictor $G_y$ is composed of $e_2$ fully connected layers with a LeakyReLU layer between successive layers; in this example $e_2$ is 3, the vector flattened by the feature extractor has dimension 2048, the inputs of the fully connected layers are 2048, 256 and 128 in sequence, and the final output is 11;
(29) the domain classifier $G_q$ is composed of $e_3$ fully connected layers with a ReLU layer and a normalization layer between successive layers; in this example $e_3$ is 3, the vector flattened by the feature extractor has dimension 2048, the inputs of the fully connected layers are 2048, 1024 and 256 in sequence, and the final output is 2;
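Paragraphs (27)-(29) map naturally onto 1-D convolutions over the 9-channel, 1024-sample segments. The PyTorch sketch below is one reading of that description (the placement of the normalization layers relative to pooling is interpreted, and all module names are the editor's); three pooling stages of stride 4 reduce 1024 samples to 16, so the 128-channel output flattens to the stated 2048 features.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    # kernel size k1 = 16, stride s1 = 1, LeakyReLU after each convolution
    return nn.Sequential(
        nn.Conv1d(c_in, c_out, kernel_size=16, stride=1, padding="same"),
        nn.LeakyReLU(),
    )

feature_extractor = nn.Sequential(        # G_f: 6 conv layers, channels 32..128
    conv_block(9, 32),   conv_block(32, 32),
    nn.BatchNorm1d(32),  nn.MaxPool1d(4, 4),
    conv_block(32, 64),  conv_block(64, 64),
    nn.BatchNorm1d(64),  nn.MaxPool1d(4, 4),
    conv_block(64, 128), conv_block(128, 128),
    nn.BatchNorm1d(128), nn.MaxPool1d(4, 4),
    nn.Flatten(),                          # (B, 128, 16) -> (B, 2048)
)

label_predictor = nn.Sequential(           # G_y: 2048 -> 256 -> 128 -> 11 classes
    nn.Linear(2048, 256), nn.LeakyReLU(),
    nn.Linear(256, 128),  nn.LeakyReLU(),
    nn.Linear(128, 11),
)

domain_classifier = nn.Sequential(         # G_q: 2048 -> 1024 -> 256 -> 2 domains
    nn.Linear(2048, 1024), nn.ReLU(), nn.BatchNorm1d(1024),
    nn.Linear(1024, 256),  nn.ReLU(), nn.BatchNorm1d(256),
    nn.Linear(256, 2),
)
```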
(30) step 7: preprocess the data, dividing the source domain dataset $S(X, Y)$ and the target domain datasets $T_i(X)$, $i = 1, 2, 3$, each into a training set, a verification set and a test set ($Ds_{tra}$, $Ds_{val}$, $Ds_{test}$ for the source domain and $Dt_{tra}$, $Dt_{val}$, $Dt_{test}$ for the target domain) in the proportion 7:1:2, where the test sets are reserved for the application stage to check the effectiveness of the method; the training : verification : test split of each dataset is thus 0.7 : 0.1 : 0.2 = 5544 : 792 : 1584;
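The 7:1:2 split maps directly onto torch.utils.data.random_split; the random tensors below are stand-ins for the 7920 generated segments and labels.

```python
import torch
from torch.utils.data import TensorDataset, random_split

X = torch.randn(7920, 9, 1024)        # stand-in for the acceleration segments
y = torch.randint(0, 11, (7920,))     # stand-in labels in 0..10
train_set, val_set, test_set = random_split(TensorDataset(X, y),
                                            [5544, 792, 1584])   # 7 : 1 : 2
```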
(31) step 8: perform a training and verification stage, where the training flow is shown in the accompanying drawings;
(32) step 8.1: randomly extract $P = 64$ source domain samples $X_s = (A_{s1}, \ldots, A_{sp}, \ldots, A_{s64})$, $Y_s = (y_1, \ldots, y_p, \ldots, y_{64})$ and target domain samples $X_t = (A_{t1}, \ldots, A_{tp}, \ldots, A_{t64})$ from the source domain training set $Ds_{tra}$ and the target domain training set $Dt_{tra}$ each time as a small batch, input them into the domain adversarial network and train, until all samples of $Ds_{tra}$ and $Dt_{tra}$ have been extracted, where $A_{sp}$ represents the $p$-th source domain sample in the small batch, $y_p$ is the label corresponding to the $p$-th source domain sample, and $A_{tp}$ represents the $p$-th target domain sample in the small batch;
(33) step 8.2: map the $p$-th source domain sample $A_{sp}$ and target domain sample $A_{tp}$ in the small batch into a source domain feature vector $f_{sp}$ and a target domain feature vector $f_{tp}$ by means of the feature extractor $G_f(A, \theta_f)$ respectively, where $A$ is a source or target domain sample and $\theta_f$ represents the parameter vectors of all layers in the mapping;
(34) step 8.3: map the source domain feature vector $f_{sp}$ by means of the label predictor $G_y$ to obtain a prediction label $\hat{y}_p = G_y(f_{sp}, \theta_y)$, where $\theta_y$ represents the mapping parameter of the label predictor $G_y$;
(35) step 8.4: compute the loss $L_y(\hat{y}_p, y_p)$ of the label predictor by using equation (1):
$$L_y(\hat{y}_p, y_p) = -y_p \log(\hat{y}_p) \quad (1)$$
(36) step 8.5: set a domain label $Q_s = (q_{s1}, \ldots, q_{sp}, \ldots, q_{sP})$ for the source domain samples $X_s = (A_{s1}, \ldots, A_{sp}, \ldots, A_{sP})$, and set a domain label $Q_t = (q_{t1}, \ldots, q_{tp}, \ldots, q_{tP})$ for the target domain samples $X_t = (A_{t1}, \ldots, A_{tp}, \ldots, A_{tP})$, where $q_{sp}$ and $q_{tp}$ represent the domain labels corresponding to the $p$-th source domain sample $A_{sp}$ and the $p$-th target domain sample $A_{tp}$ respectively, with $q_{sp} = 0$ and $q_{tp} = 1$;
(37) step 8.6: input the source domain feature vector $f_{sp}$ and the target domain feature vector $f_{tp}$ into the domain classifier $G_q(f, \theta_q)$ for mapping, to obtain a prediction domain label $\hat{q}_p$, where $f$ represents a source or target domain feature vector and $\theta_q$ represents the mapping parameter of the domain classifier $G_q$;
(38) step 8.7: compute the loss $L_q(\hat{q}_p, q_p)$ of the domain classifier by using equation (2):
$$L_q(\hat{q}_p, q_p) = -q_p \log(\hat{q}_p) \quad (2)$$
(39) where in equation (2), $q_p$ is the domain label corresponding to the $p$-th sample in the small batch;
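Equations (1) and (2) are the usual cross-entropy losses. With integer class labels, PyTorch's F.cross_entropy applies the log-softmax and the $-y \log(\hat{y})$ sum internally; the tensors here are stand-ins for one batch.

```python
import torch
import torch.nn.functional as F

P = 64
y_hat = torch.randn(P, 11)               # label-predictor logits (stand-in)
y_p = torch.randint(0, 11, (P,))         # true damage-class labels
L_y = F.cross_entropy(y_hat, y_p)        # equation (1)

q_hat = torch.randn(2 * P, 2)            # domain-classifier logits (stand-in)
q_p = torch.cat([torch.zeros(P), torch.ones(P)]).long()  # 0 = source, 1 = target
L_q = F.cross_entropy(q_hat, q_p)        # equation (2)
```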
(40) step 8.8: establish a global objective function $E(\theta_f, \theta_y, \theta_q)$ by using equation (3):
(41) $$E(\theta_f, \theta_y, \theta_q) = \frac{1}{P} \sum_{p=1}^{P} L_y^p(\theta_f, \theta_y) - \lambda \frac{1}{P} \sum_{p=1}^{P} L_q^p(\theta_f, \theta_q) \quad (3)$$
(42) where in equation (3), $L_y^p$ and $L_q^p$ represent the label classifier loss function and the domain classifier loss function computed on the $p$-th sample of the small batch respectively, and $\lambda$ weighs the two objectives; the domain classifier must distinguish the source domain from the target domain as well as possible so as to form an adversarial relation with the feature extractor, meaning that a larger domain classification loss is better from the feature extractor's standpoint; since this would hinder minimization of the overall loss, a negative sign is placed before the domain adversarial loss;
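In practice the negative sign before the domain adversarial loss is usually realized as a gradient reversal layer, as in the original DANN formulation: the forward pass is the identity and the backward pass multiplies the gradient by $-\lambda$, so a single backward call drives the feature extractor and the domain classifier in opposite directions. A minimal sketch:

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; gradient multiplied by -lam going back."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None  # no gradient w.r.t. lam

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)
```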
(43) step 8.9: use a standard stochastic gradient descent (SGD) solver to carry out a gradient descent search for the saddle point parameters of equations (4) and (5), with the SGD optimizer parameters set as follows: a learning rate of 0.01 and a momentum of 0.9, so as to obtain the saddle point parameters:
(44) $$(\hat{\theta}_f, \hat{\theta}_y) = \arg\min_{\theta_f, \theta_y} E(\theta_f, \theta_y, \hat{\theta}_q) \quad (4)$$
$$\hat{\theta}_q = \arg\max_{\theta_q} E(\hat{\theta}_f, \hat{\theta}_y, \theta_q) \quad (5)$$
(45) where in equations (4) and (5), $\hat{\theta}_f$, $\hat{\theta}_y$, $\hat{\theta}_q$ represent the mapping parameters of the feature extractor, the label predictor and the domain classifier respectively when the global objective function $E(\theta_f, \theta_y, \theta_q)$ has converged to the optimum; at the saddle point, the parameters $\theta_q$ of the domain classifier minimize the domain classification loss (because it enters with a negative sign), the parameters $\theta_y$ of the label predictor minimize the label prediction loss, and the feature mapping parameters $\theta_f$ minimize the label prediction loss (that is, the features are discriminative) while maximizing the domain classification loss (that is, the features are domain-invariant);
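Putting the pieces together, one training step under the stated SGD settings (learning rate 0.01, momentum 0.9) might look as below, reusing the hypothetical module and function names from the earlier sketches. Holding $\lambda$ fixed is a simplification; the gradient reversal layer carries out the saddle-point search of equations (4) and (5) within ordinary backpropagation.

```python
import itertools
import torch
import torch.nn.functional as F

params = itertools.chain(feature_extractor.parameters(),
                         label_predictor.parameters(),
                         domain_classifier.parameters())
opt = torch.optim.SGD(params, lr=0.01, momentum=0.9)

def train_step(x_s, y_s, x_t, lam=1.0):
    """One small batch: x_s, y_s from Ds_tra; x_t (unlabeled) from Dt_tra."""
    f_s, f_t = feature_extractor(x_s), feature_extractor(x_t)
    L_y = F.cross_entropy(label_predictor(f_s), y_s)   # label loss, source only
    f = torch.cat([f_s, f_t])
    q = torch.cat([torch.zeros(len(x_s)), torch.ones(len(x_t))]).long()
    # Reversed gradient: the classifier descends on L_q while the extractor
    # ascends on it, realizing the adversarial saddle point
    L_q = F.cross_entropy(domain_classifier(grad_reverse(f, lam)), q)
    loss = L_y + L_q
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```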
(46) step 8.10: repeat steps 8.2-8.9 to complete training of the domain adversarial transfer network over all small batches, finally obtaining the saddle point parameters $\hat{\theta}_f$, $\hat{\theta}_y$, $\hat{\theta}_q$ at which the global objective function $E(\theta_f, \theta_y, \theta_q)$ has converged to the optimum, together with the optimal network model; and
(47) step 8.11: perform a validation stage:
(48) verify the label prediction accuracy of the optimal network model on the source domain verification set $Ds_{val}$ and the target domain verification set $Dt_{val}$; if the label prediction accuracy reaches a threshold, training of the domain adversarial transfer learning network is complete, and the mapping parameters $\hat{\theta}_f$, $\hat{\theta}_y$, $\hat{\theta}_q$ at which the global objective function $E(\theta_f, \theta_y, \theta_q)$ converged to the optimal state are saved; otherwise, return to steps 8.1-8.10 for retraining; steps 8.1-8.10 are repeated 64 times, so that the verification accuracy of each group of working conditions essentially approaches the threshold; and
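A sketch of the verification check; the patent does not state the accuracy threshold, so the 0.95 here, and the loader and file names, are purely illustrative.

```python
import torch

@torch.no_grad()
def accuracy(loader):
    correct = total = 0
    for x, y in loader:
        pred = label_predictor(feature_extractor(x)).argmax(dim=1)
        correct += (pred == y).sum().item()
        total += y.numel()
    return correct / total

# Save the converged parameters once both verification sets pass the threshold
if accuracy(val_loader_s) >= 0.95 and accuracy(val_loader_t) >= 0.95:
    torch.save({"f": feature_extractor.state_dict(),
                "y": label_predictor.state_dict(),
                "q": domain_classifier.state_dict()}, "dann_bridge.pt")
```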
(49) step 9: perform an application stage:
(50) take the $T_i(X, Y)$ test set $Dt_{test}$ as input in the application process shown in the accompanying drawings.