Method for diagnosing analog circuit fault based on vector-valued regularized kernel function approximation

11486925 · 2022-11-01

Abstract

A method for diagnosing analog circuit fault based on vector-valued regularized kernel function approximation, includes steps of: step (1) acquiring a fault response voltage signal of an analog circuit; step (2) carrying out wavelet packet transform on the collected signal, and calculating a wavelet packet coefficient energy value as a characteristic parameter; step (3) utilizing a quantum particle swarm optimization algorithm to optimize a regularization parameter and kernel parameter of vector-valued regularized kernel function approximation, and training a fault diagnosis model; and step (4) utilizing the trained diagnosis model to recognize circuit faults.

Claims

1. A method for diagnosing an analog circuit fault based on vector-valued regularized kernel function approximation, comprising steps of: step (1): collecting output signals, comprising: extracting time domain response voltage signals of all nodes of an analog circuit under test; step (2): carrying out wavelet packet decomposition on the collected output signals, calculating the energy of all nodes as original sample characteristic data, and equally dividing the original sample characteristic data into a training sample set and a test sample set; step (3): utilizing a quantum particle swarm optimization (QPSO) algorithm to optimize a regularization parameter and a kernel parameter of vector-valued regularized kernel function approximation (VVRKFA) on the basis of the training sample set, and constructing a VVRKFA-based fault diagnosis model; step (4): inputting the test sample set to the constructed VVRKFA-based fault diagnosis model to recognize circuit fault classes; wherein in step (3), the process of constructing the VVRKFA-based fault diagnosis model comprises steps of:

(3.a) determining a type of a kernel function: adopting a Gaussian kernel function K(x_i, x_j) = exp(σ‖x_i − x_j‖²) as the kernel function of VVRKFA to establish a mathematic model of VVRKFA, wherein σ is a width factor of the Gaussian kernel function; wherein the mathematic model of VVRKFA is:

Min J(Θ, b, ξ) = (C/2) tr([Θ b]^T [Θ b]) + (1/2) Σ_{i=1}^{m} ‖ξ_i‖², s.t. Θ K(x_i^T, B^T)^T + b + ξ_i = Y_i, i = 1, 2, …, m  (1)

wherein, Θ ∈ ℝ^(N×m̄) is a regression coefficient matrix for mapping a characteristic inner product space to a label space; N is the number of fault classes; m is the dimensionality of the training samples in the training sample set; m̄ is the dimensionality of the samples in a dimensionality-reduced sample data set obtained after the dimensionality of the training sample set is reduced (m̄ ≤ m); b ∈ ℝ^(N×1) is a base vector; C is the regularization parameter; ξ_i ∈ ℝ^(N×1) is a slack variable; matrix B ∈ ℝ^(m̄×n) is the dimensionality-reduced sample data set obtained after the dimensionality of the training sample set is reduced; n is the number of all the training samples in the training sample set; K(·,·) is the kernel function; x_i is the training samples in the training sample set; Y_i is a category label vector of the training sample x_i; T is the matrix transpose symbol; J is the target function name; [Θ b] = YP^T[CI + PP^T]^(−1), wherein P = [K(A, B^T) e]^T ∈ ℝ^((m̄+1)×m) and A is the training sample set; I is a unit matrix, and Y ∈ ℝ^(N×m) is a matrix comprising m category label vectors of the training samples;

(3.b) by utilizing the quantum particle swarm optimization algorithm to optimize the mathematic model of VVRKFA, obtaining an optimum regularization parameter and an optimum kernel parameter of the mathematic model of VVRKFA;

(3.c) taking the training samples x_i in the training sample set as input data, constructing a vector value mapping function (2) with the optimum regularization parameter and the optimum kernel parameter obtained in step (3.b):

p(x_i) = Θ K(x_i^T, B^T)^T + b  (2)

wherein, Θ ∈ ℝ^(N×m̄) is the regression coefficient matrix; B ∈ ℝ^(m̄×n) is the dimensionality-reduced sample data set obtained after the dimensionality of the training sample set is reduced; K(·,·) is the kernel function, and in formula (2) the kernel function refers to the optimum kernel function obtained in step (3.b); C is the regularization parameter, and in formula (2) the regularization parameter C is the optimum regularization parameter obtained in step (3.b); b ∈ ℝ^(N×1) is the base vector; N is the number of fault classes; m is the dimensionality of the training samples in the training sample set; m̄ is the dimensionality of the samples in the dimensionality-reduced sample data set obtained after the dimensionality of the training sample set is reduced; n is the number of all the training samples in the training sample set;

(3.d) by utilizing the mapping function constructed in step (3.c), establishing a decision function of VVRKFA, wherein the decision function of VVRKFA is expressed as:

Class(x_t) = arg min_{1≤j≤N} d_M(p̂(x_t), p^(j) | Σ̂)  (3)

wherein, x_t is the test samples in the test sample set; p^(j) = (1/n_j) Σ_{i=1}^{n_j} p̂(x_i) is the center point of all training samples corresponding to the j-th fault class in the training sample set; d_M is the Mahalanobis distance; x_i is the training samples in the training sample set; Σ̂ = Σ_{j=1}^{N} (n_j − 1) Σ̂^(j)/(n − N) is an intra-class covariance matrix; n is the number of all the training samples in the training sample set; N is the number of fault classes; n_j is the number of training samples corresponding to the j-th fault class in the training sample set; p̂(x_i) is a mapping projection of the training sample x_i in a characteristic sub-space; p̂(x_t) is a mapping projection of the test sample x_t in the characteristic sub-space; wherein when the decision function is established, constructing the VVRKFA-based fault diagnosis model is accomplished.

2. A method for diagnosing an analog circuit fault based on vector-valued regularized kernel function approximation, comprising steps of: step (1): collecting output signals, comprising: extracting time domain response voltage signals of all nodes of an analog circuit under test; step (2): carrying out wavelet packet decomposition on the collected output signals, calculating the energy of all nodes as original sample characteristic data, and equally dividing the original sample characteristic data into a training sample set and a test sample set; step (3): utilizing a quantum particle swarm optimization (QPSO) algorithm to optimize a regularization parameter and a kernel parameter of vector-valued regularized kernel function approximation (VVRKFA) on the basis of the training sample set, and constructing a VVRKFA-based fault diagnosis model; step (4): inputting the test sample set to the constructed VVRKFA-based fault diagnosis model to recognize circuit fault classes; wherein in step (3), the process of constructing the VVRKFA-based fault diagnosis model comprises steps of:

(3.a) determining a type of a kernel function: adopting a Gaussian kernel function K(x_i, x_j) = exp(σ‖x_i − x_j‖²) as the kernel function of VVRKFA to establish a mathematic model of VVRKFA, wherein σ is a width factor of the Gaussian kernel function; wherein the mathematic model of VVRKFA is:

Min J(Θ, b, ξ) = (C/2) tr([Θ b]^T [Θ b]) + (1/2) Σ_{i=1}^{m} ‖ξ_i‖², s.t. Θ K(x_i^T, B^T)^T + b + ξ_i = Y_i, i = 1, 2, …, m  (1)

wherein, Θ ∈ ℝ^(N×m̄) is a regression coefficient matrix for mapping a characteristic inner product space to a label space; N is the number of fault classes; m is the dimensionality of the training samples in the training sample set; m̄ is the dimensionality of the samples in a dimensionality-reduced sample data set obtained after the dimensionality of the training sample set is reduced (m̄ ≤ m); b ∈ ℝ^(N×1) is a base vector; C is the regularization parameter; ξ_i ∈ ℝ^(N×1) is a slack variable; matrix B ∈ ℝ^(m̄×n) is the dimensionality-reduced sample data set obtained after the dimensionality of the training sample set is reduced; n is the number of all the training samples in the training sample set; K(·,·) is the kernel function; x_i is the training samples in the training sample set; Y_i is a category label vector of the training sample x_i; T is the matrix transpose symbol; J is the target function name; [Θ b] = YP^T[CI + PP^T]^(−1), wherein P = [K(A, B^T) e]^T ∈ ℝ^((m̄+1)×m) and A is the training sample set; I is a unit matrix, and Y ∈ ℝ^(N×m) is a matrix comprising m category label vectors of the training samples;

(3.b) by utilizing the quantum particle swarm optimization algorithm to optimize the mathematic model of VVRKFA, obtaining an optimum regularization parameter and an optimum kernel parameter of the mathematic model of VVRKFA;

(3.c) taking the training samples x_i in the training sample set as input data, constructing a vector value mapping function (2) with the optimum regularization parameter and the optimum kernel parameter obtained in step (3.b):

p(x_i) = Θ K(x_i^T, B^T)^T + b  (2)

wherein, Θ ∈ ℝ^(N×m̄) is the regression coefficient matrix; B ∈ ℝ^(m̄×n) is the dimensionality-reduced sample data set obtained after the dimensionality of the training sample set is reduced; K(·,·) is the kernel function, and in formula (2) the kernel function refers to the optimum kernel function obtained in step (3.b); C is the regularization parameter, and in formula (2) the regularization parameter C is the optimum regularization parameter obtained in step (3.b); b ∈ ℝ^(N×1) is the base vector; N is the number of fault classes; m is the dimensionality of the training samples in the training sample set; m̄ is the dimensionality of the samples in the dimensionality-reduced sample data set obtained after the dimensionality of the training sample set is reduced; n is the number of all the training samples in the training sample set;

(3.d) by utilizing the mapping function constructed in step (3.c), establishing a decision function of VVRKFA, wherein the decision function of VVRKFA is expressed as:

Class(x_t) = arg min_{1≤j≤N} d_M(p̂(x_t), p^(j) | Σ̂)  (3)

wherein, x_t is the test samples in the test sample set; p^(j) = (1/n_j) Σ_{i=1}^{n_j} p̂(x_i) is the center point of all training samples corresponding to the j-th fault class in the training sample set; d_M is the Mahalanobis distance; x_i is the training samples in the training sample set; Σ̂ = Σ_{j=1}^{N} (n_j − 1) Σ̂^(j)/(n − N) is an intra-class covariance matrix; n is the number of all the training samples in the training sample set; N is the number of fault classes; n_j is the number of training samples corresponding to the j-th fault class in the training sample set; p̂(x_i) is a mapping projection of the training sample x_i in a characteristic sub-space; p̂(x_t) is a mapping projection of the test sample x_t in the characteristic sub-space;

wherein when the decision function is established, constructing the VVRKFA-based fault diagnosis model is accomplished; wherein in step (3.b), utilizing the quantum particle swarm optimization algorithm to optimize the regularization parameter and kernel parameter of the mathematic model of VVRKFA to obtain the optimum regularization parameter and the optimum kernel parameter of the mathematic model of VVRKFA specifically comprises steps of: (3.b.1) initializing parameters of the QPSO algorithm, comprising velocity, location, population size, iterations and optimization range, wherein each particle is a two-dimensional vector, a first dimension is the regularization parameter of the mathematic model of VVRKFA, and a second dimension is the kernel parameter of the mathematic model of VVRKFA; (3.b.2) calculating a fitness of the particles to obtain a global optimum individual and a local optimum individual; (3.b.3) updating a velocity and location of the particles; and (3.b.4) repeating step (3.b.2) and step (3.b.3) until a maximum iteration count is reached, and outputting an optimum parameter result, which is denoted as the optimum regularization parameter and the optimum kernel parameter of the mathematic model of VVRKFA.

3. A method for diagnosing an analog circuit fault based on vector-valued regularized kernel function approximation, comprising steps of: step (1): collecting output signals, comprising: extracting time domain response voltage signals of all nodes of an analog circuit under test; step (2): carrying out wavelet packet decomposition on the collected output signals, calculating the energy of all nodes as original sample characteristic data, and equally dividing the original sample characteristic data into a training sample set and a test sample set; step (3): utilizing a quantum particle swarm optimization (QPSO) algorithm to optimize a regularization parameter and a kernel parameter of vector-valued regularized kernel function approximation (VVRKFA) on the basis of the training sample set, and constructing a VVRKFA-based fault diagnosis model; step (4): inputting the test sample set to the constructed VVRKFA-based fault diagnosis model to recognize circuit fault classes; wherein in step (3), the process of constructing the VVRKFA-based fault diagnosis model comprises steps of:

(3.a) determining a type of a kernel function: adopting a Gaussian kernel function K(x_i, x_j) = exp(σ‖x_i − x_j‖²) as the kernel function of VVRKFA to establish a mathematic model of VVRKFA, wherein σ is a width factor of the Gaussian kernel function; wherein the mathematic model of VVRKFA is:

Min J(Θ, b, ξ) = (C/2) tr([Θ b]^T [Θ b]) + (1/2) Σ_{i=1}^{m} ‖ξ_i‖², s.t. Θ K(x_i^T, B^T)^T + b + ξ_i = Y_i, i = 1, 2, …, m  (1)

wherein, Θ ∈ ℝ^(N×m̄) is a regression coefficient matrix for mapping a characteristic inner product space to a label space; N is the number of fault classes; m is the dimensionality of the training samples in the training sample set; m̄ is the dimensionality of the samples in a dimensionality-reduced sample data set obtained after the dimensionality of the training sample set is reduced (m̄ ≤ m); b ∈ ℝ^(N×1) is a base vector; C is the regularization parameter; ξ_i ∈ ℝ^(N×1) is a slack variable; matrix B ∈ ℝ^(m̄×n) is the dimensionality-reduced sample data set obtained after the dimensionality of the training sample set is reduced; n is the number of all the training samples in the training sample set; K(·,·) is the kernel function; x_i is the training samples in the training sample set; Y_i is a category label vector of the training sample x_i; T is the matrix transpose symbol; J is the target function name; [Θ b] = YP^T[CI + PP^T]^(−1), wherein P = [K(A, B^T) e]^T ∈ ℝ^((m̄+1)×m) and A is the training sample set; I is a unit matrix, and Y ∈ ℝ^(N×m) is a matrix comprising m category label vectors of the training samples;

(3.b) by utilizing the quantum particle swarm optimization algorithm to optimize the mathematic model of VVRKFA, obtaining an optimum regularization parameter and an optimum kernel parameter of the mathematic model of VVRKFA;

(3.c) taking the training samples x_i in the training sample set as input data, constructing a vector value mapping function (2) with the optimum regularization parameter and the optimum kernel parameter obtained in step (3.b):

p(x_i) = Θ K(x_i^T, B^T)^T + b  (2)

wherein, Θ ∈ ℝ^(N×m̄) is the regression coefficient matrix; B ∈ ℝ^(m̄×n) is the dimensionality-reduced sample data set obtained after the dimensionality of the training sample set is reduced; K(·,·) is the kernel function, and in formula (2) the kernel function refers to the optimum kernel function obtained in step (3.b); C is the regularization parameter, and in formula (2) the regularization parameter C is the optimum regularization parameter obtained in step (3.b); b ∈ ℝ^(N×1) is the base vector; N is the number of fault classes; m is the dimensionality of the training samples in the training sample set; m̄ is the dimensionality of the samples in the dimensionality-reduced sample data set obtained after the dimensionality of the training sample set is reduced; n is the number of all the training samples in the training sample set;

(3.d) by utilizing the mapping function constructed in step (3.c), establishing a decision function of VVRKFA, wherein the decision function of VVRKFA is expressed as:

Class(x_t) = arg min_{1≤j≤N} d_M(p̂(x_t), p^(j) | Σ̂)  (3)

wherein, x_t is the test samples in the test sample set; p^(j) = (1/n_j) Σ_{i=1}^{n_j} p̂(x_i) is the center point of all training samples corresponding to the j-th fault class in the training sample set; d_M is the Mahalanobis distance; x_i is the training samples in the training sample set; Σ̂ = Σ_{j=1}^{N} (n_j − 1) Σ̂^(j)/(n − N) is an intra-class covariance matrix; n is the number of all the training samples in the training sample set; N is the number of fault classes; n_j is the number of training samples corresponding to the j-th fault class in the training sample set; p̂(x_i) is a mapping projection of the training sample x_i in a characteristic sub-space; p̂(x_t) is a mapping projection of the test sample x_t in the characteristic sub-space; wherein when the decision function is established, constructing the VVRKFA-based fault diagnosis model is accomplished; wherein in step (4), inputting the test sample set to the fault diagnosis model to recognize the circuit fault classes obtains the fault class of each test sample in the test sample set, whereby the diagnosis accuracy of each fault class is obtained, and diagnosis of the analog circuit under test is completed.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

(1) FIG. 1 is a flow diagram of the method for diagnosing an analog circuit fault based on vector-valued regularized kernel function approximation according to the present invention.

(2) FIG. 2 is a circuit diagram of a video amplifier circuit adopted in the present invention.

(3) FIG. 3 shows the training process for optimizing the parameters of vector-valued regularized kernel function approximation by means of a quantum particle swarm optimization algorithm.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT

(4) The present invention is further expounded below in conjunction with the accompanying drawings and embodiments.

(5) Referring to FIG. 1, the analog circuit fault diagnosis method based on vector-valued regularized kernel function approximation of the present invention comprises the following four steps: (1) extracting time domain response signals of an analog circuit under test; (2) carrying out three-layer db10 wavelet packet decomposition on the signals, adopting the energy of the eight third-layer nodes as original sample characteristic data, and equally dividing the original sample characteristic data into a training sample set and a test sample set; (3) on the basis of the training sample set, optimizing the parameters of the mathematic model of VVRKFA by means of a QPSO algorithm; and (4) inputting the test sample set to the constructed VVRKFA-based fault diagnosis model to recognize circuit fault classes.

(6) In step (1), the time domain response signals of the analog circuit under test are acquired: the input terminal is excited by a sinusoidal signal with a voltage of 5 V and a frequency of 100 Hz, and voltage signals are sampled at the output terminal.

(7) In step (2), the energy of the nodes is calculated as follows.

(8) During wavelet packet analysis, the signals are projected onto a space spanned by a set of mutually orthogonal wavelet basis functions and decomposed into a high-frequency part and a low-frequency part; both parts are decomposed again in the next layer of decomposition, so that a finer analysis is realized.

(9) The wavelet packet function μ_{j,k}(t) is defined as:
μ_{j,k}(t) = 2^(j/2) μ(2^j t − k);

(10) Wherein, j∈Z is the number of decomposition layers, k∈Z is the number of frequency band data points, and t is the time point;

(11) A wavelet packet decomposition algorithm for a set of discrete signals x(t) is as follows:

(12) d_{j+1}^{2n}(t) = Σ_k h(k − 2t) d_j^n(k); d_{j+1}^{2n+1}(t) = Σ_k g(k − 2t) d_j^n(k);

(13) wherein, h(k − 2t) and g(k − 2t) are respectively the coefficients of the low-pass filter and the high-pass filter in the corresponding multi-scale analysis; d_j^n(k) is the k-th wavelet decomposition coefficient of the n-th frequency band in the j-th layer; d_{j+1}^{2n} is the wavelet decomposition sequence of the 2n-th frequency band of the (j+1)-th layer; d_{j+1}^{2n+1} is the wavelet decomposition sequence of the (2n+1)-th frequency band of the (j+1)-th layer; k ∈ Z is the number of frequency band data points; t is the time point.

(14) d_j^n(k) = 2[Σ_τ h(k − 2τ) d_{j+1}^{2n}(τ) + Σ_τ g(k − 2τ) d_{j+1}^{2n+1}(τ)]
represents the k-th coefficient corresponding to the node (j, n) after wavelet packet decomposition, wherein the node (j, n) represents the n-th frequency band of the j-th layer, and τ is a translation parameter.

(15) The energy values of wavelet packet nodes are calculated as follows:

(16) E_i = Σ_{k=1}^{N} |d_j^n(k)|², i = 1, 2, …, 2^j;

(17) wherein, N is the length of the i-th frequency band, j is the number of layers of wavelet decomposition, k indexes the sequence points of the frequency band, and d_j^n(k) is the k-th wavelet decomposition coefficient of the i-th frequency band of the j-th layer.
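As a concrete (and simplified) illustration of this feature extraction, the sketch below performs a three-level wavelet packet decomposition and computes the eight node energies E_i. Haar analysis filters are substituted for the db10 filters of the embodiment purely to keep the example short and dependency-free; the filter choice, signal length and sampling rate are illustrative assumptions, not the patent's configuration.

```python
import numpy as np

# Haar analysis filters standing in for db10 (assumption for brevity).
H = np.array([1.0, 1.0]) / np.sqrt(2.0)   # low-pass filter h
G = np.array([1.0, -1.0]) / np.sqrt(2.0)  # high-pass filter g

def wp_split(d):
    """One decomposition step: d_j^n -> (d_{j+1}^{2n}, d_{j+1}^{2n+1})."""
    low = np.convolve(d, H[::-1])[1::2]    # sum_k h(k - 2t) d(k), downsampled
    high = np.convolve(d, G[::-1])[1::2]   # sum_k g(k - 2t) d(k), downsampled
    return low, high

def wp_energies(signal, levels=3):
    """Return the 2**levels node energies E_i = sum_k |d_j^n(k)|**2."""
    bands = [np.asarray(signal, dtype=float)]
    for _ in range(levels):
        nxt = []
        for d in bands:
            lo, hi = wp_split(d)
            nxt.extend([lo, hi])
        bands = nxt
    return np.array([np.sum(b ** 2) for b in bands])

# Example: a 5 V, 100 Hz sine sampled at an assumed 1 kHz rate.
t = np.arange(256) / 1000.0
x = 5.0 * np.sin(2 * np.pi * 100.0 * t)
E = wp_energies(x, levels=3)  # eight energy features, one per level-3 node
```

Because the Haar pair is orthogonal, the eight node energies sum to the signal energy, which is a convenient sanity check on the decomposition.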

(18) The VVRKFA-based fault diagnosis model is established through the following steps:

(19) (3.a) The type of a kernel function is determined:

(20) A Gaussian kernel function K(x_i, x_j) = exp(σ‖x_i − x_j‖²) is adopted as the kernel function of VVRKFA to establish the mathematic model of VVRKFA, wherein σ is a width factor of the Gaussian kernel function;

(21) The mathematic model of VVRKFA is as follows:

(22) Min J(Θ, b, ξ) = (C/2) tr([Θ b]^T [Θ b]) + (1/2) Σ_{i=1}^{m} ‖ξ_i‖²  (1)
s.t. Θ K(x_i^T, B^T)^T + b + ξ_i = Y_i, i = 1, 2, …, m

(23) wherein, Θ ∈ ℝ^(N×m̄) is a regression coefficient matrix for mapping a characteristic inner product space to a label space; N is the number of fault classes; m is the dimensionality of the training samples in the training sample set; m̄ is the dimensionality of the samples in a dimensionality-reduced sample data set obtained after the dimensionality of the training sample set is reduced; b ∈ ℝ^(N×1) is a base vector; C is a regularization parameter; ξ_i ∈ ℝ^(N×1) is a slack variable; the value of the element x_j in the kernel function of VVRKFA is taken from matrix B, and matrix B ∈ ℝ^(m̄×n) is the dimensionality-reduced sample data set (m̄ ≤ m) obtained after the dimensionality of the training sample set is reduced; n is the number of all the training samples in the training sample set; K(·,·) is the kernel function; x_i is the training samples in the training sample set; Y_i is a category label vector of the training sample x_i; T is the matrix transpose symbol; J is the target function name; [Θ b] = YP^T[CI + PP^T]^(−1), wherein P = [K(A, B^T) e]^T ∈ ℝ^((m̄+1)×m); A ∈ ℝ^(m×n) is the training sample set; I is a unit matrix, and Y ∈ ℝ^(N×m) is a matrix including m category label vectors of the training samples;
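The closed-form solution [Θ b] = YP^T[CI + PP^T]^(−1) can be sketched directly in NumPy. The toy data shapes, the kernel width, the use of the full training set as B, and the sign convention exp(−σ‖·‖²) (the usual form of a Gaussian kernel) are assumptions of this sketch, not the patent's exact configuration.

```python
import numpy as np

def gaussian_kernel(X, B, sigma):
    # K(x_i, b_j) = exp(-sigma * ||x_i - b_j||^2); negative sign is the
    # common Gaussian-kernel convention (an assumption here).
    d2 = ((X[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-sigma * d2)

def vvrkfa_fit(A, Y, C=1e-4, sigma=1.0, reduce_to=None):
    """A: (n_samples, n_features) training samples; Y: (N, n_samples) labels."""
    n = A.shape[0]
    B = A if reduce_to is None else A[:reduce_to]   # reduced sample set
    K = gaussian_kernel(A, B, sigma)                # (n_samples, n_B)
    P = np.hstack([K, np.ones((n, 1))]).T           # P = [K(A, B^T) e]^T
    # [Theta b] = Y P^T (C I + P P^T)^{-1}
    Tb = Y @ P.T @ np.linalg.inv(C * np.eye(P.shape[0]) + P @ P.T)
    Theta, b = Tb[:, :-1], Tb[:, -1:]
    return Theta, b, B

def vvrkfa_project(x, Theta, b, B, sigma=1.0):
    """Vector-valued mapping p(x) = Theta K(x, B^T)^T + b (formula (2))."""
    return Theta @ gaussian_kernel(np.atleast_2d(x), B, sigma).T + b
```

With N fault classes and n training samples, `vvrkfa_fit` returns Θ of shape (N, n_B) and b of shape (N, 1), matching the dimensions stated above.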

(24) (3.b) the quantum particle swarm optimization algorithm is utilized to optimize the mathematic model of VVRKFA to obtain an optimum regularization parameter and an optimum kernel parameter of the mathematic model of VVRKFA;

(25) (3.c) with the training samples x_i in the training sample set as input data, the following vector value mapping function is constructed with the optimum regularization parameter and the optimum kernel parameter obtained in Step (3.b):
p(x_i) = Θ K(x_i^T, B^T)^T + b  (2)

(26) wherein, Θ ∈ ℝ^(N×m̄) is the regression coefficient matrix; B ∈ ℝ^(m̄×n) is the dimensionality-reduced sample data set obtained after the dimensionality of the training sample set is reduced; K(·,·) is the kernel function, and in formula (2) the kernel function particularly refers to the optimum kernel function obtained in Step (3.b); C is the regularization parameter, and in formula (2) the regularization parameter C is the optimum regularization parameter obtained in Step (3.b); b ∈ ℝ^(N×1) is the base vector; N is the number of fault classes; m is the dimensionality of the training samples in the training sample set; m̄ is the dimensionality of the samples in the dimensionality-reduced sample data set obtained after the dimensionality of the training sample set is reduced; n is the number of all the training samples in the training sample set;

(27) (3.d) the mapping function constructed in Step (3.c) is utilized to establish a decision function of VVRKFA, wherein the decision function of VVRKFA is expressed as follows:
Class(x_t) = arg min_{1≤j≤N} d_M(p̂(x_t), p^(j) | Σ̂);  (3)

(28) wherein, x_t is the test samples in the test sample set; p^(j) = (1/n_j) Σ_{i=1}^{n_j} p̂(x_i) is the center point of all training samples corresponding to the j-th fault class in the training sample set; d_M is the Mahalanobis distance; x_i is the training samples in the training sample set; Σ̂ = Σ_{j=1}^{N} (n_j − 1) Σ̂^(j)/(n − N) is an intra-class covariance matrix; n is the number of all the training samples in the training sample set; n_j is the number of training samples corresponding to the j-th fault class in the training sample set; p̂(x_i) is the mapping projection of the training sample x_i in a characteristic sub-space; p̂(x_t) is the mapping projection of the test sample x_t in the characteristic sub-space.

(29) When the decision function is established, constructing the VVRKFA-based fault diagnosis model is accomplished.
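A minimal sketch of this decision step, assuming the projections p̂(x) have already been computed by the mapping function: class centers and the pooled intra-class covariance are formed from training projections, and a sample is assigned to the class minimizing the Mahalanobis distance, as in formula (3). The placeholder arrays and class layout are illustrative assumptions.

```python
import numpy as np

def fit_centers_cov(proj, labels, n_classes):
    """proj: (n, N) projections p_hat(x_i); labels: (n,) integer classes."""
    centers = np.stack([proj[labels == j].mean(axis=0)
                        for j in range(n_classes)])
    n = len(labels)
    # pooled intra-class covariance: sum_j (n_j - 1) Sigma_j / (n - N)
    S = sum(np.cov(proj[labels == j].T) * (np.sum(labels == j) - 1)
            for j in range(n_classes)) / (n - n_classes)
    return centers, S

def classify(p, centers, S):
    """Return argmin_j of the Mahalanobis distance d_M(p, center_j | S)."""
    Sinv = np.linalg.inv(S)
    d = [float((p - c) @ Sinv @ (p - c)) for c in centers]
    return int(np.argmin(d))
```

In the full method, `proj` would be the training projections p̂(x_i) from formula (2) and `p` the projection p̂(x_t) of a test sample.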

(30) The process of utilizing the quantum particle swarm optimization algorithm to optimize the regularization parameter and kernel parameter of the mathematic model of VVRKFA to obtain an optimum regularization parameter and an optimum kernel parameter of the mathematic model of VVRKFA particularly comprises following steps of:

(31) (3.b.1) initializing parameters of the QPSO algorithm, comprising velocity, location, population size, iterations and optimization range, wherein each particle is a two-dimensional vector, a first dimension is the regularization parameter of the mathematic model of VVRKFA, and a second dimension is the kernel parameter of the mathematic model of VVRKFA;

(32) (3.b.2) calculating the fitness of the particles to obtain a global optimum individual and a local optimum individual;

(33) (3.b.3) updating the velocity and location of the particles according to the particle location update formula given below;

(34) (3.b.4) repeating step (3.b.2) and step (3.b.3) until the maximum number of iterations is reached, and outputting the result.

(35) The particle location update formula in the QPSO algorithm is:
X_i(t+1) = P′_i(t) ± α|Mbest(t) − X_i(t)| × ln(1/u);

(36) in the formula:

(37) Mbest(t) = (1/N) Σ_{j=1}^{N} P_j(t − 1),

(38) P′_i(t) = β P_i(t − 1) + (1 − β) P_g(t − 1), α = ω_max − (ω_max − ω_min) × iter/iter_max,
wherein, N is the population size, and Mbest is the mean point of the individual optimum locations of all the particles; ω_max is the maximum inertia weight; ω_min is the minimum inertia weight; P_j and P_g are respectively the individual optimum location of particle j and the global optimum location; X is the particle location; t is the current iteration; α is a compression and expansion factor; u and β are random numbers uniformly distributed within [0,1]; P′_i is the updated location of particle i; P_i is the current location of particle i.
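The QPSO iteration above can be sketched as follows, here minimizing a toy 2-D sphere function as a stand-in for the real fitness (which would be the diagnosis accuracy of a VVRKFA model trained with the candidate regularization and kernel parameters). The population size, iteration count, bounds, and the simple linear α schedule from 1.0 down to 0.5 are illustrative assumptions rather than the patent's settings.

```python
import math
import random

def qpso(fitness, dim=2, pop=20, iters=50, lo=-5.0, hi=5.0, seed=0):
    """Minimize `fitness` with a basic QPSO: X = p' +/- alpha*|Mbest - X|*ln(1/u)."""
    rng = random.Random(seed)
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop)]
    P = [x[:] for x in X]                      # individual optimum locations
    Pf = [fitness(x) for x in X]               # their fitness values
    g = min(range(pop), key=lambda i: Pf[i])
    G, Gf = P[g][:], Pf[g]                     # global optimum location
    for t in range(iters):
        alpha = 1.0 - 0.5 * t / iters          # contraction-expansion factor
        mbest = [sum(P[i][d] for i in range(pop)) / pop for d in range(dim)]
        for i in range(pop):
            beta = rng.random()
            for d in range(dim):
                # local attractor p' = beta*P_i + (1 - beta)*P_g
                attractor = beta * P[i][d] + (1 - beta) * G[d]
                u = rng.random() or 1e-12      # guard against log(1/0)
                step = alpha * abs(mbest[d] - X[i][d]) * math.log(1.0 / u)
                X[i][d] = attractor + (step if rng.random() < 0.5 else -step)
                X[i][d] = min(max(X[i][d], lo), hi)
            f = fitness(X[i])
            if f < Pf[i]:
                P[i], Pf[i] = X[i][:], f
                if f < Gf:
                    G, Gf = X[i][:], f
    return G, Gf

# Usage: each "particle" would encode (regularization parameter, kernel width).
best, best_f = qpso(lambda x: sum(v * v for v in x))
```

For the diagnosis task, `fitness` would train a VVRKFA model with the particle's two parameters and return a value to minimize, such as the validation error rate.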

(39) In step (4), the test sample set is input to the fault diagnosis model to recognize the circuit fault classes and obtain the fault class of each test sample in the test sample set; the diagnosis accuracy of each fault class is then obtained, and diagnosis of the analog circuit under test is completed.
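A small helper of the kind step (4) implies, tallying per-class and overall diagnosis accuracy from true and predicted fault codes; the fault codes and predictions in the usage line are made-up placeholders.

```python
from collections import defaultdict

def accuracy_report(true_labels, predicted):
    """Return ({class: accuracy}, overall accuracy) for paired label lists."""
    per_class = defaultdict(lambda: [0, 0])   # class -> [correct, total]
    for t, p in zip(true_labels, predicted):
        per_class[t][1] += 1
        per_class[t][0] += int(t == p)
    overall = sum(c for c, _ in per_class.values()) / len(true_labels)
    return {k: c / n for k, (c, n) in per_class.items()}, overall

# Placeholder example with two fault classes:
per_cls, overall = accuracy_report(["F0", "F0", "F1", "F1"],
                                   ["F0", "F0", "F1", "F0"])
```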

(40) The implementation process and performance of the analog circuit fault diagnosis method based on VVRKFA of the present invention are explained below with reference to an embodiment.

(41) Nominal values and tolerances of all elements of a video amplifier circuit shown in FIG. 2 are marked out in the figure. The whole process of the fault diagnosis method of the present invention is illustrated with this circuit as an example, wherein a sinusoidal signal with a voltage of 5 V and a frequency of 100 Hz is used as an excitation source, and time domain fault response signals are sampled at the output terminal of the circuit. R1, R2, R3, R4, R5, R6, R8 and Q1 are selected as test objects. Table 1 and Table 2 show the fault codes, nominal values and fault values of the test elements, wherein ↑ represents that the fault value is greater than the nominal value, and ↓ represents that the fault value is lower than the nominal value. The no-fault condition is also regarded as a fault class, with the fault code F0. 60 data samples are collected for each fault class and divided into two parts: the first 30 samples are used to construct the VVRKFA-based fault diagnosis model, and the last 30 samples are used to test the performance of the VVRKFA-based fault diagnosis model.

(42) TABLE 1 Parameter faults of the video amplifier circuit

Fault code   Fault class   Fault value
F1           R2↑           15 kΩ
F2           R2↓           6 kΩ
F3           R4↑           100 Ω
F4           R4↓           27 Ω
F5           R6↑           150 Ω
F6           R6↓           5 Ω
F7           R8↑           1 kΩ
F8           R8↓           50 Ω

(43) TABLE 2 Catastrophic faults of the video amplifier circuit

Fault code   Fault value
F9           R1 open circuit
F10          R1 short circuit
F11          R3 short circuit
F12          C4 open circuit
F13          R5 open circuit
F14          base open circuit
F15          base-emitter short circuit
F16          collector open circuit

(44) In the QPSO algorithm, the population size and the number of iterations are initialized, and the weight is set to 0.5. During simulation, the regularization parameter and kernel width factor obtained by optimization are respectively 1.0076*10^(−4) and 1.0095. The training process of the QPSO-optimized VVRKFA is shown in FIG. 3. The optimum regularization parameter and the optimum kernel width factor obtained by optimization are used to construct the VVRKFA-based fault diagnosis model, and test data is input to the fault diagnosis model to recognize faults. The diagnosis results are shown in Table 3. The VVRKFA-based fault diagnosis model with the parameters selected by QPSO correctly recognized all samples of faults F0, F1, F3, F4, F7, F8, F9, F10, F11, F12, F13, F14, F15 and F16, recognized two samples of fault F2 as fault F5 by mistake, recognized one sample of fault F5 as fault F3 by mistake, and recognized one sample of fault F6 as fault F0 by mistake. It can be considered that the VVRKFA-based fault diagnosis model with the regularization parameter and kernel width factor optimized by the QPSO algorithm has a good fault diagnosis effect. Through calculation, the overall fault diagnosis accuracy of the analog circuit can reach 98.82%.

(45) TABLE 3 Diagnosis results of fault classes (rows: actual fault class; 30 test samples per class)

F0: 30 diagnosed as F0
F1: 30 diagnosed as F1
F2: 28 diagnosed as F2, 2 as F5
F3: 30 diagnosed as F3
F4: 30 diagnosed as F4
F5: 29 diagnosed as F5, 1 as F3
F6: 29 diagnosed as F6, 1 as F0
F7 through F16: 30 each diagnosed correctly