Fault diagnosis device based on common information and special information of running video information for electric-arc furnace and method thereof
10345046 · 2019-07-09
CPC classification
G06V20/41
PHYSICS
F27D11/08
MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
H04N7/181
ELECTRICITY
F27D2021/0085
MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
F27D2021/026
MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
G06V20/52
PHYSICS
F27D21/0021
MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
International classification
F27D21/00
MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
H04N7/18
ELECTRICITY
F27D11/08
MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
Abstract
A fault diagnosis method for an electrical fused magnesia furnace includes steps of: 1) arranging six cameras; 2) obtaining video information by the six cameras and sending the video information to a control center; then analyzing the video information by a chip of the control center; wherein a multi-view-based fault diagnosis method is used by the chip, comprising steps of: 2-1) comparing the difference between two consecutive frame histograms for shot segmentation; 2-2) computing a set of characteristic values for each shot obtained by the step 2-1), and then computing color, texture, and motion vector information; finally, evaluating shot importance via entropy; 2-3) clustering shots together by calculating similarity; 2-4) generating and optimizing a multi-view video summarization with a multi-objective optimization model; and 2-5) providing fault detection and diagnosis; and 3) displaying results of the fault detection and diagnosis on a host computer interface of the control center.
Claims
1. A fault diagnosis method based on common information and special information of running video information for an EFMF (electrical fused magnesia furnace), comprising steps of: 1) arranging six cameras, wherein three of the six cameras are respectively arranged at relative positions of three electrodes above the EFMF and aim at the electrodes of the EFMF, so as to monitor a furnace eruption fault; the rest of the six cameras are symmetrically arranged around a furnace body with a 120 degree difference and aim at the furnace body, so as to monitor occurrence of a furnace leaking fault; 2) obtaining video information by the six cameras and sending the video information to a control center; then analyzing the video information by a chip of the control center; wherein, in order to reduce the difficulty of analysis and improve the real-time performance of video data analysis, multi-view video summarization technology is introduced, so that industrial process monitoring based on running video information is able to be realized; specifically, a multi-view-based fault diagnosis method is used by the chip, comprising steps of: 2-1) comparing the difference between two consecutive frame histograms for shot segmentation; 2-2) computing a set of characteristic values for each shot obtained by the step 2-1), and then computing color, texture, and motion vector information; finally, evaluating shot importance via entropy; 2-3) clustering shots together by calculating similarity, wherein calculation of the similarity of the shots comprises the similarity of the shots in a mono-view and the correlation of the shots in different views; 2-4) generating and optimizing a multi-view video summarization with a multi-objective optimization model; wherein each shot in a shot cluster is either reserved or abandoned, so as to obtain the multi-view video summarization with a smaller number and a shorter length of shots while retaining the video information more fully; and 2-5) providing fault detection and diagnosis; and 3) displaying results of the fault detection and diagnosis on a host computer interface of the control center.
2. The fault diagnosis method, as recited in claim 1, wherein the step 2-2) specifically comprises a step of computing the color information by a color histogram; wherein an HSV (hue, saturation and value) color space is used to obtain color histogram information, so as to describe color entropy, wherein: for a frame f with N color values, a probability of appearance of an i-th color value in an image is P_i; thus the color entropy is defined as:
E_HSV(f) = −Σ_{i=1}^{N} P_i·log₂ P_i  (1)
3. The fault diagnosis method, as recited in claim 2, wherein the step 2-2) specifically comprises a step of computing the texture information by an edge direction histogram; wherein texture features are extracted using an edge direction histogram descriptor; a Sobel operator is selected to calculate an edge direction of each pixel; an image space is separated by four lines: horizontal, vertical, 45°, and 135°, in such a manner that the image is divided into eight bins about a center point of the image; then edge direction information is gathered and an edge direction histogram is obtained; information entropy E_EDGE(f) is calculated based on the edge direction histogram of each frame.
4. The fault diagnosis method, as recited in claim 3, wherein the step 2-2) specifically comprises a step of computing the motion vector information by a motion-related feature vector; wherein V(t,k) is used to represent a k-th bin grey value of the color histogram of a frame t, where 0 ≤ k ≤ 127; a motion-related feature vector is represented by a histogram difference between the frame t and a previous frame t−1, which is determined as
V_d(t, k) = |V(t, k) − V(t−1, k)|  (2)
5. The fault diagnosis method, as recited in claim 4, wherein the step 2-2) specifically comprises a step of evaluating the shot importance via the entropy; wherein an entropy fusion model is applied to deal with the entropy, and different weights are chosen to merge the different types of the entropy:
E_com(f) = ω_1·E_HSV(f) + ω_2·E_edge(f) + ω_3·E_motion(f)  (4) wherein ω_i meets: ω_1 + ω_2 + ω_3 = 1; thus an important frame set is obtained:
F_imp(Video) = {f_i1, f_i2, …, f_in}  (5) then an entropy score of each frame is obtained; wherein frames whose entropy score exceeds a threshold are retained as important frames; a definition of the entropy score is as follows:
S_imp = Int(f_i1, f_i2, …, f_in)  (7) wherein Int(·) is an integration operation to combine the important frames of a same shot, so as to obtain the important shots.
6. The fault diagnosis method, as recited in claim 5, wherein in the step 2-3), the similarity of the shots in the mono-view is measured by two indexes: a temporal adjacency and a visual similarity; specifically, the temporal adjacency refers to the fact that two temporally adjacent shots are likely to reflect a same event, which is defined as:
d_T(T_i, T_j) = α_1 + α_2·|T_i − T_j| + α_3·|T_i − T_j|²  (8) wherein T_i and T_j respectively denote a time of middle frames of the i-th and j-th shots along a time axis in a same view; α_1, α_2 and α_3 are control coefficients; wherein the correlation of the shots is measured by the color histogram and the edge direction histogram; further, a Euclidean distance is used to measure a difference between two color histograms and two edge direction histograms separately; if k is the k-th bin of the histogram, then:
7. The fault diagnosis method, as recited in claim 6, wherein in the step 2-3), the correlation of the shots in the different views is measured by a principal component analysis-scale invariant feature transform (PCA-SIFT) algorithm; wherein n frames are randomly selected from each of the similar shots in each view for PCA-SIFT detection; through the feature vector generated by key points in each frame, the descriptor of each frame is obtained, and then the Euclidean distance d of the feature vector is considered as a similarity determination measure between the two frames to obtain a correlation degree of the shots under the different views; assuming that V is a selected shot in one view and S is a shot to be compared in another view, the distance between the two shots is measured by:
8. The fault diagnosis method, as recited in claim 7, wherein in the step 2-4), a given decision vector x = (x_1, x_2, …, x_n) is provided, which meets: x_i ∈ {0, 1}, i = 1, 2, …, n;
u(F(x)) = max Σ_{i=1}^{4} ω_i·(UV)_i  (18) wherein F(x) = (φ(f_1(x)), φ(f_2(x)), φ(f_3(x)), φ(f_4(x)))^T; and ω_i is a weight value control coefficient of the objective function and meets ω_1 + ω_2 + ω_3 + ω_4 = 1 with non-negative ω_i; then the multi-view video summarization is optimized by solving x*:
9. The fault diagnosis method, as recited in claim 8, wherein the step 2-5) specifically comprises a step of building monitoring indexes COMI and SPEI from characteristic variables extracted from the multi-view video summarization; wherein, in view of a monitoring video in the different views, the video summarization is taken as common information of a surveillance video by a global multi-objective optimization under the single view and the different views; then, video sets are obtained by extracting the important shots from each single view respectively, which are regarded as the information sources of special parts of the monitoring video; wherein the monitoring indexes are defined as:
COMI = c^T·Σ_c^{−1}·c  (20)
SPEI_l = s_l^T·Σ_l^{−1}·s_l,  l = 1, 2, 3  (21) wherein c is a vector of common information feature variables, and s_l is a vector of special information feature variables at an l-th single view; Σ_c and Σ_l are covariance matrices of modeling data about the multi-view common video information and the special video information respectively.
10. The fault diagnosis method, as recited in claim 9, wherein the cameras are CCD (charge-coupled device) cameras.
11. The fault diagnosis method, as recited in claim 10, wherein the multi-view-based fault diagnosis method used for analyzing the video information is stored in the chip of the control center.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
(11) Embodiment 1: a fault diagnosis device based on common information and special information of running video information for an EFMF (electrical fused magnesia furnace).
(12) For the smelting process, the strong coupling between multi-source interference and the conventional monitoring variables, caused by the complex field conditions, often makes the process monitoring results of the EFMF unsatisfactory. Therefore, video information is adopted as process variables to overcome the above difficulties. In order to obtain the above process data, we designed the following process monitoring device, as shown in
(13) According to
(14) In
(15) The CCD camera monitoring locations have been roughly introduced hereinbefore, and the monitoring location settings will now be explained in accordance with the fused magnesia smelting crystal structure and the division of the smelting work areas. In
(16) The fault diagnosis device based on the multi-view method has been introduced in detail hereinbefore. A description of the multi-view-based fault diagnosis method is given in the following.
(17) Embodiment 2: a fault diagnosis method based on common information and special information of running video information for an EFMF.
(18) Multi-view technology is a method that shoots the same scene from different viewpoints and then extracts the same shots or shots with more associated information. This obtains all-round video image information in a plane so as to avoid blind spots. However, for continuous monitoring video, storage and analysis are already a relatively troublesome problem, and extraction and analysis of the monitoring video information from multiple cameras observing the same scene are even more complicated and difficult. In order to reduce the difficulty of analysis and improve the real-time performance of video data analysis, multi-view video summarization technology is introduced into the present invention, so that industrial process monitoring based on running video information can be realized. The multi-view video summarization technology will therefore be explained in detail in this section.
(19) We first introduce the method of shot segmentation, and then we present the entropy model used to evaluate the importance of shots. Next, the similarity of shots in multi-view is detailed in order to facilitate clustering of similar shots. Later we apply multi-objective optimization to select the most representative shots and generate the final multi-view video summarization. Last but not least, we build the monitoring indexes for fault detection and diagnosis.
(20) Shots Segmentation:
(21) The first step in the video summarization extraction process is shot segmentation. In the present invention, we use the grayscale histogram method, comparing the difference between two consecutive frame histograms.
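The histogram-difference segmentation described above can be sketched as follows. This is an illustrative Python sketch, not the patented implementation; the bin count (64) and the cut threshold (0.5 on the L1 histogram distance) are hypothetical choices.

```python
import numpy as np

def histogram(frame, bins=64):
    """Normalized grayscale histogram of a frame (2-D uint8 array)."""
    h, _ = np.histogram(frame, bins=bins, range=(0, 256))
    return h / h.sum()

def segment_shots(frames, threshold=0.5):
    """Cut the frame sequence into shots wherever the L1 distance between
    consecutive frame histograms exceeds `threshold`."""
    boundaries = [0]
    prev = histogram(frames[0])
    for t in range(1, len(frames)):
        cur = histogram(frames[t])
        if np.abs(cur - prev).sum() > threshold:  # large histogram change -> cut
            boundaries.append(t)
        prev = cur
    boundaries.append(len(frames))
    return [(boundaries[i], boundaries[i + 1]) for i in range(len(boundaries) - 1)]

# two synthetic "scenes": dark frames then bright frames
dark = [np.zeros((8, 8), dtype=np.uint8)] * 3
bright = [np.full((8, 8), 255, dtype=np.uint8)] * 3
shots = segment_shots(dark + bright)
print(shots)  # [(0, 3), (3, 6)]
```

The single cut is detected at the dark-to-bright transition, splitting the six frames into two shots.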
(22) Entropy Model:
(23) The evaluation of shot importance is an important part of multi-view video summarization, because important shots often contain more useful information about the event. Generally speaking, static or low-activity shots can be defined as unimportant shots. In this part, we compute a set of characteristic values for each shot and then consider color, texture, and motion vector information. Finally, we represent them via entropy to evaluate the shot importance.
(24) Color Histogram: To represent the importance of the shots, we obtain the color histogram information using the HSV color space to describe the color entropy, because the HSV space is found to be more robust to the small color changes across the multi-view cameras.
(25) For a frame f with N color values, the probability of appearance of the i-th color value in the image is P_i; thus the color entropy is defined as:
E_HSV(f) = −Σ_{i=1}^{N} P_i·log₂ P_i  (1)
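As a minimal numerical illustration of the color-entropy definition, the following sketch computes the entropy from a histogram of bin counts; the example histograms are hypothetical.

```python
import numpy as np

def color_entropy(frame_hsv_hist):
    """Entropy of a color histogram: E = -sum_i P_i * log2(P_i),
    where P_i is the probability of the i-th color value."""
    p = np.asarray(frame_hsv_hist, dtype=float)
    p = p / p.sum()
    p = p[p > 0]                      # 0 * log 0 is taken as 0
    return float(-(p * np.log2(p)).sum())

uniform = [1, 1, 1, 1]   # all color values equally likely -> maximal entropy
peaked  = [4, 0, 0, 0]   # a single dominant color -> zero entropy
print(color_entropy(uniform))  # 2.0
print(color_entropy(peaked))   # 0.0
```

A flat histogram maximizes the entropy, while a frame dominated by one color scores zero, matching the intuition that low-variety frames carry little information.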
(27) Edge Direction Histogram: These texture features are extracted using an edge direction histogram descriptor. The Sobel operator is selected to calculate the edge direction of each pixel. The image space is separated by four lines: horizontal, vertical, 45°, and 135°. As a result, the image is divided into eight bins about the center point of the image. Then the edge direction information is gathered and the edge direction histogram is obtained. Next, the information entropy E_EDGE(f) can be calculated based on the edge direction histogram of each frame.
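A possible realization of the Sobel-based edge direction histogram is sketched below; the minimal hand-rolled convolution and the gradient-magnitude threshold are assumptions for illustration only.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def convolve2d_valid(img, k):
    """Minimal 'valid' 2-D correlation for 3x3 kernels."""
    H, W = img.shape
    out = np.zeros((H - 2, W - 2))
    for i in range(H - 2):
        for j in range(W - 2):
            out[i, j] = (img[i:i + 3, j:j + 3] * k).sum()
    return out

def edge_direction_histogram(img, n_bins=8, mag_thresh=1e-6):
    """Quantize each pixel's Sobel gradient direction into `n_bins`
    45-degree sectors and return the normalized histogram."""
    gx = convolve2d_valid(img, SOBEL_X)
    gy = convolve2d_valid(img, SOBEL_Y)
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx)[mag > mag_thresh]          # edge pixels only
    bins = ((ang + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
    hist = np.bincount(bins, minlength=n_bins).astype(float)
    return hist / max(hist.sum(), 1.0)

# vertical step edge -> the horizontal gradient direction dominates
img = np.zeros((10, 10)); img[:, 5:] = 1.0
h = edge_direction_histogram(img)
print(h.argmax())  # 4 (the sector containing angle 0)
```

The entropy E_EDGE(f) would then be computed on `h` exactly as for the color histogram.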
(28) Motion-related Feature Vector: When a moving object changes in the same scene, it leads to changes in the pixels. In this part, the motion entropy is defined based on the grey-scale histogram, with some appropriate improvements carried out on the grey-scale entropy of the image. Let V(t,k) represent the k-th bin grey value of the color histogram of the frame t, where 0 ≤ k ≤ 127. Its motion-related feature vector can be represented by the histogram difference between frame t and its previous frame t−1, which is determined as
V_d(t, k) = |V(t, k) − V(t−1, k)|  (2)
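The motion-related feature vector described above (the per-bin absolute histogram difference between consecutive frames) can be sketched as follows, with synthetic frames standing in for real camera data.

```python
import numpy as np

def grey_histogram(frame, n_bins=128):
    """128-bin grey-level histogram V(t, k), k = 0..127."""
    h, _ = np.histogram(frame, bins=n_bins, range=(0, 256))
    return h.astype(float)

def motion_feature(frame_t, frame_prev):
    """Motion-related feature vector: per-bin absolute histogram
    difference |V(t, k) - V(t-1, k)| between consecutive frames."""
    return np.abs(grey_histogram(frame_t) - grey_histogram(frame_prev))

still = np.zeros((8, 8), dtype=np.uint8)
moved = still.copy(); moved[2:4, 2:4] = 200    # a bright object appears
print(motion_feature(still, still).sum())      # 0.0 -> no motion
print(motion_feature(moved, still).sum() > 0)  # True -> motion detected
```

A motion entropy can then be derived from the normalized difference vector, so that static shots score low and high-activity shots score high.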
(30) According to different video contents and the user's needs, and while emphasizing the impact of useful information, an entropy fusion model is applied to deal with the entropy. Taking into account the impact of the different entropies on the results, different weights are chosen to merge the different types of entropy:
E_com(f) = ω_1·E_HSV(f) + ω_2·E_edge(f) + ω_3·E_motion(f)  (4)
(31) and ω_i meets: ω_1 + ω_2 + ω_3 = 1.
(32) Thus the important frame set can be obtained:
F_imp(Video) = {f_i1, f_i2, …, f_in}  (5)
(33) Then we can obtain the entropy score of each frame. Generally speaking, the higher the score is, the more information the frame contains. Frames whose entropy score exceeds a threshold are retained as important frames. The definition of the entropy score is as follows:
score(f_i) = (E_i(f) − min E_i(f)) / (max E_i(f) − min E_i(f))  (6)
(35) Here the threshold can be customized according to user requirements, and its value lies in the interval [0, 1]. E_i(f) represents the entropy of the i-th frame; max E_i(f) and min E_i(f) represent the maximum and minimum values of all the entropies, respectively. Frames whose scores are greater than the threshold are retained.
(36) Finally, the important shot S.sub.imp is defined as follows:
S_imp = Int(f_i1, f_i2, …, f_in)  (7)
(37) where Int(·) is the integration operation that combines the important frames belonging to the same shot. Thus we obtain the important shots.
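The entropy fusion and threshold-based selection of important frames can be sketched together as follows; the weights (0.4, 0.3, 0.3), the threshold 0.5, and the per-frame entropy values are all hypothetical.

```python
def fuse_entropies(e_hsv, e_edge, e_motion, w=(0.4, 0.3, 0.3)):
    """Weighted fusion of the three entropies, with weights summing to 1."""
    assert abs(sum(w) - 1.0) < 1e-9
    return w[0] * e_hsv + w[1] * e_edge + w[2] * e_motion

def important_frames(entropies, threshold=0.5):
    """Min-max normalize the combined entropies to [0, 1] and keep the
    frame indices whose score exceeds `threshold`."""
    lo, hi = min(entropies), max(entropies)
    if hi == lo:
        return list(range(len(entropies)))
    scores = [(e - lo) / (hi - lo) for e in entropies]
    return [i for i, s in enumerate(scores) if s > threshold]

# hypothetical (E_HSV, E_edge, E_motion) triples for four frames
e_com = [fuse_entropies(*f) for f in [(0.1, 0.1, 0.1), (2.0, 1.5, 1.8),
                                      (0.2, 0.1, 0.2), (1.9, 1.6, 1.7)]]
print(important_frames(e_com))  # [1, 3]
```

The two high-activity frames survive the threshold; the near-static frames are discarded.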
(38) The similarity of shots in multi-view:
(39) A considerable part of the shots above reflects similar events. Therefore, these shots should be clustered together by calculating their similarity values. For multi-view videos, each shot correlates closely not only with the temporally adjacent shots in its view but also with the shots in other views. The calculation of the similarity of shots includes the similarity of shots in mono-view and the correlation of shots in different views.
(40) The similarity of the shots in mono-view can be measured by two indexes: temporal adjacency and visual similarity. In particular, temporal adjacency refers to the fact that two shots are likely to reflect the same event if they are temporally adjacent to each other, and therefore they tend to have a high degree of similarity. Here we define:
d_T(T_i, T_j) = α_1 + α_2·|T_i − T_j| + α_3·|T_i − T_j|²  (8)
(41) where T_i and T_j respectively denote the times of the middle frames of the i-th and j-th shots along the time axis in the same view, and α_1, α_2 and α_3 are the control coefficients.
(42) In terms of visual similarity, the color histogram and the edge direction histogram are applied to measure the correlation of the shots. Further, the Euclidean distance is used to measure the difference between two color histograms and two edge direction histograms separately. Let k be the k-th bin of the histogram; then:
d_C(H_i, H_j) = √( Σ_k (H_i(k) − H_j(k))² )  (9)
d_E(G_i, G_j) = √( Σ_k (G_i(k) − G_j(k))² )  (10)
(44) where H_i(k) and H_j(k) respectively denote the k-th bin of the color histogram in the i-th and j-th frames, and H_i(k) − H_j(k) denotes the difference of the color histogram between the two corresponding bins k. Likewise, G_i(k) and G_j(k) denote the k-th bin of the edge direction histogram in the i-th and j-th frames respectively, and G_i(k) − G_j(k) denotes the difference of the edge direction histogram between the two corresponding bins k.
(45) Finally, the temporal adjacency and the visual similarity are combined to obtain the similarity of the shots under the same view, namely:
Sim(S_i, S_j) = λ_1·d_T(T_i, T_j) + λ_2·d_C(H_i, H_j) + λ_3·d_E(G_i, G_j)  (11)
(47) where λ_1, λ_2 and λ_3 are the regularization parameters; then, by setting a threshold, the similar shots under the same view can be obtained.
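The mono-view similarity, combining temporal adjacency with the two histogram distances, might look like the following sketch; the coefficients and the dictionary-based shot records are illustrative assumptions.

```python
import numpy as np

def temporal_distance(t_i, t_j, a=(0.0, 1.0, 0.1)):
    """d_T = a1 + a2*|Ti - Tj| + a3*|Ti - Tj|^2 (illustrative coefficients)."""
    d = abs(t_i - t_j)
    return a[0] + a[1] * d + a[2] * d * d

def hist_distance(h_i, h_j):
    """Euclidean distance between two (color or edge) histograms."""
    h_i, h_j = np.asarray(h_i, float), np.asarray(h_j, float)
    return float(np.sqrt(((h_i - h_j) ** 2).sum()))

def mono_view_similarity(shot_i, shot_j, lam=(0.4, 0.3, 0.3)):
    """Combine temporal adjacency with the color- and edge-histogram
    distances into one dissimilarity score; small means similar."""
    t = temporal_distance(shot_i["t"], shot_j["t"])
    c = hist_distance(shot_i["color"], shot_j["color"])
    e = hist_distance(shot_i["edge"], shot_j["edge"])
    return lam[0] * t + lam[1] * c + lam[2] * e

s1 = {"t": 10, "color": [0.5, 0.5], "edge": [1.0, 0.0]}
s2 = {"t": 11, "color": [0.5, 0.5], "edge": [1.0, 0.0]}  # adjacent, same look
s3 = {"t": 90, "color": [0.0, 1.0], "edge": [0.0, 1.0]}  # far and different
print(mono_view_similarity(s1, s2) < mono_view_similarity(s1, s3))  # True
```

Applying a threshold on this score then yields the similar-shot groups within one view.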
(48) The principal component analysis-scale invariant feature transform (PCA-SIFT) algorithm is used to measure the correlation of the shots in different views. It has been confirmed that SIFT features are scale invariant and still achieve a good detection effect even if the rotation angle, image brightness or shooting angle is changed. Therefore, the algorithm is suitable for finding the similar shots in different views. The PCA-SIFT algorithm can effectively reduce the dimensionality while preserving the good characteristics of SIFT.
(49) We randomly select n frames from each of the similar shots in each view for PCA-SIFT detection. Through the feature vectors generated by the key points in each frame, we obtain the descriptor of each frame, and then the Euclidean distance d of the feature vectors is taken as the similarity measure between two frames to obtain the correlation degree of the shots under different views. Assuming that V is the selected shot in one view and S is the shot to be compared in another view, the distance between the two shots can be measured by:
d(V, S) = (1/n)·Σ_{i=1}^{n} min_{g∈S} ‖f_i − g‖  (12)
(51) where n is the number of frames in V, and g and f_i are frames from S and V respectively.
(52) With this representation, the correlation of the shots in different views can be transformed into the problem of finding a shot
S* = arg min_S d(V, S)  (13)
(54) Thus the correlation of shots in different views can be defined as follows:
(55)
(56) Finally, the similar shots (from both the same view and different views) are gathered to form similar shot clusters. The clustering of similar shots can be performed with the K-means clustering algorithm.
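A simplified sketch of the cross-view matching idea follows, assuming random 16-dimensional descriptors in place of real PCA-SIFT key-point descriptors, so that only the PCA reduction and the minimum-distance shot comparison are shown.

```python
import numpy as np

rng = np.random.default_rng(0)

def pca_project(X, n_components=2):
    """PCA dimensionality reduction (the 'PCA' half of PCA-SIFT): project
    descriptors onto the top principal components of the data."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

def shot_distance(V, S):
    """Mean over frames f in V of the minimum Euclidean distance to any
    frame g in S, computed on the reduced descriptors."""
    return float(np.mean([min(np.linalg.norm(f - g) for g in S) for f in V]))

# hypothetical 16-D frame descriptors: two views of the same event,
# plus one unrelated shot
base = rng.normal(size=(4, 16))
view_a = base + 0.01 * rng.normal(size=base.shape)   # same event, view A
view_b = base + 0.01 * rng.normal(size=base.shape)   # same event, view B
other = rng.normal(size=(4, 16)) + 5.0               # a different event

Z = pca_project(np.vstack([view_a, view_b, other]), n_components=3)
A, B, O = Z[:4], Z[4:8], Z[8:]
print(shot_distance(A, B) < shot_distance(A, O))  # True -> B matches A
```

The shot from the other view of the same event lies much closer in the reduced descriptor space than the unrelated shot, which is what the cross-view correlation exploits.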
(57) The Multi-Objective Optimization:
(58) An ideal video summarization is expected to present as much video shot information as possible with the shortest summary duration and the minimum number of shots. At the same time, the most representative shots of the different views in the multi-view video should be selected to be presented in the single video summarization. Therefore, we adopt a multi-objective optimization model to make the obtained multi-view video summarization as good as possible.
(59) Suppose a given decision vector x = (x_1, x_2, …, x_n) that meets:
x_i ∈ {0, 1}, i = 1, 2, …, n;  Σ_{i=1}^{n} D_i·x_i ≤ D_max;  Σ_{i=1}^{n} I_i·x_i ≥ I_min  (15)
(61) The multi-objective optimization function is given by
(62)
(63) where U = [1, 1, 1, 1] and V = diag(f_1(x), f_2(x), f_3(x), f_4(x)). f_1(x) = x_1 + x_2 + … + x_n represents the number of all shots reserved. f_2(x) = D_1·x_1 + D_2·x_2 + … + D_n·x_n represents the sum of the durations of the reserved shots, where D_i denotes the time length of the i-th shot. f_3(x) = I_1·x_1 + I_2·x_2 + … + I_n·x_n represents the importance of the shots, where the greater the value is, the more information the multi-view video summarization covers, and I_i denotes the importance of the i-th shot. The last objective can be denoted as f_4(x) = Σ_{i,j=1, i≠j}^{n} Sim(S_i, S_j)·x_i·x_j.
(64) In the constraint, φ(·) denotes normalization by a linear function. D_max and I_min denote the maximum length of the shot durations and the minimum significance of the shots, respectively, when the video summarization is generated, and the corresponding control coefficients relate the per-shot quantities D_i and I_i to D_max and I_min. The objective function is given by
u(F(x)) = max Σ_{i=1}^{4} ω_i·(UV)_i  (18)
(65) where F(x) = (φ(f_1(x)), φ(f_2(x)), φ(f_3(x)), φ(f_4(x)))^T, and ω_i is the weight control coefficient of the objective function, meeting ω_1 + ω_2 + ω_3 + ω_4 = 1 with non-negative ω_i.
(66) Then the multi-objective optimization above can be transformed into a 0-1 mixed integer programming problem, and we obtain the result of the optimization by solving for x*:
x* = arg max_x u(F(x))  (19)
(68) According to the results, each shot in a shot cluster is either reserved or abandoned, so as to obtain a multi-view video summarization with fewer and shorter shots that nevertheless retains the video information more fully.
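The 0-1 selection of shots in a cluster can be illustrated with a small brute-force solver over a weighted scalarization of the four objectives; the durations, importances, pairwise similarity values, weights, and the limits D_max = 8 and I_min = 0.8 are all hypothetical.

```python
from itertools import product

# hypothetical shot data for one cluster: durations D_i, importances I_i,
# and pairwise redundancy Sim(S_i, S_j)
D = [3.0, 4.0, 2.0, 5.0]
I = [0.9, 0.8, 0.2, 0.85]
Sim = {(0, 1): 0.9, (0, 3): 0.8, (1, 3): 0.85}   # shots 0, 1, 3 are redundant

def objective(x, w=(0.25, 0.25, 0.25, 0.25)):
    """Weighted scalarization of the four objectives: few shots, short
    duration, high importance, low redundancy (signs so larger is better)."""
    f1 = sum(x)                                   # number of shots kept
    f2 = sum(d * xi for d, xi in zip(D, x))       # total duration
    f3 = sum(i * xi for i, xi in zip(I, x))       # total importance
    f4 = sum(s * x[i] * x[j] for (i, j), s in Sim.items())  # redundancy
    return -w[0] * f1 - w[1] * f2 + w[2] * f3 - w[3] * f4

def best_summary(d_max=8.0, i_min=0.8):
    """Solve the small 0-1 program by enumeration (fine for small clusters)."""
    feasible = [x for x in product((0, 1), repeat=len(D))
                if sum(d * xi for d, xi in zip(D, x)) <= d_max
                and sum(i * xi for i, xi in zip(I, x)) >= i_min]
    return max(feasible, key=objective)

print(best_summary())  # (1, 0, 0, 0)
```

With these weights, a single short and important shot represents the whole redundant cluster; a real implementation would use a mixed-integer solver rather than enumeration.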
(69) Fault Detection and Diagnosis:
(70) In accordance with the ideal multi-view video summarization, we can build the monitoring indexes COMI and SPEI from characteristic variables extracted from the video summarization.
(71) In view of the monitoring video in different views, the video summarization is taken as the common information of the surveillance video via the global multi-objective optimization under the single view and different views. Then, the video sets are obtained by extracting the important shots from each single view respectively, which are regarded as the information sources of the special parts of the monitoring video. Because there is a dead zone for each view in the monitoring process, it is significant to use the special information of the single views to compensate the common information of the global video summarization. According to this idea, we construct the following monitoring indicators:
COMI = c^T·Σ_c^{−1}·c  (20)
SPEI_l = s_l^T·Σ_l^{−1}·s_l,  l = 1, 2, 3  (21)
(72) where c is a vector of common information feature variables, and s_l is a vector of special information feature variables at the l-th single view. Moreover, Σ_c and Σ_l are the covariance matrices of the modeling data for the multi-view common video information and the special video information respectively.
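The quadratic monitoring statistics of the form x^T·Σ^{−1}·x can be sketched as follows, with randomly generated stand-in modeling data in place of real summarization features.

```python
import numpy as np

rng = np.random.default_rng(1)

def monitoring_index(x, cov_inv):
    """Quadratic monitoring statistic x^T * Sigma^{-1} * x, used for both
    COMI (common information) and SPEI (special information)."""
    return float(x @ cov_inv @ x)

# hypothetical modeling data: 200 samples of 5 feature variables extracted
# from the (common-information) video summarization under normal operation
normal = rng.normal(size=(200, 5))
cov_inv = np.linalg.inv(np.cov(normal, rowvar=False))

comi_normal = monitoring_index(normal[0], cov_inv)
comi_fault = monitoring_index(normal[0] + 10.0, cov_inv)  # shifted sample
print(comi_fault > comi_normal)  # True -> a shifted sample raises the index
```

A sample whose features deviate strongly from the modeling data inflates the index, which is what drives the fault alarm once control limits are set.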
(73) Embodiment 3: experiment results.
(74) Magnesia is a kind of refractory material widely used in the metallurgical, glass, cement, household heating and chemical industries, among others. The EFMF is one of the most widely used production devices in the fused magnesia industry. In order to guarantee the normal operation of the EFMF, we must ensure its safety. If a fault that occurs during normal operation cannot be adjusted for or alarmed in time, the performance of the control system can be severely damaged, even leading to the breakdown of the entire system and enormous losses. Therefore, fault detection and diagnosis for the EFMF is imperative.
(75) In the monitoring process, we select one of the most important shots per second of video as a sample, and each shot consists of three frames of image. According to the monitoring indexes, we can obtain video summarization samples in the single views and the multi-view. As illustrated in
(76) After obtaining the above four video sets, we extract eight texture features, seven Hu moment invariants, the optical flow potential and the acceleration potential from the four video sets, seventeen variables in total. Since the three images within a shot change little, we take the mean of the three-frame texture features and Hu moment invariant features as the first 15 variables of the shot features. In addition, we define the 2-norm of the optical flow field between two images as the optical flow potential, and define the 2-norm of the acceleration field among three images as the acceleration potential. Similarly, we take the average optical flow potential over the images of two consecutive shots and the average acceleration potential over the images of three consecutive shots as the last two variables of the shot features. Therefore, in order to obtain the optical flow potential and the acceleration potential, it is necessary to obtain two pre-sampled shots before monitoring.
(77) First of all, we use the designed fault diagnosis device to detect the furnace eruption fault. In this process, 200 seconds of the normal running video are selected as the original modeling video data, 200 important shots are screened at the single views and the multi-view respectively, and 200 sets of process variables are then extracted as modeling samples corresponding to the four video sets. Next, 200 seconds of the original video, in which the furnace eruption fault occurs continuously, are selected to extract 200 sets of process variables as the test data sets at the single views and the multi-view respectively. Furthermore, the extracted modeling datasets are standardized and the collection of monitoring indexes is calculated according to equations (20) and (21). We then carry out probability statistics on the monitoring indexes of the modeling data and perform kernel density fitting of the density functions of the monitoring indexes. The probability density fitting curves of the monitoring indexes obtained from the multi-view method are shown in
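The kernel-density control limit described above might be computed as in the following sketch; the Gaussian kernel, the bandwidth, the 99% quantile level, and the chi-square stand-in samples are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def kde_pdf(samples, x, bandwidth=0.3):
    """Gaussian kernel density estimate of the monitoring-index density."""
    z = (x[:, None] - samples[None, :]) / bandwidth
    return np.exp(-0.5 * z ** 2).sum(axis=1) / (len(samples) * bandwidth * np.sqrt(2 * np.pi))

def control_limit(samples, alpha=0.99, grid_size=2000):
    """Upper control limit: the value below which `alpha` of the fitted
    probability mass lies (numerical CDF inversion on a grid)."""
    grid = np.linspace(samples.min() - 1, samples.max() + 1, grid_size)
    pdf = kde_pdf(samples, grid)
    cdf = np.cumsum(pdf); cdf /= cdf[-1]
    return float(grid[np.searchsorted(cdf, alpha)])

# hypothetical normal-operation monitoring-index samples
idx = rng.chisquare(df=5, size=200)
limit = control_limit(idx)
print(idx.mean() < limit)            # True: normal data sit below the limit
print((idx > limit).mean() < 0.05)   # True: few false alarms on modeling data
```

Test samples whose monitoring index exceeds this limit are flagged as faulty.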
(78) Based on the control limits obtained from the probability density analysis of the modeling data, the detection results of the test data are shown in
(80) For monitoring of the furnace leaking fault, the three cameras have little overlapping mutual information in the furnace monitoring process, so we do not consider the common part of the video summarization information and only use the respective single views to monitor the furnace leaking fault at the electrode. In the process of identifying the fault, we assume that the furnace leaking fault occurs when a large bright incandescent yellow spot appears on the furnace wall. Since the acquisition of the furnace leaking fault control limit and the analysis of the variable fault contribution plots are consistent with the analysis of the furnace eruption fault, only the simulation detection plot of the furnace leaking fault is given here, shown in
(81) In the above simulation experiments, the furnace eruption fault and the furnace leaking fault do not happen at the same time; however, they may unfortunately occur simultaneously in actual production. Therefore, it is necessary to detect and anticipate the occurrence of faults in a timely and effective manner. The experimental results demonstrate that the designed fault diagnosis device has good fault detection performance, and because the introduced optical flow potential and acceleration potential take the previous variable information into account, they are helpful for predicting the occurrence of a fault.
(82) The fault diagnosis device based on the multi-view method has shown good performance in process monitoring using industrial video information. At the same time, the feature variables extracted by the multi-view video summarization method can effectively compress the raw video data and address the problem that video information is complicated and difficult to process, which resolves the difficulty of using video information for online monitoring. Moreover, the constructed optical flow potential and acceleration potential provide the possibility of predicting the occurrence of faults. All in all, the designed fault diagnosis device can effectively solve the detection and diagnosis problems of the furnace eruption fault and the furnace leaking fault in the smelting process.
(83) One skilled in the art will understand that the embodiment of the present invention as shown in the drawings and described above is exemplary only and not intended to be limiting.
(84) It will thus be seen that the objects of the present invention have been fully and effectively accomplished. Its embodiments have been shown and described for the purpose of illustrating the functional and structural principles of the present invention and are subject to change without departure from such principles. Therefore, this invention includes all modifications encompassed within the spirit and scope of the following claims.