Method For Extracting Dam Emergency Event Based On Dual Attention Mechanism

Abstract

A method for extracting a dam emergency event based on a dual attention mechanism is provided. The method includes: performing data preprocessing, building a dependency graph, building a dual attention network, and filling a document-level argument. The data preprocessing includes labeling a dam emergency corpus and encoding sentences. Building the dependency graph assists the model in mining syntactic relations based on dependencies. Building the dual attention network includes weighting and fusing an attention network with a network based on a graph transformer network (GTN) to capture key semantic information in the sentence. Filling the document-level argument includes detecting key sentences and ranking candidate sentences by semantic similarity. By introducing dependencies and overcoming the long-range dependency problem through the dual attention mechanism, the method achieves high identification accuracy and substantially reduces labor costs.

Claims

1. A method for extracting a dam emergency event based on a dual attention mechanism, comprising mining a syntactic relation based on a graph transformer attention network (GTAN) and an attention network and extracting and filling an event argument role based on a dam emergency corpus, wherein the method comprises the following steps: (1) performing data preprocessing: labeling the dam emergency corpus and encoding sentences and a document with information on the dam emergency event; (2) building a dependency graph: introducing a dependency and building the dependency graph based on a sentence structure and a semantic structure to identify and classify all parameters of the dam emergency event; (3) building a dual attention network: generating a new dependency arc based on the GTAN, and aggregating node information to capture a long-range dependency and a potential interaction; and introducing the attention network, fusing features extracted by a GTAN layer and an attention network layer according to a set ratio, capturing key semantic information in the sentence, and extracting a sentence-level event argument; and (4) filling a document-level argument: detecting a sentence with a key event in a dam emergency document and filling a missing part of the key event with an argument role having a highest similarity from a surrounding sentence through a twin neural network.

2. The method for extracting the dam emergency event based on the dual attention mechanism according to claim 1, wherein the dam emergency corpus comprises special inspection reports and daily inspection reports of a dam over the years.

3. The method for extracting the dam emergency event based on the dual attention mechanism according to claim 1, wherein in step (1), the step of performing the data preprocessing specifically comprises: labeling data of a special inspection report and a daily inspection report of a dam in a begin-inside-outside (BIO) mode; taking a 312-dimensional vector of a last layer of an ALBERT model as a word embedding vector, and concatenating an event type embedding vector, an entity type embedding vector, and a part-of-speech tag embedding vector; and mining the concatenated embedding vectors through a bidirectional long short-term memory (BiLSTM) network to acquire hidden vectors H=h.sub.1, . . . , h.sub.n.

4. The method for extracting the dam emergency event based on the dual attention mechanism according to claim 1, wherein in step (2), the step of building the dependency graph specifically comprises: building an adjacency matrix A.sup.d of a dependency tree and a dependency label score matrix Ã.sup.dl according to a word relation in the dam emergency corpus; calculating a score between hidden vectors h.sub.i and h.sub.j acquired in step (1) to acquire a semantic score matrix A.sup.s; and concatenating A.sup.d, Ã.sup.dl, and A.sup.s to acquire a dependency graph matrix A=[A.sup.d, Ã.sup.dl, A.sup.s].

5. The method for extracting the dam emergency event based on the dual attention mechanism according to claim 1, wherein in step (3), the step of building the dual attention network specifically comprises: proposing the GTAN, replacing a graph convolutional network (GCN) by a graph attention network (GAN), and performing a reasonable weight distribution; applying, by the GTAN, a 1×1 convolution to an adjacency matrix set A through a graph transformer layer, and generating a new meta-path graph A.sup.l through a matrix multiplication; applying, by a graph attention layer, the GAN to each channel of the meta-path graph A.sup.l, and concatenating multiple node representations as a Z vector; calculating a weight matrix α.sub.a of the attention network layer, multiplying α.sub.a pointwise by a hidden vector H to generate a vector {tilde over (H)}, and connecting, by a hyperparameter λ, the Z vector generated by the GTAN layer and the {tilde over (H)} vector generated by the attention network layer to acquire a fused vector {tilde over (W)}:
{tilde over (W)}=σ(λ·Z+(1−λ)·{tilde over (H)}), wherein σ is a sigmoid function.

6. The method for extracting the dam emergency event based on the dual attention mechanism according to claim 1, wherein in step (4), the step of filling the document-level argument specifically comprises: concatenating four embedding vectors of a special inspection report and a daily inspection report of a dam, namely, an argument label, an entity type, sentence information, and document information; building a text convolutional neural network (textCNN), taking the concatenated vectors as an input vector, detecting a key sentence regarding an event, and determining the key event; and calculating, by the twin neural network based on a Manhattan LSTM network, a semantic similarity between the sentences, and filling the argument role.

7. The method for extracting the dam emergency event based on the dual attention mechanism according to claim 1, wherein a dam emergency refers to a working state of a dam in case of a natural disaster.

8. A system for extracting the dam emergency event based on the dual attention mechanism, comprising: (1) a data preprocessing module configured for labeling a dam emergency corpus and encoding sentences and a document with information on the dam emergency event; (2) a dependency graph building module configured for introducing a dependency and building a dependency graph based on a sentence structure and a semantic structure to identify and classify all parameters of the dam emergency event; (3) a dual attention network building module configured for generating a new dependency arc based on a GTAN, and aggregating node information to capture a long-range dependency and a potential interaction; and introducing an attention network, fusing features extracted by a GTAN layer and an attention network layer according to a set ratio, capturing key semantic information in the sentence, and extracting a sentence-level event argument; and (4) a document-level argument filling module configured for detecting a sentence with a key event in a dam emergency document and filling a missing part of the key event with an argument role having a highest similarity from a surrounding sentence through a twin neural network.

9. A computer device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor executes the computer program to implement the method for extracting the dam emergency event based on the dual attention mechanism according to claim 1.

10. A computer-readable storage medium, wherein the computer-readable storage medium stores a computer program for implementing the method for extracting the dam emergency event based on the dual attention mechanism according to claim 1.

11. The computer device according to claim 9, wherein in the method, the dam emergency corpus comprises special inspection reports and daily inspection reports of a dam over the years.

12. The computer device according to claim 9, wherein in step (1) of the method, the step of performing the data preprocessing specifically comprises: labeling data of a special inspection report and a daily inspection report of a dam in a begin-inside-outside (BIO) mode; taking a 312-dimensional vector of a last layer of an ALBERT model as a word embedding vector, and concatenating an event type embedding vector, an entity type embedding vector, and a part-of-speech tag embedding vector; and mining the concatenated embedding vectors through a bidirectional long short-term memory (BiLSTM) network to acquire hidden vectors H=h.sub.1, . . . , h.sub.n.

13. The computer device according to claim 9, wherein in step (2) of the method, the step of building the dependency graph specifically comprises: building an adjacency matrix A.sup.d of a dependency tree and a dependency label score matrix Ã.sup.dl according to a word relation in the dam emergency corpus; calculating a score between hidden vectors h.sub.i and h.sub.j acquired in step (1) to acquire a semantic score matrix A.sup.s; and concatenating A.sup.d, Ã.sup.dl, and A.sup.s to acquire a dependency graph matrix A=[A.sup.d, Ã.sup.dl, A.sup.s].

14. The computer device according to claim 9, wherein in step (3) of the method, the step of building the dual attention network specifically comprises: proposing the GTAN, replacing a graph convolutional network (GCN) by a graph attention network (GAN), and performing a reasonable weight distribution; applying, by the GTAN, a 1×1 convolution to an adjacency matrix set A through a graph transformer layer, and generating a new meta-path graph A.sup.l through a matrix multiplication; applying, by a graph attention layer, the GAN to each channel of the meta-path graph A.sup.l, and concatenating multiple node representations as a Z vector; calculating a weight matrix α.sub.a of the attention network layer, multiplying α.sub.a pointwise by a hidden vector H to generate a vector {tilde over (H)}, and connecting, by a hyperparameter λ, the Z vector generated by the GTAN layer and the {tilde over (H)} vector generated by the attention network layer to acquire a fused vector {tilde over (W)}:
{tilde over (W)}=σ(λ·Z+(1−λ)·{tilde over (H)}), wherein σ is a sigmoid function.

15. The computer device according to claim 9, wherein in step (4) of the method, the step of filling the document-level argument specifically comprises: concatenating four embedding vectors of a special inspection report and a daily inspection report of a dam, namely, an argument label, an entity type, sentence information, and document information; building a text convolutional neural network (textCNN), taking the concatenated vectors as an input vector, detecting a key sentence regarding an event, and determining the key event; and calculating, by the twin neural network based on a Manhattan LSTM network, a semantic similarity between the sentences, and filling the argument role.

16. The computer device according to claim 9, wherein in the method, a dam emergency refers to a working state of a dam in case of a natural disaster.

17. The computer-readable storage medium according to claim 10, wherein in the method, the dam emergency corpus comprises special inspection reports and daily inspection reports of a dam over the years.

18. The computer-readable storage medium according to claim 10, wherein in step (1) of the method, the step of performing the data preprocessing specifically comprises: labeling data of a special inspection report and a daily inspection report of a dam in a begin-inside-outside (BIO) mode; taking a 312-dimensional vector of a last layer of an ALBERT model as a word embedding vector, and concatenating an event type embedding vector, an entity type embedding vector, and a part-of-speech tag embedding vector; and mining the concatenated embedding vectors through a bidirectional long short-term memory (BiLSTM) network to acquire hidden vectors H=h.sub.1, . . . , h.sub.n.

19. The computer-readable storage medium according to claim 10, wherein in step (2) of the method, the step of building the dependency graph specifically comprises: building an adjacency matrix A.sup.d of a dependency tree and a dependency label score matrix Ã.sup.dl according to a word relation in the dam emergency corpus; calculating a score between hidden vectors h.sub.i and h.sub.j acquired in step (1) to acquire a semantic score matrix A.sup.s; and concatenating A.sup.d, Ã.sup.dl, and A.sup.s to acquire a dependency graph matrix A=[A.sup.d, Ã.sup.dl, A.sup.s].

20. The computer-readable storage medium according to claim 10, wherein in step (3) of the method, the step of building the dual attention network specifically comprises: proposing the GTAN, replacing a graph convolutional network (GCN) by a graph attention network (GAN), and performing a reasonable weight distribution; applying, by the GTAN, a 1×1 convolution to an adjacency matrix set A through a graph transformer layer, and generating a new meta-path graph A.sup.l through a matrix multiplication; applying, by a graph attention layer, the GAN to each channel of the meta-path graph A.sup.l, and concatenating multiple node representations as a Z vector; calculating a weight matrix α.sub.a of the attention network layer, multiplying α.sub.a pointwise by a hidden vector H to generate a vector {tilde over (H)}, and connecting, by a hyperparameter λ, the Z vector generated by the GTAN layer and the {tilde over (H)} vector generated by the attention network layer to acquire a fused vector {tilde over (W)}:
{tilde over (W)}=σ(λ·Z+(1−λ)·{tilde over (H)}), wherein σ is a sigmoid function.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

[0027] FIG. 1 is a flowchart of a method according to an embodiment of the present disclosure.

[0028] FIG. 2 is a flowchart of a text convolutional neural network (textCNN) according to a specific embodiment of the present disclosure.

[0029] FIG. 3 is a flowchart of a twin neural network based on a Manhattan long short-term memory (LSTM) network according to a specific embodiment of the present disclosure.

DETAILED DESCRIPTION OF THE EMBODIMENTS

[0030] The present disclosure will be further explained in conjunction with the specific embodiments. It should be understood that these embodiments are intended to illustrate the present disclosure rather than limit the scope of the present disclosure. Various equivalent modifications to the present disclosure made by those skilled in the art after reading the specification should fall within the scope defined by the appended claims.

[0031] As shown in FIG. 1, a method for extracting a dam emergency event based on a dual attention mechanism specifically includes the following steps:

[0032] (1) Data preprocessing is performed: A dam emergency corpus is labeled, and sentences and a document with information on a dam emergency event are encoded based on four embedding vectors.

[0033] (1.1) The data of a special inspection report and a daily inspection report of a dam are labeled in a begin-inside-outside (BIO) mode, that is, each element is labeled as B-X, I-X, or O. B-X denotes the beginning part of a key argument of type X, I-X denotes an intermediate part of the key argument of type X, and O denotes all other words in the sentence outside the key arguments.

[0034] For example, the Chinese sentence "On Aug. 13, 2018, an M5.0 earthquake occurred in Tonghai County, Yuxi City, Yunnan, with a focal depth of 7 kilometers, and the straight-line distance from the epicenter of the earthquake to the dam of the Manwan Hydropower Station is about 231 kilometers." is labeled in the BIO mode (token/label pairs) as follows: On/O August/B-Time 13/I-Time ,/O 2018/I-Time ,/O an/O M5.0/B-Magnitude earthquake/O occurred/O in/O Tonghai/B-Place County/I-Place ,/O Yuxi/I-Place City/I-Place ,/O Yunnan/I-Place ,/O with/O a/O focal/O depth/O of/O 7/B-Depth kilometers/I-Depth ,/O and/O the/O straight-line/O distance/O from/O the/O epicenter/O of/O the/O earthquake/O to/O the/O dam/B-Place of/I-Place the/I-Place Manwan/I-Place Hydropower/I-Place Station/I-Place is/O about/O 231/B-Range kilometers/I-Range ./O
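The labeling scheme above can be sketched in code. This is a hypothetical illustrative helper, not the patented pipeline; the tokenization and span indices below are toy assumptions:

```python
# Illustrative BIO labeler: the first token of an argument span of type X
# gets B-X, the remaining tokens get I-X, and all other tokens get O.
def bio_label(tokens, spans):
    """tokens: list of strings; spans: list of (start, end, type) tuples
    with end exclusive, indexing into `tokens`."""
    labels = ["O"] * len(tokens)
    for start, end, arg_type in spans:
        labels[start] = "B-" + arg_type
        for i in range(start + 1, end):
            labels[i] = "I-" + arg_type
    return labels

tokens = ["On", "August", "13", ",", "2018", "an", "M5.0", "earthquake"]
# Toy spans: "August 13" tagged as Time, "M5.0" as Magnitude.
print(bio_label(tokens, [(1, 3, "Time"), (6, 7, "Magnitude")]))
# → ['O', 'B-Time', 'I-Time', 'O', 'O', 'O', 'B-Magnitude', 'O']
```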

[0035] (1.2) A sentence of length n is represented as W=w.sub.1, w.sub.2, . . . , w.sub.n.

[0036] (1.3) The 312-dimensional vector of the last layer of an ALBERT model is taken as the word embedding vector, while the event type embedding vector, the entity type embedding vector, and the part-of-speech tag embedding vector are generated through trainable lookup tables.

[0037] (1.4) The word embedding vector, the event type embedding vector, the entity type embedding vector, and the part-of-speech tag embedding vector are concatenated. The concatenated embedding vectors are mined through BiLSTM to capture important contextual information, yielding a sequence of hidden vectors H=h.sub.1, . . . , h.sub.n used as the representation in the next step.

[0038] (2) A dependency graph is built: A dependency is introduced, and a dependency graph is built based on a sentence structure and a semantic structure to identify and classify all parameters of the dam emergency event.

[0039] (2.1) An adjacency matrix A.sup.d of a dependency tree is taken as one of the syntactic structures for event extraction, where A.sup.d is an N×N binary matrix. If the words w.sub.i and w.sub.j are linked in the dependency tree, A.sup.d(i,j) has a value of 1; otherwise, it has a value of 0.

[0040] (2.2) A matrix A.sup.dl is initialized according to the dependency labels. If there is a dependency edge with label r between the words w.sub.i and w.sub.j, A.sup.dl(i,j) is initialized with the P-dimensional embedding vector of r, found from a trainable embedding lookup table; otherwise, A.sup.dl(i,j) is initialized with a P-dimensional all-zero vector.

[0041] (2.3) The dependency label matrix A.sup.dl is transformed to a dependency label score matrix Ã.sup.dl:

[00001] Ã.sup.dl(i,j)=exp(U·A.sup.dl(i,j))/Σ.sub.v=1.sup.N exp(U·A.sup.dl(i,v))

[0042] where U is a trainable weight matrix.
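As an illustration, the row-wise normalization in step (2.3) is a softmax over per-edge scores. In this minimal sketch, plain scalar scores stand in for the trainable map U·A.sup.dl(i,j):

```python
import math

# Row-wise softmax: each row of raw edge scores is normalized so that
# the scores in the row are positive and sum to 1.
def row_softmax(scores):
    out = []
    for row in scores:
        exps = [math.exp(s) for s in row]
        total = sum(exps)
        out.append([e / total for e in exps])
    return out

# Toy 3x3 score matrix; row 3 is uniform, so it normalizes to 1/3 each.
scores = [[1.0, 0.0, 0.0], [0.0, 2.0, 0.0], [0.0, 0.0, 0.0]]
norm = row_softmax(scores)
print(norm)
```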

[0043] (2.4) A score between the hidden vectors h.sub.i and h.sub.j is calculated to acquire a semantic score matrix A.sup.s:


k.sub.i=U.sub.k·h.sub.i, q.sub.i=U.sub.q·h.sub.i

[00002] A.sup.s(i,j)=exp(k.sub.i·q.sub.j)/Σ.sub.v=1.sup.N exp(k.sub.i·q.sub.v)

[0044] where U.sub.k and U.sub.q are trainable weight matrices.
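A minimal sketch of step (2.4), with toy dimensions and identity matrices standing in for the trained projections U.sub.k and U.sub.q:

```python
import math

def matvec(M, v):
    # Multiply matrix M (nested lists) by vector v.
    return [sum(m * x for m, x in zip(row, v)) for row in M]

# Project hidden vectors into key/query spaces, then softmax the
# dot products row-wise to obtain the semantic score matrix A^s.
def semantic_scores(H, U_k, U_q):
    K = [matvec(U_k, h) for h in H]
    Q = [matvec(U_q, h) for h in H]
    A = []
    for k in K:
        raw = [sum(a * b for a, b in zip(k, q)) for q in Q]
        exps = [math.exp(r) for r in raw]
        total = sum(exps)
        A.append([e / total for e in exps])
    return A

H = [[1.0, 0.0], [0.0, 1.0]]        # two toy hidden vectors
U_k = [[1.0, 0.0], [0.0, 1.0]]      # identity projections for the toy case
U_q = [[1.0, 0.0], [0.0, 1.0]]
A_s = semantic_scores(H, U_k, U_q)
print(A_s)  # each row sums to 1; the diagonal entries dominate
```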

[0045] (2.5) A dependency graph matrix A=[A.sup.d, Ã.sup.dl, A.sup.s] is acquired, where A.sup.d is the adjacency matrix of the dependency tree, Ã.sup.dl is the dependency label score matrix, and A.sup.s is the semantic score matrix.

[0046] (3) The dual attention network is built: A new dependency arc is generated based on the GTAN, and node information is aggregated to capture a long-range dependency and a potential interaction. The attention network is introduced. Features extracted by a GTAN layer and an attention network layer are fused according to a set ratio. Key semantic information in the sentence is captured, and a sentence-level event argument is extracted.

[0047] (3.1) The GTAN is proposed: the graph convolutional network (GCN) is replaced by a graph attention network (GAN) so that weights are distributed reasonably. The vector generated by the attention layer passes through a Dropout layer to prevent the model from overfitting. The GTAN is an improvement of the graph transformer network (GTN) in which the GCN of the GTN is replaced by the GAN. Assigning higher weights to the trigger and to the arcs of key arguments in the dependency is reasonable and fully exploits the dependency information.

[0048] (3.2) The GTAN is formed by two parts: a graph transformer layer and a graph attention layer. The graph transformer layer applies a 1×1 convolution to the adjacency matrix set A, softly selects two intermediate adjacency matrices Q.sub.1 and Q.sub.2 from the convolved result, and multiplies Q.sub.1 and Q.sub.2 to generate a new meta-path graph A.sup.l.

[0049] (3.3) A graph attention network (GAN) is applied to each channel of the meta-path graph A.sup.l by the graph attention layer, and multiple node representations are concatenated as Z:

[00003] Z=∥.sub.i=1.sup.C σ({tilde over (D)}.sub.i.sup.−1Ã.sub.i.sup.(l)XV)

[0050] where ∥ is the concatenation (join) operator, C is the number of channels, Ã.sub.i.sup.(l) (Ã.sub.i.sup.(l)=A.sub.i.sup.(l)+I) is the adjacency matrix of the i-th channel of A.sup.l, {tilde over (D)}.sub.i is the degree matrix of Ã.sub.i.sup.(l), V is a trainable weight matrix shared across channels, X is a feature matrix, and I is an identity matrix.
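The per-channel aggregation can be illustrated with small dense matrices. This toy sketch (all values chosen for illustration, with plain lists in place of learned tensors) applies the normalized propagation D̃⁻¹ÃXV per channel, squashes with a sigmoid, and concatenates the channel outputs feature-wise:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def channel_output(A_tilde, X, V):
    # D-tilde is the diagonal degree matrix of A-tilde; dividing each
    # row by its degree implements the D̃⁻¹ normalization.
    deg = [sum(row) for row in A_tilde]
    norm = [[a / d for a in row] for row, d in zip(A_tilde, deg)]
    return [[sigmoid(v) for v in row] for row in matmul(matmul(norm, X), V)]

def aggregate(channels, X, V):
    outs = [channel_output(A, X, V) for A in channels]
    # Concatenate the per-channel node representations feature-wise.
    return [sum((out[i] for out in outs), []) for i in range(len(X))]

A1 = [[1.0, 1.0], [1.0, 1.0]]   # Ã already includes self-loops (A + I)
X = [[1.0], [3.0]]              # 2 nodes, 1 feature each
V = [[1.0]]                     # trivial shared weight matrix
Z = aggregate([A1, A1], X, V)
print(Z)  # each node gets sigmoid(2.0) from each of the two channels
```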

[0051] (3.4) A weight matrix α.sub.a.sup.k of the attention network layer is calculated according to the following formula:


α.sub.a.sup.k=softmax(tanh(W.sub.a.sup.T·h.sub.k+b.sub.k))

[0052] where h.sub.k is a k-th vector in the hidden vector H generated through BiLSTM, W.sub.a is a trainable weight matrix, and b.sub.k is a bias.

[0053] (3.5) The weight matrix α.sub.a of the attention network layer is multiplied pointwise by the hidden vector H to generate a vector {tilde over (H)}, and the Z vector generated by the GTAN layer and the {tilde over (H)} vector generated by the attention network layer are combined by a hyperparameter λ to acquire a fused vector {tilde over (W)}:


{tilde over (W)}=σ(λ·Z+(1−λ)·{tilde over (H)})

[0054] where σ is a sigmoid function.
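The fusion formula is straightforward to sketch; λ and the toy vectors below are illustrative values only:

```python
import math

# Fuse the GTAN output Z and the attention output H-tilde elementwise
# with hyperparameter lam, then squash with a sigmoid.
def fuse(z, h_tilde, lam):
    return [1.0 / (1.0 + math.exp(-(lam * a + (1 - lam) * b)))
            for a, b in zip(z, h_tilde)]

z = [2.0, -2.0]
h_tilde = [0.0, 0.0]
w = fuse(z, h_tilde, 0.5)        # lam = 0.5 weights both sources equally
print([round(v, 4) for v in w])  # → [0.7311, 0.2689], i.e. sigmoid(±1.0)
```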

[0055] (3.6) The fused feature vector {tilde over (W)} is fed into a conditional random field (CRF), which predicts the label of each character.

[0056] (4) A document-level argument is filled: A sentence with a key event in the dam emergency document is detected, and the missing part of the key event is filled with the argument role having the highest similarity from a surrounding sentence through a twin neural network.

[0057] (4.1) The initial vector of the event argument label is set as a one-hot vector formed by 1s and 0s, where 1 denotes a key argument position and 0 denotes other positions. The randomly generated initial vector is trained into a 128-dimensional embedding vector by Word2vec.

[0058] (4.2) An entity type is generated by looking up a randomly initialized embedding table, and the embedding vector is set to be 128-dimensional.

[0059] (4.3) The sentence information and document information are transformed into 312-dimensional embedding vectors respectively through ALBERT.

[0060] (4.4) The four embedding vectors (i.e., argument label, entity type, sentence information, and document information) are concatenated to generate an 880-dimensional new vector.
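A quick sanity check of the stated concatenation in steps (4.1)–(4.4): 128 + 128 + 312 + 312 = 880.

```python
# Dimensions of the four embedding vectors given in the text.
dims = {"argument_label": 128, "entity_type": 128,
        "sentence_info": 312, "document_info": 312}
total = sum(dims.values())
print(total)  # → 880, the dimension of the concatenated input vector
```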

[0061] (4.5) A text convolutional neural network (textCNN) is established, as shown in FIG. 2. The 880-dimensional new vector acquired in step (4.4) is taken as the input vector to detect key sentences regarding an event and determine the key event. The textCNN is composed of four parts: an embedding layer, a convolutional layer, a pooling layer, and a fully connected layer. The embedding layer projects the input 880-dimensional vector through a hidden layer into a low-dimensional space with a dimension of 128 to help encode semantic features. The convolutional layer has convolution kernels of three sizes, namely 3, 4, and 5, with 128 kernels of each size, and each kernel has a width consistent with the dimension of the feature vector. By moving the convolution kernel downward, local correlations between words are extracted. The pooling layer represents each feature by extracting the maximum value of each feature vector. The pooled values are concatenated to generate a final feature vector, and finally, the fully connected layer determines whether the sentence includes a key event.
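The max-over-time pooling at the heart of the textCNN can be sketched in miniature. The fixed kernels below are toy stand-ins for the 128 learned kernels of each size; a 1-d feature sequence stands in for the projected embeddings:

```python
# Slide a kernel over the token positions and keep the maximum response
# per kernel (max-over-time pooling), as the textCNN pooling layer does.
def conv1d_max(features, kernel):
    """features: per-token scalars; kernel: list of weights."""
    k = len(kernel)
    responses = [sum(w * f for w, f in zip(kernel, features[i:i + k]))
                 for i in range(len(features) - k + 1)]
    return max(responses)

features = [0.1, 0.9, 0.3, 0.7, 0.2]
# Two toy size-3 kernels: a "sum" filter and a "difference" filter.
pooled = [conv1d_max(features, kern)
          for kern in ([1.0, 1.0, 1.0], [1.0, 0.0, -1.0])]
print(pooled)  # the pooled values would then be concatenated
```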

[0062] (4.6) A <key sentence, adjacent sentence> pair is processed by the twin neural network based on a Manhattan LSTM network shown in FIG. 3 to represent the sentences in a common space and infer their potential semantic similarity. The final hidden state of the LSTM network is taken as the vector representation of each of the two sentences, the similarity of the two sentences is measured by a Manhattan distance, and the similarities between the key sentence and its surrounding sentences are calculated and sorted from high to low. The corresponding missing argument is then filled with the argument role from the adjacent sentence with the highest similarity.
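The similarity scoring and high-to-low sorting can be sketched as follows. The encodings are toy vectors standing in for the LSTM hidden states, and exp(−‖h₁−h₂‖₁) is the standard Manhattan-LSTM similarity form (an assumption about the exact formula used here):

```python
import math

# Manhattan-LSTM style similarity: identical encodings score 1,
# distant encodings decay toward 0.
def manhattan_similarity(h1, h2):
    return math.exp(-sum(abs(a - b) for a, b in zip(h1, h2)))

key = [0.2, 0.8, 0.5]                      # encoding of the key sentence
neighbors = {"s1": [0.2, 0.8, 0.5],        # toy surrounding-sentence encodings
             "s2": [0.9, 0.1, 0.4]}
# Sort surrounding sentences from the highest similarity to the lowest.
ranked = sorted(neighbors,
                key=lambda s: manhattan_similarity(key, neighbors[s]),
                reverse=True)
print(ranked)  # → ['s1', 's2']: the identical encoding ranks first
```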

[0063] To verify the validity of the model of the present disclosure, an experiment was carried out based on the dam emergency corpus. A sample case from the corpus is shown in Table 1, and the event types and corresponding event arguments are shown in Table 2. The evaluation criteria used in the experiment are P, R, and F.sub.1, where P denotes the precision rate, R denotes the recall rate, and F.sub.1 is a comprehensive criterion for evaluating general classification problems. The event extraction models involved in the comparison experiment include a dynamic multi-pooling convolutional neural network (DMCNN) model, a convolutional bidirectional long short-term memory (C-BiLSTM) model, a joint recurrent neural network (JRNN) model, a hierarchical modular event argument extraction (HMEAE) model, and a joint multiple Chinese event extractor (JMCEE) model. The DMCNN model uses dynamic multi-pooling layers for event extraction based on event triggers and arguments. The C-BiLSTM model performs Chinese event extraction from the perspective of the character-level sequence labeling paradigm. The JRNN model performs joint event extraction via a recurrent neural network. The HMEAE model designs a neural module network at the concept level for each basic unit and forms a role-oriented module network through logical operations to classify specific argument roles. The JMCEE model jointly predicts event triggers and event arguments based on a shared feature representation of a pre-trained language model.

TABLE-US-00001 TABLE 1 Dam dataset case

Case: On Aug. 13, 2018, an M5.0 earthquake occurred in Tonghai County, Yuxi City, Yunnan, with a focal depth of 7 kilometers, and the straight-line distance from the epicenter of the earthquake to the dam of the Manwan Hydropower Station is about 231 kilometers, so the Manwan production area was slightly shaken. In order to grasp the impact of the earthquake on the hydraulic structures of the Manwan Hydropower Station, the Manwan Hydropower Station carried out a comprehensive special inspection in a timely manner.

TABLE-US-00002 TABLE 2 Event types and corresponding event arguments in the dam dataset

Event Type                        Argument Role
Earthquake                        Time, place, magnitude, focal depth, area of influence
Rainstorm                         Start time, end time, place, rainfall, warning level
Flood discharge                   Start time, end time, place, cause, monitoring means, monitoring effect
Pre-flood safety inspection       Start time, end time, place, cause, monitoring means
Comprehensive special inspection  Time, place, cause, monitoring means, monitoring effect
Daily maintenance                 Start time, end time, place, maintenance location, maintenance level, measure, result analysis
Daily inspection                  Time, place, inspection location, measure, result analysis

[0064] Table 3 shows the comparison results between the model of the embodiment of the present disclosure and the five models DMCNN, C-BiLSTM, JRNN, HMEAE, and JMCEE. The results show that the model of the embodiment of the present disclosure makes full use of the syntactic relation and semantic structure and has a better event extraction effect based on the dam emergency corpus than the five models.

TABLE-US-00003 TABLE 3 Comparison of experimental results of different event extraction models

Models                    P      R      F1
DMCNN                     57.64  54.37  55.96
C-BiLSTM                  59.13  57.44  58.27
JRNN                      60.84  56.95  58.83
HMEAE                     62.49  54.81  58.40
JMCEE                     65.37  57.86  61.39
The present disclosure    83.45  62.27  71.32

[0065] A system for extracting a dam emergency event based on a dual attention mechanism includes:

[0066] (1) a data preprocessing module configured for labeling a dam emergency corpus, and encoding sentences and a document with information on a dam emergency event;

[0067] (2) a dependency graph building module configured for introducing a dependency and building a dependency graph based on a sentence structure and a semantic structure to identify and classify all parameters of the dam emergency event;

[0068] (3) a dual attention network building module configured for generating a new dependency arc based on a GTAN, and aggregating node information to capture a long-range dependency and a potential interaction; and introducing an attention network, fusing features extracted by a GTAN layer and an attention network layer according to a set ratio, capturing key semantic information in the sentence, and extracting a sentence-level event argument; and

[0069] (4) a document-level argument filling module configured for detecting a sentence with a key event in the dam emergency document and filling the missing part of the key event with the argument role having the highest similarity from a surrounding sentence through a twin neural network.

[0070] The specific implementation of the system is the same as that of the method.

[0071] Obviously, a person skilled in the art should know that the steps or modules of the embodiments of the present disclosure may be implemented by a universal computing apparatus. These modules or steps may be concentrated on a single computing apparatus or distributed on a network consisting of a plurality of computing apparatuses and may optionally be implemented by programmable code that can be executed by the computing apparatuses. These modules or steps may be stored in a storage apparatus for execution by the computing apparatuses and may be implemented, in some cases, by performing the shown or described steps in sequences different from those described herein, or making the steps into integrated circuit modules respectively, or making multiple modules or steps therein into a single integrated circuit module. In this case, the embodiments of the present disclosure are not limited to any specific combination of hardware and software.