ASSISTANCE APPARATUS FOR LOCALIZING ERRORS IN A MONITORED TECHNICAL SYSTEM
20220170976 · 2022-06-02
Inventors
- Martin Ringsquandl (Raubling, DE)
- Mitchell Joblin (München, DE)
- Dagmar Beyer (München, DE)
- Sebastian Weber (Nürnberg, DE)
- Sylwia Henselmeyer (Erlangen, DE)
- Marcel Hildebrandt (München, DE)
CPC classification
H02J2300/10
ELECTRICITY
G01R31/085
PHYSICS
H02J3/0012
ELECTRICITY
G01R31/3336
PHYSICS
Y02E40/70
GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
Y04S10/12
GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
H02J2203/20
ELECTRICITY
H02J2203/10
ELECTRICITY
Y04S10/30
GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
International classification
G01R31/08
PHYSICS
G01R31/333
PHYSICS
Abstract
Provided is an assistance apparatus for localizing errors in a monitored technical system including devices and/or transmission lines, comprising at least one processor configured to obtain values of actual attributes of the devices and/or of the transmission lines, determine an error probability for each device and/or transmission line by processing a graph neural network with the obtained actual values of attributes as input, wherein the graph neural network is trained by training attributes assigned to an attributed graph representation of the technical system, and output an indication for such devices and/or transmission lines whose error probability is higher than a predefined threshold.
Claims
1. An assistance apparatus for localizing errors in a monitored technical system including devices and/or transmission lines, comprising: at least one processor configured to: obtain values of actual attributes of the devices and/or of the transmission lines; determine an error probability for each device and/or transmission line by processing a graph neural network with the values of actual attributes as input, wherein the graph neural network is trained by training attributes assigned to an attributed graph representation of the technical system; and output an indication for the devices and/or transmission lines, whose error probability is higher than a predefined threshold.
2. The assistance apparatus according to claim 1, wherein the attributed graph representation represents a topology of the technical system, wherein each device and each transmission line is represented by one node, and each pair of nodes interacting with each other is connected by an edge, and wherein different types of devices and types of transmission lines are represented by different types of nodes.
3. The assistance apparatus according to claim 1, wherein the attributes assigned to the node comprise at least one of sensor data of a set of parameters measured at the node and static features of the node.
4. The assistance apparatus according to claim 1, wherein the graph neural network determines a vector representation of each node and forwards the vector representation of the node to neighbouring nodes.
5. The assistance apparatus according to claim 1, wherein the graph neural network is trained by the attributed graph representations comprising training attributes of at least one node operating in an abnormal mode.
6. The assistance apparatus according to claim 5, wherein the graph neural network is trained by injecting attributes representing erroneous measurements of a predetermined node into the attributed graph representation, wherein the attributes of all other nodes represent error-free measurements.
7. The assistance apparatus according to claim 6, wherein the injected attribute representing erroneous measurements is a measurement value of reverse algebraic sign with respect to an error-free measurement value of the attribute of the node.
8. The assistance apparatus according to claim 5, wherein the trained graph neural network provides as an output a probability for the nodes either having an error or having no error.
9. The assistance apparatus according to claim 1, wherein the graph neural network is trained by attributed graph representations comprising training attributes of the nodes representing all nodes operating in a normal mode.
10. The assistance apparatus according to claim 9, wherein the trained graph neural network provides as an output predicted values of the attributes of each node.
11. The assistance apparatus according to claim 9, wherein the graph neural network is trained by minimizing a loss function between the values of the training attributes and predicted values of the training attributes of each node by minimizing a mean squared error function.
12. The assistance apparatus according to claim 1, wherein the technical system is an electrical power grid.
13. The assistance apparatus according to claim 12, wherein different types of nodes of the graph representation represent different types of power generation devices, power switching devices and/or power transmission lines and the edge between two nodes represents a potential flow of current.
14. A method for localizing errors in a monitored technical system including devices and/or transmission lines, the method comprising: obtaining values of actual attributes of the devices and/or of the transmission lines; determining an error probability for each device and/or transmission line by processing a graph neural network with the values of actual attributes as input, wherein the graph neural network is trained by training attributes assigned to an attributed graph representation of the technical system; and outputting an indication for the devices and/or transmission lines, whose error probability is higher than a predefined threshold.
15. A computer program product, comprising a computer readable hardware storage device having computer readable program code stored therein, said program code executable by a processor of a computer system to implement the method of claim 14 when the program code is run on the computer system.
Description
BRIEF DESCRIPTION
[0041] Some of the embodiments will be described in detail, with reference to the following figures, wherein like designations denote like members.
DETAILED DESCRIPTION
[0048] It is noted that in the following detailed description of embodiments, the accompanying drawings are only schematic, and the illustrated elements are not necessarily shown to scale. Rather, the drawings are intended to illustrate functions and the co-operation of components. Here, it is to be understood that any connection or coupling of functional blocks, devices, components or other physical or functional elements could also be implemented by an indirect connection or coupling, e.g., via one or more intermediate elements. A connection or a coupling of elements or components or nodes can for example be implemented by a wire-based, a wireless connection and/or a combination of a wire-based and a wireless connection. Functional blocks can be implemented by dedicated hardware, firmware or by software, and/or by a combination of dedicated hardware and firmware and software. It is further noted that each functional unit described for an apparatus can perform a functional step of the related method.
[0049] It is further noted that the terms error and fault are used as synonyms throughout this document.
[0051] State estimation is a technique to uncover hidden state information, e.g. voltage magnitudes and phase angles in a power grid, using different kinds of measurements. However, measurements have inherent uncertainty, and the meter information describing a measurement can also be error-prone, e.g. the location or type of the measurement may be wrong. Traditional distribution system state estimation methods can deal to some extent with error-prone measurements but assume reliable static data and meter information describing a technical system in terms of its devices and transmission lines.
[0052] In the technical system 10, reliable information is required for devices 11, 12, 13 like generators, nodes and transmission lines and their characteristics. Errors in the static data propagate through power flow and state estimation calculations and lead to unreliable results. A main problem in the data of a technical system like a power grid is static data errors, e.g. a wrong impedance of a transmission line or wrong topology information. Dynamic data errors also occur, e.g. wrong measurement values or a wrong location of a measurement, which are often caused by wrong meter information associated with the measurement.
[0053] To localize such errors, e.g. noisy measurements, topology errors or wrong measurement meta-information in the underlying data of the technical system 10, various attributes of the devices and/or of the transmission lines 11, 12, 13 of the technical system 10 are obtained and processed by the assistance apparatus 20. The assistance apparatus 20 outputs an indication for such devices 11, 12 and/or transmission lines 13 whose error probability is higher than a predefined threshold.
[0054] The assistance apparatus 20 comprises a data interface 21 which is configured to obtain the values of actual attributes. The assistance apparatus 20 further comprises a prediction unit 22 comprising a graph representation generator 24 and a graph neural network unit 25. The graph representation generator 24 and the graph neural network unit 25 are functional units whose functions are performed by at least one processor processing a computer program product which is loadable into an internal memory of the assistance apparatus 20, not depicted, and which comprises software code portions for performing the respective method steps of the functional unit when the computer program product is run on said at least one processor.
[0055] The prediction unit 22 determines an error probability, which relates to the probability of a device having erroneous attributes, and outputs it via the output interface 23. The determined error probability can also relate to the probability that a predicted value of an attribute corresponds to a measured attribute which was obtained by the data interface 21 as an actual attribute of the technical system 10. Based on the indication for such a device, an erroneous measurement, erroneous meter information of a measurement or erroneous data on static attributes of the technical system 10 can be localized and corrected in the distribution system state estimation model set up for the technical system 10. On the other hand, the indication of such an erroneous device can also be used to exchange, repair, or at least monitor the respective device or transmission line in the physical technical system 10, e.g. the electrical power grid. The graph neural network unit can be configured as a graph convolutional neural network.
[0057] Data 29 of the technical system 10 consists of actual attributes of the devices and transmission lines 11, 12, 13 and is associated with the devices and transmission lines 11, 12, 13. Dynamic attributes consist of sensor data of a set of parameters measured at the devices or transmission lines 11, 12, 13. Static attributes can also be node features, e.g. the length or impedance of a transmission line or the maximum power of a generator. The obtained actual attributes can also be restricted to devices and transmission lines in a sub-zone of the technical system 10, and the respective attributes can be input and processed by the assistance apparatus 20.
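The split into dynamic sensor measurements and static node features can be sketched as follows. This is a minimal illustration only; the attribute names, units and values are assumptions, not prescribed by this document:

```python
def node_feature_vector(measurements, static_features):
    """Concatenate dynamic sensor measurements (e.g. active power p,
    reactive power q, current i, measured per time step) with static
    node features (e.g. line length, impedance) into one attribute
    vector for a node of the attributed graph representation."""
    return list(measurements) + list(static_features)

# hypothetical attribute record for one transmission-line node
x = node_feature_vector(
    measurements=[1.02, 0.31, 0.12],   # p, q, i (dynamic)
    static_features=[4.7, 0.08],       # length [km], impedance [Ohm] (static)
)
```

Nodes without a given sensor would simply carry a shorter or padded measurement part; the sketch leaves that choice open.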
[0058] Data 29 of the technical system 10 is transformed by the graph representation generator 24 into an attributed graph representation 26 of the technical system 10. In a learning phase, the attributed graph representation 26 of the technical system 10, or of a similar technical system, is used to train the graph neural network. Characteristics of the attributed graph representation used for training are described below in more detail.
[0059] In an inference phase, the graph neural network unit 25 processes the graph representation 26 by the trained graph neural network and outputs an indication 28, e.g. in the graph representation 27 of such devices and/or transmission lines 11, 12, 13 whose error probability is higher than a predefined threshold.
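The thresholding step of the inference phase can be sketched in a few lines. The function name, the probability values and the threshold are illustrative assumptions; in the apparatus the probabilities would come from the trained graph neural network of unit 25:

```python
def flag_suspect_nodes(error_probs, threshold=0.5):
    """Return the ids of nodes whose error probability exceeds the
    predefined threshold, mimicking the indication 28 output by the
    assistance apparatus for suspect devices or transmission lines."""
    return [node for node, p in enumerate(error_probs) if p > threshold]

# hypothetical per-node error probabilities from the trained network
probs = [0.02, 0.91, 0.10, 0.67]
suspects = flag_suspect_nodes(probs, threshold=0.5)  # → [1, 3]
```

The returned node ids can then be mapped back to the devices and transmission lines 11, 12, 13 they represent.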
[0061] The message forwarding for a one-layer graph convolutional network is shown in
[0062] The vector representation h for each node determined by the graph neural network, especially as determined by a graph convolutional neural network, is as follows:
[0063] Each node is represented by a vector representing the assigned attributes as follows:
wherein the attributes represent active power p, reactive power q, current i, frequency ν and phase angle θ, as well as the impedance and the length as static features of a transmission line. The vector representation h for each node comprises a non-linear activation function σ, e.g. a sigmoid function or a ReLU function, applied to an aggregation function Agg, typically a sum, a maximum or a mean value, applied to weight matrices W of the graph neural network.
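The layer described above, h = σ(Agg(W·x over neighbours)), can be sketched numerically. This is a minimal one-layer sketch with mean aggregation and ReLU activation; the graph, weight matrix and attribute values are illustrative assumptions, not taken from this document:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def gcn_layer(X, A, W, agg="mean"):
    """One message-passing layer: each node aggregates the weight-transformed
    attribute vectors of its neighbours and applies the non-linearity sigma.
    X: (n, d) node attributes, A: (n, n) 0/1 adjacency, W: (d, k) weights."""
    messages = X @ W                       # transform attributes with weight matrix W
    if agg == "sum":
        agg_msg = A @ messages             # sum over neighbours
    else:                                  # mean over neighbours
        deg = A.sum(axis=1, keepdims=True).clip(min=1)
        agg_msg = (A @ messages) / deg
    return relu(agg_msg)                   # non-linear activation

# three-node line graph: node 0 - node 1 - node 2
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
# two attributes per node, e.g. active power p and reactive power q
X = np.array([[1.0, 0.5], [0.2, -0.1], [0.8, 0.3]])
W = np.eye(2)                              # identity weights for readability
H = gcn_layer(X, A, W)
```

With identity weights, node 1's representation is simply the mean of its neighbours' attributes, which makes the message-forwarding behaviour easy to inspect.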
[0064] Node 32 is indicated, see dashed line 70, as being erroneous, i.e. deviating in one of the attributes from the predicted value with a probability higher than a predefined threshold.
[0065] The graph convolution of the two-layer graph convolution neural network is shown in
[0066] Based on the vector representations, an error probability given by
is minimized. As a result, predicted values for attributes, e.g. a voltage magnitude and a phase angle, are calculated. In summary, the technical system 10 is converted to an attributed heterogeneous graph representation 26. The graph neural network then infers the nodes, indicated in the graph representation 27, which contain bad attributes, either bad measurements or a bad node feature.
[0067] When applying a one-layer graph neural network, actual values of attributes of at least the direct neighbours are required as input, and the vector representation of a node is passed at least to the nearest neighbours, whereas for a two-layer graph neural network, vector representations of a node are propagated to at least the second-nearest neighbours. Accordingly, actual values of attributes obtained in the assistance apparatus are required for the respective nodes to which vector representations are propagated. Accordingly, the graph representation in
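The growth of the influencing neighbourhood with the number of layers can be illustrated with a short sketch. The path graph and function are illustrative assumptions; the point is that a one-layer network sees direct neighbours only, while a two-layer network also sees neighbours of neighbours:

```python
import numpy as np

def receptive_field(A, node, layers):
    """Ids of nodes whose attributes can influence `node` after `layers`
    message-passing steps, i.e. all nodes within `layers` hops."""
    n = len(A)
    reach = np.eye(n)
    hop = A + np.eye(n)                # one step keeps the node itself plus neighbours
    for _ in range(layers):
        reach = reach @ hop
    return sorted(np.nonzero(reach[node] > 0)[0].tolist())

# path graph 0 - 1 - 2 - 3 - 4
A = np.zeros((5, 5))
for i in range(4):
    A[i, i + 1] = A[i + 1, i] = 1.0

one_layer = receptive_field(A, node=2, layers=1)   # direct neighbours only
two_layer = receptive_field(A, node=2, layers=2)   # second-nearest neighbours too
```

This also shows why, for inference, attribute values are only needed for the nodes inside the receptive field of the nodes under investigation.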
[0068] The attributed graph representation used for training the graph neural network comprises training attributes for each node of the technical system 10. In contrast, if an already trained graph neural network is used to localize an error, an attributed graph representation comprising the actual attributes of the technical system can comprise attributes for only a sub-set of nodes of the technical system 10.
[0069] Two training regimes can be applied to train the graph neural network of the graph neural network unit 25. A node classification can be performed as a first training regime. This means that the graph neural network is trained to identify nodes with errors as a binary classification problem such that “0” indicates no error and “1” indicates an error. A second training regime uses node regression. In this case the graph neural network of graph neural network unit 25 is trained to predict a state estimation value of a node as a regression problem.
[0070] For the node classification training regime, the graph neural network is trained by the attributed graph representation comprising training attributes of at least one node operating in an abnormal mode. In a preferred embodiment, the graph neural network is trained by injecting attributes representing erroneous measurements or erroneous node features of one or a few nodes at a predetermined location into the graph representation, wherein the attributes of all other nodes represent error-free measurements or node features. Most preferably, the erroneous measurement is set to a measurement value of reverse algebraic sign with respect to the error-free measurement value of the attribute of the node. In other words, erroneous measurements are injected into a predetermined node of the graph representation to train the graph neural network to detect the location of the error. By flipping the sign of the attribute, i.e. a measurement or feature of the node, a well-defined error type is presented to the graph neural network. As a result, the graph neural network can predict with high probability the node having an erroneous attribute. The graph neural network provides as an output a probability for each node either having an error or having no error.
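The sign-flip error injection for generating labelled training samples can be sketched as follows. The function and attribute layout are illustrative assumptions; only the mechanism (flip the algebraic sign of one attribute at one node, label that node "1" and all others "0") is taken from the description above:

```python
import numpy as np

def inject_sign_flip_error(X, node=None, attribute=0, rng=None):
    """Create one node-classification training sample: copy the error-free
    attribute matrix X, reverse the algebraic sign of one attribute at a
    predetermined (or randomly chosen) node, and return the perturbed
    attributes together with per-node 0/1 error labels."""
    rng = rng or np.random.default_rng()
    n = X.shape[0]
    if node is None:
        node = int(rng.integers(n))    # random error location if none given
    X_err = X.copy()
    X_err[node, attribute] *= -1.0     # well-defined, localized error type
    y = np.zeros(n, dtype=int)
    y[node] = 1                        # "1" marks the erroneous node
    return X_err, y

# error-free attributes for three nodes, two attributes each
X = np.array([[1.0, 0.5], [0.2, -0.1], [0.8, 0.3]])
X_err, y = inject_sign_flip_error(X, node=1)
```

Repeating this over many nodes and attributes yields the abundance of labelled training data mentioned below, all derived from a single error-free snapshot.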
[0071] In contrast to other neural networks, graph neural networks can generalize across different topologies, e.g. power grids of different topologies. This is due to their flexibility of passing vector representations between the nodes of any network. This training regime is particularly advantageous because an abundance of training data can be generated from each error-free power grid. If the performance of the graph neural network is insufficient, additional training data can be generated by conceiving new perturbation mechanisms that introduce additional error types.
[0072] In the second training regime, i.e. node regression, the graph neural network is trained by attributed graph representations comprising training attributes representing all nodes operating in a normal mode. Normal mode means here that the attributes are assumed to be correct, i.e. error-free. The graph neural network is then trained by minimizing a loss function between the values of the training attributes and predicted values of the training attributes of each node, especially by minimizing a mean squared error function. This means that a weight matrix of the graph neural network is determined by this loss function. Once the training is completed, the trained graph neural network is used to predict the values of the attributes of each node.
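The node-regression regime can be sketched with a deliberately simplified model: a single linear mean-aggregation layer whose weight matrix is fitted by gradient descent on the mean squared error between predicted and (assumed error-free) training attributes. The graph, data and learning rate are illustrative assumptions; the actual network would add non-linearities and more layers:

```python
import numpy as np

def mse_loss(pred, target):
    return float(np.mean((pred - target) ** 2))

def train_step(X, A, W, lr=0.1):
    """One gradient-descent step on the mean-squared-error loss between
    the predicted and the training attribute values of each node."""
    deg = A.sum(axis=1, keepdims=True).clip(min=1)
    A_norm = A / deg                           # mean aggregation over neighbours
    pred = A_norm @ X @ W                      # predicted attribute values per node
    err = pred - X                             # deviation from training attributes
    grad = (A_norm @ X).T @ err * (2.0 / X.size)
    return W - lr * grad, mse_loss(pred, X)

# three-node line graph with two attributes per node, all error-free
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
X = np.array([[1.0, 0.5], [0.2, -0.1], [0.8, 0.3]])

losses = []
W = np.eye(2)
for _ in range(50):
    W, loss = train_step(X, A, W)
    losses.append(loss)
```

The loss decreases over the iterations; the fitted weight matrix W is what the trained network later uses to predict attribute values from neighbourhood information.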
[0073] In the inference phase, the predicted values of the attributes are compared to the values of the obtained actual attributes. If the predicted and the obtained values of an attribute deviate significantly, this is evidence of an error. In this case, the trained graph neural network provides as an output the predicted values of the attributes of each node.
[0074] When the assistance apparatus 20 is used to localize errors in an electrical power grid, the different types of nodes of the graph representation represent different types of devices, for example power generation devices, power switching devices, power connecting or concentrating devices like a bus bar, or power transmission lines. The edges between two nodes in a respective graph representation represent a potential flow of current.
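The conversion of grid components into typed nodes connected by current-flow edges can be sketched as a small data structure. The component names and type labels are illustrative assumptions, not taken from this document:

```python
# hypothetical grid inventory: (component id, component type)
grid_components = [
    ("G1", "generator"), ("B1", "busbar"), ("L1", "transmission_line"),
    ("B2", "busbar"), ("S1", "switch"),
]
# pairs of components between which current can potentially flow
current_paths = [("G1", "B1"), ("B1", "L1"), ("L1", "B2"), ("B2", "S1")]

def build_attributed_graph(components, paths):
    """Each device and each transmission line becomes one typed node; each
    pair of components with a potential current flow between them is
    connected by an edge, yielding the attributed heterogeneous graph."""
    nodes = {name: {"type": ctype, "attributes": []} for name, ctype in components}
    edges = [(a, b) for a, b in paths if a in nodes and b in nodes]
    return nodes, edges

nodes, edges = build_attributed_graph(grid_components, current_paths)
```

The per-node `attributes` list would then be filled with the dynamic measurements and static features described above.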
[0075] The localized properties of the graph neural network are beneficial in this case. Since the area influencing a predicted value is only the local neighbourhood of that node, the search space for an error is also restricted to this local neighbourhood.
[0077] Thus, the assistance apparatus and the respective method localize errors in the data of a power grid used, for example, to generate a distribution system state estimation. Pointing to a location in the underlying power grid allows an operator of the power grid to investigate and resolve errors more quickly, either in the calculation of the distribution system state estimation or in the real physical power grid.
[0078] It is to be understood that the above description of examples is intended to be illustrative and that the illustrated components are susceptible to various modifications. For example, the illustrated concepts could be applied for different technical systems and especially for different sub-types of the respective technical system with only minor adaptions.
[0079] Although the present invention has been disclosed in the form of preferred embodiments and variations thereon, it will be understood that numerous additional modifications and variations could be made thereto without departing from the scope of the invention.
[0080] For the sake of clarity, it is to be understood that the use of “a” or “an” throughout this application does not exclude a plurality, and “comprising” does not exclude other steps or elements.