ASSET PROTECTION, MONITORING, OR CONTROL DEVICE AND METHOD, AND ELECTRIC POWER SYSTEM
20220326700 · 2022-10-13
CPC classification
H02H3/00
ELECTRICITY
H02H1/0092
ELECTRICITY
G06Q10/06
PHYSICS
G05B23/024
PHYSICS
Abstract
An asset protection, monitoring, or control device is operative to execute a decision-making logic to process inputs and generate a decision-making logic output that comprises one or more time series, process the decision-making logic output using a machine learning model, and cause an action to be performed responsive to a machine learning model output.
Claims
1. An asset protection, monitoring, or control device, the device comprising: an interface to receive inputs related to an asset; and at least one integrated circuit operative to execute a decision-making logic to process the inputs and generate a decision-making logic output that comprises one or more time series, process the decision-making logic output using a machine learning (ML) model, and cause an action to be performed responsive to an ML model output.
2. The device of claim 1, wherein the ML model is operative to receive the decision-making logic output as an ML model input, and output the ML model output that indicates whether the action is to be taken.
3. The device of claim 1, wherein the device is an asset protection device, the action is a protective action, and the asset protection device further comprises an output interface to output a control signal to effect the protective action.
4. The device of claim 1, wherein the one or more time series include indicator values indicating whether the decision-making logic considers a fault to be present or absent.
5. The device of claim 1, wherein the one or more time series toggle between two, three, or more distinct discrete values.
6. The device of claim 1, wherein the ML model has a recurrent neural network (RNN) layer.
7. The device of claim 1, wherein the ML model has a plurality of RNN layers.
8. The device of claim 7, wherein the ML model is operative to process at least one additional decision-making logic output provided by at least one additional decision-making logic.
9. The device of claim 1, wherein the ML model comprises a long short-term memory (LSTM) cell, a gated recurrent unit (GRU) cell, or at least one other gated cell.
10. The device of claim 1, wherein the ML model has an output dense layer.
11. The device of claim 1, wherein the ML model is operative to perform non-linear low-pass filtering of the decision-making logic output.
12. The device of claim 1, wherein the inputs comprise voltage and current measurements at an end of a transmission line of a power transmission system or of a distribution line of a power distribution system.
13. The device of claim 1, wherein the action comprises one or more of: a corrective action; a mitigating action; a protective action, including a circuit breaker trip; or causing information to be output via a human machine interface (HMI).
14. The device of claim 1, wherein the device is a power system protection device, including a distance protection relay.
15. An electric power system, comprising: an asset; and a device comprising an interface to receive inputs related to the asset, and at least one integrated circuit operative to execute a decision-making logic to process the inputs and generate a decision-making logic output that comprises one or more time series, process the decision-making logic output using a machine learning (ML) model, and cause an action to be performed responsive to an ML model output.
16. The electric power system of claim 15, wherein the asset is a power transmission or distribution line, and the device is a protection relay operative to cause a circuit breaker trip responsive to the ML model output.
17. A method of protecting, monitoring, or controlling an asset, the method comprising: executing a decision-making logic to process inputs and generate a decision-making logic output that comprises one or more time series; processing the decision-making logic output using a machine learning (ML) model; and causing an action to be performed responsive to an ML model output.
18. The method of claim 17, wherein the ML model is operative to receive the decision-making logic output as an ML model input, and output the ML model output that indicates whether the action is to be taken.
19. The method of claim 17, wherein the method is a method of protecting an asset, the action is a protective action, and the method further comprises outputting, via an output interface of an asset protection device, a control signal to effect the protective action.
20. The method of claim 17, wherein the one or more time series toggle between two, three, or more distinct discrete values.
21. The method of claim 17, wherein the one or more time series include indicator values indicating whether the decision-making logic considers a fault to be present or absent, the ML model comprises one or more recurrent neural network (RNN) layers, and the ML model performs non-linear low-pass filtering of the decision-making logic output.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0158] The subject-matter of the present disclosure will be explained in more detail with reference to preferred exemplary embodiments which are illustrated in the attached drawings.
DETAILED DESCRIPTION OF EMBODIMENTS
[0171] Exemplary embodiments will be described with reference to the drawings in which identical or similar reference signs designate identical or similar elements. While some embodiments will be described in the context of distance protection of power distribution or transmission systems, the methods and devices described in detail below may be used in a wide variety of systems.
[0172] The features of embodiments may be combined with each other, unless specifically noted otherwise.
[0173] According to embodiments, an output of a decision-making logic is fed to a machine learning (ML) model. The ML model may have one or several recurrent neural network (RNN) layers. The ML model may be configured to perform non-linear low-pass filtering, in particular integration. This behavior may be attained by training the ML model, without requiring a human expert to set the parameters of a conventional threshold-comparison and counter logic.
[0175] The protection device 30 may be arranged at an end of a power transmission line 11 or a power distribution line. The protection device 30 is operative to cause a circuit breaker (CB) 15 to trip responsive to detection of a fault and, optionally, responsive to detecting that the fault is in a zone for which the protection device 30 is responsible.
[0176] The protection device 30 has an input interface 31 to receive measurements. The measurements may include voltage measurements at a local bus 12 provided by a voltage transformer (VT) 13 and current measurements provided by a current transformer (CT) 14. The inputs received at the input interface 31 may be provided to a decision-making logic, it being understood that some pre-processing (such as filtering, Fourier transform, principal components analysis, or other statistical techniques) may be performed on the inputs as they pass from the interface 31 to the decision-making logic.
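For illustration, a common pre-processing step of this kind is the estimation of fundamental-frequency phasors from sampled waveforms. The following is a minimal sketch of a one-cycle discrete Fourier transform (DFT) phasor estimate in Python; the function name, samples-per-cycle value, and signal values are illustrative assumptions and are not taken from the disclosure.

import numpy as np

def fundamental_phasor(samples: np.ndarray) -> complex:
    """Estimate the fundamental-frequency phasor from one cycle of samples
    using a full-cycle DFT (illustrative pre-processing only)."""
    n = len(samples)
    k = np.arange(n)
    # Correlate the samples with one period of a complex exponential.
    return (2.0 / n) * np.sum(samples * np.exp(-2j * np.pi * k / n))

# Example: 20 samples per cycle of a cosine with amplitude 100 and phase 30 degrees.
t = np.arange(20) / 20.0
v = 100.0 * np.cos(2.0 * np.pi * t + np.deg2rad(30.0))
phasor = fundamental_phasor(v)
print(abs(phasor), np.degrees(np.angle(phasor)))   # approximately 100 and 30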
[0177] The protection device 30 may be operative to process the current and voltage measurements to determine whether there is a fault which requires a mitigating or protective action, such as trip of the CB 15. The protection device 30 may have one or several integrated circuit(s) (IC(s)) to perform the processing.
[0178] The protection device 30 has an output interface 33 to output a control signal to effect an action, such as a protective or mitigating action. The protection device 30 may be communicatively coupled to other devices in the system 10. For illustration, the protection device 30 may be communicatively coupled to a control center 20. The protection device 30 may output information on the detection of the fault to the control center 20 for outputting via a human-machine interface (HMI) 23. The control center 20 may have IC(s) 21 to process messages received from the protection device 30 at an interface 22 and to control the HMI 23 responsive thereto.
[0179] The protection device 30 executes a decision-making logic 34, the output of which is processed by a machine learning (ML) model 35.
[0180] The decision-making logic 34 may receive voltage and/or current measurements or other inputs and may process the inputs to generate a decision-making logic output. The decision-making logic output may be a time series. The time series may have samples that may correspond to discrete, constant time intervals. For illustration, a new decision-making logic time series sample may be generated by sampling the inputs at constant intervals. The time series may be a time series of scalars or may be a time series of vectors.
[0181] When the decision-making logic output is a time series of scalars, the time series may be binary. The time series may toggle between a first value and a second value. The first value may be a first indicator value indicating that the decision-making logic 34 considers a fault to be present and the second value may be a second indicator value indicating that the decision-making logic 34 considers the fault to be absent. The time series may be real- or complex-valued, without being necessarily limited to just the first and second values.
[0182] When the decision-making logic output is a time series of vectors, the time series of vectors may have one or several vector elements that are binary and that may toggle between a first value and a second value. The first value may be a first indicator value indicating that the decision-making logic 34 considers a fault to be present and the second value may be a second indicator value indicating that the decision-making logic 34 considers the fault to be absent. The time series of vectors may comprise real- or complex-valued vector elements.
[0183] The decision-making logic 34 may be a conventional protection logic. For illustration, the decision-making logic 34 may be a conventional distance protection function or time domain protection function, which outputs a time series of values that indicate whether a certain fault is deemed to be present or absent at the respective time. The fault may be a ground fault in a zone for which the protection device 30 is responsible.
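For illustration only, a deliberately simplified under-impedance indicator of this kind may be sketched as follows; the zone reach value, signal values, and function name are illustrative assumptions, and a real distance protection function is considerably more elaborate.

import numpy as np

def zone_indicator(v_phasors: np.ndarray, i_phasors: np.ndarray,
                   zone_reach_ohm: float = 10.0) -> np.ndarray:
    """Toy under-impedance indicator: output 1 while the apparent impedance
    magnitude lies inside the (illustrative) zone reach, else 0."""
    z_apparent = v_phasors / i_phasors
    return (np.abs(z_apparent) < zone_reach_ohm).astype(int)

# Example: the apparent impedance collapses from 50 ohm (load) to 3 ohm (fault).
i = np.ones(10, dtype=complex)
v = np.concatenate([50.0 * np.ones(5), 3.0 * np.ones(5)]).astype(complex)
print(zone_indicator(v, i))   # -> [0 0 0 0 0 1 1 1 1 1]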
[0184] The ML model 35 may have one or several artificial neural network (ANN) layers, such as RNN layers, as will be explained below. The ML model 35 may have a cell having a forget (or reset) gate and/or an input gate, such as a long short-term memory (LSTM) cell or a gated recurrent unit (GRU) cell. The ML model 35 may be operative to perform non-linear low-pass filtering of the decision-making logic output. Other structures, e.g. other recurrent structures having several gates, may be used.
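One possible realization of such an ML model is sketched below using TensorFlow/Keras; the choice of framework, the layer sizes, and the output activation are illustrative assumptions only and are not prescribed by the disclosure.

import tensorflow as tf

# Minimal sketch: a single-feature decision-making logic time series in,
# a per-sample output between 0 and 1 out.
ml_model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(None, 1)),          # (time steps, 1 input signal)
    tf.keras.layers.GRU(1, return_sequences=True,
                        use_bias=False),             # gated recurrent cell, biases fixed to zero
    tf.keras.layers.Dense(1, activation="sigmoid"),  # output dense layer for final scaling
])
ml_model.summary()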
[0185] In the case of a single decision-making logic 34 feeding its output to the ML model 35, the decision-making logic output serves as the ML model input.
[0186] Two or more decision-making logics 36-38 may also be provided, each generating a decision-making logic output that is processed by the ML model 35.
[0187] Sensitivities and/or response times of a first decision-making logic and a second decision-making logic of the two or more decision-making logics 36-38 may be different from each other. The different sensitivities and response times are automatically accounted for in ML model training.
[0188] Even in this more complex case, the trained ML model parameters (such as weights of an ANN or parameter matrices or vectors of a GRU cell or LSTM cell) afford interpretability. For illustration, the ML model weights indicate how strongly the various decision-making logic outputs from the different decision-making logics 36-38 influence the ML model output.
[0189] The ML model 35 can have non-linear low-pass characteristics, such as integrator characteristics. The ML model 35 can implement a low-pass filter or, more specifically, an integrator.
[0190] The ML model 35 can be operative to update a state variable of the ML model (such as the value of a neuron or a state value h_t of a GRU cell, LSTM cell, or other recurrent cell having several gates) responsive to a new time series sample received in the decision-making logic output, so that the ML model performs an accumulation or integration, over time, of the time series output by the decision-making logic.
[0191] The ML model 35 can be operative to update a state variable of the ML model (such as a value of a neuron or a state value h_t of a GRU cell, LSTM cell, or other recurrent cell having several gates) responsive to a new sample of a time series in the decision-making logic output so that the state variable increases (e.g., by an increment that may be fixed or variable) when the time series sample is a first indicator value indicating that the decision-making logic considers a fault to be present.
[0192] The ML model 35 can be operative to update a state variable of the ML model (such as a value of a neuron or a state value h_t of a GRU cell, LSTM cell, or other recurrent cell having several gates) responsive to a new sample of a time series in the decision-making logic output so that the state variable decreases (e.g., by re-setting it or by decreasing it by a decrement that may be fixed or variable) when the time series sample is a second indicator value indicating that the decision-making logic considers the fault to be absent.
[0193] The ML model 35 affords flexibility to implement a behavior that mimics various reset strategies of conventional counter-based logics, such as a hard reset of the counter value or a gradual decrease of the counter value.
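For context, such a conventional counter-based logic can be sketched as follows; the increment, operate threshold, and reset strategy below are illustrative assumptions, not settings taken from the disclosure.

def counter_threshold_logic(indicator_series, increment=1, operate_threshold=5,
                            hard_reset=True, decrement=1):
    """Generic counter-and-threshold logic (illustrative baseline): count up while
    the decision-making logic asserts a fault, reset (hard or gradual) otherwise,
    and operate once the counter reaches the threshold."""
    counter = 0
    operate = []
    for sample in indicator_series:
        if sample:                          # fault asserted by the decision-making logic
            counter += increment
        elif hard_reset:                    # hard reset strategy
            counter = 0
        else:                               # gradual (soft) reset strategy
            counter = max(0, counter - decrement)
        operate.append(int(counter >= operate_threshold))
    return operate

# A fault indication interrupted by a single drop-out sample:
print(counter_threshold_logic([1, 1, 1, 0, 1, 1, 1, 1, 1, 1]))   # operates only near the end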
[0194] The RNN is capable of learning the task of processing the decision-making logic output. For a single decision-making logic 34 (e.g., a single protection logic), the RNN may be a single neuron unit.
[0195] Embodiments provide various effects.
[0196] Based on a dataset that reflects probabilities of events (e.g., the probability of a ground fault or short-circuit fault in the zone for which the protection device is responsible or for other zones), the ML model can be trained to achieve the optimal solution or, in case no perfect solution exists, to give different emphasis to speed, security, and dependability.
[0197] By using an ML model downstream of the decision-making logic, a differentiable logic is provided. This means that it can be incorporated into a larger ML model. For illustration, the one or several ANN (e.g., RNN) layers can be included in a larger, more complex protection system logic based on ANNs. In this case, rather than tuning the ANN (or other ML model) implementing the decision-making logic and the ML model that processes the output of the decision-making logic independently, an embodiment allows the decision-making logic and the ML model that processes the output of the decision-making logic to be trained together. This reduces training time and can improve performance of the overall protection system, because the overall logic is optimized, rather than optimizing each component part separately.
[0198] The solution is interpretable. For example, in the case of a single-input, single-output LSTM recurrent layer consisting of 12 weights, it is possible to interpret the weights in terms of increment rules, reset rules, and operate decisions, as will be explained below.
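As a quick check of this parameter count, a single-input, single-unit LSTM layer has 4 × (1 × 1 + 1 × 1 + 1) = 12 trainable parameters (kernel, recurrent, and bias weights for its four gates). The sketch below, which assumes TensorFlow/Keras purely for illustration, confirms the count.

import tensorflow as tf

# A single-input, single-output LSTM recurrent layer as referenced above.
lstm_model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(None, 1)),
    tf.keras.layers.LSTM(1, return_sequences=True),
])
print(lstm_model.count_params())   # -> 12 (4 gates x (kernel + recurrent + bias))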
[0199] The ML model is small in terms of memory requirements and does not require very high computational power to be trained.
[0200] The ML model can be readily deployed to protection, monitoring, or control devices in the field. The ML model can be deployed to IEDs (e.g., IEDs in accordance with IEC 61850, such as IEDs compatible with IEC 61850-3:2013 and/or operative to communicate in a manner that is compatible with IEC 61850-6:2009).
[0203] The time series output by the decision-making logic may be or may include a binary output signal, without being limited thereto. The time series output by the decision-making logic may toggle between a first value and a second value. The first value may be a first indicator value indicating that the decision-making logic 34 considers a fault to be present and the second value may be a second indicator value indicating that the decision-making logic 34 considers the fault to be absent. The first and second indicator values may be fixed numerical values.
[0204] While a decision-making logic output that toggles between two discrete states has been described, the decision-making logic output is not limited thereto.
[0205] The decision-making logic may output a real- or complex-valued scalar. The real-valued scalar may lie within a numerical range. Different sub-ranges of the numerical range may be associated with the presence and/or absence of a fault, and/or with different types of faults. Real and imaginary parts or modulus and phase of a complex-valued scalar may lie within numerical ranges. Different sub-ranges of the numerical ranges may be associated with the presence and/or absence of a fault, and/or with different types of faults.
[0206] The decision-making logic output may be a vector having plural vector elements. The vector elements may be taken from discrete values, continuous values from within a range, or combinations thereof (some vector elements being selected from discrete values and others being from within continuous ranges).
[0208] While an ML model output 60 toggling between two discrete values has been described, the ML model output is not limited thereto.
[0210] At step 71, a decision-making logic 34 processes inputs. The inputs may include current and/or voltage measurements. The inputs may include current and/or voltage measurements at a local end of a power transmission line (i.e., the end at which the device is provided). The inputs may be received from current and voltage transformers. The inputs may be included in messages of an IACS.
[0211] At step 72, the decision-making logic output is processed with the ML model 35. The ML model may include one or several ANN (e.g., RNN) layer(s). The ML model may be operative to increase a numerical value (e.g., a state variable of the ML model) responsive to a time series sample provided by the decision-making logic that shows that the decision-making logic considers a fault to be present. The ML model may be operative to decrease the numerical value (e.g., a state variable of the ML model) responsive to a time series sample provided by the decision-making logic output that shows that the decision-making logic considers the fault to be absent. Outputs from several decision-making logics may be processed similarly.
[0212] At step 73, an action is performed responsive to the ML model output. The action may be any one or any combination of: a corrective action, a mitigating action, a protective action, or causing information to be output via an HMI. A CB trip is an example of such an action.
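The data flow of steps 71 to 73 may be sketched as follows, assuming a TensorFlow/Keras realization of the ML model as above; the model here is untrained, and the indicator values and decision threshold are illustrative assumptions, so the sketch shows only the flow of data, not a tuned protection decision.

import numpy as np
import tensorflow as tf

# An (untrained) ML model of the kind sketched earlier.
ml_model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(None, 1)),
    tf.keras.layers.GRU(1, return_sequences=True, use_bias=False),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# Step 71: decision-making logic output over recent samples (binary fault indicator).
indicator = np.array([[0, 0, 1, 1, 1, 1, 1]], dtype="float32")[..., np.newaxis]

# Step 72: process the decision-making logic output with the ML model.
y = ml_model.predict(indicator, verbose=0)[0, :, 0]

# Step 73: cause an action responsive to the ML model output.
if y[-1] > 0.5:                        # illustrative decision threshold
    print("trip circuit breaker")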
[0214] At step 81, training of the ML model is performed. This may be done prior to field use of the ML model. The training may be supervised training. The training may be based on data collected during field use and/or synthetically generated data that includes a time series of ML model inputs and an associated desired ML model output. The training may be implemented using conventional techniques such as backpropagation-through-time. A dataset including ten or more training examples may be used.
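A minimal training sketch is given below, assuming TensorFlow/Keras and a small synthetically generated dataset; the dataset shapes, the illustrative target rule (operate after three consecutive fault indications), and the hyperparameters are assumptions made purely for illustration. Fitting a recurrent Keras model on whole sequences uses backpropagation-through-time internally.

import numpy as np
import tensorflow as tf

# Synthetic dataset (illustrative): each example is a time series of decision-making
# logic samples (inputs) and the desired ML model output for every time step (targets).
num_examples, num_steps = 64, 40
X = np.random.randint(0, 2, size=(num_examples, num_steps, 1)).astype("float32")
Y = np.zeros_like(X)
for e in range(num_examples):
    run = 0
    for t in range(num_steps):
        run = run + 1 if X[e, t, 0] > 0 else 0
        Y[e, t, 0] = 1.0 if run >= 3 else 0.0   # operate after three consecutive fault samples

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(None, 1)),
    tf.keras.layers.GRU(1, return_sequences=True, use_bias=False),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X, Y, epochs=50, batch_size=16, verbose=0)   # backpropagation-through-time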
[0215] At step 82, some of the trained ML model parameters may be output, e.g., via an HMI. The trained ML model parameters may be provided to a human operator for interpretation of the trained ML model.
[0216] At step 83, it may optionally be determined whether the ML model is suitable for field use. The determination may be made based on operator input and/or automatically. For illustration, it may be determined whether the ML model parameters obtained in the training ensure that there are no missed faults.
[0217] At step 84, the ML model may be deployed. The ML model may be deployed during configuration or commissioning of an electric power system or IACS. The ML model may be deployed during field use of a protection device, e.g., in an update procedure.
[0218] At step 85, operation of the ML model during field use may be monitored. Re-training of the ML model may be selectively initiated depending on ML model performance during field use.
[0219] ML Model Implementation
[0220] An implementation of the ML model will be described in more detail below for further illustration. A wide variety of other implementations of the ML model that processes the output of the decision-making logic may be used.
[0221] The ML model may have a single GRU layer and a single neuron dense output layer.
[0222] The GRU cell may have all of its biases set to zero. In this case, the GRU cell may be defined by the following equations:
z_t = σ(W_z · x_t + U_z · h_{t−1})   (1)
r_t = σ(W_r · x_t + U_r · h_{t−1})   (2)
h_h = tanh(W_h · x_t + r_t ⊙ (U_h · h_{t−1}))   (3)
h_t = z_t ⊙ h_{t−1} + (1 − z_t) ⊙ h_h   (4)
[0223] In Equations (1)-(4), the following notation is used:
[0224] x_t: ML model input at time t;
[0225] h_{t−1}: previous GRU layer output (at time t−1);
[0226] h_t: current GRU layer output (at time t);
[0227] σ: recurrent activation function (e.g., a sigmoid; e.g., σ(v) = 0 for v < 0, σ(0) = 0.5, σ(v) = 1 for v > 0);
[0228] h_h: candidate for the next GRU cell state;
[0229] W_z, W_r, W_h: kernel weights;
[0230] U_z, U_r, U_h: recurrent weights;
[0231] ⊙: Hadamard product.
[0232] Equations (1)-(3) may optionally include biases. Experiments conducted show that the biases may be set to zero. Inclusion of the biases allows further fine-tuning of the ML model, but the biases can be set to zero while still affording enhanced speed and dependability.
[0233] Various modifications may be used. For illustration, an activation function other than a hyperbolic tangent may be used in Equation (3). For further illustration, the coefficients of h_{t−1} and h_h in Equation (4) may be exchanged.
[0234] The output dense layer serves as a final output scaling and can be defined as:
y_t = σ_a(W_d · h_t + b_d)   (5)
[0235] In Equation (5), the following notation is used:
[0236] σ_a: dense activation function (e.g., a sigmoid);
[0237] y_t: output of the ML model at time t;
[0238] W_d: dense kernel weights;
[0239] b_d: dense bias weights.
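Equations (1)-(5) translate directly into code. The sketch below, in Python/NumPy, implements one time step for the scalar (single-unit) case; the smooth logistic sigmoid is used for σ and σ_a, and the hand-picked weights in the usage example are illustrative assumptions only, not trained values.

import numpy as np

def sigmoid(v):
    # Recurrent/dense activation; a hard step (0 for v < 0, 0.5 at 0, 1 for v > 0)
    # as mentioned above could be substituted.
    return 1.0 / (1.0 + np.exp(-v))

def gru_dense_step(x_t, h_prev, W_z, U_z, W_r, U_r, W_h, U_h, W_d, b_d):
    """One time step of Equations (1)-(5) for the scalar (single-unit) case."""
    z_t = sigmoid(W_z * x_t + U_z * h_prev)            # Eq. (1) update gate
    r_t = sigmoid(W_r * x_t + U_r * h_prev)            # Eq. (2) reset gate
    h_h = np.tanh(W_h * x_t + r_t * (U_h * h_prev))    # Eq. (3) candidate state
    h_t = z_t * h_prev + (1.0 - z_t) * h_h             # Eq. (4) new state
    y_t = sigmoid(W_d * h_t + b_d)                     # Eq. (5) dense output
    return h_t, y_t

# Usage with hand-picked (untrained, illustrative) weights: the state accumulates
# while the fault indicator is 1 and decays softly while it is 0.
weights = dict(W_z=1.0, U_z=0.0, W_r=0.0, U_r=0.0, W_h=2.0, U_h=1.0, W_d=10.0, b_d=-5.0)
h = 0.0
for x in [0, 1, 1, 1, 1, 0, 0]:
    h, y = gru_dense_step(x, h, **weights)
    print(f"x={x}  h={h:.2f}  y={y:.2f}")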
[0240] By considering various exemplary cases (namely cases in which x_t and/or h_{t−1} are set to zero), the interpretability of the parameters of the trained model can be determined.
[0241] For x_t = 0 and h_{t−1} = 0, one obtains:
z_t = σ(0) = 0.5   (6)
r_t = σ(0) = 0.5   (7)
h_h = tanh(0 + r_t ⊙ 0) = 0   (8)
h_t = z_t ⊙ h_{t−1} + (1 − z_t) ⊙ h_h = 0   (9)
y_t = σ_a(b_d)   (10)
[0242] For x_t > 0 and h_{t−1} = 0, one obtains:
z_t = σ(W_z · x_t)   (11)
r_t = σ(W_r · x_t)   (12)
h_h = tanh(W_h · x_t)   (13)
h_t = (1 − z_t) ⊙ h_h   (14)
[0243] For x_t = 0 and h_{t−1} > 0, one obtains:
z_t = σ(U_z · h_{t−1})   (15)
r_t = σ(U_r · h_{t−1})   (16)
h_h = tanh(r_t ⊙ (U_h · h_{t−1}))   (17)
h_t = z_t ⊙ h_{t−1} + (1 − z_t) ⊙ h_h   (18)
[0244] The various parameters can be interpreted as follows:
[0245] The dense layer bias b_d determines an initial output of the ML model (as can be concluded from, e.g., Equation (10)).
[0246] Kernel weight W_h quantifies an initial accumulation of the input state (as can be concluded from, e.g., Equations (11)-(14)).
[0247] Recurrent weight U_z quantifies the preservation of the previous state, which also increases an accumulation of the input signal (as can be concluded from, e.g., Equations (15)-(18)).
[0248] Recurrent weight U_h softens a reset, as it determines how much of the previous output is kept (as can be concluded from, e.g., Equations (15)-(18)).
[0249] Kernel weight W_z influences the ratio of the previous state and the next candidate state if x_t > 0. The larger W_z, the slower the speed of accumulation.
[0250] Kernel weight W_r increases a speed of reset.
[0251] Recurrent weight U_r softens a reset.
[0252] Thus, the parameters of the ML model can be interpreted in a quantitative manner. The parameters quantify the initial output of the ML model, the speed at which the ML model input x is accumulated, the speed at which an internal state (e.g., h) of the GRU or LSTM cell is reset, and whether the reset is softer or harder.
[0253] Operation of the ML model was quantitatively assessed in comparison with a conventional counter threshold logic, as described below.
[0254] The ML model was trained using a dataset of synthetically generated ML model inputs 91 and corresponding desired ML model outputs 92.
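A sketch of how such synthetic input/output pairs might be generated is given below in Python/NumPy; the fault-window lengths, noise probabilities, and pick-up delay are illustrative assumptions and are not values taken from the disclosure.

import numpy as np

rng = np.random.default_rng(0)

def synthetic_example(num_steps=60, pickup_delay=3):
    """One synthetic training example: a fault-indicator time series corrupted by
    random drop-outs and spurious spikes (ML model input 91) and the undisturbed
    fault indication, delayed by a short pick-up time (desired ML model output 92)."""
    x = np.zeros(num_steps)
    y = np.zeros(num_steps)
    fault_start = rng.integers(10, 30)
    fault_len = rng.integers(10, 20)
    x[fault_start:fault_start + fault_len] = 1.0
    y[fault_start + pickup_delay:fault_start + fault_len] = 1.0
    # Corrupt the input with random drop-outs and spurious spikes.
    x[rng.random(num_steps) < 0.10] = 0.0
    spike = (rng.random(num_steps) < 0.05) & (x == 0.0)
    x[spike] = 1.0
    return x, y

examples = [synthetic_example() for _ in range(100)]
X = np.stack([x for x, _ in examples])[..., None]   # shape (examples, time, 1)
Y = np.stack([y for _, y in examples])[..., None]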
[0255] For comparison, a conventional counter threshold logic was used as a baseline.
[0256] The trained ML model had at least the same performance as the counter threshold logic baseline.
[0259] The ML model also mitigates the risk of false alarms.
[0262] The device according to an embodiment properly raises all alarms with no false ones, providing enhanced speed and dependability for the synthetic dataset.
[0263] While embodiments have been described with reference to the drawings, various modifications and alterations may be implemented in other embodiments. For illustration, while some embodiments have been described in association with a CB trip, the methods, devices, and systems of embodiments may not only be used to determine whether an action is to be performed that protects an asset from (more severe) damage, but may also be used to determine whether the asset may safely return to normal operation (e.g., by performing an auto reclose).
[0264] For further illustration, while an implementation has been described in which a GRU has its biases set to zero, the ML model may include one or several GRUs or other ANN or RNN structures having biases that do not need to be zero and that may be learned when training the ML model 35. For illustration, the ML model 35 may include a GRU cell defined by the following equations:
z_t = σ(W_z · x_t + U_z · h_{t−1} + b_z)   (19)
r_t = σ(W_r · x_t + U_r · h_{t−1} + b_r)   (20)
h_h = tanh(W_h · x_t + r_t ⊙ (U_h · h_{t−1}) + b_h)   (21)
h_t = z_t ⊙ h_{t−1} + (1 − z_t) ⊙ h_h   (22),
[0265] where b_z, b_r, and b_h are biases. One, two, or all three of the biases may be parameters that are learned while training the ML model 35.
[0266] Various effects are attained by devices, systems, and methods according to embodiments. Parameters used in the logic of the devices, systems, and methods may be determined by ML, obviating the need for a human expert to properly set all parameters of a conventional counter and threshold logic. The devices, systems, and methods mitigate or eliminate the difficulties of setting the parameters while balancing speed and dependability. The devices, systems, and methods have low memory requirements. The ML model is interpretable and can be readily incorporated into a larger ML-model-based protection, control, or monitoring logic.
[0267] While embodiments have been described in detail in the drawings and foregoing description, the description is to be considered illustrative or exemplary and not restrictive. Variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the embodiments, from a study of the drawings, the disclosure, and the appended claims. In the claims, the word “comprising” does not exclude other elements or steps, and the indefinite article “a” or “an” does not exclude a plurality. The mere fact that certain elements or steps are recited in distinct claims does not indicate that a combination of these elements or steps cannot be used; specifically, in addition to the actual claim dependency, any further meaningful claim combination shall be considered disclosed.