METHOD AND CENTRAL COMPUTER ARRANGEMENT FOR PREDICTING A GRID STATE, AND COMPUTER PROGRAM PRODUCT
20230018146 · 2023-01-19
CPC classification: H02J3/0012 · H02J3/004 · H02J2300/20 · G05B2219/2639 · H02J2203/10 · H02J3/24 · H02J13/00004 · H02J3/003
International classification: H02J3/00 · H02J3/24
Abstract
A method predicts a grid state of an electrical power distribution grid, in which a central computer arrangement receives measured values from measuring devices. A state estimation device predicts a future grid state, and the prediction of the future grid state is taken as a basis for ascertaining measures to guarantee stability of the power distribution grid. The prediction is made for multiple times within a predefined time window. A first prediction device ascertains a prediction for a first portion of the multiple times on the basis of a voltage var control method, and a second prediction device ascertains a prediction for a second portion of the multiple times on the basis of a neural network method.
Claims
1. A method for predicting a grid state of an electrical power distribution grid, which comprises the steps of: receiving measured values via a central computer configuration from measuring devices; using a state estimation device to predict a future grid state, wherein a prediction of the future grid state is taken as a basis for ascertaining measures to guarantee stability of the electrical power distribution grid, and wherein the prediction is made for multiple times within a predefined time window, the using step including the further substeps of: using a first prediction device to ascertain a prediction for a first portion of the multiple times on a basis of a voltage var control (VVC) method; and using a second prediction device to ascertain a prediction for a second portion of the multiple times on a basis of a neural network method.
2. The method according to claim 1, wherein a high-voltage grid is used for at least part of the electrical power distribution grid.
3. The method according to claim 1, wherein a medium-voltage grid is used for at least part of the electrical power distribution grid.
4. The method according to claim 1, which further comprises ascertaining a grid load for the prediction.
5. The method according to claim 1, which further comprises ascertaining energy production by producers of renewable energy for the prediction.
6. The method according to claim 1, which further comprises taking into consideration several hours in the future for the predefined time window, each of the multiple times being at time intervals of between 3 and 60 minutes.
7. The method according to claim 6, which further comprises taking into consideration 24 hours in the future for the predefined time window, each of the multiple times being at time intervals of 15 minutes.
8. The method according to claim 1, which further comprises: using a first training device to create training data, wherein multiple contraventions of predefined threshold values for the measured values of the electrical power distribution grid are simulated; and taking the training data as a basis for using the first prediction device to ascertain a first prediction training dataset.
9. The method according to claim 8, which further comprises taking the training data as a basis for using the second prediction device to ascertain a second prediction training dataset, and using a comparison device to ascertain a diversity value for a difference between the first and second prediction training datasets, wherein, if the diversity value exceeds a previously stipulated threshold value, a second training device is used to train a neural network for the second prediction device in such a way that the diversity value is reduced.
10. A central computer configuration for predicting a grid state of an electrical power distribution grid, the central computer configuration comprising: a receiver configured to receive measured values from measuring sensors; and a state estimation device configured to predict a future grid state, wherein a prediction of the future grid state is taken as a basis for ascertaining measures to guarantee stability of the electrical power distribution grid, and wherein the prediction is made for multiple times within a predefined time window, said state estimation device having a first prediction device configured to ascertain a prediction for a first portion of the multiple times on a basis of a voltage var control (VVC) method, and a second prediction device configured to ascertain a prediction for a second portion of the multiple times on a basis of a neural network method.
11. The central computer configuration according to claim 10, wherein said state estimation device is configured to ascertain a grid load for the prediction.
12. The central computer configuration according to claim 10, wherein said state estimation device is configured to ascertain energy production by producers of renewable energy for the prediction.
13. The central computer configuration according to claim 10, further comprising: a first training device configured to create training data, wherein multiple contraventions of predefined threshold values for the measured values of the electrical power distribution grid are simulated, and the training data are taken as a basis for using said first prediction device to ascertain a first prediction training dataset.
14. The central computer configuration according to claim 13, wherein said second prediction device is configured to take the training data as a basis for ascertaining a second prediction training dataset; further comprising a comparison device configured to ascertain a diversity value for a difference between the first and second prediction training datasets; and further comprising a second training device configured, if the diversity value exceeds a previously stipulated threshold value, to train a neural network for said second prediction device in such a way that the diversity value is reduced.
15. A non-transitory storage medium having computer-executable code that, when executed on a computer, carries out the method according to claim 1.
Description
BRIEF DESCRIPTION OF THE FIGURES
DETAILED DESCRIPTION OF THE INVENTION
[0044] Referring now to the figures of the drawings in detail and first, particularly to
[0045] Input values are e.g.:
limit value contraventions 3 for powers or voltages transmitted by remotely monitored transformers,
initial control positions 4 for controllable equipment such as load tap changers of transformers or switchgear,
sum of injected wattless power 5 from capacitors,
sum of the power 6 of the load,
sum of the wattless power 7 of the load,
sum of the energy production 8, and
sum of the wattless power from energy producers 9.
[0046] Output variables 14 comprise e.g. all final control positions for controllable equipment such as e.g. load tap changers of transformers or switchgear.
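As an illustration, the input values listed above could be bundled into a single feature vector for the prediction devices. This is a minimal sketch: all field and function names are assumptions, and the reference numerals in the comments refer to the list above.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class GridSnapshot:
    """Illustrative input features for one time step (names are assumptions)."""
    limit_violations: List[float]      # limit value contraventions (3)
    initial_tap_positions: List[int]   # initial control positions (4)
    capacitor_reactive_power: float    # injected wattless power from capacitors (5)
    load_active_power: float           # sum of the power of the load (6)
    load_reactive_power: float         # sum of the wattless power of the load (7)
    generation_active_power: float     # sum of the energy production (8)
    generation_reactive_power: float   # sum of the wattless power from producers (9)

def to_feature_vector(s: GridSnapshot) -> List[float]:
    """Flatten one snapshot into the numeric vector fed to a predictor."""
    return (list(s.limit_violations)
            + [float(p) for p in s.initial_tap_positions]
            + [s.capacitor_reactive_power,
               s.load_active_power, s.load_reactive_power,
               s.generation_active_power, s.generation_reactive_power])
```

The output variables 14 (final control positions) could be encoded the same way, as one integer per load tap changer or switch.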
[0049] In the second prediction step 23, a second prediction training dataset is ascertained, the neural network from
[0050] In comparison step 24, a diversity value E is ascertained for a difference between the first and second prediction training datasets. If the diversity value E exceeds a previously stipulated threshold value T, i.e. there is an excessively large divergence between the two estimated grid states, then, in accordance with decision Y, the neural network is trained in step 25 so that the diversity value E falls below the threshold value T. If the diversity value E does not exceed the threshold value T, i.e. the divergence between the two estimated grid states is not excessively large, then, in accordance with decision N, the method proceeds directly to the final step 26.
[0051] The method 20 allows the neural network to be trained, in particular by iterative repetition over many different training data, so that it delivers an output very similar to that of the analytical method. Once trained, however, the neural network can estimate a grid state very much more quickly than the analytical method can compute one.
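The iterative training described in steps 24 and 25 can be sketched as follows. This is a minimal sketch under stated assumptions: the patent fixes neither the diversity metric nor the training procedure, so the mean absolute difference and the `train_step` callback used here are illustrative choices, and all function names are hypothetical.

```python
def diversity(vvc_out, nn_out):
    """Diversity value E: here the mean absolute difference between the two
    prediction training datasets (the metric itself is an assumption)."""
    return sum(abs(a - b) for a, b in zip(vvc_out, nn_out)) / len(vvc_out)

def train_until_consistent(vvc_predict, nn, training_inputs, threshold,
                           train_step, max_iter=100):
    """Retrain the neural network while the diversity value E exceeds the
    previously stipulated threshold value T (comparison step 24, training
    step 25 of method 20)."""
    for _ in range(max_iter):
        vvc_out = [vvc_predict(x) for x in training_inputs]
        nn_out = [nn(x) for x in training_inputs]
        if diversity(vvc_out, nn_out) <= threshold:
            return nn  # decision N: outputs are consistent, proceed
        # decision Y: train the network towards the analytical targets
        nn = train_step(nn, training_inputs, vvc_out)
    return nn
```

In practice `train_step` would run one or more gradient-descent epochs on the network, using the analytical VVC outputs as regression targets.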
[0053] Subsequently, step 29 involves a prediction for further times 65 to 96 being ascertained on the basis of the previously trained neural network method. That is to say that machine learning is used for the last 8 hours in order to estimate the future grid state.
[0054] In step 30, predicted grid states are provided for all 96 times, i.e. for 24 hours in advance.
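The split of the 96 quarter-hour steps between the two prediction devices can be sketched as follows. The names `vvc_predict` and `nn_predict` are hypothetical stand-ins for the analytical method of the first prediction device and the trained neural network of the second.

```python
def predict_day_ahead(vvc_predict, nn_predict, grid_state,
                      n_steps=96, vvc_steps=64):
    """Predict all 96 quarter-hour steps of the next 24 hours: the first
    64 steps (16 h) with the analytical VVC method, the remaining 32 steps
    (8 h) with the trained neural network."""
    predictions = []
    for t in range(n_steps):
        if t < vvc_steps:
            predictions.append(vvc_predict(grid_state, t))  # times 1..64
        else:
            predictions.append(nn_predict(grid_state, t))   # times 65..96
    return predictions
```

The boundary at step 64 reflects the example in the text; the ratio between analytically and neurally predicted times could be chosen differently within the predefined time window.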
[0056] According to the invention, the state estimation device 54 contains a first prediction device 55 designed to create a prediction for a first portion of the multiple times. These may be the first 64 times within 16 hours, as in the example mentioned at the outset. The first prediction device 55 uses a voltage var control (VVC) method, i.e. an analytical method.
[0057] A second prediction device 56 is configured to ascertain a prediction for a second portion of the multiple times on the basis of a neural network method. These may be the last 32 times within 8 hours at the end of the time window of 24 hours, as in the example mentioned at the outset.
[0058] For this to work, the neural network method of the second prediction device 56 must first be trained to deliver very similar results to the analytical method of the first prediction device 55.
[0059] For this purpose, a first training device 61 is configured to create training data 60, wherein multiple contraventions of predefined threshold values for measured values of the electrical power distribution grid are simulated.
[0060] The first prediction device 55 takes the training data 60 as a basis for ascertaining a first prediction training dataset 62. The second prediction device 56 takes the same training data 60 as a basis for ascertaining a second prediction training dataset 63. The two prediction training datasets 62, 63 are transmitted to a comparison device 64.
[0061] The comparison device 64 determines a diversity value E for a difference between the first and second prediction training datasets 62, 63. If the diversity value E exceeds a previously stipulated threshold value T, a second training device 65 is given the task of training the neural network for the second prediction device in such a way that the diversity value is reduced. The trained neural network 66 is transmitted to the second prediction device 56.