Method for generating a model ensemble for calibrating a control device

11203962 · 2021-12-21

Abstract

A method for generating a model ensemble that estimates at least one output variable of a physical process as a function of at least one input variable, the model ensemble being formed from a sum of model outputs from a plurality of models that have been weighted with a weighting factor.

Claims

1. A method for calibrating a technical system controllable by control variables with a model ensemble that estimates at least one output variable (y) of the technical system as a function of at least one input variable (u), comprising: forming the model ensemble from a sum of model outputs (ŷ.sub.j) from a plurality (j) of models (M.sub.j) that have been weighted with a weighting factor (w.sub.j); determining for each model (M.sub.j) an empirical complexity measurement (c.sub.j) that evaluates the deviation of the model output variable (ŷ.sub.j) from the output variable (y) of the actual physical process over a specified input variable range (U), and a model error (E.sub.j), wherein the empirical complexity measurement (c.sub.j) is weighted with a complexity aversion parameter (α.sub.K); forming a surface information criterion (SIC.sub.j, SIC) from the empirical complexity measurement (c.sub.j) and the model error (E.sub.j), from which the weighting factor (w.sub.j) for the model ensemble is determined; and calibrating the technical system using the model ensemble by setting the control variables of the technical system to ensure an optimized at least one output variable (y) of the technical system during operation.

2. The method according to claim 1, wherein the mean square error (MSE.sub.j) between the output variables (y) of the physical process measured at N input variables (u) and the model output variables calculated at these N input variables is used as model error (E.sub.j) of a model (M.sub.j) according to the relationship $MSE_j = \frac{1}{N}\sum_{i=1}^{N}\left(y(u_i)-\hat{y}_j(u_i)\right)^2$.

3. The method according to claim 1, wherein the empirical complexity measurement (c.sub.j) of a model (M.sub.j) is calculated using the formula $c_j = \int_U \nabla\hat{y}_j(u)^T\,\nabla\hat{y}_j(u)\,du$ or the formula $c_j = \int_U \nabla\hat{y}_j(u)^T\,\nabla\hat{y}_j(u)\,du - \frac{1}{N}\sum_{i=1}^{N}\nabla\hat{y}_j(u_i)^T\,\nabla\hat{y}_j(u_i)$.

4. The method according to claim 1, wherein the weighting factors (w.sub.j) of each model (M.sub.j) of the model ensemble are calculated using the surface information criterion (SIC.sub.j) of the model (M.sub.j) from the formula $w_j = e^{-\frac{1}{2}SIC_j}$.

5. The method according to claim 1, wherein for the model ensemble the surface information criterion (SIC) is formed from an error matrix (F) that includes the model error (E.sub.j) of the models (M.sub.j) and a complexity measurement matrix (C) that includes the empirical complexity measurement (c.sub.j) of the models (M.sub.j), wherein the error matrix (F) and the complexity measurement matrix (C) are each weighted twice, according to the formula $SIC = w^T F w + w^T C w$, using a weighting vector (w) that includes the weighting factors (w.sub.j) of the models (M.sub.j), and the surface information criterion (SIC) of the model ensemble is minimized with respect to the weighting factors (w.sub.j).

6. The method according to claim 5, wherein the error matrix (F) is calculated as a matrix product of a matrix (E) with itself, wherein the matrix (E) is calculated using the formula E=(y(u.sub.i)−ŷ.sub.j(u.sub.i)).

7. The method according to claim 5, wherein the complexity measurement matrix (C) is weighted using a complexity aversion parameter (α.sub.K).

8. The method according to claim 7, wherein the weighting factors (w.sub.j) are calculated for different complexity aversion parameters (α.sub.K) and the weighting vector (w.sub.α.sub.K) belonging to a selected complexity aversion parameter (α.sub.K) is chosen as the optimum weighting vector (w.sub.opt) for the model ensemble.

9. The method according to claim 7, wherein weighting vectors (w.sub.α.sub.K) are calculated for different complexity aversion parameters (α.sub.K) and, in order to determine the optimum weighting vector (w.sub.opt), the relationship $\left\{w_{\alpha_K}^T F w_{\alpha_K} + 2N\sigma^2\, w_{\alpha_K}^T p\right\}$ is minimized with respect to the weighting vectors (w.sub.α.sub.K) calculated for the different complexity aversion parameters (α.sub.K).

10. The method according to claim 6, wherein the complexity measurement matrix (C) is weighted using a complexity aversion parameter (α.sub.K).

11. A combustion engine comprising a control device calibrated by a model ensemble generated by the method of claim 1.

12. The method according to claim 1, wherein the output variable comprises an emission variable.

13. A method of calibrating a technical system controllable by control variables using a model ensemble that estimates an output variable of the technical system as a function of an input variable, comprising: forming the model ensemble from a sum of model outputs of a plurality of models that have been weighted with a weighting factor; determining an empirical complexity measurement for each of the plurality of models that evaluates a deviation of the model output from the output variable over a specified input variable range, wherein the empirical complexity measurement is weighted with a complexity aversion parameter; determining a model error for each of the plurality of models; forming a surface information criterion from the empirical complexity measurement and the model error from which the weighting factor is determined; and calibrating the technical system using the model ensemble by setting the control variables of the technical system to ensure an optimized output variable of the technical system during operation.

14. A combustion engine comprising a control device calibrated by the method of claim 13.

15. The method according to claim 13, wherein the output variable comprises an emission variable.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

(1) The present invention is explained below with reference to FIGS. 1 to 5, which show exemplary, schematic, and non-restrictive advantageous embodiments of the invention. Shown are:

(2) FIG. 1, a model ensemble having a plurality of models weighted with weighting factors,

(3) FIG. 2, the approximation of measured data points by models of differing complexity,

(4) FIG. 3, the training and validation errors as a function of the number of model parameters, and

(5) FIGS. 4 and 5, the effect of the calculation of weighting factors according to the invention.

DETAILED DESCRIPTION

(6) A model ensemble 1, as illustrated in FIG. 1, is made up of a number j of models M.sub.j. Each model M.sub.j is defined by a model structure, for example a neural network, a Kriging model, a linear model network, a polynomial, etc., and by a fixed number p.sub.j of model parameters P.sub.j={p.sub.1,j, . . . , p.sub.pj,j}. The model parameters P.sub.j were or are trained or determined using an appropriate method; such methods are generally known. Each model M.sub.j maps an input variable vector u={u.sub.1, . . . , u.sub.k} onto an estimated value ŷ.sub.j of the output variable of the modeled physical process. The actual output variable y of the physical process, which can be measured, for example, is thereby approximated by model M.sub.j. For example, input vector u includes the input variables torque and speed of a combustion engine, as well as the engine coolant temperature of the combustion engine, and an emission or consumption variable is estimated. The input variables u.sub.i in input variable vector u can vary within a predetermined or specified input variable range U, with u∈U.

(7) Using model ensemble 1, or the models M.sub.j included therein, a physical process, e.g. an emission or consumption variable of a combustion engine such as the NOx emission, the CO or CO.sub.2 emission, or the fuel consumption, is estimated as output variable ŷ of model ensemble 1, or as model output variable ŷ.sub.j of model M.sub.j. In the following description, for the sake of simplicity, a single output variable y is assumed without limitation of general applicability, although an output variable vector y made up of a plurality of output variables is of course also possible.

(8) In model ensemble 1, each model output variable ŷ.sub.j is weighted with a weighting factor w.sub.j and output variable ŷ of model ensemble 1 is the weighted sum of model output variables ŷ.sub.j of individual models M.sub.j in the form

(9) $\hat{y}(u) = \sum_j w_j\,\hat{y}_j(u)$.
In the description, for simplicity's sake, ŷ and ŷ.sub.j are also used, respectively, instead of the correct notation ŷ(u) and ŷ.sub.j(u). With respect to weighting factors w.sub.j, the boundary conditions w.sub.j∈[0,1] and

(10) $\sum_j w_j = 1$
are preferably to be taken into consideration. The problem thus presents itself of how best to determine the weighting factors w.sub.j so that output variable y of the physical process is approximated as well as possible by model ensemble 1, or by its output variable ŷ. The goal here, of course, is for model ensemble 1 to estimate output variable y of the physical process over the complete input variable range U, or the range of interest, better than the best model M.sub.j of model ensemble 1.
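The weighted-sum structure of the model ensemble can be illustrated with a short sketch. The following Python snippet is a minimal illustration only; the toy models, weights, and input are invented for demonstration and are not taken from the patent.

```python
# Minimal sketch of the ensemble output y_hat(u) = sum_j w_j * y_hat_j(u).
# Models, weights, and the input vector are illustrative placeholders.
import numpy as np

def ensemble_predict(models, weights, u):
    """Weighted sum of the individual model outputs at input vector u."""
    return sum(w * m(u) for m, w in zip(models, weights))

# Two toy "models" of a scalar process; weights satisfy w_j in [0, 1], sum = 1.
models = [lambda u: 2.0 * u[0], lambda u: u[0] ** 2]
weights = np.array([0.7, 0.3])
print(ensemble_predict(models, weights, np.array([1.5])))  # 0.7*3.0 + 0.3*2.25 = 2.775
```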

(11) FIG. 2 shows model output variable ŷ.sub.j of a jth model M.sub.j as a function of a single input variable u (which, without limitation of generality, is the simplest case). In it, the points are measured data points, i.e. in each case a measured output variable y(u) at an input variable u. A first model M.sub.1, having the associated model output variable ŷ.sub.1, approximates the measured output variable y using a simple model. Model M.sub.2, with the associated model output variable ŷ.sub.2, represents a somewhat more complex model that better approximates the measured output variable y (in the sense of a smaller deviation of model M.sub.j from the underlying physical process). The better approximation via model M.sub.2, however, comes at the cost of greater model complexity due to a greater degree of freedom in modeling in the form of a larger number p.sub.j of model parameters. In general, more complex models better approximate the actual physical processes, but require more model parameters and also more data to train the model, and are also more sensitive to changes in the model parameters.

(12) This basic relationship is illustrated in FIG. 3. In it, model error E (for example the mean square error MSE) of model M.sub.j is plotted against the number of model parameters p.sub.j: once as model error E.sub.T determined during training of model M.sub.j using the available training data (all or some of the available measured data points), and once as model error E.sub.V, which was determined using predefined validation data (preferably available measurement values of the process output y(u) at specified input variables u). For the present application, however, there is also the problem that no or very few validation data are available.

(13) In order to evaluate the complexity of the jth model M.sub.j, an empirical complexity measurement c.sub.j is used according to the invention that does not evaluate the model structure as in the prior art, but instead evaluates the deviation of model output variable ŷ.sub.j from the output variable y of the physical process over a specified input variable range U. In contrast to a model error E, which relates to the deviation between model M.sub.j and the physical process at specific measured data points, empirical complexity measurement c.sub.j evaluates the deviation over a complete input variable range U, thus specifically also between the measured data points. Different approaches are available for an evaluation of this sort.

(14) In a first approach, the surface of the model output variable ŷ.sub.j over the input variable range U is used for evaluation. The inventive idea behind this can also be explained with reference to FIG. 2. As shown (in the one-dimensional case illustrated here), the greater the length of model output variable ŷ.sub.j (which corresponds in the generalized case to the surface) over input variable range u∈U, the more complex model M.sub.j is, and the better output variable y is thus approximated by model M.sub.j. Naturally, this can be generalized to any desired dimension (number of input variables u.sub.i in input vector u). The empirical complexity measurement c.sub.j for evaluating the deviation of model M.sub.j from the physical process over input variable range U on the basis of the surface is determined according to the following relationship:

(15) $c_j = \int_U \nabla\hat{y}_j(u)^T\,\nabla\hat{y}_j(u)\,du$.

(16) In this, ∇ is the known Nabla operator with respect to the input variables in the input variable vector u, therefore

(17) $\nabla = \left(\frac{\partial}{\partial u_1}, \ldots, \frac{\partial}{\partial u_k}\right)$.
The integral is determined over a specified input variable range u∈U, preferably over the whole range. This integral increases monotonically with the surface of model output variable ŷ.sub.j. The surface of model output variable ŷ.sub.j over input variable range U is thus evaluated here as empirical complexity measurement c.sub.j.

(18) As an alternative empirical complexity measurement c.sub.j, which evaluates the deviation of model M.sub.j, or of model output variable ŷ.sub.j, from output variable y of the physical process, the variance of the model output variables ŷ.sub.j can be employed. The variance (also designated as the second moment of a random variable) is, as is well known, the expected square deviation of a random variable from its expected value. Applied to the present invention, the model output variable ŷ.sub.j at the available N data points is compared, using the variance, to the model output variable ŷ.sub.j between these data points, which is designated here as variability. The idea behind this is that a model M.sub.j having an increased variability generally predicts the basic physical process over input variable range U worse than a model M.sub.j having a lower variability. This is because the better model M.sub.j approximates the measured data points, i.e. the more complex model M.sub.j becomes, the greater the probability of an increased variability. If the variability becomes too large, however, the risk of overfitting model M.sub.j also increases. The typical behavior of such an overfitted or too-complex model M.sub.j is a greatly varying model output variable ŷ.sub.j over input variable range U, which in turn can lead to a larger deviation between actual output variable y and model output variable ŷ.sub.j. This variability based on the variance can be mapped onto empirical complexity measurement c.sub.j if empirical complexity measurement c.sub.j is calculated according to the following formula:

(19) $c_j = \int_U \nabla\hat{y}_j(u)^T\,\nabla\hat{y}_j(u)\,du - \frac{1}{N}\sum_{i=1}^{N}\nabla\hat{y}_j(u_i)^T\,\nabla\hat{y}_j(u_i)$

(20) It is clear that there are additional possibilities for evaluating the deviation between model M.sub.j and the physical process, or between output variable y of the process and model output variable ŷ.sub.j. The basic idea remains unaltered, namely that the larger the empirical complexity measurement c.sub.j, the more complex the underlying model M.sub.j. Empirical complexity measurement c.sub.j therefore also evaluates the complexity of model M.sub.j.
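Both variants of the empirical complexity measurement can be approximated numerically. The sketch below, for the one-dimensional case only, replaces the integral by a trapezoidal sum and the gradient by finite differences; the grid, the data points, and the test model are assumptions made purely for illustration.

```python
# Hedged sketch of the two empirical complexity measures c_j for a 1-D input:
# the integral of the squared gradient over U, and the variance-style variant
# that subtracts the mean squared gradient at the N measured data points.
import numpy as np

def _trapz(y, x):
    """Plain trapezoidal rule, used here as the numerical integral over U."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def complexity_surface(model, u_grid):
    """Approximate c_j = integral over U of (d y_hat/du)^2 du."""
    grad = np.gradient(model(u_grid), u_grid)   # finite-difference gradient
    return _trapz(grad ** 2, u_grid)

def complexity_variance(model, u_grid, u_data):
    """Variance variant: subtract the mean squared gradient at the data points."""
    grad = np.gradient(model(u_grid), u_grid)
    grad_at_data = np.interp(u_data, u_grid, grad)
    return _trapz(grad ** 2, u_grid) - np.mean(grad_at_data ** 2)

u_grid = np.linspace(0.0, 1.0, 501)             # input variable range U
u_data = np.array([0.1, 0.4, 0.8])              # N = 3 measured inputs
print(complexity_surface(np.sin, u_grid), complexity_variance(np.sin, u_grid, u_data))
```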

(21) According to the invention, a surface information criterion SIC.sub.j of the jth model M.sub.j is derived from empirical complexity measurement c.sub.j, which, analogous to the known Akaike Information Criterion AIC of the prior art, is again formed from model error E.sub.j of model M.sub.j and empirical complexity measurement c.sub.j, therefore $SIC_j = E_j + \alpha_K \cdot c_j$. The mean square error

(22) $MSE_j = \frac{1}{N}\sum_{i=1}^{N}\left(y(u_i)-\hat{y}_j(u_i)\right)^2$,
for example, can again be used as model error E.sub.j, wherein any other model error E.sub.j, such as the mean absolute deviation, could obviously also be used.

(23) The preferably used parameter α.sub.K∈[0, ∞) in surface information criterion SIC.sub.j serves as a complexity aversion parameter. It represents the only degree of freedom with which the complexity of a model M.sub.j of model ensemble 1 can be further penalized. The larger the complexity aversion parameter α.sub.K becomes, the more heavily complexity is penalized in surface information criterion SIC.sub.j. Small complexity aversion parameters α.sub.K therefore favor more complex models M.sub.j, meaning models M.sub.j having more degrees of freedom (a larger number of model parameters p.sub.j).

(24) Analogous to the known Akaike Information Criterion, weighting factors w.sub.j can again be determined from

(25) $w_j = e^{-\frac{1}{2}SIC_j}$,
wherein w.sub.j∈[0,1] and

(26) $\sum_j w_j = 1$
can preferably be considered as boundary conditions. Although this already yields a model ensemble 1 that, under the given conditions, approximates the actual process better, i.e. with a smaller error, than a model formed using the Akaike Information Criterion AIC, the quality of model ensemble 1 can be further improved according to the invention using the approach explained below.
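How the per-model surface information criterion translates into normalized ensemble weights can be sketched as follows; the error values, complexity values, and the choice of α.sub.K are illustrative assumptions, and the final normalization step enforces the boundary conditions stated above.

```python
# Sketch of SIC_j = E_j + alpha_K * c_j and w_j = exp(-SIC_j / 2),
# normalized afterwards so that w_j in [0, 1] and sum_j w_j = 1.
import numpy as np

def sic_weights(errors, complexities, alpha_k):
    sic = np.asarray(errors) + alpha_k * np.asarray(complexities)
    w = np.exp(-0.5 * sic)
    return w / w.sum()   # normalization enforces the boundary conditions

# Illustrative model errors E_j and complexity measures c_j for three models.
print(sic_weights(errors=[0.10, 0.05, 0.20], complexities=[1.0, 4.0, 0.5], alpha_k=0.1))
```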

(27) It can be shown that the mean square model error MSE and the empirical complexity measurement c of model ensemble 1 with respect to a weighting vector w, which includes the weighting factors w.sub.j of the j models M.sub.j, can each be represented as a quadratic function of model error E.sub.j and empirical complexity measurement c.sub.j of models M.sub.j in the form $SIC = w^T F w + \alpha_K\, w^T C w$. Within this, the optional complexity aversion parameter α.sub.K represents a degree of freedom in the determination of the weighting factors w.sub.j of the j models M.sub.j.

(28) In this context, F designates an error matrix that includes model error E.sub.j of models M.sub.j, and C a complexity measurement matrix that includes empirical complexity measurement c.sub.j of models M.sub.j. In the case of mean square error MSE as model error E.sub.j, and with a matrix E=(y(u.sub.i)−ŷ.sub.j(u.sub.i)) over all N data points i and all j models, error matrix F results as the product of matrix E with itself, according to F=E.sup.TE. Depending upon the empirical complexity measurement c.sub.j chosen, complexity measurement matrix C results in, for example,

(29) $C = \int_U \nabla\hat{y}_a(u)^T\,\nabla\hat{y}_a(u)\,du$ or $C = \int_U \nabla\hat{y}_a(u)^T\,\nabla\hat{y}_a(u)\,du - \frac{1}{N}\sum_{i=1}^{N}\nabla\hat{y}_a(u_i)^T\,\nabla\hat{y}_a(u_i)$,
each having model output variable vector ŷ.sub.a, which contains model output variables ŷ.sub.j of j models, thus ŷ.sub.a={ŷ.sub.1 . . . ŷ.sub.j}. Matrices F and C can thus be calculated in advance and, above all, without knowledge of models M.sub.j or their model structure or the number of model parameters p.sub.j.
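Assembling the error matrix F from the residuals at the N data points can be sketched as follows; the toy models and measurements are assumptions for illustration, and only F is shown, since C depends on the chosen complexity measurement.

```python
# Sketch of the error matrix F = E^T E, where E[i, j] = y(u_i) - y_hat_j(u_i)
# has one row per data point and one column per model.
import numpy as np

def error_matrix(models, u_data, y_data):
    E = np.column_stack([y_data - np.array([m(u) for u in u_data]) for m in models])
    return E.T @ E   # F = E^T E, a j x j matrix

models = [lambda u: 2.0 * u, lambda u: u ** 2]    # two toy models
u_data = np.array([0.5, 1.0, 1.5])                # N = 3 measured inputs
y_data = np.array([1.1, 2.1, 2.9])                # measured process outputs
print(error_matrix(models, u_data, y_data))
```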

(30) For determining weighting factors w.sub.j (or, analogously, weighting vector w), surface information criterion SIC of model ensemble 1 for a specified complexity aversion parameter α.sub.K can be optimized with regard to weighting factors w.sub.j, in particular minimized. An optimization problem in the form

(31) $w_\alpha = \underset{w}{\arg\min}\left\{w^T F w + \alpha_K\, w^T C w\right\}$
can be derived from this.

(32) As can easily be recognized, this is a quadratic optimization problem that can be solved quickly and efficiently using available standard solution algorithms for a predetermined complexity aversion parameter α.sub.K. Here, w.sub.j∈[0,1] and

(33) $\sum_j w_j = 1$
preferably apply as boundary conditions for optimization. Any initial weighting vector w can be specified.
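The quadratic optimization over the weight simplex can be sketched with a general-purpose solver. In the snippet below, scipy's SLSQP method merely stands in for the "standard solution algorithms" mentioned above, and the matrices F and C are small illustrative examples.

```python
# Hedged sketch: minimize w^T F w + alpha_K * w^T C w
# subject to w_j in [0, 1] and sum_j w_j = 1.
import numpy as np
from scipy.optimize import minimize

def optimal_weights(F, C, alpha_k):
    n = F.shape[0]
    objective = lambda w: w @ F @ w + alpha_k * (w @ C @ w)
    constraints = ({"type": "eq", "fun": lambda w: w.sum() - 1.0},)
    bounds = [(0.0, 1.0)] * n
    w0 = np.full(n, 1.0 / n)   # any initial weighting vector may be specified
    return minimize(objective, w0, method="SLSQP",
                    bounds=bounds, constraints=constraints).x

F = np.array([[0.4, 0.1], [0.1, 0.2]])   # illustrative error matrix
C = np.array([[1.0, 0.0], [0.0, 3.0]])   # illustrative complexity matrix
print(optimal_weights(F, C, alpha_k=0.5))
```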

(34) The result of the optimization of surface information criterion SIC of model ensemble 1 for the determination of weighting factors w.sub.j is described with reference to FIG. 4. The exemplary embodiment is based on j models M.sub.j trained using a small number of data points. The two diagrams on the left show training error E.sub.TE and validation error E.sub.VE of model ensemble 1, which is determined using the optimization of surface information criterion SIC described above. Plotted on the abscissa is an empirical model ensemble complexity p.sub.eff.sup.w, which is derived from the complexities of the individual models M.sub.j in the form p.sub.eff.sup.w=w.sup.Tp, where w is the weighting vector determined for a specified complexity aversion parameter α.sub.K and vector p includes the number of model parameters p.sub.j of every model M.sub.j. If complexity aversion parameter α.sub.K is varied and the optimization is solved anew for each complexity aversion parameter α.sub.K, whereby one obtains an associated weighting vector w.sub.α.sub.K for each, one obtains the curves in the two diagrams on the left. The points in the diagrams respectively represent training error E.sub.Tj and validation error E.sub.Vj of a jth model M.sub.j of model ensemble 1 (plotted in each case at the number p.sub.j of model parameters of the jth model M.sub.j). As can easily be recognized, model ensemble 1 determined according to the invention is in each case better, meaning it has smaller errors, than the best single model M.sub.j.

(35) In the diagram on the right in FIG. 4, validation error E.sub.V is represented as a function of training error E.sub.T. The points again represent the individual models M.sub.j. Here, a model ensemble determined using the standard Akaike Information Criterion AIC is also compared to model ensemble 1 determined according to the invention using the surface information criterion SIC. As can clearly be recognized, the surface information criterion SIC not only performs far better than the Akaike Information Criterion AIC, but also better than each individual model M.sub.j.

(36) It can also be deduced from the diagram on the right in FIG. 4 that there is a complexity aversion parameter α.sub.K,opt that minimizes the model error of model ensemble 1. One can now attempt to find this optimum complexity aversion parameter α.sub.K,opt manually, or at least to approximate it manually. In a second step following the optimization of surface information criterion SIC, however, an attempt can also be made to determine the optimum complexity aversion parameter α.sub.K,opt, and thus also the associated optimum weighting vector w.sub.opt, using the procedure described below.

(37) To accomplish this, the associated weighting vectors w.sub.α.sub.K are first determined for a plurality of complexity aversion parameters α.sub.K. A set of weighting vectors is thus obtained

(38) $\{w_{\alpha_K}\}_{\alpha_K \geq 0}$.

(39) Using the known Mallows equation, that complexity aversion parameter α.sub.K which solves the following optimization problem is chosen as optimum complexity aversion parameter α.sub.K,opt:

(40) $w_{opt} = \underset{w_{\alpha_K}}{\arg\min}\left\{w_{\alpha_K}^T F w_{\alpha_K} + 2N\sigma^2\, w_{\alpha_K}^T p\right\}$.

(41) Within this, F is again the error matrix (F=E.sup.TE) and σ is the standard deviation of the available data points, which is generally not known. There are, however, known methods (as described in Hansen, B. E., "Least squares model averaging," Econometrica, 75(4), 2007, pp. 1175-1189, for example) for estimating the standard deviation σ from the available data points. Vector p again includes the number of model parameters p.sub.j for all j models M.sub.j. Knowledge of models M.sub.j, or of their model structures, is therefore required for this step.

(42) This optimization is not, however, solved directly, but with respect to the initially determined set of weighting vectors

(43) $\{w_{\alpha_K}\}_{\alpha_K \geq 0}$.
This means that the weighting vector w.sub.α.sub.K associated with a specific complexity aversion parameter α.sub.K is selected as optimum weighting vector w.sub.opt, namely the one that minimizes the expression

(44) $\left\{w_{\alpha_K}^T F w_{\alpha_K} + 2N\sigma^2\, w_{\alpha_K}^T p\right\}$.
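This selection step can be sketched as a sweep over candidate complexity aversion parameters, reusing the optimal_weights() function from the previous sketch; the parameter vector p, the standard deviation σ, the matrices, and the α.sub.K grid are illustrative assumptions.

```python
# Sketch: solve the quadratic program for each alpha_K, then keep the weighting
# vector minimizing the Mallows-style score w^T F w + 2 N sigma^2 w^T p.
import numpy as np

def select_optimal_weights(F, C, p, sigma, n_data, alphas):
    # optimal_weights() is the QP solver from the previous sketch.
    candidates = [optimal_weights(F, C, a) for a in alphas]
    score = lambda w: w @ F @ w + 2.0 * n_data * sigma ** 2 * (w @ p)
    return min(candidates, key=score)

F = np.array([[0.4, 0.1], [0.1, 0.2]])   # illustrative error matrix F = E^T E
C = np.array([[1.0, 0.0], [0.0, 3.0]])   # illustrative complexity matrix
p = np.array([3.0, 12.0])                # model parameter counts per model
print(select_optimal_weights(F, C, p, sigma=0.1, n_data=3,
                             alphas=np.logspace(-3, 2, 20)))
```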

(45) In FIG. 5, the effect of the method according to the invention for determining weighting factors w.sub.j for a small number N of available data points is demonstrated using an example. For this purpose, 5,000 data points for NOx emission as well as for soot emission were measured on a specific combustion engine, i.e. in each case one measurement value for NOx and soot at each of 5,000 input vectors u, in order to have sufficient data for the demonstration example. Input vector u here incorporated, for example, five input variables u.sub.i, namely torque, speed, engine coolant temperature, position of a turbocharger having variable turbine geometry, and position of an exhaust gas recirculation. Fifteen different models M.sub.j (different model structures and/or different numbers of model parameters and/or different model parameters) were trained using a random selection of data points from the 5,000 available data points. The random selection served as the small number N of available data points; the number N was thereby increased from 10 to 150, thus 10≤N≤150. The data points remaining in each case (5,000−N) were used as validation data in order to verify the effectiveness of the invention. In FIG. 5, the validation error for NOx emission E.sub.V,NOx and soot emission E.sub.V,Soot is illustrated for each case. Validation error E.sub.V is the mean square error between model output variable ŷ.sub.j, or output variable ŷ of model ensemble 1, and the validation data. For each number N of data points, a best and a worst model M.sub.j result; this bandwidth of models M.sub.j is represented by dashed lines in FIG. 5. In addition, for each number N of data points, a model ensemble according to the known Akaike Information Criterion AIC and a model ensemble 1 using the surface information criterion SIC according to the invention were determined. The validation errors for these are shown in the diagrams in FIG. 5 as well. It is immediately obvious that model ensemble 1 determined according to the invention is not only usually better than the best model M.sub.j in each case, but also better than the model ensemble that was determined using the Akaike Information Criterion AIC.

(46) A model ensemble determined according to the invention is used, for example, for calibrating a technical system, such as a combustion engine. In the calibration, in order to optimize at least one output variable of the technical system, the control variables of the technical system, by which the technical system is controlled, are varied in a specified operational state of the technical system that is defined by state variables or a state variable vector. The optimization of the output variables by variation of the control variables is generally formulated and solved as an optimization problem; sufficient methods are known for accomplishing this. The control variables determined in this manner are stored as a function of the respective operational state, for example in the form of characteristic maps or tables. This relationship can then be used to control the technical system as a function of the actual operational state (which is measured or otherwise determined, for example estimated). This means that the stored control variables for the relevant operational state are read out from the stored relationship and used to control the technical process. In the case of a combustion engine as the technical system, the operational state is often described using measurable variables such as speed and torque, wherein other variables such as engine coolant temperature, ambient temperature, etc. can also be used. In a combustion engine, the position of a variable-turbine-geometry turbocharger, the position of an exhaust-gas recirculation system, or the injection timing are often used as control variables. The output variable to be optimized in a combustion engine is typically a consumption and/or emission variable (for example NOx, CO, CO.sub.2, etc.). Calibration of a combustion engine thus ensures, by setting correct control variables, that consumption and/or emission during operation are minimal.
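As a final illustration of this calibration step, the following sketch searches, for each operating point, the control value that minimizes the ensemble-predicted output and stores it in a characteristic map that can be read out during operation. The toy ensemble, the operating points, and the control grid are invented for demonstration and do not represent the patent's data.

```python
# Hedged sketch of map-based calibration: per operating state, pick the control
# value with the best ensemble prediction and store it in a lookup table.
import numpy as np

def build_characteristic_map(ensemble, operating_points, control_grid):
    cal_map = {}
    for op in operating_points:
        predictions = [ensemble(np.append(op, c)) for c in control_grid]
        cal_map[tuple(op)] = control_grid[int(np.argmin(predictions))]
    return cal_map

# Toy ensemble: "emission" as a function of (speed, torque, control position).
ensemble = lambda x: (x[2] - 0.3 * x[0]) ** 2 + 0.1 * x[1]
operating_points = [np.array([1.0, 0.5]), np.array([2.0, 1.0])]
control_grid = np.linspace(0.0, 1.0, 101)
cal_map = build_characteristic_map(ensemble, operating_points, control_grid)
print(cal_map[(1.0, 0.5)])   # stored control value read out for this operating state
```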