MULTI-MODEL PREDICTIVE CONTROL METHOD FOR PICHIA PASTORIS FERMENTATION PROCESS
20240176327 · 2024-05-30
Inventors
- Bo WANG (Zhenjiang, CN)
- Mengyi HE (Zhenjiang, CN)
- Xianglin ZHU (Zhenjiang, CN)
- Xingyu WANG (Zhenjiang, CN)
Abstract
Disclosed is a multi-model predictive control method for a Pichia pastoris fermentation process, including: dividing prior data into m training sample clusters by using a fuzzy C-means algorithm (FCM); obtaining, for each sample cluster, a corresponding prediction model by using a least squares support vector machine (LSSVM) and an improved particle swarm optimization method (IPSO); then, designing a corresponding predictive controller; and finally, calculating a deviation between an output of an object and an output of each sub-prediction model at each sampling time to establish a multi-model fusion predictive controller. According to the method, the adaptive ability of the model is improved, and the actual state of a nonlinear system is described more accurately.
Claims
1. A multi-model predictive control method for a Pichia pastoris fermentation process, comprising: acquiring a concentration of protease K produced by Pichia pastoris fermentation; clustering the concentration of protease K produced by Pichia pastoris fermentation by using a fuzzy C-means algorithm (FCM), and acquiring m sample clusters; inputting each sample cluster into a least squares support vector machine (LSSVM) for training, and optimizing key parameters of LSSVM by using the improved particle swarm optimization (IPSO), and establishing m optimal sub-prediction models FCM-IPSO-LSSVM.sub.1-FCM-IPSO-LSSVM.sub.m; designing a corresponding model predictive controller for each optimal sub-prediction model; acquiring a current state of a system; calculating an output of the m optimal sub-prediction models and a mean-square error (MSE) R.sub.i(k) of controlled objects, and selecting an optimal sub-prediction model with a minimum MSE as a matching model; calculating a weight w.sub.i of each sub-model predictive controller according to the MSE R.sub.i(k); weighting and summing m sub-model predictive controllers according to a weight w.sub.i of m model predictive controllers, and taking a fusion controller as a control input to construct a multi-model fusion predictive controller; and controlling the Pichia pastoris fermentation process by the multi-model fusion predictive controller.
2. The multi-model predictive control method for a Pichia pastoris fermentation process according to claim 1, wherein the FCM comprises: inputting the number of clusters C, a fuzzy weighted parameter m and an iteration stop condition ε; initializing a cluster center V.sub.i.sup.0(i=1,2, . . . , C); calculating u.sub.ij(i=1,2, . . . , C, j=1,2, . . . , n),
3. The multi-model predictive control method for a Pichia pastoris fermentation process according to claim 1, wherein the LSSVM comprises an optimization problem, comprising:
y(x)=Σ.sub.i=1.sup.Na.sub.iK(x, x.sub.i)+b (6) where a.sub.i∈R is a Lagrange multiplier, and through comparative analysis on multiple types of functions, a radial basis function (RBF) is selected as a kernel function of the LSSVM having a kernel width expression, comprising:
4. The multi-model predictive control method for a Pichia pastoris fermentation process according to claim 1, wherein the IPSO comprises the calculation formulas: supposing that a dimension of a target search space is m, the number of particles in a particle swarm is G, a position of a particle i in an m-dimensional space is represented as a vector X.sub.i=(x.sub.i,1, x.sub.i,2, . . . , x.sub.i,m), i=1,2, . . . , G, and a flight speed is represented as a vector V.sub.i=(v.sub.i,1, v.sub.i,2, . . . , v.sub.i,m), i=1,2, . . . , G; after adjustment, a best position of the particle is represented as P.sub.i=(p.sub.i,1, p.sub.i,2, . . . , p.sub.i,m), and finally a best position of a whole swarm is represented as g.sub.best=(p.sub.g,1, p.sub.g,2, . . . , p.sub.g,m); and in the k-round iteration process of IPSO, a state parameter adjustment formula of each particle in the particle swarm comprising:
5. The multi-model predictive control method for a Pichia pastoris fermentation process according to claim 4, wherein the inertia weight factor ω is dynamically adjusted by using an adaptive adjustment strategy, and a calculation formula comprises:
6. The multi-model predictive control method for a Pichia pastoris fermentation process according to claim 4, wherein an optimal iteration number N and the acceleration coefficients c.sub.1 and c.sub.2 are determined by using a variable method, comprising: obtaining an optimal iteration number value N, combined with the Pichia pastoris fermentation process and a fixed test acceleration coefficient and according to different iteration number values N; and simulating groups of different values of c.sub.1 and c.sub.2 according to the optimal iteration number value N and combined with the Pichia pastoris fermentation process, to obtain the best simulated c.sub.1 and c.sub.2 values.
7. The multi-model predictive control method for a Pichia pastoris fermentation process according to claim 1, wherein the calculating an output of the m optimal sub-prediction models and a mean-square error (MSE) R.sub.i(k) of controlled objects comprises:
8. The multi-model predictive control method for a Pichia pastoris fermentation process according to claim 1, wherein the calculating a weight w.sub.i of each sub-model predictive controller according to the MSE R.sub.i(k) comprises:
9. The multi-model predictive control method for a Pichia pastoris fermentation process according to claim 1, wherein the control input is expressed as:
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION
[0070] Technical solutions in the examples of the present disclosure will be described clearly and completely in the following with reference to the attached drawings in the examples of the present disclosure. Obviously, all the described examples are only some, rather than all examples of the present disclosure. Based on the examples in the present disclosure, all other examples obtained by those of ordinary skill in the art without creative efforts belong to the scope of protection of the present disclosure.
[0071] Referring to
At 1.1: Multi-Model Modeling based on FCM-IPSO-LSSVM
[0072] A basic idea of multi-model predictive control is to divide a nonlinear space of a controlled object into several subspaces. Then, a local model is established in each subspace, and a corresponding predictive controller is designed for each local model. The dynamic characteristics of the controlled object are approximated by multiple models in real time. Finally, the optimal control effect can be obtained by switching to an optimal sub-controller or by weighted summation of outputs of a plurality of sub-controllers.
[0073] The present disclosure provides a multi-model predictive control strategy based on a weighted algorithm. The prior data is divided into m training sample sets (sample clusters) by using FCM. For each sample cluster, a corresponding prediction model is obtained by using LSSVM and IPSO. Then, for the m local prediction models (FCM-IPSO-LSSVM.sub.1-FCM-IPSO-LSSVM.sub.m), the corresponding predictive controllers (MPC.sub.1-MPC.sub.m) are designed. Finally, the deviation between the output of the object and the output of each sub-prediction model is calculated at each sampling time, and a predictive control strategy is established based on a multi-model relative-error weighted algorithm to optimize the control of the object. In the method, the transient performance of the system is improved by the m local models, so that the controlled variable can track a given value quickly. At the same time, the real-time predictive control strategy constructed by the relative-error weighted algorithm improves the adaptive ability of the model, so that the actual state of the nonlinear system is described more accurately.
At 1.1.1: FCM
[0074] In a given data set X={x.sub.1, x.sub.2, . . . , x.sub.n}, n is the number of samples. By FCM, the data set X is divided into C (2≤C≤n) classes.
[0075] An objective function of FCM is as follows:

J.sub.m(U,V)=Σ.sub.i=1.sup.CΣ.sub.j=1.sup.nu.sub.ij.sup.m∥x.sub.j−V.sub.i∥.sup.2 (1)

where u.sub.ij is the degree of membership of sample x.sub.j to the i.sup.th class, V.sub.i is the i.sup.th cluster center, and m is the fuzzy weighted parameter.

[0077] To minimize the objective function J.sub.m(U,V), the cluster center V.sub.i and the degree of membership matrix U are calculated by the following equations:

V.sub.i=(Σ.sub.j=1.sup.nu.sub.ij.sup.mx.sub.j)/(Σ.sub.j=1.sup.nu.sub.ij.sup.m) (2)

u.sub.ij=1/Σ.sub.k=1.sup.C(∥x.sub.j−V.sub.i∥/∥x.sub.j−V.sub.k∥).sup.2/(m−1) (3)
[0078] The specific processes of FCM are as follows.
[0079] At step 1: the number of clusters C, a fuzzy weighted parameter m and an iteration stop condition ε are inputted.
[0080] At step 2: a cluster center V.sub.i.sup.0(i=1,2, . . . , C) is initialized.
[0081] At step 3: u.sub.ij(i=1,2 . . . , C, j=1,2, . . . , n) is calculated by using the equation (3).
[0082] At step 4: V.sub.i.sup.l(i=1,2, . . . , C) is calculated by using the equation (2).
[0083] At step 5: the iteration is stopped if ∥V.sup.l−V.sup.0∥≤ε, and step 6 is skipped to; and step 3 is skipped to if not.
[0084] At step 6: a clustering result (V, U) is outputted.
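The six FCM steps above can be sketched as follows (a minimal Python/NumPy illustration; the random center initialization, the defaults m=2 and eps=1e-5, and the distance clamp are assumptions of this sketch, not specifics of the disclosure):

```python
import numpy as np

def fcm(X, C, m=2.0, eps=1e-5, max_iter=100, seed=0):
    """Fuzzy C-means: returns cluster centers V and membership matrix U."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    # Step 2: initialize cluster centers from C distinct random samples
    V = X[rng.choice(n, C, replace=False)].astype(float)
    for _ in range(max_iter):
        # Step 3: update memberships u_ij (Equation (3))
        d = np.linalg.norm(X[:, None, :] - V[None, :, :], axis=2)  # (n, C)
        d = np.maximum(d, 1e-12)  # avoid division by zero at a center
        U = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))).sum(axis=2)
        # Step 4: update centers V_i (Equation (2))
        Um = U ** m
        V_new = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # Step 5: stop when ||V^l - V^0|| <= eps
        if np.linalg.norm(V_new - V) <= eps:
            V = V_new
            break
        V = V_new
    # Step 6: output the clustering result (V, U)
    return V, U
```

Each row of U sums to 1, and each row of V is the fuzzy-weighted mean of the samples, matching Equations (2) and (3).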
At 1.1.2: LSSVM
[0085] The optimization problem of LSSVM is as follows:

min J(w,e)=(1/2)∥w∥.sup.2+(γ/2)Σ.sub.i=1.sup.Ne.sub.i.sup.2, s.t. y.sub.i=w.sup.Tφ(x.sub.i)+b+e.sub.i, i=1,2, . . . , N (4)

where w is a weight vector, γ is a regularization parameter, e.sub.i is a fitting error, and φ(·) is a nonlinear mapping. Solving the corresponding Lagrangian system yields the LSSVM regression function:

y(x)=Σ.sub.i=1.sup.Na.sub.iK(x, x.sub.i)+b (6) [0087] where a.sub.i∈R is a Lagrange multiplier, and through comparative analysis on multiple types of functions, an RBF function is selected as the kernel function of the LSSVM, with the kernel expression:

K(x, x.sub.i)=exp(−∥x−x.sub.i∥.sup.2/(2σ.sup.2)) (7)

where σ is the kernel width.
[0090] The prediction ability of LSSVM mainly depends on the regularization parameter and the kernel width, which affect the fitting accuracy and generalization ability of the model and directly determine the computation amount and execution efficiency of the model. At present, the commonly used selection methods are the grid search algorithm and the genetic algorithm; the former has a large computation amount and poor real-time performance, while the latter easily falls into a local minimum. Herein, IPSO is provided to optimize the parameters of the LSSVM model, and good parameter optimization quality is obtained.
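The LSSVM regression of Equation (6) can be sketched by solving the standard LSSVM Karush-Kuhn-Tucker linear system (a minimal illustration; the gamma and sigma defaults are placeholders for the two key parameters that IPSO later optimizes):

```python
import numpy as np

def rbf_kernel(A, B, sigma):
    """RBF kernel: K(x, x_i) = exp(-||x - x_i||^2 / (2 * sigma^2))."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvm_fit(X, y, gamma=100.0, sigma=0.5):
    """Solve the LSSVM linear system for multipliers a and bias b.

    System: [[0, 1^T], [1, K + I/gamma]] @ [b; a] = [0; y]
    """
    N = X.shape[0]
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((N + 1, N + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(N) / gamma
    rhs = np.concatenate([[0.0], y])
    sol = np.linalg.solve(A, rhs)
    return sol[1:], sol[0]  # (a, b)

def lssvm_predict(Xq, X, a, b, sigma=0.5):
    """Equation (6): y(x) = sum_i a_i K(x, x_i) + b."""
    return rbf_kernel(Xq, X, sigma) @ a + b
```

A larger gamma weights the fitting-error term more heavily (closer fit, less regularization), while sigma controls how far each training sample's influence extends.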
At 1.1.3: IPSO
[0091] The IPSO is used to track and adjust the position and speed of each particle in the swarm to achieve the optimization effect of a whole swarm. It is supposed that a dimension of a target search space is m, the number of particles in a particle swarm is G, a position of a particle i in an m-dimensional space is represented as a vector X.sub.i=(x.sub.i,1, x.sub.i,2, . . . , x.sub.i,m), i=1,2, . . . , G, and a flight speed is represented as a vector V.sub.i=(v.sub.i,1, v.sub.i,2, . . . , v.sub.i,m), i=1,2, . . . , G; after adjustment, a best position of the particle is represented as P.sub.i=(p.sub.i,1, p.sub.i,2, . . . , p.sub.i,m), and finally a best position of a whole swarm is represented as g.sub.best=(p.sub.g,1, p.sub.g,2, . . . , p.sub.g,m); and in the k.sup.th iteration of IPSO, a state parameter of each particle in the particle swarm is adjusted by Equation (9):

v.sub.i,d(k+1)=ωv.sub.i,d(k)+c.sub.1r.sub.1(p.sub.i,d−x.sub.i,d(k))+c.sub.2r.sub.2(p.sub.g,d−x.sub.i,d(k)), x.sub.i,d(k+1)=x.sub.i,d(k)+v.sub.i,d(k+1) (9)

where ω is an inertia weight, c.sub.1 and c.sub.2 are acceleration coefficients, and r.sub.1 and r.sub.2 are random numbers uniformly distributed in [0, 1].
[0093] The values of ω, c.sub.1, and c.sub.2 are typically obtained by examining historical data and are periodically adjusted and updated, so the parameters tend to lag behind changes in industrial processes. Therefore, herein, an adaptive adjustment strategy is adopted to dynamically adjust the inertia weight ω, and the fitness value J.sub.i of the current individual is compared with the average fitness value J.sub.avg of the whole swarm.
[0094] If J.sub.i is better than the average fitness J.sub.avg, the inertia weight ω of the corresponding individual is reduced by a smaller amount, which makes it easier for the particle to approach the optimal position; if J.sub.i is worse than J.sub.avg, the inertia weight of the corresponding single particle is made greater, thus widening its search range and moving it closer to a better search area. The adaptive adjustment strategy of the inertia weight ω is expressed as follows:
[0096] If only ω is decreased, it is difficult for the particle swarm optimization (PSO) algorithm to escape from local traps, and the algorithm easily converges to a local extreme point. In addition, the acceleration coefficients c.sub.1 and c.sub.2 also have an important influence on the global and local optimization ability of the PSO algorithm, and different scholars hold different views on their values. The present disclosure adopts a variable method to determine the optimal number of iterations N and the acceleration coefficients c.sub.1 and c.sub.2.
[0097] To explore the most appropriate value of N, i.e., the best number of iterations, combined with the Pichia pastoris fermentation process, the experimental acceleration coefficients are fixed at c.sub.1=c.sub.2=2, and the N values are set to 100, 200, 500 and 1000. The simulation results of
[0098] To explore the most appropriate values of the acceleration coefficients c.sub.1 and c.sub.2, the value of N is set as 200. Combined with the Pichia pastoris fermentation process, c.sub.1 and c.sub.2 are divided into three groups of different values to simulate: (1) c.sub.1=1.5, c.sub.2=1.7; (2) c.sub.1=c.sub.2=2; and (3) c.sub.1=1.7, c.sub.2=1.5. The simulation results of c.sub.1 and c.sub.2 are shown in
[0099] In view of the above analysis, the study adopts an adaptive adjustment strategy to dynamically adjust ω, which not only effectively improves the global search ability and the convergence speed in the initial stage of the algorithm, but also ensures the local search performance in the later stage and improves the convergence accuracy of the algorithm. At the same time, the optimal iteration number N and the acceleration coefficients c.sub.1 and c.sub.2 are determined by the variable method, which significantly improves the optimization of the key parameters of LSSVM.
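The IPSO procedure can be sketched as follows (a minimal Python illustration; since the disclosure's exact adaptive formula for ω is given by its Equation (10), the fitness-proportional variant below, along with the assumed bounds w_max and w_min, is a stand-in, not the disclosed rule):

```python
import numpy as np

def ipso(f, bounds, G=30, N=200, c1=1.7, c2=1.5,
         w_max=0.9, w_min=0.4, seed=0):
    """PSO with a fitness-adaptive inertia weight (minimization).

    Particles with better-than-average fitness get a smaller inertia
    weight (finer local search near the optimum); worse particles keep
    a larger one (wider exploration).
    """
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    m = lo.size
    X = rng.uniform(lo, hi, (G, m))          # positions X_i
    V = np.zeros((G, m))                     # velocities V_i
    J = np.array([f(x) for x in X])
    P, Pf = X.copy(), J.copy()               # personal bests P_i
    g = P[Pf.argmin()].copy()                # swarm best g_best
    for _ in range(N):
        J_avg, J_min = J.mean(), J.min()
        # adaptive inertia weight: smaller for better-than-average particles
        w = np.where(J <= J_avg,
                     w_min + (w_max - w_min) * (J - J_min) / (J_avg - J_min + 1e-12),
                     w_max)
        r1, r2 = rng.random((G, m)), rng.random((G, m))
        # Equation (9): velocity and position update
        V = w[:, None] * V + c1 * r1 * (P - X) + c2 * r2 * (g - X)
        X = np.clip(X + V, lo, hi)
        J = np.array([f(x) for x in X])
        better = J < Pf
        P[better], Pf[better] = X[better], J[better]
        g = P[Pf.argmin()].copy()
    return g, Pf.min()
```

In the full method, f would evaluate an LSSVM model's validation error so that the particle positions encode the regularization parameter and kernel width; here any objective works.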
At 1.1.4: Multi-Model Modeling Algorithm
[0100] The flow chart of the multi-model modeling algorithm based on FCM-IPSO-LSSVM is shown in
[0101] At step 1: prior sample data is collected, the number of cluster centers m is given, and the prior sample data is preprocessed according to Equation (11) to reduce the adverse influence of the data range being too large or too small on the training process.
x*=(x−x.sub.min)/(x.sub.max−x.sub.min) (11) [0102] where x is the original prior sample data, and x.sub.max and x.sub.min are the upper and lower bounds of the data.
[0103] At step 2: a degree of membership matrix is calculated according to Equation (3).
[0104] At step 3: an objective function J.sub.m is calculated; if J.sub.m<R, where R is a threshold value, the calculation is stopped, a final cluster center C and a fuzzy degree of membership matrix U are obtained, and step 5 is proceeded to; and otherwise, step 4 is proceeded to.
[0105] At step 4: the cluster center Vi is recalculated according to Equation (2).
[0106] At step 5: each sample is assigned to its category according to the k-nearest neighbor discriminant method, and the training set of LSSVM is selected so as to eliminate abnormal prior samples.
[0107] At step 6: each class of training samples is inputted into LSSVM for training, an optimal key parameter of LSSVM is found by IPSO, and an optimal sub-prediction model is established.
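The preprocessing of step 1 (Equation (11)) can be sketched as a column-wise min-max normalization (assuming each column's range is nonzero):

```python
import numpy as np

def minmax_normalize(x):
    """Equation (11): x* = (x - x_min) / (x_max - x_min), mapping each
    column of the prior sample data to [0, 1]."""
    x = np.asarray(x, dtype=float)
    x_min = x.min(axis=0)
    x_max = x.max(axis=0)
    return (x - x_min) / (x_max - x_min)
```

This keeps variables with very different magnitudes (e.g. concentrations versus flow rates) from dominating the training of the sub-prediction models.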
[0108] The core of the multi-model predictive control algorithm is to design a predictive controller in advance for each local model. Then, the optimal sub-prediction model can be controlled by switching an index switch to the corresponding controller. In the design of the multi-model weighted controller, the nonlinear space of the controlled object is divided into several subspaces, a local model is established in each subspace, and a corresponding predictive controller is designed for each local model. Then, the output of each sub-controller is weighted according to the relative error to obtain the actual control output.
At 1.2.1: System Structure
[0109] Taking three fixed system sub-prediction models as examples, the multi-model predictive control structure is shown in
[0110] In
At 1.2.2: Weighted Algorithm based on Relative Error
[0111] Herein, based on clustering modeling, a recursive method for weight factor convergence is provided by utilizing the relative error between the output y.sub.i(k) of each sub-prediction model and the output y(k) of the controlled system (k being a sampling time). A block diagram of the algorithm is shown in
[0112] The basic steps of the algorithm are as follows.
[0113] At step 1: system state data including a current system input, a last input and a last output is acquired.
[0114] At step 2: MSE R.sub.i(k) of the i.sup.th sub-prediction model and object is defined as:
[0116] At step 3: a weight of an i.sup.th predictive controller is obtained by the following equations:
[0118] At step 4: the control input of the object can be expressed as:
u(k)=Σ.sub.i=1.sup.mw.sub.i(k)u.sub.i(k) (16) [0119] where u is a control variable; and u.sub.i is an output value of an i.sup.th sub-predictive controller.
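Steps 1-4 of the fusion law culminating in Equation (16) can be sketched as follows (a minimal illustration; the exponentially forgetting form of R.sub.i(k) and the inverse-error weight normalization used below are assumptions standing in for the exact definitions of the disclosure's Equations (12)-(15)):

```python
import numpy as np

def fusion_control(y, y_subs, u_subs, rho=0.9, eps=1e-9):
    """Relative-error weighted fusion of m sub-controllers.

    y:      measured plant outputs up to time k, shape (k+1,)
    y_subs: sub-model predicted outputs, shape (m, k+1)
    u_subs: sub-controller outputs at time k, shape (m,)
    rho:    forgetting factor (assumed), weighting recent errors highest
    """
    m, T = y_subs.shape
    # Step 2: R_i(k) as an exponentially weighted squared prediction error
    err2 = (y_subs - y[None, :]) ** 2                 # (m, T)
    forget = rho ** np.arange(T - 1, -1, -1)          # newest sample -> weight 1
    R = (err2 * forget[None, :]).sum(axis=1) + eps    # (m,)
    # Step 3: weights inversely proportional to R_i, normalized to sum to 1
    w = (1.0 / R) / (1.0 / R).sum()
    # Step 4 / Equation (16): u(k) = sum_i w_i(k) u_i(k)
    u = float(w @ u_subs)
    return u, w
```

A sub-model that tracks the plant closely accumulates a small R.sub.i(k) and thus dominates the fused control input, which is the soft-switching behavior described above.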
[0120] The present disclosure provides a weighted algorithm based on a relative error (soft switching) to design the multi-model fusion controller, which improves the adaptive ability of the model so that the actual state of the nonlinear system is described more accurately.
[0121] The method improves the transient performance of the system through multiple local models, so that the controlled variable can track the given value quickly. At the same time, the real-time predictive control strategy constructed by the relative-error weighted algorithm further improves the model's adaptability to the nonlinear system.
[0122] What has been disclosed above are only a few specific examples of the present disclosure, and various modifications and variations can be made to these examples by those skilled in the art without departing from the spirit and scope of the present disclosure. The examples of the present disclosure are not limited thereto, and any variation that can be conceived by those skilled in the art is to fall within the protection scope of the present disclosure.