Data-Difference-Driven Self-Learning Dynamic Optimization Method For Batch Process

20180259921 · 2018-09-13

    Abstract

    The present invention discloses a data difference-driven self-learning dynamic batch process optimization method including the following steps: collect production process data off line; eliminate singular batches through PCA; construct time interval and index variable matrices and carry out PLS to generate an initial optimization strategy; collect data of new batches; run a recursive algorithm; and update the optimization strategy. The present invention uses a perturbation method to establish an initial optimization strategy for the optimized variable setting curve. On this basis, self-learning iterative updating of mean values and standard deviations is carried out on the basis of differences in the data statistics, so that the optimized indexes are continuously improved, thereby providing a new batch process optimization strategy for solving actual industrial problems. The present invention is based entirely on operational data of the production process and requires no a priori knowledge of the process mechanism and no mechanism model. The present invention is applicable to the dynamic optimization of operation trajectories of batch reactors, batch rectifying towers, batch drying, batch fermentation, batch crystallization and other processes and systems adopting batch operation.

    Claims

    1. A data difference-driven self-learning dynamic batch process optimization method, comprising the following steps: (1) for a batch process complete in operation, collecting a to-be-optimized variable and final quality or yield indexes according to batches; (2) for the data collected in step (1), carrying out principal component analysis with the batches as variables and eliminating singular points in a principal component pattern graph, so that all the data points are within a degree of credibility; (3) after the singular points are eliminated, partitioning the operational data into N time intervals on a time axis; expressing the data of each batch included in each time interval as a continuous variable C_i (i=1, 2, . . . N), wherein these variables are time interval variables; wherein the value of each time interval variable is composed of the data of each batch of the to-be-optimized variable within a specific time interval; wherein a data matrix composed of multiple batches of time interval variables is a time interval variable matrix, denoted by L_{p×N}, wherein p is the number of batches; (4) defining the quality or yield index of each batch corresponding to step (3) as an index variable Y_{p×1}, the values of which are continuous variables generated by the final qualities or yields of the p batches; (5) according to the time interval variable matrix L_{p×N} and the index variable Y_{p×1} generated in step (3) and step (4), respectively calculating a covariance matrix S_LL and a pooled covariance matrix S_LY; (6) carrying out principal component analysis on the covariance matrix S_LL and the pooled covariance matrix S_LY to obtain a PLS coefficient vector F_i (i=1, 2, . . . N); (7) classifying the PLS coefficient elements of step (6) according to sign and magnitude, with the sign function defined as follows:
    sign(i) = +1 if F_i ≥ e; 0 if |F_i| < e; −1 if F_i ≤ −e, i = 1, 2, . . . N
    wherein e is a threshold limit for noise, and sign(i) is the PLS coefficient sign corresponding to the i-th time interval; (8) calculating a mean value and a standard deviation of every time interval variable, and establishing an initial optimization strategy for each time interval variable of the collected batch data according to the following perturbation magnitude calculation formula:
    J_i = M_i + sign(i)·3σ_i
    wherein J_i, M_i and σ_i are the optimized target value, mean value and standard deviation of the i-th time interval variable; (9) combining the optimized target values of all the time intervals obtained in step (8) into an initial optimized variable curve for the whole batch process according to the time interval sequence i=1, 2, . . . N; (10) collecting time interval variables C_i(k+1) (i=1, 2, . . . N) and index variable data Y(k+1) of new batches, and utilizing the following recursive formulas to update the covariance matrix S_LL(k+1) and the pooled covariance matrix S_LY(k+1):
    S_LL(k+1) = λS_LL(k) + C(k+1)^T C(k+1)
    S_LY(k+1) = λS_LY(k) + C(k+1)^T Y(k+1)
    wherein C(k+1) = [C_1(k+1), C_2(k+1) . . . C_N(k+1)], and 0 < λ ≤ 1 is a forgetting factor for the existing covariance matrices; when λ is equal to 1, no data are eliminated from the old covariance matrices; (11) carrying out principal component analysis on the covariance matrix S_LL(k+1) and the pooled covariance matrix S_LY(k+1) to obtain a PLS coefficient vector F_i(k+1); (12) utilizing the following recursive formulas to calculate a mean value and a standard deviation of every time interval variable under the data of the new batches:
    M_i(k+1) = M_i(k) + [C_i(k+1) − M_i(k)]/(k+1)
    σ_i²(k+1) = σ_i²(k) + [C_i(k+1) − M_i(k)][C_i(k+1) − M_i(k+1)];
    (13) according to the perturbation magnitude calculation formula of step (8), calculating an optimized target value of each time interval variable under the data of the new batches; (14) combining the optimized target values of all the time intervals obtained in step (13) into a new optimized variable curve according to the time interval sequence i=1, 2, . . . N; (15) judging whether there are new data to be updated; if there are new data, going to step (10) to continue the self-learning updating operation; otherwise, ending the learning process.

    2. The data difference-driven self-learning dynamic batch process optimization method according to claim 1, characterized in that the time intervals of batch process data collection in step (1) are equal or unequal.

    3. The data difference-driven self-learning dynamic batch process optimization method according to claim 1, characterized in that interval partitioning in step (3) is equal-interval partitioning or unequal-interval partitioning.

    4. The data difference-driven self-learning dynamic batch process optimization method according to claim 1, characterized in that digital filtering is carried out on the optimized variable curve established in step (14), so that a new optimized curve is smooth and easy to track and control.

    Description

    BRIEF DESCRIPTION OF THE DRAWINGS

    [0032] FIG. 1 is a temperature operation curve graph of a batch process.

    [0033] FIG. 2 is a principal component pattern graph with batch process temperature as an optimized variable.

    [0034] FIG. 3 is an exemplary graph of time interval variables.

    [0035] FIG. 4 is an exemplary PLS coefficient graph of the time interval variables.

    [0036] FIG. 5 is a calculation graph of an optimized target variable curve.

    [0037] FIG. 6 is a block diagram of a recursive calculation process of the present invention.

    [0038] FIG. 7 is a comparison graph of an optimized temperature curve and an original temperature curve.

    [0039] FIG. 8 is an optimized curve graph before and after filtering.

    [0039] FIG. 9 is a result comparison graph of the self-learning optimization strategy, the initial optimization strategy and the original operation.

    DETAILED DESCRIPTION OF THE INVENTION

    [0041] The present embodiment takes a batch crystallization process as an example; the example does not limit the scope of the present invention.

    [0042] The method is divided into three parts. The first part is the collection and preprocessing of data. The second part is the calculation of the initial optimization strategy. The third part is the recursive calculation of updated optimization strategies as new batch data are obtained.

    [0043] The block diagram of the implementation steps of the method is shown in FIG. 6, and the specific implementation steps and algorithms are as follows:

    [0044] Step 1: For the batch crystallization process complete in operation, choose the temperature operation curve, which is closely related to product yield, as the to-be-optimized variable, and collect 35 sets of temperature variables and final yield index data according to batches. The time interval of data collection is 1 minute. FIG. 1 is a partial temperature curve example of a batch crystallization process.

    [0045] Step 2: For the collected temperature data of all the batches, carry out principal component analysis on the temperature variables according to batches and eliminate singular points in the principal component pattern graph, so that all the data points are within a degree of credibility. FIG. 2 is a principal component pattern graph with batch process temperature as the optimized variable. It can be seen from the drawing that the pattern point of one batch at the lower right corner is far from all the other points, so the data of this batch should be eliminated.
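The screening of Step 2 can be sketched as follows. This is a minimal reading of "within a degree of credibility": batches whose principal component scores exceed a ±3-standard-deviation bound are flagged; the bound, the component count, and the helper name are illustrative assumptions, not the patent's prescription (a Hotelling T² confidence ellipse would be an equally valid choice).

```python
import numpy as np

def eliminate_singular_batches(X, n_components=2, limit=3.0):
    """Flag batches whose principal component scores fall outside a
    +/- limit*std credibility bound. X: (batches x samples), one row
    per batch."""
    Xc = X - X.mean(axis=0)                            # mean-center across batches
    # PCA of the batch-wise data via SVD of the centered matrix
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = U[:, :n_components] * s[:n_components]    # batch scores (pattern graph)
    std = scores.std(axis=0, ddof=1)
    ok = np.all(np.abs(scores) <= limit * std, axis=1)
    return X[ok], ok                                   # retained batches, keep-mask
```

Applied to 35 batches with one grossly shifted temperature curve, the mask flags only the singular batch and the remaining 34 are passed on to the interval partitioning.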

    [0046] Step 3: Divide the temperature data of the remaining 34 batches into 221 time intervals at equal intervals on a time axis, generating time interval variables C_1, C_2 . . . C_221. For clarity, FIG. 3 gives the time interval variables from C_40 to C_70.
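The equal-interval partitioning of Step 3 can be sketched as below. Summarizing each interval by its mean is an assumption made here for concreteness; the patent only requires that each interval's batch data form one time interval variable.

```python
import numpy as np

def interval_variables(batch_curves, n_intervals=221):
    """Split each batch's operation curve into n_intervals equal time
    intervals and take the interval mean as the value of time interval
    variable C_i (one column per interval, one row per batch)."""
    P, T = batch_curves.shape
    edges = np.linspace(0, T, n_intervals + 1).astype(int)   # interval boundaries
    cols = [batch_curves[:, a:b].mean(axis=1)
            for a, b in zip(edges[:-1], edges[1:])]
    return np.column_stack(cols)    # L: P x N time interval variable matrix
```

For the embodiment, `interval_variables` applied to a 34-batch temperature array yields the 34×221 matrix used in the following steps.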

    [0047] Step 4: Use the time interval variables of the multiple batches to construct a time interval variable matrix L_{34×221} and an index variable Y_{34×1}.

    [0048] Step 5: According to the time interval variable matrix L and the index variable Y generated in step 4, respectively calculate a covariance matrix S_LL and a pooled covariance matrix S_LY.

    [0049] Step 6: Carry out principal component analysis on the covariance matrix S_LL and the pooled covariance matrix S_LY to obtain a PLS coefficient vector F of 221 elements. FIG. 4 is the PLS coefficient graph of the 221 time interval variables.
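Steps 5 and 6 can be sketched with a kernel-PLS computation that works directly on S_LL = LᵀL and S_LY = LᵀY (in the style of the Dayal–MacGregor kernel algorithm). This is one concrete reading: the patent's "principal component analysis on the covariance matrices" may be implemented differently, and the data are assumed mean-centered beforehand.

```python
import numpy as np

def pls1_coefficients(S_LL, S_LY, n_components=1):
    """PLS1 regression coefficient vector computed only from the
    covariance matrix S_LL = L^T L and pooled covariance S_LY = L^T Y
    (kernel PLS; no score vectors are ever formed)."""
    S_LY = np.asarray(S_LY, float).copy()
    B = np.zeros(S_LL.shape[0])
    R, P = [], []        # weights w.r.t. original variables, loadings
    for _ in range(n_components):
        w = S_LY / np.linalg.norm(S_LY)     # weight from covariance with Y
        r = w.copy()
        for p_j, r_j in zip(P, R):
            r -= (p_j @ w) * r_j            # re-express w in original variables
        tt = float(r @ S_LL @ r)            # t't without forming scores
        p = (S_LL @ r) / tt
        q = float(r @ S_LY) / tt
        S_LY = S_LY - p * (q * tt)          # deflate the cross-covariance
        B += q * r                          # accumulate coefficient vector F
        R.append(r); P.append(p)
    return B
```

With as many components as variables on full-rank centered data, the coefficient vector coincides with the least-squares solution, which gives a convenient correctness check.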

    [0050] Step 7: For the PLS coefficient elements of step 6, define the following sign function:

    [00002] sign(i) = +1 if F_i ≥ e; 0 if |F_i| < e; −1 if F_i ≤ −e, i = 1, 2, . . . N

    [0051] wherein e is a threshold limit for noise, and e here is equal to 0.01.
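The dead-band sign function of Step 7 is a one-liner; the function name is illustrative:

```python
def sign_with_deadband(F, e=0.01):
    """PLS coefficient sign with a noise dead band e (step 7):
    +1 if F_i >= e, -1 if F_i <= -e, otherwise 0."""
    return [1 if f >= e else (-1 if f <= -e else 0) for f in F]
```

With e = 0.01 as in the embodiment, the coefficient 0.000149 of interval 40 falls inside the dead band (sign 0), while 0.046091 of interval 45 exceeds it (sign +1).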

    [0052] Step 8: Calculate a mean value and a standard deviation of every time interval variable, and calculate an optimized target value of each time interval variable of the collected batch data according to the following perturbation magnitude calculation formula:


    J_i = M_i + sign(i)·3σ_i

    [0053] wherein J_i, M_i and σ_i are the optimized target value, mean value and standard deviation of the i-th time interval variable.

    [0054] Step 9: Combine the optimized target values of all the time intervals obtained in step 8 into an optimized variable curve according to the time interval sequence i=1, 2, . . . 221. FIG. 5 is an example of establishing the optimized target variable curve. As shown in FIG. 5, when i is equal to 40, the mean value M_40 is equal to 159.687, the standard deviation σ_40 is equal to 0.577, F_40 is equal to 0.000149, sign_40 is equal to 0, and the obtained optimization strategy is J_40=159.687; when i is equal to 45, the mean value M_45 is equal to 170.26, the standard deviation σ_45 is equal to 0.416, F_45 is equal to 0.046091, sign_45 is equal to 1, and the obtained optimization strategy is J_45=169.020.
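Steps 8 and 9 reduce to an element-wise evaluation of the claimed perturbation formula J_i = M_i + sign(i)·3σ_i over all intervals; the sketch below uses hypothetical numbers rather than the embodiment's values:

```python
def optimized_targets(M, sigma, signs):
    """Perturbation-magnitude targets of step 8: J_i = M_i + sign(i)*3*sigma_i.
    Intervals whose PLS coefficient lies inside the noise dead band
    (sign 0) keep their historical mean value."""
    return [m + s * 3.0 * sd for m, sd, s in zip(M, sigma, signs)]
```

Concatenating the returned targets in interval order i = 1, 2, . . . N gives the optimized variable curve of Step 9.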

    [0055] The optimization strategies established for the data of the above-mentioned batches are adopted as initial values for the recursive learning of data of new batches.

    [0056] The establishment and calculation of the recursive self-learning algorithm based on data of new batches proceed as follows:

    [0057] Step 10: Collect the time interval variables C_i(k+1) (i=1, 2, . . . N) and the index variable data Y(k+1) of a new batch, where k here is equal to 35.

    [0058] Utilize the following recursive self-learning formulas to update the covariance matrix S.sub.LL(k+1) and the pooled covariance matrix S.sub.LY(k+1):


    S_LL(k+1) = λS_LL(k) + C(k+1)^T C(k+1)

    S_LY(k+1) = λS_LY(k) + C(k+1)^T Y(k+1)

    wherein k=35, 36, . . . , C(k+1) = [C_1(k+1), C_2(k+1) . . . C_N(k+1)], and 0 < λ ≤ 1 is a forgetting factor for the existing covariance matrix. When λ is equal to 1, no data are eliminated from the old covariance matrix. In the present embodiment, in order to keep the information of all the batches, λ is equal to 1.
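The recursive covariance update of Step 10 can be sketched as one rank-one fold-in per new batch; the function name and the scalar-index assumption (one yield value per batch) are illustrative:

```python
import numpy as np

def update_covariances(S_LL, S_LY, C_new, y_new, lam=1.0):
    """Recursive update of step 10: fold one new batch into the
    covariance matrices with forgetting factor lam (lam=1 keeps all
    old batches; lam<1 discounts them).
    C_new: length-N row of interval variables; y_new: scalar index."""
    C = np.asarray(C_new, float).reshape(1, -1)
    S_LL_next = lam * S_LL + C.T @ C          # rank-one update of L^T L
    S_LY_next = lam * S_LY + C.ravel() * float(y_new)
    return S_LL_next, S_LY_next
```

With λ = 1, iterating the update from zero matrices over k batches reproduces the batch products LᵀL and LᵀY exactly, which is the sense in which "no data are eliminated".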

    [0059] Step 11: Carry out principal component analysis on the covariance matrix S_LL(k+1) and the pooled covariance matrix S_LY(k+1) to obtain a PLS coefficient vector F_i(k+1).

    [0060] Step 12: Utilize the following recursive self-learning formulas to calculate a mean value and a standard deviation of every time interval variable under the data of the new batches:


    M_i(k+1) = M_i(k) + [C_i(k+1) − M_i(k)]/(k+1)

    σ_i²(k+1) = σ_i²(k) + [C_i(k+1) − M_i(k)][C_i(k+1) − M_i(k+1)]

    [0061] In the present embodiment, when i is equal to 40, the mean value M_40(35) of the data of the original 35 batches is equal to 159.687 and the standard deviation σ_40(35) is equal to 0.577; after the data of the new batch are incorporated, the mean value M_40(36) is equal to 159.772 and the standard deviation σ_40(36) is equal to 0.585. When i is equal to 45, the mean value M_45(35) of the data of the original 35 batches is equal to 170.26 and the standard deviation σ_45(35) is equal to 0.416; after the data of the new batch are incorporated, the mean value M_45(36) is equal to 170.336 and the standard deviation σ_45(36) is equal to 0.510.
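The Step 12 recursion is a Welford-style running update. The sketch below keeps the mean and the accumulated squared deviation S separately and recovers the standard deviation as sqrt(S/k); treating the second recursion as acting on the squared-deviation sum rather than on σ itself is this sketch's reading of the formula.

```python
def update_mean_std(M, S, x, k):
    """Welford-style recursion behind step 12: fold the (k+1)-th batch
    value x into the running mean M and the accumulated squared
    deviation S. The sample standard deviation is then sqrt(S / k)."""
    M_next = M + (x - M) / (k + 1)            # mean recursion of step 12
    S_next = S + (x - M) * (x - M_next)       # squared-deviation recursion
    return M_next, S_next
```

Feeding each new batch value through this pair of updates reproduces, in one pass, the same mean and deviation sum that a full recomputation over all batches would give.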

    [0062] Step 13: According to the mean values and standard deviations of the updated time interval variables obtained in the above step, obtain an optimized target value of each time interval variable from the perturbation magnitude calculation formula of step 8. When i is equal to 40, the mean value M_40 is equal to 159.772, the standard deviation σ_40 is equal to 0.585, F_40 is equal to 0.008856, and the obtained optimization strategy is J_40=159.772; when i is equal to 45, the mean value M_45 is equal to 170.336, the standard deviation σ_45 is equal to 0.510, F_45 is equal to 0.04199, and the obtained optimization strategy is J_45=168.832.

    [0063] Step 14: Combine the optimized target values of all the time intervals obtained in step 13 into an updated optimized variable curve according to the time interval sequence i=1, 2, . . . 221.

    [0064] Judge whether there are new data to be updated. If there are, go to step 10 to carry out the self-learning recursive operation; otherwise, end the calculation process. In the present embodiment, there are new data of 15 batches; after the recursive algorithm has been run 15 times, that is, when k is equal to 49, the recursive calculation ends.

    [0065] Normally, digital filtering needs to be carried out on the optimized variable curve established in the above step 14, so that the new optimized curve is smooth and easy to track and control. FIG. 7 is a comparison between the optimized curve and the original curve. FIG. 8 is an example of the optimized curve before and after moving average filtering.
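A moving-average filter, as used for FIG. 8, can be sketched as below; the window length is an illustrative choice, and the shrinking-window treatment of the curve's endpoints is an assumption of this sketch:

```python
import numpy as np

def moving_average(curve, window=5):
    """Centered moving-average filter for smoothing the optimized
    variable curve; edge positions average over a shrinking window."""
    curve = np.asarray(curve, float)
    kernel = np.ones(window)
    smoothed = np.convolve(curve, kernel, mode="same")
    counts = np.convolve(np.ones_like(curve), kernel, mode="same")
    return smoothed / counts        # normalize so edge windows stay unbiased
```

A constant curve passes through unchanged, while isolated jumps in the optimized targets are spread over the window, which is what makes the filtered curve easier to track and control.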

    [0066] In order to illustrate the effectiveness of the method of the present invention, FIG. 9 gives an optimization result example for a batch crystallization process. The production data of the 35 batches serve as the driving data for initial optimization, and the data of the 15 new batches serve as the recursively updated data. The yield is increased from 90.80% before optimization to 92.81% after the application of the initial optimization strategy, and finally, through the self-learning optimization strategy, the yield is increased to 95.51%.