Method for effluent total nitrogen-based on a recurrent self-organizing RBF neural network
10570024 · 2020-02-25
Assignee
Inventors
CPC classification
C02F1/008
CHEMISTRY; METALLURGY
C02F2209/08
CHEMISTRY; METALLURGY
G06N3/082
PHYSICS
C02F2209/003
CHEMISTRY; METALLURGY
International classification
C02F3/00
CHEMISTRY; METALLURGY
Abstract
In this present disclosure, a computer-implemented method is designed for predicting the effluent total nitrogen (TN) concentration in an urban wastewater treatment process (WWTP). The technology of this present disclosure is part of advanced manufacturing technology and belongs to both the fields of control engineering and environmental engineering. To improve the prediction efficiency, a recurrent self-organizing radial basis function (RBF) neural network (RSORBFNN), which can adjust its structure and parameters simultaneously, is developed to implement this method, and the proposed RSORBFNN-based method can then predict the effluent TN concentration with acceptable accuracy. Moreover, online information on the effluent TN concentration may be predicted by this computer-implemented method to enhance the quality monitoring level, alleviate the current wastewater situation, and strengthen the management of WWTPs.
Claims
1. A method of detecting the effluent total nitrogen (TN) concentration based on a recurrent self-organizing radial basis function (RBF) neural network (RSORBFNN), the method comprising:

(1) determining input and output variables of the effluent TN concentration with respect to a sewage treatment process of an activated sludge system by analyzing the variables of the sewage treatment process and selecting the input variables of the effluent TN concentration computing model that include: ammonia nitrogen (NH4-N), nitrate nitrogen (NO3-N), effluent suspended solids (SS), biochemical oxygen demand (BOD), and total phosphorus (TP), an output value of the computing model being the detected effluent TN concentration;

(2) initializing the RSORBFNN, of which an initial structure comprises three layers: an input layer, a hidden layer, and an output layer; there are 5 neurons in the input layer, J neurons in the hidden layer, and 1 neuron in the output layer, J>2 being a positive integer; connection weights between the input layer and the hidden layer are assigned 1, and feedback weights between the hidden layer and the output layer are randomly assigned values, an assignment interval being −1 to 1; the number of training samples is P, and an input vector of the RSORBFNN is x(t)=[x_1(t), x_2(t), x_3(t), x_4(t), x_5(t)] at time t; y(t) is an output of the RSORBFNN, and y_d(t) is a real value of the effluent TN concentration at time t; the output of the RSORBFNN is described using the equation (1):

y(t) = Σ_{j=1}^{J} w²_j(t)θ_j(t),  (1)

wherein w²_j(t) is the output weight between the jth hidden neuron and the output neuron, w²(t)=[w²_1(t), w²_2(t), . . . , w²_J(t)]ᵀ is the output weight vector, J is the number of hidden neurons, and θ_j(t) is the output value of the jth hidden neuron, defined by a Gaussian function:

θ_j(t) = exp(−‖h_j(t) − c_j(t)‖² / σ_j²(t)),  (2)

wherein ‖h_j(t) − c_j(t)‖ is the Euclidean distance between h_j(t) and c_j(t), c_j(t) and σ_j(t) are the center vector and width of the jth hidden neuron, respectively, and h_j(t) is the input vector of the jth hidden neuron:

h_j(t)=[u_1(t), u_2(t), u_3(t), u_4(t), u_5(t), w¹_j(t)y(t−1)],  (3)

wherein y(t−1) is the output of the RSORBFNN at time (t−1), w¹_j(t) is the feedback weight connecting the jth hidden neuron with the output neuron, w¹(t)=[w¹_1(t), w¹_2(t), . . . , w¹_J(t)]ᵀ is the feedback weight vector between the output neuron and the hidden layer neurons, and ᵀ denotes the transpose; the output of the output layer is:

y(t) = w²(t)ᵀθ(t),  (4)

wherein θ(t)=[θ_1(t), θ_2(t), . . . , θ_J(t)]ᵀ is the output vector of the hidden layer at time t; the training error function E(t) of the RSORBFNN is defined by the equation (5), wherein P is the number of the training samples;

(3) training the RSORBFNN: 1) giving the RSORBFNN an initial number J of hidden layer neurons, J>2 being a positive integer; the input of the RSORBFNN is x(1), x(2), . . . , x(P), and the desired output is y_d(1), y_d(2), . . . , y_d(P); the desired error value is set to E_d, E_d∈(0, 0.01); the initial centers c_j(1)∈(−2, 2), the initial widths σ_j(1)∈(0, 1), the initial feedback weights w¹_j(1)∈(0, 1), and the initial weights w²_j(1)∈(0, 1), j=1, 2, . . . , J; 2) setting a learning step s=1; 3) letting t=s, calculating the output y(t) of the RSORBFNN, and updating the weights, widths, and centers of the RSORBFNN using the rule:

Θ(t+1) = Θ(t) + (Q(t) + λ(t)I)⁻¹g(t),  (6)

where Θ(t)=[w¹(t), w²(t), C(t), σ(t)] is the variable vector at time t, Q(t) is the quasi-Hessian matrix at time t, g(t) is the gradient vector at time t, I is the identity matrix, and λ(t) is the adaptive learning rate defined by the equations (7) and (8); the initial variable vector is:

Θ(1) = [w¹(1), w²(1), C(1), σ(1)],  (9)

the quasi-Hessian matrix Q(t) and the gradient vector g(t) are accumulated as the sums of related submatrices and vectors:

Q(t) = jᵀ(t)j(t),  (10)

g(t) = jᵀ(t)e(t),  (11)

e(t) = y_d(t) − y(t),  (12)

wherein e(t) is the approximating error at time t, y_d(t) is the desired output and y(t) is the network output at time t, and the Jacobian vector j(t) is calculated by the equation (13); 4) if t>3, calculating the competitiveness of the jth hidden neuron:

cp_j(t) = f_j(t)η_j(t), j=1, 2, . . . , J,  (14)

wherein cp_j(t) is the competitiveness of the jth hidden neuron, η_j(t) denotes the correlation coefficient between the hidden layer output and the network output, η_j(t)∈(0, 1), f_j(t) is the active state of the jth hidden neuron, and σ_j(t) is the width of the jth hidden neuron; the active state f_j(t) is defined by the equation (15); 5) adjusting the structure of the RSORBFNN: if the growing condition of the equations (17) and (18) is satisfied, adding one hidden neuron, the number of hidden neurons being M_1=J+1, otherwise not adjusting the structure, M_1=J; when the competitiveness of the jth hidden neuron satisfies:

cp_j(t) < ρ,  (19)

wherein ρ is the preset pruning threshold, ρ∈(0, E_d), and E_d is the preset error, E_d∈(0, 0.001], the jth hidden neuron is pruned, and the number of hidden neurons is updated to M_2=M_1−1, otherwise the structure of the RSORBFNN is not adjusted, M_2=M_1; 6) increasing the learning step s by 1; if s<P, going to step 3); if s=P, going to step 7); and 7) according to the equation (5), calculating the performance of the RSORBFNN; if E(t)≥E_d, performing step 3); if E(t)<E_d, terminating the training process; and

(4) predicting the effluent TN concentration using the testing samples as the input of the RSORBFNN to obtain the output of the RSORBFNN as computing values of the effluent TN concentration.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION
(6) A computer-implemented method is developed in this present disclosure to predict the effluent TN concentration based on an RSORBFNN. For this method, the inputs are variables that are easy to measure, and the outputs are estimates of the effluent TN concentration. In general, the procedure of the method consists of three parts: data acquisition, data pre-processing, and model design. For this present disclosure, an experimental hardware setup is used as shown in the accompanying drawings.
(7) This present disclosure adopts the following technical scheme and implementation steps:
(8) A computer-implemented method for the effluent TN concentration based on an RSORBFNN comprises the following steps:
(9) (1) Determine the input and output variables of effluent TN concentration:
(10) For the sewage treatment process of an activated sludge system, the variables of the sewage treatment process are analyzed, and the input variables of the effluent TN concentration soft-computing model are selected as: ammonia nitrogen (NH4-N), nitrate nitrogen (NO3-N), effluent suspended solids (SS), biochemical oxygen demand (BOD), and total phosphorus (TP). The output value of the soft-computing model is the detected effluent TN concentration.
(11) (2) Initialize RSORBFNN
(12) The initial structure of the RSORBFNN consists of three layers: an input layer, a hidden layer, and an output layer. There are 5 neurons in the input layer, J neurons in the hidden layer, and 1 neuron in the output layer; J>2 is a positive integer. Connection weights between the input layer and the hidden layer are assigned 1, and the feedback weights between the hidden layer and the output layer are randomly assigned values in the interval −1 to 1. The number of training samples is P, and the input vector of the RSORBFNN is x(t)=[x_1(t), x_2(t), x_3(t), x_4(t), x_5(t)] at time t; y(t) is the output of the RSORBFNN, and y_d(t) is the real value of the effluent TN concentration at time t. The output of the RSORBFNN can be described as:
(13) y(t) = Σ_{j=1}^{J} w²_j(t)θ_j(t),  (1)
wherein w²_j(t) is the output weight between the jth hidden neuron and the output neuron, w²(t)=[w²_1(t), w²_2(t), . . . , w²_J(t)]ᵀ is the output weight vector between the hidden neurons and the output neuron, j=1, 2, . . . , J, J is the number of hidden neurons, and θ_j(t) is the output value of the jth hidden neuron, which is usually defined by a normalized Gaussian function:
(14) θ_j(t) = exp(−‖h_j(t) − c_j(t)‖² / σ_j²(t)),  (2)
wherein ‖h_j(t) − c_j(t)‖ represents the Euclidean distance between h_j(t) and c_j(t), c_j(t)=[c_1j(t), c_2j(t), . . . , c_5j(t)]ᵀ and σ_j(t) represent the center vector and radius of the jth hidden neuron, respectively; c_ij(t) is the ith element of the jth center vector, and h_j(t) is the input vector of the jth hidden neuron:
h_j(t)=[u_1(t), u_2(t), u_3(t), u_4(t), u_5(t), w¹_j(t)y(t−1)],  (3)
(15) wherein y(t−1) is the output of the RSORBFNN at time (t−1), w¹_j(t) is the feedback weight connecting the jth hidden neuron with the output neuron, w¹(t)=[w¹_1(t), w¹_2(t), . . . , w¹_J(t)]ᵀ is the feedback weight vector between the output neuron and the hidden layer neurons, and ᵀ denotes the transpose.
(16) The output of the output layer is:
(17) y(t) = w²(t)ᵀθ(t),  (4)
(18) wherein w²(t)=[w²_1(t), w²_2(t), . . . , w²_J(t)]ᵀ is the weight vector connecting the hidden layer and the output layer at time t, w²_j(t) is the weight connecting the jth hidden neuron and the output layer at time t, θ(t)=[θ_1(t), θ_2(t), . . . , θ_J(t)]ᵀ is the output vector of the hidden layer at time t, θ_j(t) is the output of the jth hidden neuron at time t, and y(t) is the output of the RSORBFNN at time t.
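For illustration, the forward pass of equations (1) through (4) can be sketched in Python. This is a minimal sketch, not the patented implementation: the names (rsorbf_forward, w1, w2, centers, sigma) are hypothetical, and the center vectors are assumed to have the same dimension as h_j(t), i.e. six elements (five inputs plus the feedback term).

```python
import numpy as np

def rsorbf_forward(x, y_prev, w1, w2, centers, sigma):
    """Hypothetical forward pass of a recurrent self-organizing RBF network.
    x: (5,) input variables; y_prev: previous network output y(t-1);
    w1, w2: (J,) feedback and output weights; centers: (J, 6) center
    vectors; sigma: (J,) widths. Returns y(t) and the hidden outputs."""
    J = w2.shape[0]
    theta = np.zeros(J)
    for j in range(J):
        # Equation (3): hidden-neuron input with the recurrent feedback term
        h_j = np.append(x, w1[j] * y_prev)
        # Equation (2): Gaussian activation of the jth hidden neuron
        theta[j] = np.exp(-np.linalg.norm(h_j - centers[j]) ** 2 / sigma[j] ** 2)
    # Equations (1)/(4): weighted sum at the single output neuron
    return float(w2 @ theta), theta
```

Each hidden output lies in (0, 1], so the network output is a bounded weighted sum driven by both the current inputs and the fed-back previous output.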
(19) The training error function of the RSORBFNN is defined as:
(20) E(t) = (1/P) Σ_{p=1}^{P} (y_d(p) − y(p))²,  (5)
(21) wherein P is the number of the training samples.
(22) (3) Train RSORBFNN
(23) 1) Given the RSORBFNN, the initial number of hidden layer neurons is J; J>2 is a positive integer. The input of the RSORBFNN is x(1), x(2), . . . , x(t), . . . , x(P), and the desired output is y_d(1), y_d(2), . . . , y_d(t), . . . , y_d(P); the desired error value is set to E_d, E_d∈(0, 0.01); the initial centers c_j(1)∈(−2, 2), the initial width values σ_j(1)∈(0, 1), the initial feedback weights w¹_j(1)∈(0, 1), and the initial weights w²_j(1)∈(0, 1), j=1, 2, . . . , J;
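The initialization in step 1) can be sketched as follows. The variable names and the choices J=4 and E_d=0.005 are illustrative assumptions, not values fixed by the disclosure; only the sampling ranges come from the text above.

```python
import numpy as np

rng = np.random.default_rng(42)

J = 4                                     # initial number of hidden neurons, J > 2
E_d = 0.005                               # desired error, chosen within (0, 0.01)
w1 = rng.uniform(0.0, 1.0, J)             # feedback weights w1_j(1) in (0, 1)
w2 = rng.uniform(0.0, 1.0, J)             # output weights w2_j(1) in (0, 1)
centers = rng.uniform(-2.0, 2.0, (J, 6))  # center vectors c_j(1) in (-2, 2)
sigma = rng.uniform(0.0, 1.0, J)          # widths sigma_j(1) in (0, 1)
```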
(24) 2) Set the learning step s=1;
(25) 3) Let t=s, calculate the output y(t) of the RSORBFNN, and update the weights, widths, and centers of the RSORBFNN using the rule:
Θ(t+1) = Θ(t) + (Q(t) + λ(t)I)⁻¹g(t),  (6)
where Θ(t)=[w¹(t), w²(t), C(t), σ(t)] is the variable vector at time t, Q(t) is the quasi-Hessian matrix at time t, g(t) is the gradient vector at time t, I is the identity matrix, and λ(t) is the adaptive learning rate defined by the equations (7) and (8):
(26)
wherein τ(t) is the adapting factor at time t, and the initial value of τ(t) is τ(1)=1; λ_max(t) and λ_min(t) are the maximum and minimum eigenvalues of Q(t), respectively, 0<λ_min(t)<λ_max(t), and 0<τ(t)≤1. Θ(t) contains four kinds of variables: the feedback connection weight vector w¹(t) at time t, the connection weight vector w²(t) at time t, the center matrix C(t)=[c_1(t), c_2(t), . . . , c_J(t)]ᵀ, and the width vector σ(t)=[σ_1(t), σ_2(t), . . . , σ_J(t)]ᵀ at time t.
Θ(1) = [w¹(1), w²(1), C(1), σ(1)],  (9)
(27) the quasi-Hessian matrix Q(t) and the gradient vector g(t) are accumulated as the sums of related submatrices and vectors:
Q(t) = jᵀ(t)j(t),  (10)
g(t) = jᵀ(t)e(t),  (11)
e(t) = y_d(t) − y(t),  (12)
(28) wherein e(t) is the approximating error at time t, y_d(t) is the desired output, y(t) is the network output at time t, and the Jacobian vector j(t) is calculated as:
(29) j(t) = ∂e(t)/∂Θ(t) = [∂e(t)/∂w¹(t), ∂e(t)/∂w²(t), ∂e(t)/∂C(t), ∂e(t)/∂σ(t)],  (13)
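A one-step sketch of the Levenberg-Marquardt-style update of equations (6) and (10) through (12) follows. It is a minimal illustration under the assumption that the Jacobian row is taken with respect to the network output (so the step reduces the error); lm_update and its argument names are hypothetical.

```python
import numpy as np

def lm_update(theta, jac, error, lam):
    """One Levenberg-Marquardt-style step, equations (6), (10)-(12).
    theta: flattened parameter vector Theta(t); jac: (n,) Jacobian row,
    assumed dy/dTheta; error: scalar e(t) = y_d(t) - y(t); lam: the
    learning rate lambda(t)."""
    J = jac.reshape(1, -1)
    Q = J.T @ J                                     # quasi-Hessian, equation (10)
    g = (J.T * error).ravel()                       # gradient vector, equation (11)
    step = np.linalg.solve(Q + lam * np.eye(len(theta)), g)
    return theta + step                             # update rule, equation (6)
```

For a toy linear model y = theta * x with x = 2 and target y_d = 4, a single step from theta = 0 lands very close to the solution theta = 2, since the regularized quasi-Hessian is nearly the exact curvature.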
(30) 4) If t>3, calculate the competitiveness of the jth hidden neuron:
cp_j(t) = f_j(t)η_j(t), j=1, 2, . . . , J,  (14)
wherein cp_j(t) is the competitiveness of the jth hidden neuron, η_j(t) denotes the correlation coefficient between the hidden layer output and the network output, η_j(t)∈(0, 1), f_j(t) is the active state of the jth hidden neuron, and σ_j(t) is the width of the jth hidden neuron; the active state f_j(t) is defined as:
f_j(t) = κ^(−‖h_j(t) − c_j(t)‖ / σ_j(t)),  (15)
wherein κ∈(1, 2) and f(t)=[f_1(t), f_2(t), . . . , f_J(t)]; the correlation coefficient η_j(t) at time t is calculated as:
(31)
wherein A_j(t)=w²_j(t)θ_j(t) is the correlation variable of the jth hidden neuron, B(t)=y(t) is the correlation variable of the output layer, and Ā(t) is the average value of the correlation variables of the hidden neurons at time t.
(32) 5) Adjust the structure of RSORBFNN:
(33) If the competitiveness of the jth hidden neuron and the training errors at times t and t+τ satisfy:
(34)
wherein j* = arg max_{1≤j≤J} cp_j(t) denotes the value of j at which cp_j(t) obtains its maximum value; E(t) and E(t+τ) are the training errors at times t and t+τ, respectively, τ is a time interval, τ=5, and ε is the preset threshold, ε=0.001. In this case, add one hidden neuron, so that the number of hidden neurons is M_1=J+1. Otherwise, the structure of the RSORBFNN is not adjusted, M_1=J.
(36) When the competitiveness of the jth hidden neuron satisfies
cp_j(t) < ρ,  (19)
wherein ρ is the preset pruning threshold, ρ∈(0, E_d), and E_d is the preset error, E_d=0.002. The jth hidden neuron is then pruned, and the number of hidden neurons is updated to M_2=M_1−1. Otherwise, the structure of the RSORBFNN is not adjusted, M_2=M_1.
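The structure-adjustment logic of step 5) can be sketched as follows. The pruning test comes from equation (19); the growing test is an assumption, since the exact growing condition of equations (17) and (18) is not reproduced in the source. The function name adjust_structure and the default thresholds are illustrative.

```python
import numpy as np

def adjust_structure(cp, E_t, E_t_tau, eps=0.001, rho=0.0005):
    """Sketch of step 5). cp: (J,) competitiveness values cp_j(t);
    E_t, E_t_tau: training errors at times t and t+tau; eps: growing
    threshold; rho: pruning threshold, rho in (0, E_d)."""
    j_star = int(np.argmax(cp))                    # most competitive neuron j*
    # Assumed growing test: error has not improved by eps over the interval tau
    if E_t - E_t_tau < eps:
        return ("grow", j_star)                    # add one neuron: M1 = J + 1
    pruned = [j for j in range(len(cp)) if cp[j] < rho]  # pruning, equation (19)
    if pruned:
        return ("prune", pruned)                   # M2 = M1 - number pruned
    return ("keep", None)
```

Growing when the error stagnates and pruning low-competitiveness neurons together let the hidden layer track the complexity of the data, which is the stated purpose of the self-organizing mechanism.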
(37) 6) Increase the learning step s by 1; if s<P, go to step 3); if s=P, proceed to step 7).
(38) 7) According to Eq. (5), calculate the performance of the RSORBFNN. If E(t)≥E_d, return to step 3); if E(t)<E_d, stop the training process.
(39) The training result of the computer-implemented method for the effluent TN concentration is shown in the accompanying drawings.
(40) (4) Effluent TN concentration prediction:
(41) The testing samples are used as the input of the RSORBFNN, and the output of the RSORBFNN gives the soft-computing values of the effluent TN concentration. The predicting result is shown in the accompanying drawings.
(42) Tables 1-14 show the experimental data of this present disclosure. Tables 1-6 show the training samples of biochemical oxygen demand (BOD), ammonia nitrogen (NH4-N), nitrate nitrogen (NO3-N), effluent suspended solids (SS), total phosphorus (TP), and real effluent TN concentration. Table 7 shows the outputs of the RSORBFNN in the training process. Tables 8-13 show the testing samples of biochemical oxygen demand (BOD), ammonia nitrogen (NH4-N), nitrate nitrogen (NO3-N), effluent suspended solids (SS), total phosphorus (TP), and real effluent TN concentration. Table 14 shows the outputs of the RSORBFNN in the predicting process.
(43) Training samples are provided as follows.
(44) TABLE-US-00001 TABLE 1 The training samples of biochemical oxygen demand-BOD (mg/L) 192 222 201 264 195 209 260 197 206 289 188 350 210 204 200 180 230 338 200 330 320 232 260 240 218 316 310 172 210 316 310 244 248 168 204 145 170 142 190 260 200 240 280 174 250 136 222 204 239 242 310 232 290 210 144 214 251 158 262 290
(45) TABLE-US-00002 TABLE 2 The training samples of ammonia nitrogen-NH4N (mg/L) 64.3 69.4 72.6 71.7 71.5 63.5 70.7 68.4 64.3 68.3 71.9 64.3 63.8 56.9 44.6 64.9 68.9 76.9 63.5 70 60.3 60 72.1 69.7 70.5 66.1 62.2 58.8 60.5 63.5 65.7 59.4 54.8 60 59.1 63.7 64.5 58.1 61.9 66.7 57.6 70.7 61.3 57.8 55.3 65.8 65.1 61.3 72 62.8 63.4 61.4 71.3 61.2 58.7 55.7 67.7 58.5 61.5 73.2
(46) TABLE-US-00003 TABLE 3 The training samples of nitrate nitrogen-NO3N (mg/L) 13.8325 13.7215 13.6408 13.6666 13.7288 13.8617 13.8873 13.9157 13.9758 14.1119 14.4164 14.4829 15.2031 15.2791 15.6909 16.1498 16.6379 16.9443 16.8975 16.8101 16.5498 16.2205 15.7517 15.3732 14.5885 13.9968 13.5851 12.9808 12.6256 12.2428 11.9133 11.6286 11.4642 10.7946 10.3934 10.4852 10.9491 11.5281 12.2201 12.8419 13.3324 13.0934 12.8794 12.9103 12.5906 12.3108 12.0798 11.9742 11.8102 11.6730 11.6093 11.4942 11.4940 11.5036 11.4617 11.4878 11.3927 11.3851 11.4866 11.7895
(47) TABLE-US-00004 TABLE 4 The training samples of effluent suspended solids-SS (mg/L) 146 192 226 208 154 264 276 208 178 250 204 288 210 172 200 170 214 324 186 422 168 238 232 260 184 330 312 230 162 300 268 231 270 132 252 204 148 116 182 292 210 210 350 214 212 170 262 178 228 164 296 308 240 170 140 178 196 312 164 320
(48) TABLE-US-00005 TABLE 5 The training samples of total phosphorus-TP (mg/L) 6.38 6.71 7.15 7.29 6.31 7.03 7.35 7.05 6.66 7.28 7.06 7.73 6.92 6.7 6.91 6.38 7.18 7.81 7.39 8.21 6.56 6.83 6.95 7.41 6.82 9.84 7.91 7.23 6.64 7.3 7.81 7.19 6.63 6 6.65 5.84 5.87 6.15 6.53 7.62 6.9 6.2 8.08 6.47 7.2 5.86 7.69 6.55 6.94 7.01 7.78 6.98 7.55 6.56 5.92 6.17 7.05 6.73 7.65 8.09
(49) TABLE-US-00006 TABLE 6 The training samples of real effluent TN (mg/L) 75.3 86 91.3 91.8 88.5 83.9 84.8 82.1 80 84.4 80 89.6 79.9 82.2 77.6 55.5 85.1 85.4 90.4 84.2 80.9 76.1 73.7 86.6 83.1 85.9 81.7 79.6 72 78 79.3 81.77 73.7 62.4 73.2 70.7 72.2 71.1 63 75.3 81.8 72.7 88.9 77.4 74.1 71.2 80.5 76.5 75.8 82.6 80.1 70.3 86.5 71.5 67.9 65.6 68.6 70.9 77.4 87.2
(50) TABLE-US-00007 TABLE 7 The effluent TN concentration outputs in the training process (mg/L) 75.09123 85.75465 91.29607 91.6917 88.23302 83.95164 85.46349 82.11712 79.64609 84.5503 79.87456 89.64711 79.92864 81.83561 77.36899 57.73073 84.80773 85.69525 90.44198 82.75301 81.46583 76.12251 73.87198 86.63506 82.91107 85.88516 81.91191 79.37446 72.01563 78.18965 79.34218 81.66961 73.74434 62.82255 73.0666 70.48056 72.29508 71.25872 63.62556 74.98458 81.483 72.48675 88.93721 77.31496 74.22315 70.59969 80.91807 76.37911 75.78082 82.65934 80.05047 71.01168 85.82914 71.58082 67.73245 65.72093 69.74704 69.91498 76.98607 87.36917
Testing Samples
(51) TABLE-US-00008 TABLE 8 The testing samples of biochemical oxygen demand-BOD (mg/L) 217 226 218 390 260 200 248 370 342 347 290 440 289 460 188 318 334 290 341 335 287 346 266 430 294 450 262 372 370 198 347 610 326 283 395 233 331 209 282 174
(52) TABLE-US-00009 TABLE 9 The testing samples of ammonia nitrogen-NH4N (mg/L) 48.6 56.9 64.2 58.9 50.3 61.3 63.7 68.6 54 40.8 53.4 60.2 66.4 60.9 63.4 54.4 40.7 69 63.4 55 66.3 63.2 62.3 52.7 60.5 57 62.1 68.2 64 69 67.2 61.5 66 64.5 62.1 51.4 51 55.5 55.5 58.5
(53) TABLE-US-00010 TABLE 10 The testing samples of nitrate nitrogen-NO3N (mg/L) 12.3085 12.6792 13.0400 13.2389 13.5262 13.4614 13.2849 12.9682 12.7089 12.2269 12.0995 12.1315 12.1361 12.2122 12.2197 12.3499 12.4464 12.4927 12.7326 12.8156 12.9392 13.0438 13.7367 14.1627 14.8751 15.9604 16.7487 17.6572 18.6773 19.1970 19.9069 20.5030 20.9495 21.3475 21.8734 22.4720 22.7922 23.2325 23.4924 23.2459
(54) TABLE-US-00011 TABLE 11 The testing samples of effluent suspended solids-SS (mg/L) 154 158 214 204 110 232 226 254 122 538 130 162 142 360 376 231.2 166 118 142 220 266 172 296 235 180 146 206 208 202 146 398 270 328 126 244 218 272 168 262 110
(55) TABLE-US-00012 TABLE 12 The testing samples of total phosphorus-TP (mg/L) 5.17 5.39 6.03 5.96 5.24 6.22 5.78 6.17 5.6 5.22 4.75 5.46 6.1 6.48 6.84 5.5 4.06 5.74 5.73 5.8 6.71 5.63 6.18 5.11 5.03 4.6 5.24 5.86 5.62 6.13 7.01 6.11 6.65 5.56 6.52 6.22 6.25 5.2 5.77 6.17
(56) TABLE-US-00013 TABLE 13 The testing samples of real effluent TN (mg/L) 62.8 67.4 75.3 70.1 59.4 78.5 75.4 77.3 70.2 54.5 60.7 66.7 74.1 74.9 78.6 66 60.9 65.4 52.3 60.5 72.7 68.2 70 65.1 69.1 61.9 69.3 71.5 70.7 76.7 80.8 73.9 77.3 73.5 76.3 73.4 74.1 64.5 66.6 67.8
(57) TABLE-US-00014 TABLE 14 The effluent TN concentration outputs in the testing process (mg/L) 60.43193 67.16412 75.34496 70.96676 63.11076 67.66785 81.3452 79.78831 64.06407 58.64447 63.99991 66.24501 72.44785 72.43734 77.82645 67.75635 57.96904 75.32191 63.95107 56.05289 62.8231 65.67208 71.03243 61.22433 66.2433 65.8583 68.8428 76.71578 67.04345 74.80853 78.61247 75.88474 80.21718 68.98426 77.51966 67.57056 73.42719 71.17669 65.88281 66.41494