A parallel analog circuit optimization method based on genetic algorithm and machine learning
20230092630 · 2023-03-23
Inventors
- Ranran Zhou (Jinan, CN)
- Yaping Li (Jinan, CN)
- Yong Wang (Jinan, CN)
- Yusong Li (Jinan, CN)
- Xuezheng Huang (Jinan, CN)
- Juanjuan Sun (Jinan, CN)
CPC classification
G06F30/367
PHYSICS
G06F2119/02
PHYSICS
G06N3/126
PHYSICS
International classification
G06F30/367
PHYSICS
Abstract
A parallel analog circuit automatic optimization method based on genetic algorithm and machine learning comprises global optimization based on a genetic algorithm and local optimization based on machine learning, with the two performed alternately. The global optimization uses parallel SPICE simulations, combined with parallel computing, to improve optimization efficiency while guaranteeing optimization accuracy. The local optimization establishes a machine learning model near the global optimal point obtained by the global optimization and uses that model in place of the SPICE simulator, reducing the time cost of a large number of simulations.
Claims
1. A parallel analog circuit optimization method based on genetic algorithm and machine learning, which comprises the following steps:
(1) Generate the initial population: sampling is done in the design space according to the orthogonal Latin square principle, and the sampled data forms the initial population. The design space refers to the value range of the circuit design variables and is determined by the user. Sampling refers to selecting multiple sets of variable values in the design space; the sampled data comprises multiple sets of circuit design variable values, each of which contains a value for every design variable. SPICE simulations are then carried out for the initial population with the SPICE simulator combined with the parallel computing technique, that is, parallel SPICE simulations: a SPICE simulation is run for each set of design variable values, and multiple simulations are performed synchronously to obtain the circuit performance parameters of each individual in the initial population. Each set of circuit design variable values in the initial population is referred to as an individual, and each circuit performance value is referred to as a circuit performance parameter.
(2) Each individual's cost value is computed, and the individual with the lowest cost value in the population is selected as the optimal individual.
(3) Training and test data sets are generated for training the machine learning-based circuit performance model.
(4) The training and test data sets obtained in Step (3) are used to train and generate the machine learning-based circuit performance model. The said model refers to a mathematical model established, based on a machine learning algorithm, in a local range near the global optimal point; it reflects the relations between circuit design parameters and performance indexes.
(5) Local optimization is performed based on the trained machine learning-based circuit performance model from Step (4): a search is carried out in the neighborhood of the optimal individual obtained by the global search, with the trained model used for prediction; namely, the model takes the circuit design variable values as input and outputs the circuit performance parameters. The cost value is calculated with equation (I) from the performance parameters output by the model, and the local search proceeds in the direction in which the cost value decreases.
(6) SPICE simulation verification is performed on the results obtained by the local optimization in Step (5), and the optimal individual in the population is then updated with the verified circuit design variable values and simulation results.
(7) Determine whether the optimization termination condition is met: if the pre-set number of iterations is reached or the circuit optimization goal is achieved, the optimization finishes; otherwise, proceed to Step (8).
(8) Enter the evolutionary process of the genetic algorithm: a new population is generated through selection, crossover and mutation. Selection chooses superior individuals from the population according to their cost values; the smaller the cost value, the more likely the individual is to be selected. Crossover randomly selects a pair of individuals, namely two sets of circuit parameter values, according to a pre-set probability, and applies the crossover operator to them to generate new individuals. Mutation applies the mutation operator to some values of individuals in the population to produce new individuals.
(9) Parallel SPICE simulations are performed for the new population generated in Step (8) with the SPICE simulator combined with the parallel computing technique.
(10) Repeat Step (2) to Step (7).
2. The parallel analog circuit optimization method based on genetic algorithm and machine learning as described in claim 1, characterized in that said Step (2) is to: put the SPICE simulation results obtained in Step (1), namely the circuit performance parameters of each individual in the initial population, into the cost function and calculate the cost value F.sub.cost of each individual in the population. The cost value is used to measure the circuit performance; the individual with the lowest cost value is the optimal individual of the population. The cost function F.sub.cost is as shown in equation (I):
F.sub.cost=Σ.sub.n=1.sup.N(W.sub.n*P.sub.n) (I)
In equation (I), N indicates the number of performance indexes of the circuit to be optimized (n=1, 2, . . . , N); P.sub.n indicates the square of the difference between the SPICE simulation result and the target value of the nth circuit performance parameter; and W.sub.n is a weight value indicating the importance of the nth circuit performance parameter (W.sub.n is a real number set by the user according to need).
3. The parallel analog circuit optimization method based on genetic algorithm and machine learning as described in claim 1, characterized in that said Step (3) is to: perform uniform sampling in the neighborhood of the optimal individual selected in Step (2) according to the orthogonal Latin square principle, wherein the neighborhood indicates a range within 5% of the circuit performance parameters of the optimal individual, or a user-defined range; and conduct SPICE simulations for the sampled data in that neighborhood simultaneously, namely parallel SPICE simulations, with the SPICE simulator combined with the parallel computing technique to obtain the simulation results. The sampled data and the simulation results jointly constitute the training and test data sets.
4. The parallel analog circuit optimization method based on genetic algorithm and machine learning as described in claim 1, which is characterized in that the machine learning-based circuit performance model as described in Step (3) is an ANN model, a KNN model, an SVM model, a DNN model, a DT model, or a Random Forests model.
Description
BRIEF DESCRIPTION OF THE FIGURES
[0039]
[0040]
[0041] Embodiment 1 of the parallel analog circuit optimization method based on genetic algorithm and machine learning as described in the invention.
[0042]
[0043]
[0044]
DETAILED EMBODIMENTS
[0045] The invention is further described in combination with the attached figures and embodiments as follows, but is not limited to that.
Embodiment 1
[0046] A parallel analog circuit optimization method based on genetic algorithm and machine learning, which uses a fifth-order complex filter circuit as shown in
TABLE 1
| Circuit performance index | Design target |
|---|---|
| Passband ripple | Minimization |
| Bandwidth | 9 MHz ± 5% |
| Center frequency | 12.24 MHz ± 5% |
[0047] In other words, the optimization objective of the embodiment is to minimize the passband ripple while guaranteeing a center frequency of 12.24 MHz and a bandwidth of 9 MHz (within 5% deviation). To this end, the resistance values of 5 sets of coupling resistors R.sub.1, R.sub.2, R.sub.3, R.sub.4, R.sub.5 are selected as the circuit design variables to be optimized. The specific implementation steps are as follows: as shown in the attached
[0048] (1) Generate the initial population: Sampling is done in the design space according to the orthogonal Latin square principle and the obtained data forms the initial population. The sampling refers to the process of selecting multiple sets of variable values in the design space. The sampled data includes multiple sets of circuit design variable values, each of which contains the values of each design variable.
[0049] SPICE simulations are carried out multiple times for the initial population simultaneously with the SPICE simulator and combined with the parallel technique, that is to say, parallel SPICE simulations are performed. As the initial population contains multiple sets of variable values, SPICE simulations are done for each of them in order to evaluate its performance in the population, and multiple SPICE simulations are performed synchronously to obtain the circuit performance parameters of each individual in the initial population. Each set of circuit design variable values in the initial population is referred to as an individual, while each circuit performance value is referred to as a circuit performance parameter, such as gain of operational amplifier, bandwidth, and passband ripple of complex filter, etc. The circuit performance parameters are determined by the optimized target circuit.
[0050] The initial population of the genetic algorithm is generated by orthogonal Latin square sampling, which reduces the required population size while distributing the individuals as evenly as possible across the design space. By combining SPICE simulation with parallel computing, the circuit performance of the many individuals in the population is evaluated via parallel SPICE simulations, which greatly improves efficiency.
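The population-generation step described above can be sketched as follows. This is a minimal stdlib sketch: `run_spice` is a hypothetical stand-in for invoking a real SPICE netlist simulation, and the stratified (Latin-hypercube-style) sampler is one simple realization of the orthogonal-Latin-square sampling idea, not the patented procedure itself.

```python
import random
from concurrent.futures import ThreadPoolExecutor

def latin_square_sample(bounds, n):
    """Stratified sampling: each variable's range is split into n equal
    strata, each stratum used exactly once, then columns are shuffled so
    the variables are decorrelated."""
    dims = len(bounds)
    columns = []
    for lo, hi in bounds:
        step = (hi - lo) / n
        col = [lo + (i + random.random()) * step for i in range(n)]
        random.shuffle(col)
        columns.append(col)
    return [[columns[d][i] for d in range(dims)] for i in range(n)]

def run_spice(individual):
    """Hypothetical stand-in for one SPICE simulation of an individual."""
    return {"ripple": sum(x * x for x in individual)}

def evaluate_population(population, workers=8):
    """Launch one simulation per individual, several running concurrently."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(run_spice, population))

bounds = [(1e3, 1e5)] * 5          # illustrative value ranges for R1..R5
population = latin_square_sample(bounds, 16)
results = evaluate_population(population)
```

In a real flow, `run_spice` would write a netlist with the individual's resistor values and parse the simulator's measurement output; the thread pool mirrors the parallel-simulation step.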
[0051] (2) Each individual's cost value is computed, and the individual with the lowest cost value in the population is selected as the optimal individual:
[0052] The SPICE simulation results obtained from Step (1), namely the circuit performance parameters of each individual in the initial population, are put into the cost function to calculate the cost value F.sub.cost of each individual in the population. The cost value is used to measure the circuit performance. The individual with the lowest cost value is the optimal individual of the population. The cost function F.sub.cost is as shown in equation (I):
F.sub.cost=Σ.sub.n=1.sup.N(W.sub.n*P.sub.n) (I)
[0053] In equation (I), N indicates the number of performance indexes of the circuit to be optimized (n=1, 2, . . . , N); P.sub.n indicates the square of the difference between the SPICE simulation result and the target value of the nth circuit performance parameter; and W.sub.n is a weight value indicating the importance of the nth circuit performance parameter (the value of W.sub.n is a real number set by the user according to need).
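Equation (I) can be sketched directly; the performance names, targets, and weights below are illustrative, not values from the patent.

```python
def cost(performance, targets, weights):
    """F_cost = sum_n W_n * P_n, where P_n is the squared deviation of the
    n-th simulated performance parameter from its target (equation (I))."""
    total = 0.0
    for name, target in targets.items():
        p_n = (performance[name] - target) ** 2
        total += weights[name] * p_n
    return total

perf = {"center_freq": 12.20, "bandwidth": 9.3}    # illustrative simulation results
targets = {"center_freq": 12.24, "bandwidth": 9.0}
weights = {"center_freq": 1.0, "bandwidth": 1.0}
f_cost = cost(perf, targets, weights)
```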
[0054] (3) Training and test data sets are generated for training machine learning-based circuit performance models: uniform sampling is performed in the neighborhood of the optimal individual selected in Step (2) according to the orthogonal Latin square principle, wherein the neighborhood indicates a range within 5% of the circuit performance parameters of the optimal individual, or a user-defined range. SPICE simulations are then conducted for the sampled data simultaneously, namely parallel SPICE simulations, with the SPICE simulator combined with the parallel computing technique to obtain the simulation results. The sampled data and the simulation results jointly constitute the training and test data sets. Compared to serial simulations, parallel SPICE simulations reduce time costs and improve the optimization efficiency.
[0055] (4) The training and test data sets obtained from Step (3) are used to train and generate the machine learning-based circuit performance model:
[0056] The training and test data sets obtained from Step (3) are used to train models. The machine learning-based circuit performance model refers to a mathematical model established, based on a machine learning algorithm, in a local range near the global optimal point; it reflects the relations between circuit design parameters and performance indexes. Since the model is established in a local region, it requires less training data than a model built over the global scope, which reduces the time cost. Additionally, as the circuit performance is relatively stable in a local region, a machine learning model established in this way has higher accuracy, which is conducive to improving the accuracy of the analog circuit optimization method. During the local optimization, the established machine learning model is used in place of the SPICE simulator to evaluate the circuit performance. The machine learning model predicts much faster than SPICE simulation and thus greatly improves efficiency.
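A local surrogate of the kind described can be sketched with any of the model types named in claim 4; k-nearest-neighbour (with k = 1, as in Table 2) is shown here only because it fits in a few stdlib lines. The design vectors and ripple values are illustrative.

```python
import math

class KNNSurrogate:
    """Minimal k-nearest-neighbour surrogate for the SPICE simulator,
    trained on local samples around the current optimum."""
    def __init__(self, k=1):
        self.k = k
        self.samples = []            # list of (design_vector, performance)

    def fit(self, designs, performances):
        self.samples = list(zip(designs, performances))

    def predict(self, design):
        # average the performance of the k closest training designs
        dist = lambda s: math.dist(s[0], design)
        nearest = sorted(self.samples, key=dist)[: self.k]
        return sum(p for _, p in nearest) / self.k

# illustrative local training data: designs near the optimum and their ripple
designs = [[1.0, 2.0], [1.1, 2.1], [0.9, 1.9], [1.2, 2.2]]
ripple = [0.50, 0.55, 0.47, 0.60]
model = KNNSurrogate(k=1)
model.fit(designs, ripple)
prediction = model.predict([0.91, 1.91])
```

During local optimization, `model.predict` replaces a SPICE call, which is where the speedup in this step comes from.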
[0057] (5) Local optimization is to be done based on the well-trained machine learning-based circuit performance model from Step (4):
[0058] A search is carried out in the neighborhood of the optimal individual obtained by the global search, aiming to further search for points with better performance and lower cost values. Then, the circuit performance is no longer evaluated through SPICE simulation, and instead, is predicted with the well-trained machine learning-based circuit performance model from Step (4). Namely, the circuit performance model runs with the circuit design variable values as input items, and then outputs the results as circuit performance parameters. The cost value is calculated with the computational formula as shown in equation (I) based on the circuit performance parameters output by the circuit performance model. The local search is carried out in the direction where the cost value decreases.
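The local search in the direction of decreasing cost can be sketched as a greedy neighbourhood search over the surrogate; the perturbation scheme and the quadratic stand-in cost are assumptions for illustration, not the patented search strategy.

```python
import random

def local_search(start, predict_cost, step=0.02, iterations=200, seed=7):
    """Perturb the design variables and keep a move only when the
    surrogate-predicted cost decreases."""
    rng = random.Random(seed)
    best, best_cost = list(start), predict_cost(start)
    for _ in range(iterations):
        candidate = [x * (1 + rng.uniform(-step, step)) for x in best]
        c = predict_cost(candidate)
        if c < best_cost:            # move only downhill in cost
            best, best_cost = candidate, c
    return best, best_cost

# illustrative surrogate-predicted cost with its minimum at (1.0, 2.0)
quad = lambda v: (v[0] - 1.0) ** 2 + (v[1] - 2.0) ** 2
optimum, cost_value = local_search([1.2, 2.3], quad)
```

The returned point is then what Step (6) verifies with a real SPICE simulation.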
[0059] (6) SPICE simulation verification is performed on the results obtained by the machine-learning-based local optimization in Step (5) to improve their accuracy. The optimal individual in the population is then updated with the circuit design variable values and simulation results after SPICE verification, i.e., the original values are replaced with the current ones. The SPICE-verified local optimization results are added to the next-generation population to obtain a better population.
[0060] (7) Determine whether the optimization termination condition is met. If the pre-set iteration times are reached or the goal of circuit optimization is met, then the optimization is finished. If not, Step (8) is entered.
[0061] (8) Enter the evolutionary process of the genetic algorithm: A new population is generated through selection, crossover and mutation. The said selection is to select excellent individuals from the population according to the cost values, namely the smaller the cost value, the more likely the individual is to be selected. The said crossover is to select a pair of individuals randomly from the population according to the pre-set probabilities, namely two sets of parameter values of the circuit are randomly selected, and then utilize the crossover operator to cross over them and generate new individuals. The said mutation is to use the mutation operator to mutate some values of the individuals in a population to produce new individuals;
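The three evolutionary operators can be sketched as below. This is a hedged sketch: tournament selection is used as one simple realization of cost-based selection (the patent only requires that lower-cost individuals be more likely to be chosen), and the crossover/mutation probabilities are illustrative.

```python
import random

rng = random.Random(42)

def select(population, costs):
    """Tournament selection: of two random individuals, the lower cost wins."""
    i, j = rng.randrange(len(population)), rng.randrange(len(population))
    return population[i] if costs[i] <= costs[j] else population[j]

def crossover(parent_a, parent_b, prob=0.9):
    """Single-point crossover, applied with a pre-set probability."""
    if rng.random() >= prob or len(parent_a) < 2:
        return list(parent_a)
    point = rng.randrange(1, len(parent_a))
    return parent_a[:point] + parent_b[point:]

def mutate(individual, prob=0.1, scale=0.05):
    """Mutate each gene with probability `prob` by a small relative change."""
    return [x * (1 + rng.uniform(-scale, scale)) if rng.random() < prob else x
            for x in individual]

population = [[rng.uniform(1.0, 10.0) for _ in range(5)] for _ in range(20)]
costs = [sum(ind) for ind in population]      # illustrative cost values
next_gen = [mutate(crossover(select(population, costs),
                             select(population, costs)))
            for _ in range(20)]
```

The new population produced this way is what Step (9) evaluates with parallel SPICE simulations.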
[0062] (9) Parallel SPICE simulations are performed for the new population generated by Step (8) with the SPICE simulator combined with the parallel computing technique;
[0063] (10) Repeat Step (2) to Step (7).
[0064] In the embodiment, the average relative error and the correlation coefficient are used to evaluate the accuracy of the machine learning-based circuit performance model. Computational formulas of the average relative error and the correlation coefficient are as shown in Equations (II) and (III):
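The formulas themselves do not survive in this text; written in the standard form consistent with the variable definitions that follow (n samples, model predictions x.sub.i, SPICE results y.sub.i), they would read:

```latex
\text{ARE} = \frac{100\%}{n}\sum_{i=1}^{n}\left|\frac{x_i - y_i}{y_i}\right| \qquad \text{(II)}

r = \frac{\sum_{i=1}^{n}(x_i-\bar{x})(y_i-\bar{y})}
         {\sqrt{\sum_{i=1}^{n}(x_i-\bar{x})^{2}}\,\sqrt{\sum_{i=1}^{n}(y_i-\bar{y})^{2}}} \qquad \text{(III)}
```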
[0065] Where: n, x, and y denote the size of the training data set, the predicted value of the machine learning model, and the SPICE simulation result respectively. The average relative error represents the error between the output of the machine learning model and the value of the SPICE simulation result. The correlation coefficient is a statistical index that measures the goodness of fit between the output of the machine learning model and the value of the SPICE simulation result. If the correlation coefficient is equal to 1.0, the model output value matches the target value (SPICE simulation value) perfectly.
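Both metrics can be computed directly from their standard definitions (the prediction/simulation data here is illustrative):

```python
import math

def average_relative_error(predicted, simulated):
    """Mean of |x_i - y_i| / |y_i| over the data set, as a percentage."""
    return 100.0 * sum(abs(x - y) / abs(y)
                       for x, y in zip(predicted, simulated)) / len(simulated)

def correlation_coefficient(predicted, simulated):
    """Pearson correlation between model output and SPICE results."""
    n = len(simulated)
    mx = sum(predicted) / n
    my = sum(simulated) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(predicted, simulated))
    sx = math.sqrt(sum((x - mx) ** 2 for x in predicted))
    sy = math.sqrt(sum((y - my) ** 2 for y in simulated))
    return cov / (sx * sy)

x = [1.0, 2.1, 2.9, 4.2]     # model predictions (illustrative)
y = [1.0, 2.0, 3.0, 4.0]     # SPICE simulation results (illustrative)
are = average_relative_error(x, y)
r = correlation_coefficient(x, y)
```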
[0066] To select an appropriate machine learning model, the embodiment compares six common machine learning models according to the above formulas: the K-Nearest Neighbor (KNN) model, Support Vector Machine (SVM) model, Deep Neural Network (DNN) model, Random Forests model, Decision Tree (DT) model, and Artificial Neural Network (ANN) model. The embodiment uses 10 training and test data sets generated through SPICE simulations to train and test these six models. The minimum, maximum, and average values of the average relative errors and correlation coefficients of the six models, together with their training times, are listed in Table 2.
TABLE 2
(ARE = average relative error; CC = correlation coefficient)
| Model | Main model parameters | Training time (sec) | Data set | ARE min (%) | ARE max (%) | ARE avg (%) | CC min | CC max | CC avg |
|---|---|---|---|---|---|---|---|---|---|
| KNN | k = 1 | 0.2929 | Training set | — | — | — | — | — | — |
| | | | Test set | 0.9236 | 5.7663 | 2.9142 | 0.9560 | 0.9990 | 0.9829 |
| SVR | C = 1000, ‘RBF’ | 0.2980 | Training set | 2.3879 | 14.934 | 6.6970 | 0.6036 | 0.9958 | 0.8290 |
| | | | Test set | 2.1111 | 13.013 | 6.3128 | 0.4453 | 0.9972 | 0.8657 |
| DNN | [128, 128, 128, 128, 128], ‘ReLu’, ‘Adam’ | 687.18 | Training set | 0.2273 | 1.5794 | 0.6844 | 0.9994 | 0.9999 | 0.9985 |
| | | | Test set | 0.2277 | 2.2103 | 0.9386 | 0.9989 | 0.9998 | 0.9973 |
| DT | — | 0.2975 | Training set | — | — | — | — | — | — |
| | | | Test set | 1.3901 | 14.047 | 5.7461 | 0.7556 | 0.9944 | 0.9088 |
| Random Forests | n_estimators = 30 | 0.3307 | Training set | 1.0932 | 3.3700 | 1.8993 | 0.9898 | 0.9961 | 0.9912 |
| | | | Test set | 2.2120 | 9.7492 | 4.1453 | 0.9323 | 0.9956 | 0.9671 |
| ANN | [8], ‘tansig’, ‘trainlm’ | 0.2822 | Training set | 0.1000 | 1.2600 | 0.5970 | 0.9969 | 1.0000 | 0.9989 |
| | | | Test set | 0.1500 | 1.7300 | 0.7870 | 0.9954 | 1.0000 | 0.9986 |
[0067] As can be seen from Table 2, the ANN model has the smallest average relative error (below 2% even at its test-set maximum) and the largest correlation coefficient (very close to 1.0) among the six models. That is to say, in this embodiment, the ANN model has the highest accuracy. In addition, its training time is very short. On overall consideration, the embodiment selects the ANN model as its training model.
[0068] The embodiment compares the following three optimization methods: global optimization based on genetic algorithm and parallel SPICE simulation (GA(SPICE)); global optimization based on genetic algorithm and parallel SPICE simulation plus local optimization based on SPICE simulation (GA(SPICE)+LMS(SPICE)); and global optimization based on genetic algorithm and parallel SPICE simulation plus local optimization based on artificial neural networks (GA(SPICE)+LMS(ANN)). Among them, the second method theoretically has the highest accuracy and the best optimization effect, as it is entirely based on SPICE simulation and combines global and local optimization.
[0069] The fifth-order complex bandpass filter used in this embodiment adopts a 130 nm CMOS technology. The three optimization methods all run in a server environment with 80 Intel Xeon 1.9-GHz CPU cores and 125 GB of storage space.
TABLE 3
| Optimization method | Passband ripple (dB) | Center frequency (MHz) | Bandwidth (MHz) | Running time |
|---|---|---|---|---|
| GA(SPICE) | 0.987 | 12.20 | 9.3 | 0:14:00 |
| GA(SPICE) + LMS(SPICE) | 0.479 | 12.24 | 9.3 | 2:56:00 |
| GA(SPICE) + LMS(ANN) | 0.577 | 12.24 | 9.3 | 0:42:00 |
[0070] As can be seen from
Embodiment 2
[0071] A parallel analog circuit optimization method based on genetic algorithm and machine learning, which uses a second-order differential operational amplifier circuit as shown in
[0072] The optimization indexes of the operational amplifier circuit are as listed in Table 4.
TABLE 4
| Circuit performance index | Design target |
|---|---|
| Open-loop gain | Maximization |
| Bandwidth | Maximization |
| Unity-gain bandwidth | >1 GHz |
| Phase margin | >60° |
[0073] In other words, the optimization objective of the embodiment is to maximize the open-loop gain and the bandwidth while satisfying the unity-gain bandwidth and phase margin requirements. According to the symmetry requirements, transistors M.sub.2, M.sub.4, M.sub.6, and M.sub.8 must be identical to M.sub.1, M.sub.3, M.sub.5, and M.sub.7 respectively, namely their widths satisfy W.sub.1=W.sub.2, W.sub.3=W.sub.4, W.sub.5=W.sub.6, and W.sub.7=W.sub.8. Additionally, compensation capacitor C.sub.C and compensation resistor R.sub.C also need to be optimized. Therefore, there are seven design variables in total to be optimized: W.sub.1, W.sub.3, W.sub.5, W.sub.7, W.sub.9, C.sub.C, and R.sub.C.
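A minimal sketch of how these symmetry constraints collapse nine transistor widths into the seven optimized variables (the function name and the numeric values are hypothetical):

```python
def expand_design(variables):
    """Expand the 7 optimized variables (W1, W3, W5, W7, W9, Cc, Rc) into
    the full width list, enforcing W1=W2, W3=W4, W5=W6, W7=W8."""
    w1, w3, w5, w7, w9, cc, rc = variables
    widths = {"W1": w1, "W2": w1, "W3": w3, "W4": w3,
              "W5": w5, "W6": w5, "W7": w7, "W8": w7, "W9": w9}
    return widths, cc, rc

# illustrative variable values: five widths, one capacitance, one resistance
widths, cc, rc = expand_design([2.0, 4.0, 6.0, 8.0, 10.0, 1e-12, 500.0])
```

The optimizer then only ever searches the 7-dimensional space, while each SPICE netlist receives the full, symmetric set of device values.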
[0074] (1) Generate the initial population: Sampling is done in the design space according to the orthogonal Latin square principle and the obtained data forms the initial population. The sampling refers to the process of selecting multiple sets of variable values in the design space. The sampled data includes multiple sets of circuit design variable values, each of which contains the values of each design variable.
[0075] SPICE simulations are carried out multiple times for the initial population simultaneously with the SPICE simulator and combined with the parallel technique, that is to say, parallel SPICE simulations are performed. As the initial population contains multiple sets of variable values, SPICE simulations are done for each of them in order to evaluate its performance in the population, and multiple SPICE simulations are performed synchronously to obtain the circuit performance parameters of each individual in the initial population. Each set of circuit design variable values in the initial population is referred to as an individual, while each circuit performance value is referred to as a circuit performance parameter, such as gain of operational amplifier, bandwidth, and passband ripple of complex filter, etc. The circuit performance parameters are determined by the optimized target circuit.
[0076] The initial population of the genetic algorithm is generated by orthogonal Latin square sampling, which reduces the required population size while distributing the individuals as evenly as possible across the design space. By combining SPICE simulation with parallel computing, the circuit performance of the many individuals in the population is evaluated via parallel SPICE simulations, which greatly improves efficiency.
[0077] (2) Each individual's cost value is computed, and the individual with the lowest cost value in the population is selected as the optimal individual:
[0078] The SPICE simulation results obtained from Step (1), namely the circuit performance parameters of each individual in the initial population, are put into the cost function to calculate the cost value F.sub.cost of each individual in the population. The cost value is used to measure the circuit performance. The individual with the lowest cost value is the optimal individual of the population. The cost function F.sub.cost is as shown in equation (I):
F.sub.cost=Σ.sub.n=1.sup.N(W.sub.n*P.sub.n) (I)
[0079] In equation (I), N indicates the number of performance indexes of the circuit to be optimized (n=1, 2, . . . , N); P.sub.n indicates the square of the difference between the SPICE simulation result and the target value of the nth circuit performance parameter; and W.sub.n is a weight value indicating the importance of the nth circuit performance parameter (the value of W.sub.n is a real number set by the user according to need).
[0080] (3) Training and test data sets are generated for training machine learning-based circuit performance models: this is to perform uniform sampling and obtain sample data in the neighborhood of the optimal individual selected in Step (2) through use of the orthogonal Latin square principle, wherein the neighborhood indicates a range within 5% of the circuit performance parameters of the optimal individual or a user-defined range; and conduct SPICE simulations multiple times for the sampled data in the neighborhood of the optimal individual simultaneously, namely parallel SPICE simulations, with the SPICE simulator combined with the parallel technique to obtain the simulation results. The sampled data and the simulation constitute the training and test data sets jointly. Compared to serial simulations, parallel SPICE simulations can reduce time costs and improve the optimization efficiency.
[0081] (4) The training and test data sets obtained from Step (3) are used to train and generate the machine learning-based circuit performance model:
[0082] The training and test data sets obtained from Step (3) are used to train models. The machine learning-based circuit performance model refers to a mathematical model established, based on a machine learning algorithm, in a local range near the global optimal point; it reflects the relations between circuit design parameters and performance indexes. Since the model is established in a local region, it requires less training data than a model built over the global scope, which reduces the time cost. Additionally, as the circuit performance is relatively stable in a local region, a machine learning model established in this way has higher accuracy, which is conducive to improving the accuracy of the analog circuit optimization method. During the local optimization, the established machine learning model is used in place of the SPICE simulator to evaluate the circuit performance. The machine learning model predicts much faster than SPICE simulation and thus greatly improves efficiency.
[0083] (5) Local optimization is to be done based on the well-trained machine learning-based circuit performance model from Step (4):
[0084] A search is carried out in the neighborhood of the optimal individual obtained by the global search, aiming to further search for points with better performance and lower cost values. Then, the circuit performance is no longer evaluated through SPICE simulation, and instead, is predicted with the well-trained machine learning-based circuit performance model from Step (4). Namely, the circuit performance model runs with the circuit design variable values as input items, and then outputs the results as circuit performance parameters. The cost value is calculated with the computational formula as shown in equation (I) based on the circuit performance parameters output by the circuit performance model. The local search is carried out in the direction where the cost value decreases.
[0085] (6) SPICE simulation verification is performed on the results obtained by the machine-learning-based local optimization in Step (5) to improve their accuracy. The optimal individual in the population is then updated with the circuit design variable values and simulation results after SPICE verification, i.e., the original values are replaced with the current ones. The SPICE-verified local optimization results are added to the next-generation population to obtain a better population.
[0086] (7) Determine whether the optimization termination condition is met. If the pre-set iteration times are reached or the goal of circuit optimization is met, then the optimization is finished. If not, Step (8) is entered.
[0087] (8) Enter the evolutionary process of the genetic algorithm: A new population is generated through selection, crossover and mutation. The said selection is to select excellent individuals from the population according to the cost values, namely the smaller the cost value, the more likely the individual is to be selected. The said crossover is to select a pair of individuals randomly from the population according to the pre-set probabilities, namely two sets of parameter values of the circuit are randomly selected, and then utilize the crossover operator to cross over them and generate new individuals. The said mutation is to use the mutation operator to mutate some values of the individuals in a population to produce new individuals;
[0088] (9) Parallel SPICE simulations are performed for the new population generated by Step (8) with the SPICE simulator combined with the parallel computing technique;
[0089] (10) Repeat Step (2) to Step (7).
[0090] To select an appropriate machine learning model, the embodiment compares six common machine learning models: the K-Nearest Neighbor (KNN) model, Support Vector Machine (SVM) model, Deep Neural Network (DNN) model, Random Forests model, Decision Tree (DT) model, and Artificial Neural Network (ANN) model. The accuracy of the six models is compared from the four perspectives of operational amplifier gain, bandwidth, unity-gain bandwidth, and phase margin. The embodiment uses 10 training and test data sets generated through SPICE simulations to train and test each model. The minimum, maximum, and average values of the average relative errors and correlation coefficients of the six models from these four perspectives are listed in Table 5, Table 6, Table 7, and Table 8, which also list the training time of each model.
TABLE 5 (open-loop gain)
(ARE = average relative error; CC = correlation coefficient)
| Model | Main model parameters | Training time (sec) | Data set | ARE min (%) | ARE max (%) | ARE avg (%) | CC min | CC max | CC avg |
|---|---|---|---|---|---|---|---|---|---|
| KNN | k = 1 | 0.2905 | Training set | — | — | — | — | — | — |
| | | | Test set | 0.0250 | 0.1514 | 0.0885 | 0.9949 | 0.9999 | 0.9982 |
| SVR | C = 1000, ‘RBF’ | 0.3043 | Training set | 0.0283 | 0.0608 | 0.0442 | 0.9940 | 0.9993 | 0.9970 |
| | | | Test set | 0.0568 | 0.1047 | 0.0821 | 0.9936 | 0.9994 | 0.9977 |
| DNN | [64, 64, 64, 64, 64], ‘ReLu’, ‘Adam’ | 54.306 | Training set | 0.0312 | 0.1199 | 0.0578 | 0.9745 | 0.9993 | 0.9944 |
| | | | Test set | 0.0398 | 0.1273 | 0.0676 | 0.9885 | 0.9996 | 0.9977 |
| DT | — | 0.2627 | Training set | — | — | — | — | — | — |
| | | | Test set | 0.0463 | 0.2628 | 0.1547 | 0.9387 | 0.9990 | 0.9739 |
| Random Forests | n_estimators = 30 | 0.3084 | Training set | 0.0433 | 0.0833 | 0.0617 | 0.9915 | 0.9977 | 0.9952 |
| | | | Test set | 0.1139 | 0.2359 | 0.1690 | 0.9760 | 0.9933 | 0.9872 |
| ANN | [8], ‘tansig’, ‘trainlm’ | 0.3736 | Training set | 0.0104 | 0.0214 | 0.0106 | 0.9990 | 1.0000 | 0.9997 |
| | | | Test set | 0.0152 | 0.0326 | 0.0207 | 0.9989 | 1.0000 | 0.9995 |
TABLE 6: Bandwidth (ARE = average relative error; CC = correlation coefficient)

| Model | Main model parameters | Training time (sec) | Data set | ARE min (%) | ARE max (%) | ARE avg (%) | CC min | CC max | CC avg |
|---|---|---|---|---|---|---|---|---|---|
| KNN | k = 1 | 0.2923 | Training | — | — | — | — | — | — |
| | | | Test | 0.2191 | 0.2937 | 0.2573 | 0.9970 | 0.999 | 0.9989 |
| SVR | C = 1000, 'RBF' | 0.2840 | Training | 0.0383 | 0.1346 | 0.0609 | 0.9972 | 0.9998 | 0.9992 |
| | | | Test | 0.0760 | 0.3196 | 0.1350 | 0.9972 | 0.9996 | 0.9985 |
| DNN | [64, 64, 64, 64, 64], 'ReLU', 'Adam' | 54.601 | Training | 0.0471 | 0.1220 | 0.0737 | 0.9972 | 0.9995 | 0.9986 |
| | | | Test | 0.0563 | 0.1524 | 0.1056 | 0.9930 | 0.9998 | 0.9981 |
| DT | — | 0.2711 | Training | — | — | — | — | — | — |
| | | | Test | 0.2231 | 0.7051 | 0.4540 | 0.8489 | 0.9984 | 0.9386 |
| Random Forests | n_estimators = 30 | 0.3058 | Training | 0.1208 | 0.1358 | 0.1291 | 0.9941 | 0.9966 | 0.9956 |
| | | | Test | 0.3356 | 0.4618 | 0.3894 | 0.9720 | 0.9961 | 0.9849 |
| ANN | [8], 'tansig', 'trainlm' | 0.3303 | Training | 0.0078 | 0.0857 | 0.0293 | 0.9990 | 1.0000 | 0.9997 |
| | | | Test | 0.0135 | 0.1200 | 0.0376 | 0.9990 | 0.9999 | 0.9995 |
TABLE 7: Unity-gain bandwidth (ARE = average relative error; CC = correlation coefficient)

| Model | Main model parameters | Training time (sec) | Data set | ARE min (%) | ARE max (%) | ARE avg (%) | CC min | CC max | CC avg |
|---|---|---|---|---|---|---|---|---|---|
| KNN | k = 1 | 0.2947 | Training | — | — | — | — | — | — |
| | | | Test | 3.1062 | 4.3661 | 3.8100 | 0.9835 | 0.9999 | 0.9904 |
| SVR | C = 1000, 'RBF' | 0.3113 | Training | 4.6467 | 5.2512 | 5.0144 | 0.9847 | 0.9931 | 0.9894 |
| | | | Test | 3.5008 | 4.4419 | 4.0178 | 0.9918 | 0.9962 | 0.9940 |
| DNN | [64, 64, 64, 64, 64], 'ReLU', 'Adam' | 54.208 | Training | 0.1284 | 0.5980 | 0.2530 | 0.9979 | 1.0000 | 0.9996 |
| | | | Test | 0.1768 | 0.9245 | 0.3257 | 0.9973 | 1.0000 | 0.9996 |
| DT | — | 0.2711 | Training | — | — | — | — | — | — |
| | | | Test | 2.0015 | 2.7802 | 2.4158 | 0.9657 | 0.9859 | 0.9776 |
| Random Forests | n_estimators = 30 | 0.3281 | Training | 0.6553 | 0.7963 | 0.7266 | 0.9972 | 0.9986 | 0.9980 |
| | | | Test | 1.2873 | 1.7314 | 1.5160 | 0.9881 | 0.9953 | 0.9923 |
| ANN | [8], 'tansig', 'trainlm' | 0.1764 | Training | 0.0563 | 0.4320 | 0.2181 | 0.9999 | 1.0000 | 1.0000 |
| | | | Test | 0.0876 | 0.4938 | 0.2314 | 0.9999 | 1.0000 | 1.0000 |
TABLE 8: Phase margin (ARE = average relative error; CC = correlation coefficient)

| Model | Main model parameters | Training time (sec) | Data set | ARE min (%) | ARE max (%) | ARE avg (%) | CC min | CC max | CC avg |
|---|---|---|---|---|---|---|---|---|---|
| KNN | k = 1 | 0.2986 | Training | — | — | — | — | — | — |
| | | | Test | 3.5797 | 4.0517 | 3.7478 | 0.9832 | 0.9996 | 0.9931 |
| SVR | C = 1000, 'RBF' | 0.3068 | Training | 0.4511 | 1.2788 | 0.9022 | 0.9957 | 0.9992 | 0.9974 |
| | | | Test | 0.3937 | 1.3538 | 0.8697 | 0.9950 | 0.9992 | 0.9974 |
| DNN | [64, 64, 64, 64, 64], 'ReLU', 'Adam' | 14.846 | Training | 0.1803 | 2.0427 | 0.6062 | 0.9863 | 0.9999 | 0.9978 |
| | | | Test | 0.2136 | 17.951 | 2.5033 | 0.9918 | 0.9998 | 0.9985 |
| DT | — | 0.2835 | Training | — | — | — | — | — | — |
| | | | Test | 2.1589 | 2.9769 | 2.5648 | 0.9686 | 0.9845 | 0.9777 |
| Random Forests | n_estimators = 30 | 0.3315 | Training | 0.6228 | 0.8214 | 0.7240 | 0.9972 | 0.9987 | 0.9980 |
| | | | Test | 1.3188 | 1.8520 | 1.5429 | 0.9898 | 0.9951 | 0.9924 |
| ANN | [8], 'tansig', 'trainlm' | 0.2877 | Training | 0.1728 | 0.4435 | 0.2946 | 0.9993 | 0.9999 | 0.9996 |
| | | | Test | 0.2437 | 0.7589 | 0.3857 | 0.9990 | 0.9998 | 0.9994 |
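The two accuracy metrics reported in Table 5 through Table 8 can be computed as follows. The exact formulas used in the embodiment are not stated, so the conventional definitions are assumed here: the mean of |prediction − reference| / |reference| expressed in percent, and the Pearson correlation coefficient.

```python
import numpy as np

def average_relative_error(y_true, y_pred):
    # Mean of |prediction - reference| / |reference|, expressed in percent
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return 100.0 * float(np.mean(np.abs(y_pred - y_true) / np.abs(y_true)))

def correlation_coefficient(y_true, y_pred):
    # Pearson correlation between model predictions and SPICE references
    return float(np.corrcoef(np.asarray(y_true, dtype=float),
                             np.asarray(y_pred, dtype=float))[0, 1])
```

The two metrics are complementary: predictions that are uniformly 10% too high have an average relative error of 10% but a correlation coefficient of 1.0, which is why both are reported.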
[0091] As can be seen from Table 5 through Table 8, the ANN model has the smallest average relative error (under 2% in every case) and the largest correlation coefficient (very close to 1.0) among the six models; that is, the ANN model has the highest accuracy in this embodiment. In addition, the training time of the ANN model is very short. Considering both accuracy and training time, the embodiment selects the ANN model as its training model.
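A model of the kind selected here, a small single-hidden-layer ANN, can be trained with scikit-learn as a rough equivalent. Note that the embodiment's 'tansig'/'trainlm' settings are MATLAB-specific; 'tanh' activation with the L-BFGS solver is used below as an approximation, and the analytic target function merely stands in for SPICE-generated training data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Placeholder data standing in for SPICE samples near the global optimum:
# 8 design variables -> 1 circuit performance parameter
X = rng.uniform(-1.0, 1.0, size=(200, 8))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2

# One hidden layer of 8 tanh units, mirroring the [8] / 'tansig' topology
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(8,), activation="tanh",
                 solver="lbfgs", max_iter=2000, random_state=0),
)
model.fit(X, y)
pred = model.predict(X)
```

During local optimization, `model.predict` replaces the SPICE simulator for candidate evaluation, so only the final candidates need re-verification by actual simulation.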
[0092] The optimization steps in this embodiment are the same as those of Embodiment 1, and the embodiment compares the same three optimization methods described in Embodiment 1. The operational amplifier used in this embodiment adopts a 130 nm CMOS technology. All three optimization methods run in a server environment with 80 Intel Xeon 1.9 GHz CPU cores and 125 GB of storage space.
[0093] Table 9 lists the optimization results of the three optimization methods.
TABLE 9: Optimization results of the three methods

| Optimization method | Open-loop gain (V/V) | Bandwidth (kHz) | Unity-gain bandwidth (GHz) | Phase margin (°) | Running time |
|---|---|---|---|---|---|
| GA(SPICE) | 689.088 | 835.455 | 1.151 | 60.457 | 0:05:48 |
| GA(SPICE) + LMS(SPICE) | 709.430 | 1345.455 | 1.177 | 60.007 | 1:36:00 |
| GA(SPICE) + LMS(ANN) | 684.507 | 1285.953 | 1.246 | 60.221 | 0:21:00 |
[0094] As can be seen from