MODEL-FREE OPTIMIZATION METHOD OF PROCESS PARAMETERS OF INJECTION MOLDING
20210389737 · 2021-12-16
Inventors
- Peng Zhao (Hangzhou, CN)
- Zhengyang DONG (Hangzhou, CN)
- Kaipeng JI (Hangzhou, CN)
- Hongwei Zhou (Hangzhou, CN)
- Jianguo Zheng (Hangzhou, CN)
- Tingyu WANG (Hangzhou, CN)
- Jianzhong Fu (Hangzhou, CN)
CPC classification
G06F17/11
PHYSICS
G06F17/16
PHYSICS
G05B13/042
PHYSICS
B29C2945/76949
PERFORMING OPERATIONS; TRANSPORTING
B29C45/766
PERFORMING OPERATIONS; TRANSPORTING
International classification
B29C45/76
PERFORMING OPERATIONS; TRANSPORTING
Abstract
The present invention discloses a model-free optimization method of process parameters of injection molding to solve the problems of the frequent tests and the lack of adaptive adjustment of different parameters in existing optimization methods. The method does not need to build a surrogate model between a product quality index and a process parameter; instead, it drives the process parameters to converge near the optimal solution by an on-line iteration method. The present invention calculates the gradient direction at the current point by an iterative gradient estimation method, and uses the adaptive moment estimation algorithm to allocate an adaptive step size for each parameter. The method can significantly reduce the cost and time required in process parameter optimization, which greatly helps to improve the optimization efficiency of process parameters of injection molding.
Claims
1. A model-free optimization method of process parameters of injection molding, characterized by comprising the steps of: (1) determining a quality index Q to be optimized and a quality index target value Q.sub.target directed to the injection molding technology to be optimized; (2) determining n process parameters to be optimized, determining and pretreating initial values of the process parameters to be optimized to obtain an initial parameter sample; (3) respectively perturbing each parameter of a current parameter sample to obtain n sets of perturbed parameter samples, finally obtaining n+1 sets of parameter samples comprising the current parameter sample, and acquiring the corresponding n+1 quality index values Q.sub.1; (4) using the n+1 quality index values Q.sub.1 obtained by acquisition to calculate a gradient value at the current parameter sample; (5) based on the obtained gradient value, updating the current parameter sample, and using the updated parameter sample to calculate a corresponding quality index value Q.sub.2; making a comparison between the quality index value Q.sub.2 and the quality index target value Q.sub.target: outputting the optimum process parameters if the preset requirement is met; or, using the updated parameter sample and quality index value Q.sub.2 to replace the set of parameter samples having the worst optimal target value among the n+1 sets of parameter samples in step (3) and its quality index value Q.sub.1, and returning to step (4).
2. The model-free optimization method of process parameters of injection molding according to claim 1, characterized in that: in step (2), the pretreating process is as follows:
3. The model-free optimization method of process parameters of injection molding according to claim 1, characterized in that: the quality index value Q.sub.1, and the quality index value Q.sub.2 are obtained by simulations or experiments.
4. The model-free optimization method of process parameters of injection molding according to claim 1, characterized in that: in step (4), a method of calculating the gradient value is as follows:
5. The model-free optimization method of process parameters of injection molding according to claim 1, characterized in that: in step (5), a method of updating the current parameter sample is as follows:
X.sub.update.sup.0=X.sup.0−α.sup.0·∇J(X.sup.0) wherein, α.sup.0 is a step size of parameter adjustment; X.sup.0 is the current parameter sample; X.sub.update.sup.0 is the updated current parameter sample; and ∇J(X.sup.0) is the gradient value.
6. The model-free optimization method of process parameters of injection molding according to claim 1, characterized in that: in step (5), Adam (Adaptive Moment Estimation) is used to achieve the updating of the current parameter sample based on the accumulation of history gradient information.
7. The model-free optimization method of process parameters of injection molding according to claim 1, characterized in that: the following Formula is used to achieve the updating of the current parameter sample:
8. The model-free optimization method of process parameters of injection molding according to claim 7, characterized in that: during updating for the mth time, the correction of the first-order exponential moving average of the corresponding history gradients is v̂.sup.m; the correction of the second-order exponential moving average of the corresponding history gradients is ŝ.sup.m; and the corrections are respectively calculated by the following Formula:
9. The model-free optimization method of process parameters of injection molding according to claim 8, characterized in that: β.sub.1 is 0.85-0.95, and β.sub.2 is 0.99-0.999.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION OF THE EMBODIMENTS
[0054] The present invention will be described in detail with reference to the flow diagram of the present invention. The model-free optimization method of process parameters of injection molding provided by the present invention uses an iterative gradient estimation method to calculate the gradient direction at the current point as the parameter adjustment direction, thus optimizing the parameters continually. During the parameter optimization process, the adaptive moment estimation algorithm is used to achieve adaptive adjustment of each parameter's step size, and the parameters obtained after each adjustment are subjected to a test to obtain the corresponding optimal target value; this iterative optimization is repeated until the target value is up to the standard.
[0055] The present invention will be further described in detail in combination with the examples below.
[0056] As shown in
[0057] (1) Method initiation: determining the quality index Q to be optimized (e.g., product weight, warping, and the like), the quality index target value Q.sub.target, and a permissible error δ of the optimization objective; and choosing the parameters X=(x.sub.1, x.sub.2, . . . , x.sub.n).sup.T to be optimized and their feasible scope, as shown in Formula (1):
[0058] J(X) denotes the difference between the current quality index value and the target value. The quality index Q(X) can be regarded as an implicit function of the process parameters X, and X consists of n parameters x.sub.1, x.sub.2, . . . , x.sub.n, such as packing pressure and melt temperature; U.sub.i and L.sub.i denote the upper and lower bound values of parameter x.sub.i.
[0059] Different parameters differ by orders of magnitude; for example, there is a difference of two orders of magnitude between melt temperature and injection time. Therefore, the parameters are usually normalized by Formula (2) during the parameter optimization process; the normalization ensures that each parameter changes to the same extent.
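As a non-authoritative illustration (not part of the disclosure), the normalization of Formula (2) and the de-normalization of Formula (8) can be sketched in Python with NumPy; the bounds and initial point below are taken from Table 1 and paragraph [0080] of the example:

```python
import numpy as np

def normalize(x, lower, upper):
    """Map each physical parameter into [0, 1] (Formula (2))."""
    return (x - lower) / (upper - lower)

def denormalize(x_norm, lower, upper):
    """Map normalized parameters back to physical values (Formula (8))."""
    return x_norm * (upper - lower) + lower

# The five parameters of Table 1: injection time, dwell time,
# dwell pressure, melt temperature, die temperature
lower = np.array([0.4, 0.5, 30.0, 200.0, 20.0])
upper = np.array([2.0, 4.0, 60.0, 230.0, 50.0])

x0 = np.array([1.6, 3.0, 50.0, 200.0, 20.0])   # initial point of the example
x0_norm = normalize(x0, lower, upper)          # ≈ [0.75, 0.714, 0.667, 0, 0]
```

The normalized values reproduce X.sub.norm.sup.0 of paragraph [0080], which is a quick consistency check on Formula (2).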
[0060] (2) Initial parameter selection and initial gradient calculation. Firstly, an initial point X.sup.0 is selected within the feasible scope, and each parameter is perturbed respectively; that is, x.sub.i.sup.j=x.sub.i.sup.0+δ.sub.i when i=j, and x.sub.i.sup.j=x.sub.i.sup.0 otherwise, where i denotes the ith parameter, j denotes the jth set of data, x.sub.i.sup.j denotes the ith parameter in the jth set of parameter samples, x.sub.i.sup.0 is the ith parameter in the current parameter sample, and δ.sub.i is the perturbing quantity, whose size and direction are not strictly limited. This yields n+1 sets of process parameters and their corresponding quality values after perturbation, thereby obtaining the corresponding optimal target values J (the difference between the corresponding quality value and the quality target value):
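The perturbation scheme above — one perturbed component per sample, giving n+1 samples in total — can be sketched as follows (an illustrative Python helper, not from the disclosure; the numbers mirror the normalized example of paragraph [0080]):

```python
import numpy as np

def perturb_samples(x0, delta):
    """Build n+1 parameter samples: the current point X^0 plus n samples,
    the jth of which has only its jth component perturbed by delta_j
    (perturbation size and direction are not strictly limited)."""
    n = len(x0)
    samples = [np.array(x0, dtype=float)]
    for j in range(n):
        x = np.array(x0, dtype=float)
        x[j] += delta[j]
        samples.append(x)
    return samples

# Normalized initial point and perturbations matching paragraph [0080]
samples = perturb_samples([0.75, 0.714, 0.667, 0.0, 0.0],
                          [0.062, 0.028, 0.066, 0.0667, 0.0667])
```

Each of the n+1 samples would then be molded (or simulated) once to obtain its quality value.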
[0061] J(X) is expanded according to its Taylor series at X.sup.0:
[0062] A matrix form is written below:
[0063] The gradient value at the initial point X.sup.0 is calculated by a gradient computation matrix (6);
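A minimal sketch of this gradient estimation, assuming the first-order Taylor system described above (the helper name `estimate_gradient` is illustrative, not from the disclosure): each perturbed sample contributes one row of an n×n linear system whose solution approximates ∇J(X.sup.0).

```python
import numpy as np

def estimate_gradient(samples, J_values):
    """Estimate the gradient of J at samples[0] from the first-order
    Taylor expansion J(X^j) - J(X^0) ≈ (X^j - X^0)^T · ∇J(X^0),
    using the n perturbed samples to build an n×n linear system."""
    X0, J0 = samples[0], J_values[0]
    A = np.array([x - X0 for x in samples[1:]])     # n×n difference matrix
    b = np.array([Jj - J0 for Jj in J_values[1:]])  # differences of J
    return np.linalg.solve(A, b)
```

For a linear objective the estimate is exact; for a general objective it is a first-order approximation whose accuracy depends on the perturbation sizes.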
[0064] (3) Formula (6) is used to calculate the gradient; its reverse direction, −∇J(X.sup.0), is the optimization direction of the parameters. A next iteration point is produced by Formula (7), and the new process parameters obtained will be closer to the preset target value;
X.sub.update.sup.0=X.sup.0−α.sup.0∇J(X.sup.0) (7)
[0065] α.sup.0 is the step size of the current parameter adjustment. After obtaining the updated parameter X.sub.update.sup.0, it needs to be de-normalized (as shown in Formula (8)) so that the parameter returns to a physical parameter value, which allows technicians to obtain the quality index value Q(X.sub.update.sup.0) at X.sub.update.sup.0 through experimental operation.
X.sub.denorm=X.sub.norm×(U−L)+L (8)
[0066] (4) Judging whether Q(X.sub.denorm) satisfies the requirement: if |Q(X.sub.denorm)−Q.sub.target|<δ, the optimization objective has satisfied the termination condition and the optimization process stops; at this time, X.sub.denorm is the optimal parameter. Otherwise, the termination condition is not satisfied, and step (5) is performed.
[0067] (5) Updating the gradient computation matrix. The newly obtained point (X.sub.update.sup.0, J(X.sub.update.sup.0)) is used to substitute the point whose quality value is the worst in the previous gradient computation matrix (assume without loss of generality that it is (X.sup.0, J(X.sup.0))), and J(X) is expanded according to its Taylor series at X.sub.update.sup.0 to obtain the updated gradient computation matrix, at which point X.sup.0=X.sub.update.sup.0:
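The worst-point replacement of step (5) can be sketched as follows (illustrative Python, not from the disclosure; moving the new point to the front so it serves as the next expansion point is one possible bookkeeping choice):

```python
import numpy as np

def replace_worst(samples, J_values, x_new, J_new):
    """Replace the sample whose objective value J is worst (largest) with
    the newly evaluated point; the new point is then moved to the front of
    the list so it acts as the expansion point X^0 in the next iteration."""
    worst = int(np.argmax(J_values))
    samples[worst] = x_new
    J_values[worst] = J_new
    samples[0], samples[worst] = samples[worst], samples[0]
    J_values[0], J_values[worst] = J_values[worst], J_values[0]
    return samples, J_values
```

Reusing n of the n+1 previously evaluated points this way is what lets each iteration cost only one new experiment instead of n+1.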
[0068] (6) Adaptive step adjustment. After obtaining the gradient information, existing optimization methods usually update parameters by stochastic gradient descent. In such a case, the step size α.sup.0 is set as a constant, and the parameter variation ΔX=α.sup.0·∇J(X.sup.0) depends only on the gradient of the current single point, overlooking the accumulation of history gradient information; yet that accumulation can provide a useful suggestion for the following optimization direction and step, allowing the step size α.sup.0 to be adjusted more adaptively in subsequent optimization steps. Therefore, the adaptive moment estimation algorithm is used in the present invention, and after adaptive adjustment an individual optimization step is obtained for each parameter. The method has the following specific steps:
[0069] (6.1) Calculation of the first-order exponential moving average of history gradients:
[0070] β.sub.1 is the first-order exponential decay rate coefficient, generally set as 0.9; m is the index of the current update, and the corresponding computational formula is chosen according to the update count. The first-order exponential moving average is calculated to accelerate the optimization procedure. For example, if the gradient of a certain parameter component is positive in the preceding several optimization steps, the first-order exponential moving average v records and accumulates this trend and increases the step of that parameter in the subsequent optimization step, thus achieving acceleration. The function of the first-order exponential moving average v is similar to the concept of momentum in physics; therefore, it is called the momentum algorithm.
[0071] (6.2) Calculation of the second-order exponential moving average of history gradients:
[0072] β.sub.2 is the second-order exponential decay rate coefficient, generally set as 0.999; and ⊙ denotes element-wise multiplication of matrices. The optimization objective is often sensitive to only a portion of the parameters and insensitive to changes in the others. For an optimization problem with multiple parameters, different optimization step sizes need to be configured for different parameters so as to match their influence on the optimization objective. The second-order exponential moving average s evaluates the influence of each parameter by accumulating the square of its gradient, thus improving the adaptability of the optimization method. If the square of the gradient of the target function on a certain dimension is always small, the descending step on that dimension will increase to accelerate convergence; conversely, for parameters having larger fluctuation, the optimization step will decrease, thus reducing fluctuation.
[0073] (6.3) Error correction of the exponential moving averages. The initial values of v and s are generally set to 0, which results in an error in the exponential moving averages; according to Formula (12), this kind of error caused by the initial value is corrected. During updating for the mth time, the correction of the first-order exponential moving average of the corresponding history gradients is v̂.sup.m, and the correction of the second-order exponential moving average of the corresponding history gradients is ŝ.sup.m:
[0074] where m is the index of the current update; v.sup.m-1 is the first-order exponential moving average of the history gradients after the (m−1)th update; s.sup.m-1 is the second-order exponential moving average of the history gradients after the (m−1)th update; v.sup.0=0; s.sup.0=0; and ∇J(X.sup.m-1) is the gradient value obtained after the (m−1)th update.
[0075] (6.4) Obtaining the updating formula (13) of the current parameter sample, and returning to step (4).
[0076] η is a preset step size coefficient, and δ here is a very small positive number (e.g., 10.sup.−8) used to avoid a zero denominator.
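Steps (6.1)-(6.4) together amount to one adaptive moment estimation update; a minimal sketch under the stated defaults (β.sub.1=0.9, β.sub.2=0.999, small constant 10.sup.−8), with the helper name `adam_step` chosen for illustration:

```python
import numpy as np

def adam_step(x, grad, v, s, m, beta1=0.9, beta2=0.999, eta=0.01, eps=1e-8):
    """One adaptive update of the parameter sample (steps (6.1)-(6.4)).
    v and s are the first/second-order exponential moving averages of the
    history gradients; m is the 1-based index of the current update."""
    v = beta1 * v + (1 - beta1) * grad           # (6.1) momentum accumulation
    s = beta2 * s + (1 - beta2) * grad * grad    # (6.2) element-wise square
    v_hat = v / (1 - beta1 ** m)                 # (6.3) bias corrections
    s_hat = s / (1 - beta2 ** m)
    x_new = x - eta * v_hat / (np.sqrt(s_hat) + eps)   # (6.4), Formula (13)
    return x_new, v, s
```

On the first update (m=1) the bias-corrected ratio reduces to grad/|grad| element-wise, so each parameter initially moves by roughly η in the direction opposite its gradient — the per-parameter adaptive step the text describes.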
EXAMPLE
[0078] Hereafter, the optimization of process parameters of an optical plastic lens is taken as an example to describe the specific implementation of the present invention in the process parameter optimization of plastic injection molding. The injection molding machine used in the example is a Zhafir VE400 from China HAITIAN GROUP, and the material used is polymethyl methacrylate (PMMA) from Sumipex.
[0079] Firstly, step (1) was performed for the initialization of the optimization method. Product weight served as an optimization objective, and the standard weight of the product was 8.16 g, that is, the target value of the product weight was 8.16 g, and process parameters to be optimized and scope thereof were selected, as shown in Table 1.
TABLE 1. Process parameters to be optimized and their scope
  Parameter           Symbol     Scope      Unit
  Injection time      x.sub.1    0.4-2.0    s
  Dwell time          x.sub.2    0.5-4.0    s
  Dwell pressure      x.sub.3    30-60      MPa
  Melt temperature    x.sub.4    200-230    ° C.
  Die temperature     x.sub.5    20-50      ° C.
[0080] Afterwards, step (2) was performed to choose an initial parameter randomly within the scope of the parameters. The initial parameter selected in the example was X.sup.0=[x.sub.1.sup.0, x.sub.2.sup.0, . . . , x.sub.5.sup.0].sup.T=[1.6 s, 3 s, 50 MPa, 200° C., 20° C.].sup.T, which was then normalized to X.sub.norm.sup.0=[0.75, 0.714, 0.667, 0, 0].sup.T; the product weight corresponding to the initial parameter was 25.3371 g. Each parameter component was perturbed to obtain 5 further sets of process parameters: X.sub.norm.sup.1=[0.812, 0.714, 0.667, 0, 0].sup.T, X.sub.norm.sup.2=[0.75, 0.742, 0.667, 0, 0].sup.T, X.sub.norm.sup.3=[0.75, 0.714, 0.733, 0, 0].sup.T, X.sub.norm.sup.4=[0.75, 0.714, 0.667, 0.0667, 0].sup.T, and X.sub.norm.sup.5=[0.75, 0.714, 0.667, 0, 0.0667].sup.T; each set was subjected to a test to obtain the corresponding product weight (the product weight values corresponding to points 1-5 on the x-coordinate in the figure).
[0081] A step size α=0.01 was set in step (3), giving a new process parameter X.sub.norm.sup.new=[0.749, 0.70, 0.616, 0.011, 0.0047].sup.T after one optimization step.
[0082] Step (4) was used to judge whether the new process parameter X.sub.norm.sup.new satisfied the termination conditions; since it did not, the optimization proceeded to step (5): the new process parameter X.sub.norm.sup.new and its corresponding product weight replaced X.sub.norm.sup.0, and the gradient computation matrix was updated to obtain the gradient information at X.sub.norm.sup.new.
[0083] Step (6) was executed to adaptively adjust the optimization step of each parameter component, returning to step (3). This cycle was repeated until the obtained product weight satisfied the desired error range.
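The cycle described in the example can be summarized in one loop. This is an illustrative sketch only, not the patented implementation: `measure_q` stands in for the molding experiment or simulation, and a least-squares solve replaces the direct matrix inversion for robustness when the reused samples become nearly collinear.

```python
import numpy as np

def optimize(measure_q, q_target, x0, delta, tol, eta=0.1,
             beta1=0.9, beta2=0.999, eps=1e-8, max_iter=300):
    """Model-free loop sketched from steps (1)-(6): perturb, estimate the
    gradient from the sample set, take one adaptive-moment step, replace
    the worst sample, and repeat until |Q - Q_target| < tol."""
    # n+1 initial samples: the start point plus one perturbation per parameter
    samples = [np.array(x0, dtype=float)]
    for i in range(len(x0)):
        x = np.array(x0, dtype=float)
        x[i] += delta[i]
        samples.append(x)
    J = [abs(measure_q(x) - q_target) for x in samples]
    v = np.zeros(len(x0))
    s = np.zeros(len(x0))
    for m in range(1, max_iter + 1):
        X0, J0 = samples[0], J[0]
        A = np.array([x - X0 for x in samples[1:]])
        b = np.array(J[1:]) - J0
        grad = np.linalg.lstsq(A, b, rcond=None)[0]   # Taylor-based estimate
        v = beta1 * v + (1 - beta1) * grad            # (6.1)
        s = beta2 * s + (1 - beta2) * grad * grad     # (6.2)
        step = eta * (v / (1 - beta1**m)) / (np.sqrt(s / (1 - beta2**m)) + eps)
        x_new = X0 - step                             # (6.4)
        J_new = abs(measure_q(x_new) - q_target)
        if J_new < tol:                               # termination, step (4)
            return x_new
        worst = int(np.argmax(J))                     # replacement, step (5)
        samples[worst], J[worst] = x_new, J_new
        samples[0], samples[worst] = samples[worst], samples[0]
        J[0], J[worst] = J[worst], J[0]
    return samples[0]
```

Each pass through the loop costs a single new quality measurement, which is the practical advantage the example demonstrates over re-sampling all n+1 points every iteration.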
[0084] The figure shows the variation trend of experiment number versus product weight in the optimization procedure of the example; it can be seen that after 10 iterations of optimization, the product weight reaches the preset target. Moreover, for the optimization method using adaptive step adjustment, the number of tests required is obviously smaller than that of the optimization method without adaptive step adjustment.
[0085] The optimization method of the present invention can be used not only for the process optimization of plastic injection molding, but also for injection molding technologies using other materials, such as rubber, magnetic powder, metal and the like, or for process optimization based on a similar principle.