METHOD FOR PREDICTING STATUS OF MACHINING OPERATION

20210312262 · 2021-10-07

    Abstract

    A method for predicting status of machining operation, in particular chatter occurrence, comprising the following steps: training a neural network having an input layer, at least one hidden layer, an output layer and a plurality of weights in a pre-training phase and a final-training phase, wherein in the pre-training phase a pre-training data set is provided to the neural network to obtain a pre-trained neural network and in the final-training phase a final-training data set is fed to the pre-trained neural network to obtain a final-trained neural network, wherein the pre-training data set comprises simulated data and the final-training data set comprises experimental data; and performing prediction by utilizing the final-trained neural network to derive prediction data.

    Claims

    1. A method for predicting status of machining operation, in particular chatter occurrence, comprising: a) training a neural network having an input layer, at least one hidden layer, an output layer and a plurality of weights in a pre-training phase and a final-training phase, wherein in the pre-training phase a pre-training data set is provided to the neural network to obtain a pre-trained neural network and in the final-training phase a final-training data set is fed to the pre-trained neural network to obtain a final-trained neural network, wherein the pre-training data set comprises simulated data and the final-training data set comprises experimental data; and b) performing prediction by utilizing the final-trained neural network to derive prediction data.

    2. The method according to claim 1, wherein the weights of the pre-trained neural network determined during the pre-training phase are adapted in the final-training phase by utilizing the final-training data set.

    3. The method according to claim 1, wherein the amount of the data included in the pre-training data set is larger than the amount of data included in the final-training data set.

    4. The method according to claim 1, wherein the pre-training data set comprises exclusively simulated data generated using a physical model and/or the final-training data set comprises exclusively experimental data.

    5. The method according to claim 4, wherein the pre-training data set is a collection of a plurality of samples, which includes a value of the at least one input and a value of the at least one output, wherein the value of the output is determined by providing the value of the input to the physical model as input data.

    6. The method according to claim 4, wherein at least two final-trained neural networks are obtained by training at least two neural networks independently using at least two different pre-training data sets and each pre-training data set is generated by varying at least one variable parameter, in particular the variable parameter is a part of input data of the physical model.

    7. The method according to claim 5, wherein the physical model is a stability model defining the chatter occurrence in the machine tool and the inputs include machining parameters such as axes position, axes feed direction, depth of cut, spindle speed and workpiece parameters, and the outputs are stability status of the machining operation.

    8. The method according to claim 6, wherein the variable parameters include one or more of the following: Young's modulus of a tool, Young's modulus of a holder, density of the tool, loss factor of the tool, loss factor of the holder, outer diameter equivalent cylinder of fluted section, translational tool-holder contact stiffness, rotational tool-holder contact stiffness, rotational tool-holder contact damping, tangential cutting coefficient and radial cutting coefficient.

    9. The method according to claim 7, wherein optimized prediction data is determined by averaging the prediction data determined by using each final-trained neural network, in particular the prediction data represent the chatter occurrence in a machine tool including stability and chatter frequency.

    10. The method according to claim 1, wherein the method further comprises determining a stability lobe diagram from the prediction data and/or optimized prediction data.

    11. A prediction unit configured to conduct the method according to claim 1.

    12. A machine tool comprising a controller configured to control the machine tool, a monitoring unit, and the prediction unit according to claim 11, wherein the monitoring unit is configured to detect and characterize the chatter occurrence during the machining and to prepare the experimental data to be fed into the prediction unit.

    13. A system including a plurality of machine tools and a prediction unit according to claim 12.

    Description

    BRIEF DESCRIPTION OF THE DRAWINGS

    [0043] In order to describe the manner in which advantages and features of the disclosure can be obtained, in the following a more particular description of the principles briefly described above will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings. These drawings depict only exemplary embodiments of the disclosure and are not therefore to be considered to be limiting of its scope. The principles of the disclosure are described and explained with additional specificity and detail through the use of the accompanying drawings in which:

    [0044] FIG. 1 illustrates a physical model;

    [0045] FIG. 2 illustrates a neural network;

    [0046] FIG. 3 illustrates pre-training, final training and prediction;

    [0047] FIG. 4 illustrates an example of a physical model requiring uncertain parameters;

    [0048] FIG. 5 illustrates an example of a physical model requiring uncertain parameters;

    [0049] FIG. 6 illustrates multiple neural networks;

    [0050] FIG. 7 illustrates a model for stability prediction;

    [0051] FIG. 8 illustrates a model for stability prediction;

    [0052] FIG. 9 illustrates a stability lobe diagram;

    [0053] FIG. 10 illustrates one embodiment of predicting chatter occurrence;

    [0054] FIG. 11 illustrates one embodiment of predicting chatter occurrence;

    [0055] FIG. 12 illustrates one embodiment of predicting chatter occurrence;

    [0056] FIG. 13 illustrates one embodiment of predicting chatter occurrence; and

    [0057] FIG. 14 illustrates one embodiment of predicting chatter occurrence.

    EXEMPLARY EMBODIMENTS

    [0058] FIGS. 1, 2 and 3 illustrate one embodiment of the present invention based on transfer learning. Transfer learning describes a method where a model that has been trained on one problem is used as a starting point for a slightly different but related problem. FIG. 1 depicts a schematic of a physical model comprising three inputs, x1, x2 and x3, and two outputs, y1 and y2. The number of inputs and outputs is, however, not limited to the numbers shown in FIG. 1. A pre-training data set is derived from the physical model. The pre-training data set includes a large number of samples, for example in the range of 1000 to 10000. For different samples, different values of the inputs are set to calculate the corresponding outputs.
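    The derivation of such a pre-training data set can be sketched as follows. The polynomial stand-in model, the input ranges and the sample count are purely illustrative assumptions, not the stability model of the disclosure:

```python
import random

def physical_model(x1, x2, x3):
    """Toy stand-in for the physical model: three inputs, two outputs.
    The disclosure uses a machining stability model; this polynomial
    is purely illustrative."""
    y1 = x1 * x2 + 0.5 * x3
    y2 = x1 - x2 ** 2 + x3
    return (y1, y2)

def generate_pretraining_set(n_samples, seed=0):
    """Build the simulated pre-training set by varying the inputs
    and recording the model outputs for each sample."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n_samples):
        x = [rng.uniform(-1.0, 1.0) for _ in range(3)]
        samples.append((x, physical_model(*x)))
    return samples

pretraining_set = generate_pretraining_set(1000)
```

    Each sample pairs one input vector with the outputs computed by the model, exactly as described for the pre-training data set above.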

    [0059] During the pre-training, the pre-training data set is fed into a neural network shown in FIG. 2 having an input layer, several hidden layers and an output layer. The input layer has the same number of nodes as the inputs, namely three in this example. The output layer has the same number of nodes as the outputs, namely two in this example.

    [0060] FIG. 3 depicts an overview of the pre-training phase, the final-training phase and the prediction. In the pre-training phase, the weights of the neural network are determined by utilizing the simulated data from the physical model, and at the end of the pre-training phase a pre-trained neural network having determined weights is obtained. This pre-trained neural network is hence a mapping of the physical model and is aware of the main influencing factors and the general behavior of the physical model.

    [0061] The pre-trained neural network having the determined weights is further trained in the final-training phase. Contrary to the pre-training phase, an experimental data set is provided in this phase to adjust the weights determined in the pre-training phase and to further improve the prediction accuracy.
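    A minimal numerical sketch of this two-phase training, substituting a plain linear model and gradient descent for the neural network (the coefficients and data sets are hypothetical): the fine-tuning phase simply resumes training from the pre-trained weights on the smaller experimental set.

```python
import numpy as np

def train(X, y, w0, lr=0.1, epochs=200):
    """Gradient-descent least-squares training of a linear model y ~ X @ w.
    Starting from w0 is what enables transfer learning: fine-tuning
    resumes from the pre-trained weights instead of from scratch."""
    w = w0.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

rng = np.random.default_rng(0)

# Pre-training phase: large simulated data set from a (toy) physical model.
true_sim = np.array([1.0, -2.0])      # behaviour of the idealized model
X_sim = rng.normal(size=(1000, 2))
y_sim = X_sim @ true_sim
w_pre = train(X_sim, y_sim, w0=np.zeros(2))

# Final-training phase: small experimental set; reality deviates slightly
# from the model, so the weights are adapted rather than relearned.
true_exp = np.array([1.2, -1.8])
X_exp = rng.normal(size=(20, 2))
y_exp = X_exp @ true_exp
w_final = train(X_exp, y_exp, w0=w_pre, epochs=50)
```

    Because the pre-trained weights already capture the model's general behavior, only a few fine-tuning epochs on a small experimental set are needed to close the gap to the observed behavior.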

    [0062] After the final-training, the final-trained neural network is ready to be deployed for prediction.

    [0063] FIGS. 4, 5 and 6 show an embodiment by which the prediction accuracy can be further increased by training more than one neural network independently. FIG. 4 illustrates an example of a physical model which includes variable parameters having uncertainty. In addition to the inputs x1, x2 and x3, further variable parameters p1, p2 and p3 influence the output of the physical model. These parameters are not known precisely. The values of the variable parameters can vary in a range, thereby introducing uncertainty into the outputs and reducing the prediction accuracy. In order to minimize the influence of this considerable uncertainty on the prediction accuracy, a plurality of neural networks are applied. The three neural networks shown in FIG. 6 merely demonstrate that more than one neural network is implemented; the number of neural networks is not limited to three.

    [0064] Before starting the pre-training, a pre-training data set must be generated using the physical model. In order to calculate the output of the model by varying the inputs, the values of the variable parameters must be pre-determined. Since the variable parameters are uncertain, different values of the variable parameters are selected to determine the pre-training data sets for the different neural networks. In this way, the negative influence of the uncertainty of the variable parameters on the final prediction results can be reduced. As shown in FIG. 5, three groups of values of the variable parameters p1, p2 and p3 are determined. For generating the simulated data set for NN1, the values v11, v21 and v31 are used; the plurality of samples are generated for NN1 by keeping the values v11, v21 and v31 fixed and varying the values of the inputs x1, x2 and x3. Likewise, the simulated data set for NN2 is generated with the values v12, v22 and v32, and the simulated data set for NN3 with the values v13, v23 and v33, in each case varying only the inputs.
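    This per-network data generation can be sketched as follows; the toy model and the three parameter draws are illustrative placeholders for the values v11..v33 of FIG. 5:

```python
import random

def toy_model(x, p):
    """Toy physical model taking inputs x1..x3 and variable parameters
    p1..p3. Purely illustrative; the disclosure uses a stability model."""
    return x[0] * p[0] + x[1] * p[1] - x[2] * p[2]

def build_ensemble_datasets(param_draws, n_samples, seed=0):
    """One simulated data set per network: within each data set the
    variable parameters are held fixed (one draw), only the inputs vary."""
    rng = random.Random(seed)
    datasets = []
    for p in param_draws:
        samples = []
        for _ in range(n_samples):
            x = [rng.uniform(0.0, 1.0) for _ in range(3)]
            samples.append((x, toy_model(x, p)))
        datasets.append(samples)
    return datasets

# Three draws of (p1, p2, p3), e.g. (v11, v21, v31) for NN1 and so on.
draws = [(1.0, 0.5, 0.2), (1.1, 0.4, 0.3), (0.9, 0.6, 0.1)]
datasets = build_ensemble_datasets(draws, n_samples=100)
```

    Each resulting data set trains its own network, so the ensemble spans the plausible range of the uncertain parameters rather than committing to a single guess.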

    [0065] At the end of the pre-training phase, three pre-trained neural networks are obtained. In the final-training phase, the same experimental data are fed into the pre-trained neural networks. If sufficient experimental data are available, three different final-training data sets may also be an option. The three final-trained neural networks can be used individually to make predictions. Optimized prediction data are determined by calculating the average of the prediction data sets obtained from each of the three final-trained neural networks.

    [0066] FIGS. 7 to 14 demonstrate an application of the method for chatter prediction. The physical model for stability prediction used for a machine tool, shown in FIG. 7, in particular for milling, comprises five inputs: x1 as clamping length (S_cl), x2 as spindle speed (n), x3 as depth of cut (a_p), x4 as entry angle (φ_st), and x5 as exit angle (φ_ex). Additional information, namely the variable parameters p1 to p13 to be provided to the model, is listed in FIG. 8. These variable parameters are associated with uncertainty. The two output parameters of this physical model represent the two conditions: stable and chatter. The physical model provides a prediction of whether the machining under the given inputs runs stable or not. Typically, stability lobe diagrams are used to distinguish between stable and unstable cutting depths as a function of the spindle speed.

    [0067] Each of the variable parameters has an estimated reference value. However, this value can vary in a range according to a given probabilistic model, which can be taken as a normal distribution defined by the value of its standard deviation. The values shown in FIG. 8 are merely for exemplary purposes.

    [0068] In a first step, the pre-training data set is generated by selecting the variable parameters, preparing the inputs of the samples, and feeding these inputs into an existing stability model so that the output of the stability model can be derived. For the generation of the simulated data set, it is not directly clear which values should be assumed in the modelling stage for the variable parameters summarized in FIG. 8. Here, an extension to the classic transfer learning idea is applied, which takes the modelling uncertainty into account to further improve the accuracy of the chatter prediction. It is based on the idea of ensemble learning, where multiple networks are trained and their individual estimates are combined to obtain a single prediction.

    [0069] FIG. 10 shows the selected variable parameters used to prepare the different pre-training data sets for the different neural networks. All variable parameters are sampled 20 times from their distributions defined in FIG. 8, because 20 neural networks are applied. At the same time, 1000 simulated cutting samples are generated, where spindle speed, depth of cut, entry angle and exit angle are sampled uniformly from defined ranges. These ranges can be derived from the range of the experimental data set; for example, if the experimental data set was obtained with spindle speeds between 6000 rpm and 15000 rpm, the same range can be selected for the simulated data set. The generated pre-training data set, consisting of the inputs spindle speed, depth of cut, entry angle and exit angle as well as the clamping length, and the outputs stable/unstable, is used for pre-training of one network. FIGS. 11a and 11b illustrate how to prepare the pre-training data sets for NN1 and NN20. Each pre-training data set includes 1000 samples. Different values of the five inputs are fed into the physical model to determine the output, while the same variable parameter values for NN1 are used for all of its samples.
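    The two sampling steps described here can be sketched as follows. The reference values and standard deviations for the variable parameters are hypothetical placeholders (the actual values are listed in FIG. 8), and only the spindle-speed range is taken from the text:

```python
import random

rng = random.Random(42)

# Hypothetical (reference value, standard deviation) pairs for two of the
# thirteen variable parameters; FIG. 8 lists the actual values.
param_dists = {"p1": (200e9, 10e9), "p2": (7800.0, 200.0)}

# One normal-distribution draw per network: 20 networks -> 20 draws.
param_draws = [
    {name: rng.gauss(mu, sigma) for name, (mu, sigma) in param_dists.items()}
    for _ in range(20)
]

# 1000 simulated cutting samples; the input ranges mirror those covered
# by the experimental data set (e.g. 6000-15000 rpm for spindle speed;
# the other ranges here are illustrative assumptions).
cutting_samples = [
    {
        "spindle_speed_rpm": rng.uniform(6000, 15000),
        "depth_of_cut_mm": rng.uniform(0.5, 10.0),
        "entry_angle_deg": rng.uniform(0.0, 180.0),
        "exit_angle_deg": rng.uniform(0.0, 180.0),
    }
    for _ in range(1000)
]
```

    Each of the 20 parameter draws is then paired with the cutting samples and pushed through the stability model to produce one pre-training data set per network.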

    [0070] The generated pre-training data sets are then used for training the neural networks; in this example 20 neural networks are applied, however the number of neural networks serves merely as an example and is not limited to 20. The pre-trained neural networks are then aware of the main influences on the stability lobes and have also learnt the concept of stability pockets, which repeat with the spindle speed. Nevertheless, these networks may perform poorly when their predictions are compared with actual experimental stability states.

    [0071] While the simulated data are targeted at making the neural network aware of the general shape of the stability lobes shown in FIG. 9 and their basic dependencies, the goal of the fine-tuning is to compensate for four sources of error which were possibly introduced in the pre-training stage: inaccuracy in the modelling of the tool-workpiece contact zone dynamics, uncertainty about the cutting coefficients, potential operational changes of the dynamics and cutting coefficients (e.g. spindle speed dependency), and inaccuracies of the stability model used. This problem is solved in the fine-tuning stage. A much smaller experimental data set, e.g. 50 samples as shown in FIG. 13, is now fed to the pre-trained networks, whose initial weights are equal to the optimized network weights from the pre-training stage. The network weights now adapt slightly to match the neural network predictions with the experimentally observed stability states.

    [0072] In the next step, the fine-tuned networks can be used for stability predictions of new cutting scenarios, and much more accurate stability predictions are possible. FIG. 14 shows the use of multiple neural networks for prediction.

    [0073] When predicting a stability chart for new process conditions, each of the networks makes a prediction. For example, the stability lobes shown in FIG. 14 are the results from the different final-trained neural networks. All network predictions are averaged using a truncated mean approach, where very high and very low predictions are excluded.
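    A truncated mean of this kind can be sketched as follows; the trim fraction and the example per-network predictions are illustrative assumptions:

```python
def truncated_mean(predictions, trim=0.1):
    """Average the ensemble predictions after discarding the lowest and
    highest fraction `trim` on each side, so that outlier networks do
    not dominate the combined prediction."""
    s = sorted(predictions)
    k = int(len(s) * trim)
    kept = s[k:len(s) - k] if k > 0 else s
    return sum(kept) / len(kept)

# 20 hypothetical per-network predictions of a critical depth of cut (mm);
# the two extreme values stand for poorly fine-tuned networks.
preds = [2.1, 2.0, 2.2, 1.9, 2.1, 2.0, 2.3, 2.0, 2.1, 1.8,
         2.2, 2.0, 2.1, 1.9, 2.0, 2.2, 2.1, 2.0, 9.0, 0.1]
combined = truncated_mean(preds, trim=0.1)
```

    With a 10% trim on 20 predictions, the two lowest and two highest values are dropped before averaging, so the outliers 0.1 and 9.0 do not distort the combined stability estimate.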