Method for collision avoidance and laser machining tool

11583951 · 2023-02-21

Abstract

The invention relates to a method for collision avoidance of a laser machining head (102) in a machining space (106) of a laser machining tool (100), having the steps of: monitoring a workpiece (112) in the machining space (106) with at least one optical sensor; capturing images of the workpiece (112); detecting a change in an image of the workpiece (112); recognising whether the change comprises an object standing upright relative to the workpiece (112); checking for a collision between the upright object and the laser machining head (102) based on a predetermined cutting plan and/or the current position (1016) of the laser machining head; and controlling the drives for moving the laser machining head (102) for collision avoidance in case of a recognised risk of collision.

Claims

1. A method for collision avoidance of a laser machining head in a machining space of a laser machining tool, having the steps of: monitoring a workpiece in the machining space with at least one optical sensor; capturing images of the workpiece, including a first image captured at a first time and a second image captured at a second time that is different from the first time; detecting a change between the first image and the second image of the workpiece; recognizing whether the detected change indicates that an object, having a predetermined shape according to a predetermined cutting plan and cut from the workpiece, is tilted with at least a portion of the object standing upright relative to the workpiece; checking for a collision between the upright portion of the object and the laser machining head based on the predetermined cutting plan and/or the current position of the laser machining head; and controlling drives to move the laser machining head for collision avoidance in case of recognizing a risk of collision.

2. The method according to claim 1, wherein measuring points are defined along a cutting contour of a cut part and monitored for brightness and/or colour values.

3. The method according to claim 1, wherein the images are captured offset in time, and the change between the images of the workpiece is detected by comparing the second image to the first image, which is a chronologically earlier image of the workpiece.

4. The method according to claim 1, wherein a 3D object of the change is modeled and a collision between the 3D object and the laser machining head is checked for.

5. The method according to claim 1, further comprising: calculating at least two shapes consisting of points and located parallel to a border of a cutting contour in an image, wherein one shape is located inside the border and one shape is located outside the border; extracting image pixels according to the points of the shapes; normalizing the image pixels by calculating a histogram of pixel brightness for each shape; inputting the histograms into a deep neural network comprising an input layer, a plurality of internal layers and an output layer; processing the histograms with the deep neural network; outputting a variable by the deep neural network; and recognizing whether the object in the cutting contour is tilted for a value of the variable being on a first side of a threshold or whether an object in the cutting contour is not tilted for a value of the variable being on a second side of a threshold.

6. The method according to claim 5, wherein two further shapes are calculated, wherein a first further shape is located on the cutting contour and a second further shape covers the whole area inside the cutting contour.

7. The method according to claim 5, wherein, before the step of extracting image pixels, image pixels that are not covered by parts of the laser machining tool are determined; for the determination, a dynamic 3D model of the parts of the laser machining tool is provided and updated with live coordinates from the laser machining tool, and the visible image pixels to be extracted are calculated by comparing the dynamic 3D model with the images.

8. The method according to claim 5, wherein the step of normalizing the image pixels further includes calculating a 2D histogram for the at least two shapes.

9. The method according to claim 1, wherein the recognition is based on already pre-calculated possible positions of cut parts of the workpiece.

10. The method according to claim 1, wherein a cut part is identified and a position of the cut part is compared with already calculated possible positions of this cut part.

11. The method according to claim 1, wherein, when the collision between the upright object and the laser machining head is anticipated, the laser machining head is driven along a trajectory that bypasses the detected change, or is stopped.

12. A method for collision avoidance of a laser machining head in a machining space of a laser machining tool, having the steps of: monitoring a workpiece in the machining space with at least one optical sensor; capturing images of the workpiece, including a first image captured at a first time and a second image captured at a second time that is different from the first time; detecting a change between the first image and the second image of the workpiece; recognizing whether the detected change comprises an object standing upright relative to the workpiece; checking for a collision between the upright object and the laser machining head based on a predetermined cutting plan and/or the current position of the laser machining head; controlling drives to move the laser machining head for collision avoidance in case of a recognized risk of collision; calculating at least two shapes consisting of points and located parallel to a border of a cutting contour in an image, wherein one shape is located inside the border and one shape is located outside the border; extracting image pixels according to the points of the shapes; normalizing the image pixels by calculating a histogram of pixel brightness for each shape; inputting the histograms into a deep neural network comprising an input layer, a plurality of internal layers and an output layer; processing the histograms with the deep neural network; outputting a variable by the deep neural network; and recognizing whether an object in the cutting contour is tilted for a value of the variable being on a first side of a threshold or whether an object in the cutting contour is not tilted for a value of the variable being on a second side of a threshold.

13. The method according to claim 12, wherein two further shapes are calculated, wherein a first further shape is located on the cutting contour and a second further shape covers the whole area inside the cutting contour.

14. The method according to claim 12, wherein, before the step of extracting image pixels, image pixels that are not covered by parts of the laser machining tool are determined; for the determination, a dynamic 3D model of the parts of the laser machining tool is provided and updated with live coordinates from the laser machining tool, and the visible image pixels to be extracted are calculated by comparing the dynamic 3D model with the images.

15. The method according to claim 12, wherein the step of normalizing the image pixels further includes calculating a 2D histogram for the at least two shapes.

Description

(1) The invention will be explained below in exemplary embodiments with reference to the accompanying drawings. In the figures:

(2) FIG. 1 shows a schematic perspective view of a numerically controlled laser machining tool;

(3) FIG. 2 shows a schematic representation of a control of the numerically controlled laser machining tool of FIG. 1;

(4) FIG. 3 shows a schematic representation of two cameras of the laser machining tool for capture of the machining space;

(5) FIG. 4 shows a schematic representation of two other cameras of the laser machining tool for capture of the machining space;

(6) FIG. 5 shows a schematic representation of the capture areas of the four cameras of FIG. 4;

(7) FIG. 6 shows a schematic representation of a flown-away part of a workpiece;

(8) FIG. 7 shows a schematic representation of a cut-out part of a workpiece with measuring points;

(9) FIG. 8 shows a schematic representation of the cut-out part of FIG. 7 showing the part extracted by image processing;

(10) FIG. 9 shows a schematic representation of the matching of the extracted part;

(11) FIG. 10 shows a flowchart of a method for collision avoidance of a laser machining head;

(12) FIG. 11 shows a flow chart of a general method for collision avoidance of a laser machining head; and

(13) FIG. 12 shows an exemplary depiction of shapes of a cutting contour.

(14) FIG. 1 shows a schematic perspective view of a numerically controlled laser machining tool 100, in particular a laser cutting machine with a laser machining head 102, in particular a laser cutting head. The laser cutting head 102 is arranged on a movable bridge 104 so that it can be moved in at least the x and y directions in a machining space 106 of the laser machining tool 100. A laser source 108 generates laser light and supplies it to the laser cutting head 102 via a light guide 110. A workpiece 112, for example a metal sheet, is arranged in the machining space 106 and is cut by the laser beam.

(15) FIG. 2 shows a schematic representation of a controller 200 of the numerically controlled laser machining tool 100 from FIG. 1. A numerical control unit 202, also called CNC (Computerised Numerical Control), executes the cutting plan as an EtherCAT master 204 in that position signals are output via an EtherCAT bus 206 to the drives 208 acting as EtherCAT slaves 210. One of the drives 208 is shown as an example of an EtherCAT slave 210. This EtherCAT slave 210 and other EtherCAT slaves write data, for example from sensors such as incremental encoders, to the EtherCAT bus 206, and read data, which for example is used to control outputs, from the EtherCAT bus 206.
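To make the cyclic master/slave data exchange concrete, the following Python sketch models it with plain, hypothetical classes; a real machine would use an EtherCAT stack rather than Python objects, and all names here are illustrative.

```python
from dataclasses import dataclass

@dataclass
class DriveSlave:
    """Hypothetical stand-in for a drive acting as EtherCAT slave 210."""
    position_setpoint: float = 0.0  # output process data written by the master
    encoder_value: float = 0.0      # input process data from the incremental encoder

class CncMaster:
    """Hypothetical stand-in for the CNC acting as EtherCAT master 204."""

    def __init__(self, drives):
        self.drives = drives

    def cycle(self, setpoints):
        """One bus cycle: write position signals to the drives, read sensor data back."""
        for drive, sp in zip(self.drives, setpoints):
            drive.position_setpoint = sp
        return [d.encoder_value for d in self.drives]

drives = [DriveSlave(), DriveSlave()]
master = CncMaster(drives)
encoder_values = master.cycle([120.0, 45.5])  # next x/y setpoints from the cutting plan
```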

(16) In this example, four cameras 212 are provided, the arrangement of which in the machining space 106 of the numerically controlled laser machining tool will be explained in more detail in the following figures. Preferably, CMOS cameras or image recording units without their own image processing are provided, which enables a very high processing speed.

(17) The image data of the cameras 212 are forwarded to a graphics processing unit 214, where the processing of the image data takes place. The graphics processing unit 214 preferably comprises a plurality of GPUs, for example 512 or more, and is preferably configured for real-time image processing. Particularly suitable are highly parallel GPUs with 256 or more cores. The graphics processing unit 214 also operates as an EtherCAT slave 210 and thus is in direct communication with the numerical control unit 202.

(18) The graphics processing unit 214 and/or the numerical control unit 202 are configured to carry out the methods or operations illustrated in FIGS. 6 through 10 and described below. In particular, the graphics processing unit 214 is configured to process data from the cameras 212 to recognise changes to the workpiece, to model a 3D object from a change, and, optionally together with the numerical control unit 202, to check for a collision between the 3D object and the laser machining head based on a predetermined cutting plan and/or the current position of the laser machining head. In addition, the numerical control unit 202 is configured for collision avoidance in the event of a recognised risk of collision.

(19) The graphics processing unit 214 obtains the cutting geometry or trajectory of the laser cutting head from the numerical control unit 202 via the EtherCAT bus 206. Before a collision event occurs, the graphics processing unit 214 can signal the impending collision via the EtherCAT bus 206. The signalling can be sent to the numerical control unit 202 and/or directly to the drive(s) 208 for the fastest possible response, such as an emergency stop or bypass.

(20) This can be done in a graded manner depending on the time available before a collision. If there is sufficient time for an evasive manoeuvre, the graphics processing unit 214 sends data, such as the position or coordinates of the collision, to the numerical control unit 202, which in turn calculates an evasive route and controls the drives 208 accordingly. The new alternate route is also sent to the graphics processing unit 214, which then continues to check the new route for collision.

(21) If there is insufficient time for an evasive manoeuvre, the graphics processing unit 214 sends emergency stop commands directly to the drives 208 to achieve the fastest possible stop of the laser cutting head.
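The graded response of paragraphs (20) and (21) can be summarised in a small decision function; the replanning budget of 0.25 s is an assumed placeholder, not a value from the patent.

```python
def choose_countermeasure(time_to_collision_s, replan_time_s=0.25):
    """Graded response: evade if there is time to replan, otherwise stop."""
    if time_to_collision_s > replan_time_s:
        # Enough time: send collision coordinates to the CNC, which computes
        # an evasive route and controls the drives accordingly.
        return "evade"
    # Not enough time: emergency-stop commands go directly to the drives.
    return "emergency_stop"

print(choose_countermeasure(1.2))   # -> evade
print(choose_countermeasure(0.05))  # -> emergency_stop
```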

(22) A computing unit 216 of the graphics processing unit 214 can be realised by means of a CPU, a GPU, or a combination of both. The computing unit 216 has enough computing power to evaluate the received camera data in real time and to decide whether a collision is imminent. This must be done fast enough that the numerical control unit 202 of the machine can take appropriate action to avoid the collision. The computing unit 216 or the graphics processing unit 214 is connected to the numerical control unit 202, for example via the illustrated EtherCAT bus 206.

(23) All elements of the controller 200, in particular the graphics processing unit 214, the cameras 212, and the bus 206, are configured for a real-time capability of the system.

(24) FIGS. 3 through 5 show schematic representations of a camera system 300 of the numerically controlled laser machining tool 100 with at least two cameras 212. In addition to the cameras 212, suitable illuminations, for example LED lights, can be provided to enhance the quality of the camera images.

(25) FIG. 3 shows two cameras 212 for which the capture areas 302 are aligned in the same direction. The capture area 302 of a first camera 212 captures a first half of the workpiece 112 or of the machining space 106. The capture area 302 of a second camera 212 captures a second half of the workpiece 112 or of the machining space 106. Thus, the two cameras capture the entire machining space 106. The two cameras 212 are arranged laterally offset from a longitudinal axis A of the machining space 106, so that the capture areas 302 extend laterally or obliquely into the machining space 106.

(26) FIG. 4 shows a further schematic representation of the camera system 400 of the numerically controlled laser machining tool 100. Here, the two cameras 212 are arranged mirrored about the longitudinal axis A of the machining space 106 in comparison to the arrangement of FIG. 3. Likewise, the capture areas 402 are inclined and, in comparison to FIG. 3, aligned in the opposite direction relative to the longitudinal axis A. Analogous to FIG. 3, the capture area 402 of a first camera 212 captures a first half of the workpiece 112 or of the machining space. The capture area 402 of a second camera 212 captures a second half of the workpiece 112 or of the machining space.

(27) FIG. 5 shows a further schematic illustration of the camera system 500 of the numerically controlled laser machining tool 100 with the capture areas 302 and 402 of the four cameras (not shown here).

(28) In this example, cameras are installed on both sides of the cutting area or machining area to refine the evaluation. The combination of both viewpoints 302 and 402 provides information about the depth of the observed object. This depth or spatial information enables the modelling of a 3D object from a change in the workpiece 112.
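As an illustration of how the two viewpoints yield depth, the following sketch triangulates one matched contour point with OpenCV; the projection matrices and pixel coordinates are placeholders standing in for the calibrated cameras described below.

```python
import numpy as np
import cv2

# Placeholder camera matrices; in practice they come from the calibrations
# described with FIG. 7.
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 512.0],
              [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])                   # first viewpoint
P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])   # second viewpoint

# Pixel coordinates of the same contour point seen from both sides (2xN arrays).
pts1 = np.array([[700.0], [520.0]])
pts2 = np.array([[655.0], [521.0]])

hom = cv2.triangulatePoints(P1, P2, pts1, pts2)  # 4xN homogeneous coordinates
xyz = (hom[:3] / hom[3]).ravel()
print("estimated 3D position of the observed point:", xyz)
```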

(29) FIG. 6 shows a schematic illustration of a flown-away part or cut part 600 of a workpiece. The cut part 600 shown here is located next to the cutting contour 602, where the cut part 600 was originally located, that is, before the cutting. The illustration shown here can be, for example, a shot from a single camera.

(30) Such cut parts 600, which fly away due to the gas pressure and land anywhere on the raw material or the workpiece 112, can be detected by first creating a reference depiction of the workpiece 112 and then continuously comparing current depictions or shots with the reference depiction. This can be done in particular at the points where the raw material has not yet been processed. If a position or change in the comparisons is classified as critical, the exact position, in particular the height, of the part resting over the workpiece 112 can be determined with a 3D fitting.
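A minimal sketch of this reference comparison using OpenCV, assuming grayscale shots and illustrative threshold and minimum-area values:

```python
import cv2

ref = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)  # reference depiction
cur = cv2.imread("current.png", cv2.IMREAD_GRAYSCALE)    # current shot

diff = cv2.absdiff(cur, ref)                               # per-pixel change
_, mask = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)  # assumed threshold
kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)      # suppress sensor noise

# Connected regions of change are candidate flown-away parts whose exact
# position and height can then be refined with the 3D fitting.
n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
candidates = [centroids[i] for i in range(1, n)
              if stats[i, cv2.CC_STAT_AREA] > 200]  # assumed minimum area
```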

(31) As a possible remedy in the event of a critical classification, that is, a potential collision between the cut part 600 and the laser machining head, the flown-away cut part 600 can be blown away with gas pressure, the area can be excluded from cutting, or the operation can be discontinued.

(32) FIG. 7 shows a schematic representation of a cut-out part or cut part 700 of a workpiece 112. The two illustrations at the top of FIG. 7 can in turn be shots from one or more cameras. The lowermost illustration in FIG. 7 depicts a colour-along-edges algorithm for detecting a change in an image of the workpiece 112.

(33) For the colour-along-edges algorithm, a very accurate projection of 3D points in the machine coordinate system onto the 2D images is desirable. For this, the cameras 212 must be calibrated. Image processing executed in, for example, the graphics processing unit 214 is used for the calibration and projection. Two different calibrations are performed. The first is the calibration of the intrinsic camera parameters. The second is the calibration of the translation and rotation parameters of the coordinate system of the camera 212 relative to the coordinate system of the machine 100.

(34) The calibration of the intrinsic camera parameters is complex and not automated. A single pass can be sufficient as long as the lens on the camera 212 is not adjusted. For this calibration, images of a chessboard at different angles are needed. The intrinsic parameters are then calibrated by image processing using these images. This calibration creates the software model of the camera and lens.
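A sketch of such a chessboard calibration with OpenCV, assuming a board with 9 by 6 inner corners and image files named chessboard_*.png:

```python
import glob
import cv2
import numpy as np

pattern = (9, 6)  # inner chessboard corners; the board geometry is assumed
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_pts, img_pts, size = [], [], None
for path in glob.glob("chessboard_*.png"):  # shots of the board at different angles
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(img, pattern)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)
        size = img.shape[::-1]

# K (camera matrix) and dist (lens distortion) together form the software
# model of camera and lens mentioned above.
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
```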

(35) The calibration of the translation and rotation parameters can be repeated with each movement of the cameras 212 or the fixtures thereof. This calibration is easy to automate, so it is recommended to periodically recalibrate these parameters. Movements over time are to be expected due to vibrations or slight thermal deformation of the machine housing. At least 4 points in the machine coordinate system and in the image must be known for this calibration.

(36) A Harris corner of sufficient size can be attached to the cutting head as a target for this calibration. This Harris corner can be recognised with the cameras 212 and compared with the current cutting head coordinate. Corresponding machine and image coordinates can thus be paired.

(37) The target, for example a Harris corner, is preferably attached to the cutting head. This target can be recognised automatically if its approximate position on the image is known. This is the case with a periodic recalibration.

(38) For the calibration process, the following steps are therefore performed. First, the cutting head is positioned at four defined positions. At each of these positions, one image is taken with each of the two cameras or from each of the two viewing angles. On each image, the image coordinates of the Harris corner are determined. From the machine coordinates of the four positions and the image coordinates of the Harris corner, the translation and rotation parameters are calculated.
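These four correspondences suffice for a pose estimate, for example with OpenCV's solvePnP; all coordinate values and intrinsics below are placeholders.

```python
import cv2
import numpy as np

# Machine coordinates of the four defined cutting-head positions (placeholders, mm).
machine_pts = np.array([[0, 0, 0], [500, 0, 0],
                        [500, 300, 0], [0, 300, 0]], dtype=np.float32)
# Image coordinates of the Harris-corner target detected at those positions.
image_pts = np.array([[312, 845], [1604, 831],
                      [1590, 270], [330, 256]], dtype=np.float32)

# Intrinsics from the chessboard calibration (placeholder values here).
K = np.array([[1000.0, 0.0, 960.0], [0.0, 1000.0, 600.0], [0.0, 0.0, 1.0]])
dist = np.zeros(5)

ok, rvec, tvec = cv2.solvePnP(machine_pts, image_pts, K, dist)
R, _ = cv2.Rodrigues(rvec)  # rotation of the machine frame in the camera frame
```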

(39) From the workpiece 112, the cut part 700 is cut out by means of a laser beam 702. This process is observed by the cameras. Measurements are taken at certain measuring points 704 along the cutting contour by image processing executed in the graphics processing unit 214. The measuring points 704 are used to detect a change in an image of the workpiece 112.

(40) Now, when the cut part 700 tilts, the change in the amount of light reaching the camera along the contour 706 is detected in a first step. This change in the amount of light occurs through the change in the reflection angle of the cut part 700. This can mean both additional brightness and reduced brightness.

(41) The tilted cut part 700 partially disappears under the remaining workpiece 112, resulting in a strong contrast. Here the contrast is shown as a change between white and black. In fact, changes in colour values, brightness values and/or contrast values can be used.

(42) These changes are analysed and a check is made to see if a threshold to start further processing has been reached. According to the illustration at the bottom of FIG. 7, the difference between the colour value of a measuring point 704 lying within the contour 706 and a point located outside the contour 706 is determined and then evaluated.

(43) If both colour values of the reflected light are the same or deviate only very slightly, then the cut part 700 is not tilted (FIG. 7, top), or the tilted cut part 700 and the remaining workpiece 112 are at about the same height (FIG. 7, middle), such as in the lower left corner of the cut part 700. In this case, there is no risk and the value is below the threshold. Further action is not necessary and monitoring is continued.

(44) If both colour values are different, then the cut part 700 is no longer within the contour 706 at these measuring points (FIG. 7, middle), for example in the upper left corner of the cut part 700. In this case, there is also no risk and the value is below the threshold. Further action is not necessary and monitoring is continued.

(45) If the colour values are partly different, then the cut part 700 is located outside the contour 706 and above the remaining workpiece 112 at these measuring points (FIG. 7, middle), such as in the upper right corner of the cut part 700. In this case, there is a risk of a collision since the cut part 700 rises, and the threshold is exceeded.
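The three cases above can be condensed into a per-contour check; the brightness threshold and the band used to decide "partly different" are illustrative assumptions.

```python
import numpy as np

def check_contour(img, inside_pts, outside_pts, thresh=30.0):
    """Compare brightness at paired measuring points just inside and just
    outside the contour (grayscale image, (x, y) point lists)."""
    inside = np.array([float(img[y, x]) for x, y in inside_pts])
    outside = np.array([float(img[y, x]) for x, y in outside_pts])
    differing = np.abs(inside - outside) > thresh
    frac = differing.mean()
    if frac < 0.2:
        return "flat"          # values equal everywhere: no risk
    if frac > 0.8:
        return "fell_through"  # values different everywhere: no risk
    return "tilted_up"         # partly different: start the 3D fitting
```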

(46) The threshold for starting the second algorithm is then reached. The second algorithm is called 3D fitting and will be described with reference to FIGS. 8 and 9. In contrast to the colour-along-edges algorithm, in which a change and thus a potential risk are quickly recognised, the 3D fitting involves recognising whether the corresponding part actually poses a risk to the cutting head and thus to the machining process. It is quite possible that a change is detected but does not turn out to be a risk. Thanks to this bifurcation of the algorithm, such cases do not lead to a stopping of the machining process.

(47) FIG. 8 shows a schematic representation of the cut-out part of FIG. 7, showing the part extracted by image processing. The contour 800 of the upright cut part 700 in the camera image is determined. For this purpose, subtraction algorithms are used, for example. This determination also takes place in the graphics processing unit 214.

(48) FIG. 9 shows a schematic representation of the matching of the extracted part. The matching or comparison also takes place in the graphics processing unit 214.

(49) From the cutting plan, first the critical cut part 700 which was detected by the colour-along-edges algorithm is selected. In contrast to the camera image (see FIG. 8), the complete contour 900 of the part 700 is obtained from the cutting plan.

(50) A possible matching algorithm works as described below.

(51) The original contour 900 is rotated by the 3D fitting algorithm about one, several, or all three axes. Thus, the contour 900 is modelled in all possible positions in which the cut part 700 can lie. By way of example, the contours 900a, 900b, and 900c are shown here.

(52) This modelling of the contours 900a, 900b, and 900c, or of the cut parts, can be done before the start of cutting, so that the algorithm is as efficient as possible during checking, since only the comparison, but not the modelling, must be performed.

(53) Now, when the information of the model is available and a cut part tilts, the contour 800 of the tilted part 700 recognised by the camera is compared with the models 900a, 900b, and 900c.

(54) The best match between the model and the contour 800 of the tilted part 700 is determined. Here it is the contour 900a. From this, it can then be calculated at which position and by how much the part stands upright. Together with the information on where the cutting head will move within the next few seconds, it can be calculated whether or not a collision is possible.
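A sketch of this matching, simplified to tilts about a single axis with an orthographic projection and using OpenCV's shape comparison as the similarity measure (the patent does not prescribe a specific metric):

```python
import cv2
import numpy as np

def precompute_poses(contour_xy, tilt_angles_deg):
    """Model the planned contour (Nx2 array) in possible tilted positions
    before cutting starts. Only rotation about the x axis is sketched here;
    the patent allows rotations about one, several, or all three axes."""
    poses = []
    pts3d = np.hstack([contour_xy, np.zeros((len(contour_xy), 1))])
    for ax in tilt_angles_deg:
        a = np.deg2rad(ax)
        Rx = np.array([[1, 0, 0],
                       [0, np.cos(a), -np.sin(a)],
                       [0, np.sin(a), np.cos(a)]])
        projected = (pts3d @ Rx.T)[:, :2].astype(np.float32)
        poses.append((ax, projected))
    return poses

def best_match(observed_contour, poses):
    """Compare the camera-extracted contour 800 with each modelled pose;
    cv2.matchShapes returns lower values for more similar shapes."""
    return min((cv2.matchShapes(observed_contour, p, cv2.CONTOURS_MATCH_I1, 0.0), ax)
               for ax, p in poses)
```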

(55) If a collision is possible, the area around the position is marked as a risk zone. Now it must be decided what the control unit should initiate as a countermeasure. The collision can be prevented, for example, by stopping the machine rapidly. An even more efficient solution is for the cutting head to drive around the risk zone, to lift up to avoid the collision, or to combine both.

(56) FIG. 10 shows a flow chart of a method for collision avoidance of a laser machining head in a machining space of a laser machining tool.

(57) In a first step 1000, camera data are generated, i.e., images of the workpiece are captured with at least one optical sensor, preferably two, four, or more sensors.

(58) In a second step 1002, changes in an image of the workpiece are detected by means of the previously described colour-along-edges algorithm. If a local change is detected in step 1004, the method proceeds to block 1010. If not, branching back to the monitoring in step 1002 results in a monitoring loop. This algorithm detects local changes at cut parts or the like, not global changes such as feeding or removing a workpiece. The two steps 1002 and 1004 form a local change recognition process.

(59) The numerical control process 1012 is executed in the numerical control unit. The numerical control unit knows the cutting plan 1014 and the current position 1016 of the cutting head, and in step 1018 calculates the planned track or route of the cutting head from the given cutting plan 1014 and/or the current position of the laser machining head.

(60) The cutting plan 1014 is supplied to a process for modelling the interior space or the machining space. This process, as well as the local change recognition process, operates in the collision monitoring system formed in or executed by the graphics processing unit.

(61) The block 1006 of the interior modelling process is supplied with the cutting plan 1014. A topology of the interior or the workpiece to be machined is created from the cutting plan 1014. The topology comprises the workpiece as well as the cutting pattern planned on the workpiece and can comprise the respective circumferences and locations of the cut parts. This topology is supplied to block 1010.

(62) In block 1010, the 3D fitting is carried out, that is to say the modelling of a 3D object of the change, as described above. For this purpose, the camera data 1000 are supplied to block 1010. The 3D fitting is started when a local change is detected in block 1004. As the output of the modelling, a 3D topology 1008 of the change is provided, such as a contour 800.

(63) This 3D topology 1008, like the planned track 1018, is supplied to a collision detector process. This process is formed in or executed by the graphics processing unit.

(64) The 3D topology 1008 and the planned track 1018 are supplied to a collision detector 1020, an algorithm in the graphics processing unit and/or the numerical control unit 1012. The collision detector 1020 checks whether the 3D topology 1008 lies within the planned track. If it is determined in step 1022 that a collision is possible, the method proceeds to block 1024. If not, branching back to the monitoring in step 1002 (not shown) results in a monitoring loop. The block or step 1002 is executed continuously.

(65) In step 1024, a countermeasure is taken by driving the laser machining head for collision avoidance in the event of a recognised risk of collision. The countermeasure is a stop and/or an evasion or bypassing of the obstacle. Blocks 1020, 1022, and 1024 are part of the collision detector process.

(66) The result of step 1024 is supplied to the CNC process 1012 for processing and implementation. For an emergency stop, for example, the drives of the laser machining head can also be controlled directly, that is, without the involvement of the numerical control unit.
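The flow of FIG. 10 can be summarised as a loop; every object and method name below is a hypothetical stand-in for the processes running on the graphics processing unit and the CNC.

```python
def collision_monitoring(camera, detector, fitter, cnc):
    """Illustrative wiring of the FIG. 10 flowchart."""
    while True:
        frame = camera.grab()                    # step 1000: camera data
        change = detector.local_change(frame)    # steps 1002/1004
        if change is None:
            continue                             # monitoring loop
        topology = fitter.fit_3d(frame, change)  # block 1010 -> 3D topology 1008
        track = cnc.planned_track()              # step 1018: planned track
        if not track.intersects(topology):       # collision detector 1020/1022
            continue
        if track.time_to(topology) > cnc.replan_time:
            cnc.evade(topology)                  # step 1024: bypass the risk zone
        else:
            cnc.emergency_stop()                 # step 1024: stop the drives directly
```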

(67) FIG. 11 shows a flow chart of a general method for collision avoidance of a laser machining head. In a first step 1100, the entire cutting area or the entire machining space is continuously monitored by the sensor system.

(68) Checks for a local change as stated above are also made continuously in step 1101. A local change is usually caused by a cutting process. If there is no local change, branching back to step 1100 results in a monitoring loop.

(69) If a local change is recognised, the method proceeds to step 1102 where the change is analysed as outlined above.

(70) FIG. 12 shows an exemplary depiction of shapes of a cutting contour such as the cutting contour 602 of FIG. 6. For the cutting contour, the system calculates four sets of points, called shapes 1200, 1202, 1204, and 1206. The shapes 1200, 1202, 1204, and 1206 may be arranged on an image of the workpiece or the cutting contour.

(71) A first shape 1200 consists of points lying on the actual cutting line. A second shape 1202 consists of a trace of points inside the cutting contour, at an offset of, for example, five millimetres. A third shape 1204 consists of a trace of points outside the cutting contour, also at an offset of, for example, five millimetres. A fourth shape 1206 covers the whole area inside the cutting contour.

(72) Image pixels of the four shapes 1200, 1202, 1204, and 1206 are extracted from the image. In a normalization step, histograms of pixel brightness are calculated for each of the four shapes 1200, 1202, 1204, and 1206. Each histogram has, for example, 32 bins. In addition, a co-occurrence histogram is calculated for the second shape 1202 and the third shape 1204. This co-occurrence or 2D histogram includes 32 by 32 bins and correlates the brightness of corresponding points inside and outside of the cut. The x axis of the 2D histogram may correspond to the second shape 1202 and the y axis of the 2D histogram to the third shape 1204.
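A sketch of this normalization step with NumPy; normalising each histogram to sum 1 is an assumption, as the patent only specifies the bin counts.

```python
import numpy as np

def brightness_histogram(pixels, bins=32):
    """Normalized brightness histogram of the pixels extracted for one shape."""
    h, _ = np.histogram(pixels, bins=bins, range=(0, 255))
    return h / max(h.sum(), 1)  # normalisation to sum 1 is an assumption

def co_occurrence_histogram(inside_px, outside_px, bins=32):
    """32x32 2D histogram correlating the brightness of corresponding points
    inside (shape 1202, x axis) and outside (shape 1204, y axis) the cut."""
    h, _, _ = np.histogram2d(inside_px, outside_px, bins=bins,
                             range=[[0, 255], [0, 255]])
    return h / max(h.sum(), 1)
```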

(73) Then, the concatenation of this input data into a vector is calculated. The size of the vector differs between the neural network for two cameras and the neural network for one camera.

(74) The neural network accepts input data as a vector containing the concatenated histograms. For the neural network that predicts contours visible by two cameras, the following sequence is used:
Shape 1200 histogram from right camera (32 values)
Shape 1200 histogram from left camera (32 values)
Shape 1202 histogram from right camera (32 values)
Shape 1202 histogram from left camera (32 values)
Shape 1204 histogram from right camera (32 values)
Shape 1204 histogram from left camera (32 values)
Shape 1206 histogram from right camera (32 values)
Shape 1206 histogram from left camera (32 values)
Co-occurrence histogram from right camera (32×32=1024 values)
Co-occurrence histogram from left camera (32×32=1024 values)
This totals 2304 input values for the neural network.

(75) For the neural network that predicts contours visible by only one camera, the sequence is as follows:
Shape 1200 histogram (32 values)
Shape 1202 histogram (32 values)
Shape 1204 histogram (32 values)
Shape 1206 histogram (32 values)
Co-occurrence histogram (32×32=1024 values)
This totals 1152 input values for the neural network.
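The concatenation for both network variants can be sketched as follows; the container layout (keys by shape id and camera) is a hypothetical convention.

```python
import numpy as np

def build_input_vector(hists, cooc, two_cameras=True):
    """Concatenate the histograms in the order listed above. hists maps
    (shape id, camera) to a 32-bin histogram; cooc maps camera to a
    32x32 co-occurrence histogram."""
    cams = ("right", "left") if two_cameras else ("single",)
    parts = []
    for shape in (1200, 1202, 1204, 1206):
        for cam in cams:
            parts.append(hists[(shape, cam)])  # 32 values each
    for cam in cams:
        parts.append(cooc[cam].ravel())        # 1024 values each
    return np.concatenate(parts)  # 2304 values (two cameras) or 1152 (one camera)
```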

(76) The neural network is in this example a deep neural network consisting of one flattening layer as input layer, five internal dense layers with batch normalization, and one dense layer with sigmoid activation as output layer.

(77) From the normalized and concatenated input data, the deep neural network outputs one floating point value in the range from 0.0 to 1.0 per contour. If the value is below 0.5, the contour is predicted to be safe. If the value is 0.5 or above, the contour is predicted to be dangerously tilted.
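A minimal Keras sketch of this architecture; the width of the internal layers, their activation, and the training configuration are assumptions, as the patent specifies only the layer types.

```python
import tensorflow as tf
from tensorflow.keras import layers

def make_model(n_inputs=2304, width=256):
    """Flattening input layer, five internal dense layers with batch
    normalization, one dense layer with sigmoid activation as output."""
    model = tf.keras.Sequential([layers.Flatten(input_shape=(n_inputs,))])
    for _ in range(5):
        model.add(layers.Dense(width, activation="relu"))  # width is assumed
        model.add(layers.BatchNormalization())
    model.add(layers.Dense(1, activation="sigmoid"))  # one value in 0.0..1.0
    model.compile(optimizer="adam", loss="binary_crossentropy")
    return model

# A predicted value below 0.5 means "safe"; 0.5 or above means "dangerously tilted".
```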

(78) In FIG. 10, showing a flowchart of a method for collision avoidance of a laser machining head, the above-described implementation of the neural network may replace the interior modelling (steps 1006 and 1010) and the provision of the 3D topology 1008. Alternatively, the above-described implementation of the neural network may replace the recognition of local changes (steps 1002 and 1004), the interior modelling (steps 1006 and 1010), and the provision of the 3D topology 1008.

(79) The method presented here for collision avoidance of a laser machining head in a machining space of a laser machining tool enables simple and precise recognition of possible obstacles in the planned track in real time and collision avoidance in case of a recognised risk of collision.