METHOD AND APPARATUS FOR DETERMINING THE SIZE OF DEFECTS DURING A SURFACE MODIFICATION PROCESS

20230038435 · 2023-02-09

Abstract

A method is specified for determining a size of a defect occurring in a surface region of a component while a surface modification process is performed on the surface region. The method includes identifying an occurrence of a defect occurring in a surface region of a component on a basis of a set of images and determining a size of the defect in a method step separate from the identification of the defect. In addition, an apparatus and a computer program are specified for determining a size of a defect occurring in a surface region of a component while a surface modification process is performed on the surface region.

Claims

1. A computer-implemented method for determining a size of a defect occurring in a surface region of a component while a surface modification process is performed on the surface region, the method comprising: identifying an occurrence of a defect occurring at a surface region of a component based on a set of images; and determining a size of the defect identified at the surface region in response to the occurrence of the defect being identified.

2. The method according to claim 1, wherein the size of the defect is determined using a You Only Look Once style (YOLO-style) model.

3. The method according to claim 1, wherein the identifying the occurrence of the defect based on the set of images further comprises: providing an image sequence comprising a plurality of image frames of the surface region to be evaluated, each image frame showing an image section of the surface region and with a plurality of image sections of the plurality of image frames at least partially overlapping one another; assigning the plurality of image frames to at least one of at least two image classes, of which at least one image class is a defect image class having a defective attribute; checking whether multiple image frames of a specifiable number of directly consecutive image frames in the image sequence have been assigned to the defect image class; and outputting a defect signal when the multiple image frames of the specifiable number of directly consecutive image frames have been assigned to the defect image class.

4. The method according to claim 3 further comprising providing a trained neural network, wherein the plurality of image frames is assigned to the image classes by the trained neural network.

5. The method according to claim 3 further comprising recording the image sequence of the surface region to be evaluated, wherein a rate of recording the image sequence is faster than a rate of determining the size of the defect.

6. The method according to claim 3, wherein the image section of each of the plurality of image frames is moved together with a surface modification device for carrying out the surface modification process.

7. The method according to claim 4, wherein: the size of the defect is determined using a You Only Look Once style (YOLO-style) model, and the YOLO-style model has been trained with the same training data as the trained neural network.

8. The method according to claim 3, wherein the determining the size of the defect is based on the defect signal being output.

9. An apparatus for determining a size of a defect occurring in a surface region of a component while a surface modification process is performed on the surface region, the apparatus comprising one or more processors and one or more non-transitory computer-readable mediums storing instructions that are executable by the one or more processors, wherein the one or more processors operate as: a data processing unit that is configured to: identify an occurrence of a defect occurring at a surface region of a component based on a set of images; and determine a size of the defect in response to the occurrence of the defect being identified.

10. The apparatus according to claim 9, wherein the data processing unit is configured to determine the size of the defect using a You Only Look Once style (YOLO-style) model.

11. The apparatus according to claim 9, wherein to identify the occurrence of the defect based on the set of images, the data processing unit is configured to: assign one or more image frames of an image sequence comprising a plurality of image frames of the surface region to be evaluated to at least one image class of at least two image classes, each image frame showing an image section of the surface region and with a plurality of image sections of the plurality of image frames at least partially overlapping one another, and wherein at least one image class is a defect image class having a defective attribute; check whether multiple image frames of a specifiable number of directly consecutive image frames in the image sequence have been assigned to the defect image class; and output a defect signal when the multiple image frames of the specifiable number of directly consecutive image frames have been assigned to the defect image class.

12. The apparatus according to claim 11, wherein the data processing unit comprises a trained neural network for assigning each of the plurality of image frames to the at least one of the at least two image classes.

13. The apparatus according to claim 12, wherein: the size of the defect is determined using a You Only Look Once style (YOLO-style) model, and the YOLO-style model has been trained with the same training data as the trained neural network.

14. The apparatus according to claim 11 further comprising: a camera configured to capture the image sequence comprising the plurality of image frames of the surface region to be evaluated, wherein a rate of capturing the image sequence is faster than a rate of determining the size of the defect.

15. The apparatus according to claim 9 further comprising a surface modification device configured to modify a surface of the surface region of the component.

16. A computer program for determining a size of a defect occurring in a surface region of a component while a surface modification process is performed on the surface region, the computer program stored in a non-transitory recording medium and including one or more commands executable by one or more processors, the one or more commands comprising: identifying an occurrence of a defect occurring in a surface region of a component based on a set of images; and determining a size of the defect after the occurrence of the defect is identified.

17. The computer program according to claim 16, wherein the one or more commands further comprise: assigning one or more image frames of an image sequence comprising a plurality of image frames of the surface region to be evaluated to at least one of at least two image classes, each image frame showing an image section of the surface region and with a plurality of image sections of the plurality of image frames at least partially overlapping one another, and wherein at least one image class is a defect image class having a defective attribute; checking whether multiple image frames of a specifiable number of directly consecutive image frames in the image sequence have been assigned to the defect image class; and outputting a defect signal when the multiple image frames of the specifiable number of directly consecutive image frames have been assigned to the defect image class.

18. The computer program according to claim 16, wherein the size of the defect is determined using a You Only Look Once style (YOLO-style) model.

19. The computer program according to claim 17, wherein the image frames are assigned to the at least one of the at least two image classes via a trained neural network.

20. A computer-readable data carrier on which the computer program according to claim 16 is stored, or which transmits the computer program.

Description

DRAWINGS

[0099] In order that the disclosure may be well understood, there will now be described various forms thereof, given by way of example, reference being made to the accompanying drawings, in which:

[0100] Further advantages of the present disclosure are apparent from the figures and the associated description. In the drawings:

[0101] FIG. 1 shows a flow diagram of an example method, according to the teachings of the present disclosure;

[0102] FIG. 2 shows a schematic illustration of an example apparatus, according to the teachings of the present disclosure;

[0103] FIG. 3 shows one form of an image sequence, according to the teachings of the present disclosure;

[0104] FIG. 4 shows another form of the image sequence, according to the teachings of the present disclosure;

[0105] FIG. 5 shows still another form of the image sequence, according to the teachings of the present disclosure;

[0106] FIG. 6 shows an illustration of the prediction accuracy, according to the teachings of the present disclosure;

[0107] FIG. 7a shows a first image frame of two consecutive image frames with an object bounding box for size determination, according to the teachings of the present disclosure; and

[0108] FIG. 7b shows a second image frame of the two consecutive image frames, with an object bounding box for size determination, according to the teachings of the present disclosure.

[0109] The drawings described herein are for illustration purposes only and are not intended to limit the scope of the present disclosure in any way.

DETAILED DESCRIPTION

[0110] The following description is merely exemplary in nature and is not intended to limit the present disclosure, application, or uses. It should be understood that throughout the drawings, corresponding reference numerals indicate like or corresponding parts and features.

[0111] The disclosure is explained in more detail below by reference to FIGS. 1 and 2 based on a laser soldering process and an associated apparatus 200. Therefore, a method 100 and an apparatus 200 are described for identifying defects 7 occurring during the execution of a laser soldering process on a surface region 8 of a component. Specifically, this is a laser brazing process for connecting metal sheets, namely connecting a roof of a passenger car to the associated side panel. However, the disclosure is not limited to this process and can be used analogously for other surface modification processes.

[0112] The method 100 is carried out by means of the apparatus 200 shown schematically in FIG. 2. The apparatus 200 comprises a surface modification device 4, which in one form is a laser soldering device. The laser soldering device is designed and configured to generate a laser beam and emit it in the direction of a surface region 8 to be treated. In addition, the surface region 8 is fed a solder, e.g., in the form of a soldering wire, which is melted by means of the laser beam and used to join the vehicle roof to a side panel.

[0113] The apparatus 200 also comprises a camera unit 3. In one form, the camera unit 3 includes a SCeye® process monitoring system manufactured by Scansonic MI GmbH. The camera unit 3 is designed and configured as a coaxial camera and has a laser lighting device, wherein the wavelength of the laser of the laser lighting device differs from the wavelength of the machining laser of the laser soldering device. In one form, a wavelength of approx. 850 nm was selected for the laser lighting device. The camera unit 3 is appropriately sensitive to this wavelength. Due to the wavelength of approx. 850 nm, interference effects from ambient light and other light sources are largely avoided.

[0114] The camera unit 3 is arranged with respect to the laser soldering device in such a way that an image sequence 5 in the form of a video can be captured through the processing laser beam. In other words, an image sequence 5 is recorded that consists of a plurality of image frames 6 of the surface region 8 to be evaluated. The image section 9 is selected in such a way that it extends from the end region of the soldering wire through the process zone to the newly solidified solder joint. The camera unit 3 is moved simultaneously with the machining laser beam so that the image section 9 moves over the surface region 8 accordingly and the image sections 9 of the image frames 6 at least partially overlap. For this purpose, the frame rate of the camera unit 3 and the speed at which the processing laser and the camera unit 3 are moved are matched accordingly. In one form, at typical processing speeds, the frame rate can be 100 frames per second.

[0115] As already mentioned, the camera unit 3 is configured and designed to capture an image sequence 5 consisting of a plurality of consecutive image frames 6 of the surface region 8 to be evaluated. This image sequence 5 is transmitted to a data processing unit 1 of the apparatus 200. Therefore, the camera unit 3 and the data processing unit 1 are operatively connected for signal communication.

[0116] The data processing unit 1 is used to process the image frames 6 of the image sequence 5 in order to identify the occurrence of a defect 7 and if a defect 7 is present, to determine its size. For this purpose, the data processing unit 1 has a trained neural network 2, by means of which the image frames 6 are assigned to two image classes 10a, 10b. In this case, image frames 6 recognized as “ok” are assigned to the first image class 10a and image frames 6 recognized as “defective” are assigned to the defect image class 10b.

[0117] The trained neural network 2 in one form is a neural network that has been trained by means of transfer learning. The trained neural network 2 is based on the pre-trained neural network designated as “ResNet50”, which was described earlier. This pre-trained neural network was further trained with 40 image sequences 5 acquired during a laser beam soldering process; these image sequences 5 contained a total of 400 image frames 6 for which the assignment to the image classes 10a, 10b was specified. Using this additional training process, a trained neural network 2 was created that is capable of detecting surface defects such as pores, holes, and spatter, but also device defects, such as a defective protective glass of the soldering optics, on image frames 6.
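The transfer-learning setup described above can be illustrated with a minimal numpy sketch: a frozen feature extractor (standing in for the pre-trained ResNet50 backbone) is kept fixed, and only a small binary classification head is trained on labelled frames. Everything below — the random projection, the synthetic frames, and the injected defect signature — is an illustrative assumption, not the actual training data or model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen "backbone": a fixed linear projection standing in for the
# pre-trained ResNet50 feature extractor (its weights are never updated).
W_frozen = rng.standard_normal((64, 16))

def extract_features(frames):
    return frames @ W_frozen  # (n, 16) feature vectors

# Synthetic training set of 400 labelled frames, mirroring the 400
# frames mentioned in the description; "defective" frames receive an
# artificial defect signature in their first values.
n = 400
frames = rng.standard_normal((n, 64))
labels = rng.integers(0, 2, n)        # 0 = "ok", 1 = "defective"
frames[labels == 1, :8] += 2.0        # injected defect signature

# Transfer learning: train only the small classification head
# (logistic regression) on top of the frozen, standardized features.
X = extract_features(frames)
X = (X - X.mean(axis=0)) / X.std(axis=0)
w, b = np.zeros(16), 0.0
for _ in range(1000):
    z = np.clip(X @ w + b, -30.0, 30.0)
    p = 1.0 / (1.0 + np.exp(-z))      # sigmoid
    w -= 0.1 * (X.T @ (p - labels)) / n
    b -= 0.1 * float(np.mean(p - labels))

accuracy = float(np.mean(((X @ w + b) > 0) == labels))
```

Only the head weights `w` and `b` are learned; the projection `W_frozen` plays the role of the fixed pre-trained layers, which is what makes a small labelled set such as 400 frames sufficient.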

[0118] The data processing unit 1 is also designed and configured to check whether multiple image frames 6 of a specifiable number of directly consecutive image frames 6 in the image sequence 5 have been assigned to the defect image class 10b. In one form, four directly consecutive image frames 6 in the image sequence 5 are checked to determine whether all four image frames 6 were assigned to the defect image class 10b. This specification can be varied depending on the accuracy desired. If all four of the four directly consecutive image frames 6 have been assigned to the defect image class 10b, a defect signal 11 is output.
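This consecutive-frame check can be written as a short sliding-window routine; the function name and the string labels below are illustrative, not taken from the disclosure:

```python
from collections import deque

def defect_signal(classifications, window=4):
    """Return the frame indices at which a defect signal would be output:
    positions where `window` directly consecutive frames were all
    assigned to the defect image class ("defective")."""
    recent = deque(maxlen=window)
    signals = []
    for i, label in enumerate(classifications):
        recent.append(label)
        if len(recent) == window and all(c == "defective" for c in recent):
            signals.append(i)
    return signals
```

Applied to a sequence like that of FIG. 3 (eight “ok”, twelve “defective”, seven “ok” frames), the first signal is raised at the fourth consecutive defective frame; a single isolated “defective” frame, as in FIG. 5, raises no signal.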

[0119] The defect signal 11 causes a You Only Look Once style (YOLO-style) model 12 to be activated in a subsequent method step. The YOLO-style model 12 is used to determine the size of the previously detected defect 7. To this end, the YOLO-style model 12 was trained with the same training data as the trained neural network 2.
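The two-stage structure — a fast per-frame classifier that runs continuously, and an expensive size model that is only triggered by the defect signal — can be sketched as follows; `classify` and `measure_size` are placeholder callables standing in for the trained neural network 2 and the YOLO-style model 12:

```python
def monitor(frames, classify, measure_size, window=4):
    """Run the fast classifier on every frame; invoke the expensive
    size model only once `window` directly consecutive frames have
    been classified as defective (i.e., on the defect signal)."""
    consecutive = 0
    sizes = []
    for frame in frames:
        if classify(frame) == "defective":
            consecutive += 1
            if consecutive == window:  # defect signal: trigger stage two
                sizes.append(measure_size(frame))
        else:
            consecutive = 0
    return sizes
```

Because `measure_size` is called at most once per detected defect rather than once per frame, the per-frame cost stays at the cheap classification step, which is what lets the overall method keep pace with the camera's frame rate.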

[0120] In one form, the apparatus 200 described above can be used to carry out the following method 100, which is elucidated with reference to FIG. 1.

[0121] The method 100 is used to identify, in a computer-implemented manner, the occurrence of defects 7 during the laser soldering process. In addition, the size of the defects 7 that occurred is determined.

[0122] After the start of the method 100, in method step S1 an image sequence 5 is acquired containing a plurality of image frames 6 of the surface region 8 to be evaluated. The image is acquired at a frame rate of 100 frames per second. Different frame rates are possible. The image section 9 of each image frame 6 is selected in such a way that the image sections 9 of the image frames 6 partially overlap. In one form, an overlap of 80% can be provided, i.e., in two directly consecutive frames 6, the image section 9 is 80% identical. During the acquisition of the image sequence 5, the image section 9, or the camera unit 3 that images the image section 9, is moved together with the surface modification device 4.
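The interplay of frame rate, overlap, and traverse speed implied here can be made concrete with a one-line relation; the 10 mm image-section length is an assumed example value, and only the 100 frames per second and 80% overlap come from the description:

```python
def max_traverse_speed(section_length_mm, frame_rate_hz, overlap):
    """Largest speed at which the camera/laser may travel so that two
    directly consecutive image sections still overlap by `overlap`."""
    return (1.0 - overlap) * section_length_mm * frame_rate_hz

# Assumed 10 mm image section at 100 fps with 80% overlap:
speed = max_traverse_speed(10.0, 100.0, 0.80)  # 200.0 mm/s
```

Conversely, for a fixed processing speed, the same relation fixes the minimum frame rate needed to preserve the desired overlap.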

[0123] In method step S2, the image sequence 5 is submitted for further processing, e.g., transmitted from the camera unit 3 to the data processing unit 1. In parallel, the trained neural network 2 is provided in method step S3.

[0124] In the method step S4, the image frames 6 of the image sequence 5 are assigned to the two image classes 10a, 10b by means of the trained neural network 2, i.e., a decision is made as to whether the image frame 6 to be assigned shows a defect 7 or not. In the first case, the image is assigned to the defect image class 10b, otherwise to the other image class 10a.

[0125] In the subsequent method step S5, it is checked whether multiple image frames of a specifiable number of directly consecutive image frames 6 in the image sequence 5 have been assigned to the defect image class 10b. As already mentioned, in one form, four directly consecutive image frames 6 in the image sequence 5 are checked to determine whether all four image frames 6 were assigned to the defect image class 10b.

[0126] If this is the case, the method 100 continues to method step S6, in which a defect signal 11 is output. If four directly consecutive image frames 6 have not been assigned to the defect image class 10b, the method 100 returns to method step S1.

[0127] The defect signal 11 output in method step S6 serves as a trigger signal or starting signal for the subsequent method step S7. In method step S7, the size of the defect 7 is determined using a YOLO-style model 12. In one form, the defect 7 can be classified according to whether the size of the defect 7 is very small, small, or large. Very small can mean, in one form that no further measures need to be taken and that the corresponding component can be further processed in the same way as functional components. Small can mean that the defect 7 can be repaired, e.g., by polishing the corresponding surface region of the component concerned. Large can mean that the defect 7 cannot be repaired and the component in question must be rejected. After method step S7, the method 100 ends.
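The three-way size classification in method step S7 can be sketched as a simple thresholding of the detected bounding-box area; the numeric thresholds below are illustrative assumptions, since the description does not give concrete limits:

```python
def classify_defect_size(bbox_width_mm, bbox_height_mm,
                         very_small_max_mm2=0.25, small_max_mm2=1.0):
    """Map a defect bounding-box size to the action categories described
    above. The thresholds are assumed example values."""
    area = bbox_width_mm * bbox_height_mm
    if area <= very_small_max_mm2:
        return "very small"   # no further measures needed
    if area <= small_max_mm2:
        return "small"        # repairable, e.g., by polishing
    return "large"            # component must be rejected
```

In practice the bounding boxes produced by the YOLO-style model (cf. the object bounding boxes 13 in FIGS. 7a and 7b) would supply the width and height, scaled from pixels to millimeters via the known imaging geometry.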

[0128] Of course, deviations from this form of the method 100 are possible. Thus, it can be provided that the method 100 is not terminated after method step S7, but also returns to method step S1 thereafter. It is advantageous to carry out the method 100 in real time during the laser soldering process, wherein the individual method steps S1 to S7 can overlap in time. This means that while the image frames 6 that are currently being acquired are assigned to the image classes 10a, 10b, further image frames 6 are acquired, etcetera.

[0129] By evaluating the surface region 8 not only on the basis of a single image frame 6, but by using successive image frames 6 as temporal data, it is possible to observe whether a suspected or actual defect 7 “is traveling through the camera image”. Only if this is the case, i.e., if the defect 7 can be detected on multiple image frames 6, is an actual defect 7 assumed. This can significantly increase the reliability of the defect prediction compared to a conventional automated quality assurance, as fewer false-positive and false-negative defects 7 are identified. Compared to visual inspection, the proposed method 100 has the advantage, in addition to a reduced personnel requirement and associated cost savings, that even small defects 7 that are not visible to the naked eye can be identified. Thus, the overall quality of the surface-treated components can be increased, as components of low quality can be rejected or process parameters and/or parts of the apparatus can be altered such that the detected defects 7 no longer occur.

[0130] Because the size of the defect 7 is determined in a method step S7 that is separate from the method steps S1 to S6, and consequently the size is not determined for every image frame 6 but only for defects 7 that have already been detected, the method 100 overall can be carried out at high speed, in particular in real time, even for processes with high component throughput, while at the same time providing high reliability in defect identification and size determination. This contributes to a further increase in quality assurance.

[0131] FIG. 3 shows an example image sequence 5 of a surface region 8 of a component to be evaluated, the surface of which is treated by means of a laser soldering process. The image sequence 5 comprises 25 image frames 6, the image sections 9 of which partially overlap. The image frames 6 were acquired by the camera unit 3 in the sequence from top left to bottom right and transferred to the data processing unit 1 of the apparatus 200 for evaluation.

[0132] By means of the trained neural network 2 of the data processing unit 1, the image frames 6 were each assigned to an image class 10a, 10b, as can be seen in FIG. 3 on the basis of the classification as “ok” or “defective”. The first eight frames 6 were classified as “ok” and thus assigned to the first image class 10a. These are followed by twelve image frames 6, which were classified as “defective” and thus assigned to the defect image class 10b. These are followed by seven image frames 6, which were again classified as “ok” and assigned to image class 10a.

[0133] In the image frames 6 assigned to the defect image class 10b, a pore can be identified as the defect 7. This defect 7 travels across the image section 9 as a result of the movement of the camera unit 3 together with the surface processing device 4 from left to right.

[0134] To be able to detect the defect 7 reliably with a high probability, a check is carried out, in one form, to determine whether four directly consecutive image frames 6 have been assigned to the defect image class 10b. This is the case with the image sequence shown in FIG. 3, since a total of twelve (12) directly consecutive image frames 6 have been assigned to the defect image class 10b. As a result, it can be concluded with a high probability that a defect 7 is actually present and so a defect signal 11 is output. The defect signal 11 can, in one form, interrupt the surface modification process in order to allow the faulty component to be removed from the production process. Alternatively, the production process can continue, and the component in question can be removed after completion of its surface modification or visually inspected as a further check.

[0135] FIG. 4 shows another form of an image sequence 5 of a surface region 8 of a component to be evaluated, the surface of which is treated by means of a laser soldering process. The image sequence 5 again comprises 25 image frames 6, the image sections 9 of which partially overlap. As in FIG. 3, the image frames 6 were acquired by the camera unit 3 in the sequence from top left to bottom right and transferred to the data processing unit 1 of the apparatus 200 for evaluation.

[0136] By means of the trained neural network 2 of the data processing unit 1, the image frames 6 were each assigned to an image class 10a, 10b, as can be seen in FIG. 4 on the basis of the classification as “ok” or “defective”. In this case, the first six image frames 6 were classified as “ok” and thus assigned to the first image class 10a; these were followed by two image frames 6 classified as “defective”, one image frame 6 classified as “ok”, nine image frames 6 classified as “defective”, and a further seven image frames 6 classified as “ok”. In other words, with the exception of a single image frame 6, twelve directly consecutive image frames 6 were assigned to the defect image class 10b.

[0137] In the image frames 6 assigned to defect image class 10b, a pore can be identified as the defect 7. This defect 7 travels across the image section 9 as a result of the movement of the camera unit 3 together with the surface processing device 4 from left to right.

[0138] To be able to detect the defect 7 reliably with a high probability, a check is carried out, in one form, to determine whether four directly consecutive image frames 6 have been assigned to the defect image class 10b. This is the case with the image sequence shown in FIG. 4, since a total of nine directly consecutive image frames 6, i.e., the 10th to the 18th image frame, have been assigned to the defect image class 10b. As a result, it can be concluded with a high probability that a defect 7 is actually present and so a defect signal 11 is output.

[0139] FIG. 5 shows another example image sequence 5 of a surface region 8 of a component to be evaluated, the surface of which is treated by means of a laser soldering process. The image sequence 5 comprises 20 image frames 6, the image sections 9 of which partially overlap. As in FIG. 3, the image frames 6 were acquired by the camera unit 3 in the sequence from top left to bottom right and transferred to the data processing unit 1 of the apparatus 200 for evaluation.

[0140] By means of the trained neural network 2 of the data processing unit 1, the image frames 6 were each assigned to an image class 10a, 10b, as can be seen in FIG. 5 on the basis of the classification as “ok” or “defective”. The first eight image frames 6 have been classified as “ok” and thus assigned to the first image class 10a. The ninth image frame 6 was classified as “defective”. The other image frames were again classified as “ok”.

[0141] However, the image frame 6 classified as “defective” is an incorrect classification, since this image frame 6 does not actually show a defect 7. If each image frame 6 alone were used for predicting defects independently of the other image frames 6, this incorrectly classified image frame 6 would trigger the output of a defect signal 11 and possibly stop component production.

[0142] However, as the proposed method provides a check of whether multiple image frames 6 of a specifiable number of directly consecutive image frames 6 in the image sequence 5 have been assigned to the defect image class 10b, when the proposed method is used no defect signal 11 is output, since only a single image frame 6 was assigned to the defect image class 10b. The detection of false-positive defects 7 can thus be avoided.

[0143] FIG. 6 shows an illustration of the prediction accuracy of defects 7 by means of the above-described method 100 compared to a visual inspection, which has been standard practice up to now. The surface region 8 of 201 components was analyzed, i.e., 201 components were surface treated using a laser soldering process.

[0144] From the diagram, it is apparent that 100% of the components identified as “defective” by visual inspection were also identified as “defective” by means of the proposed method (category “true positive”). None of the components identified as “ok” by visual inspection were identified as “defective” by means of the proposed method (category “false positive”). Similarly, none of the components identified as “defective” by visual inspection were identified as “ok” by means of the proposed method (category “false negative”). Again, 100% of the components identified as “ok” by visual inspection were identified as “ok” by the proposed method (category “true negative”). The asterisk “*” in FIG. 6 indicates that an actual defect 7 was correctly identified by means of the proposed method, but not during the standard manual visual inspection. The defect 7 was so small that it was no longer visible after the downstream surface polishing process. A subsequent manual analysis of the process video showed that the defect 7 was actually a very small pore.
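The four categories reported in FIG. 6 can be tallied from two label lists, with the visual inspection taken as the reference; a minimal sketch:

```python
from collections import Counter

def confusion(reference, predicted):
    """Tally the FIG. 6 categories, treating visual inspection as the
    reference and the proposed method as the prediction."""
    names = {
        ("defective", "defective"): "true positive",
        ("ok", "defective"): "false positive",
        ("defective", "ok"): "false negative",
        ("ok", "ok"): "true negative",
    }
    return Counter(names[pair] for pair in zip(reference, predicted))
```

With perfectly agreeing labels for the 201 components, as reported for FIG. 6, only the “true positive” and “true negative” counts are non-zero.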

[0145] The existence of the defect 7 could only be confirmed by further investigations. Consequently, it can be concluded that the proposed method 100 not only achieves, but can even exceed, the accuracy of the surface quality assessment of the visual inspection that is currently normally used, i.e., it also detects defects 7 which are not detectable by standard visual inspection.

[0146] FIGS. 7a and 7b show two consecutive image frames 6 with two defects 7. The associated object bounding boxes 13 can also be seen, which are used to determine the size of the defects 7 using the YOLO-style model. The object frame 13a encloses a pore in the solder joint. The object frame 13b encloses a solder spatter adhering to the outer sheet next to the solder joint. Based on the size of the object frames 13a, 13b, the size of the individual defects 7 can be determined and it can thus be ascertained whether reworking is necessary.

[0147] In summary, the disclosure offers the following main advantages:

[0148] Even very small defects 7 can be detected, which means that a visual inspection of the surface region 8 of the component after the completion of the surface modification process is not necessary.

[0149] The size determination can be carried out reliably and with high accuracy, since for the size determination only those image frames 6 that show a defect 7 according to the defect identification are examined, and therefore more computational resources are available for the size determination.

[0150] The defect identification and size determination can be carried out in real time, thereby making a downstream quality control process unnecessary.

[0151] The predictive accuracy is significantly better than previous methods, i.e., there are fewer false-positive or false-negative results.

[0152] Unless otherwise expressly indicated herein, all numerical values indicating mechanical/thermal properties, compositional percentages, dimensions and/or tolerances, or other characteristics are to be understood as modified by the word “about” or “approximately” in describing the scope of the present disclosure. This modification is desired for various reasons including industrial practice, material, manufacturing, and assembly tolerances, and testing capability.

[0153] As used herein, the phrase at least one of A, B, and C should be construed to mean a logical (A OR B OR C), using a non-exclusive logical OR, and should not be construed to mean “at least one of A, at least one of B, and at least one of C.”

[0154] The apparatuses and methods described in this application may be partially or fully implemented by a special purpose computer created by configuring a general-purpose computer to execute one or more particular functions embodied in computer programs. The functional blocks, flowchart components, and other elements described above serve as software specifications, which can be translated into the computer programs by the routine work of a skilled technician or programmer.

[0155] The description of the disclosure is merely exemplary in nature and, thus, variations that do not depart from the substance of the disclosure are intended to be within the scope of the disclosure. Such variations are not to be regarded as a departure from the spirit and scope of the disclosure.