Material testing of optical test pieces
11790510 · 2023-10-17
Assignee
Inventors
Cpc classification
G01M11/0278
PHYSICS
International classification
Abstract
The invention relates to techniques for material testing of optical test pieces, for example of lenses. Angle-variable illumination, using a suitable illumination module, and/or angle-variable detection are carried out in order to create a digital contrast. The digital contrast can be, for example, a digital phase contrast. A defect detection algorithm for automated material testing based on a result image with digital contrast can be used. For example, an artificial neural network can be used.
Claims
1. A method for material testing of an optical test object using an optical imaging system, wherein the method comprises: for each of at least one pose of the test object in relation to the optical imaging system: respectively actuating the optical imaging system to capture at least two images of the test object by means of at least one of angle-variable illumination and angle-variable detection, for each of the at least one pose: respectively processing the corresponding at least two images to generate at least one result image with a digital contrast, and on the basis of the at least one result image which was generated on the basis of the at least two images for the at least one pose: carrying out material testing of the test object, wherein carrying out the material testing comprises the application of a defect detection algorithm for automatically detecting defects, wherein, for a detected defect, the defect detection algorithm determines a corresponding influence of the respective defect on an optical effect of the test object, wherein the test object is at least partly transparent to visible light or infrared light or ultraviolet light.
2. The method as claimed in claim 1, wherein, for a detected defect, the defect detection algorithm provides at least one of a corresponding segmentation of the respective result image and a corresponding classification of the respective defect in relation to predefined defect classes.
3. The method as claimed in claim 1, wherein the defect detection algorithm determines an aggregated influence on the optical effect of the test object for a plurality of detected defects.
4. The method as claimed in claim 1, wherein the defect detection algorithm comprises a trained artificial neural network, which is configured to receive one or more of the at least one result image as an input map.
5. The method as claimed in claim 4, wherein the artificial neural network is configured to provide an output map which indicates a position and a classification of the defects detected.
6. The method as claimed in claim 4, furthermore comprising: carrying out retraining of the artificial neural network on the basis of a result of the material testing.
7. The method as claimed in claim 1, wherein the defect detection algorithm comprises at least one of a threshold analysis of the digital contrast of the at least one result image and a statistical analysis of the digital contrast.
8. The method as claimed in claim 1, wherein the defect detection algorithm comprises an anomaly detector, which detects defects of a priori unknown defect classes.
9. The method as claimed in claim 1, wherein the detected defects are marked in at least one error map, wherein the method furthermore comprises: comparing the at least one error map with a reference error map and determining a quality value of the material testing on the basis of the comparison.
10. The method as claimed in claim 9, wherein the at least one pose comprises a plurality of poses which are associated with a plurality of result images, wherein the defect detection algorithm is respectively applied to each of the plurality of result images to obtain a respective error map, wherein the method furthermore comprises: registering defects marked in the error maps in a global coordinate system, wherein the registered error maps are compared to the reference error map.
11. The method as claimed in claim 9, wherein the at least one pose comprises a plurality of poses which are associated with a plurality of result images, wherein the method furthermore comprises: registering the plurality of result images in a global coordinate system, wherein the defect detection algorithm is coherently applied to the registered result images for the purposes of detecting 3D defects.
12. The method as claimed in claim 1, wherein carrying out the material testing comprises the application of an algorithm with machine learning, wherein the algorithm with machine learning is configured to receive the at least one result image as input and to provide a quality value of the test object as output.
13. A method for material testing of an optical test object using an optical imaging system, wherein the method comprises: for each of at least one pose of the test object in relation to the optical imaging system: respectively actuating the optical imaging system to capture at least two images of the test object by means of at least one of angle-variable illumination and angle-variable detection, for each of the at least one pose: respectively processing the corresponding at least two images to generate at least one result image with a digital contrast, and on the basis of the at least one result image which was generated on the basis of the at least two images for the at least one pose: carrying out material testing of the test object, wherein carrying out the material testing comprises the application of an algorithm with machine learning, wherein the algorithm with machine learning is configured to receive the at least one result image as input and to provide a quality value of the test object as output, wherein the input of the algorithm with machine learning comprises a sequence of sliding windows, wherein the sliding windows of the sequence of sliding windows image different regions of the at least one result image, wherein the output of the algorithm with machine learning is determined on the basis of a combination of results of a sequence of results of the algorithm with machine learning, wherein the sequence of results corresponds to the sequence of sliding windows.
14. The method as claimed in claim 1, furthermore comprising: for each of the at least one pose: varying at least one imaging modality when capturing the at least two images, wherein the at least one imaging modality is selected from the following group: color of the light used; polarization of the light used; magnitude of the at least one of angle-variable illumination and angle-variable detection; orientation of the at least one of angle-variable illumination and angle-variable detection.
15. The method as claimed in claim 1, wherein the digital contrast is selected from the group of: virtual dark field; digital phase gradient contrast; and digital phase contrast.
16. A method for material testing of an optical test object using an optical imaging system, wherein the method comprises: for each of at least one pose of the test object in relation to the optical imaging system: respectively actuating the optical imaging system to capture at least two images of the test object by means of at least one of angle-variable illumination and angle-variable detection, for each of the at least one pose: respectively processing the corresponding at least two images to generate at least one result image with a digital contrast, and on the basis of the at least one result image which was generated on the basis of the at least two images for the at least one pose: carrying out material testing of the test object, wherein the optical imaging system furthermore comprises a correction lens which has an optical effect that is complementary to an optical effect of the test object.
17. A method for material testing of an optical test object using an optical imaging system, wherein the method comprises: for each of at least one pose of the test object in relation to the optical imaging system: respectively actuating the optical imaging system to capture at least two images of the test object by means of at least one of angle-variable illumination and angle-variable detection, for each of the at least one pose: respectively processing the corresponding at least two images to generate at least one result image with a digital contrast, and on the basis of the at least one result image which was generated on the basis of the at least two images for the at least one pose: carrying out material testing of the test object, wherein the at least one pose comprises a plurality of poses, wherein the plurality of poses define a z-stack and/or an xy-stack.
18. A computing unit, which is configured to: actuate an optical imaging system in order to capture, by means of at least one of angle-variable illumination and angle-variable detection, at least two respective images of a test object for each of at least one pose of said optical test object in relation to the imaging system, wherein the test object is at least partly transparent to visible light or infrared light or ultraviolet light, for each of the at least one pose: respectively process the corresponding at least two images to generate at least one result image with a digital contrast, and on the basis of the at least one result image which was generated on the basis of the at least two images for the at least one pose: carry out material testing of the test object, wherein the material testing of the test object is configured to apply a defect detection algorithm for automatically detecting defects, and for a detected defect, the defect detection algorithm is configured to determine a corresponding influence of the respective defect on an optical effect of the test object.
19. The computing unit as claimed in claim 18, wherein the computing unit is configured to perform material testing of said optical test object using said optical imaging system by: for each of at least one pose of the test object in relation to the optical imaging system: respectively actuating the optical imaging system to capture at least two images of the test object by means of at least one of angle-variable illumination and angle-variable detection, for each of the at least one pose: respectively processing the corresponding at least two images to generate a result image with a digital contrast, and on the basis of the at least one result image which was generated on the basis of the at least two images for the at least one pose: carrying out material testing of the test object.
20. A computing unit, which is configured to: actuate an optical imaging system in order to capture, by means of at least one of angle-variable illumination and angle-variable detection, at least two respective images of a test object for each of at least one pose of said optical test object in relation to the imaging system, for each of the at least one pose: respectively process the corresponding at least two images to generate at least one result image with a digital contrast, and on the basis of the at least one result image which was generated on the basis of the at least two images for the at least one pose: carry out material testing of the test object, wherein the material testing is configured to apply an algorithm with machine learning configured to receive the at least one result image as input and to provide a quality value of the test object as output, wherein the input of the algorithm with machine learning comprises a sequence of sliding windows, wherein the sliding windows of the sequence of sliding windows image different regions of the at least one result image, wherein the output of the algorithm with machine learning is determined on the basis of a combination of results of a sequence of results of the algorithm with machine learning, wherein the sequence of results corresponds to the sequence of sliding windows.
21. A computing unit, which is configured to: actuate an optical imaging system in order to capture, by means of at least one of angle-variable illumination and angle-variable detection, at least two respective images of a test object for each of at least one pose of said optical test object in relation to the imaging system, for each of the at least one pose: respectively process the corresponding at least two images to generate at least one result image with a digital contrast, and on the basis of the at least one result image which was generated on the basis of the at least two images for the at least one pose: carry out material testing of the test object, wherein the optical imaging system comprises a correction lens which has an optical effect that is complementary to an optical effect of the test object.
22. A computing unit, which is configured to: actuate an optical imaging system in order to capture, by means of at least one of angle-variable illumination and angle-variable detection, at least two respective images of a test object for each of at least one pose of said optical test object in relation to the imaging system, for each of the at least one pose: respectively process the corresponding at least two images to generate at least one result image with a digital contrast, and on the basis of the at least one result image which was generated on the basis of the at least two images for the at least one pose: carry out material testing of the test object, wherein the at least one pose comprises a plurality of poses, wherein the plurality of poses define a z-stack and/or an xy-stack.
Description
BRIEF DESCRIPTION OF THE FIGURES
DETAILED DESCRIPTION OF THE INVENTION
(18) The properties, features and advantages of this invention described above, and the way in which they are achieved, will become clearer and more readily understood in association with the following description of the exemplary embodiments, which are explained in greater detail in association with the drawings.
(19) The present invention is explained in greater detail below on the basis of preferred embodiments with reference to the drawings. In the figures, identical reference signs denote identical or similar elements. The figures are schematic representations of different embodiments of the invention. Elements illustrated in the figures are not necessarily illustrated as true to scale. Rather, the various elements illustrated in the figures are rendered in such a way that their function and general purpose become comprehensible to the person skilled in the art. Connections and couplings between functional units and elements as illustrated in the figures can also be implemented as an indirect connection or coupling. A connection or coupling can be implemented in a wired or wireless manner. Functional units can be implemented as hardware, software or a combination of hardware and software.
(21) An illumination module 111 is configured to illuminate the test object fixed on the sample holder 113. By way of example, this illumination could be implemented by means of Köhler illumination. Here, use is made of a condenser lens and a condenser aperture stop. This leads to a particularly homogeneous intensity distribution of the light used for illumination purposes in the plane of the test object. By way of example, a partially incoherent illumination can be implemented. The illumination module 111 could also be configured to illuminate the test object in dark field geometry.
(22) A controller 115 is provided to actuate the various components 111-114 of the optical apparatus 100. For example, the controller 115 could be configured to actuate a motor of the sample holder 113 to implement an autofocus application. By way of example, the controller 115 could be implemented as a microprocessor or microcontroller. As an alternative or in addition thereto, the controller 115 could comprise an FPGA or ASIC, for example.
(24) In one example, the illumination module 111 is thus configured to facilitate an angle-variable illumination. This means that different illumination geometries of the light employed to illuminate the test object can be implemented by means of the illumination module 111. The different illumination geometries can correspond to an illumination of the test object from at least partly different illumination directions.
(25) Here, different hardware implementations for providing the different illumination geometries are possible in the various examples described herein. By way of example, the illumination module 111 could comprise a plurality of adjustable illumination elements that are configured to locally modify or emit light. The controller 115 can actuate the illumination module 111 or the illumination elements for the purposes of implementing a certain illumination geometry.
(27) Instead of a matrix structure it would also be possible, in other examples, to use different geometric arrangements of the adjustable elements, for example a ring-shaped arrangement, a semicircular arrangement, etc.
(28) In one example, the adjustable illumination elements 121 could be implemented as light sources, for example as light-emitting diodes. Then, it would be possible, for example, for different light-emitting diodes with different luminous intensities to emit light for illuminating the test object. An illumination geometry can be implemented as a result. In a further implementation, the illumination module 111 could be implemented as a spatial light modulator (SLM). The SLM can undertake an intervention in a condenser pupil in a spatially resolved manner, which may have a direct effect on the imaging. It would also be possible to use a DMD (digital micromirror device).
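Purely as an illustration, computing which illumination elements of a matrix-shaped illumination module to activate for a given illumination geometry can be sketched as follows. The matrix size and the encoding of geometries as Boolean masks are assumptions for illustration, not part of the described apparatus.

```python
import numpy as np

def semicircle_mask(n: int, side: str) -> np.ndarray:
    """Boolean activation mask for an n x n LED matrix: activate all
    LEDs inside the inscribed circle that lie on the given side
    ('left', 'right', 'top', 'bottom') of the optical axis."""
    c = (n - 1) / 2.0
    y, x = np.mgrid[0:n, 0:n]
    inside = (x - c) ** 2 + (y - c) ** 2 <= c ** 2
    half = {
        "left": x < c, "right": x > c,
        "top": y < c, "bottom": y > c,
    }[side]
    return inside & half

# Two complementary illumination geometries, e.g. for a later
# digital phase gradient contrast:
left = semicircle_mask(9, "left")
right = semicircle_mask(9, "right")
```

The two masks are disjoint and mirror-symmetric, so the corresponding images probe the test object from complementary illumination directions.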
(30) The imaging position of the optical test object varies according to the illumination geometry 300. This change in the imaging position can be exploited to generate the digital contrast.
(31) In addition to such angle-variable illumination techniques, an angle-variable detection can also be used in a further example. For the angle-variable detection, the detection optical unit 112 is configured accordingly. Corresponding examples are illustrated in conjunction with
(34) On account of a focus position 181 that is not equal to zero—i.e., a distance of the position of the test object 150 along the optical axis 130 from the focal plane 201 of the detection optical unit 112—the imaged representations 151, 152 of the test object 150 that are brought about by the rays 131, 132 are positioned on the sensor surface 211 in a manner spaced apart by a distance 182. Here, the distance 182 is dependent on the focus position 181:
(35) Δx = Δz · (Δk / ØP) · m · NA,
where Δz denotes the focus position 181, Δx denotes the distance 182, ØP denotes the diameter of the aperture stop, Δk denotes the length of the distance between the light-transmissive regions of the filter patterns 380-1, 380-2, m denotes the magnification of the detection optical unit 112, and NA denotes a correction value based on the oblique angle of incidence 138, 139. NA can be determined for example empirically or by beam path calculation.
(36) The equation shown is an approximation. In some examples it may be possible to further take into account a dependence of the angle of incidence 138, 139 on the focus position 181, i.e. Δz. This dependence can be system-specific.
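The relation between the focus position Δz and the distance Δx can be evaluated numerically, for instance as follows. The proportional form used here, Δx = Δz · (Δk/ØP) · m · NA, is an illustrative assumption reconstructed from the symbol definitions in the text, and all parameter values are hypothetical.

```python
def image_space_distance(dz, dk, pupil_diameter, magnification, na_corr):
    """Distance dx between the two imaged representations for a given
    focus position dz. The proportional functional form is an
    illustrative assumption based on the symbol definitions above."""
    return dz * (dk / pupil_diameter) * magnification * na_corr

def focus_position(dx, dk, pupil_diameter, magnification, na_corr):
    """Invert the assumed relation: recover dz from a measured dx."""
    return dx * pupil_diameter / (dk * magnification * na_corr)

# Hypothetical parameters: dz = 5, dk = 2, pupil diameter = 8,
# magnification = 10, empirical correction NA = 0.9.
dx = image_space_distance(5.0, 2.0, 8.0, 10.0, 0.9)
dz = focus_position(dx, 2.0, 8.0, 10.0, 0.9)
```

The round trip recovers the focus position, which is the quantity of interest when the distance 182 is measured from the captured images.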
(37) On account of the dependence of the distance Δx on the focus position Δz, a digital contrast can be generated by processing appropriate images—which are captured with the different filter patterns 380-1, 380-2. In this case, for instance, it is possible to exploit the fact that an optical test object typically has a significant extent in the Z-direction and therefore edges extending in the Z-direction—for instance, edges of defects—can be made visible by the presence of the distance Δx in the images.
(38) In principle, it may be desirable in various examples to define particularly thin light-transmissive regions 381, 382—i.e. a small extent in the X-direction. This can make possible a particularly high accuracy when determining the distance 182 and thus the focus position 181. On the other hand, the intensity on the sensor surface 211 is decreased on account of a small extent of the light-transmissive regions in the X-direction. For this reason, a balance can be struck between intensity and accuracy. It is evident from
(41) It is evident from
(42) By using the filter patterns 351, 352, which can be transformed into one another by way of translation along the vector 350, it is possible to obtain a one-dimensional spacing of the imaged representations 151, 152. The distance 182 is then parallel to the vector 350. Rotating the filter patterns allows the orientation of the angle-variable detection to be altered.
(44) Image capture takes place in block 1001. By way of example, an optical imaging system, such as the optical apparatus 100 of
(45) In particular, the optical imaging system can be actuated to capture at least two images of an optical test object by means of at least one of angle-variable illumination and angle-variable detection in block 1001.
(46) The angle variable illumination can comprise the use of a plurality of illumination geometries, wherein, in each case, at least one illumination geometry is associated with one of the at least two images. By way of example, different illumination geometries can be implemented by the activation of different light sources.
(47) The angle-variable detection can be implemented by filtering a spectrum of a beam path by the provision of appropriate filters near or in the pupil plane of a detection optical unit of the optical imaging system.
(48) The magnitude and/or orientation of the angle-variable illumination can be varied by varying the employed light sources or the employed illumination geometries. Accordingly, the magnitude and/or orientation of the angle-variable detection can be varied by varying the employed filter patterns.
(49) If the magnitude and/or the orientation of the angle-variable illumination and/or of the angle-variable detection are varied, this can vary the magnitude and/or direction of the subsequently determined digital contrast. Certain defects can be made visible particularly clearly by the variation of the direction of the digital contrast.
(50) Image evaluation takes place in block 1002. This can comprise processing the at least two images captured in block 1001 for the purposes of generating a result image with a digital contrast. By way of example, the digital contrast can be selected from the following group: phase contrast; phase gradient contrast; and virtual dark field.
(51) Then, material testing takes place in block 1003. In general, material testing can serve to assess the quality of the optical test object.
(52) In this case, various techniques can be applied for the purposes of carrying out the material testing in block 1003. In a simple example, it would be possible, for example, for the result image with the digital contrast to be displayed to a user; then, the user is able to manually identify defects.
(53) In block 1003, the material testing can also be carried out in automated fashion. By way of example, a quality value can be determined, for instance using an ANN or any other algorithm with machine learning. As an input map, the ANN can receive one or more of the result images from block 1002. As an output, the ANN can provide the quality value. By way of example, the quality value could specify whether the test object is “within the standard”, “outside of the standard” or—optionally—“uncertain”.
(54) In a specific implementation, carrying out the material testing can also serve to detect individual defects of the optical test object, for example using a defect detection algorithm for automated detection of defects.
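A minimal sketch of one such defect detection algorithm follows, using the threshold/statistical analysis variant of the digital contrast mentioned among the claimed alternatives (an artificial neural network could be substituted). The z-score criterion and the factor k = 4 are illustrative assumptions.

```python
import numpy as np

def detect_defects(result_image: np.ndarray, k: float = 4.0) -> np.ndarray:
    """Statistical threshold analysis of the digital contrast: flag
    pixels whose contrast deviates from the image mean by more than
    k standard deviations. Returns a Boolean error map.
    (k = 4.0 is an illustrative assumption.)"""
    mu = result_image.mean()
    sigma = result_image.std()
    return np.abs(result_image - mu) > k * sigma

# Synthetic result image with one high-contrast defect region:
img = np.zeros((64, 64))
img[30:33, 40:43] = 5.0
error_map = detect_defects(img)
```

The Boolean error map produced here corresponds to the error maps described below, in which detected defects are marked for later comparison or registration.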
(55) Blocks 1001-1003 need not be worked through strictly in sequence; rather, it would be possible, for example, to start the image evaluation 1002 and possibly the material testing in blocks 1002, 1003 already during the image capture of block 1001.
(57) An orientation and/or a Z-position and/or an XY-position of the optical test object is set in block 1011. This means that the pose of the test object is set. By way of example, this could be implemented by actuating a motorized sample holder 113 by way of the controller 115 (cf.
(58) In a further example, the user is prompted by way of a human-machine interface to manually set a certain pose of the test object.
(59) In a further example, the test object remains stationary but the detection optical unit is displaced relative to the test object (camera defocus).
(60) Such techniques, as described in conjunction with block 1011, are helpful, in particular, if the optical test object has a large extent in comparison with the field-of-view of the optical imaging system. Then, setting the Z-position—for instance, by internal focusing or camera defocus—allows a different plane of the optical test object to be imaged in focus on the detector 114 in each case. As a result, a volume testing of the optical test object can be facilitated by obtaining a stack of images for different Z-positions (so-called Z-stacks). The same also applies to setting the XY-position, which is sometimes also referred to as “stitching”. As a result, different lateral positions of the optical test object can be brought into the field of view of the optical imaging system. As a result, it is possible to obtain a stack of images which corresponds to different lateral positions of the optical test object (so-called XY-stack).
(61) A further example for processing an optical test object with a large extent in comparison with the field of view of the optical imaging system comprises the use of so-called “sliding windows”. By way of sliding windows, it is possible to implement a processing of images—for instance, result images and/or raw images—even if these images do not completely fit into a memory of the corresponding computing unit, for instance in a level 1 or level 2 memory of the computing unit. Then, it may be necessary to initially buffer the corresponding image in an external RAM block and to subsequently load and process individual pieces of the images—in accordance with the sliding window. Here, it is possible, in principle, for successive sliding windows of a sequence of sliding windows to contain an overlap region which images the same region of the test object. Thus, it would be possible to evaluate sequentially overlapping regions of an image—e.g., by means of an algorithm with machine learning—and subsequently combine the results of this evaluation to form an overall result, for instance in order to obtain the error map, as described herein.
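The sliding-window evaluation described above can be sketched as follows. The window size, the step (chosen smaller than the window to obtain the overlap region), and the stand-in per-window evaluation function are illustrative assumptions in place of an actual algorithm with machine learning.

```python
import numpy as np

def sliding_windows(shape, win, step):
    """Return (y, x) top-left corners of win x win windows covering an
    image of the given shape with the given step (assumes both image
    dimensions are at least win). step < win yields overlap regions."""
    h, w = shape
    ys = list(range(0, h - win + 1, step))
    xs = list(range(0, w - win + 1, step))
    if ys[-1] != h - win:          # make sure the image border is covered
        ys.append(h - win)
    if xs[-1] != w - win:
        xs.append(w - win)
    return [(y, x) for y in ys for x in xs]

def evaluate_in_windows(image, win=32, step=24, evaluate=None):
    """Apply a per-window evaluation (here a simple threshold stand-in
    for an ML algorithm) and combine the per-window results into one
    overall error map by logical OR over the overlap regions."""
    evaluate = evaluate or (lambda tile: tile > tile.mean() + 4 * tile.std())
    error_map = np.zeros(image.shape, dtype=bool)
    for y, x in sliding_windows(image.shape, win, step):
        tile = image[y:y + win, x:x + win]
        error_map[y:y + win, x:x + win] |= evaluate(tile)
    return error_map

img = np.zeros((64, 64))
img[30:33, 40:43] = 5.0            # synthetic defect
em = evaluate_in_windows(img)
```

Combining per-window results by OR is one simple choice; averaging or voting over the overlap regions would be equally possible.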
(62) Here, it is possible, in principle, to combine such an approach with sliding windows with a technique that uses XY-stacks of images in order thus, for example, to directly interlink the image recording and evaluation: thus, instead of initially recording an entire XY-stack and then providing the latter to the ANN—or, in general, an algorithm with machine learning—as an input, each individual image can be transferred directly from the camera as an input to the ANN during the recording of an XY-stack (e.g., with overlap of the image edges). Thus, as a general rule, (i) the capture of raw images and the generation of result images and (ii) the performance of the material testing on the basis of an algorithm with machine learning can be carried out at least partly in parallel, for instance for different poses of the test object in relation to the imaging system and/or for different sliding windows of a corresponding sequence. As a result, the overall time for the image recording and evaluation can be significantly reduced in comparison with reference implementations.
(63) Setting the orientation of the optical test object can be helpful, in particular, if certain regions of the optical test object are only visible under a certain orientation; by way of example, this would be conceivable in the case of partly mirrored surfaces, etc. Some defects might only be visible under a certain orientation.
(64) Thus, in general, the pose of the optical test object in relation to the optical imaging system can be varied by way of the various techniques for arranging the optical test object in relation to the optical imaging system, as described in conjunction with block 1011.
(65) Subsequently, one or more imaging modalities are set in block 1012. In general, very different imaging modalities can be set in the process, for instance: color of the light used; polarization of the light used; magnitude of the angle-variable illumination and/or angle-variable detection; orientation of the angle-variable illumination and/or angle-variable detection.
(66) What can be achieved by such a variation of the imaging modality is that certain defects can be made particularly clearly visible, i.e., made to have a high contrast. By way of example, certain defects could only be visible in the case of a certain orientation and magnitude of the angle-variable illumination and/or angle-variable detection. Accordingly, it would be possible for certain defects of the optical test object to only be visible in the case of a certain color or polarization of the light employed.
(67) Subsequently, at least two images are captured with angle-variable illumination and/or angle-variable detection in block 1013. In the process, use is made of the one or more imaging modalities set in block 1012. Moreover, the poses of the optical test object with respect to the optical imaging system, set in block 1011, are used.
(68) When capturing the various images in one iteration of block 1013, the angle of the illumination or the angle of the detection is varied for the various images, in accordance with the modality set. By way of example, use could be made of two line patterns 351, 352 (cf.
(69) In block 1014, a check is carried out as to whether there should be a further variation of the one or more imaging modalities for the currently valid pose. Should this be the case, block 1012 is carried out again and the corresponding variation of the one or more imaging modalities is carried out. Then, at least two images are captured, once again, by means of angle-variable illumination and/or angle-variable detection in a further iteration of block 1013—for the newly set one or more imaging modalities. By way of example, line patterns rotated through 90° in the XY-plane in relation to the line patterns 351, 352 of
(70) Once it is finally determined in block 1014 that there should be no further variation of the one or more imaging modalities for the current pose, block 1015 is carried out. A check as to whether a further pose should be set is carried out in block 1015. Should this be the case, block 1011 is carried out again.
(71) Otherwise, the image capture is complete. Subsequently, the image evaluation in block 1002 takes place (cf.
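The capture loop of blocks 1011-1015 can be sketched as follows. All hardware callbacks and the concrete pose, modality and geometry labels are hypothetical stubs for illustration.

```python
def capture_all(poses, modalities, geometries, set_pose, set_modality, capture):
    """Nested capture loop of blocks 1011-1015: for each pose and each
    imaging modality, capture one image per illumination geometry or
    filter pattern (at least two, so that a digital contrast can be
    computed later). set_pose, set_modality and capture are
    hypothetical hardware callbacks."""
    images = {}
    for pose in poses:                  # blocks 1011 / 1015
        set_pose(pose)
        for modality in modalities:     # blocks 1012 / 1014
            set_modality(modality)
            images[(pose, modality)] = [
                capture(geometry) for geometry in geometries  # block 1013
            ]
    return images

# Usage with trivial stubs in place of real hardware:
imgs = capture_all(
    poses=["pose0", "pose1"],
    modalities=["line_0deg", "line_90deg"],
    geometries=["left", "right"],
    set_pose=lambda p: None,
    set_modality=lambda m: None,
    capture=lambda g: f"raw({g})",
)
```

The resulting dictionary groups the at least two raw images per pose and modality, which is exactly the grouping the image evaluation in block 1002 consumes.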
(73) Initially, a current pose is selected in block 1021 from one or more specified poses, for which the images were captured. By way of example, all poses which were activated in block 1011 of
(74) Subsequently, an imaging modality is selected in block 1022. By way of example, all of the one or more imaging modalities which were set in block 1012 of
(75) Then, the respective at least two images, which were captured in block 1013 of
(76) Depending on the type of combination carried out in block 1023, the result image can have different digital contrasts.
(77) In one example of the angle-variable illumination, it is possible, for example, to respectively capture pairs of images which are respectively associated with complementary, line-shaped illumination directions (for instance, this would correspond to the activation of all light sources 121 which are arranged on one column or in one row of the matrix of the illumination module 111 of
(78) In that case, a difference could be formed, for example in accordance with
(79)
I_x = (I_left - I_right)/(I_left + I_right) and I_y = (I_top - I_bottom)/(I_top + I_bottom),
where I_left and I_right denote the images that are in each case associated with a semicircular illumination geometry or filter pattern that is oriented left or right, and where I_top and I_bottom denote the images that are in each case associated with a semicircular illumination geometry or filter pattern oriented top or bottom. As a result, a phase gradient contrast can be generated for the result images.
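The normalized difference above can be sketched as follows; this is a minimal illustration of one possible combination step in block 1023, with the function name and the small regularization term `eps` being assumptions rather than part of the source:

```python
import numpy as np

def phase_gradient_contrast(i_a, i_b, eps=1e-9):
    """Digital phase gradient contrast from two raw images captured with
    complementary illumination geometries (e.g. left/right semicircles).

    The normalized difference suppresses the absorption contrast common
    to both images and retains the phase gradient along the axis
    separating the two illumination geometries.
    """
    i_a = np.asarray(i_a, dtype=float)
    i_b = np.asarray(i_b, dtype=float)
    # eps avoids division by zero in dark image regions.
    return (i_a - i_b) / (i_a + i_b + eps)

# Two result images with orthogonal gradient directions would be obtained
# from the left/right pair and from the top/bottom pair, respectively.
```

Applying the same function to the top/bottom pair yields the second result image, so one iteration of block 1023 can produce result images for both gradient directions.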
(80) Accordingly, it is also possible to determine different result images for different filter patterns during the angle-variable detection.
(81) Subsequently, a check is carried out for the currently selected pose in block 1024 as to whether one or more further imaging modalities are present, to be selected in a further iteration of block 1022.
(82) Should this not be the case, a check is subsequently carried out in block 1025 as to whether images were captured for a further pose and, where applicable, a further iteration of block 1021 is carried out. If no further pose is available for selection, the image evaluation is completed. Then, all images have been processed and corresponding result images are available.
(83) Subsequently, material testing can take place. Corresponding examples are illustrated in
(84)
(85) Initially, all result images—which were obtained from a plurality of iterations of block 1023—are registered in a global coordinate system in—optional—block 1031. Here, different result images correspond to different poses (cf. block 1021 of
(86) In block 1032, a defect detection algorithm is applied to the available result images for the automated detection of defects. Optionally, the defect detection algorithm could also be applied to the available raw images.
(87) Should block 1031 have been carried out, the defect detection algorithm can be applied in combination to the registered result images. This means the registered result images can be transferred collectively to the defect detection algorithm and the defect detection algorithm takes account of relationships between the result images. This can correspond to a coherent evaluation of the registered result images. As a result, it is possible to detect 3D defects. By way of example, this allows defects which have an extent across a plurality of result images to be detected. In the case of a Z-stack (XY-stack) of result images, this allows defects extending in the Z-direction (XY-plane) to be detected, for example.
(88) Should block 1031 not be carried out, it would be possible to respectively apply the defect detection algorithm to each of the result images. This means that the defect detection algorithm can be carried out multiple times and receives respectively one result image as input per execution. Then, the defect detection algorithm need not be embodied to take account of relationships between the result images. This corresponds to non-coherent evaluation of the result images. Then, the defect detection algorithm can have a particularly simple embodiment.
(89) By way of example, the defect detection algorithm can output one respective error map per execution. If the defect detection algorithm is carried out multiple times for different result images, a plurality of error maps are obtained. The error maps mark defects in the region of the optical test object imaged by the respective result image.
(90) Block 1033, in turn, is an optional block. In particular, block 1033 can be carried out if block 1031 was not carried out.
(91) In block 1033, the defects marked by the error maps can be registered in a global coordinate system. This allows defects imaged by different result images, which defects are detected by the respective execution of the defect detection algorithm, to be related to one another. In this way, it is possible to obtain a global error map, in which defects are marked in the entire region of the optical test object.
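The registration of block 1033 can be sketched as follows; the representation of an error map as a binary mask and the per-pose offsets are illustrative assumptions, not details prescribed by the source:

```python
import numpy as np

def register_error_maps(local_maps, offsets, global_shape):
    """Combine per-pose error maps into one global error map.

    local_maps: list of 2D boolean arrays, one per execution of the
        defect detection algorithm (one per result image / pose).
    offsets: list of (row, col) positions of each local map's origin in
        the global coordinate system, e.g. derived from the known poses.
    """
    global_map = np.zeros(global_shape, dtype=bool)
    for local, (r0, c0) in zip(local_maps, offsets):
        h, w = local.shape
        # A defect is marked globally if it is marked in any local map
        # covering that location (logical OR handles overlapping fields).
        global_map[r0:r0 + h, c0:c0 + w] |= local.astype(bool)
    return global_map
```

With such a global map, defects whose extent spans several result images can be related to one another, as described above.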
(92) Subsequently, there is a comparison of one or more error maps, for instance a global error map, with one or more reference error maps in block 1034. By way of example, reference error maps can be derived from an industrial standard. By way of example, reference error maps can indicate tolerances for certain defects. By way of example, a quality value of the material testing can be determined on the basis of the comparison in conjunction with block 1034. By way of example, a check could be carried out as to whether the detected defects remain within a tolerance range specified by the reference error map. Thus—in conjunction with determining the quality value—there can be assignment of the test object into one of a plurality of predefined result classes—e.g., “within tolerance”/“out of tolerance” in respect of a suitable standard or any other specification.
(93) If there is any uncertainty in the context of the comparison between the error map and the reference error map, there can also be an “uncertain” output; in that case, a manual visual inspection can be proposed. This corresponds to a “red-yellow-green” principle. Such an uncertainty in the context of the comparison may arise for a number of reasons. By way of example, an anomaly detector, which is used in conjunction with the defect detection algorithm, could indicate that unknown defects are present, which defects, for instance, are not detected by an algorithm—such as, e.g., an ANN—trained with different defect classes.
(94) The quality value can then be indicated in block 1035 on the basis of the comparison of block 1034. By way of example, the quality value can indicate whether or not the optical test object complies with a certain industrial standard. A quantitative statement could also be made, for example in respect of the number, size or distribution of defects which are represented by the quality value.
(95) Block 1034 can adopt different embodiments depending on the available error maps. By way of example, if the defect detection algorithm is applied in block 1032 to a multiplicity of result images which are all registered in a common global coordinate system, the comparison in block 1034 could be carried out between a single global error map, which is obtained by the corresponding defect detection algorithm, and the reference error map. However, if a plurality of different error maps are obtained as a result of a plurality of executions of the defect detection algorithm in block 1032, each of these error maps could be compared to the reference error map or to a plurality of corresponding reference error maps. Here, a relative arrangement of the various error maps could be taken into account by way of an appropriate registration in block 1033.
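The comparison of block 1034, including the “red-yellow-green” principle of paragraph (93), can be sketched as follows; the concrete tolerance parameters (`max_count`, `max_size`, `uncertain_margin`) are hypothetical placeholders for limits that a reference error map or standard would supply:

```python
def quality_value(defect_sizes, max_count, max_size, uncertain_margin=0.1):
    """Assign a result class by comparing detected defects against
    tolerances of a reference specification.

    defect_sizes: list with one (segmented) size per detected defect.
    Returns 'within tolerance', 'out of tolerance', or 'uncertain'
    (the latter proposing a manual visual inspection).
    """
    count = len(defect_sizes)
    largest = max(defect_sizes, default=0.0)
    if count > max_count or largest > max_size:
        return "out of tolerance"
    # Near-limit results are flagged for manual inspection
    # (red-yellow-green principle).
    if (count >= (1 - uncertain_margin) * max_count
            or largest >= (1 - uncertain_margin) * max_size):
        return "uncertain"
    return "within tolerance"
```

The same three-way output could equally be produced by an anomaly detector used in conjunction with the defect detection algorithm, as noted in paragraph (93).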
(96) As a general rule, the error maps can have different informational content in the various examples described herein.
(97) As a general rule: different items of information in respect of the defects detected by the defect detection algorithm can be stored in the error map.
(98) (i) By way of example, it would be possible for a defect detection algorithm to provide a corresponding segmentation of the respective result image for a detected defect. Thus, it would be possible for the error map to mark the position of the defect—or, in general, the relative position, orientation and/or form—by way of the segmentation.
(ii) As an alternative or in addition thereto, it would also be possible for the defect detection algorithm to provide, for a detected defect, a corresponding classification of the respective defect in relation to predefined defect classes. This information can also be stored in the error map. Examples of defect classes include: scratches; inclusions; material defects; matt surfaces; streaks; flaking; bore errors; optical offset; and turning marks. Further examples of defect classes are: dirt; chips; holes; cracks; splashes; spray points; residues; stains; imperfections; shifted coating; chromatic aberration; run-off track; glass defects; bubbles; cuts; double coating.
(iii) A further example of additional information which can be ascertained and output by the defect detection algorithm is an influence of the respective defect on the optical effect of the test object. By way of example, it may be the case that some defects—although being geometrically comparatively large, for instance—only have a small influence on the optical effect of the test object. On the other hand, even small defects may have a large influence on the optical effect of the test object in some scenarios. By way of example, the influence of the defect on the optical effect could describe, either qualitatively or else quantitatively, a change in the optical effect. By way of example, the change in the focal length of a lens, which implements the test object, could be specified in terms of the sign of the change and, optionally, in terms of the magnitude of the change.
(iv) An even further example of additional information which can be ascertained and output by the defect detection algorithm is an aggregated influence of all detected defects on the optical effect of the test object. This is based on the discovery that, depending on the type and arrangement of the defects, two separate defects can sometimes intensify or attenuate one another in their overall effect.
(99) From the aforementioned points (i)-(iv), it is evident that the error map output by the defect detection algorithm in various examples can contain different additional information items in respect of the defects.
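One possible data layout for such an error-map entry, covering items (i)-(iv), can be sketched as follows; the field names and the simple additive aggregation model are assumptions for illustration, since the source notes that defects can also intensify or attenuate one another in more complex ways:

```python
from dataclasses import dataclass

@dataclass
class DefectRecord:
    """One entry of an error map, covering items (i)-(iv) above."""
    mask: object          # (i) segmentation of the defect in the result image
    defect_class: str     # (ii) e.g. 'scratch', 'inclusion', 'bubble'
    focal_shift: float    # (iii) signed influence on the optical effect

def aggregated_influence(records):
    """(iv) Aggregated influence of all detected defects on the optical
    effect; sketched here as a simple sum of the individual influences."""
    return sum(r.focal_shift for r in records)
```

A global error map would then be a collection of such records, optionally registered in a common coordinate system as in block 1033.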
(100) Depending on the informational content of an error map, the comparison in block 1034 can assume various embodiments. In the case of simple information, the reference error map could specify, for example, the maximum extent of defects and/or maximum density of defects—independently of the classification of the defect. In a further embodiment, the reference error map could specify the maximum number of defects of a certain defect class, for example. These and further examples can be combined with one another.
(101) Typically, certain powers of ten are specified in standards. By way of example, the reference error map can map these powers of ten. In the simplest case, these powers of ten are defined by way of the edge length of the error area—considered to be square, for instance. This facilitates a direct relationship between the segmentation by the defect detection algorithm—i.e., the number of pixels assigned to the defect—and the power of ten by way of the imaging scale.
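The relationship between the segmentation and the edge length of the equivalent square error area can be sketched as follows; the function name and the micrometer unit are assumptions, and `pixel_size_um` stands for the object-space pixel pitch after the imaging scale has been applied:

```python
import math

def defect_edge_length(num_pixels, pixel_size_um):
    """Edge length of the equivalent square error area, in micrometers.

    num_pixels: pixels assigned to the defect by the segmentation.
    pixel_size_um: object-space pixel pitch (imaging scale applied).
    """
    area_um2 = num_pixels * pixel_size_um ** 2
    # The error area is considered to be square, so the edge length is
    # the square root of the area.
    return math.sqrt(area_um2)
```

For example, 100 defect pixels at a 2 µm object-space pitch correspond to an edge length of 20 µm, which can then be compared against the limits mapped by the reference error map.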
(102) Moreover, the length and width can be determined, particularly in the case of scratches but, in general, in the case of defects from other defect classes, too. This can comprise the adaptation of algorithmic pattern shapes to the segmentation; as an alternative or in addition thereto, the size of the scratches can be determined by means of image processing processes—e.g., skeleton, medial axis transformation.
(103) Particularly in the case of chips at the edge and layer offsets, the relative position and extent of the defect can be determined in relation to the test surface. This can be implemented by means of an algorithm which captures the surroundings of the defect, e.g., the accurate relative position of the edge of the test object.
(104) A further criterion that can be mapped by the reference error map and the error map is the frequency of defects in a certain percentage of the test area. This can correspond to the maximum density of defects.
(105) As a further general rule, a different type of defect detection algorithm can be used, depending on implementation, in the various examples described herein. By way of example, the defect detection algorithm could comprise a trained ANN in one implementation. By way of example, the defect detection algorithm could comprise a convolution network. The ANN can be configured to obtain one or more result images as an input map for an input layer. Typically, an ANN comprises an input layer, one or more hidden layers, and an output layer. The output layer can comprise one or more neurons. The position and/or classification of detected defects can be indicated in this way on the basis of the activity of the various neurons. By way of example, the ANN could be configured to provide an output map—which then implements the error map or from which the error map can be derived—which indicates the position and the classification of the detected defects. Different neurons in the network can correspond to different entries in the output map.
(106) However, in the process, the defect detection algorithm is not restricted to the use of an ANN in the various examples described herein. In further examples, it would also be possible for—as an alternative or in addition to an ANN—other types of defect detection algorithm to be used. Examples comprise, for instance, a threshold analysis of the digital contrast of the result images and a statistical analysis of the digital contrast. By way of example, the assumption could be made that defects have a particularly pronounced digital contrast; in this way, the defects can be segmented by means of the threshold analysis. Then, from the form of the segmentation, there could be an assignment to the predefined defect classes as a classification, for example. By way of example, statistical analysis could be used to determine the density of defects or an average distance between defects.
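The threshold analysis and a simple statistical analysis mentioned above can be sketched as follows; the assumption that defects show a particularly pronounced digital contrast is taken from the text, while the function name and the density measure are illustrative:

```python
import numpy as np

def threshold_defect_analysis(result_image, threshold):
    """Segment defect candidates by thresholding the digital contrast
    and derive a simple statistic: the defect density, i.e. the
    fraction of the imaged area marked as defective."""
    # Defects are assumed to have a particularly pronounced (absolute)
    # digital contrast, so a magnitude threshold segments them.
    mask = np.abs(np.asarray(result_image, dtype=float)) > threshold
    density = mask.mean()
    return mask, density
```

From the form of the resulting segmentation, an assignment to predefined defect classes could then follow as a classification, as described above.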
(107)
(108) Initially, the image capture takes place in block 1001. Raw data 800 are obtained from the image capture 1001. These raw data correspond to images 890, which are captured with different modalities and/or different poses of the test object 150 in relation to the optical apparatus 100.
(109) By way of example, a plurality of Z-stacks 801 of images 890 could be captured in the example of
(110) As an alternative or in addition to the capture of Z-stacks 801, XY-stacks could be captured as well.
(111) Then the image evaluation 1002 or image processing takes place. Now, result images 891 are obtained in addition to the raw data 800. The result images 891 are obtained by the suitable combination of the various images 890 of the raw data 800. Images 890 which have the same pose and the same imaging modality are combined in this case. By way of example, in the example of
(112) Subsequently, the defect detection algorithm is applied, 1032. As a result, a Z-stack 801 of error maps 895 is obtained, respectively marking the defects 901-903 in different Z-planes.
(113)
(114) By way of example, sliding windows could be used when applying the defect detection algorithm in block 1032 in order to supply the information of the result images 891 to the defect detection algorithm in sequence. This can allow the image field of the defect detection algorithm to be expanded. In particular, a number of pixels of the result image 891 can then be greater than a number of neurons of the ANN in the input layer.
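The sliding-window supply of result-image information can be sketched as follows; the tile geometry (`window`, `step`) is an assumed parameterization, chosen so that each tile matches the fixed input size of the ANN:

```python
import numpy as np

def sliding_windows(result_image, window, step):
    """Yield (row, col, crop) tiles of a result image so that an ANN
    with a fixed input size can process an image with more pixels than
    its input layer has neurons."""
    h, w = result_image.shape
    for r in range(0, h - window + 1, step):
        for c in range(0, w - window + 1, step):
            # Each tile is passed to the defect detection algorithm in
            # sequence; the per-tile outputs are later recombined.
            yield r, c, result_image[r:r + window, c:c + window]
```

The per-tile outputs form a sequence of results corresponding to the sequence of windows, which can then be combined into one overall output, as also noted in paragraph (137).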
(115) The ANN 950 can be trained in block 1052. To train the ANN 950, reference measurements can be taken for a large number of optical test objects and there can be a manual annotation of visible defects by experts in block 1051. Partly automated machine learning would also be conceivable. The training sets the connections in the various layers, i.e., e.g., appropriate weighting, etc.
(116) Subsequently, the various defects 901-903 are measured in block 1061. This can correspond to a segmentation of the marked defects 901-903 in the various result images 891.
(117) Moreover, there is a classification of the defects 901-903. By way of example, in the scenario of
(118) The defects 901-903, which are marked in the error maps 895, can also be registered in a global coordinate system; i.e., it is possible to determine the relative positioning of the various defects 901-903 in relation to one another or in relation to a reference point. To this end, it is possible, in block 1062, to take account of reference data which are obtained, for example, by a calibration measurement with a reference sample 910. The reference data can facilitate determination of the absolute size and/or relative positioning of the different defects 901-903 with respect to one another.
(119) Then, in block 1034, there can be a comparison of the processed error maps 895 with a reference error map 896, which is derived from an industrial standard in the example of
(120) Finally, a quality value which corresponds to a compliance or non-compliance with an industrial standard in the example of
(121) In the example of
(122)
(123)
(124)
(125)
(126)
(127)
(128) As a general rule, provision can be made for the use of a correction lens in the various examples described herein. The correction lens can be disposed upstream or downstream of the optical test object 150 in the beam path of the light from the illumination module 111 to the detector 114. By means of the correction lens, it may be possible to compensate an optical effect of the optical test object 150—which may implement a lens, for example. To this end, the correction lens can have an optical effect that is complementary to that of the optical test object. As a result, it is possible to correct falsifications on account of the refractive power of the optical test object 150. This renders it possible to reduce falsifications in the images and result images on account of the optical effect of the test object.
(129) In conclusion, techniques were described above, by means of which fully automated material testing is facilitated by assessing the quality of optical test objects. A classification in conjunction with an industrial standard is facilitated. The techniques described herein facilitate a particularly high sensitivity of the material testing by using a tailored digital contrast. In turn, this facilitates fewer production rejects since certain tolerances during the testing can be dimensioned to be comparatively small. Moreover, the material testing can be carried out at a high speed.
(130) By way of example, techniques have been described, which facilitate a testing process as follows: (i) applying a defect detection algorithm for quantifying features of individual defects, i.e., for example, the relative position, form, size, and/or optical effect of the defects; (ii) quantifying the overall influence of all defects on the optical effect; i.e., the aggregated influence of all detected defects on the optical effect can be determined; (iii) assigning the test object to one of a plurality of predefined result classes—e.g., “within tolerance”/“outside of tolerance” in respect of a suitable standard or another specification. This means that a quality value can be determined. By way of example, this assignment of the test object can resort to an error map, in which the information of (i) and (ii) is stored. (iv) optionally displaying the result image on a monitor, e.g., together with an indication of defects on the basis of an error map of the defect detection algorithm.
(131) In some examples, it is possible for (i) and (ii) to be carried out by separate algorithms. By way of example, a defect detection algorithm could be applied first for carrying out (i); then, following (ii), an output of the defect detection algorithm could be compared to a reference error map. On the basis of the comparison, for example on the basis of a deviation between an error map as an output of the defect detection algorithm and the reference error map, compliance with the standard (iii) can then be checked. However, in other examples, it would also be possible for (i) and (ii) to be mapped in combination by an ANN or other techniques of machine learning; information about individual defects, for instance a corresponding segmentation, can then be dispensed with in such implementations. Sometimes, it might even be possible for (i), (ii), and (iii) to be implemented in combination by a common ANN or other techniques of machine learning; such an ANN can obtain one or more of the result images as an input map and can have an output map with two or three neurons, which correspond to “within the tolerance”, “outside of the tolerance” and, optionally, “uncertain”. The techniques described herein are able to be integrated into a production process. By way of example, an optical imaging system, as described above, could be equipped with an optical test object by a robotic arm; then, the techniques described herein can be carried out.
(132) Using the techniques described herein, it is also possible to detect defects with a variable Z-position within the optical test object. This can be implemented by setting the focal position of the optical test object. Z-stacks of images can be captured. In general, the pose of the optical test object can be varied in relation to the optical imaging system.
(133) As described above, the material testing can be implemented in automated fashion. In so doing, use can be made of algorithms with machine learning, for example as a defect detection algorithm or else in any other form.
(134) In this context, use can be made of reference implementations of algorithms with machine learning, which are known in advance. In this case, it may be possible to adapt such reference implementations of algorithms of machine learning in order to process raw images captured by means of angle-variable illumination and/or angle-variable detection and/or corresponding result images, which are obtained from the raw images, particularly well. A few corresponding examples are specified below.
(135) By way of example, it would be possible to normalize the raw images and/or the result images. Such techniques of normalization are based on the discovery that—depending on the implementation of the angle-variable illumination and the processing of the raw images—different digital contrasts may be contained in the result images. The signal level of pixel values can vary significantly depending on the digital contrast. It was also determined that different locations on the test object can have significantly different signal levels of the pixel values on account of the material properties of the test object. Examples of normalization techniques comprise instance or batch normalization.
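Instance normalization of a single raw or result image can be sketched as follows; the function name and the regularization term `eps` are assumptions, and batch normalization would analogously normalize over a batch of images instead of a single one:

```python
import numpy as np

def instance_normalize(image, eps=1e-6):
    """Instance normalization of one raw or result image: removes the
    strongly varying signal level so that different digital contrasts
    and different test-object locations become comparable inputs."""
    image = np.asarray(image, dtype=float)
    # Shift to zero mean and scale to unit standard deviation;
    # eps guards against division by zero for constant images.
    return (image - image.mean()) / (image.std() + eps)
```

After such normalization, result images with very different signal levels can be fed to the same algorithm with machine learning without retuning its input scaling.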
(136) In a further example, the image field of the algorithm with machine learning can be expanded, for instance by a parallel atrous convolution and a normal convolution. In this way, size-limited inputs of algorithms for machine learning can be adapted to the comparatively extensive raw images and/or result images. By way of example, corresponding techniques are described in Chen, Liang-Chieh, et al. “Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs.” IEEE transactions on pattern analysis and machine intelligence 40.4 (2018): 834-848.
(137) In a further example, it is possible that the input of the algorithm with machine learning includes a sequence of sliding windows. Here, different windows of the sequence can map different regions of the at least one result image, with or without overlap. Then, it is possible that the output of the algorithm is determined from a combination of the results of a sequence of results. Here, the sequence of results can correspond to the sequence of sliding windows. Such a technique enables the image field of the algorithm to be expanded.
(138) For instance, a CNN called “U-net” can be used as an example of an ANN; see Ronneberger, Olaf, Philipp Fischer, and Thomas Brox. “U-net: Convolutional networks for biomedical image segmentation.” International Conference on Medical image computing and computer-assisted intervention. Springer, Cham, 2015. Such a U-net CNN can be modified by the use of skip connections in order to maintain a high local resolution of the model predictions by means of segmentation. Such techniques are described in, for example, Isola, Phillip, et al. “Image-to-image translation with conditional adversarial networks.” arXiv preprint (2017).
(139) In one example, it may be possible to provide multi-modal inputs into an algorithm for machine learning. By way of example, each raw image and/or each result image with a digital contrast could be considered a separate channel. Such a multi-modal input can differ from a conventional “3-channel red/green/blue” input: The object lies in common processing of all available image information, i.e., result images and/or raw images. Such common processing can be achieved, for example, by direct concatenation of all channels. However, as an alternative or in addition thereto, it would also be possible to take account of a concatenation of the results of preprocessing that is individually learnable for each channel. This means that, in such a case, specific preprocessing can occur for each channel in the first layers of the ANN before the concatenation of the channels is carried out.
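The direct concatenation of all channels described above can be sketched as follows; the channels-first layout is an assumed convention, and learnable per-channel preprocessing before concatenation would replace the plain stacking shown here:

```python
import numpy as np

def stack_modalities(images):
    """Concatenate raw and/or result images as separate channels of one
    multi-modal input tensor (channels-first), in contrast to a fixed
    3-channel red/green/blue input."""
    # Each raw image and/or each result image with a digital contrast
    # becomes one channel of the common input.
    return np.stack([np.asarray(im, dtype=float) for im in images], axis=0)
```

A downstream ANN would then process all available image information jointly, which is the stated object of the multi-modal input.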
(140) It goes without saying that the features of the embodiments and aspects of the invention described above can be combined with one another. In particular, the features can be used not only in the combinations described but also in other combinations or on their own without departing from the scope of the invention.
(141) By way of example, techniques were described above which are based on the use of an ANN as a defect detection algorithm. In general, use can also be made of other types of defect detection algorithm; in particular, it is possible to use defect detection algorithms which, in general, are based on techniques of machine learning. The techniques described in conjunction with the use of an ANN can also be applied to other techniques of machine learning.
(142) Further, techniques using an ANN were described above. In general, use can also be made of other algorithms with machine learning, e.g., parametric mapping or a generative model.