Surface defect inspection method and surface defect inspection apparatus
10859507 · 2020-12-08
CPC classification: G06V10/267; G01N21/8851; G06F18/2433; G06T7/44; G06V10/758; G06V10/449
International classification: G01N21/00; G01N21/898
Abstract
A surface defect inspection method includes: acquiring an original image by capturing an image of a subject of an inspection; generating texture feature images by applying a filtering process using spatial filters to the original image; generating a feature vector at each position of the original image, by extracting a value at a corresponding position from each of the texture feature images, for each of the positions of the original image; generating an abnormality level image representing an abnormality level for each position of the original image, by calculating, for each of the feature vectors, an abnormality level in a multi-dimensional distribution formed by the feature vectors; and detecting a part having the abnormality level that is higher than a predetermined level in the abnormality level image as a defect portion or a defect candidate portion.
Claims
1. A surface defect inspection method comprising: an image input step of acquiring an original image by capturing an image of a subject of an inspection; a texture feature image generating step of generating texture feature images by applying a filtering process using spatial filters to the original image, wherein the texture feature image generating step includes a process of generating an additional texture feature image by applying the filtering process using the spatial filters to an image that is a reduction of the original image or to an image that is a reduction of one of the texture feature images; a texture feature extracting step of generating a feature vector at each position of the original image, by extracting a value at a corresponding position from each of the texture feature images and the additional texture feature image, for each of the positions of the original image; an abnormality level calculating step of generating an abnormality level image representing an abnormality level for each position of the original image, by calculating, for each of the feature vectors, an abnormality level in a multi-dimensional distribution formed by the feature vectors; and a detecting step of detecting a part having the abnormality level that is higher than a predetermined level in the abnormality level image as a defect portion or a defect candidate portion.
2. The surface defect inspection method according to claim 1, wherein a direction for reducing the original image or for reducing the one of the texture feature images includes a direction in parallel with a linear defect that is to be detected.
3. The surface defect inspection method according to claim 2, wherein the spatial filters are achieved by a wavelet transform.
4. The surface defect inspection method according to claim 3, wherein the spatial filters include a Gabor filter.
5. The surface defect inspection method according to claim 4, wherein Mahalanobis distance is used as an abnormality level in the multi-dimensional distribution formed by the feature vectors.
6. The surface defect inspection method according to claim 3, wherein Mahalanobis distance is used as an abnormality level in the multi-dimensional distribution formed by the feature vectors.
7. The surface defect inspection method according to claim 2, wherein the spatial filters include a Gabor filter.
8. The surface defect inspection method according to claim 7, wherein Mahalanobis distance is used as an abnormality level in the multi-dimensional distribution formed by the feature vectors.
9. The surface defect inspection method according to claim 2, wherein Mahalanobis distance is used as an abnormality level in the multi-dimensional distribution formed by the feature vectors.
10. The surface defect inspection method according to claim 1, wherein the spatial filters are achieved by a wavelet transform.
11. The surface defect inspection method according to claim 10, wherein the spatial filters include a Gabor filter.
12. The surface defect inspection method according to claim 11, wherein Mahalanobis distance is used as an abnormality level in the multi-dimensional distribution formed by the feature vectors.
13. The surface defect inspection method according to claim 10, wherein Mahalanobis distance is used as an abnormality level in the multi-dimensional distribution formed by the feature vectors.
14. The surface defect inspection method according to claim 1, wherein the spatial filters include a Gabor filter.
15. The surface defect inspection method according to claim 14, wherein Mahalanobis distance is used as an abnormality level in the multi-dimensional distribution formed by the feature vectors.
16. The surface defect inspection method according to claim 1, wherein Mahalanobis distance is used as an abnormality level in the multi-dimensional distribution formed by the feature vectors.
17. A surface defect inspection apparatus comprising: an image capturing unit configured to capture an image of a subject of an inspection; an image input unit configured to acquire an original image of the subject of the inspection, the original image being captured by the image capturing unit; a texture feature image generating unit configured to generate texture feature images by applying a filtering process using spatial filters to the original image, wherein the texture feature image generating unit is configured to generate an additional texture feature image by applying the filtering process using the spatial filters to an image that is a reduction of the original image or to an image that is a reduction of one of the texture feature images; a texture feature extracting unit configured to generate a feature vector at each position of the original image by extracting a value at a corresponding position from each of the texture feature images and the additional texture feature image, for each of the positions of the original image; an abnormality level calculating unit configured to generate an abnormality level image representing an abnormality level for each position of the original image, by calculating, for each of the feature vectors, an abnormality level in a multi-dimensional distribution formed by the feature vectors; and a detecting unit configured to detect a part having the abnormality level that is higher than a predetermined level in the abnormality level image as a defect portion or a defect candidate portion.
Description
DETAILED DESCRIPTION OF EMBODIMENTS OF THE INVENTION
(13) A configuration of a surface defect inspection apparatus according to one embodiment of the present invention, and a surface defect inspection method using the surface defect inspection apparatus, will now be explained with reference to the drawings.
(14) [Configuration]
(15) The surface defect inspection apparatus 1 according to this embodiment includes an illumination device 2, an image capturing device 3, an image processing device 4, and a display device 5.
(16) The illumination device 2 illuminates a surface of a steel strip S that is the subject of an inspection performed by the surface defect inspection apparatus 1.
(17) The image capturing device 3 captures an image of a part of the surface of the steel strip S being illuminated by the illumination device 2, and transmits data of the acquired image of the surface of the steel strip S (original image) to the image processing device 4. The image capturing device 3 may be what is called a line sensor camera having a one-dimensional imaging device, or what is called an area camera having a two-dimensional imaging device, but, in either case, the image capturing device 3 captures the image synchronously with the conveyance of the steel strip S. When the image capturing device 3 is a line sensor camera, a continuous illumination device is used as the illumination device 2. When the image capturing device 3 is an area camera, a flash illumination device that emits flash light every time the steel strip S is carried by a certain distance is used as the illumination device 2.
(18) The image processing device 4 analyzes the image data of the surface of the steel strip S received from the image capturing device 3, detects a surface defect, if any, on the surface of the steel strip S, determines the type and the adversity level of the surface defect, and outputs the information to the display device 5.
(19) The image processing device 4 includes an image input unit 41, an image correcting unit 42, a texture feature image generating unit 43, a texture feature extracting unit 44, an abnormality level calculating unit 45, and a defect candidate detecting unit 46, all of which are provided internally. The image processing device 4 also includes, as required, a defect feature calculating unit 47 and a defect determining unit 48.
(20) The image input unit 41 includes an internal temporary storage area, and buffers pieces of surface image data of the steel strip S received from the image capturing device 3, sequentially in the temporary storage area.
(21) The image correcting unit 42 generates a corrected image by sequentially reading the pieces of image data stored in the temporary storage area included in the image input unit 41, and applying a correction process to the read image data. In the correction process, to begin with, if one or both of the edges of the steel strip S are included in the image, the image correcting unit 42 detects the position of the edge, sets the image area corresponding to the outside of the edge of the steel strip S as an area to be excluded from the inspection, and fills the area to be excluded with a mirror image of the area of the steel strip S on the inner side of the edge position, for example. The image correcting unit 42 then corrects (applies a shading correction to) the luminance non-uniformity resulting from illuminance non-uniformity attributable to the illumination device 2, so that the entire image of the steel strip S has uniform brightness.
(22) The texture feature image generating unit 43 applies a plurality of spatial filters to the corrected image, and generates a plurality of feature images (hereinafter referred to as texture feature images) that correspond to the respective spatial filters and represent the features (local frequency features) of the local pattern (texture) at each position of the image. A spatial filter herein is a process that generates each pixel of an output image from the corresponding pixel of the input image and the pixel values surrounding that pixel. It is particularly preferable to use a plurality of spatial filters that pass different wavelength ranges, or that pass signals in different wave directions.
(23) The texture feature extracting unit 44 extracts, from each of the texture feature images generated by the texture feature image generating unit 43, the value at the position corresponding to each position of the input image or the corrected image, and forms a texture feature vector for that position of the image. The number of texture feature vectors equals the number of pixels of the entire image, and the dimension of each texture feature vector equals the number of texture feature images.
(24) The abnormality level calculating unit 45 analyzes a distribution in the multi-dimensional space formed by the texture feature vectors extracted by the texture feature extracting unit 44, and calculates an abnormality level for each of the texture feature vectors. The abnormality level calculating unit 45 then generates an abnormality level image by mapping the calculated abnormality level of each of the texture feature vectors, to the corresponding position of the image.
(25) The defect candidate detecting unit 46 binarizes the abnormality level image generated by the abnormality level calculating unit 45 using a predetermined abnormality level as a threshold, and detects (labels) an image area in which pixels having an abnormality level equal to or higher than the threshold are continuously connected as a defect portion or a defect candidate. The defect candidate detecting unit 46 may also perform a process of excluding a defect candidate that can be considered not to be an adverse defect, e.g., a defect whose area is too small, from the detected defect candidates, or a process of connecting a plurality of defect candidates detected at nearby positions into one defect candidate.
(26) The defect feature calculating unit 47 calculates a defect feature quantity for each of the defect candidates detected by the defect candidate detecting unit 46. The defect feature quantity is calculated using a defect portion gray-scale image that is a cutout of a region corresponding to the defect candidate from the corrected image, and a defect portion abnormality level image that is also a cutout of a region of the defect candidate from the abnormality level image.
(27) The defect determining unit 48 determines the defect type and the adversity level of each of the defect candidates based on the defect feature quantity calculated by the defect feature calculating unit 47.
(28) The display device 5 displays information related to the surface defects detected by the image processing device 4, such as detection information (the images and the positions of the surface defects), determination information (the types, the adversity levels), and statistical information (e.g., the total number and the frequency at which surface defects of each type and each adversity level appear in the entire steel strip S).
(29) [Surface Defect Inspecting Process]
(30) A surface defect inspecting process according to one embodiment of the present invention will now be explained in detail.
(32) The surface defect inspecting process includes an image input step (Step S1), a pre-process (Step S2), a defect detecting process (Step S3), and a defect determining process (Step S4), each of which is explained below.
(33) [Pre-Process]
(34) The pre-process (Step S2) will now be explained.
(36) At the edge detecting step S21, the image correcting unit 42 detects the position of an edge of the steel strip S from an input image I(x, y), where x is the pixel coordinate in the width direction of the steel strip S, y is the pixel coordinate in the length direction, x = 0, 1, ..., n_x - 1, and y = 0, 1, ..., n_y - 1. n_x represents the size of the image in the x direction, and n_y represents the size in the y direction.
(37) At the outside-of-edge correcting step S22, the image correcting unit 42 designates the area outside of the edge of the steel strip S as an area to be excluded from the inspecting process, and generates an outside-of-edge corrected image I_E(x, y) from which no spurious edge response will be detected, by filling the excluded area with the values of the area on the inner side of the edge, as a mirror image, for example.
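As an illustration of the mirror-image filling described above, the following sketch (in Python with NumPy; the helper name and the assumption of a single right-hand edge are illustrative, not the patent's implementation) fills the columns outside a detected edge with a reflection of the columns just inside it:

```python
import numpy as np

def fill_outside_edge(image: np.ndarray, edge_x: int) -> np.ndarray:
    """Fill the region outside a detected right-hand strip edge (columns >= edge_x)
    with a mirror image of the columns just inside the edge, so that the excluded
    area produces no spurious edge response in later filtering."""
    out = image.copy()
    n_outside = out.shape[1] - edge_x            # number of columns to fill
    # Reflect the columns immediately inside the edge: edge_x-1, edge_x-2, ...
    mirror = out[:, edge_x - n_outside:edge_x][:, ::-1]
    out[:, edge_x:] = mirror
    return out

# Example: a 4x6 ramp image whose last two columns lie outside the strip edge.
img = np.tile(np.arange(6.0), (4, 1))
print(fill_outside_edge(img, edge_x=4))
```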
(38) At the shading correcting step S23, the image correcting unit 42 computes a corrected image I_C(x, y), in which the brightness of the entire image is made uniform, by correcting the luminance non-uniformity of (applying a shading correction to) the outside-of-edge corrected image I_E(x, y). In the shading correction process, for example, the brightness may be standardized by subtracting a moving average of the one-dimensional luminance from the luminance in the original image and dividing the result by the moving average, or by performing the same process using a moving average of the two-dimensional luminance in both the x and y directions. A low-pass filter may be used instead of the moving average of the luminance. As a simpler method, a luminance difference, or a high-pass filter equivalent thereto, may also be used.
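A minimal sketch of the one-dimensional variant of this shading correction, assuming a moving average along the width (x) direction; the window size and the subtract-then-divide standardization are illustrative choices:

```python
import numpy as np
from scipy.ndimage import uniform_filter1d

def shading_correct(image: np.ndarray, window: int = 51) -> np.ndarray:
    """One-dimensional shading correction along the width (x) direction: subtract
    the moving average of the luminance and divide by it, so that the corrected
    image has roughly uniform brightness."""
    background = uniform_filter1d(image, size=window, axis=1, mode="reflect")
    return (image - background) / np.maximum(background, 1e-6)
```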
(39) In the series of pre-processes described above, any of the edge detecting step S21, the outside-of-edge correcting step S22, and the shading correcting step S23 may be performed selectively, as appropriate, depending on the conditions of the input image I(x, y). If none of these steps is performed, the corrected image is I_C(x, y) = I(x, y).
(40) [Defect Detecting Process]
(41) The defect detecting process (Step S3) will now be explained.
(43) At the texture feature image generating step S31, the texture feature image generating unit 43 generates texture feature images F_j(x, y) (j = 0, 1, 2, ..., N_T - 1) by applying a plurality of filtering processes to the corrected image I_C(x, y), where N_T is the number of spatial filters. In this embodiment, Gabor filters are used as the spatial filters. A Gabor filter is a linear filter that uses a Gabor function expressed as Equation (1) below.
g(x, y) = exp(-(x'^2 + b^2 y'^2) / (2 (aλ)^2)) · exp(i 2π x'/λ),  x' = x cos θ + y sin θ,  y' = -x sin θ + y cos θ    (1)
(45) Equation (1) is a general expression of a Gabor filter, and has the form of a Gaussian attenuation function multiplied by a complex sine wave. In Equation (1), λ represents the wavelength of the sine wave, and θ represents the direction of the stripe pattern of the Gabor function. a represents the spread relative to the wavelength (the bandwidth of the Gaussian function), b represents the spatial aspect ratio, which determines the ellipticity of the support of the Gabor function, and i is the imaginary unit.
(46) In image analyses, the Gabor function indicated as Equation (1) is used as a spatial filter, but sometimes a Gabor function with fixed parameters, indicated as Equation (2) below (in particular, b = 1), is used. The following explanations will be made using Equation (2).
g(x, y) = exp(-(x^2 + y^2) / (2 (aλ)^2)) · exp(i 2π x'/λ),  x' = x cos θ + y sin θ    (2)
(48) By performing a convolution operation of the Gabor filter on the image, the local frequency component with wavelength λ (in pixels) and wave orientation θ can be extracted. Furthermore, by varying the parameters λ and θ, and also varying the scales in the x and y directions (e.g., the spread in the x direction can be increased four times by replacing x with x/4), the corresponding local frequency components of the image can be extracted. In the practical computation, however, if the spread of the Gabor filter is increased, the computational time required by the convolution operation also increases. Therefore, it is better to reduce the image size by down-sampling, instead of increasing the spread of the Gabor filter, as will be explained later. It is, however, preferable to apply a low-pass filter to the image beforehand, to avoid aliasing.
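Since Equations (1) and (2) are only partially reproduced here, the following sketch uses a standard complex Gabor kernel (Gaussian envelope times a complex sine wave); the parameter names wavelength, theta, and sigma, the kernel size, and the four orientations are assumptions rather than the patent's exact parameterization:

```python
import numpy as np
from scipy.ndimage import convolve

def gabor_kernel(wavelength: float, theta: float, sigma: float, size: int = 21) -> np.ndarray:
    """Standard complex Gabor kernel: Gaussian envelope times a complex sine wave."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)       # coordinate along the wave direction
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.exp(1j * 2 * np.pi * xr / wavelength)

def gabor_response(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Complex filter response; real and imaginary parts are convolved separately,
    with the mirror boundary handling used elsewhere in the text."""
    return (convolve(image, kernel.real, mode="mirror")
            + 1j * convolve(image, kernel.imag, mode="mirror"))

# Four orientations, 45 degrees apart, as with the four filters G_q.
image = np.random.rand(64, 64)
responses = [gabor_response(image, gabor_kernel(8.0, t, 4.0))
             for t in np.deg2rad([0, 45, 90, 135])]
```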
(51) In the definitions indicated in Equations (3) to (6), k_x = k_y = 11 is suitable.
(52) The two low-pass filters LP and LPY used before the down-sampling are defined as Equations (7) and (8) below.
LP = h h^T,  h = (1/16) [1 4 6 4 1]    (7)
LPY = [1 1 1 1]^T    (8)
(53) In the convolution operation of these two low-pass filters LP and LPY on the image, the computations are carried out assuming that the outside of the image border is a mirror image. Furthermore, the block denoted as X2, Y2 represents down-sampling that keeps every second pixel in the x and y directions, respectively.
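A sketch of these two filters and the X2, Y2 block in Python/NumPy; normalizing LPY by 1/4 is an assumption made here so that the average brightness is preserved:

```python
import numpy as np
from scipy.ndimage import convolve

# The two low-pass filters of Equations (7) and (8).
h = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0
LP = np.outer(h, h)                  # separable 5 x 5 binomial filter, LP = h h^T
LPY = np.ones((4, 1)) / 4.0          # 4-tap averaging filter in the y direction

def downsample_x2y2(image: np.ndarray) -> np.ndarray:
    """The X2, Y2 block: anti-alias with LP using mirror boundary handling, as in
    the text, then keep every second pixel in both directions."""
    return convolve(image, LP, mode="mirror")[::2, ::2]
```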
(54) To begin with, the texture feature image generating unit 43 sets the initial image as I_0(x, y) = I_C(x, y). The texture feature image generating unit 43 then computes reduced images I_p(x, y) (p = 1, 2, 3) by sequentially applying the low-pass filter LP and the X2, Y2 down-sampling to the initial image I_0(x, y); I_p(x, y) is the reduced image resulting from applying the low-pass filter LP and the X2, Y2 down-sampling p times. The texture feature image generating unit 43 also computes feature images I_pq(x, y) (p = 0, 1, 2, 3, q = 0, 1, 2, 3) by performing a convolution operation of each of the four Gabor filters G_q(x, y) (q = 0, 1, 2, 3) on each of the reduced images I_p(x, y) (p = 0, 1, 2, 3), including the initial image.
(55) At this stage, 16 feature images I_pq(x, y) in total are acquired; because the feature images I_pq(x, y) are complex-valued according to the equation defining the Gabor filter (Equation (1)), decomposing each of them into a real part image and an imaginary part image yields 32 feature images in total. These are established as texture feature images F_j(x, y) (j = 0, 1, 2, ..., 31, that is, N_T = 32). However, because the feature images I_pq(x, y) with p ≥ 1 have been reduced in size by the down-sampling, such feature images are enlarged to the size of the initial image I_0(x, y) before being used as texture feature images. As a method for enlarging, it is possible to use linear interpolation, to set each output pixel to the value of the nearest pixel, or to copy the value of each pixel of the feature image I_pq(x, y) as many times as the down-sampling factor, in the down-sampled direction.
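Putting the above together, a sketch of the 32-image generation (4 pyramid levels, 4 Gabor filters, real and imaginary parts), using nearest-neighbour enlargement back to the original size; function and variable names are illustrative, and the kernels argument would be a list of complex Gabor kernels such as those from the earlier sketch:

```python
import numpy as np
from scipy.ndimage import convolve, zoom

H = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0
LP = np.outer(H, H)                                  # low-pass filter of Equation (7)

def texture_feature_images(corrected: np.ndarray, kernels: list, levels: int = 4) -> np.ndarray:
    """For each pyramid level (LP smoothing then X2, Y2 down-sampling) apply every
    complex Gabor kernel, split the response into real and imaginary parts, and
    enlarge each part back to the original size by nearest-neighbour zoom.
    4 levels x 4 kernels x 2 parts gives the 32 feature images of the text."""
    h0, w0 = corrected.shape
    pyramid = [corrected]
    for _ in range(levels - 1):
        smoothed = convolve(pyramid[-1], LP, mode="mirror")
        pyramid.append(smoothed[::2, ::2])           # the X2, Y2 block
    features = []
    for img in pyramid:
        for k in kernels:
            resp = (convolve(img, k.real, mode="mirror")
                    + 1j * convolve(img, k.imag, mode="mirror"))
            for part in (resp.real, resp.imag):
                features.append(zoom(part, (h0 / part.shape[0], w0 / part.shape[1]), order=0))
    return np.stack(features)                        # shape (32, h0, w0)
```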
(56) With this operation, spatial filters having different spatial frequencies and different wave directions can be combined and applied to the original image. In particular, performing a convolution operation of the original spatial filter on the down-sampled image achieves, with a smaller amount of computation, the same effect as applying a spatial filter having a lower spatial frequency to the original image. While it is possible to use the 32 texture feature images F_j(x, y) (j = 0, 1, 2, ..., 31) acquired by the process described above as they are, performing the following expansion process for linear defects improves the sensitivity to thin and long linear defects.
(57) To begin with, the texture feature image generating unit 43 computes feature images J_0s(x, y) (s = 0, 1, 2) by performing convolution operations of three Gabor filters H_s(x, y) (s = 0, 1, 2) on an image J_0(x, y) generated by applying the Y4 down-sampling to the image I_0(x, y). The texture feature image generating unit 43 also computes feature images J_1s(x, y) (s = 0, 1, 2) by performing convolution operations of the three Gabor filters H_s(x, y) (s = 0, 1, 2) on an image J_1(x, y) generated by applying the Y4 down-sampling to the image J_0(x, y), and computes feature images J_2s(x, y) (s = 0, 1, 2) by performing convolution operations of the three Gabor filters H_s(x, y) (s = 0, 1, 2) on an image J_2(x, y) generated by applying the Y4 down-sampling to the image J_1(x, y).
(58) The texture feature image generating unit 43 then computes feature images J_3s(x, y) (s = 0, 1, 2) by performing convolution operations of the three Gabor filters H_s(x, y) (s = 0, 1, 2) on an image J_3(x, y) generated by applying the Y4 down-sampling to the image I_1(x, y), and computes feature images J_4s(x, y) (s = 0, 1, 2) by performing convolution operations of the three Gabor filters H_s(x, y) (s = 0, 1, 2) on an image J_4(x, y) generated by applying the Y4 down-sampling to the image J_3(x, y). The texture feature image generating unit 43 likewise computes feature images J_5s(x, y) (s = 0, 1, 2) by performing convolution operations of the three Gabor filters H_s(x, y) (s = 0, 1, 2) on an image J_5(x, y) generated by applying the Y4 down-sampling to the image I_2(x, y), and computes feature images J_6s(x, y) (s = 0, 1, 2) by performing convolution operations of the three Gabor filters H_s(x, y) (s = 0, 1, 2) on an image J_6(x, y) generated by applying the Y4 down-sampling to the image J_5(x, y).
(59) The texture feature image generating unit 43 then computes feature images J_7s(x, y) (s = 0, 1, 2) by performing convolution operations of the three Gabor filters H_s(x, y) (s = 0, 1, 2) on an image J_7(x, y) generated by applying the Y4 down-sampling to the image I_3(x, y). Through the operations described above, spatial filters having a longer spatial wavelength in the Y direction can be applied, so that the S/N ratio for surface defects that are long in the Y direction can be improved. By aligning this direction with the direction in which long surface defects appear, that is, the longitudinal direction of the steel strip S in the manufacturing process, long surface defects can advantageously be detected more easily.
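The Y4 block can be sketched the same way; reducing only the y direction makes each subsequent Gabor filter act as if stretched in y on the original image. Taking axis 0 as y and normalizing LPY are assumptions made here:

```python
import numpy as np
from scipy.ndimage import convolve

LPY = np.ones((4, 1)) / 4.0          # Y-direction anti-alias filter (cf. Equation (8))

def downsample_y4(image: np.ndarray) -> np.ndarray:
    """Reduce only the y (length) direction by a factor of 4; axis 0 is taken as y.
    Convolving a Gabor filter with the reduced image acts like a filter stretched
    four times in y on the original image, which raises the S/N ratio for defects
    that are long in the y direction."""
    return convolve(image, LPY, mode="mirror")[::4, :]

# The chains described above, e.g. J_0 = Y4(I_0), J_1 = Y4(J_0), J_2 = Y4(J_1).
I0 = np.random.rand(256, 64)
J0 = downsample_y4(I0)
J1 = downsample_y4(J0)
J2 = downsample_y4(J1)
```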
(60) Through the process described above, 24 additional feature images J_rs(x, y) (r = 0, 1, ..., 7, s = 0, 1, 2) are acquired. Among all of these feature images, namely the 16 feature images I_pq acquired previously and the 24 additional feature images J_rs, the following 32 feature images in total are used: the feature images I_pq(x, y) (p = 0, 1, 2, 3, q = 1, 2, 3), and J_00(x, y), J_02(x, y), J_10(x, y), J_12(x, y), J_20(x, y), J_21(x, y), J_22(x, y), J_30(x, y), J_32(x, y), J_40(x, y), J_41(x, y), J_42(x, y), J_50(x, y), J_52(x, y), J_60(x, y), J_61(x, y), J_62(x, y), J_70(x, y), J_71(x, y), J_72(x, y).
(61) Because the feature images J_rs(x, y) have been reduced by down-sampling, they are enlarged to the size of the initial image I_0(x, y) and used as texture feature images, in the same manner as the feature images obtained without the expansion process for linear defects. The correspondence between the value j and each of the feature images may be determined in any way, but in the explanation herein, indices are assigned starting from the feature images with smaller suffixes of I and J, in the order of the real part and the imaginary part: the real part of I_01(x, y) is j = 0; the imaginary part of I_01(x, y) is j = 1; the real part of I_02(x, y) is j = 2; ...; the real part of J_00(x, y) is j = 24; ...; the imaginary part of J_72(x, y) is j = 63.
(62) Used in the algorithm herein are 64 texture feature images F_j(x, y) (j = 0, 1, 2, ..., 63) acquired by decomposing each of the 12 texture feature images I_01(x, y), I_02(x, y), I_03(x, y), I_11(x, y), I_12(x, y), I_13(x, y), I_21(x, y), I_22(x, y), I_23(x, y), I_31(x, y), I_32(x, y), I_33(x, y) and the additional 20 texture feature images J_00(x, y), J_02(x, y), J_10(x, y), J_12(x, y), J_20(x, y), J_21(x, y), J_22(x, y), J_30(x, y), J_32(x, y), J_40(x, y), J_41(x, y), J_42(x, y), J_50(x, y), J_52(x, y), J_60(x, y), J_61(x, y), J_62(x, y), J_70(x, y), J_71(x, y), J_72(x, y), that is, 32 texture feature images in total, into an image corresponding to the real part and an image corresponding to the imaginary part. It is also possible to add I_00(x, y), I_10(x, y), I_20(x, y), I_30(x, y) to these texture feature images, but they can be omitted because the frequency ranges passed by the filters J_rs(x, y) cover the frequency ranges passed by the filters used for I_00(x, y), I_10(x, y), I_20(x, y), I_30(x, y), as can be confirmed by applying a fast Fourier transform (FFT) to the filters. Limiting the number to 64 not only reduces the amount of computation but also makes the numbers easier to handle in the subsequent computations, because 64 = 2^6 images are used.
(63) The embodiment of the texture feature image generating step S31 is not limited to that described above, and another set of spatial filters may be used. For example, a two-dimensional wavelet transform or wavelet packet transform is applicable, the simplest being the Haar wavelet. Alternatively, differential filters, edge extraction filters, Difference-of-Gaussian (DoG) filters, Laplacian-of-Gaussian (LoG) filters, or the like may be used, alone or in combination.
(64) Furthermore, as mentioned earlier, the anti-aliasing filter applied prior to the down-sampling does not need to be one of the spatial filters described above. Using Gabor filters to acquire the texture feature images has the advantage that the amount of computation is reduced.
(65) At the texture feature extracting step S32, the texture feature extracting unit 44 extracts a texture feature vector at each pixel. For the feature images F_j(x, y) = {f_j(x, y)} (where f_j(x, y) represents the value at the coordinate (x, y) in the feature image F_j(x, y)), by replacing the (x, y) pair with another index i, as indicated in Equation (9) below, the feature vectors can be defined as F_j = {f_{i,j}} (j = 0, 1, ..., N_T - 1), as indicated in Equation (10) below, where i is an index uniquely assigned to a pixel (x, y) of the input image (corrected image), defined as i = n_x·y + x, for example, with i = 0, 1, 2, ..., n_x·n_y - 1, and f_{i,j} = f_j(x, y).
i = n_x · y + x    (9)
F_j = {f_{i,j}}  (i = 0, 1, 2, ..., n_x·n_y - 1)    (10)
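In NumPy, the index mapping of Equation (9) corresponds to a row-major reshape, so the whole feature-vector extraction is a single reshape and transpose (the array names and sizes are illustrative):

```python
import numpy as np

# features holds the N_T texture feature images F_j as an array of shape (N_T, n_y, n_x).
# Row-major flattening of each (n_y, n_x) image maps pixel (x, y) to index
# i = n_x * y + x, exactly as in Equation (9); row i of the transposed matrix is
# then the texture feature vector of pixel i (Equation (10)).
features = np.random.rand(64, 100, 120)     # toy stand-in: N_T = 64, n_y = 100, n_x = 120
n_t, n_y, n_x = features.shape
F = features.reshape(n_t, n_y * n_x).T      # shape (n_x*n_y, N_T); F[i, j] = f_{i,j}
```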
(67) At the abnormality level calculating step S33, the abnormality level calculating unit 45 generates an abnormality level image by statistically analyzing the distribution of the texture feature vectors extracted at the texture feature extracting step S32 in the N_T-dimensional space, and calculating the abnormality level at each pixel of the input image (corrected image). As the abnormality level, the Mahalanobis distance is used, for example. Specifically, to begin with, the abnormality level calculating unit 45 prepares a matrix F in which the feature vectors F_j are arranged as columns, as indicated in Equation (11) below.
F = {f_{i,j}}  (i = 0, 1, ..., n_x·n_y - 1; j = 0, 1, ..., N_T - 1)    (11)
(69) The abnormality level calculating unit 45 then calculates m_j as indicated in Equation (12) below, where Σ represents the sum across all pixels; m_j is the average of the j-th column of the matrix F across all pixels.
m_j = (1 / (n_x·n_y)) Σ_i f_{i,j}    (12)
(71) The abnormality level calculating unit 45 acquires a matrix Z (= {z_{i,j}}) by subtracting the average m_j from the matrix F, as indicated in Equation (13) below.
Z = {z_{i,j}},  z_{i,j} = f_{i,j} - m_j    (13)
(72) Next, the abnormality level calculating unit 45 calculates the variance-covariance matrix C = {c_{j1,j2}} (j1, j2 = 0, 1, 2, ..., N_T - 1) from z_{i,j}, as indicated in Equation (14) below.
c_{j1,j2} = (1 / (n_x·n_y)) Σ_i z_{i,j1} · z_{i,j2}    (14)
(74) The abnormality level calculating unit 45 then calculates the Mahalanobis distance D_i (the square of the Mahalanobis distance, to be exact, but herein simply referred to as the Mahalanobis distance), as indicated in Equation (15) below.
D_i = Σ_j w_{i,j} · z_{i,j}    (15)
(76) Where W = {w_{i,j}} is the solution of the simultaneous equations W·C = Z, that is, W = Z·C^(-1). Finally, the abnormality level calculating unit 45 re-maps the Mahalanobis distance D_i to the coordinate (x, y) based on the index i, to acquire an abnormality level image D(x, y) = D_i (i = n_x·y + x).
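A compact sketch of Equations (12) to (15) in NumPy, solving W·C = Z rather than explicitly inverting C, as the text suggests; the function name is illustrative:

```python
import numpy as np

def mahalanobis_image(F: np.ndarray, n_y: int, n_x: int) -> np.ndarray:
    """Abnormality-level image from the (n_x*n_y, N_T) feature matrix F:
    center the columns (Eq. (13)), form the variance-covariance matrix (Eq. (14)),
    obtain W from W C = Z without explicitly inverting C, and sum W * Z row-wise
    to get the squared Mahalanobis distance D_i (Eq. (15))."""
    Z = F - F.mean(axis=0)              # z_{i,j} = f_{i,j} - m_j
    C = (Z.T @ Z) / Z.shape[0]          # c_{j1,j2}, an N_T x N_T matrix
    W = np.linalg.solve(C, Z.T).T       # W = Z C^(-1), valid since C is symmetric
    D = np.sum(W * Z, axis=1)           # D_i = sum_j w_{i,j} z_{i,j}
    return D.reshape(n_y, n_x)          # re-map index i back to coordinates (x, y)
```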
(77) The calculation for acquiring the Mahalanobis distance is equivalent to the following operation. Looking at each coordinate of the N_T feature images F_j(x, y), the coordinate has an N_T-dimensional value (f_0(x, y), f_1(x, y), ..., f_{N_T-1}(x, y)). This can be represented as a point in the N_T-dimensional space, p(x, y) = (f_0(x, y), f_1(x, y), ..., f_{N_T-1}(x, y)). In this manner, every pixel of the image is plotted in the N_T-dimensional space. The plotted set is then standardized for dispersion, and the distance from the origin in the standardized space is the Mahalanobis distance.
(78) The standardization for dispersion is achieved by taking an orthogonal basis along the directions in which the variance is highest, calculating the standard deviation in each of the basis directions, and dividing each component in a basis direction by that standard deviation. This is the same operation as what is called a principal component analysis.
(79) At the defect candidate detecting step S34, the defect candidate detecting unit 46 binarizes the abnormality level image D(x, y) using a threshold D_thr, and detects an area where the pixels satisfying D(x, y) ≥ D_thr are connected as a defect portion or a defect candidate. Furthermore, at this step, the defect candidate detecting unit 46 may impose constraints on the area of a connected region or on the maximum abnormality level within the connected region, and exclude any defect candidate not satisfying the constraints. For example, the defect candidate detecting unit 46 excludes a defect candidate whose area is smaller than a minimum area setting and whose highest abnormality level is lower than a minimum level setting. Furthermore, for any two defect candidates, if the distance between the regions satisfies a predetermined condition, the defect candidate detecting unit 46 connects the defect candidates into one and the same defect candidate. For example, denoting a coordinate in a defect candidate 1 as (x_1, y_1), a coordinate in a defect candidate 2 as (x_2, y_2), a distance constraint pertaining to the x coordinate as d_x, and a distance constraint pertaining to the y coordinate as d_y, if there are coordinates (x_1, y_1) and (x_2, y_2) that satisfy |x_1 - x_2| < d_x and |y_1 - y_2| < d_y, the defect candidate detecting unit 46 connects the defect candidate 1 and the defect candidate 2. This connection may be performed by repeating image dilations and erosions.
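A sketch of this detecting step using scipy.ndimage; the dilation-based bridging stands in for the |x_1 - x_2| < d_x, |y_1 - y_2| < d_y merging rule and is approximate (a dilation of half-width d bridges gaps up to 2d), and the constraint values are illustrative:

```python
import numpy as np
from scipy.ndimage import label, binary_dilation, find_objects

def detect_candidates(D: np.ndarray, D_thr: float, min_area: int = 5,
                      d_x: int = 3, d_y: int = 3) -> list:
    """Binarize the abnormality level image, merge candidates that lie close to
    each other by dilating before labelling, and drop regions smaller than
    min_area."""
    mask = D >= D_thr
    bridged = binary_dilation(mask, structure=np.ones((2 * d_y + 1, 2 * d_x + 1)))
    labels, _ = label(bridged)
    labels[~mask] = 0                 # keep labels only on originally detected pixels
    candidates = []
    for idx, sl in enumerate(find_objects(labels), start=1):
        if sl is not None and np.count_nonzero(labels[sl] == idx) >= min_area:
            candidates.append(sl)     # bounding slices of the surviving candidates
    return candidates
```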
(80) It is favorable to set the threshold D_thr as indicated in Equation (16) below, under the assumption that the Mahalanobis distance (its square) follows a Chi-squared distribution. In Equation (16), p_thr represents a significance level (probability) for determining that the abnormality level indicates a defect, and f_{χ²}^(-1)(p, n) is the inverse of the cumulative distribution function of the Chi-squared distribution with n degrees of freedom. In this manner, the threshold for the abnormality level can be established as a probability.
D_thr = f_{χ²}^(-1)(1 - p_thr, N_T)    (16)
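With SciPy, the threshold of Equation (16) is one call to the inverse χ² cumulative distribution function; taking the quantile at 1 - p_thr assumes p_thr is the upper-tail probability of falsely flagging a normal pixel, and the numeric values are illustrative:

```python
from scipy.stats import chi2

# Threshold on the squared Mahalanobis distance, assuming it follows a chi-squared
# distribution with N_T degrees of freedom (Equation (16)).
N_T = 64
p_thr = 1e-6
D_thr = chi2.ppf(1.0 - p_thr, df=N_T)   # inverse cumulative distribution function
print(D_thr)
```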
(81) [Defect Determining Process]
(82) The defect determining process (Step S4) will now be explained.
(84) At the defect feature calculating step S41, the defect feature calculating unit 47 performs various operations on a defect portion gray-scale image, which is a cutout of the region of the defect candidate from the corrected image I_C(x, y), and on a defect portion abnormality level image, which is a cutout of the region of the defect candidate from the abnormality level image D(x, y), to calculate defect feature quantities. Typical defect feature quantities include those pertaining to the size or the shape of the defect, e.g., the width, the length, the area, the aspect ratio, and the perimeter length, and those pertaining to the gray scale, such as the maximum luminance, the minimum luminance, the average luminance, and the histogram frequencies of the luminance within the defect area; these quantities are acquired from the defect portion gray-scale image. Furthermore, in this embodiment, feature quantities pertaining to the abnormality level, such as the maximum abnormality level, the average abnormality level, and the histogram frequencies of the abnormality level in the defect portion, are acquired from the defect portion abnormality level image. Defect feature quantities pertaining to each of the texture features are also acquired from the defect portion texture feature images, which are cutouts of the region of the defect candidate from each of the texture feature images F_j(x, y).
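A sketch of a few of the named feature quantities, computed with NumPy from the two cutouts and a binary mask of the candidate region; the selection and dictionary keys are illustrative, not the patent's full list:

```python
import numpy as np

def defect_features(gray_cut: np.ndarray, abn_cut: np.ndarray, mask: np.ndarray) -> dict:
    """A few of the feature quantities named above, computed from the defect portion
    gray-scale image, the defect portion abnormality level image, and a binary mask
    of the candidate region."""
    ys, xs = np.nonzero(mask)
    width, length = np.ptp(xs) + 1, np.ptp(ys) + 1
    return {
        "width": int(width),
        "length": int(length),
        "area": int(mask.sum()),
        "aspect_ratio": float(length / width),
        "max_luminance": float(gray_cut[mask].max()),
        "min_luminance": float(gray_cut[mask].min()),
        "mean_luminance": float(gray_cut[mask].mean()),
        "max_abnormality": float(abn_cut[mask].max()),
        "mean_abnormality": float(abn_cut[mask].mean()),
    }
```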
(85) At the defect determining step S42, the defect determining unit 48 determines the defect type and the adversity level of each of the defect candidates, based on the defect feature quantities calculated at the defect feature calculating step S41. For this determination, determination rules related to the defect feature quantities and created by a user (if-then rules or determination tables), a discriminator acquired by what is called machine learning, or a combination thereof is used.
(86) As may be clear from the explanation above, in the surface defect inspection apparatus 1 according to one embodiment of the present invention, the texture feature image generating unit 43 generates a plurality of texture feature images by applying a filtering process using a plurality of spatial filters to an input image; the texture feature extracting unit 44 generates a feature vector at each position of the image, by extracting a value at a corresponding position from each of the texture feature images, for each position of the input image; the abnormality level calculating unit 45 generates an abnormality level image representing an abnormality level at each position of the input image, by calculating an abnormality level, for each of the feature vectors, in a multi-dimensional distribution formed by the feature vectors; and the defect candidate detecting unit 46 detects a part having the abnormality level that is higher than a predetermined level in the abnormality level image as a defect portion or a defect candidate portion. In this manner, the sensitivity for a thin and long linear surface defect can be improved, and surface defects can be detected at high sensitivity even when a thin and long linear surface defect and a small and short surface defect are both present.
(88) While one embodiment to which the invention made by the inventors is applied has been explained above, the scope of the present invention is not limited to the descriptions and the drawings that form a part of this disclosure. Other embodiments, examples, operational technologies, and the like made by persons skilled in the art on the basis of this embodiment all fall within the scope of the present invention.
(89) According to embodiments of the present invention, it is possible to provide a surface defect inspection method and a surface defect inspection apparatus capable of improving the sensitivity for a thin and long linear surface defect, and of detecting a thin and long linear surface defect and a small and short surface defect even when these surface defects are both present.
REFERENCE SIGNS LIST
(90) 1 surface defect inspection apparatus; 2 illumination device; 3 image capturing device; 4 image processing device; 5 display device; 41 image input unit; 42 image correcting unit; 43 texture feature image generating unit; 44 texture feature extracting unit; 45 abnormality level calculating unit; 46 defect candidate detecting unit; 47 defect feature calculating unit; 48 defect determining unit; S steel strip