Method for quantitatively identifying the defects of large-size composite material based on infrared image sequence
11587250 · 2023-02-21
Assignee
Inventors
- Yuhua CHENG (Chengdu, CN)
- Chun Yin (Chengdu, CN)
- Xiao Yang (Chengdu, CN)
- Kai CHEN (Chengdu, CN)
- Xuegang Huang (Chengdu, CN)
- Gen Qiu (Chengdu, CN)
- Yinze Wang (Chengdu, CN)
CPC classification
G06T7/30
PHYSICS
G06T3/4038
PHYSICS
International classification
G06T7/30
PHYSICS
Abstract
The present invention provides a method for quantitatively identifying the defects of a large-size composite material based on infrared image sequences. First, the overlap area of an infrared splicing image is obtained, and the infrared splicing image is divided into three parts according to the overlap area: overlap area, reference image area and registration image area. Then, the defect areas are extracted from the infrared splicing image to obtain P defect areas, the conversion coordinates of the pixels of the defect areas are obtained according to the three parts of the infrared splicing image, and the transient thermal response curves of the centroid coordinate and the edge point coordinates are further obtained. The thermal diffusion points are found among the edge points of the defect areas according to a created weight sequence and a dynamic distance threshold ε.sub.ttr×d.sub.p_max. Finally, based on the thermal diffusion points, the accurate identification of the quantitative size of the defects is completed.
Claims
1. A method for quantitatively identifying the defects of large-size composite material based on infrared image sequence, comprising: (1). obtaining a plurality of local reconstruction images of a large-size composite material based on a plurality of the infrared image sequences recorded by an infrared thermal imaging camera through a plurality of local detections; (2). locating the overlap area of two adjacent local reconstruction images: 2.1). splicing two adjacent local reconstruction images into an infrared splicing image, and calculating the coordinate supplement values X.sub.add and Y.sub.add: taking a local reconstruction image as reference image I.sub.1, and an adjacent local reconstruction image which has overlap area with reference image I.sub.1 as registration image I.sub.2, putting reference image I.sub.1 and registration image I.sub.2 into a world coordinate system, and then splicing reference image I.sub.1 with registration image I.sub.2 by using an affine transformation matrix H to obtain an infrared splicing image I.sub.12, where the size of reference image I.sub.1 and registration image I.sub.2 is the same: the width is m pixels, the height is n pixels; where affine transformation matrix H is:
X.sub.add=0|X.sub.min>0,X.sub.add=X.sub.min|X.sub.min≤0
Y.sub.add=0|Y.sub.min>0,Y.sub.add=Y.sub.min|Y.sub.min≤0 where:
I.sub.12(x.sub.12_i,y.sub.12_j), i=1, . . . , M, j=1, . . . , N
M=Round(X.sub.max−X.sub.min)
N=Round(Y.sub.max−Y.sub.min) where the width of infrared splicing image I.sub.12 is M pixels, the height of infrared splicing image I.sub.12 is N pixels, Round( ) is a function of rounding a number to the nearest integer; 2.3). determining the three parts of the infrared splicing image {circle around (1)}. transforming reference image I.sub.1 and registration image I.sub.2 to the search rectangle area: with the lower left corner as the origin, along the x-axis and y-axis, putting the pixel values I.sub.1(x.sub.1_i,y.sub.1_j),i=1, . . . , m,j=1, . . . , n of reference image I.sub.1 into the search rectangle area, and extending reference image I.sub.1 to the search rectangle area to obtain pixel values I′.sub.1(x.sub.1_i,y.sub.1_j), i=1, . . . , M, j=1, . . . , N, where there is no pixel value on reference image I.sub.1, 0 is added in; transforming the pixel values I.sub.2(x.sub.2_i,y.sub.2_j),i=1, . . . , m,j=1, . . . , n of registration image I.sub.2 into the search rectangle area through the affine transformation of H·I.sub.2(x.sub.2_i,y.sub.2_j) to obtain pixel values I′.sub.2(x.sub.2_i,y.sub.2_j), i=1, . . . , M,j=1, . . . , N, where there is no pixel value, 0 is added in; {circle around (2)}. initializing i=1,j=1; {circle around (3)}. judging whether both of pixel value I′.sub.1(x.sub.1_i,y.sub.1_j) and pixel value I′.sub.2(x.sub.2_i,y.sub.2_j) are non-zero values, if yes, pixel value I.sub.12(x.sub.12_i,y.sub.12_j) of infrared splicing image I.sub.12 is a pixel value of overlap area, if no, pixel value I.sub.12(x.sub.12_i,y.sub.12_j) of infrared splicing image I.sub.12 is not a pixel value of overlap area, i=i+1; {circle around (4)}.
if i>M, then setting i=1, j=j+1 and returning to step {circle around (3)}, or directly returning to step {circle around (3)}, until j>N, thus all the pixel values of the overlap area form an overlap area denoted by I.sub.12_overlap; dividing infrared splicing image I.sub.12 into three parts according to overlap area I.sub.12_overlap: overlap area I.sub.12_overlap, reference image area I.sub.12_1 and registration image area I.sub.12_2, where reference image area I.sub.12_1 is the part of reference image I.sub.1 which does not belong to overlap area I.sub.12_overlap, registration image area I.sub.12_2 is the part of affine image I′.sub.2 which does not belong to overlap area I.sub.12_overlap, affine image I′.sub.2 is obtained through the following transformation:
VT.sub.p_q_t=|ΔV.sub.p_cen_t,ΔV.sub.p_q_t|.sub.1,2,3,t=1,2, . . . , T−1
VT.sub.p_q_t=1,t=T where ΔV.sub.p_cen_t is the value of temperature change sequences ΔV.sub.p_cen at t.sup.th frame, ΔV.sub.p_q_t is the value of temperature change sequences ΔV.sub.p_q at t.sup.th frame; |ΔV.sub.p_cen_t,ΔV.sub.p_q_t|.sub.1,2,3 means: if the difference between ΔV.sub.p_cen_t and ΔV.sub.p_q_t is less than a change threshold ε.sub.Δ, then the value VT.sub.p_q_t of t.sup.th frame is 1, if the difference between ΔV.sub.p_cen_t and ΔV.sub.p_q_t is not less than a change threshold ε.sub.Δ, and both of ΔV.sub.p_cen_t and ΔV.sub.p_q_t are positive or negative, then the value VT.sub.p_q_t of t.sup.th frame is 2, if the difference between ΔV.sub.p_cen_t and ΔV.sub.p_q_t is not less than a change threshold ε.sub.Δ, and ΔV.sub.p_cen_t and ΔV.sub.p_q_t have different signs, then the value VT.sub.p_q_t of t.sup.th frame is 3; creating a weight sequence ω.sub.p for the p.sup.th defect area de′.sub.p, to the value ω.sub.p_t of weight sequence ω.sub.p at t.sup.th frame, if the temperature of the corresponding frame at the corresponding transient thermal curve TTR.sub.p_q is the maximum temperature, then the value ω.sub.p_t is 1.5, or is 1; calculating the distance between transient thermal response curve TTR.sub.p_cen and transient thermal response curve TTR.sub.p_q:
Description
BRIEF DESCRIPTION OF THE DRAWING
(1) The above and other objectives, features and advantages of the present invention will be more apparent from the following detailed description taken in conjunction with the accompanying drawings, in which:
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
(11) Hereinafter, preferred embodiments of the present invention will be described with reference to the accompanying drawings. It should be noted that the similar modules are designated by similar reference numerals although they are illustrated in different drawings. Also, in the following description, a detailed description of known functions and configurations incorporated herein will be omitted when it may obscure the subject matter of the present invention.
Embodiment
(13) In one embodiment of the present invention, as shown in
(14) Step S1: obtaining local reconstruction images
(15) obtaining a plurality of local reconstruction images of a large-size composite material based on a plurality of infrared image sequences recorded by an infrared thermal imaging camera through a plurality of local detections;
(16) Step S2: locating the overlap area of two adjacent local reconstruction images
(17) Step S2.1: splicing two adjacent local reconstruction images into an infrared splicing image, and calculating the coordinate supplement values X.sub.add and Y.sub.add
(18) In determining the overlap area of the two adjacent local reconstruction images, it is relatively easy to guarantee the horizontal positions of the infrared thermal imaging camera and the detected subject, i.e. the large-size composite material, because few horizontal influencing factors affect the recording of the infrared thermal imaging camera; thus a defect has the same size in different local reconstruction images. However, because the distance between the infrared thermal imaging camera and the detected subject cannot be accurately adjusted, the different local reconstruction images have certain rotation angles and height changes. Therefore, the present invention uses an affine transformation matrix H to perform image registration for two adjacent local reconstruction images with an overlap area, the details of which are described as follows:
(19) Taking a local reconstruction image as reference image I.sub.1, and an adjacent local reconstruction image which has overlap area with reference image I.sub.1 as registration image I.sub.2, putting reference image I.sub.1 and registration image I.sub.2 into a world coordinate system, and then splicing reference image I.sub.1 with registration image I.sub.2 by using an affine transformation matrix H to obtain an infrared splicing image I.sub.12, where the size of reference image I.sub.1 and registration image I.sub.2 is the same: the width is m pixels, the height is n pixels;
(20) where affine transformation matrix H is:
(21)
H=[o.sub.1 o.sub.2 o.sub.3; o.sub.4 o.sub.5 o.sub.6; 0 0 1]
(22) where o.sub.1, o.sub.2, o.sub.3, o.sub.4, o.sub.5, o.sub.6 are the coefficients which are determined by pairs of matching pixels of reference image I.sub.1 and registration image I.sub.2.
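The determination of the coefficients o.sub.1~o.sub.6 from matched pixel pairs can be sketched as a linear least-squares problem. The following sketch uses synthetic matches generated from an assumed ground-truth affine map; it is an illustration, not the patent's own estimation procedure:

```python
import numpy as np

# Synthetic matched pixel pairs: dst is generated from src with an assumed
# ground-truth affine map "true" (these values are not from the patent).
src = np.array([[10., 20.], [200., 40.], [50., 300.], [220., 310.]])  # I2 pixels
true = np.array([[1.01, 0.02, -3.0],
                 [-0.02, 0.99, 5.0]])
dst = src @ true[:, :2].T + true[:, 2]   # corresponding I1 pixels

# Stack two equations per match: x' = o1*x + o2*y + o3, y' = o4*x + o5*y + o6.
A = np.zeros((2 * len(src), 6))
A[0::2, 0:2], A[0::2, 2] = src, 1.0
A[1::2, 3:5], A[1::2, 5] = src, 1.0
coef, *_ = np.linalg.lstsq(A, dst.ravel(), rcond=None)

# Assemble the homogeneous affine matrix H from the recovered coefficients.
H = np.array([[coef[0], coef[1], coef[2]],
              [coef[3], coef[4], coef[5]],
              [0.0, 0.0, 1.0]])
```

With at least three non-collinear matches the system is determined; additional matches are absorbed by least squares.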
(23) Calculating inverse matrix H.sup.−1 according to affine transformation matrix H, and denoted by:
(24)
H.sup.−1=[o′.sub.1 o′.sub.2 o′.sub.3; o′.sub.4 o′.sub.5 o′.sub.6; 0 0 1]
(25) where o′.sub.1, o′.sub.2, o′.sub.3, o′.sub.4, o′.sub.5, o′.sub.6 are the calculated coefficients.
(26) Considering that a pixel coordinate cannot be negative, while negative coordinates may appear in the world coordinate system after the affine transformation, we need to calculate the coordinate supplement values X.sub.add and Y.sub.add according to affine transformation matrix H and registration image I.sub.2:
X.sub.add=0|X.sub.min>0,X.sub.add=X.sub.min|X.sub.min≤0
Y.sub.add=0|Y.sub.min>0,Y.sub.add=Y.sub.min|Y.sub.min≤0
(27) where:
(28)
X.sub.min=min.sub.x{H·(x.sub.2_1,y.sub.2_n),H·(x.sub.2_1,y.sub.2_1)}
Y.sub.min=min.sub.y{H·(x.sub.2_1,y.sub.2_1),H·(x.sub.2_m,y.sub.2_1)}
(29) where (x.sub.2_1,y.sub.2_n) is the pixel coordinate at column 1 and row n of registration image I.sub.2, (x.sub.2_1,y.sub.2_1) is the pixel coordinate at column 1 and row 1 of registration image I.sub.2, (x.sub.2_m,y.sub.2_1) is the pixel coordinate at column m and row 1 of registration image I.sub.2;
(30) min.sub.x{·,·} denotes choosing the minimal x-coordinate from the two pixel coordinates,
(31) min.sub.y{·,·} denotes choosing the minimal y-coordinate from the two pixel coordinates.
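The computation of the coordinate supplement values can be sketched as follows. The matrix H and the image size are assumed values, and feeding the column-1 corners into the minimal x-coordinate and the row-1 corners into the minimal y-coordinate is one plausible reading of the step above:

```python
import numpy as np

# Assumed affine matrix H in homogeneous form (o1..o6 would come from matches).
H = np.array([[0.99, 0.02, -3.5],
              [-0.02, 0.99, -7.2],
              [0.0,  0.0,  1.0]])

m, n = 640, 480  # assumed registration-image width/height in pixels

def warp(H, x, y):
    """Apply the affine transformation H to one pixel coordinate."""
    vx, vy, _ = H @ np.array([x, y, 1.0])
    return vx, vy

# Column-1 corners bound the minimal x, row-1 corners bound the minimal y.
x_min = min(warp(H, 1, n)[0], warp(H, 1, 1)[0])
y_min = min(warp(H, 1, 1)[1], warp(H, m, 1)[1])

# Coordinate supplements: non-zero only when a warped corner goes non-positive.
x_add = 0.0 if x_min > 0 else x_min
y_add = 0.0 if y_min > 0 else y_min
```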
(32) Step S2.2: determining a search rectangle area
(33) Calculating the maximum x-coordinate X.sub.max and the maximum y-coordinate Y.sub.max according to affine transformation matrix H and registration image I.sub.2:
(34)
X.sub.max=max.sub.x{H·(x.sub.2_m,y.sub.2_n),H·(x.sub.2_m,y.sub.2_1)}
Y.sub.max=max.sub.y{H·(x.sub.2_m,y.sub.2_n),H·(x.sub.2_1,y.sub.2_n)}
(35) where (x.sub.2_m,y.sub.2_n) is the pixel coordinate at column m and row n of registration image I.sub.2, (x.sub.2_m,y.sub.2_1) is the pixel coordinate at column m and row 1 of registration image I.sub.2, (x.sub.2_1,y.sub.2_n) is the pixel coordinate at column 1 and row n of registration image I.sub.2;
(36) max.sub.x{·,·} denotes choosing the maximal x-coordinate from the two pixel coordinates,
(37) max.sub.y{·,·} denotes choosing the maximal y-coordinate from the two pixel coordinates.
(38) Judging and determining the values of four vertices: when X.sub.min>0, then X.sub.min=1, when X.sub.min≤0, then X.sub.min=X.sub.min, when Y.sub.min>0, then Y.sub.min=1, when Y.sub.min≤0, then Y.sub.min=Y.sub.min, when X.sub.max>m, then X.sub.max=X.sub.max, when X.sub.max≤m, then X.sub.max=m, when Y.sub.max>n, then Y.sub.max=Y.sub.max, when Y.sub.max≤n, then Y.sub.max=n;
(39) Connecting the four vertices (X.sub.max,Y.sub.max), (X.sub.max,Y.sub.min), (X.sub.min,Y.sub.max), (X.sub.min,Y.sub.min) to form the search rectangle area;
(40) Denoting the pixel values of infrared splicing image I.sub.12 as:
I.sub.12(x.sub.12_i,y.sub.12_j), i=1, . . . , M, j=1, . . . , N
M=Round(X.sub.max−X.sub.min)
N=Round(Y.sub.max−Y.sub.min)
(41) where the width of infrared splicing image I.sub.12 is M pixels, the height of infrared splicing image I.sub.12 is N pixels, Round( ) is a function of rounding a number to the nearest integer.
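Step S2.2's clamping of the four extrema and the rounding to the spliced-image size M×N can be sketched as follows (the warped extrema below are assumed values, not from the patent's example):

```python
# Warped extrema from step S2.1 (assumed values for illustration).
x_min_w, y_min_w = -2.49, -19.01
x_max_w, y_max_w = 700.3, 495.8
m, n = 640, 480  # assumed single-image width/height in pixels

# Clamp the four extrema so the search rectangle covers both images.
X_min = 1 if x_min_w > 0 else x_min_w
Y_min = 1 if y_min_w > 0 else y_min_w
X_max = x_max_w if x_max_w > m else m
Y_max = y_max_w if y_max_w > n else n

# Spliced-image dimensions in pixels, rounded to the nearest integer.
M = round(X_max - X_min)
N = round(Y_max - Y_min)
```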
(42) Step S2.3: determining the three parts of the infrared splicing image
(43) {circle around (1)}. Transforming reference image I.sub.1 and registration image I.sub.2 to the search rectangle area: with the lower left corner as the origin, along the x-axis and y-axis, putting the pixel values I.sub.1(x.sub.1_i,y.sub.1_j),i=1, . . . , m,j=1, . . . , n of reference image I.sub.1 into the search rectangle area, and extending reference image I.sub.1 to the search rectangle area to obtain pixel values I′.sub.1(x.sub.1_i,y.sub.1_j), i=1, . . . , M,j=1, . . . , N, where there is no pixel value on reference image I.sub.1, 0 is added in; transforming the pixel values I.sub.2(x.sub.2_i,y.sub.2_j), i=1, . . . , m,j=1, . . . , n of registration image I.sub.2 into the search rectangle area through the affine transformation of H·I.sub.2(x.sub.2_i,y.sub.2_j) to obtain pixel values I′.sub.2(x.sub.2_i,y.sub.2_j), i=1, . . . , M,j=1, . . . , N, where there is no pixel value, 0 is added in;
(44) {circle around (2)}. Initializing i=1,j=1;
(45) {circle around (3)}. Judging whether both of pixel value I′.sub.1(x.sub.1_i,y.sub.1_j) and pixel value I′.sub.2(x.sub.2_i,y.sub.2_j) are non-zero values, if yes, pixel value I.sub.12(x.sub.12_i,y.sub.12_j) of infrared splicing image I.sub.12 is a pixel value of overlap area, if no, pixel value I.sub.12(x.sub.12_i,y.sub.12_j) of infrared splicing image I.sub.12 is not a pixel value of overlap area, i=i+1;
(46) {circle around (4)}. If i>M, then setting i=1, j=j+1 and returning to step {circle around (3)}, or directly returning to step {circle around (3)}, until j>N; thus all the pixel values of the overlap area form an overlap area denoted by I.sub.12_overlap;
(47) Dividing infrared splicing image I.sub.12 into three parts according to overlap area I.sub.12_overlap: overlap area I.sub.12_overlap, reference image area I.sub.12_1 and registration image area I.sub.12_2, where reference image area I.sub.12_1 is the part of reference image I.sub.1 which does not belong to overlap area I.sub.12_overlap, registration image area I.sub.12_2 is the part of affine image I′.sub.2 which does not belong to overlap area I.sub.12_overlap, and affine image I′.sub.2 is obtained through the following transformation:
(48)
(x′.sub.2_i′,y′.sub.2_j′)=H·(x.sub.2_i,y.sub.2_j)
(49) where (x.sub.2_i,y.sub.2_j) is the pixel coordinate at column i and row j of registration image I.sub.2, i=1,2, . . . , m, j=1,2, . . . , n, (x′.sub.2_i′,y′.sub.2_j′) is the pixel coordinate at column i′ and row j′ of affine image I′.sub.2.
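The pixel-by-pixel overlap test of steps {circle around (1)}~{circle around (4)} reduces to one element-wise operation; a toy sketch with assumed 4×4 canvases standing in for the M×N search rectangle:

```python
import numpy as np

# A pixel belongs to the overlap area exactly when both the extended reference
# image I1' and the warped registration image I2' are non-zero there.
I1p = np.zeros((4, 4))
I1p[:, :3] = 1.0          # reference image occupies the left three columns
I2p = np.zeros((4, 4))
I2p[:, 2:] = 1.0          # warped registration image occupies the right two

# The double loop over i, j reduces to one element-wise test.
overlap_mask = (I1p != 0) & (I2p != 0)
```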
(50) The three parts of infrared splicing image I.sub.12 are shown in
(51) Step S3: extracting the defect areas from infrared splicing image I.sub.12
(52) The accuracy of the quantitative assessment of morphological information at the defect location can be improved by clustering the pixels of the infrared splicing image based on the L*a*b color space and extracting the defect areas of maximum brightness; the details are described as follows:
(53) Step S3.1: Transforming the pixel values (temperature characteristic values) of infrared splicing image I.sub.12 from RGB color space to L*a*b color space;
(54) Step S3.2: Clustering, retaining, discarding the pixels of infrared splicing image, performing morphological opening and closing operations to obtain defect areas:
(55) Clustering the pixels of infrared splicing image I.sub.12 into K clusters according to a* and b* color values, retaining the pixels of the cluster which has maximum L* (brightness), discarding the rest pixels; then performing morphological opening and closing operations on infrared splicing image I.sub.12 to connect adjacent pixels to obtain defect areas de.sub.1,de.sub.2, . . . , de.sub.P, where P is the number of defect areas, for the p.sup.th defect area, its number of pixels is denoted by N.sub.p.
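Step S3.2's clustering and retention can be sketched as below. The RGB-to-L*a*b conversion and the morphological opening/closing are omitted here; the synthetic L*a*b pixels, K=2, and the seed centers are assumptions of this sketch:

```python
import numpy as np

# Synthetic L*a*b pixels: a small bright "defect" cluster and a large dark
# "background" cluster (assumed data, columns are L*, a*, b*).
rng = np.random.default_rng(0)
hot = rng.normal([60.0, 20.0, 30.0], 1.0, (50, 3))
cold = rng.normal([20.0, -5.0, -10.0], 1.0, (200, 3))
lab = np.vstack([hot, cold])

# Plain Lloyd k-means on the (a*, b*) color values only.
K = 2
centers = np.array([[18.0, 28.0], [-4.0, -9.0]])  # assumed seeds, one per cluster
for _ in range(10):
    labels = np.argmin(((lab[:, None, 1:] - centers) ** 2).sum(-1), axis=1)
    centers = np.array([lab[labels == k, 1:].mean(0) for k in range(K)])

# Retain the cluster with maximum mean brightness L*, discard the rest.
bright = int(np.argmax([lab[labels == k, 0].mean() for k in range(K)]))
defect_pixels = lab[labels == bright]
```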
(56) Step S4: Quantitatively identifying the size of each defect area of infrared splicing image I.sub.12
(57) Through step S2, infrared splicing image I.sub.12 is divided into three parts: overlap area I.sub.12_overlap, reference image area I.sub.12_1 which is the part of reference image I.sub.1 that does not belong to overlap area I.sub.12_overlap, and registration image area I.sub.12_2 which is the part of affine image I′.sub.2 that does not belong to overlap area I.sub.12_overlap. Comparing the pixels of defect areas de.sub.1,de.sub.2, . . . , de.sub.P obtained through step S3 with the coordinates of the three parts, the following three cases are obtained.
(58) Step S4.1: obtaining the actual number NR.sub.p of pixels of the p.sup.th defect area de.sub.p
(59) Case1: to the p.sup.th defect area de.sub.p, as shown in
(60)
(61) where (x.sub.12_p_i,y.sub.12_p_j) is the pixel coordinate of the p.sup.th defect area de.sub.p of infrared splicing image I.sub.12 at column i and row j, (x.sub.1_p_i″,y.sub.1_p_j″) is the corresponding conversion coordinate of pixel of the p.sup.th defect area de.sub.p on reference image I.sub.1 at column i″ and row j″.
(62) Case2: to the p.sup.th defect area de.sub.p, as shown in
(63)
(64) where (x.sub.12_p_i,y.sub.12_p_j) is the pixel coordinate of the p.sup.th defect area de.sub.p of infrared splicing image I.sub.12 at column i and row j, (x.sub.2_p_i″,y.sub.2_p_j″) is the corresponding conversion coordinate of pixel of the p.sup.th defect area de.sub.p on registration image I.sub.2 at column i″ and row j″.
(65) As shown in
(66) To case 1 and case 2: obtaining the p.sup.th defect area de′.sub.p on reference image I.sub.1 or on registration image I.sub.2 according to the corresponding conversion coordinates of pixels of the p.sup.th defect area de.sub.p; extracting the coordinates of pixels of the edge points from the p.sup.th defect area de′.sub.p to obtain an edge point coordinate set denoted by c.sub.p;
(67) obtaining the centroid coordinate (x.sub.p_cen,y.sub.p_cen) of the p.sup.th defect area de′.sub.p according to edge point coordinate set c.sub.p, and then obtaining the corresponding transient thermal response curve TTR.sub.p_cen from the corresponding infrared image sequence, according to the centroid coordinate (x.sub.p_cen,y.sub.p_cen) of the p.sup.th defect area de′.sub.p;
(68) obtaining the corresponding transient thermal response curve TTR.sub.p_q from the corresponding infrared image sequence, according to the coordinate (x.sub.p_q,y.sub.p_q) of pixel of the edge point in edge point coordinate set c.sub.p, q is the coordinate serial number, q=1,2, . . . , Q.sub.p, Q.sub.p is the number of the edge points in edge point coordinate set c.sub.p;
(69) to transient thermal response curve TTR.sub.p_cen and transient thermal response curve TTR.sub.p_q, calculating the temperature change rate of each frame (time) to obtain temperature change sequence ΔV.sub.p_cen and temperature change sequence ΔV.sub.p_q respectively;
(70) comparing the temperature change rates of temperature change sequence ΔV.sub.p_cen and temperature change sequence ΔV.sub.p_q at each frame to obtain a weighting factor sequence VT.sub.p_q, where the value VT.sub.p_q_t of t.sup.th frame is:
VT.sub.p_q_t=|ΔV.sub.p_cen_t,ΔV.sub.p_q_t|.sub.1,2,3,t=1,2, . . . , T−1
VT.sub.p_q_t=1,t=T
(71) where ΔV.sub.p_cen_t is the value of temperature change sequences ΔV.sub.p_cen at t.sup.th frame, ΔV.sub.p_q_t is the value of temperature change sequences ΔV.sub.p_q at t.sup.th frame; |ΔV.sub.p_cen_t,ΔV.sub.p_q_t|.sub.1,2,3 means:
(72) if the difference between ΔV.sub.p_cen_t and ΔV.sub.p_q_t is less than a change threshold ε.sub.Δ, then the value VT.sub.p_q_t of the t.sup.th frame is 1; if the difference between ΔV.sub.p_cen_t and ΔV.sub.p_q_t is not less than the change threshold ε.sub.Δ, and ΔV.sub.p_cen_t and ΔV.sub.p_q_t are both positive or both negative, then the value VT.sub.p_q_t of the t.sup.th frame is 2; if the difference between ΔV.sub.p_cen_t and ΔV.sub.p_q_t is not less than the change threshold ε.sub.Δ, and ΔV.sub.p_cen_t and ΔV.sub.p_q_t have different signs, then the value VT.sub.p_q_t of the t.sup.th frame is 3;
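The weighting factor sequence VT.sub.p_q described above can be sketched as follows; treating a zero change rate as sharing either sign is an assumption of this sketch:

```python
import numpy as np

def vt_sequence(dv_cen, dv_q, eps_delta):
    """Per-frame comparison of the centroid's and an edge point's change rates.

    dv_cen, dv_q hold the T-1 per-frame temperature change rates; the returned
    VT sequence has T entries, with the last frame fixed to 1.
    """
    T = len(dv_cen) + 1
    vt = np.ones(T, dtype=int)
    for t in range(T - 1):
        if abs(dv_cen[t] - dv_q[t]) < eps_delta:
            vt[t] = 1                      # rates nearly equal
        elif dv_cen[t] * dv_q[t] >= 0:
            vt[t] = 2                      # same sign, large gap (0 counts as either sign)
        else:
            vt[t] = 3                      # opposite signs
    return vt
```

For instance, with change rates [0.5, −0.3] against [0.45, 0.4] and ε.sub.Δ=0.1, the first frame is nearly equal, the second has a large opposite-sign gap, and the last frame is fixed to 1.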
(73) creating a weight sequence ω.sub.p for the p.sup.th defect area de′.sub.p; for the value ω.sub.p_t of weight sequence ω.sub.p at the t.sup.th frame, if the temperature of the corresponding frame on the corresponding transient thermal response curve TTR.sub.p_q is the maximum temperature, then the value ω.sub.p_t is 1.5, otherwise it is 1;
(74) calculating the distance between transient thermal response curve TTR.sub.p_cen and transient thermal response curve TTR.sub.p_q:
(75)
d.sub.p_q=√(Σ.sub.t=1.sup.T ω.sub.p_t×VT.sub.p_q_t×(TTR.sub.p_cen_t−TTR.sub.p_q_t).sup.2)
(76) where TTR.sub.p_cen_t is the value of transient thermal response curve TTR.sub.p_cen at t.sup.th frame, TTR.sub.p_q_t is the value of transient thermal response curve TTR.sub.p_q at t.sup.th frame;
(77) finding out the maximum value from all distances d.sub.p_q, q=1,2, . . . , Q.sub.p, which is denoted by d.sub.p_max, then judging whether the distance d.sub.p_q between transient thermal response curve TTR.sub.p_cen and transient thermal response curve TTR.sub.p_q is greater than ε.sub.ttr×d.sub.p_max, if yes, then the q.sup.th edge point of the p.sup.th defect area de′.sub.p is a thermal diffusion point, otherwise it is a defect point, where ε.sub.ttr is a distance coefficient which is greater than 0 and less than 1, ε.sub.ttr×d.sub.p_max is a dynamic distance threshold;
(78) counting up the number NS.sub.p of thermal diffusion points, and then obtaining the actual number NR.sub.p of pixels of the p.sup.th defect area de.sub.p of infrared splicing image I.sub.12: NR.sub.p=N.sub.p−NS.sub.p.
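The classification of edge points by the dynamic distance threshold ε.sub.ttr×d.sub.p_max can be sketched as below, assuming the distance is a weighted Euclidean distance over frames scaled by the weight sequence ω.sub.p and the weighting factors VT.sub.p_q; all numeric inputs are assumed values:

```python
import numpy as np

def diffusion_points(ttr_cen, ttr_edges, w, vt, eps_ttr):
    # Weighted distance from the centroid curve to each edge-point curve
    # (weighted Euclidean form assumed), then the dynamic-threshold test.
    d = np.sqrt((w * vt * (ttr_edges - ttr_cen) ** 2).sum(axis=1))
    return d > eps_ttr * d.max()   # True -> thermal diffusion point

ttr_cen = np.array([1.0, 2.0, 3.0])
ttr_edges = np.array([[1.1, 2.0, 3.0],    # close to the centroid curve
                      [5.0, 6.0, 7.0]])   # far from it: diffusion candidate
w = np.ones(3)          # weight sequence ω_p (all 1 in this toy case)
vt = np.ones((2, 3))    # weighting factors VT (all 1 in this toy case)
mask = diffusion_points(ttr_cen, ttr_edges, w, vt, 0.9)
```

Counting the `True` entries of `mask` gives NS.sub.p, and N.sub.p minus that count gives NR.sub.p.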
(79) Case 3: to the p.sup.th defect area, as shown in
(80) Step S4.1.1: obtaining the conversion coordinates of the pixels within reference image area I.sub.12_1 and overlap area I.sub.12_overlap as case 1, and then obtaining the number NS.sub.p_1 of thermal diffusion points on reference image I.sub.1 as case 1 and case 2;
(81) Step S4.1.2: obtaining the conversion coordinates of pixels within registration image area I.sub.12_2 and overlap area I.sub.12_overlap as case 2, and then obtaining the number NS.sub.p_2 of thermal diffusion points on registration image I.sub.2 as case 1 and case 2;
(82) Step S4.1.3: obtaining the conversion coordinates of pixels within overlap area I.sub.12_overlap as case 1, and then obtaining a plurality of transient thermal response curves through the infrared image sequence which corresponds to reference image I.sub.1 according to the conversion coordinates of pixels;
(83) Step S4.1.4: obtaining the conversion coordinates of pixels within overlap area I.sub.12_overlap as case 2, and then obtaining a plurality of transient thermal response curves through the infrared image sequence which corresponds to registration image I.sub.2 according to the conversion coordinates of pixels;
(84) For two transient thermal response curves obtained from the same infrared image sequence, we can calculate their Euclidean distance and take it as their similarity. However, two transient thermal response curves respectively obtained from two different infrared image sequences at the same location do not correspond along the same x axis. This is because the thermal conductivity and thermal resistance of the defect do not change, as shown in
(85) Step S4.1.5: calculating the similarity of the two transient thermal response curves which are respectively from the plurality of transient thermal response curves obtained in step S4.1.3 and the plurality of transient thermal response curves obtained in step S4.1.4 and correspond to the same location at overlap area I.sub.12_overlap by using the dynamic time warping algorithm, if the similarity is greater than similarity threshold SI.sub.threshold, then the corresponding pixel on overlap area I.sub.12_overlap is a consistent pixel;
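A minimal dynamic time warping sketch for comparing two transient thermal response curves whose time axes are not aligned (how the DTW cost is converted into the similarity compared against SI.sub.threshold is not specified in this extraction):

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping cost between two 1-D sequences."""
    na, nb = len(a), len(b)
    D = np.full((na + 1, nb + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, na + 1):
        for j in range(1, nb + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # Extend the cheapest of the three admissible warping steps.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[na, nb]
```

Two curves that differ only by a stretched time axis, e.g. [1, 2, 3] against [1, 2, 2, 3], warp onto each other at zero cost, which is exactly why DTW suits curves that "are not corresponded along the same x axis".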
(86) Step S4.1.6: counting the consistent pixels on overlap area I.sub.12_overlap to obtain the number NR.sub.p_overlap of the consistent pixels on overlap area I.sub.12_overlap, thus obtaining the number NS.sub.p of thermal diffusion points of the p.sup.th defect area de.sub.p of infrared splicing image I.sub.12:
(87)
(88) where orgNR.sub.p_overlap is the number of the pixels of the p.sup.th defect area de.sub.p on overlap area I.sub.12_overlap;
(89) obtaining the actual number NR.sub.p of pixels of the p.sup.th defect area de.sub.p of infrared splicing image I.sub.12: NR.sub.p=N.sub.p−NS.sub.p.
(90) Step S4.2: calculating the size S.sub.p of the p.sup.th defect area de.sub.p:
(91)
S.sub.p=NR.sub.p×(L/P.sub.x)×(B/P.sub.y)
(92) where L is the width of detection area, B is the height of detection area, P.sub.x is the number of pixels along the width direction of image, P.sub.y is the number of pixels along the height direction of image.
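Step S4.2 can be sketched in one line, assuming each pixel of the image covers (L/P.sub.x)×(B/P.sub.y) of the detection area:

```python
def defect_size(nr_p, L, B, Px, Py):
    """Defect size: actual pixel count times the area covered by one pixel.

    L, B are the detection-area width/height; Px, Py are the image's pixel
    counts along the width/height directions.
    """
    return nr_p * (L / Px) * (B / Py)

# Assumed illustration values: 100 defect pixels on a 200 x 100 detection area
# imaged at 400 x 200 pixels.
size = defect_size(100, 200.0, 100.0, 400, 200)
```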
(93) Step S4.3: processing all P defect areas according to steps S4.1~S4.2, thus the sizes S.sub.1,S.sub.2, . . . , S.sub.P of P defect areas de.sub.1,de.sub.2, . . . , de.sub.P are obtained, and the identification of the quantitative size of defects is completed.
EXAMPLE
(94) In this example, we perform two local detections on a specimen of a large-size composite material; there is a certain overlap area between the two local detections. By reconstructing the two infrared image sequences obtained by the two detections, two local reconstruction images are obtained. One is taken as reference image I.sub.1, the other is taken as registration image I.sub.2, and both are put into a world coordinate system. Then, by splicing reference image I.sub.1 with registration image I.sub.2 using an affine transformation matrix H, an infrared splicing image I.sub.12 is obtained. As shown in
(95) In this example, affine transformation matrix H is:
(96)
(97) Affine transformation matrix H is a homography transformation matrix, its inverse matrix is also a homography transformation matrix. In this example, the inverse matrix H.sup.−1 of affine transformation matrix H is:
(98)
(99) In this example, X.sub.max=758.5277, X.sub.min=1, Y.sub.max=640, Y.sub.min=−5.3680, the search rectangle area is (758.5277, 640), (758.5277,−5.3680), (1,640), (1, −5.3680).
(100) In this example, the number of pixels of overlap area I.sub.12_overlap is 166868.
(101) In this example, the images of infrared splicing image I.sub.12 before and after clustering, retaining and discarding are shown in
(102) The results that the defect areas belong to according to the three parts of infrared splicing image I.sub.12 are shown in
(103) TABLE-US-00001
TABLE 1
Defect      Serial number      Part(s) in infrared
area        in FIG. 8A~8C      splicing image I.sub.12
de.sub.1    1                  I.sub.12_1
de.sub.2    2                  I.sub.12_1
de.sub.3    3                  I.sub.12_overlap
de.sub.4    4                  I.sub.12_overlap
de.sub.5    5                  I.sub.12_1, I.sub.12_overlap, I.sub.12_2
de.sub.6    6                  I.sub.12_2
de.sub.7    7                  I.sub.12_2
(104) As shown in
(105) As shown in
(106) To defect area (defect) de.sub.5, its number N.sub.5 of pixels is 11394, the number NS.sub.5_1 of thermal diffusion points on reference image I.sub.1 is 228 (the dynamic distance threshold is 346.2116), the number NS.sub.5_2 of thermal diffusion points on registration image I.sub.2 is 237 (the dynamic distance threshold is 598.1519). Thus, the number NS.sub.5 of thermal diffusion points of the 5.sup.th defect area de.sub.5 of infrared splicing image I.sub.12 is:
(107)
(108) Thus, the actual number NR.sub.5 of pixels of the 5.sup.th defect area de.sub.5 of infrared splicing image I.sub.12 is:
NR.sub.5=N.sub.5−NS.sub.5=11394−265=11129
(109) In this example, the defect areas de.sub.1˜4 belong to case 1, the defect areas de.sub.6˜7 belong to case 2, the actual numbers NR.sub.1˜4 and NR.sub.6˜7 of pixels of the defects are listed in table 2.
(110) TABLE-US-00002
TABLE 2
            number of pixels   number of thermal     actual number of       dynamic distance
defect      of defect area     diffusion points      pixels of defect       threshold:
area        N.sub.p            NS.sub.p              area NR.sub.p          ε.sub.ttr × d.sub.p_max
de.sub.1    46                 1                     45                     52.0766
de.sub.2    17                 1                     16                     37.4789
de.sub.3    4089               7                     4082                   55.5715
de.sub.4    5295               61                    5234                   174.1479
de.sub.6    130                25                    105                    20.0233
de.sub.7    200                54                    146                    8.9148
(111) In this example, the sizes of the defects are obtained according to the actual size of pixel, and listed in table 3.
(112) TABLE-US-00003
TABLE 3
Defect area                   Number of pixels      Identified
(Defect)     Actual size      corresponding to      number of      Identified           Difference
                              actual area           pixels         size
de.sub.1     19.63 mm.sup.2   50.25                 45             17.85 mm.sup.2       −1.78 mm.sup.2
de.sub.2     7.07 mm.sup.2    18.10                 16             6.35 mm.sup.2        −0.72 mm.sup.2
de.sub.3     1600 mm.sup.2    4096                  4082           1619.45 mm.sup.2     −19.45 mm.sup.2
de.sub.4     2000 mm.sup.2    5120                  5234           2076.48 mm.sup.2     76.48 mm.sup.2
de.sub.5     4400 mm.sup.2    11264                 11129          4415.19 mm.sup.2     15.19 mm.sup.2
de.sub.6     38.48 mm.sup.2   98.51                 105            41.66 mm.sup.2       3.18 mm.sup.2
de.sub.7     63.16 mm.sup.2   161.69                146            57.92 mm.sup.2       −5.24 mm.sup.2
(113) As can be seen from Table 3, the present invention of a method for quantitatively identifying the defects of large-size composite material based on infrared image sequence has realized the accurate identification of quantitative size of defects.
(114) While illustrative embodiments of the invention have been described above, it is, of course, understood that various modifications will be apparent to those of ordinary skill in the art. Such modifications are within the spirit and scope of the invention, which is limited and defined only by the appended claims.