METHOD FOR ADAPTIVELY SELECTING GROUND PENETRATING RADAR IMAGE FOR DETECTING MOISTURE DAMAGE

20220276374 · 2022-09-01


Abstract

A method for adaptively selecting a ground penetrating radar (GPR) image for detecting a moisture damage is provided. The method adaptively selects the GPR image according to a contrast of the GPR image. The method includes the following steps: S1, reading pre-processed GPR data; S2, adjusting a resolution of a picture; S3, inputting a data set into a recognition model; S4, outputting a moisture damage result; S5, judging whether there is a detection target or not by using an initial random image data set; and S6, generating the GPR image randomly and incrementally and selecting the GPR image with a proper contrast. A proper B-scan image is found effectively, quickly and automatically by combining a recognition algorithm with a deep learning model or an image classification model, so as to achieve automatic recognition and detection based on the GPR image and to improve recognition precision.

Claims

1. A method for adaptively selecting a Ground Penetrating Radar (GPR) image for detecting a moisture damage, wherein the method adaptively selects the GPR image with a proper contrast according to data of the GPR image, comprising the following steps:

S1, reading pre-processed GPR data: generating GPR images with different contrasts randomly in a set contrast data range after pre-processing GPR data to construct an initial random image data set, wherein the initial random image data set comprises N pictures;

S2, adjusting resolutions of the N pictures: defining the initial random image data set as an RID data set, zooming the RID data set to 224*224 to obtain a zoomed data set and defining the zoomed data set as an RBD data set; that is, zooming the resolution of the moisture damage initial image data set directly to 224*224 to obtain the RBD data set;

S3, inputting the RBD data set into a recognition model: inputting the RBD data set obtained in the step S2 into the recognition model, and executing a step S4 after an operation of the recognition model, wherein a picture input resolution of the recognition model is 224*224 and a picture output resolution of the recognition model is 224*224; the recognition model is a mixed deep learning model comprised of two portions: a feature extraction portion adopting ResNet50 and a target detection portion adopting a YOLO V2 frame;

S4, outputting a moisture damage result: post-processing an output result of the recognition model in the S3, wherein the post-processing comprises the following steps: S41, judging a quantity of candidate boxes BBoxes of the spectra in the output result, executing S42 when the quantity of candidate boxes BBoxes is greater than 1, otherwise, outputting a result directly; S42, judging whether the candidate boxes BBoxes are overlapped or not, executing S43 when the candidate boxes BBoxes are overlapped, otherwise, outputting the result directly; S43, judging whether label names corresponding to overlapped candidate boxes are identical or not, wherein when the label names corresponding to the overlapped candidate boxes are identical, the label names corresponding to merged candidate boxes are invariable, and when the label names corresponding to the overlapped candidate boxes are not identical, indicating that moisture damage label names and bridge joint label names are comprised simultaneously, the label names are output as bridge joint, Joint; S44, merging the candidate boxes BBoxes, taking a minimum value of the intersected candidate boxes in the x and y directions and taking a maximum value of w and h, wherein coordinates of a merged candidate box are [x.sub.min, y.sub.min, w.sub.max, h.sub.max]; S45, outputting the result, adjusting an output picture resolution to be equal to a picture resolution of the moisture damage initial image data set in the output result of the recognition model, wherein the output result is a label name with a target and an image of the candidate box BBoxes (x, y, w, h) corresponding to the target;

S5, judging whether a detection target is present or not with the initial random image data set: S51, converting the output result in the S4 into a matrix A.sub.i corresponding to pixel points in a picture, wherein A.sub.i is defined as: A.sub.i[m, n] = 1, when x.sub.i ≤ m ≤ x.sub.i + w.sub.i and y.sub.i ≤ n ≤ y.sub.i + h.sub.i; 0, otherwise; where 1≤m≤H.sub.0, 1≤n≤W.sub.0; wherein H.sub.0 is a picture height of an image output by the recognition model and W.sub.0 is a picture width of the image output by the recognition model; summating the matrixes A.sub.i corresponding to the N pictures in the RID data set and calculating a mean value of the matrixes A.sub.i to acquire a mean value matrix A, wherein A is defined as: A = (1/N) · Σ.sub.i=1.sup.N A.sub.i; S52, setting k.sub.1=0.8 and θ.sub.0=0.5, and updating the mean value matrix A according to the formula below to acquire an updated mean value matrix A:
A(A < max(k.sub.1 · max(max(A)), θ.sub.0)) = 0; wherein k.sub.1 is a target association coefficient; θ.sub.0 is a minimum value in the matrix A when the target is comprised, and no target is present when the mean value is lower than the minimum value; max(max(A)) is a maximum value in the mean value matrix A; S53, acquiring a judging condition T for judging whether the target is present or not according to the formula below on a basis of the updated mean value matrix A, wherein when the judging condition T is equal to 1, the target is present and when the judging condition T is equal to 0, no target is present: T = 1, where max(max(A)) > 0; T = 0, where max(max(A)) = 0; and

S6, generating the GPR image randomly and incrementally and selecting the GPR image with the proper contrast: when the picture contains the target, performing an initial judgment; S61, when Flag is equal to 0, indicating that a random image sample set is generated for a first time in an initial sample set stage, not entering a follow-up selecting judgment, setting Flag=1, then adding 5% of the N pictures additionally as samples of the initial random image data set, a total number of pictures in the sample being updated to N=(1+5%)·N, and returning to the S2; S62, when the Flag is not equal to 0, indicating a non-initial stage, setting a picture association coefficient, and selecting the picture with a maximum association coefficient with the mean value matrix A as the GPR image with the proper contrast; the picture association coefficient R.sub.i is defined as: R.sub.i = [ΣΣ (A(m, n) − μ.sub.A)(A.sub.i(m, n) − μ.sub.A.sub.i)] / sqrt{[ΣΣ (A(m, n) − μ.sub.A).sup.2] · [ΣΣ (A.sub.i(m, n) − μ.sub.A.sub.i).sup.2]}, both sums taken over m = 1 to H.sub.0 and n = 1 to W.sub.0; wherein R.sub.i is the picture association coefficient between the matrix A.sub.i corresponding to an i.sup.th image and the mean value matrix A; m is a coordinate value in a height direction; n is a coordinate value in a width direction; μ.sub.A is a total mean value of the mean value matrix A; and μ.sub.A.sub.i is a total mean value of the matrix A.sub.i; a termination condition of the selection process is as follows: STOP = abs(F1 − F1.sub.Pre) < 0.01 && F1 > 0.8; F1 = 2·Precision·Recall/(Precision + Recall); Precision = TP/(TP + FP); Recall = TP/(TP + FN); wherein F1 is an evaluation index of the deep learning model; F1.sub.Pre is the evaluation index of a previous iteration, and is 0 initially; TP is a correctly recognized true target region; FP is a false positive, wherein the background is mistakenly judged as the target; and FN is a false negative, wherein a true target is mistakenly judged as the background; assigning the calculated evaluation index F1 to the variable F1.sub.Pre when the termination condition is not met, and then returning to the step S61 to enlarge the random image sample set and re-select; outputting the GPR image with the proper contrast by a GPR system when the termination condition is met.

2. The method according to claim 1, wherein acquiring the GPR data comprises: acquiring field data of an asphalt pavement by using the GPR system, determining a damaged region of the asphalt pavement with pumping or whitening in a field data acquisition process and acquiring the GPR data corresponding to the damaged region.

3. The method according to claim 1, wherein during a field data acquisition process, sampling parameter requirements comprise a sampling interval smaller than 15 cm, an antenna frequency greater than 1.6 GHz and a sampling frequency 10-20 times the main frequency of the antenna.

4. The method according to claim 1, wherein the pre-processing adopts a direct current drift correction algorithm, a ground correction algorithm, a background deduction algorithm, a band-pass filtering algorithm and a sliding average algorithm.

5. The method according to claim 1, wherein the set contrast data range is 0.5-1.8.

6. The method according to claim 1, wherein N is equal to 100.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

[0056] FIG. 1 is a flow diagram of a moisture damage defect detection method based on GPR.

[0057] FIG. 2 is a typical image feature of the moisture damage defect data set.

[0058] FIG. 3 is a post-processing flow diagram of a recognition model result.

[0059] FIG. 4 is an index comparison diagram of a mixed model under different resolution data sets.

[0060] FIG. 5 is a moisture damage detection result diagram under the mixed model.

[0061] FIG. 6 is a detection result diagram of an ACF algorithm.

[0062] FIG. 7 is a detection result diagram by using a Cifar 10 model.

[0063] FIG. 8 shows GPR images corresponding to different contrast values.

[0064] FIG. 9 shows the recognition algorithm for the GPR image with the proper contrast.

[0065] FIG. 10 shows the change rule of the association coefficients dependent on the number of samples.

[0066] FIG. 11 shows the change rule of the related indexes of the deep model dependent on the number of samples.

[0067] FIG. 12 is a heatmap result for overall detection of a random data set overlapping on an optimum image.

[0068] FIG. 13 is a heatmap result of the mean value of matrix A overlapping on an optimum result.

[0069] FIG. 14 is a heatmap result for overall detection of the normal pavement and the random data set.

[0070] FIG. 15 is a distribution rule of the number of random samples of a test data set.

[0071] FIG. 16 is a comparison result of the algorithm (IRS) and the random sample (RS) in a moisture damage test set.

[0072] FIG. 17 is a comparison result of the algorithm (IRS) and the random sample (RS) as normal pavement data increases.

[0073] Implications of the marks in the drawings are as follows: 1-1, GPR image corresponding to a proper contrast value, 1-2, GPR image corresponding to a too small contrast value, 1-3, GPR image corresponding to a too large contrast value and 1-4, a true moisture damage defect range in the GPR image corresponding to the proper contrast.

[0074] Further description of specific embodiments of the present invention in detail will be made below in combination with drawings.

DETAILED DESCRIPTION OF THE EMBODIMENTS

[0075] Influence of contrast (plot scale) on GPR spectra: asphalt pavement investigation is carried out using a GPR system to obtain radar data; the radar data is post-processed to increase the difference between the target body and the background, and the processed radar data is then converted into the GPR image. FIG. 8 shows GPR images corresponding to different contrasts, wherein 1-1 is the GPR image corresponding to the proper contrast and 1-4 is the moisture damage defect range detected in the GPR data. 1-2 is the radar image corresponding to a too small contrast: the defect feature in the region corresponding to 1-4 is not highlighted in the 1-2 image and is not easily recognized by a GPR expert or a model. 1-3 is the GPR image corresponding to a too large contrast: besides the region corresponding to 1-4 being displayed as the moisture damage defect, the remaining normal pavement regions are also highlighted and will be misjudged as defect regions. It can be seen from the analysis of FIG. 8 that the contrast (plot scale) value is quite important for recognizing the GPR image target, so an image with a suitable contrast must be selected.

[0076] Specific embodiments of the present invention are given below. It should be noted that the present invention is not limited to the specific embodiments below and equivalent transformations made based on the technical scheme of the application shall fall within the scope of protection of the present invention.

Embodiment 1

[0077] The embodiment provides a method for detecting a moisture damage of an asphalt pavement as shown in the FIG. 1 to FIG. 3. The method includes the following steps:

[0078] S1, a moisture damage image data set is acquired through GPR field survey on asphalt pavements:

[0079] S11, GPR pavement investigation and data acquisition: field GPR data of the asphalt pavement is acquired by using the GPR system, and a damaged region of the pavement with stripping or whitening is determined in the field data acquisition process;

[0080] In the S11, during the field data acquisition process, the sampling parameters require that a sampling interval is smaller than 15 cm, an antenna frequency is greater than 1.6 GHz and a sampling frequency is 10-20 times the main frequency of the antenna;

[0081] the marks made in the data acquisition software emerge above the GPR image in the form of small squares. In FIG. 2, the marks on the radar spectra are "□", and the regions below the marks correspond to the moisture damage defect region. The GPR images corresponding to the marks, taken as true values of the moisture damage, are used for determining the features of the moisture damage defect;

[0082] S12, an initial image set of the moisture damage is acquired: after pre-processing the GPR data corresponding to the damaged region, the contrast of the GPR image is set and the GPR image is intercepted according to a length of 5-6 m to construct the initial image data set of the damage comprising the moisture damage, the bridge joint and the normal pavement, and the features are marked respectively;

[0083] the image resolution of the initial image data set of the damage is 1090*300;

[0084] In the S12, pre-processing adopts a direct current (DC) drift correction algorithm (DC offset correction), a ground correction algorithm (finding the ground layer), a background deduction algorithm (subtracting the mean value of the A-Scans), a band-pass filtering algorithm and a sliding average algorithm.
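As a rough illustration of the pre-processing chain above, a pure-Python sketch of three of the steps follows (band-pass filtering and ground correction are omitted for brevity); the data layout (a list of A-scans, each a list of amplitudes) and all function names are our assumptions, not the patent's implementation:

```python
def dc_drift_correction(trace):
    """DC offset correction: subtract the trace mean from every sample."""
    mean = sum(trace) / len(trace)
    return [v - mean for v in trace]

def background_subtraction(traces):
    """Background deduction: subtract the mean A-scan from every trace."""
    n = len(traces)
    mean_scan = [sum(t[j] for t in traces) / n for j in range(len(traces[0]))]
    return [[v - m for v, m in zip(t, mean_scan)] for t in traces]

def moving_average(trace, window=3):
    """Sliding-average smoothing along one trace (shrinking edge windows)."""
    half = window // 2
    out = []
    for i in range(len(trace)):
        lo, hi = max(0, i - half), min(len(trace), i + half + 1)
        out.append(sum(trace[lo:hi]) / (hi - lo))
    return out

def preprocess(traces):
    """Apply the three sketched steps in order."""
    traces = [dc_drift_correction(t) for t in traces]
    traces = background_subtraction(traces)
    return [moving_average(t) for t in traces]
```

This is only meant to make the order of operations concrete; real GPR processing operates on large 2-D arrays and would normally use a numerical library.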

[0085] In the S12, the contrast of the set GPR image is 1.2-1.6, preferably 1.4 in the embodiment.

[0086] FIG. 2 is the typical image of the moisture damage defect data set, a field picture is on the left side, the corresponding GPR image is on the right side, and in a label, Moisture is the moisture damage and Joint is the bridge joint.

[0087] A process of acquiring the image data set of the moisture damage: when passing through the moisture damage region, the GPR antenna is marked in the data acquisition software, and the main features of the moisture damage are determined through extensive investigation of field examples:

[0088] 1) there are continuous or discontinuous highlighted regions in the asphalt layer;

[0089] 2) the Width/Height ratio of the image region is indefinite and is positively correlated with the severity of the moisture damage.

[0090] The lowermost image in the FIG. 2 is the bridge joint image, which is characterized in that the bridge joint presents a continuous highlighted region from the pavement downwards; this continuous highlighted region differs from the highlighted region of the moisture damage primarily as follows:

[0091] 1) The feature is highlighted from the surface of the pavement downwards and hyperbola features will emerge on two sides;

[0092] 2) the highlighted region is continuous in feature and the depth from the surface to the lower side Depth is greater than or equal to 0.1 m;

[0093] 3) the Width/Height ratio in the image region is smaller than 4 and the area Area is greater than 1000 pixel.sup.2.
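For illustration, the three discrimination rules above can be folded into a small rule-based check; the thresholds (Depth ≥ 0.1 m, Width/Height < 4, Area > 1000 pixel.sup.2) come from the text, while the function name, region fields and the fallback label are our assumptions:

```python
def classify_region(width_px, height_px, depth_m, area_px2, starts_at_surface):
    """Label a highlighted GPR region as bridge joint or moisture damage."""
    is_joint = (
        starts_at_surface              # highlighted from the pavement surface down
        and depth_m >= 0.1             # Depth >= 0.1 m
        and width_px / height_px < 4   # Width/Height < 4
        and area_px2 > 1000            # Area > 1000 pixel^2
    )
    return "Joint" if is_joint else "Moisture"
```

In the actual method these cues are learned by the mixed deep model rather than applied as hard rules; the sketch only makes the stated feature differences concrete.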

[0094] S2, resolutions of the pictures are adjusted:

[0095] It is found by research that images with different resolutions differ in accuracy in the recognition model, and the picture resolutions directly affect the model recognition effect;

[0096] the damaged initial image data set is defined as an ID data set, the ID data set is directly zoomed to 224*224 and the zoomed data set is defined as a BD data set;

[0097] the resolution of a damaged initial image data set is directly zoomed to 224*224 to obtain the BD data set;

[0098] S3, the data set is input into the recognition model:

[0099] the BD data set acquired in the S2 is input into the recognition model, and S4 is executed after operation by the recognition model;

[0100] the picture input resolution size of the recognition model is 224*224 and the picture output resolution size of the recognition model is 224*224;

[0101] the recognition model is a mixed deep learning model, the mixed deep learning model is comprised of two portions: feature extraction adopting ResNet50 and target detection adopting a YOLO V2 frame;

[0102] the ResNet50 and YOLO V2 frames are known deep learning models.

[0103] Feature extraction is comprised of four stages achieving 16-time down-sampling, converting the 224*224 input into 14*14*1024 and thereby providing CNN feature data to the follow-up YOLO detection;

[0104] In the YOLO v2 frame, target detection and candidate frames are provided; the YOLO Class Conv layer is provided with grids Grid=14*14 and Anchor boxes=6. The loss function set by YOLO Transform is MSE (Mean Squared Error).

[0105] The images obtained in the S2 are divided into a training set and a test set for the mixed deep learning model, the distribution proportion being 70% and 30%. A specific model training method includes training the designed mixed deep learning model by using a TL (Transfer Learning) method. The loss function of the model uses the MSE method, and the number of Anchor boxes is acquired by classifying the Height/Width ratios of the moisture damage and the bridge joint in the sample set according to a K-means method.
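The anchor-box step above clusters Height/Width ratios with K-means; a minimal 1-D K-means sketch (generic, not the patent's actual clustering code; assumes at least k input values) could be:

```python
def kmeans_1d(values, k, iters=50):
    """Cluster scalar values (e.g. H/W ratios) into k groups; return sorted centroids."""
    # Spread the initial centroids across the sorted values (assumes len(values) >= k).
    centroids = sorted(values)[:: max(1, len(values) // k)][:k]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            # Assign each value to its nearest centroid.
            idx = min(range(k), key=lambda i: abs(v - centroids[i]))
            clusters[idx].append(v)
        # Recompute centroids; keep the old one if a cluster went empty.
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)
```

The resulting centroids would stand in for the anchor-box aspect ratios; in practice YOLO-style pipelines cluster full (width, height) pairs, often with an IoU distance rather than an absolute difference.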

[0106] The mixed deep learning model uses three indexes: F1, Recall and Precision to measure performance of the model.

[0107] S4, a moisture damage result is output:

[0108] Results given by the recognition model may exhibit an overlapping phenomenon, including:

[0109] 1) a longer moisture damage defect will have a plurality of predicted results which are overlapped;

[0110] 2) for part of the bridge joints, some of the plurality of judged results are misjudged as moisture damages;

[0111] therefore, FIG. 3 is the post-processing flow diagram of the GPR image with coordinate axis, specifically including:

[0112] the output result of the recognition model in the S3 is post-processed, post-processing including the steps:

[0113] S41, the quantity of candidate boxes BBoxes of GPR images in the output result is judged, S42 is executed if the quantity of candidate boxes BBoxes is greater than 1, otherwise, a result with no target is output directly;

[0114] S42, whether the candidate boxes BBoxes are overlapped or not is judged, S43 is executed if the candidate boxes BBoxes are overlapped, otherwise, the result is output directly;

[0115] S43, whether the label names corresponding to the overlapped candidate boxes are identical or not is judged; if yes, the label names corresponding to the merged candidate boxes are invariable; if no, indicating that moisture damage label names and bridge joint label names are comprised simultaneously, the label names are output as the bridge joints;

[0116] S44, the candidate boxes are merged: the minimum value of the intersected candidate boxes in the x and y directions is taken and the maximum value of w and h is taken, coordinates of the merged candidate box being [x.sub.min, y.sub.min, w.sub.max, h.sub.max];

[0117] S45, the result is output, the output picture resolution is adjusted to be equal to a picture resolution of the damage initial image data set in the output result of the recognition model, the output result being the label name with a target and an image of the candidate box BBoxes (x, y, w, h) corresponding to the target;
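The post-processing steps S41-S45 can be sketched as follows; the box tuple format (x, y, w, h), the overlap test and the function names are our assumptions, while the merge rule (minimum of x and y, maximum of w and h) and the label rule (mixed labels collapse to Joint) follow the text:

```python
def overlaps(a, b):
    """Axis-aligned overlap test for two (x, y, w, h) boxes (assumed convention)."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def merge_two(a, b, label_a, label_b):
    """S43/S44: merge two overlapping boxes and resolve the label."""
    x = min(a[0], b[0])                 # minimum of x and y
    y = min(a[1], b[1])
    w = max(a[2], b[2])                 # maximum of w and h
    h = max(a[3], b[3])
    label = label_a if label_a == label_b else "Joint"
    return (x, y, w, h), label

def postprocess(boxes, labels):
    """S41-S45: repeatedly merge any overlapping pair until none remain."""
    boxes, labels = list(boxes), list(labels)
    changed = True
    while changed:
        changed = False
        for i in range(len(boxes)):
            for j in range(i + 1, len(boxes)):
                if overlaps(boxes[i], boxes[j]):
                    boxes[i], labels[i] = merge_two(
                        boxes[i], boxes[j], labels[i], labels[j])
                    del boxes[j], labels[j]
                    changed = True
                    break
            if changed:
                break
    return boxes, labels
```

Non-overlapping boxes pass through unchanged, matching the "output the result directly" branches of S41 and S42.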

[0118] (A) the present invention breaks through the focus on hyperbola-feature targets in existing automatic detection in the GPR field and achieves automatic detection of moisture damage defects of the asphalt pavement with complex target body features, thereby providing a foundation for precise preventive maintenance of the asphalt pavement and automatic positioning of the moisture damage defect.

[0119] (B) recognizing moisture damage defects by means of expertise is time- and labor-consuming and is affected by human factors; the present invention instead considers the influence of zooming of the picture resolutions and detects the moisture damage defects automatically by using the mixed model.

[0120] (C) the training sample sets of the present invention originate from field test data and the samples are widely representative, so that the problem that FDTD-simulation-synthesized data sets used in existing GPR research are not representative is solved, and the limitation that automatic recognition in the GPR field is only focused on automatic detection of hyperbola features is broken through.

[0121] (D) as the method provided by the present invention can achieve automatic detection and accurate positioning of the moisture damage defects, the recognition model can serve automatic detection based on an unmanned inspection vehicle in the later period, thereby achieving periodical detection and inspection of defect regions and further achieving precise curing and intelligent pavement maintenance.

COMPARATIVE EXAMPLE 1

[0122] The comparative example provides the method for detecting the moisture damage of the asphalt pavement. Other steps of the method are same as those in the embodiment 1 and the difference is merely that the S2 is different, and the input images in the S3 are different.

[0123] S2, resolutions of the pictures are adjusted:

[0124] the damaged initial image data set is defined as an ID data set, the ID data set is cut according to a dimension of 224*224 and the cut images including the moisture damages and the bridge joints are defined as an SD data set;

[0125] the damaged initial image data set is cut according to the dimension 224*224 to obtain the SD data set.

COMPARATIVE EXAMPLE 2

[0126] The comparative example provides the method for detecting the moisture damage of the asphalt pavement. Other steps of the method are same as those in the embodiment 1 and the difference is merely that the S2 is different, and the input images in the S3 are different.

[0127] S2, resolutions of the pictures are adjusted:

[0128] the damaged initial image data set is defined as an ID data set, the ID data set is cut according to a dimension of 224*224, the cut spectra including the moisture damages and the bridge joints are defined as an SD data set, and the spectra constructed by mixing the BD data set and the SD data set are defined as an MD data set;

[0129] the resolution of the damaged initial image data set is adjusted to obtain the MD data set.

[0130] Contrastive analysis is performed on the embodiment 1, the comparative example 1 and the comparative example 2: 1431 spectra of the original image data set are constructed, and the BD, SD and MD data sets are constructed according to the respective algorithms. FIG. 4 shows the result of the training model. It can be seen from the figure that the mixed deep models trained on the data sets all have good results on the test set, showing that the mixed deep model is feasible. The model trained on the BD data set is optimum, with recognition precision F1=91.97%, Recall=94.53% and Precision=91.00%. Thus, the BD model is selected as the training model, and preferably, the resolution zooming method zooms the original spectra directly in an equal proportion.

COMPARATIVE EXAMPLE 3

[0131] The comparative example provides a method for detecting the moisture damage of the asphalt pavement. The method detects the moisture damage of the asphalt pavement by using an ACF (Aggregate Channel Features) algorithm.

COMPARATIVE EXAMPLE 4

[0132] The comparative example provides a method for detecting the moisture damage of the asphalt pavement. The method detects the moisture damage of the asphalt pavement by using a Cifar10 model.

[0133] Contrastive analysis is performed on the embodiment 1, the comparative example 3 and the comparative example 4. FIG. 5 to FIG. 7 are comparative results between the deep model, ACF (Aggregate Channel Features) and Cifar10, where Ground Truth is the true value of the moisture damage. It is found by comparison that the two comparative methods have redundant detection regions or many missed detection regions, and the comparative results further verify the accuracy of the method.

Embodiment 2

[0134] The embodiment provides a method for adaptively selecting a ground penetrating radar image for detecting a moisture damage. As shown in the FIG. 9, the method adaptively selects the GPR image according to a contrast of the GPR image, the method including the following steps:

[0135] S1, pre-processed GPR data is read:

[0136] GPR images with different contrasts are generated randomly in a set contrast data range after pre-processing GPR data to construct an initial random image data set, the initial random image data set including N pictures;

[0137] The method for acquiring the GPR data includes: acquiring field data of the asphalt pavement by using the GPR system, determining a damaged region of the pavement with stripping or whitening in the field data acquisition process and acquiring the GPR data corresponding to the damaged region.

[0138] In a field data acquisition process, the sampling parameters require that a sampling interval is smaller than 15 cm, an antenna frequency is greater than 1.6 GHz and a sampling frequency is 10-20 times the main frequency of the antenna.

[0139] Pre-processing is performed in a pre-processing course by adopting a direct current drift correction algorithm, a ground correction algorithm, a background deduction algorithm, a band-pass filtering algorithm and a moving average algorithm.

[0140] The set contrast value range is 0.5-1.8.

[0141] N is equal to 100.

[0142] S2, resolutions of the pictures are adjusted:

[0143] It is found by research that images with different resolutions differ in accuracy in the recognition model, and the picture resolutions directly affect the model recognition effect;

[0144] the initial random image data set is defined as an RID data set, the RID data set is zoomed to 224*224 and the zoomed data set is defined as an RBD data set;

[0145] the resolution of the moisture damage initial image data set is zoomed directly to 224*224 to obtain the RBD data set;

[0146] S3, the data set is input into a recognition model:

[0147] the RBD data set acquired in the S2 is input into the recognition model, and S4 is executed after operation by the recognition model;

[0148] the picture input resolution size of the recognition model is 224*224 and the picture output resolution size of the recognition model is 224*224;

[0149] the recognition model is a mixed deep learning model, the mixed deep learning model is comprised of two portions, feature extraction adopts ResNet50 and target detection adopts a YOLO V2 frame;

[0150] the ResNet50 and YOLO V2 frames are known deep learning models.

[0151] Feature extraction is comprised of four stages achieving 16-time down-sampling, converting the 224*224 input into 14*14*1024 and thereby providing CNN feature data to the follow-up YOLO detection;

[0152] In the YOLO v2 frame, target detection and candidate frames are provided; the YOLO Class Conv layer is provided with grids Grid=14*14 and Anchor boxes=6. The loss function set by YOLO Transform is MSE.

[0153] The mixed deep learning model is divided into a training set and a test set by means of the images obtained in the S2, the distribution proportion being 70% and 30%. A specific model training method includes training the designed mixed deep learning model by using a TL method. Loss function of the model uses a MSE method, and the number of Anchor boxes is acquired by classifying Height/Width ratios of the moisture damage and the bridge joint of the sample set according to a K-means method.

[0154] The mixed deep learning model uses three indexes: F1, Recall and Precision to measure performance of the model.
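The three indexes mentioned above can be written out explicitly; the function name is ours, and TP, FP and FN are the counts of true-positive, false-positive and false-negative detections:

```python
def detection_metrics(tp, fp, fn):
    """Precision, Recall and F1 from detection counts."""
    precision = tp / (tp + fp)             # Precision = TP / (TP + FP)
    recall = tp / (tp + fn)                # Recall = TP / (TP + FN)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1
```

These are the same quantities that drive the termination condition of the selection process (F1 compared against its previous value and against 0.8).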

[0155] S4, a moisture damage result is output:

[0156] the output result of the recognition model in the S3 is post-processed, post-processing including the steps:

[0157] S41, the quantity of candidate boxes BBoxes of images in the output result is judged, S42 is executed if the quantity of candidate boxes BBoxes is greater than 1, otherwise, the result is output directly;

[0158] S42, whether the candidate boxes BBoxes are overlapped or not is judged, S43 is executed if the candidate boxes BBoxes are overlapped, otherwise, the result is output directly;

[0159] S43, whether the label names corresponding to the overlapped candidate boxes are identical or not is judged; if yes, the label names corresponding to the merged candidate boxes are invariable; if no, indicating that moisture damage label names and bridge joint label names are comprised simultaneously, the label names are output as bridge joint, Joint;

[0160] S44, the candidate boxes are merged, the minimum value of intersected candidate boxes in x and y directions is taken, the maximum value of w and h is taken, and coordinates of the merged candidate boxes being [x.sub.min, y.sub.min, w.sub.max, h.sub.max];

[0161] S45, the result is output, the output picture resolution is adjusted to be equal to a picture resolution of the damage initial image data set in the output result of the recognition model, the output result being the label name with a target and an image of the candidate box BBoxes (x, y, w, h) corresponding to the target;

[0162] S5, whether a detection target is present or not is judged with an initial random image data set:

[0163] S51, the output result in the S4 is converted into a matrix A.sub.i corresponding to pixel points in the picture, A.sub.i being defined as:

[00006] A.sub.i[m, n] = 1, when x.sub.i ≤ m ≤ x.sub.i + w.sub.i and y.sub.i ≤ n ≤ y.sub.i + h.sub.i; 0, otherwise,

where 1≤m≤H.sub.0, 1≤n≤W.sub.0

[0164] wherein H.sub.0 is a picture height of the image output by the recognition model and W.sub.0 is a picture width of the image output by the recognition model;

[0165] the matrixes A.sub.i corresponding to the N pictures in the RID data set are summated and the mean value thereof is calculated to acquire a mean value matrix A, A being defined as

[00007] A = (1/N) · Σ.sub.i=1.sup.N A.sub.i;
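Step S51 and the mean value matrix A can be sketched in pure Python, with list-of-lists matrices standing in for A.sub.i and A and 1-based m, n as in the definition above; the helper names are our assumptions:

```python
def indicator_matrix(box, H0, W0):
    """A_i[m][n] = 1 where x_i <= m <= x_i+w_i and y_i <= n <= y_i+h_i, else 0."""
    x, y, w, h = box
    return [[1 if (x <= m <= x + w and y <= n <= y + h) else 0
             for n in range(1, W0 + 1)]       # n: width coordinate, 1..W0
            for m in range(1, H0 + 1)]        # m: height coordinate, 1..H0

def mean_matrix(matrices):
    """A = (1/N) * sum of the N indicator matrices A_i."""
    N = len(matrices)
    H0, W0 = len(matrices[0]), len(matrices[0][0])
    return [[sum(M[m][n] for M in matrices) / N for n in range(W0)]
            for m in range(H0)]
```

Entries of A near 1 then mark pixels that most of the N contrast-randomized images agree belong to a detected target.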

[0166] S52, as the range of the contrast is optimized, the target is greatly different from the background. If the tested GPR data contains the target, the output results of most GPR images will include the target region, and the mean value matrix A has large values in that region. If the tested GPR data is free of the target, only a few images corresponding to improper contrasts contain targets, and the mean value matrix A has small values in that region.

[0167] k.sub.1=0.8 and θ.sub.0=0.5 are set, and the mean value matrix A is updated according to a formula below to acquire an updated mean value matrix A,


A(A < max(k.sub.1 · max(max(A)), θ.sub.0)) = 0

[0168] wherein

[0169] k.sub.1 is a target correlation coefficient for adjusting the maximum value of the mean value so as to judge different targets;

[0170] θ.sub.0 is the minimum value that appears in the mean value matrix A when a target is present; if the maximum of A is lower than this value, there is no target;

[0171] max(max(A)) is the maximum value in the mean value matrix A;

[0172] S53, a judging condition T for judging whether the target is present or not is acquired according to the formula below on the basis of the updated mean value matrix A; T equal to 1 indicates that a target is present and T equal to 0 indicates that no target is present;

[00008]

$$T=\begin{cases}1, & \max(\max(A))>0\\ 0, & \max(\max(A))=0\end{cases};$$
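Steps S52 and S53 — zeroing the mean value matrix below max(k.sub.1·max(max(A)), θ.sub.0) and then testing whether any entry survives — can be sketched as follows (illustrative matrices, not patent data):

```python
import numpy as np

def judge_target(A, k1=0.8, theta0=0.5):
    """Update the mean value matrix per S52 and output T per S53 / [00008].
    Entries below max(k1 * max(max(A)), theta0) are set to 0; T = 1 when a
    non-zero entry survives (target present), otherwise T = 0. Sketch only."""
    A = A.copy()
    A[A < max(k1 * A.max(), theta0)] = 0.0
    return A, (1 if A.max() > 0 else 0)

# A region supported by most random-contrast images survives the update ...
_, T = judge_target(np.array([[0.9, 0.3], [0.1, 0.0]]))
print(T)  # 1: target present
# ... while a weak, inconsistent response (cf. the 0.28 mean of FIG. 14) does not.
_, T = judge_target(np.array([[0.28, 0.1], [0.0, 0.2]]))
print(T)  # 0: no target
```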

[0173] S6, generating the GPR images randomly with an incremental method and selecting the image with a proper contrast:

[0174] when the picture contains the target, performing initial judgment;

[0175] S61, if Flag is equal to 0, indicating that the random image sample set is generated for the first time, i.e., the initial sample set stage, skipping the follow-up selecting judgment, setting Flag=1, then adding an additional 5% of N pictures to the sample of the random image data set so that the total number of the pictures in the sample is N=(1+5%)N, and returning to the S2;

[0176] S62, if Flag is not equal to 0, indicating a non-initial stage, setting a picture association coefficient, and selecting the picture whose matrix has the maximum association coefficient with the mean value matrix A as the image with the proper contrast;

[0177] the association coefficient R.sub.i is defined as

[00009]

$$R_i=\frac{\displaystyle\sum_{m=1}^{H_0}\sum_{n=1}^{W_0}\left(A(m,n)-\mu_A\right)\left(A_i(m,n)-\mu_{A_i}\right)}{\sqrt{\displaystyle\sum_{m=1}^{H_0}\sum_{n=1}^{W_0}\left(A(m,n)-\mu_A\right)^2}\sqrt{\displaystyle\sum_{m=1}^{H_0}\sum_{n=1}^{W_0}\left(A_i(m,n)-\mu_{A_i}\right)^2}};$$

[0178] wherein R.sub.i is an association coefficient between the matrix A.sub.i corresponding to the i.sup.th image and the mean value matrix A; m is a coordinate value in a height direction; n is a coordinate value in a width direction; μ.sub.A is a total mean value of the mean value matrix A; and μ.sub.A.sub.i is a total mean value of the matrix A.sub.i;
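Equation [00009] is a Pearson correlation computed over all pixel positions; a compact NumPy sketch (illustrative matrices only):

```python
import numpy as np

def association_coefficient(A, A_i):
    """Association coefficient R_i of equation [00009] between the mean value
    matrix A and the matrix A_i of the i-th image: a Pearson correlation
    over all (m, n) positions. Sketch, not the claimed implementation."""
    a = A - A.mean()
    b = A_i - A_i.mean()
    return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()))

# A picture whose detection mask matches the mean matrix exactly has R_i = 1.
A = np.array([[0.0, 1.0], [1.0, 0.0]])
print(association_coefficient(A, A))      # 1.0
print(association_coefficient(A, 1 - A))  # -1.0
```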

[0179] a termination condition of the selection process is as follows:

[00010]

$$\mathrm{STOP}=\operatorname{abs}(F1-F1_{Pre})<0.01\ \&\&\ F1>0.8;$$

$$F1=\frac{2\cdot\mathrm{Precision}\cdot\mathrm{Recall}}{\mathrm{Precision}+\mathrm{Recall}};\qquad \mathrm{Precision}=\frac{TP}{TP+FP};\qquad \mathrm{Recall}=\frac{TP}{TP+FN};$$

[0180] wherein F1 is an evaluation index of deep learning; F1.sub.Pre is the evaluation index of the previous deep learning round, and is 0 initially; TP is a correctly recognized true target region; FP is a false positive, i.e., the background is mistakenly taken as the target; and FN is a false negative, i.e., a true target is mistakenly judged as a negative value or the background; the index F1 calculated this time is assigned to the variable F1.sub.Pre when the termination condition is not met, and the method then returns to the S61 to increase the sample set and re-select;

[0181] the image with the proper contrast is output by the system when the termination condition is met.
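The incremental selection loop of S6 — 5% sample growth in S61 and the F1-based termination of [00010] — can be outlined as follows. Here `evaluate` stands in for running S2-S5 on N random-contrast pictures; it and all other names are hypothetical, not taken from the patent:

```python
def select_sample_size(evaluate, n0=20, growth=0.05, max_rounds=100):
    """Outline of S6: grow the random image sample set by 5% per round (S61)
    until the F1 index stabilizes (|F1 - F1_Pre| < 0.01) above 0.8 ([00010]).
    Returns the sample size at which the termination condition is met; in the
    full method, S62 would then pick the picture with the maximum R_i."""
    f1_pre, N, flag = 0.0, n0, 0
    for _ in range(max_rounds):
        if flag == 0:                       # S61: initial sample set stage
            flag = 1
            N = round(N * (1 + growth))     # N = (1 + 5%) * N
            continue
        precision, recall = evaluate(N)     # hypothetical: runs S2-S5
        f1 = 2 * precision * recall / (precision + recall)
        if abs(f1 - f1_pre) < 0.01 and f1 > 0.8:
            return N                        # termination condition met
        f1_pre, N = f1, round(N * (1 + growth))
    return N

# Toy evaluator whose indexes have already stabilized at 0.9.
print(select_sample_size(lambda N: (0.9, 0.9)))  # 22
```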

[0182] Effect Test Comparison:

[0183] By adopting the moisture damage image data set constructed manually, the deep learning model is trained by using the YOLO detection frame and transfer learning, and recognition is performed in combination with the algorithm in the FIG. 9. The GPR data is consistent with the data in the FIG. 8. In order to observe how the indexes of the method change with the random sample number, the initial sample set in the FIG. 9 is N=20, the value of the contrast is 0.6-1.8, the parameters used to update the matrix A are k.sub.1=0.8 and θ.sub.0=0.5, and N is equal to 100 in actual application.

[0184] FIG. 10 shows how the correlation coefficient changes with the sampled sample number, wherein Referenced F1 is the F1 index, and the other two curves are the correlation coefficients between the preferred image and the mean value A and the true value (Ground Truth), respectively. It can be seen from the figure that with the increase of the sample number, the correlation coefficients R increase and then stabilize at a fixed value, no longer growing with the sample number, showing that the algorithm has found the preferred GPR image.

[0185] FIG. 11 shows how the indexes related to the deep model change with the sampled sample number; F1, Precision and Recall are the evaluation indexes of the deep model. Similarly, after the preferred result is found, the indexes are stable, showing that a proper image has been found.

[0186] FIG. 12 is a heatmap for overall detection of the random data set, with 160 random sample sets overlapped on the preferred image. In the region corresponding to 103.5 m, all 160 pictures yield moisture damage (target) results at that position, consistent with the true value region (1-4 in the FIG. 8). The three indexes are all greater than 0.93, showing that the preferred image is very close to the true value, which verifies the correctness of the algorithm.

[0187] FIG. 13 is a heatmap of the mean value A overlapped on the preferred result (i.e., all detection results in the random image data set are accumulated and the magnitude of the accumulated results is reflected by color). In the updating process, the mean value matrix A removes all results below 0.8 times its maximum value, and the remaining results are average values. Comparing the detection result with the Ground Truth shows that the goodness of fit is very high, further verifying the correctness of the algorithm.

[0188] FIG. 14 is a heatmap of overall detection of the normal pavement on the random data set. As the initialized sample number is 100 and the maximum value in the figure is only 28 (a mean value of 0.28 after calculation), which is smaller than the threshold value θ.sub.0=0.5, the test data is classified as normal pavement, which is consistent with the actual condition.

[0189] FIG. 15 is the distribution of the random sample number over the test data set. 31 samples (11 normal pavements and 20 moisture damage samples) are tested by the algorithm, showing that the algorithm classifies the 11 normal pavements effectively with 100 samples each, while the random samples needed for the moisture damage regions are mainly concentrated between 200 and 300. As 95% of GPR data for pavement investigation are normal pavements, the algorithm can save computing cost effectively.

[0190] In order to further describe the effectiveness of the method (an incremental sampling method, marked as IRS) and compare it with the random selection method (RS), the FIG. 16 shows a comparison result for the data sets (20) with moisture damages and the FIG. 17 adds the 11 normal pavement results. It can be seen from the results that the IRS method recognizes the defects effectively and with higher precision.

[0191] It is shown by the experiment that combining the incremental sampling method with the deep model allows the radar spectra with proper contrasts to be selected effectively from the original GPR data, thereby providing an effective method for automatic application of GPR.

[0192] Although the method is verified on recognition of the moisture damage defects, the method is not limited to this case. Recognition of targets in other radar spectra by the method shall also fall within the scope of protection of the present invention.

Embodiment 3

[0193] The embodiment provides a method for detecting the moisture damage of the asphalt pavement based on adaptive selection of gray levels of images. As shown in the FIG. 1 to FIG. 17, the method is substantially the same as the method for detecting the moisture damage of the asphalt pavement of the embodiment 1. The only difference is that, in the S12, setting the contrast of the GPR image and intercepting the GPR image at a length of 5-6 m is replaced with selecting the GPR image with a proper contrast and intercepting the GPR image at a length of 5-6 m.

[0194] The selection method for the GPR image with the proper contrast is the adaptive selection method for the GPR image;

[0195] the adaptive selection method for the ground penetrating radar image is the same as the method for detecting the moisture damage of the asphalt pavement in the embodiment 2.

[0196] The recognition models in the embodiment 1 and the embodiment 2 are the same, and the post-processing steps in the embodiment 1 and the embodiment 2 are the same.

[0197] The method of the embodiment can optimize the image for each piece of GPR data and input the optimized image into the deep model to obtain the detection result. The method solves the problems of image optimization and image recognition of the moisture damage, thereby truly achieving automatic and intelligent moisture damage defect detection.