METHOD FOR DETECTING FIELD NAVIGATION LINE AFTER RIDGE SEALING OF CROPS
20230005260 · 2023-01-05
Assignee
Inventors
- Xiuqin RAO (Zhejiang, CN)
- Yangyang LIN (Zhejiang, CN)
- Yanning ZHANG (Zhejiang, CN)
- Xiaomin ZHANG (Zhejiang, CN)
- Yibin YING (Zhejiang, CN)
- Haitao YANG (Zhejiang, CN)
- Haiyi JIANG (Zhejiang, CN)
- Yihang ZHU (Zhejiang, CN)
CPC classification
G06V10/23
PHYSICS
G06V10/26
PHYSICS
International classification
G06V10/22
PHYSICS
Abstract
A method for detecting a field navigation line after ridge sealing of crops includes the following steps. A field crop image is acquired. Image color space transformation, image binarization, longitudinal integration, neighborhood setting, and region integration calculation are sequentially performed on the field crop image to obtain a crop row image. Detection of an initial middle ridge, a left ridge, and a right ridge is performed on the crop row image to obtain center lines of the initial middle ridge, the left ridge, and the right ridge. A center line of the left (right) crop row is established by using the area (area 1) between the center lines of the left (right) ridge and the initial middle ridge. A center line model of the middle ridge, namely the navigation line of a field operation machine, is established by using the area (area 0) between the center lines of the left and right crop rows.
Claims
1. A method for detecting a field navigation line after a ridge sealing of crops, characterized in that the method comprises steps as follows: 1) performing crop image acquisition, wherein a camera is used to acquire a field crop image, recorded as an original image S1; 2) performing image color space transformation, wherein the original image S1 is converted to HSI color space to obtain an HSI image S2; 3) performing image binarization to obtain a binary image S3; 4) processing longitudinal integration of the binary image S3 to obtain a longitudinal integral image S4; 5) performing neighborhood setting, wherein a neighborhood of a current pixel is set, the neighborhood is 1/48 of an image width of the original image S1, a 3-row two-dimensional matrix R is used to represent the neighborhood, a column in the two-dimensional matrix R represents a column in the neighborhood, each element of a first row in the two-dimensional matrix R represents a column offset of each column in the neighborhood relative to the current pixel, each element of a second row represents an abscissa offset of a start row of each column in the neighborhood, and each element of a third row represents an abscissa offset of an end row of each column in the neighborhood; 6) performing region integration calculation, wherein a blank image having a size the same as that of the longitudinal integral image S4 is constructed as a region integration image S5, and each pixel is traversed on the longitudinal integral image S4, which is processed in the way as follows: when traversing, current pixel coordinates are marked as (x, y), an accumulator C is set, and an initial value of the accumulator C is set to be 0; when traversing each column of the two-dimensional matrix R and traversing the two-dimensional matrix R, elements of the first row to the third row of a current j-th column are R.sub.1j, R.sub.2j, and R.sub.3j, a difference is obtained by subtracting a pixel value of a pixel with coordinates 
(x+R.sub.2j−1, y+R.sub.1j) from a pixel value of a pixel with coordinates (x+R.sub.3j, y+R.sub.1j) on the longitudinal integral image S4, the difference is accumulated into the accumulator C, after the traversal of the two-dimensional matrix R is completed, a value in the accumulator C is taken as a regional integral value M of the current pixel, the regional integral value M is assigned to the pixel having coordinates the same as those of the current pixel in the region integration image S5, and after the traversal of each pixel of the longitudinal integral image S4 is completed, the region integration image S5 is obtained; 7) performing detection of crop rows, wherein each row is traversed in the region integration image S5, an average value of the regional integral values M of all pixels in each row is calculated, each pixel whose regional integral value M is greater than the average value is set to 1, the remaining pixels are set to 0, and a crop row image S6 is obtained; 8) performing detections of an initial middle ridge, a left ridge, and a right ridge; 9) performing detections of a left crop row and a right crop row; 10) performing detection of a middle ridge.
2. The method for detecting the field navigation line after the ridge sealing of the crops according to claim 1, wherein the step 3) specifically comprises: setting a pixel value of the pixel whose hue component value H is between 0.2 and 0.583 in the HSI image S2 to 1, and setting a pixel value of remaining pixels to 0 to obtain the binary image S3.
3. The method for detecting the field navigation line after the ridge sealing of the crops according to claim 1, wherein the step 4) specifically comprises: duplicating the binary image S3 as the longitudinal integral image S4; traversing each column on the longitudinal integral image S4; in each column, traversing each pixel downward from the pixel of the second row; when traversing, adding the pixel value of the corresponding pixel of the previous row to the pixel value of the current pixel; and replacing the pixel value of the current pixel with the result, so as to obtain the longitudinal integral image S4.
4. The method for detecting the field navigation line after the ridge sealing of the crops according to claim 1, wherein the step 8) specifically comprises: 8.1) dividing the crop row image S6 into N crop row sub-images S7 having a width the same as a width of the crop row image S6 and a height 1/N of a height of the crop row image S6; 8.2) taking an i-th crop row sub-image S7, and calculating a longitudinal projection vector S8 of the i-th crop row sub-image S7; 8.3) performing detection of the left boundary of the initial middle ridge, wherein an initial middle ridge start detection template ML0 is constructed, the initial middle ridge start detection template ML0 is a vector whose length is ⅙ of a width of the original image S1, a first half is 1, a second half is −1, the longitudinal projection vector S8 is convolved with the initial middle ridge start detection template ML0, and a column number of a position of a point with a maximum convolution value is taken as an initial middle ridge left boundary p0L0.sub.i of the i-th crop row sub-image S7; 8.4) performing detection of a right boundary of the initial middle ridge, wherein an initial middle ridge termination detection template MR0 is constructed, the initial middle ridge termination detection template MR0 is a vector whose length is ⅙ of the width of the original image S1, a first half is −1, a second half is 1, the longitudinal projection vector S8 is convolved with the initial middle ridge termination detection template MR0, and a column number of a position of a point with a maximum convolution value is taken as an initial middle ridge right boundary p0R0.sub.i of the i-th crop row sub-image S7; 8.5) calculating an initial middle ridge center p0M0.sub.i of the i-th crop row sub-image S7 by a formula as follows: p0M0.sub.i=(p0L0.sub.i+p0R0.sub.i)/2; 8.6) performing detection of a left boundary of an initial left row, wherein an initial left row start detection template MR1 is constructed, the initial 
left row start detection template MR1 is a vector whose length is ½ of a length of the initial middle ridge termination detection template MR0, a first half is −1, a second half is 1, the initial left row start detection template MR1 is used to be convolved with data of the longitudinal projection vector S8 on the left side of the initial middle ridge left boundary p0L0.sub.i, and a column number of a position of a point with the maximum convolution value is taken as an initial left row left boundary CL0.sub.i of the i-th crop row sub-image S7; 8.7) performing detection of a right boundary of an initial right row, wherein an initial right row termination detection template ML1 is constructed, wherein the initial right row termination detection template ML1 is a vector whose length is ½ of a length of the initial middle ridge start detection template ML0, a first half is 1, a second half is −1, the initial right row termination detection template ML1 is used to be convolved with data of the longitudinal projection vector S8 on the right side of the initial middle ridge right boundary p0R0.sub.i, and a column number of a position of a point with the maximum convolution value is taken as an initial right row right boundary CR0.sub.i of the i-th crop row sub-image S7; 8.8) performing estimation of a center point of the left ridge, wherein an initial left row horizontal center column CLM0.sub.i of the i-th crop row sub-image S7 is calculated by the following formula: CLM0.sub.i=(CL0.sub.i+p0L0.sub.i)/2, and then a column pLM0.sub.i where the center point of the left ridge of the i-th crop row sub-image S7 is located is calculated by the following formula: pLM0.sub.i=2×CLM0.sub.i−p0M0.sub.i; 8.9) performing estimation of a center point of the right ridge, wherein an initial right row horizontal center column CRM0.sub.i of the i-th crop row sub-image S7 is calculated by the following formula: CRM0.sub.i=(CR0.sub.i+p0R0.sub.i)/2, and then a column pRM0.sub.i where the 
center point of the right ridge of the i-th crop row sub-image S7 is located is calculated by the following formula: pRM0.sub.i=2×CRM0.sub.i−p0M0.sub.i; 8.10) performing calculation of an ordinate of the crop row sub-image S7, wherein an ordinate of a position of a center point of the crop row sub-image S7 on the crop row image S6 is taken as an ordinate S7y.sub.i of the crop row sub-image S7; 8.11) determining center lines of the initial middle ridge, the left ridge, and the right ridge, wherein step 8.2) to step 8.11) are repeated, the N crop row sub-images S7 of the crop row image S6 are sequentially traversed, each crop row sub-image S7 obtains an initial middle ridge center p0M0.sub.i, an initial left row horizontal center column CLM0.sub.i, an initial right row horizontal center column CRM0.sub.i, and the ordinate S7y.sub.i of the crop row sub-image S7, and therefore results of all N crop row sub-images S7 are composed to obtain an initial middle ridge center set p0M0, an initial left row horizontal center column set CLM0, an initial right row horizontal center column set CRM0, and an ordinate set S7y of the crop row sub-image S7; wherein the ordinate S7y.sub.i of the crop row sub-image S7 serves as an independent variable, the initial middle ridge center p0M0.sub.i, the initial left row horizontal center column CLM0.sub.i, and the initial right row horizontal center column CRM0.sub.i serve as dependent variables, respectively, and univariate regression models pM, pL, and pR are constructed between the initial middle ridge center p0M0.sub.i and the ordinate S7y.sub.i of the crop row sub-image S7, between the initial left row horizontal center column CLM0.sub.i and the ordinate S7y.sub.i of the crop row sub-image S7, and between the initial right row horizontal center column CRM0.sub.i and the ordinate S7y.sub.i of the crop row sub-image S7, respectively.
5. The method for detecting the field navigation line after the ridge sealing of the crops according to claim 1, wherein the step 9) specifically comprises: 9.1) constructing a blank right crop row point set SCR and a blank left crop row point set SCL; 9.2) taking a k-th row on the crop row image S6 as a row image S9, wherein an ordinate of the row image S9 as an independent variable is substituted into univariate regression models pM, pL, and pR to obtain a crop middle ridge center column p0M1.sub.k, a crop left ridge horizontal center column CLM1.sub.k, and a crop right ridge horizontal center column CRM1.sub.k on the current row image S9; 9.3) on the current row image S9, adding the coordinates of a pixel with a pixel value of 1 between the crop middle ridge center column p0M1.sub.k and the crop left ridge horizontal center column CLM1.sub.k corresponding to the crop row image S6 to the left crop row point set SCL; 9.4) on the current row image S9, adding the coordinates of a pixel with a pixel value of 1 between the crop middle ridge center column p0M1.sub.k and the crop right ridge horizontal center column CRM1.sub.k corresponding to the crop row image S6 to the right crop row point set SCR; 9.5) repeating step 9.2) to step 9.4), wherein each row of the crop row image S6 is traversed to obtain the complete left crop row point set SCL and the complete right crop row point set SCR; 9.6) wherein ordinates of the pixels in the left crop row point set SCL serve as independent variables, abscissas of the pixels in the left crop row point set SCL serve as dependent variables, a univariate regression model for the left crop row point set SCL is constructed, and a left crop row centerline model CL is obtained; 9.7) wherein ordinates of the pixels in the right crop row point set SCR serve as independent variables, abscissas of the pixels in the right crop row point set SCR serve as dependent variables, a univariate regression model for the right crop row point set SCR is
constructed, and a right crop row centerline model CR is obtained.
6. The method for detecting the field navigation line after the ridge sealing of the crops according to claim 1, wherein the step 10) specifically comprises: 10.1) constructing a blank middle ridge point set SPath; 10.2) taking a q-th row on the crop row image S6 as a row image S10, wherein an ordinate of the row image S10 as an independent variable is substituted into a left crop row centerline model CL and a right crop row centerline model CR to obtain a left row center point CL1.sub.q and a right row center point CR1.sub.q on the current row image S10; 10.3) on the current row image S10, adding the coordinates of a pixel with a pixel value of 0 between the left row center point CL1.sub.q and the right row center point CR1.sub.q corresponding to the crop row image S6 into the middle ridge point set SPath; 10.4) repeating step 10.2) to step 10.3), wherein each row image S10 of the crop row image S6 is traversed to obtain the complete middle ridge point set SPath; 10.5) wherein the ordinates of the pixels in the middle ridge point set SPath serve as independent variables, abscissas of the pixels in the middle ridge point set SPath serve as dependent variables, a univariate regression model is constructed for the middle ridge point set SPath, a middle ridge centerline model pPath is obtained, and the straight line where the middle ridge centerline model pPath is located is the navigation line for the field machinery.
Description
BRIEF DESCRIPTION OF THE DRAWING
DESCRIPTION OF THE EMBODIMENTS
[0059] The invention is further illustrated with reference to the accompanying drawings and embodiments in the subsequent paragraphs.
[0060] The invention includes steps as follows.
[0061] 1) Crop image acquisition: a camera is used to acquire a field crop image, recorded as an original image S1, as shown in the corresponding drawing.
[0062] When capturing images, the optical axis of the camera is directed along the field ridges.
[0063] 2) Image color space transformation: the original image S1 is converted to HSI color space to obtain an HSI image S2, as shown in the corresponding drawing.
[0064] 3) Image binarization: the pixel value of each pixel whose hue component value H is between 0.2 and 0.583 in the HSI image S2 is set to 1, and the pixel values of the remaining pixels are set to 0 to obtain a binary image S3, as shown in the corresponding drawing.
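As an illustration only (not part of the described method's text), the hue thresholding of step 3) can be sketched in Python with NumPy. The function names and the standard RGB-to-HSI hue formula are assumptions; the thresholds 0.2 and 0.583 come from the description:

```python
import numpy as np

def rgb_to_hue(img):
    # Hue component of the HSI model, normalized to [0, 1].
    # img: float array in [0, 1] with shape (H, W, 3).
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-12
    theta = np.arccos(np.clip(num / den, -1.0, 1.0))  # radians in [0, pi]
    # Hue is ill-defined for achromatic pixels; a real pipeline
    # would typically also gate on the saturation component.
    return np.where(b <= g, theta, 2 * np.pi - theta) / (2 * np.pi)

def binarize(img, lo=0.2, hi=0.583):
    # Pixels whose hue falls in the green band [0.2, 0.583] become 1.
    hue = rgb_to_hue(img)
    return ((hue >= lo) & (hue <= hi)).astype(np.uint8)
```

A pure green pixel has hue 1/3 and is marked as crop; a pure red pixel has hue 0 and is marked as background.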
[0065] 4) Longitudinal integration:
[0066] The binary image S3 is duplicated as a longitudinal integral image S4, and each column on the longitudinal integral image S4 is traversed. In each column, each pixel is traversed downward from the pixel of the second row. When traversing, the pixel value of the corresponding pixel of the previous row is added to the pixel value of the current pixel, and the pixel value of the current pixel is replaced with the result, so as to obtain the longitudinal integral image S4, as shown in the corresponding drawing.
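The row-by-row accumulation of step 4) is equivalent to a column-wise cumulative sum. A minimal sketch, assuming NumPy arrays and a hypothetical function name:

```python
import numpy as np

def longitudinal_integral(S3):
    # Step 4): column-wise running sum, so each pixel holds the sum of
    # all binary pixels at or above it in the same column.
    return np.cumsum(S3.astype(np.int64), axis=0)
```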
[0067] 5) Neighborhood setting:
[0068] The neighborhood of the current pixel is set, the neighborhood is 1/48 of the image width of the original image S1, and the neighborhood is represented by a 3-row two-dimensional matrix R, as shown in the corresponding drawing.
[0069] 6) Region integration calculation:
[0070] A blank image having a size the same as that of the longitudinal integral image S4 is constructed as a region integration image S5, and each pixel on the longitudinal integral image S4 is traversed and processed as follows. When traversing, the current pixel coordinates are marked as (x, y), an accumulator C is set, and the initial value of the accumulator C is set to 0. When traversing the two-dimensional matrix R column by column, the elements of the first row to the third row of the current j-th column are R.sub.1j, R.sub.2j, and R.sub.3j; the difference is obtained by subtracting the pixel value of the pixel with coordinates (x+R.sub.2j−1, y+R.sub.1j) from the pixel value of the pixel with coordinates (x+R.sub.3j, y+R.sub.1j) on the longitudinal integral image S4, and the difference is accumulated into the accumulator C. After the traversal of the two-dimensional matrix R is completed, the value in the accumulator C is taken as the regional integral value M of the current pixel, and the regional integral value M is assigned to the pixel having the same coordinates as the current pixel in the region integration image S5.
[0071] After the traversal of each pixel of the longitudinal integral image S4 is completed, the region integration image S5 is obtained, as shown in the corresponding drawing.
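An illustrative Python sketch of step 6), under the interpretation that each neighborhood column contributes its column sum between its start and end rows, recovered from the longitudinal integral by one subtraction (the function name and the out-of-bounds handling are assumptions):

```python
import numpy as np

def region_integral(S4, R):
    # S4: longitudinal integral image; R: 3 x m neighborhood matrix whose
    # rows give, for each neighborhood column, the column offset, the
    # start-row offset, and the end-row offset relative to the current pixel.
    # The sum of binary pixels over rows [x+R2j, x+R3j] of one column is
    # recovered from the longitudinal integral as S4[x+R3j] - S4[x+R2j-1]
    # (the subtrahend is 0 when the start row is the top of the image).
    h, w = S4.shape
    S5 = np.zeros_like(S4)
    for x in range(h):
        for y in range(w):
            C = 0
            for j in range(R.shape[1]):
                col = y + R[0, j]
                top, bot = x + R[1, j], x + R[2, j]
                if 0 <= col < w and 0 <= bot < h and top >= 0:
                    C += S4[bot, col] - (S4[top - 1, col] if top > 0 else 0)
            S5[x, y] = C
    return S5
```

With a degenerate one-column, zero-offset neighborhood, each output pixel reduces to the corresponding binary pixel, which is a convenient sanity check.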
[0072] 7) Detection of crop rows: each row is traversed in the region integration image S5, the average value of the regional integral values M of all pixels in each row is calculated, each pixel whose regional integral value M is greater than the average value is set to 1, the remaining pixels are set to 0, and a crop row image S6 is obtained, as shown in the corresponding drawing.
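The per-row adaptive thresholding of step 7) can be sketched in one NumPy expression (hypothetical function name):

```python
import numpy as np

def detect_crop_rows(S5):
    # Step 7): threshold each row of the region integration image at its
    # own mean; pixels strictly above the row average are marked 1 (crop).
    means = S5.mean(axis=1, keepdims=True)
    return (S5 > means).astype(np.uint8)
```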
[0073] 8) Detections of the initial middle ridge, the left ridge, and the right ridge: the implementation is as follows.
[0074] 8.1) The crop row image S6 is divided into N crop row sub-images S7 having a width the same as the width of the crop row image S6 and a height 1/N of the height of the crop row image S6.
[0075] 8.2) The i-th crop row sub-image S7 is taken, and a longitudinal projection vector S8 of the i-th crop row sub-image S7 is calculated.
[0076] 8.3) Detection of the left boundary of the initial middle ridge: an initial middle ridge start detection template ML0 is constructed, and the initial middle ridge start detection template ML0 is a vector whose length is ⅙ of the width of the original image S1, the first half is 1, and the second half is −1, as shown in the corresponding drawing. The longitudinal projection vector S8 is convolved with the initial middle ridge start detection template ML0, and the column number of the position of the point with the maximum convolution value is taken as an initial middle ridge left boundary p0L0.sub.i of the i-th crop row sub-image S7.
[0077] 8.4) Detection of the right boundary of the initial middle ridge: an initial middle ridge termination detection template MR0 is constructed, and the initial middle ridge termination detection template MR0 is a vector whose length is ⅙ of the width of the original image S1, the first half is −1, and the second half is 1, as shown in the corresponding drawing. The longitudinal projection vector S8 is convolved with the initial middle ridge termination detection template MR0, and the column number of the position of the point with the maximum convolution value is taken as an initial middle ridge right boundary p0R0.sub.i of the i-th crop row sub-image S7.
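The boundary detections of steps 8.3) and 8.4) amount to locating step edges in the longitudinal projection vector S8. A hedged Python sketch, assuming NumPy, cross-correlation semantics (what many image-processing texts loosely call convolution), and a hypothetical function name:

```python
import numpy as np

def edge_position(proj, length, falling=True):
    # Step-edge template: first half +1, second half -1 responds most
    # strongly to a high-to-low (crop-to-ridge) transition; the mirrored
    # template detects a low-to-high one. The argmax of the sliding
    # response is taken as the boundary column, as in steps 8.3) and 8.4).
    half = length // 2
    tpl = np.concatenate([np.ones(half), -np.ones(half)])
    if not falling:
        tpl = -tpl
    resp = np.correlate(proj.astype(float), tpl, mode='same')
    return int(np.argmax(resp))
```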
[0078] 8.5) An initial middle ridge center p0M0.sub.i of the i-th crop row sub-image S7 is calculated by the following formula: p0M0.sub.i=(p0L0.sub.i+p0R0.sub.i)/2.
[0079] 8.6) Detection of the left boundary of the initial left row: an initial left row start detection template MR1 is constructed, and the initial left row start detection template MR1 is a vector whose length is ½ of the length of the initial middle ridge termination detection template MR0, the first half is −1, and the second half is 1. The initial left row start detection template MR1 is used to be convolved with the data of the longitudinal projection vector S8 on the left side of the initial middle ridge left boundary p0L0.sub.i. The column number of the position of the point with the maximum convolution value is taken as an initial left row left boundary CL0.sub.i of the i-th crop row sub-image S7.
[0080] 8.7) Detection of the right boundary of the initial right row: an initial right row termination detection template ML1 is constructed, and the initial right row termination detection template ML1 is a vector whose length is ½ of the length of the initial middle ridge start detection template ML0, the first half is 1, and the second half is −1. The initial right row termination detection template ML1 is used to be convolved with the data of the longitudinal projection vector S8 on the right side of the initial middle ridge right boundary p0R0.sub.i. The column number of the position of the point with the maximum convolution value is taken as an initial right row right boundary CR0.sub.i of the i-th crop row sub-image S7.
[0081] 8.8) Estimation of the center point of the left ridge: an initial left row horizontal center column CLM0.sub.i of the i-th crop row sub-image S7 is calculated by the following formula: CLM0.sub.i=(CL0.sub.i+p0L0.sub.i)/2. Then, a column pLM0.sub.i where the center point of the left ridge of the i-th crop row sub-image S7 is located is calculated by the following formula: pLM0.sub.i=2×CLM0.sub.i−p0M0.sub.i.
[0082] 8.9) Estimation of the center point of the right ridge: an initial right row horizontal center column CRM0.sub.i of the i-th crop row sub-image S7 is calculated by the following formula: CRM0.sub.i=(CR0.sub.i+p0R0.sub.i)/2. Then, a column pRM0.sub.i where the center point of the right ridge of the i-th crop row sub-image S7 is located is calculated by the following formula: pRM0.sub.i=2×CRM0.sub.i−p0M0.sub.i.
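The arithmetic of steps 8.5) through 8.9) can be collected into one illustrative Python helper; the function name is hypothetical, and only the formulas come from the description:

```python
def estimate_ridge_centers(p0L0, p0R0, CL0, CR0):
    # Steps 8.5)-8.9): midpoints give the row centers, and the left and
    # right ridge centers are obtained by mirroring the middle ridge
    # center about the respective crop row center.
    p0M0 = (p0L0 + p0R0) / 2   # initial middle ridge center
    CLM0 = (CL0 + p0L0) / 2    # initial left row horizontal center column
    CRM0 = (CR0 + p0R0) / 2    # initial right row horizontal center column
    pLM0 = 2 * CLM0 - p0M0     # left ridge center column
    pRM0 = 2 * CRM0 - p0M0     # right ridge center column
    return p0M0, CLM0, CRM0, pLM0, pRM0
```

The mirroring assumes roughly evenly spaced ridges, which is why the ridge centers can be extrapolated from the crop row centers.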
[0083] 8.10) Calculation of the ordinate of the crop row sub-image S7: The ordinate of the position of the center point of the crop row sub-image S7 on the crop row image S6 is taken as an ordinate S7y.sub.i of the crop row sub-image S7.
[0084] 8.11) Determining the center lines of the initial middle ridge, the left ridge, and the right ridge.
[0085] Step 8.2) to step 8.11) are repeated, and the N crop row sub-images S7 of the crop row image S6 are sequentially traversed. Each crop row sub-image S7 yields an initial middle ridge center p0M0.sub.i, an initial left row horizontal center column CLM0.sub.i, an initial right row horizontal center column CRM0.sub.i, and an ordinate S7y.sub.i of the crop row sub-image S7, and therefore the results of all N crop row sub-images S7 are composed to obtain an initial middle ridge center set p0M0, an initial left row horizontal center column set CLM0, an initial right row horizontal center column set CRM0, and an ordinate set S7y of the crop row sub-image S7.
[0086] The ordinate S7y.sub.i of the crop row sub-image S7 serves as an independent variable, and the initial middle ridge center p0M0.sub.i, the initial left row horizontal center column CLM0.sub.i, and the initial right row horizontal center column CRM0.sub.i serve as dependent variables, respectively.
[0087] Univariate regression models pM, pL, and pR are constructed between the initial middle ridge center p0M0.sub.i and the ordinate S7y.sub.i, between the initial left row horizontal center column CLM0.sub.i and the ordinate S7y.sub.i, and between the initial right row horizontal center column CRM0.sub.i and the ordinate S7y.sub.i, respectively, as shown in the corresponding drawings.
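The regression of step 8.11) can be illustrated with NumPy's polynomial fitting. The helper name and the choice of a first-degree (linear) model are assumptions, since the text only specifies univariate regression models pM, pL, and pR:

```python
import numpy as np

def fit_centerline(S7y, centers, deg=1):
    # Step 8.11): regress a center column on the sub-image ordinate;
    # the fitted model is returned as a callable polynomial.
    coeffs = np.polyfit(np.asarray(S7y, float), np.asarray(centers, float), deg)
    return np.poly1d(coeffs)
```

One model would be fitted per center set, e.g. pM from the set p0M0, pL from CLM0, and pR from CRM0, each against S7y.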
[0088] 9) Detections of the left crop row and the right crop row:
[0089] The implementation is as follows.
[0090] 9.1) A blank right crop row point set SCR and a blank left crop row point set SCL are constructed.
[0091] 9.2) The k-th row on the crop row image S6 is taken as a row image S9, and the ordinate of the row image S9 as an independent variable is substituted into the univariate regression models pM, pL, and pR to obtain a crop middle ridge center column p0M1.sub.k, a crop left ridge horizontal center column CLM1.sub.k, and a crop right ridge horizontal center column CRM1.sub.k on the current row image S9.
[0092] 9.3) On the current row image S9, the coordinates of the pixels with a pixel value of 1 between the crop middle ridge center column p0M1.sub.k and the crop left ridge horizontal center column CLM1.sub.k corresponding to the crop row image S6 are added to the left crop row point set SCL.
[0093] 9.4) On the current row image S9, the coordinates of the pixels with a pixel value of 1 between the crop middle ridge center column p0M1.sub.k and the crop right ridge horizontal center column CRM1.sub.k corresponding to the crop row image S6 are added to the right crop row point set SCR.
[0094] 9.5) Step 9.2) to step 9.4) are repeated. Each row of the crop row image S6 is traversed to obtain the complete left crop row point set SCL and the right crop row point set SCR.
[0095] 9.6) The ordinates of the pixels in the left crop row point set SCL serve as independent variables, the abscissas serve as dependent variables, a univariate regression model for the left crop row point set SCL is constructed, and a left crop row centerline model CL is obtained.
[0096] 9.7) The ordinates of the pixels in the right crop row point set SCR serve as independent variables, the abscissas serve as dependent variables, a univariate regression model for the right crop row point set SCR is constructed, and a right crop row centerline model CR is obtained.
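Steps 9.1) through 9.7) can be sketched as follows in Python; the function name, the use of callables for the models pM, pL, and pR, and the linear fit are illustrative assumptions:

```python
import numpy as np

def crop_row_centerlines(S6, pM, pL, pR):
    # Steps 9.2)-9.7): for every row k, collect crop pixels (value 1)
    # between the predicted middle ridge center column and the left/right
    # crop row center columns, then fit one straight line per side.
    SCL, SCR = [], []
    for k in range(S6.shape[0]):
        m, l, r = int(pM(k)), int(pL(k)), int(pR(k))
        for y in range(min(l, m), max(l, m)):
            if S6[k, y] == 1:
                SCL.append((k, y))   # left crop row point set
        for y in range(min(m, r), max(m, r)):
            if S6[k, y] == 1:
                SCR.append((k, y))   # right crop row point set
    def fit(points):
        # Ordinates as independent variables, abscissas as dependent.
        ys = [p[0] for p in points]
        xs = [p[1] for p in points]
        return np.poly1d(np.polyfit(ys, xs, 1))
    return fit(SCL), fit(SCR)   # centerline models CL and CR
```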
[0097] The left crop row centerline model CL and the right crop row centerline model CR are each, in effect, a fitted straight line.
[0098] 10) Detection of the middle ridge:
[0099] The implementation is as follows.
[0100] 10.1) A blank middle ridge point set SPath is constructed.
[0101] 10.2) The q-th row on the crop row image S6 is taken as a row image S10, and the ordinate of the row image S10 as an independent variable is substituted into the left crop row centerline model CL and the right crop row centerline model CR to obtain a left row center point CL1.sub.q and a right row center point CR1.sub.q on the current row image S10.
[0102] 10.3) On the current row image S10, the coordinates of the pixels with a pixel value of 0 between the left row center point CL1.sub.q and the right row center point CR1.sub.q corresponding to the crop row image S6 are added into the middle ridge point set SPath.
[0103] 10.4) Step 10.2) to step 10.3) are repeated. Each row image S10 of the crop row image S6 is traversed to obtain the complete middle ridge point set SPath.
[0104] 10.5) The ordinates of the pixels in the middle ridge point set SPath serve as independent variables, the abscissas serve as dependent variables, a univariate regression model is constructed for the middle ridge point set SPath, and a middle ridge centerline model pPath is obtained. The straight line where the middle ridge centerline model pPath is located is the navigation line for the field machinery, as shown in the corresponding drawing.