PEST MONITORING METHOD BASED ON MACHINE VISION
20200178511 · 2020-06-11
Inventors
- Yu TANG (Guangzhou, Guangdong, CN)
- Shaoming LUO (Guangzhou, Guangdong, CN)
- Zhenyu ZHONG (Guangzhou, Guangdong, CN)
- Huan LEI (Guangzhou, Guangdong, CN)
- Chaojun HOU (Guangzhou, Guangdong, CN)
- Jiajun ZHUANG (Guangzhou, Guangdong, CN)
- Weifeng HUANG (Guangzhou, Guangdong, CN)
- Zaili CHEN (Guangzhou, Guangdong, CN)
- Jintian LIN (Guangzhou, Guangdong, CN)
- Lixue ZHU (Guangzhou, Guangdong, CN)
CPC classification
A01M1/04
HUMAN NECESSITIES
G06V10/255
PHYSICS
G06F18/214
PHYSICS
A01M1/02
HUMAN NECESSITIES
G06V20/52
PHYSICS
A01M1/026
HUMAN NECESSITIES
International classification
A01M1/02
HUMAN NECESSITIES
Abstract
The present invention relates to a pest monitoring method based on machine vision. The method includes the following steps: arranging a pest trap at a place where pests gather, and setting an image acquisition device in front of the pest trap to acquire an image; identifying a pest in the acquired image, and obtaining a number of pests; extracting multiple suspicious pest images from a region of each identified pest in the image, and determining identification accuracy of each suspicious pest image, if the number of pests is greater than or equal to a preset threshold for the number of pests; and calculating a predicted level of pest damage based on the number of pests and the identification accuracy of each suspicious pest image. The present invention acquires a pest image automatically through the image acquisition device in front of the pest trap.
Claims
1. A pest monitoring method based on machine vision (MV), comprising: arranging a pest trap at a place where pests gather, and setting an image acquisition device in front of the pest trap to acquire an image; identifying a pest in the acquired image, and obtaining a number of pests; extracting multiple suspicious pest images from a region of each identified pest in the image, and determining identification accuracy of each suspicious pest image, when the number of pests is greater than or equal to a preset threshold for the number of pests; and calculating a predicted level of pest damage based on the number of pests and the identification accuracy of each suspicious pest image.
2. The pest monitoring method based on machine vision (MV) according to claim 1, wherein a statistical analysis model is established in advance; the statistical analysis model is used to calculate the predicted level of pest damage based on the number of pests and the identification accuracy of each suspicious pest image.
3. The pest monitoring method based on machine vision (MV) according to claim 2, wherein the predicted level H(n) of pest damage is calculated based on the statistical analysis model according to the following formula:

H(n) = 0, when n < allow_max; H(n) = (p_1 + p_2 + … + p_n)/n, when n ≥ allow_max,

where n is the number of pests, allow_max is the threshold for the number of pests, and p_i is the identification accuracy of an i-th suspicious pest image.
4. The pest monitoring method based on machine vision (MV) according to claim 1, wherein the pest trap comprises a box and a trap lamp arranged in the box; the box is a polyhedron, and the box is open on at least one side; the image acquisition device is arranged to face a side of the box with an opening to acquire an image.
5. The pest monitoring method based on machine vision (MV) according to claim 4, wherein an opening of the box facing the image acquisition device is covered with a light-transmitting film.
6. The pest monitoring method based on machine vision (MV) according to claim 1, wherein the identifying a pest in the acquired image comprises: identifying a region in the acquired image that blocks the light of the trap lamp; and determining whether a geometric feature of each region matches a shape of the pest, and if yes, identifying the corresponding region as a pest.
7. The pest monitoring method based on machine vision (MV) according to claim 6, wherein whether each region matches the shape of the pest is determined at least according to an area and a perimeter of the region.
8. The pest monitoring method based on machine vision (MV) according to claim 1, wherein a pest discriminative model is established in advance; the pest discriminative model is used to determine the identification accuracy of each suspicious pest image.
9. The pest monitoring method based on machine vision (MV) according to claim 8, wherein the establishing a pest discriminative model comprises: making a positive sample set and a negative sample set of the pest image, positive samples being pest images in various situations, and negative samples being images including no pest; and training a neural network by the positive sample set and the negative sample set to generate a pest discriminative model.
10. The pest monitoring method based on machine vision (MV) according to claim 1, wherein the acquired image needs to be preprocessed by denoising before identifying a pest in the acquired image and obtaining a number of pests.
11. The pest monitoring method based on machine vision (MV) according to claim 2, wherein the acquired image needs to be preprocessed by denoising before identifying a pest in the acquired image and obtaining a number of pests.
12. The pest monitoring method based on machine vision (MV) according to claim 3, wherein the acquired image needs to be preprocessed by denoising before identifying a pest in the acquired image and obtaining a number of pests.
13. The pest monitoring method based on machine vision (MV) according to claim 4, wherein the acquired image needs to be preprocessed by denoising before identifying a pest in the acquired image and obtaining a number of pests.
14. The pest monitoring method based on machine vision (MV) according to claim 5, wherein the acquired image needs to be preprocessed by denoising before identifying a pest in the acquired image and obtaining a number of pests.
15. The pest monitoring method based on machine vision (MV) according to claim 6, wherein the acquired image needs to be preprocessed by denoising before identifying a pest in the acquired image and obtaining a number of pests.
16. The pest monitoring method based on machine vision (MV) according to claim 7, wherein the acquired image needs to be preprocessed by denoising before identifying a pest in the acquired image and obtaining a number of pests.
17. The pest monitoring method based on machine vision (MV) according to claim 8, wherein the acquired image needs to be preprocessed by denoising before identifying a pest in the acquired image and obtaining a number of pests.
18. The pest monitoring method based on machine vision (MV) according to claim 9, wherein the acquired image needs to be preprocessed by denoising before identifying a pest in the acquired image and obtaining a number of pests.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION
[0034] The present patent is further described below with reference to the accompanying drawings. The accompanying drawings are only for illustrative description and should not be construed as a limitation to the present patent. In order to better describe the present patent, some components may be omitted, enlarged or reduced in the accompanying drawings. Those skilled in the art should understand that some well-known structures and descriptions thereof may be omitted in the accompanying drawings.
[0035] As shown in the accompanying drawings, a pest monitoring method based on machine vision includes the following steps: arrange a pest trap at a place where pests gather, and set an image acquisition device in front of the pest trap to acquire an image;
[0036] preprocess the acquired image by denoising, identify multiple pests in the image acquired by the image acquisition device by using a blob algorithm, and obtain a number of pests;
[0037] extract multiple suspicious pest images from the region of each identified pest in the image, and determine the identification accuracy of each suspicious pest image, if the number of pests is greater than or equal to a preset threshold for the number of pests, where the threshold for the number of pests can be 3; considering the phototaxis of a pest such as a citrus psyllid, if pest damage occurs, the number of identified pests is likely to exceed 3; moreover, identification of the number of pests is subject to interference from environmental factors such as fallen leaves and bees; therefore, when the number of pests is less than the threshold of 3, it can be determined that no pest damage has occurred and the growth of the crop is not affected; more preferably, the threshold for the number of pests can be obtained from the results of multiple tests in areas with different degrees of pest damage, or from past experience;
[0038] calculate a predicted level of pest damage based on the number of pests and the identification accuracy of each suspicious pest image; and
[0039] perform different levels of early warning according to the predicted level of pest damage, and send a related parameter of pest damage to a remote terminal for further artificial confirmation and determination.
[0040] The related parameter of pest damage includes the predicted level of pest damage, the number of pests, the region of each pest in the image, and the corresponding image acquired by the image acquisition device. The artificial determination is specifically: a person combines the above related parameter of pest damage to determine whether an actual number of pests in the image is consistent with or more than a number of identified pests, and if yes, takes a corresponding control measure according to the level of an early warning.
[0041] Specifically, the image acquisition device can be a camera.
[0042] The present invention acquires a pest image automatically through the image acquisition device in front of the pest trap. The present invention avoids the disadvantage of laborious visual inspection by a person, and realizes real-time pest monitoring. The present invention combines the number of pests and the identification accuracy of each suspicious pest image to calculate the predicted level of pest damage. Compared with the prior art that calculates the predicted level of pest damage based on the number of pests alone, the present invention has higher accuracy and obtains a more significant predicted level of pest damage. Therefore, the present invention can better guide pest control.
[0043] A statistical analysis model is established in advance. The statistical analysis model is used to calculate the predicted level of pest damage based on the number of pests and the identification accuracy of each suspicious pest image. The predicted level H(n) of pest damage is calculated based on the statistical analysis model according to the following formula:

H(n) = 0, when n < allow_max; H(n) = (p_1 + p_2 + … + p_n)/n, when n ≥ allow_max

[0044] where, n is the number of pests; allow_max is the threshold for the number of pests; p_i is the identification accuracy of an i-th suspicious pest image; a value range of H(n) is [0,1]. A pest damage grade is set according to the value of H(n). For example, a first grade corresponds to a value below 0.5, a second grade corresponds to a value of 0.5-0.7, and a third grade corresponds to a value of 0.7-0.9. An early warning is given based on these grades corresponding to the value of H(n).
[0045] The statistical analysis model is obtained based on training. The model can fit a relation between the predicted level of pest damage and the number of pests as well as the identification accuracy of each suspicious pest image. The final predicted level of pest damage is more targeted and more significant for guiding pest control. When the number of pests does not reach the threshold for the number of pests, the predicted level H(n) of pest damage is zero, that is, no pest damage occurs. When the number of pests is greater than or equal to the threshold for the number of pests, an average identification accuracy of all suspicious pest images is calculated. The suspicious pest images and respective possibility are taken into account, which is conducive to obtaining a more scientific predicted level H(n) of pest damage and improving the guiding significance of pest control.
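The thresholded average described above can be sketched in a few lines. This is an illustrative stand-in, not the patented model itself; the helper names (`predicted_damage_level`, `damage_grade`) are hypothetical, and it assumes H(n) is zero below the threshold and otherwise the mean identification accuracy of the suspicious pest images.

```python
def predicted_damage_level(accuracies, allow_max=3):
    """accuracies: identification accuracy p_i of each suspicious pest image.

    Returns 0.0 when fewer than allow_max pests were identified,
    otherwise the average identification accuracy (range [0, 1]).
    """
    n = len(accuracies)
    if n < allow_max:
        return 0.0
    return sum(accuracies) / n

def damage_grade(h):
    # Grades from the example in the text: <0.5, 0.5-0.7, 0.7-0.9.
    if h < 0.5:
        return 1
    if h < 0.7:
        return 2
    return 3
```

With three or more detections, an early warning can then be issued from `damage_grade(predicted_damage_level(...))`.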
[0046] The pest trap includes a box and a trap lamp arranged in the box. The box is a polyhedron, specifically a rectangular solid. The box is open on at least one side. The trap lamp uses a white light source with a good backlight effect. The box contains a volatile for luring a pest, for example, a mixed volatile of β-caryophyllene and terpinolene which lures a citrus psyllid. The image acquisition device is arranged to face a side of the box with an opening to acquire an image. The box is used to gather light of the trap lamp, so that the image acquired by the image acquisition device is clear. The use of the box is convenient to identify a pest in the image later and improve the identification accuracy. In this way, the present invention further improves the practicability of the method, and improves the prediction accuracy, thereby facilitating people to control the pest in time.
[0047] An opening of the box facing the image acquisition device is covered with a light-transmitting film. The image acquisition device should have a certain distance away from the box, so that a shooting range of the image acquisition device covers the light-transmitting film. The light-transmitting film makes the light received by the image acquisition device more uniform and softer. The light-transmitting film improves the imaging quality, facilitates the identification of a pest in the image later, and improves the identification accuracy. Therefore, the present invention further improves the practicability of the method, and improves the prediction accuracy, thereby facilitating people to control the pest in time.
[0048] In addition, due to the covering of the light-transmitting film, the light received by the image acquisition device is more uniform and softer. Thus, the image acquired by the image acquisition device has a cleaner background to distinguish a noise effectively. Therefore, it is possible to preprocess the image acquired by the image acquisition device by denoising and to make the shot image sharper.
[0049] The step of identifying multiple pests in the image acquired by the image acquisition device by using a blob algorithm is specifically: identify a region in the acquired image that blocks the light of the trap lamp; and determine whether a geometric feature of each region matches a shape of a pest, and if yes, identify the corresponding region as a pest. Because of the setting of the trap lamp, it is only necessary to determine whether a geometric feature of the region blocking the light of the trap lamp in the image matches a pest. This avoids a complicated image identification process, improving the identification efficiency. Therefore, this method ensures a real-time performance so that people can take a control measure more quickly against the pest.
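The region-finding step above amounts to grouping connected dark pixels (those blocking the lamp light, f(x,y) = 1) in the binarized image. A minimal pure-Python sketch of such a blob search is given below; it is a generic connected-components pass, not the specific blob algorithm of the invention, and the name `find_blobs` is illustrative.

```python
from collections import deque

def find_blobs(binary):
    """binary: 2-D list where 1 marks a pixel blocking the lamp light.

    Returns a list of regions, each a list of (x, y) pixel coordinates,
    found by 4-connected breadth-first flood fill.
    """
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    blobs = []
    for y in range(h):
        for x in range(w):
            if binary[y][x] == 1 and not seen[y][x]:
                region, queue = [], deque([(x, y)])
                seen[y][x] = True
                while queue:
                    cx, cy = queue.popleft()
                    region.append((cx, cy))
                    # Visit the four直 4-connected neighbours still inside the image.
                    for nx, ny in ((cx + 1, cy), (cx - 1, cy), (cx, cy + 1), (cx, cy - 1)):
                        if 0 <= nx < w and 0 <= ny < h and binary[ny][nx] == 1 and not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((nx, ny))
                blobs.append(region)
    return blobs
```

Each returned region can then be screened by its geometric features (area, perimeter) as described in the following paragraphs.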
[0050] Whether each region matches the shape of the pest is determined at least according to an area and a perimeter of the region. The area, the perimeter, and their ratio are important features. Their combination greatly reduces the rate of misjudgment, and improves the identification efficiency and the efficiency of obtaining the final predicted level of pest damage. Therefore, the method is timelier for pest control. More preferably, in addition to the calculation of the area and the perimeter of each region, a minimum circumscribed rectangle and a centroid position of each region need to be calculated.
[0051] When multiple suspicious pest images are extracted from a region of each identified pest in the image, in fact, the minimum circumscribed rectangle of the region of each pest in the image is also calculated in the above step. The minimum circumscribed rectangle locates the region of each pest in the image, so that multiple suspicious pest images can be extracted.
[0052] The area and the perimeter are calculated according to a pixel in the region. The area is obtained by accumulating all pixels in the region, and the perimeter is obtained by accumulating pixels at a boundary of the region. The region is generally an irregular polygon, and the simple accumulation by pixels can avoid a complex formula calculation of the area and perimeter of the irregular polygon.
[0053] Let an i-th region be R_i(x,y), and let f(x,y) be a binary pixel value at a pixel (x,y) in the image acquired by the image acquisition device; then an area S(R_i(x,y)) of the i-th region is:

S(R_i(x,y)) = Σ_{(x,y)∈R_i(x,y)} f(x,y)
[0054] The binary pixel value f(x,y) is obtained by preprocessing. In a specific implementation process, f(x,y) at a pixel in a dark region of the image, that is, f(x,y) in a region that blocks the light of the trap lamp, is set to 1, and f(x,y) at a pixel in a bright region of the image is set to 0. Therefore, values of f(x,y) in R.sub.i(x,y) can be accumulated to serve as the area of the region R.sub.i(x,y).
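The pixel-accumulation scheme above can be sketched directly: area is the count of region pixels (each contributing f(x,y) = 1), and perimeter is the count of region pixels on the boundary. The boundary test here (a pixel with at least one 4-neighbour outside the region) is one common convention, assumed for illustration; the function names are hypothetical.

```python
def region_area(region):
    # Area: accumulate f(x,y) = 1 over the region, i.e. count its pixels.
    return len(region)

def region_perimeter(region):
    # Perimeter: count pixels having at least one 4-neighbour outside the region.
    pixels = set(region)
    boundary = 0
    for (x, y) in region:
        if any((x + dx, y + dy) not in pixels
               for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))):
            boundary += 1
    return boundary
```

Counting pixels this way avoids any closed-form polygon formula for the typically irregular region shapes.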
[0055] A perimeter of the i-th region is a number of pixels (x,y) on a boundary of the region (numbered as 5 in the accompanying drawings).
[0056] A centroid (numbered as 0 in the accompanying drawings) of the i-th region is calculated by the following formulas:

x̄ = M_10(R_i(x,y)) / M_00(R_i(x,y)), ȳ = M_01(R_i(x,y)) / M_00(R_i(x,y))

[0057] where, a moment M_pq(R_i(x,y)) = Σ_{(x,y)∈R_i(x,y)} x^p · y^q · f(x,y).
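The moment-based centroid can be computed over the pixel list of a region. A minimal sketch (illustrative names, assuming f(x,y) = 1 for every pixel in the region) follows:

```python
def moment(region, p, q):
    # M_pq = sum over region pixels of x^p * y^q, with f(x,y) = 1 inside the region.
    return sum((x ** p) * (y ** q) for (x, y) in region)

def centroid(region):
    """Centroid of a region as (M_10/M_00, M_01/M_00)."""
    m00 = moment(region, 0, 0)  # equals the region's area
    return moment(region, 1, 0) / m00, moment(region, 0, 1) / m00
```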
[0058] Specifically, the minimum circumscribed rectangle of the i-th region is calculated by the following formulas:

left = min{x | (x,y) ∈ R_i(x,y)}, right = max{x | (x,y) ∈ R_i(x,y)}
top = min{y | (x,y) ∈ R_i(x,y)}, bottom = max{y | (x,y) ∈ R_i(x,y)}
[0059] An origin of the above coordinate values (x,y) is a vertex of an upper left corner of the image. An X-axis is horizontal to the right, and a Y-axis is vertically downward. Therefore, left, bottom, right, and top correspond to the numbers 1, 2, 3 and 4 in the accompanying drawings.
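Given the top-left origin with Y pointing down, the minimum circumscribed (axis-aligned bounding) rectangle reduces to coordinate extremes over the region's pixels. A sketch with an illustrative function name:

```python
def bounding_box(region):
    """Axis-aligned minimum circumscribed rectangle of a pixel region.

    Image origin is the top-left corner, X to the right, Y downward,
    so top is the minimum y and bottom the maximum y.
    """
    xs = [x for x, _ in region]
    ys = [y for _, y in region]
    return {"left": min(xs), "right": max(xs), "top": min(ys), "bottom": max(ys)}
```

This rectangle is what locates each pest region so that the suspicious pest images can be cropped out for the discriminative model.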
[0060] A pest discriminative model is established in advance. The pest discriminative model is used to determine the identification accuracy of each suspicious pest image. The pest discriminative model is obtained based on training. The model can fit a relation between the identification accuracy of each suspicious pest image and each suspicious pest image. The final identification accuracy of the suspicious pest image is more targeted, and the final predicted level of pest damage is more significant for guiding pest control.
[0061] The step of establishing a pest discriminative model is specifically: make a positive sample set and a negative sample set of the pest image, the positive sample set being pest images in various situations, and the negative sample set including multiple images including no pest; and train a neural network by the positive sample set and the negative sample set to generate a pest discriminative model. The neural network is specifically a visual geometry group convolutional neural network (VGGNet).
[0062] It should be noted that the above embodiments are merely preferred embodiments of the present invention, and are not intended to limit the present invention. Although the present invention is described in detail with reference to the embodiments, those skilled in the art should understand that they may still make modifications to the technical solutions described in the above embodiments or make equivalent replacements to some technical features thereof. Any modifications, equivalent replacements and improvements etc. made within the spirit and principle of the present invention should fall within the protection scope of the present invention.