Agricultural Treatment Control Device
20220174934 · 2022-06-09
CPC classification
A01M21/00
HUMAN NECESSITIES
A01B69/001
HUMAN NECESSITIES
G06V20/56
PHYSICS
International classification
A01M7/00
HUMAN NECESSITIES
Abstract
The invention relates to a collaborative agricultural field treatment control device intended to be mounted on an agricultural machine (1), comprising a set of detectors (2) of weeds or of foliar symptoms of deficiencies or diseases that collaborate in the decision to control the treatment devices (3) of the agricultural field.
Claims
1. Agricultural treatment control device to be mounted on an agricultural machine, said agricultural machine comprising at least one controllable treatment device, wherein the agricultural treatment control device comprises: at least one deficiency or disease foliar symptoms or weeds detection system, each being adapted for attachment to the agricultural machine; a localization system of at least one deficiency or disease foliar symptoms or weeds detection system; at least one deficiency or disease foliar symptoms or weeds detection system being characterized in that it is adapted to collaborate with a deficiency or disease foliar symptoms or weeds detection system whose detection zone partially overlaps with that of said deficiency or disease foliar symptoms or weeds detection system in order to collaboratively decide on the treatment to be applied to the detection zone of said deficiency or disease foliar symptoms or weeds detection system; and a communication system between said at least one deficiency or disease foliar symptoms or weeds detection system and at least one treatment device.
2. Device according to claim 1, wherein said at least one deficiency or disease foliar symptoms or weeds detection system is adapted to collaborate with another deficiency or disease foliar symptoms or weeds detection system whose detection zone partially laterally overlaps with that of said deficiency or disease foliar symptoms or weeds detection system.
3. Device according to claim 1, wherein said at least one deficiency or disease foliar symptoms or weeds detection system is adapted to collaborate with a deficiency or disease foliar symptoms or weeds detection system whose detection zone temporally overlaps with that of said deficiency or disease foliar symptoms or weeds detection system.
4. Device according to claim 1, wherein all of said at least one deficiency or disease foliar symptoms or weeds detection systems are adapted to collaboratively build a mapping of the agricultural field travelled by said agricultural machine, said mapping being constructed by a geostatistical process with localized detection data representing the real state as measured by said at least one deficiency or disease foliar symptoms or weeds detection system.
5. Device according to claim 4, further comprising a control screen, and in which the map of the travelled agricultural field is displayed on the control screen intended for the technician carrying out the treatment of the agricultural field.
6. Device according to claim 1, in which the localization system comprises a geolocalization system and/or an inertial unit.
7. Device according to claim 1, which further comprises at least one of the following features: at least two deficiency or disease foliar symptoms or weeds detection systems; at least one deficiency or disease foliar symptoms or weeds detection system is equipped with a localization system; at least one deficiency or disease foliar symptoms or weeds detection system is adapted to collaborate with another deficiency or disease foliar symptoms or weeds detection system; at least one deficiency or disease foliar symptoms or weeds detection system comprises a hyperspectral sensor; a deficiency or disease foliar symptoms or weeds detection system is adapted to detect the presence of weeds or foliar symptoms of deficiencies or diseases from peculiarities specific to weeds or foliar symptoms of deficiencies or diseases; a deficiency or disease foliar symptoms or weeds detection system is adapted to detect an area for a weed or foliar symptom of deficiency or disease; a deficiency or disease foliar symptoms or weeds detection system is supplemented with a probability of the presence of said peculiarities specific to weeds or foliar symptoms of deficiencies or diseases; the localization system is adapted to localize the treatment to be applied to the detection area; a communication system between said deficiency or disease foliar symptoms or weeds detection systems; a temporal overlap of said information on the deficiency or disease foliar symptoms or weeds detection is obtained.
8. Device according to claim 1, in which one detection system comprises a system for direct detection of features in the hyperspectral scene integrating a deep and convolutional neural network designed to detect at least one characteristic sought in said hyperspectral scene for a weed or a leaf symptom of deficiency or disease from at least one compressed image of the hyperspectral scene.
9. Device according to claim 1, in which one detection system comprises a system for detecting features in the hyperspectral scene comprising: a neural network configured to calculate a hyperspectral hypercube of the hyperspectral scene from at least one compressed image and an uncompressed image of the hyperspectral scene, a characterization module to detect the weed or the leaf symptom of deficiency or disease from the hyperspectral hypercube.
10. A system comprising a device according to claim 4, and further comprising a processor adapted to produce statistics on spraying, prevalence, species, densities, or stages of weeds or foliar symptoms of deficiencies or diseases present in the agricultural field using the mapping of the travelled agricultural field.
11. System comprising a device according to claim 1 and a controllable agricultural treatment device of an agricultural machine, in which said agricultural treatment device comprises at least one spray nozzle, the flow or pressure of said at least one spray nozzle being controlled by the collaborative decision of all of said at least two deficiency or disease foliar symptoms or weeds detection systems.
12. System comprising a device according to claim 1 and a controllable agricultural treatment device of an agricultural machine, in which said agricultural treatment device comprises at least one LASER for destroying weeds, said at least one LASER being controlled by the collaborative decision of all of said at least two deficiency or disease foliar symptoms or weeds detection systems.
13. System comprising a device according to claim 1 and a controllable agricultural treatment device of an agricultural machine, in which said agricultural treatment device comprises at least one high pressure water jet whose objective is the destruction of weeds, said at least one high pressure water jet being controlled by the collaborative decision of all of said at least two deficiency or disease foliar symptoms or weeds detection systems.
14. System comprising a device according to claim 1 and a controllable agricultural treatment device of an agricultural machine, in which said agricultural treatment device comprises at least one mechanical hoeing weed control tool, said at least one mechanical hoeing weed control tool being controlled by the collaborative decision of all of said at least two deficiency or disease foliar symptoms or weeds detection systems.
15. System comprising a device according to claim 1 and a controllable agricultural treatment device of an agricultural machine, in which said agricultural treatment device comprises at least one electric weed control tool for destroying weeds, said at least one electric weed control tool being controlled by the collaborative decision of all of said at least two deficiency or disease foliar symptoms or weeds detection systems.
16. System according to claim 10, in which the agricultural treatment device is localized.
17. Method for collaborative control of agricultural treatment to be mounted on an agricultural machine, said agricultural machine comprising at least one controllable treatment device, wherein the agricultural treatment control method comprises: a collaborative decision of said at least one deficiency or disease foliar symptoms or weeds detection system of which the detection zones partially overlap, each being suitable for attachment to the agricultural machine and the localization of the treatment to be applied to the detection area; and a communication between said deficiency or disease foliar symptoms or weeds detection systems with said at least one treatment device.
18. Collaborative piloting method according to claim 17, the method comprising for each of at least two deficiency or disease foliar symptoms or weeds detection systems, the steps of: Acquisition of a new image datum from the ground of the travelled agricultural field on which an agricultural machine moves by means of said deficiency or disease foliar symptoms or weeds detection system; and Acquisition of additional position information from said deficiency or disease foliar symptoms or weeds detection system by means of the localization system; and Projection of said image data acquired by each of said deficiency or disease foliar symptoms or weeds detection systems on the ground plane; and Detection of the presence of weeds or foliar symptoms of deficiencies or diseases from said image data acquired and projected onto said ground plane; and Calculation of the positions of weeds or leaf symptoms of deficiencies or diseases in the detection zone of said deficiency or disease foliar symptoms or weeds detection system; said position calculation using the localization information of said localization system of said deficiency or disease foliar symptoms or weeds detection system and the detection information in said image data; and Communication of said positions of weeds or leaf symptoms of deficiencies or diseases in the detection zone of said deficiency or disease foliar symptoms or weeds detection system to all of the other deficiency or disease foliar symptoms or weeds detection systems; and Reception of said positions of weeds or foliar symptoms of deficiencies or diseases in the detection area of said deficiency or disease foliar symptoms or weeds detector from other deficiency or disease foliar symptoms or weeds detection systems; and Fusion of said positions of weeds or foliar symptoms of deficiencies or diseases of all the deficiency or disease foliar symptoms or weeds detection systems; and Calculation of the command to be sent to the treatment device 
concerned by the detection zone of said deficiency or disease foliar symptoms or weeds detection system; and Issuance of the command to the treatment device concerned by the detection zone of said deficiency or disease foliar symptoms or weeds detection system.
19. Collaborative piloting method according to claim 18, further comprising at least one of the following features: said projection uses information from said inertial unit of said deficiency or disease foliar symptoms or weeds detection system in order to determine the angle of capture of the image data relative to the normal vector on the ground; Communication of said positions of weeds or foliar symptoms of deficiencies or diseases in the detection zone of said deficiency or disease foliar symptoms or weeds detection system to others, in particular to all the other deficiency or disease foliar symptoms or weeds detection systems; the fusion is weighted according to the quality and the calculated distance of each detection.
20. Computer program comprising instructions which, when the program is executed by a computer, lead the latter to implement the method according to claim 17.
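The collaborative decision recited in claims 17 to 19, in which each detection system communicates the positions of weeds or foliar symptoms it has detected and the received positions are fused with a weighting depending on the quality and the distance of each detection, can be sketched as follows. This is a minimal illustration: the function names, the clustering radius and the exact weighting law are assumptions, not the claimed implementation.

```python
import math

def fuse_detections(detections, radius=0.05):
    """Fuse weed/symptom positions reported by several detection systems.

    detections: list of (x, y, quality, distance) tuples in field coordinates.
    Positions falling within `radius` metres of an existing cluster are merged
    into it; each contribution is weighted by its detection quality and
    down-weighted by the distance at which the detection was made.
    The weighting law quality/(1 + distance) is an illustrative assumption.
    """
    clusters = []
    for x, y, quality, dist in detections:
        for c in clusters:
            if math.hypot(c["x"] - x, c["y"] - y) <= radius:
                w = quality / (1.0 + dist)
                # Weighted running mean of the cluster position.
                c["x"] = (c["x"] * c["w"] + x * w) / (c["w"] + w)
                c["y"] = (c["y"] * c["w"] + y * w) / (c["w"] + w)
                c["w"] += w
                break
        else:  # no existing cluster close enough: start a new one
            clusters.append({"x": x, "y": y, "w": quality / (1.0 + dist)})
    return [(c["x"], c["y"]) for c in clusters]

# Two systems see the same weed about 2 cm apart; a third sees a distinct one.
fused = fuse_detections([(1.00, 2.00, 0.9, 0.5),
                         (1.02, 2.01, 0.6, 1.5),
                         (5.00, 5.00, 0.8, 0.5)])
```

The fused list then drives the command sent to the treatment device covering each zone, as in the last two steps of claim 18.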
Description
SUMMARY DESCRIPTION OF THE FIGURES
[0095] The manner of carrying out the invention, as well as the advantages which ensue therefrom, will emerge clearly from the embodiment which follows, given by way of indication but not limitation, in support of the appended figures.
DETAILED DESCRIPTION
[0113] By “direct”, when qualifying the detection of a feature, we mean that the output of the detection system is the sought feature itself. We exclude here the cases where the output of the detection system does not correspond to the sought feature, but only to an intermediary in the calculation of the feature. However, the output of the direct detection system can, in addition to corresponding to the sought feature, also be used for subsequent processing. In particular, by “direct”, it is meant that the output of the feature detection system is not a hyperspectral cube of the scene, which, in itself, does not constitute a feature of the scene.
[0114] By “compressed”, we mean a two-dimensional image of a three-dimensional scene comprising spatial and spectral information of the three-dimensional scene. The spatial and spectral information of the three-dimensional scene is thus projected by means of an optical system onto a two-dimensional capture surface. Such a “compressed” image may include one or more diffracted images of the three-dimensional scene, or parts thereof. In addition, it can also include part of a non-diffracted image of the scene. Thus, the term “compressed” is used because a two-dimensional representation of three-dimensional spectral information is possible. By “spectral”, we understand that we go beyond, in terms of the number of frequencies detected, a “standard” RGB image of the scene.
[0115] By “standard”, we refer, as opposed to a “compressed” image, to an image exhibiting no diffraction of the hyperspectral scene. However, such an image can be obtained by optical manipulations using reflecting mirrors or lenses.
[0116] By “non-homogeneous”, we refer to an image whose properties are not identical over the whole image. For example, a “non-homogeneous” image can contain, at certain locations, pixels whose information essentially comprises spectral information in a certain respective wavelength band, and, at other locations, pixels whose information essentially comprises non-spectral information. Uniform computer processing of such a “non-homogeneous” image is not possible, because the properties necessary for its processing are not identical at all locations of the image.
[0117] By “characteristic”, we mean a characteristic of the scene—this characteristic can be spatial, spectral, correspond to a shape, a color, a texture, a spectral signature or a combination of these, and can in particular be interpreted semantically.
[0118] “Object” refers to the common meaning used for this term. Object detection on an image corresponds to the location and a semantic interpretation of the presence of the object on the imaged scene. An object can be characterized by its shape, color, texture, spectral signature or a combination of these characteristics.
[0120] As illustrated in
[0121] According to a first embodiment, the deficiency or disease foliar symptoms or weeds detection system 2 comprises a capture device 10 and a computerized characterization module 21.
[0122] The structure of this optical network is relatively similar to that described in the scientific publication “Computed-tomography imaging spectrometer: experimental calibration and reconstruction results”, published in APPLIED OPTICS, volume 34 (1995) number 22.
[0123] This optical structure makes it possible to obtain a compressed image 14′, illustrated in
[0124] Alternatively, three axes of diffraction can be used on the diffraction grating 33 so as to obtain a diffracted image 14′ with sixteen diffractions. The three diffraction axes can be equally distributed, that is to say separated from each other by an angle of 60°.
[0125] Thus, in general, the compressed image comprises 2R+1 diffractions if one uses R evenly distributed diffraction gratings, that is to say separated by the same angle from each other.
[0126] The capture surface 35 can correspond to a CCD sensor (“charge-coupled device”, that is to say a charge transfer device), to a CMOS sensor (“complementary metal-oxide-semiconductor”, a technology for manufacturing electronic components), or to any other known sensor. For example, the scientific publication “Practical Spectral Photography”, published in Eurographics, volume 31 (2012) number 2, proposes to combine this optical structure with a standard digital camera to capture the compressed image.
[0127] Preferably, each pixel of the compressed image 14′ is coded on 8 bits, thus making it possible to represent 256 intensity levels.
[0128] A second sensor 12 makes it possible to obtain a non-diffracted image 17′ of a focal plane P12′ of the same observed scene, but with an offset induced by the offset between the first sensor 11 and the second sensor 12. This second sensor 12 corresponds to an RGB sensor, that is to say a sensor making it possible to code the influence of the three colors Red, Green and Blue of the focal plane P12′. It makes it possible to account for the influence of the use of a blue filter F1, a green filter F2 and a red filter F3 on the observed scene.
[0129] This sensor 12 can be produced by a CMOS or CCD sensor associated with a Bayer filter. Alternatively, any other sensor can be used to acquire this RGB image 17′. Preferably, each color of each pixel of the RGB image 17′ is coded on 8 bits. Thus, each pixel of the RGB image 17′ is coded on 3 times 8 bits. Alternatively, a monochrome sensor could be used.
[0130] A third sensor 13 makes it possible to obtain an infrared image 18′, IR, of a third focal plane P13′ of the same observed scene, also with an offset with respect to the first sensor 11 and the second sensor 12. This sensor 13 makes it possible to account for the influence of the use of an infrared filter F4 on the observed scene.
[0131] Any type of known sensor can be used to acquire this IR image 18. Preferably, each pixel of the IR image 18 is coded on 8 bits. Alternatively, only one or the other of sensor 12 and sensor 13 is used.
[0132] The distance between the three sensors 11-13 can be less than 1 cm so as to obtain a significant overlap of the focal planes P11′-P13′ by the three sensors 11-13. The sensors are for example aligned along the x axis. The topology and the number of sensors can vary without changing the invention.
[0133] For example, the sensors 11-13 can acquire an image of the same observed scene by using semi-transparent mirrors to transmit the information of the scene observed to the various sensors 11-13.
[0134] As illustrated in
[0135] In the example of
[0136] Preferably, the images 17′-18′ from RGB and IR sensors are cross-checked using a cross-correlation in two dimensions. The extraction of the focal plane of the diffracted image 14′ is calculated by interpolation of the x and y offsets between the sensors 12-13 with reference to the position of the sensor 11 of the diffracted image by knowing the distance between each sensor 11-13. This preprocessing step is not always necessary, in particular, when the sensors 11-13 are configured to capture the same focal plane, for example with the use of semi-transparent mirrors.
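The cross-checking of the RGB and IR images by a two-dimensional cross-correlation described above can be sketched as follows; the exhaustive shift search, the search window and the function name are illustrative assumptions rather than the patent's implementation.

```python
def best_offset(ref, img, max_shift=3):
    """Estimate the (dx, dy) shift aligning `img` onto `ref` by maximising
    the two-dimensional cross-correlation over a small search window.
    Images are lists of rows of intensities."""
    h, w = len(ref), len(ref[0])
    best, best_score = (0, 0), float("-inf")
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            score = 0.0
            for y in range(h):
                for x in range(w):
                    ys, xs = y + dy, x + dx
                    if 0 <= ys < h and 0 <= xs < w:
                        score += ref[y][x] * img[ys][xs]
            if score > best_score:
                best_score, best = score, (dx, dy)
    return best

# A bright feature at (x=2, y=2) in ref appears at (x=3, y=2) in img.
ref = [[1] * 5 for _ in range(5)]
ref[2][2] = 10
img = [[1] * 5 for _ in range(5)]
img[2][3] = 10
shift = best_offset(ref, img)
```

In practice the recovered offsets would then be interpolated, as described, to extract the focal plane of the diffracted image 14′.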
[0137] When the images 14, 17 and 18 of each focal plane P11-P13 observed by each sensor 11-13 are obtained, the construction module 16 implements a neural network 20 to form a hyperspectral image 15 from the information in these three images 14, 17-18.
[0138] This neural network 20 aims at determining the intensity I.sub.X,Y,λ of each voxel V.sub.X,Y,λ of the hyperspectral image 15.
[0139] To do this, as illustrated in
[0140] The first neuron of the input layer 40 makes it possible to extract the intensity I.sub.IR(x,y) from the IR image 18 as a function of the x and y coordinates of the sought voxel V.sub.X,Y,λ. For example, if the IR image 18 is coded on 8 bits, this first neuron transmits to the output layer 41 the 8-bit value of the pixel of the IR image 18 at the sought x and y coordinates. The second neuron of the input layer 40 performs the same task for the red color 17a of the RGB image 17.
[0141] According to the previous example, each color being coded on 8 bits, the sought intensity I.sub.R(x; y) is also coded on 8 bits. The third neuron searches for the intensity I.sub.V(x; y) in the same way for the green color 17b and the fourth neuron searches for the intensity I.sub.B(x; y) for the blue color 17c. Thus, for these first four neurons, it is very easy to obtain the intensity, because it is enough to use the position in x and y of the desired voxel.
[0142] The following neurons of the input layer 40 are more complex, since each of the following neurons is associated with a diffraction R0-R7 of the diffracted image 14.
[0143] These neurons seek the intensity of a specific diffraction I.sub.n(x, y) as a function of the position in x and y, but also of the wavelength λ of the sought voxel V.sub.X,Y,λ.
[0144] This relation between the three coordinates of the voxel V.sub.X,Y,λ and the position in x and y can be coded in a memory during the integration of the neural network 20.
[0145] Preferably, a learning phase makes it possible to define this relationship using a known model, the parameters of which are sought from representations of known objects. An example model is defined by the following relation:
x.sub.img=x.sub.offsetX(n)+xt+λ·λ.sub.sliceX;
y.sub.img=y.sub.offsetY(n)+yt+λ·λ.sub.sliceY;
with:
n=floor (M(dt−1)/DMAX);
n between 0 and M, the number of diffractions of the compressed image;
λ=(dt−1)mod(DMAX/M);
dt between 1 and DMAX;
xt between 0 and XMAX;
yt between 0 and YMAX;
XMAX the size along the x axis of the tensor of order three of the input layer;
YMAX the size along the y axis of the tensor of order three of the input layer;
DMAX the depth of the tensor of order three of the input layer;
λ.sub.sliceX, the spectral step constant along the x axis of said compressed image;
λ.sub.sliceY, the spectral step constant along the y axis of said compressed image;
x.sub.offsetX(n) corresponding to the offset along the x axis of the diffraction n;
y.sub.offsetY(n) corresponding to the offset along the y axis of the diffraction n.
Floor is a well-known truncation operator.
Mod represents the mathematical operator modulo.
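The relation locating, for each input-layer coordinate, the corresponding pixel of the compressed image can be illustrated numerically. The expressions for n and λ are those given above; the final linear shift combining x.sub.offsetX(n), xt and λ·λ.sub.sliceX is an assumed reading of the listed parameters, not a formula quoted verbatim.

```python
import math

def voxel_to_pixel(x_t, y_t, d_t, M, D_MAX,
                   lam_slice_x, lam_slice_y, x_offset, y_offset):
    """Locate, for the input-layer coordinate (x_t, y_t, d_t), the pixel of
    the compressed image holding the corresponding intensity.
    n = floor(M(d_t - 1)/D_MAX) and lam = (d_t - 1) mod (D_MAX/M) follow the
    text; the last two lines (linear shift along the diffraction axis) are
    an assumption about the model's form."""
    n = math.floor(M * (d_t - 1) / D_MAX)   # index of the diffraction
    lam = (d_t - 1) % (D_MAX / M)           # spectral index within it
    x_img = x_offset[n] + x_t + lam * lam_slice_x
    y_img = y_offset[n] + y_t + lam * lam_slice_y
    return n, lam, x_img, y_img

# Example: M = 8 diffractions, input-layer depth D_MAX = 64, so each
# diffraction contributes D_MAX/M = 8 spectral slices.
n, lam, x_img, y_img = voxel_to_pixel(
    5, 7, 10, 8, 64, 2.0, 2.0, [0, 100, 200], [0, 50, 100])
```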
[0146] A learning phase therefore makes it possible to define the parameters λ.sub.sliceX, λ.sub.sliceY, x.sub.offsetX(n), and y.sub.offsetY(n), so that each neuron can quickly find the intensity of the corresponding pixel. As a variant, other models are possible, in particular depending on the nature of the used diffraction grating 33.
[0147] In addition, the information related to the intensity of the pixel I.sub.n(x, y) sought by each neuron can be determined by a product of convolution between the intensity of the pixel of the compressed image 14 and of its close neighbors in the different R0-R7 diffractions. According to the previous example, the output of these neurons from the input layer 40 is also coded on 8 bits.
[0148] All these different intensities of the input layer 40 are injected into a single neuron of the output layer 41 which has the function of combining all this information and of providing the value of the intensity I.sub.X,Y,λ of the desired voxel.
[0149] To do this, this output neuron 41 associates a weight with each item of information as a function of the wavelength λ of the voxel sought. Following this modulation on the influence of the contributions of each image 17-18 and of each diffraction R0-R7, this output neuron 41 can add up the contributions to determine an average intensity which will form the intensity I.sub.x,y,λ of the sought voxel V.sub.X,Y,λ, for example coded on 8 bits. This process is repeated for all the coordinates of the voxel V.sub.X,Y,λ, so as to obtain a hypercube containing all the spatial and spectral information originating from the non-diffracted images 17-18 and from each diffraction R0-R7. For example, as illustrated in
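The combination performed by the output neuron 41, a wavelength-dependent weighted average of the contributions of the images 17-18 and of the diffractions R0-R7, can be sketched as follows; the weights here are placeholders for the values learned as a function of λ.

```python
def voxel_intensity(contributions, weights):
    """Weighted combination performed by the output neuron: one intensity per
    input neuron (IR, R, G, B, then one per diffraction R0-R7), one weight
    per contribution depending on the sought wavelength, and a result
    clamped to the 8-bit range of the voxel."""
    value = sum(c * w for c, w in zip(contributions, weights)) / sum(weights)
    return max(0, min(255, round(value)))
```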
[0150] The invention thus makes it possible to obtain a hyperspectral image 15 quickly and with great discretization in the spectral dimension. The use of a neural network 20 makes it possible to limit the complexity of the operations to be carried out during the analysis of the diffracted image 14. In addition, the neural network 20 also allows the association of the information of this diffracted image 14 with those of non-diffracted images 17-18 to improve the precision in the spatial dimension.
[0151] A computerized characterization module 21 is used downstream to determine a weed or a leaf symptom of deficiency or disease. For example, the input of the computerized characterization module is the hyperspectral image 15 in three dimensions. The computerized characterization module can for example apply a predefined treatment, characterizing the weed or the leaf symptom of deficiency or disease, to the hyperspectral image 15 in three dimensions, and outputting a presence or absence of the weed or the leaf symptom of deficiency or disease.
[0152] The computerized characterization module can for example apply, as described in the article “Hyperspectral image analysis techniques for the detection and classification of the early onset of plant disease and stress”, Amy Lowe, Nicola Harrison and Andrew P. French, Plant Methods (2017), an index-based detection (for example the “Normalized Difference Vegetation Index”—NDVI—or “Photochemical Reflectance Index” (PRI)), in order to pre-process the hyperspectral image 15 in three dimensions by selecting a subset of spectral bands which are assembled by means of an index. For example, the PRI index is a two-dimensional image composed of the bands at 531 nm and 570 nm by the equation Img=(R.sub.531−R.sub.570)/(R.sub.531+R.sub.570), where R.sub.n represents the intensity of the voxel with coordinates (x; y; n) of the hyperspectral cube. The resulting image identifies the presence of plants in the image. The value in one pixel is compared to a pre-defined scale to classify the detection in this pixel. Typically, in the resulting image, a value in a pixel of between −0.2 and 0.2 indicates the presence of a healthy plant in this pixel.
[0153] Other indices are applicable, each one making it possible to process the hyperspectral image and to detect the presence either of a weed, or of a leaf symptom of deficiency or disease, or the presence of a plant. The potentially applicable indices include the following:
[0154] “Normalized difference vegetation index” (NDVI), defined by the equation (RNIR−RRED)/(RNIR+RRED), with RRED=680 nm, RNIR=800 nm, used to detect the presence of plants;
[0155] “Red edge” NDVI, defined by the equation (R.sub.750−R.sub.705)/(R.sub.750+R.sub.705), used to detect the presence of plants;
[0156] “Simple ratio index” (SRI), defined by the equation RNIR/RRED, with RRED=680 nm, RNIR=800 nm, used to detect the presence of plants;
[0157] “Photochemical reflectance index” (PRI), defined by the equation (R.sub.531−R.sub.570)/(R.sub.531+R.sub.570), used to detect the vigor (or good health) of a plant;
[0158] “Plant senescence reflectance index” (PSRI), defined by the equation (Red−Green)/NIR, where Red represents the sum of the intensities of the voxels with wavelengths between 620 and 700 nm, Green represents the sum of the intensities of voxels with wavelengths between 500 and 578 nm, and NIR represents the sum of the intensities of voxels with wavelengths between 700 and 1000 nm, making it possible to detect the senescence of a plant, the stress of a plant or the maturity of a fruit;
[0159] “Normalized phaeophytinization index” (NPQI), defined by the equation (R.sub.415−R.sub.435)/(R.sub.415+R.sub.435), used to measure the degradation of leaf chlorophyll;
[0160] “Structure independent pigment index” (SIPI), defined by the equation (R.sub.800−R.sub.445)/(R.sub.800−R.sub.680), used to detect the vigor (or good health) of a plant; and
[0161] “Leaf rust disease severity index” (LRDSI), defined by the equation 6.9×(R.sub.605/R.sub.455)−1.2, used to detect rust disease in wheat leaves.
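Two of the indices above, PRI and NDVI, can be computed per pixel from the hypercube as follows; the storage of the cube as a mapping from wavelength (in nm) to a two-dimensional intensity array is an assumption made for this sketch, not the format used by the device.

```python
def pri(cube, x, y):
    """Photochemical Reflectance Index at pixel (x, y), following the
    definition (R531 - R570)/(R531 + R570) given in the text.
    `cube` maps a wavelength in nm to a 2-D list of intensities."""
    r531, r570 = cube[531][y][x], cube[570][y][x]
    return (r531 - r570) / (r531 + r570)

def ndvi(cube, x, y):
    """NDVI at pixel (x, y), with RRED = 680 nm and RNIR = 800 nm
    as in the text: (RNIR - RRED)/(RNIR + RRED)."""
    r_red, r_nir = cube[680][y][x], cube[800][y][x]
    return (r_nir - r_red) / (r_nir + r_red)

# A single-pixel toy cube with the four bands needed by the two indices.
cube = {531: [[0.3]], 570: [[0.1]], 680: [[0.1]], 800: [[0.5]]}
p = pri(cube, 0, 0)
v = ndvi(cube, 0, 0)
```

The resulting per-pixel values would then be compared to the pre-defined scales mentioned above to classify each detection.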
[0162] Any other index suitable for detecting a particular disease or stress can be used.
[0163] If applicable, the predefined equation gives a probability of the presence of the weed or the foliar symptom of deficiency or disease. If necessary, an additional output from the computerized characterization module is a localization of the weed or the leaf symptom of deficiency or disease in image 17 or 18.
[0164] In the context of the present patent application, the detection system described above is considered to be a single detection system, even if it uses different sensors whose information is merged to detect a weed or a deficiency or disease leaf symptom.
[0165] According to a second embodiment, the deficiency or disease foliar symptoms or weeds detection system 2 comprises a capture device 202.
[0166] As illustrated in
[0167] This optical structure makes it possible to obtain a compressed image 211, illustrated in
[0168] Alternatively, as illustrated in
[0169] The structure of this optical assembly is relatively similar to that described in the scientific publication “Compressive Coded Aperture Spectral Imaging”, IEEE Signal Processing Magazine, Volume 31, Issue 1, Gonzalo R. Arce, David J. Brady, Lawrence Carin, Henry Arguello, and David S. Kittle.
[0170] Alternatively, the capture surfaces 35 or 246 can correspond to the photographic acquisition device of a smartphone or any other portable device including a photographic acquisition arrangement, by adding the capture device 202 of the hyperspectral scene 203 in front of the photographic acquisition device.
[0171] As a variant, the acquisition system 204 may include a compact mechanical embodiment which can be integrated into a portable and autonomous device and the detection system is included in said portable and autonomous device.
[0172] Alternatively, the capture surfaces 35 or 246 can be a device whose captured wavelengths are not in the visible range. For example, the device 202 can integrate sensors whose wavelength is between 0.001 nanometer and 10 nanometers, or a sensor whose wavelength is between 10,000 nanometers and 20,000 nanometers, or a sensor whose wavelength is between 300 nanometers and 2000 nanometers. It can be an infrared device.
[0173] When the image 211 of the observed hyperspectral focal plane is obtained, the detection system 2 implements a neural network 212 to detect a particular feature in the observed scene from the information of the compressed image 211.
[0174] This neural network 212 aims at determining the probability of the presence of the characteristic sought for each pixel localised at the x and y coordinates of the observed hyperspectral scene 203.
[0175] To do this, as illustrated in
[0176] The input layer 230 is populated from the pixels forming the compressed image. Thus, the input layer is a tensor of order three, and has two spatial dimensions of size X.sub.MAX and Y.sub.MAX, and a depth dimension of size D.sub.MAX, corresponding to the number of subsets of the compressed image copied into the input layer. The invention uses the nonlinear relation f(x.sub.t, y.sub.t, d.sub.t)→(x.sub.img, y.sub.img), defined for x.sub.t ϵ [0 . . . X.sub.MAX[, y.sub.t ϵ [0 . . . Y.sub.MAX[ and d.sub.t ϵ [0 . . . D.sub.MAX[, allowing to calculate the x.sub.img and y.sub.img coordinates of the pixel of the compressed image whose intensity is copied into the tensor of order three of said input layer of the neural network at the coordinates (x.sub.t, y.sub.t, d.sub.t).
[0177] For example, in the case of a compressed image 211 obtained from the capture device of
x.sub.img=x.sub.offsetX(n)+x.sub.t+λ·λ.sub.sliceX;
y.sub.img=y.sub.offsetY(n)+y.sub.t+λ·λ.sub.sliceY;
with:
n=floor (M(d.sub.t−1)/D.sub.MAX);
n between 0 and M, the number of diffractions of the compressed image;
λ=(d.sub.t−1)mod(D.sub.MAX/M);
d.sub.t between 1 and D.sub.MAX;
x.sub.t between 0 and X.sub.MAX;
y.sub.t between 0 and Y.sub.MAX;
X.sub.MAX the size along the x axis of the tensor of order three of the input layer;
Y.sub.MAX the size along the y axis of the tensor of order three of the input layer;
D.sub.MAX the depth of the tensor of order three of the input layer;
λ.sub.sliceX, the spectral step constant along the x axis of said compressed image;
λ.sub.sliceY, the spectral step constant along the y axis of said compressed image;
x.sub.offsetX(n) corresponding to the offset along the x axis of the diffraction n;
y.sub.offsetY(n) corresponding to the offset along the y axis of the diffraction n.
Floor is a well-known truncation operator.
[0178] Mod represents the mathematical operator modulo.
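The index mapping above can be sketched in Python. This is a minimal illustration, not the patented implementation: the numeric values of M, D.sub.MAX, the spectral steps, and the per-diffraction offset functions are illustrative assumptions, and the mapping follows the relations given for n and λ.

```python
import math

M = 8                   # number of diffractions in the compressed image (illustrative)
D_MAX = 32              # depth of the order-three input tensor (illustrative)
LAMBDA_SLICE_X = 1.0    # spectral step along x, in pixels per spectral index
LAMBDA_SLICE_Y = 1.0    # spectral step along y

def x_offset(n):
    """Hypothetical offset along x of diffraction n."""
    return 10 * n

def y_offset(n):
    """Hypothetical offset along y of diffraction n."""
    return 12 * n

def f(x_t, y_t, d_t):
    """Map input-tensor coordinates to compressed-image pixel coordinates."""
    n = math.floor(M * (d_t - 1) / D_MAX)   # diffraction index
    lam = (d_t - 1) % (D_MAX / M)           # spectral index inside that diffraction
    x_img = x_offset(n) + x_t + lam * LAMBDA_SLICE_X
    y_img = y_offset(n) + y_t + lam * LAMBDA_SLICE_Y
    return x_img, y_img

def populate_input(compressed, X_MAX, Y_MAX):
    """Copy compressed-image intensities into the order-three input tensor."""
    tensor = [[[0.0] * D_MAX for _ in range(Y_MAX)] for _ in range(X_MAX)]
    for x_t in range(X_MAX):
        for y_t in range(Y_MAX):
            for d_t in range(1, D_MAX + 1):
                x_img, y_img = f(x_t, y_t, d_t)
                tensor[x_t][y_t][d_t - 1] = compressed[int(y_img)][int(x_img)]
    return tensor
```

With these assumed offsets, slice d.sub.t=1 of the tensor simply copies the top-left region of the compressed image, and deeper slices read from the successive diffraction regions.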
[0179] As is clearly visible in
[0180] Alternatively, the invention makes it possible to correlate the information contained in the different diffractions of the diffracted image with information contained in the non-diffracted central part of the image.
[0181] According to this variant, an additional slice can be added in the direction of the depth of the input layer, the neurons of which will be populated with the intensity detected in the pixels of the compressed image corresponding to the non-diffracted detection. For example, if we assign to this slice the coordinate d.sub.t=0, we can keep the above formula for populating the input layer for d.sub.t greater than or equal to 1, and populate the layer d.sub.t=0 in the following way:
x.sub.img=(Img.sub.width/2)−X.sub.MAX+x.sub.t;

y.sub.img=(Img.sub.height/2)−Y.sub.MAX+y.sub.t;

With:
[0182] Img.sub.width the size of the compressed image along the x axis;
Img.sub.height the size of the compressed image along the y axis.
[0183] The compressed image obtained by the optical system contains the focal plane of the non-diffracted scene in the center, as well as the diffracted projections along the axes of the different diffraction filters. Thus, the neural network uses, for the direct detection of the sought features, the following information of said at least one diffracted image: [0184] the light intensity in the central and non-diffracted part of the focal plane of the scene at the x and y coordinates; and [0185] light intensities in each of the diffractions of said compressed image whose coordinates x′ and y′ are dependent on the coordinates x and y of the non-diffracted central part of the focal plane of the scene.
[0186] As a variant, in the case of a compressed image 213 obtained from the capture device of
f(x.sub.t,y.sub.t,d.sub.t)={(x.sub.img=x.sub.t);(y.sub.img=y.sub.t)}(Img=MASK if d.sub.t=0;Img=CASSI if d.sub.t>0),

With:
[0187] MASK: image of the compression mask used,
CASSI: measured compressed image,
Img: selected image from which the pixel is copied.
[0188] On slice 0 of the tensor of order three of the input layer the image of the used compression mask is copied.
[0189] The compressed slices of the hyperspectral scene are copied from the other slices of the tensor of order three of the input layer.
[0190] The architecture of said neural network 212, 214 is composed of a set of convolutional layers assembled linearly and alternately with decimation (pooling) or interpolation (unpooling) layers.
[0191] A convolutional layer of depth d, denoted CONV (d), is defined by d convolution kernels, each of these convolution kernels being applied to the volume of the input tensor of order three and of size x.sub.input,y.sub.input,d.sub.input. The convolutional layer thus generates an output volume, tensor of order three, having a depth d. An ACT activation function is applied to the calculated values of the output volume of this convolutional layer.
[0192] The parameters of each convolution kernel of a convolutional layer are specified by the learning procedure of the neural network.
[0193] Different ACT activation functions can be used. For example, this function can be a ReLu function, defined by the following equation:
ReLu(x)=max(0,x)
[0194] Alternating with the convolutional layers, decimation layers (pooling), or interpolation layers (unpooling) are inserted.
[0195] A decimation layer makes it possible to reduce the width and height of the tensor of order three at the input for each depth of said tensor of order three. For example, a MaxPool decimation layer (2,2) selects the maximum value of a sliding tile on the surface of 2×2 values. This operation is applied to all the depths of the input tensor and generates an output tensor having the same depth and a width divided by two, as well as a height divided by two.
[0196] An interpolation layer makes it possible to increase the width and the height of the tensor of order three as input for each depth of said tensor of order three. For example, a MaxUnPool(2,2) interpolation layer copies the input value of a sliding point onto the surface of 2×2 output values. This operation is applied to all the depths of the input tensor and generates an output tensor having the same depth and a width multiplied by two, as well as a height multiplied by two.
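The decimation and interpolation layers described above can be illustrated with a minimal pure-Python sketch on a single depth slice. Note that some frameworks implement unpooling by restoring values at the argmax positions recorded during pooling; the sketch below uses the simpler behaviour stated in the text, copying each input value onto a 2×2 output tile.

```python
def maxpool2x2(m):
    """MaxPool(2,2): keep the maximum of each non-overlapping 2x2 tile,
    halving the width and the height of the slice."""
    return [[max(m[2 * i][2 * j], m[2 * i][2 * j + 1],
                 m[2 * i + 1][2 * j], m[2 * i + 1][2 * j + 1])
             for j in range(len(m[0]) // 2)]
            for i in range(len(m) // 2)]

def maxunpool2x2(m):
    """MaxUnpool(2,2): copy each value onto a 2x2 output tile,
    doubling the width and the height of the slice."""
    out = [[0.0] * (2 * len(m[0])) for _ in range(2 * len(m))]
    for i, row in enumerate(m):
        for j, v in enumerate(row):
            for di in (0, 1):
                for dj in (0, 1):
                    out[2 * i + di][2 * j + dj] = v
    return out
```

Applied to a full tensor of order three, each operation would be repeated independently on every depth slice, as the text specifies.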
[0197] A neural network architecture allowing the direct detection of features in the hyperspectral scene can be as follows:
Input
[0198] CONV(64) [0199] MaxPool(2,2) [0200] CONV(64) [0201] MaxPool(2,2) [0202] CONV(64) [0203] MaxPool(2,2) [0204] CONV(64) [0205] CONV(64) [0206] MaxUnpool(2,2) [0207] CONV(64) [0208] MaxUnpool(2,2) [0209] CONV(64) [0210] MaxUnpool(2,2) [0211] CONV(1) [0212] Output
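A property of this architecture worth making explicit is that the three MaxPool(2,2) halvings are mirrored by three MaxUnpool(2,2) doublings, so the CONV(1) output retains the input's spatial dimensions, as required for a per-pixel probability map. The sketch below only tracks spatial sizes through the listed layers, assuming convolutions that preserve width and height (e.g. with same-padding); it is an illustration, not the patented implementation.

```python
# Layer sequence copied from the architecture listed above.
LAYERS = ["CONV(64)", "MaxPool(2,2)", "CONV(64)", "MaxPool(2,2)",
          "CONV(64)", "MaxPool(2,2)", "CONV(64)", "CONV(64)",
          "MaxUnpool(2,2)", "CONV(64)", "MaxUnpool(2,2)", "CONV(64)",
          "MaxUnpool(2,2)", "CONV(1)"]

def output_size(w, h):
    """Track the spatial size of the tensor through the layer stack."""
    for layer in LAYERS:
        if layer.startswith("MaxPool"):
            w, h = w // 2, h // 2          # decimation halves each dimension
        elif layer.startswith("MaxUnpool"):
            w, h = w * 2, h * 2            # interpolation doubles each dimension
        # CONV layers are assumed to preserve spatial size (same-padding)
    return w, h
```

For an input whose width and height are multiples of 8 (2³, one factor per pooling stage), the output map has exactly the input's spatial dimensions.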
[0213] As a variant, the number of CONV(d) convolution layers and MaxPool(2,2) decimation layers can be modified in order to facilitate the detection of features having a higher semantic complexity. For example, a higher number of convolution layers makes it possible to process more complex shape, texture, or spectral signatures of the feature sought in the hyperspectral scene.
[0214] Alternatively, the number of deconvolution CONV (d) and MaxUnpool(2, 2) interpolation layers can be changed to facilitate reconstruction of the output layer. For example, a higher number of deconvolution layers makes it possible to reconstruct an output with greater precision.
[0215] As a variant, the CONV(64) convolution layers can have a depth different from 64 in order to deal with a number of different local features. For example, a depth of 128 allows local processing of 128 different features in a complex hyperspectral scene.
[0216] Alternatively, the MaxUnpool(2,2) interpolation layers may be of different interpolation dimensions. For example, a MaxUnpool(4, 4) layer increases the processing dimension of the top layer.
[0217] Alternatively, the ACT activation layers of the ReLu(x) type inserted following each convolution and deconvolution, may be of different type. For example, the softplus function defined by the equation: ƒ(x)=log(1+e.sup.x) can be used.
[0218] As a variant, the MaxPool(2,2) decimation layers can be of different decimation dimensions. For example, a MaxPool(4,4) layer makes it possible to reduce the spatial dimension more quickly and to focus the semantic analysis of the neural network on local features.
[0219] As a variant, fully connected layers can be inserted between the two central convolution layers at line 6 of the description in order to process the detection in a higher mathematical space. For example, three fully connected layers of size 128 can be inserted.
[0220] Alternatively, the dimensions of the CONV(64) convolution, MaxPool(2, 2) decimation, and MaxUnpool(2, 2) interpolation layers can be adjusted on one or more layers, in order to adapt the architecture of the neural network closest to the type of features sought in the hyperspectral scene.
[0221] Alternatively, normalization layers, for example of the BatchNorm or GroupNorm type, as described in “Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift”, Sergey Ioffe, Christian Szegedy, February 2015 and “Group Normalization”, Yuxin Wu, Kaiming He, FAIR, June 2018, can be inserted before or after each activation layer or at different levels of the structure of the neural network.
[0222] The weights of said neural network 212 are calculated by means of learning. For example, backward propagation of the gradient or its derivatives from training data can be used to calculate these weights.
[0223] Alternatively, the neural network 212 can determine the probability of the presence of several distinct features within the same observed scene. In this case, the last convolutional layer will have a depth corresponding to the number of distinct features to be detected. Thus the convolutional layer CONV(1) is replaced by a convolutional layer CONV(u), where u corresponds to the number of distinct features to be detected.
[0224]
[0225] As illustrated in
[0226] The capture device 302 can also comprise a device for acquiring an uncompressed “standard” image, comprising a converging lens 331 and a capture surface 232. The capture device 302 can also include a device for acquiring a compressed image as described above with reference to
[0227] In the presented example, the standard image acquisition device and the compressed image acquisition device are arranged juxtaposed with parallel optical axes, and optical beams at least partially overlapping. Thus, a portion of the hyperspectral scene is imaged at once by the acquisition devices. Thus, the focal planes of the various image acquisition sensors are offset from each other transversely to the optical axes of these sensors.
[0228] As a variant, a set of partially reflecting mirrors is used so as to capture said at least one non-diffracted standard image 312 and said at least one compressed image 211, 213 of the same hyperspectral scene 203 on several sensors simultaneously.
[0229] Alternatively, the sensing surface 232 can be a device whose sensed wavelengths are not in the visible part. For example, the device 202 can integrate sensors whose wavelength is between 0.001 nanometer and 10 nanometers or a sensor whose wavelength is between 10,000 nanometers and 20,000 nanometers, or a sensor whose wavelength is between 300 nanometers and 2000 nanometers.
[0230] When the images 211, 312 or 213 of the observed hyperspectral focal plane are obtained, the detection means implement a neural network 214 to detect a feature in the observed scene from the information of the compressed images 211 and 213, and the standard image 312.
[0231] As a variant, only the compressed 211 and standard 312 images are used and processed by the neural network 214.
[0232] As a variant, only the compressed 213 and standard 312 images are used and processed by the neural network 214.
[0233] Thus, when the description relates to a set of compressed images, it is at least one compressed image.
[0234] This neural network 214 aims at determining the probability of the presence of the sought feature for each pixel localised at the x and y coordinates of the observed hyperspectral scene 203.
[0235] To do this, as illustrated in
[0236] As illustrated in
[0237] The above-described filling corresponds to the filling of the first input (“Input1”) of the neural network, according to the architecture presented below.
[0238] For the second input (“Input2”) of the neural network, the portion of the input layer relative to the “standard” image is populated by directly copying the “standard” image into the neural network.
[0239] According to an exemplary embodiment where a compressed image 213 is also used, the third input “Input3” of the neural network is populated as described above for the compressed image 213.
[0240] A neural network architecture allowing the direct detection of features in the hyperspectral scene can be as follows:
Input 1         Input 2         Input 3
CONV(64)        CONV(64)        CONV(64)
MaxPool(2,2)    MaxPool(2,2)    MaxPool(2,2)
CONV(64)        CONV(64)        CONV(64)
MaxPool(2,2)    MaxPool(2,2)    MaxPool(2,2)
        CONV(64)
        CONV(64)
        MaxUnpool(2,2)
        CONV(64)
        MaxUnpool(2,2)
        CONV(64)
        MaxUnpool(2,2)
        CONV(1)
        Output
[0241] In this description, “Input1” corresponds to the portion of the input layer 250 populated from the compressed image 211. “Input2” corresponds to the portion of the input layer 250 populated from the standard image 312, and “Input3” corresponds to the portion of the input layer 250 populated from the compressed image 213. The line “CONV (64)” in the fifth line of the architecture operates information fusion.
[0242] As a variant, the line “CONV (64)” in the fifth line of the architecture operating the information fusion can be replaced by a fully connected layer having as input all of the MaxPool(2, 2) outputs of the processing paths for all of the inputs “input1”, “input2” and “input3” and as output a tensor of order one serving as input to the next layer “CONV (64)” presented in the sixth line of architecture.
[0243] In particular, the fusion layer of the neural network takes into account the shifts of the focal planes of the different image acquisition sensors, and integrates the homographic function allowing the information from the different sensors to be merged taking into account the parallaxes of the different images.
[0244] The variants presented above for the embodiment of
[0245] The weights of said neural network 214 are calculated by means of learning. For example, backward propagation of the gradient or its derivatives from training data can be used to calculate these weights.
[0246] Alternatively, the neural network 214 can determine the probability of the presence of several distinct features within the same observed scene. In this case, the last convolutional layer will have a depth corresponding to the number of distinct features to be detected. Thus the convolutional layer CONV (1) is replaced by a convolutional layer CONV (u), where u corresponds to the number of distinct features to be detected.
[0247] According to an alternative embodiment, as shown in
[0248] Thus, the neural network 214 uses, for the direct detection of the sought features, the information of said at least one compressed image as follows: [0249] the light intensity in the central and non-diffracted part of the focal plane of the scene at the x and y coordinates; and [0250] light intensities in each of the diffractions of said compressed image whose coordinates x′ and y′ are dependent on the coordinates x and y of the non-diffracted central part of the focal plane of the scene.
[0251] The invention has been presented above in different variants, in which a detected feature of the hyperspectral scene is a two-dimensional image whose value at each pixel at the coordinates x and y corresponds to the probability of presence of a feature at the same x and y coordinates of the hyperspectral focal plane of the scene 203. In particular, the feature corresponds to a feature potentially indicative of the presence of a weed or a leaf symptom of deficiency or disease in this pixel. Each weed, each leaf symptom of deficiency or disease, can be characterized by one or more features. The detection system then combines the results of the detection of each feature associated with a weed or a leaf symptom of deficiency or disease to determine a probability of the presence of the weed or the leaf symptom of deficiency or disease. If necessary, this process is repeated for all the predetermined weeds or foliar symptoms of deficiency or disease sought in the field. According to embodiments of the invention, other features can, however, alternatively be detected. According to an example, such another feature can be obtained from the image from the neural network presented above. For this, the neural network 212, 214 can have a subsequent layer, suitable for processing the image in question and determining the sought feature. According to an example, this subsequent layer can for example count the pixels of the image in question for which the probability is greater than a certain threshold. The result obtained is then an area (possibly expressed relative to a reference area of the image). According to an example of application, if the image has, in each pixel, a probability of the presence of a chemical compound, the result obtained can then correspond to a concentration of the chemical compound in the hyperspectral image scene, which can be indicative of a weed or foliar symptom of deficiency or disease.
[0252] According to another example, this subsequent layer may for example have only one neuron, the value of which (real or boolean) will indicate the presence or absence of an object or a particular feature sought in the hyperspectral scene. This neuron will have a maximum value in the event of the presence of the object or the feature and a minimum value in the opposite case. This neuron will be fully connected to the previous layer, and the connection weights will be calculated by means of learning.
[0253] According to a variant, it will be understood that the neural network can also be architectured to determine this feature without going through the determination of an image of probabilities of presence of the feature in each pixel.
[0254] In the context of this patent application, the detection system described above is considered to be a single detection system, even if it uses different sensors whose information is merged to detect a weed or a leaf symptom of deficiency or disease.
[0255] In addition, each detection system 2 can comprise a localisation system, of the type comprising an inertial unit and/or a geolocalisation system.
[0256] The agricultural treatment control device further comprises a communication system connecting the deficiency or disease foliar symptoms or weeds detection systems 2. The communication system is adapted to exchange data between the deficiency or disease foliar symptoms or weeds detection systems 2 such as, in particular, data of detection of weeds or leaf symptoms of deficiencies or disease, data of localisation from inertial units, and/or geolocalisation systems.
[0257] The plurality of said at least one controllable agricultural treatment device 3 is also fixed on the agricultural machine so as to be able to treat the target plants 4. As can be seen in particular in
[0258] The number of controllable agricultural treatment devices 3 need not be the same as the number of deficiency or disease foliar symptoms or weeds detection systems 2. In fact, according to one example, the collaborative treatment decision is transmitted to the controllable agricultural treatment device 3 having the least distance from the target plant.
[0259]
[0260] At each instant, said deficiency or disease foliar symptoms or weeds detection system 2.1 takes a photograph 6.1 of the area of agricultural field 5 facing its objective; said deficiency or disease foliar symptoms or weeds detection system 2.2, takes a picture 6.2 of the area of the agricultural field 5 facing its objective; said areas facing the optical objectives 9 of said deficiency or disease foliar symptoms or weeds detection systems 2.1 and 2.2 have a common area of acquisition.
[0261]
[0262] Preferably, the plurality of said at least two deficiency or disease foliar symptoms or weeds detection systems 2 is composed of homogeneous systems, having the same detection properties.
[0263] The images 6.1 and 6.2 acquired respectively by said deficiency or disease foliar symptoms or weeds detection systems 2.1 and 2.2 are processed locally in each of said deficiency or disease foliar symptoms or weeds detection systems 2.1 and 2.2, in order to project each of said images acquired on the ground plane into an image projected on the ground 7.1 and 7.2. The following discussion can be applied to each detection system 2.
[0264] The projection on the ground of said image data is calculated according to the following relationship:

Img.sub.projected=R·Img.sub.acquired

[0265] Where: [0266] Img.sub.projected is the tensor containing the pixels of the image projected on the ground; and [0267] Img.sub.acquired is the tensor containing the pixels of said raw image data; and [0268] R is the matrix combining the rotations about the three axes: roll, pitch and yaw; and [0269] α is the yaw angle; and [0270] β is the roll angle; and [0271] γ is the pitch angle.
[0272] The angles α, β, and γ, correspond respectively to the current yaw, roll and pitch angles of the deficiency or disease foliar symptoms or weeds detection system 2 considered as calculated from the raw data from the inertial unit on board the considered deficiency or disease foliar symptoms or weeds detection system 2; this roll, pitch and yaw information is calculated continuously and kept up to date by the considered deficiency or disease foliar symptoms or weeds detection system 2 by means of an attitude estimation algorithm using the raw information of said inertial unit on board the considered deficiency or disease foliar symptoms or weeds detection system 2. For example, the attitude estimation algorithm, used to calculate roll, pitch and yaw information, can be an extended Kalman filter, a Mahony or Madgwick algorithm. The document “A comparison of multisensor attitude estimation algorithm”, A. Cirillo, P. Cirillo, G. De Maria, C. Natale, S. Pirozzi, in “Multisensor attitude estimation: Fundamental concepts and applications, Chapter 29, Publisher: CRC Press, Editors: H. Fourati, DEC Belkhiat, pp. 529-539, September 2016, describes and compares a set of algorithms for merging data from inertial units in order to extract the attitude, defined by the roll, pitch, and yaw angles of the system.
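The construction of the rotation matrix R from the yaw, roll, and pitch angles estimated by the attitude algorithm can be sketched as follows. The axis conventions and the composition order (yaw about z, then roll about y, then pitch about x) are illustrative assumptions, since the patent does not fix them.

```python
import math

def rotation_matrix(alpha, beta, gamma):
    """R = Rz(yaw α) · Ry(roll β) · Rx(pitch γ), each a standard 3x3 rotation.
    Axis assignment and order are assumptions for illustration."""
    ca, sa = math.cos(alpha), math.sin(alpha)
    cb, sb = math.cos(beta), math.sin(beta)
    cg, sg = math.cos(gamma), math.sin(gamma)
    rz = [[ca, -sa, 0.0], [sa, ca, 0.0], [0.0, 0.0, 1.0]]   # yaw about z
    ry = [[cb, 0.0, sb], [0.0, 1.0, 0.0], [-sb, 0.0, cb]]   # roll about y
    rx = [[1.0, 0.0, 0.0], [0.0, cg, -sg], [0.0, sg, cg]]   # pitch about x

    def matmul(a, b):
        return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
                for i in range(3)]

    return matmul(rz, matmul(ry, rx))

def apply(r, v):
    """Rotate a 3-vector (e.g. a pixel ray) by the matrix r."""
    return [sum(r[i][k] * v[k] for k in range(3)) for i in range(3)]
```

Projecting an image onto the ground would apply this rotation to the viewing ray of each pixel before intersecting it with the ground plane.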
[0273] As illustrated in
[0274] Said image data projected on the ground are used to detect the presence of weeds or leaf symptoms of deficiencies or diseases from the features specific to weeds or leaf symptoms of deficiencies or diseases determined by one of the methods above, in order to detect the zones, identified at the coordinates of the image X.sub.detect and Y.sub.detect, in said projected image data in which the target plants 4 are present. A target plant 4 is a plant for which the detection device detects a weed or a leaf symptom of deficiency or disease. As shown in
[0275] As illustrated in
[0276] The calculation of geolocalisation 8.4 of a weed detection or foliar symptom of deficiency or disease is based on the following relationships:
Distance=ratio.sub.pixel2meter·√[(X.sub.detect−w.sub.img/2).sup.2+(Y.sub.detect−h.sub.img/2).sup.2]

Bearing=acos[(Y.sub.detect−h.sub.img/2)/(Distance/ratio.sub.pixel2meter)]

Rad.sub.fract=Distance/EARTH.sub.RADIUS

Lat.sub.target=(180·asin(lat.sub.21+lat.sub.22))/π

Lng.sub.target=(180·(((lng.sub.1+atan2(lng.sub.21,lng.sub.22)+3π)mod 2π)−π))/π

[0277] Where: [0278] EARTH.sub.RADIUS is the mean radius of the Earth, i.e. 6,371,000 meters; and [0279] ratio.sub.pixel2meter is the ratio between a pixel of the image and a meter on the ground; and [0280] X.sub.detect is the x coordinate, in pixels, of the detection center in the image; and [0281] Y.sub.detect is the y coordinate, in pixels, of the center of detection in the image; and [0282] w.sub.img is the width of the image in pixels; and [0283] h.sub.img is the height of the image in pixels; and [0284] lat is the latitude measured by said geolocalisation system of said deficiency or disease foliar symptoms or weeds detection system 2; and [0285] lng is the longitude measured by said geolocalisation system of said deficiency or disease foliar symptoms or weeds detection system 2; and [0286] lat.sub.target is the latitude of the target plant 4 detected in the image; and [0287] lng.sub.target is the longitude of the target plant 4 detected in the image; and lat.sub.21, lat.sub.22, lng.sub.1, lng.sub.21 and lng.sub.22 are the intermediate terms of the standard great-circle destination-point formula, with the measured position expressed in radians as lat.sub.1=lat·π/180 and lng.sub.1=lng·π/180: lat.sub.21=sin(lat.sub.1)·cos(Rad.sub.fract); lat.sub.22=cos(lat.sub.1)·sin(Rad.sub.fract)·cos(Bearing); lng.sub.21=sin(Bearing)·sin(Rad.sub.fract)·cos(lat.sub.1); lng.sub.22=cos(Rad.sub.fract)−sin(lat.sub.1)·sin(Lat.sub.target·π/180).
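The geolocalisation step can be sketched in Python with the standard great-circle destination-point formula. This is an illustration under stated assumptions: the function name is hypothetical, and the bearing is derived here directly from the pixel offsets with `atan2` rather than the arccosine form, a simplification that avoids the sign ambiguity of the arccosine.

```python
import math

EARTH_RADIUS = 6_371_000.0  # mean radius of the Earth, in meters

def geolocalise(lat, lng, x_detect, y_detect, w_img, h_img, ratio_pixel2meter):
    """Return (lat_target, lng_target) in degrees for a detection at pixel
    (x_detect, y_detect), from a sensor geolocalised at (lat, lng)."""
    dx = x_detect - w_img / 2
    dy = y_detect - h_img / 2
    distance = ratio_pixel2meter * math.hypot(dx, dy)   # ground distance, meters
    bearing = math.atan2(dx, dy)     # simplification: bearing from pixel offsets
    rad_fract = distance / EARTH_RADIUS                 # angular distance, radians
    lat1, lng1 = math.radians(lat), math.radians(lng)
    # standard destination-point relations
    lat_t = math.asin(math.sin(lat1) * math.cos(rad_fract)
                      + math.cos(lat1) * math.sin(rad_fract) * math.cos(bearing))
    lng_t = lng1 + math.atan2(
        math.sin(bearing) * math.sin(rad_fract) * math.cos(lat1),
        math.cos(rad_fract) - math.sin(lat1) * math.sin(lat_t))
    return math.degrees(lat_t), math.degrees(lng_t)
```

A detection at the image center (dx = dy = 0) returns the sensor's own coordinates, and a detection offset toward the top of the projected image (assumed to face north here) increases the latitude.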
[0288] Each of said at least two deficiency or disease foliar symptoms or weeds detection systems 2 continuously obtains the detection information geolocalised by the coordinates lat.sub.target and lng.sub.target, by means of the communication system between the different deficiency or disease foliar symptoms or weeds detection systems 2, from all the other deficiency or disease foliar symptoms or weeds detection systems 2. Each of said at least two deficiency or disease foliar symptoms or weeds detection systems 2 thus continuously communicates its detection information geolocalised by the coordinates lat.sub.target and lng.sub.target, by means of the communication system between the different deficiency or disease foliar symptoms or weeds detection systems 2, to all the other deficiency or disease foliar symptoms or weeds detection systems 2. For example, the GeoJSON format, as described in the document RFC 7946, “The GeoJSON Format”, IETF, August 2016, makes it possible to transport said geolocalised detection information on said communication system.
[0289] As a variant, the ESRI Shapefile format, as described in the document ESRI Shapefile technical description, June 1998, makes it possible to transport said geolocalised detection information on said communication system.
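A geolocalised detection exchanged between systems can be encoded as a GeoJSON (RFC 7946) Point feature. The property names below (`probability`, `label`) are illustrative assumptions; GeoJSON itself only fixes the `Feature`/`geometry` structure and the [longitude, latitude] coordinate order.

```python
import json

def detection_to_geojson(lat_target, lng_target, probability, label):
    """Serialise one geolocalised detection as a GeoJSON Feature string."""
    feature = {
        "type": "Feature",
        "geometry": {
            "type": "Point",
            # RFC 7946: positions are ordered [longitude, latitude]
            "coordinates": [lng_target, lat_target],
        },
        # free-form properties; these names are illustrative
        "properties": {"probability": probability, "label": label},
    }
    return json.dumps(feature)
```

Each detection system could publish such features on the communication system and insert the features received from its neighbours into its local geographic database.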
[0290] As a variant, said latitude and longitude information can be calculated from the raw information from the inertial units of all of said at least two deficiency or disease foliar symptoms or weeds detection systems 2. Said raw information from the inertial units being exchanged by means of the communication system continuously connecting said at least two deficiency or disease foliar symptoms or weeds detection systems 2, the latitude estimation algorithm, executed on each of said at least two deficiency or disease foliar symptoms or weeds detection systems 2 can use all of the raw information. Thus, the latitude and longitude information is calculated relatively in the coordinate system of the traveled agricultural field. For example, an extended Kalman filter can be used in each of said at least two deficiency or disease foliar symptoms or weeds detection systems, by taking data from the inertial units of all of said at least two deficiency or disease foliar symptoms or weeds detection systems. In this variant, the calculation of the geolocalisation 8.4 of a detection of weed or leaf symptom of deficiency or disease is based on the same relationship with the following elements: [0291] Lat is the latitude of said deficiency or disease foliar symptoms or weeds detection system 2 calculated in the coordinate system of the travelled agricultural field from the data coming from the inertial units of all of said at least two deficiency or disease foliar symptoms or weeds detection systems; and [0292] lng is the longitude of said deficiency or disease foliar symptoms or weeds detection system 2 calculated in the coordinate system of the travelled agricultural field from the data from the inertial units of all of said at least two deficiency or disease foliar symptoms or weeds detection systems.
[0293] As a variant, one does not necessarily use a geolocalisation of the detections of weeds or foliar symptoms of deficiencies or diseases, but to a localisation of these in an instantaneous frame of reference of the agricultural machine. Such a localisation may be sufficient, insofar as the processing can also be ordered in this frame of reference. This could be the case in particular if the detection systems and the processing systems have known relative positions over time, for example are fixed with respect to each other over time. For a deficiency or disease foliar symptoms or weeds detection system, the coordinates (x.sub.target; y.sub.target) of the target relative to the center of the sensor can for example be determined as follows:
dist.sub.away=tan(sensor.sub.angle)·sensor.sub.height

X.sub.target=ratio.sub.pixel2meter·(X.sub.detect−w.sub.img/2)

Y.sub.target=dist.sub.away+ratio.sub.pixel2meter·(Y.sub.detect−h.sub.img/2)

[0294] Where: [0295] sensor.sub.angle is the angle between the vertical and the average viewing angle of the deficiency or disease foliar symptoms or weeds detection system 2; [0296] sensor.sub.height is the height above the ground of the deficiency or disease foliar symptoms or weeds detection system 2; [0297] ratio.sub.pixel2meter is the ratio between a pixel in the image and a meter on the ground; [0298] X.sub.detect is the x coordinate, in pixels, of the center of detection in the image; [0299] Y.sub.detect is the y coordinate, in pixels, of the center of detection in the image; [0300] w.sub.img is the width of the image in pixels; [0301] h.sub.img is the height of the image in pixels; [0302] X.sub.target is the relative longitudinal coordinate in meters of the target plant 4 detected in the image; [0303] Y.sub.target is the relative coordinate in meters, facing said deficiency or disease foliar symptoms or weeds detection system 2, of the target plant 4 detected in the image.
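The relations above translate directly into a short Python sketch; the function name is illustrative.

```python
import math

def relative_target(x_detect, y_detect, w_img, h_img,
                    ratio_pixel2meter, sensor_angle, sensor_height):
    """Target coordinates, in meters, in the machine's instantaneous frame.
    sensor_angle is in radians from the vertical; sensor_height in meters."""
    dist_away = math.tan(sensor_angle) * sensor_height   # ground offset of image center
    x_target = ratio_pixel2meter * (x_detect - w_img / 2)
    y_target = dist_away + ratio_pixel2meter * (y_detect - h_img / 2)
    return x_target, y_target
```

For a sensor tilted 45° from the vertical at 1.5 m height, a detection at the image center lands 1.5 m ahead of the point below the sensor.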
[0304] All of the information on said detections of weeds or leaf symptoms of deficiencies or diseases from all of said at least two deficiency or disease foliar symptoms or weeds detection systems 2 is stored in a geographic database local to each of said at least two deficiency or disease foliar symptoms or weeds detection systems.
[0305] Since each of said at least two deficiency or disease foliar symptoms or weeds detection systems 2 has its detection zone of the sought features (weeds or leaf symptoms of deficiencies or diseases) in the agricultural field 5 overlapping with those of the neighbouring deficiency or disease foliar symptoms or weeds detection systems 2, a lateral overlap of said weed or foliar symptom of deficiency or disease detection information is obtained.
[0306] Likewise, since each of said at least two deficiency or disease foliar symptoms or weeds detection systems 2 detects at each instant the sought features of weeds or leaf symptoms of deficiencies or diseases in the agricultural field 5 in the detection zone within reach of the optical objective of said deficiency or disease foliar symptoms or weeds detection system 2, a temporal overlap of said weed or leaf symptom of deficiency or disease detection information is obtained. By temporal overlap, reference is made to the fact that the detection zones at two successive distinct instants overlap if the frequency of determination is sufficiently high.
[0307] Thus, said weed or leaf symptom of deficiency or disease detection information stored in said geographic database local to each of said at least two deficiency or disease foliar symptoms or weeds detection systems 2 contains redundancies. The fusion operation 8.5 can be a kriging operation, as described in the book “Lognormal-de Wijsian Geostatistics for Ore Evaluation”, D. G. Krige, 1981, ISBN 978-0620030069, taking into account all of said geolocalised detection information of weeds or leaf symptoms of deficiencies or diseases, containing the detection probability information coming from the plurality of said at least two deficiency or disease foliar symptoms or weeds detection systems 2, as well as the lateral and temporal overlap information, thus consolidating the probabilities of detection of weeds or leaf symptoms of deficiencies or diseases. Thus, at a given detection point, the result is determined from the detection result obtained for this point by each of the detection systems. The result makes it possible to decide whether or not to treat this point. For example, the result is compared with a predetermined threshold and, if the comparison is positive, the application of the treatment is ordered.
[0308] The merger in question takes into account the quality of the detection. For example, when the merged detections include maps of the probability of the presence of a weed or a leaf symptom of deficiency or disease, the result of the fusion may include a map of the probability of the presence of the weed or leaf symptom of deficiency or disease obtained from these individual maps. Therefore, intrinsically, each individual map carries information about the quality of the detection, and the merged result takes this quality into account. For example, if, at a given location, a detection system determines a probability of the presence of a leaf symptom of a certain disease at 90%, and another detection system determines a probability of the presence of a leaf symptom of this same disease at 30%, the quality of detection of at least one of the two detection systems is poor, and the final result transcribes this quality of detection.
[0309] According to a variant, the distance of each detection from the optical axis is also taken into account during this fusion. Indeed, if a detection system, for which a given location is close to its optical axis, determines a probability of the presence of a leaf symptom of a certain disease at 30%, and another detection system, for which this same location is distant from its optical axis, determines a 90% probability of the presence of a leaf symptom of the same disease, a greater weight is applied during fusion to the detection system facing the studied location.
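The axis-distance weighting of this variant can be sketched as follows; the 1/(1+d) weighting function is an illustrative assumption, not specified by the text.

```python
def axis_weighted_fusion(detections):
    """Fuse detections weighted by distance from each optical axis.

    detections: list of (probability, distance_from_optical_axis_m)
    pairs, one per detection system. Detections made near a system's
    optical axis receive more weight; the 1/(1+d) decay is an
    illustrative choice.
    """
    weights = [1.0 / (1.0 + d) for _, d in detections]
    total = sum(weights)
    return sum(p * w for (p, _), w in zip(detections, weights)) / total

# A near-axis 30% detection outweighs a far-off-axis 90% detection.
fused = axis_weighted_fusion([(0.30, 0.0), (0.90, 2.0)])
```

With these assumed weights, the fused probability lands at 45%, closer to the near-axis system's estimate, reflecting its higher trust.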
[0310] As a variant, the fusion operation 8.5 is an operation taking into account all of the geolocalised information on the detection of weeds or leaf symptoms of deficiencies or diseases, containing the detection probability information from the plurality of said at least two deficiency or disease foliar symptoms or weeds detection systems 2, as well as the lateral and temporal overlap information, in order to calculate the consolidated probabilities of geolocalised detections of weeds or foliar symptoms of deficiencies or diseases, said consolidation operation taking into account the probabilities of each geolocalised detection of weeds or leaf symptoms of deficiencies or diseases.
[0311] In the variant of
[0312] Each of said at least two deficiency or disease foliar symptoms or weeds detection systems continuously calculates the instantaneous speed of movement by means of said localisation information obtained by means of said localisation system. The speed information is necessary in order to estimate the instant at which said at least one agricultural treatment device must be ordered and to anticipate the treatment time as a function of said agricultural treatment device.
[0313] Thus, depending on the nature and detected localisation of weeds or leaf symptoms of deficiencies or diseases, the nature and localisation of the treatment devices, and the speed of movement, the control device determines the processing device(s) to be actuated, and the temporal characteristics (instant, duration, etc.) of this actuation.
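A minimal sketch of this speed estimation from two localisation fixes, and of the resulting actuation timing, is shown below; the function and parameter names, and the simple travel-time-minus-latency model, are assumptions for illustration.

```python
import math

def instantaneous_speed(p1, t1, p2, t2):
    """Speed (m/s) from two planar localisation fixes (x, y in metres)
    taken at times t1 and t2 (seconds)."""
    dist = math.hypot(p2[0] - p1[0], p2[1] - p1[1])
    return dist / (t2 - t1)

def actuation_delay(distance_to_target_m, speed_m_s, device_latency_s):
    """Delay before ordering the treatment device so that the treatment
    lands on the target: travel time to the target minus the device's
    own response latency (an assumed, simplified model)."""
    return max(0.0, distance_to_target_m / speed_m_s - device_latency_s)

speed = instantaneous_speed((0.0, 0.0), 0.0, (2.0, 0.0), 1.0)
delay = actuation_delay(1.0, speed, 0.1)
```

The control device would issue the command after this delay so that the actuation coincides with the device passing over the detected target plant.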
[0314] With regard to the calculation of the command 8.6 to be sent to said at least one agricultural treatment device 3, each of said at least two deficiency or disease foliar symptoms or weeds detection systems 2 estimates at each instant and for each of said target plants 4 currently in range of said at least one treatment device 3, which of said at least one treatment device 3 is the most suitable for treating said target plant 4.
[0315] The control commands are transmitted to said at least one agricultural treatment device by means of communication between said at least two deficiency or disease foliar symptoms or weeds detection systems and said at least one agricultural treatment device.
[0316] With regard to controlling said at least one agricultural treatment device, all of the information from said detections of weeds or leaf symptoms of deficiencies or diseases is geolocalised, the agricultural treatment devices are also geolocalised, and said at least one agricultural treatment device is actuated at the exact moment when it is above the target plants.
[0317] For example, when said at least one agricultural treatment device 3 is a spreading nozzle, the command 8.7 to be sent to each of said at least one agricultural treatment device 3 is a pressure and flow control taking into account the presence of a target plant at the instant present in the spraying zone of said spreading nozzle.
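A hedged sketch of such a pressure and flow command for a spreading nozzle is given below; the numeric set-points and all names are illustrative assumptions, not values from the text.

```python
def nozzle_command(target_in_spray_zone, base_pressure_bar=3.0,
                   base_flow_l_min=1.2):
    """Pressure/flow command 8.7 for a spreading nozzle (sketch).

    Full pressure and flow are commanded when a target plant is
    currently in the spray zone, and zero otherwise. The set-point
    values are illustrative assumptions.
    """
    if target_in_spray_zone:
        return {"pressure_bar": base_pressure_bar,
                "flow_l_min": base_flow_l_min}
    return {"pressure_bar": 0.0, "flow_l_min": 0.0}
```

A real controller would modulate pressure and flow continuously rather than switching between two states, but the on/off form shows the dependence on the instantaneous presence of a target plant.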
[0318] As a variant, when said at least one agricultural treatment device 3 is a LASER, the command 8.7 to be sent to each of said at least one agricultural treatment device 3 is a command for transverse and longitudinal shifts, and for illumination power, taking into account the presence of a target plant at the instant present in the range of said LASER.
[0319] As a variant, when said at least one agricultural treatment device 3 is a high pressure water jet, the command 8.7 to be sent to each of said at least one agricultural treatment device 3 is a pressure and flow control taking into account the presence of a target plant at the instant present in the range of the high pressure water jet nozzle.
[0320] As a variant, when said at least one agricultural treatment device 3 is a mechanical hoeing weeding tool, the command 8.7 to be sent to each of said at least one agricultural treatment device 3 is an activation command taking into account the presence of a target plant at the instant present in the area of said mechanical hoeing weeding tool.
[0321] As a variant, when said at least one agricultural treatment device 3 is an electric weed control tool, the command 8.7 to be sent to each of said at least one agricultural treatment device 3 is an activation command taking into account the presence of a target plant at the instant present in the area of said electric weeding tool.
[0322] In the presentation above, the acquired image is first projected in a given frame of reference, then the detection of weed or foliar symptom of deficiency or disease is implemented for the projected image. Alternatively, one could start by producing an image of the probability of the presence of a weed or foliar symptom of deficiency or disease from the raw acquired image, then project it in the given frame of reference.
[0323] In the presentation above, the geolocalisation of each detection system is carried out independently, and the geolocalisation detections are merged so as to decide on the possible treatment. In variants, as described below, the geolocalisation of each detection system can be done collaboratively.
[0324] In a first variant, said attitude information can be calculated from the raw information from the inertial units of all of said at least two deficiency or disease foliar symptoms or weeds detection systems 2. Said raw information from inertial units being exchanged by means of the communication system continuously connecting said at least two deficiency or disease foliar symptoms or weeds detection systems 2, the attitude estimation algorithm executed on each of said at least two deficiency or disease foliar symptoms or weeds detection systems 2 can use all of the raw information. Thus, the estimates of roll, pitch and yaw are consolidated by a set of similar, consistent and covariant measures. For example, an extended Kalman filter can be used in each of said at least two deficiency or disease foliar symptoms or weeds detection systems, by taking data from the inertial units of all of said at least two deficiency or disease foliar symptoms or weeds detection systems. The document “Data Fusion Algorithms for Multiple Inertial Measurement Units”, Jared B. Bancroft and Gerard Lachapelle, Sensors (Basel), Jun. 29, 2011, 6771-6798 presents an alternative algorithm for merging raw data from a set of inertial units to determine attitude information.
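As a simplified stand-in for the multi-IMU fusion described in this first variant (a full extended Kalman filter is not reproduced here), synchronous raw samples from several inertial units can be averaged before attitude estimation; the data layout and function name are assumptions.

```python
def average_imu_samples(samples):
    """Average synchronous raw samples from several inertial units.

    samples: list of dicts, each with 'gyro' and 'accel' 3-axis tuples,
    one per detection system, exchanged over the communication system.
    Averaging co-located, consistent and covariant measurements reduces
    noise; in practice an extended Kalman filter, as mentioned in the
    text, would consume the raw data instead of this simple average.
    """
    n = len(samples)
    gyro = tuple(sum(s["gyro"][i] for s in samples) / n for i in range(3))
    accel = tuple(sum(s["accel"][i] for s in samples) / n for i in range(3))
    return {"gyro": gyro, "accel": accel}
```

Each detection system can run this consolidation locally, since all raw inertial data circulate on the communication system.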
[0325] In a second variant, said attitude information can be calculated from the raw information of the inertial units to which the geolocalisation data of all of said at least two deficiency or disease foliar symptoms or weeds detection systems 2 are added. Said raw information from the inertial units as well as the geolocalisation data being exchanged by means of the communication system connecting the said at least two deficiency or disease foliar symptoms or weeds detection systems 2, the attitude estimation algorithm can use all of the raw information. For example, an extended Kalman filter can be used in each of said at least two deficiency or disease foliar symptoms or weeds detection systems, taking the data from inertial units as well as the geolocalisation data from the set of said at least two deficiency or disease foliar symptoms or weeds detection systems 2. Furthermore, a method, as described in the document “Attitude estimation for accelerated vehicles using GPS/INS measurements”, Minh-Duc Hua, July 2010, Control Engineering Practice Volume 18, Issue 7, July 2010, pages 723-732, allows a fusion of information from a geolocalisation system and an inertial unit.
[0326] For example, said communication system between said at least two deficiency or disease foliar symptoms or weeds detection systems 2 and said at least one agricultural treatment device 3 is a wired 1 Gigabit per second Ethernet network, thus allowing each of said at least two deficiency or disease foliar symptoms or weeds detection systems 2 to communicate with the other deficiency or disease foliar symptoms or weeds detection systems 2 as well as with said at least one agricultural treatment device 3.
[0327] With regard to the mapping of the agricultural field 5 travelled by said agricultural machine, each of said at least two deficiency or disease foliar symptoms or weeds detection systems 2 locally builds a mapping of the specific features, i.e. the presence of weeds or leaf symptoms of deficiencies or diseases, using a local geographic database. The geolocalised detection information of the presence of weeds or leaf symptoms of deficiencies or diseases, detected by all of said at least two deficiency or disease foliar symptoms or weeds detection systems and exchanged by means of the communication system, is thus stored in each of said at least two deficiency or disease foliar symptoms or weeds detection systems 2.
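A minimal sketch of such a local geographic database, storing geolocalised detections per grid cell, is shown below; the cell size, the API, and the use of a plain in-memory dictionary are assumptions for illustration.

```python
class LocalGeoDatabase:
    """Minimal local geographic database of geolocalised detections.

    Detections from all detection systems (received over the
    communication system) are stored per grid cell. The 0.1 m cell
    size is an illustrative assumption.
    """

    def __init__(self, cell_size_m=0.1):
        self.cell = cell_size_m
        self.store = {}

    def _key(self, x, y):
        # Quantise metric coordinates to a grid-cell index.
        return (int(x // self.cell), int(y // self.cell))

    def add(self, x, y, system_id, probability):
        self.store.setdefault(self._key(x, y), []).append(
            (system_id, probability))

    def detections_at(self, x, y):
        return self.store.get(self._key(x, y), [])
```

Because every system stores the detections of every other system, each local database converges to the same map of the field.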
[0328] Thus, the content of each of said geographic databases locally stored in each of said at least two deficiency or disease foliar symptoms or weeds detection systems 2 represents the real state and the sanitary state of said travelled agricultural field 5, as measured by all of said at least two deficiency or disease foliar symptoms or weeds detection systems 2.
[0329] As a variant, the mapping information of the agricultural field 5 travelled by said agricultural machine, is transmitted by means of a communication system, and displayed on a control screen intended for the technician carrying out the processing of the agricultural field 5.
[0330] Preferably, the communication system used to transmit the mapping information of the agricultural field 5 to said control screen intended for the technician carrying out the treatment of the agricultural field 5, comprises a wired Gigabit Ethernet network.
[0331] Alternatively, the communication system used to transmit the mapping information of the agricultural field 5 to said control screen intended for the technician processing the agricultural field 5, is a wired CAN network (“Controller Area Network”).
[0332] The cartography of agricultural field 5 finds an advantageous use in order to produce statistics of sprays or treatments applied to said agricultural field 5. Said statistics also make it possible to measure the prevalence, the presence and the quantity of certain species of weeds, as well as their densities and stages. The prevalence, presence and density of leaf symptoms of deficiencies or diseases can also be calculated from the information contained in the mapping of the agricultural field 5.
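A sketch of how such statistics could be derived from the mapped field is given below; the prevalence definition (fraction of mapped cells above a threshold) and the 0.5 threshold are illustrative assumptions.

```python
def field_statistics(cells, threshold=0.5):
    """Statistics from a mapped agricultural field.

    cells: dict mapping grid-cell index -> fused detection probability
    of a weed or leaf symptom of deficiency or disease. Prevalence is
    computed here as the fraction of mapped cells whose probability
    exceeds the threshold (an assumed definition).
    """
    positives = sum(1 for p in cells.values() if p >= threshold)
    n = len(cells)
    prevalence = positives / n if n else 0.0
    return {"cells": n, "positives": positives, "prevalence": prevalence}
```

Per-species densities and stages would require the map to store the detected species and growth stage alongside each probability, as the text suggests.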
[0333] In the example presented, each detection system communicates with neighboring detection systems, for decision making for collaborative processing. As a variant, it is possible to provide a central processor suitable for communicating, via the communication system, with the detection systems, making a decision, and communicating the processing instructions to the processing devices 3 via the communication system.
[0334] According to the invention, it is sufficient for a single deficiency or disease foliar symptoms or weeds detection system 2 to make a collaborative decision using information relating to other deficiency or disease foliar symptoms or weeds detection systems.
[0335] The methods which are described can be computerized methods. They can then be defined in computer programs, which can be executed by one or more processors of programmable machines.
REFERENCES
[0336] agricultural machine 1
[0337] deficiency or disease leaf symptoms or weeds detection systems 2
[0338] detection systems 2.1 and 2.2
[0339] agricultural treatment device 3
[0340] target plant 4
[0341] agricultural field 5
[0342] photo 6.1, 6.2
[0343] acquired images 7.1 and 7.2 from images 6.1 and 6.2
[0344] capture 8.1
[0345] ortho-projection 8.2
[0346] detections 8.3
[0347] geolocalisation calculation 8.4
[0348] merger operation 8.5
[0349] command 8.6
[0350] command 8.7
[0351] optical lens 9
[0352] capture device 10
[0353] first sensor 11
[0354] second sensor 12
[0355] third sensor 13
[0356] diffracted image 14, 14′
[0357] hyperspectral image 15
[0358] building module 16
[0359] non-diffracted image 17′
[0360] infrared image 18′
[0361] neural network 20
[0362] characterization module 21
[0363] isolate 25
[0364] extract 26
[0365] first converging lens 30
[0366] opening 31
[0367] collimator 32
[0368] diffraction grating 33
[0369] second converging lens 34
[0370] capture area 35
[0371] input layer 40
[0372] output layer 41
[0373] capture device 202
[0374] Hyperspectral scene 203
[0375] sensor, or acquisition system 204
[0376] two-dimensional compressed image 211
[0377] neural network 212
[0378] compressed image 213
[0379] neural network 214
[0380] input layer 230
[0381] output layer 231
[0382] sensing surface 232
[0383] first converging lens 241
[0384] mask 242
[0385] collimator 243
[0386] prism 244
[0387] second converging lens 245
[0388] capture surface 246
[0389] entry layer 250
[0390] encoder 251
[0391] convolutional layers or fully connected layers 252
[0392] decoder 253
[0393] acquisition device, or sensor, 301
[0394] capture device 302
[0395] focal plane 303
[0396] standard image 312
[0397] converging lens 331
[0398] output layer 350