MOBILE ANALYSIS AND PROCESSING DEVICE
20210176978 · 2021-06-17
CPC classification
B64U2101/30 (PERFORMING OPERATIONS; TRANSPORTING)
B64U2101/40 (PERFORMING OPERATIONS; TRANSPORTING)
Abstract
The invention relates to a mobile analysis and processing device (14) for agriculture, for tilling the soil and/or manipulating flora and fauna. The device (14) comprises at least one sensor (62), a tool unit (52) having at least one motor-driven tool (58), an actuator (48) for moving at least the tool (58) of the tool unit (52), a motor (50) for driving the tool (58) and/or the actuator (48), a database (78), a first communication unit (60) having an interface (70a), and a first computer (46) for controlling the sensor (62), the tool unit (52) and the actuator (48) by means of generated control commands. The data captured by the sensor (62) are continually compared with the data stored in the database (78) in order to generate corresponding control signals for the actuator (48), the tool unit (52) and/or the motor (50). The device (14) thus provides mobility and flexibility: it forms a unit that processes all data in real time, generates control signals for the actuator (48), the tool unit (52) and/or the motor (50), and can operate immediately in accordance with those control signals. It can therefore be combined, for example, with different carriers (12) that move the device (14) over the field as required.
Claims
1-19. (canceled)
20. Mobile analysis and processing device (14) for agriculture for tilling the soil and/or for manipulating flora and fauna, comprising: at least one sensor (62), a tool unit (52) with at least one motor-driven tool (58), an actuator (48) for moving at least the tool (58) of the tool unit (52), a motor (50) for driving the tool unit (52) and/or the actuator (48), a database (78), a first communication unit (60) with an interface (70a) and a first computer (46) for controlling the sensor (62), the tool unit (52) and/or the actuator (48) on the basis of generated control commands, wherein the data detected by the sensor (62) is continuously compared with the data stored in the database (78) in order to generate corresponding control signals for the actuator (48), the tool unit (52) and/or the motor (50).
21. The analysis and processing device according to claim 20, characterized in that comparison of the data determined by the sensor (62) with data in the database (78) is performed in real time, in particular with verification and classification of the data determined by the sensor (62).
22. The analysis and processing device according to claim 20, characterized in that the sensor is a visual detection unit (62) with a camera (64).
23. The analysis and processing device according to claim 20, characterized in that means (82a, 82b) are provided for connecting the device, if necessary, to a carrier (12) for moving the device (14).
24. The analysis and processing device according to claim 20, characterized by a two-part design in which the sensor (62), the tool unit (52), the motor (50) for driving the tool (58), the tool unit (52) and/or the actuator (48), the actuator (48), the first computer (46) and the first communication unit (60) with interface (70a) are provided in a first unit (14a), and the database (78), a second computer (76) and a second communication unit (74) with interface (36b, 70b) are provided in a second unit (14b), wherein the first unit (14a) and the second unit (14b) can be connected to one another via the interfaces (70a, 70b) for data exchange.
25. The analysis and processing device according to claim 24, characterized in that the first unit (14a) comprises a first housing (40) and the second unit (14b) comprises a second housing (42).
26. The analysis and processing device according to claim 25, characterized in that the first and second housings (40, 42) are detachably connected to each other via a plug-in connection (44, 80).
27. The analysis and processing device according to claim 25, characterized in that the first and second housings (40, 42) have receptacles (14c) as means for connection to the carrier (12) as required, which receptacles are associated with corresponding holding means (82a, 82b) of the carrier (12), by means of which the device (14) can be gripped and moved by the carrier (12).
28. The analysis and processing device according to claim 25, characterized in that said first and second housings (40, 42) comprise, as means for connection to said carrier (12) as required, coupling means (26b, 28, 36b, 38) associated with corresponding coupling means (26a, 28, 36a, 38) of said carrier (12), by means of which said device (14) can be connected to and moved by said carrier (12).
29. The analysis and processing device according to claim 25, characterized in that the tool unit (52) comprises at least one feed unit (54) and a rotation unit (56) cooperating with the motor (50).
30. The analysis and processing device according to claim 29, characterized in that at a distal end thereof, the rotation unit (56) comprises at least the tool (58), in particular a miller (58) or a blade unit.
31. The analysis and processing device according to claim 20, characterized by a voltage connection (26b) for an external voltage supply.
32. The analysis and processing device according to claim 31, characterized in that the voltage connection (26b) is provided on the first unit (14a) and is used for voltage supply both to the first unit (14a) and the second unit (14b).
33. The analysis and processing device according to claim 23, characterized by an additional communication interface (36b) for a carrier (12).
34. The analysis and processing device according to claim 33, characterized in that the additional communication interface (36b) is arranged in the second unit (14b).
35. Method for real-time control of the tilling of the soil and/or of the manipulation of flora and fauna by the device according to claim 20, comprising the following steps: a. continuous recording over time of data-defined voxels and/or pixels and/or images by the sensor; b. transmission of the recorded data to the database (78); c. storage of the recorded data in the database (78); d. qualitative data comparison of the recorded data with the data stored in the database (78), and at the same time performing segmentation, data reduction, and/or verification of the recorded data with the computer (46); e. evaluation of the compared recorded data with existing defined data sets in the database (78) by a classifier (68) in connection with the computer (46, 76); f. processing and conversion of the evaluation by the computer (46, 76) into control data and control-related data for the motor, the actuator, the tool unit and/or an associated carrier.
36. The method according to claim 35, characterized in that, once the control data and/or control-related data is available, the motor (50), the actuator (48), the tool unit (52) and/or an associated carrier (12) are started up for tilling the soil and/or for manipulating flora and fauna.
37. The method according to claim 35, characterized in that the evaluation is performed in one computer (46, 76) cooperating with the classifier (68), namely in the second computer (76), and the processing and conversion of the evaluation into control data and/or control-related data is performed in another computer (46, 76), namely in the first computer (46), for which purpose the evaluation is transmitted from the one computer (46, 76) to the other computer (46, 76).
38. The method according to claim 35, characterized in that the storage, the qualitative comparison of the recorded data with data stored in the database (78) and/or the evaluation by the classifier (68) are supported by artificial intelligence.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0045] In the drawings,
DESCRIPTION OF THE INVENTION
[0061] According to the first embodiment of the invention, the aerial drone 12 comprises an energy source in the form of batteries 24, which supplies energy to the drive 16 as well as to the further components of the aerial drone 12 and the mobile device 14. For this purpose, a voltage interface 26a is provided on the aerial drone 12 and a corresponding voltage interface 26b is provided on the mobile device 14, the two interfaces being connected to one another via a detachable plug connection 28. In addition, a communication unit 30 with an antenna 32 and a GPS unit 34 is provided. The GPS unit 34 continuously determines the location of the aerial drone 12 and transmits the location data to the mobile device 14, for allocation to the data acquired by the mobile device 14, and to a remote central processing unit (not shown here). Telemetry can be performed with the aid of the GPS unit 34, the communication unit 30, and the mobile device 14. In addition, a control unit 12b is provided which controls the drive 16.
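The allocation of the continuously determined location data to the acquired sensor data can be pictured as a simple geotagging step. The following is a minimal sketch under stated assumptions; the record layout and the names (GeoTaggedRecord, tag_record) are hypothetical and not taken from the patent:

```python
# Minimal sketch (hypothetical names): the location reported by the GPS
# unit 34 is allocated to a block of data acquired by the mobile device 14.
import time
from dataclasses import dataclass

@dataclass
class GeoTaggedRecord:
    timestamp: float   # time of acquisition
    latitude: float    # from the GPS unit 34
    longitude: float   # from the GPS unit 34
    payload: bytes     # raw sensor data of the mobile device 14

def tag_record(payload: bytes, latitude: float, longitude: float) -> GeoTaggedRecord:
    """Attach the drone's current position to a block of sensor data."""
    return GeoTaggedRecord(time.time(), latitude, longitude, payload)
```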
[0062] In addition to the antenna 32, the communication unit 30 of the aerial drone 12 comprises a further interface 36a which is assigned to an associated interface 36b of the mobile device 14, which interfaces are connected to one another for data exchange by a detachable plug connection 38.
[0063] The mobile device 14 comprises two units 14a, 14b, namely a first unit 14a having a first housing 40 and a second unit 14b having a second housing 42. The first housing 40 and the second housing 42 are releasably connected to each other via a plug connection 44 to form a unit constituting the mobile device 14. There is a set of different first units 14a on one side and a set of different second units 14b on the other side, which units can be individually configured and adapted to the respective needs by simply connecting them together.
[0064] In the first housing 40, there are a first computer 46, an actuator in the form of a motor-driven movable arm 48, a motor 50 cooperating with the arm 48, and a tool unit 52 arranged on the arm 48 and comprising a feed unit 54 and a rotation unit 56. A miller 58 is provided as a tool on the distal end of the rotation unit 56. The motor 50 drives the arm 48 as well as the feed unit 54, the rotation unit 56 and thus also the miller 58. The arm 48 may be of multi-part design and have various joints, which are not shown here since such motor-driven kinematic units are known. The arm 48 is used to move the tool unit 52 relative to the aerial drone 12 to its area of use, so that the tool unit 52 with the feed unit 54 and the rotation unit 56 can use the miller 58 to process the plants, for example to remove weeds, and/or to till the soil.
[0065] Furthermore, a communication unit 60 and a visual detection unit 62 are arranged in the first unit 14a. The visual detection unit 62 comprises a camera 64 that captures images, a segmentation and data reduction device 66, and a classifier 68 that classifies a plurality of pixel fields composed of pixels on the basis of an intermediate image or intermediate data generated by the segmentation and data reduction device 66, as will be described in more detail below. The visual detection unit 62 is connected to the communication unit 60.
[0066] The first unit 14a has an interface 70a which is associated with an interface 70b of the second unit 14b. Communication link 72 is used to connect the communication unit 60 via interface 70a to interface 70b, and via this to a communication unit 74 in the second unit 14b. Via interface 36b, the communication unit 74 of the second unit 14b is connected via the plug-type connection 38 to interface 36a and to the communication unit 30 of the aerial drone 12.
[0067] A second computer 76 and a database 78 are furthermore provided in the second unit 14b.
[0068] Illustrated in
[0070] The mobile device 14 can also be equipped with several different tool units 52 which are provided with a common arm 48 and, for example, a tool turret that will bring the required tool unit 52 into the activation position. However, it is also conceivable for the different tool units to each have their own actuator.
[0071] The flowchart of
[0072] In a first step 84, the carrier system 10 is first used to determine the measures required on the associated agricultural land. For this purpose, the carrier system 10 is for example brought to an agricultural area to be tilled, such as an agricultural field, or flown there directly from a central location. There the aerial drone 12 with the mobile device 14 then takes off and flies over the agricultural field. A stationary central computing unit supplies the carrier system 10 with the necessary data about the agricultural field to be surveyed. The central computing unit can also be a smartphone in this case. The visual detection unit 62 with the camera 64 of the mobile device 14 is used to capture images of the agricultural field. The images are evaluated and, after a comparison with data in the database 78, the necessary measures for this agricultural field are finally determined.
[0073] In a next step 86, based on the determined measures for the agricultural field or for partial areas thereof, the mobile device 14 suitable for the necessary measure is compiled from a set of first units 14a and a set of different second units 14b, which two units 14a, 14b are then connected to each other.
[0074] In a subsequent step 88, the left gripper arm 82a and the right gripper arm 82b of the aerial drone 12 are used to grip the mobile device 14 at the sides and move it upwards towards the aerial drone 12 into a receptacle 12a of the aerial drone 12. In doing so, the voltage interfaces 26a, 26b are connected to each other via the plug connection 28 and the interfaces 36a, 36b are connected to each other via the plug connection 38. This supplies the mobile device 14 with voltage from the battery 24 of the aerial drone 12, and enables data exchange via the antenna 32 of the communication unit 30 of the aerial drone 12 both with the communication units 60 and 74 of the mobile device 14 and with a central processing unit. As stated above, the central computing unit, which is independent of the carrier system 10, can also be a smartphone.
[0075] In a next step 90, the determined measures are performed using the carrier system 10 in the agricultural field. For example, the aerial drone 12 flies to the area of the agricultural field to be tilled. The arm 48 carrying the tool unit 52 moves to the weed to be removed. The feed unit 54 displaces the miller 58 towards the weed in such a way that the weed is milled away upon activation of the rotation unit 56.
[0076] In a fifth step 92, the aerial drone 12 then flies back, and exchanges the mobile device 14 for another mobile device 14 optimized for a different action, for example a pesticide or fertilizer applicator.
[0077] Alternatively, steps 86 and 88 may also be omitted if the aerial drone 12 is already ready for the action to be performed.
[0078] With reference to
[0079] In a first step 94, the continuous recording of data in the form of defined voxels and/or pixels and/or images is performed by the visual detection unit 62 of the mobile device 14. The voxels, pixels and images constitute recorded data which is continuously transmitted to the database 78 (second step 96).
[0080] In a third step 98, the recorded data is stored.
[0081] In a fourth step 100, a qualitative data comparison of the recorded data with the data stored in the database 78 is performed. Here, a segmentation and data reduction of the recorded data is carried out by the segmentation and data reduction device 66. In particular, verification of the recorded data may also be performed by the second computer 76.
[0082] In a fifth step 102, evaluation is performed by the classifier 68 in conjunction with the second computer 76, supported by artificial intelligence, as will be detailed below.
[0083] In a sixth step 104, the processing and conversion of the evaluation by the first computer 46 into control data for the motor 50, the arm 48, the tool unit 52 and the aerial drone 12 is performed.
[0084] Finally, in a seventh step 106, the motor 50, the arm 48, and the tool unit 52 are started up for tilling the soil or for manipulating flora and fauna.
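Taken together, steps 94 to 106 form one continuously repeated control cycle. The sketch below outlines that cycle; all interface names (camera, database, segmenter, classifier, converter, actuators) are hypothetical placeholders, and in the actual device this work is distributed between the first computer 46 and the second computer 76:

```python
# Minimal sketch of the control cycle of steps 94-106. All interface
# names are hypothetical placeholders, not taken from the patent.
def control_cycle(camera, database, segmenter, classifier, converter, actuators):
    frame = camera.capture()                  # step 94: continuous recording
    database.store(frame)                     # steps 96 and 98: transmit and store
    reduced = segmenter.reduce(frame)         # step 100: data comparison with
                                              # segmentation and data reduction
    evaluation = classifier.evaluate(reduced) # step 102: evaluation by classifier 68
    commands = converter.to_control_data(evaluation)  # step 104: conversion
    for command in commands:                  # step 106: start up motor 50,
        actuators.execute(command)            # arm 48 and tool unit 52
```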
[0085] Where mention is made in this application of artificial intelligence, this relates, among other things, to the use of a classical convolutional neural network (CNN) consisting of one or more convolutional layers followed by a pooling layer. In principle, this sequence of convolutional and pooling layers can be repeated any number of times. Usually, the input is a two- or three-dimensional matrix, e.g. the pixels of a grayscale or color image. The neurons are arranged accordingly in the convolutional layer.
[0086] The activity of each neuron is calculated via a discrete convolution (convolutional layer). This involves intuitively moving a comparatively small convolution matrix (filter kernel) step by step over the input. The input of a neuron in the convolutional layer is calculated as the inner product of the filter kernel with the respective presently underlying image section. Accordingly, adjacent neurons in the convolutional layer will react to overlapping areas.
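A minimal NumPy sketch of this computation, assuming a single-channel input, stride 1 and no padding (as is usual in CNN practice, the kernel is applied without flipping):

```python
import numpy as np

def conv2d_valid(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Move the filter kernel step by step over the input; each output
    value is the inner product of the kernel with the image section
    currently underneath it."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.empty((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out
```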
[0087] A neuron in this layer responds only to stimuli in a local environment of the previous layer. This follows the biological model of the receptive field. In addition, the weights for all neurons of a convolutional layer are identical (shared weights). This results in each neuron in the first convolutional layer encoding the intensity to which an edge is present in a certain local area of the input, for example. Edge detection as the first step of image recognition has high biological plausibility. It immediately follows from the shared weights that translation invariance is an inherent property of CNNs.
[0088] The input of each neuron, determined by discrete convolution, is now transformed by an activation function, for CNNs usually the Rectified Linear Unit, or ReLU (f(x) = max(0, x)), into the output that is supposed to model the relative firing frequency of a real neuron. Since backpropagation requires the computation of gradients, a differentiable approximation of ReLU is used in practice: f(x) = ln(1 + e^x). As with the visual cortex, in deeper convolutional layers there is an increase both in the size of the receptive fields and in the complexity of the recognized features.
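Both activation functions can be sketched in a few lines of NumPy; np.logaddexp is used here because it evaluates ln(1 + e^x) in a numerically stable way (the function names are illustrative):

```python
import numpy as np

def relu(x):
    """f(x) = max(0, x)"""
    return np.maximum(0.0, x)

def softplus(x):
    """Differentiable approximation f(x) = ln(1 + e^x), computed stably."""
    return np.logaddexp(0.0, x)
```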
[0089] In the subsequent step, pooling, superfluous information is discarded. For object recognition in images, for example, the exact position of an edge in the image is of negligible interest; the approximate localization of a feature is sufficient. There are different types of pooling. By far the most common is max-pooling, in which, of each 2×2 square of neurons in the convolutional layer, only the activity of the most active (hence “max”) neuron is retained for further computational steps; the activity of the remaining neurons is discarded. Despite the data reduction (75% in the example), the performance of the network is usually not reduced by pooling.
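A minimal NumPy sketch of this 2×2 max-pooling, assuming a single-channel activation map (odd borders are cropped for simplicity):

```python
import numpy as np

def max_pool_2x2(activations: np.ndarray) -> np.ndarray:
    """Of each non-overlapping 2x2 square, keep only the most active
    neuron and discard the other three (75% data reduction)."""
    h, w = activations.shape
    h -= h % 2
    w -= w % 2
    blocks = activations[:h, :w].reshape(h // 2, 2, w // 2, 2)
    return blocks.max(axis=(1, 3))
```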
[0090] The use of the convolutional neural network and the segmentation and data reduction device 66 is explained in more detail below with reference to
[0091] There are various approaches for the classification of all objects in an image by the classifier 68. Many approaches start by first finding the individual objects in the image and then classifying them. However, this is not always possible. Let us look at the classification of plants 110 in a field as an example. An example image 108 is shown in
[0093] The image 108 shown in
[0094] However, since a single pixel does not contain sufficient information to determine which class it belongs to, a surrounding area must be used for the classification. This area can then be classified using a convolutional neural network (CNN) as described above. The network can be of a sequence as illustrated in
[0095] The input image 108 of
[0096] Subsequently, a new image section, usually one shifted by one pixel, is selected and classified again using the CNN. As a result of this procedure, the calculations required by the convolutional neural network must be repeated for the number of pixels to be classified. This is time-consuming. Image 108 of
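The following minimal sketch illustrates this naive pixel-by-pixel procedure; the classify_patch callback (standing in for one full CNN pass) and the window size of 17 pixels are assumptions for illustration only:

```python
import numpy as np

def classify_per_pixel(image: np.ndarray, classify_patch, window: int = 17):
    """Classify every pixel from the image section centred on it. The CNN
    work (classify_patch) is repeated once per pixel, which is what makes
    this approach so slow."""
    r = window // 2
    padded = np.pad(image, ((r, r), (r, r), (0, 0)), mode="edge")
    h, w = image.shape[:2]
    labels = np.empty((h, w), dtype=np.int64)
    for y in range(h):
        for x in range(w):
            patch = padded[y:y + window, x:x + window]
            labels[y, x] = classify_patch(patch)  # one full CNN pass per pixel
    return labels
```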
[0097] By means of simple segmentation and data reduction by the segmentation and data reduction unit 66, it can be determined whether a pixel represents part of a plant 110 or part of the background 118. In terms of computation, this segmentation is not as complex as a CNN and is therefore faster. The segmentation and data reduction by the segmentation and data reduction unit 66 is performed in the same way as in
[0098] In a first step 120, each image of multiple pixels transmitted to the database 78 is converted to the RGB (red, green, and blue) color model.
[0099] In a next step 122, each pixel of the transmitted image is converted to an HSV (hue, saturation, value) color model based on the RGB color model.
[0100] In a next step 124, this HSV color model is evaluated.
[0101] Each pixel based on the HSV color model is evaluated with respect to color saturation against a threshold value: if the color saturation value exceeds the threshold value, the pixel is assigned the binary value 1; if it falls below the threshold value, the pixel is assigned the binary value 0.
[0102] Parallel thereto, based on the HSV color model, each pixel is evaluated with respect to the hue angle based on a predetermined range, wherein, if the hue angle is within the predetermined range, the pixel is assigned the binary value 1, and if the hue angle is outside the range, the pixel is assigned the binary value 0.
[0103] In a next step 126, the binary hue angle and color saturation information is used to generate an intermediate image that contains significantly less data than the image 108 generated by the camera.
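A minimal sketch of steps 120 to 126, using matplotlib's rgb_to_hsv for the color-model conversion; the saturation threshold and the hue range are illustrative values, as the description does not specify them:

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv

def intermediate_image(rgb: np.ndarray,
                       sat_threshold: float = 0.3,
                       hue_range: tuple = (60 / 360, 180 / 360)) -> np.ndarray:
    """Steps 122-126: convert each RGB pixel to HSV, binarize the
    saturation against a threshold and the hue angle against a
    predetermined range, and combine both into a binary intermediate
    image (1 = plant 110, 0 = background 118) containing far less data
    than the camera image."""
    hsv = rgb_to_hsv(rgb.astype(np.float64) / 255.0)  # H, S, V each in [0, 1]
    hue, sat = hsv[..., 0], hsv[..., 1]
    sat_ok = sat > sat_threshold                      # binary saturation value
    hue_ok = (hue >= hue_range[0]) & (hue <= hue_range[1])  # binary hue value
    return (sat_ok & hue_ok).astype(np.uint8)
```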
[0104] The segmentation illustrated in
[0105] This results in the first optimization: before the entire image 108 is decomposed into two million images, the segmentation according to
[0106] As a result of the segmentation, the background 118 is set to the value 0. The image elements that the CNN looks at are now themselves segmented. Normally, in a convolution layer, the feature calculation would be applied to each pixel of the image element. However, this results in three cases 128, 130, 132 for the calculation, which are shown in
[0107] The Red case 128 shows a feature calculation in which the feature is completely on the background 118. Here, each element is multiplied by 0, which results in the entire calculation being 0, or the bias value. The result of this calculation is therefore already known before the calculation. Even if the background 118 were non-zero, i.e. contained soil, this calculation would not include any information about the plant 110, so the result may simply be a constant fictitious value.
[0108] In the Yellow case 130, the center of the feature is not on a plant 110, but part of the feature overlaps one, so part of the calculation is again a multiplication by zero. In this case, the plant 110 is distorted at the margin and thus made larger in the feature map.
[0109] In the Blue case 132, at least the center pixel of the feature is on a plant.
[0110] After considering these three cases 128, 130, 132, only the Yellow and Blue cases 130 and 132 need to be calculated, i.e. the cases 130, 132 in which the feature has at least one non-zero input value. The results of all other feature computations are known before the computation; they are zero and/or only the bias value. The coordinates at which the Blue case 132 occurs are known: they are the coordinates stored during the segmentation. For the Yellow case 130, it must again be computed whether this case has occurred, which requires a check of each plant pixel found in the segmentation. Since such a check is too much effort and the Yellow case 130 only occurs in the border area of a plant 110, this case is ignored.
[0111] Therefore, the calculation can be optimized in that the feature calculation and all other elements of the CNN are only applied to the plant pixels found.
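A minimal sketch of this optimization, assuming the binary mask produced by the segmentation above; the feature is evaluated only at the stored plant coordinates (Blue case 132), everything else is filled with the pre-known constant (Red case 128), and the Yellow case 130 is ignored as described:

```python
import numpy as np

def conv_at_plant_pixels(image: np.ndarray, kernel: np.ndarray,
                         mask: np.ndarray, bias: float = 0.0) -> np.ndarray:
    """Apply the feature calculation only where the segmentation mask is 1
    (Blue case 132); all other outputs are known in advance and filled
    with the constant bias value (Red case 128)."""
    kh, kw = kernel.shape
    ry, rx = kh // 2, kw // 2
    out = np.full(image.shape, bias, dtype=np.float64)  # pre-known results
    for y, x in zip(*np.nonzero(mask)):                 # stored coordinates
        if ry <= y < image.shape[0] - ry and rx <= x < image.shape[1] - rx:
            section = image[y - ry:y + ry + 1, x - rx:x + rx + 1]
            out[y, x] = np.sum(section * kernel) + bias  # Blue case 132
    return out
```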
[0113] Closer inspection shows that there is a significant overlap in the region under consideration (Red/Purple areas 144, 142). This in turn means that both image elements 136, 138 contain mostly the same values. If the CNN now calculates the feature in the convolution layer for both image elements, largely the same values are obtained in the feature calculation.
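One way to exploit this overlap, sketched below under the assumption of a single feature and using SciPy's correlate2d (which performs the same valid-mode inner-product pass as the convolution sketch above), is to compute the feature map once for the whole image and let overlapping image elements read their values out of the shared map instead of recomputing them:

```python
import numpy as np
from scipy.signal import correlate2d

def shared_feature_lookup(image: np.ndarray, kernel: np.ndarray, coords):
    """Compute the feature map once for the entire image; image elements
    that overlap (such as 136 and 138, shifted by one pixel) then reuse
    the shared values instead of recomputing them."""
    feature_map = correlate2d(image, kernel, mode="valid")
    return [feature_map[y, x] for (y, x) in coords]
```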
[0114] In
[0115] However, these optimizations also cause changes in the classification result. The pooling layers have the greatest influence here. With each pooling, information is removed from the network. Because the image elements are no longer considered individually, the pooling loses its local reference. This problem is illustrated in
[0116] In
[0117] Another difference results from the missing edge regions of the plants. Since the features are no longer applied to every element that has any overlap with the plant, computational differences arise here. This may also change the classification result compared to the conventional calculation.
[0118] Omitting the feature calculation outside the plant can likewise lead to different values, because the result is taken to be zero there, whereas in reality it would be the bias value.
[0119] While these three factors do affect the results, the CNN nevertheless proves to be very robust, and the results still achieve very high accuracy.
[0120] The next step would be to train the network directly with these modifications, so that the network can adapt even better to its new calculation and thus compensate for any errors directly in the calculation.
[0121] The segmentation and data reduction device provides the pixels relating to the weed 154 with position coordinates.
LIST OF REFERENCE SIGNS
10 carrier system
12 aerial drone
12a receptacle, receiving space of the aerial drone 12
12b aerial drone control device
14 mobile device
14a first unit
14b second unit
14c gripper arm receptacle on mobile device 14
16 drive
18 electric motor
20 propeller
22 feet
24 battery
26a voltage interface on aerial drone 12
26b voltage interface on mobile device 14
28 plug connection
30 communication unit
32 antenna
34 GPS unit
36a interface on aerial drone 12
36b interface on mobile device 14
38 plug connection
40 first housing of first unit 14a
42 second housing of second unit 14b
44 plug connection
46 first computer
48 arm serving as actuator
50 motor
52 tool unit
54 feed unit
56 rotation unit
58 miller
60 first communication unit of first unit 14a
62 visual detection unit
64 camera
66 segmentation and data reduction unit
68 classifier
70a interface of first unit 14a
70b interface of second unit 14b
72 communication link
74 second communication unit of second unit 14b
76 second computer
78 database
80 plug connection
82a left gripper arm
82b right gripper arm
84 first step: determine necessary measures
86 second step: select mobile device 14 from available mobile devices 14
88 third step: connect device to aerial drone 12
90 fourth step: carry out determined measures
92 fifth step: exchange mobile device 14 for another mobile device 14 and carry out another measure
94 first step: continuous recording
96 second step: transmitting the data
98 third step: storing the data
100 fourth step: data comparison
102 fifth step: evaluation by the classifier 68
104 sixth step: conversion into control data
106 seventh step: starting up the components
108 example image, input image
110 plant
116 feeding into a dense layer
118 background
120 first step: convert into an RGB color model
122 second step: convert into an HSV color model
124 third step: evaluate HSV image
126 fourth step: create an intermediate image
128 first case, red
130 second case, yellow
132 third case, blue
134 feature
136 left plant pixel
138 right plant pixel
140 blue area
142 purple area
144 red area
146 orange area
148 feature, green
150 center black box
152 carrot plant
154 weeds
156 left image element
158 red frame
160 right image element