METHOD FOR DETERMINING, IN PARTS, THE VOLUME OF A BULK MATERIAL FED ONTO A CONVEYOR BELT

20230075334 · 2023-03-09

    Abstract

    A method for determining, in parts, the volume of a bulk material (2) fed onto a conveyor belt (1) captures a depth image (6) of the bulk material (2), in parts, in a capturing region (4) by means of a depth sensor (3). So that bulk material can be reliably classified at conveying speeds of more than 2 m/s even in the case of overlaps without structurally complicated measures, the captured two-dimensional depth image (6) is fed to a convolutional neural network trained in advance, which has at least three convolutional layers lying one behind the other and a downstream volume classifier (20), the output value (21) of which is output as the bulk material volume present in the capturing region (4).

    Claims

    1. A method for determining, in parts, the volume of a bulk material fed onto a conveyor belt, said method comprising: capturing a depth image of the bulk material in parts in a capturing region with a depth sensor; feeding the captured two-dimensional depth image to a pre-trained convolutional neural network that has at least three successive convolution layers and a downstream volume classifier; and outputting an output value of the pre-trained convolutional neural network as the volume of the bulk material present in the capturing region.

    2. The method according to claim 1, wherein the depth image comprises pixels each having a respective value indicating a depth, and the method further comprises removing from the depth image the values of the pixels the depth of which corresponds to, or exceeds, a previously detected distance between the depth sensor and a background for the pixel.

    3. The method according to claim 1, wherein a quantity classifier is arranged downstream of the convolution layers for each class of a particle size distribution, and the method further comprises outputting output values of said quantity classifiers as a particle size distribution.

    4. The method according to claim 1, wherein a cubicity classifier is arranged downstream of the convolution layers, and the method further comprises outputting an output value thereof as cubicity.

    5. A training method for training a neural network for the method according to claim 1, said training method comprising: first acquiring example depth images each of a respective example grain with a respective known volume and storing each of said example depth images together with the respective known volume; combining a plurality of said example depth images randomly so as to form a training depth image, to which a sum of the known volumes of the combined example depth images is assigned as an assigned bulk material volume; feeding the training depth image to the neural network on an input side and feeding the assigned bulk material volume to the neural network on an output side; and adapting weights of individual network nodes of the neural network in a learning step.

    6. The training method according to claim 5, wherein the training depth image is formed by assembling the example depth images with random alignment.

    7. The training method according to claim 5, wherein two of the example depth images are combined with partial overlaps in an overlap region so as to form the training depth image, and wherein the training depth image in the overlap region has a depth value that corresponds to a lowest depth of both of the combined example depth images.

    8. The training method according to claim 6, wherein two of the example depth images are combined with partial overlaps in an overlap region so as to form the training depth image, and wherein the training depth image in the overlap region has a depth value that corresponds to a lowest depth of both of the combined example depth images.

    9. The method according to claim 2, wherein a quantity classifier is arranged downstream of the convolution layers for each class of a particle size distribution, and the method further comprises outputting output values of said quantity classifiers as a particle size distribution.

    10. The method according to claim 2, wherein a cubicity classifier is arranged downstream of the convolution layers, and the method further comprises outputting an output value thereof as cubicity.

    11. The method according to claim 3, wherein a cubicity classifier is arranged downstream of the convolution layers, and the method further comprises outputting an output value thereof as cubicity.
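    By way of illustration only (not part of the claims), the depth filtering recited in claim 2 might be sketched as follows. The function name, the use of a per-pixel background map, and the choice of 0.0 as an "empty" marker are assumptions made for this sketch:

```python
import numpy as np

def remove_background(depth_image: np.ndarray, background: np.ndarray) -> np.ndarray:
    """Remove pixels that belong to the conveyor-belt background.

    depth_image: per-pixel distance from the depth sensor to the scene.
    background:  per-pixel sensor-to-belt distance, detected in advance
                 with an empty belt (same shape as depth_image).

    Pixels whose depth corresponds to, or exceeds, the background
    distance are set to 0.0, marking them as containing no material.
    """
    filtered = depth_image.copy()
    filtered[depth_image >= background] = 0.0
    return filtered
```

    Only pixels strictly nearer to the sensor than the detected background survive, so the remaining non-zero values describe the bulk material alone.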

    Description

    DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

    [0017] FIG. 1 shows a device for carrying out the method according to the invention, which comprises a conveyor belt 1 on which bulk material 2 has been fed. A depth sensor 3 creates depth images 6 of the bulk material 2 in a capturing region 4 of the depth sensor 3 and sends them to a computing unit 5.

    [0018] In the computing unit 5, the depth images are fed to a neural network and processed by it. The determination of the bulk material volume can, for example, include the following steps, shown for a depth image 6 in FIG. 2: In a first step 7, the depth image 6 is fed to the first convolution layer. In this layer, several outputs 8, so-called feature maps, each depicting different aspects, are generated from the depth image 6 by pixel-wise convolution of the depth image 6 with a convolution kernel. These outputs 8 have the same dimensions and the same number of pixels as the depth image 6. In the next step 9, the number of pixels is reduced by means of a pooling layer. In this process, for each output 8, only the pixel with the highest value is selected from a square of, for example, 4 pixels and transferred to a corresponding pixel of the output 10, which is thus compressed compared to the output 8. Since these squares do not overlap, this reduces the number of pixels by a factor of four, i.e. by a factor of two in each dimension. Steps 7 and 9 are now repeated in additional layers, but in step 11 the convolution is applied to each output 10, further increasing the number of outputs 12 generated. Applying the pooling layer to the outputs 12 in step 13 further lowers the pixel count and produces outputs 14. Step 15 is analogous to step 11 and produces outputs 16. Step 17 is analogous to step 13, lowering the pixel count and producing the output 18. The application of the convolution and pooling layers can be repeated further depending on the aspects to be determined in the depth image 6. In step 19, the pixels of the output 18 are flattened by dimensional reduction into a one-dimensional vector, and their information is transmitted to a classifier, such as a volume classifier 20, whose output value 21 may be output as the bulk material volume present in the capturing region.
In addition to the volume classifier 20, additional quantity classifiers 22 may be provided whose output values 23 form the relative or absolute quantities of the histogram of a particle size distribution. Furthermore, a cubicity classifier 24 can also be provided, the output value 25 of which corresponds to the average cubicity of the bulk material 2 present in the capturing region.
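    The sequence of steps 7 through 19 can be sketched, purely for illustration, as a minimal single-channel pipeline. The real network uses several feature maps per layer and trained kernels; the function names, the 3x3 kernel size, and the single linear volume head are assumptions of this sketch:

```python
import numpy as np

def conv2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Pixel-wise convolution (steps 7, 11, 15): the output keeps the
    same dimensions and pixel count as the input, using edge padding."""
    kh, kw = kernel.shape
    padded = np.pad(image, ((kh // 2,), (kw // 2,)), mode="edge")
    out = np.zeros_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(image: np.ndarray) -> np.ndarray:
    """Pooling (steps 9, 13, 17): keep only the highest value from each
    non-overlapping 2x2 square, quartering the number of pixels."""
    h2, w2 = image.shape[0] // 2, image.shape[1] // 2
    return image[:h2 * 2, :w2 * 2].reshape(h2, 2, w2, 2).max(axis=(1, 3))

def forward(depth_image: np.ndarray, kernels, head_weights: np.ndarray) -> float:
    """Three conv+pool stages, then flatten (step 19) and a linear
    volume head standing in for the volume classifier 20."""
    x = depth_image
    for k in kernels:
        x = max_pool(conv2d(x, k))
    features = x.ravel()                  # dimensional reduction to a vector
    return float(features @ head_weights)  # output value 21
```

    With three stages, an 8x8 depth image shrinks to 4x4, 2x2, and finally 1x1 before the flattened features reach the classifier.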

    [0019] The structure of a training depth image 26 can be seen in FIG. 3. Here, four example depth images 27, 28, 29, 30 of different grains measured in advance are combined to form a training depth image 26. The example depth images 27, 28, 29, 30 can be combined in any positioning and orientation to form a training depth image 26 and may partially overlap. The overlaps are shown hatched in the training depth image 26.
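    The composition of a training depth image, with the overlap rule of claim 7 (the overlap region takes the lowest depth of both combined images, i.e. the grain nearer the sensor), might be sketched as follows. The function name, the random-placement scheme, and 0.0 as the "no material" marker are assumptions; random rotation (claim 6) is omitted for brevity:

```python
import numpy as np

def compose_training_image(canvas_shape, examples, rng=None):
    """Combine example depth images (each paired with its known grain
    volume) at random positions into one training depth image.

    Where two grains overlap, the lower depth value is kept, since the
    nearer grain occludes the farther one. The assigned bulk material
    volume is the sum of the known volumes of the combined examples.
    """
    rng = np.random.default_rng(rng)
    canvas = np.zeros(canvas_shape)  # 0.0 marks "no material"
    total_volume = 0.0
    for patch, volume in examples:
        ph, pw = patch.shape
        r = rng.integers(0, canvas_shape[0] - ph + 1)  # random position
        c = rng.integers(0, canvas_shape[1] - pw + 1)
        region = canvas[r:r + ph, c:c + pw]  # view into the canvas
        overlap = (region > 0) & (patch > 0)
        region[overlap] = np.minimum(region[overlap], patch[overlap])
        fresh = (region == 0) & (patch > 0)
        region[fresh] = patch[fresh]
        total_volume += volume
    return canvas, total_volume
```

    The returned pair (training depth image, summed volume) is exactly what the training method of claim 5 feeds to the network's input and output sides, respectively.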