METHOD FOR DETERMINING THE PARTICLE SIZE DISTRIBUTION OF PARTS OF A BULK MATERIAL FED ONTO A CONVEYOR BELT

20230175945 · 2023-06-08

    Abstract

    The invention relates to a method for determining the particle size distribution of parts of a bulk material (2) fed onto a conveyor belt (1), wherein a depth image (6) of parts of the bulk material (2) is captured in a capturing region (4) by means of a depth sensor (3). In order to reliably classify bulk material at conveying speeds of more than 2 m/s even if there are overlaps, without having to take structurally complicated measures for this purpose, according to the invention, the captured two-dimensional depth image (6) is fed to a convolutional neural network, which has been trained in advance and which has at least three convolutional layers lying one behind the other and one downstream amount classifier (22) per class of a particle size distribution, the output values (21) of which amount classifiers are output as the particle size distribution of the bulk material present in the capturing region (4).

    Claims

    1. A method for determining a grain size distribution of parts of a bulk material fed onto a conveyor belt, said method comprising: capturing a two-dimensional depth image of the bulk material in sections in a capturing region with a depth sensor; feeding the captured two-dimensional depth image to a previously trained convolutional neural network that has at least three successive convolutional layers and, for each class of the grain size distribution, a downstream amount classifier; and outputting output values of the convolutional neural network as the grain size distribution of the bulk material present in the capturing region.

    2. The method according to claim 1, wherein the method further comprises removing from the depth image values of pixels thereof that have a depth that corresponds to a previously detected distance between the depth sensor and a background for the pixel or that exceeds said distance.

    3. The method according to claim 1, wherein a volume classifier is downstream of the convolutional layers and said volume classifier has an output value that is output as a volume of the bulk material present in the capturing region.

    4. The method according to claim 1, wherein a cubicity classifier that outputs an output value as cubicity is downstream of the convolutional layers.

    5. A method for training a neural network for a method according to claim 1, the method comprising: capturing and storing example depth images, each of an example grain with a known volume, together with the volume; combining a plurality of the example depth images randomly so as to form a training depth image, to which an amount of example grains per class is assigned as a grain size distribution thereof; and feeding the training depth image to an input side of the neural network and feeding the assigned grain size distribution thereof to an output side of amount classifiers of the neural network, wherein weights of individual network nodes of the neural network are adapted in a learning step.

    6. The method according to claim 5, wherein the example depth images are assembled with random alignment so as to form the training depth image.

    7. The method according to claim 5, wherein the example depth images are combined with partial overlaps so as to form the training depth image, wherein the training depth image has a depth value in an overlap region that corresponds to a lowest depth of both of the example depth images.

    8. The method according to claim 6, wherein the example depth images are combined with partial overlaps so as to form the training depth image, wherein the training depth image has a depth value in an overlap region that corresponds to a lowest depth of both of the example depth images.

    9. The method according to claim 2, wherein a volume classifier is downstream of the convolutional layers and said volume classifier has an output value that is output as a volume of the bulk material present in the capturing region.

    10. The method according to claim 2, wherein a cubicity classifier that outputs an output value as cubicity is downstream of the convolutional layers.

    11. The method according to claim 3, wherein a cubicity classifier that outputs an output value as cubicity is downstream of the convolutional layers.

    12. The method according to claim 9, wherein a cubicity classifier that outputs an output value as cubicity is downstream of the convolutional layers.

    Description

    BRIEF DESCRIPTION OF THE DRAWINGS

    [0012] In the drawings, the subject matter of the invention is shown by way of example, wherein:

    [0013] FIG. 1 shows a schematic side view of a conveyor belt loaded with bulk material, a depth sensor and a computing unit;

    [0014] FIG. 2 shows a schematic representation of the convolutional neural network, and

    [0015] FIG. 3 shows a training depth image composed of four example depth images.

    DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

    [0016] FIG. 1 shows a device for carrying out the method according to the invention, which comprises a conveyor belt 1 on which bulk material 2 has been fed. A depth sensor 3 creates depth images 6 of the bulk material 2 in a capturing region 4 of the depth sensor 3 and sends them to a computing unit 5.
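    Before the depth image 6 is processed further, background pixels can be removed as set out in claim 2. A minimal sketch of this preprocessing, assuming the depth image is a NumPy array of sensor-to-surface distances and using illustrative values for the previously detected sensor-to-belt distance:

```python
import numpy as np

# Previously detected distance between the depth sensor and the background
# (the empty belt) for each pixel; a uniform 1.50 m is an illustrative assumption.
background = np.full((3, 3), 1.50)

# A captured depth image: the lower-right pixels are closer than the background
# (bulk material), the remaining pixels lie at or beyond the belt surface.
depth_image = np.array([[1.50, 1.50, 1.55],
                        [1.50, 1.20, 1.25],
                        [1.50, 1.22, 1.50]])

# Remove every pixel whose depth corresponds to or exceeds the background
# distance detected for that pixel, here by setting its value to zero.
cleaned = np.where(depth_image >= background, 0.0, depth_image)
```

Only pixels actually covered by bulk material then carry non-zero values into the network.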

    [0017] In the computing unit 5, the depth images are fed to a neural network and processed by it. The determination of the grain size distribution can include the following steps by way of example and is shown for a depth image 6 in FIG. 2: In a first step 7, the depth image 6 is fed to the first convolutional layer. In this process, several outputs 8, so-called feature maps, each of which depicts a different aspect, are generated in the convolutional layer by pixel-wise convolution of the depth image 6 with a convolution kernel. These outputs 8 have the same dimensions and the same number of pixels as the depth image 6. In the next step 9, the number of pixels is reduced by means of a pooling layer. In this process, for each output 8, only the pixel with the highest value is selected from a square of, for example, four pixels and transferred to a corresponding pixel of the output 10, which is now compressed compared to the output 8. Since these squares do not overlap, this reduces the number of pixels by a factor of four. Steps 7 and 9 are now repeated in additional layers, but in step 11 the convolution is applied to each output 10, further increasing the number of outputs 12 generated. Applying the pooling layer to the outputs 12 in step 13 further lowers the pixel count and produces outputs 14. Step 15 is analogous to step 11 and produces outputs 16. Step 17 is analogous to step 13, lowering the pixel count and producing outputs 18. The application of the convolution and pooling layers can be repeated further depending on the aspects to be determined in the depth image 6. In step 19, the pixels of the outputs 18 are flattened by dimensional reduction, and their information is transmitted to a classifier, such as a volume classifier 20, whose output value 21 may be output as the volume of bulk material present in the capturing region. 
Instead of or in addition to the volume classifier 20, amount classifiers 22 can be provided whose output values 23 form the relative or absolute quantities of the histogram of a grain size distribution. Furthermore, a cubicity classifier 24 can also be provided, the output value 25 of which corresponds to the average cubicity of the bulk material 2 present in the capturing region.
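    The sequence of convolutional, pooling, and classifier stages described above can be sketched as follows. This is a deliberately simplified forward pass, assuming a single feature map per layer, untrained random weights, and an assumed count of five grain-size classes; it illustrates the data flow, not the trained multi-map network of the invention:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(fmap, kernel):
    """'Valid' 2-D convolution of one feature map with one kernel, with ReLU."""
    kh, kw = kernel.shape
    h, w = fmap.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(h - kh + 1):
        for j in range(w - kw + 1):
            out[i, j] = np.sum(fmap[i:i + kh, j:j + kw] * kernel)
    return np.maximum(out, 0.0)

def maxpool(fmap, size=2):
    """Non-overlapping max pooling: keeps the highest value of each
    size-by-size square, reducing the pixel count by a factor of four."""
    h = fmap.shape[0] - fmap.shape[0] % size
    w = fmap.shape[1] - fmap.shape[1] % size
    return fmap[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

depth_image = rng.random((32, 32))   # stand-in for a captured depth image 6

x = depth_image
for _ in range(3):                   # three successive convolutional layers,
    x = maxpool(conv2d(x, rng.standard_normal((3, 3))))  # each followed by pooling

features = x.ravel()                 # flattening by dimensional reduction (step 19)

n_classes = 5                        # assumed number of grain-size classes
W = rng.standard_normal((n_classes, features.size))
amounts = W @ features               # one amount-classifier output value per class
volume = rng.standard_normal(features.size) @ features    # volume classifier
cubicity = rng.standard_normal(features.size) @ features  # cubicity classifier
```

With a 32 x 32 input and 3 x 3 kernels, the three conv/pool stages compress the map to 2 x 2 before flattening; the linear heads stand in for the downstream amount, volume, and cubicity classifiers.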

    [0018] The structure of a training depth image 26 can be seen in FIG. 3. Here, four example depth images 27, 28, 29, 30 of different grains measured in advance are combined to form a training depth image 26. The example depth images 27, 28, 29, 30 can be combined in any positioning and orientation to form the training depth image 26 and may partially overlap. The overlaps are shown hatched in the training depth image 26.
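    The composition of example depth images into a training depth image, with the overlap rule of claims 7 and 8, can be sketched as follows. Depths are distances from the sensor, so in an overlap region the lower value (the grain nearer the sensor) is kept; the sizes, positions, and distances are illustrative assumptions:

```python
import numpy as np

BACKGROUND = 2.0                         # assumed sensor-to-belt distance in metres
canvas = np.full((16, 16), BACKGROUND)   # empty training depth image

def paste(canvas, grain, top, left):
    """Paste an example depth image onto the canvas; in an overlap region
    the training depth image keeps the lowest depth of both images."""
    h, w = grain.shape
    region = canvas[top:top + h, left:left + w]
    canvas[top:top + h, left:left + w] = np.minimum(region, grain)

grain_a = np.full((6, 6), 1.6)           # a flatter grain, farther from the sensor
grain_b = np.full((6, 6), 1.4)           # a taller grain, closer to the sensor
paste(canvas, grain_a, 2, 2)
paste(canvas, grain_b, 5, 5)             # partially overlaps grain_a
```

The known grain counts per size class of the pasted examples would then be assigned to the finished canvas as its grain size distribution for training.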