METHOD OF DUST SUPPRESSION FOR CRUSHERS WITH SPRAYING DEVICES
20230075710 · 2023-03-09
CPC classification
B02C23/18
PERFORMING OPERATIONS; TRANSPORTING
B02C25/00
PERFORMING OPERATIONS; TRANSPORTING
International classification
B02C25/00
PERFORMING OPERATIONS; TRANSPORTING
Abstract
A method of dust suppression for crushers (2) with spraying devices (3) is described. To facilitate resource-sparing dust suppression independently of the operator, even in the case of heterogeneous bulk material, the deviation between an image representation, recorded by a first sensor (4), of a pattern arranged in its detection region as an actual value and a specified target value is determined, whereupon the spraying devices (3) assigned to the pattern are activated if the deviation exceeds a specified threshold.
Claims
1. A method of dust suppression for a crusher with spraying devices, said method comprising: recording an image with a first sensor of a pattern arranged in a detection region such that said image serves as an actual value, and determining a deviation between the actual value and a specified target value; and when the deviation exceeds a specified threshold value, activating the spraying devices associated with the pattern.
2. The method according to claim 1, wherein the method further comprises detecting images of several patterns simultaneously with the first sensor.
3. The method according to claim 1, wherein an image of the pattern recorded by a second sensor is the target value, and the deviation is determined as a number of non-corresponding pattern points in the target value and the actual value.
4. The method according to claim 3, wherein the first sensor and the second sensor form a stereo camera.
5. The method according to claim 4, wherein the method further comprises generating with the stereo camera a two-dimensional depth image of bulk material conveyed past the stereo camera and feeding the two-dimensional depth image to a previously trained convolutional neural network that has at least three convolution layers arranged one behind the other and, for each class of a particle size distribution, a downstream quantity classifier, output values thereof being output as a particle size distribution.
6. The method according to claim 5, wherein the depth image comprises pixels each having a respective value, and the method comprises removing from the depth image the values of the pixels that have a depth that corresponds to, or exceeds, a previously detected distance between the stereo camera and a background for the pixel.
7. The method according to claim 5, wherein a volume classifier is arranged downstream of the convolution layers, and an output value of the volume classifier is output as a volume of the bulk material present in the detection region.
8. A training method for training a neural network for the method according to claim 5, said training method comprising: first acquiring example depth images of a respective example grain with a known volume and storing said depth images together with the known volume thereof; combining a plurality of example depth images randomly so as to form a training depth image to which a sum of the known volumes of the combined example depth images is assigned as bulk material volume or a class-wise distribution of bulk material volumes of the combined example depth images is assigned as the particle size distribution; feeding the training depth image to the neural network on an input side thereof and feeding the assigned bulk material volume or the assigned particle size distribution to the neural network on an output side thereof; and adapting weights of individual network nodes of the neural network in a learning step.
9. The method according to claim 2, wherein an image of the pattern recorded by a second sensor is the target value, and the deviation is determined as a number of non-corresponding pattern points in the target value and the actual value.
10. The method according to claim 9, wherein the first sensor and the second sensor form a stereo camera.
11. The method according to claim 10, wherein the method further comprises generating with the stereo camera a two-dimensional depth image of bulk material conveyed past the stereo camera and feeding the two-dimensional depth image to a previously trained convolutional neural network that has at least three convolution layers arranged one behind the other and, for each class of a particle size distribution, a downstream quantity classifier, output values thereof being output as a particle size distribution.
12. The method according to claim 11, wherein the depth image comprises pixels each having a respective value, and the method comprises removing from the depth image the values of the pixels that have a depth that corresponds to, or exceeds, a previously detected distance between the stereo camera and a background for the pixel.
13. The method according to claim 11, wherein a volume classifier is arranged downstream of the convolution layers, and an output value of the volume classifier is output as a volume of the bulk material present in the detection region.
14. The method according to claim 12, wherein a volume classifier is arranged downstream of the convolution layers, and an output value of the volume classifier is output as a volume of the bulk material present in the detection region.
15. The method according to claim 6, wherein a volume classifier is arranged downstream of the convolution layers, and an output value of the volume classifier is output as a volume of the bulk material present in the detection region.
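Purely as an illustrative sketch (not part of the claims), the threshold logic of claims 1 and 3 can be expressed as follows. The function names, the per-pixel tolerance, and the example stripe pattern are assumptions made for this sketch; the claims only require counting non-corresponding pattern points and comparing the count against a specified threshold:

```python
import numpy as np

def deviation(actual: np.ndarray, target: np.ndarray) -> int:
    """Deviation as the number of non-corresponding pattern points
    (claim 3): pixels where the recorded image of the pattern differs
    from the reference image beyond a per-pixel tolerance (assumed 10)."""
    return int(np.count_nonzero(np.abs(actual.astype(int) - target.astype(int)) > 10))

def spraying_active(actual: np.ndarray, target: np.ndarray, threshold: int) -> bool:
    """Activate the spraying devices assigned to the pattern when the
    deviation exceeds the specified threshold value (claim 1)."""
    return deviation(actual, target) > threshold

# Hypothetical 8x8 grayscale stripe pattern; dust obscures the upper half.
target = np.tile(np.array([0, 255], dtype=np.uint8), (8, 4))
actual = target.copy()
actual[:4, :] = 128  # 32 pattern points no longer correspond
print(spraying_active(actual, target, threshold=16))  # True
```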
Description
BRIEF DESCRIPTION OF THE INVENTION
[0016] In the drawing, the subject matter of the invention is shown by way of example, wherein:
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
[0019] A method according to the invention can be used, for example, to suppress dust particles produced during the crushing of bulk material 1. For this purpose, a mobile crusher 2, shown in
[0020] In order to reduce the measurement and maintenance effort, several patterns can be arranged in the detection region 5 of the first sensor 4. This means that a large number of patterns can be detected with just one optical sensor 4, enabling differentiated and thus efficient activation of the spraying devices 3 arranged at different positions. A wide-angle or 360° camera, for example, is suitable as the first sensor 4 for this purpose.
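As a hedged illustration of this one-sensor, many-patterns arrangement, sub-regions of a single wide-angle frame could each be tied to the spraying device they control. The region coordinates, device identifiers, and the deviation predicate below are invented for this sketch and are not taken from the disclosure:

```python
import numpy as np

# Hypothetical assignment of image sub-regions (patterns) within one
# wide-angle frame to the spraying devices they control.
# Each entry: pattern name -> ((row0, row1, col0, col1), sprayer id)
PATTERN_REGIONS = {
    "crusher_inlet":  ((0, 120, 0, 160), 1),
    "discharge_belt": ((0, 120, 160, 320), 2),
    "stockpile":      ((120, 240, 0, 320), 3),
}

def sprayers_to_activate(frame, target_frame, exceeds):
    """Evaluate every pattern in the single frame and return the ids
    of the spraying devices whose pattern deviation exceeds the threshold."""
    active = []
    for _name, ((r0, r1, c0, c1), sprayer) in PATTERN_REGIONS.items():
        if exceeds(frame[r0:r1, c0:c1], target_frame[r0:r1, c0:c1]):
            active.append(sprayer)
    return sorted(active)

target = np.zeros((240, 320), dtype=np.uint8)
frame = target.copy()
frame[0:120, 160:320] = 255  # dust obscures only the discharge-belt pattern
exceeds = lambda a, t: int(np.count_nonzero(a != t)) > 100
print(sprayers_to_activate(frame, target, exceeds))  # [2]
```

Only the sprayer assigned to the obscured pattern is activated, which is the differentiated activation the paragraph describes.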
[0022] The first sensor 4 and the second sensor 9 can form a stereo camera 10, whereby, in addition to detecting the dust load in the area of the stereo camera 10, depth information can also be acquired, which can subsequently be used to assess the condition of the bulk material 1.
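The depth preprocessing of claims 6 and 12 can be illustrated with the following sketch: pixels whose depth corresponds to, or exceeds, the previously detected camera-to-background distance are removed, so that only the bulk material remains in the depth image. The zero sentinel for removed pixels and the example distances are assumptions made here:

```python
import numpy as np

def remove_background(depth: np.ndarray, background: np.ndarray) -> np.ndarray:
    """Remove from the depth image the values of pixels whose depth
    corresponds to, or exceeds, the previously detected per-pixel
    distance between the stereo camera and the background (claim 6).
    Removed pixels are set to 0.0 (an assumed 'no material' marker)."""
    cleaned = depth.copy()
    cleaned[depth >= background] = 0.0
    return cleaned

# Hypothetical 1.0 m camera-to-belt distance; grains sit 0.2 m above the belt.
background = np.full((4, 4), 1.0)
depth = np.full((4, 4), 1.0)
depth[1:3, 1:3] = 0.8  # bulk material is closer to the camera than the belt
print(remove_background(depth, background))
```

Only the four central pixels covered by material survive; everything at belt depth is cleared before the image is fed to the convolutional neural network.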
[0023] As disclosed in