Conveying system, plant for sorting bulk goods having a conveying system of this type, and transport method
09833815 · 2017-12-05
Inventors
- Robin Gruna (Baden-Baden, DE)
- Kai-Uwe Vieth (Karlsruhe, DE)
- Henning Schulte (Coburg, DE)
- Thomas Längle (Eggenstein, DE)
- Uwe Hanebeck (Waldbronn, DE)
- Marcus Baum (Göttingen, DE)
- Benjamin Noack (Karlsruhe, DE)
CPC classification
B07C2501/0018
PERFORMING OPERATIONS; TRANSPORTING
B07C5/3425
B07C5/342
B07C5/10
International classification
B07C5/00
PERFORMING OPERATIONS; TRANSPORTING
B07C5/10
B07C5/36
Abstract
Conveying system for transporting a material flow (M) comprising a large number of individual objects (O1, O2, . . . ), characterized in that with the conveying system, by means of optical detection of individual objects (O1, O2, . . . ) in the material flow (M), for these objects (O1, O2, . . . ) respectively the location position (x(t),y(t)) thereof at several different times (t.sub.−4, t.sub.−3, . . . ) can be determined and by means of the location positions (x(t),y(t)) for these objects (O1, O2, . . . ) determined at the different times (t.sub.−4, t.sub.−3, . . . ), respectively the location (x.sub.b(t.sub.b),y.sub.b(t.sub.b)) thereof at at least one defined time (t.sub.b) after the respectively latest of the different times (t.sub.−4, t.sub.−3, . . . ) can be calculated.
Claims
1. A conveying system for transporting a material flow (M) comprising a large number of individual objects (O1, O2, . . . ), wherein with the conveying system, by means of optical detection of individual objects (O1, O2, . . . ) in the material flow (M), for these objects (O1, O2, . . . ) respectively the location position (x(t),y(t)) thereof at several different, fixed times (t.sub.−4, t.sub.−3, . . . ) can be determined and by means of the location positions (x(t),y(t)) determined at the different, fixed times (t.sub.−4, t.sub.−3, . . . ), for these objects (O1, O2, . . . ) respectively the location (x.sub.b(t.sub.b),y.sub.b(t.sub.b)) thereof at at least one defined time (t.sub.b) after the respectively latest of the different, fixed times (t.sub.−4, t.sub.−3, . . . ) can be calculated.
2. The conveying system according to claim 1, wherein the movement paths (1) composed of a plurality of location positions (x(t),y(t)) of the respective object at different times (t.sub.−4, t.sub.−3, . . . ) can be determined for the individual objects (O1, O2, . . . ), the movement paths of different objects (O1, O2, . . . ) being able to be determined and/or being able to be differentiated from each other via recursive or non-recursive estimating methods.
3. The conveying system according to claim 1, wherein a movement model can be determined respectively for the objects (O1, O2, . . . ) by means of the respective movement paths thereof, in particular can be selected from a prescribed set of movement models, and/or parameters for such a movement model can be determined.
4. The conveying system according to claim 1, wherein the individual objects (O1, O2, . . . ) can be classified on the basis of the optical detection.
5. The conveying system according to claim 4, wherein the classification of an object (O1, O2, . . . ) can be performed by taking into account the location positions (x(t),y(t)) determined for this object at the different, fixed times (t.sub.−4, t.sub.−3, . . . ), the movement path determined for this object and/or the movement model determined for this object.
6. The conveying system according to claim 1, wherein two-dimensional location positions (x(t),y(t)), in particular two-dimensional location positions relative to the conveying system, can be determined for the objects (O1, O2, . . . ), or wherein three-dimensional location positions in space can be determined for the objects (O1, O2, . . . ).
7. The conveying system according to claim 1, wherein with the conveying system, by means of optical detection of the individual objects (O1, O2, . . . ) in the material flow (M), for these objects (O1, O2, . . . ) respectively, in addition to the location position (x(t),y(t)) thereof, also the orientation thereof at several different times (t.sub.−4, t.sub.−3, . . . ) can be determined, and wherein, by means of the location positions (x(t),y(t)) and orientations determined at the different times (t.sub.−4, t.sub.−3, . . . ) for these objects (O1, O2, . . . ), respectively the location (x.sub.b(t.sub.b),y.sub.b(t.sub.b)) thereof at the at least one defined time (t.sub.b) after the respectively latest of the different times (t.sub.−4, t.sub.−3, . . . ) can be calculated.
8. The conveying system according to claim 1, wherein by means of the location positions (x(t),y(t)) and orientations determined at the different times (t.sub.−4, t.sub.−3, . . . ) for these objects (O1, O2, . . . ), respectively in addition to the location (x.sub.b(t.sub.b),y.sub.b(t.sub.b)) thereof also the orientation thereof at the at least one defined time (t.sub.b) after the respectively latest of the different times (t.sub.−4, t.sub.−3, . . . ) can be calculated.
9. The conveying system according to claim 1, wherein the optical detection is effected by means of one or more optical detection unit(s), which comprise or preferably are one or more surface sensor(s) and/or a plurality of line sensors at a spacing from each other, and/or wherein, during the optical detection, a sequence of two-dimensional images can be recorded, from which the location positions of the objects at the different times can be determined.
10. The conveying system according to claim 1, wherein, within the scope of the optical detection of one or more of the objects (O1, O2, . . . ) at several different times (t.sub.−4, t.sub.−3, . . . ), images, in particular camera images, of this object or these objects can be produced, wherein respectively the shape(s) of this object or these objects in the produced images can be determined, and wherein respectively a three-dimensional image of this object or these objects can be calculated from the determined shapes.
11. The conveying system according to claim 1, wherein the calculation of the location(s) of the object(s) at the defined time(s) is effected taking into account the calculated three-dimensional image(s).
12. The conveying system according to claim 1, wherein classification of the object(s) is effected using the calculated three-dimensional image(s).
13. A plant for bulk material sorting comprising a conveying system according to claim 1 and a sorting unit with which the objects (O1, O2, . . . ) can be sorted on the basis of the calculated locations (x.sub.b(t.sub.b),y.sub.b(t.sub.b)) at the defined time(s) (t.sub.b).
14. The plant according to claim 13, wherein the objects can be sorted on the basis of the classification thereof, the classification being effected into good objects (GO1, GO2, . . . ) and bad objects (SO1, SO2) and preferably the sorting unit having an ejection unit, in particular a blow-out unit, which is configured to remove bad objects from the material flow (M) using the calculated locations (x.sub.b(t.sub.b),y.sub.b(t.sub.b)) at the defined time(s) (t.sub.b).
15. A method for transporting a material flow (M) comprising a large number of individual objects (O1, O2, . . . ), wherein in this method, by means of optical detection of individual objects (O1, O2, . . . ) in the material flow (M), for these objects (O1, O2, . . . ) respectively the location position (x(t),y(t)) thereof at several different, fixed times (t.sub.−4, t.sub.−3, . . . ) is determined, and wherein, by means of the location positions (x(t),y(t)) determined at the different, fixed times (t.sub.−4, t.sub.−3, . . . ), for these objects (O1, O2, . . . ) respectively the location (x.sub.b(t.sub.b),y.sub.b(t.sub.b)) thereof at at least one defined time (t.sub.b) after the respectively latest of the different, fixed times (t.sub.−4, t.sub.−3, . . . ) is calculated, the method being implemented using a conveying system or a plant according to claim 1.
Description
(1) Subsequently, the invention is described with reference to embodiments shown in the figures.
(6) The plant for bulk material sorting shown in the figure comprises, as conveying system, a conveyor belt 2, on which the material flow M with the individual objects O1, O2, O3 . . . is transported past a surface camera 3.
(7) Furthermore, the plant comprises a sorting unit, only the blow-out unit 4 of which is illustrated here. In addition, a computer system 6 is shown, with which all of the subsequently described calculations of the plant or of the conveying system are implemented.
(8) The individual objects O1, O2, O3 . . . in the material flow M are hence transported by means of the conveyor belt 2 through the detection region of the camera 3 and detected and evaluated there, with respect to the object positions thereof, by image evaluation algorithms in the computer system 6. Subsequently, separation is effected by the blow-out unit 4 into the bad fraction (bad objects SO1 and SO2) and into the good fraction (good objects GO1, GO2, GO3 . . . ).
(9) According to the invention, a surface sensor (surface camera) 3 is hence used. Images of the bulk material or material flow M (or of the individual objects O1, . . . thereof) are produced by the camera 3 on the conveyor belt 2 and/or in front of a problem-adapted background 7. The image recording rate is adapted to the speed of the conveyor belt 2 or synchronised by a position transducer (not shown).
(10) According to the invention, the aim is to produce an image sequence (instead of one momentary recording) of the bulk material flow at different times (in quick succession) by means of a plurality of surface scans or surface recordings of the material flow M by the surface camera 3.
(12) Within the scope of the invention, the data production can hence be effected on the basis of one (or also a plurality) of image-providing surface sensors, such as the surface camera 3. This enables a position determination and also a measurement of physical properties of the individual particles or objects O1, . . . of the bulk material M at several different times.
(14) In addition, the predictive multiobject tracking method which is used provides uncertainty data relating to the estimated quantities in the form of a variance (blow-out time) or covariance matrix (blow-out position).
(17) At the same time, parameters of movement equations can be estimated in the tracking phase, the movement equations describing a movement model for the movement of an individual object. In this way, by means of the optically detected information (i.e. the movement path formed by the individually recorded location positions or, provided the orientation is also detected, the movement and orientation-change path which results from the object poses recorded at the several different times), the future movement path of the observed object can be estimated with great precision, and hence also the location thereof at the later, potential (provided it concerns a bad object) blow-out time t.sub.b. Examples of parameters of the movement equations which can be estimated on the basis of the image sequences are acceleration values in all spatial directions, axes of rotation and directions of rotation. These parameters can be detected by tracking in the image sequences and establish a movement model for each particle which also comprises e.g. rotational and transverse movements.
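Purely by way of illustration (not part of the patented disclosure), the estimation of movement-equation parameters from a sequence of tracked location positions can be sketched as a least-squares fit of an assumed constant-acceleration model; all function names and values below are illustrative assumptions:

```python
import numpy as np

def fit_motion_params(times, xs):
    """Least-squares fit of an assumed model x(t) = x0 + v*t + 0.5*a*t^2
    to the location positions recorded at the different times."""
    times = np.asarray(times, dtype=float)
    # design matrix: columns for x0, v and a
    A = np.stack([np.ones_like(times), times, 0.5 * times**2], axis=1)
    x0, v, a = np.linalg.lstsq(A, np.asarray(xs, dtype=float), rcond=None)[0]
    return x0, v, a

def predict_position(params, t):
    """Extrapolate the fitted model to a later time, e.g. the blow-out time."""
    x0, v, a = params
    return x0 + v * t + 0.5 * a * t**2
```

A track recorded at a handful of times can thus be extrapolated to a later blow-out time t.sub.b; the same scheme extends to further parameters (e.g. rotation) with a richer model.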
(18) The tracking phase is the phase in which the observed object is situated in the image-detection region 3′ of the camera 3. It is followed by the prediction phase, during which the observed object, after it has just left the imaging region of the camera 3, moves out of the region 3′ and through the region 3″ between this region 3′, on the one hand, and the blow-out region 4′, on the other hand, and hence can no longer be detected by the camera 3. In the prediction phase, the determined movement equations can be used in order to estimate or calculate, for the just observed object (i.e. with corresponding computing effort for each detected object in the material flow M), the subsequent location position (or also the pose).
(19) After the object to be tracked has left the field of view 3′ of the camera 3, the prediction phase hence follows. This second phase of the object tracking can consist of one or more prediction steps which are based on the movement models (e.g. estimated rotational movements) determined previously in the tracking phase. The result of this prediction phase is an estimation of the location at a later time (for example the blow-out position x.sub.b(t.sub.b) at the blow-out time t.sub.b). Tracking the objects is therefore effected in two phases. The tracking phase is composed of sequences of filter and prediction steps: filter steps process camera images in order to improve the current position estimations, and prediction steps extrapolate the position estimations until the next camera image, i.e. the next filter step. The prediction phase following the tracking phase consists only of prediction steps since, due to the lack of camera data, a filter step can no longer be implemented.
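The alternation of filter and prediction steps described above can be sketched, assuming a simple constant-velocity movement model and a position-only camera measurement (the patent does not prescribe this concrete filter), with a standard Kalman filter:

```python
import numpy as np

def kalman_track(zs, dt, extra_steps, q=1e-3, r=1e-2):
    """Tracking phase: alternate prediction and filter steps over the
    measured positions zs; prediction phase: extra_steps predict-only
    steps after the object has left the field of view (no camera data)."""
    F = np.array([[1.0, dt], [0.0, 1.0]])   # assumed constant-velocity model
    H = np.array([[1.0, 0.0]])              # camera measures position only
    Q = q * np.eye(2)
    R = np.array([[r]])
    x = np.array([zs[0], 0.0])              # state: [position, velocity]
    P = np.eye(2)                           # state covariance (uncertainty)
    for z in zs[1:]:
        # prediction step: extrapolate to the next camera image
        x = F @ x
        P = F @ P @ F.T + Q
        # filter step: weight the prediction against the measured position
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([z]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
    # prediction phase: prediction steps only
    for _ in range(extra_steps):
        x = F @ x
        P = F @ P @ F.T + Q
    return x, P
```

The returned covariance P is the kind of uncertainty description that can later be evaluated when dimensioning the blow-out window.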
(20) The tracking phase can be implemented in various ways. It can operate non-recursively: the current object positions or object orientations are determined from each image (no movement models need be used for this), and all the object positions obtained over time can be assembled in order to determine therefrom trajectories for the individual objects. Recursive processing is also possible, so that only the current position estimation of an object need be maintained. The movement models are hereby used (prediction steps) in order to predict the object movement between camera measurements and hence to relate successive filter steps to each other. In one filter step, the prediction of the result of the preceding filter step serves as prior knowledge; weighting then takes place between the predicted positions and the positions determined from the current camera image. It is likewise possible to operate recursively with an adaptation of the movement models: object positions or orientations and model parameters are hereby estimated simultaneously. By observing image sequences, e.g. acceleration values can be determined as model parameters. The movement models are hence identified during the tracking phase itself. This can concern a fixed model for all objects or individual movement models.
(21) The reference number 1′ denotes the extrapolation of the movement path 1 of an object, determined in the tracking phase, beyond the period in which this object is detected by the camera 3, i.e. the predicted movement path of the object after leaving the detection region 3′ of the camera, in particular up to the time at which its trajectory passes the blow-out unit 4 (or passes through the detection region 4′ of the same).
(22) The prediction phase can directly use the model information determined previously in the tracking phase and consists purely of prediction steps, since camera data are no longer available and hence filter steps can no longer be effected. The prediction phase can be further sub-divided, for example into a phase in which the objects are still situated on the conveyor belt and a trajectory phase after leaving the belt. For prediction of the movements, a different movement model can be used in each of the two phases (for example a two-dimensional movement model on the conveyor belt and a three-dimensional movement model in the subsequent trajectory phase).
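A minimal sketch of such a sub-divided prediction phase, assuming a one-dimensional constant-velocity model on the belt followed by a simple projectile model for the trajectory phase (the concrete models, names and values are assumptions, not taken from the disclosure):

```python
def predict_blowout_position(x0, vx, t_leave, t_b, g=9.81):
    """Two-phase prediction: constant-velocity transport on the belt up
    to t_leave, then free flight (projectile model) until blow-out time t_b."""
    assert t_b >= t_leave
    x_belt = x0 + vx * t_leave       # position where the object leaves the belt
    t_fly = t_b - t_leave
    x_b = x_belt + vx * t_fly        # horizontal motion continues in flight
    drop = 0.5 * g * t_fly**2        # vertical drop during the trajectory phase
    return x_b, drop
```

Switching the movement model at t_leave keeps each phase simple while still yielding the blow-out position x.sub.b(t.sub.b) needed to actuate the nozzles.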
(23) One possibility for preparing the camera image data for the object tracking resides in converting the data by image pre-processing methods and segmentation methods into a set of object positions. Useable image pre-processing methods and segmentation methods are, for example, non-homogeneous point operations for removing lighting inhomogeneities and region-oriented segmentation methods, such as are described in the literature (B. Jähne, "Digitale Bildverarbeitung und Bildgewinnung" (Digital Image Processing and Image Acquisition), 7th revised edition, Springer, 2012; or J. Beyerer, F. P. León, and C. Frese, "Automatische Sichtprüfung: Grundlagen, Methoden und Praxis der Bildgewinnung und Bildauswertung" (Automatic Visual Inspection: Foundations, Methods and Practice of Image Acquisition and Image Evaluation), Springer, 2012).
(24) The assignment of measurements to prior estimations can be effected adapted to the computing capacities available in the computer system 6, for example explicitly by a next-neighbour search or also implicitly by association-free methods. Corresponding methods are described for example in R. P. S. Mahler “Statistical Multisource-Multitarget Information Fusion”, Boston, Mass.: Artech House, 2007.
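An explicit next-neighbour assignment of the kind mentioned can be sketched as a greedy gated search (an illustrative sketch only; plants with tighter computing budgets may instead use the association-free methods cited above):

```python
import numpy as np

def nearest_neighbour_assign(predicted, measured, gate=1.0):
    """Greedily assign each predicted track position to its nearest
    unused measurement; measurements farther than `gate` stay unassigned."""
    assignment = {}
    free = set(range(len(measured)))
    for i, p in enumerate(predicted):
        best, best_d = None, gate
        for j in free:
            d = np.linalg.norm(np.asarray(p, dtype=float) - np.asarray(measured[j], dtype=float))
            if d < best_d:
                best, best_d = j, d
        if best is not None:
            assignment[i] = best   # track i explained by measurement best
            free.remove(best)
    return assignment
```

The gate prevents a track from latching onto a far-away object when its own measurement is missing, e.g. because two particles overlap in the image.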
(25) For simultaneous estimation of object positions and model parameters, for example Kalman filter methods or other methods for (non-linear) filtering and state estimation can be used, as are described for example in F. Sawo, V. Klumpp, U. D. Hanebeck, “Simultaneous State and Parameter Estimation of Distributed-Parameter Physical Systems based on Sliced Gaussian Mixture Filter”, Proceedings of the 11th International Conference on Information Fusion (Fusion 2008), 2008.
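As a hedged illustration of such simultaneous state and parameter estimation (not the method of the cited reference), an unknown acceleration can be appended to the state vector of a Kalman filter and estimated alongside position and velocity from the image sequence:

```python
import numpy as np

def augmented_kalman(zs, dt, q=1e-6, r=1e-4):
    """Augmented-state Kalman filter: the state [position, velocity,
    acceleration] carries the model parameter (acceleration) itself,
    so state and parameter are estimated simultaneously."""
    F = np.array([[1.0, dt, 0.5 * dt**2],
                  [0.0, 1.0, dt],
                  [0.0, 0.0, 1.0]])       # acceleration modelled as constant
    H = np.array([[1.0, 0.0, 0.0]])       # only position is measured
    Q = q * np.eye(3)
    R = np.array([[r]])
    x = np.array([zs[0], 0.0, 0.0])
    P = np.eye(3)
    for z in zs[1:]:
        x = F @ x                          # prediction step
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + R                # filter step
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([z]) - H @ x)
        P = (np.eye(3) - K @ H) @ P
    return x
```

Fed with positions from a uniformly accelerated track, the third state component converges towards the true acceleration, which can then serve both for prediction and as a classification feature.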
(26) Determination of movement model parameters hereby has two functions: 1. Firstly, these parameters are used both in the tracking phase and in the prediction phase for calculation of the prediction step(s) in order to enable precise prediction of blow-out time and blow-out position (for example, during the tracking phase, the position of an object predicted by the model can be compared with the object position actually measured in this phase, and the parameters of the model can be adapted if necessary). 2. Furthermore, the model parameters extend the feature space on the basis of which the classification and the subsequent actuation of the blow-out unit can be effected. In particular, bulk materials can consequently be classified and correspondingly sorted, in addition to the optically recognisable features, by means of differences in the movement behaviour.
(28) Relative to the state of the art, the present invention has a series of essential advantages.
(29) By determining the movement path 1 of each object O1, O2, . . . , a significantly improved prediction or estimation (calculation) of the blow-out time t.sub.b and of the blow-out position x.sub.b(t.sub.b) is possible, even if the assumption of a constant linear movement of the bulk material at the belt speed v.sub.belt is not fulfilled. Consequently, the mechanical complexity for the material settling of uncooperative bulk materials can be significantly reduced.
(30) For extremely uncooperative materials, such as for example spherical bulk material, it is in fact even possible, for the first time in many cases, to implement optical sorting of the described type by means of the present invention.
(31) Against the background that end users, in particular in the food sector, have a large number of different bulk material products M sorted on one and the same sorting plant, a wide product spectrum can be processed without the need to adapt the plant to uncooperative bulk material by changing the conveyor belt (for example using conveyor belts with surfaces structured to different degrees) or by other mechanical changes.
(32) In addition, the method for multiobject tracking enables improved optical characterisation and feature extraction from the image data of the individual objects O of the observed bulk material flow M. Since the uncooperative objects are generally presented to the camera in different three-dimensional orientations because of their additional intrinsic movement, image features of different object views can be accumulated over the individual observation times to form an expanded object feature. For example, the three-dimensional shape of an object can consequently also be estimated and used as a feature for sorting. Extrapolation of the three-dimensional shape of an object from the recorded image data can thereby be effected, as described in the literature (see e.g. S. J. D. Prince, "Computer Vision: Models, Learning, and Inference", New York, Cambridge University Press, 2012), e.g. by means of the visual outline of the individual objects in different poses (shape-from-silhouettes method).
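A deliberately simplified sketch of the shape-from-silhouettes idea, assuming orthographic projections along coordinate axes (real implementations use calibrated perspective camera projections; all names here are illustrative):

```python
import numpy as np

def carve_silhouettes(grid_size, silhouettes):
    """Visual-hull sketch: start from a full voxel grid and carve away
    every voxel whose projection falls outside any observed silhouette.
    Each silhouette is a (view_axis, 2-D boolean mask) pair; orthographic
    projection along the given axis is assumed for simplicity."""
    hull = np.ones((grid_size,) * 3, dtype=bool)
    idx = np.indices(hull.shape)
    for axis, mask in silhouettes:
        proj_axes = [a for a in range(3) if a != axis]
        # a voxel survives only if its projection lies inside this silhouette
        inside = mask[idx[proj_axes[0]], idx[proj_axes[1]]]
        hull &= inside
    return hull
```

Each additional silhouette can only carve material away, so the result is an outer bound (the visual hull) of the object's three-dimensional shape.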
(33) As a result, improved differentiation of objects with orientation-dependent appearance is achieved. In many cases, a further camera for a two-sided examination can consequently be dispensed with. The expanded object features can in addition also be used for improved movement modelling within the scope of the predictive tracking by, for example, the three-dimensional shape being taken into account for prediction of the trajectory.
(34) Furthermore, the identified model, which characterises the movement path 1 of a specific object, can itself be used as a feature for a classification or sorting decision. The movement path 1 determined by means of the individual camera recordings, and also the future movement path 1′ estimated on the basis thereof after the object leaves the scanning region 3′, are influenced by the geometric properties and also by the weight of the object and consequently allow conclusions to be drawn as to the bulk material fraction to which the object belongs.
(35) The evaluation of the additional uncertainty descriptions for the estimated blow-out time and the blow-out position provides a further technical advantage for the bulk material sorting. This enables adapted actuation of the pneumatic blow-out unit for each object to be ejected. If the estimated values are associated with great uncertainty, a larger blow-out window can be chosen in order to ensure ejection of a bad object. Conversely, the dimension of the blow-out window and hence the number of actuated nozzles can be scaled down in the case of estimations with low uncertainty. As a result, the consumption of compressed air can be reduced during the sorting process, as a result of which costs and energy can be saved.
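The scaling of the blow-out window with the estimation uncertainty can be sketched as follows (the ±kσ rule, the nozzle pitch and all names are illustrative assumptions, not prescribed by the disclosure):

```python
import math

def blowout_window(pos_mean, pos_var, nozzle_pitch, k=3.0):
    """Size the blow-out window as mean +/- k standard deviations of the
    estimated blow-out position and convert that span to a nozzle count.
    Low variance -> few nozzles (less compressed air); high variance ->
    wide window to still ensure ejection of the bad object."""
    half_width = k * math.sqrt(pos_var)
    n_nozzles = max(1, math.ceil(2 * half_width / nozzle_pitch))
    return pos_mean - half_width, pos_mean + half_width, n_nozzles
```

With the variance taken from the tracking covariance, each ejection thus consumes only as much compressed air as its estimation uncertainty warrants.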
(36) As a result of the multiple position determination of objects of the bulk material flow at different times and also the evaluation of an image sequence instead of a momentary image recording (this can also concern multiple measurement, calculation and accumulation of object features at different times and also use of identified movement models as feature for an object classification), in general a significantly improved separation is achieved during automatic sorting of any bulk materials. In addition, compared with the state of the art for sorting uncooperative materials, the mechanical complexity for material settling can be significantly reduced.
(37) Furthermore, the present invention can be used for sorting bulk materials of a complex shape which must be examined from several different viewpoints, only one individual surface camera at a fixed position being used.
(38) By using an identified movement model as differentiation feature, in addition bulk materials with the same appearance but object-specific movement behaviour (e.g. due to different masses or surface structures) can be classified and sorted automatically.