A METHOD FOR PROCESSING OF TREES USING MACHINE VISION WITH SOFTWARE MEANS, A SYSTEM AND A WOOD PROCESSING LINE
20260084339 · 2026-03-26
Assignee
Inventors
CPC classification
G06V10/774
PHYSICS
B27L1/04
PERFORMING OPERATIONS; TRANSPORTING
G06V20/52
PHYSICS
International classification
B27L1/04
PERFORMING OPERATIONS; TRANSPORTING
G06V10/774
PHYSICS
Abstract
Processing trees using machine vision with software, wherein trees are debarked with a debarking drum (12), harmful objects smaller than a selected dimension are removed, debarked trees are routed from the debarking drum over a trap, trees on a conveyor are illuminated, trees on the conveyor are recorded using at least one camera, a harmful object is automatically detected utilising machine vision by recognising hue information or gradients as elements from the image, registering the elements to the field of view, determining element distances in the field of view thus creating three-dimensional information, recognising patterns with the help of the registered elements and three-dimensional information, and recognising stones or plastic as harmful objects with a criterion preselected with the help of the patterns; the conveyor is automatically stopped, the harmful object is removed, and the trees are moved using the conveyor.
Claims
1. A method for processing of trees using machine vision with software, the method having steps of: debarking trees with a debarker having a debarking drum, while objects existing in the debarking drum smaller than a selected dimension are removed from among trees through bark openings included in the debarking drum, routing trees from the debarking drum over a trap for removing objects that are bigger than a selected dimension from among trees to the trap, illuminating trees on a conveyor after the debarking with a lighting device, by reflecting indirect lighting via ambient light covers on a section of the conveyor covered with the ambient light covers, recording trees on the conveyor after the debarking drum using at least one camera as a detector for creating images, detecting automatically a stone or plastic as a harmful object on the conveyor among trees from the images utilising machine vision in the following steps of: recognising hue information or gradients as elements from the images, registering the elements to a field of view as registered elements, determining element distances in the field of view thus creating three-dimensional information pertaining to the field of view, recognising patterns with the help of the registered elements and three-dimensional information, and recognising harmful objects with a criterion preselected with the help of the patterns; moving the trees forward using the conveyor (20), the method stopping the conveyor automatically when the harmful object is detected on the conveyor, and removing the harmful object on the conveyor from among trees.
2. The method according to claim 1, wherein the preselected criterion is one or more of the following: hue, intensity of colours, pattern shape, pattern size, pattern distance.
3. The method according to claim 1, wherein two cameras equipped with one lens or one stereoscopic camera equipped with two lenses are used as the detector, in which case a distance detector is a software component, which is arranged to determine both hues and distances from the images.
4. The method according to claim 1, wherein a colour camera is used as the camera.
5. The method according to claim 1, wherein a pattern neural network is used for recognising harmful objects wherein input information uses two channels, one of the channels for the three-dimensional information and the other channel for recognised hue information pertaining to the image.
6. The method according to claim 5, wherein the following steps are performed before the recording of trees: harmful objects are manually recognised from training images, edges of the harmful object are marked in training images, and the neural network is told to learn training images based on which the neural network recognises harmful objects.
7. The method according to claim 1, wherein the method uses a separate decomposition system for bark piles formed from bark detached from trees and the bark piles are decomposed with a pressurised water jet system by spraying water at a pressure of 80 to 300 bar to the bark piles for decomposing them for exposing harmful objects potentially existing in the bark piles.
8. The method according to claim 7, wherein the bark piles travelling on an intermediate conveyor are recorded with an additional camera in connection with the intermediate conveyor immediately following the debarking drum before the trap, bark piles are recognised using machine vision of second software from images taken by the additional camera, and water is only sprayed if a bark pile is recognised.
9. A system for detecting harmful objects from among trees, the system including an ambient light cover for covering a conveyor at least partially, at least one lighting device arranged to illuminate trees on the conveyor indirectly via the ambient light cover, a detector comprising at least one camera for recording trees and creating images, arranged in connection with the conveyor, wherein the detector is located after a debarking drum and a trap included in connection with the debarking drum, a computing unit comprising a memory and software for automatically recognising harmful objects from the images based on machine vision, wherein the harmful object is a stone or plastic, and the software is arranged to recognise hues or gradients as elements from the image, register elements to a field of view, determine element distances in the field of view thus creating three-dimensional information pertaining to the field of view, recognise patterns with the help of the registered elements and three-dimensional information, recognise harmful objects with a criterion preselected with the help of the patterns, and create a conveyor stop command automatically when the harmful object is detected on the conveyor, and a data transfer unit for sending the conveyor stop command to the conveyor after a recognition of the harmful object.
10. The system according to claim 9, wherein a surface of the ambient light cover facing the conveyor is matt surfaced for dispersing the light emitted by the lighting device for preventing reflections.
11. The system according to claim 9, wherein the detector consists of two cameras equipped with one lens or one camera equipped with two lenses for detecting the harmful object, and the software is arranged to recognise hues from the images and function as a distance detector for determining distances from the images.
12. The system according to claim 9, the system including a separate decomposition system for bark piles, which comprises a pressurised water jet system for spraying water at a pressure of 80 to 300 bar to an intermediate conveyor for decomposing bark piles to expose harmful objects potentially existing in the bark piles.
13. The system according to claim 12, the decomposition system additionally including an additional camera arranged in the connection with the intermediate conveyor immediately following the debarking drum before the trap for recording bark piles travelling on the intermediate conveyor, a second software for recognising bark piles with machine vision from the images taken by the additional camera for controlling the water jet system, based on the images taken by the additional camera, to operate only when the bark pile is detected on the conveyor.
14. The system according to claim 12, wherein the water jet system includes a high-pressure pump for pressurising water to a pressure of 80 to 300 bar, a pressure accumulator for storing pressurised water, a valve for closing pressurised water in the pressure accumulator, and a water nozzle arranged in connection with the intermediate conveyor for spraying water from the water nozzle to the intermediate conveyor for decomposing bark piles, while the second software controls the valve based on images taken by the additional camera for releasing water from the pressure accumulator to the water nozzle.
15. A wood processing line, which includes a debarking drum for debarking trees, the debarking drum including bark openings with a selected dimension for removing objects smaller than a selected dimension from among trees, a water-filled trap located after the debarking drum in a travel direction of trees for removing loose harmful objects bigger than a selected dimension existing among debarked trees from among trees, a conveyor for advancing debarked trees further, and a system according to claim 9 for recognising harmful objects among trees existing on the conveyor.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0077] The aspects of the disclosed embodiments are described below in detail with reference to the accompanying drawings, which illustrate some of the embodiments of the aspects of the disclosed embodiments.
BRIEF DESCRIPTION OF THE DISCLOSED EMBODIMENTS
[0092]
[0093] Under the debarking drum, there is preferably a separate collection system 60, with which bark is recovered and any stones are separated from it before further processing of the bark by combustion, for example. The wood processing line is preferably a wood processing line 50 that precedes a pulp digester and the purpose of which is to debark and chip wood with a chipper 52 to woodchips of a suitable size, which can be cooked for separating the different ingredients of wood during pulp manufacture.
[0094] From the debarking drum 12, trees can move forward along a separate feed chute or preferably an intermediate conveyor 64 until they arrive at a trap 18. At this stage, stones may still exist among the trees, primarily with a size bigger than the dimensions of the bark openings 16 of the debarking drum 12. The trap 18 is preferably a stone trap, which is water-filled. The purpose of the trap is to collect the stones that still exist among the trees. The dimension of the trap in the travel direction of the trees is preferably shorter than the tree length, preferably less than half of the length of the trees to be moved. For example, the length of the trap can range between 0.4 and 0.8 m if the tree length is from 2.5 to 3.0 m. The tree length can also be notably greater, from 2.5 to 7.0 m. The trap 18 preferably includes a water-filled well 66, which is continuously supplied with water in such a way that the water level of the well 66 is equal to the level of the conveyor preceding or following the trap 18. The water cushion of the well 66 is overflowing, i.e., water flows over the edges of the well 66, and the overflowing water is recovered with chutes surrounding the conveyors 64, 20 and the trap 18 (not shown in the figures). Thus, trees float on the surface of the water cushion over the well 66 of the trap 18 and continue to travel transported by the subsequent conveyor 20. Instead, stones, which are heavier than wood, sink through the water cushion down to the bottom of the well 66. The trap can be emptied during maintenance shutdowns by discharging the water present in the well for removing the stones.
[0095] Advantageously, two successive traps 18 according to
[0096] A more detailed structure of the system is described referring to a first embodiment of a system 100 according to the aspects of the disclosed embodiments shown in
[0097] Advantageously, the inner surface of the ambient light cover, which is used to reflect light indirectly to the conveyor, is matt surfaced in order that light aimed at it is divided to several different parts and does not reflect back directly towards the conveyor. This provides indirect lighting, which prevents generation of reflections and reduces shadows on the conveyor. A matt surface can be provided with a covering of the ambient light cover.
[0098] For suspending cameras 36, the system 100 may include a separate bar 35 shown in
Illumination and Recording
First Embodiment
[0099]
[0100] Two cameras can also be used in stereoscopic measurement, where the conveyor 20 is recorded with two or more cameras 36 that are synchronized with each other. The cameras 36 record the same point on the conveyor from different angles. When the respective location of the cameras 36 is known, a three-dimensional set of point clouds or a disparity image, i.e., a 3D model of the material present on the conveyor can be created with the software means based on the images. When using a disparity image, the image indicates how much the images of the cameras differ from each other. The more the images differ, the closer the light-reflecting object, for example a stone or tree, at the pixel in question is to the camera. Based on a disparity image, an elevation image can be created, which shows the length, width and also height of the objects visible in the camera image; that is, a type of 3D model is created.
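The disparity-to-elevation relation described above can be illustrated with a short sketch. This is not part of the disclosure: the focal length, lens baseline and camera height below are illustrative assumptions, and a real system would obtain the disparity image from the stereo camera or its software.

```python
import numpy as np

# Illustrative sketch only: converting a disparity image to an elevation image.
# FOCAL_PX, BASELINE_M and CAMERA_HEIGHT_M are assumed values, not from the disclosure.
FOCAL_PX = 1400.0      # focal length in pixels (assumed)
BASELINE_M = 0.25      # distance between the two lenses in metres (assumed)
CAMERA_HEIGHT_M = 2.0  # camera height above the conveyor in metres (assumed)

def disparity_to_elevation(disparity: np.ndarray) -> np.ndarray:
    """Larger disparity -> object closer to the camera -> higher elevation."""
    disparity = np.where(disparity <= 0, np.nan, disparity)  # mask invalid pixels
    depth = FOCAL_PX * BASELINE_M / disparity                # distance from camera
    return CAMERA_HEIGHT_M - depth                           # height above the conveyor

# A flat conveyor produces a uniform disparity; one pixel twice as close
# (e.g. a stone on the belt) shows up as elevated in the elevation image.
disp = np.full((4, 4), FOCAL_PX * BASELINE_M / CAMERA_HEIGHT_M)
disp[1, 1] *= 2.0
elev = disparity_to_elevation(disp)
print(round(float(elev[0, 0]), 3), round(float(elev[1, 1]), 3))  # 0.0 1.0
```

The "the more the images differ, the closer the object" rule in the paragraph above corresponds to the division by disparity: disparity is inversely proportional to distance from the camera.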
[0101] In other words, the two separate 2-dimensional images of the stereo camera are merged to form a 3-dimensional image. Merging of the images can take place either directly in the camera or later in the computing unit.
[0102] The camera or cameras used are preferably colour cameras, which enables a separation based on hues. A particularly advantageous method of implementation is to use a colour camera that is also a stereo camera. An example of this is the Ruby 3D depth camera manufactured by the German Nerian Vision GmbH.
[0103] The detection means 22 preferably operate at a sampling frequency selected based on the conveyor's transfer speed and the desired accuracy. In order to find stones, for example, with a diameter of 5 cm while the conveyor moves at 1.2 m/s, the sampling frequency may be, for example, between 5 and 7 Hz if two successive images are wanted for each object. For example, the width of the field of view may correspond to the width of the conveyor, 1.5 m, and the length of the area to be recorded can be 2 m in the longitudinal direction of the conveyor. Generally, the sampling frequency may range between 0.6 and 10 Hz depending on the conveyor speed, the distance between the detection means and the conveyor, and the field of view of the detection means. Advantageously, the sampling frequency is arranged according to the transfer speed of the conveyor so that a detection is obtained for each monitored object from 1 to 10, preferably from 2 to 5, most preferably from 3, successive images. This improves the accuracy of the method.
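The basic relation between conveyor speed, recorded area length and images per object can be sketched as follows. Note that this simple relation only gives a lower bound; the 5 to 7 Hz figure quoted above evidently includes additional margin (object size, overlap between frames), so the function below is an illustrative assumption, not the exact dimensioning rule of the disclosure.

```python
# Illustrative sketch: minimum sampling frequency so that each object passing
# through the recorded area appears in the desired number of successive images.
def min_sampling_hz(conveyor_speed_m_s: float,
                    field_length_m: float,
                    images_per_object: int) -> float:
    time_in_view_s = field_length_m / conveyor_speed_m_s  # how long an object stays in view
    return images_per_object / time_in_view_s

# Conveyor at 1.2 m/s, 2 m long recorded area, two images per object:
print(round(min_sampling_hz(1.2, 2.0, 2), 2))  # 1.2 (Hz, lower bound)
```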
[0104] According to
Second Embodiment: Laser Light
[0105]
[0106] Using the software means 24 of the computing unit 42, the laser line is sought from the images and its length is measured. The laser line can be measured by recording the image comprising the laser line with a camera; the Y coordinate of a point on the laser line in the image represents the distance of this point from the camera, the X coordinate represents the location of this point in the lateral direction, while the third dimension can be defined as a function of the time of recording and the movement of the object. The method is based on laser triangulation, wherein the camera and the laser light are oriented to the same object, however, at an angle relative to each other. The laser beam reflected from the recorded object is oriented to different points in the camera cell depending on the distance between the object and the laser light. Based on this difference, it is possible to determine dimensions of the object when the locations of the laser light and the camera as well as the angle between their orientations are known.
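The triangulation relation described above can be sketched in a few lines. The geometry assumed here (laser plane vertical, camera tilted by a fixed angle, known image scale on the conveyor plane) and the numeric values are illustrative assumptions, not parameters of the disclosed system.

```python
import math

# Illustrative laser-triangulation sketch: the vertical pixel offset of the
# laser line in the image encodes the height of the point it hits.
ALPHA_RAD = math.radians(30.0)  # angle between camera axis and laser plane (assumed)
PIXELS_PER_M = 800.0            # image scale on the conveyor plane (assumed)

def height_from_line_offset(pixel_offset: float) -> float:
    """Height above the conveyor of the point hit by the laser line."""
    # An object of height h shifts the line by h * tan(alpha) on the conveyor
    # plane, i.e. by h * tan(alpha) * PIXELS_PER_M pixels in the image.
    return pixel_offset / (PIXELS_PER_M * math.tan(ALPHA_RAD))

# Under these assumptions a ~23-pixel offset corresponds to roughly 5 cm:
print(round(height_from_line_offset(23.1), 3))  # 0.05 (metres)
```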
[0107] The individual lengths of the laser line are stored in the memory 44 and successive lengths of the laser line are compared to each other. Based on the comparison, the software means are arranged, as a machine vision application, to recognise a stone based on successively growing and then shortening laser line lengths. In other words, stone detection is based on laser triangulation in the second embodiment.
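The comparison step above can be sketched as a simple grow-then-shrink test on the stored series of line lengths. The threshold values and the function itself are illustrative assumptions; the disclosure only states that successive lengths are compared.

```python
# Illustrative sketch of recognising a stone-like bump from successive
# laser-line lengths: the series grows above a baseline and then shortens.
def detect_stone(line_lengths_mm, baseline_mm=0.0, min_peak_mm=30.0):
    """Return True if the series grows past min_peak_mm and then starts shrinking."""
    rising = False
    peak = baseline_mm
    for length in line_lengths_mm:
        if length > peak:
            peak = length        # still growing
            rising = True
        elif rising and length < peak and peak - baseline_mm >= min_peak_mm:
            return True          # grew, then started shortening: a stone-like bump
    return False

print(detect_stone([0, 10, 25, 40, 32, 12, 0]))  # True: grow then shrink
print(detect_stone([0, 5, 8, 10, 12, 14]))       # False: only growing
```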
[0108] In this case, the sampling frequency must be between 100 and 500 Hz, preferably between 200 and 400 Hz, in order that laser lines are detected at an interval of at least 1 cm and a sufficient number of detections of an individual stone is obtained for finding it. In
[0109] Since the use of a camera for recording trees requires the use of a lighting device for illuminating the trees for recording and, on the other hand, the use of a laser light preferably requires a dark environment, it is possible to interrupt the illumination for the camera by using a strobe light as the lighting device, which illuminates the object momentarily for camera recording and interrupts lighting during the use of a laser light. Advantageously, software means are arranged to control the timing of illumination and recording for coordinating the functions.
Data Transfer From a Camera to Software Means
[0110] Measurement data is transferred to the computing unit 42 using data transfer means 48. The data transfer means may consist of wireless data transfer means, such as WLAN (Wi-Fi), or wired fixed data transfer means, such as a field bus. As the computing unit 42, in turn, a normal PC can be used. In the computing unit 42, the measurement data is stored in the memory 44 of the computing unit.
Object Recognition
[0111] Object recognition can be based on two alternative methods that use machine vision, namely, at least partly, on the use of a neural network based on statistical analysis or on deterministic pattern recognition. Of these, the use of a neural network is more preferable, as it is computationally lighter in a real-time process.
[0112] Advantageously, machine vision implemented with software means searches the measurement data for objects with a shape and hue of a stone and corresponding dimensions both in the case of a neural network and in pattern recognition. In other words, hues are recognised from the images and their location is registered to the field of view in the memory of the software means, hue distances in the field of view are determined and stored as three-dimensional information pertaining to the field of view, patterns and, furthermore, stones based on the patterns, are recognised with the help of registered hues and three-dimensional information pertaining to the field of view. A detection of a deviating shape is interpreted as a stone, based on which a conveyor stop command is preferably created.
Conveyor Stop
[0113] Once the software means 24 have recognised a stone 14 among trees 10 according to a selected criterion, the software means 24 preferably generate a conveyor 20 stop command, which can be automatically transferred, using data transfer means 48, to the operation control 68 of the conveyor 20 for stopping the conveyor 20. In addition, an alarm and an image of the stone detection can be presented to the supervisor so that the supervisor can check the image visually by him-/herself. If, in the opinion of the supervisor, the image depicts a stone, he/she can go and remove the stone from the conveyor and the conveyor can be restarted. Alternatively, if the supervisor considers that the system has stopped the line in the image for a detection of a branch, the supervisor can start the conveyor without going to the conveyor. In
Steps of the Method
[0114]
[0115] After this, the method includes, in the second and third embodiments, an additional step 88 in which the conveyor is illuminated with visible light or a laser light beam is sent to the conveyor. The delivered light provides a reflection. In step 90, the conveyor is recorded with a camera and, in step 92, a stone is automatically detected with the detection means based on the images using the software means. The stone recognition of step 92 uses recognition of hues and distances. In step 94, a decision is made whether or not a stone can be recognised on the conveyor based on the detections. If a stone is not detected on the conveyor, the detection means continue making detections for finding stones and, if a stone is found, the conveyor is preferably stopped in step 96 based on the stone detection. Finally, in step 98, the stone is removed from the conveyor. Once the stone is removed, the conveyor can be restarted.
[0116]
[0117] In step 124, based on the locations, a three-dimensional image is produced wherein, in addition to the hue, also the distance relative to the camera has been stored for each pixel. In step 126, these individual pixels are stored in a three-dimensional coordinate system as a three-dimensional image. The minimum camera resolution can be 640×480, preferably from 800×600 to 1440×1056, for example. A resolution of 800×600 provides a dot density of 2 mm at the measuring distance. The resolution can be higher if recording is desired on a larger area or from a bigger distance.
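The dot density quoted above can be checked with a one-line calculation. The 1.5 m field width is taken from the conveyor-width example earlier in the description; treating the horizontal pixel count as spanning that width is an assumption for illustration.

```python
# Illustrative check: point spacing on the conveyor for a given field width
# and horizontal resolution (assumes the sensor width spans the field width).
def dot_density_mm(field_width_m: float, horizontal_pixels: int) -> float:
    return field_width_m * 1000.0 / horizontal_pixels

# 1.5 m wide field at 800 horizontal pixels: roughly 2 mm per point.
print(round(dot_density_mm(1.5, 800), 2))  # 1.88 (mm)
```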
Harmful Object Recognition
Deep Learning Neural Network
[0118] In step 128, the three-dimensional image is processed with a Deep Learning (DL) neural network which has been told to learn images with stones taken with the same system during the calibration stage. In step 128, the DL neural network searches three-dimensional images for objects the hue and shape of which would correspond to the stones trained to it. If one is found, the DL neural network computes a percentage of certainty for it in step 130 and the coordinates of the rectangle surrounding the object in step 132. Advantageously, henceforth, the field of view used in the processing is limited to comprise only the area A of the image defined by the rectangle for increasing the efficiency of observation, as shown marked with broken lines in
[0119] In the step of using the neural network, the input of the neural network is a three-dimensional image which is interpreted with the neural network. More precisely, the input information preferably uses two channels, one channel being said three-dimensional information and the other channel representing the recognised hue information of the image.
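Assembling the two-channel input described above can be sketched as follows. The image dimensions, channel ordering and normalisation ranges are illustrative assumptions; the disclosure only specifies that one channel carries the three-dimensional information and the other the recognised hue information.

```python
import numpy as np

# Illustrative sketch: building a two-channel neural-network input from an
# elevation image (three-dimensional information) and a hue image.
H, W = 600, 800  # assumed image dimensions

elevation = np.random.rand(H, W).astype(np.float32)         # metres, channel 1
hue = np.random.randint(0, 180, (H, W)).astype(np.float32)  # e.g. HSV hue, channel 2

# Normalise each channel to [0, 1] and stack into a (channels, H, W) tensor,
# the usual layout expected by convolutional networks.
x = np.stack([elevation / elevation.max(), hue / 179.0])
print(x.shape)  # (2, 600, 800)
```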
[0120] The software means monitor the stone detection information provided by the DL neural network and compare it to the limit values in step 134. If the percentage of certainty exceeds a preselected limit, for example 50%, the software means perform checks in step 136 on the elevation data of the object and the area surrounding it, aiming to eliminate stops caused by tree ends. With the check, it is examined whether the elevation data of the object continues uniformly in the direction of the longitudinal axis of the object outside the rectangle surrounding the object in one direction and changes rapidly in the other direction. If this is the case, a tree end is probably concerned, not a stone. In addition to said percentage of certainty, the stone recognition data includes the coordinates of the rectangle surrounding the detection, the hue information contained in the image and the elevation data. If the percentage of certainty is exceeded and the dimensions of the object do not match a typical tree end, the conveyor is stopped in step 96 and, in step 138, the images are presented on the control room monitor wall so that operators can estimate, in step 140, whether this is clearly an object other than a stone. If this is the case, operators can restart the conveyor immediately in step 142. If, on the other hand, a potential stone is concerned, operators go next to the conveyor in step 98 to observe the object and possibly remove it from among the trees.
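The tree-end check of step 136 can be sketched as a comparison of mean elevations just outside the detection rectangle: a log continues at a similar level beyond one side of the rectangle but drops sharply on the other. The margin and tolerance values, and the exact form of the test, are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch of the tree-end check: the elevation profile continues
# uniformly past exactly one side of the detection rectangle (along the
# object's longitudinal axis), which suggests a tree end rather than a stone.
def looks_like_tree_end(elev, box, margin=20, tol=0.03):
    """elev: 2-D elevation image; box: (row0, row1, col0, col1) detection rectangle."""
    r0, r1, c0, c1 = box
    inside = elev[r0:r1, c0:c1].mean()
    left = elev[r0:r1, max(c0 - margin, 0):c0].mean()
    right = elev[r0:r1, c1:c1 + margin].mean()
    continues_left = abs(left - inside) < tol
    continues_right = abs(right - inside) < tol
    return bool(continues_left != continues_right)  # uniform on exactly one side

elev = np.zeros((100, 200))
elev[40:60, 20:120] = 0.15             # a log lying along the conveyor
box = (40, 60, 100, 120)               # detection rectangle over the log's end
print(looks_like_tree_end(elev, box))  # True: the log continues on one side only
```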
[0121] When using a neural network, it is to be understood that the neural network must be told to learn a great number, preferably tens or hundreds, of images in which a harmful object appears before the neural network can be utilised in a method according to the aspects of the disclosed embodiments. This training work can be performed using a separate training computer suitable for this purpose. Advantageously, the user has manually defined the harmful object in the images to be trained for the neural network. On the other hand, it is also possible to crop out from the training material things that are not desired to be detected, in other words, to indicate that something is not a harmful object for which the conveyor should be stopped. The training computer is preferably more efficient in its computing capacity, and particularly its graphics computing capacity, than a computer used in real-time observation in a system according to the aspects of the disclosed embodiments. This is because the neural network itself requires relatively little computing power during the method according to the aspects of the disclosed embodiments.
Pattern Recognition
[0122] Pattern recognition can be used instead of or in addition to the harmful object recognition performed with the Deep Learning neural network described above. Pattern recognition utilises the same 3D images for harmful object recognition as the neural network, but with a different principle of operation. Pattern recognition is a deterministic method with accurate initial parameters for harmful object recognition; these parameters are examined utilising certain formulas and conditions while examining the 3D images. These conditions and formulas can be successive and related to, for example, hue gradients and distances in the images. For this reason, calibration of pattern recognition for an application may be more laborious than in the case of a neural network, which is only told to learn a great number of images in which a stone has been detected.
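A minimal deterministic check in the spirit of the paragraph above might threshold the hue and the local hue gradient: stone-like regions fall in a fixed hue range and are locally uniform. The threshold values and the specific conditions below are illustrative assumptions, not the calibrated parameters of the disclosure.

```python
import numpy as np

# Illustrative deterministic pattern-recognition sketch: fixed conditions on
# hue and on the local hue gradient flag stone-like regions in a hue image.
HUE_MIN, HUE_MAX = 0, 40  # greyish/brownish stone hues (assumed range)
GRADIENT_MAX = 5.0        # stones are locally uniform in hue (assumed limit)

def stone_mask(hue: np.ndarray) -> np.ndarray:
    gy, gx = np.gradient(hue.astype(float))
    gradient = np.hypot(gx, gy)  # magnitude of the local hue gradient
    return (hue >= HUE_MIN) & (hue <= HUE_MAX) & (gradient <= GRADIENT_MAX)

hue = np.full((10, 10), 90.0)  # bark-coloured background (assumed hue)
hue[3:7, 3:7] = 20.0           # a uniform stone-hued patch
print(bool(stone_mask(hue).sum() > 0))  # True: the patch interior is flagged
```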
Decomposition System for Bark Piles
[0123] According to an advantageous embodiment, the system additionally includes a decomposition system 75 for bark piles shown in
[0124] More precisely, the decomposition system 75 for bark piles includes a water jet 76, shown in
[0125] The decomposition system 75 may also include, according to
[0126] The water nozzle or nozzles are preferably located on the sides of the intermediate conveyor 64 and oriented slightly upwards from the level of the intermediate conveyor.
[0127]
[0128] In this context, it is to be understood that steps depicted in
System
[0129]
[0130] The content of the software means is described in section 108 and it comprises a machine vision unit 110, a machine learning artificial intelligence unit 112 and a control unit 114. The machine vision unit 110 is arranged for creating and presorting of images, the machine learning (deep learning) artificial intelligence unit 112 is arranged for stone recognition based on the trained material and for storing images and outputs, and the control unit 114 for controlling user interface connections and creating an output analysis and a stopping decision.
[0131] As was described earlier in the context of
[0132] In a system and a method according to the aspects of the disclosed embodiments, for machine vision based pattern recognition, it is possible to use, for example, software known by the product name VisionPro of Cognex Corporation (USA) to perform image processing and recognition. The software can be used with a conventional PC. In turn, the Deep Learning neural network can be, for example, the Halcon software of MVTec. The laser light used in the implementation according to the second embodiment can be, for example, a line laser marketed with the product name Stingray of Coherent Inc. (USA), model STR-660-100-CW-FL-L01-60-S-XX-8, in which the line laser power is 100 mW, the laser line uses the wavelength of 660 nm, and the line laser produces an individual line with an angle of 60° and a prefocusing distance of 500 mm. As the camera used as detection means, it is possible to use a camera marketed with the product name JAI SP-5000-GE2 of Stemmer Imaging AG, which records 44 images per second on a 5 MP CMOS cell. As optics in the context of the camera, it is possible to use, for example, a lens from the CBC Group Computar M0824-MPW2-R series. If the camera used is a colour camera, for example, a Nerian Ruby 3D depth colour stereo camera can be used. As lighting devices producing visible light, it is possible to use, for example, modular LED lights from the LHF300 series of Smart Vision Lights. As the colour camera preferably used as an addition to the detection means, for example, the U3-36L0XC camera manufactured by IDS Imaging Development Systems GmbH can function in this context.
[0133] According to an embodiment not included in the aspects of the disclosed embodiments, illumination can also be direct, but in this case the surface area of the light must be at least equal to the surface area recorded and have a uniform intensity to avoid reflections.
[0134] According to an embodiment not included in the aspects of the disclosed embodiments, a method according to the aspects of the disclosed embodiments can also be utilised for recognising any objects from a material flow in applications other than the debarking process and outside the context of a debarking drum.