METHOD FOR THE COMPUTER-ASSISTED LEARNING OF AN ARTIFICIAL NEURAL NETWORK FOR DETECTING STRUCTURAL FEATURES OF OBJECTS
20230017444 · 2023-01-19
CPC classification: G06F18/2414, G06V20/70, G06V10/7788, G06V10/774
International classification: G06V10/774, G06V10/778, G06V10/94
Abstract
A method for the computer-aided training of an artificial neural network (ANN) for recognizing structural features on objects, by means of which structural features on objects can be recognized rapidly and reliably. This is achieved in that a convolutional neural network (CNN) having a multiplicity of neurons is used for the training of an ANN for feature recognition on objects. Said network comprises a multiplicity of convolutional and/or pooling layers for the extraction of information from images of individual objects. In this case, the images of the objects are scaled up and/or down from layer to layer. During the scaling of the images, information about the structural features of the objects is maintained, specifically independently of the scaling of the images.
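The scale handling described in the abstract can be pictured as a convolutional feature extractor in which every convolutional/pooling stage rescales the image while the extracted feature maps are carried along and, where needed, resampled back to a common resolution, so that the information about the structural features survives the scaling. The patent does not prescribe a framework or a concrete architecture; the following minimal sketch uses PyTorch and arbitrarily chosen layer sizes purely for illustration.

# Illustrative only: a tiny multi-scale feature extractor. The framework
# (PyTorch), channel counts and number of stages are assumptions and are
# not taken from the patent.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleFeatureExtractor(nn.Module):
    def __init__(self, in_channels=3, width=16):
        super().__init__()
        # Each stage extracts features; pooling then scales the image down by 2.
        self.stage1 = nn.Sequential(nn.Conv2d(in_channels, width, 3, padding=1), nn.ReLU())
        self.stage2 = nn.Sequential(nn.Conv2d(width, 2 * width, 3, padding=1), nn.ReLU())
        self.stage3 = nn.Sequential(nn.Conv2d(2 * width, 4 * width, 3, padding=1), nn.ReLU())
        self.pool = nn.MaxPool2d(2)

    def forward(self, x):
        f1 = self.stage1(x)              # full resolution
        f2 = self.stage2(self.pool(f1))  # 1/2 resolution
        f3 = self.stage3(self.pool(f2))  # 1/4 resolution
        # Scale the coarser feature maps back up so the structural-feature
        # information is available independently of the scaling of the images.
        size = f1.shape[-2:]
        f2_up = F.interpolate(f2, size=size, mode="bilinear", align_corners=False)
        f3_up = F.interpolate(f3, size=size, mode="bilinear", align_corners=False)
        return torch.cat([f1, f2_up, f3_up], dim=1)

features = MultiScaleFeatureExtractor()(torch.randn(1, 3, 128, 128))
print(features.shape)  # torch.Size([1, 112, 128, 128])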
Claims
1. A method for the computer-aided training of an artificial neural network for recognizing structural features on objects, in particular on plants or on plant constituents, wherein the network used is a convolutional neural network (CNN), in particular a regional convolutional neural network (R-CNN), having a multiplicity of neurons, said network comprising a multiplicity of convolutional and/or pooling layers for the extraction of information from images of the objects having the structural features to be recognized, for a classification of the features by further layers, the method comprising scaling the images up and down from layer to layer and, during the scaling of the images from layer to layer, obtaining information about the structural features of the objects, specifically independently of the scaling of the images.
2. The method for the computer-aided training of an artificial neural network for recognizing structural features on objects as claimed in claim 1, further comprising simultaneously transferring a plurality of images of the same object from different perspectives to the neural network for recognizing the structural features, wherein computer-aided operations for recognizing the structural features of the object are carried out on the images in parallel on a plurality of GPUs.
3. The method for the computer-aided training of an artificial neural network for recognizing structural features on objects as claimed in claim 2, wherein the plurality of images of the same object are scaled, in particular rescaled, and stitched prior to transfer to the neural network.
4. The method for the computer-aided training of an artificial neural network for recognizing structural features on objects as claimed in claim 1, wherein labeling or designation of the features of objects is carried out semiautomatically on the images in preparation for the training process of the neural network, wherein preferably firstly the features and/or the objects are isolated from a background of the images and in particular afterward labeling of the features is carried out by a person.
5. The method for the computer-aided training of an artificial neural network for recognizing structural features on objects as claimed in claim 4, wherein labeling of the features of the objects is proposed in a computer-aided manner.
6. The method for the computer-aided training of an artificial neural network for recognizing structural features on objects as claimed in claim 4, wherein the images used are presorted prior to the labeling, wherein only images whose objects and/or features of the objects differ from objects and/or features of the objects of other images are used for the labeling.
7. The method for the computer-aided training of an artificial neural network for recognizing structural features on objects as claimed in claim 4, wherein after the labeling the images are grouped into groups of images having objects which have few structural features, many structural features and/or complex structural features.
8. The method for the computer-aided training of an artificial neural network for recognizing structural features on objects as claimed in claim 1, wherein for the training of the network the images having the objects whose features are to be recognized are fed only to a few layers, in particular to the upper layers, and only the weightings of these layers are adapted, wherein the rest of the layers are not adapted for the training process of the network, in particular their weightings remain unchanged for all of the images.
9. The method for the computer-aided training of an artificial neural network for recognizing structural features on objects as claimed in claim 1, wherein for the training of the network only the weightings of the upper layers are adapted and the rest of the layers are not adapted for the training process, wherein in particular the weightings of the upper layers are adapted for each image and for the rest of the layers the weightings are not adapted for all of the images.
10. The method for the computer-aided training of an artificial neural network for recognizing structural features on objects as claimed in claim 1, wherein the individual method steps are carried out simultaneously in parallel on a plurality of computer units, wherein the necessary operations are distributed among all the computer units in such a way that an optimum utilization of the computer capacity is attained.
11. The method for the computer-aided training of an artificial neural network for recognizing structural features on objects as claimed in claim 4, wherein the labeling is carried out on a 3-dimensional image or object, wherein the 3-dimensional image or object is projected onto two dimensions and is fed to the neural network for training purposes and is subsequently converted back into a 3-dimensional image or object.
12. The method for the computer-aided training of an artificial neural network for recognizing structural features on objects as claimed in claim 11, wherein a recurrent neural network is used for the processing of the third dimension of the image or object, said network processing series of images of an object as a part of an image sequence, whereby information of an object from one perspective is transferred to other perspectives of the same object.
13. The method for the computer-aided training of an artificial neural network for recognizing structural features on objects as claimed in claim 1, wherein the outputs of the neurons are fed again to the neural network for self-training purposes, wherein output errors are recognized by the network and/or a person and are marked as such.
14. The method for the computer-aided training of an artificial neural network for recognizing structural features on objects as claimed in claim 13, wherein the person makes available to the network information by means of which said network recognizes how the feature of the object is to be treated, in particular cut and/or grasped.
15. The method for the computer-aided training of an artificial neural network for recognizing structural features on objects as claimed in claim 1, wherein the recognized structural features of the objects are used in order to calibrate a laser for a treatment of the object and/or in order to control a laser in such a way that the latter cuts the object in a targeted manner.
16. An artificial neural network comprising a multiplicity of neurons, wherein the network is configured in such a way that it is trained by a method as claimed in claim 1.
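Claims 8 and 9 describe a training regime in which only the weightings of the upper layers are adapted while the remaining layers stay unchanged for all images, i.e. a transfer-learning-style fine-tuning of an already trained network. The patent does not fix a backbone or a framework; the following sketch, which assumes PyTorch and a torchvision ResNet-18 purely for illustration, only demonstrates the freezing idea.

# Illustrative only: freeze the lower layers of a pretrained CNN and adapt
# only the upper layers, as outlined in claims 8 and 9. The backbone,
# learning rate and number of feature classes are assumptions.
import torch
import torch.nn as nn
from torchvision import models

# Loads a generic pretrained backbone; the patent does not name one.
model = models.resnet18(weights="IMAGENET1K_V1")

# The weightings of the lower layers remain unchanged for all images.
for param in model.parameters():
    param.requires_grad = False

# Only the upper layers are adapted: here the last block and a new head.
model.fc = nn.Linear(model.fc.in_features, 5)   # e.g. 5 structural-feature classes
for param in model.layer4.parameters():
    param.requires_grad = True

# Only the trainable (upper-layer) parameters are handed to the optimizer.
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)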
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0032] Preferred exemplary embodiments of the invention are described in greater detail below with reference to the drawings.
DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS
[0043] One exemplary embodiment of an apparatus is illustrated highly schematically in the drawing.
[0044] The method and the apparatus essentially serve for the automated propagation of plants. The propagation rate or growth rate for plants is improved by the apparatus illustrated here and also by the method according to the invention. In the exemplary embodiment illustrated in the drawing, a plant 10 is picked up by a first gripping means 18 having tweezers 19.
[0045] The plant 10 hanging from the tweezers 19 is then fed to a further image recognition device having two further cameras 21, 22. These cameras 21, 22 take photographs of the hanging plant 10 from various perspectives. The information about the plant 10 thus obtained is used by the control unit 17 to recognize plant-specific features of the plant. These plant-specific features can be, for example, the species of the plant and also properties of leaves, stems or branches. It is additionally conceivable that the control unit 17 recognizes the species of the plant. Equally, however, it is also conceivable that an operator has previously input the species of the plant to be propagated into the control unit 17 via an input means. In the control unit 17, an ideal cut position or an ideal cut pattern is then ascertained by the ANN according to the invention on the basis of the recognized plant-specific features. For this determination, the ANN uses not only the information about the present plant 10 but also information about previous plants and data that were previously made available to the neural network by an operator.
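The evaluation of the two camera views just described corresponds to claims 2 and 3, in which several perspectives of the same object are rescaled, stitched and processed in parallel on a plurality of GPUs. The snippet below is only a schematic illustration of that flow; the network, the image sizes and the GPU setup are assumptions and are not taken from the patent.

# Illustrative only: rescale two perspective images of the same plant to a
# common size, stitch them, and evaluate the views in parallel on several
# GPUs if available (claims 2 and 3). Model and sizes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

view_a = torch.rand(1, 3, 480, 640)   # stands in for the image from camera 21
view_b = torch.rand(1, 3, 600, 800)   # stands in for the image from camera 22

# Rescale both views to a common resolution, then stitch them side by side.
common = (256, 256)
view_a = F.interpolate(view_a, size=common, mode="bilinear", align_corners=False)
view_b = F.interpolate(view_b, size=common, mode="bilinear", align_corners=False)
stitched = torch.cat([view_a, view_b], dim=-1)   # (1, 3, 256, 512), single stitched input

# Alternatively, treat the individual views as a batch so they can be split across GPUs.
views = torch.cat([view_a, view_b], dim=0)       # (2, 3, 256, 256)
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                      nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 4))
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model).cuda()        # roughly one view per GPU
    views = views.cuda()
per_view_features = model(views)                 # (2, 4)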
[0046] With the aid of the ANN and the method according to the invention, it is possible not only to determine the ideal cut line but also to determine the type and/or the physical properties of a cutting means for an optimum cut. In the exemplary embodiment illustrated in the drawing, the cutting means used is a laser 23.
[0047] The separated constituent 24 or the clone then falls onto a second conveyor 25. It can be provided that a second gripping means 26 grasps the clone 24 from this second conveyor 25 and feeds it to a container 27 having a nutrient medium 28. A camera 29, which is connected to the control unit 17 for ascertaining an optimum gripping position, is likewise used for the preferred picking up of the clone 24 by the second gripping means 26. The containers 27 thus filled are then transferred out of the work region 13 via a third conveyor 30 and a conveying means 31. Directions of movement of the individual components are symbolized by the arrows illustrated in the drawing.
[0048] Consequently, by way of the image recognition illustrated in the drawing, a largely automated propagation of plants is made possible.
[0051] Depending on the type of plant and also the requirements in respect of division, it can be advantageous to use various cut images for the cutting.
[0052] Besides the U-cut 52 illustrated in the drawing, a V-cut 56 is also conceivable.
[0053] Furthermore, it can be provided that the V-cut 56 from the drawing is used for the cutting.
[0054] Besides the examples of cut images illustrated in the drawings, further cut images are conceivable.
[0055] One exemplary embodiment of the image recognition of a plant 57 is illustrated highly schematically in the drawing. In this exemplary embodiment, the plant 57 is surrounded by a ring-like image recognition device 58 in which a plurality of cameras 59 and associated illuminants 60 are arranged.
[0056] In a first step of the image recognition, two, preferably adjacent, cameras 59 are activated. At the same time, illuminants 60 situated near the cameras 59 are triggered and sufficiently illuminate the plant 57.
[0057] The images thus captured are evaluated by the above-discussed control unit or by the ANN. This evaluation includes the recognition of plant-specific features along which the plant can preferably be divided by a cutting means. This image capture, or the sequence of individual recordings, lasts a few hundred milliseconds.
[0058] Furthermore, it can be provided that the plant 57 is cut by a cutting means directly in the ring-like image recognition device 58, also called a theatre. The separated constituent of the plant 57 can either be grasped by a further gripping means or be conveyed away on a conveyor positioned below the image recognition device 58.
[0059] The cameras 59 that are activated in the first step are subsequently deactivated, and further, preferably adjacent, cameras 59 together with their illuminants 60 are activated in turn, so that the plant 57 is successively recorded from different perspectives.
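The capture sequence can be pictured as a loop over adjacent camera pairs around the ring. The sketch below uses a purely hypothetical Camera/Illuminant interface, since the patent names no control API, and only illustrates the pairwise triggering and the roughly few-hundred-millisecond overall duration.

# Illustrative only: trigger adjacent camera pairs (with their illuminants)
# one after another around the ring-like image recognition device. The
# Camera and Illuminant classes are hypothetical placeholders.
import time
from dataclasses import dataclass

@dataclass
class Camera:
    index: int
    def capture(self):
        return f"image_from_camera_{self.index}"   # placeholder for a real frame

@dataclass
class Illuminant:
    index: int
    def flash(self):
        pass                                        # placeholder for a real trigger

def capture_ring_sequence(num_cameras=8, exposure_s=0.02):
    cameras = [Camera(i) for i in range(num_cameras)]
    lights = [Illuminant(i) for i in range(num_cameras)]
    images = []
    # Step through adjacent pairs: (0, 1), (2, 3), ... around the ring.
    for i in range(0, num_cameras, 2):
        j = (i + 1) % num_cameras
        lights[i].flash()
        lights[j].flash()
        images.append(cameras[i].capture())
        images.append(cameras[j].capture())
        time.sleep(exposure_s)   # the whole sequence stays in the range of a few 100 ms
    return images

print(len(capture_ring_sequence()))   # 8 images from 4 camera pairs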
[0060] A further exemplary embodiment of an image recognition device 61 is illustrated in the drawing. In this exemplary embodiment, cameras 62 and illuminants 63 are arranged around the plant 64 in a different geometry.
[0061] In addition to the exemplary embodiments of the image recognition devices 58, 61 illustrated here, further geometries having more or fewer cameras are conceivable. These image recognition devices 58, 61 can be assigned to the exemplary embodiments of the invention described above.
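The image sequences that the devices 58 and 61 record from different perspectives could, in the sense of claim 12, be processed by a recurrent neural network so that information gained from one perspective is carried over to the other perspectives of the same object. The following sketch (a per-image convolutional encoder followed by a GRU over the perspective sequence) is only one conceivable realization; the architecture and all sizes are assumptions.

# Illustrative only: propagate information across perspectives of the same
# object with a recurrent network, as outlined in claim 12. All sizes are
# assumptions and are not taken from the patent.
import torch
import torch.nn as nn

class PerspectiveSequenceModel(nn.Module):
    def __init__(self, feat_dim=64, hidden_dim=64, num_classes=5):
        super().__init__()
        # Per-image encoder (stands in for the convolutional/pooling layers).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, feat_dim),
        )
        # Recurrent part: carries information from one perspective to the next.
        self.rnn = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_classes)

    def forward(self, image_sequence):
        # image_sequence: (batch, num_perspectives, 3, H, W)
        b, t, c, h, w = image_sequence.shape
        feats = self.encoder(image_sequence.reshape(b * t, c, h, w)).reshape(b, t, -1)
        out, _ = self.rnn(feats)
        # Classify each perspective using context from the preceding ones.
        return self.head(out)

logits = PerspectiveSequenceModel()(torch.randn(2, 6, 3, 64, 64))
print(logits.shape)  # torch.Size([2, 6, 5])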
LIST OF REFERENCE SIGNS
[0062] 10 Plant
[0063] 11 Container
[0064] 12 Conveying means
[0065] 13 Work region
[0066] 14 First conveyor
[0067] 15 Camera
[0068] 16 Camera
[0069] 17 Control unit
[0070] 18 First gripping means
[0071] 19 Tweezers
[0072] 20 Conveying means
[0073] 21 Camera
[0074] 22 Camera
[0075] 23 Laser
[0076] 24 Constituent
[0077] 25 Second conveyor
[0078] 26 Second gripping means
[0079] 27 Container
[0080] 28 Nutrient medium
[0081] 29 Camera
[0082] 30 Third conveyor
[0083] 31 Conveying means
[0084] 32 Plant
[0085] 33 Camera
[0086] 34 Camera
[0087] 35 Control unit
[0088] 36 Laser
[0089] 37 Gripping means
[0090] 38 Tweezers
[0091] 39 Robot arm
[0092] 40 Constituent
[0093] 41 Conveyor
[0094] 42 Plant
[0095] 43 Conveyor
[0096] 44 Arrow direction
[0097] 45 Camera
[0098] 46 Camera
[0099] 47 Control unit
[0100] 48 Laser
[0101] 49 Robot arm
[0102] 50 Gripping means
[0103] 51 Constituent
[0104] 52 U-cut
[0105] 53 Leaf
[0106] 54 Stem
[0107] 55 Plant
[0108] 56 V-cut
[0109] 57 Plant
[0110] 58 Image recognition device
[0111] 59 Camera
[0112] 60 Illuminant
[0113] 61 Image recognition device
[0114] 62 Camera
[0115] 63 Illuminant
[0116] 64 Plant