METHOD AND SYSTEM FOR ANALYZING AND/OR CONFIGURING AN INDUSTRIAL INSTALLATION
20210356946 · 2021-11-18
Inventors
CPC classification
G05B2219/39271 (Physics)
G05B19/41885 (Physics)
B25J9/163 (Performing Operations; Transporting)
G05B19/4183 (Physics)
International classification
G05B19/418 (Physics)
Abstract
A method for analyzing and/or configuring an industrial installation, which has at least one first installation component for capturing, handling and/or machining at least one first object. A process success of the first installation component is predicted and/or a value for a configuration parameter of the first installation component is determined on the basis of at least one first object model of the first object with the aid of at least one first machine-learned component model of the first installation component.
Claims
1-9. (canceled)
10. A method for analyzing and/or configuring an industrial installation, which includes at least one first installation component for capturing, handling, and/or machining at least one first object, the method comprising: at least one of: predicting a process success of the first installation component, or determining a value for a configuration parameter of the first installation component; wherein the predicting or determining is based on at least one first object model of the first object with the aid of at least one first machine-learned component model of the first installation component.
11. The method of claim 10, further comprising: at least one of: predicting a process success of at least one second installation component, or determining a value for a configuration parameter of the at least one second installation component; wherein the predicting or determining is based on at least one of: at least one of the first object model or at least one second object model of a second object, with the aid of at least one second machine-learned component model of the at least one second installation component, or the at least one second object model of the second object, with the aid of the first machine-learned component model of the first installation component.
12. The method of claim 10, wherein at least one component model of an installation component is at least one of: trained based on one or more object models of at least one of: a) the first object, b) at least one second object of the same type as the first object, or c) at least one second object of a different type than the first object; trained at least partially before installation of the installation component; or has a neural network.
13. The method of claim 12, wherein: the at least one component model of an installation component is trained based on a plurality of different object models; and the different object models are of the same type.
14. The method of claim 12, wherein the neural network is a deep neural network.
15. The method of claim 10, further comprising: making the first component model of the first installation component and the first object model of the first object available to a host; wherein the host predicts the process success or determines the value for the configuration parameter, respectively.
16. The method of claim 15, further comprising: at least one of: predicting with the host a process success of at least one second installation component, or determining with the host a value for a configuration parameter of the at least one second installation component; wherein the predicting or determining is based on at least one of: at least one of the first object model or at least one second object model of a second object, with the aid of at least one second machine-learned component model of the at least one second installation component, or the at least one second object model of the second object, with the aid of the first machine-learned component model of the first installation component.
17. The method of claim 10, wherein at least one of: the method further comprises making at least one object model of an object available to the component model with the aid of at least one of the first installation component or at least one second installation component; or at least one object model comprises at least one of: image data of the object, dimensions of the object, or at least one of mechanical, thermal, electrical, or optical parameters of the object.
18. The method of claim 10, wherein at least one installation component comprises at least one of: at least one sensor; at least one actuator; at least one machine tool; or at least one conveyor.
19. The method of claim 18, wherein at least one of: the at least one sensor is an optical sensor; the at least one actuator is an electromotive actuator; or the at least one actuator is a robot.
20. A system for analyzing and/or configuring an industrial installation, which includes at least one first installation component for capturing, handling, and/or machining at least one first object, the system comprising: means for at least one of: predicting a process success of the first installation component, or determining a value for a configuration parameter of the first installation component; wherein the predicting or determining is based on at least one first object model of the first object with the aid of at least one first machine-learned component model of the first installation component.
21. A computer program product for analyzing and/or configuring an industrial installation, which includes at least one first installation component for capturing, handling, and/or machining at least one first object, the computer program product comprising program code stored on a non-transitory, computer-readable storage medium, the program code, when executed on a computer, causing the computer to: at least one of: predict a process success of the first installation component, or determine a value for a configuration parameter of the first installation component; wherein the predicting or determining is based on at least one first object model of the first object with the aid of at least one first machine-learned component model of the first installation component.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0046] The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate exemplary embodiments of the invention and, together with a general description of the invention given above, and the detailed description given below, serve to explain the principles of the invention.
[0047]
DETAILED DESCRIPTION
[0048]
[0049] By way of example, the installation comprises a first installation component in the form of a robot 10, which is to machine a first object 20 and objects of the same type as it, as well as a further object 30 of a different type and objects of the same type as it; a further installation component in the form of a camera 40; and another further installation component in the form of a further robot 50.
[0050] A first machine-learned component model of the robot 10 in the form of a deep neural network 11, as well as a machine-learned component model of the further robot 50 in the form of a further deep neural network 51, both of which have been pretrained or fully trained at the manufacturer, are provided by the robot manufacturer and loaded onto a host 100.
[0051] From the supplier of the further object 30, an object model 31 of that object is provided and loaded onto the host 100.
[0052] An image 21 of the first object 20 is taken by the camera 40 and provided to the host 100 as an object model 21 of that object.
[0053] On the basis of these object models 21, 31, the host 100 analyzes, with the aid of the component model 11, whether a planned machining of the objects 20, 30 by means of the robot 10 is (probably) feasible and, if necessary, parameterizes the robot 10 for this purpose or outputs corresponding configuration parameter values.
[0054] Analogously, the host 100 uses the component model 51 to analyze, on the basis of the object models 21, 31, whether planned machining of the objects 20, 30 by means of the robot 50 is (probably) feasible, and, if necessary, parameterizes the robot 50 or outputs corresponding configuration parameter values.
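The host-side analysis described in the two preceding paragraphs can be sketched as follows. This is a minimal illustration, not the patented implementation: the `ComponentModel` stub, its `graspable_area` heuristic, and the 0.5 feasibility threshold are all assumptions standing in for a machine-learned component model such as network 11 or 51.

```python
from dataclasses import dataclass

@dataclass
class Prediction:
    feasible: bool   # predicted process success
    grasp_pose: tuple  # configuration parameter values (x, y, angle), or None

class ComponentModel:
    """Stand-in for a machine-learned component model (e.g. network 11 or 51)."""
    def __init__(self, threshold=0.5):
        self.threshold = threshold  # assumed feasibility cutoff

    def score(self, object_model):
        # A real model would run a neural network on image or CAD features;
        # here we use a trivial heuristic on an assumed 'graspable_area' field.
        return min(object_model.get("graspable_area", 0.0), 1.0)

    def predict(self, object_model):
        feasible = self.score(object_model) >= self.threshold
        pose = tuple(object_model.get("centroid", (0.0, 0.0))) + (0.0,)
        return Prediction(feasible, pose if feasible else None)

def analyze(host_models, object_models):
    """Host 100: evaluate each robot's component model against each object model."""
    return {
        (robot, obj_id): model.predict(obj)
        for robot, model in host_models.items()
        for obj_id, obj in object_models.items()
    }
```

A host holding the models 11 and 51 would call `analyze` with both component models and the object models 21, 31, then parameterize each robot from the returned poses.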
[0055] The robot manufacturer has trained the neural networks 11, 51 on the basis of camera images, as provided by cameras of the type of the camera 40, and CAD data 31, as provided for the further object 30, for example in order to classify whether the robot 10 or 50 can grasp the corresponding object, or to determine suitable grasping poses. For this purpose, in addition to object models of objects of the same type as the objects 20, 30 to be handled by the robot 10 or 50 (in the embodiment, camera images or CAD data), object models of objects of a different type are also used, in particular of objects which are not to be handled by the robot 10 or 50 or which are to be handled with different configuration parameter values, in order to also provide the neural networks 11, 51 with negative examples.
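The training regime just described, positive examples of graspable objects and negative examples of objects of a different type, can be sketched with a logistic-regression stand-in for the deep networks 11, 51. The feature vectors, learning rate, and epoch count are illustrative assumptions, not values from the patent.

```python
import numpy as np

def train_graspability_classifier(features, labels, lr=0.1, epochs=200):
    """Toy stand-in for training networks 11/51: positive examples
    (graspable objects, label 1) and negative examples (objects of a
    different type, label 0) drive a logistic-regression update."""
    X = np.asarray(features, dtype=float)
    y = np.asarray(labels, dtype=float)
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted grasp probability
        grad = p - y                             # cross-entropy gradient
        w -= lr * (X.T @ grad) / len(y)
        b -= lr * grad.mean()
    return w, b

def predict_graspable(w, b, feature_vec):
    """Classify a new object model's feature vector as graspable or not."""
    p = 1.0 / (1.0 + np.exp(-(np.asarray(feature_vec) @ w + b)))
    return p >= 0.5
```

In the patented setting the inputs would be camera images or CAD data rather than hand-built feature vectors, and the model a deep neural network rather than a linear classifier.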
[0056] In one embodiment, the (pre-trained) neural network 11 or 51 may be fully trained based on camera images from the camera 40.
[0057] Although embodiments have been explained in the preceding description, it should be noted that a plurality of variations are possible.
[0058] For example, in the above embodiment, object models of different types are processed in the component models 11, 51, namely images 21 on the one hand and CAD data 31 on the other hand.
[0059] In one variation, instead, only object models of the same type are processed in one or both of the component models 11, 51 in each case, i.e. in the embodiment only images 21 or only CAD data 31 are processed in each case in the component models 11 and/or 51.
[0060] This allows the neural networks 11, 51 to operate more specifically and thus, in one embodiment, with improved speed, robustness, and/or precision.
[0061] Additionally or alternatively, the neural network 11 and/or 51 (respectively) can also be trained first on the basis of the images captured by the camera 40.
[0062] Thus, for example, the robot manufacturer may (pre-)train the neural network 11 based on camera images, such as those provided by cameras of the type of camera 40, of objects of the type of the object 20 as positive examples and of objects of the type of the object 30 as negative examples.
[0063] If, in operation, the camera 40 then captures a first object of the type of the object 20, the neural network 11 can predict a positive process success for this or set or predetermine or output corresponding configuration parameter values for this, for example grasping positions or the like.
[0064] If, on the other hand, the camera 40 captures a first object of the type of the object 30 during operation, the neural network 11 can predict a negative process success for this or set or predetermine or output corresponding other configuration parameter values for this, for example other grasping positions or the like.
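The operation-time branching of the two preceding paragraphs amounts to a simple dispatch on the network's score; the 0.5 threshold and the parameter dictionaries are illustrative assumptions.

```python
def runtime_step(network_score, nominal_params, alternative_params, threshold=0.5):
    """Operation-time branch: an object scored as the 'positive' type
    (like object 20) yields a positive process-success prediction and the
    nominal configuration parameter values, e.g. grasping positions; an
    object scored as the 'negative' type (like object 30) yields a
    negative prediction and other configuration parameter values."""
    if network_score >= threshold:
        return True, nominal_params
    return False, alternative_params
```

Here `network_score` would come from evaluating network 11 on the camera image captured by the camera 40 during operation.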
[0065] Furthermore, it should be noted that the embodiments are merely examples which are not intended to limit the scope of protection, the applications and the design in any way. Rather, the preceding description provides the person skilled in the art with a guideline for the implementation of at least one embodiment, whereby various modifications, in particular with respect to the function and arrangement of the described components, can be made without leaving the scope of protection as it results from the claims and these equivalent combinations of features.
[0066] While the present invention has been illustrated by a description of various embodiments, and while these embodiments have been described in considerable detail, it is not intended to restrict or in any way limit the scope of the appended claims to such detail. The various features shown and described herein may be used alone or in any combination. Additional advantages and modifications will readily appear to those skilled in the art. The invention in its broader aspects is therefore not limited to the specific details, representative apparatus and method, and illustrative example shown and described. Accordingly, departures may be made from such details without departing from the spirit and scope of the general inventive concept.
REFERENCE SIGN LIST
[0067] 10 Robot (first installation component)
[0068] 11 Deep neural network (first machine-learned component model)
[0069] 20 First object
[0070] 21 Image (first object model) of the first object
[0071] 30 Further object
[0072] 31 CAD data (object model) of the further object
[0073] 40 Camera (further installation component)
[0074] 50 Robot (further installation component)
[0075] 51 Deep neural network (machine-learned component model)
[0076] 100 Host