METHOD AND DEVICE FOR MONITORING AN INDUSTRIAL PROCESS STEP
20210390303 ยท 2021-12-16
Assignee
Inventors
- Thomas NEUMANN (Villingen-Schwenningen, DE)
- Daniel MARCEK (Haigerloch-Stetten, DE)
- Florian WEISS (Furtwangen, DE)
Cpc classification
Y02P90/02
GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
G05B2219/31447
PHYSICS
International classification
Abstract
A method for monitoring an industrial process step of an industrial process by a monitoring system. A machine learning system of the monitoring system is provided that contains a correlation between digital image data as input data and process states of the industrial process step to be monitored as output data, using at least one machine-trained decision algorithm. Digital image data is recorded by at least one image sensor of at least one image acquisition unit of the monitoring system. At least one current process state is determined using the decision algorithm by generating at least one current process state of the industrial process step as output data from the recorded digital image data as input data of the machine learning system. The industrial process step is monitored by generating a visual, acoustic and/or haptic output as a function of the at least one determined current process state.
Claims
1. A method for monitoring an industrial process step of an industrial process via a monitoring system, the method comprising: providing a machine learning system of the monitoring system that contains a correlation between digital image data as input data and process states of the industrial process step to be monitored as output data using at least one machine-trained decision algorithm; recording digital image data via at least one image sensor of at least one image acquisition unit of the monitoring system; determining at least one current process state of the industrial process step using the decision algorithm of the machine learning system by generating, based on the trained decision algorithm, at least one current process state of the industrial process step as output data of the machine learning system from the recorded digital image data as input data of the machine learning system; and monitoring the industrial process step by generating a visual, acoustic and/or haptic output via an output unit as a function of the at least one determined current process state.
2. The method according to claim 1, wherein the machine learning system contains an artificial neural network as a decision algorithm.
3. The method according to claim 1, wherein the digital image data are recorded by at least one mobile device that is adapted to be carried by a person involved in the industrial process step and on which at least one digital image sensor of an image acquisition unit is arranged and are transmitted to the machine learning system.
4. The method according to claim 1, wherein, in a training mode, using a training module of the machine learning system, one or more parameters of the decision algorithm are learned based on the recorded digital image data, and/or wherein, in a productive mode, using the decision algorithm of the machine learning system, the at least one current process state of the industrial process step is determined.
5. The method according to claim 1, wherein the at least one current process state of the industrial process step is determined by the decision algorithm run on at least one mobile device, which is adapted to be carried by a person involved in the industrial process step.
6. The method according to claim 5, wherein the recorded digital image data are transmitted to a data processing system accessible over a network, wherein one or more parameters of the decision algorithm are learned based on the recorded digital image data using a training module of the machine learning system that is run on the data processing system, and then the parameters of the decision algorithm are transmitted from the data processing system to the mobile device adapted to be carried by the person and are used as a basis for the decision algorithm.
7. The method according to claim 1, wherein the recorded digital image data are transmitted to a data processing system accessible over a network, wherein the at least one current process state of the industrial process step is determined by the decision algorithm run on the data processing system, wherein subsequently, as a function of the determined current process state of the industrial process step, the output unit is controlled by the data processing system for generating the visual, acoustic and/or haptic output.
8. The method according to claim 7, wherein one or more parameters of the decision algorithm are learned based on the recorded digital image data using a training module of the machine learning system which is run on the data processing system.
9. The method according to claim 1, wherein, on the data processing system, a plurality of decision algorithms is stored, each of which was or is independently trained, wherein, as a function of a selection criterion and/or optimization criterion, a decision algorithm is selected from this plurality of decision algorithms, and wherein the selected decision algorithm is used as a basis for determining the current process state.
10. A monitoring system for monitoring an industrial process step of an industrial process, the monitoring system comprising: at least one image acquisition unit having at least one digital image sensor to record digital image data; a machine learning system having at least one machine-trained decision algorithm containing a correlation between digital image data as input data of the machine learning system and process states of the industrial process step to be monitored as output data of the machine learning system; at least one computing unit to determine at least one current process state of the industrial process step using the decision algorithm which is executable on the computing unit, in that, based on the trained decision algorithm, at least one current process state of the industrial process step is generated as output data of the machine learning system from the recorded digital image data generated as input data of the machine learning system; and an output unit that is set up to generate a visual, acoustic and/or haptic output to a person as a function of the at least one determined current process state.
11. The monitoring system according to claim 10, wherein the machine learning system comprises an artificial neural network as a decision algorithm.
12. The monitoring system according to claim 10, wherein the monitoring system includes at least one mobile device, which is designed to be carried by at least one person and on which the at least one digital image sensor of the image acquisition unit is arranged in such a way that digital image data are recordable, wherein the mobile device is set up to transmit the recorded digital image data to the machine learning system.
13. The monitoring system according to claim 10, wherein the monitoring system has a training mode in which one or more parameters of the decision algorithm are learned based on the recorded digital image data using a training module of the machine learning system, and/or wherein the monitoring system has a productive mode in which the decision algorithm of the machine learning system determines at least one current process state of the industrial process step.
14. The monitoring system according to claim 10, wherein the monitoring system has a mobile device comprising a computing unit and is adapted to be carried by a person involved in the industrial process step, wherein the mobile device is set up to determine the at least one current process state of the industrial process step using the decision algorithm executed on the computing unit.
15. The monitoring system according to claim 14, wherein the monitoring system has a data processing system accessible over a network, which is set up to receive the digital image data recorded by the image acquisition unit, to learn one or more parameters of the decision algorithm based on the received digital image data using a training module of the machine learning system which is run on the data processing system and then to transmit the parameters of the decision algorithm from the data processing system to the mobile device carried by the person.
16. The monitoring system according to claim 10, wherein the monitoring system has a data processing system accessible over a network, which is set up to receive the digital image data recorded by the image acquisition unit, to determine at least one current process state of the industrial process step using the decision algorithm executed on the data processing system and, as a function of the determined current process state of the industrial process step, to control the output unit for generating the visual, acoustic and/or haptic output.
17. The monitoring system according to claim 16, wherein the data processing system is further set up to learn one or more parameters of the decision algorithm based on the received digital image data using a training module of the machine learning system run on the data processing system and to use these as a basis for the decision algorithm.
18. The monitoring system according to claim 10, wherein the monitoring system is designed to carry out a method comprising: providing a machine learning system of the monitoring system that contains a correlation between digital image data as input data and process states of the industrial process step to be monitored as output data using at least one machine-trained decision algorithm; recording digital image data via at least one image sensor of at least one image acquisition unit of the monitoring system; determining at least one current process state of the industrial process step using the decision algorithm of the machine learning system by generating, based on the trained decision algorithm, at least one current process state of the industrial process step as output data of the machine learning system from the recorded digital image data as input data of the machine learning system; and monitoring the industrial process step by generating a visual, acoustic and/or haptic output via an output unit as a function of the at least one determined current process state.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0051] The present invention will become more fully understood from the detailed description given hereinbelow and the accompanying drawings which are given by way of illustration only, and thus, are not limitive of the present invention, and wherein:
[0052]
[0053]
[0054]
DETAILED DESCRIPTION
[0055]
[0056] The digital image data recorded by the image sensors 110 and 120 is then made available to a first computing unit 130, which, based on its calculations, then controls an output unit 140 of the augmented reality system 100. The output unit 140 is designed to provide a visual, acoustic and/or haptic output to a person.
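By way of illustration only, and not as part of the disclosed embodiment, the data flow of the preceding paragraph (recorded image data yielding a process state that drives the output unit 140) can be sketched as follows. All class names, channel names, and the example policy are hypothetical:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class OutputUnit:
    """Hypothetical stand-in for output unit 140: records the outputs it emits."""
    emitted: List[str]

    def emit(self, channel: str, message: str) -> None:
        # channel may be "visual", "acoustic" or "haptic"
        self.emitted.append(f"{channel}:{message}")

def forward_to_output(process_state: str, unit: OutputUnit,
                      policy: Callable[[str], str]) -> None:
    """Sketch of the first computing unit 130 controlling the output unit:
    map a determined process state to an output channel and emit it."""
    unit.emit(policy(process_state), process_state)

# Illustrative policy: warn acoustically on a fault, otherwise display the state.
policy = lambda state: "acoustic" if state == "fault" else "visual"
unit = OutputUnit(emitted=[])
forward_to_output("fault", unit, policy)
forward_to_output("running", unit, policy)
```

The policy function here is an assumption; the patent leaves the mapping from process state to output modality open.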
[0057] Neither the image sensors 110 and 120 nor the output unit 140 necessarily has to be an integral part of a mobile device. It is also conceivable that these are distributed components that are linked to the computing unit 130 only via the mobile device. Preferred, however, is an integral solution in which the mobile device, for example AR glasses or VR glasses, contains both the image sensors 110 and 120 and the output unit 140.
[0058] Thus, it is advantageous if the image sensors 110 and 120 as well as the output unit 140 are part of a glasses design worn by the relevant person. The first computing unit 130 can also be part of the glasses, which enables a very compact design. However, it is also conceivable that the computing unit 130 is worn as a mobile device on the body of the relevant person and is connected to the glasses by wire and/or wirelessly.
[0059] The monitoring system 1 also has a data processing system 300, which is connected via a network 200 to the mobile device 100, that is, the augmented reality system 100. The data processing system 300 has a second computing unit 310, which is set up to assist in determining the current process state. For example, the second computing unit 310 of the data processing system 300 can run a training module with which a decision algorithm is trained. It is also conceivable that the second computing unit 310 runs a productive module with which the current process state is determined based on a decision algorithm.
[0060] Furthermore, a configuration unit 400 of the data processing system 300 can be accessed via the network 200; it may contain information, in particular, regarding the classification of the images. This is useful, for example, if the recorded image data, be it 2D image data or 3D image data, has been previously analyzed and, possibly, classified.
[0061]
[0062] The image data D110 and/or the image data D120 are provided to the first decision module 131 of the first computing unit 130 of the augmented reality system 100, wherein the first decision module 131 is designed to run a decision algorithm, for example in the form of a neural network. The decision algorithm of the first decision module 131 is part of a machine learning system and contains a correlation between digital image data as input data on the one hand and process states of the industrial process step to be monitored as output data on the other. The decision algorithm of the first decision module 131 is fed with the image data D110 and/or D120 as input data and then determines the current process state D131 as output data. The current process state D131 is decision data generated locally by the decision algorithm run on the first computing unit 130 using the first decision module 131. The current process state D131 determined in this way is then transmitted via an interface of the first computing unit 130 to the output unit 140, where a corresponding acoustic, visual and/or haptic output can take place. The output unit 140 may be designed in such a way that it generates a corresponding output directly on the basis of the determined current process state D131. However, it is also conceivable that, based on the current process state D131, a corresponding control of an output unit 140 without intelligence of its own takes place.
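The inference chain of paragraph [0062] (image data D110/D120 in, process state D131 out) can be sketched as follows, purely for illustration. A single random linear layer stands in for the trained neural network, and the process-state names are hypothetical:

```python
import numpy as np

# Hypothetical process states serving as output classes of the decision algorithm.
PROCESS_STATES = ["idle", "running", "fault"]

class DecisionModule:
    """Sketch of the first decision module 131: one linear layer standing in
    for the machine-trained decision algorithm (e.g. a neural network)."""

    def __init__(self, n_features: int, rng_seed: int = 0) -> None:
        rng = np.random.default_rng(rng_seed)
        self.weights = rng.normal(size=(n_features, len(PROCESS_STATES)))

    def determine_state(self, image_data: np.ndarray) -> str:
        # Flatten the recorded image data (D110/D120) into a feature vector,
        # score each process state and return the most likely one (D131).
        features = image_data.ravel()
        scores = features @ self.weights
        return PROCESS_STATES[int(np.argmax(scores))]

module = DecisionModule(n_features=16)
state = module.determine_state(np.ones((4, 4)))   # a toy 4x4 "image"
assert state in PROCESS_STATES
```

In the disclosed system the algorithm would of course be trained rather than randomly initialized; the sketch only shows the input-to-output shape of the decision step.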
[0063] The augmented reality system 100 may operate independently of a possibly existing server system with regard to the productive mode, wherein the decision algorithm can be trained or remain untrained. It is conceivable that the first decision module will also carry out a training mode in order to further train the decision algorithm available in the first decision module. Training mode and productive mode are thus run together by the first computing unit 130.
[0064] It is conceivable that the image data D110 and D120 are transmitted to the data processing system 300 already known from
[0065] If the parameters D312 of the decision algorithm further trained by the data processing system 300 are provided via the network 200, these parameters D312 are made available to the first decision module 131. The decision algorithm existing there is then supplemented, extended or replaced by the parameters D312, so that the productive mode of the first decision module 131 is based on a decision algorithm trained in the data processing system. At the same time, of course, the image data D110 and D120 continue to be provided to the first decision module 131 in order to determine the current process state D131 locally on the first computing unit 130. The basis of the decision module 131 is thus continuously improved by the remotely trained decision algorithm, which can improve the recognition rate.
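As an illustrative sketch only, the parameter handover of paragraph [0065] (remotely trained parameters D312 replacing the local decision algorithm's parameters) might look like the following; the class and the full-overwrite update rule are assumptions, since the patent also allows supplementing or extending the parameters:

```python
import numpy as np

class LocalDecisionModule:
    """Sketch of decision module 131 whose parameters can be replaced by
    remotely trained parameters D312 received over the network 200."""

    def __init__(self, weights: np.ndarray) -> None:
        self.weights = weights

    def update_parameters(self, remote_weights: np.ndarray) -> None:
        # Overwrite the local decision algorithm's parameters with those
        # trained on the data processing system 300 (one of the variants
        # named in [0065]; supplementing/extending would merge instead).
        if remote_weights.shape != self.weights.shape:
            raise ValueError("parameter shapes must match")
        self.weights = remote_weights

local = LocalDecisionModule(weights=np.zeros((8, 3)))
d312 = np.ones((8, 3))          # parameters D312 received over network 200
local.update_parameters(d312)   # local inference now uses the remote training
```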
[0066] However, it is also conceivable that, alternatively or in parallel, the data processing system 300 determines the current process state in a productive mode of the second computing unit 310 and then provides it to the first computing unit 130. If the current process state is determined only by the data processing system 300, it is transferred to the output unit 140 as data D311. However, if a corresponding current process state D131 is determined at the same time by the first computing unit 130 and the decision module 131 contained therein, both process states are made available to the output unit, which can then generate a corresponding output from the two process states (local: D131; remote: D311).
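Paragraph [0066] leaves open how the output unit reconciles a local state D131 with a remote state D311. One conceivable policy is sketched below for illustration; the precedence rule (prefer the remote state on disagreement, on the assumption that the data processing system runs the more recently trained algorithm) is an assumption, not something the patent specifies:

```python
from typing import Optional

def combine_states(local_state: Optional[str],
                   remote_state: Optional[str]) -> str:
    """Sketch of one possible policy for output unit 140 when a locally
    determined state (D131) and/or a remotely determined state (D311)
    are available."""
    if local_state is None and remote_state is None:
        raise ValueError("no process state available")
    if local_state == remote_state:
        return local_state
    # On disagreement, prefer the remote state, assuming the data
    # processing system 300 runs the more recently trained algorithm.
    return remote_state if remote_state is not None else local_state

assert combine_states("running", "running") == "running"  # agreement
assert combine_states("running", None) == "running"       # local only
assert combine_states("running", "fault") == "fault"      # remote wins
```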
[0067]
[0068] The second decision module 311 has one or more decision algorithms that contain a correlation between the digital image data D110, D120 as input data and process states D311 as output data. The output data D311 in the form of current process states are then transmitted back to the augmented reality system 100 (see
[0069] Furthermore, the second computing unit 310 may have a training module 312, which also receives the image data D110 and D120. With the help of the training module, the parameters of the decision algorithm are then learned in a corresponding learning process and then, if appropriate, provided to the decision module 311 in the form of parameter data D312. The newly learned parameters D312 of the decision algorithm can in turn be provided by the training module 312 via the network to the augmented reality system 100.
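The learning process of training module 312 can be sketched as follows, again purely for illustration. A plain softmax-regression gradient-descent loop stands in for whatever training procedure the embodiment actually uses, and the toy data is hypothetical:

```python
import numpy as np

def train_parameters(images: np.ndarray, labels: np.ndarray,
                     n_states: int, lr: float = 0.1,
                     epochs: int = 50) -> np.ndarray:
    """Sketch of training module 312: learn linear decision parameters D312
    from recorded image data (D110/D120) and known process-state labels via
    gradient descent on a softmax cross-entropy loss."""
    n, n_features = images.shape
    weights = np.zeros((n_features, n_states))
    onehot = np.eye(n_states)[labels]
    for _ in range(epochs):
        logits = images @ weights
        logits -= logits.max(axis=1, keepdims=True)   # numerical stability
        probs = np.exp(logits)
        probs /= probs.sum(axis=1, keepdims=True)
        # Gradient of the cross-entropy loss w.r.t. the weights.
        weights -= lr * images.T @ (probs - onehot) / n
    return weights

# Two toy "images" with distinct patterns and their process-state labels.
X = np.array([[1.0, 0.0], [0.0, 1.0]])
y = np.array([0, 1])
d312 = train_parameters(X, y, n_states=2)
# The learned parameters D312 separate the two training examples.
assert np.argmax(X @ d312, axis=1).tolist() == [0, 1]
```

The returned array plays the role of the parameter data D312 that the training module hands to decision module 311 or transmits to the augmented reality system 100.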
[0070] The transfer of the learned parameters D312 to the augmented reality system 100 can take place at discrete, not necessarily fixed times. It is also conceivable that these parameters D312 of the decision algorithm are transmitted to more than one augmented reality system connected to the data processing system 300.
[0071] The invention being thus described, it will be obvious that the same may be varied in many ways. Such variations are not to be regarded as a departure from the spirit and scope of the invention, and all such modifications as would be obvious to one skilled in the art are to be included within the scope of the following claims.