System, Computer Program, Computer-Readable Medium and Method for Automatically Configuring the System

20210351982 · 2021-11-11


    Abstract

    A system, in particular an automation system, a computer program, a computer-readable medium and a method for automatically configuring the system, wherein, when the system is activated and/or during operation of the system, it is monitored which physical and/or virtual hardware network interfaces the system comprises; if a hardware network interface is detected for the first time when the system is activated, or a hardware network interface is newly added during operation of the system, this is communicated to an auto-configuration module of the system; the auto-configuration module creates a configuration description using a template file; and the configuration description is implemented to configure the system in accordance with said description.

    Claims

    1.-24. (canceled)

    25. A method for automatically configuring a system, the method comprising: performing, during at least one of (i) startup of the system and (ii) ongoing operation of the system, monitoring of the system to ascertain what physical or virtual hardware network interfaces the system includes; communicating to an autoconfiguration module of the system together with information about at least one of (i) a type of the affected hardware network interface and (ii) a network connected or needing to be connected to said hardware network interface, if at least one of (i) a physical or virtual hardware network interface is detected for a first time on startup of the system and (ii) a physical or virtual hardware network interface appears or disappears during operation of the system; utilizing, by the autoconfiguration module, a template file to produce for at least one of the affected hardware network interfaces a configuration description which comprises a first partial configuration description for at least one communication container, which comprises at least one application via which network functions can be provided, and a second partial configuration description for network functions for connecting the at least one communication container to the at least one affected hardware network interface; and executing the configuration description to configure the system in accordance therewith, the at least one communication container being generated and connected to the at least one affected hardware interface.

    26. The method as claimed in claim 25, wherein the configuration description is obtained by virtue of at least one wildcard provided in the template file being replaced with at least one parameter associated with the affected hardware network interface.

    27. The method as claimed in claim 25, wherein the template file is selected from a plurality of different template files.

    28. The method as claimed in claim 26, wherein the template file is selected from a plurality of different template files.

    29. The method as claimed in claim 27, wherein the selection is made based on filter rules contained in the template files.

    30. The method as claimed in claim 27, wherein at least one of: (i) at least one template file is stored for multiple types of physical and/or virtual hardware network interface that the system can have, and (ii) at least one template file is stored in each case for at least three different types of physical and/or virtual hardware network interfaces.

    31. The method as claimed in claim 29, wherein at least one of: (i) at least one template file is stored for multiple types of physical and/or virtual hardware network interface that the system can have, and (ii) at least one template file is stored in each case for at least three different types of physical and/or virtual hardware network interfaces.

    32. The method as claimed in claim 25, wherein at least one of (i) the detection of a physical or virtual hardware network interface for the first time on startup of the system and (ii) the appearance and/or disappearance of a physical or virtual hardware network interface during operation of the system is communicated to the autoconfiguration module by virtue of an operating system that runs on the system reporting an applicable event to the autoconfiguration module.

    33. The method as claimed in claim 25, wherein the at least one communication container is generated by a container controller module of the system.

    34. The method as claimed in claim 25, wherein at least one of: (i) the second partial configuration description is executed by a network controller module of the system and (ii) the first partial configuration description is executed by a container controller module of the system.

    35. The method as claimed in claim 25, wherein at least one application of the at least one communication container, in accordance with the first partial configuration description, provides functions of at least one of (i) an IPv6 router, (ii) a NAT64 router, (iii) a name service client or server, (iv) a brouter, and (v) is provided by, or comprises, software for a WLAN access point.

    36. The method as claimed in claim 25, wherein the autoconfiguration module prompts at least one of (i) removal, (ii) deactivation of network functions for the affected interface and (iii) stopping of communication containers for the affected interface, if a physical or virtual hardware network interface disappears during operation of the system.

    37. The method as claimed in claim 25, wherein the autoconfiguration module produces for at least one further instance of the affected hardware network interfaces a network configuration description which concerns at least one of (i) at least one virtual network, (ii) at least one virtual bridge and (iii) at least one virtual switch for the at least one further hardware network interface and no communication container therefor.

    38. A system comprising a processor; memory; and an autoconfiguration module; wherein the system is configured to: send a report to the autoconfiguration module if at least one of (i) a physical or virtual hardware network interface is detected for the first time on startup of the system and (ii) a physical or virtual hardware network interface appears or disappears during operation of the system, information about a type of at least one of (i) the affected hardware network interface and (ii) a network connected or needing to be connected to said hardware network interface being communicated together with said report; wherein the autoconfiguration module utilizes a template file to produce for at least one affected hardware network interface a configuration description which comprises a first partial configuration description for at least one communication container, which comprises at least one application via which network functions can be provided for the or the respective affected interface, and a second partial configuration description for network functions for connecting the at least one communication container to the or the respective affected interface; and wherein the configuration description is executed to configure the system in accordance therewith, the at least one communication container being generated and connected to the at least one affected hardware interface.

    39. The system as claimed in claim 38, wherein the autoconfiguration module is designed to replace at least one wildcard provided in the template file with at least one parameter associated with the or the respective affected hardware network interface to obtain the configuration description.

    40. The system as claimed in claim 38, wherein the autoconfiguration module is configured to select the template file from a plurality of different template files.

    41. The system as claimed in claim 39, wherein the autoconfiguration module is further configured to select the template file from a plurality of different template files.

    42. The system as claimed in claim 40, wherein the autoconfiguration module is further configured to perform the selection based on filter rules contained in the template files.

    43. The system as claimed in claim 40, wherein at least one of: (i) at least one template file is stored in each case on the system for multiple types of physical and/or virtual hardware network interfaces that the system can have and (ii) at least one template file is stored in each case on the system for at least three different types of physical and/or virtual hardware network interfaces.

    44. The system as claimed in claim 42, wherein at least one of: (i) at least one template file is stored in each case on the system for multiple types of physical and/or virtual hardware network interfaces that the system can have and (ii) at least one template file is stored in each case on the system for at least three different types of physical and/or virtual hardware network interfaces.

    45. The system as claimed in claim 38, wherein the system is further configured to communicate at least one of (i) detection of a physical or virtual hardware network interface for the first time on startup of the system and (ii) one of an appearance and disappearance of a physical or virtual hardware network interface during operation of the system to the autoconfiguration module by virtue of an operating system that runs on the system reporting an applicable event to the autoconfiguration module.

    46. The system as claimed in claim 38, further comprising: a container controller module which is configured to generate the at least one communication container.

    47. The system as claimed in claim 38, further comprising at least one of (i) a network controller module which is configured to execute the second partial configuration description and (ii) a container controller module which is configured to execute the first partial configuration description.

    48. The system as claimed in claim 38, wherein the system is further configured such that at least one application of the at least one communication container, in accordance with the first partial configuration description, provides functions of at least one of (i) an IPv6 router, (ii) a NAT64 router, (iii) a name service client or server, (iv) a brouter, and (v) is provided by, or comprises, software for a WLAN access point.

    49. The system as claimed in claim 38, wherein the system is further configured such that the autoconfiguration module prompts at least one of (i) removal and (ii) deactivation of at least one of network functions and virtual networks for the affected interface, if the autoconfiguration module is sent a report that a physical or virtual hardware network interface disappears during operation of the system.

    50. The system as claimed in claim 38, wherein the autoconfiguration module is further configured to produce for at least one further instance of the affected hardware network interfaces a network configuration description that concerns at least one of (i) at least one virtual network, (ii) at least one virtual bridge and (iii) at least one virtual switch for the at least one further hardware network interface and no communication container therefor.

    51. A computer program comprising program code instructions for performing the method as claimed in claim 25.

    52. A non-transitory computer-readable medium encoded with computer program instructions which, when executed by a processor on at least one computer, cause the at least one computer to automatically configure a system, the computer program instructions comprising: program code for performing, during at least one of (i) startup of the system and (ii) ongoing operation of the system, monitoring of the system to ascertain what physical or virtual hardware network interfaces the system includes; program code for communicating to an autoconfiguration module of the system together with information about at least one of (i) a type of the affected hardware network interface and (ii) a network connected or needing to be connected to said hardware network interface, if at least one of (i) a physical or virtual hardware network interface is detected for a first time on startup of the system and (ii) a physical or virtual hardware network interface appears or disappears during operation of the system; program code for utilizing, by the autoconfiguration module, a template file to produce for at least one of the affected hardware network interfaces a configuration description which comprises a first partial configuration description for at least one communication container, which comprises at least one application via which network functions can be provided, and a second partial configuration description for network functions for connecting the at least one communication container to the at least one affected hardware network interface; and program code for executing the configuration description to configure the system in accordance therewith, the at least one communication container being generated and connected to the at least one affected hardware interface.

    Description

    BRIEF DESCRIPTION OF THE DRAWINGS

    [0097] Further features and advantages of the present invention will become clear from the description of an embodiment of the method and of the system of the present invention that follows with reference to the accompanying drawings, in which:

    [0098] FIG. 1 shows a purely schematic depiction of a system in accordance with the invention; and

    [0099] FIG. 2 is a flowchart of the method in accordance with the invention.

    DETAILED DESCRIPTION OF THE EXEMPLARY EMBODIMENTS

    [0100] Shown in FIG. 1 is a purely schematic depiction of an exemplary embodiment of an automation system 1 in accordance with the invention for an industrial automation installation, which is not depicted further in the figures.

    [0101] The automation system 1 in the present case is a programmable logic controller (PLC). It will be stressed that it may alternatively be a different type of system. The autoconfiguration in accordance with the invention can be provided, in principle, for any type of system from the industrial field, in particular industrial automation, whose basic equipment, on startup, comprises one or more hardware network interfaces and/or to which and/or from which one or more hardware network interfaces are (can be) added and/or removed during operation.

    [0102] The system 1 in the present case has four hardware network interfaces 2, 3, 4, 5. All four interfaces in the illustrated exemplary embodiment are physical hardware network interfaces 2, 3, 4, 5. Alternatively, one, more or all of these interfaces can be virtual hardware network interfaces, such as of one or more virtual machines (VMs).

    [0103] For each of the four interfaces 2, 3, 4, 5, an operating system of the automation system 1, which is not depicted in FIG. 1, comprises at least one respective driver.

    [0104] The interfaces 2, 3, 4, 5 are connected to various networks 6, 7, 8, 9.

    [0105] Specifically, the interface 2, which is an upstream/infrastructure network interface, is connected to a company or factory network 6, which is indicated purely schematically in FIG. 1 by a cloud. The interface 3 is connected to a wireless (local area) network 7 and is therefore a wireless network interface, and the two interfaces 4, 5 are each connected to a lower-level machine or cell network 8, 9. The interfaces 4, 5 in the present case connect lower-level fieldbus network segments to the company network 6. They are downstream network interfaces.

    [0106] For the purposes of better distinction, FIG. 1 depicts the three different network types and associated elements using different line types, specifically the company network 6 and associated elements using dotted lines, the wireless network 7 and associated elements using dashed lines and the lower-level machine or cell networks 8, 9 and associated elements using dot-dashed lines.

    [0107] From the point of view of the user, it is worthwhile or desirable for continuous connectivity between the company and production network 6, the lower-level cell or machine networks 8, 9 and the wireless network 7 to be produced as quickly as possible, specifically in accordance with the hardware expansion of interfaces 2, 3, 4, 5 that is performed and/or changed by the user.

    [0108] To ensure this, the system 1 is configured to monitor, both on startup and during ongoing operation, what physical and/or virtual hardware network interfaces 2, 3, 4, 5 are present. The monitoring is effected in the present case via an operating system installed on the system 1.

    [0109] The system 1 additionally comprises an autoconfiguration module 10.

    [0110] If a physical or virtual hardware network interface 2, 3, 4, 5 is detected for the first time on startup of the system 1, or a physical or virtual hardware network interface 2, 3, 4, 5 appears or disappears during ongoing operation of the system 1, then this is communicated to the autoconfiguration module 10, specifically together with information about the type of the affected hardware network interface 2, 3, 4, 5 and/or the type of a network 6, 7, 8, 9 connected or needing to be connected thereto. The communication is effected in the present case by virtue of the operating system reporting an applicable event to the autoconfiguration module 10 as a plug and play event. The report comprises the information concerning what type of event is involved, i.e., whether an interface was detected for the first time or has appeared or disappeared. Additionally, the autoconfiguration module 10 is notified of a name of the respective affected interface 2, 3, 4, 5. The interface names are chosen such that it is possible to derive therefrom what type of interface 2, 3, 4, 5 is involved, or what role is anticipated therefor. If necessary, further information besides the interface name can also be added, for example, the bus connection (in particular hardware connection to the PCI bus or USB bus), or MAC addresses.
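    The event report described in the foregoing paragraph can be modeled, purely by way of illustration, as a small data structure whose role is derived from the interface name. The field names and the wireless prefix "wl" below are assumptions for the sake of the sketch, not part of the description above.

    ```python
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class HardwareEvent:
        """Illustrative model of a plug-and-play report sent to the
        autoconfiguration module (field names are assumptions)."""
        kind: str                        # "initial", "appeared" or "disappeared"
        interface_name: str              # e.g., "eth0" or "ens33p2"
        bus_info: Optional[str] = None   # e.g., PCI or USB connection, if available
        mac_address: Optional[str] = None

    def derive_role(interface_name: str) -> str:
        """Derive the anticipated role from the interface name, mirroring the
        naming convention in the text ("eth0" -> upstream, "ens*" -> downstream);
        the "wl" prefix for wireless interfaces is an assumption."""
        if interface_name == "eth0":
            return "upstream"
        if interface_name.startswith("ens"):
            return "downstream"
        if interface_name.startswith("wl"):
            return "wireless"
        return "unknown"
    ```

    In this sketch the operating system would construct a `HardwareEvent` for each interface event 11 and pass it to the autoconfiguration module 10, which then uses `derive_role` to pick the matching context.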

    [0111] Such a hardware or interface event, i.e., the first-time detection or the appearance or disappearance of an interface 2, 3, 4, 5, is indicated purely schematically in FIG. 1 in each case by a lightning flash provided with the reference numeral 11.

    [0112] Preferably, each time a report of such a hardware event 11 is received by the autoconfiguration module 10, the latter reacts thereto by selecting a template file 12, 13, 14 for the respective context 15, 16, 17 from a plurality of template files 12, 13, 14 stored on the system.

    [0113] In the present case, three template files 12, 13, 14 are stored in the system, one for each interface type or role, i.e., a template file 12 for a hardware event 11 that concerns a downstream interface 4, 5, a template file 13 for a hardware event 11 that concerns a wireless network interface 3, and a template file 14 for a hardware event 11 that concerns an upstream interface 2. It goes without saying that further template files can be stored for further interface types, or interface roles. Preferably, at least one, in particular precisely one, template file 12, 13, 14 can be stored on the system 1 for each type, or each role, of interface 2, 3, 4, 5 that can occur on a given system 1, which means that every possible hardware expansion, or every possible change thereto, is covered.

    [0114] The selection of a template file 12 from the plurality of template files 12, 13, 14 for the respective context 15, 16, 17 by the autoconfiguration module 10 is effected based on filter rules stored in the template files 12, 13, 14. Here, the filter rules concern names that are assigned to the interfaces 2, 3, 4, 5 and that are, or were, communicated to the autoconfiguration module 10 for a hardware event 11.
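    The name-based selection just described can be sketched as glob-pattern matching against the filter rule stored in each template file. The file names and the wireless pattern "wl*" in the mapping below are assumptions for the example; the patterns "ens*" and "eth0" follow the template files shown later in the text.

    ```python
    import fnmatch

    # Illustrative filter rules: each template file declares a "match" pattern
    # for interface names, as in the 'device: "ens*"' lines of the template
    # files cited below (file names here are assumptions).
    TEMPLATES = {
        "downstream.yaml": "ens*",   # template file 12
        "wireless.yaml":   "wl*",    # template file 13 (pattern assumed)
        "upstream.yaml":   "eth0",   # template file 14
    }

    def select_template(interface_name: str):
        """Return the first template whose filter rule matches the reported
        interface name, or None if no stored rule matches."""
        for template, pattern in TEMPLATES.items():
            if fnmatch.fnmatch(interface_name, pattern):
                return template
        return None
    ```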

    [0115] The sequence is described below by way of illustration for the upstream interface 2 and the downstream interface 4, which are newly added by a user. The sequence can be similar if other interfaces 3, 5 are also affected or if they disappear or are detected for the first time on startup.

    [0116] In reaction to the detection of the interface 2, the autoconfiguration module 10 selects the associated template file 14, i.e., the template file 14 that is provided for such a hardware event 11, specifically for upstream interfaces.

    [0117] The template file 14 for the interface 2 is selected in this case based on the name of the interface 2, which in the present case is “eth0”; it will be stressed that this is intended to be understood purely by way of illustration. Here, the upstream interface 2 is or was provided with this name deliberately.

    [0118] The selected template file 14 for the upstream interface 2 reads as follows:

    TABLE-US-00001

        match:
          device: "eth0"
        networks:
          - {{device.name}}:
              script:
                add:
                  - docker network create -d bridge 1uplink
                  - DSBR=`docker network inspect --format '{{.Id|printf "br-%.12s"}}' 1uplink`
                  - ip link set {{device.name}} master $DSBR
                remove:
                  - docker network rm 1uplink

    [0119] this being intended to be understood purely by way of illustration, and a template file 14 for an upstream interface 2 also being able to be read differently.

    [0120] With respect to the three indents under “add:” in the template file 14 cited by way of illustration, which are each preceded by a bullet dash, the following applies: the first indent is used to create an internal network for the interface 2, the second indent is used to ascertain the actual device name of the virtual bridge that was previously created as part of the network (in the example cited, a Linux device name, which is not intended to be understood as restrictive), and the third indent is used to connect the network interface 2 to the virtual bridge.

    [0121] The indent under “remove:” can be used to remove the internal network for the interface 2 again. This command would take effect, or takes effect, if the interface 2 should disappear, or disappears, again.
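    The "add:" and "remove:" sections described above can be thought of as lifecycle hooks for an interface: the "add" commands run when the interface appears, the "remove" commands when it disappears again. A minimal sketch, which records the commands instead of executing them (the class name is an assumption):

    ```python
    class InterfaceLifecycle:
        """Illustrative add/remove handling for one interface, mirroring the
        add:/remove: sections of the template file; commands are recorded
        here rather than executed."""

        def __init__(self, add_cmds, remove_cmds):
            self.add_cmds = add_cmds
            self.remove_cmds = remove_cmds
            self.executed = []

        def on_appear(self):
            self.executed.extend(self.add_cmds)

        def on_disappear(self):
            self.executed.extend(self.remove_cmds)

    # Commands from the first upstream template, with the wildcard already
    # replaced by the interface name "eth0" (the $DSBR step is abbreviated):
    lifecycle = InterfaceLifecycle(
        add_cmds=[
            "docker network create -d bridge 1uplink",
            "ip link set eth0 master $DSBR",
        ],
        remove_cmds=["docker network rm 1uplink"],
    )
    lifecycle.on_appear()
    lifecycle.on_disappear()
    ```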

    [0122] As can be seen, the upstream interface 2 is assigned no communication container for the purposes of the exemplary embodiment described here. No containers are necessary for this role ("upstream") in the present case, this being intended to be understood purely by way of illustration. It should be understood that it is also possible, in principle, for at least one upstream interface 2 to alternatively be assigned at least one container.

    [0123] The following will be cited as a further example of a template file 14:

    TABLE-US-00002

        match:
          device: "eth0"
        networks:
          - {{device.name}}:
              script:
                add:
                  - docker network create -d macvlan -o parent={{device.name}} 1uplink
                remove:
                  - docker network rm 1uplink

    [0124] As can be seen, the second example is one for Macvlan. The line “-o parent={{device.name}}” can be used to notify the instance used for execution or processing, in particular Docker, that containers in this network later need to be coupled to the network interface eth0 by means of Macvlan.

    [0125] From the template file 14 (according to the first or second example), the autoconfiguration module 10 produces a network configuration description 18 for the interface 2, namely:

    [0126]

        #!/bin/bash
        docker network create -d bridge 1uplink
        DSBR=`docker network inspect --format '{{.Id|printf "br-%.12s"}}' 1uplink`
        ip link set eth0 master $DSBR

    [0127] By using the second example of a template file 14, the following network configuration description 18 is produced:

    [0128]

        #!/bin/bash
        docker network create -d macvlan -o parent=eth0 1uplink

    [0129] This allows the network of Macvlan type to be announced to the instance used for execution or processing, in particular Docker. In particular, no virtual network components, for example in the Linux kernel, are created yet. It is merely recorded that eth0 will later need to be used for connecting by means of Macvlan.

    [0130] The network configuration description 18 is produced from the template file 14 by virtue of the autoconfiguration module 10 replacing the wildcard {{device.name}}, which occurs repeatedly in the template file 14, with specific values, in the present case the specific interface name eth0.
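    The wildcard substitution described above can be sketched as a plain string replacement over the template text. Note that the Docker Go-template expression '{{.Id|printf "br-%.12s"}}' uses a different pattern and is therefore left untouched by the replacement:

    ```python
    def render(template_text: str, device_name: str) -> str:
        """Replace every occurrence of the {{device.name}} wildcard with the
        specific interface name, as the autoconfiguration module does."""
        return template_text.replace("{{device.name}}", device_name)

    # By way of illustration, one line of the second upstream template:
    line = "docker network create -d macvlan -o parent={{device.name}} 1uplink"
    rendered = render(line, "eth0")
    ```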

    [0131] It will be noted that FIG. 1 shows two partial configuration descriptions 19, 20 besides the network configuration description 18. These two are present for the case in which the configuration description produced concerns one or more communication containers 21, 22 (container configuration description 18), as is the case for the interface 4 and described in detail later on. As far as the interface 2 is concerned, it can be assumed as a simplification that the network configuration description 18 is formed only by the upper part 20.

    [0132] The network configuration description 18 (or 20) is implemented, or executed, in order to configure the system 1 in accordance therewith.

    [0133] The system 1 has a container controller module 23 and a network controller module 24, which are both software implemented in the present case.

    [0134] In the presently described example, the network controller module 24 is used for implementing, or executing, the network configuration description 18 (or 20) for the interface 2. Because the network configuration description 18 for the interface 2 is provided in the form of a shell script, the network controller module 24 is in the present case formed as a shell script interpreter, or comprises a shell script interpreter. It should be understood that the network controller module 24, if the network configuration description 18 is generated in another form, can also be formed differently, for example, as a systemd or Kubernetes controller, or can comprise these.

    [0135] The execution of the network configuration description 18 by the network controller module 24 creates a virtual bridge and connects the interface 2 to this virtual bridge. This virtual bridge can later be used to connect, for example, communication containers 21, 22 that are assigned to other interfaces 4.

    [0136] With respect to the downstream interface 4 that appears during ongoing operation, the following applies for the purposes of the presently described exemplary embodiment:

    [0137] In reaction to the new addition of the interface 4, in particular by a user, the autoconfiguration module selects the template file 12, i.e., the template file 12 that is provided for such a hardware event 11, specifically for downstream interfaces.

    [0138] The selection in this case is made on the basis of the name of the added interface, which in the present case is "ens33p2", or on the basis of a part of the name, in particular the first part "ens"; it will be stressed that this is intended to be understood purely by way of illustration. Here, downstream interfaces that are used for connecting fieldbus network segments are deliberately provided with a name that begins with the three letters "ens".

    [0139] The selected template file 12 for the interface 4 has the following appearance in the present case; it will be stressed that this is a purely illustrative version, and template files for downstream interfaces 4, 5 can also have a different form:

    TABLE-US-00003

        match:
          device: "ens*"
        networks:
          - {{device.name}}:
              script:
                add:
                  - ip link add br{{device.name}} type bridge
                  - ip link set br{{device.name}} up
                  - ip link set {{device.name}} master br{{device.name}}
                  - ip link set {{device.name}} up
                remove:
                  - ip link del br{{device.name}}
        containers:
          v6router:
            image: . . .
            networks:
              - 1uplink
              - {{device.name}}
          anat64:
            image: . . .
            networks:
              - {{device.name}}

    [0140] “match:”, or the first two lines of this file, provides the filter rules concerning the interface name. If the name of an interface 2, 3, 4, 5 that is detected for the first time, newly added or disappears matches the pattern “ens*”, i.e., begins with “ens”, this template file 12 is selected.

    [0141] The following will be cited as a second example of a template file 12 for a downstream interface 4:

    TABLE-US-00004

        match:
          device: "ens*"
        networks:
          - {{device.name}}:
              script:
                add:
                  - docker network create -d bridge 2{{device.name}}_downstream
                  - DSBR=`docker network inspect --format '{{.Id|printf "br-%.12s"}}' 2{{device.name}}_downstream`
                  - ip link set {{device.name}} master $DSBR
                remove:
                  - docker network rm 2{{device.name}}_downstream
        containers:
          v6router:
            image: . . .
            networks:
              - 1uplink
              - 2{{device.name}}_downstream
          anat64:
            image: . . .
            networks:
              - 2{{device.name}}_downstream

    [0142] The foregoing example is advantageous in particular if an internal network assigned to the network interface 2 has already been created in another way. The template 12 now describes how the bridge network that is still missing for the new interface 4 needs to be created: it is a shell script that creates the network by Docker command and places the new interface 4 under the control of the bridge. In line with the first example, a script is involved and a bridge network is created.

    [0143] With respect to the three indents, or commands (in each case after a bullet dash), listed in the “add” section, the first creates an internal network with a virtual bridge under the name “2ens33p2_downstream”, the portion “ens33p2” being the name of the network interface 4, the second ascertains the actual device name (Linux device name in the cited example) of the virtual bridge that was created beforehand as part of the network, and the third connects the network interface 4 to the virtual bridge that was created beforehand as part of the network.

    [0144] The indent that comes next, specifically after “remove:”, removes the network along with the virtual bridge between the container 21 and the interface 4. This indent takes effect if the interface 4 disappears again.

    [0145] With respect to the two indents after “networks:”, “1uplink” provides the reference to the internal network already created previously on the interface 2, and “2{{device.name}}_downstream” provides the reference to the preceding internal network created for the interface 4.

    [0146] “-2{{device.name}}_downstream” in the last line of the template 12 is used to connect the container to the internal network for the interface 4. Purely by way of illustration, a direct connection of this container to the internal network of the interface 2 is not necessary here; instead, this connection is made indirectly via the v6router container.

    [0147] The following will furthermore be cited as a third example of a template file 12 for a downstream interface 4:

    TABLE-US-00005

        match:
          device: "ens*"
        networks:
          - {{device.name}}:
              script:
                add:
                  - docker network create -d macvlan -o parent={{device.name}} 2{{device.name}}_downstream
                remove:
                  - docker network rm 2{{device.name}}_downstream
        containers:
          v6router:
            image: . . .
            networks:
              - 1uplink
              - 2{{device.name}}_downstream
          anat64:
            image: . . .
            networks:
              - 2{{device.name}}_downstream

    [0148] As can be seen, this example is again one for Macvlan.

    [0149] The autoconfiguration module 10 uses the template file 12 (according to the first or the second or the third cited example) to produce a container configuration description 18 that comprises a first partial configuration description 19 and a second partial configuration description 20.

    [0150] It should be noted that simplified FIG. 1 schematically depicts the sequence of the production of a configuration description only once for both interfaces 2 and 4 for purposes of increased clarity. The element with the reference sign 18 therefore represents both the network configuration description produced for the interface 2 and the container configuration description produced for the interface 4 with the two partial configuration descriptions 19 and 20.

    [0151] Here, the first and second partial configuration descriptions 19, 20 of the container configuration description 18 are produced from the template file 12 by virtue of the autoconfiguration module 10 replacing the wildcard {{device.name}}, which occurs repeatedly in the template file 12, with specific values, in the present case the specific interface name, i.e., ens33p2.
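    The wildcard replacement just described can be sketched as follows, purely by way of illustration (the function name and the template fragment are assumptions, not part of the template file 12):

```python
def instantiate_template(template_text, device_name):
    # Replace every occurrence of the {{device.name}} wildcard with the
    # specific interface name, as the autoconfiguration module 10 does.
    return template_text.replace("{{device.name}}", device_name)

# Hypothetical fragment in the style of the template file 12:
fragment = (
    "networks:\n"
    "  2{{device.name}}_downstream:\n"
    "    external: true\n"
)
print(instantiate_template(fragment, "ens33p2"))
```

    With the interface name ens33p2, the fragment's network key becomes 2ens33p2_downstream, matching the partial configuration descriptions shown below.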

    [0152] The first partial configuration description 19 (and the associated lines 14 to 23 of the template file 12) in the present case concerns two communication containers 21, 22, which each comprise at least one application via which network functions can be provided.

    [0153] The first partial configuration description 19 of the container configuration description 18 specifically has (on the basis of the first cited example of a template 12) the following appearance, this again being intended to be understood purely by way of illustration:

    TABLE-US-00006
    version: “3”
    services:
      v6router:
        image: . . .
        networks:
        - 1upstream
        - 2downstream
      anat64:
        image: . . .
        networks:
        - 2downstream
    networks:
      1upstream:
        external: upstream
      2downstream:
        external: ens33p2

    [0154] Based on the second above-cited example for a template file 12 for the downstream interface 4, the following is obtained for the first partial configuration description 19:

    TABLE-US-00007
    version: “3”
    services:
      v6router:
        image: . . .
        networks:
        - 1upstream
        - 2ens33p2_downstream
      anat64:
        image: . . .
        networks:
        - 2ens33p2_downstream
    networks:
      1upstream:
        external: true
      2ens33p2_downstream:
        external: true

    [0155] Based on the third example of a template file 12 for the downstream interface 4, the following is obtained for the first partial configuration description 19:

    TABLE-US-00008
    version: “3”
    services:
      v6router:
        image: . . .
        networks:
        - 1upstream
        - 2ens33p2_downstream
      anat64:
        image: . . .
        networks:
        - 2ens33p2_downstream
    networks:
      1upstream:
        external: true
      2ens33p2_downstream:
        external: true

    [0156] As is evident, network functions of an IPv6 router and of a NAT64 router are provided in containerized form in the present case, for all three examples.

    [0157] The respective first partial configuration description 19 is the configuration description for the communication containers 21, 22 desired or required in the respective context 15, 16, 17. It thus concerns the container expansion, which is dependent on the context 15, 16, 17 and dependent on the type, or role, of the affected interface 4. It is used in particular for further processing in a container system, or container tool, such as for example Docker Compose.

    [0158] The respective second partial configuration description 20 (and the associated lines 3 to 12, or 13, of the respective associated template file 12) concerns network functions for connecting the communication containers 21, 22 to the affected hardware network interface 4. It takes the form of a configuration description for, in the present case, a virtual bridge at operating system level, or, if an interface 4 is removed rather than added as in the present case, of deletion instructions (“remove: . . .”) for an interface. The deletion instructions take effect if the autoconfiguration module 10 has been sent a communication indicating that a downstream interface 4 for connecting fieldbus network segments has disappeared.
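    The selection between the “add:” and “remove:” instructions of the template's script section can be sketched as follows, purely by way of illustration (the function name and the dictionary layout are assumptions mirroring the script:/add:/remove: structure of the template file 12):

```python
def select_script(template_scripts, interface_appeared):
    # Choose the add: instructions when the interface appears and the
    # remove: (deletion) instructions when it disappears.
    return template_scripts["add" if interface_appeared else "remove"]

scripts = {
    "add": ["docker network create -d macvlan -o parent=ens33p2 2ens33p2_downstream"],
    "remove": ["docker network rm 2ens33p2_downstream"],
}
print(select_script(scripts, True))
print(select_script(scripts, False))
```

    An appearing interface thus yields the network-creation instruction; a disappearing one yields the deletion instruction.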

    [0159] The second partial configuration description 20 specifically has (based on the first cited example of the template file 12) the following appearance, which again is intended to be understood purely by way of illustration:

    [0160] #!/bin/bash

    [0161] ip link add brens33p2 type bridge

    [0162] ip link set brens33p2 up

    [0163] ip link set ens33p2 master brens33p2

    [0164] ip link set ens33p2 up

    [0165] From the second example cited above for the template file 12, the following is obtained for the second partial configuration description 20:

    [0166] #!/bin/bash
    docker network create -d bridge \
      2ens33p2_downstream
    DSBR=`docker network inspect --format '{{.Id|printf "br-%.12s"}}' 2ens33p2_downstream`
    ip link set ens33p2 master $DSBR
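    The value assigned to DSBR can be illustrated as follows: Docker derives the name of the Linux bridge it creates from the first 12 characters of the network ID, prefixed with “br-”, which is what the Go template expression printf "br-%.12s" expresses. A Python sketch, purely by way of illustration (the network ID below is hypothetical):

```python
def docker_bridge_name(network_id):
    # Equivalent of the Go template '{{.Id|printf "br-%.12s"}}':
    # "br-" followed by the first 12 characters of the network ID.
    return "br-%.12s" % network_id

# Hypothetical network ID, for illustration only:
net_id = "9f3c5d1e7a2b4c6d8e0f1a2b3c4d5e6f"
print(docker_bridge_name(net_id))  # br-9f3c5d1e7a2b
```

    The resulting bridge name is then used as the master for the physical interface ens33p2.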

    [0167] As is evident, this example of a second partial configuration description 20 is identical in structure to the example of a network configuration description 18 that was described above for the interface 2.

    [0168] From the third example of the template file 12, the following is obtained for the second partial configuration description 20:

    [0169] #!/bin/bash
    docker network create -d macvlan \
      -o parent=ens33p2 \
      2ens33p2_downstream

    [0170] The respective second partial configuration description 20 describes (separately from the configuration description 19 for the container expansion) the virtual network functions that are necessary in the respective context 15, 16, 17 for connecting the respective communication containers 21, 22 to the respective interface 4. Here, the generated description is present as a shell script. Alternatively, it can also be formed as “systemd units”, as cluster system objects (Kubernetes), or the like.

    [0171] On the basis of this description 20, the autoconfiguration module 10 particularly ensures that at least one software switch forming the internal network is linked solely to the interface 4 that has appeared, rather than, as is customary with Docker, to the host IP stack.

    [0172] The first partial configuration description 19 and the second partial configuration description 20 (according to the first, second or third example) are implemented, or executed, in order to configure the system 1 in accordance therewith.

    [0173] The implementation, or execution, of the first partial configuration description 19 concerning the containers 21, 22 is effected via the container controller module 23 and the implementation of the second partial configuration description 20 concerning the operating system level is effected by the network controller module 24.

    [0174] The container controller module 23 in the present case is software implemented, specifically formed as a Docker Compose tool, or comprises a Docker Compose tool, this being intended to be understood by way of illustration. If another container technology, such as Kubernetes, is used as an alternative or in addition to Docker, then the container controller module 23 can also be formed as a Kubernetes Pod Scheduler Controller, or can comprise a Kubernetes Pod Scheduler Controller. Other or further container technologies are naturally likewise possible.

    [0175] It should be noted that, for the purposes of the presently described examples, the second partial configuration description 20 is executed before the first partial configuration description 19, this not being intended to be understood as restrictive. In the case of Kubernetes, for example, the two partial configuration descriptions 19, 20 are “loaded” into the cluster and then automatically executed in the correct order by the cluster mechanisms (such as kubelet and CNI plugins). If Kubernetes is used as the instance for executing the first partial configuration description, there can be provision, for example, for a “kubelet” on the system upon which the container(s) 21, 22 is/are supposed to be started to itself start a (main) CNI plugin and transfer the first partial configuration description 19 thereto for execution. The main CNI plugin can be provided by Multus, for example. It can, for example, use its own configuration and the partial configuration description 19 as a basis for deciding what other CNI plugins need to be called, possibly in a particular order, such as the “bridge” CNI plugin or the “macvlan” CNI plugin. Here, the main CNI plugin supervises the forwarding of the necessary parts from the partial configuration description 19 to the CNI plugins that are to be called.

    [0176] As a result of the execution of the first partial configuration description 19 by the container controller module 23, the communication containers 21, 22 having the IPv6 and NAT64 router functions are generated.

    [0177] Here, the autoconfiguration module 10 additionally directly or indirectly notifies the container controller module 23 that the communication containers 21, 22 need to be started in the present case, because the interface 4 was added and not removed. If the latter were alternatively the case, the autoconfiguration module 10 would notify the container controller module 23 that the communication containers 21, 22 need to be stopped rather than started.
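    This notification logic can be sketched as follows, purely by way of illustration (the function name is an assumption and not part of the autoconfiguration module 10):

```python
def container_command(interface_added):
    # The autoconfiguration module notifies the container controller
    # module whether the communication containers need to be started
    # (interface added) or stopped (interface removed).
    return "start" if interface_added else "stop"

print(container_command(True))   # start
print(container_command(False))  # stop
```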

    [0178] The network controller module 24 for executing, or implementing, the second partial configuration description 20 is, as already noted above, likewise software implemented and formed as a shell script interpreter, or comprises a shell script interpreter.

    [0179] It should be noted that the sequence can be totally analogous if, instead of a downstream interface 4, a different type of interface 3, or an interface 3 having a different role, is added. Only the specific communication container expansion according to the first partial configuration description 19 and the virtual network functions according to the second partial configuration description 20 can or will then differ. If, for example, a wireless network interface 3 is added, the functions of an IPv6 and NAT64 router would not be provided in containerized fashion, but rather, for example, suitable software for a WLAN access point. An appropriate communication container 25 for this context 17 is depicted, likewise purely schematically, in the figure.

    [0180] It is naturally also possible, as an alternative to the above examples, for the autoconfiguration module 10 to produce a container configuration description 18 for the interface 2 instead of a network configuration description 18 if one or more communication containers 21, 22, 25 are supposed to be provided for the interface 2. The sequence for the interface 2 could then be analogous to that described above for the interface 4.

    [0181] If an interface needs to be assigned at least one communication container 21, 22, 25 based on its role, then it is possible for a container configuration description 18 to be produced, and if no communication container 21, 22, 25 is necessary or desired, a network configuration description 18 can be produced instead.
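    This role-dependent decision rule can be sketched as follows, purely by way of illustration (the function and parameter names are assumptions):

```python
def configuration_kind(role_needs_container):
    # Rule stated above: an interface role requiring at least one
    # communication container yields a container configuration
    # description; otherwise a network configuration description.
    if role_needs_container:
        return "container configuration description"
    return "network configuration description"

print(configuration_kind(True))
print(configuration_kind(False))
```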

    [0182] Additionally, it will be stressed that the autoconfiguration module 10 can, but does not have to, be used for all of the network interfaces 2, 3, 4, 5 of a system 1. The approach in accordance with the invention can naturally also be used in combination with the approach previously known from the prior art. By way of example, as an alternative to the autoconfiguration module 10 producing a network configuration description 18 for the interface 2, as described above, a network configuration description can be, or may have been, produced in a conventional manner, in particular as part of a manual configuration, for example, by a user. The manually obtained network configuration description 18 can have, for example, exactly the same appearance as the aforementioned network configuration description obtained by the autoconfiguration module 10 using the template file 14.

    [0183] In a preferred embodiment, at least for those interfaces 3, 4, of a system 1 for which in each case one or more communication containers 21, 22, 25 need to be provided, the autoconfiguration module 10 automatically produces a container configuration description 18 therefor. If an automatic sequence that is as comprehensive as possible is desired, which is usually particularly advantageous, then it is additionally also possible for network configuration descriptions 18 to be produced by the autoconfiguration module 10 for all of the interfaces 2 of a system for which no communication containers 21, 22, 25 are necessary, as described by way of illustration above for the interface 2.

    [0184] The approach in accordance with the disclosed embodiments of the invention can close the “gap” between industrial users and container technology, such as Docker. As such, it becomes possible to render the great advantages of this technology usable for network functions without the user needing specific IT or network or container know-how. The autoconfiguration module 10 and the template files 12, 13, 14 can be used to dynamically and automatically connect containers having network functions to their physical network outside world, specifically in a manner suited to the respective context 15, 16, 17. Industrial users do not need to have specialist IT knowledge about containers, such as Docker containers, or about associated tools, such as Docker network drivers, or else about software switches, and can still use the advantages of this technology for automatic configuration in the event of a change, or addition, to the hardware interface expansion of their systems, or for startup thereof. It becomes particularly simple for users to extend systems 1 from the field of automation engineering by network accesses or feeders. Manual configuration via additional engineering or configuration tool(s) is no longer required. It suffices to plug in the appropriate network interface hardware, which is then followed by completely automatic configuration.

    [0185] Although the invention has been illustrated and described more thoroughly in detail by the preferred exemplary embodiment, the invention is not restricted by the disclosed examples, and other variations can be derived therefrom by a person skilled in the art without departing from the scope of protection of the invention.

    [0186] FIG. 2 is a flowchart of the method for automatically configuring a system 1. The method comprises performing, during startup of the system 1 and/or ongoing operation of the system 1, monitoring of the system to ascertain what physical or virtual hardware network interfaces 2, 3, 4, 5 the system 1 includes, as indicated in step 210.

    [0187] Next, if a physical or virtual hardware network interface 2, 3, 4, 5 is detected for a first time on startup of the system 1 and/or a physical or virtual hardware network interface 2, 3, 4, 5 appears or disappears during operation of the system 1, this is communicated to an autoconfiguration module 10 of the system 1 together with information about a type of affected hardware network interface 2, 3, 4, 5 and/or a network 6, 7, 8, 9 connected or needing to be connected to said hardware network interface, as indicated in step 220.

    [0188] Next, the autoconfiguration module 10 utilizes a template file 12, 13, 14 to produce for at least one of the affected hardware network interfaces 2, 3, 4, 5 a configuration description 18 that comprises a first partial configuration description 19 for at least one communication container 21, 22, 25, which comprises at least one application via which network functions can be provided, and a second partial configuration description for network functions for connecting the at least one communication container 21, 22, 25 to the at least one affected hardware network interface 2, 3, 4, 5, as indicated in step 230.

    [0189] Next, the configuration description is executed to configure the system 1 in accordance therewith, with the at least one communication container 21, 22, 25 being generated and connected to the at least one affected hardware interface 2, 3, 4, 5, as indicated in step 240.

    [0190] Thus, while there have been shown, described and pointed out fundamental novel features of the invention as applied to a preferred embodiment thereof, it will be understood that various omissions and substitutions and changes in the form and details of the methods described and the devices illustrated, and in their operation, may be made by those skilled in the art without departing from the spirit of the invention. For example, it is expressly intended that all combinations of those elements and/or method steps which perform substantially the same function in substantially the same way to achieve the same results are within the scope of the invention. Moreover, it should be recognized that structures and/or elements and/or method steps shown and/or described in connection with any disclosed form or embodiment of the invention may be incorporated in any other disclosed or described or suggested form or embodiment as a general matter of design choice. It is the intention, therefore, to be limited only as indicated by the scope of the claims appended hereto.