Method For Amending Or Adding Machine Learning Capabilities To An Automation Device
20220383066 · 2022-12-01
Assignee
Inventors
CPC classification
G05B19/4184
PHYSICS
International classification
Abstract
Various embodiments of the teachings herein include methods for amending or adding machine learning capabilities to an automation device in an automation system. The method may include: 1) providing a capability model of the automation device semantically representing capabilities of the device; 2) providing a machine learning model for semantically representing a machine learning functionality and including a semantic model of a neural network; 3) deploying the machine learning model within the automation device; 4) interpreting a semantic part of the machine learning model using a semantic reasoner and matching requirements of the machine learning model with device capabilities inferred by the capability model; and 5) executing the machine learning functionalities on the automation device.
Claims
1. A method for amending or adding machine learning capabilities to an automation device in an automation system, the method comprising: 1) providing a capability model of the automation device, the capability model semantically representing device capabilities of the automation device; 2) providing a machine learning model for semantically representing a machine learning functionality and including a semantic model of a neural network; 3) deploying the machine learning model within the automation device; 4) interpreting a semantic part of the machine learning model using a semantic reasoner and matching requirements of the machine learning model with device capabilities inferred by the capability model; and 5) executing the machine learning functionalities on the automation device.
2. The method according to claim 1, further comprising at least partially amending the capability model with contents of the machine learning model.
3. The method according to claim 1, further comprising using the machine learning model to discover a neural network model.
4. The method according to claim 1, further comprising uploading at least one of the capability model and the machine learning model to a semantic repository of the automation system.
5. The method according to claim 4, wherein the semantic repository of the automation system includes a knowledge graph being a central or decentral data base for hosting knowledge artefacts.
6. The method according to claim 1, further comprising performing a matchmaking of the capability model and the machine learning model to semantically match capabilities of the automation device with functional requirements of the automation system.
7. The method according to claim 1, wherein the provision of the machine learning model is preceded by a semantic based discovery of relevant resources offered by one or more automation devices.
8. The method according to claim 1, wherein the semantic model of the neural network is defined as a neural network class related to algorithms, said neural network class including a semantic specification of one or more hidden or unhidden layers of a neural network.
9. The method according to claim 8, further comprising assigning at least one of said layers of the neural network class to an interface class or subclass capable of interfacing classes or subclasses of other semantic models.
10. The method according to claim 1, wherein the semantic model of the neural network includes a multiplicity of weighted datasets related to a neuron, wherein at least one of said weighted datasets is assigned to a weight variable.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0041] The objects as well as further advantages of various embodiments of the present disclosure will become more apparent and readily appreciated from the following description of the example embodiments, taken in conjunction with the accompanying drawings.
DETAILED DESCRIPTION
[0047] Neural networks may need to be easily discovered, configured, and deployed in the field and edge applications. Accordingly, there is a need in the art for providing meta-data for developed neural networks, describing their applicability and characteristics. This meta-data might be realized with standardized semantic models and provided in dedicated knowledge graphs in order to be easily used across different architecture layers (cloud, edge, and field) and enable expansion of an open and interoperable IoT ecosystem.
[0048] At present, however, automation functions on automation devices are usually provided in the form of embedded software, e.g., C/C++ code. Engineering tools are usually applied for the design, implementation, and configuration of a complete automation system, which typically includes the automation devices. Going beyond the limits of current practices, the embodiments described herein strive to implement machine learning functions including neural network models without requiring specific engineering or extensive customization of the automation devices.
[0049] Embodiments as described herein enable an open and interoperable IIoT ecosystem, in which field and edge applications may be realized with neural networks and easily deployed on heterogenous field and edge devices.
[0050] Embodiments as described herein may provide a semantic model built to represent Neural Networks by a machine-interpretable model which enables:
[0051] Semantic discovery of a neural network for a particular task;
[0052] Automated proving whether hardware characteristics and computation capabilities of an automation device meet requirements of a neural network;
[0053] Validated configuration and deployment of a neural network in the field or edge device;
[0054] Meta-data of a Neural Network, describing its applicability, requirements, and characteristics;
[0055] Integration with other knowledge artifacts (digital twins, standardized semantic models, etc.) in a Knowledge Graph, which enables a transparent and interoperable IIoT ecosystem.
[0056] In some embodiments, edge computing is to be used to monitor a conveyor belt in a factory, where IoT devices need to be installed to monitor operational processes and notify workers when anomalies or irregularities are detected. An IoT device embedded within an accelerometer may be placed on the conveyor belt in order to analyze anomaly patterns in vibration data. Another IoT device connected with a camera may be deployed to detect the presence of working personnel and to send an alarm to a remote center if no personnel is detected on-site. Such a use case could be implemented by installing several Neural Network models on the devices, enabling processing and decision-making in the field or at the edge.
[0057] However, there might be dozens of conveyor belts and many other machines in a factory, where various field devices and tailored artifacts are needed to tackle different problems. As the IoT network begins to spread, it quickly becomes cumbersome to discover the capabilities and coordinate the offerings across all the field devices and on-device applications. An easy management system would support an effective discovery of neural network models in this case.
[0058] Also, the requirements imposed by discovered models and rules may be matched against the capabilities of IoT devices so that new applications may be orchestrated easily. Besides, the IoT solution should be manageable even if devices are from different vendors, artifacts have diverse functionalities, and the size of the IoT network scales.
[0060] The basic functionality depicted in
[0061] The disclosed embodiments extend the afore-described basic implementation to semantically manage NN-based (Neural Network) applications across IoT networks. However, the embodiments are not restricted to the management of NN-based applications, as the concept is adaptable and may be extended to other on-device applications, too. This section overviews the architecture of a framework for semantic management of on-device applications.
[0063] The framework proposed by the embodiments is built on top of the toolchain to supplement a semantic management SMM. This framework may include three building blocks:
[0064] A capability model TDS or a »thing description« of one or more automation devices ADV used to semantically represent or describe DSC the device capabilities of the automation device ADV. Preferably, a W3C Thing Description may be used for implementing the capability model TDS.
[0065] A machine learning model SMD or a semantic description for semantically representing one or more machine learning functionalities, including at least one semantic model of a neural network. The semantic descriptions model on-device applications and enrich the capability model TDS with artifact knowledge. The machine learning model SMD may also be seen as an artefact described DSC by the traditional workflow WFL.
[0066] A semantic repository KGR or a knowledge graph along with an infrastructure for hosting, querying, and managing semantic information.
[0067] The framework may be applied at different architecture levels. In some embodiments, the framework may be applied on an edge level as described further down below. The embodiments are not limited to W3C Thing Description. Other standards for describing devices or automation devices ADV may be used in the alternative.
[0068] As symbolized by different shapes at the bottom part of
[0069] In terms of software, various programs are running inside automation devices ADV, and different vendors even have different application stacks. Most devices are bare-metal, as they are designed for long lifetimes with low power consumption and limited resources. The diversity of IoT components poses a barrier to interoperability and re-usability. Edge computing brings many devices together to open the door to advancements, yet the management of disparate components over the production life-cycle is challenging.
[0070] A Thing Description according to the W3C standard aims to mitigate interoperability issues by bringing a variety of heterogeneous automation devices ADV under one information model. As depicted on the right side of the figure, the capability model TDS or »thing description« describes the metadata and interactions of automation devices ADV, preferably in an RDF format and using a standardized ontology. A Thing Description offers the possibility to add context knowledge from additional semantic models. For this purpose, models such as the SAREF ontology or many others may be used.
[0071] In addition to the standard way of extending a Thing Description with context knowledge, the Thing Description used by the embodiments as capability model TDS is complemented with semantic descriptions of artifacts. Further on, machine learning models SMD for general on-device applications—including machine learning applications—are shown on the left side of
[0072] A repository including a knowledge graph KGR, as shown at the top of the figure, may be hosted remotely and may be used to register and discover appropriate semantic information. The repository offers a good approach for capturing interconnections among devices and artifacts and for extending existing knowledge with new knowledge.
[0073] The knowledge graph KGR eases the organization of IoT components and automation devices ADV at scale. One may query semantic information of existing automation devices ADV and artifacts in the field. More importantly, the artifacts may be reused and orchestrated to build new features based on the retrieved semantic information, saving cost and avoiding reinventing the wheel.
[0074] In the following, a semantic model for describing Neural Networks is described. As the heterogeneity of edge devices and the complexity of the environment bring significant variations of on-device applications, it is not easy to have a clear overview of individual artifacts distributed over the network, such as their metadata, program structures, and computational logic, limiting the possibility of interpreting and reusing existing functionalities. The W3C Thing Description itself is not able to answer all these questions, because it is designed to facilitate information exchange, not to describe how the information is processed in the background.
[0075] Accordingly, the disclosed embodiments supplement the W3C Thing Description format with semantic modeling of on-device applications, particularly machine learning functions. Existing semantic schemas and ontologies may be reused in order to construct semantic models of artifacts, with corresponding namespaces and prefixes as shown in the code example below.
TABLE-US-00001
rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .  # RDF Namespace
rdfs: <http://www.w3.org/2000/01/rdf-schema#> .        # RDF Schema Namespace
td:   <http://www.w3.org/ns/td#> .                     # Thing Description
ssn:  <http://www.w3.org/ns/ssn/> .                    # Semantic Sensor Network Ontology
s3n:  <https://w3id.org/s3n/> .                        # Semantic Smart Sensor Network Ontology
om:   <http://www.ontology-of-units-of-measure.org/resource/om-2/> .  # Ontology of Units of Measure
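For illustration, resolving a prefixed name against these namespaces amounts to a simple string expansion. The following Python sketch is not part of the embodiments; the helper name `expand` is hypothetical:

```python
# Namespaces as listed in the table above.
PREFIXES = {
    "rdf":  "http://www.w3.org/1999/02/22-rdf-syntax-ns#",
    "rdfs": "http://www.w3.org/2000/01/rdf-schema#",
    "td":   "http://www.w3.org/ns/td#",
    "ssn":  "http://www.w3.org/ns/ssn/",
    "s3n":  "https://w3id.org/s3n/",
    "om":   "http://www.ontology-of-units-of-measure.org/resource/om-2/",
}

def expand(curie):
    """Expand a prefixed name like 'ssn:hasInput' to its full IRI."""
    prefix, _, local = curie.partition(":")
    return PREFIXES[prefix] + local

print(expand("ssn:hasInput"))   # http://www.w3.org/ns/ssn/hasInput
```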
[0076] A semantic schema designed for neural network models may be used with a namespace and abbreviation defined as follows:
TABLE-US-00002
nnet: <https://tinyml-schema-collab.github.io/> .  # Semantic Schema for Neural Network
[0077] The semantic schema for neural network models is depicted in
[0078] The layers of an exemplary neural network may be divided into three categories: input layer, middle layer (hidden layer), and output layer. The »input layer« may be the first layer of a neural network and may be responsible for receiving raw »input« data. A »middle layer« may be a layer performing a predefined transformation on incoming data from the previous layer and sending the result of this transformation to the next layer. The »output layer« may be the last layer in a neural network and may be responsible for the »output« of the inference result.
[0079] The »input« and the »output« of a neural network may advantageously be described as subclasses of other semantic standards in order to guarantee consistency and compatibility. Most layers may be composed of many interconnected neurons that operate with weights and/or may be assigned to a weight variable. Neurons use assigned weights to participate in mathematical operations. In terms of microcontrollers, an on-device neural network model may have one or more of the following properties:
[0080] Quantization, which may be applied to convert weights from floats to integers, thereby accelerating the neural network inference;
[0081] A required memory space for the model;
[0082] A number of multiply-accumulate operations, which may be used to estimate inference latency and other indicators.
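By way of illustration, the latter two indicators can be estimated from the layer shapes alone for fully connected layers. The following Python sketch uses hypothetical layer sizes (not taken from the embodiments) and counts one multiply-accumulate operation per weight; biases are ignored for simplicity:

```python
# Sketch: footprint indicators for a small fully connected network,
# corresponding to the semantic properties above (quantization,
# memory space, multiply-accumulate operations).

def dense_macs(layer_shapes):
    """Multiply-accumulate operations: one MAC per weight,
    i.e. shape_in * shape_out for each fully connected layer."""
    return sum(n_in * n_out for n_in, n_out in layer_shapes)

def weight_bytes(layer_shapes, bytes_per_weight):
    """Memory for the weights only (biases omitted for simplicity)."""
    return dense_macs(layer_shapes) * bytes_per_weight

layers = [(4, 28), (28, 3)]      # hypothetical input->middle->output shapes
print(dense_macs(layers))        # 4*28 + 28*3 = 196 MACs
print(weight_bytes(layers, 2))   # 16-bit quantization, 2 bytes/weight: 392
```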
[0083] In order to describe a possible use of the proposed machine learning model for semantically representing a neural network, an exemplary use case is considered which is related to condition monitoring in a simplified setting.
[0084] The conveyor belt CVB transports the workpiece WCS from the left to the right side to a subsequent work step. However, it may sometimes happen that the conveyor belt gets clogged. In this situation, the workpiece WCS cannot be moved off the conveyor belt CVB in time. This may happen for various reasons: for example, the vacuum gripper GRP and the conveyor belt CVB may not be synchronized in terms of a serving workload, or a malfunction may occur in the following process.
[0085] In this situation an IoT solution use case may be implemented to monitor the state of the conveyor belt CVB and take action once the belt starts clogging. A possible solution is described in the following.
[0086] First, an automation device ADV-1 for sensing vibrations is provided below the conveyor belt CVB. A machine learning model SMD including at least one semantic model of a neural network is deployed in the automation device ADV-1. Based on vibration data sensed by the automation device ADV-1 itself or provided by a sensor, the machine learning functionality executed on the automation device ADV-1 may classify three different conveyor belt states: off, on-normal, and on-clogging.
[0087] Second, a second automation device ADV-2, e.g., a camera, is provided to detect a presence of the worker WRK in the vicinity of the workstation. For this purpose, a machine learning model including a convolutional neural network is deployed on the automation device ADV-2. The convolutional neural network, when executed according to the machine learning model, may output two inference results: worker-detected and worker-not-detected. If the conveyor belt CVB is clogging and no worker is detected on-site, a warning is triggered to a remote center RC located inside a network NW constituted by the automation system (not shown).
[0088] Third, a third automation device RC may be implemented for reasoning about incoming events and triggering an alarm event. The remote center RC may notify a responsible manager once an alarm is received. For the sake of simplicity of this exemplary use case, only the first two inferences mentioned above are described further down below.
[0089] In the following a machine learning model SMD including a neural network model used for the vibration monitoring of the conveyor belt CVB is illustrated. The machine learning model SMD for the vibration monitoring may be modeled based on the semantic schema for Neural Network as depicted in
[0090] A possible structure of the neural network is shown in
[0091] The three dimensions refer to three different conveyor belt states: off, on-normal operation, and on-clogging, respectively. Preprocessing and postprocessing steps are treated as parts of the neural network block. Accordingly, the input of the neural network block is three-dimensional vibration data.
[0092] The final output is an integer representing the state of the conveyor belt, where zero is off, one means on-normal, and two is on-clogging. The weights in the model are quantized to 16-bit floats. The model requires at least 20 kB of RAM, and each feed-forward inference needs about 252 multiply-accumulate operations. The model weights are saved separately in a neural network model repository and may be retrieved via a URL (Uniform Resource Locator).
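The output mapping described above can be sketched as follows. The logit values in the usage example are illustrative only, and the `decode` helper is a hypothetical name, not part of the embodiments:

```python
# Sketch: mapping the three-dimensional network output to the integer
# belt state described above (0 = off, 1 = on-normal, 2 = on-clogging).
STATES = {0: "off", 1: "on-normal", 2: "on-clogging"}

def decode(output_vector):
    """Return (state index, state label) for the largest output value."""
    state = max(range(len(output_vector)), key=output_vector.__getitem__)
    return state, STATES[state]

print(decode([0.05, 0.15, 0.80]))   # (2, 'on-clogging')
```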
[0093] In the following, a machine learning model, or particularly, a semantic representation of this neural network entitled ClogNet is shown. The machine learning model of this exemplary neural network as shown below is defined in an RDF Turtle format. This semantic representation may be automatically generated by parsing the entire neural network. For the sake of simplicity, only information relevant to the described embodiment is shown.
TABLE-US-00003
:ClogNet_1 rdf:type nnet:NeuralNetwork ;
    nnet:id "nnet-322-000-000-001" ;
    nnet:inputLayer :InputLayer_1 ;
    nnet:middleLayer :MiddleLayer_1 ;
    nnet:middleLayer :MiddleLayer_2 ;
    nnet:outputLayer :OutputLayer_1 ;
    <http://w3id.org/s3n/hasProcedureFeature> :ProcedureFeature_1 ;
    nnet:quantization nnet:Float16 ;
    nnet:multiplyAccumulateOperations "252" ;
    nnet:weights "http://model-repo/nnet-322-000-000-001" ;
    rdfs:comment "Neural network for detecting clogging on a conveyor belt." ;
    rdfs:label "ClogNet_1" ;
    ssn:hasInput :ThreeAxesVibrationInput ;
    ssn:hasOutput :ConveyorBeltState .

:InputLayer_1 rdf:type nnet:Layer ;
    nnet:shapeIn "4" ;
    nnet:shapeOut "28" ;
    nnet:trainable schema:False ;
    nnet:typeOfLayer nnet:FullyConnected ;
    rdfs:label "InputLayer_1" .

:MiddleLayer_1 rdf:type nnet:Layer ;
    nnet:index "1" ;
    nnet:trainable schema:False ;
    nnet:typeOfLayer nnet:Relu ;
    rdfs:label "MiddleLayer_1" .

:MiddleLayer_2 rdf:type nnet:Layer ;
    nnet:index "2" ;
    nnet:shapeIn "28" ;
    nnet:shapeOut "3" ;
    nnet:trainable schema:True ;
    nnet:typeOfLayer nnet:FullyConnected ;
    rdfs:label "MiddleLayer_2" .

:OutputLayer_1 rdf:type nnet:Layer ;
    nnet:trainable schema:False ;
    nnet:typeOfLayer nnet:Logistic ;
    rdfs:label "OutputLayer_1" .

:ProcedureFeature_1 rdf:type <http://w3id.org/s3n/ProcedureFeature> ;
    rdfs:label "ProcedureFeature_1" ;
    ssn-system:inCondition :Condition_1 .

:Condition_1 rdf:type s3n:Memory ;
    schema:minValue "20" ;
    schema:unitCode "om:kilobyte" ;
    rdfs:label "RAM requirement." .
[0094] One may notice that the model is rich. It provides a multiplicity of options for defining details of a neural network. For example, the second middle layer in the semantic description has value »true« in the attribute »trainable«, which implies that some weights in the neural network may be updated.
[0095] In the following, capability models TDS are provided. A W3C Thing Description may be used to semantically describe automation devices ADV for executing the desired Neural Network model. Two examples of capability model TDS are described respectively for the first automation device ADV-1 and for the second automation device ADV-2 used in the industrial use case as illustrated in
[0096] The first automation device ADV-1 is assumed to be equipped with 320 KB SRAM and 1 MB flash, and is assumed to be powered by a Cortex-M4 CPU running at 90 MHz. An embedded 3-axis accelerometer is assumed to be used by the first automation device ADV-1. A neural network for detecting the state of the conveyor belt is assumed to be deployed on board of the first automation device ADV-1. The artifact interfaces are assumed to be described under "s3n:Algorithm". The required memory for the execution of each artifact is assumed to be provided. The estimated available RAM at run time is assumed to be 54 kB. A capability model TDS for the first automation device ADV-1 containing the information above is given below. For the sake of simplicity and space, only important parts (i.e., essential interfaces and components) of the capability model TDS are shown.
TABLE-US-00004
{
  "@context": [
    "https://www.w3.org/2019/wot/td/v1",
    {
      "sosa": "http://www.w3.org/ns/sosa/",
      "ssn": "http://www.w3.org/ns/ssn/",
      "ssn-system": "http://www.w3.org/ns/ssn/systems",
      "s3n": "http://w3id.org/s3n/",
      "om": "http://www.ontology-of-units-of-measure.org/resource/om-2/",
      "schema": "http://schema.org/",
      "rdfs": "http://www.w3.org/2000/01/rdf-schema#",
      "demo": "http://ureasoner.org/festo-demo#",
      "@language": "en"
    }
  ],
  "@type": ["s3n:SmartSensor"],
  "title": "SSI-Web_1",
  "description": "SIEMENS SSI-Web Node",
  "ssn:hasSubSystem": [
    {
      "@type": "s3n:MicroController",
      "rdfs:comment": "Microcontroller in SIEMENS SSI-Web Node.",
      "s3n:hasSystemCapability": [
        {
          "ssn-system:hasSystemProperty": {
            "@type": "s3n:Memory",
            "schema:value": "54",
            "schema:unitCode": "om:kilobyte"
          }
        }
      ],
      "ssn:implements": [
        {
          "@type": "s3n:Algorithm",
          "rdfs:comment": "Neural network for detecting anomaly on a conveyor belt.",
          "s3n:hasProcedureFeature": {
            "ssn-system:inCondition": {
              "@type": "s3n:Memory",
              "rdfs:comment": "RAM requirement.",
              "schema:minValue": "20",
              "schema:unitCode": "om:kilobyte"
            }
          }
        }
      ]
    },
    { "@type": "demo:VibrationSensor" },
    { "@type": "demo:TemperatureSensor" }
  ],
  "ssn:isHostedBy": "demo:FESTO_Workstation",
  "securityDefinitions": { "nosec_sc": { "scheme": "nosec" } },
  "security": "nosec_sc",
  "properties": {
    "ConveyorBeltState": {
      "title": "ConveyorBeltState",
      "observable": true,
      "readOnly": true,
      "description": "State of the conveyor belt on FESTO workstation based on a vibration sensor.",
      "type": "integer",
      "forms": [
        {
          "op": ["readproperty", "observeproperty"],
          "href": "http://ureasoner.org/festo-demo/conveyor-belt-state"
        }
      ]
    }
  },
  "actions": {},
  "events": {},
  "id": "SSI_Web:198:023:042:021",
  "forms": []
}
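The memory matchmaking implied by this capability model can be sketched with the Python standard library alone. The JSON below is an abridged copy of the thing description above (values taken from it); the traversal paths are an assumption about how such a document would be read, not a prescribed API:

```python
import json

# Sketch: read available device memory and the artifact's RAM requirement
# from an abridged W3C Thing Description, then check deployability.
td = json.loads("""
{
  "title": "SSI-Web_1",
  "ssn:hasSubSystem": [
    {
      "@type": "s3n:MicroController",
      "s3n:hasSystemCapability": [
        {"ssn-system:hasSystemProperty":
          {"@type": "s3n:Memory", "schema:value": "54",
           "schema:unitCode": "om:kilobyte"}}
      ],
      "ssn:implements": [
        {"@type": "s3n:Algorithm",
         "s3n:hasProcedureFeature":
           {"ssn-system:inCondition":
             {"@type": "s3n:Memory", "schema:minValue": "20",
              "schema:unitCode": "om:kilobyte"}}}
      ]
    }
  ]
}
""")

mcu = td["ssn:hasSubSystem"][0]
available = int(mcu["s3n:hasSystemCapability"][0]
                ["ssn-system:hasSystemProperty"]["schema:value"])
required = int(mcu["ssn:implements"][0]["s3n:hasProcedureFeature"]
               ["ssn-system:inCondition"]["schema:minValue"])
print(available >= required)   # True: 54 kB available >= 20 kB required
```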
[0097] The second automation device ADV-2 is assumed to be equipped with 256 KB of SRAM and 1 MB of Flash. It is assumed to be driven by a Cortex-M4 CPU running at 64 MHz with a computing power similar to the first automation device ADV-1. An external image sensor is assumed to be connected to the board of the second automation device ADV-2. Further on, a neural network for detecting on-site workers is assumed to be provided.
[0098] Memory requirements for the execution of artifacts are assumed to be provided. The estimated available RAM at run time is assumed to be 89 kB. A capability model TDS for the second automation device ADV-2 containing the information above is given below.
TABLE-US-00005
{
  "@context": [],
  "@type": ["s3n:SmartSensor"],
  "title": "Arduino_Sense_BLE",
  "description": "Arduino Nano Sense BLE 33",
  "ssn:hasSubSystem": [
    {
      "@type": "s3n:MicroController",
      "rdfs:comment": "Microcontroller in Arduino Nano Sense BLE 33.",
      "s3n:hasSystemCapability": [
        {
          "ssn-system:hasSystemProperty": {
            "@type": "s3n:Memory",
            "schema:value": "89",
            "schema:unitCode": "om:kilobyte"
          }
        }
      ],
      "ssn:implements": [
        {
          "@type": "s3n:Algorithm",
          "rdfs:comment": "Neural network for detecting presence of worker.",
          "s3n:hasProcedureFeature": {
            "ssn-system:inCondition": {
              "@type": "s3n:Memory",
              "rdfs:comment": "RAM requirement.",
              "schema:minValue": "80",
              "schema:unitCode": "om:kilobyte"
            }
          }
        },
        {
          "@type": "s3n:Algorithm",
          "rdfs:comment": "Complex event processing for generating Alarm Event against rule.",
          "s3n:hasProcedureFeature": {
            "ssn-system:inCondition": {
              "@type": "s3n:Memory",
              "rdfs:comment": "RAM requirement.",
              "schema:minValue": "40",
              "schema:unitCode": "om:kilobyte"
            }
          }
        }
      ]
    },
    { "@type": "demo:ImagerSensor" },
    { "@type": "demo:VibrationSensor" },
    { "@type": "demo:TemperatureSensor" }
  ],
  "ssn:isHostedBy": "demo:FESTO_Workstation",
  "securityDefinitions": { "nosec_sc": { "scheme": "nosec" } },
  "security": "nosec_sc",
  "properties": {
    "WorkerPresenceState": {
      "title": "WorkerPresenceState",
      "observable": true,
      "readOnly": true,
      "description": "Result of on-site worker detection based on an imager sensor.",
      "type": "boolean",
      "forms": [
        {
          "op": ["readproperty", "observeproperty"],
          "href": "http://ureasoner.org/festo-demo/workerPresence"
        }
      ]
    }
  },
  "actions": {},
  "events": {
    "CloggingAlarm": {
      "type": "string",
      "description": "The alarm event generated upon matching of CEP reasoning rule.",
      "forms": [
        {
          "href": "http://ureasoner.org/festo-demo/alarm",
          "contentType": "string",
          "subprotocol": "longpoll",
          "op": ["subscribeevent"]
        }
      ]
    }
  },
  "id": "Arduino:198:023:042:025",
  "forms": []
}
[0099] It is worth noting that the capability models TDS of both automation devices ADV-1, ADV-2 and the on-device applications may be aligned with each other. The interfaces of implemented artifacts in an automation device ADV may be exposed via the capability model TDS. The artifacts may also retrieve relevant information via the capability model TDS. These are reflected in their semantic descriptions.
[0100] Eventually, all IoT components are represented in a well-founded, machine-interpretable format, ready for integration into the semantic repository KGR, enabling effective discovery, automated transformations, and validation.
[0101] The embodiments not only allow managing a single on-device application on a single device, but also enable the construction of new IoT applications by discovering and configuring the semantic knowledge stored in the semantic repository KGR. Semantic tools may be used to automatically validate whether an automation device ADV meets the requirements to run a particular artifact, e.g., in terms of available sensor data, available resources, etc.
[0102] The artifacts may be instantiated from the orchestrated information and may be deployed to desired automation devices ADV. This concept significantly speeds up the engineering of IoT solutions, where new functionalities may be created by parsing existing semantic information instead of developing embedded software from scratch. By leveraging the proposed framework, mass-management of on-device applications in industrial IoT is enabled.
[0103] A possible use of semantic models for neural networks is described below. After introducing semantic artifacts and capability models TDS for an exemplary use case, the focus is now on a description of how hosted semantic information may be organized and how existing components may be discovered in order to create new artifacts. This is shown in two example queries.
[0104] It is assumed that a neural network is to be deployed with the purpose of monitoring a conveyor belt based on the vibration data. Using semantic discovery, there is no need to redesign a neural network or train a neural network from scratch. Instead, the semantic repository KGR is queried for discovering available neural network models capable of accomplishing the desired task. Aside from available neural network models, minimum memory requirements for executing the model may need to be considered. An exemplary query along with a corresponding result is shown below.
TABLE-US-00006
Query 1:

SELECT ?neuralNetwork ?memory ?unit
WHERE {
    ?neuralNetwork a nnet:NeuralNetwork ;
        ssn:hasInput ?input ;
        ssn:hasOutput ?output ;
        s3n:hasProcedureFeature ?x .
    ?x ssn-system:inCondition ?cond .
    ?cond a s3n:Memory ;
        schema:minValue ?memory ;
        schema:unitCode ?unit .
    ?input rdf:type :VibrationMeasurement ;
        rdf:type nnet:Networkinput .
    ?output rdf:type :ConveyorBeltState ;
        rdf:type nnet:Networkoutput .
}

Result: neuralNetwork: ClogNet_1; memory: 20; unit: om:kilobyte.
[0105] A next task may be to determine which automation devices ADV have a vibration sensor and, at the same time, are able to meet the RAM requirements. For demonstration purposes, it is assumed that a temperature sensor is also required. An example query may be formed as follows:
TABLE-US-00007
Query 2:

SELECT ?multiSensor ?memory
WHERE {
    ?multiSensor a s3n:SmartSensor ;
        ssn:hasSubSystem ?system_1 ;
        ssn:hasSubSystem ?system_2 ;
        ssn:hasSubSystem ?system_3 .
    ?system_1 rdf:type demo:VibrationSensor .
    ?system_2 rdf:type demo:TemperatureSensor .
    ?system_3 rdf:type s3n:MicroController ;
        s3n:hasSystemCapability ?x .
    ?x ssn-system:hasSystemProperty ?cond .
    ?cond a s3n:Memory ;
        schema:value ?memory ;
        schema:unitCode om:kilobyte .
    FILTER (?memory >= 20)
}

Result:
multiSensor: Arduino_Sense_BLE; memory: 89.
multiSensor: SSI-Web_1; memory: 54.
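Under the hood, the matchmaking of Query 2 amounts to a sensor-set and memory filter. A minimal Python sketch over plain records, mirroring the two capability models of this example, might look as follows:

```python
# Sketch: Query 2 expressed as a filter over plain device records.
# Device data mirrors the two capability models TDS above.
devices = [
    {"name": "SSI-Web_1", "memory_kb": 54,
     "sensors": {"VibrationSensor", "TemperatureSensor"}},
    {"name": "Arduino_Sense_BLE", "memory_kb": 89,
     "sensors": {"ImagerSensor", "VibrationSensor", "TemperatureSensor"}},
]

required_sensors = {"VibrationSensor", "TemperatureSensor"}
min_memory_kb = 20   # RAM requirement of ClogNet_1

# A device matches if it hosts all required sensors and has enough RAM.
matches = [d["name"] for d in devices
           if required_sensors <= d["sensors"]
           and d["memory_kb"] >= min_memory_kb]
print(matches)   # ['SSI-Web_1', 'Arduino_Sense_BLE']
```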
[0106] Using the proposed framework, it is easy to develop a new IIoT application. Users and machines may efficiently discover information and filter results according to various requirements.
[0107] The proposed embodiments further enable:
[0108] Semantic description for Neural Network-based applications in an automation device;
[0109] Transparent machine-interpretable semantic information reused for different purposes, e.g., for representing a Neural Network and for creating a semantic description of a device that hosts the network;
[0110] Semantic discovery of a Neural Network relevant for a particular task;
[0111] Automated proving whether hardware characteristics and computation capabilities of an automation device meet requirements of a Neural Network;
[0112] Validated configuration and deployment of a Neural Network in an automation device;
[0113] Integration of semantic descriptions for Neural Networks with other knowledge artifacts (digital twins, standardized semantic models, etc.) in a Knowledge Graph;
[0114] Semantic-based management of Neural Network (on-device) applications across different architecture levels (field, edge, cloud), and enablement of a transparent and interoperable IIoT ecosystem as a whole.
[0115] It is to be understood that the elements and features recited in the appended claims may be combined in different ways to produce new claims that likewise fall within the scope of the present disclosure. While the teachings of the present disclosure have been described above by reference to various embodiments, it should be understood that many changes and modifications can be made to the described embodiments. It is therefore intended that the foregoing description be regarded as illustrative rather than limiting, and that it be understood that all equivalents and/or combinations of embodiments are intended to be included in this description.