Patent classifications
H04L41/0246
Automated network discovery for industrial controller systems
Controller devices may be configured to execute a network discovery service to identify other devices on a network, including other controller devices, user computing devices, and/or human machine interface devices. The controller devices may communicate with the devices on the network. An individual controller device may, upon connection to a human machine interface device, provide, to the human machine interface device via a web server, a graphical user interface from which a user may configure the controller device or connect to another controller device on the network.
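The discovery service described above can be sketched as a controller-side registry keyed by device address. This is a minimal illustration, not the patented implementation; the device-type names and the announcement-recording API are assumptions for the example (a real service would listen for broadcast or multicast announcements on the network).

```python
from dataclasses import dataclass, field

# Illustrative device categories taken from the abstract.
CONTROLLER, USER_PC, HMI = "controller", "user_computer", "hmi"

@dataclass
class DiscoveryService:
    """Minimal sketch of a controller-side discovery registry."""
    devices: dict = field(default_factory=dict)  # address -> device type

    def record_announcement(self, address: str, device_type: str) -> None:
        # A real service would receive these as network announcements;
        # here they are recorded directly for illustration.
        self.devices[address] = device_type

    def peers_of_type(self, device_type: str) -> list:
        return [addr for addr, t in self.devices.items() if t == device_type]

svc = DiscoveryService()
svc.record_announcement("10.0.0.2", CONTROLLER)
svc.record_announcement("10.0.0.3", HMI)
print(svc.peers_of_type(CONTROLLER))  # ['10.0.0.2']
```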
Dynamic execution resource selection for customized workflow tasks
A representation of a workflow comprising a plurality of tasks is obtained. An execution of an instance of the workflow is initiated. The execution comprises selecting, with respect to a particular task of the workflow, a particular execution resource option from a set comprising at least a first execution resource option and a second execution resource option. A result of the execution is stored.
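One way to read the selection step above is as a per-task policy over candidate execution resources. The sketch below picks the cheapest option under a hypothetical cost model (per-invocation serverless billing vs. an amortized provisioned VM); both the cost model and the option names are assumptions, not details from the abstract.

```python
def select_execution_resource(task, options, estimate_cost):
    """Pick the cheapest resource option for a task (one possible policy)."""
    return min(options, key=lambda opt: estimate_cost(task, opt))

# Hypothetical cost model: serverless bills per second of execution,
# a provisioned VM has a fixed setup cost but a lower per-second rate.
def cost(task, option):
    if option == "serverless":
        return 0.01 * task["duration_s"]
    return 0.5 + 0.001 * task["duration_s"]  # "vm"

short_task = {"name": "resize", "duration_s": 2}
long_task = {"name": "train", "duration_s": 3600}
print(select_execution_resource(short_task, ["serverless", "vm"], cost))  # serverless
print(select_execution_resource(long_task, ["serverless", "vm"], cost))   # vm
```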
Segment Routing Network Signaling and Packet Processing
In one embodiment, a service chain data packet is instrumented as it is communicated among network nodes in a network providing service-level and/or networking operations visibility. The service chain data packet includes a particular header identifying a service group defining one or more service functions, and is a data packet and not a probe packet. A network node adds networking and/or service-layer operations data to the particular service chain data packet, such as, but not limited to, in the particular header. Such networking operations data includes a performance metric or attribute related to the transport of the particular service chain packet in the network. Such service-layer operations data includes a performance metric or attribute related to the service-level processing of the particular service chain data packet in the network.
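The in-band instrumentation described above can be sketched as each node appending its operations data to the data packet's header as the packet transits the service chain. The packet layout and field names below are illustrative assumptions; the abstract only specifies that the header identifies a service group and carries the added operations data.

```python
def add_operations_data(packet: dict, node_id: str, transit_ms: float) -> dict:
    """Append per-node networking operations data to the packet header in place."""
    packet.setdefault("header", {}).setdefault("ops_data", []).append(
        {"node": node_id, "transit_ms": transit_ms}
    )
    return packet

# A data packet carrying user payload -- not a probe packet.
pkt = {"header": {"service_group": "sg-7"}, "payload": b"user data"}
add_operations_data(pkt, "node-a", 0.42)
add_operations_data(pkt, "node-b", 1.10)
print([d["node"] for d in pkt["header"]["ops_data"]])  # ['node-a', 'node-b']
```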
TRANSMISSION APPARATUS, ALARM TRANSFER METHOD AND ALARM TRANSFER SYSTEM
A transmission apparatus, one of a plurality of transmission apparatuses, executes: a reception processing that receives, from a second transmission apparatus different from the own apparatus, a first alarm detected in a first transmission apparatus different from the own apparatus; a detection processing that detects a second alarm of the own apparatus; a mask processing that masks alarms, including the first alarm received by the reception processing and the second alarm detected by the detection processing; and a sending processing that sends any alarm not masked by the mask processing either to a third transmission apparatus, different from the own apparatus and the second transmission apparatus, from among the plurality of transmission apparatuses, or to a given apparatus different from any of the plurality of transmission apparatuses.
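The mask-then-forward behavior above can be sketched as a simple filter over the union of received and locally detected alarms. The alarm names (LOS, LOF, AIS) and the masking rationale in the comments are illustrative assumptions, not details from the abstract.

```python
def forward_alarms(received_alarms, own_alarms, mask):
    """Combine received and locally detected alarms, drop masked ones,
    and return the list to send to the downstream apparatus."""
    combined = list(received_alarms) + list(own_alarms)
    return [alarm for alarm in combined if alarm not in mask]

received = ["LOS"]          # first alarm, received from an upstream apparatus
detected = ["LOF", "AIS"]   # second alarms, detected by the own apparatus
mask = {"AIS"}              # e.g. suppress a consequent alarm
print(forward_alarms(received, detected, mask))  # ['LOS', 'LOF']
```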
Sharing configuration resources for network devices among applications
In an example, a method includes receiving, by a network management system (NMS), a configuration request comprising first configuration data for a network device, the first configuration data defining a data structure comprising a first property/value pair; generating, by the NMS from the first configuration data, a corresponding first path/value pair for the first property/value pair, wherein a path of the first path/value pair uniquely identifies the first path/value pair in an associative data structure; modifying, by the NMS, the associative data structure based on the first path/value pair; generating, by the NMS, from the associative data structure, a configuration resource comprising second configuration data for the network device, the second configuration data comprising a second property/value pair that corresponds to the first path/value pair; and sending, by the NMS, the second configuration data to the network device to modify a configuration of the network device.
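The property/value-to-path/value step above resembles flattening a nested configuration into an associative structure keyed by unique paths. The sketch below shows that flattening; the configuration keys (`interfaces`, `mtu`, etc.) and the `/`-delimited path syntax are assumptions for illustration.

```python
def flatten(config, prefix=""):
    """Convert nested property/value pairs into path/value pairs whose
    paths uniquely identify each value in a flat associative structure."""
    out = {}
    for key, value in config.items():
        path = f"{prefix}/{key}"
        if isinstance(value, dict):
            out.update(flatten(value, path))  # recurse into nested properties
        else:
            out[path] = value
    return out

cfg = {"interfaces": {"ge-0/0/0": {"mtu": 9000, "enabled": True}}}
paths = flatten(cfg)
print(paths)
# {'/interfaces/ge-0/0/0/mtu': 9000, '/interfaces/ge-0/0/0/enabled': True}
```

Modifying the associative structure then amounts to inserting or overwriting entries by path, and regenerating device configuration is the inverse (unflattening) operation.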
NETWORK LATENCY MEASUREMENT AND ANALYSIS SYSTEM
Deploying a point of presence (PoP) changes traffic flow to a cloud service provider. To determine if the PoP improves the performance of a cloud service to a client, actual network latencies between the client and the cloud service are measured. In more complex scenarios, multiple PoPs are used. The client sends multiple requests for the same content to the cloud provider. The requests are sent via different routes. The cloud provider serves the requests and collates the latency information. Based on the latency information, a route for a future request is selected, resources are allocated, or a user interface is presented. The process of determining the latency for content delivered by different routes may be repeated for content of different sizes. A future request is routed along the network path that provides the lowest latency for the data being requested.
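The route-selection step above can be sketched as choosing, per payload-size bucket, the route with the lowest observed latency. The routes, size buckets, and latency figures below are hypothetical measurements, not data from the abstract.

```python
def best_route(measurements, size_bucket):
    """Given observed latencies per (route, size bucket), pick the route
    with the lowest latency for the requested payload size."""
    candidates = {route: ms for (route, bucket), ms in measurements.items()
                  if bucket == size_bucket}
    return min(candidates, key=candidates.get)

# Hypothetical measurements: (route, size bucket) -> median latency in ms.
# A PoP may help small requests but add overhead for large transfers.
obs = {
    ("direct", "small"): 80, ("via-pop", "small"): 45,
    ("direct", "large"): 300, ("via-pop", "large"): 320,
}
print(best_route(obs, "small"))  # via-pop
print(best_route(obs, "large"))  # direct
```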
Policy-based payload delivery for transport protocols
Information describing a rule to be applied to a traffic stream is received at an edge network device. The traffic stream is received at the edge network device. A schema is applied to the traffic stream at the edge network device. It is determined that a rule triggering condition has been met. The rule is applied to the traffic stream, at the edge network device, in response to the rule triggering condition having been met. At least one of determining that the rule triggering condition has been met or applying the rule is performed based on the applied schema.
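The flow above can be sketched as: parse each record of the stream with the schema, watch for the trigger condition, then apply the rule from that point on. The schema, trigger condition, and rule below are illustrative stand-ins; the abstract does not specify their form.

```python
def process_stream(records, schema, trigger, rule):
    """Apply a schema to each record; once the trigger condition is met,
    apply the rule to that record and all subsequent records."""
    triggered = False
    out = []
    for raw in records:
        rec = schema(raw)              # interpret the raw bytes via the schema
        if not triggered and trigger(rec):
            triggered = True           # rule triggering condition met
        out.append(rule(rec) if triggered else rec)
    return out

schema = lambda raw: {"size": len(raw), "data": raw}
trigger = lambda rec: rec["size"] > 4          # hypothetical condition
rule = lambda rec: {**rec, "marked": True}     # hypothetical rule action
result = process_stream([b"ab", b"abcdef", b"xy"], schema, trigger, rule)
print([r.get("marked", False) for r in result])  # [False, True, True]
```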
SUPPORTING INTEROPERABILITY IN CLOUD ENVIRONMENTS
Examples relate to supporting interoperability in cloud environments. In some examples, an application topology is converted to a cloud product topology that supports a cloud product standard, where the application topology includes a general application and supports a cloud industry standard. Artifacts associated with the general application are imported into a product database, where the artifacts are exposed in the product database via a standard Internet protocol. At this stage, the cloud product topology is imported into the product database to obtain an imported topology, and the general application is deployed from the imported topology to a server computing device that supports the cloud product standard, where the general application accesses the artifacts via the standard Internet protocol after deployment.
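The topology-conversion step above can be sketched as a node-by-node translation from an industry-standard application topology into a product-specific one. The topology shape, node types, and the "acme-cloud" product standard name are all assumptions made for this example.

```python
def convert_topology(app_topology, product_standard):
    """Map a cloud-industry-standard application topology onto a
    product-specific topology (a simple node-by-node translation)."""
    return {
        "standard": product_standard,
        "nodes": [
            {"name": node["name"], "type": f"{product_standard}:{node['type']}"}
            for node in app_topology["nodes"]
        ],
    }

app = {"nodes": [{"name": "web", "type": "Compute"},
                 {"name": "db", "type": "Database"}]}
product = convert_topology(app, "acme-cloud")  # hypothetical product standard
print([n["type"] for n in product["nodes"]])
# ['acme-cloud:Compute', 'acme-cloud:Database']
```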