Patent classifications
H04L41/122
Shim layer for extracting and prioritizing underlying rules for modeling network intents
Systems, methods, and computer-readable media for receiving one or more models of network intents, comprising a plurality of contracts between providers and consumers, each contract containing entries with priority values. Each contract is flattened into a listing of rules and a new priority value is calculated. The listing of rules encodes the implementation of the contract between the providers and the consumers. Each entry is iterated over and added to a listing of entries if it is not already present. For each rule, the one or more entries associated with the contract from which the rule was flattened are identified, and for each given entry a flat rule comprising the combination of the rule and the entry is generated, wherein a flattened priority is calculated based at least in part on the priority value of the given entry and the priority value of the rule.
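The flattening step described above can be sketched as follows. This is an illustrative reading of the abstract, not the patented implementation: the `Contract`/`Entry`/`Rule` structures and the particular priority combination (entry priority as the high-order component) are assumptions for the example.

```python
# Hypothetical sketch of flattening contracts into prioritized flat rules.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Entry:
    name: str
    priority: int  # lower value = higher priority (assumed convention)

@dataclass(frozen=True)
class Rule:
    action: str
    priority: int

@dataclass
class Contract:
    entries: list
    rules: list  # rules encoding the implementation of the contract

def flatten(contracts):
    """Return (entry_listing, flat_rules), where each flat rule pairs a
    rule with an entry and carries a combined flattened priority."""
    entry_listing = []
    flat_rules = []
    for contract in contracts:
        # Iterate over entries, adding each only if not already present.
        for entry in contract.entries:
            if entry not in entry_listing:
                entry_listing.append(entry)
        # Combine each rule with each entry of its originating contract.
        for rule in contract.rules:
            for entry in contract.entries:
                # Flattened priority derived from both priority values;
                # the scaling factor here is an arbitrary example choice.
                flattened_priority = entry.priority * 100 + rule.priority
                flat_rules.append((entry, rule, flattened_priority))
    return entry_listing, flat_rules
```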
APPLICATION SERVICE LEVEL EXPECTATION HEALTH AND PERFORMANCE
Techniques are described for monitoring application performance in a computer network. For example, a network management system (NMS) includes a memory storing path data received from a plurality of network devices, the path data reported by each network device of the plurality of network devices for one or more logical paths of a physical interface from the respective network device over a wide area network (WAN). Additionally, the NMS may include processing circuitry in communication with the memory and configured to: determine, based on the path data, one or more application health assessments for one or more applications, wherein the one or more application health assessments are associated with one or more application time periods for a site, and in response to determining at least one failure state, output a notification including identification of a root cause of the at least one failure state.
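The assessment-and-notify flow can be sketched in miniature. The path-data record shape, the loss-threshold check, and the root-cause label below are all invented stand-ins; the abstract does not specify the NMS's actual health model or failure taxonomy.

```python
# Toy sketch: derive per-(app, period, site) health assessments from path
# data and emit a root-cause notification for any failure state.
def assess_application_health(path_data, loss_threshold=0.05):
    """path_data: list of {"app", "period", "site", "loss"} records
    (an assumed shape). Returns (assessments, notifications)."""
    assessments, notifications = {}, []
    for rec in path_data:
        key = (rec["app"], rec["period"], rec["site"])
        failed = rec["loss"] > loss_threshold
        assessments[key] = "fail" if failed else "ok"
        if failed:
            # A real NMS would identify a root cause; this label is a placeholder.
            notifications.append(
                {"key": key, "root_cause": "packet loss on logical path"})
    return assessments, notifications
```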
PLANNING AND MANAGING NETWORK PROBES USING CENTRALIZED CONTROLLER
In general, the disclosure describes techniques for measuring edge-based quality of experience (QoE) metrics. For instance, a network device may construct a topological representation of a network, including indications of nodes and links connecting the nodes within the network. For each of the links, the network device may select a node device of the two node devices connected by the respective link to measure one or more QoE metrics for the respective link, with the non-selected node device not measuring the QoE metrics. In response to selecting the selected node device, the network device may receive a set of one or more QoE metrics for the respective link for data flows flowing from the selected node device to the non-selected node device. The network device may store the QoE metrics and determine counter QoE metrics for data flows flowing from the non-selected node device to the selected node device.
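The one-endpoint-per-link selection can be illustrated with a short sketch. The load-balancing tie-break below is an assumed policy for the example; the abstract does not say how the controller chooses between the two endpoints.

```python
# Illustrative sketch: for each link, select exactly one of its two node
# devices to measure QoE metrics, so each link is probed from one end only.
from collections import Counter

def select_probing_nodes(links):
    """links: iterable of (node_a, node_b) pairs.
    Returns {link: selected_node}. The non-selected node does not
    measure the QoE metrics for that link."""
    load = Counter()   # how many links each node is already probing
    selection = {}
    for a, b in sorted(links):
        # Assumed policy: pick the endpoint currently probing fewer links,
        # spreading measurement load across the topology.
        chosen = a if load[a] <= load[b] else b
        selection[(a, b)] = chosen
        load[chosen] += 1
    return selection
```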
Network data extraction parser-model in SDN
A parser model may be used with software-defined applications or controllers. A change in the network topology may be detected, and based on that change, a network device may filter certain network data traffic for processing by a software-defined network controller.
Datapath for multiple tenants
A novel design of a gateway that handles traffic in and out of a network by using a datapath pipeline is provided. The datapath pipeline includes multiple stages for performing various data-plane packet-processing operations at the edge of the network. The processing stages include centralized routing stages and distributed routing stages. The processing stages can include service-providing stages such as NAT and firewall. The gateway caches the results of previous packet-processing operations and reapplies those results to subsequent packets that meet certain criteria. For packets that do not have an applicable or valid result from previous packet-processing operations, the gateway datapath daemon executes the pipelined packet-processing stages, records a set of data from each stage of the pipeline, and synthesizes those data into a cache entry for subsequent packets.
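The cache-and-reapply behavior can be sketched as below. The flow key, stage representation, and synthesis step are simplified assumptions for illustration, not the gateway's actual datapath.

```python
# Minimal sketch of a datapath daemon that runs a staged pipeline once per
# flow, synthesizes the per-stage records into a cache entry, and reapplies
# the cached result to subsequent matching packets.
class DatapathDaemon:
    def __init__(self, stages):
        self.stages = stages   # ordered pipeline stages (callables)
        self.cache = {}        # flow key -> synthesized cache entry

    def process(self, packet):
        # Assumed flow key: (src, dst, proto) 3-tuple.
        key = (packet["src"], packet["dst"], packet["proto"])
        if key in self.cache:
            # Cache hit: reapply the previously synthesized result.
            return self.cache[key]
        # Cache miss: execute every stage, recording data from each...
        records = [stage(packet) for stage in self.stages]
        # ...then synthesize those records into a single cache entry.
        entry = {k: v for record in records for k, v in record.items()}
        self.cache[key] = entry
        return entry
```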
Information processing method, computer-readable recording medium storing information processing program, information processing apparatus, and information processing system
An information processing method executed by a computer includes: specifying one or a plurality of first physical resources on which virtual resources used by a first user operate; specifying a device connected to the first physical resource and one or a plurality of second physical resources different from the first physical resource, which are connected to the device and on which virtual resources used by a user other than the first user operate; and outputting information that indicates the first physical resource and information that indicates the second physical resource.
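The two specifying steps amount to finding other users' physical resources attached to the same device. A toy sketch, with invented mappings of users to physical resources and resources to devices:

```python
# Illustrative sketch: given a user, find (1) the physical resources
# hosting that user's virtual resources and (2) other users' physical
# resources connected to the same devices.
def shared_device_resources(user, user_to_phys, phys_to_device):
    """user_to_phys: {user: [physical resource, ...]}
    phys_to_device: {physical resource: connecting device}
    Returns (first_resources, second_resources) as sets."""
    first = set(user_to_phys.get(user, ()))
    devices = {phys_to_device[p] for p in first}
    second = set()
    for other, resources in user_to_phys.items():
        if other == user:
            continue
        for p in resources:
            # A second physical resource differs from the first resources
            # but hangs off one of the same devices.
            if p not in first and phys_to_device.get(p) in devices:
                second.add(p)
    return first, second
```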
IMPROVING SOFTWARE DEFINED NETWORKING CONTROLLER AVAILABILITY USING MACHINE LEARNING TECHNIQUES
A method of managing a controller of a software defined networking (SDN) network is implemented by a computing device in the SDN network. The method includes receiving status information for the controller, receiving usage information for an operating environment of the controller, generating at least one failure prediction for the controller based on the received status information, and outputting prediction information for the at least one failure prediction.
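The predict-and-output step can be sketched as below. A trivial threshold check stands in for whatever machine-learning model the method actually uses, and the field names (`cpu`, `heap`, `request_rate`, `capacity`) are invented for the example.

```python
# Hedged sketch: generate failure predictions for an SDN controller from
# status information and operating-environment usage information.
def predict_controller_failure(status, usage, cpu_limit=0.9, heap_limit=0.85):
    """Return a list of (prediction, observed value) pairs; empty list
    means no failure is predicted. Thresholds are illustrative."""
    predictions = []
    if status.get("cpu", 0.0) > cpu_limit:
        predictions.append(("cpu_exhaustion", status["cpu"]))
    if status.get("heap", 0.0) > heap_limit:
        predictions.append(("memory_pressure", status["heap"]))
    if usage.get("request_rate", 0) > usage.get("capacity", float("inf")):
        predictions.append(("overload", usage["request_rate"]))
    return predictions
```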
Datapath load distribution for a RIC
To provide a low latency near RT RIC, some embodiments separate the RIC's functions into several different components that operate on different machines (e.g., execute on VMs or Pods) operating on the same host computer or different host computers. Some embodiments also provide high-speed interfaces between these machines. Some or all of these interfaces operate in a non-blocking, lockless manner in order to ensure that critical near RT RIC operations (e.g., datapath processes) are not delayed due to multiple requests causing one or more components to stall. In addition, each of these RIC components also has an internal architecture that is designed to operate in a non-blocking manner so that no one process of a component can block the operation of another process of the component. All of these low latency features allow the near RT RIC to serve as a high-speed I/O between the E2 nodes and the xApps.
STITCHING MULTIPLE WIDE AREA NETWORKS TOGETHER
The present application relates to communications between a partner network and a wide area network (WAN). The partner network and WAN may exchange representations of the respective networks including a delay profile for the partner network. The WAN receives a network delay profile for multiple virtual network entities within the partner network. The multiple virtual network entities include at least a plurality of peering locations with the WAN. The WAN determines a path from the partner network through the WAN via a selected peering location of the plurality of peering locations with the WAN to a destination based on at least the network delay profile. The WAN deploys a policy for an agent within the partner network. The policy identifies traffic for the destination to route through the WAN via the selected peering location. The WAN routes traffic from the selected peering location to the destination along the path.
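The path-determination step can be sketched as a delay minimization over peering locations. The delay-profile and WAN-delay structures below are assumed formats for illustration; the abstract does not define how the profiles are encoded.

```python
# Illustrative sketch: pick the peering location that minimizes total
# delay from the partner network, through the WAN, to the destination.
def select_peering_location(delay_profile, wan_delays, destination):
    """delay_profile: {peering location: delay from the partner network}
    wan_delays: {(peering location, destination): delay across the WAN}
    Returns (best location, total delay along the resulting path)."""
    best, best_total = None, float("inf")
    for loc, partner_delay in delay_profile.items():
        total = partner_delay + wan_delays.get((loc, destination), float("inf"))
        if total < best_total:
            best, best_total = loc, total
    return best, best_total
```

A deployed policy would then direct the partner-network agent to route traffic for the destination through the selected location.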