Patent classifications
H04L41/5019
SERVICE LEVEL OBJECTIVE PLATFORM
Techniques for generating and monitoring service level objectives (SLOs) are disclosed. The techniques include an SLO platform performing: storing a first SLO definition of a first SLO including a first error budget for a first metric associated with a first service; storing a second SLO definition of a second SLO including a second error budget for a second metric associated with a second service; obtaining first telemetry data from a first data source associated with the first service; obtaining second telemetry data from a second data source associated with the second service; monitoring the first SLO at least by computing the first metric based on the first telemetry data and evaluating the first metric against the first error budget; and monitoring the second SLO at least by computing the second metric based on the second telemetry data and evaluating the second metric against the second error budget.
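The error-budget evaluation described above can be sketched roughly as follows; the function name, the event-ratio formulation of the metric, and the example targets are illustrative assumptions, not details from the patent.

```python
def evaluate_slo(good_events, total_events, target):
    """Evaluate one SLO: compare the observed success ratio against the
    target and report how much of the error budget has been consumed."""
    if total_events == 0:
        return {"compliant": True, "budget_consumed": 0.0}
    error_budget = 1.0 - target                  # allowed failure fraction
    observed_error = 1.0 - good_events / total_events
    consumed = observed_error / error_budget if error_budget else float("inf")
    return {"compliant": observed_error <= error_budget,
            "budget_consumed": consumed}

# Two independent SLOs, each computed from its own telemetry source.
slo_a = evaluate_slo(good_events=9990, total_events=10000, target=0.999)
slo_b = evaluate_slo(good_events=980, total_events=1000, target=0.99)
```

Here `slo_a` has consumed exactly its budget and remains compliant, while `slo_b` has consumed twice its budget and is in violation.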
Coalescing publication events based on subscriber tolerances
Systems and methods for coalescing and/or aligning publications in a publication/subscription architecture to reduce the number of publication events and to improve the performance of microservices in a communications network are provided. A method, according to one implementation, includes the step of obtaining client-based tolerance input with respect to a plurality of subscriptions requested by a plurality of clients in a publication/subscription system. Based on the client-based tolerance input, the method also includes the step of adjusting the timing of publications to reduce the phase variability of the plurality of subscriptions.
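One way to picture the coalescing step: each client states a desired publication period and a tolerance, and subscriptions whose acceptance windows overlap are merged onto a shared period so a single publication event serves several clients. The greedy interval-merging approach below is a hypothetical sketch, not the patented method.

```python
def coalesce_periods(subscriptions):
    """Group subscriptions onto shared publication periods.

    Each subscription is (desired_period, tolerance): the client accepts
    any period in [desired - tolerance, desired + tolerance].  Greedily
    intersect overlapping acceptance windows so that one publication
    timing serves every client in the group.
    """
    windows = sorted((d - t, d + t) for d, t in subscriptions)
    periods = []
    lo, hi = windows[0]
    for w_lo, w_hi in windows[1:]:
        if w_lo <= hi:                       # windows overlap: coalesce
            lo, hi = max(lo, w_lo), min(hi, w_hi)
        else:
            periods.append((lo + hi) / 2)    # publish at the midpoint
            lo, hi = w_lo, w_hi
    periods.append((lo + hi) / 2)
    return periods

# Clients wanting 10s±2, 11s±1, and 30s±5 collapse to two publications.
periods = coalesce_periods([(10, 2), (11, 1), (30, 5)])
```

Fewer distinct publication periods means less phase variability across subscriptions and fewer wake-ups for the publishing microservice.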
GENERATING LONG-TERM NETWORK CHANGES FROM SLA VIOLATIONS
In one embodiment, a device obtains information regarding temporary routing patches applied to a network. Each temporary routing patch implements a routing change in the network for a specified amount of time to avoid or mitigate a service level agreement violation. The device evaluates, using the information regarding the temporary routing patches applied to the network, a plurality of replay scenarios for the network. The device determines, based on the plurality of replay scenarios, a long-term configuration change for the network. The device provides an indication of the long-term configuration change for display.
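A minimal reading of the evaluation step can be sketched as follows: tally how often each temporary patch recurs across the replayed history, and promote routes that are repeatedly patched for the same reason into long-term change candidates. The data shapes and threshold here are assumptions for illustration only.

```python
from collections import Counter

def recommend_long_term(patches, min_recurrence=3):
    """Suggest long-term routing changes from a history of temporary patches.

    Each patch is a (route, reason) pair.  A patch that is re-applied
    repeatedly for the same route suggests a persistent problem that a
    permanent configuration change would address.
    """
    counts = Counter(route for route, _reason in patches)
    return [route for route, n in counts.items() if n >= min_recurrence]

history = [("path-A->B", "latency SLA"), ("path-A->B", "latency SLA"),
           ("path-C->D", "loss SLA"), ("path-A->B", "latency SLA")]
recommendations = recommend_long_term(history)
```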
APPLICATION SERVICE LEVEL EXPECTATION HEALTH AND PERFORMANCE
Techniques are described for monitoring application performance in a computer network. For example, a network management system (NMS) includes a memory storing path data received from a plurality of network devices, the path data reported by each network device of the plurality of network devices for one or more logical paths of a physical interface from the given network device over a wide area network (WAN). Additionally, the NMS may include processing circuitry in communication with the memory and configured to: determine, based on the path data, one or more application health assessments for one or more applications, wherein the one or more application health assessments are associated with one or more application time periods for a site, and in response to determining at least one failure state, output a notification including identification of a root cause of the at least one failure state.
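The assessment-and-notification flow above might look like the following toy sketch, where a per-period application health check over path data flags a failure state and names a simplified root cause. The loss-ratio metric, the all-paths-degraded failure condition, and the threshold are assumptions introduced for illustration.

```python
def assess_health(path_data, loss_threshold=0.02):
    """Assess per-application health for each time period from path data.

    path_data maps (application, period) -> list of per-logical-path loss
    ratios reported by network devices.  A period is a failure state when
    every logical path exceeds the loss threshold; the worst path is
    reported as a (simplified) root cause.
    """
    notifications = []
    for (app, period), losses in path_data.items():
        if losses and all(loss > loss_threshold for loss in losses):
            worst = max(range(len(losses)), key=lambda i: losses[i])
            notifications.append(
                {"application": app, "period": period,
                 "root_cause": f"path {worst} loss {losses[worst]:.0%}"})
    return notifications

data = {("voip", "09:00-10:00"): [0.05, 0.08],   # every path degraded
        ("web", "09:00-10:00"): [0.01, 0.09]}    # one healthy path remains
alerts = assess_health(data)
```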
METHODS AND SYSTEMS FOR LINE RATE PACKET CLASSIFIERS FOR PRESORTING NETWORK PACKETS ONTO INGRESS QUEUES
A network appliance can have an input port that can receive network packets at line rate, two or more ingress queues, a line rate classification circuit that can place the network packets on the ingress queues at the line rate, a packet buffer that can store the network packets, and a sub line rate packet processing circuit that can process the network packets that are stored in the packet buffer. The line rate classification circuit can place a network packet on one of the ingress queues based on the network packet's packet contents. A buffer scheduler can select network packets for processing by the sub line rate packet processing circuit based on the priority levels of the ingress queues.
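In software, the presorting idea reduces to a simple pattern: a fast classifier inspects packet contents and picks an ingress queue, and a scheduler drains queues in priority order so the slower processing stage sees urgent traffic first. The queue count, the protocol-based classification rule, and the dict packet representation below are illustrative assumptions.

```python
from collections import deque

# Ingress queues ordered by priority: index 0 is served first.
QUEUES = [deque(), deque()]

def classify(packet):
    """Stand-in for the line rate classification circuit: inspect packet
    contents and choose an ingress queue (control traffic ahead of data)."""
    return 0 if packet.get("proto") == "control" else 1

def enqueue(packet):
    QUEUES[classify(packet)].append(packet)

def schedule():
    """Buffer scheduler: hand the sub-line-rate processing stage a packet
    from the highest-priority non-empty ingress queue."""
    for queue in QUEUES:
        if queue:
            return queue.popleft()
    return None

for pkt in [{"proto": "data", "id": 1}, {"proto": "control", "id": 2}]:
    enqueue(pkt)
first = schedule()   # the control packet jumps ahead despite arriving later
```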
SYSTEM AND METHOD FOR OCCUPANCY BASED MANAGEMENT OF DISTRIBUTED SYSTEMS
Methods, systems, and devices for providing computer implemented services using managed systems are disclosed. To provide the computer implemented services, the managed systems may be deployed to a location and operate in a predetermined manner conducive to, for example, execution of applications that provide the computer implemented services. When deployed to a location, the managed systems may be housed in a managed system frame. The managed system frames may include systems to guide placement of managed systems in preferred frame units, remotely identify occupancy of the frame units, and/or secure the frame units against unexpected removal of, or insertion of, devices in the frame units.
Container-based network functions virtualization platform
The present invention relates to a container-based network function virtualization (NFV) platform comprising at least one master node and at least one slave node. Based on interference awareness, the master node is configured to assign container-based network functions (NFs) in a master-slave-model-based distributed computing system having at least two slave nodes to each said slave node, in a manner that accounts for the relations among the characteristics of the to-be-assigned NFs, information on the load flows of the to-be-assigned NFs, the communication overheads between individual slave nodes, the processing performance inside individual slave nodes, and the load statuses inside individual said slave nodes.
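A deliberately simplified placement sketch under these ideas: score each slave node by the load it would carry after accepting the NF plus the communication overhead the NF would incur there, and assign the NF to the lowest-cost node. The linear cost model, the weights, and all names are assumptions, not the patented assignment scheme.

```python
def assign_nf(nf_load, nodes, comm_overhead, alpha=1.0, beta=1.0):
    """Interference-aware placement sketch for one network function.

    nodes: {node_name: current_load}; comm_overhead: {node_name: cost of
    reaching the NF's traffic peers from that node}.  The cost weighs the
    node's post-assignment load against its communication overhead.
    """
    def cost(name):
        return alpha * (nodes[name] + nf_load) + beta * comm_overhead[name]
    return min(nodes, key=cost)

nodes = {"slave-1": 0.7, "slave-2": 0.3}
overhead = {"slave-1": 0.1, "slave-2": 0.2}
best = assign_nf(nf_load=0.2, nodes=nodes, comm_overhead=overhead)
```

Here `slave-2` wins (cost 0.7 versus 1.0 for `slave-1`): its lighter load outweighs its slightly higher communication overhead.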