Patent classifications
H04L41/5006
SERVICE LEVEL OBJECTIVE PLATFORM
Techniques for generating and monitoring service level objectives (SLOs) are disclosed. The techniques include an SLO platform performing: storing a first SLO definition of a first SLO including a first error budget for a first metric associated with a first service; storing a second SLO definition of a second SLO including a second error budget for a second metric associated with a second service; obtaining first telemetry data from a first data source associated with the first service; obtaining second telemetry data from a second data source associated with the second service; monitoring the first SLO at least by computing the first metric based on the first telemetry data and evaluating the first metric against the first error budget; and monitoring the second SLO at least by computing the second metric based on the second telemetry data and evaluating the second metric against the second error budget.
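The monitoring flow the abstract describes — store an SLO definition with an error budget, compute a metric from telemetry, evaluate it against the budget — can be sketched roughly as follows. All names, fields, and the boolean-sample telemetry format are illustrative assumptions, not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class SLODefinition:
    """Hypothetical SLO record: a service, a metric name, and an error budget."""
    service: str
    metric: str
    error_budget: float  # fraction of requests allowed to fail, e.g. 0.01

def compute_error_rate(telemetry):
    """Compute the failure fraction from a list of per-request success flags."""
    if not telemetry:
        return 0.0
    failures = sum(1 for ok in telemetry if not ok)
    return failures / len(telemetry)

def monitor(slo, telemetry):
    """Evaluate the computed metric against the SLO's error budget."""
    rate = compute_error_rate(telemetry)
    return {"service": slo.service, "error_rate": rate,
            "within_budget": rate <= slo.error_budget}
```

The two-service case in the abstract would simply run `monitor` once per (SLO definition, telemetry source) pair.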
Provenance audit trails for microservices architectures
An apparatus to facilitate provenance audit trails for microservices architectures is disclosed. The apparatus includes one or more processors to: obtain, by a microservice of a service hosted in a datacenter, provisioned credentials for the microservice based on an attestation protocol; generate, for a task performed by the microservice, provenance metadata for the task, the provenance metadata including identification of the microservice, operating state of at least one of a hardware resource or a software resource used to execute the microservice and the task, and operating state of a sidecar of the microservice during the task; encrypt the provenance metadata with the provisioned credentials for the microservice; and record the encrypted provenance metadata in a local blockchain of provenance metadata maintained for the hardware resource executing the task and the microservice.
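As a minimal sketch of the record-and-chain step, the following hash-links each provenance record to its predecessor, as in the local blockchain the abstract mentions. HMAC signing stands in here for encryption with the attestation-provisioned credentials; all field names are illustrative:

```python
import hashlib
import hmac
import json

def make_provenance_record(task_id, microservice, hw_state, sidecar_state):
    """Provenance metadata for one task (field names are illustrative)."""
    return {"task": task_id, "microservice": microservice,
            "hw_state": hw_state, "sidecar_state": sidecar_state}

def append_to_chain(chain, record, key):
    """Seal the record with the provisioned key (HMAC stands in for the
    patent's encryption) and link it to the previous block's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    payload = json.dumps(record, sort_keys=True)
    seal = hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()
    block_hash = hashlib.sha256(
        (prev_hash + payload + seal).encode()).hexdigest()
    chain.append({"prev": prev_hash, "record": record,
                  "seal": seal, "hash": block_hash})
    return chain
```

Because each block's hash covers the previous hash, tampering with any earlier record breaks the link, which is what makes the per-resource chain auditable.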
Container-based network functions virtualization platform
The present invention relates to a container-based network function virtualization (NFV) platform comprising at least one master node and at least one slave node. Based on interference awareness, the master node is configured to assign container-based network functions (NFs) to each slave node in a master-slave-model-based distributed computing system having at least two slave nodes, in a manner that accounts for the characteristics of the NFs to be assigned, information about their load flows, the communication overhead between individual slave nodes, the processing performance within individual slave nodes, and the load status within individual slave nodes.
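An interference-aware placement decision of this kind reduces to scoring each slave node on the factors the abstract lists and picking the cheapest. The cost weights and node fields below are illustrative assumptions, not the patent's actual model:

```python
def assign_nf(nf_demand, nodes):
    """Pick the slave node with the lowest combined cost: current load,
    communication overhead, and the NF's demand relative to node capacity.
    The 0.5 weight on communication overhead is purely illustrative."""
    def cost(node):
        return (node["load"]
                + 0.5 * node["comm_overhead"]
                + nf_demand / node["capacity"])
    return min(nodes, key=cost)
```

A real scheduler would also model interference between co-located NFs on the same node; this sketch only shows the shape of the per-node scoring.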
Inferring quality of experience (QoE) based on choice of QoE inference model
In one example, a location of a potential bottleneck of network traffic in a network is identified. Based on the location of the potential bottleneck, a first QoE inference model is selected from a plurality of respective QoE inference models. The respective QoE inference models are each trained to infer a respective QoE of the network traffic based on one or more respective network traffic metrics generated by monitoring the network traffic at a respective location in the network. One or more first network traffic metrics of the one or more respective network traffic metrics are generated by monitoring the network traffic at a first respective location. The one or more first network traffic metrics are provided to the first QoE inference model to infer a first respective QoE.
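The selection step — map the bottleneck location to the model trained for that location, then feed it the metrics collected there — might be sketched like this. The location keys, metric names, and linear "models" are stand-in assumptions for whatever trained models the example contemplates:

```python
def select_model(bottleneck_location, models):
    """Choose the QoE inference model trained for the bottleneck's location."""
    return models[bottleneck_location]

# Stand-in "models": callables mapping location-specific traffic
# metrics to a QoE score on an illustrative 1-5 scale.
models = {
    "access": lambda m: 5.0 - 2.0 * m["loss"],
    "core":   lambda m: 5.0 - 0.01 * m["latency_ms"],
}

def infer_qoe(bottleneck_location, metrics, models):
    """Infer QoE from metrics gathered at the bottleneck's location."""
    model = select_model(bottleneck_location, models)
    return model(metrics)
```

The point of the dispatch is that metrics measured at the access edge and metrics measured in the core carry different signals, so each location gets its own trained model.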
STITCHING MULTIPLE WIDE AREA NETWORKS TOGETHER
The present application relates to communications between a partner network and a wide area network (WAN). The partner network and WAN may exchange representations of the respective networks including a delay profile for the partner network. The WAN receives a network delay profile for multiple virtual network entities within the partner network. The multiple virtual network entities include at least a plurality of peering locations with the WAN. The WAN determines a path from the partner network through the WAN via a selected peering location of the plurality of peering locations with the WAN to a destination based on at least the network delay profile. The WAN deploys a policy for an agent within the partner network. The policy identifies traffic for the destination to route through the WAN via the selected peering location. The WAN routes traffic from the selected peering location to the destination along the path.
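Peering-location selection based on the exchanged delay profile can be sketched as minimizing the end-to-end delay: partner delay to each peering point plus WAN delay from that point to the destination. The dictionary shapes are illustrative, not the application's actual data model:

```python
def select_peering(delay_profile, wan_delays, destination):
    """Choose the peering location minimizing partner-network delay to
    the peering point plus WAN delay from that point to the destination.
    delay_profile: {peering_location: partner_delay_ms}
    wan_delays: {(peering_location, destination): wan_delay_ms}"""
    return min(delay_profile,
               key=lambda p: delay_profile[p] + wan_delays[(p, destination)])
```

The deployed policy would then direct the partner-side agent to route traffic for that destination through the selected peering location.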