H04L41/5025

NF service consumer restart detection using direct signaling between NFs

Systems and methods for detecting, e.g., that a Network Function (NF) service consumer in a core network of a cellular communications system has restarted are disclosed. In some embodiments, a method of operation of an NF service consumer in a core network of a cellular communications system comprises sending, to an NF service producer, a message comprising information related to a unit of the NF service consumer.
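The restart-detection signaling described above can be sketched as follows. This is a minimal illustration, not the patented method: it assumes the "information related to a unit" is a recovery timestamp that the producer compares against the last value it saw, and all class, method, and identifier names are hypothetical.

```python
class NfServiceProducer:
    """Illustrative NF service producer that detects consumer restarts
    by tracking a per-consumer recovery timestamp (assumed payload)."""

    def __init__(self):
        # consumer_id -> last recovery timestamp received from that consumer
        self._recovery_ts = {}

    def on_message(self, consumer_id, recovery_ts):
        """Handle a message from an NF service consumer.

        Returns True if the consumer appears to have restarted, i.e. its
        reported recovery timestamp is newer than the one previously seen.
        """
        last = self._recovery_ts.get(consumer_id)
        self._recovery_ts[consumer_id] = recovery_ts
        return last is not None and recovery_ts > last
```

A producer could use such a signal to release resources (e.g., stale sessions) bound to the pre-restart incarnation of the consumer.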

Determining optimum software update transmission parameters

Optimum software update transmission parameters are determined and used for transmitting a software update from a host to servers of a computer network. The software update is transmitted while the servers are live and required to meet certain quality of service requirements for tenants of the computer network. Transmission parameters for transmitting the software update are adjusted and updated based on service performance data. Based on iterative adjustments, optimum transmission parameters may be determined. Additionally or alternatively, machine learning is used to generate a model that determines predicted optimum transmission parameters. The predicted optimum transmission parameters may be initially used for transmitting a software update, while the transmission parameters continue to be adjusted throughout transmission.
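The iterative adjustment loop described above can be sketched as a single feedback step: back off the transmission rate when tenant service performance breaches its quality-of-service target, and ramp up otherwise. This is a simplified assumption-laden sketch (the patent does not specify the control law); the function name, the multiplicative step, and the latency-based metric are all illustrative. A learned model, as the abstract suggests, could supply the initial rate that this loop then refines.

```python
def adjust_rate(rate_mbps, observed_latency_ms, sla_latency_ms,
                step=0.2, min_rate=1.0, max_rate=100.0):
    """One iteration of transmission-parameter adjustment (illustrative).

    Multiplicatively decreases the update-transfer rate when observed
    tenant latency exceeds the SLA target, increases it otherwise, and
    clamps the result to a permitted range.
    """
    if observed_latency_ms > sla_latency_ms:
        rate_mbps *= (1.0 - step)   # tenants are suffering: back off
    else:
        rate_mbps *= (1.0 + step)   # headroom available: transmit faster
    return max(min_rate, min(max_rate, rate_mbps))
```

Iterating this step against live service performance data converges toward a rate near the largest value that still satisfies the tenants' targets.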

5G admission by verifying slice SLA guarantees

In a 5G network, a slice controller is arranged to dynamically configure a radio access network (RAN) by allocating physical radio resources into RAN slices by making predictions of channel state information (CSI) for user equipment (UE) executing applications that make connectivity requests for admission to particular identified slices. The slice controller grants or denies admission requests based on the predicted CSI to ensure that applicable service level agreement (SLA) guarantees are satisfied for traffic across all the RAN slices. Each time new admission requests are received from applications, the slice controller determines whether a suitable RAN configuration exists that will enable SLA guarantees for the slices to continue to be satisfied for the current traffic while also meeting the SLA guarantees applicable to the new admission request.
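The admission check described above can be sketched as a feasibility test over physical radio resources: translate each slice's SLA rate into the physical resource blocks (PRBs) it needs under the predicted CSI, and admit a new request only if the total fits the RAN's budget. This is a deliberately simplified model (the patent describes a richer controller); the per-PRB rate abstraction and all names are assumptions.

```python
import math

def prbs_needed(sla_rate_mbps, predicted_rate_per_prb_mbps):
    """PRBs required to guarantee an SLA rate given CSI-predicted
    per-PRB throughput (illustrative model)."""
    return math.ceil(sla_rate_mbps / predicted_rate_per_prb_mbps)

def admit(current_slices, new_slice, prb_budget):
    """Grant admission only if every existing slice's SLA and the new
    request's SLA can simultaneously be met within the PRB budget.

    Each slice is an (sla_rate_mbps, predicted_rate_per_prb_mbps) pair.
    """
    total = sum(prbs_needed(sla, rate) for sla, rate in current_slices)
    total += prbs_needed(*new_slice)
    return total <= prb_budget
```

Because the check re-evaluates all slices together, admitting a new request can never silently break the SLA guarantees of already-admitted traffic.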

Selecting low priority pods for guaranteed runs

Service assurance is provided. Based on analysis of historical and resource information, a low priority pod in an orchestration platform is identified as one that is to be evicted due to a predicted peak load period of a high priority service. In response to receiving an input from a user who was notified regarding the eviction, the low priority service corresponding to that pod is marked as an assured service for a guaranteed run. Prior to eviction of the low priority pod from a first host node, the pod is provisioned on a second host node.
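The notify/mark/provision sequence above can be sketched as a small planning step. This is only an illustration of the control flow, not the patented mechanism; the node labels, action strings, and function name are hypothetical.

```python
def handle_predicted_peak(pod, nodes, user_confirms):
    """Plan actions for a low-priority pod ahead of a predicted peak
    of a high-priority service (illustrative).

    If the notified user confirms, the pod's service is marked as
    assured and the pod is provisioned on a standby node *before*
    being evicted from its current node, so the run is guaranteed.
    """
    plan = ["notify_user"]
    if user_confirms:
        plan += [
            "mark_assured",
            f"provision_on:{nodes['standby']}",   # stand up replica first
            f"evict_from:{nodes['primary']}",     # then free the primary
        ]
    else:
        plan += [f"evict_from:{nodes['primary']}"]
    return plan
```

The ordering is the point: provisioning precedes eviction, so the low-priority service never has zero running instances during the handover.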

Mechanism for monitoring and alerts of computer systems applications

A system including at least one computer and code executable thereby for implementing a mechanism for monitoring the performance of applications of an application chain. The system includes an arrangement forming a measuring repository that measures, by application and by period of the application chain, levels of use of application resources during periods of performance degradation, storing these levels of use in a memory. The arrangement is further operable to: establish a repository of use data by defining and storing in at least one memory, by resource and by application, thresholds of acceptable performance for the levels of use in the measuring repository; constitute a module that categorizes performance problems as a function of the measuring and use repositories; and implement an alert mechanism that fires when the monitoring mechanism detects a performance problem of the applications and again when the problem is resolved.
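The per-application, per-resource thresholding with alerts on both breach and resolution can be sketched as follows. This is a minimal stateful monitor, assuming a simple upper-bound threshold per (application, resource) pair; the class and return values are illustrative, not from the patent.

```python
class Monitor:
    """Illustrative monitor: alerts once when a resource-usage threshold
    is breached, and once more when usage returns below it."""

    def __init__(self, thresholds):
        # (application, resource) -> maximum acceptable level of use
        self.thresholds = thresholds
        self.in_problem = set()  # keys currently in a breached state

    def observe(self, app, resource, usage):
        key = (app, resource)
        limit = self.thresholds[key]
        if usage > limit and key not in self.in_problem:
            self.in_problem.add(key)
            return "ALERT"       # problem detected: raise alert
        if usage <= limit and key in self.in_problem:
            self.in_problem.remove(key)
            return "RESOLVED"    # problem cleared: raise resolution alert
        return None              # no state change: stay silent
```

Tracking the breached state is what prevents the monitor from re-alerting on every sample while a known problem persists.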

Predictive overlay network architecture

The predictive overlay network architecture of the present invention improves the performance of applications distributing digital content among nodes of an underlying network such as the Internet by establishing and reconfiguring overlay network topologies over which associated content items are distributed. The present invention addresses not only frequently changing network congestion, but also interdependencies among nodes and links of prospective overlay network topologies. The present invention provides a prediction engine that monitors metrics and predicts the relay capacity of individual nodes and links (as well as demand of destination nodes) over time to reflect the extent to which the relaying of content among the nodes of an overlay network will be impacted by (current or future) underlying network congestion. The present invention further provides a topology selector that addresses node and link interdependencies while redistributing excess capacity to determine an overlay network topology that satisfies application-specific performance criteria.
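The topology-selector idea can be sketched as choosing, among candidate overlay topologies, one whose predicted bottleneck relay capacity satisfies the application's required rate. This is a heavily simplified stand-in for the invention's selector (it ignores node interdependencies and capacity redistribution); the link-capacity map and function names are assumptions for illustration.

```python
def bottleneck(topology, predicted_capacity):
    """Predicted bottleneck of a topology, given per-link capacity
    forecasts; topology is a list of directed (src, dst) links."""
    return min(predicted_capacity[link] for link in topology)

def select_topology(candidates, predicted_capacity, required_rate):
    """Pick a candidate overlay topology whose predicted bottleneck
    meets the application's performance criterion, preferring the
    one with the most headroom. Returns None if none is feasible."""
    feasible = [t for t in candidates
                if bottleneck(t, predicted_capacity) >= required_rate]
    if not feasible:
        return None
    return max(feasible, key=lambda t: bottleneck(t, predicted_capacity))
```

Re-running the selector as the prediction engine updates its capacity forecasts is what reconfigures the overlay ahead of (rather than in reaction to) underlying network congestion.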
