H04L43/091

Intelligent lifecycle management of analytic functions for an IoT intelligent edge with a hypergraph-based approach

The disclosure relates to a framework for dynamic management of analytic functions, such as data processors and machine-learned (“ML”) models, for an Internet of Things intelligent edge. The framework addresses management of the lifecycle of the analytic functions from creation to execution in production. The end user can seamlessly check in an analytic function, version it, deploy it, evaluate model performance, and deploy refined versions into the data flows at the edge or core dynamically for existing and new endpoints. The framework comprises a hypergraph-based model as a foundation and may use a microservices architecture, with the ML infrastructure and models deployed as containerized microservices.
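A minimal sketch of what a hypergraph-based lifecycle model might look like, assuming an illustrative representation in which each hyperedge links one checked-in version of an analytic function to the whole set of endpoints it is deployed to (all class and function names here are hypothetical, not from the disclosure):

```python
from dataclasses import dataclass

# A hyperedge connects one analytic-function version to many endpoints at once,
# which is what distinguishes a hypergraph from an ordinary pairwise graph.
@dataclass(frozen=True)
class Hyperedge:
    function: str         # analytic function name
    version: str          # checked-in version label
    endpoints: frozenset  # edge/core endpoints covered by this deployment

class LifecycleHypergraph:
    def __init__(self):
        self.edges = []  # deployment history, in order

    def deploy(self, function, version, endpoints):
        # Check in / deploy a (possibly refined) version to a set of endpoints.
        self.edges.append(Hyperedge(function, version, frozenset(endpoints)))

    def active_version(self, function, endpoint):
        # The most recent deployed version whose hyperedge covers this endpoint.
        for e in reversed(self.edges):
            if e.function == function and endpoint in e.endpoints:
                return e.version
        return None

g = LifecycleHypergraph()
g.deploy("anomaly-model", "v1", {"edge-1", "edge-2"})
g.deploy("anomaly-model", "v2", {"edge-2", "core-1"})  # refined version rolls out
print(g.active_version("anomaly-model", "edge-1"))
print(g.active_version("anomaly-model", "edge-2"))
```

The hyperedge history makes version rollout to overlapping endpoint sets queryable per endpoint, which is one plausible reading of "deploy refined versions ... for existing and new endpoints."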

Software defined network lifecycle tracking and management

A device in an evolved packet core (EPC) which includes a processor and a memory. The processor effectuates operations including receiving, from one or more devices residing within a customer premise equipment (CPE) portion of a telecommunications network, sensor data associated with one or more customers and, in response to receiving the sensor data, generating a data request for an ecosystem status for the CPE portion of the telecommunications network. The processor further effectuates operations including obtaining customer information for the one or more customers and creating an analytics environment, using the customer information, for the one or more customers. The processor further effectuates operations including performing, within the analytics environment, analytics on the sensor data to determine a state of the CPE portion of the telecommunications network for the one or more customers and, in response to performing analytics on the sensor data, optimizing the telecommunications network.
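A toy sketch of the claimed operation sequence, under the assumption that "optimizing" means emitting a per-customer action from the inferred CPE state (all field names, thresholds, and actions below are illustrative, not from the claim):

```python
def process_sensor_data(sensor_data, customer_db):
    # 1. In response to sensor data, build a data request for CPE ecosystem status.
    data_request = {"scope": "CPE",
                    "customers": sorted({r["customer"] for r in sensor_data})}

    # 2. Obtain customer information and create a per-customer analytics environment.
    env = {c: {"info": customer_db[c], "readings": []}
           for c in data_request["customers"]}
    for r in sensor_data:
        env[r["customer"]]["readings"].append(r["signal_db"])

    # 3. Perform analytics within the environment to determine the CPE state.
    actions = {}
    for c, ctx in env.items():
        avg = sum(ctx["readings"]) / len(ctx["readings"])
        state = "degraded" if avg < -70 else "healthy"
        # 4. Optimize the network in response to the analytics result.
        actions[c] = "boost_signal" if state == "degraded" else "no_change"
    return actions

customer_db = {"alice": {"tier": "gold"}, "bob": {"tier": "silver"}}
readings = [
    {"customer": "alice", "signal_db": -75},
    {"customer": "alice", "signal_db": -78},
    {"customer": "bob", "signal_db": -55},
]
actions = process_sensor_data(readings, customer_db)
print(actions)
```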

TRAFFIC FLOW MONITORING
20170373950 · 2017-12-28 ·

A method is provided comprising monitoring, in a network node, a user plane traffic flow transmitted in a network, to perform measurements on selected data packets. Based on the monitoring, the network node collects, in a correlated way, one or more of user measurement data, application measurement data, quality of experience measurement data, network side quality of service measurement data and a set of key performance indicators. Based on the collecting, the network node generates real-time correlated insight into customer experience.
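One possible shape for the correlated collection is a single record per flow that joins the measurement categories, from which a real-time experience indicator can be derived; the sketch below assumes hypothetical category names and a toy quality rule:

```python
from collections import defaultdict

class FlowMonitor:
    def __init__(self):
        self.records = defaultdict(dict)  # flow_id -> correlated measurements

    def observe(self, flow_id, category, measurement):
        # category: e.g. "user", "application", "qoe", "qos", "kpi"
        self.records[flow_id].setdefault(category, {}).update(measurement)

    def insight(self, flow_id):
        r = self.records[flow_id]
        # Toy rule: experience is poor when QoE stalls and network loss co-occur,
        # which only a correlated (joined) view can detect.
        poor = (r.get("qoe", {}).get("stalls", 0) > 0
                and r.get("qos", {}).get("loss_pct", 0) > 1)
        return {"flow": flow_id, "experience": "poor" if poor else "good", **r}

m = FlowMonitor()
m.observe("f1", "qoe", {"stalls": 2})
m.observe("f1", "qos", {"loss_pct": 3.5})
print(m.insight("f1")["experience"])
```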

INTELLIGENT CONFIGURATION DISCOVERY TECHNIQUES

At a configuration discovery service, a unique service-side identifier is generated for a configuration item based on analysis of a data set obtained from a first data source. A determination is made that a second data set, which does not contain the service-side identifier and is obtained from a different data source, also includes information pertaining to the same configuration item. A coalesced configuration record for the configuration item is prepared. The coalesced configuration record is stored at a repository and used to respond to a programmatic query.
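A minimal sketch of the coalescing step, assuming (hypothetically) that the service-side identifier is derived from stable attributes of the first data set and that the second data set is matched to the same configuration item via a shared natural key, the MAC address here:

```python
import hashlib

def service_side_id(record):
    # Deterministic service-side identifier from stable attributes (illustrative).
    key = f'{record["hostname"]}|{record["mac"]}'
    return hashlib.sha256(key.encode()).hexdigest()[:12]

def coalesce(agent_data, scan_data):
    # scan_data lacks the service-side id; match on the shared natural key.
    if agent_data["mac"] != scan_data["mac"]:
        return None  # not the same configuration item
    merged = {**scan_data, **agent_data}        # agent data wins on conflicts
    merged["id"] = service_side_id(agent_data)  # attach the service-side id
    return merged

agent = {"hostname": "db-01", "mac": "aa:bb:cc:00:11:22", "os": "linux"}
scan = {"mac": "aa:bb:cc:00:11:22", "open_ports": [5432]}
record = coalesce(agent, scan)
print(record["id"], record["open_ports"])
```

The coalesced record could then be stored in a repository and returned for programmatic queries keyed by either the service-side id or the natural key.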

ANALYZING CONTENTION DATA AND FOLLOWING RESOURCE BLOCKERS TO FIND ROOT CAUSES OF COMPUTER PROBLEMS
20170373925 · 2017-12-28 ·

The present disclosure relates to methods, processing systems, and computer program products for analyzing contention data and following resource blockers to find root causes of computer problems. The method may include: detecting one or more resource waiters in a computer system; iteratively determining whether each resource blocker is itself a resource waiter, until a final resource blocker that is not waiting for another resource is found; determining whether the final resource blocker is caused by a resource blocker in a different computer system and, if so, iteratively executing the method on that computer system until its final resource blocker is found; determining whether the final resource blocker has more than one symptom, each of which may or may not be a contention problem; selecting the symptom with the highest priority as the root cause of the computer problems; and generating, using the processor, a report of the root causes of the computer problems.
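The core traversal is a walk along the waits-for chain until a node that blocks others but waits for nothing is reached, followed by a priority-based symptom selection. A minimal single-system sketch, with illustrative names and a cycle guard (the cross-system recursion of the claim is omitted):

```python
def final_blocker(waiter, blocked_by):
    # blocked_by maps each waiter to the resource holder blocking it.
    seen = {waiter}
    node = waiter
    while node in blocked_by and blocked_by[node] not in seen:  # stop on cycles
        node = blocked_by[node]
        seen.add(node)
    return node  # waits for nothing (or closes a cycle): the final blocker

def root_cause(waiter, blocked_by, symptoms):
    fb = final_blocker(waiter, blocked_by)
    # Select the highest-priority symptom of the final blocker as the root cause.
    return max(symptoms.get(fb, []), key=lambda s: s["priority"], default=None)

blocked_by = {"txn-7": "txn-3", "txn-3": "backup-job"}  # waiter -> blocker
symptoms = {"backup-job": [
    {"name": "slow disk", "priority": 2},
    {"name": "lock held too long", "priority": 5},
]}
cause = root_cause("txn-7", blocked_by, symptoms)
print(cause["name"])
```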

ORCHESTRATING OPERATIONS AT APPLICATIONS
20170373958 · 2017-12-28 ·

Aspects extend to methods, systems, and computer program products for orchestrating operations at applications. Aspects of the invention use a side channel (instrumentation messages generated by a service) as a mechanism to discover when a service has completed an activity. Use of a side channel allows a (e.g., client) application to create behaviors similar to service side interfaces and/or protocols without modifying a service. Accordingly, functionality can be added incrementally, safely, and cheaply without having to revise an underlying implementation. In one aspect, an instrumentation collector and publisher (ICP) facilitates the synchronization between services and an application. ICP is a scalable infrastructure that provides applications a way to interact with servers through instrumentation.
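A minimal in-process sketch of the ICP idea, assuming a hypothetical pub/sub shape: the service's instrumentation messages are collected unmodified and republished to subscribed applications, which can thereby detect activity completion without the service changing:

```python
import queue

class InstrumentationCollectorPublisher:
    def __init__(self):
        self._subscribers = {}  # activity name -> list of subscriber queues

    def subscribe(self, activity):
        # An application subscribes to completion events for an activity.
        q = queue.Queue()
        self._subscribers.setdefault(activity, []).append(q)
        return q

    def collect(self, message):
        # A raw instrumentation record from the service (the side channel);
        # it is republished as-is to every subscriber of that activity.
        for q in self._subscribers.get(message["activity"], []):
            q.put(message)

icp = InstrumentationCollectorPublisher()
done_events = icp.subscribe("index-rebuild")
icp.collect({"activity": "index-rebuild", "status": "completed"})
msg = done_events.get_nowait()
print(msg["status"])
```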

SLA packet steering in network service function chaining
11689431 · 2023-06-27 ·

This disclosure describes techniques that include adding information to a network service header in packets being processed by a set of compute nodes in a service chain. The information added to the network service header can be used during selection of the next hop in a service chain, and may be used to help ensure that service level agreements (SLA) are met with respect to one or more metrics. In one example, this disclosure describes a method that includes receiving, by a service complex having a plurality of service nodes, a packet associated with a service chain representing a series of services to be performed on the packet by one or more of the plurality of service nodes; identifying, by the service complex, one or more service chain constraints associated with the service chain; and modifying the packet, by the service complex, to include information about the service chain constraints.
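A toy sketch of the two steps named in the abstract: writing an SLA constraint into the packet's network service header metadata, then using that constraint during next-hop selection. The dictionary packet model and the single latency-budget constraint are illustrative assumptions:

```python
def add_sla_metadata(packet, max_latency_ms):
    # Modify the packet to carry the service chain constraint in NSH metadata.
    packet.setdefault("nsh", {})["metadata"] = {"max_latency_ms": max_latency_ms}
    return packet

def select_next_hop(packet, candidates):
    # candidates: list of (service_node_name, measured_latency_ms).
    budget = packet["nsh"]["metadata"]["max_latency_ms"]
    eligible = [(n, lat) for n, lat in candidates if lat <= budget]
    if not eligible:
        return None  # no candidate node can meet the SLA constraint
    return min(eligible, key=lambda c: c[1])[0]  # lowest-latency eligible node

pkt = add_sla_metadata({"payload": b"..."}, max_latency_ms=10)
hop = select_next_hop(pkt, [("fw-1", 25), ("fw-2", 4), ("fw-3", 8)])
print(hop)
```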

Network capacity planning based on application performance

Network capacity planning based on application performance can include detecting a data session occurring on a network, identifying an application being used for the data session, where the application can include a video application, determining if a performance model for the video application exists, the performance model describing performance metrics and quality of service events associated with the video application, determining, based on the performance model, a capacity planning trigger for the video application, where the capacity planning trigger can include increasing network capacity based on the needs and a quality of service associated with the video application during the data session, and generating a command that, when executed by a network entity, causes the network entity to implement the capacity planning trigger on the network.
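One way to read the claimed flow is: look up a per-application performance model, compare observed session metrics against it, and emit an executable capacity command only on a breach. The sketch below uses hypothetical model fields and thresholds:

```python
# Hypothetical per-application performance models describing performance
# metrics and quality-of-service event thresholds.
PERFORMANCE_MODELS = {
    "video": {"min_throughput_mbps": 5.0, "max_qos_events": 3},
}

def capacity_planning_trigger(app, session_metrics):
    model = PERFORMANCE_MODELS.get(app)
    if model is None:
        return None  # no performance model exists for this application
    breach = (session_metrics["throughput_mbps"] < model["min_throughput_mbps"]
              or session_metrics["qos_events"] > model["max_qos_events"])
    if not breach:
        return None
    # The command a network entity would execute to increase capacity.
    return {"action": "increase_capacity", "application": app,
            "target_throughput_mbps": model["min_throughput_mbps"]}

cmd = capacity_planning_trigger("video",
                                {"throughput_mbps": 2.1, "qos_events": 5})
print(cmd["action"])
```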
