H04L47/805

Usage of pre-authorized QoS

In one aspect, a device, operating in an access network that can provide a plurality of QoS levels for user data flowing to and from the device, establishes a packet data session via the access network and receives, from the access network, cost information associated with each of one or more QoS levels. The device selects, for user data for at least a first application or service, a QoS level from among the plurality of QoS levels based on the cost information. The device transmits packets carrying user data for the first application or service to the access network. The transmission includes applying a QoS treatment to the user data according to the selected QoS level.
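The selection step above can be sketched as a small policy: among the QoS levels whose cost the network advertised, pick the cheapest one that still meets the application's minimum requirement. The function name, the level-to-cost mapping, and the "higher level = better treatment" convention are illustrative assumptions, not part of the claim.

```python
# Hypothetical sketch: pick a QoS level for an application's user data
# based on cost information received from the access network.
# Data shapes and the selection policy are assumptions.

def select_qos_level(cost_info, min_level_required):
    """cost_info maps QoS level (higher = better treatment) to cost.
    Choose the cheapest level that still meets the application's
    minimum required level."""
    eligible = {lvl: cost for lvl, cost in cost_info.items()
                if lvl >= min_level_required}
    if not eligible:
        raise ValueError("no QoS level meets the requirement")
    return min(eligible, key=eligible.get)

# Example: three advertised levels, cost rising with quality.
costs = {1: 0.01, 2: 0.05, 3: 0.20}
print(select_qos_level(costs, min_level_required=2))  # -> 2
```

The device would then mark outgoing packets (for example via a QoS flow identifier) according to the returned level.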

Providing customized integration flow templates

A method and system are provided for customizing integration flow templates. The method can include monitoring user interaction with a plurality of systems external to an integration system to read data changes at the external systems, and identifying at least one event pair, wherein each event pair is between two external systems having the same data change event. The identified event pairs are filtered for inclusion in an events chain, and the external systems of the filtered event pairs are ordered in the events chain based on timestamps of the data change events. The method outputs integration flow templates, based on the ordered external systems of the event pairs, that define a flow trigger and at least one flow node.
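The pairing and ordering steps can be illustrated with a minimal sketch: find pairs of external systems that observed the same data change, then order the systems in an events chain by the timestamps of those changes. The event tuple shape and the function name are assumptions for illustration.

```python
# Illustrative sketch of the event-pair flow: identify pairs of
# external systems that saw the same data-change event, then order
# the systems in an events chain by timestamp.
from itertools import combinations

def build_events_chain(events):
    """events: list of (system, change_key, timestamp)."""
    pairs = []
    for (s1, k1, t1), (s2, k2, t2) in combinations(events, 2):
        if s1 != s2 and k1 == k2:          # same data change, two systems
            pairs.append(((s1, t1), (s2, t2)))
    # Collect each paired system with its earliest change timestamp.
    seen = {}
    for (s1, t1), (s2, t2) in pairs:
        for s, t in ((s1, t1), (s2, t2)):
            seen[s] = min(t, seen.get(s, t))
    return sorted(seen, key=seen.get)       # ordered events chain

chain = build_events_chain([
    ("CRM", "lead#7", 2), ("ERP", "lead#7", 5), ("Mail", "other", 9),
])
print(chain)  # ['CRM', 'ERP']
```

In template terms, the first system in the chain would supply the flow trigger and the later ones the flow nodes.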

Technologies for switching network traffic in a data center

Technologies for switching network traffic include a network switch. The network switch includes one or more processors and communication circuitry coupled to the one or more processors. The communication circuitry is capable of switching network traffic of multiple link layer protocols. Additionally, the network switch includes one or more memory devices storing instructions that, when executed, cause the network switch to receive, with the communication circuitry through an optical connection, network traffic to be forwarded, and determine a link layer protocol of the received network traffic. The instructions additionally cause the network switch to forward the network traffic as a function of the determined link layer protocol. Other embodiments are also described and claimed.
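The "determine a link layer protocol, then forward as a function of it" step can be sketched as a classifier plus a dispatch table. The EtherType boundary (values of 0x0600 and above indicate an Ethernet II EtherType rather than an 802.3 length field) is a real convention; the handler names and the coarse two-way classification are assumptions.

```python
# Minimal sketch of protocol-aware forwarding: inspect the frame to
# classify its link layer protocol, then dispatch to a per-protocol
# forwarder. Handler names are illustrative.

def link_layer_protocol(frame: bytes) -> str:
    """Rough classification by the 2-byte field that follows the
    6+6-byte destination/source addresses of an Ethernet frame."""
    if len(frame) < 14:
        return "unknown"
    ethertype = int.from_bytes(frame[12:14], "big")
    if ethertype >= 0x0600:
        return "ethernet"      # Ethernet II (EtherType present)
    return "other"             # e.g. raw 802.3 length field

def forward(frame: bytes) -> str:
    proto = link_layer_protocol(frame)
    handlers = {"ethernet": "eth_forwarder", "other": "fallback"}
    return handlers.get(proto, "drop")

ipv4_frame = b"\xff" * 12 + b"\x08\x00" + b"payload"
print(forward(ipv4_frame))  # eth_forwarder
```

A real switch would of course forward in hardware; the sketch only shows the classify-then-dispatch structure.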

Throttling queue for a request scheduling and processing system

Various methods and systems for implementing request scheduling and processing in a multi-tenant distributed computing environment are provided. Requests to utilize system resources in the distributed computing environment are stored in account queues corresponding to tenant accounts. If storing a request in an account queue would exceed a throttling threshold, such as a limit on the number of requests stored per account, the request is dropped to a throttling queue. A scheduler prioritizes processing requests stored in the throttling queue before processing requests stored in the account queues. The account queues can be drained using dominant resource scheduling. In some embodiments, a request is not picked up from an account queue if processing the request would exceed a predefined hard limit on system resource utilization for the corresponding tenant account. In some embodiments, the hard limit is defined as a percentage of the threads the system has available to process requests.
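The queueing discipline above can be sketched with per-account queues, an overflow threshold, and a shared throttling queue that the scheduler drains first. The class and method names are assumptions; dominant resource scheduling and hard limits are omitted for brevity.

```python
# Sketch of the throttling discipline: per-account queues with a
# threshold, overflow into a shared throttling queue that the
# scheduler prioritizes. Details beyond the abstract are assumptions.
from collections import deque

class Throttler:
    def __init__(self, per_account_limit):
        self.limit = per_account_limit
        self.account_queues = {}
        self.throttling_queue = deque()

    def submit(self, account, request):
        q = self.account_queues.setdefault(account, deque())
        if len(q) >= self.limit:             # would exceed the threshold
            self.throttling_queue.append(request)
        else:
            q.append(request)

    def next_request(self):
        if self.throttling_queue:            # throttling queue first
            return self.throttling_queue.popleft()
        for q in self.account_queues.values():
            if q:
                return q.popleft()
        return None

t = Throttler(per_account_limit=1)
t.submit("tenantA", "r1")
t.submit("tenantA", "r2")    # overflows into the throttling queue
print(t.next_request())      # r2
```

Draining the throttling queue first keeps one noisy tenant's overflow from silently starving, since those requests are surfaced before normal account-queue traffic.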

INTELLIGENT ALLOCATION OF NETWORK RESOURCES

Systems, devices, and techniques described herein relate to intelligently allocating network resources to Quality of Service (QoS)-sensitive data traffic. An example method includes identifying a request to deliver QoS-sensitive services to a User Equipment (UE) over at least one delivery network. The at least one delivery network may include at least one reserved resource and at least one pooled resource. The QoS-sensitive services are determined to be delivered over the at least one pooled resource. In addition, delivery of the QoS-sensitive services is caused over the at least one pooled resource.
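One way to read the decision above, sketched very simply: check whether a reserved resource can carry the QoS-sensitive traffic, and fall back to a pooled resource when it cannot. The preference order and the capacity model are assumptions, not the patent's policy.

```python
# Hedged sketch: choose between a reserved and a pooled delivery
# resource for QoS-sensitive traffic. The policy is illustrative.

def pick_resource(reserved_free, pooled_free, demand):
    """All quantities in the same capacity units (e.g. Mbps)."""
    if reserved_free >= demand:
        return "reserved"
    if pooled_free >= demand:
        return "pooled"      # the case the abstract describes
    return "reject"

print(pick_resource(reserved_free=0, pooled_free=10, demand=5))  # pooled
```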

Dynamically re-allocating computing resources while maintaining network connection(s)

Techniques are described herein that are capable of dynamically re-allocating computing resources while maintaining network connection(s). Applications of customers are run in a computing unit. Computing resources are allocated among the applications based at least in part on dynamic demands of the applications for the computing resources and resource limits associated with the respective customers. In a first example, the computing resources are dynamically re-allocated among the applications, as a result of changing the resource limit of at least one customer, while maintaining at least one network connection between a client device of each customer and at least one respective application. In a second example, the computing resources are dynamically re-allocated among the applications, as a result of changing the resource limit of at least one customer, while maintaining at least one network connection between an interface and a client device of each customer.
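The allocation rule can be sketched as a pure function of demands and per-customer limits: each customer's applications receive min(demand, limit), and changing a limit simply re-runs the allocation without touching any connection state. The function and field names are assumptions.

```python
# Hedged sketch of re-allocation under per-customer resource limits.
# Connections live outside this function, so re-running it on a
# limit change leaves them untouched.

def allocate(demands, limits):
    """demands/limits: {customer: resource units}. Returns grants."""
    return {c: min(d, limits.get(c, 0)) for c, d in demands.items()}

demands = {"alice": 8, "bob": 3}
limits = {"alice": 5, "bob": 10}
print(allocate(demands, limits))   # {'alice': 5, 'bob': 3}
limits["alice"] = 9                # limit changed at runtime
print(allocate(demands, limits))   # {'alice': 8, 'bob': 3}
```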

AUTOMATIC SCALING FOR CONSUMER SERVERS IN A DATA PROCESSING SYSTEM

A system and method for automatically scaling consumer servers in a data processing system. To build an automatic scaling system, the present disclosure allows consumers to obtain additional information, e.g., the number of events that remain to be read from an aggregator, when receiving an event from the aggregator. This additional number provides a direct gauge for the data processing system to determine when the consumers are over-provisioned, e.g., when the number of events left to be read is close to zero, as well as when the consumers are under-provisioned, e.g., when the number of events left to be read continues to increase. As a result, the consumers can be automatically scaled to handle the dynamic data processing demand while providing optimal resource allocation.
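The two conditions in the abstract map directly to a simple decision function over recent backlog readings: scale down when the backlog is near zero, scale up when it keeps growing. The thresholds and the window shape are illustrative assumptions.

```python
# Sketch of the scaling signal: the consumer learns, with each event,
# how many events still remain at the aggregator, and scales on that
# backlog. Thresholds are illustrative.

def scaling_decision(backlog_samples, low=5):
    """backlog_samples: recent 'events left to read' readings,
    oldest first."""
    latest = backlog_samples[-1]
    slope = backlog_samples[-1] - backlog_samples[0]
    if latest <= low:
        return "scale_down"   # over-provisioned: backlog near zero
    if slope > 0:
        return "scale_up"     # under-provisioned: backlog growing
    return "hold"

print(scaling_decision([120, 180, 260]))  # scale_up
print(scaling_decision([40, 12, 2]))      # scale_down
```

The point of the abstract is that this backlog count is delivered piggybacked on each event, so no separate monitoring query is needed.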

Method, device and system for ensuring service level agreement of application
11588709 · 2023-02-21

A method, device, and system for ensuring a service level agreement (SLA) of an application, where the method includes: obtaining, by an application function (AF) entity, information about a first network slice instance (NSI) that is among the network slice instances between a specified location and a target network and whose SLA support capability meets a subscribed SLA requirement of the application; and sending a notification message that includes the information about the first NSI, to enable establishment of a new session in the first NSI for a terminal.
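The AF-side selection can be sketched as a filter over candidate NSIs against the subscribed SLA requirement. The candidate record fields and the "first match wins" rule are assumptions for illustration.

```python
# Sketch of NSI selection: among candidate network slice instances
# between the location and the target network, pick one whose SLA
# support capability meets the subscribed requirement.

def select_nsi(candidates, required_sla):
    """candidates: [{'id', 'max_latency_ms', 'min_bandwidth_mbps'}]."""
    for nsi in candidates:
        if (nsi["max_latency_ms"] <= required_sla["max_latency_ms"]
                and nsi["min_bandwidth_mbps"]
                    >= required_sla["min_bandwidth_mbps"]):
            return nsi["id"]      # the "first NSI" of the abstract
    return None

nsis = [
    {"id": "nsi-1", "max_latency_ms": 50, "min_bandwidth_mbps": 10},
    {"id": "nsi-2", "max_latency_ms": 10, "min_bandwidth_mbps": 100},
]
sla = {"max_latency_ms": 20, "min_bandwidth_mbps": 50}
print(select_nsi(nsis, sla))  # nsi-2
```

The notification message would then carry the selected NSI's information so that a new session can be established in it for the terminal.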

Utilizing network analytics for service provisioning

This disclosure describes techniques for collecting network parameter data for network switches and/or physical servers and provisioning virtual resources of a service on physical servers based on network resource availability. The network parameter data may include network resource availability data, diagnostic constraint data, traffic flow data, etc. The techniques include determining network switches that have an availability of network resources to support a virtual resource on a connected physical server. A scheduler may deploy virtual machines to particular servers based on the network parameter data in lieu of, or in addition to, the server utilization data of the physical servers (e.g., CPU usage, memory usage, etc.). In this way, a virtual resource may be deployed to a physical server that not only has availability of server resources, but is also connected to a network switch with the availability of network resources to support the virtual resource.
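The placement logic can be sketched as a filter-then-rank scheduler: first filter servers by whether their attached switch has network headroom, then rank the survivors by server-side utilization. Field names and the ranking criterion are assumptions.

```python
# Sketch of network-aware placement: filter servers by the free
# bandwidth of their attached switch before comparing CPU headroom.

def place_vm(servers, needed_bw_mbps):
    """servers: [{'name', 'switch_free_bw_mbps', 'cpu_free'}]."""
    eligible = [s for s in servers
                if s["switch_free_bw_mbps"] >= needed_bw_mbps]
    if not eligible:
        return None
    # Among network-eligible servers, prefer the most free CPU.
    return max(eligible, key=lambda s: s["cpu_free"])["name"]

servers = [
    {"name": "srv-a", "switch_free_bw_mbps": 100, "cpu_free": 32},
    {"name": "srv-b", "switch_free_bw_mbps": 900, "cpu_free": 8},
]
print(place_vm(servers, needed_bw_mbps=500))  # srv-b
```

Note how srv-a loses despite having far more free CPU: its switch cannot carry the required traffic, which is exactly the failure mode a purely server-utilization-based scheduler would miss.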

Combined network and computation slicing for latency critical edge computing applications

Methods and devices for creating and operating a combined network and computational slice instance (NCSI) in a Multi-access Edge Computing (MEC) scenario. Communication and computational resources may be reserved by an NCSI controller for the NCSI. The communication resources may include network slices and the computational resources may include MEC computational resources of one or more MEC servers. The reserved resources may be selected based on quality of service (QoS) requirements of UEs that will utilize the NCSI. During operation, reserved resources for the NCSI may be dynamically renegotiated based on an aggregate load of the NCSI, the QoS of data traffic, and/or updated QoS requirements of the UEs.
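The renegotiation step can be sketched as a controller loop that compares the aggregate load against the current network-plus-compute reservation and adjusts it with some slack. The headroom factor and resource keys are illustrative assumptions, not the patent's policy.

```python
# Sketch of NCSI renegotiation: size the combined network + compute
# reservation from aggregate load plus a slack margin. Illustrative.

def renegotiate(reserved, aggregate_load, headroom=0.2):
    """reserved/aggregate_load: {'net_mbps': .., 'cpu_cores': ..}."""
    new = {k: load * (1 + headroom)       # keep 20% slack over load
           for k, load in aggregate_load.items()}
    grew = any(new[k] > reserved.get(k, 0) for k in new)
    return new, ("scale_up" if grew else "scale_down_or_hold")

res, action = renegotiate({"net_mbps": 100, "cpu_cores": 4},
                          {"net_mbps": 150, "cpu_cores": 3})
print(action)  # scale_up
```

In the combined-slice setting the point is that network and compute are renegotiated together, so a latency-critical application never ends up with bandwidth but no MEC compute, or vice versa.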