H04L47/821

Systems and methods for dynamic prediction and optimization of edge server scheduling

Systems, methods, and other embodiments described herein relate to improving execution of processing requests by an edge server. In one embodiment, a method includes predicting a number of computing requests from vehicles for execution by the edge server using a prediction solver for a forthcoming time period. The prediction solver may predict the number of computing requests using a prediction model selected in association with service constraints of the edge server and information from an additional server. The method also includes determining a request handling scheme using an optimization solver according to the number of computing requests, the service constraints of the edge server, and a service area of the edge server. The method also includes communicating the request handling scheme and a resource schedule to the edge server on a condition that resource criteria are satisfied for the time period.
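The predict-then-optimize loop described above can be sketched as follows. This is a minimal illustration, not the patent's actual solvers: the smoothing predictor, the one-CPU-unit-per-request criterion, and all names are assumptions.

```python
from dataclasses import dataclass

@dataclass
class ServiceConstraints:
    max_requests: int  # requests the edge server can admit per period (assumed)
    cpu_units: int     # schedulable compute units (assumed)

def predict_requests(history, weight=0.5):
    """Stand-in prediction solver: exponential smoothing over past
    per-period request counts to estimate the forthcoming period."""
    estimate = history[0]
    for count in history[1:]:
        estimate = weight * count + (1 - weight) * estimate
    return round(estimate)

def plan_schedule(predicted, constraints):
    """Stand-in optimization solver: admit up to capacity, offload the
    remainder, and return a handling scheme only when the (assumed)
    resource criteria hold for the time period."""
    admitted = min(predicted, constraints.max_requests)
    scheme = {"admit_local": admitted, "offload": predicted - admitted}
    criteria_met = admitted <= constraints.cpu_units  # 1 unit/request (assumed)
    return scheme if criteria_met else None
```

A scheduler would run `predict_requests` each period and forward the resulting scheme to the edge server only when `plan_schedule` returns a non-`None` result.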

Optimizing agent for identifying traffic associated with a resource for an optimized service flow
11743208 · 2023-08-29

An optimizing agent of an access point device can identify traffic associated with a resource for an optimized service flow so as to provide a user an enhanced experience. The optimizing agent can identify the traffic for the optimized service flow based on one or more optimization settings. The optimization settings can include a policy that indicates a priority level, a bandwidth, a QoS, or any other prioritization setting. A user can manage a list of resources associated with the one or more optimization settings via a user interface, hosted either by a network resource or a network device, so that traffic associated with those resources receives optimization.
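The match step an agent like this performs can be sketched as a lookup against the user-managed resource list. The field names and the example policy keys (priority, bandwidth, QoS class) are illustrative assumptions:

```python
def classify(packet, managed_resources, settings):
    """Tag traffic whose destination appears in the user-managed
    resource list with that resource's optimization settings;
    everything else gets default handling."""
    if packet["dest"] in managed_resources:
        return settings.get(packet["dest"], {"priority": "default"})
    return {"priority": "default"}

# Hypothetical user-managed list and per-resource policy.
managed = {"video.example.com"}
settings = {"video.example.com": {"priority": "high", "bandwidth_mbps": 25, "qos": "AF41"}}
```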

Metadata driven static determination of controller availability
11336588 · 2022-05-17

Systems and methods for determining if a controller that can service a custom resource (CR) exists are disclosed. A processing device annotates a corresponding deployment of each of a plurality of controllers with filter metadata obtained from the controller. The filter metadata of a controller comprises at least an object type that the controller is to service. In response to generating a CR, the processing device may compare the definitions of the CR with the filter metadata from each of the plurality of controllers, wherein the definitions of the CR comprise at least an object type of the CR. In response to determining that none of the plurality of controllers have filter metadata matching the definitions of the CR, the processing device may provide to a user a no-match alert indicating that there is no controller among the plurality of controllers that can service the CR.
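The comparison of a custom resource's definitions against per-controller filter metadata can be sketched as below. The annotation layout and the single `object_type` field are illustrative assumptions, not the patent's actual schema:

```python
def find_controller(cr_definition, deployments):
    """Static availability check: return the first controller whose
    annotated filter metadata matches the CR's object type, or None
    (a no-match, which would trigger an alert to the user)."""
    for deployment in deployments:
        filter_meta = deployment.get("annotations", {}).get("filter-metadata", {})
        if filter_meta.get("object_type") == cr_definition["object_type"]:
            return deployment["name"]
    return None

# Hypothetical annotated deployments.
deployments = [
    {"name": "db-controller", "annotations": {"filter-metadata": {"object_type": "Database"}}},
    {"name": "cache-controller", "annotations": {"filter-metadata": {"object_type": "Cache"}}},
]
```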

DISTRIBUTED DATABASE-DRIVEN RESOURCE MANAGEMENT AND LOCKING IN A CLOUD NATIVE MOBILE CORE NETWORK NODE ARCHITECTURE
20220150181 · 2022-05-12

Systems and methods for efficient database management are disclosed, including a non-transitory readable medium with a memory configured to store information associated with service instance requests across a plurality of distributed network resources, and a processor configured to: receive a service instance request; determine a first native domain object associated with the service instance request; allocate the plurality of network resources to a plurality of distributed worker instances dependent upon the first native domain object; and assign the first service instance request to a first worker instance that includes a microservice instance defining service instance blocks to execute the request. A service instance block manager is configured to manage the first service instance request in conjunction with subsequent service instance requests associated with the plurality of worker instances, track running and completed requests, and allocate resources for similar requests across the distributed network nodes.

RESOURCE MANAGEMENT METHOD, RESOURCE MANAGEMENT SYSTEM AND WORKLOAD SCHEDULING APPARATUS FOR NETWORK SLICING

A resource management method, a resource management system, and a workload scheduling apparatus for network slicing are provided. In the resource management method, a service request related to an application type of a terminal device is received. A monitoring report of the terminal device is obtained according to the service request; the monitoring report relates to the condition of the radio resource used by the terminal device. The usage of a slicing resource is analyzed, based on the slicing resource requested by the service request and the monitoring report, to produce a predicted arrangement result for the slicing resource. The slicing resource requested by the service request is then arranged according to the predicted arrangement result, and a corresponding setting configuration is transmitted to the radio access network. The setting configuration serves to adjust the slicing resource so that both the service request and the current network condition are satisfied.
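One way the arrangement step could combine the requested slicing resource with the monitoring report is sketched below. The headroom rule and the resource-block units are illustrative assumptions, not the patent's analysis:

```python
def arrange_slice(requested_prbs, monitored_usage, capacity):
    """Grant the requested physical resource blocks when monitored
    headroom allows; otherwise scale the grant down to the remaining
    capacity and flag that the request was scaled."""
    headroom = capacity - monitored_usage
    granted = min(requested_prbs, max(headroom, 0))
    return {"granted_prbs": granted, "scaled": granted < requested_prbs}
```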

OPTIMIZING AGENT FOR IDENTIFYING TRAFFIC ASSOCIATED WITH A RESOURCE FOR AN OPTIMIZED SERVICE FLOW
20220150183 · 2022-05-12

An optimizing agent of an access point device can identify traffic associated with a resource for an optimized service flow so as to provide a user an enhanced experience. The optimizing agent can identify the traffic for the optimized service flow based on one or more optimization settings. The optimization settings can include a policy that indicates a priority level, a bandwidth, a QoS, or any other prioritization setting. A user can manage a list of resources associated with the one or more optimization settings via a user interface, hosted either by a network resource or a network device, so that traffic associated with those resources receives optimization.

Work-load management in a client-server infrastructure

Work-load management in a client-server infrastructure includes setting request information in accordance with request semantics corresponding to a type of request from a client. The request semantics include different request-types provided with different priorities during processing. Within a server, requests with high priority are included in a standard request processing queue. Further, requests with low priority are excluded from the standard request processing queue when server workload of the server exceeds a predetermined first threshold value.
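The queue-admission rule described above can be sketched directly. The threshold semantics (workload counted as queued requests, strict comparison against the first threshold) are assumptions for illustration:

```python
import collections

class WorkloadManager:
    """High-priority requests always enter the standard request
    processing queue; low-priority requests are excluded once the
    server workload exceeds the first threshold value."""
    def __init__(self, threshold):
        self.threshold = threshold
        self.queue = collections.deque()
        self.workload = 0

    def submit(self, request, priority):
        if priority == "low" and self.workload > self.threshold:
            return False  # excluded from the standard queue
        self.queue.append(request)
        self.workload += 1
        return True
```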

REDUNDANT PATH RESOURCE RESERVATION METHOD, NETWORK DEVICE AND STORAGE MEDIUM
20230261987 · 2023-08-17

The present application provides a redundant path resource reservation method, a network device, and a storage medium. The method includes: acquiring, from a received attribute declaration packet of a talker device, a TSN service targeted by the attribute declaration packet and an indication of whether to provide redundant propagation for the packet; duplicating the attribute declaration packet in response to the indication that redundant propagation is to be provided and to at least two spanning tree instances being maintained in a bridge device; propagating the received attribute declaration packet and the duplicated attribute declaration packet to establish a redundant path for the TSN service between the talker device and a listener device; and performing, in response to receiving a resource reservation request packet for the TSN service from the listener device, redundant path resource reservation for the TSN service.
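The duplication condition at the bridge can be sketched as follows; the packet representation and field names are illustrative assumptions, not the TSN protocol encoding:

```python
def propagate_declaration(decl, spanning_trees):
    """When the talker's declaration requests redundancy and the
    bridge maintains at least two spanning tree instances, duplicate
    the declaration and propagate one copy per instance, forming a
    redundant path; otherwise propagate a single copy."""
    if decl.get("redundant") and len(spanning_trees) >= 2:
        return [dict(decl, tree=t) for t in spanning_trees[:2]]
    return [dict(decl, tree=spanning_trees[0])]
```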

DATA PROCESSING FOR CONNECTED AND AUTONOMOUS VEHICLES

A method may be implemented to prioritize and analyze data exchanged in a connected vehicle transit network. The method may include receiving, at a roadside unit, vehicle data from a connected vehicle. The method may further include prioritizing the vehicle data received from the connected vehicle based on a level of urgency, network latency, or available computing resources.
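A roadside unit's prioritization step could look like the sketch below. The urgency field, the congestion thresholds, and the keep-half rule are all assumptions for illustration:

```python
def prioritize(messages, latency_ms, free_cpu):
    """Rank incoming vehicle messages by urgency, then shed the
    least urgent tail when network latency is high or available
    computing resources are scarce."""
    ranked = sorted(messages, key=lambda m: m["urgency"], reverse=True)
    if latency_ms > 100 or free_cpu < 0.2:
        ranked = ranked[: max(1, len(ranked) // 2)]  # keep most urgent half
    return ranked
```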

Method and apparatus for allocating electronic resource

The present disclosure relates to allocating electronic resources. In some arrangements, a server configures allocable electronic resources for a live streaming room in response to receiving a first configuration request from a first client. The first configuration request indicates a first configuration operation performed by an anchor on the live streaming room, and the allocable electronic resources are associated with a target commodity corresponding to the live streaming room. The server acquires an allocation request sent by a second client, the allocation request being triggered by an interactive operation of an audience member of the second client in the live streaming room. In response to the allocation request, the server allocates an electronic resource from the allocable electronic resources to the audience member, and the allocated electronic resource is used by the audience member to complete an order operation for the target commodity.
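The configure-then-allocate flow can be modeled minimally as below. The coupon representation and method names are assumptions; the patent does not specify the resource format:

```python
class LiveRoom:
    """Toy model: the anchor's client configures a pool of allocable
    resources tied to a target commodity, and an audience member's
    interaction triggers an allocation from that pool."""
    def __init__(self):
        self.pool = 0

    def configure(self, count):
        # First configuration request from the anchor's (first) client.
        self.pool = count

    def allocate(self, audience_id):
        # Allocation request triggered from a second (audience) client.
        if self.pool <= 0:
            return None
        self.pool -= 1
        return {"audience": audience_id, "coupon": True}
```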