Patent classifications
H04L47/83
Methods and apparatus for supporting dynamic network scaling based on learned patterns and sensed data
Methods and apparatus for predicting the communications resources that will be needed at a venue, and then dynamically controlling the amount of available resources, are described. In various embodiments, real time or near real time video of areas of the venue is used to predict the number of people in a portion of the venue and/or their direction of movement. Along with other information, such as the type of event and/or the event schedule, the collected information is supplied to a set of trained resource requirement models which are used to predict future resource needs at the venue, e.g., while an event is ongoing. Commands are sent to dynamically vary the amount of communications resources provided to one or more portions of the venue. Resources which can be varied include, but are not limited to, fixed wired WAN bandwidth, WiFi bandwidth, cellular bandwidth, network-based on-demand services, transcoding services, firewall services, etc.
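The core control loop of this abstract can be sketched as: a predicted headcount for a venue zone drives a command that scales that zone's bandwidth. This is an illustrative sketch, not the patented method; the function names, command format, and per-person bandwidth figure are all assumptions.

```python
PER_PERSON_MBPS = 0.5  # assumed average per-attendee demand (not from the patent)

def bandwidth_command(zone: str, predicted_people: int,
                      headroom: float = 1.2) -> dict:
    """Build a command scaling a zone's WiFi bandwidth to the
    model-predicted crowd size plus a headroom factor."""
    mbps = predicted_people * PER_PERSON_MBPS * headroom
    return {"zone": zone, "wifi_mbps": round(mbps, 1)}

# e.g. a trained model predicts 400 people heading to the north concourse
cmd = bandwidth_command("north_concourse", predicted_people=400)
```

In the described system such commands would be emitted per venue portion as the resource requirement models update their predictions during the event.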
Load adaptation architecture framework for orchestrating and managing services in a cloud computing system
According to one aspect of the concepts and technologies disclosed herein, a cloud computing system can include a load adaptation architecture framework that performs operations for orchestrating and managing one or more services that may operate within at least one of layers 4 through 7 of the Open Systems Interconnection (“OSI”) communication model. The cloud computing system also can include a virtual resource layer. The virtual resource layer can include a virtual network function that provides, at least in part, a service. The cloud computing system also can include a hardware resource layer. The hardware resource layer can include a hardware resource that is controlled by a virtualization layer. The virtualization layer can cause the virtual network function to be instantiated on the hardware resource so that the virtual network function can be used to support the service.
Transmission Padding Efficiency Improvement
A user equipment (UE) is configured to receive an uplink (UL) grant comprising a UL grant size, determine a current UL buffer size, compare the current UL buffer size to the UL grant size to determine an amount of padding needed to fill the UL grant, and determine whether to transmit on the UL grant based on the amount of padding needed to fill the UL grant.
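The arithmetic in this abstract is simple enough to sketch directly: padding is the gap between the grant size and the buffered data, and the UE declines the grant when padding would dominate. The decision threshold is an assumption for illustration; the abstract does not specify one.

```python
def padding_needed(grant_bytes: int, buffer_bytes: int) -> int:
    """Padding required to fill the UL grant; zero when the buffer
    already covers the grant."""
    return max(0, grant_bytes - buffer_bytes)

def should_transmit(grant_bytes: int, buffer_bytes: int,
                    max_padding_ratio: float = 0.5) -> bool:
    """Skip the grant when padding would make up more than the
    assumed maximum fraction of the transmission."""
    if buffer_bytes == 0:
        return False
    return padding_needed(grant_bytes, buffer_bytes) / grant_bytes <= max_padding_ratio
```

For a 100-byte grant, an 80-byte buffer transmits (20% padding) while a 30-byte buffer does not (70% padding).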
Enhanced selection of cloud architecture profiles
This document describes modeling and simulation techniques to select a cloud architecture profile based on correlations between application workloads and resource utilization. In some aspects, a method includes obtaining infrastructure data specifying utilization of computing resources of an existing computing system. Application workload data specifying tasks performed by one or more applications running on the existing computing system is obtained. One or more models are generated based on the infrastructure data and the application workload data. The model(s) define an impact on utilization of each computing resource in response to changes in workloads of the application(s). A workload is simulated, using the model(s), on a candidate cloud architecture profile that specifies a set of computing resources. A simulated utilization of each computing resource of the candidate cloud architecture profile is determined based on the simulation. An updated cloud architecture profile is generated based on the simulated utilization.
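The simulation step described above (per-resource utilization of a candidate profile under a modeled workload) can be sketched as follows. The linear per-task impact model and the dictionary shapes are assumptions; the patent does not specify the model form.

```python
def simulate_utilization(workload: dict, impact_model: dict,
                         profile: dict) -> dict:
    """For each resource in the candidate profile, utilization is the
    summed task demand (task rate x per-task impact, from the learned
    model) divided by the provisioned capacity."""
    util = {}
    for resource, capacity in profile.items():
        demand = sum(rate * impact_model[task][resource]
                     for task, rate in workload.items())
        util[resource] = demand / capacity
    return util
```

An updated profile would then be generated by resizing any resource whose simulated utilization is too high or too low.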
AUTOMATED SERVER WORKLOAD MANAGEMENT USING MACHINE LEARNING
Systems and methods are disclosed for managing workload among server clusters. According to certain embodiments, the system may include a memory storing instructions and a processor. The processor may be configured to execute the instructions to determine historical behaviors of the server clusters in processing a workload. The processor may also be configured to execute the instructions to construct cost models for the server clusters based at least in part on the historical behaviors. Each cost model is configured to predict a processor utilization demand of a workload. The processor may further be configured to execute the instructions to receive a workload and determine efficiencies of processing the workload by the server clusters based at least in part on at least one of the cost models or an execution plan of the workload.
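A minimal sketch of the cost-model routing idea, assuming a linear model whose weights were learned from historical cluster behavior (the model form and feature names are assumptions, not from the abstract):

```python
def predict_cpu_demand(cost_model: dict, workload_features: dict) -> float:
    """Linear cost model: predicted CPU demand is a weighted sum of
    workload features, with weights learned per cluster."""
    return sum(cost_model.get(f, 0.0) * v for f, v in workload_features.items())

def pick_cluster(cost_models: dict, workload_features: dict) -> str:
    """Route the workload to the cluster whose cost model predicts the
    lowest processor utilization demand."""
    return min(cost_models,
               key=lambda c: predict_cpu_demand(cost_models[c], workload_features))
```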
Service chain accommodation apparatus and service chain accommodation method
A service chain accommodation device includes an influence coefficient calculation unit that calculates an influence coefficient indicating that the influence of a processing failure is greater for a VNF located in a later stage of a service chain and for a VNF shared among a plurality of service chains, a residual resource calculation unit that corrects the amount of residual resources that can be accommodated for each of the VNFs through which the service chain passes, and an accommodation design unit that assigns a new service chain on the basis of the amount of the residual resources.
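The two calculation units described above can be sketched as below. The exact formulas are assumptions chosen only to match the stated monotonicity: the coefficient grows with a VNF's stage position and with how many chains share it, and a larger coefficient shrinks the residual capacity offered to new chains.

```python
def influence_coefficient(stage: int, chain_len: int, shared_by: int) -> float:
    """Later-stage VNFs and VNFs shared by more chains receive a larger
    influence coefficient (this particular formula is an assumption)."""
    return (stage / chain_len) * shared_by

def corrected_residual(residual: float, coeff: float) -> float:
    """Shrink the usable residual capacity of high-influence VNFs so
    that new service chains are steered toward lower-impact placements."""
    return residual / (1.0 + coeff)
```

The accommodation design unit would then place a new chain on the VNFs with the largest corrected residuals.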
METHOD AND APPARATUS FOR MANAGING NETWORK TRAFFIC VIA UNCERTAINTY
There is provided a method and system for communication network management, including an active TE architecture and procedure that rely on the epistemic uncertainty obtained from traffic forecasting models. According to embodiments, the traffic forecasting models can predict the mean of the network traffic demand and can extract one or more features relating to the epistemic uncertainty and the aleatoric uncertainty. According to embodiments, the epistemic uncertainty is used to vary the sampling frequency of network statistics in TE applications, for specific times or specific flows. The time-window used to predict network traffic can be varied (e.g. increased or decreased) to adjust the epistemic uncertainty.
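The uncertainty-driven sampling policy can be sketched as: when the forecasting model's epistemic uncertainty for a flow exceeds a threshold, statistics are sampled more often, otherwise less often. The halving/doubling step and the threshold are assumptions for illustration.

```python
def sampling_interval(base_interval_s: float, epistemic_std: float,
                      threshold: float = 1.0) -> float:
    """Sample network statistics more frequently when the traffic
    forecasting model is epistemically uncertain about a flow, and
    relax sampling when the model is confident."""
    if epistemic_std > threshold:
        return base_interval_s / 2   # tighten: double the sampling frequency
    return base_interval_s * 2       # relax: halve the sampling frequency
```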
Recalibrating resource profiles for network slices in a 5G or other next generation wireless network
The technologies described herein are generally directed to facilitating the allocation, scheduling, and management of network slice resources. According to some embodiments, a system can facilitate performance of operations. The operations can include, based on a request for a network service type that was received from a user device, allocating a network slice of a network to the user device, with the network slice being previously assigned a capacity of a resource of the network in accordance with a resource profile. Further, the operations include monitoring performance of the network slice, resulting in monitored slice performance that is compared to a performance requirement of the network service type. Another operation includes, based on the monitored slice performance, facilitating recalibration of the resource profile in accordance with a condition associated with the network service type, resulting in a modification of the capacity of the resource assigned to the network slice.
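The recalibration step can be sketched as a feedback rule on the slice's resource profile: grow the assigned capacity when the monitored performance misses the service type's requirement, shrink it when performance is comfortably within bounds. The latency metric, the 10% step, and the shrink condition are all assumptions.

```python
def recalibrate(profile_mbps: float, measured_latency_ms: float,
                required_latency_ms: float, step: float = 0.1) -> float:
    """Adjust the slice's assigned capacity based on monitored
    performance versus the service type's requirement."""
    if measured_latency_ms > required_latency_ms:
        return profile_mbps * (1 + step)        # missing the SLA: grow
    if measured_latency_ms < 0.5 * required_latency_ms:
        return profile_mbps * (1 - step)        # far under the SLA: reclaim
    return profile_mbps                          # within bounds: keep
```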
System and method for providing network support services and premises gateway support infrastructure
A service management system communicates via wide area network with gateway devices located at respective user premises. The service management system remotely manages delivery of application services, which can be voice controlled, by a gateway, e.g. by selectively activating/deactivating service logic modules in the gateway. The service management system also may selectively provide secure communications and exchange of information among gateway devices and among associated endpoint devices. An exemplary service management system includes a router connected to the network and one or more computer platforms for implementing management functions. Examples of the functions include a connection manager for controlling system communications with the gateway devices, an authentication manager for authenticating each gateway device and controlling the connection manager, and a subscription manager for managing application services and/or features offered by the gateway devices. A service manager, controlled by the subscription manager, distributes service-specific configuration data to authenticated gateway devices.
Technologies for assigning workloads to balance multiple resource allocation objectives
Technologies for allocating resources of managed nodes to workloads to balance multiple resource allocation objectives include an orchestrator server to receive resource allocation objective data indicative of multiple resource allocation objectives to be satisfied. The orchestrator server is additionally to determine an initial assignment of a set of workloads among the managed nodes and receive telemetry data from the managed nodes. The orchestrator server is further to determine, as a function of the telemetry data and the resource allocation objective data, an adjustment to the assignment of the workloads to increase an achievement of at least one of the resource allocation objectives without decreasing an achievement of another of the resource allocation objectives, and apply the adjustments to the assignments of the workloads among the managed nodes as the workloads are performed. Other embodiments are also described and claimed.
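The acceptance criterion stated in this abstract (increase at least one objective without decreasing another) is exactly a Pareto-improvement test, which can be sketched directly. The objective names and the dictionary representation are assumptions; higher scores are taken to be better.

```python
def is_pareto_improvement(before: dict, after: dict) -> bool:
    """Accept a workload-assignment adjustment only if it raises the
    achievement of at least one resource allocation objective and
    lowers none (objective scores: higher is better)."""
    improved = any(after[k] > before[k] for k in before)
    worsened = any(after[k] < before[k] for k in before)
    return improved and not worsened
```

An orchestrator could evaluate each candidate adjustment against the telemetry-derived objective scores and apply only those passing this test.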