Patent classifications
H04L47/823
Edge-node controlled resource distribution
This application describes apparatus and methods for using edge computing to control resource distribution among access channels, such as retail banking centers. Edge nodes may be configured to move a product display in response to detected or expected customer traffic flow in or near a retail location. Edge nodes may be configured to redirect resources provided by a cloud computing environment toward or away from the retail location. Based on customer traffic flow, edge nodes may direct customers and resources to a retail location and ensure the retail location provides a predetermined quality of service.
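A minimal sketch of the traffic-driven scaling decision described above, assuming a simple utilization threshold. The function name, parameters, and thresholds are illustrative, not taken from the patent:

```python
def redirect_resources(traffic_rate, capacity, target_qos=0.8):
    """Decide whether an edge node should pull cloud resources toward
    a retail location or release them (hypothetical threshold policy).

    traffic_rate: observed customers per unit time
    capacity:     customers per unit time the location can serve now
    target_qos:   utilization ceiling that preserves quality of service
    """
    utilization = traffic_rate / capacity
    if utilization > target_qos:
        return "scale_up"      # redirect cloud resources toward the location
    if utilization < target_qos / 2:
        return "scale_down"    # release resources back to the cloud
    return "hold"
```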
Tenant-driven dynamic resource allocation for virtual network functions
Techniques are described for tenant-driven dynamic resource allocation in a network functions virtualization infrastructure (NFVI). In one example, an orchestration system operated by a data center provider comprises processing circuitry coupled to a memory, and logic stored in the memory and configured for execution by the processing circuitry, wherein the logic is operative to: compute an aggregate bandwidth for a plurality of flows associated with a tenant of the data center provider and processed by a virtual network function, assigned to the tenant, executing on a server of the data center; and modify, based on the aggregate bandwidth, an allocation of compute resources of the server executing the virtual network function.
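The two operations in the claim, aggregating flow bandwidth and mapping it to a compute allocation, can be sketched as follows. The linear one-vCPU-per-1000-Mb/s policy is a hypothetical stand-in; the patent does not specify the mapping:

```python
import math

def aggregate_bandwidth(flows):
    """Sum the measured bandwidth (Mb/s) of the flows assigned to one tenant."""
    return sum(f["mbps"] for f in flows)

def cpus_for_tenant(flows, mbps_per_cpu=1000, min_cpus=1):
    """Map aggregate tenant bandwidth to a vCPU count for the tenant's VNF
    (hypothetical linear policy: one vCPU per 1000 Mb/s, floor of one)."""
    agg = aggregate_bandwidth(flows)
    return max(min_cpus, math.ceil(agg / mbps_per_cpu))
```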
Flash crowd management in real-time streaming
A real-time streaming service predicts an incoming flash crowd event and manages computing resources to respond to the event before traffic peaks, thus reducing the likelihood that the streaming service's resources will be overwhelmed. Embodiments of a real-time streaming server predict a flash crowd event by detecting actions by client devices during a multi-step process to access a real-time content stream from an endpoint server cluster. Initially, the endpoint server has first computing resources configured to stream the content stream to the client devices. The streaming server provisions second computing resources at the endpoint server based on a rate at which the client devices perform an action associated with a first step in the multi-step process. The second computing resources are configured to stream the real-time content stream based on a rate at which the client devices perform an action associated with a second step in the multi-step process.
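The core idea, provisioning ahead of the peak from the rate of an earlier step in the access process, can be sketched like this. The conversion fraction, lead time, and per-server capacity are assumed values for illustration:

```python
import math

def predicted_stream_starts(step1_rate, conversion=0.6, lead_time_s=30):
    """Estimate imminent stream-start load from the per-second rate of an
    earlier step (e.g. page loads) in the multi-step access process.
    Hypothetical model: a fixed fraction of step-1 actions become streams."""
    return step1_rate * conversion * lead_time_s

def servers_to_provision(step1_rate, current_capacity, streams_per_server=500):
    """Provision extra endpoint servers before the predicted flash crowd
    arrives, based on the shortfall against current capacity."""
    expected = predicted_stream_starts(step1_rate)
    deficit = expected - current_capacity
    return max(0, math.ceil(deficit / streams_per_server))
```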
Utilizing a model to manage resources of a network device and to prevent network device oversubscription by endpoint devices
A network device may receive configuration data identifying resource subscription thresholds associated with a plurality of respective endpoint devices and may receive traffic from the plurality of endpoint devices. The network device may process the traffic and the configuration data, with a resource allocation model, to determine that processing traffic associated with a first endpoint device requires allocating a resource of the network device, and may process the configuration data, with the resource allocation model, to identify, as that resource, a particular resource of the network device that is currently allocated to traffic associated with a second endpoint device. The network device may allocate the particular resource to the traffic associated with the first endpoint device, and may process the traffic associated with the first endpoint device with the particular resource to generate processed traffic.
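A toy model of the reallocation step: a fixed pool of resources, each free or owned by an endpoint, where an allocation that finds no free resource reclaims one from a victim endpoint chosen by the model. The class and method names are illustrative:

```python
class NetworkDevice:
    """Toy resource pool: owner[i] holds the endpoint id using resource i,
    or None if the resource is free."""

    def __init__(self, n_resources):
        self.owner = [None] * n_resources

    def allocate(self, endpoint, victim=None):
        """Allocate a free resource to `endpoint`; if none is free, reclaim
        one currently allocated to `victim` (the endpoint the resource
        allocation model identified as reclaimable). Returns the resource
        index, or None if nothing could be allocated."""
        for i, owner in enumerate(self.owner):
            if owner is None:
                self.owner[i] = endpoint
                return i
        if victim is not None:
            for i, owner in enumerate(self.owner):
                if owner == victim:
                    self.owner[i] = endpoint
                    return i
        return None
```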
User equipment activity alignment for power savings
Methods, systems, and apparatuses for synchronizing data transmission activity by user equipment (UE). In one aspect, the method comprises determining, by the UE and based on current usage of resources, that alignment of subsequent usage of resources is to be adjusted, generating, by the UE, data that indicates a plurality of activity alignment parameters that are to be adjusted in order to cause the adjusted alignment of subsequent usage of resources, encoding, by the UE, the generated data for transmission to a base station, and transmitting, by the UE, the encoded data to the base station.
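One simple form the alignment adjustment could take is snapping each activity's next transmission onto a shared time grid, so the radio wakes once per slot instead of once per activity. The grid period is a hypothetical alignment parameter, not one named in the abstract:

```python
import math

def align_to_grid(next_tx_times_ms, grid_ms=100):
    """Snap each activity's next transmission time (ms) up to the next
    boundary of a shared grid, batching transmissions that land in the
    same slot into a single radio wake-up (hypothetical alignment rule)."""
    return [math.ceil(t / grid_ms) * grid_ms for t in next_tx_times_ms]
```

With the default 100 ms grid, transmissions planned for 130 ms and 170 ms are both deferred to the 200 ms boundary and share one wake-up.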
Bandwidth Scheduling Method, Traffic Transmission Method, and Related Product
Embodiments of this application disclose a bandwidth scheduling method, a traffic transmission method, and a related product. The bandwidth scheduling method includes: receiving a bandwidth request sent by a data center, where the bandwidth request includes the bandwidth required to transmit non-real-time traffic; allocating, based on historical bandwidth information, the bandwidth required by the data center to transmit the non-real-time traffic in a future time period, where the historical bandwidth information is used to predict, for each moment in the future time period, the occupation of total bandwidth by the data center in the region in which it is located; and sending a bandwidth response to the data center, where the bandwidth response includes an allocation result. Embodiments of this application help improve utilization of physical line resources in each region.
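The allocation step can be sketched as granting non-real-time bandwidth only from the headroom that a historical predictor expects to be free at the requested moment. Using the mean of past samples as the predictor is an assumption; the patent only says historical information is used for prediction:

```python
def predicted_load(history, t):
    """Predict bandwidth occupation at future moment t as the mean of
    historical samples observed at that moment (hypothetical predictor)."""
    samples = history[t]
    return sum(samples) / len(samples)

def allocate_nrt_bandwidth(total, history, t, requested):
    """Grant non-real-time bandwidth only up to the headroom the
    predictor says will remain free on the region's lines at moment t."""
    headroom = total - predicted_load(history, t)
    return min(requested, max(0.0, headroom))
```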
Multi-tenant resource management in a gateway
Described herein are systems, methods, and software to manage resources in a gateway shared by multiple tenants. In one example, a system may monitor usage of resources by a tenant of the gateway and compare the usage with usage limits associated with the resources. The system may further determine when the usage of a resource exceeds a usage limit associated with the resource and, when it does, identify the operation that caused the usage limit to be exceeded and block the operation.
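A minimal sketch of that admission check, assuming per-tenant, per-resource counters; the class and method names are hypothetical:

```python
class TenantGateway:
    """Toy multi-tenant gateway: per-tenant usage counters with limits.
    An operation that would push usage past its limit is blocked."""

    def __init__(self, limits):
        # limits[tenant][resource] -> maximum allowed usage
        self.limits = limits
        self.usage = {t: {r: 0 for r in rs} for t, rs in limits.items()}

    def request(self, tenant, resource, amount=1):
        """Admit the operation if it keeps the tenant within its limit
        for the resource; otherwise block it and leave usage unchanged."""
        new = self.usage[tenant][resource] + amount
        if new > self.limits[tenant][resource]:
            return False  # operation blocked
        self.usage[tenant][resource] = new
        return True
```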
Resource allocation calculation apparatus and resource allocation calculation method to enhance fairness and efficiency in allocation of resources among multiple virtual networks within a physical network
A resource allocation calculation device includes: a demand prediction unit that, for each of a plurality of virtual networks sharing a physical network, predicts demands in units of communications sharing common origin and destination nodes; and an allocation calculation unit that, based on the demands predicted by the demand prediction unit, observed past demands in the units of communications, and past allocated bandwidths in the units of communications, calculates allocated bandwidths and allocated routes at a current time for the respective units of communications such that fairness between the utilities of the respective virtual networks is maximized. This enhances the fairness and efficiency of allocating resources to a plurality of virtual networks sharing the resources of a physical network.
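As a concrete stand-in for the fairness objective, here is a classic max-min fair split of one link's capacity among predicted virtual-network demands. Note this is a substitute criterion for illustration; the patent maximizes fairness between utilities, which is a more general objective:

```python
def max_min_allocate(capacity, demands):
    """Max-min fair split of `capacity` among demands (dict: vn -> demand).
    Repeatedly satisfies every demand at or below the equal share, then
    redistributes leftover capacity among the rest."""
    alloc = {vn: 0.0 for vn in demands}
    remaining = dict(demands)
    cap = capacity
    while remaining and cap > 0:
        share = cap / len(remaining)
        satisfied = {vn: d for vn, d in remaining.items() if d <= share}
        if not satisfied:
            # no demand fits under the equal share: split cap equally
            for vn in remaining:
                alloc[vn] += share
            return alloc
        for vn, d in satisfied.items():
            alloc[vn] += d
            cap -= d
            del remaining[vn]
    return alloc
```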
Using distributed services to continue or fail requests based on determining allotted time and processing time
After a service receives a request from another service, the service determines an amount of time to process the request by the service as well as a remaining time allotment to complete processing the request (e.g., a timeout value). Based on the remaining time allotment and the amount of time to process the request by at least the service (predicted time or actual time), the service may determine whether to continue processing the request (e.g., by the service and/or one or more subsequent services) or fail the request. In response, the service may then continue processing the request (e.g., continue processing at the service itself or propagate the request to the next service), or the service may fail the request.
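This is the familiar deadline-propagation pattern; a sketch under the assumption that the allotment travels with the request as an absolute deadline (function names are illustrative):

```python
import time

def with_deadline(work, predicted_s, deadline, downstream=None):
    """Fail a request up front when this service's predicted processing
    time exceeds the remaining allotment; otherwise do the work and pass
    the same absolute deadline to the next service in the chain."""
    if time.monotonic() + predicted_s > deadline:
        return ("fail", "timeout budget exhausted")
    result = work()                          # this service's own processing
    if downstream:
        return downstream(result, deadline)  # propagate the deadline
    return ("ok", result)
```

Because every hop subtracts its own elapsed time implicitly (the deadline is absolute), a slow upstream service shrinks the budget available downstream, and a doomed request fails fast instead of consuming resources along the whole chain.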
Methods and apparatus to predict end of streaming media using a prediction model
Methods to predict end of streaming media using a prediction model are disclosed herein. An example apparatus includes at least one memory, instructions in the apparatus, and processor circuitry to execute the instructions to generate a prediction model using a mean value of a bandwidth of a transmission of a streaming media to a user device and an amplitude of the streaming media, and identify an end of a streaming media session when an output of the prediction model satisfies a threshold.
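A sketch of what such a model could look like, combining the mean bandwidth of recent samples with their amplitude and comparing the output to a threshold. The weighting and threshold are invented for illustration; the abstract does not disclose the model's actual form:

```python
def end_of_stream_score(samples):
    """Hypothetical prediction-model output: combine the mean bandwidth
    of recent samples with their amplitude (peak-to-trough swing)."""
    mean = sum(samples) / len(samples)
    amplitude = max(samples) - min(samples)
    return mean + 0.5 * amplitude

def session_ended(samples, threshold=50.0):
    """Flag end of the streaming session when the model output falls
    below the threshold (low, flat bandwidth suggests the stream is done)."""
    return end_of_stream_score(samples) < threshold
```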