Patent classifications
H04L47/801
Allocating additional bandwidth to resources in a datacenter through deployment of dedicated gateways
Some embodiments provide policy-driven methods for deploying edge forwarding elements in a public or private SDDC for tenants or applications. For instance, the method of some embodiments allows administrators to create different traffic groups for different applications and/or tenants, deploys edge forwarding elements for the different traffic groups, and configures forwarding elements in the SDDC to direct data message flows of the applications and/or tenants through the edge forwarding elements deployed for them. The policy-driven method of some embodiments also dynamically deploys edge forwarding elements in the SDDC for applications and/or tenants after detecting the need for the edge forwarding elements based on monitored traffic flow conditions.
Interference Reduction in Telecommunication Networks
Various embodiments of the teachings herein include a computer-implemented method for scheduling transmissions of a plurality of data streams in a telecommunication network. The transmissions are partitioned into transmission cycles with a predetermined length in time. Repetitive transmissions of each of the data streams are transmitted at a period equal to the predetermined length multiplied by a respective repetition rate. The method includes: determining a path through the network for the transmissions of each stream; determining shared transmission links based on a comparison of the paths, wherein each shared transmission link is part of at least two of the paths; determining, by a numerical optimization, a phase of the repetitive transmissions for each data stream, the optimization using an objective function that assigns a value to the interference between two repetitive transmissions; and scheduling the transmissions of each data stream, wherein the transmissions start at a transmission cycle associated with the respective phase.
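As a concrete illustration, the phase-selection step could be sketched as a search over candidate phases that minimizes a collision-count objective on the shared links. The stream data, horizon, objective, and exhaustive search below are assumptions for illustration; the abstract does not specify the numerical optimization used.

```python
from collections import Counter
from itertools import product

# Toy instance: each stream has a path (list of links) and a repetition
# rate r, i.e. it transmits once every r transmission cycles, starting
# at its phase. All values here are illustrative.
streams = {
    "A": {"path": ["L1", "L2"], "rate": 2},
    "B": {"path": ["L2", "L3"], "rate": 2},
    "C": {"path": ["L2", "L4"], "rate": 4},
}

# A link is shared if it lies on at least two of the paths.
link_counts = Counter(l for s in streams.values() for l in s["path"])
shared = {l for l, c in link_counts.items() if c >= 2}

def interference(phases, horizon=8):
    """Objective: number of same-cycle collisions on shared links."""
    cost, names = 0, list(streams)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            common = set(streams[a]["path"]) & set(streams[b]["path"]) & shared
            if not common:
                continue
            # Cycles in which each stream transmits, given its phase.
            slots_a = {t for t in range(horizon) if t % streams[a]["rate"] == phases[a]}
            slots_b = {t for t in range(horizon) if t % streams[b]["rate"] == phases[b]}
            cost += len(slots_a & slots_b) * len(common)
    return cost

# Exhaustive search over all phase combinations stands in for the
# numerical optimization; each phase ranges over the stream's rate.
best = min(
    (dict(zip(streams, combo))
     for combo in product(*(range(s["rate"]) for s in streams.values()))),
    key=interference,
)
```

In this instance the optimizer offsets streams A and B onto opposite parities, leaving only the unavoidable collisions between C and one of them on the triply shared link L2.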
CLUSTER CAPACITY MANAGEMENT FOR HYPER CONVERGED INFRASTRUCTURE UPDATES
Disclosed are various implementations of cluster capacity management for infrastructure updates. In some examples, cluster hosts for a cluster can be scheduled for an update. A component of a datacenter level resource scheduler can analyze cluster specific resource usage data to identify a cluster scaling decision for the cluster. The datacenter level resource scheduler transmits an indication that the resource scheduler has been successfully invoked. Cluster hosts can then be updated.
ADMISSION CONTROL OF A COMMUNICATION SESSION
Aspects of the disclosure relate to admission control of a communication session in a network. The admission control can be implemented by a network node at the boundary of the network or a subsystem thereof. In one aspect, the admission control can be implemented during a predetermined period and can be based at least on an admission criterion, which can be specific to an end-point device, e.g., a target device or an origination device. The admission criterion can be configurable and, in certain implementations, it can be obtained from historical performance associated with establishment of communication sessions. Such historical performance can be assessed within a period of a configurable span.
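One way the described history-based admission criterion could be instantiated is as a minimum session-setup success rate over a configurable time window. The class, window length, and threshold below are assumptions for illustration, not the disclosure's exact criterion.

```python
from collections import deque
import time

class AdmissionController:
    """Sketch of admission control for one end-point device, based on
    historical session-establishment performance within a configurable
    window (a hypothetical concrete instantiation of the criterion)."""

    def __init__(self, window_s=300.0, min_success_rate=0.9):
        self.window_s = window_s
        self.min_rate = min_success_rate
        self.history = deque()      # (timestamp, succeeded) per past attempt

    def record(self, succeeded, now=None):
        """Log the outcome of a past session-establishment attempt."""
        self.history.append((now if now is not None else time.time(), succeeded))

    def admit(self, now=None):
        """Admit the session if the recent success rate meets the criterion."""
        now = now if now is not None else time.time()
        # Drop attempts that fell outside the configurable window.
        while self.history and now - self.history[0][0] > self.window_s:
            self.history.popleft()
        if not self.history:
            return True             # no history in the window: admit by default
        ok = sum(1 for _, s in self.history if s)
        return ok / len(self.history) >= self.min_rate
```

Passing `now` explicitly keeps the sketch testable; a deployment would use the wall clock.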
CLOUD DATA CENTER TENANT-LEVEL OUTBOUND RATE LIMITING METHOD AND SYSTEM
A cloud data center tenant-level outbound rate limiting method includes: starting a timer; receiving outbound packets of tenants in a current period and generating statistics on them; obtaining local traffic rate information of the tenants based on all the outbound packets of the tenants in the current period, and generating local bandwidth demand frames of the tenants based on the local traffic rate information; when the timer reaches the end of the current period, sending the local bandwidth demand frames of the tenants to a switch; receiving a global bandwidth demand frame sent by the switch, and computing bandwidth budgets of the tenants based on the local traffic rate information of the tenants and the global bandwidth demand frame; and modifying rate limiting parameters to limit the rate of the outbound packets of the tenants in a next period.
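The per-host side of this period-based scheme might look roughly like the sketch below. The budget formula (a proportional share of the tenant's subscribed limit when global demand oversubscribes it) and all names are assumptions for illustration, not the patent's exact computation.

```python
class TenantRateLimiter:
    """Hypothetical per-host sketch of period-based tenant rate limiting."""

    def __init__(self, tenant_limits):
        # tenant_limits[t]: tenant t's subscribed outbound limit per period (bytes).
        self.limits = tenant_limits
        self.local_rates = {t: 0 for t in tenant_limits}  # bytes seen this period
        self.budgets = dict(tenant_limits)                # current per-period budgets
        self.sent = {t: 0 for t in tenant_limits}

    def on_packet(self, tenant, size):
        """Count the packet toward local statistics and enforce the budget."""
        self.local_rates[tenant] += size
        if self.sent[tenant] + size > self.budgets[tenant]:
            return False                                  # rate-limited this period
        self.sent[tenant] += size
        return True

    def end_of_period(self):
        """Timer fired: emit the local bandwidth demand frame for the switch."""
        return dict(self.local_rates)

    def apply_global_frame(self, global_demand):
        """Recompute budgets from the switch's aggregated demand frame."""
        for t in self.budgets:
            total = global_demand.get(t, 0)
            if total > self.limits[t]:
                # Oversubscribed: give this host a share of the tenant's
                # limit proportional to its local demand.
                self.budgets[t] = self.limits[t] * self.local_rates[t] // total
            else:
                self.budgets[t] = self.limits[t]
        # Reset counters for the next period.
        self.local_rates = {t: 0 for t in self.budgets}
        self.sent = {t: 0 for t in self.budgets}
```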
Predictive network capacity scaling based on customer interest
In one example, the present disclosure describes a device, computer-readable medium, and method for scaling network capacity predictively, based on customer interest. For instance, in one example, a method includes predicting an interest of a first customer in data content that will be available for consumption over a data network at a time in the future, wherein the predicting is based on customer data including at least a search pattern associated with the first customer, flagging the data content when the predicting indicates at least a threshold degree of likelihood that the first customer will be interested in the data content, and scaling an allocation of resources of the data network to the first customer, based on the flagging.
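A minimal sketch of the flag-and-scale step, assuming a caller-supplied predictor that maps a customer's search history to a likelihood in [0, 1]; the threshold, boost factor, and field names are illustrative assumptions.

```python
def plan_allocation(customers, predict, threshold=0.8, boost=2.0):
    """Flag customers likely to consume upcoming content and scale their
    network resource allocation accordingly (names/policy are assumed)."""
    plan = {}
    for c in customers:
        likelihood = predict(c["searches"])   # e.g. derived from search patterns
        flagged = likelihood >= threshold     # flag only above the threshold
        plan[c["id"]] = c["base_bw"] * (boost if flagged else 1.0)
    return plan
```

In practice `predict` would be a trained model over customer data; a constant-time lookup suffices to show the control flow.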
Signaling transmission method and device, signaling reception method and device, storage medium and terminal
A signaling transmission method and device, a signaling reception method and device, a storage medium and a terminal are provided. The signaling transmission method includes: if an advanced setting for transmitting Real Time Application (RTA) packets is supported, configuring an indication signaling, wherein the indication signaling is used to instruct a Wireless Local Area Network (WLAN) station to transmit a packet based on packet duration limitation and/or transmission opportunity duration limitation; and transmitting the indication signaling. Embodiments of the present disclosure may shorten latency to meet communication requirements of RTA.
TENANT-DRIVEN DYNAMIC RESOURCE ALLOCATION FOR VIRTUAL NETWORK FUNCTIONS
Techniques for tenant-driven dynamic resource allocation in network functions virtualization infrastructure (NFVI). In one example, an orchestration system operated by a data center provider for a data center comprises processing circuitry coupled to a memory, and logic stored in the memory and configured for execution by the processing circuitry, wherein the logic is operative to: compute an aggregate bandwidth for a plurality of flows associated with a tenant of the data center provider and processed by a virtual network function, assigned to the tenant, executing on a server of the data center; and modify, based on the aggregate bandwidth, an allocation of compute resources of the server executing the virtual network function.
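The compute-resource adjustment could be sketched as a simple mapping from aggregate flow bandwidth to a vCPU count for the tenant's VNF. The 1 Gb/s-per-vCPU ratio and the bounds below are assumptions, not taken from the disclosure.

```python
def rescale_vnf_cpu(flows_bps, cpu_min=2, cpu_max=16, bps_per_cpu=1_000_000_000):
    """Map a tenant's aggregate flow bandwidth (bits/s per flow) to a vCPU
    allocation for its VNF, clamped to configured bounds (assumed policy)."""
    aggregate = sum(flows_bps)                # aggregate bandwidth across flows
    needed = -(-aggregate // bps_per_cpu)     # ceiling division
    return max(cpu_min, min(cpu_max, needed))
```

An orchestrator would recompute this periodically from flow statistics and resize the VNF when the result changes.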
DYNAMIC BANDWIDTH ALLOCATION IN CLOUD NETWORK SWITCHES BASED ON TRAFFIC DEMAND PREDICTION
Embodiments for dynamic bandwidth allocation in cloud network switches in a cloud computing environment are provided. Quality of service (QoS) policies may be dynamically changed in one or more cloud network switches based on dynamically estimating expected traffic demands for each of a plurality of traffic classes, wherein bandwidth is dynamically allocated among queues based on changing the QoS policies.