Patent classifications
H04L47/80
DYNAMIC ALLOCATION OF COMPUTING RESOURCES
The exemplary embodiments disclose a method, a computer program product, and a computer system for allocating computing resources. The exemplary embodiments may include collecting data of one or more users, wherein the collected data comprises calendar data of the one or more users, extracting one or more features from the collected data, and allocating one or more computing resources to one or more of the users based on the extracted one or more features and one or more models.
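The allocation described above can be sketched roughly as follows: extract simple features from calendar data, then feed them to a model that sizes an allocation. The feature names, the linear scoring "model", and the pool size are all illustrative assumptions, not details from the patent.

```python
from dataclasses import dataclass

@dataclass
class CalendarEvent:
    title: str
    duration_hours: float
    attendees: int

def extract_features(events):
    """Extract simple features from a user's calendar data."""
    busy_hours = sum(e.duration_hours for e in events)
    large_meetings = sum(1 for e in events if e.attendees >= 5)
    return {"busy_hours": busy_hours, "large_meetings": large_meetings}

def allocate_resources(features, pool_cpus=16):
    """A toy linear model: busier users get more CPUs, capped by the pool."""
    score = 1.0 + 0.5 * features["busy_hours"] + 1.0 * features["large_meetings"]
    return min(pool_cpus, max(1, round(score)))
```

In a real system the scoring function would be a trained model and the features far richer; the sketch only shows the collect-extract-allocate pipeline the abstract names.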
ENHANCED REDEPLOYING OF COMPUTING RESOURCES
Examples described herein relate to a method, a resource management system, and a non-transitory machine-readable medium for redeploying a computing resource. Data related to a performance parameter corresponding to a plurality of computing resources deployed on a plurality of host-computing nodes may be received. The performance parameter is associated with one or both of: communication between computing resources of the plurality of computing resources, or communication of the plurality of computing resources with a network device. Further, for a computing resource of the plurality of computing resources, a candidate host-computing node is determined from the plurality of host-computing nodes based on the data related to the performance parameter, and the computing resource may be redeployed on the candidate host-computing node.
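The candidate-selection step might look like the following minimal sketch, assuming the performance parameter is a communication latency estimated for each resource-host pairing (the data shapes and names are assumptions for illustration):

```python
def choose_candidate_host(resource, hosts, latency):
    """Pick the host-computing node that minimizes the estimated
    communication latency (ms) for `resource` if deployed there.

    latency: dict mapping (resource, host) -> estimated latency.
    """
    return min(hosts, key=lambda h: latency[(resource, h)])
```

Redeployment would then move the resource to the returned node; ranking by a measured performance parameter rather than, say, free capacity is the distinguishing idea in the abstract.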
DYNAMIC BANDWIDTH ALLOCATION IN CLOUD NETWORK SWITCHES BASED ON TRAFFIC DEMAND PREDICTION
Embodiments for dynamic bandwidth allocation in cloud network switches in a cloud computing environment are provided. Quality of service (QoS) policies may be dynamically changed in one or more cloud network switches based on dynamically estimating expected traffic demands for each of a plurality of traffic classes, wherein bandwidth is dynamically allocated among queues based on changing the QoS policies.
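A minimal sketch of the allocation step: split a switch link's capacity among traffic classes in proportion to their predicted demand. The proportional rule stands in for whatever the actual QoS-policy update computes.

```python
def allocate_bandwidth(predicted_demand, link_capacity):
    """Split link capacity (Mbps) among traffic-class queues in
    proportion to predicted demand (dict: class -> expected Mbps)."""
    total = sum(predicted_demand.values())
    if total == 0:
        # No predicted traffic: fall back to an even split.
        share = link_capacity / len(predicted_demand)
        return {c: share for c in predicted_demand}
    return {c: link_capacity * d / total for c, d in predicted_demand.items()}
```

Re-running this as the demand estimates change is what makes the allocation dynamic; the queue weights in the switch would be reprogrammed from the returned shares.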
Data network traffic management
A network management system may allocate different amounts of bandwidth to different types of data traffic. The traffic types may be distinguished by their source device address, and whether the source device is part of, or external to, a first network. Packets may also be marked by their sender with information to identify a traffic type, and the marking may be used to determine the packet's treatment. The allocations given to the various types of traffic may be dynamically modified with changing traffic demands and conditions.
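The two distinguishing signals the abstract names, source address (internal vs. external to a first network) and a sender-applied marking, can be sketched as a classifier; the network prefix, the DSCP-style marking, and the class names here are illustrative assumptions.

```python
import ipaddress

def classify_packet(src_ip, dscp, internal_net="10.0.0.0/8"):
    """Classify traffic first by whether the source device is part of
    the first network, then refine with the sender's marking (a
    DSCP-like value; 46 is the conventional expedited-forwarding mark)."""
    internal = ipaddress.ip_address(src_ip) in ipaddress.ip_network(internal_net)
    if not internal:
        return "external"
    return "internal-priority" if dscp >= 46 else "internal-default"
```

A bandwidth allocator would then assign (and periodically revise) a share per returned class as traffic conditions change.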
System and method for providing bandwidth congestion control in a private fabric in a high performance computing environment
Systems and methods for providing bandwidth congestion control in a private fabric in a high performance computing environment. An exemplary method can provide, at one or more microprocessors, a first subnet, the first subnet comprising a plurality of switches, a plurality of host channel adapters, and a plurality of end nodes, wherein each of the host channel adapters comprises at least one host channel adapter port, and wherein the plurality of host channel adapters are interconnected via the plurality of switches. The method can provide, at a host channel adapter, an end node ingress bandwidth quota associated with an end node attached to the host channel adapter. The method can receive, at the end node of the host channel adapter, ingress bandwidth exceeding the ingress bandwidth quota of the end node.
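One common way to enforce a per-end-node ingress bandwidth quota is a token bucket; the sketch below uses that technique as a stand-in (the class and parameter names are illustrative, not taken from the patent).

```python
class IngressQuota:
    """Per-end-node ingress bandwidth quota, enforced as a token bucket."""

    def __init__(self, quota_bytes_per_sec):
        self.rate = quota_bytes_per_sec
        self.tokens = quota_bytes_per_sec  # start with one second's budget
        self.last = 0.0

    def admit(self, nbytes, now):
        """Refill tokens for elapsed time, then admit the bytes only if
        they fit within the end node's remaining quota."""
        self.tokens = min(self.rate, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False  # ingress exceeds the end node's quota
```

The abstract's interesting case is precisely the `False` branch: ingress arriving in excess of the quota, which the fabric's congestion control must then handle.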
System And Method For Subscriber Awareness In A 5G Network
A method and system for subscriber awareness for traffic flows in a computer network. The system including: a Subscriber Awareness Control Plane (SACP) module configured to register as a network node and subscribe to at least one network function on the network; at least one processing module configured to request and receive information of traffic flow parameters and subscriber parameters for the traffic flows from the at least one network function; and a subscriber awareness module configured to map subscribers to traffic flows, based on the received traffic flow parameters and subscriber parameters. The method including: registering an SACP module as a network node; subscribing to at least one network function; receiving information of traffic flow parameters and subscriber parameters for the traffic flows; and mapping subscribers to traffic flows, based on the traffic flow parameters and subscriber parameters.
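The final mapping step amounts to joining flow records against subscriber records on a shared key. The sketch below assumes a session identifier is that key; the actual join key would come from the parameters the network function exposes.

```python
def map_subscribers_to_flows(flow_params, subscriber_params):
    """Join traffic-flow records to subscriber records on a (hypothetical)
    shared session_id, yielding flow_id -> subscriber_id (None if unknown)."""
    by_session = {s["session_id"]: s["subscriber_id"] for s in subscriber_params}
    return {f["flow_id"]: by_session.get(f["session_id"]) for f in flow_params}
```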
Reducing latency in downloading electronic resources using multiple threads
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for reducing latency in presenting content. In one aspect, a system includes a native application that presents an interactive item and a latency reduction engine. The latency reduction engine detects interaction with the interactive item, which links to a first electronic resource that is different from the native application and provided by a first network domain. In response to the detecting, the engine reduces latency in presenting the first electronic resource by executing a first processing thread and a second processing thread in parallel. The first processing thread requests a second electronic resource from a second network domain, loads the second electronic resource, and, in response to the loading, stores a browser cookie for the second network domain. The second processing thread requests the first electronic resource and presents the first electronic resource.
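The two-thread structure can be sketched with stand-in fetchers: one thread warms the second domain's cookie while the other fetches and presents the first resource. The function names and the in-memory cookie jar are illustrative assumptions; real network requests are stubbed out.

```python
import threading

cookies = {}  # stand-in for a browser cookie jar

def load_and_store_cookie(domain):
    # Stand-in: requesting and loading the second electronic resource
    # would cause a cookie for the second domain to be stored.
    cookies[domain] = "session=abc"

def present_resource(domain, out):
    # Stand-in: request the first electronic resource and present it.
    out.append(f"content from {domain}")

def open_with_reduced_latency(first_domain, second_domain):
    out = []
    t1 = threading.Thread(target=load_and_store_cookie, args=(second_domain,))
    t2 = threading.Thread(target=present_resource, args=(first_domain, out))
    t1.start(); t2.start()   # both threads run in parallel
    t1.join(); t2.join()
    return out[0]
```

The latency win comes from overlapping the cookie-setting round trip with the main resource load instead of serializing them.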
CONGESTION CONTROL METHOD AND APPARATUS
A congestion control method and an apparatus are disclosed. The method includes: after a sending device generates an unscheduled data packet of a first flow, if it is determined, based on an unscheduled packet send window shared by all flows, that the unscheduled data packet meets a sending condition, determining whether to request that a quota be added for at least one of the unscheduled packet send window and a scheduled packet send window corresponding to the first flow. If it is determined that the quota is to be requested, the method further includes setting indication information in the unscheduled data packet and sending, to a receiving device, the unscheduled data packet in which the indication information is set.
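The sender-side decision can be sketched as two checks: does the packet fit the shared unscheduled-packet send window, and should a quota request be piggybacked as indication information? The backlog-based trigger and its threshold are illustrative assumptions.

```python
def try_send_unscheduled(pkt_len, shared_window, flow_backlog, threshold=4096):
    """Decide whether an unscheduled packet may be sent, and whether to
    set the quota-request indication in it.

    shared_window: bytes remaining in the window shared by all flows.
    flow_backlog:  bytes still queued for this flow (assumed trigger
                   for requesting additional quota).
    """
    if pkt_len > shared_window:
        return None  # sending condition not met
    return {"len": pkt_len, "quota_request": flow_backlog > threshold}
```

The receiver, on seeing `quota_request` set, would decide whether to grant additional window quota, closing the control loop.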
AUTOMATIC NETWORK CONFIGURATION
Automatic network configuration includes obtaining, by a virtual private network service provider infrastructure system, ranking data for data transport pathways between the virtual private network service provider infrastructure system and an external system, wherein a respective data transport pathway from the data transport pathways includes a respective exit node in the virtual private network service provider infrastructure system in communication with a respective entry node in the external system, and wherein obtaining the ranking data includes obtaining at least a portion of the ranking data by testing a service provided by the external system via the entry node. Automatic network configuration further includes allocating, by the virtual private network service provider infrastructure system, a data transport pathway from the data transport pathways to a communication session, wherein the allocated data transport pathway is the highest-ranking data transport pathway in the ranking data.
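The rank-then-allocate step can be sketched as follows, modeling each pathway as an (exit node, entry node) pair and abstracting the service test into a probe callback that returns a score (higher is better). All names are illustrative.

```python
def allocate_pathway(pathways, probe):
    """Rank each (exit_node, entry_node) pathway by probing the external
    system's service through its entry node, then allocate the
    highest-ranking pathway to the session."""
    ranking = {p: probe(p) for p in pathways}
    return max(pathways, key=ranking.get)
```

In practice the probe would issue real test traffic through each entry node and the ranking would be refreshed periodically rather than computed per session.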
Managing virtual output queues
A first node of a packet switched network transmits at least one flow of protocol data units of a network to at least one output context of one of a plurality of second nodes of the network. The first node includes X virtual output queues (VOQs). The first node receives, from at least one of the second nodes, at least one fair rate record. Each fair rate record corresponds to a particular second node output context and describes a recommended rate of flow to the particular output context. The first node allocates up to X of the VOQs among flows corresponding to i) currently allocated VOQs, and ii) the flows corresponding to the received fair rate records. The first node operates each allocated VOQ according to the corresponding recommended rate of flow until a deallocation condition obtains for that VOQ.
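The allocation step might be sketched like this: keep the currently allocated flows, then admit flows from the received fair rate records into the remaining VOQs. Admitting highest-recommended-rate first is an assumed tie-breaking policy, not stated in the abstract.

```python
def allocate_voqs(current, fair_rate_records, x):
    """Allocate up to x VOQs among i) currently allocated flows and
    ii) flows named in received fair rate records.

    current:           dict flow -> rate for already-allocated VOQs.
    fair_rate_records: dict flow -> recommended rate from second nodes.
    Returns dict flow -> rate at which its VOQ should operate.
    """
    alloc = dict(current)
    candidates = sorted(
        (f for f in fair_rate_records if f not in alloc),
        key=lambda f: fair_rate_records[f], reverse=True)
    for f in candidates:
        if len(alloc) >= x:
            break  # all x VOQs are in use
        alloc[f] = fair_rate_records[f]
    return alloc
```

Each allocated VOQ would then be drained at its recommended rate until its deallocation condition (e.g. the flow going idle) is met, freeing the VOQ for another flow.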