Patent classifications
H04L12/923
Congestion Control Processing Method, Packet Forwarding Apparatus, and Packet Receiving Apparatus
A congestion control processing method uses two-level scheduling between a forwarding device and a destination device. A network device of a data center network performs coarse-grained bandwidth allocation based on weights of flows destined for different destination devices, allocates each flow a bandwidth that does not cause congestion, and notifies the destination device. The destination device performs fine-grained division, determines a maximum sending rate for each flow, and notifies a packet sending device of the maximum sending rate.
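The two levels described above can be sketched as proportional sharing: the forwarding device splits link bandwidth across destinations by aggregate flow weight, and each destination further divides its grant among its flows. A minimal sketch; the function names and the proportional-sharing rule are illustrative assumptions, not the patented algorithm.

```python
# Hypothetical sketch of the two-level scheduling idea; the proportional
# rule and all names below are assumptions, not the patented method.

def coarse_allocate(link_capacity, dest_weights):
    """Level 1 (forwarding/network device): split link bandwidth across
    destination devices in proportion to their aggregate flow weights."""
    total = sum(dest_weights.values())
    return {dest: link_capacity * w / total for dest, w in dest_weights.items()}

def fine_divide(dest_bandwidth, flow_weights):
    """Level 2 (destination device): divide the granted bandwidth among
    individual flows, yielding a maximum sending rate per flow."""
    total = sum(flow_weights.values())
    return {flow: dest_bandwidth * w / total for flow, w in flow_weights.items()}

# Example: a 100 Gbit/s link, two destinations with weights 3 and 1;
# destination A then splits its grant between flows f1 (weight 2) and f2 (weight 1).
grants = coarse_allocate(100.0, {"dstA": 3, "dstB": 1})
rates = fine_divide(grants["dstA"], {"f1": 2, "f2": 1})
```

The destination would then notify each packet sending device of its flow's maximum sending rate.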
METHOD AND APPARATUS FOR OBTAINING INTERIOR GATEWAY PROTOCOL DOMAIN THROUGH DIVISION IN NETWORK
A solution for obtaining an interior gateway protocol (IGP) domain through division in a network is provided. The network includes an access site link and a non-access site link. The access site link includes a link between access site devices and a link between an access site device and an aggregation site device. The non-access site link includes a link other than the access site link in the network. In this solution, a network device determines a changed access site subgraph based on a change of a network topology of the network, and obtains an IGP domain through division based on a link in the changed access site subgraph. The access site subgraph is one or more connected subgraphs formed after the non-access site links are removed from the network topology. This solution can improve the efficiency of IGP domain division.
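The core graph step above can be illustrated as: drop the non-access site links and take the connected components of what remains as access site subgraphs. This is a hedged sketch; the helper names and the toy link-classification rule are assumptions, and the patent's actual division rules are richer.

```python
# Illustrative only: connected components after removing non-access links.
from collections import defaultdict

def access_site_subgraphs(nodes, links, is_access_link):
    """Return the connected components of the topology restricted to
    access site links (a plausible reading of 'access site subgraph')."""
    adj = defaultdict(set)
    for a, b in links:
        if is_access_link((a, b)):
            adj[a].add(b)
            adj[b].add(a)
    seen, components = set(), []
    for n in nodes:
        if n in seen:
            continue
        stack, comp = [n], set()
        while stack:                      # iterative depth-first search
            v = stack.pop()
            if v in comp:
                continue
            comp.add(v)
            stack.extend(adj[v] - comp)
        seen |= comp
        components.append(comp)
    return components

nodes = ["acc1", "acc2", "agg1", "acc3", "agg2", "core"]
links = [("acc1", "acc2"), ("acc1", "agg1"), ("acc3", "agg2"),
         ("agg1", "core"), ("agg2", "core")]   # links touching "core" are non-access
access = lambda link: "core" not in link        # toy classification rule
subgraphs = access_site_subgraphs(nodes, links, access)
```

Each resulting component could then be considered as a candidate IGP domain.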
Network resource isolation method for container network and system thereof
A network resource isolation method for container networks and a system thereof, including a computation system for network resource isolation, or a system using network resource isolation, or a network resource isolation system for container networks, and methods of implementation thereof. The system provides container overlay networks with a resource isolation scheme that also reduces the use threshold for isolation of network resources and optimizes the utilization rate of network resources.
METHOD FOR MANAGING ALLOCATION REQUESTS TO ALLOCATE A COMPUTING RESOURCE
A method for managing an allocation request to allocate a computing resource in a cloud computing system comprising at least two data centers connected to one another via a communication network, implemented by an access device that allows a terminal to access the communication network and determines a routing path to a service address. The method includes: transmitting the request to a first data center; and, if the computing resource is not available, retransmitting the request to the adjacent data center that is the next one in the routing path, the retransmission of the request being reiterated until either a data center responds that the resource is available or the request has been retransmitted to all of the data centers.
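The retransmission loop above can be sketched in a few lines, assuming each data center can report whether the resource is available; the function and parameter names are illustrative assumptions.

```python
# Minimal sketch of the routing-path retransmission described above;
# names and the availability lookup are assumptions, not the patented design.

def allocate_along_path(routing_path, availability):
    """Try data centers in routing-path order; return the first one that
    responds that the resource is available, or None once the request has
    been retransmitted to all of them."""
    for dc in routing_path:
        if availability.get(dc, False):   # data center responds "available"
            return dc
    return None                           # exhausted the routing path

path = ["dc1", "dc2", "dc3"]
chosen = allocate_along_path(path, {"dc1": False, "dc2": True, "dc3": True})
```

Here the first data center lacks the resource, so the request is retransmitted to the next data center in the path, which accepts it.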
ENHANCING DISCOVERY PATTERNS WITH SHELL COMMAND EXIT STATUS
A computing system includes a discovery application that identifies a computing device associated with a managed network. The application determines a first command that causes the computing device to invoke a function that provides as output attributes of the computing device. The command includes a parameter that suppresses any textual error messages that the function places in the output. The application also determines a second command that causes the computing device to provide a numerical exit status of the function. The application causes the computing device to execute the first and second commands, and obtains the output and the numerical exit status. Based on the numerical exit status, the application determines that the function did not fully obtain the attributes of the computing device and, in response, (i) modifies the first command, and (ii) causes the computing device to execute the first command as modified and the second command.
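The pattern above pairs a command whose textual error messages are suppressed with a separate read of the numerical exit status. A minimal sketch using Python's `subprocess`; the specific command is a stand-in, since the real discovery commands are device-specific.

```python
# Illustrative only: (i) run a command with stderr suppressed so only clean
# attribute output remains, and (ii) separately obtain its numerical exit
# status (what `$?` would report in a shell).
import subprocess

def run_discovery_command(cmd):
    proc = subprocess.run(cmd, shell=True, capture_output=True, text=True)
    output = proc.stdout          # attributes, with error text suppressed
    exit_status = proc.returncode # numerical exit status of the function
    return output, exit_status

# Stand-in for a real attribute-gathering command on the target device.
out, status = run_discovery_command("echo uptime-attrs 2>/dev/null")
```

A zero status suggests the function fully obtained the attributes; a nonzero status would trigger modifying the first command and executing both commands again, as the abstract describes.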
Enhanced selection of cloud architecture profiles
This document describes modeling and simulation techniques to select a cloud architecture profile based on correlations between application workloads and resource utilization. In some aspects, a method includes obtaining infrastructure data specifying utilization of computing resources of an existing computing system. Application workload data specifying tasks performed by one or more applications running on the existing computing system is obtained. One or more models are generated based on the infrastructure data and the application workload data. The model(s) define an impact on utilization of each computing resource in response to changes in workloads of the application(s). A workload is simulated, using the model(s), on a candidate cloud architecture profile that specifies a set of computing resources. A simulated utilization of each computing resource of the candidate cloud architecture profile is determined based on the simulation. An updated cloud architecture profile is generated based on the simulated utilization.
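The modeling step above can be sketched with a deliberately simple per-resource model: fit utilization as a linear function of workload on the existing system, then simulate a target workload against a candidate profile's capacity. The linear form and all names are assumptions; production models would be far richer.

```python
# Hedged sketch: one linear model per computing resource, fitted from
# observed (workload, utilization) pairs, then used to simulate a candidate
# cloud architecture profile. Pure Python, no external dependencies.

def fit_linear(workloads, utilizations):
    """Least-squares slope and intercept for one resource."""
    n = len(workloads)
    mw = sum(workloads) / n
    mu = sum(utilizations) / n
    slope = (sum((w - mw) * (u - mu) for w, u in zip(workloads, utilizations))
             / sum((w - mw) ** 2 for w in workloads))
    return slope, mu - slope * mw

def simulate(model, workload, capacity):
    """Predicted utilization fraction of a candidate resource capacity."""
    slope, intercept = model
    return (slope * workload + intercept) / capacity

# Observed: 100/200/300 tasks used 12/22/32 CPU cores on the existing system.
cpu_model = fit_linear([100, 200, 300], [12.0, 22.0, 32.0])
# Simulate 500 tasks on a candidate profile offering 64 cores.
util = simulate(cpu_model, 500, capacity=64)
```

A simulated utilization near 1.0 would indicate the candidate profile is undersized for that resource, driving the generation of an updated profile.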
Intent-Based Multi-Tiered Orchestration and Automation
Novel tools and techniques are provided for implementing intent-based multi-tiered orchestration and automation. In various embodiments, in response to receiving a request for network services that comprises desired characteristics and performance parameters for the requested network services without information regarding specific hardware, hardware type, location, or network, a macro orchestrator might send, to a micro orchestrator among a plurality of micro orchestrators, the received request for network services, where the macro orchestrator automates, manages, or controls each of the plurality of micro orchestrators, while each micro orchestrator automates, manages, or controls a plurality of domain managers and/or a plurality of network resources. The micro orchestrator might identify one or more network resources for providing the requested network services, based at least in part on the desired characteristics and performance parameters, and might allocate at least one network resource among the identified network resources for providing the requested network services.
HIERARCHICAL CAPACITY MANAGEMENT IN A VIRTUALIZATION ENVIRONMENT
In one example, a processing system may support capacity management in a virtualization environment based on hierarchical capacity management. The processing system may maintain a policy for a first capacity agent at a first hierarchical layer. The policy may include a set of key capacity indicators, a capacity limit, and an algorithm. The processing system may obtain, based on the set of key capacity indicators, a set of key capacity indicator information. The processing system may monitor, based on the capacity limit, for a detection of a capacity limit event. The processing system may determine, based on the algorithm, a predicted capacity exhaustion point. The processing system may send, toward a second capacity agent at a second hierarchical layer that is above the first hierarchical layer, the set of key capacity indicator information and the predicted capacity exhaustion point.
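The prediction step above can be illustrated with a simple trend extrapolation over key capacity indicator samples. The linear-growth algorithm here is one plausible choice for the policy's algorithm, not necessarily the one a deployment would specify.

```python
# Illustrative sketch: estimate a predicted capacity exhaustion point by
# extrapolating the most recent growth rate of a key capacity indicator.

def predict_exhaustion(samples, capacity_limit):
    """samples: chronologically ordered (time, usage) pairs. Returns the
    estimated time at which usage crosses the capacity limit, or None if
    usage is not growing."""
    (t0, u0), (t1, u1) = samples[-2], samples[-1]
    rate = (u1 - u0) / (t1 - t0)
    if rate <= 0:
        return None                       # no growth: no predicted exhaustion
    return t1 + (capacity_limit - u1) / rate

# Usage at hours 0, 1, 2 was 40, 50, 60 units against a limit of 100 units.
eta = predict_exhaustion([(0, 40), (1, 50), (2, 60)], capacity_limit=100)
```

The first-layer capacity agent would send this predicted exhaustion point, along with the indicator information, to the capacity agent at the layer above it.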
METHODS, SYSTEMS AND COMPUTER READABLE MEDIA FOR DIAGNOSING NETWORK FUNCTION VIRTUALIZATION PERFORMANCE
Performance issues are diagnosed in a service function chain having a plurality of resources and a plurality of network functions, each with a network function queue. Each network function queue is monitored, and queueing information for input packets of each of the plurality of network functions is dumped to a data store. Each resource that is under contention is identified, as well as which of the network functions are contenders for that resource. A diagnosing algorithm diagnoses performance problems, and an impact graph is generated for each victim packet. A summary of results is then provided as a list of rules.
METHODS AND APPARATUSES FOR RESPONDING TO REQUESTS FOR NETWORK RESOURCES IMPLEMENTED IN A CLOUD COMPUTING INFRASTRUCTURE
A method and system for responding to requests for network resources implemented in a cloud computing infrastructure are described. A proxy server responds to requests from client devices based on the state of the origin instances that serve the requested network resources, and modifies the state of an origin instance based on whether requests are received for those resources. The proxy server receives, from a client device, a first request for a network resource that is served by an origin instance. The proxy server determines a state of the origin instance, where the state indicates whether the origin instance is executing on the cloud computing infrastructure. Upon receipt of the state of the origin instance, the proxy server determines, based on the received state, a response to be transmitted to the client device.
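The proxy's state-dependent decision can be sketched as a small dispatch on the origin instance's state. The state names and response codes below are hypothetical illustrations, not taken from the patent.

```python
# Hypothetical illustration of the proxy decision: the states ("running",
# "suspended") and responses are assumptions chosen for the sketch.

def proxy_response(instance_state):
    """Choose a response to the client based on whether the origin
    instance is executing on the cloud computing infrastructure."""
    if instance_state == "running":
        return ("forward", 200)           # pass the request to the origin
    if instance_state == "suspended":
        return ("resume-and-wait", 202)   # wake the instance, then retry
    return ("error", 502)                 # unknown or failed instance

resp = proxy_response("suspended")
```

In the suspended case the proxy could also record the request as a signal to transition the origin instance back to an executing state, matching the abstract's note that the proxy modifies instance state based on received requests.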