Patent classifications
H04L47/522
Throttling data streams from source computing devices
Local management of data stream throttling in data movement operations, such as secondary-copy operations in a storage management system, is disclosed. A local throttling manager may interoperate with co-resident data agents and/or a media agent executing on any given local computing device, whether a client computing device or a secondary storage computing device. The local throttling manager may allocate and manage the available bandwidth for various jobs and their constituent data streams—across the data agents and/or media agent. Bandwidth is allocated and re-allocated to data streams used by ongoing jobs, in response to new jobs starting and old jobs completing, without having to pause and restart ongoing jobs to accommodate bandwidth adjustments. The illustrative embodiment also provides local users with a measure of control over data streams—to suspend, pause, and/or resume them—independently from the centralized storage manager that manages the overall storage system.
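The re-allocation behavior described above can be sketched minimally in Python. This is an illustrative toy, not the patented implementation: class and method names are assumed, and the policy shown (an even split of a fixed budget) is just one plausible allocation rule.

```python
# Toy sketch of a local throttling manager: a fixed bandwidth budget is
# re-divided among live data streams whenever a job's stream starts or
# finishes, without pausing or restarting the other streams.
class LocalThrottlingManager:
    def __init__(self, total_bandwidth_mbps):
        self.total = total_bandwidth_mbps
        self.streams = {}  # stream_id -> currently allocated Mbps

    def _reallocate(self):
        # Re-split the budget evenly among live streams in place.
        if self.streams:
            share = self.total / len(self.streams)
            for sid in self.streams:
                self.streams[sid] = share

    def start_stream(self, stream_id):
        self.streams[stream_id] = 0.0
        self._reallocate()

    def finish_stream(self, stream_id):
        self.streams.pop(stream_id, None)
        self._reallocate()


mgr = LocalThrottlingManager(100.0)
mgr.start_stream("backup-1")   # backup-1 gets the full 100 Mbps
mgr.start_stream("backup-2")   # both drop to 50 Mbps, no restart needed
mgr.finish_stream("backup-1")  # backup-2 is bumped back up to 100 Mbps
```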
DYNAMIC ALLOCATION OF NETWORK RESOURCES USING EXTERNAL INPUTS
Systems and methods for managing network resources are disclosed. One method can comprise receiving first information relating to network traffic parameters and receiving second information relating to one or more contextual events having an effect on the network traffic parameters. The first information and the second information can be correlated, and one or more network resources can be allocated based on the correlation of the first information and the second information.
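One simple way to read the correlation step is as a conditional boost: extra capacity is granted only when a contextual event window coincides with elevated traffic. The function below is an assumed illustration (the threshold, weights, and event representation are not from the abstract):

```python
# Illustrative sketch: correlate a traffic parameter with contextual
# event windows and allocate bandwidth accordingly.
def allocate(base_mbps, traffic_load, events, now):
    # events: list of (start, end, boost_weight) contextual-event windows
    boost = sum(w for s, e, w in events if s <= now <= e)
    # Only grant extra capacity when traffic is already near saturation.
    return base_mbps * (1.0 + boost) if traffic_load > 0.8 else base_mbps
```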
Apparatuses, methods, and computer programs for a remote unit and a central unit of an optical line terminal
Examples relate to apparatuses, methods, and computer programs for a remote unit and a central unit of an optical line terminal. In particular, a central unit apparatus for an optical line terminal comprises one or more interfaces configured to communicate with one or more remote unit apparatuses via one or more communication links. The apparatus further comprises a processor configured to receive information on one or more upstream reports from the remote unit apparatuses, wherein the upstream reports relate to one or more optical networks used by the remote unit apparatuses to communicate with a plurality of optical network users. The processor further determines information on bandwidth assignments for the plurality of optical network users based on the information on the one or more upstream reports and transmits the information on bandwidth assignments to the one or more remote unit apparatuses.
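A common shape for deriving bandwidth assignments from upstream reports is demand-proportional sharing, where each user's grant is proportional to its reported backlog. The sketch below assumes that policy for illustration; the abstract does not specify the assignment rule:

```python
# Assumed sketch: divide a total upstream grant among users in
# proportion to the queue depths they reported upstream.
def assign_bandwidth(total_grant, reports):
    # reports: dict of user id -> reported upstream queue depth (bytes)
    demand = sum(reports.values())
    if demand == 0:
        return {user: 0.0 for user in reports}
    return {user: total_grant * q / demand for user, q in reports.items()}
```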
DISTRIBUTING SHAPED SUBINTERFACES TO MEMBER PORTS OF A PORT CHANNEL
Embodiments described herein relate to techniques for distributing shaped subinterfaces among physical interfaces of a port channel. Such techniques include receiving a request to configure a shape rate for a port channel subinterface; generating a physical interface set specifying: a first physical interface and a first allocated bandwidth associated with the first physical interface; and a second physical interface and a second allocated bandwidth associated with the second physical interface; making a selection, using the physical interface set, of the first physical interface based on the first allocated bandwidth being lesser than the second allocated bandwidth; assigning the first physical interface as a first anchor interface for the first port channel subinterface; and adding the first shape rate to the first allocated bandwidth to obtain a first new allocated bandwidth for the first physical interface.
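The core selection step — pick the physical interface with the least allocated bandwidth as the anchor, then add the shape rate to its allocation — is directly expressible in a few lines. Function and variable names here are illustrative:

```python
# Sketch of anchor-interface selection for a shaped port channel
# subinterface: choose the member port with the smallest allocated
# bandwidth, then charge the new shape rate against it.
def assign_anchor(shape_rate, interface_set):
    # interface_set: dict of physical interface -> allocated bandwidth
    anchor = min(interface_set, key=interface_set.get)
    interface_set[anchor] += shape_rate  # new allocated bandwidth
    return anchor
```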
Dimensioning Granular Multi-Timescale Fairness
A boost is provided in an overloaded system by distinguishing nodes with a “bad” traffic history from nodes with a “good” traffic history. In so doing, a core network node is able to apply additional resources to the node(s) having a “good” history in the form of a boost factor. Based on a system capacity and a working point, e.g., a critical number of active nodes with a “bad” traffic history, the core network node may determine a throughput history limit belonging to the “bad” traffic history. Responsive to expected requirements for a newly active node (i.e., a node having a “good” traffic history), the core network node determines a boost factor for the newly active node, applies the boost factor to the average resources allocated to the nodes with the “bad” traffic history to determine boosted resources, and allocates the boosted resources to the newly active node.
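The working-point and boost calculations might look like the following. The exact formulas are assumptions for illustration (the abstract names the quantities but not their functional forms):

```python
# Assumed sketch of the dimensioning step: derive a per-node throughput
# history limit from system capacity and the working point, and boost a
# newly active "good" node relative to the average "bad"-node allocation.
def history_limit(system_capacity, working_point):
    # working_point: critical number of active nodes with a "bad" history
    return system_capacity / working_point

def boosted_resources(avg_bad_resources, expected_requirement):
    # Never allocate less than the "bad"-node average; boost toward the
    # newcomer's expected requirement when it exceeds that average.
    boost = max(1.0, expected_requirement / avg_bad_resources)
    return avg_bad_resources * boost
```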
TECHNIQUES FOR IMPROVING RESOURCE UTILIZATION IN A MICROSERVICES ARCHITECTURE VIA PRIORITY QUEUES
In various embodiments, a flexible queue application allocates messages stored in priority queues to clients. In operation, the flexible queue application receives, from a client, a request to allocate a message from a priority queue. At least a first message and a second message are stored in the priority queue, and the priority of the first message is higher than the priority of the second message. The flexible queue application determines that the first message is pending but does not satisfy an allocation constraint. The flexible queue application then determines that the second message is pending and satisfies the allocation constraint. The flexible queue application allocates the second message to the client. Advantageously, because the flexible queue application can adapt the priority-based ordering of priority queues based on allocation constraints, the flexible queue application can efficiently enforce resource-related constraints when allocating messages from priority queues.
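The allocation rule — walk the queue in priority order and hand out the first pending message that satisfies the constraint, even if a higher-priority message is skipped — can be sketched as follows (names illustrative):

```python
# Sketch of constraint-aware priority-queue allocation: skip pending
# messages that fail the allocation constraint and return the
# highest-priority message that satisfies it.
def allocate_message(pending, constraint):
    # pending: list of (priority, message); lower number = higher priority
    for item in sorted(pending):
        _, msg = item
        if constraint(msg):
            pending.remove(item)
            return msg
    return None  # nothing currently allocatable to this client
```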
Dynamic resource allocation aided by reinforcement learning
A communication system in which dynamic resource allocation (DRA) control is aided by reinforcement learning (RL) is disclosed. An example embodiment may control one or more buffer queues populated by downstream and/or upstream data streams. The egress rates of the buffer queues can be dynamically controlled using an RL technique, according to which a learning agent can adaptively change the state-to-action mapping function of the DRA controller while circumventing the RL exploration phase and relying on extrapolation of the already taken actions instead. This feature may result in at least two benefits: (i) cancellation of a performance penalty typically associated with RL exploration; and (ii) faster learning of the environment, as the learning agent can determine the performance metrics of many actions per state in a single occurrence of the state. In an example embodiment, the communication system may be a DSL system, a PON system, or a wireless communication system.
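The extrapolation idea — evaluating many candidate actions from a single taken action, rather than exploring each one — can be illustrated with a deliberately simple model. The linear relationship assumed below is purely illustrative; the patent does not specify the extrapolation model:

```python
# Toy sketch: from one observed (egress rate, queue drain) pair, the
# agent extrapolates the expected drain for every other candidate rate
# in the same state, so no separate exploration step is needed.
def extrapolate_values(taken_rate, observed_drain, candidate_rates):
    # Assumed model: drain scales proportionally with egress rate.
    slope = observed_drain / taken_rate
    return {rate: slope * rate for rate in candidate_rates}
```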
Technologies for adaptive network packet egress scheduling
Technologies for adaptive network packet egress scheduling include a switch configured to establish an eligibility table for a plurality of ports of the switch, wherein the eligibility table includes a plurality of rounds. The switch is further configured to retrieve an eligible mask corresponding to a round of the plurality of rounds of the eligibility table presently being scheduled and determine a ready mask that indicates a ready status of each port. The switch is further configured to determine, for each port, whether the eligible status and the ready status indicate that the port is both eligible and ready, and schedule, in response to a determination that at least one port has been determined to be both eligible and ready, each of the at least one port that has been determined to be both eligible and ready. Additional embodiments are described herein.
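The per-round decision reduces to a bitwise AND of the round's eligible mask with the current ready mask; every set bit in the result is a port to schedule. A minimal sketch (bit i represents port i; names assumed):

```python
# Sketch of one scheduling round: intersect the round's eligible mask
# with the live ready mask and emit the ports that are both.
def schedule_round(eligibility_table, round_index, ready_mask):
    eligible_mask = eligibility_table[round_index]
    scheduled = eligible_mask & ready_mask
    return [port for port in range(scheduled.bit_length())
            if (scheduled >> port) & 1]
```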
SCHEDULING METHOD APPLIED IN INDUSTRIAL HETEROGENEOUS NETWORK IN WHICH TSN AND NON-TSN ARE INTERCONNECTED
A scheduling method applied in an industrial heterogeneous network in which a TSN and a non-TSN are interconnected is provided. The TSSDN controller classifies data flows according to the delay requirements, and calculates the scheduling priorities of the data flows in the industrial heterogeneous network. The TSSDN controller adopts an improved CSPF algorithm to determine a shortest path in the heterogeneous network, and marks the scheduling priorities of the data flows which are transmitted from the subnet of the heterogeneous network and arrive at the switch for the first time. Flow table matching is performed at the SDN switch. In a case of performing flow table matching successfully, the counter is updated and the instruction included in the flow table is executed. In a case of performing flow table matching unsuccessfully, a PacketIn message is transmitted to the TSSDN controller, and the TSSDN controller performs analysis and makes a decision.
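The switch-side behavior — execute the matched flow entry and update its counter on a hit, or punt the packet to the controller in a PacketIn message on a miss — follows the standard SDN pattern and can be sketched as below (data structures and names are illustrative):

```python
# Sketch of SDN-switch flow table matching with a PacketIn fallback to
# the TSSDN controller on a table miss.
def handle_packet(flow_table, counters, packet, controller_queue):
    # flow_table: flow key -> instruction; counters: flow key -> hit count
    key = packet["flow_id"]
    if key in flow_table:
        counters[key] = counters.get(key, 0) + 1
        return flow_table[key]                       # execute instruction
    controller_queue.append(("PacketIn", packet))    # punt to controller
    return None
```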
DETERMINING RATE DIFFERENTIAL WEIGHTED FAIR OUTPUT QUEUE SCHEDULING FOR A NETWORK DEVICE
A network device may receive packets and may calculate, during a time interval, an arrival rate and a departure rate, of the packets, at one of multiple virtual output queues. The network device may calculate a current oversubscription factor based on the arrival rate and the departure rate, and may calculate a target oversubscription factor based on an average of previous oversubscription factors associated with the multiple virtual output queues. The network device may determine whether a difference exists between the target oversubscription factor and the current oversubscription factor and may calculate, when the difference exists, a scale factor based on the current oversubscription factor and the target oversubscription factor. The network device may calculate new scheduling weights based on prior scheduling weights and the scale factor, and may process packets received by the multiple virtual output queues based on the new scheduling weights.
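The arithmetic chain for a single virtual output queue — oversubscription factor from arrival and departure rates, a scale factor from the target, and a rescaled weight — can be condensed as follows. The exact form of the scale factor is an assumption for illustration:

```python
# Sketch of the weight update for one virtual output queue: compute the
# current oversubscription factor, derive a scale factor toward the
# target, and rescale the prior scheduling weight.
def new_weight(arrival_rate, departure_rate, target_factor, prior_weight):
    current = arrival_rate / departure_rate  # current oversubscription
    if current == target_factor:
        return prior_weight                  # no difference, no change
    scale = target_factor / current          # assumed scale-factor form
    return prior_weight * scale
```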