Patent classifications
H04L47/12
METHOD AND SYSTEM FOR DISTRIBUTIVE FLOW CONTROL AND BANDWIDTH MANAGEMENT IN A NETWORK
A method and system for distributive flow control and bandwidth management in networks is disclosed. The method includes: providing multiple Internet Protocol (IP) Gateways (IPGWs) that each have a maximum send rate and one or more sessions with associated throughput criteria, wherein each IPGW performs flow control by limiting information flows by the respective maximum send rate and throughput criteria; providing multiple Code Rate Organizers (CROs) that each have a bandwidth capacity, wherein each CRO performs bandwidth allocation of its respective bandwidth capacity to one or more IPGWs of the multiple IPGWs; interconnecting the multiple IPGWs with the multiple CROs; and performing bandwidth management across the multiple CROs and IPGWs. In the method, an IPGW of the multiple IPGWs provides flow control across a plurality of the CROs of the multiple CROs, and a CRO of the multiple CROs allocates bandwidth to a plurality of the IPGWs of the multiple IPGWs.
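The abstract describes CROs splitting their bandwidth capacity across multiple IPGWs, with each IPGW capping its flows at its own maximum send rate. A minimal sketch of one way that split could work (the proportional-share policy and all names here are illustrative assumptions, not taken from the patent):

```python
def allocate_cro_bandwidth(capacity, demands):
    """Split a CRO's bandwidth capacity across IPGW demands.

    If total demand fits, every IPGW gets its full demand; otherwise
    each gets a share proportional to its demand. (Illustrative
    policy -- the patent does not specify the allocation rule.)
    """
    total = sum(demands.values())
    if total <= capacity:
        return dict(demands)  # every demand can be met in full
    scale = capacity / total
    return {ipgw: d * scale for ipgw, d in demands.items()}


def ipgw_send_rate(max_send_rate, allocations):
    """An IPGW's effective send rate: the sum of allocations received
    from multiple CROs, bounded by the IPGW's own maximum send rate."""
    return min(max_send_rate, sum(allocations))
```

The two functions mirror the two roles in the abstract: the CRO allocates to many IPGWs, and the IPGW combines allocations from many CROs while enforcing its own rate limit.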
REDUCING NETWORK LATENCY DURING LOW POWER OPERATION
In an embodiment, a method includes identifying a core of a multicore processor to which an incoming packet that is received in a packet buffer is to be directed, and if the core is powered down, transmitting a first message to cause the core to be powered up prior to arrival of the incoming packet at a head of the packet buffer. Other embodiments are described and claimed.
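The wake-up decision in this abstract amounts to comparing the core's power-up latency against the time the packet will spend queued before reaching the head of the buffer. A small sketch of that comparison (parameter names and the linear drain model are assumptions for illustration):

```python
def should_wake_core(position_in_buffer, drain_rate_pkts_per_ms, wake_latency_ms):
    """Decide whether to send the wake message now, so a powered-down
    core is up before its packet reaches the head of the packet buffer.

    Assumes the buffer drains at a roughly constant rate; the real
    mechanism in the patent may use a different estimate.
    """
    time_to_head_ms = position_in_buffer / drain_rate_pkts_per_ms
    return wake_latency_ms >= time_to_head_ms
```

If the wake latency is at least as long as the estimated queueing delay, waiting any longer would leave the core asleep when the packet arrives, so the wake message must be sent immediately.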
PROBING AVAILABLE BANDWIDTH ALONG A NETWORK PATH
In one embodiment, a time period is identified in which probe packets are to be sent along a path in a network based on predicted user traffic along the path. The probe packets are then sent during the identified time period along the path. Conditions of the network path are monitored during the time period. The rate at which the packets are sent during the time period is dynamically adjusted based on the monitored conditions. Results of the monitored conditions are collected, to determine an available bandwidth limit along the path.
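The core loop of such probing ramps the send rate while the path stays healthy and backs off once congestion appears; the last uncongested rate is the available-bandwidth estimate. A sketch under that assumption (`send_probes` is a caller-supplied hook standing in for the actual probe/measure machinery, which the abstract does not detail):

```python
def probe_available_bandwidth(send_probes, start_rate, step, max_rate):
    """Ramp the probe rate until the path shows congestion.

    `send_probes(rate)` sends probes at `rate` during the identified
    time period and returns True if congestion was observed.  The
    last rate the path absorbed cleanly is returned as the estimate.
    """
    rate = start_rate
    last_good = 0
    while rate <= max_rate:
        if send_probes(rate):   # congestion observed: stop ramping
            break
        last_good = rate        # path absorbed this rate cleanly
        rate += step
    return last_good
```

The dynamic adjustment described in the abstract corresponds to the loop reacting to each round's monitored conditions before choosing the next rate.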
CONGESTION CONTROL WITHIN A COMMUNICATION NETWORK
According to an embodiment, a packet forwarding device is disclosed for forwarding data packets on a link within a communication network. The packet forwarding device is configured to perform the following steps: measuring a load of the link; detecting whether the load exceeds one of a plurality of thresholds, each indicative of a level of congestion on the link; and sending a signal to another device in the communication network signalling the level of congestion.
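Mapping a measured load onto a congestion level via multiple thresholds, as this abstract describes, can be sketched as counting how many thresholds the load exceeds (the count-based encoding of the level is an assumption for illustration):

```python
def congestion_level(load, thresholds):
    """Map a measured link load to a congestion level: the number of
    thresholds the load exceeds.  `thresholds` is sorted ascending,
    e.g. fractions of link capacity."""
    level = 0
    for t in thresholds:
        if load > t:
            level += 1
    return level
```

The resulting level is what the forwarding device would signal to the other device in the network.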
Determining quality information for a route
Methods and systems for determining traffic information for devices along one or more routes are described. A content server may send a message to a plurality of devices along a route. The message may comprise an indication requesting each of the devices to send, to the content server, status information regarding the respective device. Intermediary devices may receive the message, respond with the requested information, and forward the message through the route. The message may comprise a stateless messaging protocol message such as an ICMP or UDP packet.
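The hop-by-hop behavior described here can be simulated without real ICMP/UDP sockets: each intermediary replies with its status and forwards the request onward. A toy sketch of that flow (`status_of` is an illustrative lookup, not an API from the patent):

```python
def collect_route_status(route, status_of):
    """Simulate the message flow: a status request traverses `route`
    hop by hop; each device responds with its status and forwards
    the request to the next device on the route."""
    responses = []
    for device in route:
        responses.append((device, status_of(device)))  # reply to server
        # ...then the message is forwarded to the next hop
    return responses
```

In a real deployment the request would be a stateless ICMP or UDP message, as the abstract notes, and responses would return to the content server asynchronously.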
Throttling data streams from source computing devices
Local management of data stream throttling in data movement operations, such as secondary-copy operations in a storage management system, is disclosed. A local throttling manager may interoperate with co-resident data agents and/or a media agent executing on any given local computing device, whether a client computing device or a secondary storage computing device. The local throttling manager may allocate and manage the available bandwidth for various jobs and their constituent data streams—across the data agents and/or media agent. Bandwidth is allocated and re-allocated to data streams used by ongoing jobs, in response to new jobs starting and old jobs completing, without having to pause and restart ongoing jobs to accommodate bandwidth adjustments. The illustrative embodiment also provides local users with a measure of control over data streams—to suspend, pause, and/or resume them—independently from the centralized storage manager that manages the overall storage system.
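The key property in this abstract is re-allocating bandwidth across streams as jobs start and finish, without pausing running jobs. A minimal sketch of a local throttling manager with an even-split policy (the class shape and even split are assumptions; the patent describes allocation only in general terms):

```python
class LocalThrottler:
    """Minimal local throttling manager: total available bandwidth is
    split evenly across active data streams, and re-split whenever a
    stream is added or removed -- ongoing streams simply see their
    per-stream rate change, with no pause/restart."""

    def __init__(self, total_bw):
        self.total_bw = total_bw
        self.streams = set()

    def add_stream(self, stream_id):
        self.streams.add(stream_id)      # new job's stream joins the pool

    def remove_stream(self, stream_id):
        self.streams.discard(stream_id)  # completed job frees its share

    def rate_for(self, stream_id):
        if stream_id not in self.streams:
            return 0
        return self.total_bw / len(self.streams)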
SYSTEM AND METHOD OF A HIGH BUFFERED HIGH BANDWIDTH NETWORK ELEMENT
A method and apparatus of a network element that processes a packet is described. In an exemplary embodiment, the network element receives, at a packet switch unit, a data packet that includes a destination address, the packet having arrived on an ingress interface of the network element. The network element further determines whether the packet is to be stored in an external queue, and identifies that external queue based on one or more characteristics of the packet. The network element then forwards the packet to a packet storage unit, which includes storage for the external queue. Finally, the network element receives the packet back from the packet storage unit and forwards it to an egress interface corresponding to the external queue.
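The queue-selection step in this abstract, choosing an external (deep-buffer) queue from packet characteristics, might look like the following sketch (the field names, the size-based policy, and the keying by egress plus traffic class are all illustrative assumptions):

```python
def select_queue(packet, external_capable, internal_limit):
    """Decide where a packet is buffered.

    Small packets (or packets on an element without external storage)
    stay in on-chip queues; larger packets go to an external queue
    identified by characteristics of the packet.
    """
    if not external_capable or packet["length"] <= internal_limit:
        return ("internal", packet["egress"])
    # external queue keyed by packet characteristics (illustrative)
    return ("external", (packet["egress"], packet["traffic_class"]))
```

A packet routed to an external queue would then travel to the packet storage unit and back before egress, matching the forwarding path in the abstract.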
NETWORK CONTROL METHOD AND DATA PROCESSING SYSTEM
The present disclosure relates to a network control method and a data processing system for reducing traffic in a network and reducing a processing load of an application that performs data processing.
A network connection device determines, on the basis of a manifest, an optimal location for execution of an application that processes sensor data generated by a sensor device, from among the sensor device and a device on a path in a network connected to the sensor device. The present technology can be applied to, for example, a network control method of cloud computing.
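Choosing an execution location from among the sensor device and devices along the network path, as this abstract describes, reduces to minimizing some cost over the candidates. A sketch with a made-up cost model (the manifest fields, device fields, and the traffic-plus-processing cost are assumptions for illustration; the patent only says the decision is based on a manifest):

```python
def best_execution_location(candidates, manifest):
    """Pick where to run the sensor-data application: among the sensor
    device and devices on the path, choose the one minimizing a
    combined network-traffic and processing cost."""
    def cost(dev):
        # traffic cost: raw sensor data crossing hops to reach `dev`,
        # plus processing cost: app demand relative to `dev` capacity
        return (manifest["data_rate"] * dev["hops_from_sensor"]
                + manifest["cpu_demand"] / dev["cpu_capacity"])
    return min(candidates, key=cost)
```

Running the application closer to the sensor cuts network traffic, while a more capable device further along the path cuts processing load; the minimization captures the trade-off the disclosure is aiming at.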