Patent classifications
H04L12/825
System and method for ordering of data transferred over multiple channels
A multiple channel data transfer system (10) includes a source (12) that generates data packets with sequence numbers for transfer over multiple request channels (14). Data packets are transferred over the multiple request channels (14) through a network (16) to a destination (18). The destination (18) re-orders the data packets received over the multiple request channels (14) into a proper sequence in response to the sequence numbers to facilitate data processing. The destination (18) provides appropriate reply packets to the source (12) over multiple response channels (20) to control the flow of data packets from the source (12).
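The re-ordering step described in this abstract can be illustrated with a minimal Python sketch; the class and method names are hypothetical, not from the patent.

```python
import heapq

class Reorderer:
    """Buffers packets arriving out of order over multiple channels and
    releases them in sequence-number order (illustrative sketch)."""
    def __init__(self):
        self.next_seq = 0
        self.pending = []  # min-heap of (sequence_number, payload)

    def receive(self, seq, payload):
        """Buffer a packet; return any packets now deliverable in order."""
        heapq.heappush(self.pending, (seq, payload))
        released = []
        while self.pending and self.pending[0][0] == self.next_seq:
            _, data = heapq.heappop(self.pending)
            released.append(data)
            self.next_seq += 1
        return released
```

For example, if packet 1 arrives before packet 0, it is held until packet 0 fills the gap, after which both are released together.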
System and method for providing congestion notification in layer 3 networks
A system and method are provided for sending congestion notification messages through L3 networks. For example, a data packet is received at a first switch in a first fabric block of an L3 network, and the first switch performs source MAC tagging of the data packet. The data packet is then forwarded to a second switch in a second fabric block of the L3 network, and the source MAC tag is maintained by the second switch and any intermediate switches. The second switch determines, in response to receiving the data packet, whether it is congested, and generates a notification message if it is congested. The notification message is L2 forwarded to the first fabric block, and further forwarded from the first switch to a source of the data packet using ACL matching.
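The tagging and notification flow can be sketched as follows; the function signature, field names, and queue-depth threshold are illustrative assumptions, not details from the patent.

```python
def handle_packet(packet, queue_depth, threshold=100):
    """Sketch of the flow: the first switch tags the packet with its source
    MAC; a congested downstream switch emits a notification addressed back
    to that tag so it can be L2-forwarded toward the source."""
    # Source MAC tagging at the first switch; the tag is preserved downstream.
    tagged = dict(packet, src_mac_tag=packet['src_mac'])
    notification = None
    if queue_depth > threshold:  # second switch detects congestion
        notification = {'dst_mac': tagged['src_mac_tag'], 'type': 'congestion'}
    return tagged, notification
```

Carrying the source MAC as a tag is what lets the notification find its way back through the L3 fabric without per-flow state at intermediate switches.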
Management of data transmission limits for congestion control
A method for communication includes transmitting data packets from a communication device to a network. Upon receiving in the communication device a congestion notification from the network, a rate of transmission of the data packets from the communication device to the network is reduced. While transmitting the data packets, after reducing the rate of transmission, the rate of transmission is increased incrementally each time a predefined volume of data has been transmitted since the previous change in the rate of transmission.
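The volume-gated rate control described above might look like this in Python; the rate, step, and cut values are illustrative placeholders.

```python
class RateController:
    """Sketch of volume-based rate control: cut the rate on a congestion
    notification, then raise it by a fixed step each time `volume_step`
    bytes have been sent since the last rate change."""
    def __init__(self, rate=1000.0, volume_step=10000, increase=100.0, cut=0.5):
        self.rate = rate
        self.volume_step = volume_step
        self.increase = increase
        self.cut = cut
        self.sent_since_change = 0

    def on_congestion(self):
        self.rate *= self.cut       # reduce the transmission rate
        self.sent_since_change = 0  # restart the volume counter

    def on_transmit(self, nbytes):
        self.sent_since_change += nbytes
        if self.sent_since_change >= self.volume_step:
            self.rate += self.increase  # incremental increase
            self.sent_since_change = 0
```

Gating the increase on transmitted volume rather than elapsed time means the rate recovers in proportion to how much data actually got through.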
Using available bandwidths of an IP fabric to intelligently distribute data
In one example, a plurality of network devices forming an Internet protocol (IP) fabric includes first, second, third, and fourth network devices. The first network device includes a plurality of network interfaces communicatively coupled to at least the third and fourth network devices of the plurality of network devices, which are between the first network device and the second network device. The first network device also includes one or more hardware-based processors configured to determine available bandwidths for the third network device and the fourth network device toward the second network device, determine a ratio between the available bandwidths for the third and fourth network devices, and forward data (e.g., packets or bytes) toward the second network device such that a ratio between amounts of the data forwarded to the third and fourth network devices corresponds to the ratio between the available bandwidths.
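The proportional forwarding decision reduces to splitting traffic by the ratio of available bandwidths; a minimal sketch, with a hypothetical helper name:

```python
def split_by_bandwidth(total_bytes, bw_third, bw_fourth):
    """Divide traffic toward the destination across two next hops in
    proportion to their available bandwidths (illustrative sketch)."""
    total_bw = bw_third + bw_fourth
    to_third = total_bytes * bw_third // total_bw
    # Give the remainder to the other hop so all bytes are accounted for.
    return to_third, total_bytes - to_third
```

With a 2:1 bandwidth ratio, two thirds of the bytes go to the third device and one third to the fourth.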
Bandwidth policy management in a self-corrected content delivery network
In one embodiment of a network pipe optimization method, a network element may obtain at least one of a push pipe utilization report and a pull pipe utilization report from each distribution node of the content delivery network. Based on the utilization reports, the network element may determine new push pipe weights and new pull pipe weights for distribution pipes associated with each distribution node of the content delivery network. Using at least one of the new push pipe weights and new pull pipe weights, a network pipe utilization model associated with the content delivery network may be simulated. Responsive to determining that the simulated network pipe utilization model yields an improved utilization of the content delivery network, the new push pipe weights and new pull pipe weights may be distributed to each distribution node in the content delivery network.
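The simulate-then-distribute decision can be sketched as below; the report fields, the weight-proposal rule, and the `simulate` callback (lower score meaning better utilization) are all assumptions for illustration.

```python
def update_pipe_weights(reports, simulate):
    """Sketch: derive proposed push/pull weights from per-node utilization
    reports, simulate the resulting utilization, and adopt the new weights
    only if the simulation improves on the current model."""
    current = {node: (r['push_weight'], r['pull_weight'])
               for node, r in reports.items()}
    # Naive proposal: weight each pipe by its reported utilization.
    proposed = {node: (r['push_util'], r['pull_util'])
                for node, r in reports.items()}
    if simulate(proposed) < simulate(current):
        return proposed  # distribute new weights to the distribution nodes
    return current       # keep the existing weights
```

The check against the simulated model is what makes the network "self-corrected": weights only change when the simulation predicts an improvement.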
Systems and methods for implementing bearer call-back services
The present disclosure is directed to systems, methods and media for providing bearer call-back services for bearers that have been rejected or pre-empted by a network apparatus in a core network. In some embodiments, if a network apparatus enters a state in which it becomes necessary to reject or pre-empt a bearer associated with a user equipment (UE) (e.g., due to load conditions in a radio access network, the core network, or an application server), the network apparatus can send to the UE a call-back message when the network apparatus exits the state that precipitated the bearer rejection or pre-emption. By sending a call-back message, the network apparatus can save the UE from multiple unsuccessful attempts to establish a bearer, or from waiting an unnecessarily long time before establishing a bearer.
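The call-back bookkeeping amounts to remembering which UEs were rejected during an overload state and notifying them when that state clears; a minimal sketch with hypothetical names:

```python
class BearerManager:
    """Sketch: remember UEs whose bearer requests were rejected while the
    apparatus was overloaded, and send each a call-back on recovery."""
    def __init__(self):
        self.overloaded = False
        self.rejected_ues = set()

    def request_bearer(self, ue):
        if self.overloaded:
            self.rejected_ues.add(ue)  # remember the UE for a call-back
            return 'rejected'
        return 'established'

    def exit_overload(self, send):
        """Leave the overload state and send a call-back to each waiting UE."""
        self.overloaded = False
        for ue in sorted(self.rejected_ues):
            send(ue)  # call-back message prompting the UE to retry
        self.rejected_ues.clear()
```

The UE then retries once, on the call-back, instead of polling repeatedly against an overloaded network.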
Predictive time allocation scheduling for TSCH networks
In one embodiment, a device in a network receives one or more time slot usage reports regarding a use of time slots of a channel hopping schedule by nodes in the network. The device predicts a time slot demand change for a particular node based on the one or more time slot usage reports. The device identifies a time frame associated with the predicted time slot demand change. The device adjusts a time slot assignment for the particular node in the channel hopping schedule based on the predicted demand change and the identified time frame associated with the predicted time slot demand change.
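A toy version of the predict-and-adjust step, assuming demand is predicted as the mean of recent usage reports and capped by a per-node capacity (both assumptions for illustration):

```python
def predict_and_adjust(usage_reports, assignments, capacity=10):
    """Sketch: predict each node's time-slot demand from its recent usage
    reports and move its assignment in the channel hopping schedule toward
    that prediction, within [1, capacity] slots."""
    new_assignments = dict(assignments)
    for node, reports in usage_reports.items():
        predicted = sum(reports) / len(reports)  # naive demand prediction
        new_assignments[node] = min(capacity, max(1, round(predicted)))
    return new_assignments
```

A real scheduler would also honor the time frame associated with the predicted change, applying the new assignment only for that window.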
Transmission of first and second buffer status information messages in a wireless network
A wireless transmit/receive unit (WTRU) transmits a first buffer status information message over an uplink shared channel with buffered data and a second buffer status information message over the uplink shared channel with buffered data. The second buffer status information message may have fewer bits and use a different format than the first buffer status information message. The WTRU may also transmit a scheduling request without buffered data on a condition of not having a scheduling grant. The WTRU may initiate, subsequent to a predetermined number of subframes after transmission of the first buffer status information message, transmission of another first buffer status information message.
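The two-format idea can be sketched as a long report carrying one buffer level per group versus a short report compressing to a single level in fewer bits; the field names and compression rule here are illustrative, not the patent's formats.

```python
def build_bsr(buffer_levels, short_format):
    """Sketch of two buffer-status-report formats: the long report lists a
    level per logical-channel group; the short report carries only the
    highest level, trading detail for fewer bits."""
    if short_format:
        return {'format': 'short', 'level': max(buffer_levels)}
    return {'format': 'long', 'levels': list(buffer_levels)}
```

A WTRU would pick the short format when uplink space is tight and the long format when a full per-group picture is worth the extra bits.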
Method and system for resource-aware dynamic bandwidth control
Resource-aware dynamic bandwidth control uses information about current network state and receiver performance to avoid, minimize and/or recover from the effects of network spikes and data processing spikes. Linear models may be used to estimate the time required to process data packets in a data processing queue, and are thus useful for determining whether a data processing spike is occurring. When a data processing spike occurs, an alarm may be sent from a client to a server notifying the server that the client must drop packets. In response, the server can encode and transmit an independent packet suitable for replacing the queued data packets, which the client can then drop, presenting the independent packet to the processor instead.
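The linear-model spike check reduces to a straight-line estimate of queue drain time compared against a deadline; coefficients and threshold below are placeholders that would be fitted per receiver.

```python
def estimate_processing_time(queue_len, a, b):
    """Linear model t = a * queue_len + b for the time needed to process
    the packets currently in the data processing queue."""
    return a * queue_len + b

def is_processing_spike(queue_len, a, b, deadline):
    """Flag a data-processing spike when the estimated drain time exceeds
    the deadline; the client would then alarm the server and drop packets."""
    return estimate_processing_time(queue_len, a, b) > deadline
```

On a spike, the client's alarm lets the server substitute a single independently decodable packet for the backlog the client is about to drop.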
Network system and network method
Provided are a network system and a network method. The network system includes: at least one network camera; at least one client configured to receive an image or a moving image from the at least one network camera; and a network configured to relay communication between the at least one network camera and the at least one client, wherein the at least one client is further configured to transmit an auto traffic control (ATC) priority and a setting for applying the ATC function while requesting a connection to the at least one network camera.