Patent classifications
H04L47/26
SYSTEM AND METHOD FOR FACILITATING DATA-DRIVEN INTELLIGENT NETWORK WITH INGRESS PORT INJECTION LIMITS
Data-driven intelligent networking systems and methods are provided. The system can accommodate dynamic traffic while applying injection limits to different traffic classes at an ingress edge port. The system can maintain state information of individual packet flows, which can be set up or released dynamically based on injected data. Each flow can be provided with a flow-specific input queue upon arriving at a switch. Packets of a respective flow can be acknowledged after reaching the egress point of the network, and the acknowledgement packets can be sent back to the ingress point of the flow along the same data path. Furthermore, an edge switch can dynamically allocate the ingress port bandwidth among the traffic classes that are active at a given moment.
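The per-traffic-class injection limit at an ingress edge port described above can be pictured as a token bucket per class. This is a minimal sketch under assumed names and parameters (class labels, rates, and bucket depth are illustrative, not taken from the patent):

```python
class ClassInjectionLimiter:
    """Token-bucket limiter applied per traffic class at an ingress edge port.

    Hypothetical sketch: rates_bps gives bytes/sec allowed per class and
    burst_bytes the maximum accumulated credit; both are assumptions.
    """

    def __init__(self, rates_bps, burst_bytes):
        self.rates = rates_bps                        # bytes/sec per class
        self.burst = burst_bytes                      # max credit per class
        self.tokens = {c: burst_bytes for c in rates_bps}
        self.last = {c: 0.0 for c in rates_bps}

    def admit(self, traffic_class, pkt_len, now):
        """Return True if a packet of pkt_len bytes may be injected now."""
        elapsed = now - self.last[traffic_class]
        self.last[traffic_class] = now
        # Refill credit for the elapsed time, capped at the burst depth.
        self.tokens[traffic_class] = min(
            self.burst,
            self.tokens[traffic_class] + elapsed * self.rates[traffic_class])
        if self.tokens[traffic_class] >= pkt_len:
            self.tokens[traffic_class] -= pkt_len
            return True
        return False
```

A dynamic bandwidth allocator, as the abstract suggests, could adjust `rates_bps` at runtime based on which classes are currently active.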
INTERFACE FUNCTIONALITY FOR RAN-WLAN RADIO AGGREGATION
There are provided measures for enabling/realizing interface functionality for RAN-WLAN radio aggregation, i.e. radio-level integration/aggregation between a cellular radio access network and a wireless local area network. Such measures exemplarily comprise: forwarding user-plane data from a cellular radio access network over a wireless local area network to a terminal, both the cellular radio access network and the wireless local area network providing radio-level connectivity for the terminal, wherein in the user-plane data forwarding, user-plane data packets according to a cellular data convergence protocol are transported, at least partly, using a reliable transport protocol over the wireless local area network to the terminal; providing feedback on the performance of user-plane data packet transport using the reliable transport protocol, at least partly, from the wireless local area network to the cellular radio access network; and executing flow control on the basis of the performance feedback.
System and method for ordering of data transferred over multiple channels
A multiple channel data transfer system (10) includes a source (12) that generates data packets with sequence numbers for transfer over multiple request channels (14). Data packets are transferred over the multiple request channels (14) through a network (16) to a destination (18). The destination (18) re-orders the data packets received over the multiple request channels (14) into a proper sequence in response to the sequence numbers to facilitate data processing. The destination (18) provides appropriate reply packets to the source (12) over multiple response channels (20) to control the flow of data packets from the source (12).
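The destination's re-ordering of packets received over multiple channels can be sketched as a simple reorder buffer keyed by sequence number. This is an illustrative sketch, not the patented design; packets are modeled as `(seq, payload)` tuples:

```python
def reorder(packets, first_seq=0):
    """Re-sequence packets that arrived out of order over multiple channels.

    Payloads are released only in strict sequence order; out-of-order
    arrivals are held until the gap before them is filled.
    """
    pending = {}              # seq -> payload, held until releasable
    expected = first_seq
    out = []
    for seq, payload in packets:
        pending[seq] = payload
        while expected in pending:        # release every now-in-order packet
            out.append(pending.pop(expected))
            expected += 1
    return out
```

In the patented system the release point would also be where reply packets are generated on the response channels to pace the source.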
Bandwidth policy management in a self-corrected content delivery network
In one embodiment of a network pipe optimization method, a network element may obtain at least one of a push pipe utilization report and a pull pipe utilization report from each distribution node of the content delivery network. Based on the utilization reports, the network element may determine new push pipe weights and new pull pipe weights for distribution pipes associated with each distribution node of the content delivery network. Using at least one of the new push pipe weights and new pull pipe weights, a network pipe utilization model associated with the content delivery network may be simulated. Responsive to determining that the simulated network pipe utilization model yields an improved utilization of the content delivery network, the new push pipe weights and new pull pipe weights may be distributed to each distribution node in the content delivery network.
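The feedback loop of recomputing pipe weights from utilization reports can be sketched as follows. The policy shown (shifting weight toward under-utilized pipes) is a hypothetical assumption for illustration; the abstract does not fix a weighting formula:

```python
def new_pipe_weights(utilization, total_weight=100):
    """Derive new distribution-pipe weights from per-node utilization reports.

    utilization maps node -> measured pipe utilization in [0.0, 1.0].
    Weight is redistributed in proportion to each pipe's headroom.
    """
    headroom = {n: max(0.0, 1.0 - u) for n, u in utilization.items()}
    total = sum(headroom.values()) or 1.0   # avoid divide-by-zero if saturated
    return {n: round(total_weight * h / total) for n, h in headroom.items()}
```

Per the abstract, such candidate weights would first be fed into a simulated network pipe utilization model, and only distributed to the nodes if the simulation shows improved utilization.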
System and method of flow shaping to reduce impact of incast communications
A system and method includes a network device comprising a control unit, a first port coupled to the control unit and configured to couple the network device to a first device using a first network link. The control unit is configured to receive a data packet from the first device on the first port, inspect the data packet for an indicator of an incast communication pattern, and implement a data flow shaper on a network when the indicator is present in the data packet.
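The inspect-then-shape step can be sketched as below. The `"incast"` flag, the flow key, and the pacing rate are illustrative assumptions; the patent does not define these fields:

```python
def handle_packet(pkt, shapers):
    """Inspect a packet for an incast indicator and install a flow shaper.

    pkt is a dict standing in for a parsed packet; shapers maps a flow key
    to its shaping parameters.
    """
    if pkt.get("incast"):
        flow = (pkt["src"], pkt["dst"])
        # Pace the many-to-one senders down so the shared egress queue
        # at the receiving side does not overflow.
        shapers[flow] = {"rate_mbps": 10}
    return shapers
```

The control unit would apply the installed shaper to subsequent packets of the flow on the first port.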
Transmission of first and second buffer status information messages in a wireless network
A wireless transmit/receive unit (WTRU) transmits a first buffer status information message over an uplink shared channel with buffered data and a second buffer status information message over the uplink shared channel with buffered data. The second buffer status information message may have fewer bits and use a different format than the first buffer status information message. The WTRU may also transmit a scheduling request without buffered data on a condition of not having a scheduling grant. The WTRU may initiate, subsequent to a predetermined number of subframes after transmission of the first buffer status information message, transmission of another first buffer status information message.
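The idea of a second, shorter buffer-status format can be sketched with a table-lookup encoding. The 8-entry table and the 2-bit-group/3-bit-index layout below are illustrative assumptions; the standardized MAC buffer-size tables are larger:

```python
def buffer_size_index(size_bytes, table):
    """Map a buffer size in bytes to the smallest table index covering it."""
    for i, limit in enumerate(table):
        if size_bytes <= limit:
            return i
    return len(table) - 1

# Hypothetical 8-entry table (3-bit index) for the shorter message format.
SHORT_TABLE = [0, 10, 50, 100, 500, 1000, 5000, 10**9]

def short_bsr(group_id, size_bytes):
    """Shorter-format message: one group, fewer bits (2-bit id + 3-bit index)."""
    return (group_id << 3) | buffer_size_index(size_bytes, SHORT_TABLE)
```

The longer first-format message would instead carry an index for every logical channel group, which is why it needs more bits.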
System and method for providing bandwidth congestion control in a private fabric in a high performance computing environment
Systems and methods for providing bandwidth congestion control in a private fabric in a high performance computing environment. An exemplary method can provide, at one or more microprocessors, a first subnet, the first subnet comprising a plurality of switches, a plurality of host channel adapters, wherein each of the host channel adapters comprises at least one host channel adapter port, and wherein the plurality of host channel adapters are interconnected via the plurality of switches, and a plurality of end nodes. The method can provide, at a host channel adapter, an end node ingress bandwidth quota associated with an end node attached to the host channel adapter. The method can receive, at the end node of the host channel adapter, ingress bandwidth, the ingress bandwidth exceeding the ingress bandwidth quota of the end node.
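The per-end-node quota check at the host channel adapter can be sketched as a windowed rate measurement compared against the configured quota. The windowing scheme and the returned action name are assumptions made for illustration:

```python
class IngressQuotaMonitor:
    """Track per-end-node ingress bandwidth against its configured quota.

    quotas_bps maps end node -> ingress bandwidth quota in bits/sec.
    """

    def __init__(self, quotas_bps):
        self.quotas = quotas_bps
        self.window_bytes = {n: 0 for n in quotas_bps}

    def on_ingress(self, node, nbytes):
        """Account bytes received at the end node during the current window."""
        self.window_bytes[node] += nbytes

    def check(self, node, window_sec):
        """Return "throttle" when the measured rate exceeds the quota."""
        rate = self.window_bytes[node] * 8 / window_sec   # bits/sec
        self.window_bytes[node] = 0                       # start a new window
        return "throttle" if rate > self.quotas[node] else "ok"
```

A real fabric would react to the "throttle" outcome with a congestion-control action toward the senders rather than a string return value.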
TECHNOLOGIES FOR QUALITY OF SERVICE BASED THROTTLING IN FABRIC ARCHITECTURES
Technologies for quality of service based throttling in a fabric architecture include a network node of a plurality of network nodes interconnected across the fabric architecture via an interconnect fabric. The network node includes a host fabric interface (HFI) configured to facilitate the transmission of data to/from the network node, monitor quality of service levels of resources of the network node used to process and transmit the data, and detect a throttling condition based on a result of the monitored quality of service levels. The HFI is further configured to generate and transmit a throttling message to one or more of the interconnected network nodes in response to having detected a throttling condition. The HFI is additionally configured to receive a throttling message from another of the network nodes and perform a throttling action on one or more of the resources based on the received throttling message. Other embodiments are described herein.
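The HFI's detect-and-notify step can be sketched as a threshold check over monitored QoS levels that emits a throttling message to peers. The message layout and threshold are guesses for illustration; the abstract does not fix a wire format:

```python
def on_qos_sample(hfi_state, resource, level, threshold=0.9):
    """Record a monitored QoS level and detect a throttling condition.

    hfi_state maps resource name -> latest monitored level in [0.0, 1.0].
    Returns a throttling message to broadcast to interconnected nodes,
    or None when no throttling condition is detected.
    """
    hfi_state[resource] = level
    if level > threshold:
        return {"type": "THROTTLE", "resource": resource, "level": level}
    return None
```

A receiving HFI would map the `resource` field in such a message to a local throttling action, as the abstract describes.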
Congestion Information Collection Method and System, Related Device, and Computer Storage Medium
This application provides a congestion information collection method and a related device. The method includes: a first connection node receives a first packet, obtains congestion information of a segment link including the first connection node, and records the congestion information in a parameter field corresponding to the first connection node in the first packet. By encapsulating into a packet a plurality of parameter fields that can be used to record congestion information, each node on the forwarding path can record the congestion information of its segment link of the packet's transmission path. In this way, a source node can plan a transmission path for a new packet based on the congestion information of each segment link, or a node that generates packets can reduce its sending rate when its packets would pass through a congested node.
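The per-hop recording and the source-side path planning can be sketched as below. The field names (`hops`, `node`, `congestion`) and the min-of-max path metric are illustrative assumptions, not the claimed encoding:

```python
def record_congestion(packet, node_id, congestion):
    """Record this node's segment-link congestion in its parameter field.

    packet["hops"] models the plurality of parameter fields encapsulated
    into the packet, one per node on the forwarding path.
    """
    for field in packet["hops"]:
        if field["node"] == node_id and field["congestion"] is None:
            field["congestion"] = congestion
            break
    return packet

def least_congested_path(paths):
    """Source-side planning: pick the path whose worst segment is lightest.

    Each path is a list of per-segment congestion values collected from
    returned packets.
    """
    return min(paths, key=lambda p: max(p))
```

Each node on the path fills in only its own field, so the packet accumulates a hop-by-hop congestion picture that the source can mine for planning or rate reduction.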