Patent classifications
H04L49/205
TECHNOLOGIES FOR QUALITY OF SERVICE BASED THROTTLING IN FABRIC ARCHITECTURES
Technologies for quality of service based throttling in a fabric architecture include a network node of a plurality of network nodes interconnected across the fabric architecture via an interconnect fabric. The network node includes a host fabric interface (HFI) configured to facilitate the transmission of data to/from the network node, monitor quality of service levels of resources of the network node used to process and transmit the data, and detect a throttling condition based on a result of the monitored quality of service levels. The HFI is further configured to generate and transmit a throttling message to one or more of the interconnected network nodes in response to having detected a throttling condition. The HFI is additionally configured to receive a throttling message from another of the network nodes and perform a throttling action on one or more of the resources based on the received throttling message. Other embodiments are described herein.
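The monitor/detect/notify loop described in this abstract can be sketched in a few lines. This is a minimal illustration, not the patented HFI logic: the `ThrottleMessage` fields, the 0.8 utilization threshold, and the peer-notification scheme are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class ThrottleMessage:
    """Hypothetical in-fabric message asking peer nodes to slow traffic."""
    source_node: str
    resource: str
    severity: float  # 0.0 (mild) .. 1.0 (severe)

class HostFabricInterface:
    THRESHOLD = 0.8  # assumed utilization level that constitutes a throttling condition

    def __init__(self, node_id, peers):
        self.node_id = node_id
        self.peers = peers          # other interconnected network nodes
        self.utilization = {}       # resource name -> latest QoS sample

    def monitor(self, resource, utilization):
        """Record a quality-of-service sample for a local resource."""
        self.utilization[resource] = utilization

    def detect_and_notify(self):
        """Detect throttling conditions and notify the interconnected nodes."""
        sent = []
        for resource, util in self.utilization.items():
            if util > self.THRESHOLD:
                msg = ThrottleMessage(self.node_id, resource, util - self.THRESHOLD)
                for peer in self.peers:
                    peer.receive_throttle(msg)
                sent.append(msg)
        return sent

    def receive_throttle(self, msg):
        """Perform a throttling action based on a peer's message,
        e.g. shrink the send window for traffic toward msg.source_node."""
        pass
```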
Arrangements and methods for minimizing delay in high-speed taps
Methods and arrangements for minimizing delay in a high-speed tap arrangement are disclosed, including hardware and software arrangements and methods for quickly switching the transmission path for data between a primary data path and a bypass data path. The switching is accomplished rapidly using a set of powered analog switches and a relay to minimize packet loss in the event of power loss. Further, when power is restored, the software and hardware methods and arrangements disclosed herein permit the data path to be promptly restored, quickly recovering tapping ability after power returns.

Messaging between remote controller and forwarding element
Some embodiments of the invention provide a forwarding element that can be configured through in-band data-plane messages from a remote controller that is a physically separate machine from the forwarding element. The forwarding element of some embodiments has data plane circuits that include several configurable message-processing stages, several storage queues, and a data-plane configurator. A set of one or more message-processing stages of the data plane are configured (1) to process configuration messages received by the data plane from the remote controller and (2) to store the configuration messages in a set of one or more storage queues. The data-plane configurator receives the configuration messages stored in the set of storage queues and configures one or more of the configurable message-processing stages based on configuration data in the configuration messages.
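The division of labor the abstract describes, stages that recognize and queue in-band configuration messages, plus a configurator that drains the queue and reprograms the stages, can be sketched as below. The message shape and stage tables are assumptions for illustration, not the patented data-plane circuits.

```python
from collections import deque

class DataPlane:
    """Sketch of a data plane with message-processing stages, a storage
    queue, and a configurator driven by in-band controller messages."""

    def __init__(self, num_stages=4):
        self.stages = [dict() for _ in range(num_stages)]  # per-stage match/action table
        self.config_queue = deque()                        # storage queue for config msgs

    def receive(self, message):
        """A processing stage recognizes remote-controller configuration
        messages and stores them; ordinary packets pass through."""
        if message.get("type") == "config":
            self.config_queue.append(message)
            return "queued"
        return "forwarded"

    def run_configurator(self):
        """Drain queued configuration messages and reconfigure the
        targeted message-processing stages; returns messages applied."""
        applied = 0
        while self.config_queue:
            msg = self.config_queue.popleft()
            self.stages[msg["stage"]].update(msg["data"])
            applied += 1
        return applied
```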
End-to-end lossless Ethernet in Ethernet fabric
One embodiment of the present invention provides a computing system. The computing system includes a packet processor, a buffer management module, a data monitoring module, and a flow control module. The packet processor identifies a class of service indicating priority-based flow control associated with a remote computing system from a notification message. The buffer management module creates a buffer dedicated for frames belonging to the class of service from the remote computing system in response to identifying the class of service. The data monitoring module detects a potential overflow of the buffer. The flow control module operates in conjunction with the packet processor to generate a pause frame in response to detecting a potential overflow.
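A dedicated per-class buffer that emits a pause frame when a potential overflow is detected can be sketched as follows. The 75% high watermark and the pause-frame dictionary are assumptions; real priority-based flow control uses IEEE 802.1Qbb PFC frames.

```python
class PriorityBuffer:
    """Per-class-of-service buffer with pause-frame generation (sketch)."""

    def __init__(self, capacity, high_watermark=0.75):
        self.capacity = capacity
        self.high_watermark = high_watermark  # assumed overflow-detection point
        self.frames = []
        self.paused = False

    def enqueue(self, frame):
        """Buffer a frame; return a pause frame if overflow is imminent."""
        self.frames.append(frame)
        return self._maybe_pause()

    def _maybe_pause(self):
        """Detect a potential overflow and generate a pause frame once."""
        if not self.paused and len(self.frames) >= self.capacity * self.high_watermark:
            self.paused = True
            return {"type": "PAUSE", "quanta": 0xFFFF}  # hypothetical frame encoding
        return None
```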
NETWORK TRAFFIC MANAGEMENT VIA NETWORK SWITCH QoS PARAMETERS ANALYSIS
Some examples disclosed herein relate to traffic management via network switch QoS parameters analysis. In one example, a set of actual QoS parameters may be analyzed using a set of configured QoS parameters of each network switch. A set of modified QoS parameters for each network switch may be determined based on the analysis of the set of actual QoS parameters. The set of modified QoS parameters may be recommended to configure each network switch for improved traffic management.
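The analyze-then-recommend flow above can be sketched as a single function. The drift tolerance and the nudge-toward-observed policy are assumptions for illustration, not the disclosed analysis.

```python
def recommend_qos(configured, actual, tolerance=0.1):
    """Compare actual QoS parameters against configured ones per switch
    and recommend modified parameters where they drift apart (sketch)."""
    recommendations = {}
    for switch, params in configured.items():
        modified = {}
        for name, target in params.items():
            observed = actual.get(switch, {}).get(name, target)
            # Recommend a change only when the observed value drifts
            # beyond the tolerance relative to the configured target.
            if abs(observed - target) / max(abs(target), 1e-9) > tolerance:
                modified[name] = observed
        if modified:
            recommendations[switch] = modified
    return recommendations
```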
Dynamically tunable heterogeneous latencies in switch or router chips
A device with dynamically tunable heterogeneous latencies includes an input port configured to receive a packet via a network, and a processing module configured to determine multiple values corresponding to a number of qualifying parameters associated with the packet. The processing module may use the values to generate a selector value and may allocate a latency mode to the packet based on the selector value.
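The mapping from qualifying parameters to a selector value to a latency mode can be sketched as below. The particular parameters, weights, and mode names are invented for illustration; the patent does not specify them.

```python
def latency_mode(packet):
    """Derive a selector value from assumed qualifying parameters of a
    packet and allocate one of three hypothetical latency modes."""
    # Hypothetical weighting of qualifying parameters:
    selector = (2 * packet.get("priority", 0)
                + packet.get("port_class", 0)
                - packet.get("congestion", 0))
    if selector >= 4:
        return "cut_through"       # lowest-latency handling
    if selector >= 1:
        return "low_latency"
    return "store_and_forward"     # highest-latency, most-checked handling
```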
Routers with personalized quality of service
The present disclosure relates to routers and quality of service (QoS) systems and methods that base decisions on the identification of one or more users of computing devices within the environment. Profiles and/or attributes associated with the users may be created and dynamically updated to optimize user experience. For example, the routers may dynamically adapt QoS settings to regulate bandwidth, latency and other parameters to prioritize users and/or optimize a specific user's experience based on the user's priority, personal profile, and/or other attributes.
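One way such user-aware QoS regulation could look is a priority-weighted bandwidth split across active users. The profile schema and proportional-share policy are assumptions, not the disclosed method.

```python
def apply_user_qos(profiles, active_users, total_bandwidth):
    """Allocate bandwidth across active users in proportion to a
    'priority' attribute in each user's profile (illustrative sketch)."""
    weights = {u: profiles.get(u, {}).get("priority", 1) for u in active_users}
    total = sum(weights.values()) or 1
    return {u: total_bandwidth * w / total for u, w in weights.items()}
```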
Filtering and route lookup in a switching device
Methods and devices for processing packets are provided. The processing device may include an input interface for receiving data units containing header information of respective packets; a first module configurable to perform packet filtering based on the received data units; a second module configurable to perform traffic analysis based on the received data units; a third module configurable to perform load balancing based on the received data units; and a fourth module configurable to perform route lookups based on the received data units.
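The module chain described above can be sketched as a pipeline of callables over header data. The context dictionary, the example filter rule, and the one-entry routing table are assumptions for illustration.

```python
class Pipeline:
    """Chain of configurable modules processing packet header data (sketch)."""

    def __init__(self, modules):
        self.modules = modules  # e.g. [packet_filter, route_lookup]

    def process(self, header):
        ctx = {"header": header}
        for module in self.modules:
            ctx = module(ctx)
            if ctx.get("drop"):
                break  # a filtering module dropped the packet
        return ctx

def packet_filter(ctx):
    """First module: drop packets matching a hypothetical filter rule."""
    ctx["drop"] = ctx["header"].get("proto") == "blocked"
    return ctx

def route_lookup(ctx):
    """Last module: look up the next hop from a hypothetical route table."""
    table = {"10.0.0.0/8": "eth1"}
    ctx["next_hop"] = table.get(ctx["header"].get("prefix"), "default")
    return ctx
```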
NETWORK INTERFACE ARCHITECTURE HAVING A DIRECTLY MODIFIABLE PRE-STAGE PACKET TRANSMISSION BUFFER
An improved network architecture for minimizing the latency of preparing and sending data to a network over a physical medium. A system for communicating messages over a network may create and store ready-to-send data packets in a data buffer next to, or as close as possible to, a MAC component, physically and/or logically. The MAC component may then receive the data packet directly from the data buffer and encapsulate the data packet into a frame suitable for transmission to the network. The data packet is modifiable while being stored in the data buffer prior to transmission to the network.
Method for prioritization of internet traffic by finding appropriate internet exit points
The systems and methods discussed herein provide for faster communications, particularly for high priority traffic, across a distributed network with multiple exit points to a Wide Area Network. Rather than simply routing traffic based on internal or external destination, an intelligent router may measure latency to an endpoint destination via multiple paths, both external and internal, and direct traffic accordingly. Steering high priority traffic via the internal connection to an exit point near the destination server, and then to the server via the external network, may be faster than simply forwarding the connection via the external network from the exit point closest to the source device. Additionally, to reduce bandwidth requirements of the nearby exit point and provide capability for higher priority traffic, low priority traffic may be redirected back via the internal connection and transmitted via a distant exit point.
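The priority-aware exit-point selection described above reduces to ranking exits by measured path latency. A minimal sketch, assuming per-exit latency measurements are already available and that low-priority traffic is deliberately steered to the slowest (most distant) exit to spare nearby capacity:

```python
def choose_exit(latencies_by_exit, priority):
    """Pick a WAN exit point from measured path latencies (sketch).

    latencies_by_exit: exit-point name -> measured latency (ms) to the
    destination via that exit. High-priority traffic takes the fastest
    path; low-priority traffic is redirected to the most distant exit.
    """
    ranked = sorted(latencies_by_exit.items(), key=lambda kv: kv[1])
    return ranked[0][0] if priority == "high" else ranked[-1][0]
```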