Patent classifications
H04L49/3045
Method of data delivery across a network
The present invention relates to a method of managing congestion in a multi-path network, the network having a plurality of network elements arranged in a plurality of switch stages and a plurality of network links interconnecting the network elements, the method comprising the steps of detecting congestion on a network link, the congested network link interconnecting the output port of a first network element with a first input port of a second network element in a subsequent switch stage; identifying an uncongested network link connected to a second input port of said second network element; and directing future data packets on a route across the multi-path network which includes the identified uncongested network link. Also provided is a multi-path network and an Ethernet bridge or router incorporating such a multi-path network.
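The rerouting step described above can be sketched as follows. This is a minimal illustration, not the patented method: the `reroute` function, the utilization map, and the 0.8 congestion threshold are all assumed names and values chosen for the example.

```python
def reroute(links, congested):
    """Pick an uncongested alternative link into the same downstream
    switch element, so future packets avoid the congested link.

    links: dict mapping input-port id -> link utilization (0.0-1.0)
    congested: input-port id of the congested link
    """
    THRESHOLD = 0.8  # assumed congestion threshold (illustrative)
    candidates = {p: u for p, u in links.items()
                  if p != congested and u < THRESHOLD}
    if not candidates:
        return None  # no uncongested link available; keep current route
    # Direct future packets over the least-utilized alternative link.
    return min(candidates, key=candidates.get)
```

A caller would run this on congestion detection and reprogram its routing table with the returned port, if any.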
Method and system for managing port bandwidth in network devices
A method for managing port bandwidth in network devices. The method includes determining a first and a second ingress bandwidth of a first and a second network chip, respectively, determining an egress bandwidth of an egress port of a third network chip, determining a first and a second weight for the first and the second network chip, respectively, where the first and the second weight are determined based on a bandwidth including the first and second ingress bandwidth, processing a first data packet, received by a first ingress port administrated by the first network chip, based on the first weight and the egress bandwidth, and processing a second data packet, received by a second ingress port administrated by the second network chip, based on the second weight, and the egress bandwidth, where the destination of the first and the second data packet is the egress port.
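The weight computation described above (weights derived from the chips' ingress bandwidths, then applied against the shared egress bandwidth) could look like this sketch; function names and the proportional-weighting choice are assumptions for illustration, not taken from the patent.

```python
def chip_weights(ingress_bw):
    """Weight for each network chip, proportional to its share of the
    combined ingress bandwidth."""
    total = sum(ingress_bw.values())
    return {chip: bw / total for chip, bw in ingress_bw.items()}

def egress_share(weights, egress_bw):
    """Egress-port bandwidth granted to each chip according to its weight."""
    return {chip: w * egress_bw for chip, w in weights.items()}
```

With ingress bandwidths of 40 and 10 Gb/s, the first chip would receive four fifths of the egress port's capacity under this scheme.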
SYSTEM AND METHOD FOR SUPPORTING MULTIPLE CONCURRENT SL TO VL MAPPINGS IN A HIGH PERFORMANCE COMPUTING ENVIRONMENT
System and method for supporting multiple concurrent SL to VL mappings in a high performance computing environment. In accordance with an embodiment, systems and methods can provide for two or more SL to VL mappings per ingress switch port in a network switched fabric. By allowing for multiple such mappings, greater virtual lane independence can be achieved while continuing to achieve quality of service guarantees.
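A toy model of an ingress port holding two or more concurrent SL-to-VL tables might look like this; the class and field names are hypothetical, and the idea of keying tables by a mapping identifier (e.g., per tenant or partition) is an assumption made for the sketch.

```python
class IngressPort:
    """Ingress switch port holding multiple concurrent SL-to-VL
    mapping tables, selected per packet by a mapping id."""
    def __init__(self):
        self.sl_to_vl_tables = {}  # mapping_id -> {service level: virtual lane}

    def add_mapping(self, mapping_id, table):
        self.sl_to_vl_tables[mapping_id] = table

    def map(self, mapping_id, sl):
        # Same service level can land on different virtual lanes
        # depending on which mapping applies to the packet.
        return self.sl_to_vl_tables[mapping_id][sl]
```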
UPGRADING USER SPACE NETWORKING STACKS WITHOUT DISRUPTIONS TO NETWORK TRAFFIC
Described embodiments provide systems and methods for upgrading user space networking stacks without disruptions to network traffic. A first packet engine can read connection information of existing connections of a second packet engine written to a shared memory region by the second packet engine. The first packet engine can establish one or more virtual connections according to the connection information of existing connections of the second packet engine. Each of the first packet engine and the second packet engine can receive mirrored traffic data. The first packet engine can receive a first packet and determine that the first packet is associated with a virtual connection corresponding to an existing connection of the second packet engine. The first packet engine can drop the first packet responsive to the determination that the first packet is associated with the virtual connection.
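The handover behavior above (the new engine mirrors the old engine's connections as virtual connections and drops their packets) could be sketched as below; the class, the shared-memory dictionary shape, and the string return values are all illustrative assumptions.

```python
class PacketEngine:
    """New-version packet engine coming up alongside an old one."""
    def __init__(self, shared_region):
        # Read the old engine's connection info from the shared memory
        # region and track each as a virtual connection.
        self.virtual = set(shared_region["connections"])

    def handle(self, conn_id):
        # Mirrored traffic for a connection the old engine still owns
        # is dropped here; the old engine keeps servicing it.
        if conn_id in self.virtual:
            return "drop"
        return "process"
```

New connections established after the upgrade would not appear in the virtual set, so the new engine processes them directly.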
Apparatus and method for rate management and bandwidth control
A data rate management system that provides quality of service at the fine granularity of applications in the home network and home automation environment is provided. An application can be associated with a dynamic traffic flow, a physical port, a logical interface, or a host computer or device. Virtual queueing is applied to isolate and protect individual applications. Comprehensive rate management algorithms are developed to guarantee bandwidth for each application individually. The data rate management system includes a traffic classifier, virtual queueing, and a rate manager. The traffic classifier can statically or dynamically identify an application. Traffic for the identified application is stored in a dedicated virtual queue. The rate manager schedules the packet transmission among virtual queues using the application-based traffic profiles.
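The classifier/virtual-queue/rate-manager pipeline above could be sketched with a simple weighted round-robin scheduler. The class, the weight semantics (packets per round), and the scheduling policy are assumptions for illustration; the patent's actual rate algorithms are not specified here.

```python
from collections import deque

class RateManager:
    """Per-application virtual queues with weighted round-robin service."""
    def __init__(self):
        self.queues = {}   # application -> deque of packets
        self.weights = {}  # application -> packets served per round

    def classify(self, app, packet, weight=1):
        # Traffic classifier: enqueue into the app's dedicated virtual queue.
        self.queues.setdefault(app, deque())
        self.weights.setdefault(app, weight)
        self.queues[app].append(packet)

    def schedule_round(self):
        # Rate manager: serve each queue up to its weight per round,
        # giving each application its bandwidth share in isolation.
        sent = []
        for app, q in self.queues.items():
            for _ in range(self.weights[app]):
                if q:
                    sent.append(q.popleft())
        return sent
```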
VIRTUAL NETWORK DEVICE
A virtual network device increases the effective number of local physical ports by converting each of the local physical ports into a plurality of virtual local physical ports, and the effective number of network physical ports by converting each of the network physical ports into a plurality of virtual network physical ports.
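The port-multiplication idea above amounts to fanning each physical port out into several virtual ports. A minimal sketch, with an assumed `"port.index"` naming scheme:

```python
def virtualize_ports(physical_ports, fanout):
    """Expand each physical port id into `fanout` virtual port ids,
    multiplying the effective port count by `fanout`."""
    return {p: [f"{p}.{v}" for v in range(fanout)] for p in physical_ports}
```

The same expansion would be applied to both local physical ports and network physical ports.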
MULTI-PACKET SLIDING WINDOW SCHEDULER AND METHOD FOR INPUT-QUEUED SWITCHES
An exemplary sliding window scheduling method and system are disclosed. The exemplary sliding window scheduling method and system can schedule multiple packets in a given scheduling frame with a sliding window scheduling frame. The scheduling operation can be performed using bitmap operators and can achieve a lowest time complexity of O(1) per matching computation and per port using distributed parallelization hardware. The exemplary sliding window scheduling method and system can be performed in the context of a queue-proportional scheduler (QPS) as well as iSLIP. In alternative embodiments, the SW-QPS operation can be performed in a batching window rather than in a sliding window.
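As background for the matchings such a scheduler computes, one pass of a greedy input-to-output matching over a request matrix can be sketched as follows. This is a generic illustration of crossbar matching in an input-queued switch, not the patented SW-QPS or iSLIP algorithm, and all names are assumed.

```python
def greedy_match(requests):
    """One greedy pass over an input-queued switch's request matrix.

    requests[i][j] is True if input port i has a packet queued for
    output port j. Returns a conflict-free dict {input: output}.
    """
    match = {}
    taken = set()  # output ports already claimed this frame
    for i, row in enumerate(requests):
        for j, wants in enumerate(row):
            if wants and j not in taken:
                match[i] = j
                taken.add(j)
                break
    return match
```

A sliding-window scheduler would, roughly, run such matchings over a window of future frames so packets missing a match in one frame can be placed in a later one.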
Backpressure from an external processing system transparently connected to a router
An external processing system includes a port configured to exchange signals with a router and one or more processors configured to instantiate an operating system and a hypervisor based on information provided by the router in response to the external processing system being connected to the router. The processors implement a user plane layer that generates feedback representative of a processing load and provides the feedback to the router via the port. The router includes a port allocated to an external processing system and a controller that provides the information representing the operating system and hypervisor in response to connection of the external processing system. The controller also receives feedback indicating a processing load at the external processing system. A queue holds packets prior to providing the packets to the external processing system. The controller discards one or more of the packets from the queue based on the feedback.
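The router-side behavior above (hold packets in a queue, discard based on load feedback from the external processing system) could be sketched like this; the class, the 0.9 watermark, and the drop-on-enqueue policy are illustrative assumptions.

```python
from collections import deque

class BackpressureQueue:
    """Router queue feeding an external processing system, discarding
    packets when the system's reported processing load is high."""
    def __init__(self, high_watermark=0.9):
        self.q = deque()
        self.load = 0.0
        self.high = high_watermark

    def feedback(self, load):
        # Load report from the external system's user plane layer.
        self.load = load

    def enqueue(self, pkt):
        if self.load >= self.high:
            return False  # backpressure: discard instead of queueing
        self.q.append(pkt)
        return True
```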
NETWORKING SYSTEM HAVING MULTIPLE COMPONENTS WITH MULTIPLE LOCI OF CONTROL
Each switch unit in a networking system shares its local state information among other switch units in the networking system, collectively referred to as the shared forwarding state. Each switch unit creates a respective set of output queues that correspond to ports on other switch units based on the shared forwarding state. A received packet on an ingress switch unit operating in accordance with a first routing protocol instance can be enqueued on an output queue in the ingress switch unit; the packet is subsequently processed by the egress switch unit, operating in accordance with a second routing protocol instance that corresponds to the output queue.
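Building the per-remote-port output queues from the shared forwarding state could be sketched as below; the function name and the shape of the shared state (unit id mapped to its port list) are assumptions for the example.

```python
def build_output_queues(local_unit, shared_state):
    """Create one local output queue per port on every other switch
    unit, keyed by (remote unit, port), from the shared forwarding state.

    shared_state: dict mapping switch-unit id -> list of its port ids
    """
    return {(unit, port): []
            for unit, ports in shared_state.items()
            if unit != local_unit
            for port in ports}
```

An ingress unit would then enqueue a received packet on the queue keyed by its egress unit and port.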
Control wavelet for accelerated deep learning
Techniques in advanced deep learning provide improvements in one or more of accuracy, performance, and energy efficiency. An array of processing elements performs flow based computations on wavelets of data. Each processing element has a compute element and a routing element. Each compute element has memory. Each router enables communication via wavelets with nearest neighbors in a 2D mesh. A compute element receives a wavelet. If a control specifier of the wavelet is a first value, then instructions are read from the memory of the compute element in accordance with an index specifier of the wavelet. If the control specifier is a second value, then instructions are read from the memory of the compute element in accordance with a virtual channel specifier of the wavelet. Then the compute element initiates execution of the instructions.
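The control-specifier dispatch described above can be sketched as a two-way instruction lookup; the dictionary-based "memory", the field names, and the 0/1 encoding of the two control values are illustrative assumptions.

```python
def fetch_instructions(memory, wavelet):
    """Select the instruction-lookup key from a received wavelet.

    If the control specifier holds the first value (0 here), look up
    instructions by the wavelet's index specifier; otherwise look up
    by its virtual channel specifier.
    """
    if wavelet["control"] == 0:
        key = ("index", wavelet["index"])
    else:
        key = ("vc", wavelet["vc"])
    return memory[key]  # compute element then initiates execution
```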