Switching and load balancing techniques in a communication network
A source access network device multicasts copies of a packet to multiple core switches for switching to the same target access network device. The core switches are selected for the multicast based on a load balancing algorithm managed by a central controller. The target access network device receives at least one of the copies of the packet, records at least one metric indicative of the level of traffic congestion at the core switches, and feeds back information regarding the recorded metric to the controller. The controller adjusts the load balancing algorithm based on the fed-back information when selecting core switches for a subsequent data flow.
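The feedback loop described above can be sketched in a few lines. This is a minimal illustration, not the patented method: the class name, the weight formula, and the congestion scale in [0, 1] are all assumptions made for the example.

```python
# Hypothetical sketch: a central controller that adjusts per-core-switch
# selection weights from congestion metrics fed back by target access
# devices. Lower reported congestion -> higher selection weight.

class LoadBalancingController:
    def __init__(self, core_switches):
        # Start with uniform weights for all core switches.
        self.weights = {sw: 1.0 for sw in core_switches}

    def report_congestion(self, metrics):
        """metrics: {switch_id: congestion level in [0, 1]}."""
        for sw, congestion in metrics.items():
            # Penalize congested switches, but keep a floor so that
            # no switch is permanently starved of traffic.
            self.weights[sw] = max(0.1, 1.0 - congestion)

    def select_switches(self, k):
        """Pick the k least-congested core switches for the next flow."""
        ranked = sorted(self.weights, key=self.weights.get, reverse=True)
        return ranked[:k]

ctrl = LoadBalancingController(["core1", "core2", "core3"])
ctrl.report_congestion({"core1": 0.9, "core2": 0.2, "core3": 0.5})
selected = ctrl.select_switches(2)  # core2 and core3 are least congested
```

A real controller would of course aggregate metrics over time and per flow rather than overwrite weights on each report.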
Providing a snapshot of buffer content in a network element using egress mirroring
A network element includes circuitry and multiple ports. The ports are configured to connect to a communication network. The circuitry is configured to receive multiple packets from the communication network via one or more input ports, to store the received packets in a buffer of the network element, to schedule transmission of the buffered packets to the communication network via one or more output ports, and, in response to a request for a snapshot of at least a portion of the buffer, to mirror for transmission, via one or more dedicated ports, only the part of that portion that was received in the network element prior to the request.
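The key detail above is that only packets already buffered at the moment of the request are mirrored, even though mirroring runs while traffic keeps arriving. A small sketch using a logical arrival counter (the class and method names are invented for illustration):

```python
# Hypothetical sketch: snapshot via egress mirroring. Packets are
# timestamped with a logical arrival counter; the snapshot request
# records the counter, and mirroring emits only earlier arrivals.

class BufferSnapshot:
    def __init__(self):
        self.clock = 0            # logical arrival counter
        self.buffer = []          # list of (arrival_time, packet)
        self.request_time = None

    def receive(self, packet):
        self.clock += 1
        self.buffer.append((self.clock, packet))

    def request_snapshot(self):
        # Freeze the point in time the snapshot refers to.
        self.request_time = self.clock

    def mirror(self):
        # Mirror only the part of the buffer received before the
        # request, even if new packets arrived in the meantime.
        return [p for t, p in self.buffer if t <= self.request_time]

b = BufferSnapshot()
b.receive("p1"); b.receive("p2")
b.request_snapshot()
b.receive("p3")               # arrives after the request
snap = b.mirror()             # contains p1 and p2 only
```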
RELAY APPARATUS, RELAY METHOD AND RELAY PROGRAM
A relay apparatus includes a relay unit, to which a plurality of communication lines are connected, that relays a message received on one communication line by transmitting it on another; a switch for switching between a connected state, in which at least two of the communication lines are connected directly, and a separated state, in which those lines are separated; and a relay rule switching unit that switches the message relay rule of the relay unit depending on the state of the switch. The relay apparatus also includes a communication state detection unit that detects the communication state of the lines and operates the switch in accordance with the detected state.
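The interplay between the switch state and the relay rule can be sketched as follows. This is a toy model with invented names, assuming two lines A and B and a policy where a detected fault bridges the lines directly and silences the relay unit:

```python
# Hypothetical sketch: the relay rule changes with the switch state.
# Separated state: the relay unit forwards between lines.
# Connected (bridged) state: lines are joined directly, so the relay
# unit must not also forward, or messages would be duplicated.

class Relay:
    def __init__(self):
        self.bridged = False  # switch state between line A and line B

    def on_link_state(self, line_ok):
        # Communication-state detection: bridge the lines directly
        # when a fault is detected, and update the relay rule.
        self.bridged = not line_ok

    def relay(self, msg, src):
        if self.bridged:
            return None  # bridged: hardware path carries the message
        # Separated state: forward to the other line.
        return ("B" if src == "A" else "A", msg)

r = Relay()
normal = r.relay("hello", "A")    # separated: forwarded to line B
r.on_link_state(line_ok=False)    # fault detected: lines bridged
bridged = r.relay("hello", "A")   # relay unit stays silent
```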
HOST DEVICE WITH MULTI-PATH LAYER CONFIGURED FOR DETECTION AND RESOLUTION OF OVERSUBSCRIPTION CONDITIONS
An apparatus comprises a host device configured to communicate over a network with a storage system comprising a plurality of storage devices. The host device comprises a set of input-output queues and a multi-path input-output driver configured to select input-output operations from the set of input-output queues for delivery to the storage system over the network. The multi-path input-output driver is further configured to maintain payload size counters to track outstanding command payload for respective ones of a plurality of paths from the host device to the storage system, to detect an oversubscription condition relating to at least one of the paths based at least in part on values of one or more of the payload size counters, and to initiate one or more automated actions responsive to the detected oversubscription condition. For example, automated deployment of one or more additional paths associated with respective spare communication links between the host device and the storage system may be initiated.
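The counter-based detection described above is simple to illustrate. The sketch below assumes byte-granular counters and a fixed per-path limit; the class name, the threshold value, and the suggested remediation are all illustrative, not taken from the patent:

```python
# Hypothetical sketch: per-path counters of outstanding command payload
# bytes. A path is flagged oversubscribed when its counter exceeds an
# assumed capacity threshold, triggering an automated action such as
# deploying a spare link.

class PathMonitor:
    def __init__(self, paths, limit_bytes):
        self.outstanding = {p: 0 for p in paths}
        self.limit = limit_bytes

    def dispatch(self, path, payload_bytes):
        # An I/O operation was sent down this path.
        self.outstanding[path] += payload_bytes

    def complete(self, path, payload_bytes):
        # The operation finished; its payload is no longer outstanding.
        self.outstanding[path] -= payload_bytes

    def oversubscribed(self):
        # Paths whose outstanding payload exceeds the limit.
        return [p for p, b in self.outstanding.items() if b > self.limit]

mon = PathMonitor(["path0", "path1"], limit_bytes=1_000_000)
mon.dispatch("path0", 800_000)
mon.dispatch("path0", 400_000)   # path0 now has 1.2 MB outstanding
mon.dispatch("path1", 300_000)
flagged = mon.oversubscribed()   # only path0 is over the limit
```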
Dynamically reconfiguring data plane of forwarding element to account for operating temperature
Some embodiments of the invention provide a network forwarding element that can be dynamically reconfigured to adjust its data message processing so as to stay within a desired operating temperature or power consumption range. In some embodiments, the network forwarding element includes (1) a data-plane forwarding circuit (data plane) that processes data tuples associated with data messages received by the forwarding element's IC, and (2) a control-plane circuit (control plane) for configuring the data plane. The data plane includes several data processing stages to process the data tuples, as well as an idle-signal injecting circuit that receives from the control plane configuration data generated based on the IC's temperature. Based on the received configuration data, the idle-signal injecting circuit generates idle control signals for the data processing stages. Each stage that receives an idle control signal enters an idle state during which most of its components perform no operations, reducing the power consumed and heat generated by that stage.
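One way to picture the control-plane side of this is as a duty-cycle computation: the hotter the chip, the larger the fraction of cycles in which idle signals are injected. The temperatures, the linear ramp, and the function names below are assumptions for illustration only:

```python
# Hypothetical sketch: derive an idle duty cycle from chip temperature
# and emit per-cycle idle control signals so that a fraction of cycles
# is spent idle, lowering power draw and temperature.

def idle_duty_cycle(temp_c, target_c=90, max_c=110):
    """Fraction of cycles to idle: 0 below target, 1 at/above max."""
    if temp_c <= target_c:
        return 0.0
    return min(1.0, (temp_c - target_c) / (max_c - target_c))

def idle_signals(temp_c, cycles=10):
    """One bool per cycle in the window: True = inject an idle cycle."""
    duty = idle_duty_cycle(temp_c)
    idle_count = round(duty * cycles)
    # Place idle cycles at the start of the window for simplicity;
    # real hardware would interleave them.
    return [i < idle_count for i in range(cycles)]

cool = idle_duty_cycle(80)       # below target: full throughput
warm = idle_duty_cycle(100)      # halfway between target and max
window = idle_signals(100)       # half the cycles in the window idle
```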
Managing congestion in a network adapter based on host bus performance
A network adapter includes a host interface and circuitry. The host interface is configured to connect locally between the network adapter and a host via a bus. The circuitry is configured to receive from one or more source nodes, over a communication network to which the network adapter is coupled, multiple packets destined to the host; to temporarily store the received packets in a queue of the network adapter; to send the stored packets from the queue to the host over the bus; to monitor a performance attribute of the bus; and, in response to detecting, based at least on the monitored performance attribute, an imminent overfilling state of the queue, to send a congestion notification to at least one of the source nodes from which the received packets originated.
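The "imminent overfilling" test can be sketched as a rate comparison: if packets arrive faster than the bus drains them, project the queue depth forward and notify sources before it overflows. The units, horizon, and function name below are assumed for the example:

```python
# Hypothetical sketch: predict imminent queue overfill by comparing the
# network arrival rate against the measured host-bus drain rate, then
# notify source nodes before the queue actually overflows.

def imminent_overfill(queue_bytes, queue_capacity,
                      arrival_bps, bus_drain_bps, horizon_s=0.001):
    """True if the queue would exceed capacity within the horizon."""
    growth_bps = arrival_bps - bus_drain_bps     # net growth, bits/s
    projected = queue_bytes + growth_bps / 8 * horizon_s
    return projected > queue_capacity

# Bus degraded: 40 Gb/s arriving, only 10 Gb/s drained to the host.
congested = imminent_overfill(900_000, 1_000_000, 40e9, 10e9)
# Healthy bus: drain rate exceeds arrival rate, queue shrinks.
healthy = imminent_overfill(900_000, 1_000_000, 10e9, 40e9)
```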
BUFFERBLOAT RECOVERY AND AVOIDANCE SYSTEMS AND METHODS
Systems and methods for bufferbloat recovery and avoidance are provided herein. A portion of the buffer can be compressed based on one or more thresholds, without changing the order of packet transmission and without dropping packets. The method includes storing, by a device, a plurality of packets received by the device to a buffer. The buffer can be configured with a minimum threshold and a maximum threshold. The method includes detecting that the size of the buffer has reached at least the maximum threshold, and compressing one or more of the packets stored between the minimum and maximum thresholds while transmitting, during compression, at least a portion of the packets stored in the buffer below the minimum threshold.
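The central idea, compressing only the mid-region of the queue while the head keeps draining, can be shown with a stand-in compressor. The use of `zlib`, the index-based thresholds, and the tagging scheme are all assumptions for this sketch:

```python
# Hypothetical sketch: when the buffer passes the maximum threshold,
# compress only the packets between the minimum and maximum thresholds.
# The head of the queue (below the minimum) stays raw and transmittable,
# order is preserved, and nothing is dropped.

import zlib

def compress_region(buffer, min_idx, max_idx):
    """Compress packets in [min_idx, max_idx); tag each entry."""
    out = []
    for i, pkt in enumerate(buffer):
        if min_idx <= i < max_idx:
            out.append(("compressed", zlib.compress(pkt)))
        else:
            out.append(("raw", pkt))
    return out

buf = [b"x" * 100 for _ in range(6)]        # six 100-byte packets
result = compress_region(buf, min_idx=2, max_idx=6)
# Packets 0-1 remain raw and can be transmitted while packets 2-5 are
# compressed in place; the transmission order is unchanged.
tags = [tag for tag, _ in result]
```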
DEVICES AND METHODS OF USING NETWORK FUNCTION VIRTUALIZATION AND VIRTUALIZED RESOURCES PERFORMANCE DATA TO IMPROVE PERFORMANCE
Devices and methods of providing performance measurements (PMs) for Network Function Virtualization are generally described. A Virtual Network Function (VNF) PM job is scheduled at a VNF, and VNF PM data is received in response. From the VNF PM data, it is determined that virtualized resource (VR) management may be a cause of poor VNF performance. A VR PM job is then scheduled, yielding VR PM data. The VR PM and VNF PM data are analyzed to determine whether to increase the VRs at the VNF. If an increase is warranted, a request for the increase is transmitted from an element manager to a VNF manager, or the VNF PM and/or VR PM data are provided to a Network Manager (NM) so that the NM can request the increase through a Network Function Virtualization Orchestrator (NFVO).
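The decision step, correlating VNF-level degradation with VR-level saturation before requesting more resources, can be sketched as a predicate. The metric names, thresholds, and dict structure are invented for illustration and do not come from any NFV specification:

```python
# Hypothetical sketch: request a VR increase only when the VNF PM data
# shows degradation AND the VR PM data confirms resource saturation as
# the likely cause.

def decide_vr_increase(vnf_pm, vr_pm):
    """vnf_pm/vr_pm: dicts of assumed performance measurements."""
    vnf_degraded = vnf_pm["latency_ms"] > vnf_pm["latency_target_ms"]
    vr_saturated = vr_pm["cpu_util"] > 0.9 or vr_pm["mem_util"] > 0.9
    # Scale up only when both signals agree; otherwise the bottleneck
    # is likely elsewhere and more resources would not help.
    return vnf_degraded and vr_saturated

scale_up = decide_vr_increase(
    {"latency_ms": 40, "latency_target_ms": 20},
    {"cpu_util": 0.95, "mem_util": 0.6},
)
no_action = decide_vr_increase(
    {"latency_ms": 40, "latency_target_ms": 20},
    {"cpu_util": 0.3, "mem_util": 0.4},   # VNF slow but VRs idle
)
```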
Data packet marking method and device, and data transmission system
This application discloses: collecting statistics on a target parameter of a first data flow, where a target queue of a switching device buffers data packets of at least one data flow, the first data flow is one of the at least one data flow, and the target parameter reflects the amount of data in the first data flow; when the length of the target queue meets a first length condition, determining, based on at least one of the target parameter and an auxiliary parameter of the first data flow, a marking probability corresponding to the first data flow; and performing congestion marking on data packets of the first data flow based on that marking probability.
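A per-flow marking probability of this shape resembles RED/ECN-style marking, with the probability driven by each flow's share of buffered data. The threshold, the proportional formula, and the field name `ecn_ce` below are assumptions for the sketch, not the claimed method:

```python
# Hypothetical sketch of per-flow congestion marking: once the queue
# length meets the length condition, a flow's packets are marked with a
# probability that grows with that flow's share of the buffered data,
# so heavier flows are marked more often.

import random

def marking_probability(flow_bytes, queue_bytes,
                        queue_threshold, queue_len):
    # Below the length condition, no packets are marked.
    if queue_len < queue_threshold:
        return 0.0
    # Probability proportional to the flow's share of the queue.
    return min(1.0, flow_bytes / queue_bytes)

def mark(packet, prob, rng=random.random):
    if rng() < prob:
        packet["ecn_ce"] = True  # congestion-experienced mark
    return packet

p = marking_probability(flow_bytes=6_000, queue_bytes=10_000,
                        queue_threshold=50, queue_len=80)
pkt = mark({"flow": 1}, p, rng=lambda: 0.3)  # deterministic demo draw
```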