Patent classifications
H04L49/50
Data processing network with flow compaction for streaming data transfer
An improved protocol is provided for data transfer between a request node and a home node of a data processing network that includes a number of devices coupled via an interconnect fabric; the protocol minimizes the number of response messages transported through the interconnect fabric. When congestion is detected in the interconnect fabric, a home node sends a combined response to a write request from a request node. The response is delayed until a data buffer is available at the home node and the home node has completed an associated coherence action. When the request node receives a combined response, the data to be written and the acknowledgment are coalesced in the data message.
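The handshake above can be sketched as follows. This is an illustrative model only: the class and field names (HomeNode, RequestNode, CombinedResponse, DataMessage) are assumptions, not the patent's actual terminology.

```python
from dataclasses import dataclass

@dataclass
class CombinedResponse:
    buffer_granted: bool   # a data buffer has been reserved at the home node
    coherence_done: bool   # the associated coherence action has completed

@dataclass
class DataMessage:
    payload: bytes
    ack: bool              # acknowledgment coalesced with the write data

class HomeNode:
    def __init__(self):
        self.buffer_free = True
        self.coherence_complete = True

    def handle_write_request(self):
        # Under congestion, the response is delayed until both conditions
        # hold; a single combined response then replaces two separate ones.
        if self.buffer_free and self.coherence_complete:
            return CombinedResponse(True, True)
        return None  # response deferred

class RequestNode:
    def send_data(self, resp, payload):
        # On receipt of the combined response, the data to be written and
        # the acknowledgment travel in one data message.
        assert resp.buffer_granted and resp.coherence_done
        return DataMessage(payload, ack=True)
```

Coalescing the acknowledgment with the data halves the number of messages the request node must send per write under congestion.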
CENTRALIZED AGGREGATED ELEPHANT FLOW DETECTION AND MANAGEMENT
A semiconductor chip for implementing aggregated flow detection and management includes a number of pipes, where each pipe is coupled to a portion of the ports on the semiconductor chip that are to receive data packets. Logic coupled to the pipes detects and manages elephant flows. The elephant flow detection and management logic includes a flow table and a byte counter.
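A minimal sketch of the flow-table-plus-byte-counter mechanism follows. The threshold value and the table structure are assumptions made for illustration; the patent does not specify them here.

```python
ELEPHANT_THRESHOLD = 1_000_000  # bytes; illustrative value

class FlowTable:
    """Per-flow byte counters; a flow becomes an 'elephant' once its
    cumulative byte count crosses the threshold."""

    def __init__(self, threshold=ELEPHANT_THRESHOLD):
        self.threshold = threshold
        self.byte_count = {}   # flow key (e.g. 5-tuple) -> cumulative bytes
        self.elephants = set()

    def observe(self, flow_key, packet_len):
        # Update the byte counter for this flow and flag it if it has
        # exceeded the elephant threshold. Returns True for elephants.
        total = self.byte_count.get(flow_key, 0) + packet_len
        self.byte_count[flow_key] = total
        if total >= self.threshold:
            self.elephants.add(flow_key)
        return flow_key in self.elephants
```

In hardware the table would be sized and evicted per pipe; the dictionary stands in for that structure here.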
Delay-based tagging in a network switch
A network device organizes packets into various queues, in which the packets await processing. Queue management logic tracks how long certain packet(s), such as a designated marker packet, remain in a queue. Based thereon, the logic produces a measure of delay for the queue, referred to herein as the “queue delay.” Based on a comparison of the current queue delay to one or more thresholds, various associated delay-based actions may be performed, such as tagging and/or dropping packets departing from the queue, or preventing additional enqueues to the queue. In an embodiment, a queue may be expired based on the queue delay, and all packets dropped. In other embodiments, when a packet is dropped prior to enqueue into an assigned queue, copies of some or all of the packets already within the queue at the time the packet was dropped may be forwarded to a visibility component for analysis.
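The delay-measurement-and-threshold logic can be sketched as below. Tracking every packet's enqueue time stands in for the marker-packet technique the abstract describes; the threshold values and action names are illustrative assumptions.

```python
from collections import deque

class DelayTrackedQueue:
    """Measures queue delay from enqueue timestamps and compares it to
    thresholds to pick a delay-based action on departure."""

    def __init__(self, tag_threshold, drop_threshold):
        self.q = deque()   # entries: (enqueue_time, packet)
        self.tag_threshold = tag_threshold
        self.drop_threshold = drop_threshold

    def enqueue(self, now, packet):
        self.q.append((now, packet))

    def dequeue(self, now):
        # Queue delay = time the departing packet spent in the queue.
        enq_time, packet = self.q.popleft()
        delay = now - enq_time
        if delay >= self.drop_threshold:
            return None                 # drop the packet on departure
        if delay >= self.tag_threshold:
            return ("tagged", packet)   # tag before forwarding
        return ("ok", packet)
```

A real switch would sample only designated marker packets rather than timestamping every entry, but the comparison against thresholds works the same way.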
Method, Device, and Network System for Load Balancing
A method for implementing load balancing is applied to a 4-node network structure. Every two nodes in the 4-node network structure are interconnected, and the nodes are, e.g., dies. The 4-node network structure includes a source node (SN) and a destination node (DN). According to the method, when the bandwidth occupied by ingress traffic flowing into the SN and destined for the DN is greater than the bandwidth of the fabric side link (FSL) between the SN and the DN, the SN selects at least two transmission paths to send the ingress traffic to the DN; and when the bandwidth occupied by the ingress traffic is less than or equal to the bandwidth of the FSL, the SN transmits the ingress traffic on the direct link between the SN and the DN.
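The path-selection rule reduces to a single bandwidth comparison, sketched here. The function name and the choice of exactly one detour path are assumptions for illustration; the method only requires at least two paths in the overloaded case.

```python
def select_paths(ingress_bw, fsl_bw, direct_link, alternate_links):
    """Return the transmission paths the SN uses toward the DN.

    ingress_bw: bandwidth occupied by ingress traffic destined for the DN
    fsl_bw:     bandwidth of the fabric side link between SN and DN
    """
    if ingress_bw > fsl_bw:
        # Demand exceeds the direct link: spread traffic over at least
        # two paths (the direct link plus one or more detours).
        return [direct_link] + alternate_links[:1]
    # Otherwise the direct link alone suffices.
    return [direct_link]
```

In the 4-node full mesh, the detour paths are the two-hop routes through each of the other two nodes.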
Overload protection for data sinks in a distributed computing system
Described in this document, among other things, is an overload protection system that can protect data sinks from overload by controlling the volume of data sent to those data sinks in a fine-grained manner. The protection system preferably sits in between edge servers, or other producers of data, and data sinks that will receive some or all of the data. Preferably, each data sink owner defines a policy to control how and when overload protection will be applied. Each policy can include definitions of how to monitor the stream of data for overload and specify one or more conditions upon which throttling actions are necessary. In embodiments, a policy can contain a multi-part specification to identify the class(es) of traffic to monitor to see if the conditions have been triggered.
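A per-sink policy of the kind described could look like the following sketch. The class names, the rate-based trigger condition, and the throttling decision are all illustrative assumptions; actual policies may monitor and act in other ways.

```python
class OverloadPolicy:
    """Owner-defined policy: which traffic class to monitor and the
    condition under which throttling is applied for one data sink."""

    def __init__(self, traffic_class, max_rate):
        self.traffic_class = traffic_class
        self.max_rate = max_rate   # e.g. bytes per second (assumed unit)

    def should_throttle(self, observed_rates):
        rate = observed_rates.get(self.traffic_class, 0)
        return rate > self.max_rate

def apply_policies(policies, observed_rates):
    # Return the traffic classes whose conditions have been triggered;
    # the protection layer would throttle these before the data sinks.
    return [p.traffic_class for p in policies if p.should_throttle(observed_rates)]
```

The protection layer sits between the edge servers and the sinks, so throttling decisions never require changes at the producers.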
Dynamically reconfiguring data plane of forwarding element to account for power consumption
Some embodiments of the invention provide a network forwarding element that can be dynamically reconfigured to adjust its data message processing to stay within a desired operating temperature or power consumption range. In some embodiments, the network forwarding element includes (1) a data-plane forwarding circuit (“data plane”) to process data tuples associated with data messages received by the IC, and (2) a control-plane circuit (“control plane”) for configuring the data plane forwarding circuit. The data plane includes several data processing stages to process the data tuples. The data plane also includes an idle-signal injecting circuit that receives from the control plane configuration data that the control plane generates based on the IC's temperature. Based on the received configuration data, the idle-signal injecting circuit generates idle control signals for the data processing stages. Each stage that receives an idle control signal enters an idle state during which the majority of the components of that stage do not perform any operations, which reduces the power consumed and temperature generated by that stage during its idle state.
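The control-plane decision can be sketched as a simple rule mapping measured temperature to idle control signals for the data-plane stages. The proportional relation and the degrees-per-stage constant are assumptions made for illustration.

```python
def idle_signals(temp_c, target_c, num_stages, degrees_per_stage=5.0):
    """Return one boolean per data-processing stage: True means the
    idle-signal injecting circuit puts that stage into an idle state.

    Assumed rule: idle one more stage for every `degrees_per_stage`
    degrees the measured temperature exceeds the target.
    """
    excess = max(0.0, temp_c - target_c)
    idle_count = min(num_stages, int(excess / degrees_per_stage))
    return [i < idle_count for i in range(num_stages)]
```

Each idled stage stops most of its components, lowering power draw and temperature; as the chip cools, fewer idle signals are generated and throughput recovers.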
Self-Protecting Computer Network Router with Queue Resource Manager
A self-protecting router limits the extent to which its queues can be filled with potentially malicious or otherwise harmful messages received from outside the router, thereby ensuring the queues have sufficient room to accept messages that are generated internally within the router and that are necessary for management and operation of the router. Such routers are, therefore, immune to attack by floods of messages from malicious or malfunctioning network nodes, such as computers, switches and other routers.
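The queue resource manager's admission rule can be sketched as a cap on the share of each queue that externally received messages may occupy. The specific capacity split is an assumption; only the existence of a reserved internal share is described above.

```python
class ProtectedQueue:
    """Admission control that reserves part of a queue's capacity for
    the router's own internally generated management messages."""

    def __init__(self, capacity, external_limit):
        assert external_limit <= capacity
        self.capacity = capacity
        self.external_limit = external_limit  # max slots for external messages
        self.external = 0
        self.internal = 0

    def admit(self, is_internal):
        if self.external + self.internal >= self.capacity:
            return False              # queue completely full
        if not is_internal and self.external >= self.external_limit:
            return False              # external share exhausted; protect the rest
        if is_internal:
            self.internal += 1
        else:
            self.external += 1
        return True
```

A flood of external messages can at worst fill its own share; internal management messages always find room in the reserved remainder.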
Signalling congestion
Congestion is signalled in respect of a network element operable to forward data items in a telecommunications network, and in respect of a processing element operable to process requests for service. In either case, the element is operable to perform its processing function at up to a processing rate which is subject to variation, and has a queue for items awaiting processing with an associated counter which maintains a count from which a queue metric is derivable. A method comprises: updating the count at a rate dependent on the processing rate; further updating the count in response to receipt of items awaiting processing; and signalling a measure of congestion in respect of the element in dependence on the queue metric; then altering the rate at which the count is being updated and adjusting the counter so as to cause a change in the queue metric if the processing rate has changed.
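A discrete-time sketch of the counter scheme follows. The time-to-drain queue metric and the tick-based drain are illustrative assumptions; the abstract only requires that the count drains at a rate tied to the processing rate and that a queue metric is derivable from it.

```python
class VirtualQueue:
    """Counter-based queue metric: the count drains at a rate tied to
    the processing rate and grows on each arrival."""

    def __init__(self, processing_rate):
        self.rate = processing_rate   # items drained per tick
        self.count = 0

    def tick(self):
        # Update the count at a rate dependent on the processing rate.
        self.count = max(0, self.count - self.rate)

    def arrive(self, n=1):
        # Further update the count in response to arriving items.
        self.count += n

    def change_rate(self, new_rate):
        # Altering the drain rate changes the derived queue metric below.
        self.rate = new_rate

    def queue_metric(self):
        # Assumed metric: estimated time to drain the current count.
        return self.count / self.rate if self.rate else float("inf")
```

Because the metric is derived from the count rather than from an actual buffer, congestion can be signalled before the real queue overflows.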
Integrated server with switching capabilities and network operating system
Methods, systems, and computer programs are presented for a switching server. One switching server includes a server, a switch module coupled to the server, and a switch controller coupled to the server and to the switch module. The server includes a processor executing an operating system that includes a network driver, and the network driver includes a first network device operating system (ndOS) program. Further, the switch module includes a switch fabric and input/output ports. The switch controller includes a processor and non-volatile storage, where the processor is configured to execute a second ndOS program. The first and second ndOS programs implement a global networking policy for a plurality of devices executing ndOS programs, the global networking policy including a definition for switching incoming packets through the plurality of devices executing the ndOS programs.