Patent classifications
H04L47/6205
Controller command scheduling in a memory system to increase command bus utilization
A first command is scheduled on a command bus, where the first command requires use of a data bus resource during a first time period after scheduling. Prior to the first time period, a second command is identified according to a scheduling policy. A determination is made whether scheduling the second command on the command bus will cause a conflict in usage of the data bus resource. In response to determining that scheduling the second command will cause the conflict, a third, lower-priority command is identified whose scheduling on the command bus will not cause the conflict. The third command is scheduled on the command bus prior to the second command, even though it has lower priority than the second command.
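The scheduling policy above can be sketched in a few lines. This is an illustrative model only, not the patented implementation; the `Command` fields, slot-based conflict model, and class names are assumptions.

```python
# Hypothetical sketch: commands are picked by priority, but a command whose
# data-bus time slot is already reserved is skipped in favor of the highest-
# priority command that is conflict-free.
from dataclasses import dataclass, field

@dataclass
class Command:
    name: str
    priority: int          # lower number = higher priority
    data_bus_slot: int     # data-bus time slot this command will need

@dataclass
class Scheduler:
    reserved_slots: set = field(default_factory=set)  # slots already claimed

    def conflicts(self, cmd: Command) -> bool:
        return cmd.data_bus_slot in self.reserved_slots

    def schedule_next(self, queue: list) -> Command:
        """Pick the highest-priority command whose data-bus usage is conflict-free."""
        for cmd in sorted(queue, key=lambda c: c.priority):
            if not self.conflicts(cmd):
                queue.remove(cmd)
                self.reserved_slots.add(cmd.data_bus_slot)
                return cmd
        return None  # every pending command would conflict; leave the bus idle

sched = Scheduler()
first = Command("read_A", priority=0, data_bus_slot=3)
sched.reserved_slots.add(first.data_bus_slot)              # first command already scheduled

queue = [Command("read_B", priority=1, data_bus_slot=3),   # second command: conflicts
         Command("write_C", priority=2, data_bus_slot=5)]  # third command: conflict-free
picked = sched.schedule_next(queue)                        # write_C goes out first
```

The second command (read_B) stays queued until slot 3 frees up, so the command bus keeps working instead of idling behind the conflict.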
Head-of-Queue Blocking for Multiple Lossless Queues
A network element includes at least one headroom buffer and flow-control circuitry. The headroom buffer is configured to receive and store packets from a peer network element having at least two data sources, with each headroom buffer serving multiple packets. The flow-control circuitry is configured to quantify a congestion severity measure and, in response to detecting congestion in the headroom buffer, to send the peer network element pause-request signaling that instructs the peer network element to stop transmitting packets that (i) are associated with the congested headroom buffer and (ii) have priorities selected based on the congestion severity measure.
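Severity-based priority selection might look like the following sketch. The fill-ratio breakpoints, the number of priority levels, and the mapping from severity to paused priorities are all assumptions for illustration.

```python
# Illustrative sketch: the more severe the congestion in a headroom buffer,
# the wider the range of priorities the peer is asked to pause.

def severity(fill_level: float, capacity: float) -> int:
    """Quantify congestion severity as 0 (none) .. 3 (severe)."""
    ratio = fill_level / capacity
    if ratio < 0.5:
        return 0
    if ratio < 0.75:
        return 1
    if ratio < 0.9:
        return 2
    return 3

def priorities_to_pause(sev: int, num_priorities: int = 8) -> list:
    """Select low priorities first; at maximum severity, pause everything."""
    if sev == 0:
        return []
    count = (num_priorities * sev) // 3   # sev=3 pauses all priorities
    return list(range(count))             # 0 = lowest priority in this sketch

pause = priorities_to_pause(severity(fill_level=80, capacity=100))
```

Pausing only the selected priorities lets high-priority traffic through a mildly congested buffer, escalating to a full pause only when the buffer is nearly exhausted.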
Multicast flow scheduling in a data center
In one example embodiment, a server generates a candidate instantiation of virtual applications among a plurality of hosts in a data center to support a multicast stream. The server provides, to a first set of agents corresponding to a first set of the plurality of hosts, a command to initiate a test multicast stream. The server provides, to a second set of agents corresponding to a second set of the plurality of hosts, a command to join the test multicast stream. The server obtains, from the second set of agents, a message indicating whether the second set of agents received the test multicast stream. If the message indicates that the second set of agents received the test multicast stream, the server causes the virtual applications to be instantiated in accordance with the candidate instantiation of the virtual applications.
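The validate-then-commit loop described above can be sketched as follows. The agent interface (`start_test_stream`, `join_test_stream`, `received_test_stream`) is an assumption for illustration, stubbed with a fake agent.

```python
# Minimal sketch: sender agents start a test multicast stream, receiver agents
# join it, and the candidate placement is accepted only if every receiver
# reports that the test stream arrived.

def validate_candidate(senders, receivers) -> bool:
    for agent in senders:
        agent.start_test_stream()
    for agent in receivers:
        agent.join_test_stream()
    # Each receiver reports whether the test stream actually arrived.
    return all(agent.received_test_stream() for agent in receivers)

class FakeAgent:
    """Stand-in for a host agent; real agents would drive actual multicast."""
    def __init__(self, will_receive=True):
        self.will_receive = will_receive
    def start_test_stream(self): pass
    def join_test_stream(self): pass
    def received_test_stream(self): return self.will_receive

ok = validate_candidate([FakeAgent()], [FakeAgent(), FakeAgent()])
bad = validate_candidate([FakeAgent()], [FakeAgent(will_receive=False)])
```

Only when `validate_candidate` returns true would the server instantiate the virtual applications according to the candidate placement.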
Dynamic virtual cut-through and dynamic fabric bandwidth allocation between virtual cut-through and store-and-forward traffic
Examples describe an egress port manager that uses an adaptive jitter selector to apply a jitter threshold level for a buffer. The jitter threshold level indicates when egress of a packet segment from the buffer is allowed, where a packet segment comprises a packet header, and the threshold level is adapted based on switch fabric load. In some examples, the jitter threshold level indicates a number of segments of the buffer's head-of-line (HOL) packet that are to be in the buffer, or indicates a timer that starts at issuance of the first read request for the first segment of the packet in the buffer. In some examples, the jitter threshold level is not more than a maximum transmission unit (MTU) size associated with the buffer. In some examples, a fetch scheduler adapts the amount of interface overspeed to reduce packet-fetching latency while attempting to prevent fabric saturation based on the switch fabric load level, and controls the jitter threshold level for the buffer by forcing a jitter threshold level based on the switch fabric load level and the latency profile of the switch fabric.
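The segment-count form of the adaptive threshold can be sketched as below. The load breakpoints and the specific segment counts are assumptions; the abstract specifies only that the threshold adapts with fabric load and never exceeds the MTU-sized cap.

```python
# Sketch: a lightly loaded fabric lets the egress buffer cut through after a
# single segment, while a loaded fabric forces more buffering, up to the
# MTU-sized maximum (effectively store-and-forward).

def jitter_threshold_segments(fabric_load: float, mtu_segments: int) -> int:
    """Return how many segments of the HOL packet must be buffered before egress."""
    if fabric_load < 0.3:
        needed = 1                      # near cut-through
    elif fabric_load < 0.7:
        needed = mtu_segments // 2      # partial buffering
    else:
        needed = mtu_segments           # effectively store-and-forward
    return min(needed, mtu_segments)    # never exceed the MTU-sized cap

low = jitter_threshold_segments(0.1, mtu_segments=24)
high = jitter_threshold_segments(0.9, mtu_segments=24)
```

The design point is the trade-off the abstract names: a low threshold minimizes egress latency, while a high threshold avoids underrun and fabric saturation when the fabric is busy.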
Mitigating priority flow control deadlock in stretch topologies
Embodiments provide for mitigating priority flow control deadlock in stretch topologies by initializing a plurality of queues in a buffer of a leaf switch at a local cluster of a site having a plurality of clusters, wherein each queue of the plurality of queues corresponds to a respective one cluster of the plurality of clusters; receiving a pause command for no-drop traffic on the leaf switch, the pause command including an internal Class-of-Service (iCoS) identifier associated with a particular cluster of the plurality of clusters and a corresponding queue in the plurality of queues; and in response to determining, based on the iCoS identifier, that the pause command was received from a remote spine switch associated with a different cluster than the local cluster: forwarding the pause command to a local spine switch in the local cluster; and implementing the pause command on the corresponding queue in the buffer.
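The pause-handling rule can be sketched as below. The data model (a dict from iCoS identifier to cluster, string-valued queue states, a list standing in for forwarding to the local spine) is an assumption for illustration.

```python
# Sketch: a pause whose iCoS identifier maps to a remote cluster is relayed to
# the local spine and implemented only on that cluster's dedicated queue, so
# remote back-pressure cannot stall traffic belonging to other clusters.

def handle_pause(icos_to_cluster: dict, local_cluster: str, pause_icos: int,
                 queues: dict, forwarded: list) -> None:
    cluster = icos_to_cluster[pause_icos]
    if cluster != local_cluster:
        forwarded.append(pause_icos)   # relay the pause to the local spine switch
    queues[cluster] = "paused"         # implement on that cluster's queue only

queues = {"east": "active", "west": "active"}
forwarded = []
handle_pause({1: "east", 2: "west"}, local_cluster="east",
             pause_icos=2, queues=queues, forwarded=forwarded)
```

Because each cluster has its own queue, pausing the remote cluster's queue leaves the local cluster's no-drop traffic flowing, which is what breaks the deadlock cycle.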
Inspection of network traffic in a security device at object level
A method, system, and computer-usable medium are disclosed for, responsive to establishment of a connection between a first endpoint device and a second endpoint device: maintaining, by a security device interfaced between the first endpoint device and the second endpoint device for inspecting traffic transmitted over the connection, a first communication state identical to the communication state of the first endpoint device, and a second communication state identical to the communication state of the second endpoint device; and, responsive to transmission of traffic from the first endpoint intended for the second endpoint: inspecting individual objects of the traffic; modifying stream identifiers of the individual objects prior to retransmission of the traffic to the second endpoint, so as to maintain ordering of stream identifiers as seen by the second endpoint; and maintaining a mapping of the modified stream identifiers, such that responses transmitted by the second endpoint in response to the objects are restored to the original stream identifiers of the objects transmitted by the first endpoint device.
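The stream-identifier rewriting and reverse mapping can be sketched as follows. This is a simplification with assumed names; real traffic inspection would sit between the rewrite and the restore steps.

```python
# Sketch: the security device renumbers outgoing object stream IDs so the
# second endpoint sees a strictly increasing sequence, and keeps a mapping so
# responses can be restored to the original IDs for the first endpoint.

class StreamRemapper:
    def __init__(self):
        self.mapping = {}      # modified ID -> original ID
        self.next_id = 1       # next in-order ID presented to the second endpoint

    def rewrite_outgoing(self, original_id: int) -> int:
        modified = self.next_id
        self.next_id += 1
        self.mapping[modified] = original_id
        return modified

    def restore_response(self, modified_id: int) -> int:
        return self.mapping[modified_id]

r = StreamRemapper()
seen_by_b = [r.rewrite_outgoing(sid) for sid in (7, 3, 9)]  # arbitrary original IDs
restored = [r.restore_response(mid) for mid in seen_by_b]   # back to 7, 3, 9
```

The second endpoint only ever sees the in-order identifiers, while the first endpoint's responses arrive with the identifiers it originally used.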
OTN transport over a leaf/spine packet network
A network element includes ingress optics configured to receive a client signal; egress optics configured to transmit packets over one or more Ethernet links in a network; and circuitry interconnecting the ingress optics and the egress optics, wherein the circuitry is configured to segment an Optical Transport Network (OTN) signal from the client signal into one or more flows, and to provide the one or more flows to the egress optics for transmission over the one or more Ethernet links to a second network element that is configured to reassemble the one or more flows into the OTN signal.
Congestion Flow Identification Method And Network Device
The present disclosure relates to congestion flow identification methods. One example method includes obtaining, by a network device, a queue length of a non-congestion flow queue, where the non-congestion flow queue includes a data packet or description information of the data packet; determining, by the network device, a target output port of a target data packet when the queue length of the non-congestion flow queue is greater than or equal to a first threshold, where the target data packet is a data packet waiting to enter the non-congestion flow queue or the next data packet waiting to be output from the non-congestion flow queue; and, when utilization of the target output port is greater than or equal to a second threshold, determining, by the network device, that a flow corresponding to the target data packet is a congestion flow.
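The two-threshold check reduces to a short predicate. The threshold values and parameter names below are assumptions; the abstract specifies only the two comparisons and their order.

```python
# Sketch: a flow is flagged as a congestion flow only when the non-congestion
# queue is long enough to warrant inspection AND the target output port of the
# packet's flow is heavily utilized.

def is_congestion_flow(queue_len: int, port_utilization: float,
                       len_threshold: int = 100,        # first threshold (assumed)
                       util_threshold: float = 0.8) -> bool:  # second threshold (assumed)
    if queue_len < len_threshold:
        return False                     # queue short: skip the port check entirely
    return port_utilization >= util_threshold

calm = is_congestion_flow(queue_len=50, port_utilization=0.95)
hot = is_congestion_flow(queue_len=150, port_utilization=0.95)
busy_queue_idle_port = is_congestion_flow(queue_len=150, port_utilization=0.4)
```

Requiring both conditions avoids false positives: a long queue draining into an idle port is not congestion, and a busy port with a short queue needs no action yet.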
Network data processor having per-input port virtual output queues
Various embodiments of a virtual output queue system within a network element enable per-input port virtual output queues within a network data processor of the network element. In one embodiment, each port managed by a network data processor has an associated set of virtual output queues for each output port on the network element. In one embodiment, the network data processor hardware supports per-processor VOQs, and per-input port VOQs are enabled in hardware for layer 3 forwarding by overloading layer 2 forwarding logic. In such an embodiment, a mapping table is generated to enable per-input port VOQs for layer 3 forwarding using layer 2 logic that is otherwise unused during layer 3 forwarding. In one embodiment, multiple traffic classes can be managed per input port when using per-input port VOQs. In one embodiment, equal-cost multi-path (ECMP) and link aggregation support is also enabled.
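The mapping table at the heart of the scheme can be sketched as a simple enumeration. The triple-keyed dict below is an assumed model; hardware would use packed indices rather than Python tuples.

```python
# Sketch: each (input port, output port, traffic class) triple gets its own
# virtual output queue ID, so traffic from one input port can never
# head-of-line block traffic from another input port to the same output.

def build_voq_table(num_inputs: int, num_outputs: int, num_classes: int) -> dict:
    table = {}
    voq_id = 0
    for inp in range(num_inputs):
        for out in range(num_outputs):
            for tc in range(num_classes):
                table[(inp, out, tc)] = voq_id   # one dedicated VOQ per triple
                voq_id += 1
    return table

table = build_voq_table(num_inputs=4, num_outputs=4, num_classes=2)
```

A 4x4 element with two traffic classes thus needs 32 VOQs; the table lets forwarding logic look up the right queue from fields it already has in hand.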
Timing transport method in a communication network
There is provided a method in a packet-based network system for node-to-node transmission of data packets comprising timing packets and non-timing packets, directed to a mechanism for compensating delay variation in a timing system, or in timing-sensitive signal transport, in a packet-based network without participating in the timing signaling of the timing packets or timing-sensitive packets themselves. The method comprises associating the data packets with different levels of transmission priority P_r, P_l; assigning the highest (or highest available) transmission priority P_r to the timing packets; separately queuing the timing packets in different buffers 401, 402; and providing first-opportunity transmission of the timing packets regardless of the transmission priority level of non-timing packets waiting to be transmitted. The advantage of the method is that timing-sensitive traffic thereby experiences reduced buffer delay variations.
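First-opportunity transmission for timing packets amounts to a strict-priority dequeue, sketched below. The two-queue model and class names are assumptions; the abstract's buffers 401 and 402 would each be a dedicated timing queue.

```python
# Sketch: timing packets go to a dedicated queue that is always drained before
# any non-timing queue, so their buffering delay (and hence the delay
# variation they experience) stays minimal.
from collections import deque

class TimingAwareScheduler:
    def __init__(self):
        self.timing_q = deque()      # dedicated timing-packet buffer
        self.data_q = deque()        # non-timing traffic

    def enqueue(self, packet: str, is_timing: bool) -> None:
        (self.timing_q if is_timing else self.data_q).append(packet)

    def dequeue(self):
        """Timing packets always get the first transmission opportunity."""
        if self.timing_q:
            return self.timing_q.popleft()
        if self.data_q:
            return self.data_q.popleft()
        return None

s = TimingAwareScheduler()
s.enqueue("data1", is_timing=False)
s.enqueue("sync1", is_timing=True)
s.enqueue("data2", is_timing=False)
order = [s.dequeue(), s.dequeue(), s.dequeue()]   # sync1 jumps the data queue
```

The timing packet transmits ahead of data packets that arrived earlier, which is exactly the "regardless of transmission priority level of non-timing packets" behavior the method claims.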