H04L47/6205

MITIGATING PRIORITY FLOW CONTROL DEADLOCK IN STRETCH TOPOLOGIES

Embodiments provide for mitigating priority flow control deadlock in stretch topologies by initializing a plurality of queues in a buffer of a leaf switch at a local cluster of a site having a plurality of clusters, wherein each queue of the plurality of queues corresponds to a respective one cluster of the plurality of clusters; receiving a pause command for no-drop traffic on the leaf switch, the pause command including an internal Class-of-Service (iCoS) identifier associated with a particular cluster of the plurality of clusters and a corresponding queue in the plurality of queues; and in response to determining, based on the iCoS identifier, that the pause command was received from a remote spine switch associated with a different cluster than the local cluster: forwarding the pause command to a local spine switch in the local cluster; and implementing the pause command on the corresponding queue in the buffer.
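A minimal sketch of the leaf-switch pause handling described above. The class and field names (`LeafSwitch`, `forwarded`), and the use of the cluster identifier directly as the iCoS identifier, are illustrative assumptions, not part of the disclosure:

```python
class LeafSwitch:
    """Leaf switch with one no-drop queue per cluster of the site."""

    def __init__(self, local_cluster, clusters):
        self.local_cluster = local_cluster
        # per-cluster queues, keyed by the cluster the iCoS identifier maps to
        self.queues = {c: {"paused": False} for c in clusters}
        self.forwarded = []  # pause commands relayed to the local spine

    def handle_pause(self, icos, source_cluster):
        """Apply a pause command carrying an iCoS identifier."""
        if source_cluster != self.local_cluster:
            # pause arrived from a remote spine switch: relay it toward the
            # local spine before pausing the corresponding queue
            self.forwarded.append(icos)
        self.queues[icos]["paused"] = True
```

A locally originated pause is simply applied; only remote-origin pauses are also forwarded.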

INSPECTION OF NETWORK TRAFFIC IN A SECURITY DEVICE AT OBJECT LEVEL

A method, system, and computer-usable medium are disclosed for, responsive to establishment of a connection between a first endpoint device and a second endpoint device: maintaining, by a security device interfaced between the first endpoint device and the second endpoint device for inspecting traffic transmitted over the connection, a first communication state identical to a communication state of the first endpoint device; maintaining, by the security device, a second communication state identical to a communication state of the second endpoint device; and responsive to transmission of traffic from the first endpoint and intended for the second endpoint: inspecting individual objects of the traffic; modifying stream identifiers of the individual objects prior to retransmission of the traffic to the second endpoint to maintain ordering of stream identifiers as seen by the second endpoint; and maintaining a mapping of the modified stream identifiers, the mapping being used by the security device to modify responses transmitted by the second endpoint, in response to the objects transmitted by the first endpoint device, back to the original stream identifiers of those objects.
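The stream-identifier remapping can be sketched as a small table kept by the security device. The class name and the choice of monotonically increasing outbound identifiers are illustrative assumptions; the abstract only requires that ordering be maintained as seen by the second endpoint and that responses be restored to the original identifiers:

```python
class StreamIdMapper:
    """Maps original stream ids to rewritten ids and back."""

    def __init__(self):
        self.next_id = 1       # next ordered id as seen by the second endpoint
        self.to_original = {}  # rewritten id -> original id

    def rewrite_outbound(self, original_id):
        """Assign the next in-order id to an object headed to endpoint B."""
        new_id = self.next_id
        self.next_id += 1
        self.to_original[new_id] = original_id
        return new_id

    def restore_response(self, modified_id):
        """Restore a response from endpoint B to the original stream id."""
        return self.to_original[modified_id]
```

Objects that the security device inspects (and possibly holds or reorders) thus always reach the second endpoint with consecutive identifiers, while the first endpoint sees responses under the identifiers it originally used.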

MULTICAST FLOW SCHEDULING IN A DATA CENTER

In one example embodiment, a server generates a candidate instantiation of virtual applications among a plurality of hosts in a data center to support a multicast stream. The server provides, to a first set of agents corresponding to a first set of the plurality of hosts, a command to initiate a test multicast stream. The server provides, to a second set of agents corresponding to a second set of the plurality of hosts, a command to join the test multicast stream. The server obtains, from the second set of agents, a message indicating whether the second set of agents received the test multicast stream. If the message indicates that the second set of agents received the test multicast stream, the server causes the virtual applications to be instantiated in accordance with the candidate instantiation of the virtual applications.
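The server-side decision can be sketched as follows; `start_test_stream` and `join_test_stream` stand in for the commands sent to the two sets of host agents and are hypothetical names, as is the dictionary shape of the candidate:

```python
def validate_and_instantiate(candidate, start_test_stream, join_test_stream):
    """Instantiate the candidate placement only if the test multicast
    stream reaches every receiver.

    candidate: {'senders': [...], 'receivers': [...]}
    join_test_stream returns the receivers that reported the test stream.
    """
    start_test_stream(candidate["senders"])          # first set of agents
    received = join_test_stream(candidate["receivers"])  # second set
    if set(received) == set(candidate["receivers"]):
        return "instantiate"   # deploy per the candidate instantiation
    return "reject"            # multicast did not reach all hosts
```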

Communication system, communication method, and computer program product

A communication system according to an embodiment includes one or more hardware processors. The one or more hardware processors calculate indicators used to detect presence of abnormality caused by a situation in which a size of a message is larger than a maximum allowable size of a queue, the indicators being calculated based on gate control information including a plurality of entries each of which indicates gate states corresponding to a plurality of queues.
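One plausible way to compute such an indicator, assuming an 802.1Qbv-style gate control list of (duration, set-of-open-queues) entries and a fixed link rate. The conversion from a queue's longest contiguous open interval to a per-queue maximum transmissible size is an illustrative interpretation, not taken from the abstract:

```python
def max_open_bytes(gcl, link_bps):
    """Per-queue byte budget: the longest contiguous open interval of the
    cyclic gate schedule, converted to bytes at the given link rate."""
    cycle = sum(duration for duration, _ in gcl)
    queues = set().union(*(open_set for _, open_set in gcl))
    budgets = {}
    for q in queues:
        best = cur = 0.0
        # walk the schedule twice to catch open runs that wrap the cycle
        for duration, open_set in gcl + gcl:
            cur = cur + duration if q in open_set else 0.0
            best = max(best, cur)
        budgets[q] = min(best, cycle) * link_bps / 8
    return budgets

def oversize_indicator(message_bytes, queue, gcl, link_bps):
    """True when the message can never fit the queue's open window."""
    return message_bytes > max_open_bytes(gcl, link_bps)[queue]
```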

Controller Command Scheduling in a Memory System to Increase Command Bus Utilization
20200050397 · 2020-02-13

A first command is scheduled on a command bus, where the first command requires use of a data bus resource at a first time period after scheduling the first command. Prior to the first time period, a second command is identified according to a scheduling policy. A determination is made whether scheduling the second command on the command bus will cause a conflict in usage of the data bus resource. In response to determining that scheduling the second command will cause the conflict in usage, a third lower-priority command is identified for which scheduling on the command bus will not cause the conflict in usage. The third command is scheduled on the command bus prior to scheduling the second command, even though the third command has lower priority than the second command.
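The conflict-aware selection can be sketched as a scan down a priority-ordered candidate list, skipping any command whose data-bus window overlaps an already-reserved window. The function and field names are illustrative assumptions:

```python
def pick_schedulable(candidates, busy_windows):
    """Return the highest-priority command whose data-bus window does not
    conflict with any already-reserved window.

    candidates: ordered highest-priority first; each declares the
    (start, end) data-bus window it will need after being scheduled.
    """
    def conflicts(window):
        start, end = window
        return any(start < b_end and b_start < end
                   for b_start, b_end in busy_windows)

    for command in candidates:
        if not conflicts(command["window"]):
            return command     # may be lower priority than a skipped one
    return None                # nothing schedulable without a conflict
```

This keeps the command bus busy instead of idling until the higher-priority command's data-bus window frees up.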

Methods and apparatus for flow control associated with a switch fabric

In some embodiments, an apparatus includes a switch fabric having at least a first switch stage and a second switch stage, an edge device operatively coupled to the switch fabric and a management module. The edge device is configured to send a first portion of a data stream to the switch fabric such that the first portion of the data stream is received at a queue of the second switch stage of the switch fabric via the first switch stage of the switch fabric. The management module is configured to send a flow control signal when a congestion level of the queue of the second switch stage satisfies a condition in response to the first portion of the data stream being received at the queue, the flow control signal being configured to trigger the edge device to suspend transmission of a second portion of the data stream.
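A minimal sketch of the management-module behavior, with a simple depth threshold standing in for the unspecified congestion condition; the class names and the threshold criterion are illustrative assumptions:

```python
class EdgeDevice:
    """Edge device that can be told to suspend its data stream."""
    def __init__(self):
        self.suspended = False

    def suspend(self):
        self.suspended = True


class ManagementModule:
    """Watches a second-stage queue and signals the edge device."""
    def __init__(self, threshold):
        self.threshold = threshold

    def on_enqueue(self, queue_depth, edge):
        # congestion condition: queue depth at the second-stage switch
        # crosses the configured threshold after receiving stream data
        if queue_depth >= self.threshold:
            edge.suspend()   # flow control signal to the edge device
```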

Memory device and method to restock entries in serial link

A method of a memory device, a storage system, and a memory device are provided. The method includes receiving a set of entries, where the set of entries includes a first entry from a source queue and addressed to a first destination and a second entry addressed to a second destination, determining to add a third entry associated with the first entry and addressed to the first destination to the set of entries, selecting one of the first entry and the third entry as a restock entry and the other of the first entry and the third entry as a pass-through entry, sending the restock entry to the source queue, and sending the second entry and the pass-through entry to a serial link connected to the first destination and the second destination.
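The restock/pass-through split can be sketched as follows. The abstract does not state the selection criterion between the first and third entries, so a boolean parameter stands in for it; all names are illustrative:

```python
def dispatch(first, second, third, prefer_newer_restock=True):
    """Split a set of entries: first and third share a destination, so one
    of them is restocked to the source queue and the other passes through
    with second to the serial link."""
    assert first["dest"] == third["dest"]
    if prefer_newer_restock:
        restock, passthru = third, first
    else:
        restock, passthru = first, third
    return {
        "to_source_queue": restock,          # restock entry
        "to_serial_link": [second, passthru],  # second + pass-through entry
    }
```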

Systems and methods to maintain time synchronization between networked devices
11929935 · 2024-03-12

A time synchronization maintenance method includes determining, by a node of a mesh communication network, a transmission time to transmit data in a transmission queue. The method also includes determining, by the node, an amount of time until commencement of a next beacon signal slot used to transmit a time synchronization beacon signal from the node or another node of the mesh communication network. Further, when the transmission time is greater than the amount of time until commencement of the next beacon signal slot, the method includes delaying transmission, by the node, of at least a portion of the data in the transmission queue until completion of the next beacon signal slot.
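The deferral decision reduces to one comparison; this sketch uses seconds and hypothetical parameter names:

```python
def next_tx_start(now, tx_duration, beacon_start, beacon_duration):
    """Return when the node may start transmitting its queued data.

    If the transmission would run past the start of the next beacon signal
    slot, defer it until that slot completes so the time synchronization
    beacon is not disrupted.
    """
    time_until_beacon = beacon_start - now
    if tx_duration > time_until_beacon:
        # transmission is longer than the gap: delay past the beacon slot
        return beacon_start + beacon_duration
    return now  # transmission fits before the beacon slot begins
```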

Dynamic virtual cut-through and dynamic fabric bandwidth allocation between virtual cut-through and store-and-forward traffic

Examples describe an egress port manager that uses an adaptive jitter selector to apply a jitter threshold level for a buffer, wherein the jitter threshold level is to indicate when egress of a packet segment from the buffer is allowed, wherein a packet segment comprises a packet header and wherein the jitter threshold level is adaptive based on a switch fabric load. In some examples, the jitter threshold level is to indicate a number of segments for the buffer's head of line (HOL) packet that are to be in the buffer or indicate a timer that starts at a time of issuance of a first read request for a first segment of the packet in the buffer. In some examples, the jitter threshold level is not more than a maximum transmission unit (MTU) size associated with the buffer. In some examples, a fetch scheduler is used to adapt an amount of interface overspeed to reduce packet fetching latency while attempting to prevent fabric saturation based on a switch fabric load level, wherein the fetch scheduler is to control the jitter threshold level for the buffer by forcing a jitter threshold level based on switch fabric load level and latency profile of the switch fabric.
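The segment-count variant of the jitter threshold can be sketched as below. The linear mapping from fabric load to threshold is an illustrative assumption; the abstract only requires that the threshold adapt to fabric load and not exceed the buffer's MTU in segments:

```python
def jitter_threshold(fabric_load, mtu_segments):
    """Adaptive threshold: higher fabric load requires more segments of the
    head-of-line packet to be buffered before egress may start.

    fabric_load is normalized to [0, 1]; the result is clamped to
    [1, mtu_segments] so it never exceeds the MTU-sized buffer.
    """
    t = int(fabric_load * mtu_segments)
    return max(1, min(t, mtu_segments))

def egress_allowed(buffered_segments, fabric_load, mtu_segments):
    """Egress of the HOL packet is allowed once enough segments arrived."""
    return buffered_segments >= jitter_threshold(fabric_load, mtu_segments)
```

Under light load the port behaves like cut-through (threshold of one segment); under heavy load it shades toward store-and-forward, reducing underrun jitter on egress.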

METHODS AND APPARATUS FOR EARLY DELIVERY OF DATA LINK LAYER PACKETS
20190342225 · 2019-11-07

Methods and apparatus for non-sequential packet transfer. Prior art multi-processor devices implement a complete network communications stack at each processor. The disclosed embodiments provide techniques for delivering network layer (L3) and/or transport layer (L4) data payloads in the order of receipt, rather than according to the data link layer (L2) order. The described techniques enable, e.g., earlier packet delivery. Such designs can operate within a substantially smaller memory footprint than prior art solutions. As a related benefit, applications that are unaffected by data link layer corruptions can receive data immediately (rather than waiting for the re-transmission of an unrelated L4 data flow), so overall network latency can be greatly reduced and user experience improved.
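The contrast between conventional L2-ordered delivery and the early-delivery approach can be illustrated with frames modeled as (l2_sequence, payload, valid) tuples; the function names and the flat frame model are illustrative assumptions:

```python
def deliver_l2_order(frames, expected=0):
    """Conventional behavior: hold payloads back until the L2 sequence is
    contiguous; a missing or corrupted frame stalls everything behind it."""
    buf = {seq: payload for seq, payload, ok in frames if ok}
    out = []
    while expected in buf:
        out.append(buf.pop(expected))
        expected += 1
    return out

def deliver_early(frames):
    """Disclosed approach: release each valid payload in order of receipt,
    ignoring the L2 sequence, so unrelated flows are never blocked."""
    return [payload for _, payload, ok in frames if ok]
```

With one corrupted frame at the head of the L2 sequence, the conventional path delivers nothing until re-transmission, while early delivery hands every intact payload up immediately.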