Patent classifications
H04L47/521
MECHANISM TO IMPLEMENT TIME STAMP-BASED TRANSMISSIONS FROM A NETWORK INTERFACE DEVICE OF A DATACENTER
A circuitry of a network interface device of a computing network is to: access a first message from a server architecture of the computing network, the first message including a timestamp based on a time at which the circuitry is to access, from a host memory, one or more data packet descriptors that correspond to a data packet to be transmitted to the computing network from the network interface device; send, for transmission to the server architecture and at a transmission time based on the timestamp, a second message, the second message including a request to access the one or more data packet descriptors; and subsequent to sending the second message for transmission, access the one or more data packet descriptors to determine one or more addresses for the data packet in the host memory.
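The abstract above can be sketched as a small timing model. This is a hypothetical illustration only, not the patented circuitry: the class and field names are assumptions, and real descriptor fetches would go over a host bus rather than a Python dict.

```python
from dataclasses import dataclass

@dataclass
class Descriptor:
    address: int   # location of the packet data in host memory
    length: int

class Nic:
    """Hypothetical model of the timestamp-gated descriptor fetch."""
    def __init__(self, host_descriptors):
        self.host = host_descriptors     # ring index -> Descriptor
        self.pending = []                # (fetch_time, ring_index)

    def on_first_message(self, timestamp, ring_index):
        # The first message carries the time at which the NIC
        # should fetch the packet's descriptors from host memory.
        self.pending.append((timestamp, ring_index))

    def tick(self, now):
        """Once 'now' reaches a pending timestamp, send the fetch
        request (the second message) and read the descriptor to
        learn the packet's address in host memory."""
        fetched, waiting = [], []
        for ts, idx in self.pending:
            if now >= ts:
                fetched.append(self.host[idx].address)
            else:
                waiting.append((ts, idx))
        self.pending = waiting
        return fetched
```

The point of the timestamp gate is that the descriptor fetch (and thus the host-memory traffic it causes) is deferred until the time the first message specified, rather than happening on arrival.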
Systems and methods for providing lockless bimodal queues for selective packet capture
In a network system, an application receiving packets can consume one or more packets in two or more stages, where the second and the later stages can selectively consume some but not all of the packets consumed by the preceding stage. Packets are transferred between two consecutive stages, called producer and consumer, via a fixed-size storage. Both the producer and the consumer can access the storage without locking it and, to facilitate selective consumption of the packets by the consumer, the consumer can transition between awake and sleep modes, where the packets are consumed in the awake mode only. The producer may also switch between awake and sleep modes. Lockless access is made possible by controlling the operation of the storage by the producer and the consumer both according to the mode of the consumer, which is communicated via a shared memory location.
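A minimal single-producer/single-consumer sketch of the fixed-size storage follows. It is illustrative only: the class name is invented, a plain attribute stands in for the shared memory location holding the consumer's mode, and the policy of discarding packets while the consumer sleeps (selective capture) is an assumption about one reasonable behavior, not the patent's exact mechanism.

```python
AWAKE, ASLEEP = 0, 1

class BimodalRing:
    """Sketch of the fixed-size storage between a producer stage and
    a consumer stage; the consumer's mode is published in a field
    standing in for the shared memory location."""
    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.capacity = capacity
        self.head = 0    # producer's write position
        self.tail = 0    # consumer's read position
        self.mode = AWAKE

    def produce(self, pkt):
        # Assumed policy: while the consumer sleeps it is not
        # interested, so the producer discards without blocking.
        if self.mode == ASLEEP or self.head - self.tail == self.capacity:
            return False
        self.buf[self.head % self.capacity] = pkt
        self.head += 1
        return True

    def consume(self):
        # Packets are consumed in the awake mode only.
        if self.mode != AWAKE or self.tail == self.head:
            return None
        pkt = self.buf[self.tail % self.capacity]
        self.tail += 1
        return pkt
```

Because both sides branch on the same mode word and each side only advances its own index, neither side ever needs to take a lock over the storage.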
Packet Buffer Spill-Over in Network Devices
A packet processor of a network device receives packets ingressing from a plurality of network links via a plurality of network ports of the network device. The packet processor buffers the packets in an internal packet memory in a plurality of queues, including a first queue. In response to the packet processor detecting congestion in the internal packet memory, the packet processor selectively forwards a group of multiple packets in the first queue from the internal packet memory to a first port, among one or more ports coupled to one or more external memories, to transfer the group of multiple packets to a first external memory that is coupled to the first port so that the first queue is stored across the internal packet memory and the first external packet memory.
Packet buffer spill-over in network devices
Packets to be transmitted from a network device are buffered in queues in a first packet memory. In response to detecting congestion in a queue in the first packet memory, groups of multiple packets are transferred from the first packet memory to a second packet memory, the second packet memory configured to buffer a portion of traffic bandwidth supported by the network device. Prior to transmission of the packets among the one or more groups of multiple packets from the network device, packets among the one or more groups of multiple packets are transferred from the second packet memory back to the first packet memory. The packets transferred from the second packet memory back to the first packet memory are retrieved from the first packet memory and are forwarded to one or more network ports for transmission of the packets from the network device.
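The two spill-over abstracts above can be illustrated with a toy queue split across a small "internal" memory and a larger "external" one. All names and the group/refill policy here are assumptions for illustration; the patents describe hardware packet memories, not Python deques.

```python
from collections import deque

class SpillQueue:
    """Sketch of one queue stored across a small internal packet
    memory and a larger external memory, spilling in groups."""
    def __init__(self, internal_limit, group_size):
        self.internal = deque()
        self.external = deque()      # groups of packets
        self.internal_limit = internal_limit
        self.group_size = group_size

    def enqueue(self, pkt):
        if self.external or len(self.internal) >= self.internal_limit:
            # Congestion: keep arrivals in external memory, in
            # groups, so FIFO order is preserved.
            if not self.external or len(self.external[-1]) == self.group_size:
                self.external.append([])
            self.external[-1].append(pkt)
        else:
            self.internal.append(pkt)

    def dequeue(self):
        # Groups move back to internal memory before transmission.
        if not self.internal and self.external:
            self.internal.extend(self.external.popleft())
        return self.internal.popleft() if self.internal else None
```

Moving whole groups rather than single packets is what lets the external memory run at a fraction of the device's full traffic bandwidth, as the second abstract notes.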
INTEGRATED TRAFFIC PROFILE FOR INDICATING CONGESTION AND PACKET DROP FOR CONGESTION AVOIDANCE
A system for facilitating an integrated traffic profile for indicating congestion and packet drop is provided. During operation, the system can determine a first traffic profile indicating whether to drop a packet based on the utilization of a queue. The packets from the queue can be forwarded via an egress port reachable via a fabric. The system can also determine a second traffic profile indicating whether to indicate congestion in the packet based on the utilization. The system can then determine a third traffic profile by combining the first and second traffic profiles. The third traffic profile can indicate acceptance at the queue for a subset of packets selected for dropping based on the utilization. Subsequently, if the packet is selected for dropping, the system can determine whether to accept the packet at the queue and set a congestion indicator in the packet based on the third traffic profile.
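One way to picture the combined profile is as a piecewise decision over queue utilization. The threshold names and the exact piecewise policy below are assumptions (the abstract does not give concrete thresholds); the key idea shown is that some packets the drop profile would discard are instead accepted with a congestion indicator set.

```python
ACCEPT, MARK, DROP = "accept", "mark", "drop"

def combined_profile(utilization, mark_start, hard_drop):
    """Sketch of the third (combined) traffic profile.
    Between mark_start and hard_drop, a packet the first profile
    selects for dropping is instead accepted at the queue with
    its congestion indicator set."""
    if utilization >= hard_drop:
        return DROP      # drop profile wins at high utilization
    if utilization >= mark_start:
        return MARK      # accepted at the queue, congestion indicated
    return ACCEPT
```

This mirrors the familiar ECN-style trade: signal congestion in-band while utilization is moderate, and fall back to dropping only once the queue is close to exhaustion.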
Method and Apparatus for Queue Scheduling
Embodiments of this application disclose a method and an apparatus for queue scheduling, to reduce a network latency in a packet transmission process. The method includes: A first device obtains a first packet balance when scheduling a first queue, where the first packet balance indicates a volume of packets that can be dequeued from the first queue; and the first device schedules a second queue based on the first packet balance.
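A loose, deficit-round-robin-flavoured reading of the abstract is sketched below. The interpretation of "packet balance" as a byte budget that carries over to the next queue is an assumption; the point illustrated is that the second queue is scheduled based on the balance left from the first, instead of waiting for the first queue to drain.

```python
from collections import deque

def schedule_round(queues, quantum):
    """Sketch: each queue is granted a quantum of bytes; the
    balance remaining after serving one queue informs when and
    how the next queue is scheduled (semantics assumed)."""
    sent, balance = [], 0
    for q in queues:
        balance += quantum
        while q and q[0] <= balance:
            size = q.popleft()       # packet sizes in bytes
            balance -= size
            sent.append(size)
    return sent
```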
System and method for a time-sensitive network
A method for a time-sensitive network (TSN) having a network topology is disclosed. The method includes determining a set of data flow permutations corresponding to the network topology, computing a respective full schedule corresponding to each data flow permutation of the set of data flow permutations, determining a respective time to compute the full schedule for each flow permutation of the set of data flow permutations, and computing a respective partial schedule for each data flow permutation of the set of data flow permutations. The method further includes selecting a data flow permutation of the set of data flow permutations based at least in part on the respective time to compute the full schedule for the selected flow permutation, and saving the selected data flow permutation to a memory.
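The selection step can be sketched as follows. The function names are assumptions, and a real TSN scheduler would use a constraint solver rather than an arbitrary callback; the sketch only shows the "time each full-schedule computation, keep the fastest" selection criterion from the abstract.

```python
import itertools
import time

def choose_permutation(flows, compute_full_schedule):
    """Sketch: compute a full schedule for every flow permutation,
    time each computation, and keep the permutation whose full
    schedule was fastest to compute."""
    best_time, best_perm = None, None
    for perm in itertools.permutations(flows):
        t0 = time.perf_counter()
        schedule = compute_full_schedule(perm)
        elapsed = time.perf_counter() - t0
        if schedule is not None and (best_time is None or elapsed < best_time):
            best_time, best_perm = elapsed, perm
    return best_perm
```

Note that exhaustively enumerating permutations is factorial in the number of flows, which is presumably why the abstract also mentions cheaper partial schedules.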
Arbitration of multiple-thousands of flows for convergence enhanced ethernet
In one embodiment, a method includes selecting a flow from a head of a first control queue or a second control queue. The method also includes providing service to the selected flow. Moreover, the method includes decreasing a service credit of the selected flow by an amount corresponding to an amount of service provided to the selected flow. In another embodiment, a computer program product includes a computer readable storage medium having program code embodied therewith. The embodied program code is readable/executable by a device to select, by the device, a flow from a head of a first control queue or a second control queue. The embodied program code is also readable/executable to provide, by the device, service to the selected flow, and decrease, by the device, a service credit of the selected flow by an amount corresponding to an amount of service provided to the selected flow.
Self-Protecting Computer Network Router with Queue Resource Manager
A self-protecting router limits the extent to which its queues can be filled with potentially malicious or otherwise harmful messages received from outside the router, thereby ensuring the queues have sufficient room to accept messages that are generated internally within the router and that are necessary for management and operation of the router. Such routers are, therefore, immune to attack by floods of messages from malicious or malfunctioning network nodes, such as computers, switches and other routers.
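The reservation idea can be shown with a toy queue that caps the externally sourced share of its capacity. The class and limit names are invented for illustration; only the headroom-reservation policy comes from the abstract.

```python
class ProtectedQueue:
    """Sketch: cap the share of the queue that messages received
    from outside the router may occupy, reserving headroom for
    internally generated control messages."""
    def __init__(self, capacity, external_limit):
        self.capacity = capacity
        self.external_limit = external_limit   # assumed < capacity
        self.items = []
        self.external_count = 0

    def enqueue(self, msg, internal):
        if len(self.items) >= self.capacity:
            return False
        if not internal and self.external_count >= self.external_limit:
            return False   # keep room for the router's own messages
        self.items.append(msg)
        if not internal:
            self.external_count += 1
        return True
```

Under this policy a flood of external messages can never fill the queue past its cap, so internal management messages are always accepted.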
NETWORK PROCESSOR WITH EXTERNAL MEMORY PROTECTION
Systems and methods for protecting external memory resources to prevent bandwidth collapse in a network processor. One embodiment is a network processor including an input port configured to receive packets from a source device, on-chip memory configured to store packets in queues, and external memory configured to provide a backing store to the on-chip memory. The network processor also includes a processor configured, in response to determining that the source device is unresponsive to a congestion notification, to reduce a size of one or more queues to prevent packets transferring from the on-chip memory to the external memory.
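A toy version of the protective reaction is sketched below. The halving policy and field names are assumptions; what the sketch shows is the abstract's reaction: when the source keeps sending after a congestion notification, the queue's size is reduced so packets stop flowing from on-chip memory into the external backing store.

```python
class BackpressuredQueue:
    """Sketch: if the source is unresponsive to a congestion
    notification, shrink the queue limit to protect external
    memory bandwidth (shrink policy assumed)."""
    def __init__(self, limit):
        self.limit = limit
        self.depth = 0
        self.notified = False

    def on_packet(self):
        if self.depth >= self.limit:
            if self.notified:
                # Source ignored the notification: reduce queue size
                # instead of spilling to the external backing store.
                self.limit = max(1, self.limit // 2)
            else:
                self.notified = True   # send congestion notification
            return False
        self.depth += 1
        return True
```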