Patent classifications
H04L47/6255
SYSTEM AND METHOD FOR FACILITATING DATA-DRIVEN INTELLIGENT NETWORK WITH INGRESS PORT INJECTION LIMITS
Data-driven intelligent networking systems and methods are provided. The system can accommodate dynamic traffic while applying injection limits to different traffic classes at an ingress edge port. The system can maintain state information of individual packet flows, which can be set up or released dynamically based on injected data. Each flow can be provided with a flow-specific input queue upon arriving at a switch. Packets of a respective flow can be acknowledged after reaching the egress point of the network, and the acknowledgement packets can be sent back to the ingress point of the flow along the same data path. Furthermore, an edge switch can dynamically allocate the ingress port bandwidth among the traffic classes that are active at a given moment.
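One aspect of the abstract, dynamically dividing an ingress edge port's bandwidth among the traffic classes active at a given moment, can be sketched as follows. This is a minimal illustration under assumed semantics (weighted proportional sharing); the function and parameter names are hypothetical, not from the patent.

```python
def allocate_injection_limits(port_bandwidth, active_classes, weights):
    """Split the ingress port bandwidth among the currently active traffic
    classes in proportion to their configured weights (hypothetical scheme).
    Inactive classes receive no share, so active ones absorb the full port."""
    total = sum(weights[c] for c in active_classes)
    if total == 0:
        return {c: 0.0 for c in active_classes}
    return {c: port_bandwidth * weights[c] / total for c in active_classes}
```

Because only active classes enter the computation, a class going idle immediately frees its share for the remaining classes, matching the abstract's "active at a given moment" behavior.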
Channel Bonding in Multiple-Wavelength Passive Optical Networks (PONs)
An apparatus comprises: a processor configured to: select a first channel from among a plurality of channels in a network, and generate a first message assigning a first grant corresponding to the first channel; a transmitter coupled to the processor and configured to transmit the first message; and a receiver coupled to the processor and configured to receive a second message on the first channel and in response to the first message. A method comprises: selecting a first channel from among a plurality of channels in a network; generating a first message assigning a first grant corresponding to the first channel; transmitting the first message; and receiving a second message on the first channel in response to the first message.
Low Latency Queuing System
Disclosed herein are methods and apparatuses for processing network traffic by a queuing system, which may include: receiving pointers to chunks of memory allocated responsive to receipt of network traffic, the chunks of memory each including a portion of a queue batch, wherein the queue batch includes a plurality of queue requests; generating a data structure including the pointers and a reference count; assigning a first queue request of the plurality of queue requests to a second core; generating a first structured message for the first queue request; and storing the first structured message in a structured message passing queue associated with the second core, wherein a second processing thread associated with the second core, responsive to receiving the first structured message, processes the first queue request by retrieving the first queue request from at least one of the chunks of memory.
Traffic Management in a Network Switching System with Remote Physical Ports
A switching system includes a port extender device coupled to a central switching device. Packets processed by the central switching device are forwarded to the port extender device and enqueued in ones of a plurality of egress queues in the port extender device for transmission of the packets via the front ports of the port extender device. Respective egress queues in the port extender device have a queue depth that is less than a queue depth of corresponding respective egress queues in the central switching device. A flow control message indicative of congestion in a particular egress queue of the port extender device is generated and transmitted to the central switching device to control transmission of packets from the central switching device to the particular egress queue of the port extender device.
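The core idea above, a shallow egress queue on the port extender that signals congestion back to the central switching device, can be sketched as a simple watermark check. The class and threshold names are hypothetical; the patent does not specify the congestion criterion, so a fixed pause threshold is assumed here.

```python
class ExtenderEgressQueue:
    """Shallow egress queue on a port extender (assumed watermark scheme).

    Its capacity is smaller than that of the corresponding queue in the
    central switching device, so congestion must be reported upstream.
    """

    def __init__(self, capacity, pause_threshold):
        self.capacity = capacity
        self.pause_threshold = pause_threshold
        self.depth = 0

    def enqueue(self):
        """Accept one forwarded packet; return True when a flow-control
        message should be sent to pause the central switching device."""
        self.depth += 1
        return self.depth >= self.pause_threshold

    def transmit(self):
        """Send one packet out a front port; return True once the queue has
        drained below the threshold and transmission may resume."""
        if self.depth:
            self.depth -= 1
        return self.depth < self.pause_threshold
```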
Data processing method and apparatus, and switching device using footprint queues
This application discloses a data processing method and apparatus, and a switching device. The data processing method includes: obtaining a destination address of a data packet received by an input port; determining available output ports based on the destination address; determining a busy degree of each available output port; when no available output port is non-busy, determining a quantity of footprint queues on each available output port and selecting the available output port with the largest quantity of footprint queues as a target output port; determining a busy degree of the queues on the target output port; and, when there is no non-busy queue on the target output port, selecting a footprint queue on the target output port as a target output queue. In the foregoing manner, network resources are properly used, and network blocking can be effectively alleviated.
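The port-selection step above can be sketched directly: prefer any non-busy output port, and only when all candidates are busy fall back to the one with the most footprint queues. Function and parameter names are illustrative, not from the application.

```python
def select_target_output_port(ports, is_busy, footprint_queues):
    """Pick an output port for a packet (hypothetical helper).

    ports            -- candidate output ports for the destination address
    is_busy          -- dict mapping port -> busy flag
    footprint_queues -- dict mapping port -> number of footprint queues
    """
    non_busy = [p for p in ports if not is_busy[p]]
    if non_busy:
        # Any non-busy port can carry the packet without queuing behind others.
        return non_busy[0]
    # All busy: the port with the most footprint queues is assumed to be the
    # most promising place to park the packet, per the abstract.
    return max(ports, key=lambda p: footprint_queues[p])
```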
Interconnect flow control
A communication technique includes determining, at least in part by comparing data associated with a packet that has been pulled from a received packet queue with a highest sequence number among packets that have been placed in the received packet queue, that the received packet queue has space available to receive a further packet. Based at least in part on the determination, a next packet is sent to a receiver with which the received packet queue is associated.
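The space check described above amounts to comparing two sequence numbers: the highest sequence number placed in the queue and the sequence number of the most recently pulled packet. A minimal sketch, assuming consecutive sequence numbering (names are hypothetical):

```python
def queue_has_space(highest_seq_placed, highest_seq_pulled, capacity):
    """Estimate queue occupancy from sequence numbers alone.

    With consecutive numbering, the packets still held in the queue are
    those between the last pulled sequence number (exclusive) and the
    highest placed sequence number (inclusive).
    """
    outstanding = highest_seq_placed - highest_seq_pulled
    return outstanding < capacity
```

The appeal of this scheme is that the sender can decide whether to transmit a next packet without an explicit occupancy counter being exchanged.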
LONGEST QUEUE IDENTIFICATION
The present disclosure generally discloses a longest queue identification mechanism. The longest queue identification mechanism, for a set of queues of a buffer, may be configured to identify the longest queue of the set of queues and determine a length of the longest queue of the set of queues. The longest queue identification mechanism may be configured to identify the longest queue of the set of queues using only two variables including a longest queue identifier (LQID) variable for the identity of the longest queue and a longest queue length (LQL) variable for the length of the longest queue. It is noted that the identity of the longest queue of the set of queues may be an estimate of the identity of the longest queue and, similarly, that the length of the longest queue of the set of queues may be an estimate of the length of the longest queue.
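The two-variable estimator described above can be sketched as follows. The patent does not spell out the exact update rules, so this sketch assumes the tracker is told each queue's new length on enqueue and dequeue events; class and attribute names (LQID/LQL as `lqid`/`lql`) follow the abstract but are otherwise illustrative.

```python
class LongestQueueTracker:
    """Estimate the longest queue in a set using only two variables:
    lqid (longest queue identifier) and lql (longest queue length).
    Both are estimates, as the abstract notes, since a non-tracked queue
    may shrink or grow without updating the tracked maximum."""

    def __init__(self):
        self.lqid = None  # LQID: identity of the (estimated) longest queue
        self.lql = 0      # LQL: length of the (estimated) longest queue

    def on_enqueue(self, qid, new_length):
        if qid == self.lqid:
            self.lql = new_length
        elif new_length > self.lql:
            # This queue has overtaken the tracked one.
            self.lqid = qid
            self.lql = new_length

    def on_dequeue(self, qid, new_length):
        if qid == self.lqid:
            # Another queue may now be longer, but with only two variables
            # we cannot know which, so lql becomes an estimate.
            self.lql = new_length
```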
SELECTIVELY BYPASSING A ROUTING QUEUE IN A ROUTING DEVICE IN A FIFTH GENERATION (5G) OR OTHER NEXT GENERATION NETWORK
The technologies described herein are generally directed toward shedding processing loads associated with route updates. According to an embodiment, a system can comprise a processor and a memory that can enable operations including receiving a communication from a second routing device via a network. The operations can further comprise, in response to a queueing delay being determined to be less than a threshold, queueing, in the queue, the communication for a third routing device selected according to a first selection process as being on a route to a destination routing device for the communication. Further, in response to the queueing delay of the queue being determined to be equal to or above the threshold, the operations can comprise transmitting the communication to a fourth routing device, the fourth routing device being selected according to a second selection process different from the first selection process.
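The threshold decision above, queue toward the normally selected next hop when the delay is acceptable, otherwise bypass the queue to an alternatively selected hop, can be sketched in a few lines. The function signature and hop names are hypothetical.

```python
def select_next_hop(queue, queueing_delay, threshold,
                    primary_hop, alternate_hop, communication):
    """Route a communication per the assumed bypass scheme.

    Below the delay threshold, enqueue toward the hop chosen by the first
    selection process; at or above it, skip the queue entirely and forward
    to a hop chosen by a second, different selection process.
    """
    if queueing_delay < threshold:
        queue.append((primary_hop, communication))
        return primary_hop
    # Bypass the routing queue to shed load during route-update storms.
    return alternate_hop
```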
Utilizing multiple algorithms in a distributed-service environment
Techniques are described for producing a gentle reduction in throughput in a distributed service when a node of the service encounters a very large backlog of requests and/or when a previously offline node of the service is brought back online. These techniques may utilize multiple different algorithms to determine the amount of work that the distributed service is able to accept at any given time, rather than relying on a single algorithm.
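One plausible way to combine multiple admission algorithms, not specified by the abstract, is to let the most conservative one win. The two estimators below (backlog headroom and rate headroom) are invented for illustration only.

```python
def admissible_work(backlog, capacity, recent_rate, target_rate):
    """Amount of new work the service accepts right now (hypothetical).

    Two independent algorithms each propose a limit; accepting the minimum
    of the two throttles intake gently as either the backlog grows or the
    recent processing rate approaches its target.
    """
    by_backlog = max(0, capacity - backlog)      # room left in the queue
    by_rate = max(0, target_rate - recent_rate)  # headroom in work rate
    return min(by_backlog, by_rate)
```

Taking the minimum means a recovering node with a huge backlog ramps up gradually even if its processing rate looks healthy, which is the gentle-reduction behavior the abstract describes.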
METHOD AND DEVICE FOR ESTIMATING WORKLOAD OF NETWORK FUNCTION
A workload estimation method estimates a workload of a network function that processes a received flow based on a rule searched for from among a plurality of rules. The method, which is executed by a processor, includes: obtaining information that indicates maximum throughputs of the network function respectively for the rules; measuring traffic volumes of flows that respectively match the rules; calculating ratios of the measured traffic volumes to the maximum throughputs respectively for the rules; and estimating a workload of the network function based on a sum of the ratios calculated respectively for the rules.
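The estimate described above reduces to a per-rule ratio sum. A minimal sketch, with hypothetical names and per-rule values supplied as dictionaries:

```python
def estimate_workload(max_throughput, measured_volume):
    """Workload estimate for a network function (assumed units match).

    For each rule, the measured traffic volume is divided by that rule's
    maximum throughput; the sum of these ratios is the workload. A value
    near 1.0 suggests the function is close to saturation.
    """
    return sum(measured_volume[rule] / max_throughput[rule]
               for rule in max_throughput)
```

For example, a rule handling 50 units of traffic against a 100-unit maximum contributes 0.5 to the workload, and a second rule at 50 of 200 contributes 0.25, for a total of 0.75.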