Patent classifications
H04L12/865
PRIORITIZED COMMUNICATION SESSION ESTABLISHMENT IN COMPUTER NETWORKS
Techniques are described for prioritized establishment of communication sessions. In one example, a network device parses a configuration file that defines a plurality of communication sessions of a routing protocol and includes priority values assigned to the communication sessions. The network device creates two or more lists of communication sessions for two or more of the priority values based on the configuration file, wherein each list of the two or more lists is created for a particular priority value of the priority values and defines one or more communication sessions of the plurality of communication sessions that are assigned the particular priority value. The network device then establishes the one or more communication sessions included in each list of the two or more lists according to an ordering based on the priority values associated with the two or more lists.
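The group-by-priority-then-establish-in-order flow described above can be sketched in Python. All names here are illustrative assumptions; a real implementation would parse the routing-protocol configuration file and actually open sessions (e.g., BGP peerings) rather than return an ordered list:

```python
from collections import defaultdict

def establish_by_priority(config_sessions):
    """config_sessions: list of (peer, priority) pairs parsed from a
    configuration file. Build one list per priority value, then walk the
    lists in ascending priority-value order (lower value = higher priority)."""
    lists_by_priority = defaultdict(list)
    for peer, priority in config_sessions:
        lists_by_priority[priority].append(peer)

    order = []
    for priority in sorted(lists_by_priority):
        for peer in lists_by_priority[priority]:
            order.append(peer)  # a real device would establish the session here
    return order

print(establish_by_priority([("r1", 2), ("r2", 1), ("r3", 1), ("r4", 3)]))
# → ['r2', 'r3', 'r1', 'r4']
```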
Efficient memory utilization and egress queue fairness
In one embodiment, a network device includes multiple ports to be connected to a packet data network so as to serve as both ingress and egress ports in receiving and forwarding of data packets, including unicast and multicast data packets; a memory coupled to the ports to contain a combined unicast-multicast user-pool storing the received unicast and multicast data packets; and packet processing logic to compute a combined unicast-multicast user-pool free space by counting only once at least some of the multicast packets, which are stored once in the combined unicast-multicast user-pool, to compute an occupancy of an egress queue by counting the space used by the data packets of the egress queue in the combined unicast-multicast user-pool, and to apply an admission policy to a received data packet for entry into the egress queue based on at least the computed occupancy of the egress queue and the computed combined unicast-multicast user-pool free space.
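A minimal sketch of such an admission check, under the assumption of byte-granularity accounting and an illustrative per-queue limit. The single-copy storage of multicast packets is reflected in `stored_bytes` counting each multicast packet once, however many egress queues reference it:

```python
def admit(pool_size, stored_bytes, queue_occupancy, pkt_len, queue_limit):
    """Decide whether a received packet may enter an egress queue.

    stored_bytes    -- bytes currently held in the combined unicast-multicast
                       user-pool (each multicast packet counted once)
    queue_occupancy -- bytes in the pool attributed to this egress queue
    queue_limit     -- illustrative per-queue fairness cap
    """
    free_space = pool_size - stored_bytes
    if pkt_len > free_space:
        return False  # combined user-pool cannot hold another copy
    if queue_occupancy + pkt_len > queue_limit:
        return False  # queue would exceed its fair share
    return True
```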
CIRCUITRY WITHIN ROUTER OR SWITCH AND CORRESPONDING FRAME PROCESSING METHOD
The present invention provides a circuitry within a router or a switch, wherein the circuitry comprises a priority decision circuitry and a per-stream filtering and policing circuitry. The priority decision circuitry is configured to determine a priority of a frame received from a port of the router or the switch. The per-stream filtering and policing circuitry is configured to classify the frame into a first-type frame, a second-type frame or a third-type frame, wherein if the frame is determined as the first-type frame, the per-stream filtering and policing circuitry forwards the frame; if the frame is determined as the third-type frame, the per-stream filtering and policing circuitry discards the frame; and if the frame is determined as the second-type frame, the per-stream filtering and policing circuitry changes the priority of the frame, and the per-stream filtering and policing circuitry forwards the frame with the changed priority.
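The three-way classification can be sketched as follows. The type names and the `new_priority` parameter are assumptions for illustration; in hardware this decision would be made per stream by the filtering and policing circuitry, not per call:

```python
def process_frame(frame_priority, frame_type, new_priority=0):
    """Classify and act on a frame.

    first-type  -- forward unchanged
    third-type  -- discard
    second-type -- forward with a changed priority
    Returns (action, priority).
    """
    if frame_type == "first":
        return ("forward", frame_priority)
    if frame_type == "third":
        return ("discard", None)
    # second-type: priority is rewritten before forwarding
    return ("forward", new_priority)
```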
Data flow processing method and device
This application provides a data flow processing method and a device. A host determines a priority corresponding to a first data flow to be sent to a switch, and adds the priority to the first data flow to generate a second data flow that includes the priority. The host sends the second data flow to the switch, so that the switch processes the second data flow according to its priority. Because the host assigns the priority to the data flow, the switch does not need to determine whether the data flow is an elephant flow or a mouse flow, thereby saving hardware resources of the switch. Nor does the switch need to determine the priority of the data flow itself, so it can process the data flow in a timely manner.
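The division of labor can be sketched as below, with the host tagging each packet of a flow and the switch merely honoring the carried priority. The dictionary encoding of the priority field is an assumption; a real host would set a header field such as a DSCP value:

```python
def tag_flow(payload_packets, priority):
    """Host side: attach a priority field to every packet of a flow, so the
    switch never has to classify elephant vs. mouse flows itself."""
    return [{"priority": priority, "data": p} for p in payload_packets]

def switch_process(tagged_packets):
    """Switch side: serve packets strictly by the carried priority
    (higher value first); no flow classification is performed here."""
    return sorted(tagged_packets, key=lambda pkt: -pkt["priority"])
```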
Hybrid packet memory for buffering packets in network devices
A network device processes received packets at least to determine the port or ports of the network device via which to transmit each packet. The network device also classifies the packets into packet flows, the packet flows being further categorized into traffic pattern categories based on the traffic pattern characteristics of the packet flows. The network device buffers packets that belong to the packet flows, according to the traffic pattern categories of the flows, in a first packet memory or in a second packet memory, the first packet memory having a memory access bandwidth different from that of the second packet memory. After processing the packets, the network device retrieves them from the packet memory in which they are buffered and forwards them to the determined one or more ports for transmission.
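The category-to-memory placement step can be sketched as a simple mapping. The category names and the two-way high/low bandwidth split are illustrative assumptions; the abstract only requires that the two memories differ in access bandwidth:

```python
def select_memory(traffic_category):
    """Place a flow's packets in the higher-bandwidth memory when its
    traffic pattern benefits from it, else in the bulk memory."""
    high_bandwidth_categories = {"bursty", "latency-sensitive"}
    if traffic_category in high_bandwidth_categories:
        return "fast_memory"   # first packet memory: higher access bandwidth
    return "bulk_memory"       # second packet memory: larger, slower
```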
Queue management in a forwarder
A queue management method, system, and recording medium include Selective Acknowledgment (SACK) examination, which examines the SACK blocks seen at the forwarder to selectively drop packets in the forward-flow queue based on the reverse-flow queue, and MultiPath Transmission Control Protocol (MPTCP) examination, which examines multipath headers to recognize MPTCP flows and examines the reverse-flow queue to determine whether redundant data has been sent, such that the dropping removes the redundant data.
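The SACK-based part can be sketched as follows, under the assumption that queued segments are modeled as byte ranges and that a SACK block observed on the reverse flow means the receiver already holds those bytes, so a queued copy is redundant:

```python
def drop_covered(forward_queue, sack_blocks):
    """forward_queue: list of (start, end) byte ranges queued toward the
    receiver. sack_blocks: (start, end) ranges the receiver has SACKed,
    taken from acknowledgments seen on the reverse flow. Segments fully
    covered by a SACK block are dropped as redundant."""
    def covered(segment):
        start, end = segment
        return any(s <= start and end <= e for s, e in sack_blocks)

    return [seg for seg in forward_queue if not covered(seg)]
```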
Packet processing device and packet processing method
A packet processing device includes a first unit, a second unit, and a switching unit. The first unit counts the number of arrived packets in a first period, which extends from the time slot immediately after a priority section up to the end of the initial time slot in the subsequently arriving priority section. When the counted number of arrived packets is positive, the first unit determines that forward mismatch has occurred in the observation cycle. The second unit counts the number of arrived packets in a second period, which extends from the time slot immediately after the priority section in the first period up to the end of the initial time slot of burst sections in the subsequently arriving priority section. When the counted number of arrived packets is "0", the second unit determines that backward mismatch has occurred in the observation cycle.
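The two decision rules reduce to sign tests on the period counters, which can be sketched as below (the function boundary between the two units is an assumption; in the device each unit maintains its own counter):

```python
def detect_mismatch(count_first_period, count_second_period):
    """Return (forward_mismatch, backward_mismatch) for one observation cycle.

    forward mismatch  -- any packet arrived in the first period (count > 0)
    backward mismatch -- no packet arrived in the second period (count == 0)
    """
    forward_mismatch = count_first_period > 0
    backward_mismatch = count_second_period == 0
    return forward_mismatch, backward_mismatch
```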
Methods and apparatus for memory allocation and reallocation in networking stack infrastructures
Unlike prior art monolithic networking stacks, the exemplary networking stack architecture described hereinafter includes various components that span multiple domains (both in-kernel and non-kernel). For example, unlike traditional "socket"-based communication, disclosed embodiments can transfer data directly between the kernel and user-space domains. A user space networking stack is disclosed that enables extensible, cross-platform-capable, user space control of the networking protocol stack functionality. The user space networking stack facilitates tighter integration between the protocol layers (including TLS) and the application or daemon. Exemplary systems can support multiple networking protocol stack instances (including an in-kernel traditional network stack). Due to this disclosed architecture, physical memory allocations (and deallocations) may be implemented more flexibly.
Signaling storm reduction from radio networks
A method for signaling storm reduction is disclosed, comprising concentrating a plurality of signaling messages from a radio access network node to a core network node at a signaling concentrator; and processing the plurality of signaling messages with a mobile device identifier rule, at a rate equal to or greater than a line rate of a link from the radio access network to the signaling concentrator, wherein processing the plurality of signaling messages further comprises determining whether to drop each of the plurality of signaling messages.
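The per-message drop decision at the concentrator can be sketched as follows. The `imsi` field name and the set-membership form of the mobile-device-identifier rule are illustrative assumptions; the abstract only requires that each concentrated message be kept or dropped by such a rule at line rate or faster:

```python
def filter_messages(messages, blocked_ids):
    """Apply a mobile-device-identifier rule at the signaling concentrator:
    drop each message whose device identifier matches the rule, forward
    the rest toward the core network node."""
    return [m for m in messages if m["imsi"] not in blocked_ids]
```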
Systems and methods for extending internal endpoints of a network device
An integrated circuit (IC) device includes a network device. The network device includes first and second network ports, each configured to connect to a network, and an internal endpoint port configured to connect to a first endpoint having a first processing unit and a second endpoint having a second processing unit. A lookup circuit is configured to provide a first forwarding decision for a first frame to be forwarded to the first endpoint. An endpoint extension circuit is configured to determine a first memory channel, based on the first forwarding decision, for forwarding the first frame, and to forward the first frame to the first endpoint using the determined memory channel.
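The two-stage decision, lookup to forwarding decision and forwarding decision to memory channel, can be sketched with plain dictionaries standing in for the lookup circuit and the endpoint extension circuit (the table keys and channel numbers are illustrative assumptions):

```python
def forward_frame(frame, lookup_table, channel_map):
    """lookup_table: destination address -> endpoint (the forwarding
    decision); channel_map: endpoint -> memory channel used by the
    endpoint extension circuit to deliver the frame."""
    endpoint = lookup_table[frame["dst"]]
    channel = channel_map[endpoint]
    return endpoint, channel  # the frame would be written to this channel
```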