Patent classifications
H04L12/863
Network device with less buffer pressure
A buffer module includes N queues configured to buffer M data streams, where N is less than M. A counting module includes M counters that are in a one-to-one correspondence with the M data streams and are configured to count buffered quantities for the M data streams in the N queues. A control module is configured to, when a count value on a first counter exceeds a corresponding threshold, discard a to-be-enqueued data packet of the data stream corresponding to the first counter, or control a sending module to send pause indication information to an upper-level control module.
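A minimal sketch of the counting scheme, assuming a single uniform threshold and hypothetical names (not the patented implementation): M per-stream counters guard admission into N shared queues, and a stream whose counter reaches its threshold either has its packet dropped or triggers a pause toward the upper level.

```python
# Sketch: per-stream counters over shared queues (illustrative only).
from collections import deque

N_QUEUES = 4          # N shared queues
THRESHOLD = 100       # per-stream threshold (assumed uniform here)

queues = [deque() for _ in range(N_QUEUES)]
counters = {}         # stream_id -> buffered-packet count (the M counters, created lazily)

def enqueue(stream_id, packet, pause_upstream):
    """Admit a packet of one of M streams into one of N queues (N < M)."""
    count = counters.get(stream_id, 0)
    if count >= THRESHOLD:
        # Either discard the to-be-enqueued packet or ask the upper level to pause.
        pause_upstream(stream_id)      # or simply return False to signal a drop
        return False
    counters[stream_id] = count + 1
    queues[hash(stream_id) % N_QUEUES].append((stream_id, packet))
    return True

def dequeue(queue_idx):
    """Remove a packet and decrement the counter of its stream."""
    stream_id, packet = queues[queue_idx].popleft()
    counters[stream_id] -= 1
    return packet
```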
Methods and apparatuses for transparent embedding of photonic switching into electronic chassis for scaling data center cloud system
Methods and apparatuses are provided for transferring photonic cells or frames between a photonic switch and an electronic switch, enabling a scalable data center cloud system with photonic functions transparently embedded into an electronic chassis. In various embodiments, photonic interface functions may be transparently embedded into existing switch chips (or switch cards) without changes to the line cards. The embedded photonic interface functions may provide the switch cards with the ability to interface with both existing line cards and photonic switches. In order to embed photonic interface functions without changes to the existing line cards, embodiments use two-tier buffering with a pause signalling or pause messaging scheme for managing the two-tier buffer memories.
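A rough sketch of two-tier buffering with pause messaging, using assumed watermarks and names rather than the embodiment's values: frames land in a larger second-tier buffer, a small first-tier buffer feeds the photonic-facing interface, and pause/resume messages go toward the line cards as occupancy crosses the watermarks.

```python
# Illustrative two-tier buffer with XON/XOFF-style pause messaging
# (watermarks, structure, and names are assumptions, not the patented scheme).
from collections import deque

class TwoTierBuffer:
    def __init__(self, send_pause, send_resume, high=8, low=2):
        self.tier1 = deque()        # small buffer at the embedded photonic interface
        self.tier2 = deque()        # larger buffer on the switch card
        self.high, self.low = high, low
        self.send_pause, self.send_resume = send_pause, send_resume
        self.paused = False

    def ingress(self, frame):
        self.tier2.append(frame)    # frames from the line cards land in tier 2
        self._refill()
        self._signal()

    def egress(self):
        """Pull the next frame toward the photonic switch."""
        frame = self.tier1.popleft() if self.tier1 else None
        self._refill()
        self._signal()
        return frame

    def _refill(self):
        while self.tier2 and len(self.tier1) < self.high:
            self.tier1.append(self.tier2.popleft())

    def _signal(self):
        # Pause the upstream sender as the buffers fill, resume when they drain.
        occupancy = len(self.tier1) + len(self.tier2)
        if occupancy >= self.high and not self.paused:
            self.paused = True
            self.send_pause()
        elif occupancy <= self.low and self.paused:
            self.paused = False
            self.send_resume()
```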
QoS/QoE enforcement driven sub-service flow management in 5G system
Methods and apparatus, including computer program products, are provided for QoE/QoS management. In some example embodiments, there may be provided a method. The method may include detecting, by an enforcement point, an initiation of a session of an application; determining, by the enforcement point, whether a new subservice flow needs to be established to enable, for the initiated session, a quality of service differentiation and/or a quality of experience differentiation; sending, by the enforcement point, an indication to a radio to enable the radio to establish a radio buffer to handle the new subservice flow, when the new subservice flow needs to be established; sending, by the enforcement point, parameter information to the radio to enable the radio to configure at least one service parameter of the radio buffer; and forwarding, by the enforcement point, user-plane data associated with the session to the radio together with a subservice flow identifier. Related apparatus, systems, methods, and articles are also described.
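A minimal sketch of the enforcement-point behaviour described above, with hypothetical message names and parameters (the actual signalling is not specified here):

```python
# Sketch of an enforcement point establishing and using subservice flows
# (message formats and parameters are illustrative assumptions).

class EnforcementPoint:
    def __init__(self, radio_send):
        self.radio_send = radio_send          # callable delivering messages to the radio
        self.flows = {}                       # (app_id, qos_class) -> subservice flow id
        self.next_flow_id = 1

    def on_session_initiated(self, app_id, qos_class, params):
        key = (app_id, qos_class)
        if key not in self.flows:             # differentiation needs a new subservice flow
            flow_id = self.next_flow_id
            self.next_flow_id += 1
            self.flows[key] = flow_id
            # Indicate that the radio should establish a buffer for this flow,
            # then send the parameter information used to configure that buffer.
            self.radio_send({"type": "establish_buffer", "flow_id": flow_id})
            self.radio_send({"type": "configure_buffer", "flow_id": flow_id,
                             "params": params})
        return self.flows[key]

    def forward(self, app_id, qos_class, packet):
        flow_id = self.flows[(app_id, qos_class)]
        # User-plane data is forwarded together with the subservice flow identifier.
        self.radio_send({"type": "data", "flow_id": flow_id, "payload": packet})
```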
Enhanced distributed channel access (EDCA) queue for real time application (RTA) packets
A wireless local area network (WLAN) supports real-time application (RTA) packets, which are sensitive to communication delays, as well as non-real-time packets over a network using carrier sense multiple access/collision avoidance (CSMA/CA) with enhanced distributed channel access (EDCA) in an EDCA queue system. At least one new access class (AC) and an associated transmit queue are added for enqueuing RTA packets, while non-RTA packets are enqueued to the original transmit queues in the EDCA queue system. A new EDCA function (EDCAF) is created for the new access class (AC) transmit queue for RTA packets, in which the RTA queue is able to contend for the channel before the expected arrival of RTA packets.
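An illustrative sketch of an EDCA-style queue set extended with an RTA access class; the contention parameters below are placeholders rather than values from the standard or the patent, while the user-priority mapping for non-RTA traffic follows the usual 802.11 convention.

```python
# Sketch: EDCA queues with an added RTA access class (parameter values are placeholders).
from collections import deque
import random

# Per-AC contention parameters: a smaller AIFS and CWmin let the RTA class
# start contending for the channel earlier than the original classes.
EDCA_PARAMS = {
    "AC_RTA": {"aifs": 1, "cw_min": 3},   # new access class for RTA packets
    "AC_VO":  {"aifs": 2, "cw_min": 3},
    "AC_VI":  {"aifs": 2, "cw_min": 7},
    "AC_BE":  {"aifs": 3, "cw_min": 15},
    "AC_BK":  {"aifs": 7, "cw_min": 15},
}
queues = {ac: deque() for ac in EDCA_PARAMS}

# Standard 802.11 user-priority-to-AC mapping, kept for non-RTA traffic.
UP_TO_AC = {0: "AC_BE", 1: "AC_BK", 2: "AC_BK", 3: "AC_BE",
            4: "AC_VI", 5: "AC_VI", 6: "AC_VO", 7: "AC_VO"}

def enqueue(packet, is_rta, user_priority):
    # RTA packets go to the new AC's transmit queue; others keep the original mapping.
    ac = "AC_RTA" if is_rta else UP_TO_AC[user_priority]
    queues[ac].append(packet)

def backoff_slots(ac):
    """Each AC runs its own EDCAF; this draw is a simplified backoff."""
    p = EDCA_PARAMS[ac]
    return p["aifs"] + random.randint(0, p["cw_min"])
```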
Techniques for efficient reordering of data packets in multipath scenarios
A reorder queue device for a multipath receiver includes a reorder queue and a reorder queue controller. The reorder queue is configured to queue data packets, numbered in sequence, which are received over multiple paths of a multipath channel. The reorder queue controller is configured to reorder the data packets in the reorder queue according to a reorder criterion, wherein the reorder criterion is based on at least one of the following: path-specific sequence numbering of the data packets, overall sequence number comparison across the multiple paths, or path-specific characteristics of the multiple paths. The data packets are numbered in a path-specific sequence and in an overall sequence. The reorder queue controller is configured to detect a packet loss on a specific path based on determining the overall sequence numbers and/or the path-specific sequence numbers of at least two packets of the specific path.
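A small sketch of such a reorder queue, with assumed field names: packets are released in overall sequence order, and a gap in a path's own numbering is treated as a loss on that path.

```python
# Sketch of a multipath reorder queue (field names are illustrative assumptions).
import heapq

class ReorderQueue:
    def __init__(self):
        self.heap = []                  # ordered by overall sequence number
        self.next_overall = 0           # next overall sequence number to release
        self.last_path_seq = {}         # path_id -> last path-specific sequence seen

    def push(self, overall_seq, path_id, path_seq, packet):
        # Detect loss on a specific path: a gap in its path-specific numbering.
        last = self.last_path_seq.get(path_id)
        lost_on_path = last is not None and path_seq > last + 1
        self.last_path_seq[path_id] = max(path_seq, last or 0)
        heapq.heappush(self.heap, (overall_seq, packet))
        return lost_on_path

    def pop_in_order(self):
        """Release packets whose overall sequence numbers are now contiguous."""
        released = []
        while self.heap and self.heap[0][0] == self.next_overall:
            _, packet = heapq.heappop(self.heap)
            released.append(packet)
            self.next_overall += 1
        return released
```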
Hybrid packet memory for buffering packets in network devices
A network device processes received packets at least to determine one or more ports of the network device via which to transmit the packets. The network device also classifies the packets into packet flows, the packet flows being further categorized into traffic pattern categories according to traffic pattern characteristics of the packet flows. The network device buffers, according to the traffic pattern categories of the packet flows, packets that belong to the packet flows in a first packet memory or in a second packet memory, the first packet memory having a memory access bandwidth different from a memory access bandwidth of the second packet memory. After processing the packets, the network device retrieves the packets from the first packet memory or the second packet memory in which the packets are buffered, and forwards the packets to the determined one or more ports for transmission.
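A toy sketch of the buffering choice, with an assumed classification rule (the patent does not specify one here): latency-sensitive flows are placed in the higher-bandwidth memory and heavy, long-lived flows in the lower-bandwidth memory.

```python
# Illustrative choice between two packet memories with different access bandwidths
# (classification rule, thresholds, and names are assumptions).
from collections import deque

fast_memory = deque()     # e.g. on-chip memory with higher access bandwidth
slow_memory = deque()     # e.g. external memory with lower access bandwidth

flow_stats = {}           # flow_id -> packets seen (stand-in for pattern classification)

def classify(flow_id):
    """Toy traffic-pattern categorization: heavy, long-lived flows go to slow memory."""
    return "heavy" if flow_stats.get(flow_id, 0) > 1000 else "latency_sensitive"

def buffer_packet(flow_id, packet, egress_ports):
    flow_stats[flow_id] = flow_stats.get(flow_id, 0) + 1
    entry = (flow_id, egress_ports, packet)
    if classify(flow_id) == "latency_sensitive":
        fast_memory.append(entry)      # higher-bandwidth memory
    else:
        slow_memory.append(entry)      # bulk flows tolerate the lower-bandwidth memory

def forward_next():
    """Retrieve a buffered packet and hand it to its determined port(s)."""
    source = fast_memory if fast_memory else slow_memory
    if source:
        flow_id, egress_ports, packet = source.popleft()
        return egress_ports, packet
    return None, None
```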
Queue management in a forwarder
A queue management method, system, and recording medium include Selective Acknowledgment (SACK) examination, which examines SACK blocks at the forwarder to selectively drop packets in the forward-flow queue based on a reverse-flow queue, and MultiPath Transmission Control Protocol (MPTCP) examination, which is configured to examine multipath headers to recognize MPTCP flows and to examine the reverse-flow queue to determine whether redundant data has been sent, such that the dropping drops the redundant data.
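A simplified sketch of SACK-based pruning at a forwarder, assuming each queued packet carries a [start, end) sequence range; actual SACK option parsing and the MPTCP case are omitted.

```python
# Sketch: drop forward-queue packets already covered by SACK blocks seen on the reverse flow.

def prune_forward_queue(forward_queue, sack_blocks):
    """Keep only packets not fully covered by any (start, end) SACK block."""
    def covered(pkt):
        start, end = pkt["seq_start"], pkt["seq_end"]
        return any(s <= start and end <= e for s, e in sack_blocks)
    return [pkt for pkt in forward_queue if not covered(pkt)]

# Example: the receiver has already SACKed bytes 1000-3000, so the queued copy
# of bytes 1500-2000 is redundant and is dropped.
queue = [{"seq_start": 1500, "seq_end": 2000, "data": b"..."},
         {"seq_start": 3000, "seq_end": 3500, "data": b"..."}]
queue = prune_forward_queue(queue, sack_blocks=[(1000, 3000)])
assert [p["seq_start"] for p in queue] == [3000]
```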
Agent message delivery fairness
Apparatus and methods are disclosed for generating, sending, and receiving messages in a networked environment using autonomous (or semi-autonomous) agents. In one example of the disclosed technology, a method of controlling message flow in a computer network comprising a plurality of agents, agent data consumers, and an agent message bridge configured to send messages includes receiving a set of messages, at least some of the messages including a message type; queuing the set of messages in a spooler that includes an indication of the respective message type for each of the messages; receiving an indication that sending some of the messages queued in the spooler should be delayed for one or more indicated message types; and sending at least one of the messages to a selected one or more of the agent data consumers, the sent messages not being of the indicated message types.
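A minimal sketch of the spooler behaviour, with made-up type names: messages are queued with their types, delivery of indicated types is held back, and the remaining messages are sent to consumers.

```python
# Sketch of a message spooler that delays indicated message types (names are illustrative).
from collections import deque

class MessageSpooler:
    def __init__(self):
        self.spool = deque()       # (message_type, message)
        self.delayed_types = set() # types whose delivery should be delayed

    def queue_message(self, message_type, message):
        self.spool.append((message_type, message))

    def delay_type(self, message_type):
        # Indication that messages of this type should be held back.
        self.delayed_types.add(message_type)

    def resume_type(self, message_type):
        self.delayed_types.discard(message_type)

    def drain(self, send):
        """Send spooled messages not of a delayed type; keep the rest queued."""
        kept = deque()
        while self.spool:
            message_type, message = self.spool.popleft()
            if message_type in self.delayed_types:
                kept.append((message_type, message))   # hold for later
            else:
                send(message_type, message)
        self.spool = kept
```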
Methods and apparatus for memory allocation and reallocation in networking stack infrastructures
Methods and apparatus for memory allocation and reallocation in networking stack infrastructures. Unlike prior art monolithic networking stacks, the exemplary networking stack architecture described hereinafter includes various components that span multiple domains (both in-kernel and non-kernel). For example, unlike traditional “socket” based communication, disclosed embodiments can transfer data directly between the kernel and user space domains. A user space networking stack is disclosed that enables extensible, cross-platform-capable, user space control of the networking protocol stack functionality. The user space networking stack facilitates tighter integration between the protocol layers (including TLS) and the application or daemon. Exemplary systems can support multiple networking protocol stack instances (including an in-kernel traditional network stack). Due to this disclosed architecture, physical memory allocations (and deallocations) may be more flexibly implemented.
Intelligent input/output operation completion modes in a high-speed network
Mechanisms are provided for implementing intelligent input/output (I/O) operation completion modes in a high-speed network. An application thread executing on a central processing unit in a data processing system receives a first indication to enter a mode of operation. The application thread enters the mode of operation, arms an arm file descriptor, and processes further completions that enter a completion queue until a second indication is received indicating that the mode is to be exited. Responsive to receiving the second indication to exit the mode, the application thread exits the mode of operation and disarms the arm file descriptor.
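A rough sketch of the mode switch, modelling the arm file descriptor as a simple flag and using assumed names; a real implementation would arm a kernel-visible descriptor rather than a flag.

```python
# Sketch: entering and exiting a completion-processing mode (names and the
# flag standing in for the arm file descriptor are assumptions).

class CompletionHandler:
    def __init__(self, completion_queue):
        self.cq = completion_queue       # completions produced by the network adapter
        self.in_mode = False
        self.armed = False               # stands in for the armed arm file descriptor

    def enter_mode(self):                # first indication: enter the mode of operation
        self.in_mode = True
        self.armed = True                # arm so an exit indication can be delivered

    def exit_mode(self):                 # second indication: exit the mode
        self.in_mode = False
        self.armed = False               # disarm on exit

    def run(self, handle, exit_requested):
        """Process completions entering the queue until an exit indication arrives."""
        while self.in_mode:
            while self.cq:
                handle(self.cq.pop(0))
            if exit_requested():
                self.exit_mode()
```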