Patent classifications
H04L49/9047
Multi-port queue group system
A multi-port queue group system includes a Network Processing Unit (NPU) coupled to ingress port(s) and an egress port group having a first egress port and a second egress port. The NPU includes an egress queue group having a first egress queue associated with the first egress port and a second egress queue associated with the second egress port. The NPU receives data packets that are each directed to the egress port group via the ingress port(s), and buffers a first subset of the data packets in the first egress queue included in the egress queue group, and a second subset of the data packets in the second egress queue included in the egress queue group. The NPU then transmits at least one of the data packets via at least one of the first egress port and the second egress port included in the egress port group.
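The buffering scheme described above can be sketched as follows. This is an illustrative model only, assuming a simple per-flow hash to split packets across the per-port queues of one egress group; the class and method names (`EgressQueueGroup`, `enqueue`, `transmit`) are hypothetical, not from the patent.

```python
from collections import deque

class EgressQueueGroup:
    """Illustrative egress queue group: one queue per egress port."""

    def __init__(self, num_ports=2):
        # One egress queue per egress port in the group.
        self.queues = [deque() for _ in range(num_ports)]

    def enqueue(self, packet, flow_id):
        # Choose a queue (and thus a port) within the group; a simple
        # hash on the flow id keeps each flow on one port (assumption).
        idx = flow_id % len(self.queues)
        self.queues[idx].append(packet)
        return idx

    def transmit(self):
        # Drain one packet from each non-empty queue via its port.
        return [(i, q.popleft()) for i, q in enumerate(self.queues) if q]
```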
METHODS AND APPARATUS FOR THREAD-LEVEL EXECUTION IN NON-KERNEL SPACE
Methods and apparatus for split memory allocations in non-kernel space. Many modern networking technologies use asymmetric transmit and/or receive resources. Various aspects described herein split memory resources for transmit and receive, configuring each for their respective hardware optimizations. For example, receive data paths that support batch processing and packet aggregation may be allocated large memory objects (32 KB) that can route data packets on a per-flow basis. In contrast, transmit data paths that support multiple concurrent network connections may be allocated small memory objects (2 KB) that can route data packets one at a time.
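A minimal sketch of the asymmetric split described above: separate fixed-size object pools for receive and transmit, sized per the abstract (32 KB vs. 2 KB). The `MemoryPool` API is a hypothetical stand-in, not the actual allocator.

```python
RX_OBJECT_SIZE = 32 * 1024  # large objects for batched/aggregated receive
TX_OBJECT_SIZE = 2 * 1024   # small objects for per-connection transmit

class MemoryPool:
    """Hypothetical fixed-size object pool for one data path."""

    def __init__(self, object_size, count):
        self.object_size = object_size
        self.free = [bytearray(object_size) for _ in range(count)]

    def alloc(self):
        # Hand out a free object, or None if the pool is exhausted.
        return self.free.pop() if self.free else None

    def release(self, obj):
        self.free.append(obj)

# Split allocation: each direction gets its own pool and object size.
rx_pool = MemoryPool(RX_OBJECT_SIZE, count=8)
tx_pool = MemoryPool(TX_OBJECT_SIZE, count=64)
```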
COMMUNICATION CONTROL APPARATUS, COMMUNICATION SYSTEM, COMMUNICATION CONTROL METHOD, AND STORAGE MEDIUM
There is provided a communication control apparatus comprising a processor. The processor receives, from a first mobile communication apparatus, a request for permission of data transmission to a second mobile communication apparatus, the second mobile communication apparatus including a buffer memory to store received data. The processor determines whether to permit the data transmission based on a free capacity of the buffer memory and a reserved capacity indicated by a reservation setting of the buffer memory. The processor updates, when the data transmission is determined to be permitted, the reservation setting such that a capacity for received data corresponding to the data transmission is added to the reserved capacity. The processor transmits, when the data transmission is determined to be permitted, a response indicating that the data transmission is permitted to the first mobile communication apparatus.
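The admission check above can be expressed compactly: a transmission is permitted only if the buffer's free capacity minus the already-reserved capacity can hold the new data, and a grant enlarges the reservation. This is a sketch under that reading of the abstract; the class and method names are illustrative.

```python
class BufferReservation:
    """Illustrative reservation state for a receiver's buffer memory."""

    def __init__(self, free_capacity):
        self.free_capacity = free_capacity
        self.reserved = 0  # capacity already promised to granted requests

    def request(self, size):
        # Permit only if unreserved free capacity covers this transmission.
        if self.free_capacity - self.reserved >= size:
            self.reserved += size  # add this transmission to the reservation
            return True            # response: transmission permitted
        return False               # response: transmission denied
```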
Programmatically configured switches and distributed buffering across fabric interconnect
Programmable switches and routers are described herein for enabling their internal network fabric to be configured with a topology. In one implementation, a programmable switch is arranged in a network having a plurality of switches and an internal fabric. The programmable switch includes a plurality of programmable interfaces and a buffer memory component. Also, the programmable switch includes a processing component configured to establish each of the plurality of programmable interfaces to operate as one of a user-facing interface and a fabric-facing interface. Based on one or more programmable interfaces being established as one or more fabric-facing interfaces, the buffer memory component is configured to store packets received from a user-facing interface of an interconnected switch of the plurality of switches via one or more hops into the internal fabric.
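The interface-role mechanism above might be modeled as follows: each programmable interface is established as user-facing or fabric-facing, and packets arriving over fabric-facing interfaces land in the shared buffer memory component. All names here are assumptions for illustration.

```python
from enum import Enum

class Role(Enum):
    USER_FACING = "user"
    FABRIC_FACING = "fabric"

class ProgrammableSwitch:
    """Illustrative switch with per-interface roles and a fabric buffer."""

    def __init__(self):
        self.roles = {}          # interface id -> Role
        self.fabric_buffer = []  # distributed buffering for fabric traffic

    def establish(self, iface, role):
        # Processing component sets each interface's operating role.
        self.roles[iface] = role

    def receive(self, iface, packet):
        # Packets arriving via a fabric-facing interface are buffered.
        if self.roles.get(iface) is Role.FABRIC_FACING:
            self.fabric_buffer.append(packet)
```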
ADAPTIVE PACKET RETRANSMISSION WITH OPTIMIZED DELAY FOR REAL TIME COMMUNICATIONS
A method and apparatus of a device that manages a video stream is described. In an exemplary embodiment, the device receives a plurality of packets for a video stream from a transmitting device via a server. The device may additionally store a first packet of the plurality of packets in a first buffer when the first packet is on-time and store a second packet of the plurality of packets in a second buffer when the second packet is late. The device may also further forward a frame from the second buffer to the first buffer when the frame is complete.
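The two-buffer scheme above can be sketched as: on-time packets go directly to the first buffer, late packets collect per-frame in the second buffer, and a frame is forwarded to the first buffer once all of its packets have arrived. The structure and the completeness test (a known packets-per-frame count) are assumptions for illustration.

```python
class StreamBuffers:
    """Illustrative on-time/late buffering for a video stream."""

    def __init__(self):
        self.on_time = []  # first buffer: packets ready for playback
        self.late = {}     # second buffer: frame_id -> late packets

    def store(self, frame_id, packet, is_late, frame_size):
        if not is_late:
            self.on_time.append(packet)   # on-time: first buffer
            return
        pkts = self.late.setdefault(frame_id, [])
        pkts.append(packet)               # late: second buffer
        if len(pkts) == frame_size:       # frame is now complete:
            self.on_time.extend(self.late.pop(frame_id))  # forward it
```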
SYSTEM AND METHOD FOR FACILITATING DATA-DRIVEN INTELLIGENT NETWORK
Data-driven intelligent networking systems and methods are provided. The system can accommodate dynamic traffic with fast, effective congestion control. The system can maintain state information of individual packet flows, which can be set up or released dynamically based on injected data. Each flow can be provided with a flow-specific input queue upon arriving at a switch. Packets of a respective flow can be acknowledged after reaching the egress point of the network, and the acknowledgement packets can be sent back to the ingress point of the flow along the same data path. As a result, each switch can obtain state information of each flow and perform flow control on a per-flow basis.
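The per-flow mechanism above could be sketched as follows: a switch creates an input queue for each newly seen flow, forwards packets subject to a per-flow in-flight window, and releases credit when an acknowledgement returns from the egress point. The window-based credit scheme and all names here are illustrative assumptions, not the patent's specific design.

```python
class FlowSwitch:
    """Illustrative per-flow queuing and ACK-driven flow control."""

    def __init__(self, window=4):
        self.flows = {}       # flow_id -> {"queue": [...], "in_flight": n}
        self.window = window  # max unacknowledged packets per flow

    def ingress(self, flow_id, packet):
        # Set up flow state dynamically on first arrival.
        state = self.flows.setdefault(flow_id, {"queue": [], "in_flight": 0})
        state["queue"].append(packet)

    def forward(self, flow_id):
        state = self.flows[flow_id]
        if state["queue"] and state["in_flight"] < self.window:
            state["in_flight"] += 1
            return state["queue"].pop(0)
        return None  # per-flow backpressure: window exhausted

    def ack(self, flow_id):
        # ACK returned along the data path releases one credit.
        self.flows[flow_id]["in_flight"] -= 1
```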