H04L45/121

Packet sending method, network node, and system
11706149 · 2023-07-18 ·

A controller obtains a forwarding latency requirement of a service flow and a destination address of the service flow, and determines a forwarding path that meets the forwarding latency requirement. The controller determines a first cycle time number in which an ingress node forwards a packet and a second cycle time number in which an intermediate node forwards the packet, and separately determines the corresponding adjacent segment identifier for each. A label stack generated by the controller includes the first adjacent segment identifier and the second adjacent segment identifier. The controller sends the label stack to the ingress node, triggering the ingress node to forward the packet within the period of time corresponding to the first cycle time number. In this way, the controller determines the forwarding path based on the forwarding latency requirement of the service flow and generates a label stack corresponding to the forwarding time points.
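
The abstract above describes mapping per-hop forwarding cycles to adjacency segment identifiers and stacking them. A minimal sketch of that idea, with an assumed cycle count and an illustrative (node, cycle) → SID table that is not from the patent:

```python
# Hypothetical sketch: CYCLE_COUNT and the ADJ_SID table are assumptions
# for illustration, not values defined by the patent.

CYCLE_COUNT = 4  # cycles per scheduling period (assumed)

# Assumed mapping: (node, cycle time number) -> adjacent segment identifier.
ADJ_SID = {
    ("ingress", 1): 16101,
    ("ingress", 2): 16102,
    ("mid", 2): 16202,
    ("mid", 3): 16203,
}

def build_label_stack(path, first_cycle, per_hop_cycles=1):
    """Assign each hop a forwarding cycle and collect its adjacency SID."""
    stack, cycle = [], first_cycle
    for node in path:
        stack.append(ADJ_SID[(node, cycle)])
        # Advance the cycle for the next hop, wrapping within the period.
        cycle = (cycle + per_hop_cycles - 1) % CYCLE_COUNT + 1
    return stack

print(build_label_stack(["ingress", "mid"], first_cycle=1))  # [16101, 16202]
```

The controller would push such a stack to the ingress node, which forwards in the cycle its own SID encodes.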

Low latency for network devices not supporting LLD

An optimizing agent of a network device that does not support low latency DOCSIS can identify traffic or packets associated with a client resource for an optimization service flow. For example, the optimizing agent can receive a priority notification associated with a client resource from a low latency controller that is indicative of a low latency requirement associated with the client resource. The optimizing agent identifies the traffic for the optimized service flow based on the priority notification. The identifying can require modifying one or more parameters of an existing service flow, creating a new service flow, or selecting an existing service flow with low latency. The identified traffic can be routed to the optimized service flow to achieve low latency or high QoS.
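
The three placement options the abstract names (modify an existing flow, create a new one, or select an existing low-latency flow) can be sketched as a simple decision procedure; the flow fields and ordering below are illustrative assumptions, not the patented logic:

```python
def place_traffic(flows, latency_req_ms):
    """Pick a service flow for the identified traffic (illustrative sketch)."""
    # 1. Select an existing service flow that already meets the requirement.
    for f in flows:
        if f["latency_ms"] <= latency_req_ms:
            return f
    # 2. Otherwise, modify the parameters of a modifiable existing flow.
    for f in flows:
        if f.get("modifiable"):
            f["latency_ms"] = latency_req_ms
            return f
    # 3. Otherwise, create a new low-latency service flow.
    new = {"id": len(flows) + 1, "latency_ms": latency_req_ms}
    flows.append(new)
    return new
```

For example, with `flows = [{"id": 1, "latency_ms": 50}]` and a 10 ms requirement, a new flow is created and the identified traffic routed onto it.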

High performance software-defined core network

A method comprising instantiating virtual routers (VRs) at each of a set of nodes that form a network. Each VR is coupled to the network and to a tenant of the node. The network comprises virtual links in an overlay network provisioned over an underlay network including servers of a public network. The method comprises configuring at least one VR to include a feedback control system comprising at least one objective function that characterizes the network. The method comprises configuring the VR to receive link state data of a set of virtual links of the virtual links, and control routing of a tenant traffic flow of each tenant according to a best route of the network determined by the at least one objective function using the link state data.
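
The route-selection step, an objective function evaluated over link state data, can be sketched as follows; the particular objective (path latency penalized by peak utilization) is an assumption for illustration:

```python
def objective(path_links, state):
    """Assumed objective: total latency penalized by worst link utilization."""
    latency = sum(state[l]["latency_ms"] for l in path_links)
    worst_util = max(state[l]["util"] for l in path_links)
    return latency * (1.0 + worst_util)

def best_route(candidate_paths, state):
    """Return the candidate path minimizing the objective, as the VR would."""
    return min(candidate_paths, key=lambda p: objective(p, state))

state = {
    "a": {"latency_ms": 10, "util": 0.5},
    "b": {"latency_ms": 5, "util": 0.1},
    "c": {"latency_ms": 8, "util": 0.2},
}
print(best_route([["a"], ["b", "c"]], state))  # ['a']
```

In the feedback-control framing, fresh link state data re-enters the objective and routing shifts as conditions change.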

STITCHING MULTIPLE WIDE AREA NETWORKS TOGETHER

The present application relates to communications between a partner network and a wide area network (WAN). The partner network and WAN may exchange representations of the respective networks including a delay profile for the partner network. The WAN receives a network delay profile for multiple virtual network entities within the partner network. The multiple virtual network entities include at least a plurality of peering locations with the WAN. The WAN determines a path from the partner network through the WAN via a selected peering location of the plurality of peering locations with the WAN to a destination based on at least the network delay profile. The WAN deploys a policy for an agent within the partner network. The policy identifies traffic for the destination to route through the WAN via the selected peering location. The WAN routes traffic from the selected peering location to the destination along the path.
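
Selecting a peering location from the network delay profile reduces to minimizing the end-to-end delay across both networks; the dictionary shapes below are assumptions for illustration:

```python
def select_peering(partner_delay, wan_delay, destination):
    """Choose the peering location minimizing partner + WAN delay (sketch)."""
    return min(
        partner_delay,  # partner-network delay profile per peering location
        key=lambda peer: partner_delay[peer] + wan_delay[(peer, destination)],
    )

partner_delay = {"peer1": 10, "peer2": 3}
wan_delay = {("peer1", "dst"): 2, ("peer2", "dst"): 20}
print(select_peering(partner_delay, wan_delay, "dst"))  # peer1
```

The deployed policy would then direct the partner-network agent to route traffic for that destination through the selected peering location.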

Methods of multi-link buffer management without block acknowledgement (BA) negotiation

Embodiments of a multi-link device (MLD) are generally described herein. The MLD may be configured for multi-link communication on a plurality of links. The MLD may be configured with a plurality of stations (STAs). Each STA may be a logical entity that includes a singly addressable instance of a medium access control (MAC) layer and a physical (PHY) layer of a link of the plurality of links. The MLD may configure traffic identifier (TID) assignment for the MLD for multi-link communication with another MLD. The multi-link communication may be configurable to support one or more data streams, wherein each of the data streams corresponds to a TID. The MLD may determine an assignment of the TIDs to the STAs of the MLD.
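
A minimal sketch of a TID-to-STA assignment; the round-robin rule here is an illustrative assumption, not the mapping negotiated in the patent or in 802.11be:

```python
def assign_tids(stas, tids=range(8)):
    """Round-robin assignment of traffic identifiers to the MLD's STAs."""
    return {tid: stas[i % len(stas)] for i, tid in enumerate(tids)}

mapping = assign_tids(["link0", "link1"])
print(mapping[0], mapping[1])  # link0 link1
```

Each data stream (TID) then flows over the link whose STA it is assigned to, without per-TID block-acknowledgement negotiation.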

Devices and methods of selecting signal processing algorithm based on parameters

A device for performing wireless communication includes at least one processor configured to generate a condition signal based on at least one parameter associated with the device or the wireless communication, and select at least one of a plurality of signal processing algorithms for performing at least one of a plurality of signal processing functions based on the condition signal, each of the plurality of signal processing functions being associated with the wireless communication.
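
The two-stage structure (parameters → condition signal → algorithm choice) can be sketched as below; the parameters, signal values, and algorithm registry are all illustrative assumptions:

```python
def condition_signal(snr_db, battery_pct):
    """Derive a coarse condition signal from assumed device parameters."""
    if snr_db < 10:
        return "low_snr"
    if battery_pct < 20:
        return "low_power"
    return "nominal"

# Hypothetical registry: (signal processing function, condition) -> algorithm.
ALGORITHMS = {
    ("equalization", "low_snr"): "mmse",
    ("equalization", "low_power"): "zero_forcing",
    ("equalization", "nominal"): "mmse",
}

def select_algorithm(function, signal):
    return ALGORITHMS[(function, signal)]

print(select_algorithm("equalization", condition_signal(5, 80)))  # mmse
```

The same condition signal could drive the choice for every signal processing function associated with the communication.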

SOURCE ROUTING APPARATUS AND METHOD IN ICN

Disclosed herein are a source routing apparatus and method in ICN. The method includes: receiving an interest packet; extracting a current entry value when the received interest packet includes a forwarding hint; using the extracted current entry value as an index of a path list; extracting a name of the interest packet; reducing the current entry value when the interest packet is transmitted to a network area of the path list; performing a FIB lookup with the extracted name; determining an output port using the FIB lookup; and transmitting the interest packet to the output port.
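
The steps above can be sketched as one forwarding decision. This is a simplified reading: here the hinted path-list entry is itself resolved through the FIB when present, which is an assumption; the packet and FIB shapes are likewise illustrative:

```python
def forward_interest(packet, fib):
    """One source-routing forwarding step for an ICN interest packet (sketch)."""
    hint = packet.get("forwarding_hint")
    if hint is not None:
        idx = hint["current_entry"]          # current entry value as path-list index
        next_area = hint["path_list"][idx]   # network area to transmit toward
        hint["current_entry"] = idx - 1      # reduced when the packet is sent on
        return fib[next_area]                # output port toward that area
    name = packet["name"]                    # extracted name of the interest packet
    return fib[name]                         # plain name-based FIB lookup

fib = {"/areaA": 1, "/video/seg1": 2}
pkt = {"name": "/video/seg1",
       "forwarding_hint": {"path_list": ["/areaA"], "current_entry": 0}}
print(forward_interest(pkt, fib))  # 1
```

Without a forwarding hint, the same packet would be forwarded purely by name, to port 2 in this example.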

PACKET BUFFERING WITH A COMMON TIME-INDEXED DATA STORE ACROSS PACKET STREAMS
20230216794 · 2023-07-06 ·

Receiving, by a network device at a receiving time, one or more packets, each packet being one of a plurality of ordered packets in one of a plurality of streams received at the network device. Determining, by the network device for each received packet, a transmit time based on one timer common to the plurality of streams. Indexing, by the network device in a data store common to the plurality of streams, each packet by the determined transmit time. Transmitting, by the network device at each particular time corresponding to a determined transmit time, all packets in the data store indexed to the particular time.
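
A minimal sketch of the common time-indexed store: one timer and one data store shared by all streams, with packets keyed by transmit tick. The fixed hold delay is an assumption for illustration:

```python
from collections import defaultdict

class TimeIndexedBuffer:
    """One data store shared across packet streams, indexed by transmit time."""

    def __init__(self, hold_ticks):
        self.hold = hold_ticks          # assumed fixed receipt-to-transmit delay
        self.store = defaultdict(list)  # transmit tick -> packets, all streams

    def receive(self, packet, now_tick):
        # The transmit time comes from one timer common to every stream.
        self.store[now_tick + self.hold].append(packet)

    def transmit(self, now_tick):
        # Emit every packet indexed to this tick, regardless of stream.
        return self.store.pop(now_tick, [])

buf = TimeIndexedBuffer(hold_ticks=2)
buf.receive(("stream1", 0), now_tick=10)
buf.receive(("stream2", 0), now_tick=10)
print(buf.transmit(12))  # [('stream1', 0), ('stream2', 0)]
```

Because the index is the transmit time rather than the stream, one sweep per tick releases all due packets across all streams at once.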