H04L12/729

In service flow capability update in guaranteed bandwidth multicast network

In service flow capability updating in a guaranteed bandwidth multicast network may be provided. First, a node may determine that a bandwidth requirement of a flow has changed to a new bandwidth value. Then, in response to determining that the bandwidth requirement of the flow has changed to the new bandwidth value, an ingress capacity value may be updated in an interface usage table for a Reverse Path Forwarding (RPF) interface corresponding to the flow. The RPF interface may be disposed on a network device. Next, in response to determining that the bandwidth requirement of the flow has changed to the new bandwidth value, an egress capacity value may be updated in the interface usage table for an Outgoing Interface (OIF) corresponding to the flow. The OIF may be disposed on the network device.
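The bookkeeping described above can be sketched as follows. This is an illustrative model only, assuming per-interface capacity counters; the class and method names (InterfaceUsageTable, update_flow_bandwidth) are invented, not the patent's.

```python
class InterfaceUsageTable:
    """Toy model: tracks remaining ingress/egress capacity per interface."""

    def __init__(self):
        self.ingress = {}   # RPF interface -> remaining ingress capacity
        self.egress = {}    # OIF -> remaining egress capacity

    def add_interface(self, name, capacity):
        self.ingress[name] = capacity
        self.egress[name] = capacity

    def update_flow_bandwidth(self, rpf_if, oif, old_bw, new_bw):
        """Re-credit the flow's old reservation and debit the new one
        on both the ingress (RPF) and egress (OIF) sides."""
        delta = new_bw - old_bw
        self.ingress[rpf_if] -= delta   # RPF interface carries the flow in
        self.egress[oif] -= delta       # OIF carries the flow out


table = InterfaceUsageTable()
table.add_interface("eth0", 1000)
table.add_interface("eth1", 1000)

# A flow previously reserved at 100 Mb/s grows to 250 Mb/s.
table.update_flow_bandwidth(rpf_if="eth0", oif="eth1", old_bw=100, new_bw=250)
print(table.ingress["eth0"], table.egress["eth1"])  # 850 850
```

Updating both counters from the same bandwidth change keeps the ingress and egress views of the flow consistent, which is the point of performing both updates in response to one detected change.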

Facilitating dynamic multiple public land mobile network resource management in advanced networks

Facilitating dynamic satellite and mobility convergence for mobility backhaul in advanced networks (e.g., 4G, 5G, 6G and beyond) is provided herein. Operations of a system can comprise dividing resources of a wireless network between a first network device and a second network device based on defined service level agreements. The operations also can comprise receiving a data packet from a mobile device. The data packet can comprise an indication that the first network device provides services for the mobile device. Further, the operations can comprise transferring the data packet to the first network device based on the resources assigned to the first network device and based on the data packet bypassing an access core of the wireless network.
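The forwarding decision described above can be sketched roughly as below. The PLMN labels, the SLA split, and the dict-based device model are all assumptions for illustration; the patent does not specify this API.

```python
SLA_SHARES = {"plmn-a": 0.6, "plmn-b": 0.4}   # assumed SLA-based resource split


def route_packet(packet, devices):
    """Forward the packet to whichever network device its PLMN
    indication names, bypassing the shared access core."""
    target = packet["plmn"]
    if target not in devices:
        raise ValueError(f"no device serves PLMN {target!r}")
    devices[target].append(packet)   # direct hand-off to that device
    return target


devices = {"plmn-a": [], "plmn-b": []}
route_packet({"plmn": "plmn-b", "payload": b"data"}, devices)
```

The key idea is that the indication carried in the packet itself, rather than core-network state, selects the serving device.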

ADAPTIVE PRIVATE NETWORK ASYNCHRONOUS DISTRIBUTED SHARED MEMORY SERVICES
20210320867 · 2021-10-14

A highly predictable quality shared distributed memory process is achieved using less than predictable public and private internet protocol networks as the means for communications within the processing interconnect. An adaptive private network (APN) service provides the ability for the distributed memory process to communicate data via an APN conduit service, to use high throughput paths by allocating bandwidth to higher quality paths while avoiding lower quality paths, to deliver reliability via fast retransmissions on single packet loss detection, to deliver reliability and timely communication through redundant transmissions via duplicate transmissions on a best path and on a path most independent from the best path, to lower latency via high resolution clock synchronized path monitoring and high latency path avoidance, to monitor packet loss and provide loss prone path avoidance, and to avoid congestion by use of high resolution clock synchronized congestion monitoring and avoidance.
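The redundancy rule in the abstract (duplicate transmission on the best path and on the path most independent from it) can be sketched as a toy selection function. Path names, quality scores, and link sets are invented; "most independent" is modeled here as fewest shared links with the best path.

```python
paths = {
    "p1": {"quality": 0.95, "links": {"a", "b", "c"}},
    "p2": {"quality": 0.90, "links": {"a", "d", "e"}},
    "p3": {"quality": 0.80, "links": {"f", "g", "h"}},
}


def pick_redundant_pair(paths):
    """Choose the best-quality path plus the path sharing the fewest
    links with it, for duplicate transmission."""
    best = max(paths, key=lambda p: paths[p]["quality"])
    others = [p for p in paths if p != best]
    independent = min(
        others,
        key=lambda p: len(paths[p]["links"] & paths[best]["links"]),
    )
    return best, independent


print(pick_redundant_pair(paths))  # ('p1', 'p3')
```

Sending the duplicate on a maximally disjoint path means a single link failure is unlikely to take out both copies, which is what makes the duplication deliver reliability rather than just extra traffic.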

Determining quality information for a route
11140087 · 2021-10-05

Methods and systems for determining traffic information for devices along one or more routes are described. A content server may send a message to a plurality of devices along a route. The message may comprise an indication requesting each of the devices to send, to the content server, status information regarding the respective device. Intermediary devices may receive the message, respond with the requested information, and forward the message through the route. The message may comprise a stateless messaging protocol message such as an ICMP or UDP packet.
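The hop-by-hop collection described above can be sketched as follows. The Device class and the in-process "forwarding" loop are stand-ins for real intermediary devices handling an ICMP/UDP-style query; none of these names come from the patent.

```python
class Device:
    """Toy intermediary device on the route."""

    def __init__(self, name, status):
        self.name, self.status = name, status

    def handle(self, query, replies):
        # Respond with this device's status; the loop below models
        # forwarding the same query on to the next hop.
        replies.append((self.name, self.status))


def collect_route_status(route):
    """Send one query along the route and gather each hop's reply."""
    replies = []
    for device in route:          # query forwarded hop by hop
        device.handle("STATUS?", replies)
    return replies


route = [Device("gw", "ok"), Device("core-1", "congested"), Device("edge", "ok")]
print(collect_route_status(route))
```

Because each intermediary both replies and forwards, one stateless message yields status for every device on the path.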

Systems and methods for implementing a layer two proxy for wireless network data
11108592 · 2021-08-31

A request to establish a tunnel over a layer three network connection may be received by a proxy device. The tunnel may then be established by the proxy device. Device information and wireless network information from a mobile device may be received over the tunnel. Responsive to receipt of the device information and the wireless network information, source and destination addresses may be assigned to the mobile device. The source and destination addresses may correspond to the device information and the wireless network information. Internet protocol (IP) packets may be received, via the tunnel, from the mobile device. Layer two frames may be generated utilizing the assigned source and destination addresses. The layer two frames may encapsulate each of the IP packets. The layer two frames may be transmitted to a layer two service function chain (SFC) infrastructure.
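The encapsulation step (wrapping each IP packet in a layer two frame using the assigned addresses) can be sketched as a minimal Ethernet II frame builder. The MAC addresses are placeholders; real frames also carry an FCS and are subject to minimum-length padding, omitted here.

```python
import struct


def encapsulate(ip_packet: bytes, src_mac: bytes, dst_mac: bytes) -> bytes:
    """Build a minimal Ethernet II frame carrying an IPv4 payload:
    destination MAC, source MAC, EtherType, then the IP packet."""
    ethertype = struct.pack("!H", 0x0800)   # 0x0800 = IPv4
    return dst_mac + src_mac + ethertype + ip_packet


src = bytes.fromhex("02aabbccddee")          # assigned source address
dst = bytes.fromhex("021122334455")          # assigned destination address
frame = encapsulate(b"\x45" * 20, src, dst)  # dummy 20-byte IPv4 header
print(len(frame))  # 34
```

Frames built this way can then be handed to the layer two service function chain infrastructure, which switches on the assigned addresses rather than on the inner IP header.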

Dynamic quality of service in edge cloud architectures

A device of a service coordinating entity includes communications circuitry to communicate with a plurality of access networks via a corresponding plurality of network function virtualization (NFV) instances, processing circuitry, and a memory device. The processing circuitry is to perform operations to monitor stored performance metrics for the plurality of NFV instances. Each of the NFV instances is instantiated by a corresponding scheduler of a plurality of schedulers on a virtualization infrastructure of the service coordinating entity. A plurality of stored threshold metrics is retrieved, indicating a desired level for each of the plurality of performance metrics. A threshold condition is detected for at least one of the performance metrics for an NFV instance of the plurality of NFV instances, based on the retrieved plurality of threshold metrics. A hardware resource used by the NFV instance to communicate with an access network is adjusted based on the detected threshold condition.
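The monitoring logic described above can be sketched as a threshold check plus an adjustment hook. The metric names, ceiling values, and the adjust_hardware placeholder are assumptions, not the patent's API.

```python
thresholds = {"latency_ms": 20.0, "cpu_util": 0.85}   # desired ceilings


def detect_threshold_conditions(metrics, thresholds):
    """Return the names of an NFV instance's metrics that exceed
    their stored threshold values."""
    return [name for name, value in metrics.items()
            if name in thresholds and value > thresholds[name]]


def adjust_hardware(instance, breached):
    # Placeholder for the hardware adjustment, e.g. assigning more
    # CPU or NIC queues to the instance's access-network link.
    return {"instance": instance, "scaled_for": breached}


metrics = {"latency_ms": 35.2, "cpu_util": 0.60}      # stored performance metrics
breached = detect_threshold_conditions(metrics, thresholds)
action = adjust_hardware("nfv-7", breached) if breached else None
print(breached)  # ['latency_ms']
```

The separation mirrors the abstract: detection compares stored metrics against retrieved threshold metrics, and only a detected condition triggers a hardware-resource adjustment.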

Methods and systems for application and policy based network traffic isolation and data transfer

A method includes allocating an identifier to each of a plurality of policies, each policy comprising a network-isolation identifier associated with a VXWAN directive, and transmitting each of the plurality of policies to one or more devices in a network.
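The two steps, allocating identifiers and distributing the policies, can be sketched as below. The field names and the dict-based policy records are illustrative only.

```python
import itertools

_ids = itertools.count(1)


def allocate_ids(policies):
    """Assign each policy a unique identifier."""
    for policy in policies:
        policy["id"] = next(_ids)
    return policies


def transmit(policies, devices):
    """Model distribution: each device receives the full policy set."""
    return {device: list(policies) for device in devices}


policies = allocate_ids([
    {"network_isolation_id": 100, "vxwan_directive": "isolate"},
    {"network_isolation_id": 200, "vxwan_directive": "isolate"},
])
dist = transmit(policies, ["edge-1", "edge-2"])
```

Per-policy identifiers let the receiving devices reference, update, or revoke individual isolation policies without redistributing the whole set.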

ROUTING NETWORK TRAFFIC

A method may include receiving a domain name system (DNS) query at a network device, where the DNS query may be associated with a traffic flow identified for rerouting through an alternative path utilizing an alternative network device instead of a default path. The method may also include rewriting the DNS query such that the DNS query is routed through the alternative network device along the alternative path and to a DNS server associated with the alternative path. The method may additionally include receiving a DNS response from the DNS server, where a resource identified in the DNS response may be based on the DNS query coming through the alternative network device.
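The rewrite step can be sketched as a lookup against the set of flows marked for rerouting. The domain names, server addresses, and the tuple-returning helper are invented for illustration; a real implementation would rewrite the packet's destination, not return a tuple.

```python
REROUTED_FLOWS = {"video.example.com"}   # flows identified for rerouting
ALT_DNS = "10.1.1.53"                    # DNS server on the alternative path
DEFAULT_DNS = "10.0.0.53"                # DNS server on the default path


def rewrite_query(qname, dst_server):
    """Return (qname, server) after applying the reroute policy:
    marked flows are steered to the alternative path's DNS server."""
    if qname in REROUTED_FLOWS:
        return qname, ALT_DNS            # goes via the alternative device
    return qname, dst_server


print(rewrite_query("video.example.com", DEFAULT_DNS))
```

Steering the DNS query itself matters because the DNS server on the alternative path can answer with a resource reachable via that path, so the subsequent traffic flow follows the reroute too.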

Route selection method and apparatus based on hybrid resource, and server thereof
11082326 · 2021-08-03

A route selection method based on hybrid resources, the route selection method being applied to a server, the server being communicably connected to a multi-node network, the multi-node network including at least two network nodes, wherein the method includes: constructing a directed graph for the multi-node network and, when at least two network resources exist between any two network nodes, constructing a directed edge for each of those network resources; and receiving node information of the various network nodes, acquiring a delay weight value between any two network nodes under any network resource based on the node information, and assigning the delay weight value to the corresponding directed edge.
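The graph construction can be sketched as a directed multigraph with one edge per resource between a node pair, each carrying a delay weight. Node names, resource names, and delay values are invented for illustration.

```python
from collections import defaultdict

# edges[(u, v)] -> {resource: delay_weight_ms}
edges = defaultdict(dict)


def add_resource_edge(u, v, resource, delay_ms):
    """Create (or update) the directed edge for one network resource
    between nodes u and v, weighted by its measured delay."""
    edges[(u, v)][resource] = delay_ms


# Two resources between the same node pair yield two parallel
# directed edges, each with its own delay weight.
add_resource_edge("A", "B", "fiber", 5.0)
add_resource_edge("A", "B", "satellite", 120.0)

best = min(edges[("A", "B")].items(), key=lambda kv: kv[1])
print(best)  # ('fiber', 5.0)
```

Keeping parallel edges (rather than collapsing each node pair to one edge) is what lets route selection compare the hybrid resources individually by their delay weights.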