Patent classifications
H04L47/17
Determining a time-to-live budget for network traffic
A Time-To-Live (TTL) budget can be determined for network packets and used to understand the impact of network expansion on dropped packets. Additionally, the TTL budget can be used to determine how network expansion affects services provided in the data center. In one embodiment, agents executing on data center routers transmit packet header data, including a TTL budget, to a collector server computer. The collector server computer can discern signal (production flows) from noise (traceroutes and probing traffic) to detect packets that are at risk of being dropped, or have been dropped, due to TTL expiration. Alerts can be generated for packet flows with a dangerously low or exhausted remaining TTL budget, which are at high risk of expiring when operational events cause traffic to temporarily traverse slightly longer paths. A dashboard can be provided with historic TTL budget data and trends.
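As a rough illustration of the collector-side check described above (field names, the set of common initial TTLs, and the alert threshold are assumptions, not from the patent):

```python
# Hypothetical sketch of a collector-side TTL budget check. The observed TTL
# at the last router is treated as the remaining budget; flows that would not
# survive a slightly longer path are flagged.

COMMON_INITIAL_TTLS = (64, 128, 255)  # typical OS/router defaults
ALERT_THRESHOLD = 5                   # remaining hops considered "dangerously low"

def infer_initial_ttl(observed_ttl: int) -> int:
    """Guess the sender's initial TTL as the smallest common default >= observed."""
    return min(t for t in COMMON_INITIAL_TTLS if t >= observed_ttl)

def at_risk(observed_ttl: int, extra_hops_from_expansion: int = 0) -> bool:
    """Flag flows whose remaining TTL would not survive a network expansion."""
    return observed_ttl - extra_hops_from_expansion < ALERT_THRESHOLD
```

A flow observed with TTL 3 would be flagged immediately, while one observed with TTL 50 would only be flagged if an expansion added more than 45 hops.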
DEVICES AND METHODS FOR OPERATING A COMPUTING SYSTEM COMPRISING A DATA RELAY
A computing system includes a computing device and an input data path connecting an interface device to the computing device. The input data path has at least two data relays and at least one buffer memory that temporarily stores data. Each of the data relays has first and second terminals and a central terminal, selectively interconnects either the first and central terminals or the second and central terminals, and leaves the first and second terminals constantly separated from each other. The first terminal of the first data relay is connected to the interface device, and the second terminal is connected to the computing device. The central terminal of the first data relay is connected to the buffer memory. The buffer memory is thereby selectively connected by the first data relay to either the interface device or the second terminal of the first data relay, but never to both simultaneously.
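A toy model may make the relay's exclusive switching concrete: the central terminal connects to either the first or the second terminal, never both, so the buffer memory on the central terminal sees only one side at a time. Class and method names below are illustrative assumptions, not from the patent.

```python
# Toy model of one data relay from the abstract. The central terminal is
# connected to exactly one of the two side terminals; the first and second
# terminals are never connected to each other.

class DataRelay:
    FIRST, SECOND = "first", "second"

    def __init__(self):
        self._selected = DataRelay.FIRST  # central <-> first by default

    def select(self, side: str) -> None:
        if side not in (DataRelay.FIRST, DataRelay.SECOND):
            raise ValueError(side)
        self._selected = side

    def connected(self, a: str, b: str) -> bool:
        pair = {a, b}
        if pair == {DataRelay.FIRST, DataRelay.SECOND}:
            return False  # first and second stay constantly separated
        return "central" in pair and self._selected in pair

relay = DataRelay()
assert relay.connected("central", "first")
assert not relay.connected("central", "second")
assert not relay.connected("first", "second")
relay.select(DataRelay.SECOND)
assert relay.connected("central", "second")
```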
SYSTEMS AND METHODS FOR ADVERTISING INTERNET PROTOCOL (IP) VERSION 4 NETWORK LAYER ROUTING INFORMATION WITH AN IP VERSION 6 NEXT HOP ADDRESS
A first network device associated with a network may establish an Internet Protocol version 6 (IPv6) Multiprotocol BGP session with a second network device associated with the network. The first and second network devices are both capable of forwarding both IPv4 and IPv6 packets with only an IPv6 address configured on the interface of each device. The first network device may exchange the Multiprotocol Reachability capability with the second network device for the corresponding 2-tuple of Address Family Identifier/Subsequent Address Family Identifier (AFI/SAFI). The first network device may advertise IPv4 network layer reachability information, and may advertise IPv6 network layer reachability information, with IPv6 extended next hop encoding using the Internet Assigned Numbers Authority (IANA) assigned capability code value 5 to the second network device.
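Capability code 5 is the BGP Extended Next Hop Encoding capability (RFC 8950), whose value is one or more (NLRI AFI, NLRI SAFI, next-hop AFI) triples. A minimal sketch of encoding it, with constants per the IANA registries:

```python
import struct

# BGP capability: Extended Next Hop Encoding (IANA capability code 5, RFC 8950).
# The capability value is one or more (NLRI AFI, NLRI SAFI, Nexthop AFI) triples.
CAP_EXTENDED_NEXT_HOP = 5
AFI_IPV4, AFI_IPV6 = 1, 2
SAFI_UNICAST = 1

def encode_extended_next_hop(triples):
    """Encode the capability TLV: code (1 octet), length (1 octet), value."""
    value = b"".join(struct.pack("!HHH", afi, safi, nh_afi)
                     for afi, safi, nh_afi in triples)
    return struct.pack("!BB", CAP_EXTENDED_NEXT_HOP, len(value)) + value

# IPv4 unicast NLRI carried with an IPv6 next hop, as in the abstract:
cap = encode_extended_next_hop([(AFI_IPV4, SAFI_UNICAST, AFI_IPV6)])
assert cap == bytes([5, 6, 0, 1, 0, 1, 0, 2])
```

Advertising this capability tells the peer that IPv4 unicast routes may arrive with an IPv6 next-hop address, which is what allows an IPv6-only interface to carry IPv4 reachability.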
METHOD AND APPARATUS FOR PROCESSING DATA UNIT BY IAB NODE IN WIRELESS COMMUNICATION SYSTEM
An Integrated Access and Backhaul (IAB) node in a wireless communication system is configured with a first Backhaul Adaptation Protocol (BAP) address related to a first donor IAB node and a second BAP address related to a second donor IAB node. Techniques include receiving a packet including a destination BAP address and a path identifier (ID). Based on the packet being received through the second link and the destination BAP address not matching the second BAP address, the node determines whether the destination BAP address and the path ID of the packet match at least one entry of a configured rewriting table. Based on the destination BAP address and the path ID matching the at least one entry, the node rewrites the header of the packet by setting the destination BAP address and the path ID according to the matching entry, and transmits the packet to a next-hop node.
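The rewriting step above amounts to a keyed table lookup. The sketch below uses an in-memory dict as the rewriting table; the table layout, addresses, and function names are assumptions for illustration (real IAB nodes receive this configuration from the donor).

```python
# Sketch of the BAP header-rewriting decision described above.
# Rewriting table: (destination BAP address, path ID) -> (new address, new path ID)
rewriting_table = {
    (0x2A, 3): (0x15, 7),
}

SECOND_BAP_ADDRESS = 0x30  # this node's BAP address toward the second donor

def process(dest_addr: int, path_id: int, via_second_link: bool):
    """Return the (possibly rewritten) header, or None if no rule matches."""
    if via_second_link and dest_addr != SECOND_BAP_ADDRESS:
        entry = rewriting_table.get((dest_addr, path_id))
        if entry is not None:
            return entry            # rewrite header, forward to next hop
        return None                 # no matching entry: cannot route onward
    return (dest_addr, path_id)     # addressed to this node; no rewrite needed

assert process(0x2A, 3, via_second_link=True) == (0x15, 7)
assert process(0x2A, 9, via_second_link=True) is None
assert process(0x30, 3, via_second_link=True) == (0x30, 3)
```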
Preferred path route graphs in a network
A method implemented by a network element (NE) in a network, comprising: receiving, by the NE, preferred path route (PPR) information describing a PPR graph, the PPR graph representing a plurality of PPRs between an ingress NE and an egress NE in the network and being described by a PPR identifier (PPR-ID) and a plurality of PPR path description elements (PPR-PDEs); and updating, by the NE, a forwarding database to include a forwarding entry for the egress NE in response to identifying the NE in the plurality of PPR-PDEs, the forwarding entry indicating a next hop by which to forward a data packet comprising the PPR-ID.
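Conceptually, each NE scans the PDE list for itself and, if present, installs an entry mapping the PPR-ID to the next PDE toward the egress. The data layout and names in this sketch are assumptions for illustration:

```python
# Illustrative sketch of the forwarding-database update step: an NE appears
# in the PPR-PDE list, so it maps the PPR-ID to the next element on the path.

def install_forwarding_entry(my_ne: str, ppr_id: int, pdes: list) -> dict:
    """Return {ppr_id: next_hop} if my_ne is on the path, else an empty dict."""
    if my_ne not in pdes:
        return {}                    # this NE is not on the preferred path
    i = pdes.index(my_ne)
    if i + 1 >= len(pdes):
        return {}                    # this NE is the egress; nothing to forward
    return {ppr_id: pdes[i + 1]}     # forward packets carrying ppr_id here

# Path A -> B -> C described by PPR-ID 100:
assert install_forwarding_entry("B", 100, ["A", "B", "C"]) == {100: "C"}
assert install_forwarding_entry("D", 100, ["A", "B", "C"]) == {}
```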
METHOD AND SYSTEM FOR GRANULAR DYNAMIC QUOTA-BASED CONGESTION MANAGEMENT
A system for facilitating sender-side granular congestion control is provided. During operation, the first and second processes of an application can run on sender and receiver nodes, respectively. A first buffer on the sender node can be allocated to the first process. For the first process, the system can then identify a second buffer at a last-hop switch of the receiver node. The system can determine, based on in-flight packets, the utilization of the second buffer. The system can also determine a fraction of available space in the second buffer for packets from the first buffer based on the utilization. Subsequently, the system can determine whether the fraction of the available space can accommodate the next packet from the first buffer. If the fraction of the available space can accommodate the next packet, the system can allow the first process to send the next packet to the second process.
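A back-of-the-envelope sketch of the sender-side admission check: the sender is granted only a fraction of the free space in the receiver's last-hop switch buffer, and that fraction shrinks as utilization grows. The specific fraction formula below is an assumption, not the patent's.

```python
# Hypothetical quota check for the sender-side congestion control described
# above. Utilization is derived from in-flight bytes; the sender's share of
# the remaining space decreases linearly as the buffer fills.

def allowed_fraction(utilization: float) -> float:
    """Grant the sender a smaller share of free space as the buffer fills."""
    return max(0.0, 1.0 - utilization)

def can_send(buffer_size: int, in_flight_bytes: int, packet_bytes: int) -> bool:
    utilization = in_flight_bytes / buffer_size
    available = buffer_size - in_flight_bytes
    quota = allowed_fraction(utilization) * available
    return packet_bytes <= quota

assert can_send(buffer_size=1000, in_flight_bytes=0, packet_bytes=500)
assert not can_send(buffer_size=1000, in_flight_bytes=900, packet_bytes=50)
```

At 90% utilization only 10% of the remaining 100 bytes (i.e. 10 bytes) is granted, so a 50-byte packet is held back even though raw space exists; this is the granular, quota-based throttling the abstract describes.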
Service path generation in load balanced manner
Some embodiments provide novel methods for performing services for machines operating in one or more datacenters. For instance, for a group of related guest machines (e.g., a group of tenant machines), some embodiments define two different forwarding planes: (1) a guest forwarding plane and (2) a service forwarding plane. The guest forwarding plane connects to the machines in the group and performs L2 and/or L3 forwarding for these machines. The service forwarding plane (1) connects to the service nodes that perform services on data messages sent to and from these machines, and (2) forwards these data messages to the service nodes. In some embodiments, the guest machines do not connect directly with the service forwarding plane. For instance, in some embodiments, each forwarding plane connects to a machine or service node through a port that receives data messages from, or supplies data messages to, the machine or service node. In such embodiments, the service forwarding plane does not have a port that directly receives data messages from, or supplies data messages to, any guest machine. Instead, in some such embodiments, data associated with a guest machine is routed to a port proxy module executing on the same host computer, and this module has a service plane port. This port proxy module can in some embodiments indirectly connect more than one guest machine on the same host to the service plane (i.e., can serve as the port proxy module for more than one guest machine on the same host).
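A toy model of the two-plane arrangement: guest machines never attach to the service plane directly; instead a per-host port proxy owns the single service-plane port and relays on behalf of every guest on that host. All class and attribute names are illustrative assumptions.

```python
# Toy model of the guest/service forwarding-plane split described above.

class ServicePlane:
    """The service forwarding plane; only proxies ever get ports here."""
    def __init__(self):
        self.ports = set()

    def attach(self, endpoint: str):
        self.ports.add(endpoint)

class PortProxy:
    """One proxy per host: the only guest-side entity with a service-plane port."""
    def __init__(self, host: str, plane: ServicePlane):
        self.host = host
        self.guests = set()
        plane.attach(f"proxy@{host}")   # the proxy, not the guests, gets the port

    def register_guest(self, guest: str):
        self.guests.add(guest)          # guests reach the service plane via me

plane = ServicePlane()
proxy = PortProxy("host1", plane)
proxy.register_guest("vm-a")
proxy.register_guest("vm-b")

assert plane.ports == {"proxy@host1"}    # no guest has a direct service-plane port
assert proxy.guests == {"vm-a", "vm-b"}  # one proxy serves multiple guests
```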