Patent classifications
H04L12/947
CHOREOGRAPHED CACHING
A routing device capable of performing application layer data caching is described. Application data caching at a routing device can alleviate the bottleneck that an application data host may experience during periods of high demand for application data. Requests for the application data can also be fulfilled faster by eliminating the network delays involved in communicating with the application data host. The techniques described can also be used to perform analysis of the underlying application data in the network traffic transiting through a routing device.
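The caching behavior described above can be sketched as a bounded cache with expiry, so that stale or cold application data does not accumulate on the routing device. The class and parameter names below are illustrative, not from the patent; this is a minimal sketch assuming an LRU eviction policy and a per-entry TTL.

```python
import time
from collections import OrderedDict

class RouteCache:
    """Sketch of an application-layer data cache held on a routing device.

    Entries expire after `ttl` seconds, and the least recently used entry
    is evicted once `capacity` is exceeded.
    """

    def __init__(self, capacity=128, ttl=30.0):
        self.capacity = capacity
        self.ttl = ttl
        self._store = OrderedDict()  # request key -> (expiry, response)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None  # miss: forward the request to the application host
        expiry, response = entry
        if time.monotonic() > expiry:
            del self._store[key]  # expired: treat as a miss
            return None
        self._store.move_to_end(key)  # mark as recently used
        return response

    def put(self, key, response):
        self._store[key] = (time.monotonic() + self.ttl, response)
        self._store.move_to_end(key)
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)  # evict least recently used
```

On a hit, the request never leaves the routing device, which is where the latency and host-load savings come from.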
Session Continuity in the Presence of Network Address Translation
Embodiments of the present invention provide for continuity of “stateful” routing sessions in the presence of source network address translation (NAT). Specifically, a stateful routing session may be moved from one routing path to another routing path, e.g., due to a routing change in the communication network, where the routing paths have different source NAT status. For example, the stateful routing session may be moved from a path having no source NAT to a path having source NAT, from a path having source NAT to a path having no source NAT, or from paths having different source network address translations. When a stateful routing session is moved from an existing routing path to a new routing path, the routers detect the routing change based on the change in source NAT status using a special link monitoring protocol. Upon detecting the change in source NAT status, session metadata is included in at least the first packet forwarded following detection of the change in source NAT status so that the stateful routing session can continue without interruption.
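The detect-then-resend behavior above can be sketched as follows: a link monitoring probe reports the source address the far-end router actually observed, a mismatch with the recorded address signals a change in source NAT status, and the session is armed to carry its metadata in the next forwarded packet. All names and the packet representation are illustrative assumptions, not the patent's wire format.

```python
from dataclasses import dataclass

@dataclass
class Session:
    """Minimal stateful-session record (illustrative fields)."""
    session_id: str
    observed_src: tuple          # (addr, port) the peer last reported seeing
    resend_metadata: bool = False

def on_link_probe(session, reported_src):
    """If the peer now observes a different source address/port, the path's
    source NAT status has changed; arm the session to resend metadata."""
    if reported_src != session.observed_src:
        session.observed_src = reported_src
        session.resend_metadata = True

def forward(session, payload):
    """Include session metadata in the first packet after a NAT change."""
    if session.resend_metadata:
        session.resend_metadata = False
        return {"metadata": {"session": session.session_id}, "payload": payload}
    return {"payload": payload}
```

Because the metadata rides in-band on the first post-change packet, the receiving router can re-associate the flow with its existing session state and the session continues without interruption.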
SELECTIVE RULE MANAGEMENT BASED ON TRAFFIC VISIBILITY IN A TUNNEL
One embodiment of the present invention provides a switch. The switch includes a storage device, a rule management module, an inner packet module, and a packet processor. During operation, the rule management module obtains a rule associated with a data flow within tunnel encapsulation of a tunnel. This rule indicates how the flow is to be processed at the switch. The rule management module then applies an initial rule to a respective line card of the switch. The initial rule is derived from a virtual network identifier, which is associated with the tunnel, of the obtained rule. The inner packet module determines that a first inner packet, which is encapsulated with a first encapsulation header, belongs to the flow without decapsulating the first encapsulation header. The rule management module applies the obtained rule to a line card associated with an ingress port of the encapsulated first inner packet.
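The key step above is identifying the inner flow without decapsulating. Assuming a VXLAN-style encapsulation with an untagged inner IPv4 frame, the switch can read the VNI and the inner addresses at fixed offsets while leaving the encapsulation header in place. The offsets below follow the standard VXLAN layout (8-byte VXLAN header, then inner Ethernet); the function name is an illustrative assumption.

```python
def peek_inner_flow(vxlan_payload):
    """Read the VNI and inner IPv4 source/destination from a VXLAN payload
    without stripping (decapsulating) the encapsulation header.

    `vxlan_payload` starts at the 8-byte VXLAN header, followed by the
    inner Ethernet frame; offsets assume an untagged inner IPv4 frame.
    """
    vni = int.from_bytes(vxlan_payload[4:7], "big")  # 24-bit VNI field
    inner_ip = 8 + 14          # skip VXLAN header + inner Ethernet header
    src = vxlan_payload[inner_ip + 12 : inner_ip + 16]
    dst = vxlan_payload[inner_ip + 16 : inner_ip + 20]
    return vni, ".".join(map(str, src)), ".".join(map(str, dst))
```

With this, the coarse initial rule can match only the VNI on every line card, and once `peek_inner_flow` identifies the flow, the full rule needs to be installed only on the line card serving the ingress port.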
APPARATUS AND METHOD FOR ROUTING DATA IN A SWITCH
Apparatuses, methods and storage media associated with routing data in a switch are provided. In embodiments, the switch may include route lookup circuitry to determine a first set of output ports that are available to send a data packet to a destination node. The lookup circuitry may further select, based on respective congestion levels associated with the first set of output ports, a plurality of output ports for a second set of output ports from the first set of output ports. An input queue of the switch may buffer the data packet and route information associated with the second set of output ports. The switch may further include route selection circuitry to select a destination output port from the second set of output ports, based on updated congestion levels associated with the output ports of the second set of output ports. Other embodiments may be described and/or claimed.
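The two-stage selection described above can be sketched as a pair of functions: stage one narrows the reachable ports to a candidate set using the congestion levels known at lookup time, and stage two makes the final choice at dequeue time using the updated levels. Data structures and names are illustrative assumptions.

```python
def lookup_candidates(route_table, congestion, dest, k=2):
    """Stage 1 (route lookup): from all output ports that reach `dest`,
    keep the `k` least-congested ports as the candidate set that is
    buffered alongside the packet."""
    available = route_table[dest]                      # first set of ports
    return sorted(available, key=lambda p: congestion[p])[:k]

def select_port(candidates, congestion):
    """Stage 2 (route selection): at dequeue time, pick the candidate that
    is least congested according to the *updated* congestion levels."""
    return min(candidates, key=lambda p: congestion[p])
```

Deferring the final choice matters because congestion can shift while the packet sits in the input queue; the candidate set bounds the work done at dequeue time while still reacting to fresh information.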
TECHNOLOGIES FOR HIGH-PERFORMANCE NETWORK FABRIC SECURITY
Technologies for fabric security include one or more managed network devices coupled to one or more computing nodes via high-speed fabric links. A managed network device enables a port and, while enabling the port, securely determines the node type of the link partner coupled to the port. If the link partner is a computing node, management access is not allowed at the port. The managed network device may allow management access at certain predefined ports, which may be connected to one or more management nodes. Management access may be allowed for additional ports in response to management messages received from the management nodes. The managed network device may check and verify data packet headers received from a computing node at each port. The managed network device may rate-limit management messages received from a computing node at each port. Other embodiments are described and claimed.
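The port policy described above can be sketched as a small class: management access is denied outright to computing nodes, allowed on a predefined set of management ports, extendable by a grant from a management port, and management messages from any port pass through a token-bucket rate limiter. Class, method, and parameter names are illustrative assumptions.

```python
import time

class FabricPortPolicy:
    """Sketch of per-port management-access control on a managed device."""

    def __init__(self, mgmt_ports, rate=5.0, burst=5.0):
        self.mgmt_ports = set(mgmt_ports)  # predefined management ports
        self.rate, self.burst = rate, burst
        self.buckets = {}                  # port -> (tokens, last refill time)

    def allow_management(self, port, node_type):
        if node_type == "compute":
            return False                   # never manage from a computing node
        return port in self.mgmt_ports

    def grant(self, port, from_port):
        """A management message from a management port extends access."""
        if from_port in self.mgmt_ports:
            self.mgmt_ports.add(port)

    def admit_mgmt_message(self, port, now=None):
        """Token-bucket rate limit on management messages from `port`."""
        now = time.monotonic() if now is None else now
        tokens, last = self.buckets.get(port, (self.burst, now))
        tokens = min(self.burst, tokens + (now - last) * self.rate)
        if tokens >= 1.0:
            self.buckets[port] = (tokens - 1.0, now)
            return True
        self.buckets[port] = (tokens, now)
        return False
```

The rate limiter bounds how quickly a compromised or misbehaving computing node can flood the device's management plane, without affecting data traffic.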
Distributing routing information in a multi-datacenter environment
A system provisions global logical entities that facilitate the operation of logical networks that span two or more datacenters. These global logical entities include global logical switches that provide L2 switching as well as global routers that provide L3 routing among network nodes in multiple datacenters. The global logical entities operate alongside local logical entities that operate logical networks local to a single datacenter.
TECHNOLOGIES FOR MEDIUM GRAINED ADAPTIVE ROUTING IN HIGH-PERFORMANCE NETWORK FABRICS
Technologies for medium grained adaptive routing include one or more managed network devices coupled to one or more computing nodes via high-speed fabric links. A computing node may transmit a data packet including a destination local identifier (DLID) that identifies the destination computing node. The managed network device determines a static destination port based on the DLID, and determines whether the static destination port is congested. If congested, the managed network device determines a port group based on the DLID and selects a dynamic destination port from the port group. The port group may include two or more destination ports of the managed network device, and port groups may overlap. Port groups may be described by port masks stored in a port group table. The port groups and mappings between DLIDs and port groups may be configured by a fabric manager. Other embodiments are described and claimed.
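The forwarding decision above can be sketched directly: the DLID maps to a static destination port, and only when that port is congested does the device fall back to the DLID's port group, stored as a bitmask in a port group table, choosing its least-congested member. The congestion threshold and data-structure shapes are illustrative assumptions.

```python
def route(dlid, static_port, congestion, threshold, group_table, dlid_group):
    """Sketch of medium grained adaptive routing for one packet.

    `static_port` maps DLID -> static destination port; `group_table` holds
    port groups as bitmasks (bit p set means port p is a member, so groups
    may overlap); `dlid_group` maps DLID -> group index.
    """
    port = static_port[dlid]
    if congestion[port] < threshold:
        return port                           # static route, not congested
    mask = group_table[dlid_group[dlid]]      # look up the DLID's port group
    members = [p for p in range(mask.bit_length()) if mask >> p & 1]
    return min(members, key=lambda p: congestion[p])
```

Because both the port masks and the DLID-to-group mapping are plain tables, a fabric manager can reconfigure the routing behavior without touching the forwarding logic itself.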
Contextual Service Mobility in an Enterprise Fabric Network Environment
In one embodiment, contextual service mobility in an enterprise fabric network environment (e.g., overlay and underlay networks) provides for moving of the location of a service being applied to packets with minimal updates to the mapping database. The mapping database is used to convert addresses of the overlay network to physical network and service addresses. The mapping database provides contextual lookup operations on the same destination address of a packet being forwarded in the overlay network to provide different results. The contextual lookup operations provide for a packet to be forwarded to a service node or its intended destination depending on the current context. In one embodiment, the enterprise fabric network uses Locator/ID Separation Protocol (LISP), a network architecture and set of protocols that uses different overlay and underlay namespaces and a distributed mapping database for converting an overlay address to an underlay address.
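The contextual lookup above can be sketched as a mapping database keyed on both the overlay address and a lookup context, with a fallback to the default mapping. Moving the service then touches only the contextual entry, which is the "minimal updates" property the abstract describes. Class and context names are illustrative assumptions, not LISP protocol fields.

```python
class MappingDatabase:
    """Sketch of a LISP-style mapping database with contextual lookups:
    the same overlay destination (EID) resolves to different underlay
    locators (RLOCs) depending on the lookup context."""

    def __init__(self):
        self._map = {}   # (eid, context) -> rloc

    def register(self, eid, context, rloc):
        self._map[(eid, context)] = rloc

    def lookup(self, eid, context):
        # Fall back to the default mapping when no contextual entry exists.
        return self._map.get((eid, context), self._map.get((eid, "default")))
```

A packet that has not yet traversed the service looks up in one context and is steered to the service node; the same destination looked up after service completion resolves to the destination's own locator.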
Forwarding of adaptive routing notifications
Communication apparatus includes multiple interfaces configured to be connected to respective links in a packet data network. Switching circuitry in the apparatus is coupled between the interfaces and is configured to receive, via a first interface among the multiple interfaces, an adaptive routing notification (ARN) requesting that a specified flow of packets from a given source to a given destination in the network be rerouted. The switching circuitry is configured, upon verifying that the first interface serves as an egress interface for the packets in the specified flow, to reroute the specified flow through a different, second interface among the multiple interfaces when there is an alternative route available in the network from the second interface to the given destination, and after finding that there is no alternative route available from any of the interfaces to the given destination, to forward the ARN to a plurality of the interfaces.
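The decision procedure above can be sketched as one handler: if the ARN did not arrive on the flow's egress interface it is ignored; if an alternative route exists the flow is rerouted locally; otherwise the ARN is propagated out the other interfaces so an upstream switch can act on it. The dictionary-based state is an illustrative assumption.

```python
def handle_arn(arn_iface, flow, egress_for, alternatives, all_ifaces):
    """Sketch of adaptive routing notification (ARN) handling.

    `egress_for` maps a flow to its current egress interface;
    `alternatives` maps a flow to interfaces offering an alternative route
    to its destination. Returns ("reroute", iface), ("forward", ifaces),
    or ("drop", None).
    """
    if egress_for.get(flow) != arn_iface:
        return ("drop", None)          # not our egress: ARN is not for us
    for iface in alternatives.get(flow, []):
        if iface != arn_iface:
            egress_for[flow] = iface   # reroute the flow locally
            return ("reroute", iface)
    # No alternative from any interface: propagate toward upstream switches.
    return ("forward", [i for i in all_ifaces if i != arn_iface])
```

Handling the reroute as close as possible to the congested link, and forwarding the ARN only when no local alternative exists, keeps the notification traffic bounded.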
DYNAMIC ADJUSTMENT OF CONNECTION PRIORITY IN NETWORKS
Systems and methods for dynamic adjustment of a connection's priority in a network include configuring the connection with a dynamic priority and setting a current priority based on one or more factors, wherein the connection is a Layer 0 connection, a Layer 1 connection, or a combination thereof; detecting an event in the network requiring a change to the current priority, wherein the event changes the one or more factors; and causing a change in the current priority of the connection based on the event.
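The configure/detect/adjust sequence above can be sketched as a connection whose current priority is recomputed from weighted factors whenever an event changes one of them. The factor names, weights, and linear combination are illustrative assumptions; the patent does not specify how factors combine.

```python
class Connection:
    """Sketch of a Layer 0/1 connection with a dynamically adjusted
    priority derived from weighted factors (names are illustrative)."""

    def __init__(self, base_priority, factors, weights):
        self.base = base_priority
        self.factors = dict(factors)
        self.weights = weights
        self.current = self._compute()

    def _compute(self):
        bump = sum(self.weights[k] * v for k, v in self.factors.items())
        return self.base + bump

    def on_event(self, factor, value):
        """A network event (e.g. a fault raising restoration urgency)
        updates a factor and triggers a change in the current priority."""
        self.factors[factor] = value
        self.current = self._compute()
        return self.current
```

Keeping the priority a pure function of the factors means an event handler only has to record the changed factor; the new current priority follows automatically.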