Patent classifications
H04L45/742
Quantitative exact match distance in network flows
There is disclosed an example of a host computing apparatus, including: an exact match cache (EMC) to perform exact matching according to an exact match tuple; a datapath classifier (DPCLS) to provide wildcard searching in a tuple search space (TSS) including a plurality of subtables, the subtables having one or more rule masks; a controller to receive a first packet having a first property matching a first rule of the classifier table, and forward header data of the packet to a partial rule module (PRM); and the PRM to compute a quantitative distance between the first rule and the exact match tuple of the EMC, and to execute a flow action for the first packet according to the quantitative distance.
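The abstract does not define the distance metric, but one plausible reading is that the PRM counts how many fields of the EMC's exact-match tuple a subtable rule mask leaves wildcarded, and uses that count to decide the flow action. The field names, the metric, and the threshold below are illustrative assumptions, not the patent's specification:

```python
# Hypothetical sketch: distance = number of exact-match tuple fields that a
# DPCLS rule mask does NOT constrain exactly. A rule "close" to the exact
# match tuple is promoted into the EMC; a distant (heavily wildcarded) rule
# stays in the DPCLS. Field set and threshold are invented for illustration.

EMC_FIELDS = ("src_ip", "dst_ip", "src_port", "dst_port", "proto")

def rule_distance(rule_mask: dict) -> int:
    """rule_mask maps field name -> True if the rule constrains the field
    exactly, False (or absent) if the field is wildcarded."""
    return sum(1 for f in EMC_FIELDS if not rule_mask.get(f, False))

def flow_action(rule_mask: dict, threshold: int = 2) -> str:
    """Execute a flow action chosen by the quantitative distance."""
    return "insert_emc" if rule_distance(rule_mask) <= threshold else "dpcls_only"
```

With this metric, an exact 5-tuple rule has distance 0 and is cached in the EMC, while a rule matching only the IP pair has distance 3 and is served from the DPCLS alone.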
Internet provider subscriber communications system
A method for communicating in real time with users of a provider of Internet access service, without requiring any installation or set-up by the user. The method utilizes the unique identification information automatically provided by the user during communications to derive a fixed identifier for the user, which is then communicated to a redirecting device. Messages may then be selectively transmitted to the user. The system is normally transparent to the user, passing content along the path without modification; content may, however, be modified or replaced along the path to the user. To establish reliable delivery of bulletin messages from providers to their users, the system forces the delivery of specially composed World Wide Web browser pages to the user, although it is not limited to that type of data.
TRAFFIC FLOW BASED MAP-CACHE REFRESH TO SUPPORT DEVICES AND THEIR DYNAMIC POLICY UPDATES
A traffic flow based map cache refresh may be provided. A computing device may receive a dropped packet message when a packet associated with a flow having a destination and a source was dropped before it reached the destination. Next, in response to receiving the dropped packet message, a map request message may be sent to a Map Server (MS). In response to sending the map request message, a map response message may be received indicating an updated destination for the flow. A map cache may then be refreshed for the source of the flow based on the updated destination from the received map response message.
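The refresh sequence above (dropped-packet message → map request → map response → cache refresh) can be sketched as follows; the message shapes and the in-memory Map Server stand-in are illustrative assumptions:

```python
# Toy model of the traffic-flow-based map-cache refresh. MS_DB stands in
# for the Map Server (MS); a real deployment would exchange map request /
# map response messages over the network.

MS_DB = {}        # Map Server database: flow destination -> current locator
map_cache = {}    # per-source map cache: source -> {flow_dst: locator}

def map_server_lookup(flow_dst):
    """Models the map request / map response exchange with the MS."""
    return MS_DB[flow_dst]

def on_dropped_packet(msg):
    """msg: {'source': ..., 'flow_dst': ...}, received when a packet of the
    flow was dropped before reaching its destination. Refreshes the map
    cache for the flow's source with the updated destination."""
    updated = map_server_lookup(msg["flow_dst"])
    map_cache.setdefault(msg["source"], {})[msg["flow_dst"]] = updated
    return updated
```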
Efficient Memory Utilization for Cartesian Products of Rules
A network device includes one or more ports, and action-select circuitry. The ports are to exchange packets over a network. The action-select circuitry is to determine, for a given packet, a first search key based on a first header field of the given packet, and a second search key based on a second header field of the given packet, to compare the first search key to a first group of compare values, to output a multi-element vector responsively to a match between the first search key and a first compare value, to generate a composite search key by concatenating the second search key and the multi-element vector, to compare the composite search key to a second group of compare values, and, responsively to a match between the composite search key and a second compare value, to output an action indicator for applying to the given packet.
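The two-stage lookup can be sketched in a few lines: the first key selects a small multi-element vector, and concatenating that vector with the second key forms the composite key for the second match stage, so the device never materializes the full Cartesian product of first-field and second-field rules. Table contents below are invented for illustration:

```python
# Stage 1: first search key -> multi-element vector.
stage1 = {
    "vlan:10": (1, 0),
    "vlan:20": (0, 1),
}

# Stage 2: composite key (second search key + vector) -> action indicator.
stage2 = {
    ("10.0.0.1", (1, 0)): "forward:port3",
    ("10.0.0.1", (0, 1)): "drop",
}

def classify(first_key, second_key):
    """Return the action indicator for a packet's two header fields."""
    vector = stage1.get(first_key)
    if vector is None:
        return "miss"
    composite = (second_key, vector)   # concatenation of key and vector
    return stage2.get(composite, "miss")
```

The memory saving comes from the vector being much narrower than the first search key itself: stage 2 stores one entry per (second key, vector) pair rather than per (second key, first key) pair.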
LOGICAL ROUTER WITH MULTIPLE ROUTING COMPONENTS
Some embodiments provide a method for handling failure at one of several peer centralized components of a logical router. At a first one of the peer centralized components of the logical router, the method detects that a second one of the peer centralized components has failed. In response to the detection, the method automatically identifies a network layer address of the failed second peer. The method assumes responsibility for data traffic to the failed peer by broadcasting a message on a logical switch that connects all of the peer centralized components and a distributed component of the logical router. The message instructs recipients to associate the identified network layer address with a data link layer address of the first peer centralized component.
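The takeover step resembles a gratuitous-ARP announcement: the surviving peer broadcasts a binding of the failed peer's network-layer address to its own data-link address, and every component attached to the logical switch updates its table. The class names and table shapes below are illustrative assumptions:

```python
# Toy model of the failover broadcast on the logical switch.

class LogicalSwitch:
    def __init__(self):
        self.members = []          # peer centralized + distributed components

    def broadcast(self, net_addr, link_addr):
        """Deliver the (network addr -> link addr) binding to all members."""
        for m in self.members:
            m.arp_table[net_addr] = link_addr

class Component:
    def __init__(self, name, net_addr, link_addr, switch):
        self.name, self.net_addr, self.link_addr = name, net_addr, link_addr
        self.arp_table = {}
        self.switch = switch
        switch.members.append(self)

    def take_over(self, failed_peer_net_addr):
        # Claim the failed peer's network-layer address with our own
        # data-link-layer address, redirecting its traffic to us.
        self.switch.broadcast(failed_peer_net_addr, self.link_addr)
```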
METHOD AND SYSTEM FOR ESTABLISHING A DISTRIBUTED NETWORK WITHOUT A CENTRALIZED DIRECTORY
A method for establishing a connection between two nodes in a communication network without use of a centralized directory or mapping identifiers includes: receiving a lookup message from another node in the communication network that includes a lookup term; determining if a target node in a local directory cache can be identified that satisfies the lookup term; and, if such a node is identified, establishing a connection to the target node and forwarding the lookup message, or, if no such node is identified, forwarding the lookup message to other nodes in the network with which the node has an active communication connection.
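The lookup rule (answer from the local directory cache if possible, otherwise forward to active peers) can be sketched as a recursive search; the node representation and the substring match used for "satisfies the lookup term" are illustrative assumptions:

```python
# Minimal sketch of decentralized lookup: each node holds a local directory
# cache and a list of actively connected peers; no centralized directory
# exists anywhere.

def handle_lookup(node, term, seen=None):
    """node: {'id': ..., 'cache': [target names], 'peers': [nodes]}.
    Returns a target satisfying the term, or None."""
    seen = seen if seen is not None else set()
    if node["id"] in seen:               # avoid re-visiting on cycles
        return None
    seen.add(node["id"])
    for target in node["cache"]:         # local directory cache first
        if term in target:
            return target
    for peer in node["peers"]:           # else forward the lookup message
        hit = handle_lookup(peer, term, seen)
        if hit is not None:
            return hit
    return None
```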
Captive portal redirection by devices with no internet protocol connectivity in the host virtual local area network
In general, the disclosure relates to a method for redirecting a user to a captive portal. The method includes trapping an incoming frame originating from a host, where the incoming frame comprises an L2 header and a payload, the payload specifying information associated with an external server, and where the user of the host has not been authenticated by the captive portal at the time the incoming frame is trapped; extracting the L2 header, an L3 header, and the payload from the incoming frame; forwarding the L3 header and the payload towards a redirection server executing on the network device, the redirection server being configured to generate a redirection response based on the payload; encapsulating the redirection response to obtain an L3 response packet; encapsulating the L3 response packet using information from the L2 header to obtain an output frame; and transmitting the output frame towards the host.
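The trap-and-respond path can be sketched with dictionaries standing in for frames; the frame layout, the 302 response, and the portal URL are invented for illustration:

```python
# Toy model of captive-portal redirection: an unauthenticated host's frame
# is trapped, a redirection response is built from its payload, and the
# response is re-encapsulated with the original L2 header (addresses
# swapped) so the output frame travels back towards the host.

authenticated = set()   # L2 source addresses already past the portal

def handle_frame(frame):
    """frame: {'l2': {'src', 'dst'}, 'l3': {'src', 'dst'}, 'payload': str}.
    Returns an output frame for unauthenticated hosts, else None."""
    if frame["l2"]["src"] in authenticated:
        return None                                  # normal forwarding
    # Redirection server generates a response based on the payload.
    redirect = {"status": 302, "location": "https://portal.example/login"}
    l3_response = {"dst": frame["l3"]["src"], "body": redirect}
    # Encapsulate using the trapped L2 header, with addresses reversed.
    return {"l2": {"src": frame["l2"]["dst"], "dst": frame["l2"]["src"]},
            "l3": l3_response}
```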
INVALIDATING CACHED FLOW INFORMATION IN A CLOUD INFRASTRUCTURE
Techniques for managing the distribution of configuration information that supports the flow of packets in a cloud environment are described. In an example, a virtual network interface card (VNIC) hosted on a network virtualization device (NVD) receives a first packet from a compute instance associated with the VNIC. The VNIC determines that flow information to send the first packet on a virtual network is unavailable from a memory of the NVD. The VNIC sends, via the NVD, the first packet to a network interface service, where the network interface service maintains configuration information to send packets on the substrate network and is configured to send the first packet on the substrate network based on the configuration information. The NVD receives the flow information from the network interface service, where the flow information is a subset of the configuration information. The NVD stores the flow information in the memory.
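The miss path can be sketched as a punt-and-learn cache: on a miss, the NVD hands the packet to the network interface service, which forwards it and returns the relevant subset of configuration for the NVD to cache. Class names and the flow-key shape are illustrative assumptions:

```python
# Toy model of the cached-flow-information miss path.

class NetworkInterfaceService:
    def __init__(self, config):
        self.config = config             # full configuration information

    def send_and_learn(self, packet):
        """Send the packet on the substrate network (modeled as a no-op)
        and return the subset of configuration relevant to its flow."""
        key = (packet["src"], packet["dst"])
        return {key: self.config[key]}

class NVD:
    def __init__(self, service):
        self.cache = {}                  # flow information in NVD memory
        self.service = service

    def send(self, packet):
        key = (packet["src"], packet["dst"])
        if key not in self.cache:        # miss: punt to the service, learn
            self.cache.update(self.service.send_and_learn(packet))
        return self.cache[key]           # hit: send directly from the NVD
```

Subsequent packets of the same flow hit the NVD-local cache and never reach the service.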
COMMUNICATION OF POLICY CHANGES IN LISP-BASED SOFTWARE DEFINED NETWORKS
Systems, methods, and computer-readable media for communicating policy changes in a Locator/ID Separation Protocol (LISP) based network deployment include receiving, at a first routing device, a first notification from a map server. The first notification indicates a change in a policy for LISP-based communication between at least a first endpoint device and at least a second endpoint device, the first endpoint device being connected to a network fabric through the first routing device and the second endpoint device being connected to the network fabric through a second routing device. The first routing device forwards a second notification to the second routing device if one or more entries of a first map cache implemented by the first routing device are affected by the policy change, the second notification indicating a set of one or more endpoints connected to the second routing device that are affected by the policy change.
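The forwarding rule can be sketched as a scan of the first routing device's map cache: collect the affected entries, then group the affected remote endpoints by the routing device behind which they sit, yielding one second notification per remote router. The entry and notification shapes are illustrative assumptions:

```python
# Sketch of deciding which second notifications to send after a policy
# change arrives from the map server.

def on_policy_change(map_cache, notification):
    """map_cache: list of {'local_ep', 'remote_ep', 'remote_rtr'} entries.
    notification: {'affected_pairs': set of (local_ep, remote_ep)}.
    Returns {remote router: set of affected endpoints behind it}."""
    affected = [e for e in map_cache
                if (e["local_ep"], e["remote_ep"]) in notification["affected_pairs"]]
    notices = {}
    for e in affected:       # one second notification per remote router
        notices.setdefault(e["remote_rtr"], set()).add(e["remote_ep"])
    return notices
```

If no map-cache entry is affected, the returned dictionary is empty and no second notification is forwarded, matching the conditional in the abstract.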
Lightweight and trust-aware routing in NoC based SoC architectures
Various examples are provided related to software and hardware architectures that enable lightweight and trust-aware routing. In one example, among others, a method for trust-aware routing includes calculating trust values to represent how much a node can be trusted to route packets through its router. Each node can store the trust values of routers that are one hop and two hops away from it, which represent direct trust and delegated trust, respectively. When a router receives a packet, the router can update trust values and forward the packet to the next hop.
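A minimal sketch of the trust bookkeeping: each node keeps direct trust for one-hop neighbors and delegated trust for two-hop routers, updates direct trust from observed delivery, and picks the next hop by a combined score. The update rule, weights, and initial value of 0.5 are invented for illustration, not the paper's formulation:

```python
# Illustrative trust-aware routing node for a NoC-based SoC.

class TrustNode:
    def __init__(self, name):
        self.name = name
        self.direct = {}     # one-hop neighbor -> trust value in [0, 1]
        self.delegated = {}  # two-hop router   -> trust value in [0, 1]

    def update_trust(self, neighbor, delivered, alpha=0.2):
        """Exponential update of direct trust from observed behavior."""
        old = self.direct.get(neighbor, 0.5)
        self.direct[neighbor] = (1 - alpha) * old + alpha * (1.0 if delivered else 0.0)

    def next_hop(self, candidates, two_hop_of, w=0.7):
        """Pick the candidate maximizing weighted direct + delegated trust;
        two_hop_of maps each candidate to the router two hops away."""
        def score(n):
            return (w * self.direct.get(n, 0.5)
                    + (1 - w) * self.delegated.get(two_hop_of[n], 0.5))
        return max(candidates, key=score)
```

Because each node stores trust only for routers one and two hops away, the per-node state stays small, which is what makes the scheme lightweight.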