Patent classifications
H04L12/743
SYSTEMS AND METHODS FOR IMPLEMENTING MULTI-TABLE OPENFLOW FLOWS THAT HAVE COMBINATIONS OF PACKET EDITS
Systems and methods are provided herein for implementing multi-table OpenFlow flows that have combinations of packet edits. This may be accomplished by a network device receiving a first flow entry with a first set of actions to be installed into a flow table. The network device may determine that the first set of actions includes edits to a plurality of fields of a matched data packet. In response, the network device may change the first set of actions of the first flow entry to edit a first field of the data packet and create a second flow entry with a second set of actions to edit a second field of the data packet. The network device may install the first and second flow entries into one or more flow tables of the network device.
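The splitting step described above can be sketched in a few lines. This is a hedged illustration only: the dict-based entry layout, field names, and the goto-table chaining are assumptions for the example, not taken from any OpenFlow library or from the patent itself.

```python
# Hypothetical sketch: a flow entry whose action list edits several packet
# fields is rewritten into two chained entries, each editing one field,
# linked by a goto-table action into the next flow table.

def split_flow_entry(match, set_field_actions, table_id):
    """Split a multi-edit flow entry into two entries across tables."""
    if len(set_field_actions) < 2:
        return [{"table": table_id, "match": match,
                 "actions": list(set_field_actions)}]
    first_field, *rest = set_field_actions
    first = {
        "table": table_id,
        "match": match,
        # first entry edits one field, then jumps to the next table
        "actions": [first_field, ("goto_table", table_id + 1)],
    }
    second = {
        "table": table_id + 1,
        "match": match,
        "actions": list(rest),  # remaining edits applied here
    }
    return [first, second]

entries = split_flow_entry(
    match={"in_port": 1},
    set_field_actions=[("set_field", "eth_dst", "aa:bb:cc:dd:ee:ff"),
                       ("set_field", "ip_ttl", 63)],
    table_id=0,
)
```

With two edits, the sketch yields one entry per table, the first ending in a goto-table to the second.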
Multicast traffic in virtual private networks
In one embodiment, a method is provided. The method includes determining that a network device should use an underlay multicast group associated with an overlay multicast group for multicast traffic. The underlay multicast group carries multicast traffic for the overlay multicast group. The overlay multicast group is associated with a virtual private network. The method also includes determining an underlay multicast group address for the underlay multicast group. The overlay multicast group is associated with an overlay multicast group address. A first portion of the underlay multicast group address is a function of the overlay multicast group address. The method further includes forwarding one or more multicast packets to one or more multicast receivers via the underlay multicast group using the underlay multicast group address.
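One way to make "a first portion of the underlay multicast group address is a function of the overlay multicast group address" concrete is to embed bits of the overlay address into a fixed underlay prefix. The 239.0.0.0 base and the 24-bit mask below are illustrative choices for the sketch, not details from the abstract.

```python
# Hedged sketch: derive an underlay group address by copying the low
# 24 bits of the overlay group address into an assumed underlay prefix.
import ipaddress

UNDERLAY_BASE = int(ipaddress.IPv4Address("239.0.0.0"))  # assumed prefix

def underlay_group_address(overlay_group: str) -> str:
    overlay = int(ipaddress.IPv4Address(overlay_group))
    # a portion (the low 24 bits) of the underlay address is a
    # function of the overlay address
    return str(ipaddress.IPv4Address(UNDERLAY_BASE | (overlay & 0x00FFFFFF)))

print(underlay_group_address("225.1.2.3"))  # → 239.1.2.3
```

Under this scheme, overlay groups differing only in the high octet share an underlay group, while groups differing in the low 24 bits map to distinct underlay groups.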
Method To Mitigate Hash Correlation In Multi-Path Networks
Methods are provided for mitigating hash correlation. In this regard, a hash correlation may be found between a first switch and a second switch in a network. In this network, a first egress port is to be selected among a first group of egress ports at the first switch for forwarding packets, and a second egress port is to be selected among a second group of egress ports at the second switch for forwarding packets, where the first group has a first group size and the second group has a second group size. Upon finding the hash correlation, a new second group size coprime to the first group size may be selected, and the second group of egress ports may be mapped to a mapped group having the new second group size. The second switch may be configured to route packets according to the mapped group.
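The coprime-resizing idea above can be sketched as follows. When both switches pick ports via hash modulo group size, equal (or correlated) sizes make their choices correlate; resizing the second group to a size coprime to the first decorrelates the two modulo operations. The next-coprime search and the replication-based port mapping are assumptions made for this sketch.

```python
# Hedged sketch: select a new second-group size coprime to the first
# group size, and map the second group of egress ports onto it.
import math

def next_coprime(first_size: int, start: int) -> int:
    """Smallest size >= start that is coprime to first_size."""
    size = start
    while math.gcd(first_size, size) != 1:
        size += 1
    return size

def map_group(ports, first_size):
    """Replicate the port list round-robin up to the coprime size."""
    new_size = next_coprime(first_size, len(ports))
    return [ports[i % len(ports)] for i in range(new_size)]

mapped = map_group(ports=[10, 11, 12, 13], first_size=4)
# new size is 5 (gcd(4, 5) == 1); mapped list is [10, 11, 12, 13, 10]
```

Replication makes the per-port load slightly uneven (here, port 10 appears twice in five slots), which is the usual trade-off for breaking the correlation.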
System and method for providing scalable flow monitoring in a data center fabric
Disclosed is a method that includes calculating, at a collector receiving a data flow and via a hashing algorithm, all possible hashes associated with at least one virtual attribute associated with the data flow to yield resultant hash values. Based on the resultant hash values, the method includes computing a multicast address group and multicasting the data flow to n leafs based on the multicast address group. At respective other collectors, the method includes filtering received sub-flows of the data flow based on the resultant hash values, wherein if a respective hash value is owned by a collector, the respective collector accepts and saves the sub-flow in a local switch collector database. A scalable, distributed netflow is possible with the ability to respond to queries for fabric-level netflow statistics even on virtual constructs.
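The hash-ownership filtering can be sketched as below. The SHA-256 hash, the bucket-to-multicast-group rule, and the one-bucket-per-collector ownership scheme are all illustrative assumptions; the abstract does not specify these details.

```python
# Hedged sketch: a flow's virtual attribute is hashed into a bucket, the
# bucket selects a multicast group, and only the collector that owns the
# bucket accepts and stores the sub-flow.
import hashlib

N_COLLECTORS = 4  # assumed number of collectors/leafs
MCAST_GROUPS = ["239.10.0.%d" % i for i in range(N_COLLECTORS)]

def hash_bucket(virtual_attribute: str) -> int:
    digest = hashlib.sha256(virtual_attribute.encode()).digest()
    return digest[0] % N_COLLECTORS

def multicast_group(virtual_attribute: str) -> str:
    """Compute the multicast group from the resultant hash value."""
    return MCAST_GROUPS[hash_bucket(virtual_attribute)]

def owns(collector_id: int, virtual_attribute: str) -> bool:
    """A collector saves a sub-flow only if it owns the hash bucket."""
    return hash_bucket(virtual_attribute) == collector_id
```

Because exactly one collector owns each bucket, every sub-flow is stored by exactly one collector, which is what makes fabric-level statistics queries answerable without duplication.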
Systems and methods for extending internal endpoints of a network device
An integrated circuit (IC) device includes a network device. The network device includes first and second network ports each configured to connect to a network, and an internal endpoint port configured to connect to a first endpoint having a first processing unit and a second endpoint having a second processing unit. A lookup circuit is configured to provide a first forwarding decision for a first frame to be forwarded to the first endpoint. An endpoint extension circuit is configured to determine a first memory channel based on the first forwarding decision for forwarding the first frame, and forward the first frame to the first endpoint using the determined memory channel.
SCALABILITY, FAULT TOLERANCE AND FAULT MANAGEMENT FOR TWAMP WITH A LARGE NUMBER OF TEST SESSIONS
The disclosed methods and systems of using TWAMP measurement architecture for testing a large network include a control-client running on a first network host initializing memory for test session parameters used to originate a test, parsing a configuration file to populate the memory with IP addresses, ports and QoS parameters for control-servers and session-reflectors; and originating test sessions using the test session parameters. The method includes extending to thousands of control-clients, each originating respective test sessions with control-servers in a mesh network using respective test session parameters; and while the test is running, optionally sending an updated configuration file to at least one control-client that introduces a new control-server or replaces a control-server; and the control-client parsing the updated configuration file and updating memory to include the new control-server IP address, port numbers and QoS parameters; and expanding the test and monitoring the running test sessions for results.
Flexible Steering
In one embodiment, a network device includes an interface configured to receive a data packet including a header section, at least one parser to parse the data of the header section yielding a first header portion and a second header portion, a packet processing engine to fetch a first match-and-action table, match a first index having a corresponding first steering action entry in the first match-and-action table responsively to the first header portion, compute a cumulative lookup value based on the first header portion and the second header portion responsively to the first steering action entry, fetch a second match-and-action table responsively to the first steering action entry, match a second index having a corresponding second steering action entry in the second match-and-action table responsively to the cumulative lookup value, and steer the packet responsively to the second steering action entry.
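The two-stage steering pipeline can be sketched as follows. The first table is matched on the first header portion, its action entry names the next table, and the second table is matched on a cumulative value over both header portions. The dict-based tables and the XOR combiner are assumptions for illustration; the abstract does not define how the cumulative lookup value is computed.

```python
# Hedged sketch of chained match-and-action steering.

def steer(header1: int, header2: int, tables):
    entry1 = tables["t1"][header1]           # first match-and-action
    cumulative = header1 ^ header2           # assumed combine function
    next_table = entry1["next_table"]        # named by first action entry
    entry2 = tables[next_table][cumulative]  # second match-and-action
    return entry2["steer_to"]

tables = {
    "t1": {0x0800: {"next_table": "t2"}},
    "t2": {0x0800 ^ 0x5: {"steer_to": "queue-3"}},
}
print(steer(0x0800, 0x5, tables))  # → queue-3
```

The point of the cumulative value is that the second lookup can depend on both header portions even though each table is matched only once.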
Delivering content over a network
A method of delivering content in one or more packets over a network is described. A content request packet comprising a request for content based on a first IPv6 address is received, the first IPv6 address identifying the content. The first IPv6 address is mapped to a second IPv6 address, the second IPv6 address being associated with the content at a physical location. The content requested in the content request packet is then received from the physical location associated with the second IPv6 address for delivery to a user. A further method includes routing a packet for requesting the content from a client to a content server storing an instance of the content, based on an IPv6 address of content being requested by the client. A communication session is then set up between the client and the content server; and the requested content is transmitted from the content server.
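The content-to-location mapping step can be sketched with a simple lookup table. The prefixes and table contents are made-up examples; the abstract does not specify the mapping mechanism.

```python
# Hedged sketch: the first IPv6 address names the content; a mapping
# table rewrites it to a second IPv6 address tied to the physical
# location holding a copy of that content.
import ipaddress

# content-identifying address -> location address (assumed example data)
CONTENT_MAP = {
    ipaddress.IPv6Address("2001:db8:c0de::1"):
        ipaddress.IPv6Address("2001:db8:cafe::10"),
}

def resolve(content_addr: str) -> str:
    """Map a content-identifying IPv6 address to a location address."""
    return str(CONTENT_MAP[ipaddress.IPv6Address(content_addr)])

print(resolve("2001:db8:c0de::1"))  # → 2001:db8:cafe::10
```

Keeping the content address separate from the location address is what lets the same content identifier resolve to different replicas over time without the client noticing.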
Method and system for multipoint access within a mobile network
Aspects of the subject disclosure may include, for example, identifying a packet data protocol session that supports a first data exchange between a mobile application of a first mobile device and a first recipient device, wherein the first exchange of data comprises a directing of the first exchange of data through a network device. A second recipient device is determined, and a second data exchange is facilitated between the mobile application and the second recipient device by way of the packet data protocol session, wherein the second exchange of data also comprises a directing of the second exchange of data through the network device without modifying the first data exchange. Other embodiments are disclosed.
System and method for monitoring logical network traffic flows using a ternary content addressable memory in a high performance computing environment
System and method for monitoring logical network traffic flows using a ternary content addressable memory (TCAM). An exemplary embodiment can provide a network port that is associated with a TCAM. The TCAM can be configured with a plurality of entries, wherein each TCAM entry contains a value. Further, each TCAM entry can be associated with at least one network counter. A predefined set of values can be retrieved from at least one header field of a data packet processed by the network port. Each value in the predefined set of values can be aggregated into a search value, and the search value can be compared to the value contained in each TCAM entry. When a match is found between the search value and the value contained in a TCAM entry, each network counter associated with the matching TCAM entry can be incremented.
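The aggregate-search-and-count behavior can be modeled in software as below. The two 16-bit fields, the value/mask representation of ternary "don't care" bits, and the example entries are illustrative assumptions.

```python
# Hedged software model of the TCAM counting scheme: header-field values
# are packed into one search value, each entry holds a value/mask pair,
# and every matching entry's counter is incremented.

class TcamEntry:
    def __init__(self, value, mask):
        self.value, self.mask = value, mask
        self.counter = 0

    def matches(self, search):
        # mask bits set to 0 act as ternary "don't care" positions
        return (search & self.mask) == (self.value & self.mask)

def pack(src_port: int, vlan: int) -> int:
    """Aggregate header-field values into one search value (16 bits each)."""
    return (src_port << 16) | vlan

# one entry: match any source port, VLAN 100
entries = [TcamEntry(value=pack(0, 100), mask=0x0000FFFF)]

def process(src_port, vlan):
    search = pack(src_port, vlan)
    for e in entries:
        if e.matches(search):
            e.counter += 1

process(src_port=7, vlan=100)
process(src_port=9, vlan=100)
process(src_port=9, vlan=200)
# entries[0].counter is now 2 (the VLAN 200 packet does not match)
```

A real TCAM evaluates all entries in parallel; the loop here is only a sequential stand-in for that behavior.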