Patent classifications
H04L12/743
SYMMETRIC BI-DIRECTIONAL POLICY BASED REDIRECT OF TRAFFIC FLOWS
Disclosed are systems, methods, and computer-readable storage media for guaranteeing symmetric bi-directional policy-based redirect of traffic flows. A first switch connected to a first endpoint can receive a first data packet transmitted by the first endpoint to a second endpoint connected to a second switch. The first switch can enforce an ingress data policy on the first data packet by applying a hashing algorithm to a Source Internet Protocol (SIP) value and a Destination Internet Protocol (DIP) value of the first data packet, resulting in a hash value of the first data packet. The first switch can then route the first data packet to a first service node based on the hash value of the first data packet.
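One way to achieve the symmetry the abstract describes is a hash that is insensitive to direction, so forward and reverse packets of the same flow reach the same service node. A minimal Python sketch, assuming an order-insensitive hash over SIP and DIP (the abstract does not name the actual hashing algorithm; `select_service_node` is a hypothetical helper):

```python
import hashlib

def symmetric_hash(sip: str, dip: str) -> int:
    # Sorting the pair makes (sip, dip) and (dip, sip) hash identically,
    # so both directions of a flow produce the same value.
    key = "|".join(sorted((sip, dip))).encode()
    return int.from_bytes(hashlib.sha256(key).digest()[:4], "big")

def select_service_node(sip: str, dip: str, service_nodes: list) -> str:
    # Route to a service node chosen by the direction-insensitive hash.
    return service_nodes[symmetric_hash(sip, dip) % len(service_nodes)]
```

Because the key is sorted before hashing, the reply traffic (with SIP and DIP swapped) is redirected to the same node as the original traffic.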
Fabric-based anonymity management, systems and methods
Using a hidden service address table stored in a memory, a virtual circuit related to a hidden service is mapped to a corresponding port-level channel based on the hidden service's address. Data associated with the hidden service is routed between the virtual circuit and the port-level channel. This enables binding of high-level anonymity protocols to low-level communication services of a network fabric and ensures that other nodes in the network fabric can leverage fabric-hosted hidden services without requiring updates to an existing anonymity protocol.
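The core mapping described here is an address table keyed by hidden service address, resolving to a port-level channel. A minimal sketch, with all class and method names hypothetical (the abstract does not describe a concrete data structure):

```python
class HiddenServiceFabric:
    """Sketch of an address table binding hidden service addresses
    to port-level channels of the network fabric."""

    def __init__(self):
        self.address_table = {}  # hidden service address -> channel id

    def bind(self, address: str, channel: int) -> None:
        # Record the mapping for later virtual-circuit routing.
        self.address_table[address] = channel

    def route(self, address: str, data: bytes):
        # Route data between the virtual circuit and its channel.
        channel = self.address_table.get(address)
        if channel is None:
            raise KeyError(f"no channel bound for {address}")
        return (channel, data)
```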
Generating a hash table in accordance with a prefix length
Examples herein disclose a generation of a hash table. The examples identify a prefix length from a routing table of various prefix lengths and corresponding distribution of entries. The identified prefix length corresponds to a larger distribution of entries in the routing table. The examples generate the hash table in accordance with the identified prefix length.
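The selection step, choosing the prefix length that accounts for the largest share of routing-table entries, can be sketched as follows. Function names and the table representation are assumptions for illustration:

```python
from collections import Counter

def pick_prefix_length(routing_table):
    # routing_table: list of (prefix, prefix_length) entries.
    # Choose the length with the largest distribution of entries.
    counts = Counter(length for _, length in routing_table)
    return counts.most_common(1)[0][0]

def build_hash_table(routing_table):
    # Generate a hash table holding entries of the chosen length;
    # other lengths would be handled separately in a full design.
    chosen = pick_prefix_length(routing_table)
    return {prefix: length
            for prefix, length in routing_table
            if length == chosen}
```

Keying the hash table to the dominant prefix length means the majority of lookups resolve with a single exact-match probe.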
FLOWLET SCHEDULER FOR MULTICORE NETWORK PROCESSORS
Systems and methods of using a packet order work (POW) scheduler to assign packets to a set of scheduler queues that supply packets to parallel processing units. A processing unit and its associated scheduler queue are dedicated to a specific flow until a queue-reallocation event, which may correspond to the associated scheduler queue having been idle for at least a certain interval, as indicated by its age counter, or being the least recently used when a new flow arrives. In that case, the scheduler queue and the associated processing unit may be reallocated to the new flow and disassociated from the previous flow. As a result, dynamic packet workload balancing can advantageously be achieved across the multiple processing paths.
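The reallocation policy above can be sketched in a few lines: keep a flow pinned to its queue, and when a new flow arrives, prefer a queue idle past the threshold, falling back to the least recently used. This is an illustration only; the abstract's age counters and hardware POW mechanics are simplified into timestamps:

```python
import time

class FlowletScheduler:
    def __init__(self, num_queues: int, idle_interval: float):
        self.idle_interval = idle_interval
        self.queues = [{"flow": None, "last_used": 0.0, "packets": []}
                       for _ in range(num_queues)]

    def assign(self, flow_id, packet, now=None):
        now = time.monotonic() if now is None else now
        # A queue stays dedicated to its flow between reallocation events.
        for q in self.queues:
            if q["flow"] == flow_id:
                q["packets"].append(packet)
                q["last_used"] = now
                return q
        # Queue-reallocation event: prefer a queue idle for at least
        # idle_interval; otherwise take the least recently used.
        victim = min(self.queues, key=lambda q: q["last_used"])
        chosen = next((q for q in self.queues
                       if now - q["last_used"] >= self.idle_interval),
                      victim)
        # Disassociate from the previous flow (sketch: prior contents
        # would be drained first in a real scheduler).
        chosen["flow"] = flow_id
        chosen["packets"] = [packet]
        chosen["last_used"] = now
        return chosen
```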
Control and user plane architecture
There is provided a method including acquiring, by a controller of a control interface part, a control plane message requesting an update of a processing thread-specific hash table of a user plane processing thread; providing, by the controller, the message to a queue of the user plane processing thread; obtaining, by the user plane processing thread, the control plane message from the queue; updating, by the user plane processing thread based on the obtained control plane message, the processing thread-specific hash table; indicating, by the user plane processing thread, to the controller that the requested update is processed in response to the updating; obtaining, by the user plane processing thread, at least one user plane message from the queue; and processing, by the user plane processing thread, the at least one user plane message based at least partly on the processing thread-specific hash table.
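The ordering guarantee in this method comes from pushing the control plane message through the same queue the user plane thread drains, so the hash-table update lands before subsequent user plane messages are processed. A hedged Python sketch (message formats, the `None` shutdown sentinel, and the acknowledgment event are assumptions):

```python
import queue
import threading

def user_plane_thread(msg_queue, done_event, hash_table, results):
    """Drains one queue carrying both control and user plane messages."""
    while True:
        msg = msg_queue.get()
        if msg is None:  # shutdown sentinel (illustration only)
            break
        kind, payload = msg
        if kind == "control":
            # Update the processing-thread-specific hash table, then
            # indicate to the controller that the update is processed.
            hash_table.update(payload)
            done_event.set()
        else:
            # Process a user plane message based on the hash table.
            results.append(hash_table.get(payload))
```

Because both message kinds share one FIFO queue, a user plane message enqueued after a control update is guaranteed to see the updated table.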
Protocol-independent receive-side scaling
A system and method for protocol independent receive side scaling (RSS) includes storing a plurality of RSS hash M-tuple definitions, each definition corresponding to one of a set of possible protocol header combinations for routing an incoming packet, the set of possible protocol header combinations being modifiable to include later-developed protocols. Based on initial bytes of the incoming packet, a pattern of protocol headers is detected, and used to select one of the plurality of RSS hash M-tuple definitions. The selected RSS hash M-tuple definition is applied as a protocol-independent arbitrary set of bits to the headers of the incoming packet to form a RSS hash M-tuple vector, which is used to compute a RSS hash. Based on the RSS hash, a particular queue is selected from a set of destination queues identified for the packet, and the packet is delivered to the selected particular queue.
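The pipeline, detecting a header pattern, selecting a stored M-tuple definition of byte offsets, extracting those bytes, and hashing them to pick a queue, can be sketched as below. The definitions table and offsets are assumptions for an option-free Ethernet/IPv4 frame, and SHA-256 stands in for whatever RSS hash the hardware uses (commonly a Toeplitz hash):

```python
import hashlib

# Hypothetical M-tuple definitions: header pattern -> (offset, length)
# pairs to hash over. For Ethernet (14 B) + IPv4 without options:
# source IP at 26, dest IP at 30, L4 ports at 34 and 36.
RSS_DEFINITIONS = {
    ("eth", "ipv4", "tcp"): [(26, 4), (30, 4), (34, 2), (36, 2)],
    ("eth", "ipv4", "udp"): [(26, 4), (30, 4), (34, 2), (36, 2)],
}

def detect_pattern(packet: bytes):
    # Simplified pattern detection from the packet's initial bytes.
    if packet[12:14] == b"\x08\x00":          # EtherType IPv4
        proto = packet[23]                     # IPv4 protocol field
        return ("eth", "ipv4", "tcp" if proto == 6 else "udp")
    return None

def rss_queue(packet: bytes, num_queues: int) -> int:
    # Apply the selected definition as an arbitrary set of bytes,
    # form the M-tuple vector, hash it, and pick a destination queue.
    offsets = RSS_DEFINITIONS[detect_pattern(packet)]
    vector = b"".join(packet[o:o + n] for o, n in offsets)
    digest = hashlib.sha256(vector).digest()
    return int.from_bytes(digest[:4], "big") % num_queues
```

Because the definitions are plain offset lists rather than protocol-specific parsers, supporting a later-developed protocol only requires adding a new table entry.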
Personalized content distribution
Systems and methods for content provisioning are disclosed herein. The system can include memory having a content database, a task database, and a user profile database. The system can include a user device having a first network interface and a first I/O subsystem. The system can include a server that can: receive a user identifier from the user device; retrieve user information from the user profile database, which user information identifies one or several attributes of the user; retrieve user task data from the task database, which user task data identifies a plurality of tasks for completion by the user; automatically generate prioritization data for the plurality of tasks identified by the user task data; select a task based on the prioritization data; and send content relating to the selected task to the user device.
Distribution of network traffic to software defined network based probes
In one example, a processor may receive network traffic from a demultiplexer via a first network interface card and place portions of the network traffic into a plurality of hash buckets. The processor may further process a first portion of the portions of the network traffic in at least a first hash bucket of the plurality of hash buckets and forward a second portion of the portions of the network traffic in at least a second hash bucket of the plurality of hash buckets to a switch via a second network interface card. In one example, the switch distributes the second portion of the network traffic to one of a plurality of overflow probes. In one example, the plurality of overflow probes comprises a network function virtualization infrastructure for processing the second portion of the network traffic.
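The split between locally processed buckets and buckets forwarded to overflow probes can be sketched as follows; the flow key, bucket count, and capacity threshold are illustrative assumptions:

```python
def bucketize(packets, num_buckets):
    # Place portions of the traffic into hash buckets by flow key,
    # so all packets of a flow land in the same bucket.
    buckets = [[] for _ in range(num_buckets)]
    for pkt in packets:
        flow_key = pkt["src"] + pkt["dst"]
        buckets[hash(flow_key) % num_buckets].append(pkt)
    return buckets

def split_traffic(buckets, local_capacity):
    # Process the first `local_capacity` buckets on this probe;
    # forward the remainder to the switch for overflow probes.
    local = [p for b in buckets[:local_capacity] for p in b]
    overflow = [p for b in buckets[local_capacity:] for p in b]
    return local, overflow
```

Partitioning at bucket granularity keeps each flow intact on a single probe, which matters for stateful analysis.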
ELASTIC MODIFICATION OF APPLICATION INSTANCES IN A NETWORK VISIBILITY INFRASTRUCTURE
Introduced here are network visibility platforms whose total processing capacity can be dynamically varied in response to determining how much network traffic is currently under consideration. A visibility platform can include one or more network appliances, each of which includes at least one instance of an application configured to process data packets. Rather than forward all traffic to a single application instance for processing, the traffic can instead be distributed among a pool of application instances to collectively ensure that no data packets are dropped due to over-congestion. Moreover, the visibility platform can be designed such that application instances are elastically added or removed, as necessary, based on the volume of traffic currently under consideration.
MAC learning in a multiple virtual switch environment
Examples of techniques for media access control (MAC) address learning are disclosed. In one example implementation according to aspects of the present disclosure, a computer-implemented method may include: receiving, by a processing device, a packet; determining, by the processing device, a packet type of the packet; and responsive to determining that the packet is a MAC learning packet type, updating, by the processing device, a MAC address table based on MAC address information associated with the packet.
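The learning step, checking the packet type and updating the MAC address table from the packet's address information, reduces to a small sketch. The type code and field names are hypothetical:

```python
MAC_LEARNING = 0x01  # hypothetical packet-type code

def handle_packet(mac_table, packet):
    # Determine the packet type; only MAC learning packets
    # update the table.
    if packet["type"] == MAC_LEARNING:
        # Bind the source MAC address to the ingress port it was seen on.
        mac_table[packet["src_mac"]] = packet["ingress_port"]
    return mac_table
```

Subsequent forwarding decisions can then look up a destination MAC in `mac_table` to find the egress port without flooding.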