H04L12/727

CONTEXT-AWARE PATTERN MATCHING ACCELERATOR
20180013795 · 2018-01-11

Methods and systems for improving the accuracy, speed, and efficiency of context-aware pattern matching are provided. According to one embodiment, a packet stream is received by a first stage of a hardware accelerator of a network device. A pre-matching process is performed by the first stage to identify a candidate packet that matches a string or overflow pattern associated with access control (e.g., IPS or ADC) rules. A candidate rule is identified based on a correlation of results of the pre-matching process. The candidate packet is tokenized to produce matching tokens and corresponding locations. A full-match process is then performed on the candidate packet by a second stage of the hardware accelerator to determine whether it satisfies the candidate rule by performing one or more of (i) context-aware pattern matching, (ii) context-aware string matching, and (iii) regular expression matching based on contextual information, the matching tokens, and the corresponding locations.
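The two-stage flow above can be sketched in miniature: a cheap string pre-filter nominates candidate rules, tokenization records where the hit occurred, and a second stage verifies the full pattern. The rule table, field names, and substring pre-filter are assumptions for illustration; real accelerators run the pre-match stage in parallel hardware (e.g., Aho-Corasick engines).

```python
import re

# Illustrative two-stage matching pipeline (rule table and fields assumed).
RULES = [
    {"name": "sql-injection", "prefilter": "select", "regex": r"select\s+\S+\s+from"},
    {"name": "path-traversal", "prefilter": "../", "regex": r"(\.\./){2,}"},
]

def pre_match(packet: str):
    """Stage 1: cheap string pre-filter yielding candidate rules."""
    return [rule for rule in RULES if rule["prefilter"] in packet]

def tokenize(packet: str, rule):
    """Produce matching tokens and their byte offsets for stage 2."""
    return [(rule["prefilter"], packet.find(rule["prefilter"]))]

def full_match(packet: str, rule, tokens) -> bool:
    """Stage 2: verify the full pattern starting at the pre-matched location."""
    _, location = tokens[0]
    return re.search(rule["regex"], packet[location:]) is not None

def classify(packet: str) -> list:
    return [rule["name"] for rule in pre_match(packet)
            if full_match(packet, rule, tokenize(packet, rule))]
```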

Dynamic path switchover decision override based on flow characteristics

In one embodiment, a device in a network receives a switchover policy for a particular type of traffic in the network. The device determines a predicted effect of directing a traffic flow of the particular type of traffic from a first path in the network to a second path in the network. The device determines whether the predicted effect of directing the traffic flow to the second path would violate the switchover policy. The device causes the traffic flow to be routed via the second path in the network, based on a determination that the predicted effect of directing the traffic flow to the second path would not violate the switchover policy for the particular type of traffic.
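The policy check described above can be illustrated as: predict the effect of moving the flow to the second path, then switch only if the prediction stays within the policy. The policy fields and the naive load-inflation prediction model are assumptions for this sketch.

```python
from dataclasses import dataclass

@dataclass
class SwitchoverPolicy:
    max_latency_ms: float   # ceiling for this traffic type
    max_loss_pct: float

@dataclass
class PathState:
    latency_ms: float
    loss_pct: float

def predict_effect(flow_mbps: float, path: PathState) -> PathState:
    """Toy prediction model: added load inflates path latency proportionally."""
    return PathState(path.latency_ms * (1 + flow_mbps / 100.0), path.loss_pct)

def may_switch(policy: SwitchoverPolicy, flow_mbps: float,
               second_path: PathState) -> bool:
    """Allow the switchover only if the predicted effect honors the policy."""
    predicted = predict_effect(flow_mbps, second_path)
    return (predicted.latency_ms <= policy.max_latency_ms
            and predicted.loss_pct <= policy.max_loss_pct)
```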

System and method for message routing in a network

A transmitting end-point computes a current transmission rate for each respective outbound half-route of outbound half-routes of a route set between transmitting and receiving end-points. The transmitting end-point receives, from the receiving end-point via a respective inbound half-route of the route set, a transmission rate limit for each respective outbound half-route, the transmission rate limit computed by the receiving end-point from routing headers of messages received by the receiving end-point on the respective outbound half-route, wherein the transmission rate limit for each respective outbound half-route places an upper bound on the current transmission rate for transmissions issued on the respective outbound half-route.
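The sender-side bookkeeping implied above can be sketched as follows: each outbound half-route tracks its current transmission rate and the upper bound the receiver feeds back on the paired inbound half-route. Class and method names are assumptions; in the described system the limits are derived from routing headers of received messages.

```python
class OutboundHalfRoute:
    def __init__(self, route_id: str):
        self.route_id = route_id
        self.current_rate = 0.0          # messages/second currently issued
        self.rate_limit = float("inf")   # receiver-computed upper bound

    def apply_feedback(self, limit: float) -> None:
        """Install a limit received on the paired inbound half-route."""
        self.rate_limit = limit
        self.current_rate = min(self.current_rate, limit)

    def request_rate(self, desired: float) -> float:
        """The sender never issues above the advertised limit."""
        self.current_rate = min(desired, self.rate_limit)
        return self.current_rate
```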

A NETWORK ELEMENT FOR A DATA TRANSFER NETWORK
20170339050 · 2017-11-23

A network element includes a processing system for supporting inter-area data transfer paths which are Border Gateway Protocol load sharing data paths. The processing system maintains usage attributes expressing whether a given inter-area data transfer path is to be used for servicing a given traffic category. The processing system recognizes the traffic category of a data frame to be forwarded on the basis of, for example, the Quality-of-Service class of the data frame. Thereafter, the processing system selects one of the inter-area data transfer paths at least partly on the basis of the usage attributes and the recognized traffic category, and forwards the data frame to the selected inter-area data transfer path. Therefore, Border Gateway Protocol load sharing can be utilized for providing, for example, Quality-of-Service-class-differentiated traffic engineering.
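A minimal sketch of the selection step: filter the load-sharing paths by their usage attributes for the recognized category, then load-share among the survivors. The DSCP-to-category mapping, path table, and hash-based spreading are assumptions for illustration.

```python
# Per-path usage attributes keyed by traffic category (assumed values).
PATHS = [
    {"id": "path-1", "allowed": {"voice", "video"}},
    {"id": "path-2", "allowed": {"best-effort"}},
    {"id": "path-3", "allowed": {"voice", "best-effort"}},
]

def category_of(dscp: int) -> str:
    """Recognize a traffic category from the frame's QoS marking."""
    if dscp >= 46:
        return "voice"
    if dscp >= 34:
        return "video"
    return "best-effort"

def select_path(dscp: int, flow_hash: int) -> str:
    """Filter by usage attributes, then load-share among eligible paths."""
    category = category_of(dscp)
    eligible = [p for p in PATHS if category in p["allowed"]]
    return eligible[flow_hash % len(eligible)]["id"]
```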

Optimization framework for multi-tenant data centers

Systems and methods for decoupled searching and optimization for one or more data centers, including determining a network topology for one or more networks of interconnected computer systems embedded in the one or more data centers, searching for routing candidates based on the determined network topology, and updating and applying one or more objective functions to the routing candidates to determine an optimal routing candidate that satisfies embedding goals based on tenant requests, and to embed the optimal routing candidate in the one or more data centers.
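The decoupling can be shown in a toy form: a search stage enumerates routing candidates over the topology, and a separate optimization stage ranks them with pluggable objective functions. The graph and the hop-count objective below are assumptions.

```python
def simple_paths(graph, src, dst, prefix=None):
    """Search stage: enumerate loop-free routing candidates."""
    prefix = (prefix or []) + [src]
    if src == dst:
        yield prefix
        return
    for nxt in graph.get(src, []):
        if nxt not in prefix:
            yield from simple_paths(graph, nxt, dst, prefix)

def best_candidate(graph, src, dst, objectives):
    """Optimization stage: lowest combined objective score wins."""
    return min(simple_paths(graph, src, dst),
               key=lambda path: sum(obj(path) for obj in objectives))

# Toy data-center topology and a hop-count objective (both assumed).
topology = {"A": ["B", "C"], "B": ["D"], "C": ["B", "D"]}
hop_count = lambda path: len(path) - 1
```

Because the candidate generator and the objective functions are separate, objectives can be updated and reapplied without repeating the search.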

Method and apparatus for reducing response time in information-centric networks

A method for reducing response times in an information-centric network includes receiving an indication from an ingress node of a content object entering a network, the content object associated with a new delivery flow through the network. An egress node in the network for the content object and a size of the content object are identified. A backlog and bandwidth for the new delivery flow are determined based in part on the size of the content object. Backlogs and bandwidths for existing delivery flows in the network are determined. A set of candidate paths in the network for the new delivery flow from the ingress node to the egress node is determined. For each candidate path, a total response time for completion of all delivery flows is estimated based on the backlogs and bandwidths. The candidate path having the lowest total response time is selected for the new delivery flow.
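In sketch form, the selection rule reduces to: treat the content object's size as the new flow's backlog, estimate the total drain time on each candidate path, and take the minimum. Collapsing each path's existing flows to a single backlog/bandwidth pair is a simplifying assumption.

```python
def select_path(object_size: float, candidates: dict) -> str:
    """candidates: path -> (existing_backlog_bytes, bandwidth_bytes_per_s)."""
    def total_response_time(path: str) -> float:
        backlog, bandwidth = candidates[path]
        # time to drain the existing backlog plus the new object on this path
        return (backlog + object_size) / bandwidth
    return min(candidates, key=total_response_time)
```

Note that a path with more queued traffic can still win if its bandwidth is proportionally higher, which is the point of estimating completion time rather than comparing backlogs alone.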

Selecting among multiple concurrently active paths through a network

Methods and systems for selecting among multiple concurrently active paths through a network are provided. According to one embodiment, a method is performed by a network interface of a source network device within a loop-free, reverse-path-learning network. The network is divided into multiple virtual local area networks (VLANs). Network traffic destined for a destination network device, and specifying an address for the destination or including information from which the address can be derived, is received from the source. A set of VLANs that can be used to transport the traffic from the source to the destination is determined. Each VLAN in the set is associated with a different path through the network from the source to the destination. A particular VLAN from the set of VLANs is selected, thereby effectively selecting a particular path from among multiple selectable paths between the source and the destination.
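Since each eligible VLAN corresponds to a distinct path, choosing a VLAN chooses a path. A minimal sketch follows; hashing the destination address to spread flows deterministically is an assumed policy, as the abstract only requires that one eligible VLAN be selected.

```python
import zlib

def select_vlan(dest_addr: str, eligible_vlans: list) -> int:
    """Deterministically pick one VLAN (hence one path) per destination."""
    digest = zlib.crc32(dest_addr.encode())
    return eligible_vlans[digest % len(eligible_vlans)]
```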

Mobile relay network intelligent routing
09794164 · 2017-10-17

A method for determining a route for communication across a network in real-time, said method including: collecting a set of network delay information at a caller device; storing the set of network delay information at the caller device; based on the stored sets of network delay information at the caller device and the callee device, determining, by the caller device in cooperation with the callee device, a set of relay server candidates to be used to relay data packets between the caller device and the callee device; and based on calculated round trip times for probing data packets sent out and returned, selecting, by the caller device in cooperation with the callee device, a shortest routing path as an active routing path for use in transporting a first data packet between the caller device and the callee device.
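The final step above can be sketched simply: probe each candidate relay, record the round-trip times, and keep the fastest as the active path. The `probe_rtt` callable is a stand-in for real probe packets sent out and returned.

```python
from typing import Callable, Iterable

def select_active_relay(candidates: Iterable[str],
                        probe_rtt: Callable[[str], float]) -> str:
    """Return the relay candidate with the lowest measured round-trip time."""
    rtts = {relay: probe_rtt(relay) for relay in candidates}
    return min(rtts, key=rtts.get)
```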

Link delay based routing apparatus for a network-on-chip

A router of a network-on-chip receives delay information associated with a plurality of links of the network-on-chip. The router determines at least one link of a data path based on the delay information.
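The abstract reduces to a delay-aware link choice, sketched below; the delay table and the set of feasible output links (those leading toward the destination) are assumptions.

```python
def choose_output_link(delay_info: dict, feasible_links: list) -> str:
    """Pick the least-delayed link among those leading toward the target."""
    return min(feasible_links, key=lambda link: delay_info[link])
```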

Maximum transmission unit size reporting and discovery by a user equipment

A method of controlling Maximum Transmission Unit (MTU) reporting and discovery using AT commands is proposed. In communications networks, the MTU of a communication protocol at a given layer is the size (in bytes or octets) of the largest protocol data unit that the layer can pass onwards. In an IP network, IP packets may be fragmented if the supported MTU size is smaller than the packet length. In accordance with one novel aspect, the packet data protocol (PDP) context of a packet data network (PDN) connection comprises MTU information. By introducing MTU information into PDP contexts, terminal equipment (TE) can use AT commands to query MTU parameters from the network and thereby avoid fragmentation. The TE can also use AT commands to set MTU parameters and thereby control MTU discovery.
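A hedged sketch of the TE side: parse an MTU value out of a PDP-context query response, then size payloads to stay under it. The response below follows the general `+CGCONTRDP:` style of 3GPP AT commands, but the exact field position of the MTU is an assumption here.

```python
def parse_mtu(at_response: str):
    """Extract an MTU from a '+CGCONTRDP: ...' style response line."""
    for line in at_response.splitlines():
        if line.startswith("+CGCONTRDP:"):
            fields = [f.strip() for f in line.split(":", 1)[1].split(",")]
            for field in reversed(fields):  # assume MTU is the last numeric field
                if field.isdigit():
                    return int(field)
    return None  # network did not report an MTU

def max_udp_payload(mtu: int, ip_header: int = 20, udp_header: int = 8) -> int:
    """Largest UDP payload that fits one unfragmented IPv4 packet."""
    return mtu - ip_header - udp_header
```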