H04L12/775

Best Path Computation Offload In A Network Computing Environment
20210135980 · 2021-05-06

Systems, methods, and devices for offloading best path computations in a networked computing environment. A method includes storing in memory, by a best path controller, a listing of a plurality of paths learnt by a device, wherein each of the plurality of paths is a route for transmitting data from the device to a destination device. The method includes receiving, by the best path controller, a message from the device. The method includes processing, by the best path controller, a best path computation to identify one or more best paths based on the message such that processing of the best path computation is offloaded from the device to the best path controller. The method includes sending the one or more best paths to the device.
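
The offload pattern in this abstract can be sketched in a few lines. Everything here is illustrative (class names, the message shape, and the "lower metric wins" rule are assumptions, not the patent's actual selection algorithm): the controller stores the device's learned paths per destination and answers best-path queries so the device never runs the computation itself.

```python
# Hypothetical sketch of best-path offload: the controller keeps the
# device's learned routes and answers "best path" queries on its behalf.
from dataclasses import dataclass, field

@dataclass
class Path:
    next_hop: str
    metric: int          # lower is preferred in this toy model

@dataclass
class BestPathController:
    # listing of paths learned by the device, keyed by destination prefix
    routes: dict = field(default_factory=dict)

    def store(self, destination: str, path: Path) -> None:
        self.routes.setdefault(destination, []).append(path)

    def handle_message(self, destination: str) -> Path:
        # the best-path computation runs here, offloaded from the device
        return min(self.routes[destination], key=lambda p: p.metric)

ctrl = BestPathController()
ctrl.store("10.0.0.0/24", Path("r1", 20))
ctrl.store("10.0.0.0/24", Path("r2", 10))
best = ctrl.handle_message("10.0.0.0/24")   # the "message" from the device
```

The device only stores and sends; the controller replies with the winner.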

Partitioning of switches and fabrics into logical switches and fabrics

A Layer 2 network switch is partitionable into a plurality of virtual fabrics. The single-chassis switch is partitionable into a plurality of logical switches, each associated with one of the virtual fabrics. The logical switches behave as complete and self-contained switches. A logical switch fabric can span multiple single-chassis switches. Logical switches are connected by inter-switch links that can be either dedicated physical links or logical links. An extended inter-switch link can be used to transport traffic for one or more logical inter-switch links. Physical ports of the chassis are assigned to logical switches and are managed by the owning logical switch. Legacy switches that are not partitionable into logical switches can serve as transit switches between two logical switches.
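
The port-assignment idea can be illustrated with a toy model (all names are invented for the sketch): each physical port of the chassis is owned by at most one logical switch, and a logical switch sees only its own ports.

```python
# Toy model of partitioning: physical ports are assigned to logical
# switches, and each logical switch manages only the ports it owns.
class Chassis:
    def __init__(self, num_ports: int):
        # None means the port is not yet assigned to any logical switch
        self.port_owner = {p: None for p in range(num_ports)}

    def assign(self, port: int, logical_switch: str) -> None:
        self.port_owner[port] = logical_switch

    def ports_of(self, logical_switch: str) -> list:
        return [p for p, ls in self.port_owner.items() if ls == logical_switch]

chassis = Chassis(8)
for p in (0, 1, 2):
    chassis.assign(p, "LS1")   # logical switch LS1 owns ports 0-2
for p in (3, 4):
    chassis.assign(p, "LS2")   # logical switch LS2 owns ports 3-4
```

Unassigned ports (5-7 here) belong to no logical switch, mirroring the abstract's point that assignment, not chassis membership, determines management.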

Packet processing method and router
10965590 · 2021-03-30

This application discloses a packet processing method and an LSR. The method includes: receiving, by an Ingress LSR of a first MPLS tunnel, a first notification packet that is based on an IGP, where the first notification packet includes an ELC flag, which is used to indicate that the first Egress LSR has ELC; after learning from the first notification packet that the first Egress LSR has ELC, inserting a label into a first packet, to generate a second packet, where the label forms an MPLS label stack, which includes, from bottom to top, a first EL, a first ELI, and a first TL; and sending the second packet to the first Egress LSR through the first MPLS tunnel. According to the solutions of this invention, a Transit LSR of the first MPLS tunnel may perform load balancing when forwarding the second packet.
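
The label-stack construction described above can be sketched as follows. This is a simplified illustration, not the patented encapsulation: the label values are invented, and the entropy derivation is a stand-in for whatever flow hash the ingress actually uses. The point is the conditional push of, from bottom to top, an entropy label (EL), an entropy label indicator (ELI), and the tunnel label (TL) once the egress has advertised entropy label capability (ELC).

```python
# Illustrative MPLS entropy-label encapsulation at the ingress LSR.
ELI = 7  # the reserved label value used as the entropy label indicator

def encapsulate(payload: bytes, tunnel_label: int, egress_has_elc: bool):
    """Return (label_stack, payload); stack is listed outermost first."""
    if not egress_has_elc:
        # egress cannot parse an EL, so push only the tunnel label
        return ([tunnel_label], payload)
    entropy = hash(payload) % (1 << 20)        # 20-bit entropy label
    # outermost to innermost: TL, ELI, EL  (i.e. bottom to top: EL, ELI, TL)
    return ([tunnel_label, ELI, entropy], payload)

stack, _ = encapsulate(b"flow-1234", tunnel_label=1001, egress_has_elc=True)
```

A transit LSR can then hash on the EL for load balancing without inspecting the payload, which is the benefit the abstract claims.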

NETWORK REPOSITORY FUNCTION CONTROLLER
20210068045 · 2021-03-04

Example implementations relate to registering a network function to a Network Repository Function (NRF). An NRF controller may detect a network function over a network, and responsive to the detection, may determine whether the network function is registered to the NRF. The NRF controller may register the network function to the NRF responsive to determining that the network function is not registered to the NRF.
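
The control flow is simple enough to state as a sketch (the NRF interface below is imagined for illustration, not the 3GPP service API): detect a network function, check whether it is registered, and register it only if it is not.

```python
# Minimal sketch of the NRF controller's detect-check-register flow.
class NRF:
    def __init__(self):
        self.registered = set()
    def is_registered(self, nf_id: str) -> bool:
        return nf_id in self.registered
    def register(self, nf_id: str) -> None:
        self.registered.add(nf_id)

def on_detect(nrf: NRF, nf_id: str) -> bool:
    """Called when the controller detects a network function over the
    network; returns True if a registration was performed."""
    if nrf.is_registered(nf_id):
        return False
    nrf.register(nf_id)
    return True

nrf = NRF()
first = on_detect(nrf, "amf-1")    # not yet registered: registers it
second = on_detect(nrf, "amf-1")   # already registered: no-op
```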

Routing flits in a network-on-chip based on operating states of routers

A system is described that includes an integrated circuit chip having a network-on-chip. The network-on-chip includes multiple routers arranged in a topology and a separate communication link coupled between each router and each of one or more neighboring routers of that router among the multiple routers in the topology. The integrated circuit chip also includes multiple nodes, each node coupled to a router of the multiple routers. When operating, a given router of the multiple routers keeps a record of operating states of some or all of the multiple routers and corresponding communication links. The given router then routes flits to destination nodes via one or more other routers of the multiple routers based at least in part on the operating states of the some or all of the multiple routers and the corresponding communication links.
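
State-aware routing of this kind can be sketched as a shortest-hop search that skips routers recorded as down. The topology, the state record, and the breadth-first strategy here are all illustrative assumptions; the patent does not commit to a particular search.

```python
# Toy sketch: route flits around routers whose recorded state is not "up".
from collections import deque

def route(topology: dict, state: dict, src: str, dst: str):
    """Shortest hop path from src to dst using only 'up' routers."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in topology[path[-1]]:
            if nxt not in seen and state[nxt] == "up":
                seen.add(nxt)
                queue.append(path + [nxt])
    return None   # destination unreachable over healthy routers

topo = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
state = {"A": "up", "B": "down", "C": "up", "D": "up"}
path = route(topo, state, "A", "D")   # detours around the down router B
```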

Method of handling multiple forwarding information base synchronization for network switch stacking system using central learning

A method of forwarding information base synchronization for a network switch stacking system includes transmitting, by at least one slave network switch, at least one change event to a master network switch; generating, by the master network switch, a change confirmation to the at least one slave network switch when the master network switch determines, according to the at least one change event, that a master forwarding information base needs to be updated; and updating, by the at least one slave network switch, at least one slave forwarding information base according to the change confirmation. The at least one change event includes at least one of a new learn event, a port move event, a regular port aging-out event, and a link aggregation update aging time event.
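
The central-learning exchange can be sketched as two roles passing messages (event and confirmation shapes are invented for the sketch): a slave reports a change event, the master decides whether its FIB actually needs the update, and only then emits a change confirmation that slaves apply.

```python
# Sketch of central learning: master validates events, slaves follow
# confirmations, so all FIBs converge on the master's view.
class Master:
    def __init__(self):
        self.fib = {}                         # MAC address -> port
    def handle_event(self, event):
        mac, port, kind = event
        if kind == "new_learn" and self.fib.get(mac) != port:
            self.fib[mac] = port
            return ("confirm", mac, port)     # change confirmation
        return None                           # no update necessary

class Slave:
    def __init__(self):
        self.fib = {}
    def apply(self, confirmation):
        _, mac, port = confirmation
        self.fib[mac] = port

master, slave = Master(), Slave()
conf = master.handle_event(("aa:bb", 3, "new_learn"))  # slave's report
if conf:
    slave.apply(conf)
```

A duplicate report would return no confirmation, so slaves never churn on redundant events.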

STACKED COMPUTER NETWORK DEVICES HAVING MULTIPLE MASTER NODES

An electronic device is described. The electronic device includes a stack of computer network devices, such as a stack of switches and/or routers. This stack of computer network devices includes data planes and ports for directing packets or frames in a wireless network based at least in part on destinations of the packets or frames. Moreover, the electronic device may include multiple controllers (such as processors) that operate as master nodes and that perform network functions for the stack of computer network devices using a database. This database may include a common database that is accessible by the multiple controllers or multiple instances of the database in the multiple controllers, where the multiple instances of the database are synchronized.
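
The "multiple instances of the database ... synchronized" variant can be illustrated with a toy replication scheme (the write-through-to-peers approach is an assumption; the patent only requires that the instances stay synchronized):

```python
# Rough sketch: each master node holds its own database instance, and
# every write is replicated to the peer masters' instances.
class MasterNode:
    def __init__(self, name: str):
        self.name = name
        self.db = {}        # this node's instance of the database
        self.peers = []     # the other master nodes in the stack

    def write(self, key, value):
        self.db[key] = value
        for peer in self.peers:
            peer.db[key] = value   # keep the instances synchronized

m1, m2 = MasterNode("m1"), MasterNode("m2")
m1.peers, m2.peers = [m2], [m1]
m1.write("vlan10", {"ports": [1, 2]})   # visible to both masters
```

The common-database variant mentioned in the abstract would replace the per-node `db` with one shared store and drop the replication loop.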

Systems and methods for API routing and security

The invention provides methods, computer program products, proxies and proxy clusters configured for forwarding, routing and/or load balancing of client requests or messages between multiple different APIs and/or multiple instances of an API. The invention further provides for efficient session-information-based routing of client requests for a target API, wherein multiple instances of the target API are simultaneously implemented across one or more API servers. The invention additionally enables separation of a control plane (i.e. control logic) and run-time execution logic within a data plane within proxies in a proxy cluster, and also enables implementation of a plurality of data planes within each proxy, thereby ensuring security, high availability and scalability. An invention embodiment additionally implements two-stage rate-limiting protection for API servers, combining rate limiting between a client and each proxy with rate limiting between a proxy cluster and a server backend.
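
One common way to realize session-information-based routing across multiple instances of a target API is hash-based affinity, sketched below. This is an assumption for illustration (the patent does not specify the mapping): requests carrying the same session identifier always land on the same API instance.

```python
# Illustrative session affinity: hash the session identifier to pick
# a stable instance among the replicas of the target API.
import hashlib

def pick_instance(session_id: str, instances: list) -> str:
    digest = hashlib.sha256(session_id.encode()).digest()
    # stable for a given session_id and instance list
    return instances[int.from_bytes(digest[:4], "big") % len(instances)]

instances = ["api-1", "api-2", "api-3"]
a = pick_instance("sess-42", instances)
b = pick_instance("sess-42", instances)   # same session -> same instance
```

A production proxy would typically use consistent hashing so that adding or removing an instance remaps only a fraction of sessions.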

Network device stacking
10819627 · 2020-10-27

A method and device for realizing automatic stacking of network devices are disclosed. According to an example of the method, when a network device determines its device role, the network device may send a first neighbor discovery message to a neighbor device and receive a second neighbor discovery message sent by the neighbor device. Next, if it determines that a topological structure between the network device and the neighbor device changes according to the second neighbor discovery message, the network device may determine whether a stacking condition to trigger stacking the network device and the neighbor device is satisfied or not. If the stacking condition is satisfied, the network device may further determine a stacking configuration for stacking the network device and the neighbor device. Then the network device may stack the network device with the neighbor device according to the stacking configuration.
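
The decision chain in this abstract (topology change seen in a neighbor discovery message, then a stacking-condition check, then a stacking configuration) can be condensed into one handler. The role names and the particular stacking condition below are invented for the sketch; the patent leaves the condition abstract.

```python
# Condensed sketch of the automatic-stacking decision on receiving a
# second neighbor discovery message.
def on_neighbor_discovery(my_role: str, known_neighbors: set, msg: tuple):
    neighbor, neighbor_role = msg
    if neighbor in known_neighbors:
        return None                      # no topology change, nothing to do
    known_neighbors.add(neighbor)        # topological structure changed
    # toy stacking condition: a master stacks only with a member device
    if my_role == "master" and neighbor_role == "member":
        # stacking configuration for stacking this device with the neighbor
        return {"master": "self", "member": neighbor}
    return None                          # condition not satisfied

neighbors = set()
cfg = on_neighbor_discovery("master", neighbors, ("sw-2", "member"))
```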

METHODS AND SYSTEMS FOR API PROXY BASED ADAPTIVE SECURITY

The invention concerns API proxy based adaptive security. The invention implements adaptive security for API servers while avoiding data bottlenecks and maintaining client experience. The invention provides methods and configurations for API security that may be employed at proxies for implementing routing decisions involving client messages received at said proxies. The invention also involves generating or collecting, at proxies, log information that captures data corresponding to received client messages and responses from API servers. This log information correlates communications between clients, proxies and backend API servers, and includes data relevant for purposes of generating API metrics and identifying anomalies and/or indicators of compromise. The invention yet further provides security server clusters configured for generating API metrics and/or identifying anomalies or indicators of compromise, which may be used by proxies to terminate existing connections and block subsequent requests or messages from clients associated with the identified anomalies or indicators of compromise.
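
The proxy-side half of this loop (log each client message, then block clients flagged by the security side) can be sketched as follows. The log fields, return values, and the flagging interface are invented for illustration; in the described system the indicators come from the security server clusters, not from the proxy itself.

```python
# Sketch of the proxy's logging and blocking behavior.
class Proxy:
    def __init__(self):
        self.log = []          # correlates client messages and responses
        self.blocked = set()   # clients tied to identified anomalies

    def handle(self, client: str, request: str) -> str:
        if client in self.blocked:
            return "rejected"          # block subsequent requests
        self.log.append({"client": client, "request": request})
        return "forwarded"             # routed on to a backend API server

    def apply_indicators(self, flagged_clients: set) -> None:
        # indicators of compromise pushed down by the security cluster
        self.blocked.update(flagged_clients)

proxy = Proxy()
r1 = proxy.handle("c1", "/orders")   # logged and forwarded
proxy.apply_indicators({"c1"})       # security side flags client c1
r2 = proxy.handle("c1", "/orders")   # now rejected at the proxy
```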