H04L12/815

Detection block sending and receiving method, and network device and system

The present disclosure relates to detection block sending and receiving methods, and network devices and systems. One example method includes obtaining, by a network device, an original bit block data flow, generating at least one detection block, inserting the at least one detection block into a position of at least one idle block in the original bit block data flow, and sending a bit block data flow including the at least one detection block.

Method and system for shaping traffic from an egress port in a software-defined wide area network
11102130 · 2021-08-24

A method for shaping traffic from an egress port in a software-defined wide-area network (SD-WAN) involves obtaining a stored network bandwidth measurement of the network bandwidth between a source endpoint and a destination endpoint, obtaining a current shaping rate used by the source endpoint when sending data to the destination endpoint, obtaining an updated measurement of the network bandwidth between the source endpoint and the destination endpoint, determining a new shaping rate based on the stored network bandwidth measurement, the current shaping rate, and the updated measurement of network bandwidth, and configuring the shaping rate used by the source endpoint when sending data to the destination endpoint with the new shaping rate.
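The abstract does not specify how the three inputs combine into a new rate. A minimal sketch of one plausible policy (function name, smoothing factor, and probe multiplier are all hypothetical, not from the patent): blend the stored and updated bandwidth measurements, probe upward when there is headroom, and back off when the fresh measurement falls below the current rate.

```python
def new_shaping_rate(stored_bw, current_rate, updated_bw, alpha=0.2):
    """Illustrative shaping-rate update from a stored bandwidth measurement,
    the current shaping rate, and an updated bandwidth measurement."""
    # EWMA-style blend of the stored and updated measurements.
    estimate = (1 - alpha) * stored_bw + alpha * updated_bw
    if updated_bw >= current_rate:
        # Headroom available: probe upward, but never past the estimate.
        return min(current_rate * 1.25, estimate)
    # Measured bandwidth fell below the current rate: back off immediately.
    return min(updated_bw, estimate)
```

The blend damps transient measurement noise; the asymmetric increase/decrease mirrors common congestion-control practice, though the patent itself does not commit to any particular formula.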

Method and apparatus for entity-based resource protection for a cloud-based system
11075923 · 2021-07-27

Systems and methods for limiting calls to access a cloud-based system are disclosed. The systems and methods obtain a rate limiting policy including at least one attribute and a counting interval, the at least one attribute including at least one of a username associated with a client, an instance, an organization associated with the client, a resource being requested, a service being requested, a geographical access region, and an Application Programming Interface (API) being requested. The systems and methods also mark an entry, based on the rate limiting policy, in a database for each call the client makes. The systems and methods further enforce the rate limiting policy by not processing calls from the client associated with the at least one attribute once the count of entries marked within the counting interval exceeds the permitted limit.
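The mark-an-entry-per-call scheme can be sketched as a sliding-window counter keyed by the policy attribute. This is an illustration only (class and method names are hypothetical; the patent stores entries in a database, simplified here to an in-memory deque):

```python
import time
from collections import defaultdict, deque

class RateLimiter:
    """Illustrative sliding-window limiter: one entry is marked per call,
    keyed by a policy attribute (e.g. username, organization, or API)."""
    def __init__(self, limit, interval_s):
        self.limit = limit
        self.interval_s = interval_s
        self.calls = defaultdict(deque)  # attribute -> timestamps of marked entries

    def allow(self, attribute, now=None):
        now = time.monotonic() if now is None else now
        window = self.calls[attribute]
        # Drop entries that have fallen out of the counting interval.
        while window and now - window[0] >= self.interval_s:
            window.popleft()
        if len(window) >= self.limit:
            return False  # beyond the limit: do not process the call
        window.append(now)  # mark an entry for this call
        return True
```

Counting per attribute rather than per connection is what lets one policy cover a username, an organization, or a requested API interchangeably.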

MULTIPLEXED RESOURCE ALLOCATION ARCHITECTURE
20210243137 · 2021-08-05

A device configured to receive a data set and instructions for processing the data set from a network device. The device is further configured to parse the data set into a plurality of data segments to be processed, and generate a plurality of instruction segments from the received instructions. The device is further configured to assign each instruction segment to a resource unit, and to generate control information with instructions for combining processed data segments from the resource units. The device is further configured to receive processed data segments from the resource units, to generate the processed data set, and to output the processed data set to the network device.
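The parse/assign/recombine flow can be sketched in a few lines. Everything below is hypothetical scaffolding (the patent does not define the segmentation rule or the resource-unit interface); resource units are modeled as plain callables and the "control information" reduces to preserving segment order:

```python
def split(items, n):
    """Partition items into n near-equal, order-preserving segments."""
    k, m = divmod(len(items), n)
    out, start = [], 0
    for i in range(n):
        end = start + k + (1 if i < m else 0)
        out.append(items[start:end])
        start = end
    return out

def process_multiplexed(data, instruction, resource_units):
    """Assign each data segment (with the instruction) to a resource unit,
    then recombine the processed segments per the control information
    (here: original segment order)."""
    segments = split(data, len(resource_units))
    processed = [unit(instruction, seg) for unit, seg in zip(resource_units, segments)]
    combined = []
    for part in processed:
        combined.extend(part)
    return combined
```

In the claimed device the instruction itself is also segmented per resource unit; the sketch uses a single shared instruction for brevity.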

Methods, systems, and computer readable media for providing guaranteed traffic bandwidth for services at intermediate proxy nodes

A method for providing guaranteed minimum intermediate proxy node bandwidth for services includes configuring, at an intermediate proxy node, a guaranteed minimum bandwidth of the intermediate proxy node reserved to process messages associated with a service. The method further includes receiving a first message at the intermediate proxy node. The method further includes determining, by the intermediate proxy node, that the intermediate proxy node is in an overloaded state. The method further includes identifying, by the intermediate proxy node, the first message as being associated with the service for which the guaranteed minimum bandwidth is configured. The method further includes determining, by the intermediate proxy node, that a portion of the guaranteed minimum bandwidth for the service is available to process the first message. The method further includes routing, by the intermediate proxy node and to a producer network function (NF) that provides the service, the first message and updating a message count for the service.
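The admission decision described in the method can be sketched as a per-service counter checked only under overload. Names and the per-interval message-count model are assumptions for illustration; the patent does not specify how the "portion of the guaranteed minimum bandwidth" is accounted:

```python
class ProxyNode:
    """Illustrative admission check: under overload, only messages for
    services with remaining guaranteed-minimum headroom are routed."""
    def __init__(self):
        self.guaranteed = {}  # service -> reserved messages per interval
        self.counts = {}      # service -> messages routed this interval

    def configure(self, service, min_msgs):
        self.guaranteed[service] = min_msgs
        self.counts[service] = 0

    def admit(self, service, overloaded):
        if not overloaded:
            return True  # no overload: route normally
        reserved = self.guaranteed.get(service)
        if reserved is None:
            return False  # no guarantee configured for this service
        if self.counts[service] >= reserved:
            return False  # the service's reserved portion is exhausted
        self.counts[service] += 1  # update the message count for the service
        return True
```

After `admit` returns true, the real method routes the message to the producer network function (NF) providing the service.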

TECHNIQUES FOR TRANSPARENTLY EMULATING NETWORK CONDITIONS
20210224183 · 2021-07-22

In various embodiments, a network emulation application emulates network conditions when testing a software application. In response to a request to emulate a first set of network conditions for a first client device that is executing the software application, the network emulation application causes a kernel to implement a first pipeline and to automatically input network traffic associated with the first client device into the first pipeline instead of a default bridge. In response to a request to emulate a second set of network conditions for a second client device that is executing the software application, the network emulation application causes the kernel to implement a second pipeline and to automatically input network traffic associated with the second client device into the second pipeline instead of the default bridge. Each pipeline performs one or more traffic shaping operations on at least a subset of the network traffic input into it.
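The per-client dispatch can be sketched as a map from client to a pipeline of shaping operations, with the default bridge as a pass-through fallback. This is a userspace toy, not the in-kernel mechanism the patent describes; all names are hypothetical:

```python
class Emulator:
    """Illustrative dispatch: per-client pipelines of traffic-shaping
    operations, with a default bridge (pass-through) as the fallback."""
    def __init__(self):
        self.pipelines = {}  # client_id -> list of shaping operations

    def emulate(self, client_id, *operations):
        # Implement a dedicated pipeline for this client's traffic.
        self.pipelines[client_id] = list(operations)

    def input(self, client_id, packet):
        ops = self.pipelines.get(client_id)
        if ops is None:
            return packet  # default bridge: forward unchanged
        for op in ops:     # apply each shaping operation in turn
            packet = op(packet)
        return packet
```

The key property the sketch preserves is transparency: clients without an emulation request never see their traffic altered.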

Enabling enterprise segmentation with 5G slices in a service provider network

An enterprise controller of an enterprise network sends to a service gateway of a service provider network a request for network slice information about network slices provisioned on a data plane of the service provider network. Responsive to the sending, the enterprise controller receives from the service gateway the network slice information including identifiers of and properties associated with the network slices. Responsive to receiving a request for the network slice information from a network device at a border of a forwarding plane of the enterprise network, the enterprise controller sends the network slice information to the network device to cause the network device to perform configuring network traffic in the forwarding plane with identifiers of ones of the network slices that match the network traffic, and to perform forwarding the network traffic configured with the identifiers to the data plane of the service provider network.
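The traffic-to-slice matching step at the border device can be sketched as attribute matching against the slice properties returned by the service gateway. The matching rule below (exact property equality, first match wins) is an assumption; the patent does not define it:

```python
def tag_traffic(flows, slices):
    """Attach the identifier of the first network slice whose properties
    match a flow; unmatched flows stay untagged (slice_id=None)."""
    tagged = []
    for flow in flows:
        slice_id = None
        for sid, props in slices.items():
            # A slice matches when every advertised property equals the flow's.
            if all(flow.get(k) == v for k, v in props.items()):
                slice_id = sid
                break
        tagged.append({**flow, "slice_id": slice_id})
    return tagged
```

Traffic configured with a slice identifier is then forwarded into the service provider's data plane on that slice.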

Highly deterministic latency in a distributed system

A distributed computing system, such as may be used to implement an electronic trading system, supports a notion of fairness in latency. The system does not favor any particular client. Thus, being connected to a particular access point into the system (such as via a gateway) does not give any particular device an unfair advantage or disadvantage over another. That end is accomplished by precisely controlling latency, that is, the time between when request messages arrive at the system and a time at which corresponding response messages are permitted to leave. The precisely controlled, deterministic latency can be fixed over time, or it can vary according to some predetermined pattern, or vary randomly within a pre-determined range of values.
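The fixed-latency case can be sketched as an egress gate that holds each response until exactly `latency` has elapsed since its request arrived. Names are hypothetical; the patent's system also supports patterned or randomized latency within a predetermined range, which this sketch omits:

```python
import heapq

class LatencyGate:
    """Illustrative egress gate: a response may leave only when a fixed,
    deterministic latency has elapsed since its request arrived."""
    def __init__(self, latency):
        self.latency = latency
        self.pending = []  # min-heap of (release_time, response)

    def submit(self, arrival_time, response):
        heapq.heappush(self.pending, (arrival_time + self.latency, response))

    def release(self, now):
        """Return every response whose release time has been reached."""
        out = []
        while self.pending and self.pending[0][0] <= now:
            out.append(heapq.heappop(self.pending)[1])
        return out
```

Because release depends only on arrival time, a client's choice of gateway or access point cannot shorten its effective round trip, which is the fairness property the abstract emphasizes.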

Flow control of two TCP streams between three network nodes

A system for forwarding packets between a first endpoint and a second endpoint, comprising one or more processors; a first network interface for communication with the first endpoint and a second network interface for communication with the second endpoint; and non-transitory memory comprising instructions. The instructions cause the one or more processors to receive a first packet from the first endpoint comprising a first data payload; generate a second packet, comprising the first data payload and an indicator of remaining buffer capacity different from an actual buffer capacity of the system; transmit the second packet to the second endpoint; receive a third packet from the second endpoint comprising a second data payload; generate a fourth packet, comprising the second data payload and an indicator of remaining buffer capacity different from an actual buffer capacity of the system; and transmit the fourth packet to the first endpoint.
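The packet rewriting can be sketched as a relay that copies the payload but substitutes its own chosen flow-control window for the actual buffer capacity, independently in each direction. Packets are modeled as plain dicts and all names are hypothetical; a real implementation would rewrite the TCP receive-window field:

```python
def forward(packet, advertised_window):
    """Rewrite the flow-control field: advertise a capacity chosen by the
    relay, not the relay's actual remaining buffer capacity."""
    return {"payload": packet["payload"], "window": advertised_window}

class Relay:
    """Illustrative middle node: each direction is its own TCP-style stream
    whose advertised window the relay controls independently."""
    def __init__(self, window_to_second, window_to_first):
        self.window_to_second = window_to_second
        self.window_to_first = window_to_first

    def from_first(self, packet):
        # First packet in -> second packet out, toward the second endpoint.
        return forward(packet, self.window_to_second)

    def from_second(self, packet):
        # Third packet in -> fourth packet out, toward the first endpoint.
        return forward(packet, self.window_to_first)
```

Advertising a window decoupled from the real buffer lets the middle node pace either endpoint without dropping packets, which is the point of splitting the path into two TCP streams.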

SYSTEMS, METHODS, COMPUTING PLATFORMS, AND STORAGE MEDIA FOR ADMINISTERING A DISTRIBUTED EDGE COMPUTING SYSTEM UTILIZING AN ADAPTIVE EDGE ENGINE

Systems, methods, computing platforms, and storage media for administering a distributed edge computing system utilizing an adaptive edge engine based on a finite state machine behavioral model are disclosed. Exemplary implementations may: select a first workload from one or more workloads; access, for the selected workload, one or more location optimization strategies from a plurality of location optimization strategies; optimize an objective function across a portion of one or more endpoints; select one or more endpoints; deploy the workload on at least one selected endpoint; monitor the health of the endpoint-workload deployment; and direct network traffic to the appropriate endpoint.
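The optimize/deploy/monitor sequence can be sketched as a simple control loop. Everything here is a hypothetical simplification (the patent's finite-state-machine behavioral model and location optimization strategies are reduced to a single objective function and a health predicate):

```python
def administer(workloads, endpoints, objective, healthy):
    """Illustrative control loop: optimize placement per workload, deploy,
    and keep only deployments whose endpoint passes the health check."""
    deployments = {}
    for wl in workloads:
        # Optimize the objective function across the candidate endpoints.
        endpoint = min(endpoints, key=lambda ep: objective(wl, ep))
        if healthy(endpoint):           # monitor the endpoint-workload deployment
            deployments[wl] = endpoint  # direct traffic to this endpoint
    return deployments
```

In the adaptive edge engine this loop would rerun as conditions change, re-optimizing placement rather than computing it once.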