H04L47/21

System and method for maximizing resource credits across shared infrastructure
11381516 · 2022-07-05

A computer-implemented method of adjusting a resource credit configuration for cloud resources includes collecting a resource credit inventory and attribute metadata related to resources from one or more cloud resources. An expected resource demand is determined. A plurality of resource credit configurations that match the determined expected resource demand is determined. An improved resource credit benefit is determined based on the resource credit inventory and on the plurality of credit configurations that match the determined expected resource demand. Modified attribute metadata is then determined based on the determined improved resource credit benefit.
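The selection step described above can be sketched as filtering candidate credit configurations down to those matching the expected demand and keeping the one with the greatest benefit. This is an illustrative assumption about the method, not taken from the patent; the record fields (`capacity`, `benefit`) and values are invented:

```python
def best_credit_config(configs, expected_demand):
    """Among candidate resource credit configurations, keep those whose
    capacity matches (covers) the expected demand, then return the one
    with the highest resource credit benefit."""
    matching = [c for c in configs if c["capacity"] >= expected_demand]
    if not matching:
        return None  # no configuration matches the expected demand
    return max(matching, key=lambda c: c["benefit"])

configs = [
    {"name": "A", "capacity": 4,  "benefit": 8},  # too small for demand 9
    {"name": "B", "capacity": 10, "benefit": 5},
    {"name": "C", "capacity": 12, "benefit": 7},  # matches, best benefit
]
print(best_credit_config(configs, expected_demand=9)["name"])  # prints "C"
```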

System and method for streaming data

A system and method to conform data flow are provided. The system includes a queue configured to receive at least one data stream, and a processor configured to convert the at least one data stream to a continuous data stream, and output the continuous data stream at a constant rate.
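The smoothing behaviour described here is essentially a leaky-bucket pacer. A minimal, time-free sketch (assuming queued items are tagged with arrival times; all names are illustrative) computes when each item would be emitted so that the output stream is continuous and never faster than the constant rate:

```python
def pace(arrival_times, interval):
    """Map bursty arrival times to constant-rate emission times: each item
    is emitted no earlier than it arrived, and no earlier than one full
    interval after the previous emission."""
    emissions, next_free = [], 0.0
    for t in arrival_times:
        emit = max(t, next_free)  # wait for the item and for the rate slot
        emissions.append(emit)
        next_free = emit + interval
    return emissions

# A burst of three items at t=0, then one at t=5, paced to one item/second:
print(pace([0.0, 0.0, 0.0, 5.0], 1.0))  # [0.0, 1.0, 2.0, 5.0]
```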

MACHINE LEARNING BASED END TO END SYSTEM FOR TCP OPTIMIZATION
20220045916 · 2022-02-10

Bypass network traffic records are generated for a web application. Sufficient statistics of network optimization parameters are calculated for network performance categories. The bypass network traffic records are partitioned for the network performance categories into network traffic buckets. Sufficient statistics and the network traffic buckets are used to generate network quality mappings. The network quality mappings are used as training instances to train a machine learner for generating network optimization policies to be implemented by user devices.
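The "sufficient statistics per bucket" step can be sketched as keeping only count, sum, and sum of squares per performance category, from which the mean and variance of a network optimization parameter are recoverable without storing raw records. The category names and values below are illustrative assumptions:

```python
from collections import defaultdict

def bucket_stats(records):
    """Partition (category, value) traffic records into buckets and keep
    sufficient statistics per bucket: count, sum, and sum of squares."""
    acc = defaultdict(lambda: [0, 0.0, 0.0])
    for category, value in records:
        s = acc[category]
        s[0] += 1
        s[1] += value
        s[2] += value * value
    # Mean and variance follow directly from the sufficient statistics.
    return {c: {"n": n, "mean": tot / n, "var": sq / n - (tot / n) ** 2}
            for c, (n, tot, sq) in acc.items()}

stats = bucket_stats([("fast", 2.0), ("fast", 4.0), ("slow", 10.0)])
print(stats["fast"])  # {'n': 2, 'mean': 3.0, 'var': 1.0}
```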

LOAD BALANCING AMONG OUTPUT PORTS

Examples described herein relate to a network interface device that includes packet processing circuitry to detect usage of an egress port and report the usage of the egress port to a network interface device driver to cause reallocation of hash-based packet buckets to at least one egress port to provide an allocation of hash-based packet buckets to multiple active egress ports of the network interface device with retention of bucket-to-egress port mappings except for re-allocations of one or more buckets to one or more active egress ports. In some examples, usage of the egress port is based on a count of hash buckets assigned to packets to be transmitted from the egress port or a number of bytes of packets enqueued to be transmitted from the egress port.
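One way to read the reallocation described above: a bucket keeps its existing egress port unless that port is inactive or over its fair share, and only the displaced buckets move. The even-split target and the dictionary data structures in this sketch are assumptions for illustration:

```python
def rebalance(bucket_to_port, active_ports):
    """Reassign hash buckets so load is spread across the active egress
    ports, while retaining every bucket-to-port mapping that does not
    have to change."""
    base, extra = divmod(len(bucket_to_port), len(active_ports))
    target = {p: base + (1 if i < extra else 0)
              for i, p in enumerate(active_ports)}
    load = {p: 0 for p in active_ports}
    new_map, displaced = {}, []
    for b, p in sorted(bucket_to_port.items()):
        if p in load and load[p] < target[p]:
            new_map[b] = p       # mapping retained
            load[p] += 1
        else:
            displaced.append(b)  # inactive port, or port over its target
    for b in displaced:          # move only the displaced buckets
        p = next(q for q in active_ports if load[q] < target[q])
        new_map[b] = p
        load[p] += 1
    return new_map

# Port A holds 3 of 4 buckets; rebalancing over A and B moves exactly one.
print(rebalance({0: "A", 1: "A", 2: "A", 3: "B"}, ["A", "B"]))
```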

Asynchronous high throughput inbound messages with throttled outbound messages to safeguard enterprise backend systems

An enterprise backend system may have inherent limits on a throughput of inbound messages. In one implementation, a message producer publishes messages to a message broker at a high throughput. A message consumer receives messages from the broker and throttles the throughput of messages shipped to an enterprise backend system.
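The throttling half of this pattern can be sketched as a consumer that accepts inbound messages at any rate but ships them to the backend no faster than a fixed interval. The class and parameter names, and the callback standing in for the backend, are invented for illustration:

```python
import collections
import time

class ThrottledConsumer:
    """Accept messages from the broker at high throughput, but ship them
    to the backend no faster than max_per_sec."""
    def __init__(self, max_per_sec, backend, clock=time.monotonic):
        self.interval = 1.0 / max_per_sec
        self.backend = backend
        self.clock = clock
        self.next_ship = 0.0
        self.buffer = collections.deque()

    def receive(self, msg):
        self.buffer.append(msg)  # inbound path is never blocked

    def pump(self):
        """Ship buffered messages whose rate slot has arrived."""
        now = self.clock()
        shipped = 0
        while self.buffer and now >= self.next_ship:
            self.backend(self.buffer.popleft())
            self.next_ship = max(now, self.next_ship) + self.interval
            shipped += 1
        return shipped

# Demo with an injected fake clock so the pacing is deterministic:
shipped = []
t = [0.0]
consumer = ThrottledConsumer(10, shipped.append, clock=lambda: t[0])
for m in range(5):
    consumer.receive(m)      # inbound at high throughput
print(consumer.pump())       # 1  (one message shipped at t=0.0)
t[0] = 0.05
print(consumer.pump())       # 0  (still inside the 0.1 s interval)
t[0] = 0.1
print(consumer.pump())       # 1
```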

MACHINE LEARNING BASED END TO END SYSTEM FOR TCP OPTIMIZATION
20210234769 · 2021-07-29

Bypass network traffic records are generated for a web application. Sufficient statistics of network optimization parameters are calculated for network performance categories. The bypass network traffic records are partitioned for the network performance categories into network traffic buckets. Sufficient statistics and the network traffic buckets are used to generate network quality mappings. The network quality mappings are used as training instances to train a machine learner for generating network optimization policies to be implemented by user devices.

POSITION PARAMETERIZED RECURSIVE NETWORK ARCHITECTURE WITH TOPOLOGICAL ADDRESSING
20210184934 · 2021-06-17

A digital data communications network that supports efficient, scalable routing of data and use of network resources by combining a recursive division of the network into hierarchical sub-networks with repeating parameterized general purpose link communication protocols and an addressing methodology that reflects the physical structure of the underlying network hardware. The sub-division of the network enhances security by reducing the amount of the network visible to an attack and by insulating the network hardware itself from attack. The fixed bandwidth range at each sub-network level allows quality of service to be assured and controlled. The routing of data is aided by a topological addressing scheme that allows data packets to be forwarded towards their destination based on only local knowledge of the network structure, with automatic support for mobility and multicasting. The repeating structures in the network greatly simplify network management and reduce the effort to engineer new network capabilities.
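The "forward on only local knowledge" property of topological addressing can be illustrated with hierarchical tuple addresses: a node either descends one level toward a destination inside its own sub-network or hands the packet to its parent, with no global routing table. The tuple encoding below is an assumed simplification, not the patent's scheme:

```python
def next_hop(current, dest):
    """Decide the next hop from the addresses alone: if dest lies inside
    our sub-network (our address is a prefix of it), descend one level
    toward it; otherwise climb to our parent sub-network."""
    if dest[:len(current)] == current:
        return dest[:len(current) + 1]
    return current[:-1]

def route(src, dest):
    """Trace the full path; each step uses only local address knowledge."""
    path = [src]
    while path[-1] != dest:
        path.append(next_hop(path[-1], dest))
    return path

# Host (1,2,3) reaching (1,4,5): climb to sub-network (1,), then descend.
print(route((1, 2, 3), (1, 4, 5)))
# [(1, 2, 3), (1, 2), (1,), (1, 4), (1, 4, 5)]
```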

Method for data transmission and terminal

A method for data transmission and a terminal are provided. The method includes the following. An acknowledgment message for indicating successful data reception is generated in response to first downlink data received by a terminal. The acknowledgment message is transmitted via a preset channel resource when a channel resource occupancy priority of the acknowledgment message is higher than a channel resource occupancy priority of second downlink data to be received by the terminal.

Dynamic congestion management
10944676 · 2021-03-09

Methods and systems for dynamic congestion management in communications networks that advantageously provide a satisfactory Quality of Experience (QoE) of real-time communication for network users. Congestion management is achieved wherein an ingress interface is monitored by a data processing system. When utilization of that interface exceeds a first activation level, a first message is sent to a second data processing system, wherein the second data processing system is a source of at least some of the data packets traversing the ingress interface, and wherein the first message indicates that traffic shaping is to occur in accordance with the first activation level. Only if the utilization falls below a deactivation level is a second message transmitted to the second data processing system, wherein the second message indicates that traffic shaping is to stop.
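The separate activation and deactivation levels form a hysteresis band, which keeps shaping from flapping on and off when utilization hovers near a single threshold. A sketch of that state machine (the levels, message strings, and callback are illustrative assumptions):

```python
class CongestionManager:
    """Hysteresis sketch of the described signalling: start shaping when
    utilization exceeds the activation level; stop only when it falls
    below the lower deactivation level. Between the two, state holds."""
    def __init__(self, activation, deactivation, send):
        assert deactivation < activation
        self.activation = activation
        self.deactivation = deactivation
        self.send = send  # callback to the second (source) system
        self.shaping = False

    def observe(self, utilization):
        if not self.shaping and utilization > self.activation:
            self.shaping = True
            self.send("start-shaping")
        elif self.shaping and utilization < self.deactivation:
            self.shaping = False
            self.send("stop-shaping")

msgs = []
manager = CongestionManager(0.8, 0.6, msgs.append)
for u in [0.5, 0.9, 0.7, 0.5]:  # 0.7 is inside the band: no flapping
    manager.observe(u)
print(msgs)  # ['start-shaping', 'stop-shaping']
```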