H04L12/815

Reducing power consumption in an electronic device
11171890 · 2021-11-09 ·

An ingress packet processor in a device corresponds to a group of ports and receives network packets from ports in its port group. A traffic manager in the device manages buffers storing packet data for transmission to egress packet processors. An ingress arbiter is associated with a port group and connects the port group to an ingress packet processor coupled to the ingress arbiter. The ingress arbiter determines a traffic rate at which the associated ingress packet processor transmits packets to the traffic manager. The ingress arbiter controls an associated traffic shaper to generate a number of tokens that are assigned to the port group. Upon receiving packet data from a port in the group, the ingress arbiter determines, using information from the traffic shaper, whether a token is available. Conditioned on determining that a token is available, the ingress arbiter forwards the packet data to the ingress packet processor.
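The token-gated forwarding decision described above can be sketched as a classic token bucket. This is a minimal illustration, not the patented implementation: the refill rate stands in for the traffic rate the arbiter determines for the ingress packet processor, and `try_consume` stands in for the "is a token available" check made on arrival of packet data.

```python
class TokenBucketShaper:
    """Illustrative per-port-group traffic shaper: tokens accrue at the
    rate the ingress packet processor can sustain toward the traffic
    manager, and packet data is forwarded only when a token is available.
    All names and parameters are assumptions for the sketch."""

    def __init__(self, rate, burst, now=0.0):
        self.rate = rate        # tokens generated per second for the port group
        self.burst = burst      # maximum tokens the port group may accumulate
        self.tokens = burst
        self.last = now

    def try_consume(self, now):
        """Refill based on elapsed time, then spend one token if available.

        Returns True if the arbiter may forward the packet data."""
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

When no token is available the arbiter would hold (or defer) the packet data rather than forward it, which is what throttles the ingress packet processor and saves power.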

TECHNIQUES FOR DYNAMICALLY ALLOCATING RESOURCES IN A STORAGE CLUSTER SYSTEM
20210344613 · 2021-11-04 ·

Various embodiments are directed to techniques for dynamically adjusting a maximum rate of throughput for accessing data stored within a volume of storage space of a storage cluster system based on the amount of that data that is stored within that volume. An apparatus includes an access component to monitor an amount of client data stored within a volume defined within a storage device coupled to a first node, and to perform a data access command received from a client device via a network to alter the client data stored within the volume; and a policy component to limit a rate of throughput at which at least the client data within the volume is exchanged as part of performance of the data access command to a maximum rate of throughput, and to calculate the maximum rate of throughput based on the stored amount.
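The policy component's calculation can be illustrated with a simple sketch in which the permitted throughput scales with the amount of client data stored in the volume. The linear form, floor, and ceiling here are assumptions for illustration; the abstract only states that the maximum rate is calculated from the stored amount.

```python
def max_throughput(stored_bytes, base_rate_bps, rate_per_gb_bps,
                   floor_bps=0, ceiling_bps=None):
    """Hypothetical policy: grant a base rate plus an increment per
    gigabyte of client data stored in the volume, clamped to optional
    floor/ceiling limits. Parameter names are illustrative."""
    rate = base_rate_bps + (stored_bytes / 1e9) * rate_per_gb_bps
    if ceiling_bps is not None:
        rate = min(rate, ceiling_bps)
    return max(rate, floor_bps)
```

The access component would then pace data access commands for the volume so their aggregate throughput never exceeds the value this policy returns.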

PROCESSING USER TRAFFIC IN A VIRTUALISED NETWORK

User traffic is processed in a virtualised network. First and second VNFs are initialised in the same network namespace as each other in user space in a host and have access to a shared memory region of the host. The first VNF processes user traffic and the second VNF provides a user plane service in relation to user traffic processed by the first VNF. The first VNF is used to establish a point-to-point, shared-memory interface between the first and second VNFs and is used to classify incoming user traffic. In response to the first VNF determining, based on the classifying, that the incoming user traffic is to be subject to the user plane service, the first VNF is used to store the incoming user traffic in the shared memory region of the host to enable the second VNF to provide the user plane service in relation to the incoming user traffic.
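The classify-then-hand-off flow can be sketched as follows. In a real deployment the channel would be a ring buffer in the host's shared memory region; here a deque stands in for that interface, and the classifier predicate is an assumed placeholder.

```python
from collections import deque

class SharedMemoryChannel:
    """Stand-in for the point-to-point shared-memory interface between
    the two VNFs; a deque models the zero-copy handoff for the sketch."""

    def __init__(self):
        self.ring = deque()

    def put(self, pkt):
        self.ring.append(pkt)

    def get(self):
        return self.ring.popleft() if self.ring else None


def first_vnf_classify(pkt, channel, needs_service):
    """First VNF: classify incoming user traffic. Traffic requiring the
    user plane service is stored in the shared region so the second VNF
    can pick it up; other traffic continues on the normal path."""
    if needs_service(pkt):
        channel.put(pkt)
        return "handed_off"
    return "forwarded"
```

The second VNF would poll (or be signalled through) the same channel and apply its user plane service to whatever the first VNF stored there.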

Resource usage for a remote session using artificial network bandwidth shaping

Disclosed are systems and methods for managing computing resources for a remote session that has been established between a client and a remote server via a communication channel. Such a remote session is configured to automatically adapt image quality of the remote session based on a network status of the communication channel. The described technique includes detecting an inactive state of the remote session, and in turn, modifying at least one network setting of the client using a network shaping rule specified to artificially reduce a network quality of the communication channel used by the client for traffic of the remote session, so as to cause the client to reduce image quality of the remote session and reduce an amount of data exchanged between the remote server and the client.
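The core rule, detect inactivity and then artificially degrade the channel so the session's own adaptive logic lowers image quality, can be sketched as a small policy function. The threshold and bandwidth values are illustrative assumptions.

```python
def apply_shaping_rule(session_idle_seconds, idle_threshold_s,
                       normal_bandwidth_kbps, reduced_bandwidth_kbps):
    """Hypothetical client-side shaping rule: once the remote session
    has been idle past a threshold, return an artificially reduced
    bandwidth cap for the communication channel. The session's built-in
    network-adaptive behaviour then lowers image quality on its own,
    shrinking the data exchanged with the remote server."""
    if session_idle_seconds >= idle_threshold_s:
        return reduced_bandwidth_kbps
    return normal_bandwidth_kbps
```

When activity resumes, the same rule restores the normal bandwidth setting and the session recovers full image quality automatically.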

Popularity-aware bitrate adaptation of linear programming for mobile communications

Embodiments provide popularity-based adaptive bitrate management of linear programming over constrained communications links. Embodiments can operate in the context of a communications network communicating with multiple mobile client devices disposed in one or more transport craft. A number of channel offerings, including channels providing linear programming, can be made available via the communications network for consumption by the client devices. Embodiments can compute channel popularity scores for the channel offerings based on a predicted popularity, an estimated popularity, a measured popularity, etc. A bitrate can be determined for each (some or all) of the channel offerings based at least in part on its channel popularity score, so that more popular channel offerings can be communicated at higher bitrates. Determined-bitrate instances of the channel offerings can be obtained and/or generated, and delivered via the communications network, to the client devices for consumption.
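One natural way to realize "more popular offerings get higher bitrates" on a constrained link is a proportional split of link capacity by popularity score. The abstract does not specify the allocation scheme, so the following is an assumed sketch.

```python
def allocate_bitrates(popularity, capacity_kbps, min_kbps=0):
    """Hypothetical allocator: divide the constrained link's capacity
    across channel offerings in proportion to their popularity scores,
    with an optional per-channel floor. Scores may come from predicted,
    estimated, or measured popularity, as in the abstract."""
    total = sum(popularity.values())
    return {channel: max(min_kbps, capacity_kbps * score / total)
            for channel, score in popularity.items()}
```

A transcoder or selector would then obtain an instance of each channel at (or near) its allocated bitrate for delivery to the transport craft.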

Reducing power consumption in an electronic device

Ingress packet processors in a device receive network packets from ingress ports. A crossbar in the device receives, from the ingress packet processors, packet data of the packets and transmits information about the packet data to a plurality of traffic managers in the device. Each traffic manager computes a total amount of packet data to be written to buffers across the plurality of traffic managers, where each traffic manager manages one or more buffers that store packet data. Each traffic manager compares the total amount of packet data to one or more threshold values. Upon determining that the total amount of packet data is equal to or greater than a threshold value, each traffic manager drops a portion of the packet data, and writes a remaining portion of the packet data to the buffers managed by the traffic manager.
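The compare-against-thresholds-and-drop step each traffic manager performs can be sketched with a tiered admission check. The ascending threshold list and the idea of dropping a growing fraction as buffers fill are assumptions layered on the abstract, which only states that packet data is partially dropped once a threshold is met.

```python
def admit_packet_data(total_buffered_bytes, incoming_bytes, thresholds):
    """Illustrative per-traffic-manager admission check: compare the
    total packet data buffered across all traffic managers against an
    ascending list of (level_bytes, drop_fraction) thresholds, drop the
    matching fraction of the incoming packet data, and write the rest.

    Returns (bytes_written, bytes_dropped)."""
    drop_fraction = 0.0
    for level, fraction in thresholds:
        if total_buffered_bytes >= level:
            drop_fraction = fraction
    dropped = int(incoming_bytes * drop_fraction)
    return incoming_bytes - dropped, dropped
```

Because every traffic manager computes the same device-wide total, they converge on consistent drop decisions without a central arbiter.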

Traffic Shaping and End-to-End Prioritization
20210328914 · 2021-10-21 ·

A method is disclosed, comprising: receiving a first and a second Internet Protocol (IP) packet at a mesh network node; tagging the first and the second IP packet at the mesh network node based on a type of traffic by adding an IP options header to each of the first and the second IP packet; forwarding the first and the second IP packet toward a mesh gateway node; filtering the first and the second IP packet at the mesh gateway node based on the added IP options header by assigning each of the first and the second IP packet to one of a plurality of message queues, each of the plurality of message queues having a limited forwarding throughput; and forwarding the first and the second IP packet from the mesh gateway node toward a mobile operator core network, thereby providing packet flow filtering based on IP header and traffic type.
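The tag-at-the-node, filter-at-the-gateway pipeline can be sketched as below. A dict stands in for the IP options header, the tag values are invented for illustration, and queue depth models each queue's limited forwarding throughput.

```python
from collections import defaultdict, deque

# Illustrative traffic-type tags; the patent adds these via an IP options header.
TRAFFIC_TAGS = {"voice": 1, "signalling": 2, "bulk": 3}


def tag_packet(packet, traffic_type):
    """Mesh network node: record the traffic type as an options-header
    tag before forwarding toward the mesh gateway node."""
    packet["ip_options_tag"] = TRAFFIC_TAGS[traffic_type]
    return packet


class GatewayFilter:
    """Mesh gateway node: assign each packet to a per-tag message queue.
    Each queue's limited forwarding throughput is modelled here by a
    maximum depth; a full queue rejects (polices) the packet."""

    def __init__(self, max_depth):
        self.queues = defaultdict(deque)
        self.max_depth = max_depth

    def enqueue(self, packet):
        queue = self.queues[packet["ip_options_tag"]]
        if len(queue) >= self.max_depth:
            return False   # queue full: packet not admitted
        queue.append(packet)
        return True
```

Packets drained from these queues would then be forwarded toward the mobile operator core network at each queue's permitted rate.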

ESTIMATION METHOD, ESTIMATION DEVICE, AND ESTIMATION PROGRAM

The controller (10) acquires information about the band of the flow within the tunnel and the band of each flow after policing or shaping, calculates the ratio of the traffic volume after policing or shaping to the traffic volume before policing or shaping by using the acquired information about the band, and estimates the traffic volume of the flow to be monitored within the tunnel by using the calculated ratio and the band of each flow after policing or shaping.
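The estimation step reduces to simple arithmetic: compute the fraction of traffic that survives policing or shaping from flows whose before/after bands are both known, then divide the monitored flow's post-shaping band by that fraction. A minimal sketch, with illustrative parameter names:

```python
def estimate_tunnel_flow(band_before, band_after, monitored_band_after):
    """Estimate the pre-shaping traffic volume of a monitored flow in
    the tunnel. The ratio of post- to pre-shaping volume is learned from
    flows with known bands and applied in reverse to the monitored
    flow's observed post-shaping band."""
    ratio = band_after / band_before          # fraction surviving shaping
    return monitored_band_after / ratio       # inferred pre-shaping volume
```

For example, if shaping passes half the reference traffic and the monitored flow shows 30 Mbps after shaping, its in-tunnel volume is estimated at 60 Mbps.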

SYSTEM AND METHOD FOR LINK BANDWIDTH MANAGEMENT
20210328897 · 2021-10-21 ·

A method for link bandwidth management in a computer network, the method including: monitoring link traffic flow for a predetermined amount of time; measuring throughput of the link traffic flow; estimating the bandwidth based on the throughput; and calibrating at least one shaper based on the estimated bandwidth. A system for link bandwidth management in a computer network, the system including: a learning module configured to monitor link traffic flow for a predetermined amount of time; an analysis module configured to measure throughput of the link traffic flow and estimate the bandwidth based on the throughput; and a calibration module configured to calibrate at least one shaper based on the estimated bandwidth.
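The measure-estimate-calibrate loop can be illustrated with two small functions. The percentile estimator and the headroom factor are assumptions for the sketch; the abstract only says bandwidth is estimated from measured throughput and the shaper calibrated from the estimate.

```python
def estimate_bandwidth(throughput_samples_bps, percentile=0.95):
    """Analysis-module sketch: after the learning window, take a high
    percentile of the measured throughput samples as the link bandwidth
    estimate (estimator choice is an assumption)."""
    ordered = sorted(throughput_samples_bps)
    idx = min(len(ordered) - 1, int(percentile * len(ordered)))
    return ordered[idx]


def calibrate_shaper(estimated_bps, headroom=0.9):
    """Calibration-module sketch: set the shaper rate slightly below the
    estimate so the shaper, not the bottleneck link, becomes the queuing
    point (headroom factor is illustrative)."""
    return estimated_bps * headroom
```

Re-running the loop periodically lets the shaper track a link whose real capacity drifts over time.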

FLOW CONTROL OF TWO TCP STREAMS BETWEEN THREE NETWORK NODES
20210328938 · 2021-10-21 ·

A system for forwarding packets between a first endpoint and a second endpoint, comprising one or more processors; a first network interface for communication with the first endpoint and a second network interface for communication with the second endpoint; and non-transitory memory comprising instructions. The instructions cause the one or more processors to receive a first packet from the first endpoint comprising a first data payload; generate a second packet, comprising the first data payload and an indicator of remaining buffer capacity different from an actual buffer capacity of the system; transmit the second packet to the second endpoint; receive a third packet from the second endpoint comprising a second data payload; generate a fourth packet, comprising the second data payload and an indicator of remaining buffer capacity different from an actual buffer capacity of the system; and transmit the fourth packet to the first endpoint.
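The key trick, advertising a receive window different from the buffer actually free, lets the middle node pace each TCP sender independently. The sketch below sizes the advertised window from a target rate and round-trip time (the bandwidth-delay product); that sizing rule and the helper names are assumptions, not taken from the patent.

```python
def advertise_window(actual_free_bytes, target_rate_bps, rtt_s):
    """Proxy flow-control sketch: advertise a window sized for the
    desired sending rate (rate * RTT in bytes) instead of the buffer
    capacity actually remaining, steering the endpoint's pace. Capped
    at the classic 16-bit TCP window limit (no window scaling)."""
    desired = int(target_rate_bps / 8 * rtt_s)   # bandwidth-delay product
    return min(desired, 65535)


def build_forwarded_packet(payload, actual_free_bytes, target_rate_bps, rtt_s):
    """Generate the forwarded packet: same data payload, but with the
    engineered remaining-buffer indicator (a dict stands in for the TCP
    header fields here)."""
    return {"payload": payload,
            "window": advertise_window(actual_free_bytes, target_rate_bps, rtt_s)}
```

Applying the same construction in both directions gives the system independent flow control over the two TCP streams it stitches together.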