Patent classifications
H04L12/819
LOAD BALANCING AMONG OUTPUT PORTS
Examples described herein relate to a network interface device that includes packet processing circuitry to detect usage of an egress port and report that usage to a network interface device driver. The report causes reallocation of hash-based packet buckets so that the buckets are allocated across multiple active egress ports of the network interface device, with existing bucket-to-egress port mappings retained except where one or more buckets are re-allocated to one or more active egress ports. In some examples, usage of the egress port is based on a count of hash buckets assigned to packets to be transmitted from the egress port, or on a number of bytes of packets enqueued for transmission from the egress port.
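The rebalancing described above can be illustrated with a short sketch. The function names, the one-bucket load unit, and the "move from busiest to least busy" policy are assumptions for illustration; the abstract only requires that mappings be retained except for the re-allocated buckets.

```python
def rebalance(bucket_to_port, port_load, active_ports):
    """Move buckets off the busiest active port onto the least busy one,
    leaving every other bucket-to-port mapping unchanged.
    (Illustrative policy, not the disclosed algorithm.)"""
    new_map = dict(bucket_to_port)
    load = dict(port_load)
    busiest = max(active_ports, key=lambda p: load[p])
    target = min(active_ports, key=lambda p: load[p])
    if busiest == target:
        return new_map
    # Re-allocate one bucket at a time while it narrows the load gap.
    for bucket, port in bucket_to_port.items():
        if port != busiest:
            continue
        if load[busiest] - load[target] <= 1:
            break
        new_map[bucket] = target
        load[busiest] -= 1
        load[target] += 1
    return new_map
```

Here each bucket contributes one unit of load; a byte-count-based usage metric would simply change the `port_load` values fed in.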
Asynchronous high throughput inbound messages with throttled outbound messages to safeguard enterprise backend systems
An enterprise backend system may have inherent limits on a throughput of inbound messages. In one implementation, a message producer publishes messages to a message broker at a high throughput. A message consumer receives messages from the broker and throttles the throughput of messages shipped to an enterprise backend system.
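The producer/broker/consumer decoupling above can be sketched in a few lines. The class and method names are hypothetical, and the broker and backend are stood in for by in-memory containers; the point is only that publishing is unthrottled while delivery to the backend is rate-limited per cycle.

```python
from collections import deque

class ThrottledConsumer:
    """Accepts messages at full speed but forwards at most `limit`
    messages per delivery cycle to the backend (illustrative sketch)."""
    def __init__(self, limit):
        self.limit = limit
        self.queue = deque()   # stands in for the message broker
        self.delivered = []    # stands in for the enterprise backend

    def publish(self, msg):
        # Producer side: high-throughput, never blocked by the backend.
        self.queue.append(msg)

    def deliver_cycle(self):
        # Consumer side: throttled to the backend's inherent limit.
        for _ in range(min(self.limit, len(self.queue))):
            self.delivered.append(self.queue.popleft())
```

A real deployment would tie `deliver_cycle` to a timer so the per-cycle limit becomes a messages-per-second throughput cap.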
PACKET TRANSFER APPARATUS, METHOD, AND PROGRAM
Provided is a packet transfer apparatus configured to perform packet exchange processing for exchanging multiple continuous packets with low delay while maintaining fairness between communication flows of the same priority level. A packet transfer apparatus 100 includes: a packet classification unit 120; queues 130 that hold the classified packets for each classification; and a dequeue processing unit 140 that extracts packets from the queues 130. The dequeue processing unit 140 includes a scheduling unit 141 that controls the amount of packets extracted from the queue 130 for a specific communication flow, based on information on the amount of data that the communication flow requests to transmit continuously in packets.
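One way to picture a scheduler whose per-flow extraction amount tracks the flow's requested continuous-transmission size is a deficit-round-robin-style round, sketched below. This is an assumed stand-in for scheduling unit 141, not the patented method; packets are represented by their sizes.

```python
from collections import deque

def dequeue_round(queues, requested):
    """One scheduling round: from each flow's queue, extract packets
    up to the amount of data that flow asked to send continuously.
    Flows of equal priority get budgets matching their requests,
    preserving fairness across rounds (illustrative sketch)."""
    sent = {}
    for flow, q in queues.items():
        budget = requested.get(flow, 0)
        out = []
        while q and budget >= q[0]:
            pkt = q.popleft()   # packet size at the head of the queue
            budget -= pkt
            out.append(pkt)
        sent[flow] = out
    return sent
```

A flow requesting a large continuous burst drains more per round, while other same-priority flows still receive their own budgets each round.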
Technologies for controlling jitter at network packet egress
Technologies for controlling jitter at network packet egress at a source computing device include determining a switch time delta as the difference between a present switch time and a previously captured switch time upon receipt of a network packet scheduled for transmission to a target computing device, and determining a host scheduler time delta as the difference between a host scheduler timestamp associated with the received network packet and a previously captured host scheduler timestamp. The source computing device is additionally configured to determine an amount of previously captured tokens present in a token bucket, determine whether a sufficient number of tokens is available in the token bucket to transmit the received packet as a function of the switch time delta, the host scheduler time delta, and the amount of previously captured tokens, and schedule the received network packet for transmission upon a determination that sufficient tokens are present in the token bucket.
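The admission check can be sketched as a token-bucket update driven by the two time deltas. How the deltas are combined is not specified by the abstract, so taking the smaller of the two is an assumption made here purely for illustration, as are the parameter names.

```python
def can_transmit(prev_tokens, switch_delta, host_delta, rate, burst, pkt_len):
    """Token-bucket admission: tokens accrued over the elapsed time are
    added to the previously captured tokens, capped at the burst size;
    the packet is scheduled only if enough tokens are available.
    (Illustrative formula, not the patented computation.)"""
    # Assumption: use the smaller delta as the conservative elapsed time.
    elapsed = min(switch_delta, host_delta)
    tokens = min(burst, prev_tokens + elapsed * rate)
    if tokens >= pkt_len:
        return True, tokens - pkt_len   # schedule and debit the bucket
    return False, tokens                # hold the packet, keep tokens
```

Because tokens accrue at a fixed rate over measured time rather than per arrival, back-to-back bursts are smoothed out, which is what bounds egress jitter.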
MACHINE LEARNING BASED END TO END SYSTEM FOR TCP OPTIMIZATION
Bypass network traffic records are generated for a web application. Sufficient statistics of network optimization parameters are calculated for network performance categories. The bypass network traffic records are partitioned for the network performance categories into network traffic buckets. Sufficient statistics and the network traffic buckets are used to generate network quality mappings. The network quality mappings are used as training instances to train a machine learner for generating network optimization policies to be implemented by user devices.
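The bucketing and sufficient-statistics steps above amount to a single pass over the traffic records. The sketch below keeps the classic (count, sum, sum of squares) triple per bucket, from which mean and variance of an optimization parameter can be recovered; the field accessors are assumptions, since the abstract does not name them.

```python
def bucket_stats(records, key, value):
    """Partition traffic records into buckets by a network-performance
    category and keep sufficient statistics (count, sum, sum of squares)
    of an optimization parameter per bucket (illustrative sketch)."""
    stats = {}
    for r in records:
        k, v = key(r), value(r)
        n, s, ss = stats.get(k, (0, 0.0, 0.0))
        stats[k] = (n + 1, s + v, ss + v * v)
    return stats
```

Sufficient statistics let the quality mappings be built without retaining the raw records, which matters at bypass-traffic volumes.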
POSITION PARAMETERIZED RECURSIVE NETWORK ARCHITECTURE WITH TOPOLOGICAL ADDRESSING
A digital data communications network that supports efficient, scalable routing of data and use of network resources by combining a recursive division of the network into hierarchical sub-networks with repeating parameterized general purpose link communication protocols and an addressing methodology that reflects the physical structure of the underlying network hardware. The sub-division of the network enhances security by reducing the amount of the network visible to an attack and by insulating the network hardware itself from attack. The fixed bandwidth range at each sub-network level allows quality of service to be assured and controlled. The routing of data is aided by a topological addressing scheme that allows data packets to be forwarded towards their destination based on only local knowledge of the network structure, with automatic support for mobility and multicasting. The repeating structures in the network greatly simplify network management and reduce the effort to engineer new network capabilities.
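The claim that packets can be forwarded "based on only local knowledge" follows from the addressing scheme mirroring the hierarchy. A minimal sketch, assuming addresses are tuples of sub-network indices: a node climbs toward the common ancestor of its own address and the destination, then descends one level at a time. This is an illustration of the topological-addressing idea, not the disclosed protocol.

```python
def next_hop(here, dest):
    """Forwarding with only local knowledge under hierarchical
    topological addresses: go up until at the common ancestor,
    then descend one level toward the destination."""
    # Length of the shared address prefix.
    common = 0
    while (common < len(here) and common < len(dest)
           and here[common] == dest[common]):
        common += 1
    if common < len(here):
        return here[:-1]          # not yet at the common ancestor: go up
    return dest[:common + 1]      # descend one level toward dest
```

Each hop needs only its own address and the destination address; no global routing table is consulted, which is what makes the scheme scale with recursive sub-division.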
CONTENT PROVIDER RECOMMENDATIONS TO IMPROVE TARGETING AND OTHER SETTINGS
At least one aspect of the present disclosure is directed to systems and methods of pruning retrieval tokens from sets of retrieval tokens based on criteria. The system can receive a plurality of retrieval tokens including a second retrieval token. The system can retrieve an indication of a first retrieval token and a plurality of predicted requests. The system can construct a first bit string based on the predicted requests and the first retrieval token. The system can retrieve a second bit string corresponding to the second retrieval token. The system can compare the first bit string to the second bit string to determine a similarity value. The system can determine that the similarity value is greater than a predetermined threshold. The system can remove the first and second retrieval tokens from the plurality to create a pruned set of retrieval tokens. The system can provide the pruned set to a content provider.
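The compare-and-prune step can be sketched directly. The abstract does not fix the similarity metric, so a Jaccard-style ratio over set bits is assumed here, and the function names are illustrative.

```python
def similarity(a, b):
    """Jaccard-style similarity of two equal-length bit strings:
    bits set in both, over bits set in either (assumed metric)."""
    both = sum(1 for x, y in zip(a, b) if x == y == '1')
    either = sum(1 for x, y in zip(a, b) if x == '1' or y == '1')
    return both / either if either else 0.0

def prune(tokens, bits, threshold):
    """Remove pairs of retrieval tokens whose bit strings are more
    similar than the threshold; the survivors form the pruned set."""
    kept = list(tokens)
    for i, t1 in enumerate(tokens):
        for t2 in tokens[i + 1:]:
            if (t1 in kept and t2 in kept
                    and similarity(bits[t1], bits[t2]) > threshold):
                kept.remove(t1)
                kept.remove(t2)
    return kept
```

Removing both tokens of a too-similar pair, rather than just one, matches the abstract's "remove the first and second retrieval tokens" step.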
Request Throttling in Distributed Storage Systems
The disclosed technology includes an example system with a request throttling manager configured to receive a first file data request, queue it in a first request queue, and process it based on a first token bucket, where the first token bucket includes a first quantity of first tokens sufficient to process the first file data request. The system further includes a storage manager configured to access one or more storage nodes of a plurality of storage nodes of a distributed storage system in response to the first file data request.
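The queue-then-process-against-a-bucket flow can be sketched as follows. The class, method names, and per-request cost model are assumptions, not the disclosed API; refilling the bucket over time is omitted for brevity.

```python
from collections import deque

class RequestThrottler:
    """Queues file data requests and processes one only when the token
    bucket holds enough tokens for it (illustrative sketch)."""
    def __init__(self, tokens):
        self.tokens = tokens
        self.queue = deque()
        self.processed = []   # stands in for handing off to the storage manager

    def submit(self, request, cost):
        self.queue.append((request, cost))

    def drain(self):
        # Process in arrival order; stop at the first request that the
        # bucket cannot yet cover.
        while self.queue and self.queue[0][1] <= self.tokens:
            request, cost = self.queue.popleft()
            self.tokens -= cost
            self.processed.append(request)
```

In the described system, each processed request would then be passed to the storage manager, which fans out to the distributed storage nodes.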
Bandwidth sentinel
Minimum guaranteed wireless network bandwidth is provided to client network devices by monitoring the performance of network connections to identify client network devices experiencing network congestion. Congested network connections are then analyzed to determine the source of the network congestion. Depending upon the source of the network congestion, an embodiment of the invention may undertake steps either to improve the quality of the network connection or to mitigate its impact on other network connections. High quality network connections may be allocated additional bandwidth, airtime, or other resources to reduce the network congestion. Low quality network connections are not allocated additional resources; instead, their impact on other network connections is mitigated. Additionally, a low quality network connection may be transferred to another wireless networking device that may be able to provide a better quality network connection.
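The monitor-then-decide loop above can be summarized in a small policy sketch. The thresholds, field layout, and action names are all assumptions for illustration.

```python
def sentinel_actions(connections, min_bw, quality_threshold):
    """Flag connections below their guaranteed minimum bandwidth, then
    pick an action by link quality: high-quality links get more
    resources, low-quality links are limited and considered for
    handoff to another access point (illustrative sketch)."""
    actions = {}
    for name, (bandwidth, quality) in connections.items():
        if bandwidth >= min_bw:
            continue   # meeting its guarantee: no action needed
        if quality >= quality_threshold:
            actions[name] = "grant_more_airtime"
        else:
            actions[name] = "limit_and_consider_handoff"
    return actions
```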
METHODS FOR LOGICAL CHANNEL PRIORITIZATION AND TRAFFIC SHAPING IN WIRELESS SYSTEMS
A method performed by a WTRU may comprise associating a logical channel with a plurality of token buckets, including at least a long term token bucket and a short term token bucket. The method may further comprise transmitting logical channel data on the associated logical channel, in a TTI. The transmitted logical channel data of the TTI may be no larger than a value corresponding to a minimum of the long term token bucket and the short term token bucket. The long term token bucket may be initialized to a value which is greater than an initialized value of the short term token bucket. When the WTRU transmits logical channel data in a TTI, the WTRU may decrement the long term token bucket and the short term token bucket by a total size of one or more MAC SDUs served on the associated logical channel.
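The dual-bucket limit described above can be sketched compactly: the per-TTI grant is the minimum of the two buckets, and a transmission decrements both. Per the abstract, the long-term bucket is initialized larger than the short-term one; the class shape and initial values are otherwise illustrative.

```python
class DualTokenBucket:
    """Long-term and short-term token buckets for one logical channel:
    a TTI may carry no more data than the smaller bucket allows, and
    each transmission decrements both buckets by the total size of the
    MAC SDUs served (sketch of the described method)."""
    def __init__(self, long_term, short_term):
        assert long_term > short_term   # per the described initialization
        self.long_term = long_term
        self.short_term = short_term

    def max_grant(self):
        # Data in one TTI is bounded by the minimum of the two buckets.
        return min(self.long_term, self.short_term)

    def transmit(self, size):
        if size > self.max_grant():
            raise ValueError("exceeds per-TTI limit")
        self.long_term -= size
        self.short_term -= size
```

The short-term bucket caps instantaneous bursts within a TTI, while the long-term bucket enforces the channel's average rate across many TTIs.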