Patent classifications
H04L47/56
Implementing a queuing system in a distributed network
A web application has a limit on the total number of concurrent users. As requests arrive from users' client devices, a determination is made whether the application can accept them. When the threshold number of users is exceeded, new users are prevented from accessing the web application and are placed in a queuing system. A webpage may be sent to queued users indicating queue status and their estimated wait time. A cookie may be sent to the client device to track the user's position in the application queue. Each user is assigned to a user bucket associated with the time interval of the user's initial request. When user slots become available, queued users are admitted to the web application bucket by bucket, starting from the oldest user bucket.
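The bucketed waiting-room described above can be sketched as follows. This is a minimal, hypothetical illustration: the class and parameter names (`capacity`, `bucket_seconds`) are assumptions, not the patent's API, and cookie handling and the status webpage are omitted.

```python
import time
from collections import OrderedDict, deque

class QueueSystem:
    """Sketch: admit up to `capacity` users; queue overflow by time bucket."""

    def __init__(self, capacity, bucket_seconds=60):
        self.capacity = capacity          # max concurrent users
        self.active = set()               # users currently in the application
        self.buckets = OrderedDict()      # interval start -> deque of queued users
        self.bucket_seconds = bucket_seconds

    def request_access(self, user_id, now=None):
        """Admit immediately if under the limit, else queue by time interval."""
        now = time.time() if now is None else now
        if len(self.active) < self.capacity:
            self.active.add(user_id)
            return "admitted"
        # bucket key = start of the interval containing the initial request
        interval = int(now // self.bucket_seconds) * self.bucket_seconds
        self.buckets.setdefault(interval, deque()).append(user_id)
        return "queued"

    def release(self, user_id):
        """Free a slot, then admit users starting from the oldest bucket."""
        self.active.discard(user_id)
        while self.buckets and len(self.active) < self.capacity:
            oldest = next(iter(self.buckets))   # OrderedDict keeps insertion order
            bucket = self.buckets[oldest]
            self.active.add(bucket.popleft())
            if not bucket:
                del self.buckets[oldest]
```

Because `OrderedDict` preserves insertion order and requests arrive in time order, iterating from the first key naturally drains the oldest bucket first.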
COMMUNICATION EQUIPMENT, COMMUNICATION METHODS AND PROGRAMS
An object is to provide a communication apparatus, a communication method, and a program capable of avoiding both an increase in network load when input traffic remains large and a communication delay when input traffic is very small. A communication apparatus according to the present invention maintains three token buckets and can transfer, discard, or hold a packet according to the amount of tokens in each token bucket. This enables the communication apparatus, when shaping for a delay guarantee, to avoid exceeding a set maximum bandwidth even when large traffic is received. Further, when the maximum bandwidth is exceeded, the communication apparatus can select whether to discard a packet to prioritize the delay guarantee or to hold the packet to prioritize avoiding packet loss. Furthermore, the communication apparatus can transmit a packet immediately, without added communication delay, when input traffic is very small.
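A rough sketch of the transfer/discard/hold decision follows. The abstract does not specify the roles of the three buckets, so the split into a maximum-bandwidth bucket, a committed-rate bucket, and a backlog-metering bucket is an assumption, as is the `mode` flag selecting discard-for-delay versus hold-for-no-loss.

```python
class TokenBucket:
    """Tokens accrue at `rate` per second up to `burst`; `take` spends them."""
    def __init__(self, rate, burst):
        self.rate, self.burst = rate, burst
        self.tokens = burst
        self.last = 0.0

    def take(self, n, now):
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= n:
            self.tokens -= n
            return True
        return False

class Shaper:
    """Hypothetical three-bucket shaper: transfer under the cap, else
    discard (delay priority) or hold (loss priority) per `mode`."""
    def __init__(self, max_bw, committed_bw, backlog_bw, mode="hold"):
        self.max_bucket = TokenBucket(*max_bw)            # caps maximum bandwidth
        self.committed_bucket = TokenBucket(*committed_bw) # tracks guaranteed rate
        self.backlog_bucket = TokenBucket(*backlog_bw)     # meters held backlog
        self.mode = mode
        self.queue = []

    def handle(self, packet_len, now):
        if self.max_bucket.take(packet_len, now):
            self.committed_bucket.take(packet_len, now)  # account committed usage
            return "transfer"            # under the cap: send immediately, no delay
        if self.mode == "discard":
            return "discard"             # prioritize the delay guarantee
        if self.backlog_bucket.take(packet_len, now):
            self.queue.append(packet_len)
            return "hold"                # prioritize no packet loss
        return "discard"                 # backlog budget exhausted too
```

Because the bucket starts full, a packet arriving after an idle period is transferred immediately, matching the no-added-delay behavior for small traffic.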
Data flow classification device
A data flow classification device includes a forwarding circuit and a configuring circuit. The forwarding circuit looks up the classification of an input flow in a lookup table according to the information of the input flow, tags the packets of the input flow with the classification, and outputs the packets to a buffer circuit; if the classification is not found in the lookup table, the forwarding circuit tags the packets with a predetermined classification, outputs the packets to the buffer circuit, and adds the information of the input flow to the lookup table. The configuring circuit determines a flow threshold according to a queue length of the buffer circuit and a target length, learns the traffic of multiple flows from the lookup table, determines the classifications of the multiple flows by comparing their traffic against the flow threshold, and stores these classifications in the lookup table.
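The two circuits can be sketched in software as below. Assumptions: flows are keyed by a hashable `flow_id`, "elephant"/"mice" stand in for the two learned classifications, and scaling the threshold by queue pressure is one plausible reading of "according to a queue length and a target length".

```python
from collections import deque

class FlowClassifier:
    """Sketch of the forwarding and configuring circuits as software."""
    DEFAULT_CLASS = "mice"   # the predetermined classification for unknown flows

    def __init__(self, target_len):
        self.lookup = {}            # lookup table: flow_id -> classification
        self.traffic = {}           # learned traffic: flow_id -> bytes observed
        self.buffer = deque()       # stands in for the buffer circuit
        self.target_len = target_len

    def forward(self, flow_id, packet_len):
        """Forwarding circuit: tag, buffer, and learn new flows."""
        cls = self.lookup.get(flow_id)
        if cls is None:
            cls = self.DEFAULT_CLASS
            self.lookup[flow_id] = cls            # add the flow to the table
        self.traffic[flow_id] = self.traffic.get(flow_id, 0) + packet_len
        self.buffer.append((flow_id, cls, packet_len))
        return cls

    def reconfigure(self, base_threshold):
        """Configuring circuit: set the threshold and reclassify flows."""
        # Hypothetical policy: raise the threshold when the queue exceeds the
        # target length, so fewer flows are classified as elephants.
        scale = max(1.0, len(self.buffer) / self.target_len)
        threshold = base_threshold * scale
        for flow_id, bytes_seen in self.traffic.items():
            self.lookup[flow_id] = "elephant" if bytes_seen > threshold else "mice"
        return threshold
```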
AUTOMATICALLY ENSURING SYMMETRICAL LATENCY IN TELEPROTECTION SYSTEMS
According to one or more embodiments, a first router receives a latency measurement indicative of latency associated with traffic sent from the first router to a second router. The first router calculates an asymmetrical latency as a difference between the latency measurement and a latency associated with traffic sent from the second router to the first router. The first router determines, based on the asymmetrical latency, a symmetrical latency target. The first router sends, to the second router, an indication of the symmetrical latency target. The first router and the second router adjust their respective de-jitter buffers to achieve the symmetrical latency target between the first router and the second router.
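The arithmetic in the abstract can be illustrated as follows. This is a minimal sketch: the function names are hypothetical, the negotiation between the routers is omitted, and padding the faster direction up to the slower one (plus an optional margin) is an assumed policy for choosing the symmetrical latency target.

```python
def symmetrical_latency_target(fwd_latency_ms, rev_latency_ms, margin_ms=0.0):
    """Return (asymmetrical latency, common target) for the two directions.

    Asymmetry is the difference between the forward measurement and the
    reverse measurement; the target is assumed to be the slower direction
    plus a margin, since de-jitter buffers can only add delay.
    """
    asymmetry = fwd_latency_ms - rev_latency_ms
    target = max(fwd_latency_ms, rev_latency_ms) + margin_ms
    return asymmetry, target

def dejitter_adjustment(own_latency_ms, target_ms):
    """Extra buffering one router adds so its direction meets the target."""
    return max(0.0, target_ms - own_latency_ms)
```

With a 12 ms forward path and a 9 ms reverse path, the reverse direction is padded by 3 ms so both directions present the same end-to-end latency.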
SCHEDULING METHOD APPLIED IN INDUSTRIAL HETEROGENEOUS NETWORK IN WHICH TSN AND NON-TSN ARE INTERCONNECTED
A scheduling method applied in an industrial heterogeneous network in which a TSN and a non-TSN are interconnected is provided. The TSSDN controller classifies data flows according to their delay requirements and calculates the scheduling priorities of the data flows in the industrial heterogeneous network. The TSSDN controller adopts an improved CSPF algorithm to determine a shortest path in the heterogeneous network, and marks the scheduling priorities of data flows that are transmitted from a subnet of the heterogeneous network and arrive at the switch for the first time. Flow table matching is performed at the SDN switch. If flow table matching succeeds, the counter is updated and the instruction included in the flow table entry is executed. If flow table matching fails, a PacketIn message is transmitted to the TSSDN controller, which performs analysis and makes a decision.
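The switch-side matching step can be sketched as below. Assumptions: the flow table is keyed by a match tuple, each entry carries a counter and an action, and `packet_in` stands in for the PacketIn message and the controller's decision; the CSPF path computation is not modeled.

```python
class SdnSwitch:
    """Sketch: match against the flow table, else hand off to the controller."""
    def __init__(self, controller):
        self.flow_table = {}    # match_key -> {"counter": int, "action": str}
        self.controller = controller

    def handle_packet(self, match_key, packet):
        entry = self.flow_table.get(match_key)
        if entry is not None:
            entry["counter"] += 1      # match succeeded: update the counter...
            return entry["action"]     # ...and execute the entry's instruction
        # match failed: send a PacketIn to the controller for a decision
        return self.controller.packet_in(match_key, packet)

class TssdnController:
    """Toy controller: assigns a priority and installs a flow entry."""
    def __init__(self, switch_table):
        self.switch_table = switch_table

    def packet_in(self, match_key, packet):
        # Hypothetical policy: delay-sensitive flows get the high priority.
        action = ("forward:priority-high" if packet.get("delay_sensitive")
                  else "forward:priority-low")
        self.switch_table[match_key] = {"counter": 1, "action": action}
        return action
```

After the first PacketIn installs an entry, subsequent packets of the same flow match locally and only bump the counter.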
APPARATUSES AND METHODS FOR SUPPORTING CLASS-BASED SCHEDULING IN A TIME-SENSITIVE NETWORKING (TSN) NETWORK
An apparatus connected to a Time-Sensitive Networking (TSN) switch in a TSN network is provided. The apparatus includes a transceiver, a storage medium, and a controller. The storage medium stores a first mapping of a traffic class to a time slot, and a second mapping of a frame type of a TSN stream to the traffic class. The controller is coupled to the transceiver and the storage medium, and is configured to determine a routing path and a Gate Control List (GCL) corresponding to the TSN stream based on a network topology of the TSN network, the first mapping, and the second mapping, and deploy the GCL to each TSN switch in the routing path via the transceiver.
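The two mappings can be composed into a per-slot gate list as sketched below. This is a simplification under stated assumptions: real IEEE 802.1Qbv Gate Control Lists carry per-queue gate masks and durations, and the routing-path computation is omitted; only the frame-type → traffic-class → time-slot composition from the abstract is shown.

```python
def build_gcl(class_to_slot, frame_type_to_class, stream_frame_types, cycle_slots):
    """Derive a per-slot gate state list for one TSN stream.

    class_to_slot       -- the first mapping: traffic class -> time slot
    frame_type_to_class -- the second mapping: frame type -> traffic class
    stream_frame_types  -- frame types carried by the stream
    cycle_slots         -- number of time slots in one scheduling cycle
    """
    open_slots = set()
    for frame_type in stream_frame_types:
        traffic_class = frame_type_to_class[frame_type]  # second mapping
        open_slots.add(class_to_slot[traffic_class])     # first mapping
    return ["open" if slot in open_slots else "closed"
            for slot in range(cycle_slots)]
```

The same list would then be deployed to every TSN switch along the computed routing path.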
Network latency fairness in multi-user gaming platforms
Aspects of the present disclosure relate to wireless communications, and more particularly, to techniques that allow for enforcement of network latency fairness in multi-user gaming platforms. An example method generally includes identifying multiple user equipments (UEs) participating in a multi-user gaming platform across one or more wide area networks (WANs); and taking one or more actions to support latency fairness in delivery of information across the multiple users via the one or more WANs.
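One possible "action" to support latency fairness is to pad each player's delivery up to the slowest participant, sketched below. The abstract does not specify the mechanism, so this equalization policy and the function name are assumptions for illustration only.

```python
def fairness_delays(ue_latencies_ms):
    """Per-UE artificial delay (ms) so every player sees the same latency.

    ue_latencies_ms maps a UE identifier to its measured WAN latency;
    each UE is padded up to the worst latency among the participants.
    """
    worst = max(ue_latencies_ms.values())
    return {ue: worst - latency for ue, latency in ue_latencies_ms.items()}
```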