Probabilistic Packet Marking with Fast Adaptation Mechanisms
20200382427 · 2020-12-03
Inventors
- Szilveszter Nádas (Budapest, HU)
- Gergö Gombos (Budapest, HU)
- Sandor Laki (Budapest, HU)
- Zoltán Turányi (Szentendre, HU)
CPC classification
H04L47/31
ELECTRICITY
H04L47/2408
ELECTRICITY
International classification
Abstract
At an Edge Node, a method of handling data packets in order to mark the packets with respective packet values indicative of a level of importance. The method comprises implementing a variable rate token bucket to determine an estimated arrival rate of a flow of packets. The method comprises receiving a data packet, updating the estimated arrival rate to an updated arrival rate based on a token level of the token bucket and generating a random or pseudo-random number within a range with a limit determined by the updated arrival rate. The method further comprises identifying an operator policy which determines a level of service associated with the flow of packets, and a Throughput Value Function (TVF) associated with said policy, and then applying the TVF to the random number to calculate a packet value. The packet value is included in a header of the packet.
Claims
1. At an Edge Node responsible for forwarding received packets onto a packet switched network, a method of handling the received data packets in order to mark the packets with respective packet values indicative of a level of importance to be used when further handling the packets within the packet switched network, the method comprising: implementing a variable rate token bucket to determine an estimated arrival rate of a flow of packets, wherein the token bucket has a bucket threshold and a filling rate that can be changed; receiving a data packet of said flow; updating the estimated arrival rate to an updated arrival rate based on a token level of the token bucket; generating a random or pseudo-random number within a range with a limit determined by the updated arrival rate; identifying an operator policy which determines a level of service associated with the flow of packets, and a Throughput Value Function (TVF) associated with said policy; applying the TVF to the random number to calculate a packet value indicative of the level of importance of the packet; and including the packet value in a header of the packet and forwarding the packet onto the packet switched network.
2. A method according to claim 1, wherein the step of updating the estimated arrival rate comprises determining if the estimated arrival rate is stable, overestimated or underestimated.
3. A method according to claim 2, wherein the estimated arrival rate is determined to be: stable if the token level is between zero and the bucket threshold; overestimated if the token level is greater than the bucket threshold; underestimated if the token level is less than zero.
4. A method according to claim 2, wherein, when the estimated arrival rate is underestimated, the updated arrival rate is set to a rate that is greater than the estimated arrival rate, and the range within which the random number is generated is set between the estimated and updated arrival rates.
5. A method according to claim 4, wherein the updated arrival rate is calculated by adding to the estimated arrival rate a value determined from the token level.
6. A method according to claim 5, wherein said value determined from the token level is calculated by taking the absolute value of the token level, summing it with a byte constant, and dividing the sum by a time constant.
7. A method according to claim 4, and comprising setting the token level to a constant greater than zero after updating the arrival rate.
8. A method according to claim 2, wherein, when the estimated arrival rate is overestimated, the updated arrival rate is set to a rate that is lower than the estimated arrival rate, and the range within which the random number is generated is set between zero and the updated arrival rate.
9. A method according to claim 8, wherein the updated arrival rate is calculated by subtracting from the estimated arrival rate a value determined by the token level.
10. A method according to claim 9, wherein said value determined from the token level is calculated by subtracting the bucket threshold from the token level, and dividing the difference by a time constant.
11. A method according to claim 10, wherein, if the updated arrival rate is less than the size of the packet divided by the time constant, the updated arrival rate is set to equal the size of the packet divided by the time constant.
12. A method according to claim 11 and comprising setting the token level to zero after updating the arrival rate.
13. A method according to claim 2, wherein, when the estimated arrival rate is stable, the updated arrival rate is set to a rate that is equal to the estimated arrival rate, and the range within which the random number is generated is set between zero and the updated arrival rate.
14. A method according to claim 13, wherein the step of determining the token level comprises: calculating the token level from at least a previous token level, the estimated arrival rate, a size of the packet, and a time period between the arrival of said packet at the Edge Node and the arrival of an immediately preceding packet of said flow of packets at said Edge Node.
15. A method according to claim 14, wherein the token level is given by the previous token level, plus the estimated arrival rate multiplied by said time period, minus the size of the packet.
16. A method of managing packet traffic within a packet network, comprising, at Edge Nodes of the network, handling packets according to claim 1, and receiving the packets at intermediate nodes of the network, the intermediate nodes being configured to maximise the sum of the packet values of the packets being forwarded.
17. An Edge Node for use in a packet switched network in order to forward packets onto the network and being configured to mark the packets with respective packet values indicative of a level of importance to be used when further handling the packets within the packet switched network, the Edge Node comprising: a receiver for receiving data packets; a processor or processors operative to, for each packet: implement a variable rate token bucket to determine an estimated arrival rate of a flow of packets comprising the data packet, wherein the token bucket has a bucket threshold and a filling rate that can be changed; update the estimated arrival rate to an updated arrival rate based on a token level of the token bucket; generate a random or pseudo-random number within a range with a limit determined by the updated arrival rate; identify an operator policy which determines a level of service associated with the flow of packets, and a Throughput Value Function (TVF) associated with said policy; apply the TVF to the random number to calculate the packet value indicative of the importance of the packet; and include the packet value in a header of the packet; and a sender for sending the marked packets onto the packet switched network.
18. An Edge Node according to claim 17, wherein the processor or processors are operative to determine if the estimated arrival rate is stable, overestimated or underestimated.
19. An Edge Node according to claim 18, wherein the processor or processors are operative to determine that the estimated arrival rate is: stable if the token level is between zero and the bucket threshold; overestimated if the token level is greater than the bucket threshold; and underestimated if the token level is less than zero.
20. An Edge Node according to claim 18, wherein, when the estimated arrival rate is underestimated, the processor or processors are operative to set the updated arrival rate to a rate that is greater than the estimated arrival rate, and to generate the random number within the range between the estimated and updated arrival rates.
21. An Edge Node according to claim 20, wherein the processor or processors are operative to calculate the updated arrival rate by adding to the estimated arrival rate a value determined from the token level.
22. An Edge Node according to claim 21, wherein the processor or processors are operative to calculate said value determined from the token level by taking the absolute value of the token level, summing it with a byte constant, and dividing the sum by a time constant.
23. An Edge Node according to claim 22, wherein the processor or processors are further operative to set the token level to a constant greater than zero after updating the arrival rate.
24. An Edge Node according to claim 17, wherein the processor or processors are operative to: calculate the token level from at least a previous token level, the estimated arrival rate, a size of the packet, and a time period between the arrival of said packet at the Edge Node and the arrival of an immediately preceding packet of said flow of packets at said Edge Node.
25. An Edge Node according to claim 17, wherein said Edge Node is divided over multiple physical or virtual nodes of the packet switched network.
26. (canceled)
27. A computer readable storage medium for storing computer executable instructions, which when executed cause the computer to perform the method of claim 1.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION
[0038] Embodiments described herein provide probabilistic packet marking methods that emulate the hierarchical token bucket (HTB) behaviour described in WO2014/189422. The methods require only three state variables, and their computational complexity is constant, O(1). Instead of maintaining and using several token buckets, packet values are assigned by a probabilistic decision. To achieve this, the method estimates the arrival rate of the flows to be marked, determines a random throughput value based on this estimate, and uses that throughput value to determine a Packet Value (PV).
[0039] A token bucket is a conceptual bucket that is filled with tokens (each representing a number of bytes) at some fill rate. The token bucket has a maximum capacity of tokens, referred to as the bucket threshold. To send/forward a packet of a given size, a number of tokens adding up to that size is removed from the bucket. If the rate and/or size of the packets in the incoming flows is low, the bucket will overfill with tokens; tokens spilling out of the top of the bucket are discarded. If, on the other hand, the rate of packets in the incoming flows is too high, the bucket will empty of tokens. If there are insufficient tokens in the bucket to forward a packet, the packet may be dropped or otherwise handled so as to decrease the throughput of the corresponding flows.
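As an illustration, this conventional fixed-rate bucket can be sketched as follows (class and parameter names are ours, and the drop-only policy is illustrative; the variable-rate bucket used by the marking method additionally changes the filling rate on the fly):

```python
class TokenBucket:
    """Classic fixed-rate token bucket as described above."""

    def __init__(self, fill_rate, threshold):
        self.fill_rate = fill_rate   # tokens (bytes) added per second
        self.threshold = threshold   # bucket threshold: maximum token capacity
        self.tokens = threshold      # start with a full bucket
        self.last = 0.0              # time of the last update (s)

    def allow(self, packet_size, now):
        """Refill for the elapsed time, then try to spend tokens for one packet."""
        # Tokens spilling over the threshold are discarded.
        self.tokens = min(self.threshold,
                          self.tokens + self.fill_rate * (now - self.last))
        self.last = now
        if self.tokens >= packet_size:
            self.tokens -= packet_size   # enough tokens: forward the packet
            return True
        return False                     # too few tokens: drop or deprioritise
```

For example, a bucket with a fill rate of 1000 bytes/s and a 1500-byte threshold admits a full-size packet immediately, then refuses further traffic until enough tokens have accrued.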
[0040] The proposed methods have low implementation complexity compared to existing packet value marking algorithms, and enable use of a continuous packet value range. The methods also emulate the HTB behaviour better than RFQ.
[0041] Embodiments provide an acceleration mechanism for probabilistic packet marking with faster adaptation to changes in network conditions. The method may remedy deviations in the packet value distribution caused by errors in arrival rate estimation. Simulation results have shown that the method yields a packet value distribution (an empirical throughput-value function (TVF)) which better approximates the expected TVF of the operator policy to be applied. As a result of the proposed acceleration mechanisms, the bandwidth share at the bottleneck node satisfies the predefined operator policies with smaller deviations from the expected share around state transitions (e.g. in the presence of significant traffic bursts).
[0042] The method requires only three state variables and its computational complexity is constant, making it practical to implement. The method relies on measuring the arrival rate (R) of flows containing packets to be marked. Then the TVF V(.) is applied to a throughput value chosen uniformly at random from a throughput range defined by the rules of the method, resulting in a series of PVs matching the TVF (operator policy).
[0043] In general, packet marking should satisfy two key requirements: 1) for stable traffic the empirical TVF should follow the configured TVF (operator policy); 2) the method should react quickly to changes in the arrival rates.
[0044] To imitate the hierarchical token bucket (HTB) behaviour with a single variable rate token bucket, the random throughput value is chosen from a range between the old estimate and the new (updated) estimate when the estimate is increased. In addition, when increasing the rate estimate, a non-zero token bucket (i.e. token level>0) can be created to avoid having consistently small Packet Values when ramping up.
[0046] A detailed description of the method is depicted in
[0051] First, the token bucket is updated so that both the actual arrival rate and the size of the incoming packet are taken into account:
tokens_U = tokens_U + R_U*t - p.size,  (1)

where tokens_U is the current token level, R_U is the current arrival rate estimate for flow U, t is the time elapsed since the last packet was received from flow U, and p.size is the size of the incoming packet. Based on the new token level, three different cases can be distinguished:

[0052] a) Stable period: If the new token level is between zero and the bucket threshold L_U, the current rate estimate R_U is considered stable and therefore there is no need for further adjustment. In this case, the new token level is accepted and the packet throughput is calculated as r = random(0, R_U) (chosen uniformly at random).

[0053] b) Rate overestimation: If tokens_U is above L_U, the rate estimate should be reduced. The new rate is calculated as R_U = R_U;old - (tokens_U - L_U)/D_U, where D_U is an averaging constant (e.g. D_U = 40 ms). If this new estimate is below p.size/D_U, the rate represented by this single packet in a D_U interval, the traffic has a long idle period. In this case the system is reset as if a packet arrived at a freshly initialized bucket: R_U = p.size/D_U and tokens_U = 0. The packet throughput is then calculated as r = random(0, R_U) (chosen uniformly at random).

[0054] c) Rate underestimation: If tokens_U is less than zero, the rate R_U is considered underestimated and is thus updated as R_U = R_U;old + (|tokens_U| + B_U)/D_U. To compensate for rate underestimation, the throughput is chosen as r = random(R_U;old, R_U) (uniformly at random). In this way the method emulates the HTB behaviour when all the buckets below R_U;old are depleted. To avoid increasing R_U and choosing from this new interval too often, the increase in the rate estimate is accelerated by the byte constant B_U (e.g. 1500 bytes). The token level is also reset to B_U after the rate increase, overwriting the negative tokens_U.
[0055] 5) Determine the packet value: The packet value can then be calculated by applying the operator policy of flow U, i.e. the TVF of policy group G(U): V = TVF_G(U)(r).

[0056] 6) Encode the packet value into the packet: The calculated packet value V should be added to the packet before sending. It can be stored in an unused field of an existing protocol header (e.g. ToS), or a new header carrying this value can be added to the packet.
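The per-packet update and the three cases above can be sketched in Python as follows. This is a sketch under stated assumptions: D_U = 40 ms and B_U = 1500 bytes follow the values suggested in the text, while the default bucket threshold L_U and the token level kept after a plain (non-reset) rate reduction are not specified in the text and are assumptions of this sketch.

```python
import random

class PPVMarker:
    """Sketch of the probabilistic marker: rate update plus V = TVF(r)."""

    def __init__(self, tvf, L=15000.0, D=0.040, B=1500.0):
        self.tvf = tvf        # operator policy as a TVF V(.)
        self.L, self.D, self.B = L, D, B
        self.R = None         # estimated arrival rate R_U (bytes/s)
        self.tokens = 0.0     # token level tokens_U (bytes)
        self.last = None      # arrival time of the previous packet (s)

    def mark(self, size, now):
        """Return the packet value for a packet of `size` bytes arriving at `now`."""
        if self.R is None:    # first packet: initialise as after a long idle period
            self.R, self.last = size / self.D, now
            return self.tvf(random.uniform(0.0, self.R))
        t, self.last = now - self.last, now
        # Equation (1): fill at rate R_U for time t, drain by the packet size.
        self.tokens += self.R * t - size
        if self.tokens < 0.0:                   # c) rate underestimation
            r_old = self.R
            self.R += (abs(self.tokens) + self.B) / self.D
            self.tokens = self.B                # reset the level to B_U
            r = random.uniform(r_old, self.R)   # draw from (R_old, R_new)
        elif self.tokens > self.L:              # b) rate overestimation
            self.R -= (self.tokens - self.L) / self.D
            if self.R < size / self.D:          # long idle period: reset
                self.R, self.tokens = size / self.D, 0.0
            else:
                self.tokens = self.L            # assumption, not in the text
            r = random.uniform(0.0, self.R)
        else:                                   # a) stable estimate
            r = random.uniform(0.0, self.R)
        return self.tvf(r)                      # step 5: V = TVF(r)
```

A marker would be instantiated with the operator's TVF, e.g. `PPVMarker(tvf=my_policy_tvf)`; a burst arriving faster than the current estimate drives the token level negative and triggers the accelerated rate increase of case c).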
[0057] Although the above embodiment is described by steps 1 to 6 as performed in a particular order, the skilled person will appreciate that in other embodiments some of the steps may be performed in a different order. For example, steps 2 and 3 of classifying the packet and determining the operator policy may be performed after updating the arrival rate and determining the throughput value (step 4).
[0058] An example of the proposed method when the arrival rate is stable is illustrated in
[0064] The PPV concept can be implemented in the cloud. In order to implement packet marking, a given flow has to traverse the same packet marker, but markers of different flows do not require coordination. The method does not require any knowledge about the flows at the bottleneck, and the bottleneck has the same complexity for a single flow as for millions of flows, which results in very good scaling of the bottleneck.
[0065] Embodiments are proposed to be implemented in a single processor/virtual machine in the cloud. Different packet markers can be distributed among processors or virtual machines as they do not require any interaction with other packet markers or with the bottleneck scheduler.
[0066] The embodiments described above relate to probabilistic packet marking with fast adaptation mechanisms, where incoming packets, without a previous packet value, are first marked at an Edge Node. However, it is sometimes necessary to remark a packet with a new packet value at the boundary between networks. The following description concerns a hierarchical bandwidth estimation scheme for per packet value (PPV) remarking.
[0067] In WO2016/007053 (incorporated herein by reference), the Per Packet Operator Value concept is extended to virtualized environments, where remarking of Packet Values at the boundaries of virtual and physical networks is required since the physical network operator can apply different policies to different virtual network operators. If a packet enters the physical network from virtual network VN-1, the remarking algorithm should take into account both the carried Packet Value and the policy defined for VN-1. However, this remarking algorithm is based on a sequential bandwidth estimation scheme which may be unstable. Since the reliability of remarking depends on accurate reconstruction of the observed throughput-packet value function (TVF), this instability results in deviations from the desired bandwidth share at bottleneck nodes in the physical operator's network.
[0068] It is proposed to solve this problem by using a hierarchical bandwidth estimation scheme to accurately reconstruct the observed throughput-value function (TVF) at the remarker node in the network. Since the Packet Value space can be large, it is split into K disjoint ranges. Instead of estimating the arrival rate for each value range independently, the ranges are organized in a hierarchy with K levels: the ith range is part of the (i+1)th range for every i. A rate estimation scheme is then applied at each level, where the rate estimation of level i takes into account packets with packet values from ranges 1 to i.
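This cumulative per-level estimation can be sketched as follows. The window-based rate measurement and the concrete bounds in the usage example are illustrative choices of this sketch; the text only requires that some rate estimator be applied per level.

```python
class HierarchicalRateEstimator:
    """Cumulative per-level rate estimation for K packet-value ranges.

    Level i tracks the arrival rate Th_i of every packet whose PV lies in
    ranges 1..i, i.e. all packets with PV above that level's lower bound.
    """

    def __init__(self, bounds, window=1.0):
        self.bounds = bounds                # lower PV bounds v_1 > v_2 > ... > v_K
        self.window = window                # measurement window (s)
        self.bytes = [0.0] * len(bounds)    # bytes counted per level this window
        self.rates = [0.0] * len(bounds)    # Th_i estimates (bytes/s)
        self.start = 0.0                    # start time of the current window

    def observe(self, pv, size, now):
        if now - self.start >= self.window:     # roll the measurement window
            span = now - self.start
            self.rates = [b / span for b in self.bytes]
            self.bytes = [0.0] * len(self.bounds)
            self.start = now
        for i, v_i in enumerate(self.bounds):
            if pv > v_i:                    # packet belongs to ranges 1..i+1
                self.bytes[i] += size
```

By construction the estimates are non-decreasing across levels (Th_1 ≤ ... ≤ Th_K), which is the point of the hierarchy: it avoids the instability of estimating each narrow range independently.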
[0069] The solution may allow the TVF of the arriving traffic to be estimated with higher accuracy. Hence, the traffic after remarking will better satisfy the policy of the physical network operator. The hierarchical bandwidth estimation may result in more stable rate estimates and a better bandwidth share at bottlenecks in the physical network, satisfying the policies of both physical and virtual operators.
[0070] Packet remarking is needed for virtual networking. The incoming traffic is already marked with a Packet Value v, and the physical operator has a policy for each virtual network VN defined by a TVF V_VN(.). This TVF can be used to calculate the new packet value to be used in the physical network PN. The main steps of the proposed method are depicted in
[0071] The re-marker node needs to perform the translation between the packet value space of the VN and that of the PN. To this end, the proposed remarking algorithm first determines the incoming TVF Ṽ(.) of the VN's packet value space as depicted on the left side of
[0073] 1) Packet value decoding: After a packet arrives at the remarker node, the carried packet value PV is extracted and decoded from the packet.
[0074] 2) Construction of the incoming TVF: Since the Packet Value space can be large, to reduce the number of states to be stored the proposed algorithm first splits the PV space of the VN into K disjoint ranges: (v_K, v_{K-1}], (v_{K-1}, v_{K-2}], . . . , (v_1, v_0], where v_i > v_{i+1}. The ranges can follow a logarithmic sizing, but they can also be of equal size, and other schemes are also possible (e.g. if the PN has prior knowledge of the expected PV distribution). For each v_i (1 ≤ i ≤ K), the arrival rate Th_i of packets with PVs in the range (v_i, v_0] is measured, creating an approximation of the incoming TVF illustrated by the gray boxes in the left side of
[0075] 3) Translation between packet value spaces: An incoming rate x is then chosen from the range (Th_{j-1}, Th_j] as an approximation of Ṽ^{-1}(v), where the packet value v of the incoming packet is in the range (v_j, v_{j-1}]. Note that the selection of x can be made, e.g., uniformly at random or according to any other scheme. The new packet value can then be calculated by applying the physical operator's policy for the given VN as v_new = V_VN(x).
[0076] 4) Packet value encoding: After the new packet value is determined, it is encoded into the packet. The old value may also be encapsulated into the outgoing packet.
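The translation of step 3 can be sketched as follows. The uniform selection of x is one of the schemes the text allows; the bounds, rate estimates, and identity TVF in the usage example are hypothetical placeholders (in practice the rates would come from the estimator of step 2 and the TVF from the PN's policy for the VN).

```python
import random

def remark(pv, bounds, th, tvf_vn):
    """Translate an incoming packet value into the PN's value space (step 3).

    bounds: lower PV bounds v_1 > ... > v_K of the K ranges;
    th:     cumulative rate estimates Th_1 <= ... <= Th_K for those levels;
    tvf_vn: the physical operator's TVF V_VN for this virtual network.
    """
    # j: 0-based index of the range (bounds[j], bounds[j-1]] containing pv;
    # a PV at or below the smallest bound is outside the modelled space.
    j = next(i for i, v_i in enumerate(bounds) if pv > v_i)
    lo = th[j - 1] if j > 0 else 0.0    # Th_{j-1}; zero for the top range
    x = random.uniform(lo, th[j])       # approximates the inverse TVF at pv
    return tvf_vn(x)                    # v_new = V_VN(x)
```

For example, with bounds [80, 50, 0] and rates [100, 300, 700], a packet carrying PV 60 falls in the second range, so x is drawn from (100, 300] before the PN's TVF is applied.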
[0077] In summary, the proposed method of remarking packets involves hierarchically estimating the throughput associated with PV bins, wherein the throughput estimate of a given bin also includes incoming packets with PV higher than the PV in that bin. For an incoming packet, a random throughput value is determined based on the throughput estimate belonging to the PV bin of that packet and on the throughput estimate belonging to the previous PV. The determined throughput value is used in the TVF describing the policy belonging to the VNO.