Technique for Message Flow Shaping
20180159780 · 2018-06-07
Inventors
CPC classification
H04L47/263
ELECTRICITY
H04L47/32
ELECTRICITY
International classification
Abstract
A message flow shaping approach for a network element capable of message routing is presented. The network element is configured to receive one or more logical ingress message flows and to output one or more logical egress message flows, wherein a flow priority level is allocated to each ingress and egress message flow. A method implementation of the technique presented herein comprises the step of determining a message flow congestion state per flow priority level at an egress side of the network element. The method further comprises the step of triggering a message flow shaping operation. The message flow shaping operation is triggered per flow priority level at an ingress side of the network element dependent on the congestion state determined for at least one associated flow priority level at the egress side.
Claims
1-26. (canceled)
27. A network element capable of message routing, the network element being configured to receive one or more logical ingress message flows and to output one or more logical egress message flows, wherein a flow priority level is allocated to each ingress and egress message flow, the network element comprising: processing circuitry; memory containing instructions executable by the processing circuitry whereby the processing circuitry is operative to: determine a message flow congestion state per flow priority level at an egress side of the network element; and trigger a message flow shaping operation per flow priority level at an ingress side of the network element dependent on the congestion state determined for at least one associated flow priority level at the egress side.
28. The network element of claim 27: wherein the network element is configured to output multiple egress message flows; and wherein the instructions are such that the processing circuitry is operative to determine the congestion state for a given flow priority level across the egress message flows allocated to that flow priority level.
29. The network element of claim 27: wherein the network element is configured to receive multiple ingress message flows; and wherein the instructions are such that the processing circuitry is operative to trigger the message flow shaping operation for a given flow priority level across the ingress message flows allocated to that flow priority level.
30. The network element of claim 27, wherein the instructions are such that the processing circuitry is operative to: group ingress messages by one or more ingress flow definition schemes to the one or more logical ingress message flows; and group egress messages by one or more egress flow definition schemes to the one or more logical egress message flows.
31. The network element of claim 30, wherein the one or more ingress flow definition schemes are different from the one or more egress flow definition schemes.
32. The network element of claim 27, wherein the instructions are such that the processing circuitry is operative to apply at least one prioritization scheme to the ingress message flows and egress message flows to allocate the flow priority levels.
33. The network element of claim 32: wherein the message flows are associated with services that have different service priority levels; and wherein the instructions are such that the processing circuitry is operative to allocate message flows that are associated with services having the same service priority level to the same flow priority level.
34. The network element of claim 27, wherein the instructions are such that the processing circuitry is operative to trigger a message flow shaping operation at the egress side per flow priority level.
35. The network element of claim 34, wherein the instructions are such that the processing circuitry is operative to determine the congestion state for a given flow priority level based on a state of the egress side message flow shaping operation for that flow priority level.
36. The network element of claim 34, wherein the egress side message flow shaping operation is configured to operate on at least one message rate limit per flow priority level.
37. The network element of claim 36, wherein the egress side message flow shaping operation is configured to observe the at least one message rate limit for a given flow priority level by preventing an output of individual messages that belong to an egress message flow to which that flow priority level is allocated.
38. The network element of claim 37, wherein the instructions are such that the processing circuitry is operative to determine the congestion state for a given flow priority level based on a ratio between messages that have been output and messages that have been prevented from being output at the egress side.
39. The network element of claim 27, wherein the ingress side message flow shaping operation is configured to drop or reject individual messages at the ingress side.
40. The network element of claim 39, wherein the instructions are such that the processing circuitry is operative to trigger the ingress side message flow shaping operation such that a dropping or rejection ratio for a given flow priority level is dependent on the congestion state determined for the at least one associated flow priority level at the egress side.
41. The network element of claim 27: wherein the network element is configured to receive multiple ingress message flows via multiple links; and wherein the instructions are such that the processing circuitry is operative to trigger the ingress side message flow shaping operation per link.
42. The network element of claim 27: wherein the network element is configured to output multiple egress message flows via multiple links; and wherein the instructions are such that the processing circuitry is operative to determine the congestion state per link.
43. The network element of claim 41, wherein the instructions are such that the processing circuitry is operative to trigger the ingress side message flow shaping operation per ingress side link dependent on the congestion state determined for at least one associated egress side link.
44. A message routing system, the system comprising: a first network element capable of message routing, the first network element being configured to receive one or more logical ingress message flows and to output one or more logical egress message flows, wherein a flow priority level is allocated to each ingress and egress message flow, the first network element comprising: processing circuitry; memory containing instructions executable by the processing circuitry whereby the processing circuitry is operative to: determine a message flow congestion state per flow priority level at an egress side of the network element; and trigger a message flow shaping operation per flow priority level at an ingress side of the network element dependent on the congestion state determined for at least one associated flow priority level at the egress side; at least one second network element coupled to the first network element via an ingress side link; and at least one third network element coupled to the first network element via an egress side link.
45. A method of controlling a network element capable of message routing, the network element being configured to receive one or more logical ingress message flows and to output one or more logical egress message flows, wherein a flow priority level is allocated to each ingress and egress message flow, the method comprising: determining a message flow congestion state per flow priority level at an egress side of the network element; and triggering a message flow shaping operation per flow priority level at an ingress side of the network element dependent on the congestion state determined for at least one associated flow priority level at the egress side.
46. A non-transitory computer readable recording medium storing a computer program product for controlling a network element capable of message routing, the network element being configured to receive one or more logical ingress message flows and to output one or more logical egress message flows, wherein a flow priority level is allocated to each ingress and egress message flow, the computer program product comprising software instructions which, when run on processing circuitry of the network element, cause the network element to: determine a message flow congestion state per flow priority level at an egress side of the network element; and trigger a message flow shaping operation per flow priority level at an ingress side of the network element dependent on the congestion state determined for at least one associated flow priority level at the egress side.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0031] Further aspects, details and advantages of the present disclosure will become apparent from the following description of exemplary embodiments and the drawings, wherein:
DETAILED DESCRIPTION
[0039] In the following description, for purposes of explanation and not limitation, specific details are set forth, such as specific network domains, protocols, and so on, in order to provide a thorough understanding of the present disclosure. It will be apparent to one skilled in the art that the present disclosure may be practiced in embodiments that depart from these specific details. For example, while some of the following embodiments will be described in the exemplary context of the Diameter protocol, it will be apparent that the present disclosure could also be implemented using, for example, other application layer messaging protocols that use hop-by-hop routing. Moreover, while the present disclosure will partially be described in an exemplary roaming scenario, the present disclosure may also be implemented in connection with other communication scenarios.
[0040] Those skilled in the art will further appreciate that the methods, services, functions and steps explained herein may be implemented using individual hardware circuitry, using software in conjunction with a programmed processor or general purpose computer, using an Application Specific Integrated Circuit (ASIC) and/or using one or more Digital Signal Processors (DSPs). It will also be appreciated that the present disclosure could be implemented in connection with one or more processors and a memory coupled to the one or more processors, wherein the memory is encoded with one or more programs that, when executed, cause the one or more processors to perform the methods, services, functions and steps disclosed herein.
[0042] In the exemplary scenario illustrated in
[0043] The network element 30 in the first network domain 10 and the network element 50 in the second network domain 20 may have a client/server relationship in accordance with a dedicated application layer messaging protocol, such as HTTP, MAP, SIP, Diameter or Radius. Each of the network elements 30, 50 may be operated as one or both of a client or server depending on its current role in a given messaging transaction. In practice, multiple client/server pairs (in terms of multiple network elements 30 and multiple network elements 50) will be present in the message routing system of
[0044] The at least one intermediary network element 40 is configured to act as an agent (also called proxy) with message routing capabilities between the first network domain 10 and the second network domain 20. It should be noted that one or more further network elements, in particular agents, may operatively be located between the network element 30 and the network element 40 in the first network domain 10. Moreover, one or more further network elements, in particular agents, and, optionally, network domains may operatively be located between the network element 40 in the first network domain 10 and the network element 50 in the second network domain 20.
[0045] In other embodiments, the network element 40 could be located in the second network domain 20 or in any intermediate network domain (not shown) between the first network domain 10 and the second network domain 20. In still further embodiments, all the network elements 30, 40, 50 may be located within one and the same network domain, or there may be no network domain differentiation at all in the message routing system.
[0046] As shown in
[0047] The interfaces 32, 42, 52 are generally configured to receive and transmit messages from and/or to other network elements. As illustrated in
[0048] The interface 42 of the network element 40 may logically comprise an ingress side interface part and an egress side interface part. The ingress side interface part is configured to receive one or more logical ingress message flows, while the egress side interface part is configured to output one or more logical egress message flows. In some variants, the terms ingress and egress as used in connection with the network element 40 may be defined in relation to a client/server location or a request/answer messaging direction. For example, the ingress side of the network element 40 may be defined as the side at which request messages REQ are received from a client (such as the network element 30), while the egress side may be defined to be the side from which request messages REQ are forwarded to a server (such as the network element 50). It will be appreciated that other definitions of the terms ingress and egress may be applied depending on the particular use case.
[0049] Returning to
[0050] The present disclosure, in certain embodiments, permits the network elements 30, 40, 50 (i.e., clients, servers and agents) to make better informed message flow shaping decisions. Better informed message flow shaping decisions also help to speed up service execution, such as receipt of a final answer message at the network element 30 responsive to a request message directed to the network element 40 or the network element 50.
[0052] In the Diameter-based and other embodiments presented herein, the processing of messages will typically be based on information included in dedicated message fields (AVPs) of these messages. Details in this regard, and in regard of the Diameter protocol in general in terms of the present embodiment, are described in the Internet Engineering Task Force (IETF) Request for Comments (RFC) 6733 of October 2012 (ISSN: 2070-1721).
[0053] The network system illustrated in
[0054] In
[0055] The routing table of agent 1b may be configured as illustrated in
[0056] As depicted in
[0058] As shown in
[0059] Via each individual link, the network element 40 receives multiple logical ingress message flows. In a similar manner, the network element 40 is configured to output multiple logical egress flows on each link towards the servers 50. To each ingress and egress message flow a dedicated flow priority level is allocated. The different flow priority levels of the different message flows are indicated by different line types. In the present exemplary scenario, three different flow priority levels (high, medium and low) are defined. It will be appreciated that more or fewer flow priority levels could be allocated in other embodiments. It will also be appreciated that each flow priority level (i.e., line type in
[0060] The grouping of ingress messages to the logical ingress message flows and the grouping of egress messages to the logical egress message flows is performed internally within the network element 40 in accordance with one or more ingress flow definition schemes and one or more egress flow definition schemes, respectively. The ingress flow definition schemes may be the same as the egress flow definition schemes, or different flow definition schemes may be applied at the ingress side and the egress side of the network element 40. The respective flow definition schemes may be defined by one or more message parameters, including the underlying messaging protocol (e.g., MAP, SIP, Diameter, Radius or HTTP), the respective messaging service or interface (e.g., Gr for MAP, S6a or Gx for Diameter, etc.), a message or command code (e.g., Update Location for MAP, Invite Method for SIP, CCR for Diameter, etc.), the presence of one or more dedicated Information Elements (IEs) and/or AVPs in a message, the content of any IE and/or AVP contained in a message (IMSI number, Location Update flags, access types, etc.), an application identifier (an application identified by an application identifier may realize one or more services, see also
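The grouping described in the preceding paragraph can be sketched in a few lines of Python. This is an illustration, not part of the disclosure: messages are assumed to be dictionaries, and the flow definition scheme is assumed to be a tuple of message parameter names (here protocol, interface and command code, one possible combination of the parameters listed above); different schemes could be applied at the ingress and egress sides simply by passing different tuples.

```python
# Illustrative sketch: grouping messages into logical flows under a flow
# definition scheme. Message shape and field names are hypothetical.
from collections import defaultdict

def flow_key(msg, scheme=("protocol", "interface", "command")):
    """Derive the logical flow a message belongs to under a given scheme."""
    return tuple(msg.get(field) for field in scheme)

def group_into_flows(messages, scheme=("protocol", "interface", "command")):
    """Partition messages into logical message flows."""
    flows = defaultdict(list)
    for msg in messages:
        flows[flow_key(msg, scheme)].append(msg)
    return dict(flows)

msgs = [
    {"protocol": "Diameter", "interface": "S6a", "command": "ULR"},
    {"protocol": "Diameter", "interface": "Gx", "command": "CCR"},
    {"protocol": "Diameter", "interface": "S6a", "command": "ULR"},
]
flows = group_into_flows(msgs)
# Two logical flows result: (Diameter, S6a, ULR) and (Diameter, Gx, CCR).
```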
[0061] Each message flow can be associated with a specific service (or application) generating the messages in that message flow. As understood herein, services can be end-user services but also network-internal services like backup services, charging services, policy control services, location update services or session setup services.
[0062] The flow priority level allocated to a particular message flow may reflect the associated service priority level. As such, a single flow priority level may be allocated to message flows pertaining to different services provided that the services have the same or, in general, an associated service priority level. As will be explained below, this allocation mechanism permits the network element 40 to throttle message traffic in a service priority-aware manner upon determining a congestion state. In such a manner, preferences of a network operator in terms of QoS can be reflected.
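The allocation just described — flows inheriting the priority level of their associated service — can be sketched as a simple lookup. The concrete service names and the three-level scheme below are assumptions drawn from the example scenario, not a definitive mapping.

```python
# Hypothetical sketch: allocating flow priority levels from service
# priority levels, so that flows of services sharing a service priority
# level share one flow priority level.
SERVICE_PRIORITY = {
    "session_setup": "high",
    "location_update": "high",
    "charging": "medium",
    "backup": "low",
}

def flow_priority(service):
    """A message flow inherits the priority of the service generating it."""
    return SERVICE_PRIORITY.get(service, "low")  # default is an assumption
```

Note that two distinct services (here session setup and location update) map onto the same flow priority level, which is what lets the network element throttle in a service priority-aware manner.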
[0063] In the exemplary scenario illustrated in
[0064] In
[0066] The method embodiment illustrated in
[0067] In a first step 510, the processor 44 of the network element 40 is controlled by program code in the memory 46 to determine a message flow congestion state. The congestion state is determined per flow priority level at an egress side of the network element 40. In the exemplary scenario of
[0068] In a further step 520, the processor 44 is controlled by the program code to trigger one or more message flow shaping operations at an ingress side of the network element 40. The one or more message flow shaping operations at the ingress side are triggered per flow priority level and dependent on the congestion state determined for an associated flow priority level at the egress side. In the exemplary scenario of
[0069] The message flow shaping operations at the ingress side can selectively be performed in relation to the links towards the multiple clients 30. In the example of
[0070] Different prioritization schemes may be applied at the ingress side and the egress side of the network element 40 as long as the ingress side and egress side flow priority levels can be associated with each other. As an example, a particular message flow having a flow priority level of medium at the ingress side may be allocated to a flow priority level of high at the egress side.
[0071] Further, in step 530, the message flow shaping operation triggered in step 520 is carried out at the ingress side of the network element 40. To this end, the processor 44 is configured by the program code to drop or reject individual messages at the ingress side of the network element 40. For rejected messages error codes or error messages comprising an error code may be transmitted back to the originating clients 30 to convey the reason for a rejection. Whether to drop or to reject an individual message may be decided based on the protocol type in use or based on the current state of that protocol.
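The drop-or-reject decision of step 530 can be sketched as follows. The mapping of protocols to actions and the error code shown are assumptions for illustration only; the paragraph above leaves the decision to the protocol type or protocol state.

```python
# Illustrative sketch: choosing between dropping and rejecting an ingress
# message based on the protocol in use. The protocol set and error code
# are hypothetical, not taken from the disclosure.
REJECT_CAPABLE = {"Diameter", "SIP", "HTTP"}  # protocols with an answer path

def shape_message(protocol, error_code="TOO_BUSY"):
    """Return the shaping action and, for rejections, the error to send back."""
    if protocol in REJECT_CAPABLE:
        # The error code conveys the reason for the rejection to the client.
        return ("reject", error_code)
    return ("drop", None)  # silently discard when no reject path exists
```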
[0072] In certain implementations, message rate limits may be defined per flow priority level and, optionally, per link. In such a case, a congestion state may be determined in case a particular message rate limit is reached or exceeded.
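A minimal sketch of such a per-priority-level, per-link rate limit check, under the assumption of a fixed measurement window in which messages are simply counted:

```python
# Sketch (assumed implementation detail): a congestion state is flagged
# once the message count per (link, priority) reaches its configured limit
# within the current measurement window.
class RateLimitMonitor:
    def __init__(self, limits):
        # limits: {(link, priority): max messages per window}
        self.limits = limits
        self.counts = {key: 0 for key in limits}

    def record(self, link, priority):
        """Count one message for the given link and flow priority level."""
        self.counts[(link, priority)] += 1

    def congested(self, link, priority):
        """True once the rate limit is reached or exceeded."""
        key = (link, priority)
        return self.counts[key] >= self.limits[key]

    def next_window(self):
        """Reset counters at the start of each measurement window."""
        for key in self.counts:
            self.counts[key] = 0
```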
[0073] Steps 510 to 530 illustrated in
[0074] In some cases, the determination of the congestion state in step 510 may be performed taking into account the ratio of messages that have been prevented from being output at the egress side (e.g., that have been dropped or rejected) and messages that have actually been output. The congestion state may be represented by a non-binary value that increases with the ratio of messages that have been prevented from being output. In the following, one exemplary mechanism for determining the congestion state will be described in more detail.
[0075] The function f in the algorithm
Cong-state = f(MSG dropped/rejected / MSG sent)
defines the sensitivity of the calculated Cong-state value and can be set individually per priority level.
[0076] The Cong-state value for the flow priority level of high can, for example, be set to: [0077] 1st case: 1 when the ratio is 5% to 20%, 2 when 20% to 50%, 3 when above 50%; [0078] 2nd case: 1 when the ratio is 2% to 5%, 2 when 5% to 50%, 3 when above 50%.
[0079] In the second case the sensitivity is higher (i.e., the congestion state is set to a relatively high value when the number of dropped or rejected messages increases slightly).
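One way to realize the function f above is a step function over the drop ratio with thresholds configurable per priority level; a minimal sketch, with the two threshold sets reproducing the 1st and 2nd cases given for the high priority level:

```python
# Sketch of f: map the dropped/rejected-to-sent ratio onto a Cong-state
# value of 0..3 using per-priority-level thresholds (lower bounds).
def cong_state(dropped, sent, thresholds):
    if sent == 0:
        return 0
    ratio = dropped / sent
    state = 0
    for level, lower_bound in enumerate(thresholds, start=1):
        if ratio >= lower_bound:
            state = level
    return state

CASE_1 = (0.05, 0.20, 0.50)  # 1st case: less sensitive
CASE_2 = (0.02, 0.05, 0.50)  # 2nd case: more sensitive
```

For instance, a 10% drop ratio yields Cong-state 1 under the 1st case but already Cong-state 2 under the more sensitive 2nd case.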
[0081] In the particular embodiment of
[0083] The message rate limits for a certain flow priority level on the ingress side are thus not statically configured, but are dynamically calculated based on the Cong-state value of the associated egress side priority level. This approach allows ingress message flows of a specific priority level to be throttled depending on the congestion state of completely different message flows on the egress side of the network element 40.
[0084] In this regard, the network element 40 calculates a so-called RALT value (Relative Allowed Traffic rate) individually per each ingress message flow (or priority level). The RALT value indicates how much the message rate per priority level shall be reduced compared to the current message rate (or compared to any statically configured maximum allowed message rate).
[0085] A RALT (low) value of 0% indicates that the current (or statically configured maximum) message rate limit for all message flows with the priority level low shall not be changed.
[0086] A RALT (low) value of y% indicates that the current (or statically configured maximum) message rate shall be reduced by y%.
[0087] The individual RALT values per flow priority level are, like the Cong-state values, calculated periodically by the network element 40 and are applied for message flow shaping for a period of time until the next value is calculated and applied. When no congestion is determined, the RALT values are set to 0 and no ingress message flow shaping occurs.
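Applying a RALT value to an ingress rate limit is then a one-line reduction; a sketch following the two definitions above (0% leaves the limit unchanged, y% reduces it by y%):

```python
# Sketch: reduce the configured (or current) ingress message rate limit
# by the RALT percentage calculated for the flow priority level.
def shaped_rate_limit(configured_limit, ralt_percent):
    """RALT of 0 keeps the limit; RALT of y reduces it by y percent."""
    return configured_limit * (1 - ralt_percent / 100)
```

For example, an ingress limit of 1000 messages per second combined with a RALT (low) value of 30% yields an effective limit of 700 messages per second for low priority flows until the next RALT value is calculated.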
[0088] The RALT values can be calculated by taking into account multiple Cong-state values for different priority levels of the egress side. Some examples for Diameter traffic are given below. However, the same principles can also be applied to a mix of, e.g., SIP- and HTTP-based message flows. It should be noted that ingress message flows can be completely different from egress message flows. Ingress message flows can be, e.g., MAP-based while egress message flows can be Diameter- and/or SIP-based (which would typically be the case for protocol converter agents/nodes 40).
[0089] In the particular embodiment of
[0090] Assume in regard of
Example 1
Low Congestion
[0091] When the congestion level of Peer 1 for message flows of a low priority level is below 15%, then throttle 30% of low priority message flows on ingress for Peer A and 10% for Peer B (see
(cong-state(low)=1 and peer=Peer1) set (RALT(low) of peer=PeerA to 30% and RALT(medium) of peer=PeerB to 10%) (expression 1)
Example 2
High Congestion
[0092] When the congestion level of Peer 1 for message flows of a low priority level is above 15% and that of Peer 2 is above 41%, then throttle 70% of low priority message flows on ingress for Peer A and 50% of medium priority message flows on Peer B.
(cong-state(low)>1 and peer=Peer1) and (cong-state(low)=2 and peer=Peer2) set (RALT(low) of peer=PeerA to 70% and RALT(low) of peer=PeerB to 100% and RALT(medium) of peer=PeerB to 50%) (expression 2)
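Rules like expressions 1 and 2 can be evaluated generically: each rule pairs a condition on egress Cong-state values (per peer and priority level) with the RALT values to set for ingress peers. The encoding below is an illustrative assumption (conditions are simplified to minimum Cong-state values, so "=2" is treated as ">=2"), and the peer names mirror the examples above.

```python
# Sketch: evaluate throttling rules mapping egress congestion states onto
# ingress RALT values. Rule encoding and peer names are hypothetical.
def evaluate_rules(cong, rules):
    """cong: {(egress_peer, priority): Cong-state}.
    Returns {(ingress_peer, priority): RALT percent} for matching rules."""
    ralt = {}
    for condition, actions in rules:
        if all(cong.get(key, 0) >= minimum for key, minimum in condition.items()):
            ralt.update(actions)
    return ralt

# Roughly expression 2: Peer1 low above state 1 and Peer2 low at state 2
# throttle PeerA low by 70%, PeerB low fully and PeerB medium by 50%.
RULES = [
    ({("Peer1", "low"): 2, ("Peer2", "low"): 2},
     {("PeerA", "low"): 70, ("PeerB", "low"): 100, ("PeerB", "medium"): 50}),
]
```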
[0093] As has become apparent from the above exemplary embodiments, the solution presented herein permits management of congestion situations taking into account service priority levels (as defined, e.g., by network operators for their individual networks). In congestion situations, traffic can be consistently throttled (e.g., per user or user group) for individual services or individual network elements taking into account a complete message flow for a service. Messages can be dropped or rejected in congestion situations already at the beginning of a longer-lasting session (and not at the end of it, which would make all previous message exchanges obsolete), so that already established sessions can be completed with higher priority, resulting in a higher QoS.
[0094] In congestion situations, the message flows that cause the actual overloads can be subjected to message flow shaping operations. As an example, a specific message flow type (or traffic type) from clients that causes a server overload can be dropped or rejected.
[0095] While the present invention has been described in relation to exemplary embodiments, it is to be understood that the present disclosure is for illustrative purposes only. Accordingly, it is intended that the invention be limited only by the scope of the claims appended hereto.