Method for generating a load sharing vector
09736075 · 2017-08-15
Assignee
Inventors
Cpc classification
H04L45/00
ELECTRICITY
H04L12/66
ELECTRICITY
International classification
H04L1/00
ELECTRICITY
H04L12/66
ELECTRICITY
Abstract
The invention relates to a method for generating a load sharing vector indicating a plurality of communication targets for load sharing in a communication network. The method comprises providing a target distribution vector comprising a first number of entries indicating a first communication target, and comprising a second number of entries indicating a second communication target, and generating the load sharing vector upon the basis of active entries of the target distribution vector, the active entries indicating the communication target of the first or the second communication target which is available for load sharing in the communication network.
Claims
1. A method, executed in a processor of a communication node, for distributing data traffic among a plurality of communication targets in a communication network for load sharing, the method comprising: providing, by the processor, a target distribution vector comprising a number of entries per communication target, each entry indicating a capacity of a communication target, the target distribution vector determining a target distribution pattern for the plurality of communication targets, wherein providing the target distribution vector comprises determining the number of entries per communication target based on the following formula: number of entries for communication target Ti = wi*M/wtot, wherein wi is the capacity weight of the communication target Ti, M is the total number of entries of the target distribution vector, and wtot is the sum of the capacity weights of all communication targets; and generating, by the processor, a load sharing vector based on active entries of the target distribution vector, the active entries indicating communication targets of the plurality of communication targets which are available for load sharing in the communication network.
2. The method of claim 1, wherein the providing the target distribution vector comprises obtaining a randomized target distribution vector by one of: randomly reordering entries in the target distribution vector; randomly inserting entries into the target distribution vector.
3. The method of claim 1, wherein the providing the target distribution vector is executed when any of the following occurs: a communication target is defined and must be inserted; a communication target is undefined and must be deleted; a capacity of a communication target changes.
4. The method of claim 1, wherein the generating the load sharing vector comprises scanning the target distribution vector for entries associated with communication targets which are available for load sharing.
5. The method of claim 4, where the generating the load sharing vector further comprises: setting a buffering state for each entry affected by a communication target becoming available or unavailable, in the load sharing vector; setting a rerouting timer; and setting a forwarding state for an entry affected by a communication target becoming available or unavailable in the load sharing vector, when the corresponding buffer is empty after expiry of the timer.
6. The method of claim 1, wherein the generating the load sharing vector comprises copying the entries indicating communication targets which are available for load sharing from the target distribution vector into the load sharing vector.
7. The method of claim 1, wherein the generating the load sharing vector comprises: comparing entries of the load sharing vector with entries of the target distribution vector; updating entries of the load sharing vector with entries of the target distribution vector which are associated with communication targets being available for load sharing to obtain the active entries in the load sharing vector.
8. The method of claim 1, wherein the generating the load sharing vector comprises: comparing an entry of the load sharing vector with an entry of the target distribution vector; at least one of: replacing the entry of the load sharing vector by the entry of the target distribution vector to obtain an active entry in the load sharing vector if both of: the entry of the target distribution vector indicates a communication target which is available for load sharing; and the entry of the target distribution vector differs from the entry of the load sharing vector; comparing the entry of the load sharing vector with a further subsequent entry of the target distribution vector if the entry of the target distribution vector indicates a communication target which is not available for load sharing.
9. The method of claim 1, wherein the generating the load sharing vector is executed each time the availability of a communication target changes or when the target distribution vector is recalculated.
10. The method of claim 1, further comprising: determining a communication target from the load sharing vector, and forwarding a communication message towards the determined communication target.
11. The method of claim 10, wherein the determining comprises: determining an index value based on a header of the communication message, or providing an index value by a message counter; and selecting the active entry of the load sharing vector using the index value to determine the communication target.
12. The method of claim 11, wherein the determining the index value comprises calculating a hash value from at least one message header entry of the header of the communication message.
13. The method of claim 1, wherein the communication target is one of: a communication route; a communication path; a network node; an application distributed on a number of network nodes.
14. A communication node for distributing data traffic among a plurality of communication targets in a communication network for load sharing, the communication node comprising a processor and a memory coupled to the processor, the processor configured to: provide a target distribution vector comprising a number of entries per communication target, each entry indicating a capacity of a communication target, the target distribution vector determining a target distribution pattern for the plurality of communication targets, wherein the processor is configured to provide the target distribution vector by determining the number of entries per communication target based on the following formula: number of entries for communication target Ti = wi*M/wtot, wherein wi is the capacity weight of the communication target Ti, M is the total number of entries of the target distribution vector, and wtot is the sum of the capacity weights of all communication targets; and generate a load sharing vector based on active entries of the target distribution vector, the active entries indicating communication targets which are available for load sharing.
15. The communication node of claim 14: wherein the processor is further configured to determine a communication target from the load sharing vector; wherein the communication node further comprises a forwarder configured to forward a communication message towards the determined communication target.
16. The communication node of claim 15, wherein the processor is further configured to: determine an index value based on a header of the communication message, or provide an index value by a message counter; select the active entry of the load sharing vector using the index value to determine the communication target.
17. The communication node of claim 16, wherein the processor is further configured to determine the index value by calculating a hash value from at least the header of the communication message.
18. The communication node of claim 14, wherein the processor is further configured to obtain a randomized target distribution vector by: randomly reordering entries in the target distribution vector; or randomly inserting entries into the target distribution vector.
19. A communication node according to claim 14: wherein the processor is configured to provide the target distribution vector in response to any of: a communication target is defined and must be inserted; a communication target is undefined and must be deleted; a change in capacity of communication targets; wherein the processor is configured to generate the load sharing vector each time the availability of a communication target changes or when the target distribution vector is recalculated; wherein the processor is configured to generate the load sharing vector by: scanning the target distribution vector for entries associated with communication targets which are available for load sharing; setting a buffering state for each entry affected by a communication target becoming available or unavailable, in the load sharing vector; setting a rerouting timer; setting a forwarding state for an entry affected by a communication target becoming available or unavailable in the load sharing vector, when the corresponding buffer is empty after expiry of the timer; copying the entries indicating communication targets which are available for load sharing from the target distribution vector into the load sharing vector; comparing entries of the load sharing vector with entries of the target distribution vector; updating entries of the load sharing vector with entries of the target distribution vector which are associated with communication targets being available for load sharing to obtain the active entries in the load sharing vector; replacing the entry of the load sharing vector by the entry of the target distribution vector to obtain an active entry in the load sharing vector if both of: the entry of the target distribution vector indicates a communication target which is available for load sharing; and if the entry of the target distribution vector differs from the entry of the load sharing vector; comparing the entry of the load sharing vector with a further subsequent entry of the target distribution vector if the entry of the target distribution vector indicates a communication target which is not available for load sharing.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) Further embodiments may be described with reference to the following figures, in which:
DETAILED DESCRIPTION
(18) The communication nodes D depicted in
(19) By way of example, in the case of the MSC-S Blade Cluster node, the load sharing node D shown in
(20) With reference to
(21) According to the embodiments shown in
(22) In order to describe the load sharing approach in detail, the following exemplary assumptions may be made: The number of message flows to be load shared may be large compared to the number m of communication targets. This may be the case when message flows such as calls or transactions are created dynamically and are short-lived.
(23) According to some embodiments, it is possible to discriminate a large number of message flows based on inspection of the transport protocol header and e.g. a few additional application-specific user protocol parameters that may be extracted on the communication node performing load sharing.
(24) Thus, there may exist an application-specific function h that maps the e.g. extended message header into the integers {1, . . . , N}, where h(M1)=h(M2) for any pair of messages belonging to the same message flow and N>>m. By way of example, the application-specific function h may map at least one parameter of the message header into one or more hash values. Furthermore, the value of h may or may not be unique for each message flow.
(25) According to some embodiments, e.g. a minimum portion of traffic in case of re-routing may be affected. In the event of a temporary failure of a communication target, messages that may have been distributed to the unavailable communication target may be re-routed to other, available communication targets. The purpose of re-routing may thus be manifold. In particular, every new message flow may always be handled. With reference to
(26) With reference to
(27) In addition, a communication node redundancy may be achieved. It may also be possible to deploy two or more redundant load sharing nodes. Provided that the up-stream network on the left-hand side of the nodes D as shown in
(28) In the following, the communication targets are denoted by (T1, . . . , Tm), the receiving network nodes are denoted by (N1, . . . , Nm), and the routes are denoted by (R1, . . . , Rm). Furthermore, for a distributed application, e.g. AS, as shown e.g. in
(29) The target distribution vector (TDV) comprises entries determining a distribution pattern for the configured communication targets. Each communication target appears in the TDV with a relative frequency according to its capacity. The load distribution vector (LDV) may be derived from the target distribution vector and may take the availability of communication targets into account. The load distribution vector may be used for load sharing.
(30) Furthermore, a translation function h may be introduced to map the e.g. extended header information of each received message to an index into the load distribution vector. When in-sequence delivery is required, h may map all messages in a message flow to the same index. The message may be forwarded to the communication target pointed to by the load sharing vector entry.
(31) Each communication target T1, . . . , Tm may be associated with an integer capacity weight wi. The capacity weight may be a property that is configured per communication target. Alternatively, it may be determined automatically, e.g. through measurements, or provided by an external party at run-time. Each communication target may be, in its available state, available for load sharing, and, in its unavailable state, not available for load sharing.
(32) By way of example,
(33) Every communication target with capacity wi>0 may have at least one entry in the TDV. For example, if there are 64 communication targets with a total weight wtot=128 and M=1024 is an integer multiple of wtot, each communication target Ti may have wi*K entries, where K is equal to:
(34) K = M/wtot = 1024/128 = 8
By way of example, if the capacity w0 for communication target T0 equals 3, then T0 may have 3*8=24 entries in the vector TDV.
(35) In the case where M is not an integer multiple of wtot, the allocation of TDV entries to communication targets may, for example, be performed according to the so-called largest remainder method using the Hare quota wtot/M. In order to guarantee at least one entry per communication target, all communication targets with a zero integer part and a non-zero remainder may first receive an entry and be removed from the list, before the remaining entries are allocated to the other communication targets according to the size of their remainders.
(36) If there are not enough remaining entries to provide at least one entry per communication target, the communication targets with the smallest weights may be allocated one entry each and the largest remainder method may be applied for the remaining communication targets only.
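The allocation scheme described above can be sketched as follows. This is a minimal Python sketch under stated assumptions: the function name is illustrative, and the pathological fallback of paragraph (36) (too few remaining entries) is only guarded against, not fully implemented.

```python
from math import floor

def allocate_tdv_entries(weights, M):
    """Allocate M TDV entries in proportion to integer capacity weights,
    using the largest remainder method when M is not an integer multiple
    of the total weight. Assumes M is at least the number of targets."""
    w_tot = sum(weights)
    ideal = [w * M / w_tot for w in weights]          # fractional shares
    counts = [floor(x) for x in ideal]
    remainders = [x - c for x, c in zip(ideal, counts)]
    # Guarantee an entry for targets with zero integer part and a
    # non-zero remainder; they leave the remainder allocation.
    for i in range(len(weights)):
        if counts[i] == 0 and remainders[i] > 0:
            counts[i] = 1
            remainders[i] = 0.0
    # Distribute the remaining entries by largest remainder.
    leftover = max(0, M - sum(counts))
    order = sorted(range(len(weights)),
                   key=lambda i: remainders[i], reverse=True)
    for i in order[:leftover]:
        counts[i] += 1
    return counts
```

With the example values above (wtot=128, M=1024), a target of weight 3 receives 24 entries, matching paragraph (34).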
(37) By way of example,
(38) According to some embodiments, a communication target may be inserted into the TDV when it is defined and may be removed from TDV when it is undefined. These two actions and any change of capacity for a defined communication target may cause re-routing as the TDV and LDV have to be re-calculated.
(39) If an in-sequence-delivery according e.g. to the embodiment of
(40) According to some embodiments, the LDV may be a vector of communication targets with the same number of entries as the TDV. The entries of the LDV are derived from the TDV by taking the availability of communication targets into account. At any time, the LDV only contains communication targets that are available.
(41) By way of example, each entry in the LDV may have one of the following states: Forwarding, when the traffic including communication messages is being forwarded to the communication target; and Buffering, when the traffic including communication messages is temporarily buffered.
(42) The LDV may be updated each time when the availability of a communication target changes or when the TDV is re-calculated. Whenever a LDV entry changes, its communication target may enter the buffering state to ensure in-sequence delivery of messages in a message flow. A re-routing timer per LDV entry may be started to control the re-routing procedure. Each LDV entry may maintain its own FIFO buffer in the buffering state. According e.g. to the embodiment of
(43) In order to generate a new or an updated LDV e.g. whenever a new target distribution vector has been calculated, the following scheme may be performed: For each index j in the range 1, . . . , M:
(44) If the communication target of TDV[j] is available, store the communication target into LDV[j].
(45) If the communication target of TDV[j] is unavailable, scan the TDV linearly for the next entry whose communication target is in the available state. If the scan reaches the end of the TDV, continue at the start. Store the found communication target in the LDV[j].
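The generation scheme of paragraphs (43) to (45) can be sketched as follows; the function name is an illustrative assumption, and zero-based indexing is used instead of the 1..M range of the text.

```python
def generate_ldv(tdv, available):
    """Derive the LDV from the TDV: copy each available entry; for an
    unavailable entry, scan the TDV linearly (with wrap-around) for the
    next entry whose target is available. Returns None when no target
    at all is available, i.e. no valid LDV can be generated."""
    if not any(t in available for t in tdv):
        return None
    M = len(tdv)
    ldv = [None] * M
    for j in range(M):
        if tdv[j] in available:
            ldv[j] = tdv[j]
        else:
            # Linear scan with wrap-around for the next available target.
            k = (j + 1) % M
            while tdv[k] not in available:
                k = (k + 1) % M
            ldv[j] = tdv[k]
    return ldv
```

For example, with tdv = ['A', 'B', 'A', 'C'] and only 'A' and 'C' available, the entry for 'B' is replaced by the next available target.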
(47) In order to generate the load sharing vector upon the basis of the target distribution vector, the target distribution vector may be searched for entries associated with available communication targets. Thus, only entries of the target distribution vector which are associated with available communication targets are stored into the load sharing vector 803. By way of example, starting with the entry 32, the entry 33 of the target distribution vector 801 is unavailable. Thus, the target distribution vector 801 is searched for the next active entry associated with an available communication target so that, by way of example, the entry 34 of the target distribution vector 801 is copied to index 33 of the load sharing vector. This iterative approach ensures that the resulting load sharing vector has only active entries forming e.g. a subset of the entries of the target distribution vector. However, if all communication targets are unavailable, then no valid load sharing vector can be created, so that the application or a destination may be considered as being unavailable and the traffic may be dropped.
(48) When a communication target changes its state to the unavailable state, then all LDV entries pointing to this communication target may be affected and may be changed to other available communication targets as follows: Change the state of an affected entry LDV[j] to the buffering state and start re-routing timer for LDV entry; If the state was already buffering, no action may be performed; Scan the TDV linearly from entry j+1 onwards for the first communication target in the available state. If the scan reaches the end of the TDV continue at the start. Store the found available communication target in LDV[j].
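The failure-update procedure above can be sketched as follows. In this hedged sketch the entry states are plain strings and the per-entry re-routing timer is omitted; it assumes at least one communication target remains available.

```python
FORWARDING, BUFFERING = "forwarding", "buffering"

def on_target_unavailable(tdv, ldv, states, available, failed):
    """Re-point every LDV entry referring to the failed target to the
    first available target found by scanning the TDV linearly from
    entry j+1 onwards (with wrap-around), and put the affected entries
    into the buffering state. A real implementation would also start a
    re-routing timer per affected entry."""
    available.discard(failed)
    M = len(tdv)
    for j in range(M):
        if ldv[j] == failed:
            states[j] = BUFFERING   # no-op if already buffering
            k = (j + 1) % M
            while tdv[k] not in available:
                k = (k + 1) % M
            ldv[j] = tdv[k]
```

Only entries pointing at the failed target change state; entries for unaffected targets keep forwarding, which matches the claim that only affected message flows are buffered.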
(49) In
(50) In order to update the load sharing vector 901, the target distribution vector is searched for e.g. a first alternative communication target, which may be stored at the position in the load sharing vector indicated by the index 10, and so forth. As a result, the load sharing vector 901 is in a state in which affected entries may be set to buffering in order to keep e.g. an in-sequence delivery. Thus, entry 10 may indicate buffering, entries 11 and 12 may indicate forwarding, entry 32 may indicate buffering, entries 33 and 34 may indicate forwarding, entry 50 may indicate buffering, and entries 51 to 52 may indicate forwarding.
(51) By way of example, at a further time instant, e.g. an available communication target indicated at positions 11, 32 and 51 of the target distribution vector may fail. Thus, the updating of the load sharing vector 901 may be correspondingly performed so that the entry state of the load sharing vector may be changed back to forwarding when a buffer is emptied after e.g. expiry of a re-routing timer. Thus, the entries 10 to 52 as shown in
(52) In order to update the load sharing vector, the procedure as shown in
(53) As shown in
(54) In summary, when a previously unavailable communication target recovers and becomes available, the LDV may be updated as follows:
(55) For each index j in the range 1, . . . , M with LDV[j] unequal to TDV[j]: Scan the TDV linearly from entry j onwards for the first communication target in the available state; If the scan reaches the end of the TDV, then continue at the start; If the found communication target is different from LDV[j], store it in LDV[j], set the state of LDV[j] to the buffering state and start the re-routing timer for the LDV entry. Otherwise stop the iteration.
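The recovery update above can be sketched symmetrically to the failure case. As before, this is a hedged sketch: the state is a plain string, the re-routing timer is omitted, and the early "stop iteration" optimization of paragraph (55) is not implemented (every index is checked).

```python
BUFFERING = "buffering"

def on_target_available(tdv, ldv, states, available, recovered):
    """After a target recovers, revisit every LDV entry that differs
    from its TDV entry: rescan the TDV from entry j onwards (with
    wrap-around) for the first available target; if it differs from the
    current LDV entry, store it and set the entry to buffering (a real
    implementation would also start the re-routing timer)."""
    available.add(recovered)
    M = len(tdv)
    for j in range(M):
        if ldv[j] != tdv[j]:
            k = j
            while tdv[k] not in available:
                k = (k + 1) % M
            if tdv[k] != ldv[j]:
                ldv[j] = tdv[k]
                states[j] = BUFFERING
```

Because the update depends only on the TDV and the current availability set, redundant nodes running the same code arrive at the same LDV, as paragraph (56) notes.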
(56) The LDV update described above is, by way of example, state-based in the sense that, given a certain availability of communication targets, the load sharing pattern may always be the same and may not depend on the history of availability changes in any way. This is the key enabler for redundant communication nodes that always apply the same load sharing pattern, as long as they have information on the same communication target states.
(57) When a new communication target is selected for a LDV entry, a linear scan through the randomized TDV for the first available communication target may select communication targets pseudo-randomly according to their relative capacity weight among the remaining available communication targets. Through this approach, the relative frequency of available communication targets in the LDV may statistically adhere to the capacity distribution of the remaining available communication targets.
(59) For the load sharing of message flows, the function h may map all messages of a given message flow to the same value i. This may be the case for any function if all messages in a flow have the same set of input parameters. In order to ensure good load sharing to all communication targets, the function h may produce output values uniformly over the entire index range 1 . . . M. Provided that the in-flow of messages has sufficient diversity in the input parameters, e.g. that the number of discernable message flows is large compared to the number of communication targets, a hash function may be used for this purpose.
(60) If there are no message flows but instead individual messages may be load shared independently, there may be no need for any translation function. An optimal load sharing may be achieved by simply stepping a round-robin counter in the range 1 . . . M for each received message and using the current counter value as output.
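Both index-selection options described above can be sketched as follows. The specific header parameters and the choice of SHA-256 are illustrative assumptions for the sketch; zero-based indices 0..M-1 are used.

```python
import hashlib
from itertools import count

M = 1024  # illustrative number of LDV entries

def flow_index(src_addr, dst_addr, flow_id):
    """Hash-based translation function h: all messages carrying the
    same header parameters (i.e. the same flow) map to the same LDV
    index, and a good hash spreads distinct flows uniformly."""
    key = f"{src_addr}|{dst_addr}|{flow_id}".encode()
    return int.from_bytes(hashlib.sha256(key).digest()[:8], "big") % M

# Round-robin alternative when individual messages are load shared
# independently and no translation function is needed:
_counter = count()

def round_robin_index():
    return next(_counter) % M
```

The hash variant preserves in-sequence delivery per flow; the round-robin variant gives a perfectly even spread but no flow affinity.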
(61) The output of the translation function may then be used as an index into the LDV. If the selected LDV entry is in the forwarding state, then the communication message may be forwarded directly to the communication target in the LDV entry. However, if the selected LDV entry is in the buffering state, then the communication message may be placed at the end of the LDV entry buffer.
(62) At expiration of the re-routing timer for a specific LDV entry, a parallel buffer emptying process may be started. This process may fetch messages from the head of the LDV entry buffer and forward them to the communication target of the LDV entry. When the buffer is emptied, the process may stop and the state of the LDV entry may be set back to the forwarding state.
(63) The load sharing approach described herein may, according to some implementations, be used for load sharing of message flows and individual messages without any load sharing constraints. Optionally, in-sequence delivery through buffering during re-routing may be provided. Additionally, re-routing at change of availability of a communication target may be performed.
(64) According to some embodiments, only the affected message flows may be temporarily buffered. Other, unaffected message flows for the same application or destination need not be buffered. In addition, a reversal of re-routing may be achieved when communication targets become available again, since any communication node may arrive at the same LDV based on the current set of weights and communication target availabilities.
(65) According to some embodiments, the deployment of redundant communication nodes which may seamlessly take over load sharing of message flows may be supported.
(66) According to some embodiments, reduced internal resources may be used for buffering, so that an increased message loss ratio may be avoided.
(67) According to some embodiments, a generic framework for the load sharing of signaling traffic may be provided, with an application-specific handling of message flows and/or an even distribution of all SLS values on each route. Thus, network-level load sharing over multiple relay stages as e.g. shown in
(68) According to some embodiments, a capacity-weighted load sharing may be performed.
(69) According to some embodiments, load sharing of message-flows or messages may be performed upon the basis of the principles described herein.
(70) According to some embodiments, history-less, e.g. redundant, load sharing nodes can take over load sharing seamlessly.
(71) According to some embodiments, a large number of load sharing targets, e.g. more than 16, may be taken into account.