CONNECTIVITY FAULT MANAGEMENT IN A COMMUNICATION NETWORK
20170118105 · 2017-04-27
Inventors
CPC classification
H04L43/10 (ELECTRICITY)
H04L41/0631 (ELECTRICITY)
H04L12/4641 (ELECTRICITY)
International classification
Abstract
Methods and apparatus are disclosed for monitoring a Maintenance Association (MA) for Connectivity Fault Management (CFM) in a network supporting Equal Cost Multiple Paths (ECMP). A set of ECMP paths is generated for sending data between endpoints in the network, and a set of ECMP MAs is created for monitoring the generated ECMP paths between the endpoints. The created set of ECMP MAs is then used for sending monitoring packets. The approach conforms to existing CFM operation and supports both ECMP point to point path MAs and ECMP multipoint path MAs.
Claims
1. A method of monitoring a Maintenance Association for Connectivity Fault Management in a network supporting Equal Cost Multiple Paths, ECMP, the method comprising: generating a set of ECMP paths for sending data between endpoints in the network; creating a set of ECMP Maintenance Associations for monitoring the generated ECMP paths between the endpoints; and using the created set of ECMP Maintenance Associations for sending monitoring packets.
2. The method according to claim 1, wherein each monitoring packet comprises a Connectivity Fault Management Protocol Data Unit.
3. The method according to claim 1, further comprising generating the set of ECMP paths using ECMP Point to Point paths, each of the ECMP Point to Point paths comprising a set of equal shortest length connectivity paths between two endpoints of the ECMP Point to Point path.
4. The method according to claim 1, further comprising generating the set of ECMP paths using ECMP multipoint paths, the multipoint paths comprising a set of connectivity multipoint paths among the same endpoints.
5. The method according to claim 4, wherein each ECMP path comprises an ECMP multipoint path having N endpoints, the method further comprising identifying each ECMP multipoint path using a Group address.
6. The method according to claim 5, further comprising identifying each ECMP multipoint path associated with the N endpoints using a Group MAC address.
7. The method according to claim 6, further comprising constructing the Group MAC address by applying an operation on Backbone Service Identifier values associated with the ECMP multipoint paths.
8. The method according to claim 1, further comprising monitoring an ECMP path by sending the monitoring packets using an identifier associated with the monitored ECMP path.
9. The method according to claim 8 wherein the identifier is selected from any of a Flow Hash and a Group MAC address identifying the path.
10. The method according to claim 1, further comprising monitoring a plurality of ECMP paths by sending monitoring packets in groups cyclically on each monitored ECMP path, using an identifier associated with each monitored ECMP path.
11. A node for use in a communications network supporting ECMP, the node comprising: a processor for generating a set of ECMP paths for sending data between endpoints; the processor being further arranged to create a set of ECMP Maintenance Associations for monitoring the generated ECMP paths between the endpoints; and a transmitter for sending monitoring packets using the generated set of ECMP paths.
12. The node according to claim 11, further comprising a computer readable medium in the form of a memory for storing information mapping at least one Service Identifier to each generated ECMP path.
13. The node according to claim 11, wherein the processor is further arranged to generate the monitoring packets using ECMP Point to Point paths, each of the ECMP Point to Point paths comprising a set of equal shortest length connectivity paths between two endpoints of the ECMP Point to Point path.
14. The node according to claim 11, wherein the processor is further arranged to generate the monitoring packets using ECMP multipoint paths comprising a set of connectivity multipoint paths among a plurality of endpoints.
15. The node according to claim 14, wherein the processor is arranged to identify each ECMP multipoint path monitoring packet using a Group MAC address for each ECMP multipoint path.
16. The node according to claim 15 wherein the processor is arranged to construct each Group MAC address by applying an operation on Backbone Service Identifier values associated with the ECMP multipoint paths.
17. A non-transitory computer readable medium comprising instructions which, when run on a node, cause the node to perform the method of claim 1.
18. A vessel or vehicle comprising the node as claimed in claim 11.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION
[0030] A consistent way of enabling OAM monitoring for Connectivity Fault Management for both ECMP PtP path MAs and ECMP multipoint path MAs is provided. Fate sharing is guaranteed by using the same forwarding parameters for monitoring packets, such as the CFM PDUs monitoring the ECMP service, as for the monitored data frames. In particular, the destination address of CFM PDUs associated with ECMP path MAs is the same address used to reach remote MEPs within the same MA, and is provided by the configuration of the MA itself. Each specific ECMP path is identified by a Flow Hash value, and any subset of ECMP paths within the same PtP path is identified by the associated subset of Flow Hash values.
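As a minimal, purely illustrative sketch of this fate-sharing principle (in Python; all names and types below are assumptions and not part of the specification), a CFM PDU and the data frames it monitors can be given the same forwarding key, so the path-selection step maps both onto the same equal-cost path:

from dataclasses import dataclass

@dataclass(frozen=True)
class ForwardingKey:
    # Destination address taken from the MA configuration (remote MEP or Group MAC address).
    destination_address: str
    # Flow Hash value identifying one specific path within the ECMP set.
    flow_hash: int

def key_for_data_frame(dest: str, flow_hash: int) -> ForwardingKey:
    return ForwardingKey(dest, flow_hash)

def key_for_cfm_pdu(ma_dest: str, monitored_flow_hash: int) -> ForwardingKey:
    # Same forwarding parameters as the monitored data frames,
    # so the CFM PDU shares the fate of the traffic it monitors.
    return ForwardingKey(ma_dest, monitored_flow_hash)

# The keys are identical, hence the same ECMP path is selected for both.
assert key_for_data_frame("BEB-B", 0x2A) == key_for_cfm_pdu("BEB-B", 0x2A)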
[0031] An ECMP path MA is associated with a connectivity path connecting a specific group of endpoints, or with a subset (not necessarily proper) of equal cost paths connecting the same endpoints. In the latter case, the corresponding CFM PDUs are sent in groups cyclically on every monitored path, using an identifier associated with every monitored path. The number of CFM PDUs in every group depends on the specific CFM PDU type. For example, for CCMs, at least four CCMs must be sent on a single monitored path before moving to the next one. For Loopback Messages (LBMs), as many LBMs as provided by the administrator that initiates the LBM are sent. Only one LTM need be sent. This is because CCMs are sent periodically, and a fault is only reported when more than three consecutive CCMs are in error, so at least four CCMs must be sent on the same path in order to check it. The periodicity of LBMs (if any) is configurable, and the number of LBMs on individual paths must correspondingly be based on the configuration setting. LTMs are sent to identify individual nodes along the path, and so only one LTM on each individual path is required.
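By way of illustration only, the following Python sketch shows one possible way to implement the cyclic group sending described in this paragraph; the group sizes follow the text above (at least four CCMs per path, an administrator-provided count for LBMs, one LTM per path), while the function name and the path identifiers are hypothetical:

from itertools import cycle

# Minimum number of PDUs sent on one path before moving to the next path.
GROUP_SIZE = {
    "CCM": 4,   # at least four CCMs, so three consecutive errored CCMs can be detected per path
    "LTM": 1,   # a single LTM is sufficient to trace one individual path
}

def schedule_cfm_pdus(path_ids, pdu_type, total_pdus, lbm_count=1):
    """Yield (path identifier, PDU type) pairs, cycling over the monitored paths in groups."""
    group = GROUP_SIZE.get(pdu_type, lbm_count)  # for LBMs the group size comes from the administrator
    paths = cycle(path_ids)
    sent = 0
    while sent < total_pdus:
        path = next(paths)
        for _ in range(min(group, total_pdus - sent)):
            yield path, pdu_type
            sent += 1

# Example: three paths identified by Flow Hash values, twelve CCMs in total.
for path, pdu in schedule_cfm_pdus(["flow-hash-1", "flow-hash-2", "flow-hash-3"], "CCM", 12):
    print(path, pdu)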
[0032] In the case of ECMP multipoint path services, the destination_address parameter of the associated monitoring CFM PDUs is set cyclically to the SPBM Group MAC addresses associated with the monitored multipoint service. SPBM Group MAC address assignment can be automated.
[0033] In more detail, two ECMP connectivity paths are defined as follows:
[0034] 1. ECMP PtP path: This is the complete set of equal shortest length connectivity paths between two specific end points as constructed by ECMP. In addition to what is described in P802.1Qbp/1.0, LB and LT use the same cyclic methods when a subset of Flow Hash values is provided.
[0035] 2. ECMP multipoint path: This is the complete set of connectivity multipoint paths among more than two end points as constructed by ECMP. A single multipoint path within an ECMP multipoint path of N endpoints is identified either by:
[0036] (a). N Group MAC addresses constructed as follows: the first 3 bytes correspond to the SPsourceID of the initiating Backbone Edge Bridge (BEB) and the last 3 bytes correspond to the same I-SID identifying the N endpoint connectivity (this I-SID value may be automated to, for example, be the least backbone I-SID value in the set of I-SID values mapped to an ECMP-VID operation within the Backbone Service Instance table on the terminating BEBs having the least SPsourceID), that is: (SPsourceID[1]-ISID, SPsourceID[2]-ISID, . . . , SPsourceID[N]-ISID); or
(b). A single Group MAC address for all endpoints constructed as follows: the first 3 bytes correspond to the IEEE 802.1Q Backbone Service Instance Group address OUI (see clause 26.4 in IEEE Std 802.1Q-2011, VLAN aware Bridges) and the last 3 bytes correspond to the same I-SID identifying the N endpoint connectivity (this I-SID value may likewise be automated to, for example, be the least backbone I-SID value in the set of I-SID values mapped to an ECMP-VID operation within the Backbone Service Instance table on the terminating BEBs having the least SPsourceID). That is, the same Group address, corresponding to the Backbone Service Instance Group Address, is used for all N I-SID endpoints.
[0037] The choice between the type (a) and type (b) addressing described above is made by configuration. Note that the selection of (a) or (b) depends on how the ECMP multipoint connectivity is set up. Option (a) requires the set-up of N individual MAC addresses for an N-point connectivity, while option (b) requires a single MAC address for an N-point connectivity. Option (a) provides better coverage at the expense of increased complexity.
[0038] Other multipoint paths (up to 16 for each group, (a) or (b)) within the same ECMP multipoint connectivity, associated with exactly the same N endpoints, can be identified by using Group MAC addresses constructed from the above sets by x:oring the I-SID values in the (a) or (b) type addressing with the tie break masks described in clause 28.8 of IEEE Std 802.1aq-2012, Shortest Path Bridging.
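A possible encoding of the type (a) and type (b) Group MAC addresses, and of the x:ored variants for additional paths, is sketched below in Python. The byte layout follows paragraphs [0036] and [0038] (a 3-byte prefix followed by a 3-byte I-SID); the OUI constant and the mask value used in the example are illustrative assumptions, and in practice would be taken from IEEE Std 802.1Q-2011 and IEEE Std 802.1aq-2012 respectively.

# Assumed value of the IEEE 802.1Q Backbone Service Instance Group address OUI
# (see clause 26.4 of IEEE Std 802.1Q-2011); used here for illustration only.
BACKBONE_SERVICE_INSTANCE_OUI = bytes.fromhex("011e83")

def group_mac_type_a(sp_source_id: int, i_sid: int) -> bytes:
    # Type (a): first 3 bytes carry the SPsourceID of the initiating BEB,
    # last 3 bytes carry the I-SID identifying the N endpoint connectivity.
    # (In addition, the multicast address bit of the first byte would be set,
    # as noted in paragraph [0041]; omitted here for simplicity.)
    return sp_source_id.to_bytes(3, "big") + i_sid.to_bytes(3, "big")

def group_mac_type_b(i_sid: int) -> bytes:
    # Type (b): first 3 bytes carry the Backbone Service Instance Group address OUI,
    # last 3 bytes carry the I-SID identifying the N endpoint connectivity.
    return BACKBONE_SERVICE_INSTANCE_OUI + i_sid.to_bytes(3, "big")

def additional_path_mac(base_mac: bytes, tie_break_mask: int) -> bytes:
    # Another multipoint path over the same N endpoints: x:or the I-SID part
    # with a tie break mask (masks per clause 28.8 of IEEE Std 802.1aq-2012).
    i_sid = int.from_bytes(base_mac[3:], "big") ^ tie_break_mask
    return base_mac[:3] + i_sid.to_bytes(3, "big")

# Example: initiating BEB with SPsourceID 5, base I-SID 995, and a second path.
base = group_mac_type_a(5, 995)
print(base.hex(":"), additional_path_mac(base, 0x01).hex(":"))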
[0039] In order to enable ECMP operation, an I-SID to path mapping table must be configured for all local I-SIDs that map to the B-VID indicating ECMP operation in the BEB's Backbone Service Instance table. Note that there may be a default configuration set to distribute I-SIDs equally to all ECMP paths. In this case, I-SIDs can be mapped in increasing order to paths. Table 1 below is an example of such a table:
TABLE 1
Exemplary mapping of I-SIDs to paths
I-SID_1, I-SID_2, . . . , I-SID_k        Path 1
I-SID_k+1, . . . , I-SID_k+m             Path 2
. . .                                    . . .
I-SID_p, . . . , I-SID_z                 Path 16
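The default configuration mentioned in paragraph [0039] can be sketched as follows (illustrative Python only; the function name and the even-split policy are assumptions consistent with, but not mandated by, the text): the local I-SIDs mapped to the ECMP B-VID are sorted and distributed in increasing order over the available paths.

def default_i_sid_to_path_table(i_sids, num_paths=16):
    """Distribute I-SIDs equally, in increasing order, over up to num_paths ECMP paths."""
    ordered = sorted(i_sids)
    per_path = -(-len(ordered) // num_paths)  # ceiling division
    return [ordered[i:i + per_path] for i in range(0, len(ordered), per_path)]

# Example: eight I-SIDs spread over four paths, two per path.
print(default_i_sid_to_path_table([995, 999, 1000, 1010, 3443, 39000, 104000, 800000], num_paths=4))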
[0040] For each subset of I-SID values that are mapped to the same path, the least I-SID value (I-SID_low) is identified, and all the subsets are ordered by increasing I-SID_low value. The I-SID subsets are then mapped to multipoint paths identified by Group MAC addresses constructed as defined above and x:ored, in accordance with IEEE Std 802.1aq-2012, in increasing order. Table 2 illustrates an I-SID distribution table when addressing method (a) is used:
TABLE 2
Exemplary mapping of I-SIDs to paths
1000, 40000, 3443        Path 1
999, 104000              Path 2
39000, 1010              Path 3
800000, 995              Path 4
[0041] An exemplary automatically constructed Group MAC for a node identified by SPsourceID 5 (having the appropriate multicast address bit set) is shown in Table 3.
TABLE 3
Exemplary automatically constructed Group MACs
800000, 995              5-995
999, 104000              5-(995 x:ored 0x01)
1000, 40000, 3443        5-(995 x:ored 0x02)
39000, 1010              5-(995 x:ored 0x03)
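To make the ordering rule of paragraph [0040] concrete, the following Python sketch reproduces Table 3 from Table 2 (the 0x00, 0x01, 0x02, 0x03 masks and the name assign_group_macs are illustrative; in practice the tie break masks of IEEE Std 802.1aq-2012 would be used):

def assign_group_macs(i_sid_table, sp_source_id):
    # Order the I-SID subsets by their least member (I-SID_low) ...
    subsets = sorted(i_sid_table, key=min)
    # ... and take the overall least I-SID as the base of the Group MAC identifiers.
    base_i_sid = min(subsets[0])
    return [
        (subset, f"{sp_source_id}-{base_i_sid ^ mask}")
        for mask, subset in enumerate(subsets)
    ]

# The I-SID subsets of Table 2, for a node with SPsourceID 5.
table_2 = [[1000, 40000, 3443], [999, 104000], [39000, 1010], [800000, 995]]
for subset, mac in assign_group_macs(table_2, sp_source_id=5):
    print(subset, "->", mac)
# Matches Table 3: [800000, 995] -> 5-995, [999, 104000] -> 5-994 (i.e. 995 x:ored 0x01), and so on.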
[0042] The method described above provides a way to automate the allocation of identifiers of individual paths within ECMP multipoint path connectivity.
[0043] The address used by CFM PDUs to reach remote MEPs within the same ECMP path MA is provided by the configuration of the MA itself. In the case of ECMP multipoint path MAs it is an SPBM Group Address associated with the monitored service. The above method describes a way to automate the distribution of Group addresses based on the I-SID ECMP configuration tables. In the case of a single path within the ECMP path MA, the CFM PDUs use the MAC address associated with it. In cases where more than one path is monitored, the CFM PDUs are cyclically destined to the associated Group MAC addresses.
[0044] The associated ECMP path MEPs are placed on a Customer Backbone Port (CBP) by using the TESI multiplex entities and the associated Group MAC address identifiers.
[0045] The techniques described above enable automated configuration of ECMP multipoint path MAs in a way that does not require alterations to existing CFM operations, and is compatible with ECMP PtP path MAs.
[0046] Turning now to
[0047] S1. ECMP multipoint paths are generated and are identified by a set of SPB Group Addresses as described above.
[0048] S2. ECMP PtP and multipoint path MAs are determined in order to monitor the ECMP paths. The ECMP path MAs can be associated with a connectivity path connecting a specific group of endpoints or with a subset (not necessarily proper) of equal cost paths connecting the same end points. Each ECMP PtP individual path is identified by a Flow Hash value, while each ECMP multipoint individual path is identified by an SPB Group Address as described above.
[0049] S3. CFM PDUs are sent and processed on the MAs determined in step S2. When multiple paths are used, the corresponding CFM PDUs are sent in groups cyclically on every monitored path, using the identifier associated with every monitored path. The number of CFM PDUs in every group depends on the specific CFM PDU type. For example, at least four CCMs should be sent on a single monitored path before moving to the next one. For LBMs, as many LBMs as provided by the administrator that initiated the LBM are sent. For LTMs, only one LTM is sent.
[0050] As described above, there are various ways in which I-SID subsets that define paths can be mapped to Group MAC addresses.
[0051] Turning now to
[0052] The node 5 is provided with a processor 6 for generating the ECMP paths and applying them to data and CFM PDUs. A transmitter 7 and receiver 8 may also be provided. Note that this may be in the form of a separate transmitter and receiver or in the form of a transceiver. A non-transitory computer readable medium in the form of a memory 9 may be provided. This may be used to store a program 10 which, when executed by the processor 6, causes the node 5 to behave as described above. The memory 9 may also be used to store tables 11, such as Tables 1 to 3 described above for mapping I-SID values and Group MAC addresses to paths. Note that the memory 9 may be a single physical memory or may be distributed or connected remotely to the node 5. In the example of
[0053] Note also that the computer program 10 may be provided on a further non-transitory computer readable medium 12, such as a Compact Disc or flash drive, and transferred from the further medium 12 to the memory 9, or executed by the processor 6 directly from the further medium 12.
[0054] A node such as a Bridge network node supporting ECMP can typically support a plurality of other service types (such as VLAN, Traffic Engineered services, Backbone tunnel services, etc.). In an embodiment, the network is a Provider Backbone network whose edges (the endpoints described above) are Backbone Edge Bridges (which can encapsulate and decapsulate received frames), while transit Bridges are called Backbone Core Bridges, which do not have encapsulation/decapsulation capabilities. The network needs to run Shortest Path Bridging in MAC mode (SPBM), which is used to create shortest paths between the edges. ECMP further updates SPBM in order to enable multiple paths among the same edges. A node performing ECMP typically has processing capabilities and requirements associated with the ECMP service monitoring. That is, ECMP MEPs need to be instantiated at the BEBs (in particular at CBPs, Customer Backbone Ports, within the BEBs) in order to initiate and process CFM PDUs associated with the ECMP services, and ECMP MIPs need to be instantiated at BCBs in order to process received CFM PDUs and respond.
[0055] Turning to
[0056] It will be appreciated by the person of skill in the art that various modifications may be made to the above described embodiments. For example, the functions of the network node are described as being embodied at a single node, but it will be appreciated that different functions may be provided at different network nodes.
[0057] The following abbreviations have been used in this specification:
[0058] BEB Backbone Edge Bridge
[0059] B-VID Backbone VLAN Identifier
[0060] CBP Customer Backbone Port
[0061] CCM Continuity Check Message
[0062] CFM Connectivity Fault Management
[0063] ECMP Equal Cost Multiple Paths
[0064] FDB Filtering Database
[0065] IS-IS Intermediate System to Intermediate System
[0066] I-SID Backbone Service Identifier
[0067] LBM Loopback Message
[0068] LTM Link Trace Message
[0069] MA Maintenance Association
[0070] MEP Maintenance Association Edge Point
[0071] OAM Operations, Administration and Maintenance
[0072] PBB-TE Provider Backbone Bridges Traffic Engineering
[0073] PDU Protocol Data Unit
[0074] PtP Point to point
[0075] SPB Shortest Path Bridging
[0076] SPBM Shortest Path Bridging MAC address mode
[0077] TESI Traffic Engineered Service Instance
[0078] VID VLAN Identifier
[0079] VLAN Virtual LAN