Active mesh network system and method
11502942 · 2022-11-15
Assignee
Inventors
CPC classification
H04L67/1029
ELECTRICITY
H04L67/288
ELECTRICITY
G06F2009/45595
PHYSICS
H04L12/4633
ELECTRICITY
H04L67/10
ELECTRICITY
H04L12/4641
ELECTRICITY
H04L12/66
ELECTRICITY
International classification
H04L12/66
ELECTRICITY
G06F9/455
PHYSICS
Abstract
According to one embodiment, a network system features a first virtual private cloud (VPC) network and a second VPC network. The first VPC network includes a first plurality of gateways. Each gateway of the first plurality of gateways is in communication with other gateways of the first plurality of gateways in accordance with a first tunnel protocol. Similarly, the second VPC network includes a second plurality of gateways. Each of the second plurality of gateways is communicatively coupled to each of the first plurality of gateways in accordance with a second security protocol to provide redundant routing.
Claims
1. A network system comprising: a first virtual private cloud network including a first plurality of gateways, each gateway of the first plurality of gateways being in communication with one or more other gateways of the first plurality of gateways over one or more communication links uniquely corresponding to the one or more other gateways, each of the one or more communication links operating in accordance with a tunnel protocol; and a second virtual private cloud network including a second plurality of gateways, each of the second plurality of gateways being communicatively coupled to each of the first plurality of gateways over peer-to-peer communication links operating in accordance with a security protocol, wherein each of the peer-to-peer communication links is set to a selected equal cost multi-path (ECMP) routing metric, and each of the one or more communication links is assigned a higher ECMP routing metric than any ECMP routing metric assigned to a peer-to-peer communication link of the peer-to-peer communication links.
2. The network system of claim 1, wherein a first gateway of the first plurality of gateways is in communication with at least a second gateway of the first plurality of gateways over a communication link of the one or more communication links operating in accordance with a Generic Routing Encapsulation (GRE) tunnel protocol to secure communications between each of the first plurality of gateways for subsequent communication to one of the second plurality of gateways.
3. The network system of claim 2, wherein each of the first plurality of gateways and the second plurality of gateways correspond to a virtual machine (VM)-based data routing component, each of the first plurality of gateways is assigned a Private IP address within an IP address range associated with a first virtual private cloud network and each of the second plurality of gateways is assigned a Private IP address within an IP address range associated with a second virtual private cloud network different than the first virtual private cloud network.
4. The network system of claim 1, wherein a communication link of the one or more communication links is active when no peer-to-peer communication link communicatively coupled to a first gateway of the first plurality of gateways is active.
5. The network system of claim 1, wherein the security protocol corresponds to an Internet Protocol Security (IPSec) protocol.
6. The network system of claim 1, wherein each of the peer-to-peer communication links is set to identical ECMP routing metrics to achieve load-balancing.
7. The network system of claim 1, wherein each of the peer-to-peer communication links is assigned a different routing weight to achieve load-balancing, the weight being based on bandwidth capacity.
8. The network system of claim 1, wherein responsive to a peer-to-peer communication link of the peer-to-peer communication links failing, a gateway of the first plurality of gateways updates a gateway routing table autonomously by disabling a virtual tunnel interface corresponding to the failed peer-to-peer communication link without reliance on activity by a controller that manages operability of a full-mesh network including the peer-to-peer communication links.
9. The network system of claim 1, wherein the second virtual private cloud network supports communications between the first virtual private cloud network and an on-premises network.
10. The network system of claim 9, wherein the second plurality of gateways are communicatively coupled to each of the first plurality of gateways over the peer-to-peer communication links and the on-premises network.
11. A network system comprising: a first plurality of gateways, wherein a first gateway of the first plurality of gateways is in communication with at least a second gateway of the first plurality of gateways over a first communication link, the first communication link operating in accordance with a tunnel protocol; and a second plurality of gateways, each of the second plurality of gateways being communicatively coupled to each of the first plurality of gateways over peer-to-peer communication links operating in accordance with a security protocol, wherein each of the peer-to-peer communication links is set to a selected equal cost multi-path (ECMP) routing metric, and the first communication link is assigned a higher ECMP routing metric than any ECMP routing metric assigned to a peer-to-peer communication link of the peer-to-peer communication links.
12. The network system of claim 11, wherein the first communication link operates in accordance with a Generic Routing Encapsulation (GRE) tunnel protocol to secure communications between each of the first plurality of gateways for subsequent communication to one of the second plurality of gateways.
13. The network system of claim 12, wherein the second plurality of gateways are communicatively coupled to each of the first plurality of gateways over the peer-to-peer communication links and an on-premises network.
14. The network system of claim 12, wherein each gateway of the first plurality of gateways operates as a spoke gateway and each gateway of the second plurality of gateways operates as a transit gateway so that (i) each spoke gateway supports the peer-to-peer communication links in an active state to each of the transit gateways and (ii) each transit gateway supports the peer-to-peer communication links in the active state to each of the spoke gateways as well as an active peer-to-peer communication link to each on-prem computing device.
15. The network system of claim 11, wherein the first communication link becomes active when no peer-to-peer communication link communicatively coupled to the first gateway from the second plurality of gateways is active.
16. The network system of claim 11, wherein the security protocol corresponds to an Internet Protocol Security (IPSec) protocol.
17. The network system of claim 11, wherein each of the peer-to-peer communication links is set to identical ECMP routing metrics to achieve load-balancing.
18. The network system of claim 11, wherein each of the peer-to-peer communication links is assigned a different routing weight to achieve load-balancing, the weight being based on bandwidth capacity.
19. The network system of claim 11, wherein responsive to a peer-to-peer communication link of the peer-to-peer communication links failing, the first gateway updates a gateway routing table autonomously by disabling a virtual tunnel interface corresponding to the failed peer-to-peer communication link without reliance on activity by a controller that manages operability of a full-mesh network including the peer-to-peer communication links.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) Embodiments of the invention are illustrated by way of example and not by way of limitation in the figures of the accompanying drawings, in which like references indicate similar elements.
DETAILED DESCRIPTION
(11) Embodiments of a system and method for establishing a load-balanced, full-mesh network within a public cloud computing platform are described, where the network mitigates disruption of communications directed to or from virtual private cloud networks (VPCs) due to communication link failures. The full-mesh network may be accomplished by establishing (i) a first cloud-based networking infrastructure operating as a first virtual private cloud network (hereinafter, “spoke VPC”) and (ii) a second cloud-based networking infrastructure operating as a second virtual private cloud network (hereinafter, “transit VPC”). The spoke VPC includes a set of (e.g., two or more) gateways (hereinafter, “spoke gateways”), which are communicatively coupled to one or more instances (e.g., cloud instances associated with a particular subnet or particular subnets as described below) and a set of gateways deployed within the transit VPC (hereinafter, “transit gateways”). Each of the spoke gateways and transit gateways may be accessed in accordance with a unique Classless Inter-Domain Routing (CIDR) routing address to propagate messages over the network.
(12) Besides being communicatively coupled to the set of spoke gateways, the set of transit gateways may be communicatively coupled to one or more computing devices deployed within an on-premises network (hereinafter, “on-prem computing devices”). Herein, the transit VPC is configured to control the propagation of data traffic between the spoke VPC and the on-premises network, while the spoke VPC is configured to control the propagation of data traffic between instances maintained within the spoke VPC and the transit VPC.
(13) According to one embodiment of the disclosure, the first cloud-based networking infrastructure features a one-to-many communication link deployment (e.g., criss-cross peering), where each spoke gateway supports multiple, active peer-to-peer communication links to different transit gateways and each transit gateway supports multiple, active peer-to-peer communication links to different spoke gateways as well as an active peer-to-peer communication link to each on-prem computing device. According to one embodiment of the disclosure, the peer-to-peer communication links may constitute cryptographically secure tunnels, such as tunnels operating in accordance with a secure network protocol. One example of a secure network protocol may include, but is not limited or restricted to, Internet Protocol Security (IPSec). Hence, for the sake of clarity, these peer-to-peer communication links may be referred to as “IPSec tunnels.”
(14) Herein, the deployment of full-mesh peering in lieu of the primary/HA communication links utilized in conventional cloud computing platforms provides a number of technological advantages. For example, the full-mesh peering architecture is configured to avoid intensive monitoring of routing tables relied upon by a gateway (referred to as a “gateway routing table”) for determining which IPSec tunnel is used in the transmission of data, especially for a tunnel state change. To achieve load-balancing, all of the IPSec tunnels directed to the transit gateways are set to identical, equal cost multi-path (ECMP) routing parameters, namely identical routing weights and ECMP metrics as described below. Alternatively, according to another embodiment of the disclosure, load balancing is not based on ECMP; rather, load balancing is achieved through an assignment of weights such that different tunnels may be assigned different weights, based on one or a combination of factors such as bandwidth, preference, or the like.
(15) Herein, when an IPSec tunnel fails, the gateway updates its gateway routing table autonomously by disabling (bringing down) a tunnel interface (e.g., virtual tunnel interface) corresponding to the failed IPSec tunnel, without reliance on activity by a controller that manages operability of the full-mesh network. As a result, the gateway precludes messages from being routed through the failed IPSec tunnel to mitigate data transmission loss. Instead, the messages are routed through a selected active IPSec tunnel, which may be reassigned to communicate with all or some of the instances within a particular instance subnet. In response to the IPSec tunnel becoming operational again (i.e., the IPSec tunnel goes up), the gateway will bring up the corresponding tunnel interface and recover the routing path if it was removed from the gateway routing table (e.g., a routing path is removed when all of the IPSec tunnels to a particular destination become disabled).
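As an illustrative, non-limiting sketch of this autonomous failover behavior, the following Python model (class and method names are hypothetical, not drawn from the disclosure) shows a gateway disabling a failed tunnel's interface and later recovering the route from a backing data store, with no controller involvement:

```python
from dataclasses import dataclass, field

@dataclass
class GatewayRoutingTable:
    """Simplified per-gateway routing table (illustrative only)."""
    tunnels: dict = field(default_factory=dict)       # name -> {"up", "metric"}
    saved_routes: dict = field(default_factory=dict)  # data store of known routes

    def add_tunnel(self, name, metric):
        self.tunnels[name] = {"up": True, "metric": metric}
        self.saved_routes[name] = metric

    def on_tunnel_down(self, name):
        # Disable the virtual tunnel interface autonomously, with no
        # controller involvement, so no traffic is routed into it.
        if name in self.tunnels:
            self.tunnels[name]["up"] = False

    def on_tunnel_up(self, name):
        # Bring the interface back up; recover the routing path from the
        # data store if it was removed from the table entirely.
        if name in self.tunnels:
            self.tunnels[name]["up"] = True
        elif name in self.saved_routes:
            self.tunnels[name] = {"up": True,
                                  "metric": self.saved_routes[name]}

    def active_tunnels(self):
        return [n for n, t in self.tunnels.items() if t["up"]]
```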
(16) Additionally, the full-mesh network provides another technological advantage by avoiding time-intensive reprogramming of a virtual private cloud (VPC) routing table relied upon for determining a routing path between an identified source and destination. This may be accomplished by establishing one or more secondary tunnels for each routing path, where the secondary tunnel provides an alternative routing path via a gateway residing within the same VPC (e.g., gateways within the spoke VPC, gateways within the transit VPC, etc.). Each secondary tunnel supports the transmission of data through the alternative routing path when all of the IPSec tunnels from a particular gateway have failed. Hence, secondary tunnels enable another gateway to operate as an intermediary device to support continued communications from the particular gateway with a remote peer destination (e.g., cloud instance, on-prem computing device, etc.). Each of the secondary tunnels may be configured in accordance with the Generic Routing Encapsulation (GRE) tunnel protocol to secure communications between gateways within the same VPC. However, it is contemplated that another tunneling protocol, such as any IP-routable tunnel based on Private IP addressing, inclusive of IPSec, may be used other than GRE.
(17) Routing path selection via the gateways within the VPCs may be accomplished through an equal cost multi-path (ECMP) routing strategy, namely next-hop message forwarding to a single destination can occur over multiple “best” paths that are determined in accordance with an assigned ECMP metric. Hence, the IPSec tunnels associated with a gateway (e.g., spoke gateway or transit gateway) are assigned equivalent ECMP metrics that are lower than the ECMP metrics assigned to any of the secondary (GRE) tunnels.
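A minimal sketch of this metric-based path selection (route records and names below are illustrative assumptions) treats the ECMP "best" set as every active route tied at the lowest metric, so the higher-metric secondary (GRE) route is chosen only after every IPSec route is down:

```python
def select_ecmp_paths(routes):
    """Return the names of all active routes tied at the lowest metric
    (the ECMP best-path set). Secondary GRE routes carry a higher
    metric, so they are selected only once every IPSec route is down."""
    active = [r for r in routes if r["up"]]
    if not active:
        return []
    best = min(r["metric"] for r in active)
    return [r["name"] for r in active if r["metric"] == best]

routes = [
    {"name": "ipsec-1", "metric": 100, "up": True},
    {"name": "ipsec-2", "metric": 100, "up": True},
    {"name": "gre-backup", "metric": 200, "up": True},
]
print(select_ecmp_paths(routes))  # -> ['ipsec-1', 'ipsec-2']
```

Only when both IPSec routes go down does the function fall through to the GRE backup, mirroring the ordering described above.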
(18) Besides the network architecture per se, the operability (method) performed by the system for establishing the load-balanced, full-mesh network to mitigate disruption of communications directed to or from the VPCs is described. Herein, a controller managing operability of the public cloud network configures one or more spoke VPCs by segregating cloud instances within each spoke VPC to particular subnets. A “subnet” is a segment of a VPC's IP address range designated to group resources (e.g., managed software instances each directed to particular functionality) based on security and operational needs. Hence, each instance subnet established within a spoke VPC may be a collection of instances for that spoke VPC that are selected to communicate with a selected spoke gateway residing in the spoke VPC.
(19) Thereafter, the controller collects VPC information (e.g., VPC subnet allocations, VPC routing tables and their association with subnets) and configures a VPC routing table associated with each spoke gateway to establish communication links (e.g., logical connections) between a certain spoke gateway and cloud instances associated with a particular instance subnet. The VPC routing table is programmed to support communication links between different sources and destinations, such as an on-prem computing devices, a cloud instance within a particular instance subnet or the like.
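The association between instance subnets and spoke gateways may be sketched as a simple subnet lookup (the subnet ranges and gateway names below are hypothetical, not taken from the disclosure):

```python
import ipaddress

# Hypothetical VPC routing table: each instance subnet is associated
# with the spoke gateway the controller selected for it.
vpc_routing_table = {
    ipaddress.ip_network("10.1.1.0/24"): "spoke-gw-1",
    ipaddress.ip_network("10.1.2.0/24"): "spoke-gw-2",
}

def gateway_for(instance_ip):
    """Resolve which spoke gateway handles traffic from an instance,
    based on the subnet the instance's address falls within."""
    addr = ipaddress.ip_address(instance_ip)
    for subnet, gateway in vpc_routing_table.items():
        if addr in subnet:
            return gateway
    return None  # no communication link programmed for this source
```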
(20) Besides the VPC routing tables for each of the spoke gateways, the controller may be adapted to configure gateway routing tables for each of the gateways within the VPCs of the full-mesh network. More specifically, according to one embodiment of the disclosure, the controller may be configured to initially program gateway routing tables for both spoke gateways residing within the spoke VPC(s) and transit gateways residing within the transit VPC(s). The gateway routing tables are relied upon by the gateways for determining which tunnels to use for propagating data traffic (e.g., messages) towards a destination (e.g., a virtual tunnel interface for a destination cloud instance or computing device). For this embodiment of the disclosure, the gateway routing tables include both IPSec tunnels and secondary (e.g., GRE) tunnels between gateways within the same VPC, the latter to be used in the event that all of the IPSec tunnels have failed.
(21) The gateway routing tables are accessible to their corresponding gateways, and are updated by these gateways. For example, in response to a failed IPSec tunnel (e.g., a change in tunnel state), the gateway associated with the failed IPSec tunnel disables its virtual tunnel interface (VTI). By disabling the VTI associated with the failed IPSec tunnel, further data transmissions over the failed IPSec tunnel are prevented. The disabling of the VTI may be conducted by a gateway (e.g., spoke gateway or transit gateway) without further operability by the controller.
(22) Logic within the gateway detects reversion in the tunnel state (e.g., the IPSec tunnel is active again) and, if so, the gateway re-activates the tunnel interface (e.g., removes the “disabled” tag and/or resets the “active” tag) or recovers the routing path associated with the previously failed IPSec tunnel if it was removed from the gateway routing table. This recovery of the routing path may be accomplished by accessing a data store (e.g., database) associated with the gateway that maintains the routing paths available to that gateway, including failed (disabled) IPSec tunnels.
(23) Based on the foregoing, the reduction in VPC routing table programming is made available through the configuration of the secondary (e.g., GRE) tunnels. End-to-end load balancing is achieved by the network architecture using two techniques at different stages. First, from VPC instances to the spoke gateways: each VPC instance resides within a routing subnet, each subnet is associated with a routing table, and the routing table forwards data traffic from the instance to a spoke gateway. Traffic from source instances in different subnets (routing tables) is sent to different spoke gateways, instead of all source instances sending traffic to one spoke gateway as in an active-standby scheme. Second, between a spoke gateway and a transit gateway, or between a transit gateway and on-premises routers, load balancing may be based on analytics conducted on a 5-tuple of a message (e.g., source IP address; source port; destination IP address; destination port; protocol) routed between the spoke gateway and transit gateway or between the transit gateway and on-premises routers. The analytics may be a one-way hash operation in which the results (or a portion of the results) are used to select a particular ECMP link in the routing table for transmission of the data traffic.
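The second-stage, 5-tuple-based link selection may be sketched as follows (the SHA-256 construction shown is one plausible instance of a one-way hash operation, chosen for illustration rather than drawn from the disclosure):

```python
import hashlib

def pick_ecmp_link(five_tuple, links):
    """Select one of the equal-cost links by hashing the message's
    5-tuple (source IP, source port, destination IP, destination port,
    protocol). A given flow always maps to the same tunnel, while
    distinct flows spread across all tunnels."""
    key = "|".join(str(field) for field in five_tuple).encode()
    digest = hashlib.sha256(key).digest()
    # Use a portion of the hash result to index into the ECMP link set.
    return links[int.from_bytes(digest[:4], "big") % len(links)]

links = ["ipsec-1", "ipsec-2", "ipsec-3"]
flow = ("10.1.1.5", 443, "192.168.0.9", 8443, "tcp")
chosen = pick_ecmp_link(flow, links)  # deterministic for this flow
```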
(24) Further details of the logic associated with one embodiment of the load-balanced, full-mesh network system architecture are described below:
(25) Instance Subnets: Multiple instance subnets may be generated in a spoke VPC so that instances forming a particular instance subnet are forwarded to a selected spoke gateway.
(26) VPC routing table(s): A VPC routing table may be used to associate spoke gateways within each VPC with one or more different instance subnets. Load balancing is achieved by implementing the full-mesh network system, where identical, equal cost multi-path (ECMP) routing parameters are assigned to each of the gateways and a secondary tunnel is established between each peer gateway pair within the same VPC. Therefore, the VPC routing table requires no programming unless a gateway becomes disabled (i.e., goes down), in which case the VPC routing table may be remapped based on the results of the 5-tuple analytics mapped to the remainder of the active gateways within the VPC.
(27) Gateways: Multiple gateways are deployed in a VPC that control the flow of data traffic from instances of the VPC to one or more remote sites including computing devices that may process data received from the instances. Having similar architectures, the gateways may be identified differently based on their location/operability within a public cloud network platform. The “spoke” gateways are configured to interact with targeted instances while “transit” gateways are configured to further assist in the propagation of data traffic (e.g., one or more messages) directed to a spoke gateway within a spoke VPC or a computing device within an on-premises network.
(28) IPSec tunnels: Secure peer-to-peer communication links established between gateways of neighboring VPCs or between gateways of a VPC and a router of an on-premises network. The peer-to-peer communication links are secured through a secure network protocol suite referred to as “Internet Protocol Security” (IPSec). With respect to the full-mesh network deployment, as an illustrative example, where a spoke VPC has “M” gateways and a neighboring (transit) VPC has “N” gateways, M×N IPSec tunnels are created between the spoke VPC and the transit VPC to form the full-mesh network. These IPSec tunnels are represented in the gateways by virtual tunnel interfaces (VTIs), and the tunnel states are represented by VTI states.
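The M×N tunnel enumeration can be sketched as follows (gateway names are illustrative):

```python
import itertools

def full_mesh_tunnels(spoke_gateways, transit_gateways):
    """Enumerate the M x N IPSec tunnels of a full mesh between a spoke
    VPC with M gateways and a transit VPC with N gateways."""
    return [f"ipsec:{s}<->{t}"
            for s, t in itertools.product(spoke_gateways, transit_gateways)]

tunnels = full_mesh_tunnels(["spoke-1", "spoke-2"],
                            ["transit-1", "transit-2", "transit-3"])
print(len(tunnels))  # -> 6, i.e. M=2 times N=3
```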
(29) Gateway routing: In a gateway routing table, routing paths between the gateway and an IP-addressable destination at which the tunnel terminates (e.g., another gateway, an on-prem computing device, etc.), identified by a virtual tunnel interface (VTI) for example, are programmed with ECMP routing parameters, namely identical routing weights and ECMP metrics. Given that consistent ECMP metrics are assigned to the IPSec tunnels, the selected routing path towards the remote network may be based on analytics conducted on certain information associated with the data traffic (e.g., the 5-tuple). These analytics may include conducting a one-way hash operation on the 5-tuple information, where a portion of the hash value may be used to identify the selected IPSec tunnel. If the state of any IPSec tunnel changes, i.e., the tunnel is disabled (or re-activated), the corresponding VTI may be removed from (or added to) consideration as a termination point for the selected routing path.
(30) Secondary tunnels: Each of the gateways in the same VPC may be configured to create secondary (backup) communication links (e.g., GRE tunnels) towards all other gateways within that VPC, also represented by VTIs. For example, with respect to a VPC including M gateways, each gateway will have M−1 secondary communication links. Herein, these secondary communication links are assigned higher metric values within the gateway routing table than communication links (e.g., IPSec tunnels) pointing to a remote peer gateway. Therefore, according to one embodiment of the disclosure, a secondary communication link (e.g., GRE tunnel) will not forward traffic until all of the IPSec tunnels for that particular gateway have become disabled (gone down).
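The secondary-link construction above can be sketched as follows (the metric values 100 and 200 are illustrative assumptions standing in for the lower IPSec and higher GRE metrics):

```python
def secondary_gre_links(gateways, gre_metric=200):
    """Build the backup GRE links within one VPC: each gateway gets a
    link toward every other gateway in the same VPC (M-1 links each),
    carrying a higher metric than any IPSec tunnel (assumed 100 here)
    so it forwards traffic only after all IPSec tunnels have gone down."""
    return {
        gw: [{"peer": other, "proto": "gre", "metric": gre_metric}
             for other in gateways if other != gw]
        for gw in gateways
    }

mesh = secondary_gre_links(["gw-1", "gw-2", "gw-3"])
# each of the M=3 gateways carries M-1 = 2 backup GRE links
```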
I. TERMINOLOGY
(31) In the following description, certain terminology is used to describe features of the invention. In certain situations, the terms “logic” and “computing device” are representative of hardware, software or a combination thereof, which is configured to perform one or more functions. As hardware, the logic (or device) may include circuitry having data processing or storage functionality. Examples of such circuitry may include, but are not limited or restricted to, a microprocessor, one or more processor cores, a programmable gate array, a microcontroller, an application specific integrated circuit, wireless receiver, transmitter and/or transceiver circuitry, semiconductor memory, or combinatorial logic.
(32) Alternatively, or in combination with the hardware circuitry described above, the logic (or computing device) may be software in the form of one or more software modules. The software module(s) may include an executable application, an application programming interface (API), a subroutine, a function, a procedure, an applet, a servlet, a routine, source code, a shared library/dynamic load library, or one or more instructions. The software module(s) may be stored in any type of a suitable non-transitory storage medium, or transitory storage medium (e.g., electrical, optical, acoustical or other form of propagated signals such as carrier waves, infrared signals, or digital signals). Examples of non-transitory storage medium may include, but are not limited or restricted to a programmable circuit; a semiconductor memory; non-persistent storage such as volatile memory (e.g., any type of random access memory “RAM”); persistent storage such as non-volatile memory (e.g., read-only memory “ROM”, power-backed RAM, flash memory, phase-change memory, etc.), a solid-state drive, hard disk drive, an optical disc drive, or a portable memory device. As software, the logic may operate as firmware stored in persistent storage.
(33) The term “computerized” generally represents that any corresponding operations are conducted by hardware in combination with software.
(34) The term “gateway” may be construed as a virtual or physical logic. For instance, as an illustrative example, the gateway may correspond to virtual logic in the form of a software component, such as a virtual machine (VM)-based data routing component that is assigned a Private IP address within an IP address range associated with a VPC including the gateway. The gateway allows Cloud Service Providers (CSPs) and enterprises to enable datacenter and cloud network traffic routing between virtual and physical networks, including a public network (e.g., Internet). Alternatively, in some embodiments, the gateway may correspond to physical logic, such as an electronic device that is communicatively coupled to the network and assigned the hardware (MAC) address and IP address.
(35) The term “cloud-based networking infrastructure” generally refers to a combination of software instances generated based on execution of certain software by hardware associated with the public cloud network. Each software instance may constitute a virtual network resource associated with the public cloud network, such as a switch, server or the like.
(36) The term “message” generally refers to information in a prescribed format and transmitted in accordance with a suitable delivery protocol. Hence, each message may be in the form of one or more packets, frames, or any other series of bits having the prescribed format.
(37) The term “transmission medium” may be construed as a physical or logical communication path between two or more electronic devices. For instance, as a physical communication path, wired and/or wireless interconnects in the form of electrical wiring, optical fiber, cable, bus trace, or a wireless channel using infrared, radio frequency (RF), may be used.
(38) Finally, the terms “or” and “and/or” as used herein are to be interpreted as inclusive or meaning any one or any combination. As an example, “A, B or C” or “A, B and/or C” mean “any of the following: A; B; C; A and B; A and C; B and C; A, B and C.” An exception to this definition will occur only when a combination of elements, functions, steps or acts are in some way inherently mutually exclusive.
(39) As this invention is susceptible to embodiments of many different forms, it is intended that the present disclosure is to be considered as an example of the principles of the invention and not intended to limit the invention to the specific embodiments shown and described.
II. GENERAL SYSTEM ARCHITECTURE
(40) Referring to
(41) As shown, the spoke VPC 120 is configured with multiple VPC subnetworks 145 (hereinafter, “subnets”), where each of these subnets 145 includes different cloud instances. Each of the instance subnets 145.sub.1 . . . , or 145.sub.P (P≥2) is configured, in accordance with a VPC routing table 150, to exchange data traffic with a selected gateway of a set of (e.g., two or more) gateways 125.sub.1-125.sub.M (M≥2) maintained in the spoke VPC 120. Herein, these gateways 125.sub.1-125.sub.M are referred to as “spoke gateways” 125.sub.1-125.sub.M. More specifically, a controller 160 for the full-mesh network 110 is configured to manage communication links between the instance subnets 145.sub.1-145.sub.P and the set of spoke gateways 125.sub.1-125.sub.M as represented by the VPC routing table 150, which is initially programmed to identify which spoke gateway 125.sub.1 . . . or 125.sub.M is responsible for interacting with one or more instance subnets 145.sub.1 . . . , or 145.sub.P (e.g., to receive message(s), forward message(s), etc.).
(42) Referring still to
(43) The spoke gateways 125.sub.1-125.sub.M are configured for communications with transit gateways 135.sub.1-135.sub.N via peer-to-peer communication links 127.sub.11-127.sub.MN. In particular, each spoke gateway 125.sub.i (1≤i≤M) is communicatively coupled to each of the transit gateways 135.sub.1-135.sub.N via multiple, active peer-to-peer communication links 127.sub.i1-127.sub.iN. Similarly, each transit gateway 135.sub.j (1≤j≤N) is communicatively coupled to each of the spoke gateways 125.sub.1-125.sub.M via multiple, active peer-to-peer communication links 127.sub.1j-127.sub.Mj. The peer-to-peer communication links 127.sub.11-127.sub.MN may constitute cryptographically secure tunnels, such as tunnels operating in accordance with a secure network protocol. One example of a secure network protocol may include, but is not limited or restricted to Internet Protocol Security (IPSec). Hence, the VPC-to-VPC tunnels may be referred to as “IPSec tunnels.”
(44) In general terms, for the full-mesh network 110 that features the spoke VPC 120 including “M” spoke gateways and the neighboring transit VPC 130 including “N” transit gateways, M×N IPSec tunnels 127.sub.11-127.sub.MN are created between the spoke VPC 120 and the transit VPC 130. The IPSec tunnels 127.sub.11-127.sub.MN may be established and maintained through gateway routing tables 170.sub.1-170.sub.M dedicated to each of the spoke gateways 125.sub.1-125.sub.M, respectively. For example, a first gateway routing table 170.sub.1 determines which IPSec tunnel 127.sub.11-127.sub.1N to use in forwarding a message from one of the cloud instances 140 assigned to the first spoke gateway 125.sub.1 to a destination instance (not shown) reachable via one of the on-prem computing device(s) 180.
(45) As an illustrative example, as shown specifically in
(46) Referring now to
(47) Referring back to
(48) As an illustrative example, a first transit gateway routing table 175.sub.1 determines which IPSec tunnel 137.sub.11-137.sub.12 to use in forwarding a message received from the spoke VPC 120 and directed to the destination instance reachable via one of the on-prem computing devices 180.sub.1 and 180.sub.2. As shown in
(49) Additionally, the full-mesh network 110 provides another technological advantage by establishing more reliable communications by configuring each of the gateways 125.sub.1-125.sub.M and 135.sub.1-135.sub.N with secondary tunnels to support data traffic when all IPSec tunnels for a particular gateway have failed. As an illustrative example, as shown in
(50) Herein, the GRE tunnel formation among the spoke gateways 125.sub.1-125.sub.M (M≥2) within the spoke VPC 120 is described in detail, given that the GRE tunnel formation for the transit gateways 135.sub.1-135.sub.N within the transit VPC 130 is consistent therewith. In general, the spoke gateways 125.sub.1-125.sub.M are configured with GRE tunnels towards all other gateways in the spoke VPC 120, where the GRE tunnels may be maintained within the gateway routing tables 170.sub.1-170.sub.M and terminated by VTIs associated with the corresponding gateways. For this embodiment, for the spoke VPC 120, the first spoke gateway 125.sub.1 would be configured with “M−1” backup GRE tunnels, such as GRE tunnel 129.sub.12 established between the first spoke gateway 125.sub.1 and the second spoke gateway 125.sub.2. Similarly, “M−2” GRE tunnels may be established between the second spoke gateway 125.sub.2 and any of the remaining gateways 125.sub.3-125.sub.M within the spoke VPC 120. As shown, the first spoke gateway 125.sub.1 is configured with GRE tunnel 129.sub.12, which establishes secondary communications with the second spoke gateway 125.sub.2.
(51) The GRE tunnels may be programmed with different ECMP metrics to designate an order of selection in case any GRE tunnel fails due to failure of its assigned gateway. Also, the ECMP metrics associated with the GRE tunnels are set higher than the ECMP metrics associated with any of the IPSec tunnels, so that GRE routing is selected only if routing via an IPSec tunnel is unavailable. Hence, as shown, neither of the gateways 125.sub.1 and 125.sub.2 will forward data traffic via the GRE tunnel 129.sub.12 until all IPSec tunnels towards remote peers are down (disabled).
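The metric ordering above can be sketched as a simple lowest-metric route selection. The route names and metric values here are illustrative assumptions, not values from the disclosure:

```python
# Sketch: IPSec routes carry a lower ECMP metric than GRE backups, so a
# GRE route is chosen only when every IPSec route toward the peer is down.
IPSEC_METRIC, GRE_METRIC = 10, 100

routes = [
    {"via": "ipsec-11", "metric": IPSEC_METRIC, "up": True},
    {"via": "ipsec-12", "metric": IPSEC_METRIC, "up": True},
    {"via": "gre-12",   "metric": GRE_METRIC,   "up": True},
]

def select_route(routes):
    """Pick a live route with the lowest metric (equal metrics share load)."""
    live = [r for r in routes if r["up"]]
    return min(live, key=lambda r: r["metric"]) if live else None

assert select_route(routes)["via"].startswith("ipsec")
for r in routes[:2]:
    r["up"] = False                    # all IPSec tunnels to the peer fail
assert select_route(routes)["via"] == "gre-12"   # backup GRE takes over
```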
(52) Referring still to
(53) Referring to
(54) The gateway 300 may be configured with routing logic 350 and a data store 360. As shown in
(55) As an optional component, the gateway 300 may include NAT logic 395 which, when executed, is configured to perform translation of the IP addresses for data packets transmitted between the spoke VPC 120 and the transit VPC 130. For example, in the Internet gateway 300, the NAT logic 395 may create a destination NAT entry to translate a private IP address associated with the source 310 residing within the spoke VPC 120 into a private IP address utilized by the transit VPC 130 in which the destination 320 is located. Similarly, an inverse translation is conducted, where the private IP address associated with the transit VPC 130 may be translated back into the private IP address associated with the spoke VPC 120.
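The forward and inverse translations can be sketched as a pair of lookup tables. The addresses below are made up for illustration; the NAT entry structure is an assumption, not the disclosed implementation of NAT logic 395:

```python
# Sketch: a NAT entry maps a private IP in the spoke VPC to the private IP
# used in the transit VPC; the inverse table restores it on return traffic.
nat_map = {"10.1.0.5": "10.2.0.5"}          # spoke-VPC IP -> transit-VPC IP
inverse = {v: k for k, v in nat_map.items()}

def nat(ip, table):
    """Translate an address if an entry exists, else pass it through."""
    return table.get(ip, ip)

assert nat("10.1.0.5", nat_map) == "10.2.0.5"
assert nat(nat("10.1.0.5", nat_map), inverse) == "10.1.0.5"  # round trip
```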
(56) Referring now to
(57) Similar to the architecture of the public cloud computing platform 100 described in
(58) As further shown in
(59) The communications between the second VPC 420 and the fourth VPC 440 provide a reliable communication scheme among multiple VPCs featuring spoke gateways, which enables a user to access cloud instances within the first VPC 410 via the first set of spoke gateways 415 and within the second VPC 420 via the second set of spoke gateways 425. Also, when multiple VPCs are deployed and support inter-communications, this spoke-hub architecture has advantages over fully meshed direct peering between VPCs: it is more cost effective (e.g., fewer peering connections needed, lower requirements for VPC gateway resources, etc.), easier to manage, and the like.
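The cost advantage noted above follows from simple counting: fully meshed direct peering of V VPCs requires V(V−1)/2 peering connections, whereas the spoke-hub layout requires only V links to the transit hub. A back-of-the-envelope check:

```python
# Sketch: compare connection counts for full-mesh VPC peering versus a
# hub-and-spoke topology with a single transit hub.
def full_mesh_peerings(v):
    return v * (v - 1) // 2   # one peering per unordered VPC pair

def hub_spoke_links(v):
    return v                  # one link from each spoke VPC to the hub

assert full_mesh_peerings(10) == 45
assert hub_spoke_links(10) == 10   # far fewer connections to manage
```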
(60) As further shown in
III. OPERATIONAL FLOW
(61) Referring now to
(62) Additionally, the controller may initially configure each of the gateway routing tables to create mesh-style, peer-to-peer communication links between remote gateways, such as spoke and transit gateways implemented on different VPCs (e.g., IPSec tunnels), as well as communication links between transit gateways and computing devices (e.g., routers) of the on-premises network (block 520). For load balancing, each of the communication links, represented as routing paths, may be configured with equal-cost multi-path (ECMP) routing parameters (e.g., identical routing weights and ECMP metrics) so that sources may rely on the routing paths equally. Additionally, these gateway routing tables may include peer-to-peer communication links (secondary tunnels) between spoke gateways or between transit gateways within the same VPC (block 530). As a result, the VPC routing table and the gateway routing tables specific to each gateway are generated, where each gateway is now responsible for altering its gateway routing table to address state changes within the IPSec tunnels and GRE tunnels.
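The initial configuration step can be sketched as generating routing entries with identical ECMP metrics for all IPSec paths and a uniformly higher metric for the intra-VPC GRE backups. The entry fields and metric values are assumptions for illustration:

```python
# Sketch: controller-side generation of a gateway routing table. All IPSec
# routing paths get the same metric (so ECMP balances load across them);
# GRE secondary tunnels get a strictly higher metric.
def initial_routes(ipsec_tunnels, gre_tunnels, ipsec_metric=10, gre_metric=100):
    routes = []
    for t in ipsec_tunnels:
        routes.append({"tunnel": t, "metric": ipsec_metric, "vti_state": "up"})
    for t in gre_tunnels:
        routes.append({"tunnel": t, "metric": gre_metric, "vti_state": "up"})
    return routes

routes = initial_routes(["ipsec-11", "ipsec-12"], ["gre-12"])
ipsec_metrics = {r["metric"] for r in routes if r["tunnel"].startswith("ipsec")}
assert len(ipsec_metrics) == 1                    # identical, enabling ECMP
assert routes[-1]["metric"] > ipsec_metrics.pop() # GRE ranked below IPSec
```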
(63) In response to a communication path failure, such as an IPSec tunnel becoming disabled, for example, the spoke or transit gateway associated with the failed IPSec tunnel disables the communication link (routing path) by altering the VTI state within an entry associated with the disabled IPSec tunnel (blocks 540 and 550;
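The failure response above can be sketched as a small state update against the routing table: the owning gateway marks the failed tunnel's VTI entry "down" so that forwarding falls through to the remaining routes. Entry names are invented for the sketch:

```python
# Sketch: on IPSec tunnel failure, flip the tunnel's VTI state to "down"
# in the gateway routing table; only "up" entries remain usable, so the
# higher-metric GRE backup is reached once every IPSec entry is disabled.
def on_tunnel_failure(routing_table, tunnel):
    for entry in routing_table:
        if entry["tunnel"] == tunnel:
            entry["vti_state"] = "down"   # disable this routing path

def usable(routing_table):
    return [e for e in routing_table if e["vti_state"] == "up"]

table = [
    {"tunnel": "ipsec-11", "vti_state": "up", "metric": 10},
    {"tunnel": "gre-12",   "vti_state": "up", "metric": 100},
]
on_tunnel_failure(table, "ipsec-11")
assert [e["tunnel"] for e in usable(table)] == ["gre-12"]
```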
(64) Embodiments of the invention may be embodied in other specific forms without departing from the spirit of the present disclosure. The described embodiments are to be considered in all respects only as illustrative, not restrictive. The scope of the embodiments is, therefore, indicated by the appended claims rather than by the foregoing description. All changes that come within the meaning and range of equivalency of the claims are to be embraced within their scope.