Methods for identifying a source location in a service chaining topology
11652666 · 2023-05-16
Inventors
- Mansi Babbar (Fremont, CA, US)
- Subin Cyriac Mathew (San Jose, CA, US)
- Chidambareswaran Raman (Campbell, CA, US)
CPC classification
- H04L2101/622 (Electricity)
- G06F9/45545 (Physics)
- G06F2009/45595 (Physics)
- H04L12/4633 (Electricity)
- H04L12/4679 (Electricity)
- H04L61/103 (Electricity)
International classification
- G06F9/455 (Physics)
- H04L45/741 (Electricity)
Abstract
In an embodiment, a computer-implemented method provides mechanisms for identifying a source location in a service chaining topology. The method comprises: receiving a query, from a service plane implementation module executing on a host of a service virtual machine (“SVM”), for a location of a source host implementing a guest virtual machine (“source GVM”) that originated, in a computer network, a packet that has been serviced; in response to receiving the query, performing a search of bindings associated with one or more virtual network identifiers (“VNIs”) or service virtual network identifiers (“SVNIs”) to identify a particular binding that includes a MAC address of the source GVM; identifying, in the particular binding, the location of the source host; and providing the location of the source host to the host of the SVM to facilitate forwarding of the packet from the SVM to the source GVM.
Claims
1. A computer-implemented method for identifying a source location of a packet-originating guest virtual machine in a service chaining topology, the method comprising: receiving a query, from a service plane implementation module executing on a host of a service virtual machine (“SVM”), for a location of a source host implementing a guest virtual machine (“source GVM”) that originated, in a computer network, a packet that has been serviced; in response to receiving the query, performing a search of one or more bindings associated with one or more virtual network identifiers (“VNIs”) or service virtual network identifiers (“SVNIs”) to identify a particular binding that includes a MAC address of the source GVM; identifying, in the particular binding, the location of the source host; and providing the location of the source host to the host of the SVM to facilitate forwarding of the packet from the SVM to the source GVM, wherein the SVM is a last SVM in a service chain defined for servicing the packet originated at the source GVM.
2. The computer-implemented method of claim 1, wherein the one or more bindings associated with the one or more VNIs or SVNIs are received by a controller, configured to control one or more computer hosts that implement the SVM and the source GVM, from the one or more computer hosts that implement the SVM and the source GVM; and wherein a binding, of the one or more bindings, comprises an association between a virtual machine MAC address and a virtual tunnel endpoint (“VTEP”) MAC address and a VTEP Internet Protocol (“IP”) address.
3. The computer-implemented method of claim 2, wherein the query is received and processed by the controller configured to control the one or more computer hosts that implement the SVM and the source GVM.
4. The computer-implemented method of claim 2, wherein the one or more bindings associated with the one or more VNIs or SVNIs are received by a data path process executing in a hypervisor implemented in a host that supports the SVM; and wherein the query is received and processed by the data path process executing in the hypervisor implemented in the host that supports the SVM.
5. The computer-implemented method of claim 1, wherein the location of the source host that implements the source GVM includes both a MAC address and an IP address of a VTEP to which the source GVM is connected.
6. The computer-implemented method of claim 1, wherein the host of the SVM uses information about the location of the source host to encapsulate a serviced packet and provide the encapsulated packet to the source GVM.
7. One or more non-transitory computer-readable storage media storing one or more computer instructions which, when executed by one or more processors, cause the one or more processors to perform: receiving a query, from a service plane implementation module executing on a host of a service virtual machine (“SVM”), for a location of a source host implementing a guest virtual machine (“source GVM”) that originated, in a computer network, a packet that has been serviced; in response to receiving the query, performing a search of one or more bindings associated with one or more virtual network identifiers (“VNIs”) or service virtual network identifiers (“SVNIs”) to identify a particular binding that includes a MAC address of the source GVM; identifying, in the particular binding, the location of the source host; and providing the location of the source host to the host of the SVM to facilitate forwarding of the packet from the SVM to the source GVM, wherein the SVM is a last SVM in a service chain defined for servicing the packet originated at the source GVM.
8. The one or more non-transitory computer-readable storage media of claim 7, wherein the one or more bindings associated with the one or more VNIs or SVNIs are received by a controller, configured to control one or more computer hosts that implement the SVM and the source GVM, from the one or more computer hosts that implement the SVM and the source GVM; and wherein a binding, of the one or more bindings, comprises an association between a virtual machine MAC address and a virtual tunnel endpoint (“VTEP”) MAC address and a VTEP Internet Protocol (“IP”) address.
9. The one or more non-transitory computer-readable storage media of claim 8, wherein the query is received and processed by the controller configured to control the one or more computer hosts that implement the SVM and the source GVM.
10. The one or more non-transitory computer-readable storage media of claim 8, wherein the one or more bindings associated with the one or more VNIs or SVNIs are received by a data path process executing in a hypervisor implemented in a host that supports the SVM; and wherein the query is received and processed by the data path process executing in the hypervisor implemented in the host that supports the SVM.
11. The one or more non-transitory computer-readable storage media of claim 7, wherein the location of the source host that implements the source GVM includes both a MAC address and an IP address of a VTEP to which the source GVM is connected.
12. The one or more non-transitory computer-readable storage media of claim 7, wherein the host of the SVM uses information about the location of the source host to encapsulate a serviced packet and provide the encapsulated packet to the source GVM.
13. A hypervisor implemented in a computer network and configured to implement mechanisms for identifying a source location of a packet-originating guest virtual machine in a service chaining topology, the hypervisor comprising: one or more processors; one or more memory units; and one or more non-transitory computer-readable storage media storing one or more computer instructions which, when executed by the one or more processors, cause the one or more processors to perform: receiving a query, from a service plane implementation module executing on a host of a service virtual machine (“SVM”), for a location of a source host implementing a guest virtual machine (“source GVM”) that originated, in a computer network, a packet that has been serviced; in response to receiving the query, performing a search of one or more bindings associated with one or more virtual network identifiers (“VNIs”) or service virtual network identifiers (“SVNIs”) to identify a particular binding that includes a MAC address of the source GVM; identifying, in the particular binding, the location of the source host; and providing the location of the source host to the host of the SVM to facilitate forwarding of the packet from the SVM to the source GVM, wherein the location of the source host that implements the source GVM includes both a MAC address and an IP address of a VTEP to which the source GVM is connected.
14. The hypervisor of claim 13, wherein the one or more bindings associated with the one or more VNIs or SVNIs are received by a controller, configured to control one or more computer hosts that implement the SVM and the source GVM, from the one or more computer hosts that implement the SVM and the source GVM; and wherein a binding, of the one or more bindings, comprises an association between a virtual machine MAC address and a virtual tunnel endpoint (“VTEP”) MAC address and a VTEP Internet Protocol (“IP”) address.
15. The hypervisor of claim 14, wherein the one or more bindings associated with the one or more VNIs or SVNIs are received by a data path process executing in the hypervisor implemented in a host that supports the SVM; and wherein the query is received and processed by the data path process executing in the hypervisor implemented in the host that supports the SVM.
16. The hypervisor of claim 13, wherein the SVM is a last SVM in a service chain defined for servicing the packet originated at the source GVM.
17. The hypervisor of claim 13, wherein the host of the SVM uses information about the location of the source host to encapsulate a serviced packet and provide the encapsulated packet to the source GVM.
18. One or more non-transitory computer-readable storage media storing one or more computer instructions which, when executed by one or more processors, cause the one or more processors to perform: receiving a query, from a service plane implementation module executing on a host of a service virtual machine (“SVM”), for a location of a source host implementing a guest virtual machine (“source GVM”) that originated, in a computer network, a packet that has been serviced; in response to receiving the query, performing a search of one or more bindings associated with one or more virtual network identifiers (“VNIs”) or service virtual network identifiers (“SVNIs”) to identify a particular binding that includes a MAC address of the source GVM; identifying, in the particular binding, the location of the source host; and providing the location of the source host to the host of the SVM to facilitate forwarding of the packet from the SVM to the source GVM, wherein the location of the source host that implements the source GVM includes both a MAC address and an IP address of a VTEP to which the source GVM is connected.
19. The one or more non-transitory computer-readable storage media of claim 18, wherein the one or more bindings associated with the one or more VNIs or SVNIs are received by a controller, configured to control one or more computer hosts that implement the SVM and the source GVM, from the one or more computer hosts that implement the SVM and the source GVM; and wherein a binding, of the one or more bindings, comprises an association between a virtual machine MAC address and a virtual tunnel endpoint (“VTEP”) MAC address and a VTEP Internet Protocol (“IP”) address.
20. The one or more non-transitory computer-readable storage media of claim 19, wherein the query is received and processed by the controller configured to control the one or more computer hosts that implement the SVM and the source GVM.
Description
DETAILED DESCRIPTION
(10) In the following description, for the purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the method described herein. It will be apparent, however, that the present approach may be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form to avoid unnecessarily obscuring the present approach.
(11) 1. Example Physical Implementations
(13) A management plane (“MP”) 104 may include multiple computing devices that implement management plane functions. MP 104 may be responsible for receiving network configuration input through an application programming interface (“API”), a command-line interface, and/or a graphical user interface. The network configuration input may specify, for example, how multiple VMs executing on the hosts of datacenters 160A-160B may communicate with each other. The network configuration input may include, for example, MAC addresses and IP addresses of virtual networking elements implemented in datacenters 160A-160B.
(14) In an embodiment, datacenter 160A/160B includes a central control plane (“CCP”) cluster 110A/110B that manages datacenter 160A/160B, respectively. Each CCP cluster 110A/110B may include a plurality of control planes to provide redundancy, reliability, fault tolerance, and load balancing. Datacenters 160A-160B may also include hosts 150A-150B.
(15) CCP clusters 110A-110B may be responsible for exchanging runtime state information. Runtime state information typically refers to data that instructs data path processes (not shown) executing in hosts 150A-150B how to handle traffic encapsulation and forwarding. The runtime state information may include, for example, “MAC to VTEP” bindings and other data managed by, for example, VTEPs 208A-208B. VTEP 208A may be configured to, for example, encapsulate packets originated by a VM instantiated on host 150A and route the encapsulated packets to VTEP 208B implemented on host 150B.
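A “MAC to VTEP” binding can be pictured as a small record associating a VM's MAC address with the MAC and IP addresses of the VTEP on the VM's host. Below is a minimal sketch in Python; the field names and example addresses are illustrative assumptions, not a format prescribed by this document.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class MacToVtepBinding:
    """One runtime-state record of the kind exchanged between CCP clusters.

    All field names and example values are illustrative assumptions.
    """
    vni: int       # virtual network identifier (VNI or SVNI)
    vm_mac: str    # MAC address of the GVM or SVM
    vtep_mac: str  # MAC address of the VTEP on the VM's host
    vtep_ip: str   # IP address of that VTEP

# Example: a binding a host might report for one of its guest VMs.
binding = MacToVtepBinding(
    vni=5001,
    vm_mac="00:50:56:aa:00:01",
    vtep_mac="00:50:56:bb:00:01",
    vtep_ip="10.0.0.1",
)
```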
(16) Hosts 150A-150B may be referred to as computing devices, host computers, host devices, physical servers, server systems, or physical machines. The hosts may be built on commodity hardware computing platforms that include computing processors, memory units, physical network interface cards, and storage devices (not shown).
(17) In an embodiment, hosts 150A-150B are physical computing devices that support the execution of one or more GVMs 125A-1, 125A-2, and one or more SVMs 125B-1, 125B-2, respectively. Hosts 150A-150B are configured with virtualization layers, referred to herein as hypervisors 130A-130B, respectively. Hypervisor 130A abstracts the processor, memory, storage, and networking resources of a corresponding hardware platform into multiple GVMs 125A-1, 125A-2. Hypervisor 130B abstracts the processor, memory, storage, and networking resources of a corresponding hardware platform into multiple SVMs 125B-1, 125B-2.
(18) Architectures of hypervisors 130A-130B may vary. In some embodiments, hypervisor 130A/130B is installed bare-metal directly on host 150A/150B and interposed between the physical hardware and the guest operating systems executing in GVMs 125A-1, 125A-2 and SVMs 125B-1, 125B-2. In other embodiments, hypervisor 130A/130B is implemented as an additional layer on top of a conventional host operating system.
(19) GVMs 125A-1, 125A-2 and SVMs 125B-1, 125B-2 are examples of virtualized computing instances or workloads. A virtualized computing instance may include an addressable data compute node or an isolated user space instance, often referred to as a namespace container.
(20) GVM 125A-1/125A-2 comprises a software-based VNIC 202A-1/202A-2, respectively, that may be configured by a local control plane (not shown) running on host machine 150A. VNICs 202A-1, 202A-2 provide network access for GVMs 125A-1, 125A-2, respectively. VNICs 202A-1 and 202A-2 are typically connected to corresponding virtual ports, such as ports 204A-1, 204A-2, respectively, of a virtual network switch 210A. Virtual switch 210A is a forwarding element implemented in software by hypervisor 130A.
(21) SVM 125B-1/125B-2 comprises a software-based VNIC 202B-1/202B-2, respectively, that may be configured by a local control plane (not shown) running on host machine 150B. VNICs 202B-1, 202B-2 provide network access for SVMs 125B-1, 125B-2, respectively. VNICs 202B-1 and VNIC 202B-2 are typically connected to corresponding virtual ports, such as ports 204B-1 and 204B-2, respectively, of a virtual network switch 210B. Virtual switch 210B is a forwarding element implemented in software by hypervisor 130B.
(22) Hardware 127A/127B of host 150A/150B, respectively, includes hardware components such as one or more processors (not shown), a system memory unit (not shown), a storage system (not shown), I/O devices, and a network interface card (“NIC”) 123A/123B, respectively. NIC 123A/123B enables host 150A/150B, respectively, to communicate with other devices via a communication medium, such as network 165. NIC 123A/123B may be used to transmit data to and from network 165 via virtual port 206A/206B, respectively.
(23) 2. Example Process for Servicing a Packet by Service Virtual Machines
(25) Suppose that: a binding generated for source GVM 125A-1 is “VNI1: GVM MAC to VTEP1”, where VNI1 is 5001; a binding generated for SVM 125B-1 is “SVNI1: SVMA MAC to VTEP2”, where SVNI1 is 5002; a binding generated for SVM 125C-1 is “SVNI1: SVMB MAC to VTEP3”; and a binding generated for SVM 125D-1 is “SVNI1: SVMC MAC to VTEP4”.
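Laid out as data, the four bindings above form tables keyed by VNI or SVNI. A sketch using the symbolic names from the example (real entries would carry concrete MAC and IP addresses):

```python
# Tables keyed by (S)VNI; each maps a VM MAC to the VTEP of the VM's host.
# "GVM MAC", "VTEP1", etc. are the symbolic names from the example above.
bindings_by_vni = {
    5001: {                   # VNI1: the guest network
        "GVM MAC": "VTEP1",   # source GVM 125A-1
    },
    5002: {                   # SVNI1: the service network
        "SVMA MAC": "VTEP2",  # SVM 125B-1
        "SVMB MAC": "VTEP3",  # SVM 125C-1
        "SVMC MAC": "VTEP4",  # SVM 125D-1
    },
}

assert bindings_by_vni[5002]["SVMB MAC"] == "VTEP3"
```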
(26) Suppose that a packet that was originated by source GVM 125A-1 is to be serviced first by SVM 125B-1. To send the packet to SVM 125B-1 on a different host, the source host for GVM 125A-1 encapsulates the packet with a plurality of headers to form a packet 152. A source of packet 152 is GVM 125A-1, while a destination of packet 152 is SVM 125B-1.
(27) An example of packet 152 is depicted in
(28) Upon receiving packet 152, SVM 125B-1 services packet 152, and, if the packet is not dropped and the next SVM is on a different host, the host for SVM 125B-1 encapsulates a resulting packet with a plurality of headers to form a packet 154. A source of packet 154 is SVM 125B-1, while a destination of packet 154 is SVM 125C-1. However, if SVM “A” and SVM “B” are on the same host, then encapsulation is not needed; the host simply passes the packet to the next SVM. This is true each time a packet is passed between GVMs and/or SVMs.
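The same-host shortcut amounts to a single branch in the forwarding path. A hypothetical sketch follows; the helper name and the "OUTER[...]" header layout are assumptions standing in for real Ethernet/IP/UDP/overlay headers.

```python
def forward_to_next_hop(packet: bytes, current_host: str,
                        next_hop_host: str, next_hop_vtep_ip: str) -> bytes:
    """Hand a serviced packet to the next SVM or GVM in the chain.

    If the next hop runs on the same host, no encapsulation is needed and
    the packet is passed along locally; otherwise it is wrapped in outer
    headers addressed to the next hop's VTEP.
    """
    if next_hop_host == current_host:
        return packet  # same host: deliver through the local vswitch
    outer_header = f"OUTER[dst_vtep={next_hop_vtep_ip}]".encode()
    return outer_header + packet

# Crossing from the host of SVM "A" to the host of SVM "B" (placeholder
# host names and VTEP address) triggers encapsulation; a same-host hop
# would return the packet unchanged.
packet_154 = forward_to_next_hop(b"serviced-payload", "host-B", "host-C",
                                 "10.0.0.3")
```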
(29) An example of encapsulated packet 154 is depicted in
(30) Upon receiving packet 154, SVM 125C-1 services packet 154, and, if the packet is not dropped and the next SVM is on a different host, then the host for SVM 125C-1 encapsulates a resulting packet with a plurality of headers to form a packet 156. A source of packet 156 is SVM 125C-1, while a destination of packet 156 is SVM 125D-1.
(31) An example of packet 156 is depicted in
(32) Upon receiving packet 156, SVM 125D-1 services packet 156, and, if the packet is not dropped (as shown in an element “159”) and GVM 125A-1 is on a different host than the host that implements SVM 125D-1, the host for SVM 125D-1 needs to encapsulate a resulting packet with headers to form a packet 160. A source of packet 160 is known; it is SVM 125D-1. However, a VTEP destination of packet 160 is not readily known because information about VTEP1, the VTEP to which GVM 125A-1 is connected, is not readily available to the host of SVM 125D-1. The host for SVM 125D-1 may, however, obtain that information using the process described later in
(33) An example of packet 160 is depicted in
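The missing piece at the host of the last SVM is thus a single lookup: given the source GVM's MAC address, which is known from the packet, find the MAC and IP of VTEP1. The query exchange might be sketched as below; the message fields are assumptions, not a prescribed wire format.

```python
def build_source_location_query(source_gvm_mac: str, vni: int) -> dict:
    """Query sent by the service plane implementation module on the host
    of the last SVM: it knows the source GVM's MAC but not the location
    (VTEP MAC and IP) of the GVM's host."""
    return {"type": "source-location-query",
            "vm_mac": source_gvm_mac,
            "vni": vni}

def parse_source_location_reply(reply: dict) -> tuple:
    """Extract the source host's VTEP MAC and IP from the reply; these are
    exactly what the host needs to encapsulate packet 160."""
    return reply["vtep_mac"], reply["vtep_ip"]

# Placeholder MAC for the source GVM on VNI 5001.
query = build_source_location_query("00:50:56:aa:00:01", 5001)
```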
(34) 3. Approaches for Identifying a Location of a Source Host That Hosts a Guest Virtual Machine
(37) The receiving step corresponds to step 402 in
(38) Upon receiving the MAC to VTEP bindings for the VNIs and SVNIs, CCP 300 stores the bindings in data structures. Examples of the data structures include tables that are organized by the VNIs and SVNIs and that may be similar to a data structure 306 shown in
(39) CCP 300 may automatically provide (an element “3”) the bindings to hosts 150A-150B that support virtual machines each time CCP 300 receives a binding from any of hosts 150A-150B. For example, CCP 300 may automatically provide (“3”) information about the “5001: MAC1 to VTEP1” binding and information about the “5002: MAC7 to SVTEP2” binding to a data path process 333B implemented in host 150A.
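Storing each reported binding and immediately pushing it to the managed hosts can be sketched as a store with subscribers. This is a hypothetical controller-side sketch; the class and method names are illustrative, and the callbacks stand in for the hosts controlled by CCP 300.

```python
from collections import defaultdict

class BindingStore:
    """Controller-side bindings, kept in one table per VNI or SVNI."""

    def __init__(self) -> None:
        # vni -> {vm_mac: (vtep_mac, vtep_ip)}
        self._tables = defaultdict(dict)
        self._subscribers = []  # callbacks standing in for managed hosts

    def subscribe(self, callback) -> None:
        self._subscribers.append(callback)

    def store_and_publish(self, vni, vm_mac, vtep_mac, vtep_ip) -> None:
        # Record the binding reported by a host, then push it to every
        # managed host, mirroring the automatic provisioning above.
        self._tables[vni][vm_mac] = (vtep_mac, vtep_ip)
        for notify in self._subscribers:
            notify(vni, vm_mac, vtep_mac, vtep_ip)

store = BindingStore()
store.subscribe(lambda *binding: print("pushed to host 150A:", binding))
# Placeholder addresses standing in for "5001: MAC1 to VTEP1" and
# "5002: MAC7 to SVTEP2".
store.store_and_publish(5001, "MAC1", "vtep1-mac", "10.0.0.1")
store.store_and_publish(5002, "MAC7", "svtep2-mac", "10.0.0.2")
```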
(40) The providing step is also depicted in
(41) Periodically, the controller may check, in step 408 depicted in
(42) In step 410 of
(43) Referring to
(44) In response to receiving the query, the host or the data path process searches, in step 412 of
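Per the claims, this search scans the bindings of all VNIs and SVNIs for one whose VM MAC matches the source GVM's MAC, and the matching binding yields the source host's location. A minimal sketch under those assumptions, with placeholder table contents:

```python
def find_source_location(tables, source_gvm_mac):
    """Scan every per-(S)VNI table for a binding whose VM MAC matches the
    source GVM's MAC; return the (VTEP MAC, VTEP IP) recorded in that
    binding, or None if no binding is found."""
    for vni, table in tables.items():
        location = table.get(source_gvm_mac)
        if location is not None:
            return location
    return None

# Hypothetical tables mirroring the earlier examples.
tables = {5001: {"MAC1": ("vtep1-mac", "10.0.0.1")},
          5002: {"MAC7": ("svtep2-mac", "10.0.0.2")}}
assert find_source_location(tables, "MAC1") == ("vtep1-mac", "10.0.0.1")
```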
(45) Referring to
(46) In step 414 of
(47) Referring to
(48) Alternatively, controller 300, shown in
(49) An alternative approach is shown in
(50) In step 510 of
(51) In response to receiving the query, the host searches, in step 512, all received bindings for all VNIs and SVNIs to identify a particular binding that includes a spmac, which corresponds to a MAC address of the source GVM. Referring to
(52) In step 514, the host uses both a VTEP MAC address and a VTEP IP address extracted from the particular binding to encapsulate the serviced packet. For example, referring to
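In this host-local variant, steps 512-514 collapse into one function: look the spmac up in the bindings already pushed to the host, then build the outer headers from the VTEP MAC and IP found in the matching binding. A sketch with an illustrative "OUTER[...]" header layout standing in for real outer Ethernet/IP headers:

```python
def encapsulate_for_source(packet: bytes, local_bindings: dict,
                           spmac: str) -> bytes:
    """Resolve the source location from bindings held locally on the host
    of the last SVM, then encapsulate the serviced packet for delivery to
    the source GVM's host."""
    for table in local_bindings.values():
        if spmac in table:
            vtep_mac, vtep_ip = table[spmac]
            outer = f"OUTER[dst_mac={vtep_mac},dst_ip={vtep_ip}]".encode()
            return outer + packet
    raise LookupError(f"no binding found for source MAC {spmac}")

# Placeholder local bindings; "MAC1" stands in for the source GVM's MAC.
local_bindings = {5001: {"MAC1": ("vtep1-mac", "10.0.0.1")}}
packet_160 = encapsulate_for_source(b"serviced-payload", local_bindings,
                                    "MAC1")
```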
(53) The presented approaches may be optimized to support datacenters that host thousands of VMs, allowing the datacenters to manage storage space and lookup latency efficiently as they execute the presented mechanisms for identifying a source location in a service chaining topology.
(54) 4. Implementation Mechanisms
(55) The present approach may be implemented using a computing system comprising one or more processors and memory. The one or more processors and memory may be provided by one or more hardware machines. A hardware machine includes a communications bus or other communication mechanism for addressing main memory and for transferring data between and among the various components of the hardware machine. The hardware machine also includes one or more processors coupled with the bus for processing information. The processor may be a microprocessor, a system on a chip (“SoC”), or another type of hardware processor.
(56) Main memory may be a random-access memory (RAM) or other dynamic storage device. It may be coupled to a communications bus and used for storing information and software instructions to be executed by a processor. Main memory may also be used for storing temporary variables or other intermediate information during execution of software instructions to be executed by one or more processors.
(57) 5. General Considerations
(58) Although some of the various drawings may illustrate a number of logical stages in a particular order, stages that are not order dependent may be reordered and other stages may be combined or broken out. While some reordering or other groupings may be specifically mentioned, others will be obvious to those of ordinary skill in the art, so the orderings and groupings presented herein are not an exhaustive list of alternatives. Moreover, it should be recognized that the stages could be implemented in hardware, firmware, software, or any combination thereof.
(59) The foregoing description has, for purposes of explanation, been presented with reference to specific embodiments. However, the illustrative embodiments above are not intended to be exhaustive or to limit the scope of the claims to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The embodiments were chosen to best explain the principles underlying the claims and their practical applications, and thereby to enable others skilled in the art to best use the embodiments with various modifications as are suited to the uses contemplated.
(60) Any definitions set forth herein for terms contained in the claims may govern the meaning of such terms as used in the claims. No limitation, element, property, feature, advantage or attribute that is not expressly recited in a claim should limit the scope of the claim in any way. The specification and drawings are to be regarded in an illustrative rather than a restrictive sense.