Virtualization gateway between virtualized and non-virtualized networks
09935920 · 2018-04-03
Inventors
- Murari Sridharan (Chennai, IN)
- David A. Maltz (Redmond, WA, US)
- Narasimhan Venkataramaiah (Redmond, WA, US)
- Parveen K. Patel (Redmond, WA, US)
- Yu-Shun Wang (Redmond, WA, US)
CPC classification
- G06F2009/45595 (Physics)
- H04L12/4604 (Electricity)
- H04L12/4633 (Electricity)
- H04L61/2596 (Electricity)
International classification
- G06F9/455 (Physics)
Abstract
Methods and apparatus are provided for controlling communication between a virtualized network and non-virtualized entities using a virtualization gateway. A packet is sent by a virtual machine in the virtualized network to a non-virtualized entity. The packet is routed by the host of the virtual machine to a provider address of the virtualization gateway. The gateway translates the provider address of the gateway to a destination address of the non-virtualized entity and sends the packet to the non-virtualized entity. The non-virtualized entity may be a physical resource, such as a physical server or a storage device. The physical resource may be dedicated to one customer or may be shared among customers.
Claims
1. A method performed by a gateway, the gateway comprising a first network interface connecting the gateway to a physical network of a cloud data center, the gateway comprising a second network interface connecting the gateway to an external network, the gateway configured to perform: operations to facilitate communication between VMs of the cloud data center and external devices on the external network that is external to the cloud data center, the operations comprising: storing mapping policies for respective tenants of the cloud data center, wherein the mapping policies are updated to reflect migrations of the VMs among physical servers of the cloud data center, the physical servers comprising respective virtualization hypervisors that manage execution of the VMs, the cloud data center comprising a physical network that performs routing for a provider address space (PAS), the physical servers and the first network interface having physical network addresses in PAS, the cloud data center providing customer address spaces (CASs) for the respective tenants thereof, each mapping policy mapping customer addresses (CAs) in a corresponding CAS to provider addresses (PAs) of the physical servers in the PAS; receiving, via the external network, by the second network interface, requests sent from the external devices, the requests addressed to external-facing addresses of the second network interface, the external-facing addresses corresponding to the tenants, respectively, wherein the external devices send the requests to the external-facing addresses, the external-facing addresses facing externally with respect to the cloud data center; enabling communication between the external devices and a tenant's VMs by: (i) for requests sent to the tenant's external-facing address, selecting CAs of the tenant's VMs from the tenant's mapping policy, and (ii) for each CA selected for a corresponding request sent to the tenant's external-facing address, using the tenant's mapping policy to map the selected CA to a PA of a physical server hosting the VM of the selected CA, and sending the corresponding request to the mapped PA.
2. A method according to claim 1, wherein when the sent request is received by the physical server corresponding to the mapped PA, the hypervisor of the physical server uses a local version of the tenant's mapping policy to deliver the request to the VM corresponding to the selected CA.
3. A method according to claim 2, wherein the local version of the tenant's mapping policy is updated to reflect migrations of the tenant's VMs within the cloud data center.
4. A method according to claim 1, the operations further comprising, for a given request from a given external device, selecting a mapping policy for the request according to which external-facing address the given request is addressed to.
5. A method according to claim 1, further comprising balancing load among the tenant's VMs by the selecting of the CAs, wherein the load balancing among the tenant's VMs is performed independent of which physical servers the tenant's VMs are hosted on.
6. A method according to claim 1, wherein the sending the corresponding request to the mapped PA comprises encapsulating the corresponding request in a packet addressed to the selected CA.
7. A method according to claim 1, wherein the sending the corresponding request to the mapped PA comprises translating an address of the corresponding request to the selected CA.
8. A method according to claim 1, wherein CAs in the CASs are not routable by the physical network of the cloud data center.
9. A gateway device comprising: processing hardware and storage hardware configured to perform operations to facilitate communication between VMs of a cloud data center and external devices on an external network that is external to the cloud data center; a first network interface configured to connect the gateway to a physical network of the cloud data center; a second network interface configured to connect the gateway to the external network; the storage hardware configured to store mapping policies for respective tenants of the cloud data center, wherein the mapping policies are updated to reflect live migrations of the VMs among physical servers of the cloud data center, the physical servers comprising respective virtualization hypervisors that manage execution of the VMs, the cloud data center comprising a physical network that performs routing for a provider address space (PAS), the physical servers and the first network interface having physical network addresses in the PAS, the cloud data center providing customer address spaces (CASs) for the respective tenants thereof, each mapping policy mapping customer addresses (CAs) in a corresponding CAS to provider addresses (PAs) of the physical servers in the PAS; the storage hardware storing instructions configured to perform the operations, the operations comprising: receiving, via the external network, by the second network interface, packets sent from the external devices, the packets addressed to external-facing addresses of the second network interface, the external-facing addresses assigned to the tenants, respectively, wherein the external devices send the packets to the external-facing addresses; enabling communication between the external devices and a tenant's VMs by: (i) for packets sent to the tenant's external-facing address, selecting CAs of the tenant's VMs from the tenant's mapping policy, and (ii) for each CA selected for a corresponding packet sent to the tenant's external-facing address, using the tenant's mapping policy to map the selected CA to a PA of a physical server hosting the VM of the selected CA, and sending the corresponding packet to the mapped PA.
10. A gateway device according to claim 9, wherein the enabling communication balances load among the tenant's VMs by the selecting of the CAs of the tenant's VMs, and wherein the load balancing causes packets addressed to the tenant's external-facing address to be distributed among the tenant's VMs.
11. A gateway device according to claim 9, the operations further comprising selecting a mapping policy for a packet received via the external network, wherein the mapping policy is selected based on which external-facing address the packet is addressed to.
12. A gateway device according to claim 11, wherein when a VM represented in the mapping policy is migrated from a first physical server to a second physical server, the mapping policy is updated to stop associating the VM with a PA of the first physical server and start associating the VM with a PA of the second physical server.
13. A gateway device according to claim 9, wherein the sending the corresponding packet to the mapped PA comprises performing network address translation or network encapsulation.
14. A gateway device according to claim 9, wherein each CAS corresponds to a respective virtual network implemented by the cloud data center.
15. A gateway device according to claim 9, wherein the distribution of the packets to the VMs is not affected by migrations of the VMs.
16. A gateway device according to claim 9, the operations further comprising receiving, via the physical network, packets sent by the VMs via the hypervisors, and using the mapping policies to deliver the packets sent by the VMs to the external devices.
17. A method performed by a gateway, the gateway comprising a first network interface connecting the gateway to a first physical network of a cloud data center, the gateway comprising a second network interface connecting the gateway to a second physical network that is external to the cloud data center, the method comprising: operations to facilitate communication between VMs of the cloud data center and external devices on the second physical network that is external to the cloud data center, the operations comprising: implementing a virtual network, the virtual network implemented by hypervisors executing on server devices of the cloud data center, the VMs belonging to the virtual network, wherein the hypervisors provide platform virtualization for the VMs, wherein the virtual network is in a second address space that is not routable on the first physical network, the second address space including addresses of the external devices on the second physical network, the second address space also including addresses of the VMs, wherein the first physical network comprises a first address space that includes the addresses of the server devices; storing a mapping policy, wherein the mapping policy maps the addresses of the VMs in the second address space to the addresses of the server devices in the first address space; receiving, via the second physical network, by the second network interface, requests sent from the external devices, the requests each addressed to a same address of the second network interface, wherein the external devices send the requests to the address of the second network interface; and bridging communication between the external devices and the VMs by: (i) for requests sent to the address of the second network interface by the external devices, selecting the addresses of the VMs from the mapping policy, and (ii) for each VM address selected for a corresponding request, using the mapping policy to map the selected VM address to an address of a server device hosting the VM of the selected VM address, and sending the corresponding request to the mapped server device address, wherein the hypervisor on the mapped server device delivers the corresponding request to the VM of the selected VM address.
18. A method according to claim 17, wherein the mapping policy maps the address of one of the external devices to an address of the gateway.
19. A method according to claim 17, further comprising receiving, at the gateway, updates from a virtual machine manager, the updates corresponding to migrations of the VMs between the server devices, and applying the updates to the mapping policy.
Description
BRIEF DESCRIPTION OF DRAWINGS
(1) For a better understanding of the present invention, reference is made to the accompanying drawings, which are incorporated herein by reference.
DETAILED DESCRIPTION
(14) A simplified schematic block diagram of an embodiment of a data center incorporating features of the invention is shown in the drawings.
(15) Each of the hosts in data center 10 may host one or more virtual machines (VMs), each of which may include a complete operating system capable of running applications independently of other virtual machines.
(16) Each of the hosts in data center 10 may include a switch to route data packets to and from the virtual machines in the host. In the case of a single virtual machine, a switch may not be required. Each of the virtual machines may include a network adapter for external communication via the host in which it resides. Each of the hosts further includes a virtualization module for address translation during communication to and from the virtual machines in the host.
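To make the host-side arrangement concrete, the following is a minimal Python sketch of a host switch dispatching packets to per-VM network adapters. The class and method names (HostSwitch, VMAdapter, route) are hypothetical; the patent does not prescribe any particular implementation.

```python
# Illustrative model of one host: a software switch that delivers
# packets to the network adapters of the VMs residing on the host.

class VMAdapter:
    """Network adapter belonging to a single virtual machine."""
    def __init__(self, vm_name, customer_address):
        self.vm_name = vm_name
        self.customer_address = customer_address
        self.received = []

    def deliver(self, payload):
        self.received.append(payload)


class HostSwitch:
    """Routes data packets to and from the virtual machines in the host."""
    def __init__(self):
        self._adapters = {}  # customer address -> VMAdapter

    def attach(self, adapter):
        self._adapters[adapter.customer_address] = adapter

    def route(self, dest_ca, payload):
        adapter = self._adapters.get(dest_ca)
        if adapter is None:
            raise KeyError(f"no VM with customer address {dest_ca} on this host")
        adapter.deliver(payload)


switch = HostSwitch()
vm1 = VMAdapter("VM1", "10.1.1.1")
switch.attach(vm1)
switch.route("10.1.1.1", b"hello")
print(vm1.received)  # [b'hello']
```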
(20) A second embodiment of data center 10 is shown in the drawings.
(22) A number of mapping policies for a virtual network may be grouped in a virtual network policy, such as virtual network policy 140.
(24) As noted above, virtual network policy 140 includes a policy mapping entry for each virtual machine and each physical resource in the first virtual network 112. Additional virtual network policies correspond to additional virtual networks. For example, a separate virtual network policy in first host 12 and second host 14 contains mapping policies for the second virtual network 114 including virtual machines 104, 108 and 110.
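As an illustration of how such a virtual network policy might be organized, here is a minimal Python sketch that models a policy as a table of AA-to-LA mapping entries. The class name and entries (VirtualNetworkPolicy, policy_140) are hypothetical, not the patent's implementation.

```python
# A virtual network policy as a lookup table of mapping entries,
# one policy per virtual network (illustrative only).

class VirtualNetworkPolicy:
    def __init__(self, network_name):
        self.network_name = network_name
        self._aa_to_la = {}  # customer address (AA) -> provider address (LA)

    def add_mapping(self, aa, la):
        self._aa_to_la[aa] = la

    def lookup(self, aa):
        return self._aa_to_la[aa]

    def update_mapping(self, aa, new_la):
        # Called, e.g., after a VM live-migrates to a host with a new LA.
        self._aa_to_la[aa] = new_la


# First virtual network: VM entries plus a physical resource reached
# via the virtualization gateway.
policy_140 = VirtualNetworkPolicy("virtual network 112")
policy_140.add_mapping("AA_1", "LA_1")    # virtual machine 100
policy_140.add_mapping("AA_P1", "LA_GW")  # physical server 124, via gateway
print(policy_140.lookup("AA_P1"))         # LA_GW
```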
(28) In act 200, virtual machine 100 sends a packet 220 (arrow (1)) addressed to customer address AA_P1 of physical server 124.
(29) In act 204, first host 12 of virtual machine 100 may encapsulate the packet with the provider address LA_GW of gateway 120 (LA_1 → LA_GW) to provide an encapsulated packet 222.
(30) In act 208, gateway 120 receives the encapsulated packet 222 from first host 12 and decapsulates the packet to provide a decapsulated packet 224. In particular, the provider address portion of the packet is removed, leaving the customer address AA_P1 of physical server 124. In act 210, gateway 120 delivers the decapsulated packet 224 to physical server 124 (arrow (3)).
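The outbound flow of acts 200-210 can be summarized in a short, hypothetical Python sketch; the dictionary-based packet format and the function names (encapsulate, decapsulate) are assumptions for illustration only.

```python
# Outbound path: host encapsulates the VM's customer-address packet
# with the gateway's provider address; the gateway decapsulates it
# and delivers it to the physical server.

def encapsulate(inner_packet, outer_src_la, outer_dst_la):
    """Host-side step: wrap the customer-address packet in a provider-address header."""
    return {"outer_src": outer_src_la, "outer_dst": outer_dst_la,
            "inner": inner_packet}

def decapsulate(encapsulated_packet):
    """Gateway-side step: strip the provider-address header."""
    return encapsulated_packet["inner"]

# Act 200: VM 100 (AA_1) addresses a packet to physical server 124 (AA_P1).
packet_220 = {"src": "AA_1", "dst": "AA_P1", "payload": b"data"}

# Act 204: host 12 consults its mapping policy (AA_P1 -> LA_GW) and encapsulates.
packet_222 = encapsulate(packet_220, outer_src_la="LA_1", outer_dst_la="LA_GW")

# Acts 208-210: gateway 120 decapsulates and forwards to the server at AA_P1.
packet_224 = decapsulate(packet_222)
assert packet_224 == packet_220
print(packet_224["dst"])  # AA_P1
```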
(32) In act 250, physical server 124 having address AA_P1 sends a packet to virtual machine 100 having customer address AA_1. In act 252, gateway 120 references its virtual network policy to obtain a mapping from customer address AA_1 of virtual machine 100 to provider address LA_1 of virtual machine 100. In act 254, gateway 120 may encapsulate the packet with the provider address of virtual machine 100 (LA_GW → LA_1). In act 256, gateway 120 sends the encapsulated packet to host 12 of virtual machine 100.
(33) In act 258, host 12 of virtual machine 100 receives the packet and decapsulates the packet according to the mapping policy in virtual network policy 140, which relates provider address LA_1 of virtual machine 100 to customer address AA_1 of virtual machine 100. In act 260, the decapsulated packet is delivered to virtual machine 100 at customer address AA_1.
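A companion sketch of the reverse flow of acts 250-260, under the same illustrative packet format and hypothetical names:

```python
# Reverse path: gateway maps the VM's customer address to its provider
# address, encapsulates toward the VM's host, and the host decapsulates
# for final delivery.

policy = {"AA_1": "LA_1"}  # gateway's mapping entry for virtual machine 100

# Act 250: physical server 124 (AA_P1) sends a packet to VM 100 (AA_1).
packet = {"src": "AA_P1", "dst": "AA_1", "payload": b"reply"}

# Acts 252-256: gateway maps AA_1 -> LA_1 and encapsulates toward host 12.
encapsulated = {"outer_src": "LA_GW",
                "outer_dst": policy[packet["dst"]],
                "inner": packet}

# Acts 258-260: host 12 strips the outer header and delivers to AA_1.
delivered = encapsulated["inner"]
assert delivered["dst"] == "AA_1"
```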
(34) A third embodiment of data center 10 is shown in the drawings.
(39) In act 350, the virtual machine 100 sends a packet 352 (arrow (1)) addressed to physical server 300.
(40) In act 356, first host 12 encapsulates the packet with the provider address LA_GW of gateway 320 to provide packet 358.
(41) In act 362, gateway 320 receives the encapsulated packet 358 from first host 12 of virtual machine 100. In act 364, gateway 320 decapsulates received packet 358 to provide decapsulated packet 366 (arrow (2)).
(42) After the first NAT module 322 has established an entry corresponding to virtual machine 100, the physical server 300 can send a reply packet to virtual machine 100 using the reverse of the operations described above.
(43) As indicated above, virtual machine hosts in data center 10 include mapping policies which map physical servers to the provider address of gateway 320. In contrast, gateway 320 includes mapping policies which map the customer addresses of the physical resources to corresponding provider addresses. In addition, the first host 12 encapsulates packets directed to gateway 320, whereas gateway 320 rewrites packets directed to physical server 300. In particular, virtual network policy 140 in first host 12 includes mapping policies for address encapsulation of packets sent to gateway 320, and virtual network policy 330 in gateway 320 includes mapping policies for address rewriting of packets sent to the physical resources.
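To illustrate the rewriting behavior just contrasted with encapsulation, here is a hypothetical sketch of a NAT module that rewrites a VM's customer source address to the gateway's NAT address and translates replies back. The table layout and port scheme are assumptions, not the patent's design.

```python
# Gateway-side NAT: rewrite rather than encapsulate. The NAT module keeps
# per-VM state so that replies from the physical resource can be
# translated back to the originating VM.

nat_table = {}  # customer address (AA) of VM -> port on the NAT address

def nat_outbound(inner_packet, nat_address="LA_NAT"):
    """Rewrite the VM's customer source address to the gateway's NAT address."""
    port = nat_table.setdefault(inner_packet["src"], len(nat_table) + 1024)
    return {"src": f"{nat_address}:{port}", "dst": inner_packet["dst"],
            "payload": inner_packet["payload"]}

def nat_inbound(packet, nat_address="LA_NAT"):
    """Translate a reply sent to the NAT address back to the VM's address."""
    addr, _, port = packet["dst"].partition(":")
    assert addr == nat_address
    for aa, p in nat_table.items():
        if p == int(port):
            return {"src": packet["src"], "dst": aa,
                    "payload": packet["payload"]}
    raise KeyError("no NAT entry for this port")

out = nat_outbound({"src": "AA_1", "dst": "AA_P1", "payload": b"req"})
back = nat_inbound({"src": "AA_P1", "dst": out["src"], "payload": b"resp"})
print(back["dst"])  # AA_1
```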
(45) The customer address AA_1 of migrated virtual machine 100m remains unchanged, but the provider address of migrated virtual machine 100m changes from provider address LA_1 to provider address LA_NEW in this example.
(46) Prior to live migration, virtual machine 100 on first host 12 sends a packet to storage device 302, as indicated by arrow 420.
(47) Following live migration, it is assumed that virtual network policy 330 in gateway 320 has been updated to reflect the new mapping policy of migrated virtual machine 100m. In particular, virtual network policy 330 includes an address pair AA_1:LA_NEW, which defines a mapping policy for migrated virtual machine 100m. If migrated virtual machine 100m sends a packet to storage device 302, the packet is encapsulated by third host 410 and is sent to gateway 320, as indicated by arrow 422. Gateway 320 verifies the updated mapping policy for migrated virtual machine 100m and performs decapsulation and network address translation as described above.
(48) In effect, the network address translation masks the change of the provider address of virtual machine 100 while maintaining connections between virtual machine 100 and the physical resource. The virtual network policy 330 ensures the correct mapping of provider addresses for virtual machine 100 before and after live migration. With this approach, live migration is transparent to the physical servers, as they only see the NAT address LA_NAT.
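A minimal sketch of the policy update that accompanies live migration, assuming the simple dictionary policy used in the earlier sketches (all names illustrative):

```python
# Live migration: the VM keeps its customer address AA_1 while its
# provider address changes from LA_1 to LA_NEW. The physical resource
# only ever sees the gateway's NAT address, so its connections survive.

gateway_policy = {"AA_1": "LA_1"}  # before migration
server_view = "LA_NAT"             # what physical server 300 sees; never changes

def live_migrate(policy, aa, new_la):
    """Virtual machine manager pushes the VM's new provider address."""
    policy[aa] = new_la

live_migrate(gateway_policy, "AA_1", "LA_NEW")

# Packets from the migrated VM now arrive encapsulated from LA_NEW, but
# after decapsulation and NAT they still leave the gateway from LA_NAT,
# so server-side connection state is undisturbed.
print(gateway_policy["AA_1"], server_view)  # LA_NEW LA_NAT
```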
(52) A load balancer 560 is coupled between the Internet and the data center network fabric 20. Load balancer 560 includes policy-based network virtualization and contains virtual network policies 550 and 552.
(54) The load balancer indexes the load balancing tables separately for VIP_A and VIP_B, such that the correct virtualization mapping tables for Customer A and Customer B will be used for any incoming request. For example, Customer A's AAs 10.1.1.1 and 10.1.1.2 can be mapped to LA_1 and LA_2, whereas Customer B's AAs 10.1.1.1 and 10.1.1.2 can be mapped to LA_3 and LA_4. In this way, the load balancing functionality can be seamlessly integrated with the internal data center virtualization policy. As described previously, an advantage of the integrated architecture of the gateway is that the virtualization module of the gateway is also part of the data center virtualization policy framework, through VM deployment and live migration. One benefit is that the backend workload can now be migrated across physical subnets, as long as the AA-LA mapping table in the integrated load balancer is also updated. All of this can happen without breaking existing load balancing or proxy sessions, because the DIPs, which are the AAs of the VMs, remain unchanged.
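The per-VIP indexing described above might look like the following hypothetical sketch, which keeps one AA-to-LA table per customer VIP and makes the load-balancing choice in the AA space; the round-robin policy and names are assumptions for illustration.

```python
# One mapping table per public VIP, so identical customer address spaces
# do not collide; the chosen DIP (the VM's AA) is then mapped to the LA
# of the server currently hosting that VM.

import itertools

tables = {
    "VIP_A": {"10.1.1.1": "LA_1", "10.1.1.2": "LA_2"},  # Customer A
    "VIP_B": {"10.1.1.1": "LA_3", "10.1.1.2": "LA_4"},  # Customer B (same AAs)
}
rr = {vip: itertools.cycle(sorted(t)) for vip, t in tables.items()}

def dispatch(vip):
    """Pick a DIP for this VIP, then map it through that customer's table."""
    table = tables[vip]
    dip = next(rr[vip])     # load-balancing decision, made in the AA space
    return dip, table[dip]  # (AA of chosen VM, LA of its current host)

print(dispatch("VIP_A"))  # ('10.1.1.1', 'LA_1')
print(dispatch("VIP_B"))  # ('10.1.1.1', 'LA_3')  -- same AA, different LA
```

After a live migration, only the LA side of the chosen customer's table changes; sessions keyed on the DIP persist, matching the behavior described in the paragraph above.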
(55) To support IP multicast in a multi-tenant environment, the management servers can assign each customer virtual network a multicast address in the LA space. All multicast traffic from customer VMs will be encapsulated and redirected onto the customer-specific multicast groups (or addresses) in the data center. Isolation of multicast traffic is also achieved by this separate multicast group for each customer. For example, the data center administrator, using a management tool such as VMM, can assign 224.0.0.1 for customer A, and 224.0.0.2 for customer B. While VMs for customer A and customer B both send multicast traffic destined to multicast group (destination address) 224.1.2.3, the virtualization rule will specify the following:
(56) (Policy 1) Packets sent to any multicast or broadcast destination from customer A's VMs → encapsulate with 224.0.0.1
(57) (Policy 2) Packets sent to any multicast or broadcast destination from customer B's VMs → encapsulate with 224.0.0.2
(58) Based on these policies, a packet sent by customer A from AA_A1 to 224.1.2.3 will be encapsulated with LA_A1 to 224.0.0.1, and a packet sent by customer B from AA_B1 to 224.1.2.3 will be encapsulated with LA_B1 to 224.0.0.2. As long as all physical hosts of customer A's VMs subscribe to 224.0.0.1, and all physical hosts of customer B's VMs subscribe to 224.0.0.2, the multicast packets reach all the hosts for customer A's and customer B's VMs, respectively. Upon receiving the multicast packets, the virtualization policy also differentiates packets sent to 224.0.0.1 as destined for VMs of customer A from packets sent to 224.0.0.2 as destined for VMs of customer B. The packets are then decapsulated and delivered to the correct VMs based on the virtualization rules.
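As a final illustration, a hypothetical sketch of the per-customer multicast rules (Policies 1 and 2); the function names and packet format are assumptions:

```python
# Per-customer multicast isolation: any multicast/broadcast packet from a
# customer VM is encapsulated with that customer's assigned data-center
# multicast group (assigned by the management servers, e.g., VMM).

import ipaddress

customer_group = {"A": "224.0.0.1", "B": "224.0.0.2"}

def is_multicast(addr):
    return ipaddress.ip_address(addr).is_multicast

def encapsulate_multicast(customer, packet):
    if not is_multicast(packet["dst"]):
        return packet                               # unicast: normal rules apply
    return {"outer_dst": customer_group[customer],  # customer-specific group
            "inner": packet}

pkt = {"src": "AA_A1", "dst": "224.1.2.3", "payload": b"m"}
print(encapsulate_multicast("A", pkt)["outer_dst"])  # 224.0.0.1
print(encapsulate_multicast("B", pkt)["outer_dst"])  # 224.0.0.2
```

On receipt, a host reverses the rule: the outer group identifies the customer, and the inner packet is decapsulated and delivered to that customer's VMs only.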
(59) The invention is operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well known computing systems, environments, and/or configurations that may be suitable for use with the invention include, but are not limited to, personal computers, server computers, hand-held or laptop devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like.
(60) The invention may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The invention may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communication network. In a distributed computing environment, program modules may be located in both local and remote computer storage media, including memory storage devices.
(61) With reference to the drawings, an exemplary system for implementing the invention includes a general-purpose computing device in the form of a computer 1010.
(62) Computer 1010 typically includes a variety of computer readable media. Computer readable media can be any available media that can be accessed by computer 1010 and includes both volatile and nonvolatile media, removable and non-removable media. By way of example, and not limitation, computer readable media may comprise computer storage media. Computer storage media includes both volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by computer 1010. Combinations of any of the above should also be included within the scope of computer readable storage media.
(63) The system memory 1030 includes computer storage media in the form of volatile and/or nonvolatile memory such as read only memory (ROM) 1031 and random access memory (RAM) 1032. A basic input/output system 1033 (BIOS), containing the basic routines that help to transfer information between elements within computer 1010, such as during start-up, is typically stored in ROM 1031. RAM 1032 typically contains data and/or program modules that are immediately accessible to and/or presently being operated on by processing unit 1020.
(64) The computer 1010 may also include other removable/non-removable, volatile/nonvolatile computer storage media.
(65) The drives and their associated computer storage media, discussed above, provide storage of computer-readable instructions, data structures, program modules, and other data for the computer 1010.
(66) A user may enter commands and information into the computer 1010 through input devices such as a keyboard 1062 and pointing device 1061, commonly referred to as a mouse, trackball or touch pad. Other input devices may include a microphone 1063, a joystick, a tablet 1064, a satellite dish, a scanner, or the like. These and other input devices are often connected to the processing unit 1020 through a user input interface 1060 that is coupled to the system bus, but may be connected by other interface and bus structures, such as a parallel port, game port or a universal serial bus (USB). A monitor 1091 or other type of display device is also connected to the system bus 1021 via an interface, such as a video interface 1090. In addition to the monitor, computers may also include other peripheral output devices such as speakers 1097 and printer 1096, which may be connected through an output peripheral interface 1095.
(67) The computer 1010 may operate in a networked environment using logical connections to one or more remote computers, such as a remote computer 1080. The remote computer 1080 may be a personal computer, a server, a router, a network PC, a peer device or other common network node, and typically includes many or all of the elements described above relative to the computer 1010, although only a memory storage device 1081 has been illustrated in the drawings. The logical connections may include a local area network (LAN) 1071 and a wide area network (WAN) 1073, but may also include other networks.
(68) When used in a LAN networking environment, the computer 1010 is connected to the LAN 1071 through a network interface or adapter 1070. When used in a WAN networking environment, the computer 1010 typically includes a modem 1072 or other means for establishing communications over the WAN 1073, such as the Internet. The modem 1072, which may be internal or external, may be connected to the system bus 1021 via the user input interface 1060, or other appropriate mechanism. In a networked environment, program modules depicted relative to the computer 1010, or portions thereof, may be stored in the remote memory storage device.
(69) Having thus described several aspects of at least one embodiment of this invention, it is to be appreciated that various alterations, modifications, and improvements will readily occur to those skilled in the art.
(70) Such alterations, modifications, and improvements are intended to be part of this disclosure, and are intended to be within the spirit and scope of the invention. Accordingly, the foregoing description and drawings are by way of example only.
(71) The above-described embodiments of the present invention can be implemented in any of numerous ways. For example, the embodiments may be implemented using hardware, software or a combination thereof. When implemented in software, the software code can be executed on any suitable processor or collection of processors, whether provided in a single computer or distributed among multiple computers. Such processors may be implemented as integrated circuits, with one or more processors in an integrated circuit component. Though, a processor may be implemented using circuitry in any suitable format.
(72) Further, it should be appreciated that a computer may be embodied in any of a number of forms, such as a rack-mounted computer, a desktop computer, a laptop computer, or a tablet computer. Additionally, a computer may be embedded in a device not generally regarded as a computer but with suitable processing capabilities, including a Personal Digital Assistant (PDA), a smart phone or any other suitable portable or fixed electronic device.
(73) Also, a computer may have one or more input and output devices. These devices can be used, among other things, to present a user interface. Examples of output devices that can be used to provide a user interface include printers or display screens for visual presentation of output and speakers or other sound generating devices for audible presentation of output. Examples of input devices that can be used for a user interface include keyboards, and pointing devices, such as mice, touch pads, and digitizing tablets. As another example, a computer may receive input information through speech recognition or in other audible format.
(74) Such computers may be interconnected by one or more networks in any suitable form, including as a local area network or a wide area network, such as an enterprise network or the Internet. Such networks may be based on any suitable technology and may operate according to any suitable protocol and may include wireless networks, wired networks or fiber optic networks.
(75) Also, the various methods or processes outlined herein may be coded as software that is executable on one or more processors that employ any one of a variety of operating systems or platforms. Additionally, such software may be written using any of a number of suitable programming languages and/or programming or scripting tools, and also may be compiled as executable machine language code or intermediate code that is executed on a framework or virtual machine.
(76) In this respect, the invention may be embodied as a computer readable storage medium (or multiple computer readable media) (e.g., a computer memory, one or more floppy discs, compact discs (CD), optical discs, digital video disks (DVD), magnetic tapes, flash memories, circuit configurations in Field Programmable Gate Arrays or other semiconductor devices, or other non-transitory, tangible computer storage medium) encoded with one or more programs that, when executed on one or more computers or other processors, perform methods that implement the various embodiments of the invention discussed above. The computer readable storage medium or media can be transportable, such that the program or programs stored thereon can be loaded onto one or more different computers or other processors to implement various aspects of the present invention as discussed above. As used herein, the term non-transitory computer-readable storage medium encompasses only a computer-readable medium that can be considered to be a manufacture (i.e., article of manufacture) or a machine. Alternatively or additionally, the invention may be embodied as a computer readable medium other than a computer-readable storage medium, such as a propagating signal.
(77) The terms program or software are used herein in a generic sense to refer to any type of computer code or set of computer-executable instructions that can be employed to program a computer or other processor to implement various aspects of the present invention as discussed above. Additionally, it should be appreciated that according to one aspect of this embodiment, one or more computer programs that when executed perform methods of the present invention need not reside on a single computer or processor, but may be distributed in a modular fashion amongst a number of different computers or processors to implement various aspects of the present invention.
(78) Computer-executable instructions may be in many forms, such as program modules, executed by one or more computers or other devices. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. Typically the functionality of the program modules may be combined or distributed as desired in various embodiments.
(79) Also, data structures may be stored in computer-readable media in any suitable form. For simplicity of illustration, data structures may be shown to have fields that are related through location in the data structure. Such relationships may likewise be achieved by assigning storage for the fields with locations in a computer-readable medium that conveys relationship between the fields. However, any suitable mechanism may be used to establish a relationship between information in fields of a data structure, including through the use of pointers, tags or other mechanisms that establish relationship between data elements.
(80) Various aspects of the present invention may be used alone, in combination, or in a variety of arrangements not specifically discussed in the embodiments described in the foregoing, and the invention is therefore not limited in its application to the details and arrangement of components set forth in the foregoing description or illustrated in the drawings. For example, aspects described in one embodiment may be combined in any manner with aspects described in other embodiments.
(81) Also, the invention may be embodied as a method, of which an example has been provided. The acts performed as part of the method may be ordered in any suitable way. Accordingly, embodiments may be constructed in which acts are performed in an order different than illustrated, which may include performing some acts simultaneously, even though shown as sequential acts in illustrative embodiments.
(82) Use of ordinal terms such as first, second, third, etc., in the claims to modify a claim element does not by itself connote any priority, precedence, or order of one claim element over another or the temporal order in which acts of a method are performed; such terms are used merely as labels to distinguish one claim element having a certain name from another element having the same name (but for use of the ordinal term).
(83) Also, the phraseology and terminology used herein is for the purpose of description and should not be regarded as limiting. The use of "including," "comprising," "having," "containing," "involving," and variations thereof herein is meant to encompass the items listed thereafter and equivalents thereof, as well as additional items.