APPARATUS AND METHOD FOR SUPPORTING MULTIPLE VIRTUAL SWITCH INSTANCES ON A NETWORK SWITCH
20170237691 · 2017-08-17
Abstract
A network switch to support multiple virtual switch instances comprises a control CPU configured to run a plurality of network switch control stacks, wherein each of the network switch control stacks is configured to manage and control operations of one or more virtual switch instances of a switching logic circuitry of the network switch. The network switch further includes said switching logic circuitry partitioned into a plurality of said virtual switch instances, wherein each of the virtual switch instances is provisioned and controlled by one of the network switch control stacks and is dedicated to serve and route data packets for a specific client of the network switch.
Claims
1. A network switch to support multiple virtual switch instances, comprising: a control CPU configured to run a plurality of network switch control stacks, wherein each of the network switch control stacks is configured to manage and control operations of one or more virtual switch instances of a switching logic circuitry of the network switch; said switching logic circuitry partitioned into a plurality of said virtual switch instances, wherein each of the virtual switch instances is provisioned and controlled by one of the network switch control stacks and is dedicated to serve and route data packets for a specific client of the network switch.
2. The network switch of claim 1, wherein: each of the network switch control stacks includes a network operating system (NOS) configured to implement a network communication protocol for data communication with the client via the one or more virtual switch instances; a switch software deployment kit (SDK) configured to control routing configuration of the virtual switch instances; and a switch configuration interface driver configured to control and configure a configurable communication bus between the network switch control stack and the virtual switch instances.
3. The network switch of claim 2 wherein: the NOS includes one or more of Open Shortest Path First (OSPF) protocol, Border Gateway Protocol (BGP), and Virtual Extensible LAN (Vxlan) Protocol.
4. The network switch of claim 2 wherein: different network switch control stacks running on the control CPU of the network switch have different types of the NOS that are completely unrelated to each other.
5. The network switch of claim 1 wherein: the switching logic circuitry is an application specific integrated circuit (ASIC).
6. The network switch of claim 1 wherein: one of the network switch control stacks is configured to control only one virtual switch instance and different virtual switch instances are controlled by different network switch control stacks.
7. The network switch of claim 1 wherein: one of the network switch control stacks is configured to control multiple of the virtual switch instances.
8. The network switch of claim 1 further comprising: a plurality of I/O ports partitioned among the plurality of virtual switch instances and controlled by the network switch control stacks, wherein each of the I/O ports is configured to transmit the data packets between the client and its corresponding virtual switch instance independently and separately from the data traffic between other clients and their virtual switch instances.
9. The network switch of claim 1 wherein: each of the virtual switch instances further includes a data processing pipeline configured to process and route the data packets through multiple processing stages based on table search results; a search logic unit associated with the corresponding data processing pipeline and configured to conduct a table search to generate the table search results; and a local memory cluster configured to maintain forwarding tables to be searched by the search logic unit.
10. The network switch of claim 9 wherein: the data processing pipeline, the search logic unit, and the local memory cluster are all identified by one virtual switch ID of the virtual switch instance.
11. The network switch of claim 9 wherein: the table search includes one of hashing for a Media Access Control (MAC) address look up, Longest-Prefix Matching (LPM) for Internet Protocol (IP) routing, wild card matching (WCM) for an Access Control List (ACL) and direct memory access for control data.
12. The network switch of claim 9 wherein: the data processing pipeline is allowed to access its own local memory cluster only.
13. The network switch of claim 9 wherein: each data processing pipeline is configured to access other memory clusters in addition to or instead of its own local memory cluster through its corresponding search logic unit if the tables to be searched are stored across multiple memory clusters.
14. The network switch of claim 9 wherein: the data processing pipeline further comprises a plurality of lookup and decision engines (LDEs) connected in a chain, wherein, as one of the processing stages in the data processing pipeline, each LDE is configured to generate a master table lookup key for the data packets received and to process/modify the data packets received based on search results of the tables by the search logic unit using the master table lookup key.
15. The network switch of claim 14 wherein: the search logic unit is configured to accept and process a unified table request from its corresponding data processing pipeline, wherein the unified table request includes the master table lookup key.
16. The network switch of claim 15 wherein: the search logic unit is configured to collect and transmit the search results back to the requesting data processing pipeline in a unified response format as a plurality of result lanes.
17. A method to support multiple virtual switch instances, comprising: executing a plurality of network switch control stacks on a control CPU of a network switch, wherein each of the network switch control stacks is configured to manage and control operations of one or more virtual switch instances of a switching logic circuitry of the network switch; partitioning said switching logic circuitry into a plurality of said virtual switch instances, wherein each of the virtual switch instances is provisioned and controlled by one of the network switch control stacks and is dedicated to serve and route data packets for a specific client of the network switch.
18. The method of claim 17 further comprising: implementing a network communication protocol for data communication with the client via a network operating system (NOS) in each of the network switch control stacks; controlling routing configuration of the virtual switch instances via a switch software deployment kit (SDK) in the network switch control stack; and controlling and configuring a configurable communication bus between the network switch control stack and the virtual switch instances via a switch configuration interface driver in the network switch control stack.
19. The method of claim 17 further comprising: controlling only one virtual switch instance via one of the network switch control stacks and controlling different virtual switch instances by different network switch control stacks.
20. The method of claim 17 further comprising: controlling multiple of the virtual switch instances via one of the network switch control stacks.
21. The method of claim 17 further comprising: partitioning a plurality of I/O ports among the plurality of virtual switch instances, wherein the I/O ports are controlled by the network switch control stacks and each of the I/O ports is configured to transmit the data packets between the client and its corresponding virtual switch instance independently and separately from the data traffic between other clients and their virtual switch instances.
22. The method of claim 17 further comprising: processing and routing the data packets through multiple processing stages based on table search results via a data processing pipeline in each of the virtual switch instances; conducting a table search to generate the table search results via a search logic unit associated with the corresponding data processing pipeline; and maintaining forwarding tables to be searched by the search logic unit via a local memory cluster in the virtual switch instance.
23. The method of claim 22 further comprising: allowing the data processing pipeline to access its own local memory cluster only.
24. The method of claim 22 further comprising: allowing each data processing pipeline to access other memory clusters in addition to or instead of its own local memory cluster through its corresponding search logic unit if the tables to be searched are stored across multiple memory clusters.
25. The method of claim 22 further comprising: connecting a plurality of lookup and decision engines (LDEs) in the data processing pipeline in a chain, wherein, as one of the processing stages in the data processing pipeline, each LDE is configured to generate a master table lookup key for the data packets received and to process/modify the data packets received based on search results of the tables by the search logic unit using the master table lookup key.
26. The method of claim 25 further comprising: accepting and processing, via the search logic unit, a unified table request from its corresponding data processing pipeline, wherein the unified table request includes the master table lookup key.
27. The method of claim 26 further comprising: collecting and transmitting the search results back to the requesting data processing pipeline in a unified response format as a plurality of result lanes.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0006] The foregoing will be apparent from the following more particular description of example embodiments of the invention, as illustrated in the accompanying drawings in which like reference characters refer to the same parts throughout the different views.
DETAILED DESCRIPTION
[0011] The following disclosure provides many different embodiments, or examples, for implementing different features of the subject matter. Specific examples of components and arrangements are described below to simplify the present disclosure. These are, of course, merely examples and are not intended to be limiting. In addition, the present disclosure may repeat reference numerals and/or letters in the various examples. This repetition is for the purpose of simplicity and clarity and does not in itself dictate a relationship between the various embodiments and/or configurations discussed.
[0013] In the example of
[0014] In some embodiments, each of the network switch control stacks 106 includes a network operating system (NOS) 108, a switch software deployment kit (SDK) 110, and a switch configuration interface driver 112 for one or more virtual switch instances 114. Here, the NOS 108 is comprehensive software configured to implement a network communication protocol for data communication with one of the clients of the network switch 100 via one or more of the virtual switch instances 114. In addition to other software modules required to manage the network switch 100, the NOS 108 may further include one or more protocol stacks, including but not limited to: Open Shortest Path First (OSPF) protocol, which is a routing protocol for Internet Protocol (IP) networks; Border Gateway Protocol (BGP), which is a standardized exterior gateway protocol designed to exchange routing and reachability information among autonomous systems (AS) on the Internet; and Virtual Extensible LAN (Vxlan) Protocol, which is a network virtualization technology that addresses the scalability problems associated with large cloud computing deployments.
[0015] The switch SDK 110 is configured to control routing configurations of the virtual switch instances 114, and the switch configuration interface driver 112 is configured to control and configure a configurable communication bus (e.g., PCIe/I.sup.2C/MDIO, etc.) between the network switch control stack 106 and the virtual switch instances 114. In some embodiments, settings and configurations of the switch SDK 110 of the network switch control stack 106 are adjustable by a user (e.g., a network system administrator) via a user interface (not shown) provided by the network switch 100. In some embodiments, different network switch control stacks 106 running on the same control CPU 102 of the network switch 100 may have different types of NOS 108s that are completely unrelated to each other.
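For illustration only, the per-stack composition described in paragraphs [0014] and [0015] can be modeled as a minimal sketch. The class and field names below (ControlStack, nos_protocols, bus_type, etc.) are hypothetical and chosen for the example; they do not appear in the disclosure.

```python
from dataclasses import dataclass, field

@dataclass
class ControlStack:
    """Hypothetical model of one network switch control stack (106)."""
    stack_id: int
    nos_protocols: tuple      # NOS protocol stacks, e.g. ("OSPF", "BGP", "Vxlan")
    bus_type: str             # configurable bus: "PCIe", "I2C", or "MDIO"
    instance_ids: list = field(default_factory=list)  # virtual switch instances it controls

    def provision(self, instance_id: int) -> None:
        # Each virtual switch instance is provisioned and controlled by exactly one stack.
        self.instance_ids.append(instance_id)

# Two unrelated stacks sharing one control CPU, each with its own NOS type
# (per claims 4 and 7, one stack may also control multiple instances).
stack_a = ControlStack(0, ("OSPF", "BGP"), "PCIe")
stack_b = ControlStack(1, ("Vxlan",), "MDIO")
stack_a.provision(0)
stack_b.provision(1)
stack_b.provision(2)
```

The sketch captures only the ownership relationship between stacks and instances, not the switching logic itself.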
[0016] In the example of
[0017] In the example of
[0019] Table search has been widely adopted for the control logic of the network switch 100, wherein the network switch 100 performs search/lookup operations on the tables stored in the memory of the network switch for each incoming packet and takes actions as instructed by the table search results, or takes a default action in case of a table search miss. Examples of the table search performed in the network switch 100 include but are not limited to: hashing for a Media Access Control (MAC) address lookup, Longest-Prefix Matching (LPM) for Internet Protocol (IP) routing, wild card matching (WCM) for an Access Control List (ACL), and direct memory access for control data. The table search in the network switch allows management of network services by decoupling decisions about where traffic/packets are sent (i.e., the control plane of the switch) from the underlying systems that forward the packets to the selected destination (i.e., the data plane of the switch), which is especially important for Software Defined Networks (SDN).
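As a minimal sketch of one of the table search types named above, the following illustrates Longest-Prefix Matching for IP routing. The route table contents and the linear-scan implementation are illustrative assumptions; a hardware pipeline would use specialized structures (e.g., tries or TCAM), not this loop.

```python
import ipaddress

def lpm_lookup(table, dst_ip):
    """Return the next hop for the longest matching prefix, or None on a miss."""
    dst = ipaddress.ip_address(dst_ip)
    best, best_len = None, -1
    for prefix, next_hop in table.items():
        net = ipaddress.ip_network(prefix)
        # Keep the matching prefix with the greatest prefix length.
        if dst in net and net.prefixlen > best_len:
            best, best_len = next_hop, net.prefixlen
    return best

# Hypothetical forwarding table; the /0 entry is the default action on a miss.
routes = {
    "10.0.0.0/8": "port1",
    "10.1.0.0/16": "port2",
    "0.0.0.0/0": "default",
}
# 10.1.2.3 matches both 10.0.0.0/8 and 10.1.0.0/16; the longer /16 prefix wins.
```

The `/0` default route plays the role of the "default action in case of a table search miss" described above.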
[0020] In the example of
[0021] In the example of
[0022] In some embodiments, each memory cluster 208 includes a variety of memory tiles 210 that can be but are not limited to a plurality of static random-access memory (SRAM) pools and/or ternary content-addressable memory (TCAM) pools. Here, the SRAM pools support direct memory access, and each TCAM pool encodes three possible states instead of two, with a "Don't Care" or "X" state for one or more bits in a stored data word for additional flexibility. In some embodiments, the memory tiles 210 can be flexibly configured to accommodate and store different table types as well as entry widths. Since certain memory operations, such as hash table and LPM table lookups, may require access to multiple memory pools for best memory efficiency, the division of each memory cluster 208 into multiple separate pools allows for parallel memory accesses.
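The ternary "Don't Care" matching attributed to the TCAM pools can be sketched in software as follows. The bit-string encoding and entry width are assumptions made for the example; real TCAM entries are fixed-width hardware words.

```python
def tcam_match(pattern: str, key: str) -> bool:
    """Ternary match: each pattern bit is '0', '1', or 'X' (don't care)."""
    if len(pattern) != len(key):
        return False
    # A position matches if the pattern bit is 'X' or equals the key bit.
    return all(p in ("X", k) for p, k in zip(pattern, key))

# A hypothetical 4-bit entry matching any key whose two high bits are '10'.
entry = "10XX"
```

This third "X" state is what lets a single TCAM entry cover a range of keys, which is why TCAM is commonly used for wild card matching such as ACL lookups.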
[0023] In the example of
[0025] In some embodiments, the search logic unit 206 is configured to transmit the lookup result back to the requesting data processing pipeline 202 in the unified response format as a plurality of (e.g., four) result lanes as depicted in the example of
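A minimal sketch of the unified response format described above, carrying search results back to the requesting data processing pipeline on a fixed number of result lanes (four, per the example in the text). The function and field names are assumptions for illustration only.

```python
NUM_LANES = 4  # plurality of result lanes; four per the example above

def pack_response(results):
    """Place up to NUM_LANES search results on lanes; unused lanes carry None."""
    if len(results) > NUM_LANES:
        raise ValueError("more results than available lanes")
    lanes = list(results) + [None] * (NUM_LANES - len(results))
    return {"lane%d" % i: r for i, r in enumerate(lanes)}

# Hypothetical results from two of the table search types, e.g. a MAC
# lookup hit and an ACL decision, returned together in one response.
resp = pack_response(["mac_hit", "acl_permit"])
```

The fixed lane count models the unified response format: the pipeline always receives the same response shape regardless of how many searches produced results.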
[0027] The foregoing description, for purposes of explanation, used specific nomenclature to provide a thorough understanding of the invention. However, it will be apparent to one skilled in the art that these specific details are not required in order to practice the invention. Thus, the foregoing descriptions of specific embodiments of the invention are presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the invention to the precise forms disclosed; obviously, many modifications and variations are possible in view of the above teachings. The embodiments were chosen and described in order to best explain the principles of the invention and its practical applications, thereby enabling others skilled in the art to best utilize the invention and various embodiments with various modifications as are suited to the particular use contemplated. It is intended that the following claims and their equivalents define the scope of the invention.