COMPUTING DEVICE WITH ETHERNET CONNECTIVITY FOR VIRTUAL MACHINES ON SEVERAL SYSTEMS ON A CHIP
20220137999 · 2022-05-05
CPC classification
G06F2009/45595
PHYSICS
H04L12/4604
ELECTRICITY
H04L12/4641
ELECTRICITY
G06F15/7807
PHYSICS
International classification
G06F9/455
PHYSICS
Abstract
A computing device, in particular for automotive applications, includes Ethernet connectivity for virtual machines on several systems on a chip. A vehicle comprises such a computing device. The computing device comprises two or more systems on a chip, each system on a chip comprising one or more virtual machines, wherein one system on a chip provides a connection to an Ethernet network, and wherein the two or more systems on a chip are connected by a switch. The virtual machines are connected via a virtual Ethernet link. For this purpose, each system on a chip comprises an instance of a distributed virtual switch, which is configured to provide a virtualized access to the Ethernet network for the virtual machines of the respective system on a chip.
Claims
1. A computing device comprising two or more systems on a chip, each system on a chip comprising one or more virtual machines, wherein one system on a chip provides a connection to an Ethernet network, and wherein the two or more systems on a chip are connected by a switch, characterized in that the virtual machines are connected via a virtual Ethernet link, and in that each system on a chip comprises an instance of a distributed virtual switch, which is configured to provide a virtualized access to the Ethernet network for the virtual machines of the respective system on a chip.
2. The computing device according to claim 1, wherein the switch is a PCIe switch with or without a non-transparent bridge functionality.
3. The computing device according to claim 2, wherein the instances of the distributed virtual switch are configured to provide a virtual Ethernet link to each virtual machine of the respective system on a chip.
4. The computing device according to claim 3, wherein the instances of the distributed virtual switch are software components that are executed in a privileged mode.
5. The computing device according to claim 4, wherein each instance of the distributed virtual switch is configured to discover the instances of the distributed virtual switch of the other systems on a chip via the switch and to establish a dedicated communication channel to each other instance of the distributed virtual switch.
6. The computing device according to claim 5, wherein, for each virtual Ethernet link, the instances of the distributed virtual switch are configured to handle frame transmission requests to local virtual machines using data transfer.
7. The computing device according to claim 5, wherein, for each virtual Ethernet link, the instances of the distributed virtual switch are configured to serve frame transmission requests to virtual machines on a target system on a chip by forwarding the request to the instance of the distributed virtual switch on the target system on a chip and providing frame metadata including a data source address, a destination address, or a VLAN tag.
8. The computing device according to claim 7, wherein the instances of the distributed virtual switch are configured to provide a spatial isolation of the communication related to the virtual machines, or to provide a temporal isolation between the virtual machines with regard to Ethernet communication.
9. The computing device according to claim 8, wherein the instances of the distributed virtual switch are configured to scan outgoing and incoming Ethernet traffic from and to each virtual machine for metadata.
10. The computing device according to claim 9, wherein the instances of the distributed virtual switch are configured to scan ingress traffic and egress traffic and to perform plausibility checks.
11. The computing device according to claim 10, wherein the instance of the distributed virtual switch of the system on a chip providing the connection to the Ethernet network has exclusive access to an Ethernet network device.
12. The computing device according to claim 11, wherein, for each virtual Ethernet link, the instances of the distributed virtual switch are configured to serve frame transmission requests to the Ethernet network by forwarding the request to the instance of the distributed virtual switch of the system on a chip providing the connection to the Ethernet network.
13. The computing device according to claim 12, wherein the instance of the distributed virtual switch of the system on a chip providing the connection to the Ethernet network is configured to manage fetching data targeted to this Ethernet network from local virtual machines and from instances of the distributed virtual switch of remote systems on a chip.
14. The computing device according to claim 13, wherein the instance of the distributed virtual switch of the system on a chip providing the connection to the Ethernet network is configured to serve received frames from the Ethernet network to local virtual machines using data transfer and to remote virtual machines by forwarding the frame metadata to the instance of the distributed virtual switch of the target system on a chip.
15. A vehicle, characterized in that the vehicle comprises a computing device comprising two or more systems on a chip, each system on a chip comprising one or more virtual machines, wherein one system on a chip provides a connection to an Ethernet network, and wherein the two or more systems on a chip are connected by a switch, characterized in that the virtual machines are connected via a virtual Ethernet link, and in that each system on a chip comprises an instance of a distributed virtual switch, which is configured to provide a virtualized access to the Ethernet network for the virtual machines of the respective system on a chip.
16. The vehicle according to claim 15, wherein the switch is a PCIe switch with or without a non-transparent bridge functionality.
17. The vehicle according to claim 16, wherein the instances of the distributed virtual switch are configured to provide a virtual Ethernet link to each virtual machine of the respective system on a chip.
18. The vehicle according to claim 17, wherein the instances of the distributed virtual switch are software components that are executed in a privileged mode.
19. The vehicle according to claim 18, wherein each instance of the distributed virtual switch is configured to discover the instances of the distributed virtual switch of the other systems on a chip via the switch and to establish a dedicated communication channel to each other instance of the distributed virtual switch.
20. The vehicle according to claim 19, wherein, for each virtual Ethernet link, the instances of the distributed virtual switch are configured to handle frame transmission requests to local virtual machines using data transfer.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION
[0034] The present description illustrates the principles of the present disclosure. It will thus be appreciated that those skilled in the art will be able to devise various arrangements that, although not explicitly described or shown herein, embody the principles of the disclosure.
[0035] All examples and conditional language recited herein are intended for educational purposes to aid the reader in understanding the principles of the disclosure and the concepts contributed by the inventor to furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions.
[0036] Moreover, all statements herein reciting principles, aspects, and embodiments of the disclosure, as well as specific examples thereof, are intended to encompass both structural and functional equivalents thereof. Additionally, it is intended that such equivalents include both currently known equivalents as well as equivalents developed in the future, i.e., any elements developed that perform the same function, regardless of structure.
[0037] Thus, for example, it will be appreciated by those skilled in the art that the diagrams presented herein represent conceptual views of illustrative circuitry embodying the principles of the disclosure.
[0038] The functions of the various elements shown in the figures may be provided through the use of dedicated hardware as well as hardware capable of executing software in association with appropriate software. When provided by a processor, the functions may be provided by a single dedicated processor, by a single shared processor, or by a plurality of individual processors, some of which may be shared. Moreover, explicit use of the term “processor” or “controller” should not be construed to refer exclusively to hardware capable of executing software, and may implicitly include, without limitation, digital signal processor (DSP) hardware, systems on a chip, microcontrollers, read only memory (ROM) for storing software, random access memory (RAM), and nonvolatile storage.
[0039] Other hardware, conventional and/or custom, may also be included. Similarly, any switches shown in the figures are conceptual only. Their function may be carried out through the operation of program logic, through dedicated logic, through the interaction of program control and dedicated logic, or even manually, the particular technique being selectable by the implementer as more specifically understood from the context.
[0040] In the claims hereof, any element expressed as a means for performing a specified function is intended to encompass any way of performing that function including, for example, a combination of circuit elements that performs that function or software in any form, including, therefore, firmware, microcode or the like, combined with appropriate circuitry for executing that software to perform the function. The disclosure as defined by such claims resides in the fact that the functionalities provided by the various recited means are combined and brought together in the manner which the claims call for. It is thus regarded that any means that can provide those functionalities are equivalent to those shown herein.
[0041] When Ethernet was introduced in the automotive industry, control units were usually connected via internal Ethernet controllers to Ethernet switches. With increasing performance requirements and tighter integration of several control units, high-performance computers containing several independent virtual machines were introduced. In this case, virtual machine managers or hypervisors HV are used to partition the hardware among several operating systems.
[0045] According to the invention, for both solutions described above, a distributed virtual switch DVS is implemented on each SoC SoC1-SoC3. The distributed virtual switch DVS may be, for example, a software component running in privileged mode, e.g. as an extension of the hypervisor HV. The distributed virtual switch DVS provides an optimized Ethernet connectivity for each virtual machine VM1.1-VM3.2 on each SoC SoC1-SoC3 to other virtual machines VM1.1-VM3.2 on the same SoC SoC1-SoC3, to other virtual machines VM1.1-VM3.2 on different SoCs SoC1-SoC3, and to the Ethernet network ETH. For this purpose, the distributed virtual switch DVS provides a network device NetDev1.1-NetDev3.2 to each virtual machine VM1.1-VM3.2 running on the SoC SoC1-SoC3. The thin dotted arrows between the network devices NetDev1.1-NetDev3.2 and the virtual machines VM1.1-VM3.2 indicate transmit and receive queue accesses. In addition, the distributed virtual switch DVS provides for each other SoC SoC1-SoC3 one dedicated distributed virtual switch driver Drv1.1-Drv3.2, which is linked via the non-transparent bridge of the PCIe switch PCIe-SW to the respective distributed virtual switch driver Drv1.1-Drv3.2 of the other SoC SoC1-SoC3.
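The component relationships described in the paragraph above can be sketched as plain data structures. This is a minimal illustration only: all identifiers (`dvs_instance`, `net_dev`, `peer_drv`) and the fixed array sizes are assumptions for this sketch, not names from the disclosure.

```c
#include <stddef.h>

#define MAX_VMS   8   /* virtual machines per SoC (assumed limit) */
#define MAX_PEERS 4   /* other SoCs reachable via the PCIe switch (assumed) */

/* Virtual network device handed to one local virtual machine. */
struct net_dev {
    int vm_id;   /* owning virtual machine; Tx/Rx queues omitted */
};

/* Driver linked via the PCIe non-transparent bridge to its peer driver
 * on another SoC; holds a receive queue of frame metadata (omitted). */
struct peer_drv {
    int peer_soc_id;
};

/* One instance of the distributed virtual switch, running per SoC in
 * privileged mode, e.g. as an extension of the hypervisor HV. */
struct dvs_instance {
    int soc_id;
    struct net_dev  vm_devs[MAX_VMS];  /* one network device per local VM */
    struct peer_drv peers[MAX_PEERS];  /* one dedicated driver per remote SoC */
    int has_eth_uplink;                /* set on the one SoC owning the
                                          physical Ethernet connection */
};
```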
[0046] Each two linked distributed virtual switch drivers Drv1.1-Drv3.2 have a peer-to-peer communication. Each distributed virtual switch driver Drv1.1-Drv3.2 has a receive queue, which contains metadata of Ethernet frames, e.g. a destination MAC address, a VLAN tag, or a buffer address of an Ethernet frame transmitted by a virtual machine VM1.1-VM3.2. Each distributed virtual switch driver Drv1.1-Drv3.2 can insert an entry in the receive queue of its linked distributed virtual switch driver Drv1.1-Drv3.2 on the other SoC SoC1-SoC3. The distributed virtual switch DVS of the SoC SoC1 that is connected to the Ethernet network ETH further has access to an Ethernet network device, e.g. an Ethernet switch, via an Ethernet driver EthDrv.
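The metadata receive queue shared between two linked drivers might look as follows. Field names, the fixed queue depth, and the single-producer ring layout are illustrative assumptions; the disclosure only specifies that the queue carries frame metadata such as a destination MAC address, a VLAN tag, or a buffer address.

```c
#include <stdint.h>
#include <stdbool.h>

#define RXQ_DEPTH 16  /* assumed queue depth */

/* Metadata describing one Ethernet frame, as exchanged between linked
 * distributed virtual switch drivers across the non-transparent bridge. */
struct frame_meta {
    uint8_t  dst_mac[6];  /* destination MAC address */
    uint16_t vlan_tag;    /* VLAN tag */
    uint64_t buf_addr;    /* address of the Tx buffer holding the frame */
    uint32_t len;         /* frame length in bytes */
};

/* Ring queue: the producer is the linked driver on the other SoC, the
 * consumer is the local distributed virtual switch instance. */
struct rx_queue {
    struct frame_meta entries[RXQ_DEPTH];
    unsigned head;  /* next slot the producer writes */
    unsigned tail;  /* next slot the consumer reads  */
};

/* Insert an entry into the peer's receive queue; returns false when full. */
static bool rxq_push(struct rx_queue *q, const struct frame_meta *m)
{
    unsigned next = (q->head + 1) % RXQ_DEPTH;
    if (next == q->tail)
        return false;           /* queue full */
    q->entries[q->head] = *m;
    q->head = next;
    return true;
}

/* Poll for a new entry; returns false when the queue is empty. */
static bool rxq_pop(struct rx_queue *q, struct frame_meta *out)
{
    if (q->tail == q->head)
        return false;           /* nothing pending */
    *out = q->entries[q->tail];
    q->tail = (q->tail + 1) % RXQ_DEPTH;
    return true;
}
```

In a real system the queue memory would live in a PCIe window exposed through the non-transparent bridge, so `rxq_push` runs on one SoC and `rxq_pop` on the other.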
[0047] Advantageously, the distributed virtual switch DVS further has additional information with regard to each virtual machine VM1.1-VM3.2, e.g. an allowed bandwidth, an arbitration priority between the queues of a virtual machine VM1.1-VM3.2 and between several virtual machines VM1.1-VM3.2, a guest physical address mapping, and so on. Because of this information and the full control of the configuration of the network connection, e.g. the Ethernet switch, and the data and control path of each virtual network device, those devices can guarantee a spatial and temporal separation.
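One way the "allowed bandwidth" information could be enforced per virtual machine is a token-bucket check before each transmission. This is purely an assumed realization: the disclosure states only that the distributed virtual switch holds such per-VM information, not how it enforces it.

```c
#include <stdint.h>
#include <stdbool.h>

/* Per-VM policy held by the distributed virtual switch (assumed layout). */
struct vm_policy {
    uint32_t allowed_bps;     /* allowed bandwidth in bytes per second */
    uint8_t  priority;        /* arbitration priority between VM queues */
    uint64_t tokens;          /* remaining byte budget in current window */
    uint64_t last_refill_us;  /* timestamp of the last refill, microseconds */
};

/* Refill tokens for the elapsed time, then charge a frame of `len` bytes;
 * returns false when the VM has exhausted its bandwidth budget, giving the
 * temporal isolation between virtual machines mentioned above. */
static bool vm_charge(struct vm_policy *p, uint64_t now_us, uint32_t len)
{
    uint64_t elapsed = now_us - p->last_refill_us;
    p->tokens += elapsed * p->allowed_bps / 1000000u;
    if (p->tokens > p->allowed_bps)   /* cap the burst at one second's worth */
        p->tokens = p->allowed_bps;
    p->last_refill_us = now_us;
    if (p->tokens < len)
        return false;
    p->tokens -= len;
    return true;
}
```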
[0049] The distributed virtual switch DVS at the target SoC SoC1 periodically checks if there is a new entry in the Rx queue of the local distributed virtual switch drivers Drv1.1-Drv1.2. The distributed virtual switch DVS will thus detect the new entry with the metadata. With the help of the routing information in the metadata and based on a configured routing table, the distributed virtual switch DVS at the target SoC SoC1 determines that the destination of this frame is a virtual machine VM1.1 on this SoC SoC1. The distributed virtual switch DVS at the target SoC SoC1 thus retrieves the next free Rx buffer from the network device NetDev1.1 of the destination virtual machine VM1.1. The distributed virtual switch DVS at the target SoC SoC1 now sets up a DMA copy of the frame from the Tx buffer on the SoC SoC2 at the source virtual machine VM2.1 to this Rx buffer of the destination virtual machine VM1.1. The DMA copy is executed via a PCIe link and is indicated by the thick solid arrow between the source virtual machine VM2.1 and the destination virtual machine VM1.1. After the DMA copy is finished, the distributed virtual switch DVS at the target SoC SoC1 informs the destination virtual machine VM1.1 that a new frame has been received and provides the filled Rx buffer back to the virtual machine VM1.1. Furthermore, it informs the distributed virtual switch DVS at the source virtual machine VM2.1 that the frame copy is finished, and that the Tx buffer can be released. The distributed virtual switch DVS at the source virtual machine VM2.1 recognizes this information. It informs the source virtual machine VM2.1 that the transmission is finished and returns the Tx buffer back to the virtual machine VM2.1.
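The routing and delivery steps of the paragraph above can be sketched as follows. The routing table, all function names, and the `memcpy` standing in for the PCIe DMA copy are illustrative assumptions under this sketch, not elements of the disclosure.

```c
#include <stdint.h>
#include <stdbool.h>
#include <string.h>

/* Routing table entry: destination MAC address -> local VM id. */
struct route { uint8_t mac[6]; int vm_id; };

/* Resolve a destination MAC to a local VM; -1 means "not on this SoC". */
static int route_lookup(const struct route *tbl, size_t n, const uint8_t mac[6])
{
    for (size_t i = 0; i < n; i++)
        if (memcmp(tbl[i].mac, mac, 6) == 0)
            return tbl[i].vm_id;
    return -1;
}

/* Metadata subset sufficient for routing one pending frame. */
struct meta { uint8_t dst_mac[6]; const uint8_t *tx_buf; uint32_t len; };

/* Deliver one frame whose metadata arrived from a remote SoC: resolve the
 * destination VM, copy the payload into that VM's next free Rx buffer
 * (memcpy stands in for the DMA copy over the PCIe link), then signal
 * "frame received" to the destination and "Tx buffer released" back to
 * the source side. */
static bool deliver_local(const struct route *tbl, size_t n,
                          const struct meta *m,
                          uint8_t *rx_buf, size_t rx_cap,
                          bool *vm_notified, bool *tx_buf_released)
{
    int vm = route_lookup(tbl, n, m->dst_mac);
    if (vm < 0 || m->len > rx_cap)
        return false;                  /* not local, or Rx buffer too small */
    memcpy(rx_buf, m->tx_buf, m->len); /* the DMA copy in the real system */
    *vm_notified = true;               /* new frame for the destination VM */
    *tx_buf_released = true;           /* completion back to the source */
    return true;
}
```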
[0051] The distributed virtual switch DVS at the target SoC SoC1 periodically checks if there is a new entry in the Rx queue of the local distributed virtual switch drivers Drv1.1-Drv1.2. The distributed virtual switch DVS will thus detect the new entry with the metadata. With the help of the routing information in the metadata and based on a configured routing table, the distributed virtual switch DVS at the target SoC SoC1 determines that the destination of this frame is the network ETH. The distributed virtual switch DVS at the target SoC SoC1 thus retrieves the next free Tx buffer from the Ethernet driver EthDrv of the network device. The distributed virtual switch DVS at the target SoC SoC1 now sets up a DMA copy of the frame from the Tx buffer on the SoC SoC2 at the source virtual machine VM2.1 to this Tx buffer of the Ethernet driver EthDrv of the network device. The DMA copy is executed via a PCIe link and is indicated by the thick solid arrow between the source virtual machine VM2.1 and the network ETH. After the DMA copy is finished, the distributed virtual switch DVS at the target SoC SoC1 informs the network device that a new frame for transmission is available. Furthermore, it informs the distributed virtual switch DVS at the source virtual machine VM2.1 that the frame copy is finished, and that the Tx buffer can be released. The network device then reads the new frame for transmission and transmits the frame. The distributed virtual switch DVS at the source virtual machine VM2.1 recognizes that the frame copy is finished. It informs the source virtual machine VM2.1 that the transmission is finished and returns the Tx buffer back to the virtual machine VM2.1.
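The egress handoff described above, from a remote VM's Tx buffer into the Ethernet driver's Tx buffer, can be sketched as follows. The one-slot driver model, the names, and the `memcpy` standing in for the PCIe DMA copy are assumptions made for this illustration.

```c
#include <stdint.h>
#include <stdbool.h>
#include <string.h>

/* Minimal model of the Ethernet driver on the uplink SoC. */
struct eth_drv {
    uint8_t  tx_buf[1600];  /* next free Tx buffer (one slot for brevity) */
    uint32_t tx_len;
    bool     tx_pending;    /* "new frame for transmission" signal */
};

/* Serve one frame bound for the external network: copy it from the source
 * SoC's Tx buffer into the Ethernet driver's Tx buffer (memcpy stands in
 * for the DMA copy over the PCIe link), signal the network device, and
 * release the source VM's Tx buffer. */
static bool egress_frame(struct eth_drv *drv,
                         const uint8_t *src_tx_buf, uint32_t len,
                         bool *src_buf_released)
{
    if (drv->tx_pending || len > sizeof(drv->tx_buf))
        return false;             /* driver busy, or frame too large */
    memcpy(drv->tx_buf, src_tx_buf, len);
    drv->tx_len = len;
    drv->tx_pending = true;       /* the device will read and transmit */
    *src_buf_released = true;     /* source Tx buffer can be returned */
    return true;
}
```

A real driver would expose a ring of Tx descriptors rather than a single slot; the single flag here only mirrors the "new frame available" notification in the paragraph above.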