Scheduling method for virtual processors based on the affinity of NUMA high-performance network buffer resources

09800523 · 2017-10-24


Abstract

A scheduling method for virtual processors based on the affinity of NUMA high-performance network buffer resources, including: in a NUMA architecture, when a network interface card (NIC) of a virtual machine is started, getting the distribution of the buffer of the NIC on each NUMA node; getting affinities of each NUMA node for the buffer of the NIC on the basis of an affinity relationship between the NUMA nodes; determining a target NUMA node in combination with the distribution of the buffer of the NIC on each NUMA node and the affinities of each NUMA node for the buffer of the NIC; and scheduling the virtual processor to a CPU on the target NUMA node. The method solves the problem that, in the NUMA architecture, the affinity between the VCPU of the virtual machine and the buffer of the NIC is not optimal, thereby improving the speed at which the VCPU processes network packets.

Claims

1. A scheduling method for virtual processors based on the affinity of NUMA high-performance network buffer resources, wherein the scheduling method includes the following steps:
(1) in a NUMA architecture, when a network interface card of a virtual machine is started, getting the distribution of the buffer of the network interface card on each NUMA node;
(2) getting affinities of each NUMA node for the buffer of the network interface card on the basis of an affinity relationship between each NUMA node;
(3) determining a target NUMA node in combination with the distribution of the buffer of the network interface card on each NUMA node and the affinities of each NUMA node for the buffer of the network interface card, wherein a CPU load balance on each NUMA node is further combined to determine the target NUMA node;
(4) scheduling the virtual processor to a CPU on the target NUMA node;
(5) continuing to monitor the running condition of the network interface card of the virtual machine;
wherein in the step (1), getting the distribution of the buffer of the network interface card on each NUMA node includes the following steps:
(11) when a driver of a virtual function of the virtual machine is started, detecting a virtual address at which Direct Memory Access allocates the buffer in the driver, and getting the size of the buffer of the virtual function;
(12) sending the virtual address to a specified domain;
(13) the specified domain making a request, by a hypercall, to a virtual machine monitor for a physical address corresponding to the virtual address;
(14) determining the distribution of the buffer of the network interface card on each NUMA node on the basis of the analysis of the distribution of the buffer on the NUMA node corresponding to the physical address;
and wherein in the step (2), getting affinities of each NUMA node for the buffer of the network interface card on the basis of an affinity relationship between each NUMA node includes the following step:
(21) getting the affinities of each NUMA node for the buffer of the network interface card according to information of distances between each NUMA node.

2. The scheduling method for virtual processors according to claim 1, wherein in the step (11), the size of the buffer of the virtual function is obtained by a network interface card performance testing tool.

3. The scheduling method for virtual processors according to claim 1, wherein the specified domain is Domain0 in the virtual machine monitor.

4. The scheduling method for virtual processors according to claim 1, wherein the virtual machine has an SR-IOV virtual function.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

(1) FIG. 1 is a schematic diagram of a Non-Uniform Memory Access (NUMA) architecture on a physical platform;

(2) FIG. 2 is a schematic diagram of the operation of the network interface card with the SR-IOV function;

(3) FIG. 3 is a flow diagram of a scheduling method for virtual processors based on the affinity of NUMA high-performance network buffer resources of the present invention; and

(4) FIG. 4 is a schematic diagram of getting the distribution of the buffer of the network interface card on each NUMA node, in the scheduling method for virtual processors of FIG. 3.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

(5) Below, in conjunction with the accompanying drawings, an embodiment of the present invention will be further described. The embodiment is implemented on the premise of the technical solution of the present invention and provides a detailed implementation and specific operations, but the scope of the present invention is not limited to the following embodiment.

(6) FIG. 3 illustrates a flow diagram of a scheduling method for virtual processors based on the affinity of NUMA high-performance network buffer resources of the present invention. Referring to FIG. 3, the scheduling method includes the following steps (a minimal end-to-end sketch of these steps follows this list):
Step S1: in a NUMA architecture, when a network interface card of a virtual machine is started, getting the distribution of the buffer of the network interface card on each NUMA node;
Step S2: getting affinities of each NUMA node for the buffer of the network interface card on the basis of an affinity relationship between each NUMA node;
Step S3: determining a target NUMA node in combination with the distribution of the buffer of the network interface card on each NUMA node and the affinities of each NUMA node for the buffer of the network interface card;
Step S4: scheduling the virtual processor to a CPU on the target NUMA node.
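For orientation, the following is a minimal, self-contained C++ sketch of steps S1 through S4. The helper names, the two-node distance matrix, and the dummy buffer sizes are illustrative assumptions introduced for this description, not part of the claimed implementation; the actual core code is given further below.

    #include <algorithm>
    #include <cstdio>
    #include <vector>

    /* Illustrative sketch of steps S1-S4. kDist and the buffer sizes are
       assumed example values, not data from the patent. */
    static const int kNodes = 2;
    static const int kDist[kNodes][kNodes] = { {10, 21}, {21, 10} };

    /* S1 (stub): per-node NIC buffer sizes, as derived from the DMA-address
       analysis described in the text. */
    static std::vector<long> get_buffer_distribution() { return {4096, 0}; }

    /* S4 (stub): in practice the pinning is done through the toolstack. */
    static void pin_vcpus_to_node(int node) {
        std::printf("pin all VCPUs to node %d\n", node);
    }

    int main() {
        std::vector<long> buf = get_buffer_distribution();    /* S1 */
        std::vector<long> affi(kNodes, 0);
        for (int i = 0; i < kNodes; ++i)                      /* S2: weight buffer by distance */
            for (int k = 0; k < kNodes; ++k)
                affi[i] += buf[k] * kDist[i][k];
        int target = static_cast<int>(
            std::min_element(affi.begin(), affi.end()) - affi.begin()); /* S3: smallest sum wins */
        pin_vcpus_to_node(target);                            /* S4 */
        return 0;
    }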

(7) It is noted that the scheduling method for virtual processors provided by the embodiment of the present invention is applied to a virtual machine with the SR-IOV virtual function. In the NUMA architecture, the buffer of the network interface card of the virtual machine may be distributed across multiple NUMA nodes; this uncertainty in the buffer distribution affects the speed at which the virtual machine processes network packets.

(8) Specifically, in the present embodiment, whenever the virtual machine with the SR-IOV virtual function is started and enables the buffer of the network interface card, the buffer is used to receive network packets. When a driver of a virtual function of the virtual machine is started, the virtual address at which Direct Memory Access (DMA) allocates the buffer in the driver is detected, the size of the buffer of the virtual function is obtained by a network interface card performance testing tool (such as Ethtool), and the virtual address is sent to a specified domain 104, wherein the specified domain 104 is Domain0 in the virtual machine monitor (such as Xen).
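As a point of reference, on a typical Linux host the command ethtool -g <interface> reports the current and maximum RX/TX ring sizes of a network interface; whether the embodiment uses this exact invocation is not specified in the text, so it is given here only as a common example.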

(9) Then, the specified domain 104 makes a request, by a hypercall, to a virtual machine monitor (VMM) 108 for the physical address corresponding to the virtual address, and the distribution of the buffer of the network interface card on each NUMA node 112 is determined on the basis of the analysis of the distribution of the buffer on the NUMA nodes 112 corresponding to the physical addresses.

(10) The core code used to determine the distribution of the buffer of the network interface card on each NUMA node 112 is as follows. First, the following variables are added to store the virtual machine parameters of the calling interface:

(11)

    struct p2m_domain *myp2m[10];  /* recorded p2m of each observed domain */
    p2m_type_t *myt[10];
    p2m_access_t *mya[10];
    p2m_query_t myq[10];
    unsigned int *mypo[10];
    int count = 0;                 /* number of domains recorded so far */

(12) The parameter information is obtained by adding detection code to the following initialization call:

(13)

    mfn_t __get_gfn_type_access(struct p2m_domain *p2m, unsigned long gfn,
                                p2m_type_t *t, p2m_access_t *a, p2m_query_t q,
                                unsigned int *page_order, bool_t locked)
    {
        int dom_count;
        ...
        /* Look for a previously recorded entry for this domain. */
        for (dom_count = 0; dom_count < count; dom_count++) {
            if (p2m->domain->domain_id == myp2m[dom_count]->domain->domain_id)
                break;
        }
        if (dom_count == count) {
            /* First sighting of this domain: record its p2m and lookup parameters. */
            myp2m[count] = p2m;
            myt[count] = t;
            mya[count] = a;
            myq[count] = q;
            mypo[count] = page_order;
            count++;
        }
        ...
    }

(14) The physical page frames corresponding to the virtual pages of the virtual machine are then obtained by using the new function, unsigned long int do_print_mfn(unsigned long, int), as a hypercall into the VMM:

(15)

    unsigned long int do_print_mfn(unsigned long gfn, int domid)
    {
        int i;
        mfn_t mfn;
        /* Find the recorded p2m of the requested domain. */
        for (i = 0; i < count; i++)
            if (myp2m[i]->domain->domain_id == domid)
                break;
        if (i == count) {
            printk("Not found %d\n", count);
            return 0;
        }
        /* Translate the guest frame number (gfn) to a machine frame number (mfn). */
        mfn = myp2m[i]->get_entry(myp2m[i], gfn, myt[i], mya[i], myq[i], mypo[i]);
        return mfn;
    }
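For completeness, a user-space program in the specified domain could invoke such a hypercall through Xen's libxencall interface (available in Xen 4.7 and later). The following minimal sketch assumes the new handler was registered under some unused hypercall number; the placeholder constant below is an assumption, as is the exact return width of the call, and this is not code from the patent.

    /* Invoke the new do_print_mfn hypercall from Domain0 via libxencall. */
    extern "C" {
    #include <xencall.h>
    }
    #include <cstdint>
    #include <cstdio>

    #define HYPERCALL_PRINT_MFN_NR 41  /* assumption: replace with the number the handler was registered under */

    int main() {
        xencall_handle *xcall = xencall_open(NULL, 0);
        if (!xcall) return 1;
        uint64_t gfn = 0x1234;  /* guest frame number reported by the VM driver */
        uint64_t domid = 1;     /* domain id of the SR-IOV guest */
        long mfn = xencall2(xcall, HYPERCALL_PRINT_MFN_NR, gfn, domid);
        std::printf("gfn %#lx of dom%lu -> mfn %ld\n",
                    (unsigned long)gfn, (unsigned long)domid, mfn);
        xencall_close(xcall);
        return 0;
    }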

(16) Then, the affinities of each NUMA node 112 for the buffer of the network interface card are obtained on the basis of the affinity relationship between the NUMA nodes 112.

(17) Specifically, in the NUMA architecture, the affinities of each NUMA node 112 for the buffer of the network interface card are determined according to the information of distances between the NUMA nodes 112: the shorter the distance between two NUMA nodes 112, the higher the affinity between them. Therefore, in the present embodiment, the information of distances between the NUMA nodes 112 is used to determine the affinities of each NUMA node 112 for the buffer of the network interface card.
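As one common way to obtain this distance information on a Linux host, the kernel exposes the ACPI SLIT distances under /sys/devices/system/node/; the following sketch reads the distance row of each node. This sysfs layout is a property of Linux, and the patent does not state that the embodiment reads the distances this way.

    /* Read NUMA node distances from sysfs (Linux exposes the ACPI SLIT here). */
    #include <fstream>
    #include <iostream>
    #include <sstream>
    #include <string>
    #include <vector>

    int main() {
        std::vector<std::vector<int>> dist;
        for (int node = 0; ; ++node) {
            std::ifstream f("/sys/devices/system/node/node" +
                            std::to_string(node) + "/distance");
            if (!f) break;                      /* no more nodes */
            std::string line;
            std::getline(f, line);
            std::istringstream iss(line);
            std::vector<int> row;
            int d;
            while (iss >> d) row.push_back(d);  /* one distance per node */
            dist.push_back(row);
        }
        for (size_t i = 0; i < dist.size(); ++i)
            for (size_t k = 0; k < dist[i].size(); ++k)
                std::cout << "dist(" << i << "," << k << ") = " << dist[i][k] << "\n";
        return 0;
    }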

(18) Then, a target NUMA node 112 is determined in combination with the distribution of the buffer of the network interface card on each NUMA node 112 and the affinities of each NUMA node 112 for the buffer of the network interface card. In practice, the CPU load balance on each NUMA node 112 is also taken into consideration, so that VCPUs are scheduled across multiple CPU cores on the target NUMA node 112; this retains the original load-scheduling method and reduces the effect on the system.

(19) The core code used to determine the target NUMA node 112 is as follows. The following variables and main computation are added:

(20)

    int Numa_Node_dis[Max_size + 1][Max_size + 1];  /* distances between NUMA nodes */
    int Numa_Info[Max_size + 1];                    /* buffer size found on each node */
    int Numa_Node_Affi[Max_size + 1];               /* weighted affinity of each node */

    int main()
    {
        ...
        init_Numa_dis();
        memset(Numa_Node_Affi, 0, sizeof(Numa_Node_Affi));
        /* Weight the buffer on every node by its distance to the candidate node. */
        for (int Numa_Node = 0; Numa_Node < Max_size; Numa_Node++) {
            for (int Numa_Else = 0; Numa_Else < Max_size; Numa_Else++) {
                Numa_Node_Affi[Numa_Node] +=
                    Numa_Info[Numa_Else] * Numa_Node_dis[Numa_Node][Numa_Else];
            }
        }
        /* The smallest weighted distance is the best affinity. */
        for (int Numa_Node = 0; Numa_Node < Max_size; Numa_Node++) {
            if (Numa_Node_Affi[Numa_Node] < Affinity_Min)
                Affinity_Min = Numa_Node_Affi[Numa_Node];
        }
        /* Collect all nodes that achieve the optimal affinity. */
        for (int Numa_Node = 0; Numa_Node < Max_size; Numa_Node++) {
            if (Numa_Node_Affi[Numa_Node] == Affinity_Min)
                Opt_Affinity.push_back(Numa_Node);
        }
        ...
    }
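The excerpt above collects every node whose affinity equals the minimum in Opt_Affinity; as described earlier, the CPU load balance is then taken into account. One hypothetical way to break ties, sketched below with an assumed get_node_cpu_load() helper (the patent does not give this code), is to pick the least-loaded of the optimal-affinity nodes:

    #include <vector>

    /* Assumed helper: average CPU utilization of a node, e.g. derived from
       toolstack or /proc statistics. The constant body is only a stub. */
    static double get_node_cpu_load(int node) { (void)node; return 0.5; }

    /* Hypothetical tie-breaker: among the optimal-affinity nodes, choose the
       one whose CPUs currently carry the least load. */
    static int pick_least_loaded(const std::vector<int> &Opt_Affinity) {
        int best = Opt_Affinity.front();
        for (int node : Opt_Affinity)
            if (get_node_cpu_load(node) < get_node_cpu_load(best))
                best = node;
        return best;
    }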

(21) Here, the affinity formula can be summarized as follows:

(22)

    Numa_Node_Affi[i] = \sum_{k=0}^{n} Numa_Info[k] \times Numa_Node_dis[i][k]

wherein i represents the i-th NUMA node 112, with i counting from 0; k ranges from 0 to n, and (n+1) represents the total number of NUMA nodes 112; Numa_Node_Affi[i] is the affinity of the i-th NUMA node 112 for the buffer of the network interface card; Numa_Info[k] is the size of the buffer on the k-th node; and Numa_Node_dis[i][k] represents the distance between the processor on the i-th node and the memory on the k-th node.
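For example, with two NUMA nodes 112, the typical distance matrix Numa_Node_dis = {{10, 21}, {21, 10}}, and 6 units of buffer found on node 0 and 2 units on node 1 (all values illustrative), the formula gives Numa_Node_Affi[0] = 6*10 + 2*21 = 102 and Numa_Node_Affi[1] = 6*21 + 2*10 = 146; since a smaller weighted distance means a better affinity, node 0 is selected as the target NUMA node 112.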

(23) Finally, the virtual processor is scheduled to the CPU on the target NUMA node 112, wherein the CPU on the target NUMA node 112 is a single-core CPU or a multi-core CPU. Then, the system continues to monitor the running condition of the network interface card of the virtual machine.

(24) The core code used to schedule the virtual processor to the CPU on the target NUMA node 112 is as follows:

(25)

    string Des_Range[Max_size];  /* CPU range belonging to each NUMA node */

    int main(int argc, char *argv[])
    {
        ...
        /* Read the NUMA-node-to-CPU mapping. */
        freopen("Numa_Map", "r", stdin);
        for (int Numa_Node = 0; Numa_Node < Max_size; Numa_Node++)
            cin >> Des_Range[Numa_Node];
        /* Read the target node selected in the previous step. */
        freopen("Numa_Opt", "r", stdin);
        int Des_Node;
        cin >> Des_Node;
        string Dom = argv[1];
        /* Build and issue the command that pins all VCPUs of the domain
           to the CPUs of the target node. */
        string Command = vcpu_migrate + Dom + " all " + Des_Range[Des_Node];
        const char *arg = Command.c_str();
        system(arg);
        ...
    }
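Note that vcpu_migrate is not defined in the excerpt above; it evidently holds the prefix of the toolstack pinning command, to which the domain name, the keyword all, and the CPU range are appended. On a Xen platform this would typically be a string such as "xl vcpu-pin " (the command xl vcpu-pin <domain> all <cpus> pins every VCPU of the domain to the given CPU list), though the exact command string is an assumption here.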

(26) In summary, the present technical solution provides at least the following beneficial technical effects. By getting the distribution of the buffer of the network interface card on each NUMA node 112 and the affinities of each NUMA node 112 for that buffer, an optimal scheduling of the virtual processors is determined (that is, the target NUMA node 112 is determined), so that each virtual processor runs in a state in which its affinity with the buffer of the network interface card is optimal, which improves the processing speed of virtual network packets. Further, because the determination is based on the analysis of the buffer of the network interface card of the current virtual machine, an optimal affinity between the virtual processor and the target memory is ensured, so that the virtual machine fully utilizes the features of the NUMA architecture. Further, during the determination of the target NUMA node 112, the CPU load balance on each NUMA node 112 is also taken into consideration, so that VCPUs are scheduled across multiple CPU cores on the target NUMA node 112; this retains the original load-scheduling method and reduces the effect on the system. Further, the precise configuration of VCPU resources on the Xen platform is controlled effectively, thereby ensuring that the VCPUs have the optimal processing speed for network interface card packets of the virtual machine with the SR-IOV virtual function.

(27) The invention has been exemplified above with reference to specific embodiments. However, it should be understood that a multitude of modifications and variations can be made by a person of ordinary skill in the art based on the conception of the present invention. Therefore, any technical solutions obtained by a person skilled in the art through logical analysis, deduction, or limited experimentation based on the conception of the present invention fall within the scope of the invention as specified in the claims.