NON-INTRUSIVE FINE-GRAINED POWER MONITORING OF DATACENTERS
20170322241 · 2017-11-09
Assignee
Inventors
CPC classification
Y04S40/20
GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
H02J13/00
ELECTRICITY
H02J3/00
ELECTRICITY
Y02B90/20
GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
H02J2203/20
ELECTRICITY
Y04S20/12
GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
Y02E60/00
GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
International classification
Abstract
Technologies for performing non-intrusive fine-grained power monitoring of a datacenter are provided. Hardware component state information for servers in the datacenter is collected, along with aggregate power consumption measurements for the datacenter. The servers are grouped into multiple virtual homogenous server clusters (VHCs) based on characteristics of the servers. A power model is constructed comprising multiple power mapping functions associated with the multiple VHCs. Component state information of a particular server can then be analyzed, along with a corresponding aggregate power consumption measurement, using the constructed power model to determine an approximate power consumption of the particular server. The approximate power consumption of the server can then be displayed and/or provided to one or more power management applications.
Claims
1. A method implemented by one or more computing devices, the method comprising: grouping a plurality of servers in a datacenter into multiple virtual homogenous server clusters (VHCs); collecting component state information of the plurality of servers in the datacenter, wherein the component state information of the servers is associated with the VHCs in which the servers are grouped; creating a power model comprising multiple power mapping functions associated with the multiple VHCs, wherein a power mapping function is created using the component state information collected for an associated VHC; determining an aggregate power consumption of the datacenter; determining an approximate real-time power consumption of one or more servers in a VHC, of the multiple VHCs, using the power mapping function associated with the VHC, the aggregate power consumption of the datacenter, and current component state information of the one or more servers; and outputting the approximate real-time power consumption of the one or more servers.
2. The method of claim 1, wherein the VHCs comprise groups of servers with the same or similar types of hardware components.
3. The method of claim 1, wherein determining an approximate real-time power consumption of one or more servers in a VHC, of the multiple VHCs, using the power mapping function comprises correlating states of hardware components of a server to an overall power consumption of the server.
4. The method of claim 3, wherein the correlating comprises determining a linear relationship between the states of the hardware components of the server and the overall power consumption of the server.
5. The method of claim 1, wherein at least one of the power mapping functions receives an input of a server component state vector at a particular time instant and produces an estimated power consumption of the server at the particular time instant based on the component state vector and the aggregate power consumption of the datacenter.
6. The method of claim 1, wherein at least one of the power mapping functions comprises a constant term that indicates an estimated idle power consumption of a server in a VHC associated with the power mapping function, and multiple variable terms that indicate estimated power consumptions of the server in the VHC when running multiple workloads.
7. The method of claim 6, wherein the constant term is determined by: measuring power changes when a plurality of idle servers in the VHC are turned off and on multiple times; and performing a least square minimization analysis using the multiple measured power changes to determine the constant term that indicates an estimated idle power consumption of a server in the VHC.
8. The method of claim 6, wherein the multiple variable terms comprise coefficients determined by: determining total power consumptions for the datacenter at multiple points in time; determining component states of a plurality of servers at the same multiple points in time while the plurality of servers run the multiple workloads; and performing a least square minimization analysis using the multiple total power consumptions of the datacenter and the corresponding component states of the plurality of servers to determine the coefficients.
9. The method of claim 6, further comprising: training the power mapping function by analyzing a training dataset and updating the constant term and the multiple variable terms based on the analysis.
10. The method of claim 9, wherein the training dataset comprises: multiple collected total power consumption values for the datacenter; and multiple collected component states of the plurality of servers.
11. The method of claim 9, wherein the training dataset comprises: multiple collected total power consumption values for the datacenter; and medians of multiple collected component states of the plurality of servers.
12. The method of claim 1, wherein the component state information of the plurality of servers comprises: index values associated with utilizations of hardware components of the plurality of servers, the utilizations of the hardware components comprising: central processing unit utilizations, graphical processing unit utilizations, memory utilizations, storage device utilizations, and network interface card utilizations.
13. The method of claim 12, wherein the utilizations of the hardware components further comprise hardware performance monitoring counters.
14. A system comprising: a datacenter comprising a main power supply and a plurality of servers, wherein the plurality of servers comprise multiple hardware components; a datacenter power data collector connected to the main power supply of the datacenter and configured to determine an aggregate power consumption of the plurality of servers in the datacenter; a component state collector connected to the plurality of servers and configured to retrieve component state information for the multiple hardware components from the plurality of servers; a power estimator configured to: receive and analyze data from the datacenter power data collector and the component state collector, update a power model comprising one or more power mapping functions based on the analysis of the data from the datacenter power data collector and the component state collector, and use the one or more power mapping functions to determine an approximate power consumption of one or more of the plurality of servers; and a display device connected to the power estimator and configured to display the approximate power consumption of the one or more of the plurality of servers determined by the power estimator.
15. The system of claim 14, wherein the plurality of servers are organized into multiple virtual homogenous server clusters (VHCs) based on the hardware components installed in the plurality of servers; and the power estimator is further configured to: associate the one or more power mapping functions with the multiple VHCs, identify a VHC, of the multiple VHCs, to which the one or more of the plurality of servers belongs, and use a power mapping function associated with the identified VHC to determine the approximate power consumption of the one or more of the plurality of servers.
16. The system of claim 14, wherein the component state collector, the power estimator, and the datacenter power data collector comprise one or more servers in the datacenter.
17. The system of claim 14, wherein: the datacenter power data collector is configured to determine the aggregate power consumption of the plurality of servers in the datacenter by reading an embedded meter or vendor-provided interface linked to the main power supply of the datacenter.
18. The system of claim 14, wherein the main power supply comprises an uninterruptible power supply and power distribution units that energize the datacenter.
19. One or more computer-readable media storing computer-executable instructions for causing one or more processors, when programmed thereby, to perform operations comprising: identifying multiple virtual homogenous clusters of servers (VHCs) in a datacenter; creating a training dataset by: collecting component state information comprising hardware component utilization metrics for the servers in the datacenter at multiple times, collecting aggregate power consumption readings for the datacenter at multiple times by accessing an interface to a main power supply of the datacenter, and associating the collected component state information with the collected aggregate power consumption readings based on corresponding collection times; using the training dataset to create multiple power mapping functions associated with the multiple VHCs; receiving a component state vector for a server in the datacenter comprising hardware component utilization metrics for the server at a particular time; determining an aggregate power consumption for the datacenter at the particular time by accessing the interface to the main power supply of the datacenter; identifying a VHC, of the multiple VHCs, to which the server belongs; determining an estimated power consumption of the server using a power mapping function, of the multiple power mapping functions, associated with the identified VHC, the received component state vector, and the determined aggregate power consumption for the datacenter; and providing the estimated power consumption of the server to one or more datacenter power management applications.
20. The one or more computer-readable media of claim 19, wherein the operations further comprise: updating the training dataset periodically with additional collected component state information and additional aggregate power consumption readings; and updating the power mapping functions using the updated training dataset.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION
[0037] As used herein, the term “fine-grained power monitoring” refers to estimating and/or detecting power consumption in a datacenter at the server-level and/or at the server rack-level.
[0038] As used herein, the term “aggregate power consumption” refers to the total electrical power consumed or used by the datacenter as a result of energizing a collection of servers or computing devices. An aggregate power consumption, or total power consumption, of a datacenter can be, for instance, read or collected from a main power supply of the datacenter, such as an uninterruptible power supply (UPS) or a power distribution unit (PDU) of the datacenter.
[0039] As used herein, the term “component state information” refers to data relating to the utilization of hardware components of a server. Hardware components of a server can include a central processing unit (CPU), a graphical processing unit (GPU), a memory, a storage device (such as a hard disk drive, solid state drive, or the like), and/or a network interface card (NIC). Component state information can include, but is not limited to, CPU utilization, GPU utilization, memory utilization, storage device utilization (such as disk or NAND/NOR reading and writing), network traffic (such as receiving and sending data), and other relevant hardware performance monitoring counters of the server or computing device. A “major hardware component,” as used herein, refers to a component of a server or computing device that consumes a significant amount of electrical energy. In some embodiments only utilization of major hardware components is tracked as part of the component state information.
[0040] A “training dataset,” as used herein, can refer to power data collected during a set time interval for use in training one or more power model functions (PMFs). The power data can comprise a set of values associated with an aggregate power consumption of the datacenter and corresponding component state information that are collected at certain time instants within the set time interval.
[0041] Technologies described herein can be used for non-intrusive fine-grained power monitoring of datacenters. In some embodiments, technologies described herein can be used for real-time estimation of power consumption of a server or computing device in a datacenter by analyzing the aggregate power consumption of the entire datacenter and the utilization of major hardware components (i.e., component state information) of servers or computing devices running within the datacenter.
[0043] When implemented, the technologies described herein do not require any manual measurement of power using hardware tools when training the power model 220 in the initial stages of its use. In short, the technologies described herein involve a non-intrusive power disaggregation (NIPD) approach to estimating power consumption at the server level.
[0045] At 304, component state information of the plurality of servers is collected and associated with the VHCs. For example, for each server for which component state information is collected, a VHC to which the server belongs can be identified and the component state information of the server can be associated with the identified VHC.
[0046] At 306, a power model is created using the component state information associated with the VHCs. In some embodiments, the power model comprises multiple power mapping functions corresponding to the VHCs. For example, different PMFs can be associated with the VHCs. Component state information associated with a particular VHC can be associated with a PMF corresponding to the VHC.
[0047] At 308, an aggregate power consumption of the datacenter is determined. At 310, an approximate real-time power consumption of one or more of the servers in one of the VHCs is determined using the power model. For example, component state information for one or more servers in the datacenter for a particular time can be received and analyzed using the power model to determine an approximate power consumption of the one or more servers as of the particular time. In embodiments where the power model comprises multiple PMFs, a VHC for the one or more servers can be identified and a PMF associated with the VHC can be used to analyze the received component state information and to produce the approximate real-time power consumption.
[0048] At 312, the approximate real-time power consumption of the one or more servers is output. The power consumption can be, for example, displayed using a display device. Alternatively, the power consumption can be transmitted to a server over a computer network. For example, in some embodiments the component state information for the one or more servers can be received from a computing device via the computer network. In such embodiments, the approximate real-time power consumption can be transmitted back to the computing device over the computer network.
[0050] The power estimator 450, the power model trainer 452, the datacenter power collector 430, the component state collector 440, and the display device 460 can comprise one or more computing devices. In some embodiments, the power estimator 450, the power model trainer 452, the datacenter power collector 430, the component state collector 440, and the display device 460 are implemented using and/or integrated into existing computing hardware of the datacenter. Although they are described independently, these components may be located collectively in one server or distributed across multiple servers, depending on application requirements.
[0051] These implementations are intended as illustrative examples only and are not intended to be limiting.
[0053] A PMF can comprise a constant term and/or a plurality of variable terms. The constant term can indicate an idle or static power consumption of the server or group of servers. The plurality of variable terms can indicate a dynamic power consumption of the server or group of servers when the server or group of servers is running a specific workload. The constant term can be determined by measuring aggregate power changes upon turning one or more groups of idle servers off and on, and subsequently performing a least square minimization analysis using the aggregate power changes and the numbers of idle servers that were turned off and on as inputs. The variable terms can comprise coefficient values that are determined by measuring the aggregate power consumption of the datacenter at different time instants and the component states of servers in the datacenter at the corresponding time instants, and subsequently performing a least square minimization analysis using the aggregate power consumption of the datacenter and the associated component states as inputs.
[0054] In a different or further embodiment, power model trainer 452 periodically updates the PMFs with updated variable term coefficient values and constant terms upon analysis of training datasets collected through selective means. These training datasets can include aggregate power consumption of the datacenter and the component state information of the plurality of servers in the datacenter. In some cases, calculated medians are used in these training datasets in order to alleviate the effect of outliers and make the PMF training robust.
[0055] In some embodiments, datacenter power collector 430 is an interface associated with, or built-in to, the main power supply 410 that energizes the datacenter. The main power supply 410 can comprise a UPS and/or one or more power distribution units (PDUs). In some embodiments, the datacenter power collector 430 can be a vendor-developed interface for the UPS and/or the one or more PDUs. The interface can be used to collect measurement readings for the aggregate power consumption of the datacenter. In some further embodiments, the interface can also be used to display the collected measurement readings.
[0056] In at least one embodiment, component state collector 440 collects data relating to the state or utilization of major hardware components of the servers 420 running in the datacenter. These data can include index values for CPU utilization, GPU utilization, memory utilization, disk reading and/or writing, network traffic (e.g., receiving and sending data), and/or other relevant hardware performance monitoring counters. For example, the state or utilization of more than one major hardware component can be collected for each of the servers 420 in order to improve the accuracy of the estimation of server-level power consumption. The component state collector 440 can use one or more resource statistic tools, such as dstat, vmstat, iostat, mpstat and netstat, to gather various component states of a server or a plurality of servers.
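As a concrete illustration (not part of the patent disclosure), a utilization index of the kind described above can be derived from cumulative counters. The sketch below assumes the field order of a Linux /proc/stat "cpu" line (user, nice, system, idle, iowait, irq, softirq), which is an assumption about the environment; tools such as dstat, vmstat, and mpstat report similar figures.

```python
# Sketch: derive a CPU-utilization index from two snapshots of cumulative
# /proc/stat-style jiffy counters (user, nice, system, idle, iowait, irq,
# softirq). The index is the fraction of non-idle time between snapshots.
def cpu_utilization(prev, curr):
    """Fraction of non-idle CPU time between two counter snapshots."""
    idle = (curr[3] + curr[4]) - (prev[3] + prev[4])   # idle + iowait delta
    total = sum(curr) - sum(prev)                      # total jiffies elapsed
    return 1.0 - idle / total if total else 0.0

prev = (100, 0, 50, 800, 50, 0, 0)
curr = (200, 0, 100, 850, 50, 0, 0)
util = cpu_utilization(prev, curr)   # 150 busy jiffies out of 200 -> 0.75
```

In practice one such index per major hardware component (CPU, memory, disk, network) would be collected per sampling instant to form the component state vector.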
[0057] An example method for fine-grained power monitoring comprises: collecting an aggregate power consumption of a datacenter from a main power supply; collecting component state information of a plurality of servers in the datacenter; grouping the plurality of servers in the datacenter into multiple VHCs; constructing a power model that uses at least one power mapping function associated with every VHC; analyzing the aggregate power consumption of the datacenter and the component state information of a plurality of servers using the constructed power model; and outputting an approximate real-time power consumption of one or more servers of the plurality of servers in the datacenter.
[0061] Servers 1050 can be grouped into virtual homogenous clusters of servers (VHCs) 1040. A VHC comprises a group of servers with the same or similar types of major hardware components. In some embodiments, one PMF 1014 is created for and associated with each VHC of VHCs 1040. In such embodiments, every server within the same VHC uses the same PMF. A PMF correlates the state or utilization of multiple major hardware components of a server with an overall power consumption of the server. Since the datacenter 1060 can have multiple VHCs 1040, multiple PMFs 1014 can be needed to establish fine-grained power monitoring of datacenter 1060.
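The grouping step can be sketched as follows (not part of the patent disclosure); the hardware-profile fields used as the grouping key (cpu_model, gpu_model, mem_gb, storage, nic) are hypothetical, and a real deployment would read them from an inventory database or DMI/IPMI queries.

```python
from collections import defaultdict

# Sketch: group servers into VHCs by keying on their major hardware
# components, so servers with identical components share one VHC (and,
# later, one PMF).
def hardware_profile(server):
    """Key a server by its major hardware components."""
    return (server["cpu_model"], server["gpu_model"],
            server["mem_gb"], server["storage"], server["nic"])

def group_into_vhcs(servers):
    """Return a mapping from hardware profile to the hosts in that VHC."""
    vhcs = defaultdict(list)
    for srv in servers:
        vhcs[hardware_profile(srv)].append(srv["host"])
    return dict(vhcs)

servers = [
    {"host": "node1", "cpu_model": "X", "gpu_model": None, "mem_gb": 64, "storage": "ssd", "nic": "10G"},
    {"host": "node2", "cpu_model": "X", "gpu_model": None, "mem_gb": 64, "storage": "ssd", "nic": "10G"},
    {"host": "admin1", "cpu_model": "Y", "gpu_model": None, "mem_gb": 32, "storage": "hdd", "nic": "1G"},
]
vhcs = group_into_vhcs(servers)   # two VHCs: {node1, node2} and {admin1}
```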
[0062] A PMF can be expressed as a linear or a non-linear relationship between the states of the major components of a server and a power consumption of the server. In some cases, the linear relationship is preferred due to its lower computational complexity as compared to the non-linear relationship. In some embodiments, the PMFs 1014 are continuously trained with power data (e.g., training datasets) from the running datacenter 1060. Online training of the PMFs 1014 can use training datasets collected through selective means. For example, the component state collector can provide per-node component states 1032 to power estimator 1010. Datacenter power collector 1020 can provide datacenter power measurements 1022 to power estimator 1010. Power estimator 1010 can use the per-node component states 1032 and the datacenter power measurements 1022 to train the PMFs 1014 that are part of power model 1012 and that are associated with VHCs 1040. The power estimator 1010 can use power model 1012, comprising PMFs 1014, to produce per-node power estimates 1016. Display device 1070 can receive per-node power estimates 1016 from power estimator 1010 and display the per-node power estimates to a user.
[0063] In some cases, such online training of the PMFs 1014 and selective collection of training datasets can improve the precision of power disaggregation and support running fine-grained power monitoring in real time.
[0064] The following examples elaborate on governing principles, implementations, and results of fine-grained non-intrusive power monitoring.
Example 1—Model Designs for NIPD
[0065] In this example, the problem of NIPD for fine-grained power monitoring in datacenters is formally defined, and example solutions for training and updating power models used in NIPD are provided. Table 1 provides a summary of notations used herein:
TABLE 1. Summary of notations

  Notation      Description
  m             number of servers
  n             number of component states
  r             number of virtual homogeneous clusters (VHCs)
  y             aggregate power vector of the datacenter
  y_j^(i)       power consumption of the i-th server at time j
  s_j^(i)       state vector of the i-th server at time j
  μ_{n,j}^(i)   the n-th component state of server i at time j
  d_j^(r)       number of servers turned off/on in the r-th VHC at time j
  w^(k)         coefficient vector of the PMF of the k-th VHC
  w̃             coefficient vector of the PMFs of all VHCs
  T             transpose operation, when used as a superscript
[0066] In a datacenter consisting of m servers, an aggregate power consumption of the m servers sampled in a time interval [1, t] can be denoted by an aggregate power vector as:
y := [y_1, y_2, …, y_t]^T. (Equation 1)
[0067] A power consumption of the i-th (1 ≤ i ≤ m) server in the same time interval, which is unknown, can be denoted by an individual power vector as:
y^(i) := [y_1^(i), y_2^(i), …, y_t^(i)]^T. (Equation 2)
[0068] State information of components collected from each server can be recorded in a state vector s containing n scalars (μ_1, μ_2, …, μ_n), wherein n is the number of components whose information is available:
s := [μ_1, μ_2, …, μ_n]. (Equation 3)
[0069] Accordingly, the state vector of the i-th server at time j (1 ≤ j ≤ t) can be represented as:
s_j^(i) := [μ_{1,j}^(i), μ_{2,j}^(i), …, μ_{n,j}^(i)], (Equation 4)
[0070] in which μ_{k,j}^(i) represents the value of the k-th (1 ≤ k ≤ n) component state in the i-th server at time instant j.
[0071] During a time interval [1, t], given the aggregate power vector y of m servers and each server's state vectors s_j^(i), 1 ≤ i ≤ m, 1 ≤ j ≤ t, non-intrusive power disaggregation (NIPD) can be performed by estimating the power consumption of each individual server at each time instant, i.e., y_j^(i), 1 ≤ i ≤ m, 1 ≤ j ≤ t.
[0072] To perform NIPD, the servers in the datacenter are first logically divided into multiple VHCs, such that, for each VHC, the major hardware components (e.g., CPU, GPU, memory, storage device(s), and/or NIC) of the servers in the VHC are the same or similar (e.g., same or similar makes and models, capacities, performance characteristics, and/or power consumption characteristics). Thus, if a datacenter is composed of r (r ≥ 1) types of servers, the servers can be divided into r VHCs.
[0073] For servers in the same VHC, a power mapping function (PMF) can be defined as f: R^n → R, such that the input of a server's state vector at any time instant yields the server's power consumption at the corresponding time instant; i.e., for the i-th server's state vector at time j, s_j^(i), f(s_j^(i)) approximates y_j^(i).
[0074] A linear model can capture the relationship between the power consumption of a server and its component states. The computational complexity of the linear model can be much lower than that of non-linear models. Therefore, in some cases it can be preferable to first model the PMF as a linear function, i.e., to initially model a server's power consumption as the linear combination of the server's component states. For servers in a same VHC, with the state vector s in Equation 3, a PMF for the VHC can be defined as:
f(s) = [1, s]w, (Equation 5)
[0075] wherein w is a coefficient vector denoted as:
w = [w_0, w_1, w_2, …, w_n]^T. (Equation 6)
[0076] Some previous methods try to build a power model for each major component in a server, which is then used to estimate the power consumption of each component in the server. In those methods, the server's power consumption is approximated by the aggregate of the estimated power consumptions of its major components. Contrastingly, the PMFs described herein can be regarded as a special type of power model that differs from the ones used in the previous methods. For example, a PMF, as described herein, indicates a way of mapping a server's major components' states to the server's overall power consumption. The power of uncovered components, such as fans within the server enclosure, can be properly absorbed, in the sense that f(s_j^(i)) best approximates y_j^(i) using only the components modeled in the PMF. Hence, the power consumption of each component modeled in a PMF is not necessarily the true value.
[0077] Moreover, the overall power consumption of a server f(s) can be broken down into two parts: idle power (or static power) and dynamic power. The former is a baseline power supplied to maintain a server system in an idle state, while the latter is the additional power consumed when running specific workloads on the server system. In the PMF coefficient vector w (Equation 6), w_0 is a constant term that models the idle power, and w_1, w_2, …, w_n are coefficients associated with the dynamic power of different components.
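The idle/dynamic split can be illustrated with a short sketch (not part of the patent disclosure); the coefficient values below are invented for illustration and do not come from any measured server.

```python
# Sketch: evaluate a linear PMF (Equations 5-6). The estimate is the idle
# term w0 plus a weighted sum of component-state values.
def pmf(state, w):
    """Evaluate f(s) = [1, s] . w for a component state vector s."""
    assert len(w) == len(state) + 1       # one coefficient per component, plus w0
    return w[0] + sum(wi * si for wi, si in zip(w[1:], state))

w = [150.0, 120.0, 30.0, 10.0, 5.0]        # [idle, CPU, memory, disk, NIC] (illustrative)
idle_power = pmf([0.0, 0.0, 0.0, 0.0], w)  # all components idle -> 150.0 W
busy_power = pmf([0.9, 0.5, 0.2, 0.1], w)  # 150 + 108 + 15 + 2 + 0.5 = 275.5 W
```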
[0078] The coefficients of a server's PMF can be estimated. For example, in a datacenter that comprises r VHCs, wherein m_k servers are in the k-th (1 ≤ k ≤ r) VHC, and wherein each server of the k-th VHC reports the states of n_k components, using the state vector s (Equation 3), the PMF for the k-th VHC can be expressed as:
f_k(s) = [1, s](w^(k))^T, (Equation 7)
[0079] wherein w^(k) is the coefficient vector of the PMF for the k-th VHC and can be denoted as:
w^(k) = [w_0^(k), w_1^(k), w_2^(k), …, w_{n_k}^(k)]. (Equation 8)
[0080] At an arbitrary time instant j, the aggregate power consumption of the k-th VHC can be expressed as ŷ_j^(k) = ŝ_j^(k)(w^(k))^T, wherein:
ŝ_j^(k) = [m_k, Σ_{i=1}^{m_k} s_j^(i)]. (Equation 9)
[0081] Meanwhile, an aggregate power consumption of the whole datacenter (or r VHCs) can be expressed as y_j = s̃_j w̃, wherein:
s̃_j = [ŝ_j^(1), ŝ_j^(2), …, ŝ_j^(r)], (Equation 10)
and
w̃ = [w^(1), w^(2), …, w^(r)]^T, (Equation 11)
[0082] in which ŝ_j^(k) and w^(k) are defined by Equations 9 and 8, respectively. Detailed transformations of the above equations are provided in Example 4 below.
[0083] With the measured aggregate power vector of the whole datacenter (Equation 1), the following least square estimation (LSE) problem can be formulated as the training model for the r PMFs of the datacenter:
min_{w̃} Σ_{j=1}^{t} (y_j − s̃_j w̃)^2. (Equation 12)
[0084] By solving the above problem, optimal coefficients for the r PMFs appearing in w̃ can be obtained, with which the power consumption of individual servers in different VHCs can be estimated by providing the corresponding state vectors.
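The LSE training step can be sketched numerically (not part of the patent disclosure). For simplicity the sketch assumes a single VHC of m servers, so each design-matrix row is [m, Σ_i s_j^(i)] per Equation 9; the data are synthetic, generated from assumed "true" coefficients so that recovery can be verified.

```python
import numpy as np

# Sketch: solve the LSE training model (Equation 12) for one VHC with
# synthetic, noiseless data.
rng = np.random.default_rng(42)
m, n, t = 4, 2, 60                        # servers, component states, samples
true_w = np.array([150.0, 120.0, 30.0])   # assumed [idle w0, CPU coeff, mem coeff]
per_server = rng.uniform(0.0, 1.0, size=(t, m, n))   # component states in [0, 1]
# Row j of S is [m, sum over the m servers of s_j^(i)] (Equation 9).
S = np.hstack([np.full((t, 1), float(m)), per_server.sum(axis=1)])
y = S @ true_w                            # aggregate power readings
w_hat, *_ = np.linalg.lstsq(S, y, rcond=None)        # least-squares fit
```

With noiseless data the least-squares solution recovers the generating coefficients exactly; real measurements would add noise, and w_hat would be the minimizer of the residual sum of squares.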
[0085] However, the above LSE training model can capture only one constant term in the coefficient vector, and cannot capture multiple constant terms. Consequently, if there is more than one VHC in the datacenter (r > 1), the resultant constant terms (i.e., w_0^(1), w_0^(2), …, w_0^(r)) from Equation 12 are not accurate. In other words, the idle power of servers in each VHC cannot be estimated by this model. Therefore, additional steps need to be performed to estimate the constant terms in the PMFs.
[0086] A widely used energy saving strategy in many datacenters is to shut down idle servers. The shut-down servers are turned on again when the working servers cannot satisfy the workload. Such a scenario provides an opportunity to estimate the constant terms in the PMFs.
[0087] For example, in a datacenter with r VHCs, at an arbitrary time instant j, if h servers are turned off (or on), and meanwhile a power decrease (or increase) in the aggregate power consumption of the whole datacenter, Δy (Δy > 0), is detected, then Δy can be captured and associated with the number h of servers in the off/on event. The condition Δy > 0 indicates that only the absolute value of the power change is considered.
[0089] If t off/on events have been captured in the datacenter consisting of r VHCs, then for the j-th (1 ≤ j ≤ t) off/on event, a counting vector can be defined as:
d_j := [d_j^(1), d_j^(2), …, d_j^(r)], (Equation 13)
[0090] wherein d_j^(k) stands for the number of turned-off (or turned-on) servers in the k-th VHC at time j, and the detected (mean) power decrease (or increase) is Δy_j. Then the following optimization problem can be formulated to find an optimal estimation of the constant terms w_0 = [w_0^(1), w_0^(2), …, w_0^(r)]^T:
min_{w_0} Σ_{j=1}^{t} (Δy_j − d_j w_0)^2. (Equation 14)
[0091] In the estimation of the constant terms of the PMFs, the optimization strategy using Equation 14 can be combined with a manual setup using information from the technical specifications of the servers. For servers that can be shut down, e.g., the computing nodes, it can be straightforward to gather off/on events and estimate the idle power via the optimization method. For other IT units that cannot be shut down during the operation of the datacenter, e.g., admin nodes, the server's technical specification can be used to ascertain its idle power consumption. Alternatively, idle power consumption can be approximated using information from other servers equipped with similar hardware components that can be shut down.
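The constant-term estimation from off/on events can be sketched numerically (not part of the patent disclosure); the event counts and idle-power values below are synthetic assumptions used only to show that the least-squares formulation of Equation 14 recovers the per-VHC idle powers.

```python
import numpy as np

# Sketch: estimate per-VHC idle power w0 from off/on events (Equation 14).
# Row j of D counts how many servers of each VHC switched off (or on) in
# event j; dy holds the observed absolute change in aggregate power.
true_w0 = np.array([150.0, 220.0])        # assumed idle watts for two VHCs
D = np.array([[2.0, 0.0],
              [0.0, 3.0],
              [1.0, 1.0],
              [3.0, 2.0],
              [0.0, 1.0]])                # five captured off/on events
dy = D @ true_w0                          # observed |Δy| per event (noiseless)
w0_hat, *_ = np.linalg.lstsq(D, dy, rcond=None)
```

Events that mix servers from several VHCs (like the third and fourth rows) are what make the per-VHC idle powers separable; if every event touched only one VHC, each column could still be fit independently.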
[0092] After the PMFs are created, they can be used to estimate the real-time power consumption of individual servers by referring to real-time component states from the corresponding servers.
[0093] However, to make PMFs more accurate, training datasets can be used to train the PMFs. In some cases, a training dataset can contain complete component states, i.e., all possible component states of the servers in each VHC. However, in real-world datacenter operations, it can be hard to stress each of the components in a server to work through all possible states. Thus, in some cases, a training dataset collected in a time interval of several hours or even several days may be incomplete. In these cases, there is no guarantee that the training dataset covers all possible state information. This phenomenon may result in inaccurate PMFs.
[0094] Simply collecting as much training data as possible, however, may not be a good solution to the above problem, for two reasons: (1) the larger the training dataset, the higher the overhead in PMF training, and (2) more redundant data entries will be collected even though they do not contribute to the improvement of the PMFs. The following selective data collection strategy can be used to avoid these issues.
[0095] First, an update time interval is set for the training dataset, denoted as Δt.sub.1. At an arbitrary time instant j, the component states collected from r VHCs can be expressed as {tilde over (s)}.sub.j (Equation 10). Along with the measured aggregate power consumption of the datacenter at the same moment, y.sub.j, a data entry in the training dataset can be represented as ({tilde over (s)}.sub.j, y.sub.j). Given a data entry ({tilde over (s)}.sub.j, y.sub.j), the process of selective training data collection can include the following steps: [0096] Step 1: Normalize each element in {tilde over (s)}.sub.j with the corresponding maximum value, i.e., rescale the values of each element to [0, 1]. The maximum value can be found from a technical specification, such as a maximum I/O speed, or, if unknown, it can be set as a value higher than any possible value of the state. [0097] Step 2: Compare the normalized data entry with the entries already in the training dataset. [0098] Step 3: If the normalized entry already exists in the training dataset, back up the power value y.sub.j under the existing entry with the same component states. Otherwise, insert ({tilde over (s)}.sub.j, y.sub.j) into the training dataset as a new entry.
[0099] Note that in Step 3, if the normalized entry already exists, the redundant entry is not simply discarded. Instead, a record of its power value is kept. Thus, one data entry in the training dataset may have multiple power values. In such a case, the median of the multiple power values can be used as the final value in the entry for PMF training. Using the median can alleviate the effect of outliers and can make the PMF training more robust.
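The selective collection procedure, including the median rule for repeated entries, can be sketched as follows. The state maxima, resolution, and sample readings are illustrative assumptions, not values from the patent.

```python
# Selective training-data collection: normalize states, deduplicate
# entries, keep every power reading for a repeated entry, and train on
# the median of those readings.
from statistics import median

MAXIMA = [1.0, 1.0, 500.0, 500.0, 100.0, 100.0]  # assumed per-state maxima
P = 0.01                                          # normalizing resolution

def normalize(state):
    """Step 1: rescale each element to [0, 1] and snap it to the grid of
    resolution P so that near-identical samples collide."""
    return tuple(round(min(v / m, 1.0) / P) * P for v, m in zip(state, MAXIMA))

def add_entry(dataset, state, power):
    """Steps 2-3: back up the power value if the normalized entry exists,
    otherwise insert it as a new entry."""
    dataset.setdefault(normalize(state), []).append(power)

def training_pairs(dataset):
    """One (state, median power) pair per unique normalized entry."""
    return [(s, median(powers)) for s, powers in dataset.items()]

dataset = {}
add_entry(dataset, [0.52, 0.30, 120.0, 10.0, 5.0, 2.0], 141.0)
add_entry(dataset, [0.52, 0.30, 120.0, 10.0, 5.0, 2.0], 180.0)  # outlier reading
add_entry(dataset, [0.52, 0.30, 120.0, 10.0, 5.0, 2.0], 143.0)
add_entry(dataset, [0.90, 0.80, 400.0, 50.0, 80.0, 60.0], 190.0)
pairs = training_pairs(dataset)  # 2 unique entries; the repeated one trains on 143.0
```

Note how the outlier reading of 180.0 is retained but outvoted: the median of the three backed-up values is 143.0, which is what reaches the PMF training.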
[0100] In addition to the collection of component states, the same strategy can also be applied to the collection of the off/on events for constant terms estimation.
[0101]
[0102] At 508, the component state information collected at 502 and the aggregate power consumption collected at 504 are used to select training datasets for estimating workload power consumption. At 512, the training datasets selected at 508 are used to estimate coefficients of variable terms of the PMFs.
[0103] At 510, the aggregate power consumption collected at 504 and the off/on events captured at 506 are used to select training datasets for estimating idle server power consumption. At 514, the training datasets selected at 510 are used to estimate constant terms of the PMFs.
[0104] At 516, the PMFs are updated with the coefficients estimated at 512 and the constant terms estimated at 514.
[0105] For the selective data collection described above, the resolution of the normalized component states determines the maximum number of data entries in the training dataset. Assuming that a datacenter consists of r (r≧1) VHCs, each having n.sub.k (1≦k≦r) component states, and that the preset resolution of the normalized component states is p (0<p<<1), then the number of data entries in the training dataset is upper-bounded by (⌊1/p⌋+1).sup.n, where n=Σ.sub.k=1.sup.rn.sub.k. A proof is provided below in Example 5.
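Assuming the bound has the combinatorial form derived in Example 5, namely that each of the n normalized states can take at most ⌊1/p⌋+1 grid values, it can be evaluated as follows. The parameter values here are illustrative, not the experiment's.

```python
# Upper bound on distinct training entries: each normalized state takes
# at most floor(1/p) + 1 values on a grid of resolution p over [0, 1],
# independently across all component states.
import math

def max_training_entries(p, states_per_vhc):
    values_per_state = math.floor(1 / p) + 1
    return values_per_state ** sum(states_per_vhc)

# Example: 4 VHCs with 6 states each, at a deliberately coarse p = 0.1.
bound = max_training_entries(0.1, [6, 6, 6, 6])  # 11 ** 24 possible entries
```

The bound grows very quickly with the number of states, which is why selective collection matters in practice: the entries actually observed (fewer than 10,000 in the reported experiment) stay far below it.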
[0106] In some cases, with the above data collection strategy, the training dataset may eventually become complete as time goes on. However, datacenter scaling-out (i.e., adding computing resources) and/or scaling-up (i.e., upgrading IT facilities) may lead to changes of PMFs. In this case, a new training dataset needs to be collected with the same procedure, and PMFs need to be updated accordingly.
[0107] Complexity of PMFs Update
[0108] The PMFs can be updated on a regular basis, e.g., every Δt.sub.2 time interval, using the most up-to-date training dataset. The PMF update can be carried out during the normal running of the datacenter and has very small overhead.
[0109] According to the analysis of PMF training complexity provided in Example 6 below, the complexity of PMF training grows linearly with the number of data entries and quadratically with the number of component states. However, as explained above, the number of training data entries has an upper bound of (⌊1/p⌋+1).sup.n, where n is the total number of component states. In many cases, the number of entries actually collected is not large (less than 10,000 in one experiment). Furthermore, as discussed in Example 2 below, a small number of component states (e.g., 6 in one experiment) can be sufficient to provide accurate PMFs in some cases.
[0110] In some examples, the training dataset is selectively updated and duly applied to update the PMFs in the background while, in the foreground, the real-time component state information is used to obtain server-level power estimations.
Example 2—NIPD System
[0111] This example provides a particular embodiment of the technologies described herein for illustration purposes. This particular embodiment comprises a 326-node server cluster comprising 12 (blade) server racks that house 306 CPU nodes, 16 disk array nodes, 2 I/O index nodes, and 2 admin nodes, each running a Linux kernel. Table 2 shows the detailed configuration of each type of server used in this example:
TABLE-US-00002
TABLE 2: Example Configuration of Server Nodes
  CPU Node (306 units): 2 X Intel Xeon E5-2670 8-core CPU (2.6 G); 8 X 8 GB DDR3 1600 MHz SDRAM; 1 X 300 G 10000 rpm SAS HDD
  Disk Array Node (16 units): 1 X Intel Xeon E5-2603 4-core CPU (1.8 G); 4 X 4 GB DDR3 ECC SDRAM; 1 X 300 G 10000 rpm SAS HDD; 36 X 900 G SAS HDD
  Networking: switches
  I/O Index Node (2 units): 2 X Intel Xeon E5-2603 4-core CPU (1.8 G); 8 X 4 GB DDR3 ECC SDRAM; 1 X 300 G 10000 rpm SAS HDD
  Admin Node (2 units): 2 X Intel Xeon E5-2670 8-core CPU (2.6 G); 8 X 16 GB DDR3 1600 MHz SDRAM; 1 X 300 G 10000 rpm SAS HDD
[0112]
[0113] Data Collection
[0114] Referring to
[0115] The administrative node 1222 is used to collect the component state information from each node (e.g., 1224, 1226A-B, and 1228A-B). The administrative node 1222 can use the same sampling rate as the aggregate power collector 1230 or a different one. In some cases, the sampling interval of the administrative node 1222 can be 1 second. The dstat tool, a widely used resource statistics tool, can be used to gather various component states of a server, as shown in Table 3. Other tools can also be used, such as vmstat, iostat, mpstat, and netstat.
TABLE-US-00003
TABLE 3: Example Component State Metrics Collected Using dstat
Component: processor
  usr: CPU utilization for user processes
  sys: CPU utilization for system processes
  idle: CPU in idle
  wai: CPU utilization for I/O waiting
Component: memory
  used: memory usage for processes
  buff: buffer memory
  cach: cache memory
  free: free memory
Component: disk
  read: disk reading amount
  write: disk writing amount
Component: network
  recv: traffic amount that the system received
  send: traffic amount that the system sent
Component: paging
  in: # pages changed from disk to memory
  page: # pages changed from memory to disk
Component: system
  int: system interruption times
  csw: context switch times
[0116] Rather than using all of the state information provided by dstat, the following 6 state metrics from the collected states in Table 3 can be used for training PMFs: total CPU utilization (1-idle), total memory utilization (1-free), disk reading/writing (read/write), and network traffic receiving/sending (recv/send). In some cases, the metrics can be limited to these 6 for training purposes since: (1) the selected metrics can often cover the major hardware components of the server, and (2) including other metrics can increase the overhead of training PMFs but may not improve the accuracy of the PMFs.
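One way the 6 metrics could be assembled into a normalized state vector is sketched below. The dictionary keys mirror the dstat labels in Table 3, but the maxima and sample values are assumptions for illustration; a real collector would parse live dstat output.

```python
# Build the 6-element state vector: total CPU utilization (1 - idle),
# total memory utilization (1 - free), disk read/write, and network
# recv/send, each rescaled to [0, 1] by an assumed maximum.

def state_vector(sample, max_disk, max_net):
    return [
        1.0 - sample["idle"],                  # total CPU utilization
        1.0 - sample["free"],                  # total memory utilization
        min(sample["read"] / max_disk, 1.0),   # disk reading
        min(sample["write"] / max_disk, 1.0),  # disk writing
        min(sample["recv"] / max_net, 1.0),    # network receiving
        min(sample["send"] / max_net, 1.0),    # network sending
    ]

# Hypothetical sample: idle/free as fractions, disk and net in MB/s.
sample = {"idle": 0.40, "free": 0.25, "read": 50.0, "write": 25.0,
          "recv": 10.0, "send": 5.0}
s = state_vector(sample, max_disk=100.0, max_net=100.0)  # 6 values in [0, 1]
```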
[0117] Estimation of Idle Power
[0118] For the estimation of idle power (or constant terms in PMFs) of CPU nodes 1224 in this example, idle nodes are identified and remotely turned off and on. For remote operation, the industry-standard Intelligent Platform Management Interface (IPMI) can be used to turn the servers off and on. During the on/off time period, multiple off/on events and corresponding power changes are captured from event logs and data logs, respectively. These off/on events are fed into an optimization model to estimate the constant terms (idle power) of the CPU nodes 1224.
[0119] In this example, the idle power of the I/O nodes 1226A-B and admin node 1222 cannot be estimated by turning them off and on remotely because they are not allowed to be shut down during the normal operation of the running datacenter. Since the number of these two server types is quite small in this example (only 2 of each type), and their hardware configurations are similar to that of the CPU nodes 1224, their idle power can be set the same as that of the CPU nodes in this case. The disk array nodes 1228A-B also need to be kept on all the time. However, their hardware configurations are not similar to those of the CPU nodes 1224. Therefore, the idle power of the disk array nodes 1228A-B is derived from their working power range by making use of rack power measurements.
[0120] The precision and complexity of the example NIPD solution for power monitoring can be evaluated at the rack level and the server level, respectively.
[0121] Table 4 summarizes the values of example parameters set in the example NIPD system:
TABLE-US-00004
TABLE 4: Example Parameter Settings for the Example NIPD System
  number of VHCs (r): 4
  number of component states (n.sub.k): [6, 6, 6, 6]
  normalizing resolution (p): 0.01
  training dataset update interval (Δt.sub.1): 2 seconds
  PMFs update interval (Δt.sub.2): 5 minutes, then 0.5 hours
[0122] The example parameter settings in Table 4 are based on the following considerations: [0123] Number of VHCs (r): According to the example server node configurations in Table 2, the nodes in the datacenter can be logically divided into 4 VHCs, and the numbers of servers in the VHCs are 306, 16, 2, and 2, respectively. [0124] Number of component states (n.sub.k): As discussed above, 6 component states are chosen for PMF training as well as for power estimation of individual servers. [0125] Normalizing resolution (p): In the update of the training dataset, the resolution of the normalized data in each entry is set as 0.01, which, as discussed further below, can be precise enough for accurate PMF training in some cases. A higher resolution would increase the size of the training dataset as well as the PMF training complexity. [0126] Interval for updating the training dataset (Δt.sub.1): In this example, this interval is set to the same value as the sampling interval for aggregate power consumption, which in this case is 2 seconds. Setting the update interval of the training dataset to the same value as the sampling interval can enable training data to be collected quickly. [0127] PMFs update interval (Δt.sub.2): An initial value for the PMF update interval is set as 5 minutes, based on an estimate of the PMF training time needed under the theoretical maximum size of the training dataset. Over time, as the training dataset size begins to stabilize, the update interval is changed to 0.5 hours to reduce the overhead of the PMF updates.
[0128] Power Monitoring at the Rack Level
[0129] By putting the real-time component state information of the servers into the corresponding PMFs, the power consumption of each server can be estimated. The estimated power consumption of all servers in the same rack can then be aggregated to produce an estimated power consumption of the rack. To measure the error rate of this rack-level estimation, the mean relative error (MRE) metric can be used, defined as:
MRE=(1/t)Σ.sub.j=1.sup.t(|y.sub.j−y′.sub.j|/y.sub.j),
[0130] where t is the number of data entries in the dataset, and y.sub.j and y′.sub.j are the ground truth and estimated rack power for the j-th data entry, respectively.
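A direct implementation of the MRE defined by those symbols might look like the following; the power values are made up for illustration.

```python
# Mean relative error: average of |y_j - y'_j| / y_j over all t entries,
# where y_j is the measured rack power and y'_j the estimate.

def mean_relative_error(truth, estimates):
    return sum(abs(y - y_est) / y
               for y, y_est in zip(truth, estimates)) / len(truth)

truth = [2000.0, 2100.0, 1900.0]      # measured rack power (watts)
estimates = [2040.0, 2079.0, 1938.0]  # PMF-based estimates (watts)
mre = mean_relative_error(truth, estimates)  # (0.02 + 0.01 + 0.02) / 3
```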
[0131] By running different benchmarks shown in Table 5, training data can be collected for various workloads and used to update the PMFs.
TABLE-US-00005
TABLE 5: Example Workloads for NIPD Evaluations
  Idle (server-level validation): only background OS processes
  Peak (server-level validation): stress CPU usage to 100%; malloc memory until 100%
  SPECint (training data collection and PMF updates): gcc (compiler), gobmk (artificial intelligence: go), sjeng (artificial intelligence: chess), tonto (discrete event simulation)
  SPECfp (training data collection and PMF updates): namd (biology/molecular dynamics), wrf (weather prediction), tonto (quantum chemistry)
  IOZone (training data collection and PMF updates): filesystem benchmark tool
  Synthetic (rack-level validation): occupy CPU randomly; read/write memory randomly
[0132] In one scenario, after each PMF update, the synthetic workloads listed in Table 5 are run, power consumption and server component states are collected, and the MRE of the power estimation with updated PMFs is calculated.
[0133] To illustrate the performance results more clearly, power estimation results for two server racks, Rack-1 and Rack-2 (over a 0.5-hour period), are shown in
[0134] To give a view of the overall performance in the datacenter, example MRE values over all 12 racks of the example datacenter are depicted in
[0135] In cases where power consumption of a rack is very stable, variable terms may be excluded from a PMF. For example, in this particular example, Rack-12 1502 is dedicated to an InfiniBand (IB) switch and has a very stable power consumption around 2.5±0.1 kW. Only the constant term was used for power estimation of Rack-12 and resulted in an MRE of 0.85%.
[0136] Power Monitoring at Server Level
[0137] In some cases, it can be difficult to fully validate the accuracy of power estimation at the server level. For example, some servers, such as blade servers, are designed to be highly integrated in the rack. In scenarios like this, it is difficult to assemble sensors/meters inside individual servers. In addition, multiple servers may share the same power supply so it is also hard to obtain server level power outside the servers.
[0138] In these cases, although ground truth power consumption for individual nodes cannot be recorded, knowledge about the idle power and peak power, or the working power range, of each server type can be obtained. The idle power of the CPU nodes in the datacenter can be estimated by turning idle CPU nodes off and on, as described in more detail above. The peak power can be learned by referring to the nameplate power provided by the server vendor. Additionally, some racks may contain only CPU nodes and disk arrays. In these cases, all the CPU nodes can be shut down, leaving only the disk arrays running, to obtain the working power range of the disk arrays by measuring power consumption at the rack level. The power consumption of a disk array node is usually larger, but relatively more stable, than that of a CPU node, so a working power range, rather than idle/peak power, is estimated for the disk array nodes by making use of rack-level power. The measured or estimated idle/peak power and working power ranges of the servers in the example datacenter are shown in Table 6. These values are used as references to evaluate server-level power estimation in this example.
TABLE-US-00006
TABLE 6: Example Idle/Peak Power of CPU Nodes and Example Power Range of Disk Array Nodes
  CPU node: idle power 75.4 Watts; peak power 200 Watts
  Disk array node: power range 1020~1170 Watts
[0139] Power Disaggregation of the Datacenter
[0140] Using PMFs trained from the aggregate power readings of the IT facilities in this example, the real-time power consumption of individual servers is estimated. To illustrate the performance, four CPU nodes and two disk array nodes are chosen as test nodes. Of the four CPU test nodes, two run the peak workload (listed in Table 5), and the other two first stay idle for 15 minutes and then run the peak workload for another 15 minutes. The two disk array test nodes are left running and available to other processes.
[0141]
[0142] In some cases, the estimated power values are slightly larger than the reference values. This can occur because, when disaggregating the datacenter power, the power lost during transmission (e.g., in wires and PDUs) as well as the power consumed by shared facilities (e.g., network switches and datacenter accessories) are assigned to individual servers.
[0143] Power Disaggregation of Racks
[0144] When a datacenter is capable of monitoring power consumption of each rack, the technologies described herein can be used to disaggregate the rack-level power consumption into server-level power consumption. In scenarios where the servers in a rack are homogeneous, the number of VHCs can be set to one. In this case, the computational complexity for training PMFs will be much lower than that in a heterogeneous environment.
[0145] In one particular example, a test rack containing 28 CPU nodes and 2 I/O index nodes was selected. Since the number of CPU nodes is much larger than the number of I/O index nodes, and the CPU nodes' working power ranges are very similar, the selected rack can be considered approximately homogeneous. Historical data is collected from the selected rack and used for PMF training. (Since the selected rack is considered approximately homogeneous, only one VHC is created for the servers in the rack and, thus, only one PMF is created and trained.) The updated PMF is used to make estimations under idle/peak workloads for individual servers in the selected rack. The resulting idle/peak power estimation of four CPU test nodes using rack-level power is illustrated in
[0146] It can be observed from
Example 3—NIPD as Middleware
[0147] As the technologies described herein can provide fine-grained power information at the server level, they can be used as middleware in some embodiments to support different power management applications.
[0148]
[0149] Power Capping 1812: The power capacity of IT facilities estimated by servers' nameplate ratings can be much higher than the actual server power consumption. A graph depicting example power readings of a server rack compared with the rack's designed power capacity is shown in
[0150] Power Accounting 1814: The fine-grained power information obtained from NIPD sub-system 1840 can also be used for power accounting from different perspectives. For example, as shown in
[0151] Others: Based on results from the NIPD sub-system 1840, the power consumption characteristics of different servers, workloads, and/or users can be analyzed and corresponding energy-saving policies 1816 can be adopted. For example, the power efficiency of different server types under the same workloads can be measured and used to choose the most energy-conservative servers for performing similar workloads in the future. In addition, the server-level power information can be used to draw a power distribution map of the datacenter, which provides clues to identify or predict "hot spots" for more intelligent cooling systems 1818.
Example 4—Equation Transformations
[0152] This example provides details of transformations of Equations 9 and 10.
[0153] Transformation of Equation 9
[0154] For a VHC consisting of m servers, each with n component states, given its PMF in the form of Equation 5 and state vector in the form of Equation 4, the aggregate power consumption at time j can be expressed as:
[0155] Transformation of Equation 10
[0156] Assuming that a datacenter consists of r VHCs and the PMF of the k-th (1≦k≦r) VHC is denoted in the form of Equation 7, then at an arbitrary time instant j, the aggregate power consumption generated by r VHCs can be expressed as:
[0157] where
{tilde over (s)}.sub.j=[{tilde over (s)}.sub.j.sup.(1),{tilde over (s)}.sub.j.sup.(2), . . . ,{tilde over (s)}.sub.j.sup.(r)] (Equation 19)
and
{tilde over (w)}=[w.sup.(1),w.sup.(2), . . . ,w.sup.(r)].sup.T, (Equation 20)
[0158] in which {tilde over (s)}.sub.j.sup.(k) and w.sup.(k) are defined by Equations 9 and 8, respectively.
Example 5—Proof of an Upper Bound on Training Dataset Entries
[0159] Given a datacenter with r (r≧1) VHCs, each with n.sub.k (1≦k≦r) component states, for each data entry in the training dataset in the form of ({tilde over (s)}, y), the number of non-constant elements of {tilde over (s)} is Σ.sub.k=1.sup.rn.sub.k (referring to Equation 9). Then, for each of these elements, as the normalizing resolution is set as p and the normalized range is [0, 1], the number of its possible values is at most ⌊1/p⌋+1. Therefore, the total number of possible values of {tilde over (s)}, and hence the number of distinct data entries in the training dataset, is at most (⌊1/p⌋+1).sup.n, where n=Σ.sub.k=1.sup.rn.sub.k.
Example 6—PMFs Training Complexity
[0160] For PMFs training, the optimization model established in Equation 12 can be used to find the optimal PMFs coefficients, which can essentially fall into the form of least square linear regression. With t data entries in the training dataset, the closed-form solution to the least square regression problem (Equation 12), i.e., the PMFs coefficients {tilde over (w)}, can be expressed as:
{tilde over (w)}=(S.sup.TS).sup.−1S.sup.Tŷ, (Equation 21)
[0161] where S=[{tilde over (s)}.sub.1, {tilde over (s)}.sub.2, . . . , {tilde over (s)}.sub.t].sup.T and ŷ=[y.sub.1, y.sub.2, . . . , y.sub.t].sup.T.
[0162] Assuming that the total number of component states over all VHCs is n, i.e., n=Σ.sub.k=1.sup.rn.sub.k, where n.sub.k denotes the number of component states of the k-th VHC, the time complexity to obtain {tilde over (w)} from Equation 21 is O(n.sup.2·t).
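For a concrete illustration of Equation 21, consider the degenerate case of a single aggregated state column (one VHC with one component state), where (S.sup.TS).sup.−1S.sup.Tŷ reduces to a scalar ratio. The data values below are made up; the general case with n columns and t rows has the O(n.sup.2·t) cost noted above.

```python
# Equation 21 with a single column: w = (S^T S)^{-1} S^T y collapses to
# sum(s_j * y_j) / sum(s_j^2), the one-parameter least-squares fit.

def fit_single_coefficient(s, y):
    return sum(sj * yj for sj, yj in zip(s, y)) / sum(sj * sj for sj in s)

s = [0.2, 0.5, 0.9, 1.0]        # aggregated utilization at each sample time
y = [25.0, 62.5, 112.5, 125.0]  # corresponding aggregate dynamic power (watts)
w = fit_single_coefficient(s, y)  # ~125 watts per unit of utilization
```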
Example 7—Computing Systems
[0163]
[0164] With reference to
[0165] A computing system may have additional features. For example, the computing system 2100 includes storage 2140, one or more input devices 2150, one or more output devices 2160, and one or more communication connections 2170. An interconnection mechanism (not shown) such as a bus, controller, or network interconnects the components of the computing system 2100. Typically, operating system software (not shown) provides an operating environment for other software executing in the computing system 2100, and coordinates activities of the components of the computing system 2100.
[0166] The tangible storage 2140 may be removable or non-removable, and includes magnetic disks, magnetic tapes or cassettes, CD-ROMs, DVDs, or any other medium which can be used to store information in a non-transitory way and which can be accessed within the computing system 2100. The storage 2140 stores instructions for the software 2180 implementing one or more innovations described herein.
[0167] The input device(s) 2150 may be a touch input device such as a keyboard, mouse, pen, or trackball, a voice input device, a scanning device, or another device that provides input to the computing system 2100. For video encoding, the input device(s) 2150 may be a camera, video card, TV tuner card, or similar device that accepts video input in analog or digital form, or a CD-ROM or CD-RW that reads video samples into the computing system 2100. The output device(s) 2160 may be a display, printer, speaker, CD-writer, or another device that provides output from the computing system 2100.
[0168] The communication connection(s) 2170 enable communication over a communication medium to another computing entity. The communication medium conveys information such as computer-executable instructions, audio or video input or output, or other data in a modulated data signal. A modulated data signal is a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal. By way of example, and not limitation, communication media can use an electrical, optical, RF, or other carrier.
[0169] The innovations can be described in the general context of computer-executable instructions, such as those included in program modules, being executed in a computing system on a target real or virtual processor. Generally, program modules include routines, programs, libraries, objects, classes, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The functionality of the program modules may be combined or split between program modules as desired in various embodiments. Computer-executable instructions for program modules may be executed within a local or distributed computing system.
[0170] The terms “system” and “device” are used interchangeably herein. Unless the context clearly indicates otherwise, neither term implies any limitation on a type of computing system or computing device. In general, a computing system or computing device can be local or distributed, and can include any combination of special-purpose hardware and/or general-purpose hardware with software implementing the functionality described herein.
[0171] For the sake of presentation, the detailed description uses terms like “determine” and “use” to describe computer operations in a computing system. These terms are high-level abstractions for operations performed by a computer, and should not be confused with acts performed by a human being. The actual computer operations corresponding to these terms vary depending on implementation.
Example 8—Cloud Computing Environment
[0172]
[0173] The cloud computing services 2210 are utilized by various types of computing devices (e.g., client computing devices), such as computing devices 2220, 2222, and 2224. For example, the computing devices (e.g., 2220, 2222, and 2224) can be computers (e.g., desktop or laptop computers), mobile devices (e.g., tablet computers or smart phones), or other types of computing devices. For example, the computing devices (e.g., 2220, 2222, and 2224) can utilize the cloud computing services 2210 to perform computing operations (e.g., data processing, data storage, and the like).
Example 9—Implementations
[0174] Although the operations of some of the disclosed methods are described in a particular, sequential order for convenient presentation, it should be understood that this manner of description encompasses rearrangement, unless a particular ordering is required by specific language set forth below. For example, operations described sequentially may in some cases be rearranged or performed concurrently. Moreover, for the sake of simplicity, the attached figures may not show the various ways in which the disclosed methods can be used in conjunction with other methods.
[0175] Any of the disclosed methods can be implemented as computer-executable instructions or a computer program product stored on one or more computer-readable storage media and executed on a computing device (e.g., any available computing device, including smart phones or other mobile devices that include computing hardware). Computer-readable storage media are any available tangible media that can be accessed within a computing environment (e.g., one or more optical media discs such as DVD or CD, volatile memory components (such as DRAM or SRAM), or nonvolatile memory components (such as flash memory or hard drives)). By way of example and with reference to
[0176] Any of the computer-executable instructions for implementing the disclosed techniques as well as any data created and used during implementation of the disclosed embodiments can be stored on one or more computer-readable storage media. The computer-executable instructions can be part of, for example, a dedicated software application or a software application that is accessed or downloaded via a web browser or other software application (such as a remote computing application). Such software can be executed, for example, on a single local computer (e.g., any suitable commercially available computer) or in a network environment (e.g., via the Internet, a wide-area network, a local-area network, a client-server network (such as a cloud computing network), or other such network) using one or more network computers.
[0177] For clarity, only certain selected aspects of the software-based implementations are described. Other details that are well known in the art are omitted. For example, it should be understood that the disclosed technology is not limited to any specific computer language or program. For instance, the disclosed technology can be implemented by software written in C++, Java, Perl, JavaScript, assembly language, Python, or any other suitable programming language. Likewise, the disclosed technology is not limited to any particular computer or type of hardware. Certain details of suitable computers and hardware are well known and need not be set forth in detail in this disclosure.
[0178] Furthermore, any of the software-based embodiments (comprising, for example, computer-executable instructions for causing a computer to perform any of the disclosed methods) can be uploaded, downloaded, or remotely accessed through a suitable communication means. Such suitable communication means include, for example, the Internet, the World Wide Web, an intranet, software applications, cable (including fiber optic cable), magnetic communications, electromagnetic communications (including RF, microwave, and infrared communications), electronic communications, or other such communication means.
[0179] The disclosed methods, apparatus, and systems should not be construed as limiting in any way. Instead, the present disclosure is directed toward all novel and nonobvious features and aspects of the various disclosed embodiments, alone and in various combinations and sub combinations with one another. The disclosed methods, apparatus, and systems are not limited to any specific aspect or feature or combination thereof, nor do the disclosed embodiments require that any one or more specific advantages be present or problems be solved.
[0180] The technologies from any example can be combined with the technologies described in any one or more of the other examples. In view of the many possible embodiments to which the principles of the disclosed technology may be applied, it should be recognized that the illustrated embodiments are examples of the disclosed technology and should not be taken as a limitation on the scope of the disclosed technology. Rather, the scope of the disclosed technology includes what is covered by the scope and spirit of the following claims.