Machine Learning Engine using a Distributed Predictive Analytics Data Set
20220358324 · 2022-11-10
CPC classification: G06F18/2148 (PHYSICS)
Abstract
A novel distributed method for machine learning is described, where the algorithm operates on a plurality of data silos such that the privacy of the data in each silo is maintained. In some embodiments, the attributes of the data and the features themselves are kept private within the data silos. The method includes a distributed learning algorithm whereby a plurality of data spaces are co-populated with artificial, evenly distributed data, and the data spaces are then carved into smaller portions whereupon the numbers of real and artificial data points are compared. Through an iterative process, clusters having less than evenly distributed real data are discarded. A plurality of final quality control measurements are used to merge clusters that are too similar to be meaningful. These distributed quality control measures are then combined from each of the data silos to derive an overall quality control metric.
Claims
1. A distributed method for creating a machine learning rule set, the method comprising: preparing, on a computer, a set of data identifiers to identify data elements representing similar events for training the machine learning rule set; sending the set of data identifiers to a plurality of data silos; receiving a quality control metric from each data silo, wherein the quality control metric from each data silo represents the quality control metric calculated using a silo specific rule set that was derived from a machine learning algorithm using the data elements and the data identifiers on the data silo; and combining the quality control metrics from each data silo into a combined quality control metric.
2. The method of claim 1 wherein the quality control metric is an F-Score.
3. The method of claim 1 wherein the combined quality control metric uses a weighted algorithm.
4. The method of claim 1 further comprising receiving the silo specific rule sets from at least one of the plurality of data silos.
5. The method of claim 4 further comprising receiving a plurality of silo specific rule sets and quality control metrics associated with the silo specific rule sets, from at least one of the plurality of data silos.
6. The method of claim 1 wherein the silo specific rule sets are not returned to the computer.
7. The method of claim 1 wherein a set of training results is sent with the identifiers to the plurality of data silos.
8. The method of claim 1 wherein the machine learning algorithm creates a test rule by adding a condition, calculating a test quality metric, and saving the test rule and the test quality metric if the quality metric is better than previously saved test quality metrics.
9. The method of claim 8 wherein the condition is a range locating clusters of data.
10. A non-transitory computer readable media programmed to: prepare, on a computer, a set of data identifiers to identify data elements representing similar events for training a machine learning rule set; send the set of data identifiers to a plurality of data silos; receive a quality control metric from each data silo, wherein the quality control metric from each data silo represents the quality control metric calculated using a silo specific rule set that was derived from a machine learning algorithm using the data elements and the data identifiers on the data silo; and combine the quality control metrics from each data silo into a combined quality control metric.
11. The non-transitory computer readable media of claim 10 wherein the quality control metric is an F-Score.
12. The non-transitory computer readable media of claim 10 wherein the combined quality control metric uses a weighted algorithm.
13. The non-transitory computer readable media of claim 10 wherein the silo specific rule sets are returned to the computer and combined into the machine learning rule set.
14. The non-transitory computer readable media of claim 13 wherein a plurality of silo specific rule sets and quality control metrics associated with the silo specific rule sets are returned to the computer from each data silo.
15. The non-transitory computer readable media of claim 10 wherein the silo specific rule sets are not returned to the computer.
16. The non-transitory computer readable media of claim 10 wherein an associated set of training results is sent with the identifiers to the plurality of data silos from the computer.
17. The non-transitory computer readable media of claim 10 wherein the machine learning algorithm creates a test rule by adding a condition, calculating a test quality metric, and saving the test rule and the test quality metric if the quality metric is better than previously saved test quality metrics.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0032]-[0046] (The figure-by-figure descriptions were not recovered in this text; the drawings are referenced by their numerals throughout the detailed description below.)
DETAILED DESCRIPTION
[0047] The following description outlines several possible embodiments for creating models using distributed data. The Distributed DensiCube modeler and scorer described below extend the predictive analytics algorithms described in U.S. Pat. No. 9,489,627 for execution in distributed data environments and into quality analytics. The rule learning algorithm for DensiCube is briefly described below. The DensiCube machine learning algorithm, however, is only one embodiment of the inventions herein; other machine learning algorithms could also be used.
Rule Learning Algorithm
[0048] The rule learning algorithm induces a set of rules. A rule itself is a conjunction of conditions, each for one attribute. A condition is a relational expression in the form:
A=V,
where A is an attribute and V is a nominal value for a symbolic attribute or an interval for a numeric attribute. The rule induction algorithm allows for two important learning parameters 102: minimum recall and minimum precision. More specifically, rules generated by the algorithm must satisfy the minimum recall and minimum precision requirements 105 as set by these parameters 102. The algorithm repeats the process of learning a rule 103 for the target class and removing all target class examples covered by that rule 104 until no rule can be generated that satisfies the minimum recall and minimum precision requirements 105.
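For illustration only, this covering loop can be sketched in Python as follows. The rule and example representations, helper names, and threshold defaults are assumptions of this sketch, not the patent's implementation; learn_one_rule is supplied separately (one candidate appears after paragraph [0053] below).

```python
from typing import Callable, Dict, List, Optional, Tuple

Example = Tuple[Dict[str, float], bool]   # (attribute values, is target class)
Rule = Dict[str, Tuple[float, float]]     # attribute -> (low, high) interval

def covers(rule: Rule, x: Dict[str, float]) -> bool:
    return all(lo <= x[a] <= hi for a, (lo, hi) in rule.items())

def recall(rule: Rule, data: List[Example]) -> float:
    pos = [x for x, y in data if y]
    return sum(covers(rule, x) for x in pos) / len(pos) if pos else 0.0

def precision(rule: Rule, data: List[Example]) -> float:
    hit = [y for x, y in data if covers(rule, x)]
    return sum(hit) / len(hit) if hit else 0.0

def learn_rule_set(data: List[Example],
                   learn_one_rule: Callable[[List[Example]], Optional[Rule]],
                   min_recall: float = 0.1,
                   min_precision: float = 0.5) -> List[Rule]:
    """Repeat: learn a rule, keep it if it meets both thresholds, then
    remove the target-class examples it covers."""
    rules: List[Rule] = []
    while True:
        rule = learn_one_rule(data)
        if rule is None or recall(rule, data) < min_recall \
                        or precision(rule, data) < min_precision:
            return rules
        rules.append(rule)
        data = [(x, y) for x, y in data if not (y and covers(rule, x))]
```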
[0049] In learning a rule, as seen in the drawings, the algorithm starts with a base rule 201 that covers the entire data space and then evaluates candidate splits of that space into pairs of more specific rules.
[0050] Looking at 211, 212, the rule 212 covers all of the positive and negative values, and rule 211 is empty. This rule set is then scored and compared to the base rule 201. The best rule is stored.
[0051] Next, the algorithm increments the x-axis split between the rules, creating rules 231 and 232. The rules are scored and compared to the previous best rule.
[0052] The process is repeated until only one increment on the x-axis remains. These rules 241, 242 are then scored, compared, and stored if the score is better.
[0053] Once the x-axis has been searched, the best rules are then split on the y-axis (for example, 251, 252) to find the best overall rule. This process may be repeated for as many axes as found in the data.
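A minimal sketch of this axis-by-axis split search, reusing the covers, recall, and precision helpers from the sketch above; the fixed step count and the greedy ordering of axes are simplifying assumptions, not the patent's exact search procedure.

```python
def learn_one_rule(data, score_fn, attrs=("x", "y"), steps=10):
    """Greedily search candidate interval splits on each axis in turn."""
    base = {a: (min(x[a] for x, _ in data), max(x[a] for x, _ in data))
            for a in attrs}                       # the base rule (201)
    best, best_score = dict(base), score_fn(base, data)
    for a in attrs:                               # x-axis first, then y-axis
        lo, hi = base[a]
        current = dict(best)
        for i in range(1, steps):                 # increment the split point
            cut = lo + (hi - lo) * i / steps
            for interval in ((lo, cut), (cut, hi)):
                cand = dict(current)
                cand[a] = interval
                s = score_fn(cand, data)
                if s > best_score:                # store the better rule
                    best, best_score = cand, s
    return best

# score_fn would be a rule quality measure, e.g. the F-measure
# defined in the next section.
```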
[0054] In the Distributed DensiCube algorithm, the functions described above are not performed at a central site; they are distributed across the servers that hold the data silos.
[0056] In particular, the entire rule learning process described above is executed independently on each data silo, and the resulting silo specific rules and their quality metrics are combined as described below.
[0058] Every rule induction algorithm uses a metric to evaluate or rank the rules that it generates. Most rule induction algorithms use accuracy as the metric. However, accuracy is not a good metric for imbalanced data sets. The algorithm uses an F-measure as the evaluation metric, selecting the rule with the largest F-measure score. F-measure is widely used in information retrieval and in some machine learning algorithms. The two components of F-measure are recall and precision. The recall of a target class rule is the ratio of the number of target class examples covered by the rule to the total number of target class examples. The precision of a target class (i.e., misstatement class) rule is the ratio of the number of target class examples covered by the rule to the total number of examples (from both the target and non-target classes) covered by that rule. The F-measure of a rule r is defined as:

F_\beta(r) = \frac{(1 + \beta^2) \cdot \mathrm{precision}(r) \cdot \mathrm{recall}(r)}{\beta^2 \cdot \mathrm{precision}(r) + \mathrm{recall}(r)}

where β is the weight. When β is set to 1, recall and precision are weighted equally. F-measure favors recall with β>1 and favors precision with β<1. F-measure can be used to compare the performances of two different models/rules: a model/rule with a larger F-measure is better than one with a smaller F-measure.
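As a worked example, the F-measure above transcribes directly into Python, reusing the recall and precision helpers sketched earlier.

```python
def f_measure(rule, data, beta=1.0):
    """Weighted harmonic mean of a rule's precision and recall."""
    p, r = precision(rule, data), recall(rule, data)
    if p == 0.0 and r == 0.0:
        return 0.0                 # rule covers no target-class examples
    return (1 + beta**2) * p * r / (beta**2 * p + r)

# beta > 1 favors recall, beta < 1 favors precision, beta == 1 weights both
# equally; f_measure is a suitable score_fn for learn_one_rule above.
```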
Prototype Generation Algorithm for Ranking with Rules
[0059] The algorithms incorporate a method, called prototype generation, to facilitate ranking with rules. For each rule generated by the rule learning algorithm, two prototypes are created. In generating prototypes, the software ignores symbolic conditions, because examples covered by a rule share the same symbolic values. Given a rule R with m numeric conditions A_{R1} = V_{R1} ∧ A_{R2} = V_{R2} ∧ … ∧ A_{Rm} = V_{Rm}, where A_{Ri} is a numeric attribute and V_{Ri} is a range of numeric values, the positive prototype of R is P(R) = (p_{R1}, p_{R2}, …, p_{Rm}) and the negative prototype of R is N(R) = (n_{R1}, n_{R2}, …, n_{Rm}), where both p_{Ri} ∈ V_{Ri} and n_{Ri} ∈ V_{Ri}. p_{Ri} and n_{Ri} are computed using the following formulas:

p_{Ri} = \frac{1}{|R(\mathrm{POS})|} \sum_{e \in R(\mathrm{POS})} e_{Ri}, \qquad n_{Ri} = \frac{1}{|R(\mathrm{NEG})|} \sum_{e \in R(\mathrm{NEG})} e_{Ri}

[0060] where R(POS) and R(NEG) are the sets of positive and negative examples covered by R respectively, e = (e_{R1}, e_{R2}, …, e_{Rm}) is an example, and e_{Ri} ∈ V_{Ri} for i = 1, …, m, because e is covered by R.
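A short sketch of prototype generation under the reading above, where each prototype coordinate is the mean of the covered positive or negative examples on that attribute; the dictionary representation is an assumption of this sketch, and the covers helper from the earlier sketch is reused.

```python
def prototypes(rule, data):
    """Per-attribute means of the positive and negative examples covered
    by the rule (assumed reading of the prototype formulas above)."""
    pos = [x for x, y in data if y and covers(rule, x)]
    neg = [x for x, y in data if not y and covers(rule, x)]
    mean = lambda rows, a: sum(r[a] for r in rows) / len(rows)
    P = {a: mean(pos, a) for a in rule} if pos else None
    N = {a: mean(neg, a) for a in rule} if neg else None
    return P, N
```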
[0061] Given a positive prototype P(R) = (p_{R1}, p_{R2}, …, p_{Rm}) and a negative prototype N(R) = (n_{R1}, n_{R2}, …, n_{Rm}) of rule R, the score of an example e = (e_{R1}, e_{R2}, …, e_{Rm}) is 0 if e is not covered by R. Otherwise, e receives a score between 0 and 1 computed using the following formula:

\mathrm{score}(e, R) = \frac{\sum_{i=1}^{m} w_{Ri} \cdot \frac{1}{2} \left( 1 + \frac{|e_{Ri} - n_{Ri}| - |e_{Ri} - p_{Ri}|}{|p_{Ri} - n_{Ri}|} \right)}{\sum_{i=1}^{m} w_{Ri}}

[0062] where w_{Ri} is the weight of the i-th attribute of R. The value of

\frac{|e_{Ri} - n_{Ri}| - |e_{Ri} - p_{Ri}|}{|p_{Ri} - n_{Ri}|}

is between −1 and 1. When e_{Ri} > n_{Ri} > p_{Ri} or p_{Ri} > n_{Ri} > e_{Ri}, it is −1. When e_{Ri} > p_{Ri} > n_{Ri} or n_{Ri} > p_{Ri} > e_{Ri}, it is 1. When e_{Ri} is closer to n_{Ri} than to p_{Ri}, it takes a value between −1 and 0. When e_{Ri} is closer to p_{Ri} than to n_{Ri}, it takes a value between 0 and 1. The value of score(e, R) is thereby normalized to the range of 0 to 1. If p_{Ri} = n_{Ri}, then the term is set to 0.
[0063] w_{Ri} is computed using the following formula:

w_{Ri} = \frac{|p_{Ri} - n_{Ri}|}{\max_{Ri} - \min_{Ri}}

where max_{Ri} and min_{Ri} are the maximum and minimum values of the i-th attribute of R, respectively. A large difference between p_{Ri} and n_{Ri} implies that the values of positive examples are very different from the values of negative examples on that attribute, so the attribute should distinguish positive examples from negative ones well.
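Combining the prototypes, the per-attribute term, and the weights, example scoring might be sketched as follows; the ranges argument (each attribute's observed minimum and maximum) and the weighted-average normalization follow the formulas as reconstructed above, and covers is reused from the earlier sketch.

```python
def score_example(x, rule, P, N, ranges):
    """Score a covered example against a rule's prototypes, in [0, 1]."""
    if not covers(rule, x):
        return 0.0
    num = den = 0.0
    for a in rule:
        min_a, max_a = ranges[a]
        w = abs(P[a] - N[a]) / (max_a - min_a) if max_a > min_a else 0.0
        if P[a] == N[a]:
            d = 0.0                     # degenerate prototypes: term set to 0
        else:
            d = (abs(x[a] - N[a]) - abs(x[a] - P[a])) / abs(P[a] - N[a])
        num += w * (1.0 + d) / 2.0      # map [-1, 1] onto [0, 1]
        den += w
    return num / den if den > 0.0 else 0.0
```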
Scoring Using Rules
[0064] A rule induction algorithm usually generates a set of overlapping rules. Two methods, Max and Probabilistic Sum, are used by the software to combine the scores an example receives from multiple rules. Both methods have been used in rule-based expert systems. The Max approach simply takes the largest score of all rules. Given an example e and a set of n rules R = {R_1, …, R_n}, the combined score of e using Max is computed as follows:

\mathrm{score}(e, R) = \max_{i=1}^{n} \{ \mathrm{precision}(R_i) \times \mathrm{score}(e, R_i) \}

where precision(R_i) is the precision of R_i. There are two ways to determine score(e, R_i) for a hybrid rule. The first way returns the score of e received from rule R_i for all e's. The second way returns the score of e received from R_i only if the score is larger than or equal to the threshold of R_i; otherwise the score is 0. For a normal rule, score(e, R_i) is the score described in the previous section.
[0065] For the Probabilistic Sum method, the formula is defined recursively as follows:

\mathrm{score}(e, \{R_1\}) = \mathrm{score}(e, R_1)

\mathrm{score}(e, \{R_1, R_2\}) = \mathrm{score}(e, R_1) + \mathrm{score}(e, R_2) - \mathrm{score}(e, R_1) \times \mathrm{score}(e, R_2)

\mathrm{score}(e, \{R_1, \dots, R_n\}) = \mathrm{score}(e, \{R_1, \dots, R_{n-1}\}) + \mathrm{score}(e, R_n) - \mathrm{score}(e, \{R_1, \dots, R_{n-1}\}) \times \mathrm{score}(e, R_n)
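Both combination methods are compact enough to state directly; the per-rule scores and precisions are assumed to have been computed as described above.

```python
def combine_max(scores, precisions):
    """Max method: the largest precision-weighted score over all rules."""
    return max((p * s for p, s in zip(precisions, scores)), default=0.0)

def combine_prob_sum(scores):
    """Probabilistic sum: fold s1 (+) s2 = s1 + s2 - s1*s2 over the rules."""
    total = 0.0
    for s in scores:
        total = total + s - total * s
    return total

# Example: combine_prob_sum([0.5, 0.4]) returns 0.7.
```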
Hardware Architecture
[0066] Turning to the hardware architecture shown in the drawings, the algorithms described herein run on a plurality of servers 503, 505, 507, each with access to its own local database 504, 506, 508 holding a data silo.
Distributed DensiCube
[0067] By allowing for distributed execution, the Distributed DensiCube algorithm provides a number of important benefits. First, privacy of the data assets is preserved in both the model generation and prediction modes of operation by keeping the data in its original location and limiting access to the specific data. Second, the cost of implementing complex ETL processes and of data warehousing in general is reduced by eliminating the costs of transmission to and storage in a central location. Third, these inventions increase performance by allowing parallel execution of the DensiCube algorithm (i.e., executing the predictive analytics algorithms on distributed computing platforms). In addition, the distributed algorithm gives Distributed DensiCube the capability to provide unsupervised learning (e.g., fraud detection from distributed data sources). Finally, it allows predictive analytics solutions to operate and react in real time on a low-level transactional streaming data representation without requiring data aggregation.
[0068] The Distributed DensiCube approach represents a paradigm shift away from the currently predominant data-centric approaches to predictive analytics, which transform, integrate, and push data from distributed silos to predictive analytics agents, and toward decision-centric (predictive analytics bot agent based) approaches, which push predictive analytics agents out to the data locations, where the agents collaborate to support decision-making in distributed data environments.
[0069] Essentially, the Distributed DensiCube algorithm runs the DensiCube algorithm on each server 503, 505, 507, analyzing the local data in its database 504, 506, 508. The best rule or best set of rules 405 from each server 503, 505, 507 is then combined into the best overall rule. In some embodiments, several servers could work together to derive a best rule that is then combined with the rule from another server.
[0070] Collaborating predictive analytics bot agents can facilitate numerous opportunities for enterprise data warehousing to provide faster, more predictive, more prescriptive, and time and cost saving decision-making solutions for their customers.
1.0 Distributed DensiCube Concept of Operation
[0071] The following sections describe the concept behind the Distributed DensiCube approach. As mentioned in the previous section, the Distributed DensiCube solution continues to use the same modeling algorithms as the current non-distributed predictive analytics solution, with modifications to the scoring algorithms to preserve the privacy of the data assets in their silos.
[0072] 1.1 Distributed Modeling
[0073] The Distributed DensiCube operates on distributed entities at different logical and/or physical locations.
[0074] The distributed entity represents a unified virtual feature vector describing an event (e.g., financial transaction, customer campaign information). Feature subsets 704, 705 of this representation are registered/linked by a common identifier (e.g., transaction ID, Enrolment Code, Invoice ID, etc.) 707. Thus, the distributed data 701 represents a virtual table 706 of feature subsets 704, 705 joined by their common identifier 707, as seen in the drawings.
[0076] As an example of the distributed DensiCube algorithm, consider a bank whose loan data 802 is augmented by data held at a credit agency 803 and at a registry of deeds 804, as seen in the drawings.
[0077] The credit agency database 803 contains three fields: the ID (SSN), the Credit Score, and the Total Debt. The registry of deeds database 804 also has three fields in this example: the ID (SSN), a home ownership field, and a home value field. In our example, there are a number of reasons that the data in the credit agency 803 needs to be kept separate from the registry data 804, and both of those datasets need to be kept separate from the bank data 802. As a result, the DensiCube algorithm is run three times, once on each of the databases 802, 803, 804. In another embodiment, two of the servers could be combined, with the algorithm running on one of the two servers. This embodiment is seen in the drawings.
[0078] As seen in the drawings, the Distributed DensiCube architecture is built from the following collaborating components:
[0079] Modelers 1003 on the servers 1001;
[0080] Feature managers 1004 on multiple data silos 1002; and
[0081] Predictors 1009 on the servers 1001.
[0082] All the above components collaborate to generate models and to use them for scoring while preserving the privacy of the data silos 1002. Three levels of privacy are possible in this set of inventions. The first level preserves the data in the silos, providing privacy only for the individual data records. A second embodiment also preserves the attributes of the data in the silos, preventing the modeler from knowing the attribute values; this embodiment may further hide the feature names by returning a pseudonym for each feature. In the third embodiment, the features themselves are kept hidden in the silos. For example, at the first level, the fact that the credit scores range between 575 and 829 is reported back to the modeler 1003, but the individual records are kept hidden. In the second embodiment, the modeler 1003 is told that credit scores are used, but the range is kept hidden on the data silo 1002. In the third embodiment, the credit score feature itself is kept hidden from the modeler 1003; the model itself is distributed on each data silo, and the core modeler 1003 has no knowledge of the rules used on each data silo 1002.
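For illustration, the three privacy levels might shape a silo's report to the modeler as in the following sketch; the function name, return shapes, and pseudonym scheme are hypothetical, not the patent's interface.

```python
def report_feature(name, values, privacy_level, pseudonym="feature_1"):
    """What a silo might reveal to the modeler at each privacy level."""
    if privacy_level == 1:   # records hidden; name and value range revealed
        return {"feature": name, "range": (min(values), max(values))}
    if privacy_level == 2:   # attribute use disclosed, possibly by pseudonym;
        return {"feature": pseudonym, "range": None}   # range stays in the silo
    return {}                # level 3: the feature never leaves the silo

# report_feature("credit_score", [575, 640, 829], 1)
# -> {'feature': 'credit_score', 'range': (575, 829)}
```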
[0083] The collaboration between distributed components results in a set of rules generated through a rule-based induction algorithm. The DensiCube induction algorithm, in an iterative fashion, determines the data partitions based on the feature rule's syntactic representation (e.g., if feature F > 20 and F < 25). It dichotomizes (splits) the data into partitions. Each partition is evaluated by computing statistical quality measures. Specifically, DensiCube uses an F-score measure to compute the predictive quality of a specific partition. In binary classification, the F-score is a measure of a test's accuracy and is defined as the weighted harmonic mean of the test's precision and recall. Precision (also called positive predictive value) is the fraction of relevant instances among the retrieved instances, while recall (also known as sensitivity) is the fraction of relevant instances that have been retrieved over the total number of relevant instances.
[0084] Specifically, the following steps are executed by Distributed DensiCube (a minimal sketch of one round follows the list):
[0085] 1) The modeler 1003 invokes the feature managers 1004, which subsequently start data partitioning based on the local set of features at each data silo 1002. This process is called specialization.
[0086] 2) Feature managers 1004 push their computed partitions (i.e., using the data identifiers as the partition identifiers) and the corresponding evaluation measures (e.g., F-scores) to the modelers 1003.
[0087] 3) Each feature model manager 1008 compares the evaluation measures of the submitted partitions and selects the top N best partitions (i.e., it establishes a global beam search over the top performing partitions and their combinations).
[0088] 4) Subsequently, the modeler 1003 proceeds to generate partition combinations. The first iteration of such combinations syntactically represents two-conditional rules (i.e., a partition is represented by a joint of the lower and upper bounds of two features). Once this process is completed, the identifiers of the two-conditional rules are sent to the feature managers 1004. Once received, the feature managers 1004 evaluate the new partitions identified by those identifiers by executing the next iteration of specialization.
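A hedged sketch of one round of this exchange, with the feature manager reduced to a lookup of locally computed partition F-scores; the class names, the max-merge of duplicate partition identifiers, and the beam width are illustrative assumptions.

```python
class FeatureManagerStub:
    """Stand-in for a silo's feature manager: partitions stay local; only
    partition identifiers and evaluation measures are pushed out."""
    def __init__(self, partition_scores):        # {partition_id: f_score}
        self.partition_scores = partition_scores

    def evaluate(self, partition_ids=None):
        if partition_ids is None:                # step 1: initial specialization
            return dict(self.partition_scores)
        return {pid: s for pid, s in self.partition_scores.items()
                if pid in partition_ids}         # later rounds: refine only

def beam_round(managers, beam_width=3, partition_ids=None):
    """Steps 2-4: gather (identifier, F-score) pairs and keep the top N."""
    merged = {}
    for m in managers:
        for pid, score in m.evaluate(partition_ids).items():
            merged[pid] = max(score, merged.get(pid, 0.0))
    return sorted(merged, key=merged.get, reverse=True)[:beam_width]
```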
[0089] A data manager 1012 is a logical construct which is comprised of a data orchestrator 1005 and one or more feature data managers 1006, which cooperate to manage data sets. Data sets can be used to create models and/or to make predictions using models. A data orchestrator 1005 is a component which provides services to maintain Data Sets, is identified by its host domain and port, and has a name which is not necessarily unique. A feature data manager 1006 is a component which provides services to maintain Feature Data Sets 1203, is identified by its host domain and port, and has a name which is not necessarily unique. A data set lives in a data orchestrator 1005, has a unique ID within the data orchestrator 1005, consists of a junction of Feature Data Sets 1203, joins Feature Data Sets 1203 on specified unique features, and is virtual tabular data, as seen in the drawings.
[0090] A model manager 1013 is a logical construct which is comprised of a model orchestrator 1007 and one or more feature model managers 1008, which cooperate to generate models.
[0091] A prediction manager 1014 is a logical construct which is comprised of a prediction orchestrator 1010 and one or more feature prediction managers 1011, which cooperate to create scores and statistics (a.k.a. predictions).
[0092] 1.2 Distributed Scoring
[0093] The distributed scoring process is accomplished in two steps. First, partial scores are calculated on each feature manager 1004 on each server. Then, complete scores are calculated from the partial scores.
[0094] The combined scores are the sum of the scores from each server, divided by the sum of the weights from each server, and multiplied by two:

\mathrm{score}_{combined} = \frac{2 \times (\mathrm{score}_A + \mathrm{score}_B)}{w_A + w_B}

[0095] In this formula, the scores for servers A and B are calculated as in the DensiCube scoring described above.
[0096] The weights are also determined for each location, as above.
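Under this reading of the formula, the combination is a one-liner; generalizing beyond two servers is an assumption of the sketch.

```python
def combined_score(scores, weights):
    """Twice the summed partial scores divided by the summed weights."""
    return 2.0 * sum(scores) / sum(weights)

# combined_score([0.42, 0.37], [0.9, 1.1]) -> 0.79
```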
[0097] With the combined score, we have a metric to show the validity of the selected model.
2.0 Initial Architectural Concept of Operation and Requirements
[0098] 2.1 Feature Manager 1004
[0099] At the initialization of the machine learning model generation process, each feature manager 1004 is set up on its local server 1002. Each feature manager 1004 must be uniquely named (e.g., within the subnet where it lives), and the port number where the feature manager 1004 can be reached needs to be defined. Access control needs to be configured, with a certificate for the feature manager 1004 installed and the public key for each modeler 1003 and feature prediction manager 1011 installed to allow access to this feature manager 1004. Each local feature manager 1004 needs to broadcast its name, host, port, and public key. In some embodiments, the feature manager 1004 also needs to listen to other broadcasts to verify its uniqueness.
[0100] Next, the data sources are defined, as seen in the drawings.
[0101] Each Data Source shall be described by a name for the data source and a plurality of columns, where each column has a name, a data type, and a uniqueness field. Data Sources can be used by feature model managers 1008 or feature prediction managers 1011 or both. Data Sources are probably defined by calls from a modeler 1003.
[0102] The next step involves defining the Data Set Templates. A Data Set Template is a specification of how to join Data Sources defined within a feature data manager 1006. Each Data Set Template must be uniquely identified by name within a feature data manager 1006. A Data Set Template is a definition of Columns without regard to the Rows in each Data Source. For example, a Data Set Template could be represented by a SQL select statement with columns and join conditions, but without a where clause to limit rows. Data Set Templates can be used by feature model managers 1008 or feature prediction managers 1011 or both. Data Set Templates are probably defined by calls from a feature model manager 1008.
[0103] Once the Data Set Templates are setup, the next step is to define the Data Sets. A Data Set is tabular data which is a subset of a data from the Data Sources defined within a feature data manager 1006. Each Data Set must be uniquely identified by name within a feature data manager 1006. A Data Set is defined by a Data Set Template to define the columns and a set of filters to define the rows. For example, the filter could be the where clause in a SQL statement. Data Sets can be used by modelers 1003 or feature prediction managers 1011 or both. Data Sets are probably defined by calls from a modeler 1003.
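Following the SQL analogy in the preceding paragraphs, a Data Set Template and a Data Set might look like the sketch below; the table and column names are illustrative, borrowed from the credit agency and registry of deeds example above.

```python
# A Data Set Template: columns and join conditions, but no row filter.
TEMPLATE = (
    "SELECT c.ssn, c.credit_score, c.total_debt, d.home_value "
    "FROM credit_agency c "
    "JOIN registry_of_deeds d ON d.ssn = c.ssn"
)

def data_set(template: str, row_filter: str) -> str:
    """A Data Set applies a row filter (a where clause) to a template."""
    return f"{template} WHERE {row_filter}"

print(data_set(TEMPLATE, "c.credit_score > 575"))
```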
[0104] 2.2 Modeler 1003
[0106] In the setup of the model orchestrator 1007, each modeler 1003 should be uniquely named, at least within the subnet where it lives. However, in some embodiments, the uniqueness may not be enforceable. Next, access control is configured by installing a certificate for the modeler 1003 and installing the public key for each feature manager 1004 containing pertinent data. The public key for each feature prediction manager 1011 to which this modeler 1003 can publish is also installed.
[0107] Once set up, the model orchestrator 1007 establishes a connection to each feature model manager 1008.
[0108] Then the Model Data Set Templates are defined. A Model Data Set Template is a conjunction of Data Set Templates from feature data managers 1006. Each Data Set Template must be uniquely named within the feature manager 1004. The Data Set Templates on the feature data managers 1006 are defined, as are the join conditions. A join condition is an equality expression between unique columns on two Data Sets, for example, <Feature Manager A>.<Data Set Template 1>.<Column a> == <Feature Manager B>.<Data Set Template 2>.<Column b>. Each data set participating in the model data set must be joined such that a single virtual tabular data set is defined.
[0109] After the templates are defined, the model data sets themselves are defined. A Model Data Set is a conjunction of Data Sets from feature data managers 1006. The Model Data Set is a row filter applied to a Model Data Set Template. Each Data Set must be uniquely named within a Model Data Set Template. Then the data sets on the feature data managers 1006 are defined. This filters the rows.
[0110] Next, the Modeling Parameters are defined. Modeling Parameters define how a Model is created on any Model Data Set which is derived from a Model Data Set Template. Each Modeling Parameters definition must be unique within a Model Data Set Template.
[0111] Then, a model is created and published. A model is created by applying Modeling Parameters to a Model Data Set. Each Model must be uniquely identified by name within a Model Data Set. A Model can be published to a feature prediction manager 1011. Publishing will persist the Model artifacts in the feature model managers 1008 and feature prediction managers 1011. The artifacts which will be persisted to the feature model manager 1008 and/or the feature prediction manager 1011 include the data set templates, the model data set templates, and the model itself.
[0112] 2.3 Prediction Orchestrator 1010
[0113] The prediction orchestrator 1010 setup begins with the configuration of the access control. This is done by installing a certificate for the feature prediction manager 1011 and installing the public key for each modeler 1003 allowed to access this prediction orchestrator 1010. The public key for each feature manager 1004 containing pertinent data is also installed. Each prediction orchestrator 1010 should be uniquely named, but in some embodiments this may not be enforced.
[0114] Next, a connection is established to each feature prediction manager 1011 and to a model orchestrator 1007. The model orchestrator 1007 will publish the Model Data Set Template and Model to the prediction orchestrator 1010.
[0115] The scoring data sets are then defined. A Scoring Data Set is a conjunction of Data Sets from the feature data managers 1006. It is a row filter applied to a Model Data Set Template. Each Data Set must be uniquely named within a Model Data Set Template. The data sets on the feature data managers 1006 are defined (this filters the rows).
[0116] Then the Scoring Parameters are defined. Scoring Parameters define how Scores are calculated on any Score Data Set which is derived from a Model Data Set Template. Each Scoring Parameters definition must be unique within a Model Data Set Template.
[0117] Finally, a Scoring Data Set is defined, and Partial Scores are calculated on each feature manager 1004 for the feature prediction manager 1011, as seen in the drawings.
[0118] Looking to the drawings, the distributed modeling process begins when the modeler 1003 prepares a list of data identifiers and the features of interest and sends them to each of the data silos 1002.
[0119] The feature managers 1004 on each of the data silos 1002 then initialize the site 1311, 1321, 1331. The data on each silo 1002 is then sliced 1312, 1322, 1332 by the feature data manager 1006, using the list of IDs and the features, into a data set of interest. The DensiCube algorithm 1313, 1323, 1333 is then run by the feature model manager 1008 on the data of interest, as seen in the drawings.
[0120] The rules, in some embodiments, are then returned to the prediction orchestrator 1010, where they are combined into an overall rule 1304, as seen in the drawings.
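Condensing the flow of the last two paragraphs into a hedged end-to-end sketch, consistent with claims 1 and 3; the silo stub, the constant standing in for the locally computed F-score, and the weighted merge are all illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class DataSiloStub:
    """Stand-in for a feature manager 1004; rows never leave the silo."""
    rows: Dict[str, dict]          # id -> feature row (silo private)
    local_f_score: float           # produced by the local DensiCube run

    def slice(self, ids: List[str]) -> Dict[str, dict]:
        # 1312/1322/1332: slice the local data into the set of interest.
        return {i: self.rows[i] for i in ids if i in self.rows}

    def learn_and_report(self, ids: List[str]) -> float:
        data = self.slice(ids)     # local rule learning (1313/1323/1333)
        return self.local_f_score if data else 0.0   # only the metric leaves

def distributed_quality(id_list, silos, weights):
    """Combine per-silo quality metrics into one overall metric (claim 3)."""
    metrics = [s.learn_and_report(id_list) for s in silos]
    return sum(w * m for w, m in zip(weights, metrics)) / sum(weights)
```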
[0121] The scoring algorithms are modified, as described above, to support privacy preservation in the data silos.
[0122] The foregoing devices and operations, including their implementation, will be familiar to, and understood by, those having ordinary skill in the art.
[0123] The above description of the embodiments, alternative embodiments, and specific examples is given by way of illustration and should not be viewed as limiting. Further, many changes and modifications within the scope of the present embodiments may be made without departing from the spirit thereof, and the present invention includes such changes and modifications.