METHODS FOR USING FEATURE VECTORS AND MACHINE LEARNING ALGORITHMS TO DETERMINE DISCRIMINANT FUNCTIONS OF MINIMUM RISK QUADRATIC CLASSIFICATION SYSTEMS
20200027027 · 2020-01-23
Inventors
CPC classification
G06F18/254
PHYSICS
G06F17/18
PHYSICS
A61B5/7264
HUMAN NECESSITIES
G06F18/2415
PHYSICS
G06F17/16
PHYSICS
International classification
G06F17/16
PHYSICS
Abstract
Methods are provided for determining discriminant functions of minimum risk quadratic classification systems, wherein a discriminant function is represented by a geometric locus of a principal eigenaxis of a quadratic decision boundary. A geometric locus of a principal eigenaxis is determined by solving a system of fundamental locus equations of binary classification, subject to geometric and statistical conditions for a minimum risk quadratic classification system in statistical equilibrium. Feature vectors and machine learning algorithms are used to determine discriminant functions and ensembles of discriminant functions of minimum risk quadratic classification systems, wherein a discriminant function of a minimum risk quadratic classification system exhibits the minimum probability of error for classifying given collections of feature vectors and unknown feature vectors related to the collections.
Claims
1. A computer-implemented method of using feature vectors and machine learning algorithms to determine a discriminant function of a minimum risk quadratic classification system that classifies said feature vectors into two classes and using said discriminant function of said minimum risk quadratic classification system to classify unknown feature vectors related to said feature vectors, said method comprising: receiving an N×d data set of feature vectors within a computer system, wherein N is a number of feature vectors, d is a number of vector components in each feature vector, and each one of said N feature vectors is labeled with information that identifies which of two classes each one of said N feature vectors belongs to, and wherein each said feature vector is defined by a d-dimensional vector of numerical features, wherein said numerical features are extracted from digital signals; receiving within said computer system unknown feature vectors related to said data set; determining a kernel matrix using said data set, said determination of said kernel matrix being performed by using processors of said computer system to calculate a matrix of all possible inner products of signed reproducing kernels of said N feature vectors, wherein a reproducing kernel of a feature vector replaces said feature vector with a curve that contains first and second degree vector components, and wherein each one of said reproducing kernels of said N feature vectors has a sign of +1 or −1 that identifies which of said two classes each one of said N feature vectors belongs to, and using said processors of said computer system to calculate a regularized kernel matrix from said kernel matrix; determining scale factors of a geometric locus of signed and scaled reproducing kernels of extreme points using said regularized kernel matrix, wherein said extreme points are located within overlapping regions or near tail regions of distributions of said N feature vectors, said determination of 
said scale factors being performed by using said processors of said computer system to determine a solution of a dual optimization problem, wherein said scale factors and said geometric locus satisfy a system of fundamental locus equations of binary classification, subject to geometric and statistical conditions for a minimum risk quadratic classification system in statistical equilibrium, and wherein said scale factors determine conditional densities for said extreme points and also determine critical minimum eigenenergies exhibited by scaled extreme vectors on said geometric locus, wherein said critical minimum eigenenergies determine conditional probabilities of said extreme points and also determine corresponding counter risks and risks of a minimum risk quadratic classification system, wherein said counter risks are associated with right decisions and said risks are associated with wrong decisions of said minimum risk quadratic classification system, and wherein said geometric locus determines the principal eigenaxis of the decision boundary of said minimum risk quadratic classification system, wherein said principal eigenaxis exhibits symmetrical dimensions and density, wherein said conditional probabilities and said critical minimum eigenenergies exhibited by said minimum risk quadratic classification system are symmetrically concentrated within said principal eigenaxis, and wherein counteracting and opposing components of said critical minimum eigenenergies exhibited by said scaled extreme vectors on said geometric locus together with said corresponding counter risks and risks exhibited by said minimum risk quadratic classification system are symmetrically balanced with each other about the geometric center of said principal eigenaxis, wherein the center of total allowed eigenenergy and minimum expected risk of said minimum risk quadratic classification system is located at the geometric center of said geometric locus, and wherein said geometric locus 
determines a primal representation of a dual locus of likelihood components and principal eigenaxis components, wherein said likelihood components and said principal eigenaxis components are symmetrically distributed over either side of the axis of said dual locus, wherein a statistical fulcrum is placed directly under the center of said dual locus, and wherein said likelihood components of said dual locus determine conditional likelihoods for said extreme points, and wherein said principal eigenaxis components of said dual locus determine an intrinsic coordinate system of geometric loci of a quadratic decision boundary and corresponding decision borders that jointly partition the decision space of said minimum risk quadratic classification system into symmetrical decision regions; determining said extreme vectors on said geometric locus using the vector of said scale factors, said determination of said extreme vectors being performed by using said processors of said computer system to identify said scale factors that exceed zero by a small threshold, and using said processors of said computer system to determine a sign vector of signs associated with said extreme vectors using said data set, and compute the average sign using said sign vector; determining a locus of aggregate risk for said minimum risk quadratic classification system using said reproducing kernels of said extreme vectors and said sign vector, said determination of said locus of aggregate risk being performed by using said processors of said computer system to calculate a kernel matrix of all possible inner products of said reproducing kernels of said extreme vectors and multiply said kernel matrix by said sign vector; determining said geometric locus, said determination of said geometric locus being performed by using said processors of said computer system to calculate a matrix of inner products between said signed reproducing kernels of said N feature vectors and said reproducing kernels of said 
unknown feature vectors, and multiply said matrix by said vector of scale factors; determining the discriminant function of said minimum risk quadratic classification system, using said locus of aggregate risk and said average sign and said geometric locus, said determination of said discriminant function of said minimum risk quadratic classification system being performed by using said processors of said computer system to subtract said locus of aggregate risk from the sum of said geometric locus and said average sign, wherein said discriminant function of said minimum risk quadratic classification system satisfies said system of fundamental locus equations of binary classification, and wherein said discriminant function of said minimum risk quadratic classification system determines likely locations of said N feature vectors and also determines said geometric loci of said quadratic decision boundary and said corresponding decision borders that jointly partition said extreme points into said symmetrical decision regions, wherein said symmetrical decision regions span said overlapping regions or said tail regions of said distributions of said N feature vectors, and wherein said discriminant function of said minimum risk quadratic classification system satisfies said quadratic decision boundary in terms of a critical minimum eigenenergy and said minimum expected risk, wherein said counteracting and opposing components of said critical minimum eigenenergies exhibited by said scaled extreme vectors on said geometric locus associated with said corresponding counter risks and risks exhibited by said minimum risk quadratic classification system are symmetrically distributed over said axis of said dual locus, on equal sides of said statistical fulcrum located at said geometric center of said dual locus, wherein said counteracting and opposing components of said critical minimum eigenenergies together with said corresponding counter risks and risks exhibited by said minimum risk 
quadratic classification system are symmetrically balanced with each other about said geometric center of said dual locus, and wherein said statistical fulcrum is located at said center of said total allowed eigenenergy and said minimum expected risk of said minimum risk quadratic classification system, wherein said minimum risk quadratic classification system satisfies a state of statistical equilibrium, wherein said total allowed eigenenergy and said expected risk of said minimum risk quadratic classification system are minimized, and wherein said minimum risk quadratic classification system exhibits the minimum probability of error for classifying said N feature vectors that belong to said two classes and said unknown feature vectors related to said data set; determining which of said two classes said unknown feature vectors belong to using said discriminant function of said minimum risk quadratic classification system, said determination of said classes of said unknown feature vectors being performed by using said processors of said computer system to apply said discriminant function of said minimum risk quadratic classification system to said unknown feature vectors, wherein said discriminant function determines likely locations of said unknown feature vectors and identifies said decision regions related to said two classes that said unknown feature vectors are located within, wherein said discriminant function recognizes said classes of said unknown feature vectors, and wherein said minimum risk quadratic classification system decides which of said two classes said unknown feature vectors belong to and thereby classifies said unknown feature vectors.
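The training and classification steps recited in claim 1 can be sketched in code. The following is a minimal illustrative sketch, not the patent's implementation: the choice of a Gaussian reproducing kernel, the projected-gradient solver for the dual optimization problem, the regularization constant, and the reduction of the locus-of-aggregate-risk vector to a scalar bias by averaging are all assumptions made for this example, and every function and parameter name is hypothetical.

```python
import numpy as np

def gaussian_kernel(A, B, gamma=0.1):
    # Pairwise Gaussian reproducing kernel matrix: K[i, j] = exp(-gamma * ||A_i - B_j||^2).
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def fit_discriminant(X, y, gamma=0.1, reg=1e-3, lr=0.1, steps=3000, tol=1e-4):
    # X: (N, d) labeled feature vectors; y: (N,) class labels in {-1.0, +1.0}.
    K = gaussian_kernel(X, X, gamma)             # kernel matrix of the N feature vectors
    Q = (y[:, None] * y[None, :]) * K            # signed kernel matrix
    Qr = Q + reg * np.eye(len(y))                # regularized kernel matrix (assumed form)
    alpha = np.zeros(len(y))
    for _ in range(steps):                       # projected gradient ascent on the dual
        alpha = np.maximum(alpha + lr * (1.0 - Qr @ alpha), 0.0)  # keep alpha >= 0
    ext = alpha > tol                            # extreme vectors: scale factors above a small threshold
    Xe, ye, ae = X[ext], y[ext], alpha[ext]
    avg_sign = ye.mean()                         # average sign over the extreme vectors
    # Locus of aggregate risk: kernel matrix of the extreme vectors multiplied by the
    # sign vector, reduced to a scalar here by averaging (an interpretive choice).
    risk = (gaussian_kernel(Xe, Xe, gamma) @ ye).mean()
    return Xe, ye * ae, avg_sign - risk, gamma

def discriminant(model, X_new):
    # Geometric locus evaluated at the unknown points, plus average sign, minus aggregate risk.
    Xe, coef, bias, gamma = model
    return gaussian_kernel(X_new, Xe, gamma) @ coef + bias
```

An unknown feature vector is then assigned to a class by reading the sign of the discriminant value at that vector.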
2. The method of claim 1, wherein the reproducing kernel is a Gaussian reproducing kernel: k_x = exp(−γ‖s − x‖^2), with 0.01 ≤ γ ≤ 0.1.
3. The method of claim 1, wherein the reproducing kernel is a second-order polynomial reproducing kernel: k_x = (s^T x + 1)^2.
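The two reproducing kernels named in claims 2 and 3 can be written down directly. A minimal pure-Python sketch follows; the parameter name `gamma` for the Gaussian width is an assumption, with its default taken from the claimed 0.01–0.1 range.

```python
import math

def gaussian_kernel(s, x, gamma=0.01):
    # Gaussian reproducing kernel per claim 2: k(s, x) = exp(-gamma * ||s - x||^2),
    # with 0.01 <= gamma <= 0.1.
    return math.exp(-gamma * sum((si - xi) ** 2 for si, xi in zip(s, x)))

def polynomial_kernel(s, x):
    # Second-order polynomial reproducing kernel per claim 3: k(s, x) = (s^T x + 1)^2.
    return (sum(si * xi for si, xi in zip(s, x)) + 1.0) ** 2
```

Both kernels replace a feature vector with a curve containing first- and second-degree vector components, which is what makes the resulting decision boundary quadratic.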
4. A computer-implemented method of using feature vectors and machine learning algorithms to determine a fused discriminant function of a fused minimum risk quadratic classification system that classifies two types of said feature vectors into two classes, wherein said types of said feature vectors have different numbers of vector components, and using said fused discriminant function of said fused minimum risk quadratic classification system to classify unknown feature vectors related to said two types of said feature vectors, said method comprising: receiving an N×d data set of feature vectors within a computer system, wherein N is a number of feature vectors, d is a number of vector components in each feature vector, and each one of said N feature vectors is labeled with information that identifies which of two classes each one of said N feature vectors belongs to, and wherein each said feature vector is defined by a d-dimensional vector of numerical features, wherein said numerical features are extracted from digital signals; receiving an N×p different data set of different feature vectors within said computer system, wherein N is a number of different feature vectors, p is a number of vector components in each different feature vector, and each one of said N different feature vectors is labeled with information that identifies which of said two classes each one of said N different feature vectors belongs to, and wherein each said different feature vector is defined by a p-dimensional vector of numerical features, wherein said numerical features are extracted from digital signals; receiving within said computer system unknown feature vectors related to said data set and unknown different feature vectors related to said different data set; determining a kernel matrix using said data set, said determination of said kernel matrix being performed by using processors of said computer system to calculate a matrix of all possible inner products of signed reproducing 
kernels of said N feature vectors, wherein a reproducing kernel of a feature vector replaces said feature vector with a curve that contains first and second degree vector components, and wherein each one of said reproducing kernels of said N feature vectors has a sign of +1 or −1 that identifies which of said two classes each one of said N feature vectors belongs to, and using said processors of said computer system to calculate a regularized kernel matrix from said kernel matrix; determining a different kernel matrix using said different data set, said determination of said different kernel matrix being performed by using processors of said computer system to calculate a matrix of all possible inner products of signed reproducing kernels of said N different feature vectors, wherein a reproducing kernel of a different feature vector replaces said different feature vector with a curve that contains first and second degree vector components, and wherein each one of said reproducing kernels of said N different feature vectors has a sign of +1 or −1 that identifies which of said two classes each one of said N different feature vectors belongs to, and using said processors of said computer system to calculate a regularized different kernel matrix from said different kernel matrix; determining a discriminant function of a minimum risk quadratic classification system using said regularized kernel matrix and said data set, said determination of said discriminant function of said minimum risk quadratic classification system comprising the steps of: determining scale factors of a geometric locus of signed and scaled reproducing kernels of extreme points using said regularized kernel matrix, wherein said extreme points are located within overlapping regions or near tail regions of distributions of said N feature vectors, said determination of said scale factors being performed by using said processors of said computer system to determine a solution of a dual optimization 
problem, wherein said scale factors and said geometric locus satisfy a system of fundamental locus equations of binary classification, subject to geometric and statistical conditions for a minimum risk quadratic classification system in statistical equilibrium, and wherein said scale factors determine conditional densities for said extreme points and also determine critical minimum eigenenergies exhibited by scaled extreme vectors on said geometric locus, wherein said critical minimum eigenenergies determine conditional probabilities of said extreme points and also determine corresponding counter risks and risks of a minimum risk quadratic classification system, wherein said counter risks are associated with right decisions and said risks are associated with wrong decisions of said minimum risk quadratic classification system, and wherein said geometric locus determines the principal eigenaxis of the decision boundary of said minimum risk quadratic classification system, wherein said principal eigenaxis exhibits symmetrical dimensions and density, wherein said conditional probabilities and said critical minimum eigenenergies exhibited by said minimum risk quadratic classification system are symmetrically concentrated within said principal eigenaxis, and wherein counteracting and opposing components of said critical minimum eigenenergies exhibited by said scaled extreme vectors on said geometric locus together with said corresponding counter risks and risks exhibited by said minimum risk quadratic classification system are symmetrically balanced with each other about the geometric center of said principal eigenaxis, wherein the center of total allowed eigenenergy and minimum expected risk of said minimum risk quadratic classification system is located at the geometric center of said geometric locus, and wherein said geometric locus determines a primal representation of a dual locus of likelihood components and principal eigenaxis components, wherein said likelihood 
components and said principal eigenaxis components are symmetrically distributed over either side of the axis of said dual locus, wherein a statistical fulcrum is placed directly under the center of said dual locus, and wherein said likelihood components of said dual locus determine conditional likelihoods for said extreme points, and wherein said principal eigenaxis components of said dual locus determine an intrinsic coordinate system of geometric loci of a quadratic decision boundary and corresponding decision borders that jointly partition the decision space of said minimum risk quadratic classification system into symmetrical decision regions; determining said extreme vectors on said geometric locus using the vector of said scale factors, said determination of said extreme vectors being performed by using said processors of said computer system to identify said scale factors that exceed zero by a small threshold, and using said processors of said computer system to determine a sign vector of signs associated with said extreme vectors using said data set, and compute the average sign using said sign vector; determining a locus of aggregate risk for said minimum risk quadratic classification system using said reproducing kernels of said extreme vectors and said sign vector, said determination of said locus of aggregate risk being performed by using said processors of said computer system to calculate a kernel matrix of all possible inner products of said reproducing kernels of said extreme vectors and multiply said kernel matrix by said sign vector; determining said geometric locus, said determination of said geometric locus being performed by using said processors of said computer system to calculate a matrix of inner products between said signed reproducing kernels of said N feature vectors and said reproducing kernels of said unknown feature vectors, and multiply said matrix by said vector of scale factors; determining the discriminant function of said minimum 
risk quadratic classification system, using said locus of aggregate risk and said average sign and said geometric locus, said determination of said discriminant function of said minimum risk quadratic classification system being performed by using said processors of said computer system to subtract said locus of aggregate risk from the sum of said geometric locus and said average sign, wherein said discriminant function of said minimum risk quadratic classification system satisfies said system of fundamental locus equations of binary classification, and wherein said discriminant function of said minimum risk quadratic classification system determines likely locations of said N feature vectors and also determines said geometric loci of said quadratic decision boundary and said corresponding decision borders that jointly partition said extreme points into said symmetrical decision regions, wherein said symmetrical decision regions span said overlapping regions or said tail regions of said distributions of said N feature vectors, and wherein said discriminant function of said minimum risk quadratic classification system satisfies said quadratic decision boundary in terms of a critical minimum eigenenergy and said minimum expected risk, wherein said counteracting and opposing components of said critical minimum eigenenergies exhibited by said scaled extreme vectors on said geometric locus associated with said corresponding counter risks and risks exhibited by said minimum risk quadratic classification system are symmetrically distributed over said axis of said dual locus, on equal sides of said statistical fulcrum located at said geometric center of said dual locus, wherein said counteracting and opposing components of said critical minimum eigenenergies together with said corresponding counter risks and risks exhibited by said minimum risk quadratic classification system are symmetrically balanced with each other about said geometric center of said dual locus, and wherein said statistical fulcrum is 
located at said center of said total allowed eigenenergy and said minimum expected risk of said minimum risk quadratic classification system, wherein said minimum risk quadratic classification system satisfies a state of statistical equilibrium, wherein said total allowed eigenenergy and said expected risk of said minimum risk quadratic classification system are minimized, and wherein said minimum risk quadratic classification system exhibits the minimum probability of error for classifying said N feature vectors that belong to said two classes and said unknown feature vectors related to said data set; determining a different discriminant function of a different minimum risk quadratic classification system using said regularized different kernel matrix and said different data set, said determination of said different discriminant function of said different minimum risk quadratic classification system being performed by using said processors of said computer system to perform said steps of determining said discriminant function of said minimum risk quadratic classification system, wherein said different minimum risk quadratic classification system exhibits the minimum probability of error for classifying said N different feature vectors that belong to said two classes and said unknown different feature vectors related to said different data set; determining a fused discriminant function of a fused minimum risk quadratic classification system using said discriminant function of said minimum risk quadratic classification system and said different discriminant function of said different minimum risk quadratic classification system, said determination of said fused discriminant function of said fused minimum risk quadratic classification system being performed by using said processors of said computer system to sum said discriminant function of said minimum risk quadratic classification system and said different discriminant function of said different minimum risk 
quadratic classification system; and determining which of said two classes said unknown feature vectors and said unknown different feature vectors belong to using said fused discriminant function of said fused minimum risk quadratic classification system, said determination of said classes of said unknown feature vectors and said unknown different feature vectors being performed by using said processors of said computer system to apply said fused discriminant function of said fused minimum risk quadratic classification system to said unknown feature vectors and said unknown different feature vectors, wherein said fused discriminant function determines likely locations of said unknown feature vectors and said unknown different feature vectors and identifies said decision regions related to said two classes that said unknown feature vectors and said unknown different feature vectors are located within, wherein said fused discriminant function recognizes said classes of said unknown feature vectors and said unknown different feature vectors, and wherein said fused minimum risk quadratic classification system decides which of said two classes said unknown feature vectors and said unknown different feature vectors belong to and thereby classifies said unknown feature vectors and said unknown different feature vectors.
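The fusion step of claim 4 is a pointwise sum of the two discriminant functions, one trained on the d-dimensional feature vectors and one on the p-dimensional different feature vectors. A minimal sketch, where `D1` and `D2` stand for the two trained discriminant functions, however obtained (all names here are illustrative):

```python
def fused_discriminant(D1, D2):
    # Claim 4's fused discriminant: the sum of the discriminant function on the
    # d-dimensional feature vectors (D1) and the different discriminant function
    # on the p-dimensional different feature vectors (D2).
    def D(x_d, x_p):
        return D1(x_d) + D2(x_p)
    return D
```

The fused minimum risk quadratic classification system then decides between the two classes by the sign of the fused value, exactly as in the single-system case.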
5. The method of claim 4, wherein the reproducing kernel is a Gaussian reproducing kernel: k_x = exp(−γ‖s − x‖^2), with 0.01 ≤ γ ≤ 0.1.
6. The method of claim 4, wherein the reproducing kernel is a second-order polynomial reproducing kernel: k_x = (s^T x + 1)^2.
7. A computer-implemented method of using feature vectors and machine learning algorithms to determine a discriminant function of an M-class minimum risk quadratic classification system that classifies said feature vectors into M classes and using said discriminant function of said M-class minimum risk quadratic classification system to classify unknown feature vectors related to said feature vectors, said method comprising: receiving M N×d data sets of feature vectors within a computer system, wherein M is a number of classes, N is a number of feature vectors in each one of said M data sets, d is a number of vector components in each feature vector, and each one of said N feature vectors in each one of said M data sets belongs to the same class and is labeled with information that identifies said class, and wherein each said feature vector is defined by a d-dimensional vector of numerical features, wherein said numerical features are extracted from digital signals; receiving within said computer system unknown feature vectors related to said M data sets; determining M ensembles of M−1 discriminant functions of M−1 minimum risk quadratic classification systems using said M data sets, wherein the determination of each one of said M ensembles comprises the steps of: determining M−1 kernel matrices for a class of feature vectors using said M data sets, said determination of said M−1 kernel matrices being performed by using processors of said computer system to calculate M−1 matrices, wherein each matrix contains all possible inner products of signed reproducing kernels of feature vectors that belong to said class and one of the other M−1 classes, wherein a reproducing kernel of a feature vector replaces said feature vector with a curve that contains first and second degree vector components, and wherein said N feature vectors that belong to said class have the sign +1, and said N feature vectors that belong to said other class have the sign −1, and wherein said M−1 matrices 
account for all of the other said M−1 classes, and calculating M−1 regularized kernel matrices from said M−1 kernel matrices; determining M−1 discriminant functions of M−1 minimum risk quadratic classification systems using said M−1 regularized kernel matrices, wherein the determination of each one of said M−1 discriminant functions of M−1 minimum risk quadratic classification systems further comprises the steps of: determining scale factors of a geometric locus of signed and scaled reproducing kernels of extreme points using one of said regularized kernel matrices, wherein said extreme points are located within overlapping regions or near tail regions of distributions of feature vectors that belong to said class and one of the other said M−1 classes, said determination of said scale factors being performed by using said processors of said computer system to determine a solution of a dual optimization problem, wherein said scale factors and said geometric locus satisfy a system of fundamental locus equations of binary classification, subject to geometric and statistical conditions for a minimum risk quadratic classification system in statistical equilibrium, and wherein said scale factors determine conditional densities for said extreme points and also determine critical minimum eigenenergies exhibited by scaled extreme vectors on said geometric locus, wherein said critical minimum eigenenergies determine conditional probabilities of said extreme points and also determine corresponding counter risks and risks of a minimum risk quadratic classification system, wherein said counter risks are associated with right decisions and said risks are associated with wrong decisions of said minimum risk quadratic classification system, and wherein said geometric locus determines the principal eigenaxis of the decision boundary of said minimum risk quadratic classification system, wherein said principal eigenaxis exhibits symmetrical dimensions and density, wherein said conditional 
probabilities and said critical minimum eigenenergies exhibited by said minimum risk quadratic classification system are symmetrically concentrated within said principal eigenaxis, and wherein counteracting and opposing components of said critical minimum eigenenergies exhibited by said scaled extreme vectors on said geometric locus together with said corresponding counter risks and risks exhibited by said minimum risk quadratic classification system are symmetrically balanced with each other about the geometric center of said principal eigenaxis, wherein the center of total allowed eigenenergy and minimum expected risk of said minimum risk quadratic classification system is located at the geometric center of said geometric locus, and wherein said geometric locus determines a primal representation of a dual locus of likelihood components and principal eigenaxis components, wherein said likelihood components and said principal eigenaxis components are symmetrically distributed over either side of the axis of said dual locus, wherein a statistical fulcrum is placed directly under the center of said dual locus, and wherein said likelihood components of said dual locus determine conditional likelihoods for said extreme points, and wherein said principal eigenaxis components of said dual locus determine an intrinsic coordinate system of geometric loci of a quadratic decision boundary and corresponding decision borders that jointly partition the decision space of said minimum risk quadratic classification system into symmetrical decision regions; determining said extreme vectors on said geometric locus using the vector of said scale factors, said determination of said extreme vectors being performed by using said processors of said computer system to identify said scale factors that exceed zero by a small threshold, and using said processors of said computer system to determine a sign vector of signs associated with said extreme vectors using said data set, and compute 
the average sign using said sign vector; determining a locus of aggregate risk for said minimum risk quadratic classification system using said reproducing kernels of said extreme vectors and said sign vector, said determination of said locus of aggregate risk being performed by using said processors of said computer system to calculate a kernel matrix of all possible inner products of said reproducing kernels of said extreme vectors and multiply said kernel matrix by said sign vector; determining said geometric locus, said determination of said geometric locus being performed by using said processors of said computer system to calculate a matrix of inner products between said signed reproducing kernels of said feature vectors that belong to said class and said other class and said reproducing kernels of said unknown feature vectors, and multiply said matrix by said vector of scale factors; determining the discriminant function of said minimum risk quadratic classification system, using said locus of aggregate risk and said average sign and said geometric locus, said determination of said discriminant function of said minimum risk quadratic classification system being performed by using said processors of said computer system to subtract said locus of aggregate risk from the sum of said geometric locus and said average sign, wherein said discriminant function of said minimum risk quadratic classification system satisfies said system of fundamental locus equations of binary classification, and wherein said discriminant function of said minimum risk quadratic classification system determines likely locations of said N feature vectors from said class and said N feature vectors from said other class and also determines said geometric loci of said quadratic decision boundary and said corresponding decision borders that jointly partition said extreme points into said symmetrical decision regions, wherein said symmetrical decision regions span said overlapping regions or said tail regions of said 
distributions of said N feature vectors that belong to said class and said N feature vectors that belong to said other class, and wherein said discriminant function of said minimum risk quadratic classification system satisfies said quadratic decision boundary in terms of a critical minimum eigenenergy and said minimum expected risk, wherein said counteracting and opposing components of said critical minimum eigenenergies exhibited by said scaled extreme vectors on said geometric locus associated with said corresponding counter risks and risks exhibited by said minimum risk quadratic classification system are symmetrically distributed over said axis of said dual locus, on equal sides of said statistical fulcrum located at said geometric center of said dual locus, wherein said counteracting and opposing components of said critical minimum eigenenergies together with said corresponding counter risks and risks exhibited by said minimum risk quadratic classification system are symmetrically balanced with each other about said geometric center of said dual locus, and wherein said statistical fulcrum is located at said center of said total allowed eigenenergy and said minimum expected risk of said minimum risk quadratic classification system, wherein said minimum risk quadratic classification system satisfies a state of statistical equilibrium, wherein said total allowed eigenenergy and said expected risk of said minimum risk quadratic classification system are minimized, and wherein said minimum risk quadratic classification system exhibits the minimum probability of error for classifying said N feature vectors that belong to said class and said N feature vectors that belong to said other class and said unknown feature vectors related to said data set and said other data set; determining a discriminant function of an M-class minimum risk quadratic classification system using said M ensembles of M−1 discriminant functions of M−1 minimum risk quadratic classification systems, said
determination of said discriminant function of said M-class minimum risk quadratic classification system being performed by using said processors of said computer system to sum said M ensembles of M−1 discriminant functions of M−1 minimum risk quadratic classification systems; determining which of said M classes said unknown feature vectors belong to using said discriminant function of said M-class minimum risk quadratic classification system, said determination of said classes of said unknown feature vectors being performed by using said processors of said computer system to apply said discriminant function of said M-class minimum risk quadratic classification system to said unknown feature vectors, wherein said discriminant function determines likely locations of said unknown feature vectors and identifies said decision regions related to said M classes that said unknown feature vectors are located within, wherein said discriminant function recognizes said classes of said unknown feature vectors, and wherein said M-class minimum risk quadratic classification system decides which of said M classes said unknown feature vectors belong to and thereby classifies said unknown feature vectors.
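Stripped of the claim language, the two-class procedure above resembles a kernel classifier: build a signed kernel (Gram) matrix, regularize it, solve for scale factors, keep the extreme vectors whose scale factors exceed a small threshold, and assemble a discriminant from their signed, scaled kernels. The following is a minimal sketch under stated assumptions: the dual optimization is approximated by a regularized linear solve, and the claim's "average sign minus locus of average risk" bias is approximated by the usual centering term over the extreme vectors; neither stand-in is the patent's exact construction.

```python
import numpy as np

def poly_kernel(a, b):
    # Second-order polynomial reproducing kernel: k(a, b) = (a^T b + 1)^2
    return (np.dot(a, b) + 1.0) ** 2

def fit_discriminant(X, y, reg=0.01, tol=1e-6):
    """Hedged sketch of the claimed pipeline.
    X: (N, d) feature vectors; y: length-N array of +/-1 class labels."""
    X, y = np.asarray(X, float), np.asarray(y, float)
    N = len(y)
    # Kernel matrix of all possible inner products of signed reproducing kernels.
    K = np.array([[poly_kernel(X[i], X[j]) for j in range(N)] for i in range(N)])
    signed_K = np.outer(y, y) * K
    # Regularized kernel matrix.
    K_reg = signed_K + reg * np.eye(N)
    # Scale factors: a linear solve standing in for the dual optimization.
    psi = np.linalg.solve(K_reg, np.ones(N))
    # Extreme vectors: scale factors that exceed zero by a small threshold.
    ext = np.where(psi > tol)[0]

    def locus(s):
        # Geometric locus: signed, scaled kernels of the extreme vectors.
        return sum(y[i] * psi[i] * poly_kernel(X[i], s) for i in ext)

    # Bias: centering over the extreme vectors (an assumed stand-in for the
    # claim's average sign minus the locus of average risk).
    bias = np.mean([y[i] - locus(X[i]) for i in ext])
    return lambda s: locus(s) + bias

# Tiny separable example: two classes on opposite sides of the origin.
X = [[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -3.0]]
y = [1, 1, -1, -1]
f = fit_discriminant(X, y)
preds = [1 if f(x) > 0 else -1 for x in X]
```

On this toy set the sketch recovers the labels; the sign of the discriminant plays the role of the claimed quadratic decision boundary.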
8. The method of claim 7, wherein the reproducing kernel is a Gaussian reproducing kernel: k_x(s) = exp(−γ‖x − s‖²), where 0.01 ≤ γ ≤ 0.1.
9. The method of claim 7, wherein the reproducing kernel is a second-order polynomial reproducing kernel: k_x(s) = (s^T x + 1)^2.
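Claims 8 and 9 (and the parallel claims 11–12 and 14–15) name the two reproducing kernels. Reading the garbled notation as the standard Gaussian and second-order polynomial forms, a minimal sketch follows; the γ range 0.01–0.1 is taken from the claim, while the exact normalization is an assumption.

```python
import numpy as np

def gaussian_kernel(x, s, gamma=0.05):
    # Gaussian reproducing kernel, k_x(s) = exp(-gamma * ||x - s||^2),
    # with gamma assumed to lie in the claimed range 0.01 <= gamma <= 0.1.
    x, s = np.asarray(x, float), np.asarray(s, float)
    return float(np.exp(-gamma * np.sum((x - s) ** 2)))

def polynomial_kernel(x, s):
    # Second-order polynomial reproducing kernel, k_x(s) = (s^T x + 1)^2,
    # which supplies the first- and second-degree vector components the
    # claims describe.
    return float((np.dot(s, x) + 1.0) ** 2)
```

Both kernels replace a feature vector with a curve containing first- and second-degree components, which is what makes the resulting decision boundary quadratic.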
10. A computer-implemented method of using feature vectors and machine learning algorithms to determine a fused discriminant function of a fused M-class minimum risk quadratic classification system that classifies two types of said feature vectors into M classes, wherein said types of said feature vectors have different numbers of vector components, and using said fused discriminant function of said fused M-class minimum risk quadratic classification system to classify unknown feature vectors related to said two types of said feature vectors, said method comprising: receiving M N×d data sets of feature vectors within a computer system, wherein M is a number of classes, N is a number of feature vectors in each one of said M data sets, d is a number of vector components in each feature vector, and each one of said N feature vectors in each one of said M data sets belongs to the same class and is labeled with information that identifies said class, and wherein each said feature vector is defined by a d-dimensional vector of numerical features, wherein said numerical features are extracted from digital signals; receiving M N×p different data sets of different feature vectors within said computer system, wherein M is said number of said classes, N is a number of different feature vectors in each one of said M different data sets, p is a number of vector components in each different feature vector, and each one of said N different feature vectors in each one of said M different data sets belongs to the same class and is labeled with information that identifies said class, and wherein each said different feature vector is defined by a p-dimensional vector of numerical features, wherein said numerical features are extracted from digital signals; receiving within said computer system unknown feature vectors related to said M data sets and unknown different feature vectors related to said M different data sets; determining M ensembles of M−1 discriminant functions of M−1 minimum
risk quadratic classification systems using said M data sets, wherein the determination of each one of said M ensembles comprises the steps of: determining M−1 kernel matrices for a class of feature vectors using said M data sets, said determination of said M−1 kernel matrices being performed by using processors of said computer system to calculate M−1 matrices, wherein each matrix contains all possible inner products of signed reproducing kernels of feature vectors that belong to said class and one of the other M−1 classes, wherein a reproducing kernel of a feature vector replaces said feature vector with a curve that contains first and second degree vector components, and wherein said N feature vectors that belong to said class have the sign +1, and said N feature vectors that belong to said other class have the sign −1, and said M−1 matrices account for all of the other said M−1 classes, and calculating M−1 regularized kernel matrices from said M−1 kernel matrices; determining M−1 discriminant functions of M−1 minimum risk quadratic classification systems using said M−1 regularized kernel matrices, wherein the determination of each one of said M−1 discriminant functions of M−1 minimum risk quadratic classification systems further comprises the steps of: determining scale factors of a geometric locus of signed and scaled reproducing kernels of extreme points using one of said regularized kernel matrices, wherein said extreme points are located within overlapping regions or near tail regions of distributions of feature vectors that belong to said class and one of the other said M−1 classes, said determination of said scale factors being performed by using said processors of said computer system to determine a solution of a dual optimization problem, wherein said scale factors and said geometric locus satisfy a system of fundamental locus equations of binary classification, subject to geometric and statistical conditions for a minimum risk quadratic classification system in
statistical equilibrium, and wherein said scale factors determine conditional densities for said extreme points and also determine critical minimum eigenenergies exhibited by scaled extreme vectors on said geometric locus, wherein said critical minimum eigenenergies determine conditional probabilities of said extreme points and also determine corresponding counter risks and risks of a minimum risk quadratic classification system, wherein said counter risks are associated with right decisions and said risks are associated with wrong decisions of said minimum risk quadratic classification system, and wherein said geometric locus determines the principal eigenaxis of the decision boundary of said minimum risk quadratic classification system, wherein said principal eigenaxis exhibits symmetrical dimensions and density, wherein said conditional probabilities and said critical minimum eigenenergies exhibited by said minimum risk quadratic classification system are symmetrically concentrated within said principal eigenaxis, and wherein counteracting and opposing components of said critical minimum eigenenergies exhibited by said scaled extreme vectors on said geometric locus together with said corresponding counter risks and risks exhibited by said minimum risk quadratic classification system are symmetrically balanced with each other about the geometric center of said principal eigenaxis, wherein the center of total allowed eigenenergy and minimum expected risk of said minimum risk quadratic classification system is located at the geometric center of said geometric locus, and wherein said geometric locus determines a primal representation of a dual locus of likelihood components and principal eigenaxis components, wherein said likelihood components and said principal eigenaxis components are symmetrically distributed over either side of the axis of said dual locus, wherein a statistical fulcrum is placed directly under the center of said dual locus, and wherein said 
likelihood components of said dual locus determine conditional likelihoods for said extreme points, and wherein said principal eigenaxis components of said dual locus determine an intrinsic coordinate system of geometric loci of a quadratic decision boundary and corresponding decision borders that jointly partition the decision space of said minimum risk quadratic classification system into symmetrical decision regions; determining said extreme vectors on said geometric locus using the vector of said scale factors, said determination of said extreme vectors being performed by using said processors of said computer system to identify said scale factors that exceed zero by a small threshold, and using said processors of said computer system to determine a sign vector of signs associated with said extreme vectors using said data set, and compute the average sign using said sign vector; determining a locus of average risk for said quadratic classification system using said reproducing kernels of said extreme vectors and said sign vector, said determination of said locus of average risk being performed by using said processors of said computer system to calculate a kernel matrix of all possible inner products of said reproducing kernels of said extreme vectors and multiply said kernel matrix by said sign vector; determining said geometric locus, said determination of said geometric locus being performed by using said processors of said computer system to calculate a matrix of inner products between said signed reproducing kernels of said feature vectors that belong to said class and said other class and said reproducing kernels of said unknown feature vectors, and multiply said matrix by said vector of scale factors; determining the discriminant function of said minimum risk quadratic classification system, using said locus of average risk and said average sign and said geometric locus, said determination of said discriminant function of said minimum risk quadratic
classification system being performed by using said processors of said computer system to subtract said locus of average risk from the sum of said geometric locus and said average sign, wherein said discriminant function of said minimum risk quadratic classification system satisfies said system of fundamental locus equations of binary classification, and wherein said discriminant function of said minimum risk quadratic classification system determines likely locations of said N feature vectors from said class and said N feature vectors from said other class and also determines said geometric loci of said quadratic decision boundary and said corresponding decision borders that jointly partition said extreme points into said symmetrical decision regions, wherein said symmetrical decision regions span said overlapping regions or said tail regions of said distributions of said N feature vectors that belong to said class and said N feature vectors that belong to said other class, and wherein said discriminant function of said minimum risk quadratic classification system satisfies said quadratic decision boundary in terms of a critical minimum eigenenergy and said minimum expected risk, wherein said counteracting and opposing components of said critical minimum eigenenergies exhibited by said scaled extreme vectors on said geometric locus associated with said corresponding counter risks and risks exhibited by said minimum risk quadratic classification system are symmetrically distributed over said axis of said dual locus, on equal sides of said statistical fulcrum located at said geometric center of said dual locus, wherein said counteracting and opposing components of said critical minimum eigenenergies together with said corresponding counter risks and risks exhibited by said minimum risk quadratic classification system are symmetrically balanced with each other about said geometric center of said dual locus, and wherein said statistical fulcrum is located at said center of said total
allowed eigenenergy and said minimum expected risk of said minimum risk quadratic classification system, wherein said minimum risk quadratic classification system satisfies a state of statistical equilibrium, wherein said total allowed eigenenergy and said expected risk of said minimum risk quadratic classification system are minimized, and wherein said minimum risk quadratic classification system exhibits the minimum probability of error for classifying said N feature vectors that belong to said class and said N feature vectors that belong to said other class and said unknown feature vectors related to said data set and said other data set; determining M different ensembles of M−1 different discriminant functions of M−1 different minimum risk quadratic classification systems using said M different data sets, said determination of said M different ensembles of M−1 different discriminant functions of M−1 different minimum risk quadratic classification systems being performed by performing said steps of determining M ensembles of M−1 discriminant functions of M−1 minimum risk quadratic classification systems; determining a fused discriminant function of a fused M-class minimum risk quadratic classification system using said M ensembles of M−1 discriminant functions of M−1 minimum risk quadratic classification systems and said M different ensembles of M−1 different discriminant functions of M−1 different minimum risk quadratic classification systems, said determination of said fused discriminant function of said fused M-class minimum risk quadratic classification system being performed by using said processors of said computer system to sum said M ensembles of M−1 discriminant functions of M−1 minimum risk quadratic classification systems and said M different ensembles of M−1 different discriminant functions of M−1 different minimum risk quadratic classification systems; determining which of said M classes said unknown feature vectors and said unknown different feature vectors
belong to using said fused discriminant function of said fused M-class minimum risk quadratic classification system, said determination of said classes of said unknown feature vectors and said unknown different feature vectors being performed by using said processors of said computer system to apply said fused discriminant function of said fused M-class minimum risk quadratic classification system to said unknown feature vectors and said unknown different feature vectors, wherein said fused discriminant function determines likely locations of said unknown feature vectors and said unknown different feature vectors and identifies said decision regions related to said M classes that said unknown feature vectors and said unknown different feature vectors are located within, wherein said fused discriminant function recognizes said classes of said unknown feature vectors and said unknown different feature vectors, and wherein said fused M-class minimum risk quadratic classification system decides which of said M classes said unknown feature vectors and said unknown different feature vectors belong to and thereby classifies said unknown feature vectors and said unknown different feature vectors.
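The fused decision rule of claim 10 sums, per class, the responses of that class's M−1 pairwise discriminants across both feature types. A minimal sketch follows, assuming the predicted class is the one with the largest fused ensemble response; that argmax rule is implied by the claim's "decides which of said M classes" language but is not stated in those words, and the discriminant functions shown are hypothetical placeholders.

```python
def fused_scores(modalities):
    """modalities: list of (ensembles, s) pairs, one per feature type.
    ensembles[c] is the list of M-1 pairwise discriminant functions for
    class c on that feature type; s is the feature vector of that type.
    Returns the fused per-class responses (claim 10's summation)."""
    num_classes = len(modalities[0][0])
    scores = [0.0] * num_classes
    for ensembles, s in modalities:
        for c in range(num_classes):
            scores[c] += sum(d(s) for d in ensembles[c])
    return scores

def fused_decision(modalities):
    # Assumed decision rule: the class with the largest fused response.
    scores = fused_scores(modalities)
    return scores.index(max(scores))

# Toy M = 2 example with hypothetical pairwise discriminants on two
# feature types (scalar features for brevity).
type_a = ([[lambda s: s], [lambda s: -s]], 2.0)   # favors class 0 at s = 2
type_b = ([[lambda s: -s], [lambda s: s]], 1.0)   # favors class 1 at s = 1
winner = fused_decision([type_a, type_b])
```

Here the stronger response from the first feature type outweighs the second, so the fused system assigns class 0; with a single feature type the same code reduces to the M-class rule of claim 7.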
11. The method of claim 10, wherein the reproducing kernel is a Gaussian reproducing kernel: k_x(s) = exp(−γ‖x − s‖²), where 0.01 ≤ γ ≤ 0.1.
12. The method of claim 10, wherein the reproducing kernel is a second-order polynomial reproducing kernel: k_x(s) = (s^T x + 1)^2.
13. A computer-implemented method of using feature vectors and machine learning algorithms to determine a discriminant function of a minimum risk quadratic classification system that classifies said feature vectors into two classes and using said discriminant function of said minimum risk quadratic classification system to determine a classification error rate and a measure of overlap between distributions of said feature vectors, said method comprising: receiving an N×d data set of feature vectors within a computer system, wherein N is a number of feature vectors, d is a number of vector components in each feature vector, and each one of said N feature vectors is labeled with information that identifies which of two classes each one of said N feature vectors belongs to, and wherein each said feature vector is defined by a d-dimensional vector of numerical features, wherein said numerical features are extracted from digital signals; receiving an N×d test data set of test feature vectors related to said data set within said computer system, wherein N is a number of test feature vectors, d is a number of vector components in each test feature vector, and each one of said N test feature vectors is labeled with information that identifies which of said two classes each one of said N test feature vectors belongs to; determining a kernel matrix using said data set, said determination of said kernel matrix being performed by using processors of said computer system to calculate a matrix of all possible inner products of signed reproducing kernels of said N feature vectors, wherein a reproducing kernel of a feature vector replaces said feature vector with a curve that contains first and second degree vector components, and wherein each one of said reproducing kernels of said N feature vectors has a sign of +1 or −1 that identifies which of said two classes each one of said N feature vectors belongs to, and using said processors of said computer system to calculate a
regularized kernel matrix from said kernel matrix; determining scale factors of a geometric locus of signed and scaled reproducing kernels of extreme points using said regularized kernel matrix, wherein said extreme points are located within overlapping regions or near tail regions of distributions of said N feature vectors, said determination of said scale factors being performed by using said processors of said computer system to determine a solution of a dual optimization problem, wherein said scale factors and said geometric locus satisfy a system of fundamental locus equations of binary classification, subject to geometric and statistical conditions for a minimum risk quadratic classification system in statistical equilibrium, and wherein said scale factors determine conditional densities for said extreme points and also determine critical minimum eigenenergies exhibited by scaled extreme vectors on said geometric locus, wherein said critical minimum eigenenergies determine conditional probabilities of said extreme points and also determine corresponding counter risks and risks of a minimum risk quadratic classification system, wherein said counter risks are associated with right decisions and said risks are associated with wrong decisions of said minimum risk quadratic classification system, and wherein said geometric locus determines the principal eigenaxis of the decision boundary of said minimum risk quadratic classification system, wherein said principal eigenaxis exhibits symmetrical dimensions and density, wherein said conditional probabilities and said critical minimum eigenenergies exhibited by said minimum risk quadratic classification system are symmetrically concentrated within said principal eigenaxis, and wherein counteracting and opposing components of said critical minimum eigenenergies exhibited by said scaled extreme vectors on said geometric locus together with said corresponding counter risks and risks exhibited by said minimum risk 
quadratic classification system are symmetrically balanced with each other about the geometric center of said principal eigenaxis, wherein the center of total allowed eigenenergy and minimum expected risk of said minimum risk quadratic classification system is located at the geometric center of said geometric locus, and wherein said geometric locus determines a primal representation of a dual locus of likelihood components and principal eigenaxis components, wherein said likelihood components and said principal eigenaxis components are symmetrically distributed over either side of the axis of said dual locus, wherein a statistical fulcrum is placed directly under the center of said dual locus, and wherein said likelihood components of said dual locus determine conditional likelihoods for said extreme points, and wherein said principal eigenaxis components of said dual locus determine an intrinsic coordinate system of geometric loci of a quadratic decision boundary and corresponding decision borders that jointly partition the decision space of said minimum risk quadratic classification system into symmetrical decision regions; determining said extreme vectors on said geometric locus using the vector of said scale factors, said determination of said extreme vectors being performed by using said processors of said computer system to identify said scale factors that exceed zero by a small threshold, and using said processors of said computer system to determine a sign vector of signs associated with said extreme vectors using said data set, and compute the average sign using said sign vector; determining a locus of average risk for said quadratic classification system using said reproducing kernels of said extreme vectors and said sign vector, said determination of said locus of average risk being performed by using said processors of said computer system to calculate a kernel matrix of all possible inner products of said reproducing kernels of said extreme vectors and 
multiply said kernel matrix by said sign vector; determining said geometric locus, said determination of said geometric locus being performed by using said processors of said computer system to calculate a matrix of inner products between said signed reproducing kernels of said N feature vectors and said reproducing kernels of said N feature vectors and said N test feature vectors, and multiply said matrix by said vector of scale factors; determining the discriminant function of said minimum risk quadratic classification system, using said locus of average risk and said average sign and said geometric locus, said determination of said discriminant function of said minimum risk quadratic classification system being performed by using said processors of said computer system to subtract said locus of average risk from the sum of said geometric locus and said average sign, wherein said discriminant function of said minimum risk quadratic classification system satisfies said system of fundamental locus equations of binary classification, and wherein said discriminant function of said minimum risk quadratic classification system determines likely locations of said N feature vectors and said N test feature vectors and also determines said geometric loci of said quadratic decision boundary and said corresponding decision borders that jointly partition said extreme points into said symmetrical decision regions, wherein said symmetrical decision regions span said overlapping regions or said tail regions of said distributions of said N feature vectors, and wherein said discriminant function of said minimum risk quadratic classification system satisfies said quadratic decision boundary in terms of a critical minimum eigenenergy and said minimum expected risk, wherein said counteracting and opposing components of said critical minimum eigenenergies exhibited by said scaled extreme vectors on said geometric locus associated with said corresponding counter risks and risks exhibited by
said minimum risk quadratic classification system are symmetrically distributed over said axis of said dual locus, on equal sides of said statistical fulcrum located at said geometric center of said dual locus, wherein said counteracting and opposing components of said critical minimum eigenenergies together with said corresponding counter risks and risks exhibited by said minimum risk quadratic classification system are symmetrically balanced with each other about said geometric center of said dual locus, and wherein said statistical fulcrum is located at said center of said total allowed eigenenergy and said minimum expected risk of said minimum risk quadratic classification system, wherein said minimum risk quadratic classification system satisfies a state of statistical equilibrium, wherein said total allowed eigenenergy and said expected risk of said minimum risk quadratic classification system are minimized, and wherein said minimum risk quadratic classification system exhibits the minimum probability of error for classifying said N feature vectors and said N test feature vectors related to said data set; determining which of said two classes said N feature vectors belong to using said discriminant function of said minimum risk quadratic classification system, said determination of said classes of said N feature vectors being performed by using said processors of said computer system to apply said discriminant function of said minimum risk quadratic classification system to said N feature vectors, wherein said discriminant function determines likely locations of said N feature vectors and identifies said decision regions related to said two classes that said N feature vectors are located within, wherein said discriminant function recognizes said classes of said N feature vectors, and wherein said minimum risk quadratic classification system decides which of said two classes said N feature vectors belong to and thereby classifies said N feature vectors; determining
an in-sample classification error rate for said two classes of feature vectors, said determination of said error rate being performed by using said processors of said computer system to calculate the average number of wrong decisions made by said minimum risk quadratic classification system for classifying said N feature vectors; determining which of said two classes said N test feature vectors belong to using said discriminant function of said minimum risk quadratic classification system, said determination of said classes of said N test feature vectors being performed by using said processors of said computer system to apply said discriminant function of said minimum risk quadratic classification system to said N test feature vectors, wherein said discriminant function determines likely locations of said N test feature vectors and identifies said decision regions related to said two classes that said N test feature vectors are located within, wherein said discriminant function recognizes said classes of said N test feature vectors, and wherein said minimum risk quadratic classification system decides which of said two classes said N test feature vectors belong to and thereby classifies said N test feature vectors; determining an out-of-sample classification error rate for said two classes of feature vectors, said determination of said error rate being performed by using said processors of said computer system to calculate the average number of wrong decisions made by said minimum risk quadratic classification system for classifying said N test feature vectors; determining a classification error rate for said two classes of feature vectors, said determination of said classification error rate being performed by using said processors of said computer system to average said in-sample classification error rate and said out-of-sample classification error rate; and determining a measure of overlap between distributions of feature vectors for said two classes of
feature vectors using said N feature vectors and said extreme vectors, said determination of said measure of overlap being performed by using said processors of said computer system to calculate the ratio of the number of said extreme vectors to the number of said N feature vectors, wherein said ratio determines said measure of overlap.
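Claim 13's error estimate and overlap measure reduce to simple statistics once a trained classifier is in hand: average the in-sample and out-of-sample error rates, and take the ratio of extreme vectors to feature vectors as the overlap measure. A minimal sketch follows; the `predict` function is a hypothetical stand-in for the claimed minimum risk quadratic classification system.

```python
def error_rate(predict, X, y):
    # Average number of wrong decisions over a labeled set.
    wrong = sum(1 for x, label in zip(X, y) if predict(x) != label)
    return wrong / len(y)

def combined_error_rate(predict, X_train, y_train, X_test, y_test):
    # Claim 13: average the in-sample and out-of-sample error rates.
    return 0.5 * (error_rate(predict, X_train, y_train)
                  + error_rate(predict, X_test, y_test))

def overlap_measure(num_extreme_vectors, num_feature_vectors):
    # Claim 13: the ratio of the number of extreme vectors to the number
    # of feature vectors measures the overlap between the two
    # class distributions.
    return num_extreme_vectors / num_feature_vectors

# Hypothetical threshold classifier on scalar features.
predict = lambda x: 1 if x > 0 else -1
rate = combined_error_rate(predict, [1.0, -1.0], [1, -1], [2.0, -0.5], [1, 1])
```

The overlap ratio is intuitive: heavily overlapping class distributions force more points into the overlapping and tail regions, so a larger fraction of the data becomes extreme vectors.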
14. The method of claim 13, wherein the reproducing kernel is a Gaussian reproducing kernel: k_x(s) = exp(−γ‖x − s‖²), where 0.01 ≤ γ ≤ 0.1.
15. The method of claim 13, wherein the reproducing kernel is a second-order polynomial reproducing kernel: k_x(s) = (s^T x + 1)^2.
16. A computer-implemented method of using feature vectors and machine learning algorithms to determine a discriminant function of a minimum risk quadratic classification system that classifies collections of said feature vectors into two classes and using said discriminant function of said minimum risk quadratic classification system to determine if distributions of said collections of said feature vectors are homogeneous distributions, said method comprising: receiving an N×d data set of feature vectors within a computer system, wherein N is a number of feature vectors, d is a number of vector components in each feature vector, and each one of said N feature vectors is labeled with information that identifies which of two collections each one of said N feature vectors belongs to, and wherein each said feature vector is defined by a d-dimensional vector of numerical features, wherein said numerical features are extracted from digital signals; determining a kernel matrix using said data set, said determination of said kernel matrix being performed by using processors of said computer system to calculate a matrix of all possible inner products of signed reproducing kernels of said N feature vectors, wherein a reproducing kernel of a feature vector replaces said feature vector with a curve that contains first and second degree vector components, and wherein each one of said reproducing kernels of said N feature vectors has a sign of +1 or −1 that identifies which of said two collections each one of said N feature vectors belongs to, and using said processors of said computer system to calculate a regularized kernel matrix from said kernel matrix; determining scale factors of a geometric locus of signed and scaled reproducing kernels of extreme points using said regularized kernel matrix, wherein said extreme points are located within overlapping regions or near tail regions of distributions of said N feature vectors, said determination of said scale factors being
performed by using said processors of said computer system to determine a solution of a dual optimization problem, wherein said scale factors and said geometric locus satisfy a system of fundamental locus equations of binary classification, subject to geometric and statistical conditions for a minimum risk quadratic classification system in statistical equilibrium, and wherein said scale factors determine conditional densities for said extreme points and also determine critical minimum eigenenergies exhibited by scaled extreme vectors on said geometric locus, wherein said critical minimum eigenenergies determine conditional probabilities of said extreme points and also determine corresponding counter risks and risks of a minimum risk quadratic classification system, wherein said counter risks are associated with right decisions and said risks are associated with wrong decisions of said minimum risk quadratic classification system, and wherein said geometric locus determines the principal eigenaxis of the decision boundary of said minimum risk quadratic classification system, wherein said principal eigenaxis exhibits symmetrical dimensions and density, wherein said conditional probabilities and said critical minimum eigenenergies exhibited by said minimum risk quadratic classification system are symmetrically concentrated within said principal eigenaxis, and wherein counteracting and opposing components of said critical minimum eigenenergies exhibited by said scaled extreme vectors on said geometric locus together with said corresponding counter risks and risks exhibited by said minimum risk quadratic classification system are symmetrically balanced with each other about the geometric center of said principal eigenaxis, wherein the center of total allowed eigenenergy and minimum expected risk of said minimum risk quadratic classification system is located at the geometric center of said geometric locus, and wherein said geometric locus determines a primal 
representation of a dual locus of likelihood components and principal eigenaxis components, wherein said likelihood components and said principal eigenaxis components are symmetrically distributed over either side of the axis of said dual locus, wherein a statistical fulcrum is placed directly under the center of said dual locus, and wherein said likelihood components of said dual locus determine conditional likelihoods for said extreme points, and wherein said principal eigenaxis components of said dual locus determine an intrinsic coordinate system of geometric loci of a quadratic decision boundary and corresponding decision borders that jointly partition the decision space of said minimum risk quadratic classification system into symmetrical decision regions; determining said extreme vectors on said geometric locus using the vector of said scale factors, said determination of said extreme vectors being performed by using said processors of said computer system to identify said scale factors that exceed zero by a small threshold, and using said processors of said computer system to determine a sign vector of signs associated with said extreme vectors using said data set, and compute the average sign using said sign vector; determining a locus of average risk for said quadratic classification system using said reproducing kernels of said extreme vectors and said sign vector, said determination of said locus of average risk being performed by using said processors of said computer system to calculate a kernel matrix of all possible inner products of said reproducing kernels of said extreme vectors and multiply said kernel matrix by said sign vector; determining said geometric locus, said determination of said geometric locus being performed by using said processors of said computer system to calculate a matrix of inner products between said signed reproducing kernels of said N feature vectors and said reproducing kernels of said N feature vectors, and multiply said 
matrix by said vector of scale factors; determining the discriminant function of said minimum risk quadratic classification system, using said locus of average risk and said average sign and said geometric locus, said determination of said discriminant function of said minimum risk quadratic classification system being performed by using said processors of said computer system to subtract said locus of average risk from the sum of said geometric locus and said average sign, wherein said discriminant function of said minimum risk quadratic classification system satisfies said system of fundamental locus equations of binary classification, and wherein said discriminant function of said minimum risk quadratic classification system determines likely locations of said N feature vectors and also determines said geometric loci of said quadratic decision boundary and said corresponding decision borders that jointly partition said extreme points into said symmetrical decision regions, wherein said symmetrical decision regions span said overlapping regions or said tail regions of said distributions of said N feature vectors, and wherein said discriminant function of said minimum risk quadratic classification system satisfies said quadratic decision boundary in terms of a critical minimum eigenenergy and said minimum expected risk, wherein said counteracting and opposing components of said critical minimum eigenenergies exhibited by said scaled extreme vectors on said geometric locus associated with said corresponding counter risks and risks exhibited by said minimum risk quadratic classification system are symmetrically distributed over said axis of said dual locus, on equal sides of said statistical fulcrum located at said geometric center of said dual locus, wherein said counteracting and opposing components of said critical minimum eigenenergies together with said corresponding counter risks and risks exhibited by said minimum risk quadratic classification system are symmetrically balanced
with each other about said geometric center of said dual locus, and wherein said statistical fulcrum is located at said center of said total allowed eigenenergy and said minimum expected risk of said minimum risk quadratic classification system, wherein said minimum risk quadratic classification system satisfies a state of statistical equilibrium, wherein said total allowed eigenenergy and said expected risk of said minimum risk quadratic classification system are minimized, and wherein said minimum risk quadratic classification system exhibits the minimum probability of error for classifying said N feature vectors that belong to said two collections of said feature vectors; determining which of said two collections said N feature vectors belong to using said discriminant function of said minimum risk quadratic classification system, said determination of said collections of said N feature vectors being performed by using said processors of said computer system to apply said discriminant function of said minimum risk quadratic classification system to said N feature vectors, wherein said discriminant function determines likely locations of said N feature vectors and identifies said decision regions related to said two collections that said N feature vectors are located within, wherein said discriminant function recognizes said collections of said N feature vectors, and wherein said minimum risk quadratic classification system decides which of said two collections said N feature vectors belong to and thereby classifies said N feature vectors; determining an in-sample classification error rate for said two collections of feature vectors, said determination of said error rate being performed by using said processors of said computer system to calculate the average number of wrong decisions made by said minimum risk quadratic classification system for classifying said N feature vectors; determining a measure of overlap between said distributions of said N
feature vectors for said two collections of feature vectors using said N feature vectors and said extreme vectors, said determination of said measure of overlap being performed by using said processors of said computer system to calculate the ratio of the number of said extreme vectors to the number of said N feature vectors, wherein said ratio determines said measure of overlap; and determining if said distributions of said two collections of said N feature vectors are homogeneous distributions using said in-sample classification error rate and said measure of overlap, wherein said distributions of said N feature vectors are homogeneous distributions if said measure of overlap has an approximate value of one and said in-sample classification error rate has an approximate value of one half.
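The homogeneity determination recited above reduces to two statistics: the measure of overlap (ratio of extreme vectors to all N feature vectors) and the in-sample classification error rate. A minimal Python sketch of that final decision step, wherein the two tolerances are illustrative assumptions rather than values recited in the claim:

```python
def distributions_are_homogeneous(n_extreme, n_total, error_rate,
                                  overlap_tol=0.15, error_tol=0.10):
    """Homogeneity test of claim 16 (sketch).

    Distributions are deemed homogeneous when the measure of overlap
    (number of extreme vectors / number of feature vectors) is
    approximately one and the in-sample classification error rate is
    approximately one half.  overlap_tol and error_tol are assumed
    tolerances, not values from the claim.
    """
    overlap = n_extreme / n_total
    return (abs(overlap - 1.0) <= overlap_tol
            and abs(error_rate - 0.5) <= error_tol)
```

For example, a run in which 95 of 100 feature vectors are extreme vectors and the error rate is near one half indicates homogeneous (statistically indistinguishable) distributions, while few extreme vectors and a low error rate indicate distinct distributions.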
17. The method of claim 16, wherein the reproducing kernel is a Gaussian reproducing kernel: k(x, s) = exp(−γ‖x − s‖²), wherein 0.01 ≤ γ ≤ 0.1.
18. The method of claim 16, wherein the reproducing kernel is a second-order polynomial reproducing kernel: k(x, s) = (sᵀx + 1)².
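Claims 17 and 18 name the two reproducing kernels. A short sketch of both; the Gaussian form exp(−γ‖x − s‖²) with 0.01 ≤ γ ≤ 0.1 is a reconstructed reading of the garbled claim text and should be treated as an assumption:

```python
import numpy as np

def gaussian_kernel(x, s, gamma=0.05):
    # Gaussian reproducing kernel (claim 17): exp(-gamma * ||x - s||^2),
    # with 0.01 <= gamma <= 0.1 assumed from the claim's stated range.
    x, s = np.asarray(x, float), np.asarray(s, float)
    return float(np.exp(-gamma * np.sum((x - s) ** 2)))

def poly2_kernel(x, s):
    # Second-order polynomial reproducing kernel (claim 18): (s'x + 1)^2,
    # a curve containing first and second degree vector components.
    return float((np.dot(x, s) + 1.0) ** 2)
```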
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION OF THE INVENTION
[0044] Before describing illustrative embodiments of the invention, a detailed description of machine learning algorithms of the invention is presented along with a detailed description of the novel principal eigenaxis that determines a discriminant function of a minimum risk quadratic classification system.
[0045] The method to determine a discriminant function of a minimum risk quadratic classification system that classifies feature vectors into two categories, designed in accordance with the invention, uses machine learning algorithms and labeled feature vectors to determine a geometric locus of signed and scaled reproducing kernels of extreme points for feature vectors x of dimension d belonging to either of two classes A or B, wherein the geometric locus satisfies a system of fundamental locus equations of binary classification, subject to geometric and statistical conditions for a minimum risk quadratic classification system in statistical equilibrium.
[0046] The input to a machine learning algorithm of the invention is a collection of N feature vectors x.sub.i with labels y.sub.i
(x_1, y_1), (x_2, y_2), …, (x_N, y_N),
wherein y_i = +1 if x_i ∈ A and y_i = −1 if x_i ∈ B, and wherein the N feature vectors are extracted from collections of digital signals.
[0047] Denote a minimum risk quadratic classification system of the invention by
sign(D(s)) = sign(k_s κ + κ_0),
wherein A or B is the true category. The discriminant function D(s) = k_s κ + κ_0 of the minimum risk quadratic classification system is represented by a novel principal eigenaxis that is expressed as a dual locus of likelihood components and principal eigenaxis components and is determined by a geometric locus of signed and scaled reproducing kernels of extreme points:
κ = Σ_{i=1}^{l_1} ψ_{1i*} k_{x_{1i*}} − Σ_{i=1}^{l_2} ψ_{2i*} k_{x_{2i*}},
[0048] wherein κ determines an eigenaxis of symmetry for the decision space, and wherein the scale factors ψ_{1i*} and ψ_{2i*} determine magnitudes ψ_{1i*}‖k_{x_{1i*}}‖ and ψ_{2i*}‖k_{x_{2i*}}‖ of the principal eigenaxis components on κ. The geometric locus κ is determined by solving the constrained optimization problem:
min Φ(κ) = ‖κ‖²/2 + (C/2) Σ_{i=1}^{N} ξ_i², (1.1)
s.t. y_i(k_{x_i} κ + κ_0) ≥ 1 − ξ_i, i = 1, …, N,
wherein κ is a d×1 geometric locus of signed and scaled reproducing kernels of extreme points that determines the principal eigenaxis of the decision boundary of a minimum risk quadratic classification system, wherein κ is expressed as a dual locus of likelihood components and principal eigenaxis components, and wherein k_{x_{i*}} denotes a reproducing kernel of an extreme point x_{i*},
wherein the system of N inequalities:
y_i(k_{x_i} κ + κ_0) ≥ 1 − ξ_i, i = 1, …, N,
is satisfied in a suitable manner, and wherein the dual locus of κ satisfies a critical minimum eigenenergy constraint:
‖κ‖² = ‖κ‖²_min c,
wherein the total allowed eigenenergy ‖κ‖²_min c exhibited by κ determines the minimum expected risk R_min(Z|‖κ‖²_min c) of the minimum risk quadratic classification system.
The primal optimization problem in Eq. (1.1) is solved by the Lagrange multipliers method, which introduces a Lagrangian function:
L_Ψ(κ) = ‖κ‖²/2 + (C/2) Σ_{i=1}^{N} ξ_i² − Σ_{i=1}^{N} ψ_i[y_i(k_{x_i} κ + κ_0) − 1 + ξ_i], (1.2)
wherein the objective function and its constraints are combined with each other, that is minimized with respect to the primal variables κ and κ_0, and is maximized with respect to the dual variables ψ_i. The Lagrange multipliers method introduces a Wolfe dual geometric locus ψ that is symmetrically and equivalently related to the primal geometric locus κ and finds extrema for the restriction of the primal geometric locus to a Wolfe dual principal eigenspace.
[0049] The fundamental unknowns associated with the primal optimization problem in Eq. (1.1) are the scale factors ψ_i of the principal eigenaxis components ψ_i k_{x_i} on the geometric locus of a principal eigenaxis ψ. Each scale factor ψ_i determines a conditional density and a corresponding conditional likelihood for a reproducing kernel of an extreme point on a dual locus of likelihood components, and each scale factor ψ_i determines the magnitude and the critical minimum eigenenergy exhibited by a scaled extreme vector on a dual locus of principal eigenaxis components.
[0050] The Karush-Kuhn-Tucker (KKT) conditions on the Lagrangian function L_Ψ(κ) in Eq. (1.2):
κ − Σ_{i=1}^{N} ψ_i y_i k_{x_i} = 0, i = 1, …, N, (1.3)
Σ_{i=1}^{N} ψ_i y_i = 0, i = 1, …, N, (1.4)
C Σ_{i=1}^{N} ξ_i − Σ_{i=1}^{N} ψ_i ≥ 0, i = 1, …, N, (1.5)
ψ_i ≥ 0, i = 1, …, N, (1.6)
ψ_i[y_i(k_{x_i} κ + κ_0) − 1 + ξ_i] ≥ 0, i = 1, …, N, (1.7)
determine a system of fundamental locus equations of binary classification, subject to geometric and statistical conditions for a minimum risk quadratic classification system in statistical equilibrium, that are jointly satisfied by the geometric locus of the principal eigenaxis κ and the geometric locus of the principal eigenaxis ψ.
[0051] Because the primal optimization problem in Eq. (1.1) is a convex optimization problem, the inequalities in Eqs (1.6) and (1.7) must only hold for certain values of the primal and the dual variables. The KKT conditions in Eqs (1.3)-(1.7) restrict the magnitudes and the eigenenergies of the principal eigenaxis components on both κ and ψ, wherein the expected risk R_min(Z|‖κ‖²_min c) and the total allowed eigenenergy of the minimum risk quadratic classification system are jointly minimized by maximizing the Lagrangian dual problem:
max Ψ(ψ) = Σ_{i=1}^{N} ψ_i − (1/2) Σ_{i=1}^{N} Σ_{j=1}^{N} ψ_i ψ_j y_i y_j (k_{x_i} k_{x_j}ᵀ + δ_ij/C), (1.8)
wherein Ψ(ψ) is subject to the constraints Σ_{i=1}^{N} ψ_i y_i = 0 and ψ_i ≥ 0, and wherein δ_ij is the Kronecker delta defined as unity for i = j and 0 otherwise.
[0052] Equation (1.8) is a quadratic programming problem that can be written in vector notation by letting Q ≜ ξI + X̃X̃ᵀ, wherein X̃ ≜ D_y X, wherein D_y is a N×N diagonal matrix of training labels (class membership statistics) y_i, and wherein the N×d matrix X̃ is a matrix of labeled reproducing kernels of N feature vectors:
X̃ = (y_1 k_{x_1}, y_2 k_{x_2}, …, y_N k_{x_N})ᵀ.
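Under the reading that the Kronecker-delta term in Eq. (1.8) supplies the regularizing diagonal of Q, the kernel matrix can be assembled as below. The second-order polynomial kernel and the value C = 50 are illustrative assumptions:

```python
import numpy as np

def regularized_kernel_matrix(X, y, C=50.0):
    """Assemble the matrix Q of the Wolfe dual problem (sketch).

    K is the Gram matrix of second-order polynomial reproducing
    kernels, D_y is the diagonal matrix of class labels, and the
    diagonal term I/C plays the role of the delta_ij / C entry in
    Eq. (1.8).  Kernel choice and C are assumptions.
    """
    K = (X @ X.T + 1.0) ** 2            # k(x_i, x_j) = (x_i'x_j + 1)^2
    Dy = np.diag(y.astype(float))       # N x N diagonal label matrix
    return Dy @ K @ Dy + np.eye(len(y)) / C
```

Because K is a valid Gram matrix, D_y K D_y is positive semidefinite, and the added diagonal makes Q positive definite with a complete eigenspectrum, which is the point of the regularization discussed below.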
[0053] The matrix version of the Lagrangian dual problem, which is also known as the Wolfe dual problem:
max Ψ(ψ) = 1ᵀψ − ψᵀQψ/2, (1.9)
[0054] is subject to the constraints ψᵀy = 0 and ψ_i ≥ 0, wherein the inequalities ψ_i ≥ 0 only hold for certain values of ψ_i.
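A runnable sketch of the Wolfe dual problem of Eq. (1.9) on a toy data set. A general-purpose SLSQP solver stands in for whatever quadratic programming routine an implementation would actually use; the polynomial kernel, C = 50, and the threshold for identifying ψ_i > 0 are all assumptions:

```python
import numpy as np
from scipy.optimize import minimize

def solve_wolfe_dual(X, y, C=50.0):
    """Maximize 1'psi - psi'Q psi / 2  s.t.  psi'y = 0, psi_i >= 0."""
    K = (X @ X.T + 1.0) ** 2                     # polynomial kernel Gram
    Q = np.outer(y, y) * K + np.eye(len(y)) / C  # labeled, regularized
    neg_dual = lambda p: 0.5 * p @ Q @ p - p.sum()
    res = minimize(neg_dual, np.zeros(len(y)), method="SLSQP",
                   bounds=[(0.0, None)] * len(y),
                   constraints=[{"type": "eq", "fun": lambda p: p @ y}])
    psi = res.x
    sv = psi > 1e-6                              # extreme vectors
    kappa_0 = np.mean(y[sv] - (psi * y) @ K[:, sv])
    return psi, kappa_0

def discriminant(S, X, y, psi, kappa_0):
    # D(s) = sum_i psi_i y_i k(x_i, s) + kappa_0, for each row s of S.
    return (psi * y) @ ((X @ S.T + 1.0) ** 2) + kappa_0

# Toy example: two separable classes along one coordinate.
X = np.array([[0.0, 0.0], [1.0, 0.0], [4.0, 0.0], [5.0, 0.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
psi, kappa_0 = solve_wolfe_dual(X, y)
labels = np.sign(discriminant(X, X, y, psi, kappa_0))
```

The solution satisfies the two dual constraints, and the resulting sign test classifies the toy training set correctly.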
[0055] Because Eq. (1.9) is a convex programming problem, the theorem for convex duality guarantees an equivalence and a corresponding symmetry between the dual loci of ψ and κ. Accordingly, the geometric locus of the principal eigenaxis ψ determines a dual locus of likelihood components and principal eigenaxis components, wherein the expected risk R_min(Z|‖κ‖²_min c) and the total allowed eigenenergy ‖κ‖²_min c of the minimum risk quadratic classification system are jointly minimized within the Wolfe dual principal eigenspace of ψ and κ.
[0056] The locations and the scale factors of the principal eigenaxis components on both ψ and κ are considerably affected by the rank and the eigenspectrum of the kernel matrix Q, wherein a low rank kernel matrix Q determines an unbalanced principal eigenaxis and an irregular quadratic partition of a decision space. The kernel matrix Q has low rank whenever d < N for a collection of N feature vectors of dimension d. These problems are solved by the following regularization method.
[0057] The regularized form of Q, wherein ξ ≪ 1 and Q ≜ ξI + X̃X̃ᵀ, ensures that Q has full rank and a complete eigenvector set, wherein Q has a complete eigenspectrum. The regularization constant C is related to the regularization parameter ξ by
C = 1/ξ.
For N feature vectors of dimension d, wherein d < N, all of the regularization parameters {ξ_i}_{i=1}^{N} in Eq. (1.1) and all of its derivatives are set equal to a very small value: ξ_i = ξ ≪ 1, e.g. ξ_i = ξ = 0.02, and the regularization constant C is set equal to C = 1/ξ. For N feature vectors of dimension d, wherein N < d, all of the regularization parameters {ξ_i}_{i=1}^{N} in Eq. (1.1) and all of its derivatives are set equal to zero: ξ_i = ξ = 0, and the regularization constant C is set equal to infinity: C = ∞.
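The two regularization regimes described above can be stated directly. The relation C = 1/ξ is an assumption consistent with the Kronecker-delta term of Eq. (1.8):

```python
import math

def regularization_settings(N, d, xi=0.02):
    """Return (xi, C) for an N x d data set (sketch).

    For d < N the regularization parameters are set to a small value
    (the text suggests 0.02) and C to its inverse; for N < d they are
    set to zero and C to infinity.  The inverse relation C = 1/xi is
    an assumption.
    """
    if d < N:
        return xi, 1.0 / xi
    return 0.0, math.inf
```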
[0058] The KKT conditions in Eqs (1.3) and (1.6) require that the geometric locus of the principal eigenaxis κ satisfy the vector expression:
κ = Σ_{i=1}^{N} y_i ψ_i k_{x_i}, (1.10)
wherein ψ_i ≥ 0, and wherein reproducing kernels k_{x_i} that have non-zero magnitudes ψ_i > 0 are termed extreme vectors.
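The claims describe the same step as identifying scale factors that "exceed zero by a small threshold". A one-line sketch, with the threshold an illustrative assumption:

```python
import numpy as np

def extreme_vector_indices(psi, threshold=1e-6):
    # Indices i with psi_i > threshold; the corresponding reproducing
    # kernels k_x_i are the extreme vectors of the geometric locus.
    # The threshold value is an assumption.
    return np.flatnonzero(np.asarray(psi) > threshold)
```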
[0059] Denote the scaled extreme vectors that belong to class A and class B by ψ_{1i*} k_{x_{1i*}} and ψ_{2i*} k_{x_{2i*}}, respectively, wherein l_1 extreme vectors belong to class A, l_2 extreme vectors belong to class B, and l_1 + l_2 = l.
[0060] Using Eq. (1.10), the class membership statistics and the assumptions outlined above, it follows that the geometric locus of the principal eigenaxis κ is determined by the vector difference between a pair of sides, i.e., a pair of directed line segments:
κ = κ_1 − κ_2, (1.11)
wherein κ_1 and κ_2 denote the sides of κ, wherein the side κ_1 is determined by the vector expression
κ_1 = Σ_{i=1}^{l_1} ψ_{1i*} k_{x_{1i*}},
and the side κ_2 is determined by the vector expression
κ_2 = Σ_{i=1}^{l_2} ψ_{2i*} k_{x_{2i*}},
and wherein the geometric locus of the principal eigenaxis κ is determined by the vector difference of κ_1 and κ_2.
[0061] All of the principal eigenaxis components ψ_{1i*} k_{x_{1i*}} and ψ_{2i*} k_{x_{2i*}} on the dual locus of κ = κ_1 − κ_2 determine an intrinsic coordinate system of geometric loci of a quadratic decision boundary and corresponding decision borders.
[0067] The manner in which a discriminant function of the invention partitions the feature space Z = Z_1 + Z_2 of a minimum risk quadratic classification system for a collection of N feature vectors is determined by the KKT condition in Eq. (1.7) and the KKT condition of complementary slackness.
[0068] The KKT condition in Eq. (1.7) and the KKT condition of complementary slackness determine a discriminant function
D(s) = k_s κ + κ_0, (1.12)
that satisfies the set of constraints:
D(s) = 0, D(s) = +1, and D(s) = −1,
wherein D(s) = 0 denotes a quadratic decision boundary that partitions the Z_1 and Z_2 decision regions of a minimum risk quadratic classification system, wherein D(s) = +1 denotes the quadratic decision border for the Z_1 decision region, and wherein D(s) = −1 denotes the quadratic decision border for the Z_2 decision region.
[0069] The KKT condition in Eq. (1.7) and the KKT condition of complementary slackness also determine the following system of locus equations that are satisfied by κ_0 and κ:
y_i(k_{x_{i*}} κ + κ_0) − 1 + ξ_i = 0, i = 1, …, l, (1.13)
wherein κ_0 satisfies the functional of κ in the following manner:
κ_0 = (1/l) Σ_{i=1}^{l} y_i(1 − ξ_i) − (1/l) Σ_{i=1}^{l} k_{x_{i*}} κ.
[0070] Using Eqs (1.12) and (1.13), the discriminant function is rewritten as:
D(s) = k_s κ − (1/l) Σ_{i=1}^{l} k_{x_{i*}} κ + (1/l) Σ_{i=1}^{l} y_i(1 − ξ_i). (1.14)
[0071] Using Eq. (1.14) and letting D(s) = 0, the discriminant function is rewritten as
k_s κ − (1/l) Σ_{i=1}^{l} k_{x_{i*}} κ = −(1/l) Σ_{i=1}^{l} y_i(1 − ξ_i), (1.15)
wherein the constrained discriminant function D(s) = 0 determines a quadratic decision boundary, and all of the points s on the quadratic decision boundary D(s) = 0 exclusively reference the principal eigenaxis of κ.
[0072] Using Eq. (1.14) and letting D(s) = +1, the discriminant function is rewritten as
k_s κ − (1/l) Σ_{i=1}^{l} k_{x_{i*}} κ = 1 − (1/l) Σ_{i=1}^{l} y_i(1 − ξ_i), (1.16)
wherein the constrained discriminant function D(s) = +1 determines a quadratic decision border, and all of the points s on the quadratic decision border D(s) = +1 exclusively reference the principal eigenaxis of κ.
[0073] Using Eq. (1.14) and letting D(s) = −1, the discriminant function is rewritten as
k_s κ − (1/l) Σ_{i=1}^{l} k_{x_{i*}} κ = −1 − (1/l) Σ_{i=1}^{l} y_i(1 − ξ_i), (1.17)
wherein the constrained discriminant function D(s) = −1 determines a quadratic decision border, and all of the points s on the quadratic decision border D(s) = −1 exclusively reference the principal eigenaxis of κ.
[0074] Given Eqs (1.15)-(1.17), it follows that a constrained discriminant function of the invention determines geometric loci of a quadratic decision boundary D(s) = 0 and corresponding decision borders D(s) = +1 and D(s) = −1 that jointly partition the decision space Z of a minimum risk quadratic classification system into symmetrical decision regions Z_1 and Z_2: Z = Z_1 + Z_2, wherein balanced portions of the extreme points x_{1i*} and x_{2i*} from class A and class B account for right and wrong decisions of the minimum risk quadratic classification system.
[0075] Therefore, the geometric locus of the principal eigenaxis κ determines an eigenaxis of symmetry for the decision space of a minimum risk quadratic classification system, wherein a constrained discriminant function delineates symmetrical decision regions Z_1 and Z_2: Z_1 = Z_2 for the minimum risk quadratic classification system, wherein the decision regions Z_1 and Z_2 are symmetrically partitioned by the quadratic decision boundary of Eq. (1.15), and wherein the span of the decision regions is regulated by the constraints on the corresponding decision borders of Eqs (1.16)-(1.17).
[0077] Substitution of the vector expressions for κ and κ_0 in Eqs (1.11) and (1.13) into the expression for the discriminant function in Eq. (1.12) determines an expression for a discriminant function of a minimum risk quadratic classification system that classifies feature vectors s into two classes A and B:
D(s) = (k_s − (1/l) Σ_{i=1}^{l} k_{x_{i*}}) κ + (1/l) Σ_{i=1}^{l} y_i(1 − ξ_i), (1.18)
wherein feature vectors s belong to and are related to a collection of N feature vectors {x_i}_{i=1}^{N}, wherein the average locus (1/l) Σ_{i=1}^{l} k_{x_{i*}} determines the average locus of the l extreme vectors that belong to the collection of N feature vectors, wherein the average sign (1/l) Σ_{i=1}^{l} y_i(1 − ξ_i) accounts for class memberships of the principal eigenaxis components on κ_1 and κ_2, wherein the average locus (1/l) Σ_{i=1}^{l} k_{x_{i*}} κ determines the average risk for the decision space Z = Z_1 + Z_2 of the minimum risk quadratic classification system, and wherein the vector transform k_s − (1/l) Σ_{i=1}^{l} k_{x_{i*}} determines the distance between a feature vector s and the locus of average risk. Let s denote an unknown feature vector related to a collection of N feature vectors {x_i}_{i=1}^{N} that are inputs to one of the machine learning algorithms of the invention, wherein each feature vector x_i has a label y_i, wherein y_i = +1 if x_i ∈ A and y_i = −1 if x_i ∈ B, and wherein a discriminant function of a minimum risk quadratic classification system has been determined. Now take any given unknown feature vector s.
[0078] The discriminant function of Eq. (1.18) determines the likely location of the unknown feature vector s, wherein the likely location of s is determined by the vector projection of k_s − (1/l) Σ_{i=1}^{l} k_{x_{i*}} onto the dual locus of likelihood components and principal eigenaxis components κ_1 − κ_2:
(k_s − (1/l) Σ_{i=1}^{l} k_{x_{i*}}) κ = ‖k_s − (1/l) Σ_{i=1}^{l} k_{x_{i*}}‖ ‖κ_1 − κ_2‖ cos θ,
wherein the component of k_s − (1/l) Σ_{i=1}^{l} k_{x_{i*}} along the dual locus of κ_1 − κ_2 determines the signed magnitude ‖k_s − (1/l) Σ_{i=1}^{l} k_{x_{i*}}‖ cos θ along the axis of κ_1 − κ_2, wherein θ is the angle between the transformed unknown feature vector k_s − (1/l) Σ_{i=1}^{l} k_{x_{i*}} and κ_1 − κ_2, and wherein the decision region that the unknown feature vector s is located within is determined by the sign of the expression in Eq. (1.18).
[0079] Therefore, the likely location of the unknown feature vector s is determined by the scalar value of ‖k_s − (1/l) Σ_{i=1}^{l} k_{x_{i*}}‖ cos θ along the axis of the dual locus κ_1 − κ_2, wherein the scalar value of the expression in Eq. (1.18) indicates the decision region Z_1 or Z_2 that the unknown feature vector s is located within, along with the corresponding class of s.
[0080] Thus, if D(s) > 0, then the unknown feature vector s is located within region Z_1 and s ∈ A, whereas if D(s) < 0, then the unknown feature vector s is located within region Z_2 and s ∈ B.
[0081] The minimum risk quadratic classification system of the invention decides which of the two classes A or B that the unknown feature vector s belongs to according to the sign of +1 or −1 that is output by the signum function:
sign(D(s)) = sign((k_s − (1/l) Σ_{i=1}^{l} k_{x_{i*}}) κ + (1/l) Σ_{i=1}^{l} y_i(1 − ξ_i)), (1.19)
and thereby classifies the unknown feature vector s.
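The sign test of Eq. (1.19) can be sketched as follows. The expansion of κ over the extreme vectors, the polynomial kernel, and the use of a single common slack value ξ are assumptions for illustration (ξ = 0 corresponds to non-overlapping distributions):

```python
import numpy as np

def classify(s, Xe, ye, psi, xi=0.0):
    """Evaluate sign(D(s)) for one unknown feature vector s (sketch).

    D(s) = kappa(s) - average of kappa over the l extreme points
           + average sign,
    with kappa(z) = sum_j psi_j y_j k(x_j, z) expanded over the
    extreme vectors Xe and a second-order polynomial kernel.  xi is
    an assumed common slack value.
    """
    k = lambda Z, z: (Z @ z + 1.0) ** 2          # kernel against rows of Z
    kappa = lambda z: (psi * ye) @ k(Xe, z)
    avg_risk = np.mean([kappa(x) for x in Xe])   # locus of average risk
    avg_sign = np.mean(ye * (1.0 - xi))          # average sign
    return int(np.sign(kappa(s) - avg_risk + avg_sign))
```

A point near the class A extreme vector lands in region Z_1 (output +1) and a point near the class B extreme vector lands in region Z_2 (output −1).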
[0082] Thus, the discriminant function of the invention in Eq. (1.18) determines likely locations of each one of the feature vectors x.sub.i that belong to a collection of N feature vectors {x.sub.i}.sub.i=1.sup.N and any given unknown feature vectors s related to the collection, wherein the feature vectors are inputs to one of the machine learning algorithms of the invention and a discriminant function of a minimum risk quadratic classification system has been determined.
[0083] Further, the discriminant function identifies the decision regions Z.sub.1 and Z.sub.2 related to the two classes A and B that each one of the N feature vectors x.sub.i and the unknown feature vectors s are located within, wherein the discriminant function recognizes the classes of each one of the N feature vectors x.sub.i and each one of the unknown feature vectors s, and the minimum risk quadratic classification system of the invention in Eq. (1.19) decides which of the two classes that each one of the N feature vectors x.sub.i and each one of the unknown feature vectors s belong to, and thereby classifies the collection of N feature vectors {x.sub.i}.sub.i=1.sup.N and any given unknown feature vectors s.
[0084] Therefore, discriminant functions of the invention exhibit a novel and useful property, wherein, for any given collection of feature vectors that belong to two classes and are inputs to a machine learning algorithm of the invention, the discriminant function that is determined by the machine learning algorithm determines likely locations of each one of the feature vectors that belong to the given collection of feature vectors and any given unknown feature vectors related to the collection, and identifies the decision regions related to the two classes that each one of the feature vectors and each one of the unknown feature vectors are located within, wherein the discriminant function recognizes the classes of the feature vectors and the unknown feature vectors according to the signs related to the two classes.
[0085] The likelihood components and the corresponding principal eigenaxis components ψ_{1i*} k_{x_{1i*}} and ψ_{2i*} k_{x_{2i*}} on the dual locus of κ_1 − κ_2 are determined by the geometric and the statistical structure of the geometric locus of signed and scaled reproducing kernels of extreme points:
κ = Σ_{i=1}^{l_1} ψ_{1i*} k_{x_{1i*}} − Σ_{i=1}^{l_2} ψ_{2i*} k_{x_{2i*}},
wherein the scale factors ψ_{1i*} and ψ_{2i*} of the geometric locus determine magnitudes ψ_{1i*}‖k_{x_{1i*}}‖ and ψ_{2i*}‖k_{x_{2i*}}‖, as well as critical minimum eigenenergies ‖ψ_{1i*} k_{x_{1i*}}‖²_min c and ‖ψ_{2i*} k_{x_{2i*}}‖²_min c, exhibited by respective principal eigenaxis components on the dual locus of κ_1 − κ_2, and wherein each scale factor ψ_{1i*} or ψ_{2i*} determines a conditional density and a corresponding conditional likelihood for a respective extreme point x_{1i*} or x_{2i*}.
[0086] Scale factors are determined by finding a satisfactory solution for the Lagrangian dual optimization problem in Eq. (1.9), wherein finding a geometric locus of signed and scaled reproducing kernels of extreme points involves optimizing a vector-valued cost function with respect to constraints on the scaled extreme vectors on the dual loci of ψ and κ, wherein the constraints are specified by the KKT conditions in Eqs (1.3)-(1.7). The Wolfe dual geometric locus of scaled extreme points on ψ is determined by the largest eigenvector ψ_max of the kernel matrix Q associated with the quadratic form ψ_maxᵀQψ_max in Eq. (1.9), wherein ψᵀy = 0 and ψ_i* > 0, and wherein ψ_max is the principal eigenaxis of an implicit quadratic decision boundary, associated with the constrained quadratic form ψ_maxᵀQψ_max, within the Wolfe dual principal eigenspace of ψ, wherein the form of the inner product statistics contained within the kernel matrix Q determines an intrinsic coordinate system of the intrinsic quadratic decision boundary.
[0087] Further, the intrinsic coordinate system of the intrinsic quadratic decision boundary of Eq. (1.9) is an inherent function of inner product statistics between the reproducing kernels k_{x_i} of the N feature vectors.
[0088] The theorem for convex duality indicates that the principal eigenaxis of ψ satisfies a critical minimum eigenenergy constraint that is symmetrically and equivalently related to the critical minimum eigenenergy constraint on the principal eigenaxis of κ, within the Wolfe dual principal eigenspace of ψ and κ: ‖ψ‖²_min c and ‖κ‖²_min c. Accordingly, the quadratic form in Eq. (1.9) is maximized by the largest eigenvector ψ_max of Q:
max ψᵀQψ = ψ_maxᵀ Q ψ_max = λ_max ‖ψ_max‖²,
and the functional 1ᵀψ − ψᵀQψ/2 in Eq. (1.9) is maximized by the largest eigenvector ψ_max of Q, wherein the constrained quadratic form ψᵀQψ/2, wherein ψ_maxᵀy = 0 and ψ_i* > 0, reaches its smallest possible value. It follows that the principal eigenaxis components on ψ satisfy minimum length constraints.
[0089] The principal eigenaxis components on ψ also satisfy an equilibrium constraint. The KKT condition in Eq. (1.4) requires that the magnitudes of the principal eigenaxis components on the dual locus of ψ satisfy the locus equation:
Σ_{i: y_i = +1} ψ_i* − Σ_{i: y_i = −1} ψ_i* = 0, (1.20)
wherein Eq. (1.20) determines the Wolfe dual equilibrium point:
Σ_{i=1}^{l_1} ψ_{1i*} − Σ_{i=1}^{l_2} ψ_{2i*} = 0, (1.21)
of a minimum risk quadratic classification system, wherein the critical minimum eigenenergies exhibited by the principal eigenaxis of ψ are symmetrically concentrated.
[0090] Given Eq. (1.21), it follows that the integrated lengths of the Wolfe dual principal eigenaxis components correlated with each class balance each other, wherein the principal eigenaxis of ψ is in statistical equilibrium:
Σ_{i=1}^{l_1} ψ_{1i*} = Σ_{i=1}^{l_2} ψ_{2i*}. (1.22)
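The equilibrium point of Eq. (1.22) is easy to verify numerically for a fitted system, since it follows from the KKT condition of Eq. (1.4); the tolerance below is an illustrative assumption:

```python
import numpy as np

def in_statistical_equilibrium(psi, y, tol=1e-8):
    # Eq. (1.22): the integrated scale factors of class A (y = +1)
    # balance those of class B (y = -1).  This follows from the KKT
    # condition sum_i psi_i y_i = 0 in Eq. (1.4).  tol is assumed.
    psi, y = np.asarray(psi, float), np.asarray(y, float)
    return abs(psi[y > 0].sum() - psi[y < 0].sum()) < tol
```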
[0091] Now, each scale factor ψ_{1i*} or ψ_{2i*} is correlated with a respective extreme vector k_{x_{1i*}} or k_{x_{2i*}}. Therefore, let l_1 + l_2 = l, and express the principal eigenaxis of ψ in terms of l scaled, unit extreme vectors:
ψ = Σ_{i=1}^{l_1} ψ_{1i*} (k_{x_{1i*}} / ‖k_{x_{1i*}}‖) + Σ_{i=1}^{l_2} ψ_{2i*} (k_{x_{2i*}} / ‖k_{x_{2i*}}‖), (1.23)
[0092] wherein ψ_1 and ψ_2 denote the sides of the dual locus of ψ, wherein the side ψ_1 is determined by the vector expression
ψ_1 = Σ_{i=1}^{l_1} ψ_{1i*} (k_{x_{1i*}} / ‖k_{x_{1i*}}‖),
and wherein the side ψ_2 is determined by the vector expression
ψ_2 = Σ_{i=1}^{l_2} ψ_{2i*} (k_{x_{2i*}} / ‖k_{x_{2i*}}‖).
[0093] The system of locus equations in Eqs (1.20)-(1.23) demonstrates that the principal eigenaxis of ψ is determined by a geometric locus of scaled, unit extreme vectors from class A and class B, wherein all of the scaled, unit extreme vectors on ψ_1 and ψ_2 are symmetrically distributed over either side of the geometric locus of the principal eigenaxis ψ, wherein a statistical fulcrum is placed directly under the center of the principal eigenaxis of ψ.
[0094] Using Eq. (1.22) and Eq. (1.23), it follows that the length ‖ψ_1‖ of ψ_1 is equal to the length ‖ψ_2‖ of ψ_2: ‖ψ_1‖ = ‖ψ_2‖. It also follows that the total allowed eigenenergies ‖ψ_1‖²_min c and ‖ψ_2‖²_min c exhibited by ψ_1 and ψ_2 are balanced with each other.
[0095] The equilibrium constraint on the geometric locus of the principal eigenaxis ψ in Eq. (1.20) ensures that the critical minimum eigenenergies exhibited by all of the principal eigenaxis components on ψ_1 and ψ_2 are symmetrically concentrated within the principal eigenaxis of ψ:
‖ψ‖²_min c = ‖ψ_1‖²_min c + ‖ψ_2‖²_min c. (1.24)
[0096] Using Eq. (1.24), it follows that the principal eigenaxis of ψ satisfies a state of statistical equilibrium, wherein all of the principal eigenaxis components on ψ are equal or in correct proportions, relative to the center of ψ, wherein components of likelihood components and corresponding principal eigenaxis components of class A, along the axis of ψ_1, are symmetrically balanced with components of likelihood components and corresponding principal eigenaxis components of class B, along the axis of ψ_2.
[0097] Therefore, the principal eigenaxis of ψ determines a point at which the critical minimum eigenenergies exhibited by all of the scaled, unit extreme vectors from class A and class B are symmetrically concentrated, wherein the total allowed eigenenergy ‖ψ‖²_min c exhibited by ψ is symmetrically concentrated within the principal eigenaxis of ψ.
[0098] The scale factors are associated with the fundamental unknowns of the constrained optimization problem in Eq. (1.1). Now, the geometric locus of the principal eigenaxis ψ can be written as
ψ = Σ_{j=1}^{N} ψ_j (k_{x_j} / ‖k_{x_j}‖), (1.25)
wherein each scale factor ψ_j is correlated with scalar projections of a feature vector k_{x_j} onto the signed reproducing kernels of the collection of N feature vectors.
[0099] Further, given a kernel matrix of all possible inner products of reproducing kernels of a collection of N feature vectors {x_i}_{i=1}^{N}, the pointwise covariance statistic of a reproducing kernel k_{x_i}:
cov(k_{x_i}) = (1/N) Σ_{j=1}^{N} k_{x_i} k_{x_j}ᵀ, (1.26)
determines a unidirectional estimate of the joint variations between the random variables of each feature vector k_{x_j} in the collection and the random variables of the feature vector k_{x_i}.
[0100] Let i = 1:l_1, wherein each extreme vector k_{x_{1i*}} is correlated with a principal eigenaxis component ψ_{1i*} (k_{x_{1i*}} / ‖k_{x_{1i*}}‖) on ψ_1. Now take the extreme vector k_{x_{1i*}} that is correlated with the principal eigenaxis component ψ_{1i*} (k_{x_{1i*}} / ‖k_{x_{1i*}}‖). Using Eqs (1.25) and (1.26), it follows that the geometric locus of the principal eigenaxis component on ψ_1 is determined by the locus equation (1.27), wherein components of likelihood components and principal eigenaxis components for class A, along the axis of the extreme vector k_{x_{1i*}}, are symmetrically balanced with opposing components of likelihood components and principal eigenaxis components for class B, along the axis of the extreme vector k_{x_{1i*}}, and wherein ψ_{1i*} determines a scale factor for the extreme vector k_{x_{1i*}} / ‖k_{x_{1i*}}‖. Accordingly, Eq. (1.27) determines a scale factor ψ_{1i*} for a correlated extreme vector k_{x_{1i*}}.
Let i = 1:l_2, wherein each extreme vector k_{x_{2i*}} is correlated with a principal eigenaxis component ψ_{2i*} (k_{x_{2i*}} / ‖k_{x_{2i*}}‖) on ψ_2. Now take the extreme vector k_{x_{2i*}} that is correlated with the principal eigenaxis component ψ_{2i*} (k_{x_{2i*}} / ‖k_{x_{2i*}}‖). Using Eqs (1.25) and (1.26), it follows that the geometric locus of the principal eigenaxis component on ψ_2 is determined by the locus equation (1.28), wherein components of likelihood components and principal eigenaxis components for class B, along the axis of the extreme vector k_{x_{2i*}}, are symmetrically balanced with opposing components of likelihood components and principal eigenaxis components for class A, along the axis of the extreme vector k_{x_{2i*}}, and wherein ψ_{2i*} determines a scale factor for the extreme vector k_{x_{2i*}} / ‖k_{x_{2i*}}‖. Accordingly, Eq. (1.28) determines a scale factor ψ_{2i*} for a correlated extreme vector k_{x_{2i*}}.
[0101] Given the pointwise covariance statistic in Eq. (1.26), it follows that Eq. (1.27) and Eq. (1.28) determine the manner in which the first and second order vector components of a set of l scaled extreme vectors {ψ_{j*} k_{x_{j*}}}_{j=1}^{l}, wherein the set belongs to a collection of N feature vectors {x_i}_{i=1}^{N}, are distributed along the axes of respective extreme vectors k_{x_{j*}}, wherein the first and second order vector components of each scaled extreme vector ψ_{j*} k_{x_{j*}} are symmetrically distributed according to: (1) a class label +1 or −1; (2) a signed magnitude along the axis of an extreme vector; and (3) a symmetrically balanced distribution of l scaled extreme vectors {ψ_{j*} k_{x_{j*}}}_{j=1}^{l} along the axis of the scaled extreme vector ψ_{j*} k_{x_{j*}}, wherein the symmetrically balanced distribution is specified by the scale factor ψ_{j*}.
[0102] Accordingly, the geometric locus of each principal eigenaxis component
on the geometric locus of the principal eigenaxis determines the manner in which the first and second order vector components of an extreme vector
are symmetrically distributed over the axes of a set of l signed and scaled extreme vectors:
[0103] It follows that the geometric locus of each principal eigenaxis component
on the geometric locus of the principal eigenaxis determines a conditional distribution of first and second degree coordinates for a correlated extreme point k.sub.x.sub.
wherein
determines a pointwise conditional density estimate
for the correlated extreme point
wherein the component of the extreme vector
is symmetrically distributed over the geometric locus of the principal eigenaxis :
and wherein
determines a pointwise conditional density estimate
for the correlated extreme point
wherein the component of the extreme vector
is symmetrically distributed over the axis of the geometric locus of :
[0104] Thus, each scale factor .sub.1i* or .sub.2i* determines a conditional density and a corresponding conditional likelihood for a correlated extreme point
[0105] Therefore, conditional densities and corresponding conditional likelihoods
for the l.sub.1
extreme points are identically distributed over the principal eigenaxis components on .sub.1
wherein
determines a conditional density and a corresponding conditional likelihood for a correlated extreme point
and wherein .sub.1 is a parameter vector for a class-conditional probability density function
for a given set
of extreme points
that belong to a collection of N feature vectors {x.sub.i}.sub.i=1.sup.N:
wherein the area
under a scaled extreme vector
determines a conditional probability that an extreme point
will be observed within a localized region of either region Z.sub.1 or region Z.sub.2 within a decision space Z, and wherein the area under the conditional density function
determines the conditional probability
of observing the set
of extreme points
within localized regions of the decision space Z=Z.sub.1+Z.sub.2 of a minimum risk quadratic classification system
[0106] Likewise, conditional densities and corresponding conditional likelihoods
for the l.sub.2
extreme points are identically distributed over the principal eigenaxis components on .sub.2
wherein
determines a conditional density and a corresponding conditional likelihood for a correlated extreme point
and wherein .sub.2 is a parameter vector for a class-conditional probability density function
for a given set
of extreme points
that belong to a collection of N feature vectors {x.sub.i}.sub.i=1.sup.N:
wherein the area
under a scaled extreme vector
determines a conditional probability that an extreme point
will be observed within a localized region of either region Z.sub.1 or region Z.sub.2 within a decision space Z, and wherein the area under the conditional density function
determines the conditional probability
of observing the set
of extreme points
within localized regions of the decision space Z=Z.sub.1+Z.sub.2 of a minimum risk quadratic classification system
[0107] The integral of a conditional density function
for class A
over the decision space Z=Z.sub.1+Z.sub.2 of a minimum risk quadratic classification system, determines the conditional probability
of observing a set
of extreme points
within localized regions of the decision space Z=Z.sub.1+Z.sub.2, wherein integrated conditional densities
of extreme points
located within the decision region Z.sub.1 determine costs
for expected counter risks
of making correct decisions, and integrated conditional densities
of extreme points
located within the decision region Z.sub.2 determine costs
for expected risks
of making decision errors.
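The split described above, between expected counter risks of correct decisions and expected risks of decision errors, can be illustrated numerically. The following sketch is illustrative only and is not part of the claimed method: it assumes two one-dimensional Gaussian class-conditional densities and hypothetical decision regions Z1 = {x < 0} and Z2 = {x >= 0}, then integrates each density over both regions.

```python
import numpy as np

# Hypothetical class-conditional densities for class A and class B
# (assumed Gaussians; the patent does not prescribe these forms).
def gaussian_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

x = np.linspace(-10.0, 10.0, 200001)
dx = x[1] - x[0]
p_A = gaussian_pdf(x, -1.0, 1.0)   # density for class A
p_B = gaussian_pdf(x, +1.0, 1.0)   # density for class B

# Assumed decision regions for equal priors: Z1 (decide A) and Z2 (decide B).
Z1 = x < 0.0
Z2 = ~Z1

# Counter risk: a class density integrated over its own decision region (correct decisions).
# Risk: the same density integrated over the opposing region (decision errors).
counter_risk_A = np.sum(p_A[Z1]) * dx   # A observed in Z1 -> correct
risk_A = np.sum(p_A[Z2]) * dx           # A observed in Z2 -> error
counter_risk_B = np.sum(p_B[Z2]) * dx
risk_B = np.sum(p_B[Z1]) * dx

# Each density integrates to ~1 over Z = Z1 + Z2, split between risk and counter risk.
total_error = 0.5 * (risk_A + risk_B)
print(round(total_error, 4))
```

For this symmetric case the total probability of error equals the Gaussian tail beyond one standard deviation, roughly 0.159.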
[0108] Accordingly, all of the scaled extreme vectors
from class A possess critical minimum eigenenergies
that determine either costs for obtaining expected risks of making decision errors or costs
for obtaining expected counter risks of making correct decisions.
[0109] Therefore, the conditional probability function for class A is given by the integral
over the decision space Z=Z.sub.1+Z.sub.2 of a minimum risk quadratic classification system, wherein the integral of Eq. (1.29) has a solution in terms of the critical minimum eigenenergy Z|.sub.1.sub.min.sub.
[0110] The integral of a conditional density function
for class B
over the decision space Z=Z.sub.1+Z.sub.2 of a minimum risk quadratic classification system, determines the conditional probability
of observing a set
of extreme points
within localized regions of the decision space Z=Z.sub.1+Z.sub.2, wherein integrated conditional densities
of extreme points
located within the decision region Z.sub.1 determine costs
for expected risks
of making decision errors, and integrated conditional densities
of extreme points
located within the decision region Z.sub.2 determine costs
for expected counter risks
of making correct decisions.
[0111] Accordingly, all of the scaled extreme vectors
from class B possess critical minimum eigenenergies
that determine either costs for obtaining expected risks of making decision errors or costs
for obtaining expected counter risks of making correct decisions.
[0112] Therefore, the conditional probability function
for class B is given by the integral
over the decision space Z=Z.sub.1+Z.sub.2 of a minimum risk quadratic classification system, wherein the integral of Eq. (1.30) has a solution in terms of the critical minimum eigenenergy Z|.sub.2.sub.min.sub.
[0113] Machine learning algorithms of the present invention find the right mix of principal eigenaxis components on the dual loci by accomplishing an elegant statistical balancing feat within the Wolfe dual principal eigenspace. The scale factors {.sub.i*}.sub.i=1.sup.l of the principal eigenaxis components play a fundamental role in this statistical balancing feat.
[0114] Using Eq. (1.27), the integrated lengths .sub.i=1.sup.l.sup.
and, using Eq. (1.28), the integrated lengths .sub.i=1.sup.l.sup.
[0115] Returning to Eq. (1.22), wherein the principal eigenaxis of is in statistical equilibrium, it follows that the RHS of Eq. (1.31) equals the RHS of Eq. (1.32):
wherein components of all of the extreme vectors
from class A and class B are distributed over the axes of .sub.1 and .sub.2 in the symmetrically balanced manner:
wherein components of extreme vectors
along the axis of .sub.2 oppose components of extreme vectors
along the axis of .sub.1 oppose components of extreme vectors
along the axis of .sub.1 oppose components of extreme vectors
along the axis of .sub.2.
[0116] Using Eq. (1.33), it follows that components
of extreme vectors
along the axis of .sub.1, wherein the axis of .sub.1 is determined by distributions of conditional likelihoods of extreme points
and opposing components
of extreme vectors
along the axis of .sub.2, wherein the axis of .sub.2 is determined by distributions of conditional likelihoods of extreme points
are symmetrically balanced with components
of extreme vectors
along the axis of .sub.2, wherein the axis of .sub.2 is determined by distributions of conditional likelihoods of extreme points
and opposing components
of extreme vectors k.sub.x.sub.
wherein counteracting and opposing components of likelihoods of extreme vectors
associated with counter risks and risks for class A, along the axis of are symmetrically balanced with counteracting and opposing components of likelihoods of extreme vectors
associated with counter risks and risks for class B, along the axis of .
[0117] Now rewrite Eq. (1.33) as:
wherein components of all of the extreme vectors k.sub.x.sub.
wherein components of likelihoods of extreme vectors
associated with counter risks and risks for class A and class B along the axis of .sub.1, are symmetrically balanced with components of likelihoods of extreme vectors
associated with counter risks and risks for class A and class B along the axis of .sub.2. Therefore, machine learning algorithms of the invention determine scale factors .sub.1i* and .sub.2i* for the geometric locus of signed and scaled reproducing kernels of extreme points in Eq. (1.11)
that satisfy suitable length constraints, wherein the principal eigenaxis of and the principal eigenaxis of are both formed by symmetrical distributions of likelihoods of extreme vectors
from class A and class B, wherein components of likelihoods of extreme vectors
associated with counter risks and risks for class A and class B are symmetrically balanced with each other: along the axis of .sub.1 and .sub.2 of the principal eigenaxis of and along the axis of .sub.1 and .sub.2. of the principal eigenaxis of .
[0118] Given Eqs (1.33) and (1.34), it follows that the locus equation
determines the primal equilibrium point of a minimum risk quadratic classification systemwithin a Wolfe dual principal eigenspacewherein the form of Eq. (1.35) is determined by geometric and statistical conditions that are satisfied by the dual loci of and .
[0119] A discriminant function of the invention satisfies the geometric locus of a quadratic decision boundary of a minimum risk quadratic classification system in terms of the critical minimum eigenenergy Z|.sub.min.sub..sub.min(Z|.sub.min.sub.
[0120] The KKT condition in Eq. (1.7) on the Lagrangian function in Eq. (1.2) and the theorem of Karush, Kuhn, and Tucker determine the manner in which a discriminant function of the invention satisfies the geometric loci of the quadratic decision boundary in Eq. (1.15) and the quadratic decision borders in Eqs (1.16) and (1.17).
[0121] Accordingly, given a Wolfe dual geometric locus of scaled unit extreme vectors
wherein {.sub.i*>0.sub.i=1.sup.l and .sub.i=1.sup.l.sub.i*y.sub.i=0, it follows that the l likelihood components and corresponding principal eigenaxis components
on the dual locus of satisfy the system of locus equations:
within the primal principal eigenspace of the minimum risk quadratic classification system, wherein each regularization parameter is either 0 or a small value much less than one, e.g. 0.02.
[0122] Take the set
of l.sub.1 extreme vectors that belong to class A. Using Eq. (1.36) and letting y.sub.i=+1, it follows that the total allowed eigenenergy and the minimum expected risk exhibited by .sub.1 are both determined by the identity
Z|.sub.1.sub.min.sub.
wherein the constrained discriminant function k.sub.s+.sub.0=+1 satisfies the geometric locus of the quadratic decision border in Eq. (1.16) in terms of the critical minimum eigenenergy Z|.sub.1.sub.min.sub..sub.min(Z|.sub.1.sub.min.sub.
[0123] Take the set
of l.sub.2 extreme vectors that belong to class B. Using Eq. (1.36) and letting y.sub.i=-1, it follows that the total allowed eigenenergy and the minimum expected risk exhibited by .sub.2 are both determined by the identity
Z|.sub.min.sub.
wherein the constrained discriminant function k.sub.s+.sub.0=-1 satisfies the geometric locus of the quadratic decision border in Eq. (1.17) in terms of the critical minimum eigenenergy Z|.sub.2.sub.min.sub..sub.min(Z|.sub.2.sub.min.sub.
[0124] Summation over the complete system of locus equations that are satisfied by
and by .sub.2
and using the equilibrium constraint on the dual locus of in Eq. (1.22), wherein the principal eigenaxis of is in statistical equilibrium, produces the identity that determines the total allowed eigenenergy Z|.sub.min.sub..sub.min(Z|.sub.min.sub.
wherein the constrained discriminant function k.sub.s+.sub.0=0 satisfies the geometric locus of the quadratic decision boundary in Eq. (1.15) in terms of the critical minimum eigenenergy lZ|.sub.1.sub.2.sub.min.sub..sub.min (Z|.sub.1.sub.2.sub.min.sub.
[0125] within the primal principal eigenspace of the dual locus of .sub.1.sub.2, and wherein the dual loci of and are symmetrically and equivalently related to each other within the Wolfe dual-principal eigenspace.
[0126] Given Eq. (1.39), it follows that the total minimum eigenenergy Z|.sub.1.sub.2.sub.min.sub..sub.min(Z|.sub.1.sub.2.sub.min.sub.
(.sub.1.sub.2).sub.i=1.sup.l.sub.i*(1.sub.i).sub.i=1.sup.l.sub.i*.sub.i=1.sup.l.sub.i*.sub.i,
wherein regularization parameters that are much less than one determine negligible constraints on the minimum expected risk .sub.min(Z|.sub.1.sub.2.sub.min.sub.
[0127] Now, take any given collection {x.sub.i}.sub.i=1.sup.N of feature vectors x.sub.i that are inputs to one of the machine learning algorithms of the invention, wherein each feature vector x.sub.i has a label y.sub.i, wherein y.sub.i=+1 if x.sub.i belongs to class A and y.sub.i=-1 if x.sub.i belongs to class B.
[0128] The system of locus equations in Eqs (1.37)-(1.39) determines the manner in which a constrained discriminant function of the invention satisfies parametric, primary and secondary integral equations of binary classification over the decision space of a minimum risk quadratic classification system of the invention. The primary integral equation is devised first.
[0129] Using Eq. (1.11), Eq. (1.13), Eq. (1.22) and Eqs (1.37)-(1.39), it follows that the constrained discriminant function
satisfies the locus equations
Z|.sub.1.sub.min.sub.
and
Z|.sub.2.sub.min.sub.
over the decision regions Z.sub.1 and Z.sub.2 of the decision space Z of the minimum risk quadratic classification system
wherein the parameters (y).sub.i=1.sup.l.sup.
are equalizer statistics.
[0130] Using Eqs (1.40) and (1.41) along with the identity in Eq. (1.31)
and the identity in Eq. (1.32)
it follows that the constrained discriminant function satisfies the locus equation over the decision regions Z.sub.1 and Z.sub.2 of the decision space Z of the minimum risk quadratic classification system:
wherein both the left-hand side and the right-hand side of Eq. (1.42) satisfy half the total allowed eigenenergy Z|.sub.1.sub.2.sub.min.sub..sub.min(Z|.sub.1.sub.2.sub.min.sub.
[0131] Returning to the integral in Eq. (1.29):
wherein the above integral determines a conditional probability
for class A, and to the integral in Eq. (1.30)
P(k.sub.x.sub.
wherein the above integral determines a conditional probability
for class B, it follows that the value for the integration constant C.sub.1 in Eq. (1.29) is: C.sub.1=.sub.1.sub.2 cos .sub..sub.
[0132] Substituting the value for C.sub.1 into Eq. (1.29), and using Eq. (1.29) and Eq. (1.42), it follows that the conditional probability
for class A, wherein the integral of the conditional density function
for class A is given by the integral:
over the decision space Z=Z.sub.1+Z.sub.2 of the minimum risk quadratic classification system, is determined by half the total allowed eigenenergy Z|.sub.1.sub.2.sub.min.sub..sub.min(Z|.sub.1.sub.2.sub.min.sub.
[0133] Substituting the value for C.sub.2 into Eq. (1.30), and using Eq. (1.30) and Eq. (1.42), it follows that the conditional probability
for class B, wherein the integral of the conditional density function
for class B is given by the integral:
over the decision space Z=Z.sub.1+Z.sub.2 of the minimum risk quadratic classification system, is determined by half the total allowed eigenenergy Z|.sub.1.sub.2.sub.min.sub..sub.min(Z|.sub.1.sub.2.sub.min.sub.
[0134] Given Eqs (1.43) and (1.44), it follows that the integral of the conditional density function
for class A and the integral of the conditional density function
for class B are both constrained to satisfy half the total allowed eigenenergy Z|.sub.1.sub.2.sub.min.sub..sub.min(Z|.sub.1.sub.2.sub.min.sub.
Therefore, the conditional probability
of observing the set
of l.sub.1 extreme points
from class A within localized regions of the decision space Z=Z.sub.1+Z.sub.2 of the minimum risk quadratic classification system is equal to the conditional probability
of observing the set
of l.sub.2 extreme points
from class B within localized regions of the decision space Z=Z.sub.1+Z.sub.2 of the minimum risk quadratic classification system, wherein
and wherein all of the extreme points belong to the collection of feature vectors {x.sub.i}.sub.i=1.sup.N that are inputs to a machine learning algorithm of the invention.
[0135] Therefore, minimum risk quadratic classification systems of the invention exhibit a novel property of computer-implemented quadratic classification systems, wherein for any given collection of feature vectors {x.sub.i}.sub.i=1.sup.N that are inputs to one of the machine learning algorithms of the invention: (1) the conditional probability, (2) the minimum expected risk, and (3) the total allowed eigenenergy exhibited by a minimum risk quadratic classification system for class A is equal to (1) the conditional probability, (2) the minimum expected risk, and (3) the total allowed eigenenergy exhibited by the minimum risk quadratic classification system for class B.
[0136] Using Eqs (1.43) and (1.44), it follows that the constrained discriminant function of the invention
is the solution of the parametric, fundamental integral equation of binary classification:
over the decision space Z=Z.sub.1+Z.sub.2 of the minimum risk quadratic classification system
of the invention, wherein the decision space Z is spanned by symmetrical decision regions Z.sub.1+Z.sub.2=Z:Z.sub.1Z.sub.2, and wherein the conditional probability P(Z.sub.1|.sub.1) and the counter risk .sub.min(Z.sub.1|.sub.1.sub.min.sub.
.sub.min(Z.sub.2|.sub.1.sub.min.sub.
.sub.min(Z.sub.1|.sub.2.sub.min.sub.
.sub.min(Z.sub.2|.sub.2.sub.min.sub.
.sub.min(Z|.sub.1.sub.2.sub.min.sub.
and the Wolfe dual equilibrium point:
of the integral equation f.sub.1(D(s)).
[0137] Further, the novel principal eigenaxis of the invention that determines discriminant functions of the invention along with minimum risk quadratic classification systems of the inventionsatisfies the law of cosines in the symmetrically balanced manner that is outlined below.
[0138] Any given geometric locus of signed and scaled reproducing kernels of extreme points:
wherein the geometric locus of a principal eigenaxis determines a dual locus of likelihood components and principal eigenaxis components x=.sub.1.sub.2 that represents a discriminant function D(s)=k.sub.s+.sub.0 of the invention, wherein principal eigenaxis components and corresponding likelihood components
on the dual locus of .sub.1.sub.2 determine conditional densities and conditional likelihoods for respective extreme points
and wherein the geometric locus of the principal eigenaxis determines an intrinsic coordinate system .sub.1.sub.2 of a quadratic decision boundary k.sub.s+.sub.0=0 and an eigenaxis of symmetry
for the decision space Z.sub.1+Z.sub.2=Z:Z.sub.1Z.sub.2 of a minimum risk quadratic classification
of the invention, satisfies the law of cosines
in the symmetrically balanced manner:
wherein is the angle between .sub.1 and .sub.2, and wherein the dual locus of likelihood components and principal eigenaxis components exhibits symmetrical dimensions and density, wherein the total allowed eigenenergy .sub.1.sub.min.sub.
given class A is symmetrically balanced with the total allowed eigenenergy .sub.2.sub.min.sub.
given class B:
.sub.1.sub.min.sub.
wherein the length of side .sub.1 equals the length of side .sub.2
.sub.1=.sub.2,
and wherein components of likelihood components and principal eigenaxis components of class A along the axis of .sub.1 are symmetrically balanced with components of likelihood components and principal eigenaxis components of class B along the axis of .sub.2:
wherein components of critical minimum eigenenergies exhibited by scaled extreme vectors from class A and corresponding counter risks and risks for class A along the axis of .sub.1, are symmetrically balanced with components of critical minimum eigenenergies exhibited by scaled extreme vectors from class B and corresponding counter risks and risks for class B along the axis of .sub.2, and wherein the opposing component of .sub.2 along the axis of .sub.1, is symmetrically balanced with the opposing component of .sub.1 along the axis of .sub.2:
.sub.1[.sub.2 cos .sub..sub.
wherein opposing components of likelihood components and principal eigenaxis components of class B along the axis of .sub.1, are symmetrically balanced with opposing components of likelihood components and principal eigenaxis components of class A along the axis of .sub.2:
wherein opposing components of critical minimum eigenenergies exhibited by scaled extreme vectors from class B and corresponding counter risks and risks for class B along the axis of .sub.1, are symmetrically balanced with opposing components of critical minimum eigenenergies exhibited by scaled extreme vectors from class A and corresponding counter risks and risks for class A along the axis of .sub.2, and wherein opposing and counteracting random forces and influences of the minimum risk quadratic classification system of the invention are symmetrically balanced with each other about the geometric center of the dual locus :
wherein the statistical fulcrum of is located.
[0139] Accordingly, counteracting and opposing components of critical minimum eigenenergies exhibited by all of the scaled extreme vectors on the geometric locus of the principal eigenaxis =.sub.1.sub.2 of the invention, along the axis of the principal eigenaxis , and corresponding counter risks and risks exhibited by the minimum risk quadratic classification system
of the invention, are symmetrically balanced with each other about the geometric center of the dual locus , wherein the statistical fulcrum of is located.
[0140] Now, take the previous collection {x.sub.i}.sub.i=1.sup.N of labeled feature vectors x.sub.i that are inputs to one of the machine learning algorithms of the invention, wherein each feature vector x.sub.i has a label y.sub.i, wherein y.sub.i=+1 if x.sub.i belongs to class A and y.sub.i=-1 if x.sub.i belongs to class B.
[0141] Given that a constrained discriminant function of the invention
is the solution of the parametric, fundamental integral equation of binary classification in Eq. (1.45), and given that the discriminant function is represented by a dual locus of likelihood components and principal eigenaxis components =.sub.1.sub.2 that satisfies the law of cosines in the symmetrically balanced manner outlined above, it follows that the constrained discriminant function satisfies the parametric, secondary integral equation of binary classification:
over the Z.sub.1 and Z.sub.2 decision regions of a minimum risk quadratic classification system, wherein opposing and counteracting random forces and influences of the minimum risk quadratic classification system are symmetrically balanced with each other within the Z.sub.1 and Z.sub.2 decision regions in the following manners: (1) the eigenenergy Z.sub.1|.sub.1.sub.min.sub..sub.min(Z.sub.1|.sub.1.sub.min.sub.
.sub.min(Z.sub.1|.sub.2.sub.min.sub.
.sub.min(Z.sub.2|.sub.2.sub.min.sub.
.sub.min(Z.sub.2|.sub.1.sub.min.sub.
.sub.min(Z.sub.1|.sub.1.sub.min.sub.
.sub.min(Z.sub.1|.sub.2.sub.min.sub.
.sub.min(Z.sub.2|.sub.2.sub.min.sub.
.sub.min(Z.sub.2|.sub.1.sub.min.sub.
.sub.min(Z|.sub.1.sub.2.sub.min.sub.
[0142] Therefore, minimum risk quadratic classification systems of the invention exhibit a novel and useful property, wherein for any given collection of labeled feature vectors that are inputs to a machine learning algorithm of the invention, the minimum risk quadratic classification system determined by the machine learning algorithm satisfies a state of statistical equilibrium, wherein the expected risk and the total allowed eigenenergy exhibited by the minimum risk quadratic classification system are minimized, and the minimum risk quadratic classification system exhibits the minimum probability of error for classifying the collection of feature vectors and feature vectors related to the collection into two classes.
[0143] Further, discriminant functions of minimum risk quadratic classification systems of the invention exhibit a novel and useful property, wherein a discriminant function D(s) of a minimum risk quadratic classification system is determined by a linear combination of a collection of extreme vectors k.sub.x.sub.
a collection of signs y.sub.i=+1 or y.sub.i=-1 associated with the extreme vectors
and a collection of regularization parameters that are either 0 or much less than one:
wherein the collection of extreme vectors {k.sub.x.sub.
wherein the output of the minimum risk quadratic classification system sign(D(s)) is related to the two classes, and wherein the minimum risk quadratic classification system sign(D(s)) exhibits the minimum probability of error for classifying feature vectors that belong to and are related to the collection of feature vectors used to determine the system sign(D(s)).
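The linear-combination form of the discriminant function described in paragraph [0143] can be sketched directly. The sketch below is a hedged illustration: the extreme vectors, signs, scale factors, and bias term are hypothetical stand-ins for quantities a machine learning algorithm of the invention would determine, and the second-degree polynomial reproducing kernel is an assumed concrete choice.

```python
import numpy as np

def reproducing_kernel(x, s):
    """Assumed second-degree polynomial reproducing kernel: replaces a feature
    vector with a curve containing first and second degree vector components."""
    return (np.dot(x, s) + 1.0) ** 2

def discriminant(s, extreme_vectors, signs, scale_factors, bias):
    """D(s) = sum_i y_i * scale_i * k(x_i*, s) + bias (names are assumptions)."""
    total = bias
    for x_i, y_i, scale_i in zip(extreme_vectors, signs, scale_factors):
        total += y_i * scale_i * reproducing_kernel(x_i, s)
    return total

# Hypothetical values standing in for quantities a training run would produce.
extreme_vectors = [np.array([0.5, 0.0]), np.array([3.0, 0.0])]
signs = [+1, -1]            # class labels of the extreme points
scale_factors = [0.9, 0.1]  # stand-in scale factors
bias = 0.0                  # stand-in bias term

# Minimum risk quadratic classification system: sign(D(s)).
s_near = np.array([0.2, 0.1])
s_far = np.array([4.0, 1.0])
print(np.sign(discriminant(s_near, extreme_vectors, signs, scale_factors, bias)),
      np.sign(discriminant(s_far, extreme_vectors, signs, scale_factors, bias)))
```

Classification of an unknown feature vector s is then performed by the system sign(D(s)).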
[0144] Therefore, a discriminant function D(s) of a minimum risk quadratic classification system sign(D(s)) provides a scalable module that can be used to determine an ensemble E=.sub.j=1.sup.M-1sign(D.sub.ij(s)) of discriminant functions of minimum risk quadratic classification systems, wherein the ensemble of M-1 discriminant functions of M-1 minimum risk quadratic classification systems exhibits the minimum probability of error for classifying feature vectors that belong to and are related to M given collections of feature vectors.
[0145] More specifically, discriminant functions of minimum risk quadratic classification systems provide scalable modules that are used to determine a discriminant function of an M-class minimum risk quadratic classification system that classifies feature vectors into M classes, wherein the total allowed eigenenergy and the minimum expected risk that is exhibited by the M-class minimum risk quadratic classification system is determined by the total allowed eigenenergy and the minimum expected risk that is exhibited by M ensembles of M-1 discriminant functions of M-1 minimum risk quadratic classification systems E.sub.M=.sub.i=1.sup.M.sub.j=1.sup.M-1sign(D.sub.ij(s)), wherein each minimum risk quadratic classification system sign(D.sub.ij(s)) of an ensemble E.sub.c.sub.
[0146] It follows that discriminant functions of M-class minimum risk quadratic classification systems that are determined by machine learning algorithms of the invention exhibit the minimum probability of error for classifying feature vectors that belong to M collections of feature vectors and unknown feature vectors related to the M collections of feature vectors.
[0147] It immediately follows that discriminant functions of minimum risk quadratic classification systems of the invention also provide scalable modules that are used to determine a fused discriminant function of a fused minimum risk quadratic classification system that classifies two types of feature vectors into two classes, wherein each type of feature vector has a different number of vector components. The total allowed eigenenergy and the minimum expected risk exhibited by the fused minimum risk quadratic classification system is determined by the total allowed eigenenergy and the minimum expected risk that is exhibited by an ensemble of a discriminant function of a minimum risk quadratic classification system sign(D(s)) and a different discriminant function of a different minimum risk quadratic classification system sign((s)):
(s)), wherein the total allowed eigenenergy and the expected risk exhibited by the fused minimum risk quadratic classification system is minimum for a given collection of feature vectors and a given collection of different feature vectors.
[0148] Any given fused discriminant function of a fused minimum risk quadratic classification system (s)) that is determined by a machine learning algorithm of the invention exhibits the minimum probability of error for classifying feature vectors that belong to and are related to a collection of feature vectors as well as different feature vectors that belong to and are related to a collection of different feature vectors.
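The fused system of paragraphs [0147]-[0148] combines a discriminant function for one type of feature vector with a different discriminant function for a second type having a different number of vector components. The sketch below uses hypothetical stand-in discriminants D and D_tilde and sums their values as an assumed fusion rule; the patented method would instead determine both discriminant functions with machine learning algorithms of the invention.

```python
import numpy as np

# Hypothetical stand-in discriminants for two feature types with different
# numbers of vector components.
def D(s):        # discriminant for 2-component feature vectors
    return 1.0 - np.dot(s, s)          # positive inside the unit circle

def D_tilde(t):  # discriminant for 3-component feature vectors
    return 4.0 - np.dot(t, t)          # positive inside a sphere of radius 2

def fused_classify(s, t):
    """Fused classification system: combine the two discriminant functions.
    Summing the raw discriminant values is an assumed fusion rule."""
    return int(np.sign(D(s) + D_tilde(t)))

print(fused_classify(np.array([0.1, 0.2]), np.array([0.5, 0.5, 0.5])))
```

Each type of feature vector is evaluated by its own discriminant, so the two types never need to share a common dimensionality.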
[0149] Discriminant functions of minimum risk quadratic classification systems of the invention also provide scalable modules that are used to determine a fused discriminant function of a fused M-class minimum risk quadratic classification system that classifies two types of feature vectors into M classes, wherein each type of feature vector has a different number of vector components, and wherein the total allowed eigenenergy and the minimum expected risk exhibited by the fused M-class minimum risk quadratic classification system is determined by the total allowed eigenenergy and the minimum expected risk that is exhibited by M ensembles of M-1 discriminant functions of M-1 minimum risk quadratic classification systems E.sub.M=.sub.i=1.sup.M.sub.j=1.sup.M-1sign(D.sub.ij(s)) and M different ensembles of M-1 different discriminant functions of M-1 different minimum risk quadratic classification systems .sub.M=.sub.i=1.sup.M.sub.j=1.sup.M-1sign(
.sub.ij(s)):
.sub.ij(s)),
and wherein the total allowed eigenenergy and the expected risk exhibited by the fused M-class minimum risk quadratic classification system is minimum for M given collections of feature vectors and M given collections of different feature vectors.
[0150] Therefore, fused discriminant functions of fused M-class minimum risk quadratic classification systems that are determined by machine learning algorithms of the invention exhibit the minimum probability of error for classifying feature vectors that belong to M collections of feature vectors and unknown feature vectors related to the M collections of feature vectors as well as different feature vectors that belong to M collections of different feature vectors and unknown different feature vectors related to the M collections of different feature vectors.
[0151] Further, discriminant functions of the invention determine likely locations of feature vectors that belong to given collections of feature vectors, and of any given unknown feature vectors related to a given collection, wherein a given collection of feature vectors belongs to two classes. Discriminant functions of the invention also identify the decision regions, related to the two classes, within which given collections of feature vectors and any given unknown feature vectors related to a given collection are located, and recognize the classes of such feature vectors, wherein minimum risk quadratic classification systems of the invention decide which of the two classes given collections of feature vectors and any given unknown feature vectors related to a given collection belong to, and thereby classify them. It follows that discriminant functions of minimum risk quadratic classification systems of the invention can be used to determine a classification error rate and a measure of overlap between distributions of feature vectors for two classes of feature vectors. Further, discriminant functions of minimum risk quadratic classification systems of the invention can be used to determine if distributions of two collections of feature vectors are homogeneous distributions.
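The observation in paragraph [0151], that a classification error rate measures distribution overlap and can test whether two collections are homogeneous, can be sketched numerically. The binary classifier below is a hypothetical nearest-centroid stand-in rather than a minimum risk quadratic classification system; the point is only that a near-chance error rate indicates homogeneous (heavily overlapping) distributions, while a near-zero error rate indicates well-separated ones.

```python
import numpy as np

rng = np.random.default_rng(0)

def holdout_error(sample_a, sample_b, n_train):
    """Train a stand-in nearest-centroid classifier on the first n_train points
    of each collection and report its error rate on the held-out points."""
    mu_a = sample_a[:n_train].mean(axis=0)
    mu_b = sample_b[:n_train].mean(axis=0)
    errors, total = 0, 0
    for x in sample_a[n_train:]:
        errors += np.linalg.norm(x - mu_a) > np.linalg.norm(x - mu_b)
        total += 1
    for x in sample_b[n_train:]:
        errors += np.linalg.norm(x - mu_b) > np.linalg.norm(x - mu_a)
        total += 1
    return errors / total

# Two collections drawn from the same distribution -> error near 0.5 (homogeneous).
same_a = rng.normal(0.0, 1.0, size=(200, 2))
same_b = rng.normal(0.0, 1.0, size=(200, 2))
err_same = holdout_error(same_a, same_b, n_train=150)

# Two collections drawn from well-separated distributions -> error near 0.
far_a = rng.normal(0.0, 1.0, size=(200, 2))
far_b = rng.normal(5.0, 1.0, size=(200, 2))
err_far = holdout_error(far_a, far_b, n_train=150)

print(err_same, err_far)
```

A discriminant function of the invention would play the role of the stand-in classifier here, with its error rate serving as the overlap measure.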
Embodiment 1
[0152] The method to determine a discriminant function of a minimum risk quadratic classification system that classifies feature vectors into two classes, designed in accordance with the invention, is fully described within the detailed description of the invention.
[0153] Receive an N×d data set of feature vectors within a computer system, wherein N is the number of feature vectors, d is the number of vector components in each feature vector, and each one of the N feature vectors is labeled with information that identifies which of the two classes each one of the N feature vectors belongs to.
[0154] Receive unknown feature vectors related to the data set within the computer system.
[0155] Choose a reproducing kernel and determine a kernel matrix using the data set by calculating a matrix of all possible inner products of signed reproducing kernels of the N feature vectors, wherein each one of the reproducing kernels of the N feature vectors has a sign of +1 or -1 that identifies which of the two classes each one of the N feature vectors belongs to, and calculate a regularized kernel matrix from the kernel matrix.
[0156] Determine the scale factors of a geometric locus of signed and scaled reproducing kernels of extreme points by using the regularized kernel matrix to solve the dual optimization problem in Eq. (1.9).
[0157] Determine the extreme vectors on the geometric locus by identifying scale factors in the vector of scale factors that exceed zero by a small threshold T, e.g., T=0.0050.
[0158] Determine a sign vector of the signs associated with the extreme vectors using the data set, and compute the average sign using the sign vector.
[0159] Determine a locus of aggregate risk by calculating a kernel matrix using the extreme vectors, and multiply the kernel matrix by the sign vector.
[0160] Determine the geometric locus by using the N feature vectors and the feature vectors being classified to calculate a matrix of inner products between the signed reproducing kernels of the N feature vectors and the reproducing kernels of the feature vectors being classified, and multiplying the matrix by the vector of scale factors.
[0161] Determine the discriminant function of the minimum risk quadratic classification system, wherein the minimum risk quadratic classification system is determined by computing the sign of the discriminant function, and use the system to classify any given unknown feature vectors.
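The steps above can be sketched in code. This is a minimal illustration under stated assumptions, not the patented implementation: it uses a second-order polynomial reproducing kernel, projected gradient ascent as a simple stand-in for solving the dual optimization problem of Eq. (1.9), a threshold relative to the largest scale factor (the specification uses an absolute threshold, e.g. T=0.0050), and a conventional margin-based bias term. All function names and parameter values are illustrative.

```python
import numpy as np

def poly_kernel(A, B):
    # Second-order polynomial reproducing kernel (x.z + 1)^2, which replaces
    # each feature vector with a curve of first- and second-degree components.
    return (A @ B.T + 1.0) ** 2

def fit_discriminant(X, y, reg=0.01, T=0.005, steps=4000):
    # X: N x d feature vectors; y: class labels in {+1, -1}.
    N = len(y)
    K = poly_kernel(X, X)                        # kernel matrix
    Q = np.outer(y, y) * K + reg * np.eye(N)     # regularized kernel matrix
    lr = 1.0 / np.linalg.norm(Q, 2)              # safe step size
    psi = np.zeros(N)
    for _ in range(steps):                       # projected gradient ascent on
        psi = np.maximum(0.0, psi + lr * (1.0 - Q @ psi))  # the dual problem
    # Extreme vectors: scale factors above a small (here relative) threshold.
    extreme = psi > T * psi.max()
    # Bias chosen so extreme vectors lie on the decision border (a common
    # SVM convention; the patent derives its own locus-based statistics).
    b = np.mean(y[extreme] - (psi * y) @ K[:, extreme])
    def D(S):
        # Discriminant function: sign(D(s)) is the classification decision.
        return (psi * y) @ poly_kernel(X, np.atleast_2d(S)) + b
    return D, psi, extreme
```

The scale factors `psi` that survive the threshold identify the extreme vectors located in overlapping or tail regions of the two distributions.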
Embodiment 2
[0162]
[0163] A discriminant function of an M-class minimum risk quadratic classification system that classifies feature vectors into M classes is determined by using a machine learning algorithm of the invention and M collections of N feature vectors, wherein each feature vector in a given collection belongs to the same class, to determine M ensembles of M−1 discriminant functions of M−1 minimum risk quadratic classification systems, wherein the determination of each one of the M ensembles involves using the machine learning algorithm to determine M−1 discriminant functions of M−1 minimum risk quadratic classification systems for a class c_i of feature vectors, wherein the N feature vectors that belong to the class c_i have the sign +1 and all of the N feature vectors that belong to all of the other M−1 classes have the sign −1:
E_{c_i} = Σ_{j=1}^{M−1} sign(D_ij(s)),
wherein the input of the machine learning algorithm for each discriminant function of a minimum risk quadratic classification system sign(D_ij(s)) is the collection of N feature vectors that belongs to the class c_i and a collection of N feature vectors that belongs to one of the other M−1 classes, and wherein the ensemble E_{c_i} is associated with the class c_i.
[0164] Therefore, the M ensembles of the M−1 discriminant functions of the M−1 minimum risk quadratic classification systems
E_M = Σ_{i=1}^{M} Σ_{j=1}^{M−1} sign(D_ij(s))
determine the discriminant function of an M-class minimum risk quadratic classification system that classifies a feature vector s into the class c_i associated with the ensemble E_{c_i}.
[0165] The discriminant function of the M-class minimum risk quadratic classification system D_{E_M}(s)
exhibits the minimum probability of error for classifying feature vectors that belong to the M collections of N feature vectors and unknown feature vectors related to the M collections of N feature vectors. The discriminant function of the M-class minimum risk quadratic classification system determines likely locations of feature vectors that belong to and are related to the M collections of N feature vectors, and identifies decision regions related to the M classes that the feature vectors are located within, wherein the discriminant function recognizes the classes of the feature vectors, and wherein the M-class minimum risk quadratic classification system decides which of the M classes the feature vectors belong to, and thereby classifies the feature vectors.
Embodiment 3
[0166] A fused discriminant function of a fused minimum risk quadratic classification system that classifies two types of feature vectors into two classes, wherein the types of feature vectors have different numbers of vector components, is determined by using a machine learning algorithm of the invention and a collection of N feature vectors and a collection of N different feature vectors to determine an ensemble of a discriminant function of a minimum risk quadratic classification system sign(D(s)) and a different discriminant function of a different minimum risk quadratic classification system sign(D̃(s̃)):
sign(D(s)) + sign(D̃(s̃)),
wherein the discriminant function and the different discriminant function are both determined by the process that is described in EMBODIMENT 1.
[0167] The fused discriminant function of the fused minimum risk quadratic classification system
sign(D(s)) + sign(D̃(s̃))
exhibits the minimum probability of error for classifying the feature vectors that belong to the collection of N feature vectors and unknown feature vectors related to the collection of N feature vectors, as well as the different feature vectors that belong to the collection of N different feature vectors and unknown different feature vectors related to the collection of N different feature vectors. The fused discriminant function determines likely locations of feature vectors that belong to and are related to the collection of N feature vectors, as well as different feature vectors that belong to and are related to the collection of N different feature vectors, and identifies decision regions related to the two classes that the feature vectors and the different feature vectors are located within, wherein the fused discriminant function recognizes the classes of the feature vectors and the different feature vectors, and wherein the fused minimum risk quadratic classification system decides which of the two classes the feature vectors and the different feature vectors belong to, and thereby classifies the feature vectors and the different feature vectors.
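A minimal sketch of the two-type fusion, assuming regularized kernel least squares as a stand-in for the Embodiment 1 solver and summation of the two discriminant responses as the fusion rule (the specification defines the ensemble of the two systems; the particular combination rule here is an assumption):

```python
import numpy as np

def kernel(A, B):
    # Second-order polynomial kernel (x.z + 1)^2.
    return (A @ B.T + 1.0) ** 2

def binary_discriminant(X, y, reg=1e-3):
    # Regularized kernel least squares as a stand-in for the Embodiment 1
    # solver; a holds the signed, scaled expansion coefficients.
    a = np.linalg.solve(kernel(X, X) + reg * np.eye(len(y)), y)
    return lambda S: kernel(np.atleast_2d(S), X) @ a

def fused_classifier(X1, X2, y, reg=1e-3):
    # One discriminant per feature type; the two types may have different
    # numbers of vector components. Summing the two responses and taking
    # the sign is one plausible fusion of {sign(D(s)), sign(D~(s~))}.
    D1 = binary_discriminant(X1, y, reg)
    D2 = binary_discriminant(X2, y, reg)
    return lambda S1, S2: np.sign(D1(S1) + D2(S2))
```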
Embodiment 4
[0168]
[0169] A fused discriminant function of a fused M-class minimum risk quadratic classification system that classifies two types of feature vectors into M classes is determined by using a machine learning algorithm of the invention and M collections of N feature vectors to determine M ensembles of M−1 discriminant functions of M−1 minimum risk quadratic classification systems
E_M = Σ_{i=1}^{M} Σ_{j=1}^{M−1} sign(D_ij(s)),
as well as M collections of N different feature vectors to determine M different ensembles of M−1 different discriminant functions of M−1 different minimum risk quadratic classification systems
Ẽ_M = Σ_{i=1}^{M} Σ_{j=1}^{M−1} sign(D̃_ij(s̃)),
wherein the M ensembles and the M different ensembles are both determined by the process that is described in EMBODIMENT 2. The fused discriminant function of the fused M-class minimum risk quadratic classification system
exhibits the minimum probability of error for classifying feature vectors that belong to the M collections of N feature vectors and unknown feature vectors related to the M collections of N feature vectors, as well as different feature vectors that belong to the M collections of N different feature vectors and unknown different feature vectors related to the M collections of N different feature vectors. The fused discriminant function determines likely locations of feature vectors that belong to and are related to the M collections of N feature vectors, as well as different feature vectors that belong to and are related to the M collections of N different feature vectors, and identifies decision regions related to the M classes that the feature vectors and the different feature vectors are located within, wherein the fused discriminant function recognizes the classes of the feature vectors and the different feature vectors, and wherein the fused M-class minimum risk quadratic classification system decides which of the M classes the feature vectors and the different feature vectors belong to, and thereby classifies the feature vectors and the different feature vectors.
Embodiment 5
[0170]
[0171] The process of using a discriminant function of a minimum risk quadratic classification system to determine a classification error rate and a measure of overlap between distributions of feature vectors for two classes of feature vectors involves the following steps:
[0172] Receive an N×d data set of feature vectors within a computer system, wherein N is the number of feature vectors, d is the number of vector components in each feature vector, and each one of the N feature vectors is labeled with information that identifies which of the two classes each one of the N feature vectors belongs to.
[0173] Receive an N×d test data set of test feature vectors related to the data set within the computer system, wherein N is a number of test feature vectors, d is a number of vector components in each test feature vector, and each one of the N test feature vectors is labeled with information that identifies which of the two classes each one of the N test feature vectors belongs to.
[0174] Determine the discriminant function of the minimum risk quadratic classification system by performing the steps outlined in EMBODIMENT 1.
[0175] Use the minimum risk quadratic classification system to classify the N feature vectors.
[0176] Determine an in-sample classification error rate for the two classes of feature vectors by calculating the average number of wrong decisions of the minimum risk quadratic classification system for classifying the N feature vectors.
[0177] Use the minimum risk quadratic classification system to classify the N test feature vectors.
[0178] Determine an out-of-sample classification error rate for the two classes of test feature vectors by calculating the average number of wrong decisions of the minimum risk quadratic classification system for classifying the N test feature vectors.
[0179] Determine the classification error rate for the two classes of feature vectors by averaging the in-sample classification error rate and the out-of-sample classification error rate.
[0180] Determine a measure of overlap between distributions of feature vectors for the two classes of feature vectors using the N feature vectors and the extreme vectors that have been identified, by calculating the ratio of the number of the extreme vectors to the number of the N feature vectors, wherein the ratio determines the measure of overlap.
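The error-rate and overlap computations in the steps above reduce to a few lines. This sketch assumes the in-sample and out-of-sample predictions have already been produced by the minimum risk quadratic classification system and that the extreme vectors have been counted; the function name is illustrative.

```python
import numpy as np

def error_rate_and_overlap(pred_in, y_in, pred_out, y_out, n_extreme):
    # In-sample and out-of-sample error rates are the fractions of wrong
    # decisions on the training and test feature vectors, respectively.
    in_err = float(np.mean(pred_in != y_in))
    out_err = float(np.mean(pred_out != y_out))
    # The classification error rate averages the two rates; the measure of
    # overlap is the ratio of extreme vectors to the N training vectors.
    return 0.5 * (in_err + out_err), n_extreme / len(y_in)
```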
Embodiment 6
[0181]
[0182] Receive an N×d data set of feature vectors within a computer system, wherein N is the number of feature vectors, d is the number of vector components in each feature vector, and each one of the N feature vectors is labeled with information that identifies which of the two collections each one of the N feature vectors belongs to.
[0183] Determine the discriminant function of the minimum risk quadratic classification system by performing the steps outlined in EMBODIMENT 1.
[0184] Use the minimum risk quadratic classification system to classify the N feature vectors.
[0185] Determine an in-sample classification error rate for the two collections of feature vectors by calculating the average number of wrong decisions of the minimum risk quadratic classification system for classifying the N feature vectors.
[0186] Determine a measure of overlap between distributions of feature vectors for the two collections of feature vectors using the N feature vectors and the extreme vectors that have been identified, by calculating the ratio of the number of the extreme vectors to the number of the N feature vectors, wherein the ratio determines the measure of overlap.
[0187] Determine whether the distributions of the two collections of the N feature vectors are homogeneous distributions by using the in-sample classification error rate and the measure of overlap, wherein the distributions of the two collections of the N feature vectors are homogeneous distributions if the measure of overlap has an approximate value of one and the in-sample classification error rate has an approximate value of one half.
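The homogeneity test of this embodiment can be sketched directly; the tolerance that operationalizes "approximate value" is an assumption, since the specification does not fix one.

```python
def distributions_homogeneous(in_sample_error, overlap, tol=0.1):
    # Embodiment 6 criterion: the two collections are judged homogeneous
    # when the measure of overlap is approximately one and the in-sample
    # error rate is approximately one half; tol defines "approximately"
    # and is an illustrative choice.
    return abs(overlap - 1.0) <= tol and abs(in_sample_error - 0.5) <= tol
```

Intuitively, when nearly every feature vector is an extreme vector (overlap near one) and the classifier does no better than chance (error near one half), the two collections are statistically indistinguishable.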
[0188] Machine learning algorithms of the invention involve solving certain variants of the inequality-constrained optimization problem that is used by support vector machines, wherein regularization parameters and reproducing kernels have been defined.
[0189] Software for machine learning algorithms of the invention can be obtained by using any of the software packages that solve quadratic programming problems, or via LIBSVM (A Library for Support Vector Machines), SVMlight (an implementation of SVMs in C) or MATLAB SVM toolboxes.
[0190] The machine learning methods of the invention disclosed herein may be readily utilized in a wide variety of applications, wherein feature vectors have been extracted from outputs of sensors that include, but are not limited to, radar and hyperspectral or multispectral images, biometrics, digital communication signals, text, images, digital waveforms, etc.
[0191] More specifically, the applications include, for example and without limitation, general pattern recognition (including image recognition, waveform recognition, object detection, spectrum identification, and speech and handwriting recognition), data classification (including text, image, and waveform categorization), bioinformatics (including automated diagnosis systems, biological modeling, and bio-imaging classification), etc.
[0192] One skilled in the art will recognize that any suitable computer system may be used to execute the machine learning methods disclosed herein. The computer system may include, without limitation, a mainframe computer system, a workstation, a personal computer system, a personal digital assistant, or other device or apparatus having at least one processor that executes instructions from a memory medium.
[0193] The computer system may further include a display device or monitor for displaying operations associated with the learning machine and one or more memory mediums on which computer programs or software components may be stored. In addition, the memory medium may be entirely or partially located in one or more associated computers or computer systems which connect to the computer system over a network, such as the Internet.
[0194] The machine learning method described herein may also be executed in hardware, a combination of software and hardware, or in other suitable executable implementations. The learning machine methods implemented in software may be executed by the processor of the computer system or the processor or processors of the one or more associated computer systems connected to the computer system.
[0195] While the invention herein disclosed has been described by means of specific embodiments, numerous modifications and variations could be made by those skilled in the art without departing from the scope of the invention set forth in the claims.