Device and method for training a classifier

11960991 · 2024-04-16

Abstract

A computer-implemented method for training a classifier, particularly a binary classifier, for classifying input signals to optimize performance according to a non-decomposable metric that measures an alignment between classifications corresponding to input signals of a set of training data and corresponding predicted classifications of the input signals obtained from the classifier. The method includes providing weighting factors that characterize how the non-decomposable metric depends on a plurality of terms from a confusion matrix of the classifications and the predicted classifications, and training the classifier depending on the provided weighting factors.

Claims

1. A computer-implemented method for training a classifier for classifying input signals to optimize performance according to a non-decomposable metric that measures an alignment between classifications corresponding to input signals of a set of training data and corresponding predicted classifications of the input signals obtained from the classifier, the method comprising the following steps: providing weighting factors that characterize how the non-decomposable metric depends on a plurality of terms from a confusion matrix of the classifications and the predicted classifications; and training the classifier depending on the provided weighting factors; wherein the non-decomposable metric is given by the formula Σ.sub.j (a.sub.j·TP+b.sub.j·TN+f.sub.j(PP, AP, PN, AN))/g.sub.j(PP, AP, PN, AN), where a.sub.j and b.sub.j are scalar values and f.sub.j and g.sub.j are functions, and TP, TN, PP, PN, AP and AN are entries of the confusion matrix, wherein TP=true positive, TN=true negative, PP=predicted positive, PN=predicted negative, AP=actual positive and AN=actual negative; wherein the optimization is carried out by finding an equilibrium of a two-player game between a first player and a second player, wherein the first player tries to find first classifications corresponding to input signals of the training data and the second player tries to find second classifications corresponding to input values of the training data, and wherein the first player tries to maximize and the second player tries to minimize an expectation value of a metric in which the confusion matrix is evaluated based on the first classifications and the second classifications, wherein the second classifications are subject to a moment-matching constraint; and wherein the expectation value is computed based on marginal probabilities of the first classifications and/or the second classifications.
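Many standard evaluation measures instantiate this fractional template; for example, the F.sub.1 score equals 2·TP/(PP+AP), i.e., a single term with a.sub.1=2, b.sub.1=0, f.sub.1=0 and g.sub.1=PP+AP. A minimal Python sketch of the template (all helper and variable names here are illustrative assumptions, not taken from the source):

```python
def fractional_metric(TP, TN, PP, AP, PN, AN, terms):
    """Evaluate sum_j (a_j*TP + b_j*TN + f_j(...)) / g_j(...).

    `terms` is a list of (a_j, b_j, f_j, g_j) tuples, where f_j and g_j
    are callables taking (PP, AP, PN, AN).  Illustrative helper only.
    """
    return sum((a * TP + b * TN + f(PP, AP, PN, AN)) / g(PP, AP, PN, AN)
               for a, b, f, g in terms)

# F1 score as a single term of the template: a=2, b=0, f=0, g=PP+AP.
f1_terms = [(2.0, 0.0,
             lambda PP, AP, PN, AN: 0.0,
             lambda PP, AP, PN, AN: PP + AP)]

# Confusion matrix with TP=8, FP=2, FN=4, TN=6, so PP=10, AP=12, PN=10, AN=8.
f1 = fractional_metric(TP=8, TN=6, PP=10, AP=12, PN=10, AN=8, terms=f1_terms)
# f1 == 16/22, i.e. about 0.727
```

Precision (TP/PP) and recall (TP/AP) fit the same template with a.sub.1=1, b.sub.1=0, f.sub.1=0 and g.sub.1=PP or g.sub.1=AP, respectively.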

2. The method according to claim 1, wherein the classifier is a binary classifier, and the optimization is carried out by finding an optimum value of a Lagrangian multiplier corresponding to the moment-matching constraint, and wherein trained parameters of a fully-connected layer of the binary classifier are set equal to the optimum value of the Lagrangian multiplier.

3. The method according to claim 1, wherein the optimization includes solving the two-player game by solving a linear program in only one of those two players.

4. The method according to claim 1, wherein the optimization of the performance according to the non-decomposable metric is further subject to an inequality constraint of an expected value of a second metric that measures an alignment between the classifications and the predicted classifications.

5. A computer-implemented method for using a classifier for classifying sensor signals, wherein the classifier is trained to optimize performance according to a non-decomposable metric that measures an alignment between classifications corresponding to input signals of a set of training data and corresponding predicted classifications of the input signals obtained from the classifier, the training including providing weighting factors that characterize how the non-decomposable metric depends on a plurality of terms from a confusion matrix of the classifications and the predicted classifications, and training the classifier depending on the provided weighting factors, the method comprising the following steps: receiving a sensor signal including data from a sensor; determining a first input signal which depends on the sensor signal; and feeding the first input signal into the classifier to obtain an output signal that characterizes a classification of the first input signal; wherein the non-decomposable metric is given by the formula Σ.sub.j (a.sub.j·TP+b.sub.j·TN+f.sub.j(PP, AP, PN, AN))/g.sub.j(PP, AP, PN, AN), where a.sub.j and b.sub.j are scalar values and f.sub.j and g.sub.j are functions, and TP, TN, PP, PN, AP and AN are entries of the confusion matrix, wherein TP=true positive, TN=true negative, PP=predicted positive, PN=predicted negative, AP=actual positive and AN=actual negative; wherein the optimization is carried out by finding an equilibrium of a two-player game between a first player and a second player, wherein the first player tries to find first classifications corresponding to input signals of the training data and the second player tries to find second classifications corresponding to input values of the training data, and wherein the first player tries to maximize and the second player tries to minimize an expectation value of a metric in which the confusion matrix is evaluated based on the first classifications and the second classifications, wherein the second classifications are subject to a moment-matching constraint; and wherein the expectation value is computed based on marginal probabilities of the first classifications and/or the second classifications.

6. A computer-implemented method for using a classifier trained for providing an actuator control signal for controlling an actuator, wherein the classifier is trained to optimize performance according to a non-decomposable metric that measures an alignment between classifications corresponding to input signals of a set of training data and corresponding predicted classifications of the input signals obtained from the classifier, the training including providing weighting factors that characterize how the non-decomposable metric depends on a plurality of terms from a confusion matrix of the classifications and the predicted classifications, and training the classifier depending on the provided weighting factors, the method comprising the following steps: receiving a sensor signal including data from a sensor; determining a first input signal which depends on the sensor signal; feeding the first input signal into the classifier to obtain an output signal that characterizes a classification of the first input signal; and determining the actuator control signal depending on the output signal; wherein the non-decomposable metric is given by the formula Σ.sub.j (a.sub.j·TP+b.sub.j·TN+f.sub.j(PP, AP, PN, AN))/g.sub.j(PP, AP, PN, AN), where a.sub.j and b.sub.j are scalar values and f.sub.j and g.sub.j are functions, and TP, TN, PP, PN, AP and AN are entries of the confusion matrix, wherein TP=true positive, TN=true negative, PP=predicted positive, PN=predicted negative, AP=actual positive and AN=actual negative; wherein the optimization is carried out by finding an equilibrium of a two-player game between a first player and a second player, wherein the first player tries to find first classifications corresponding to input signals of the training data and the second player tries to find second classifications corresponding to input values of the training data, and wherein the first player tries to maximize and the second player tries to minimize an expectation value of a metric in which the confusion matrix is evaluated based on the first classifications and the second classifications, wherein the second classifications are subject to a moment-matching constraint; and wherein the expectation value is computed based on marginal probabilities of the first classifications and/or the second classifications.

7. The method according to claim 6, wherein the actuator controls an at least partially autonomous robot, and/or a manufacturing machine, and/or an access control system.

8. A non-transitory machine readable storage medium on which is stored a computer program for training a binary classifier for classifying input signals to optimize performance according to a non-decomposable metric that measures an alignment between classifications corresponding to input signals of a set of training data and corresponding predicted classifications of the input signals obtained from the classifier, the computer program, when executed by a computer, causing the computer to perform the following steps: providing weighting factors that characterize how the non-decomposable metric depends on a plurality of terms from a confusion matrix of the classifications and the predicted classifications; and training the classifier depending on the provided weighting factors; wherein the non-decomposable metric is given by the formula Σ.sub.j (a.sub.j·TP+b.sub.j·TN+f.sub.j(PP, AP, PN, AN))/g.sub.j(PP, AP, PN, AN), where a.sub.j and b.sub.j are scalar values and f.sub.j and g.sub.j are functions, and TP, TN, PP, PN, AP and AN are entries of the confusion matrix, wherein TP=true positive, TN=true negative, PP=predicted positive, PN=predicted negative, AP=actual positive and AN=actual negative; and wherein the optimization is carried out by finding an equilibrium of a two-player game between a first player and a second player, wherein the first player tries to find first classifications corresponding to input signals of the training data and the second player tries to find second classifications corresponding to input values of the training data, and wherein the first player tries to maximize and the second player tries to minimize an expectation value of a metric in which the confusion matrix is evaluated based on the first classifications and the second classifications, wherein the second classifications are subject to a moment-matching constraint; and wherein the expectation value is computed based on marginal probabilities of the first classifications and/or the second classifications.

9. A control system for operating an actuator, the control system comprising: a classifier trained for classifying input signals to optimize performance according to a non-decomposable metric that measures an alignment between classifications corresponding to input signals of a set of training data and corresponding predicted classifications of the input signals obtained from the classifier, the classifier being trained by providing weighting factors that characterize how the non-decomposable metric depends on a plurality of terms from a confusion matrix of the classifications and the predicted classifications, and training the classifier depending on the provided weighting factors; wherein the control system is configured to operate the actuator in accordance with an output of the classifier; wherein the non-decomposable metric is given by the formula Σ.sub.j (a.sub.j·TP+b.sub.j·TN+f.sub.j(PP, AP, PN, AN))/g.sub.j(PP, AP, PN, AN), where a.sub.j and b.sub.j are scalar values and f.sub.j and g.sub.j are functions, and TP, TN, PP, PN, AP and AN are entries of the confusion matrix, wherein TP=true positive, TN=true negative, PP=predicted positive, PN=predicted negative, AP=actual positive and AN=actual negative; and wherein the optimization is carried out by finding an equilibrium of a two-player game between a first player and a second player, wherein the first player tries to find first classifications corresponding to input signals of the training data and the second player tries to find second classifications corresponding to input values of the training data, and wherein the first player tries to maximize and the second player tries to minimize an expectation value of a metric in which the confusion matrix is evaluated based on the first classifications and the second classifications, wherein the second classifications are subject to a moment-matching constraint; and wherein the expectation value is computed based on marginal probabilities of the first classifications and/or the second classifications.

10. A control system that is configured to use a classifier for classifying sensor signals, wherein the classifier is trained to optimize performance according to a non-decomposable metric that measures an alignment between classifications corresponding to input signals of a set of training data and corresponding predicted classifications of the input signals obtained from the classifier, the training including providing weighting factors that characterize how the non-decomposable metric depends on a plurality of terms from a confusion matrix of the classifications and the predicted classifications, and training the classifier depending on the provided weighting factors, the control system configured to: receive a sensor signal including data from a sensor; determine a first input signal which depends on the sensor signal; and feed the first input signal into the classifier to obtain an output signal that characterizes a classification of the first input signal; wherein the non-decomposable metric is given by the formula Σ.sub.j (a.sub.j·TP+b.sub.j·TN+f.sub.j(PP, AP, PN, AN))/g.sub.j(PP, AP, PN, AN), where a.sub.j and b.sub.j are scalar values and f.sub.j and g.sub.j are functions, and TP, TN, PP, PN, AP and AN are entries of the confusion matrix, wherein TP=true positive, TN=true negative, PP=predicted positive, PN=predicted negative, AP=actual positive and AN=actual negative; and wherein the optimization is carried out by finding an equilibrium of a two-player game between a first player and a second player, wherein the first player tries to find first classifications corresponding to input signals of the training data and the second player tries to find second classifications corresponding to input values of the training data, and wherein the first player tries to maximize and the second player tries to minimize an expectation value of a metric in which the confusion matrix is evaluated based on the first classifications and the second classifications, wherein the second classifications are subject to a moment-matching constraint; and wherein the expectation value is computed based on marginal probabilities of the first classifications and/or the second classifications.

11. A training system configured to train a classifier for classifying input signals to optimize performance according to a non-decomposable metric that measures an alignment between classifications corresponding to input signals of a set of training data and corresponding predicted classifications of the input signals obtained from the classifier, the training system configured to: provide weighting factors that characterize how the non-decomposable metric depends on a plurality of terms from a confusion matrix of the classifications and the predicted classifications; and train the classifier depending on the provided weighting factors; wherein the non-decomposable metric is given by the formula Σ.sub.j (a.sub.j·TP+b.sub.j·TN+f.sub.j(PP, AP, PN, AN))/g.sub.j(PP, AP, PN, AN), where a.sub.j and b.sub.j are scalar values and f.sub.j and g.sub.j are functions, and TP, TN, PP, PN, AP and AN are entries of the confusion matrix, wherein TP=true positive, TN=true negative, PP=predicted positive, PN=predicted negative, AP=actual positive and AN=actual negative; and wherein the optimization is carried out by finding an equilibrium of a two-player game between a first player and a second player, wherein the first player tries to find first classifications corresponding to input signals of the training data and the second player tries to find second classifications corresponding to input values of the training data, and wherein the first player tries to maximize and the second player tries to minimize an expectation value of a metric in which the confusion matrix is evaluated based on the first classifications and the second classifications, wherein the second classifications are subject to a moment-matching constraint; and wherein the expectation value is computed based on marginal probabilities of the first classifications and/or the second classifications.

12. The method as recited in claim 1, wherein the classifier is a binary classifier.

13. The method as recited in claim 1, wherein the marginal probabilities represent marginal probabilities of a classification of a given input value being equal to a predefined classification and a sum of all classifications being equal to a predefined sum value.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

(1) FIG. 1 shows a control system having a classifier controlling an actuator in its environment, in accordance with an example embodiment of the present invention.

(2) FIG. 2 shows the control system controlling an at least partially autonomous robot, in accordance with an example embodiment of the present invention.

(3) FIG. 3 shows the control system controlling a manufacturing machine, in accordance with an example embodiment of the present invention.

(4) FIG. 4 shows the control system controlling an automated personal assistant, in accordance with an example embodiment of the present invention.

(5) FIG. 5 shows the control system controlling an access control system, in accordance with an example embodiment of the present invention.

(6) FIG. 6 shows the control system controlling a surveillance system, in accordance with an example embodiment of the present invention.

(7) FIG. 7 shows the control system controlling an imaging system, in accordance with an example embodiment of the present invention.

(8) FIG. 8 shows a training system for training the classifier, in accordance with an example embodiment of the present invention.

(9) FIG. 9 shows an exemplary structure of the classifier, in accordance with an example embodiment of the present invention.

(10) FIG. 10 shows a flow-chart diagram of a training method carried out by the training system, in accordance with an example embodiment of the present invention.

(11) FIG. 11 shows a flow-chart diagram of an aspect of this training method, in accordance with an example embodiment of the present invention.

DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS

(12) Shown in FIG. 1 is one example embodiment of an actuator 10 in its environment 20. Actuator 10 interacts with a control system 40. An actuator 10 may be a technical system that is capable of receiving actuator control commands A and of acting in accordance with the received actuator control commands A. Actuator 10 and its environment 20 will be jointly called the actuator system. Preferably at evenly spaced points in time, a sensor 30 senses a condition of the actuator system. The sensor 30 may comprise several sensors. Preferably, sensor 30 is an optical sensor that takes images of the environment 20. An output signal S of sensor 30 (or, in case the sensor 30 comprises a plurality of sensors, an output signal S for each of the sensors), which encodes the sensed condition, is transmitted to the control system 40.

(13) Thereby, control system 40 receives a stream of sensor signals S. It then computes a series of actuator control commands A depending on the stream of sensor signals S, which are then transmitted to actuator 10.

(14) Control system 40 receives the stream of sensor signals S of sensor 30 in an optional receiving unit 50. Receiving unit 50 transforms the sensor signals S into input signals x. Alternatively, in case of no receiving unit 50, each sensor signal S may directly be taken as an input signal x. Input signal x may, for example, be given as an excerpt from sensor signal S. Alternatively, sensor signal S may be processed to yield input signal x. Input signal x may comprise image data corresponding to an image recorded by sensor 30, or it may comprise audio data, for example if sensor 30 is an audio sensor. In other words, input signal x may be provided in accordance with sensor signal S.

(15) Input signal x is then passed on to a classifier 60, for example an image classifier, which may, for example, be given by an artificial neural network.

(16) Classifier 60 is parametrized by parameters θ, which are stored in and provided by parameter storage St.sub.1.

(17) Classifier 60 determines output signals y from input signals x. The output signal y comprises information that assigns one or more labels to the input signal x. Output signals y are transmitted to an optional conversion unit 80, which converts the output signals y into the control commands A. Actuator control commands A are then transmitted to actuator 10 for controlling actuator 10 accordingly. Alternatively, output signals y may directly be taken as control commands A.
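The signal path of paragraphs (14) through (17), from sensor signal S to input signal x to output signal y to actuator control command A, can be sketched as follows (the function names are illustrative stand-ins for receiving unit 50, classifier 60 and conversion unit 80, not taken from the source):

```python
def control_step(sensor_signal, preprocess, classifier, convert=None):
    """One pass of the FIG. 1 control loop: sensor signal S -> input
    signal x -> output signal y -> actuator control command A.
    All names are illustrative."""
    x = preprocess(sensor_signal)      # receiving unit 50 (optional)
    y = classifier(x)                  # classifier 60 assigns labels
    # conversion unit 80 is optional; y may be taken directly as A
    return convert(y) if convert is not None else y

# Toy usage: threshold a scalar sensor reading into a binary command.
A = control_step(0.9,
                 preprocess=lambda S: [S],
                 classifier=lambda x: int(x[0] > 0.5),
                 convert=lambda y: "BRAKE" if y else "CRUISE")
# A == "BRAKE"
```

Omitting `convert` corresponds to the alternative in paragraph (17) where output signals y are taken directly as control commands A.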

(18) Actuator 10 receives actuator control commands A, is controlled accordingly and carries out an action corresponding to actuator control commands A. Actuator 10 may comprise a control logic, which transforms actuator control command A into a further control command, which is then used to control actuator 10.

(19) In further embodiments, control system 40 may comprise sensor 30. In even further embodiments, control system 40 alternatively or additionally may comprise actuator 10.

(20) In still further embodiments, it may be envisioned that control system 40 controls a display 10a instead of an actuator 10. Furthermore, control system 40 may comprise a processor 45 (or a plurality of processors) and at least one machine-readable storage medium 46 on which instructions are stored which, if carried out, cause control system 40 to carry out a method according to one aspect of the present invention.

(21) FIG. 2 shows an embodiment in which control system 40 is used to control an at least partially autonomous robot, e.g., an at least partially autonomous vehicle 100.

(22) Sensor 30 may comprise one or more video sensors and/or one or more radar sensors and/or one or more ultrasonic sensors and/or one or more LiDAR sensors and/or one or more position sensors (e.g., GPS). Some or all of these sensors are preferably but not necessarily integrated in vehicle 100.

(23) Alternatively or additionally sensor 30 may comprise an information system for determining a state of the actuator system. One example for such an information system is a weather information system that determines a present or future state of the weather in environment 20.

(24) Using input signal x, classifier 60 may, for example, detect objects in the vicinity of the at least partially autonomous robot. Output signal y may comprise information that characterizes where objects are located in the vicinity of the at least partially autonomous robot. Control command A may then be determined in accordance with this information, for example to avoid collisions with the detected objects.

(25) Actuator 10, which is preferably integrated in vehicle 100, may be given by a brake, a propulsion system, an engine, a drivetrain, or a steering of vehicle 100. Actuator control commands A may be determined such that actuator (or actuators) 10 is/are controlled such that vehicle 100 avoids collisions with the detected objects. Detected objects may also be classified according to what the classifier 60 deems them most likely to be, e.g., pedestrians or trees, and actuator control commands A may be determined depending on the classification.

(26) In one embodiment, classifier 60 may be designed to identify lanes on a road ahead, e.g., by classifying a road surface and markings on the road, and identifying lanes as patches of road surface between the markings. Based on an output of a navigation system, a suitable target lane for pursuing a chosen path can then be selected, and depending on a present lane and the target lane, it may then be decided whether vehicle 100 is to switch lanes or stay in the present lane. Control command A may then be computed by, e.g., retrieving a predefined motion pattern from a database corresponding to the identified action.

(27) Likewise, upon identifying road signs or traffic lights, depending on an identified type of road sign or an identified state of the traffic lights, corresponding constraints on possible motion patterns of vehicle 100 may then be retrieved from, e.g., a database, a future path of vehicle 100 commensurate with the constraints may be computed, and the actuator control command A may be computed to steer the vehicle such as to execute the trajectory.

(28) Likewise, upon identifying pedestrians and/or vehicles, a projected future behavior of the pedestrians and/or vehicles may be estimated, and based on the estimated future behavior, a trajectory may then be selected such as to avoid collision with the pedestrian and/or the vehicle, and the actuator control command A may be computed to steer the vehicle such as to execute the trajectory.

(29) In further embodiments, the at least partially autonomous robot may be given by another mobile robot (not shown), which may, for example, move by flying, swimming, diving or stepping. The mobile robot may, inter alia, be an at least partially autonomous lawn mower, or an at least partially autonomous cleaning robot. In all of the above embodiments, actuator control command A may be determined such that propulsion unit and/or steering and/or brake of the mobile robot are controlled such that the mobile robot may avoid collisions with the identified objects.

(30) In a further embodiment, the at least partially autonomous robot may be given by a gardening robot (not shown), which uses sensor 30, preferably an optical sensor, to determine a state of plants in the environment 20. Actuator 10 may be a nozzle for spraying chemicals. Depending on an identified species and/or an identified state of the plants, an actuator control command A may be determined to cause actuator 10 to spray the plants with a suitable quantity of suitable chemicals.

(31) In even further embodiments, the at least partially autonomous robot may be given by a domestic appliance (not shown), like, e.g., a washing machine, a stove, an oven, a microwave, or a dishwasher. Sensor 30, e.g., an optical sensor, may detect a state of an object that is to undergo processing by the domestic appliance. For example, in the case of the domestic appliance being a washing machine, sensor 30 may detect a state of the laundry inside the washing machine based on images. Actuator control signal A may then be determined depending on a detected material of the laundry.

(32) Shown in FIG. 3 is an embodiment in which control system 40 is used to control a manufacturing machine 11 (e.g., a punch cutter, a cutter, a gun drill or a gripper) of a manufacturing system 200, e.g., as part of a production line. The control system 40 controls an actuator 10, which in turn controls the manufacturing machine 11.

(33) Sensor 30 may be given by an optical sensor that captures properties of, e.g., a manufactured product 12. Classifier 60 may determine a state of the manufactured product 12 from these captured properties, e.g., whether the product 12 is faulty or not. Actuator 10, which controls manufacturing machine 11, may then be controlled depending on the determined state of the manufactured product 12 for a subsequent manufacturing step of manufactured product 12. Alternatively, it may be envisioned that actuator 10 is controlled during manufacturing of a subsequent manufactured product 12 depending on the determined state of the manufactured product 12. For example, actuator 10 may be controlled to select a product 12 that has been identified by classifier 60 as faulty and sort it into a designated bin, where it may be re-checked before being discarded.

(34) Shown in FIG. 4 is an embodiment in which control system 40 is used for controlling an automated personal assistant 250. Sensor 30 may be an optical sensor, e.g., for receiving video images of gestures of user 249. Alternatively, sensor 30 may also be an audio sensor, e.g., for receiving a voice command of user 249 as an audio signal.

(35) Control system 40 then determines actuator control commands A for controlling the automated personal assistant 250. The actuator control commands A are determined in accordance with sensor signal S of sensor 30. Sensor signal S is transmitted to the control system 40. For example, classifier 60 may be configured to, e.g., carry out a gesture recognition algorithm to identify a gesture made by user 249. Control system 40 may then determine an actuator control command A for transmission to the automated personal assistant 250. It then transmits the actuator control command A to the automated personal assistant 250.

(36) For example, actuator control command A may be determined in accordance with the identified user gesture recognized by classifier 60. It may then comprise information that causes the automated personal assistant 250 to retrieve information from a database and output this retrieved information in a form suitable for reception by user 249.

(37) In further embodiments, it may be envisioned that instead of the automated personal assistant 250, control system 40 controls a domestic appliance (not shown) controlled in accordance with the identified user gesture. The domestic appliance may be a washing machine, a stove, an oven, a microwave or a dishwasher.

(38) Shown in FIG. 5 is an embodiment in which control system 40 controls an access control system 300. Access control system 300 may be designed to physically control access. It may, for example, comprise a door 401. Sensor 30 is configured to detect a scene that is relevant for deciding whether access is to be granted or not. It may, for example, be an optical sensor for providing image or video data, for detecting a person's face. Classifier 60 may be configured to interpret this image or video data, e.g., by matching identities with known people stored in a database, thereby determining an identity of the person. Actuator control signal A may then be determined depending on the interpretation of classifier 60, e.g., in accordance with the determined identity. Actuator 10 may be a lock that grants access or not depending on actuator control signal A. A non-physical, logical access control is also possible.

(39) Shown in FIG. 6 is an embodiment in which control system 40 controls a surveillance system 400. This embodiment is largely identical to the embodiment shown in FIG. 5. Therefore, only the differing aspects will be described in detail. Sensor 30 is configured to detect a scene that is under surveillance. Control system 40 does not necessarily control an actuator 10, but rather a display 10a. For example, classifier 60 may determine a classification of a scene, e.g., whether the scene detected by optical sensor 30 is suspicious. Actuator control signal A, which is transmitted to display 10a, may then, e.g., be configured to cause display 10a to adjust the displayed content depending on the determined classification, e.g., to highlight an object that is deemed suspicious by classifier 60.

(40) Shown in FIG. 7 is an embodiment of a control system 40 for controlling an imaging system 500, for example an MRI apparatus, x-ray imaging apparatus or ultrasonic imaging apparatus. Sensor 30 may, for example, be an imaging sensor. Classifier 60 may then determine a classification of all or part of the sensed image. Actuator control signal A may then be chosen in accordance with this classification, thereby controlling display 10a. For example, classifier 60 may interpret a region of the sensed image to be potentially anomalous. In this case, actuator control signal A may be determined to cause display 10a to display the image and highlight the potentially anomalous region.

(41) Shown in FIG. 8 is an embodiment of a training system 140 for training classifier 60. A training data unit 150 determines input signals x, which are passed on to classifier 60. For example, training data unit 150 may access a computer-implemented database St.sub.2 in which at least one set T of training data is stored. The at least one set T comprises pairs of input signals x.sub.i and corresponding desired output signals y.sub.i. Desired output signal y.sub.i is passed on to assessment unit 180. The set T of training data may be a full set of training data. It may also be a selected batch of training data if training is performed in batches.

(42) Classifier 60 is configured to compute output signals ŷ.sub.i from input signals x.sub.i. These output signals ŷ.sub.i are also passed on to assessment unit 180.

(43) A modification unit 160 determines updated parameters θ′ depending on input from assessment unit 180. Updated parameters θ′ are transmitted to parameter storage St.sub.1 to replace present parameters θ.

(44) Furthermore, training system 140 may comprise a processor 145 (or a plurality of processors) and at least one machine-readable storage medium 146 on which instructions are stored which, if carried out, cause training system 140 to carry out a method according to one aspect of the invention.

(45) Shown in FIG. 9 is an exemplary structure of classifier 60, which in this embodiment is given by a neural network that is parametrized by parameters, or weights, θ. Input data x is fed into input layer 61, processed and then successively passed on to hidden layers 62 and 63. The output of layer 63 is a feature map φ. If classifier 60 is a convolutional neural network, layers 61, 62 and 63 comprise at least one convolutional layer. Parameters that parametrize layers 61, 62 and 63 are called w. Feature map φ is passed on to a fully-connected layer 64, which is parametrized by parameters θ.sub.f. The output φ.sup.T·θ.sub.f is passed on to a final layer 65 that comprises computing a softmax transformation of output φ.sup.T·θ.sub.f and an argmax function that selects the label y of the classification that corresponds to the highest softmax score as the output signal of classifier 60.
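The forward pass described above can be illustrated in code. This is a minimal sketch, not the patented implementation: the layer sizes, the ReLU nonlinearity, the three-class output and the function names (`classify`, `softmax`) are assumptions, and the hidden layers 61-63 are stood in for by two dense stages.

```python
import numpy as np

def softmax(z):
    # numerically stable softmax
    e = np.exp(z - np.max(z))
    return e / e.sum()

def classify(x, w1, w2, theta_f):
    # layers 61-63 (parameters w): two dense+ReLU stages standing in
    # for the hidden layers; their output is the feature map phi
    h = np.maximum(0.0, w1 @ x)
    phi = np.maximum(0.0, w2 @ h)
    # layer 64 (parameters theta_f): scores phi^T . theta_f
    scores = phi @ theta_f
    # layer 65: softmax transformation followed by argmax
    p = softmax(scores)
    return int(np.argmax(p)), phi

rng = np.random.default_rng(0)
x = rng.normal(size=4)                 # input signal
w1 = rng.normal(size=(8, 4))
w2 = rng.normal(size=(5, 8))
theta_f = rng.normal(size=(5, 3))      # 3 output classes (assumed)
y, phi = classify(x, w1, w2, theta_f)
print(y, phi.shape)
```

The feature map φ is returned alongside the label because the training method below also consumes it.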

(46) Shown in FIG. 10 is a flow-chart diagram that outlines an embodiment of the training method for training classifier 60 that may be carried out by training system 140. In a first step (1000), Lagrangian multiplier values λ are initialized, e.g., randomly or as a predefined value, e.g., 0. Parameters θ.sub.f of fully-connected layer 64 are set equal to Lagrangian multiplier values λ. Dataset T is provided, as well as parameters a.sub.j, b.sub.j, f.sub.j, g.sub.j that characterize the metric as defined in equation (M). Optionally, parameters characterizing a constraint as given in equation (7) are also provided.

(47) Then (1010), optimum values Q* for the optimization problem stated as the inner minimax problem in equation (5) (or (7), in case constraints are provided) are computed. In addition, a matrix Φ is computed. Details of this computation are discussed in connection with FIG. 11.

(48) Next (1020), an increment dλ=−Φ(Q*.sup.T1−y.sub.T) is computed, with y.sub.T=(y.sub.1, . . . , y.sub.n).sup.T being the vector with the classifications of the training data set.

(49) Then (1030), it is checked whether the method has converged, e.g., by checking whether an absolute value of increment dλ is less than a predefined threshold.

(50) If the method has converged, the algorithm is stopped and training is complete (1060).

(51) If not, in optional step (1040), the increment dλ is taken as an increment to parameters θ.sub.f of fully-connected layer 64 and backpropagated through the remaining network, i.e., through layers 63, 62 and 61, to obtain an increment dw to parameters w, and the method continues with step (1050). Alternatively, parameters w can remain fixed and the method branches directly from step (1030) to step (1050).

(52) In step (1050), parameters λ, θ.sub.f, and w are updated as λ←λ+dλ, w←w+dw, θ.sub.f←λ.

(53) Then, the method continues with step (1010) and iterates until the method is concluded in step (1060).
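The loop of steps (1000)-(1060) can be sketched as follows. This is a hedged sketch under stated assumptions: the inner minimax solve of step (1010) is abstracted into a caller-supplied function `solve_inner` (a hypothetical name) returning Q* and the feature matrix Φ, the sign and unit step-size convention of the increment dλ is an assumption, and the optional backpropagation of step (1040) is omitted, i.e., parameters w stay fixed.

```python
import numpy as np

def train(solve_inner, y_T, m, tol=1e-6, max_iter=100):
    lam = np.zeros(m)              # step (1000): initialize lambda (here: 0)
    theta_f = lam.copy()           # theta_f is set equal to lambda
    for _ in range(max_iter):
        Q_star, Phi = solve_inner(lam)                        # step (1010)
        # step (1020): increment d_lambda (sign convention assumed)
        d_lam = -Phi @ (Q_star.T @ np.ones(len(y_T)) - y_T)
        if np.max(np.abs(d_lam)) < tol:                       # step (1030)
            break                                             # step (1060)
        lam = lam + d_lam                                     # step (1050)
        theta_f = lam
    return lam, theta_f

# Toy usage: a hypothetical inner solver whose marginals Q*^T 1 already
# match y_T, so the loop converges immediately with lambda = 0.
Phi = np.eye(2, 3)                        # m = 2 features, n = 3 samples
y_T = np.array([1.0, 0.0, 1.0])
lam, theta_f = train(lambda lam_: (np.diag(y_T), Phi), y_T, m=2)
print(lam)
```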

(54) Shown in FIG. 11 is a flow-chart diagram of the method to compute the optimum value Q* of the inner minimax problem as stated in equation (5) (or (7)) in step (1010).

(55) First (2010), n×n matrices D, E and F are provided as

(56) $$D_{kl}=\sum_j \frac{a_j}{g_j(k,l)},\qquad E_{kl}=\sum_j \frac{b_j}{g_j(k,l)}\qquad\text{and}\qquad F_{kl}=\sum_j \frac{f_j(k,l)}{g_j(k,l)}.$$
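Step (2010) can be made concrete as follows. The reading of g.sub.j(k, l) as the metric denominator evaluated at k predicted positives and l actual positives, and the F1 instantiation (a=2, b=0, f=0, g(k,l)=k+l), are assumptions for illustration.

```python
import numpy as np

def build_DEF(n, a, b, f, g):
    """a, b: lists of scalars; f, g: lists of callables taking (k, l)."""
    D = np.zeros((n, n))
    E = np.zeros((n, n))
    F = np.zeros((n, n))
    for k in range(1, n + 1):          # k: number of predicted positives
        for l in range(1, n + 1):      # l: number of actual positives
            for a_j, b_j, f_j, g_j in zip(a, b, f, g):
                D[k - 1, l - 1] += a_j / g_j(k, l)
                E[k - 1, l - 1] += b_j / g_j(k, l)
                F[k - 1, l - 1] += f_j(k, l) / g_j(k, l)
    return D, E, F

# F1 example (single term j): metric = 2 TP / (PP + AP)
n = 4
D, E, F = build_DEF(n, a=[2.0], b=[0.0],
                    f=[lambda k, l: 0.0], g=[lambda k, l: k + l])
print(D[0, 0])   # 2 / (1 + 1) -> 1.0
```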

(57) Then (2020), Z(Q) is provided as a symbolic expression as

(58) $$Z(Q)=Q D^T+\mathrm{diag}\left(1,\dots,\tfrac{1}{n}\right)^2 Q\,1\,1^T E^T 1\,1^T-\mathrm{diag}\left(1,\dots,\tfrac{1}{n}\right) Q E^T 1\,1^T-\mathrm{diag}\left(1,\dots,\tfrac{1}{n}\right) Q\,1\,1^T E^T+Q E^T+\mathrm{diag}\left(1,\dots,\tfrac{1}{n}\right) F^T\,\mathrm{diag}\left(1,\dots,\tfrac{1}{n}\right) Q\,1\,1^T.$$

(59) Next (2030), a linearly transformed expression {tilde over (Z)}(Q) is obtained from Z(Q) via {tilde over (Z)}(Q)=Z(Q)·diag(1, . . . , n).
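For a numeric Q, the expressions of steps (2020) and (2030) can be evaluated directly. The sketch below follows one plausible term grouping of equation (58), with diag(1, …, 1/n) written as `Lam` and 1 1.sup.T as `J`; the grouping should be checked against the underlying derivation.

```python
import numpy as np

def Z_of_Q(Q, D, E, F):
    # equation (58); Lam = diag(1, 1/2, ..., 1/n), J = 1 1^T
    n = Q.shape[0]
    J = np.ones((n, n))
    Lam = np.diag(1.0 / np.arange(1, n + 1))
    return (Q @ D.T
            + Lam @ Lam @ Q @ J @ E.T @ J
            - Lam @ Q @ E.T @ J
            - Lam @ Q @ J @ E.T
            + Q @ E.T
            + Lam @ F.T @ Lam @ Q @ J)

def Z_tilde(Q, D, E, F):
    # step (2030): right-multiplication by diag(1, ..., n)
    n = Q.shape[0]
    return Z_of_Q(Q, D, E, F) @ np.diag(np.arange(1.0, n + 1))
```

With E = F = 0 the expression reduces to Q·D.sup.T, which gives a quick sanity check.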

(60) Furthermore, c(Q) is computed as c(Q)=0 in case the special cases as defined in equations (S1) and (S2) do not need to be enforced. If (S1) is to be enforced, Z(Q) is increased by

(61) $$\left(\mathrm{diag}\left(1,\dots,\tfrac{1}{n}\right)1\,1^T Q^T-\mathrm{Id}\right)\mathrm{diag}\left(1,\dots,\tfrac{1}{n}\right)1\,1^T$$
and c(Q) becomes

(62) $$c(Q)=1-1^T\,\mathrm{diag}\left(1,\dots,\tfrac{1}{n}\right)Q\,1,$$
with Id being an n×n-dimensional identity matrix.

(63) If (S2) is to be enforced, Z(Q) is increased by an n×n-dimensional matrix E that is 0 everywhere, except at position (n, n) where it is set to Q.sub.nn.
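The special-case corrections of paragraphs (60)-(63) can be sketched as follows. The placement of the transposes in the (S1) term follows the reconstruction of equation (61) and is an assumption; the function names are hypothetical.

```python
import numpy as np

def s1_correction(Q):
    # additive term for Z(Q) and the offset c(Q) when (S1) is enforced
    n = Q.shape[0]
    one = np.ones((n, 1))
    Lam = np.diag(1.0 / np.arange(1, n + 1))
    J = one @ one.T
    dZ = (Lam @ J @ Q.T - np.eye(n)) @ Lam @ J    # equation (61)
    c = 1.0 - (one.T @ Lam @ Q @ one).item()      # c(Q) of equation (62)
    return dZ, c

def s2_correction(Q):
    # n x n matrix that is zero except entry (n, n), set to Q_nn
    n = Q.shape[0]
    dZ = np.zeros((n, n))
    dZ[n - 1, n - 1] = Q[n - 1, n - 1]
    return dZ
```

Note that c(Q) vanishes exactly when the weighted mass 1.sup.T·diag(1, …, 1/n)·Q·1 equals 1.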

(64) Now (2040), all input signals x.sub.i in dataset T are propagated through classifier 60 to yield feature vectors φ(x.sub.i). An m×n matrix Φ (with n being the number of data samples in dataset T and m being the number of features), the columns of which contain the features of the respective samples, is formed as
Φ.sub.:,i=φ(x.sub.i),
and a matrix W is computed as
W=Φ.sup.T·λ·1.sup.T.
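Step (2040) amounts to stacking feature vectors into columns and forming a rank-one matrix. In the sketch below, `phi` stands for the feature map of classifier 60 and is assumed given; the toy two-dimensional feature map in the usage example is an illustrative assumption.

```python
import numpy as np

def build_Phi_W(xs, phi, lam):
    # Phi[:, i] = phi(x_i)  (m x n);  W = Phi^T lambda 1^T  (n x n)
    n = len(xs)
    Phi = np.column_stack([phi(x) for x in xs])
    W = np.outer(Phi.T @ lam, np.ones(n))
    return Phi, W

# Toy usage with an assumed two-dimensional feature map
phi = lambda x: np.array([x, x ** 2])
Phi, W = build_Phi_W([1.0, 2.0, 3.0], phi, lam=np.array([1.0, 0.0]))
print(W)
```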

(65) In case equation (7) is to be solved, the resulting output values of classifier 60 are also stored as ŷ.sub.i.

(66) Next (2050), in case equation (5) is to be solved, Q* is computed as the optimum value of the linear program

(67) $$\begin{aligned}
\min_{Q;\,\alpha;\,v}\quad & v + c(Q) - \langle Q, W\rangle\\
\text{s.t.:}\quad & q_{i,k}\geq 0\quad \forall i,k\in[1,n]\\
& \alpha_{i,k}\geq 0\quad \forall i,k\in[1,n]\\
& v\geq 0\\
& q_{i,k}\leq \tfrac{1}{k}\textstyle\sum_j q_{j,k}\quad \forall i,k\in[1,n]\\
& \textstyle\sum_k \tfrac{1}{k}\sum_i q_{i,k}\leq 1\\
& v\geq \big(\tilde Z(Q)\big)_{(i,k)} - \alpha_{i,k} + \tfrac{1}{k}\textstyle\sum_j \alpha_{j,k}\quad \forall i,k\in[1,n].
\end{aligned}$$
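The structure of this linear program can be illustrated by writing out its objective and feasibility conditions for candidate values (Q, α, v). The sketch below evaluates them for an already-computed matrix Z̃(Q); the inequality directions follow the reconstruction above and are an assumption, and the function names are hypothetical.

```python
import numpy as np

def lp_objective(Q, v, c_Q, W):
    # v + c(Q) - <Q, W>
    return v + c_Q - np.sum(Q * W)

def lp_feasible(Q, alpha, v, Zt, tol=1e-9):
    n = Q.shape[0]
    ks = np.arange(1, n + 1)
    if (Q < -tol).any() or (alpha < -tol).any() or v < -tol:
        return False                                  # nonnegativity
    col_q = Q.sum(axis=0) / ks                        # (1/k) sum_j q_{j,k}
    if (Q > col_q[None, :] + tol).any():              # q_{i,k} <= (1/k) sum_j q_{j,k}
        return False
    if col_q.sum() > 1.0 + tol:                       # sum_k (1/k) sum_i q_{i,k} <= 1
        return False
    col_a = alpha.sum(axis=0) / ks
    rhs = Zt - alpha + col_a[None, :]                 # Z~ - alpha_{i,k} + (1/k) sum_j alpha_{j,k}
    return bool(v >= rhs.max() - tol)
```

For an actual solve, the same constraints could be assembled into the matrix form expected by an off-the-shelf LP solver such as `scipy.optimize.linprog`.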

(68) In case equation (7) is to be solved, matrices B.sup.(i) and scalars ε.sub.i are defined for each constraint of equation (7) by computing
$$\mathbb{E}_{\tilde P(X,Y);\,P(\hat Y)}\left[\mathrm{metric}^i(\hat Y, Y)\right]=:\left\langle B^{(i)}, P\right\rangle+\epsilon_i.$$

(69) This is done by defining, for each constraint i, the vectors

(70) $$D_k^i=\sum_j \frac{a_j}{g_j(k,l)},\qquad E_k^i=\sum_j \frac{b_j}{g_j(k,l)}\qquad\text{and}\qquad F_k^i=\sum_j \frac{f_j(k,l)}{g_j(k,l)}$$
for l=Σ.sub.iy.sub.i, and setting

(71) $$B^{(i)}=D^i y^T + E^i (1-y)^T + \mathrm{diag}\left(1,\dots,\tfrac{1}{n}\right)\left[F^i+(n-l)E^i\right]1^T,\qquad \epsilon_i=0,$$

(72) if neither (S1) nor (S2) is enforced for any i.

(73) If (S1) is enforced, the above-mentioned expression remains the same as long as l=Σ.sub.iy.sub.i>0. If l=0, the above variables are set as

(74) $$B^{(i)}=-\mathrm{diag}\left(1,\dots,\tfrac{1}{n}\right)1\,1^T,\qquad \epsilon_i=1.$$

(75) If (S2) is enforced, the above-mentioned expression (prior to the (S1) special case) remains the same as long as l=Σ.sub.iy.sub.i<n. If l=n, we choose ε.sub.i=0 and B.sup.(i) as an n×n-dimensional matrix that is 0 everywhere except at position (n, n), where it is 1.
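The case analysis of paragraphs (69)-(75) for one constraint i can be sketched as follows. The vectors D.sup.i, E.sup.i, F.sup.i are assumed precomputed, y is assumed to be a 0/1 label vector, and the function name is hypothetical.

```python
import numpy as np

def build_B_eps(D_i, E_i, F_i, y, enforce_s1=False, enforce_s2=False):
    n = len(y)
    l = int(y.sum())
    lam = 1.0 / np.arange(1, n + 1)     # diagonal of diag(1, ..., 1/n)
    if enforce_s1 and l == 0:
        # special case (S1): B^(i) = -diag(1,...,1/n) 1 1^T, eps_i = 1
        return -np.outer(lam, np.ones(n)), 1.0
    if enforce_s2 and l == n:
        # special case (S2): zero matrix except a 1 at position (n, n)
        B = np.zeros((n, n))
        B[n - 1, n - 1] = 1.0
        return B, 0.0
    # generic case of equation (71)
    B = (np.outer(D_i, y)
         + np.outer(E_i, 1.0 - y)
         + np.outer(lam * (F_i + (n - l) * E_i), np.ones(n)))
    return B, 0.0
```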

(76) Then, Q* is obtained as the optimum value by solving the linear program

(77) $$\begin{aligned}
\min_{Q;\,\alpha;\,\gamma;\,v}\quad & v + c(Q) - \langle Q, W\rangle + \textstyle\sum_l \gamma_l\left(\tau_l-\epsilon_l\right)\\
\text{s.t.:}\quad & q_{i,k}\geq 0\quad \forall i,k\in[1,n]\\
& \alpha_{i,k}\geq 0\quad \forall i,k\in[1,n]\\
& \gamma_l\geq 0\quad \forall l\in[1,s]\\
& v\geq 0\\
& q_{i,k}\leq \tfrac{1}{k}\textstyle\sum_j q_{j,k}\quad \forall i,k\in[1,n]\\
& \textstyle\sum_k \tfrac{1}{k}\sum_i q_{i,k}\leq 1\\
& v\geq \big(\tilde Z(Q)\big)_{(i,k)} - \alpha_{i,k} + \tfrac{1}{k}\textstyle\sum_j \alpha_{j,k} + \textstyle\sum_l \gamma_l \big(B^{(l)}\big)_{(i,k)}\quad \forall i,k\in[1,n].
\end{aligned}$$

(78) This concludes the method.

(79) The term computer covers any device for the processing of predefined calculation instructions. These calculation instructions can be in the form of software, or in the form of hardware, or also in a mixed form of software and hardware.

(80) It is further understood that the procedures can not only be completely implemented in software as described; they can also be implemented in hardware, or in a mixed form of software and hardware.