ATG seminar series presents
Combining classifiers under the Neyman-Pearson paradigm
by Martin Grill
Time: Wednesday, April 22 at 14:30 in room KN:205.
The aim of combining classifiers is to design powerful combining rules that produce a stronger combined classifier from a set of weak classifiers. Using multiple classifiers reduces the error rates introduced by the individual classifiers. Motivated by problems in anomaly detection, we propose to use the Neyman-Pearson paradigm for combining classifiers to deal with asymmetric errors in binary classification. In the Neyman-Pearson classification paradigm, the goal is to learn a classifier from labeled training data such that the probability of a false negative is minimized, while the probability of a false positive stays below a pre-defined level. This approach is especially useful in scenarios where the analysis of false positives is very costly, e.g. network security. We use the Neyman-Pearson paradigm to find the optimal combination of the given classifiers, obtaining a new classifier that optimizes the Neyman-Pearson criterion. In this talk I will introduce the challenges and possible approaches to minimizing an empirical objective subject to an empirical constraint.
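The constrained problem described above can be illustrated with a minimal sketch (not the speaker's actual method): given a combined score for each sample (e.g. a weighted sum of base-classifier outputs), pick the decision threshold that minimizes the empirical false-negative rate while keeping the empirical false-positive rate below a level alpha. The function name and the alpha default are illustrative assumptions.

```python
import numpy as np

def np_threshold(scores, labels, alpha=0.05):
    """Illustrative sketch of empirical Neyman-Pearson thresholding.

    scores : combined classifier score per sample (higher = more anomalous)
    labels : 0 for negative (benign), 1 for positive (anomaly)
    alpha  : pre-defined bound on the empirical false-positive rate

    Returns the threshold minimizing the empirical false-negative rate
    subject to the empirical false-positive rate being at most alpha,
    together with that false-negative rate (None if infeasible).
    """
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    neg = scores[labels == 0]
    pos = scores[labels == 1]
    best_t, best_fnr = None, 1.0
    for t in np.unique(scores):          # candidate thresholds
        fpr = np.mean(neg >= t)          # fraction of negatives flagged
        fnr = np.mean(pos < t)           # fraction of positives missed
        if fpr <= alpha and fnr <= best_fnr:
            best_t, best_fnr = t, fnr    # ties prefer larger thresholds
    return best_t, best_fnr
```

On real data the empirical constraint only approximates the true false-positive probability, which is exactly the kind of difficulty the talk addresses.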