
Optimal ROC Curve for a Combination of Classifiers (NIPS 2007, paper 149)



Author: Marco Barreno, Alvaro Cardenas, J. D. Tygar

Abstract: We present a new analysis for the combination of binary classifiers. Our analysis uses the Neyman-Pearson lemma as a theoretical basis for analyzing combinations of classifiers. We give a method for finding the optimal decision rule for a combination of classifiers and prove that it has the optimal ROC curve. We show how our method generalizes and improves previous work on combining classifiers and generating ROC curves.
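As a rough illustration of the Neyman-Pearson idea behind the paper (a minimal sketch, not the authors' code), the Python snippet below combines two binary base classifiers. It assumes the joint distribution of their outputs is known under each class; all probabilities and variable names here are invented for the example. Per the Neyman-Pearson lemma, thresholding the likelihood ratio of the joint outputs gives the most powerful test at each false-positive rate, and sweeping the threshold traces the optimal ROC curve.

    # Hedged sketch: Neyman-Pearson combination of two binary classifiers.
    # The joint output probabilities under each class are toy assumptions.

    # P(outputs | negative) and P(outputs | positive) for each joint
    # outcome (c1, c2) of the two base classifiers.
    p_neg = {(0, 0): 0.70, (0, 1): 0.10, (1, 0): 0.15, (1, 1): 0.05}
    p_pos = {(0, 0): 0.10, (0, 1): 0.20, (1, 0): 0.25, (1, 1): 0.45}

    # Rank joint outcomes by likelihood ratio P(o|pos) / P(o|neg);
    # the optimal decision rule alarms on the highest-ratio outcomes first.
    outcomes = sorted(p_pos, key=lambda o: p_pos[o] / p_neg[o], reverse=True)

    # Sweep the threshold: adding each outcome to the "alarm" region
    # moves the operating point by (P(o|neg), P(o|pos)).
    roc = [(0.0, 0.0)]
    fp = tp = 0.0
    for o in outcomes:
        fp += p_neg[o]
        tp += p_pos[o]
        roc.append((fp, tp))

    for fpr, tpr in roc:
        print(f"FPR={fpr:.2f}  TPR={tpr:.2f}")

Because the outcomes are visited in order of decreasing likelihood ratio, the resulting ROC vertices have decreasing slopes, so the curve is concave; operating points between vertices are obtained by randomizing between the two adjacent decision rules.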


References

[1] Foster Provost and Tom Fawcett. Robust classification for imprecise environments. Machine Learning, 42(3):203–231, March 2001.

[2] Peter A. Flach and Shaomin Wu. Repairing concavities in ROC curves. In Proceedings of the 19th International Joint Conference on Artificial Intelligence (IJCAI’05), pages 702–707, August 2005.

[3] Tom Fawcett. ROC graphs: Notes and practical considerations for data mining researchers. Technical Report HPL-2003-4, HP Laboratories, Palo Alto, CA, January 2003. Updated March 2004.

[4] J. Neyman and E. S. Pearson. On the problem of the most efficient tests of statistical hypotheses. Philosophical Transactions of the Royal Society of London, Series A, Containing Papers of a Mathematical or Physical Character, 231:289–337, 1933.

[5] H. Vincent Poor. An Introduction to Signal Detection and Estimation. Springer-Verlag, second edition, 1988.

[6] D. J. Newman, S. Hettich, C. L. Blake, and C. J. Merz. UCI repository of machine learning databases, 1998. http://www.ics.uci.edu/~mlearn/MLRepository.html.

[7] I. Mierswa, M. Wurst, R. Klinkenberg, M. Scholz, and T. Euler. YALE: Rapid prototyping for complex data mining tasks. In Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), 2006.

[8] L. Breiman. Bagging predictors. Machine Learning, 24(2):123–140, 1996.

[9] Y. Freund and R. E. Schapire. Experiments with a new boosting algorithm. In Proceedings of the Thirteenth International Conference on Machine Learning, pages 148–156, Bari, Italy, 1996. Morgan Kaufmann.

[10] Thomas G. Dietterich. Ensemble methods in machine learning. Lecture Notes in Computer Science, 1857:1–15, 2000.

[11] Robert E. Schapire, Yoav Freund, Peter Bartlett, and Wee Sun Lee. Boosting the margin: A new explanation for the effectiveness of voting methods. The Annals of Statistics, 26(5):1651–1686, October 1998.

[12] Yoav Freund, Raj Iyer, Robert E. Schapire, and Yoram Singer. An efficient boosting algorithm for combining preferences. Journal of Machine Learning Research (JMLR), 4:933–969, 2003.

[13] D. H. Wolpert. Stacked generalization. Neural Networks, 5:241–259, 1992.

[14] Sašo Džeroski and Bernard Ženko. Is combining classifiers with stacking better than selecting the best one? Machine Learning, 54:255–273, 2004.