One-Pass Boosting (NIPS 2007, paper 147)


Authors: Zafer Barutcuoglu, Phil Long, Rocco Servedio

Abstract: This paper studies boosting algorithms that make a single pass over a set of base classifiers. We first analyze a one-pass algorithm in the setting of boosting with diverse base classifiers. Our guarantee is the same as the best proved for any boosting algorithm, but our one-pass algorithm is much faster than previous approaches. We next exhibit a random source of examples for which a “picky” variant of AdaBoost that skips poor base classifiers can outperform the standard AdaBoost algorithm, which uses every base classifier, by an exponential factor. Experiments with Reuters and synthetic data show that one-pass boosting can substantially improve on the accuracy of Naive Bayes, and that picky boosting can sometimes lead to a further improvement in accuracy.
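
A minimal sketch of what a picky one-pass booster in this spirit might look like, in Python, assuming +1/-1 labels, base classifiers supplied as callables, AdaBoost-style confidence weights as in [5], and a hypothetical edge threshold gamma; none of these specifics are taken from the paper itself, so this is an illustration of the idea rather than the authors' exact algorithm:

import math

def picky_one_pass_boost(base_classifiers, X, y, gamma=0.05):
    """One pass over the base classifiers, keeping only those whose
    weighted advantage over random guessing exceeds gamma."""
    n = len(X)
    D = [1.0 / n] * n                  # distribution over examples
    ensemble = []                      # accepted (alpha, h) pairs
    for h in base_classifiers:         # each classifier is seen exactly once
        eps = sum(D[i] for i in range(n) if h(X[i]) != y[i])  # weighted error
        if abs(0.5 - eps) <= gamma:
            continue                   # "picky": skip classifiers with a small edge
        eps = min(max(eps, 1e-12), 1 - 1e-12)  # guard the log below
        alpha = 0.5 * math.log((1.0 - eps) / eps)
        ensemble.append((alpha, h))
        # Reweight so examples this classifier got wrong gain mass, then renormalize.
        D = [D[i] * math.exp(-alpha * y[i] * h(X[i])) for i in range(n)]
        Z = sum(D)
        D = [d / Z for d in D]
    def final(x):                      # weighted majority vote of accepted classifiers
        return 1 if sum(a * h(x) for a, h in ensemble) >= 0 else -1
    return final

The skipping test is the only difference from running AdaBoost over the same fixed sequence of classifiers: intuitively, a classifier whose weighted error is too close to 1/2 adds little to the vote but still perturbs the example weights, and the paper exhibits a source of examples on which skipping such classifiers beats standard AdaBoost by an exponential factor.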


References

[1] S. Dasgupta and P. M. Long. Boosting with diverse base classifiers. In COLT, 2003.

[2] R. O. Duda and P. E. Hart. Pattern Classification and Scene Analysis. Wiley, 1973.

[3] Y. Freund. Boosting a weak learning algorithm by majority. Inf. and Comput., 121(2):256–285, 1995.

[4] Y. Freund and R. Schapire. Experiments with a new boosting algorithm. In ICML, pages 148–156, 1996.

[5] Y. Freund and R. E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. JCSS, 55(1):119–139, 1997.

[6] N. Littlestone. Redundant noisy attributes, attribute errors, and linear-threshold learning using Winnow. In COLT, pages 147–156, 1991.

[7] A. McCallum and K. Nigam. A comparison of event models for Naive Bayes text classification. In AAAI-98 Workshop on Learning for Text Categorization, 1998.

[8] R. Schapire and Y. Singer. Improved boosting algorithms using confidence-rated predictions. Machine Learning, 37:297–336, 1999.