
Variance Penalizing AdaBoost (NIPS 2011, paper 299)



Authors: Pannagadatta K. Shivaswamy, Tony Jebara

Abstract: This paper proposes a novel boosting algorithm called VadaBoost, which is motivated by recent empirical Bernstein bounds. VadaBoost iteratively minimizes a cost function that balances the sample mean and the sample variance of the exponential loss. Each step of the proposed algorithm minimizes this cost efficiently by providing weighted data to a weak learner, rather than requiring a brute force evaluation of all possible weak learners. The proposed algorithm thus overcomes a key limitation of previous empirical Bernstein boosting methods, which required brute force enumeration of all possible weak learners. Experimental results confirm that the new algorithm matches the performance improvements of EBBoost yet goes beyond decision stumps to handle any weak learner. Significant performance gains over AdaBoost are obtained for arbitrary weak learners, including decision trees (CART).
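As a rough illustration of the objective the abstract describes, the following is a minimal Python sketch of a variance-penalized exponential loss. It is not the paper's derived bound or algorithm; the function name and the trade-off parameter lam are assumptions introduced here for illustration.

import numpy as np

def variance_penalized_exp_loss(margins, lam):
    # margins[i] = y_i * F(x_i), the signed margin of the current ensemble F.
    # Hypothetical form suggested by the abstract: sample mean of the
    # exponential loss plus lam times its sample variance, where lam >= 0
    # is an assumed trade-off parameter not specified in the abstract.
    losses = np.exp(-margins)
    return losses.mean() + lam * losses.var(ddof=1)

# Toy usage: margins of a small ensemble on six labeled examples.
margins = np.array([0.8, 1.2, -0.3, 0.5, 2.0, -0.1])
print(variance_penalized_exp_loss(margins, lam=0.1))

Penalizing the variance term favors losses that are uniformly small across the sample rather than small only on average, which is the intuition behind the empirical Bernstein bounds that motivate the algorithm.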


References

[1] J-Y. Audibert, R. Munos, and C. Szepesvári. Tuning bandit algorithms in stochastic environments. In ALT, 2007.

[2] P. L. Bartlett, M. I. Jordan, and J. D. McAuliffe. Convexity, classification, and risk bounds. Journal of the American Statistical Association, 101(473):138–156, 2006.

[3] L. Breiman, J.H. Friedman, R.A. Olshen, and C.J. Stone. Classification and Regression Trees. Chapman and Hall, New York, 1984.

[4] Y. Freund and R. E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. Journal of Computer and System Sciences, 55(1):119–139, 1997.

[5] A. Maurer and M. Pontil. Empirical Bernstein bounds and sample variance penalization. In COLT, 2009.

[6] V. Mnih, C. Szepesvári, and J-Y. Audibert. Empirical Bernstein stopping. In ICML, 2008.

[7] G. Rätsch, T. Onoda, and K.-R. Müller. Soft margins for AdaBoost. Machine Learning, 42(3):287–320, 2001.

[8] L. Reyzin and R. E. Schapire. How boosting the margin can also boost classifier complexity. In ICML, 2006.

[9] R. E. Schapire, Y. Freund, P. L. Bartlett, and W. S. Lee. Boosting the margin: a new explanation for the effectiveness of voting methods. Annals of Statistics, 26(5):1651–1686, 1998.

[10] P. K. Shivaswamy and T. Jebara. Empirical Bernstein boosting. In AISTATS, 2010.

[11] V. Vapnik. The Nature of Statistical Learning Theory. Springer, New York, NY, 1995.