
Reservoir Boosting: Between Online and Offline Ensemble Learning (NIPS 2013, paper 275)


Source: pdf

Author: Leonidas Lefakis, François Fleuret

Abstract: We propose to train an ensemble with the help of a reservoir in which the learning algorithm can store a limited number of samples. This novel approach lies between offline and online ensemble approaches and can be seen either as a restriction of the former or an enhancement of the latter. We identify some basic strategies that can be used to populate this reservoir and present our main contribution, dubbed Greedy Edge Expectation Maximization (GEEM), which maintains the reservoir content in the case of Boosting by viewing the samples through their projections into the weak classifier response space. We propose an efficient algorithmic implementation which makes it tractable in practice, and demonstrate its efficiency experimentally on several computer-vision datasets, on which it outperforms both online and offline methods in a memory-constrained setting.
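To make the setting concrete, below is a minimal Python sketch of boosting with a fixed-capacity reservoir, as described in the abstract. Everything here is illustrative: the capacity, batch size, and in particular the replacement policy (keeping the samples with the largest boosting weights) are naive stand-ins and are NOT the paper's GEEM criterion, whose details do not appear in this text.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier

    CAPACITY = 256  # maximum reservoir size (assumed value, for illustration)

    def reservoir_boosting(stream, n_rounds=50):
        """stream yields (x, y) pairs with y in {-1, +1}."""
        X_res, y_res = [], []          # the reservoir
        weak_learners, alphas = [], []

        for t in range(n_rounds):
            # 1) Ingest a batch of fresh samples from the stream.
            for _ in range(CAPACITY // 4):
                x, y = next(stream)
                X_res.append(x)
                y_res.append(y)
            X, y = np.asarray(X_res), np.asarray(y_res)

            # 2) Boosting weights of reservoir samples under the current
            #    strong classifier (exponential loss, as in AdaBoost).
            margin = np.zeros(len(y))
            for h, a in zip(weak_learners, alphas):
                margin += a * y * h.predict(X)
            w = np.exp(-margin)
            w /= w.sum()

            # 3) Fit a weak learner on the weighted reservoir content.
            stump = DecisionTreeClassifier(max_depth=1)
            stump.fit(X, y, sample_weight=w)
            err = np.clip(np.sum(w * (stump.predict(X) != y)), 1e-10, 1 - 1e-10)
            weak_learners.append(stump)
            alphas.append(0.5 * np.log((1 - err) / err))

            # 4) Shrink the reservoir back to capacity. Naive policy for
            #    illustration only: keep the highest-weight samples.
            if len(y) > CAPACITY:
                keep = np.argsort(w)[-CAPACITY:]
                X_res = [X_res[i] for i in keep]
                y_res = [y_res[i] for i in keep]

        def strong(X):
            X = np.asarray(X)
            score = sum(a * h.predict(X) for h, a in zip(weak_learners, alphas))
            return np.sign(score)
        return strong

The point of the sketch is the structure the paper studies: the learner never sees the full training set at once, only a bounded reservoir, so the quality of the ensemble hinges on which samples step 4 chooses to retain.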


reference text

[1] Antoine Bordes, Seyda Ertekin, Jason Weston, and Léon Bottou. Fast kernel classifiers with online and active learning. J. Mach. Learn. Res., 6:1579–1619, December 2005.

[2] Joseph K. Bradley and Robert E. Schapire. Filterboost: Regression and classification on large datasets. In NIPS, 2007.

[3] Nicolò Cesa-Bianchi and Claudio Gentile. Tracking the best hyperplane with a simple budget perceptron. In Proceedings of the Nineteenth Annual Conference on Computational Learning Theory (COLT), pages 483–498. Springer-Verlag, 2006.

[4] Shang-Tse Chen, Hsuan-Tien Lin, and Chi-Jen Lu. An online boosting algorithm with theoretical justifications. In John Langford and Joelle Pineau, editors, ICML '12, pages 1007–1014, New York, NY, USA, July 2012. Omnipress.

[5] Adam Coates and Andrew Ng. The importance of encoding versus training with sparse coding and vector quantization. In Lise Getoor and Tobias Scheffer, editors, Proceedings of the 28th International Conference on Machine Learning (ICML '11), pages 921–928, New York, NY, USA, June 2011. ACM.

[6] Koby Crammer, Jaz S. Kandola, and Yoram Singer. Online classification on a budget. In Sebastian Thrun, Lawrence K. Saul, and Bernhard Schölkopf, editors, NIPS. MIT Press, 2003.

[7] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), volume 1, pages 886–893, 2005.

[8] Ofer Dekel and Yoram Singer. Support vector machines on a budget. In NIPS, pages 345–352, 2006.

[9] Carlos Domingo and Osamu Watanabe. MadaBoost: A modification of AdaBoost. In Proceedings of the Thirteenth Annual Conference on Computational Learning Theory, COLT '00, pages 180–189, San Francisco, CA, USA, 2000. Morgan Kaufmann Publishers Inc.

[10] Yoav Freund and Robert E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. J. Comput. Syst. Sci., 55(1):119–139, August 1997.

[11] Helmut Grabner and Horst Bischof. On-line boosting and vision. In CVPR (1), pages 260–267, 2006.

[12] Mihajlo Grbovic and Slobodan Vucetic. Tracking concept change with incremental boosting by minimization of the evolving exponential loss. In ECML PKDD, ECML PKDD’11, pages 516–532, Berlin, Heidelberg, 2011. Springer-Verlag.

[13] Zdenek Kalal, Jiri Matas, and Krystian Mikolajczyk. Weighted sampling for large-scale boosting. In BMVC, 2008.

[14] C. Leistner, A. Saffari, P. M. Roth, and H. Bischof. On robustness of on-line boosting - a competitive study. In IEEE 12th International Conference on Computer Vision Workshops (ICCV Workshops), pages 1362–1369, 2009.

[15] Nikunj C. Oza and Stuart Russell. Online bagging and boosting. In Artificial Intelligence and Statistics 2001, pages 105–112. Morgan Kaufmann, 2001.

[16] Antoine Bordes, Jason Weston, and Léon Bottou. Online (and offline) on an even tighter budget. In Artificial Intelligence and Statistics 2005, 2005.