NIPS 2001, paper 8: reference knowledge graph (maker-knowledge-mining)
Source: pdf
Author: T. Zhang
Abstract: Greedy approximation algorithms have frequently been used to obtain sparse solutions to learning problems. In this paper, we present a general greedy algorithm for solving a class of convex optimization problems. We derive a bound on the rate of approximation for this algorithm, and show that a number of earlier methods arise as special cases.
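The abstract does not spell out the algorithm, but the family it describes — greedily combining one new basis element per step to minimize a convex objective — is well illustrated by a Frank-Wolfe-style greedy update of the kind analyzed by Jones [5] and Barron [1]. The sketch below is an illustrative instance under assumed choices (least-squares objective, simplex constraint, step size 2/(t+2)), not a reconstruction of the paper's exact algorithm:

```python
import numpy as np

def greedy_convex_min(X, y, steps=100):
    """Greedy (Frank-Wolfe-style) minimization of f(w) = ||Xw - y||^2 / 2
    over the simplex, i.e. the convex hull of the dictionary columns of X.
    Each step adds weight on a single atom, so iterates stay sparse."""
    n, d = X.shape
    w = np.zeros(d)
    w[0] = 1.0                        # start at a vertex of the simplex
    for t in range(steps):
        grad = X.T @ (X @ w - y)      # gradient of f at the current iterate
        j = int(np.argmin(grad))      # greedy pick: atom minimizing the linearization
        alpha = 2.0 / (t + 2.0)       # standard step size giving an O(1/t) rate
        w = (1 - alpha) * w
        w[j] += alpha                 # convex combination with vertex e_j
    return w
```

After t steps the iterate is a convex combination of at most t + 1 atoms, which is the sparsity property the abstract refers to; the O(1/t) rate matches the classical approximation bounds cited below.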
[1] A.R. Barron. Universal approximation bounds for superpositions of a sigmoidal function. IEEE Transactions on Information Theory, 39(3):930–945, 1993.
[2] Y. Freund and R.E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. J. Comput. Syst. Sci., 55(1):119–139, 1997.
[3] Jerome Friedman, Trevor Hastie, and Robert Tibshirani. Additive logistic regression: A statistical view of boosting. The Annals of Statistics, 28(2):337–407, 2000. With discussion.
[4] T. J. Hastie and R. J. Tibshirani. Generalized additive models. Chapman and Hall Ltd., London, 1990.
[5] Lee K. Jones. A simple lemma on greedy approximation in Hilbert space and convergence rates for projection pursuit regression and neural network training. Ann. Statist., 20(1):608–613, 1992.
[6] Wee Sun Lee, P.L. Bartlett, and R.C. Williamson. Efficient agnostic learning of neural networks with bounded fan-in. IEEE Transactions on Information Theory, 42(6):2118–2132, 1996.
[7] Jonathan Q. Li and Andrew R. Barron. Mixture density estimation. In S.A. Solla, T.K. Leen, and K.-R. Müller, editors, Advances in Neural Information Processing Systems 12, pages 279–285. MIT Press, 2000.
[8] Robert E. Schapire, Yoav Freund, Peter Bartlett, and Wee Sun Lee. Boosting the margin: a new explanation for the effectiveness of voting methods. Ann. Statist., 26(5):1651–1686, 1998.
[9] Alex J. Smola and Peter Bartlett. Sparse greedy Gaussian process regression. In Advances in Neural Information Processing Systems 13, pages 619–625, 2001.
[10] Tong Zhang. Some sparse approximation bounds for regression problems. In The Eighteenth International Conference on Machine Learning, pages 624–631, 2001.