nips2000-120: reference knowledge graph by maker-knowledge-mining
Source: pdf
Authors: Alex J. Smola, Peter L. Bartlett
Abstract: We present a simple sparse greedy technique to approximate the maximum a posteriori estimate of Gaussian Processes with much improved scaling behaviour in the sample size m. In particular, computational requirements are O(n²m), storage is O(nm), the cost for prediction is O(n) and the cost to compute confidence bounds is O(nm), where n ≪ m. We show how to compute a stopping criterion, give bounds on the approximation error, and show applications to large scale problems.
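The abstract's key point is that restricting the GP expansion to n ≪ m greedily chosen basis points means only n × n systems are ever solved, which is where the O(n²m) training cost and O(n) prediction cost come from. The sketch below is a minimal illustration of that idea, not the authors' exact algorithm: the kernel choice, the correlation-based selection score, the scan over all remaining candidates (the paper instead scores an exact objective decrease over a small random candidate subset), and all function and parameter names are assumptions made for the demo.

```python
import numpy as np


def rbf_kernel(X1, X2, gamma=1.0):
    """Squared-exponential kernel matrix between two point sets."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)


def sparse_greedy_gp(X, y, n_max=50, sigma2=0.1, tol=1e-3, gamma=1.0):
    """Greedily grow an index set I and fit the restricted MAP estimate.

    The expansion f(x) = sum_{i in I} alpha_i k(x_i, x) is fit by
    minimising the restricted negative log posterior; its minimiser
    solves (K_mI^T K_mI + sigma2 * K_II) alpha = K_mI^T y.
    """
    m = X.shape[0]
    I = []                     # selected basis indices, |I| = n << m
    K_mI = np.empty((m, 0))    # kernel columns of the selected points
    residual = y.copy()
    alpha = np.zeros(0)
    for _ in range(n_max):
        rest = [i for i in range(m) if i not in I]
        if not rest:
            break
        # Cheap greedy score: correlation of each candidate kernel
        # column with the current residual (an assumption of this
        # sketch; the paper uses an exact gain criterion).
        cols = rbf_kernel(X, X[rest], gamma)
        j = rest[int(np.argmax(np.abs(cols.T @ residual)))]
        I.append(j)
        K_mI = np.hstack([K_mI, rbf_kernel(X, X[[j]], gamma)])
        K_II = rbf_kernel(X[I], X[I], gamma)
        # Reduced n x n system for the restricted MAP coefficients.
        A = K_mI.T @ K_mI + sigma2 * K_II
        alpha = np.linalg.solve(A, K_mI.T @ y)
        residual = y - K_mI @ alpha
        # Simple stopping rule on the relative residual.
        if np.linalg.norm(residual) <= tol * np.linalg.norm(y):
            break
    return np.array(I), alpha
```

A hypothetical usage: with X of shape (m, d) and targets y, calling I, alpha = sparse_greedy_gp(X, y, n_max=50) returns the basis indices and coefficients; prediction at new points X_test is then the O(n)-per-point product rbf_kernel(X_test, X[I]) @ alpha.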
[1] S. Fine and K. Scheinberg. Efficient SVM training using low-rank kernel representation. Technical report, IBM Watson Research Center, New York, 2000.
[2] M. Gibbs and D. J. C. MacKay. Efficient implementation of Gaussian processes. Technical report, Cavendish Laboratory, Cambridge, UK, 1997.
[3] F. Girosi. An equivalence between sparse approximation and support vector machines. Neural Computation, 10(6):1455-1480, 1998.
[4] S. Mallat and Z. Zhang. Matching pursuits with time-frequency dictionaries. IEEE Transactions on Signal Processing, 41(12):3397-3415, 1993.
[5] B. K. Natarajan. Sparse approximate solutions to linear systems. SIAM Journal on Computing, 24(2):227-234, 1995.
[6] B. Schölkopf, S. Mika, C. Burges, P. Knirsch, K.-R. Müller, G. Rätsch, and A. Smola. Input space vs. feature space in kernel-based methods. IEEE Transactions on Neural Networks, 10(5):1000-1017, 1999.
[7] A. J. Smola and B. Schölkopf. Sparse greedy matrix approximation for machine learning. In P. Langley, editor, Proceedings of the Seventeenth International Conference on Machine Learning, pages 911-918, San Francisco, California, 2000. Morgan Kaufmann.
[8] C. K. I. Williams and M. Seeger. The effect of the input density distribution on kernel-based classifiers. In P. Langley, editor, Proceedings of the Seventeenth International Conference on Machine Learning, pages 1159-1166, San Francisco, California, 2000. Morgan Kaufmann.