nips2013-105: Efficient Optimization for Sparse Gaussian Process Regression
Authors: Yanshuai Cao, Marcus A. Brubaker, David Fleet, Aaron Hertzmann
Abstract: We propose an efficient optimization algorithm for selecting a subset of training data to induce sparsity for Gaussian process regression. The algorithm estimates an inducing set and the hyperparameters using a single objective, either the marginal likelihood or a variational free energy. Space and time complexity are linear in the training set size, and the algorithm can be applied to large regression problems on discrete or continuous domains. Empirical evaluation shows state-of-the-art performance in discrete cases and competitive results in the continuous case.
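To ground the abstract's description, here is a minimal sketch of one of the two objectives it mentions: the variational free energy of Titsias [15], evaluated for an inducing set chosen as a subset of the training inputs, as in the paper. This is an illustrative dense implementation in Python, not the authors' released code [3]; the RBF kernel, the function names (rbf_kernel, variational_free_energy), and the fixed noise and kernel hyperparameters are assumptions, and the paper evaluates the same objective with low-rank updates so that space and time stay linear in the training set size.

import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel; the fixed hyperparameters are illustrative.
    d2 = (np.sum(A ** 2, axis=1)[:, None]
          + np.sum(B ** 2, axis=1)[None, :] - 2.0 * A @ B.T)
    return variance * np.exp(-0.5 * d2 / lengthscale ** 2)

def variational_free_energy(X, y, idx, noise=0.1):
    # Titsias-style lower bound [15] on the log marginal likelihood, with the
    # inducing set X[idx] drawn from the training inputs. Dense O(n m^2)
    # sketch; kernel and noise hyperparameters are held fixed here.
    n, m = X.shape[0], len(idx)
    Xm = X[idx]
    Kmm = rbf_kernel(Xm, Xm) + 1e-8 * np.eye(m)    # jitter for stability
    Knm = rbf_kernel(X, Xm)
    L = np.linalg.cholesky(Kmm)
    A = np.linalg.solve(L, Knm.T)                  # m x n, so Qnn = A.T @ A
    B = np.eye(m) + (A @ A.T) / noise ** 2
    LB = np.linalg.cholesky(B)
    c = np.linalg.solve(LB, A @ y) / noise ** 2
    # log N(y | 0, Qnn + noise^2 I), computed via the matrix inversion lemma
    log_det = 2.0 * np.sum(np.log(np.diag(LB))) + n * np.log(noise ** 2)
    quad = y @ y / noise ** 2 - c @ c
    log_marginal = -0.5 * (log_det + quad + n * np.log(2.0 * np.pi))
    # trace penalty tr(Knn - Qnn) / (2 noise^2); diag(Knn) = variance = 1 here
    trace_term = (n - np.sum(A * A)) / (2.0 * noise ** 2)
    return log_marginal - trace_term

# Hypothetical usage: score a random inducing subset of 20 of 500 points.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 2))
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(500)
idx = rng.choice(500, size=20, replace=False)
print(variational_free_energy(X, y, idx))

Because the inducing points are restricted to training examples, maximizing this bound over idx is a discrete subset-selection problem; the abstract's point is that this choice and the hyperparameters can be estimated jointly under a single objective (this bound or the approximate marginal likelihood) rather than in separate stages.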
[1] F. R. Bach and M. I. Jordan. Predictive low-rank decomposition for kernel methods. ICML, pp. 33–40, 2005.
[2] L. Bo and C. Sminchisescu. Twin Gaussian processes for structured prediction. IJCV, 87:28–52, 2010.
[3] Y. Cao, M. A. Brubaker, D. J. Fleet, and A. Hertzmann. Project page: supplementary material and software for efficient optimization for sparse Gaussian process regression. www.cs.toronto.edu/~caoy/opt_sgpr, 2013.
[4] K. Chalupka, C. K. I. Williams, and I. Murray. A framework for evaluating approximation methods for Gaussian process regression. JMLR, 14(1):333–350, 2013.
[5] L. Csató and M. Opper. Sparse on-line Gaussian processes. Neural Computation, 14:641–668, 2002.
[6] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. IEEE CVPR, pp. 886–893, 2005.
[7] S. S. Keerthi and W. Chu. A matching pursuit approach to sparse Gaussian process regression. NIPS 18, pp. 643–650, 2006.
[8] N. D. Lawrence, M. Seeger, and R. Herbrich. Fast sparse Gaussian process methods: The informative vector machine. NIPS 15, pp. 609–616, 2003.
[9] J. J. Lee. Libpmk: A pyramid match toolkit. Technical Report MIT-CSAIL-TR-2008-17, MIT CSAIL, 2008. URL http://hdl.handle.net/1721.1/41070.
[10] J. Quiñonero-Candela and C. E. Rasmussen. A unifying view of sparse approximate Gaussian process regression. JMLR, 6:1939–1959, 2005.
[11] C. E. Rasmussen and C. K. I. Williams. Gaussian processes for machine learning. Adaptive computation and machine learning. MIT Press, 2006.
[12] M. Seeger, C. K. I. Williams, and N. D. Lawrence. Fast forward selection to speed up sparse Gaussian process regression. AISTATS, 2003.
[13] A. J. Smola and P. Bartlett. Sparse greedy Gaussian process regression. NIPS 13, pp. 619–625, 2001.
[14] E. Snelson and Z. Ghahramani. Sparse Gaussian processes using pseudo-inputs. NIPS 18, pp. 1257–1264, 2006.
[15] M. K. Titsias. Variational learning of inducing variables in sparse Gaussian processes. AISTATS, JMLR W&CP 5:567–574, 2009.
[16] C. Walder, K. I. Kim, and B. Schölkopf. Sparse multiscale Gaussian process regression. ICML, pp. 1112–1119, 2008.