
249 nips-2008-Variational Mixture of Gaussian Process Experts


Source: pdf

Author: Chao Yuan, Claus Neubauer

Abstract: Mixture of Gaussian processes models extend a single Gaussian process with the ability to model multi-modal data and to reduce training complexity. Previous inference algorithms for these models are mostly based on Gibbs sampling, which can be very slow, particularly for large-scale data sets. We present a new generative mixture of experts model. Each expert is still a Gaussian process but is reformulated as a linear model. This breaks the dependency among training outputs and enables us to use a much faster variational Bayesian algorithm for training. Our gating network is more flexible than previous generative approaches, as the inputs for each expert are modeled by a Gaussian mixture model. The number of experts and the number of Gaussian components per expert are inferred automatically. A variety of tests demonstrate the advantages of our method.
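A rough sketch of the generative structure described above, in illustrative notation of our own (not taken from the paper): an expert indicator z selects one of K Gaussian process experts, each expert's inputs follow their own Gaussian mixture, and each expert is written in its weight-space (linear-model) form:

p(z = k) = \pi_k,
p(x \mid z = k) = \sum_m \alpha_{km} \, \mathcal{N}(x; \mu_{km}, \Sigma_{km}),
y = \phi_k(x)^\top w_k + \epsilon, \quad w_k \sim \mathcal{N}(0, \Sigma_{w,k}), \quad \epsilon \sim \mathcal{N}(0, \sigma_k^2).

In this form the training outputs are conditionally independent given w_k, which is what permits the factorized variational Bayesian treatment mentioned in the abstract; in the usual function-space GP view the covariance matrix couples all training outputs.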


reference text

[1] C. E. Rasmussen and Z. Ghahramani. Infinite mixtures of Gaussian process experts. In Advances in Neural Information Processing Systems 14. MIT Press, 2002.

[2] E. Meeds and S. Osindero. An alternative infinite mixture of Gaussian process experts. In Advances in Neural Information Processing Systems 18. MIT Press, 2006.

[3] L. Xu, M. I. Jordan, and G. E. Hinton. An alternative model for mixtures of experts. In Advances in Neural Information Processing Systems 7. MIT Press, 1995.

[4] N. Ueda and Z. Ghahramani. Bayesian model search for mixture models based on optimizing variational bounds. Neural Networks, 15(10):1223–1241, 2002.

[5] C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.

[6] A. J. Smola and P. Bartlett. Sparse greedy Gaussian process regression. In Advances in Neural Information Processing Systems 13. MIT Press, 2001.

[7] M. Seeger, C. K. I. Williams, and N. D. Lawrence. Fast forward selection to speed up sparse Gaussian process regression. In Workshop on Artificial Intelligence and Statistics 9, 2003.

[8] S. S. Keerthi and W. Chu. A matching pursuit approach to sparse Gaussian process regression. In Advances in Neural Information Processing Systems 18. MIT Press, 2006.

[9] E. Snelson and Z. Ghahramani. Sparse Gaussian processes using pseudo-inputs. In Advances in Neural Information Processing Systems 18. MIT Press, 2006.

[10] R. A. Jacobs, M. I. Jordan, S. J. Nowlan, and G. E. Hinton. Adaptive mixtures of local experts. Neural Computation, 3:79–87, 1991.

[11] S. Waterhouse. Classification and regression using mixtures of experts. PhD thesis, Department of Engineering, Cambridge University, 1997.

[12] C. M. Bishop and M. Svensén. Bayesian hierarchical mixtures of experts. In Proc. Uncertainty in Artificial Intelligence, 2003.

[13] V. Tresp. Mixtures of Gaussian processes. In Advances in Neural Information Processing Systems 13. MIT Press, 2001.

[14] B. W. Silverman. Some aspects of the spline smoothing approach to non-parametric regression curve fitting. Journal of the Royal Statistical Society, Series B, 47(1):1–52, 1985.

[15] C. E. Rasmussen. The infinite Gaussian mixture model. In Advances in Neural Information Processing Systems 12. MIT Press, 2000.

[16] L. Csató and M. Opper. Sparse on-line Gaussian processes. Neural Computation, 14(3):641–668, 2002.