
176 nips-2003-Sequential Bayesian Kernel Regression


Source: pdf

Author: Jaco Vermaak, Simon J. Godsill, Arnaud Doucet

Abstract: We propose a method for sequential Bayesian kernel regression. As is the case for the popular Relevance Vector Machine (RVM) [10, 11], the method automatically identifies the number and locations of the kernels. Our algorithm overcomes some of the computational difficulties related to batch methods for kernel regression. It is non-iterative, and requires only a single pass over the data. It is thus applicable to truly sequential data sets and batch data sets alike. The algorithm is based on a generalisation of Importance Sampling, which allows the design of intuitively simple and efficient proposal distributions for the model parameters. Comparative results on two standard data sets show our algorithm to compare favourably with existing batch estimation strategies.
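The abstract describes the method only at a high level. To make the sequential, single-pass structure concrete, the sketch below shows one way a particle (sequential Monte Carlo) approach to kernel regression with a variable set of kernel centres can be organised. It is a minimal sketch under stated assumptions, not the authors' generalised importance sampling algorithm: the kernel type, prior and noise variances, the birth-only move, and all identifiers (SIGMA2, TAU2, WIDTH, P_BIRTH, Particle, sequential_kernel_regression) are illustrative assumptions.

```python
# A minimal sketch, assuming RBF kernels of fixed WIDTH, known noise variance
# SIGMA2, a Gaussian prior of variance TAU2 on the kernel weights, and a
# birth-only move that adds the newest input as a kernel centre with
# probability P_BIRTH.  All names and values are illustrative; this is not
# the paper's generalised importance sampling scheme.
import copy
import numpy as np

RNG = np.random.default_rng(0)
SIGMA2 = 0.1   # assumed (known) observation noise variance
TAU2 = 1.0     # assumed prior variance on each kernel weight
WIDTH = 0.5    # assumed RBF kernel width
P_BIRTH = 0.5  # assumed prior probability of adding the new input as a centre


def features(x, centres):
    """Constant term plus one RBF feature per kernel centre, for scalar x."""
    phi = np.exp(-0.5 * ((x - np.asarray(centres)) / WIDTH) ** 2)
    return np.concatenate(([1.0], phi))


class Particle:
    """One model hypothesis: kernel centres plus a Gaussian posterior N(m, P)
    over the corresponding weights (including the constant term)."""

    def __init__(self):
        self.centres = []
        self.m = np.zeros(1)
        self.P = TAU2 * np.eye(1)

    def maybe_add_centre(self, x):
        """Birth move: with probability P_BIRTH, place a kernel on the new input."""
        if RNG.random() < P_BIRTH:
            n = len(self.m)
            P_new = TAU2 * np.eye(n + 1)
            P_new[:n, :n] = self.P        # keep old posterior, prior on new weight
            self.P = P_new
            self.m = np.concatenate([self.m, [0.0]])
            self.centres.append(x)

    def predictive(self, x):
        """Mean and variance of p(y | x, data so far) under this particle."""
        phi = features(x, self.centres)
        return phi @ self.m, SIGMA2 + phi @ self.P @ phi

    def update(self, x, y):
        """Exact conjugate (recursive least-squares) update for one observation."""
        phi = features(x, self.centres)
        s = SIGMA2 + phi @ self.P @ phi
        k = self.P @ phi / s
        self.m = self.m + k * (y - phi @ self.m)
        self.P = self.P - np.outer(k, phi @ self.P)


def sequential_kernel_regression(xs, ys, n_particles=100):
    """Single pass over (xs, ys); returns weighted particles over kernel models."""
    particles = [Particle() for _ in range(n_particles)]
    logw = np.zeros(n_particles)
    for x, y in zip(xs, ys):
        for i, p in enumerate(particles):
            p.maybe_add_centre(x)
            mean, var = p.predictive(x)
            # Weight by the one-step-ahead predictive likelihood, then absorb
            # the observation into the particle's posterior.
            logw[i] += -0.5 * (np.log(2.0 * np.pi * var) + (y - mean) ** 2 / var)
            p.update(x, y)
        w = np.exp(logw - logw.max())
        w /= w.sum()
        if 1.0 / np.sum(w ** 2) < n_particles / 2:   # resample on low ESS
            idx = RNG.choice(n_particles, size=n_particles, p=w)
            particles = [copy.deepcopy(particles[j]) for j in idx]
            logw[:] = 0.0
    return particles, logw
```

Because each particle's weight posterior is conjugate Gaussian, absorbing an observation costs O(d^2) per particle and past data never need to be revisited, which is what makes a scheme of this kind single-pass. In this sketch the birth proposal is taken to coincide with the assumed model prior, so no extra proposal correction appears in the weights; the death and adjustment moves discussed in the paper are omitted for brevity.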


References

[1] C. M. Bishop and M. E. Tipping. Variational relevance vector machines. In C. Boutilier and M. Goldszmidt, editors, Proceedings of the 16th Conference on Uncertainty in Artificial Intelligence, pages 46–53. Morgan Kaufmann, 2000.

[2] D. Crisan. Particle filters – a theoretical perspective. In A. Doucet, J. F. G. de Freitas, and N. J. Gordon, editors, Sequential Monte Carlo Methods in Practice, pages 17–38. Springer-Verlag, 2001.

[3] A. Doucet, J. F. G. de Freitas, and N. J. Gordon, editors. Sequential Monte Carlo Methods in Practice. Springer-Verlag, New York, 2001.

[4] N. J. Gordon, D. J. Salmond, and A. F. M. Smith. Novel approach to nonlinear/non-Gaussian Bayesian state estimation. IEE Proceedings-F, 140(2):107–113, 1993.

[5] P. J. Green. Reversible jump Markov chain Monte Carlo computation and Bayesian model determination. Biometrika, 82(4):711–732, 1995.

[6] G. Kitagawa. Monte Carlo filter and smoother for non-Gaussian nonlinear state space models. Journal of Computational and Graphical Statistics, 5(1):1–25, 1996.

[7] P. Del Moral and A. Doucet. Sequential Monte Carlo samplers. Technical Report CUED/F-INFENG/TR.443, Signal Processing Group, Cambridge University Engineering Department, 2002.

[8] R. M. Neal. Assessing relevance determination methods using DELVE. In C. M. Bishop, editor, Neural Networks and Machine Learning, pages 97–129. Springer-Verlag, 1998.

[9] S. S. Tham, A. Doucet, and R. Kotagiri. Sparse Bayesian learning for regression and classification using Markov chain Monte Carlo. In Proceedings of the International Conference on Machine Learning, pages 634–643, 2002.

[10] M. E. Tipping. The relevance vector machine. In S. A. Solla, T. K. Leen, and K. R. Müller, editors, Advances in Neural Information Processing Systems, volume 12, pages 652–658. MIT Press, 2000.

[11] M. E. Tipping. Sparse Bayesian learning and the relevance vector machine. Journal of Machine Learning Research, 1:211–244, 2001.

[12] M. E. Tipping and A. C. Faul. Fast marginal likelihood maximisation for sparse Bayesian models. In C. M. Bishop and B. J. Frey, editors, Proceedings of the Ninth International Workshop on Artificial Intelligence and Statistics, 2003.

[13] V. N. Vapnik. Statistical Learning Theory. John Wiley and Sons, New York, 1998.