nips2005-98: Infinite Latent Feature Models and the Indian Buffet Process
Source: pdf
Author: Zoubin Ghahramani, Thomas L. Griffiths
Abstract: We define a probability distribution over equivalence classes of binary matrices with a finite number of rows and an unbounded number of columns. This distribution is suitable for use as a prior in probabilistic models that represent objects using a potentially infinite array of features. We identify a simple generative process that results in the same distribution over equivalence classes, which we call the Indian buffet process. We illustrate the use of this distribution as a prior in an infinite latent feature model, deriving a Markov chain Monte Carlo algorithm for inference in this model and applying the algorithm to an image dataset.
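For concreteness, the generative process named in the abstract works as follows: the first customer (object) samples Poisson(alpha) dishes (features), and the i-th customer samples each previously taken dish k with probability m_k / i, where m_k is the number of earlier customers who took dish k, and then samples Poisson(alpha / i) new dishes. The sketch below is a minimal Python illustration of this scheme; the function name sample_ibp and the NumPy-based implementation are illustrative choices, not code from the paper.

import numpy as np

def sample_ibp(num_objects, alpha, rng=None):
    """Draw a binary feature matrix from the Indian buffet process.

    Customer i samples each previously taken dish k with probability
    m_k / i (m_k = number of earlier customers who took dish k), then
    tries Poisson(alpha / i) new dishes. Returns a num_objects x K
    binary matrix, where K is the total number of dishes sampled.
    """
    rng = np.random.default_rng() if rng is None else rng
    counts = []   # m_k: how many customers have taken each dish so far
    rows = []     # one list of dish indicators per customer
    for i in range(1, num_objects + 1):
        # Revisit existing dishes in proportion to their popularity.
        row = [int(rng.random() < m / i) for m in counts]
        for k, z in enumerate(row):
            counts[k] += z
        # Try a Poisson(alpha / i) number of brand-new dishes.
        new = rng.poisson(alpha / i)
        row.extend([1] * new)
        counts.extend([1] * new)
        rows.append(row)
    # Pad rows to a common width to form the binary matrix Z.
    K = len(counts)
    Z = np.zeros((num_objects, K), dtype=int)
    for i, row in enumerate(rows):
        Z[i, :len(row)] = row
    return Z

# Example: the expected number of features is alpha * H_N
# (the N-th harmonic number), so Z stays finite for finite N
# even though the number of columns is unbounded a priori.
Z = sample_ibp(10, alpha=2.0)
print(Z)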
[1] N. Ueda and K. Saito. Parametric mixture models for multi-labeled text. In Advances in Neural Information Processing Systems 15. MIT Press, Cambridge, MA, 2003.
[2] I. T. Jolliffe. Principal component analysis. Springer, New York, 1986.
[3] R. S. Zemel and G. E. Hinton. Developing population codes by minimizing description length. In Advances in Neural Information Processing Systems 6. Morgan Kaufmann, San Francisco, CA, 1994.
[4] Z. Ghahramani. Factorial learning and the EM algorithm. In Advances in Neural Information Processing Systems 7. Morgan Kaufmann, San Francisco, CA, 1995.
[5] C. E. Rasmussen and Z. Ghahramani. Occam’s razor. In Advances in Neural Information Processing Systems 13. MIT Press, Cambridge, MA, 2001.
[6] C. Antoniak. Mixtures of Dirichlet processes with applications to Bayesian nonparametric problems. The Annals of Statistics, 2:1152–1174, 1974.
[7] M. D. Escobar and M. West. Bayesian density estimation and inference using mixtures. Journal of the American Statistical Association, 90:577–588, 1995.
[8] T. S. Ferguson. Bayesian density estimation by mixtures of normal distributions. In M. Rizvi, J. Rustagi, and D. Siegmund, editors, Recent advances in statistics, pages 287–302. Academic Press, New York, 1983.
[9] R. M. Neal. Markov chain sampling methods for Dirichlet process mixture models. Journal of Computational and Graphical Statistics, 9:249–265, 2000.
[10] C. E. Rasmussen. The infinite Gaussian mixture model. In Advances in Neural Information Processing Systems 12. MIT Press, Cambridge, MA, 2000.
[11] D. Aldous. Exchangeability and related topics. In École d'été de probabilités de Saint-Flour, XIII—1983, pages 1–198. Springer, Berlin, 1985.
[12] J. Pitman. Combinatorial stochastic processes, 2002. Notes for Saint Flour Summer School.
[13] T. L. Griffiths and Z. Ghahramani. Infinite latent feature models and the Indian buffet process. Technical Report 2005-001, Gatsby Computational Neuroscience Unit, 2005.
[14] A. d’Aspremont, L. El Ghaoui, M. I. Jordan, and G. R. G. Lanckriet. A direct formulation for sparse PCA using semidefinite programming. In Advances in Neural Information Processing Systems 17. MIT Press, Cambridge, MA, 2005.
[15] H. Zou, T. Hastie, and R. Tibshirani. Sparse principal component analysis. Journal of Computational and Graphical Statistics, in press.