
251 nips-2012-On Lifting the Gibbs Sampling Algorithm


Source: pdf

Author: Deepak Venugopal, Vibhav Gogate

Abstract: First-order probabilistic models combine the power of first-order logic, the de facto tool for handling relational structure, with probabilistic graphical models, the de facto tool for handling uncertainty. Lifted probabilistic inference algorithms for these models have been the subject of much recent research. The main idea in these algorithms is to improve the accuracy and scalability of existing inference algorithms for graphical models by exploiting symmetry in the first-order representation. In this paper, we consider blocked Gibbs sampling, an advanced MCMC scheme, and lift it to the first-order level. We propose to achieve this by partitioning the first-order atoms in the model into a set of disjoint clusters such that exact lifted inference is polynomial in each cluster given an assignment to all other atoms not in the cluster. We propose an approach for constructing the clusters and show how it can be used to trade off accuracy against computational complexity in a principled manner. Our experimental evaluation shows that lifted Gibbs sampling is superior to the propositional algorithm in terms of accuracy, scalability and convergence.
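To make the blocked-Gibbs idea in the abstract concrete, the sketch below implements ordinary (propositional) blocked Gibbs sampling on a toy binary pairwise model: the variables are partitioned into blocks, and each block is resampled jointly from its exact conditional given the current values of all variables outside the block. The model, block partition, and helper names are illustrative assumptions only; the paper's algorithm instead blocks first-order atoms into clusters and uses exact lifted inference within each cluster, which this propositional sketch does not perform.

    # A minimal sketch of (propositional) blocked Gibbs sampling on a toy
    # binary pairwise model. Everything here (model, blocks, helpers) is
    # illustrative, not the paper's lifted algorithm.
    import itertools
    import math
    import random

    # Toy model: binary variables 0..3 with pairwise log-potentials on edges.
    VARS = [0, 1, 2, 3]
    EDGES = {(0, 1): 1.2, (1, 2): -0.8, (2, 3): 0.5, (0, 3): 0.3}

    def log_weight(assign):
        """Unnormalized log-probability of a full assignment (dict var -> {0,1})."""
        return sum(w for (u, v), w in EDGES.items() if assign[u] == assign[v])

    def sample_block(block, assign, rng):
        """Resample all variables in `block` jointly from their exact
        conditional given the current values of every other variable."""
        configs = list(itertools.product([0, 1], repeat=len(block)))
        logws = []
        for cfg in configs:
            trial = dict(assign)
            trial.update(zip(block, cfg))
            logws.append(log_weight(trial))
        m = max(logws)
        weights = [math.exp(lw - m) for lw in logws]
        r = rng.random() * sum(weights)
        acc = 0.0
        for cfg, w in zip(configs, weights):
            acc += w
            if r <= acc:
                assign.update(zip(block, cfg))
                return

    def blocked_gibbs(blocks, n_iters, seed=0):
        rng = random.Random(seed)
        assign = {v: rng.randint(0, 1) for v in VARS}
        counts = {v: 0 for v in VARS}
        for _ in range(n_iters):
            for block in blocks:
                sample_block(block, assign, rng)
            for v in VARS:
                counts[v] += assign[v]
        # Estimated marginals P(v = 1) from the chain's samples.
        return {v: counts[v] / n_iters for v in VARS}

    if __name__ == "__main__":
        # Two blocks of two variables each; each block is resampled jointly.
        print(blocked_gibbs(blocks=[(0, 1), (2, 3)], n_iters=5000))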


reference text

[1] M. Chavira and A. Darwiche. On probabilistic inference by weighted model counting. Artificial Intelligence, 172(6-7):772–799, 2008.

[2] R. de Salvo Braz. Lifted First-Order Probabilistic Inference. PhD thesis, University of Illinois, Urbana-Champaign, IL, 2007.

[3] P. Domingos and D. Lowd. Markov Logic: An Interface Layer for Artificial Intelligence. Morgan & Claypool, San Rafael, CA, 2009.

[4] S. Ermon, C. P. Gomes, A. Sabharwal, and B. Selman. Accelerated adaptive Markov chain for partition function computation. In NIPS, pages 2744–2752, 2011.

[5] A. Gelman and D. B. Rubin. Inference from iterative simulation using multiple sequences. Statistical Science, 7(4):457–472, 1992.

[6] S. Geman and D. Geman. Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Transactions on Pattern Analysis and Machine Intelligence, 6:721–741, 1984.

[7] L. Getoor and B. Taskar, editors. Introduction to Statistical Relational Learning. MIT Press, 2007.

[8] V. Gogate and P. Domingos. Probabilistic theorem proving. In UAI, pages 256–265, 2011.

[9] V. Gogate, A. Jha, and D. Venugopal. Advances in lifted importance sampling. In AAAI, pages 1910–1916, 2012.

[10] C. S. Jensen, U. Kjaerulff, and A. Kong. Blocking Gibbs sampling in very large probabilistic expert systems. International Journal of Human-Computer Studies, Special Issue on Real-World Applications of Uncertain Reasoning, 42:647–666, 1993.

[11] A. Jha, V. Gogate, A. Meliou, and D. Suciu. Lifted inference from the other side: The tractable features. In NIPS, pages 973–981, 2010.

[12] S. Kok, M. Sumner, M. Richardson, P. Singla, H. Poon, and P. Domingos. The Alchemy system for statistical relational AI. Technical report, Department of Computer Science and Engineering, University of Washington, Seattle, WA, 2006. http://alchemy.cs.washington.edu.

[13] D. Koller and N. Friedman. Probabilistic Graphical Models: Principles and Techniques. MIT Press, 2009.

[14] P. Liang, M. I. Jordan, and D. Klein. Type-based MCMC. In HLT-NAACL, pages 573–581, 2010.

[15] J. S. Liu. Monte Carlo Strategies in Scientific Computing. Springer, 2001.

[16] J. S. Liu, W. H. Wong, and A. Kong. Covariance structure of the Gibbs sampler with applications to the comparison of estimators and augmentation schemes. Biometrika, 81:27–40, 1994.

[17] D. Lowd and P. Domingos. Recursive random fields. In IJCAI, pages 950–955, 2007.

[18] B. Milch and S. J. Russell. General-purpose MCMC inference over relational structures. In UAI, pages 349–358, 2006.

[19] B. Milch, L. S. Zettlemoyer, K. Kersting, M. Haimes, and L. P. Kaelbling. Lifted probabilistic inference with counting formulas. In AAAI, pages 1062–1068, 2008.

[20] K. P. Murphy, Y. Weiss, and M. I. Jordan. Loopy belief propagation for approximate inference: An empirical study. In UAI, pages 467–475, 1999.

[21] M. Niepert. Markov chains on orbits of permutation groups. In UAI, pages 624–633, 2012.

[22] R. M. Neal. Slice sampling. Annals of Statistics, 31(3):705–767, 2003.

[23] K. S. Ng, J. W. Lloyd, and W. T. Uther. Probabilistic modelling, inference and learning using logical theories. Annals of Mathematics and Artificial Intelligence, 54(1-3):159–205, 2008.

[24] J. Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann, San Francisco, CA, 1988.

[25] D. Poole. First-order probabilistic inference. In IJCAI, pages 985–991, 2003.

[26] H. Poon and P. Domingos. Sound and efficient inference with probabilistic and deterministic dependencies. In AAAI, pages 458–463, 2006.

[27] H. Poon, P. Domingos, and M. Sumner. A general method for reducing the complexity of relational inference and its application to MCMC. In AAAI, pages 1075–1080, 2008.

[28] M. Richardson and P. Domingos. Markov logic networks. Machine Learning, 62:107–136, 2006.

[29] T. Sang, P. Beame, and H. Kautz. Solving Bayesian networks by weighted model counting. In AAAI, pages 475–482, 2005.

[30] P. Singla and P. Domingos. Lifted first-order belief propagation. In AAAI, pages 1094–1099, 2008.