
Collective Graphical Models (NIPS 2011, paper 55)


Source: pdf

Authors: Daniel R. Sheldon, Thomas G. Dietterich

Abstract: There are many settings in which we wish to fit a model of the behavior of individuals but where our data consist only of aggregate information (counts or low-dimensional contingency tables). This paper introduces Collective Graphical Models, a framework for modeling and probabilistic inference that operates directly on the sufficient statistics of the individual model. We derive a highly efficient Gibbs sampling algorithm for sampling from the posterior distribution of the sufficient statistics conditioned on noisy aggregate observations, prove its correctness, and demonstrate its effectiveness experimentally.
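To make the aggregate-inference setting concrete, here is a minimal sketch of a Metropolis-within-Gibbs sampler over the sufficient statistics of a toy collective model: N individuals draw (X, Y) from a 2x2 joint distribution, and we observe only noisy row sums of the resulting contingency table. This is an illustration of the general idea, not the paper's algorithm (which operates on the junction-tree sufficient statistics of a general discrete graphical model); the Gaussian noise model and all parameter values below are assumptions made for the example.

```python
# Illustrative sketch only, not the algorithm from the paper: a
# Metropolis-within-Gibbs sampler over a 2x2 contingency table of
# sufficient statistics, conditioned on noisy aggregate row counts.
import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(0)

N = 1000                                 # number of individuals (assumed)
p = np.array([[0.4, 0.1],
              [0.2, 0.3]])               # true joint distribution over (X, Y)
true_n = rng.multinomial(N, p.ravel()).reshape(2, 2)
sigma = 5.0                              # assumed Gaussian observation noise
y_obs = true_n.sum(axis=1) + rng.normal(0.0, sigma, size=2)  # noisy row sums

def log_post(n):
    """Unnormalized log posterior of a table n with n.sum() == N:
    multinomial prior on the counts plus Gaussian likelihood of the
    noisy row-sum observations."""
    if (n < 0).any():
        return -np.inf                   # infeasible table
    log_prior = (n * np.log(p)).sum() - gammaln(n + 1).sum()
    log_lik = -0.5 * (((y_obs - n.sum(axis=1)) / sigma) ** 2).sum()
    return log_prior + log_lik

# Start from any feasible table; each proposal moves one individual
# between two cells, so the constraint n.sum() == N holds by construction.
n = np.full((2, 2), N // 4)
samples = []
for step in range(20000):
    src, dst = rng.choice(4, size=2, replace=False)
    flat = n.ravel().copy()
    flat[src] -= 1
    flat[dst] += 1
    prop = flat.reshape(2, 2)
    if np.log(rng.random()) < log_post(prop) - log_post(n):
        n = prop                         # symmetric proposal: plain MH ratio
    if step >= 5000 and step % 10 == 0:  # burn-in, then thin
        samples.append(n.copy())

print("true table:\n", true_n)
print("posterior mean table:\n", np.mean(samples, axis=0))
```

The key design point, echoing the abstract, is that the chain never touches individual-level data: proposals act directly on the table of counts, the hard constraint that counts sum to N is preserved by construction, and only the multinomial prior and the noisy-observation likelihood enter the acceptance ratio.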


References

[1] D. Sheldon, M. A. S. Elmohamed, and D. Kozen. Collective inference on Markov models for modeling bird migration. In Advances in Neural Information Processing Systems (NIPS 2007), pages 1321–1328. MIT Press, Cambridge, MA, 2008.

[2] D. Sheldon. Manipulation of PageRank and Collective Hidden Markov Models. PhD thesis, Cornell University, 2009.

[3] L. Devroye. A simple generator for discrete log-concave distributions. Computing, 39(1):87–91, 1987.

[4] A. Agresti. A survey of exact inference for contingency tables. Statistical Science, 7(1):131–153, 1992.

[5] P. Diaconis and B. Sturmfels. Algebraic algorithms for sampling from conditional distributions. The Annals of Statistics, 26(1):363–397, 1998.

[6] A. Dobra. Markov bases for decomposable graphical models. Bernoulli, 9(6):1093–1108, 2003.

[7] S. L. Lauritzen. Graphical Models. Oxford University Press, USA, 1996.

[8] D. Poole. First-order probabilistic inference. In Proc. IJCAI, volume 18, pages 985–991, 2003.

[9] R. de Salvo Braz, E. Amir, and D. Roth. Lifted first-order probabilistic inference. In Introduction to Statistical Relational Learning, page 433, 2007.

[10] B. Milch, L. S. Zettlemoyer, K. Kersting, M. Haimes, and L. P. Kaelbling. Lifted probabilistic inference with counting formulas. In Proc. 23rd AAAI, pages 1062–1068, 2008.

[11] P. Sen, A. Deshpande, and L. Getoor. Bisimulation-based approximate lifted inference. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, pages 496–505. AUAI Press, 2009.

[12] J. Kisynski and D. Poole. Lifted aggregation in directed first-order probabilistic models. In Proc. IJCAI, volume 9, pages 1922–1929, 2009.

[13] U. Apsel and R. Brafman. Extended lifted inference with joint formulas. In Proceedings of the Twenty-Seventh Conference on Uncertainty in Artificial Intelligence (UAI-11), pages 11–18, Corvallis, Oregon, 2011. AUAI Press.

[14] R. Sundberg. Some results about decomposable (or Markov-type) models for multidimensional contingency tables: distribution of marginals and partitioning of tests. Scandinavian Journal of Statistics, 2(2):71–79, 1975.

[15] M.J. Wainwright and M.I. Jordan. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1(1-2):1–305, 2008.

[16] P. Diaconis, S. Holmes, and R.M. Neal. Analysis of a nonreversible Markov chain sampler. The Annals of Applied Probability, 10(3):726–752, 2000.

[17] W. R. Gilks and P. Wild. Adaptive rejection sampling for Gibbs sampling. Journal of the Royal Statistical Society, Series C (Applied Statistics), 41(2):337–348, 1992.

[18] K. Murphy. The Bayes net toolbox for MATLAB. Computing Science and Statistics, 33(2):1024–1034, 2001.