Author: Akshat Kumar, Shlomo Zilberstein
Abstract: Computing a maximum a posteriori (MAP) assignment in graphical models is a crucial inference problem for many practical applications. Several provably convergent approaches have been successfully developed using linear programming (LP) relaxations of the MAP problem. We present an alternative approach, which transforms the MAP problem into that of inference in a mixture of simple Bayes nets. We then derive the Expectation Maximization (EM) algorithm for this mixture, which monotonically increases a lower bound on the MAP assignment until convergence. The update equations for the EM algorithm are remarkably simple, both conceptually and computationally, and can be implemented using a graph-based message-passing paradigm similar to max-product computation. Experiments on the real-world protein design dataset show that EM converges significantly faster than MPLP, the previous LP-relaxation-based approach, and achieves a solution quality within 95% of optimal for most instances.
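To make the abstract's description of max-product-like message passing concrete, the following is a minimal Python sketch of an EM-style multiplicative update on a pairwise model. It is only an illustration under assumed conventions (pairwise potentials rescaled into [0, 1], synchronous per-variable updates); the names em_map_update, theta, and neighbors are hypothetical, and the paper's actual update equations should be taken from the paper itself.

import numpy as np

# Hypothetical sketch of an EM-style multiplicative update for MAP inference
# in a pairwise MRF. It only illustrates the message-passing flavor described
# in the abstract; the paper's actual update equations may differ.
# theta[(i, j)] is a K_i x K_j table of pairwise potentials, assumed rescaled
# into [0, 1]; q[i] is the current belief (a distribution) over variable i.

def em_map_update(q, theta, neighbors, num_iters=50):
    for _ in range(num_iters):
        new_q = {}
        for i, q_i in q.items():
            # Message to variable i: expected rescaled potential with each neighbor.
            msg = np.zeros_like(q_i)
            for j in neighbors[i]:
                table = theta[(i, j)] if (i, j) in theta else theta[(j, i)].T
                msg += table @ q[j]
            unnorm = q_i * msg                  # multiplicative (EM-like) step
            new_q[i] = unnorm / unnorm.sum()    # renormalize to a distribution
        q = new_q
    # Round each belief to its argmax to obtain a candidate MAP assignment.
    return {i: int(np.argmax(q_i)) for i, q_i in q.items()}

# Toy example: two binary variables joined by a single edge.
beliefs = {0: np.array([0.5, 0.5]), 1: np.array([0.5, 0.5])}
theta = {(0, 1): np.array([[0.9, 0.1], [0.2, 0.8]])}
neighbors = {0: [1], 1: [0]}
print(em_map_update(beliefs, theta, neighbors))  # -> {0: 0, 1: 0}

The per-variable, local form of the update is what allows a graph-based message-passing implementation, which is the property the abstract emphasizes.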
[1] H. Attias. Planning by probabilistic inference. In Proc. of the 9th Int. Workshop on Artificial Intelligence and Statistics, 2003.
[2] J. Dean and S. Ghemawat. MapReduce: a flexible data processing tool. Communications of the ACM, 53(1):72–77, 2010.
[3] R. Dechter. Constraint Processing. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 2003.
[4] A. P. Dempster, N. M. Laird, and D. B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, Series B, 39(1):1–38, 1977.
[5] A. Globerson and T. Jaakkola. Fixing Max-Product: Convergent message passing algorithms for MAP LP-relaxations. In Advances in Neural Information Processing Systems, 2007.
[6] K. Jung, P. Kohli, and D. Shah. Local rules for global MAP: When do they work? In Advances in Neural Information Processing Systems, 2009.
[7] C. D. Manning and H. Schütze. Foundations of Statistical Natural Language Processing. MIT Press, Cambridge, MA, USA, 1999.
[8] J. Pearl. Probabilistic Reasoning in Intelligent Systems. Morgan Kaufmann Publishers Inc., 1988.
[9] D. Sontag, A. Globerson, and T. Jaakkola. Clusters and coarse partitions in LP relaxations. In Advances in Neural Information Processing Systems, pages 1537–1544, 2008.
[10] D. Sontag and T. Jaakkola. New outer bounds on the marginal polytope. In Advances in Neural Information Processing Systems, 2007.
[11] D. Sontag, T. Meltzer, A. Globerson, T. Jaakkola, and Y. Weiss. Tightening LP relaxations for MAP using message passing. In Proc. of Uncertainty in Artificial Intelligence, pages 503–510, 2008.
[12] M. Toussaint, L. Charlin, and P. Poupart. Hierarchical POMDP controller optimization by likelihood maximization. In Proc. of Uncertainty in Artificial Intelligence, pages 562–570, 2008.
[13] M. Toussaint, S. Harmeling, and A. Storkey. Probabilistic inference for solving (PO)MDPs. Technical Report EDI-INF-RR-0934, University of Edinburgh, School of Informatics, 2006.
[14] M. Toussaint and A. J. Storkey. Probabilistic inference for solving discrete and continuous state Markov decision processes. In Proc. of the International Conference on Machine Learning, pages 945–952, 2006.
[15] M. Wainwright, T. Jaakkola, and A. Willsky. MAP estimation via agreement on (hyper)trees: Message-passing and linear programming approaches. IEEE Transactions on Information Theory, 51:3697–3717, 2002.
[16] M. J. Wainwright and M. I. Jordan. Graphical models, exponential families, and variational inference. Foundations and Trends in Machine Learning, 1(1-2):1–305, 2008.
[17] C. Yanover, T. Meltzer, Y. Weiss, P. Bennett, and E. Parrado-Hernández. Linear programming relaxations and belief propagation – an empirical study. Journal of Machine Learning Research, 7, 2006.