Author: Tom Minka, John Winn
Abstract: Gates are a new notation for representing mixture models and context-sensitive independence in factor graphs. Factor graphs provide a natural representation for message-passing algorithms, such as expectation propagation. However, message passing in mixture models is not well captured by factor graphs unless the entire mixture is represented by one factor, because the message equations have a containment structure. Gates capture this containment structure graphically, allowing both the independences and the message-passing equations for a model to be readily visualized. Different variational approximations for mixture models can be understood as different ways of drawing the gates in a model. We present general equations for expectation propagation and variational message passing in the presence of gates.
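As a rough illustration of the gate idea the abstract describes, the sketch below applies the variational message passing (VMP) gate rule: the message a factor inside a gate sends to a variable outside the gate is attenuated, raised to the power of the gate selector's posterior probability. The concrete model (a two-component Gaussian mixture with known variance and fixed weights) and all variable names are illustrative assumptions for this sketch, not code or equations taken from the paper.

import math
import random

# Minimal sketch: VMP in a gated model. A selector c_n opens one of two
# gates, each containing a Gaussian factor N(x_n | mu_k, s2). Under VMP,
# each data point's Gaussian message to the component mean mu_k is scaled
# by q(c_n = k), the gate's posterior probability. Illustrative setup only.

random.seed(0)

# Synthetic data from two components (assumed setup).
data = ([random.gauss(-2.0, 1.0) for _ in range(50)] +
        [random.gauss(3.0, 1.0) for _ in range(50)])

K = 2
s2 = 1.0                            # known observation variance
pi = [0.5, 0.5]                     # fixed mixing weights
prior_mean, prior_var = 0.0, 100.0  # broad prior on each component mean

# Variational posteriors q(mu_k) = N(m[k], v[k]).
m = [-1.0, 1.0]
v = [1.0, 1.0]

for it in range(50):
    # Gate selector updates: q(c_n = k) is proportional to
    # pi_k * exp(E_q[log N(x_n | mu_k, s2)]), where the expectation over
    # q(mu_k) contributes the extra v[k] term.
    resp = []
    for x in data:
        logp = [math.log(pi[k])
                - 0.5 * math.log(2 * math.pi * s2)
                - ((x - m[k]) ** 2 + v[k]) / (2 * s2)
                for k in range(K)]
        mx = max(logp)
        w = [math.exp(l - mx) for l in logp]
        z = sum(w)
        resp.append([wk / z for wk in w])

    # Component mean updates: each data point's Gaussian message enters
    # with precision r_nk / s2, i.e. power-scaled by the gate probability.
    for k in range(K):
        prec = 1.0 / prior_var + sum(r[k] for r in resp) / s2
        mean_times_prec = (prior_mean / prior_var +
                           sum(r[k] * x for r, x in zip(resp, data)) / s2)
        v[k] = 1.0 / prec
        m[k] = mean_times_prec / prec

print("posterior means:", [round(mk, 2) for mk in m])
print("posterior variances:", [round(vk, 4) for vk in v])

Running this drives the posterior means toward the two cluster centres; the key step is the r[k] / s2 precision scaling, which is the power-scaling of the within-gate Gaussian message by q(c_n = k). The paper's expectation propagation treatment of gates differs from this VMP sketch, so this should be read only as one instance of the general message equations it presents.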
[1] B. Frey, F. Kschischang, H. Loeliger, and N. Wiberg. Factor graphs and algorithms. In Proc. of the 35th Allerton Conference on Communication, Control and Computing, 1998.
[2] C. Boutilier, N. Friedman, M. Goldszmidt, and D. Koller. Context-specific independence in Bayesian networks. In Proc. of the 12th conference on Uncertainty in Artificial Intelligence, pages 115–123, 1996.
[3] D. McAllester, M. Collins, and F. Pereira. Case-factor diagrams for structured probabilistic modeling. In UAI, 2004.
[4] B. Milch, B. Marthi, D. Sontag, S. Russell, D. L. Ong, and A. Kolobov. Approximate inference for infinite contingent Bayesian networks. In Proc. of the 10th Workshop on Artificial Intelligence and Statistics, 2005.
[5] E. Mjolsness. Labeled graph notations for graphical models: Extended report. Technical Report TR# 0403, UCI ICS, March 2004.
[6] W. L. Buntine. Operations for learning with graphical models. JAIR, 2:159–225, 1994.
[7] S. Geman and D. Geman. Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Trans. on Pattern Anal. Machine Intell., 6:721–741, 1984.
[8] E. S. Lander and D. Botstein. Mapping Mendelian factors underlying quantitative traits using RFLP linkage maps. Genetics, 121(1):185–199, 1989.
[9] W.A.J.J. Wiegerinck. Variational approximations between mean field theory and the junction tree algorithm. In UAI, pages 626–633, 2000.
[10] J. Winn and C. M. Bishop. Variational Message Passing. JMLR, 6:661–694, 2005.
[11] T. P. Minka. Expectation propagation for approximate Bayesian inference. In UAI, pages 362–369, 2001.
[12] T. Minka and J. Winn. Gates: A graphical notation for mixture models. Technical report, Microsoft Research Ltd, 2008.
[13] M. Svensén and C. M. Bishop. Robust Bayesian mixture modelling. Neurocomputing, 64:235–252, 2005.
[14] C. Archambeau and M. Verleysen. Robust Bayesian clustering. Neural Networks, 20:129–138, 2007.