NIPS 2007, paper 68 (nips2007-68): reference knowledge graph, by maker-knowledge-mining
Source: pdf
Author: Charlie Frogner, Avi Pfeffer
Abstract: Dynamic Bayesian networks are structured representations of stochastic processes. Despite their structure, exact inference in DBNs is generally intractable. One approach to approximate inference involves grouping the variables in the process into smaller factors and keeping independent beliefs over these factors. In this paper we present several techniques for decomposing a dynamic Bayesian network automatically to enable factored inference. We examine a number of features of a DBN that capture different types of dependencies that will cause error in factored inference. An empirical comparison shows that the most useful of these is a heuristic that estimates the mutual information introduced between factors by one step of belief propagation. In addition to features computed over entire factors, we explored, for efficiency, scores computed over pairs of variables. We present search methods that use these features, both pairwise and factor-level, to find a factorization, and we compare their results on several datasets. Automatic factorization extends the applicability of factored inference to large, complex models that are impractical to factor by hand. Moreover, tests on real DBNs show that automatic factorization can achieve significantly lower error in some cases.
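The mutual-information heuristic mentioned in the abstract can be illustrated with a small sketch (this is not the authors' implementation; the function name `mutual_information` and its axis convention are assumptions for illustration): given a joint belief table over the process variables, a candidate split into two factors can be scored by the mutual information between the two groups of variables, with low scores indicating splits for which independent per-factor beliefs lose little information.

```python
import numpy as np

def mutual_information(joint, axes_a, axes_b):
    """Mutual information I(A;B) in nats between two disjoint groups of
    variables in a joint probability table.

    joint  : ndarray whose k-th axis indexes the k-th variable
    axes_a : axes of the variables in group A
    axes_b : axes of the variables in group B
    Assumes every axis in axes_a precedes every axis in axes_b, so that
    after marginalization group A occupies the leading axes.
    """
    # Marginalize out every variable not in A or B.
    keep = set(axes_a) | set(axes_b)
    drop = tuple(ax for ax in range(joint.ndim) if ax not in keep)
    p_ab = joint.sum(axis=drop)
    # Marginals of each group.
    na = len(axes_a)
    p_a = p_ab.sum(axis=tuple(range(na, p_ab.ndim)))
    p_b = p_ab.sum(axis=tuple(range(na)))
    # I(A;B) = sum_{a,b} p(a,b) log[ p(a,b) / (p(a) p(b)) ], skipping zeros.
    outer = np.multiply.outer(p_a, p_b)
    mask = p_ab > 0
    return float(np.sum(p_ab[mask] * np.log(p_ab[mask] / outer[mask])))
```

For example, two perfectly correlated binary variables (joint `[[0.5, 0], [0, 0.5]]`) score log 2, while an independent pair scores 0, so a search over factorizations would prefer to cut between the independent pair.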
[1] Sun Yong Kim, Seiya Imoto, and Satoru Miyano. Inferring gene networks from time series microarray data using dynamic Bayesian networks. Briefings in Bioinformatics, 2003.
[2] Geoffrey Zweig and Stuart Russell. Dynamic Bayesian networks for speech recognition. In National Conference on Artificial Intelligence (AAAI), 1998.
[3] Xavier Boyen and Daphne Koller. Tractable inference for complex stochastic processes. In Uncertainty in Artificial Intelligence, 1998.
[4] Kevin Murphy and Yair Weiss. The factored frontier algorithm for approximate inference in DBNs. In Uncertainty in Artificial Intelligence, 2001.
[5] Brenda Ng, Leonid Peshkin, and Avi Pfeffer. Factored particles for scalable monitoring. In Uncertainty in Artificial Intelligence, 2002.
[6] Avi Pfeffer. Approximate separability for weak interaction in dynamic systems. In Uncertainty in Artificial Intelligence, 2006.
[7] Thomas Dean and Keiji Kanazawa. A model for reasoning about persistence and causation. Computational Intelligence, 1989.
[8] Kevin Murphy. Dynamic Bayesian networks: representation, inference and learning. PhD thesis, U.C. Berkeley, Computer Science Division, 2002.
[9] Avi Pfeffer. Sufficiency, separability and temporal probabilistic models. In Uncertainty in Artificial Intelligence, 2001.
[10] Xavier Boyen and Daphne Koller. Exploiting the architecture of dynamic systems. In National Conference on Artificial Intelligence (AAAI), 1999.
[11] Andrew Ng, Michael Jordan, and Yair Weiss. On spectral clustering: analysis and an algorithm. In Neural Information Processing Systems, 2001.
[12] Charlie Frogner and Avi Pfeffer. Heuristics for automatically decomposing a dynamic Bayesian network for factored inference. Technical Report TR-04-07, Harvard University, 2007.
[13] Jeff Forbes, Tim Huang, Keiji Kanazawa, and Stuart Russell. The BATmobile: towards a Bayesian automatic taxi. In International Joint Conference on Artificial Intelligence, 1995.