
Tracking Time-varying Graphical Structure



Authors: Erich Kummerfeld, David Danks

Abstract: Structure learning algorithms for graphical models have focused almost exclusively on stable environments in which the underlying generative process does not change; that is, they assume that the generating model is globally stationary. In real-world environments, however, such changes often occur without warning or signal. Real-world data often come from generating models that are only locally stationary. In this paper, we present LoSST, a novel, heuristic structure learning algorithm that tracks changes in graphical model structure or parameters in a dynamic, real-time manner. We show by simulation that the algorithm performs comparably to batch-mode learning when the generating graphical structure is globally stationary, and significantly better when it is only locally stationary.
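The abstract describes the goal (real-time tracking of a locally stationary generating model) without spelling out the update rule. As a rough, hypothetical sketch of that idea, and not the authors' LoSST algorithm, the Python below maintains an online covariance estimate and uses the Mahalanobis distance of each incoming point (cf. [8]) as a crude change score: a surprising point shrinks the effective sample size, so older data are downweighted and the estimate can re-adapt. All names, weights, and thresholds here are illustrative assumptions.

import numpy as np

class OnlineCovarianceTracker:
    """Hypothetical sketch of change-sensitive online covariance tracking.
    This is NOT the authors' LoSST algorithm; it only illustrates the
    locally-stationary tracking idea described in the abstract."""

    def __init__(self, dim, forget=0.99, change_threshold=3.0):
        # forget and change_threshold are illustrative assumptions.
        self.mean = np.zeros(dim)
        self.cov = np.eye(dim)
        self.forget = forget
        self.change_threshold = change_threshold
        self.n_eff = 1.0  # effective sample size of the current estimate

    def update(self, x):
        # Mahalanobis distance of the new point under the current estimate
        # (cf. Mahalanobis [8]), used here as a crude poor-fit score.
        diff = x - self.mean
        dist = float(np.sqrt(diff @ np.linalg.solve(self.cov, diff)))

        # A surprising point shrinks the effective sample size, so the
        # estimate re-adapts quickly if the generating model has changed.
        w = self.forget if dist < self.change_threshold else 0.5
        self.n_eff = w * self.n_eff + 1.0

        # Standard weighted online update of the mean and covariance.
        lr = 1.0 / self.n_eff
        self.mean = (1.0 - lr) * self.mean + lr * x
        diff = x - self.mean
        self.cov = (1.0 - lr) * self.cov + lr * np.outer(diff, diff)
        return dist

# Toy usage: the generating model shifts halfway through the stream.
rng = np.random.default_rng(0)
tracker = OnlineCovarianceTracker(dim=3)
for t in range(500):
    shift = np.array([5.0, 0.0, 0.0]) if t >= 250 else np.zeros(3)
    tracker.update(rng.normal(size=3) + shift)

Coupling the forgetting factor to a fit statistic, rather than using a fixed rate, is what lets such a tracker behave like a batch learner while the model is stable yet respond quickly when it shifts, which is the trade-off the abstract highlights.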


References

[1] R. P. Adams and D. J. C. MacKay. Bayesian online changepoint detection. Technical report, University of Cambridge, Cambridge, UK, 2007. arXiv:0710.3742v1 [stat.ML].

[2] D. M. Chickering. Learning Bayesian networks is NP-complete. In Proceedings of AI and Statistics, 1995.

[3] D. M. Chickering. Optimal structure identification with greedy search. Journal of Machine Learning Research, 3:507–554, 2002.

[4] F. Desobry, M. Davy, and C. Doncarli. An online kernel change detection algorithm. IEEE Transactions on Signal Processing, 53(8):2961–2974, 2005.

[5] E. Kummerfeld and D. Danks. Model change and methodological virtues in scientific inference. Technical report, Carnegie Mellon University, Pittsburgh, Pennsylvania, 2013.

[6] S. L. Lauritzen. Graphical Models. Clarendon Press, 1996.

[7] T. Lipták. On the combination of independent tests. Magyar Tud. Akad. Mat. Kutató Int. Közl., 3:171–197, 1958.

[8] P. C. Mahalanobis. On the generalized distance in statistics. Proceedings of the National Institute of Sciences of India, 2:49–55, 1936.

[9] A. McCallum, D. Freitag, and F. C. N. Pereira. Maximum entropy Markov models for information extraction and segmentation. In Proceedings of ICML-2000, pages 591–598, 2000.

[10] J. Pearl. Causality: Models, Reasoning, and Inference. Cambridge University Press, 2000.

[11] M. R. Siracusa and J. W. Fisher III. Tractable Bayesian inference of time-series dependence structure. In Proceedings of the 12th International Conference on Artificial Intelligence and Statistics, 2009.

[12] P. Spirtes, C. Glymour, and R. Scheines. Causation, Prediction, and Search. MIT Press, 2nd edition, 2000.

[13] R. Sutton. Learning to predict by the methods of temporal differences. Machine Learning, 3:9–44, 1988.

[14] M. Talih and N. Hengartner. Structural learning with time-varying components: tracking the cross-section of financial time series. Journal of the Royal Statistical Society, Series B (Statistical Methodology), 67(3):321–341, 2005.