
232 nips-2012-Multiplicative Forests for Continuous-Time Processes


Source: pdf

Author: Jeremy Weiss, Sriraam Natarajan, David Page

Abstract: Learning temporal dependencies between variables over continuous time is an important and challenging task. Continuous-time Bayesian networks effectively model such processes but are limited by the number of conditional intensity matrices, which grows exponentially in the number of parents per variable. We develop a partition-based representation using regression trees and forests whose parameter spaces grow linearly in the number of node splits. Using a multiplicative assumption we show how to update the forest likelihood in closed form, producing efficient model updates. Our results show multiplicative forests can be learned from few temporal trajectories with large gains in performance and scalability.


reference text

[1] T. Dean and K. Kanazawa, “A model for reasoning about persistence and causation,” Computational Intelligence, vol. 5, no. 2, pp. 142–150, 1989.

[2] U. Nodelman, C. R. Shelton, and D. Koller, “Learning continuous time Bayesian networks,” in UAI, 2003.

[3] U. Nodelman, Continuous time Bayesian networks. PhD thesis, Stanford University, 2007.

[4] U. Nodelman, D. Koller, and C. R. Shelton, “Expectation propagation for continuous time Bayesian networks,” in UAI, 2005.

[5] S. Saria, U. Nodelman, and D. Koller, “Reasoning at the right time granularity,” in UAI, 2007.

[6] I. Cohn, T. El-Hay, N. Friedman, and R. Kupferman, “Mean field variational approximation for continuous-time Bayesian networks,” in UAI, 2009.

[7] Y. Fan and C. R. Shelton, “Sampling for approximate inference in continuous time Bayesian networks,” in AI and Mathematics, 2008.

[8] V. Rao and Y. Teh, “Fast MCMC sampling for Markov jump processes and continuous time Bayesian networks,” in UAI, 2011.

[9] D. Heckerman, “Causal independence for knowledge acquisition and inference,” in UAI, pp. 122–127, 1993.

[10] A. Gunawardana, C. Meek, and P. Xu, “A model for temporal dependencies in event streams,” in NIPS, 2011.

[11] C. Strobl, J. Malley, and G. Tutz, “An introduction to recursive partitioning: rationale, application, and characteristics of classification and regression trees, bagging, and random forests,” Psychological Methods, vol. 14, no. 4, p. 323, 2009.

[12] Y. Freund and R. Schapire, “A decision-theoretic generalization of on-line learning and an application to boosting,” in Computational Learning Theory, 1995.

[13] L. Breiman, “Random forests,” Machine learning, vol. 45, no. 1, pp. 5–32, 2001.

[14] W. Kannel, “Blood pressure as a cardiovascular risk factor,” JAMA, vol. 275, no. 20, p. 1571, 1996.

[15] C. Shelton, Y. Fan, W. Lam, J. Lee, and J. Xu, “Continuous time Bayesian network reasoning and learning engine,” JMLR, vol. 11, pp. 1137–1140, 2010.

[16] J. Friedman, “Greedy function approximation: a gradient boosting machine,” Annals of Statistics, 2001.

[17] S. Rajaram, T. Graepel, and R. Herbrich, “Poisson-networks: A model for structured point processes,” in AI and Statistics, 2005.

[18] A. Simma, Modeling Events in Time Using Cascades Of Poisson Processes. PhD thesis, EECS Department, University of California, Berkeley, Jul 2010.

[19] A. Saeedi and A. Bouchard-Côté, “Priors over recurrent continuous time processes,” in NIPS, 2011.