nips2009-150: reference knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Theodore J. Perkins
Abstract: Continuous-time Markov chains are used to model systems in which transitions between states as well as the time the system spends in each state are random. Many computational problems related to such chains have been solved, including determining state distributions as a function of time, parameter estimation, and control. However, the problem of inferring most likely trajectories, where a trajectory is a sequence of states as well as the amount of time spent in each state, appears unsolved. We study three versions of this problem: (i) an initial value problem, in which an initial state is given and we seek the most likely trajectory until a given final time, (ii) a boundary value problem, in which initial and final states and times are given, and we seek the most likely trajectory connecting them, and (iii) trajectory inference under partial observability, analogous to finding maximum likelihood trajectories for hidden Markov models. We show that maximum likelihood trajectories are not always well-defined, and describe a polynomial time test for well-definedness. When well-definedness holds, we show that each of the three problems can be solved in polynomial time, and we develop efficient dynamic programming algorithms for doing so.
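The abstract's notion of a most likely trajectory rests on the density of a trajectory: the product, over each visited state, of the exponential dwell-time density and the probability of the observed jump. As a concrete illustration, here is a minimal Python sketch (not from the paper; the function name, the rate-matrix convention, and the example values are assumptions made for illustration) of the log-density that such algorithms would maximize:

import numpy as np

def trajectory_log_density(Q, states, holding_times, final_dwell=None):
    # Q[i, j] (i != j) is the jump rate from state i to state j, and
    # Q[i, i] = -sum_{j != i} Q[i, j], so -Q[i, i] is the exit rate of i.
    # states = [s_0, ..., s_n] is the visited sequence; holding_times
    # = [t_0, ..., t_{n-1}] are the dwell times before each jump.
    logp = 0.0
    for k in range(len(states) - 1):
        i, j = states[k], states[k + 1]
        exit_rate = -Q[i, i]
        # Dwell density exit_rate * exp(-exit_rate * t_k) times jump
        # probability Q[i, j] / exit_rate collapses to Q[i, j] * exp(-exit_rate * t_k).
        logp += np.log(Q[i, j]) - exit_rate * holding_times[k]
    if final_dwell is not None:
        # Probability of remaining in s_n for final_dwell without jumping,
        # as in the initial value problem with a given final time.
        logp += Q[states[-1], states[-1]] * final_dwell
    return logp

# Two-state chain: state 0 exits at rate 1.0, state 1 at rate 2.0.
Q = np.array([[-1.0,  1.0],
              [ 2.0, -2.0]])
print(trajectory_log_density(Q, [0, 1, 0], [0.5, 0.3], final_dwell=1.2))

Maximizing this quantity jointly over the number of jumps, the jump targets, and the continuous holding times is what makes the problem nontrivial; as the abstract notes, the maximum need not exist (roughly, inserting ever-faster excursions through high-rate states can increase the density without bound), which is why a test for well-definedness is needed.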
[1] F.G. Ball and J.A. Rice. Stochastic models for ion channels: introduction and bibliography. Mathematical Biosciences, 112(2):189, 1992.
[2] D.J. Wilkinson. Stochastic modelling for systems biology. Chapman & Hall/CRC, 2006.
[3] M. Holder and P.O. Lewis. Phylogeny estimation: traditional and Bayesian approaches. Nature Reviews Genetics, 4(4):275–284, 2003.
[4] H.M. Taylor and S. Karlin. An introduction to stochastic modeling. Academic Press, 1998.
[5] D.R. Fredkin and J.A. Rice. Maximum likelihood estimation and identification directly from single-channel recordings. Proceedings of the Royal Society of London B: Biological Sciences, pages 125–132, 1992.
[6] R. Rosales, J.A. Stark, W.J. Fitzgerald, and S.B. Hladky. Bayesian restoration of ion channel records using hidden Markov models. Biophysical Journal, 80(3):1088–1103, 2001.
[7] M.A. Suchard, R.E. Weiss, and J.S. Sinsheimer. Bayesian selection of continuous-time Markov chain evolutionary models. Molecular Biology and Evolution, 18(6):1001–1013, 2001.
[8] D.T. Crommelin and E. Vanden-Eijnden. Fitting timeseries by continuous-time Markov chains: A quadratic programming approach. Journal of Computational Physics, 217(2):782–805, 2006.
[9] M. L. Puterman. Markov Decision Processes: Discrete Stochastic Dynamic Programming. John Wiley and Sons, New York, 1994.
[10] S. Hedlund and A. Rantzer. Optimal control of hybrid systems. In Proceedings of the 38th IEEE Conference on Decision and Control, volume 4, 1999.
[11] D.P. Bertsekas. Dynamic programming and optimal control. Athena Scientific, Belmont, MA, 1995.
[12] N.G. Van Kampen. Stochastic processes in physics and chemistry. North-Holland, 2007.
[13] D. T. Gillespie. Exact stochastic simulation of coupled chemical reactions. Journal of Physical Chemistry, 81:2340–2361, 1977.
[14] A. Hordijk, D.L. Iglehart, and R. Schassberger. Discrete time methods for simulating continuous time Markov chains. Advances in Applied Probability, pages 772–788, 1976.