
Efficient estimation of hidden state dynamics from spike trains (NIPS 2005)



Author: Marton G. Danoczy, Richard H. R. Hahnloser

Abstract: Neurons can have rapidly changing spike train statistics dictated by the underlying network excitability or the behavioural state of an animal. To estimate the time course of such state dynamics from single- or multiple-neuron recordings, we have developed an algorithm that maximizes the likelihood of observed spike trains by optimizing the state lifetimes and the state-conditional interspike-interval (ISI) distributions. Our nonparametric algorithm is free of time-binning and spike-counting problems and has the computational complexity of a mixed-state Markov model operating on a state sequence of length equal to the total number of recorded spikes. As an example, we fit a two-state model to paired recordings of premotor neurons in the sleeping songbird. We find that the two state-conditional ISI functions are highly similar to the ones measured during waking and singing, respectively.
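The abstract describes evaluating the spike-train likelihood with a hidden-state model that takes one step per recorded spike, with emissions given by state-conditional ISI densities. A minimal sketch of that likelihood computation is the standard forward algorithm applied to interspike intervals; the function below is an illustrative reconstruction, not the authors' code, and the names (`forward_log_likelihood`, `log_isi_pdfs`) and the per-spike transition matrix are assumptions for illustration.

```python
import numpy as np

def forward_log_likelihood(spike_times, log_isi_pdfs, log_trans, log_init):
    """Forward algorithm over spikes: one step per interspike interval.

    spike_times  : 1-D array of spike times (seconds), length N
    log_isi_pdfs : list of callables; log_isi_pdfs[s](isi) returns the
                   log state-conditional ISI density for hidden state s
    log_trans    : (S, S) log transition matrix applied once per spike
                   (hypothetical parameterization; state lifetimes map
                   onto the diagonal self-transition probabilities)
    log_init     : (S,) log initial state distribution
    """
    isis = np.diff(spike_times)          # N-1 emission steps, one per ISI
    # log forward variable, initialized with the first ISI's emission
    alpha = log_init + np.array([f(isis[0]) for f in log_isi_pdfs])
    for isi in isis[1:]:
        emit = np.array([f(isi) for f in log_isi_pdfs])
        # marginalize over the previous state in log space, add emission
        alpha = emit + np.logaddexp.reduce(alpha[:, None] + log_trans, axis=0)
    return np.logaddexp.reduce(alpha)    # total log-likelihood of the train
```

This runs in O(N S^2) time for N spikes and S states, matching the stated complexity of a mixed-state Markov model on a sequence of length equal to the number of spikes; in the paper's two-state case, the two `log_isi_pdfs` entries would be the nonparametric state-conditional ISI estimates.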


References

[1] Z. Nádasdy, H. Hirase, A. Czurkó, J. Csicsvári, and G. Buzsáki. Replay and time compression of recurring spike sequences in the hippocampus. J Neurosci, 19(21):9497–9507, Nov 1999.

[2] K. G. Thompson, D. P. Hanes, N. P. Bichot, and J. D. Schall. Perceptual and motor processing stages identified in the activity of macaque frontal eye field neurons during visual search. J Neurophysiol, 76(6):4040–4055, Dec 1996.

[3] R. Cossart, D. Aronov, and R. Yuste. Attractor dynamics of network UP states in the neocortex. Nature, 423(6937):283–288, May 2003.

[4] E. N. Brown, R. Barbieri, V. Ventura, R. E. Kass, and L. M. Frank. The time-rescaling theorem and its application to neural spike train data analysis. Neur Comp, 14(2):325–346, Feb 2002.

[5] J. Deppisch, K. Pawelzik, and T. Geisel. Uncovering the synchronization dynamics from correlated neuronal activity quantifies assembly formation. Biol Cybern, 71(5):387–399, 1994.

[6] A. M. Fraser and A. Dimitriadis. Forecasting probability densities by using hidden Markov models with mixed states. In Weigend and Gershenfeld, editors, Time Series Prediction: Forecasting the Future and Understanding the Past, pages 265–282. Addison-Wesley, 1994.

[7] A. Berchtold. The double chain Markov model. Comm Stat Theor Meths, 28:2569–2589, 1999.

[8] L. R. Rabiner. A tutorial on hidden Markov models and selected applications in speech recognition. Proc IEEE, 77(2):257–286, Feb 1989.

[9] R. H. R. Hahnloser, A. A. Kozhevnikov, and M. S. Fee. An ultra-sparse code underlies the generation of neural sequences in a songbird. Nature, 419(6902):65–70, Sep 2002.

[10] A. S. Dave and D. Margoliash. Song replay during sleep and computational rules for sensorimotor vocal learning. Science, 290(5492):812–816, Oct 2000.

[11] D. J. Thomson and A. D. Chave. Jackknifed error estimates for spectra, coherences, and transfer functions. In Simon Haykin, editor, Advances in Spectrum Analysis and Array Processing, volume 1, chapter 2, pages 58–113. Prentice Hall, 1991.

[12] A. Leonardo and M. S. Fee. Ensemble coding of vocal control in birdsong. J Neurosci, 25(3):652–661, Jan 2005.

[13] G. Radons, J. D. Becker, B. Dülfer, and J. Krüger. Analysis, classification, and coding of multielectrode spike trains with hidden Markov models. Biol Cybern, 71(4):359–373, 1994.

[14] I. Gat, N. Tishby, and M. Abeles. Hidden Markov modelling of simultaneously recorded cells in the associative cortex of behaving monkeys. Network: Computation in Neural Systems, 8(3):297–322, 1997.