nips nips2007 nips2007-140 nips2007-140-reference knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Jonathan W. Pillow, Peter E. Latham
Abstract: Point process encoding models provide powerful statistical methods for understanding the responses of neurons to sensory stimuli. Although these models have been successfully applied to neurons in the early sensory pathway, they have fared less well capturing the response properties of neurons in deeper brain areas, owing in part to the fact that they do not take into account multiple stages of processing. Here we introduce a new twist on the point-process modeling approach: we include unobserved as well as observed spiking neurons in a joint encoding model. The resulting model exhibits richer dynamics and more highly nonlinear response properties, making it more powerful and more flexible for fitting neural data. More importantly, it allows us to estimate connectivity patterns among neurons (both observed and unobserved), and may provide insight into how networks process sensory input. We formulate the estimation procedure using variational EM and the wake-sleep algorithm, and illustrate the model’s performance using a simulated example network consisting of two coupled neurons.
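The encoding framework the abstract builds on is the point-process (Poisson) GLM: each neuron's conditional intensity is an exponentiated linear function of the stimulus and of its recent spike history, and the spike train is scored with the discrete-time Poisson log-likelihood. The sketch below illustrates that core model only (simulation plus log-likelihood), not the authors' hidden-neuron extension or the variational-EM fitting procedure; all parameter values and variable names are illustrative assumptions.

```python
# Minimal point-process GLM sketch (assumed parameters, not the paper's code):
# lambda(t) = exp(b + k * stim[t] + sum_j h[j] * spikes[t - j]).
import numpy as np

rng = np.random.default_rng(0)
dt = 0.001                        # bin width in seconds
T = 2000                          # number of time bins
k = 1.5                           # scalar stimulus filter (a vector in general)
h = np.array([-5.0, -2.0, -0.5])  # spike-history filter (refractory-like)
b = 3.0                           # log baseline rate
stim = rng.standard_normal(T)     # white-noise stimulus

def intensity_at(t, spikes):
    """Conditional intensity lambda(t), depending on stimulus and spike history."""
    drive = b + k * stim[t]
    for j, hj in enumerate(h, start=1):
        if t - j >= 0:
            drive += hj * spikes[t - j]
    return np.exp(drive)

# Simulate a spike train: in each bin, spike with probability lambda(t)*dt.
spikes = np.zeros(T)
for t in range(T):
    p = min(intensity_at(t, spikes) * dt, 1.0)
    spikes[t] = float(rng.uniform() < p)

def log_likelihood(spikes):
    """Discrete-time point-process log-likelihood:
       sum_t [ y_t * log(lambda_t * dt) - lambda_t * dt ]."""
    ll = 0.0
    for t in range(T):
        lam = intensity_at(t, spikes)
        ll += spikes[t] * np.log(lam * dt) - lam * dt
    return ll

print(f"{int(spikes.sum())} spikes, log-likelihood = {log_likelihood(spikes):.1f}")
```

Because this log-likelihood is concave in the filter parameters for the exponential nonlinearity, maximum-likelihood fitting of the observed-neuron model is a convex problem; the hidden-neuron model in the paper requires marginalizing over unobserved spike trains, which is what motivates the variational-EM / wake-sleep machinery.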
[1] I. Hunter and M. Korenberg. The identification of nonlinear biological systems: Wiener and Hammerstein cascade models. Biological Cybernetics, 55:135–144, 1986.
[2] N. Brenner, W. Bialek, and R. de Ruyter van Steveninck. Adaptive rescaling optimizes information transmission. Neuron, 26:695–702, 2000.
[3] H. Plesser and W. Gerstner. Noise in integrate-and-fire neurons: From stochastic input to escape rates. Neural Computation, 12:367–384, 2000.
[4] E. J. Chichilnisky. A simple white noise analysis of neuronal light responses. Network: Computation in Neural Systems, 12:199–213, 2001.
[5] E. P. Simoncelli, L. Paninski, J. Pillow, and O. Schwartz. Characterization of neural responses with stochastic stimuli. In M. Gazzaniga, editor, The Cognitive Neurosciences, pages 327–338. MIT Press, 3rd edition, 2004.
[6] M. Berry and M. Meister. Refractoriness and neural precision. Journal of Neuroscience, 18:2200–2211, 1998.
[7] K. Harris, J. Csicsvari, H. Hirase, G. Dragoi, and G. Buzsaki. Organization of cell assemblies in the hippocampus. Nature, 424:552–556, 2003.
[8] W. Truccolo, U. T. Eden, M. R. Fellows, J. P. Donoghue, and E. N. Brown. A point process framework for relating neural spiking activity to spiking history, neural ensemble and extrinsic covariate effects. Journal of Neurophysiology, 93(2):1074–1089, 2005.
[9] L. Paninski. Maximum likelihood estimation of cascade point-process neural encoding models. Network: Computation in Neural Systems, 15:243–262, 2004.
[10] J. W. Pillow, J. Shlens, L. Paninski, A. Sher, A. M. Litke, E. J. Chichilnisky, and E. P. Simoncelli. Correlations and coding with multi-neuronal spike trains in primate retina. SFN abstracts, #768.9, 2007.
[11] D. Nykamp. Reconstructing stimulus-driven neural networks from spike times. NIPS, 15:309–316, 2003.
[12] D. Nykamp. Revealing pairwise coupling in linear-nonlinear networks. SIAM Journal on Applied Mathematics, 65:2005–2032, 2005.
[13] M. Okatan, M. Wilson, and E. Brown. Analyzing functional connectivity using a network likelihood model of ensemble neural spiking activity. Neural Computation, 17:1927–1961, 2005.
[14] L. Srinivasan, U. Eden, A. Willsky, and E. Brown. A state-space analysis for reconstruction of goal-directed movements using neural signals. Neural Computation, 18:2465–2494, 2006.
[15] D. Nykamp. A mathematical framework for inferring connectivity in probabilistic neuronal networks. Mathematical Biosciences, 205:204–251, 2007.
[16] J. E. Kulkarni and L. Paninski. Common-input models for multiple neural spike-train data. Network: Computation in Neural Systems, 18(4):375–407, 2007.
[17] B. Yu, A. Afshar, G. Santhanam, S. Ryu, K. Shenoy, and M. Sahani. Extracting dynamical structure embedded in neural activity. NIPS, 2006.
[18] S. Escola and L. Paninski. Hidden Markov models applied toward the inference of neural states and the improved estimation of linear receptive fields. COSYNE, 2007.
[19] P. McCullagh and J. Nelder. Generalized linear models. Chapman and Hall, London, 1989.
[20] A. Dempster, N. Laird, and D. Rubin. Maximum likelihood from incomplete data via the EM algorithm. J. Royal Statistical Society, B, 39(1):1–38, 1977.
[21] R. Neal and G. Hinton. A view of the EM algorithm that justifies incremental, sparse, and other variants. In M. I. Jordan, editor, Learning in Graphical Models, pages 355–368. MIT Press, Cambridge, 1999.
[22] G. E. Hinton, P. Dayan, B. J. Frey, and R. M. Neal. The "wake-sleep" algorithm for unsupervised neural networks. Science, 268(5214):1158–1161, 1995.