nips2013-286 reference knowledge-graph by maker-knowledge-mining
Source: pdf
Authors: David Pfau, Eftychios A. Pnevmatikakis, Liam Paninski
Abstract: Recordings from large populations of neurons make it possible to search for hypothesized low-dimensional dynamics. Finding these dynamics requires models that take into account biophysical constraints and can be fit efficiently and robustly. Here, we present an approach to dimensionality reduction for neural data that is convex, does not make strong assumptions about dynamics, does not require averaging over many trials, and is extensible to more complex statistical models that combine local and global influences. The results can be combined with spectral methods to learn dynamical systems models. The basic method extends PCA to the exponential family using nuclear norm minimization. We evaluate the effectiveness of this method using an exact decomposition of the Bregman divergence that is analogous to variance explained for PCA. We show on model data that the parameters of latent linear dynamical systems can be recovered, and that even if the dynamics are not stationary we can still recover the true latent subspace. We also demonstrate an extension of nuclear norm minimization that can separate sparse local connections from global latent dynamics. Finally, we demonstrate improved prediction on real neural data from monkey motor cortex compared to fitting linear dynamical models without nuclear norm smoothing.
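As a rough illustration of the basic method described in the abstract (exponential-family PCA via nuclear norm minimization), the following is a minimal numerical sketch for Poisson spike counts, written as a proximal-gradient loop with singular value soft-thresholding. This is not the authors' code: the function names, the toy data, and the step size and penalty weight lam are illustrative assumptions, and the reference list below points to more scalable optimization machinery such as ADMM [24].

import numpy as np

def svt(A, tau):
    # Singular value soft-thresholding: the proximal operator of tau * (nuclear norm).
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def poisson_nucnorm_pca(Y, lam=5.0, step=1e-3, n_iter=500):
    # Minimize sum(exp(Theta) - Y * Theta) + lam * ||Theta||_* over the
    # natural-parameter matrix Theta (neurons x time bins) by proximal gradient.
    Theta = np.zeros(Y.shape)
    for _ in range(n_iter):
        grad = np.exp(Theta) - Y          # gradient of the Poisson negative log-likelihood
        Theta = svt(Theta - step * grad, step * lam)
    return Theta

# Toy usage: spike counts generated from a rank-2 set of natural parameters.
rng = np.random.default_rng(0)
true_theta = 0.3 * rng.normal(size=(50, 2)) @ rng.normal(size=(2, 200))
Y = rng.poisson(np.exp(true_theta))
Theta_hat = poisson_nucnorm_pca(Y)
print(np.round(np.linalg.svd(Theta_hat, compute_uv=False)[:5], 2))

The soft-thresholding step shrinks small singular values of the natural-parameter matrix to zero, which is what encourages a low-dimensional latent structure, in analogy to the low-dimensional subspace recovered by ordinary PCA.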
[1] I. H. Stevenson and K. P. Kording, “How advances in neural recording affect data analysis,” Nature Neuroscience, vol. 14, no. 2, pp. 139–142, 2011.
[2] M. Okun, P. Yger, S. L. Marguet, F. Gerard-Mercier, A. Benucci, S. Katzner, L. Busse, M. Carandini, and K. D. Harris, “Population rate dynamics and multineuron firing patterns in sensory cortex,” The Journal of Neuroscience, vol. 32, no. 48, pp. 17108–17119, 2012.
[3] K. L. Briggman, H. D. I. Abarbanel, and W. B. Kristan, “Optical imaging of neuronal populations during decision-making,” Science, vol. 307, no. 5711, pp. 896–901, 2005.
[4] C. K. Machens, R. Romo, and C. D. Brody, “Functional, but not anatomical, separation of “what” and “when” in prefrontal cortex,” The Journal of Neuroscience, vol. 30, no. 1, pp. 350–360, 2010.
[5] M. Stopfer, V. Jayaraman, and G. Laurent, “Intensity versus identity coding in an olfactory system,” Neuron, vol. 39, no. 6, pp. 991–1004, 2003.
[6] M. M. Churchland, J. P. Cunningham, M. T. Kaufman, J. D. Foster, P. Nuyujukian, S. I. Ryu, and K. V. Shenoy, “Neural population dynamics during reaching,” Nature, 2012.
[7] W. Brendel, R. Romo, and C. K. Machens, “Demixed principal component analysis,” Advances in Neural Information Processing Systems, vol. 24, pp. 1–9, 2011.
[8] L. Paninski, Y. Ahmadian, D. G. Ferreira, S. Koyama, K. R. Rad, M. Vidne, J. Vogelstein, and W. Wu, “A new look at state-space models for neural data,” Journal of Computational Neuroscience, vol. 29, no. 1-2, pp. 107–126, 2010.
[9] B. M. Yu, J. P. Cunningham, G. Santhanam, S. I. Ryu, K. V. Shenoy, and M. Sahani, “Gaussian-process factor analysis for low-dimensional single-trial analysis of neural population activity,” Journal of Neurophysiology, vol. 102, no. 1, pp. 614–635, 2009.
[10] B. Petreska, B. M. Yu, J. P. Cunningham, G. Santhanam, S. I. Ryu, K. V. Shenoy, and M. Sahani, “Dynamical segmentation of single trials from population neural data,” Advances in Neural Information Processing Systems, vol. 24, 2011.
[11] J. E. Kulkarni and L. Paninski, “Common-input models for multiple neural spike-train data,” Network: Computation in Neural Systems, vol. 18, no. 4, pp. 375–407, 2007.
[12] A. Smith and E. Brown, “Estimating a state-space model from point process observations,” Neural Computation, vol. 15, no. 5, pp. 965–991, 2003.
[13] M. Fazel, H. Hindi, and S. P. Boyd, “A rank minimization heuristic with application to minimum order system approximation,” Proceedings of the American Control Conference, vol. 6, pp. 4734–4739, 2001.
[14] Z. Liu and L. Vandenberghe, “Interior-point method for nuclear norm approximation with application to system identification,” SIAM Journal on Matrix Analysis and Applications, vol. 31, pp. 1235–1256, 2009.
[15] Z. Liu, A. Hansson, and L. Vandenberghe, “Nuclear norm system identification with missing inputs and outputs,” Systems & Control Letters, vol. 62, no. 8, pp. 605–612, 2013.
[16] L. Buesing, J. Macke, and M. Sahani, “Spectral learning of linear dynamics from generalised-linear observations with application to neural population data,” Advances in Neural Information Processing Systems, vol. 25, 2012.
[17] L. Paninski, J. Pillow, and E. Simoncelli, “Maximum likelihood estimation of a stochastic integrate-and-fire neural encoding model,” Neural Computation, vol. 16, no. 12, pp. 2533–2561, 2004.
[18] E. Chornoboy, L. Schramm, and A. Karr, “Maximum likelihood identification of neural point process systems,” Biological Cybernetics, vol. 59, no. 4-5, pp. 265–275, 1988.
[19] J. Macke, J. Cunningham, B. M. Yu, K. Shenoy, and M. Sahani, “Empirical models of spiking in neural populations,” Advances in Neural Information Processing Systems, vol. 24, 2011.
[20] M. Collins, S. Dasgupta, and R. E. Schapire, “A generalization of principal component analysis to the exponential family,” Advances in Neural Information Processing Systems, vol. 14, 2001.
[21] V. Solo and S. A. Pasha, “Point-process principal components analysis via geometric optimization,” Neural Computation, vol. 25, no. 1, pp. 101–122, 2013.
[22] Z. Zhou, X. Li, J. Wright, E. Candes, and Y. Ma, “Stable principal component pursuit,” Proceedings of the IEEE International Symposium on Information Theory, pp. 1518–1522, 2010.
[23] A. Banerjee, S. Merugu, I. S. Dhillon, and J. Ghosh, “Clustering with Bregman divergences,” The Journal of Machine Learning Research, vol. 6, pp. 1705–1749, 2005.
[24] S. P. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, “Distributed optimization and statistical learning via the alternating direction method of multipliers,” Foundations and Trends® in Machine Learning, vol. 3, no. 1, pp. 1–122, 2011.
[25] P. Van Overschee and B. De Moor, “Subspace identification for linear systems: theory, implementation, applications,” Kluwer Academic Publishers, 1996.
[26] V. Lawhern, W. Wu, N. Hatsopoulos, and L. Paninski, “Population decoding of motor cortical activity using a generalized linear model with hidden states,” Journal of Neuroscience Methods, vol. 189, no. 2, pp. 267–280, 2010.
[27] S. Koyama, L. Castellanos Pérez-Bolde, C. R. Shalizi, and R. E. Kass, “Approximate methods for state-space models,” Journal of the American Statistical Association, vol. 105, no. 489, pp. 170–180, 2010.
[28] J. W. Pillow, J. Shlens, L. Paninski, A. Sher, A. M. Litke, E. Chichilnisky, and E. P. Simoncelli, “Spatiotemporal correlations and visual signalling in a complete neuronal population,” Nature, vol. 454, no. 7207, pp. 995–999, 2008.
[29] M. Harrison, “Conditional inference for learning the network structure of cortical microcircuits,” in 2012 Joint Statistical Meeting, San Diego, CA, 2012.