nips nips2011 nips2011-86 knowledge-graph by maker-knowledge-mining

86 nips-2011-Empirical models of spiking in neural populations


Source: pdf

Author: Jakob H. Macke, Lars Buesing, John P. Cunningham, Byron M. Yu, Krishna V. Shenoy, Maneesh Sahani

Abstract: Neurons in the neocortex code and compute as part of a locally interconnected population. Large-scale multi-electrode recording makes it possible to access these population processes empirically by fitting statistical models to unaveraged data. What statistical structure best describes the concurrent spiking of cells within a local network? We argue that in the cortex, where firing exhibits extensive correlations in both time and space and where a typical sample of neurons still reflects only a very small fraction of the local population, the most appropriate model captures shared variability by a low-dimensional latent process evolving with smooth dynamics, rather than by putative direct coupling. We test this claim by comparing a latent dynamical model with realistic spiking observations to coupled generalised linear spike-response models (GLMs) using cortical recordings. We find that the latent dynamical approach outperforms the GLM in terms of goodness-of-fit, and reproduces the temporal correlations in the data more accurately. We also compare models whose observation models are derived from either a Gaussian or a point-process model, finding that the non-Gaussian model provides slightly better goodness-of-fit and more realistic population spike counts.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Empirical models of spiking in neural populations Lars Büsing Gatsby Computational Neuroscience Unit University College London, UK lars@gatsby. [sent-1, score-0.194]

2 Large-scale multi-electrode recording makes it possible to access these population processes empirically by fitting statistical models to unaveraged data. [sent-19, score-0.318]

3 We test this claim by comparing a latent dynamical model with realistic spiking observations to coupled generalised linear spike-response models (GLMs) using cortical recordings. [sent-22, score-0.686]

4 We find that the latent dynamical approach outperforms the GLM in terms of goodness-of-fit, and reproduces the temporal correlations in the data more accurately. [sent-23, score-0.496]

5 We also compare models whose observation models are derived from either a Gaussian or a point-process model, finding that the non-Gaussian model provides slightly better goodness-of-fit and more realistic population spike counts. [sent-24, score-0.432]

6 1 Introduction Multi-electrode array recording and similar methods provide measurements of activity from dozens of neurons simultaneously, and thus allow unprecedented insights into the statistical structure of neural population activity. [sent-25, score-0.538]

7 To exploit this potential we need methods that identify the temporal dynamics of population activity and link it to external stimuli and observed behaviour. [sent-26, score-0.436]

8 These statistical models of population activity are essential for understanding neural coding at a population level [1] and can have practical applications for Brain Machine Interfaces [2]. [sent-27, score-0.637]

9 Two frameworks for modelling the temporal dynamics of cortical population recordings have recently become popular. [sent-28, score-0.484]

10 Generalised Linear spike-response Models (GLMs) [1, 3, 4, 5] model the influence of spiking history, external stimuli or other neural signals on the firing of a neuron. [sent-29, score-0.156]

11 Here, the interdependence of different neurons is modelled by terms that link the instantaneous firing rate of each neuron to the recent spiking history of the population. [sent-30, score-0.535]

12 Such models have been successful in a range of studies and systems, including retinal [1] and cortical [7] population recordings. [sent-32, score-0.353]

13 An alternative is provided by latent variable models such as Gaussian Process Factor Analysis [8] or other state-space models [9, 10, 11]. [sent-33, score-0.223]

14 In this approach, shared variability (or ‘noise correlation’) is modelled by an unobserved process driving the population, which is sometimes characterised as ‘common input’ [12, 13]. [sent-34, score-0.181]

15 One advantage of this approach is that the trajectories of the latent state provide a compact, low-dimensional representation of the population which can be used to visualise population activity, and link it to observed behaviour [14]. [sent-35, score-0.634]

16 1 Comparing coupled generalised linear models and latent variable models Three lines of argument suggest that latent dynamical models may provide a better fit to cortical population data than the spike-response GLM. [sent-37, score-0.987]

17 First, prevalent recording apparatus, such as extracellular grid electrodes, sample neural populations very sparsely, making it unlikely that much of the observed shared variability is a consequence of direct physical interaction. [sent-38, score-0.192]

18 Without direct synaptic coupling, it is unlikely that variability is shared exclusively by particular pairs of units; instead, it will generally be common to many cells—an assumption explicit in the latent variable approach, where shared variability results from the model of cortical dynamics. [sent-40, score-0.437]

19 Second, most cortical population recordings find that shared variability across neurons is dominated by a central peak at zero time lag. [sent-41, score-0.665]

20 Correlations with these properties arise naturally in dynamical system models. [sent-44, score-0.218]

21 The common input from the latent state induces instantaneous correlations, and the evolution of the latent system typically yields positive temporal correlations over moderate timescales. [sent-45, score-0.569]

22 By contrast, GLMs couple instantaneous rate to the recent spiking of other neurons, but not to their simultaneous activity, making zero-lag correlation hard to model. [sent-46, score-0.217]

23 In addition, positive history coupling in a GLM may lead to loops of self-excitation, predicting unrealistically high firing rates—a trend that must be countered by long-term negative self-coupling. [sent-52, score-0.177]

24 In dynamical system models, the parameter count grows linearly with population size (for a constant latent dimension), whereas the parameters of a coupled GLM depend quadratically on the number of neurons. [sent-56, score-0.67]
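
As a concrete illustration of this scaling (our numbers, not the paper's): with q = 100 recorded neurons and a latent dimension of p = 5, the dynamical system's observation loadings C contain q × p = 500 parameters, whereas a coupled GLM has on the order of q × q = 10,000 cross-coupling parameters, and several times that if each coupling is expanded over multiple history basis functions.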

25 Here we show that population activity in monkey motor cortex is better fit by a dynamical system model than by a spike-response GLM; and that the dynamical system, but not a GLM of the same temporal resolution, accurately reproduces the temporal structure of cross-correlations in these data. [sent-58, score-1.037]

26 2 Comparing dynamical system models with spiking or Gaussian observations Many studies of population latent variable models assume Gaussian observation noise [8, 17]. [sent-60, score-0.786]

27 Here, we describe a latent linear dynamical system whose count distribution, when conditioned on all past observations, is Poisson. [sent-66, score-0.404]

28 Using a co-smoothing metric, we show that this (computationally more expensive) count model predicts spike counts in our data better than a Gaussian linear dynamical system (GLDS). [sent-68, score-0.406]

29 The two models give substantially different population spike-count distributions, and the count approach is also more accurate on this measure than either the GLDS or GLM. [sent-69, score-0.304]

30 1 Dynamical systems with count observations and time-varying mean rates We first consider the count-process latent dynamical model (PLDS). [sent-71, score-0.384]

31 Denote by ykt the observed spike-count of neuron i ∈ {1, . . . , q}. [sent-72, score-0.164]

32 Neurons are assumed to be conditionally independent given the low-dimensional latent state xkt (of dimensionality p with p < q). [sent-82, score-0.283]

33 Thus, correlated neural variability arises from variations of this latent population state, and not from direct interaction between neurons. [sent-83, score-0.472]

34 The history term st is a vector of all relevant recent spiking in the population [1, 3, 7, 20]. [sent-85, score-0.475]

35 For example, one choice to model spike refractoriness would set skt to the counts at the previous time point skt = yk,(t−1) , and D to a diagonal matrix of size q × q with negative diagonal entries. [sent-86, score-0.273]

36 However, to maintain the conditional independence of neurons given the latent state, the matrix D (of size q × dim(s)) is constrained to have zero values at all entries corresponding to cross-neuron couplings. [sent-88, score-0.293]

37 Furthermore, while, conditioned on the latent state and the recent spiking history, the count in each bin is Poisson distributed (hence the model name), samples from the model are not Poisson, as they are affected both by variations in the underlying state and by the single-neuron history. [sent-90, score-0.533]

38 We assume that the latent population state xkt evolves according to driven linear Gaussian dynamics: $x_{k1} \sim \mathcal{N}(x_o, Q_o)$ (2) and $x_{k(t+1)} \mid x_{kt} \sim \mathcal{N}(A x_{kt} + b_t, Q)$ (3). Here, $x_o$ and $Q_o$ denote the average value and the covariance of the initial state $x_1$ of each trial. [sent-91, score-0.671]

39 The p × p matrix A specifies the deterministic component of the evolution from one state to the next, and the matrix Q gives the covariance of the innovations that perturb the latent state at each time step. [sent-92, score-0.213]

40 The ‘driving inputs’ bt , which add to the latent state, allow the model to capture time-varying structure in the firing rates that is consistent across trials. [sent-93, score-0.253]

41 Here, by contrast, time-varying means are captured by the driving inputs into the latent state, and so only p × T parameters are needed to describe all the PSTHs. [sent-95, score-0.221]
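
To make the generative model concrete, the following is a minimal NumPy sketch (our illustration, not the authors' code) of sampling one trial from the PLDS, omitting the spike-history term (D = 0) for brevity; all function and variable names are hypothetical:

    import numpy as np

    def sample_plds_trial(A, Q, Q0, x0, b, C, d, T, rng=None):
        # A: p x p dynamics; Q, Q0: innovation and initial covariances;
        # x0: mean initial state; b: (T-1) x p driving inputs;
        # C: q x p loadings; d: length-q log-rate offsets (exp link assumed).
        rng = np.random.default_rng() if rng is None else rng
        p, q = A.shape[0], C.shape[0]
        x = np.zeros((T, p))
        y = np.zeros((T, q), dtype=int)
        x[0] = rng.multivariate_normal(x0, Q0)                  # eq. (2)
        for t in range(T):
            if t > 0:                                           # eq. (3)
                x[t] = rng.multivariate_normal(A @ x[t - 1] + b[t - 1], Q)
            rate = np.exp(C @ x[t] + d)     # conditionally Poisson counts
            y[t] = rng.poisson(rate)
        return x, y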

42 The E-step requires the posterior distribution $P(\bar{x}_k \mid y_k, \Theta)$ over the latent trajectories $\bar{x}_k = \mathrm{vec}(x_{k,1:T})$ given the data and our current estimate of the parameters Θ. [sent-98, score-0.173]

43 by maximising the log-posterior $\log P(\bar{x}_k, y_k)$ of each trial over $\bar{x}_k$, setting $\mu_k = \arg\max_{\bar{x}} P(\bar{x} \mid y_k, \Theta)$ to be the latent trajectory that achieves this maximum, and $\Sigma_k = -\left( \nabla\nabla_{\bar{x}} \log P(\bar{x} \mid y_k, \Theta) \big|_{\bar{x} = \mu_k} \right)^{-1}$ to be the negative inverse Hessian of the log-posterior at its maximum. [sent-103, score-0.313]

44 $\mathcal{L}(\Theta') = \sum_k \int d\bar{x}\, \mathcal{N}(\bar{x}; \mu_k, \Sigma_k) \log P(\bar{x}, y_k \mid \Theta')$ (5) This integral can be evaluated in closed form, and efficiently optimised over the parameters: $\mathcal{L}(\Theta')$ is jointly concave in the parameters C, d, D, and the updates with respect to the dynamics parameters A, Q, Qo, xo and the driving inputs bt can be calculated analytically. [sent-111, score-0.293]
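
For concreteness, here is a compact sketch of this Laplace-approximation E-step for the simplified model without history terms or driving inputs (our naming; a dense implementation for clarity, whereas a practical one would exploit the block-tridiagonal Hessian structure):

    import numpy as np
    from scipy.optimize import minimize

    def laplace_estep(y, C, d, A, Q, x0, Q0):
        # Posterior mode mu (T x p) and covariance Sigma (Tp x Tp) of the
        # latent trajectory given spike counts y (T x q).
        T, p = y.shape[0], A.shape[0]
        Qi, Q0i = np.linalg.inv(Q), np.linalg.inv(Q0)

        def neg_log_post(xf):
            x = xf.reshape(T, p)
            eta = x @ C.T + d                      # log firing rates
            ll = np.sum(y * eta - np.exp(eta))     # Poisson log-likelihood
            d0 = x[0] - x0
            dx = x[1:] - x[:-1] @ A.T              # dynamics residuals
            return -(ll - 0.5 * d0 @ Q0i @ d0
                        - 0.5 * np.einsum('tp,pq,tq->', dx, Qi, dx))

        # Gradient left to finite differences; adequate for a sketch.
        mu = minimize(neg_log_post, np.zeros(T * p),
                      method='L-BFGS-B').x.reshape(T, p)

        # Negative Hessian of the log-posterior at the mode:
        # block-tridiagonal prior precision plus per-bin C' diag(rate) C.
        H = np.zeros((T * p, T * p))
        AtQiA = A.T @ Qi @ A
        for t in range(T):
            blk = (Q0i if t == 0 else Qi) + (AtQiA if t < T - 1 else 0)
            rate = np.exp(mu[t] @ C.T + d)
            H[t*p:(t+1)*p, t*p:(t+1)*p] = blk + C.T @ (rate[:, None] * C)
            if t < T - 1:
                H[t*p:(t+1)*p, (t+1)*p:(t+2)*p] = -A.T @ Qi
                H[(t+1)*p:(t+2)*p, t*p:(t+1)*p] = -Qi @ A
        return mu, np.linalg.inv(H)                # mu_k, Sigma_k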

45 If implemented naively, this requires q inferences of the latent state from the activity of q−1 neurons. [sent-115, score-0.287]

46 $y_{kt} \mid s_{kt} \sim \mathrm{Poisson}\left(\exp\left(d + D s_{kt} + b_t\right)\right)$ (6) The coupling matrix D describes dependence both on the history of firing in the same neuron and on spiking in other neurons, and the q × 1 vectors bt model time-varying mean firing rates. [sent-121, score-0.488]

47 While equation (6) is similar to the definition of the PLDS model in equation (1), the models differ in their treatment of shared variability: the GLM has no latent state xt, and so shared variance is modelled through the cross-coupling terms of the matrix D, which are set to 0 in the PLDS. [sent-123, score-0.38]

48 As the number of parameters in the GLM is quadratic in population size, it may be prone to overfitting on small datasets. [sent-124, score-0.227]

49 To improve the generalisation ability of the GLM we added a sparsity-inducing L1 prior on the coupling parameters and a smoothness prior on the PSTH parameters bt, and minimized the (convex) cost function using methods described in [24]: the negative log-likelihood plus penalty terms of the form $\eta_1 \|D\|_1 + \eta_2 \sum_i b_i^\top K^{-1} b_i$, where $b_i$ collects the PSTH parameters of neuron $i$. [sent-125, score-0.211]
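
For illustration, a sketch of such a penalised cost under hypothetical shapes and names (the actual optimisation follows [24]):

    import numpy as np

    def glm_cost(d, D, b, Y, S, Kinv, eta1, eta2):
        # Convex cost for the coupled Poisson GLM of equation (6).
        # Y: trials x T x q spike counts; S: trials x T x dim(s) history
        # features; d: length-q offsets; D: q x dim(s) couplings;
        # b: T x q time-varying baselines; Kinv: T x T smoothing precision.
        eta = d + S @ D.T + b                      # log-rates
        nll = np.sum(np.exp(eta) - Y * eta)        # Poisson NLL (+ const.)
        l1 = eta1 * np.abs(D).sum()                # sparsity on couplings
        smooth = eta2 * np.einsum('tq,ts,sq->', b, Kinv, b)
        return nll + l1 + smooth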

50 An apparently less veridical alternative would be to model counts as conditionally Gaussian given the latent state. [sent-142, score-0.198]

51 We used the EM algorithm [9] to fit a linear dynamical system model with Gaussian noise and driving inputs [17] (GLDS). [sent-143, score-0.292]

52 Finally, we also compared PLDS to Gaussian Process Factor Analysis (GPFA) [8], a Gaussian model in which the latent trajectories are drawn not from a linear dynamical system, but from a more general Gaussian Process with (here) a squared-exponential kernel. [sent-145, score-0.321]

53 We did not include the driving inputs bt in this model, and used the full model for co-smoothing. [sent-146, score-0.156]

54 Each neuron’s firing rate was first predicted using the activity of all other neurons on each test trial. [sent-150, score-0.22]

55 For the GLM (but not PLDS), predictions reported were based on the past activity of other neurons, but also used the observed past activity of the neuron being predicted (results exploiting all data from other neurons were similar). [sent-151, score-0.413]

56 Positive values indicate that prediction is more accurate than a constant prediction equal to the true mean activity of that neuron on that trial. [sent-157, score-0.301]
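
A minimal sketch of this prediction score, assuming it is computed as variance minus mean-squared error (as stated in the Figure 1 caption below); names are ours:

    import numpy as np

    def cosmoothing_score(y_true, y_pred):
        # Positive iff the prediction beats the constant mean of y_true.
        return np.var(y_true) - np.mean((y_true - y_pred) ** 2)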

57 4 Details of neural recordings and choice of parameters We evaluated the methods described above on multi-electrode recordings from the motor cortex of a behaving monkey. [sent-162, score-0.279]

58 For the PLDS model, dimensionality of the latent state varied from 1 to 20. [sent-172, score-0.205]

59 D = 0), or used spike history mapped to a set of 4 basis functions formed by orthogonalising decaying exponentials with time constants 0. [sent-175, score-0.228]

60 The history term st was then obtained by projecting spike counts in the previous 100ms onto each of these functions. [sent-177, score-0.279]

61 1 Results Goodness-of-fit of dynamical system models and GLMs We first compared the goodness-of-fit of PLDS with p = 5 latent dimensions against that of GLMs. [sent-185, score-0.403]

62 [Figure 1 axes and legend: dimensionality of latent space; GPFA, GLDS, PLDS, PLDS 100ms] [sent-201, score-0.147]

63 Figure 1: Quantifying goodness-of-fit. [sent-203, score-0.147]

64 A) Prediction performance (variance minus mean-squared error on test-set) of various coupled GLMs (10 ms history; 2 variants with 100 ms history; 150 ms history) plotted against sparsity in the filter matrix D generated by different choices of η1 . [sent-204, score-0.183]

65 C) Prediction performance of different latent variable models (GPFA, and LDSs with Gaussian, Poisson or history-dependent Poisson noise) on the test-set. [sent-208, score-0.185]

66 PLDS outperforms alternatives, and performance plateaus at small latent dimensionalities. [sent-211, score-0.147]

67 This was true for GLMs with history terms of length 10ms, 100ms or 150ms (with 1, 4 or 5 basis functions each, which were equivalent to the history functions used for the spiking-history in the dynamical system model, with an additional 80 ms time-constant exponential as the 5th basis function). [sent-216, score-0.526]

68 In this case, the prediction performance of both models decreased slightly, but the latent variable models still had substantially better prediction performance. [sent-218, score-0.331]

69 Next, we investigated whether a more realistic spiking noise model would further improve the performance of the dynamical system model, and how this would depend on the latent dimensionality d. [sent-223, score-0.539]

70 We therefore compared our models (GPFA, GLDS, PLDS, PLDS with 100ms history) for different choices of the latent dimensionality d. [sent-224, score-0.21]

71 Thus, of the models considered here, a low-dimensional latent variable provides the best fit to the data. [sent-227, score-0.185]

72 A) Average temporal cross-correlation in four groups of neurons (color-coded from most to least correlated), and comparison with correlations captured by the dynamical system models with Gaussian, Poisson or history-dependent Poisson noise. [sent-239, score-0.509]

73 B) Comparison of GLMs with differing history-dependence with cortical recordings; the correlations of the models differ markedly from those of the data, and do not have a peak at zero time-lag. [sent-241, score-0.23]

74 We also found that models with the more realistic spiking noise model (PLDS, and PLDS 100ms) had a small, but consistent performance benefit over the computationally more efficient Gaussian models (GLDS, GPFA). [sent-242, score-0.225]

75 However, for the dataset and comparison considered here (which was based on predicting the mean activity averaged over all possible spiking histories), we only found a small advantage of also adding single-neuron dynamics (i.e. the single-neuron history terms). [sent-243, score-0.265]

76 If we compared the models using their ability to predict population activity on the next time-step from the observed population history, single-neuron filters did have an effect. [sent-246, score-0.599]

77 In this prediction task, PLDS with history filters performed best, in particular better than GLMs. [sent-247, score-0.184]

78 When using AUC rather than mean-squared-error to quantify prediction performance, we found similar results: Low-dimensional models showed best performance, spiking models slightly outperformed Gaussian ones, and adding single-neuron dynamics yielded only a small benefit. [sent-248, score-0.313]

79 Thus, all four of our latent variable models provided better fits to the dataset than GLMs did. [sent-251, score-0.185]

80 2 Reproducing the correlations of cortical population activity In the introduction, we argued that dynamical system models would be more appropriate for capturing the typical temporal structure of cross-neural correlations in cortical multi-cell recordings. [sent-253, score-1.01]

81 First, we subtracted the time-varying mean firing rate (PSTH) of each neuron to eliminate correlations induced by similarity in mean firing rates. [sent-255, score-0.164]

82 We divided neurons into four groups (Fig. 2) according to their total correlation (using summed correlation coefficients with all other neurons), and calculated the average pairwise correlation in each group. [sent-258, score-0.149]

83 Figure 2A shows the resulting average time-lagged correlations, and demonstrates that both dynamical system models accurately capture this aspect of the correlation structure of the data. [sent-260, score-0.321]
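
A sketch of this correlation analysis (our implementation choices for normalisation and averaging, not necessarily the paper's exact recipe):

    import numpy as np

    def avg_lagged_crosscorr(Y, max_lag):
        # Y: trials x T x q binned spike counts.
        R = Y - Y.mean(axis=0, keepdims=True)            # subtract PSTHs
        R = R / (R.std(axis=(0, 1), keepdims=True) + 1e-12)
        ntr, T, q = R.shape
        lags = np.arange(-max_lag, max_lag + 1)
        cc = np.zeros(lags.size)
        for li, lag in enumerate(lags):
            a = R[:, max(0, lag):T + min(0, lag), :]     # neuron i at t+lag
            b = R[:, max(0, -lag):T - max(0, lag), :]    # neuron j at t
            pair = np.einsum('kti,ktj->ij', a, b) / (ntr * a.shape[1])
            cc[li] = (pair.sum() - np.trace(pair)) / (q * (q - 1))
        return lags, cc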

84 Distribution of the population spike counts, and comparison with distributions from PLDS, GLDS and two versions of the GLM with 150ms history dependence (GLM with no regularization, GLM2 with optimal sparsity). [sent-272, score-0.455]

85 [Figure axis: number of spikes in 10ms bin] population activity, namely the distribution of population spike counts, i.e. [sent-278, score-0.637]

86 the distribution of the total number of spikes across the population per time bin. [sent-280, score-0.279]
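
This statistic is straightforward to compute from binned data; a short sketch under our naming:

    import numpy as np

    def population_count_distribution(Y):
        # Y: trials x T x q binned spike counts.
        totals = Y.sum(axis=-1).ravel().astype(int)  # spikes per bin, summed over neurons
        hist = np.bincount(totals)
        return hist / hist.sum()                     # P(population count = n)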

87 4 Discussion We explored a statistical model of cortical population recordings based on a latent dynamical system with count-process observations. [sent-287, score-0.747]

88 We argued that such a model provides a more natural modeling choice than coupled spike-response GLMs for cortical array-recordings; and indeed, this model did fit motor-cortical multi-unit recordings better, and more faithfully reproduced the temporal structure of cross-neural correlations. [sent-288, score-0.268]

89 GLMs have many attractive properties, and given the flexibility of the model class, it is impossible to rule out that some coupled GLM with finer temporal resolution, possibly nonlinear history dependencies, and cleverly chosen regularization would yield better cross-validation performance. [sent-289, score-0.231]

90 Here we argued that latent variable models yield a more appropriate account of cross-neural correlations with zero-lag peaks: in GLMs, one has to use a fine discretization of the time-axis (which can be computationally intensive) or work in continuous time to achieve this. [sent-290, score-0.289]

91 We also showed that a model with count-process observations yields better fits to our data than ones with a Gaussian noise model, and that it has a more realistic distribution of population spike counts. [sent-292, score-0.356]

92 For the recordings we considered here, a dynamical system model with count-process observations worked best, but there will be datasets for which either GLMs, or GLDS or GPFA provide the most appropriate model. [sent-299, score-0.285]

93 While we used a co-smoothing metric to quantify model performance, different models might be more suitable for decoding reaching movements from population activity [11], or inferring the underlying anatomical connectivity from extracellular recordings. [sent-301, score-0.438]

94 Collective dynamics in human and monkey sensorimotor cortex: predicting single neuron spikes. [sent-360, score-0.172]

95 Gaussian-process factor analysis for low-dimensional single-trial analysis of neural population activity. [sent-373, score-0.265]

96 Population decoding of motor cortical activity using a generalized linear model with hidden states. [sent-392, score-0.286]

97 Synchrony between neurons with similar muscle fields in monkey motor cortex. [sent-439, score-0.209]

98 Bayesian population decoding of motor cortical activity using a kalman filter. [sent-449, score-0.513]

99 Inferring neural firing rates from spike trains using gaussian processes. [sent-470, score-0.201]

100 Efficient markov chain monte carlo methods for decoding neural spike trains. [sent-508, score-0.177]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('plds', 0.56), ('glm', 0.349), ('glds', 0.249), ('population', 0.227), ('glms', 0.224), ('dynamical', 0.174), ('latent', 0.147), ('history', 0.13), ('firing', 0.122), ('spiking', 0.118), ('neurons', 0.113), ('poisson', 0.11), ('gpfa', 0.109), ('activity', 0.107), ('spike', 0.098), ('cortical', 0.088), ('neuron', 0.086), ('bt', 0.082), ('correlations', 0.078), ('xkt', 0.078), ('ykt', 0.078), ('auc', 0.077), ('xo', 0.071), ('yk', 0.069), ('recordings', 0.067), ('dskt', 0.062), ('skt', 0.062), ('temporal', 0.062), ('shenoy', 0.062), ('variability', 0.06), ('instantaneous', 0.058), ('prediction', 0.054), ('comput', 0.053), ('recording', 0.053), ('spikes', 0.052), ('generalised', 0.051), ('counts', 0.051), ('motor', 0.05), ('driving', 0.05), ('qo', 0.05), ('ms', 0.048), ('coupling', 0.047), ('axkt', 0.047), ('cxkt', 0.047), ('monkey', 0.046), ('trial', 0.044), ('system', 0.044), ('lag', 0.043), ('gaussian', 0.041), ('cunningham', 0.041), ('psth', 0.041), ('correlation', 0.041), ('decoding', 0.041), ('shared', 0.041), ('dynamics', 0.04), ('bins', 0.04), ('count', 0.039), ('coupled', 0.039), ('neural', 0.038), ('neurosci', 0.038), ('models', 0.038), ('yu', 0.037), ('santhanam', 0.035), ('reproduces', 0.035), ('finer', 0.035), ('gatsby', 0.035), ('ryu', 0.034), ('ahmadian', 0.034), ('bin', 0.033), ('state', 0.033), ('cortex', 0.032), ('sahani', 0.032), ('resolution', 0.032), ('filters', 0.031), ('ldat', 0.031), ('realistic', 0.031), ('modelled', 0.03), ('dim', 0.03), ('laplace', 0.028), ('couplings', 0.027), ('maximising', 0.027), ('pillow', 0.027), ('fitting', 0.027), ('calculated', 0.026), ('argued', 0.026), ('peak', 0.026), ('xk', 0.026), ('uk', 0.026), ('dimensionality', 0.025), ('london', 0.025), ('behaving', 0.025), ('afshar', 0.025), ('invalid', 0.025), ('quantify', 0.025), ('kt', 0.025), ('var', 0.025), ('pt', 0.025), ('inputs', 0.024), ('rates', 0.024), ('accurately', 0.024)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000001 86 nips-2011-Empirical models of spiking in neural populations

Author: Jakob H. Macke, Lars Buesing, John P. Cunningham, Byron M. Yu, Krishna V. Shenoy, Maneesh Sahani

Abstract: Neurons in the neocortex code and compute as part of a locally interconnected population. Large-scale multi-electrode recording makes it possible to access these population processes empirically by fitting statistical models to unaveraged data. What statistical structure best describes the concurrent spiking of cells within a local network? We argue that in the cortex, where firing exhibits extensive correlations in both time and space and where a typical sample of neurons still reflects only a very small fraction of the local population, the most appropriate model captures shared variability by a low-dimensional latent process evolving with smooth dynamics, rather than by putative direct coupling. We test this claim by comparing a latent dynamical model with realistic spiking observations to coupled generalised linear spike-response models (GLMs) using cortical recordings. We find that the latent dynamical approach outperforms the GLM in terms of goodness-of-fit, and reproduces the temporal correlations in the data more accurately. We also compare models whose observation models are derived from either a Gaussian or a point-process model, finding that the non-Gaussian model provides slightly better goodness-of-fit and more realistic population spike counts.

2 0.23726892 75 nips-2011-Dynamical segmentation of single trials from population neural data

Author: Biljana Petreska, Byron M. Yu, John P. Cunningham, Gopal Santhanam, Stephen I. Ryu, Krishna V. Shenoy, Maneesh Sahani

Abstract: Simultaneous recordings of many neurons embedded within a recurrently-connected cortical network may provide concurrent views into the dynamical processes of that network, and thus its computational function. In principle, these dynamics might be identified by purely unsupervised, statistical means. Here, we show that a Hidden Switching Linear Dynamical Systems (HSLDS) model—in which multiple linear dynamical laws approximate a nonlinear and potentially non-stationary dynamical process—is able to distinguish different dynamical regimes within single-trial motor cortical activity associated with the preparation and initiation of hand movements. The regimes are identified without reference to behavioural or experimental epochs, but nonetheless transitions between them correlate strongly with external events whose timing may vary from trial to trial. The HSLDS model also performs better than recent comparable models in predicting the firing rate of an isolated neuron based on the firing rates of others, suggesting that it captures more of the “shared variance” of the data. Thus, the method is able to trace the dynamical processes underlying the coordinated evolution of network activity in a way that appears to reflect its computational role.

3 0.19636621 302 nips-2011-Variational Learning for Recurrent Spiking Networks

Author: Danilo J. Rezende, Daan Wierstra, Wulfram Gerstner

Abstract: We derive a plausible learning rule for feedforward, feedback and lateral connections in a recurrent network of spiking neurons. Operating in the context of a generative model for distributions of spike sequences, the learning mechanism is derived from variational inference principles. The synaptic plasticity rules found are interesting in that they are strongly reminiscent of experimental Spike Time Dependent Plasticity, and in that they differ for excitatory and inhibitory neurons. A simulation confirms the method’s applicability to learning both stationary and temporal spike patterns. 1

4 0.18679602 135 nips-2011-Information Rates and Optimal Decoding in Large Neural Populations

Author: Kamiar R. Rad, Liam Paninski

Abstract: Many fundamental questions in theoretical neuroscience involve optimal decoding and the computation of Shannon information rates in populations of spiking neurons. In this paper, we apply methods from the asymptotic theory of statistical inference to obtain a clearer analytical understanding of these quantities. We find that for large neural populations carrying a finite total amount of information, the full spiking population response is asymptotically as informative as a single observation from a Gaussian process whose mean and covariance can be characterized explicitly in terms of network and single neuron properties. The Gaussian form of this asymptotic sufficient statistic allows us in certain cases to perform optimal Bayesian decoding by simple linear transformations, and to obtain closed-form expressions of the Shannon information carried by the network. One technical advantage of the theory is that it may be applied easily even to non-Poisson point process network models; for example, we find that under some conditions, neural populations with strong history-dependent (non-Poisson) effects carry exactly the same information as do simpler equivalent populations of non-interacting Poisson neurons with matched firing rates. We argue that our findings help to clarify some results from the recent literature on neural decoding and neuroprosthetic design.

5 0.18631527 133 nips-2011-Inferring spike-timing-dependent plasticity from spike train data

Author: Konrad Koerding, Ian Stevenson

Abstract: Synaptic plasticity underlies learning and is thus central for development, memory, and recovery from injury. However, it is often difficult to detect changes in synaptic strength in vivo, since intracellular recordings are experimentally challenging. Here we present two methods aimed at inferring changes in the coupling between pairs of neurons from extracellularly recorded spike trains. First, using a generalized bilinear model with Poisson output we estimate time-varying coupling assuming that all changes are spike-timing-dependent. This approach allows model-based estimation of STDP modification functions from pairs of spike trains. Then, using recursive point-process adaptive filtering methods we estimate more general variation in coupling strength over time. Using simulations of neurons undergoing spike-timing dependent modification, we show that the true modification function can be recovered. Using multi-electrode data from motor cortex we then illustrate the use of this technique on in vivo data. 1

6 0.15140504 82 nips-2011-Efficient coding of natural images with a population of noisy Linear-Nonlinear neurons

7 0.14255524 224 nips-2011-Probabilistic Modeling of Dependencies Among Visual Short-Term Memory Representations

8 0.1311473 249 nips-2011-Sequence learning with hidden units in spiking neural networks

9 0.1309703 200 nips-2011-On the Analysis of Multi-Channel Neural Spike Data

10 0.12564975 99 nips-2011-From Stochastic Nonlinear Integrate-and-Fire to Generalized Linear Models

11 0.12391585 2 nips-2011-A Brain-Machine Interface Operating with a Real-Time Spiking Neural Network Control Algorithm

12 0.12371136 102 nips-2011-Generalised Coupled Tensor Factorisation

13 0.11076739 24 nips-2011-Active learning of neural response functions with Gaussian processes

14 0.10777833 23 nips-2011-Active dendrites: adaptation to spike-based communication

15 0.10723209 37 nips-2011-Analytical Results for the Error in Filtering of Gaussian Processes

16 0.10290564 219 nips-2011-Predicting response time and error rates in visual search

17 0.098869361 301 nips-2011-Variational Gaussian Process Dynamical Systems

18 0.093949862 13 nips-2011-A blind sparse deconvolution method for neural spike identification

19 0.092139453 44 nips-2011-Bayesian Spike-Triggered Covariance Analysis

20 0.080634259 123 nips-2011-How biased are maximum entropy models?


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.186), (1, 0.115), (2, 0.327), (3, -0.045), (4, 0.084), (5, -0.005), (6, -0.033), (7, -0.066), (8, 0.033), (9, 0.052), (10, 0.066), (11, -0.002), (12, -0.045), (13, 0.045), (14, -0.011), (15, -0.045), (16, -0.087), (17, 0.041), (18, -0.011), (19, -0.056), (20, -0.093), (21, -0.088), (22, 0.025), (23, 0.03), (24, -0.047), (25, -0.083), (26, -0.07), (27, 0.022), (28, 0.04), (29, 0.053), (30, 0.003), (31, -0.044), (32, -0.026), (33, 0.037), (34, -0.02), (35, -0.031), (36, 0.102), (37, -0.034), (38, 0.06), (39, -0.06), (40, 0.144), (41, 0.049), (42, -0.012), (43, -0.087), (44, 0.034), (45, -0.027), (46, 0.056), (47, -0.004), (48, -0.036), (49, -0.023)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.94241178 86 nips-2011-Empirical models of spiking in neural populations

Author: Jakob H. Macke, Lars Buesing, John P. Cunningham, Byron M. Yu, Krishna V. Shenoy, Maneesh Sahani

Abstract: Neurons in the neocortex code and compute as part of a locally interconnected population. Large-scale multi-electrode recording makes it possible to access these population processes empirically by fitting statistical models to unaveraged data. What statistical structure best describes the concurrent spiking of cells within a local network? We argue that in the cortex, where firing exhibits extensive correlations in both time and space and where a typical sample of neurons still reflects only a very small fraction of the local population, the most appropriate model captures shared variability by a low-dimensional latent process evolving with smooth dynamics, rather than by putative direct coupling. We test this claim by comparing a latent dynamical model with realistic spiking observations to coupled generalised linear spike-response models (GLMs) using cortical recordings. We find that the latent dynamical approach outperforms the GLM in terms of goodness-of-fit, and reproduces the temporal correlations in the data more accurately. We also compare models whose observation models are derived from either a Gaussian or a point-process model, finding that the non-Gaussian model provides slightly better goodness-of-fit and more realistic population spike counts.

2 0.83465797 75 nips-2011-Dynamical segmentation of single trials from population neural data

Author: Biljana Petreska, Byron M. Yu, John P. Cunningham, Gopal Santhanam, Stephen I. Ryu, Krishna V. Shenoy, Maneesh Sahani

Abstract: Simultaneous recordings of many neurons embedded within a recurrently-connected cortical network may provide concurrent views into the dynamical processes of that network, and thus its computational function. In principle, these dynamics might be identified by purely unsupervised, statistical means. Here, we show that a Hidden Switching Linear Dynamical Systems (HSLDS) model—in which multiple linear dynamical laws approximate a nonlinear and potentially non-stationary dynamical process—is able to distinguish different dynamical regimes within single-trial motor cortical activity associated with the preparation and initiation of hand movements. The regimes are identified without reference to behavioural or experimental epochs, but nonetheless transitions between them correlate strongly with external events whose timing may vary from trial to trial. The HSLDS model also performs better than recent comparable models in predicting the firing rate of an isolated neuron based on the firing rates of others, suggesting that it captures more of the “shared variance” of the data. Thus, the method is able to trace the dynamical processes underlying the coordinated evolution of network activity in a way that appears to reflect its computational role.

3 0.78181785 2 nips-2011-A Brain-Machine Interface Operating with a Real-Time Spiking Neural Network Control Algorithm

Author: Julie Dethier, Paul Nuyujukian, Chris Eliasmith, Terrence C. Stewart, Shauki A. Elasaad, Krishna V. Shenoy, Kwabena A. Boahen

Abstract: Motor prostheses aim to restore function to disabled patients. Despite compelling proof of concept systems, barriers to clinical translation remain. One challenge is to develop a low-power, fully-implantable system that dissipates only minimal power so as not to damage tissue. To this end, we implemented a Kalman-filter based decoder via a spiking neural network (SNN) and tested it in brain-machine interface (BMI) experiments with a rhesus monkey. The Kalman filter was trained to predict the arm’s velocity and mapped on to the SNN using the Neural Engineering Framework (NEF). A 2,000-neuron embedded Matlab SNN implementation runs in real-time and its closed-loop performance is quite comparable to that of the standard Kalman filter. The success of this closed-loop decoder holds promise for hardware SNN implementations of statistical signal processing algorithms on neuromorphic chips, which may offer power savings necessary to overcome a major obstacle to the successful clinical translation of neural motor prostheses.
This neuromorphic approach uses very-large-scale integrated systems containing microelectronic analog circuits to morph neural systems into silicon chips [8, 9]. These neuromorphic circuits may yield tremendous power savings—50nW per silicon neuron [10]—over digital circuits because they use physical operations to perform mathematical computations (analog approach). When implemented on a chip designed using the neuromorphic approach, a 2,000-neuron SNN network can consume as little as 100µW. Demonstrating this approach’s feasibility in a closed-loop system running in real-time is a key, non-incremental step in the development of a fully implantable decoding chip, and is necessary before proceeding with fabricating and implanting the chip. As noise, delay, and over-fitting play a more important role in the closed-loop setting, it is not obvious that the SNN’s stellar open-loop performance will hold up. In addition, performance criteria are different in the closed-loop and openloop settings (e.g., time per target vs. root mean squared error). Therefore, a SNN of a different size may be required to meet the desired specifications. Here we present results and assess the performance and viability of the SNN Kalman-filter based decoder in real-time, closed-loop tests, with the monkey performing a center-out-and-back target acquisition task. To achieve closed-loop operation, we developed an embedded Matlab implementation that ran a 2,000-neuron version of the SNN in real-time on a PC. We achieved almost a 50-fold speed-up by performing part of the computation in a lower-dimensional space defined by the formal method we used to map the Kalman filter on to the SNN. This shortcut allowed us to run a larger SNN in real-time than would otherwise be possible. 2 Spiking neural network mapping of control theory algorithms As reported in [11], a formal methodology, called the Neural Engineering Framework (NEF), has been developed to map control-theory algorithms onto a computational fabric consisting of a highly heterogeneous population of spiking neurons simply by programming the strengths of their connections. These artificial neurons are characterized by a nonlinear multi-dimensional-vector-to-spikerate function—ai (x(t)) for the ith neuron—with parameters (preferred direction, maximum firing rate, and spiking-threshold) drawn randomly from a wide distribution (standard deviation ≈ mean). 2 Spike rate (spikes/s) Representation ˆ x → ai (x) → x = ∑i ai (x)φix ˜ ai (x) = G(αi φix · x + Jibias ) 400 Transformation y = Ax → b j (Aˆ ) x Aˆ = ∑i ai (x)Aφix x x(t) B' y(t) A' 200 0 −1 Dynamics ˙ x = Ax → x = h ∗ A x A = τA + I 0 Stimulus x 1 bk(t) y(t) B' h(t) x(t) A' aj(t) Figure 1: NEF’s three principles. Representation. 1D tuning curves of a population of 50 leaky integrate-and-fire neurons. The neurons’ tuning curves map control variables (x) to spike rates (ai (x)); this nonlinear transformation is inverted by linear weighted decoding. G() is the neurons’ nonlinear current-to-spike-rate function. Transformation. SNN with populations bk (t) and a j (t) representing y(t) and x(t). Feedforward and recurrent weights are determined by B and A , as described next. Dynamics. The system’s dynamics is captured in a neurally plausible fashion by replacing integration with the synapses’ spike response, h(t), and replacing the matrices with A = τA + I and B = τB to compensate. 
The neural engineering approach to configuring SNNs to perform arbitrary computations is underlined by three principles (Figure 1) [11]-[14]: Representation is defined by nonlinear encoding of x(t) as a spike rate, ai (x(t))—represented by the neuron tuning curve—combined with optimal weighted linear decoding of ai (x(t)) to recover ˆ an estimate of x(t), x(t) = ∑i ai (x(t))φix , where φix are the decoding weights. Transformation is performed by using alternate decoding weights in the decoding operation to map transformations of x(t) directly into transformations of ai (x(t)). For example, y(t) = Ax(t) is represented by the spike rates b j (Aˆ (t)), where unit j’s input is computed directly from unit i’s x output using Aˆ (t) = ∑i ai (x(t))Aφix , an alternative linear weighting. x Dynamics brings the first two principles together and adds the time dimension to the circuit. This principle aims at reuniting the control-theory and neural levels by modifying the matrices to render the system neurally plausible, thereby permitting the synapses’ spike response, h(t), (i.e., impulse ˙ response) to capture the system’s dynamics. For example, for h(t) = τ −1 e−t/τ , x = Ax(t) is realized by replacing A with A = τA + I. This so-called neurally plausible matrix yields an equivalent dynamical system: x(t) = h(t) ∗ A x(t), where convolution replaces integration. The nonlinear encoding process—from a multi-dimensional stimulus, x(t), to a one-dimensional soma current, Ji (x(t)), to a firing rate, ai (x(t))—is specified as: ai (x(t)) = G(Ji (x(t))). (1) Here G is the neurons’ nonlinear current-to-spike-rate function, which is given by G(Ji (x)) = τ ref − τ RC ln (1 − Jth /Ji (x)) −1 , (2) for the leaky integrate-and-fire model (LIF). The LIF neuron has two behavioral regimes: subthreshold and super-threshold. The sub-threshold regime is described by an RC circuit with time constant τ RC . When the sub-threshold soma voltage reaches the threshold, Vth , the neuron emits a spike δ (t −tn ). After this spike, the neuron is reset and rests for τ ref seconds (absolute refractory period) before it resumes integrating. Jth = Vth /R is the minimum input current that produces spiking. Ignoring the soma’s RC time-constant when specifying the SNN’s dynamics are reasonable because the neurons cross threshold at a rate that is proportional to their input current, which thus sets the spike rate instantaneously, without any filtering [11]. The conversion from a multi-dimensional stimulus, x(t), to a one-dimensional soma current, Ji , is ˜ performed by assigning to the neuron a preferred direction, φix , in the stimulus space and taking the dot-product: ˜ Ji (x(t)) = αi φix · x(t) + Jibias , (3) 3 where αi is a gain or conversion factor, and Jibias is a bias current that accounts for background ˜ activity. For a 1D space, φix is either +1 or −1 (drawn randomly), for ON and OFF neurons, respectively. The resulting tuning curves are illustrated in Figure 1, left. The linear decoding process is characterized by the synapses’ spike response, h(t) (i.e., post-synaptic currents), and the decoding weights, φix , which are obtained by minimizing the mean square error. A single noise term, η, takes into account all sources of noise, which have the effect of introducing uncertainty into the decoding process. Hence, the transmitted firing rate can be written as ai (x(t)) + ηi , where ai (x(t)) represents the noiseless set of tuning curves and ηi is a random variable picked from a zero-mean Gaussian distribution with variance σ 2 . 
Consequently, the mean square error can be written as [11]: E = 1 ˆ [x(t) − x(t)]2 2 x,η,t = 2 1 2 x(t) − ∑ (ai (x(t)) + ηi ) φix i (4) x,η,t where · x,η denotes integration over the range of x and η, the expected noise. We assume that the noise is independent and has the same variance for each neuron [11], which yields: E= where σ2 1 2 2 x(t) − ∑ ai (x(t))φix i x,t 1 + σ 2 ∑(φix )2 , 2 i (5) is the noise variance ηi η j . This expression is minimized by: N φix = ∑ Γ−1 ϒ j , ij (6) j with Γi j = ai (x)a j (x) x + σ 2 δi j , where δ is the Kronecker delta function matrix, and ϒ j = xa j (x) x [11]. One consequence of modeling noise in the neural representation is that the matrix Γ is invertible despite the use of a highly overcomplete representation. In a noiseless representation, Γ is generally singular because, due to the large number of neurons, there is a high probability of having two neurons with similar tuning curves leading to two similar rows in Γ. 3 Kalman-filter based cortical decoder In the 1960’s, Kalman described a method that uses linear filtering to track the state of a dynamical system throughout time using a model of the dynamics of the system as well as noisy measurements [15]. The model dynamics gives an estimate of the state of the system at the next time step. This estimate is then corrected using the observations (i.e., measurements) at this time step. The relative weights for these two pieces of information are given by the Kalman gain, K [15, 16]. Whereas the Kalman gain is updated at each iteration, the state and observation matrices (defined below)—and corresponding noise matrices—are supposed constant. In the case of prosthetic applications, the system’s state vector is the cursor’s kinematics, xt = y [veltx , velt , 1], where the constant 1 allows for a fixed offset compensation. The measurement vector, yt , is the neural spike rate (spike counts in each time step) of 192 channels of neural threshold crossings. The system’s dynamics is modeled by: xt yt = Axt−1 + wt , = Cxt + qt , (7) (8) where A is the state matrix, C is the observation matrix, and wt and qt are additive, Gaussian noise sources with wt ∼ N (0, W) and qt ∼ N (0, Q). The model parameters (A, C, W and Q) are fit with training data by correlating the observed hand kinematics with the simultaneously measured neural signals (Figure 2). For an efficient decoding, we derived the steady-state update equation by replacing the adaptive Kalman gain by its steady-state formulation: K = (I + WCQ−1 C)−1 W CT Q−1 . This yields the following estimate of the system’s state: xt = (I − KC)Axt−1 + Kyt = MDT xt−1 + MDT yt , x y 4 (9) a Velocity (cm/s) Neuron 10 c 150 5 100 b 50 20 0 −20 0 0 x−velocity y−velocity 2000 4000 6000 8000 Time (ms) 10000 12000 1cm 14000 Trials: 0034-0049 Figure 2: Neural and kinematic measurements (monkey J, 2011-04-16, 16 continuous trials) used to fit the standard Kalman filter model. a. The 192 cortical recordings fed as input to fit the Kalman filter’s matrices (color code refers to the number of threshold crossings observed in each 50ms bin). b. Hand x- and y-velocity measurements correlated with the neural data to obtain the Kalman filter’s matrices. c. Cursor kinematics of 16 continuous trials under direct hand control. where MDT = (I − KC)A and MDT = K are the discrete time (DT) Kalman matrices. The steadyx y state formulation improves efficiency with little loss in accuracy because the optimal Kalman gain rapidly converges (typically less than 100 iterations). 
Indeed, in neural applications under both open-loop and closed-loop conditions, the difference between the full Kalman filter and its steadystate implementation falls to within 1% in a few seconds [17]. This simplifying assumption reduces the execution time for decoding a typical neuronal firing rate signal approximately seven-fold [17], a critical speed-up for real-time applications. 4 Kalman filter with a spiking neural network To implement the Kalman filter with a SNN by applying the NEF, we first convert Equation 9 from DT to continuous time (CT), and then replace the CT matrices with neurally plausible ones, which yields: x(t) = h(t) ∗ A x(t) + B y(t) , (10) where A = τMCT + I, B = τMCT , with MCT = MDT − I /∆t and MCT = MDT /∆t, the CT x y x x y y Kalman matrices, and ∆t = 50ms, the discrete time step; τ is the synaptic time-constant. The jth neuron’s input current (see Equation 3) is computed from the system’s current state, x(t), which is computed from estimates of the system’s previous state (ˆ (t) = ∑i ai (t)φix ) and current x y input (ˆ (t) = ∑k bk (t)φk ) using Equation 10. This yields: y ˜x J j (x(t)) = α j φ j · x(t) + J bias j ˜x ˆ ˆ = α j φ j · h(t) ∗ A x(t) + B y(t) ˜x = α j φ j · h(t) ∗ A + J bias j ∑ ai (t)φix + B ∑ bk (t)φky i + J bias j (11) k This last equation can be written in a neural network form: J j (x(t)) = h(t) ∗ ∑ ω ji ai (t) + ∑ ω jk bk (t) i + J bias j (12) k y ˜x ˜x where ω ji = α j φ j A φix and ω jk = α j φ j B φk are the recurrent and feedforward weights, respectively. 5 Efficient implementation of the SNN In this section, we describe the two distinct steps carried out when implementing the SNN: creating and running the network. The first step has no computational constraints whereas the second must be very efficient in order to be successfully deployed in the closed-loop experimental setting. 5 x ( 1000 x ( = 1000 1000 = 1000 x 1000 b 1000 x 1000 1000 a Figure 3: Computing a 1000-neuron pool’s recurrent connections. a. Using connection weights requires multiplying a 1000×1000 matrix by a 1000 ×1 vector. b. Operating in the lower-dimensional state space requires multiplying a 1 × 1000 vector by a 1000 × 1 vector to get the decoded state, multiplying this state by a component of the A matrix to update it, and multiplying the updated state by a 1000 × 1 vector to re-encode it as firing rates, which are then used to update the soma current for every neuron. Network creation: This step generates, for a specified number of neurons composing the network, x ˜x the gain α j , bias current J bias , preferred direction φ j , and decoding weight φ j for each neuron. The j ˜x preferred directions φ j are drawn randomly from a uniform distribution over the unit sphere. The maximum firing rate, max G(J j (x)), and the normalized x-axis intercept, G(J j (x)) = 0, are drawn randomly from a uniform distribution on [200, 400] Hz and [-1, 1], respectively. From these two specifications, α j and J bias are computed using Equation 2 and Equation 3. The decoding weights j x φ j are computed by minimizing the mean square error (Equation 6). For efficient implementation, we used two 1D integrators (i.e., two recurrent neuron pools, with each pool representing a scalar) rather than a single 3D integrator (i.e., one recurrent neuron pool, with the pool representing a 3D vector by itself) [13]. The constant 1 is fed to the 1D integrators as an input, rather than continuously integrated as part of the state vector. 
We also replaced the bk (t) units’ spike rates (Figure 1, middle) with the 192 neural measurements (spike counts in 50ms bins), y which is equivalent to choosing φk from a standard basis (i.e., a unit vector with 1 at the kth position and 0 everywhere else) [7]. Network simulation: This step runs the simulation to update the soma current for every neuron, based on input spikes. The soma voltage is then updated following RC circuit dynamics. Gaussian noise is normally added at this step, the rest of the simulation being noiseless. Neurons with soma voltage above threshold generate a spike and enter their refractory period. The neuron firing rates are decoded using the linear decoding weights to get the updated states values, x and y-velocity. These values are smoothed with a filter identical to h(t), but with τ set to 5ms instead of 20ms to avoid introducing significant delay. Then the simulation step starts over again. In order to ensure rapid execution of the simulation step, neuron interactions are not updated dix rectly using the connection matrix (Equation 12), but rather indirectly with the decoding matrix φ j , ˜x dynamics matrix A , and preferred direction matrix φ j (Equation 11). To see why this is more efficient, suppose we have 1000 neurons in the a population for each of the state vector’s two scalars. Computing the recurrent connections using connection weights requires multiplying a 1000 × 1000 matrix by a 1000-dimensional vector (Figure 3a). This requires 106 multiplications and about 106 sums. Decoding each scalar (i.e., ∑i ai (t)φix ), however, requires only 1000 multiplications and 1000 sums. The decoded state vector is then updated by multiplying it by the (diagonal) A matrix, another 2 products and 1 sum. The updated state vector is then encoded by multiplying it with the neurons’ preferred direction vectors, another 1000 multiplications per scalar (Figure 3b). The resulting total of about 3000 operations is nearly three orders of magnitude fewer than using the connection weights to compute the identical transformation. To measure the speedup, we simulated a 2,000-neuron network on a computer running Matlab 2011a (Intel Core i7, 2.7-GHz, Mac OS X Lion). Although the exact run-times depend on the computing hardware and software, the run-time reduction factor should remain approximately constant across platforms. For each reported result, we ran the simulation 10 times to obtain a reliable estimate of the execution time. The run-time for neuron interactions using the recurrent connection weights was 9.9ms and dropped to 2.7µs in the lower-dimensional space, approximately a 3,500-fold speedup. Only the recurrent interactions benefit from the speedup, the execution time for the rest of the operations remaining constant. The run-time for a 50ms network simulation using the recurrent connec6 Table 1: Model parameters Symbol max G(J j (x)) G(J j (x)) = 0 J bias j αj ˜x φj Range 200-400 Hz −1 to 1 Satisfies first two Satisfies first two ˜x φj = 1 Description Maximum firing rate Normalized x-axis intercept Bias current Gain factor Preferred-direction vector σ2 τ RC j τ ref j τ PSC j 0.1 20 ms 1 ms 20 ms Gaussian noise variance RC time constant Refractory period PSC time constant tion weights was 0.94s and dropped to 0.0198s in the lower-dimensional space, a 47-fold speedup. These results demonstrate the efficiency the lower-dimensional space offers, which made the closedloop application of SNNs possible. 
6 Closed-loop implementation An adult male rhesus macaque (monkey J) was trained to perform a center-out-and-back reaching task for juice rewards to one of eight targets, with a 500ms hold time (Figure 4a) [1]. All animal protocols and procedures were approved by the Stanford Institutional Animal Care and Use Committee. Hand position was measured using a Polaris optical tracking system at 60Hz (Northern Digital Inc.). Neural data were recorded from two 96-electrode silicon arrays (Blackrock Microsystems) implanted in the dorsal pre-motor and motor cortex. These recordings (-4.5 RMS threshold crossing applied to each electrode’s signal) yielded tuned activity for the direction and speed of arm movements. As detailed in [1], a standard Kalman filter model was fit by correlating the observed hand kinematics with the simultaneously measured neural signals, while the monkey moved his arm to acquire virtual targets (Figure 2). The resulting model was used in a closed-loop system to control an on-screen cursor in real-time (Figure 4a, Decoder block). A steady-state version of this model serves as the standard against which the SNN implementation’s performance is compared. We built a SNN using the NEF methodology based on derived Kalman filter parameters mentioned above. This SNN was then simulated on an xPC Target (Mathworks) x86 system (Dell T3400, Intel Core 2 Duo E8600, 3.33GHz). It ran in closed-loop, replacing the standard Kalman filter as the decoder block in Figure 4a. The parameter values listed in Table 1 were used for the SNN implementation. We ensured that the time constants τiRC ,τiref , and τiPSC were smaller than the implementation’s time step (50ms). Noise was not explicitly added. It arose naturally from the fluctuations produced by representing a scalar with filtered spike trains, which has been shown to have effects similar to Gaussian noise [11]. For the purpose of computing the linear decoding weights (i.e., Γ), we modeled the resulting noise as Gaussian with a variance of 0.1. A 2,000-neuron version of the SNN-based decoder was tested in a closed-loop system, the largest network our embedded MatLab implementation could run in real-time. There were 1206 trials total among which 301 (center-outs only) were performed with the SNN and 302 with the standard (steady-state) Kalman filter. The block structure was randomized and interleaved, so that there is no behavioral bias present in the findings. 100 trials under hand control are used as a baseline comparison. Success corresponds to a target acquisition under 1500ms, with 500ms hold time. Success rates were higher than 99% on all blocks for the SNN implementation and 100% for the standard Kalman filter. The average time to acquire the target was slightly slower for the SNN (Figure 5b)—711ms vs. 661ms, respectively—we believe this could be improved by using more neurons in the SNN.1 The average distance to target (Figure 5a) and the average velocity of the cursor (Figure 5c) are very similar. 1 Off-line, the SNN performed better as we increased the number of neurons [7]. 7 a Neural Spikes b c BMI: Kalman decoder BMI: SNN decoder Decoder Cursor Velocity 1cm 1cm Trials: 2056-2071 Trials: 1748-1763 5 0 0 400 Time after Target Onset (ms) 800 Target acquisition time histogram 40 Mean cursor velocity 50 Standard Kalman filter 40 20 Hand 30 30 Spiking Neural Network 20 10 0 c Cursor Velocity (cm/s) b Mean distance to target 10 Percent of Trials (%) a Distance to Target (cm) Figure 4: Experimental setup and results. a. 
Figure 4: Experimental setup and results. a. Experimental setup: neural spikes are fed to the decoder, which drives the cursor velocity. Data are recorded from two 96-channel silicon electrode arrays implanted in dorsal pre-motor and motor cortex of an adult male monkey performing a center-out-and-back reach task for juice rewards to one of eight targets with a 500ms hold time. b. BMI position kinematics of 16 continuous trials (trials 2056-2071) for the standard Kalman filter implementation. c. BMI position kinematics of 16 continuous trials (trials 1748-1763) for the SNN implementation.

Figure 5: SNN (red) performance compared to the standard Kalman filter (blue); hand-control trials are shown for reference (yellow). The SNN achieves results similar to the standard Kalman filter implementation, with success rates higher than 99% on all blocks. a. Distance to target vs. time after target onset for the different control modalities; the thicker traces span the interval from when the cursor first enters the acceptance window, on average, until the 500ms hold is successfully completed. b. Histogram of target acquisition times. c. Mean cursor velocity vs. time.

7 Conclusions and future work

The SNN's performance was quite comparable to that of the standard Kalman filter implementation. The 2,000-neuron network had success rates higher than 99% on all blocks, with mean distance to target, target acquisition time, and mean cursor velocity curves very similar to those obtained with the standard implementation. Future work will explore whether these results extend to additional animals. As the Kalman filter and its variants are the state of the art in cortically-controlled motor prostheses [1]-[5], these simulations provide confidence that similar levels of performance can be attained with a neuromorphic system, which can potentially overcome the power constraints set by clinical applications.

Our ultimate goal is to develop an ultra-low-power neuromorphic chip for prosthetic applications onto which control-theory algorithms can be mapped using the NEF. As our next step in this direction, we will begin exploring this mapping with Neurogrid, a hardware platform with sixteen programmable neuromorphic chips that can simulate up to a million spiking neurons in real time [9]. However, bandwidth limitations prevent Neurogrid from realizing arbitrary connectivity patterns: it can connect each neuron to thousands of others only if neighboring neurons share common inputs, just as they do in the cortex. Such columnar organization may be possible with NEF-generated networks if preferred-direction vectors are assigned topographically rather than randomly. Implementing this constraint effectively is a subject of ongoing research.

Acknowledgment

This work was supported in part by the Belgian American Education Foundation (J. Dethier), the Stanford NIH Medical Scientist Training Program (MSTP) and a Soros Fellowship (P. Nuyujukian), the DARPA Revolutionizing Prosthetics program (N66001-06-C-8005, K. V. Shenoy), and two NIH Director's Pioneer Awards (DP1-OD006409, K. V. Shenoy; DP1-OD000965, K. Boahen).

8 References

[1] V. Gilja, Towards clinically viable neural prosthetic systems, Ph.D. Thesis, Department of Computer Science, Stanford University, 2010, pp 19–22 and pp 57–73.
[2] V. Gilja, P. Nuyujukian, C.A. Chestek, J.P. Cunningham, J.M. Fan, B.M. Yu, S.I. Ryu, and K.V. Shenoy, A high-performance continuous cortically-controlled prosthesis enabled by feedback control design, 2010 Neuroscience Meeting Planner, San Diego, CA: Society for Neuroscience, 2010.
[3] P. Nuyujukian, V. Gilja, C.A. Chestek, J.P. Cunningham, J.M. Fan, B.M. Yu, S.I. Ryu, and K.V. Shenoy, Generalization and robustness of a continuous cortically-controlled prosthesis enabled by feedback control design, 2010 Neuroscience Meeting Planner, San Diego, CA: Society for Neuroscience, 2010.
[4] V. Gilja, C.A. Chestek, I. Diester, J.M. Henderson, K. Deisseroth, and K.V. Shenoy, Challenges and opportunities for next-generation intra-cortically based neural prostheses, IEEE Transactions on Biomedical Engineering, 2011, in press.
[5] S.P. Kim, J.D. Simeral, L.R. Hochberg, J.P. Donoghue, and M.J. Black, Neural control of computer cursor velocity by decoding motor cortical spiking activity in humans with tetraplegia, Journal of Neural Engineering, vol. 5, 2008, pp 455–476.
[6] S. Kim, P. Tathireddy, R.A. Normann, and F. Solzbacher, Thermal impact of an active 3-D microelectrode array implanted in the brain, IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 15, 2007, pp 493–501.
[7] J. Dethier, V. Gilja, P. Nuyujukian, S.A. Elassaad, K.V. Shenoy, and K. Boahen, Spiking neural network decoder for brain-machine interfaces, IEEE Engineering in Medicine & Biology Society Conference on Neural Engineering, Cancun, Mexico, 2011, pp 396–399.
[8] K. Boahen, Neuromorphic microchips, Scientific American, vol. 292(5), 2005, pp 56–63.
[9] R. Silver, K. Boahen, S. Grillner, N. Kopell, and K.L. Olsen, Neurotech for neuroscience: unifying concepts, organizing principles, and emerging tools, Journal of Neuroscience, vol. 27(44), 2007, pp 11807–11819.
[10] J.V. Arthur and K. Boahen, Silicon neuron design: the dynamical systems approach, IEEE Transactions on Circuits and Systems, vol. 58(5), 2011, pp 1034–1043.
[11] C. Eliasmith and C.H. Anderson, Neural engineering: computation, representation, and dynamics in neurobiological systems, MIT Press, Cambridge, MA, 2003.
[12] C. Eliasmith, A unified approach to building and controlling spiking attractor networks, Neural Computation, vol. 17, 2005, pp 1276–1314.
[13] R. Singh and C. Eliasmith, Higher-dimensional neurons explain the tuning and dynamics of working memory cells, The Journal of Neuroscience, vol. 26(14), 2006, pp 3667–3678.
[14] C. Eliasmith, How to build a brain: from function to implementation, Synthese, vol. 159(3), 2007, pp 373–388.
[15] R.E. Kalman, A new approach to linear filtering and prediction problems, Transactions of the ASME–Journal of Basic Engineering, vol. 82(Series D), 1960, pp 35–45.
[16] G. Welch and G. Bishop, An introduction to the Kalman filter, University of North Carolina at Chapel Hill, Chapel Hill, NC, TR 95-041, 1995, pp 1–16.
[17] W.Q. Malik, W. Truccolo, E.N. Brown, and L.R. Hochberg, Efficient decoding with steady-state Kalman filter in neural interface systems, IEEE Transactions on Neural Systems and Rehabilitation Engineering, vol. 19(1), 2011, pp 25–34.

4 0.7617237 133 nips-2011-Inferring spike-timing-dependent plasticity from spike train data

Author: Konrad Koerding, Ian Stevenson

Abstract: Synaptic plasticity underlies learning and is thus central for development, memory, and recovery from injury. However, it is often difficult to detect changes in synaptic strength in vivo, since intracellular recordings are experimentally challenging. Here we present two methods aimed at inferring changes in the coupling between pairs of neurons from extracellularly recorded spike trains. First, using a generalized bilinear model with Poisson output we estimate time-varying coupling assuming that all changes are spike-timing-dependent. This approach allows model-based estimation of STDP modification functions from pairs of spike trains. Then, using recursive point-process adaptive filtering methods we estimate more general variation in coupling strength over time. Using simulations of neurons undergoing spike-timing dependent modification, we show that the true modification function can be recovered. Using multi-electrode data from motor cortex we then illustrate the use of this technique on in vivo data. 1

5 0.7607075 302 nips-2011-Variational Learning for Recurrent Spiking Networks

Author: Danilo J. Rezende, Daan Wierstra, Wulfram Gerstner

Abstract: We derive a plausible learning rule for feedforward, feedback and lateral connections in a recurrent network of spiking neurons. Operating in the context of a generative model for distributions of spike sequences, the learning mechanism is derived from variational inference principles. The synaptic plasticity rules found are interesting in that they are strongly reminiscent of experimental Spike Time Dependent Plasticity, and in that they differ for excitatory and inhibitory neurons. A simulation confirms the method’s applicability to learning both stationary and temporal spike patterns. 1

6 0.74990547 135 nips-2011-Information Rates and Optimal Decoding in Large Neural Populations

7 0.6310122 224 nips-2011-Probabilistic Modeling of Dependencies Among Visual Short-Term Memory Representations

8 0.60075438 23 nips-2011-Active dendrites: adaptation to spike-based communication

9 0.55667776 82 nips-2011-Efficient coding of natural images with a population of noisy Linear-Nonlinear neurons

10 0.55346781 68 nips-2011-Demixed Principal Component Analysis

11 0.54970169 249 nips-2011-Sequence learning with hidden units in spiking neural networks

12 0.54060948 85 nips-2011-Emergence of Multiplication in a Biophysical Model of a Wide-Field Visual Neuron for Computing Object Approaches: Dynamics, Peaks, & Fits

13 0.53351605 99 nips-2011-From Stochastic Nonlinear Integrate-and-Fire to Generalized Linear Models

14 0.51791698 37 nips-2011-Analytical Results for the Error in Filtering of Gaussian Processes

15 0.51687825 200 nips-2011-On the Analysis of Multi-Channel Neural Spike Data

16 0.4871926 24 nips-2011-Active learning of neural response functions with Gaussian processes

17 0.47836384 219 nips-2011-Predicting response time and error rates in visual search

18 0.46015173 179 nips-2011-Multilinear Subspace Regression: An Orthogonal Tensor Decomposition Approach

19 0.45062903 301 nips-2011-Variational Gaussian Process Dynamical Systems

20 0.45025489 148 nips-2011-Learning Probabilistic Non-Linear Latent Variable Models for Tracking Complex Activities


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(0, 0.012), (4, 0.035), (7, 0.195), (20, 0.026), (26, 0.013), (27, 0.011), (31, 0.143), (33, 0.02), (43, 0.075), (45, 0.079), (57, 0.063), (63, 0.011), (65, 0.012), (74, 0.041), (83, 0.103), (84, 0.023), (89, 0.012), (99, 0.04)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.8313815 86 nips-2011-Empirical models of spiking in neural populations

Author: Jakob H. Macke, Lars Buesing, John P. Cunningham, Byron M. Yu, Krishna V. Shenoy, Maneesh Sahani

Abstract: Neurons in the neocortex code and compute as part of a locally interconnected population. Large-scale multi-electrode recording makes it possible to access these population processes empirically by fitting statistical models to unaveraged data. What statistical structure best describes the concurrent spiking of cells within a local network? We argue that in the cortex, where firing exhibits extensive correlations in both time and space and where a typical sample of neurons still reflects only a very small fraction of the local population, the most appropriate model captures shared variability by a low-dimensional latent process evolving with smooth dynamics, rather than by putative direct coupling. We test this claim by comparing a latent dynamical model with realistic spiking observations to coupled generalised linear spike-response models (GLMs) using cortical recordings. We find that the latent dynamical approach outperforms the GLM in terms of goodness-offit, and reproduces the temporal correlations in the data more accurately. We also compare models whose observation models are derived from either Gaussian or point-process models, finding that the non-Gaussian model provides slightly better goodness-of-fit and more realistic population spike counts. 1

2 0.7410022 152 nips-2011-Learning in Hilbert vs. Banach Spaces: A Measure Embedding Viewpoint

Author: Kenji Fukumizu, Gert R. Lanckriet, Bharath K. Sriperumbudur

Abstract: The goal of this paper is to investigate the advantages and disadvantages of learning in Banach spaces over Hilbert spaces. While many works have been carried out in generalizing Hilbert methods to Banach spaces, in this paper, we consider the simple problem of learning a Parzen window classifier in a reproducing kernel Banach space (RKBS)—which is closely related to the notion of embedding probability measures into an RKBS—in order to carefully understand its pros and cons over the Hilbert space classifier. We show that while this generalization yields richer distance measures on probabilities compared to its Hilbert space counterpart, it suffers from a serious computational drawback limiting its practical applicability, which demonstrates the need for developing efficient learning algorithms in Banach spaces.

3 0.73024452 249 nips-2011-Sequence learning with hidden units in spiking neural networks

Author: Johanni Brea, Walter Senn, Jean-pascal Pfister

Abstract: We consider a statistical framework in which recurrent networks of spiking neurons learn to generate spatio-temporal spike patterns. Given biologically realistic stochastic neuronal dynamics we derive a tractable learning rule for the synaptic weights towards hidden and visible neurons that leads to optimal recall of the training sequences. We show that learning synaptic weights towards hidden neurons significantly improves the storing capacity of the network. Furthermore, we derive an approximate online learning rule and show that our learning rule is consistent with Spike-Timing Dependent Plasticity in that if a presynaptic spike shortly precedes a postsynaptic spike, potentiation is induced and otherwise depression is elicited.

4 0.72758126 75 nips-2011-Dynamical segmentation of single trials from population neural data

Author: Biljana Petreska, Byron M. Yu, John P. Cunningham, Gopal Santhanam, Stephen I. Ryu, Krishna V. Shenoy, Maneesh Sahani

Abstract: Simultaneous recordings of many neurons embedded within a recurrently connected cortical network may provide concurrent views into the dynamical processes of that network, and thus its computational function. In principle, these dynamics might be identified by purely unsupervised, statistical means. Here, we show that a Hidden Switching Linear Dynamical Systems (HSLDS) model, in which multiple linear dynamical laws approximate a nonlinear and potentially non-stationary dynamical process, is able to distinguish different dynamical regimes within single-trial motor cortical activity associated with the preparation and initiation of hand movements. The regimes are identified without reference to behavioural or experimental epochs, but nonetheless transitions between them correlate strongly with external events whose timing may vary from trial to trial. The HSLDS model also performs better than recent comparable models in predicting the firing rate of an isolated neuron based on the firing rates of others, suggesting that it captures more of the “shared variance” of the data. Thus, the method is able to trace the dynamical processes underlying the coordinated evolution of network activity in a way that appears to reflect its computational role. 1

5 0.72538739 135 nips-2011-Information Rates and Optimal Decoding in Large Neural Populations

Author: Kamiar R. Rad, Liam Paninski

Abstract: Many fundamental questions in theoretical neuroscience involve optimal decoding and the computation of Shannon information rates in populations of spiking neurons. In this paper, we apply methods from the asymptotic theory of statistical inference to obtain a clearer analytical understanding of these quantities. We find that for large neural populations carrying a finite total amount of information, the full spiking population response is asymptotically as informative as a single observation from a Gaussian process whose mean and covariance can be characterized explicitly in terms of network and single neuron properties. The Gaussian form of this asymptotic sufficient statistic allows us in certain cases to perform optimal Bayesian decoding by simple linear transformations, and to obtain closed-form expressions of the Shannon information carried by the network. One technical advantage of the theory is that it may be applied easily even to non-Poisson point process network models; for example, we find that under some conditions, neural populations with strong history-dependent (non-Poisson) effects carry exactly the same information as do simpler equivalent populations of non-interacting Poisson neurons with matched firing rates. We argue that our findings help to clarify some results from the recent literature on neural decoding and neuroprosthetic design.

6 0.721834 133 nips-2011-Inferring spike-timing-dependent plasticity from spike train data

7 0.71931148 32 nips-2011-An Empirical Evaluation of Thompson Sampling

8 0.71141201 219 nips-2011-Predicting response time and error rates in visual search

9 0.70703387 292 nips-2011-Two is better than one: distinct roles for familiarity and recollection in retrieving palimpsest memories

10 0.69655889 273 nips-2011-Structural equations and divisive normalization for energy-dependent component analysis

11 0.69625491 37 nips-2011-Analytical Results for the Error in Filtering of Gaussian Processes

12 0.69598562 301 nips-2011-Variational Gaussian Process Dynamical Systems

13 0.69394743 243 nips-2011-Select and Sample - A Model of Efficient Neural Inference and Learning

14 0.69352561 13 nips-2011-A blind sparse deconvolution method for neural spike identification

15 0.69238895 2 nips-2011-A Brain-Machine Interface Operating with a Real-Time Spiking Neural Network Control Algorithm

16 0.69201469 24 nips-2011-Active learning of neural response functions with Gaussian processes

17 0.6903252 57 nips-2011-Comparative Analysis of Viterbi Training and Maximum Likelihood Estimation for HMMs

18 0.68980372 82 nips-2011-Efficient coding of natural images with a population of noisy Linear-Nonlinear neurons

19 0.6891346 221 nips-2011-Priors over Recurrent Continuous Time Processes

20 0.68906873 102 nips-2011-Generalised Coupled Tensor Factorisation