nips nips2011 nips2011-249 knowledge-graph by maker-knowledge-mining

249 nips-2011-Sequence learning with hidden units in spiking neural networks


Source: pdf

Author: Johanni Brea, Walter Senn, Jean-pascal Pfister

Abstract: We consider a statistical framework in which recurrent networks of spiking neurons learn to generate spatio-temporal spike patterns. Given biologically realistic stochastic neuronal dynamics we derive a tractable learning rule for the synaptic weights towards hidden and visible neurons that leads to optimal recall of the training sequences. We show that learning synaptic weights towards hidden neurons significantly improves the storing capacity of the network. Furthermore, we derive an approximate online learning rule and show that our learning rule is consistent with Spike-Timing Dependent Plasticity in that if a presynaptic spike shortly precedes a postsynaptic spike, potentiation is induced and otherwise depression is elicited.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Sequence learning with hidden units in spiking neural networks Johanni Brea, Walter Senn and Jean-Pascal Pfister Department of Physiology University of Bern Bühlplatz 5 CH-3012 Bern, Switzerland {brea, senn, pfister}@pyl. [sent-1, score-0.521]

2 ch Abstract We consider a statistical framework in which recurrent networks of spiking neurons learn to generate spatio-temporal spike patterns. [sent-3, score-0.788]

3 Given biologically realistic stochastic neuronal dynamics we derive a tractable learning rule for the synaptic weights towards hidden and visible neurons that leads to optimal recall of the training sequences. [sent-4, score-1.92]

4 We show that learning synaptic weights towards hidden neurons significantly improves the storing capacity of the network. [sent-5, score-1.06]

5 Furthermore, we derive an approximate online learning rule and show that our learning rule is consistent with Spike-Timing Dependent Plasticity in that if a presynaptic spike shortly precedes a postsynaptic spike, potentiation is induced and otherwise depression is elicited. [sent-6, score-0.705]

6 1 Introduction Learning to produce temporal sequences is a general problem that the brain needs to solve. [sent-7, score-0.185]

7 Early attempts to model sequence learning used a simple asymmetric Hebbian learning rule [10, 20, 6] and succeeded in storing sequences of random patterns, but performed poorly as soon as there were temporal correlations between the patterns [3]. [sent-9, score-0.511]

8 Other studies [14] included a reservoir of hidden neurons but assumed the weights towards the hidden neurons to be fixed. [sent-11, score-1.957]

9 Here we start by defining a stochastic neuronal dynamics that can be arbitrarily complicated (e. [sent-13, score-0.159]

10 This stochastic dynamics defines the overall probability distribution which is parametrized by the synaptic weights. [sent-16, score-0.181]

11 The goal of learning is to adapt the synaptic weights such that the model distribution approximates the target distribution of temporal sequences as well as possible. [sent-17, score-0.274]

12 This can be seen as the extension of the maximum likelihood approach of Barber [2] where we add stochastic hidden neurons with plastic weights. [sent-18, score-0.849]

13 [Figure 1 panels A and B: graphical models over stochastic hidden neurons ht−1, ht and stochastic visible neurons vt−1, vt] Figure 1: Graphical representation of the conditional dependencies of the joint distribution over visible and hidden sequences. [sent-20, score-2.984]

14 A Graphical model used for the derivation of the learning rule in section 2 and the example in section 4. [sent-21, score-0.173]

15 B Markovian model used in the example with binary neurons in section 3. [sent-22, score-0.49]

16 The resulting learning rule is local (but modulated by a global factor), causal and biologically relevant in the sense that it shares important features with Spike-Timing Dependent Plasticity (STDP). [sent-23, score-0.252]

17 We also derive an online version of the learning rule and show numerically that it performs almost as well as the exact batch learning rule. [sent-24, score-0.277]

18 2 Learning a distribution of sequences Let us consider temporal sequences v = {vt,i | t = 0 ... T, i = 1 ... Nv} [sent-25, score-0.361]

19 of Nv visible neurons over the interval [0, T]. [sent-31, score-0.865]

20 from a target distribution P ∗ (v) that must be learned by a model which consists of Nv visible neurons and Nh hidden neurons. [sent-46, score-1.261]

21 The model distribution over those visible sequences is denoted by Pθ(v) = Σ_h Pθ(v, h), where θ denotes the model parameters, h = {ht,i | t = 0 ... T, i = 1 ... Nh} [sent-47, score-0.551]

22 the hidden temporal sequence and Pθ(v, h) the joint distribution over the visible and the hidden sequences. [sent-53, score-1.153]

23 (2), it is possible to approximate it by sampling the visible sequences v from the target distribution P ∗ (v) and the hidden sequences from the posterior distribution Pθ (h|v) given the visible ones. [sent-60, score-1.493]

24 Indeed, at a time t the posterior distribution over ht depends not only on the past visible activity but also on the future visible activity, since it is conditioned on the whole visible activity v0:T from time step 0 to T. [sent-62, score-1.633]

25 We exploit the fact that in all neuronal network models of interest, neuronal firing at any time point is conditionally independent given the past activity of the network. [sent-69, score-0.287]

26 Using the chain rule this means that we can write the joint distribution Pθ (v, h) (see Fig. [sent-70, score-0.207]

27 The sampling can be accomplished by clamping the visible neurons to a target sequence v and letting the hidden dynamics run, i. [sent-72, score-1.374]

28 at time t, ht is sampled from Pθ (ht |v0:t−1 h0:t−1 ). [sent-74, score-0.218]

29 (3), the posterior distribution Pθ(h|v) can be written as Pθ(h|v) = Rθ(v|h) Qθ(h|v) / Pθ(v), (4) where the marginal distribution over the visible sequences v can also be expressed as Pθ(v) = ⟨Rθ(v|h)⟩_{Qθ(h|v)}. [sent-76, score-0.585]

30 Note that in the absence of hidden neurons, this factor γθ (v, h) is equal to one and the maximum likelihood learning rule [2, 18] is recovered. [sent-82, score-0.498]

31 [Algorithm pseudocode: while not converged, for n = 1 ... N do: h ∼ Qθ(h|v); α(v) ← α(v) + Rθ(v|h) ∂ log Pθ(v,h)/∂θ; Pθ(v) ← Pθ(v) + Rθ(v|h)/N; end for; θ ← θ + η α(v)/Pθ(v); end while; return θ.] [Figure 2 residue: panels A-G, unit number versus time step, and a performance axis.] [sent-89, score-0.24]
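The following Python sketch shows one way the importance-weighted batch update in the pseudocode above could be organized. It is only a sketch: the callables sample_hidden, log_R and grad_log_joint are hypothetical stand-ins for Qθ(h|v), log Rθ(v|h) and ∂ log Pθ(v,h)/∂θ, and the ratio α(v)/Pθ(v) is evaluated as a normalized-importance-weight average of the gradients.

```python
import numpy as np

def batch_update(theta, v, sample_hidden, log_R, grad_log_joint,
                 n_samples=100, eta=0.01):
    """One importance-weighted gradient step on a clamped visible sequence v.

    sample_hidden(theta, v)     -> hidden sequence h ~ Q_theta(h | v)
    log_R(theta, v, h)          -> log R_theta(v | h)
    grad_log_joint(theta, v, h) -> d log P_theta(v, h) / d theta (array like theta)
    All three are hypothetical stand-ins for the model-specific quantities.
    """
    log_w, grads = [], []
    for _ in range(n_samples):
        h = sample_hidden(theta, v)                # h ~ Q_theta(h | v)
        log_w.append(log_R(theta, v, h))           # importance weight (in log space)
        grads.append(grad_log_joint(theta, v, h))  # gradient of the joint log-likelihood
    # alpha(v) / P_theta(v) equals the average of the gradients under
    # normalized importance weights, computed here for numerical stability.
    w = np.exp(np.array(log_w) - np.max(log_w))
    w /= w.sum()
    grad = sum(w_i * g_i for w_i, g_i in zip(w, grads))
    return theta + eta * grad
```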

32 A The target distribution contained only this training pattern for 30 visible neurons and 30 time steps. [sent-96, score-1.09]

33 B-F, H-J Overlay of 20 recalls after learning with 15 000 training pattern presentations: B with only visible neurons and a simple asymmetric Hebb rule (see main text), C with only visible neurons and learning rule Eq. [sent-97, score-2.251]

34 (5), D with static weights towards 30 hidden neurons (Reservoir Computing), E with learning rule Eq. [sent-98, score-1.239]

35 G Learning curves for the training pattern in A for only visible neurons (black line), static weights towards hidden neurons (blue line), online learning approximation (purple line), and exact learning rule (red line). [sent-101, score-1.799]

36 The performance was measured as one minus the average Hamming distance per neuron per time step (see main text). [sent-102, score-0.191]

37 I Recall with a network of 30 visible and 10 hidden neurons without learning the weights towards hidden neurons. [sent-104, score-1.706]

38 J Recall after training the same network with learning rule Eq. [sent-105, score-0.252]

39 3 Binary neurons In order to illustrate the learning rule given by Eq. [sent-107, score-0.663]

40 Let x denote the activity of the visible and hidden neurons, i. [sent-109, score-0.776]

41 For the distribution over the initial conditions Pθ (v0 ) and Pθ (h0 ) we choose delta distributions such that v0 is equal to the first state of the training sequence and h0 is an arbitrary but fixed vector of binary values. [sent-123, score-0.185]

42 If we assume that the weights wij are the only adaptable parameters in this model, ... [sent-124, score-0.294]

43 [Figure 3 axes: performance versus number of hidden units (A) and sequence length (B)] Figure 3: Adding trainable hidden neurons leads to much better recall performance than having static hidden neurons or no hidden neurons at all. [sent-136, score-2.985]

44 A Comparison of the performance after 20000 learning cycles between static (blue curve) and dynamic weights (red curve) towards hidden neurons for a network with 30 visible and different numbers of hidden neurons in a training task with an uncorrelated random pattern of length 60 time steps. [sent-137, score-2.497]

45 For B we generated random, uncorrelated sequences of different length and compared the performance after 20000 learning cycles for only visible neurons (black curve), static weights towards hidden (blue curve) and dynamic weights towards hidden (red curve). [sent-138, score-2.134]

46 (3) and (6) we find ∂ log Pw(x)/∂wij = (β/2) Σ_{t=1}^{T} (xt,i − ⟨xt,i⟩_{Pθ(xt,i|xt−1)}) xt−1,j, (8) where ⟨xt,i⟩_{Pθ(xt,i|xt−1)} = g(ut,i)δt − (1 − g(ut,i)δt) and the indices i and j run over all visible and hidden neurons. [sent-141, score-0.733]
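A possible NumPy sketch of the gradient in Eq. (8) for one sequence is given below. The gain function g is passed in because, as in the text, its exact form is left open; the choice of binary states in {−1, +1} follows the expectation formula quoted above.

```python
import numpy as np

def sequence_grad(w, x, g, beta=1.0, dt=1.0):
    """Gradient of log P_w(x) with respect to the weights, Eq. (8), for one sequence.

    w : (N, N) weight matrix, u_{t,i} = sum_j w_ij x_{t-1,j}
    x : (T+1, N) binary states in {-1, +1} (visible and hidden neurons together)
    g : gain function; the spike probability per step is g(u) * dt
    """
    x_prev, x_curr = x[:-1], x[1:]            # x_{t-1} and x_t for t = 1..T
    u = x_prev @ w.T                          # u_{t,i} = sum_j w_ij x_{t-1,j}
    x_mean = g(u) * dt - (1.0 - g(u) * dt)    # <x_{t,i}> under P_theta(x_{t,i} | x_{t-1})
    return 0.5 * beta * (x_curr - x_mean).T @ x_prev   # (N, N) matrix of gradients
```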

47 2) where the distribution over sequences is a delta distribution P ∗ (v) = δ(v − v ∗ ) around a single pattern v ∗ (Fig. [sent-144, score-0.324]

48 a non-Markovian pattern, thus making it a difficult pattern to learn with a simple asymmetric Hebb rule ∆wij ∝ Σ_{t=0}^{T} v∗_{t+1,i} v∗_{t,j} (Fig. [sent-147, score-0.295]
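For comparison, the simple asymmetric Hebb baseline above can be written in one line; the learning-rate constant is an arbitrary assumption, since the rule is only specified up to proportionality.

```python
import numpy as np

def asymmetric_hebb(v_star, lr=1.0):
    """Baseline: dw_ij proportional to sum_t v*_{t+1,i} v*_{t,j} for a target pattern v*.

    v_star : (T+1, Nv) array holding the training pattern.
    """
    return lr * v_star[1:].T @ v_star[:-1]
```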

49 The performance was measured by one minus the Hamming distance per visible neuron and time step, 1 − (T Nv)⁻¹ Σ_{t,i} |vt,i − v∗_{t,i}|/2, between target pattern and recall pattern, averaged over 100 runs. [sent-150, score-0.792]
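A short sketch of this performance measure, assuming binary states in {−1, +1} so that |vt,i − v∗_{t,i}|/2 is either 0 or 1:

```python
import numpy as np

def recall_performance(v_recall, v_star):
    """1 - (T * Nv)^{-1} * sum_{t,i} |v_{t,i} - v*_{t,i}| / 2 for (T, Nv) binary arrays."""
    T, Nv = v_star.shape
    return 1.0 - np.abs(v_recall - v_star).sum() / (2.0 * T * Nv)
```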

50 Adding hidden neurons without learning the weights towards hidden neurons is similar to the idea used in the framework of Reservoir Computing (for a review see [13]): the visible states feed a fixed reservoir of neurons that returns a non-linear transformation of the input. [sent-151, score-2.847]

51 Only the readout from hidden to visible neurons and, in our case, the recurrent connections in the visible layer are trained. [sent-152, score-1.614]

52 To ensure a sensible distribution of weights towards hidden units, we used the weights that were obtained after learning with Eq. [sent-153, score-0.604]

53 Obviously, without training the reservoir the performance is always worse than that of a system with an equal number of hidden neurons but dynamic weights (Fig. [sent-155, score-1.11]

54 With only a few hidden neurons our rule is also capable of learning patterns where the visible neurons are silent during a few time steps. [sent-157, score-1.9]

55 After learning the weights towards 10 hidden neurons with learning rule Eq. [sent-160, score-1.153]

56 2) or static weights towards hidden neurons, the time gap was not learned (see Fig. [sent-164, score-1.124]

57 [Figure 4 axes: ∆w (arbitrary units) versus tpost − tpre (ms)] Figure 4: The learning rule Eq. [sent-166, score-0.206]

58 (11) is compatible with Spike-Timing Dependent Plasticity (STDP): the weight gets potentiated if a presynaptic spike is followed by a postsynaptic spike and depressed otherwise. [sent-167, score-0.406]

59 3 we used again delta target distributions P ∗ (v) = δ(v − v ∗ ) with random uncorrelated patterns v ∗ of different length. [sent-170, score-0.168]

60 For a pattern of length 2Nv = 60 only Nv /2 = 15 trainable hidden neurons are sufficient to reach perfect recall (see Fig. [sent-172, score-0.978]

61 This is in clear contrast to the case of static hidden weights. [sent-174, score-0.411]

62 Again, the static weights were obtained by reshuffling those obtained after learning with Eq. [sent-175, score-0.166]

63 Fig. 3B compares the capacity of our learning rule with Nh = Nv = 30 hidden neurons to the case of no hidden neurons or static weights towards hidden neurons. [sent-178, score-2.017]

64 Without learning the weights towards hidden neurons the performance drops to almost chance level for sequences of 45 or more time steps, whereas with our learning rule this decrease in performance occurs only for sequences of 100 or more time steps. [sent-179, score-1.505]

65 4 Limit to Continuous Time Starting from the neurons in the last section we show that in the limit to continuous time we can implement the sequence learning task with stochastic spiking neurons [7]. [sent-180, score-1.239]

66 First note that the state of a neuron at time t in the model described in the previous section is fully defined by ut,i := Σ_j wij xt−1,j (see Eq. [sent-181, score-0.342]

67 The weighted sum Σ_j wij xt−1,j is the response of neuron i to the spikes of its presynaptic neurons and its own spikes. [sent-183, score-0.985]

68 In a more realistic model the postsynaptic neuron feels the influence of presynaptic spikes through a perturbation of the membrane potential on the order of a few milliseconds, which in the limit to continuous time clearly cannot be modeled by a one-time-step response. [sent-185, score-0.486]

69 (6) by ut,i = Σ_{s=1}^{∞} κs xt−s,i + Σ_{j≠i} wij Σ_{s=1}^{∞} ǫs xt−s,j =: xκ_{t,i} + Σ_{j≠i} wij xǫ_{t,j}, (10) where xt−s,i ∈ {0, 1}. [sent-187, score-0.214]
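The sketch below evaluates Eq. (10) by causal discrete-time filtering. The kernel arrays are passed in explicitly; their shape (for example a decaying exponential for ǫ and a negative refractory kernel for κ) is an assumption for illustration, since the text only states what the two kernels model.

```python
import numpy as np

def filtered_traces(x, kernel):
    """Causal filtering: y_{t,i} = sum_{s >= 1} kernel_s * x_{t-s,i}."""
    y = np.zeros(x.shape, dtype=float)
    for s, k_s in enumerate(kernel, start=1):
        y[s:] += k_s * x[:-s]
    return y

def membrane_potentials(x, w, eps_kernel, kappa_kernel):
    """u_{t,i} = x^kappa_{t,i} + sum_{j != i} w_ij x^eps_{t,j}, Eq. (10).

    x : (T, N) spike trains in {0, 1} of all visible and hidden neurons
    w : (N, N) weight matrix; the diagonal is excluded from the sum over j
    """
    x_eps = filtered_traces(x, eps_kernel)      # presynaptic traces x^eps_{t,j}
    x_kappa = filtered_traces(x, kappa_kernel)  # each neuron's own refractory trace
    w_no_self = w - np.diag(np.diag(w))         # remove j == i terms
    return x_kappa + x_eps @ w_no_self.T
```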

70 The kernel ǫ models the time-course of the response to a presynaptic spike and κ the refractoriness. [sent-188, score-0.262]

71 (9) we note that we can scale Rw (v|h) without changing the learning rule Eq. [sent-191, score-0.173]

72 We use the scaling Rw(v|h) → R̃w(v|h) := (g0 δt)^(−Sv) Rw(v|h), where Sv denotes the total number of spikes in the visible sequence v, i. [sent-193, score-0.47]

73 With neuron i's response to past spiking activity ui(t) = xκ_i(t) + Σ_{j≠i} wij xǫ_j(t) and the escape rate function ρi(t) = g0 exp(β ui(t)) we recovered the defining equations of a simplified stochastic spike response model [7]. [sent-200, score-0.827]
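Given the membrane potentials, spikes can then be drawn from the escape rate ρi(t) = g0 exp(β ui(t)). In the sketch below the per-bin Bernoulli approximation min(ρ δt, 1) and the parameter values are assumptions; the text only defines the rate function itself.

```python
import numpy as np

def sample_spikes(u, g0=1.0, beta=1.0, dt=1e-3, rng=None):
    """Sample spikes x_i(t) in {0, 1} from the escape rate rho_i(t) = g0 * exp(beta * u_i(t)).

    u : (T, N) membrane potentials, e.g. from membrane_potentials above.
    """
    rng = np.random.default_rng() if rng is None else rng
    rho = g0 * np.exp(beta * u)              # instantaneous escape rate
    p_spike = np.clip(rho * dt, 0.0, 1.0)    # spike probability per time bin
    return (rng.random(u.shape) < p_spike).astype(int)
```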

74 In Fig. 4 we display the weight change after forcing two neurons to fire with a fixed time lag. [sent-202, score-0.524]

75 Our learning rule is consistent with STDP in the sense that a presynaptic spike followed by a postsynaptic spike leads to potentiation, and to depression otherwise. [sent-204, score-0.647]

76 5 Approximate online version Without hidden neurons the learning rule found by using Eq. [sent-206, score-1.053]

77 (11) is straightforward to implement in an online way, where the parameters are updated at every moment in time according to ẇij ∝ (xi(t) − ρi(t)) xǫ_j(t), instead of waiting with the update until a training batch has finished. [sent-207, score-0.405]
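A minimal sketch of this online update for a discrete-time simulation follows. Comparing the spike indicator xi(t) against the expected count ρi(t)δt is an assumed discretization of the continuous-time rule; the learning rate is likewise an assumption.

```python
import numpy as np

def online_step(w, x_t, x_eps_t, rho_t, eta=1e-3, dt=1e-3):
    """One online update without hidden neurons:
    dw_ij proportional to (x_i(t) - rho_i(t) * dt) * x^eps_j(t).

    x_t     : (N,) spikes in the current bin, in {0, 1}
    x_eps_t : (N,) presynaptic traces x^eps_j(t)
    rho_t   : (N,) instantaneous escape rates rho_i(t)
    """
    dw = np.outer(x_t - rho_t * dt, x_eps_t)  # rows: postsynaptic i, columns: presynaptic j
    np.fill_diagonal(dw, 0.0)                 # self-connections are handled by the kappa kernel
    return w + eta * dw
```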

78 Finding an online version of the learning algorithm for networks with hidden neurons turns out to be a challenge, since we need to know the whole sequences v and h in order to evaluate the importance factor Rθ(v|h) / ⟨Rθ(v|h′)⟩_{Qθ(h′|v)}. [sent-208, score-1.095]

79 Here we propose to use, in each time step, an approximation of the importance factor based on the network dynamics during the preceding period of typical sequence length, and to multiply it by the low-pass filtered change of parameters. [sent-209, score-0.259]

80 To find an online estimate of ⟨Rθ(v|h′)⟩_{Qθ(h′|v)} we assume that a training pattern v ∼ P∗(v) is presented a few times in a row and that after time NT, with N ∈ ℕ, N ≫ 1, a new training pattern is picked from the training distribution. [sent-216, score-0.392]

81 Under this assumption we can replace the average over hidden sequences by a low-pass filter of r with time constant NT, see Eq. [sent-217, score-0.501]
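A minimal sketch of such a running estimate, written as an explicit Euler low-pass filter; the exact filter form is an assumption, the text only specifies the time constant NT.

```python
def low_pass(r_bar, r_new, tau, dt):
    """Exponential low-pass filter: r_bar <- r_bar + (dt / tau) * (r_new - r_bar),
    used as an online running average with time constant tau = N * T."""
    return r_bar + (dt / tau) * (r_new - r_bar)
```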

82 during the time interval [0, τ), with τ on the order of the kernel time constant τm, the hidden activity h(s) is drawn from a given distribution P(h(s)). [sent-221, score-0.503]

83 2A) in section 3, the performance of the online rule is close to that of the batch rule (Fig. [sent-225, score-0.45]

84 6 Discussion Learning long and temporally correlated sequences with neural networks is a difficult task. [sent-227, score-0.239]

85 In this paper we suggested a statistical model with hidden neurons and derived a learning rule that leads to optimal recall of the learned sequences given the neuronal dynamics. [sent-228, score-1.243]

86 The learning rule is derived by minimizing the Kullback-Leibler divergence from the training distribution to the model distribution with a variant of the EM algorithm, where we use importance sampling to draw hidden sequences given the visible training sequence. [sent-229, score-1.296]

87 By choosing an appropriate distribution in the importance sampling step we are able to circumvent the inference that usually makes the training of non-Markovian models hard. [sent-230, score-0.197]

88 We showed that it is ready to be implemented with biologically realistic neurons and that an approximate online version exists. [sent-232, score-0.634]

89 Our approach follows the ideas outlined in [2], where sequence learning was considered with visible neurons. [sent-233, score-0.426]

90 Here we extended this model by adding stochastic hidden neurons that help to perform well with sequences of linearly dependent states, including non-Markovian sequences, or long sequences. [sent-234, score-1.158]

91 As in [18] we look at the limit of continuous time and find that the learning rule is consistent with Spike-Timing Dependent Plasticity. [sent-235, score-0.244]

92 In contrast to Reservoir Computing [13] we train the weights towards hidden neurons, which clearly helps to improve performance. [sent-236, score-0.98]

93 Our learning rule does not need a “wake” and a “sleep” phase as known from Boltzmann machines [1, 8]. [sent-237, score-0.198]

94 Viewed in a different light our learning algorithm has a nice interpretation: as in reinforcement learning, the hidden neurons explore different sequences, where each trial leads to a global reward signal that modulates the weight change. [sent-238, score-0.815]

95 However, in contrast to common reinforcement learning the reward is not provided by an external teacher but depends solely on the internal dynamics and the visible neurons do not explore but are clamped to the training sequence. [sent-239, score-0.985]

96 To make our model even more biologically relevant, future work should aim for a biological implementation of the global importance factor that depends on the spike timing and the membrane potential of all the visible neurons (see Eq. [sent-240, score-1.123]

97 Phase diagram and storage capacity of sequence processing neural networks. [sent-279, score-0.188]

98 Matching storage and recall: hippocampal spike timing-dependent plasticity and phase response curves. [sent-313, score-0.319]

99 Efficient methods for sampling spike trains in networks of coupled neurons. [sent-340, score-0.175]

100 A tutorial on hidden Markov models and selected applications in speech recognition. [sent-352, score-0.325]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('neurons', 0.49), ('visible', 0.375), ('hidden', 0.325), ('wij', 0.214), ('nv', 0.199), ('ht', 0.184), ('rule', 0.173), ('reservoir', 0.162), ('sequences', 0.142), ('rw', 0.14), ('eij', 0.133), ('spike', 0.119), ('presynaptic', 0.107), ('spiking', 0.103), ('neuron', 0.094), ('static', 0.086), ('towards', 0.085), ('weights', 0.08), ('activity', 0.076), ('pw', 0.074), ('storage', 0.07), ('xt', 0.07), ('plasticity', 0.069), ('pattern', 0.067), ('vt', 0.067), ('dynamics', 0.067), ('online', 0.065), ('nh', 0.064), ('postsynaptic', 0.061), ('markovian', 0.061), ('neuronal', 0.058), ('stdp', 0.056), ('asymmetric', 0.055), ('recall', 0.055), ('training', 0.053), ('boltzmann', 0.052), ('sequence', 0.051), ('recurrent', 0.049), ('biologically', 0.049), ('sv', 0.048), ('patterns', 0.047), ('delta', 0.047), ('brea', 0.046), ('reshuf', 0.046), ('senn', 0.046), ('synaptic', 0.046), ('importance', 0.046), ('membrane', 0.044), ('spikes', 0.044), ('temporal', 0.043), ('ui', 0.041), ('trainable', 0.041), ('exp', 0.039), ('batch', 0.039), ('ring', 0.038), ('temporally', 0.037), ('snf', 0.037), ('hebb', 0.037), ('barber', 0.037), ('target', 0.037), ('limit', 0.037), ('em', 0.037), ('uncorrelated', 0.037), ('response', 0.036), ('depression', 0.035), ('step', 0.035), ('past', 0.035), ('time', 0.034), ('dependent', 0.034), ('stochastic', 0.034), ('capacity', 0.034), ('curve', 0.034), ('distribution', 0.034), ('potentiation', 0.033), ('neural', 0.033), ('units', 0.033), ('log', 0.033), ('refractory', 0.032), ('bern', 0.032), ('ster', 0.032), ('divergence', 0.032), ('hinton', 0.03), ('realistic', 0.03), ('modulated', 0.03), ('calculating', 0.029), ('sampling', 0.029), ('swiss', 0.029), ('mod', 0.029), ('minus', 0.028), ('contrastive', 0.028), ('networks', 0.027), ('ti', 0.027), ('network', 0.026), ('states', 0.025), ('phase', 0.025), ('cycles', 0.024), ('gap', 0.024), ('populations', 0.023), ('converged', 0.023), ('hamming', 0.023)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.99999928 249 nips-2011-Sequence learning with hidden units in spiking neural networks

Author: Johanni Brea, Walter Senn, Jean-pascal Pfister

Abstract: We consider a statistical framework in which recurrent networks of spiking neurons learn to generate spatio-temporal spike patterns. Given biologically realistic stochastic neuronal dynamics we derive a tractable learning rule for the synaptic weights towards hidden and visible neurons that leads to optimal recall of the training sequences. We show that learning synaptic weights towards hidden neurons significantly improves the storing capacity of the network. Furthermore, we derive an approximate online learning rule and show that our learning rule is consistent with Spike-Timing Dependent Plasticity in that if a presynaptic spike shortly precedes a postsynaptic spike, potentiation is induced and otherwise depression is elicited.

2 0.37883991 302 nips-2011-Variational Learning for Recurrent Spiking Networks

Author: Danilo J. Rezende, Daan Wierstra, Wulfram Gerstner

Abstract: We derive a plausible learning rule for feedforward, feedback and lateral connections in a recurrent network of spiking neurons. Operating in the context of a generative model for distributions of spike sequences, the learning mechanism is derived from variational inference principles. The synaptic plasticity rules found are interesting in that they are strongly reminiscent of experimental Spike Time Dependent Plasticity, and in that they differ for excitatory and inhibitory neurons. A simulation confirms the method’s applicability to learning both stationary and temporal spike patterns. 1

3 0.27169827 23 nips-2011-Active dendrites: adaptation to spike-based communication

Author: Balazs B. Ujfalussy, Máté Lengyel

Abstract: Computational analyses of dendritic computations often assume stationary inputs to neurons, ignoring the pulsatile nature of spike-based communication between neurons and the moment-to-moment fluctuations caused by such spiking inputs. Conversely, circuit computations with spiking neurons are usually formalized without regard to the rich nonlinear nature of dendritic processing. Here we address the computational challenge faced by neurons that compute and represent analogue quantities but communicate with digital spikes, and show that reliable computation of even purely linear functions of inputs can require the interplay of strongly nonlinear subunits within the postsynaptic dendritic tree. Our theory predicts a matching of dendritic nonlinearities and synaptic weight distributions to the joint statistics of presynaptic inputs. This approach suggests normative roles for some puzzling forms of nonlinear dendritic dynamics and plasticity. 1

4 0.20947585 133 nips-2011-Inferring spike-timing-dependent plasticity from spike train data

Author: Konrad Koerding, Ian Stevenson

Abstract: Synaptic plasticity underlies learning and is thus central for development, memory, and recovery from injury. However, it is often difficult to detect changes in synaptic strength in vivo, since intracellular recordings are experimentally challenging. Here we present two methods aimed at inferring changes in the coupling between pairs of neurons from extracellularly recorded spike trains. First, using a generalized bilinear model with Poisson output we estimate time-varying coupling assuming that all changes are spike-timing-dependent. This approach allows model-based estimation of STDP modification functions from pairs of spike trains. Then, using recursive point-process adaptive filtering methods we estimate more general variation in coupling strength over time. Using simulations of neurons undergoing spike-timing dependent modification, we show that the true modification function can be recovered. Using multi-electrode data from motor cortex we then illustrate the use of this technique on in vivo data. 1

5 0.18423958 94 nips-2011-Facial Expression Transfer with Input-Output Temporal Restricted Boltzmann Machines

Author: Matthew D. Zeiler, Graham W. Taylor, Leonid Sigal, Iain Matthews, Rob Fergus

Abstract: We present a type of Temporal Restricted Boltzmann Machine that defines a probability distribution over an output sequence conditional on an input sequence. It shares the desirable properties of RBMs: efficient exact inference, an exponentially more expressive latent state than HMMs, and the ability to model nonlinear structure and dynamics. We apply our model to a challenging real-world graphics problem: facial expression transfer. Our results demonstrate improved performance over several baselines modeling high-dimensional 2D and 3D data. 1

6 0.18360548 82 nips-2011-Efficient coding of natural images with a population of noisy Linear-Nonlinear neurons

7 0.18321568 224 nips-2011-Probabilistic Modeling of Dependencies Among Visual Short-Term Memory Representations

8 0.1633033 219 nips-2011-Predicting response time and error rates in visual search

9 0.14776637 135 nips-2011-Information Rates and Optimal Decoding in Large Neural Populations

10 0.1472888 200 nips-2011-On the Analysis of Multi-Channel Neural Spike Data

11 0.14478052 92 nips-2011-Expressive Power and Approximation Errors of Restricted Boltzmann Machines

12 0.14222632 75 nips-2011-Dynamical segmentation of single trials from population neural data

13 0.1311473 86 nips-2011-Empirical models of spiking in neural populations

14 0.12822972 2 nips-2011-A Brain-Machine Interface Operating with a Real-Time Spiking Neural Network Control Algorithm

15 0.12573835 292 nips-2011-Two is better than one: distinct roles for familiarity and recollection in retrieving palimpsest memories

16 0.11844693 57 nips-2011-Comparative Analysis of Viterbi Training and Maximum Likelihood Estimation for HMMs

17 0.11683421 243 nips-2011-Select and Sample - A Model of Efficient Neural Inference and Learning

18 0.10442755 250 nips-2011-Shallow vs. Deep Sum-Product Networks

19 0.10397787 183 nips-2011-Neural Reconstruction with Approximate Message Passing (NeuRAMP)

20 0.10342714 184 nips-2011-Neuronal Adaptation for Sampling-Based Probabilistic Inference in Perceptual Bistability


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.23), (1, 0.096), (2, 0.394), (3, -0.034), (4, 0.138), (5, 0.012), (6, -0.074), (7, -0.09), (8, -0.056), (9, -0.134), (10, 0.085), (11, -0.015), (12, 0.001), (13, 0.038), (14, -0.038), (15, -0.088), (16, -0.059), (17, 0.069), (18, -0.037), (19, 0.118), (20, 0.046), (21, 0.111), (22, -0.07), (23, 0.052), (24, -0.043), (25, -0.094), (26, 0.072), (27, -0.17), (28, 0.157), (29, 0.039), (30, -0.017), (31, -0.016), (32, 0.02), (33, -0.069), (34, 0.045), (35, -0.006), (36, 0.053), (37, -0.018), (38, -0.013), (39, -0.048), (40, 0.001), (41, -0.068), (42, 0.01), (43, 0.018), (44, -0.079), (45, 0.068), (46, -0.03), (47, -0.03), (48, -0.016), (49, 0.008)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.97183347 249 nips-2011-Sequence learning with hidden units in spiking neural networks

Author: Johanni Brea, Walter Senn, Jean-pascal Pfister

Abstract: We consider a statistical framework in which recurrent networks of spiking neurons learn to generate spatio-temporal spike patterns. Given biologically realistic stochastic neuronal dynamics we derive a tractable learning rule for the synaptic weights towards hidden and visible neurons that leads to optimal recall of the training sequences. We show that learning synaptic weights towards hidden neurons significantly improves the storing capacity of the network. Furthermore, we derive an approximate online learning rule and show that our learning rule is consistent with Spike-Timing Dependent Plasticity in that if a presynaptic spike shortly precedes a postsynaptic spike, potentiation is induced and otherwise depression is elicited.

2 0.82336354 23 nips-2011-Active dendrites: adaptation to spike-based communication

Author: Balazs B. Ujfalussy, Máté Lengyel

Abstract: Computational analyses of dendritic computations often assume stationary inputs to neurons, ignoring the pulsatile nature of spike-based communication between neurons and the moment-to-moment fluctuations caused by such spiking inputs. Conversely, circuit computations with spiking neurons are usually formalized without regard to the rich nonlinear nature of dendritic processing. Here we address the computational challenge faced by neurons that compute and represent analogue quantities but communicate with digital spikes, and show that reliable computation of even purely linear functions of inputs can require the interplay of strongly nonlinear subunits within the postsynaptic dendritic tree. Our theory predicts a matching of dendritic nonlinearities and synaptic weight distributions to the joint statistics of presynaptic inputs. This approach suggests normative roles for some puzzling forms of nonlinear dendritic dynamics and plasticity. 1

3 0.79701018 302 nips-2011-Variational Learning for Recurrent Spiking Networks

Author: Danilo J. Rezende, Daan Wierstra, Wulfram Gerstner

Abstract: We derive a plausible learning rule for feedforward, feedback and lateral connections in a recurrent network of spiking neurons. Operating in the context of a generative model for distributions of spike sequences, the learning mechanism is derived from variational inference principles. The synaptic plasticity rules found are interesting in that they are strongly reminiscent of experimental Spike Time Dependent Plasticity, and in that they differ for excitatory and inhibitory neurons. A simulation confirms the method’s applicability to learning both stationary and temporal spike patterns. 1

4 0.73569804 133 nips-2011-Inferring spike-timing-dependent plasticity from spike train data

Author: Konrad Koerding, Ian Stevenson

Abstract: Synaptic plasticity underlies learning and is thus central for development, memory, and recovery from injury. However, it is often difficult to detect changes in synaptic strength in vivo, since intracellular recordings are experimentally challenging. Here we present two methods aimed at inferring changes in the coupling between pairs of neurons from extracellularly recorded spike trains. First, using a generalized bilinear model with Poisson output we estimate time-varying coupling assuming that all changes are spike-timing-dependent. This approach allows model-based estimation of STDP modification functions from pairs of spike trains. Then, using recursive point-process adaptive filtering methods we estimate more general variation in coupling strength over time. Using simulations of neurons undergoing spike-timing dependent modification, we show that the true modification function can be recovered. Using multi-electrode data from motor cortex we then illustrate the use of this technique on in vivo data. 1

5 0.71214658 292 nips-2011-Two is better than one: distinct roles for familiarity and recollection in retrieving palimpsest memories

Author: Cristina Savin, Peter Dayan, Máté Lengyel

Abstract: Storing a new pattern in a palimpsest memory system comes at the cost of interfering with the memory traces of previously stored items. Knowing the age of a pattern thus becomes critical for recalling it faithfully. This implies that there should be a tight coupling between estimates of age, as a form of familiarity, and the neural dynamics of recollection, something which current theories omit. Using a normative model of autoassociative memory, we show that a dual memory system, consisting of two interacting modules for familiarity and recollection, has best performance for both recollection and recognition. This finding provides a new window onto actively contentious psychological and neural aspects of recognition memory. 1

6 0.70563471 2 nips-2011-A Brain-Machine Interface Operating with a Real-Time Spiking Neural Network Control Algorithm

7 0.64639825 75 nips-2011-Dynamical segmentation of single trials from population neural data

8 0.6353026 85 nips-2011-Emergence of Multiplication in a Biophysical Model of a Wide-Field Visual Neuron for Computing Object Approaches: Dynamics, Peaks, & Fits

9 0.62644476 224 nips-2011-Probabilistic Modeling of Dependencies Among Visual Short-Term Memory Representations

10 0.5988853 184 nips-2011-Neuronal Adaptation for Sampling-Based Probabilistic Inference in Perceptual Bistability

11 0.59385496 86 nips-2011-Empirical models of spiking in neural populations

12 0.58844239 94 nips-2011-Facial Expression Transfer with Input-Output Temporal Restricted Boltzmann Machines

13 0.5343926 99 nips-2011-From Stochastic Nonlinear Integrate-and-Fire to Generalized Linear Models

14 0.51449531 135 nips-2011-Information Rates and Optimal Decoding in Large Neural Populations

15 0.49504733 219 nips-2011-Predicting response time and error rates in visual search

16 0.47141284 92 nips-2011-Expressive Power and Approximation Errors of Restricted Boltzmann Machines

17 0.46174002 243 nips-2011-Select and Sample - A Model of Efficient Neural Inference and Learning

18 0.46046242 82 nips-2011-Efficient coding of natural images with a population of noisy Linear-Nonlinear neurons

19 0.44424313 250 nips-2011-Shallow vs. Deep Sum-Product Networks

20 0.41009924 89 nips-2011-Estimating time-varying input signals and ion channel states from a single voltage trace of a neuron


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(0, 0.014), (4, 0.029), (20, 0.022), (26, 0.025), (31, 0.22), (33, 0.012), (43, 0.046), (45, 0.079), (57, 0.037), (74, 0.05), (82, 0.104), (83, 0.158), (84, 0.011), (99, 0.092)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.94008803 249 nips-2011-Sequence learning with hidden units in spiking neural networks

Author: Johanni Brea, Walter Senn, Jean-pascal Pfister

Abstract: We consider a statistical framework in which recurrent networks of spiking neurons learn to generate spatio-temporal spike patterns. Given biologically realistic stochastic neuronal dynamics we derive a tractable learning rule for the synaptic weights towards hidden and visible neurons that leads to optimal recall of the training sequences. We show that learning synaptic weights towards hidden neurons significantly improves the storing capacity of the network. Furthermore, we derive an approximate online learning rule and show that our learning rule is consistent with Spike-Timing Dependent Plasticity in that if a presynaptic spike shortly precedes a postsynaptic spike, potentiation is induced and otherwise depression is elicited.

2 0.86909056 23 nips-2011-Active dendrites: adaptation to spike-based communication

Author: Balazs B. Ujfalussy, Máté Lengyel

Abstract: Computational analyses of dendritic computations often assume stationary inputs to neurons, ignoring the pulsatile nature of spike-based communication between neurons and the moment-to-moment fluctuations caused by such spiking inputs. Conversely, circuit computations with spiking neurons are usually formalized without regard to the rich nonlinear nature of dendritic processing. Here we address the computational challenge faced by neurons that compute and represent analogue quantities but communicate with digital spikes, and show that reliable computation of even purely linear functions of inputs can require the interplay of strongly nonlinear subunits within the postsynaptic dendritic tree. Our theory predicts a matching of dendritic nonlinearities and synaptic weight distributions to the joint statistics of presynaptic inputs. This approach suggests normative roles for some puzzling forms of nonlinear dendritic dynamics and plasticity. 1

3 0.85063416 133 nips-2011-Inferring spike-timing-dependent plasticity from spike train data

Author: Konrad Koerding, Ian Stevenson

Abstract: Synaptic plasticity underlies learning and is thus central for development, memory, and recovery from injury. However, it is often difficult to detect changes in synaptic strength in vivo, since intracellular recordings are experimentally challenging. Here we present two methods aimed at inferring changes in the coupling between pairs of neurons from extracellularly recorded spike trains. First, using a generalized bilinear model with Poisson output we estimate time-varying coupling assuming that all changes are spike-timing-dependent. This approach allows model-based estimation of STDP modification functions from pairs of spike trains. Then, using recursive point-process adaptive filtering methods we estimate more general variation in coupling strength over time. Using simulations of neurons undergoing spike-timing dependent modification, we show that the true modification function can be recovered. Using multi-electrode data from motor cortex we then illustrate the use of this technique on in vivo data. 1

4 0.84879363 13 nips-2011-A blind sparse deconvolution method for neural spike identification

Author: Chaitanya Ekanadham, Daniel Tranchina, Eero P. Simoncelli

Abstract: We consider the problem of estimating neural spikes from extracellular voltage recordings. Most current methods are based on clustering, which requires substantial human supervision and systematically mishandles temporally overlapping spikes. We formulate the problem as one of statistical inference, in which the recorded voltage is a noisy sum of the spike trains of each neuron convolved with its associated spike waveform. Joint maximum-a-posteriori (MAP) estimation of the waveforms and spikes is then a blind deconvolution problem in which the coefficients are sparse. We develop a block-coordinate descent procedure to approximate the MAP solution, based on our recently developed continuous basis pursuit method. We validate our method on simulated data as well as real data for which ground truth is available via simultaneous intracellular recordings. In both cases, our method substantially reduces the number of missed spikes and false positives when compared to a standard clustering algorithm, primarily by recovering overlapping spikes. The method offers a fully automated alternative to clustering methods that is less susceptible to systematic errors. 1

5 0.84398627 75 nips-2011-Dynamical segmentation of single trials from population neural data

Author: Biljana Petreska, Byron M. Yu, John P. Cunningham, Gopal Santhanam, Stephen I. Ryu, Krishna V. Shenoy, Maneesh Sahani

Abstract: Simultaneous recordings of many neurons embedded within a recurrentlyconnected cortical network may provide concurrent views into the dynamical processes of that network, and thus its computational function. In principle, these dynamics might be identified by purely unsupervised, statistical means. Here, we show that a Hidden Switching Linear Dynamical Systems (HSLDS) model— in which multiple linear dynamical laws approximate a nonlinear and potentially non-stationary dynamical process—is able to distinguish different dynamical regimes within single-trial motor cortical activity associated with the preparation and initiation of hand movements. The regimes are identified without reference to behavioural or experimental epochs, but nonetheless transitions between them correlate strongly with external events whose timing may vary from trial to trial. The HSLDS model also performs better than recent comparable models in predicting the firing rate of an isolated neuron based on the firing rates of others, suggesting that it captures more of the “shared variance” of the data. Thus, the method is able to trace the dynamical processes underlying the coordinated evolution of network activity in a way that appears to reflect its computational role. 1

6 0.83695292 302 nips-2011-Variational Learning for Recurrent Spiking Networks

7 0.83483028 292 nips-2011-Two is better than one: distinct roles for familiarity and recollection in retrieving palimpsest memories

8 0.82587713 86 nips-2011-Empirical models of spiking in neural populations

9 0.81794012 240 nips-2011-Robust Multi-Class Gaussian Process Classification

10 0.81423694 229 nips-2011-Query-Aware MCMC

11 0.81290406 243 nips-2011-Select and Sample - A Model of Efficient Neural Inference and Learning

12 0.81178129 40 nips-2011-Automated Refinement of Bayes Networks' Parameters based on Test Ordering Constraints

13 0.80793399 37 nips-2011-Analytical Results for the Error in Filtering of Gaussian Processes

14 0.8066172 137 nips-2011-Iterative Learning for Reliable Crowdsourcing Systems

15 0.8053776 225 nips-2011-Probabilistic amplitude and frequency demodulation

16 0.80531025 135 nips-2011-Information Rates and Optimal Decoding in Large Neural Populations

17 0.80415249 102 nips-2011-Generalised Coupled Tensor Factorisation

18 0.80352598 241 nips-2011-Scalable Training of Mixture Models via Coresets

19 0.80062813 88 nips-2011-Environmental statistics and the trade-off between model-based and TD learning in humans

20 0.8001827 94 nips-2011-Facial Expression Transfer with Input-Output Temporal Restricted Boltzmann Machines