nips nips2013 nips2013-246 knowledge-graph by maker-knowledge-mining

246 nips-2013-Perfect Associative Learning with Spike-Timing-Dependent Plasticity


Source: pdf

Author: Christian Albers, Maren Westkott, Klaus Pawelzik

Abstract: Recent extensions of the Perceptron as the Tempotron and the Chronotron suggest that this theoretical concept is highly relevant for understanding networks of spiking neurons in the brain. It is not known, however, how the computational power of the Perceptron might be accomplished by the plasticity mechanisms of real synapses. Here we prove that spike-timing-dependent plasticity having an anti-Hebbian form for excitatory synapses as well as a spike-timing-dependent plasticity of Hebbian shape for inhibitory synapses are sufficient for realizing the original Perceptron Learning Rule if these respective plasticity mechanisms act in concert with the hyperpolarisation of the post-synaptic neurons. We also show that with these simple yet biologically realistic dynamics Tempotrons and Chronotrons are learned. The proposed mechanism enables incremental associative learning from a continuous stream of patterns and might therefore underly the acquisition of long term memories in cortex. Our results underline that learning processes in realistic networks of spiking neurons depend crucially on the interactions of synaptic plasticity mechanisms with the dynamics of participating neurons.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Abstract Recent extensions of the Perceptron as the Tempotron and the Chronotron suggest that this theoretical concept is highly relevant for understanding networks of spiking neurons in the brain. [sent-7, score-0.149]

2 It is not known, however, how the computational power of the Perceptron might be accomplished by the plasticity mechanisms of real synapses. [sent-8, score-0.258]

3 The proposed mechanism enables incremental associative learning from a continuous stream of patterns and might therefore underly the acquisition of long term memories in cortex. [sent-11, score-0.233]

4 Our results underline that learning processes in realistic networks of spiking neurons depend crucially on the interactions of synaptic plasticity mechanisms with the dynamics of participating neurons. [sent-12, score-0.638]

5 The original Perceptron Learning Rule (PLR) is a supervised learning rule that employs a threshold to control weight changes, which also serves as a margin to enhance robustness [2, 3]. [sent-14, score-0.187]

6 Associative learning can be considered a special case of supervised learning where the activity of the output neuron is used as a teacher signal such that after learning missing activities are filled in. [sent-16, score-0.28]

7 For this reason the PLR is very useful for building associative memories in recurrent networks where it can serve to learn arbitrary patterns in a ’quasi-unsupervised’ way. [sent-17, score-0.216]

8 On the other hand, it is not known if and how real synaptic mechanisms might realize the success-dependent self-regulation of the PLR in networks of spiking neurons in the brain. [sent-20, score-0.361]

9 The simplified tempotron learning rule, while biologically more plausible, still relies on a reward signal which tells each neuron specifically whether it should have spiked or not. [sent-22, score-0.408]

10 Taken together, while highly desirable, the feature of self regulation in the PLR still poses a challenge for biologically realistic synaptic mechanisms. [sent-23, score-0.253]

11 In particular, it was found that the reversed temporal order (first post- then presynaptic spiking) could lead to LTP (and vice versa; RSTDP), depending on the location on the dendrite [9, 10]. [sent-26, score-0.228]

12 For inhibitory synapses some experiments were performed which indicate that here STDP exists as well and has the form of CSTDP [11]. [sent-27, score-0.147]

13 Note that CSTDP of inhibitory synapses in its effect on the postsynaptic neuron is equivalent to RSTDP of excitatory synapses. [sent-28, score-0.736]

14 Additionally it has been shown that CSTDP does not always rely on spikes, but that strong subthreshold depolarization can replace the postsynaptic spike for LTD while keeping the usual timing dependence [12]. [sent-29, score-0.901]

15 It is very likely that plasticity rules and dynamical properties of neurons co-evolved to take advantage of each other. [sent-32, score-0.311]

16 A modeling example for a beneficial effect of such an interplay was investigated in [13], where CSTDP interacted with spike-frequency adaptation of the postsynaptic neuron to perform a gradient descent on a square error. [sent-34, score-0.556]

17 When the neuron reaches a threshold potential Uthr , it is reset to a reset potential Ureset < 0, from where it decays back to the resting potential U∞ = 0 with time constant τU . [sent-42, score-0.412]

18 Synaptic transmission takes the form of delta pulses, which reach the soma of the postsynaptic neuron after time τa + τd , and are modulated by the synaptic weight w. [sent-44, score-0.745]

19 We denote the presynaptic spike train of neuron i as xi with spike times ti^pre: xi(t) = Σti^pre δ(t − ti^pre) (2). [sent-45, score-1.049]
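
To make the neuron model of sentences 17-19 concrete, here is a minimal simulation sketch, not code from the paper: leaky integrate-and-fire dynamics with a hyperpolarizing reset and weight-modulated delta-pulse inputs arriving after the delay τa + τd. All numerical values, the time step, and the function name simulate_lif are illustrative assumptions.

```python
import numpy as np

def simulate_lif(spike_times, w, T=0.2, dt=1e-4,
                 tau_U=0.02, U_thr=1.0, U_reset=-1.0, delay=0.005):
    """Leaky integrate-and-fire neuron with hyperpolarizing reset.

    spike_times : list of arrays, presynaptic spike times of each input neuron
    w           : array of synaptic weights
    delay       : total transmission delay tau_a + tau_d (value assumed)
    Returns the membrane potential trace and the postsynaptic spike times.
    """
    n_steps = int(T / dt)
    U = np.zeros(n_steps)                      # resting potential U_inf = 0
    I = np.zeros(n_steps)                      # delta-pulse input, delayed and weighted
    for i, times in enumerate(spike_times):
        for t in times:
            k = int(round((t + delay) / dt))
            if k < n_steps:
                I[k] += w[i] / dt              # delta pulse of area w[i]
    post_spikes = []
    for k in range(1, n_steps):
        # leaky integration back towards U_inf = 0 with time constant tau_U
        U[k] = U[k - 1] + dt * (-U[k - 1] / tau_U) + dt * I[k]
        if U[k] >= U_thr:                      # threshold crossing -> postsynaptic spike
            post_spikes.append(k * dt)
            U[k] = U_reset                     # hyperpolarizing reset, U_reset < 0
    return U, np.array(post_spikes)
```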

20 Figure 1: Illustration of STDP mechanism. Panel A sketches the postsynaptic membrane potential with the levels Uthr, Ust and U∞, the presynaptic spikes x(t), the weight w(t), the postsynaptic trace ȳ, and the subthreshold events z(t); panel B sketches the resulting STDP window. [sent-46, score-1.079]

21 A: Upper trace (red) is the membrane potential of the postsynaptic neuron. [sent-47, score-0.668]

22 Middle trace (black) is the variable z(t), the train of LTD threshold crossing events. [sent-49, score-0.14]

23 Please note that the first spike in z(t) occurs at a different time than the neuronal spike. [sent-50, score-0.339]

24 The second event in z reads out the trace x̄ of the presynaptic spike, leading to LTD. [sent-52, score-0.551]

25 A postsynaptic spike leads to an instantaneous jump in the trace ȳ (top left, red line), which decays exponentially. [sent-54, score-0.764]

26 Subsequent presynaptic spikes (dark blue bars and corresponding thin gray bars in the STDP window) “read” out the state of the trace for the respective ∆t = tpre − tpost. [sent-55, score-0.402]

27 Similarly, z(t) reads out the presynaptic trace x̄ (lower left, blue line). [sent-56, score-0.242]

28 A postsynaptic neuron receives the input Isyn(t) = Σi wi xi(t − τa − τd). [sent-58, score-0.585]

29 The postsynaptic spike train is similarly denoted by y(t) = Σtpost δ(t − tpost). [sent-59, score-0.809]

30 The plasticity rule we employ mimics reverse STDP: a postsynaptic spike which arrives at the synapse shortly before a presynaptic spike leads to synaptic potentiation. [sent-61, score-2.071]

31 For synaptic depression the relevant signal is not the spike, but the point in time where U (t) crosses an additional threshold Ust from below, with U∞ < Ust < Uthr (“subthreshold threshold”). [sent-62, score-0.273]

32 These events are modelled as δ-pulses in the function z(t) = Σk δ(t − tk), where tk are the times of the aforementioned threshold crossing events (see Fig. [sent-63, score-0.157]

33 The temporal characteristic of (reverse) STDP is preserved: If a presynaptic spike occurs shortly before the membrane potential crosses this threshold, the synapse depresses. [sent-65, score-0.766]

34 Timing dependent LTD without postsynaptic spiking has been observed, although with classical timing requirements [12]. [sent-66, score-0.52]

35 We formalize this by letting pre- and postsynaptic spikes each drive a synaptic trace: τpre dx̄/dt = −x̄ + x(t − τa) and τpost dȳ/dt = −ȳ + y(t − τd) (3). [sent-67, score-0.723]

36 The learning rule is a read-out of the traces by spiking and threshold crossing events, respectively: dw/dt ∝ ȳ x(t − τa) − γ x̄ z(t − τd) (4), where γ is a factor which scales depression and potentiation relative to each other. [sent-68, score-0.356]
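
A discrete-time sketch of the trace-based plasticity rule of eqs. (3)-(4), assuming the spike trains and the Ust-crossing events z(t) are given as binned binary arrays; the parameter values and the function name are placeholders, not the paper's settings.

```python
import numpy as np

def rstdp_weight_update(x, y, z, dt=1e-4, tau_pre=0.02, tau_post=0.02,
                        tau_a=0.003, tau_d=0.002, gamma=1.0):
    """Accumulated weight change (up to a proportionality constant) for one synapse.

    x, y : binary arrays, pre- and postsynaptic spike trains (1 = spike in bin)
    z    : binary array, crossings of the subthreshold threshold U_st
    """
    n = len(x)
    da = int(round(tau_a / dt))        # axonal delay in bins
    dd = int(round(tau_d / dt))        # dendritic delay in bins
    x_bar, y_bar = 0.0, 0.0            # synaptic traces driven by delayed spike trains
    dw = 0.0
    for k in range(n):
        x_del = x[k - da] if k >= da else 0.0
        y_del = y[k - dd] if k >= dd else 0.0
        z_del = z[k - dd] if k >= dd else 0.0
        # trace dynamics: tau * d(trace)/dt = -trace + delayed spike train (eq. 3)
        x_bar += dt * (-x_bar / tau_pre) + x_del / tau_pre
        y_bar += dt * (-y_bar / tau_post) + y_del / tau_post
        # eq. 4: presynaptic spikes read out the postsynaptic trace (potentiation),
        # U_st crossing events read out the presynaptic trace (depression)
        dw += y_bar * x_del - gamma * x_bar * z_del
    return dw
```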

37 Fig. 1B shows how this plasticity rule creates RSTDP. [sent-70, score-0.302]

38 The Perceptron Learning Rule (PLR) for positive binary inputs and outputs is given by ∆wi^µ ∝ xi,µ (2y0^µ − 1) Θ[κ − (2y0^µ − 1)(h0^µ − ϑ)] (5), where xi,µ ∈ {0, 1} denotes the activity of presynaptic neuron i in pattern µ ∈ {1, … [sent-71, score-0.418]

39 …, P}, y0^µ ∈ {0, 1} signals the desired response to pattern µ, κ > 0 is a margin which ensures a certain robustness against noise after convergence, h0^µ = Σi wi xi,µ is the input to a postsynaptic neuron, ϑ denotes the firing threshold, and Θ(x) denotes the Heaviside step function. [sent-74, score-0.578]

40 If the P patterns are linearly separable, the perceptron will converge to a correct solution of the weights in a finite number of steps. [sent-75, score-0.186]
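For reference, a compact implementation of the PLR of eq. (5); the learning rate eta and the stopping criterion are assumptions added for this sketch rather than values from the paper.

```python
import numpy as np

def perceptron_learning_rule(X, y0, theta, kappa=0.1, eta=0.01, max_epochs=1000):
    """Classical PLR with margin kappa (eq. 5); X has shape (P, N), y0 in {0, 1}.

    A weight update is applied only while (2*y0 - 1)*(h - theta) is below kappa.
    """
    P, N = X.shape
    w = np.zeros(N)
    for _ in range(max_epochs):
        changed = False
        for mu in range(P):
            h = w @ X[mu]                      # input h to the postsynaptic neuron
            s = 2 * y0[mu] - 1                 # +1 for "should fire", -1 otherwise
            if kappa - s * (h - theta) > 0:    # Heaviside condition of eq. (5)
                w += eta * X[mu] * s
                changed = True
        if not changed:                        # converged with margin kappa
            break
    return w
```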

41 Interestingly, for the case of temporally well separated synchronous spike patterns the combination of RSTDP-like synaptic plasticity dynamics with depolarization-dependent LTD and neuronal hyperpolarization leads to a plasticity rule which can be mapped to the Perceptron Learning Rule. [sent-78, score-1.323]

42 We consider a single postsynaptic neuron with N presynaptic neurons, with the condition τd < τa . [sent-80, score-0.759]

43 During learning, presynaptic spike patterns consisting of synchronous spikes at time t = 0 are induced, concurrent with a possibly occurring postsynaptic spike which signals the class the presynaptic pattern belongs to. [sent-81, score-1.698]

44 With x0 and y0 used as above we can write the pre- and postsynaptic activity as x(t) = x0 δ(t) and y(t) = y0 δ(t). [sent-83, score-0.461]

45 The membrane potential of the postsynaptic neuron depends on y0: U(t) = y0 Ureset exp(−t/τU), so that U(τa + τd) = y0 Ureset exp(−(τa + τd)/τU) = y0 Uad (6). [sent-84, score-0.769]

46 (6) Similarly, the synaptic current is wi xi δ(t − τa − τd ) 0 Isyn (t) = i wi xi = Iad . [sent-85, score-0.247]

47 0 Isyn (τa + τd ) = (7) i The activity traces at the synapses are exp(−(t − τa )/τpre ) τpre exp(−(t − τd )/τpost ) y (t) = y0 Θ(t − τd ) ¯ . [sent-86, score-0.183]

48 τpost x(t) = x0 Θ(t − τa ) ¯ (8) The variable of threshold crossing z(t) depends on the history of the postsynaptic neurons, which again can be written with the aid of y0 as: z(t) = Θ(Iad + y0 Uad − Ust )δ(t − τa − τd ). [sent-87, score-0.517]

49 Only when the postsynaptic input at time t = τa + τd is greater than the residual hyperpolarization (Uad < 0!)

50 These are the ingredients for the plasticity rule (4): ∆w ∝ ∫ [ȳ x(t − τa) − γ x̄ z(t − τd)] dt = x0 y0 exp(−(τa + τd)/τpost) − γ x0 exp(−2τd/τpre) Θ(Iad + y0 Uad − Ust).
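
Assuming the grouping of the exponential factors reconstructed above, the closed-form weight change per synchronous pattern can be evaluated directly; every constant below is an illustrative placeholder rather than a parameter from the paper.

```python
import numpy as np

def delta_w_synchronous(x0, y0, w, tau_pre=0.02, tau_post=0.02,
                        tau_a=0.003, tau_d=0.002,
                        U_reset=-1.0, U_st=0.5, tau_U=0.02, gamma=1.0):
    """Weight change (up to a constant) for one synchronous pattern, cf. eqs. (6)-(10).

    x0 : binary input pattern (numpy array), y0 : desired output (0 or 1).
    """
    U_ad = U_reset * np.exp(-(tau_a + tau_d) / tau_U)    # residual hyperpolarization
    I_ad = w @ x0                                        # summed synaptic input
    ltp = x0 * y0 * np.exp(-(tau_a + tau_d) / tau_post)
    ltd = gamma * x0 * np.exp(-2 * tau_d / tau_pre) * (I_ad + y0 * U_ad > U_st)
    return ltp - ltd
```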

51 4 Associative learning of spatio-temporal spike patterns. [sent-101, score-0.387]

52 Tempotron learning with RSTDP: The condition of exact spike synchrony used for the above equivalence proof can be relaxed to include the association of spatio-temporal spike patterns with a desired postsynaptic activity. [sent-102, score-1.176]

53 In the following we take the perspective of the postsynaptic neuron which during learning is externally activated (or not) to signal the respective class by spiking at time t = 0 (or not). [sent-103, score-0.625]

54 During learning in each trial presynaptic spatio-temporal spike patterns are presented in the time span 0 < t < T , and plasticity is ruled by (4). [sent-104, score-0.825]

55 For these conditions the resulting synaptic weights realize a Tempotron with substantial memory capacity. [sent-105, score-0.189]

56 A Tempotron is an integrate-and-fire neuron with input weights adjusted to perform arbitrary classifications of (sparse) spike patterns [5, 18]. [sent-106, score-0.527]

57 First, we separate the time scales of membrane potential and hyperpolarization by introducing a variable ν: τν dν/dt = −ν (16). [sent-108, score-0.339]

58 Immediately after a postsynaptic spike, ν is reset to νspike < 0. [sent-109, score-0.445]

59 The reason is that the length of hyperpolarization determines the time window where significant learning can take place. [sent-110, score-0.181]

60 2 s, so that the postsynaptic neuron can learn to spike almost anywhere over the time window, and we introduce postsynaptic potentials (PSPs) with a finite rise time: τs dIsyn/dt = −Isyn + Σi wi xi(t − τa) (17), where wi denotes the synaptic weight of presynaptic neuron i. [sent-113, score-1.871]

61 With these changes, the membrane potential is governed by τU dU/dt = (ν − U) + Isyn(t − τd) (18). [sent-116, score-0.213]

62 A postsynaptic spike resets U to νspike = Ureset < 0. [sent-117, score-0.725]

63 τU sets the time scale for the summation of EPSPs contributing to spurious spikes, while τν sets the time window where the desired spikes can lie. [sent-121, score-0.209]
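
A sketch of the Tempotron membrane dynamics of eqs. (16)-(18), with the separate hyperpolarization variable ν and PSPs with a finite rise time; the time constants and reset value below are placeholders rather than the paper's parameters.

```python
import numpy as np

def simulate_tempotron(x, w, dt=1e-4, tau_U=0.01, tau_nu=0.05, tau_s=0.005,
                       tau_a=0.003, tau_d=0.002, U_thr=1.0, nu_spike=-1.0):
    """Membrane dynamics with slow hyperpolarization variable nu (eqs. 16-18).

    x : binary array of shape (time bins, inputs) of presynaptic spikes; w : weights.
    Returns the postsynaptic spike times.
    """
    n_steps, n_in = x.shape
    da, dd = int(round(tau_a / dt)), int(round(tau_d / dt))
    U, nu, I = 0.0, 0.0, 0.0
    I_hist = np.zeros(n_steps)
    spikes = []
    for k in range(n_steps):
        drive = w @ x[k - da] if k >= da else 0.0
        I += dt * (-I + drive / dt) / tau_s          # PSP with finite rise time (eq. 17)
        I_hist[k] = I
        I_delayed = I_hist[k - dd] if k >= dd else 0.0
        nu += dt * (-nu / tau_nu)                    # slow hyperpolarization (eq. 16)
        U += dt * ((nu - U) + I_delayed) / tau_U     # membrane potential (eq. 18)
        if U >= U_thr:
            spikes.append(k * dt)
            U = nu = nu_spike                        # reset to nu_spike < 0
    return np.array(spikes)
```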

64 Figure 2: Illustration of Perceptron learning with RSTDP with subthreshold LTD and postsynaptic hyperpolarization. [sent-123, score-0.515]

65 Pre- and postsynaptic spikes are displayed as black bars at t = 0. [sent-125, score-0.534]

66 Initially the weights are too low and the synaptic current (summed PSPs) is smaller than Ust . [sent-129, score-0.189]

67 Weight change is LTP only, until the membrane potential hits Ust during pattern presentation. [sent-130, score-0.285]

68 Shown are the same traces as in A in the absence of an initial postsynaptic spike. [sent-133, score-0.46]

69 The membrane potential after learning is drawn as a dashed line to highlight the amplitude. [sent-134, score-0.213]

70 Without the initial hyperpolarization, the synaptic current after learning is large enough to cross the spiking threshold, and the postsynaptic neuron fires the desired spike. [sent-135, score-0.85]

71 Learning until Ust is reached ensures a minimum height of synaptic currents and therefore robustness against noise. [sent-136, score-0.233]

72 Initially, the synaptic current during pattern presentation causes a spike and consequently LTD. [sent-138, score-0.57]

73 Learning stops when the membrane potential stays below Ust . [sent-139, score-0.213]

74 Shown is the fraction of patterns which elicit the desired postsynaptic activity upon presentation. [sent-143, score-0.527]

75 Shown is the fraction of patterns which during recall succeed in producing the correct postsynaptic spike time in a window of length 30 ms after the teacher spike. [sent-150, score-0.906]

76 We test the performance of networks of N input neurons at classifying spatio-temporal spike patterns by generating P = αN patterns, which we repeatedly present to the network. [sent-155, score-0.467]

77 In each pattern, each presynaptic neuron spikes exactly once at a fixed time in each presentation, with spike times uniformly distributed over the trial. [sent-156, score-0.77]

78 After each block, we test if the postsynaptic output matches the desired activity for each pattern. [sent-160, score-0.497]

79 If during training a postsynaptic spike at t = 0 was induced, the output can lie anytime in the testing trial for a positive outcome. [sent-161, score-0.725]

80 To test scaling of the capacity, we generate networks of 100 to 600 neurons and present the patterns until the classification error reaches a plateau. [sent-162, score-0.158]
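
A sketch of the pattern-generation step of this protocol (one spike per presynaptic neuron at a fixed, uniformly drawn time, with P = αN patterns and random binary target classes); the function and variable names are assumptions, not identifiers from the paper.

```python
import numpy as np

def generate_patterns(N, alpha, T=0.2, rng=None):
    """Generate P = alpha*N spatio-temporal patterns: each presynaptic neuron
    spikes exactly once at a fixed, uniformly drawn time in [0, T).
    Returns spike times of shape (P, N) and binary target classes of length P."""
    rng = np.random.default_rng(rng)
    P = int(alpha * N)
    spike_times = rng.uniform(0.0, T, size=(P, N))   # one spike per neuron per pattern
    targets = rng.integers(0, 2, size=P)             # desired postsynaptic class
    return spike_times, targets

# Assumed outline of the training/testing protocol described above: in each block,
# present every pattern with a teacher spike at t = 0 whenever targets[mu] == 1 and
# apply the plasticity rule; then test each pattern without the teacher signal and
# count how many produce the desired postsynaptic output.
```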

81 Chronotron learning with RSTDP: In the Chronotron [17] input spike patterns become associated with desired spike trains. [sent-180, score-0.732]

82 The plasticity mechanism presented here has the tendency to generate postsynaptic spikes as close in time as possible to the teacher spike during recall. [sent-182, score-1.19]

83 The average distance of these 7 spikes depends on the time constants of hyperpolarization and the learning window, especially τpost . [sent-184, score-0.244]

84 the ability to generate the desired spike times within a short window in time, is shown in Fig. [sent-188, score-0.4]

85 Inspection of the spike times reveals that the average distance of output spikes to the respective teacher spike is much shorter than the learning window (≈ 2ms for α = 0. [sent-192, score-0.862]
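
A small helper, assumed rather than taken from the paper, for the recall measure described here: the fraction of patterns whose recalled spike falls within the 30 ms window after the teacher spike, together with the mean distance of those spikes.

```python
import numpy as np

def chronotron_recall_score(output_spikes, teacher_times, window=0.03):
    """Fraction of patterns with a recalled spike inside `window` (30 ms in the text)
    after the teacher spike, plus the mean distance of the first such spike.

    output_spikes : list of numpy arrays of postsynaptic spike times, one per pattern
    teacher_times : array of teacher spike times, one per pattern
    """
    hits, dists = 0, []
    for spikes, t_teach in zip(output_spikes, teacher_times):
        in_window = spikes[(spikes >= t_teach) & (spikes <= t_teach + window)]
        if in_window.size > 0:
            hits += 1
            dists.append(in_window[0] - t_teach)
    frac_correct = hits / len(teacher_times)
    mean_dist = float(np.mean(dists)) if dists else float("nan")
    return frac_correct, mean_dist
```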

86 This provides a biologically plausible mechanism to build associative memories with a capacity close to the theoretical maximum. [sent-197, score-0.267]

87 The mechanism proposed here is complementary to a previous approach [13] which uses CSTDP in combination with spike frequency adaptation to perform gradient descent learning on a squared error. [sent-200, score-0.35]

88 For sparse spatio-temporal spike patterns extensive simulations show that the same mechanism is able to learn Tempotrons and Chronotrons with substantial memory capacity. [sent-203, score-0.428]

89 However, in the case of the Chronotron the capacity comes close to the one obtained with a commonly employed, supervised spike time learning rule. [sent-205, score-0.376]

90 A prototypical example for such a supervised learning rule is the Tempotron rule proposed by Gütig and Sompolinsky [5]. [sent-207, score-0.186]

91 Essentially, after a pattern presentation the complete time course of the membrane potential during the presentation is examined, and if classification was erroneous, the synaptic weights which contributed most to the absolute maximum of the potential are changed. [sent-208, score-0.57]

92 In other words, the neurons would have to be able to retrospectively disentangle contributions to their membrane potential at a certain time in the past. [sent-209, score-0.269]

93 As we showed here, RSTDP with subthreshold LTD together with postsynaptic hyperpolarization for the first time provides a realistic mechanism for Tempotron and Chronotron learning. [sent-210, score-0.702]

94 After-hyperpolarization allows synaptic potentiation for presynaptic inputs immediately after the teacher spike without causing additional non-teacher spikes, which would be detrimental for learning. [sent-214, score-0.815]

95 During recall, the absence of the hyperpolarization ensures the then desired threshold crossing of the membrane potential (see Fig. [sent-215, score-0.499]

96 It counteracts synaptic potentiation when the membrane potential becomes sufficiently high after the teacher spike. [sent-218, score-0.516]

97 Taken together, our results show that the interplay of neuronal dynamics and synaptic plasticity rules can give rise to powerful learning dynamics. [sent-220, score-0.496]

98 [5] Gütig R, Sompolinsky H (2006) The Tempotron: a neuron that learns spike timing-based decisions. [sent-231, score-0.477]

99 [9] Froemke RC, Poo MM, Dan Y (2005) Spike-timing-dependent synaptic plasticity depends on dendritic location. [sent-239, score-0.424]

100 [14] Song S, Miller KD, Abbott LF (2000) Competitive Hebbian learning through spike-timing-dependent synaptic plasticity. [sent-248, score-0.189]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('postsynaptic', 0.416), ('spike', 0.309), ('ust', 0.259), ('plasticity', 0.235), ('tempotron', 0.224), ('presynaptic', 0.203), ('stdp', 0.196), ('synaptic', 0.189), ('ltd', 0.17), ('iad', 0.168), ('uad', 0.168), ('membrane', 0.159), ('neuron', 0.14), ('chronotron', 0.14), ('hyperpolarization', 0.126), ('plr', 0.126), ('rstdp', 0.126), ('spikes', 0.118), ('isyn', 0.112), ('perceptron', 0.108), ('subthreshold', 0.099), ('cstdp', 0.098), ('synapses', 0.094), ('pre', 0.088), ('bremen', 0.084), ('uthr', 0.084), ('post', 0.081), ('patterns', 0.078), ('teacher', 0.071), ('ureset', 0.07), ('spiking', 0.069), ('rule', 0.067), ('associative', 0.063), ('ltp', 0.062), ('heaviside', 0.062), ('neurons', 0.056), ('window', 0.055), ('potential', 0.054), ('inhibitory', 0.053), ('threshold', 0.052), ('memories', 0.051), ('crossing', 0.049), ('activity', 0.045), ('biologically', 0.044), ('traces', 0.044), ('capacity', 0.043), ('potentiation', 0.043), ('depolarization', 0.042), ('poo', 0.042), ('tpost', 0.042), ('presentation', 0.042), ('mechanism', 0.041), ('synapse', 0.041), ('trace', 0.039), ('hebbian', 0.037), ('concert', 0.037), ('desired', 0.036), ('timing', 0.035), ('excitatory', 0.033), ('depression', 0.032), ('synchronous', 0.032), ('load', 0.032), ('pattern', 0.03), ('neuronal', 0.03), ('dan', 0.03), ('wi', 0.029), ('reset', 0.029), ('equivalence', 0.028), ('events', 0.028), ('chronotrons', 0.028), ('clopath', 0.028), ('hyperpolarisation', 0.028), ('pawelzik', 0.028), ('psps', 0.028), ('tempotrons', 0.028), ('tig', 0.028), ('mv', 0.026), ('perfect', 0.026), ('plausible', 0.025), ('ms', 0.025), ('maren', 0.025), ('hebb', 0.025), ('resume', 0.025), ('dendrite', 0.025), ('gerstner', 0.025), ('iext', 0.025), ('supervised', 0.024), ('networks', 0.024), ('classi', 0.023), ('neocortical', 0.023), ('xz', 0.023), ('ensures', 0.023), ('margin', 0.023), ('mechanisms', 0.023), ('dynamics', 0.022), ('robustness', 0.021), ('rules', 0.02), ('realistic', 0.02), ('sompolinsky', 0.019), ('str', 0.019)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.99999988 246 nips-2013-Perfect Associative Learning with Spike-Timing-Dependent Plasticity

Author: Christian Albers, Maren Westkott, Klaus Pawelzik

Abstract: Recent extensions of the Perceptron as the Tempotron and the Chronotron suggest that this theoretical concept is highly relevant for understanding networks of spiking neurons in the brain. It is not known, however, how the computational power of the Perceptron might be accomplished by the plasticity mechanisms of real synapses. Here we prove that spike-timing-dependent plasticity having an anti-Hebbian form for excitatory synapses as well as a spike-timing-dependent plasticity of Hebbian shape for inhibitory synapses are sufficient for realizing the original Perceptron Learning Rule if these respective plasticity mechanisms act in concert with the hyperpolarisation of the post-synaptic neurons. We also show that with these simple yet biologically realistic dynamics Tempotrons and Chronotrons are learned. The proposed mechanism enables incremental associative learning from a continuous stream of patterns and might therefore underly the acquisition of long term memories in cortex. Our results underline that learning processes in realistic networks of spiking neurons depend crucially on the interactions of synaptic plasticity mechanisms with the dynamics of participating neurons.

2 0.30132973 77 nips-2013-Correlations strike back (again): the case of associative memory retrieval

Author: Cristina Savin, Peter Dayan, Mate Lengyel

Abstract: It has long been recognised that statistical dependencies in neuronal activity need to be taken into account when decoding stimuli encoded in a neural population. Less studied, though equally pernicious, is the need to take account of dependencies between synaptic weights when decoding patterns previously encoded in an auto-associative memory. We show that activity-dependent learning generically produces such correlations, and failing to take them into account in the dynamics of memory retrieval leads to catastrophically poor recall. We derive optimal network dynamics for recall in the face of synaptic correlations caused by a range of synaptic plasticity rules. These dynamics involve well-studied circuit motifs, such as forms of feedback inhibition and experimentally observed dendritic nonlinearities. We therefore show how addressing the problem of synaptic correlations leads to a novel functional account of key biophysical features of the neural substrate. 1

3 0.27663532 49 nips-2013-Bayesian Inference and Online Experimental Design for Mapping Neural Microcircuits

Author: Ben Shababo, Brooks Paige, Ari Pakman, Liam Paninski

Abstract: With the advent of modern stimulation techniques in neuroscience, the opportunity arises to map neuron to neuron connectivity. In this work, we develop a method for efficiently inferring posterior distributions over synaptic strengths in neural microcircuits. The input to our algorithm is data from experiments in which action potentials from putative presynaptic neurons can be evoked while a subthreshold recording is made from a single postsynaptic neuron. We present a realistic statistical model which accounts for the main sources of variability in this experiment and allows for significant prior information about the connectivity and neuronal cell types to be incorporated if available. Due to the technical challenges and sparsity of these systems, it is important to focus experimental time stimulating the neurons whose synaptic strength is most ambiguous, therefore we also develop an online optimal design algorithm for choosing which neurons to stimulate at each trial. 1

4 0.24243073 262 nips-2013-Real-Time Inference for a Gamma Process Model of Neural Spiking

Author: David Carlson, Vinayak Rao, Joshua T. Vogelstein, Lawrence Carin

Abstract: With simultaneous measurements from ever increasing populations of neurons, there is a growing need for sophisticated tools to recover signals from individual neurons. In electrophysiology experiments, this classically proceeds in a two-step process: (i) threshold the waveforms to detect putative spikes and (ii) cluster the waveforms into single units (neurons). We extend previous Bayesian nonparametric models of neural spiking to jointly detect and cluster neurons using a Gamma process model. Importantly, we develop an online approximate inference scheme enabling real-time analysis, with performance exceeding the previous state-of-theart. Via exploratory data analysis—using data with partial ground truth as well as two novel data sets—we find several features of our model collectively contribute to our improved performance including: (i) accounting for colored noise, (ii) detecting overlapping spikes, (iii) tracking waveform dynamics, and (iv) using multiple channels. We hope to enable novel experiments simultaneously measuring many thousands of neurons and possibly adapting stimuli dynamically to probe ever deeper into the mysteries of the brain. 1

5 0.20690006 15 nips-2013-A memory frontier for complex synapses

Author: Subhaneil Lahiri, Surya Ganguli

Abstract: An incredible gulf separates theoretical models of synapses, often described solely by a single scalar value denoting the size of a postsynaptic potential, from the immense complexity of molecular signaling pathways underlying real synapses. To understand the functional contribution of such molecular complexity to learning and memory, it is essential to expand our theoretical conception of a synapse from a single scalar to an entire dynamical system with many internal molecular functional states. Moreover, theoretical considerations alone demand such an expansion; network models with scalar synapses assuming finite numbers of distinguishable synaptic strengths have strikingly limited memory capacity. This raises the fundamental question, how does synaptic complexity give rise to memory? To address this, we develop new mathematical theorems elucidating the relationship between the structural organization and memory properties of complex synapses that are themselves molecular networks. Moreover, in proving such theorems, we uncover a framework, based on first passage time theory, to impose an order on the internal states of complex synaptic models, thereby simplifying the relationship between synaptic structure and function. 1

6 0.17375542 121 nips-2013-Firing rate predictions in optimal balanced networks

7 0.16368197 6 nips-2013-A Determinantal Point Process Latent Variable Model for Inhibition in Neural Spiking Data

8 0.1632973 51 nips-2013-Bayesian entropy estimation for binary spike train data using parametric prior knowledge

9 0.12692732 286 nips-2013-Robust learning of low-dimensional dynamics from large neural ensembles

10 0.11469414 304 nips-2013-Sparse nonnegative deconvolution for compressive calcium imaging: algorithms and phase transitions

11 0.11225586 173 nips-2013-Least Informative Dimensions

12 0.10074054 210 nips-2013-Noise-Enhanced Associative Memories

13 0.086781755 341 nips-2013-Universal models for binary spike patterns using centered Dirichlet processes

14 0.085698463 305 nips-2013-Spectral methods for neural characterization using generalized quadratic models

15 0.080236621 267 nips-2013-Recurrent networks of coupled Winner-Take-All oscillators for solving constraint satisfaction problems

16 0.06920734 141 nips-2013-Inferring neural population dynamics from multiple partial recordings of the same neural circuit

17 0.069155768 205 nips-2013-Multisensory Encoding, Decoding, and Identification

18 0.056683775 61 nips-2013-Capacity of strong attractor patterns to model behavioural and cognitive prototypes

19 0.048607457 208 nips-2013-Neural representation of action sequences: how far can a simple snippet-matching model take us?

20 0.045452148 221 nips-2013-On the Expressive Power of Restricted Boltzmann Machines


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.112), (1, 0.077), (2, -0.107), (3, -0.086), (4, -0.368), (5, -0.037), (6, -0.032), (7, -0.102), (8, 0.04), (9, 0.08), (10, 0.032), (11, 0.025), (12, 0.06), (13, 0.011), (14, 0.056), (15, -0.055), (16, 0.024), (17, 0.084), (18, 0.016), (19, 0.01), (20, -0.028), (21, -0.009), (22, 0.058), (23, 0.188), (24, -0.037), (25, 0.162), (26, 0.072), (27, -0.103), (28, -0.051), (29, 0.019), (30, 0.038), (31, 0.115), (32, 0.05), (33, -0.039), (34, 0.067), (35, 0.066), (36, -0.052), (37, 0.04), (38, -0.071), (39, 0.048), (40, 0.096), (41, 0.024), (42, 0.056), (43, -0.135), (44, -0.01), (45, 0.033), (46, 0.001), (47, 0.1), (48, -0.003), (49, -0.008)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.97216815 246 nips-2013-Perfect Associative Learning with Spike-Timing-Dependent Plasticity

Author: Christian Albers, Maren Westkott, Klaus Pawelzik

Abstract: Recent extensions of the Perceptron as the Tempotron and the Chronotron suggest that this theoretical concept is highly relevant for understanding networks of spiking neurons in the brain. It is not known, however, how the computational power of the Perceptron might be accomplished by the plasticity mechanisms of real synapses. Here we prove that spike-timing-dependent plasticity having an anti-Hebbian form for excitatory synapses as well as a spike-timing-dependent plasticity of Hebbian shape for inhibitory synapses are sufficient for realizing the original Perceptron Learning Rule if these respective plasticity mechanisms act in concert with the hyperpolarisation of the post-synaptic neurons. We also show that with these simple yet biologically realistic dynamics Tempotrons and Chronotrons are learned. The proposed mechanism enables incremental associative learning from a continuous stream of patterns and might therefore underly the acquisition of long term memories in cortex. Our results underline that learning processes in realistic networks of spiking neurons depend crucially on the interactions of synaptic plasticity mechanisms with the dynamics of participating neurons.

2 0.81243527 77 nips-2013-Correlations strike back (again): the case of associative memory retrieval

Author: Cristina Savin, Peter Dayan, Mate Lengyel

Abstract: It has long been recognised that statistical dependencies in neuronal activity need to be taken into account when decoding stimuli encoded in a neural population. Less studied, though equally pernicious, is the need to take account of dependencies between synaptic weights when decoding patterns previously encoded in an auto-associative memory. We show that activity-dependent learning generically produces such correlations, and failing to take them into account in the dynamics of memory retrieval leads to catastrophically poor recall. We derive optimal network dynamics for recall in the face of synaptic correlations caused by a range of synaptic plasticity rules. These dynamics involve well-studied circuit motifs, such as forms of feedback inhibition and experimentally observed dendritic nonlinearities. We therefore show how addressing the problem of synaptic correlations leads to a novel functional account of key biophysical features of the neural substrate. 1

3 0.76295364 15 nips-2013-A memory frontier for complex synapses

Author: Subhaneil Lahiri, Surya Ganguli

Abstract: An incredible gulf separates theoretical models of synapses, often described solely by a single scalar value denoting the size of a postsynaptic potential, from the immense complexity of molecular signaling pathways underlying real synapses. To understand the functional contribution of such molecular complexity to learning and memory, it is essential to expand our theoretical conception of a synapse from a single scalar to an entire dynamical system with many internal molecular functional states. Moreover, theoretical considerations alone demand such an expansion; network models with scalar synapses assuming finite numbers of distinguishable synaptic strengths have strikingly limited memory capacity. This raises the fundamental question, how does synaptic complexity give rise to memory? To address this, we develop new mathematical theorems elucidating the relationship between the structural organization and memory properties of complex synapses that are themselves molecular networks. Moreover, in proving such theorems, we uncover a framework, based on first passage time theory, to impose an order on the internal states of complex synaptic models, thereby simplifying the relationship between synaptic structure and function. 1

4 0.60024744 49 nips-2013-Bayesian Inference and Online Experimental Design for Mapping Neural Microcircuits

Author: Ben Shababo, Brooks Paige, Ari Pakman, Liam Paninski

Abstract: With the advent of modern stimulation techniques in neuroscience, the opportunity arises to map neuron to neuron connectivity. In this work, we develop a method for efficiently inferring posterior distributions over synaptic strengths in neural microcircuits. The input to our algorithm is data from experiments in which action potentials from putative presynaptic neurons can be evoked while a subthreshold recording is made from a single postsynaptic neuron. We present a realistic statistical model which accounts for the main sources of variability in this experiment and allows for significant prior information about the connectivity and neuronal cell types to be incorporated if available. Due to the technical challenges and sparsity of these systems, it is important to focus experimental time stimulating the neurons whose synaptic strength is most ambiguous, therefore we also develop an online optimal design algorithm for choosing which neurons to stimulate at each trial. 1

5 0.5348438 262 nips-2013-Real-Time Inference for a Gamma Process Model of Neural Spiking

Author: David Carlson, Vinayak Rao, Joshua T. Vogelstein, Lawrence Carin

Abstract: With simultaneous measurements from ever increasing populations of neurons, there is a growing need for sophisticated tools to recover signals from individual neurons. In electrophysiology experiments, this classically proceeds in a two-step process: (i) threshold the waveforms to detect putative spikes and (ii) cluster the waveforms into single units (neurons). We extend previous Bayesian nonparametric models of neural spiking to jointly detect and cluster neurons using a Gamma process model. Importantly, we develop an online approximate inference scheme enabling real-time analysis, with performance exceeding the previous state-of-theart. Via exploratory data analysis—using data with partial ground truth as well as two novel data sets—we find several features of our model collectively contribute to our improved performance including: (i) accounting for colored noise, (ii) detecting overlapping spikes, (iii) tracking waveform dynamics, and (iv) using multiple channels. We hope to enable novel experiments simultaneously measuring many thousands of neurons and possibly adapting stimuli dynamically to probe ever deeper into the mysteries of the brain. 1

6 0.50200313 121 nips-2013-Firing rate predictions in optimal balanced networks

7 0.4826583 51 nips-2013-Bayesian entropy estimation for binary spike train data using parametric prior knowledge

8 0.46260357 210 nips-2013-Noise-Enhanced Associative Memories

9 0.45753413 61 nips-2013-Capacity of strong attractor patterns to model behavioural and cognitive prototypes

10 0.4387525 141 nips-2013-Inferring neural population dynamics from multiple partial recordings of the same neural circuit

11 0.43719813 6 nips-2013-A Determinantal Point Process Latent Variable Model for Inhibition in Neural Spiking Data

12 0.42396322 286 nips-2013-Robust learning of low-dimensional dynamics from large neural ensembles

13 0.41643471 341 nips-2013-Universal models for binary spike patterns using centered Dirichlet processes

14 0.39855328 86 nips-2013-Demixing odors - fast inference in olfaction

15 0.38963041 205 nips-2013-Multisensory Encoding, Decoding, and Identification

16 0.37922704 308 nips-2013-Spike train entropy-rate estimation using hierarchical Dirichlet process priors

17 0.35222268 267 nips-2013-Recurrent networks of coupled Winner-Take-All oscillators for solving constraint satisfaction problems

18 0.32907936 305 nips-2013-Spectral methods for neural characterization using generalized quadratic models

19 0.31779143 173 nips-2013-Least Informative Dimensions

20 0.30631012 304 nips-2013-Sparse nonnegative deconvolution for compressive calcium imaging: algorithms and phase transitions


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(0, 0.385), (2, 0.017), (16, 0.041), (33, 0.084), (34, 0.065), (41, 0.024), (49, 0.094), (56, 0.051), (70, 0.068), (85, 0.016), (89, 0.028), (93, 0.029)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.77425987 246 nips-2013-Perfect Associative Learning with Spike-Timing-Dependent Plasticity

Author: Christian Albers, Maren Westkott, Klaus Pawelzik

Abstract: Recent extensions of the Perceptron as the Tempotron and the Chronotron suggest that this theoretical concept is highly relevant for understanding networks of spiking neurons in the brain. It is not known, however, how the computational power of the Perceptron might be accomplished by the plasticity mechanisms of real synapses. Here we prove that spike-timing-dependent plasticity having an anti-Hebbian form for excitatory synapses as well as a spike-timing-dependent plasticity of Hebbian shape for inhibitory synapses are sufficient for realizing the original Perceptron Learning Rule if these respective plasticity mechanisms act in concert with the hyperpolarisation of the post-synaptic neurons. We also show that with these simple yet biologically realistic dynamics Tempotrons and Chronotrons are learned. The proposed mechanism enables incremental associative learning from a continuous stream of patterns and might therefore underly the acquisition of long term memories in cortex. Our results underline that learning processes in realistic networks of spiking neurons depend crucially on the interactions of synaptic plasticity mechanisms with the dynamics of participating neurons.

2 0.49352387 183 nips-2013-Mapping paradigm ontologies to and from the brain

Author: Yannick Schwartz, Bertrand Thirion, Gael Varoquaux

Abstract: Imaging neuroscience links brain activation maps to behavior and cognition via correlational studies. Due to the nature of the individual experiments, based on eliciting neural response from a small number of stimuli, this link is incomplete, and unidirectional from the causal point of view. To come to conclusions on the function implied by the activation of brain regions, it is necessary to combine a wide exploration of the various brain functions and some inversion of the statistical inference. Here we introduce a methodology for accumulating knowledge towards a bidirectional link between observed brain activity and the corresponding function. We rely on a large corpus of imaging studies and a predictive engine. Technically, the challenges are to find commonality between the studies without denaturing the richness of the corpus. The key elements that we contribute are labeling the tasks performed with a cognitive ontology, and modeling the long tail of rare paradigms in the corpus. To our knowledge, our approach is the first demonstration of predicting the cognitive content of completely new brain images. To that end, we propose a method that predicts the experimental paradigms across different studies. 1

3 0.46166831 161 nips-2013-Learning Stochastic Inverses

Author: Andreas Stuhlmüller, Jacob Taylor, Noah Goodman

Abstract: We describe a class of algorithms for amortized inference in Bayesian networks. In this setting, we invest computation upfront to support rapid online inference for a wide range of queries. Our approach is based on learning an inverse factorization of a model’s joint distribution: a factorization that turns observations into root nodes. Our algorithms accumulate information to estimate the local conditional distributions that constitute such a factorization. These stochastic inverses can be used to invert each of the computation steps leading to an observation, sampling backwards in order to quickly find a likely explanation. We show that estimated inverses converge asymptotically in number of (prior or posterior) training samples. To make use of inverses before convergence, we describe the Inverse MCMC algorithm, which uses stochastic inverses to make block proposals for a Metropolis-Hastings sampler. We explore the efficiency of this sampler for a variety of parameter regimes and Bayes nets. 1

4 0.39274096 121 nips-2013-Firing rate predictions in optimal balanced networks

Author: David G. Barrett, Sophie Denève, Christian K. Machens

Abstract: How are firing rates in a spiking network related to neural input, connectivity and network function? This is an important problem because firing rates are a key measure of network activity, in both the study of neural computation and neural network dynamics. However, it is a difficult problem, because the spiking mechanism of individual neurons is highly non-linear, and these individual neurons interact strongly through connectivity. We develop a new technique for calculating firing rates in optimal balanced networks. These are particularly interesting networks because they provide an optimal spike-based signal representation while producing cortex-like spiking activity through a dynamic balance of excitation and inhibition. We can calculate firing rates by treating balanced network dynamics as an algorithm for optimising signal representation. We identify this algorithm and then calculate firing rates by finding the solution to the algorithm. Our firing rate calculation relates network firing rates directly to network input, connectivity and function. This allows us to explain the function and underlying mechanism of tuning curves in a variety of systems. 1

5 0.39041874 77 nips-2013-Correlations strike back (again): the case of associative memory retrieval

Author: Cristina Savin, Peter Dayan, Mate Lengyel

Abstract: It has long been recognised that statistical dependencies in neuronal activity need to be taken into account when decoding stimuli encoded in a neural population. Less studied, though equally pernicious, is the need to take account of dependencies between synaptic weights when decoding patterns previously encoded in an auto-associative memory. We show that activity-dependent learning generically produces such correlations, and failing to take them into account in the dynamics of memory retrieval leads to catastrophically poor recall. We derive optimal network dynamics for recall in the face of synaptic correlations caused by a range of synaptic plasticity rules. These dynamics involve well-studied circuit motifs, such as forms of feedback inhibition and experimentally observed dendritic nonlinearities. We therefore show how addressing the problem of synaptic correlations leads to a novel functional account of key biophysical features of the neural substrate. 1

6 0.38772967 6 nips-2013-A Determinantal Point Process Latent Variable Model for Inhibition in Neural Spiking Data

7 0.38408819 266 nips-2013-Recurrent linear models of simultaneously-recorded neural populations

8 0.37828282 274 nips-2013-Relevance Topic Model for Unstructured Social Group Activity Recognition

9 0.37810242 221 nips-2013-On the Expressive Power of Restricted Boltzmann Machines

10 0.37802929 141 nips-2013-Inferring neural population dynamics from multiple partial recordings of the same neural circuit

11 0.37134081 131 nips-2013-Geometric optimisation on positive definite matrices for elliptically contoured distributions

12 0.37099424 303 nips-2013-Sparse Overlapping Sets Lasso for Multitask Learning and its Application to fMRI Analysis

13 0.37061051 323 nips-2013-Synthesizing Robust Plans under Incomplete Domain Models

14 0.36910909 70 nips-2013-Contrastive Learning Using Spectral Methods

15 0.36869636 262 nips-2013-Real-Time Inference for a Gamma Process Model of Neural Spiking

16 0.36849335 157 nips-2013-Learning Multi-level Sparse Representations

17 0.36500412 345 nips-2013-Variance Reduction for Stochastic Gradient Optimization

18 0.36259434 64 nips-2013-Compete to Compute

19 0.35919738 22 nips-2013-Action is in the Eye of the Beholder: Eye-gaze Driven Model for Spatio-Temporal Action Localization

20 0.35853049 16 nips-2013-A message-passing algorithm for multi-agent trajectory planning