nips nips2000 nips2000-129 knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Gal Chechik, Naftali Tishby
Abstract: The paradigm of Hebbian learning has recently received a novel interpretation with the discovery of synaptic plasticity that depends on the relative timing of pre and post synaptic spikes. This paper derives a temporally dependent learning rule from the basic principle of mutual information maximization and studies its relation to the experimentally observed plasticity. We find that a supervised spike-dependent learning rule sharing similar structure with the experimentally observed plasticity increases mutual information to a stable near optimal level. Moreover, the analysis reveals how the temporal structure of time-dependent learning rules is determined by the temporal filter applied by neurons over their inputs. These results suggest experimental predictions as to the dependency of the learning rule on neuronal biophysical parameters.
Reference: text
sentIndex sentText sentNum sentScore
1 The paradigm of Hebbian learning has recently received a novel interpretation with the discovery of synaptic plasticity that depends on the relative timing of pre and post synaptic spikes. [sent-4, score-1.037]
2 This paper derives a temporally dependent learning rule from the basic principle of mutual information maximization and studies its relation to the experimentally observed plasticity. [sent-5, score-1.127]
3 We find that a supervised spike-dependent learning rule sharing similar structure with the experimentally observed plasticity increases mutual information to a stable near optimal level. [sent-6, score-0.942]
4 Moreover, the analysis reveals how the temporal structure of time-dependent learning rules is determined by the temporal filter applied by neurons over their inputs. [sent-7, score-0.492]
5 The magnitude of these synaptic changes decays roughly exponentially as a function of the time difference between pre- and postsynaptic spikes, with a time constant of a few tens of milliseconds (results vary between studies, especially with regard to the synaptic depression component). [sent-10, score-1.35]
6 What could be the computational role of this delicate type of plasticity, sometimes termed spike-timing dependent plasticity (STDP)? [sent-13, score-0.146]
7 Importantly, STDP embodies an inherent competition between incoming inputs, and was shown to result in normalization of total incoming synaptic strength [7], maintain the irregularity of neuronal firing [8, 9], [sent-15, score-0.543]
8 and lead to the emergence of synchronous subpopulation firing in recurrent networks [10]. [sent-16, score-0.134]
9 The dynamics of synaptic efficacies under the operation of STDP strongly depend on whether STDP is implemented additively (independent of the baseline synaptic value) or multiplicatively (where the change is proportional to the synaptic efficacy) [13]. [sent-18, score-1.063]
10 To derive our learning rule, we consider the principle of mutual information maximization. [sent-20, score-0.469]
11 This idea, known as the Infomax principle [14], states that the goal of a neural network's learning procedure is to maximize the mutual information between its output and input. [sent-21, score-0.514]
12 The current paper applies Infomax to a leaky integrator neuron with spiking inputs. [sent-22, score-0.291]
13 The derivation suggests computational insights into the dependence of the temporal characteristics of STDP on biophysical parameters and shows that STDP may serve to maximize mutual information in a network of spiking neurons. [sent-23, score-0.652]
14 2 The Model We study a network with N input neurons S_1 ... [sent-24, score-0.214]
15 S_N firing spike trains, and a single output (target) neuron Y. [sent-26, score-0.431]
16 The filter F may be used to capture general synaptic transfer functions and voltage decay effects, but is set here, as an example, to an exponential filter F_τ(x) = exp(-x/τ). [sent-29, score-0.563]
17 The learning goal is to set the synaptic weights W such that M + 1 uncorrelated patterns of input activity ξ^η (η = 0 ... M) may be discriminated by observing the output Y. [sent-30, score-0.636]
18 Each pattern determines the firing rates of the input neurons, thus S is a noisy realization of ξ due to the stochasticity of the point process. [sent-33, score-0.26]
19 The input patterns are presented for periods of length T (on the order of tens of milliseconds). [sent-34, score-0.223]
20 At each period, a pattern ξ^η is randomly chosen for presentation with probability q_η, where most of the patterns are rare. [sent-35, score-0.509]
21 It should be stressed that in our model information is coded in the non-stationary rates that underlie the input spike trains. [sent-37, score-0.411]
22 As these rates are not observable, any learning must depend on the observable input spikes that realize those underlying rates. [sent-38, score-0.441]
23 3 Mutual Information Maximization Let us focus on a single presentation period (omitting the notation of t), and look at the value of Y at the end of this period, Y = Σ_{i=1}^{N} W_i X_i, with X_i = ∫_{-T}^{0} e^{t'/τ} S_i(t') dt'. [sent-39, score-0.192]
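As a concrete illustration of this readout (not code from the paper), the sketch below computes a discretized X_i and Y for one presentation period; the 1 ms bin size, the 20 ms time constant τ and all function names are our own assumptions.

```python
import numpy as np

def filtered_inputs(spikes, dt=1e-3, tau=20e-3):
    """X_i at the end of a presentation period of length T = n_bins * dt.

    `spikes` is an (N, n_bins) array of 0/1 spike indicators. A spike train is a
    sum of delta functions, so the integral of the exponential filter against it
    reduces to a sum of kernel values at the spike times."""
    n_bins = spikes.shape[1]
    t = np.arange(n_bins) * dt             # bin times within the period
    T = n_bins * dt
    kernel = np.exp(-(T - t) / tau)        # spikes near the end of the period count most
    return spikes @ kernel                 # X_i = sum over spikes of exp(-(T - t_spike)/tau)

def output_y(weights, spikes, dt=1e-3, tau=20e-3):
    """Leaky-integrator readout Y = sum_i W_i X_i at the end of the period."""
    return weights @ filtered_inputs(spikes, dt, tau)
```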
24 This mutual information measures how easy it is to decide which input pattern η was presented to the network by observing the network's output Y. [sent-44, score-0.589]
25 To calculate the conditional entropy h(Y|η) we use the assumption that input neurons fire independently and their number is large, thus the input of the target neuron when the network is presented with the pattern ξ^η is normally distributed, f(Y|η) = N(μ_η, σ_η). [sent-45, score-0.648]
26 The brackets denote averaging over the possible realizations of the inputs X^η when the network is presented with the pattern ξ^η. [sent-48, score-0.209]
27 To calculate the entropy of Y we note that f(Y) is a mixture of Gaussians, each resulting from the presentation of an input pattern, and use an additional simplifying assumption. [sent-49, score-0.336]
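Under this Gaussian-mixture approximation, the mutual information can be evaluated numerically; the following minimal sketch is our own illustration (the grid resolution and function name are assumptions, not from the paper).

```python
import numpy as np

def mutual_information(q, mu, sigma, n_grid=4000):
    """I(Y; eta) in nats for Y|eta ~ N(mu_eta, sigma_eta^2) with P(eta) = q_eta.

    h(Y|eta) is exact for Gaussians; h(Y) is obtained by numerically integrating
    the Gaussian-mixture density f(Y) on a grid."""
    q, mu, sigma = map(np.asarray, (q, mu, sigma))
    h_cond = np.sum(q * 0.5 * np.log(2.0 * np.pi * np.e * sigma ** 2))
    y = np.linspace((mu - 6 * sigma).min(), (mu + 6 * sigma).max(), n_grid)
    # mixture density f(Y) = sum_eta q_eta * N(y; mu_eta, sigma_eta)
    f = np.sum(q[:, None]
               * np.exp(-0.5 * ((y - mu[:, None]) / sigma[:, None]) ** 2)
               / (sigma[:, None] * np.sqrt(2.0 * np.pi)), axis=0)
    h_marg = -np.sum(f * np.log(f + 1e-300)) * (y[1] - y[0])   # Riemann sum for h(Y)
    return h_marg - h_cond
```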
28 Differentiating the mutual information with respect to W_i, we obtain ∂I(Y;η)/∂W_i as a sum over the patterns η = 1 ... M of terms weighted by q_η and involving Cov(Y, X_i^η). [sent-52, score-0.359]
29 where E(X_i^η) is the expected value of X_i^η as averaged over presentations of the ξ^η pattern. [sent-62, score-0.157]
30 The derived gradient may be used for a gradient ascent learning rule by repeatedly calculating the distribution moments μ_η and σ_η. [sent-64, score-0.484]
31 This learning rule climbs along the gradient and is bound to converge to a local maximum of the mutual information. [sent-67, score-0.679]
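A minimal sketch of such a gradient ascent loop is given below; it reuses mutual_information() from the previous sketch, computes the moments analytically for independent Poisson inputs, and, as a simplification of our own, uses a finite-difference gradient rather than the analytical expression above. The learning rate, step count and all names are assumptions.

```python
import numpy as np

def moments(W, rates, dt=1e-3, tau=20e-3, T=50e-3):
    """Mean and std of Y under each rate pattern, assuming independent Poisson inputs.
    `rates` is an (M+1, N) array of firing rates in Hz, one row per pattern xi^eta."""
    t = np.arange(0.0, T, dt)
    k = np.exp(-(T - t) / tau)
    k1 = np.sum(k) * dt                        # integral of the kernel
    k2 = np.sum(k ** 2) * dt                   # integral of the squared kernel
    mu = (rates @ W) * k1                      # E(Y|eta) = sum_i W_i * rate_i * k1
    var = (rates * W ** 2).sum(axis=1) * k2    # Poisson variance of the filtered counts
    return mu, np.sqrt(var)

def gradient_ascent(W, q, rates, lr=0.05, steps=500, eps=1e-4, nonneg=False):
    """Climb the mutual information surface by finite-difference gradient ascent,
    re-estimating the moments (mu_eta, sigma_eta) from the current weights at every
    step; start from small positive weights so the variances are nonzero."""
    W = W.copy()
    for _ in range(steps):
        grad = np.zeros_like(W)
        for i in range(len(W)):
            Wp, Wm = W.copy(), W.copy()
            Wp[i] += eps
            Wm[i] -= eps
            grad[i] = (mutual_information(q, *moments(Wp, rates))
                       - mutual_information(q, *moments(Wm, rates))) / (2 * eps)
        W = W + lr * grad
        if nonneg:
            W = np.maximum(W, 0.0)   # optional: excitatory-only weights, as in Section 4
    return W
```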
32 Figure 1A plots the mutual information during the operation of the learning rule, showing that the network indeed reaches a (possibly local) mutual information maximum. [sent-68, score-0.798]
33 Figure 1B depicts the changes in output distribution during learning, showing that it splits into two segregated bumps: one that corresponds to the ξ^0 pattern and another that corresponds to the rest of the patterns. [sent-69, score-0.261]
34 4 Learning In A Biological System Aiming to obtain a spike-dependent biologically feasible learning rule that maximizes mutual information, we now turn to approximate the analytical rule derived above by a rule that can be implemented in biology. [sent-70, score-1.343]
35 Since information is believed to be coded in the activity of excitatory neurons, we limit the weights W to positive values. [sent-73, score-0.179]
36 To avoid this problem we approximate the learning rule by replacing the pattern-dependent coefficients K^η with constants λ. [sent-75, score-0.441]
37 Figure 1: Mutual information and output distribution along learning with the gradient ascent learning rule (Eq. 3); panel A shows the mutual information against time steps, panel B the distribution of the output value Y. [sent-101, score-0.743]
38 All patterns were constructed by setting 10% of the input neurons to fire Poisson spike trains at 40 Hz, while the rest fire at 10 Hz. [sent-103, score-0.832]
39 Poisson spike trains were simulated by discretizing time into 1 millisecond bins. [sent-104, score-0.314]
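For reference, a hypothetical simulation of these inputs could look as follows; the rates and the 1 ms bins are taken from the caption, while the number of patterns, the period length and the function names are our own choices.

```python
import numpy as np

def make_patterns(n_inputs=100, n_patterns=5, high=40.0, low=10.0, frac=0.1, seed=0):
    """Rate patterns xi^eta: in each pattern a random 10% of the inputs fire at
    `high` Hz and the rest at `low` Hz (values quoted in the Figure 1 caption)."""
    rng = np.random.default_rng(seed)
    rates = np.full((n_patterns, n_inputs), low)
    for eta in range(n_patterns):
        active = rng.choice(n_inputs, int(frac * n_inputs), replace=False)
        rates[eta, active] = high
    return rates

def poisson_spikes(rates_hz, duration=50e-3, dt=1e-3, rng=None):
    """One presentation period: 0/1 spikes in 1 ms bins, one row per input neuron."""
    if rng is None:
        rng = np.random.default_rng()
    rates_hz = np.asarray(rates_hz, dtype=float)
    n_bins = int(round(duration / dt))
    p = np.clip(rates_hz[:, None] * dt, 0.0, 1.0)   # spike probability per bin
    return (rng.random((rates_hz.size, n_bins)) < p).astype(float)
```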
40 Outputs segregate into two distinct bumps: one corresponds to the presentation of the ξ^0 pattern and the other corresponds to the rest of the patterns. [sent-111, score-0.373]
41 Thirdly, summation over patterns embodies a 'batch' mode of learning, requiring very large memory to average over multiple presentations. [sent-112, score-0.174]
42 To implement an online learning rule, we replace summation over patterns by pattern-triggered learning. [sent-113, score-0.255]
43 One should note that the analytical derivation yielded that summation is performed over the rare patterns only (Eq. [sent-114, score-0.431]
44 3), thus pattern-triggered learning is naturally implemented by restricting learning to presentations of rare patterns. [sent-115, score-0.584]
45 Fourthly, the learning rule explicitly depends on E(X) and Cov(Y, X), which are not observables of the model. [sent-116, score-0.376]
46 We thus replace them by performing stochastic weighted averaging over spikes to yield a spike-dependent learning rule. [sent-117, score-0.367]
47 In the case of inhomogeneous Poisson spike trains where input neurons fire independently, the covariance terms obey Cov(Y, X_i) = W_i E_{τ/2}(X_i), where E_τ(X) = ∫ e^{-(t - t')/τ} E(S(t')) dt'. [sent-118, score-0.565]
48 The expectations E(X_i^η) may be simply estimated by weighted averaging of the observed spikes X_i that precede the learning moment. [sent-119, score-0.415]
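A sketch of how these quantities could be read off the spikes preceding a learning moment is given below; a single-presentation weighted sum stands in for the expectation, and the time constant and names are assumptions of ours.

```python
import numpy as np

def filtered_sums(spike_train, dt=1e-3, tau=20e-3):
    """Two exponentially weighted sums of the spikes preceding the learning moment:
    one with time constant tau (an estimate related to E(X_i)) and one with tau/2
    (which enters the covariance term Cov(Y, X_i) = W_i * E_{tau/2}(X_i) quoted above
    for independent Poisson inputs)."""
    spike_train = np.asarray(spike_train, dtype=float)
    n = spike_train.size
    t = np.arange(n) * dt
    T = n * dt                                          # the learning moment
    e_tau = np.sum(spike_train * np.exp(-(T - t) / tau))
    e_half = np.sum(spike_train * np.exp(-(T - t) / (tau / 2.0)))
    return e_tau, e_half
```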
49 Estimating E(X_i^0) is more difficult because, as stated above, learning should be triggered by the rare patterns only. [sent-120, score-0.534]
50 Thus, ξ^0 spikes should have an effect only when a rare pattern ξ^η is presented. [sent-121, score-0.48]
51 A possible solution is to use the fact that ξ^0 is highly frequent (and therefore spikes in the vicinity of a ξ^η presentation are with high probability ξ^0 spikes), and to average over spikes following a ξ^η presentation for background activity estimation. [sent-122, score-0.856]
52 These spikes can be temporally weighted in many ways: from the computational point of view it is beneficial to weight spikes uniformly along time, but this may require long "memory" and is biologically improbable. [sent-123, score-0.624]
53 Since such fluctuations strongly reduce the mutual information obtained by these rules, we conclude that pattern-triggered learning should be triggered by the rare pattern only. [sent-125, score-0.876]
54 activated only when one of the rare patterns (ξ^η, η = 1 ... [sent-126, score-0.273]
55 M) is presented (Eq. 4), where h_{1,2}(S(t')) denote the temporal weighting of ξ^0 spikes. [sent-128, score-0.168]
56 It should be noted that this learning rule uses rare pattern presentations as an external ("supervised") learning signal. [sent-129, score-0.856]
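Putting these approximations together, a pattern-triggered update could be sketched as follows. This is our own illustration of the structure described in the text (potentiation from spikes preceding the learning signal, weighted by a mix of F and F²; depression from background spikes following the presentation), not the paper's Eq. 4 itself; all constants, the 50/50 mix of F and F², and the omission of the sub-linear dependence on the baseline weight are placeholder choices.

```python
import numpy as np

def pattern_triggered_update(W, spikes_before, spikes_after,
                             dt=1e-3, tau=20e-3, a_pot=1.0, a_dep=1.0, lr=0.01):
    """One supervised, pattern-triggered weight update (a sketch, not the exact rule).

    spikes_before : (N, n1) spikes preceding the learning signal; they drive the
                    potentiation term, temporally weighted by a mix of F and F^2.
    spikes_after  : (N, n2) spikes following the presentation; they give a crude,
                    uniformly weighted estimate of the background activity that
                    drives the depression term."""
    n1 = spikes_before.shape[1]
    t1 = np.arange(n1) * dt
    T1 = n1 * dt
    f = np.exp(-(T1 - t1) / tau)
    pot = spikes_before @ (0.5 * f + 0.5 * f ** 2)   # potentiation weighted by F and F^2
    dep = spikes_after.mean(axis=1)                  # background estimate per synapse
    W_new = W + lr * (a_pot * pot - a_dep * dep)
    return np.maximum(W_new, 0.0)                    # excitatory weights stay positive
```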
57 The general form of this learning rule and its performance are discussed in the next section. [sent-130, score-0.376]
58 5 Analyzing The Biologically Feasible Rule 5.1 Comparing performance We have obtained a new spike-dependent learning rule that may be implemented in a biological system and that approximates an information maximization learning rule. [sent-132, score-0.716]
59 Does learning with the biologically feasible learning rule increase mutual information? [sent-134, score-0.869]
60 The curves in figure 2A compare the mutual information of the learning rule of Eq. 4 with that of the optimal rule. [sent-136, score-0.694]
61 Apparently, the approximated learning rule achieves fairly good performance compared to the optimal rule, and most of the reduction in performance is due to limiting weights to positive values. [sent-139, score-0.376]
62 5.2 Interpreting the learning rule structure The general form of the learning rule of Eq. 4 has several notable features. [sent-141, score-0.752]
63 First, synaptic potentiation is temporally weighted in a manner that is determined by the same filter F that the neuron applies over its inputs, but learning should apply an average of F and F², i.e. of ∫F(t - t')S(t')dt' and ∫F²(t - t')S(t')dt'. [sent-143, score-0.922]
64 The relative weighting of these two components was numerically estimated by simulating the optimal rule of Eq. 3. [sent-144, score-0.362]
65 Second, in our model synaptic depression is targeted at learning the underlying structure of background activity. [sent-146, score-0.679]
66 Our analysis does not restrict the temporal weighting of the depression curve. [sent-147, score-0.255]
67 The possible role of the postsynaptic spike is discussed in the following section. [sent-149, score-0.395]
68 6 Unsupervised Learning By now we have considered a learning scenario that used external information telling whether the presented pattern is the background pattern or not, to decide whether learning should take place. [sent-150, score-0.611]
69 When such a learning signal is missing, it is tempting to use the postsynaptic spike (signaling the presence of an interesting input pattern) as a learning signal. [sent-151, score-0.694]
70 This amounts to using the learning rule of Eq. 4, except this time learning is triggered by postsynaptic spikes instead of an external signal. [sent-153, score-0.753]
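A hypothetical sketch of this variant, reusing pattern_triggered_update() from the earlier sketch, simply gates the same update on a postsynaptic event; here, the output crossing a threshold stands in for a postsynaptic spike, and the threshold value is an assumption.

```python
def unsupervised_step(W, spikes_before, spikes_after, y_value, threshold, **kwargs):
    """Same update as pattern_triggered_update above, but triggered by a postsynaptic
    event (output exceeding a threshold) instead of an external supervised signal."""
    if y_value > threshold:
        return pattern_triggered_update(W, spikes_before, spikes_after, **kwargs)
    return W
```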
71 The resulting learning rule is similar to previous models of the experimentally observed STDP, as in [9, 13, 16]. [sent-154, score-0.531]
72 However, this mechanism will effectively serve learning only if the postsynaptic spikes co-occur with the presentation of a rare pattern. [sent-155, score-0.855]
73 Such co-occurrence may be achieved by supplying short learning signals in the presence of the interesting patterns. [sent-156, score-0.204]
74 This will induce learning such that later postsynaptic spikes will be triggered by the rare pattern presentation. [sent-159, score-0.921]
75 10% of the input neurons of ξ^η (η > 0) were set to fire at 40 Hz, while the rest fire at 5 Hz. [sent-179, score-0.468]
76 ξ^0-neurons fire at 8 Hz, yielding a similar average input as the ξ^η patterns. [sent-180, score-0.184]
77 The learning rule of Eq. 4 is shown by plotting ΔW as a function of the time difference between the learning signal time t and the input spike time t_spike. [sent-192, score-0.508]
78 The potentiation curve (solid line) is the sum of two exponentials with time constants τ and τ/2 (dashed lines). [sent-193, score-0.236]
79 The depression curve is not constrained by our derivation, thus several examples are shown (dot-dashed lines). [sent-194, score-0.158]
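For illustration only, the shape of such a learning window might be written as below; the potentiation side follows the two-exponential description above, while the depression side uses an exponential purely as an example, since the derivation does not constrain it. All amplitudes and the depression time constant are assumptions.

```python
import numpy as np

def learning_window(delta_t, tau=20e-3, a1=0.5, a2=0.5, a_dep=0.3, tau_dep=20e-3):
    """Delta W as a function of delta_t = t - t_spike (learning-signal time minus
    input-spike time), mimicking the curve described in the caption above.

    Potentiation (delta_t > 0): sum of two exponentials with time constants tau and
    tau/2.  Depression (delta_t < 0): exponential shape chosen purely as an example."""
    delta_t = np.asarray(delta_t, dtype=float)
    pot = a1 * np.exp(-delta_t / tau) + a2 * np.exp(-delta_t / (tau / 2.0))
    dep = -a_dep * np.exp(delta_t / tau_dep)
    return np.where(delta_t >= 0, pot, dep)
```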
80 7 Discussion In the framework of information maximization, we have derived a spike-dependent learning rule for a leaky integrator neuron. [sent-196, score-0.519]
81 This learning rule achieves near optimal mutual information and can in principle be implemented in biological neurons. [sent-197, score-0.85]
82 The analytical derivation of this rule allows us to obtain insight into the learning rules observed experimentally in various preparations. [sent-198, score-0.697]
83 The most fundamental result is that time-dependent learning stems from the time dependency of neuronal output on its inputs. [sent-199, score-0.237]
84 In our model this is embodied in the filter F which a neuron applies over its input spike trains. [sent-200, score-0.49]
85 This filter is determined by the biophysical parameters of the neuron, namely its membrane leak, synaptic transfer functions and dendritic arbor structure. [sent-201, score-0.599]
86 Our model thus yields direct experimental predictions for the way temporal characteristics of the potentiation learning curve are determined by the neuronal biophysical parameters. [sent-202, score-0.514]
87 Namely, cells with larger membrane time constants should exhibit longer synaptic potentiation time windows. [sent-203, score-0.63]
88 Interestingly, the time window observed for STDP potentiation indeed fits the time windows of an AMPA channel and is also in agreement with cortical membrane time constants, as predicted by the current analysis [4, 6]. [sent-204, score-0.361]
89 Several features of the theoretically derived rule may have similar functions in the experimentally observed rule: In our model synaptic weakening is targeted to learn the structure of the background activity. [sent-205, score-0.852]
90 Both synaptic depression and potentiation in our model should be triggered by rare pattern presentation to allow near-optimal mutual information. [sent-206, score-1.42]
91 In addition, synaptic changes should depend on the synaptic baseline value in a sub-linear manner. [sent-207, score-0.687]
92 While the learning rule presented in Equation 4 assumes independent firing of input neurons, our derivation actually holds for a wider class of inputs. [sent-209, score-0.727]
93 In the case of correlated inputs, however, the learning rule involves cross-synaptic terms, which may be difficult for biological neurons to compute. [sent-210, score-0.507]
94 As STDP is highly sensitive to synchronous inputs, it remains a most interesting question to investigate biologically feasible approximations to an Infomax rule for time-structured and synchronous inputs. [sent-211, score-0.42]
95 Asynchronous pre- and postsynaptic activity induces associative long-term depression in area CA1 of the rat hippocampus in vitro. [sent-224, score-0.453]
96 Regulation of synaptic efficacy by coincidence of postsynaptic APs and EPSPs. [sent-235, score-0.559]
97 Precise spike timing determines the direction and extent of synaptic modifications in cultured hippocampal neurons. [sent-251, score-0.622]
98 Temporally asymmetric Hebbian learning, spike timing and neural response variability. [sent-274, score-0.432]
99 Distributed synchrony of spiking neurons in a Hebbian cell assembly. [sent-293, score-0.305]
100 Intrinsic stabilization of output rates by spike-time dependent Hebbian learning. [sent-345, score-0.278]
wordName wordTfidf (topN-words)
[('synaptic', 0.326), ('stdp', 0.292), ('mutual', 0.267), ('rule', 0.259), ('spikes', 0.215), ('spike', 0.215), ('rare', 0.186), ('postsynaptic', 0.18), ('presentation', 0.157), ('triggered', 0.144), ('potentiation', 0.137), ('hebbian', 0.136), ('depression', 0.124), ('fire', 0.119), ('learning', 0.117), ('experimentally', 0.107), ('neurons', 0.104), ('neuron', 0.099), ('temporally', 0.097), ('plasticity', 0.093), ('patterns', 0.087), ('biophysical', 0.083), ('biological', 0.083), ('timing', 0.081), ('pattern', 0.079), ('filter', 0.076), ('neuronal', 0.075), ('derivation', 0.073), ('firing', 0.072), ('background', 0.071), ('temporal', 0.068), ('membrane', 0.065), ('spiking', 0.065), ('constants', 0.065), ('input', 0.065), ('weighting', 0.063), ('synchronous', 0.062), ('biologically', 0.062), ('trains', 0.062), ('rest', 0.061), ('external', 0.06), ('rules', 0.059), ('infomax', 0.058), ('efficacy', 0.053), ('kempter', 0.053), ('levy', 0.053), ('post', 0.053), ('dependent', 0.053), ('summation', 0.051), ('excitatory', 0.051), ('information', 0.051), ('maximization', 0.05), ('transfer', 0.049), ('observed', 0.048), ('inputs', 0.048), ('feasible', 0.047), ('leaky', 0.046), ('bumps', 0.046), ('efficacies', 0.046), ('cav', 0.046), ('gerstner', 0.046), ('integrator', 0.046), ('milliseconds', 0.046), ('network', 0.045), ('output', 0.045), ('rates', 0.044), ('studies', 0.044), ('poisson', 0.043), ('activity', 0.041), ('regard', 0.041), ('targeted', 0.041), ('pre', 0.041), ('numerically', 0.04), ('implemented', 0.039), ('associative', 0.038), ('presentations', 0.038), ('corresponds', 0.038), ('dt', 0.038), ('time', 0.037), ('presented', 0.037), ('gradient', 0.036), ('coded', 0.036), ('voltage', 0.036), ('ascent', 0.036), ('embodies', 0.036), ('rat', 0.036), ('applies', 0.035), ('period', 0.035), ('depend', 0.035), ('entropy', 0.035), ('neuroscience', 0.035), ('weighted', 0.035), ('analytical', 0.034), ('tens', 0.034), ('competition', 0.034), ('hippocampus', 0.034), ('principle', 0.034), ('curve', 0.034), ('wi', 0.033), ('fluctuations', 0.032)]
simIndex simValue paperId paperTitle
same-paper 1 0.99999934 129 nips-2000-Temporally Dependent Plasticity: An Information Theoretic Account
Author: Gal Chechik, Naftali Tishby
Abstract: The paradigm of Hebbian learning has recently received a novel interpretation with the discovery of synaptic plasticity that depends on the relative timing of pre and post synaptic spikes. This paper derives a temporally dependent learning rule from the basic principle of mutual information maximization and studies its relation to the experimentally observed plasticity. We find that a supervised spike-dependent learning rule sharing similar structure with the experimentally observed plasticity increases mutual information to a stable near optimal level. Moreover, the analysis reveals how the temporal structure of time-dependent learning rules is determined by the temporal filter applied by neurons over their inputs. These results suggest experimental predictions as to the dependency of the learning rule on neuronal biophysical parameters.
2 0.28530157 55 nips-2000-Finding the Key to a Synapse
Author: Thomas Natschläger, Wolfgang Maass
Abstract: Experimental data have shown that synapses are heterogeneous: different synapses respond with different sequences of amplitudes of postsynaptic responses to the same spike train. Neither the role of synaptic dynamics itself nor the role of the heterogeneity of synaptic dynamics for computations in neural circuits is well understood. We present in this article methods that make it feasible to compute for a given synapse with known synaptic parameters the spike train that is optimally fitted to the synapse, for example in the sense that it produces the largest sum of postsynaptic responses. To our surprise we find that most of these optimally fitted spike trains match common firing patterns of specific types of neurons that are discussed in the literature.
3 0.27912211 146 nips-2000-What Can a Single Neuron Compute?
Author: Blaise Agüera y Arcas, Adrienne L. Fairhall, William Bialek
Abstract: In this paper we formulate a description of the computation performed by a neuron as a combination of dimensional reduction and nonlinearity. We implement this description for the HodgkinHuxley model, identify the most relevant dimensions and find the nonlinearity. A two dimensional description already captures a significant fraction of the information that spikes carry about dynamic inputs. This description also shows that computation in the Hodgkin-Huxley model is more complex than a simple integrateand-fire or perceptron model. 1
4 0.24193795 104 nips-2000-Processing of Time Series by Neural Circuits with Biologically Realistic Synaptic Dynamics
Author: Thomas Natschläger, Wolfgang Maass, Eduardo D. Sontag, Anthony M. Zador
Abstract: Experimental data show that biological synapses behave quite differently from the symbolic synapses in common artificial neural network models. Biological synapses are dynamic, i.e., their
5 0.19089465 124 nips-2000-Spike-Timing-Dependent Learning for Oscillatory Networks
Author: Silvia Scarpetta, Zhaoping Li, John A. Hertz
Abstract: We apply to oscillatory networks a class of learning rules in which synaptic weights change proportional to pre- and post-synaptic activity, with a kernel A(r) measuring the effect for a postsynaptic spike a time r after the presynaptic one. The resulting synaptic matrices have an outer-product form in which the oscillating patterns are represented as complex vectors. In a simple model, the even part of A(r) enhances the resonant response to learned stimulus by reducing the effective damping, while the odd part determines the frequency of oscillation. We relate our model to the olfactory cortex and hippocampus and their presumed roles in forming associative memories and input representations. 1
6 0.18061791 67 nips-2000-Homeostasis in a Silicon Integrate and Fire Neuron
7 0.14976561 24 nips-2000-An Information Maximization Approach to Overcomplete and Recurrent Representations
8 0.14265162 88 nips-2000-Multiple Timescales of Adaptation in a Neural Code
9 0.13548362 141 nips-2000-Universality and Individuality in a Neural Code
10 0.10134345 40 nips-2000-Dendritic Compartmentalization Could Underlie Competition and Attentional Biasing of Simultaneous Visual Stimuli
11 0.10028227 147 nips-2000-Who Does What? A Novel Algorithm to Determine Function Localization
12 0.090024792 38 nips-2000-Data Clustering by Markovian Relaxation and the Information Bottleneck Method
13 0.087168574 45 nips-2000-Emergence of Movement Sensitive Neurons' Properties by Learning a Sparse Code for Natural Moving Images
14 0.086632349 100 nips-2000-Permitted and Forbidden Sets in Symmetric Threshold-Linear Networks
15 0.081630863 42 nips-2000-Divisive and Subtractive Mask Effects: Linking Psychophysics and Biophysics
16 0.080844253 81 nips-2000-Learning Winner-take-all Competition Between Groups of Neurons in Lateral Inhibitory Networks
17 0.074648678 34 nips-2000-Competition and Arbors in Ocular Dominance
18 0.07177192 89 nips-2000-Natural Sound Statistics and Divisive Normalization in the Auditory System
19 0.071126767 49 nips-2000-Explaining Away in Weight Space
20 0.068894468 101 nips-2000-Place Cells and Spatial Navigation Based on 2D Visual Feature Extraction, Path Integration, and Reinforcement Learning
topicId topicWeight
[(0, 0.242), (1, -0.279), (2, -0.36), (3, -0.063), (4, 0.078), (5, -0.02), (6, -0.079), (7, 0.192), (8, -0.097), (9, 0.036), (10, -0.037), (11, -0.154), (12, 0.049), (13, -0.03), (14, 0.062), (15, -0.021), (16, -0.069), (17, 0.043), (18, 0.133), (19, 0.091), (20, -0.116), (21, -0.105), (22, -0.081), (23, 0.028), (24, -0.035), (25, 0.009), (26, 0.111), (27, -0.022), (28, -0.008), (29, 0.069), (30, 0.043), (31, 0.071), (32, 0.066), (33, 0.071), (34, 0.018), (35, 0.08), (36, 0.02), (37, 0.106), (38, 0.016), (39, 0.006), (40, 0.023), (41, -0.078), (42, -0.034), (43, -0.05), (44, 0.053), (45, -0.033), (46, -0.057), (47, -0.038), (48, -0.043), (49, 0.004)]
simIndex simValue paperId paperTitle
same-paper 1 0.96501845 129 nips-2000-Temporally Dependent Plasticity: An Information Theoretic Account
Author: Gal Chechik, Naftali Tishby
Abstract: The paradigm of Hebbian learning has recently received a novel interpretation with the discovery of synaptic plasticity that depends on the relative timing of pre and post synaptic spikes. This paper derives a temporally dependent learning rule from the basic principle of mutual information maximization and studies its relation to the experimentally observed plasticity. We find that a supervised spike-dependent learning rule sharing similar structure with the experimentally observed plasticity increases mutual information to a stable near optimal level. Moreover, the analysis reveals how the temporal structure of time-dependent learning rules is determined by the temporal filter applied by neurons over their inputs. These results suggest experimental predictions as to the dependency of the learning rule on neuronal biophysical parameters.
2 0.84781128 55 nips-2000-Finding the Key to a Synapse
Author: Thomas Natschläger, Wolfgang Maass
Abstract: Experimental data have shown that synapses are heterogeneous: different synapses respond with different sequences of amplitudes of postsynaptic responses to the same spike train. Neither the role of synaptic dynamics itself nor the role of the heterogeneity of synaptic dynamics for computations in neural circuits is well understood. We present in this article methods that make it feasible to compute for a given synapse with known synaptic parameters the spike train that is optimally fitted to the synapse, for example in the sense that it produces the largest sum of postsynaptic responses. To our surprise we find that most of these optimally fitted spike trains match common firing patterns of specific types of neurons that are discussed in the literature.
3 0.71443707 104 nips-2000-Processing of Time Series by Neural Circuits with Biologically Realistic Synaptic Dynamics
Author: Thomas Natschläger, Wolfgang Maass, Eduardo D. Sontag, Anthony M. Zador
Abstract: Experimental data show that biological synapses behave quite differently from the symbolic synapses in common artificial neural network models. Biological synapses are dynamic, i.e., their
4 0.6569581 146 nips-2000-What Can a Single Neuron Compute?
Author: Blaise Agüera y Arcas, Adrienne L. Fairhall, William Bialek
Abstract: In this paper we formulate a description of the computation performed by a neuron as a combination of dimensional reduction and nonlinearity. We implement this description for the HodgkinHuxley model, identify the most relevant dimensions and find the nonlinearity. A two dimensional description already captures a significant fraction of the information that spikes carry about dynamic inputs. This description also shows that computation in the Hodgkin-Huxley model is more complex than a simple integrateand-fire or perceptron model. 1
5 0.63261592 124 nips-2000-Spike-Timing-Dependent Learning for Oscillatory Networks
Author: Silvia Scarpetta, Zhaoping Li, John A. Hertz
Abstract: We apply to oscillatory networks a class of learning rules in which synaptic weights change proportional to pre- and post-synaptic activity, with a kernel A(r) measuring the effect for a postsynaptic spike a time r after the presynaptic one. The resulting synaptic matrices have an outer-product form in which the oscillating patterns are represented as complex vectors. In a simple model, the even part of A(r) enhances the resonant response to learned stimulus by reducing the effective damping, while the odd part determines the frequency of oscillation. We relate our model to the olfactory cortex and hippocampus and their presumed roles in forming associative memories and input representations. 1
6 0.47936165 67 nips-2000-Homeostasis in a Silicon Integrate and Fire Neuron
7 0.47504395 24 nips-2000-An Information Maximization Approach to Overcomplete and Recurrent Representations
8 0.46945399 147 nips-2000-Who Does What? A Novel Algorithm to Determine Function Localization
9 0.4022598 88 nips-2000-Multiple Timescales of Adaptation in a Neural Code
10 0.39224303 141 nips-2000-Universality and Individuality in a Neural Code
11 0.33830312 66 nips-2000-Hippocampally-Dependent Consolidation in a Hierarchical Model of Neocortex
12 0.32512704 40 nips-2000-Dendritic Compartmentalization Could Underlie Competition and Attentional Biasing of Simultaneous Visual Stimuli
13 0.27390486 11 nips-2000-A Silicon Primitive for Competitive Learning
14 0.27341586 34 nips-2000-Competition and Arbors in Ocular Dominance
15 0.27036268 73 nips-2000-Kernel-Based Reinforcement Learning in Average-Cost Problems: An Application to Optimal Portfolio Choice
16 0.26190761 38 nips-2000-Data Clustering by Markovian Relaxation and the Information Bottleneck Method
17 0.25969365 49 nips-2000-Explaining Away in Weight Space
18 0.2531088 45 nips-2000-Emergence of Movement Sensitive Neurons' Properties by Learning a Sparse Code for Natural Moving Images
19 0.25203755 102 nips-2000-Position Variance, Recurrence and Perceptual Learning
20 0.24804717 64 nips-2000-High-temperature Expansions for Learning Models of Nonnegative Data
topicId topicWeight
[(10, 0.029), (17, 0.111), (18, 0.272), (26, 0.015), (32, 0.013), (33, 0.034), (42, 0.023), (54, 0.013), (55, 0.023), (62, 0.034), (65, 0.013), (67, 0.118), (75, 0.02), (76, 0.045), (79, 0.013), (81, 0.076), (90, 0.018), (91, 0.013), (93, 0.02), (97, 0.014)]
simIndex simValue paperId paperTitle
same-paper 1 0.86285162 129 nips-2000-Temporally Dependent Plasticity: An Information Theoretic Account
Author: Gal Chechik, Naftali Tishby
Abstract: The paradigm of Hebbian learning has recently received a novel interpretation with the discovery of synaptic plasticity that depends on the relative timing of pre and post synaptic spikes. This paper derives a temporally dependent learning rule from the basic principle of mutual information maximization and studies its relation to the experimentally observed plasticity. We find that a supervised spike-dependent learning rule sharing similar structure with the experimentally observed plasticity increases mutual information to a stable near optimal level. Moreover, the analysis reveals how the temporal structure of time-dependent learning rules is determined by the temporal filter applied by neurons over their inputs. These results suggest experimental predictions as to the dependency of the learning rule on neuronal biophysical parameters.
2 0.62157899 104 nips-2000-Processing of Time Series by Neural Circuits with Biologically Realistic Synaptic Dynamics
Author: Thomas Natschläger, Wolfgang Maass, Eduardo D. Sontag, Anthony M. Zador
Abstract: Experimental data show that biological synapses behave quite differently from the symbolic synapses in common artificial neural network models. Biological synapses are dynamic, i.e., their
3 0.54827178 55 nips-2000-Finding the Key to a Synapse
Author: Thomas Natschläger, Wolfgang Maass
Abstract: Experimental data have shown that synapses are heterogeneous: different synapses respond with different sequences of amplitudes of postsynaptic responses to the same spike train. Neither the role of synaptic dynamics itself nor the role of the heterogeneity of synaptic dynamics for computations in neural circuits is well understood. We present in this article methods that make it feasible to compute for a given synapse with known synaptic parameters the spike train that is optimally fitted to the synapse, for example in the sense that it produces the largest sum of postsynaptic responses. To our surprise we find that most of these optimally fitted spike trains match common firing patterns of specific types of neurons that are discussed in the literature.
4 0.54664218 146 nips-2000-What Can a Single Neuron Compute?
Author: Blaise Agüera y Arcas, Adrienne L. Fairhall, William Bialek
Abstract: In this paper we formulate a description of the computation performed by a neuron as a combination of dimensional reduction and nonlinearity. We implement this description for the HodgkinHuxley model, identify the most relevant dimensions and find the nonlinearity. A two dimensional description already captures a significant fraction of the information that spikes carry about dynamic inputs. This description also shows that computation in the Hodgkin-Huxley model is more complex than a simple integrateand-fire or perceptron model. 1
5 0.53554809 134 nips-2000-The Kernel Trick for Distances
Author: Bernhard Schölkopf
Abstract: A method is described which, like the kernel trick in support vector machines (SVMs), lets us generalize distance-based algorithms to operate in feature spaces, usually nonlinearly related to the input space. This is done by identifying a class of kernels which can be represented as norm-based distances in Hilbert spaces. It turns out that common kernel algorithms, such as SVMs and kernel PCA, are actually really distance based algorithms and can be run with that class of kernels, too. As well as providing a useful new insight into how these algorithms work, the present work can form the basis for conceiving new algorithms.
6 0.53230691 124 nips-2000-Spike-Timing-Dependent Learning for Oscillatory Networks
7 0.50441402 49 nips-2000-Explaining Away in Weight Space
8 0.49822143 79 nips-2000-Learning Segmentation by Random Walks
9 0.49773404 122 nips-2000-Sparse Representation for Gaussian Process Models
10 0.49525923 102 nips-2000-Position Variance, Recurrence and Perceptual Learning
11 0.49391884 24 nips-2000-An Information Maximization Approach to Overcomplete and Recurrent Representations
12 0.48882055 106 nips-2000-Propagation Algorithms for Variational Bayesian Learning
13 0.48837477 46 nips-2000-Ensemble Learning and Linear Response Theory for ICA
14 0.48564234 37 nips-2000-Convergence of Large Margin Separable Linear Classification
15 0.48452076 125 nips-2000-Stability and Noise in Biochemical Switches
16 0.48424268 20 nips-2000-Algebraic Information Geometry for Learning Machines with Singularities
17 0.48346826 74 nips-2000-Kernel Expansions with Unlabeled Examples
18 0.4830932 98 nips-2000-Partially Observable SDE Models for Image Sequence Recognition Tasks
19 0.48123738 107 nips-2000-Rate-coded Restricted Boltzmann Machines for Face Recognition
20 0.48019499 69 nips-2000-Incorporating Second-Order Functional Knowledge for Better Option Pricing