nips nips2001 nips2001-197 knowledge-graph by maker-knowledge-mining

197 nips-2001-Why Neuronal Dynamics Should Control Synaptic Learning Rules


Source: pdf

Author: Jesper Tegnér, Ádám Kepecs

Abstract: Hebbian learning rules are generally formulated as static rules. Under changing conditions (e.g. neuromodulation, input statistics) most rules are sensitive to parameters. In particular, recent work has focused on two different formulations of spike-timing-dependent plasticity rules. Additive STDP [1] is remarkably versatile but also very fragile, whereas multiplicative STDP [2, 3] is more robust but lacks attractive features such as synaptic competition and rate stabilization. Here we address the problem of robustness in the additive STDP rule. We derive an adaptive control scheme, where the learning function is under fast dynamic control by postsynaptic activity to stabilize learning under a variety of conditions. Such a control scheme can be implemented using known biophysical mechanisms of synapses. We show that this adaptive rule makes the additive STDP more robust. Finally, we give an example of how metaplasticity of the adaptive rule can be used to guide STDP into different types of learning regimes. 1

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Why neuronal dynamics should control synaptic learning rules Jesper Tegner Stockholm Bioinformatics Center Dept. [sent-1, score-0.712]

2 Additive STDP [1] is remarkably versatile but also very fragile, whereas multiplicative STDP [2, 3] is more robust but lacks attractive features such as synaptic competition and rate stabilization. [sent-10, score-0.64]

3 We derive an adaptive control scheme, where the learning function is under fast dynamic control by postsynaptic activity to stabilize learning under a variety of conditions. [sent-12, score-0.683]

4 We show that this adaptive rule makes the additive STDP more robust. [sent-14, score-0.298]

5 Finally, we give an example of how metaplasticity of the adaptive rule can be used to guide STDP into different types of learning regimes. [sent-15, score-0.461]

6 1 Introduction Hebbian learning rules are widely used to model synaptic modification shaping the functional connectivity of neural networks [4, 5]. [sent-16, score-0.46]

7 Recent experiments revealed a mode of synaptic plasticity that provides new possibilities and constraints for synaptic learning rules [7, 8, 9]. [sent-18, score-0.979]

8 It has been found that synapses are strengthened if a presynaptic spike precedes a postsynaptic spike within a short (≈ 20 ms) time window, while the reverse spike order leads to synaptic weakening. [sent-19, score-1.083]
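
As a concrete illustration of this timing dependence, the sketch below implements the pair-based exponential STDP window commonly used with rules of this type; the amplitudes and time constants (A_plus, A_minus, tau_plus, tau_minus) are illustrative placeholders rather than values from the paper, and the ratio A_minus/A_plus plays the role of the depression/potentiation ratio α discussed later.

```python
import numpy as np

def stdp_delta_w(t_post_minus_t_pre_ms, A_plus=0.005, A_minus=0.00525,
                 tau_plus=20.0, tau_minus=20.0):
    """Pair-based exponential STDP window (times in ms).
    dt >= 0 (pre before post) -> potentiation; dt < 0 -> depression."""
    dt = np.asarray(t_post_minus_t_pre_ms, dtype=float)
    return np.where(dt >= 0.0,
                    A_plus * np.exp(-dt / tau_plus),
                    -A_minus * np.exp(dt / tau_minus))

# A pre spike 10 ms before the post spike strengthens the synapse,
# while the reverse order weakens it.
print(stdp_delta_w(10.0), stdp_delta_w(-10.0))
```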

9 Computational models highlighted how STDP combines synaptic strengthening and weakening so that learning gives rise to synaptic competition in a way that neuronal firing rates are stabilized. [sent-21, score-1.514]

10 Recent modeling studies have, however, demonstrated that whether an STDP-type rule results in competition or rate stabilization depends on the exact formulation of the weight update scheme [3, 2]. [sent-22, score-0.508]

11 In the additive version of an STDP update rule studied by Abbott and coworkers [1, 10], the magnitude of synaptic change is independent of synaptic strength. [sent-24, score-0.981]

12 For this version of the rule (aSTDP), the steady-state synaptic weight distribution is bimodal. [sent-26, score-0.56]

13 In sharp contrast to this, a multiplicative STDP rule, where the amount of weight increase scales inversely with the present weight size, produces neither synaptic competition nor rate normalization [3, 2]. [sent-27, score-0.923]

14 In this multiplicative scenario the synaptic weight distribution is unimodal. [sent-28, score-0.479]
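
To make the contrast explicit, here is a rough sketch of the two weight-update schemes; the weight-dependent form shown is one common choice from the literature [2, 3] and is an assumption of this summary, not necessarily the exact rule analyzed in the paper.

```python
def apply_additive(w, delta, w_max=1.0):
    """Additive STDP: the change is independent of the current weight;
    hard bounds at 0 and w_max yield a bimodal steady-state distribution."""
    return min(max(w + delta, 0.0), w_max)

def apply_weight_dependent(w, delta, w_max=1.0):
    """Weight-dependent (multiplicative) STDP: potentiation shrinks as w
    grows toward w_max, giving a unimodal steady-state distribution."""
    return w + delta * (w_max - w) if delta >= 0.0 else w + delta * w
```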

15 Activity-dependent synaptic scaling has recently been proposed as a separate mechanism to ensure synaptic competition operating on a slow (days) time scale [3]. [sent-29, score-0.902]

16 In the first section we show that the aSTDP rule normalizes postsynaptic firing rates only in a limited parameter range. [sent-32, score-0.792]

17 The critical parameter of aSTDP becomes the ratio (α) between the amount of synaptic depression and potentiation. [sent-33, score-0.595]

18 This led us to consider an adaptive version of aSTDP in order to create a rule that is both competitive and rate stabilizing under different circumstances. [sent-35, score-0.448]

19 Next, we use a Fokker-Planck formalism to clarify what determines when an additive STDP rule fails to stabilize the postsynaptic firing rate. [sent-36, score-0.818]

20 Here we derive the requirement for how the potentiation to depression ratio should change with neuronal activity. [sent-37, score-0.424]

21 In the last section we provide a biologically realistic implementation of the adaptive rule and perform numerical simulations to show how different parameterizations of the adaptive rule can guide STDP into differentially rate-sensitive regimes. [sent-38, score-0.683]

22 The integral over the temporal window of the synaptic learning function (L) is always negative. [sent-44, score-0.449]

23 Correlations between input rates were generated by adding a common bias rate in a graded manner across synapses, so that the first afferent has zero correlation while the last afferent has the maximal correlation, Cmax. [sent-46, score-0.477]
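
A minimal sketch of this graded-correlation construction, assuming the common bias is a shared fluctuating rate added to each afferent with a linearly increasing weight; all names and numbers below are illustrative.

```python
import numpy as np

def correlated_input_rates(n_afferents, n_steps, base_rate_hz=20.0,
                           c_max=0.5, bias_std_hz=10.0, seed=0):
    """Every afferent shares one fluctuating bias rate, weighted linearly
    from 0 (first afferent, uncorrelated) to c_max (last afferent, most
    strongly correlated). Returns an (n_afferents, n_steps) rate array."""
    rng = np.random.default_rng(seed)
    common_bias = rng.normal(0.0, bias_std_hz, size=n_steps)  # shared signal
    grading = np.linspace(0.0, c_max, n_afferents)[:, None]
    return np.clip(base_rate_hz + grading * common_bias, 0.0, None)
```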

24 We first examine how the depression/potentiation ratio (α = LTD/LTP) [2] controls the dependence of the output firing rate on the synaptic input rate, here referred to as the effective neuronal gain. [sent-47, score-1.365]

25 Provided that α is sufficiently large, the STDP rule controls the postsynaptic firing rate (Fig. [sent-48, score-0.803]

26 The stabilizing effect of the STDP rule is therefore equivalent to having a weak neuronal gain. [sent-50, score-0.359]

27 The slope of the dependence of the postsynaptic output rate on the presynaptic input rate is referred to as the effective neuronal gain. [sent-54, score-1.25]
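
Defined this way, the effective gain can be read off a measured rate curve; the sketch below simply fits a line to (input rate, output rate) pairs, with made-up numbers.

```python
import numpy as np

def effective_gain(input_rates_hz, output_rates_hz):
    """Effective neuronal gain: slope of the postsynaptic-rate vs.
    presynaptic-rate curve, estimated by a least-squares line fit."""
    slope, _intercept = np.polyfit(input_rates_hz, output_rates_hz, 1)
    return slope

# A shallow curve corresponds to strong rate stabilization (low gain),
# a steep one to a high-gain regime. Numbers are illustrative only.
print(effective_gain([10, 20, 40, 80], [12, 13, 15, 19]))
```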

28 The initial firing rate is shown by the upper curve while the lower line displays the final postsynaptic firing rate. [sent-55, score-0.883]

29 When the synaptic input is strongly correlated the postsynaptic neuron operates in a high gain mode characterized by a larger slope and larger baseline rate. [sent-60, score-0.975]

30 Note that for further increases in the presynaptic rates, postsynaptic firing can increase to over 1000 Hz. [sent-72, score-0.795]

31 We find that the neuronal gain is extremely sensitive to the value of α as well as to the amount of afferent input correlations. [sent-83, score-0.513]

32 Figure 1B shows that increasing the amount of input correlations for a given α value increases the overall firing rate and the slope of the input-output curve, thus leading to larger effective gain. [sent-84, score-0.74]

33 Increasing the amount of correlations between the synaptic afferents could therefore be interpreted as increasing the effective neuronal gain. [sent-85, score-0.809]

34 Note that the baseline firing at a presynaptic drive of 20 Hz is also increased. [sent-86, score-0.455]

35 Next, we examined how neuronal gain depends on the value of α in the STDP rule (Figure 1C). [sent-87, score-0.425]

36 The high gain and high rate mode induced by strong input correlations was reduced to a lower gain and lower rate mode by increasing α (see arrow in Figure 1C). [sent-88, score-0.769]

37 3 Conditions for an adaptive additive STDP rule Here we address how the learning ratio, α, should depend on the input rate in order to produce a given neuronal input-output relationship. [sent-90, score-0.891]

38 Using this functional form we will be able to formulate constraints for an adaptive additive STDP rule. [sent-91, score-0.271]

39 This will guide us in the derivation of a biophysical implementation of the adaptive control scheme. [sent-92, score-0.34]

40 The problem in its generality is to find (i) how the learning ratio should depend on the postsynaptic rate and (ii) how the postsynaptic rate depends on the input rate and the synaptic weights. [sent-93, score-1.606]

41 By performing self-consistent calculations using a Fokker-Planck formulation, the problem is reduced to finding conditions for how the learning ratio should depend on the input rates only. [sent-94, score-0.432]

42 A: The output rate does not depend on the input rate. [sent-163, score-0.287]

43 B: Dependence of the mean synaptic weight on input rates. [sent-165, score-0.656]

44 E, F: A(w) and P(w) are functions of the synaptic strength and depend on the input rate. [sent-168, score-0.649]

45 Note that eight different input rates are used but only traces 1, 3, 5, 7 are shown for A(w) and P(w), in which the dashed line corresponds to the case with the lowest presynaptic rate. [sent-170, score-0.432]

46 determine how the parameter β = α - 1 should scale with presynaptic rates in order to control the neuronal gain. [sent-171, score-0.575]

47 The Fokker-Planck formulation permits an analytic calculation of the steady state distribution of synaptic weights [3]. [sent-172, score-0.446]

48 The competition parameter for N excitatory afferents is given by W_tot = t_w r_pre N <w>, where the time window t_w sets the probability for depression (P_d = t_w/t_isi), i.e. the probability that a synaptic event occurs within the time window (t_w < t_isi). [sent-173, score-0.76]

49 Thus, A(w) determines whether a given synapse (w) will increase or decrease as a function of its synaptic weight. [sent-175, score-0.438]

50 The steepness of the A(w) function determines the degree of synaptic competition. [sent-176, score-0.411]

51 When W_max > (1 - 1/α) W_tot, the synaptic weight distribution is bimodal. [sent-178, score-0.442]
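
The two quantities above translate directly into a quick check; the sketch below restates W_tot = t_w r_pre N <w> and the quoted bimodality condition in code, with illustrative units and values.

```python
def w_tot(t_w_s, r_pre_hz, n_afferents, mean_w):
    """Competition parameter W_tot = t_w * r_pre * N * <w>
    (t_w is the STDP time window, here in seconds)."""
    return t_w_s * r_pre_hz * n_afferents * mean_w

def weight_distribution_is_bimodal(w_max, alpha, W_tot):
    """Bimodal (competitive) regime when W_max > (1 - 1/alpha) * W_tot."""
    return w_max > (1.0 - 1.0 / alpha) * W_tot

# Illustrative numbers only.
W = w_tot(t_w_s=0.02, r_pre_hz=20.0, n_afferents=1000, mean_w=0.5)
print(W, weight_distribution_is_bimodal(w_max=1.0, alpha=1.004, W_tot=W))
```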

52 Using these equations one can calculate how the parameter β should scale with the presynaptic input rate in order to produce a given postsynaptic firing rate. [sent-181, score-0.973]

53 At that point, the postsynaptic firing rate can be calculated. [sent-183, score-0.646]

54 Here, instead we impose a fixed postsynaptic output rate for a given input rate and search for a self-consistent solution using β as a free parameter. [sent-184, score-0.664]

55 Performing this calculation for a range of input rates provides us with the desired dependency of β on the presynaptic firing rate. [sent-185, score-0.669]

56 Once a solution is reached we also examine the resulting steady state synaptic weight distribution (P(w)) and the corresponding drift term A(w) as a function of the presynaptic input rate. [sent-186, score-0.873]

57 The neuronal gain, the ratio between the postsynaptic firing rate and the input rate, is set to be zero (Fig 2A). [sent-188, score-1.203]

58 To normalize postsynaptic firing rates the average synaptic weight has to decrease in order to compensate for the increasing presynaptic firing rate. [sent-189, score-1.586]

59 The condition for a zero neuronal gain is that the average synaptic weight should decrease as 1/r_pre. [sent-191, score-0.802]
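
Intuitively, if the total excitatory drive scales roughly as N * r_pre * <w>, holding the output rate fixed forces <w> to fall as 1/r_pre; the one-line bookkeeping below is an assumption of this summary, not a quote from the paper.

```python
def mean_weight_for_fixed_drive(r_pre_hz, target_drive, n_afferents):
    """Keep the total drive ~ N * r_pre * <w> constant (zero neuronal gain):
    the required mean synaptic weight falls as 1/r_pre."""
    return target_drive / (n_afferents * r_pre_hz)

# Doubling the presynaptic rate halves the required mean weight.
print(mean_weight_for_fixed_drive(20.0, 200.0, 1000),
      mean_weight_for_fixed_drive(40.0, 200.0, 1000))
```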

60 The dependence of A(w) and the synaptic weight distribution P(w) on different presynaptic rates is illustrated in Fig 2E and F. [sent-195, score-0.826]

61 As the presynaptic rates increase, the A(w) function is lowered (dashed line indicates the smallest presynaptic rate), thus pushing more synapses to smaller values since they experience a net negative "force field". [sent-196, score-0.598]

62 This is also reflected in the synaptic weight distribution which is pushed to the lower boundary as the input rates increase. [sent-197, score-0.656]

63 When enforcing a different neuronal gain, the dependence of the β term on the presynaptic rates remains approximately linear but with a different slope (not shown). [sent-198, score-0.663]

64 4 Derivation of an adaptive learning rule with biophysical components The key insight from the above calculations is the observed linear dependence of β on presynaptic rates. [sent-199, score-0.71]

65 However, when implementing an adaptive rule with biophysical elements it is very likely that individual components will have a non-linear dependence on each other. [sent-200, score-0.443]

66 A natural solution would be to use postsynaptic calcium to measure the postsynaptic firing and therefore indirectly the presynaptic firing rate. [sent-206, score-1.43]

67 Moreover, the asymmetry (β) of the learning ratio could depend on the level of postsynaptic calcium. [sent-207, score-0.458]

68 It is known that increased resting calcium levels inhibit NMDA channels and thus calcium influx due to synaptic input. [sent-208, score-0.714]

69 A biophysical formulation of the above scheme is the following. (Figure 3: responses without and with adaptive tracking, plotted against time and input rate.) [sent-213, score-0.284]

70 When the STDP rule is extended with an adaptive control loop, the output rates are normalized in the presence of correlated input. [sent-216, score-0.497]

71 Since β tracks changes in intracellular calcium on a rapid time-scale, every spike experiences a different learning ratio, α. [sent-218, score-0.347]

72 Note that the adaptive scheme approximates the learning ratio (α = 1. [sent-219, score-0.384]

73 (4) τ_β dβ/dt = -β + [Ca]^q (5) The parameter p determines how the calcium concentration scales with the postsynaptic firing rate (delta spikes δ above) and q controls the learning sensitivity. [sent-221, score-0.9]

74 "( controls the rise of steady-state calcium with increasing postsynaptic rates (rpost). [sent-222, score-0.655]

75 The time constants τ_Ca and τ_β determine the calcium dynamics and the time course of the adaptive rule, respectively. [sent-223, score-0.462]
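
Since equation (4) for the calcium dynamics is not reproduced in this summary, the sketch below uses an assumed first-order, spike-driven calcium update built from the named parameters (τ_Ca, γ, with the exponent p taken as 1 for simplicity), while the β update follows equation (5), τ_β dβ/dt = -β + [Ca]^q. All parameter values are illustrative, not the paper's.

```python
import numpy as np

def simulate_adaptive_beta(post_spike_times_s, T=1.0, dt=1e-3,
                           tau_ca=0.05, tau_beta=0.01, gamma=0.2, q=1.0):
    """Euler integration of the adaptive control loop.
    Calcium (stand-in for eq. 4): each postsynaptic spike adds gamma to [Ca],
    which then decays with time constant tau_ca, so steady-state calcium grows
    with the postsynaptic rate.
    Learning asymmetry (eq. 5): tau_beta * dbeta/dt = -beta + [Ca]**q.
    The learning ratio applied to each spike pair is alpha = 1 + beta."""
    n_steps = int(T / dt)
    spike_idx = {int(t / dt) for t in post_spike_times_s}
    ca, beta = 0.0, 0.0
    beta_trace = np.zeros(n_steps)
    for i in range(n_steps):
        if i in spike_idx:
            ca += gamma                # spike-driven calcium influx
        ca -= dt / tau_ca * ca         # passive calcium decay
        beta += dt / tau_beta * (-beta + ca ** q)
        beta_trace[i] = beta
    return beta_trace

# beta (hence alpha = 1 + beta) rises after each postsynaptic spike and relaxes
# between spikes, so every spike pair sees a different learning ratio.
trace = simulate_adaptive_beta([0.10, 0.12, 0.50])
print(float(trace.max()))
```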

76 The neuronal gain can be tuned via these parameters. Moreover, the drift term (6) applies for β << 1. [sent-226, score-0.344]

77 Note also that when W_max > [τ_Ca γ r_post^p]^q W_tot there is a bimodal synaptic weight distribution and synaptic competition is preserved. [sent-229, score-0.923]

78 Figure 4: Full numerical simulation of the adaptive additive STDP rule (three panels of output rate versus input rate, 0-100 Hz). [sent-246, score-0.661]

79 5 Numerical simulations Next, we examine whether the theory of adaptive normalization carries over to a full-scale simulation of the integrate-and-fire model with the STDP rule and the biophysical adaptive scheme as described above. [sent-256, score-0.66]

80 Driving a neuron with increasing input rates increases the output rate significantly when there is no adaptive scheme (squares, Figure 3 Left) as observed previously (cf. [sent-259, score-0.691]

81 Adding the adaptive loop normalizes the output rates (circles, Figure 3 Left). [sent-261, score-0.385]

82 This simulation shows that the average postsynaptic firing rate is regulated by the adaptive tracking scheme. [sent-262, score-0.884]

83 This is expected since the Fokker-Planck analysis is based on the steady-state synaptic weight distribution. [sent-263, score-0.442]

84 To further gain insight into the operation of the adaptive loop we examined the spike-to-spike dependence of the tracking scheme. [sent-264, score-0.424]

85 The adaptive rule tracks fast changes in firing by adjusting the learning ratio for each spike. [sent-266, score-0.784]

86 Our fast, spike-to-spike tracking scheme is in contrast to other homeostatic mechanisms operating on the time-scale of hours to days [11, 12, 13, 14]. [sent-270, score-0.291]

87 In our formulation, the learning ratio, via β, tracks changes in intracellular calcium, which in turn reflects the instantaneous firing rate. [sent-271, score-0.373]

88 Slower homeostatic mechanisms are unable to detect these rapid changes in firing statistics. [sent-272, score-0.381]

89 Because this fast adaptive scheme depends on recent neuronal firing, pairing several spikes on a time-scale comparable to the calcium dynamics introduces non-linear summation effects. [sent-273, score-0.648]

90 Neurons with this adaptive STDP control loop can detect changes in the input correlation while being only weakly dependent on the presynaptic firing rate. [sent-274, score-0.856]

91 In a different regime where we introduce increasing correlations between the synaptic inputs [1] we find that the neuronal gain is changed little with increasing input rates but increases substantially with increasing input correlations (Fig 4C). [sent-280, score-1.385]

92 Thus, the adaptive aSTDP rule can normalize the mean postsynaptic rate even when the input statistics change. [sent-281, score-0.816]

93 With other adaptive parameters we also found learning regimes where the responses to input correlations were affected differentially (not shown). [sent-282, score-0.433]

94 We found that STDP is very sensitive to the ratio of synaptic strengthening to weakening, α, and requires different values for different input statistics. [sent-286, score-0.67]

95 To correct for this, we proposed an adaptive control scheme to adjust the plasticity rule. [sent-287, score-0.384]

96 This adaptive mechanism makes the learning rule more robust to changing input conditions while preserving its interesting properties, such as synaptic competition. [sent-288, score-0.868]

97 Our adaptive STDP rule adjusts the learning ratio on a millisecond time-scale. [sent-290, score-0.437]

98 Because the learning rule changes rapidly, it is very sensitive to the input statistics. [sent-292, score-0.324]

99 Furthermore, the synaptic weight changes add non-linearly due to the rapid self-regulation. [sent-293, score-0.512]

100 Dan, personal communication) which might have roles in making synaptic plasticity adaptive. [sent-295, score-0.486]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('stdp', 0.493), ('synaptic', 0.386), ('postsynaptic', 0.287), ('firing', 0.237), ('presynaptic', 0.218), ('neuronal', 0.213), ('adaptive', 0.18), ('calcium', 0.164), ('rate', 0.122), ('rule', 0.118), ('astdp', 0.113), ('ratio', 0.113), ('input', 0.109), ('rates', 0.105), ('plasticity', 0.1), ('competition', 0.095), ('gain', 0.094), ('additive', 0.091), ('biophysical', 0.084), ('fig', 0.075), ('abbott', 0.075), ('depression', 0.07), ('correlations', 0.066), ('tca', 0.066), ('slope', 0.066), ('scheme', 0.065), ('dependence', 0.061), ('increasing', 0.06), ('stabilize', 0.06), ('tracking', 0.058), ('synapses', 0.057), ('ost', 0.057), ('thrrigiano', 0.057), ('weight', 0.056), ('pre', 0.053), ('homeostatic', 0.049), ('tot', 0.049), ('hz', 0.049), ('rules', 0.048), ('normalizes', 0.045), ('nelson', 0.045), ('spike', 0.045), ('afferent', 0.042), ('tracks', 0.042), ('changes', 0.042), ('controls', 0.039), ('control', 0.039), ('cmax', 0.038), ('roi', 0.038), ('rossum', 0.038), ('stockholm', 0.038), ('wtot', 0.038), ('drift', 0.037), ('window', 0.037), ('multiplicative', 0.037), ('guide', 0.037), ('arrow', 0.036), ('linearity', 0.036), ('song', 0.036), ('operating', 0.035), ('steady', 0.034), ('examine', 0.033), ('mode', 0.033), ('kepecs', 0.033), ('weakening', 0.033), ('destabilizing', 0.033), ('strengthening', 0.033), ('days', 0.033), ('depend', 0.032), ('hebbian', 0.032), ('regime', 0.031), ('loop', 0.031), ('afferents', 0.03), ('ltd', 0.03), ('sensitive', 0.029), ('potentiation', 0.028), ('tw', 0.028), ('stabilizing', 0.028), ('sompolinsky', 0.028), ('effective', 0.028), ('excitatory', 0.028), ('rapid', 0.028), ('increase', 0.027), ('differentially', 0.026), ('stabilization', 0.026), ('regimes', 0.026), ('increases', 0.026), ('amount', 0.026), ('fast', 0.026), ('formulation', 0.026), ('learning', 0.026), ('mechanisms', 0.025), ('determines', 0.025), ('ms', 0.025), ('foundation', 0.025), ('numerical', 0.024), ('output', 0.024), ('ic', 0.024), ('conditions', 0.024), ('calculations', 0.023)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000007 197 nips-2001-Why Neuronal Dynamics Should Control Synaptic Learning Rules

Author: Jesper Tegnér, Ádám Kepecs

Abstract: Hebbian learning rules are generally formulated as static rules. Under changing conditions (e.g. neuromodulation, input statistics) most rules are sensitive to parameters. In particular, recent work has focused on two different formulations of spike-timing-dependent plasticity rules. Additive STDP [1] is remarkably versatile but also very fragile, whereas multiplicative STDP [2, 3] is more robust but lacks attractive features such as synaptic competition and rate stabilization. Here we address the problem of robustness in the additive STDP rule. We derive an adaptive control scheme, where the learning function is under fast dynamic control by postsynaptic activity to stabilize learning under a variety of conditions. Such a control scheme can be implemented using known biophysical mechanisms of synapses. We show that this adaptive rule makes the additive STDP more robust. Finally, we give an example of how metaplasticity of the adaptive rule can be used to guide STDP into different types of learning regimes. 1

2 0.29947674 49 nips-2001-Circuits for VLSI Implementation of Temporally Asymmetric Hebbian Learning

Author: A. Bofill, D. P. Thompson, Alan F. Murray

Abstract: Experimental data has shown that synaptic strength modification in some types of biological neurons depends upon precise spike timing differences between presynaptic and postsynaptic spikes. Several temporally-asymmetric Hebbian learning rules motivated by this data have been proposed. We argue that such learning rules are suitable to analog VLSI implementation. We describe an easily tunable circuit to modify the weight of a silicon spiking neuron according to those learning rules. Test results from the fabrication of the circuit using a 0.6 μm CMOS process are given. 1

3 0.21138251 37 nips-2001-Associative memory in realistic neuronal networks

Author: Peter E. Latham

Abstract: Almost two decades ago, Hopfield [1] showed that networks of highly reduced model neurons can exhibit multiple attracting fixed points, thus providing a substrate for associative memory. It is still not clear, however, whether realistic neuronal networks can support multiple attractors. The main difficulty is that neuronal networks in vivo exhibit a stable background state at low firing rate, typically a few Hz. Embedding attractor is easy; doing so without destabilizing the background is not. Previous work [2, 3] focused on the sparse coding limit, in which a vanishingly small number of neurons are involved in any memory. Here we investigate the case in which the number of neurons involved in a memory scales with the number of neurons in the network. In contrast to the sparse coding limit, we find that multiple attractors can co-exist robustly with a stable background state. Mean field theory is used to understand how the behavior of the network scales with its parameters, and simulations with analog neurons are presented. One of the most important features of the nervous system is its ability to perform associative memory. It is generally believed that associative memory is implemented using attractor networks - experimental studies point in that direction [4-7], and there are virtually no competing theoretical models. Perhaps surprisingly, however, it is still an open theoretical question whether attractors can exist in realistic neuronal networks. The

4 0.21014108 96 nips-2001-Information-Geometric Decomposition in Spike Analysis

Author: Hiroyuki Nakahara, Shun-ichi Amari

Abstract: We present an information-geometric measure to systematically investigate neuronal firing patterns, taking account not only of the second-order but also of higher-order interactions. We begin with the case of two neurons for illustration and show how to test whether or not any pairwise correlation in one period is significantly different from that in the other period. In order to test such a hypothesis of different firing rates, the correlation term needs to be singled out 'orthogonally' to the firing rates, where the null hypothesis might not be of independent firing. This method is also shown to directly associate neural firing with behavior via their mutual information, which is decomposed into two types of information, conveyed by mean firing rate and coincident firing, respectively. Then, we show that these results, using the 'orthogonal' decomposition, are naturally extended to the case of three neurons and n neurons in general. 1

5 0.16022617 166 nips-2001-Self-regulation Mechanism of Temporally Asymmetric Hebbian Plasticity

Author: N. Matsumoto, M. Okada

Abstract: Recent biological experimental findings have shown that the synaptic plasticity depends on the relative timing of the pre- and postsynaptic spikes which determines whether Long Term Potentiation (LTP) occurs or Long Term Depression (LTD) does. The synaptic plasticity has been called “Temporally Asymmetric Hebbian plasticity (TAH)”. Many authors have numerically shown that spatiotemporal patterns can be stored in neural networks. However, the mathematical mechanism for storage of the spatio-temporal patterns is still unknown, especially the effects of LTD. In this paper, we employ a simple neural network model and show that interference of LTP and LTD disappears in a sparse coding scheme. On the other hand, it is known that the covariance learning is indispensable for storing sparse patterns. We also show that TAH qualitatively has the same effect as the covariance learning when spatio-temporal patterns are embedded in the network. 1

6 0.15353589 72 nips-2001-Exact differential equation population dynamics for integrate-and-fire neurons

7 0.13397725 112 nips-2001-Learning Spike-Based Correlations and Conditional Probabilities in Silicon

8 0.13225819 27 nips-2001-Activity Driven Adaptive Stochastic Resonance

9 0.11869392 2 nips-2001-3 state neurons for contextual processing

10 0.095904484 23 nips-2001-A theory of neural integration in the head-direction system

11 0.073711492 57 nips-2001-Correlation Codes in Neuronal Populations

12 0.070067525 174 nips-2001-Spike timing and the coding of naturalistic sounds in a central auditory area of songbirds

13 0.068844371 141 nips-2001-Orientation-Selective aVLSI Spiking Neurons

14 0.066761523 38 nips-2001-Asymptotic Universality for Learning Curves of Support Vector Machines

15 0.065644875 73 nips-2001-Eye movements and the maturation of cortical orientation selectivity

16 0.058698501 143 nips-2001-PAC Generalization Bounds for Co-training

17 0.05816194 28 nips-2001-Adaptive Nearest Neighbor Classification Using Support Vector Machines

18 0.053209037 142 nips-2001-Orientational and Geometric Determinants of Place and Head-direction

19 0.051123906 82 nips-2001-Generating velocity tuning by asymmetric recurrent connections

20 0.050748195 124 nips-2001-Modeling the Modulatory Effect of Attention on Human Spatial Vision


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.154), (1, -0.251), (2, -0.137), (3, 0.095), (4, 0.163), (5, 0.037), (6, 0.179), (7, -0.052), (8, -0.094), (9, -0.06), (10, 0.088), (11, -0.203), (12, 0.152), (13, -0.027), (14, -0.045), (15, 0.034), (16, 0.014), (17, -0.074), (18, -0.01), (19, -0.04), (20, 0.138), (21, -0.013), (22, -0.085), (23, 0.058), (24, -0.025), (25, -0.163), (26, -0.046), (27, 0.11), (28, -0.075), (29, -0.076), (30, 0.115), (31, 0.226), (32, 0.09), (33, 0.044), (34, 0.034), (35, -0.05), (36, -0.001), (37, -0.148), (38, -0.042), (39, 0.009), (40, 0.004), (41, 0.009), (42, -0.088), (43, 0.159), (44, 0.013), (45, -0.105), (46, 0.055), (47, 0.058), (48, -0.039), (49, 0.029)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.96925974 197 nips-2001-Why Neuronal Dynamics Should Control Synaptic Learning Rules

Author: Jesper Tegnér, Ádám Kepecs

Abstract: Hebbian learning rules are generally formulated as static rules. Under changing conditions (e.g. neuromodulation, input statistics) most rules are sensitive to parameters. In particular, recent work has focused on two different formulations of spike-timing-dependent plasticity rules. Additive STDP [1] is remarkably versatile but also very fragile, whereas multiplicative STDP [2, 3] is more robust but lacks attractive features such as synaptic competition and rate stabilization. Here we address the problem of robustness in the additive STDP rule. We derive an adaptive control scheme, where the learning function is under fast dynamic control by postsynaptic activity to stabilize learning under a variety of conditions. Such a control scheme can be implemented using known biophysical mechanisms of synapses. We show that this adaptive rule makes the additive STDP more robust. Finally, we give an example of how metaplasticity of the adaptive rule can be used to guide STDP into different types of learning regimes. 1

2 0.73951 49 nips-2001-Circuits for VLSI Implementation of Temporally Asymmetric Hebbian Learning

Author: A. Bofill, D. P. Thompson, Alan F. Murray

Abstract: Experimental data has shown that synaptic strength modification in some types of biological neurons depends upon precise spike timing differences between presynaptic and postsynaptic spikes. Several temporally-asymmetric Hebbian learning rules motivated by this data have been proposed. We argue that such learning rules are suitable to analog VLSI implementation. We describe an easily tunable circuit to modify the weight of a silicon spiking neuron according to those learning rules. Test results from the fabrication of the circuit using a 0.6 μm CMOS process are given. 1

3 0.61212641 166 nips-2001-Self-regulation Mechanism of Temporally Asymmetric Hebbian Plasticity

Author: N. Matsumoto, M. Okada

Abstract: Recent biological experimental findings have shown that the synaptic plasticity depends on the relative timing of the pre- and postsynaptic spikes which determines whether Long Term Potentiation (LTP) occurs or Long Term Depression (LTD) does. The synaptic plasticity has been called “Temporally Asymmetric Hebbian plasticity (TAH)”. Many authors have numerically shown that spatiotemporal patterns can be stored in neural networks. However, the mathematical mechanism for storage of the spatio-temporal patterns is still unknown, especially the effects of LTD. In this paper, we employ a simple neural network model and show that interference of LTP and LTD disappears in a sparse coding scheme. On the other hand, it is known that the covariance learning is indispensable for storing sparse patterns. We also show that TAH qualitatively has the same effect as the covariance learning when spatio-temporal patterns are embedded in the network. 1

4 0.60340053 37 nips-2001-Associative memory in realistic neuronal networks

Author: Peter E. Latham

Abstract: Almost two decades ago, Hopfield [1] showed that networks of highly reduced model neurons can exhibit multiple attracting fixed points, thus providing a substrate for associative memory. It is still not clear, however, whether realistic neuronal networks can support multiple attractors. The main difficulty is that neuronal networks in vivo exhibit a stable background state at low firing rate, typically a few Hz. Embedding attractor is easy; doing so without destabilizing the background is not. Previous work [2, 3] focused on the sparse coding limit, in which a vanishingly small number of neurons are involved in any memory. Here we investigate the case in which the number of neurons involved in a memory scales with the number of neurons in the network. In contrast to the sparse coding limit, we find that multiple attractors can co-exist robustly with a stable background state. Mean field theory is used to understand how the behavior of the network scales with its parameters, and simulations with analog neurons are presented. One of the most important features of the nervous system is its ability to perform associative memory. It is generally believed that associative memory is implemented using attractor networks - experimental studies point in that direction [4-7], and there are virtually no competing theoretical models. Perhaps surprisingly, however, it is still an open theoretical question whether attractors can exist in realistic neuronal networks. The

5 0.59159935 96 nips-2001-Information-Geometric Decomposition in Spike Analysis

Author: Hiroyuki Nakahara, Shun-ichi Amari

Abstract: We present an information-geometric measure to systematically investigate neuronal firing patterns, taking account not only of the second-order but also of higher-order interactions. We begin with the case of two neurons for illustration and show how to test whether or not any pairwise correlation in one period is significantly different from that in the other period. In order to test such a hypothesis of different firing rates, the correlation term needs to be singled out 'orthogonally' to the firing rates, where the null hypothesis might not be of independent firing. This method is also shown to directly associate neural firing with behavior via their mutual information, which is decomposed into two types of information, conveyed by mean firing rate and coincident firing, respectively. Then, we show that these results, using the 'orthogonal' decomposition, are naturally extended to the case of three neurons and n neurons in general. 1

6 0.56693459 112 nips-2001-Learning Spike-Based Correlations and Conditional Probabilities in Silicon

7 0.4137072 2 nips-2001-3 state neurons for contextual processing

8 0.39680618 72 nips-2001-Exact differential equation population dynamics for integrate-and-fire neurons

9 0.36193061 27 nips-2001-Activity Driven Adaptive Stochastic Resonance

10 0.28288653 141 nips-2001-Orientation-Selective aVLSI Spiking Neurons

11 0.25007465 83 nips-2001-Geometrical Singularities in the Neuromanifold of Multilayer Perceptrons

12 0.23874919 143 nips-2001-PAC Generalization Bounds for Co-training

13 0.23754923 174 nips-2001-Spike timing and the coding of naturalistic sounds in a central auditory area of songbirds

14 0.23124857 23 nips-2001-A theory of neural integration in the head-direction system

15 0.21472012 165 nips-2001-Scaling Laws and Local Minima in Hebbian ICA

16 0.19849387 142 nips-2001-Orientational and Geometric Determinants of Place and Head-direction

17 0.19570573 57 nips-2001-Correlation Codes in Neuronal Populations

18 0.19057719 152 nips-2001-Prodding the ROC Curve: Constrained Optimization of Classifier Performance

19 0.1847284 12 nips-2001-A Model of the Phonological Loop: Generalization and Binding

20 0.18232198 177 nips-2001-Switch Packet Arbitration via Queue-Learning


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(14, 0.097), (17, 0.017), (19, 0.033), (27, 0.153), (30, 0.075), (38, 0.047), (39, 0.218), (59, 0.012), (67, 0.011), (72, 0.062), (74, 0.035), (79, 0.045), (91, 0.101)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.85781932 197 nips-2001-Why Neuronal Dynamics Should Control Synaptic Learning Rules

Author: Jesper Tegnér, Ádám Kepecs

Abstract: Hebbian learning rules are generally formulated as static rules. Under changing conditions (e.g. neuromodulation, input statistics) most rules are sensitive to parameters. In particular, recent work has focused on two different formulations of spike-timing-dependent plasticity rules. Additive STDP [1] is remarkably versatile but also very fragile, whereas multiplicative STDP [2, 3] is more robust but lacks attractive features such as synaptic competition and rate stabilization. Here we address the problem of robustness in the additive STDP rule. We derive an adaptive control scheme, where the learning function is under fast dynamic control by postsynaptic activity to stabilize learning under a variety of conditions. Such a control scheme can be implemented using known biophysical mechanisms of synapses. We show that this adaptive rule makes the additive STDP more robust. Finally, we give an example of how metaplasticity of the adaptive rule can be used to guide STDP into different types of learning regimes. 1

2 0.70236856 137 nips-2001-On the Convergence of Leveraging

Author: Gunnar Rätsch, Sebastian Mika, Manfred K. Warmuth

Abstract: We give an unified convergence analysis of ensemble learning methods including e.g. AdaBoost, Logistic Regression and the Least-SquareBoost algorithm for regression. These methods have in common that they iteratively call a base learning algorithm which returns hypotheses that are then linearly combined. We show that these methods are related to the Gauss-Southwell method known from numerical optimization and state non-asymptotical convergence results for all these methods. Our analysis includes -norm regularized cost functions leading to a clean and general way to regularize ensemble learning.

3 0.70062071 92 nips-2001-Incorporating Invariances in Non-Linear Support Vector Machines

Author: Olivier Chapelle, Bernhard Schölkopf

Abstract: The choice of an SVM kernel corresponds to the choice of a representation of the data in a feature space and, to improve performance, it should therefore incorporate prior knowledge such as known transformation invariances. We propose a technique which extends earlier work and aims at incorporating invariances in nonlinear kernels. We show on a digit recognition task that the proposed approach is superior to the Virtual Support Vector method, which previously had been the method of choice. 1

4 0.70022392 37 nips-2001-Associative memory in realistic neuronal networks

Author: Peter E. Latham

Abstract: Almost two decades ago, Hopfield [1] showed that networks of highly reduced model neurons can exhibit multiple attracting fixed points, thus providing a substrate for associative memory. It is still not clear, however, whether realistic neuronal networks can support multiple attractors. The main difficulty is that neuronal networks in vivo exhibit a stable background state at low firing rate, typically a few Hz. Embedding attractor is easy; doing so without destabilizing the background is not. Previous work [2, 3] focused on the sparse coding limit, in which a vanishingly small number of neurons are involved in any memory. Here we investigate the case in which the number of neurons involved in a memory scales with the number of neurons in the network. In contrast to the sparse coding limit, we find that multiple attractors can co-exist robustly with a stable background state. Mean field theory is used to understand how the behavior of the network scales with its parameters, and simulations with analog neurons are presented. One of the most important features of the nervous system is its ability to perform associative memory. It is generally believed that associative memory is implemented using attractor networks - experimental studies point in that direction [4-7], and there are virtually no competing theoretical models. Perhaps surprisingly, however, it is still an open theoretical question whether attractors can exist in realistic neuronal networks. The

5 0.70015401 8 nips-2001-A General Greedy Approximation Algorithm with Applications

Author: T. Zhang

Abstract: Greedy approximation algorithms have been frequently used to obtain sparse solutions to learning problems. In this paper, we present a general greedy algorithm for solving a class of convex optimization problems. We derive a bound on the rate of approximation for this algorithm, and show that our algorithm includes a number of earlier studies as special cases.

6 0.69625938 38 nips-2001-Asymptotic Universality for Learning Curves of Support Vector Machines

7 0.69608676 27 nips-2001-Activity Driven Adaptive Stochastic Resonance

8 0.68745953 13 nips-2001-A Natural Policy Gradient

9 0.68621689 138 nips-2001-On the Generalization Ability of On-Line Learning Algorithms

10 0.68381721 134 nips-2001-On Kernel-Target Alignment

11 0.68244278 29 nips-2001-Adaptive Sparseness Using Jeffreys Prior

12 0.68067992 58 nips-2001-Covariance Kernels from Bayesian Generative Models

13 0.67932349 9 nips-2001-A Generalization of Principal Components Analysis to the Exponential Family

14 0.67926729 60 nips-2001-Discriminative Direction for Kernel Classifiers

15 0.67647314 139 nips-2001-Online Learning with Kernels

16 0.67565954 131 nips-2001-Neural Implementation of Bayesian Inference in Population Codes

17 0.67521149 77 nips-2001-Fast and Robust Classification using Asymmetric AdaBoost and a Detector Cascade

18 0.67190504 88 nips-2001-Grouping and dimensionality reduction by locally linear embedding

19 0.66957033 56 nips-2001-Convolution Kernels for Natural Language

20 0.66938245 103 nips-2001-Kernel Feature Spaces and Nonlinear Blind Souce Separation