nips nips2001 nips2001-166 knowledge-graph by maker-knowledge-mining

166 nips-2001-Self-regulation Mechanism of Temporally Asymmetric Hebbian Plasticity


Source: pdf

Author: N. Matsumoto, M. Okada

Abstract: Recent biological experimental findings have shown that the synaptic plasticity depends on the relative timing of the pre- and postsynaptic spikes which determines whether Long Term Potentiation (LTP) occurs or Long Term Depression (LTD) does. The synaptic plasticity has been called “Temporally Asymmetric Hebbian plasticity (TAH)”. Many authors have numerically shown that spatiotemporal patterns can be stored in neural networks. However, the mathematical mechanism for storage of the spatio-temporal patterns is still unknown, especially the effects of LTD. In this paper, we employ a simple neural network model and show that interference of LTP and LTD disappears in a sparse coding scheme. On the other hand, it is known that the covariance learning is indispensable for storing sparse patterns. We also show that TAH qualitatively has the same effect as the covariance learning when spatio-temporal patterns are embedded in the network. 1

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 jp Abstract Recent biological experimental findings have shown that the synaptic plasticity depends on the relative timing of the pre- and postsynaptic spikes which determines whether Long Term Potentiation (LTP) occurs or Long Term Depression (LTD) does. [sent-7, score-0.411]

2 The synaptic plasticity has been called “Temporally Asymmetric Hebbian plasticity (TAH)”. [sent-8, score-0.302]

3 Many authors have numerically shown that spatiotemporal patterns can be stored in neural networks. [sent-9, score-0.209]

4 However, the mathematical mechanism for storage of the spatio-temporal patterns is still unknown, especially the effects of LTD. [sent-10, score-0.331]

5 In this paper, we employ a simple neural network model and show that interference of LTP and LTD disappears in a sparse coding scheme. [sent-11, score-0.344]

6 On the other hand, it is known that the covariance learning is indispensable for storing sparse patterns. [sent-12, score-0.297]

7 We also show that TAH qualitatively has the same effect as the covariance learning when spatio-temporal patterns are embedded in the network. [sent-13, score-0.239]

8 1 Introduction Recent biological experimental findings have indicated that the synaptic plasticity depends on the relative timing of the pre- and post-synaptic spikes which determines whether Long Term Potentiation (LTP) occurs or Long Term Depression (LTD) does [1, 2, 3]. [sent-14, score-0.474]

9 Many authors have numerically shown that spatio-temporal patterns can be stored in neural networks [6, 7, 8, 9, 10, 11]. [sent-19, score-0.209]

10 The authors of [6] discussed the variability of spike generation in a network consisting of spiking neurons using TAH. [sent-21, score-0.307]

11 Yoshioka also discussed the associative memory network consisting of spiking neurons using TAH [11]. [sent-24, score-0.383]

12 Munro and Hernandez numerically showed that a network can retrieve spatio-temporal patterns even in a noisy environment owing to LTD [9]. [sent-26, score-0.328]

13 However, they did not discuss the reason why TAH was effective in terms of the storage and retrieval of the spatio-temporal patterns. [sent-27, score-0.298]

14 Since TAH has not only the effect of LTP but also that of LTD, the interference of LTP and LTD may prevent retrieval of the patterns. [sent-28, score-0.188]

15 To investigate this unknown mathematical mechanism for retrieval, we employ an associative memory network consisting of binary neurons. [sent-29, score-0.367]

16 We show the mechanism by which the spatio-temporal patterns can be retrieved in this network. [sent-32, score-0.223]

17 There are many works concerned with associative memory networks that store spatio-temporal patterns by the covariance learning [12, 13]. [sent-33, score-0.414]

18 It is well-known that the covariance learning is indispensable when the sparse patterns are embedded in a network as attractors [15, 16]. [sent-35, score-0.544]

19 The information on the firing rate for the stored patterns is not indispensable for TAH, although it is indispensable for the covariance learning. [sent-36, score-0.619]

20 We theoretically show that TAH qualitatively has the same effect as the covariance learning when the spatio-temporal patterns are embedded in the network. [sent-37, score-0.239]

21 This means that the difference in spike times induces LTP or LTD, and the effect of the firing rate information can be canceled out by this spike time difference. [sent-38, score-0.381]

22 We also use discrete time steps and the following synchronous updating rule: $u_i(t) = \sum_{j=1}^{N} J_{ij} x_j(t)$ (1), $x_i(t+1) = \Theta(u_i(t) - \theta)$ (2), where $\Theta(u) = 1$ for $u \geq 0$ and $\Theta(u) = 0$ for $u < 0$ (3), and where $x_i(t)$ is the state of the i-th neuron at time t, $u_i(t)$ its internal potential, and θ a uniform threshold. [sent-43, score-0.519]
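
A minimal NumPy sketch of one synchronous update step of equations (1)-(3); the function and variable names (`synchronous_update`, `J`, `x`, `theta`) are illustrative, not taken from the paper:

```python
import numpy as np

def synchronous_update(J, x, theta):
    """One synchronous step of eqs. (1)-(3).

    J     : (N, N) synaptic weight matrix J_ij
    x     : (N,) binary state vector x(t), entries in {0, 1}
    theta : uniform threshold
    """
    u = J @ x                          # internal potentials u_i(t), eq. (1)
    return (u >= theta).astype(int)    # Heaviside step Theta(u - theta), eqs. (2)-(3)
```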

23 If the i-th neuron fires at time t, its state is xi (t) = 1; otherwise, xi (t) = 0. [sent-44, score-0.218]

24 $J_{ij}$ is the synaptic weight from the j-th neuron to the i-th neuron. [sent-46, score-0.196]

25 Each element $\xi_i^{\mu}$ of the µ-th memory pattern $\xi^{\mu} = (\xi_1^{\mu}, \xi_2^{\mu}, \cdots, \xi_N^{\mu})$ is generated independently by $\mathrm{Prob}[\xi_i^{\mu} = 1] = 1 - \mathrm{Prob}[\xi_i^{\mu} = 0] = f$ (4). [sent-47, score-0.192]

26 The expectation of $\xi_i^{\mu}$ is $E[\xi_i^{\mu}] = f$, and thus f can be considered as the mean firing rate of the memory pattern. [sent-48, score-0.226]

27 The memory pattern is “sparse” when f → 0, and this coding scheme is called “sparse coding”. [sent-49, score-0.235]
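
As a concrete illustration, sparse patterns obeying equation (4) could be drawn as in the following sketch (illustrative names, not code from the paper):

```python
import numpy as np

def generate_patterns(p, N, f, seed=0):
    """Draw p memory patterns xi^mu of length N, each bit 1 with probability f (eq. (4))."""
    rng = np.random.default_rng(seed)
    return (rng.random((p, N)) < f).astype(int)   # sparse when f is small, e.g. f = 0.1
```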

28 The synaptic weight Jij follows the synaptic plasticity that depends on the difference in spike times between the i-th (post-) and j-th (pre-) neurons. [sent-50, score-0.437]

29 Figure 1(b) shows that LTP occurs when the j-th neuron fires one time step before the i-th neuron does, $\xi_i^{\mu+1} = \xi_j^{\mu} = 1$, and that LTD occurs when the j-th neuron fires one time step after the i-th neuron does, $\xi_i^{\mu-1} = \xi_j^{\mu} = 1$. [sent-55, score-0.402]

30 The synaptic weights are given by $J_{ij} = \frac{1}{N f(1-f)} \sum_{\mu=1}^{p} (\xi_i^{\mu+1} - \xi_i^{\mu-1})\, \xi_j^{\mu}$ (5). The number of memory patterns is p = αN, where α is defined as the “loading rate”. [sent-64, score-0.211]
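
Taking equation (5) as reconstructed above, the weight matrix could be assembled as in this sketch; the cyclic indexing (pattern p+1 wraps around to pattern 1) is an assumption consistent with the periodic retrieval described below, not an explicit statement of the paper:

```python
import numpy as np

def tah_weights(xi, f):
    """TAH weight matrix, a sketch of the reconstructed eq. (5).

    xi : (p, N) array of binary memory patterns xi^mu
    f  : mean firing rate of the memory patterns
    """
    p, N = xi.shape
    post = np.roll(xi, -1, axis=0) - np.roll(xi, 1, axis=0)   # xi^{mu+1} - xi^{mu-1}, cyclic in mu
    return post.T @ xi / (N * f * (1.0 - f))                  # J_ij = sum_mu post_i^mu xi_j^mu / (N f (1-f))
```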

31 If the loading rate is larger than αC , the pattern sequence becomes unstable. [sent-66, score-0.325]

32 We show that p memory patterns are retrieved periodically like $\xi^1 \to \xi^2 \to \cdots \to \xi^p \to \xi^1 \to \cdots$. [sent-69, score-0.283]

33 One candidate algorithm for controlling the threshold value is to maintain the mean firing rate of the network at that of the memory pattern, f, as follows: $f = \frac{1}{N}\sum_{i=1}^{N} x_i(t) = \frac{1}{N}\sum_{i=1}^{N} \Theta(u_i(t) - \theta(t))$ (6). [sent-73, score-0.508]

34 It is known that the obtained threshold value is nearly optimal, since it approximately gives a maximal storage capacity value [16]. [sent-74, score-0.458]
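
One plausible way to realize equation (6) in a simulation is to choose θ(t) so that roughly a fraction f of the internal potentials lie above it, e.g. with an empirical quantile; this particular scheme is an assumption for illustration, not the authors' stated algorithm:

```python
import numpy as np

def adaptive_threshold(u, f):
    """Choose theta(t) so that about f*N of the potentials u_i(t) are at or above it (eq. (6))."""
    return np.quantile(u, 1.0 - f)   # the (1 - f) quantile leaves a fraction f above the threshold
```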

35 3 Theory Many neural network models that store and retrieve sequential patterns by TAH have been discussed by many authors [7, 8, 9, 10]. [sent-75, score-0.38]

36 For example, Munro and Hernandez showed that their model could retrieve a stored pattern sequence even in a noisy environment [9]. [sent-77, score-0.278]

37 Here, we discuss the mechanism by which the network trained with TAH can store and retrieve sequential patterns. [sent-80, score-0.343]

38 Before providing details of the retrieval process, we discuss a simple situation where the number of memory patterns is very small relative to the number of neurons, i. [sent-81, score-0.329]

39 Then, the internal potential $u_i(t)$ of equation (1) is given by $u_i(t) = \xi_i^{t+1} - \xi_i^{t-1}$ (7). [sent-85, score-0.196]

40 The first term $\xi_i^{t+1}$ of equation (7) is a signal term for the recall of the pattern $\xi^{t+1}$, which is designed to be retrieved at time t+1, and the second term $\xi_i^{t-1}$ can interfere with retrieval of $\xi^{t+1}$. [sent-87, score-0.263]

41 If the threshold θ(t) is set between 0 and +1, $\xi_i^{t+1} = 0$ isn't influenced by the interference of $\xi_i^{t-1} = 1$. [sent-90, score-0.215]
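
To spell out the interference argument, the four possible cases of equation (7) with a threshold $\theta(t) \in (0, 1)$ can be enumerated directly (this enumeration is added for clarity and is not quoted from the paper):

\[
\begin{aligned}
(\xi_i^{t+1}, \xi_i^{t-1}) = (1,0):&\quad u_i(t) = +1 \ge \theta(t) \;\Rightarrow\; x_i(t+1) = 1 \quad \text{(correct recall)}\\
(\xi_i^{t+1}, \xi_i^{t-1}) = (0,0):&\quad u_i(t) = 0 < \theta(t) \;\Rightarrow\; x_i(t+1) = 0 \quad \text{(correct)}\\
(\xi_i^{t+1}, \xi_i^{t-1}) = (0,1):&\quad u_i(t) = -1 < \theta(t) \;\Rightarrow\; x_i(t+1) = 0 \quad \text{(correct)}\\
(\xi_i^{t+1}, \xi_i^{t-1}) = (1,1):&\quad u_i(t) = 0 < \theta(t) \;\Rightarrow\; x_i(t+1) = 0 \quad \text{(erroneous silence, probability } f^2\text{)}
\end{aligned}
\]

The only error case occurs with probability $f^2$, which vanishes relative to $f$ as $f \to 0$; this is the precise sense in which the interference of LTD disappears under sparse coding, and it also gives $\mathrm{Prob}[x_i(t+1) = 1] = f(1-f) = f - f^2$ as stated below.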

42 We consider the probability distribution of the internal potential $u_i(t)$ to examine how the interference of LTD influences the retrieval of $\xi^{t+1}$. [sent-92, score-0.383]

43 Since the threshold θ(t) is set between 0 and +1, the state $x_i(t+1)$ is 1 with probability $f - f^2$ and 0 with probability $1 - f + f^2$. [sent-95, score-0.193]

44 The overlap between the state x(t+1) and the memory pattern $\xi^{t+1}$ is given by $m^{t+1}(t+1) = \frac{1}{N f(1-f)} \sum_{i=1}^{N} (\xi_i^{t+1} - f)\, x_i(t+1) = 1 - f$. [sent-96, score-0.333]
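
For reference in simulations, the overlap defined above could be computed as in this small sketch (function and variable names are illustrative):

```python
import numpy as np

def overlap(xi_target, x, f):
    """Overlap m between the network state x and a memory pattern xi_target,
    normalized as in the definition above: m = sum_i (xi_i - f) x_i / (N f (1 - f))."""
    N = x.shape[0]
    return float((xi_target - f) @ x) / (N * f * (1.0 - f))
```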

45 This means that the interference of LTD disappears in a sparse limit, and the model can retrieve the next pattern $\xi^{t+1}$. [sent-98, score-0.4]

46 Next, we discuss whether the information on the firing rate is indispensable for TAH or not. [sent-100, score-0.277]

47 To investigate this, we consider the case that the number of memory patterns is extensively large, i. [sent-101, score-0.238]

48 Using the equation (9), the internal potential $u_i(t)$ of the i-th neuron at time t is represented as $u_i(t) = (\xi_i^{t+1} - \xi_i^{t-1})\, m^t(t) + z_i(t)$ (10), with $z_i(t) = \sum_{\mu \neq t}^{p} (\xi_i^{\mu+1} - \xi_i^{\mu-1})\, m^{\mu}(t)$ (11). [sent-104, score-0.47]

49 $z_i(t)$ is called the “cross-talk noise”, which represents contributions from non-target patterns excluding $\xi^{t-1}$ and prevents the target pattern $\xi^{t+1}$ from being retrieved. [sent-105, score-0.212]

50 It is well-known that the covariance learning is indispensable when the sparse patterns are embedded in a network as attractors [15, 16]. [sent-107, score-0.544]

51 Under sparse coding schemes, unless the covariance learning is employed, the cross-talk noise does diverge in the large N limit. [sent-108, score-0.271]

52 The information on the firing rate for the stored patterns is not indispensable for TAH, although it is indispensable for the covariance learning. [sent-110, score-0.619]
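
A short identity, added here for clarity rather than quoted from the paper, shows why the temporal difference in TAH plays the same role as the mean subtraction in covariance learning:

\[
\xi_i^{\mu+1} - \xi_i^{\mu-1} = (\xi_i^{\mu+1} - f) - (\xi_i^{\mu-1} - f), \qquad E[\xi_i^{\mu\pm 1} - f] = 0 .
\]

The postsynaptic factor in the TAH rule is therefore automatically mean-subtracted even though the rule never uses the value of f; this is how the spike-time difference can cancel the effect of the firing rate information and keep the cross-talk noise from diverging under sparse coding.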

53 If a pattern sequence can be stored, the cross-talk noise obeys a Gaussian distribution with mean 0 and time-dependent variance $\sigma^2(t)$. [sent-112, score-0.189]

54 According to the statistical neurodynamics, we obtain the recursive equations for the overlap $m^t(t)$ between the network state x(t) and the target pattern $\xi^t$ and for the variance $\sigma^2(t)$. [sent-115, score-0.502]

55 These equations reveal that the variance $\sigma^2(t)$ of the cross-talk noise does not diverge as long as a pattern sequence can be retrieved. [sent-122, score-0.241]

56 Therefore, the variance of the cross-talk noise doesn't diverge, and this is another factor that enables the network trained with TAH to store and retrieve a pattern sequence. [sent-127, score-0.388]

57 We conclude that the difference in spike times induces LTP or LTD, and the effect of the firing rate information can be canceled out by this spike-time difference. [sent-128, score-0.381]

58 4 Results We investigate the property of our model and examine the following two conditions: a fixed threshold and a time-dependent threshold, using the statistical neurodynamics and computer simulations. [sent-129, score-0.231]

59 [Figure legend: overlap (solid), activity/f (dashed).] Figure 2 shows how the overlap $m^t(t)$ and the mean firing rate of the network, $\bar{x}(t) = \frac{1}{N}\sum_i x_i(t)$, depend on the loading rate α when the mean firing rate of the memory pattern is f = 0. [sent-130, score-1.055]

60 52, where the storage capacity is maximum with respect to the threshold θ. [sent-132, score-0.458]

61 The stored pattern sequence can be retrieved when the initial overlap $m^1(1)$ is greater than the critical value $m_C$. [sent-133, score-0.398]

62 The lower line indicates how the critical initial overlap $m_C$ depends on the loading rate α. [sent-134, score-0.403]

63 In other words, the lower line represents the basin of attraction for the retrieved sequence. [sent-135, score-0.235]

64 The upper line denotes a steady value of the overlap $m^t(t)$ when the pattern sequence is retrieved. [sent-136, score-0.399]

65 $m^t(t)$ is obtained by setting the initial state to the first memory pattern: $x(1) = \xi^1$. [sent-137, score-0.287]

66 The dashed line shows a steady value of the normalized mean firing rate of the network, $\bar{x}(t)/f$, for the pattern sequence. [sent-140, score-0.263]

67 The critical overlap (the lower line) and the overlap at the stationary state (the upper line). [sent-156, score-0.291]

68 The dashed line shows the mean firing rate of the network divided by the firing rate of the memory pattern, which is 0. [sent-157, score-0.355]

69 Next, we examine the threshold control scheme in the equation (6), where the threshold is controlled to maintain the mean firing rate of the network at f. [sent-165, score-0.618]

70 q(t) in equation (15) is equal to the mean firing rate because $q(t) = \frac{1}{N}\sum_{i=1}^{N} (x_i(t))^2 = \frac{1}{N}\sum_{i=1}^{N} x_i(t)$ under the condition $x_i(t) \in \{0, 1\}$. [sent-166, score-0.256]

71 Figure 3 shows the overlap $m^t(t)$ as a function of the loading rate α with f = 0. [sent-168, score-0.469]

72 The basin of attraction becomes larger than that of the fixed threshold condition, θ = 0. [sent-172, score-0.232]

73 This means that the pattern sequence can be retrieved even if the initial state x(1) is different from the first memory pattern $\xi^1$, that is, even if the state includes a lot of noise. [sent-175, score-0.365]
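
Putting the pieces together, a small end-to-end sketch of sequence retrieval with the adaptive threshold (all parameter values, helper code, and the noisy initial condition are illustrative choices and not the settings behind the figures):

```python
import numpy as np

def simulate(N=2000, p=10, f=0.1, T=30, flip_frac=0.05, seed=1):
    """Store a cyclic pattern sequence with TAH weights and track the retrieval overlap."""
    rng = np.random.default_rng(seed)
    xi = (rng.random((p, N)) < f).astype(int)                 # sparse patterns, eq. (4)
    post = np.roll(xi, -1, axis=0) - np.roll(xi, 1, axis=0)   # xi^{mu+1} - xi^{mu-1}
    J = post.T @ xi / (N * f * (1.0 - f))                     # reconstructed eq. (5)

    x = xi[0].copy()                                          # start near the first pattern xi^1
    flips = rng.random(N) < flip_frac
    x[flips] = 1 - x[flips]                                   # add initial noise

    for t in range(1, T + 1):
        u = J @ x
        theta = np.quantile(u, 1.0 - f)                       # keep the network activity near f, eq. (6)
        x = (u >= theta).astype(int)
        target = xi[t % p]                                    # the pattern expected at this step of the cycle
        m = float((target - f) @ x) / (N * f * (1.0 - f))     # overlap with the target pattern
        print(f"step {t:2d}: overlap = {m:.3f}")

if __name__ == "__main__":
    simulate()
```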

74 The critical overlap (the lower line) and the overlap at the stationary state (the upper line) when the threshold is changing over time to maintain the mean firing rate of the network at f. [sent-189, score-0.633]

75 The dashed line shows the mean firing rate of the network divided by the firing rate of the memory pattern, which is 0. [sent-190, score-0.266]

76 The basin of attraction becomes larger than that of the fixed threshold condition (Figure 2). [sent-192, score-0.232]

77 Finally, we discuss how the storage capacity depends on the firing rate f of the memory pattern. [sent-193, score-0.591]

78 It is known that the storage capacity diverges as $1/(f|\log f|)$ in a sparse limit, f → 0 [19, 20]. [sent-194, score-0.489]

79 Therefore, we investigate the asymptotic property of the storage capacity in a sparse limit. [sent-195, score-0.462]

80 Figure 4 shows how the storage capacity depends on the firing rate where the threshold is controlled to maintain the network activity at f (symbol ◦). [sent-196, score-0.668]

81 The storage capacity diverges as $1/(f|\log f|)$ in a sparse limit. [sent-197, score-0.489]

82 The storage capacity as a function of f in the case of maintaining activity at f (symbol ◦). [sent-202, score-0.352]

83 The storage capacity diverges as $1/(f|\log f|)$ in a sparse limit. [sent-203, score-0.489]

84 5 Discussion Using a simple neural network model, we have discussed the mechanism by which TAH enables the network to store and retrieve a pattern sequence. [sent-217, score-0.496]

85 First, we showed that the interference of LTP and LTD disappeared in a sparse coding scheme. [sent-218, score-0.269]

86 This is one factor that enables the network to store and retrieve a pattern sequence. [sent-219, score-0.334]

87 Next, we showed the mechanism by which TAH qualitatively has the same effect as the covariance learning, by analyzing the stability of the stored pattern sequence and the retrieval process by means of the statistical neurodynamics. [sent-220, score-0.407]

88 Consequently, the variance of the cross-talk noise didn't diverge, and this is another factor that enables the network trained with TAH to store and retrieve a pattern sequence. [sent-221, score-0.388]

89 We conclude that the difference in spike times induces LTP or LTD, and the effect of the firing rate information can be canceled out by this spike-time difference. [sent-222, score-0.381]

90 To improve the retrieval property of the basin of attraction, we introduced a threshold control algorithm where a threshold value was adjusted to maintain the mean firing rate of the network at that of a memory pattern. [sent-224, score-0.701]

91 We also found that the critical loading rate diverges as $1/(f|\log f|)$ in a sparse limit, f → 0. [sent-226, score-0.299]

92 Here, we compare the storage capacity of our model with that of the model using the covariance learning (Figure 5). [sent-227, score-0.417]

93 We calculate the storage capacity $\alpha_C^{\mathrm{COV}}$ from their dynamical equations and compare it with that of our model, $\alpha_C^{\mathrm{TAH}}$, using the ratio $\alpha_C^{\mathrm{TAH}}/\alpha_C^{\mathrm{COV}}$. [sent-229, score-0.379]

94 The contribution of LTD reduces the storage capacity of our model to half. [sent-233, score-0.352]

95 Therefore, in terms of the storage capacity, the covariance learning is better than TAH. [sent-234, score-0.245]

96 But, as we discussed previously, the information on the firing rate is not indispensable in TAH. [sent-235, score-0.265]

97 The comparison of the storage capacity of our model with that of the model using the covariance learning. [sent-242, score-0.417]

98 As f decreases, the ratio of storage capacity approaches 0. [sent-243, score-0.352]

99 Synaptic modifications in cultured hippocampal neurons: Dependence on spike timing, synaptic strength, and postsynaptic cell type. [sent-257, score-0.283]

100 Temporally asymmetric hebbian learning, spike timing and neuronal response variability. [sent-283, score-0.349]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('tah', 0.413), ('ltd', 0.381), ('ltp', 0.344), ('ring', 0.246), ('storage', 0.18), ('capacity', 0.172), ('indispensable', 0.149), ('erf', 0.149), ('mt', 0.144), ('ui', 0.136), ('loading', 0.127), ('synaptic', 0.12), ('jij', 0.12), ('memory', 0.111), ('overlap', 0.109), ('interference', 0.109), ('spike', 0.106), ('threshold', 0.106), ('retrieve', 0.102), ('patterns', 0.1), ('hebbian', 0.097), ('plasticity', 0.091), ('temporally', 0.089), ('rate', 0.089), ('network', 0.084), ('sparse', 0.083), ('pattern', 0.081), ('erence', 0.08), ('di', 0.079), ('retrieval', 0.079), ('neuron', 0.076), ('retrieved', 0.072), ('associative', 0.071), ('neurodynamics', 0.068), ('store', 0.067), ('stored', 0.067), ('covariance', 0.065), ('attraction', 0.063), ('basin', 0.063), ('asymmetric', 0.059), ('postsynaptic', 0.057), ('res', 0.057), ('ect', 0.055), ('xi', 0.055), ('diverges', 0.054), ('canceled', 0.052), ('munro', 0.052), ('mechanism', 0.051), ('diverge', 0.051), ('timing', 0.05), ('occurs', 0.049), ('biological', 0.044), ('coding', 0.043), ('numerically', 0.042), ('critical', 0.041), ('neurons', 0.04), ('discuss', 0.039), ('embedded', 0.038), ('sparsely', 0.038), ('maintain', 0.037), ('neuronal', 0.037), ('line', 0.037), ('qualitatively', 0.036), ('prob', 0.036), ('saitama', 0.036), ('disappeared', 0.034), ('kitano', 0.034), ('okada', 0.034), ('cov', 0.034), ('state', 0.032), ('equation', 0.031), ('zi', 0.031), ('ective', 0.031), ('examine', 0.03), ('ndings', 0.03), ('hernandez', 0.03), ('kempter', 0.03), ('stdp', 0.03), ('dashed', 0.03), ('noise', 0.029), ('internal', 0.029), ('induces', 0.028), ('sequence', 0.028), ('equations', 0.027), ('investigate', 0.027), ('discussed', 0.027), ('rule', 0.027), ('spiking', 0.027), ('mean', 0.026), ('physical', 0.026), ('potentiation', 0.025), ('ah', 0.025), ('attractors', 0.025), ('depression', 0.025), ('gerstner', 0.025), ('disappears', 0.025), ('variance', 0.025), ('consisting', 0.023), ('doesn', 0.023), ('riken', 0.023)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.99999988 166 nips-2001-Self-regulation Mechanism of Temporally Asymmetric Hebbian Plasticity

Author: N. Matsumoto, M. Okada

Abstract: Recent biological experimental findings have shown that the synaptic plasticity depends on the relative timing of the pre- and postsynaptic spikes which determines whether Long Term Potentiation (LTP) occurs or Long Term Depression (LTD) does. The synaptic plasticity has been called “Temporally Asymmetric Hebbian plasticity (TAH)”. Many authors have numerically shown that spatiotemporal patterns can be stored in neural networks. However, the mathematical mechanism for storage of the spatio-temporal patterns is still unknown, especially the effects of LTD. In this paper, we employ a simple neural network model and show that interference of LTP and LTD disappears in a sparse coding scheme. On the other hand, it is known that the covariance learning is indispensable for storing sparse patterns. We also show that TAH qualitatively has the same effect as the covariance learning when spatio-temporal patterns are embedded in the network. 1

2 0.16110192 150 nips-2001-Probabilistic Inference of Hand Motion from Neural Activity in Motor Cortex

Author: Yun Gao, Michael J. Black, Elie Bienenstock, Shy Shoham, John P. Donoghue

Abstract: Statistical learning and probabilistic inference techniques are used to infer the hand position of a subject from multi-electrode recordings of neural activity in motor cortex. First, an array of electrodes provides training data of neural firing conditioned on hand kinematics. We learn a nonparametric representation of this firing activity using a Bayesian model and rigorously compare it with previous models using cross-validation. Second, we infer a posterior probability distribution over hand motion conditioned on a sequence of neural test data using Bayesian inference. The learned firing models of multiple cells are used to define a nonGaussian likelihood term which is combined with a prior probability for the kinematics. A particle filtering method is used to represent, update, and propagate the posterior distribution over time. The approach is compared with traditional linear filtering methods; the results suggest that it may be appropriate for neural prosthetic applications.

3 0.16022617 197 nips-2001-Why Neuronal Dynamics Should Control Synaptic Learning Rules

Author: Jesper Tegnér, Ádám Kepecs

Abstract: Hebbian learning rules are generally formulated as static rules. Under changing condition (e.g. neuromodulation, input statistics) most rules are sensitive to parameters. In particular, recent work has focused on two different formulations of spike-timing-dependent plasticity rules. Additive STDP [1] is remarkably versatile but also very fragile, whereas multiplicative STDP [2, 3] is more robust but lacks attractive features such as synaptic competition and rate stabilization. Here we address the problem of robustness in the additive STDP rule. We derive an adaptive control scheme, where the learning function is under fast dynamic control by postsynaptic activity to stabilize learning under a variety of conditions. Such a control scheme can be implemented using known biophysical mechanisms of synapses. We show that this adaptive rule makes the addit ive STDP more robust. Finally, we give an example how meta plasticity of the adaptive rule can be used to guide STDP into different type of learning regimes. 1

4 0.14236814 37 nips-2001-Associative memory in realistic neuronal networks

Author: Peter E. Latham

Abstract: Almost two decades ago , Hopfield [1] showed that networks of highly reduced model neurons can exhibit multiple attracting fixed points, thus providing a substrate for associative memory. It is still not clear, however, whether realistic neuronal networks can support multiple attractors. The main difficulty is that neuronal networks in vivo exhibit a stable background state at low firing rate, typically a few Hz. Embedding attractor is easy; doing so without destabilizing the background is not. Previous work [2, 3] focused on the sparse coding limit, in which a vanishingly small number of neurons are involved in any memory. Here we investigate the case in which the number of neurons involved in a memory scales with the number of neurons in the network. In contrast to the sparse coding limit, we find that multiple attractors can co-exist robustly with a stable background state. Mean field theory is used to understand how the behavior of the network scales with its parameters, and simulations with analog neurons are presented. One of the most important features of the nervous system is its ability to perform associative memory. It is generally believed that associative memory is implemented using attractor networks - experimental studies point in that direction [4- 7], and there are virtually no competing theoretical models. Perhaps surprisingly, however, it is still an open theoretical question whether attractors can exist in realistic neuronal networks. The

5 0.13619876 23 nips-2001-A theory of neural integration in the head-direction system

Author: Richard Hahnloser, Xiaohui Xie, H. S. Seung

Abstract: Integration in the head-direction system is a computation by which horizontal angular head velocity signals from the vestibular nuclei are integrated to yield a neural representation of head direction. In the thalamus, the postsubiculum and the mammillary nuclei, the head-direction representation has the form of a place code: neurons have a preferred head direction in which their firing is maximal [Blair and Sharp, 1995, Blair et al., 1998, ?]. Integration is a difficult computation, given that head-velocities can vary over a large range. Previous models of the head-direction system relied on the assumption that the integration is achieved in a firing-rate-based attractor network with a ring structure. In order to correctly integrate head-velocity signals during high-speed head rotations, very fast synaptic dynamics had to be assumed. Here we address the question whether integration in the head-direction system is possible with slow synapses, for example excitatory NMDA and inhibitory GABA(B) type synapses. For neural networks with such slow synapses, rate-based dynamics are a good approximation of spiking neurons [Ermentrout, 1994]. We find that correct integration during high-speed head rotations imposes strong constraints on possible network architectures.

6 0.11561381 72 nips-2001-Exact differential equation population dynamics for integrate-and-fire neurons

7 0.11319108 174 nips-2001-Spike timing and the coding of naturalistic sounds in a central auditory area of songbirds

8 0.11134611 49 nips-2001-Citcuits for VLSI Implementation of Temporally Asymmetric Hebbian Learning

9 0.092173457 27 nips-2001-Activity Driven Adaptive Stochastic Resonance

10 0.074259154 141 nips-2001-Orientation-Selective aVLSI Spiking Neurons

11 0.067673348 112 nips-2001-Learning Spike-Based Correlations and Conditional Probabilities in Silicon

12 0.057794049 160 nips-2001-Reinforcement Learning and Time Perception -- a Model of Animal Experiments

13 0.057774045 131 nips-2001-Neural Implementation of Bayesian Inference in Population Codes

14 0.057494644 82 nips-2001-Generating velocity tuning by asymmetric recurrent connections

15 0.052827492 73 nips-2001-Eye movements and the maturation of cortical orientation selectivity

16 0.050591007 2 nips-2001-3 state neurons for contextual processing

17 0.050117515 78 nips-2001-Fragment Completion in Humans and Machines

18 0.04873484 12 nips-2001-A Model of the Phonological Loop: Generalization and Binding

19 0.048573434 178 nips-2001-TAP Gibbs Free Energy, Belief Propagation and Sparsity

20 0.04626764 139 nips-2001-Online Learning with Kernels


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.144), (1, -0.213), (2, -0.099), (3, 0.05), (4, 0.083), (5, 0.024), (6, 0.114), (7, -0.021), (8, -0.041), (9, -0.009), (10, 0.026), (11, -0.1), (12, 0.016), (13, -0.035), (14, -0.043), (15, 0.035), (16, -0.053), (17, -0.091), (18, 0.037), (19, 0.002), (20, 0.039), (21, 0.01), (22, -0.042), (23, 0.075), (24, -0.063), (25, -0.068), (26, 0.018), (27, -0.102), (28, 0.07), (29, -0.082), (30, -0.075), (31, 0.098), (32, -0.014), (33, 0.062), (34, 0.066), (35, -0.104), (36, 0.106), (37, -0.255), (38, -0.028), (39, 0.12), (40, 0.005), (41, 0.065), (42, -0.122), (43, -0.046), (44, 0.071), (45, -0.016), (46, 0.057), (47, 0.018), (48, -0.054), (49, -0.126)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.95559084 166 nips-2001-Self-regulation Mechanism of Temporally Asymmetric Hebbian Plasticity

Author: N. Matsumoto, M. Okada

Abstract: Recent biological experimental findings have shown that the synaptic plasticity depends on the relative timing of the pre- and postsynaptic spikes which determines whether Long Term Potentiation (LTP) occurs or Long Term Depression (LTD) does. The synaptic plasticity has been called “Temporally Asymmetric Hebbian plasticity (TAH)”. Many authors have numerically shown that spatiotemporal patterns can be stored in neural networks. However, the mathematical mechanism for storage of the spatio-temporal patterns is still unknown, especially the effects of LTD. In this paper, we employ a simple neural network model and show that interference of LTP and LTD disappears in a sparse coding scheme. On the other hand, it is known that the covariance learning is indispensable for storing sparse patterns. We also show that TAH qualitatively has the same effect as the covariance learning when spatio-temporal patterns are embedded in the network. 1

2 0.63575047 23 nips-2001-A theory of neural integration in the head-direction system

Author: Richard Hahnloser, Xiaohui Xie, H. S. Seung

Abstract: Integration in the head-direction system is a computation by which horizontal angular head velocity signals from the vestibular nuclei are integrated to yield a neural representation of head direction. In the thalamus, the postsubiculum and the mammillary nuclei, the head-direction representation has the form of a place code: neurons have a preferred head direction in which their firing is maximal [Blair and Sharp, 1995, Blair et al., 1998, ?]. Integration is a difficult computation, given that head-velocities can vary over a large range. Previous models of the head-direction system relied on the assumption that the integration is achieved in a firing-rate-based attractor network with a ring structure. In order to correctly integrate head-velocity signals during high-speed head rotations, very fast synaptic dynamics had to be assumed. Here we address the question whether integration in the head-direction system is possible with slow synapses, for example excitatory NMDA and inhibitory GABA(B) type synapses. For neural networks with such slow synapses, rate-based dynamics are a good approximation of spiking neurons [Ermentrout, 1994]. We find that correct integration during high-speed head rotations imposes strong constraints on possible network architectures.

3 0.63228351 197 nips-2001-Why Neuronal Dynamics Should Control Synaptic Learning Rules

Author: Jesper Tegnér, Ádám Kepecs

Abstract: Hebbian learning rules are generally formulated as static rules. Under changing condition (e.g. neuromodulation, input statistics) most rules are sensitive to parameters. In particular, recent work has focused on two different formulations of spike-timing-dependent plasticity rules. Additive STDP [1] is remarkably versatile but also very fragile, whereas multiplicative STDP [2, 3] is more robust but lacks attractive features such as synaptic competition and rate stabilization. Here we address the problem of robustness in the additive STDP rule. We derive an adaptive control scheme, where the learning function is under fast dynamic control by postsynaptic activity to stabilize learning under a variety of conditions. Such a control scheme can be implemented using known biophysical mechanisms of synapses. We show that this adaptive rule makes the addit ive STDP more robust. Finally, we give an example how meta plasticity of the adaptive rule can be used to guide STDP into different type of learning regimes. 1

4 0.53202718 150 nips-2001-Probabilistic Inference of Hand Motion from Neural Activity in Motor Cortex

Author: Yun Gao, Michael J. Black, Elie Bienenstock, Shy Shoham, John P. Donoghue

Abstract: Statistical learning and probabilistic inference techniques are used to infer the hand position of a subject from multi-electrode recordings of neural activity in motor cortex. First, an array of electrodes provides training data of neural firing conditioned on hand kinematics. We learn a nonparametric representation of this firing activity using a Bayesian model and rigorously compare it with previous models using cross-validation. Second, we infer a posterior probability distribution over hand motion conditioned on a sequence of neural test data using Bayesian inference. The learned firing models of multiple cells are used to define a nonGaussian likelihood term which is combined with a prior probability for the kinematics. A particle filtering method is used to represent, update, and propagate the posterior distribution over time. The approach is compared with traditional linear filtering methods; the results suggest that it may be appropriate for neural prosthetic applications.

5 0.50645053 49 nips-2001-Citcuits for VLSI Implementation of Temporally Asymmetric Hebbian Learning

Author: A. Bofill, D. P. Thompson, Alan F. Murray

Abstract: Experimental data has shown that synaptic strength modification in some types of biological neurons depends upon precise spike timing differences between presynaptic and postsynaptic spikes. Several temporally-asymmetric Hebbian learning rules motivated by this data have been proposed. We argue that such learning rules are suitable for analog VLSI implementation. We describe an easily tunable circuit to modify the weight of a silicon spiking neuron according to those learning rules. Test results from the fabrication of the circuit using a 0.6 µm CMOS process are given. 1

6 0.42438382 37 nips-2001-Associative memory in realistic neuronal networks

7 0.3998045 72 nips-2001-Exact differential equation population dynamics for integrate-and-fire neurons

8 0.39098385 82 nips-2001-Generating velocity tuning by asymmetric recurrent connections

9 0.37279972 165 nips-2001-Scaling Laws and Local Minima in Hebbian ICA

10 0.36455923 112 nips-2001-Learning Spike-Based Correlations and Conditional Probabilities in Silicon

11 0.35661465 160 nips-2001-Reinforcement Learning and Time Perception -- a Model of Animal Experiments

12 0.34698921 174 nips-2001-Spike timing and the coding of naturalistic sounds in a central auditory area of songbirds

13 0.34661871 27 nips-2001-Activity Driven Adaptive Stochastic Resonance

14 0.30451015 57 nips-2001-Correlation Codes in Neuronal Populations

15 0.28216115 184 nips-2001-The Intelligent surfer: Probabilistic Combination of Link and Content Information in PageRank

16 0.26619142 141 nips-2001-Orientation-Selective aVLSI Spiking Neurons

17 0.26330075 3 nips-2001-ACh, Uncertainty, and Cortical Inference

18 0.26202285 182 nips-2001-The Fidelity of Local Ordinal Encoding

19 0.25826076 12 nips-2001-A Model of the Phonological Loop: Generalization and Binding

20 0.25088722 131 nips-2001-Neural Implementation of Bayesian Inference in Population Codes


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(8, 0.303), (14, 0.086), (19, 0.023), (20, 0.023), (27, 0.138), (30, 0.078), (38, 0.031), (59, 0.033), (72, 0.039), (79, 0.044), (91, 0.098)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.81269979 166 nips-2001-Self-regulation Mechanism of Temporally Asymmetric Hebbian Plasticity

Author: N. Matsumoto, M. Okada

Abstract: Recent biological experimental findings have shown that the synaptic plasticity depends on the relative timing of the pre- and postsynaptic spikes which determines whether Long Term Potentiation (LTP) occurs or Long Term Depression (LTD) does. The synaptic plasticity has been called “Temporally Asymmetric Hebbian plasticity (TAH)”. Many authors have numerically shown that spatiotemporal patterns can be stored in neural networks. However, the mathematical mechanism for storage of the spatio-temporal patterns is still unknown, especially the effects of LTD. In this paper, we employ a simple neural network model and show that interference of LTP and LTD disappears in a sparse coding scheme. On the other hand, it is known that the covariance learning is indispensable for storing sparse patterns. We also show that TAH qualitatively has the same effect as the covariance learning when spatio-temporal patterns are embedded in the network. 1

2 0.79763484 70 nips-2001-Estimating Car Insurance Premia: a Case Study in High-Dimensional Data Inference

Author: Nicolas Chapados, Yoshua Bengio, Pascal Vincent, Joumana Ghosn, Charles Dugas, Ichiro Takeuchi, Linyan Meng

Abstract: Estimating insurance premia from data is a difficult regression problem for several reasons: the large number of variables, many of which are .discrete, and the very peculiar shape of the noise distribution, asymmetric with fat tails, with a large majority zeros and a few unreliable and very large values. We compare several machine learning methods for estimating insurance premia, and test them on a large data base of car insurance policies. We find that function approximation methods that do not optimize a squared loss, like Support Vector Machines regression, do not work well in this context. Compared methods include decision trees and generalized linear models. The best results are obtained with a mixture of experts, which better identifies the least and most risky contracts, and allows to reduce the median premium by charging more to the most risky customers. 1

3 0.55223817 27 nips-2001-Activity Driven Adaptive Stochastic Resonance

Author: Gregor Wenning, Klaus Obermayer

Abstract: Cortical neurons might be considered as threshold elements integrating in parallel many excitatory and inhibitory inputs. Due to the apparent variability of cortical spike trains this yields a strongly fluctuating membrane potential, such that threshold crossings are highly irregular. Here we study how a neuron could maximize its sensitivity w.r.t. a relatively small subset of excitatory input. Weak signals embedded in fluctuations is the natural realm of stochastic resonance. The neuron's response is described in a hazard-function approximation applied to an Ornstein-Uhlenbeck process. We analytically derive an optimality criterium and give a learning rule for the adjustment of the membrane fluctuations, such that the sensitivity is maximal exploiting stochastic resonance. We show that adaptation depends only on quantities that could easily be estimated locally (in space and time) by the neuron. The main results are compared with simulations of a biophysically more realistic neuron model. 1

4 0.55193406 137 nips-2001-On the Convergence of Leveraging

Author: Gunnar Rätsch, Sebastian Mika, Manfred K. Warmuth

Abstract: We give an unified convergence analysis of ensemble learning methods including e.g. AdaBoost, Logistic Regression and the Least-SquareBoost algorithm for regression. These methods have in common that they iteratively call a base learning algorithm which returns hypotheses that are then linearly combined. We show that these methods are related to the Gauss-Southwell method known from numerical optimization and state non-asymptotical convergence results for all these methods. Our analysis includes -norm regularized cost functions leading to a clean and general way to regularize ensemble learning.

5 0.55174321 197 nips-2001-Why Neuronal Dynamics Should Control Synaptic Learning Rules

Author: Jesper Tegnér, Ádám Kepecs

Abstract: Hebbian learning rules are generally formulated as static rules. Under changing condition (e.g. neuromodulation, input statistics) most rules are sensitive to parameters. In particular, recent work has focused on two different formulations of spike-timing-dependent plasticity rules. Additive STDP [1] is remarkably versatile but also very fragile, whereas multiplicative STDP [2, 3] is more robust but lacks attractive features such as synaptic competition and rate stabilization. Here we address the problem of robustness in the additive STDP rule. We derive an adaptive control scheme, where the learning function is under fast dynamic control by postsynaptic activity to stabilize learning under a variety of conditions. Such a control scheme can be implemented using known biophysical mechanisms of synapses. We show that this adaptive rule makes the addit ive STDP more robust. Finally, we give an example how meta plasticity of the adaptive rule can be used to guide STDP into different type of learning regimes. 1

6 0.54963791 8 nips-2001-A General Greedy Approximation Algorithm with Applications

7 0.54760242 92 nips-2001-Incorporating Invariances in Non-Linear Support Vector Machines

8 0.53919029 77 nips-2001-Fast and Robust Classification using Asymmetric AdaBoost and a Detector Cascade

9 0.53881204 38 nips-2001-Asymptotic Universality for Learning Curves of Support Vector Machines

10 0.53808886 9 nips-2001-A Generalization of Principal Components Analysis to the Exponential Family

11 0.53789544 60 nips-2001-Discriminative Direction for Kernel Classifiers

12 0.53744429 131 nips-2001-Neural Implementation of Bayesian Inference in Population Codes

13 0.53710985 74 nips-2001-Face Recognition Using Kernel Methods

14 0.53623575 138 nips-2001-On the Generalization Ability of On-Line Learning Algorithms

15 0.53613275 63 nips-2001-Dynamic Time-Alignment Kernel in Support Vector Machine

16 0.53498393 172 nips-2001-Speech Recognition using SVMs

17 0.53487325 13 nips-2001-A Natural Policy Gradient

18 0.53463531 103 nips-2001-Kernel Feature Spaces and Nonlinear Blind Souce Separation

19 0.53454441 56 nips-2001-Convolution Kernels for Natural Language

20 0.53302968 58 nips-2001-Covariance Kernels from Bayesian Generative Models