nips nips2005 nips2005-99 knowledge-graph by maker-knowledge-mining

99 nips-2005-Integrate-and-Fire models with adaptation are good enough


Source: pdf

Author: Renaud Jolivet, Alexander Rauch, Hans-rudolf Lüscher, Wulfram Gerstner

Abstract: Integrate-and-Fire-type models are usually criticized because of their simplicity. On the other hand, the Integrate-and-Fire model is the basis of most of the theoretical studies on spiking neuron models. Here, we develop a sequential procedure to quantitatively evaluate an equivalent Integrate-and-Fire-type model based on intracellular recordings of cortical pyramidal neurons. We find that the resulting effective model is sufficient to predict the spike train of the real pyramidal neuron with high accuracy. In in vivo-like regimes, predicted and recorded traces are almost indistinguishable and a significant part of the spikes can be predicted at the correct timing. Slow processes like spike-frequency adaptation are shown to be a key feature in this context since they are necessary for the model to connect between different driving regimes. 1

Reference: text


Summary: the most important sentences generated by tfidf model

sentIndex sentText sentNum sentScore

1 Integrate-and-Fire models with adaptation are good enough: predicting spike times under random current injection. Renaud Jolivet∗, Brain Mind Institute, EPFL, CH-1015 Lausanne, Switzerland. [sent-1, score-0.748]

2 On the other hand, the Integrate-and-Fire model is the basis of most of the theoretical studies on spiking neuron models. [sent-11, score-0.321]

3 Here, we develop a sequential procedure to quantitatively evaluate an equivalent Integrate-and-Fire-type model based on intracellular recordings of cortical pyramidal neurons. [sent-12, score-0.397]

4 We find that the resulting effective model is sufficient to predict the spike train of the real pyramidal neuron with high accuracy. [sent-13, score-0.904]

5 In in vivo-like regimes, predicted and recorded traces are almost indistinguishable and a significant part of the spikes can be predicted at the correct timing. [sent-14, score-0.466]

6 Slow processes like spike-frequency adaptation are shown to be a key feature in this context since they are necessary for the model to connect between different driving regimes. [sent-15, score-0.237]

7 This is a question of importance since the I&F model is one of the most commonly used spiking neuron models in theoretical studies as well as in the machine learning community (see [2-3] for a review). [sent-17, score-0.368]

8 It is believed to be much too simple to capture the firing dynamics of real neurons beyond a very rough and conceptual description of input integration and spike initiation. [sent-19, score-0.334]

9 Nevertheless, recent years have seen several groups reporting that this type of model yields quantitative predictions of the activity of real neurons. [sent-20, score-0.194]

10 Rauch and colleagues have shown that I&F-type models (with adaptation) reliably predict the mean firing rate of cortical neurons [4]. (Footnote: ∗ homepage: http://icwww. ) [sent-21, score-0.143]

11 Keat and colleagues have shown that a similar model is able to predict almost exactly the timing of spikes of neurons in the visual pathway [5]. [sent-24, score-0.499]

12 However, the question of how the predictions of I&F-type models compare to the precise structure of spike trains in the cortex is still open. [sent-25, score-0.557]

13 Indeed, cortical pyramidal neurons are known to produce spike trains whose reliability depends strongly on the input scenario [6]. [sent-26, score-1.067]

14 Firstly, we will show that there exists a systematic way to extract the relevant parameters of an I&F-type model from intracellular recordings. [sent-28, score-0.133]

15 Secondly, we will show by a quantitative evaluation of the model's performance that the quality of simple threshold models is surprisingly good, close to the intrinsic reliability of real neurons. [sent-32, score-0.723]

16 We will try to convince the reader that, given the addition of a slow process, the I&F model can be considered good enough for pyramidal neurons of the neocortex under random current injection. [sent-33, score-0.463]

17 Layer 5 pyramidal neurons of the rat neocortex were recorded intracellularly in vitro while stimulated at the soma by a randomly fluctuating current generated by an Ornstein-Uhlenbeck (OU) process with a 1 ms autocorrelation time. [sent-35, score-0.57]
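The stimulation above can be sketched as follows; a minimal NumPy sketch of an OU process with a 1 ms autocorrelation time, using the exact discrete-time update (the function name `ou_current` and the time step are our assumptions, not the authors' code):

```python
import numpy as np

def ou_current(mu_i, sigma_i, t_total_ms, dt_ms=0.1, tau_ms=1.0, seed=None):
    """Fluctuating current from an Ornstein-Uhlenbeck process with
    mean mu_i, stationary std sigma_i, and autocorrelation time tau_ms."""
    rng = np.random.default_rng(seed)
    n = int(t_total_ms / dt_ms)
    i = np.empty(n)
    i[0] = mu_i
    # Exact discretization of the OU stochastic differential equation
    a = np.exp(-dt_ms / tau_ms)
    noise_scale = sigma_i * np.sqrt(1.0 - a * a)
    for k in range(1, n):
        i[k] = mu_i + a * (i[k - 1] - mu_i) + noise_scale * rng.standard_normal()
    return i
```

Fixing the seed reproduces the same "frozen-noise" current, which is exactly the repeated-injection protocol the paper later uses to measure intrinsic reliability.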

18 Both the mean µI and the variance σI² of the OU process were varied in order to sample the response of the neurons to various levels of tonic and noisy inputs. [sent-36, score-0.224]

19 A subset of these recordings was used to construct, separately for each recorded neuron, a generalized I&F-type model that we formulated in the framework of the Spike Response Model [3]. [sent-38, score-0.177]

20 Definition of the model: The Spike Response Model (SRM) is written

$$u(t) = \eta(t - \hat{t}) + \int_0^{+\infty} \kappa(s)\, I(t - s)\, ds \qquad (1)$$

with u the membrane voltage of the neuron, I the external driving current, and t̂ the last firing time. [sent-40, score-0.47]
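Equation (1) is a convolution of the input with the kernel κ plus a spike afterpotential η; a minimal discretized NumPy sketch (the function name and the Euler discretization are our assumptions, not the authors' implementation):

```python
import numpy as np

def srm_voltage(i_ext, eta, kappa, last_spike_idx, dt=0.1):
    """Discretized SRM voltage, Eq. (1):
    u(t) = eta(t - t_hat) + integral_0^inf kappa(s) I(t - s) ds."""
    # Linear response: convolve the driving current with the kernel kappa
    u = np.convolve(i_ext, kappa)[:len(i_ext)] * dt
    # Add the spike afterpotential eta, aligned on the last spike time t_hat
    m = min(len(eta), len(i_ext) - last_spike_idx)
    u[last_spike_idx:last_spike_idx + m] += eta[:m]
    return u
```

For a constant current and an exponential κ with time constant τ, the voltage settles at roughly τ times the drive, which is a quick sanity check on the discretization.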

21 The kernel η acts as a template for the shape of spikes (usually highly stereotyped). [sent-42, score-0.151]

22 As in the I&F model, the model neuron fires each time the membrane voltage u crosses the threshold ϑ from below:

$$\text{if } u(t) \ge \vartheta(t) \text{ and } \frac{du}{dt}(t) \ge \frac{d\vartheta}{dt}(t), \text{ then } \hat{t} = t \qquad (2)$$

Here, the threshold includes a mechanism of spike-frequency adaptation. [sent-43, score-0.741]

23 ϑ is given by the following equation

$$\frac{d\vartheta}{dt} = -\frac{\vartheta - \vartheta_0}{\tau_\vartheta} + A_\vartheta \sum_k \delta(t - t_k) \qquad (3)$$

Each time a spike is fired, the threshold ϑ is increased by a fixed amount A_ϑ. [sent-44, score-0.516]
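The threshold dynamics of Equations (2)-(3) can be sketched as a simple Euler integration; a hypothetical helper in which the crossing condition on dϑ/dt is simplified to du/dt ≥ 0 between threshold jumps:

```python
import numpy as np

def run_adaptive_threshold(u, theta0, a_theta, tau_theta, dt=0.1):
    """Spike generation with the adapting threshold of Eq. (3):
    theta relaxes back to theta0 with time constant tau_theta and
    jumps by a_theta at every spike."""
    theta = theta0
    spikes = []
    for k in range(1, len(u)):
        # Relax theta toward its resting value theta0 (Euler step)
        theta += dt * (theta0 - theta) / tau_theta
        # Fire when u reaches theta while rising (simplified Eq. 2)
        if u[k] >= theta and u[k] - u[k - 1] >= 0.0:
            spikes.append(k)
            theta += a_theta  # spike-triggered threshold increment
    return np.array(spikes), theta
```

On a slow voltage ramp the model fires at successively higher thresholds, which is the adaptation effect the text describes.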

24 During discharge at rate f, the threshold fluctuates around the average value

$$\bar{\vartheta} \approx \vartheta_0 + \alpha f \qquad (4)$$

where α = A_ϑ τ_ϑ. [sent-47, score-0.155]
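Equation (4) follows from averaging Equation (3) in the stationary firing regime; a short derivation sketch:

```latex
% Average Eq. (3) over a long window in which the neuron fires at rate f.
% The delta terms contribute A_\vartheta f on average; in the stationary
% regime the mean threshold is constant, so d\bar{\vartheta}/dt = 0:
0 = -\frac{\bar{\vartheta} - \vartheta_0}{\tau_\vartheta} + A_\vartheta f
\quad\Longrightarrow\quad
\bar{\vartheta} = \vartheta_0 + A_\vartheta \tau_\vartheta f
             = \vartheta_0 + \alpha f
```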

25 This type of adaptation mechanism has been shown to constitute a universal model for spike-frequency adaptation [10] and has already been applied in a similar context [11]. [sent-48, score-0.349]

26 During the model estimation, we use as a first step a traditional constant threshold, denoted ϑ(t) = ϑcst, which is then transformed into the adaptive threshold of Equation (3) by a procedure detailed below. [sent-49, score-0.357]

27 Extract the kernel η from a sample voltage recording by spike triggered averaging. [sent-55, score-0.443]
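Spike-triggered averaging amounts to averaging voltage windows around each spike time; a minimal sketch with hypothetical names and window sizes:

```python
import numpy as np

def spike_triggered_average(v, spike_idx, n_before=20, n_after=300):
    """Estimate the spike-shape kernel eta by averaging the recorded
    voltage in a fixed window around every spike time (in samples)."""
    windows = [v[k - n_before:k + n_after]
               for k in spike_idx
               if k - n_before >= 0 and k + n_after <= len(v)]
    return np.mean(windows, axis=0)
```

With a stereotyped spike shape, the average converges to the template as noise cancels across spikes.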

28 For the sake of simplicity, we assume that the mean drive µI = 0. [sent-56, score-0.109]

29 Subtract η from the voltage recording to isolate the subthreshold fluctuations. [sent-58, score-0.233]

30 This step involves a comparison between the subthreshold fluctuations and the corresponding input current. [sent-61, score-0.188]

31 Plot the threshold ϑcst as a function of the firing frequency f of the neuron and run a linear regression. [sent-68, score-0.392]
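The regression in this step is a one-line polynomial fit; a sketch assuming the linear relation of Equation (4), ϑcst = ϑ0 + αf (function name is ours):

```python
import numpy as np

def fit_adaptive_threshold(rates_hz, theta_cst):
    """Linear regression theta_cst = theta0 + alpha * f (Eq. 4).
    The slope alpha = A_theta * tau_theta then fixes the adaptation
    parameters once tau_theta is chosen."""
    alpha, theta0 = np.polyfit(rates_hz, theta_cst, 1)
    return theta0, alpha
```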

32 The double exponential shape of κ is due to the coupling between somatic and dendritic compartments [15]. [sent-73, score-0.151]

33 Evaluation of performances: The performance of the model is evaluated with the coincidence factor Γ [16]. [sent-77, score-0.339]

34 The factor N = 1−2ν∆ normalizes Γ to a maximum value Γ = 1 which is reached if and only if the spike train of the SRM reproduces exactly that of the cell. [sent-79, score-0.49]

35 A homogeneous Poisson process with the same number of spikes as the SRM would yield Γ = 0. [sent-80, score-0.151]

36 We compute the coincidence factor Γ by comparing the two complete spike trains as in [7]. [sent-81, score-0.558]
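One common form of the coincidence factor with the normalization described above can be sketched as follows; details such as how multiple matches within the window are counted vary between implementations, so treat this as an illustrative assumption rather than the authors' exact code:

```python
import numpy as np

def coincidence_factor(t_data, t_model, t_total, delta=2.0):
    """Coincidence factor Gamma: counts data spikes matched by a model
    spike within +/- delta (ms), subtracts the chance coincidences of a
    Poisson train at the model's rate, and normalizes by N = 1 - 2*nu*delta
    so Gamma = 1 for a perfect match and Gamma = 0 for Poisson."""
    t_data = np.asarray(t_data, float)
    t_model = np.asarray(t_model, float)
    n_coinc = sum(bool(np.any(np.abs(t_model - t) <= delta)) for t in t_data)
    nu = len(t_model) / t_total                 # model firing rate
    expected = 2.0 * nu * delta * len(t_data)   # chance coincidences
    norm = 1.0 - 2.0 * nu * delta               # factor N in the text
    denom = 0.5 * (len(t_data) + len(t_model))
    return (n_coinc - expected) / (denom * norm)
```

Identical spike trains give Γ = 1, while a train with no coincidences comes out negative, below the chance level Γ = 0.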

37 The optimal constant threshold ϑcst is plotted versus the output frequency f (symbols). [sent-93, score-0.248]

38 Results: Figure 2 shows a direct comparison between predicted and recorded spike trains for a typical neuron. [sent-96, score-0.628]

39 Even when zooming on the subthreshold regime, differences are in the range of a few millivolts only (B). [sent-98, score-0.205]

40 The spike dynamics is correctly predicted apart from a short period of time just after a spike is emitted (C). [sent-99, score-0.864]

41 This is due to the fact that the kernel η was extracted for a mean drive µI = 0. [sent-100, score-0.109]

42 Here, the mean is much larger than 0 and the neuron has already adapted to this new regime. [sent-101, score-0.274]

43 Before moving to a quantitative estimate of the quality of the predictions of our model, we need to understand what kind of limits are imposed on predictions by the modelled neurons themselves. [sent-109, score-0.428]

44 It is well known that pyramidal neurons of the cortex respond with very different reliability depending on the type of stimulation they receive [6]. [sent-110, score-0.66]

45 Neurons tend to fire regularly, but without conserving the exact timing of spikes, in response to constant or quasi-constant input current. [sent-111, score-0.318]

46 On the other hand, they fire irregularly but reliably in terms of spike timing in response to fluctuating current. [sent-112, score-0.491]

47 We do not expect our model to yield better predictions than the intrinsic reliability of the modelled neuron. [sent-113, score-0.452]

48 To evaluate the intrinsic reliability of the pyramidal neurons, we repeated injection of the same OU process, i.e. injection of processes with the same seed, and computed Γ between the repeated spike trains obtained in response to this procedure. [sent-114, score-0.721] [sent-116, score-0.745]

50 Figure 3A shows a surface plot of the intrinsic reliability Γn→n of a typical neuron (the subscript n → n is written for neuron to itself). [sent-117, score-0.79]

51 It is plotted versus the parameters of the stimulation, the current mean drive µI and its standard deviation σI . [sent-118, score-0.202]

52 We find that the mean drive µI has almost no impact on Γn→n (measured cross-correlation coefficient r = 0. [sent-119, score-0.109]

53 On the other hand, σ I has a strong impact on the reliability of the neuron (r = 0. [sent-122, score-0.444]

54 [Figure 2 voltage traces: membrane voltage (mV) versus time (msec), panels A-C] Figure 2: Performances of the SRM constructed by the method presented in this paper. [sent-129, score-0.147]

55 The prediction of the model (black line) is compared to the spike train of the corresponding neuron (thick grey line). [sent-131, score-0.735]

56 This panel corresponds to the first dotted zone in A (horizontal bar is 5 ms; vertical bar is 5 mV) C. [sent-134, score-0.15]

57 This panel corresponds to the second dotted zone in A (horizontal bar is 1 ms; vertical bar is 20 mV). [sent-136, score-0.15]

58 The model slightly undershoots during about 4 ms after the spike (see text for further details). [sent-137, score-0.474]

59 These findings are stable across the different neurons that we recorded and reproduce the findings of Mainen and Sejnowski [6]. [sent-143, score-0.221]

60 In order to connect model predictions to these findings, we evaluate the Γ coincidence factor between the predicted spike train and the recorded spike trains (this Γ is labelled m → n for model to neuron). [sent-144, score-1.369]

61 We find that the predictions of our minimal model are close to the natural upper bound set by the intrinsic reliability of the pyramidal neuron. [sent-146, score-0.694]

62 On average, the minimal model achieves a quality Γm→n which is 65% (±3% s. [sent-147, score-0.166]

63 Furthermore, let us recall that due to the definition of the coincidence factor Γ, the threshold for statistical significance here is Γm→n = 0. [sent-154, score-0.245]

64 Finally, we compare the predictions of our minimal model in terms of two other indicators, the mean rate and the coefficient of variation of the interspike interval distribution (Cv ). [sent-156, score-0.293]
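The coefficient of variation of the interspike-interval distribution is a one-liner; a sketch (function name is ours):

```python
import numpy as np

def cv_isi(spike_times_ms):
    """Coefficient of variation of the interspike-interval (ISI)
    distribution: Cv = std(ISI) / mean(ISI). Cv = 0 for perfectly
    regular firing, Cv ~ 1 for Poisson-like firing."""
    isi = np.diff(np.sort(np.asarray(spike_times_ms, float)))
    return isi.std() / isi.mean()
```

Note the caveat in the text: with only a few seconds of recording, the number of ISIs can be too small for this estimate to be reliable.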

65 The mean rate is usually correctly predicted by our minimal model (see Figure 3C) in agreement with the findings of Rauch and colleagues [4]. [sent-157, score-0.325]

66 The Cv is predicted in the correct range as well, but may vary due to missed or extra spikes in the prediction (data not shown). [sent-158, score-0.253]

67 It is also noteworthy that available spike trains are not very long (a few seconds) and the number of spikes is sometimes too low to yield a reliable estimate of the Cv . [sent-159, score-0.619]

68 [Figure 3 residue: predicted rate (Hz) axis, panels C and D, Γn→n, truncated R value] [sent-164, score-0.102]

69 Intrinsic reliability Γn→n of a typical pyramidal neuron as a function of the mean drive µI and its standard deviation σI. [sent-172, score-0.722]

70 Performances of the SRM in correct spike timing prediction Γm→n are plotted versus the cells' intrinsic reliability Γn→n (symbols) for the very same stimulation parameters. [sent-174, score-0.959]

71 The diagonal line (solid) denotes the “natural” upper bound limit imposed by the neurons intrinsic reliability. [sent-175, score-0.255]

72 Same as in A but in a model without adaptation where the threshold has been optimized separately for each set of stimulation parameters (see text for further details). [sent-179, score-0.453]

73 Previous model studies had shown that a model with a threshold simpler than the one used here is able to reliably predict the spike train of more detailed neuron models [7,12]. [sent-180, score-0.937]

74 Here, we used a threshold including an adaptation mechanism. [sent-181, score-0.306]

75 In contrast, our I&F model with adaptation achieves the same level of predictive quality (Figure 3B) with a single set of threshold parameters. [sent-185, score-0.399]

76 This illustrates the importance of adaptation for I&F models and the SRM. [sent-186, score-0.151]

77 4 Discussion Mapping real neurons to simplified neuronal models has benefited from many developments in recent years [4-5,7-8,11-13,19-22] and was applied to both in vitro [4,9,13,22] and in vivo recordings [5]. [sent-187, score-0.338]

78 The model neuron is built sequentially from intracellular recordings. [sent-189, score-0.37]

79 The resulting model is very efficient in the sense that it allows a quantitative and accurate prediction of the spike train of the real neuron. [sent-190, score-0.556]

80 Most of the time, the predicted subthreshold membrane voltage differs from the recorded one by a few millivolts only. [sent-191, score-0.529]

81 The mean firing rate of the minimal model corresponds to that of the real neuron. [sent-192, score-0.157]

82 The statistical structure of the spike train is approximately conserved since we observe that the coefficient of variation (Cv ) of the interspike interval distribution is predicted in the correct range by our minimal model. [sent-193, score-0.673]

83 Most importantly, our minimal model has the ability to predict spikes with the correct timing (±2 ms), and the level of prediction reached is close to the intrinsic reliability of the real neuron in terms of spike timing [6]. [sent-194, score-1.402]

84 The adapting threshold has been found to play an important role. [sent-195, score-0.155]

85 It allows the model to tune to variable input characteristics and to extend its predictions beyond the input regimes used for model evaluation. [sent-196, score-0.257]

86 This work suggests that L5 neocortical pyramidal neurons under random current injection behave very much like I&F neurons including a spike-frequency adaptation process. [sent-197, score-0.848]

87 The results also indicate that the picture of a neuron combining linear summation in the subthreshold regime with a threshold criterion for spike initiation is good enough to account for much of the behavior in an in vivo-like lab setting. [sent-201, score-0.983]

88 First, we used random current injection rather than a more realistic random conductance protocol [23]. [sent-203, score-0.308]

89 In a previous report [12], we had checked the consequences of random conductance injection with simulated data. [sent-204, score-0.308]

90 We found that random conductance injection mainly changes the effective membrane time constant of the neuron and can be accounted for by making the time course of the optimal linear filter (κ here) depend on the mean input to the neuron. [sent-205, score-0.684]

91 The minimal model reaches the same quality of prediction when driven by random conductance injection [12] as when driven by random current injection [7]. [sent-206, score-0.838]

92 Second, a largely fluctuating current generated by a random process can only be seen as a poor approximation to the input a neuron would receive in vivo. [sent-207, score-0.312]

93 Thus, all dendritic non-linearities, including backpropagating action potentials and dendritic spikes are excluded. [sent-212, score-0.323]

94 In summary, simple threshold models will never be able to account for all the variety of neuronal responses that can be probed in an artificial laboratory setting. [sent-213, score-0.196]

95 For example, effects of delayed spike initiation cannot be reproduced by simple threshold models that combine linear subthreshold behavior with a strict threshold criterion (but could be reproduced by quadratic or exponential I&F models). [sent-214, score-0.932]

96 For this reason, we are currently studying exponential I&F models with adaptation that allow us to relate our approach to other known models [21,28]. [sent-215, score-0.151]

97 However, for random current injection that mimics synaptic bombardment, the picture of a neuron that combines linear summation with a threshold criterion is not too wrong. [sent-216, score-0.671]

98 Moreover, in contrast to more complicated neuron models, the simple threshold model allows rapid parameter extraction from experimental traces; efficient numerical simulation; and rigorous mathematical analysis. [sent-217, score-0.439]

99 Our results also suggest that, if any elaborated computation is taking place in single neurons, it is likely to happen at dendritic level rather than at somatic level. [sent-218, score-0.151]

100 In the absence of a clear understanding of dendritic computation, the I&F neuron with adaptation thus appears as a model that we consider “good enough”. [sent-219, score-0.521]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('spike', 0.361), ('neuron', 0.237), ('injection', 0.236), ('cst', 0.217), ('reliability', 0.207), ('pyramidal', 0.169), ('rauch', 0.163), ('threshold', 0.155), ('spikes', 0.151), ('adaptation', 0.151), ('subthreshold', 0.151), ('neurons', 0.146), ('jolivet', 0.118), ('srm', 0.118), ('intrinsic', 0.109), ('ncoinc', 0.109), ('gerstner', 0.108), ('trains', 0.107), ('predicted', 0.102), ('performances', 0.101), ('stimulation', 0.1), ('mv', 0.1), ('train', 0.09), ('coincidence', 0.09), ('timing', 0.089), ('predictions', 0.089), ('intracellular', 0.086), ('dendritic', 0.086), ('voltage', 0.082), ('ndata', 0.081), ('fluctuating', 0.081), ('recorded', 0.075), ('minimal', 0.073), ('drive', 0.072), ('conductance', 0.072), ('ou', 0.071), ('lüscher', 0.071), ('colleagues', 0.066), ('ms', 0.066), ('membrane', 0.065), ('epfl', 0.065), ('somatic', 0.065), ('brillinger', 0.065), ('cv', 0.06), ('msec', 0.06), ('vitro', 0.06), ('symbols', 0.06), ('quantitative', 0.058), ('paninski', 0.057), ('bar', 0.055), ('recordings', 0.055), ('arcas', 0.054), ('bombardment', 0.054), ('coincidences', 0.054), ('fairhall', 0.054), ('keat', 0.054), ('millivolts', 0.054), ('neocortex', 0.054), ('nsrm', 0.054), ('pillow', 0.054), ('segundo', 0.054), ('ssrm', 0.054), ('zoom', 0.054), ('firing', 0.052), ('la', 0.05), ('versus', 0.049), ('findings', 0.049), ('kistler', 0.047), ('senn', 0.047), ('criticized', 0.047), ('exposed', 0.047), ('interspike', 0.047), ('mainen', 0.047), ('model', 0.047), ('quality', 0.046), ('plotted', 0.044), ('switzerland', 0.043), ('zador', 0.043), ('fusi', 0.043), ('lausanne', 0.043), ('mimics', 0.043), ('response', 0.041), ('regime', 0.041), ('neuronal', 0.041), ('feng', 0.04), ('zone', 0.04), ('emitted', 0.04), ('cortical', 0.04), ('driving', 0.039), ('reached', 0.039), ('hz', 0.039), ('receive', 0.038), ('initiation', 0.038), ('spiking', 0.037), ('input', 0.037), ('mean', 0.037), ('simoncelli', 0.036), ('vivo', 0.036), ('indistinguishable', 0.036), ('reproduced', 0.036)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000006 99 nips-2005-Integrate-and-Fire models with adaptation are good enough

Author: Renaud Jolivet, Alexander Rauch, Hans-rudolf Lüscher, Wulfram Gerstner

Abstract: Integrate-and-Fire-type models are usually criticized because of their simplicity. On the other hand, the Integrate-and-Fire model is the basis of most of the theoretical studies on spiking neuron models. Here, we develop a sequential procedure to quantitatively evaluate an equivalent Integrate-and-Fire-type model based on intracellular recordings of cortical pyramidal neurons. We find that the resulting effective model is sufficient to predict the spike train of the real pyramidal neuron with high accuracy. In in vivo-like regimes, predicted and recorded traces are almost indistinguishable and a significant part of the spikes can be predicted at the correct timing. Slow processes like spike-frequency adaptation are shown to be a key feature in this context since they are necessary for the model to connect between different driving regimes. 1

2 0.39749265 181 nips-2005-Spiking Inputs to a Winner-take-all Network

Author: Matthias Oster, Shih-Chii Liu

Abstract: Recurrent networks that perform a winner-take-all computation have been studied extensively. Although some of these studies include spiking networks, they consider only analog input rates. We present results of this winner-take-all computation on a network of integrate-and-fire neurons which receives spike trains as inputs. We show how we can configure the connectivity in the network so that the winner is selected after a pre-determined number of input spikes. We discuss spiking inputs with both regular frequencies and Poisson-distributed rates. The robustness of the computation was tested by implementing the winner-take-all network on an analog VLSI array of 64 integrate-and-fire neurons which have an innate variance in their operating parameters. 1

3 0.33763179 8 nips-2005-A Criterion for the Convergence of Learning with Spike Timing Dependent Plasticity

Author: Robert A. Legenstein, Wolfgang Maass

Abstract: We investigate under what conditions a neuron can learn by experimentally supported rules for spike timing dependent plasticity (STDP) to predict the arrival times of strong “teacher inputs” to the same neuron. It turns out that in contrast to the famous Perceptron Convergence Theorem, which predicts convergence of the perceptron learning rule for a simplified neuron model whenever a stable solution exists, no equally strong convergence guarantee can be given for spiking neurons with STDP. But we derive a criterion on the statistical dependency structure of input spike trains which characterizes exactly when learning with STDP will converge on average for a simple model of a spiking neuron. This criterion is reminiscent of the linear separability criterion of the Perceptron Convergence Theorem, but it applies here to the rows of a correlation matrix related to the spike inputs. In addition we show through computer simulations for more realistic neuron models that the resulting analytically predicted positive learning results not only hold for the common interpretation of STDP where STDP changes the weights of synapses, but also for a more realistic interpretation suggested by experimental data where STDP modulates the initial release probability of dynamic synapses. 1

4 0.25402957 39 nips-2005-Beyond Pair-Based STDP: a Phenomenological Rule for Spike Triplet and Frequency Effects

Author: Jean-pascal Pfister, Wulfram Gerstner

Abstract: While classical experiments on spike-timing dependent plasticity analyzed synaptic changes as a function of the timing of pairs of pre- and postsynaptic spikes, more recent experiments also point to the effect of spike triplets. Here we develop a mathematical framework that allows us to characterize timing based learning rules. Moreover, we identify a candidate learning rule with five variables (and 5 free parameters) that captures a variety of experimental data, including the dependence of potentiation and depression upon pre- and postsynaptic firing frequencies. The relation to the Bienenstock-Cooper-Munro rule as well as to some timing-based rules is discussed. 1

5 0.22244443 118 nips-2005-Learning in Silicon: Timing is Everything

Author: John V. Arthur, Kwabena Boahen

Abstract: We describe a neuromorphic chip that uses binary synapses with spike timing-dependent plasticity (STDP) to learn stimulated patterns of activity and to compensate for variability in excitability. Specifically, STDP preferentially potentiates (turns on) synapses that project from excitable neurons, which spike early, to lethargic neurons, which spike late. The additional excitatory synaptic current makes lethargic neurons spike earlier, thereby causing neurons that belong to the same pattern to spike in synchrony. Once learned, an entire pattern can be recalled by stimulating a subset. 1 Variability in Neural Systems Evidence suggests precise spike timing is important in neural coding, specifically, in the hippocampus. The hippocampus uses timing in the spike activity of place cells (in addition to rate) to encode location in space [1]. Place cells employ a phase code: the timing at which a neuron spikes relative to the phase of the inhibitory theta rhythm (5-12Hz) conveys information. As an animal approaches a place cell’s preferred location, the place cell not only increases its spike rate, but also spikes at earlier phases in the theta cycle. To implement a phase code, the theta rhythm is thought to prevent spiking until the input synaptic current exceeds the sum of the neuron threshold and the decreasing inhibition on the downward phase of the cycle [2]. However, even with identical inputs and common theta inhibition, neurons do not spike in synchrony. Variability in excitability spreads the activity in phase. Lethargic neurons (such as those with high thresholds) spike late in the theta cycle, since their input exceeds the sum of the neuron threshold and theta inhibition only after the theta inhibition has had time to decrease. Conversely, excitable neurons (such as those with low thresholds) spike early in the theta cycle. Consequently, variability in excitability translates into variability in timing. 
We hypothesize that the hippocampus achieves its precise spike timing (about 10ms) through plasticity enhanced phase-coding (PEP). The source of hippocampal timing precision in the presence of variability (and noise) remains unexplained. Synaptic plasticity can compensate for variability in excitability if it increases excitatory synaptic input to neurons in inverse proportion to their excitabilities. Recasting this in a phase-coding framework, we desire a learning rule that increases excitatory synaptic input to neurons directly related to their phases. Neurons that lag require additional synaptic input, whereas neurons that lead 120µm 190µm A B Figure 1: STDP Chip. A The chip has a 16-by-16 array of microcircuits; one microcircuit includes four principal neurons, each with 21 STDP circuits. B The STDP Chip is embedded in a circuit board including DACs, a CPLD, a RAM chip, and a USB chip, which communicates with a PC. require none. The spike timing-dependent plasticity (STDP) observed in the hippocampus satisfies this requirement [3]. It requires repeated pre-before-post spike pairings (within a time window) to potentiate and repeated post-before-pre pairings to depress a synapse. Here we validate our hypothesis with a model implemented in silicon, where variability is as ubiquitous as it is in biology [4]. Section 2 presents our silicon system, including the STDP Chip. Section 3 describes and characterizes the STDP circuit. Section 4 demonstrates that PEP compensates for variability and provides evidence that STDP is the compensation mechanism. Section 5 explores a desirable consequence of PEP: unconventional associative pattern recall. Section 6 discusses the implications of the PEP model, including its benefits and applications in the engineering of neuromorphic systems and in the study of neurobiology. 2 Silicon System We have designed, submitted, and tested a silicon implementation of PEP. 
The STDP Chip was fabricated through MOSIS in a 1P5M 0.25µm CMOS process, with just under 750,000 transistors in just over 10mm2 of area. It has a 32 by 32 array of excitatory principal neurons commingled with a 16 by 16 array of inhibitory interneurons that are not used here (Figure 1A). Each principal neuron has 21 STDP synapses. The address-event representation (AER) [5] is used to transmit spikes off chip and to receive afferent and recurrent spike input. To configure the STDP Chip as a recurrent network, we embedded it in a circuit board (Figure 1B). The board has five primary components: a CPLD (complex programmable logic device), the STDP Chip, a RAM chip, a USB interface chip, and DACs (digital-to-analog converters). The central component in the system is the CPLD. The CPLD handles AER traffic, mediates communication between devices, and implements recurrent connections by accessing a lookup table, stored in the RAM chip. The USB interface chip provides a bidirectional link with a PC. The DACs control the analog biases in the system, including the leak current, which the PC varies in real-time to create the global inhibitory theta rhythm. The principal neuron consists of a refractory period and calcium-dependent potassium circuit (RCK), a synapse circuit, and a soma circuit (Figure 2A). RCK and the synapse are ISOMA Soma Synapse STDP Presyn. Spike PE LPF A Presyn. Spike Raster AH 0 0.1 Spike probability RCK Postsyn. Spike B 0.05 0.1 0.05 0.1 0.08 0.06 0.04 0.02 0 0 Time(s) Figure 2: Principal neuron. A A simplified schematic is shown, including: the synapse, refractory and calcium-dependent potassium channel (RCK), soma, and axon-hillock (AH) circuits, plus their constituent elements, the pulse extender (PE) and the low-pass filter (LPF). B Spikes (dots) from 81 principal neurons are temporally dispersed, when excited by poisson-like inputs (58Hz) and inhibited by the common 8.3Hz theta rhythm (solid line). 
The histogram includes spikes from five theta cycles. composed of two reusable blocks: the low-pass filter (LPF) and the pulse extender (PE). The soma is a modified version of the LPF, which receives additional input from an axonhillock circuit (AH). RCK is inhibitory to the neuron. It consists of a PE, which models calcium influx during a spike, and a LPF, which models calcium buffering. When AH fires a spike, a packet of charge is dumped onto a capacitor in the PE. The PE’s output activates until the charge decays away, which takes a few milliseconds. Also, while the PE is active, charge accumulates on the LPF’s capacitor, lowering the LPF’s output voltage. Once the PE deactivates, this charge leaks away as well, but this takes tens of milliseconds because the leak is smaller. The PE’s and the LPF’s inhibitory effects on the soma are both described below in terms of the sum (ISHUNT ) of the currents their output voltages produce in pMOS transistors whose sources are at Vdd (see Figure 2A). Note that, in the absence of spikes, these currents decay exponentially, with a time-constant determined by their respective leaks. The synapse circuit is excitatory to the neuron. It is composed of a PE, which represents the neurotransmitter released into the synaptic cleft, and a LPF, which represents the bound neurotransmitter. The synapse circuit is similar to RCK in structure but differs in function: It is activated not by the principal neuron itself but by the STDP circuits (or directly by afferent spikes that bypass these circuits, i.e., fixed synapses). The synapse’s effect on the soma is also described below in terms of the current (ISYN ) its output voltage produces in a pMOS transistor whose source is at Vdd. The soma circuit is a leaky integrator. It receives excitation from the synapse circuit and shunting inhibition from RCK and has a leak current as well. 
Its temporal behavior is described by

τ dISOMA/dt + ISOMA = I0 ISYN / ISHUNT

where ISOMA is the current the capacitor's voltage produces in a pMOS transistor whose source is at Vdd (see Figure 2A). ISHUNT is the sum of the leak, refractory, and calcium-dependent potassium currents. These currents also determine the time constant, τ = C Ut / (κ ISHUNT), where I0 and κ are transistor parameters and Ut is the thermal voltage.

Figure 3: STDP circuit design and characterization. A The circuit is composed of three subcircuits: decay, integrator, and SRAM. B The circuit potentiates when the presynaptic spike precedes the postsynaptic spike and depresses when the postsynaptic spike precedes the presynaptic spike (efficacy plotted as inverse number of pairings versus spike timing t_pre - t_post).

The soma circuit is connected to an AH, the locus of spike generation. The AH consists of model voltage-dependent sodium and potassium channel populations (modified from [6] by Kai Hynna). It initiates the AER signaling process required to send a spike off chip. To characterize principal neuron variability, we excited 81 neurons with poisson-like 58Hz spike trains (Figure 2B). We made these spike trains poisson-like by starting with a regular 200Hz spike train and dropping spikes randomly, with probability 0.71. Thus spikes were delivered in synchrony every 5ms to the neurons that won the coin toss. However, neurons did not lock onto the input synchrony, due to filtering by the synaptic time constant (see Figure 2B). They also received a common inhibitory input at the theta frequency (8.3Hz), via their leak current. Each neuron was prevented from firing more than one spike in a theta cycle by its model calcium-dependent potassium channel population. The principal neurons' spike times were variable.
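As a sanity check on the soma equation above, a forward-Euler integration shows ISOMA relaxing to its fixed point I0·ISYN/ISHUNT. The parameter values below are hypothetical placeholders (the paper gives no numbers for C, Ut, κ, or I0); the sketch only illustrates the steady-state behavior implied by the equation.

```python
# Forward-Euler sketch of the soma dynamics
#   tau * dI_soma/dt + I_soma = I0 * I_syn / I_shunt,
#   tau = C * Ut / (kappa * I_shunt).
# All parameter values are hypothetical, chosen only to make the
# fixed point visible; they are not taken from the chip.
C = 1e-12      # soma capacitance (F), hypothetical
UT = 0.025     # thermal voltage (V)
KAPPA = 0.7    # subthreshold slope factor, hypothetical
I0 = 1e-9      # transistor pre-factor (A), hypothetical


def step_soma(i_soma, i_syn, i_shunt, dt=1e-5):
    """Advance I_SOMA by one Euler step of the soma equation."""
    tau = C * UT / (KAPPA * i_shunt)
    di = (I0 * i_syn / i_shunt - i_soma) / tau
    return i_soma + dt * di


i_soma = 0.0
for _ in range(20000):  # 0.2 s, many time constants
    i_soma = step_soma(i_soma, i_syn=2e-9, i_shunt=1e-9)
# i_soma settles near the fixed point I0 * I_syn / I_shunt = 2e-9 A
```

Note how a larger ISHUNT both lowers the fixed point and shortens τ, which is the shunting character of the inhibition described in the text.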
To quantify the spike variability, we used timing precision, which we define as twice the standard deviation of spike times accumulated from five theta cycles. With an input rate of 58Hz, the timing precision was 34ms.

3 STDP Circuit

The STDP circuit (related to [7]-[8]), for which the STDP Chip is named, is the most abundant, with 21,504 copies on the chip. This circuit is built from three subcircuits: decay, integrator, and SRAM (Figure 3A). The decay and integrator are used to implement potentiation and depression in a symmetric fashion. The SRAM holds the current binary state of the synapse, either potentiated or depressed. For potentiation, the decay remembers the last presynaptic spike. Its capacitor is charged when that spike occurs and discharges linearly thereafter. A postsynaptic spike samples the charge remaining on the capacitor, passes it through an exponential function, and dumps the resultant charge into the integrator. This charge decays linearly thereafter. At the time of the postsynaptic spike, the SRAM, a cross-coupled inverter pair, reads the voltage on the integrator's capacitor. If it exceeds a threshold, the SRAM switches state from depressed to potentiated (∼LTD goes high and ∼LTP goes low). The depression side of the STDP circuit is exactly symmetric, except that it responds to postsynaptic activation followed by presynaptic activation and switches the SRAM's state from potentiated to depressed (∼LTP goes high and ∼LTD goes low). When the SRAM is in the potentiated state, the presynaptic spike activates the principal neuron's synapse; otherwise the spike has no effect.

Figure 4: Plasticity enhanced phase-coding. A Spike rasters of 81 neurons (9 by 9 cluster) display synchrony over a two-fold range of input rates after STDP. B The degree of enhancement is quantified by timing precision. C Each neuron (center box) sends synapses to (dark gray) and receives synapses from (light gray) twenty-one randomly chosen neighbors up to five nodes away (black indicates both connections).

We characterized the STDP circuit by activating a plastic synapse and a fixed synapse (which elicits a spike) at different relative times. We repeated this pairing at 16Hz. We counted the number of pairings required to potentiate (or depress) the synapse. Based on this count, we calculated the efficacy of each pairing as the inverse number of pairings required (Figure 3B). For example, if twenty pairings were required to potentiate the synapse, the efficacy of that pre-before-post time-interval was one twentieth. The efficacies of both potentiation and depression are fit by exponentials with time constants of 11.4ms and 94.9ms, respectively. This behavior is similar to that observed in the hippocampus: potentiation has a shorter time constant and higher maximum efficacy than depression [3].

4 Recurrent Network

We carried out an experiment designed to test the STDP circuit's ability to compensate for variability in spike timing through PEP. Each neuron received recurrent connections from 21 randomly selected neurons within an 11 by 11 neighborhood centered on itself (see Figure 4C). Conversely, it made recurrent connections to randomly chosen neurons within the same neighborhood. These connections were mediated by STDP circuits, initialized to the depressed state. We chose a 9 by 9 cluster of neurons and delivered spikes at a mean rate of 50 to 100Hz to each one (dropping spikes with a probability of 0.75 to 0.5 from a regular 200Hz train) and provided common theta inhibition as before. We compared the variability in spike timing after five seconds of learning with the initial distribution. Phase coding was enhanced after STDP (Figure 4A).
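The pairing-efficacy measurement can be mimicked in software. The sketch below implements only the potentiation half of the binary rule: each pre-before-post pairing adds exp(-Δt/τ+) to an integrator, and the synapse flips to the potentiated state once a threshold is crossed. τ+ = 11.4ms follows the fitted value above, but the switching threshold is a hypothetical placeholder, and the integrator's linear decay between 16Hz pairings is ignored for simplicity.

```python
# Sketch of the potentiation half of the binary STDP circuit.
# TAU_PLUS follows the fitted 11.4 ms; THRESHOLD is hypothetical, and the
# integrator's decay between pairings is ignored for simplicity.
import math

TAU_PLUS = 11.4e-3   # fitted potentiation time constant (s)
THRESHOLD = 5.0      # hypothetical SRAM switching threshold


def pairings_to_potentiate(dt):
    """Pre-before-post pairings (separation dt) needed to flip the SRAM."""
    integrator, count = 0.0, 0
    while integrator < THRESHOLD:
        integrator += math.exp(-dt / TAU_PLUS)  # sampled, exponentiated charge
        count += 1
    return count


# Efficacy of a pairing is the inverse of the number of pairings required.
efficacy_5ms = 1.0 / pairings_to_potentiate(5e-3)
efficacy_40ms = 1.0 / pairings_to_potentiate(40e-3)
```

With these placeholder numbers, a 5ms pairing flips the synapse in a handful of repetitions while a 40ms pairing takes over a hundred, reproducing the qualitative exponential fall-off of Figure 3B.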
Before STDP, spike timing among neurons was highly variable (except for the very highest input rate). After STDP, variability was virtually eliminated (except for the very lowest input rate). Initially, the variability, characterized by timing precision, was inversely related to the input rate, decreasing from 34 to 13ms. After five seconds of STDP, variability decreased and was largely independent of input rate, remaining below 11ms.

Figure 5: Compensating for variability. A Some synapses (dots) become potentiated (light) while others remain depressed (dark) after STDP. B The number of potentiated synapses neurons make (pluses) and receive (circles) is negatively (r = -0.71) and positively (r = 0.76) correlated to their rank in the spiking order, respectively.

Comparing the number of potentiated synapses each neuron made or received with its excitability confirmed the PEP hypothesis (i.e., leading neurons provide additional synaptic current to lagging neurons via potentiated recurrent synapses). In this experiment, to eliminate variability due to noise (as opposed to excitability), we provided a 17 by 17 cluster of neurons with a regular 200Hz excitatory input. Theta inhibition was present as before and all synapses were initialized to the depressed state. After 10 seconds of STDP, a large fraction of the synapses were potentiated (Figure 5A). When the number of potentiated synapses each neuron made or received was plotted versus its rank in spiking order (Figure 5B), a clear correlation emerged (r = -0.71 or 0.76, respectively). As expected, neurons that spiked early made more and received fewer potentiated synapses. In contrast, neurons that spiked late made fewer and received more potentiated synapses.
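The timing-precision metric used throughout these comparisons (twice the standard deviation of spike times accumulated over theta cycles) is straightforward to reproduce. The sketch below applies it to a synthetic poisson-like train obtained by thinning a regular 200Hz train, as the experiments describe. The spike times are synthetic and the per-cycle phase referencing is our reading of "accumulated from five theta cycles", so the resulting number only illustrates the computation, not the chip's measured 34ms or 11ms values.

```python
# Sketch of the timing-precision metric on a synthetic poisson-like train.
# The data are synthetic and the phase referencing is an assumption;
# this illustrates the computation, not the chip's measurements.
import random
import statistics

THETA_PERIOD = 1 / 8.3   # ~120 ms theta cycle
random.seed(0)           # reproducible synthetic data


def poisson_like_train(rate_hz, duration, base_hz=200.0):
    """Thin a regular base_hz train, keeping each spike with prob rate/base."""
    keep_p = rate_hz / base_hz
    ticks = [i / base_hz for i in range(int(duration * base_hz))]
    return [t for t in ticks if random.random() < keep_p]


def timing_precision(spike_times):
    """Twice the std of spike times, each referred to its own theta cycle."""
    phases = [t % THETA_PERIOD for t in spike_times]
    return 2 * statistics.stdev(phases)


train = poisson_like_train(58.0, duration=5 * THETA_PERIOD)
precision = timing_precision(train)
```

A tight raster (all spikes at nearly the same theta phase) drives this number toward zero, which is exactly the before/after comparison reported above.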
5 Pattern Completion

After STDP, we found that the network could recall an entire pattern given a subset; thus, the same mechanisms that compensated for variability and noise could also compensate for lack of information. We chose a 9 by 9 cluster of neurons as our pattern and delivered a poisson-like spike train with a mean rate of 67Hz to each one, as in the first experiment. Theta inhibition was present as before and all synapses were initialized to the depressed state. Before STDP, we stimulated a subset of the pattern and only neurons in that subset spiked (Figure 6A). After five seconds of STDP, we stimulated the same subset again. This time they recruited spikes from other neurons in the pattern, completing it (Figure 6B). Upon varying the fraction of the pattern presented, we found that the fraction recalled increased faster than the fraction presented. We selected subsets of the original pattern randomly, varying the fraction of neurons chosen from 0.1 to 1.0 (ten trials for each). We classified neurons as active if they spiked in the two-second period over which we recorded. Thus, we characterized PEP's pattern-recall performance as a function of the probability that the pattern's neurons are activated (Figure 6C). At a presented fraction of 0.50, nearly all of the neurons in the pattern are consistently activated (0.91±0.06), showing robust pattern completion. We fitted the recall performance with a sigmoid that reached a 0.50 recall fraction at an input fraction of 0.30. No spurious neurons were activated during any trials.

Figure 6: Associative recall. A Before STDP, half of the neurons in a pattern are stimulated; only they are activated. B After STDP, half of the neurons in a pattern are stimulated, and all are activated. C The fraction of the pattern activated grows faster than the fraction stimulated.

6 Discussion

Our results demonstrate that PEP successfully compensates for graded variations in our silicon recurrent network using binary (on–off) synapses (in contrast with [8], where weights are graded). While our chip results are encouraging, variability was not eliminated in every case. In the case of the lowest input (50Hz), we see virtually no change (Figure 4A). We suspect the timing remains imprecise because, with such low input, neurons do not spike every theta cycle and, consequently, provide fewer opportunities for the STDP synapses to potentiate. This shortfall illustrates the system's limits; it can only compensate for variability within certain bounds, and only for activity appropriate to the PEP model. As expected, STDP is the mechanism responsible for PEP. STDP potentiated recurrent synapses from leading neurons to lagging neurons, reducing the disparity among the diverse population of neurons. Even though the STDP circuits are themselves variable, with different efficacies and time constants, the sign of the weight change is always correct when timing is used (data not shown). For this reason, we chose STDP over other, more physiological implementations of plasticity, such as membrane-voltage-dependent plasticity (MVDP), which has the capability to learn with graded voltage signals [9], such as those found in active dendrites, providing more computational power [10]. Previously, we investigated a MVDP circuit, which modeled a voltage-dependent NMDA-receptor-gated synapse [11]. It potentiated when the calcium current analog exceeded a threshold, which was designed to occur only during a dendritic action potential. This circuit produced behavior similar to STDP, implying it could be used in PEP.
However, it was sensitive to variability in the NMDA and potentiation thresholds, causing a fraction of the population to potentiate anytime the synapse received an input and another fraction to never potentiate, rendering both subpopulations useless. Therefore, the simpler, less biophysical STDP circuit won out over the MVDP circuit: in our system, timing is everything. Associative storage and recall naturally emerge in the PEP network when synapses between neurons coactivated by a pattern are potentiated. These synapses allow neurons to recruit their peers when a subset of the pattern is presented, thereby completing the pattern. However, this form of pattern storage and completion differs from Hopfield's attractor model [12]. Rather than forming symmetric, recurrent neuronal circuits, our recurrent network forms asymmetric circuits in which neurons make connections exclusively to less excitable neurons in the pattern. In both the poisson-like and regular cases (Figures 4 & 5), only about six percent of potentiated connections were reciprocated, as expected by chance. We plan to investigate the storage capacity of this asymmetric form of associative memory. Our system lends itself to modeling brain regions that use precise spike timing, such as the hippocampus. We plan to extend the work presented here to store and recall sequences of patterns, as the hippocampus is hypothesized to do. Place cells that represent different locations spike at different phases of the theta cycle, in relation to the distance to their preferred locations. This sequential spiking will allow us to link patterns representing different locations in the order those locations are visited, thereby realizing episodic memory. We propose PEP as a candidate neural mechanism for information coding and storage in the hippocampal system.
Observations from the CA1 region of the hippocampus suggest that basal dendrites (which primarily receive excitation from recurrent connections) support submillisecond timing precision, consistent with PEP [13]. We have shown, in a silicon model, PEP's ability to exploit such fast recurrent connections to sharpen timing precision as well as to associatively store and recall patterns.

Acknowledgments

We thank Joe Lin for assistance with chip generation. The Office of Naval Research funded this work (Award No. N000140210468).

References

[1] O'Keefe J. & Recce M.L. (1993) Phase relationship between hippocampal place units and the EEG theta rhythm. Hippocampus 3(3):317-330.
[2] Mehta M.R., Lee A.K. & Wilson M.A. (2002) Role of experience and oscillations in transforming a rate code into a temporal code. Nature 417(6890):741-746.
[3] Bi G.Q. & Wang H.X. (2002) Temporal asymmetry in spike timing-dependent synaptic plasticity. Physiology & Behavior 77:551-555.
[4] Rodriguez-Vazquez A., Linan G., Espejo S. & Dominguez-Castro R. (2003) Mismatch-induced trade-offs and scalability of analog preprocessing visual microprocessor chips. Analog Integrated Circuits and Signal Processing 37:73-83.
[5] Boahen K.A. (2000) Point-to-point connectivity between neuromorphic chips using address events. IEEE Transactions on Circuits and Systems II 47:416-434.
[6] Culurciello E.R., Etienne-Cummings R. & Boahen K.A. (2003) A biomorphic digital image sensor. IEEE Journal of Solid State Circuits 38:281-294.
[7] Bofill A., Murray A.F. & Thompson D.P. (2002) Circuits for VLSI Implementation of Temporally Asymmetric Hebbian Learning. In: Advances in Neural Information Processing Systems 14, MIT Press.
[8] Cameron K., Boonsobhak V., Murray A. & Renshaw D. (2005) Spike timing dependent plasticity (STDP) can ameliorate process variations in neuromorphic VLSI. IEEE Transactions on Neural Networks 16(6):1626-1627.
[9] Chicca E., Badoni D., Dante V., D'Andreagiovanni M., Salina G., Carota L., Fusi S. & Del Giudice P. (2003) A VLSI recurrent network of integrate-and-fire neurons connected by plastic synapses with long-term memory. IEEE Transactions on Neural Networks 14(5):1297-1307.
[10] Poirazi P. & Mel B.W. (2001) Impact of active dendrites and structural plasticity on the memory capacity of neural tissue. Neuron 29(3):779-796.
[11] Arthur J.V. & Boahen K. (2004) Recurrently connected silicon neurons with active dendrites for one-shot learning. In: IEEE International Joint Conference on Neural Networks 3, pp. 1699-1704.
[12] Hopfield J.J. (1984) Neurons with graded response have collective computational properties like those of two-state neurons. Proceedings of the National Academy of Sciences 81(10):3088-3092.
[13] Ariav G., Polsky A. & Schiller J. (2003) Submillisecond precision of the input-output transformation function mediated by fast sodium dendritic spikes in basal dendrites of CA1 pyramidal neurons. Journal of Neuroscience 23(21):7750-7758.

6 0.22096947 64 nips-2005-Efficient estimation of hidden state dynamics from spike trains

7 0.19367242 188 nips-2005-Temporally changing synaptic plasticity

8 0.14525203 67 nips-2005-Extracting Dynamical Structure Embedded in Neural Activity

9 0.14129144 106 nips-2005-Large-scale biophysical parameter estimation in single neurons via constrained linear regression

10 0.12469218 157 nips-2005-Principles of real-time computing with feedback applied to cortical microcircuit models

11 0.12347972 124 nips-2005-Measuring Shared Information and Coordinated Activity in Neuronal Networks

12 0.12337767 173 nips-2005-Sensory Adaptation within a Bayesian Framework for Perception

13 0.10721247 129 nips-2005-Modeling Neural Population Spiking Activity with Gibbs Distributions

14 0.10687026 28 nips-2005-Analyzing Auditory Neurons by Learning Distance Functions

15 0.099314667 61 nips-2005-Dynamical Synapses Give Rise to a Power-Law Distribution of Neuronal Avalanches

16 0.096194141 134 nips-2005-Neural mechanisms of contrast dependent receptive field size in V1

17 0.082883537 164 nips-2005-Representing Part-Whole Relationships in Recurrent Neural Networks

18 0.077400625 197 nips-2005-Unbiased Estimator of Shape Parameter for Spiking Irregularities under Changing Environments

19 0.064168192 141 nips-2005-Norepinephrine and Neural Interrupts

20 0.063419685 6 nips-2005-A Connectionist Model for Constructive Modal Reasoning


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.226), (1, -0.466), (2, -0.131), (3, -0.188), (4, 0.01), (5, -0.056), (6, 0.073), (7, 0.069), (8, -0.013), (9, -0.06), (10, 0.12), (11, -0.009), (12, 0.07), (13, 0.023), (14, 0.024), (15, -0.01), (16, -0.01), (17, 0.036), (18, -0.026), (19, -0.002), (20, -0.066), (21, 0.006), (22, 0.079), (23, -0.07), (24, 0.028), (25, 0.012), (26, -0.026), (27, -0.061), (28, -0.023), (29, 0.015), (30, 0.035), (31, -0.059), (32, 0.008), (33, 0.084), (34, -0.006), (35, 0.006), (36, 0.001), (37, 0.057), (38, -0.06), (39, -0.069), (40, 0.053), (41, 0.024), (42, 0.035), (43, 0.001), (44, 0.009), (45, -0.06), (46, -0.017), (47, -0.02), (48, -0.016), (49, -0.063)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.97231424 99 nips-2005-Integrate-and-Fire models with adaptation are good enough

Author: Renaud Jolivet, Alexander Rauch, Hans-rudolf Lüscher, Wulfram Gerstner

Abstract: Integrate-and-Fire-type models are usually criticized because of their simplicity. On the other hand, the Integrate-and-Fire model is the basis of most of the theoretical studies on spiking neuron models. Here, we develop a sequential procedure to quantitatively evaluate an equivalent Integrate-and-Fire-type model based on intracellular recordings of cortical pyramidal neurons. We find that the resulting effective model is sufficient to predict the spike train of the real pyramidal neuron with high accuracy. In in vivo-like regimes, predicted and recorded traces are almost indistinguishable and a significant part of the spikes can be predicted at the correct timing. Slow processes like spike-frequency adaptation are shown to be a key feature in this context since they are necessary for the model to connect between different driving regimes. 1

2 0.90067106 181 nips-2005-Spiking Inputs to a Winner-take-all Network

Author: Matthias Oster, Shih-Chii Liu

Abstract: Recurrent networks that perform a winner-take-all computation have been studied extensively. Although some of these studies include spiking networks, they consider only analog input rates. We present results of this winner-take-all computation on a network of integrate-and-fire neurons which receives spike trains as inputs. We show how we can configure the connectivity in the network so that the winner is selected after a pre-determined number of input spikes. We discuss spiking inputs with both regular frequencies and Poisson-distributed rates. The robustness of the computation was tested by implementing the winner-take-all network on an analog VLSI array of 64 integrate-and-fire neurons which have an innate variance in their operating parameters. 1

3 0.87445468 8 nips-2005-A Criterion for the Convergence of Learning with Spike Timing Dependent Plasticity

Author: Robert A. Legenstein, Wolfgang Maass

Abstract: We investigate under what conditions a neuron can learn by experimentally supported rules for spike timing dependent plasticity (STDP) to predict the arrival times of strong “teacher inputs” to the same neuron. It turns out that in contrast to the famous Perceptron Convergence Theorem, which predicts convergence of the perceptron learning rule for a simplified neuron model whenever a stable solution exists, no equally strong convergence guarantee can be given for spiking neurons with STDP. But we derive a criterion on the statistical dependency structure of input spike trains which characterizes exactly when learning with STDP will converge on average for a simple model of a spiking neuron. This criterion is reminiscent of the linear separability criterion of the Perceptron Convergence Theorem, but it applies here to the rows of a correlation matrix related to the spike inputs. In addition we show through computer simulations for more realistic neuron models that the resulting analytically predicted positive learning results not only hold for the common interpretation of STDP where STDP changes the weights of synapses, but also for a more realistic interpretation suggested by experimental data where STDP modulates the initial release probability of dynamic synapses. 1

4 0.83831495 118 nips-2005-Learning in Silicon: Timing is Everything

Author: John V. Arthur, Kwabena Boahen

Abstract: We describe a neuromorphic chip that uses binary synapses with spike timing-dependent plasticity (STDP) to learn stimulated patterns of activity and to compensate for variability in excitability. Specifically, STDP preferentially potentiates (turns on) synapses that project from excitable neurons, which spike early, to lethargic neurons, which spike late. The additional excitatory synaptic current makes lethargic neurons spike earlier, thereby causing neurons that belong to the same pattern to spike in synchrony. Once learned, an entire pattern can be recalled by stimulating a subset. 1 Variability in Neural Systems Evidence suggests precise spike timing is important in neural coding, specifically, in the hippocampus. The hippocampus uses timing in the spike activity of place cells (in addition to rate) to encode location in space [1]. Place cells employ a phase code: the timing at which a neuron spikes relative to the phase of the inhibitory theta rhythm (5-12Hz) conveys information. As an animal approaches a place cell’s preferred location, the place cell not only increases its spike rate, but also spikes at earlier phases in the theta cycle. To implement a phase code, the theta rhythm is thought to prevent spiking until the input synaptic current exceeds the sum of the neuron threshold and the decreasing inhibition on the downward phase of the cycle [2]. However, even with identical inputs and common theta inhibition, neurons do not spike in synchrony. Variability in excitability spreads the activity in phase. Lethargic neurons (such as those with high thresholds) spike late in the theta cycle, since their input exceeds the sum of the neuron threshold and theta inhibition only after the theta inhibition has had time to decrease. Conversely, excitable neurons (such as those with low thresholds) spike early in the theta cycle. Consequently, variability in excitability translates into variability in timing. 
We hypothesize that the hippocampus achieves its precise spike timing (about 10ms) through plasticity enhanced phase-coding (PEP). The source of hippocampal timing precision in the presence of variability (and noise) remains unexplained. Synaptic plasticity can compensate for variability in excitability if it increases excitatory synaptic input to neurons in inverse proportion to their excitabilities. Recasting this in a phase-coding framework, we desire a learning rule that increases excitatory synaptic input to neurons directly related to their phases. Neurons that lag require additional synaptic input, whereas neurons that lead 120µm 190µm A B Figure 1: STDP Chip. A The chip has a 16-by-16 array of microcircuits; one microcircuit includes four principal neurons, each with 21 STDP circuits. B The STDP Chip is embedded in a circuit board including DACs, a CPLD, a RAM chip, and a USB chip, which communicates with a PC. require none. The spike timing-dependent plasticity (STDP) observed in the hippocampus satisfies this requirement [3]. It requires repeated pre-before-post spike pairings (within a time window) to potentiate and repeated post-before-pre pairings to depress a synapse. Here we validate our hypothesis with a model implemented in silicon, where variability is as ubiquitous as it is in biology [4]. Section 2 presents our silicon system, including the STDP Chip. Section 3 describes and characterizes the STDP circuit. Section 4 demonstrates that PEP compensates for variability and provides evidence that STDP is the compensation mechanism. Section 5 explores a desirable consequence of PEP: unconventional associative pattern recall. Section 6 discusses the implications of the PEP model, including its benefits and applications in the engineering of neuromorphic systems and in the study of neurobiology. 2 Silicon System We have designed, submitted, and tested a silicon implementation of PEP. 
The STDP Chip was fabricated through MOSIS in a 1P5M 0.25µm CMOS process, with just under 750,000 transistors in just over 10mm2 of area. It has a 32 by 32 array of excitatory principal neurons commingled with a 16 by 16 array of inhibitory interneurons that are not used here (Figure 1A). Each principal neuron has 21 STDP synapses. The address-event representation (AER) [5] is used to transmit spikes off chip and to receive afferent and recurrent spike input. To configure the STDP Chip as a recurrent network, we embedded it in a circuit board (Figure 1B). The board has five primary components: a CPLD (complex programmable logic device), the STDP Chip, a RAM chip, a USB interface chip, and DACs (digital-to-analog converters). The central component in the system is the CPLD. The CPLD handles AER traffic, mediates communication between devices, and implements recurrent connections by accessing a lookup table, stored in the RAM chip. The USB interface chip provides a bidirectional link with a PC. The DACs control the analog biases in the system, including the leak current, which the PC varies in real-time to create the global inhibitory theta rhythm. The principal neuron consists of a refractory period and calcium-dependent potassium circuit (RCK), a synapse circuit, and a soma circuit (Figure 2A). RCK and the synapse are ISOMA Soma Synapse STDP Presyn. Spike PE LPF A Presyn. Spike Raster AH 0 0.1 Spike probability RCK Postsyn. Spike B 0.05 0.1 0.05 0.1 0.08 0.06 0.04 0.02 0 0 Time(s) Figure 2: Principal neuron. A A simplified schematic is shown, including: the synapse, refractory and calcium-dependent potassium channel (RCK), soma, and axon-hillock (AH) circuits, plus their constituent elements, the pulse extender (PE) and the low-pass filter (LPF). B Spikes (dots) from 81 principal neurons are temporally dispersed, when excited by poisson-like inputs (58Hz) and inhibited by the common 8.3Hz theta rhythm (solid line). 
The histogram includes spikes from five theta cycles. composed of two reusable blocks: the low-pass filter (LPF) and the pulse extender (PE). The soma is a modified version of the LPF, which receives additional input from an axonhillock circuit (AH). RCK is inhibitory to the neuron. It consists of a PE, which models calcium influx during a spike, and a LPF, which models calcium buffering. When AH fires a spike, a packet of charge is dumped onto a capacitor in the PE. The PE’s output activates until the charge decays away, which takes a few milliseconds. Also, while the PE is active, charge accumulates on the LPF’s capacitor, lowering the LPF’s output voltage. Once the PE deactivates, this charge leaks away as well, but this takes tens of milliseconds because the leak is smaller. The PE’s and the LPF’s inhibitory effects on the soma are both described below in terms of the sum (ISHUNT ) of the currents their output voltages produce in pMOS transistors whose sources are at Vdd (see Figure 2A). Note that, in the absence of spikes, these currents decay exponentially, with a time-constant determined by their respective leaks. The synapse circuit is excitatory to the neuron. It is composed of a PE, which represents the neurotransmitter released into the synaptic cleft, and a LPF, which represents the bound neurotransmitter. The synapse circuit is similar to RCK in structure but differs in function: It is activated not by the principal neuron itself but by the STDP circuits (or directly by afferent spikes that bypass these circuits, i.e., fixed synapses). The synapse’s effect on the soma is also described below in terms of the current (ISYN ) its output voltage produces in a pMOS transistor whose source is at Vdd. The soma circuit is a leaky integrator. It receives excitation from the synapse circuit and shunting inhibition from RCK and has a leak current as well. 
Its temporal behavior is described by: τ dISOMA ISYN I0 + ISOMA = dt ISHUNT where ISOMA is the current the capacitor’s voltage produces in a pMOS transistor whose source is at Vdd (see Figure 2A). ISHUNT is the sum of the leak, refractory, and calciumdependent potassium currents. These currents also determine the time constant: τ = C Ut κISHUNT , where I0 and κ are transistor parameters and Ut is the thermal voltage. STDP circuit ~LTP SRAM Presynaptic spike A ~LTD Inverse number of pairings Integrator Decay Postsynaptic spike Potentiation 0.1 0.05 0 0.05 0.1 Depression -80 -40 0 Presynaptic spike Postsynaptic spike 40 Spike timing: t pre - t post (ms) 80 B Figure 3: STDP circuit design and characterization. A The circuit is composed of three subcircuits: decay, integrator, and SRAM. B The circuit potentiates when the presynaptic spike precedes the postsynaptic spike and depresses when the postsynaptic spike precedes the presynaptic spike. The soma circuit is connected to an AH, the locus of spike generation. The AH consists of model voltage-dependent sodium and potassium channel populations (modified from [6] by Kai Hynna). It initiates the AER signaling process required to send a spike off chip. To characterize principal neuron variability, we excited 81 neurons with poisson-like 58Hz spike trains (Figure 2B). We made these spike trains poisson-like by starting with a regular 200Hz spike train and dropping spikes randomly, with probability of 0.71. Thus spikes were delivered to neurons that won the coin toss in synchrony every 5ms. However, neurons did not lock onto the input synchrony due to filtering by the synaptic time constant (see Figure 2B). They also received a common inhibitory input at the theta frequency (8.3Hz), via their leak current. Each neuron was prevented from firing more than one spike in a theta cycle by its model calcium-dependent potassium channel population. The principal neurons’ spike times were variable. 
To quantify spike variability, we used timing precision, which we define as twice the standard deviation of spike times accumulated over five theta cycles. With an input rate of 58Hz, the timing precision was 34ms.

3 STDP Circuit

The STDP circuit (related to [7]-[8]), for which the STDP Chip is named, is the most abundant circuit, with 21,504 copies on the chip. It is built from three subcircuits: decay, integrator, and SRAM (Figure 3A). The decay and integrator implement potentiation and depression in a symmetric fashion; the SRAM holds the current binary state of the synapse, either potentiated or depressed.

For potentiation, the decay remembers the last presynaptic spike: its capacitor is charged when that spike occurs and discharges linearly thereafter. A postsynaptic spike samples the charge remaining on the capacitor, passes it through an exponential function, and dumps the resultant charge into the integrator, where it decays linearly thereafter. At the time of the postsynaptic spike, the SRAM, a cross-coupled inverter pair, reads the voltage on the integrator's capacitor; if it exceeds a threshold, the SRAM switches state from depressed to potentiated (∼LTD goes high and ∼LTP goes low). The depression side of the STDP circuit is exactly symmetric, except that it responds to postsynaptic activation followed by presynaptic activation and switches the SRAM's state from potentiated to depressed (∼LTP goes high and ∼LTD goes low). When the SRAM is in the potentiated state, the presynaptic spike activates the principal neuron's synapse; otherwise the spike has no effect.

Figure 4: Plasticity enhanced phase-coding. A Spike rasters of 81 neurons (9 by 9 cluster) display synchrony over a two-fold range of input rates after STDP. B The degree of enhancement is quantified by timing precision. C Each neuron (center box) sends synapses to (dark gray) and receives synapses from (light gray) twenty-one randomly chosen neighbors up to five nodes away (black indicates both connections).

We characterized the STDP circuit by activating a plastic synapse and a fixed synapse, which elicits a spike, at different relative times. We repeated each pairing at 16Hz and counted the number of pairings required to potentiate (or depress) the synapse. Based on this count, we calculated the efficacy of each pairing as the inverse of the number of pairings required (Figure 3B). For example, if twenty pairings were required to potentiate the synapse, the efficacy of that pre-before-post time interval was one twentieth. The efficacies of potentiation and depression are fit by exponentials with time constants of 11.4ms and 94.9ms, respectively. This behavior is similar to that observed in the hippocampus: potentiation has a shorter time constant and a higher maximum efficacy than depression [3].

4 Recurrent Network

We carried out an experiment designed to test the STDP circuit's ability to compensate for variability in spike timing through PEP. Each neuron received recurrent connections from 21 randomly selected neurons within an 11 by 11 neighborhood centered on itself (see Figure 4C); conversely, it made recurrent connections to randomly chosen neurons within the same neighborhood. These connections were mediated by STDP circuits, initialized to the depressed state. We chose a 9 by 9 cluster of neurons and delivered spikes at mean rates of 50 to 100Hz to each one (dropping spikes with probabilities of 0.75 to 0.5 from a regular 200Hz train) and provided common theta inhibition as before. We compared the variability in spike timing after five seconds of learning with the initial distribution. Phase coding was enhanced after STDP (Figure 4A).
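The pairing-efficacy measurement above has a simple functional form: efficacy falls off exponentially with the pre/post interval, with the reported time constants of 11.4ms (potentiation) and 94.9ms (depression). The sketch below inverts that fit to predict how many pairings a given interval should need. The maximum-efficacy amplitudes are assumed values chosen only to respect the stated ordering (potentiation's maximum exceeds depression's); they are not reported numbers.

```python
import math

TAU_POT = 11.4   # ms, fitted potentiation time constant (from the text)
TAU_DEP = 94.9   # ms, fitted depression time constant (from the text)
A_POT = 0.12     # assumed maximum efficacy of potentiation (illustrative)
A_DEP = 0.06     # assumed maximum efficacy of depression (illustrative)

def efficacy(dt_ms):
    """Efficacy of one pre/post pairing at interval dt = t_post - t_pre.
    Efficacy = 1 / (number of pairings needed to flip the SRAM state)."""
    if dt_ms > 0:   # pre before post: potentiation window
        return A_POT * math.exp(-dt_ms / TAU_POT)
    else:           # post before pre: depression window
        return A_DEP * math.exp(dt_ms / TAU_DEP)

def pairings_needed(dt_ms):
    """Predicted number of 16Hz pairings before the binary synapse flips."""
    return math.ceil(1.0 / efficacy(dt_ms))
```

With these assumed amplitudes, a short pre-before-post interval flips the synapse in a handful of pairings, while a long one takes many more, matching the inverse-pairings definition of efficacy used in Figure 3B.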
Before STDP, spike timing among neurons was highly variable (except at the very highest input rate). After STDP, variability was virtually eliminated (except at the very lowest input rate). Initially, the variability, characterized by timing precision, was inversely related to the input rate, decreasing from 34 to 13ms. After five seconds of STDP, variability decreased and was largely independent of input rate, remaining below 11ms.

Figure 5: Compensating for variability. A Some synapses (dots) become potentiated (light) while others remain depressed (dark) after STDP. B The number of potentiated synapses neurons make (pluses) and receive (circles) is negatively (r = -0.71) and positively (r = 0.76) correlated with their rank in the spiking order, respectively.

Comparing the number of potentiated synapses each neuron made or received with its excitability confirmed the PEP hypothesis (i.e., leading neurons provide additional synaptic current to lagging neurons via potentiated recurrent synapses). In this experiment, to eliminate variability due to noise (as opposed to excitability), we provided a 17 by 17 cluster of neurons with a regular 200Hz excitatory input. Theta inhibition was present as before, and all synapses were initialized to the depressed state. After 10 seconds of STDP, a large fraction of the synapses were potentiated (Figure 5A). When the number of potentiated synapses each neuron made or received was plotted against its rank in the spiking order (Figure 5B), a clear correlation emerged (r = -0.71 and r = 0.76, respectively). As expected, neurons that spiked early made more and received fewer potentiated synapses; in contrast, neurons that spiked late made fewer and received more potentiated synapses.
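The correlations in Figure 5B can be reproduced in miniature. Under the PEP hypothesis, a neuron's rank in the spiking order should anticorrelate with the potentiated synapses it makes and correlate with those it receives. Below is a plain Pearson-correlation helper applied to small synthetic counts consistent with that pattern; the counts are invented for illustration, not chip data.

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Synthetic example: rank 1 spikes first. Early spikers make many potentiated
# synapses and receive few; late spikers show the opposite (invented numbers).
rank     = [1, 2, 3, 4, 5, 6, 7, 8]
made     = [20, 18, 17, 13, 10, 8, 5, 3]
received = [2, 4, 5, 9, 11, 14, 17, 19]

r_made = pearson_r(rank, made)          # strongly negative, as in Figure 5B
r_received = pearson_r(rank, received)  # strongly positive, as in Figure 5B
```

The signs match the reported r = -0.71 (made) and r = 0.76 (received); the magnitudes here are larger only because the toy data are noiseless.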
5 Pattern Completion

After STDP, we found that the network could recall an entire pattern given a subset; thus the same mechanisms that compensated for variability and noise could also compensate for a lack of information. We chose a 9 by 9 cluster of neurons as our pattern and delivered a Poisson-like spike train with a mean rate of 67Hz to each one, as in the first experiment. Theta inhibition was present as before, and all synapses were initialized to the depressed state. Before STDP, we stimulated a subset of the pattern and only neurons in that subset spiked (Figure 6A). After five seconds of STDP, we stimulated the same subset again; this time the stimulated neurons recruited spikes from the other neurons in the pattern, completing it (Figure 6B).

Upon varying the fraction of the pattern presented, we found that the fraction recalled increased faster than the fraction presented. We selected subsets of the original pattern randomly, varying the fraction of neurons chosen from 0.1 to 1.0 (ten trials for each). We classified neurons as active if they spiked in the two-second period over which we recorded. Thus, we characterized PEP's pattern-recall performance as a function of the probability that the pattern's neurons are activated (Figure 6C). With a fraction of 0.50 presented, nearly all of the neurons in the pattern were consistently activated (0.91±0.06), showing robust pattern completion. We fitted the recall performance with a sigmoid that reached a 0.50 recall fraction at an input fraction of 0.30. No spurious neurons were activated during any trials.

Figure 6: Associative recall. A Before STDP, half of the neurons in a pattern are stimulated; only they are activated. B After STDP, half of the neurons in a pattern are stimulated, and all are activated.
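The fitted recall curve can be written down from the two numbers the text reports: 0.50 recall at an input fraction of 0.30, rising to about 0.91 at 0.50. A logistic with midpoint 0.30 reproduces both points if its slope parameter is back-calculated from the second point; that slope value is derived here for illustration, not reported in the text.

```python
import math

MIDPOINT = 0.30  # input fraction giving 0.50 recall (from the fitted sigmoid)
# Slope chosen so that recall(0.50) ≈ 0.91, matching the reported data point
# (back-calculated here; the paper does not report a slope).
SLOPE = 0.20 / math.log(0.91 / 0.09)

def recall_fraction(stimulated_fraction):
    """Logistic model of pattern-recall performance vs. fraction stimulated."""
    return 1.0 / (1.0 + math.exp(-(stimulated_fraction - MIDPOINT) / SLOPE))
```

On this model, recall grows faster than the stimulated fraction over the rising part of the curve, which is exactly the pattern-completion behavior shown in Figure 6C.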
C The fraction of the pattern activated grows faster than the fraction stimulated.

6 Discussion

Our results demonstrate that PEP successfully compensates for graded variations in our silicon recurrent network using binary (on-off) synapses (in contrast with [8], where the weights are graded). While our chip results are encouraging, variability was not eliminated in every case. For the lowest input rate (50Hz), we see virtually no change (Figure 4A). We suspect the timing remains imprecise because, with such low input, neurons do not spike every theta cycle and consequently provide fewer opportunities for the STDP synapses to potentiate. This shortfall illustrates the system's limits: it can only compensate for variability within certain bounds, and only for activity appropriate to the PEP model.

As expected, STDP is the mechanism responsible for PEP. STDP potentiated recurrent synapses from leading neurons to lagging neurons, reducing the disparity among the diverse population of neurons. Even though the STDP circuits are themselves variable, with different efficacies and time constants, the sign of the weight change is always correct when timing is used (data not shown). For this reason, we chose STDP over more physiological implementations of plasticity, such as membrane-voltage-dependent plasticity (MVDP), which can learn with graded voltage signals [9], such as those found in active dendrites, providing more computational power [10]. Previously, we investigated an MVDP circuit, which modeled a voltage-dependent NMDA-receptor-gated synapse [11]. It potentiated when the calcium-current analog exceeded a threshold, which was designed to occur only during a dendritic action potential. This circuit produced behavior similar to STDP, implying it could be used in PEP.
However, it was sensitive to variability in the NMDA and potentiation thresholds, causing a fraction of the population to potentiate whenever the synapse received an input and another fraction to never potentiate, rendering both subpopulations useless. Therefore, the simpler, less biophysical STDP circuit won out over the MVDP circuit: in our system, timing is everything.

Associative storage and recall naturally emerge in the PEP network when synapses between neurons coactivated by a pattern are potentiated. These synapses allow neurons to recruit their peers when a subset of the pattern is presented, thereby completing the pattern. However, this form of pattern storage and completion differs from Hopfield's attractor model [12]. Rather than forming symmetric recurrent circuits, our network forms asymmetric circuits in which neurons make connections exclusively to less excitable neurons in the pattern. In both the Poisson-like and regular cases (Figures 4 & 5), only about six percent of potentiated connections were reciprocated, as expected by chance. We plan to investigate the storage capacity of this asymmetric form of associative memory.

Our system lends itself to modeling brain regions that use precise spike timing, such as the hippocampus. We plan to extend the work presented here to store and recall sequences of patterns, as the hippocampus is hypothesized to do. Place cells that represent different locations spike at different phases of the theta cycle, in relation to the distance to their preferred locations. This sequential spiking will allow us to link patterns representing different locations in the order those locations are visited, thereby realizing episodic memory. We propose PEP as a candidate neural mechanism for information coding and storage in the hippocampal system.
Observations from the CA1 region of the hippocampus suggest that basal dendrites (which primarily receive excitation from recurrent connections) support submillisecond timing precision, consistent with PEP [13]. We have shown, in a silicon model, PEP's ability to exploit such fast recurrent connections to sharpen timing precision as well as to associatively store and recall patterns.

Acknowledgments

We thank Joe Lin for assistance with chip generation. The Office of Naval Research funded this work (Award No. N000140210468).

References

[1] O'Keefe J. & Recce M.L. (1993) Phase relationship between hippocampal place units and the EEG theta rhythm. Hippocampus 3(3):317-330.

[2] Mehta M.R., Lee A.K. & Wilson M.A. (2002) Role of experience and oscillations in transforming a rate code into a temporal code. Nature 417(6890):741-746.

[3] Bi G.Q. & Wang H.X. (2002) Temporal asymmetry in spike timing-dependent synaptic plasticity. Physiology & Behavior 77:551-555.

[4] Rodriguez-Vazquez A., Linan G., Espejo S. & Dominguez-Castro R. (2003) Mismatch-induced trade-offs and scalability of analog preprocessing visual microprocessor chips. Analog Integrated Circuits and Signal Processing 37:73-83.

[5] Boahen K.A. (2000) Point-to-point connectivity between neuromorphic chips using address events. IEEE Transactions on Circuits and Systems II 47:416-434.

[6] Culurciello E.R., Etienne-Cummings R. & Boahen K.A. (2003) A biomorphic digital image sensor. IEEE Journal of Solid State Circuits 38:281-294.

[7] Bofill A., Murray A.F. & Thompson D.P. (2002) Circuits for VLSI implementation of temporally asymmetric Hebbian learning. In: Advances in Neural Information Processing Systems 14, MIT Press.

[8] Cameron K., Boonsobhak V., Murray A. & Renshaw D. (2005) Spike timing dependent plasticity (STDP) can ameliorate process variations in neuromorphic VLSI. IEEE Transactions on Neural Networks 16(6):1626-1627.
[9] Chicca E., Badoni D., Dante V., D'Andreagiovanni M., Salina G., Carota L., Fusi S. & Del Giudice P. (2003) A VLSI recurrent network of integrate-and-fire neurons connected by plastic synapses with long-term memory. IEEE Transactions on Neural Networks 14(5):1297-1307.

[10] Poirazi P. & Mel B.W. (2001) Impact of active dendrites and structural plasticity on the memory capacity of neural tissue. Neuron 29(3):779-796.

[11] Arthur J.V. & Boahen K. (2004) Recurrently connected silicon neurons with active dendrites for one-shot learning. In: IEEE International Joint Conference on Neural Networks 3, pp. 1699-1704.

[12] Hopfield J.J. (1984) Neurons with graded response have collective computational properties like those of two-state neurons. Proceedings of the National Academy of Sciences 81(10):3088-3092.

[13] Ariav G., Polsky A. & Schiller J. (2003) Submillisecond precision of the input-output transformation function mediated by fast sodium dendritic spikes in basal dendrites of CA1 pyramidal neurons. Journal of Neuroscience 23(21):7750-7758.

5 0.83045292 39 nips-2005-Beyond Pair-Based STDP: a Phenomenological Rule for Spike Triplet and Frequency Effects

Author: Jean-pascal Pfister, Wulfram Gerstner

Abstract: While classical experiments on spike-timing dependent plasticity analyzed synaptic changes as a function of the timing of pairs of pre- and postsynaptic spikes, more recent experiments also point to the effect of spike triplets. Here we develop a mathematical framework that allows us to characterize timing based learning rules. Moreover, we identify a candidate learning rule with five variables (and 5 free parameters) that captures a variety of experimental data, including the dependence of potentiation and depression upon pre- and postsynaptic firing frequencies. The relation to the Bienenstock-Cooper-Munro rule as well as to some timing-based rules is discussed. 1

6 0.7726655 64 nips-2005-Efficient estimation of hidden state dynamics from spike trains

7 0.63122725 188 nips-2005-Temporally changing synaptic plasticity

8 0.59108984 61 nips-2005-Dynamical Synapses Give Rise to a Power-Law Distribution of Neuronal Avalanches

9 0.51702303 124 nips-2005-Measuring Shared Information and Coordinated Activity in Neuronal Networks

10 0.4452458 157 nips-2005-Principles of real-time computing with feedback applied to cortical microcircuit models

11 0.43507549 67 nips-2005-Extracting Dynamical Structure Embedded in Neural Activity

12 0.42973495 106 nips-2005-Large-scale biophysical parameter estimation in single neurons via constrained linear regression

13 0.42641073 6 nips-2005-A Connectionist Model for Constructive Modal Reasoning

14 0.41549245 134 nips-2005-Neural mechanisms of contrast dependent receptive field size in V1

15 0.35478213 165 nips-2005-Response Analysis of Neuronal Population with Synaptic Depression

16 0.33724788 164 nips-2005-Representing Part-Whole Relationships in Recurrent Neural Networks

17 0.33564666 129 nips-2005-Modeling Neural Population Spiking Activity with Gibbs Distributions

18 0.31327665 197 nips-2005-Unbiased Estimator of Shape Parameter for Spiking Irregularities under Changing Environments

19 0.31031361 28 nips-2005-Analyzing Auditory Neurons by Learning Distance Functions

20 0.30795994 73 nips-2005-Fast biped walking with a reflexive controller and real-time policy searching


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(3, 0.025), (10, 0.056), (11, 0.017), (27, 0.03), (31, 0.034), (34, 0.068), (39, 0.03), (41, 0.01), (55, 0.023), (57, 0.061), (60, 0.026), (65, 0.015), (69, 0.082), (70, 0.309), (71, 0.012), (73, 0.016), (88, 0.065), (91, 0.059)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.77899081 149 nips-2005-Optimal cue selection strategy

Author: Vidhya Navalpakkam, Laurent Itti

Abstract: Survival in the natural world demands the selection of relevant visual cues to rapidly and reliably guide attention towards prey and predators in cluttered environments. We investigate whether our visual system selects cues that guide search in an optimal manner. We formally obtain the optimal cue selection strategy by maximizing the signal to noise ratio (SN R) between a search target and surrounding distractors. This optimal strategy successfully accounts for several phenomena in visual search behavior, including the effect of target-distractor discriminability, uncertainty in target’s features, distractor heterogeneity, and linear separability. Furthermore, the theory generates a new prediction, which we verify through psychophysical experiments with human subjects. Our results provide direct experimental evidence that humans select visual cues so as to maximize SN R between the targets and surrounding clutter.

same-paper 2 0.77249324 99 nips-2005-Integrate-and-Fire models with adaptation are good enough

Author: Renaud Jolivet, Alexander Rauch, Hans-rudolf Lüscher, Wulfram Gerstner

Abstract: Integrate-and-Fire-type models are usually criticized because of their simplicity. On the other hand, the Integrate-and-Fire model is the basis of most of the theoretical studies on spiking neuron models. Here, we develop a sequential procedure to quantitatively evaluate an equivalent Integrate-and-Fire-type model based on intracellular recordings of cortical pyramidal neurons. We find that the resulting effective model is sufficient to predict the spike train of the real pyramidal neuron with high accuracy. In in vivo-like regimes, predicted and recorded traces are almost indistinguishable and a significant part of the spikes can be predicted at the correct timing. Slow processes like spike-frequency adaptation are shown to be a key feature in this context since they are necessary for the model to connect between different driving regimes. 1

3 0.4552536 181 nips-2005-Spiking Inputs to a Winner-take-all Network

Author: Matthias Oster, Shih-Chii Liu

Abstract: Recurrent networks that perform a winner-take-all computation have been studied extensively. Although some of these studies include spiking networks, they consider only analog input rates. We present results of this winner-take-all computation on a network of integrate-and-fire neurons which receives spike trains as inputs. We show how we can configure the connectivity in the network so that the winner is selected after a pre-determined number of input spikes. We discuss spiking inputs with both regular frequencies and Poisson-distributed rates. The robustness of the computation was tested by implementing the winner-take-all network on an analog VLSI array of 64 integrate-and-fire neurons which have an innate variance in their operating parameters. 1

4 0.44022286 106 nips-2005-Large-scale biophysical parameter estimation in single neurons via constrained linear regression

Author: Misha Ahrens, Liam Paninski, Quentin J. Huys

Abstract: Our understanding of the input-output function of single cells has been substantially advanced by biophysically accurate multi-compartmental models. The large number of parameters needing hand tuning in these models has, however, somewhat hampered their applicability and interpretability. Here we propose a simple and well-founded method for automatic estimation of many of these key parameters: 1) the spatial distribution of channel densities on the cell’s membrane; 2) the spatiotemporal pattern of synaptic input; 3) the channels’ reversal potentials; 4) the intercompartmental conductances; and 5) the noise level in each compartment. We assume experimental access to: a) the spatiotemporal voltage signal in the dendrite (or some contiguous subpart thereof, e.g. via voltage sensitive imaging techniques), b) an approximate kinetic description of the channels and synapses present in each compartment, and c) the morphology of the part of the neuron under investigation. The key observation is that, given data a)-c), all of the parameters 1)-4) may be simultaneously inferred by a version of constrained linear regression; this regression, in turn, is efficiently solved using standard algorithms, without any “local minima” problems despite the large number of parameters and complex dynamics. The noise level 5) may also be estimated by standard techniques. We demonstrate the method’s accuracy on several model datasets, and describe techniques for quantifying the uncertainty in our estimates. 1

5 0.4358719 67 nips-2005-Extracting Dynamical Structure Embedded in Neural Activity

Author: Afsheen Afshar, Gopal Santhanam, Stephen I. Ryu, Maneesh Sahani, Byron M. Yu, Krishna V. Shenoy

Abstract: Spiking activity from neurophysiological experiments often exhibits dynamics beyond that driven by external stimulation, presumably reflecting the extensive recurrence of neural circuitry. Characterizing these dynamics may reveal important features of neural computation, particularly during internally-driven cognitive operations. For example, the activity of premotor cortex (PMd) neurons during an instructed delay period separating movement-target specification and a movementinitiation cue is believed to be involved in motor planning. We show that the dynamics underlying this activity can be captured by a lowdimensional non-linear dynamical systems model, with underlying recurrent structure and stochastic point-process output. We present and validate latent variable methods that simultaneously estimate the system parameters and the trial-by-trial dynamical trajectories. These methods are applied to characterize the dynamics in PMd data recorded from a chronically-implanted 96-electrode array while monkeys perform delayed-reach tasks. 1

6 0.43138906 11 nips-2005-A Hierarchical Compositional System for Rapid Object Detection

7 0.42966682 157 nips-2005-Principles of real-time computing with feedback applied to cortical microcircuit models

8 0.42463362 8 nips-2005-A Criterion for the Convergence of Learning with Spike Timing Dependent Plasticity

9 0.42305773 61 nips-2005-Dynamical Synapses Give Rise to a Power-Law Distribution of Neuronal Avalanches

10 0.42024797 96 nips-2005-Inference with Minimal Communication: a Decision-Theoretic Variational Approach

11 0.41803539 39 nips-2005-Beyond Pair-Based STDP: a Phenomenological Rule for Spike Triplet and Frequency Effects

12 0.4163827 32 nips-2005-Augmented Rescorla-Wagner and Maximum Likelihood Estimation

13 0.41548043 90 nips-2005-Hot Coupling: A Particle Approach to Inference and Normalization on Pairwise Undirected Graphs

14 0.41444144 200 nips-2005-Variable KD-Tree Algorithms for Spatial Pattern Search and Discovery

15 0.41194168 30 nips-2005-Assessing Approximations for Gaussian Process Classification

16 0.4092629 28 nips-2005-Analyzing Auditory Neurons by Learning Distance Functions

17 0.40877342 136 nips-2005-Noise and the two-thirds power Law

18 0.40869492 183 nips-2005-Stimulus Evoked Independent Factor Analysis of MEG Data with Large Background Activity

19 0.40562487 140 nips-2005-Nonparametric inference of prior probabilities from Bayes-optimal behavior

20 0.40496838 43 nips-2005-Comparing the Effects of Different Weight Distributions on Finding Sparse Representations