nips nips2005 nips2005-188 knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Minija Tamosiunaite, Bernd Porr, Florentin Wörgötter
Abstract: Recent experimental results suggest that dendritic and back-propagating spikes can influence synaptic plasticity in different ways [1]. In this study we investigate how these signals could temporally interact at dendrites leading to changing plasticity properties at local synapse clusters. Similar to a previous study [2], we employ a differential Hebbian plasticity rule to emulate spike-timing dependent plasticity. We use dendritic (D-) and back-propagating (BP-) spikes as post-synaptic signals in the learning rule and investigate how their interaction will influence plasticity. We will analyze a situation where synapse plasticity characteristics change in the course of time, depending on the type of post-synaptic activity momentarily elicited. Starting with weak synapses, which only elicit local D-spikes, a slow, unspecific growth process is induced. As soon as the soma begins to spike this process is replaced by fast synaptic changes as the consequence of the much stronger and sharper BP-spike, which now dominates the plasticity rule. This way a winner-take-all mechanism emerges in a two-stage process, enhancing the best-correlated inputs. These results suggest that synaptic plasticity is a temporally changing process by which the computational properties of dendrites or complete neurons can be substantially augmented. 1
Reference: text
sentIndex sentText sentNum sentScore
1 Abstract Recent experimental results suggest that dendritic and back-propagating spikes can influence synaptic plasticity in different ways [1]. [sent-8, score-1.056]
2 In this study we investigate how these signals could temporally interact at dendrites leading to changing plasticity properties at local synapse clusters. [sent-9, score-0.64]
3 Similar to a previous study [2], we employ a differential Hebbian plasticity rule to emulate spike-timing dependent plasticity. [sent-10, score-0.476]
4 We use dendritic (D-) and back-propagating (BP-) spikes as post-synaptic signals in the learning rule and investigate how their interaction will influence plasticity. [sent-11, score-0.592]
5 We will analyze a situation where synapse plasticity characteristics change in the course of time, depending on the type of post-synaptic activity momentarily elicited. [sent-12, score-0.422]
6 As soon as the soma begins to spike this process is replaced by fast synaptic changes as the consequence of the much stronger and sharper BP-spike, which now dominates the plasticity rule. [sent-14, score-0.933]
7 These results suggest that synaptic plasticity is a temporally changing process by which the computational properties of dendrites or complete neurons can be substantially augmented. [sent-16, score-0.69]
8 1 Introduction The traditional view on Hebbian plasticity is that the correlation between pre- and postsynaptic events will drive learning. [sent-17, score-0.541]
9 This view ignores the fact that synaptic plasticity is driven by a whole sequence of events and that some of these events are causally related. [sent-18, score-0.703]
10 For example, the postsynaptic spike is usually triggered by the synaptic activity at a cluster of synapses. [sent-19, score-0.757]
11 This signal can then travel retrogradely into the dendrite (as a so-called back-propagating- or BP-spike, [3]), leading to a depolarization at this and other clusters of synapses by which their plasticity will be influenced. [sent-20, score-0.793]
12 More locally, something similar can happen if a cluster of synapses is able to elicit a dendritic spike (D-spike, [4, 5]), which may not travel far, but which certainly leads to a local depolarization “under” these and adjacent synapses, triggering synaptic plasticity of one kind or another. [sent-21, score-1.534]
13 Hence synaptic plasticity seems to be to some degree influenced by recurrent processes. [sent-22, score-0.58]
14 In this study, we will use a differential Hebbian learning rule [2, 6] to emulate spike timing dependent plasticity (STDP, [7, 8]). [sent-23, score-0.644]
15 With one specifically chosen example architecture we will investigate how the temporal relation between dendritic and back-propagating spikes could influence plasticity. [sent-24, score-0.209]
16 x1 , . . . , xn representing inputs to cluster 1, hAMPA , hNMDA - filters shaping AMPA and NMDA signals, hDS , h̃DS , hBP - filters shaping D- and BP-spikes, q1 , q2 - differential thresholds, τ - a delay. [sent-29, score-0.265]
17 The model includes several clusters of synapses located on dendritic branches. [sent-38, score-0.65]
18 Dendritic spikes are elicited following the summation of several AMPA signals passing threshold q1 . [sent-39, score-0.326]
19 NMDA receptor influence on dendritic spike generation was not considered as the contribution of NMDA potentials to the total membrane potential is substantially smaller than that of AMPA channels at a mixed synapse. [sent-40, score-0.535]
20 Inputs to the model arrive in groups, but each input line gets only one pulse in a given group (Fig. [sent-41, score-0.39]
21 Each synaptic cluster is limited to generating one dendritic spike from one arriving pulse group. [sent-43, score-1.012]
22 Cell firing is not explicitly modelled but is assumed to occur when the summation of several dendritic spikes at the cell soma has passed threshold q2 . [sent-44, score-0.616]
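The cluster- and soma-level event logic described in the last few sentences can be summarised in a few lines. The sketch below is only an illustration of that description, not the authors' implementation; the array layout, the variable names and the one-spike-per-pulse-group flag are assumptions.

    import numpy as np

    def cluster_emits_dspike(ampa_traces, q1, already_spiked_this_group):
        # A cluster elicits a dendritic spike when the summed AMPA signal of its
        # synapses crosses threshold q1, and at most once per arriving pulse group.
        if already_spiked_this_group:
            return False
        summed = np.sum(ampa_traces, axis=0)    # sum over the synapses of the cluster
        return bool(np.any(summed > q1))

    def soma_fires(dspike_traces, q2):
        # Cell firing is not modelled biophysically: the soma is taken to fire (and
        # to emit a BP-spike) once the summed dendritic-spike signals pass threshold q2.
        summed = np.sum(dspike_traces, axis=0)  # sum over the dendritic branches
        return bool(np.any(summed > q2))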
23 Since we do not model biophysical processes, all signal shapes are obtained by appropriate filters h, where u = x ∗ h is the convolution of spike train x with filter h. [sent-47, score-0.199]
24 A differential Hebbian-type learning rule is used to drive synaptic plasticity [2, 6]: dρ/dt = µ u dv/dt, where ρ denotes the synaptic weight, u stands for the synaptic input, v for the output, and µ for the learning rate. [sent-48, score-1.059]
25 NMDA signals are used as the pre-synaptic signals; dendritic spikes, or dendritic spikes complemented by back-propagating spikes, define the post-synaptic signals for the learning rule. [sent-53, score-0.979]
26 In addition, synaptic weights were sigmoidally saturated with limits zero and one. [sent-54, score-0.233]
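A discrete-time reading of this rule could look as follows. This is a sketch under assumptions: the Euler-style integration over one trial, the use of np.gradient for the temporal derivative, and the soft-bound factor ρ(1 − ρ) as one possible way to realise the sigmoidal saturation between zero and one are not details given in the text.

    import numpy as np

    def update_weight(rho, u, v, dt, mu=0.01):
        # Differential Hebbian rule: drho/dt = mu * u * dv/dt, where u is the
        # pre-synaptic (NMDA) signal and v the post-synaptic signal (D- and/or BP-spike),
        # both sampled on the same time grid with step dt.
        v_dot = np.gradient(v, dt)             # temporal derivative of the post-synaptic signal
        drho = mu * np.sum(u * v_dot) * dt     # weight change integrated over the trial
        drho *= rho * (1.0 - rho)              # soft bounds emulate the sigmoidal saturation in [0, 1]
        return float(np.clip(rho + drho, 0.0, 1.0))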
27 Filter shapes forming AMPA and NMDA channel responses, as well as back-propagating spikes and some forms of dendritic spikes used in this study, were described by: h(t) = (e^(−2πt/τ) − e^(−8πt/τ)) / (6π/τ) (1), where τ determines the total duration of the pulse. [sent-55, score-0.617]
28 We use for AMPA channels: τ = 6 ms, for NMDA channels: τ = 120 ms, for dendritic spikes: τ = 235 ms, and for BP-spikes: τ = 40 ms. [sent-57, score-0.335]
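Equation (1) and the time constants above translate directly into code. The sampling step, the kernel duration and the placement of the 6π/τ normalisation (reconstructed here as a division) are assumptions; the published filter may be normalised differently.

    import numpy as np

    TAU_MS = {"AMPA": 6.0, "NMDA": 120.0, "D-spike": 235.0, "BP-spike": 40.0}

    def kernel(tau_ms, dt_ms=0.1, duration_ms=None):
        # Filter of Eq. (1): h(t) = (exp(-2*pi*t/tau) - exp(-8*pi*t/tau)) / (6*pi/tau).
        if duration_ms is None:
            duration_ms = 5.0 * tau_ms         # long enough for the kernel to decay
        t = np.arange(0.0, duration_ms, dt_ms)
        return (np.exp(-2 * np.pi * t / tau_ms) - np.exp(-8 * np.pi * t / tau_ms)) / (6 * np.pi / tau_ms)

    def filtered_signal(spike_train, tau_ms, dt_ms=0.1):
        # u = x * h: convolve a binary spike train with the corresponding kernel.
        return np.convolve(spike_train, kernel(tau_ms, dt_ms), mode="full")[:len(spike_train)]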
29 We distinguish three basic input groups: strongly correlated inputs (several inputs over an interval of up to 10 ms), less correlated (dispersed over an interval of 10-100 ms) and uncorrelated (dispersed over an interval of more than 100 ms). [sent-61, score-0.86]
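One simple way to draw such pulse groups is sketched below; the uniform jitter and the 200 ms width used for the uncorrelated case (the text only says "more than 100 ms") are assumptions.

    import numpy as np

    rng = np.random.default_rng(0)

    def pulse_group(kind, n_inputs=3, t0_ms=100.0):
        # One spike time per input line, dispersed according to the group type.
        width_ms = {"strongly_correlated": 10.0,   # up to 10 ms
                    "less_correlated": 100.0,      # 10-100 ms (upper end used here)
                    "uncorrelated": 200.0}[kind]   # more than 100 ms (assumed 200 ms)
        return t0_ms + rng.uniform(0.0, width_ms, size=n_inputs)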
30 Figure 2: Example STDP curves (A,B), input pulse distribution (C), and model setup (D). [sent-62, score-0.25]
31 C) Example input pulse distribution for two pulse groups. [sent-65, score-0.4]
32 D) Model neuron with two dendritic branches (left and right), consisting of two sub-branches which get inputs X or Y , which are similar for either side. [sent-66, score-0.481]
33 In the absence of a BP spike the D-spike dominates plasticity. [sent-73, score-0.205]
34 This seems to correspond to new physiological observations concerning the relations between post-synaptic signals and the actually expressed form of plasticity [10]. [sent-74, score-0.472]
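The shape of STDP curves such as those in Fig. 2 A,B follows from pairing a single pre-synaptic trace with a single post-synaptic spike shape and sweeping their relative timing. The sketch below illustrates this mechanism only; the parameter values, the time grid and the scaling are assumptions and will not reproduce the published curves exactly.

    import numpy as np

    def h(t, tau):                                  # Eq. (1), zero for t < 0
        t = np.clip(t, 0.0, None)
        return (np.exp(-2 * np.pi * t / tau) - np.exp(-8 * np.pi * t / tau)) / (6 * np.pi / tau)

    def stdp_curve(tau_pre=120.0, tau_post=40.0, mu=1.0, dt=0.1):
        # Weight change of the differential Hebbian rule as a function of the delay
        # between a pre-synaptic NMDA pulse and a post-synaptic (BP- or D-) spike.
        delays = np.arange(-80.0, 80.0, 2.0)        # t_pre - t_post in ms
        t = np.arange(0.0, 600.0, dt)
        curve = []
        for d in delays:
            u = h(t - (200.0 + d), tau_pre)         # pre-synaptic signal
            v = h(t - 200.0, tau_post)              # post-synaptic signal
            curve.append(mu * np.sum(u * np.gradient(v, dt)) * dt)
        return delays, np.array(curve)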
35 We specifically investigate a two-phase process, where plasticity is first dominated by the D-spike and later by a BP-spike. [sent-75, score-0.588]
36 2 D shows a setup in which two-phase plasticity could arise. [sent-77, score-0.422]
37 We assume that inputs to compact clusters of synapses are similar (e. [sent-78, score-0.388]
38 early in development, synapses may be weak and only the conjoint action of many synchronous inputs will lead to a local D-spike. [sent-84, score-0.452]
39 Local plasticity from these few D-spikes (indicated by the circular arrow under the dendritic branches in Fig. [sent-85, score-0.796]
2) strengthens these synapses and at some point D-spikes are elicited more reliably at conjoint branches. [sent-86, score-0.385]
41 This could finally also lead to spiking at the soma and, hence, to a BP-spike, changing plasticity of the individual synapses. [sent-87, score-0.485]
42 Assuming that at some point the cell will be driven into spiking, a BP-spike is added after several hundred pulse groups (second part of the experiment). [sent-90, score-0.306]
43 Figure 3: Temporal weight development for the setup shown in Fig 2 with one sub-branch for the driving cluster (A), and one for the non-driving cluster (B). [sent-91, score-0.35]
44 Initially all weights grow gradually until the driving cluster leads to a BP-spike after 200 pulse groups. [sent-92, score-0.447]
45 Thus only the weights of its group x1 − x3 will continue to grow, now at an increased rate. [sent-93, score-0.183]
46 For both clusters, we assume that the input activity for three synapses is closely correlated and that they occur in a temporal interval of 6 ms (group x, y: 1 − 3). [sent-97, score-0.895]
47 Three other inputs are more widely dispersed (interval of 35 ms, group x, y: 4−6) and the three remaining ones arrive uncorrelated in an interval of 150 ms (group x, y: 7 − 9). [sent-98, score-0.734]
48 Pulse groups arriving at the second cluster, however, were randomly shifted by maximally ±20 ms relative to the centre of the pulse group of the first cluster. [sent-100, score-0.767]
49 Hence initially plasticity can only take place by D-spikes, and we assume that D-spikes will not reach the other cluster. [sent-103, score-0.388]
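Putting the pieces together, a toy version of one cluster of nine synapses going through the two phases can be written compactly. The sketch below follows the description in the text, but every numeric choice not stated there (threshold value, learning rate, BP-spike amplitude and its 2 ms lag behind the D-spike, the soft-bounded update) is an assumption; it is meant to show the structure of the simulation, not to reproduce Fig. 3.

    import numpy as np

    rng = np.random.default_rng(1)

    def h(t, tau):                                           # Eq. (1) kernel, zero for t < 0
        t = np.clip(t, 0.0, None)
        return (np.exp(-2 * np.pi * t / tau) - np.exp(-8 * np.pi * t / tau)) / (6 * np.pi / tau)

    dt, t = 0.5, np.arange(0.0, 600.0, 0.5)                  # time grid in ms
    widths = np.array([6.0] * 3 + [35.0] * 3 + [150.0] * 3)  # dispersion of inputs 1-3, 4-6, 7-9
    w, mu, q1 = np.full(9, 0.05), 0.002, 0.1                 # initial weights, learning rate, threshold (assumed)
    bp_onset_group = 200                                     # soma assumed to start firing here

    for g in range(400):                                     # pulse groups
        spikes = 100.0 + rng.uniform(0.0, widths)            # one pulse per input line
        pre  = np.array([h(t - ts, 120.0) for ts in spikes]) # NMDA traces (pre-synaptic signals)
        ampa = np.array([h(t - ts,   6.0) for ts in spikes]) # AMPA traces drive the D-spike
        drive = (w[:, None] * ampa).sum(axis=0)
        if drive.max() > q1:                                 # at most one D-spike per pulse group
            t_ds = t[np.argmax(drive > q1)]
            post = h(t - t_ds, 235.0)                        # D-spike
            if g >= bp_onset_group:                          # second phase: add the BP-spike
                post = post + 3.0 * h(t - (t_ds + 2.0), 40.0)
            dw = mu * (pre * np.gradient(post, dt)[None, :]).sum(axis=1) * dt
            w = np.clip(w + dw * w * (1.0 - w), 0.0, 1.0)    # sigmoidal (soft-bounded) saturation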
50 5ms around zero, covering the dispersion of input groups 1 − 3 as well as 4 − 6. [sent-106, score-0.286]
3 A,B) we see that all weights 1 − 6 grow; only for the least correlated inputs 7 − 9 do the weights remain close to their origin. [sent-109, score-0.29]
52 The correlated group 1 − 3, however, benefits most strongly, because it is more likely that a D-spike will be elicited by this group than by any other combination. [sent-110, score-0.53]
53 Conjoint growth at a whole cluster of such synapses would at some point drive the cell into somatic firing. [sent-111, score-0.535]
54 This can, for example, be the case when the input properties of the two input groups are different leading to (slightly) less weight growth in the other cluster. [sent-114, score-0.312]
55 2 B now strongly enhancing all causally driving synapses, hence group x 1 − x3 (Fig. [sent-116, score-0.289]
56 This group grows at an increased rate while all other synapses shrink. [sent-118, score-0.391]
57 This result was reproduced in a model with 100 synapses in each input group (data not shown) and in the next sections we will show that a system with two growth phases is rather robust against parameter variations. [sent-120, score-0.508]
58 Plotted are the average weights of the less correlated group (ordinate) against the correlated group (abscissa). [sent-122, score-0.721]
59 Simulation with three correlated and three less correlated inputs, for AMPA: τ = 6 ms, for NMDA: τ = 117 ms, for D-spike: τ = 235 ms, for BP-spike: τ = 6 − 66 ms, q1 = 0. [sent-123, score-0.396]
60 4 shows a plot of 350 experiments with the same basic architecture, using only one synapse cluster and the same chain of events as before but with different parameter settings. [sent-133, score-0.176]
61 Each point represents one experiment consisting of 600 pulse groups. [sent-135, score-0.184]
62 On the abscissa we plot the average weight of the three correlated synapses; on the ordinate the average weight of the three less correlated synapses after these 600 pulse groups. [sent-136, score-0.987]
63 We assume, as in the last experiment, that a BP-spike is triggered as soon as q2 is passed, which happens around pulse group 200 in all cases. [sent-137, score-0.431]
64 (1) The width of the BP-spike was varied between 5 ms and 50 ms. [sent-139, score-0.417]
65 (2) The interval width for the temporal distribution of the three correlated spikes was varied between 1 ms and 10 ms. [sent-140, score-0.844]
66 Hence 1 ms amounts to three synchronously elicited spikes. [sent-141, score-0.398]
67 (3) The interval width for the temporal distribution of the three less correlated spikes was varied between 1 ms and 100 ms. [sent-142, score-0.888]
68 The first parameter, BP spike width, shows some small interference with the spike shift for the widest spikes. [sent-145, score-0.378]
69 Symbols “dots”, “diamonds” and “others” (circles and plusses) refer to BP-spike shifts of less than −5 ms (dots), between −5 ms and +5 ms (diamonds), and larger than +5 ms (circles and plusses). [sent-149, score-1.356]
70 Circles in the latter region show cases with the less correlated dispersion interval below 40 ms, and plusses the cases with a dispersion of 40 ms or higher. [sent-150, score-1.062]
71 The “dot” region (−5 ms) shows cases where correlated synapses will grow, while less correlated synapses can grow or shrink. [sent-151, score-0.976]
72 This happens because the BP spike is too early to influence plasticity in the strongly correlated group, which will grow by the DS-mechanism only, but the BP-spike still falls in the dispersion range of the less correlated group, influencing its weights. [sent-152, score-1.335]
73 At a shift of −5 ms a fast transition in the weight development occurs. [sent-153, score-0.448]
74 The randomness whether the input falls into pre- or post-output zone in both, correlated and less correlated, groups is large enough, and leads to weights staying close to origin or to shrinkage. [sent-155, score-0.404]
75 The circles and plusses encode the dispersion of the wide, less correlated spike distributions in the case when time shifts of the BP-spike are positive (> 5 ms, hence BP-spike after D-spike). [sent-156, score-0.785]
76 The data points show a certain regularity when the BP spike shift moves from large values towards the borderline of +5 ms, where the weights stop growing. [sent-161, score-0.251]
77 For big shifts, points cluster on the upper, diagonal tail in or near the dot region. [sent-162, score-0.208]
78 With a smaller BP spike shift points move up this tail and then drop down to the horizontal tail, which occurs for shifts of about 20 ms. [sent-163, score-0.332]
79 This pattern is typical for the bigger dispersion in the range of 20 − 60 ms and data points essentially follow the circle drawn in the figure. [sent-164, score-0.537]
80 But this will first only affect the less correlated group as there are almost always some inputs so late that they “collide” with the BP-spike. [sent-166, score-0.435]
81 Hence LTP and LTD will be essentially balanced in the less correlated group, leading on average to zero weight growth. [sent-168, score-0.261]
4 Discussion Just like with the famous Baron von Münchhausen, who was able to pull himself out of a swamp by his own hair, the current study suggests that plasticity changing as a consequence of itself might lead to specific functional properties. [sent-172, score-0.388]
83 In order to arrive at this conclusion, we have used a simplified model of STDP and combined it with a custom designed and also simplified dendritic architecture. [sent-173, score-0.367]
84 This model never attempted to address the difficult issues of the biophysics of synaptic plasticity (for a discussion see [2]) and it was also not our goal to investigate the mechanisms of signal propagation in a dendrite [11]. [sent-176, score-0.694]
85 Both aspects had been reduced to a few basic descriptors and this way we were able to show for the first time that a useful synaptic selection process can develop over time. [sent-177, score-0.192]
86 The system consisted of a first “pre-growth” phase (until the BP-spike sets in) followed by a second phase where only one group of synapses grows strongly, while the others shrink again. [sent-178, score-0.391]
87 In general this example describes a scenario where groups of synapses first undergo less selective classical Hebbian-like growth, while later more pronounced STDP sets in, selecting only the main driving group. [sent-179, score-0.413]
88 We believe that in the early development of a real brain such a two-phase system might be beneficial for the stable selection of those synapses that are better correlated. [sent-180, score-0.322]
89 It is conceivable that at early developmental stages correlations are in general weaker, while the number of inputs to a cell is probably much higher than in the adult stage, where many have been pruned. [sent-181, score-0.181]
90 Hence highly selective and strong STDP-like plasticity employed too early might lead to a noise-induced growth of “the wrong” synapses. [sent-182, score-0.509]
91 This, however, might be prevented by just such a soft pre-selection mechanism which would gradually drive clusters of synapses apart by a local dendritic process before the stronger influence of the back-propagating spike sets in. [sent-183, score-0.935]
92 This is supported by recent results from Holthoff et al. [1, 12], who have shown that D-spikes will lead to a different type of plasticity than BP-spikes in layer 5 pyramidal cells in mouse cortex. [sent-184, score-0.473]
93 This will require re-addressing these issues in greater detail when dealing with a specific given neuron, but the general conclusions about the self-influencing and local [2, 13] character of synaptic plasticity and their possible functional use should hopefully remain valid. [sent-186, score-0.608]
94 Dendritic mechanisms underlying the coupling of the dendritic with the axonal action potential initiation zone of adult rat layer 5 pyramidal neurons. [sent-224, score-0.476]
95 A synaptically controlled, associative signal for Hebbian plasticity in hippocampal neurons. [sent-247, score-0.388]
96 Regulation of synaptic efficacy u by coincidence of postsynaptic APs and EPSPs. [sent-254, score-0.242]
97 Local learning rules: predicted influence o o of dendritic location on synaptic modification in spike-timing-dependent plasticity. [sent-260, score-0.527]
98 Coactivation and timingdependent integration of synaptic potentiation and depression. [sent-273, score-0.192]
99 Propagation of action potentials in dendrites a depends on dendritic morphology. [sent-280, score-0.377]
100 Single-shock LTD by local dendritic spikes in pyramidal neurons of mouse visual cortex. [sent-289, score-0.589]
wordName wordTfidf (topN-words)
[('plasticity', 0.388), ('dendritic', 0.335), ('ms', 0.328), ('synapses', 0.249), ('synaptic', 0.192), ('pulse', 0.184), ('dispersion', 0.176), ('correlated', 0.176), ('spike', 0.168), ('stdp', 0.163), ('group', 0.142), ('spikes', 0.141), ('nmda', 0.13), ('ampa', 0.115), ('bp', 0.106), ('cluster', 0.098), ('plusses', 0.088), ('saudargiene', 0.088), ('growth', 0.085), ('signals', 0.084), ('grow', 0.082), ('groups', 0.078), ('tail', 0.078), ('rg', 0.077), ('interval', 0.074), ('uence', 0.074), ('inputs', 0.073), ('branches', 0.073), ('elicited', 0.07), ('conjoint', 0.066), ('holthoff', 0.066), ('porr', 0.066), ('clusters', 0.066), ('soma', 0.065), ('drive', 0.059), ('dispersed', 0.057), ('circles', 0.055), ('happens', 0.053), ('dendrite', 0.052), ('emulate', 0.052), ('ltd', 0.052), ('soon', 0.052), ('hebbian', 0.05), ('pyramidal', 0.05), ('postsynaptic', 0.05), ('width', 0.046), ('shifts', 0.044), ('events', 0.044), ('less', 0.044), ('dspikes', 0.044), ('golding', 0.044), ('hds', 0.044), ('kovalchuk', 0.044), ('yuste', 0.044), ('cell', 0.044), ('varied', 0.043), ('driving', 0.042), ('shift', 0.042), ('dendrites', 0.042), ('weight', 0.041), ('weights', 0.041), ('abscissa', 0.038), ('ordinate', 0.038), ('elicit', 0.038), ('depolarization', 0.038), ('scotland', 0.038), ('dispersions', 0.038), ('dominates', 0.037), ('development', 0.037), ('differential', 0.036), ('early', 0.036), ('strongly', 0.036), ('temporal', 0.036), ('arriving', 0.035), ('causally', 0.035), ('mouse', 0.035), ('stirling', 0.035), ('glasgow', 0.035), ('diamonds', 0.035), ('synapse', 0.034), ('setup', 0.034), ('hence', 0.034), ('zone', 0.033), ('bigger', 0.033), ('input', 0.032), ('arrive', 0.032), ('dot', 0.032), ('changing', 0.032), ('investigate', 0.032), ('channels', 0.032), ('summation', 0.031), ('sharper', 0.031), ('biophysical', 0.031), ('ltp', 0.031), ('mechanisms', 0.03), ('shaping', 0.029), ('uencing', 0.029), ('local', 0.028), ('uncorrelated', 0.028), ('adult', 0.028)]
simIndex simValue paperId paperTitle
same-paper 1 0.99999964 188 nips-2005-Temporally changing synaptic plasticity
Author: Minija Tamosiunaite, Bernd Porr, Florentin Wörgötter
Abstract: Recent experimental results suggest that dendritic and back-propagating spikes can influence synaptic plasticity in different ways [1]. In this study we investigate how these signals could temporally interact at dendrites leading to changing plasticity properties at local synapse clusters. Similar to a previous study [2], we employ a differential Hebbian plasticity rule to emulate spike-timing dependent plasticity. We use dendritic (D-) and back-propagating (BP-) spikes as post-synaptic signals in the learning rule and investigate how their interaction will influence plasticity. We will analyze a situation where synapse plasticity characteristics change in the course of time, depending on the type of post-synaptic activity momentarily elicited. Starting with weak synapses, which only elicit local D-spikes, a slow, unspecific growth process is induced. As soon as the soma begins to spike this process is replaced by fast synaptic changes as the consequence of the much stronger and sharper BP-spike, which now dominates the plasticity rule. This way a winner-take-all-mechanism emerges in a two-stage process, enhancing the best-correlated inputs. These results suggest that synaptic plasticity is a temporal changing process by which the computational properties of dendrites or complete neurons can be substantially augmented. 1
2 0.29654071 8 nips-2005-A Criterion for the Convergence of Learning with Spike Timing Dependent Plasticity
Author: Robert A. Legenstein, Wolfgang Maass
Abstract: We investigate under what conditions a neuron can learn by experimentally supported rules for spike timing dependent plasticity (STDP) to predict the arrival times of strong “teacher inputs” to the same neuron. It turns out that in contrast to the famous Perceptron Convergence Theorem, which predicts convergence of the perceptron learning rule for a simplified neuron model whenever a stable solution exists, no equally strong convergence guarantee can be given for spiking neurons with STDP. But we derive a criterion on the statistical dependency structure of input spike trains which characterizes exactly when learning with STDP will converge on average for a simple model of a spiking neuron. This criterion is reminiscent of the linear separability criterion of the Perceptron Convergence Theorem, but it applies here to the rows of a correlation matrix related to the spike inputs. In addition we show through computer simulations for more realistic neuron models that the resulting analytically predicted positive learning results not only hold for the common interpretation of STDP where STDP changes the weights of synapses, but also for a more realistic interpretation suggested by experimental data where STDP modulates the initial release probability of dynamic synapses. 1
3 0.29464906 118 nips-2005-Learning in Silicon: Timing is Everything
Author: John V. Arthur, Kwabena Boahen
Abstract: We describe a neuromorphic chip that uses binary synapses with spike timing-dependent plasticity (STDP) to learn stimulated patterns of activity and to compensate for variability in excitability. Specifically, STDP preferentially potentiates (turns on) synapses that project from excitable neurons, which spike early, to lethargic neurons, which spike late. The additional excitatory synaptic current makes lethargic neurons spike earlier, thereby causing neurons that belong to the same pattern to spike in synchrony. Once learned, an entire pattern can be recalled by stimulating a subset. 1
4 0.2387896 39 nips-2005-Beyond Pair-Based STDP: a Phenomenological Rule for Spike Triplet and Frequency Effects
Author: Jean-pascal Pfister, Wulfram Gerstner
Abstract: While classical experiments on spike-timing dependent plasticity analyzed synaptic changes as a function of the timing of pairs of pre- and postsynaptic spikes, more recent experiments also point to the effect of spike triplets. Here we develop a mathematical framework that allows us to characterize timing based learning rules. Moreover, we identify a candidate learning rule with five variables (and 5 free parameters) that captures a variety of experimental data, including the dependence of potentiation and depression upon pre- and postsynaptic firing frequencies. The relation to the Bienenstock-Cooper-Munro rule as well as to some timing-based rules is discussed. 1
5 0.19367242 99 nips-2005-Integrate-and-Fire models with adaptation are good enough
Author: Renaud Jolivet, Alexander Rauch, Hans-rudolf Lüscher, Wulfram Gerstner
Abstract: Integrate-and-Fire-type models are usually criticized because of their simplicity. On the other hand, the Integrate-and-Fire model is the basis of most of the theoretical studies on spiking neuron models. Here, we develop a sequential procedure to quantitatively evaluate an equivalent Integrate-and-Fire-type model based on intracellular recordings of cortical pyramidal neurons. We find that the resulting effective model is sufficient to predict the spike train of the real pyramidal neuron with high accuracy. In in vivo-like regimes, predicted and recorded traces are almost indistinguishable and a significant part of the spikes can be predicted at the correct timing. Slow processes like spike-frequency adaptation are shown to be a key feature in this context since they are necessary for the model to connect between different driving regimes. 1
6 0.18062623 181 nips-2005-Spiking Inputs to a Winner-take-all Network
7 0.17487282 106 nips-2005-Large-scale biophysical parameter estimation in single neurons via constrained linear regression
8 0.11346497 61 nips-2005-Dynamical Synapses Give Rise to a Power-Law Distribution of Neuronal Avalanches
9 0.10819059 64 nips-2005-Efficient estimation of hidden state dynamics from spike trains
10 0.10436795 67 nips-2005-Extracting Dynamical Structure Embedded in Neural Activity
11 0.10352764 40 nips-2005-CMOL CrossNets: Possible Neuromorphic Nanoelectronic Circuits
12 0.098547764 128 nips-2005-Modeling Memory Transfer and Saving in Cerebellar Motor Learning
13 0.095831186 176 nips-2005-Silicon growth cones map silicon retina
14 0.077527657 157 nips-2005-Principles of real-time computing with feedback applied to cortical microcircuit models
15 0.068123855 28 nips-2005-Analyzing Auditory Neurons by Learning Distance Functions
16 0.065028258 89 nips-2005-Group and Topic Discovery from Relations and Their Attributes
17 0.063459501 164 nips-2005-Representing Part-Whole Relationships in Recurrent Neural Networks
18 0.059506379 129 nips-2005-Modeling Neural Population Spiking Activity with Gibbs Distributions
19 0.059048809 124 nips-2005-Measuring Shared Information and Coordinated Activity in Neuronal Networks
20 0.058481853 165 nips-2005-Response Analysis of Neuronal Population with Synaptic Depression
topicId topicWeight
[(0, 0.181), (1, -0.382), (2, -0.111), (3, -0.164), (4, -0.053), (5, -0.088), (6, 0.032), (7, 0.092), (8, -0.002), (9, -0.064), (10, -0.005), (11, -0.013), (12, 0.072), (13, 0.036), (14, 0.055), (15, -0.065), (16, -0.129), (17, 0.064), (18, -0.004), (19, -0.118), (20, -0.067), (21, -0.055), (22, 0.089), (23, -0.014), (24, -0.069), (25, -0.05), (26, -0.067), (27, -0.04), (28, -0.015), (29, 0.039), (30, -0.139), (31, 0.109), (32, -0.005), (33, -0.196), (34, 0.072), (35, 0.06), (36, -0.006), (37, 0.032), (38, 0.099), (39, 0.075), (40, -0.124), (41, -0.012), (42, 0.02), (43, -0.004), (44, -0.02), (45, 0.049), (46, 0.06), (47, -0.036), (48, 0.107), (49, 0.049)]
simIndex simValue paperId paperTitle
same-paper 1 0.98445171 188 nips-2005-Temporally changing synaptic plasticity
Author: Minija Tamosiunaite, Bernd Porr, Florentin Wörgötter
Abstract: Recent experimental results suggest that dendritic and back-propagating spikes can influence synaptic plasticity in different ways [1]. In this study we investigate how these signals could temporally interact at dendrites leading to changing plasticity properties at local synapse clusters. Similar to a previous study [2], we employ a differential Hebbian plasticity rule to emulate spike-timing dependent plasticity. We use dendritic (D-) and back-propagating (BP-) spikes as post-synaptic signals in the learning rule and investigate how their interaction will influence plasticity. We will analyze a situation where synapse plasticity characteristics change in the course of time, depending on the type of post-synaptic activity momentarily elicited. Starting with weak synapses, which only elicit local D-spikes, a slow, unspecific growth process is induced. As soon as the soma begins to spike this process is replaced by fast synaptic changes as the consequence of the much stronger and sharper BP-spike, which now dominates the plasticity rule. This way a winner-take-all-mechanism emerges in a two-stage process, enhancing the best-correlated inputs. These results suggest that synaptic plasticity is a temporal changing process by which the computational properties of dendrites or complete neurons can be substantially augmented. 1
2 0.75662577 39 nips-2005-Beyond Pair-Based STDP: a Phenomenological Rule for Spike Triplet and Frequency Effects
Author: Jean-pascal Pfister, Wulfram Gerstner
Abstract: While classical experiments on spike-timing dependent plasticity analyzed synaptic changes as a function of the timing of pairs of pre- and postsynaptic spikes, more recent experiments also point to the effect of spike triplets. Here we develop a mathematical framework that allows us to characterize timing based learning rules. Moreover, we identify a candidate learning rule with five variables (and 5 free parameters) that captures a variety of experimental data, including the dependence of potentiation and depression upon pre- and postsynaptic firing frequencies. The relation to the Bienenstock-Cooper-Munro rule as well as to some timing-based rules is discussed. 1
3 0.7451883 118 nips-2005-Learning in Silicon: Timing is Everything
Author: John V. Arthur, Kwabena Boahen
Abstract: We describe a neuromorphic chip that uses binary synapses with spike timing-dependent plasticity (STDP) to learn stimulated patterns of activity and to compensate for variability in excitability. Specifically, STDP preferentially potentiates (turns on) synapses that project from excitable neurons, which spike early, to lethargic neurons, which spike late. The additional excitatory synaptic current makes lethargic neurons spike earlier, thereby causing neurons that belong to the same pattern to spike in synchrony. Once learned, an entire pattern can be recalled by stimulating a subset. 1 Variability in Neural Systems Evidence suggests precise spike timing is important in neural coding, specifically, in the hippocampus. The hippocampus uses timing in the spike activity of place cells (in addition to rate) to encode location in space [1]. Place cells employ a phase code: the timing at which a neuron spikes relative to the phase of the inhibitory theta rhythm (5-12Hz) conveys information. As an animal approaches a place cell’s preferred location, the place cell not only increases its spike rate, but also spikes at earlier phases in the theta cycle. To implement a phase code, the theta rhythm is thought to prevent spiking until the input synaptic current exceeds the sum of the neuron threshold and the decreasing inhibition on the downward phase of the cycle [2]. However, even with identical inputs and common theta inhibition, neurons do not spike in synchrony. Variability in excitability spreads the activity in phase. Lethargic neurons (such as those with high thresholds) spike late in the theta cycle, since their input exceeds the sum of the neuron threshold and theta inhibition only after the theta inhibition has had time to decrease. Conversely, excitable neurons (such as those with low thresholds) spike early in the theta cycle. Consequently, variability in excitability translates into variability in timing. We hypothesize that the hippocampus achieves its precise spike timing (about 10ms) through plasticity enhanced phase-coding (PEP). The source of hippocampal timing precision in the presence of variability (and noise) remains unexplained. Synaptic plasticity can compensate for variability in excitability if it increases excitatory synaptic input to neurons in inverse proportion to their excitabilities. Recasting this in a phase-coding framework, we desire a learning rule that increases excitatory synaptic input to neurons directly related to their phases. Neurons that lag require additional synaptic input, whereas neurons that lead 120µm 190µm A B Figure 1: STDP Chip. A The chip has a 16-by-16 array of microcircuits; one microcircuit includes four principal neurons, each with 21 STDP circuits. B The STDP Chip is embedded in a circuit board including DACs, a CPLD, a RAM chip, and a USB chip, which communicates with a PC. require none. The spike timing-dependent plasticity (STDP) observed in the hippocampus satisfies this requirement [3]. It requires repeated pre-before-post spike pairings (within a time window) to potentiate and repeated post-before-pre pairings to depress a synapse. Here we validate our hypothesis with a model implemented in silicon, where variability is as ubiquitous as it is in biology [4]. Section 2 presents our silicon system, including the STDP Chip. Section 3 describes and characterizes the STDP circuit. Section 4 demonstrates that PEP compensates for variability and provides evidence that STDP is the compensation mechanism. 
2 Silicon System We have designed, submitted, and tested a silicon implementation of PEP. The STDP Chip was fabricated through MOSIS in a 1P5M 0.25µm CMOS process, with just under 750,000 transistors in just over 10mm2 of area. It has a 32 by 32 array of excitatory principal neurons commingled with a 16 by 16 array of inhibitory interneurons that are not used here (Figure 1A). Each principal neuron has 21 STDP synapses. The address-event representation (AER) [5] is used to transmit spikes off chip and to receive afferent and recurrent spike input. To configure the STDP Chip as a recurrent network, we embedded it in a circuit board (Figure 1B). The board has five primary components: a CPLD (complex programmable logic device), the STDP Chip, a RAM chip, a USB interface chip, and DACs (digital-to-analog converters). The central component in the system is the CPLD. The CPLD handles AER traffic, mediates communication between devices, and implements recurrent connections by accessing a lookup table stored in the RAM chip. The USB interface chip provides a bidirectional link with a PC. The DACs control the analog biases in the system, including the leak current, which the PC varies in real time to create the global inhibitory theta rhythm. The principal neuron consists of a refractory period and calcium-dependent potassium circuit (RCK), a synapse circuit, and a soma circuit (Figure 2A). Figure 2: Principal neuron. A A simplified schematic is shown, including: the synapse, refractory and calcium-dependent potassium channel (RCK), soma, and axon-hillock (AH) circuits, plus their constituent elements, the pulse extender (PE) and the low-pass filter (LPF). B Spikes (dots) from 81 principal neurons are temporally dispersed, when excited by Poisson-like inputs (58Hz) and inhibited by the common 8.3Hz theta rhythm (solid line). The histogram includes spikes from five theta cycles. RCK and the synapse are composed of two reusable blocks: the low-pass filter (LPF) and the pulse extender (PE). The soma is a modified version of the LPF, which receives additional input from an axon-hillock circuit (AH). RCK is inhibitory to the neuron. It consists of a PE, which models calcium influx during a spike, and an LPF, which models calcium buffering. When AH fires a spike, a packet of charge is dumped onto a capacitor in the PE. The PE’s output stays active until the charge decays away, which takes a few milliseconds. Also, while the PE is active, charge accumulates on the LPF’s capacitor, lowering the LPF’s output voltage. Once the PE deactivates, this charge leaks away as well, but this takes tens of milliseconds because the leak is smaller. The PE’s and the LPF’s inhibitory effects on the soma are both described below in terms of the sum (I_SHUNT) of the currents their output voltages produce in pMOS transistors whose sources are at Vdd (see Figure 2A). Note that, in the absence of spikes, these currents decay exponentially, with a time constant determined by their respective leaks.
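The RCK description above (a pulse extender that is charged by each somatic spike and decays within milliseconds, feeding a slower low-pass filter that models calcium buffering) can be sketched behaviorally as follows; the time constants and amplitudes are illustrative assumptions, not extracted circuit values.

```python
import numpy as np

dt = 1e-4                     # 0.1 ms timestep
T = 0.2
steps = int(T / dt)
spike_times = {int(0.02 / dt), int(0.05 / dt)}   # two example somatic spikes

pe = 0.0                      # pulse-extender charge (fast, ~ms decay)
lpf = 0.0                     # low-pass-filtered calcium signal (slow, tens of ms)
tau_pe, tau_lpf = 3e-3, 40e-3 # assumed decay constants

pe_trace, lpf_trace = [], []
for k in range(steps):
    if k in spike_times:
        pe += 1.0             # packet of charge dumped onto the PE capacitor
    pe -= dt * pe / tau_pe    # fast leak
    lpf += dt * (pe - lpf) / tau_lpf   # charge accumulates while the PE is active, then leaks slowly
    pe_trace.append(pe)
    lpf_trace.append(lpf)

# In the real circuit I_SHUNT is a monotonic function of both signals;
# here we just report the combined inhibitory drive.
print("peak combined shunt signal:", max(p + l for p, l in zip(pe_trace, lpf_trace)))
```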
The synapse circuit is excitatory to the neuron. It is composed of a PE, which represents the neurotransmitter released into the synaptic cleft, and an LPF, which represents the bound neurotransmitter. The synapse circuit is similar to RCK in structure but differs in function: it is activated not by the principal neuron itself but by the STDP circuits (or directly by afferent spikes that bypass these circuits, i.e., fixed synapses). The synapse’s effect on the soma is also described below in terms of the current (I_SYN) its output voltage produces in a pMOS transistor whose source is at Vdd. The soma circuit is a leaky integrator. It receives excitation from the synapse circuit and shunting inhibition from RCK and has a leak current as well. Its temporal behavior is described by τ dI_SOMA/dt + I_SOMA = (I_SYN · I_0) / I_SHUNT, where I_SOMA is the current the capacitor’s voltage produces in a pMOS transistor whose source is at Vdd (see Figure 2A). I_SHUNT is the sum of the leak, refractory, and calcium-dependent potassium currents. These currents also determine the time constant: τ = C·U_T / (κ·I_SHUNT), where I_0 and κ are transistor parameters and U_T is the thermal voltage. Figure 3: STDP circuit design and characterization. A The circuit is composed of three subcircuits: decay, integrator, and SRAM. B The circuit potentiates when the presynaptic spike precedes the postsynaptic spike and depresses when the postsynaptic spike precedes the presynaptic spike. The soma circuit is connected to an AH, the locus of spike generation. The AH consists of model voltage-dependent sodium and potassium channel populations (modified from [6] by Kai Hynna). It initiates the AER signaling process required to send a spike off chip. To characterize principal neuron variability, we excited 81 neurons with Poisson-like 58Hz spike trains (Figure 2B). We made these spike trains Poisson-like by starting with a regular 200Hz spike train and dropping spikes randomly, with probability 0.71. Thus spikes were delivered in synchrony every 5ms to the neurons that won the coin toss. However, neurons did not lock onto the input synchrony due to filtering by the synaptic time constant (see Figure 2B). They also received a common inhibitory input at the theta frequency (8.3Hz), via their leak current. Each neuron was prevented from firing more than one spike in a theta cycle by its model calcium-dependent potassium channel population. The principal neurons’ spike times were variable. To quantify the spike variability, we used timing precision, which we define as twice the standard deviation of spike times accumulated from five theta cycles. With an input rate of 58Hz the timing precision was 34ms.
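As a numerical check on the soma equation above, a forward-Euler integration (with assumed, not measured, bias values) shows I_SOMA relaxing toward I_SYN·I_0/I_SHUNT with time constant τ = C·U_T/(κ·I_SHUNT); the last lines also apply the text's definition of timing precision to hypothetical spike-time data.

```python
import numpy as np

# Assumed illustrative values; the real chip biases differ.
C, U_T, kappa, I0 = 1e-12, 0.025, 0.7, 1e-12   # F, V, -, A
I_SYN, I_SHUNT = 5e-9, 1e-9                    # A

tau = C * U_T / (kappa * I_SHUNT)              # soma time constant
target = I_SYN * I0 / I_SHUNT                  # steady-state soma current

dt, T = 1e-6, 200e-6
I_SOMA = 0.0
for _ in range(int(T / dt)):
    dI = (target - I_SOMA) / tau               # tau * dI/dt + I_SOMA = I_SYN*I0/I_SHUNT
    I_SOMA += dt * dI

print(f"tau = {tau*1e6:.1f} us, I_SOMA after {T*1e6:.0f} us = {I_SOMA:.2e} A (target {target:.2e} A)")

# Timing precision as defined in the text: twice the standard deviation of
# spike times accumulated over five theta cycles (spike_times is hypothetical data).
spike_times = np.random.normal(0.06, 0.017, size=81 * 5)   # s
print("timing precision (ms):", 2 * 1e3 * spike_times.std())
```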
3 STDP Circuit The STDP circuit (related to [7]-[8]), for which the STDP Chip is named, is the most abundant, with 21,504 copies on the chip. This circuit is built from three subcircuits: decay, integrator, and SRAM (Figure 3A). The decay and integrator are used to implement potentiation and depression in a symmetric fashion. The SRAM holds the current binary state of the synapse, either potentiated or depressed. For potentiation, the decay remembers the last presynaptic spike. Its capacitor is charged when that spike occurs and discharges linearly thereafter. A postsynaptic spike samples the charge remaining on the capacitor, passes it through an exponential function, and dumps the resultant charge into the integrator. This charge decays linearly thereafter. At the time of the postsynaptic spike, the SRAM, a cross-coupled inverter pair, reads the voltage on the integrator’s capacitor. If it exceeds a threshold, the SRAM switches state from depressed to potentiated (∼LTD goes high and ∼LTP goes low). The depression side of the STDP circuit is exactly symmetric, except that it responds to postsynaptic activation followed by presynaptic activation and switches the SRAM’s state from potentiated to depressed (∼LTP goes high and ∼LTD goes low). When the SRAM is in the potentiated state, the presynaptic spike activates the principal neuron’s synapse; otherwise the spike has no effect. Figure 4: Plasticity enhanced phase-coding. A Spike rasters of 81 neurons (9 by 9 cluster) display synchrony over a two-fold range of input rates after STDP. B The degree of enhancement is quantified by timing precision. C Each neuron (center box) sends synapses to (dark gray) and receives synapses from (light gray) twenty-one randomly chosen neighbors up to five nodes away (black indicates both connections). We characterized the STDP circuit by activating a plastic synapse and a fixed synapse (which elicits a spike) at different relative times. We repeated this pairing at 16Hz. We counted the number of pairings required to potentiate (or depress) the synapse. Based on this count, we calculated the efficacy of each pairing as the inverse of the number of pairings required (Figure 3B). For example, if twenty pairings were required to potentiate the synapse, the efficacy of that pre-before-post time interval was one twentieth. The efficacies of potentiation and depression are fit by exponentials with time constants of 11.4ms and 94.9ms, respectively. This behavior is similar to that observed in the hippocampus: potentiation has a shorter time constant and higher maximum efficacy than depression [3].
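The efficacy measurement above (efficacy as the inverse of the number of pairings needed to flip the binary synapse) is summarized by the fitted exponentials; the sketch below reproduces those curves, with the maximum efficacies chosen as assumptions since only the time constants are quoted in the text.

```python
import numpy as np

TAU_POT = 11.4e-3   # s, potentiation time constant from the fit
TAU_DEP = 94.9e-3   # s, depression time constant from the fit
A_POT, A_DEP = 0.1, 0.05   # assumed maximum efficacies (potentiation > depression)

def stdp_efficacy(dt_pre_post):
    """Efficacy (1 / number of pairings to switch the SRAM) vs. t_pre - t_post."""
    if dt_pre_post < 0:                       # pre before post -> potentiation
        return A_POT * np.exp(dt_pre_post / TAU_POT)
    return -A_DEP * np.exp(-dt_pre_post / TAU_DEP)   # post before pre -> depression

for dt in (-0.08, -0.02, 0.02, 0.08):
    eff = stdp_efficacy(dt)
    print(f"t_pre - t_post = {dt*1e3:+.0f} ms -> efficacy {eff:+.3f} (~{abs(1/eff):.0f} pairings)")
```

Negative t_pre - t_post (pre before post) yields potentiation, matching the convention of Figure 3B.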
4 Recurrent Network We carried out an experiment designed to test the STDP circuit’s ability to compensate for variability in spike timing through PEP. Each neuron received recurrent connections from 21 randomly selected neurons within an 11 by 11 neighborhood centered on itself (see Figure 4C). Conversely, it made recurrent connections to randomly chosen neurons within the same neighborhood. These connections were mediated by STDP circuits, initialized to the depressed state. We chose a 9 by 9 cluster of neurons and delivered spikes at a mean rate of 50 to 100Hz to each one (dropping spikes with a probability of 0.75 to 0.5 from a regular 200Hz train) and provided common theta inhibition as before. We compared the variability in spike timing after five seconds of learning with the initial distribution. Phase coding was enhanced after STDP (Figure 4A). Before STDP, spike timing among neurons was highly variable (except for the very highest input rate). After STDP, variability was virtually eliminated (except for the very lowest input rate). Initially, the variability, characterized by timing precision, was inversely related to the input rate, decreasing from 34 to 13ms. After five seconds of STDP, variability decreased and was largely independent of input rate, remaining below 11ms. Figure 5: Compensating for variability. A Some synapses (dots) become potentiated (light) while others remain depressed (dark) after STDP. B The number of potentiated synapses neurons make (pluses) and receive (circles) is negatively (r = -0.71) and positively (r = 0.76) correlated to their rank in the spiking order, respectively. Comparing the number of potentiated synapses each neuron made or received with its excitability confirmed the PEP hypothesis (i.e., leading neurons provide additional synaptic current to lagging neurons via potentiated recurrent synapses). In this experiment, to eliminate variability due to noise (as opposed to excitability), we provided a 17 by 17 cluster of neurons with a regular 200Hz excitatory input. Theta inhibition was present as before and all synapses were initialized to the depressed state. After 10 seconds of STDP, a large fraction of the synapses were potentiated (Figure 5A). When the number of potentiated synapses each neuron made or received was plotted versus its rank in spiking order (Figure 5B), a clear correlation emerged (r = -0.71 or 0.76, respectively). As expected, neurons that spiked early made more and received fewer potentiated synapses. In contrast, neurons that spiked late made fewer and received more potentiated synapses. 5 Pattern Completion After STDP, we found that the network could recall an entire pattern given a subset; thus, the same mechanisms that compensated for variability and noise could also compensate for lack of information. We chose a 9 by 9 cluster of neurons as our pattern and delivered a Poisson-like spike train with a mean rate of 67Hz to each one, as in the first experiment. Theta inhibition was present as before and all synapses were initialized to the depressed state. Before STDP, we stimulated a subset of the pattern and only neurons in that subset spiked (Figure 6A). After five seconds of STDP, we stimulated the same subset again. This time they recruited spikes from other neurons in the pattern, completing it (Figure 6B). Upon varying the fraction of the pattern presented, we found that the fraction recalled increased faster than the fraction presented. We selected subsets of the original pattern randomly, varying the fraction of neurons chosen from 0.1 to 1.0 (ten trials for each). We classified neurons as active if they spiked in the two-second period over which we recorded. Thus, we characterized PEP’s pattern-recall performance as a function of the fraction presented, measuring the probability that the pattern’s neurons are activated (Figure 6C). At a fraction of 0.50 presented, nearly all of the neurons in the pattern are consistently activated (0.91±0.06), showing robust pattern completion. We fitted the recall performance with a sigmoid that reached a recall fraction of 0.50 at an input fraction of 0.30. No spurious neurons were activated during any trials. Figure 6: Associative recall. A Before STDP, half of the neurons in a pattern are stimulated; only they are activated. B After STDP, half of the neurons in a pattern are stimulated, and all are activated. C The fraction of the pattern activated grows faster than the fraction stimulated.
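The recall curve of Figure 6C can be summarized by the sigmoid fit mentioned above; the midpoint comes from the text, while the slope below is an assumed value.

```python
import numpy as np

def recall_fraction(stimulated_fraction, midpoint=0.30, slope=12.0):
    """Sigmoid sketch of pattern recall vs. fraction of the pattern stimulated.
    The midpoint is taken from the text; the slope is an illustrative assumption."""
    return 1.0 / (1.0 + np.exp(-slope * (stimulated_fraction - midpoint)))

for f in (0.1, 0.3, 0.5, 0.8):
    print(f"stimulated {f:.1f} -> recalled ~{recall_fraction(f):.2f}")
```

With this assumed slope the curve also passes near the reported 0.91 at a stimulated fraction of 0.50, but that is a consistency check rather than a value fitted to the chip data.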
6 Discussion Our results demonstrate that PEP successfully compensates for graded variations in our silicon recurrent network using binary (on–off) synapses (in contrast with [8], where weights are graded). While our chip results are encouraging, variability was not eliminated in every case. In the case of the lowest input (50Hz), we see virtually no change (Figure 4A). We suspect the timing remains imprecise because, with such low input, neurons do not spike every theta cycle and, consequently, provide fewer opportunities for the STDP synapses to potentiate. This shortfall illustrates the system’s limits; it can only compensate for variability within certain bounds, and only for activity appropriate to the PEP model. As expected, STDP is the mechanism responsible for PEP. STDP potentiated recurrent synapses from leading neurons to lagging neurons, reducing the disparity among the diverse population of neurons. Even though the STDP circuits are themselves variable, with different efficacies and time constants, when timing is used the sign of the weight change is always correct (data not shown). For this reason, we chose STDP over other, more physiological implementations of plasticity, such as membrane-voltage-dependent plasticity (MVDP), which has the capability to learn with graded voltage signals [9], such as those found in active dendrites, providing more computational power [10]. Previously, we investigated an MVDP circuit, which modeled a voltage-dependent NMDA-receptor-gated synapse [11]. It potentiated when the calcium current analog exceeded a threshold, which was designed to occur only during a dendritic action potential. This circuit produced behavior similar to STDP, implying it could be used in PEP. However, it was sensitive to variability in the NMDA and potentiation thresholds, causing a fraction of the population to potentiate anytime the synapse received an input and another fraction to never potentiate, rendering both subpopulations useless. Therefore, the simpler, less biophysical STDP circuit won out over the MVDP circuit: in our system, timing is everything. Associative storage and recall naturally emerge in the PEP network when synapses between neurons coactivated by a pattern are potentiated. These synapses allow neurons to recruit their peers when a subset of the pattern is presented, thereby completing the pattern. However, this form of pattern storage and completion differs from Hopfield’s attractor model [12]. Rather than forming symmetric, recurrent neuronal circuits, our recurrent network forms asymmetric circuits in which neurons make connections exclusively to less excitable neurons in the pattern. In both the Poisson-like and regular cases (Figures 4 & 5), only about six percent of potentiated connections were reciprocated, as expected by chance. We plan to investigate the storage capacity of this asymmetric form of associative memory. Our system lends itself to modeling brain regions that use precise spike timing, such as the hippocampus. We plan to extend the work presented to store and recall sequences of patterns, as the hippocampus is hypothesized to do. Place cells that represent different locations spike at different phases of the theta cycle, in relation to the distance to their preferred locations. This sequential spiking will allow us to link patterns representing different locations in the order those locations are visited, thereby realizing episodic memory.
We propose PEP as a candidate neural mechanism for information coding and storage in the hippocampal system. Observations from the CA1 region of the hippocampus suggest that basal dendrites (which primarily receive excitation from recurrent connections) support submillisecond timing precision, consistent with PEP [13]. We have shown, in a silicon model, PEP’s ability to exploit such fast recurrent connections to sharpen timing precision as well as to associatively store and recall patterns. Acknowledgments We thank Joe Lin for assistance with chip generation. The Office of Naval Research funded this work (Award No. N000140210468). References [1] O’Keefe J. & Recce M.L. (1993) Phase relationship between hippocampal place units and the EEG theta rhythm. Hippocampus 3(3):317-330. [2] Mehta M.R., Lee A.K. & Wilson M.A. (2002) Role of experience and oscillations in transforming a rate code into a temporal code. Nature 417(6890):741-746. [3] Bi G.Q. & Wang H.X. (2002) Temporal asymmetry in spike timing-dependent synaptic plasticity. Physiology & Behavior 77:551-555. [4] Rodriguez-Vazquez A., Linan G., Espejo S. & Dominguez-Castro R. (2003) Mismatch-induced trade-offs and scalability of analog preprocessing visual microprocessor chips. Analog Integrated Circuits and Signal Processing 37:73-83. [5] Boahen K.A. (2000) Point-to-point connectivity between neuromorphic chips using address events. IEEE Transactions on Circuits and Systems II 47:416-434. [6] Culurciello E.R., Etienne-Cummings R. & Boahen K.A. (2003) A biomorphic digital image sensor. IEEE Journal of Solid State Circuits 38:281-294. [7] Bofill A., Murray A.F. & Thompson D.P. (2002) Circuits for VLSI Implementation of Temporally Asymmetric Hebbian Learning. In: Advances in Neural Information Processing Systems 14, MIT Press. [8] Cameron K., Boonsobhak V., Murray A. & Renshaw D. (2005) Spike timing dependent plasticity (STDP) can ameliorate process variations in neuromorphic VLSI. IEEE Transactions on Neural Networks 16(6):1626-1627. [9] Chicca E., Badoni D., Dante V., D’Andreagiovanni M., Salina G., Carota L., Fusi S. & Del Giudice P. (2003) A VLSI recurrent network of integrate-and-fire neurons connected by plastic synapses with long-term memory. IEEE Transactions on Neural Networks 14(5):1297-1307. [10] Poirazi P. & Mel B.W. (2001) Impact of active dendrites and structural plasticity on the memory capacity of neural tissue. Neuron 29(3):779-796. [11] Arthur J.V. & Boahen K. (2004) Recurrently connected silicon neurons with active dendrites for one-shot learning. In: IEEE International Joint Conference on Neural Networks 3, pp. 1699-1704. [12] Hopfield J.J. (1984) Neurons with graded response have collective computational properties like those of two-state neurons. Proceedings of the National Academy of Sciences 81(10):3088-3092. [13] Ariav G., Polsky A. & Schiller J. (2003) Submillisecond precision of the input-output transformation function mediated by fast sodium dendritic spikes in basal dendrites of CA1 pyramidal neurons. Journal of Neuroscience 23(21):7750-7758.
4 0.73276901 8 nips-2005-A Criterion for the Convergence of Learning with Spike Timing Dependent Plasticity
Author: Robert A. Legenstein, Wolfgang Maass
Abstract: We investigate under what conditions a neuron can learn by experimentally supported rules for spike timing dependent plasticity (STDP) to predict the arrival times of strong “teacher inputs” to the same neuron. It turns out that in contrast to the famous Perceptron Convergence Theorem, which predicts convergence of the perceptron learning rule for a simplified neuron model whenever a stable solution exists, no equally strong convergence guarantee can be given for spiking neurons with STDP. But we derive a criterion on the statistical dependency structure of input spike trains which characterizes exactly when learning with STDP will converge on average for a simple model of a spiking neuron. This criterion is reminiscent of the linear separability criterion of the Perceptron Convergence Theorem, but it applies here to the rows of a correlation matrix related to the spike inputs. In addition we show through computer simulations for more realistic neuron models that the resulting analytically predicted positive learning results not only hold for the common interpretation of STDP where STDP changes the weights of synapses, but also for a more realistic interpretation suggested by experimental data where STDP modulates the initial release probability of dynamic synapses. 1
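As an illustration of the kind of quantity the criterion refers to, the sketch below builds a correlation matrix from binned input spike trains and checks whether its rows (labelled by whether the corresponding synapse should strengthen) are linearly separable with a perceptron; the data, labels, and binning are hypothetical stand-ins for the paper's formal setup, not its actual construction.

```python
import numpy as np

rng = np.random.default_rng(0)

n_inputs, n_bins = 20, 2000
spikes = (rng.random((n_inputs, n_bins)) < 0.05).astype(float)   # hypothetical binned spike trains
spikes[:5] += 0.5 * spikes[0]            # make the first few inputs correlated with each other
spikes = (spikes > 0).astype(float)

C = np.corrcoef(spikes)                   # correlation matrix of the input spike trains
labels = np.array([1] * 5 + [-1] * 15)    # hypothetical target: strengthen the correlated group

# Simple perceptron on the rows of C: if it converges, the rows are linearly separable.
w, b = np.zeros(n_inputs), 0.0
for _ in range(1000):
    errors = 0
    for x, y in zip(C, labels):
        if y * (x @ w + b) <= 0:
            w += y * x
            b += y
            errors += 1
    if errors == 0:
        break
print("rows separable under these labels:", errors == 0)
```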
5 0.57043552 99 nips-2005-Integrate-and-Fire models with adaptation are good enough
Author: Renaud Jolivet, Alexander Rauch, Hans-rudolf Lüscher, Wulfram Gerstner
Abstract: Integrate-and-Fire-type models are usually criticized because of their simplicity. On the other hand, the Integrate-and-Fire model is the basis of most of the theoretical studies on spiking neuron models. Here, we develop a sequential procedure to quantitatively evaluate an equivalent Integrate-and-Fire-type model based on intracellular recordings of cortical pyramidal neurons. We find that the resulting effective model is sufficient to predict the spike train of the real pyramidal neuron with high accuracy. In in vivo-like regimes, predicted and recorded traces are almost indistinguishable and a significant part of the spikes can be predicted at the correct timing. Slow processes like spike-frequency adaptation are shown to be a key feature in this context since they are necessary for the model to connect between different driving regimes. 1
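As a rough illustration of the model class discussed (not the paper's fitting procedure), an integrate-and-fire neuron with a spike-frequency adaptation current can be written in a few lines; all parameter values are assumptions chosen only to show the slowing of the firing rate.

```python
import numpy as np

# Leaky integrate-and-fire neuron with spike-frequency adaptation (illustrative parameters).
dt, T = 1e-4, 1.0
tau_m, tau_a = 20e-3, 200e-3          # membrane and adaptation time constants (s)
R, v_th, v_reset = 100e6, 0.02, 0.0   # ohms, volts (relative to rest)
b = 0.05e-9                           # adaptation increment per spike (A)

v, a = 0.0, 0.0
spike_times = []
I = 0.3e-9                            # constant input current (A)
for k in range(int(T / dt)):
    v += dt * (-v + R * (I - a)) / tau_m
    a += dt * (-a) / tau_a
    if v >= v_th:
        v = v_reset
        a += b                        # adaptation builds up with each spike, slowing the rate
        spike_times.append(k * dt)

isis = np.diff(spike_times)
print("first ISI %.1f ms, last ISI %.1f ms" % (isis[0] * 1e3, isis[-1] * 1e3))
```

The widening inter-spike intervals show the adaptation that, per the abstract, lets a single effective model connect different driving regimes.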
6 0.56352663 40 nips-2005-CMOL CrossNets: Possible Neuromorphic Nanoelectronic Circuits
7 0.54293841 128 nips-2005-Modeling Memory Transfer and Saving in Cerebellar Motor Learning
8 0.52775532 61 nips-2005-Dynamical Synapses Give Rise to a Power-Law Distribution of Neuronal Avalanches
9 0.51554245 106 nips-2005-Large-scale biophysical parameter estimation in single neurons via constrained linear regression
10 0.51245296 181 nips-2005-Spiking Inputs to a Winner-take-all Network
11 0.41740659 165 nips-2005-Response Analysis of Neuronal Population with Synaptic Depression
12 0.36175486 176 nips-2005-Silicon growth cones map silicon retina
13 0.29129413 157 nips-2005-Principles of real-time computing with feedback applied to cortical microcircuit models
14 0.25934944 64 nips-2005-Efficient estimation of hidden state dynamics from spike trains
15 0.23453023 89 nips-2005-Group and Topic Discovery from Relations and Their Attributes
16 0.23413561 203 nips-2005-Visual Encoding with Jittering Eyes
17 0.23119991 174 nips-2005-Separation of Music Signals by Harmonic Structure Modeling
18 0.22434986 73 nips-2005-Fast biped walking with a reflexive controller and real-time policy searching
19 0.22417676 43 nips-2005-Comparing the Effects of Different Weight Distributions on Finding Sparse Representations
20 0.21782218 29 nips-2005-Analyzing Coupled Brain Sources: Distinguishing True from Spurious Interaction
topicId topicWeight
[(3, 0.032), (10, 0.025), (11, 0.011), (27, 0.033), (31, 0.047), (34, 0.046), (39, 0.02), (55, 0.023), (57, 0.429), (69, 0.043), (71, 0.019), (73, 0.022), (77, 0.011), (88, 0.068), (91, 0.055)]
simIndex simValue paperId paperTitle
same-paper 1 0.90897107 188 nips-2005-Temporally changing synaptic plasticity
Author: Minija Tamosiunaite, Bernd Porr, Florentin Wörgötter
Abstract: Recent experimental results suggest that dendritic and back-propagating spikes can influence synaptic plasticity in different ways [1]. In this study we investigate how these signals could temporally interact at dendrites leading to changing plasticity properties at local synapse clusters. Similar to a previous study [2], we employ a differential Hebbian plasticity rule to emulate spike-timing dependent plasticity. We use dendritic (D-) and back-propagating (BP-) spikes as post-synaptic signals in the learning rule and investigate how their interaction will influence plasticity. We will analyze a situation where synapse plasticity characteristics change in the course of time, depending on the type of post-synaptic activity momentarily elicited. Starting with weak synapses, which only elicit local D-spikes, a slow, unspecific growth process is induced. As soon as the soma begins to spike this process is replaced by fast synaptic changes as the consequence of the much stronger and sharper BP-spike, which now dominates the plasticity rule. This way a winner-take-all-mechanism emerges in a two-stage process, enhancing the best-correlated inputs. These results suggest that synaptic plasticity is a temporal changing process by which the computational properties of dendrites or complete neurons can be substantially augmented. 1
2 0.81698948 11 nips-2005-A Hierarchical Compositional System for Rapid Object Detection
Author: Long Zhu, Alan L. Yuille
Abstract: We describe a hierarchical compositional system for detecting deformable objects in images. Objects are represented by graphical models. The algorithm uses a hierarchical tree where the root of the tree corresponds to the full object and lower-level elements of the tree correspond to simpler features. The algorithm proceeds by passing simple messages up and down the tree. The method works rapidly, in under a second, on 320 × 240 images. We demonstrate the approach on detecting cats, horses, and hands. The method works in the presence of background clutter and occlusions. Our approach is contrasted with more traditional methods such as dynamic programming and belief propagation. 1
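The coarse idea of passing messages up and down a part hierarchy can be sketched as follows; the tree, the leaf scores, and the composition rule are hypothetical simplifications, not the paper's graphical models or inference algorithm.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    name: str
    children: List["Node"] = field(default_factory=list)
    score: float = 0.0        # bottom-up evidence
    belief: float = 0.0       # top-down result

def pass_up(node: Node) -> float:
    """Bottom-up: a composite part's score aggregates its children's scores."""
    if node.children:
        node.score = sum(pass_up(c) for c in node.children) / len(node.children)
    return node.score

def pass_down(node: Node, parent_belief: float = 1.0) -> None:
    """Top-down: the root's decision modulates the parts' beliefs."""
    node.belief = node.score * parent_belief
    for c in node.children:
        pass_down(c, node.belief)

# Hypothetical two-level object model: an object made of two parts, each with leaf features.
leaves = [Node("edge1", score=0.9), Node("edge2", score=0.7),
          Node("edge3", score=0.8), Node("edge4", score=0.4)]
parts = [Node("part1", children=leaves[:2]), Node("part2", children=leaves[2:])]
root = Node("object", children=parts)

pass_up(root)
pass_down(root)
print("object score:", round(root.score, 2),
      " part beliefs:", [round(p.belief, 2) for p in parts])
```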
3 0.81564397 106 nips-2005-Large-scale biophysical parameter estimation in single neurons via constrained linear regression
Author: Misha Ahrens, Liam Paninski, Quentin J. Huys
Abstract: Our understanding of the input-output function of single cells has been substantially advanced by biophysically accurate multi-compartmental models. The large number of parameters needing hand tuning in these models has, however, somewhat hampered their applicability and interpretability. Here we propose a simple and well-founded method for automatic estimation of many of these key parameters: 1) the spatial distribution of channel densities on the cell’s membrane; 2) the spatiotemporal pattern of synaptic input; 3) the channels’ reversal potentials; 4) the intercompartmental conductances; and 5) the noise level in each compartment. We assume experimental access to: a) the spatiotemporal voltage signal in the dendrite (or some contiguous subpart thereof, e.g. via voltage sensitive imaging techniques), b) an approximate kinetic description of the channels and synapses present in each compartment, and c) the morphology of the part of the neuron under investigation. The key observation is that, given data a)-c), all of the parameters 1)-4) may be simultaneously inferred by a version of constrained linear regression; this regression, in turn, is efficiently solved using standard algorithms, without any “local minima” problems despite the large number of parameters and complex dynamics. The noise level 5) may also be estimated by standard techniques. We demonstrate the method’s accuracy on several model datasets, and describe techniques for quantifying the uncertainty in our estimates. 1
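The core idea — that, given the observed voltage and known channel kinetics, the unknown conductances enter the voltage update linearly and can therefore be recovered by constrained (non-negative) regression — can be sketched as follows; the two-conductance setup, the kinetics, and all numbers are simplified assumptions rather than the paper's model.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)

# Synthetic single-compartment data (simplified, two conductances):
# C dV/dt = -g_leak (V - E_leak) - g_K n(t) (V - E_K) + noise
dt, steps, C = 1e-4, 5000, 100e-12
E_leak, E_K = -70e-3, -90e-3
g_true = np.array([10e-9, 30e-9])            # conductances to be recovered

n = rng.random(steps)                         # assumed known gating-variable trace
V = np.empty(steps); V[0] = -65e-3
for t in range(steps - 1):
    regressors = np.array([-(V[t] - E_leak), -n[t] * (V[t] - E_K)]) / C
    V[t + 1] = V[t] + dt * (g_true @ regressors) + 3e-5 * rng.standard_normal()

# dV/dt is linear in the conductances, so estimate them by non-negative least squares.
dVdt = np.diff(V) / dt
X = np.column_stack([-(V[:-1] - E_leak), -n[:-1] * (V[:-1] - E_K)]) / C
g_hat, _ = nnls(X, dVdt)
print("true g (nS):", g_true * 1e9, " estimated g (nS):", np.round(g_hat * 1e9, 1))
```

The non-negativity constraint plays the role of the paper's physical constraint that channel densities cannot be negative.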
4 0.50251448 181 nips-2005-Spiking Inputs to a Winner-take-all Network
Author: Matthias Oster, Shih-Chii Liu
Abstract: Recurrent networks that perform a winner-take-all computation have been studied extensively. Although some of these studies include spiking networks, they consider only analog input rates. We present results of this winner-take-all computation on a network of integrate-and-fire neurons which receives spike trains as inputs. We show how we can configure the connectivity in the network so that the winner is selected after a pre-determined number of input spikes. We discuss spiking inputs with both regular frequencies and Poisson-distributed rates. The robustness of the computation was tested by implementing the winner-take-all network on an analog VLSI array of 64 integrate-and-fire neurons which have an innate variance in their operating parameters. 1
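The mechanism described — a winner is selected after a pre-determined number of input spikes, with global inhibition suppressing the rest — can be sketched with simple spike counters; the connectivity, spike-count criterion, and rates below are assumptions for illustration, not the VLSI configuration.

```python
import numpy as np

rng = np.random.default_rng(2)

n_neurons, n_to_win = 8, 5                 # the winner needs n_to_win input spikes (assumed)
rates = rng.uniform(20, 100, n_neurons)    # Poisson input rates in Hz; higher rates win more often

T, dt = 1.0, 1e-3
counts = np.zeros(n_neurons, int)
winner = None
for step in range(int(T / dt)):
    spikes = rng.random(n_neurons) < rates * dt      # Poisson-distributed input spikes
    counts += spikes
    if counts.max() >= n_to_win:                     # first neuron to integrate enough spikes wins
        winner = int(counts.argmax())
        break                                        # global inhibition ends the competition

print("input rates (Hz):", np.round(rates).astype(int))
print("winner:", winner, "after", (step + 1) * dt * 1e3, "ms")
```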
5 0.41735521 61 nips-2005-Dynamical Synapses Give Rise to a Power-Law Distribution of Neuronal Avalanches
Author: Anna Levina, Michael Herrmann
Abstract: There is experimental evidence that cortical neurons show avalanche activity with the intensity of firing events being distributed as a power-law. We present a biologically plausible extension of a neural network which exhibits a power-law avalanche distribution for a wide range of connectivity parameters. 1
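A quick way to see what a power-law avalanche distribution looks like is to simulate a simple critical branching process and histogram avalanche sizes; the branching parameter and cutoff below are assumptions, and this toy process is not the dynamical-synapse model of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

def avalanche_size(branching=1.0, max_size=10_000):
    """Size of one avalanche in a critical branching process (sketch, not the paper's model)."""
    active, size = 1, 0
    while active and size < max_size:
        size += active
        # each active unit activates Poisson(branching) units in the next step
        active = rng.poisson(branching * active)
    return size

sizes = np.array([avalanche_size() for _ in range(20_000)])
hist, edges = np.histogram(sizes, bins=np.logspace(0, 3, 20))
centers = np.sqrt(edges[:-1] * edges[1:])
mask = hist > 0
slope = np.polyfit(np.log(centers[mask]), np.log(hist[mask] / np.diff(edges)[mask]), 1)[0]
print("fitted power-law exponent ~", round(slope, 2))   # close to -1.5 at criticality
```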
6 0.39736328 129 nips-2005-Modeling Neural Population Spiking Activity with Gibbs Distributions
7 0.38931295 8 nips-2005-A Criterion for the Convergence of Learning with Spike Timing Dependent Plasticity
8 0.38638389 157 nips-2005-Principles of real-time computing with feedback applied to cortical microcircuit models
9 0.38584763 118 nips-2005-Learning in Silicon: Timing is Everything
10 0.38474411 67 nips-2005-Extracting Dynamical Structure Embedded in Neural Activity
11 0.37949911 99 nips-2005-Integrate-and-Fire models with adaptation are good enough
12 0.37552005 39 nips-2005-Beyond Pair-Based STDP: a Phenomenological Rule for Spike Triplet and Frequency Effects
13 0.36830378 176 nips-2005-Silicon growth cones map silicon retina
14 0.35783795 203 nips-2005-Visual Encoding with Jittering Eyes
15 0.33378807 183 nips-2005-Stimulus Evoked Independent Factor Analysis of MEG Data with Large Background Activity
16 0.33062589 121 nips-2005-Location-based activity recognition
17 0.32797882 124 nips-2005-Measuring Shared Information and Coordinated Activity in Neuronal Networks
18 0.32024077 132 nips-2005-Nearest Neighbor Based Feature Selection for Regression and its Application to Neural Activity
19 0.31998265 197 nips-2005-Unbiased Estimator of Shape Parameter for Spiking Irregularities under Changing Environments
20 0.31906435 28 nips-2005-Analyzing Auditory Neurons by Learning Distance Functions