nips nips2001 nips2001-37 knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Peter E. Latham
Abstract: Almost two decades ago, Hopfield [1] showed that networks of highly reduced model neurons can exhibit multiple attracting fixed points, thus providing a substrate for associative memory. It is still not clear, however, whether realistic neuronal networks can support multiple attractors. The main difficulty is that neuronal networks in vivo exhibit a stable background state at low firing rate, typically a few Hz. Embedding attractors is easy; doing so without destabilizing the background is not. Previous work [2, 3] focused on the sparse coding limit, in which a vanishingly small number of neurons are involved in any memory. Here we investigate the case in which the number of neurons involved in a memory scales with the number of neurons in the network. In contrast to the sparse coding limit, we find that multiple attractors can co-exist robustly with a stable background state. Mean field theory is used to understand how the behavior of the network scales with its parameters, and simulations with analog neurons are presented. One of the most important features of the nervous system is its ability to perform associative memory. It is generally believed that associative memory is implemented using attractor networks - experimental studies point in that direction [4-7], and there are virtually no competing theoretical models. Perhaps surprisingly, however, it is still an open theoretical question whether attractors can exist in realistic neuronal networks. The
Reference: text
sentIndex sentText sentNum sentScore
1 Associative memory in realistic neuronal networks P. E. Latham [sent-1, score-0.415]
2 Abstract Almost two decades ago, Hopfield [1] showed that networks of highly reduced model neurons can exhibit multiple attracting fixed points, thus providing a substrate for associative memory. [sent-4, score-0.518]
3 It is still not clear, however, whether realistic neuronal networks can support multiple attractors. [sent-5, score-0.301]
4 The main difficulty is that neuronal networks in vivo exhibit a stable background state at low firing rate, typically a few Hz. [sent-6, score-1.028]
5 Embedding attractors is easy; doing so without destabilizing the background is not. [sent-7, score-0.366]
6 Previous work [2, 3] focused on the sparse coding limit, in which a vanishingly small number of neurons are involved in any memory. [sent-8, score-0.602]
7 Here we investigate the case in which the number of neurons involved in a memory scales with the number of neurons in the network. [sent-9, score-0.909]
8 In contrast to the sparse coding limit, we find that multiple attractors can co-exist robustly with a stable background state. [sent-10, score-0.764]
9 Mean field theory is used to understand how the behavior of the network scales with its parameters, and simulations with analog neurons are presented. [sent-11, score-0.551]
10 One of the most important features of the nervous system is its ability to perform associative memory. [sent-12, score-0.094]
11 It is generally believed that associative memory is implemented using attractor networks - experimental studies point in that direction [4- 7], and there are virtually no competing theoretical models. [sent-13, score-0.343]
12 Perhaps surprisingly, however, it is still an open theoretical question whether attractors can exist in realistic neuronal networks. [sent-14, score-0.502]
13 The "realistic" feature that is probably hardest to capture is the steady firing at low rates - the background state - that is observed throughout the intact nervous system [8- 13]. [sent-15, score-0.73]
14 The reason it is difficult to build an attractor network that is stable at low firing rates, at least in the sparse coding limit, is as follows [2,3]: Attractor networks are constructed by strengthening recurrent connections among sub-populations of neurons. [sent-16, score-1.044]
15 The strengthening must be large enough that neurons within a sub-population can sustain a high firing rate state, but not so large that the sub-population can be spontaneously active. [sent-17, score-0.879]
16 This implies that the neuronal gain functions - the firing rate of the post-synaptic neurons as a function of the average [sent-18, score-1.133]
17 firing rate of the pre-synaptic neurons - must be sigmoidal: small at low firing rate to provide stability, high at intermediate firing rate to provide a threshold (at an unstable equilibrium), and low again at high firing rate to provide saturation and a stable attractor. [sent-21, score-2.542]
18 In other words, a requirement for the co-existence of a stable background state and multiple attractors is that the gain function of the excitatory neurons be superlinear at the observed background rates of a few Hz [2,3]. [sent-22, score-1.491]
19 However - and this is where the problem lies - above a few Hz most realistic gain functions are nearly linear or sublinear (see, for example, Fig. [sent-23, score-0.217]
20 The superlinearity requirement rests on the implicit assumption that the activity of the sub-population involved in a memory does not affect the other neurons in the network. [sent-25, score-0.63]
21 While this assumption is valid in the sparse coding limit, it breaks down in realistic networks containing both excitatory and inhibitory neurons. [sent-26, score-0.68]
22 In such networks, activity among excitatory cells results in inhibitory feedback. [sent-27, score-0.377]
23 This feedback, if powerful enough, can stabilize attractors even without a saturating nonlinearity, essentially by stabilizing the equilibrium (above considered unstable) on the steep part of the gain function. [sent-28, score-0.488]
24 The price one pays, though, is that a reasonable fraction of the neurons must be involved in each of the memories, which takes us away from the sparse coding limit and thus reduces network capacity [15]. [sent-29, score-0.799]
25 1 The model A relatively good description of neuronal networks is provided by synaptically coupled, conductance-based neurons. [sent-30, score-0.227]
26 An alternative is to model neurons by their firing rates. [sent-32, score-0.741]
27 In such simplified models, the equilibrium firing rate of a neuron is a function of the firing rates of all the other neurons in the network. [sent-34, score-1.467]
28 Letting v_Ei and v_Ii denote the firing rates of the excitatory and inhibitory neurons, respectively, and assuming that synaptic input sums linearly, the equilibrium equations may be written v_Ei = φ_Ei(Σ_j A^EE_ij v_Ej, Σ_j A^EI_ij v_Ij) (1a) [sent-35, score-0.941]
29 v_Ii = φ_Ii(Σ_j A^IE_ij v_Ej, Σ_j A^II_ij v_Ij) (1b). Here φ_E and φ_I are the excitatory and inhibitory gain functions and A_ij determines the connection strength from neuron j to neuron i. [sent-37, score-0.688]
30 The gain functions can, in principle, be derived from conductance-based model equations [17]. [sent-38, score-0.228]
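As a concrete illustration of Eq. (1), the sketch below iterates a small rate model with threshold-linear gain functions until it settles at a self-consistent solution. The population sizes, gains, connection statistics, external drives, and the sign convention (excitatory input minus inhibitory input inside the gain) are all assumptions made for the example, not parameters taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
NE, NI = 400, 100                    # excitatory / inhibitory population sizes (assumed)
beta_E, beta_I = 2.0, 2.0            # threshold-linear gains (assumed)

def phi(x, beta):
    """Threshold-linear gain function: beta * [x]_+ ."""
    return beta * np.maximum(x, 0.0)

# Connection matrices A_ij, one block per pre/post population, scaled by pool size.
A_EE = rng.normal(1.0, 0.1, (NE, NE)) / NE
A_EI = rng.normal(1.5, 0.1, (NE, NI)) / NI
A_IE = rng.normal(1.2, 0.1, (NI, NE)) / NE
A_II = rng.normal(1.0, 0.1, (NI, NI)) / NI
h_E, h_I = 0.5, 0.2                  # small external drives (assumed)

# Relax toward a self-consistent solution of Eqs. (1a) and (1b).
vE, vI = np.full(NE, 1.0), np.full(NI, 1.0)
dt, tau = 0.1, 1.0
for _ in range(2000):
    vE += dt / tau * (phi(A_EE @ vE - A_EI @ vI + h_E, beta_E) - vE)
    vI += dt / tau * (phi(A_IE @ vE - A_II @ vI + h_I, beta_I) - vI)

print(f"mean excitatory rate: {vE.mean():.3f}   mean inhibitory rate: {vI.mean():.3f}")
```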
31 The question we address is whether Eq. (1) allows both attractors and a stable state at low firing rate. [sent-40, score-0.822]
32 To accomplish this we will use mean field theory. [sent-41, score-0.15]
33 First, we let the inhibitory neurons be completely homogeneous (φ_Ii independent of i, and connectivity to and from inhibitory neurons all-to-all and uniform). [sent-43, score-1.136]
34 Eq. (1b) becomes simply v_I = φ_I(v_E, v_I), where v_E and v_I are the average firing rates of the excitatory and inhibitory neurons. [sent-45, score-0.781]
35 Finally, we assume that φ_I is threshold-linear and the network operates in a regime in which the inhibitory firing rate is above zero. [sent-49, score-0.775]
36 Eq. (2) refers exclusively to excitatory neurons; we have defined v to be the average firing rate, v ≡ N^-1 Σ_i v_i, and rescaled the parameters. [sent-52, score-0.552]
37 The right-hand side of Eq. (2) decreases with increasing average firing rate, since its argument is -(1 + a)v and a is positive. [sent-60, score-0.388]
38 This negative dependence on v arises because we are working in the large coupling regime in which excitation and inhibition are balanced [18,19]. [sent-61, score-0.182]
39 The negative coupling to firing rate has important consequences for stability, as we will see below. [sent-62, score-0.526]
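To make the origin of the -(1 + a)v argument concrete, the schematic reduction below eliminates the homogeneous inhibitory population, assuming a threshold-linear φ_I operating above threshold. The coupling symbols J_EE, J_EI, J_IE, J_II and the inhibitory gain β_I are illustrative placeholders, not the paper's notation or values.

```latex
v_I = \beta_I \left( J_{IE} v_E - J_{II} v_I \right)
\quad\Longrightarrow\quad
v_I = \frac{\beta_I J_{IE}}{1 + \beta_I J_{II}}\, v_E ,
\qquad
J_{EE} v_E - J_{EI} v_I
  = \left( J_{EE} - \frac{\beta_I J_{EI} J_{IE}}{1 + \beta_I J_{II}} \right) v_E
  \;\equiv\; -(1 + a)\, v .
```

When the inhibitory feedback term outweighs the recurrent excitation, the coefficient in parentheses is negative and a > 0; this is the large-coupling, balanced regime referred to above.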
40 Eq. (4) below; W_ij, which corresponds to background connectivity, is a random matrix whose elements are Gaussian distributed with mean 1 and variance δw^2; and J_ij produces the attractors. [sent-64, score-0.328]
41 We will follow the Hopfield prescription and write J_ij as in Eq. (3), where ε is the coupling strength among neurons involved in the memories, and the patterns η_μi determine which neurons participate in each memory. [sent-65, score-1.019]
42 In simulations we use η_μi = [(1 - f)/f]^1/2 with probability f and -[f/(1 - f)]^1/2 with probability 1 - f, so a fraction f of the neurons are involved in each memory. [sent-67, score-0.524]
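The memory matrix of Eq. (3) can be sketched directly from these patterns. The 1/N normalization, the removal of self-coupling, and the numerical values of N, p, f and ε below are assumptions made for the illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
N, p, f, eps = 1000, 5, 0.1, 4.0     # neurons, memories, coding fraction, coupling (assumed)

# eta = +sqrt((1-f)/f) with probability f, -sqrt(f/(1-f)) with probability 1-f,
# giving zero-mean, unit-variance patterns in which a fraction f of neurons are "active".
hi, lo = np.sqrt((1 - f) / f), -np.sqrt(f / (1 - f))
eta = np.where(rng.random((p, N)) < f, hi, lo)

J = (eps / N) * eta.T @ eta          # Hopfield-style memory matrix built from the patterns
np.fill_diagonal(J, 0.0)             # no self-coupling (assumed)

print(f"pattern mean {eta.mean():.3f}, variance {eta.var():.3f}")   # ~0 and ~1
```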
43 2 Mean field equations The main difficulty in deriving the mean field equations from Eq. [sent-69, score-0.406]
44 Our first step in this endeavor is to analyze the noise associated with the clipped weights. [sent-71, score-0.086]
45 To do this we break C_ij g(W_ij + J_ij) into two pieces: C_ij g(W_ij + J_ij) = ⟨g⟩ + ⟨g'⟩J_ij + δC_ij, where the angle brackets around g represent an average over the distributions of W_ij and J_ij, and a prime denotes a derivative. [sent-72, score-0.149]
46 In our simulations we use the clipping function g(z) = z if z is between 0 and 2, 0 if z ≤ 0, and 2 if z ≥ 2. [sent-75, score-0.087]
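Putting the pieces together, a sketch of the clipped connectivity C_ij g(W_ij + J_ij) looks as follows. The connection probability c, the background spread δw, and the use of a 0/1 dilution mask for C_ij are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(2)
N, c, dw = 1000, 0.2, 0.3            # network size, connectivity, background spread (assumed)

def g(z):
    """Clipping function from the text: g(z) = z for 0 < z < 2, 0 below, 2 above."""
    return np.clip(z, 0.0, 2.0)

W = rng.normal(1.0, dw, (N, N))                 # background weights: mean 1, variance dw**2
C = (rng.random((N, N)) < c).astype(float)      # random 0/1 connectivity mask
J = np.zeros((N, N))                            # memory matrix, built as in the previous sketch
weights = C * g(W + J)

print(f"fraction of summed weights hitting a clip boundary: "
      f"{np.mean((W + J <= 0.0) | (W + J >= 2.0)):.3f}")
```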
47 Our main assumptions in the development of a mean field theory are that Σ_{j≠i} δC_ij v_j is a Gaussian random variable, and that δC_ij and v_j are independent. [sent-76, score-0.15]
48 Consequently, the variance of this noise term involves the second moment of the firing rate, ⟨v^2⟩ ≡ N^-1 Σ_i v_i^2. [sent-77, score-0.388]
49 This lets us rewrite Eq. (2) as Eq. (5). We have defined the clipped memory strength, ε_c, as ε_c ≡ ε⟨g'⟩/⟨g⟩. [sent-80, score-0.2]
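The clipped strength ε_c = ε⟨g'⟩/⟨g⟩ can be estimated by Monte Carlo. The narrow, zero-mean distribution assumed for J_ij below is an illustrative stand-in, not the distribution used in the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
eps, dw = 4.0, 0.3
W = rng.normal(1.0, dw, 1_000_000)             # background weight samples
J = rng.normal(0.0, 0.1, W.size)               # stand-in for the J_ij distribution (assumed)
z = W + J

g_mean = np.clip(z, 0.0, 2.0).mean()           # <g>
gprime_mean = ((z > 0.0) & (z < 2.0)).mean()   # <g'>: g' = 1 inside the clip window, 0 outside
print(f"estimated eps_c: {eps * gprime_mean / g_mean:.3f}")
```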
50 This makes network behavior robust to changes in ε, the strength of the memories, so long as ε is large. [sent-82, score-0.147]
51 (see Eq. (2)), and, recall, f is the fraction of neurons that participate in each memory. [sent-87, score-0.448]
52 The average firing rate, v, and strength of the memory, m ≡ N^-1 Σ_i η_1i v_i (taken without loss of generality to be the overlap with pattern 1), are given in terms of z and w. 3 Results The mean field equations can be understood by examining Eqs. [sent-91, score-0.734]
53 This equation always has a solution at w = 0 (and thus m = 0), which corresponds to a background state with no memories active. [sent-96, score-0.448]
54 Eq. (6b) describes the behavior of the mean firing rate. [sent-100, score-0.452]
55 This equation looks complicated only because the noise - the variation in firing rate from neuron to neuron - must be determined self-consistently. [sent-101, score-0.646]
56 The solid lines, including the horizontal line at w = 0, represent the solution to Eq. [sent-105, score-0.155]
57 The arrows indicate approximate flow directions: vertical arrows indicate time evolution of w at fixed z; horizontal arrows indicate time evolution of z at fixed w. [sent-119, score-0.497]
58 Note the exchange of stability to the right of the solid curve, indicating that intersections too far to the right will be unstable. [sent-121, score-0.283]
59 While stability cannot be inferred from the equilibrium equations, a reasonable assumption is that the evolution equations for the firing rates, at least near an equilibrium, have the form τ dv_i/dt = φ_i - v_i. [sent-124, score-0.773]
60 In that case, the arrows represent flow directions, and we see that there are potentially stable equilibria at the intersections marked by the solid squares. [sent-125, score-0.451]
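A minimal analog-neuron simulation of τ dv_i/dt = φ_i - v_i, combining the ingredients above, is sketched below. The way the inhibitory feedback is folded in (a global -(1 + a) times the mean rate), the background weight scaling, the heterogeneous thresholds, and all numerical values are assumptions for the illustration rather than the paper's simulation parameters.

```python
import numpy as np

rng = np.random.default_rng(4)
N, p, f, eps, beta, a = 800, 3, 0.1, 4.0, 2.0, 1.0   # all values assumed

hi, lo = np.sqrt((1 - f) / f), -np.sqrt(f / (1 - f))
eta = np.where(rng.random((p, N)) < f, hi, lo)       # memory patterns
J = (eps / N) * eta.T @ eta                          # memory matrix
W = rng.normal(1.0, 0.3, (N, N)) / N                 # background weights (assumed scaling)
theta = 0.05 + 0.01 * rng.standard_normal(N)         # heterogeneous inputs/thresholds (assumed)

def run(v0, steps=2000, dt=0.05, tau=1.0):
    """Integrate tau dv/dt = phi(input) - v with a global inhibitory-feedback term."""
    v = v0.copy()
    for _ in range(steps):
        drive = (W + J) @ v - (1.0 + a) * v.mean() + theta
        v += dt / tau * (beta * np.maximum(drive, 0.0) - v)
    return v

v_bg  = run(np.full(N, 0.1))                         # background condition: nothing cued
v_mem = run(np.full(N, 0.1) + 0.5 * (eta[0] > 0))    # cue pattern 1
for name, v in [("background", v_bg), ("memory cued", v_mem)]:
    print(f"{name}: mean rate v = {v.mean():.3f}, overlap m = {(eta[0] * v).mean():.3f}")
```

In a parameter regime like the one analyzed above, the first run should settle at a low mean rate with m near zero while the cued run should settle at an elevated overlap; whether it actually does depends on the assumed values.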
61 Note that in the sparse coding limit, f → 0, z is independent of w, meaning that the mean firing rate, v, is independent of the overlap, m. [sent-126, score-0.612]
62 In this limit there can be no feedback to inhibitory neurons, and thus no chance for stabilization. [sent-127, score-0.657]
63 In Fig. 1, the effect of letting f → 0 is to make the dashed line vertical. [sent-129, score-0.174]
64 This eliminates the possibility of the upper stable equilibrium (the solid square at w > 0), and returns us to the situation where a superlinear gain function is required for attractors to be embedded, as discussed in the introduction. [sent-130, score-0.709]
65 First, the attractors can be stable even though the gain functions never saturate (recall that we used threshold-linear gain functions). [sent-133, score-0.678]
66 The stabilization mechanism is feedback to inhibitory neurons, via the -(1 + a)v term in Eq. [sent-134, score-0.227]
67 This feedback is what makes the dashed line in Fig. [sent-136, score-0.175]
68 Second, if the dashed line shifts to the right relative to the solid line, the background becomes destabilized. [sent-138, score-0.436]
69 Thus, there is a tradeoff: w, and thus the mean firing rate of the memory neurons, can be increased by shifting the dashed line up or to the right, but eventually the background becomes destabilized. [sent-140, score-1.064]
70 Shifting the dashed line to the left, on the other hand, will eventually eliminate the solution at w > 0, destroying all attractors but the background. [sent-141, score-0.395]
71 For fixed load parameter α, fraction of neurons involved in a memory, f, and degree of connectivity, c, there are three parameters that have a large effect on the location of the equilibria in Fig. [sent-142, score-0.612]
72 1: the gain, β, the clipped memory strength, ε_c, and the degree of heterogeneity in individual neurons, θ_0. [sent-143, score-0.2]
73 Fig. 2 shows a stability plot in the ε-β plane, determined by numerically solving the equations τ dv_i/dt = φ_i - v_i (see Eq. [sent-145, score-0.178]
74 The filled circles indicate regions where memories were embedded without destabilizing the background, open circles indicate regions where no memories could be embedded, and x's indicate regions where the background was unstable. [sent-147, score-1.021]
75 As discussed above, ε_c becomes approximately independent of the strength of the memories, ε, when ε becomes large. [sent-148, score-0.146]
76 This is evident in Fig. 2A, in which network behavior stabilizes when ε becomes larger than about 4; increasing ε beyond 8 would, presumably, produce no surprises. [sent-150, score-0.112]
77 There is some sensitivity to the gain, β: when ε > 4, memories co-existed with a stable background for β in a ±15% range. [sent-151, score-0.57]
78 However, more detailed analysis indicates that the stability region gets larger as the number of neurons in the network, N, increases. [sent-153, score-0.446]
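The classification behind the stability plot of Fig. 2A can be sketched as a simple decision rule applied to two simulation runs (background only, and pattern 1 cued) per (ε, β) point. The rate and overlap thresholds below are illustrative assumptions, not the criteria used to produce the figure.

```python
def classify(mean_bg_rate_hz, cued_overlap, bg_max_hz=10.0, overlap_min=1.0):
    """Score one (eps, beta) point of the stability scan.

    'x'      -> background unstable (mean background rate ran away)
    'filled' -> memory co-exists with a stable background
    'open'   -> background stable, but no memory could be embedded
    """
    if mean_bg_rate_hz > bg_max_hz:
        return "x"
    if cued_overlap > overlap_min:
        return "filled"
    return "open"

# Example: a point where the background sits near 3 Hz and a cued memory persists.
print(classify(mean_bg_rate_hz=3.0, cued_overlap=12.0))   # -> filled
```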
[Figure 2; horizontal axis: ε, from 0 to 8.] Figure 2: A. [sent-157, score-0.212]
80 Filled circles: memories co-exist with a stable background (also outlined with solid lines); open circles: memories could not be embedded; x's: background was unstable. [sent-160, score-1.077]
81 The average background rate, when the background was stable, was around 3 Hz. [sent-161, score-0.424]
82 These parameters led to an effective gain, p^1/2 β ε_c, of about 10, which is consistent with the gain in large networks in which each neuron receives ~5-10,000 inputs. [sent-169, score-0.294]
83 Plot of firing rate of memory neurons, m, when the memory was active (upper trace) and not active (lower trace) versus ε at β = 2. [sent-171, score-1.125]
84 4 Discussion The main outcome of this analysis is that attractors can co-exist with a stable background when neurons have generic threshold-linear gain functions, so long as the sparse coding limit is avoided. [sent-172, score-1.337]
85 The parameter regime for this co-existence is much larger than for attractor networks that operate in the sparse coding limit [2,23]. [sent-173, score-0.454]
86 While these results are encouraging, they do not definitively establish that attractors can exist in realistic networks. [sent-174, score-0.307]
87 Future work must include inhibitory neurons, incorporate a much larger exploration of parameter space to ensure that the results are robust, and ultimately involve simulations with spiking neurons. [sent-175, score-0.567]
88 Persistent activity and the single-cell frequency-current curve in a cortical network model. [sent-188, score-0.186]
89 Neuronal correlates of parametric working memory in the prefrontal cortex. [sent-219, score-0.152]
90 Laminar differences in receptive field properties of cells in cat primary visual cortex. [sent-224, score-0.133]
91 Cerebral neocortical neurons in the aged rat: spontaneous activity, properties of pyramidal tract neurons and effect of acetylcholine and cholinergic drugs. [sent-232, score-0.706]
92 Intracellular injection of apamin reduces a slow potassium current mediating afterhyperpolarizations and IPSPs in neocortical neurons of cats. [sent-240, score-0.353]
93 Neuronal activity in normal and deafferented forelimb somatosensory cortex of the awake cat. [sent-248, score-0.205]
94 Cutaneous responsiveness of lumbar spinal neurons in awake and halothane-anesthetized sheep. [sent-256, score-0.403]
95 Effects of quinine on neural activity in cat primary auditory cortex. [sent-264, score-0.121]
96 The enhanced storage capacity in neural networks with low activity level. [sent-288, score-0.182]
97 Modeling neuronal networks in cortex by rate models using the current-frequency response properties of cortical cells. [sent-300, score-0.386]
98 Chaos in neuronal networks with balanced excitatory and inhibitory activity. [sent-306, score-0.577]
99 Retrieval properties of attractor neural networks that obey Dale's law using a self-consistent signal-to-noise analysis. [sent-329, score-0.104]
100 Dynamics of a recurrent network of spiking neurons before and following learning. [sent-335, score-0.428]
wordName wordTfidf (topN-words)
[('firing', 0.388), ('neurons', 0.353), ('attractors', 0.233), ('background', 0.212), ('memories', 0.199), ('inhibitory', 0.177), ('neuronal', 0.161), ('stable', 0.159), ('gain', 0.143), ('excitatory', 0.126), ('jij', 0.125), ('bcij', 0.115), ('memory', 0.114), ('equilibrium', 0.112), ('attractor', 0.104), ('coding', 0.095), ('stability', 0.093), ('involved', 0.089), ('rate', 0.088), ('clipped', 0.086), ('intersections', 0.086), ('field', 0.086), ('equations', 0.085), ('neuron', 0.085), ('limit', 0.077), ('connectivity', 0.076), ('fk', 0.076), ('network', 0.075), ('latham', 0.075), ('realistic', 0.074), ('vi', 0.074), ('activity', 0.074), ('strength', 0.072), ('dashed', 0.069), ('ei', 0.068), ('cp', 0.067), ('networks', 0.066), ('sparse', 0.065), ('mean', 0.064), ('solid', 0.062), ('xo', 0.06), ('associative', 0.059), ('arrows', 0.057), ('circles', 0.057), ('cijg', 0.057), ('cpi', 0.057), ('vei', 0.057), ('vreeswijk', 0.057), ('line', 0.056), ('rates', 0.053), ('ij', 0.052), ('embedded', 0.051), ('coupling', 0.05), ('cpt', 0.05), ('awake', 0.05), ('clipping', 0.05), ('destabilizing', 0.05), ('ev', 0.05), ('hopfield', 0.05), ('lb', 0.05), ('participate', 0.05), ('strengthening', 0.05), ('feedback', 0.05), ('letting', 0.049), ('tj', 0.049), ('cat', 0.047), ('balanced', 0.047), ('regime', 0.047), ('equilibria', 0.045), ('prime', 0.045), ('soc', 0.045), ('fraction', 0.045), ('la', 0.044), ('ee', 0.043), ('eo', 0.042), ('angeles', 0.042), ('exchange', 0.042), ('filled', 0.042), ('flow', 0.042), ('unstable', 0.042), ('evolution', 0.042), ('low', 0.042), ('indicate', 0.04), ('fixed', 0.04), ('load', 0.04), ('overlap', 0.039), ('working', 0.038), ('ve', 0.038), ('rescaled', 0.038), ('cortical', 0.037), ('solution', 0.037), ('simulations', 0.037), ('becomes', 0.037), ('shifting', 0.036), ('fluctuations', 0.036), ('nervous', 0.035), ('wij', 0.035), ('active', 0.034), ('cortex', 0.034), ('open', 0.034)]
simIndex simValue paperId paperTitle
same-paper 1 0.99999893 37 nips-2001-Associative memory in realistic neuronal networks
Author: Peter E. Latham
Abstract: Almost two decades ago , Hopfield [1] showed that networks of highly reduced model neurons can exhibit multiple attracting fixed points, thus providing a substrate for associative memory. It is still not clear, however, whether realistic neuronal networks can support multiple attractors. The main difficulty is that neuronal networks in vivo exhibit a stable background state at low firing rate, typically a few Hz. Embedding attractor is easy; doing so without destabilizing the background is not. Previous work [2, 3] focused on the sparse coding limit, in which a vanishingly small number of neurons are involved in any memory. Here we investigate the case in which the number of neurons involved in a memory scales with the number of neurons in the network. In contrast to the sparse coding limit, we find that multiple attractors can co-exist robustly with a stable background state. Mean field theory is used to understand how the behavior of the network scales with its parameters, and simulations with analog neurons are presented. One of the most important features of the nervous system is its ability to perform associative memory. It is generally believed that associative memory is implemented using attractor networks - experimental studies point in that direction [4- 7], and there are virtually no competing theoretical models. Perhaps surprisingly, however, it is still an open theoretical question whether attractors can exist in realistic neuronal networks. The
2 0.37852025 96 nips-2001-Information-Geometric Decomposition in Spike Analysis
Author: Hiroyuki Nakahara, Shun-ichi Amari
Abstract: We present an information-geometric measure to systematically investigate neuronal firing patterns, taking account not only of the second-order but also of higher-order interactions. We begin with the case of two neurons for illustration and show how to test whether or not any pairwise correlation in one period is significantly different from that in the other period. In order to test such a hypothesis of different firing rates, the correlation term needs to be singled out 'orthogonally' to the firing rates, where the null hypothesis might not be of independent firing. This method is also shown to directly associate neural firing with behavior via their mutual information, which is decomposed into two types of information, conveyed by mean firing rate and coincident firing, respectively. Then, we show that these results, using the 'orthogonal' decomposition, are naturally extended to the case of three neurons and n neurons in general. 1
3 0.23548977 72 nips-2001-Exact differential equation population dynamics for integrate-and-fire neurons
Author: Julian Eggert, Berthold Bäuml
Abstract: Mesoscopical, mathematical descriptions of dynamics of populations of spiking neurons are getting increasingly important for the understanding of large-scale processes in the brain using simulations. In our previous work, integral equation formulations for population dynamics have been derived for a special type of spiking neurons. For Integrate- and- Fire type neurons , these formulations were only approximately correct. Here, we derive a mathematically compact, exact population dynamics formulation for Integrate- and- Fire type neurons. It can be shown quantitatively in simulations that the numerical correspondence with microscopically modeled neuronal populations is excellent. 1 Introduction and motivation The goal of the population dynamics approach is to model the time course of the collective activity of entire populations of functionally and dynamically similar neurons in a compact way, using a higher descriptionallevel than that of single neurons and spikes. The usual observable at the level of neuronal populations is the populationaveraged instantaneous firing rate A(t), with A(t)6.t being the number of neurons in the population that release a spike in an interval [t, t+6.t). Population dynamics are formulated in such a way, that they match quantitatively the time course of a given A(t), either gained experimentally or by microscopical, detailed simulation. At least three main reasons can be formulated which underline the importance of the population dynamics approach for computational neuroscience. First, it enables the simulation of extensive networks involving a massive number of neurons and connections, which is typically the case when dealing with biologically realistic functional models that go beyond the single neuron level. Second, it increases the analytical understanding of large-scale neuronal dynamics , opening the way towards better control and predictive capabilities when dealing with large networks. Third, it enables a systematic embedding of the numerous neuronal models operating at different descriptional scales into a generalized theoretic framework, explaining the relationships, dependencies and derivations of the respective models. Early efforts on population dynamics approaches date back as early as 1972, to the work of Wilson and Cowan [8] and Knight [4], which laid the basis for all current population-averaged graded-response models (see e.g. [6] for modeling work using these models). More recently, population-based approaches for spiking neurons were developed, mainly by Gerstner [3, 2] and Knight [5]. In our own previous work [1], we have developed a theoretical framework which enables to systematize and simulate a wide range of models for population-based dynamics. It was shown that the equations of the framework produce results that agree quantitatively well with detailed simulations using spiking neurons, so that they can be used for realistic simulations involving networks with large numbers of spiking neurons. Nevertheless, for neuronal populations composed of Integrate-and-Fire (I&F;) neurons, this framework was only correct in an approximation. In this paper, we derive the exact population dynamics formulation for I&F; neurons. This is achieved by reducing the I&F; population dynamics to a point process and by taking advantage of the particular properties of I&F; neurons. 
2 2.1 Background: Integrate-and-Fire dynamics Differential form We start with the standard Integrate- and- Fire (I&F;) model in form of the wellknown differential equation [7] (1) which describes the dynamics of the membrane potential Vi of a neuron i that is modeled as a single compartment with RC circuit characteristics. The membrane relaxation time is in this case T = RC with R being the membrane resistance and C the membrane capacitance. The resting potential v R est is the stationary potential that is approached in the no-input case. The input arriving from other neurons is described in form of a current ji. In addition to eq. (1), which describes the integrate part of the I&F; model, the neuronal dynamics are completed by a nonlinear step. Every time the membrane potential Vi reaches a fixed threshold () from below, Vi is lowered by a fixed amount Ll > 0, and from the new value of the membrane potential integration according to eq. (1) starts again. if Vi(t) = () (from below) . (2) At the same time, it is said that the release of a spike occurred (i.e., the neuron fired), and the time ti = t of this singular event is stored. Here ti indicates the time of the most recent spike. Storing all the last firing times , we gain the sequence of spikes {t{} (spike ordering index j, neuronal index i). 2.2 Integral form Now we look at the single neuron in a neuronal compound. We assume that the input current contribution ji from presynaptic spiking neurons can be described using the presynaptic spike times tf, a response-function ~ and a connection weight W¡ . ',J ji(t) = Wi ,j ~(t - tf) (3) l: l: j f Integrating the I&F; equation (1) beginning at the last spiking time tT, which determines the initial condition by Vi(ti) = vi(ti - 0) - 6., where vi(ti - 0) is the membrane potential just before the neuron spikes, we get 1 Vi(t) = v Rest + fj(t - t:) + l: Wi ,j l: a(t - t:; t - tf) , j - Vi(t:)) e- S / T (4) f with the refractory function fj(s) = - (v Rest (5) and the alpha-function r ds
4 0.23162323 141 nips-2001-Orientation-Selective aVLSI Spiking Neurons
Author: Shih-Chii Liu, Jörg Kramer, Giacomo Indiveri, Tobi Delbrück, Rodney J. Douglas
Abstract: We describe a programmable multi-chip VLSI neuronal system that can be used for exploring spike-based information processing models. The system consists of a silicon retina, a PIC microcontroller, and a transceiver chip whose integrate-and-fire neurons are connected in a soft winner-take-all architecture. The circuit on this multi-neuron chip approximates a cortical microcircuit. The neurons can be configured for different computational properties by the virtual connections of a selected set of pixels on the silicon retina. The virtual wiring between the different chips is effected by an event-driven communication protocol that uses asynchronous digital pulses, similar to spikes in a neuronal system. We used the multi-chip spike-based system to synthesize orientation-tuned neurons using both a feedforward model and a feedback model. The performance of our analog hardware spiking model matched the experimental observations and digital simulations of continuous-valued neurons. The multi-chip VLSI system has advantages over computer neuronal models in that it is real-time, and the computational time does not scale with the size of the neuronal network.
5 0.21138251 197 nips-2001-Why Neuronal Dynamics Should Control Synaptic Learning Rules
Author: Jesper Tegnér, Ádám Kepecs
Abstract: Hebbian learning rules are generally formulated as static rules. Under changing condition (e.g. neuromodulation, input statistics) most rules are sensitive to parameters. In particular, recent work has focused on two different formulations of spike-timing-dependent plasticity rules. Additive STDP [1] is remarkably versatile but also very fragile, whereas multiplicative STDP [2, 3] is more robust but lacks attractive features such as synaptic competition and rate stabilization. Here we address the problem of robustness in the additive STDP rule. We derive an adaptive control scheme, where the learning function is under fast dynamic control by postsynaptic activity to stabilize learning under a variety of conditions. Such a control scheme can be implemented using known biophysical mechanisms of synapses. We show that this adaptive rule makes the addit ive STDP more robust. Finally, we give an example how meta plasticity of the adaptive rule can be used to guide STDP into different type of learning regimes. 1
6 0.15971629 23 nips-2001-A theory of neural integration in the head-direction system
7 0.15955989 82 nips-2001-Generating velocity tuning by asymmetric recurrent connections
8 0.15722682 65 nips-2001-Effective Size of Receptive Fields of Inferior Temporal Visual Cortex Neurons in Natural Scenes
9 0.15663135 2 nips-2001-3 state neurons for contextual processing
10 0.1547246 87 nips-2001-Group Redundancy Measures Reveal Redundancy Reduction in the Auditory Pathway
11 0.14236814 166 nips-2001-Self-regulation Mechanism of Temporally Asymmetric Hebbian Plasticity
12 0.136766 27 nips-2001-Activity Driven Adaptive Stochastic Resonance
13 0.1019899 142 nips-2001-Orientational and Geometric Determinants of Place and Head-direction
14 0.098349966 131 nips-2001-Neural Implementation of Bayesian Inference in Population Codes
15 0.098135009 124 nips-2001-Modeling the Modulatory Effect of Attention on Human Spatial Vision
16 0.093570441 48 nips-2001-Characterizing Neural Gain Control using Spike-triggered Covariance
17 0.088958107 57 nips-2001-Correlation Codes in Neuronal Populations
18 0.087886363 174 nips-2001-Spike timing and the coding of naturalistic sounds in a central auditory area of songbirds
19 0.083199345 73 nips-2001-Eye movements and the maturation of cortical orientation selectivity
20 0.080868997 111 nips-2001-Learning Lateral Interactions for Feature Binding and Sensory Segmentation
topicId topicWeight
[(0, -0.215), (1, -0.38), (2, -0.195), (3, 0.063), (4, 0.209), (5, 0.042), (6, 0.117), (7, -0.101), (8, -0.058), (9, 0.009), (10, 0.044), (11, -0.002), (12, 0.103), (13, -0.017), (14, 0.049), (15, -0.092), (16, 0.032), (17, -0.077), (18, -0.121), (19, -0.237), (20, 0.096), (21, 0.073), (22, 0.028), (23, 0.105), (24, -0.074), (25, -0.053), (26, -0.052), (27, 0.094), (28, -0.05), (29, -0.018), (30, 0.104), (31, 0.046), (32, -0.04), (33, 0.048), (34, 0.003), (35, 0.061), (36, 0.011), (37, -0.074), (38, 0.039), (39, -0.055), (40, 0.004), (41, 0.013), (42, 0.222), (43, 0.102), (44, -0.008), (45, -0.017), (46, -0.035), (47, 0.041), (48, -0.043), (49, -0.057)]
simIndex simValue paperId paperTitle
same-paper 1 0.97943777 37 nips-2001-Associative memory in realistic neuronal networks
Author: Peter E. Latham
Abstract: Almost two decades ago , Hopfield [1] showed that networks of highly reduced model neurons can exhibit multiple attracting fixed points, thus providing a substrate for associative memory. It is still not clear, however, whether realistic neuronal networks can support multiple attractors. The main difficulty is that neuronal networks in vivo exhibit a stable background state at low firing rate, typically a few Hz. Embedding attractor is easy; doing so without destabilizing the background is not. Previous work [2, 3] focused on the sparse coding limit, in which a vanishingly small number of neurons are involved in any memory. Here we investigate the case in which the number of neurons involved in a memory scales with the number of neurons in the network. In contrast to the sparse coding limit, we find that multiple attractors can co-exist robustly with a stable background state. Mean field theory is used to understand how the behavior of the network scales with its parameters, and simulations with analog neurons are presented. One of the most important features of the nervous system is its ability to perform associative memory. It is generally believed that associative memory is implemented using attractor networks - experimental studies point in that direction [4- 7], and there are virtually no competing theoretical models. Perhaps surprisingly, however, it is still an open theoretical question whether attractors can exist in realistic neuronal networks. The
2 0.90609109 96 nips-2001-Information-Geometric Decomposition in Spike Analysis
Author: Hiroyuki Nakahara, Shun-ichi Amari
Abstract: We present an information-geometric measure to systematically investigate neuronal firing patterns, taking account not only of the second-order but also of higher-order interactions. We begin with the case of two neurons for illustration and show how to test whether or not any pairwise correlation in one period is significantly different from that in the other period. In order to test such a hypothesis of different firing rates, the correlation term needs to be singled out 'orthogonally' to the firing rates, where the null hypothesis might not be of independent firing. This method is also shown to directly associate neural firing with behavior via their mutual information, which is decomposed into two types of information, conveyed by mean firing rate and coincident firing, respectively. Then, we show that these results, using the 'orthogonal' decomposition, are naturally extended to the case of three neurons and n neurons in general. 1
3 0.64330417 197 nips-2001-Why Neuronal Dynamics Should Control Synaptic Learning Rules
Author: Jesper Tegnér, Ádám Kepecs
Abstract: Hebbian learning rules are generally formulated as static rules. Under changing condition (e.g. neuromodulation, input statistics) most rules are sensitive to parameters. In particular, recent work has focused on two different formulations of spike-timing-dependent plasticity rules. Additive STDP [1] is remarkably versatile but also very fragile, whereas multiplicative STDP [2, 3] is more robust but lacks attractive features such as synaptic competition and rate stabilization. Here we address the problem of robustness in the additive STDP rule. We derive an adaptive control scheme, where the learning function is under fast dynamic control by postsynaptic activity to stabilize learning under a variety of conditions. Such a control scheme can be implemented using known biophysical mechanisms of synapses. We show that this adaptive rule makes the addit ive STDP more robust. Finally, we give an example how meta plasticity of the adaptive rule can be used to guide STDP into different type of learning regimes. 1
4 0.57847828 72 nips-2001-Exact differential equation population dynamics for integrate-and-fire neurons
Author: Julian Eggert, Berthold Bäuml
Abstract: Mesoscopical, mathematical descriptions of dynamics of populations of spiking neurons are getting increasingly important for the understanding of large-scale processes in the brain using simulations. In our previous work, integral equation formulations for population dynamics have been derived for a special type of spiking neurons. For Integrate- and- Fire type neurons , these formulations were only approximately correct. Here, we derive a mathematically compact, exact population dynamics formulation for Integrate- and- Fire type neurons. It can be shown quantitatively in simulations that the numerical correspondence with microscopically modeled neuronal populations is excellent. 1 Introduction and motivation The goal of the population dynamics approach is to model the time course of the collective activity of entire populations of functionally and dynamically similar neurons in a compact way, using a higher descriptionallevel than that of single neurons and spikes. The usual observable at the level of neuronal populations is the populationaveraged instantaneous firing rate A(t), with A(t)6.t being the number of neurons in the population that release a spike in an interval [t, t+6.t). Population dynamics are formulated in such a way, that they match quantitatively the time course of a given A(t), either gained experimentally or by microscopical, detailed simulation. At least three main reasons can be formulated which underline the importance of the population dynamics approach for computational neuroscience. First, it enables the simulation of extensive networks involving a massive number of neurons and connections, which is typically the case when dealing with biologically realistic functional models that go beyond the single neuron level. Second, it increases the analytical understanding of large-scale neuronal dynamics , opening the way towards better control and predictive capabilities when dealing with large networks. Third, it enables a systematic embedding of the numerous neuronal models operating at different descriptional scales into a generalized theoretic framework, explaining the relationships, dependencies and derivations of the respective models. Early efforts on population dynamics approaches date back as early as 1972, to the work of Wilson and Cowan [8] and Knight [4], which laid the basis for all current population-averaged graded-response models (see e.g. [6] for modeling work using these models). More recently, population-based approaches for spiking neurons were developed, mainly by Gerstner [3, 2] and Knight [5]. In our own previous work [1], we have developed a theoretical framework which enables to systematize and simulate a wide range of models for population-based dynamics. It was shown that the equations of the framework produce results that agree quantitatively well with detailed simulations using spiking neurons, so that they can be used for realistic simulations involving networks with large numbers of spiking neurons. Nevertheless, for neuronal populations composed of Integrate-and-Fire (I&F;) neurons, this framework was only correct in an approximation. In this paper, we derive the exact population dynamics formulation for I&F; neurons. This is achieved by reducing the I&F; population dynamics to a point process and by taking advantage of the particular properties of I&F; neurons. 
2 2.1 Background: Integrate-and-Fire dynamics Differential form We start with the standard Integrate- and- Fire (I&F;) model in form of the wellknown differential equation [7] (1) which describes the dynamics of the membrane potential Vi of a neuron i that is modeled as a single compartment with RC circuit characteristics. The membrane relaxation time is in this case T = RC with R being the membrane resistance and C the membrane capacitance. The resting potential v R est is the stationary potential that is approached in the no-input case. The input arriving from other neurons is described in form of a current ji. In addition to eq. (1), which describes the integrate part of the I&F; model, the neuronal dynamics are completed by a nonlinear step. Every time the membrane potential Vi reaches a fixed threshold () from below, Vi is lowered by a fixed amount Ll > 0, and from the new value of the membrane potential integration according to eq. (1) starts again. if Vi(t) = () (from below) . (2) At the same time, it is said that the release of a spike occurred (i.e., the neuron fired), and the time ti = t of this singular event is stored. Here ti indicates the time of the most recent spike. Storing all the last firing times , we gain the sequence of spikes {t{} (spike ordering index j, neuronal index i). 2.2 Integral form Now we look at the single neuron in a neuronal compound. We assume that the input current contribution ji from presynaptic spiking neurons can be described using the presynaptic spike times tf, a response-function ~ and a connection weight W¡ . ',J ji(t) = Wi ,j ~(t - tf) (3) l: l: j f Integrating the I&F; equation (1) beginning at the last spiking time tT, which determines the initial condition by Vi(ti) = vi(ti - 0) - 6., where vi(ti - 0) is the membrane potential just before the neuron spikes, we get 1 Vi(t) = v Rest + fj(t - t:) + l: Wi ,j l: a(t - t:; t - tf) , j - Vi(t:)) e- S / T (4) f with the refractory function fj(s) = - (v Rest (5) and the alpha-function r ds
5 0.5458861 141 nips-2001-Orientation-Selective aVLSI Spiking Neurons
Author: Shih-Chii Liu, Jörg Kramer, Giacomo Indiveri, Tobi Delbrück, Rodney J. Douglas
Abstract: We describe a programmable multi-chip VLSI neuronal system that can be used for exploring spike-based information processing models. The system consists of a silicon retina, a PIC microcontroller, and a transceiver chip whose integrate-and-fire neurons are connected in a soft winner-take-all architecture. The circuit on this multi-neuron chip approximates a cortical microcircuit. The neurons can be configured for different computational properties by the virtual connections of a selected set of pixels on the silicon retina. The virtual wiring between the different chips is effected by an event-driven communication protocol that uses asynchronous digital pulses, similar to spikes in a neuronal system. We used the multi-chip spike-based system to synthesize orientation-tuned neurons using both a feedforward model and a feedback model. The performance of our analog hardware spiking model matched the experimental observations and digital simulations of continuous-valued neurons. The multi-chip VLSI system has advantages over computer neuronal models in that it is real-time, and the computational time does not scale with the size of the neuronal network.
6 0.51979679 2 nips-2001-3 state neurons for contextual processing
7 0.50918418 142 nips-2001-Orientational and Geometric Determinants of Place and Head-direction
8 0.50618285 87 nips-2001-Group Redundancy Measures Reveal Redundancy Reduction in the Auditory Pathway
9 0.46584427 166 nips-2001-Self-regulation Mechanism of Temporally Asymmetric Hebbian Plasticity
10 0.46345845 124 nips-2001-Modeling the Modulatory Effect of Attention on Human Spatial Vision
11 0.45197242 23 nips-2001-A theory of neural integration in the head-direction system
12 0.43985444 57 nips-2001-Correlation Codes in Neuronal Populations
13 0.40950999 82 nips-2001-Generating velocity tuning by asymmetric recurrent connections
14 0.35791782 83 nips-2001-Geometrical Singularities in the Neuromanifold of Multilayer Perceptrons
15 0.35204804 131 nips-2001-Neural Implementation of Bayesian Inference in Population Codes
16 0.32864228 174 nips-2001-Spike timing and the coding of naturalistic sounds in a central auditory area of songbirds
17 0.31882876 27 nips-2001-Activity Driven Adaptive Stochastic Resonance
18 0.3067168 158 nips-2001-Receptive field structure of flow detectors for heading perception
19 0.30454069 11 nips-2001-A Maximum-Likelihood Approach to Modeling Multisensory Enhancement
20 0.29058102 73 nips-2001-Eye movements and the maturation of cortical orientation selectivity
topicId topicWeight
[(14, 0.041), (17, 0.024), (19, 0.045), (27, 0.148), (30, 0.105), (38, 0.045), (59, 0.014), (72, 0.05), (74, 0.269), (79, 0.059), (91, 0.119)]
simIndex simValue paperId paperTitle
1 0.90892601 141 nips-2001-Orientation-Selective aVLSI Spiking Neurons
Author: Shih-Chii Liu, Jörg Kramer, Giacomo Indiveri, Tobi Delbrück, Rodney J. Douglas
Abstract: We describe a programmable multi-chip VLSI neuronal system that can be used for exploring spike-based information processing models. The system consists of a silicon retina, a PIC microcontroller, and a transceiver chip whose integrate-and-fire neurons are connected in a soft winner-take-all architecture. The circuit on this multi-neuron chip approximates a cortical microcircuit. The neurons can be configured for different computational properties by the virtual connections of a selected set of pixels on the silicon retina. The virtual wiring between the different chips is effected by an event-driven communication protocol that uses asynchronous digital pulses, similar to spikes in a neuronal system. We used the multi-chip spike-based system to synthesize orientation-tuned neurons using both a feedforward model and a feedback model. The performance of our analog hardware spiking model matched the experimental observations and digital simulations of continuous-valued neurons. The multi-chip VLSI system has advantages over computer neuronal models in that it is real-time, and the computational time does not scale with the size of the neuronal network.
same-paper 2 0.86380768 37 nips-2001-Associative memory in realistic neuronal networks
Author: Peter E. Latham
Abstract: Almost two decades ago , Hopfield [1] showed that networks of highly reduced model neurons can exhibit multiple attracting fixed points, thus providing a substrate for associative memory. It is still not clear, however, whether realistic neuronal networks can support multiple attractors. The main difficulty is that neuronal networks in vivo exhibit a stable background state at low firing rate, typically a few Hz. Embedding attractor is easy; doing so without destabilizing the background is not. Previous work [2, 3] focused on the sparse coding limit, in which a vanishingly small number of neurons are involved in any memory. Here we investigate the case in which the number of neurons involved in a memory scales with the number of neurons in the network. In contrast to the sparse coding limit, we find that multiple attractors can co-exist robustly with a stable background state. Mean field theory is used to understand how the behavior of the network scales with its parameters, and simulations with analog neurons are presented. One of the most important features of the nervous system is its ability to perform associative memory. It is generally believed that associative memory is implemented using attractor networks - experimental studies point in that direction [4- 7], and there are virtually no competing theoretical models. Perhaps surprisingly, however, it is still an open theoretical question whether attractors can exist in realistic neuronal networks. The
3 0.6755873 197 nips-2001-Why Neuronal Dynamics Should Control Synaptic Learning Rules
Author: Jesper Tegnér, Ádám Kepecs
Abstract: Hebbian learning rules are generally formulated as static rules. Under changing condition (e.g. neuromodulation, input statistics) most rules are sensitive to parameters. In particular, recent work has focused on two different formulations of spike-timing-dependent plasticity rules. Additive STDP [1] is remarkably versatile but also very fragile, whereas multiplicative STDP [2, 3] is more robust but lacks attractive features such as synaptic competition and rate stabilization. Here we address the problem of robustness in the additive STDP rule. We derive an adaptive control scheme, where the learning function is under fast dynamic control by postsynaptic activity to stabilize learning under a variety of conditions. Such a control scheme can be implemented using known biophysical mechanisms of synapses. We show that this adaptive rule makes the addit ive STDP more robust. Finally, we give an example how meta plasticity of the adaptive rule can be used to guide STDP into different type of learning regimes. 1
4 0.64842117 27 nips-2001-Activity Driven Adaptive Stochastic Resonance
Author: Gregor Wenning, Klaus Obermayer
Abstract: Cortical neurons might be considered as threshold elements integrating in parallel many excitatory and inhibitory inputs. Due to the apparent variability of cortical spike trains this yields a strongly fluctuating membrane potential, such that threshold crossings are highly irregular. Here we study how a neuron could maximize its sensitivity w.r.t. a relatively small subset of excitatory input. Weak signals embedded in fluctuations is the natural realm of stochastic resonance. The neuron's response is described in a hazard-function approximation applied to an Ornstein-Uhlenbeck process. We analytically derive an optimality criterium and give a learning rule for the adjustment of the membrane fluctuations, such that the sensitivity is maximal exploiting stochastic resonance. We show that adaptation depends only on quantities that could easily be estimated locally (in space and time) by the neuron. The main results are compared with simulations of a biophysically more realistic neuron model. 1
5 0.64055896 13 nips-2001-A Natural Policy Gradient
Author: Sham M. Kakade
Abstract: We provide a natural gradient method that represents the steepest descent direction based on the underlying structure of the parameter space. Although gradient methods cannot make large changes in the values of the parameters, we show that the natural gradient is moving toward choosing a greedy optimal action rather than just a better action. These greedy optimal actions are those that would be chosen under one improvement step of policy iteration with approximate, compatible value functions, as defined by Sutton et al. [9]. We then show drastic performance improvements in simple MDPs and in the more challenging MDP of Tetris. 1
6 0.63823628 77 nips-2001-Fast and Robust Classification using Asymmetric AdaBoost and a Detector Cascade
7 0.63624585 131 nips-2001-Neural Implementation of Bayesian Inference in Population Codes
8 0.63425994 52 nips-2001-Computing Time Lower Bounds for Recurrent Sigmoidal Neural Networks
9 0.63360178 60 nips-2001-Discriminative Direction for Kernel Classifiers
10 0.63343239 46 nips-2001-Categorization by Learning and Combining Object Parts
11 0.63261783 162 nips-2001-Relative Density Nets: A New Way to Combine Backpropagation with HMM's
12 0.63059998 89 nips-2001-Grouping with Bias
13 0.62921268 190 nips-2001-Thin Junction Trees
14 0.62906742 29 nips-2001-Adaptive Sparseness Using Jeffreys Prior
15 0.62791842 8 nips-2001-A General Greedy Approximation Algorithm with Applications
17 0.62445402 56 nips-2001-Convolution Kernels for Natural Language
18 0.62431198 150 nips-2001-Probabilistic Inference of Hand Motion from Neural Activity in Motor Cortex
19 0.62367439 92 nips-2001-Incorporating Invariances in Non-Linear Support Vector Machines
20 0.62333882 57 nips-2001-Correlation Codes in Neuronal Populations