nips nips2001 nips2001-72 knowledge-graph by maker-knowledge-mining

72 nips-2001-Exact differential equation population dynamics for integrate-and-fire neurons


Source: pdf

Author: Julian Eggert, Berthold Bäuml

Abstract: Mesoscopical, mathematical descriptions of dynamics of populations of spiking neurons are getting increasingly important for the understanding of large-scale processes in the brain using simulations. In our previous work, integral equation formulations for population dynamics have been derived for a special type of spiking neurons. For Integrate-and-Fire type neurons, these formulations were only approximately correct. Here, we derive a mathematically compact, exact population dynamics formulation for Integrate-and-Fire type neurons. It can be shown quantitatively in simulations that the numerical correspondence with microscopically modeled neuronal populations is excellent. 1 Introduction and motivation The goal of the population dynamics approach is to model the time course of the collective activity of entire populations of functionally and dynamically similar neurons in a compact way, using a higher descriptional level than that of single neurons and spikes. The usual observable at the level of neuronal populations is the population-averaged instantaneous firing rate A(t), with A(t)Δt being the number of neurons in the population that release a spike in an interval [t, t+Δt). Population dynamics are formulated in such a way that they match quantitatively the time course of a given A(t), either gained experimentally or by microscopical, detailed simulation. At least three main reasons can be formulated which underline the importance of the population dynamics approach for computational neuroscience. First, it enables the simulation of extensive networks involving a massive number of neurons and connections, which is typically the case when dealing with biologically realistic functional models that go beyond the single neuron level. Second, it increases the analytical understanding of large-scale neuronal dynamics, opening the way towards better control and predictive capabilities when dealing with large networks. Third, it enables a systematic embedding of the numerous neuronal models operating at different descriptional scales into a generalized theoretic framework, explaining the relationships, dependencies and derivations of the respective models. Early efforts on population dynamics approaches date back as early as 1972, to the work of Wilson and Cowan [8] and Knight [4], which laid the basis for all current population-averaged graded-response models (see e.g. [6] for modeling work using these models). More recently, population-based approaches for spiking neurons were developed, mainly by Gerstner [3, 2] and Knight [5]. In our own previous work [1], we have developed a theoretical framework which enables us to systematize and simulate a wide range of models for population-based dynamics. It was shown that the equations of the framework produce results that agree quantitatively well with detailed simulations using spiking neurons, so that they can be used for realistic simulations involving networks with large numbers of spiking neurons. Nevertheless, for neuronal populations composed of Integrate-and-Fire (I&F) neurons, this framework was only correct in an approximation. In this paper, we derive the exact population dynamics formulation for I&F neurons. This is achieved by reducing the I&F population dynamics to a point process and by taking advantage of the particular properties of I&F neurons. 
2 Background: Integrate-and-Fire dynamics 2.1 Differential form We start with the standard Integrate-and-Fire (I&F) model in form of the well-known differential equation [7] (1), which describes the dynamics of the membrane potential Vi of a neuron i that is modeled as a single compartment with RC circuit characteristics. The membrane relaxation time is in this case τ = RC, with R being the membrane resistance and C the membrane capacitance. The resting potential v_Rest is the stationary potential that is approached in the no-input case. The input arriving from other neurons is described in form of a current ji. In addition to eq. (1), which describes the integrate part of the I&F model, the neuronal dynamics are completed by a nonlinear step. Every time the membrane potential Vi reaches a fixed threshold θ from below, Vi is lowered by a fixed amount Δ > 0, and from the new value of the membrane potential integration according to eq. (1) starts again: Vi → Vi - Δ if Vi(t) = θ (from below). (2) At the same time, it is said that the release of a spike occurred (i.e., the neuron fired), and the time ti = t of this singular event is stored. Here ti indicates the time of the most recent spike. Storing all the last firing times, we gain the sequence of spikes {t_i^j} (spike ordering index j, neuronal index i). 2.2 Integral form Now we look at the single neuron in a neuronal compound. We assume that the input current contribution ji from presynaptic spiking neurons can be described using the presynaptic spike times t_j^f, a response function ε and a connection weight Wi,j: ji(t) = Σ_j Σ_f Wi,j ε(t - t_j^f). (3) Integrating the I&F equation (1) beginning at the last spiking time ti, which determines the initial condition by Vi(ti) = vi(ti - 0) - Δ, where vi(ti - 0) is the membrane potential just before the neuron spikes, we get Vi(t) = v_Rest + η(t - ti) + Σ_j Wi,j Σ_f α(t - ti; t - t_j^f), (4) with the refractory function η(s) = -(v_Rest - Vi(ti)) e^(-s/τ) (5) and the alpha-function α(s; s') (6).
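
Equation (1) itself is not reproduced in this extraction, so the following sketch assumes the standard leaky form τ dVi/dt = -(Vi - v_Rest) + R·ji(t); it is a minimal forward-Euler illustration of the integrate-and-fire dynamics of eqs. (1)-(2), not the paper's own code, and all numerical values are placeholders.

```python
import numpy as np

# Forward-Euler sketch of the I&F dynamics of eqs. (1)-(2), assuming the
# standard leaky form tau*dV/dt = -(V - v_rest) + R*j(t) for eq. (1).
def simulate_if(j, dt=0.1, tau=20.0, R=1.0, v_rest=0.0, theta=1.0, delta=1.0):
    v = v_rest                 # membrane potential V_i
    spike_times = []           # stored firing times {t_i^j}
    for k, j_k in enumerate(j):
        # "integrate" part: relaxation towards v_rest plus the input current
        v += dt / tau * (-(v - v_rest) + R * j_k)
        # "fire" part: reaching the threshold theta from below releases a spike
        if v >= theta:
            spike_times.append(k * dt)
            v -= delta         # the potential is lowered by the fixed amount Delta
    return np.array(spike_times)

# Example: constant suprathreshold drive (placeholder values) gives regular firing.
print(simulate_if(np.full(5000, 1.5)).size, "spikes in 500 ms")
```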

Reference: text


Summary: the most important sentences generated by tfidf model

sentIndex sentText sentNum sentScore

1 Exact differential equation population dynamics for Integrate-and-Fire neurons Julian Eggert * HONDA R&D Europe (Deutschland) GmbH Future Technology Research Carl-Legien-Straße 30 63073 Offenbach/Main, Germany julian. [sent-1, score-1.053]

2 Abstract: Mesoscopical, mathematical descriptions of dynamics of populations of spiking neurons are getting increasingly important for the understanding of large-scale processes in the brain using simulations. [sent-9, score-1.037]

3 In our previous work, integral equation formulations for population dynamics have been derived for a special type of spiking neurons. [sent-10, score-0.888]

4 For Integrate-and-Fire type neurons, these formulations were only approximately correct. [sent-11, score-0.333]

5 Here, we derive a mathematically compact, exact population dynamics formulation for Integrate-and-Fire type neurons. [sent-12, score-0.678]

6 It can be shown quantitatively in simulations that the numerical correspondence with microscopically modeled neuronal populations is excellent. [sent-13, score-0.51]

7 1 Introduction and motivation The goal of the population dynamics approach is to model the time course of the collective activity of entire populations of functionally and dynamically similar neurons in a compact way, using a higher descriptional level than that of single neurons and spikes. [sent-14, score-1.687]

8 The usual observable at the level of neuronal populations is the population-averaged instantaneous firing rate A(t), with A(t)Δt [sent-15, score-0.456]

9 being the number of neurons in the population that release a spike in an interval [t, t+Δt). [sent-16, score-1.093]
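
As an illustration of this definition of A(t), the following sketch (not from the paper; all names and numbers are made up) bins the spikes of a microscopically simulated population and reports the spike count per interval, A(t)Δt, divided by Δt.

```python
import numpy as np

# Estimate A(t) from a microscopic simulation: A(t)*dt is taken to be the number
# of neurons releasing a spike in [t, t + dt), as defined above (a division by
# the population size N, as used in some conventions, is omitted here).
def population_activity(spike_times_per_neuron, t_max, dt=5.0):
    edges = np.arange(0.0, t_max + dt, dt)
    counts = np.zeros(len(edges) - 1)
    for spikes in spike_times_per_neuron:
        counts += np.histogram(spikes, bins=edges)[0]
    return counts / dt                      # spikes per unit time across the population

# Example with fabricated Poisson spike trains (100 neurons, ~10 Hz, 1000 ms).
rng = np.random.default_rng(0)
raster = [np.sort(rng.uniform(0.0, 1000.0, rng.poisson(10))) for _ in range(100)]
print(population_activity(raster, t_max=1000.0).mean())   # roughly 1 spike per ms in total
```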

10 Population dynamics are formulated in such a way that they match quantitatively the time course of a given A(t), either gained experimentally or by microscopical, detailed simulation. [sent-18, score-0.418]

11 At least three main reasons can be formulated which underline the importance of the population dynamics approach for computational neuroscience. [sent-19, score-0.645]

12 First, it enables the simulation of extensive networks involving a massive number of neurons and connections, which is typically the case when dealing with biologically realistic functional models that go beyond the single neuron level. [sent-20, score-0.572]

13 Second, it increases the analytical understanding of large-scale neuronal dynamics, opening the way towards better control and predictive capabilities when dealing with large networks. [sent-21, score-0.468]

14 Third, it enables a systematic embedding of the numerous neuronal models operating at different descriptional scales into a generalized theoretic framework, explaining the relationships, dependencies and derivations of the respective models. [sent-22, score-0.194]

15 Early efforts on population dynamics approaches date back as early as 1972, to the work of Wilson and Cowan [8] and Knight [4], which laid the basis for all current population-averaged graded-response models (see e.g. [6] for modeling work using these models). [sent-23, score-0.621]

16 More recently, population-based approaches for spiking neurons were developed, mainly by Gerstner [3, 2] and Knight [5]. [sent-26, score-0.504]

17 It was shown that the equations of the framework produce results that agree quantitatively well with detailed simulations using spiking neurons, so that they can be used for realistic simulations involving networks with large numbers of spiking neurons. [sent-28, score-0.617]

18 Nevertheless, for neuronal populations composed of Integrate-and-Fire (I&F) neurons, this framework was only correct in an approximation. [sent-29, score-0.402]

19 In this paper, we derive the exact population dynamics formulation for I&F neurons. [sent-30, score-0.678]

20 This is achieved by reducing the I&F population dynamics to a point process and by taking advantage of the particular properties of I&F neurons. [sent-31, score-0.621]

21 The membrane relaxation time is in this case τ = RC, with R being the membrane resistance and C the membrane capacitance. [sent-34, score-1.101]

22 The resting potential v_Rest is the stationary potential that is approached in the no-input case. [sent-35, score-0.276]

23 The input arriving from other neurons is described in form of a current ji. [sent-36, score-0.34]

24 (1), which describes the integrate part of the I&F model, the neuronal dynamics are completed by a nonlinear step. [sent-38, score-0.445]

25 Every time the membrane potential Vi reaches a fixed threshold θ from below, Vi is lowered by a fixed amount Δ > 0, and from the new value of the membrane potential integration according to eq. [sent-39, score-1.136]

26 (2) At the same time, it is said that the release of a spike occurred (i. [sent-42, score-0.435]

27 , the neuron fired), and the time ti = t of this singular event is stored. [sent-44, score-0.4]

28 Here ti indicates the time of the most recent spike. [sent-45, score-0.254]

29 Storing all the last firing times, we gain the sequence of spikes {t_i^j} (spike ordering index j, neuronal index i). [sent-46, score-0.411]

30 2.2 Integral form Now we look at the single neuron in a neuronal compound. [sent-48, score-0.343]

31 We assume that the input current contribution ji from presynaptic spiking neurons can be described using the presynaptic spike times t_j^f, a response function ε and a connection weight Wi,j. [sent-49, score-0.958]

32 ji(t) = Σ_j Σ_f Wi,j ε(t - t_j^f) (3) Integrating the I&F equation (1) beginning at the last spiking time ti, which determines the initial condition by Vi(ti) = vi(ti - 0) - Δ. [sent-50, score-0.315]

33 (6) If we start the integration at the time ti* of the spike before the last spike, then for ti* ≤ t < ti the membrane potential is given by an expression like eq. [sent-52, score-1.106]

34 In particular, we can now express v(ti - 0), and therefore the initial condition for an integration starting at ti, in terms of ti* and v(ti* - 0). [sent-54, score-0.251]

35 The contributions vi^ref(t) and vi^syn(t) to the membrane potential indicate the refractory and synaptic components of neuron i, respectively, as normally used in the Spike-Response-Model (SRM) notation [2]. [sent-59, score-0.918]

36 Both equations (4) and (7) formulate the neuronal dynamics using a refractory component, which depends on a neuron's own spike releases, and a synaptic component, which comprises the integrated input contribution to the membrane potential by arrival of spikes from other neurons. [sent-60, score-2.088]

37 The synaptic component is based on the alpha-function characteristic of isolated arriving spikes, with an increase of the membrane potential after spike arrival and a subsequent exponential decrease. [sent-61, score-0.953]
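
The concrete response function ε and alpha-function α(s; s') of eqs. (3) and (6) are not reproduced above, so the sketch below uses a generic alpha-shaped PSP kernel as a stand-in, simply to illustrate how a synaptic field of the form Σ_j Wi,j Σ_f ε(t - t_j^f) is assembled from presynaptic spike times and weights.

```python
import numpy as np

# Sketch of a synaptic field v_syn(t) = sum_j W_ij * sum_f eps(t - t_j^f), with a
# generic alpha-shaped kernel standing in for the paper's response function.
def alpha_kernel(s, tau_s=2.0):
    """Causal rise-then-decay PSP shape, peaking at s = tau_s (illustrative only)."""
    s = np.asarray(s, dtype=float)
    out = np.zeros_like(s)
    pos = s > 0.0
    out[pos] = (s[pos] / tau_s) * np.exp(1.0 - s[pos] / tau_s)
    return out

def v_syn(t, weights, presyn_spikes, tau_s=2.0):
    """weights[j] = W_ij; presyn_spikes[j] = spike times t_j^f of presynaptic neuron j."""
    total = 0.0
    for w_j, spikes_j in zip(weights, presyn_spikes):
        total += w_j * alpha_kernel(t - np.asarray(spikes_j), tau_s).sum()
    return total

# Example with two presynaptic neurons and made-up spike times and weights.
print(v_syn(12.0, weights=[0.5, -0.2], presyn_spikes=[[3.0, 10.0], [8.0]]))
```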

38 So the I&F model can be formulated as a special case of the Spike-Response-Model, which defines the neuronal dynamics in the integral formulation. [sent-64, score-0.499]

39 (4), the refractory component depends only on the time elapsed since the last spike (thus reflecting a renewal property, sometimes also called a short term memory for refractory properties), whereas in eq. [sent-69, score-0.917]

40 The simpler form of the refractory contribution in eq. [sent-71, score-0.238]

41 (4) is achieved at the cost of an alpha-function that now depends on the time t - ti elapsed since the neuron's own last spike, in addition to the times t - t_j^f elapsed since the release of spikes at the presynaptic neurons j that provide input to i. [sent-72, score-1.466]

42 (7), we have a more complex refractory contribution, but an alpha-function that does not depend on the last own spike time any more. [sent-74, score-0.575]

43 3 Probabilistic spike release Probabilistic firing is introduced into the I&F model eq. [sent-76, score-0.466]

44 The spike release of each neuron is controlled by a hazard function λ. [sent-79, score-0.671]

45 λ(v) dt = probability that a neuron with membrane potential v spikes in [t, t + dt) (9) When a neuron spikes, we proceed as usual: The membrane potential is reset by a fixed amount Δ. [sent-82, score-1.463]
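
A minimal illustration of this escape-noise step, assuming the interpretation λ(v)dt = spiking probability in [t, t+dt) stated above; the hazard used in the example is made up (a specific form appears later as eq. (23)).

```python
import numpy as np

# Escape-noise step: the neuron spikes in [t, t + dt) with probability ~ lambda(v)*dt
# (eq. (9)); on a spike the membrane potential is reset by the fixed amount Delta.
def step_with_hazard(v, hazard, dt, delta, rng):
    p_spike = 1.0 - np.exp(-hazard(v) * dt)   # exact for a rate held constant over dt
    if rng.random() < p_spike:
        return v - delta, True
    return v, False

# Example with a made-up exponential hazard (a concrete form appears later, eq. (23)).
hazard = lambda v: 0.05 * np.exp(2.0 * (v - 1.0))
rng = np.random.default_rng(1)
print(step_with_hazard(v=1.2, hazard=hazard, dt=0.1, delta=1.0, rng=rng))
```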

46 4.1 Density description Descriptions of neuronal populations usually assume a neuronal density function p(t; X) which depends on the state variables X of the neurons. [sent-85, score-0.779]

47 The density quantifies the likelihood that a neuron picked out of the population will be found in the vicinity of the point X in state space, p(t; X) dX = Portion of neurons at time t with state in [X, X + dX) (10) If we know p(t; X), the population activity A(t) can be easily calculated. [sent-86, score-1.488]

48 Using the hazard function λ(t; X), the instantaneous population activity (spikes per time) can be calculated by computing the spike release averaged over the population, A(t) = ∫ dX λ(t; X) p(t; X). [sent-87, score-1.004]

49 A(t) = ∫ dX λ(t; X) p(t; X) (11) The population dynamics is then given by the time course of the neuronal density function p(t; X), which changes because each neuron evolves according to its own internal dynamics, e.g. [sent-88, score-1.143]

50 after a spike release and the subsequent reset of the membrane potential. [sent-90, score-0.808]
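
Discretizing the state variable X on a grid, eq. (11) becomes a simple weighted sum; the sketch below is an illustration with a fabricated density and hazard, not the paper's implementation.

```python
import numpy as np

# Discretized eq. (11): the population activity is the hazard averaged over the
# density, A(t) = integral dX lambda(t; X) p(t; X), evaluated on a 1-D grid of X.
def activity_from_density(p, lam, dX):
    return np.sum(lam * p) * dX

# Example with a fabricated density and hazard over a membrane-potential-like state.
X = np.linspace(-1.0, 2.0, 301)
dX = X[1] - X[0]
p = np.exp(-(X - 0.8) ** 2 / 0.1)
p /= p.sum() * dX                          # normalize the density to 1
lam = 0.05 * np.exp(2.0 * (X - 1.0))       # made-up hazard over the grid
print(activity_from_density(p, lam, dX))
```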

51 The main challenge for the formulation of a population dynamics resides in selecting a low-dimensional state space [for an easy calculation of A(t)] and a suitable form for ∂p(t; X)/∂t. [sent-91, score-0.729]

52 As an example, for the population dynamics for I&F neurons it would be straightforward to use the membrane potential v from eq. [sent-92, score-1.4]

53 But this leads to a complicated density dynamics, because the dynamics for v(t) consist of a continuous (differential equation (1)) and a discrete part (spike generation). [sent-94, score-0.397]

54 4.2 Exact population dynamics for I&F neurons Which is the best state space for a population dynamics of I&F neurons? [sent-97, score-1.608]

55 For the formulation of a population dynamics, it is usually assumed that the synaptic contributions to the membrane potential are identical for all neurons. [sent-98, score-1.011]

56 This is the case if we group all neurons of the same dynamical type and with identical connectivity patterns into one population. [sent-99, score-0.333]

57 (4), we see that, since α(s, s') depends on s = t - ti and therefore on the neuron's own last spike time, the synaptic contribution to the membrane potential differs according to the state of the neuron. [sent-102, score-1.287]

58 Here, we see that for identical connectivity patterns Wi,j, the synaptic contributions are the same for all neurons, because α(∞, s') does not depend on the own spike time any more. [sent-105, score-0.522]

59 We see that, for a fixed synaptic contribution, the membrane potential Vi is fully determined by the set of its own past spiking times {t_i^f}. [sent-108, score-0.85]

60 But this would mean an infinite-dimensional density for the state description of a population, and, accordingly, a computationally overly expensive calculation of the population activity A(t) according to eq. [sent-109, score-0.658]

61 (8), the single-spike refractory contributions η(s) are exponential. [sent-113, score-0.552]

62 (7) (eq. (12)). Now the membrane potential Vi(t) only depends on the time ti of the last own spike and on the refractory contribution amplitude modulation factor ηi at the last spike. [sent-115, score-1.698]

63 In addition, we have to take care of updating ti and ηi when a neuron spikes, so that we get ηi → 1 + ηi e^(-(t-ti)/τ). [sent-117, score-0.489]

64 The effect of taking into account more than the most recent spike ti in the refractory component vi^ref(t) leads to a modulation factor ηi greater than 1, in particular if spikes come in rapid succession so that refractory contributions can accumulate. [sent-119, score-1.111]

65 Instead of using a modulation factor ηi, the effect of previous spikes can also be taken into account by introducing an effective last spiking time t̂i. [sent-120, score-0.484]

66 Because of ηi ≥ 1, it holds for the effective last spiking time that t̂i ≥ ti. [sent-122, score-0.317]

67 This means that, while at a given time t it is always ti ≤ t, it can happen that t̂i ≥ t, meaning the neurons behave as if they would spike in the future. [sent-123, score-0.852]
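
The bookkeeping described in the last few sentences can be sketched as follows. The decay constant τ in the update ηi → 1 + ηi e^(-(t-ti)/τ) and the closed form t̂i = ti + τ ln ηi are reconstructions that follow from the exponential shape of η(s); they are not quoted from the paper.

```python
import numpy as np

# Refractory bookkeeping: fold all past spikes either into a modulation factor
# eta_i (updated on each spike) or into an effective last spiking time t_hat.
def update_on_spike(eta, t_last, t_now, tau=20.0):
    """eta_i -> 1 + eta_i * exp(-(t - t_i)/tau); the last spike time moves to t_now."""
    return 1.0 + eta * np.exp(-(t_now - t_last) / tau), t_now

def effective_last_spike(eta, t_last, tau=20.0):
    """t_hat = t_i + tau*log(eta_i); since eta_i >= 1 this gives t_hat >= t_i."""
    return t_last + tau * np.log(eta)

# Two spikes in rapid succession: eta grows beyond 1 and t_hat can exceed t_last.
eta, t_last = 1.0, 0.0                                   # first spike at t = 0
eta, t_last = update_on_spike(eta, t_last, t_now=5.0)    # second spike at t = 5 ms
print(eta, effective_last_spike(eta, t_last))
```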

68 For the membrane potential we now get, instead of eq. [sent-125, score-0.468]

69 (7): Vi(t) = v_Rest + η(t - t̂i) + Σ_j Wi,j Σ_f α(∞; t - t_j^f) (16), and for the effective last spiking time the update rule t̂i → t̂i' = f(t, t̂i) follows (eqs. (17), (18)). Therefore we can regard the dimensionality of the state space of the I&F dynamics as 1-dimensional in the description of eq. [sent-130, score-0.695]

70 The dynamics of the single I&F neuron now turns out to be very simple: Calculate the membrane potential Vi(t) using eq. [sent-132, score-1.079]

71 If the membrane potential exceeds threshold, update according to eq. [sent-135, score-0.518]
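
A sketch of this reduced single-neuron loop; since eqs. (13)-(19) are not reproduced in this summary, the closed form used for the refractory term, -Δ·e^(-(t - t̂i)/τ), is a reconstruction based on the statements that single-spike refractory contributions are exponential and that each spike lowers the potential by Δ.

```python
import numpy as np

# Reduced single-neuron loop: evaluate V_i(t) from the effective last spike time
# t_hat and the synaptic field, spike on threshold crossing, then update t_hat.
def simulate_reduced_if(v_syn, dt=0.1, tau=20.0, v_rest=0.0, theta=1.0, delta=1.0):
    t_hat = -1e9                      # effective last spike time, initially far in the past
    spike_times = []
    for k, v_syn_k in enumerate(v_syn):
        t = k * dt
        # reconstructed closed form: accumulated refractory term -Delta*exp(-(t - t_hat)/tau)
        v = v_rest - delta * np.exp(-(t - t_hat) / tau) + v_syn_k
        if v >= theta:
            spike_times.append(t)
            eta = 1.0 + np.exp(-(t - t_hat) / tau)   # absorb the new reset...
            t_hat = t + tau * np.log(eta)            # ...into a new (possibly future) t_hat
    return np.array(spike_times)

# Example: a step in the synaptic field (placeholder values) switches the neuron on.
v_syn_trace = np.concatenate([np.zeros(1000), 1.5 * np.ones(2000)])
print(simulate_reduced_if(v_syn_trace).size, "spikes")
```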

72 Using this single-neuron dynamics, we can now proceed to gain a population dynamics using a density p(t; t*). [sent-137, score-1.21]

73 By fixing t* and the synaptic contribution vsyn(t) to the membrane potential, the state of a neuron is fully determined and the hazard function can be written as λ[vsyn(t); t*]. [sent-139, score-0.801]

74 The dynamics of the density p(t; t*) is then calculated as follows. [sent-140, score-0.404]

75 Changes of p(t; t*) occur when neurons spike and t* is updated according to eq. [sent-141, score-0.623]

76 The hazard function controls the spike release, and, therefore, the change of p(t; t*). [sent-143, score-0.377]

77 For chosen state variables, p(t; t*) decreases due to spiking of the neurons with the fixed t*, and increases because neurons with other t'* spike and get updated in just such a way that after updating their state variable falls around t*. [sent-144, score-1.241]

78 (11) as follows: A(t) = ∫_{-∞}^{+∞} dt* λ[vsyn(t); t*] p(t; t*) (21) Remark that the expression for the density dynamics (eq. [sent-148, score-0.375]

79 (20)) automatically conserves the norm of the density, so that ∫_{-∞}^{+∞} dt* p(t; t*) = const, (22) which is a necessary condition because the number of neurons participating in the dynamics must remain constant. [sent-149, score-0.585]

80 This means that all we have to store is the density p(t; t*) for past and future effective last spiking times t*. [sent-151, score-0.416]
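
Eq. (20) itself is not reproduced in this summary, so the following discretized sketch implements only the loss/gain structure described above: density at t* is removed at rate λ[vsyn(t); t*] and the same mass is re-deposited at the remapped effective last spiking time. The membrane potential v(t; t*) = v_Rest - Δ e^(-(t-t*)/τ) + vsyn(t), the remap t*' = t + τ ln(1 + e^(-(t-t*)/τ)) and all parameter values are reconstructions or placeholders, not quotations from the paper.

```python
import numpy as np

# Illustrative discretization of the density dynamics on a grid of effective last
# spiking times t*. Per time step: mass at t* spikes at rate lambda[v_syn(t); t*],
# A(t) is the hazard averaged over p (eq. (21)), and the spiking mass is moved to
# the remapped t*, so the norm of p is conserved (eq. (22)).
tau, delta, v_rest = 20.0, 1.0, 0.0           # membrane constant (ms), reset jump, rest
tau0, beta, theta = 20.0, 2.0, 1.0            # hazard parameters (placeholders)
dt, T = 0.1, 300.0
grid = np.arange(-400.0, T + 50.0, 0.5)       # grid of effective last spiking times t*
dts = grid[1] - grid[0]

p = np.exp(-(grid + 200.0) ** 2 / 100.0)      # some initial density over t*
p /= p.sum() * dts

A = []
for k in range(int(T / dt)):
    t = k * dt
    v_syn_t = 0.0 if t < 100.0 else 1.2       # step in the synaptic field
    v = v_rest - delta * np.exp(-(t - grid) / tau) + v_syn_t
    lam = (1.0 / tau0) * np.exp(2.0 * beta * (v - theta))
    flux = lam * p * dt                       # portion of each bin that spikes now
    A.append(np.sum(lam * p) * dts)           # eq. (21), discretized
    p = p - flux
    t_new = t + tau * np.log(1.0 + np.exp(-(t - grid) / tau))   # remapped t*
    idx = np.clip(np.searchsorted(grid, t_new), 0, len(grid) - 1)
    np.add.at(p, idx, flux)                   # redeposit the spiking mass

assert abs(p.sum() * dts - 1.0) < 1e-6        # norm conservation, eq. (22)
print("mean activity after the step:", np.mean(A[-500:]))
```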

81 The activity A(t) only appears as an auxiliary variable that is calculated with the help of the neuronal density. [sent-153, score-0.28]

82 In figure 1 the simulation results for populations of spiking neurons are shown. [sent-154, score-0.755]

83 The neurons are uncoupled and a hazard function λ(v) = (1/τ0) e^(2β(v-θ)) (23) is used, with spike rate at threshold 1/τ0 = 1. [sent-155, score-0.758]

84 (1) are: resting potential v_Rest = 0, jump in membrane potential after spike release Δ = 1 and time constant τ = 20 ms. [sent-160, score-1.101]
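
A direct transcription of the hazard of eq. (23) as reconstructed above; the numerical values of 1/τ0, β and θ are truncated or not given in this summary, so the defaults below are placeholders.

```python
import numpy as np

# Hazard of eq. (23): lambda(v) = (1/tau_0) * exp(2*beta*(v - theta)),
# so that the rate at threshold (v = theta) equals 1/tau_0.
def hazard(v, tau_0=20.0, beta=2.0, theta=1.0):   # default values are placeholders
    return (1.0 / tau_0) * np.exp(2.0 * beta * (v - theta))

print(hazard(1.0), hazard(0.5), hazard(1.2))      # at, below and above threshold
```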

85 Figure 1: Activity A(t) of simulated populations of neurons. [sent-184, score-0.231]

86 The neurons are uncoupled and to each neuron the same synaptic field vsyn(t), plotted in c) and d), is applied. [sent-185, score-0.615]

87 a) shows the activity A(t) for a population of I&F neurons simulated on the one hand as N = 10000 single neurons (solid line) using eq. [sent-186, score-1.075]

88 (7) and on the other hand using the density dynamics eq. [sent-187, score-0.375]

89 In b) the activity A(t) of a population of I&F neurons (dashed line) and a population of SRM neurons with renewal (solid line) are compared. [sent-189, score-1.45]

90 (20) reproduces the activity A(t) of a population of single I&F neurons almost perfectly, with the exception of the noise in the single-neuron simulations due to finite-size effects. [sent-192, score-1.003]

91 This holds even for the peaks occurring at the steps of the applied synaptic field vsyn(t), although the density dynamics is entirely based on differential equations and one would therefore not expect such an excellent match for fast changes in activity. [sent-193, score-0.68]

92 The simulations also show that there can be a big difference between I&F and SRM neurons with renewal. [sent-195, score-0.378]

93 Because of the accumulation of the refractory effects of all former spikes in the case of I&F neurons, the activity A(t) is generally lower than for the SRM neurons with renewal, and the higher the absolute activity level, the bigger the difference between the two. [sent-196, score-1.079]

94 5 Conclusions In this paper we derived an exact differential equation density dynamics for a population of I&F neurons, starting from the microscopical equations for a single neuron. [sent-197, score-1.283]

95 This density dynamics allows a computationally efficient simulation of a whole population of neurons. [sent-198, score-0.742]

96 In future work we want to simulate a network of connected neuronal populations. [sent-199, score-0.191]

97 by x), a self-consistent system of differential equations based on the population's p(x, t; t*) and A(x, t) emerges if we constrain ourselves to neuronal populations connected synaptically according to the constraints given by the pool definition [2]. [sent-202, score-0.559]

98 In this case, two neurons i and j belong to pools x and y, if Wi,j = W(x, y). [sent-203, score-0.311]

99 Population dynamics of spiking neurons: Fast transients, asynchronous states and locking. [sent-215, score-0.467]

100 Excitatory and inhibitory interactions in localized populations of model neurons. [sent-251, score-0.231]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('population', 0.347), ('membrane', 0.344), ('neurons', 0.311), ('spike', 0.287), ('dynamics', 0.274), ('populations', 0.231), ('ti', 0.208), ('spiking', 0.193), ('refractory', 0.188), ('neuronal', 0.171), ('release', 0.148), ('neuron', 0.146), ('spikes', 0.135), ('vi', 0.131), ('vsyn', 0.125), ('potential', 0.124), ('synaptic', 0.116), ('density', 0.101), ('differential', 0.099), ('hazard', 0.09), ('tf', 0.087), ('srm', 0.083), ('activity', 0.08), ('simulations', 0.067), ('eggert', 0.062), ('state', 0.055), ('elapsed', 0.054), ('knight', 0.054), ('renewal', 0.054), ('last', 0.054), ('contributions', 0.051), ('contribution', 0.05), ('fire', 0.049), ('time', 0.046), ('dt', 0.046), ('ry', 0.043), ('integration', 0.043), ('proceed', 0.042), ('microscopical', 0.042), ('reemplacement', 0.042), ('uncoupled', 0.042), ('dx', 0.041), ('quantitatively', 0.041), ('rest', 0.038), ('presynaptic', 0.038), ('syn', 0.036), ('und', 0.036), ('equations', 0.033), ('course', 0.033), ('modulation', 0.032), ('firing', 0.031), ('fur', 0.031), ('gerstner', 0.031), ('arrival', 0.031), ('yn', 0.031), ('integral', 0.03), ('calculated', 0.029), ('fixed', 0.029), ('arriving', 0.029), ('reset', 0.029), ('formulation', 0.029), ('compact', 0.028), ('exact', 0.028), ('threshold', 0.028), ('oo', 0.028), ('resting', 0.028), ('wilson', 0.028), ('rc', 0.028), ('descriptions', 0.028), ('tn', 0.026), ('sf', 0.026), ('description', 0.026), ('single', 0.026), ('exceeds', 0.025), ('according', 0.025), ('formulated', 0.024), ('wi', 0.024), ('effective', 0.024), ('write', 0.024), ('ta', 0.024), ('calculation', 0.024), ('past', 0.024), ('depends', 0.024), ('regard', 0.023), ('relaxation', 0.023), ('realistic', 0.023), ('instantaneous', 0.023), ('tt', 0.023), ('dealing', 0.023), ('enables', 0.023), ('equation', 0.022), ('component', 0.022), ('formulations', 0.022), ('fj', 0.022), ('connectivity', 0.022), ('ji', 0.021), ('entirely', 0.021), ('simulation', 0.02), ('simulate', 0.02), ('times', 0.02)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.99999952 72 nips-2001-Exact differential equation population dynamics for integrate-and-fire neurons

Author: Julian Eggert, Berthold Bäuml


2 0.27441451 141 nips-2001-Orientation-Selective aVLSI Spiking Neurons

Author: Shih-Chii Liu, Jörg Kramer, Giacomo Indiveri, Tobi Delbrück, Rodney J. Douglas

Abstract: We describe a programmable multi-chip VLSI neuronal system that can be used for exploring spike-based information processing models. The system consists of a silicon retina, a PIC microcontroller, and a transceiver chip whose integrate-and-fire neurons are connected in a soft winner-take-all architecture. The circuit on this multi-neuron chip approximates a cortical microcircuit. The neurons can be configured for different computational properties by the virtual connections of a selected set of pixels on the silicon retina. The virtual wiring between the different chips is effected by an event-driven communication protocol that uses asynchronous digital pulses, similar to spikes in a neuronal system. We used the multi-chip spike-based system to synthesize orientation-tuned neurons using both a feedforward model and a feedback model. The performance of our analog hardware spiking model matched the experimental observations and digital simulations of continuous-valued neurons. The multi-chip VLSI system has advantages over computer neuronal models in that it is real-time, and the computational time does not scale with the size of the neuronal network.

3 0.24862722 27 nips-2001-Activity Driven Adaptive Stochastic Resonance

Author: Gregor Wenning, Klaus Obermayer

Abstract: Cortical neurons might be considered as threshold elements integrating in parallel many excitatory and inhibitory inputs. Due to the apparent variability of cortical spike trains this yields a strongly fluctuating membrane potential, such that threshold crossings are highly irregular. Here we study how a neuron could maximize its sensitivity w.r.t. a relatively small subset of excitatory input. Weak signals embedded in fluctuations is the natural realm of stochastic resonance. The neuron's response is described in a hazard-function approximation applied to an Ornstein-Uhlenbeck process. We analytically derive an optimality criterium and give a learning rule for the adjustment of the membrane fluctuations, such that the sensitivity is maximal exploiting stochastic resonance. We show that adaptation depends only on quantities that could easily be estimated locally (in space and time) by the neuron. The main results are compared with simulations of a biophysically more realistic neuron model. 1

4 0.23548977 37 nips-2001-Associative memory in realistic neuronal networks

Author: Peter E. Latham

Abstract: Almost two decades ago , Hopfield [1] showed that networks of highly reduced model neurons can exhibit multiple attracting fixed points, thus providing a substrate for associative memory. It is still not clear, however, whether realistic neuronal networks can support multiple attractors. The main difficulty is that neuronal networks in vivo exhibit a stable background state at low firing rate, typically a few Hz. Embedding attractor is easy; doing so without destabilizing the background is not. Previous work [2, 3] focused on the sparse coding limit, in which a vanishingly small number of neurons are involved in any memory. Here we investigate the case in which the number of neurons involved in a memory scales with the number of neurons in the network. In contrast to the sparse coding limit, we find that multiple attractors can co-exist robustly with a stable background state. Mean field theory is used to understand how the behavior of the network scales with its parameters, and simulations with analog neurons are presented. One of the most important features of the nervous system is its ability to perform associative memory. It is generally believed that associative memory is implemented using attractor networks - experimental studies point in that direction [4- 7], and there are virtually no competing theoretical models. Perhaps surprisingly, however, it is still an open theoretical question whether attractors can exist in realistic neuronal networks. The

5 0.20040171 2 nips-2001-3 state neurons for contextual processing

Author: Ádám Kepecs, S. Raghavachari

Abstract: Neurons receive excitatory inputs via both fast AMPA and slow NMDA type receptors. We find that neurons receiving input via NMDA receptors can have two stable membrane states which are input dependent. Action potentials can only be initiated from the higher voltage state. Similar observations have been made in several brain areas which might be explained by our model. The interactions between the two kinds of inputs lead us to suggest that some neurons may operate in 3 states: disabled, enabled and firing. Such enabled, but non-firing modes can be used to introduce context-dependent processing in neural networks. We provide a simple example and discuss possible implications for neuronal processing and response variability. 1

6 0.19321631 174 nips-2001-Spike timing and the coding of naturalistic sounds in a central auditory area of songbirds

7 0.16280952 57 nips-2001-Correlation Codes in Neuronal Populations

8 0.15528487 49 nips-2001-Citcuits for VLSI Implementation of Temporally Asymmetric Hebbian Learning

9 0.15353589 197 nips-2001-Why Neuronal Dynamics Should Control Synaptic Learning Rules

10 0.1372115 23 nips-2001-A theory of neural integration in the head-direction system

11 0.13592425 82 nips-2001-Generating velocity tuning by asymmetric recurrent connections

12 0.12666202 96 nips-2001-Information-Geometric Decomposition in Spike Analysis

13 0.12462659 87 nips-2001-Group Redundancy Measures Reveal Redundancy Reduction in the Auditory Pathway

14 0.11809638 131 nips-2001-Neural Implementation of Bayesian Inference in Population Codes

15 0.11668295 160 nips-2001-Reinforcement Learning and Time Perception -- a Model of Animal Experiments

16 0.11561381 166 nips-2001-Self-regulation Mechanism of Temporally Asymmetric Hebbian Plasticity

17 0.087120913 150 nips-2001-Probabilistic Inference of Hand Motion from Neural Activity in Motor Cortex

18 0.075404659 42 nips-2001-Bayesian morphometry of hippocampal cells suggests same-cell somatodendritic repulsion

19 0.074660443 165 nips-2001-Scaling Laws and Local Minima in Hebbian ICA

20 0.070312522 48 nips-2001-Characterizing Neural Gain Control using Spike-triggered Covariance


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.173), (1, -0.379), (2, -0.176), (3, 0.061), (4, 0.192), (5, 0.041), (6, 0.202), (7, -0.051), (8, -0.076), (9, -0.018), (10, 0.058), (11, -0.079), (12, 0.07), (13, -0.033), (14, -0.028), (15, -0.16), (16, -0.002), (17, 0.016), (18, -0.037), (19, -0.128), (20, 0.0), (21, -0.029), (22, 0.003), (23, -0.023), (24, -0.03), (25, 0.007), (26, 0.097), (27, -0.056), (28, -0.046), (29, 0.088), (30, -0.06), (31, -0.17), (32, 0.086), (33, -0.035), (34, -0.037), (35, -0.083), (36, 0.013), (37, 0.093), (38, -0.005), (39, 0.004), (40, 0.01), (41, 0.013), (42, -0.054), (43, -0.105), (44, 0.066), (45, -0.041), (46, 0.001), (47, -0.057), (48, 0.031), (49, 0.099)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.98628443 72 nips-2001-Exact differential equation population dynamics for integrate-and-fire neurons

Author: Julian Eggert, Berthold Bäuml


2 0.80159348 141 nips-2001-Orientation-Selective aVLSI Spiking Neurons

Author: Shih-Chii Liu, Jörg Kramer, Giacomo Indiveri, Tobi Delbrück, Rodney J. Douglas

Abstract: We describe a programmable multi-chip VLSI neuronal system that can be used for exploring spike-based information processing models. The system consists of a silicon retina, a PIC microcontroller, and a transceiver chip whose integrate-and-fire neurons are connected in a soft winner-take-all architecture. The circuit on this multi-neuron chip approximates a cortical microcircuit. The neurons can be configured for different computational properties by the virtual connections of a selected set of pixels on the silicon retina. The virtual wiring between the different chips is effected by an event-driven communication protocol that uses asynchronous digital pulses, similar to spikes in a neuronal system. We used the multi-chip spike-based system to synthesize orientation-tuned neurons using both a feedforward model and a feedback model. The performance of our analog hardware spiking model matched the experimental observations and digital simulations of continuous-valued neurons. The multi-chip VLSI system has advantages over computer neuronal models in that it is real-time, and the computational time does not scale with the size of the neuronal network.

3 0.75525141 27 nips-2001-Activity Driven Adaptive Stochastic Resonance

Author: Gregor Wenning, Klaus Obermayer

Abstract: Cortical neurons might be considered as threshold elements integrating in parallel many excitatory and inhibitory inputs. Due to the apparent variability of cortical spike trains this yields a strongly fluctuating membrane potential, such that threshold crossings are highly irregular. Here we study how a neuron could maximize its sensitivity w.r.t. a relatively small subset of excitatory input. Weak signals embedded in fluctuations is the natural realm of stochastic resonance. The neuron's response is described in a hazard-function approximation applied to an Ornstein-Uhlenbeck process. We analytically derive an optimality criterium and give a learning rule for the adjustment of the membrane fluctuations, such that the sensitivity is maximal exploiting stochastic resonance. We show that adaptation depends only on quantities that could easily be estimated locally (in space and time) by the neuron. The main results are compared with simulations of a biophysically more realistic neuron model. 1

4 0.74863005 2 nips-2001-3 state neurons for contextual processing

Author: Ádám Kepecs, S. Raghavachari

Abstract: Neurons receive excitatory inputs via both fast AMPA and slow NMDA type receptors. We find that neurons receiving input via NMDA receptors can have two stable membrane states which are input dependent. Action potentials can only be initiated from the higher voltage state. Similar observations have been made in several brain areas which might be explained by our model. The interactions between the two kinds of inputs lead us to suggest that some neurons may operate in 3 states: disabled, enabled and firing. Such enabled, but non-firing modes can be used to introduce context-dependent processing in neural networks. We provide a simple example and discuss possible implications for neuronal processing and response variability. 1

5 0.61063313 57 nips-2001-Correlation Codes in Neuronal Populations

Author: Maoz Shamir, Haim Sompolinsky

Abstract: Population codes often rely on the tuning of the mean responses to the stimulus parameters. However, this information can be greatly suppressed by long range correlations. Here we study the efficiency of coding information in the second order statistics of the population responses. We show that the Fisher Information of this system grows linearly with the size of the system. We propose a bilinear readout model for extracting information from correlation codes, and evaluate its performance in discrimination and estimation tasks. It is shown that the main source of information in this system is the stimulus dependence of the variances of the single neuron responses.

6 0.5967657 37 nips-2001-Associative memory in realistic neuronal networks

7 0.56444228 174 nips-2001-Spike timing and the coding of naturalistic sounds in a central auditory area of songbirds

8 0.4851518 82 nips-2001-Generating velocity tuning by asymmetric recurrent connections

9 0.47997573 160 nips-2001-Reinforcement Learning and Time Perception -- a Model of Animal Experiments

10 0.46549883 197 nips-2001-Why Neuronal Dynamics Should Control Synaptic Learning Rules

11 0.46184087 87 nips-2001-Group Redundancy Measures Reveal Redundancy Reduction in the Auditory Pathway

12 0.4526884 166 nips-2001-Self-regulation Mechanism of Temporally Asymmetric Hebbian Plasticity

13 0.4509795 23 nips-2001-A theory of neural integration in the head-direction system

14 0.42862472 131 nips-2001-Neural Implementation of Bayesian Inference in Population Codes

15 0.42224073 49 nips-2001-Citcuits for VLSI Implementation of Temporally Asymmetric Hebbian Learning

16 0.36853856 42 nips-2001-Bayesian morphometry of hippocampal cells suggests same-cell somatodendritic repulsion

17 0.35807106 11 nips-2001-A Maximum-Likelihood Approach to Modeling Multisensory Enhancement

18 0.35645503 96 nips-2001-Information-Geometric Decomposition in Spike Analysis

19 0.3342478 48 nips-2001-Characterizing Neural Gain Control using Spike-triggered Covariance

20 0.31719571 165 nips-2001-Scaling Laws and Local Minima in Hebbian ICA


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(14, 0.032), (17, 0.013), (19, 0.047), (27, 0.127), (30, 0.057), (38, 0.398), (59, 0.019), (72, 0.042), (74, 0.011), (79, 0.041), (83, 0.019), (91, 0.116)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.97992343 80 nips-2001-Generalizable Relational Binding from Coarse-coded Distributed Representations

Author: Randall C. O'Reilly, R. S. Busby

Abstract: We present a model of binding of relationship information in a spatial domain (e.g., square above triangle) that uses low-order coarse-coded conjunctive representations instead of more popular temporal synchrony mechanisms. Supporters of temporal synchrony argue that conjunctive representations lack both efficiency (i.e., combinatorial numbers of units are required) and systematicity (i.e., the resulting representations are overly specific and thus do not support generalization to novel exemplars). To counter these claims, we show that our model: a) uses far fewer hidden units than the number of conjunctions represented, by using coarse-coded, distributed representations where each unit has a broad tuning curve through high-dimensional conjunction space, and b) is capable of considerable generalization to novel inputs.

2 0.93048614 19 nips-2001-A Rotation and Translation Invariant Discrete Saliency Network

Author: Lance R. Williams, John W. Zweck

Abstract: We describe a neural network which enhances and completes salient closed contours. Our work is different from all previous work in three important ways. First, like the input provided to V1 by LGN, the input to our computation is isotropic. That is, the input is composed of spots not edges. Second, our network computes a well defined function of the input based on a distribution of closed contours characterized by a random process. Third, even though our computation is implemented in a discrete network, its output is invariant to continuous rotations and translations of the input pattern.

same-paper 3 0.91522926 72 nips-2001-Exact differential equation population dynamics for integrate-and-fire neurons

Author: Julian Eggert, Berthold Bäuml


4 0.63476747 12 nips-2001-A Model of the Phonological Loop: Generalization and Binding

Author: Randall C. O'Reilly, R. Soto

Abstract: We present a neural network model that shows how the prefrontal cortex, interacting with the basal ganglia, can maintain a sequence of phonological information in activation-based working memory (i.e., the phonological loop). The primary function of this phonological loop may be to transiently encode arbitrary bindings of information necessary for tasks - the combinatorial expressive power of language enables very flexible binding of essentially arbitrary pieces of information. Our model takes advantage of the closed-class nature of phonemes, which allows different neural representations of all possible phonemes at each sequential position to be encoded. To make this work, we suggest that the basal ganglia provide a region-specific update signal that allocates phonemes to the appropriate sequential coding slot. To demonstrate that flexible, arbitrary binding of novel sequences can be supported by this mechanism, we show that the model can generalize to novel sequences after moderate amounts of training. 1

5 0.61940366 27 nips-2001-Activity Driven Adaptive Stochastic Resonance

Author: Gregor Wenning, Klaus Obermayer

Abstract: Cortical neurons might be considered as threshold elements integrating in parallel many excitatory and inhibitory inputs. Due to the apparent variability of cortical spike trains this yields a strongly fluctuating membrane potential, such that threshold crossings are highly irregular. Here we study how a neuron could maximize its sensitivity w.r.t. a relatively small subset of excitatory input. Weak signals embedded in fluctuations is the natural realm of stochastic resonance. The neuron's response is described in a hazard-function approximation applied to an Ornstein-Uhlenbeck process. We analytically derive an optimality criterium and give a learning rule for the adjustment of the membrane fluctuations, such that the sensitivity is maximal exploiting stochastic resonance. We show that adaptation depends only on quantities that could easily be estimated locally (in space and time) by the neuron. The main results are compared with simulations of a biophysically more realistic neuron model. 1

6 0.59534454 65 nips-2001-Effective Size of Receptive Fields of Inferior Temporal Visual Cortex Neurons in Natural Scenes

7 0.59073055 57 nips-2001-Correlation Codes in Neuronal Populations

8 0.58623856 29 nips-2001-Adaptive Sparseness Using Jeffreys Prior

9 0.5828464 54 nips-2001-Contextual Modulation of Target Saliency

10 0.54451174 182 nips-2001-The Fidelity of Local Ordinal Encoding

11 0.54377472 52 nips-2001-Computing Time Lower Bounds for Recurrent Sigmoidal Neural Networks

12 0.54080582 110 nips-2001-Learning Hierarchical Structures with Linear Relational Embedding

13 0.5397293 46 nips-2001-Categorization by Learning and Combining Object Parts

14 0.53007442 131 nips-2001-Neural Implementation of Bayesian Inference in Population Codes

15 0.52688277 197 nips-2001-Why Neuronal Dynamics Should Control Synaptic Learning Rules

16 0.52570975 157 nips-2001-Rates of Convergence of Performance Gradient Estimates Using Function Approximation and Bias in Reinforcement Learning

17 0.52464509 37 nips-2001-Associative memory in realistic neuronal networks

18 0.52119577 161 nips-2001-Reinforcement Learning with Long Short-Term Memory

19 0.52025104 75 nips-2001-Fast, Large-Scale Transformation-Invariant Clustering

20 0.51001221 141 nips-2001-Orientation-Selective aVLSI Spiking Neurons