nips nips2001 nips2001-27 knowledge-graph by maker-knowledge-mining

27 nips-2001-Activity Driven Adaptive Stochastic Resonance


Source: pdf

Author: Gregor Wenning, Klaus Obermayer

Abstract: Cortical neurons might be considered as threshold elements integrating in parallel many excitatory and inhibitory inputs. Due to the apparent variability of cortical spike trains this yields a strongly fluctuating membrane potential, such that threshold crossings are highly irregular. Here we study how a neuron could maximize its sensitivity w.r.t. a relatively small subset of excitatory input. Weak signals embedded in fluctuations are the natural realm of stochastic resonance. The neuron's response is described in a hazard-function approximation applied to an Ornstein-Uhlenbeck process. We analytically derive an optimality criterion and give a learning rule for the adjustment of the membrane fluctuations, such that the sensitivity is maximal exploiting stochastic resonance. We show that adaptation depends only on quantities that could easily be estimated locally (in space and time) by the neuron. The main results are compared with simulations of a biophysically more realistic neuron model. 1

Reference: text


Summary: the most important sentences generated by tfidf model

sentIndex sentText sentNum sentScore

1 Abstract Cortical neurons might be considered as threshold elements integrating in parallel many excitatory and inhibitory inputs. [sent-4, score-0.212]

2 Due to the apparent variability of cortical spike trains this yields a strongly fluctuating membrane potential, such that threshold crossings are highly irregular. [sent-5, score-0.555]

3 Here we study how a neuron could maximize its sensitivity w.r.t. a relatively small subset of excitatory input. [sent-6, score-0.468]

4 Weak signals embedded in fluctuations are the natural realm of stochastic resonance. [sent-10, score-0.175]

5 The neuron's response is described in a hazard-function approximation applied to an Ornstein-Uhlenbeck process. [sent-11, score-0.03]

6 We analytically derive an optimality criterion and give a learning rule for the adjustment of the membrane fluctuations, such that the sensitivity is maximal exploiting stochastic resonance. [sent-12, score-0.487]

7 We show that adaptation depends only on quantities that could easily be estimated locally (in space and time) by the neuron. [sent-13, score-0.21]

8 The main results are compared with simulations of a biophysically more realistic neuron model. [sent-14, score-0.531]

9 inputs, which on their own are not capable of driving a neuron, play an important role in information processing. [sent-17, score-0.493]

10 This implies that measures must be taken such that the relevant information contained in the inputs is amplified in order to be transmitted. [sent-18, score-0.133]

11 One way to increase the sensitivity of a threshold device is the addition of noise. [sent-19, score-0.184]

12 This phenomenon is called stochastic resonance (see [3] for a review), and has already been investigated and experimentally demonstrated in the context of neural systems (e. [sent-20, score-0.347]

13 The optimal noise level, however, depends on the distribution of the input signals, hence neurons must adapt their internal noise levels when the statistics of the input change. [sent-23, score-0.767]

14 Here we derive and explore an activity-dependent learning rule which is intuitive and which only depends on quantities (input and output rates) which a neuron could - in principle - estimate. [sent-24, score-0.715]

15 In section 2 we describe the neuron model and we introduce the membrane potential dynamics in its hazard function approximation. [sent-26, score-0.684]

16 In section 3 we characterize stochastic resonance in this model system and we calculate the optimal noise level as a function of the input and output rates. [sent-27, score-0.859]

17 Section 5 contains a comparison to the results from a biophysically more realistic neuron model. [sent-29, score-0.531]

18 [Figure 1a residue] A "signal" spike train with rate λs. [sent-34, score-0.08]

19 2N balanced Poisson spike trains with rates λn. [sent-45, score-0.482]

20 Figure 1: a) The basic model setup (x-axis of panel b: average membrane potential). [sent-51, score-0.356]

21 b) A family of Arrhenius-type hazard functions for different noise levels. [sent-53, score-0.261]

22 1 corresponds to the threshold and values below 1 are subthreshold. [sent-54, score-0.14]

23 a "signal" input, which we assume to be a Poisson distributed spike train with a rate λs. [sent-55, score-0.309]

24 The rate λs is low enough, so that the membrane potential V of the neuron remains sub-threshold and no output spikes are generated. [sent-56, score-0.997]

25 For the following we assume that the information the input and output of the neuron convey is coded by its input and output rates λs and λo only. [sent-57, score-1.086]

26 Sensitivity is then increased by adding 2N balanced excitatory and inhibitory "noise" inputs (N inputs each) with rates λn and Poisson distributed spikes. [sent-58, score-0.752]

27 Balanced inputs [5, 6] were chosen because they do not affect the average membrane potential and allow one to separate the effect of decreasing the distance of the neuron's operating point to the threshold potential from the effect of increasing the variance of the noise. [sent-59, score-0.753]

28 Signal and noise inputs are coupled to the neuron via synaptic weights ws and wn, respectively. [sent-60, score-1.049]

29 Without loss of generality the membrane time-constant, the neuron's resting potential, and the neuron's threshold are set to one, zero, and one, respectively. [sent-62, score-1.142]

30 If the total rate 2Nλn of incoming spikes on the "noise" channel is large and the individual coupling constants wn are small, the dynamics of the membrane potential can be approximated by an Ornstein-Uhlenbeck process, dV = (−V + μ) dt + σ dW, (1) [sent-63, score-0.445]

31 where μ = ws λs and σ² = ws² λs + 2N wn² λn, and where dW describes a Gaussian noise process with mean zero and variance one [8]. [sent-66, score-0.266]

32 Spike initiation is included by inserting an absorbing boundary with reset. [sent-67, score-0.126]
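
To make the diffusion approximation concrete, the following is a minimal Euler-Maruyama sketch of eq. (1) with an absorbing threshold at 1 and reset to 0; the numerical values of ws, λs, N, wn, λn, dt and T are assumptions chosen for illustration, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Drift and noise amplitude of the balanced-input setup:
# mu = ws*lambda_s,  sigma^2 = ws^2*lambda_s + 2*N*wn^2*lambda_n.
w_s, lam_s = 0.1, 5.0            # assumed signal weight and rate
N, w_n, lam_n = 100, 0.05, 5.0   # assumed balanced "noise" inputs
mu = w_s * lam_s
sigma2 = w_s**2 * lam_s + 2 * N * w_n**2 * lam_n

# Euler-Maruyama integration of dV = (-V + mu) dt + sigma dW; membrane
# time constant, resting potential and threshold are 1, 0 and 1.
dt, T = 1e-3, 100.0
V, spikes = 0.0, []
for step in range(int(T / dt)):
    V += (-V + mu) * dt + np.sqrt(sigma2 * dt) * rng.standard_normal()
    if V >= 1.0:                 # absorbing boundary: record a spike, reset
        spikes.append(step * dt)
        V = 0.0

print("output rate lambda_o:", len(spikes) / T)
```

Since μ = ws λs < 1, the drift alone never reaches the threshold; every output spike in this sketch is noise-assisted, which is exactly the regime in which stochastic resonance operates.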

33 Equation (1) can be solved analytically for special cases [8], but here we opt for a more versatile approximation (cf. [sent-68, score-0.151]

34 In this approximation, the probability of crossing the threshold, which is proportional to the instantaneous output rate of the neuron, is described by an effective transfer function. [sent-70, score-0.789]

35 Figure 1 b) shows a family of Arrhenius-type transfer functions for different noise levels σ. [sent-74, score-0.29]
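
A sketch of such a family of transfer functions. The paper's exact Arrhenius form is not reproduced in this summary; the factor 2 in the exponent below is an assumption, chosen so that the resulting optimal noise level matches eq. (5).

```python
import numpy as np

def arrhenius_rate(v_mean, sigma2, tau0=1.0):
    # Escape rate over the barrier between the mean membrane potential and
    # the threshold (set to 1); the barrier is clipped at 0 above threshold.
    barrier = np.maximum(1.0 - v_mean, 0.0)
    return np.exp(-2.0 * barrier**2 / sigma2) / tau0

v_mean = np.linspace(0.0, 1.2, 7)       # values below 1 are subthreshold
for sigma2 in (0.5, 1.0, 2.0, 4.0):     # a family of noise levels, cf. fig. 1b
    print(sigma2, np.round(arrhenius_rate(v_mean, sigma2), 3))
```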

36 3 Stochastic Resonance in an Ornstein-Uhlenbeck Neuron. Several measures can be used to quantify the impact of noise on the quality of signal transmission through threshold devices. [sent-75, score-0.373]

37 A natural choice is the mutual information [9] between the distributions p(λs) and p(λo) of input and output rates, which we will discuss in section 4, see also figure 3f. [sent-76, score-0.295]
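
The mutual information between p(λs) and p(λo) can be estimated from sampled rate pairs; a minimal histogram-based (plug-in) estimator is sketched below. This helper is an assumption for illustration, not the estimator used in the paper.

```python
import numpy as np

def mutual_information(x, y, bins=20):
    # Plug-in estimate of I(X;Y) in bits from paired samples, via 2-D binning.
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)  # marginal p(x)
    py = pxy.sum(axis=0, keepdims=True)  # marginal p(y)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px * py)[nz])))

# usage: I = mutual_information(lam_s_samples, lam_o_samples)
```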

38 In order to keep the analysis and the derivation of the learning rule simple, however, we first consider a scenario in which a neuron should distinguish between two sub-threshold input rates λs and λs + Δs. [sent-77, score-0.885]

39 Optimal distinguishability is achieved if the difference Δo of the corresponding output rates is maximal, i. [sent-78, score-0.365]

40 if Δo = f(λs + Δs) − f(λs) = max, (3) where f is the transfer function given by eq. [sent-80, score-0.1]

41 Obviously there is a close connection between these two measures, because increasing both of them leads to an increase in the entropy of p(λo). [sent-82, score-0.034]

42 σ² for two different base rates λs = 2 (left) and 7 (right) and 10 different values of Δs = 0. [sent-97, score-0.578]

43 σ² is given in percent of the maximum σ² = 2N wn² λn. [sent-104, score-0.329]

44 (3), the arrowheads below the x-axis indicate the optimal value computed using eq. [sent-106, score-0.069]

45 All curves show a clear maximum at a particular noise level. [sent-112, score-0.19]

46 The optimal noise level increases with decreasing input rate λs, but is roughly independent of the difference Δs as long as Δs is small. [sent-113, score-0.598]

47 Therefore, one optimal noise level holds even if a neuron has to distinguish several sub-threshold input rates - as long as these rates are clustered around a given base rate λs. [sent-114, score-1.851]

48 The optimal noise level for constant λs (stationary states) is given by the condition (d/dσ²)(f(λs + Δs) − f(λs)) = 0, (4) where f is given by eq. [sent-115, score-0.374]

49 We obtain σ²_opt = 2(1 − ws λs)², (5) if the main part of the variance of the membrane potential is a result of the balanced noise inputs. [sent-118, score-0.653]

50 This shows that the optimal noise level depends either only on λs or on λo(λs; σ²); both are quantities which are locally available at the cell. [sent-127, score-0.48]
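
A quick numerical sanity check of eq. (5), under the Arrhenius form assumed above: sweep σ², locate the argmax of Δo = f(λs + Δs) − f(λs), and compare with 2(1 − ws λs)². The values of ws, λs and Δs are assumptions.

```python
import numpy as np

def f(lam_s, sigma2, w_s=0.1):
    # Arrhenius-type transfer function, same assumed form as above.
    return np.exp(-2.0 * (1.0 - w_s * lam_s)**2 / sigma2)

w_s, lam_s, d_s = 0.1, 2.0, 0.1          # assumed illustrative values
sigma2 = np.linspace(0.05, 5.0, 5000)
delta_o = f(lam_s + d_s, sigma2, w_s) - f(lam_s, sigma2, w_s)

print("numeric argmax of delta_o:", sigma2[np.argmax(delta_o)])
print("eq. (5), 2*(1 - ws*ls)^2 :", 2.0 * (1.0 - w_s * lam_s)**2)
```

The two values coincide in the limit Δs → 0; for the small Δs used here they agree to within a few percent.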

51 4 Adaptive Stochastic Resonance. We now consider the case that a neuron needs to adapt its internal noise level because the base input rate λs changes. [sent-128, score-1.201]

52 A simple learning rule which converges to the optimal noise level is given by Δσ² = −ε σ² log(σ²/σ²_opt), (6) where the learning parameter ε determines the time-scale of adaptation. [sent-129, score-0.651]

53 Inserting the corresponding expressions for the actual and the optimal variance we obtain a learning rule for the weights wn, Δwn = −ε log(2N λn wn² / (2(1 − ws λs)²)). (7) [sent-130, score-0.227]

54 Note that equivalent learning rules (in the sense of eq. [sent-131, score-0.164]

55 (6)) can be formulated for the number N of the noise inputs and for their rates λn as well. [sent-132, score-0.549]

56 (6) and (7) depend only on quantities which are locally available at the neuron. [sent-137, score-0.106]
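
A sketch of the resulting adaptation loop, eqs. (6)-(7): the noise weight wn is nudged until the actual variance 2N λn wn² matches σ²_opt = 2(1 − ws λs)². The constants, the initial wn and the learning parameter ε below are assumptions; ε must be small enough that wn stays positive.

```python
import numpy as np

N, lam_n, w_s = 100, 5.0, 0.1    # assumed network constants
eps = 0.005                      # learning parameter (time-scale of adaptation)
w_n = 0.2                        # assumed initial noise coupling

for lam_s in (2.0, 7.0, 4.0):    # three different base input rates, cf. fig. 3
    for _ in range(1000):
        sigma2 = 2 * N * lam_n * w_n**2           # actual noise variance
        sigma2_opt = 2 * (1 - w_s * lam_s)**2     # optimum, eq. (5)
        w_n -= eps * np.log(sigma2 / sigma2_opt)  # update, eq. (7)
    print(lam_s, round(2 * N * lam_n * w_n**2, 3), round(sigma2_opt, 3))
```

The log term changes sign exactly at σ² = σ²_opt, so for sufficiently small ε the rule has a single stable fixed point at the optimal noise level.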

57 Fig. 3a,b shows the stochastic adaptation of the noise level, using eq. [sent-139, score-0.381]

58 randomly distributed λs which are clustered around a base rate. [sent-140, score-0.374]

59 (7) to an Ornstein-Uhlenbeck neuron whose noise level needs to adapt to three different base input rates. [sent-143, score-1.08]

60 (7) is shown (solid line), for comparison the wn which maximizes eq. [sent-148, score-0.062]

61 Mutual information was calculated between a distribution of randomly chosen input rates which are clustered around the base rate λs. [sent-150, score-0.819]

62 The wn that maximizes mutual information between input and output rates is displayed in fig. [sent-151, score-0.665]

63 3e shows the ratio Δo/Δs computed by using eq. [sent-154, score-0.035]

64 (8) (dash-dotted line) and the same ratio for the quadratic approximation. [sent-156, score-0.158]

65 3f shows the mutual information between the input and output rates as a function of the changing wn. [sent-158, score-0.555]

66 [Figure 3 residue: time axes in update steps] Figure 3: a) Input rates λs are evenly distributed around a base rate with width 0. [sent-165, score-0.669]

67 Adaptation of the noise level to three different input base rates λs. [sent-169, score-0.927]

68 (7) (solid line), the optimal wn that maximizes eq. [sent-172, score-0.131]

69 (3) (dash-dotted line) and the optimal wn that maximizes the mutual information between the input and output rates (dashed). [sent-173, score-0.847]

70 The optimal values of wn as the quadratic approximation, eq. [sent-174, score-0.141]

71 (3) (dash-dotted line) and the quadratic approximation (solid line). [sent-181, score-0.153]

72 f) Mutual information between input and output rates as a function of base rate and changing synaptic coupling constant wn. [sent-182, score-0.98]

73 For calculating the mutual information the input rates were chosen randomly from the interval [λs - 0. [sent-183, score-0.45]

74 The figure shows that the learning rule, eq. [sent-188, score-0.087]

75 (7) in the quadratic approximation leads to values for σ which are near-optimal, and that optimizing the difference of output rates leads to results similar to the optimization of the mutual information. [sent-189, score-0.612]

76 5 Conductance-based Model Neuron. To check if and how the results from the abstract model carry over to a biophysically more realistic one, we explore a modified Hodgkin-Huxley point neuron with an additional A-current (a slow potassium current) as in [11]. [sent-190, score-0.633]

77 The dynamics of the membrane potential V are described by the following equation: C dV/dt = −gL(V(t) − EL) − … [sent-191, score-0.392]

78 … − ḡA a(t) b(t) (V − EK) + Isyn + Iapp, (8) where the parameters can be found in the appendix. [sent-194, score-0.078]
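
For orientation, a minimal Euler integration of a standard Hodgkin-Huxley point neuron is sketched below. It is a stand-in, not the paper's model: the paper's neuron additionally carries the slow potassium A-current ḡA a(t) b(t) (V − EK) of eq. (8), whose parameters live in the paper's appendix and are not reproduced here; the rate functions and constants below are textbook HH values.

```python
import numpy as np

# Standard HH parameters (mV, ms, uF/cm^2, mS/cm^2).
C = 1.0
gNa, gK, gL = 120.0, 36.0, 0.3
ENa, EK, EL = 50.0, -77.0, -54.4

def am(V): return 0.1 * (V + 40) / (1 - np.exp(-(V + 40) / 10))
def bm(V): return 4.0 * np.exp(-(V + 65) / 18)
def ah(V): return 0.07 * np.exp(-(V + 65) / 20)
def bh(V): return 1.0 / (1 + np.exp(-(V + 35) / 10))
def an(V): return 0.01 * (V + 55) / (1 - np.exp(-(V + 55) / 10))
def bn(V): return 0.125 * np.exp(-(V + 65) / 80)

dt, T, Iapp = 0.01, 100.0, 10.0          # step (ms), duration (ms), current
V, m, h, n = -65.0, 0.05, 0.6, 0.32      # rest state and gating variables
for _ in range(int(T / dt)):
    INa = gNa * m**3 * h * (V - ENa)     # transient sodium current
    IK = gK * n**4 * (V - EK)            # delayed-rectifier potassium current
    IL = gL * (V - EL)                   # leak current
    V += dt * (-(INa + IK + IL) + Iapp) / C
    m += dt * (am(V) * (1 - m) - bm(V) * m)
    h += dt * (ah(V) * (1 - h) - bh(V) * h)
    n += dt * (an(V) * (1 - n) - bn(V) * n)
print("V after", T, "ms:", V)
```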


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('neuron', 0.394), ('wn', 0.364), ('rates', 0.26), ('membrane', 0.244), ('base', 0.215), ('noise', 0.19), ('resonance', 0.182), ('ws', 0.164), ('ao', 0.144), ('cent', 0.123), ('ual', 0.123), ('level', 0.115), ('potential', 0.112), ('input', 0.111), ('threshold', 0.11), ('output', 0.105), ('adaptation', 0.104), ('cr', 0.103), ('transfer', 0.1), ('inputs', 0.099), ('biophysically', 0.091), ('opt', 0.091), ('balanced', 0.09), ('stochastic', 0.087), ('arrhenius', 0.082), ('rule', 0.082), ('clustered', 0.081), ('rate', 0.08), ('mutual', 0.079), ('spike', 0.078), ('sensitivity', 0.074), ('dotted', 0.073), ('poisson', 0.072), ('embrane', 0.071), ('hazard', 0.071), ('dashed', 0.071), ('optimal', 0.069), ('maximizes', 0.062), ('spikes', 0.062), ('quantities', 0.061), ('excitatory', 0.06), ('adapt', 0.055), ('fig', 0.054), ('dw', 0.054), ('inserting', 0.054), ('trains', 0.054), ('line', 0.053), ('fluctuations', 0.052), ('quadratic', 0.05), ('ek', 0.05), ('coupling', 0.048), ('displayed', 0.048), ('realistic', 0.046), ('locally', 0.045), ('ut', 0.044), ('variance', 0.043), ('inhibitory', 0.042), ('la', 0.042), ('lin', 0.041), ('internal', 0.041), ('distributed', 0.04), ('activity', 0.04), ('phenomenon', 0.039), ('experimentally', 0.039), ('signal', 0.039), ('synaptic', 0.038), ('distinguish', 0.038), ('solid', 0.038), ('around', 0.038), ('dt', 0.036), ('dynamics', 0.036), ('conductance', 0.036), ('conductances', 0.036), ('ena', 0.036), ('hree', 0.036), ('absorbing', 0.036), ('ate', 0.036), ('carryover', 0.036), ('initiation', 0.036), ('oby', 0.036), ('realm', 0.036), ('signature', 0.036), ('syn', 0.036), ('cortical', 0.036), ('ratio', 0.035), ('calculated', 0.034), ('measures', 0.034), ('berlin', 0.034), ('explore', 0.033), ('decreasing', 0.033), ('potassium', 0.033), ('klaus', 0.033), ('ure', 0.033), ('ean', 0.033), ('fluctuating', 0.033), ('og', 0.033), ('subthreshold', 0.03), ('versatile', 0.03), ('concludes', 0.03), ('approximation', 0.03)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000005 27 nips-2001-Activity Driven Adaptive Stochastic Resonance

Author: Gregor Wenning, Klaus Obermayer

Abstract: Cortical neurons might be considered as threshold elements integrating in parallel many excitatory and inhibitory inputs. Due to the apparent variability of cortical spike trains this yields a strongly fluctuating membrane potential, such that threshold crossings are highly irregular. Here we study how a neuron could maximize its sensitivity w.r.t. a relatively small subset of excitatory input. Weak signals embedded in fluctuations are the natural realm of stochastic resonance. The neuron's response is described in a hazard-function approximation applied to an Ornstein-Uhlenbeck process. We analytically derive an optimality criterion and give a learning rule for the adjustment of the membrane fluctuations, such that the sensitivity is maximal exploiting stochastic resonance. We show that adaptation depends only on quantities that could easily be estimated locally (in space and time) by the neuron. The main results are compared with simulations of a biophysically more realistic neuron model. 1

2 0.24862722 72 nips-2001-Exact differential equation population dynamics for integrate-and-fire neurons

Author: Julian Eggert, Berthold Bäuml

Abstract: Mesoscopical, mathematical descriptions of dynamics of populations of spiking neurons are getting increasingly important for the understanding of large-scale processes in the brain using simulations. In our previous work, integral equation formulations for population dynamics have been derived for a special type of spiking neurons. For Integrate-and-Fire type neurons, these formulations were only approximately correct. Here, we derive a mathematically compact, exact population dynamics formulation for Integrate-and-Fire type neurons. It can be shown quantitatively in simulations that the numerical correspondence with microscopically modeled neuronal populations is excellent. 1 Introduction and motivation The goal of the population dynamics approach is to model the time course of the collective activity of entire populations of functionally and dynamically similar neurons in a compact way, using a higher descriptional level than that of single neurons and spikes. The usual observable at the level of neuronal populations is the population-averaged instantaneous firing rate A(t), with A(t)Δt being the number of neurons in the population that release a spike in an interval [t, t+Δt). Population dynamics are formulated in such a way that they match quantitatively the time course of a given A(t), either gained experimentally or by microscopical, detailed simulation. At least three main reasons can be formulated which underline the importance of the population dynamics approach for computational neuroscience. First, it enables the simulation of extensive networks involving a massive number of neurons and connections, which is typically the case when dealing with biologically realistic functional models that go beyond the single neuron level. Second, it increases the analytical understanding of large-scale neuronal dynamics, opening the way towards better control and predictive capabilities when dealing with large networks. Third, it enables a systematic embedding of the numerous neuronal models operating at different descriptional scales into a generalized theoretic framework, explaining the relationships, dependencies and derivations of the respective models. Early efforts on population dynamics approaches date back as early as 1972, to the work of Wilson and Cowan [8] and Knight [4], which laid the basis for all current population-averaged graded-response models (see e.g. [6] for modeling work using these models). More recently, population-based approaches for spiking neurons were developed, mainly by Gerstner [3, 2] and Knight [5]. In our own previous work [1], we have developed a theoretical framework which enables to systematize and simulate a wide range of models for population-based dynamics. It was shown that the equations of the framework produce results that agree quantitatively well with detailed simulations using spiking neurons, so that they can be used for realistic simulations involving networks with large numbers of spiking neurons. Nevertheless, for neuronal populations composed of Integrate-and-Fire (I&F) neurons, this framework was only correct in an approximation. In this paper, we derive the exact population dynamics formulation for I&F neurons. This is achieved by reducing the I&F population dynamics to a point process and by taking advantage of the particular properties of I&F neurons.

3 0.1913249 2 nips-2001-3 state neurons for contextual processing

Author: Ádám Kepecs, S. Raghavachari

Abstract: Neurons receive excitatory inputs via both fast AMPA and slow NMDA type receptors. We find that neurons receiving input via NMDA receptors can have two stable membrane states which are input dependent. Action potentials can only be initiated from the higher voltage state. Similar observations have been made in several brain areas which might be explained by our model. The interactions between the two kinds of inputs lead us to suggest that some neurons may operate in 3 states: disabled, enabled and firing. Such enabled, but non-firing modes can be used to introduce context-dependent processing in neural networks. We provide a simple example and discuss possible implications for neuronal processing and response variability. 1

4 0.16822392 141 nips-2001-Orientation-Selective aVLSI Spiking Neurons

Author: Shih-Chii Liu, Jörg Kramer, Giacomo Indiveri, Tobi Delbrück, Rodney J. Douglas

Abstract: We describe a programmable multi-chip VLSI neuronal system that can be used for exploring spike-based information processing models. The system consists of a silicon retina, a PIC microcontroller, and a transceiver chip whose integrate-and-fire neurons are connected in a soft winner-take-all architecture. The circuit on this multi-neuron chip approximates a cortical microcircuit. The neurons can be configured for different computational properties by the virtual connections of a selected set of pixels on the silicon retina. The virtual wiring between the different chips is effected by an event-driven communication protocol that uses asynchronous digital pulses, similar to spikes in a neuronal system. We used the multi-chip spike-based system to synthesize orientation-tuned neurons using both a feedforward model and a feedback model. The performance of our analog hardware spiking model matched the experimental observations and digital simulations of continuous-valued neurons. The multi-chip VLSI system has advantages over computer neuronal models in that it is real-time, and the computational time does not scale with the size of the neuronal network.

5 0.136766 37 nips-2001-Associative memory in realistic neuronal networks

Author: Peter E. Latham

Abstract: Almost two decades ago, Hopfield [1] showed that networks of highly reduced model neurons can exhibit multiple attracting fixed points, thus providing a substrate for associative memory. It is still not clear, however, whether realistic neuronal networks can support multiple attractors. The main difficulty is that neuronal networks in vivo exhibit a stable background state at low firing rate, typically a few Hz. Embedding attractors is easy; doing so without destabilizing the background is not. Previous work [2, 3] focused on the sparse coding limit, in which a vanishingly small number of neurons are involved in any memory. Here we investigate the case in which the number of neurons involved in a memory scales with the number of neurons in the network. In contrast to the sparse coding limit, we find that multiple attractors can co-exist robustly with a stable background state. Mean field theory is used to understand how the behavior of the network scales with its parameters, and simulations with analog neurons are presented. One of the most important features of the nervous system is its ability to perform associative memory. It is generally believed that associative memory is implemented using attractor networks - experimental studies point in that direction [4-7], and there are virtually no competing theoretical models. Perhaps surprisingly, however, it is still an open theoretical question whether attractors can exist in realistic neuronal networks. The

6 0.13225819 197 nips-2001-Why Neuronal Dynamics Should Control Synaptic Learning Rules

7 0.10924076 174 nips-2001-Spike timing and the coding of naturalistic sounds in a central auditory area of songbirds

8 0.099558488 152 nips-2001-Prodding the ROC Curve: Constrained Optimization of Classifier Performance

9 0.093349315 11 nips-2001-A Maximum-Likelihood Approach to Modeling Multisensory Enhancement

10 0.092173457 166 nips-2001-Self-regulation Mechanism of Temporally Asymmetric Hebbian Plasticity

11 0.087572277 137 nips-2001-On the Convergence of Leveraging

12 0.086775884 38 nips-2001-Asymptotic Universality for Learning Curves of Support Vector Machines

13 0.078684255 160 nips-2001-Reinforcement Learning and Time Perception -- a Model of Animal Experiments

14 0.076708727 49 nips-2001-Citcuits for VLSI Implementation of Temporally Asymmetric Hebbian Learning

15 0.073488839 167 nips-2001-Semi-supervised MarginBoost

16 0.070236228 48 nips-2001-Characterizing Neural Gain Control using Spike-triggered Covariance

17 0.068244644 57 nips-2001-Correlation Codes in Neuronal Populations

18 0.065953366 23 nips-2001-A theory of neural integration in the head-direction system

19 0.065446548 4 nips-2001-ALGONQUIN - Learning Dynamic Noise Models From Noisy Speech for Robust Speech Recognition

20 0.062791415 131 nips-2001-Neural Implementation of Bayesian Inference in Population Codes


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.192), (1, -0.225), (2, -0.125), (3, 0.078), (4, 0.112), (5, 0.029), (6, 0.176), (7, -0.055), (8, -0.059), (9, -0.05), (10, 0.064), (11, -0.086), (12, 0.05), (13, -0.009), (14, -0.029), (15, -0.058), (16, 0.017), (17, -0.039), (18, -0.029), (19, -0.05), (20, -0.013), (21, -0.121), (22, -0.063), (23, 0.059), (24, 0.018), (25, 0.042), (26, 0.096), (27, -0.104), (28, -0.083), (29, 0.181), (30, -0.018), (31, -0.117), (32, 0.026), (33, -0.051), (34, -0.063), (35, -0.159), (36, -0.061), (37, 0.213), (38, -0.037), (39, 0.083), (40, 0.1), (41, -0.112), (42, -0.124), (43, -0.097), (44, 0.141), (45, -0.133), (46, 0.073), (47, -0.012), (48, -0.024), (49, -0.019)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.9756366 27 nips-2001-Activity Driven Adaptive Stochastic Resonance

Author: Gregor Wenning, Klaus Obermayer

Abstract: Cortical neurons might be considered as threshold elements integrating in parallel many excitatory and inhibitory inputs. Due to the apparent variability of cortical spike trains this yields a strongly fluctuating membrane potential, such that threshold crossings are highly irregular. Here we study how a neuron could maximize its sensitivity w.r.t. a relatively small subset of excitatory input. Weak signals embedded in fluctuations are the natural realm of stochastic resonance. The neuron's response is described in a hazard-function approximation applied to an Ornstein-Uhlenbeck process. We analytically derive an optimality criterion and give a learning rule for the adjustment of the membrane fluctuations, such that the sensitivity is maximal exploiting stochastic resonance. We show that adaptation depends only on quantities that could easily be estimated locally (in space and time) by the neuron. The main results are compared with simulations of a biophysically more realistic neuron model. 1

2 0.78427291 2 nips-2001-3 state neurons for contextual processing

Author: Ádám Kepecs, S. Raghavachari

Abstract: Neurons receive excitatory inputs via both fast AMPA and slow NMDA type receptors. We find that neurons receiving input via NMDA receptors can have two stable membrane states which are input dependent. Action potentials can only be initiated from the higher voltage state. Similar observations have been made in several brain areas which might be explained by our model. The interactions between the two kinds of inputs lead us to suggest that some neurons may operate in 3 states: disabled, enabled and firing. Such enabled, but non-firing modes can be used to introduce context-dependent processing in neural networks. We provide a simple example and discuss possible implications for neuronal processing and response variability. 1

3 0.77104682 72 nips-2001-Exact differential equation population dynamics for integrate-and-fire neurons

Author: Julian Eggert, Berthold Bäuml

Abstract: Mesoscopical, mathematical descriptions of dynamics of populations of spiking neurons are getting increasingly important for the understanding of large-scale processes in the brain using simulations. In our previous work, integral equation formulations for population dynamics have been derived for a special type of spiking neurons. For Integrate-and-Fire type neurons, these formulations were only approximately correct. Here, we derive a mathematically compact, exact population dynamics formulation for Integrate-and-Fire type neurons. It can be shown quantitatively in simulations that the numerical correspondence with microscopically modeled neuronal populations is excellent. 1 Introduction and motivation The goal of the population dynamics approach is to model the time course of the collective activity of entire populations of functionally and dynamically similar neurons in a compact way, using a higher descriptional level than that of single neurons and spikes. The usual observable at the level of neuronal populations is the population-averaged instantaneous firing rate A(t), with A(t)Δt being the number of neurons in the population that release a spike in an interval [t, t+Δt). Population dynamics are formulated in such a way that they match quantitatively the time course of a given A(t), either gained experimentally or by microscopical, detailed simulation. At least three main reasons can be formulated which underline the importance of the population dynamics approach for computational neuroscience. First, it enables the simulation of extensive networks involving a massive number of neurons and connections, which is typically the case when dealing with biologically realistic functional models that go beyond the single neuron level. Second, it increases the analytical understanding of large-scale neuronal dynamics, opening the way towards better control and predictive capabilities when dealing with large networks. Third, it enables a systematic embedding of the numerous neuronal models operating at different descriptional scales into a generalized theoretic framework, explaining the relationships, dependencies and derivations of the respective models. Early efforts on population dynamics approaches date back as early as 1972, to the work of Wilson and Cowan [8] and Knight [4], which laid the basis for all current population-averaged graded-response models (see e.g. [6] for modeling work using these models). More recently, population-based approaches for spiking neurons were developed, mainly by Gerstner [3, 2] and Knight [5]. In our own previous work [1], we have developed a theoretical framework which enables to systematize and simulate a wide range of models for population-based dynamics. It was shown that the equations of the framework produce results that agree quantitatively well with detailed simulations using spiking neurons, so that they can be used for realistic simulations involving networks with large numbers of spiking neurons. Nevertheless, for neuronal populations composed of Integrate-and-Fire (I&F) neurons, this framework was only correct in an approximation. In this paper, we derive the exact population dynamics formulation for I&F neurons. This is achieved by reducing the I&F population dynamics to a point process and by taking advantage of the particular properties of I&F neurons.

4 0.54969168 141 nips-2001-Orientation-Selective aVLSI Spiking Neurons

Author: Shih-Chii Liu, Jörg Kramer, Giacomo Indiveri, Tobi Delbrück, Rodney J. Douglas

Abstract: We describe a programmable multi-chip VLSI neuronal system that can be used for exploring spike-based information processing models. The system consists of a silicon retina, a PIC microcontroller, and a transceiver chip whose integrate-and-fire neurons are connected in a soft winner-take-all architecture. The circuit on this multi-neuron chip approximates a cortical microcircuit. The neurons can be configured for different computational properties by the virtual connections of a selected set of pixels on the silicon retina. The virtual wiring between the different chips is effected by an event-driven communication protocol that uses asynchronous digital pulses, similar to spikes in a neuronal system. We used the multi-chip spike-based system to synthesize orientation-tuned neurons using both a feedforward model and a feedback model. The performance of our analog hardware spiking model matched the experimental observations and digital simulations of continuous-valued neurons. The multi-chip VLSI system has advantages over computer neuronal models in that it is real-time, and the computational time does not scale with the size of the neuronal network.

5 0.43454319 197 nips-2001-Why Neuronal Dynamics Should Control Synaptic Learning Rules

Author: Jesper Tegnér, Ádám Kepecs

Abstract: Hebbian learning rules are generally formulated as static rules. Under changing conditions (e.g. neuromodulation, input statistics) most rules are sensitive to parameters. In particular, recent work has focused on two different formulations of spike-timing-dependent plasticity rules. Additive STDP [1] is remarkably versatile but also very fragile, whereas multiplicative STDP [2, 3] is more robust but lacks attractive features such as synaptic competition and rate stabilization. Here we address the problem of robustness in the additive STDP rule. We derive an adaptive control scheme, where the learning function is under fast dynamic control by postsynaptic activity to stabilize learning under a variety of conditions. Such a control scheme can be implemented using known biophysical mechanisms of synapses. We show that this adaptive rule makes the additive STDP more robust. Finally, we give an example of how metaplasticity of the adaptive rule can be used to guide STDP into different types of learning regimes. 1

6 0.42922464 174 nips-2001-Spike timing and the coding of naturalistic sounds in a central auditory area of songbirds

7 0.40092939 11 nips-2001-A Maximum-Likelihood Approach to Modeling Multisensory Enhancement

8 0.39348111 166 nips-2001-Self-regulation Mechanism of Temporally Asymmetric Hebbian Plasticity

9 0.38564655 160 nips-2001-Reinforcement Learning and Time Perception -- a Model of Animal Experiments

10 0.37973699 165 nips-2001-Scaling Laws and Local Minima in Hebbian ICA

11 0.37427405 57 nips-2001-Correlation Codes in Neuronal Populations

12 0.36325696 177 nips-2001-Switch Packet Arbitration via Queue-Learning

13 0.34920567 152 nips-2001-Prodding the ROC Curve: Constrained Optimization of Classifier Performance

14 0.34615156 37 nips-2001-Associative memory in realistic neuronal networks

15 0.33678433 42 nips-2001-Bayesian morphometry of hippocampal cells suggests same-cell somatodendritic repulsion

16 0.31989586 49 nips-2001-Citcuits for VLSI Implementation of Temporally Asymmetric Hebbian Learning

17 0.30737013 26 nips-2001-Active Portfolio-Management based on Error Correction Neural Networks

18 0.28712747 168 nips-2001-Sequential Noise Compensation by Sequential Monte Carlo Method

19 0.27765885 38 nips-2001-Asymptotic Universality for Learning Curves of Support Vector Machines

20 0.27050304 85 nips-2001-Grammar Transfer in a Second Order Recurrent Neural Network


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(4, 0.019), (14, 0.055), (17, 0.025), (19, 0.012), (27, 0.133), (30, 0.098), (38, 0.104), (59, 0.03), (67, 0.012), (72, 0.071), (79, 0.072), (85, 0.145), (91, 0.115)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.91240895 27 nips-2001-Activity Driven Adaptive Stochastic Resonance

Author: Gregor Wenning, Klaus Obermayer

Abstract: Cortical neurons might be considered as threshold elements integrating in parallel many excitatory and inhibitory inputs. Due to the apparent variability of cortical spike trains this yields a strongly fluctuating membrane potential, such that threshold crossings are highly irregular. Here we study how a neuron could maximize its sensitivity w.r.t. a relatively small subset of excitatory input. Weak signals embedded in fluctuations are the natural realm of stochastic resonance. The neuron's response is described in a hazard-function approximation applied to an Ornstein-Uhlenbeck process. We analytically derive an optimality criterion and give a learning rule for the adjustment of the membrane fluctuations, such that the sensitivity is maximal exploiting stochastic resonance. We show that adaptation depends only on quantities that could easily be estimated locally (in space and time) by the neuron. The main results are compared with simulations of a biophysically more realistic neuron model. 1

2 0.87062538 191 nips-2001-Transform-invariant Image Decomposition with Similarity Templates

Author: Chris Stauffer, Erik Miller, Kinh Tieu

Abstract: Recent work has shown impressive transform-invariant modeling and clustering for sets of images of objects with similar appearance. We seek to expand these capabilities to sets of images of an object class that show considerable variation across individual instances (e.g. pedestrian images) using a representation based on pixel-wise similarities, similarity templates. Because of its invariance to the colors of particular components of an object, this representation enables detection of instances of an object class and enables alignment of those instances. Further, this model implicitly represents the regions of color regularity in the class-specific image set enabling a decomposition of that object class into component regions. 1

3 0.81248504 29 nips-2001-Adaptive Sparseness Using Jeffreys Prior

Author: Mário Figueiredo

Abstract: In this paper we introduce a new sparseness inducing prior which does not involve any (hyper)parameters that need to be adjusted or estimated. Although other applications are possible, we focus here on supervised learning problems: regression and classification. Experiments with several publicly available benchmark data sets show that the proposed approach yields state-of-the-art performance. In particular, our method outperforms support vector machines and performs competitively with the best alternative techniques, both in terms of error rates and sparseness, although it involves no tuning or adjusting of sparsenesscontrolling hyper-parameters.

4 0.79384089 72 nips-2001-Exact differential equation population dynamics for integrate-and-fire neurons

Author: Julian Eggert, Berthold Bäuml

Abstract: Mesoscopical, mathematical descriptions of dynamics of populations of spiking neurons are getting increasingly important for the understanding of large-scale processes in the brain using simulations. In our previous work, integral equation formulations for population dynamics have been derived for a special type of spiking neurons. For Integrate-and-Fire type neurons, these formulations were only approximately correct. Here, we derive a mathematically compact, exact population dynamics formulation for Integrate-and-Fire type neurons. It can be shown quantitatively in simulations that the numerical correspondence with microscopically modeled neuronal populations is excellent. 1 Introduction and motivation The goal of the population dynamics approach is to model the time course of the collective activity of entire populations of functionally and dynamically similar neurons in a compact way, using a higher descriptional level than that of single neurons and spikes. The usual observable at the level of neuronal populations is the population-averaged instantaneous firing rate A(t), with A(t)Δt being the number of neurons in the population that release a spike in an interval [t, t+Δt). Population dynamics are formulated in such a way that they match quantitatively the time course of a given A(t), either gained experimentally or by microscopical, detailed simulation. At least three main reasons can be formulated which underline the importance of the population dynamics approach for computational neuroscience. First, it enables the simulation of extensive networks involving a massive number of neurons and connections, which is typically the case when dealing with biologically realistic functional models that go beyond the single neuron level. Second, it increases the analytical understanding of large-scale neuronal dynamics, opening the way towards better control and predictive capabilities when dealing with large networks. Third, it enables a systematic embedding of the numerous neuronal models operating at different descriptional scales into a generalized theoretic framework, explaining the relationships, dependencies and derivations of the respective models. Early efforts on population dynamics approaches date back as early as 1972, to the work of Wilson and Cowan [8] and Knight [4], which laid the basis for all current population-averaged graded-response models (see e.g. [6] for modeling work using these models). More recently, population-based approaches for spiking neurons were developed, mainly by Gerstner [3, 2] and Knight [5]. In our own previous work [1], we have developed a theoretical framework which enables to systematize and simulate a wide range of models for population-based dynamics. It was shown that the equations of the framework produce results that agree quantitatively well with detailed simulations using spiking neurons, so that they can be used for realistic simulations involving networks with large numbers of spiking neurons. Nevertheless, for neuronal populations composed of Integrate-and-Fire (I&F) neurons, this framework was only correct in an approximation. In this paper, we derive the exact population dynamics formulation for I&F neurons. This is achieved by reducing the I&F population dynamics to a point process and by taking advantage of the particular properties of I&F neurons.

5 0.79338187 131 nips-2001-Neural Implementation of Bayesian Inference in Population Codes

Author: Si Wu, Shun-ichi Amari

Abstract: This study investigates a population decoding paradigm, in which the estimation of stimulus in the previous step is used as prior knowledge for consecutive decoding. We analyze the decoding accuracy of such a Bayesian decoder (Maximum a Posteriori Estimate), and show that it can be implemented by a biologically plausible recurrent network, where the prior knowledge of stimulus is conveyed by the change in recurrent interactions as a result of Hebbian learning. 1

6 0.79213673 46 nips-2001-Categorization by Learning and Combining Object Parts

7 0.79033351 19 nips-2001-A Rotation and Translation Invariant Discrete Saliency Network

8 0.78410846 57 nips-2001-Correlation Codes in Neuronal Populations

9 0.77971506 197 nips-2001-Why Neuronal Dynamics Should Control Synaptic Learning Rules

10 0.77552581 52 nips-2001-Computing Time Lower Bounds for Recurrent Sigmoidal Neural Networks

11 0.77502251 110 nips-2001-Learning Hierarchical Structures with Linear Relational Embedding

12 0.77363163 92 nips-2001-Incorporating Invariances in Non-Linear Support Vector Machines

13 0.77147728 13 nips-2001-A Natural Policy Gradient

14 0.77139872 157 nips-2001-Rates of Convergence of Performance Gradient Estimates Using Function Approximation and Bias in Reinforcement Learning

15 0.77035242 162 nips-2001-Relative Density Nets: A New Way to Combine Backpropagation with HMM's

16 0.76917005 77 nips-2001-Fast and Robust Classification using Asymmetric AdaBoost and a Detector Cascade

17 0.76785833 65 nips-2001-Effective Size of Receptive Fields of Inferior Temporal Visual Cortex Neurons in Natural Scenes

18 0.76751828 63 nips-2001-Dynamic Time-Alignment Kernel in Support Vector Machine

19 0.76703894 37 nips-2001-Associative memory in realistic neuronal networks

20 0.76690108 182 nips-2001-The Fidelity of Local Ordinal Encoding