nips nips2000 nips2000-146 knowledge-graph by maker-knowledge-mining

146 nips-2000-What Can a Single Neuron Compute?


Source: pdf

Author: Blaise Agüera y Arcas, Adrienne L. Fairhall, William Bialek

Abstract: In this paper we formulate a description of the computation performed by a neuron as a combination of dimensional reduction and nonlinearity. We implement this description for the Hodgkin-Huxley model, identify the most relevant dimensions and find the nonlinearity. A two dimensional description already captures a significant fraction of the information that spikes carry about dynamic inputs. This description also shows that computation in the Hodgkin-Huxley model is more complex than a simple integrate-and-fire or perceptron model. 1

Reference: text


Summary: the most important sentences generated by tfidf model

sentIndex sentText sentNum sentScore

1 Abstract: In this paper we formulate a description of the computation performed by a neuron as a combination of dimensional reduction and nonlinearity. [sent-7, score-0.323]

2 We implement this description for the Hodgkin-Huxley model, identify the most relevant dimensions and find the nonlinearity. [sent-8, score-0.321]

3 A two dimensional description already captures a significant fraction of the information that spikes carry about dynamic inputs. [sent-9, score-0.565]

4 This description also shows that computation in the Hodgkin-Huxley model is more complex than a simple integrate-and-fire or perceptron model. [sent-10, score-0.125]

5 1 Introduction. Classical neural network models approximate neurons as devices that sum their inputs and generate a nonzero output if the sum exceeds a threshold. [sent-11, score-0.252]

6 From our current state of knowledge in neurobiology it is easy to criticize these models as oversimplified: where is the complex geometry of neurons, or the many different kinds of ion channel, each with its own intricate multistate kinetics? [sent-12, score-0.092]

7 Indeed, progress at this more microscopic level of description has led us to the point where we can write (almost) exact models for the electrical dynamics of neurons, at least on short time scales. [sent-13, score-0.26]

8 These nearly exact models are complicated by any measure, including tens if not hundreds of differential equations to describe the states of different channels in different spatial compartments of the cell. [sent-14, score-0.102]

9 Faced with this detailed microscopic description, we need to answer a question which goes well beyond the biological context: given a continuous dynamical system, what does it compute? [sent-15, score-0.094]

10 Our goal in this paper is to make this question about what a neuron computes somewhat more precise, and then to explore what we take to be the simplest example, namely the Hodgkin-Huxley model [1], [2] (and refs therein). [sent-16, score-0.204]

11 Real neurons take as inputs signals at their synapses and give as outputs sequences of discrete, identical pulses: action potentials or 'spikes'. [sent-18, score-0.257]

12 The inputs themselves are spikes from other neurons, so the neuron is a device which takes N ~ 10^3 pulse trains as inputs and generates one pulse train as output. [sent-19, score-0.641]

13 More realistically, if the average spike rates are ~10 sec^-1, the input words can be compressed by a factor of ten. [sent-21, score-0.644]

14 Thus we might be able to think about neurons as evaluating a Boolean function of roughly 1000 Boolean variables, and then characterizing the computational function of the cell amounts to specifying this Boolean function. [sent-22, score-0.281]

15 The above estimate, though crude, makes clear that there will be no direct empirical attack on the question of what a neuron computes: there are too many possibilities to learn the function by brute force from any reasonable set of experiments. [sent-23, score-0.204]

16 Progress requires the hypothesis that the function computed by a neuron is not arbitrary, but belongs to a simple class. [sent-24, score-0.165]

17 Our suggestion is that this simple class involves functions that vary only over a low dimensional subspace of the inputs, and in fact we will start by searching for linear subspaces. [sent-25, score-0.18]

18 Specifically, we begin by simplifying away the spatial structure of neurons and take inputs to be just injected currents into a point-like neuron. [sent-26, score-0.356]

19 If the input is an injected current, then the neuron maps the history of this current, I(t < t0), into the presence or absence of a spike at time t0. [sent-29, score-0.872]

20 More generally we might imagine that the cell (or our description) is noisy, so that there is a probability of spiking P[spike@t0 | I(t < t0)] which depends on the current history. [sent-30, score-0.201]

21 We emphasize that the dependence on the history of the current means that there still are many dimensions to the input signal even though we have collapsed any spatial variations. [sent-31, score-0.272]

22 If we work at time resolution Δt and assume that currents in a window of size T are relevant to the decision to spike, then the inputs live in a space of D = T/Δt dimensions, of order 100 in many interesting cases. [sent-32, score-0.554]

23 If the neuron is sensitive only to a low-dimensional linear subspace, we can define a set of signals s1, s2, ..., sK by filtering the current, [sent-33, score-0.343]

24 s_mu = ∫ dt f_mu(t) I(t0 - t),   (1) so that the probability of spiking depends only on this finite set of signals, P[spike@t0 | I(t < t0)] = P[spike@t0] g(s1, s2, ..., sK),   (2) [sent-37, score-0.135]

25 where we include the average probability of spiking so that g is dimensionless. [sent-39, score-0.143]
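
To make this dimensional-reduction picture concrete, here is a minimal sketch (ours, in Python/NumPy, not from the paper) of computing the projections in Eq. (1) for a discretely sampled current; the filter shapes f1, f2 and the window length D below are placeholder assumptions, not the paper's filters.

```python
import numpy as np

def stimulus_histories(I, D):
    """Stack the D-sample history of the current preceding each time t0.

    Row i holds I[t0-D+1 .. t0] (newest sample last), so each row is one
    point in the D-dimensional stimulus space described in the text.
    """
    T = len(I)
    X = np.zeros((T - D + 1, D))
    for i, t0 in enumerate(range(D - 1, T)):
        X[i] = I[t0 - D + 1 : t0 + 1]
    return X

def project(X, filters):
    """Linear projections s_mu = sum_t f_mu(t) I(t0 - t), one per filter."""
    # filters: K x D array; reverse time so f(0) multiplies I(t0), f(1) multiplies I(t0-1), ...
    return X @ filters[:, ::-1].T          # shape (num_histories, K)

# toy usage with hypothetical filters
rng = np.random.default_rng(0)
I = rng.normal(size=10000)                  # white-noise current, arbitrary units
D = 100                                     # ~ T / dt dimensions, as in the text
t = np.arange(D)
f1 = np.exp(-t / 20.0)                      # hypothetical integrating filter
f2 = np.gradient(f1)                        # hypothetical differentiating filter
S = project(stimulus_histories(I, D), np.stack([f1, f2]))
```

With K filters this reduces each D-dimensional current history to a K-dimensional point, which is the space on which the nonlinearity g of Eq. (2) is defined.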

26 If we think of the current I(t < t0) as a vector, with one dimension for each time sample, then these filtered signals are linear projections of this vector. [sent-40, score-0.293]

27 In this formulation, characterizing the computation done by a neuron means estimating the number of relevant stimulus dimensions (K, hopefully much less than D), identifying the filters which project into this relevant subspace, and characterizing the nonlinear function g. [sent-41, score-0.759]

28 The classical perceptron-like cell of neural network theory has only one relevant dimension and a simple form for g. [sent-43, score-0.167]

29 3 Identifying low-dimensional structure. The idea that neurons might be sensitive only to low-dimensional projections of their inputs was developed explicitly in work on a motion-sensitive neuron of the fly visual system [3]. [sent-44, score-0.538]

30 Thus the spike-triggered average stimulus, or reverse correlation function [4], is the first moment STA(τ) = ∫ [ds] P[s(t < t0) | spike@t0] s(t0 - τ).   (4) [sent-48, score-0.773]

31 We can also compute the covariance matrix of fluctuations around this average, C_spike(τ, τ') = ∫ [ds] P[s(t < t0) | spike@t0] s(t0 - τ) s(t0 - τ') - STA(τ) STA(τ').   (5) [sent-49, score-0.158]

32 Notice that all of these covariance matrices are D × D in size. [sent-51, score-0.124]

33 The surprising finding of [3] was that the change in the covariance matrix, ΔC = C_spike - C_prior, had only a very small number of nonzero eigenvalues. [sent-52, score-0.212]

34 In fact it can be shown that if the probability of spiking depends on K linear projections of the stimulus as in Eq. (2), [sent-53, score-0.28]

35 and if the inputs s(t) are chosen from a Gaussian distribution, then the rank of the matrix ΔC is exactly K. [sent-54, score-0.14]

36 Further, the eigenvectors associated with nonzero eigenvalues span the relevant subspace (up to a rotation associated with the autocorrelations in the inputs). [sent-55, score-0.37]

37 Thus eigenvalue analysis of the spike-triggered covariance matrix gives us a direct way to search for a low-dimensional linear subspace that captures the relevant stimulus features. [sent-56, score-1.344]
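
As a rough illustration of this eigenvalue analysis (a sketch under our own variable names, not the authors' code), one can estimate the STA of Eq. (4), the spike-triggered and prior covariances, and diagonalize their difference ΔC; with Gaussian inputs, only roughly K eigenvalues should stand out from the noise floor.

```python
import numpy as np

def spike_triggered_covariance(X, spike_idx):
    """Eigen-analysis of dC = C_spike - C_prior for stimulus-history rows X.

    X         : (N, D) array, row i = current history preceding time bin i
    spike_idx : indices of rows whose time bin contains a spike
    Returns the STA plus eigenvalues/eigenvectors of dC, ordered by |eigenvalue|.
    """
    Xs = X[spike_idx]
    sta = Xs.mean(axis=0)                      # discretized Eq. (4)
    C_spike = np.cov(Xs, rowvar=False)         # fluctuations around the STA
    C_prior = np.cov(X, rowvar=False)          # full stimulus ensemble
    dC = C_spike - C_prior
    evals, evecs = np.linalg.eigh(dC)          # dC is symmetric
    order = np.argsort(-np.abs(evals))
    return sta, evals[order], evecs[:, order]

# usage sketch:
# sta, evals, evecs = spike_triggered_covariance(X, spike_idx)
# leading_modes = evecs[:, :2]                 # candidate relevant subspace
```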

38 The subscripted voltages V_l and V_Na are ion-specific reversal potentials. [sent-60, score-0.046]

39 g_l, g_K and g_Na are empirically determined maximal conductances for the different ions (footnote 2), and the gating variables n, m and h (on the interval [0,1]) have their own voltage-dependent dynamics of the standard Hodgkin-Huxley form dx/dt = α_x(V)(1 - x) - β_x(V) x for x in {n, m, h}, with empirically fitted rate functions α_x and β_x. [sent-61, score-0.046]
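
For orientation, a minimal forward-Euler integration of the standard point-neuron Hodgkin-Huxley model is sketched below. This is our own illustration using the textbook 1952 parameters and rate functions in the original voltage convention (rest at 0 mV, depolarization positive); the paper states it uses the original parameters with a sign change for voltages, and a careful study would use a finer or adaptive integration step.

```python
import numpy as np

def alpha_beta(V):
    """Standard 1952 Hodgkin-Huxley rate functions (mV, 1/ms, original convention).
    Removable singularities at V = 10 and V = 25 mV are ignored in this sketch."""
    an = 0.01 * (10.0 - V) / (np.exp((10.0 - V) / 10.0) - 1.0)
    bn = 0.125 * np.exp(-V / 80.0)
    am = 0.1 * (25.0 - V) / (np.exp((25.0 - V) / 10.0) - 1.0)
    bm = 4.0 * np.exp(-V / 18.0)
    ah = 0.07 * np.exp(-V / 20.0)
    bh = 1.0 / (np.exp((30.0 - V) / 10.0) + 1.0)
    return an, bn, am, bm, ah, bh

def run_hh(I, dt=0.01, C=1.0, gNa=120.0, gK=36.0, gl=0.3,
           VNa=115.0, VK=-12.0, Vl=10.6):
    """Integrate C dV/dt = I - gNa m^3 h (V - VNa) - gK n^4 (V - VK) - gl (V - Vl)."""
    V, n, m, h = 0.0, 0.32, 0.05, 0.6          # approximate resting values
    Vs = np.empty(len(I))
    for i, Iext in enumerate(I):
        an, bn, am, bm, ah, bh = alpha_beta(V)
        n += dt * (an * (1 - n) - bn * n)
        m += dt * (am * (1 - m) - bm * m)
        h += dt * (ah * (1 - h) - bh * h)
        INa = gNa * m**3 * h * (V - VNa)
        IK = gK * n**4 * (V - VK)
        Il = gl * (V - Vl)
        V += dt * (Iext - INa - IK - Il) / C
        Vs[i] = V
    return Vs

# spikes can then be detected as upward crossings of a voltage threshold
```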

40 Here we are interested in dynamic inputs I(t), but it is important to remember that for constant inputs the Hodgkin-Huxley model undergoes a Hopf bifurcation to spike at a constant frequency; further, this frequency is rather insensitive to the precise value of the input above onset. [sent-73, score-0.892]

41 This 'rigidity' of the system is felt also in many regimes of dynamic stimulation. (Footnote 2: we have used the original parameters, with a sign change for voltages.) [sent-74, score-0.032]

42 This rigidity can be thought of as a strong interaction among successive spikes. [sent-80, score-0.041]

43 These interactions lead to long memory times, reflecting the infinite phase memory of the periodic orbit which exists for constant input. [sent-81, score-0.068]

44 While spike interactions are interesting, we want to focus on the way that input current modulates the probability of spiking. [sent-82, score-0.649]

45 Isolated spikes are defined by accumulating the interspike interval distribution and noticing that for intervals t > t_c the distribution decays exponentially, which means that the system has lost memory of the previous spike; thus spikes which occur more than t_c after the previous spike are isolated. [sent-84, score-0.896]
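
A small sketch (ours, not the paper's code) of the isolated-spike selection just described; the cutoff t_c is an assumed input that would be read off the exponential tail of the accumulated interspike-interval histogram.

```python
import numpy as np

def isolated_spikes(spike_times, t_c):
    """Keep spikes whose preceding interspike interval exceeds t_c.

    spike_times : sorted 1-D array of spike times (ms)
    t_c         : interval beyond which the ISI distribution decays exponentially
    """
    spike_times = np.asarray(spike_times)
    isi = np.diff(spike_times)
    keep = np.concatenate(([True], isi > t_c))   # first spike has no predecessor
    return spike_times[keep]

# e.g. isolated = isolated_spikes(all_spike_times, t_c=60.0)   # hypothetical t_c
```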

46 In what follows we consider the response of the Hodgkin-Huxley model to currents I(t) with zero mean. [sent-85, score-0.082]

47 Fig. 1 shows the change in covariance matrix ΔC(τ, τ') for isolated spikes in our HH simulation. [sent-90, score-0.421]

48 Fig. 2(a) shows the resulting spectrum of eigenvalues as a function of sample size. [sent-91, score-0.118]

49 The result strongly suggests that there are many fewer than D relevant dimensions. [sent-92, score-0.114]

50 In particular, there seem to be two outstanding modes; the STA itself lies largely in the subspace of these modes, as shown in Fig. 2(b). [sent-93, score-0.115]

51 Figure 1: The isolated spike-triggered covariance matrix ΔC(τ, τ'); axes τ and τ' in msec. [sent-97, score-0.962]

52 If the neuron filtered its inputs and generated a spike when the output of the filter crossed threshold, we would find that there are two significant dimensions, corresponding to the filter and its derivative. [sent-100, score-0.941]
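
For comparison, a toy "filter-and-threshold" reference model of the kind mentioned here can be written in a few lines (our illustration; the filter f and threshold theta are arbitrary placeholders). Running the covariance analysis above on spikes generated this way is what would produce exactly two significant modes, the filter and its derivative.

```python
import numpy as np

def filter_and_threshold_spikes(I, f, theta):
    """Toy reference model: emit a spike at each upward crossing of threshold
    theta by the causally filtered current y(t) = sum_k f(k) I(t - k)."""
    y = np.convolve(I, f, mode="full")[: len(I)]
    up = (y[1:] >= theta) & (y[:-1] < theta)
    return np.flatnonzero(up) + 1          # spike time indices
```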

53 Notice also that both filters have significant differentiating components: the cell is not simply integrating its inputs. [sent-102, score-0.124]

54 While Fig. 2(a) suggests that two modes dominate, it also demonstrates that the smaller nonzero eigenvalues of the other modes are not just noise. [sent-104, score-0.371]

55 The width of any spectral band of eigenvalues near zero due to finite sampling should decline with increasing sample size. [sent-105, score-0.156]

56 Thus while the system is primarily sensitive to two dimensions, there is something missing in this picture. [sent-108, score-0.052]

57 Figure 2: (a) Convergence of the largest 32 eigenvalues of the isolated spike-triggered covariance with increasing sample size (x-axis: number of spikes accumulated). (b) Projections of the isolated STA onto the covariance modes, eigenmodes 1 and 2. [sent-113, score-1.401]

58 Figure 3: Most significant two modes of the spike-triggered covariance (legend includes the normalized derivative of mode 1). [sent-121, score-0.148]

59 To quantify this, we must first characterize the nonlinear function g(s1, s2). [sent-122, score-0.033]

60 6 Nonlinearity and information. At each instant of time we can find the relevant projections of the stimulus s1 and s2. [sent-123, score-0.328]

61 By construction, the distribution of these signals over the whole experiment, P(s1, s2), is Gaussian. [sent-124, score-0.096]

62 On the other hand, each time we see a spike we get a sample from the distribution P(s1, s2 | spike@t0), leading to the picture in Fig. 4. [sent-125, score-0.662]

63 The prior and spike-conditional distributions clearly are better separated in two dimensions than in one, which means that our two-dimensional description captures more than the spike-triggered average. [sent-127, score-1.685]

64 Further, we see that the spike conditional distribution is curved, unlike what we would expect for a simple thresholding device. [sent-128, score-0.631]

65 Combining Eqs. (2) and (3), we have g(s1, s2) = P(s1, s2 | spike@t0) / P(s1, s2),   (9) so that these two distributions determine the input/output relation of the neuron in this 2D space. [sent-130, score-0.201]
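
As a sketch of how g could be estimated in practice (our own histogram-based illustration; the bin count and ranges are arbitrary choices, and the paper does not specify its estimator here), one can take the ratio of spike-conditional to prior densities on a 2D grid, following Eq. (9).

```python
import numpy as np

def estimate_g(S, spike_idx, bins=25):
    """Histogram estimate of g(s1, s2) = P(s1, s2 | spike) / P(s1, s2).

    S         : (N, 2) array of projections s1, s2 at every time bin
    spike_idx : indices of time bins containing a spike
    """
    edges = [np.linspace(S[:, k].min(), S[:, k].max(), bins + 1) for k in range(2)]
    P_prior, _, _ = np.histogram2d(S[:, 0], S[:, 1], bins=edges, density=True)
    P_spike, _, _ = np.histogram2d(S[spike_idx, 0], S[spike_idx, 1],
                                   bins=edges, density=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        g = np.where(P_prior > 0, P_spike / P_prior, 0.0)
    return g, edges
```

Because both histograms use the same bins, the ratio of the two densities equals the ratio of per-bin probabilities required by Eq. (9).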

66 We emphasize that although the subspace is linear, g can have arbitrary nonlinearity. [sent-131, score-0.158]

67 Fig. 4 shows that this input/output relation has sharp edges, but also some fuzziness. [sent-133, score-0.036]

68 The HH model is deterministic, so in principle the input/output relation should be a δ function: spikes occur only when certain exact conditions are met. [sent-134, score-0.223]

69 Of course we have blurred things a bit by working at finite time resolution. [sent-135, score-0.101]

70 Figure 4: 10^4 spike-conditional stimuli projected along the first 2 covariance modes (axes s1, s2 in standard deviations). [sent-138, score-0.124]

71 Given that we work at finite Δt, spikes carry only a finite amount of information, and the quality of our 2D approximation can be judged by asking how much of this information is captured by this description. [sent-141, score-0.338]

72 As explained in [5], the arrival time of a single spike provides an information I_{one spike} = ⟨ (r(t)/r̄) log2 [r(t)/r̄] ⟩_t,   (10) where r(t) is the time-dependent spike rate, r̄ is the average spike rate, and ⟨·⟩_t denotes an average over time. [sent-142, score-1.965]

73 With a deterministic model like HH, the rate r(t) either is zero or corresponds to one spike occurring in one bin of size Δt, that is r = 1/Δt. [sent-146, score-0.667]

74 On the other hand, if the probability of spiking really depends only on the stimulus dimensions s1 and s2, we can substitute r(t)/r̄ → P(s1, s2 | spike@t) / P(s1, s2),   (11) and use the ergodicity of the stimulus to replace time averages in Eq. (10) by averages over the distribution P(s1, s2). [sent-148, score-0.462]

75 Then we find [3, 5] I^{s1,s2}_{one spike} = ⟨ (P(s1, s2 | spike) / P(s1, s2)) log2 [P(s1, s2 | spike) / P(s1, s2)] ⟩,   (12) with the average taken over P(s1, s2). If our two-dimensional approximation were exact we would find I^{s1,s2}_{one spike} = I_{one spike}; more generally we will find I^{s1,s2}_{one spike} ≤ I_{one spike}, and the fraction of the information we capture measures the quality of the approximation. [sent-150, score-0.317]
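
To make Eqs. (10)-(12) concrete, here is a rough numerical sketch (ours; it assumes the rate r(t) is given per time bin and that the two distributions have already been binned into probability masses that each sum to one).

```python
import numpy as np

def info_per_spike_from_rate(r, rbar):
    """Eq. (10): I_one_spike = < (r/rbar) * log2(r/rbar) >_t, in bits."""
    x = np.asarray(r, dtype=float) / rbar
    terms = np.zeros_like(x)
    nz = x > 0
    terms[nz] = x[nz] * np.log2(x[nz])
    return terms.mean()

def info_per_spike_2d(P_spike, P_prior):
    """Eq. (12): same average with r/rbar -> P(s1,s2|spike)/P(s1,s2),
    taken over the prior distribution (arrays of per-bin probability masses)."""
    mask = (P_spike > 0) & (P_prior > 0)
    ratio = P_spike[mask] / P_prior[mask]
    return np.sum(P_prior[mask] * ratio * np.log2(ratio))

# fraction captured by the 2D description:
# frac = info_per_spike_2d(P_spike, P_prior) / info_per_spike_from_rate(r, rbar)
```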

76 For comparison, we also show the information captured by considering only the stimulus projection along the STA. [sent-153, score-0.187]

77 Figure 5: Fraction of spike timing information captured by the STA (lower curve) and by projection onto covariance modes 1 and 2 (upper curve), as a function of time discretization (msec). [sent-154, score-1.176]

78 7 Discussion. The simple, low-dimensional model described captures a substantial amount of information about spike timing for an HH neuron. [sent-155, score-0.769]

79 However, the absolute information captured saturates for both the 1D and 2D cases, at roughly 3 bits. [sent-158, score-0.077]

80 Hence the information fraction captured plummets; recovering precise spike timing requires a more complex, higher dimensional representation of the stimulus. [sent-160, score-0.99]

81 Is this effect important, or is timing at this resolution too noisy for this extra complexity to matter in a real neuron? [sent-161, score-0.19]

82 Stochastic HH simulations have suggested that, when realistic noise sources are taken into account, the timing of spikes in response to dynamic stimuli is reproducible to within 1-2 msec [6]. [sent-162, score-0.459]

83 This suggests that such timing details may indeed be important. [sent-163, score-0.116]

84 Even in 2D, one can observe that the spike-conditional distribution is curved (Fig. 4); [sent-164, score-0.713]

85 it is likely to curve along other dimensions as well. [sent-165, score-0.154]

86 It may be possible to improve our approximation by considering the computation to take place on a low-dimensional but curved manifold, instead of a linear subspace. [sent-166, score-0.082]

87 Fig. 4 also implies that the computation in the HH model is not well approximated by an integrate-and-fire model, or a perceptron model limited to linear separations. [sent-168, score-0.032]

88 Characterizing the complexity of the computation is an important step toward understanding neural systems. [sent-169, score-0.066]

89 How to quantify this complexity theoretically is an area for future work; here, we have made progress toward this goal by describing such computations in a compact way and then evaluating the completeness of the description using information. [sent-170, score-0.272]

90 How does the addition of more channels increase the complexity of the computation? [sent-172, score-0.067]

91 Will this add more relevant dimensions or does the non-linearity change? [sent-173, score-0.228]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('spike', 0.598), ('neuron', 0.165), ('hh', 0.16), ('lspike', 0.159), ('spikes', 0.154), ('msec', 0.148), ('triggered', 0.129), ('covariance', 0.124), ('timing', 0.116), ('modes', 0.115), ('subspace', 0.115), ('relevant', 0.114), ('dimensions', 0.114), ('stimulus', 0.11), ('sta', 0.109), ('inputs', 0.106), ('spiking', 0.097), ('huxley', 0.095), ('vna', 0.095), ('description', 0.093), ('fraction', 0.093), ('neurons', 0.09), ('eigenvalues', 0.085), ('currents', 0.082), ('curved', 0.082), ('isolated', 0.077), ('captured', 0.077), ('ruyter', 0.074), ('projections', 0.073), ('characterizing', 0.068), ('dimensional', 0.065), ('cprior', 0.063), ('cspike', 0.063), ('flt', 0.063), ('hodgkin', 0.063), ('llt', 0.063), ('lone', 0.063), ('lonespike', 0.063), ('toii', 0.063), ('tols', 0.063), ('boolean', 0.061), ('ds', 0.061), ('signals', 0.061), ('nonzero', 0.056), ('bialek', 0.056), ('captures', 0.055), ('adrienne', 0.055), ('microscopic', 0.055), ('pulse', 0.055), ('tc', 0.055), ('cell', 0.053), ('sensitive', 0.052), ('princeton', 0.051), ('current', 0.051), ('progress', 0.048), ('de', 0.046), ('average', 0.046), ('patch', 0.046), ('voltages', 0.046), ('conductances', 0.046), ('injected', 0.046), ('van', 0.044), ('resolution', 0.044), ('emphasize', 0.043), ('vk', 0.043), ('exp', 0.043), ('precise', 0.041), ('ion', 0.041), ('jersey', 0.041), ('dynamic', 0.041), ('curve', 0.04), ('question', 0.039), ('filtered', 0.039), ('membrane', 0.039), ('bin', 0.039), ('finite', 0.038), ('filters', 0.038), ('think', 0.038), ('channels', 0.037), ('relation', 0.036), ('toward', 0.036), ('identifying', 0.036), ('whole', 0.035), ('matrix', 0.034), ('memory', 0.034), ('quantify', 0.033), ('sample', 0.033), ('conditional', 0.033), ('exact', 0.033), ('significant', 0.033), ('spatial', 0.032), ('change', 0.032), ('bit', 0.032), ('history', 0.032), ('perceptron', 0.032), ('evaluating', 0.032), ('time', 0.031), ('carry', 0.031), ('complexity', 0.03), ('deterministic', 0.03)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.9999997 146 nips-2000-What Can a Single Neuron Compute?

Author: Blaise Agüera y Arcas, Adrienne L. Fairhall, William Bialek

Abstract: In this paper we formulate a description of the computation performed by a neuron as a combination of dimensional reduction and nonlinearity. We implement this description for the Hodgkin-Huxley model, identify the most relevant dimensions and find the nonlinearity. A two dimensional description already captures a significant fraction of the information that spikes carry about dynamic inputs. This description also shows that computation in the Hodgkin-Huxley model is more complex than a simple integrate-and-fire or perceptron model. 1

2 0.35874942 55 nips-2000-Finding the Key to a Synapse

Author: Thomas Natschläger, Wolfgang Maass

Abstract: Experimental data have shown that synapses are heterogeneous: different synapses respond with different sequences of amplitudes of postsynaptic responses to the same spike train. Neither the role of synaptic dynamics itself nor the role of the heterogeneity of synaptic dynamics for computations in neural circuits is well understood. We present in this article methods that make it feasible to compute for a given synapse with known synaptic parameters the spike train that is optimally fitted to the synapse, for example in the sense that it produces the largest sum of postsynaptic responses. To our surprise we find that most of these optimally fitted spike trains match common firing patterns of specific types of neurons that are discussed in the literature.

3 0.31735134 88 nips-2000-Multiple Timescales of Adaptation in a Neural Code

Author: Adrienne L. Fairhall, Geoffrey D. Lewen, William Bialek, Robert R. de Ruyter van Steveninck

Abstract: Many neural systems extend their dynamic range by adaptation. We examine the timescales of adaptation in the context of dynamically modulated rapidly-varying stimuli, and demonstrate in the fly visual system that adaptation to the statistical ensemble of the stimulus dynamically maximizes information transmission about the time-dependent stimulus. Further, while the rate response has long transients, the adaptation takes place on timescales consistent with optimal variance estimation.

4 0.29636052 141 nips-2000-Universality and Individuality in a Neural Code

Author: Elad Schneidman, Naama Brenner, Naftali Tishby, Robert R. de Ruyter van Steveninck, William Bialek

Abstract: The problem of neural coding is to understand how sequences of action potentials (spikes) are related to sensory stimuli, motor outputs, or (ultimately) thoughts and intentions. One clear question is whether the same coding rules are used by different neurons, or by corresponding neurons in different individuals. We present a quantitative formulation of this problem using ideas from information theory, and apply this approach to the analysis of experiments in the fly visual system. We find significant individual differences in the structure of the code, particularly in the way that temporal patterns of spikes are used to convey information beyond that available from variations in spike rate. On the other hand, all the flies in our ensemble exhibit a high coding efficiency, so that every spike carries the same amount of information in all the individuals. Thus the neural code has a quantifiable mixture of individuality and universality. 1

5 0.27912211 129 nips-2000-Temporally Dependent Plasticity: An Information Theoretic Account

Author: Gal Chechik, Naftali Tishby

Abstract: The paradigm of Hebbian learning has recently received a novel interpretation with the discovery of synaptic plasticity that depends on the relative timing of pre and post synaptic spikes. This paper derives a temporally dependent learning rule from the basic principle of mutual information maximization and studies its relation to the experimentally observed plasticity. We find that a supervised spike-dependent learning rule sharing similar structure with the experimentally observed plasticity increases mutual information to a stable near optimal level. Moreover, the analysis reveals how the temporal structure of time-dependent learning rules is determined by the temporal filter applied by neurons over their inputs. These results suggest experimental prediction as to the dependency of the learning rule on neuronal biophysical parameters 1

6 0.16259634 67 nips-2000-Homeostasis in a Silicon Integrate and Fire Neuron

7 0.10646982 40 nips-2000-Dendritic Compartmentalization Could Underlie Competition and Attentional Biasing of Simultaneous Visual Stimuli

8 0.094812512 147 nips-2000-Who Does What? A Novel Algorithm to Determine Function Localization

9 0.09017086 45 nips-2000-Emergence of Movement Sensitive Neurons' Properties by Learning a Sparse Code for Natural Moving Images

10 0.088650376 124 nips-2000-Spike-Timing-Dependent Learning for Oscillatory Networks

11 0.087197535 104 nips-2000-Processing of Time Series by Neural Circuits with Biologically Realistic Synaptic Dynamics

12 0.078199729 76 nips-2000-Learning Continuous Distributions: Simulations With Field Theoretic Priors

13 0.068999298 77 nips-2000-Learning Curves for Gaussian Processes Regression: A Framework for Good Approximations

14 0.068592466 27 nips-2000-Automatic Choice of Dimensionality for PCA

15 0.067229293 121 nips-2000-Sparse Kernel Principal Component Analysis

16 0.066693462 49 nips-2000-Explaining Away in Weight Space

17 0.065815724 42 nips-2000-Divisive and Subtractive Mask Effects: Linking Psychophysics and Biophysics

18 0.063056313 81 nips-2000-Learning Winner-take-all Competition Between Groups of Neurons in Lateral Inhibitory Networks

19 0.062759101 65 nips-2000-Higher-Order Statistical Properties Arising from the Non-Stationarity of Natural Signals

20 0.062439807 89 nips-2000-Natural Sound Statistics and Divisive Normalization in the Auditory System


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.254), (1, -0.285), (2, -0.372), (3, -0.016), (4, 0.072), (5, -0.012), (6, -0.123), (7, 0.352), (8, -0.1), (9, 0.024), (10, -0.027), (11, -0.048), (12, 0.052), (13, 0.166), (14, -0.1), (15, 0.065), (16, -0.033), (17, 0.018), (18, -0.17), (19, -0.031), (20, -0.07), (21, 0.029), (22, 0.008), (23, 0.063), (24, -0.048), (25, 0.028), (26, -0.021), (27, 0.048), (28, -0.103), (29, -0.009), (30, -0.018), (31, -0.077), (32, 0.038), (33, -0.082), (34, 0.045), (35, -0.045), (36, 0.024), (37, 0.064), (38, -0.028), (39, -0.091), (40, -0.007), (41, -0.014), (42, 0.039), (43, 0.001), (44, 0.059), (45, -0.033), (46, -0.015), (47, 0.084), (48, 0.01), (49, -0.003)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.96972269 146 nips-2000-What Can a Single Neuron Compute?

Author: Blaise Agüera y Arcas, Adrienne L. Fairhall, William Bialek

Abstract: In this paper we formulate a description of the computation performed by a neuron as a combination of dimensional reduction and nonlinearity. We implement this description for the Hodgkin-Huxley model, identify the most relevant dimensions and find the nonlinearity. A two dimensional description already captures a significant fraction of the information that spikes carry about dynamic inputs. This description also shows that computation in the Hodgkin-Huxley model is more complex than a simple integrate-and-fire or perceptron model. 1

2 0.84054708 141 nips-2000-Universality and Individuality in a Neural Code

Author: Elad Schneidman, Naama Brenner, Naftali Tishby, Robert R. de Ruyter van Steveninck, William Bialek

Abstract: The problem of neural coding is to understand how sequences of action potentials (spikes) are related to sensory stimuli, motor outputs, or (ultimately) thoughts and intentions. One clear question is whether the same coding rules are used by different neurons, or by corresponding neurons in different individuals. We present a quantitative formulation of this problem using ideas from information theory, and apply this approach to the analysis of experiments in the fly visual system. We find significant individual differences in the structure of the code, particularly in the way that temporal patterns of spikes are used to convey information beyond that available from variations in spike rate. On the other hand, all the flies in our ensemble exhibit a high coding efficiency, so that every spike carries the same amount of information in all the individuals. Thus the neural code has a quantifiable mixture of individuality and universality. 1

3 0.79447526 55 nips-2000-Finding the Key to a Synapse

Author: Thomas Natschläger, Wolfgang Maass

Abstract: Experimental data have shown that synapses are heterogeneous: different synapses respond with different sequences of amplitudes of postsynaptic responses to the same spike train. Neither the role of synaptic dynamics itself nor the role of the heterogeneity of synaptic dynamics for computations in neural circuits is well understood. We present in this article methods that make it feasible to compute for a given synapse with known synaptic parameters the spike train that is optimally fitted to the synapse, for example in the sense that it produces the largest sum of postsynaptic responses. To our surprise we find that most of these optimally fitted spike trains match common firing patterns of specific types of neurons that are discussed in the literature.

4 0.764355 88 nips-2000-Multiple Timescales of Adaptation in a Neural Code

Author: Adrienne L. Fairhall, Geoffrey D. Lewen, William Bialek, Robert R. de Ruyter van Steveninck

Abstract: Many neural systems extend their dynamic range by adaptation. We examine the timescales of adaptation in the context of dynamically modulated rapidly-varying stimuli, and demonstrate in the fly visual system that adaptation to the statistical ensemble of the stimulus dynamically maximizes information transmission about the time-dependent stimulus. Further, while the rate response has long transients, the adaptation takes place on timescales consistent with optimal variance estimation.

5 0.57451761 129 nips-2000-Temporally Dependent Plasticity: An Information Theoretic Account

Author: Gal Chechik, Naftali Tishby

Abstract: The paradigm of Hebbian learning has recently received a novel interpretation with the discovery of synaptic plasticity that depends on the relative timing of pre and post synaptic spikes. This paper derives a temporally dependent learning rule from the basic principle of mutual information maximization and studies its relation to the experimentally observed plasticity. We find that a supervised spike-dependent learning rule sharing similar structure with the experimentally observed plasticity increases mutual information to a stable near optimal level. Moreover, the analysis reveals how the temporal structure of time-dependent learning rules is determined by the temporal filter applied by neurons over their inputs. These results suggest experimental prediction as to the dependency of the learning rule on neuronal biophysical parameters 1

6 0.41519961 67 nips-2000-Homeostasis in a Silicon Integrate and Fire Neuron

7 0.30034062 40 nips-2000-Dendritic Compartmentalization Could Underlie Competition and Attentional Biasing of Simultaneous Visual Stimuli

8 0.2662898 147 nips-2000-Who Does What? A Novel Algorithm to Determine Function Localization

9 0.25605425 104 nips-2000-Processing of Time Series by Neural Circuits with Biologically Realistic Synaptic Dynamics

10 0.23999463 43 nips-2000-Dopamine Bonuses

11 0.2327223 124 nips-2000-Spike-Timing-Dependent Learning for Oscillatory Networks

12 0.22712155 65 nips-2000-Higher-Order Statistical Properties Arising from the Non-Stationarity of Natural Signals

13 0.21663469 77 nips-2000-Learning Curves for Gaussian Processes Regression: A Framework for Good Approximations

14 0.21450923 125 nips-2000-Stability and Noise in Biochemical Switches

15 0.20704025 45 nips-2000-Emergence of Movement Sensitive Neurons' Properties by Learning a Sparse Code for Natural Moving Images

16 0.2044231 27 nips-2000-Automatic Choice of Dimensionality for PCA

17 0.20438957 76 nips-2000-Learning Continuous Distributions: Simulations With Field Theoretic Priors

18 0.203842 99 nips-2000-Periodic Component Analysis: An Eigenvalue Method for Representing Periodic Structure in Speech

19 0.19306955 120 nips-2000-Sparse Greedy Gaussian Process Regression

20 0.18820313 48 nips-2000-Exact Solutions to Time-Dependent MDPs


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(10, 0.018), (17, 0.107), (32, 0.023), (33, 0.045), (42, 0.044), (55, 0.03), (60, 0.262), (62, 0.04), (65, 0.017), (67, 0.091), (75, 0.02), (76, 0.064), (79, 0.022), (81, 0.074), (90, 0.028), (93, 0.033), (97, 0.012)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.84648204 46 nips-2000-Ensemble Learning and Linear Response Theory for ICA

Author: Pedro A. d. F. R. Højen-Sørensen, Ole Winther, Lars Kai Hansen

Abstract: We propose a general Bayesian framework for performing independent component analysis (leA) which relies on ensemble learning and linear response theory known from statistical physics. We apply it to both discrete and continuous sources. For the continuous source the underdetermined (overcomplete) case is studied. The naive mean-field approach fails in this case whereas linear response theory-which gives an improved estimate of covariances-is very efficient. The examples given are for sources without temporal correlations. However, this derivation can easily be extended to treat temporal correlations. Finally, the framework offers a simple way of generating new leA algorithms without needing to define the prior distribution of the sources explicitly.

same-paper 2 0.8061161 146 nips-2000-What Can a Single Neuron Compute?

Author: Blaise Agüera y Arcas, Adrienne L. Fairhall, William Bialek

Abstract: In this paper we formulate a description of the computation performed by a neuron as a combination of dimensional reduction and nonlinearity. We implement this description for the Hodgkin-Huxley model, identify the most relevant dimensions and find the nonlinearity. A two dimensional description already captures a significant fraction of the information that spikes carry about dynamic inputs. This description also shows that computation in the Hodgkin-Huxley model is more complex than a simple integrate-and-fire or perceptron model. 1

3 0.55846477 104 nips-2000-Processing of Time Series by Neural Circuits with Biologically Realistic Synaptic Dynamics

Author: Thomas Natschläger, Wolfgang Maass, Eduardo D. Sontag, Anthony M. Zador

Abstract: Experimental data show that biological synapses behave quite differently from the symbolic synapses in common artificial neural network models. Biological synapses are dynamic, i.e., their

4 0.51895463 88 nips-2000-Multiple Timescales of Adaptation in a Neural Code

Author: Adrienne L. Fairhall, Geoffrey D. Lewen, William Bialek, Robert R. de Ruyter van Steveninck

Abstract: Many neural systems extend their dynamic range by adaptation. We examine the timescales of adaptation in the context of dynamically modulated rapidly-varying stimuli, and demonstrate in the fly visual system that adaptation to the statistical ensemble of the stimulus dynamically maximizes information transmission about the time-dependent stimulus. Further, while the rate response has long transients, the adaptation takes place on timescales consistent with optimal variance estimation.

5 0.51520282 129 nips-2000-Temporally Dependent Plasticity: An Information Theoretic Account

Author: Gal Chechik, Naftali Tishby

Abstract: The paradigm of Hebbian learning has recently received a novel interpretation with the discovery of synaptic plasticity that depends on the relative timing of pre and post synaptic spikes. This paper derives a temporally dependent learning rule from the basic principle of mutual information maximization and studies its relation to the experimentally observed plasticity. We find that a supervised spike-dependent learning rule sharing similar structure with the experimentally observed plasticity increases mutual information to a stable near optimal level. Moreover, the analysis reveals how the temporal structure of time-dependent learning rules is determined by the temporal filter applied by neurons over their inputs. These results suggest experimental prediction as to the dependency of the learning rule on neuronal biophysical parameters 1

6 0.51224297 134 nips-2000-The Kernel Trick for Distances

7 0.51012117 106 nips-2000-Propagation Algorithms for Variational Bayesian Learning

8 0.50946009 49 nips-2000-Explaining Away in Weight Space

9 0.50687236 122 nips-2000-Sparse Representation for Gaussian Process Models

10 0.50677365 55 nips-2000-Finding the Key to a Synapse

11 0.50371414 99 nips-2000-Periodic Component Analysis: An Eigenvalue Method for Representing Periodic Structure in Speech

12 0.50369048 79 nips-2000-Learning Segmentation by Random Walks

13 0.50314599 125 nips-2000-Stability and Noise in Biochemical Switches

14 0.49886394 37 nips-2000-Convergence of Large Margin Separable Linear Classification

15 0.49839717 74 nips-2000-Kernel Expansions with Unlabeled Examples

16 0.49758792 98 nips-2000-Partially Observable SDE Models for Image Sequence Recognition Tasks

17 0.49669588 124 nips-2000-Spike-Timing-Dependent Learning for Oscillatory Networks

18 0.48985341 107 nips-2000-Rate-coded Restricted Boltzmann Machines for Face Recognition

19 0.48901477 95 nips-2000-On a Connection between Kernel PCA and Metric Multidimensional Scaling

20 0.48857147 69 nips-2000-Incorporating Second-Order Functional Knowledge for Better Option Pricing