nips nips2000 nips2000-141 knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Elad Schneidman, Naama Brenner, Naftali Tishby, Robert R. de Ruyter van Steveninck, William Bialek
Abstract: The problem of neural coding is to understand how sequences of action potentials (spikes) are related to sensory stimuli, motor outputs, or (ultimately) thoughts and intentions. One clear question is whether the same coding rules are used by different neurons, or by corresponding neurons in different individuals. We present a quantitative formulation of this problem using ideas from information theory, and apply this approach to the analysis of experiments in the fly visual system. We find significant individual differences in the structure of the code, particularly in the way that temporal patterns of spikes are used to convey information beyond that available from variations in spike rate. On the other hand, all the flies in our ensemble exhibit a high coding efficiency, so that every spike carries the same amount of information in all the individuals. Thus the neural code has a quantifiable mixture of individuality and universality. 1
Reference: text
sentIndex sentText sentNum sentScore
1 Universality and individuality in a neural code Elad Schneidman,1,2 Naama Brenner,3 Naftali Tishby,1,3 Rob R. [sent-1, score-0.17]
2 de Ruyter van Steveninck,3 William Bialek3 1School of Computer Science and Engineering, Center for Neural Computation and 2Department of Neurobiology, Hebrew University, Jerusalem 91904, Israel; 3NEC Research Institute, 4 Independence Way, Princeton, New Jersey 08540, USA. {elads, tishby}@cs. [sent-2, score-0.079]
3 Abstract: The problem of neural coding is to understand how sequences of action potentials (spikes) are related to sensory stimuli, motor outputs, or (ultimately) thoughts and intentions. [sent-8, score-0.064]
4 One clear question is whether the same coding rules are used by different neurons, or by corresponding neurons in different individuals. [sent-9, score-0.094]
5 We present a quantitative formulation of this problem using ideas from information theory, and apply this approach to the analysis of experiments in the fly visual system. [sent-10, score-0.578]
6 We find significant individual differences in the structure of the code, particularly in the way that temporal patterns of spikes are used to convey information beyond that available from variations in spike rate. [sent-11, score-0.659]
7 On the other hand, all the flies in our ensemble exhibit a high coding efficiency, so that every spike carries the same amount of information in all the individuals. [sent-12, score-1.133]
8 Thus the neural code has a quantifiable mixture of individuality and universality. [sent-13, score-0.199]
9 An accessible version of this question is whether different observers of the same sense data have the same neural representation of these data: how much of the neural code is universal, and how much is individual? [sent-16, score-0.163]
10 Differences in the neural codes of different individuals may arise from various sources: First, different individuals may use different 'vocabularies' of coding symbols. [sent-17, score-0.278]
11 Second, they may use the same symbols to encode different stimulus features. [sent-18, score-0.205]
12 Finally, perhaps the most interesting possibility is that different individuals might encode different features of the stimulus, so that they 'talk about different things'. [sent-20, score-0.123]
13 If we are to compare neural codes we must give a quantitative definition of similarity or divergence among neural responses. [sent-21, score-0.201]
14 We shall use ideas from information theory [1, 2] to quantify the notions of distinguishability, functional equivalence and content in the neural code. [sent-22, score-0.12]
15 This approach does not require a metric either on the space of stimuli or on the space of neural responses (but see [3]); all notions of similarity emerge from the statistical structure of the neural responses. [sent-23, score-0.177]
16 We apply these methods to analyze experiments on an identified motion-sensitive neuron in the fly's visual system, the cell H1 [4]. [sent-24, score-0.154]
17 Many invertebrate nervous systems have cells that can be named and numbered [5]; in many cases, including the motion sensitive cells in the fly's lobula plate, a small number of neurons is involved in representing a similarly identifiable portion of the sensory world. [sent-25, score-0.181]
18 It might seem that in these cases the question of whether different individuals share the same neural representation of the visual world would have a trivial answer. [sent-26, score-0.224]
19 Far from trivial, we shall see that the neural code even for identified neurons in flies has components which are common among flies and significant components which are individual to each fly. [sent-27, score-1.386]
20 2 Distinguishing flies according to their spike patterns. Nine different flies are shown precisely the same movie, which is repeated many times for each fly (Figure 1a). [sent-28, score-1.936]
21 As we show the movie we record the action potentials from the H1 neuron. [sent-29, score-0.072]
22 1 The details of the stimulus movie should not have a qualitative impact on the results, provided that the movie is sufficiently long and rich to drive the system through a reasonable and natural range of responses. [sent-30, score-0.31]
23 To analyze similarities and differences among the neural codes, we begin by discretizing the neural response into time bins of size Δt. [sent-33, score-0.268]
24 At this resolution there are almost never two spikes in a single bin, so we can think of the neural response as a binary string, as in Fig. [sent-35, score-0.203]
25 We examine the response in blocks or windows of time having length T, so that an individual neural response becomes a binary 'word' W with T/Δt letters. [sent-37, score-0.245]
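To make the discretization concrete, here is a minimal sketch (my own illustration, not the authors' code; the function name and the toy spike times are invented) of turning a list of spike times into integer-coded binary words, assuming Δt = 2 ms and 6-letter words as in Figure 1:

```python
import numpy as np

def spikes_to_words(spike_times, t_max, dt=0.002, word_len=6):
    """Bin spike times (in seconds) into dt bins, then slide a window of
    word_len bins over the resulting binary string; each window is packed
    into a single integer code for the binary 'word'."""
    n_bins = int(np.floor(t_max / dt))
    binary = np.zeros(n_bins, dtype=int)
    idx = (np.asarray(spike_times) / dt).astype(int)
    binary[idx[idx < n_bins]] = 1        # at 2 ms bins, >1 spike per bin is rare
    weights = 2 ** np.arange(word_len - 1, -1, -1)
    return np.array([binary[i:i + word_len] @ weights
                     for i in range(n_bins - word_len + 1)])

# toy spike train: spikes at 3, 11 and 22.5 ms within a 30 ms trace
print(spikes_to_words([0.003, 0.011, 0.0225], t_max=0.030))
```

With these toy inputs the first word comes out as 17, i.e. the pattern '010001', which matches the integer coding of word values used in Figure 1e-f.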
26 Figure 1f shows that different flies 'speak' with similar but distinct vocabularies. [sent-41, score-0.584]
27 We quantify the divergence among vocabularies by asking how much information the observation of a single word W provides about the identity of its source, that is, about the identity of the fly which generates this word: I(W → identity; T) = Σ_{i=1..N} (1/N) Σ_W P^i(W) log2 [P^i(W) / P^ens(W)] bits. (1) [sent-42, score-1.107]
28 [Footnote 1] The stimulus presented to the flies is a rigidly moving pattern of vertical bars, randomly dark or bright, with average intensity I ≈ 100 mW/(m²·sr). [sent-44, score-0.795]
29 [Footnote 2] In each fly we identify the H1 cell as the unique spiking neuron in the lobula plate that has a combination of wide-field sensitivity, inward directional selectivity for horizontal motion, and contralateral projection. [sent-48, score-0.093]
30 [Figure 1 panel labels: (a) Stimulus; (b, c) Spike trains, Fly 1 through Fly 9; (d, e) p^Fly(W|t = 3306) for flies 1 and 6; (f) Total word distribution.] [sent-50, score-0.107]
32 [Axes: Time (s), Time (ms), binary word value.] Figure 1: Different flies' spike trains and word statistics. [sent-57, score-0.823]
33 (a) All flies view the same random vertical bar pattern moving across their visual field with a time dependent velocity, part of which is shown. [sent-58, score-0.68]
34 (b) A set of 45 response traces to the part of the stimulus shown in (a) from each of the 9 flies. [sent-60, score-0.828]
35 The traces are taken from the segment of the experiment where the transient responses have decayed. [sent-61, score-0.078]
36 (c) Example of construction of the local word distributions. [sent-62, score-0.189]
37 Zooming in on a segment of the repeated responses of fly 1 to the visual stimuli, the fly's spike trains are divided into contiguous 2 ms bins, and the spikes in each of the bins are counted. [sent-63, score-1.187]
38 For example, we get the 6 letter words that the fly used at time 3306 ms into the input trace. [sent-64, score-0.601]
39 (e) The distributions of words that flies 1 and 6 used at time t = 3306 ms from the beginning of the stimulus. [sent-66, score-0.76]
40 For example, binary word value '17' stands for the word '010001'. [sent-69, score-0.401]
41 (f) Collecting the words that each of the flies used through all of the visual stimulus presentations, we get the total word distributions for flies 1 and 6, p^1(W) and p^6(W). [sent-70, score-1.68]
42 where the word distribution pooled over the whole ensemble of flies is P^ens(W) = (1/N) Σ_{i=1..N} P^i(W). (2) The measure I(W → identity; T) has been discussed by Lin [11] as the 'Jensen-Shannon divergence' D_JS among the distributions P^i(W). [sent-73, score-0.068]
43 [Footnote] Unlike the Kullback-Leibler divergence [2] (the 'standard' choice for measuring dissimilarity among distributions), the Jensen-Shannon divergence is symmetric and bounded (see also [12]). [sent-74, score-0.13]
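As a rough illustration of how Eqs. (1)-(2) can be estimated from data, here is a sketch (my own, not the authors' analysis code; it uses naive plug-in probability estimates and ignores the finite-sampling corrections a careful analysis would need):

```python
import numpy as np

def identity_info(word_lists, n_symbols):
    """Eqs. (1)-(2): bits that observing a single word W gives about the
    identity of the fly that produced it, i.e. the Jensen-Shannon divergence
    among the per-fly word distributions with equal weights 1/N."""
    N = len(word_lists)
    P = np.zeros((N, n_symbols))
    for i, words in enumerate(word_lists):
        counts = np.bincount(words, minlength=n_symbols)
        P[i] = counts / counts.sum()              # P^i(W)
    P_ens = P.mean(axis=0)                        # Eq. (2): P^ens(W)
    info = 0.0
    for i in range(N):
        nz = P[i] > 0                             # convention: 0 log 0 = 0
        info += np.sum(P[i, nz] * np.log2(P[i, nz] / P_ens[nz])) / N
    return info                                   # Eq. (1), in bits

# toy usage: two 'flies' with overlapping but different vocabularies
rng = np.random.default_rng(0)
print(identity_info([rng.integers(0, 64, 5000),
                     rng.integers(0, 32, 5000)], n_symbols=64))
```

Because the Jensen-Shannon divergence is bounded, the result can never exceed log2 N bits, i.e. one bit for a pair of flies.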
44 We find that information about identity is accumulating at a more or less constant rate well before the undersampling limits of the experiment are reached (Fig. [sent-76, score-0.266]
45 Since the mean spike rate can be measured by counting the number of 1s in each word W, this information includes the differences in firing rate among the different flies. [sent-83, score-0.823]
46 Even if flies use very similar vocabularies, they may differ substantially in the way that they associate words with particular stimulus features. [sent-84, score-0.794]
47 Since we present the stimulus repeatedly to each fly, we can specify the stimulus precisely by noting the time relative to the beginning of the stimulus. [sent-85, score-0.364]
48 We can therefore consider the word W that the ith fly will generate at time t. [sent-86, score-0.674]
49 This word is drawn from the distribution P^i(W|t), which we can sample, as in Fig. [sent-87, score-0.189]
50 1c-e, by looking across multiple presentations of the same stimulus movie. [sent-88, score-0.202]
51 In parallel with the discussion above, we can measure the information that the word W observed at known t gives us about the identity of the fly: [sent-89, score-0.393]
52 I(W → identity | t; T) = Σ_{i=1..N} (1/N) Σ_W P^i(W|t) log2 [P^i(W|t) / P^ens(W|t)], (3) where the distribution of words used at time t by the whole ensemble of flies is P^ens(W|t) = (1/N) Σ_{i=1..N} P^i(W|t). (4) [sent-91, score-0.733]
53 The natural quantity is an average over all times t: I({W, t} → identity; T) = ⟨ I(W → identity | t; T) ⟩_t bits, (5) where ⟨·⟩_t denotes the average over t. [sent-92, score-0.188]
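A companion sketch for Eqs. (3)-(5), again my own illustration: it assumes the data are arranged as one integer-coded word per stimulus repeat and per time bin, and the naive estimator becomes increasingly biased as the word length grows, which is the undersampling issue noted above.

```python
import numpy as np

def identity_info_given_time(words_by_fly, n_symbols):
    """Eqs. (3)-(5): words_by_fly[i] has shape (repeats, times) and holds the
    integer-coded word that fly i produced at time t on each stimulus repeat.
    Returns the time-averaged identity information I({W,t} -> identity) in bits."""
    N = len(words_by_fly)
    n_times = words_by_fly[0].shape[1]
    total = 0.0
    for t in range(n_times):
        # P^i(W|t), estimated across the repeated presentations
        P = np.stack([np.bincount(w[:, t], minlength=n_symbols) / w.shape[0]
                      for w in words_by_fly])
        P_ens = P.mean(axis=0)                    # Eq. (4): P^ens(W|t)
        for i in range(N):
            nz = P[i] > 0
            total += np.sum(P[i, nz] * np.log2(P[i, nz] / P_ens[nz])) / N
    return total / n_times                        # Eq. (5): average over t

# toy usage: 2 'flies', 40 repeats, 100 time points, 64 possible words
rng = np.random.default_rng(1)
flies = [rng.integers(0, 64, size=(40, 100)) for _ in range(2)]
print(identity_info_given_time(flies, n_symbols=64))
```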
54 Observing both the spike train and the stimulus together provides 32 ± 1 bits/s about the identity of the fly. [sent-96, score-0.675]
55 This is more than six times as much information as we can gain by observing the spike train alone, and corresponds to gaining one bit in ~30 ms; correspondingly, a typical pair of flies in our ensemble can be distinguished reliably in ~30 ms. [sent-97, score-1.127]
56 This is the time scale on which flies actually use their estimates of visual motion to guide their flight during chasing behavior [6], so that the neural codes of different individuals are distinguishable on the time scales relevant to behavior. [sent-98, score-0.976]
57 [Curve labels: Fly 1 vs mixture, Fly 6 vs mixture; axes: Word length (msec).] Figure 2: Distinguishing one fly from others based on spike trains. [sent-107, score-0.888]
58 (a) The average rate of information gained about the identity of a fly from its word distribution, as a function of the word size used (middle curve). [sent-108, score-1.165]
59 The information rate is saturated even before we reach the maximal word length used. [sent-109, score-0.312]
60 Also shown is the average rate of information that the word distribution of fly 1 (and 6) gives about its identity, compared with the word distribution mixture of all of the flies. [sent-110, score-1.028]
61 (b) Similar to (a) , we compute the average amount of information that the distribution of words the fly used at a specific point in time gives about its identity. [sent-112, score-0.67]
62 Averaging over all times, we show the amount of information gained about the identity of fly 1 (and 6) based on its time dependent word distributions, and the average over the 9 flies (middle curve). [sent-113, score-1.565]
63 A "baseline calculation", where we subdivided the spike trains of one fly into artificial new individuals and compared their spike trains, gave significantly smaller values (not shown). [sent-115, score-1.19]
64 Figure 3a shows that the flies in our ensemble span a range of information rates from ~50 to ~150 bits/s. This threefold range of information rates is correlated with the range of spike rates, so that each of the cells transmits nearly a constant amount of information per spike, 2. [sent-116, score-1.336]
65 This universal efficiency (10% variance over the population, despite threefold variations in total spike rate) reflects the fact that cells with higher firing rates are not generating extra spikes at random, but rather that each extra spike is equally informative about the stimulus. [sent-119, score-1.084]
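The efficiency claim is a simple computation once per-fly information and spike rates are in hand. The sketch below uses made-up numbers in place of the nine measured flies and assumes a least-squares line through the origin for the 'linear fit'; both are illustrative assumptions, not the paper's data or fitting procedure.

```python
import numpy as np

# hypothetical (firing rate [spikes/s], information rate [bits/s]) pairs,
# standing in for the nine measured flies of Figure 3a
spike_rate = np.array([21., 27., 34., 40., 46., 51., 55., 60., 64.])
info_rate = np.array([48., 62., 79., 92., 105., 118., 127., 138., 148.])

bits_per_spike = info_rate / spike_rate       # per-fly coding efficiency
# 'universal rate': slope of a least-squares fit constrained through the origin
slope = np.sum(spike_rate * info_rate) / np.sum(spike_rate ** 2)

print(np.round(bits_per_spike, 2))            # roughly constant across flies
print(round(slope, 2), "bits per spike (ensemble fit)")
```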
66 Although information rates are correlated with spike rates, this does not mean that information is carried by a "rate code" alone. [sent-120, score-0.56]
67 To address the rate/timing distinction we compare the total information rate in Fig. [sent-121, score-0.144]
68 3a, which includes the detailed structure of the spike train, with the information carried in the temporal modulations of the spike rate. [sent-122, score-0.793]
69 For all the flies in our ensemble, the total rate at which the spike train carries information is substantially larger than the 'single spike' information- 2. [sent-125, score-1.13]
70 This extra information is carried in the temporal patterns of spikes (Fig. [sent-129, score-0.295]
71 Even though flies differ in the structures of their neural responses, distinguishable responses could be functionally equivalent. [sent-132, score-0.703]
72 Thus it might be that all flies could be ... [sent-133, score-0.584]
73 [Axes: Firing rate (spikes/sec).] Figure 3: The information about the stimulus that a fly's spike train carries is correlated with firing rate, and yet a significant part is in the temporal structure. [sent-176, score-0.879]
74 (a) The rate at which the H1 spike train provides information about the visual stimulus is shown as a function of the average spike rate, with each fly providing a single data point. The linear fit of the data points for the 9 flies corresponds to a universal rate of 2. [sent-177, score-2.256]
75 (b) The extra amount of information that the temporal structure of the spike train of each of the flies carries about the stimulus, as a function of the average firing rate of the fly (see [10]). [sent-180, score-1.166]
76 The average amount of additional information carried by the temporal structure of the spike trains, over the population, is 45 ± 17%. [sent-181, score-0.558]
77 ... with a universal or consensus codebook that allows each individual to make sense of her own spike trains, despite the differences from her conspecifics. [sent-183, score-0.528]
78 Thus we want to ask how much information we lose if the identity of the flies is hidden from us, or equivalently how much each fly can gain by knowing its own individual code. [sent-184, score-1.355]
79 If we observe the response of a neuron but don't know the identity of the individual generating this response, then we are observing responses drawn from the ensemble distributions defined above, P^ens(W|t) and P^ens(W). [sent-185, score-0.452]
80 The information that words provide about the visual stimulus is then I^mix(W → s(t); T) = ⟨ Σ_W P^ens(W|t) log2 [P^ens(W|t) / P^ens(W)] ⟩_t bits. (7) [sent-186, score-0.335]
81 On the other hand, if we know the identity of the fly to be i, we gain the information that its spike train conveys about the stimulus, I^i(W → s(t); T), Eq. [sent-187, score-1.023]
82 The average information loss is then I^loss(W → s(t); T) = Σ_{i=1..N} (1/N) I^i(W → s(t); T) − I^mix(W → s(t); T). (8) [sent-189, score-0.106]
83 After some algebra it can be shown that this average information loss is related to the information that the neural responses give about the identity of the individuals, as defined above: I^loss(W → s(t); T) = I({W, t} → identity; T) − I(W → identity; T). (9) [sent-190, score-0.466]
84 The result is that, on average, not knowing the identity of the fly limits us to extracting only 64 bits/s of information about the visual stimulus. [sent-191, score-0.764]
85 This should be compared with the average information rate of 92. [sent-192, score-0.168]
86 3 bits/s in our ensemble of flies: knowing her own identity allows the average fly to extract 44% more information from H1. [sent-193, score-0.818]
87 Further analysis shows that each individual fly gains approximately the same relative amount of information from knowing its personal codebook. [sent-194, score-0.637]
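A sketch of the quantities behind Eqs. (7)-(8), in the same toy data format as the earlier sketches (my own illustration; it assumes every fly contributes the same number of stimulus repeats, so that pooling repeats across flies realizes the ensemble distributions P^ens):

```python
import numpy as np

def stim_info(words, n_symbols):
    """Bits per word that the words convey about the repeated stimulus:
    < sum_W P(W|t) log2 [P(W|t) / P(W)] >_t, for words of shape (repeats, times)."""
    repeats, n_times = words.shape
    P_w = np.bincount(words.ravel(), minlength=n_symbols) / words.size   # P(W)
    info = 0.0
    for t in range(n_times):
        P_wt = np.bincount(words[:, t], minlength=n_symbols) / repeats   # P(W|t)
        nz = P_wt > 0
        info += np.sum(P_wt[nz] * np.log2(P_wt[nz] / P_w[nz]))
    return info / n_times

def codebook_loss(words_by_fly, n_symbols):
    """Eqs. (7)-(8): average information lost when the identity of the fly is
    hidden, so that the pooled ensemble word statistics replace each fly's own.
    Assumes equal numbers of repeats per fly."""
    pooled = np.concatenate(words_by_fly, axis=0)       # samples from P^ens
    I_mix = stim_info(pooled, n_symbols)                # Eq. (7)
    I_ind = np.mean([stim_info(w, n_symbols) for w in words_by_fly])
    return I_ind - I_mix                                # Eq. (8)

# toy usage, reusing the (repeats, times) format from the previous sketch
rng = np.random.default_rng(2)
flies = [rng.integers(0, 64, size=(40, 100)) for _ in range(2)]
print(codebook_loss(flies, n_symbols=64))
```

With matched sample sizes this loss should agree, up to floating-point error, with the Eq. (9) difference computed from the earlier sketches: identity_info_given_time(flies, 64) minus identity_info([w.ravel() for w in flies], 64).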
88 5 Discussion. We have found that the flies use similar yet distinct sets of 'words' to encode information about the stimulus. [sent-195, score-0.684]
89 The main source of this difference is not in the total set of words (or spike rates) but rather in how (i.e. [sent-196, score-0.38]
90 when) these words are used to encode the stimulus; taking this into account, the flies are discriminable on time scales of relevance to behavior. [sent-198, score-0.699]
91 Using their different codes, the flies' H1 spike trains convey very different amounts of information from the same visual inputs. [sent-199, score-0.581]
92 Nonetheless, all the flies achieve a high and constant efficiency in their encoding of this information, and the temporal structure of their spike trains adds nearly 50% more information than that carried by the rate. [sent-200, score-1.193]
93 So how much is universal and how much is individual? [sent-201, score-0.078]
94 We find that each individual fly would lose ~30% of the visual information carried by this neuron if it 'knew' only the codebook appropriate to the whole ensemble of flies. [sent-202, score-0.831]
95 We leave the judgment of whether this is high individuality or not to the reader, but recall that this is the individuality in an identified neuron. [sent-203, score-0.209]
96 Hence, we should expect that all neural circuits, both vertebrate and invertebrate, express a degree of universality and a degree of individuality. [sent-204, score-0.064]
97 We hope that the methods introduced here will help to explore this issue of individuality more generally. [sent-205, score-0.083]
98 Nature and precision of temporal coding in visual cortex: a metric-space analysis, J. [sent-219, score-0.141]
99 Bialek, Reproducibility and variability in neural spike trains, Science 275, 1805-1808, (1997). [sent-246, score-0.35]
100 Entropy and information in neural spike trains, Phys. [sent-252, score-0.518]
wordName wordTfidf (topN-words)
[('flies', 0.584), ('fly', 0.453), ('spike', 0.315), ('word', 0.189), ('wlt', 0.167), ('stimulus', 0.166), ('identity', 0.143), ('pens', 0.134), ('trains', 0.107), ('spikes', 0.09), ('individuals', 0.084), ('individuality', 0.083), ('universal', 0.078), ('ruyter', 0.078), ('ensemble', 0.073), ('movie', 0.072), ('ms', 0.072), ('hies', 0.067), ('steveninck', 0.065), ('visual', 0.064), ('rate', 0.062), ('information', 0.061), ('responses', 0.055), ('response', 0.055), ('firing', 0.054), ('carried', 0.054), ('code', 0.052), ('train', 0.051), ('codebook', 0.05), ('koberie', 0.05), ('bialek', 0.049), ('temporal', 0.048), ('codes', 0.046), ('pi', 0.045), ('average', 0.045), ('divergence', 0.045), ('individual', 0.045), ('rates', 0.045), ('words', 0.044), ('knowing', 0.043), ('extra', 0.042), ('de', 0.041), ('differences', 0.04), ('among', 0.04), ('encode', 0.039), ('van', 0.038), ('motion', 0.037), ('carries', 0.036), ('shannon', 0.036), ('presentations', 0.036), ('neural', 0.035), ('amount', 0.035), ('convey', 0.034), ('chasing', 0.033), ('djs', 0.033), ('identityj', 0.033), ('iffiix', 0.033), ('invertebrates', 0.033), ('lobula', 0.033), ('naama', 0.033), ('synergy', 0.033), ('vocabularies', 0.033), ('wjt', 0.033), ('cells', 0.032), ('time', 0.032), ('neuron', 0.031), ('vs', 0.031), ('bins', 0.031), ('hi', 0.03), ('mixture', 0.029), ('coding', 0.029), ('cts', 0.029), ('universality', 0.029), ('plate', 0.029), ('distinguishable', 0.029), ('distributions', 0.028), ('stimuli', 0.028), ('variations', 0.026), ('comp', 0.026), ('msec', 0.026), ('lose', 0.026), ('bars', 0.025), ('tishby', 0.024), ('notions', 0.024), ('neurons', 0.024), ('efficiency', 0.024), ('correlated', 0.024), ('binary', 0.023), ('traces', 0.023), ('nervous', 0.023), ('gained', 0.023), ('identified', 0.022), ('observing', 0.022), ('things', 0.021), ('reliably', 0.021), ('nonetheless', 0.021), ('total', 0.021), ('whether', 0.021), ('en', 0.021), ('question', 0.02)]
simIndex simValue paperId paperTitle
same-paper 1 1.0000004 141 nips-2000-Universality and Individuality in a Neural Code
Author: Elad Schneidman, Naama Brenner, Naftali Tishby, Robert R. de Ruyter van Steveninck, William Bialek
Abstract: The problem of neural coding is to understand how sequences of action potentials (spikes) are related to sensory stimuli, motor outputs, or (ultimately) thoughts and intentions. One clear question is whether the same coding rules are used by different neurons, or by corresponding neurons in different individuals. We present a quantitative formulation of this problem using ideas from information theory, and apply this approach to the analysis of experiments in the fly visual system. We find significant individual differences in the structure of the code, particularly in the way that temporal patterns of spikes are used to convey information beyond that available from variations in spike rate. On the other hand, all the flies in our ensemble exhibit a high coding efficiency, so that every spike carries the same amount of information in all the individuals. Thus the neural code has a quantifiable mixture of individuality and universality. 1
2 0.29636052 146 nips-2000-What Can a Single Neuron Compute?
Author: Blaise Agüera y Arcas, Adrienne L. Fairhall, William Bialek
Abstract: In this paper we formulate a description of the computation performed by a neuron as a combination of dimensional reduction and nonlinearity. We implement this description for the HodgkinHuxley model, identify the most relevant dimensions and find the nonlinearity. A two dimensional description already captures a significant fraction of the information that spikes carry about dynamic inputs. This description also shows that computation in the Hodgkin-Huxley model is more complex than a simple integrateand-fire or perceptron model. 1
3 0.283288 88 nips-2000-Multiple Timescales of Adaptation in a Neural Code
Author: Adrienne L. Fairhall, Geoffrey D. Lewen, William Bialek, Robert R. de Ruyter van Steveninck
Abstract: Many neural systems extend their dynamic range by adaptation. We examine the timescales of adaptation in the context of dynamically modulated rapidly-varying stimuli, and demonstrate in the fly visual system that adaptation to the statistical ensemble of the stimulus dynamically maximizes information transmission about the time-dependent stimulus. Further, while the rate response has long transients, the adaptation takes place on timescales consistent with optimal variance estimation.
4 0.21368811 55 nips-2000-Finding the Key to a Synapse
Author: Thomas Natschläger, Wolfgang Maass
Abstract: Experimental data have shown that synapses are heterogeneous: different synapses respond with different sequences of amplitudes of postsynaptic responses to the same spike train. Neither the role of synaptic dynamics itself nor the role of the heterogeneity of synaptic dynamics for computations in neural circuits is well understood. We present in this article methods that make it feasible to compute for a given synapse with known synaptic parameters the spike train that is optimally fitted to the synapse, for example in the sense that it produces the largest sum of postsynaptic responses. To our surprise we find that most of these optimally fitted spike trains match common firing patterns of specific types of neurons that are discussed in the literature.
5 0.13548362 129 nips-2000-Temporally Dependent Plasticity: An Information Theoretic Account
Author: Gal Chechik, Naftali Tishby
Abstract: The paradigm of Hebbian learning has recently received a novel interpretation with the discovery of synaptic plasticity that depends on the relative timing of pre and post synaptic spikes. This paper derives a temporally dependent learning rule from the basic principle of mutual information maximization and studies its relation to the experimentally observed plasticity. We find that a supervised spike-dependent learning rule sharing similar structure with the experimentally observed plasticity increases mutual information to a stable near optimal level. Moreover, the analysis reveals how the temporal structure of time-dependent learning rules is determined by the temporal filter applied by neurons over their inputs. These results suggest experimental prediction as to the dependency of the learning rule on neuronal biophysical parameters 1
6 0.11866689 6 nips-2000-A Neural Probabilistic Language Model
7 0.096108668 40 nips-2000-Dendritic Compartmentalization Could Underlie Competition and Attentional Biasing of Simultaneous Visual Stimuli
8 0.072584011 71 nips-2000-Interactive Parts Model: An Application to Recognition of On-line Cursive Script
9 0.059303366 45 nips-2000-Emergence of Movement Sensitive Neurons' Properties by Learning a Sparse Code for Natural Moving Images
10 0.059033453 43 nips-2000-Dopamine Bonuses
11 0.057580475 89 nips-2000-Natural Sound Statistics and Divisive Normalization in the Auditory System
12 0.056403641 102 nips-2000-Position Variance, Recurrence and Perceptual Learning
13 0.055624098 8 nips-2000-A New Model of Spatial Representation in Multimodal Brain Areas
14 0.054881647 10 nips-2000-A Productive, Systematic Framework for the Representation of Visual Structure
15 0.053807676 67 nips-2000-Homeostasis in a Silicon Integrate and Fire Neuron
16 0.052757572 68 nips-2000-Improved Output Coding for Classification Using Continuous Relaxation
17 0.051496208 107 nips-2000-Rate-coded Restricted Boltzmann Machines for Face Recognition
18 0.048817236 101 nips-2000-Place Cells and Spatial Navigation Based on 2D Visual Feature Extraction, Path Integration, and Reinforcement Learning
19 0.048773501 42 nips-2000-Divisive and Subtractive Mask Effects: Linking Psychophysics and Biophysics
20 0.047509719 130 nips-2000-Text Classification using String Kernels
topicId topicWeight
[(0, 0.166), (1, -0.227), (2, -0.268), (3, -0.006), (4, 0.008), (5, -0.02), (6, -0.054), (7, 0.302), (8, 0.017), (9, 0.1), (10, 0.012), (11, -0.008), (12, 0.106), (13, 0.32), (14, -0.122), (15, 0.131), (16, 0.003), (17, -0.065), (18, -0.157), (19, -0.066), (20, 0.021), (21, 0.028), (22, 0.083), (23, 0.044), (24, 0.066), (25, -0.069), (26, -0.073), (27, 0.027), (28, -0.06), (29, -0.045), (30, -0.013), (31, -0.196), (32, 0.103), (33, -0.093), (34, -0.016), (35, -0.072), (36, 0.036), (37, -0.012), (38, -0.003), (39, -0.082), (40, 0.014), (41, 0.058), (42, -0.004), (43, 0.076), (44, 0.016), (45, 0.002), (46, 0.055), (47, 0.041), (48, 0.005), (49, -0.041)]
simIndex simValue paperId paperTitle
same-paper 1 0.97766531 141 nips-2000-Universality and Individuality in a Neural Code
Author: Elad Schneidman, Naama Brenner, Naftali Tishby, Robert R. de Ruyter van Steveninck, William Bialek
Abstract: The problem of neural coding is to understand how sequences of action potentials (spikes) are related to sensory stimuli, motor outputs, or (ultimately) thoughts and intentions. One clear question is whether the same coding rules are used by different neurons, or by corresponding neurons in different individuals. We present a quantitative formulation of this problem using ideas from information theory, and apply this approach to the analysis of experiments in the fly visual system. We find significant individual differences in the structure of the code, particularly in the way that temporal patterns of spikes are used to convey information beyond that available from variations in spike rate. On the other hand, all the flies in our ensemble exhibit a high coding efficiency, so that every spike carries the same amount of information in all the individuals. Thus the neural code has a quantifiable mixture of individuality and universality. 1
2 0.78594023 146 nips-2000-What Can a Single Neuron Compute?
Author: Blaise Agüera y Arcas, Adrienne L. Fairhall, William Bialek
Abstract: In this paper we formulate a description of the computation performed by a neuron as a combination of dimensional reduction and nonlinearity. We implement this description for the HodgkinHuxley model, identify the most relevant dimensions and find the nonlinearity. A two dimensional description already captures a significant fraction of the information that spikes carry about dynamic inputs. This description also shows that computation in the Hodgkin-Huxley model is more complex than a simple integrateand-fire or perceptron model. 1
3 0.75614554 88 nips-2000-Multiple Timescales of Adaptation in a Neural Code
Author: Adrienne L. Fairhall, Geoffrey D. Lewen, William Bialek, Robert R. de Ruyter van Steveninck
Abstract: Many neural systems extend their dynamic range by adaptation. We examine the timescales of adaptation in the context of dynamically modulated rapidly-varying stimuli, and demonstrate in the fly visual system that adaptation to the statistical ensemble of the stimulus dynamically maximizes information transmission about the time-dependent stimulus. Further, while the rate response has long transients, the adaptation takes place on timescales consistent with optimal variance estimation.
4 0.53067052 55 nips-2000-Finding the Key to a Synapse
Author: Thomas Natschläger, Wolfgang Maass
Abstract: Experimental data have shown that synapses are heterogeneous: different synapses respond with different sequences of amplitudes of postsynaptic responses to the same spike train. Neither the role of synaptic dynamics itself nor the role of the heterogeneity of synaptic dynamics for computations in neural circuits is well understood. We present in this article methods that make it feasible to compute for a given synapse with known synaptic parameters the spike train that is optimally fitted to the synapse, for example in the sense that it produces the largest sum of postsynaptic responses. To our surprise we find that most of these optimally fitted spike trains match common firing patterns of specific types of neurons that are discussed in the literature.
5 0.36634761 6 nips-2000-A Neural Probabilistic Language Model
Author: Yoshua Bengio, Réjean Ducharme, Pascal Vincent
Abstract: A goal of statistical language modeling is to learn the joint probability function of sequences of words. This is intrinsically difficult because of the curse of dimensionality: we propose to fight it with its own weapons. In the proposed approach one learns simultaneously (1) a distributed representation for each word (i.e. a similarity between words) along with (2) the probability function for word sequences, expressed with these representations. Generalization is obtained because a sequence of words that has never been seen before gets high probability if it is made of words that are similar to words forming an already seen sentence. We report on experiments using neural networks for the probability function, showing on two text corpora that the proposed approach very significantly improves on a state-of-the-art trigram model.
7 0.27218688 129 nips-2000-Temporally Dependent Plasticity: An Information Theoretic Account
8 0.26933029 43 nips-2000-Dopamine Bonuses
9 0.26411933 71 nips-2000-Interactive Parts Model: An Application to Recognition of On-line Cursive Script
10 0.19316345 131 nips-2000-The Early Word Catches the Weights
11 0.19122289 10 nips-2000-A Productive, Systematic Framework for the Representation of Visual Structure
12 0.18392314 8 nips-2000-A New Model of Spatial Representation in Multimodal Brain Areas
13 0.17822652 45 nips-2000-Emergence of Movement Sensitive Neurons' Properties by Learning a Sparse Code for Natural Moving Images
14 0.17013018 25 nips-2000-Analysis of Bit Error Probability of Direct-Sequence CDMA Multiuser Demodulators
15 0.15599674 102 nips-2000-Position Variance, Recurrence and Perceptual Learning
16 0.14954761 68 nips-2000-Improved Output Coding for Classification Using Continuous Relaxation
17 0.14943141 67 nips-2000-Homeostasis in a Silicon Integrate and Fire Neuron
18 0.14883488 19 nips-2000-Adaptive Object Representation with Hierarchically-Distributed Memory Sites
19 0.14787275 125 nips-2000-Stability and Noise in Biochemical Switches
20 0.14106289 38 nips-2000-Data Clustering by Markovian Relaxation and the Information Bottleneck Method
topicId topicWeight
[(10, 0.016), (17, 0.061), (32, 0.021), (33, 0.025), (42, 0.08), (55, 0.022), (62, 0.03), (65, 0.029), (67, 0.048), (75, 0.025), (76, 0.034), (79, 0.015), (81, 0.441), (90, 0.011), (97, 0.019)]
simIndex simValue paperId paperTitle
same-paper 1 0.95908064 141 nips-2000-Universality and Individuality in a Neural Code
Author: Elad Schneidman, Naama Brenner, Naftali Tishby, Robert R. de Ruyter van Steveninck, William Bialek
Abstract: The problem of neural coding is to understand how sequences of action potentials (spikes) are related to sensory stimuli, motor outputs, or (ultimately) thoughts and intentions. One clear question is whether the same coding rules are used by different neurons, or by corresponding neurons in different individuals. We present a quantitative formulation of this problem using ideas from information theory, and apply this approach to the analysis of experiments in the fly visual system. We find significant individual differences in the structure of the code, particularly in the way that temporal patterns of spikes are used to convey information beyond that available from variations in spike rate. On the other hand, all the flies in our ensemble exhibit a high coding efficiency, so that every spike carries the same amount of information in all the individuals. Thus the neural code has a quantifiable mixture of individuality and universality. 1
2 0.95095521 66 nips-2000-Hippocampally-Dependent Consolidation in a Hierarchical Model of Neocortex
Author: Szabolcs KĂĄli, Peter Dayan
Abstract: In memory consolidation, declarative memories which initially require the hippocampus for their recall, ultimately become independent of it. Consolidation has been the focus of numerous experimental and qualitative modeling studies, but only little quantitative exploration. We present a consolidation model in which hierarchical connections in the cortex, that initially instantiate purely semantic information acquired through probabilistic unsupervised learning, come to instantiate episodic information as well. The hippocampus is responsible for helping complete partial input patterns before consolidation is complete, while also training the cortex to perform appropriate completion by itself.
3 0.91987401 137 nips-2000-The Unscented Particle Filter
Author: Rudolph van der Merwe, Arnaud Doucet, Nando de Freitas, Eric A. Wan
Abstract: In this paper, we propose a new particle filter based on sequential importance sampling. The algorithm uses a bank of unscented filters to obtain the importance proposal distribution. This proposal has two very
4 0.85152 103 nips-2000-Probabilistic Semantic Video Indexing
Author: Milind R. Naphade, Igor Kozintsev, Thomas S. Huang
Abstract: We propose a novel probabilistic framework for semantic video indexing. We define probabilistic multimedia objects (multijects) to map low-level media features to high-level semantic labels. A graphical network of such multijects (multinet) captures scene context by discovering intra-frame as well as inter-frame dependency relations between the concepts. The main contribution is a novel application of a factor graph framework to model this network. We model relations between semantic concepts in terms of their co-occurrence as well as the temporal dependencies between these concepts within video shots. Using the sum-product algorithm [1] for approximate or exact inference in these factor graph multinets, we attempt to correct errors made during isolated concept detection by forcing high-level constraints. This results in a significant improvement in the overall detection performance. 1
5 0.59674603 55 nips-2000-Finding the Key to a Synapse
Author: Thomas Natschläger, Wolfgang Maass
Abstract: Experimental data have shown that synapses are heterogeneous: different synapses respond with different sequences of amplitudes of postsynaptic responses to the same spike train. Neither the role of synaptic dynamics itself nor the role of the heterogeneity of synaptic dynamics for computations in neural circuits is well understood. We present in this article methods that make it feasible to compute for a given synapse with known synaptic parameters the spike train that is optimally fitted to the synapse, for example in the sense that it produces the largest sum of postsynaptic responses. To our surprise we find that most of these optimally fitted spike trains match common firing patterns of specific types of neurons that are discussed in the literature.
6 0.57640928 88 nips-2000-Multiple Timescales of Adaptation in a Neural Code
7 0.55443257 104 nips-2000-Processing of Time Series by Neural Circuits with Biologically Realistic Synaptic Dynamics
8 0.49657819 146 nips-2000-What Can a Single Neuron Compute?
9 0.49638861 131 nips-2000-The Early Word Catches the Weights
10 0.46351662 43 nips-2000-Dopamine Bonuses
11 0.41310439 129 nips-2000-Temporally Dependent Plasticity: An Information Theoretic Account
12 0.40876555 124 nips-2000-Spike-Timing-Dependent Learning for Oscillatory Networks
13 0.40859607 80 nips-2000-Learning Switching Linear Models of Human Motion
14 0.40353158 40 nips-2000-Dendritic Compartmentalization Could Underlie Competition and Attentional Biasing of Simultaneous Visual Stimuli
15 0.40054664 89 nips-2000-Natural Sound Statistics and Divisive Normalization in the Auditory System
16 0.3989197 49 nips-2000-Explaining Away in Weight Space
17 0.38956645 125 nips-2000-Stability and Noise in Biochemical Switches
18 0.38738847 30 nips-2000-Bayesian Video Shot Segmentation
19 0.3830238 71 nips-2000-Interactive Parts Model: An Application to Recognition of On-line Cursive Script
20 0.38299379 42 nips-2000-Divisive and Subtractive Mask Effects: Linking Psychophysics and Biophysics