nips nips2001 nips2001-57 knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Maoz Shamir, Haim Sompolinsky
Abstract: Population codes often rely on the tuning of the mean responses to the stimulus parameters. However, this information can be greatly suppressed by long range correlations. Here we study the efficiency of coding information in the second order statistics of the population responses. We show that the Fisher Information of this system grows linearly with the size of the system. We propose a bilinear readout model for extracting information from correlation codes, and evaluate its performance in discrimination and estimation tasks. It is shown that the main source of information in this system is the stimulus dependence of the variances of the single neuron responses.
Reference: text
sentIndex sentText sentNum sentScore
1 Abstract Population codes often rely on the tuning of the mean responses to the stimulus parameters. [sent-4, score-0.399]
2 However, this information can be greatly suppressed by long range correlations. [sent-5, score-0.039]
3 Here we study the efficiency of coding information in the second order statistics of the population responses. [sent-6, score-0.3]
4 We show that the Fisher Information of this system grows linearly with the size of the system. [sent-7, score-0.113]
5 We propose a bilinear readout model for extracting information from correlation codes, and evaluate its performance in discrimination and estimation tasks. [sent-8, score-1.217]
6 It is shown that the main source of information in this system is the stimulus dependence of the variances of the single neuron responses. [sent-9, score-0.336]
7 1 Introduction Experiments in recent years have shown that in many cortical areas, the fluctuations in the responses of neurons to external stimuli are significantly correlated [1, 2, 3, 4], raising important questions regarding the computational implications of neuronal correlations. [sent-10, score-0.374]
8 Recent theoretical studies have addressed the issue of how neuronal correlations affect the efficiency of population coding [4, 5, 6]. [sent-11, score-0.543]
9 It is often assumed that the information about stimuli is coded mainly in the mean neuronal responses, [sent-12, score-0.217]
10 e.g., in the tuning of the mean firing rates, and that by averaging the tuned responses across large populations, an accurate estimate can be obtained despite the significant noise in the single neuron responses. [sent-14, score-0.296]
11 Indeed, for uncorrelated neurons the Fisher Information of the population is extensive [7]; namely, it increases linearly with the number of neurons in the population. [sent-15, score-0.56]
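The extensivity of the Fisher Information for uncorrelated populations can be checked directly. The sketch below uses cosine tuning curves and fixed Gaussian noise variance (illustrative choices, not taken from this paper) and evaluates $J(\theta) = \sum_i f_i'(\theta)^2 / \sigma^2$, which doubles when the population size doubles:

```python
import numpy as np

def fisher_info_uncorrelated(n_neurons, theta=0.0, sigma2=1.0):
    """Fisher information of n independent Gaussian neurons with cosine
    mean tuning f_i(theta) = cos(theta - phi_i) and fixed noise variance:
    J = sum_i f_i'(theta)^2 / sigma2."""
    phi = np.linspace(-np.pi, np.pi, n_neurons, endpoint=False)  # preferred angles
    df = -np.sin(theta - phi)  # derivative of the tuning curve w.r.t. theta
    return np.sum(df**2) / sigma2

for n in (100, 200, 400):
    print(n, fisher_info_uncorrelated(n))  # grows as n/2: linear in n
```

For a uniform grid of preferred angles the sum evaluates exactly to $N/2\sigma^2$, making the linear scaling explicit.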
12 Furthermore, it has been shown that this extensive information can be extracted by relatively simple linear readout mechanisms [7, 8]. [sent-16, score-0.503]
13 However, it was recently shown [6] that positive correlations which vary smoothly with space may drastically suppress the information in the mean responses. [sent-17, score-0.254]
14 In particular, the Fisher Information of the system saturates to a finite value as the system size grows. [sent-18, score-0.15]
15 This raises questions about the computational utility of neuronal population codes. [sent-19, score-0.357]
16 Neuronal population responses can represent information in the higher order statistics of the responses [3], not only in their means. [sent-20, score-0.476]
17 In this work, we study the accuracy of coding information in the second order statistics. [sent-21, score-0.076]
18 Specifically, we assume that the neuronal responses obey multivariate Gaussian statistics governed by a stimulus-dependent correlation matrix. [sent-23, score-0.47]
19 We ask whether the Fisher Information of such a system is extensive even in the presence of strong correlations in the neuronal noise. [sent-24, score-0.387]
20 2 Fisher Information of a Correlation Code Our model consists of a system of $N$ neurons that code a 2D angle $\theta$. [sent-26, score-0.199]
21 753 ) ' & " 1 B Q@ %$" " B C@ 1 A9 @ ' (¡ " is the mean activity of the -th neuron and its dependence on is usually referred Here to as the tuning curve of the neuron; is the correlation matrix; and is a normalization constant. [sent-28, score-0.342]
22 Here we shall limit ourselves to the case of multiplicative modulation of the correlations. [sent-29, score-0.133]
23 It is important to note that the single-neuron variance adds a contribution to the correlation matrix which is larger than the contribution of the smooth part of the correlations. [sent-33, score-0.085]
24 For reasons that will become clear below, we write $C(\theta) = C^{s}(\theta) + C^{d}(\theta)$, (5) where $C^{s}$ denotes the smooth part of the correlation matrix and $C^{d}$ its diagonal part, which in the example of Eqs. [sent-34, score-0.235]
25 (6) A useful measure of the accuracy of a population code is the Fisher Information (FI). [sent-36, score-0.227]
26 In the case of uncorrelated populations it is well known that FI increases linearly with system size [7], indicating that the accuracy of the population coding improves as the system size is increased. [sent-37, score-0.707]
27 Furthermore, it has been shown that relatively simple, linear schemes can provide reasonable readout models for extracting the information in uncorrelated populations [8]. [sent-38, score-0.739]
28 The form of these terms reveals that in general the correlations play two roles. [sent-40, score-0.147]
29 First, they control the efficiency of the information encoded in the mean activities. [sent-41, score-0.174]
30 Secondly, the stimulus dependence of the correlations provides an additional source of information about the stimulus. [sent-42, score-0.157]
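These two contributions correspond to the standard decomposition of the Fisher Information of a multivariate Gaussian whose mean and covariance both depend on the stimulus, $J = f'^{\mathsf{T}} C^{-1} f' + \tfrac{1}{2}\,\mathrm{Tr}\!\big[(C^{-1} C')^2\big]$. A minimal sketch of this generic formula (not the paper's specific correlation model) for pure variance coding:

```python
import numpy as np

def gaussian_fisher_info(fprime, C, Cprime):
    """Fisher information of a multivariate Gaussian N(f(theta), C(theta)):
    J = f'^T C^{-1} f' + (1/2) Tr[(C^{-1} C')^2]."""
    Cinv = np.linalg.inv(C)
    mean_term = fprime @ Cinv @ fprime
    A = Cinv @ Cprime
    return mean_term + 0.5 * np.trace(A @ A)

# Pure variance coding: untuned means, C(theta) = sigma2(theta) * I.
n, sigma2, dsigma2 = 50, 2.0, 0.5
J = gaussian_fisher_info(np.zeros(n), sigma2 * np.eye(n), dsigma2 * np.eye(n))
# Analytically J = (n/2) * (sigma2'/sigma2)^2, i.e. linear in n.
print(J)
```

With untuned means the mean term vanishes and the whole FI comes from the stimulus dependence of the variances, consistent with the conclusion of this section.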
31 When the correlations are independent of the stimulus, it was shown [6] that positive correlations with long correlation length [sent-43, score-0.3]
32 [Figure 1: The stimulus-dependent correlation matrix $C(\phi,\psi)$ for $\phi = 0^\circ, \pm 60^\circ, \pm 120^\circ$, plotted against $\psi$ from $-180$ to $180$ deg.] [sent-45, score-0.153]
33 cause the saturation of FI to a finite limit at large $N$. [sent-49, score-0.063]
34 This implies that in the presence of such correlations, population averaging cannot overcome the noise even in large networks. [sent-50, score-0.231]
35 Evaluating these terms for the multiplicative modulation model, the correlation contribution is positive. [sent-54, score-0.064]
36 (2), we find that evaluation of the mean term shows that it saturates at large $N$ to a small finite value, so that for large $N$ Eq. (11) holds, as shown in Fig. [sent-56, score-0.107]
37 We thus conclude that the FI increases linearly with $N$ and is equal, for large $N$, to the FI of variance coding, namely to that of an independent population in which information is encoded in the activity variances. [sent-58, score-0.504]
38 Since in our system the information is encoded in the second order statistics of the population responses, it is obvious that linear readouts are inadequate. [sent-59, score-0.444]
39 This raises the question of whether there are relatively simple nonlinear readout models for such systems. [sent-60, score-0.497]
40 In the next sections we will study bilinear readouts and show that they are useful models for extracting information from correlation codes. [sent-61, score-0.805]
41 3 A Bilinear Readout for Discrimination Tasks In a two-interval discrimination task the system is given two sets of neuronal activities generated by two proximal stimuli, $\theta$ and $\theta + \delta\theta$, and must infer which stimulus generated which activity. [sent-62, score-0.476]
42 The Maximum-Likelihood (ML) discrimination yields the probability of error given by Eq. (12), where the discriminability equals $d'$. [sent-63, score-0.081]
43 [Figure 2: FI (in deg$^{-2}$) of the model, Eqs. (2)-(4), as a function of the number of neurons in the system.] [sent-69, score-0.052]
44 It has been previously shown that in the case of uncorrelated populations with mean coding, the optimal linear readout achieves the Maximum-Likelihood discrimination performance at large $N$ [7]. [sent-76, score-0.654]
45 In order to isolate the properties of correlation coding we will assume hereafter that no information [sent-77, score-0.229]
46 is coded in the average firing rates of the neurons. We suggest a bilinear readout as a simple generalization of the linear readout to correlation codes. [sent-78, score-1.502]
47 In a discrimination task the bilinear readout makes a decision according to the sign of a quadratic form, $\Delta = r^{(1)\mathsf{T}} W r^{(1)} - r^{(2)\mathsf{T}} W r^{(2)}$. (13) [sent-79, score-0.981]
48 Maximizing the signal-to-noise ratio of this rule, the optimal bilinear discriminator (OBD) matrix is given by Eq. (14). Using the optimal weights to evaluate the discrimination error, we obtain that at large $N$ the performance of the OBD saturates the ML performance, Eq. [sent-82, score-0.806]
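A toy version of this discrimination scheme can be sketched numerically. The sketch below is illustrative, not the paper's model: means are untuned, only the variance carries the stimulus, and the weight matrix is simply the identity (the summed "energy" readout) rather than the optimal OBD of Eq. (14):

```python
import numpy as np

rng = np.random.default_rng(0)

def bilinear_statistic(r, W):
    """Quadratic (bilinear) readout r^T W r of a population response vector."""
    return r @ W @ r

def responses(theta, n, trials, rng):
    # Untuned means; only the variance depends on the stimulus (illustrative).
    sigma2 = 1.0 + 0.5 * np.cos(theta)
    return rng.normal(0.0, np.sqrt(sigma2), size=(trials, n))

n, trials = 200, 1000
W = np.eye(n)  # simplest bilinear weight matrix, not the optimal OBD
s_a = np.array([bilinear_statistic(r, W) for r in responses(0.0, n, trials, rng)])
s_b = np.array([bilinear_statistic(r, W) for r in responses(np.pi, n, trials, rng)])
# Decide by the sign of the difference of the two quadratic forms, as in Eq. (13).
accuracy = np.mean(s_a > s_b)
print(accuracy)
```

Even this crude choice of $W$ discriminates the two variance levels reliably once $N$ is large, because the quadratic statistic averages $N$ independent squared responses.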
49 Thus, since the FI of this model increases linearly with the size of the system, the discriminability increases as $\sqrt{N}$. [sent-84, score-0.217]
50 Since the correlation matrix depends on the stimulus $\theta$, the OBD matrix, Eq. [sent-85, score-0.204]
51 4.1 Optimal bilinear readout for estimation To study the global performance of bilinear readouts, we investigate bilinear readouts which minimize the square error of estimating the angle averaged over the whole range of $\theta$. [sent-89, score-2.325]
52 For convenience we use complex notation for the encoded angle, and write the estimator as [sent-90, score-0.096]
53 $\hat{\theta} = \arg\big(\sum_{ij} w_{ij}\, r_i r_j\big)$, (15) where the $w_{ij}$ are stimulus-independent complex weights. [sent-91, score-0.157]
54 We define the optimal bilinear estimator (OBE) as the set of weights that minimizes on average the quadratic estimation error of an unbiased estimator. [sent-92, score-0.709]
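A diagonal special case of such a complex-weighted bilinear estimator can be sketched numerically: taking $w_{ij} = e^{i\phi_i}\delta_{ij}$ gives a quadratic population vector on the squared responses. The tuning shape, seed, and averaging over trials below are illustrative assumptions; neurons are independent here, so this simple readout suffices (in the correlated case the paper shows such a quadratic readout is inefficient):

```python
import numpy as np

rng = np.random.default_rng(1)

n = 512
phi = np.linspace(-np.pi, np.pi, n, endpoint=False)  # preferred angles
theta = 0.7                                          # true stimulus
sigma2 = 1.0 + 0.5 * np.cos(theta - phi)             # assumed variance tuning

# Diagonal complex weights w_i = e^{i phi_i}: theta_hat = arg(sum_i w_i r_i^2).
trials = 200
z = 0.0 + 0.0j
for _ in range(trials):
    r = rng.normal(0.0, np.sqrt(sigma2))  # independent Gaussian responses
    z += np.sum(np.exp(1j * phi) * r**2)
theta_hat = np.angle(z)
print(theta_hat)  # close to 0.7
```

The expected value of $\sum_i e^{i\phi_i} r_i^2$ points in the direction $e^{i\theta}$, so the phase of the accumulated sum recovers the encoded angle from the variances alone.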
55 This error is given by Eq. (16), where $\lambda$ is the Lagrange multiplier of the unbiasedness constraint. [sent-93, score-0.036]
56 In general, it is impossible to find a perfectly unbiased estimator for a continuously varied stimulus, using a finite number of weights. [sent-94, score-0.071]
57 However, in the case of angle estimation, we can employ the underlying rotational symmetry to generate such an estimator. [sent-95, score-0.121]
58 For this we use the symmetry of the correlation matrix, Eq. [sent-96, score-0.195]
59 Using these symmetry properties, the weight matrix can be written in the following form (for even $N$), Eq. (18). [sent-101, score-0.042]
60 Figure 3(a) presents an example of this weight function; the results also suggest that the function is mainly determined by a few harmonics plus a delta function. [sent-104, score-0.073]
61 Below we will use this fact to study simpler forms of bilinear readout. [sent-105, score-0.48]
62 Figure 3(b) shows the numerical calculation of the OBE error (open circles) as a function of $N$. [sent-109, score-0.078]
63 The dashed line is the asymptotic behavior, given by Eq. [sent-110, score-0.141]
64 From the graph one can see that the estimation efficiency of this readout grows linearly with the size of the system, $N$, but is lower than the bound. [sent-113, score-0.539]
65 4.2 Truncated bilinear readout Motivated by the simple structure of the optimal readout matrix observed in Fig. [sent-115, score-1.411]
66 3 (a), we studied a bilinear readout of the form of Eqs. [sent-116, score-0.9]
67 Restricting the number of harmonics to relatively small integers, we evaluated numerically the optimal values of the coefficients for large systems. [sent-118, score-0.126]
68 Surprisingly, we found that for small $k$ and large $N$, these coefficients approach a common value which is independent of the specifics of the model, yielding a bilinear weight matrix of the form [sent-119, score-0.589]
69 (20) Figure 4 shows the numerical results for the squared average error of this readout for several values of the truncation and the system size. [sent-120, score-0.498]
70 (b) Numerical evaluation of the inverse squared estimation error, for the optimal bilinear readout in the multiplicative modulation model (open circles). [sent-126, score-1.087]
71 The dashed line is the asymptotic behavior, given by Eq. [sent-127, score-0.141]
72 Here , for the optimal bilinear readout in the multiplicative modulation model. [sent-129, score-1.045]
73 ¢ ) ( © ) v ¤ £ inverse square error initially increases linearly with but saturates in the limit of large . [sent-133, score-0.356]
74 The precise form of the saturation value depends on the specifics of the correlation model. [sent-135, score-0.153]
75 Figure 4 shows that for this range of $N$, the deviations of the inverse square error from linearity are small. [sent-138, score-0.124]
76 Thus, in this regime, the error is given by the asymptotic behavior, Eq. [sent-139, score-0.065]
77 We thus conclude that the OBE (with unlimited $k$) will generate an inverse square estimation error which increases linearly with $N$, with a coefficient given by Eq. [sent-141, score-0.088]
78 (19), and that this value can be achieved for reasonable values of $k$ by an approximate bilinear weight matrix of the form of Eq. [sent-142, score-0.671]
79 This coefficient, given by Eq. (19), is smaller than the optimal value given by the full FI, Eq. [sent-145, score-0.04]
80 In fact, it is equal to the error of an independent population whose variance equals the stimulus-dependent single-neuron variance, decoded by a quadratic population vector readout of the form [sent-148, score-0.998]
81 (21) It is important to note that in the presence of correlations, the quadratic readout of Eq. [sent-149, score-0.496]
82 (21) is very inefficient, yielding a finite error at large $N$, as shown in Fig. [sent-150, score-0.036]
83 5 Discussion To understand the reason for the simple form of the approximately optimal bilinear weight matrix, Eq. [sent-152, score-0.52]
84 Figure 4: Inverse square estimation error of the finite-$k$ approximation for the OBE (shown for $N$ up to 2000; the curve labeled "quadratic" is the readout of Eq. (21)), Eq. [sent-163, score-0.164]
85 The dashed line is the asymptotic behavior, given by Eq. [sent-167, score-0.141]
86 The FI bound is shown by the dotted line. [sent-169, score-0.047]
87 (21) it can be seen that our readout is in the form of a bilinear population vector in which the lowest Fourier modes of the response vector have been removed. [sent-176, score-1.149]
88 Retaining only the high Fourier modes in the response profile suppresses the cross-correlations between the different components of the residual responses because the underlying correlations have smooth spatial dependence, whose power is concentrated mostly in the low Fourier modes. [sent-177, score-0.39]
89 On the other hand, the information contained in the variance is not removed because the variance contains a discontinuous spatial component. [sent-178, score-0.193]
90 In other words, the variance of a correlation profile which has only high Fourier modes can still preserve its slowly varying components. [sent-179, score-0.261]
91 Thus, by projecting out the low Fourier modes of the spatial responses the spatial correlations are suppressed but the information in the response variance is retained. [sent-180, score-0.484]
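This argument can be illustrated numerically: a projector that removes the low Fourier modes of the response vector annihilates a smooth correlation profile, while a diagonal variance matrix (spatially discontinuous, with flat Fourier content) passes through almost unchanged. The correlation shapes below are illustrative choices, not the paper's exact model:

```python
import numpy as np

n = 256
phi = np.linspace(-np.pi, np.pi, n, endpoint=False)

# Smooth long-range correlations (power only in Fourier modes |m| <= 1)...
C_smooth = 0.3 * np.cos(phi[:, None] - phi[None, :])
# ...versus a diagonal variance matrix with a spatially discontinuous component.
C_diag = np.diag(1.0 + 0.5 * np.cos(phi))

# Projector removing the Fourier modes |m| <= k of the response vector.
k = 2
F = np.exp(1j * np.outer(np.arange(-k, k + 1), phi)) / np.sqrt(n)
P = np.eye(n) - (F.conj().T @ F).real

smooth_left = np.linalg.norm(P @ C_smooth @ P) / np.linalg.norm(C_smooth)
diag_left = np.linalg.norm(P @ C_diag @ P) / np.linalg.norm(C_diag)
print(smooth_left, diag_left)  # ~0 vs ~1: the variance information survives
```

The smooth term lives entirely inside the removed low-mode subspace and vanishes, while almost all of the Frobenius norm of the diagonal term is retained, mirroring the claim that projecting out low spatial modes suppresses the correlations but keeps the variance code.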
92 & $ ' (¡ " S This interpretation of the bilinear readout implies that although all the elements of the correlation matrix depend on the stimulus, only the stimulus dependence of the diagonal elements is important. [sent-181, score-1.312]
93 As Eqs. (11) and (19) show, the asymptotic performance of both the full FI and the OBE is equivalent to that of an [sent-184, score-0.065]
94 uncorrelated population with a stimulus-dependent variance equal to the single-neuron variance. Although we have presented results here concerning a multiplicative model of correlations, we have studied other models of stimulus-dependent correlations. [sent-185, score-0.804]
95 These studies indicate that the above conclusions apply to a broad class of populations in which information is encoded in the second order statistics of the responses. [sent-186, score-0.228]
96 Also, for the sake of clarity we have assumed here that the mean responses are untuned. [sent-187, score-0.154]
97 Our studies have shown that adding tuned mean inputs does not modify the picture since the smoothly varying positive correlations greatly suppress the information embedded in the first order statistics. [sent-188, score-0.286]
98 The relatively simple form of the readout, Eq. [sent-189, score-0.46]
99 (22), suggests that neuronal hardware may be able to efficiently extract information embedded in local populations of cells whose noisy responses are strongly correlated, provided that the variances of their responses are significantly tuned to the stimulus. [sent-190, score-0.589]
100 This latter condition is not too restrictive, since tuning of variances of neuronal firing rates to stimulus and motor variables is quite common in the nervous system. [sent-191, score-0.368]
wordName wordTfidf (topN-words)
[('bilinear', 0.48), ('readout', 0.42), ('obe', 0.21), ('population', 0.195), ('ut', 0.167), ('stimulus', 0.157), ('correlation', 0.153), ('correlations', 0.147), ('fi', 0.146), ('populations', 0.146), ('ub', 0.136), ('cb', 0.134), ('readouts', 0.131), ('responses', 0.126), ('neuronal', 0.125), ('hh', 0.122), ('obd', 0.105), ('uncorrelated', 0.092), ('deg', 0.083), ('discrimination', 0.081), ('fisher', 0.081), ('angle', 0.079), ('saturates', 0.078), ('sompolinsky', 0.078), ('linearly', 0.077), ('fourier', 0.077), ('coding', 0.076), ('asymptotic', 0.065), ('multiplicative', 0.064), ('neuron', 0.058), ('equals', 0.058), ('ef', 0.058), ('variance', 0.054), ('modes', 0.054), ('encoded', 0.053), ('discontinuous', 0.053), ('shamir', 0.053), ('yoon', 0.053), ('neurons', 0.052), ('tuning', 0.052), ('matrix', 0.051), ('dependence', 0.051), ('ciency', 0.049), ('increases', 0.049), ('pf', 0.048), ('dotted', 0.047), ('square', 0.046), ('harmonics', 0.046), ('extensive', 0.043), ('estimator', 0.043), ('estimation', 0.042), ('numerical', 0.042), ('inverse', 0.042), ('activities', 0.042), ('symmetry', 0.042), ('ring', 0.042), ('le', 0.042), ('suppress', 0.042), ('discriminability', 0.042), ('abbott', 0.042), ('modulation', 0.041), ('extracting', 0.041), ('optimal', 0.04), ('quadratic', 0.04), ('line', 0.04), ('relatively', 0.04), ('qx', 0.039), ('suppressed', 0.039), ('fr', 0.039), ('pro', 0.038), ('multivariate', 0.037), ('nite', 0.037), ('tb', 0.037), ('smoothly', 0.037), ('jerusalem', 0.037), ('raises', 0.037), ('dashed', 0.036), ('system', 0.036), ('correlated', 0.036), ('codes', 0.036), ('error', 0.036), ('presence', 0.036), ('stimuli', 0.035), ('saturation', 0.035), ('variances', 0.034), ('tuned', 0.032), ('code', 0.032), ('spatial', 0.032), ('smooth', 0.031), ('statistics', 0.029), ('tion', 0.029), ('coded', 0.029), ('coef', 0.029), ('limit', 0.028), ('secondly', 0.028), ('unbiased', 0.028), ('mean', 0.028), ('delta', 0.027), ('concerning', 0.027), ('cs', 0.026)]
simIndex simValue paperId paperTitle
same-paper 1 1.0000001 57 nips-2001-Correlation Codes in Neuronal Populations
Author: Maoz Shamir, Haim Sompolinsky
Abstract: Population codes often rely on the tuning of the mean responses to the stimulus parameters. However, this information can be greatly suppressed by long range correlations. Here we study the efficiency of coding information in the second order statistics of the population responses. We show that the Fisher Information of this system grows linearly with the size of the system. We propose a bilinear readout model for extracting information from correlation codes, and evaluate its performance in discrimination and estimation tasks. It is shown that the main source of information in this system is the stimulus dependence of the variances of the single neuron responses.
2 0.16280952 72 nips-2001-Exact differential equation population dynamics for integrate-and-fire neurons
Author: Julian Eggert, Berthold Bäuml
Abstract: Mesoscopical, mathematical descriptions of dynamics of populations of spiking neurons are getting increasingly important for the understanding of large-scale processes in the brain using simulations. In our previous work, integral equation formulations for population dynamics have been derived for a special type of spiking neurons. For Integrate- and- Fire type neurons , these formulations were only approximately correct. Here, we derive a mathematically compact, exact population dynamics formulation for Integrate- and- Fire type neurons. It can be shown quantitatively in simulations that the numerical correspondence with microscopically modeled neuronal populations is excellent. 1 Introduction and motivation The goal of the population dynamics approach is to model the time course of the collective activity of entire populations of functionally and dynamically similar neurons in a compact way, using a higher descriptionallevel than that of single neurons and spikes. The usual observable at the level of neuronal populations is the populationaveraged instantaneous firing rate A(t), with A(t)6.t being the number of neurons in the population that release a spike in an interval [t, t+6.t). Population dynamics are formulated in such a way, that they match quantitatively the time course of a given A(t), either gained experimentally or by microscopical, detailed simulation. At least three main reasons can be formulated which underline the importance of the population dynamics approach for computational neuroscience. First, it enables the simulation of extensive networks involving a massive number of neurons and connections, which is typically the case when dealing with biologically realistic functional models that go beyond the single neuron level. Second, it increases the analytical understanding of large-scale neuronal dynamics , opening the way towards better control and predictive capabilities when dealing with large networks. 
Third, it enables a systematic embedding of the numerous neuronal models operating at different descriptional scales into a generalized theoretic framework, explaining the relationships, dependencies and derivations of the respective models. Early efforts on population dynamics approaches date back as early as 1972, to the work of Wilson and Cowan [8] and Knight [4], which laid the basis for all current population-averaged graded-response models (see e.g. [6] for modeling work using these models). More recently, population-based approaches for spiking neurons were developed, mainly by Gerstner [3, 2] and Knight [5]. In our own previous work [1], we have developed a theoretical framework which enables to systematize and simulate a wide range of models for population-based dynamics. It was shown that the equations of the framework produce results that agree quantitatively well with detailed simulations using spiking neurons, so that they can be used for realistic simulations involving networks with large numbers of spiking neurons. Nevertheless, for neuronal populations composed of Integrate-and-Fire (I&F;) neurons, this framework was only correct in an approximation. In this paper, we derive the exact population dynamics formulation for I&F; neurons. This is achieved by reducing the I&F; population dynamics to a point process and by taking advantage of the particular properties of I&F; neurons. 2 2.1 Background: Integrate-and-Fire dynamics Differential form We start with the standard Integrate- and- Fire (I&F;) model in form of the wellknown differential equation [7] (1) which describes the dynamics of the membrane potential Vi of a neuron i that is modeled as a single compartment with RC circuit characteristics. The membrane relaxation time is in this case T = RC with R being the membrane resistance and C the membrane capacitance. The resting potential v R est is the stationary potential that is approached in the no-input case. 
The input arriving from other neurons is described in form of a current ji. In addition to eq. (1), which describes the integrate part of the I&F; model, the neuronal dynamics are completed by a nonlinear step. Every time the membrane potential Vi reaches a fixed threshold () from below, Vi is lowered by a fixed amount Ll > 0, and from the new value of the membrane potential integration according to eq. (1) starts again. if Vi(t) = () (from below) . (2) At the same time, it is said that the release of a spike occurred (i.e., the neuron fired), and the time ti = t of this singular event is stored. Here ti indicates the time of the most recent spike. Storing all the last firing times , we gain the sequence of spikes {t{} (spike ordering index j, neuronal index i). 2.2 Integral form Now we look at the single neuron in a neuronal compound. We assume that the input current contribution ji from presynaptic spiking neurons can be described using the presynaptic spike times tf, a response-function ~ and a connection weight W¡ . ',J ji(t) = Wi ,j ~(t - tf) (3) l: l: j f Integrating the I&F; equation (1) beginning at the last spiking time tT, which determines the initial condition by Vi(ti) = vi(ti - 0) - 6., where vi(ti - 0) is the membrane potential just before the neuron spikes, we get 1 Vi(t) = v Rest + fj(t - t:) + l: Wi ,j l: a(t - t:; t - tf) , j - Vi(t:)) e- S / T (4) f with the refractory function fj(s) = - (v Rest (5) and the alpha-function r ds
3 0.11955666 131 nips-2001-Neural Implementation of Bayesian Inference in Population Codes
Author: Si Wu, Shun-ichi Amari
Abstract: This study investigates a population decoding paradigm, in which the estimation of stimulus in the previous step is used as prior knowledge for consecutive decoding. We analyze the decoding accuracy of such a Bayesian decoder (Maximum a Posteriori Estimate), and show that it can be implemented by a biologically plausible recurrent network, where the prior knowledge of stimulus is conveyed by the change in recurrent interactions as a result of Hebbian learning. 1
4 0.10621262 87 nips-2001-Group Redundancy Measures Reveal Redundancy Reduction in the Auditory Pathway
Author: Gal Chechik, Amir Globerson, M. J. Anderson, E. D. Young, Israel Nelken, Naftali Tishby
Abstract: The way groups of auditory neurons interact to code acoustic information is investigated using an information theoretic approach. We develop measures of redundancy among groups of neurons, and apply them to the study of collaborative coding efficiency in two processing stations in the auditory pathway: the inferior colliculus (IC) and the primary auditory cortex (AI). Under two schemes for the coding of the acoustic content, acoustic segments coding and stimulus identity coding, we show differences both in information content and group redundancies between IC and AI neurons. These results provide for the first time a direct evidence for redundancy reduction along the ascending auditory pathway, as has been hypothesized for theoretical considerations [Barlow 1959,2001]. The redundancy effects under the single-spikes coding scheme are significant only for groups larger than ten cells, and cannot be revealed with the redundancy measures that use only pairs of cells. The results suggest that the auditory system transforms low level representations that contain redundancies due to the statistical structure of natural stimuli, into a representation in which cortical neurons extract rare and independent component of complex acoustic signals, that are useful for auditory scene analysis. 1
5 0.10051191 141 nips-2001-Orientation-Selective aVLSI Spiking Neurons
Author: Shih-Chii Liu, Jörg Kramer, Giacomo Indiveri, Tobi Delbrück, Rodney J. Douglas
Abstract: We describe a programmable multi-chip VLSI neuronal system that can be used for exploring spike-based information processing models. The system consists of a silicon retina, a PIC microcontroller, and a transceiver chip whose integrate-and-fire neurons are connected in a soft winner-take-all architecture. The circuit on this multi-neuron chip approximates a cortical microcircuit. The neurons can be configured for different computational properties by the virtual connections of a selected set of pixels on the silicon retina. The virtual wiring between the different chips is effected by an event-driven communication protocol that uses asynchronous digital pulses, similar to spikes in a neuronal system. We used the multi-chip spike-based system to synthesize orientation-tuned neurons using both a feedforward model and a feedback model. The performance of our analog hardware spiking model matched the experimental observations and digital simulations of continuous-valued neurons. The multi-chip VLSI system has advantages over computer neuronal models in that it is real-time, and the computational time does not scale with the size of the neuronal network.
6 0.099572964 174 nips-2001-Spike timing and the coding of naturalistic sounds in a central auditory area of songbirds
7 0.089319512 82 nips-2001-Generating velocity tuning by asymmetric recurrent connections
8 0.088958107 37 nips-2001-Associative memory in realistic neuronal networks
9 0.07909368 150 nips-2001-Probabilistic Inference of Hand Motion from Neural Activity in Motor Cortex
10 0.074799113 124 nips-2001-Modeling the Modulatory Effect of Attention on Human Spatial Vision
11 0.073711492 197 nips-2001-Why Neuronal Dynamics Should Control Synaptic Learning Rules
12 0.07198634 23 nips-2001-A theory of neural integration in the head-direction system
13 0.071465552 123 nips-2001-Modeling Temporal Structure in Classical Conditioning
14 0.068244644 27 nips-2001-Activity Driven Adaptive Stochastic Resonance
15 0.067396499 73 nips-2001-Eye movements and the maturation of cortical orientation selectivity
16 0.061843332 195 nips-2001-Variance Reduction Techniques for Gradient Estimates in Reinforcement Learning
17 0.058580954 96 nips-2001-Information-Geometric Decomposition in Spike Analysis
18 0.05680887 160 nips-2001-Reinforcement Learning and Time Perception -- a Model of Animal Experiments
19 0.056240257 48 nips-2001-Characterizing Neural Gain Control using Spike-triggered Covariance
20 0.05288817 11 nips-2001-A Maximum-Likelihood Approach to Modeling Multisensory Enhancement
topicId topicWeight
[(0, -0.16), (1, -0.185), (2, -0.099), (3, 0.016), (4, 0.083), (5, -0.017), (6, 0.091), (7, 0.002), (8, 0.02), (9, 0.004), (10, -0.023), (11, 0.045), (12, -0.057), (13, -0.036), (14, 0.005), (15, -0.11), (16, 0.05), (17, 0.065), (18, -0.001), (19, -0.041), (20, -0.048), (21, 0.01), (22, 0.051), (23, -0.028), (24, 0.009), (25, 0.024), (26, 0.069), (27, 0.009), (28, 0.034), (29, 0.031), (30, -0.084), (31, -0.079), (32, 0.025), (33, -0.017), (34, 0.049), (35, -0.025), (36, 0.141), (37, -0.012), (38, 0.021), (39, -0.05), (40, -0.085), (41, -0.012), (42, 0.025), (43, -0.026), (44, -0.094), (45, -0.041), (46, 0.158), (47, -0.043), (48, 0.074), (49, 0.034)]
simIndex simValue paperId paperTitle
same-paper 1 0.95722002 57 nips-2001-Correlation Codes in Neuronal Populations
Author: Maoz Shamir, Haim Sompolinsky
Abstract: Population codes often rely on the tuning of the mean responses to the stimulus parameters. However, this information can be greatly suppressed by long range correlations. Here we study the efficiency of coding information in the second order statistics of the population responses. We show that the Fisher Information of this system grows linearly with the size of the system. We propose a bilinear readout model for extracting information from correlation codes, and evaluate its performance in discrimination and estimation tasks. It is shown that the main source of information in this system is the stimulus dependence of the variances of the single neuron responses.
2 0.68788987 131 nips-2001-Neural Implementation of Bayesian Inference in Population Codes
Author: Si Wu, Shun-ichi Amari
Abstract: This study investigates a population decoding paradigm, in which the estimation of stimulus in the previous step is used as prior knowledge for consecutive decoding. We analyze the decoding accuracy of such a Bayesian decoder (Maximum a Posteriori Estimate), and show that it can be implemented by a biologically plausible recurrent network, where the prior knowledge of stimulus is conveyed by the change in recurrent interactions as a result of Hebbian learning. 1
3 0.64826208 72 nips-2001-Exact differential equation population dynamics for integrate-and-fire neurons
Author: Julian Eggert, Berthold Bäuml
Abstract: Mesoscopic, mathematical descriptions of the dynamics of populations of spiking neurons are becoming increasingly important for understanding large-scale processes in the brain through simulation. In our previous work, integral equation formulations for population dynamics were derived for a special type of spiking neuron. For Integrate-and-Fire type neurons, these formulations were only approximately correct. Here, we derive a mathematically compact, exact population dynamics formulation for Integrate-and-Fire type neurons. It can be shown quantitatively in simulations that the numerical correspondence with microscopically modeled neuronal populations is excellent. 1 Introduction and motivation The goal of the population dynamics approach is to model the time course of the collective activity of entire populations of functionally and dynamically similar neurons in a compact way, using a higher descriptional level than that of single neurons and spikes. The usual observable at the level of neuronal populations is the population-averaged instantaneous firing rate A(t), with A(t)Δt being the number of neurons in the population that release a spike in an interval [t, t+Δt). Population dynamics are formulated in such a way that they match quantitatively the time course of a given A(t), gained either experimentally or by microscopic, detailed simulation. At least three main reasons can be given which underline the importance of the population dynamics approach for computational neuroscience. First, it enables the simulation of extensive networks involving a massive number of neurons and connections, which is typically the case when dealing with biologically realistic functional models that go beyond the single-neuron level. Second, it increases the analytical understanding of large-scale neuronal dynamics, opening the way towards better control and predictive capabilities when dealing with large networks.
Third, it enables a systematic embedding of the numerous neuronal models operating at different descriptional scales into a generalized theoretical framework, explaining the relationships, dependencies and derivations of the respective models. Early efforts on population dynamics approaches date back to 1972, to the work of Wilson and Cowan [8] and Knight [4], which laid the basis for all current population-averaged graded-response models (see e.g. [6] for modeling work using these models). More recently, population-based approaches for spiking neurons were developed, mainly by Gerstner [3, 2] and Knight [5]. In our own previous work [1], we developed a theoretical framework which makes it possible to systematize and simulate a wide range of models for population-based dynamics. It was shown that the equations of the framework produce results that agree quantitatively well with detailed simulations using spiking neurons, so that they can be used for realistic simulations involving networks with large numbers of spiking neurons. Nevertheless, for neuronal populations composed of Integrate-and-Fire (I&F) neurons, this framework was only approximately correct. In this paper, we derive the exact population dynamics formulation for I&F neurons. This is achieved by reducing the I&F population dynamics to a point process and by taking advantage of the particular properties of I&F neurons. 2 Background: Integrate-and-Fire dynamics 2.1 Differential form We start with the standard Integrate-and-Fire (I&F) model in the form of the well-known differential equation [7]

    τ dv_i/dt = -(v_i - v_rest) + R j_i(t),   (1)

which describes the dynamics of the membrane potential v_i of a neuron i that is modeled as a single compartment with RC circuit characteristics. The membrane relaxation time is in this case τ = RC, with R being the membrane resistance and C the membrane capacitance. The resting potential v_rest is the stationary potential that is approached in the no-input case.
The input arriving from other neurons is described in the form of a current j_i. In addition to eq. (1), which describes the integrate part of the I&F model, the neuronal dynamics are completed by a nonlinear step: every time the membrane potential v_i reaches a fixed threshold θ from below, v_i is lowered by a fixed amount Δ > 0, and integration according to eq. (1) starts again from the new value of the membrane potential:

    v_i → v_i - Δ   if v_i(t) = θ (from below).   (2)

At the same time, it is said that the release of a spike occurred (i.e., the neuron fired), and the time t_i = t of this singular event is stored; here t_i indicates the time of the most recent spike. Storing all the last firing times, we gain the sequence of spikes {t_i^j} (spike ordering index j, neuronal index i). 2.2 Integral form Now we look at a single neuron in a neuronal compound. We assume that the input current contribution j_i from presynaptic spiking neurons can be described using the presynaptic spike times t_j^f, a response function ε and connection weights W_ij:

    j_i(t) = Σ_j W_ij Σ_f ε(t - t_j^f).   (3)

Integrating the I&F equation (1) beginning at the last spiking time t_i, which determines the initial condition by v_i(t_i) = v_i(t_i - 0) - Δ, where v_i(t_i - 0) is the membrane potential just before the neuron spikes, we get

    v_i(t) = v_rest + η(t - t_i) + Σ_j W_ij Σ_f α(t - t_i; t - t_j^f)   (4)

with the refractory function

    η(s) = -(v_rest - v_i(t_i)) e^(-s/τ)   (5)

and the α-function.
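A forward-Euler sketch of the Integrate-and-Fire dynamics described above — subthreshold relaxation toward the resting potential driven by an input current, with the potential lowered by a fixed amount at threshold. All parameter values here are illustrative, not from the paper:

```python
import numpy as np

tau, R = 0.02, 1.0            # membrane time constant (s) and resistance
v_rest, theta, delta = 0.0, 1.0, 1.0   # rest, threshold, reset amount
dt, T = 1e-4, 1.0             # Euler time step and total duration (s)

j = 1.5                       # constant suprathreshold input current
v = v_rest
spike_times = []
for k in range(int(T / dt)):
    v += dt / tau * (-(v - v_rest) + R * j)   # subthreshold integration
    if v >= theta:            # threshold crossing from below
        v -= delta            # lower the potential by a fixed amount
        spike_times.append(k * dt)

rate = len(spike_times) / T
# For constant drive the interspike interval is tau * ln(RI / (RI - theta)),
# so the rate should be close to ~45 Hz for these parameters.
print(f"firing rate: {rate:.1f} Hz")
```

The population dynamics formulations discussed in the abstract describe ensembles of such units at the level of A(t) rather than simulating each neuron as done here.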
4 0.62641293 82 nips-2001-Generating velocity tuning by asymmetric recurrent connections
Author: Xiaohui Xie, Martin A. Giese
Abstract: Asymmetric lateral connections are one possible mechanism that can account for the direction selectivity of cortical neurons. We present a mathematical analysis for a class of these models. Contrasting with earlier theoretical work that has relied on methods from linear systems theory, we study the network's nonlinear dynamic properties that arise when the threshold nonlinearity of the neurons is taken into account. We show that such networks have stimulus-locked traveling pulse solutions that are appropriate for modeling the responses of direction-selective cortical neurons. In addition, our analysis shows that outside a certain regime of stimulus speeds the stability of these solutions breaks down, giving rise to another class of solutions that are characterized by specific spatiotemporal periodicity. This predicts that if direction selectivity in the cortex is mainly achieved by asymmetric lateral connections, lurching activity waves might be observable in ensembles of direction-selective cortical neurons within appropriate regimes of the stimulus speed.
5 0.59971946 141 nips-2001-Orientation-Selective aVLSI Spiking Neurons
Author: Shih-Chii Liu, Jörg Kramer, Giacomo Indiveri, Tobi Delbrück, Rodney J. Douglas
Abstract: We describe a programmable multi-chip VLSI neuronal system that can be used for exploring spike-based information processing models. The system consists of a silicon retina, a PIC microcontroller, and a transceiver chip whose integrate-and-fire neurons are connected in a soft winner-take-all architecture. The circuit on this multi-neuron chip approximates a cortical microcircuit. The neurons can be configured for different computational properties by the virtual connections of a selected set of pixels on the silicon retina. The virtual wiring between the different chips is effected by an event-driven communication protocol that uses asynchronous digital pulses, similar to spikes in a neuronal system. We used the multi-chip spike-based system to synthesize orientation-tuned neurons using both a feedforward model and a feedback model. The performance of our analog hardware spiking model matched the experimental observations and digital simulations of continuous-valued neurons. The multi-chip VLSI system has advantages over computer neuronal models in that it is real-time, and the computational time does not scale with the size of the neuronal network.
6 0.59520394 87 nips-2001-Group Redundancy Measures Reveal Redundancy Reduction in the Auditory Pathway
7 0.50691897 174 nips-2001-Spike timing and the coding of naturalistic sounds in a central auditory area of songbirds
8 0.4921383 124 nips-2001-Modeling the Modulatory Effect of Attention on Human Spatial Vision
9 0.48068509 11 nips-2001-A Maximum-Likelihood Approach to Modeling Multisensory Enhancement
10 0.4619728 37 nips-2001-Associative memory in realistic neuronal networks
11 0.45863083 160 nips-2001-Reinforcement Learning and Time Perception -- a Model of Animal Experiments
12 0.45848089 23 nips-2001-A theory of neural integration in the head-direction system
13 0.42228231 18 nips-2001-A Rational Analysis of Cognitive Control in a Speeded Discrimination Task
14 0.36736971 26 nips-2001-Active Portfolio-Management based on Error Correction Neural Networks
15 0.36102122 48 nips-2001-Characterizing Neural Gain Control using Spike-triggered Covariance
16 0.35906199 166 nips-2001-Self-regulation Mechanism of Temporally Asymmetric Hebbian Plasticity
17 0.35571435 27 nips-2001-Activity Driven Adaptive Stochastic Resonance
18 0.34641528 195 nips-2001-Variance Reduction Techniques for Gradient Estimates in Reinforcement Learning
19 0.34627411 123 nips-2001-Modeling Temporal Structure in Classical Conditioning
20 0.34534174 165 nips-2001-Scaling Laws and Local Minima in Hebbian ICA
topicId topicWeight
[(14, 0.043), (17, 0.015), (19, 0.029), (27, 0.139), (30, 0.066), (38, 0.09), (59, 0.037), (72, 0.062), (74, 0.013), (79, 0.029), (83, 0.023), (91, 0.154), (97, 0.208)]
simIndex simValue paperId paperTitle
same-paper 1 0.87987781 57 nips-2001-Correlation Codes in Neuronal Populations
Author: Maoz Shamir, Haim Sompolinsky
Abstract: Population codes often rely on the tuning of the mean responses to the stimulus parameters. However, this information can be greatly suppressed by long range correlations. Here we study the efficiency of coding information in the second order statistics of the population responses. We show that the Fisher Information of this system grows linearly with the size of the system. We propose a bilinear readout model for extracting information from correlation codes, and evaluate its performance in discrimination and estimation tasks. It is shown that the main source of information in this system is the stimulus dependence of the variances of the single neuron responses.
2 0.82060957 127 nips-2001-Multi Dimensional ICA to Separate Correlated Sources
Author: Roland Vollgraf, Klaus Obermayer
Abstract: We present a new method for the blind separation of sources which do not fulfill the independence assumption. In contrast to standard methods, we consider groups of neighboring samples (
3 0.74092376 27 nips-2001-Activity Driven Adaptive Stochastic Resonance
Author: Gregor Wenning, Klaus Obermayer
Abstract: Cortical neurons might be considered as threshold elements integrating many excitatory and inhibitory inputs in parallel. Due to the apparent variability of cortical spike trains this yields a strongly fluctuating membrane potential, such that threshold crossings are highly irregular. Here we study how a neuron could maximize its sensitivity with respect to a relatively small subset of its excitatory input. Weak signals embedded in fluctuations are the natural realm of stochastic resonance. The neuron's response is described in a hazard-function approximation applied to an Ornstein-Uhlenbeck process. We analytically derive an optimality criterion and give a learning rule for the adjustment of the membrane fluctuations, such that the sensitivity is maximized by exploiting stochastic resonance. We show that adaptation depends only on quantities that could easily be estimated locally (in space and time) by the neuron. The main results are compared with simulations of a biophysically more realistic neuron model.
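A hedged sketch of the stochastic-resonance effect itself (not the paper's hazard-function analysis; the leaky-integrator model, signal shape, and noise levels below are illustrative assumptions): a subthreshold periodic signal is transmitted across the threshold only when background fluctuations are present.

```python
import numpy as np

rng = np.random.default_rng(2)

def p_spike(noise_sigma, trials=500, steps=2000, dt=1e-3, tau=0.02):
    """Fraction of trials in which a leaky integrator driven by a
    subthreshold signal plus white-noise current crosses threshold."""
    threshold = 1.0
    t = np.arange(steps) * dt
    signal = 0.7 + 0.2 * np.sin(2 * np.pi * 5 * t)  # always below threshold
    v = np.zeros(trials)
    fired = np.zeros(trials, dtype=bool)
    for k in range(steps):
        # leaky relaxation toward the signal plus injected noise current
        v += dt / tau * (signal[k] - v) + noise_sigma * np.sqrt(dt) * rng.normal(size=trials)
        fired |= v >= threshold
    return fired.mean()

p_low = p_spike(0.05)   # almost no fluctuations: the signal stays invisible
p_mid = p_spike(1.5)    # moderate fluctuations: threshold crossings occur
print(f"P(spike) low noise: {p_low:.2f}, moderate noise: {p_mid:.2f}")
```

With even larger noise the crossings decouple from the signal phase, which is the other side of the stochastic-resonance trade-off that the paper's adaptation rule is designed to navigate; that regime is not shown here.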
4 0.72770518 29 nips-2001-Adaptive Sparseness Using Jeffreys Prior
Author: Mário Figueiredo
Abstract: In this paper we introduce a new sparseness inducing prior which does not involve any (hyper)parameters that need to be adjusted or estimated. Although other applications are possible, we focus here on supervised learning problems: regression and classification. Experiments with several publicly available benchmark data sets show that the proposed approach yields state-of-the-art performance. In particular, our method outperforms support vector machines and performs competitively with the best alternative techniques, both in terms of error rates and sparseness, although it involves no tuning or adjusting of sparsenesscontrolling hyper-parameters.
Author: Gregory Z. Grudic, Lyle H. Ungar
Abstract: We address two open theoretical questions in Policy Gradient Reinforcement Learning. The first concerns the efficacy of using function approximation to represent the state-action value function. Theory is presented showing that linear function approximation representations of this value function can degrade the rate of convergence of performance gradient estimates, by a factor that grows with the number of possible actions and with the number of basis functions in the function approximation representation, relative to when no function approximation is used. The second concerns the use of a bias term in estimating the state-action value function. Theory is presented showing that a non-zero bias term can improve the rate of convergence of performance gradient estimates by a factor that grows with the number of possible actions. Experimental evidence is presented showing that these theoretical results lead to significant improvement in the convergence properties of Policy Gradient Reinforcement Learning algorithms.
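The benefit of a well-chosen constant offset can be illustrated with a generic score-function (REINFORCE-style) gradient estimator, which is only loosely related to the paper's specific setting — the bandit, the policy, and the rewards below are illustrative assumptions. Subtracting an action-independent baseline leaves the gradient estimate unbiased but can reduce its variance dramatically.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two-armed bandit, Bernoulli (sigmoid) policy with one parameter w.
# Score-function gradient sample: g = (d log pi(a)/dw) * (reward - b).
# Any action-independent b leaves the estimate unbiased but changes its variance.
def grad_samples(w, b, n=20000):
    p1 = 1.0 / (1.0 + np.exp(-w))            # P(action 1)
    a = (rng.random(n) < p1).astype(float)    # sampled actions
    reward = np.where(a == 1, 1.0, 0.2)       # deterministic reward per arm
    score = a - p1                            # d log pi / dw for this policy
    return score * (reward - b)

g_nob = grad_samples(0.0, b=0.0)
g_base = grad_samples(0.0, b=0.6)             # baseline near the mean reward

print(f"mean (no baseline):   {g_nob.mean():.3f}")
print(f"mean (with baseline): {g_base.mean():.3f}")
print(f"variance ratio (base/no-base): {g_base.var() / g_nob.var():.2f}")
```

Both estimators target the same true gradient (0.2 here), but the baselined one has far lower variance; b = 0.6 happens to be near-optimal for this toy reward structure.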
6 0.72525001 131 nips-2001-Neural Implementation of Bayesian Inference in Population Codes
7 0.72501385 72 nips-2001-Exact differential equation population dynamics for integrate-and-fire neurons
8 0.71817368 13 nips-2001-A Natural Policy Gradient
9 0.71799982 84 nips-2001-Global Coordination of Local Linear Models
10 0.71743071 197 nips-2001-Why Neuronal Dynamics Should Control Synaptic Learning Rules
11 0.71634197 89 nips-2001-Grouping with Bias
12 0.71620309 46 nips-2001-Categorization by Learning and Combining Object Parts
13 0.71610266 150 nips-2001-Probabilistic Inference of Hand Motion from Neural Activity in Motor Cortex
14 0.71545553 95 nips-2001-Infinite Mixtures of Gaussian Process Experts
15 0.71491611 8 nips-2001-A General Greedy Approximation Algorithm with Applications
16 0.71416962 182 nips-2001-The Fidelity of Local Ordinal Encoding
17 0.71386999 160 nips-2001-Reinforcement Learning and Time Perception -- a Model of Animal Experiments
18 0.71296102 161 nips-2001-Reinforcement Learning with Long Short-Term Memory
19 0.71162093 88 nips-2001-Grouping and dimensionality reduction by locally linear embedding
20 0.70986235 135 nips-2001-On Spectral Clustering: Analysis and an algorithm