nips nips2001 nips2001-131 knowledge-graph by maker-knowledge-mining

131 nips-2001-Neural Implementation of Bayesian Inference in Population Codes


Source: pdf

Author: Si Wu, Shun-ichi Amari

Abstract: This study investigates a population decoding paradigm, in which the estimation of stimulus in the previous step is used as prior knowledge for consecutive decoding. We analyze the decoding accuracy of such a Bayesian decoder (Maximum a Posteriori Estimate), and show that it can be implemented by a biologically plausible recurrent network, where the prior knowledge of stimulus is conveyed by the change in recurrent interactions as a result of Hebbian learning. 1

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Laboratory for Mathematical Neuroscience, RIKEN Brain Science Institute, Japan. Abstract: This study investigates a population decoding paradigm, in which the estimation of the stimulus in the previous step is used as prior knowledge for consecutive decoding. [sent-2, score-1.149]

2 We analyze the decoding accuracy of such a Bayesian decoder (Maximum a Posteriori Estimate), and show that it can be implemented by a biologically plausible recurrent network, where the prior knowledge of stimulus is conveyed by the change in recurrent interactions as a result of Hebbian learning. [sent-3, score-1.252]

3 1 Introduction Information in the brain is not processed by a single neuron, but rather by a population of them. [sent-4, score-0.228]

4 It is conceivable that population coding has the advantage of being robust to fluctuations in a single neuron's activity. [sent-6, score-0.255]

5 However, it has been argued that population coding may have other computationally desirable properties. [sent-7, score-0.215]

6 One such property is to provide a framework for encoding complex objects by using basis functions [1]. [sent-8, score-0.048]

7 It is reasonable to think that similar strategies are used in the brain under the support of population codes. [sent-11, score-0.201]

8 However, to confirm this idea, a general suspicion has to be clarified: can the brain perform such complex statistical inference? [sent-12, score-0.049]

9 They show that Maximum Likelihood (ML) Inference, which is usually thought to be complex, can be implemented by a biologically plausible recurrent network using the idea of a line attractor. [sent-14, score-0.374]

10 ML is a special case of Bayesian inference when the stimulus is (or assumed to be) uniformly distributed. [sent-15, score-0.266]

11 In case there is prior knowledge on the stimulus distribution, Maximum a Posteriori (MAP) Estimate has better performance. [sent-16, score-0.354]

12 MAP has been successfully applied for reconstructing the rat's position in a maze from the activity of hippocampal place cells [6]. [sent-18, score-0.201]

13 In their method, the prior knowledge is the rat's position in the previous time step, which restricts the variability of the rat's position in the current step under the continuity constraint. [sent-19, score-0.323]

14 It turns out that MAP has much better performance than other decoding methods, and overcomes the inefficiency of ML when information is not sufficient (when the rat stops running). [sent-20, score-0.484]

15 So far, in the literature MAP has been mainly studied as a mathematical tool for reconstructing data, though its potential neural implementation was pointed out in [1, 6]. [sent-22, score-0.117]

16 In the present study, we show concretely how to implement MAP in a biologically plausible way. [sent-23, score-0.057]

17 The same kind of recurrent network used for achieving ML is adopted here [4,5]. [sent-24, score-0.246]

18 In the first step when there is no prior knowledge of the stimulus, the network implements ML. [sent-26, score-0.448]

19 Its estimate is subsequently used to form the prior distribution of the stimulus for consecutive decoding, which we assume to be a Gaussian function whose mean is that estimate. [sent-27, score-0.454]

20 It turns out that this prior knowledge can be naturally conveyed by the change in the recurrent interactions according to the Hebbian learning rule. [sent-28, score-0.379]

21 In the second step, with the changed interactions, the network implements MAP. [sent-30, score-0.197]

22 The decoding accuracy of MAP and the optimal form of the Gaussian prior are also analyzed in this paper. [sent-31, score-0.575]

23 2 MAP in Population Codes Let us consider a standard population coding paradigm. [sent-32, score-0.215]

24 Here $r_i$ is the response of the $i$th neuron, which is given by $r_i = f_i(x) + \epsilon_i$, (1) where $f_i(x)$ is the tuning function and $\epsilon_i$ is random noise. [sent-35, score-0.073]

25 The encoding process of a population code is specified by the conditional probability $Q(r|x)$ (i. [sent-36, score-0.231]

26 The decoding is to infer the value of x from the observed r. [sent-39, score-0.398]

27 We consider a general Bayesian inference in a population code, which estimates the stimulus by maximizing the log posterior distribution, $\ln P(x|r)$, i. [sent-40, score-0.418]

28 e., $\hat{x} = \arg\max_x \ln P(x|r) = \arg\max_x \big[\ln P(r|x) + \ln P(x)\big]$, (2) where $P(r|x)$ is the likelihood function. [sent-42, score-0.202]
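
As an illustration of eq. (2), here is a minimal Python sketch of MAP decoding by grid search; the independent Gaussian noise model, the tuning-curve shape, and all parameter values are illustrative assumptions of this sketch, not specifications from the paper. With the prior omitted it reduces to ML.

import numpy as np

def log_likelihood(r, x_grid, tuning, noise_std):
    # Assumes independent Gaussian noise on each neuron, so ln P(r|x) equals
    # -sum_i (r_i - f_i(x))^2 / (2 * noise_std**2) up to an x-independent constant.
    f = tuning(x_grid)                        # shape (n_grid, n_neurons)
    return -np.sum((r[None, :] - f) ** 2, axis=1) / (2 * noise_std ** 2)

def map_estimate(r, x_grid, tuning, noise_std, prior_mean=None, prior_std=None):
    log_post = log_likelihood(r, x_grid, tuning, noise_std)
    if prior_mean is not None:                # Gaussian prior; omit it to get ML
        log_post = log_post - (x_grid - prior_mean) ** 2 / (2 * prior_std ** 2)
    return x_grid[np.argmax(log_post)]        # arg max of eq. (2) on a grid

# Toy setup: 101 unit-amplitude Gaussian tuning curves with preferred stimuli on [-3, 3].
centers = np.linspace(-3, 3, 101)
a = 0.5
tuning = lambda x: np.exp(-(centers[None, :] - np.atleast_1d(x)[:, None]) ** 2 / (2 * a ** 2))

rng = np.random.default_rng(0)
x_true, noise_std = 0.0, 0.2
r = tuning(x_true)[0] + noise_std * rng.standard_normal(centers.size)
x_grid = np.linspace(-1, 1, 2001)
print(map_estimate(r, x_grid, tuning, noise_std))                                # ML
print(map_estimate(r, x_grid, tuning, noise_std, prior_mean=0.1, prior_std=0.3))  # MAP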

29 It can be equal to or different from the real encoding model $Q(r|x)$, depending on the available information of the encoding process [7]. [sent-43, score-0.096]

30 P(x) is the distribution of x , representing the prior knowledge. [sent-44, score-0.093]

31 When the distribution of x is, or is assumed to be (when there is no prior knowledge) uniform, MAP is equivalent to ML. [sent-46, score-0.093]

32 MAP could be used in the information processing of the brain on several occasions. [sent-47, score-0.049]

33 Let us consider the following scenario: a stimulus is decoded in multiple steps. [sent-48, score-0.226]

34 This happens when the same stimulus is presented through multiple steps, or during a single presentation, neural signals are sampled many times. [sent-49, score-0.226]

35 In both cases, the brain successively gains a rough estimate of the stimulus in each decoding step, which can serve as prior knowledge for further decoding. [sent-50, score-0.927]

36 Experiencing slightly different stimuli in consecutive steps, as studied in [6], or more generally a stimulus that slowly changes with time (the multiple-step paradigm is then a discrete approximation), is a similar scenario. [sent-52, score-0.327]

37 For simplicity, we only consider the case where the stimulus is unchanged in the present study. [sent-53, score-0.2]

38 Denote by $\hat{x}_t$ a particular estimate of the stimulus in the $t$th step, and by $\sigma_t^2$ the corresponding variance. [sent-57, score-0.295]

39 The prior distribution of x in the (t+1)th step is assumed to be a Gaussian with mean value $\hat{x}_t$, i. [sent-58, score-0.212]

40 e., $P(x|\hat{x}_t) = \frac{1}{\sqrt{2\pi}\,\tau_t}\exp\!\big(-(x-\hat{x}_t)^2/2\tau_t^2\big)$, (3) where the parameter $\tau_t$ reflects the estimator's confidence in $\hat{x}_t$; its optimal value will be calculated later. [sent-61, score-0.028]

41 The posterior distribution of x in the (t+1)th step is given by $P(x|r) = \frac{P(r|x)\,P(x|\hat{x}_t)}{P(r)}$, (4) and the solution of MAP is obtained by solving $\nabla \ln P(\hat{x}_{t+1}|r) = \nabla \ln P(r|\hat{x}_{t+1}) - (\hat{x}_{t+1}-\hat{x}_t)/\tau_t^2 = 0$. (5) [sent-62, score-0.144]
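
For readability, eq. (5) is simply the stationarity condition of the log of eq. (4) with the Gaussian prior of eq. (3):

\ln P(x \mid r) = \ln P(r \mid x) - \frac{(x - \hat{x}_t)^2}{2\tau_t^2} + \text{const},
\qquad
\nabla_x \ln P(x \mid r) = \nabla_x \ln P(r \mid x) - \frac{x - \hat{x}_t}{\tau_t^2} = 0 .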

42 We calculate the decoding accuracies iteratively. [sent-63, score-0.433]

43 In the first-step decoding, since there is no prior knowledge on x, ML is used, whose decoding accuracy is known to be [7] $\sigma_1^2 = \frac{\langle(\nabla \ln P(r|x))^2\rangle}{\langle -\nabla\nabla \ln P(r|x)\rangle^2}$, (6) where the bracket $\langle\cdot\rangle$ denotes averaging over the noise in $r$. [sent-64, score-0.73]
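
As a side note (a standard identity, not spelled out in the extracted text): for a faithful model, $P(r|x) = Q(r|x)$, one has $\langle(\nabla\ln Q)^2\rangle = \langle-\nabla\nabla\ln Q\rangle = I_F(x)$, the Fisher information, so eq. (6) reduces to the Cramer-Rao value:

\sigma_1^2 = \frac{\langle (\nabla \ln Q(r\mid x))^2 \rangle}{\langle -\nabla\nabla \ln Q(r\mid x) \rangle^2}
= \frac{I_F(x)}{I_F(x)^2} = \frac{1}{I_F(x)} .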

44 This includes the cases when neural responses are independent, weakly correlated, uniformly correlated, correlated with strength proportional to firing rate (multiplicative correlation), or the fluctuations in neural responses are sufficiently small. [sent-67, score-0.229]

45 In other strong correlation cases, ML is proved to be non-Fisherian, i. [sent-68, score-0.054]

46 e., its decoding error follows a Cauchy-type distribution with diverging variance. [sent-69, score-0.031]

47 Decoding accuracy can no longer be quantified by variance in such situations (for details, please refer to [8]) . [sent-70, score-0.112]

48 We now calculate the decoding error in the second step. [sent-71, score-0.398]

49 The random variable $\hat{x}_1$ can be decomposed as $\hat{x}_1 = x + \epsilon_1$, (7) where $\epsilon_1$ is a Gaussian random variable with zero mean and variance $\sigma_1^2$. [sent-75, score-0.031]

50 By using the notation $\epsilon_1$, we have $\hat{x}_2 - x = \frac{\nabla \ln P(r|x) + \epsilon_1/\tau_1^2}{-\nabla\nabla \ln P(r|x) + 1/\tau_1^2}$. (8) For the correlation cases considered in the present study (i. [sent-76, score-0.069]

51 Obviously R satisfies a Gaussian distribution with zero mean and variance $\sigma_1^2$. (10) [sent-79, score-0.054]

52 By using the notations $\alpha$ and $R$, we get $\hat{x}_2 - x = \frac{\alpha R + \epsilon_1}{1+\alpha}$, (11) whose variance is calculated to be $\sigma_2^2 = \frac{1+\alpha^2}{(1+\alpha)^2}\,\sigma_1^2$. (12) Since $(1+\alpha^2)/(1+\alpha)^2 \le 1$ holds for any positive $\alpha$, the decoding accuracy in the second step is always improved. [sent-80, score-0.631]
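
A one-line verification of the claim following eq. (12), for any positive $\alpha$:

(1+\alpha)^2 - (1+\alpha^2) = 2\alpha > 0
\;\Rightarrow\; \frac{1+\alpha^2}{(1+\alpha)^2} < 1,
\qquad
\frac{d}{d\alpha}\,\frac{1+\alpha^2}{(1+\alpha)^2} = \frac{2(\alpha-1)}{(1+\alpha)^3} = 0
\;\Rightarrow\; \alpha = 1,\;\; \sigma_2^2\big|_{\min} = \tfrac{1}{2}\,\sigma_1^2 .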

53 where $\alpha = \tau_1^2\,\big(-\nabla\nabla\ln P(r|x)\big)$. When a faithful model is used, $-\nabla\nabla\ln Q(r|x)$ is the Fisher information. [sent-96, score-0.037]

54 The optimal prior width in the second step is $\tau_1^2 = \sigma_1^2$. (14) Hence, following the same procedure, it can be proved that the optimal decoding accuracy in the $t$th step is $\sigma_t^2 = \sigma_1^2/t$ when the width of the Gaussian prior is $\tau_t^2 = \tau_1^2/t$. [sent-99, score-0.725]

55 It is interesting to see that the above multiple-step decoding procedure, when the optimal values of $\tau_t$ are used, achieves the same decoding accuracy as one-step ML using all $N \times t$ signals. [sent-100, score-0.906]
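
A minimal Monte Carlo sketch of the iterative scheme (parameter values, the unit-amplitude Gaussian tuning, and the independent Gaussian noise are illustrative assumptions, not taken from the paper): the empirical variance of the estimate after t steps should shrink roughly like the first-step variance divided by t when the prior width tracks the current estimator variance.

import numpy as np

rng = np.random.default_rng(1)
centers = np.linspace(-3, 3, 101)          # preferred stimuli
a, noise_std, x_true = 0.5, 0.2, 0.0       # illustrative values
x_grid = np.linspace(-1, 1, 2001)
f_grid = np.exp(-(centers[None, :] - x_grid[:, None]) ** 2 / (2 * a ** 2))
f_true = np.exp(-(centers - x_true) ** 2 / (2 * a ** 2))

def decode(r, prior_mean=None, tau_sq=None):
    # Template matching: with unit-amplitude Gaussian tuning, independent Gaussian
    # noise and sum_c f_c(x)^2 approximately constant in x, maximizing sum_c r_c f_c(x)
    # is equivalent to ML; the optional quadratic term adds the Gaussian prior of MAP.
    score = f_grid @ r
    if prior_mean is not None:
        score = score - noise_std ** 2 * (x_grid - prior_mean) ** 2 / (2 * tau_sq)
    return x_grid[np.argmax(score)]

n_trials, n_steps = 400, 5

# Empirical first-step (ML) variance sigma_1^2, used to set the prior widths below.
sigma1_sq = np.var([decode(f_true + noise_std * rng.standard_normal(centers.size))
                    for _ in range(n_trials)])

est = np.zeros((n_trials, n_steps))
for trial in range(n_trials):
    x_hat = None
    for t in range(n_steps):
        r = f_true + noise_std * rng.standard_normal(centers.size)
        if t == 0:
            x_hat = decode(r)                                        # ML in the first step
        else:
            x_hat = decode(r, prior_mean=x_hat, tau_sq=sigma1_sq / t)  # MAP afterwards
        est[trial, t] = x_hat

print("empirical variance per step:", est.var(axis=0))
print("sigma_1^2 / t             :", sigma1_sq / np.arange(1, n_steps + 1))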

56 However, multiple-step decoding is not a trivial replacement for one-step ML, and it has many advantages. [sent-102, score-0.424]

57 One of them is to save memory, considering that only N signals and the value of the previous estimate are stored in each step. [sent-103, score-0.051]

58 Moreover, when a slowly changing stimulus is concerned, multiple-step decoding outperforms one-step ML because it balances adaptation and memory. [sent-104, score-0.676]

59 3 Network Implementation of MAP In this section, we investigate how to implement MAP by a recurrent network. [sent-106, score-0.188]

60 The network we consider is a fully connected one-dimensional homogeneous neural field, in which c denotes the position coordinate, i. [sent-109, score-0.162]

61 The tuning function of the neuron with preferred stimulus c is $f_c(x) = \frac{1}{\sqrt{2\pi}\,a}\exp\!\big(-(c-x)^2/2a^2\big)$. [sent-112, score-0.357]
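
A quick numerical check (with an illustrative discretization and tuning width chosen here, not taken from the paper) that for this Gaussian tuning function the quantity $\int f_c^2(x)\,dc$ is essentially independent of x away from the boundaries of the field, which is the condition invoked below in eq. (17).

import numpy as np

c = np.linspace(-3, 3, 101)                        # preferred stimuli / field coordinates
dc = c[1] - c[0]
a = 0.5                                            # illustrative tuning width
f = lambda x: np.exp(-(c - x) ** 2 / (2 * a ** 2)) / (np.sqrt(2 * np.pi) * a)
for x in (-1.0, 0.0, 1.0):
    print(x, np.sum(f(x) ** 2) * dc)               # roughly 1/(2*a*sqrt(pi)) for every x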

62 A faithful model is used in both decoding steps, i. [sent-114, score-0.037]

63 For the above model setting, the solution of ML in the first step is calculated to be $\hat{x}_1 = \arg\max_x \int r_c f_c(x)\,dc$, (17) [sent-118, score-0.125]

64 where the condition $\int f_c^2(x)\,dc = \text{const}$ has been used. [sent-119, score-0.141]
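
For independent Gaussian noise of variance $\sigma^2$ (an assumption of this sketch), the reduction of ML to the template-matching form of eq. (17) is explicit:

\ln P(r \mid x) = -\frac{1}{2\sigma^2}\int \big(r_c - f_c(x)\big)^2 dc + \text{const}
= \frac{1}{\sigma^2}\int r_c f_c(x)\,dc - \frac{1}{2\sigma^2}\int f_c^2(x)\,dc + \text{terms independent of } x,

so when $\int f_c^2(x)\,dc$ is constant, maximizing $\ln P(r\mid x)$ over x is the same as maximizing $\int r_c f_c(x)\,dc$.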

65 The solution of MAP in the second step is $\hat{x}_2 = \arg\max_x \big[\int r_c f_c(x)\,dc - (x-\hat{x}_1)^2/2\tau_1^2\big]$. (18) [sent-120, score-0.176]

66 Compared with eq. (17), eq. (18) has one more term, corresponding to the contribution of the prior distribution. [sent-124, score-0.093]

67 We now come to the study of using a recurrent network to realize eqs. [sent-125, score-0.268]

68 Let $U_c$ denote the (average) internal state of the neuron at c, and $W_{c,c'}$ the recurrent connection weights from neurons at c to those at c'. [sent-129, score-0.282]

69 The dynamics of neural excitation is governed by $\frac{dU_c}{dt} = -U_c + \int W_{c,c'}\,O_{c'}\,dc' + I_c$, (19) where $O_c = \frac{U_c^2}{1 + \mu \int U_{c'}^2\,dc'}$ (20) [sent-130, score-0.284]

70 is the activity of neurons at c and $I_c$ is the external input arriving at c. [sent-132, score-0.132]

71 The recurrent interactions are chosen to be $W_{c,c'} = \exp\!\big(-(c-c')^2/2a^2\big)$, (21) which ensures that when there is no external input ($I_c = 0$), the network is neutrally stable on a line attractor, $O_c(z) = D\exp\!\big(-(c-z)^2/2a^2\big)$, $\forall z$, (22) where the parameter $D$ is constant and can be determined easily. [sent-133, score-0.425]
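
A minimal simulation sketch of the dynamics in eqs. (19)-(22). The weight gain w0, the normalization constant mu, the noise level, and the time discretization are illustrative guesses (not parameters from the paper), chosen so that a stable activity bump exists; the network is started from the noisy population response as its initial activity (the "instant input" option mentioned in the footnote to eq. (23) below) and relaxes toward the line attractor, whose final bump position serves as the estimate.

import numpy as np

rng = np.random.default_rng(0)
c = np.linspace(-3, 3, 101)                 # neural field coordinates / preferred stimuli
dc = c[1] - c[0]
a, noise_std, x_true = 0.5, 0.2, 0.0
w0, mu, dt, n_iter = 2.0, 0.5, 0.1, 400     # illustrative parameters

# Recurrent weights shaped as in eq. (21), with an assumed overall gain w0.
W = w0 * np.exp(-(c[:, None] - c[None, :]) ** 2 / (2 * a ** 2))

def run_network(U_init):
    U = U_init.copy()
    for _ in range(n_iter):
        O = U ** 2 / (1.0 + mu * np.sum(U ** 2) * dc)        # divisive activity, cf. eq. (20)
        U = U + dt * (-U + W @ O * dc)                       # Euler step of eq. (19), I_c = 0
    return O

# Noisy population response to the true stimulus (unit-amplitude Gaussian tuning assumed).
r = np.exp(-(c - x_true) ** 2 / (2 * a ** 2)) + noise_std * rng.standard_normal(c.size)

O_final = run_network(r)                         # start from the response pattern
z_net = np.sum(c * O_final) / np.sum(O_final)    # bump position read out as center of mass

# Template-matching (ML) estimate for comparison, cf. eqs. (17)/(23); the two numbers
# should be close, illustrating the claim that the bump position implements template matching.
x_grid = np.linspace(-1, 1, 2001)
templates = np.exp(-(c[None, :] - x_grid[:, None]) ** 2 / (2 * a ** 2))
z_ml = x_grid[np.argmax(templates @ r)]

print("network bump position:", z_net, "  template-matching estimate:", z_ml)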

72 Note that the line attractor has the same shape as the tuning function. [sent-134, score-0.184]

73 This is crucial: it allows the network to perform template matching with the tuning function, just as ML and MAP do. [sent-135, score-0.165]

74 When a sufficiently small input $I_c$ is added, the network is no longer neutrally stable on the line attractor. [sent-136, score-0.245]

75 It can be proved that the steady state of the network has approximately the same shape as eq. [sent-137, score-0.192]

76 (22), whereas its steady position on the line attractor (i. [sent-139, score-0.227]

77 e., the network estimate) is determined by maximizing the overlap between $I_c$ and $O_c(z)$ [4,9]. [sent-141, score-0.164]

78 Thus, if $I_c = \epsilon r_c$ in the first step¹, where $\epsilon$ is a sufficiently small number, the network estimate is given by $z_1 = \arg\max_z \int r_c\,O_c(z)\,dc$. (23) ¹Considering an instant input, which triggers the network to be initially at $O_c(t=0) = r_c$ as used in [5], gives the same result. [sent-142, score-0.412]

79 To implement MAP in the second step, it is critical to identify a neural mechanism which can 'transmit' the prior knowledge obtained in the first step to the second one. [sent-146, score-0.308]

80 After the first-step decoding, the recurrent interaction changes by a small amount according to the Hebbian rule; its new value is $\tilde{W}_{c,c'} = W_{c,c'} + \eta\,O_c(z_1)\,O_{c'}(z_1)$, (24) where $\eta$ is a small positive number representing the Hebbian learning rate, and $O_c(z_1)$ is the neuron activity in the first step. [sent-148, score-0.374]
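
A short numerical sketch of this Hebbian mechanism (all values illustrative): after updating the weights as in eq. (24), the change in recurrent drive for any later activity profile is exactly $\eta\,O_c(z_1)\int O_{c'}(z_1)\,O_{c'}\,dc'$, i.e. an extra input proportional to $O_c(z_1)$, which is how the prior is conveyed (cf. eq. (25) here and eq. (26) below).

import numpy as np

c = np.linspace(-3, 3, 101)
dc = c[1] - c[0]
a, eta = 0.5, 0.05                                  # illustrative tuning width and learning rate
bump = lambda z: np.exp(-(c - z) ** 2 / (2 * a ** 2))

W = 2.0 * np.exp(-(c[:, None] - c[None, :]) ** 2 / (2 * a ** 2))
O1 = bump(0.10)                                     # first-step activity, peaked at z_1
W_new = W + eta * np.outer(O1, O1)                  # Hebbian update, cf. eq. (24)

O = bump(0.15)                                      # a later activity profile
drive_change = (W_new - W) @ O * dc                 # change in the recurrent drive
target = eta * O1 * (O1 @ O * dc)                   # eta * O_c(z_1) * overlap integral
print(np.allclose(drive_change, target))            # True: extra drive proportional to O_c(z_1)

# The overlap integral is nearly the same for nearby bump positions, supporting the
# "approximately constant" step in the text.
print(O1 @ bump(0.12) * dc, O1 @ bump(0.18) * dc)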

81 With the new recurrent interactions, the net input from other neurons to the one at c is calculated to be $\int \tilde{W}_{c,c'}\,O_{c'}\,dc' = \int W_{c,c'}\,O_{c'}\,dc' + \eta\,O_c(z_1)\int O_{c'}(z_1)\,O_{c'}\,dc'$, (25) where $\eta$ is a small constant. [sent-149, score-0.226]

82 These factors ensure that the approximation $\int O_{c'}(z_1)\,O_{c'}\,dc' \approx \text{const}$ is good enough. [sent-151, score-0.04]

83 Substituting (25) in (19), we see that the network dynamics in the second step, compared with the first one, in effect modifies the input $I_c$ to $I_c' = \epsilon\big(r_c + A\,O_c(z_1)\big)$, where $A$ is a constant and can be determined easily. [sent-153, score-0.159]

84 Thus, the network estimate in the second step is determined by maximizing the overlap between $I_c'$ and $O_c(z)$, which gives $z_2 = \arg\max_z \big[\int r_c\,O_c(z)\,dc + A\int O_c(z_1)\,O_c(z)\,dc\big]$. (26) [sent-154, score-0.34]

85 Let us examine the contribution of the second term, which can be transformed to $\int O_c(z_1)\,O_c(z)\,dc = B\exp\!\big(-(z_1-z)^2/4a^2\big) \approx -B(z-z_1)^2/4a^2 + \text{terms not depending on } z$, (27) where $B$ is a constant. [sent-156, score-0.151]
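
The Gaussian overlap integral behind eq. (27), using the attractor profile of eq. (22) and completing the square:

\int O_c(z_1)\,O_c(z)\,dc
= D^2 \int \exp\!\Big(-\frac{(c-z_1)^2}{2a^2}\Big)\exp\!\Big(-\frac{(c-z)^2}{2a^2}\Big)dc
= D^2 a\sqrt{\pi}\;\exp\!\Big(-\frac{(z_1-z)^2}{4a^2}\Big),

and expanding the exponential around $z = z_1$ gives $-B(z-z_1)^2/4a^2$ plus terms independent of z, with $B = D^2 a\sqrt{\pi}$.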

86 Comparing (18) and (27), we see that the second term plays the same role as the prior knowledge in MAP. [sent-159, score-0.154]

87 1), which was done with 101 neurons uniformly distributed in the region [-3, 3] and the true stimulus at 0. [sent-163, score-0.293]

88 It shows that the estimate of the network agrees well with MAP. [sent-164, score-0.166]

89 Table 1: Comparison of the decoding accuracies of the network and MAP for different values of a (the corresponding values of $\tau_1$ and $A$ are adjusted). [sent-165, score-0.548]

90 4 Conclusion and Discussion In summary, we have investigated how to implement MAP by using a biologically plausible recurrent network. [sent-171, score-0.273]

91 In the first step, when there is no prior knowledge, the network implements ML, whose estimate is subsequently used to form the prior distribution of the stimulus for consecutive decoding. [sent-173, score-0.841]

92 The line attractor and Hebbian learning are two critical elements for implementing MAP. [sent-175, score-0.148]

93 The former enables the network to do template matching with the tuning function, just as ML and MAP do. [sent-176, score-0.165]

94 The latter provides a mechanism that conveys the prior knowledge obtained from the first step to the second one. [sent-177, score-0.251]

95 Though the results in this paper may quantitatively depend on the formulation of the models, it is reasonable to believe that they hold qualitatively, as both Hebbian learning and line attractors are biologically plausible. [sent-178, score-0.189]

96 The line attractor comes from the translation invariance of the network interactions, and has been shown to be involved in several neural computations [10-12]. [sent-179, score-0.206]

97 We expect that the essential idea of Bayesian inference, namely utilizing previous knowledge for successive decoding, is used in the information processing of the brain. [sent-180, score-0.499]

98 We also analyzed the decoding accuracy of MAP in a population code and the optimal form of the Gaussian prior. [sent-181, score-0.665]

99 In the present study, the stimulus is kept fixed during consecutive decodings. [sent-182, score-0.275]

100 A generalization to the case when stimulus slowly changes over time is straightforward. [sent-183, score-0.252]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('rlx', 0.578), ('decoding', 0.398), ('oe', 0.262), ('ml', 0.201), ('stimulus', 0.2), ('map', 0.162), ('population', 0.152), ('zd', 0.151), ('recurrent', 0.131), ('hebbian', 0.128), ('network', 0.115), ('argmaxx', 0.101), ('pouget', 0.101), ('prior', 0.093), ('attractor', 0.091), ('neuron', 0.084), ('implements', 0.082), ('consecutive', 0.075), ('step', 0.075), ('ie', 0.072), ('neurons', 0.067), ('rat', 0.064), ('coding', 0.063), ('interactions', 0.061), ('knowledge', 0.061), ('accuracy', 0.059), ('implement', 0.057), ('tl', 0.057), ('dc', 0.056), ('biologically', 0.055), ('slowly', 0.052), ('estimation', 0.051), ('wu', 0.05), ('argmaxz', 0.05), ('neutrally', 0.05), ('reconstructing', 0.05), ('xlr', 0.05), ('tuning', 0.05), ('brain', 0.049), ('encoding', 0.048), ('xt', 0.048), ('position', 0.047), ('steady', 0.046), ('correlated', 0.046), ('mathematic', 0.044), ('lth', 0.044), ('tth', 0.044), ('line', 0.043), ('activity', 0.04), ('inference', 0.04), ('const', 0.04), ('fluctuation', 0.04), ('notations', 0.04), ('posteriori', 0.038), ('inp', 0.037), ('oc', 0.037), ('faithful', 0.037), ('sufficiently', 0.037), ('amari', 0.036), ('subsequently', 0.035), ('accuracies', 0.035), ('rc', 0.033), ('conveyed', 0.033), ('proved', 0.031), ('variance', 0.031), ('code', 0.031), ('plausible', 0.03), ('zhang', 0.028), ('calculated', 0.028), ('responses', 0.028), ('bayesian', 0.028), ('processed', 0.027), ('tt', 0.027), ('uniformly', 0.026), ('xl', 0.026), ('multiple', 0.026), ('maximizing', 0.026), ('external', 0.025), ('optimal', 0.025), ('cases', 0.024), ('gaussian', 0.024), ('implementation', 0.023), ('efficient', 0.023), ('fi', 0.023), ('paradigm', 0.023), ('preferred', 0.023), ('satisfies', 0.023), ('correlation', 0.023), ('overlap', 0.023), ('codes', 0.023), ('study', 0.022), ('first', 0.022), ('dynamics', 0.022), ('triggering', 0.022), ('latham', 0.022), ('quantified', 0.022), ('nakahara', 0.022), ('inefficiency', 0.022), ('bracket', 0.022), ('investigates', 0.022)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0 131 nips-2001-Neural Implementation of Bayesian Inference in Population Codes

Author: Si Wu, Shun-ichi Amari

Abstract: This study investigates a population decoding paradigm, in which the estimation of stimulus in the previous step is used as prior knowledge for consecutive decoding. We analyze the decoding accuracy of such a Bayesian decoder (Maximum a Posteriori Estimate), and show that it can be implemented by a biologically plausible recurrent network, where the prior knowledge of stimulus is conveyed by the change in recurrent interactions as a result of Hebbian learning. 1

2 0.19952647 98 nips-2001-Information Geometrical Framework for Analyzing Belief Propagation Decoder

Author: Shiro Ikeda, Toshiyuki Tanaka, Shun-ichi Amari

Abstract: The mystery of belief propagation (BP) decoder, especially of the turbo decoding, is studied from information geometrical viewpoint. The loopy belief network (BN) of turbo codes makes it difficult to obtain the true “belief” by BP, and the characteristics of the algorithm and its equilibrium are not clearly understood. Our study gives an intuitive understanding of the mechanism, and a new framework for the analysis. Based on the framework, we reveal basic properties of the turbo decoding.

3 0.18738823 97 nips-2001-Information-Geometrical Significance of Sparsity in Gallager Codes

Author: Toshiyuki Tanaka, Shiro Ikeda, Shun-ichi Amari

Abstract: We report a result of perturbation analysis on decoding error of the belief propagation decoder for Gallager codes. The analysis is based on information geometry, and it shows that the principal term of decoding error at equilibrium comes from the m-embedding curvature of the log-linear submanifold spanned by the estimated pseudoposteriors, one for the full marginal, and K for partial posteriors, each of which takes a single check into account, where K is the number of checks in the Gallager code. It is then shown that the principal error term vanishes when the parity-check matrix of the code is so sparse that there are no two columns with overlap greater than 1. 1

4 0.11955666 57 nips-2001-Correlation Codes in Neuronal Populations

Author: Maoz Shamir, Haim Sompolinsky

Abstract: Population codes often rely on the tuning of the mean responses to the stimulus parameters. However, this information can be greatly suppressed by long range correlations. Here we study the efficiency of coding information in the second order statistics of the population responses. We show that the Fisher Information of this system grows linearly with the size of the system. We propose a bilinear readout model for extracting information from correlation codes, and evaluate its performance in discrimination and estimation tasks. It is shown that the main source of information in this system is the stimulus dependence of the variances of the single neuron responses.

5 0.11809638 72 nips-2001-Exact differential equation population dynamics for integrate-and-fire neurons

Author: Julian Eggert, Berthold Bäuml

Abstract: Mesoscopical, mathematical descriptions of dynamics of populations of spiking neurons are getting increasingly important for the understanding of large-scale processes in the brain using simulations. In our previous work, integral equation formulations for population dynamics have been derived for a special type of spiking neurons. For Integrate- and- Fire type neurons , these formulations were only approximately correct. Here, we derive a mathematically compact, exact population dynamics formulation for Integrate- and- Fire type neurons. It can be shown quantitatively in simulations that the numerical correspondence with microscopically modeled neuronal populations is excellent. 1 Introduction and motivation The goal of the population dynamics approach is to model the time course of the collective activity of entire populations of functionally and dynamically similar neurons in a compact way, using a higher descriptionallevel than that of single neurons and spikes. The usual observable at the level of neuronal populations is the populationaveraged instantaneous firing rate A(t), with A(t)6.t being the number of neurons in the population that release a spike in an interval [t, t+6.t). Population dynamics are formulated in such a way, that they match quantitatively the time course of a given A(t), either gained experimentally or by microscopical, detailed simulation. At least three main reasons can be formulated which underline the importance of the population dynamics approach for computational neuroscience. First, it enables the simulation of extensive networks involving a massive number of neurons and connections, which is typically the case when dealing with biologically realistic functional models that go beyond the single neuron level. Second, it increases the analytical understanding of large-scale neuronal dynamics , opening the way towards better control and predictive capabilities when dealing with large networks. Third, it enables a systematic embedding of the numerous neuronal models operating at different descriptional scales into a generalized theoretic framework, explaining the relationships, dependencies and derivations of the respective models. Early efforts on population dynamics approaches date back as early as 1972, to the work of Wilson and Cowan [8] and Knight [4], which laid the basis for all current population-averaged graded-response models (see e.g. [6] for modeling work using these models). More recently, population-based approaches for spiking neurons were developed, mainly by Gerstner [3, 2] and Knight [5]. In our own previous work [1], we have developed a theoretical framework which enables to systematize and simulate a wide range of models for population-based dynamics. It was shown that the equations of the framework produce results that agree quantitatively well with detailed simulations using spiking neurons, so that they can be used for realistic simulations involving networks with large numbers of spiking neurons. Nevertheless, for neuronal populations composed of Integrate-and-Fire (I&F;) neurons, this framework was only correct in an approximation. In this paper, we derive the exact population dynamics formulation for I&F; neurons. This is achieved by reducing the I&F; population dynamics to a point process and by taking advantage of the particular properties of I&F; neurons. 

6 0.10766589 87 nips-2001-Group Redundancy Measures Reveal Redundancy Reduction in the Auditory Pathway

7 0.10032495 52 nips-2001-Computing Time Lower Bounds for Recurrent Sigmoidal Neural Networks

8 0.098349966 37 nips-2001-Associative memory in realistic neuronal networks

9 0.096127957 82 nips-2001-Generating velocity tuning by asymmetric recurrent connections

10 0.089277633 141 nips-2001-Orientation-Selective aVLSI Spiking Neurons

11 0.086396277 123 nips-2001-Modeling Temporal Structure in Classical Conditioning

12 0.085191064 45 nips-2001-Boosting and Maximum Likelihood for Exponential Models

13 0.082997099 159 nips-2001-Reducing multiclass to binary by coupling probability estimates

14 0.072255701 174 nips-2001-Spike timing and the coding of naturalistic sounds in a central auditory area of songbirds

15 0.070524707 150 nips-2001-Probabilistic Inference of Hand Motion from Neural Activity in Motor Cortex

16 0.064044029 76 nips-2001-Fast Parameter Estimation Using Green's Functions

17 0.062791415 27 nips-2001-Activity Driven Adaptive Stochastic Resonance

18 0.062127922 160 nips-2001-Reinforcement Learning and Time Perception -- a Model of Animal Experiments

19 0.060000297 11 nips-2001-A Maximum-Likelihood Approach to Modeling Multisensory Enhancement

20 0.058615215 124 nips-2001-Modeling the Modulatory Effect of Attention on Human Spatial Vision


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.17), (1, -0.19), (2, -0.092), (3, 0.004), (4, 0.051), (5, -0.135), (6, 0.099), (7, -0.134), (8, 0.106), (9, 0.127), (10, -0.139), (11, 0.039), (12, -0.08), (13, 0.008), (14, 0.051), (15, -0.064), (16, 0.201), (17, -0.049), (18, 0.058), (19, 0.043), (20, -0.069), (21, 0.152), (22, 0.129), (23, 0.013), (24, 0.022), (25, 0.046), (26, 0.151), (27, 0.001), (28, 0.094), (29, 0.071), (30, 0.022), (31, -0.114), (32, 0.03), (33, -0.101), (34, 0.008), (35, 0.11), (36, 0.097), (37, -0.07), (38, 0.053), (39, -0.041), (40, -0.077), (41, 0.038), (42, -0.039), (43, -0.046), (44, -0.008), (45, -0.117), (46, 0.057), (47, -0.029), (48, 0.022), (49, -0.011)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.95630038 131 nips-2001-Neural Implementation of Bayesian Inference in Population Codes

Author: Si Wu, Shun-ichi Amari

Abstract: This study investigates a population decoding paradigm, in which the estimation of stimulus in the previous step is used as prior knowledge for consecutive decoding. We analyze the decoding accuracy of such a Bayesian decoder (Maximum a Posteriori Estimate), and show that it can be implemented by a biologically plausible recurrent network, where the prior knowledge of stimulus is conveyed by the change in recurrent interactions as a result of Hebbian learning. 1

2 0.70084947 97 nips-2001-Information-Geometrical Significance of Sparsity in Gallager Codes

Author: Toshiyuki Tanaka, Shiro Ikeda, Shun-ichi Amari

Abstract: We report a result of perturbation analysis on decoding error of the belief propagation decoder for Gallager codes. The analysis is based on information geometry, and it shows that the principal term of decoding error at equilibrium comes from the m-embedding curvature of the log-linear submanifold spanned by the estimated pseudoposteriors, one for the full marginal, and K for partial posteriors, each of which takes a single check into account, where K is the number of checks in the Gallager code. It is then shown that the principal error term vanishes when the parity-check matrix of the code is so sparse that there are no two columns with overlap greater than 1. 1

3 0.69997185 98 nips-2001-Information Geometrical Framework for Analyzing Belief Propagation Decoder

Author: Shiro Ikeda, Toshiyuki Tanaka, Shun-ichi Amari

Abstract: The mystery of belief propagation (BP) decoder, especially of the turbo decoding, is studied from information geometrical viewpoint. The loopy belief network (BN) of turbo codes makes it difficult to obtain the true “belief” by BP, and the characteristics of the algorithm and its equilibrium are not clearly understood. Our study gives an intuitive understanding of the mechanism, and a new framework for the analysis. Based on the framework, we reveal basic properties of the turbo decoding.

4 0.64667964 57 nips-2001-Correlation Codes in Neuronal Populations

Author: Maoz Shamir, Haim Sompolinsky

Abstract: Population codes often rely on the tuning of the mean responses to the stimulus parameters. However, this information can be greatly suppressed by long range correlations. Here we study the efficiency of coding information in the second order statistics of the population responses. We show that the Fisher Information of this system grows linearly with the size of the system. We propose a bilinear readout model for extracting information from correlation codes, and evaluate its performance in discrimination and estimation tasks. It is shown that the main source of information in this system is the stimulus dependence of the variances of the single neuron responses.

5 0.43933922 87 nips-2001-Group Redundancy Measures Reveal Redundancy Reduction in the Auditory Pathway

Author: Gal Chechik, Amir Globerson, M. J. Anderson, E. D. Young, Israel Nelken, Naftali Tishby

Abstract: The way groups of auditory neurons interact to code acoustic information is investigated using an information theoretic approach. We develop measures of redundancy among groups of neurons, and apply them to the study of collaborative coding efficiency in two processing stations in the auditory pathway: the inferior colliculus (IC) and the primary auditory cortex (AI). Under two schemes for the coding of the acoustic content, acoustic segments coding and stimulus identity coding, we show differences both in information content and group redundancies between IC and AI neurons. These results provide for the first time a direct evidence for redundancy reduction along the ascending auditory pathway, as has been hypothesized for theoretical considerations [Barlow 1959,2001]. The redundancy effects under the single-spikes coding scheme are significant only for groups larger than ten cells, and cannot be revealed with the redundancy measures that use only pairs of cells. The results suggest that the auditory system transforms low level representations that contain redundancies due to the statistical structure of natural stimuli, into a representation in which cortical neurons extract rare and independent component of complex acoustic signals, that are useful for auditory scene analysis. 1

6 0.43378466 82 nips-2001-Generating velocity tuning by asymmetric recurrent connections

7 0.39177263 72 nips-2001-Exact differential equation population dynamics for integrate-and-fire neurons

8 0.38135475 83 nips-2001-Geometrical Singularities in the Neuromanifold of Multilayer Perceptrons

9 0.36591968 160 nips-2001-Reinforcement Learning and Time Perception -- a Model of Animal Experiments

10 0.34934488 18 nips-2001-A Rational Analysis of Cognitive Control in a Speeded Discrimination Task

11 0.34227967 141 nips-2001-Orientation-Selective aVLSI Spiking Neurons

12 0.34098819 174 nips-2001-Spike timing and the coding of naturalistic sounds in a central auditory area of songbirds

13 0.33637065 76 nips-2001-Fast Parameter Estimation Using Green's Functions

14 0.33380878 37 nips-2001-Associative memory in realistic neuronal networks

15 0.3316946 11 nips-2001-A Maximum-Likelihood Approach to Modeling Multisensory Enhancement

16 0.32641685 26 nips-2001-Active Portfolio-Management based on Error Correction Neural Networks

17 0.31966561 159 nips-2001-Reducing multiclass to binary by coupling probability estimates

18 0.31664333 52 nips-2001-Computing Time Lower Bounds for Recurrent Sigmoidal Neural Networks

19 0.29535696 19 nips-2001-A Rotation and Translation Invariant Discrete Saliency Network

20 0.28470823 156 nips-2001-Rao-Blackwellised Particle Filtering via Data Augmentation


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(0, 0.232), (14, 0.05), (17, 0.026), (19, 0.033), (27, 0.146), (30, 0.084), (38, 0.052), (59, 0.039), (72, 0.065), (79, 0.041), (91, 0.127)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.86034155 50 nips-2001-Classifying Single Trial EEG: Towards Brain Computer Interfacing

Author: Benjamin Blankertz, Gabriel Curio, Klaus-Robert Müller

Abstract: Driven by the progress in the field of single-trial analysis of EEG, there is a growing interest in brain computer interfaces (BCIs), i.e., systems that enable human subjects to control a computer only by means of their brain signals. In a pseudo-online simulation our BCI detects upcoming finger movements in a natural keyboard typing condition and predicts their laterality. This can be done on average 100–230 ms before the respective key is actually pressed, i.e., long before the onset of EMG. Our approach is appealing for its short response time and high classification accuracy (>96%) in a binary decision where no human training is involved. We compare discriminative classifiers like Support Vector Machines (SVMs) and different variants of Fisher Discriminant that possess favorable regularization properties for dealing with high noise cases (inter-trial variablity).

same-paper 2 0.84081429 131 nips-2001-Neural Implementation of Bayesian Inference in Population Codes

Author: Si Wu, Shun-ichi Amari

Abstract: This study investigates a population decoding paradigm, in which the estimation of stimulus in the previous step is used as prior knowledge for consecutive decoding. We analyze the decoding accuracy of such a Bayesian decoder (Maximum a Posteriori Estimate), and show that it can be implemented by a biologically plausible recurrent network, where the prior knowledge of stimulus is conveyed by the change in recurrent interactions as a result of Hebbian learning. 1

3 0.70716667 27 nips-2001-Activity Driven Adaptive Stochastic Resonance

Author: Gregor Wenning, Klaus Obermayer

Abstract: Cortical neurons might be considered as threshold elements integrating in parallel many excitatory and inhibitory inputs. Due to the apparent variability of cortical spike trains this yields a strongly fluctuating membrane potential, such that threshold crossings are highly irregular. Here we study how a neuron could maximize its sensitivity w.r.t. a relatively small subset of excitatory input. Weak signals embedded in fluctuations is the natural realm of stochastic resonance. The neuron's response is described in a hazard-function approximation applied to an Ornstein-Uhlenbeck process. We analytically derive an optimality criterium and give a learning rule for the adjustment of the membrane fluctuations, such that the sensitivity is maximal exploiting stochastic resonance. We show that adaptation depends only on quantities that could easily be estimated locally (in space and time) by the neuron. The main results are compared with simulations of a biophysically more realistic neuron model. 1

4 0.69645494 29 nips-2001-Adaptive Sparseness Using Jeffreys Prior

Author: Mário Figueiredo

Abstract: In this paper we introduce a new sparseness inducing prior which does not involve any (hyper)parameters that need to be adjusted or estimated. Although other applications are possible, we focus here on supervised learning problems: regression and classification. Experiments with several publicly available benchmark data sets show that the proposed approach yields state-of-the-art performance. In particular, our method outperforms support vector machines and performs competitively with the best alternative techniques, both in terms of error rates and sparseness, although it involves no tuning or adjusting of sparsenesscontrolling hyper-parameters.

5 0.69484526 13 nips-2001-A Natural Policy Gradient

Author: Sham M. Kakade

Abstract: We provide a natural gradient method that represents the steepest descent direction based on the underlying structure of the parameter space. Although gradient methods cannot make large changes in the values of the parameters, we show that the natural gradient is moving toward choosing a greedy optimal action rather than just a better action. These greedy optimal actions are those that would be chosen under one improvement step of policy iteration with approximate, compatible value functions, as defined by Sutton et al. [9]. We then show drastic performance improvements in simple MDPs and in the more challenging MDP of Tetris. 1

6 0.69022268 8 nips-2001-A General Greedy Approximation Algorithm with Applications

7 0.69019091 197 nips-2001-Why Neuronal Dynamics Should Control Synaptic Learning Rules

8 0.68976301 57 nips-2001-Correlation Codes in Neuronal Populations

9 0.68974751 46 nips-2001-Categorization by Learning and Combining Object Parts

10 0.68859953 157 nips-2001-Rates of Convergence of Performance Gradient Estimates Using Function Approximation and Bias in Reinforcement Learning

11 0.68726867 92 nips-2001-Incorporating Invariances in Non-Linear Support Vector Machines

12 0.68719721 88 nips-2001-Grouping and dimensionality reduction by locally linear embedding

13 0.6863395 60 nips-2001-Discriminative Direction for Kernel Classifiers

14 0.6854893 77 nips-2001-Fast and Robust Classification using Asymmetric AdaBoost and a Detector Cascade

15 0.68466437 190 nips-2001-Thin Junction Trees

16 0.68362963 89 nips-2001-Grouping with Bias

17 0.68275791 185 nips-2001-The Method of Quantum Clustering

18 0.68187755 127 nips-2001-Multi Dimensional ICA to Separate Correlated Sources

19 0.68095636 150 nips-2001-Probabilistic Inference of Hand Motion from Neural Activity in Motor Cortex

20 0.68068397 84 nips-2001-Global Coordination of Local Linear Models