nips nips2004 nips2004-181 knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Jochen Triesch
Abstract: This paper explores the computational consequences of simultaneous intrinsic and synaptic plasticity in individual model neurons. It proposes a new intrinsic plasticity mechanism for a continuous activation model neuron based on low order moments of the neuron’s firing rate distribution. The goal of the intrinsic plasticity mechanism is to enforce a sparse distribution of the neuron’s activity level. In conjunction with Hebbian learning at the neuron’s synapses, the neuron is shown to discover sparse directions in the input. 1
Reference: text
sentIndex sentText sentNum sentScore
1 edu Abstract This paper explores the computational consequences of simultaneous intrinsic and synaptic plasticity in individual model neurons. [sent-3, score-1.217]
2 It proposes a new intrinsic plasticity mechanism for a continuous activation model neuron based on low order moments of the neuron’s firing rate distribution. [sent-4, score-1.501]
3 The goal of the intrinsic plasticity mechanism is to enforce a sparse distribution of the neuron’s activity level. [sent-5, score-1.293]
4 In conjunction with Hebbian learning at the neuron’s synapses, the neuron is shown to discover sparse directions in the input. [sent-6, score-0.427]
5 In particular, neurons in different visual cortical areas show an approximately exponential distribution of their firing rates in response to stimulation with natural video sequences [1]. [sent-8, score-0.425]
6 Several different mechanisms seem to play a role: First, synaptic learning can change a neuron’s response to a distribution of inputs. [sent-13, score-0.29]
7 Second, intrinsic learning may change conductances in the dendrites and soma to adapt the distribution of firing rates [7]. [sent-14, score-0.68]
8 This paper investigates the interaction of intrinsic and synaptic learning processes in individual model neurons in the learning of sparse codes. [sent-18, score-0.9]
9 We consider an individual continuous activation model neuron with a non-linear transfer function that has adjustable parameters. [sent-19, score-0.67]
10 We propose a simple intrinsic learning mechanism, based on estimates of low-order moments of the activity distribution, that allows the model neuron to adjust the parameters of its non-linear transfer function so as to obtain an approximately exponential distribution of its activity. [sent-20, score-1.521]
11 We then show that if combined with a standard Hebbian learning rule employing multiplicative weight normalization, this leads to the extraction of sparse features from the input. [sent-21, score-0.346]
12 This is in sharp contrast to standard Hebbian learning in linear units with multiplicative weight normalization, which leads to the extraction of the principal eigenvector of the input correlation matrix. [sent-22, score-0.255]
13 We demonstrate the behavior of the combined intrinsic and synaptic learning mechanisms on the classic bars problem [4], a non-linear independent component analysis problem. [sent-23, score-0.874]
14 Section 2 introduces our scheme for intrinsic plasticity and presents experiments demonstrating the effectiveness of the proposed mechanism for inducing a sparse firing rate distribution. [sent-25, score-1.245]
15 Section 3 studies the combination of intrinsic plasticity with Hebbian learning at the synapses and demonstrates how it gives rise to the discovery of sparse directions in the input. [sent-26, score-1.125]
16 2 Intrinsic Plasticity Mechanism. Biological neurons not only adapt their synaptic properties but also change their excitability through the modification of voltage-gated channels. [sent-29, score-0.413]
17 Such intrinsic plasticity has been observed across many species and brain areas [9]. [sent-30, score-0.92]
18 Although these processes and their underlying mechanisms remain poorly understood, it has been hypothesized that this form of plasticity contributes to the homeostasis of a neuron's mean firing rate. [sent-31, score-0.616]
19 Our basic hypothesis is that the goal of intrinsic plasticity is to ensure an approximately exponential distribution of firing rate levels in individual neurons. [sent-32, score-1.295]
20 A learning rule was derived that adapts the properties of voltage-gated channels to match the firing rate distribution of the unit to a desired distribution. [sent-34, score-0.421]
21 In order to facilitate the simulation of potentially large networks we choose a different, more abstract level of modeling employing a continuous activation unit with a non-linear transfer function. [sent-35, score-0.463]
22 Our model neuron is described by Y = S_θ(X), X = w^T u (1), where Y is the neuron's output (firing rate), X is the neuron's total synaptic current, w is the neuron's weight vector representing synaptic strengths, the vector u represents the pre-synaptic input, and S_θ(·) [sent-36, score-0.797]
23 is the neuron's non-linear transfer function (activation function), parameterized by a vector of parameters θ. [sent-37, score-0.272]
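To make the model concrete, here is a minimal sketch in Python of the model neuron of Eq. (1). The specific two-parameter sigmoid S_θ(x) = 1/(1 + exp(−(x − b)/a)) is an assumption (the extracted text only says the transfer function has two adjustable parameters); it is chosen so that a sets the width and b the threshold, which is consistent with the Gaussian-input scaling a → a_∞ σ_G, b → b_∞ σ_G + µ_G reported below.

import numpy as np

# Minimal sketch of the model neuron of Eq. (1): Y = S_theta(X), X = w^T u.
# The sigmoid parameterization S(x) = 1 / (1 + exp(-(x - b)/a)) is an assumption.
class ModelNeuron:
    def __init__(self, n_inputs, a=1.0, b=0.0, rng=None):
        rng = np.random.default_rng() if rng is None else rng
        self.w = rng.normal(size=n_inputs)
        self.w /= np.linalg.norm(self.w)   # unit-length weight vector
        self.a, self.b = a, b              # intrinsic (transfer-function) parameters

    def activate(self, u):
        x = self.w @ u                     # total synaptic current X = w^T u
        y = 1.0 / (1.0 + np.exp(-(x - self.b) / self.a))
        return x, y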
24 In this section we will not be concerned with synaptic mechanisms changing the weight vector w, so we will just consider a particular distribution p(X = x) ≡ p(x) of the net synaptic current and the resulting distribution of firing rates p(Y = y) ≡ p(y). [sent-38, score-0.754]
25 Intrinsic plasticity is modeled as inducing changes to the non-linear transfer function with the goal of bringing the distribution of activity levels p(y) close to an exponential distribution. [sent-39, score-1.039]
26 Given a signal with a certain distribution, find a non-linear transfer function that converts the signal to one with a desired distribution. [sent-41, score-0.38]
27 In particular, it requires that the individual neuron can change its non-linear transfer function arbitrarily, i.e. with arbitrarily many degrees of freedom. [sent-46, score-0.526]
28 Since the non-linearity has only two degrees of freedom, it is generally not possible to guarantee an exponential activity distribution for an arbitrary input distribution. [sent-53, score-0.322]
29 A plausible alternative goal is to match low-order moments of the activity distribution to those of a specific target distribution. [sent-54, score-0.255]
30 The mean µ of the desired exponential distribution is a free parameter which may vary across cortical areas. [sent-59, score-0.329]
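The update equations of the paper's moment-based rule are not included in the extracted sentences, so the following is only a sketch of one way a rule of this kind could look: a stochastic gradient step on the squared mismatch of the first two moments, where the exponential target with mean µ fixes E[Y] = µ and E[Y²] = 2µ². The sigmoid form, the loss, the weighting lam, and the learning rates are all assumptions; the ModelNeuron fields from the sketch above are reused.

# Hedged sketch of a moment-matching intrinsic rule (not the paper's exact equations).
# Target moments of an exponential with mean mu: E[Y] = mu, E[Y^2] = 2*mu^2.
def intrinsic_update(neuron, x, y, mu, m1, m2, eta=0.01, tau=0.99, lam=0.25):
    m1 = tau * m1 + (1 - tau) * y           # running estimate of E[Y]
    m2 = tau * m2 + (1 - tau) * y * y       # running estimate of E[Y^2]
    # gradients of the current output w.r.t. the assumed sigmoid's parameters
    dy_db = -y * (1 - y) / neuron.a
    dy_da = -y * (1 - y) * (x - neuron.b) / neuron.a ** 2
    # stochastic gradient on (E[Y]-mu)^2 + lam*(E[Y^2]-2*mu^2)^2, via the current sample
    err = (m1 - mu) + 2.0 * lam * (m2 - 2.0 * mu * mu) * y
    neuron.b -= eta * err * dy_db
    neuron.a -= eta * err * dy_da
    neuron.a = max(neuron.a, 1e-3)          # keep the sigmoid width positive
    return m1, m2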
31 2.2 Experiments with Intrinsic Plasticity Mechanism. We tested the proposed intrinsic plasticity mechanism for a number of distributions of the synaptic current X (Fig. 1). [sent-66, score-1.222]
32 Panel b compares the theoretically optimal transfer function (dotted), which would lead to an exactly exponential distribution of Y , with the learned sigmoidal transfer function (solid). [sent-77, score-0.707]
33 The learned transfer function gives a very good fit. [sent-78, score-0.269]
34 The resulting distribution of Y is in fact very close to the desired exponential distribution. [sent-79, score-0.262]
35 For the general case of a Gaussian input distribution with mean µG and standard deviation σG , the sigmoid parameters will converge to a → a∞ σG and b → b∞ σG + µG under the intrinsic plasticity rule. [sent-80, score-1.202]
36 If the input to the unit can be assumed to be Gaussian, this relation can be used to calculate the desired parameters of the sigmoid non-linearity directly. [sent-81, score-0.377]
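A quick consistency check of the scaling stated above, under the assumption (used in the sketches here) that the sigmoid has the form S_θ(x) = 1/(1 + exp(−(x − b)/a)): writing the Gaussian input as X = µ_G + σ_G Z with Z standard normal,

\[
\frac{X - b}{a} \;=\; \frac{\mu_G + \sigma_G Z - (b_\infty \sigma_G + \mu_G)}{a_\infty \sigma_G} \;=\; \frac{Z - b_\infty}{a_\infty},
\]

so with a = a_∞ σ_G and b = b_∞ σ_G + µ_G the argument of the sigmoid, and hence the output distribution, no longer depends on µ_G or σ_G; a_∞ and b_∞ are simply the parameters learned for a standard normal input.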
37 In general, however, intrinsic plasticity may give rise to non-linear changes that cannot be captured by such a linear re-scaling of all weights. [sent-83, score-0.954]
40 Figure 1: Dynamics of intrinsic plasticity mechanism for various input distributions. [sent-110, score-1.102]
41 Panel b shows the optimal (dotted) and learned transfer function (solid). [sent-116, score-0.269]
42 Panels c and d show the result of intrinsic plasticity for two other input distributions. [sent-121, score-0.982]
43 In the case of a uniform input distribution in the interval [0, 1] (panel c) the optimal transfer function becomes infinitely steep for x → 1. [sent-122, score-0.354]
44 For an exponentially distributed input (panel d), the ideal transfer function would simply be the identity function. [sent-123, score-0.302]
45 In both cases the intrinsic plasticity mechanism adjusts the sigmoid non-linearity in a sensible fashion and the output distribution is a fair approximation of the desired exponential distribution. [sent-124, score-1.516]
46 2.3 Discussion of the Intrinsic Plasticity Mechanism. The proposed mechanism for intrinsic plasticity is effective in driving a neuron to exhibit an approximately exponential distribution of firing rates, as observed in biological neurons in the visual system. [sent-126, score-1.684]
47 The same adaptation mechanism can also be used in conjunction with, say, an adjustable threshold-linear activation function. [sent-128, score-0.301]
48 An interesting alternative to the proposed mechanism can be derived by directly minimizing the KL divergence between the output distribution and the desired exponential distribution through stochastic gradient descent. [sent-129, score-0.466]
49 The resulting learning rule, which is closely related to a rule for adapting a sigmoid nonlinearity to maximize the output entropy derived by Bell and Sejnowski [2], will be discussed elsewhere. [sent-130, score-0.235]
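For reference, the objective mentioned in the last two sentences (not the learning rule itself, which is not given in the extracted text) is the divergence between the output density p(y) and the exponential density with mean µ:

\[
D_{\mathrm{KL}}\!\left(p \,\|\, p_{\exp}\right) \;=\; \int p(y)\,\log\frac{p(y)}{\tfrac{1}{\mu}e^{-y/\mu}}\,dy \;=\; -H(y) \;+\; \frac{1}{\mu}\,E[y] \;+\; \log\mu .
\]

Minimizing it therefore maximizes the output entropy H(y) subject to a penalty on the mean activity, which makes the connection to Bell and Sejnowski's entropy-maximization rule explicit.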
50 A specific, testable prediction of the simple model is that changes to the distribution of a neuron’s firing rate levels that keep the average firing rate of the neuron unchanged but alter the second moment of the firing rate distribution should lead to measurable changes in the neuron’s excitability. [sent-134, score-0.62]
51 3 Combination of Intrinsic and Synaptic Plasticity. In this section we study the effects of simultaneous intrinsic and synaptic learning for an individual model neuron. [sent-135, score-0.769]
52 In principle, any Hebbian learning rule can be combined with our scheme for intrinsic plasticity. [sent-137, score-0.538]
53 We simply adopt a multiplicative normalization scheme that re-scales the weight vector to unit length after each update: w ← w/||w||. [sent-141, score-0.332]
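A hedged sketch of one combined learning step as described here, reusing the sketches above. The plain Hebbian update ∆w = η y u is an assumption (the text only says "any Hebbian learning rule" and refers to its Eq. (6)), but it is consistent with the later description of ∆w as a weighted sum of inputs with the activities acting as weights; the renormalization follows the quoted w ← w/||w||.

# Hedged sketch of one combined plasticity step (the Hebbian rule Delta w = eta*y*u is assumed).
def combined_step(neuron, u, mu, m1, m2, eta_w=0.005):
    x, y = neuron.activate(u)
    neuron.w += eta_w * y * u                          # Hebbian update, weighted by activity
    neuron.w /= np.linalg.norm(neuron.w)               # multiplicative normalization to unit length
    m1, m2 = intrinsic_update(neuron, x, y, mu, m1, m2)  # intrinsic plasticity (sketch above)
    return y, m1, m2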
54 3.1 Analysis for the Limiting Case of Fast Intrinsic Plasticity. Under a few assumptions, an interesting intuition about the simultaneous intrinsic and Hebbian learning can be gained. [sent-143, score-0.526]
55 Consider the limit of intrinsic plasticity being much faster than Hebbian plasticity. [sent-144, score-0.92]
56 In this case we may assume that the non-linearity has adapted to give an approximately exponential distribution of the firing rate Y before w can change much. [sent-146, score-0.291]
57 Thus, from (6), ∆w can be seen as a weighted sum of the inputs u, with the activities Y acting as weights that follow an approximately exponential distribution. [sent-147, score-0.273]
58 Since similar inputs u will produce similar outputs Y , the expected value of the weight update ∆w will be dominated by a small set of inputs that produce the highest output activities. [sent-148, score-0.25]
59 The remainder of the inputs will “pull” the weight vector back to the average input ⟨u⟩. [sent-149, score-0.281]
60 Due to the multiplicative weight normalization, stationary states of the weight vector are reached when ∆w is parallel to w, i.e. ∆w ∝ w. [sent-150, score-0.356]
61 A simple example illustrates the effect of intrinsic plasticity on Hebbian learning in more detail. [sent-153, score-0.92]
62 If the weight vector is slightly closer to one of the two clusters, inputs from this cluster will activate the unit more strongly and will exert a stronger “pull” on the weight vector. [sent-156, score-0.468]
63 Let m = µ ln(2) denote the median of the exponential firing rate distribution with mean µ. [sent-157, score-0.255]
64 Then inputs from the closer cluster, say c1 , will be responsible for all activities above m while the inputs from the other cluster will be responsible for all activities below m. [sent-158, score-0.274]
65 w = [(1 ± ln 2) c1 + (1 ∓ ln 2) c2] / || (1 ± ln 2) c1 + (1 ∓ ln 2) c2 || (9). The weight vector moves close to one of the two clusters but does not fully commit to it. [sent-163, score-0.296]
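The two weights in (9) follow from splitting the exponential activity distribution at its median m = µ ln 2, under the approximation stated above that all activities above m come from the closer cluster c1 and all activities below m from c2:

\[
E\big[Y\,\mathbf{1}_{\{Y>m\}}\big] \;=\; \int_{\mu\ln 2}^{\infty} \frac{y}{\mu}\,e^{-y/\mu}\,dy \;=\; \frac{\mu\,(1+\ln 2)}{2},
\qquad
E\big[Y\,\mathbf{1}_{\{Y<m\}}\big] \;=\; \mu - \frac{\mu\,(1+\ln 2)}{2} \;=\; \frac{\mu\,(1-\ln 2)}{2}.
\]

The expected Hebbian update is therefore proportional to (1 + ln 2) c1 + (1 − ln 2) c2; the common factor µ/2 cancels under the normalization, which is also why the stationary weight vector does not depend on µ.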
66 For the general case of N input clusters, only a few clusters will strongly contribute to the final weight vector. [sent-164, score-0.247]
67 Generalizing the result from above, it is not difficult to derive that the weight vector w will be proportional to a weighted sum of the cluster centers: w ∝ Σ_{i=1}^{N} f_i c_i, with f_i = 1 + log(N) − i log(i) + (i − 1) log(i − 1) (10), where we define 0 log(0) ≡ 0. [sent-165, score-0.378]
68 Here, fi denotes the relative contribution of the i-th closest input cluster to the final weight vector. [sent-166, score-0.316]
69 Note that the final weight vector does not depend on the desired mean activity level µ. [sent-169, score-0.355]
70 Fig. 2 plots (10) for N = 1000 (left) and shows that the resulting distribution of the f_i is approximately exponential (right). [sent-171, score-0.327]
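A small numerical check of the claim about Fig. 2 (the figure itself is not reproduced here): computing the f_i of (10) for N = 1000 and confirming that their mean and standard deviation are both close to 1, as expected for an approximately exponential distribution. The script is illustrative only.

import numpy as np

# Relative contributions f_i of Eq. (10); the sum telescopes so that mean(f) = 1.
N = 1000
i = np.arange(1, N + 1, dtype=float)

def xlogx(z):
    # implements z*log(z) with the convention 0*log(0) := 0
    return np.where(z > 0, z * np.log(np.maximum(z, 1.0)), 0.0)

f = 1.0 + np.log(N) - xlogx(i) + xlogx(i - 1)
print(f[:3], f[-3:])        # roughly ln(N/i): large for the closest clusters, near 0 for the rest
print(f.mean(), f.std())    # mean and std both near 1, as for an exponential distribution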
71 We can see why such a weight vector may correspond to a sparse direction in the input space as follows: consider the case where the input cluster centers are random vectors of unit length in a high-dimensional space. [sent-172, score-0.515]
72 If we consider the projection of an input from an arbitrary cluster, say c_j, onto the weight vector, we see that w^T c_j ∝ Σ_i f_i c_i^T c_j ≈ f_j. [sent-174, score-0.389]
73 The distribution of X = w^T u follows the distribution of the f_i, which is approximately exponential. [sent-175, score-0.261]
74 Thus, the projection of all inputs onto the weight vector has an approximately exponential distribution. [sent-176, score-0.38]
75 It is interesting to note that in this situation the optimal transfer function S∗ that will make the unit's activity Y have an exponential distribution with a desired mean µ is simply a multiplication with a constant k, i.e. S∗(x) = k x. [sent-178, score-0.621]
76 Thus, depending on the initial weight vector and the resulting distribution of X, the neuron’s activation function may transiently adapt to enforce an approximately exponential firing rate distribution, but the simultaneous Hebbian learning drives it back to a linear form. [sent-181, score-0.642]
77 In the end, a simple linear activation function may result from this interplay of intrinsic and synaptic plasticity. [sent-182, score-0.747]
78 In fact, the observation of approximately linear activation functions in cortical neurons is not uncommon. [sent-183, score-0.292]
79 We have trained an individual sigmoidal model neuron on the bars input domain. [sent-208, score-0.517]
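The bars problem [4] is only named in the extracted text, so the generator below is the standard formulation rather than the paper's exact setup; the grid size, bar probability, and pixel values are placeholder assumptions. Each of the 2k bars on a k × k grid appears independently with probability p, and the image is the (non-linear, OR-like) superposition of the active bars.

import numpy as np

# Standard "bars" stimulus generator (parameters are assumptions, not taken from the paper).
def bars_stimulus(k=10, p=0.1, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    img = np.zeros((k, k))
    for r in range(k):
        if rng.random() < p:
            img[r, :] = 1.0        # horizontal bar
    for c in range(k):
        if rng.random() < p:
            img[:, c] = 1.0        # vertical bar (superposition saturates at 1, i.e. OR)
    return img.reshape(-1)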
80 The theoretical analysis above assumed that intrinsic plasticity is much faster than synaptic plasticity. [sent-209, score-1.102]
81 Here, we set the intrinsic plasticity to be slower than the synaptic plasticity, which is more plausible biologically, to see if this may still allow the discovery of sparse directions in the input. [sent-210, score-1.3]
82 As shown in Fig. 3 (right), the unit's weight vector aligns with one of the individual bars as soon as the intrinsic plasticity has pushed the model neuron into a regime where its responses are sparse: the unit has discovered one of the independent sources of the input domain. [sent-212, score-1.708]
83 This result is robust if the desired mean activity µ of the unit is changed over a wide range. [sent-213, score-0.295]
84 For µ above approximately 0.15, the unit will fail to represent an individual bar and will instead learn a mixture of two or more bars, with different bars being represented with different strengths. [sent-217, score-0.343]
85 Thus, in this example — in contrast to the theoretical result above — the desired mean activity µ does influence the weight vector that is being learned. [sent-218, score-0.355]
86 The reason for this is that the intrinsic plasticity only imperfectly adjusts the output distribution to the desired exponential shape. [sent-219, score-1.257]
87 For low µ, only the highest mode, which corresponds to a specific single bar presented in isolation, contributes strongly to the weight vector. [sent-222, score-0.221]
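Putting the earlier sketches together, a minimal hypothetical training loop on the bars input might look as follows. All hyperparameters are placeholders (µ = 0.1 is chosen inside the range discussed above), and whether this simplified sketch reproduces the single-bar result reported in the paper depends on the assumed rules and parameters.

rng = np.random.default_rng(0)
neuron = ModelNeuron(n_inputs=100, rng=rng)   # 10x10 bars images, flattened
m1, m2 = 0.0, 0.0                             # running moment estimates
for step in range(200000):
    u = bars_stimulus(k=10, p=0.1, rng=rng)
    y, m1, m2 = combined_step(neuron, u, mu=0.1, m1=m1, m2=m2)

# The paper reports that the learned weight vector aligns with a single bar;
# inspect the weights as an image to check the behaviour of this sketch.
print(np.round(neuron.w.reshape(10, 10), 2))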
88 While the plasticity of a neuron’s synapses has always been a core topic of neural computation research, there has been little work investigating the computational properties of intrinsic plasticity mechanisms and the relation between intrinsic and synaptic learning. [sent-224, score-2.111]
89 This paper has investigated the potential role of intrinsic learning mechanisms operating at the soma when used in conjunction with Hebbian learning at the synapses. [sent-225, score-0.597]
90 To this end, we have proposed a new intrinsic plasticity mechanism that adjusts the parameters of a sigmoid nonlinearity to move the neuron’s firing rate distribution to a sparse regime. [sent-226, score-1.443]
91 The learning mechanism is effective in producing approximately exponential firing rate distributions as observed in neurons in the visual system of cats and primates. [sent-227, score-0.489]
92 Studying simultaneous intrinsic and synaptic learning, we found a synergistic relation between the two. [sent-228, score-0.708]
93 We demonstrated how the two mechanisms may cooperate to discover sparse directions in the input. [sent-229, score-0.221]
94 When applied to the classic “bars” problem, a single unit was shown to discover one of the independent sources as soon as the intrinsic plasticity moved the unit’s activity distribution into a sparse regime. [sent-230, score-1.332]
95 In such approaches, the "standard" Hebbian weight update rule is modified to allow the discovery of non-Gaussian directions in the input. [sent-234, score-0.235]
96 We have shown that the combination of intrinsic plasticity with the standard Hebbian learning rule can be sufficient for the discovery of sparse directions in the input. [sent-235, score-1.132]
97 Future work will analyze the combination of intrinsic plasticity with other Hebbian learning rules. [sent-236, score-0.92]
98 The nonlinear nature of the transfer function may facilitate the construction of hierarchical networks for unsupervised learning. [sent-238, score-0.262]
99 It will also be interesting to study the effects of intrinsic plasticity in the context of recurrent networks, where it may contribute to keeping the network in a certain desired dynamic regime. [sent-239, score-1.012]
100 The other side of the engram: Experience-driven changes in neuronal intrinsic excitability. [sent-302, score-0.529]
wordName wordTfidf (topN-words)
[('intrinsic', 0.472), ('plasticity', 0.448), ('hebbian', 0.245), ('transfer', 0.24), ('ring', 0.225), ('neuron', 0.225), ('synaptic', 0.182), ('bars', 0.141), ('sigmoid', 0.139), ('mechanism', 0.12), ('exponential', 0.118), ('weight', 0.112), ('neurons', 0.096), ('activation', 0.093), ('desired', 0.092), ('fi', 0.092), ('activity', 0.09), ('sparse', 0.089), ('moments', 0.087), ('unit', 0.084), ('approximately', 0.065), ('conductances', 0.064), ('input', 0.062), ('individual', 0.061), ('panel', 0.059), ('bar', 0.057), ('mechanisms', 0.056), ('rate', 0.056), ('gated', 0.056), ('simultaneous', 0.054), ('inputs', 0.053), ('stationary', 0.052), ('distribution', 0.052), ('ln', 0.052), ('adjustable', 0.051), ('mt', 0.05), ('cluster', 0.05), ('directions', 0.049), ('multiplicative', 0.048), ('clusters', 0.048), ('frankfurt', 0.043), ('nullclines', 0.043), ('triesch', 0.043), ('adjusts', 0.043), ('ica', 0.042), ('cj', 0.041), ('voltage', 0.041), ('rule', 0.04), ('adapt', 0.038), ('cortical', 0.038), ('pull', 0.037), ('conjunction', 0.037), ('activities', 0.037), ('changes', 0.034), ('inducing', 0.034), ('discovery', 0.034), ('visual', 0.034), ('synapses', 0.033), ('extraction', 0.033), ('vector', 0.032), ('output', 0.032), ('soma', 0.032), ('moment', 0.032), ('biological', 0.032), ('wt', 0.03), ('normalization', 0.03), ('mean', 0.029), ('learned', 0.029), ('sigmoidal', 0.028), ('unclear', 0.028), ('dy', 0.027), ('contributes', 0.027), ('discover', 0.027), ('dotted', 0.027), ('plausible', 0.026), ('scheme', 0.026), ('sources', 0.025), ('strongly', 0.025), ('passed', 0.024), ('intersection', 0.024), ('centers', 0.024), ('employing', 0.024), ('responses', 0.024), ('signal', 0.024), ('bell', 0.024), ('trajectories', 0.024), ('nonlinearity', 0.024), ('levels', 0.023), ('blind', 0.023), ('neuronal', 0.023), ('classic', 0.023), ('numerically', 0.023), ('formation', 0.023), ('soon', 0.022), ('facilitate', 0.022), ('enforce', 0.022), ('limiting', 0.022), ('rates', 0.022), ('responsible', 0.022), ('remainder', 0.022)]
simIndex simValue paperId paperTitle
same-paper 1 0.99999952 181 nips-2004-Synergies between Intrinsic and Synaptic Plasticity in Individual Model Neurons
Author: Jochen Triesch
Abstract: This paper explores the computational consequences of simultaneous intrinsic and synaptic plasticity in individual model neurons. It proposes a new intrinsic plasticity mechanism for a continuous activation model neuron based on low order moments of the neuron’s firing rate distribution. The goal of the intrinsic plasticity mechanism is to enforce a sparse distribution of the neuron’s activity level. In conjunction with Hebbian learning at the neuron’s synapses, the neuron is shown to discover sparse directions in the input. 1
2 0.2478984 151 nips-2004-Rate- and Phase-coded Autoassociative Memory
Author: Máté Lengyel, Peter Dayan
Abstract: Areas of the brain involved in various forms of memory exhibit patterns of neural activity quite unlike those in canonical computational models. We show how to use well-founded Bayesian probabilistic autoassociative recall to derive biologically reasonable neuronal dynamics in recurrently coupled models, together with appropriate values for parameters such as the membrane time constant and inhibition. We explicitly treat two cases. One arises from a standard Hebbian learning rule, and involves activity patterns that are coded by graded firing rates. The other arises from a spike timing dependent learning rule, and involves patterns coded by the phase of spike times relative to a coherent local field potential oscillation. Our model offers a new and more complete understanding of how neural dynamics may support autoassociation. 1
3 0.23420294 153 nips-2004-Reducing Spike Train Variability: A Computational Theory Of Spike-Timing Dependent Plasticity
Author: Sander M. Bohte, Michael C. Mozer
Abstract: Experimental studies have observed synaptic potentiation when a presynaptic neuron fires shortly before a postsynaptic neuron, and synaptic depression when the presynaptic neuron fires shortly after. The dependence of synaptic modulation on the precise timing of the two action potentials is known as spike-timing dependent plasticity or STDP. We derive STDP from a simple computational principle: synapses adapt so as to minimize the postsynaptic neuron’s variability to a given presynaptic input, causing the neuron’s output to become more reliable in the face of noise. Using an entropy-minimization objective function and the biophysically realistic spike-response model of Gerstner (2001), we simulate neurophysiological experiments and obtain the characteristic STDP curve along with other phenomena including the reduction in synaptic plasticity as synaptic efficacy increases. We compare our account to other efforts to derive STDP from computational principles, and argue that our account provides the most comprehensive coverage of the phenomena. Thus, reliability of neural response in the face of noise may be a key goal of cortical adaptation. 1
4 0.20002542 76 nips-2004-Hierarchical Bayesian Inference in Networks of Spiking Neurons
Author: Rajesh P. Rao
Abstract: There is growing evidence from psychophysical and neurophysiological studies that the brain utilizes Bayesian principles for inference and decision making. An important open question is how Bayesian inference for arbitrary graphical models can be implemented in networks of spiking neurons. In this paper, we show that recurrent networks of noisy integrate-and-fire neurons can perform approximate Bayesian inference for dynamic and hierarchical graphical models. The membrane potential dynamics of neurons is used to implement belief propagation in the log domain. The spiking probability of a neuron is shown to approximate the posterior probability of the preferred state encoded by the neuron, given past inputs. We illustrate the model using two examples: (1) a motion detection network in which the spiking probability of a direction-selective neuron becomes proportional to the posterior probability of motion in a preferred direction, and (2) a two-level hierarchical network that produces attentional effects similar to those observed in visual cortical areas V2 and V4. The hierarchical model offers a new Bayesian interpretation of attentional modulation in V2 and V4. 1
5 0.18307896 114 nips-2004-Maximum Likelihood Estimation of Intrinsic Dimension
Author: Elizaveta Levina, Peter J. Bickel
Abstract: We propose a new method for estimating intrinsic dimension of a dataset derived by applying the principle of maximum likelihood to the distances between close neighbors. We derive the estimator by a Poisson process approximation, assess its bias and variance theoretically and by simulations, and apply it to a number of simulated and real datasets. We also show it has the best overall performance compared with two other intrinsic dimension estimators. 1
6 0.16062321 28 nips-2004-Bayesian inference in spiking neurons
7 0.15833148 140 nips-2004-Optimal Information Decoding from Neuronal Populations with Specific Stimulus Selectivity
8 0.15492415 194 nips-2004-Theory of localized synfire chain: characteristic propagation speed of stable spike pattern
9 0.13256253 173 nips-2004-Spike-timing Dependent Plasticity and Mutual Information Maximization for a Spiking Neuron Model
10 0.11548892 112 nips-2004-Maximising Sensitivity in a Spiking Network
11 0.11391422 88 nips-2004-Intrinsically Motivated Reinforcement Learning
12 0.10948468 118 nips-2004-Methods for Estimating the Computational Power and Generalization Capability of Neural Microcircuits
13 0.10771079 157 nips-2004-Saliency-Driven Image Acuity Modulation on a Reconfigurable Array of Spiking Silicon Neurons
14 0.074441411 26 nips-2004-At the Edge of Chaos: Real-time Computations and Self-Organized Criticality in Recurrent Neural Networks
15 0.06956549 84 nips-2004-Inference, Attention, and Decision in a Bayesian Neural Architecture
16 0.066902041 172 nips-2004-Sparse Coding of Natural Images Using an Overcomplete Set of Limited Capacity Units
17 0.06669239 198 nips-2004-Unsupervised Variational Bayesian Learning of Nonlinear Models
18 0.059806153 33 nips-2004-Brain Inspired Reinforcement Learning
19 0.058665805 9 nips-2004-A Method for Inferring Label Sampling Mechanisms in Semi-Supervised Learning
20 0.055099782 58 nips-2004-Edge of Chaos Computation in Mixed-Mode VLSI - A Hard Liquid
topicId topicWeight
[(0, -0.199), (1, -0.317), (2, -0.083), (3, 0.029), (4, 0.004), (5, 0.084), (6, -0.035), (7, 0.019), (8, -0.023), (9, 0.075), (10, 0.053), (11, -0.075), (12, 0.028), (13, -0.105), (14, -0.077), (15, -0.003), (16, -0.188), (17, 0.017), (18, -0.065), (19, -0.007), (20, 0.102), (21, 0.049), (22, 0.132), (23, -0.038), (24, -0.102), (25, 0.01), (26, 0.126), (27, -0.092), (28, -0.053), (29, 0.054), (30, -0.108), (31, 0.082), (32, 0.024), (33, 0.018), (34, -0.072), (35, -0.17), (36, -0.028), (37, 0.047), (38, -0.084), (39, -0.047), (40, 0.007), (41, 0.085), (42, 0.003), (43, -0.086), (44, -0.038), (45, -0.011), (46, 0.098), (47, -0.025), (48, -0.024), (49, -0.072)]
simIndex simValue paperId paperTitle
same-paper 1 0.97535008 181 nips-2004-Synergies between Intrinsic and Synaptic Plasticity in Individual Model Neurons
Author: Jochen Triesch
Abstract: This paper explores the computational consequences of simultaneous intrinsic and synaptic plasticity in individual model neurons. It proposes a new intrinsic plasticity mechanism for a continuous activation model neuron based on low order moments of the neuron’s firing rate distribution. The goal of the intrinsic plasticity mechanism is to enforce a sparse distribution of the neuron’s activity level. In conjunction with Hebbian learning at the neuron’s synapses, the neuron is shown to discover sparse directions in the input. 1
2 0.71003181 153 nips-2004-Reducing Spike Train Variability: A Computational Theory Of Spike-Timing Dependent Plasticity
Author: Sander M. Bohte, Michael C. Mozer
Abstract: Experimental studies have observed synaptic potentiation when a presynaptic neuron fires shortly before a postsynaptic neuron, and synaptic depression when the presynaptic neuron fires shortly after. The dependence of synaptic modulation on the precise timing of the two action potentials is known as spike-timing dependent plasticity or STDP. We derive STDP from a simple computational principle: synapses adapt so as to minimize the postsynaptic neuron’s variability to a given presynaptic input, causing the neuron’s output to become more reliable in the face of noise. Using an entropy-minimization objective function and the biophysically realistic spike-response model of Gerstner (2001), we simulate neurophysiological experiments and obtain the characteristic STDP curve along with other phenomena including the reduction in synaptic plasticity as synaptic efficacy increases. We compare our account to other efforts to derive STDP from computational principles, and argue that our account provides the most comprehensive coverage of the phenomena. Thus, reliability of neural response in the face of noise may be a key goal of cortical adaptation. 1
3 0.70114332 151 nips-2004-Rate- and Phase-coded Autoassociative Memory
Author: Máté Lengyel, Peter Dayan
Abstract: Areas of the brain involved in various forms of memory exhibit patterns of neural activity quite unlike those in canonical computational models. We show how to use well-founded Bayesian probabilistic autoassociative recall to derive biologically reasonable neuronal dynamics in recurrently coupled models, together with appropriate values for parameters such as the membrane time constant and inhibition. We explicitly treat two cases. One arises from a standard Hebbian learning rule, and involves activity patterns that are coded by graded firing rates. The other arises from a spike timing dependent learning rule, and involves patterns coded by the phase of spike times relative to a coherent local field potential oscillation. Our model offers a new and more complete understanding of how neural dynamics may support autoassociation. 1
4 0.60162038 173 nips-2004-Spike-timing Dependent Plasticity and Mutual Information Maximization for a Spiking Neuron Model
Author: Taro Toyoizumi, Jean-pascal Pfister, Kazuyuki Aihara, Wulfram Gerstner
Abstract: We derive an optimal learning rule in the sense of mutual information maximization for a spiking neuron model. Under the assumption of small fluctuations of the input, we find a spike-timing dependent plasticity (STDP) function which depends on the time course of excitatory postsynaptic potentials (EPSPs) and the autocorrelation function of the postsynaptic neuron. We show that the STDP function has both positive and negative phases. The positive phase is related to the shape of the EPSP while the negative phase is controlled by neuronal refractoriness. 1
5 0.59764206 140 nips-2004-Optimal Information Decoding from Neuronal Populations with Specific Stimulus Selectivity
Author: Marcelo A. Montemurro, Stefano Panzeri
Abstract: A typical neuron in visual cortex receives most inputs from other cortical neurons with a roughly similar stimulus preference. Does this arrangement of inputs allow efficient readout of sensory information by the target cortical neuron? We address this issue by using simple modelling of neuronal population activity and information theoretic tools. We find that efficient synaptic information transmission requires that the tuning curve of the afferent neurons is approximately as wide as the spread of stimulus preferences of the afferent neurons reaching the target neuron. By meta analysis of neurophysiological data we found that this is the case for cortico-cortical inputs to neurons in visual cortex. We suggest that the organization of V1 cortico-cortical synaptic inputs allows optimal information transmission. 1
6 0.56176382 157 nips-2004-Saliency-Driven Image Acuity Modulation on a Reconfigurable Array of Spiking Silicon Neurons
7 0.55649072 76 nips-2004-Hierarchical Bayesian Inference in Networks of Spiking Neurons
8 0.50054759 114 nips-2004-Maximum Likelihood Estimation of Intrinsic Dimension
9 0.46075693 118 nips-2004-Methods for Estimating the Computational Power and Generalization Capability of Neural Microcircuits
10 0.45006901 28 nips-2004-Bayesian inference in spiking neurons
11 0.43767145 194 nips-2004-Theory of localized synfire chain: characteristic propagation speed of stable spike pattern
12 0.40137804 112 nips-2004-Maximising Sensitivity in a Spiking Network
13 0.35816646 84 nips-2004-Inference, Attention, and Decision in a Bayesian Neural Architecture
14 0.35791957 180 nips-2004-Synchronization of neural networks by mutual learning and its application to cryptography
15 0.34368503 193 nips-2004-Theories of Access Consciousness
16 0.34298646 58 nips-2004-Edge of Chaos Computation in Mixed-Mode VLSI - A Hard Liquid
17 0.33691537 198 nips-2004-Unsupervised Variational Bayesian Learning of Nonlinear Models
18 0.3358646 88 nips-2004-Intrinsically Motivated Reinforcement Learning
20 0.31018448 128 nips-2004-Neural Network Computation by In Vitro Transcriptional Circuits
topicId topicWeight
[(10, 0.117), (13, 0.152), (15, 0.122), (19, 0.013), (26, 0.053), (31, 0.028), (33, 0.153), (35, 0.082), (39, 0.034), (50, 0.046), (71, 0.011), (76, 0.022), (82, 0.039)]
simIndex simValue paperId paperTitle
same-paper 1 0.94013637 181 nips-2004-Synergies between Intrinsic and Synaptic Plasticity in Individual Model Neurons
Author: Jochen Triesch
Abstract: This paper explores the computational consequences of simultaneous intrinsic and synaptic plasticity in individual model neurons. It proposes a new intrinsic plasticity mechanism for a continuous activation model neuron based on low order moments of the neuron’s firing rate distribution. The goal of the intrinsic plasticity mechanism is to enforce a sparse distribution of the neuron’s activity level. In conjunction with Hebbian learning at the neuron’s synapses, the neuron is shown to discover sparse directions in the input. 1
2 0.87984657 28 nips-2004-Bayesian inference in spiking neurons
Author: Sophie Deneve
Abstract: We propose a new interpretation of spiking neurons as Bayesian integrators accumulating evidence over time about events in the external world or the body, and communicating to other neurons their certainties about these events. In this model, spikes signal the occurrence of new information, i.e. what cannot be predicted from the past activity. As a result, firing statistics are close to Poisson, albeit providing a deterministic representation of probabilities. We proceed to develop a theory of Bayesian inference in spiking neural networks, recurrent interactions implementing a variant of belief propagation. 1
3 0.86400241 131 nips-2004-Non-Local Manifold Tangent Learning
Author: Yoshua Bengio, Martin Monperrus
Abstract: We claim and present arguments to the effect that a large class of manifold learning algorithms that are essentially local and can be framed as kernel learning algorithms will suffer from the curse of dimensionality, at the dimension of the true underlying manifold. This observation suggests to explore non-local manifold learning algorithms which attempt to discover shared structure in the tangent planes at different positions. A criterion for such an algorithm is proposed and experiments estimating a tangent plane prediction function are presented, showing its advantages with respect to local manifold learning algorithms: it is able to generalize very far from training data (on learning handwritten character image rotations), where a local non-parametric method fails. 1
4 0.8524093 151 nips-2004-Rate- and Phase-coded Autoassociative Memory
Author: Máté Lengyel, Peter Dayan
Abstract: Areas of the brain involved in various forms of memory exhibit patterns of neural activity quite unlike those in canonical computational models. We show how to use well-founded Bayesian probabilistic autoassociative recall to derive biologically reasonable neuronal dynamics in recurrently coupled models, together with appropriate values for parameters such as the membrane time constant and inhibition. We explicitly treat two cases. One arises from a standard Hebbian learning rule, and involves activity patterns that are coded by graded firing rates. The other arises from a spike timing dependent learning rule, and involves patterns coded by the phase of spike times relative to a coherent local field potential oscillation. Our model offers a new and more complete understanding of how neural dynamics may support autoassociation. 1
5 0.85170615 76 nips-2004-Hierarchical Bayesian Inference in Networks of Spiking Neurons
Author: Rajesh P. Rao
Abstract: There is growing evidence from psychophysical and neurophysiological studies that the brain utilizes Bayesian principles for inference and decision making. An important open question is how Bayesian inference for arbitrary graphical models can be implemented in networks of spiking neurons. In this paper, we show that recurrent networks of noisy integrate-and-fire neurons can perform approximate Bayesian inference for dynamic and hierarchical graphical models. The membrane potential dynamics of neurons is used to implement belief propagation in the log domain. The spiking probability of a neuron is shown to approximate the posterior probability of the preferred state encoded by the neuron, given past inputs. We illustrate the model using two examples: (1) a motion detection network in which the spiking probability of a direction-selective neuron becomes proportional to the posterior probability of motion in a preferred direction, and (2) a two-level hierarchical network that produces attentional effects similar to those observed in visual cortical areas V2 and V4. The hierarchical model offers a new Bayesian interpretation of attentional modulation in V2 and V4. 1
6 0.85132301 5 nips-2004-A Harmonic Excitation State-Space Approach to Blind Separation of Speech
7 0.8507669 163 nips-2004-Semi-parametric Exponential Family PCA
8 0.85075074 58 nips-2004-Edge of Chaos Computation in Mixed-Mode VLSI - A Hard Liquid
9 0.84805536 189 nips-2004-The Power of Selective Memory: Self-Bounded Learning of Prediction Suffix Trees
10 0.84463066 118 nips-2004-Methods for Estimating the Computational Power and Generalization Capability of Neural Microcircuits
11 0.84100813 70 nips-2004-Following Curved Regularized Optimization Solution Paths
12 0.84100693 93 nips-2004-Kernel Projection Machine: a New Tool for Pattern Recognition
13 0.84034002 22 nips-2004-An Investigation of Practical Approximate Nearest Neighbor Algorithms
14 0.83962214 1 nips-2004-A Cost-Shaping LP for Bellman Error Minimization with Performance Guarantees
15 0.83886498 102 nips-2004-Learning first-order Markov models for control
16 0.8381148 60 nips-2004-Efficient Kernel Machines Using the Improved Fast Gauss Transform
17 0.83799064 142 nips-2004-Outlier Detection with One-class Kernel Fisher Discriminants
18 0.83578467 178 nips-2004-Support Vector Classification with Input Data Uncertainty
19 0.8354705 173 nips-2004-Spike-timing Dependent Plasticity and Mutual Information Maximization for a Spiking Neuron Model
20 0.83489227 84 nips-2004-Inference, Attention, and Decision in a Bayesian Neural Architecture