nips nips2004 nips2004-151 knowledge-graph by maker-knowledge-mining

151 nips-2004-Rate- and Phase-coded Autoassociative Memory


Source: pdf

Author: Máté Lengyel, Peter Dayan

Abstract: Areas of the brain involved in various forms of memory exhibit patterns of neural activity quite unlike those in canonical computational models. We show how to use well-founded Bayesian probabilistic autoassociative recall to derive biologically reasonable neuronal dynamics in recurrently coupled models, together with appropriate values for parameters such as the membrane time constant and inhibition. We explicitly treat two cases. One arises from a standard Hebbian learning rule, and involves activity patterns that are coded by graded firing rates. The other arises from a spike timing dependent learning rule, and involves patterns coded by the phase of spike times relative to a coherent local field potential oscillation. Our model offers a new and more complete understanding of how neural dynamics may support autoassociation. 1

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Abstract: Areas of the brain involved in various forms of memory exhibit patterns of neural activity quite unlike those in canonical computational models. [sent-4, score-0.469]

2 We show how to use well-founded Bayesian probabilistic autoassociative recall to derive biologically reasonable neuronal dynamics in recurrently coupled models, together with appropriate values for parameters such as the membrane time constant and inhibition. [sent-5, score-0.74]

3 One arises from a standard Hebbian learning rule, and involves activity patterns that are coded by graded firing rates. [sent-7, score-0.511]

4 The other arises from a spike timing dependent learning rule, and involves patterns coded by the phase of spike times relative to a coherent local field potential oscillation. [sent-8, score-0.549]

5 However, the characteristic patterns of activity in areas such as CA3 that are involved in memory are quite unlike those specified in the bulk of models. [sent-13, score-0.382]

6 In particular neurons (for instance hippocampal place cells) show graded activity during recall [2], prominent theta frequency oscillations [3] and an apparent variety of rules governing synaptic plasticity [4, 5]. [sent-14, score-1.125]

7 The wealth of studies of memory capacity of attractor networks of binary units does not give many clues to the specification, analysis or optimization of networks acting in these biologically relevant regimes. [sent-15, score-0.231]

8 In fact, even theoretical approaches to autoassociative memories with graded activities are computationally brittle. [sent-16, score-0.716]

9 Formally, these models interpret recall as Bayesian inference based on information given by the noisy input, the synaptic weight matrix, and prior knowledge about the distribution of possible activity patterns coding for memories. [sent-18, score-0.813]

10 More concretely (see section 2), the assumed activity patterns and synaptic plasticity rules determine the term in neuronal update dynamics that describes interactions between interconnected cells. [sent-19, score-1.099]

11 Different aspects of biologically reasonable autoassociative memories arise from different assumptions. [sent-20, score-0.489]

12 We show (section 3) that, for neurons that are characterized by their graded firing rates, the regular rate-based characterization of neurons effectively approximates optimal Bayesian inference. (We thank Boris Gutkin for helpful discussions on the phase resetting characteristics of different neuron types.) [sent-21, score-0.215] [sent-23, score-0.321]

14 Memories are coded by the phase of the LFPO at which each neuron fires, and are stored by spike timing dependent plasticity. [sent-26, score-0.615]

15 2 MAP autoassociative recall: The first requirement is to specify the task for autoassociative recall in a probabilistically sound manner. [sent-28, score-0.676]

16 This specification leads to a natural account of the dynamics of the neurons during recall, whose form is largely determined by the learning rule. [sent-29, score-0.22]

17 Unfortunately, the full dynamics includes terms that are not purely local to the information a post-synaptic neuron has about pre-synaptic activity, and we therefore consider approximations that restore essential characteristics necessary to satisfy the most basic biological constraints. [sent-30, score-0.322]

18 The construction of the objective function: Consider an autoassociative network which has stored information about M memories x1 … xM in a synaptic weight matrix W between a set of N neurons. [sent-32, score-0.819] [sent-35, score-0.312]

20 We specify these quantities rather generally at first to allow for different ways of construing the memories later. [sent-36, score-0.184]

21 First, the activity pattern referred to by the input x̃ is unclear unless there is no input noise. [sent-39, score-0.284]

22 Second, biological synaptic plasticity rules are data-lossy ‘compression algorithms’, and so W specifies only imprecise information about the stored memories. [sent-40, score-0.702]

23 In an ideal case, P[x|x̃, W] would have support only on the M stored patterns x1 … xM. [sent-41, score-0.531]

24 We assume that the noise corrupting each element of the patterns is independent, and independent of the original pattern, so $P[\tilde{x}|x] := \prod_i P[\tilde{x}_i|x] := \prod_i P[\tilde{x}_i|x_i]$. [sent-51, score-0.285]

25 Storing a single random pattern drawn from the prior distribution will result in a synaptic weight change with a distribution determined by the prior and the learning rule, having mean $\mu_{\Delta w} = \left\langle \Omega(x_1, x_2) \right\rangle_{P_x[x_1]\,P_x[x_2]}$ and variance $\sigma^2_{\Delta w} = \left\langle \Omega^2(x_1, x_2) \right\rangle_{P_x[x_1]\,P_x[x_2]} - \mu^2_{\Delta w}$. [sent-55, score-0.541]

26 Storing M − 1 random patterns means adding M − 1 i.i.d. random variables and thus, for moderately large M, results in a synaptic weight with an approximately Gaussian distribution $P[W_{i,j}] \approx \mathcal{G}(W_{i,j}; \mu_W, \sigma_W)$, with mean $\mu_W = (M-1)\,\mu_{\Delta w}$ and variance $\sigma_W^2 = (M-1)\,\sigma_{\Delta w}^2$. [sent-56, score-0.185] [sent-57, score-0.345]
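
To make this Gaussian approximation of the weights concrete, the following sketch (not from the paper; the Gaussian prior, the covariance learning rule, and all constants are assumptions for illustration) estimates the weight-change moments by Monte Carlo and forms the implied weight statistics:

# Sketch: Monte Carlo check of the Gaussian weight approximation
# (Gaussian prior, Hebbian covariance rule, and constants are assumed).
import numpy as np

rng = np.random.default_rng(0)
mu_x, sigma_x, A_cov, M = 1.0, 1.0, 0.1, 50   # assumed prior mean/std, rule gain, number of memories

def omega_cov(x_post, x_pre):
    # Omega_cov(x_i, x_j) = A_cov (x_i - mu_x)(x_j - mu_x)
    return A_cov * (x_post - mu_x) * (x_pre - mu_x)

# Moments of the weight change for one stored pattern, averaged over the prior
x1, x2 = rng.normal(mu_x, sigma_x, size=(2, 100_000))
dw = omega_cov(x1, x2)
mu_dw, var_dw = dw.mean(), dw.var()

# Storing M - 1 i.i.d. patterns gives an approximately Gaussian weight
mu_W, var_W = (M - 1) * mu_dw, (M - 1) * var_dw
print(f"mu_W ~ {mu_W:.4f}, sigma_W ~ {np.sqrt(var_W):.4f}")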

28 We therefore specify neuronal dynamics arising from gradient ascent on the objective function: $\tau_x\,\dot{x} \propto \nabla_x O(x)$ (5). [sent-61, score-0.313]

29 Combining equations 4 and 5 we get $\tau_x \frac{dx_i}{dt} = \frac{\partial}{\partial x_i}\log P[x] + \frac{\partial}{\partial x_i}\log P[\tilde{x}|x] + \frac{\partial}{\partial x_i}\log P[W|x]$ (6), where $\frac{\partial}{\partial x_i}\log P[W|x] = \sum_{j \neq i}\left(\frac{\partial}{\partial x_i}\log P[W_{i,j}|x_i, x_j] + \frac{\partial}{\partial x_i}\log P[W_{j,i}|x_j, x_i]\right)$ (7). [sent-62, score-0.423]

30 The first two terms in equation 6 only depend on the activity of the neuron itself and its input. [sent-63, score-0.553]

31 The terms in equation 7 indicate how a neuron should take into account the activity of other neurons based on the synaptic weights. [sent-65, score-0.64]

32 The last terms in each express the effects of other cells, but without there being corresponding synaptic weights. [sent-67, score-0.265]

33 In this case $\alpha_i^+ = \left\langle \Omega(x_i, x_j)\,\frac{\partial}{\partial x_i}\Omega(x_i, x_j) \right\rangle_{P_x[x_j]}$ and $\alpha_i^- = \left\langle \Omega(x_j, x_i)\,\frac{\partial}{\partial x_i}\Omega(x_j, x_i) \right\rangle_{P_x[x_j]}$ contribute terms that only depend on the activity of the updated cell, and so can be lumped with the prior- and input-dependent terms of Eq. [sent-69, score-0.899]

34 Further, equation 10 includes synaptic weights, Wj,i , that are postsynaptic with respect to the updated neuron. [sent-71, score-0.436]

35 This would require the neuron to change its activity depending on the weights of its postsynaptic synapses. [sent-72, score-0.397]

36 One simple work-around is to approximate a postsynaptic weight by the mean of its conditional distribution given the corresponding presynaptic weight: $W_{j,i} \approx \left\langle W_{j,i} \right\rangle_{P[W_{j,i}|W_{i,j}]}$. [sent-73, score-0.243]

37 In the simplest case of perfectly symmetric or anti-symmetric learning, with $\Omega(x_i, x_j) = \pm\Omega(x_j, x_i)$, we have $W_{j,i} = \pm W_{i,j}$ and $\alpha_i^+ = \alpha_i^- = \alpha_i$. [sent-74, score-0.383]

38 Making these assumptions, the neuronal interaction function simplifies to $H(x_i, x_j) = (W_{i,j} - \mu_W)\,\frac{\partial}{\partial x_i}\Omega(x_i, x_j)$ (11), and $\frac{2}{\sigma_W^2}\sum_{j\neq i} H(x_i, x_j) - (N-1)\,\alpha_i$ is the weight-dependent term of equation 7. [sent-76, score-0.839]

39 It also shows that the magnitude of this interaction should be proportional to the synaptic weight connecting the two cells, Wi,j . [sent-78, score-0.369]

40 We derive appropriate dynamics from learning rules, and show that, despite the approximations, the networks have good recall performance. [sent-80, score-0.261]
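
A schematic reading of equation 11 and the weight-dependent term of equation 7 follows (a sketch only; the overall scaling matches the reconstruction above, and the learning-rule derivative and the alpha term are passed in by the caller, so the constants in the example usage are assumptions rather than values from the paper):

# Sketch of the weight-dependent drive: (2/sigma_W^2) * sum_j H(x_i, x_j) - (N-1) * alpha_i,
# with H(x_i, x_j) = (W_ij - mu_W) * dOmega/dx_i.
import numpy as np

def weight_dependent_drive(i, x, W, mu_W, sigma_W2, d_omega_dxi, alpha_i):
    """Drive onto neuron i from the synaptic weights (scaling as reconstructed above)."""
    N = len(x)
    h_sum = sum((W[i, j] - mu_W) * d_omega_dxi(x[i], x[j]) for j in range(N) if j != i)
    return (2.0 / sigma_W2) * h_sum - (N - 1) * alpha_i(x[i])

# Example usage with the covariance rule of section 3 (A_cov, mu_x, sigma_x^2 assumed)
A_cov, mu_x, sigma_x2 = 0.1, 1.0, 1.0
d_omega = lambda xi, xj: A_cov * (xj - mu_x)               # d/dx_i of A_cov (x_i - mu_x)(x_j - mu_x)
alpha = lambda xi: (A_cov ** 2) * sigma_x2 * (xi - mu_x)   # <Omega * dOmega/dx_i> under the prior
rng = np.random.default_rng(1)
x = rng.normal(mu_x, 1.0, 5)
W = rng.normal(0.0, 0.1, (5, 5))
print(weight_dependent_drive(0, x, W, mu_W=0.0, sigma_W2=W.var(),
                             d_omega_dxi=d_omega, alpha_i=alpha))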

41 3 Rate-based memories: The most natural assumption about pattern encoding is that the activity of each unit is interpreted directly as its firing rate. [sent-81, score-0.35]

42 Note, however, that most approaches to autoassociative memory assume binary patterns [9], sitting ill with the lack of saturation in cortical or hippocampal neurons in the appropriate regime. [sent-82, score-0.651]

43 Experiments [10] suggest that regulating activity levels in such networks is very tricky, requiring exquisitely carefully tuned neuronal dynamics. [sent-83, score-0.356]

44 There has been work on graded activities in the special case of line or surface attractor networks [11, 12], but these also pose dynamical complexities. [sent-84, score-0.351]

45 By contrast, graded activities are straightforward in our framework. [sent-85, score-0.279]

46 Consider Hebbian covariance learning: Ωcov (xi , xj ) := Acov (xi − µx ) (xj − µx ), where Acov > 0 is a normalizing constant and µx is the mean of the prior distribution of the patterns to be stored. [sent-86, score-0.462]

47 From Eq. 11, the optimal neuronal interaction function is $H_{\mathrm{cov}}(x_i, x_j) = A_{\mathrm{cov}}\,(W_{i,j} - \mu_W)(x_j - \mu_x)$. [sent-88, score-0.416]

48 This leads to a term in the dynamics which is the conventional weighted sum of pre-synaptic firing rates. [sent-89, score-0.167]

49 The other key term in the dynamics is $\alpha_i = -A_{\mathrm{cov}}^2\,\sigma_x^2\,(x_i - \mu_x)$, where $\sigma_x^2$ is the variance of the prior distribution, expressing self-decay to a baseline activity level determined by $\mu_x$. [sent-90, score-0.46]

50 Thus, canonical models of synaptic plasticity (the Hebbian covariance rule) and single neuron firing rate dynamics are exactly matched for autoassociative recall. [sent-94, score-0.96]

51 Optimal values for all parameters of single neuron dynamics (except the membrane time constant determining the speed of gradient ascent) are directly implied. [sent-95, score-0.257]

52 This is important, since it indicates how to solve the problem, for graded autoassociative memories (as opposed to saturating ones [14, 15]), that neuronal dynamics have to be finely tuned. [sent-96, score-0.897]

53 As examples, the leak conductance is given by the sum of the coefficients of all terms linear in xi , the optimal bias current is the sum of all terms independent of xi , and the level of inhibition can be determined from the negative terms in the interaction function, −µW and −µx . [sent-97, score-0.522]
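
Putting the rate-based pieces together, here is a minimal recall sketch (the Gaussian prior and noise, the covariance-rule normalization, the Euler integration with its step size, and the simplified scaling of the weight term are all assumptions for illustration, not the paper's implementation):

# Minimal rate-based recall sketch (assumed constants; Gaussian prior and noise;
# the overall scaling of the weight-dependent term is simplified for illustration).
import numpy as np

rng = np.random.default_rng(0)
N, M = 50, 2
mu_x, sigma_x2 = 1.0, 1.0        # prior mean and variance of the stored rates
sigma_n2 = 1.0                   # input noise variance
A_cov = 1.0 / N                  # assumed normalization of the covariance rule

# Store M patterns with the Hebbian covariance rule
X = rng.normal(mu_x, np.sqrt(sigma_x2), size=(M, N))
W = sum(A_cov * np.outer(x_m - mu_x, x_m - mu_x) for x_m in X)
np.fill_diagonal(W, 0.0)
mu_W = 0.0                       # weight change has zero mean under this prior

# Noisy cue from the first stored pattern
target = X[0]
x_tilde = target + rng.normal(0.0, np.sqrt(sigma_n2), N)

# Recall: Euler steps of gradient ascent on log prior + log input term + weight term
x = x_tilde.copy()
for _ in range(2000):
    d_prior = -(x - mu_x) / sigma_x2
    d_input = -(x - x_tilde) / sigma_n2
    H = A_cov * (W - mu_W) * (x[None, :] - mu_x)   # H_cov(x_i, x_j) = A_cov (W_ij - mu_W)(x_j - mu_x)
    d_weight = H.sum(axis=1) - (N - 1) * A_cov ** 2 * sigma_x2 * (x - mu_x)
    x += 0.01 * (d_prior + d_input + d_weight)

print("RMS recall error:", np.sqrt(np.mean((x - target) ** 2)))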

54 To gauge the performance of the Bayes-optimal network we compared it to networks of increasing complexity (Fig. [sent-99, score-0.188]

55 A trivial lower bound of performance is [sent-101, score-0.748]

56 [Figure 1A,B legend: prior; input; ideal observer; Bayesian: prior + input; Bayesian: prior + input + synapses.] [sent-102, score-0.471]

57 Frequency histograms of errors (difference between recalled and stored firing rates). [sent-124, score-0.321]

58 The ideal observer is not plotted because its error distribution was a Dirac-delta at 0. [sent-125, score-0.277]

59 Benchmarking the Bayesian network against the network of Treves [13] (∗) on patterns of non-negative firing rates. [sent-127, score-0.475]

60 Average error is the square root of the mean squared error (C), average normalized error measures only the angle difference between true and recalled activities (D). [sent-128, score-0.216]

61 The input was corrupted by unbiased Gaussian noise of variance $\sigma_{\tilde{x}}^2 = 1$ (A,B) or $\sigma_{\tilde{x}}^2 = 1.5$ (C,D). [sent-135, score-0.241]

62 The number of cells in the network was N = 50 (A,B) and N = 100 (C,D), and the number of memories stored was M = 2 (A,B) or varied between M = 2 . [sent-138, score-0.628]

63 For each data point, 10 different networks were simulated with a different set of stored patterns, and for each network, 10 attempts at recall were made, with a noisy version of a randomly chosen pattern as the input and with activities initialized at this input. [sent-142, score-0.589]

64 given by a network that generates random patterns from the same prior distribution from which the patterns to be stored were drawn (P [x]). [sent-143, score-0.85]

65 Another simple alternative is a network that simply transmits its input (x̃) to its output. [sent-144, score-0.241]

66 (Note that the ‘input only’ network is not necessarily superior to the ‘prior only’ network: their relative effectiveness depends on the relative variances of the prior and noise distributions; a narrow prior with a wide noise distribution would make the latter perform better, as in Fig. [sent-145, score-0.477]

67 Such an ideal observer only makes errors when both the number of patterns stored and the noise in the input is sufficiently large, so that corrupting a stored pattern is likely to make it more similar to another stored pattern. [sent-151, score-1.365]

68 In Fig. 1A,B, this is not the case, since only two patterns were stored, and the ideal observer performs perfectly as expected. [sent-153, score-0.462]

69 Nevertheless, there may be situations in which perfect performance is out of reach even for an ideal observer (Fig. [sent-154, score-0.277]

70 In summary, the performance of any network can be assessed by measuring where it lies between the better one of the ‘prior only’ and ‘input only’ networks and the ideal observer. [sent-156, score-0.297]

71 We benchmarked our network against that of Treves [13] (Fig. 1C,D), which we chose because it is a rare example of a network that was designed to have near optimal recall performance in the face of non-binary patterns. [sent-158, score-0.23]

72 In this work, Treves considered ternary patterns, drawn from the distribution $P[x_i] := \left(1 - \frac{4}{3}a\right)\delta(x_i) + a\,\delta\!\left(x_i - \frac{1}{2}\right) + \frac{a}{3}\,\delta\!\left(x_i - \frac{3}{2}\right)$, where $\delta(x)$ is the Dirac-delta function. [sent-159, score-0.514]

73 Here, $a = \mu_x$ quantifies the density of the patterns. [sent-160, score-0.185]

74 The patterns are stored using the covariance rule as stated above (with $A_{\mathrm{cov}} := \frac{1}{N \mu_x^2}$). [sent-163, score-0.492]

75 First the ‘local field’ is calculated as $h_i := \sum_{j\neq i} W_{i,j}\,x_j - k\left(\sum_i x_i - N\mu_x\right) + \mathrm{Input}$, then the output of the neuron is calculated as a threshold-linear function of the local field: $x_i := g\,(h_i - h_{\mathrm{Thr}})$ if $h_i > h_{\mathrm{Thr}}$ and $x_i := 0$ otherwise, where $g := 0.$ [sent-165, score-0.985]
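
A sketch of this threshold-linear update follows (the gain g, threshold h_thr, inhibition strength k, the covariance-rule normalization, and the global-activity target N*a are assumed values, since the extracted text is truncated at those points):

# Sketch of a Treves-style threshold-linear update on ternary patterns
# (g, h_thr, k, the activity target N*a, and the weight normalization are assumed).
import numpy as np

rng = np.random.default_rng(0)
N, a = 100, 0.2                       # network size and pattern density (a = mu_x)

def sample_ternary(n):
    # P[x] = (1 - 4a/3) at 0, a at 1/2, a/3 at 3/2
    return rng.choice([0.0, 0.5, 1.5], size=n, p=[1 - 4 * a / 3, a, a / 3])

def treves_step(x, W, inp, k=1.0, g=0.5, h_thr=0.0):
    """One synchronous update: local field h, then a threshold-linear transfer."""
    h = W @ x - k * (x.sum() - N * a) + inp
    return np.where(h > h_thr, g * (h - h_thr), 0.0)

# Store one ternary pattern with the covariance rule (normalization assumed),
# then apply a single update to a noisy, truncated cue.
pattern = sample_ternary(N)
W = np.outer(pattern - a, pattern - a) / (N * a ** 2)
np.fill_diagonal(W, 0.0)
cue = np.clip(pattern + rng.normal(0.0, 0.5, N), 0.0, None)
print(treves_step(cue, W, inp=cue)[:5])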

76 Further, we corrupted the inputs by unbiased additive Gaussian noise (with variance $\sigma_{\tilde{x}}^2 = 1.5$), but truncated the activities at 0, though we did not adjust the dynamics of our network in the light of the truncation. [sent-168, score-0.182] [sent-169, score-0.314]

78 Still, the Bayesian network clearly outperformed the Treves network when the patterns were drawn from a truncated Gaussian (Fig. [sent-171, score-0.512]

79 The performance of the Bayesian network stayed close to that of an ideal observer assuming non-truncated Gaussian input, showing that most of the errors were caused by this assumption and not from suboptimality of neural interactions decoding the information in synaptic weights. [sent-173, score-0.687]

80 Finally, again for ternary patterns, we also considered only penalizing errors about the direction of the vectors of recalled activities ignoring errors about their magnitudes (Fig. [sent-175, score-0.322]

81 4 Phase-based memories: Brain areas known to be involved in memory processing demonstrate prominent local field potential oscillations (LFPOs) under a variety of conditions, including both wake and sleep states [16]. [sent-179, score-0.31]

82 The discovery of spike timing dependent plasticity (STDP), in which the relative timing of pre- and postsynaptic firings determines the sign and extent of synaptic weight change, offered new insights into how the information represented by spike times may be stored in neural networks [19]. [sent-183, score-1.193]

83 However, bar some interesting suggestions about neuronal resonance [20], it is less clear how one might correctly recall information thereby stored in the synaptic weights. [sent-184, score-0.767]

84 First, neuronal activities, xi , will be interpreted as firing times relative to a reference phase of the ongoing LFPO, such as the peak of theta oscillation in the hippocampus, and will thus be circular variables drawn from a circular Gaussian. [sent-186, score-0.705]

85 Next, our learning rule is an exponentially decaying Gabor-function of the phase difference between pre- and postsynaptic firing: ΩSTDP (xi , xj ) := ASTDP exp[κSTDP cos(∆φi,j )] sin(∆φi,j − φSTDP ) with ∆φi,j = 2π (xi − xj ) /TSTDP . [sent-187, score-0.625]

86 From Eq. 11, the interaction function is $H_{\mathrm{STDP}}(x_i, x_j) = \frac{2\pi A_{\mathrm{STDP}}}{T_{\mathrm{STDP}}}\,W_{i,j}\,\exp[\kappa_{\mathrm{STDP}}\cos(\Delta\phi_{i,j})]\left[\cos(\Delta\phi_{i,j}) - \kappa_{\mathrm{STDP}}\sin^2(\Delta\phi_{i,j})\right]$. [sent-191, score-0.179]

87 This interaction function decreases firing phase, and thus accelerates the postsynaptic cell if the presynaptic spike precedes postsynaptic firing, and delays the postsynaptic cell if the presynaptic spike arrives just after the postsynaptic cell fired. [sent-192, score-1.076]

88 This characteristic is the essence of the biphasic phase reset curve of type II cells [21], and has been observed in various types of neurons, including neocortical cells [22]. [sent-193, score-0.215]

89 Thus again, our derivation directly couples STDP, a canonical model of synaptic plasticity, and phase reset curves in a canonical model of neural dynamics. [sent-194, score-0.47]
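
A small sketch of the circular STDP rule and the phase interaction above follows (the amplitude, concentration, phase offset, and period are placeholder values; only the functional forms follow the text):

# Sketch of the circular STDP rule and the implied phase interaction
# (A_stdp, kappa, phi_stdp, and T_stdp are placeholder constants; T in ms).
import numpy as np

A_stdp, kappa, phi_stdp, T_stdp = 0.03, 4.0, 0.0, 125.0

def omega_stdp(x_i, x_j):
    """Weight change for postsynaptic firing time x_i and presynaptic time x_j."""
    dphi = 2 * np.pi * (x_i - x_j) / T_stdp
    return A_stdp * np.exp(kappa * np.cos(dphi)) * np.sin(dphi - phi_stdp)

def h_stdp(w_ij, x_i, x_j):
    """Phase interaction, following the reconstructed form above (phi_stdp = 0)."""
    dphi = 2 * np.pi * (x_i - x_j) / T_stdp
    return (2 * np.pi * A_stdp / T_stdp) * w_ij * np.exp(kappa * np.cos(dphi)) * (
        np.cos(dphi) - kappa * np.sin(dphi) ** 2)

# Evaluate the rule and the interaction for a 5 ms pre-before-post timing difference
print(omega_stdp(10.0, 5.0), h_stdp(1.0, x_i=10.0, x_j=5.0))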

90 2) and is comparable to that of the rate-coded network (Fig. [sent-197, score-0.191]

91 5 Discussion We have described a Bayesian approach to recall in autoassociative memories. [sent-200, score-0.338]

92 This permits the derivation of neuronal dynamics appropriate to a synaptic plasticity rule, and we used this to show a coupling between canonical Hebbian and STDP plasticity rules and canonical rate-based and phase-based neuronal dynamics respectively. [sent-201, score-1.302]

93 This suggests that neurons may employ a dual code – the more rate-based probability of being active in a cycle, and the phase-based timing of the spike relative to the cycle [24]. [sent-206, score-0.302]

94 Figure 2: Performance of the phase-coded network. [Legend: input; ideal observer; Bayesian. X-axes: Error (A), Number of stored patterns (B).] [sent-215, score-0.758]

95 Error distribution for the ideal observer was a Dirac-delta at 0 (B) and was thus omitted from A. [sent-216, score-0.277]

96 5 concentration on a TΘ = 125 ms long cycle matching data on theta frequency modulation of pyramidal cell population activity in the hippocampus [23]. [sent-219, score-0.397]

97 Input was corrupted by unbiased circular Gaussian (von Mises) noise with κx = 10 concentration. [sent-220, score-0.205]

98 Learning rule was circular STDP rule with ASTDP = 0. [sent-221, score-0.196]

99 The network consisted of N = 100 cells, and the number of memories stored was M = 10 (A) or varied between M = 2 . [sent-223, score-0.566]
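
For concreteness, phases and cue noise of this kind can be sampled as follows (the 125 ms cycle and the cue-noise concentration of 10 follow the text; the prior concentration and mean phase are assumptions):

# Sketch: firing phases on a theta cycle with a von Mises prior and von Mises cue noise
# (kappa_noise = 10 and the 125 ms cycle follow the text; kappa_prior and the mean are assumed).
import numpy as np

rng = np.random.default_rng(0)
T_theta, kappa_prior, kappa_noise, N = 125.0, 0.5, 10.0, 100

phases = rng.vonmises(mu=0.0, kappa=kappa_prior, size=N)                     # stored phases (rad)
noisy = np.angle(np.exp(1j * (phases + rng.vonmises(0.0, kappa_noise, N))))  # wrapped noisy cue
times_ms = (noisy % (2 * np.pi)) / (2 * np.pi) * T_theta                     # cue phases as times in ms
print(times_ms[:5])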

100 Our formalism also suggests that there may be a way to optimally choose the learning rule itself in the first place, by matching it to the prior distribution of patterns. [sent-229, score-0.168]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('synaptic', 0.265), ('autoassociative', 0.253), ('stored', 0.237), ('treves', 0.232), ('stdp', 0.219), ('xi', 0.204), ('patterns', 0.185), ('memories', 0.184), ('neuronal', 0.18), ('xj', 0.179), ('observer', 0.168), ('graded', 0.147), ('network', 0.145), ('postsynaptic', 0.14), ('activity', 0.133), ('dynamics', 0.133), ('activities', 0.132), ('plasticity', 0.128), ('acov', 0.127), ('neuron', 0.124), ('ring', 0.12), ('ideal', 0.109), ('ternary', 0.106), ('bayesian', 0.102), ('prior', 0.098), ('neurons', 0.087), ('recall', 0.085), ('lfpo', 0.084), ('recalled', 0.084), ('sci', 0.084), ('theta', 0.084), ('px', 0.08), ('spike', 0.079), ('timing', 0.072), ('xm', 0.071), ('rule', 0.07), ('hippocampus', 0.067), ('memory', 0.064), ('astdp', 0.063), ('corrupting', 0.063), ('hthr', 0.063), ('natl', 0.063), ('hippocampal', 0.062), ('cells', 0.062), ('input', 0.059), ('unbiased', 0.058), ('interaction', 0.057), ('phase', 0.057), ('canonical', 0.057), ('circular', 0.056), ('presynaptic', 0.056), ('acad', 0.055), ('corrupted', 0.054), ('hebbian', 0.054), ('biologically', 0.052), ('cell', 0.049), ('hop', 0.047), ('dxi', 0.047), ('weight', 0.047), ('coded', 0.046), ('neurosci', 0.044), ('networks', 0.043), ('log', 0.043), ('eld', 0.043), ('accelerates', 0.042), ('keefe', 0.042), ('lond', 0.042), ('mises', 0.042), ('rules', 0.041), ('proc', 0.039), ('truncated', 0.037), ('phys', 0.037), ('recurrently', 0.037), ('oscillation', 0.037), ('transmits', 0.037), ('noise', 0.037), ('hi', 0.035), ('characteristics', 0.034), ('term', 0.034), ('reset', 0.034), ('biol', 0.034), ('comput', 0.034), ('abbott', 0.034), ('oscillations', 0.034), ('variance', 0.033), ('pattern', 0.033), ('cycle', 0.033), ('cos', 0.032), ('dayan', 0.032), ('frequency', 0.031), ('biological', 0.031), ('relative', 0.031), ('equation', 0.031), ('brain', 0.03), ('cov', 0.029), ('attractor', 0.029), ('conductance', 0.029), ('prominent', 0.028), ('trans', 0.028), ('inhibition', 0.028)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000004 151 nips-2004-Rate- and Phase-coded Autoassociative Memory

Author: Máté Lengyel, Peter Dayan

Abstract: Areas of the brain involved in various forms of memory exhibit patterns of neural activity quite unlike those in canonical computational models. We show how to use well-founded Bayesian probabilistic autoassociative recall to derive biologically reasonable neuronal dynamics in recurrently coupled models, together with appropriate values for parameters such as the membrane time constant and inhibition. We explicitly treat two cases. One arises from a standard Hebbian learning rule, and involves activity patterns that are coded by graded firing rates. The other arises from a spike timing dependent learning rule, and involves patterns coded by the phase of spike times relative to a coherent local field potential oscillation. Our model offers a new and more complete understanding of how neural dynamics may support autoassociation. 1

2 0.32906821 153 nips-2004-Reducing Spike Train Variability: A Computational Theory Of Spike-Timing Dependent Plasticity

Author: Sander M. Bohte, Michael C. Mozer

Abstract: Experimental studies have observed synaptic potentiation when a presynaptic neuron fires shortly before a postsynaptic neuron, and synaptic depression when the presynaptic neuron fires shortly after. The dependence of synaptic modulation on the precise timing of the two action potentials is known as spike-timing dependent plasticity or STDP. We derive STDP from a simple computational principle: synapses adapt so as to minimize the postsynaptic neuron’s variability to a given presynaptic input, causing the neuron’s output to become more reliable in the face of noise. Using an entropy-minimization objective function and the biophysically realistic spike-response model of Gerstner (2001), we simulate neurophysiological experiments and obtain the characteristic STDP curve along with other phenomena including the reduction in synaptic plasticity as synaptic efficacy increases. We compare our account to other efforts to derive STDP from computational principles, and argue that our account provides the most comprehensive coverage of the phenomena. Thus, reliability of neural response in the face of noise may be a key goal of cortical adaptation. 1

3 0.2478984 181 nips-2004-Synergies between Intrinsic and Synaptic Plasticity in Individual Model Neurons

Author: Jochen Triesch

Abstract: This paper explores the computational consequences of simultaneous intrinsic and synaptic plasticity in individual model neurons. It proposes a new intrinsic plasticity mechanism for a continuous activation model neuron based on low order moments of the neuron’s firing rate distribution. The goal of the intrinsic plasticity mechanism is to enforce a sparse distribution of the neuron’s activity level. In conjunction with Hebbian learning at the neuron’s synapses, the neuron is shown to discover sparse directions in the input. 1

4 0.23054224 173 nips-2004-Spike-timing Dependent Plasticity and Mutual Information Maximization for a Spiking Neuron Model

Author: Taro Toyoizumi, Jean-pascal Pfister, Kazuyuki Aihara, Wulfram Gerstner

Abstract: We derive an optimal learning rule in the sense of mutual information maximization for a spiking neuron model. Under the assumption of small fluctuations of the input, we find a spike-timing dependent plasticity (STDP) function which depends on the time course of excitatory postsynaptic potentials (EPSPs) and the autocorrelation function of the postsynaptic neuron. We show that the STDP function has both positive and negative phases. The positive phase is related to the shape of the EPSP while the negative phase is controlled by neuronal refractoriness. 1

5 0.20056744 194 nips-2004-Theory of localized synfire chain: characteristic propagation speed of stable spike pattern

Author: Kosuke Hamaguchi, Masato Okada, Kazuyuki Aihara

Abstract: Repeated spike patterns have often been taken as evidence for the synfire chain, a phenomenon that a stable spike synchrony propagates through a feedforward network. Inter-spike intervals which represent a repeated spike pattern are influenced by the propagation speed of a spike packet. However, the relation between the propagation speed and network structure is not well understood. While it is apparent that the propagation speed depends on the excitatory synapse strength, it might also be related to spike patterns. We analyze a feedforward network with Mexican-Hattype connectivity (FMH) using the Fokker-Planck equation. We show that both a uniform and a localized spike packet are stable in the FMH in a certain parameter region. We also demonstrate that the propagation speed depends on the distinct firing patterns in the same network.

6 0.19011799 28 nips-2004-Bayesian inference in spiking neurons

7 0.18784674 140 nips-2004-Optimal Information Decoding from Neuronal Populations with Specific Stimulus Selectivity

8 0.18472949 76 nips-2004-Hierarchical Bayesian Inference in Networks of Spiking Neurons

9 0.15262367 178 nips-2004-Support Vector Classification with Input Data Uncertainty

10 0.14725797 112 nips-2004-Maximising Sensitivity in a Spiking Network

11 0.13642269 26 nips-2004-At the Edge of Chaos: Real-time Computations and Self-Organized Criticality in Recurrent Neural Networks

12 0.13473284 157 nips-2004-Saliency-Driven Image Acuity Modulation on a Reconfigurable Array of Spiking Silicon Neurons

13 0.12056477 118 nips-2004-Methods for Estimating the Computational Power and Generalization Capability of Neural Microcircuits

14 0.10490578 84 nips-2004-Inference, Attention, and Decision in a Bayesian Neural Architecture

15 0.097302981 46 nips-2004-Constraining a Bayesian Model of Human Visual Speed Perception

16 0.095129974 148 nips-2004-Probabilistic Computation in Spiking Populations

17 0.092184916 188 nips-2004-The Laplacian PDF Distance: A Cost Function for Clustering in a Kernel Feature Space

18 0.082436264 55 nips-2004-Distributed Occlusion Reasoning for Tracking with Nonparametric Belief Propagation

19 0.076200828 67 nips-2004-Exponentiated Gradient Algorithms for Large-margin Structured Classification

20 0.075690143 58 nips-2004-Edge of Chaos Computation in Mixed-Mode VLSI - A Hard Liquid


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.25), (1, -0.366), (2, -0.14), (3, 0.14), (4, 0.048), (5, 0.081), (6, -0.022), (7, 0.023), (8, -0.088), (9, 0.035), (10, 0.101), (11, -0.01), (12, -0.013), (13, -0.004), (14, -0.058), (15, 0.039), (16, -0.165), (17, 0.059), (18, -0.059), (19, 0.001), (20, -0.014), (21, 0.072), (22, 0.091), (23, -0.048), (24, -0.019), (25, -0.012), (26, 0.134), (27, -0.014), (28, -0.079), (29, 0.109), (30, -0.115), (31, 0.071), (32, -0.024), (33, -0.087), (34, 0.006), (35, -0.078), (36, 0.001), (37, -0.043), (38, -0.072), (39, -0.04), (40, 0.019), (41, 0.064), (42, -0.008), (43, 0.091), (44, -0.058), (45, 0.117), (46, 0.062), (47, 0.033), (48, 0.029), (49, -0.073)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.9462316 151 nips-2004-Rate- and Phase-coded Autoassociative Memory

Author: Máté Lengyel, Peter Dayan

Abstract: Areas of the brain involved in various forms of memory exhibit patterns of neural activity quite unlike those in canonical computational models. We show how to use well-founded Bayesian probabilistic autoassociative recall to derive biologically reasonable neuronal dynamics in recurrently coupled models, together with appropriate values for parameters such as the membrane time constant and inhibition. We explicitly treat two cases. One arises from a standard Hebbian learning rule, and involves activity patterns that are coded by graded firing rates. The other arises from a spike timing dependent learning rule, and involves patterns coded by the phase of spike times relative to a coherent local field potential oscillation. Our model offers a new and more complete understanding of how neural dynamics may support autoassociation. 1

2 0.81319743 153 nips-2004-Reducing Spike Train Variability: A Computational Theory Of Spike-Timing Dependent Plasticity

Author: Sander M. Bohte, Michael C. Mozer

Abstract: Experimental studies have observed synaptic potentiation when a presynaptic neuron fires shortly before a postsynaptic neuron, and synaptic depression when the presynaptic neuron fires shortly after. The dependence of synaptic modulation on the precise timing of the two action potentials is known as spike-timing dependent plasticity or STDP. We derive STDP from a simple computational principle: synapses adapt so as to minimize the postsynaptic neuron’s variability to a given presynaptic input, causing the neuron’s output to become more reliable in the face of noise. Using an entropy-minimization objective function and the biophysically realistic spike-response model of Gerstner (2001), we simulate neurophysiological experiments and obtain the characteristic STDP curve along with other phenomena including the reduction in synaptic plasticity as synaptic efficacy increases. We compare our account to other efforts to derive STDP from computational principles, and argue that our account provides the most comprehensive coverage of the phenomena. Thus, reliability of neural response in the face of noise may be a key goal of cortical adaptation. 1

3 0.80666941 181 nips-2004-Synergies between Intrinsic and Synaptic Plasticity in Individual Model Neurons

Author: Jochen Triesch

Abstract: This paper explores the computational consequences of simultaneous intrinsic and synaptic plasticity in individual model neurons. It proposes a new intrinsic plasticity mechanism for a continuous activation model neuron based on low order moments of the neuron’s firing rate distribution. The goal of the intrinsic plasticity mechanism is to enforce a sparse distribution of the neuron’s activity level. In conjunction with Hebbian learning at the neuron’s synapses, the neuron is shown to discover sparse directions in the input. 1

4 0.74880099 173 nips-2004-Spike-timing Dependent Plasticity and Mutual Information Maximization for a Spiking Neuron Model

Author: Taro Toyoizumi, Jean-pascal Pfister, Kazuyuki Aihara, Wulfram Gerstner

Abstract: We derive an optimal learning rule in the sense of mutual information maximization for a spiking neuron model. Under the assumption of small fluctuations of the input, we find a spike-timing dependent plasticity (STDP) function which depends on the time course of excitatory postsynaptic potentials (EPSPs) and the autocorrelation function of the postsynaptic neuron. We show that the STDP function has both positive and negative phases. The positive phase is related to the shape of the EPSP while the negative phase is controlled by neuronal refractoriness. 1

5 0.61351401 76 nips-2004-Hierarchical Bayesian Inference in Networks of Spiking Neurons

Author: Rajesh P. Rao

Abstract: There is growing evidence from psychophysical and neurophysiological studies that the brain utilizes Bayesian principles for inference and decision making. An important open question is how Bayesian inference for arbitrary graphical models can be implemented in networks of spiking neurons. In this paper, we show that recurrent networks of noisy integrate-and-fire neurons can perform approximate Bayesian inference for dynamic and hierarchical graphical models. The membrane potential dynamics of neurons is used to implement belief propagation in the log domain. The spiking probability of a neuron is shown to approximate the posterior probability of the preferred state encoded by the neuron, given past inputs. We illustrate the model using two examples: (1) a motion detection network in which the spiking probability of a direction-selective neuron becomes proportional to the posterior probability of motion in a preferred direction, and (2) a two-level hierarchical network that produces attentional effects similar to those observed in visual cortical areas V2 and V4. The hierarchical model offers a new Bayesian interpretation of attentional modulation in V2 and V4. 1

6 0.6066975 140 nips-2004-Optimal Information Decoding from Neuronal Populations with Specific Stimulus Selectivity

7 0.59807634 194 nips-2004-Theory of localized synfire chain: characteristic propagation speed of stable spike pattern

8 0.58441025 118 nips-2004-Methods for Estimating the Computational Power and Generalization Capability of Neural Microcircuits

9 0.58177918 157 nips-2004-Saliency-Driven Image Acuity Modulation on a Reconfigurable Array of Spiking Silicon Neurons

10 0.51438171 112 nips-2004-Maximising Sensitivity in a Spiking Network

11 0.49551147 28 nips-2004-Bayesian inference in spiking neurons

12 0.43924069 178 nips-2004-Support Vector Classification with Input Data Uncertainty

13 0.40775922 180 nips-2004-Synchronization of neural networks by mutual learning and its application to cryptography

14 0.40586439 193 nips-2004-Theories of Access Consciousness

15 0.40435547 26 nips-2004-At the Edge of Chaos: Real-time Computations and Self-Organized Criticality in Recurrent Neural Networks

16 0.39767069 35 nips-2004-Chemosensory Processing in a Spiking Model of the Olfactory Bulb: Chemotopic Convergence and Center Surround Inhibition

17 0.38413239 55 nips-2004-Distributed Occlusion Reasoning for Tracking with Nonparametric Belief Propagation

18 0.37733486 188 nips-2004-The Laplacian PDF Distance: A Cost Function for Clustering in a Kernel Feature Space

19 0.37012321 11 nips-2004-A Second Order Cone programming Formulation for Classifying Missing Data

20 0.36024106 58 nips-2004-Edge of Chaos Computation in Mixed-Mode VLSI - A Hard Liquid


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(13, 0.092), (15, 0.118), (19, 0.01), (26, 0.065), (31, 0.021), (33, 0.154), (35, 0.073), (36, 0.245), (39, 0.014), (44, 0.045), (50, 0.039), (82, 0.02)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.94132614 6 nips-2004-A Hidden Markov Model for de Novo Peptide Sequencing

Author: Bernd Fischer, Volker Roth, Jonas Grossmann, Sacha Baginsky, Wilhelm Gruissem, Franz Roos, Peter Widmayer, Joachim M. Buhmann

Abstract: De novo Sequencing of peptides is a challenging task in proteome research. While there exist reliable DNA-sequencing methods, the highthroughput de novo sequencing of proteins by mass spectrometry is still an open problem. Current approaches suffer from a lack in precision to detect mass peaks in the spectrograms. In this paper we present a novel method for de novo peptide sequencing based on a hidden Markov model. Experiments effectively demonstrate that this new method significantly outperforms standard approaches in matching quality. 1

same-paper 2 0.83294135 151 nips-2004-Rate- and Phase-coded Autoassociative Memory

Author: Máté Lengyel, Peter Dayan

Abstract: Areas of the brain involved in various forms of memory exhibit patterns of neural activity quite unlike those in canonical computational models. We show how to use well-founded Bayesian probabilistic autoassociative recall to derive biologically reasonable neuronal dynamics in recurrently coupled models, together with appropriate values for parameters such as the membrane time constant and inhibition. We explicitly treat two cases. One arises from a standard Hebbian learning rule, and involves activity patterns that are coded by graded firing rates. The other arises from a spike timing dependent learning rule, and involves patterns coded by the phase of spike times relative to a coherent local field potential oscillation. Our model offers a new and more complete understanding of how neural dynamics may support autoassociation. 1

3 0.78395694 53 nips-2004-Discriminant Saliency for Visual Recognition from Cluttered Scenes

Author: Dashan Gao, Nuno Vasconcelos

Abstract: Saliency mechanisms play an important role when visual recognition must be performed in cluttered scenes. We propose a computational definition of saliency that deviates from existing models by equating saliency to discrimination. In particular, the salient attributes of a given visual class are defined as the features that enable best discrimination between that class and all other classes of recognition interest. It is shown that this definition leads to saliency algorithms of low complexity, that are scalable to large recognition problems, and is compatible with existing models of early biological vision. Experimental results demonstrating success in the context of challenging recognition problems are also presented. 1

4 0.68842638 153 nips-2004-Reducing Spike Train Variability: A Computational Theory Of Spike-Timing Dependent Plasticity

Author: Sander M. Bohte, Michael C. Mozer

Abstract: Experimental studies have observed synaptic potentiation when a presynaptic neuron fires shortly before a postsynaptic neuron, and synaptic depression when the presynaptic neuron fires shortly after. The dependence of synaptic modulation on the precise timing of the two action potentials is known as spike-timing dependent plasticity or STDP. We derive STDP from a simple computational principle: synapses adapt so as to minimize the postsynaptic neuron’s variability to a given presynaptic input, causing the neuron’s output to become more reliable in the face of noise. Using an entropy-minimization objective function and the biophysically realistic spike-response model of Gerstner (2001), we simulate neurophysiological experiments and obtain the characteristic STDP curve along with other phenomena including the reduction in synaptic plasticity as synaptic efficacy increases. We compare our account to other efforts to derive STDP from computational principles, and argue that our account provides the most comprehensive coverage of the phenomena. Thus, reliability of neural response in the face of noise may be a key goal of cortical adaptation. 1

5 0.67835438 189 nips-2004-The Power of Selective Memory: Self-Bounded Learning of Prediction Suffix Trees

Author: Ofer Dekel, Shai Shalev-shwartz, Yoram Singer

Abstract: Prediction suffix trees (PST) provide a popular and effective tool for tasks such as compression, classification, and language modeling. In this paper we take a decision theoretic view of PSTs for the task of sequence prediction. Generalizing the notion of margin to PSTs, we present an online PST learning algorithm and derive a loss bound for it. The depth of the PST generated by this algorithm scales linearly with the length of the input. We then describe a self-bounded enhancement of our learning algorithm which automatically grows a bounded-depth PST. We also prove an analogous mistake-bound for the self-bounded algorithm. The result is an efficient algorithm that neither relies on a-priori assumptions on the shape or maximal depth of the target PST nor does it require any parameters. To our knowledge, this is the first provably-correct PST learning algorithm which generates a bounded-depth PST while being competitive with any fixed PST determined in hindsight. 1

6 0.67159569 28 nips-2004-Bayesian inference in spiking neurons

7 0.67092639 1 nips-2004-A Cost-Shaping LP for Bellman Error Minimization with Performance Guarantees

8 0.66828418 69 nips-2004-Fast Rates to Bayes for Kernel Machines

9 0.66809773 131 nips-2004-Non-Local Manifold Tangent Learning

10 0.666821 181 nips-2004-Synergies between Intrinsic and Synaptic Plasticity in Individual Model Neurons

11 0.66506857 173 nips-2004-Spike-timing Dependent Plasticity and Mutual Information Maximization for a Spiking Neuron Model

12 0.66192353 172 nips-2004-Sparse Coding of Natural Images Using an Overcomplete Set of Limited Capacity Units

13 0.66188902 76 nips-2004-Hierarchical Bayesian Inference in Networks of Spiking Neurons

14 0.6583491 4 nips-2004-A Generalized Bradley-Terry Model: From Group Competition to Individual Skill

15 0.65705597 174 nips-2004-Spike Sorting: Bayesian Clustering of Non-Stationary Data

16 0.65539896 178 nips-2004-Support Vector Classification with Input Data Uncertainty

17 0.65526706 58 nips-2004-Edge of Chaos Computation in Mixed-Mode VLSI - A Hard Liquid

18 0.65519881 206 nips-2004-Worst-Case Analysis of Selective Sampling for Linear-Threshold Algorithms

19 0.65495384 127 nips-2004-Neighbourhood Components Analysis

20 0.65481746 118 nips-2004-Methods for Estimating the Computational Power and Generalization Capability of Neural Microcircuits