nips nips2000 nips2000-124 knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Silvia Scarpetta, Zhaoping Li, John A. Hertz
Abstract: We apply to oscillatory networks a class of learning rules in which synaptic weights change in proportion to pre- and post-synaptic activity, with a kernel A(τ) measuring the effect for a postsynaptic spike a time τ after the presynaptic one. The resulting synaptic matrices have an outer-product form in which the oscillating patterns are represented as complex vectors. In a simple model, the even part of A(τ) enhances the resonant response to a learned stimulus by reducing the effective damping, while the odd part determines the frequency of oscillation. We relate our model to the olfactory cortex and hippocampus and their presumed roles in forming associative memories and input representations.
Reference: text
sentIndex sentText sentNum sentScore
1 Abstract We apply to oscillatory networks a class of learning rules in which synaptic weights change in proportion to pre- and post-synaptic activity, with a kernel A(τ) measuring the effect for a postsynaptic spike a time τ after the presynaptic one. [sent-11, score-0.699]
2 The resulting synaptic matrices have an outer-product form in which the oscillating patterns are represented as complex vectors. [sent-12, score-0.386]
3 In a simple model, the even part of A(τ) enhances the resonant response to a learned stimulus by reducing the effective damping, while the odd part determines the frequency of oscillation. [sent-13, score-0.656]
4 We relate our model to the olfactory cortex and hippocampus and their presumed roles in forming associative memories and input representations. [sent-14, score-0.248]
5 1 Introduction Recent studies of synapses between neocortical and hippocampal pyramidal neurons [1, 2, 3, 4] have revealed that changes in synaptic efficacy can depend on the relative timing of pre- and postsynaptic spikes. [sent-15, score-0.427]
6 Typically, a presynaptic spike followed by a postsynaptic one leads to an increase in efficacy (LTP), while the reverse temporal order leads to a decrease (LTD). [sent-16, score-0.199]
7 The dependence of the change in synaptic efficacy on the difference τ between the two spike times may be characterized by a kernel which we denote A(τ) [4]. [sent-17, score-0.276]
8 For hippocampal pyramidal neurons, the half-width of this kernel is around 20 ms. [sent-18, score-0.149]
9 Many important neural structures, notably hippocampus and olfactory cortex, exhibit oscillatory activity in the 20-50 Hz range. [sent-19, score-0.44]
10 Here the temporal variation of the neuronal firing can clearly affect the synaptic dynamics, and vice versa. [sent-20, score-0.177]
11 In this paper we study a simple model for learning oscillatory patterns, based on the structure of the kernel A(τ) and other known physiology of these areas. [sent-21, score-0.371]
12 We will assume that these synaptic changes in long-range lateral connections are driven by oscillatory, patterned input to a network that initially has only local synaptic connections. [sent-22, score-0.467]
13 The result is an imprinting of the oscillatory patterns in the synapses, such that subsequent input of a similar pattern will evoke a strong resonant response. [sent-23, score-1.14]
14 It can be viewed as a generalization to oscillatory networks with spike-timing-dependent learning of the standard scenario whereby stationary patterns are stored in Hopfield networks using the conventional Hebb rule. [sent-24, score-0.752]
15 2 Model The computational neurons of the model represent local populations of biological neurons that share common input. [sent-25, score-0.12]
16 Here u_i and v_i are the membrane potentials of the excitatory and inhibitory (formal) neurons i in Eqns. (1, 2), α⁻¹ is their membrane time constant, and the sigmoidal functions g_u(·) and g_v(·) model the dependence of their outputs (interpreted as instantaneous firing rates) on their membrane potentials. [sent-28, score-0.509]
17 Here γ⁰ and β⁰ are the local (inhibitory-to-excitatory and excitatory-to-inhibitory) connection strengths within local excitatory-inhibitory pairs, and for simplicity we take the external drive I_i(t) to act only on the excitatory units. [sent-32, score-0.29]
18 We include nonlocal excitatory couplings J_ij between excitatory units and W_ij from excitatory units to inhibitory ones. [sent-33, score-0.962]
19 In this minimal model, we ignore long-range inhibitory couplings, appealing to the fact that real anatomical inhibitory connections are predominantly short-ranged. [sent-34, score-0.332]
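To make the setup concrete, here is a minimal sketch of excitatory-inhibitory rate dynamics of the kind described above; the exact placement of α, β, γ and the couplings J, W below is an assumption for illustration, not necessarily the paper's Eqns. (1, 2).

```python
import numpy as np

def simulate_ei(u, v, J, W, I, g_u, g_v, alpha, beta, gamma,
                dt=1e-4, steps=20000):
    """Euler integration of a generic E-I rate model (assumed form, a sketch):
         du_i/dt = -alpha*u_i - gamma*g_v(v_i) + sum_j J_ij g_u(u_j) + I_i(t)
         dv_i/dt = -alpha*v_i + beta*g_u(u_i)  + sum_j W_ij g_u(u_j)
    u, v: membrane potentials of the excitatory/inhibitory units (N-vectors);
    I(t): external drive, acting on the excitatory units only, as in the text."""
    u = np.array(u, dtype=float)
    v = np.array(v, dtype=float)
    for k in range(steps):
        gu, gv = g_u(u), g_v(v)
        du = -alpha * u - gamma * gv + J @ gu + I(k * dt)
        dv = -alpha * v + beta * gu + W @ gu
        u += dt * du
        v += dt * dv
    return u, v
```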
20 The model is nonlinear, but here we will limit our treatment to an analysis of small oscillations around a stable fixed point {ū, v̄} determined by the DC part of the input. [sent-40, score-0.14]
21 Performing the linearization and eliminating the inhibitory units [6, 5], we obtain $\ddot{u} + [2\alpha - J]\dot{u} + [\alpha^2 + \beta(\gamma + W) - \alpha J]\,u = (\partial_t + \alpha)\,\delta I$ (3). [sent-41, score-0.198]
22 For simplicity, we have assumed that the effective local couplings β_i = g_v'(v̄_i)β⁰ (and likewise γ_i) are the same for all units. [sent-43, score-0.148]
23 In general, the oscillatory pattern elements ξ_i = |ξ_i|e^{−iφ_i} are complex, reflecting possible phase differences across the units. [sent-48, score-0.518]
24 We likewise separate the response u = u⁺ + u⁻ (after the initial transients) into positive- and negative-frequency components u± (with u⁻ = u⁺* and u± ∝ e^{∓iωt}). [sent-49, score-0.107]
25 Here 2α is the intrinsic damping and α² + βγ the (squared) frequency of the individual oscillators. [sent-56, score-0.239]
26 Figure 1, A: The model: In addition to the local excitatory-inhibitory connections (vertical solid lines), there are nonlocal long-range connections (dashed lines) between excitatory units (J_ij) and from excitatory to inhibitory units (W_ij). [sent-64, score-0.765]
27 B: Activation function used in simulations for excitatory units. [sent-66, score-0.312]
28 1 Learning phase We employ a generalized Hebb rule of the form $\delta C_{ij} = \eta \int_0^T dt \int_{-\infty}^{\infty} d\tau\, y_i(t+\tau)\, A(\tau)\, x_j(t)$ (6) for the synaptic weight C_ij, where x_j and y_i are the pre- and postsynaptic activities, measured relative to stationary levels at which no changes in synaptic strength occur. [sent-71, score-0.501]
29 We consider a general kernel A(τ), although experimentally A(τ) > 0 (< 0) for τ > 0 (< 0). [sent-72, score-0.057]
30 We will apply the rule to both J and W in our linearized network, where the firing rates g_u(u_i) and g_v(v_i) vary linearly with u_i and v_i, so we will use Eqn. (6) directly with these linearized activities. [sent-73, score-0.111]
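As a sketch, the double integral of Eq. (6) can be discretized as a lagged correlation of the activity traces; the symmetric lag grid and the zero padding at the boundaries are implementation choices, not specified in the text.

```python
import numpy as np

def delta_C(y, x, A, dt, eta=1.0):
    """Discretized Eq. (6): dC_ij = eta * int_0^T dt int dtau y_i(t+tau) A(tau) x_j(t).
    y, x: (N, T) post-/presynaptic activity (relative to stationary levels);
    A:    kernel sampled at lags tau = (-K..K)*dt, length 2K+1."""
    N, T = x.shape
    K = (len(A) - 1) // 2
    ypad = np.pad(y, ((0, 0), (K, K)))            # zero activity outside [0, T)
    # filt_i(t) = sum_tau A(tau) * y_i(t + tau)
    filt = sum(A[k] * ypad[:, k:k + T] for k in range(2 * K + 1))
    return eta * dt * dt * filt @ x.T             # (N, N): postsynaptic rows, presynaptic columns
```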
31 In the brain structures we are modeling, cholinergic modulation makes the long-range connections ineffective during learning [7]. [sent-78, score-0.07]
32 The result has the outer-product form J_ij ∝ ξ_i ξ_j*, a generalization to the complex patterns ξ^μ of the Hopfield-Hebb outer-product rule for real-valued patterns. [sent-83, score-0.198]
33 Because the learning rule is linear in the activities, the learned weights are simply sums of contributions from individual patterns of the form above. [sent-88, score-0.243]
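A minimal sketch of the resulting multi-pattern weight structure; the 1/N normalization and the prefactor J0 are illustrative assumptions, not the paper's values.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, J0 = 10, 3, 1.0
amps = rng.random((M, N))                          # |xi^mu_i|
phis = 2 * np.pi * rng.random((M, N))              # phi^mu_i
xis = amps * np.exp(-1j * phis)                    # xi^mu_i = |xi^mu_i| e^{-i phi^mu_i}

# Outer-product rule, summed over patterns: J_ij ∝ sum_mu xi^mu_i (xi^mu_j)*
J = (J0 / N) * sum(np.outer(xi, xi.conj()) for xi in xis)
assert np.allclose(J, J.conj().T)                  # Hermitian, as a sum of such outer products must be
```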
34 2 Recall phase We return to the single-pattern problem and study the simple case in which the J and W couplings are imprinted identically. [sent-92, score-0.121]
35 Consider first an input pattern δI = ξe^{−iωt} + c.c. [sent-93, score-0.276]
36 that matches the stored pattern exactly (ξ = ξ⁰), but possibly oscillating at a different frequency. [sent-95, score-0.426]
37 Using Eq. (3), the (positive-frequency) response is $u^+ = \eta\omega\gamma\beta\,(\omega + i\alpha)\,\xi^0 e^{-i\omega t} \big/ \big\{ -2\alpha\omega + \tfrac{1}{2}J_0(\omega + \omega_0)A'(\omega_0) + i\,[\alpha^2 + \beta\gamma - \tfrac{1}{2}J_0(\omega + \omega_0)A''(\omega_0) - \omega^2] \big\}$ (10), where $A'(\omega_0) \equiv \mathrm{Re}\,\tilde A(\omega_0)$ and $A''(\omega_0) \equiv \mathrm{Im}\,\tilde A(\omega_0)$. [sent-98, score-0.219]
38 For a strong response at ω = ω₀, we require condition (11): the imaginary part of this denominator must vanish at ω₀ while its real part stays small. This means (1) the resonance frequency ω₀ is determined by A'', (2) the effective damping 2α − J₀A'(ω₀) should be small, and (3) deviation of ω from ω₀ reduces the response. [sent-99, score-0.433]
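The resonance behavior can be sketched numerically from the reconstructed denominator of Eq. (10); the parameter values below, and the neglect of the A'' frequency shift, are illustrative assumptions.

```python
import numpy as np

alpha = 100.0                                     # 1/s, assumed
w0 = 2 * np.pi * 40.0                             # target a 40 Hz resonance
beta_gamma = w0**2 - alpha**2                     # so sqrt(alpha^2 + beta*gamma) = w0
J0A1, J0A2 = 180.0, 0.0                           # J0*A'(w0), J0*A''(w0): assumptions
w = 2 * np.pi * np.linspace(20.0, 60.0, 2000)

den = (-2 * alpha * w + 0.5 * J0A1 * (w + w0)
       + 1j * (alpha**2 + beta_gamma - 0.5 * J0A2 * (w + w0) - w**2))
gain = np.abs((w + 1j * alpha) / den)             # up to constant prefactors

# effective damping 2*alpha - J0*A'(w0) = 20 here: a sharp peak near 40 Hz
print(w[np.argmax(gain)] / (2 * np.pi))
```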
39 It is instructive to consider the case where the width of the time window for synaptic change is small compared with the oscillation period. [sent-100, score-0.227]
40 Experimentally, a₁ > 0, implying a resonant frequency greater than the intrinsic local frequency √(α² + βγ) obtained in the absence of long-range coupling. [sent-102, score-0.367]
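For a kernel much narrower than the oscillation period, a short expansion (our reconstruction; a₁ is the first moment of the kernel that the text appears to reference, and the e^{−iω₀τ} transform convention is our assumption, chosen for consistency with the statement about a₁) separates the roles of the even and odd parts of A(τ):

$$\tilde A(\omega_0) = \int d\tau\, A(\tau)\, e^{-i\omega_0\tau} \;\approx\; \underbrace{\int d\tau\, A(\tau)}_{a_0} \;-\; i\,\omega_0 \underbrace{\int d\tau\, \tau\, A(\tau)}_{a_1}.$$

Thus A'(ω₀) ≈ a₀ is set by the even part of A (controlling the reduction of the damping), while A''(ω₀) ≈ −ω₀a₁ is set by the odd part (controlling the frequency shift); with these signs, a₁ > 0 pushes the resonance above the intrinsic frequency, as stated above.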
41 If the drive ξ does not match the stored pattern (in phase and amplitude), the response will consist of two terms. [sent-103, score-0.657]
42 The first term is like Eq. (10), but reduced in amplitude by an overlap factor ξ⁰*·ξ. [sent-105, score-0.187]
43 The second term is proportional to the part of ξ orthogonal to the stored pattern. [sent-108, score-0.395]
44 The J and W matrices do not act in this subspace, so the frequency dependence of this term is just that of the uncoupled oscillators. [sent-109, score-0.155]
45 This response is always highly damped and therefore small. [sent-113, score-0.107]
46 It is straightforward to extend this analysis to multiple imprinted patterns. [sent-114, score-0.336]
47 The response consists of a sum of terms, one for each stored pattern. [sent-115, score-0.347]
48 The term for each stored pattern is just like that described in the single-stored-pattern case: it has one part for the input component parallel to the stored pattern and another part for the component orthogonal to it. [sent-116, score-1.126]
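A small sketch of this parallel/orthogonal decomposition for a set of stored complex patterns; the QR-based projection is an implementation choice, not taken from the paper.

```python
import numpy as np

def split_input(xi_in, stored):
    """Split xi_in into its component in span{stored patterns} (resonant)
    and the orthogonal remainder (merely damped).
    stored: list of complex N-vectors."""
    Q, _ = np.linalg.qr(np.stack(stored, axis=1))   # orthonormal basis of the span
    parallel = Q @ (Q.conj().T @ xi_in)
    return parallel, xi_in - parallel
```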
49 We note that, in this linear analysis, an input which overlaps several stored patterns will (if the imprinting and input frequencies match) evoke a resonant response which is a linear combination of the stored patterns. [sent-117, score-1.629]
50 Thus, a network tuned to operate in a nearly linear regime is able to interpolate in forming its representation of the input. [sent-118, score-0.22]
51 For categorical associative memory, on the other hand, a network has to work in the extreme nonlinear limit, responding with only the strongest stored pattern in an input mixture. [sent-119, score-0.584]
52 As our network operates near the threshold for spontaneous oscillations, we expect that it should exhibit properties intermediate between these two extremes. [sent-120, score-0.052]
53 Figure 2 (panels A-C; axes include response amplitude, overlap, and input angle in degrees): Circles show non-linear simulation results, stars show the linear simulation results, and the dotted line shows the analytical prediction for the linearized model. [sent-124, score-0.152]
54 A: Importance of frequency match: amplitude of the response of the output units as a function of the frequency of the current input. [sent-126, score-0.601]
55 B: Importance of amplitude and phase mismatch: amplitude of the response as a function of the overlap between the current input and the imprinted pattern. [sent-129, score-1.032]
56 C: Input-output relationship when two orthogonal patterns ξ¹ and ξ² have been imprinted at the same frequency ω = 41 Hz. [sent-133, score-0.762]
57 The angle of the output pattern with respect to ξ¹ is shown as a function of the angle of the input pattern with respect to ξ¹, for many different input patterns. [sent-134, score-0.41]
58 We find that this is indeed the case in the simulations reported in the next section. [sent-136, score-0.065]
59 From our analysis it turns out that the network behaves like a Hopfield memory (separate basins, without interpolation capability) for patterns with different imprinting frequencies, while at the same time it is able to interpolate among patterns which share a common frequency. [sent-137, score-0.841]
60 3 Simulations To check the validity of the linear approximation used in the analysis, we performed numerical simulations of both the non-linear equations (1, 2) and the linearized ones (3). [sent-138, score-0.176]
61 We simulated the recall phase of a network consisting of 10 excitatory and 10 inhibitory cells. [sent-139, score-0.519]
62 The connections Jij and Wij were calculated from Eqns. [sent-140, score-0.07]
63 (9), where we used the approximations (12) for the kernel shape A(τ). [sent-141, score-0.057]
64 Parameters were set in such a way that the selective resonance was in the 40-Hz range. [sent-142, score-0.048]
65 In the non-linear simulations we used different piecewise linear activation functions for g_u(·) and g_v(·), as shown in Fig. 1B. [sent-143, score-0.11]
66 We chose the parameters of the functions g_u(·) and g_v(·) so that the network equilibrium points ū_i, v̄_i were close to, but below, the high-gain region. [sent-145, score-0.052]
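A sketch of a piecewise-linear activation of this kind; the breakpoint, slope, and saturation level are illustrative, not the paper's values.

```python
import numpy as np

def g_piecewise(x, x0=0.0, gain=1.0, g_max=1.0):
    """Zero below x0, linear with slope `gain` above it, saturating at g_max."""
    return np.clip(gain * (x - x0), 0.0, g_max)
```

Placing the equilibrium just below the high-gain segment keeps weak inputs in the nearly linear regime while letting strong resonant drive reach saturation.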
67 The results confirm that when the input pattern matches the imprinted one in frequency, amplitude and phase, the network responds with strong resonant oscillations. [sent-149, score-0.956]
68 However, it does not resonate if the frequencies do not match, as shown in the frequency tuning curve in Fig. 2A. [sent-150, score-0.217]
69 The behavior when the two frequencies are close to each other differs in the linear and nonlinear cases. [sent-152, score-0.146]
70 However, in both cases a sharp selectivity in frequency is observed. [sent-153, score-0.193]
71 The dependence on the overlap between the input and the stored pattern is shown in Fig. 2B. [sent-154, score-0.474]
72 The non-linear case, indicated by circles, should be compared with the linear case, where the amplitude is always linear in the overlap. [sent-156, score-0.207]
73 In the nonlinear case, the linearity holds roughly only for overlaps lower than about 0.4; [sent-157, score-0.105]
74 for larger overlaps the amplification is as high as in the perfect-match case. [sent-158, score-0.125]
75 This means that input patterns whose overlap with the imprinted one is greater than about 0.4 evoke essentially the full resonant response. [sent-159, score-0.685]
76 During the learning phase the parameter a₁ of the kernel was tuned appropriately. [sent-171, score-0.194]
77 The response elicited when two orthogonal patterns have been imprinted with the same frequency is shown in Fig. 2C. [sent-176, score-0.869]
78 In both linear and non-linear simulations the network responds vigorously (with high-amplitude oscillations) to the drive if ξ is in the subspace spanned by the imprinted patterns, and fails to respond appreciably if ξ is orthogonal to that plane. [sent-185, score-0.788]
79 When the input pattern ξ is in the plane spanned by the stored patterns, the resonant response u also lies in this plane. [sent-186, score-0.733]
80 However, while in the linear case the output is proportional to the input, in agreement with the analysis, in the nonlinear case there are preferred directions in the stored-pattern plane. [sent-187, score-0.446]
81 Finally, we performed linear simulations storing two orthogonal patterns ξ¹e^{−iω₁t} + c.c. and ξ²e^{−iω₂t} + c.c. at two different frequencies. [sent-189, score-0.438]
82 Fig. 3 shows good performance of the network in separating the basins of attraction in this case. [sent-195, score-0.156]
83 The response to a linear combination of the two patterns, (aξ¹ + bξ²)e^{−iω₂t} + c.c., [sent-196, score-0.152]
84 is proportional to the part of the input whose imprinting frequency matches the current driving frequency. [sent-198, score-0.589]
85 Linear combinations of the two imprinted patterns are not attractors if the two patterns do not share the same imprinting frequency. [sent-199, score-1.088]
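A toy illustration of these frequency-keyed basins; the scalar gains g_res and g_damp stand in for the resonant and damped response factors of the linear analysis and are invented for this sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
xi1 = rng.standard_normal(8) + 1j * rng.standard_normal(8)    # imprinted at w1
xi2 = rng.standard_normal(8) + 1j * rng.standard_normal(8)    # imprinted at w2
xi2 -= xi1 * (np.vdot(xi1, xi2) / np.vdot(xi1, xi1))          # make xi2 orthogonal to xi1

def respond(xi_in, drive_freq, g_res=50.0, g_damp=1.0):
    """Linear response: each stored pattern is amplified only when the drive
    frequency matches its own imprinting frequency (w1=1, w2=2 here)."""
    out = np.zeros_like(xi_in)
    for xi, f in ((xi1, 1.0), (xi2, 2.0)):
        c = np.vdot(xi, xi_in) / np.vdot(xi, xi)              # overlap coefficient
        out += (g_res if f == drive_freq else g_damp) * c * xi
    return out

mix = 0.5 * (xi1 + xi2)                   # equal mixture, driven at w2
u = respond(mix, drive_freq=2.0)
print(abs(np.vdot(xi1, u)), abs(np.vdot(xi2, u)))   # the xi2 component dominates
```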
86 4 Summary and Discussion We have presented a model of learning for memory or input representations in neural networks with input-driven oscillatory activity. [sent-200, score-0.395]
87 The model structure is an abstraction of the hippocampus or the olfactory cortex. [sent-201, score-0.126]
88 We propose a simple generalized Hebbian rule, using temporal-activity-dependent LTP and LTD, to encode both magnitudes and phases of oscillatory patterns into the synapses in the network. [sent-202, score-0.611]
89 After learning, the model responds resonantly to inputs which have been learned (or, for networks which operate essentially linearly, to linear combinations of learned inputs), but negligibly to other input patterns. [sent-203, score-0.374]
90 Encoding both amplitude and phase enhances computational capacity, for which the price is having to learn both the excitatory-to-excitatory and the excitatory-to-inhibitory connections. [sent-204, score-0.294]
91 For small oscillation frequencies, for example, the model requires that the overall LTP dominate the overall LTD, but this requirement should be modified if the stored oscillations are of high frequency. [sent-207, score-0.062]
92 Plasticity in the excitatory-to-inhibitory connections (for which experimental evidence and investigation are still scarce) is required by our model for storing phase-locked but non-synchronized oscillation patterns. [sent-208, score-0.343]
93 As for the Hopfield model, we distinguish two functional phases: (1) the learning phase, in which the system is clamped dynamically to the external inputs and (2) the recall phase, in which the system dynamics is determined by both the external inputs and the internal interactions. [sent-209, score-0.261]
94 A special property of our model in the linear regime is the following interpolation capability: under a given oscillation frequency, once the system has learned a set of representation states, all other states in the subspace spanned by the learned states can also evoke vigorous responses. [sent-210, score-0.517]
95 In the hippocampus, each cell has a localised "place field", and the superposition of the activity of several cells with nearby place fields can represent continuously varying position. [sent-212, score-0.061]
96 The locality of the place fields also means that this representation is conservative (and thus robust), in the sense that interpolation does not extend beyond the spatial range of the experienced locations or to locations in between two learned but distant and disjoint spatial regions. [sent-213, score-0.187]
97 Of course, this interpolation property is not always desirable. [sent-214, score-0.081]
98 For instance, in categorical memory, one does not want inputs which are linear combinations of stored patterns to elicit responses which are also similar linear combinations. [sent-215, score-0.662]
99 Suitable nonlinearity can (as we saw in the last section) enable the system to perform categorization: one way involves storing different patterns (or, by implication, different classes of patterns) at different frequencies. [sent-216, score-0.255]
100 For instance, in a multimodal area, "place fields" might be stored at one oscillation frequency, and (say) odor memories at another. [sent-217, score-0.335]
wordName wordTfidf (topN-words)
[('imprinted', 0.336), ('oscillatory', 0.314), ('wo', 0.267), ('stored', 0.24), ('imprinting', 0.224), ('patterns', 0.198), ('excitatory', 0.18), ('resonant', 0.168), ('frequency', 0.155), ('synaptic', 0.132), ('inhibitory', 0.131), ('phase', 0.121), ('amplitude', 0.117), ('iwt', 0.112), ('couplings', 0.109), ('response', 0.107), ('oscillations', 0.097), ('oscillation', 0.095), ('ui', 0.087), ('damping', 0.084), ('pattern', 0.083), ('input', 0.081), ('interpolation', 0.081), ('vi', 0.077), ('postsynaptic', 0.076), ('uj', 0.073), ('orthogonal', 0.073), ('evoke', 0.072), ('responds', 0.072), ('olfactory', 0.072), ('ltp', 0.072), ('overlap', 0.07), ('connections', 0.07), ('units', 0.067), ('overlaps', 0.066), ('ltd', 0.066), ('linearized', 0.066), ('simulations', 0.065), ('external', 0.063), ('frequencies', 0.062), ('hopfield', 0.061), ('place', 0.061), ('match', 0.059), ('storing', 0.057), ('jij', 0.057), ('kernel', 0.057), ('attractors', 0.056), ('basins', 0.056), ('enhances', 0.056), ('hebb', 0.056), ('iwot', 0.056), ('joa', 0.056), ('oscillating', 0.056), ('salerno', 0.056), ('spanned', 0.054), ('phases', 0.054), ('hippocampal', 0.054), ('hippocampus', 0.054), ('network', 0.052), ('jo', 0.051), ('membrane', 0.051), ('inputs', 0.05), ('categorical', 0.048), ('resonance', 0.048), ('interpolate', 0.048), ('attraction', 0.048), ('gv', 0.048), ('wg', 0.048), ('drive', 0.047), ('matches', 0.047), ('firing', 0.045), ('synapses', 0.045), ('spike', 0.045), ('learned', 0.045), ('linear', 0.045), ('al', 0.044), ('subspace', 0.044), ('zhaoping', 0.044), ('part', 0.043), ('efficacy', 0.042), ('angle', 0.041), ('italy', 0.041), ('associative', 0.041), ('gu', 0.041), ('hertz', 0.041), ('neurons', 0.04), ('share', 0.04), ('employ', 0.04), ('proportional', 0.039), ('effective', 0.039), ('nonlinear', 0.039), ('tuned', 0.039), ('pyramidal', 0.038), ('selectivity', 0.038), ('combinations', 0.036), ('capability', 0.036), ('hebbian', 0.036), ('regime', 0.036), ('presynaptic', 0.036), ('recall', 0.035)]
simIndex simValue paperId paperTitle
same-paper 1 0.99999952 124 nips-2000-Spike-Timing-Dependent Learning for Oscillatory Networks
Author: Silvia Scarpetta, Zhaoping Li, John A. Hertz
Abstract: We apply to oscillatory networks a class of learning rules in which synaptic weights change in proportion to pre- and post-synaptic activity, with a kernel A(τ) measuring the effect for a postsynaptic spike a time τ after the presynaptic one. The resulting synaptic matrices have an outer-product form in which the oscillating patterns are represented as complex vectors. In a simple model, the even part of A(τ) enhances the resonant response to a learned stimulus by reducing the effective damping, while the odd part determines the frequency of oscillation. We relate our model to the olfactory cortex and hippocampus and their presumed roles in forming associative memories and input representations.
2 0.19089465 129 nips-2000-Temporally Dependent Plasticity: An Information Theoretic Account
Author: Gal Chechik, Naftali Tishby
Abstract: The paradigm of Hebbian learning has recently received a novel interpretation with the discovery of synaptic plasticity that depends on the relative timing of pre and post synaptic spikes. This paper derives a temporally dependent learning rule from the basic principle of mutual information maximization and studies its relation to the experimentally observed plasticity. We find that a supervised spike-dependent learning rule sharing similar structure with the experimentally observed plasticity increases mutual information to a stable near optimal level. Moreover, the analysis reveals how the temporal structure of time-dependent learning rules is determined by the temporal filter applied by neurons over their inputs. These results suggest experimental prediction as to the dependency of the learning rule on neuronal biophysical parameters 1
Author: Kevin A. Archie, Bartlett W. Mel
Abstract: Neurons in area V4 have relatively large receptive fields (RFs), so multiple visual features are simultaneously
4 0.15062785 104 nips-2000-Processing of Time Series by Neural Circuits with Biologically Realistic Synaptic Dynamics
Author: Thomas Natschläger, Wolfgang Maass, Eduardo D. Sontag, Anthony M. Zador
Abstract: Experimental data show that biological synapses behave quite differently from the symbolic synapses in common artificial neural network models. Biological synapses are dynamic, i.e., their
5 0.13607879 55 nips-2000-Finding the Key to a Synapse
Author: Thomas Natschläger, Wolfgang Maass
Abstract: Experimental data have shown that synapses are heterogeneous: different synapses respond with different sequences of amplitudes of postsynaptic responses to the same spike train. Neither the role of synaptic dynamics itself nor the role of the heterogeneity of synaptic dynamics for computations in neural circuits is well understood. We present in this article methods that make it feasible to compute for a given synapse with known synaptic parameters the spike train that is optimally fitted to the synapse, for example in the sense that it produces the largest sum of postsynaptic responses. To our surprise we find that most of these optimally fitted spike trains match common firing patterns of specific types of neurons that are discussed in the literature.
6 0.11117508 66 nips-2000-Hippocampally-Dependent Consolidation in a Hierarchical Model of Neocortex
8 0.091214582 88 nips-2000-Multiple Timescales of Adaptation in a Neural Code
9 0.088650376 146 nips-2000-What Can a Single Neuron Compute?
10 0.078404345 42 nips-2000-Divisive and Subtractive Mask Effects: Linking Psychophysics and Biophysics
11 0.078345701 89 nips-2000-Natural Sound Statistics and Divisive Normalization in the Auditory System
12 0.07522478 91 nips-2000-Noise Suppression Based on Neurophysiologically-motivated SNR Estimation for Robust Speech Recognition
13 0.074952535 67 nips-2000-Homeostasis in a Silicon Integrate and Fire Neuron
14 0.074420393 100 nips-2000-Permitted and Forbidden Sets in Symmetric Threshold-Linear Networks
15 0.070375763 34 nips-2000-Competition and Arbors in Ocular Dominance
16 0.070076481 87 nips-2000-Modelling Spatial Recall, Mental Imagery and Neglect
17 0.069597721 98 nips-2000-Partially Observable SDE Models for Image Sequence Recognition Tasks
18 0.06606324 102 nips-2000-Position Variance, Recurrence and Perceptual Learning
19 0.063864782 99 nips-2000-Periodic Component Analysis: An Eigenvalue Method for Representing Periodic Structure in Speech
20 0.063017793 24 nips-2000-An Information Maximization Approach to Overcomplete and Recurrent Representations
topicId topicWeight
[(0, 0.212), (1, -0.185), (2, -0.222), (3, -0.004), (4, 0.029), (5, 0.031), (6, -0.056), (7, -0.013), (8, 0.062), (9, 0.025), (10, 0.05), (11, -0.081), (12, -0.041), (13, -0.057), (14, 0.127), (15, -0.028), (16, 0.063), (17, -0.011), (18, 0.244), (19, 0.041), (20, -0.12), (21, -0.018), (22, -0.119), (23, -0.091), (24, -0.21), (25, 0.029), (26, 0.116), (27, -0.065), (28, 0.048), (29, 0.168), (30, 0.029), (31, -0.033), (32, -0.021), (33, 0.036), (34, -0.005), (35, 0.085), (36, -0.131), (37, -0.09), (38, 0.005), (39, 0.009), (40, -0.053), (41, 0.047), (42, -0.079), (43, 0.047), (44, -0.01), (45, 0.063), (46, 0.115), (47, -0.163), (48, -0.012), (49, -0.038)]
simIndex simValue paperId paperTitle
same-paper 1 0.96676058 124 nips-2000-Spike-Timing-Dependent Learning for Oscillatory Networks
Author: Silvia Scarpetta, Zhaoping Li, John A. Hertz
Abstract: We apply to oscillatory networks a class of learning rules in which synaptic weights change in proportion to pre- and post-synaptic activity, with a kernel A(τ) measuring the effect for a postsynaptic spike a time τ after the presynaptic one. The resulting synaptic matrices have an outer-product form in which the oscillating patterns are represented as complex vectors. In a simple model, the even part of A(τ) enhances the resonant response to a learned stimulus by reducing the effective damping, while the odd part determines the frequency of oscillation. We relate our model to the olfactory cortex and hippocampus and their presumed roles in forming associative memories and input representations.
Author: Kevin A. Archie, Bartlett W. Mel
Abstract: Neurons in area V4 have relatively large receptive fields (RFs), so multiple visual features are simultaneously
3 0.57821852 129 nips-2000-Temporally Dependent Plasticity: An Information Theoretic Account
Author: Gal Chechik, Naftali Tishby
Abstract: The paradigm of Hebbian learning has recently received a novel interpretation with the discovery of synaptic plasticity that depends on the relative timing of pre and post synaptic spikes. This paper derives a temporally dependent learning rule from the basic principle of mutual information maximization and studies its relation to the experimentally observed plasticity. We find that a supervised spike-dependent learning rule sharing similar structure with the experimentally observed plasticity increases mutual information to a stable near optimal level. Moreover, the analysis reveals how the temporal structure of time-dependent learning rules is determined by the temporal filter applied by neurons over their inputs. These results suggest experimental prediction as to the dependency of the learning rule on neuronal biophysical parameters 1
4 0.56467646 66 nips-2000-Hippocampally-Dependent Consolidation in a Hierarchical Model of Neocortex
Author: Szabolcs Káli, Peter Dayan
Abstract: In memory consolidation, declarative memories which initially require the hippocampus for their recall, ultimately become independent of it. Consolidation has been the focus of numerous experimental and qualitative modeling studies, but only little quantitative exploration. We present a consolidation model in which hierarchical connections in the cortex, that initially instantiate purely semantic information acquired through probabilistic unsupervised learning, come to instantiate episodic information as well. The hippocampus is responsible for helping complete partial input patterns before consolidation is complete, while also training the cortex to perform appropriate completion by itself.
5 0.55070037 104 nips-2000-Processing of Time Series by Neural Circuits with Biologically Realistic Synaptic Dynamics
Author: Thomas Natschläger, Wolfgang Maass, Eduardo D. Sontag, Anthony M. Zador
Abstract: Experimental data show that biological synapses behave quite differently from the symbolic synapses in common artificial neural network models. Biological synapses are dynamic, i.e., their
6 0.5123232 55 nips-2000-Finding the Key to a Synapse
7 0.48824945 131 nips-2000-The Early Word Catches the Weights
8 0.4514758 34 nips-2000-Competition and Arbors in Ocular Dominance
9 0.42545733 42 nips-2000-Divisive and Subtractive Mask Effects: Linking Psychophysics and Biophysics
11 0.30032787 126 nips-2000-Stagewise Processing in Error-correcting Codes and Image Restoration
12 0.28722844 98 nips-2000-Partially Observable SDE Models for Image Sequence Recognition Tasks
13 0.26798755 99 nips-2000-Periodic Component Analysis: An Eigenvalue Method for Representing Periodic Structure in Speech
14 0.26586741 20 nips-2000-Algebraic Information Geometry for Learning Machines with Singularities
15 0.25840408 89 nips-2000-Natural Sound Statistics and Divisive Normalization in the Auditory System
16 0.25164157 90 nips-2000-New Approaches Towards Robust and Adaptive Speech Recognition
17 0.25066942 87 nips-2000-Modelling Spatial Recall, Mental Imagery and Neglect
18 0.22646229 146 nips-2000-What Can a Single Neuron Compute?
19 0.22296236 25 nips-2000-Analysis of Bit Error Probability of Direct-Sequence CDMA Multiuser Demodulators
20 0.21937992 125 nips-2000-Stability and Noise in Biochemical Switches
topicId topicWeight
[(4, 0.013), (5, 0.331), (10, 0.021), (17, 0.089), (18, 0.026), (26, 0.012), (32, 0.016), (33, 0.023), (38, 0.023), (42, 0.046), (55, 0.026), (62, 0.035), (65, 0.022), (67, 0.067), (76, 0.039), (79, 0.014), (81, 0.049), (90, 0.036), (91, 0.018), (97, 0.017)]
simIndex simValue paperId paperTitle
same-paper 1 0.81605184 124 nips-2000-Spike-Timing-Dependent Learning for Oscillatory Networks
Author: Silvia Scarpetta, Zhaoping Li, John A. Hertz
Abstract: We apply to oscillatory networks a class of learning rules in which synaptic weights change in proportion to pre- and post-synaptic activity, with a kernel A(τ) measuring the effect for a postsynaptic spike a time τ after the presynaptic one. The resulting synaptic matrices have an outer-product form in which the oscillating patterns are represented as complex vectors. In a simple model, the even part of A(τ) enhances the resonant response to a learned stimulus by reducing the effective damping, while the odd part determines the frequency of oscillation. We relate our model to the olfactory cortex and hippocampus and their presumed roles in forming associative memories and input representations.
2 0.3952013 104 nips-2000-Processing of Time Series by Neural Circuits with Biologically Realistic Synaptic Dynamics
Author: Thomas Natschläger, Wolfgang Maass, Eduardo D. Sontag, Anthony M. Zador
Abstract: Experimental data show that biological synapses behave quite differently from the symbolic synapses in common artificial neural network models. Biological synapses are dynamic, i.e., their
3 0.38514546 146 nips-2000-What Can a Single Neuron Compute?
Author: Blaise Agüera y Arcas, Adrienne L. Fairhall, William Bialek
Abstract: In this paper we formulate a description of the computation performed by a neuron as a combination of dimensional reduction and nonlinearity. We implement this description for the HodgkinHuxley model, identify the most relevant dimensions and find the nonlinearity. A two dimensional description already captures a significant fraction of the information that spikes carry about dynamic inputs. This description also shows that computation in the Hodgkin-Huxley model is more complex than a simple integrateand-fire or perceptron model. 1
4 0.38087428 129 nips-2000-Temporally Dependent Plasticity: An Information Theoretic Account
Author: Gal Chechik, Naftali Tishby
Abstract: The paradigm of Hebbian learning has recently received a novel interpretation with the discovery of synaptic plasticity that depends on the relative timing of pre and post synaptic spikes. This paper derives a temporally dependent learning rule from the basic principle of mutual information maximization and studies its relation to the experimentally observed plasticity. We find that a supervised spike-dependent learning rule sharing similar structure with the experimentally observed plasticity increases mutual information to a stable near optimal level. Moreover, the analysis reveals how the temporal structure of time-dependent learning rules is determined by the temporal filter applied by neurons over their inputs. These results suggest experimental prediction as to the dependency of the learning rule on neuronal biophysical parameters 1
5 0.37628055 88 nips-2000-Multiple Timescales of Adaptation in a Neural Code
Author: Adrienne L. Fairhall, Geoffrey D. Lewen, William Bialek, Robert R. de Ruyter van Steveninck
Abstract: Many neural systems extend their dynamic range by adaptation. We examine the timescales of adaptation in the context of dynamically modulated rapidly-varying stimuli, and demonstrate in the fly visual system that adaptation to the statistical ensemble of the stimulus dynamically maximizes information transmission about the time-dependent stimulus. Further, while the rate response has long transients, the adaptation takes place on timescales consistent with optimal variance estimation.
6 0.3657054 49 nips-2000-Explaining Away in Weight Space
7 0.36549044 106 nips-2000-Propagation Algorithms for Variational Bayesian Learning
8 0.3594842 122 nips-2000-Sparse Representation for Gaussian Process Models
9 0.3592709 55 nips-2000-Finding the Key to a Synapse
10 0.35868081 74 nips-2000-Kernel Expansions with Unlabeled Examples
11 0.35620844 102 nips-2000-Position Variance, Recurrence and Perceptual Learning
12 0.35564131 92 nips-2000-Occam's Razor
13 0.35496411 107 nips-2000-Rate-coded Restricted Boltzmann Machines for Face Recognition
14 0.35350451 40 nips-2000-Dendritic Compartmentalization Could Underlie Competition and Attentional Biasing of Simultaneous Visual Stimuli
15 0.35286018 79 nips-2000-Learning Segmentation by Random Walks
16 0.35225502 98 nips-2000-Partially Observable SDE Models for Image Sequence Recognition Tasks
17 0.35146567 125 nips-2000-Stability and Noise in Biochemical Switches
18 0.35111469 134 nips-2000-The Kernel Trick for Distances
19 0.35076106 69 nips-2000-Incorporating Second-Order Functional Knowledge for Better Option Pricing