nips nips2008 nips2008-204 knowledge-graph by maker-knowledge-mining

204 nips-2008-Self-organization using synaptic plasticity


Source: pdf

Author: Vicenç Gómez, Andreas Kaltenbrunner, Vicente López, Hilbert J. Kappen

Abstract: Large networks of spiking neurons show abrupt changes in their collective dynamics resembling phase transitions studied in statistical physics. An example of this phenomenon is the transition from irregular, noise-driven dynamics to regular, self-sustained behavior observed in networks of integrate-and-fire neurons as the interaction strength between the neurons increases. In this work we show how a network of spiking neurons is able to self-organize towards a critical state for which the range of possible inter-spike-intervals (dynamic range) is maximized. Self-organization occurs via synaptic dynamics that we analytically derive. The resulting plasticity rule is defined locally so that global homeostasis near the critical state is achieved by local regulation of individual synapses. 1

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Self-organization using synaptic plasticity. Vicenç Gómez, vgomez@iua. [sent-1, score-0.492]

2 Diagonal 177, 08018 Barcelona, Spain. Abstract: Large networks of spiking neurons show abrupt changes in their collective dynamics resembling phase transitions studied in statistical physics. [sent-11, score-0.343]

3 An example of this phenomenon is the transition from irregular, noise-driven dynamics to regular, self-sustained behavior observed in networks of integrate-and-fire neurons as the interaction strength between the neurons increases. [sent-12, score-0.3]

4 In this work we show how a network of spiking neurons is able to self-organize towards a critical state for which the range of possible inter-spike-intervals (dynamic range) is maximized. [sent-13, score-0.446]

5 Self-organization occurs via synaptic dynamics that we analytically derive. [sent-14, score-0.417]

6 The resulting plasticity rule is defined locally so that global homeostasis near the critical state is achieved by local regulation of individual synapses. [sent-15, score-0.502]

7 1 Introduction It is accepted that neural activity self-regulates to prevent neural circuits from becoming hyper- or hypoactive by means of homeostatic processes [14]. [sent-16, score-0.104]

8 Closely related to this idea is the claim that optimal information processing in complex systems is attained at a critical point, near a transition between an ordered and an unordered regime of dynamics [3, 11, 9]. [sent-17, score-0.319]

9 Recently, Kinouchi and Copelli [8] provided a realization of this claim, showing that sensitivity and dynamic range of a network are maximized at the critical point of a non-equilibrium phase transition. [sent-18, score-0.381]

10 Self-Organized Criticality (SOC) [1] has been proposed as a mechanism for neural systems which evolve naturally to a critical state without any tuning of external parameters. [sent-20, score-0.281]

11 In a critical state, typical macroscopic quantities present structural or temporal scale-invariance. [sent-21, score-0.171]

12 A possible regulation mechanism may be provided by synaptic plasticity, as proposed in [10], where synaptic depression is shown to cause the mean synaptic strengths to approach a critical value for a range of interaction parameters which grows with the system size. [sent-23, score-1.216]

13 In this work we analytically derive a local synaptic rule that can drive and maintain a neural network near the critical state. [sent-24, score-0.703]

14 According to the proposed rule, synapses are strengthened or weakened whenever a post-synaptic neuron receives more or less input from the population than required to fire at its natural frequency. [sent-25, score-0.336]

15 This simple principle is enough for the network to self-organize into a critical region where the dynamic range is maximized. [sent-26, score-0.25]

16 We illustrate this using a model of non-leaky spiking neurons with delayed coupling for which a phase transition was analyzed in [7]. [sent-27, score-0.27]

17 The state of a neuron i at time t is encoded by its activation level ai(t), which performs a discrete-time random walk with positive drift towards an absorbing barrier L. [sent-29, score-0.33]

18 This spontaneous evolution is modelled using a Bernoulli process with parameter p. [sent-30, score-0.512]

19 When the threshold L is reached, the states of the other units j in the network are increased after one timestep by the synaptic efficacy ε_ji, ai is reset to 1, and unit i remains insensitive to incoming spikes during the following timestep. [sent-31, score-0.812]
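
A minimal sketch of these dynamics in Python (all parameter values are illustrative; the homogeneous efficacy scale eps_val and the exact refractory convention are our assumptions, since the text above fixes only the qualitative rules):

import numpy as np

def simulate(N=500, L=500, p=0.1, eps_val=0.9, T=10000, seed=0):
    # eps[i, j]: efficacy added to unit i's state one timestep after unit j
    # spikes (the paper's afferent efficacy of i).
    rng = np.random.default_rng(seed)
    eps = np.full((N, N), eps_val * (L - 1) / (N - 1))
    np.fill_diagonal(eps, 0.0)
    a = rng.integers(1, L, size=N).astype(float)  # activation levels ai(t)
    pending = np.zeros(N)                  # efficacies delivered next timestep
    insensitive = np.zeros(N, dtype=bool)  # units that fired last timestep
    spikes = []
    for t in range(T):
        drift = rng.random(N) < p          # Bernoulli(p) spontaneous evolution
        a += np.where(insensitive, 0.0, pending + drift)
        fired = a >= L                     # absorbing barrier reached
        pending = eps[:, fired].sum(axis=1)
        a[fired] = 1.0                     # reset to 1
        insensitive = fired                # insensitive for one timestep
        spikes.extend((t, i) for i in np.flatnonzero(fired))
    return spikes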

20 In [7] it is shown that the system undergoes a phase transition around the critical value η = 1. [sent-38, score-0.23]

21 The study provides upper (τmax ) and lower bounds (τmin ) for the mean inter-spike-interval (ISI) τ of the ensemble and shows that the range of possible ISIs taking the average network behavior (∆τ = τmax -τmin ) is maximized at η = 1. [sent-39, score-0.169]

22 We now introduce synaptic dynamics in the model. [sent-43, score-0.884]

23 We first present the dissipated spontaneous evolution, a magnitude also maximized at η = 1. [sent-44, score-0.543]

24 The gradient of this magnitude turns out to be analytically simple and leads to a plasticity rule that can be expressed using only local information encoded in the post-synaptic unit. [sent-45, score-0.294]

25 The dissipated spontaneous evolution: during one ISI, we distinguish between the spontaneous evolution carried out by a unit and the actual spontaneous evolution needed for a unit to reach the threshold L. [sent-47, score-2.027]

26 The difference between the two quantities can be regarded as a surplus of spontaneous evolution, which is dissipated during an ISI. [sent-48, score-0.521]

27 [Figure 1: network activity (time versus neuron index) for η ≤ 1, showing clustering, and near η = 1; the range ∆τ = τmax − τmin is indicated.]

28 At this point, the network is also broken down into a maximal number of clusters of units that fire according to a periodic pattern. [sent-67, score-0.183]

29 First, we calculate the spontaneous evolution of the given unit during one ISI, which is just its number of stochastic state transitions during an ISI of length τ (thick black lines in Figure 2a). [sent-69, score-0.747]

30 These state transitions occur with probability p at every timestep except for the timestep directly after spiking. [sent-70, score-0.172]

31 Using the average ISI length ⟨τ⟩ over many spikes and all units, we can calculate the average total spontaneous evolution: E_total = (⟨τ⟩ − 1) p. (4) [sent-71, score-0.439]

32 Since the state of a given unit can exceed the threshold because of messages received from the rest of the population (blue dashed lines in Figure 2a), a fraction of (4) is actually not required to induce a spike in that unit and is therefore dissipated. [sent-72, score-0.497]

33 We can obtain this fraction by subtracting from (4) the actual number of state transitions that was required to reach the threshold L. [sent-73, score-0.279]

34 The latter quantity can be referred to as the effective spontaneous evolution E_eff; on average it equals L − 1 minus (N − 1)⟨ε⟩, the mean evolution caused by the messages received from the rest of the units during an ISI. [sent-74, score-0.904]

35 For η ≤ 1, the activity is self-sustained and the messages from other units are enough to drive a unit above the threshold. [sent-75, score-0.326]

36 In this case, all the spontaneous evolution is dissipated and E_eff = 0. [sent-76, score-0.677]

37 At η > 1 the units reach the threshold L mainly because of their spontaneous evolution. [sent-80, score-0.516]
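
The three averaged quantities defined above translate directly into code; a sketch under the stated definitions (E_total from Eq. (4), with E_eff clipped at zero to cover the η ≤ 1 case):

def dissipated_spontaneous_evolution(tau_mean, eps_mean, N, L, p):
    # tau_mean: average ISI length <tau> over many spikes and all units;
    # eps_mean: mean synaptic efficacy <eps>.
    e_total = (tau_mean - 1.0) * p                      # Eq. (4)
    e_eff = max(0.0, (L - 1.0) - (N - 1.0) * eps_mean)  # 0 when eta <= 1
    return e_total - e_eff                              # dissipated surplus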

38 Synaptic dynamics: having presented our magnitude of interest, we now derive a plasticity rule in the model. [sent-87, score-0.343]

39 Figure 2: (a) Example trajectory of the state of a neuron: the dissipated spontaneous evolution E_diss is the difference between the total spontaneous evolution E_total (thick black lines) and the actual evolution required to reach the threshold, E_eff (dark gray dimensioning), in one ISI. [sent-95, score-1.677]

40 The analytical results are rather simple and allow a clear interpretation of the underlying mechanism governing the dynamics of the network under the proposed synaptic rule. [sent-102, score-0.546]

41 E_diss is now defined in terms of each individual neuron i as: E^i_diss = p [ (L − 1 − Σ_{k≠i} ε_ik)/(2p) + √( ((L − 1 − Σ_{k≠i} ε_ik)/(2p))² + (Σ_{k≠i} ε_ik)/(2p) ) + 1 ] − max{0, L − 1 − Σ_{k≠i} ε_ik}. (8) [sent-105, score-0.146]
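
Evaluated in code, the reconstructed expression reads as follows (a sketch only: Eq. (8) above was recovered from a damaged extraction, and the term under the square root in particular is a best guess that should be checked against the original paper):

import numpy as np

def e_diss_i(eps_row, L, p):
    # eps_row: afferent efficacies eps_ik, k != i, of post-synaptic unit i.
    s = eps_row.sum()
    b = (L - 1.0 - s) / (2.0 * p)
    e_total_i = p * (b + np.sqrt(b * b + s / (2.0 * p)) + 1.0)
    return e_total_i - max(0.0, L - 1.0 - s)  # subtract E_eff^i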

42 (8) k=i An update of ǫij occurs when a spike from the pre-synaptic unit j induces a spike in a post-synaptic unit i. [sent-106, score-0.518]

43 The results are robust as long as synaptic updates are produced at the spike-time of the post-synaptic neuron. [sent-108, score-0.364]

44 For a plasticity rule to be biologically plausible it must be local, so only information encoded in the states of the pre-synaptic neuron j and the post-synaptic neuron i may be considered to update ε_ij. [sent-111, score-0.318]

45 Figure 3: (a) First derivative of the dissipated spontaneous evolution E_diss for κ = 1, L = 1000 and c = 0. [sent-121, score-0.677]

46 We propagate Σ_{k≠i} ε_ik to the state of the post-synaptic unit i by considering, for every unit i, an effective threshold L_i which decreases deterministically every time an incoming pulse is received [6]. [sent-124, score-0.318]

47 Intuitively, L_i indicates how the activity received from the population in the last ISI differs from the activity required to induce a spike in i. [sent-126, score-0.355]

48 We replace it with a constant c and show later that its influence on the synaptic rule is limited. [sent-128, score-0.318]

49 We can understand the mechanism involved in a particular synaptic update by analyzing Eq. (11) in detail. [sent-130, score-0.386]

50 In the case of a negative effective threshold (L_i < 0), unit i receives more input from the rest of the units than required to spike, which translates into a weakening of the synapse. [sent-132, score-0.323]

51 Conversely, if L_i > 0, some spontaneous evolution was required for unit i to fire, and the synapse is strengthened. [sent-133, score-0.637]

52 The intermediate case (L_i = 0) corresponds to η = 1, where no synaptic update is needed (nor is it defined). [sent-135, score-0.318]
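
A minimal sketch of the update applied at each spike of the post-synaptic unit i (Python): the form sign(L_i) exp(−|L_i|/c) is a hypothetical stand-in that only reproduces the qualitative shape described here, strengthening for L_i > 0, weakening for L_i < 0, with the largest changes near L_i = 0 and a width set by c; it is not the paper's Eq. (11).

import numpy as np

def update_afferents(eps_i, L_i, kappa=0.05, c=5.0):
    # eps_i: afferent efficacies of unit i; L_i: its effective threshold at
    # spike time, i.e. L - 1 minus the total input received during the ISI.
    if L_i == 0:
        return eps_i  # eta = 1: no update is needed (nor defined)
    delta = kappa * np.sign(L_i) * np.exp(-abs(L_i) / c)
    # L_i < 0: more input than required, weaken; L_i > 0: strengthen.
    return np.maximum(eps_i + delta, 0.0)  # non-negativity is our assumption

Between updates, L_i would be set to L − 1 when unit i resets and decremented by each incoming efficacy, so that its value at the next spike of i encodes the comparison described above.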

53 (b) Eq. (11) is plotted in bold lines together with ∂E_total/∂ε_ij (dashed line, corresponding to η < 1) and ∂E_total/∂ε_ij + 1 (dash-dotted, η > 1), for different values of the effective threshold L_i of a given unit at the end of an ISI. [sent-138, score-0.177]

54 E_total indicates the amount of synaptic change and E_eff determines whether the synapse is strengthened or weakened. [sent-139, score-0.379]

55 The largest updates occur in the transition from a positive to a negative L_i and tend to zero for larger absolute values of L_i. [sent-140, score-0.09]

56 Therefore, significant updates correspond to those synapses whose post-synaptic neurons have, during the last ISI, received an amount of activity from the whole network similar to that required to fire. [sent-141, score-0.462]

57 We remark on the similarity between Figure 3b and the rule characterizing spike-timing-dependent plasticity (STDP) [4, 13]. [sent-142, score-0.4]

58 Figure 3b illustrates the role of c in the plasticity rule. [sent-144, score-0.174]

59 For small c, updates are only significant in a tiny range of L_i values near zero. [sent-145, score-0.105]

60 Figure 4: Empirical results of convergence toward η = 1 for three different initial states above (top four plots) and below (bottom four plots) the critical point. [sent-175, score-0.285]

61 The horizontal axis denotes the number of ISIs of the same randomly chosen unit throughout the simulations. [sent-176, score-0.088]

62 The larger panels show the full trajectory until 10³ timesteps after convergence. [sent-179, score-0.153]

63 Simulations: in this section we show empirical results for the proposed plasticity rule. [sent-186, score-0.174]

64 We focus our analysis on the time τ_conv required for the system to converge toward the critical point. [sent-187, score-0.175]

65 For the experiments we use a network composed of N = 500 units with homogeneous L = 500 and p = 0. [sent-189, score-0.183]

66 Synapses are initialized homogeneously and random initial states are chosen for all units in each trial. [sent-191, score-0.124]

67 Every time a unit i fires, we update its afferent synapses ε_ij, for all j ≠ i, which breaks the homogeneity in the interaction strengths. [sent-192, score-0.274]

68 The network starts with a certain initial condition η0 and evolves according to its original discrete dynamics. [sent-193, score-0.113]

69 To measure the time τ_conv necessary to reach a value close to η = 1 for the first time, we select a neuron i randomly and compute η every time i fires. [sent-195, score-0.211]

70 In all cases, after an initial transient, the network settles close to η = 1, presenting some fluctuations. [sent-201, score-0.113]

71 We can see that for larger updates of the synapses (κ = 0. [sent-210, score-0.166]

72 We can therefore conclude that κ determines both the speed of convergence and the stability of the dynamics at the critical state: high values of κ cause fast convergence but render the network dynamics less stable there. [sent-214, score-0.51]

73 Given N, L, c and κ, we can approximate the global change in η after one entire ISI of a random unit, assuming that all neurons change their afferent synapses uniformly. [sent-216, score-0.32]

74 Figure 5: Number of ISIs (a) and timesteps (b) required to reach the critical state as a function of the initial configuration η0. [sent-225, score-0.403]

75 Then the number of ISIs and the number of timesteps can be obtained by: τ_conv = min({t : |η_t − 1| ≤ ν}), τ_conv^steps = Σ_{t=1}^{τ_conv} τ_app(η_t). [sent-232, score-0.088]
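
A sketch of this measurement from a recorded trace of η values (the tolerance ν and the conversion of ISIs to timesteps via τ_app follow the reconstruction above and should be treated as assumptions):

def convergence_times(eta_trace, tau_app, nu=0.01):
    # eta_trace[t]: eta computed at the t-th spike of the tracked unit;
    # tau_app(eta): approximate ISI length in timesteps for a given eta.
    tau_conv = next(t for t, eta in enumerate(eta_trace)
                    if abs(eta - 1.0) <= nu)  # raises if never within nu of 1
    tau_conv_steps = sum(tau_app(eta) for eta in eta_trace[:tau_conv + 1])
    return tau_conv, tau_conv_steps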

76 However, the opposite occurs if we consider timesteps as time units. [sent-236, score-0.088]

77 A hysteresis effect (described in [7]), present in the system if η0 < 1, makes the system more resistant to synaptic changes, which increases the number of updates (spikes) necessary to achieve the same effect as for η0 > 1. [sent-237, score-0.444]

78 Discussion: based on the amount of spontaneous evolution dissipated during an ISI, we have derived a local synaptic mechanism that causes a network of spiking neurons to self-organize near a critical state. [sent-239, score-1.494]

79 Our motivation differs from those of similar studies, for instance [8], where the average branching ratio σ of the network is used to characterize criticality. [sent-240, score-0.132]

80 Briefly, σ is defined as the average number of excitations created in the next time step by a spike of a given neuron. [sent-241, score-0.158]

81 If we initialize the units uniformly in [1, L], we have approximately one unit in every subinterval of length ηε; in consequence, the unit closest to the threshold spikes in 1/η of the cases when it receives a spike. [sent-243, score-0.381]

82 For η > 1, a spike of a neuron rarely induces another neuron to spike, so σ < 1. [sent-244, score-0.476]

83 Conversely, for η < 1, the spike of a single neuron triggers more than one neuron to spike (σ > 1). [sent-245, score-0.608]

84 Only for η = 1 does the spike of a neuron elicit on the order of one spike (σ = 1). [sent-246, score-0.462]
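
An empirical estimate of σ from recorded spikes might look as follows (a rough sketch: it attributes every spike at t + 1 to the spikes at t and ignores drift-induced threshold crossings):

import numpy as np

def branching_ratio(spikes, T):
    # spikes: list of (timestep, neuron) pairs; T: number of timesteps run.
    counts = np.zeros(T + 1)
    for t, _ in spikes:
        counts[t] += 1.0
    active = counts[:-1] > 0  # timesteps with at least one spike
    return float(np.mean(counts[1:][active] / counts[:-1][active]))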

85 Our study thus represents a realization of a local synaptic mechanism which induces global homeostasis towards an optimal branching factor. [sent-247, score-0.525]

86 This idea is also related to the SOC rule proposed in [3], where a mechanism is defined for threshold gates (binary units) in terms of bit flip probabilities instead of spiking neurons. [sent-248, score-0.268]

87 As in our model, criticality is achieved via synaptic scaling, where each neuron adjusts its synaptic input according to an effective threshold called margin. [sent-249, score-0.945]

88 When the network is operating in the critical regime, the dynamics can be seen as balancing between a predictable pattern of activity and the uncorrelated random behavior typically present in SOC. [sent-252, score-0.361]

89 Preliminary results indicate that, if the stochastic evolution is reset to zero (p = 0) at the critical state, inducing an artificial spike on a randomly selected unit causes neuronal avalanches whose sizes and lengths span several orders of magnitude and follow heavy-tailed distributions. [sent-254, score-0.788]

90 The spontaneous evolution can be interpreted for instance as activity from other brain areas not considered in the pool of the simulated units, or as stochastic sensory input. [sent-256, score-0.601]

91 Our results indicate that the amount of this stochastic activity that is absorbed by the system is maximized at an optimal state, which in a sense minimizes the possible effect of fluctuations due to noise on the behavior of the system. [sent-257, score-0.146]

92 The application of the synaptic rule for information processing is left for future research. [sent-258, score-0.386]

93 We anticipate, however, that external perturbations while the network is critical would cause transient activity. [sent-259, score-0.293]

94 During the transient, synapses could be modified according to some other form of learning to encode the proper values which drive the whole network to attain a characteristic synchronized pattern for the external stimuli presented. [sent-260, score-0.296]

95 We conjecture that the hysteresis effect shown in the regime of η < 1 may be suitable for such purposes, since the network is then able to keep the same pattern of activity until the critical state is reached again. [sent-261, score-0.447]

96 At the edge of chaos: Real-time computations and self-organized criticality in recurrent neural networks. [sent-279, score-0.103]

97 Synaptic modifications in cultured hippocampal neurons: Dependence on spike timing, synaptic strength, and postsynaptic cell type. [sent-287, score-0.476]

98 Random walk models for the spike activity of a single neuron. [sent-293, score-0.221]

99 Event modeling of message interchange in stochastic neural ensembles. [sent-300, score-0.109]

100 Phase transition and hysteresis in an ensemble of stochastic spiking neurons. [sent-306, score-0.196]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('ediss', 0.351), ('synaptic', 0.318), ('spontaneous', 0.294), ('etotal', 0.248), ('evolution', 0.218), ('isi', 0.215), ('eef', 0.207), ('plasticity', 0.174), ('conv', 0.17), ('dissipated', 0.165), ('spike', 0.158), ('ik', 0.154), ('neuron', 0.146), ('critical', 0.138), ('synapses', 0.12), ('isis', 0.116), ('ij', 0.11), ('li', 0.108), ('criticality', 0.103), ('units', 0.097), ('hl', 0.09), ('timesteps', 0.088), ('unit', 0.088), ('network', 0.086), ('lef', 0.083), ('neurons', 0.076), ('dynamics', 0.074), ('spiking', 0.072), ('rule', 0.068), ('mechanism', 0.068), ('app', 0.066), ('reach', 0.065), ('activity', 0.063), ('surplus', 0.062), ('soc', 0.062), ('periods', 0.06), ('threshold', 0.06), ('maximized', 0.057), ('avalanches', 0.054), ('hysteresis', 0.054), ('interchange', 0.054), ('state', 0.048), ('spikes', 0.048), ('ai', 0.048), ('phase', 0.048), ('branching', 0.046), ('updates', 0.046), ('stdp', 0.044), ('transitions', 0.044), ('transition', 0.044), ('messages', 0.043), ('transient', 0.042), ('barcelona', 0.041), ('cacies', 0.041), ('dimensioning', 0.041), ('diss', 0.041), ('homeostasis', 0.041), ('homeostatic', 0.041), ('kaltenbrunner', 0.041), ('kinouchi', 0.041), ('mez', 0.041), ('weakening', 0.041), ('timestep', 0.04), ('required', 0.037), ('afferent', 0.036), ('trajectory', 0.035), ('uctuations', 0.035), ('drive', 0.035), ('received', 0.034), ('strengthened', 0.033), ('macroscopic', 0.033), ('near', 0.033), ('simulations', 0.031), ('nijmegen', 0.031), ('chaos', 0.031), ('interaction', 0.03), ('panels', 0.03), ('coupling', 0.03), ('regime', 0.03), ('aj', 0.03), ('abrupt', 0.029), ('lines', 0.029), ('message', 0.029), ('reached', 0.028), ('synchronized', 0.028), ('synapse', 0.028), ('magnitude', 0.027), ('external', 0.027), ('reset', 0.027), ('thick', 0.027), ('initial', 0.027), ('induces', 0.026), ('causes', 0.026), ('neuronal', 0.026), ('realization', 0.026), ('range', 0.026), ('neuroscience', 0.026), ('stochastic', 0.026), ('actual', 0.025), ('analytically', 0.025)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.9999997 204 nips-2008-Self-organization using synaptic plasticity

Author: Vicenç Gómez, Andreas Kaltenbrunner, Vicente López, Hilbert J. Kappen

Abstract: Large networks of spiking neurons show abrupt changes in their collective dynamics resembling phase transitions studied in statistical physics. An example of this phenomenon is the transition from irregular, noise-driven dynamics to regular, self-sustained behavior observed in networks of integrate-and-fire neurons as the interaction strength between the neurons increases. In this work we show how a network of spiking neurons is able to self-organize towards a critical state for which the range of possible inter-spike-intervals (dynamic range) is maximized. Self-organization occurs via synaptic dynamics that we analytically derive. The resulting plasticity rule is defined locally so that global homeostasis near the critical state is achieved by local regulation of individual synapses. 1

2 0.17486472 209 nips-2008-Short-Term Depression in VLSI Stochastic Synapse

Author: Peng Xu, Timothy K. Horiuchi, Pamela A. Abshire

Abstract: We report a compact realization of short-term depression (STD) in a VLSI stochastic synapse. The behavior of the circuit is based on a subtractive single release model of STD. Experimental results agree well with simulation and exhibit expected STD behavior: the transmitted spike train has negative autocorrelation and lower power spectral density at low frequencies which can remove redundancy in the input spike train, and the mean transmission probability is inversely proportional to the input spike rate which has been suggested as an automatic gain control mechanism in neural systems. The dynamic stochastic synapse could potentially be a powerful addition to existing deterministic VLSI spiking neural systems. 1

3 0.14078894 230 nips-2008-Temporal Difference Based Actor Critic Learning - Convergence and Neural Implementation

Author: Dotan D. Castro, Dmitry Volkinshtein, Ron Meir

Abstract: Actor-critic algorithms for reinforcement learning are achieving renewed popularity due to their good convergence properties in situations where other approaches often fail (e.g., when function approximation is involved). Interestingly, there is growing evidence that actor-critic approaches based on phasic dopamine signals play a key role in biological learning through cortical and basal ganglia loops. We derive a temporal difference based actor critic learning algorithm, for which convergence can be proved without assuming widely separated time scales for the actor and the critic. The approach is demonstrated by applying it to networks of spiking neurons. The established relation between phasic dopamine and the temporal difference signal lends support to the biological relevance of such algorithms. 1

4 0.1342849 220 nips-2008-Spike Feature Extraction Using Informative Samples

Author: Zhi Yang, Qi Zhao, Wentai Liu

Abstract: This paper presents a spike feature extraction algorithm that targets real-time spike sorting and facilitates miniaturized microchip implementation. The proposed algorithm has been evaluated on synthesized waveforms and experimentally recorded sequences. When compared with many spike sorting approaches our algorithm demonstrates improved speed, accuracy and allows unsupervised execution. A preliminary hardware implementation has been realized using an integrated microchip interfaced with a personal computer. 1

5 0.13091184 43 nips-2008-Cell Assemblies in Large Sparse Inhibitory Networks of Biologically Realistic Spiking Neurons

Author: Adam Ponzi, Jeff Wickens

Abstract: Cell assemblies exhibiting episodes of recurrent coherent activity have been observed in several brain regions including the striatum[1] and hippocampus CA3[2]. Here we address the question of how coherent dynamically switching assemblies appear in large networks of biologically realistic spiking neurons interacting deterministically. We show by numerical simulations of large asymmetric inhibitory networks with fixed external excitatory drive that if the network has intermediate to sparse connectivity, the individual cells are in the vicinity of a bifurcation between a quiescent and firing state and the network inhibition varies slowly on the spiking timescale, then cells form assemblies whose members show strong positive correlation, while members of different assemblies show strong negative correlation. We show that cells and assemblies switch between firing and quiescent states with time durations consistent with a power-law. Our results are in good qualitative agreement with the experimental studies. The deterministic dynamical behaviour is related to winner-less competition[3], shown in small closed loop inhibitory networks with heteroclinic cycles connecting saddle-points. 1

6 0.11861795 58 nips-2008-Dependence of Orientation Tuning on Recurrent Excitation and Inhibition in a Network Model of V1

7 0.11357943 81 nips-2008-Extracting State Transition Dynamics from Multiple Spike Trains with Correlated Poisson HMM

8 0.11087389 59 nips-2008-Dependent Dirichlet Process Spike Sorting

9 0.10816635 38 nips-2008-Bio-inspired Real Time Sensory Map Realignment in a Robotic Barn Owl

10 0.10319708 137 nips-2008-Modeling Short-term Noise Dependence of Spike Counts in Macaque Prefrontal Cortex

11 0.086795315 96 nips-2008-Hebbian Learning of Bayes Optimal Decisions

12 0.085631169 166 nips-2008-On the asymptotic equivalence between differential Hebbian and temporal difference learning using a local third factor

13 0.082834899 160 nips-2008-On Computational Power and the Order-Chaos Phase Transition in Reservoir Computing

14 0.079242319 49 nips-2008-Clusters and Coarse Partitions in LP Relaxations

15 0.078623042 27 nips-2008-Artificial Olfactory Brain for Mixture Identification

16 0.073917285 240 nips-2008-Tracking Changing Stimuli in Continuous Attractor Neural Networks

17 0.068523318 90 nips-2008-Gaussian-process factor analysis for low-dimensional single-trial analysis of neural population activity

18 0.066048332 16 nips-2008-Adaptive Template Matching with Shift-Invariant Semi-NMF

19 0.064050257 118 nips-2008-Learning Transformational Invariants from Natural Movies

20 0.063447177 116 nips-2008-Learning Hybrid Models for Image Annotation with Partially Labeled Data


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.147), (1, 0.113), (2, 0.192), (3, 0.195), (4, -0.11), (5, -0.005), (6, 0.001), (7, -0.079), (8, -0.064), (9, -0.011), (10, -0.027), (11, 0.06), (12, -0.004), (13, -0.013), (14, 0.037), (15, -0.051), (16, 0.026), (17, -0.067), (18, 0.006), (19, -0.184), (20, -0.014), (21, -0.01), (22, -0.173), (23, 0.119), (24, 0.037), (25, 0.011), (26, -0.001), (27, -0.045), (28, 0.127), (29, 0.043), (30, 0.06), (31, 0.052), (32, -0.142), (33, 0.065), (34, -0.046), (35, -0.001), (36, 0.021), (37, 0.018), (38, 0.041), (39, -0.064), (40, 0.026), (41, -0.0), (42, -0.013), (43, -0.046), (44, 0.051), (45, -0.035), (46, 0.055), (47, -0.005), (48, -0.062), (49, 0.021)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.9722532 204 nips-2008-Self-organization using synaptic plasticity

Author: Vicenç Gómez, Andreas Kaltenbrunner, Vicente López, Hilbert J. Kappen

Abstract: Large networks of spiking neurons show abrupt changes in their collective dynamics resembling phase transitions studied in statistical physics. An example of this phenomenon is the transition from irregular, noise-driven dynamics to regular, self-sustained behavior observed in networks of integrate-and-fire neurons as the interaction strength between the neurons increases. In this work we show how a network of spiking neurons is able to self-organize towards a critical state for which the range of possible inter-spike-intervals (dynamic range) is maximized. Self-organization occurs via synaptic dynamics that we analytically derive. The resulting plasticity rule is defined locally so that global homeostasis near the critical state is achieved by local regulation of individual synapses. 1

2 0.73591423 209 nips-2008-Short-Term Depression in VLSI Stochastic Synapse

Author: Peng Xu, Timothy K. Horiuchi, Pamela A. Abshire

Abstract: We report a compact realization of short-term depression (STD) in a VLSI stochastic synapse. The behavior of the circuit is based on a subtractive single release model of STD. Experimental results agree well with simulation and exhibit expected STD behavior: the transmitted spike train has negative autocorrelation and lower power spectral density at low frequencies which can remove redundancy in the input spike train, and the mean transmission probability is inversely proportional to the input spike rate which has been suggested as an automatic gain control mechanism in neural systems. The dynamic stochastic synapse could potentially be a powerful addition to existing deterministic VLSI spiking neural systems. 1

3 0.68839473 43 nips-2008-Cell Assemblies in Large Sparse Inhibitory Networks of Biologically Realistic Spiking Neurons

Author: Adam Ponzi, Jeff Wickens

Abstract: Cell assemblies exhibiting episodes of recurrent coherent activity have been observed in several brain regions including the striatum[1] and hippocampus CA3[2]. Here we address the question of how coherent dynamically switching assemblies appear in large networks of biologically realistic spiking neurons interacting deterministically. We show by numerical simulations of large asymmetric inhibitory networks with fixed external excitatory drive that if the network has intermediate to sparse connectivity, the individual cells are in the vicinity of a bifurcation between a quiescent and firing state and the network inhibition varies slowly on the spiking timescale, then cells form assemblies whose members show strong positive correlation, while members of different assemblies show strong negative correlation. We show that cells and assemblies switch between firing and quiescent states with time durations consistent with a power-law. Our results are in good qualitative agreement with the experimental studies. The deterministic dynamical behaviour is related to winner-less competition[3], shown in small closed loop inhibitory networks with heteroclinic cycles connecting saddle-points. 1

4 0.68535089 58 nips-2008-Dependence of Orientation Tuning on Recurrent Excitation and Inhibition in a Network Model of V1

Author: Klaus Wimmer, Marcel Stimberg, Robert Martin, Lars Schwabe, Jorge Mariño, James Schummers, David C. Lyon, Mriganka Sur, Klaus Obermayer

Abstract: The computational role of the local recurrent network in primary visual cortex is still a matter of debate. To address this issue, we analyze intracellular recording data of cat V1, which combine measuring the tuning of a range of neuronal properties with a precise localization of the recording sites in the orientation preference map. For the analysis, we consider a network model of Hodgkin-Huxley type neurons arranged according to a biologically plausible two-dimensional topographic orientation preference map. We then systematically vary the strength of the recurrent excitation and inhibition relative to the strength of the afferent input. Each parametrization gives rise to a different model instance for which the tuning of model neurons at different locations of the orientation map is compared to the experimentally measured orientation tuning of membrane potential, spike output, excitatory, and inhibitory conductances. A quantitative analysis shows that the data provides strong evidence for a network model in which the afferent input is dominated by strong, balanced contributions of recurrent excitation and inhibition. This recurrent regime is close to a regime of “instability”, where strong, self-sustained activity of the network occurs. The firing rate of neurons in the best-fitting network is particularly sensitive to small modulations of model parameters, which could be one of the functional benefits of a network operating in this particular regime. 1

5 0.63144261 38 nips-2008-Bio-inspired Real Time Sensory Map Realignment in a Robotic Barn Owl

Author: Juan Huo, Zhijun Yang, Alan F. Murray

Abstract: The visual and auditory map alignment in the Superior Colliculus (SC) of barn owl is important for its accurate localization for prey behavior. Prism learning or Blindness may interfere this alignment and cause loss of the capability of accurate prey. However, juvenile barn owl could recover its sensory map alignment by shifting its auditory map. The adaptation of this map alignment is believed based on activity dependent axon developing in Inferior Colliculus (IC). A model is built to explore this mechanism. In this model, axon growing process is instructed by an inhibitory network in SC while the strength of the inhibition adjusted by Spike Timing Dependent Plasticity (STDP). We test and analyze this mechanism by application of the neural structures involved in spatial localization in a robotic system. 1

6 0.6145224 160 nips-2008-On Computational Power and the Order-Chaos Phase Transition in Reservoir Computing

7 0.56766701 230 nips-2008-Temporal Difference Based Actor Critic Learning - Convergence and Neural Implementation

8 0.53514403 27 nips-2008-Artificial Olfactory Brain for Mixture Identification

9 0.52262282 240 nips-2008-Tracking Changing Stimuli in Continuous Attractor Neural Networks

10 0.48722309 81 nips-2008-Extracting State Transition Dynamics from Multiple Spike Trains with Correlated Poisson HMM

11 0.48553169 59 nips-2008-Dependent Dirichlet Process Spike Sorting

12 0.47054878 220 nips-2008-Spike Feature Extraction Using Informative Samples

13 0.42678747 90 nips-2008-Gaussian-process factor analysis for low-dimensional single-trial analysis of neural population activity

14 0.41988191 158 nips-2008-Offline Handwriting Recognition with Multidimensional Recurrent Neural Networks

15 0.41910881 152 nips-2008-Non-stationary dynamic Bayesian networks

16 0.40631112 166 nips-2008-On the asymptotic equivalence between differential Hebbian and temporal difference learning using a local third factor

17 0.39135316 96 nips-2008-Hebbian Learning of Bayes Optimal Decisions

18 0.33944029 16 nips-2008-Adaptive Template Matching with Shift-Invariant Semi-NMF

19 0.31601125 137 nips-2008-Modeling Short-term Noise Dependence of Spike Counts in Macaque Prefrontal Cortex

20 0.3102901 8 nips-2008-A general framework for investigating how far the decoding process in the brain can be simplified


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(4, 0.028), (6, 0.069), (7, 0.063), (12, 0.025), (15, 0.014), (25, 0.374), (28, 0.137), (57, 0.053), (59, 0.015), (63, 0.032), (71, 0.03), (77, 0.059), (83, 0.018)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.78877395 204 nips-2008-Self-organization using synaptic plasticity

Author: Vicenç Gómez, Andreas Kaltenbrunner, Vicente López, Hilbert J. Kappen

Abstract: Large networks of spiking neurons show abrupt changes in their collective dynamics resembling phase transitions studied in statistical physics. An example of this phenomenon is the transition from irregular, noise-driven dynamics to regular, self-sustained behavior observed in networks of integrate-and-fire neurons as the interaction strength between the neurons increases. In this work we show how a network of spiking neurons is able to self-organize towards a critical state for which the range of possible inter-spike-intervals (dynamic range) is maximized. Self-organization occurs via synaptic dynamics that we analytically derive. The resulting plasticity rule is defined locally so that global homeostasis near the critical state is achieved by local regulation of individual synapses. 1

2 0.62353969 238 nips-2008-Theory of matching pursuit

Author: Zakria Hussain, John S. Shawe-taylor

Abstract: We analyse matching pursuit for kernel principal components analysis (KPCA) by proving that the sparse subspace it produces is a sample compression scheme. We show that this bound is tighter than the KPCA bound of Shawe-Taylor et al [7] and highly predictive of the size of the subspace needed to capture most of the variance in the data. We analyse a second matching pursuit algorithm called kernel matching pursuit (KMP) which does not correspond to a sample compression scheme. However, we give a novel bound that views the choice of subspace of the KMP algorithm as a compression scheme and hence provide a VC bound to upper bound its future loss. Finally we describe how the same bound can be applied to other matching pursuit related algorithms. 1

3 0.59987038 24 nips-2008-An improved estimator of Variance Explained in the presence of noise

Author: Ralf M. Haefner, Bruce G. Cumming

Abstract: A crucial part of developing mathematical models of information processing in the brain is the quantification of their success. One of the most widely-used metrics yields the percentage of the variance in the data that is explained by the model. Unfortunately, this metric is biased due to the intrinsic variability in the data. We derive a simple analytical modification of the traditional formula that significantly improves its accuracy (as measured by bias) with similar or better precision (as measured by mean-square error) in estimating the true underlying Variance Explained by the model class. Our estimator advances on previous work by a) accounting for overfitting due to free model parameters mitigating the need for a separate validation data set, b) adjusting for the uncertainty in the noise estimate and c) adding a conditioning term. We apply our new estimator to binocular disparity tuning curves of a set of macaque V1 neurons and find that on a population level almost all of the variance unexplained by Gabor functions is attributable to noise. 1

4 0.44107056 230 nips-2008-Temporal Difference Based Actor Critic Learning - Convergence and Neural Implementation

Author: Dotan D. Castro, Dmitry Volkinshtein, Ron Meir

Abstract: Actor-critic algorithms for reinforcement learning are achieving renewed popularity due to their good convergence properties in situations where other approaches often fail (e.g., when function approximation is involved). Interestingly, there is growing evidence that actor-critic approaches based on phasic dopamine signals play a key role in biological learning through cortical and basal ganglia loops. We derive a temporal difference based actor critic learning algorithm, for which convergence can be proved without assuming widely separated time scales for the actor and the critic. The approach is demonstrated by applying it to networks of spiking neurons. The established relation between phasic dopamine and the temporal difference signal lends support to the biological relevance of such algorithms. 1

5 0.43846133 62 nips-2008-Differentiable Sparse Coding

Author: J. A. Bagnell, David M. Bradley

Abstract: Prior work has shown that features which appear to be biologically plausible as well as empirically useful can be found by sparse coding with a prior such as a laplacian (L1 ) that promotes sparsity. We show how smoother priors can preserve the benefits of these sparse priors while adding stability to the Maximum A-Posteriori (MAP) estimate that makes it more useful for prediction problems. Additionally, we show how to calculate the derivative of the MAP estimate efficiently with implicit differentiation. One prior that can be differentiated this way is KL-regularization. We demonstrate its effectiveness on a wide variety of applications, and find that online optimization of the parameters of the KL-regularized model can significantly improve prediction performance. 1

6 0.43744487 37 nips-2008-Biasing Approximate Dynamic Programming with a Lower Discount Factor

7 0.43672237 96 nips-2008-Hebbian Learning of Bayes Optimal Decisions

8 0.43535528 216 nips-2008-Sparse probabilistic projections

9 0.43464383 195 nips-2008-Regularized Policy Iteration

10 0.43207926 202 nips-2008-Robust Regression and Lasso

11 0.43190509 162 nips-2008-On the Design of Loss Functions for Classification: theory, robustness to outliers, and SavageBoost

12 0.43156347 135 nips-2008-Model Selection in Gaussian Graphical Models: High-Dimensional Consistency of \boldmath$\ell 1$-regularized MLE

13 0.43109602 118 nips-2008-Learning Transformational Invariants from Natural Movies

14 0.43108466 75 nips-2008-Estimating vector fields using sparse basis field expansions

15 0.43094227 79 nips-2008-Exploring Large Feature Spaces with Hierarchical Multiple Kernel Learning

16 0.43035352 85 nips-2008-Fast Rates for Regularized Objectives

17 0.43007803 87 nips-2008-Fitted Q-iteration by Advantage Weighted Regression

18 0.42951152 150 nips-2008-Near-optimal Regret Bounds for Reinforcement Learning

19 0.42950109 49 nips-2008-Clusters and Coarse Partitions in LP Relaxations

20 0.42932189 175 nips-2008-PSDBoost: Matrix-Generation Linear Programming for Positive Semidefinite Matrices Learning