nips nips2008 nips2008-240 knowledge-graph by maker-knowledge-mining

240 nips-2008-Tracking Changing Stimuli in Continuous Attractor Neural Networks


Source: pdf

Author: K. Wong, Si Wu, Chi Fung

Abstract: Continuous attractor neural networks (CANNs) are emerging as promising models for describing the encoding of continuous stimuli in neural systems. Due to the translational invariance of their neuronal interactions, CANNs can hold a continuous family of neutrally stable states. In this study, we systematically explore how neutral stability of a CANN facilitates its tracking performance, a capacity believed to have wide applications in brain functions. We develop a perturbative approach that utilizes the dominant movement of the network stationary states in the state space. We quantify the distortions of the bump shape during tracking, and study their effects on the tracking performance. Results are obtained on the maximum speed for a moving stimulus to be trackable, and the reaction time to catch up with an abrupt change in stimulus.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Abstract: Continuous attractor neural networks (CANNs) are emerging as promising models for describing the encoding of continuous stimuli in neural systems. [sent-10, score-0.256]

2 Due to the translational invariance of their neuronal interactions, CANNs can hold a continuous family of neutrally stable states. [sent-11, score-0.398]

3 In this study, we systematically explore how neutral stability of a CANN facilitates its tracking performance, a capacity believed to have wide applications in brain functions. [sent-12, score-0.466]

4 We develop a perturbative approach that utilizes the dominant movement of the network stationary states in the state space. [sent-13, score-0.472]

5 We quantify the distortions of the bump shape during tracking, and study their effects on the tracking performance. [sent-14, score-0.844]

6 Results are obtained on the maximum speed for a moving stimulus to be trackable, and the reaction time to catch up with an abrupt change in stimulus. [sent-15, score-0.472]

7 1 Introduction: Understanding how the dynamics of a neural network is shaped by the network structure, and consequently facilitates the functions implemented by the neural system, is at the core of using mathematical models to elucidate brain functions [1]. [sent-16, score-0.604]

8 Recently, a class of attractor networks, called continuous attractor neural networks (CANNs), has received considerable attention (see, e. [sent-18, score-0.21]

9 These networks possess a translational invariance of the neuronal interactions. [sent-21, score-0.232]

10 Thus, in the continuum limit, they form a continuous manifold in which the system is neutrally stable, and the network state can translate easily when the external stimulus changes continuously. [sent-23, score-0.717]

11 Beyond pure memory retrieval, this large-scale structure of the state space endows the neural system with a tracking capability. [sent-24, score-0.341]

12 The tracking dynamics of a CANN has been investigated by several authors in the literature (see, e. [sent-26, score-0.392]

13 These studies have shown that a CANN has the capacity of tracking a moving stimulus continuously and that this tracking property can account for many brain functions. [sent-29, score-0.82]

14 Despite these successes, a detailed analysis of the tracking behaviors of a CANN is still lacking. [sent-30, score-0.306]

15 These include, for instance, 1) the conditions under which a CANN can successfully track a moving stimulus, 2) the distortion of the shape of the network state during the tracking, and 3) the effects of these distortions on the tracking speed. [sent-31, score-0.769]

16 We display clearly how the dynamics of a CANN is decomposed into different distortion modes, corresponding to, respectively, changes in the height, position, width and skewness of the network state. [sent-35, score-0.565]

17 We then demonstrate which of them dominates the tracking behaviors of the network. [sent-36, score-0.306]

18 In order to solve the dynamics, which is otherwise extremely complicated for a large recurrent network, we develop a time-dependent perturbation method to approximate the tracking performance of the network. [sent-37, score-0.629]

19 The solution is expressed in a simple closed form, and we can approximate the network dynamics up to an arbitrary accuracy depending on the order of perturbation used. [sent-38, score-0.438]

20 Our work generates new predictions on the tracking behaviors of CANNs, namely, the maximum tracking speed for moving stimuli and the reaction time to sudden changes in external stimuli, both of which are testable by experiments. [sent-40, score-0.964]

21 2 The Intrinsic Dynamics of CANNs: We consider a one-dimensional continuous stimulus being encoded by an ensemble of neurons. [sent-41, score-0.256]

22 The stimulus may represent, for example, the moving direction, the orientation, or a general continuous feature of an external object. [sent-42, score-0.46]

23 Let U(x, t) be the synaptic input at time t to the neurons with real-valued preferred stimulus x. [sent-43, score-0.296]

24 The dynamics of the synaptic input U (x, t) is determined by the external input Iext (x, t), the network input from other neurons, and its own relaxation. [sent-47, score-0.466]

25 We first consider the intrinsic dynamics of the CANN model in the absence of external stimuli. [sent-54, score-0.291]

26 For 0 < k < kc ≡ ρJ^2/(8√(2π)a), the network holds a continuous family of stationary states, which are Ũ(x|z) = U0 exp[−(x − z)^2/(4a^2)] (4), where U0 = [1 + (1 − k/kc)^(1/2)] J/(4√π ak). [sent-55, score-0.311]
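The model equations, Eqs. (1)-(2), did not survive the sentence extraction, so the sketch below is a minimal reconstruction assuming the standard CANN formulation that the later discussion points to (Gaussian-shaped recurrent interactions with divisive normalization): τ ∂U(x,t)/∂t = −U + ρ ∫dx′ J(x − x′) r(x′,t) + Iext, with r(x) = U(x)^2 / (1 + kρ ∫dx′ U(x′)^2). This choice is consistent with the kc and U0 quoted in Eq. (4); all numerical parameters below are illustrative placeholders, not values from the paper.

```python
import numpy as np

# Illustrative grid and parameters (assumptions, not the paper's values).
N, L = 512, 4.0 * np.pi
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
a, tau, J = 0.5, 1.0, 1.0
rho = N / L                                            # neuron density on the grid
kc = rho * J**2 / (8.0 * np.sqrt(2.0 * np.pi) * a)     # critical inhibition, as in the text
k = 0.5 * kc                                           # any 0 < k < kc admits the bump family

# Translation-invariant Gaussian coupling J(x - x').
W = J / (np.sqrt(2.0 * np.pi) * a) * np.exp(-(x[:, None] - x[None, :])**2 / (2.0 * a**2))

def rate(U):
    """Divisive normalization: r(x) = U(x)^2 / (1 + k * rho * integral of U^2)."""
    return U**2 / (1.0 + k * rho * np.sum(U**2) * dx)

# Stationary-state amplitude and profile from Eq. (4).
U0 = (1.0 + np.sqrt(1.0 - k / kc)) * J / (4.0 * np.sqrt(np.pi) * a * k)
U_theory = U0 * np.exp(-x**2 / (4.0 * a**2))

# Relax the network with Iext = 0 from a weakened bump; it should settle on Eq. (4).
U = 0.5 * U_theory
dt = 0.05 * tau
for _ in range(4000):
    U += dt / tau * (-U + rho * (W @ rate(U)) * dx)

print("relative error vs. Eq. (4):", np.abs(U - U_theory).max() / U0)
```

In this formulation the bump coexists with the quiescent state, so the seed amplitude must be large enough for the relaxation to land on the bump of Eq. (4) rather than decay to zero; half the theoretical amplitude suffices for the values above.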

27 These stationary states are translationally invariant among themselves and have a Gaussian bump shape peaked at arbitrary positions z. [sent-56, score-0.258]

28 The stability of the Gaussian bumps can be studied by considering the dynamics of fluctuations. [sent-57, score-0.251]

29 Figure 1: The first four basis functions of the quantum harmonic oscillators, which represent four distortion modes of the network dynamics, namely, changes in the height, position, width and skewness of a bump state. [sent-69, score-1.073]

30 1 The motion modes: To compute the eigenfunctions and eigenvalues of the kernel F(x, x′), we choose the wave functions of the quantum harmonic oscillators as the basis, namely, vn(x|z) = exp(−ξ^2/2) Hn(ξ) / √((2π)^(1/2) a n! 2^n), with ξ ≡ (x − z)/(√2 a). [sent-72, score-0.454]
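As a concrete sketch of this basis: the snippet below constructs the first four Hermite wave functions with ξ = (x − z)/(√2 a), so that the ground state matches the Gaussian bump of Eq. (4). The extracted normalization is cut off mid-formula, so the 2^n factor included here is the standard Hermite-function convention and an assumption. The snippet also verifies numerically that v1 ∝ ∂v0/∂z, the position-shift relation quoted further below.

```python
import math
import numpy as np
from numpy.polynomial.hermite import hermval

def v_basis(n, x, z, a):
    """n-th oscillator wave function v_n(x|z), with xi = (x - z) / (sqrt(2) * a)."""
    xi = (x - z) / (np.sqrt(2.0) * a)
    Hn = hermval(xi, [0.0] * n + [1.0])        # physicists' Hermite polynomial H_n(xi)
    norm = np.sqrt(np.sqrt(2.0 * np.pi) * a * math.factorial(n) * 2.0**n)
    return np.exp(-xi**2 / 2.0) * Hn / norm

a, z = 0.5, 0.0
x = np.linspace(-6.0, 6.0, 2001)
dx = x[1] - x[0]
V = np.stack([v_basis(n, x, z, a) for n in range(4)])   # height, position, width, skew

# Orthonormality check: the Gram matrix should be close to the identity.
print(np.round(V @ V.T * dx, 3))

# v_1 is proportional to dv_0/dz (= -dv_0/dx), the neutrally stable shift mode.
dv0_dz = -np.gradient(v_basis(0, x, z, a), dx)
print("normalized overlap of v_1 with dv_0/dz:",
      np.sum(V[1] * dv0_dz) * dx / np.sqrt(np.sum(dv0_dz**2) * dx))
```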

31 Indeed, the ground state of the quantum harmonic oscillator corresponds to the Gaussian bump, and the first, second, and third excited states correspond to fluctuations in the peak position, width, and skewness of the bump respectively (see Fig. [sent-74, score-0.717]

32 The eigenfunctions of F correspond to the various distortion modes of the bump. [sent-79, score-0.306]

33 Since λ1 = 1 and all other eigenvalues are less than 1, the stationary state is neutrally stable in one component, and stable in all other components. [sent-80, score-0.366]

34 (1) The eigenfunction for the eigenvalue λ0 is u0 (x|z), and represents a distortion of the amplitude of the bump. [sent-82, score-0.241]

35 As we shall see, amplitude changes of the bump affect its tracking performance. [sent-83, score-0.782]

36 (2) Central to the tracking capability of CANNs, the eigenfunction for the eigenvalue 1 is u1 (x|z) and is neutrally stable. [sent-84, score-0.401]

37 We note that u1 (x|z) ∝ ∂v0 (x|z)/∂z, corresponding to the shift of the bump position among the stationary states. [sent-85, score-0.728]

38 This neutral stability is the consequence of the translational invariance of the network. [sent-86, score-0.273]

39 It implies that when there are external inputs, however small, the bump will move continuously. [sent-87, score-0.617]

40 Other eigenfunctions correspond to distortions of the shape of the bump, for example, the eigenfunction u3 (x|z) corresponds to a skewed distortion of the bump. [sent-89, score-0.434]

41 Figure 2: The canyon formed by the stationary states of a CANN, projected onto the subspace formed by b1, the position shift, and b0, the height distortion. [sent-96, score-0.41]

42 Motion along the canyon corresponds to the displacement of the bump (inset). [sent-97, score-0.556]

43 A small force along the tangent of the canyon can move the network state easily. [sent-103, score-0.281]

44 This illustrates how the landscape of the state space of a CANN is shaped by the network structure, leading to the neutral stability of the system, and how this neutral stability shapes the network dynamics. [sent-104, score-0.689]

45 3 The Tracking Behaviors: We now consider the network dynamics in the presence of a weak external stimulus. [sent-105, score-0.433]

46 Since the dynamics is primarily dominated by the translational motion of the bump, with secondary distortions in shape, we may develop a time-dependent perturbation analysis using {vn (x|z(t))} as the basis, and consider perturbations in increasing orders of n. [sent-107, score-0.546]

47 U(x, t) = Σ_{n≥0} an(t) vn(x|z(t)) (8). Furthermore, since the Gaussian bump is the steady-state solution of the dynamical equation in the absence of external stimuli, the neuronal interaction term in Eq. [sent-109, score-0.764]

48 (2) yields expressions for dan/dt at each order n of the perturbation, which are (d/dt + (1 − λn)/τ) an = In − (U0/τ)(2π)^(1/2) a δ_{n1} + (√n a_{n−1} − √(n+1) a_{n+1}) (1/(2a)) dz/dt + (1/τ) Σ_{r≥1} (…), with the higher-order summands involving (n + 2r)!. [sent-112, score-0.209]

49 We can approximate the network dynamics up to an arbitrary accuracy depending on the choice of the order of perturbation. [sent-127, score-0.291]

50 1 Tracking a moving stimulus: Consider the external stimulus consisting of a Gaussian bump, namely, Iext(x, t) = αU0 exp[−(x − z0)^2/(4a^2)]. [sent-130, score-0.636]

51 (b) The dependence of the terminal separation s on the stimulus speed v. [sent-144, score-0.25]

52 αU0 (2π)^(1/2) a exp[−(z0 − z)^2/(8a^2)]/τ, and dz/dt = (α/τ)(z0 − z) exp[−(z0 − z)^2/(8a^2)] R(t)^(−1) (11), where R(t) = 1 + α ∫_{−∞}^{t} (dt′/τ) exp[−(1 − λ0)(t − t′)/τ − (z0 − z(t′))^2/(8a^2)], representing the ratio of the bump height relative to that in the absence of the external stimulus (α = 0). [sent-151, score-1.063]

53 Hence, the dynamics is driven by a pull of the bump position towards the stimulus position z0 . [sent-152, score-1.032]

54 The factor R(t) > 1 implies that the increase in amplitude of the bump slows down its response. [sent-153, score-0.508]

55 The tracking performance of a CANN is a key property that is believed to have wide applications in neural systems. [sent-154, score-0.317]

56 Suppose the stimulus is moving at a constant velocity v. [sent-155, score-0.278]

57 Denoting the lag of the bump behind the stimulus by s = z0 − z, we have, after the transients, ds/dt = v − g(s), where g(s) ≡ (α s e^(−s^2/8a^2)/τ) [1 + α e^(−s^2/8a^2)/(1 − λ0)]^(−1) (12). [sent-158, score-0.763]

58 The value of s is determined by two competing factors: the first term represents the movement of the stimulus, which tends to enlarge the separation, and the second term represents the collective effects of the neuronal recurrent interactions, which tend to reduce the lag. [sent-159, score-0.234]

59 This means that if v > gmax , the network is unable to track the stimulus. [sent-164, score-0.239]

60 Thus, gmax defines the maximum trackable speed of a moving stimulus. [sent-165, score-0.274]
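As a minimal numerical sketch of this claim using Eq. (12): scanning g(s) over the lag yields gmax and the lag at which it is attained, and for any stimulus speed v < gmax the terminal separation is the stable root of v = g(s) on the rising branch of g. The values of α, a, τ and λ0 below are placeholders, not taken from the paper.

```python
import numpy as np

alpha, a, tau, lam0 = 0.2, 0.5, 1.0, 0.8     # illustrative parameter values

def g(s):
    """Restoring speed of the bump toward the stimulus, Eq. (12)."""
    e = np.exp(-s**2 / (8.0 * a**2))
    return (alpha * s * e / tau) / (1.0 + alpha * e / (1.0 - lam0))

s_grid = np.linspace(0.0, 10.0 * a, 100001)
g_vals = g(s_grid)
i_max = int(np.argmax(g_vals))
g_max, s_at_max = g_vals[i_max], s_grid[i_max]
print(f"maximum trackable speed g_max = {g_max:.4f}, attained at lag s = {s_at_max:.3f}")

# Terminal lag for a trackable speed v < g_max: the stable fixed point of
# ds/dt = v - g(s) lies on the rising branch (g'(s) > 0, i.e. s < s_at_max).
v = 0.5 * g_max
rising = s_grid[: i_max + 1]
s_terminal = rising[np.argmin(np.abs(g(rising) - v))]
print(f"for v = {v:.4f}, the terminal separation is s = {s_terminal:.3f}")
```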

61 Notably, gmax increases with the strength of the external signal and the range of neuronal recurrent interactions. [sent-166, score-0.413]

62 This is reasonable since it is the neuronal interactions that induce the movement of the bump. [sent-167, score-0.223]

63 gmax decreases with the time constant of the network, as this reflects the responsiveness of the network to external inputs. [sent-168, score-0.381]

64 Otherwise, the tracking of the stimulus will be lost. [sent-172, score-0.459]

65 2 Tracking an abrupt change of the stimulus: Suppose the network has reached a steady state with an external stimulus stationary at t < 0, and the stimulus position jumps from 0 to z0 suddenly at t = 0. [sent-177, score-1.229]

66 We are interested in estimating the reaction time T, which is the time taken by the bump to move to within a small distance θ of the stimulus position. [sent-182, score-0.808]
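As a sketch of this estimate using Eq. (11): to lowest order in a weak stimulus one may set R(t) ≈ 1, integrate dz/dt from z = 0, and record the time at which the bump comes within θ of z0. For small jumps this reproduces a roughly logarithmic reaction time, T ≈ (τ/α) ln(z0/θ), while for jumps beyond the interaction range the Gaussian factor slows the initial pull and T grows much faster, matching the behavior described below. All parameter values are illustrative.

```python
import numpy as np

alpha, a, tau, theta = 0.2, 0.5, 1.0, 0.05   # illustrative parameter values

def reaction_time(z0, dt=0.01, t_max=1000.0):
    """Integrate dz/dt of Eq. (11) with R(t) ~ 1 (lowest order, weak stimulus)."""
    z, t = 0.0, 0.0
    while abs(z0 - z) > theta and t < t_max:
        z += dt * (alpha / tau) * (z0 - z) * np.exp(-(z0 - z)**2 / (8.0 * a**2))
        t += dt
    return t

for z0 in (0.2, 0.5, 1.0, 2.0 * a, 4.0 * a):
    log_est = tau / alpha * np.log(z0 / theta)  # small-jump logarithmic estimate
    print(f"jump z0 = {z0:.2f}:  T = {reaction_time(z0):7.2f}   (log estimate {log_est:7.2f})")
```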

67 (Figure 4 legend: Simulation; "n=1" to "n=5" perturbation. Panel (b) vertical axis: U(x).) [sent-185, score-0.735]

68 Figure 4: (a) The dependence of the reaction time T on the new stimulus position z0. [sent-189, score-0.429]

69 (b) Profiles of the bump between the old and new positions at z0 = π/2 in the simulation. [sent-192, score-0.475]

70 When the strength α of the external stimulus is larger, an improved estimate using the perturbation analysis up to n = 1 is required when the jump size z0 is large. [sent-193, score-0.551]

71 This amounts to taking into account the change of the bump height during its movement from the old to the new position. [sent-194, score-0.626]

72 Figure 4(a) shows that the n = 1 perturbation overcomes the insufficiency of the logarithmic estimate, and is in excellent agreement with simulation results for z0 up to the order of 2a. [sent-201, score-0.248]

73 This implies that beyond the range of neuronal interaction, tracking is influenced by the distortion of the width and the skewed shape of the bump. [sent-203, score-0.586]

74 Consider a neural ensemble encoding a 2D continuous stimulus x = (x1 , x2 ), and the network dynamics satisfies Eqs. [sent-205, score-0.614]

75 2, we obtain the distortion modes of the bump dynamics, which are expressed as the product of the motion modes in the 1D case, i. [sent-209, score-0.884]

76 (15) The eigenvalues for these motion modes are calculated to be λ0,0 = λ0, λm,0 = λm for m ≠ 0, λ0,n = λn for n ≠ 0, and λm,n = λm λn for m ≠ 0 and n ≠ 0. [sent-214, score-0.215]

77 The mode u1,0 (x|z) corresponds to the position shift of the bump in the direction x1 and u0,1 (x|z) the position shift in the direction x2 . [sent-215, score-0.807]

78 A linear combination of them, c1 u1,0 (x|z) + c2 u0,1 (x|z), corresponds to the position shift of the bump in the direction (c1 , c2 ). [sent-216, score-0.628]

79 We see that the eigenvalues for these motion modes are 1, implying that the network is neutrally stable in the 2D manifold. [sent-217, score-0.483]

80 The eigenvalues for all other motion modes are less than 1. [sent-218, score-0.215]

81 Figure 5 illustrates the tracking of a 2D stimulus, and the comparison of simulation results on the reaction time with the perturbative approach. [sent-219, score-0.468]

82 The n = 1 perturbation already shows excellent agreement over a wide range of stimulus positions. [sent-220, score-0.426]

83 Figure 5: (a) The tracking process of the network; (b) The reaction time vs. [sent-229, score-0.5]

84 5 Conclusions and Discussions: To conclude, we have systematically investigated how the neutral stability of a CANN facilitates the tracking performance of the network, a capability which is believed to have wide applications in brain functions. [sent-236, score-0.466]

85 Two interesting behaviors are observed, namely, the maximum trackable speed for a moving stimulus, and the reaction time for catching up with an abrupt change of a stimulus, which is logarithmic for small changes and increases rapidly beyond the neuronal range. [sent-237, score-0.731]

86 In order to solve the dynamics, which is otherwise extremely complicated for a large recurrent network, we have developed a perturbative analysis to simplify the dynamics of a CANN. [sent-240, score-0.458]

87 Geometrically, it is equivalent to projecting the network state onto the dominant directions of the state space. [sent-241, score-0.258]

88 The tracking dynamics of a CANN has also been studied by other authors. [sent-248, score-0.392]

89 In particular, Zhang proposed a mechanism of using asymmetrical recurrent interactions to drive the bump, so that the shape distortion is minimized [4]. [sent-249, score-0.416]

90 further proposed a double ring network model to achieve these asymmetrical interactions in the head-direction system [8]. [sent-251, score-0.256]

91 For instance, in the visual and hippocampal systems, it is often assumed that the bump movement is directly driven by external inputs (see, e. [sent-253, score-0.677]

92 , [5, 19, 20]), and the distortion of the bump is inevitable (indeed the bump distortions in [19, 20] are associated with visual perception). [sent-255, score-1.17]

93 The contribution of this study is that we quantify how the distortion of the bump shape affects the network tracking performance, and we obtain a new finding on the maximum trackable speed of the network. [sent-256, score-1.157]

94 For the latter, it is often difficult to get a closed-form expression for the network stationary state. [sent-260, score-0.242]

95 Amari used a Heaviside function to simplify the neural response, and obtained the box-shaped network stationary state [2]. [sent-261, score-0.34]

96 However, since the Heaviside function is not differentiable, it is difficult to describe the tracking dynamics in the Amari model. [sent-262, score-0.392]

97 Here, by using divisive normalization and the Gaussian-shaped recurrent interactions, we solve the network stationary states and the tracking dynamics analytically. [sent-264, score-0.82]

98 Those inhibitory neurons have a time constant much shorter than that of excitatory neurons, and they inhibit the activities of excitatory neurons in a uniform shunting way, thus achieving the effect of divisive normalization. [sent-268, score-0.252]
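A small sketch of this reduction, under the stated separation of time scales: freeze U, let a single uniform shunting variable V relax with a fast time constant to its fixed point V* = kρ ∫U^2, and the resulting r = U^2/(1 + V) coincides with the divisive-normalization form used above. All symbols and values here are illustrative, not from the paper.

```python
import numpy as np

# Illustrative parameters; tau_I is taken much shorter than the excitatory tau = 1.
a, k, rho, tau_I = 0.5, 0.1, 1.0, 0.05
x = np.linspace(-6.0, 6.0, 1001)
dx = x[1] - x[0]
U = np.exp(-x**2 / (4.0 * a**2))          # a frozen bump of synaptic input

drive = k * rho * np.sum(U**2) * dx       # total drive to the uniform inhibitory pool
V, dt = 0.0, 0.001
for _ in range(2000):                     # 2 time units, i.e. 40 inhibitory time constants
    V += dt / tau_I * (-V + drive)

r_shunting = U**2 / (1.0 + V)             # shunting (divisive) inhibition by the pool
r_divisive = U**2 / (1.0 + drive)         # the algebraic divisive-normalization form
print("max deviation:", np.abs(r_shunting - r_divisive).max())
```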

99 This is because our calculation is based on the fact that the dynamics of a CANN is dominated by the motion mode of position shift of the network state, and this property is due to the translational invariance of the neuronal recurrent interactions, rather than the inhibition mechanism. [sent-270, score-0.91]

100 We have formally proved that for a CANN model, once the recurrent interactions are translationally invariant, the interaction kernel has a unit eigenvalue with respect to the position shift mode irrespective of the inhibition mechanism (to be reported elsewhere). [sent-271, score-0.502]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('bump', 0.475), ('cann', 0.443), ('tracking', 0.243), ('stimulus', 0.216), ('canns', 0.161), ('dynamics', 0.149), ('perturbation', 0.147), ('network', 0.142), ('external', 0.142), ('distortion', 0.138), ('reaction', 0.117), ('stationary', 0.1), ('modes', 0.1), ('gmax', 0.097), ('translational', 0.097), ('position', 0.096), ('height', 0.091), ('recurrent', 0.09), ('neutrally', 0.088), ('neuronal', 0.084), ('distortions', 0.082), ('canyon', 0.081), ('iext', 0.081), ('trackable', 0.081), ('interactions', 0.079), ('dt', 0.072), ('motion', 0.071), ('landscape', 0.07), ('eigenfunction', 0.07), ('perturbative', 0.07), ('eigenfunctions', 0.068), ('attractor', 0.065), ('behaviors', 0.063), ('neutral', 0.063), ('stability', 0.062), ('moving', 0.062), ('skewness', 0.06), ('movement', 0.06), ('state', 0.058), ('amari', 0.057), ('shift', 0.057), ('divisive', 0.054), ('vn', 0.054), ('quantum', 0.053), ('invariance', 0.051), ('bn', 0.048), ('neurons', 0.047), ('inhibition', 0.047), ('jump', 0.046), ('width', 0.045), ('eigenvalues', 0.044), ('stimuli', 0.044), ('shape', 0.044), ('abrupt', 0.043), ('states', 0.042), ('namely', 0.041), ('kong', 0.041), ('bumps', 0.04), ('translationally', 0.04), ('continuous', 0.04), ('neural', 0.04), ('hong', 0.039), ('excitatory', 0.039), ('simulation', 0.038), ('nth', 0.038), ('dz', 0.038), ('stable', 0.038), ('interaction', 0.037), ('xie', 0.035), ('asymmetrical', 0.035), ('hkust', 0.035), ('oscillators', 0.035), ('believed', 0.034), ('facilitates', 0.034), ('speed', 0.034), ('amplitude', 0.033), ('synaptic', 0.033), ('dx', 0.033), ('agreement', 0.032), ('skewed', 0.032), ('hop', 0.032), ('peaked', 0.032), ('solvable', 0.032), ('excellent', 0.031), ('changes', 0.031), ('heaviside', 0.03), ('brain', 0.03), ('mechanism', 0.03), ('harmonic', 0.029), ('exp', 0.029), ('testable', 0.029), ('shanghai', 0.029), ('dan', 0.027), ('shaped', 0.027), ('encoding', 0.027), ('dynamical', 0.026), ('mode', 0.026), ('analytical', 0.026), ('justify', 0.026), ('inhibitory', 0.026)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000001 240 nips-2008-Tracking Changing Stimuli in Continuous Attractor Neural Networks

Author: K. Wong, Si Wu, Chi Fung

Abstract: Continuous attractor neural networks (CANNs) are emerging as promising models for describing the encoding of continuous stimuli in neural systems. Due to the translational invariance of their neuronal interactions, CANNs can hold a continuous family of neutrally stable states. In this study, we systematically explore how neutral stability of a CANN facilitates its tracking performance, a capacity believed to have wide applications in brain functions. We develop a perturbative approach that utilizes the dominant movement of the network stationary states in the state space. We quantify the distortions of the bump shape during tracking, and study their effects on the tracking performance. Results are obtained on the maximum speed for a moving stimulus to be trackable, and the reaction time to catch up with an abrupt change in stimulus.

2 0.10289536 58 nips-2008-Dependence of Orientation Tuning on Recurrent Excitation and Inhibition in a Network Model of V1

Author: Klaus Wimmer, Marcel Stimberg, Robert Martin, Lars Schwabe, Jorge Mariño, James Schummers, David C. Lyon, Mriganka Sur, Klaus Obermayer

Abstract: The computational role of the local recurrent network in primary visual cortex is still a matter of debate. To address this issue, we analyze intracellular recording data of cat V1, which combine measuring the tuning of a range of neuronal properties with a precise localization of the recording sites in the orientation preference map. For the analysis, we consider a network model of Hodgkin-Huxley type neurons arranged according to a biologically plausible two-dimensional topographic orientation preference map. We then systematically vary the strength of the recurrent excitation and inhibition relative to the strength of the afferent input. Each parametrization gives rise to a different model instance for which the tuning of model neurons at different locations of the orientation map is compared to the experimentally measured orientation tuning of membrane potential, spike output, excitatory, and inhibitory conductances. A quantitative analysis shows that the data provides strong evidence for a network model in which the afferent input is dominated by strong, balanced contributions of recurrent excitation and inhibition. This recurrent regime is close to a regime of “instability”, where strong, self-sustained activity of the network occurs. The firing rate of neurons in the best-fitting network is particularly sensitive to small modulations of model parameters, which could be one of the functional benefits of a network operating in this particular regime. 1

3 0.09317784 231 nips-2008-Temporal Dynamics of Cognitive Control

Author: Jeremy Reynolds, Michael C. Mozer

Abstract: Cognitive control refers to the flexible deployment of memory and attention in response to task demands and current goals. Control is often studied experimentally by presenting sequences of stimuli, some demanding a response, and others modulating the stimulus-response mapping. In these tasks, participants must maintain information about the current stimulus-response mapping in working memory. Prominent theories of cognitive control use recurrent neural nets to implement working memory, and optimize memory utilization via reinforcement learning. We present a novel perspective on cognitive control in which working memory representations are intrinsically probabilistic, and control operations that maintain and update working memory are dynamically determined via probabilistic inference. We show that our model provides a parsimonious account of behavioral and neuroimaging data, and suggest that it offers an elegant conceptualization of control in which behavior can be cast as optimal, subject to limitations on learning and the rate of information processing. Moreover, our model provides insight into how task instructions can be directly translated into appropriate behavior and then efficiently refined with subsequent task experience. 1

4 0.086195856 206 nips-2008-Sequential effects: Superstition or rational behavior?

Author: Angela J. Yu, Jonathan D. Cohen

Abstract: In a variety of behavioral tasks, subjects exhibit an automatic and apparently suboptimal sequential effect: they respond more rapidly and accurately to a stimulus if it reinforces a local pattern in stimulus history, such as a string of repetitions or alternations, compared to when it violates such a pattern. This is often the case even if the local trends arise by chance in the context of a randomized design, such that stimulus history has no real predictive power. In this work, we use a normative Bayesian framework to examine the hypothesis that such idiosyncrasies may reflect the inadvertent engagement of mechanisms critical for adapting to a changing environment. We show that prior belief in non-stationarity can induce experimentally observed sequential effects in an otherwise Bayes-optimal algorithm. The Bayesian algorithm is shown to be well approximated by linear-exponential filtering of past observations, a feature also apparent in the behavioral data. We derive an explicit relationship between the parameters and computations of the exact Bayesian algorithm and those of the approximate linear-exponential filter. Since the latter is equivalent to a leaky-integration process, a commonly used model of neuronal dynamics underlying perceptual decision-making and trial-to-trial dependencies, our model provides a principled account of why such dynamics are useful. We also show that parameter-tuning of the leaky-integration process is possible, using stochastic gradient descent based only on the noisy binary inputs. This is a proof of concept that not only can neurons implement near-optimal prediction based on standard neuronal dynamics, but that they can also learn to tune the processing parameters without explicitly representing probabilities. 1

5 0.082752243 136 nips-2008-Model selection and velocity estimation using novel priors for motion patterns

Author: Shuang Wu, Hongjing Lu, Alan L. Yuille

Abstract: Psychophysical experiments show that humans are better at perceiving rotation and expansion than translation. These findings are inconsistent with standard models of motion integration which predict best performance for translation [6]. To explain this discrepancy, our theory formulates motion perception at two levels of inference: we first perform model selection between the competing models (e.g. translation, rotation, and expansion) and then estimate the velocity using the selected model. We define novel prior models for smooth rotation and expansion using techniques similar to those in the slow-and-smooth model [17] (e.g. Green functions of differential operators). The theory gives good agreement with the trends observed in human experiments. 1

6 0.081764124 67 nips-2008-Effects of Stimulus Type and of Error-Correcting Code Design on BCI Speller Performance

7 0.078723386 218 nips-2008-Spectral Clustering with Perturbed Data

8 0.078401446 109 nips-2008-Interpreting the neural code with Formal Concept Analysis

9 0.074336544 118 nips-2008-Learning Transformational Invariants from Natural Movies

10 0.073917285 204 nips-2008-Self-organization using synaptic plasticity

11 0.06692151 160 nips-2008-On Computational Power and the Order-Chaos Phase Transition in Reservoir Computing

12 0.066668987 43 nips-2008-Cell Assemblies in Large Sparse Inhibitory Networks of Biologically Realistic Spiking Neurons

13 0.063243881 60 nips-2008-Designing neurophysiology experiments to optimally constrain receptive field models along parametric submanifolds

14 0.061651397 247 nips-2008-Using Bayesian Dynamical Systems for Motion Template Libraries

15 0.060239572 158 nips-2008-Offline Handwriting Recognition with Multidimensional Recurrent Neural Networks

16 0.059882499 110 nips-2008-Kernel-ARMA for Hand Tracking and Brain-Machine interfacing During 3D Motor Control

17 0.058736939 166 nips-2008-On the asymptotic equivalence between differential Hebbian and temporal difference learning using a local third factor

18 0.055690959 157 nips-2008-Nonrigid Structure from Motion in Trajectory Space

19 0.054355178 152 nips-2008-Non-stationary dynamic Bayesian networks

20 0.053601481 230 nips-2008-Temporal Difference Based Actor Critic Learning - Convergence and Neural Implementation


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.141), (1, 0.08), (2, 0.137), (3, 0.069), (4, 0.009), (5, 0.021), (6, -0.024), (7, -0.022), (8, 0.099), (9, 0.015), (10, -0.006), (11, 0.075), (12, -0.094), (13, 0.139), (14, -0.002), (15, 0.066), (16, -0.024), (17, 0.052), (18, -0.004), (19, -0.166), (20, -0.155), (21, -0.005), (22, -0.009), (23, -0.041), (24, 0.033), (25, 0.029), (26, -0.054), (27, -0.025), (28, 0.094), (29, -0.024), (30, -0.055), (31, -0.024), (32, -0.022), (33, 0.081), (34, 0.024), (35, -0.024), (36, -0.021), (37, -0.021), (38, 0.049), (39, -0.015), (40, -0.036), (41, 0.033), (42, 0.008), (43, -0.03), (44, 0.054), (45, -0.097), (46, -0.063), (47, 0.031), (48, -0.089), (49, 0.11)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.9647907 240 nips-2008-Tracking Changing Stimuli in Continuous Attractor Neural Networks

Author: K. Wong, Si Wu, Chi Fung

Abstract: Continuous attractor neural networks (CANNs) are emerging as promising models for describing the encoding of continuous stimuli in neural systems. Due to the translational invariance of their neuronal interactions, CANNs can hold a continuous family of neutrally stable states. In this study, we systematically explore how neutral stability of a CANN facilitates its tracking performance, a capacity believed to have wide applications in brain functions. We develop a perturbative approach that utilizes the dominant movement of the network stationary states in the state space. We quantify the distortions of the bump shape during tracking, and study their effects on the tracking performance. Results are obtained on the maximum speed for a moving stimulus to be trackable, and the reaction time to catch up with an abrupt change in stimulus.

2 0.62699687 160 nips-2008-On Computational Power and the Order-Chaos Phase Transition in Reservoir Computing

Author: Benjamin Schrauwen, Lars Buesing, Robert A. Legenstein

Abstract: Randomly connected recurrent neural circuits have proven to be very powerful models for online computations when a trained memoryless readout function is appended. Such Reservoir Computing (RC) systems are commonly used in two flavors: with analog or binary (spiking) neurons in the recurrent circuits. Previous work showed a fundamental difference between these two incarnations of the RC idea. The performance of a RC system built from binary neurons seems to depend strongly on the network connectivity structure. In networks of analog neurons such dependency has not been observed. In this article we investigate this apparent dichotomy in terms of the in-degree of the circuit nodes. Our analyses based amongst others on the Lyapunov exponent reveal that the phase transition between ordered and chaotic network behavior of binary circuits qualitatively differs from the one in analog circuits. This explains the observed decreased computational performance of binary circuits of high node in-degree. Furthermore, a novel mean-field predictor for computational performance is introduced and shown to accurately predict the numerically obtained results. 1

3 0.61265171 109 nips-2008-Interpreting the neural code with Formal Concept Analysis

Author: Dominik Endres, Peter Foldiak

Abstract: We propose a novel application of Formal Concept Analysis (FCA) to neural decoding: instead of just trying to figure out which stimulus was presented, we demonstrate how to explore the semantic relationships in the neural representation of large sets of stimuli. FCA provides a way of displaying and interpreting such relationships via concept lattices. We explore the effects of neural code sparsity on the lattice. We then analyze neurophysiological data from high-level visual cortical area STSa, using an exact Bayesian approach to construct the formal context needed by FCA. Prominent features of the resulting concept lattices are discussed, including hierarchical face representation and indications for a product-of-experts code in real neurons. 1

4 0.54282194 43 nips-2008-Cell Assemblies in Large Sparse Inhibitory Networks of Biologically Realistic Spiking Neurons

Author: Adam Ponzi, Jeff Wickens

Abstract: Cell assemblies exhibiting episodes of recurrent coherent activity have been observed in several brain regions including the striatum[1] and hippocampus CA3[2]. Here we address the question of how coherent dynamically switching assemblies appear in large networks of biologically realistic spiking neurons interacting deterministically. We show by numerical simulations of large asymmetric inhibitory networks with fixed external excitatory drive that if the network has intermediate to sparse connectivity, the individual cells are in the vicinity of a bifurcation between a quiescent and firing state and the network inhibition varies slowly on the spiking timescale, then cells form assemblies whose members show strong positive correlation, while members of different assemblies show strong negative correlation. We show that cells and assemblies switch between firing and quiescent states with time durations consistent with a power-law. Our results are in good qualitative agreement with the experimental studies. The deterministic dynamical behaviour is related to winner-less competition[3], shown in small closed loop inhibitory networks with heteroclinic cycles connecting saddle-points. 1

5 0.54020619 58 nips-2008-Dependence of Orientation Tuning on Recurrent Excitation and Inhibition in a Network Model of V1

Author: Klaus Wimmer, Marcel Stimberg, Robert Martin, Lars Schwabe, Jorge Mariño, James Schummers, David C. Lyon, Mriganka Sur, Klaus Obermayer

Abstract: The computational role of the local recurrent network in primary visual cortex is still a matter of debate. To address this issue, we analyze intracellular recording data of cat V1, which combine measuring the tuning of a range of neuronal properties with a precise localization of the recording sites in the orientation preference map. For the analysis, we consider a network model of Hodgkin-Huxley type neurons arranged according to a biologically plausible two-dimensional topographic orientation preference map. We then systematically vary the strength of the recurrent excitation and inhibition relative to the strength of the afferent input. Each parametrization gives rise to a different model instance for which the tuning of model neurons at different locations of the orientation map is compared to the experimentally measured orientation tuning of membrane potential, spike output, excitatory, and inhibitory conductances. A quantitative analysis shows that the data provides strong evidence for a network model in which the afferent input is dominated by strong, balanced contributions of recurrent excitation and inhibition. This recurrent regime is close to a regime of “instability”, where strong, self-sustained activity of the network occurs. The firing rate of neurons in the best-fitting network is particularly sensitive to small modulations of model parameters, which could be one of the functional benefits of a network operating in this particular regime. 1

6 0.53027666 8 nips-2008-A general framework for investigating how far the decoding process in the brain can be simplified

7 0.51999813 204 nips-2008-Self-organization using synaptic plasticity

8 0.49787247 231 nips-2008-Temporal Dynamics of Cognitive Control

9 0.49633119 7 nips-2008-A computational model of hippocampal function in trace conditioning

10 0.49360088 67 nips-2008-Effects of Stimulus Type and of Error-Correcting Code Design on BCI Speller Performance

11 0.45052636 136 nips-2008-Model selection and velocity estimation using novel priors for motion patterns

12 0.44346425 118 nips-2008-Learning Transformational Invariants from Natural Movies

13 0.43794391 90 nips-2008-Gaussian-process factor analysis for low-dimensional single-trial analysis of neural population activity

14 0.43538091 100 nips-2008-How memory biases affect information transmission: A rational analysis of serial reproduction

15 0.43498483 172 nips-2008-Optimal Response Initiation: Why Recent Experience Matters

16 0.40799299 158 nips-2008-Offline Handwriting Recognition with Multidimensional Recurrent Neural Networks

17 0.39716721 152 nips-2008-Non-stationary dynamic Bayesian networks

18 0.39142314 27 nips-2008-Artificial Olfactory Brain for Mixture Identification

19 0.38495192 157 nips-2008-Nonrigid Structure from Motion in Trajectory Space

20 0.38207304 218 nips-2008-Spectral Clustering with Perturbed Data


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(4, 0.025), (6, 0.075), (7, 0.09), (9, 0.255), (12, 0.036), (15, 0.014), (28, 0.199), (57, 0.052), (59, 0.024), (63, 0.027), (71, 0.011), (77, 0.046), (78, 0.014), (83, 0.04)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.84959036 69 nips-2008-Efficient Exact Inference in Planar Ising Models

Author: Nicol N. Schraudolph, Dmitry Kamenetsky

Abstract: We give polynomial-time algorithms for the exact computation of lowest-energy states, worst margin violators, partition functions, and marginals in certain binary undirected graphical models. Our approach provides an interesting alternative to the well-known graph cut paradigm in that it does not impose any submodularity constraints; instead we require planarity to establish a correspondence with perfect matchings in an expanded dual graph. Maximum-margin parameter estimation for a boundary detection task shows our approach to be efficient and effective. A C++ implementation is available from http://nic.schraudolph.org/isinf/. 1

same-paper 2 0.84930617 240 nips-2008-Tracking Changing Stimuli in Continuous Attractor Neural Networks

Author: K. Wong, Si Wu, Chi Fung

Abstract: Continuous attractor neural networks (CANNs) are emerging as promising models for describing the encoding of continuous stimuli in neural systems. Due to the translational invariance of their neuronal interactions, CANNs can hold a continuous family of neutrally stable states. In this study, we systematically explore how neutral stability of a CANN facilitates its tracking performance, a capacity believed to have wide applications in brain functions. We develop a perturbative approach that utilizes the dominant movement of the network stationary states in the state space. We quantify the distortions of the bump shape during tracking, and study their effects on the tracking performance. Results are obtained on the maximum speed for a moving stimulus to be trackable, and the reaction time to catch up with an abrupt change in stimulus.

3 0.7792539 162 nips-2008-On the Design of Loss Functions for Classification: theory, robustness to outliers, and SavageBoost

Author: Hamed Masnadi-shirazi, Nuno Vasconcelos

Abstract: The machine learning problem of classifier design is studied from the perspective of probability elicitation, in statistics. This shows that the standard approach of proceeding from the specification of a loss, to the minimization of conditional risk is overly restrictive. It is shown that a better alternative is to start from the specification of a functional form for the minimum conditional risk, and derive the loss function. This has various consequences of practical interest, such as showing that 1) the widely adopted practice of relying on convex loss functions is unnecessary, and 2) many new losses can be derived for classification problems. These points are illustrated by the derivation of a new loss which is not convex, but does not compromise the computational tractability of classifier design, and is robust to the contamination of data with outliers. A new boosting algorithm, SavageBoost, is derived for the minimization of this loss. Experimental results show that it is indeed less sensitive to outliers than conventional methods, such as Ada, Real, or LogitBoost, and converges in fewer iterations. 1

4 0.69770569 96 nips-2008-Hebbian Learning of Bayes Optimal Decisions

Author: Bernhard Nessler, Michael Pfeiffer, Wolfgang Maass

Abstract: Uncertainty is omnipresent when we perceive or interact with our environment, and the Bayesian framework provides computational methods for dealing with it. Mathematical models for Bayesian decision making typically require datastructures that are hard to implement in neural networks. This article shows that even the simplest and experimentally best supported type of synaptic plasticity, Hebbian learning, in combination with a sparse, redundant neural code, can in principle learn to infer optimal Bayesian decisions. We present a concrete Hebbian learning rule operating on log-probability ratios. Modulated by reward-signals, this Hebbian plasticity rule also provides a new perspective for understanding how Bayesian inference could support fast reinforcement learning in the brain. In particular we show that recent experimental results by Yang and Shadlen [1] on reinforcement learning of probabilistic inference in primates can be modeled in this way. 1

5 0.68682384 135 nips-2008-Model Selection in Gaussian Graphical Models: High-Dimensional Consistency of \boldmath$\ell 1$-regularized MLE

Author: Garvesh Raskutti, Bin Yu, Martin J. Wainwright, Pradeep K. Ravikumar

Abstract: We consider the problem of estimating the graph structure associated with a Gaussian Markov random field (GMRF) from i.i.d. samples. We study the performance of the ℓ1 -regularized maximum likelihood estimator in the high-dimensional setting, where the number of nodes in the graph p, the number of edges in the graph s and the maximum node degree d, are allowed to grow as a function of the number of samples n. Our main result provides sufficient conditions on (n, p, d) for the ℓ1 -regularized MLE estimator to recover all the edges of the graph with high probability. Under some conditions on the model covariance, we show that model selection can be achieved for sample sizes n = Ω(d2 log(p)), with the error decaying as O(exp(−c log(p))) for some constant c. We illustrate our theoretical results via simulations and show good correspondences between the theoretical predictions and behavior in simulations.

6 0.68421644 231 nips-2008-Temporal Dynamics of Cognitive Control

7 0.68412852 79 nips-2008-Exploring Large Feature Spaces with Hierarchical Multiple Kernel Learning

8 0.68250048 195 nips-2008-Regularized Policy Iteration

9 0.68180281 62 nips-2008-Differentiable Sparse Coding

10 0.68153399 21 nips-2008-An Homotopy Algorithm for the Lasso with Online Observations

11 0.68022794 4 nips-2008-A Scalable Hierarchical Distributed Language Model

12 0.67864835 106 nips-2008-Inferring rankings under constrained sensing

13 0.67864537 218 nips-2008-Spectral Clustering with Perturbed Data

14 0.67846382 196 nips-2008-Relative Margin Machines

15 0.67826796 37 nips-2008-Biasing Approximate Dynamic Programming with a Lower Discount Factor

16 0.67683101 175 nips-2008-PSDBoost: Matrix-Generation Linear Programming for Positive Semidefinite Matrices Learning

17 0.67682779 118 nips-2008-Learning Transformational Invariants from Natural Movies

18 0.67674249 205 nips-2008-Semi-supervised Learning with Weakly-Related Unlabeled Data : Towards Better Text Categorization

19 0.67671597 202 nips-2008-Robust Regression and Lasso

20 0.67636406 129 nips-2008-MAS: a multiplicative approximation scheme for probabilistic inference