nips nips2001 nips2001-23 knowledge-graph by maker-knowledge-mining

23 nips-2001-A theory of neural integration in the head-direction system


Source: pdf

Author: Richard Hahnloser, Xiaohui Xie, H. S. Seung

Abstract: Integration in the head-direction system is a computation by which horizontal angular head velocity signals from the vestibular nuclei are integrated to yield a neural representation of head direction. In the thalamus, the postsubiculum and the mammillary nuclei, the head-direction representation has the form of a place code: neurons have a preferred head direction in which their firing is maximal [Blair and Sharp, 1995, Blair et al., 1998, ?]. Integration is a difficult computation, given that head-velocities can vary over a large range. Previous models of the head-direction system relied on the assumption that the integration is achieved in a firing-rate-based attractor network with a ring structure. In order to correctly integrate head-velocity signals during high-speed head rotations, very fast synaptic dynamics had to be assumed. Here we address the question whether integration in the head-direction system is possible with slow synapses, for example excitatory NMDA and inhibitory GABA(B) type synapses. For neural networks with such slow synapses, rate-based dynamics are a good approximation of spiking neurons [Ermentrout, 1994]. We find that correct integration during high-speed head rotations imposes strong constraints on possible network architectures.

Reference: text


Summary: the most important sentences generated by the tf-idf model

sentIndex sentText sentNum sentScore

1 A theory of neural integration in the head-direction system     Richard H. [sent-1, score-0.25]

2 Abstract: Integration in the head-direction system is a computation by which horizontal angular head velocity signals from the vestibular nuclei are integrated to yield a neural representation of head direction. [sent-6, score-1.263]

3 In the thalamus, the postsubiculum and the mammillary nuclei, the head-direction representation has the form of a place code: neurons have a preferred head direction in which their firing is maximal [Blair and Sharp, 1995, Blair et al. [sent-7, score-0.67]

4 Previous models of the head-direction system relied on the assumption that the integration is achieved in a firing-rate-based attractor network with a ring structure. [sent-11, score-0.767]

5 In order to correctly integrate head-velocity signals during high-speed head rotations, very fast synaptic dynamics had to be assumed. [sent-12, score-0.548]

6 Here we address the question whether integration in the head-direction system is possible with slow synapses, for example excitatory NMDA and inhibitory GABA(B) type synapses. [sent-13, score-0.495]

7 For neural networks with such slow synapses, rate-based dynamics are a good approximation of spiking neurons [Ermentrout, 1994]. [sent-14, score-0.326]

8 We find that correct integration during high-speed head rotations imposes strong constraints on possible network architectures. [sent-15, score-0.702]

9 1 Introduction: Several network models have been designed to emulate the properties of head-direction neurons (HDNs) [Zhang, 1996, Redish et al. [sent-16, score-0.292]

10 The model by Zhang reproduces persistent activity for stationary head positions. [sent-18, score-0.505]

11 Persistent neural activity is generated in a ring-attractor network with symmetric excitatory and inhibitory synaptic connections. [sent-19, score-0.467]

12 It was further shown that integration is possible by adding asymmetrical connections to the attractor network. [sent-21, score-0.612]

13 They assumed that the strength of these asymmetrical connections is modulated by head-velocity. [sent-22, score-0.283]

14 When the rat moves its head to the right, the asymmetrical connections induce a rightward shift of the activity in the attractor network. [sent-23, score-0.867]

15 A more plausible model without multiplicative modulation of connections has been studied recently by Goodridge and Touretzky. [sent-24, score-0.255]

16 There, the head-velocity input has a modulatory influence on firing rates of intermittent neurons rather than on connection strengths. [sent-25, score-0.347]

17 The intermittent neurons are divided into two groups that make spatially offset connections, one group to the right, the other to the left. [sent-26, score-0.366]

18 The different types of neurons in the Goodridge and Touretzky model have firing properties that are comparable to neurons in the various nuclei of the head-direction system. [sent-27, score-0.428]

19 What all these previous models have in common is that the integration is performed in an inherent double-ring network with very fast synapses (on the order of milliseconds or less for [Goodridge and Touretzky, 2000]). [sent-28, score-0.438]

20 The connections made by one ring are responsible for rightward turns and the connections made by the other ring are responsible for leftward turns. [sent-29, score-1.321]

21 In order to derive a network theory of integration valid for fast and slow synapses, here we solve a simple double-ring network in the linear and in the saturated regimes. [sent-30, score-0.519]

22 An important property of the head-direction system is that the integration be linear over a large range of head-velocities. [sent-31, score-0.327]

23 We are interested in finding the types of synaptic connections that yield a large linear range, and we pose our findings as predictions about optimal network architectures. [sent-32, score-0.477]

24 2 Definition of the model: We assume that the number of neurons in the double-ring network is large and write its dynamics as a continuous neural field, equations (1) and (2). [sent-34, score-0.301]

25 Two variables denote the firing rates of neurons in the left and right ring, respectively. [sent-41, score-0.301]

26 Two further quantities represent the synaptic activations (the amount of neurotransmitter release caused by the respective firing rates). [sent-42, score-0.176]

27 For simplicity, we assume that the vestibular input is proportional to angular head-velocity. [sent-45, score-0.656]

28 The synaptic connection profiles between neurons on the same ring and between neurons on different rings are given by equation (3), whose parameters define the intra-ring and inter-ring connection strengths. [sent-46, score-0.662]

29 One parameter is the intra-ring connection offset, the other the inter-ring connection offset. [sent-47, score-0.202]

30 In this case, within a certain range of synaptic connections, steady bumps of activity appear on the two rings. [sent-51, score-0.408]

31 When the head of the animal rotates, the activity bumps travel at a velocity determined by the vestibular input. [sent-52, score-0.815]

32 For perfect integration, the bump velocity should be proportional to the angular head-velocity over the full range of possible head-velocities. [sent-53, score-0.134]

33 This is a difficult computational problem, in particular for slow synapses. [sent-54, score-0.089]

34 The half-width of these bumps is given by equation (5). When the angular head-velocity is small, we linearize the dynamics around the stationary solution of equation (4). [sent-57, score-0.924]

35 The linearization yields equations (6), (7) and (10); equation (8) is the desired result, relating the velocity of the two bumps to the differential vestibular input. [sent-64, score-0.872]

36 In Fig. 1 we show simulation results using slow synapses. [sent-66, score-0.218]

37 The integration is linear over almost the entire range of head-velocities when the amplitudes of the inter-ring and intra-ring connections are equal. [sent-67, score-0.327]

38 This condition cannot directly be deduced from the above formulas. [sent-69, score-0.273]

39 We point out that some empirical tuning was necessary to achieve this large range of linearity. [sent-70, score-0.104]

40 When the bumps move, their amplitudes tend to decrease. [sent-71, score-0.248]

41 Fig. 1d shows the peak firing rates of neurons in the two rings as a function of vestibular input. [sent-73, score-0.707]

42 As can be seen, the firing rates are a linear function of vestibular input, in agreement with equations 17 and 18 of the Appendix. [sent-74, score-0.34]

43 However, a linear firing-rate modulation by head-velocity is not universal: for some parameters we have seen asymmetric head-velocity tuning, with a preference for small head-velocities (not shown). [sent-75, score-0.908]

44 Figure 1: Velocity of activity bumps as a function of vestibular input. [sent-88, score-0.522]

45 Head-velocity dependent modulation of firing rates (on the right and on the left ring). [sent-99, score-0.158]

46 5 Saturating velocity: When the vestibular input is very large, at some point the left ring becomes inactive. [sent-103, score-0.614]

47 Because inactivating the left ring means that the push-pull competition between the two rings is minimized, we are able to determine the saturating velocity of the double-ring network. [sent-104, score-0.987]

48 The saturating velocity is then set by the on-ring connections alone. [sent-105, score-0.588]

49 Now, consider the steady solution of a single ring network with symmetric connections. [sent-109, score-0.744]

50 By differentiating, it follows that the resulting profile is the solution of a ring network with modified connections. [sent-110, score-0.687]

51 Hence, the saturating velocity is given by equation (11). [sent-111, score-0.373]

52 Notice that a traveling solution may not always exist if one ring is inactive (this is the case when there are no intra-ring excitatory connections). [sent-114, score-0.555]

53 However, even without a traveling solution, equation (11) remains valid. [sent-115, score-0.098]

54 In Figs. 1a and 1b, the saturating velocity is indicated by the horizontal dotted lines. [sent-117, score-0.373]

55 6 ADN and POs neurons: Goodridge and Touretzky's integrator model was designed to emulate details of neuronal tuning as observed in the different areas of the head-direction system. [sent-121, score-0.409]

56 Wondering whether the simple double ring studied here can also reproduce multiple tuning curves, we analyze simple read-out methods of the firing rates on the two rings. [sent-122, score-0.557]

57 ADN neurons: By reading out firing rates using a maximum operation, anticipatory head-direction tuning arises due to the fact that there is an activity offset between the two rings (equation (13)). [sent-126, score-0.554]

58 When the head turns to the right, the activity on the right ring is larger than on the left ring, and so the max read-out tuning is biased to the right. [sent-127, score-1.388]

59 Similarly, for left turns, the tuning is biased to the left. [sent-128, score-0.059]

60 Thus, the activity offset between the two rings leads to an anticipation time for ADN neurons (see Figure 2). [sent-129, score-0.551]

61 Because the activity offset is, by assumption, head-velocity independent, it follows that the anticipation time is inversely proportional to head-velocity (assuming perfect integration). [sent-130, score-0.08]

62 In other words, the anticipation time tends to be smaller for fast head rotations and larger for slow head rotations. [sent-131, score-0.93]

63 POs neurons: By reading out the double-ring activity as an average, neurons in POs do not have any anticipation time: because averaging is a symmetric operation, all information about the direction of head rotations is lost. [sent-135, score-1.32]

64 Figure 2: Snapshots of the activities on the two rings during right and left turns (top); firing rate is plotted against head-direction (0-360 degrees), with max and average read-outs shown. [sent-137, score-0.997]

65 Reading out the activities by averaging and by a maximum operation (bottom). [sent-138, score-0.102]

66 7 Discussion: Here we discuss how the various connection parameters enable the double-ring network to function as an integrator. [sent-139, score-0.128]

67 The connection parameters must be chosen so as to yield an integration range that is large. Synaptic time constant: by assumption, the synaptic time constant is large. [sent-141, score-0.371]

68 It has the simplest effect of all parameters on the integrator properties. [sent-142, score-0.077]

69 Notice that if the time constant were small, a large linear range of head-velocities could be trivially achieved. [sent-144, score-0.054]

70 Intra-ring offset: The connection offset between neurons receiving similar vestibular input is, besides the synaptic time constant, the sole parameter determining the saturating head-velocity, beyond which integration is impossible. [sent-146, score-1.096]

71 According to equation (11), the saturating velocity is large if the intra-ring offset is close to its optimal value (we want the saturating velocity to be large). [sent-147, score-0.579]

72 In other words, for good integration, excitatory connections should be strongest (or inhibitory connections weakest) for neuron pairs whose preferred head-directions differ by an angle close to this optimal offset. [sent-148, score-0.61]

73 Inter-ring offset: The connection offset between neurons receiving different vestibular input determines the anticipation time of thalamic neurons. [sent-149, score-0.811]

74 If this offset is large, then the activity offset in equation (13) is large. [sent-150, score-0.279]

75 And, because the anticipation time is proportional to the activity offset (assuming perfect integration), we conclude that the inter-ring offset should preferentially be large if the anticipation time is to be large. [sent-151, score-0.08]

76 Notice that by equation (8), the linear range of head-velocities is not affected by the inter-ring offset. [sent-152, score-0.094]

77 Inter-ring strengths: The inter-ring connections should be mainly excitatory, which implies that their mean strength should not be too negative (the optimal value was found empirically). [sent-153, score-0.215]

78 We want the integration to be as linear in the vestibular input as possible, which means that we want our linear expansions (6) and (7) to deviate as little as possible from (4). [sent-155, score-0.296]

79 Hence, the differential gain between the two rings should be small, which is the case when the two rings excite each other. [sent-156, score-0.451]

80 The inter-ring excitation makes sure, even for large values of the vestibular input, that there are comparable activity levels on the two rings. [sent-157, score-0.101]

81 Intra-ring strengths: The intra-ring connections should be mainly inhibitory, which implies that their mean strength should be strongly negative. [sent-159, score-0.215]

82 The reason for this is that inhibition is necessary for proper and stable integration. [sent-160, score-0.028]

83 Notice also that, according to equation (15), one connection amplitude cannot be much larger than the other. [sent-162, score-0.04]

84 If this were the case, the persistent activity in the no head-movement case would become unstable. [sent-163, score-0.149]

85 For linear integration we have found that the condition of equal inter-ring and intra-ring connection amplitudes is necessary; small deviations from this condition cause the integrator to become sub- or supralinear. [sent-164, score-0.35]

86 8 Conclusion: We have presented a theory for integration in the head-direction system with slow synapses. [sent-165, score-0.339]

87 We have found that in order to achieve a large range of linear integration, there should be strong excitatory connections between neurons with dissimilar head-velocity tuning and inhibitory connections between neurons with similar head-velocity tuning (see the discussion). [sent-166, score-1.237]

88 Similar to models of the oculomotor integrator [Seung, 1996], we have found that linear integration can only be achieved by precise tuning of synaptic weights (for example, equal inter-ring and intra-ring connection amplitudes). [sent-167, score-0.575]

89 Appendix: To study the traveling pulse solution, it is convenient to go into a coordinate frame co-moving with the bumps by a change of variables. [sent-168, score-0.357]

90 The stationary solution in the moving frame then satisfies equation (12). [sent-169, score-0.129]

91 In order to find the fixed points of equation (12), we use the ansatz (4) and equate the coefficients of the three Fourier modes and of the constant mode. [sent-172, score-0.079]

92 The above set of equations fully characterizes the solution. [sent-185, score-0.034]

93 Equation (13) determines the offset between the two rings. [sent-187, score-0.164]

94 When the vestibular input is small, we assume a perturbed solution around the stationary one and equate the Fourier coefficients. [sent-191, score-0.073]

95 We determine the linearized dynamics of the differential mode by linearizing the dynamics (12) to first order in the vestibular input; comparing once more the Fourier coefficients leads to equations (16) and (17). [sent-199, score-0.452]

96 Role of the lateral mammillary nucleus in the rat head direction circuit: A combined single unit recording and lesion study. [sent-207, score-0.444]

97 Anticipatory head direction signals in anterior thalamus: evidence for a thalamocortical circuit that integrates angular head motion to compute head direction. [sent-212, score-1.119]

98 Reduction of conductance-based models with slow synapses to neural nets. [sent-216, score-0.184]

99 A coupled attractor model of the rodent head direction system. [sent-229, score-0.49]

100 Representation of spatial orientation by the intrinsic dynamics of the head-direction cell ensemble: A theory. [sent-242, score-0.054]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('ring', 0.374), ('head', 0.319), ('vestibular', 0.262), ('integration', 0.25), ('connections', 0.215), ('rings', 0.207), ('velocity', 0.207), ('goodridge', 0.183), ('neurons', 0.183), ('saturating', 0.166), ('bumps', 0.159), ('blair', 0.157), ('offset', 0.138), ('adn', 0.131), ('synaptic', 0.121), ('anticipation', 0.105), ('redish', 0.105), ('tuning', 0.104), ('pos', 0.104), ('activity', 0.101), ('touretzky', 0.097), ('synapses', 0.095), ('excitatory', 0.089), ('slow', 0.089), ('seung', 0.079), ('attractor', 0.079), ('ermentrout', 0.079), ('integrator', 0.077), ('rotations', 0.069), ('angular', 0.069), ('anticipatory', 0.068), ('asymmetrical', 0.068), ('thalamus', 0.068), ('inhibitory', 0.067), ('network', 0.064), ('connection', 0.064), ('nuclei', 0.062), ('zhang', 0.059), ('traveling', 0.058), ('sharp', 0.057), ('rates', 0.055), ('range', 0.054), ('dynamics', 0.054), ('mammillary', 0.052), ('postsubiculum', 0.052), ('rightward', 0.052), ('rodent', 0.052), ('perfect', 0.05), ('persistent', 0.048), ('reading', 0.048), ('fourier', 0.046), ('emulate', 0.045), ('intermittent', 0.045), ('linearize', 0.045), ('activities', 0.042), ('wq', 0.041), ('qq', 0.041), ('modulation', 0.04), ('direction', 0.04), ('equation', 0.04), ('anterior', 0.039), ('equate', 0.039), ('differential', 0.037), ('stationary', 0.037), ('notice', 0.035), ('amplitudes', 0.035), ('solution', 0.034), ('simulation', 0.034), ('rat', 0.033), ('receiving', 0.033), ('left', 0.033), ('nd', 0.033), ('averaging', 0.032), ('moving', 0.032), ('responsible', 0.032), ('steady', 0.032), ('proportional', 0.03), ('right', 0.03), ('fast', 0.029), ('animal', 0.029), ('circuit', 0.029), ('operation', 0.028), ('coef', 0.028), ('inhibition', 0.028), ('turns', 0.027), ('firing', 0.026), ('biased', 0.026), ('determines', 0.026), ('frame', 0.026), ('symmetric', 0.025), ('signals', 0.025), ('preferred', 0.024), ('double', 0.024), ('linear', 0.023), ('deduced', 0.023), ('deformation', 0.023), ('dorsal', 0.023), ('formulas', 0.023), ('hahnloser', 0.023), ('intra', 0.023)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000005 23 nips-2001-A theory of neural integration in the head-direction system

Author: Richard Hahnloser, Xiaohui Xie, H. S. Seung

Abstract: Integration in the head-direction system is a computation by which horizontal angular head velocity signals from the vestibular nuclei are integrated to yield a neural representation of head direction. In the thalamus, the postsubiculum and the mammillary nuclei, the head-direction representation has the form of a place code: neurons have a preferred head direction in which their firing is maximal [Blair and Sharp, 1995, Blair et al., 1998, ?]. Integration is a difficult computation, given that head-velocities can vary over a large range. Previous models of the head-direction system relied on the assumption that the integration is achieved in a firing-rate-based attractor network with a ring structure. In order to correctly integrate head-velocity signals during high-speed head rotations, very fast synaptic dynamics had to be assumed. Here we address the question whether integration in the head-direction system is possible with slow synapses, for example excitatory NMDA and inhibitory GABA(B) type synapses. For neural networks with such slow synapses, rate-based dynamics are a good approximation of spiking neurons [Ermentrout, 1994]. We find that correct integration during high-speed head rotations imposes strong constraints on possible network architectures.

2 0.29766646 150 nips-2001-Probabilistic Inference of Hand Motion from Neural Activity in Motor Cortex

Author: Yun Gao, Michael J. Black, Elie Bienenstock, Shy Shoham, John P. Donoghue

Abstract: Statistical learning and probabilistic inference techniques are used to infer the hand position of a subject from multi-electrode recordings of neural activity in motor cortex. First, an array of electrodes provides training data of neural firing conditioned on hand kinematics. We learn a nonparametric representation of this firing activity using a Bayesian model and rigorously compare it with previous models using cross-validation. Second, we infer a posterior probability distribution over hand motion conditioned on a sequence of neural test data using Bayesian inference. The learned firing models of multiple cells are used to define a nonGaussian likelihood term which is combined with a prior probability for the kinematics. A particle filtering method is used to represent, update, and propagate the posterior distribution over time. The approach is compared with traditional linear filtering methods; the results suggest that it may be appropriate for neural prosthetic applications.

3 0.1909226 82 nips-2001-Generating velocity tuning by asymmetric recurrent connections

Author: Xiaohui Xie, Martin A. Giese

Abstract: Asymmetric lateral connections are one possible mechanism that can account for the direction selectivity of cortical neurons. We present a mathematical analysis for a class of these models. Contrasting with earlier theoretical work that has relied on methods from linear systems theory, we study the network’s nonlinear dynamic properties that arise when the threshold nonlinearity of the neurons is taken into account. We show that such networks have stimulus-locked traveling pulse solutions that are appropriate for modeling the responses of direction selective cortical neurons. In addition, our analysis shows that outside a certain regime of stimulus speeds the stability of these solutions breaks down, giving rise to another class of solutions that are characterized by specific spatiotemporal periodicity. This predicts that if direction selectivity in the cortex is mainly achieved by asymmetric lateral connections, lurching activity waves might be observable in ensembles of direction selective cortical neurons within appropriate regimes of the stimulus speed.

4 0.15971629 37 nips-2001-Associative memory in realistic neuronal networks

Author: Peter E. Latham

Abstract: Almost two decades ago, Hopfield [1] showed that networks of highly reduced model neurons can exhibit multiple attracting fixed points, thus providing a substrate for associative memory. It is still not clear, however, whether realistic neuronal networks can support multiple attractors. The main difficulty is that neuronal networks in vivo exhibit a stable background state at low firing rate, typically a few Hz. Embedding attractors is easy; doing so without destabilizing the background is not. Previous work [2, 3] focused on the sparse coding limit, in which a vanishingly small number of neurons are involved in any memory. Here we investigate the case in which the number of neurons involved in a memory scales with the number of neurons in the network. In contrast to the sparse coding limit, we find that multiple attractors can co-exist robustly with a stable background state. Mean field theory is used to understand how the behavior of the network scales with its parameters, and simulations with analog neurons are presented. One of the most important features of the nervous system is its ability to perform associative memory. It is generally believed that associative memory is implemented using attractor networks - experimental studies point in that direction [4-7], and there are virtually no competing theoretical models. Perhaps surprisingly, however, it is still an open theoretical question whether attractors can exist in realistic neuronal networks. The

5 0.1372115 72 nips-2001-Exact differential equation population dynamics for integrate-and-fire neurons

Author: Julian Eggert, Berthold Bäuml

Abstract: Mesoscopical, mathematical descriptions of dynamics of populations of spiking neurons are getting increasingly important for the understanding of large-scale processes in the brain using simulations. In our previous work, integral equation formulations for population dynamics have been derived for a special type of spiking neurons. For Integrate-and-Fire type neurons, these formulations were only approximately correct. Here, we derive a mathematically compact, exact population dynamics formulation for Integrate-and-Fire type neurons. It can be shown quantitatively in simulations that the numerical correspondence with microscopically modeled neuronal populations is excellent. 1 Introduction and motivation: The goal of the population dynamics approach is to model the time course of the collective activity of entire populations of functionally and dynamically similar neurons in a compact way, using a higher descriptional level than that of single neurons and spikes. The usual observable at the level of neuronal populations is the population-averaged instantaneous firing rate A(t), with A(t)Δt being the number of neurons in the population that release a spike in an interval [t, t+Δt). Population dynamics are formulated in such a way that they match quantitatively the time course of a given A(t), either gained experimentally or by microscopical, detailed simulation. At least three main reasons can be formulated which underline the importance of the population dynamics approach for computational neuroscience. First, it enables the simulation of extensive networks involving a massive number of neurons and connections, which is typically the case when dealing with biologically realistic functional models that go beyond the single neuron level. Second, it increases the analytical understanding of large-scale neuronal dynamics, opening the way towards better control and predictive capabilities when dealing with large networks. Third, it enables a systematic embedding of the numerous neuronal models operating at different descriptional scales into a generalized theoretic framework, explaining the relationships, dependencies and derivations of the respective models. Early efforts on population dynamics approaches date back as early as 1972, to the work of Wilson and Cowan [8] and Knight [4], which laid the basis for all current population-averaged graded-response models (see e.g. [6] for modeling work using these models). More recently, population-based approaches for spiking neurons were developed, mainly by Gerstner [3, 2] and Knight [5]. In our own previous work [1], we have developed a theoretical framework which enables us to systematize and simulate a wide range of models for population-based dynamics. It was shown that the equations of the framework produce results that agree quantitatively well with detailed simulations using spiking neurons, so that they can be used for realistic simulations involving networks with large numbers of spiking neurons. Nevertheless, for neuronal populations composed of Integrate-and-Fire (I&F) neurons, this framework was only correct in an approximation. In this paper, we derive the exact population dynamics formulation for I&F neurons. This is achieved by reducing the I&F population dynamics to a point process and by taking advantage of the particular properties of I&F neurons.
2 2.1 Background: Integrate-and-Fire dynamics. Differential form: We start with the standard Integrate-and-Fire (I&F) model in form of the well-known differential equation [7], equation (1), which describes the dynamics of the membrane potential V_i of a neuron i that is modeled as a single compartment with RC circuit characteristics. The membrane relaxation time is in this case T = RC, with R being the membrane resistance and C the membrane capacitance. The resting potential V_rest is the stationary potential that is approached in the no-input case. The input arriving from other neurons is described in form of a current j_i. In addition to eq. (1), which describes the integrate part of the I&F model, the neuronal dynamics are completed by a nonlinear step. Every time the membrane potential V_i reaches a fixed threshold θ from below, V_i is lowered by a fixed amount Δ > 0, and from the new value of the membrane potential integration according to eq. (1) starts again; this reset rule is equation (2). At the same time, it is said that the release of a spike occurred (i.e., the neuron fired), and the time t_i = t of this singular event is stored. Here t_i indicates the time of the most recent spike. Storing all the last firing times, we gain the sequence of spikes {t_i^j} (spike ordering index j, neuronal index i). 2.2 Integral form: Now we look at the single neuron in a neuronal compound. We assume that the input current contribution j_i from presynaptic spiking neurons can be described using the presynaptic spike times, a response function, and connection weights W_{i,j}, equation (3). Integrating the I&F equation (1) beginning at the last spiking time t_i, which determines the initial condition V_i(t_i) = V_i(t_i - 0) - Δ, where V_i(t_i - 0) is the membrane potential just before the neuron spikes, we obtain equation (4), with the refractory function of equation (5) and an alpha-function response kernel.

6 0.13714123 141 nips-2001-Orientation-Selective aVLSI Spiking Neurons

7 0.13619876 166 nips-2001-Self-regulation Mechanism of Temporally Asymmetric Hebbian Plasticity

8 0.11394 86 nips-2001-Grammatical Bigrams

9 0.095904484 197 nips-2001-Why Neuronal Dynamics Should Control Synaptic Learning Rules

10 0.079807915 73 nips-2001-Eye movements and the maturation of cortical orientation selectivity

11 0.07198634 57 nips-2001-Correlation Codes in Neuronal Populations

12 0.070918486 2 nips-2001-3 state neurons for contextual processing

13 0.068794854 87 nips-2001-Group Redundancy Measures Reveal Redundancy Reduction in the Auditory Pathway

14 0.065953366 27 nips-2001-Activity Driven Adaptive Stochastic Resonance

15 0.065269269 65 nips-2001-Effective Size of Receptive Fields of Inferior Temporal Visual Cortex Neurons in Natural Scenes

16 0.063771687 181 nips-2001-The Emergence of Multiple Movement Units in the Presence of Noise and Feedback Delay

17 0.060533479 111 nips-2001-Learning Lateral Interactions for Feature Binding and Sensory Segmentation

18 0.060229752 174 nips-2001-Spike timing and the coding of naturalistic sounds in a central auditory area of songbirds

19 0.059015609 96 nips-2001-Information-Geometric Decomposition in Spike Analysis

20 0.056993634 131 nips-2001-Neural Implementation of Bayesian Inference in Population Codes


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.146), (1, -0.273), (2, -0.143), (3, 0.064), (4, 0.072), (5, 0.023), (6, 0.017), (7, 0.064), (8, -0.022), (9, 0.068), (10, 0.0), (11, -0.024), (12, -0.04), (13, -0.079), (14, 0.024), (15, -0.0), (16, -0.223), (17, -0.134), (18, -0.016), (19, -0.108), (20, 0.106), (21, 0.145), (22, 0.065), (23, 0.019), (24, 0.005), (25, 0.023), (26, 0.102), (27, -0.263), (28, 0.111), (29, -0.084), (30, -0.162), (31, 0.023), (32, -0.062), (33, 0.055), (34, 0.161), (35, -0.026), (36, 0.104), (37, -0.157), (38, 0.064), (39, 0.017), (40, 0.006), (41, -0.028), (42, -0.012), (43, -0.021), (44, 0.011), (45, 0.18), (46, 0.025), (47, -0.123), (48, -0.051), (49, -0.028)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.97843581 23 nips-2001-A theory of neural integration in the head-direction system

Author: Richard Hahnloser, Xiaohui Xie, H. S. Seung

Abstract: Integration in the head-direction system is a computation by which horizontal angular head velocity signals from the vestibular nuclei are integrated to yield a neural representation of head direction. In the thalamus, the postsubiculum and the mammillary nuclei, the head-direction representation has the form of a place code: neurons have a preferred head direction in which their firing is maximal [Blair and Sharp, 1995, Blair et al., 1998, ?]. Integration is a difficult computation, given that head-velocities can vary over a large range. Previous models of the head-direction system relied on the assumption that the integration is achieved in a firing-rate-based attractor network with a ring structure. In order to correctly integrate head-velocity signals during high-speed head rotations, very fast synaptic dynamics had to be assumed. Here we address the question whether integration in the head-direction system is possible with slow synapses, for example excitatory NMDA and inhibitory GABA(B) type synapses. For neural networks with such slow synapses, rate-based dynamics are a good approximation of spiking neurons [Ermentrout, 1994]. We find that correct integration during high-speed head rotations imposes strong constraints on possible network architectures.

2 0.72760588 82 nips-2001-Generating velocity tuning by asymmetric recurrent connections

Author: Xiaohui Xie, Martin A. Giese

Abstract: Asymmetric lateral connections are one possible mechanism that can account for the direction selectivity of cortical neurons. We present a mathematical analysis for a class of these models. Contrasting with earlier theoretical work that has relied on methods from linear systems theory, we study the network’s nonlinear dynamic properties that arise when the threshold nonlinearity of the neurons is taken into account. We show that such networks have stimulus-locked traveling pulse solutions that are appropriate for modeling the responses of direction selective cortical neurons. In addition, our analysis shows that outside a certain regime of stimulus speeds the stability of these solutions breaks down, giving rise to another class of solutions that are characterized by specific spatiotemporal periodicity. This predicts that if direction selectivity in the cortex is mainly achieved by asymmetric lateral connections, lurching activity waves might be observable in ensembles of direction selective cortical neurons within appropriate regimes of the stimulus speed.

3 0.7221294 150 nips-2001-Probabilistic Inference of Hand Motion from Neural Activity in Motor Cortex

Author: Yun Gao, Michael J. Black, Elie Bienenstock, Shy Shoham, John P. Donoghue

Abstract: Statistical learning and probabilistic inference techniques are used to infer the hand position of a subject from multi-electrode recordings of neural activity in motor cortex. First, an array of electrodes provides training data of neural firing conditioned on hand kinematics. We learn a nonparametric representation of this firing activity using a Bayesian model and rigorously compare it with previous models using cross-validation. Second, we infer a posterior probability distribution over hand motion conditioned on a sequence of neural test data using Bayesian inference. The learned firing models of multiple cells are used to define a nonGaussian likelihood term which is combined with a prior probability for the kinematics. A particle filtering method is used to represent, update, and propagate the posterior distribution over time. The approach is compared with traditional linear filtering methods; the results suggest that it may be appropriate for neural prosthetic applications.

4 0.61062795 166 nips-2001-Self-regulation Mechanism of Temporally Asymmetric Hebbian Plasticity

Author: N. Matsumoto, M. Okada

Abstract: Recent biological experimental findings have shown that synaptic plasticity depends on the relative timing of the pre- and postsynaptic spikes, which determines whether Long Term Potentiation (LTP) or Long Term Depression (LTD) occurs. This synaptic plasticity has been called “Temporally Asymmetric Hebbian plasticity (TAH)”. Many authors have numerically shown that spatio-temporal patterns can be stored in neural networks. However, the mathematical mechanism for storage of the spatio-temporal patterns is still unknown, especially the effects of LTD. In this paper, we employ a simple neural network model and show that interference of LTP and LTD disappears in a sparse coding scheme. On the other hand, it is known that the covariance learning is indispensable for storing sparse patterns. We also show that TAH qualitatively has the same effect as the covariance learning when spatio-temporal patterns are embedded in the network.

5 0.4165512 37 nips-2001-Associative memory in realistic neuronal networks

Author: Peter E. Latham

Abstract: Almost two decades ago, Hopfield [1] showed that networks of highly reduced model neurons can exhibit multiple attracting fixed points, thus providing a substrate for associative memory. It is still not clear, however, whether realistic neuronal networks can support multiple attractors. The main difficulty is that neuronal networks in vivo exhibit a stable background state at low firing rate, typically a few Hz. Embedding attractors is easy; doing so without destabilizing the background is not. Previous work [2, 3] focused on the sparse coding limit, in which a vanishingly small number of neurons are involved in any memory. Here we investigate the case in which the number of neurons involved in a memory scales with the number of neurons in the network. In contrast to the sparse coding limit, we find that multiple attractors can co-exist robustly with a stable background state. Mean field theory is used to understand how the behavior of the network scales with its parameters, and simulations with analog neurons are presented. One of the most important features of the nervous system is its ability to perform associative memory. It is generally believed that associative memory is implemented using attractor networks - experimental studies point in that direction [4-7], and there are virtually no competing theoretical models. Perhaps surprisingly, however, it is still an open theoretical question whether attractors can exist in realistic neuronal networks. The

6 0.40616527 72 nips-2001-Exact differential equation population dynamics for integrate-and-fire neurons

7 0.39705625 73 nips-2001-Eye movements and the maturation of cortical orientation selectivity

8 0.3847971 141 nips-2001-Orientation-Selective aVLSI Spiking Neurons

9 0.35979092 57 nips-2001-Correlation Codes in Neuronal Populations

10 0.29914933 86 nips-2001-Grammatical Bigrams

11 0.28266931 165 nips-2001-Scaling Laws and Local Minima in Hebbian ICA

12 0.26754284 111 nips-2001-Learning Lateral Interactions for Feature Binding and Sensory Segmentation

13 0.26509288 131 nips-2001-Neural Implementation of Bayesian Inference in Population Codes

14 0.24022385 124 nips-2001-Modeling the Modulatory Effect of Attention on Human Spatial Vision

15 0.23078237 160 nips-2001-Reinforcement Learning and Time Perception -- a Model of Animal Experiments

16 0.22642572 87 nips-2001-Group Redundancy Measures Reveal Redundancy Reduction in the Auditory Pathway

17 0.2158009 42 nips-2001-Bayesian morphometry of hippocampal cells suggests same-cell somatodendritic repulsion

18 0.21288638 85 nips-2001-Grammar Transfer in a Second Order Recurrent Neural Network

19 0.21278311 197 nips-2001-Why Neuronal Dynamics Should Control Synaptic Learning Rules

20 0.19451758 3 nips-2001-ACh, Uncertainty, and Cortical Inference


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(14, 0.015), (19, 0.021), (27, 0.628), (30, 0.065), (38, 0.021), (59, 0.01), (72, 0.02), (74, 0.019), (79, 0.02), (91, 0.082)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.99300492 117 nips-2001-MIME: Mutual Information Minimization and Entropy Maximization for Bayesian Belief Propagation

Author: Anand Rangarajan, Alan L. Yuille

Abstract: Bayesian belief propagation in graphical models has been recently shown to have very close ties to inference methods based in statistical physics. After Yedidia et al. demonstrated that belief propagation fixed points correspond to extrema of the so-called Bethe free energy, Yuille derived a double loop algorithm that is guaranteed to converge to a local minimum of the Bethe free energy. Yuille’s algorithm is based on a certain decomposition of the Bethe free energy and he mentions that other decompositions are possible and may even be fruitful. In the present work, we begin with the Bethe free energy and show that it has a principled interpretation as pairwise mutual information minimization and marginal entropy maximization (MIME). Next, we construct a family of free energy functions from a spectrum of decompositions of the original Bethe free energy. For each free energy in this family, we develop a new algorithm that is guaranteed to converge to a local minimum. Preliminary computer simulations are in agreement with this theoretical development. 1

same-paper 2 0.98782587 23 nips-2001-A theory of neural integration in the head-direction system

Author: Richard Hahnloser, Xiaohui Xie, H. S. Seung

Abstract: Integration in the head-direction system is a computation by which horizontal angular head velocity signals from the vestibular nuclei are integrated to yield a neural representation of head direction. In the thalamus, the postsubiculum and the mammillary nuclei, the head-direction representation has the form of a place code: neurons have a preferred head direction in which their firing is maximal [Blair and Sharp, 1995, Blair et al., 1998, ?]. Integration is a difficult computation, given that head-velocities can vary over a large range. Previous models of the head-direction system relied on the assumption that the integration is achieved in a firing-rate-based attractor network with a ring structure. In order to correctly integrate head-velocity signals during high-speed head rotations, very fast synaptic dynamics had to be assumed. Here we address the question whether integration in the head-direction system is possible with slow synapses, for example excitatory NMDA and inhibitory GABA(B) type synapses. For neural networks with such slow synapses, rate-based dynamics are a good approximation of spiking neurons [Ermentrout, 1994]. We find that correct integration during high-speed head rotations imposes strong constraints on possible network architectures.

3 0.9868207 165 nips-2001-Scaling Laws and Local Minima in Hebbian ICA

Author: Magnus Rattray, Gleb Basalyga

Abstract: We study the dynamics of a Hebbian ICA algorithm extracting a single non-Gaussian component from a high-dimensional Gaussian background. For both on-line and batch learning we find that a surprisingly large number of examples are required to avoid trapping in a sub-optimal state close to the initial conditions. The number of examples required to extract a skewed signal grows with the data dimension, and extracting a symmetrical signal with non-zero kurtosis requires more examples still.

4 0.98679644 129 nips-2001-Multiplicative Updates for Classification by Mixture Models

Author: Lawrence K. Saul, Daniel D. Lee

Abstract: We investigate a learning algorithm for the classification of nonnegative data by mixture models. Multiplicative update rules are derived that directly optimize the performance of these models as classifiers. The update rules have a simple closed form and an intuitive appeal. Our algorithm retains the main virtues of the Expectation-Maximization (EM) algorithm—its guarantee of monotonic improvement, and its absence of tuning parameters—with the added advantage of optimizing a discriminative objective function. The algorithm reduces as a special case to the method of generalized iterative scaling for log-linear models. The learning rate of the algorithm is controlled by the sparseness of the training data. We use the method of nonnegative matrix factorization (NMF) to discover sparse distributed representations of the data. This form of feature selection greatly accelerates learning and makes the algorithm practical on large problems. Experiments show that discriminatively trained mixture models lead to much better classification than comparably sized models trained by EM. 1

5 0.98506153 106 nips-2001-Laplacian Eigenmaps and Spectral Techniques for Embedding and Clustering

Author: Mikhail Belkin, Partha Niyogi

Abstract: Drawing on the correspondence between the graph Laplacian, the Laplace-Beltrami operator on a manifold, and the connections to the heat equation, we propose a geometrically motivated algorithm for constructing a representation for data sampled from a low dimensional manifold embedded in a higher dimensional space. The algorithm provides a computationally efficient approach to nonlinear dimensionality reduction that has locality preserving properties and a natural connection to clustering. Several applications are considered. In many areas of artificial intelligence, information retrieval and data mining, one is often confronted with intrinsically low dimensional data lying in a very high dimensional space. For example, gray scale n x n images of a fixed object taken with a moving camera yield data points in R^(n^2). However, the intrinsic dimensionality of the space of all images of the same object is the number of degrees of freedom of the camera - in fact the space has the natural structure of a manifold embedded in R^(n^2). While there is a large body of work on dimensionality reduction in general, most existing approaches do not explicitly take into account the structure of the manifold on which the data may possibly reside. Recently, there has been some interest (Tenenbaum et al, 2000; Roweis and Saul, 2000) in the problem of developing low dimensional representations of data in this particular context. In this paper, we present a new algorithm and an accompanying framework of analysis for geometrically motivated dimensionality reduction. The core algorithm is very simple, has a few local computations and one sparse eigenvalue problem. The solution reflects the intrinsic geometric structure of the manifold. The justification comes from the role of the Laplacian operator in providing an optimal embedding. The Laplacian of the graph obtained from the data points may be viewed as an approximation to the Laplace-Beltrami operator defined on the manifold. The embedding maps for the data come from approximations to a natural map that is defined on the entire manifold. The framework of analysis presented here makes this connection explicit. While this connection is known to geometers and specialists in spectral graph theory (for example, see [1, 2]) to the best of our knowledge we do not know of any application to data representation yet. The connection of the Laplacian to the heat kernel enables us to choose the weights of the graph in a principled manner. The locality preserving character of the Laplacian Eigenmap algorithm makes it relatively insensitive to outliers and noise. A byproduct of this is that the algorithm implicitly emphasizes the natural clusters in the data. Connections to spectral clustering algorithms developed in learning and computer vision (see Shi and Malik, 1997) become very clear. Following the discussion of Roweis and Saul (2000), and Tenenbaum et al (2000), we note that the biological perceptual apparatus is confronted with high dimensional stimuli from which it must recover low dimensional structure. One might argue that if the approach to recovering such low-dimensional structure is inherently local, then a natural clustering will emerge and thus might serve as the basis for the development of categories in biological perception. 1 The Algorithm: Given k points X1, ..., Xk in R^l, we construct a weighted graph with k nodes, one for each point, and the set of edges connecting neighboring points to each other. 1. Step 1.
[Constructing the Graph] We put an edge between nodes i and j if Xi and Xj are

6 0.98254412 133 nips-2001-On Discriminative vs. Generative Classifiers: A comparison of logistic regression and naive Bayes

7 0.97622108 47 nips-2001-Causal Categorization with Bayes Nets

8 0.89378327 98 nips-2001-Information Geometrical Framework for Analyzing Belief Propagation Decoder

9 0.85776812 9 nips-2001-A Generalization of Principal Components Analysis to the Exponential Family

10 0.82421672 137 nips-2001-On the Convergence of Leveraging

11 0.77646148 103 nips-2001-Kernel Feature Spaces and Nonlinear Blind Souce Separation

12 0.76544923 127 nips-2001-Multi Dimensional ICA to Separate Correlated Sources

13 0.76540232 97 nips-2001-Information-Geometrical Significance of Sparsity in Gallager Codes

14 0.76447529 8 nips-2001-A General Greedy Approximation Algorithm with Applications

15 0.75985622 81 nips-2001-Generalization Performance of Some Learning Problems in Hilbert Functional Spaces

16 0.75057232 197 nips-2001-Why Neuronal Dynamics Should Control Synaptic Learning Rules

17 0.74950898 88 nips-2001-Grouping and dimensionality reduction by locally linear embedding

18 0.74945396 114 nips-2001-Learning from Infinite Data in Finite Time

19 0.74741471 190 nips-2001-Thin Junction Trees

20 0.74405897 154 nips-2001-Products of Gaussians