nips nips2001 nips2001-2 knowledge-graph by maker-knowledge-mining

2 nips-2001-3 state neurons for contextual processing


Source: pdf

Author: Ádám Kepecs, S. Raghavachari

Abstract: Neurons receive excitatory inputs via both fast AMPA and slow NMDA type receptors. We find that neurons receiving input via NMDA receptors can have two stable membrane states which are input dependent. Action potentials can only be initiated from the higher voltage state. Similar observations have been made in several brain areas which might be explained by our model. The interactions between the two kinds of inputs lead us to suggest that some neurons may operate in 3 states: disabled, enabled and firing. Such enabled, but non-firing modes can be used to introduce context-dependent processing in neural networks. We provide a simple example and discuss possible implications for neuronal processing and response variability. 1

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 3 state neurons for contextual processing. Adam Kepecs* and Sridhar Raghavachari, Volen Center for Complex Systems, Brandeis University, Waltham, MA 02454, {kepecs,sraghava}@brandeis. [sent-1, score-0.354]

2 We find that neurons receiving input via NMDA receptors can have two stable membrane states which are input dependent. [sent-3, score-0.693]

3 The interactions between the two kinds of inputs lead us to suggest that some neurons may operate in 3 states: disabled, enabled and firing. [sent-6, score-0.353]

4 The NMDA-type receptors are slow (τ_NMDA ≈ 150 ms) and have been mostly investigated for their critical role in the induction of long-term potentiation, which is thought to be the mechanism for storing long-term memories. [sent-11, score-0.22]

5 Crucial to this is the unique voltage dependence of NMDA receptors [6] that requires both the presynaptic neuron to be active and the post-synaptic neuron to be depolarized for the channel to open. [sent-12, score-0.588]
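
This voltage dependence is the magnesium block of the NMDA channel. A minimal sketch, assuming the common Jahr-Stevens form (the exact expression used in the paper is not recoverable from this summary, so the constants are illustrative):

```python
import numpy as np

def nmda_mg_block(v_mv, mg_mM=1.0):
    """Fraction of NMDA channels unblocked at membrane voltage v_mv (mV).

    Jahr-Stevens style magnesium block; the constants below are stand-ins
    for the qualitative voltage dependence described in the text.
    """
    return 1.0 / (1.0 + mg_mM * np.exp(-0.062 * v_mv) / 3.57)

# Near rest (-80 mV) the channel is mostly blocked; depolarization toward
# the up-state relieves the block, which is what makes the NMDA response
# regenerative.
print(nmda_mg_block(-80.0))  # ~0.02
print(nmda_mg_block(-40.0))  # ~0.23
```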

6 However, pharmacological studies which block the NMDA receptors impair a variety of brain processes, suggesting that NMDA receptors also play a role in shaping the dynamic activity of neural networks [10, 3, 8, 11, 2]. [sent-13, score-0.412]

7 Therefore, we wanted to examine the role of NMDA receptors in post-synaptic integration. [sent-14, score-0.191]

8 Harsch and Robinson [4] observed that injection of an NMDA conductance simulating synchronous synaptic input regularized firing while lowering response reliability. [sent-15, score-0.387]

9 Large NMDA inputs in a leaky dendrite showed a large regenerative depolarization. [sent-17, score-0.202]

10 Neurons, however, also possess a variety of potassium currents that are able to limit these large excursions in voltage. [sent-18, score-0.35]

11 In particular, recent observations show that A-type potassium currents are abundant in the dendrites of a variety of neurons [7]. [sent-19, score-0.569]

12 Combining these potassium currents with random NMDA inputs showed that the membrane voltage alternated between two distinct subthreshold states. [sent-20, score-0.785]

13 Similar observations of two-state fluctuations have been made in vivo in several cortical areas and the striatum [17, 9, 1]. [sent-21, score-0.349]

14 The origin and possible functional relevance of these fluctuations have remained a puzzle. [sent-22, score-0.224]

15 We suggest that NMDA-type inputs combined with potassium currents are sufficient to produce such membrane dynamics. [sent-23, score-0.68]

16 Our results lead us to suggest that the fluctuations could be used to represent contextual modulation of neuronal firing. [sent-24, score-0.511]

17 1 NMDA-type input causes 2-state membrane fluctuations. Model: To examine the role of NMDA-type inputs, we built a simple model of a cortical neuron receiving AMPA- and NMDA-type inputs. [sent-26, score-0.799]

18 To capture the spatial extent of neuronal morphology we use a two-compartment model of pyramidal neurons [15]. [sent-27, score-0.248]

19 We represent the soma, proximal dendrites, and the axon lumped into one compartment containing the channels necessary for spike generation (I_Na and I_K). [sent-28, score-0.29]

20 The dendritic compartment includes two potassium currents, a fast-activating I_KA and the slower I_KS, along with a persistent sodium current, I_NaP. [sent-29, score-0.396]

21 The dendrite also receives synaptic input as I_NMDA and I_AMPA. [sent-30, score-0.215]

22 The membrane voltage of the neuron obeys the current balance equation c_m dV_s/dt = −I_leak − I_Na − I_KDR + I_couple, while the dendritic voltage V_d obeys c_m dV_d/dt = −I_leak − I_KA − I_KS − I_NaP − I_NMDA − I_AMPA − I_couple; c_m is the specific membrane capacitance, which is taken to be 1 μF/cm² for both the dendrite and the soma for all cells, and p = 0.… [sent-31, score-0.844]
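
A minimal sketch of the two-compartment update, assuming the usual convention of coupling the compartments with a current g_c(V_d − V_s) scaled by the area fraction p (the displayed equations and several parameter values did not survive extraction, so GC, P, and G_LEAK below are placeholders):

```python
CM = 1.0        # uF/cm^2, specific membrane capacitance (from the text)
G_LEAK = 0.1    # mS/cm^2, leak conductance (value truncated in the text)
E_LEAK = -80.0  # mV, leak reversal potential (from the text)
GC, P = 1.0, 0.5  # coupling conductance and somatic area fraction: assumptions

def euler_step(vs, vd, i_soma, i_dend, dt=0.01):
    """One Euler step of the two current-balance equations (dt in ms).

    i_soma bundles the somatic active currents (I_Na, I_KDR); i_dend
    bundles the dendritic ones (I_KA, I_KS, I_NaP, I_NMDA, I_AMPA).
    """
    leak_s = G_LEAK * (vs - E_LEAK)
    leak_d = G_LEAK * (vd - E_LEAK)
    dvs = (-leak_s - i_soma + GC * (vd - vs) / P) / CM
    dvd = (-leak_d - i_dend + GC * (vs - vd) / (1.0 - P)) / CM
    return vs + dt * dvs, vd + dt * dvd

vs, vd = euler_step(-80.0, -80.0, i_soma=0.0, i_dend=-1.0)
```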

23 The passive leak current in both the soma and dendrites was modeled as I_leak = g_leak(V − E_leak), where g_leak was the leak conductance, which was taken to be 0.… [sent-34, score-0.589]

24 E_leak = −80 mV was the leak reversal potential for both compartments. [sent-36, score-0.207]

25 The sodium current was I_Na = g_Na m∞ h (V_s − E_Na), where g_Na = 45 mS/cm² and the sodium reversal potential E_Na = 55 mV, with m∞(V) = α_m(V)/(α_m(V) + β_m(V)). [sent-38, score-0.17]

26 The delayed rectifier potassium current was I_KDR = g_K n⁴ (V_s − E_K), where g_K = 9 mS/cm² and the potassium reversal potential E_K = −80 mV, with α_n(V) = −0.… [sent-43, score-0.466]

27 …25 mS/cm². The two potassium currents were of the form I_KS = g_KS q (V − V_K), with q∞(V) = 1/(1 + exp(−(V + 50)/2)), τ_q(V) = 200/(exp(−(V + 60)/10) + exp((V + 60)/10)), and g_KS = 0.… [sent-48, score-0.35]
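
The I_KS gating functions are given explicitly above and can be transcribed directly; a short sketch:

```python
import numpy as np

def q_inf(v):
    # Steady-state activation of the slow potassium current I_KS (from the text).
    return 1.0 / (1.0 + np.exp(-(v + 50.0) / 2.0))

def tau_q(v):
    # I_KS activation time constant in ms, peaking near -60 mV (from the text).
    return 200.0 / (np.exp(-(v + 60.0) / 10.0) + np.exp((v + 60.0) / 10.0))

# I_KS = g_KS * q * (V - V_K): the slow gate q is what eventually limits
# the NMDA-driven depolarization and stabilizes the up-state.
print(q_inf(-50.0), tau_q(-60.0))  # 0.5, 100.0
```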

28 …exp(−0.08V)), where S was the activation variable and f denoted the inactivation of NMDA channels due to calcium entry. [sent-53, score-0.13]

29 AMPA and NMDA inputs were modeled as conductance kicks that decayed with τ_AMPA = 5 ms and τ_NMDA = 150 ms. [sent-54, score-0.213]

30 Calcium-dependent inactivation of the NMDA conductance was modeled as a negative feedback, df/dt = (f∞ − f)/2, where f∞ was a shallow sigmoid function that was 1 below a conductance threshold of 2 mS/cm² and was inversely proportional to the NMDA conductance above threshold. [sent-55, score-0.404]
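
A minimal sketch of the synaptic model as described: exponentially decaying conductance kicks plus the calcium-dependent negative feedback on the NMDA conductance. The shape of the "shallow sigmoid" f∞ is not specified, so a hard threshold stands in for it:

```python
import numpy as np

TAU_AMPA, TAU_NMDA = 5.0, 150.0  # ms, decay constants from the text

def decay(g, tau, dt):
    # Conductances decay exponentially between presynaptic events.
    return g * np.exp(-dt / tau)

def f_inf(g_nmda, g_thresh=2.0):
    """Steady state of the calcium-dependent inactivation variable f.

    The text specifies f_inf = 1 below a 2 mS/cm^2 threshold and inversely
    proportional to the NMDA conductance above it; the exact sigmoid is
    not given, so a hard threshold stands in here.
    """
    return 1.0 if g_nmda <= g_thresh else g_thresh / g_nmda

def step_synapse(g_ampa, g_nmda, f, dt, kick_ampa=0.0, kick_nmda=0.0):
    # Each presynaptic spike adds a conductance "kick"; f follows the
    # negative feedback df/dt = (f_inf - f)/2 from the text.
    g_ampa = decay(g_ampa, TAU_AMPA, dt) + kick_ampa
    g_nmda = decay(g_nmda, TAU_NMDA, dt) + kick_nmda
    f = f + dt * (f_inf(g_nmda) - f) / 2.0
    return g_ampa, g_nmda, f
```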

31 Synchronous inputs were modeled as a compound Poisson process representing 100 inputs firing at a rate λ, each spiking with a probability of 0.… [sent-65, score-0.286]
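
A sketch of this compound Poisson input; the per-input spiking probability is truncated above, so p_spike is a placeholder:

```python
import numpy as np

rng = np.random.default_rng(0)

def synchronous_input(rate_hz, t_max_s, n_inputs=100, p_spike=0.5):
    """Compound Poisson input as described: population events arrive at
    rate lambda, and at each event all 100 inputs spike independently with
    some probability (truncated in the text, so p_spike is a placeholder).
    Returns event times (s) and coincident spike counts."""
    n_events = rng.poisson(rate_hz * t_max_s)
    times = np.sort(rng.uniform(0.0, t_max_s, n_events))
    counts = rng.binomial(n_inputs, p_spike, size=n_events)
    return times, counts

times, counts = synchronous_input(rate_hz=30.0, t_max_s=1.0)
```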

32 2 NMDA-induced two-state fluctuations. Figure 1A shows the firing produced by inputs with a high AMPA/NMDA ratio. [sent-70, score-0.42]

33 Figure 1B shows that the same spike train input delivered via synapses with a high NMDA content results in robust two-state membrane behavior. [sent-71, score-0.374]

34 We term the lower and higher voltage states the DOWN and UP states, respectively. [sent-72, score-0.206]

35 In general, the same AMPA input can only elicit spikes in the postsynaptic neuron when the NMDA input switches that neuron into the up-state. [sent-74, score-0.479]

36 Transitions from down to up-state occur when synchronous NMDA inputs depolarize the membrane enough to cause the opening of additional NMDA receptor channels (due to the voltage-dependence of their opening). [sent-75, score-0.542]

37 This results in a regenerative depolarization event, which is limited by the fast opening of I_KA-type channels. [Figure 1: Inputs with a high AMPA/NMDA ratio cause the cell to spike (top trace, g_AMPA = 0.…); time axis in seconds.] [sent-76, score-0.19]

38 Strong NMDA inputs combined with potassium currents (for the same AMPA input) result in fluctuations of the membrane potential between two subthreshold states, with occasional firing due to the AMPA inputs (bottom trace, g_AMPA = 0.…). [sent-79, score-1.09]

39 This up-state is stable because the regenerative nature and long lifetime of NMDA receptor opening keep the membrane depolarized, while the slower I_KS potassium current prevents further depolarization. [sent-82, score-0.552]

40 When input ceases, NMDA channels eventually (τ_NMDA ≈ 150 ms) close and the membrane jumps to the down-state. [sent-83, score-0.339]

41 Since the voltage threshold for spike generation in the somatic/axonal compartment is above the up-state, it acts as a barrier. [sent-85, score-0.278]

42 A number of previous experimental studies have reported similar phenomena in various brain regions [16, 9, 1], where the two states persist even with all intrinsic inward currents blocked but the inputs left intact [17]. [sent-87, score-0.354]

43 Pharmacological block of the potassium currents resulted in prolonged up-states [17]. [sent-88, score-0.385]

44 These experimental results suggested a conceptual model in which two-state fluctuations are (i) input driven and (ii) stabilized by potassium currents. [sent-89, score-0.755]

45 Below, we examine the origins of the two-state fluctuations in light of these findings. [sent-92, score-0.259]

46 3 Analysis of two-state fluctuations. Figure 2A shows the histogram of the membrane potential for a neuron driven by combined AMPA and NMDA input at 30 Hz. [sent-94, score-0.697]
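
A sketch of how the bimodal histogram and the up-state durations analyzed in Figure 2 could be computed from a simulated voltage trace; the splitting voltage between the two modes is illustrative:

```python
import numpy as np

def upstate_durations(v, dt_ms, v_split=-60.0):
    """Two-state analysis of a voltage trace: threshold at a voltage that
    separates the two modes of the histogram (v_split is illustrative) and
    return the durations (ms) of contiguous up-state episodes."""
    up = np.asarray(v) > v_split
    edges = np.diff(up.astype(int))
    starts = np.where(edges == 1)[0] + 1
    ends = np.where(edges == -1)[0] + 1
    if up[0]:
        starts = np.r_[0, starts]
    if up[-1]:
        ends = np.r_[ends, up.size]
    return (ends - starts) * dt_ms

# The bimodal histogram itself (cf. Figure 2A):
# counts, bins = np.histogram(v, bins=100)
```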

47 The distribution of the up-state durations depends on the maximal NMDA conductance and the decay time constant of NMDA (not shown), as well as on the mean rate of NMDA inputs (Figure 2C). [sent-100, score-0.213]

48 Additionally, larger maximal potassium conductances shorten the duration of the up-states. [sent-111, score-0.256]

49 Thus, we predict that the NMDA receptors are intimately involved in shaping the firing characteristics of these neurons. [sent-112, score-0.281]

50 Furthermore, our mechanistic explanation leads to a strong prediction about the functional role of these fluctuations in neuronal processing. [sent-113, score-0.323]

51 3 Contextual processing with NMDA and AMPA pathways. Since the NMDA and AMPA pathways have distinct roles in our model neuron, switching and firing respectively, we suggest the conceptual model shown in Fig. 3A. [sent-114, score-0.319]

52 Without any input, the neuron is in the rest, or disabled, state. [sent-115, score-0.266]

53 Contextual input (via NMDA receptors) can bring the neuron into an enabled state. [sent-116, score-0.311]

54 Informational (for instance, cue or positional) input (via AMPA receptors) can fire a neuron only from this enabled state. [sent-117, score-0.423]
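
The disabled/enabled/firing logic can be summarized as a toy automaton (the biophysical model above is the actual mechanism; this only captures the state logic):

```python
from enum import Enum

class Mode(Enum):
    DISABLED = 0  # down-state: no contextual input, cannot spike
    ENABLED = 1   # up-state: contextual NMDA input present, no spike yet
    FIRING = 2    # reachable only from ENABLED

def update(nmda_on, ampa_spike):
    """Toy automaton for the 3-state picture; thresholds, kinetics and
    timing are abstracted away."""
    if not nmda_on:
        return Mode.DISABLED
    return Mode.FIRING if ampa_spike else Mode.ENABLED

# Informational input alone cannot fire the neuron:
assert update(nmda_on=False, ampa_spike=True) == Mode.DISABLED
```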

55 In the CA1 region of the hippocampus, pyramidal cells receive two distinct, spatially segregated input pathways: the perforant path from cortex and the Schaffer collaterals from the CA3 region. [sent-119, score-0.351]

56 The perforant path has a very large NMDA receptor content [14], which is, interestingly, co-localized with high densities of I_KA conductances [5]. [sent-120, score-0.155]

57 Experimental [13] and theoretical [12] observations suggest that these two pathways carry distinct information. [sent-121, score-0.176]

58 Lisman has suggested that the perforant path carries contextual information and the Schaffer collaterals bring sequence information [12]. [sent-122, score-0.288]

59 Thus, our model seems to apply biophysically, and it suggests a possible way for CA1 neurons to carry out contextual computations. [sent-123, score-0.365]

60 It is known that these cells can fire at specific places in specific contexts. [sent-124, score-0.151]

61 As shown in Fig. 3B, our model neuron can fire spikes due to positional input only when the right context enables it. [sent-126, score-0.413]

62 We note that a requirement for contextual processing is that the two inputs be anatomically segregated, as they are in the CA1 region. [sent-127, score-0.29]

63 However, we stress that the phenomenon of 2-state fluctuations itself is independent of the location of the two kinds of inputs. [sent-128, score-0.224]

64 Contextual input (high NMDA) switches the neuron from the rest state to the up-state. [sent-139, score-0.243]

65 Informational input (high AMPA) causes the neuron to spike only from the up-state. [sent-140, score-0.312]

66 Weak informational input can cause the cell to fire in conjunction with the contextual input (left traces), while strong informational input will not fire the cell in the absence of contextual input (right traces). [sent-142, score-1.17]

67 In this simulation, the soma/proximal dendrite compartment receives AMPA input, while the NMDA input targets the dendritic compartment. [sent-143, score-0.306]

68 We simulated three neurons, each receiving the same informational AMPA input. [sent-145, score-0.262]

69 Each of these neurons also receives a distinct contextual input via NMDA-type receptors. [sent-147, score-0.461]

70 Even though each neuron receives the same strong AMPA input, their firing seems uncorrelated. [sent-151, score-0.272]

71 To evaluate the performance of the network in processing contextual conjunctions, we measured the correlations between the informational input and each contextual input. [sent-152, score-0.4]

72 We then measured the number of spikes emitted by each neuron during each "meaning". [sent-154, score-0.19]

73 Figure 4B shows that the neurons performed well, each tuned to fire preferentially in its appropriate context. [sent-155, score-0.236]

74 4 Discussion. Voltage fluctuations between two subthreshold levels with similar properties are observed in vivo in a variety of brain regions. [sent-157, score-0.316]

75 Our model is in accordance with these data and leads us to a new picture of how these neurons might operate functionally. [sent-158, score-0.137]

76 It has a stable low membrane state from which it cannot fire spikes, which we call the disabled state. [sent-160, score-0.341]

77 It also has a stable depolarized state from which action potentials can be elicited, which we call the enabled state. [sent-161, score-0.19]

78 Additionally, it has a firing state which is only reachable from the enabled state. [sent-162, score-0.234]

79 We suggest that if high- and low-NMDA-content pathways carry separate information, these neurons can compute context/input conjunctions. [Figure 4, panels A-C: three contexts ("objects", "people", "fruit"), the contextual inputs, and the resulting responses.] [sent-164, score-0.235]

80 Three neurons each receive an independent contextual (NMDA) input and a common informational (AMPA) input. [sent-167, score-0.428]

81 Voltage traces showing differences in firing patterns depending upon context. [sent-169, score-0.138]

82 Correlation was measured between the informational spike train and each contextual spike train, smoothed with a Gaussian filter (σ = 60 ms). [sent-172, score-0.438]

83 The most correlated context was defined to be the correct one, and the spikes of all neurons were counted. [sent-173, score-0.177]
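
A sketch of this decoding measure, assuming binary spike trains sampled on a common time grid; function and argument names are illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def best_context(info_train, context_trains, dt_ms=1.0, sigma_ms=60.0):
    """Smooth binary spike trains with a Gaussian (sigma = 60 ms),
    correlate the informational train with each contextual train, and
    call the most correlated context the correct one."""
    sigma = sigma_ms / dt_ms
    s_info = gaussian_filter1d(np.asarray(info_train, float), sigma)
    corrs = [np.corrcoef(s_info,
                         gaussian_filter1d(np.asarray(c, float), sigma))[0, 1]
             for c in context_trains]
    return int(np.argmax(corrs)), corrs
```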

84 If the high-NMDA-content pathway carries contextual information, then it would be in a position to enable or disable a neuron. [sent-175, score-0.2]

85 In the enabled state, AMPA-type informational input could then fire a neuron (Fig. 3B). [sent-176, score-0.527]

86 We have presented a biophysical model for two-state fluctuations that is strongly supported by data. [sent-177, score-0.263]

87 One concern might be that most observations of two-state fluctuations in vivo have been made while the animal was anesthetized, implying that this kind of neuronal dynamics is an artifact of the anesthetized state. [sent-178, score-0.38]

88 However, these fluctuations have been observed under several different kinds of anesthesia, including local anesthesia [16]. [sent-179, score-0.259]

89 Furthermore, it has been shown that the duration of the up-states correlates with orientation selectivity in visual cortical neurons, suggesting that these fluctuations might play a role in information processing. [sent-180, score-0.447]

90 These observations suggest that this phenomenon may be more indicative of a natural state of the cortex than a by-product of anesthesia. [sent-181, score-0.139]

91 When the inputs with different AMPA/NMDA content are anatomically segregated, the NMDA input alone generates voltage fluctuations between a resting and a depolarized state, while the AMPA input causes the neuron to spike when in the up-state. [sent-182, score-0.846]

92 This mechanism naturally leads to the suggestion that such two-state fluctuations could have a function in computing context/input conjunctions. [sent-183, score-0.253]

93 In summary, we suggest that the known biophysical mechanisms of some neurons can enable them to operate as 3-state devices. [sent-184, score-0.204]

94 In this mode of operation, the neurons could be used for contextual processing. [sent-185, score-0.324]

95 Stimulus dependence of two-state fluctuations of membrane potential in cat visual cortex. [sent-193, score-0.454]

96 Postsynaptic variability of firing rates in rat cortical neurons: the role of input synchronization and synaptic NMDA receptor conductance. [sent-222, score-0.931]

97 K+ channel regulation of signal propagation in dendrites of hippocampal pyramidal neurons. [sent-227, score-0.188]

98 Ventral tegmental area afferents to the prefrontal cortex maintain membrane potential 'up' states in pyramidal neurons via D1 dopamine receptors. [sent-261, score-0.552]

99 Spontaneous firing patterns of identified spiny neurons in the rat neostriatum. [sent-317, score-0.265]

100 The origins of two-state spontaneous fluctuations of neostriatal spiny neurons. [sent-323, score-0.294]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('nmda', 0.561), ('ampa', 0.336), ('fluctuations', 0.224), ('potassium', 0.21), ('contextual', 0.2), ('membrane', 0.199), ('currents', 0.14), ('receptors', 0.138), ('neuron', 0.137), ('neurons', 0.124), ('conductance', 0.123), ('voltage', 0.114), ('fire', 0.112), ('firing', 0.106), ('informational', 0.104), ('enabled', 0.098), ('inputs', 0.09), ('eak', 0.088), ('pyramidal', 0.078), ('dendrite', 0.077), ('input', 0.076), ('lisman', 0.071), ('pathways', 0.07), ('spike', 0.067), ('channels', 0.064), ('compartment', 0.062), ('dendrites', 0.062), ('dendritic', 0.062), ('depolarized', 0.062), ('sodium', 0.062), ('exp', 0.059), ('receptor', 0.056), ('soma', 0.056), ('spikes', 0.053), ('disabled', 0.053), ('perforant', 0.053), ('tnmda', 0.053), ('role', 0.053), ('mv', 0.052), ('opening', 0.052), ('cal', 0.049), ('synchronous', 0.049), ('hippocampal', 0.048), ('states', 0.046), ('conductances', 0.046), ('reversal', 0.046), ('vivo', 0.046), ('cortical', 0.046), ('neuronal', 0.046), ('brain', 0.046), ('leak', 0.042), ('segregated', 0.042), ('suggest', 0.041), ('cell', 0.039), ('biophysical', 0.039), ('dopamine', 0.039), ('shaping', 0.037), ('green', 0.036), ('anesthesia', 0.035), ('axon', 0.035), ('colbert', 0.035), ('collaterals', 0.035), ('conjuctions', 0.035), ('gka', 0.035), ('gks', 0.035), ('gna', 0.035), ('gnap', 0.035), ('harsch', 0.035), ('hoffman', 0.035), ('inactivation', 0.035), ('inmda', 0.035), ('johnston', 0.035), ('magee', 0.035), ('origins', 0.035), ('positional', 0.035), ('prolonged', 0.035), ('regenerative', 0.035), ('spiny', 0.035), ('tampa', 0.035), ('cortex', 0.035), ('receiving', 0.034), ('observations', 0.033), ('synaptic', 0.033), ('ek', 0.032), ('synapses', 0.032), ('traces', 0.032), ('cause', 0.032), ('intrinsic', 0.032), ('distinct', 0.032), ('potential', 0.031), ('anesthetized', 0.031), ('calcium', 0.031), ('ena', 0.031), ('gk', 0.031), ('ina', 0.031), ('schaffer', 0.031), ('state', 0.03), ('neurosci', 0.03), ('mechanism', 0.029), ('receives', 0.029)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000002 2 nips-2001-3 state neurons for contextual processing

Author: Ádám Kepecs, S. Raghavachari

Abstract: Neurons receive excitatory inputs via both fast AMPA and slow NMDA type receptors. We find that neurons receiving input via NMDA receptors can have two stable membrane states which are input dependent. Action potentials can only be initiated from the higher voltage state. Similar observations have been made in several brain areas which might be explained by our model. The interactions between the two kinds of inputs lead us to suggest that some neurons may operate in 3 states: disabled, enabled and firing. Such enabled, but non-firing modes can be used to introduce context-dependent processing in neural networks. We provide a simple example and discuss possible implications for neuronal processing and response variability. 1

2 0.20040171 72 nips-2001-Exact differential equation population dynamics for integrate-and-fire neurons

Author: Julian Eggert, Berthold Bäuml

Abstract: Mesoscopical, mathematical descriptions of dynamics of populations of spiking neurons are getting increasingly important for the understanding of large-scale processes in the brain using simulations. In our previous work, integral equation formulations for population dynamics have been derived for a special type of spiking neurons. For Integrate-and-Fire type neurons, these formulations were only approximately correct. Here, we derive a mathematically compact, exact population dynamics formulation for Integrate-and-Fire type neurons. It can be shown quantitatively in simulations that the numerical correspondence with microscopically modeled neuronal populations is excellent. 1 Introduction and motivation. The goal of the population dynamics approach is to model the time course of the collective activity of entire populations of functionally and dynamically similar neurons in a compact way, using a higher descriptional level than that of single neurons and spikes. The usual observable at the level of neuronal populations is the population-averaged instantaneous firing rate A(t), with A(t)Δt being the number of neurons in the population that release a spike in an interval [t, t+Δt). Population dynamics are formulated in such a way that they match quantitatively the time course of a given A(t), either gained experimentally or by microscopical, detailed simulation. At least three main reasons can be formulated which underline the importance of the population dynamics approach for computational neuroscience. First, it enables the simulation of extensive networks involving a massive number of neurons and connections, which is typically the case when dealing with biologically realistic functional models that go beyond the single neuron level. Second, it increases the analytical understanding of large-scale neuronal dynamics, opening the way towards better control and predictive capabilities when dealing with large networks. Third, it enables a systematic embedding of the numerous neuronal models operating at different descriptional scales into a generalized theoretic framework, explaining the relationships, dependencies and derivations of the respective models. Early efforts on population dynamics approaches date back as early as 1972, to the work of Wilson and Cowan [8] and Knight [4], which laid the basis for all current population-averaged graded-response models (see e.g. [6] for modeling work using these models). More recently, population-based approaches for spiking neurons were developed, mainly by Gerstner [3, 2] and Knight [5]. In our own previous work [1], we have developed a theoretical framework which enables us to systematize and simulate a wide range of models for population-based dynamics. It was shown that the equations of the framework produce results that agree quantitatively well with detailed simulations using spiking neurons, so that they can be used for realistic simulations involving networks with large numbers of spiking neurons. Nevertheless, for neuronal populations composed of Integrate-and-Fire (I&F) neurons, this framework was only correct in an approximation. In this paper, we derive the exact population dynamics formulation for I&F neurons. This is achieved by reducing the I&F population dynamics to a point process and by taking advantage of the particular properties of I&F neurons.
2.1 Background: Integrate-and-Fire dynamics. Differential form. We start with the standard Integrate-and-Fire (I&F) model in the form of the well-known differential equation [7] (1), which describes the dynamics of the membrane potential V_i of a neuron i that is modeled as a single compartment with RC-circuit characteristics. The membrane relaxation time is in this case τ = RC, with R being the membrane resistance and C the membrane capacitance. The resting potential v_rest is the stationary potential that is approached in the no-input case. The input arriving from other neurons is described in the form of a current j_i. In addition to eq. (1), which describes the integrate part of the I&F model, the neuronal dynamics are completed by a nonlinear step. Every time the membrane potential V_i reaches a fixed threshold θ from below, V_i is lowered by a fixed amount Δ > 0, and integration according to eq. (1) starts again from the new value of the membrane potential: V_i → V_i − Δ if V_i(t) = θ (from below). (2) At the same time, it is said that the release of a spike occurred (i.e., the neuron fired), and the time t_i = t of this singular event is stored. Here t_i indicates the time of the most recent spike. Storing all the last firing times, we gain the sequence of spikes {t_i^j} (spike ordering index j, neuronal index i). 2.2 Integral form. Now we look at the single neuron in a neuronal compound. We assume that the input current contribution j_i from presynaptic spiking neurons can be described using the presynaptic spike times t_j^f, a response function ε, and connection weights W_{i,j}: j_i(t) = Σ_j W_{i,j} Σ_f ε(t − t_j^f). (3) Integrating the I&F equation (1) beginning at the last spiking time t_i, which determines the initial condition by V_i(t_i) = V_i(t_i − 0) − Δ, where V_i(t_i − 0) is the membrane potential just before the neuron spikes, we get V_i(t) = v_rest + η(t − t_i) + Σ_j W_{i,j} Σ_f α(t − t_i; t − t_j^f), (4) with the refractory function η(s) = −(v_rest − V_i(t_i)) e^{−s/τ} (5) and the alpha-function …
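
Since eq. (1) is not reproduced in this summary, here is a minimal sketch of the described I&F dynamics, assuming the standard leaky form τ dV/dt = −(V − v_rest) + j(t) together with the threshold/reset rule of eq. (2); all parameter values are illustrative:

```python
import numpy as np

def integrate_and_fire(j, dt, tau=10.0, v_rest=-70.0, theta=-50.0, delta=15.0):
    """Minimal I&F matching the verbal description: the standard leaky form
    tau dV/dt = -(V - v_rest) + j(t) is assumed for eq. (1); on reaching
    theta the potential is lowered by the fixed amount delta (eq. 2) and
    the spike time is recorded."""
    v, spikes = v_rest, []
    for k, jk in enumerate(j):
        v += dt * (-(v - v_rest) + jk) / tau
        if v >= theta:
            spikes.append(k * dt)
            v -= delta
    return np.array(spikes)

spike_times = integrate_and_fire(j=np.full(2000, 25.0), dt=0.1)
```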

3 0.1913249 27 nips-2001-Activity Driven Adaptive Stochastic Resonance

Author: Gregor Wenning, Klaus Obermayer

Abstract: Cortical neurons might be considered as threshold elements integrating in parallel many excitatory and inhibitory inputs. Due to the apparent variability of cortical spike trains this yields a strongly fluctuating membrane potential, such that threshold crossings are highly irregular. Here we study how a neuron could maximize its sensitivity w.r.t. a relatively small subset of excitatory input. Weak signals embedded in fluctuations are the natural realm of stochastic resonance. The neuron's response is described in a hazard-function approximation applied to an Ornstein-Uhlenbeck process. We analytically derive an optimality criterion and give a learning rule for the adjustment of the membrane fluctuations, such that the sensitivity is maximal, exploiting stochastic resonance. We show that adaptation depends only on quantities that could easily be estimated locally (in space and time) by the neuron. The main results are compared with simulations of a biophysically more realistic neuron model. 1

4 0.15663135 37 nips-2001-Associative memory in realistic neuronal networks

Author: Peter E. Latham

Abstract: Almost two decades ago, Hopfield [1] showed that networks of highly reduced model neurons can exhibit multiple attracting fixed points, thus providing a substrate for associative memory. It is still not clear, however, whether realistic neuronal networks can support multiple attractors. The main difficulty is that neuronal networks in vivo exhibit a stable background state at low firing rate, typically a few Hz. Embedding attractors is easy; doing so without destabilizing the background is not. Previous work [2, 3] focused on the sparse coding limit, in which a vanishingly small number of neurons are involved in any memory. Here we investigate the case in which the number of neurons involved in a memory scales with the number of neurons in the network. In contrast to the sparse coding limit, we find that multiple attractors can co-exist robustly with a stable background state. Mean field theory is used to understand how the behavior of the network scales with its parameters, and simulations with analog neurons are presented. One of the most important features of the nervous system is its ability to perform associative memory. It is generally believed that associative memory is implemented using attractor networks - experimental studies point in that direction [4-7], and there are virtually no competing theoretical models. Perhaps surprisingly, however, it is still an open theoretical question whether attractors can exist in realistic neuronal networks. The

5 0.1440682 42 nips-2001-Bayesian morphometry of hippocampal cells suggests same-cell somatodendritic repulsion

Author: Giorgio A. Ascoli, Alexei V. Samsonovich

Abstract: Visual inspection of neurons suggests that dendritic orientation may be determined both by internal constraints (e.g. membrane tension) and by external vector fields (e.g. neurotrophic gradients). For example, basal dendrites of pyramidal cells appear to fan out nicely. This regular orientation is hard to justify completely with a general tendency to grow straight, given the zigzags observed experimentally. Instead, dendrites could (A) favor a fixed (“external”) direction, or (B) repel from their own soma. To investigate these possibilities quantitatively, reconstructed hippocampal cells were subjected to Bayesian analysis. The statistical model combined linearly factors A and B, as well as the tendency to grow straight. For all morphological classes, B was found to be significantly positive and consistently greater than A. In addition, when dendrites were artificially re-oriented according to this model, the resulting structures closely resembled real morphologies. These results suggest that somatodendritic repulsion may play a role in determining dendritic orientation. Since hippocampal cells are very densely packed and their dendritic trees highly overlap, the repulsion must be cell-specific. We discuss possible mechanisms underlying such specificity. 1 Introduction. The study of brain dynamics and development at the cellular level would greatly benefit from a standardized, accurate and yet succinct statistical model characterizing the morphology of major neuronal classes. Such model could also provide a basis for simulation of anatomically realistic virtual neurons [1]. The model should accurately distinguish among different neuronal classes: a morphological difference between classes would be captured by a difference in model parameters and reproduced in generated virtual neurons. In addition, the model should be self-consistent: there should be no statistical difference in model parameters measured from real neurons of a given class and from virtual neurons of the same class. The assumption that a simple statistical model of this sort exists relies on the similarity of average environmental and homeostatic conditions encountered by individual neurons during development and on the limited amount of genetic information that underlies differentiation of neuronal classes. Previous research in computational neuroanatomy has mainly focused on the topology and internal geometry of dendrites (i.e., the properties described in “dendrograms”) [2,3]. Recently, we attempted to include spatial orientation in the models, thus generating virtual neurons in 3D [4]. Dendritic growth was assumed to deviate from the straight direction both randomly and based on a constant bias in a given direction, or “tropism”. Different models of tropism (e.g. along a fixed axis, towards a plane, or away from the soma) had dramatic effects on the shape of virtual neurons [5]. Our current strategy is to split the problem of finding a statistical model describing neuronal morphology in two parts. First, we maintain that the topology and the internal geometry of a particular dendritic tree can be described independently of its 3D embedding (i.e., the set of local dendritic orientations). At the same time, one and the same internal geometry (e.g., the experimental dendrograms obtained from real neurons) may have many equally plausible 3D embeddings that are statistically consistent with the anatomical characteristics of that neuronal class. 
The present work aims at finding a minimal statistical model describing local dendritic orientation in experimentally reconstructed hippocampal principal cells. Hippocampal neurons have a polarized shape: their dendrites tend to grow from the soma as if enclosed in cones. In pyramidal cells, basal and apical dendrites invade opposite hemispaces (fig. 1A), while granule cell dendrites all invade the same hemispace. This behavior could be caused by a tendency to grow towards the layers of incoming fibers to establish synapses. Such tendency would correspond to a tropism in a direction roughly parallel to the cell main axis. Alternatively, dendrites could initially stem in the appropriate (possibly genetically determined) directions, and then continue to grow approximately in a radial direction from the soma. A close inspection of pyramidal (basal) trees suggests that dendrites may indeed be repelled from their soma (Fig. 1B). A typical dendrite may reorient itself (arrow) to grow nearly straight along a radius from the soma. Remarkably, this happens even after many turns, when the initial direction is lost. Such behavior may be hard to explain without tropism. If the deviations from straight growth were random, one should be able to “remodel” the trees by measuring and reproducing the statistics of local turn angles, assuming its independence of dendritic orientation and location. Figure 1C shows the cell from 1A after such remodeling. In this case basal and apical dendrites retain only their initial (stemming) orientations from the original data. The resulting “cotton ball” suggests that dendritic turns are not independent of dendritic orientation. In this paper, we use Bayesian analysis to quantify the dendritic tropism. 2 Methods. Digital files of fully reconstructed rat hippocampal pyramidal cells (24 CA3 and 23 CA1 neurons) were kindly provided by Dr. D. Amaral. The overall morphology of these cells, as well as the experimental acquisition methods, were extensively described [6]. In these files, dendrites are represented as (branching) chains of cylindrical sections. Each section is connected to one other section in the path to the soma, and may be connected on the other extremity to two other sections (bifurcation), one other section (continuation point), or no other section (terminal tip). Each section is described in the file by its ending point coordinates, its diameter and its

6 0.1333473 141 nips-2001-Orientation-Selective aVLSI Spiking Neurons

7 0.1219467 96 nips-2001-Information-Geometric Decomposition in Spike Analysis

8 0.11869392 197 nips-2001-Why Neuronal Dynamics Should Control Synaptic Learning Rules

9 0.099722892 54 nips-2001-Contextual Modulation of Target Saliency

10 0.082373865 49 nips-2001-Circuits for VLSI Implementation of Temporally Asymmetric Hebbian Learning

11 0.082189716 174 nips-2001-Spike timing and the coding of naturalistic sounds in a central auditory area of songbirds

12 0.077970944 87 nips-2001-Group Redundancy Measures Reveal Redundancy Reduction in the Auditory Pathway

13 0.075119838 142 nips-2001-Orientational and Geometric Determinants of Place and Head-direction

14 0.070918486 23 nips-2001-A theory of neural integration in the head-direction system

15 0.066656269 112 nips-2001-Learning Spike-Based Correlations and Conditional Probabilities in Silicon

16 0.059471533 73 nips-2001-Eye movements and the maturation of cortical orientation selectivity

17 0.058833875 65 nips-2001-Effective Size of Receptive Fields of Inferior Temporal Visual Cortex Neurons in Natural Scenes

18 0.057744693 3 nips-2001-ACh, Uncertainty, and Cortical Inference

19 0.056328282 12 nips-2001-A Model of the Phonological Loop: Generalization and Binding

20 0.054132055 82 nips-2001-Generating velocity tuning by asymmetric recurrent connections


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.123), (1, -0.276), (2, -0.138), (3, 0.059), (4, 0.105), (5, 0.054), (6, 0.084), (7, -0.048), (8, -0.077), (9, -0.021), (10, 0.025), (11, -0.049), (12, 0.112), (13, -0.002), (14, -0.001), (15, -0.029), (16, 0.056), (17, 0.0), (18, -0.075), (19, -0.109), (20, 0.099), (21, -0.045), (22, -0.027), (23, 0.035), (24, 0.016), (25, 0.009), (26, -0.043), (27, 0.059), (28, -0.084), (29, 0.077), (30, 0.074), (31, -0.057), (32, 0.007), (33, -0.043), (34, -0.09), (35, -0.074), (36, -0.178), (37, 0.267), (38, 0.002), (39, 0.08), (40, 0.071), (41, -0.057), (42, -0.161), (43, -0.115), (44, 0.03), (45, -0.003), (46, -0.076), (47, -0.131), (48, -0.042), (49, -0.032)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.97153896 2 nips-2001-3 state neurons for contextual processing

Author: Ádám Kepecs, S. Raghavachari

Abstract: Neurons receive excitatory inputs via both fast AMPA and slow NMDA type receptors. We find that neurons receiving input via NMDA receptors can have two stable membrane states which are input dependent. Action potentials can only be initiated from the higher voltage state. Similar observations have been made in several brain areas which might be explained by our model. The interactions between the two kinds of inputs lead us to suggest that some neurons may operate in 3 states: disabled, enabled and firing. Such enabled, but non-firing modes can be used to introduce context-dependent processing in neural networks. We provide a simple example and discuss possible implications for neuronal processing and response variability. 1

2 0.71329939 27 nips-2001-Activity Driven Adaptive Stochastic Resonance

Author: Gregor Wenning, Klaus Obermayer

Abstract: Cortical neurons might be considered as threshold elements integrating in parallel many excitatory and inhibitory inputs. Due to the apparent variability of cortical spike trains this yields a strongly fluctuating membrane potential, such that threshold crossings are highly irregular. Here we study how a neuron could maximize its sensitivity w.r.t. a relatively small subset of excitatory input. Weak signals embedded in fluctuations are the natural realm of stochastic resonance. The neuron's response is described in a hazard-function approximation applied to an Ornstein-Uhlenbeck process. We analytically derive an optimality criterion and give a learning rule for the adjustment of the membrane fluctuations, such that the sensitivity is maximal, exploiting stochastic resonance. We show that adaptation depends only on quantities that could easily be estimated locally (in space and time) by the neuron. The main results are compared with simulations of a biophysically more realistic neuron model. 1

3 0.66498649 42 nips-2001-Bayesian morphometry of hippocampal cells suggests same-cell somatodendritic repulsion

Author: Giorgio A. Ascoli, Alexei V. Samsonovich

Abstract: Visual inspection of neurons suggests that dendritic orientation may be determined both by internal constraints (e.g. membrane tension) and by external vector fields (e.g. neurotrophic gradients). For example, basal dendrites of pyramidal cells appear to fan out nicely. This regular orientation is hard to justify completely with a general tendency to grow straight, given the zigzags observed experimentally. Instead, dendrites could (A) favor a fixed (“external”) direction, or (B) repel from their own soma. To investigate these possibilities quantitatively, reconstructed hippocampal cells were subjected to Bayesian analysis. The statistical model combined linearly factors A and B, as well as the tendency to grow straight. For all morphological classes, B was found to be significantly positive and consistently greater than A. In addition, when dendrites were artificially re-oriented according to this model, the resulting structures closely resembled real morphologies. These results suggest that somatodendritic repulsion may play a role in determining dendritic orientation. Since hippocampal cells are very densely packed and their dendritic trees highly overlap, the repulsion must be cell-specific. We discuss possible mechanisms underlying such specificity. 1 Introduction. The study of brain dynamics and development at the cellular level would greatly benefit from a standardized, accurate and yet succinct statistical model characterizing the morphology of major neuronal classes. Such model could also provide a basis for simulation of anatomically realistic virtual neurons [1]. The model should accurately distinguish among different neuronal classes: a morphological difference between classes would be captured by a difference in model parameters and reproduced in generated virtual neurons. In addition, the model should be self-consistent: there should be no statistical difference in model parameters measured from real neurons of a given class and from virtual neurons of the same class. The assumption that a simple statistical model of this sort exists relies on the similarity of average environmental and homeostatic conditions encountered by individual neurons during development and on the limited amount of genetic information that underlies differentiation of neuronal classes. Previous research in computational neuroanatomy has mainly focused on the topology and internal geometry of dendrites (i.e., the properties described in “dendrograms”) [2,3]. Recently, we attempted to include spatial orientation in the models, thus generating virtual neurons in 3D [4]. Dendritic growth was assumed to deviate from the straight direction both randomly and based on a constant bias in a given direction, or “tropism”. Different models of tropism (e.g. along a fixed axis, towards a plane, or away from the soma) had dramatic effects on the shape of virtual neurons [5]. Our current strategy is to split the problem of finding a statistical model describing neuronal morphology in two parts. First, we maintain that the topology and the internal geometry of a particular dendritic tree can be described independently of its 3D embedding (i.e., the set of local dendritic orientations). At the same time, one and the same internal geometry (e.g., the experimental dendrograms obtained from real neurons) may have many equally plausible 3D embeddings that are statistically consistent with the anatomical characteristics of that neuronal class. 
The present work aims at finding a minimal statistical model describing local dendritic orientation in experimentally reconstructed hippocampal principal cells. Hippocampal neurons have a polarized shape: their dendrites tend to grow from the soma as if enclosed in cones. In pyramidal cells, basal and apical dendrites invade opposite hemispaces (fig. 1A), while granule cell dendrites all invade the same hemispace. This behavior could be caused by a tendency to grow towards the layers of incoming fibers to establish synapses. Such tendency would correspond to a tropism in a direction roughly parallel to the cell main axis. Alternatively, dendrites could initially stem in the appropriate (possibly genetically determined) directions, and then continue to grow approximately in a radial direction from the soma. A close inspection of pyramidal (basal) trees suggests that dendrites may indeed be repelled from their soma (Fig. 1B). A typical dendrite may reorient itself (arrow) to grow nearly straight along a radius from the soma. Remarkably, this happens even after many turns, when the initial direction is lost. Such behavior may be hard to explain without tropism. If the deviations from straight growth were random, one should be able to “remodel” the trees by measuring and reproducing the statistics of local turn angles, assuming its independence of dendritic orientation and location. Figure 1C shows the cell from 1A after such remodeling. In this case basal and apical dendrites retain only their initial (stemming) orientations from the original data. The resulting “cotton ball” suggests that dendritic turns are not independent of dendritic orientation. In this paper, we use Bayesian analysis to quantify the dendritic tropism. 2 Methods. Digital files of fully reconstructed rat hippocampal pyramidal cells (24 CA3 and 23 CA1 neurons) were kindly provided by Dr. D. Amaral. The overall morphology of these cells, as well as the experimental acquisition methods, were extensively described [6]. In these files, dendrites are represented as (branching) chains of cylindrical sections. Each section is connected to one other section in the path to the soma, and may be connected on the other extremity to two other sections (bifurcation), one other section (continuation point), or no other section (terminal tip). Each section is described in the file by its ending point coordinates, its diameter and its

4 0.6543175 72 nips-2001-Exact differential equation population dynamics for integrate-and-fire neurons

Author: Julian Eggert, Berthold Bäuml

Abstract: Mesoscopical, mathematical descriptions of dynamics of populations of spiking neurons are getting increasingly important for the understanding of large-scale processes in the brain using simulations. In our previous work, integral equation formulations for population dynamics have been derived for a special type of spiking neurons. For Integrate-and-Fire type neurons, these formulations were only approximately correct. Here, we derive a mathematically compact, exact population dynamics formulation for Integrate-and-Fire type neurons. It can be shown quantitatively in simulations that the numerical correspondence with microscopically modeled neuronal populations is excellent. 1 Introduction and motivation. The goal of the population dynamics approach is to model the time course of the collective activity of entire populations of functionally and dynamically similar neurons in a compact way, using a higher descriptional level than that of single neurons and spikes. The usual observable at the level of neuronal populations is the population-averaged instantaneous firing rate A(t), with A(t)Δt being the number of neurons in the population that release a spike in an interval [t, t+Δt). Population dynamics are formulated in such a way that they match quantitatively the time course of a given A(t), either gained experimentally or by microscopical, detailed simulation. At least three main reasons can be formulated which underline the importance of the population dynamics approach for computational neuroscience. First, it enables the simulation of extensive networks involving a massive number of neurons and connections, which is typically the case when dealing with biologically realistic functional models that go beyond the single neuron level. Second, it increases the analytical understanding of large-scale neuronal dynamics, opening the way towards better control and predictive capabilities when dealing with large networks. Third, it enables a systematic embedding of the numerous neuronal models operating at different descriptional scales into a generalized theoretic framework, explaining the relationships, dependencies and derivations of the respective models. Early efforts on population dynamics approaches date back as early as 1972, to the work of Wilson and Cowan [8] and Knight [4], which laid the basis for all current population-averaged graded-response models (see e.g. [6] for modeling work using these models). More recently, population-based approaches for spiking neurons were developed, mainly by Gerstner [3, 2] and Knight [5]. In our own previous work [1], we have developed a theoretical framework which enables us to systematize and simulate a wide range of models for population-based dynamics. It was shown that the equations of the framework produce results that agree quantitatively well with detailed simulations using spiking neurons, so that they can be used for realistic simulations involving networks with large numbers of spiking neurons. Nevertheless, for neuronal populations composed of Integrate-and-Fire (I&F) neurons, this framework was only correct in an approximation. In this paper, we derive the exact population dynamics formulation for I&F neurons. This is achieved by reducing the I&F population dynamics to a point process and by taking advantage of the particular properties of I&F neurons.
2.1 Background: Integrate-and-Fire dynamics. Differential form. We start with the standard Integrate-and-Fire (I&F) model in the form of the well-known differential equation [7] (1), which describes the dynamics of the membrane potential V_i of a neuron i that is modeled as a single compartment with RC-circuit characteristics. The membrane relaxation time is in this case τ = RC, with R being the membrane resistance and C the membrane capacitance. The resting potential v_rest is the stationary potential that is approached in the no-input case. The input arriving from other neurons is described in the form of a current j_i. In addition to eq. (1), which describes the integrate part of the I&F model, the neuronal dynamics are completed by a nonlinear step. Every time the membrane potential V_i reaches a fixed threshold θ from below, V_i is lowered by a fixed amount Δ > 0, and integration according to eq. (1) starts again from the new value of the membrane potential: V_i → V_i − Δ if V_i(t) = θ (from below). (2) At the same time, it is said that the release of a spike occurred (i.e., the neuron fired), and the time t_i = t of this singular event is stored. Here t_i indicates the time of the most recent spike. Storing all the last firing times, we gain the sequence of spikes {t_i^j} (spike ordering index j, neuronal index i). 2.2 Integral form. Now we look at the single neuron in a neuronal compound. We assume that the input current contribution j_i from presynaptic spiking neurons can be described using the presynaptic spike times t_j^f, a response function ε, and connection weights W_{i,j}: j_i(t) = Σ_j W_{i,j} Σ_f ε(t − t_j^f). (3) Integrating the I&F equation (1) beginning at the last spiking time t_i, which determines the initial condition by V_i(t_i) = V_i(t_i − 0) − Δ, where V_i(t_i − 0) is the membrane potential just before the neuron spikes, we get V_i(t) = v_rest + η(t − t_i) + Σ_j W_{i,j} Σ_f α(t − t_i; t − t_j^f), (4) with the refractory function η(s) = −(v_rest − V_i(t_i)) e^{−s/τ} (5) and the alpha-function …

5 0.53532517 141 nips-2001-Orientation-Selective aVLSI Spiking Neurons

Author: Shih-Chii Liu, Jörg Kramer, Giacomo Indiveri, Tobi Delbrück, Rodney J. Douglas

Abstract: We describe a programmable multi-chip VLSI neuronal system that can be used for exploring spike-based information processing models. The system consists of a silicon retina, a PIC microcontroller, and a transceiver chip whose integrate-and-fire neurons are connected in a soft winner-take-all architecture. The circuit on this multi-neuron chip approximates a cortical microcircuit. The neurons can be configured for different computational properties by the virtual connections of a selected set of pixels on the silicon retina. The virtual wiring between the different chips is effected by an event-driven communication protocol that uses asynchronous digital pulses, similar to spikes in a neuronal system. We used the multi-chip spike-based system to synthesize orientation-tuned neurons using both a feedforward model and a feedback model. The performance of our analog hardware spiking model matched the experimental observations and digital simulations of continuous-valued neurons. The multi-chip VLSI system has advantages over computer neuronal models in that it is real-time, and the computational time does not scale with the size of the neuronal network.

6 0.4293887 37 nips-2001-Associative memory in realistic neuronal networks

7 0.4197157 197 nips-2001-Why Neuronal Dynamics Should Control Synaptic Learning Rules

8 0.38145056 142 nips-2001-Orientational and Geometric Determinants of Place and Head-direction

9 0.36507484 11 nips-2001-A Maximum-Likelihood Approach to Modeling Multisensory Enhancement

10 0.31346673 96 nips-2001-Information-Geometric Decomposition in Spike Analysis

11 0.29308003 3 nips-2001-ACh, Uncertainty, and Cortical Inference

12 0.2865966 177 nips-2001-Switch Packet Arbitration via Queue-Learning

13 0.27193555 12 nips-2001-A Model of the Phonological Loop: Generalization and Binding

14 0.26320735 174 nips-2001-Spike timing and the coding of naturalistic sounds in a central auditory area of songbirds

15 0.25508341 49 nips-2001-Circuits for VLSI Implementation of Temporally Asymmetric Hebbian Learning

16 0.25128987 176 nips-2001-Stochastic Mixed-Signal VLSI Architecture for High-Dimensional Kernel Machines

17 0.25039628 65 nips-2001-Effective Size of Receptive Fields of Inferior Temporal Visual Cortex Neurons in Natural Scenes

18 0.24572225 73 nips-2001-Eye movements and the maturation of cortical orientation selectivity

19 0.22956415 54 nips-2001-Contextual Modulation of Target Saliency

20 0.22765812 160 nips-2001-Reinforcement Learning and Time Perception -- a Model of Animal Experiments


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(14, 0.02), (17, 0.012), (19, 0.024), (27, 0.07), (30, 0.048), (38, 0.067), (39, 0.01), (62, 0.023), (72, 0.019), (74, 0.016), (79, 0.484), (83, 0.01), (91, 0.11)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.92704099 35 nips-2001-Analysis of Sparse Bayesian Learning

Author: Anita C. Faul, Michael E. Tipping

Abstract: The recent introduction of the 'relevance vector machine' has effectively demonstrated how sparsity may be obtained in generalised linear models within a Bayesian framework. Using a particular form of Gaussian parameter prior, 'learning' is the maximisation, with respect to hyperparameters, of the marginal likelihood of the data. This paper studies the properties of that objective function, and demonstrates that conditioned on an individual hyperparameter, the marginal likelihood has a unique maximum which is computable in closed form. It is further shown that if a derived 'sparsity criterion' is satisfied, this maximum is exactly equivalent to 'pruning' the corresponding parameter from the model. 1

same-paper 2 0.91813082 2 nips-2001-3 state neurons for contextual processing

Author: Ádám Kepecs, S. Raghavachari

Abstract: Neurons receive excitatory inputs via both fast AMPA and slow NMDA type receptors. We find that neurons receiving input via NMDA receptors can have two stable membrane states which are input dependent. Action potentials can only be initiated from the higher voltage state. Similar observations have been made in several brain areas which might be explained by our model. The interactions between the two kinds of inputs lead us to suggest that some neurons may operate in 3 states: disabled, enabled and firing. Such enabled, but non-firing modes can be used to introduce context-dependent processing in neural networks. We provide a simple example and discuss possible implications for neuronal processing and response variability. 1

3 0.89072889 115 nips-2001-Linear-time inference in Hierarchical HMMs

Author: Kevin P. Murphy, Mark A. Paskin

Abstract: The hierarchical hidden Markov model (HHMM) is a generalization of the hidden Markov model (HMM) that models sequences with structure at many length/time scales [FST98]. Unfortunately, the original infertime, where is ence algorithm is rather complicated, and takes the length of the sequence, making it impractical for many domains. In this paper, we show how HHMMs are a special kind of dynamic Bayesian network (DBN), and thereby derive a much simpler inference algorithm, which only takes time. Furthermore, by drawing the connection between HHMMs and DBNs, we enable the application of many standard approximation techniques to further speed up inference. ¥ ©§ £ ¨¦¥¤¢ © £ ¦¥¤¢

4 0.86839402 78 nips-2001-Fragment Completion in Humans and Machines

Author: David Jacobs, Bas Rokers, Archisman Rudra, Zili Liu

Abstract: Partial information can trigger a complete memory. At the same time, human memory is not perfect. A cue can contain enough information to specify an item in memory, but fail to trigger that item. In the context of word memory, we present experiments that demonstrate some basic patterns in human memory errors. We use cues that consist of word fragments. We show that short and long cues are completed more accurately than medium length ones and study some of the factors that lead to this behavior. We then present a novel computational model that shows some of the flexibility and patterns of errors that occur in human memory. This model iterates between bottom-up and top-down computations. These are tied together using a Markov model of words that allows memory to be accessed with a simple feature set, and enables a bottom-up process to compute a probability distribution of possible completions of word fragments, in a manner similar to models of visual perceptual completion.

5 0.85785103 180 nips-2001-The Concave-Convex Procedure (CCCP)

Author: Alan L. Yuille, Anand Rangarajan

Abstract: We introduce the Concave-Convex procedure (CCCP) which constructs discrete time iterative dynamical systems which are guaranteed to monotonically decrease global optimization/energy functions. It can be applied to (almost) any optimization problem and many existing algorithms can be interpreted in terms of CCCP. In particular, we prove relationships to some applications of Legendre transform techniques. We then illustrate CCCP by applications to Potts models, linear assignment, EM algorithms, and Generalized Iterative Scaling (GIS). CCCP can be used both as a new way to understand existing optimization algorithms and as a procedure for generating new algorithms. 1

6 0.5563252 183 nips-2001-The Infinite Hidden Markov Model

7 0.49283096 118 nips-2001-Matching Free Trees with Replicator Equations

8 0.49087113 184 nips-2001-The Intelligent surfer: Probabilistic Combination of Link and Content Information in PageRank

9 0.48663473 86 nips-2001-Grammatical Bigrams

10 0.47935706 162 nips-2001-Relative Density Nets: A New Way to Combine Backpropagation with HMM's

11 0.47294861 169 nips-2001-Small-World Phenomena and the Dynamics of Information

12 0.47288585 3 nips-2001-ACh, Uncertainty, and Cortical Inference

13 0.46530971 172 nips-2001-Speech Recognition using SVMs

14 0.46222949 12 nips-2001-A Model of the Phonological Loop: Generalization and Binding

15 0.46022481 192 nips-2001-Tree-based reparameterization for approximate inference on loopy graphs

16 0.45382142 123 nips-2001-Modeling Temporal Structure in Classical Conditioning

17 0.45173898 27 nips-2001-Activity Driven Adaptive Stochastic Resonance

18 0.44751647 194 nips-2001-Using Vocabulary Knowledge in Bayesian Multinomial Estimation

19 0.44652158 171 nips-2001-Spectral Relaxation for K-means Clustering

20 0.44135883 37 nips-2001-Associative memory in realistic neuronal networks