nips2002-129 — Learning in Spiking Neural Assemblies (knowledge-graph by maker-knowledge-mining)
Source: pdf
Author: David Barber
Abstract: We consider a statistical framework for learning in a class of networks of spiking neurons. Our aim is to show how optimal local learning rules can be readily derived once the neural dynamics and desired functionality of the neural assembly have been specified, in contrast to other models which assume (sub-optimal) learning rules. Within this framework we derive local rules for learning temporal sequences in a model of spiking neurons and demonstrate its superior performance to correlation (Hebbian) based approaches. We further show how to include mechanisms such as synaptic depression and outline how the framework is readily extensible to learning in networks of highly complex spiking neurons. A stochastic quantal vesicle release mechanism is considered and implications on the complexity of learning discussed. 1
Reference: text
sentIndex sentText sentNum sentScore
1 uk Abstract We consider a statistical framework for learning in a class of networks of spiking neurons. [sent-6, score-0.304]
2 Our aim is to show how optimal local learning rules can be readily derived once the neural dynamics and desired functionality of the neural assembly have been specified, in contrast to other models which assume (sub-optimal) learning rules. [sent-7, score-0.852]
3 Within this framework we derive local rules for learning temporal sequences in a model of spiking neurons and demonstrate its superior performance to correlation (Hebbian) based approaches. [sent-8, score-0.508]
4 We further show how to include mechanisms such as synaptic depression and outline how the framework is readily extensible to learning in networks of highly complex spiking neurons. [sent-9, score-0.799]
5 A stochastic quantal vesicle release mechanism is considered and implications on the complexity of learning discussed. [sent-10, score-0.508]
6 1 Introduction Models of individual neurons range from simple rate-based approaches, through spiking models, to detailed descriptions of protein dynamics within the cell[9, 10, 13, 6, 12]. [sent-11, score-0.541]
7 As the experimental search for the neural correlates of memory increasingly considers multi-cell observations, theoretical models of distributed memory become more relevant[12]. [sent-12, score-0.157]
8 Despite the increasing complexity of neural description, many theoretical models of learning are based on correlation (Hebbian) assumptions – that is, changes in synaptic efficacy are related to correlations of pre- and post-synaptic firing[9, 10, 14]. [sent-13, score-0.313]
9 Whilst such learning rules have some theoretical justification in toy neural models, they are not necessarily optimal in more complex cases in which the dynamics of the cell contains historical information, such as that modelled by synaptic facilitation and depression, for example[1]. [sent-14, score-0.69]
10 It is our belief that appropriate synaptic learning rules should appear as a natural consequence of the neurodynamical system and some desired functionality – such as storage of temporal sequences. [sent-15, score-0.489]
11 It seems clear that, as the brain operates dynamically through time, relevant cognitive processes are plausibly represented in vivo as temporal sequences of spikes in restricted neural assemblies. [sent-16, score-0.169]
12 This paradigm has heralded a new research front in dynamic systems of spiking neurons[10]. [sent-17, score-0.269]
13 However, to date, many learning algorithms assume Hebbian learning, and assess its performance in a given model[8, 6, 14]. [sent-18, score-0.051]
Figure 1: (a) A first order Dynamic Bayesian Network with deterministic hidden states h(1), h(2), . . . , h(t) (represented by diamonds) and firing states v(1), v(2), . . . , v(t). (b) Neural firing model: highly complex (deterministic) internal dynamics within neurons i and j, with stochastic firing transmitted along the axon.
16 Recent work[13] has taken into account some of the complexities in the synaptic dynamics, including facilitation and depression, and derived appropriate learning rules. [sent-23, score-0.291]
17 However, these are rate based models, and do not capture the detailed stochastic firing effects of individual neurons. [sent-24, score-0.124]
18 Other recent work [4] has used experimental observations to modify Hebbian learning rules to make heuristic rules consistent with empirical observations[11]. [sent-25, score-0.237]
19 However, as more and more details of cellular processes are experimentally discovered, it would be satisfying to see learning mechanisms as naturally derivable consequences of the underlying cellular constraints. [sent-26, score-0.309]
20 This paper is a modest step in this direction, in which we outline a framework for learning in spiking systems which can handle highly complex cellular processes. [sent-27, score-0.379]
21 The major simplifying assumption is that internal cellular processes are deterministic, whilst communication between cells can be stochastic. [sent-28, score-0.197]
22 The central aim of this paper is to show that optimal learning algorithms are derivable consequences of statistical learning criteria. [sent-29, score-0.187]
23 2 A Framework for Learning A neural assembly of V neurons is represented by a vector v(t) whose components vi (t), i = 1, . . . , V , describe the state of neuron i at time t. [sent-31, score-0.533]
24 Throughout we assume that vi (t) ∈ {0, 1}, for which vi (t) = 1 means that neuron i spikes at time t, and vi (t) = 0 denotes no spike. [sent-35, score-0.833]
25 The shape of an action potential is assumed therefore not to carry any information. [sent-36, score-0.132]
26 This constraint of a binary state firing representation could readily be relaxed to multiple or even continuous states without great inconvenience. [sent-37, score-0.106]
27 Our stated goal is to derive optimal learning rules for an assumed desired functionality and a given neural dynamics. [sent-38, score-0.281]
28 To make this more concrete, we assume that the task is sequence learning (although generalisations to other forms of learning, including input-output type dynamics, are readily achievable[2]). [sent-39, score-0.361]
29 We make the important assumption that the neural assembly has a desired sequence of states V = {v(1), v(2), . . . , v(T )}. [sent-40, score-0.344]
30 In addition to the neural firing states, V, we assume that there are hidden/latent variables which influence the dynamics of the assembly, but which cannot be directly observed. [sent-44, score-0.253]
31 These might include protein levels within a cell, for example. [sent-45, score-0.059]
32 These variables may also represent environmental conditions external to the cell and common to groups of cells. [sent-46, score-0.12]
33 We represent a sequence of hidden variables by H = {h(1), h(2), . . . , h(T )}. [sent-47, score-0.21]
34 The general form of our model is depicted in fig(1)[a] and comprises two components: 1. Stochastic neural firing: p(v(t + 1)|v(t), h(t), θ v ) (1), the probability of the next firing state given the current firing and hidden states. [sent-51, score-0.045]
35 The distribution is parameterised by θ v , which can be learned from a training sequence (see below). [sent-53, score-0.104]
36 2. Deterministic hidden variable updating: h(t + 1) = f (v(t + 1), v(t), h(t), θ h ) (2). This equation specifies that the next hidden state of the assembly h(t + 1) depends on a vector function f of the states v(t + 1), v(t), h(t). [sent-57, score-0.389]
37 The function f is parameterised by θ h which is to be learned. [sent-58, score-0.044]
38 This model is a special case of Dynamic Bayesian networks, in which the hidden variables are deterministic functions of their parental states and is treated in more generality in [2]. [sent-59, score-0.342]
39 The model assumptions are depicted in fig(1)[b] in which potentially complex deterministic interactions within a neuron can be considered, with lossy transmission of this information between neurons in the form of stochastic firing. [sent-60, score-0.599]
40 Whilst the restriction to deterministic hidden dynamics appears severe, it has the critical advantage that learning in such models can be achieved by deterministic forward propagation through time. [sent-61, score-0.694]
41 The key mechanism for learning in statistical models is maximising the log-likelihood L(θ v , θ h |V) of a sequence V, L(θ v , θ h |V) = log p(v(1)|θ v ) + Σ_{t=1}^{T −1} log p(v(t + 1)|v(t), h(t), θ v ) (3), where the hidden unit values are calculated recursively using (2). [sent-64, score-0.367]
42 Generalisation to a set of training sequences V µ , µ = 1, . . . , P , is straightforward using the summed log-likelihood Σ_µ L(θ v , θ h |V µ ). [sent-68, score-0.045]
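To make the forward computation implied by (2) and (3) concrete, here is a minimal sketch; the function names and signatures (sequence_log_likelihood, log_p, f) are illustrative assumptions of this sketch, not notation from the paper, and the initial term log p(v(1)|θ v ) is omitted for brevity.

```python
def sequence_log_likelihood(V, h1, log_p, f, theta_v, theta_h):
    """Log-likelihood (3) of a firing sequence V = [v(1), ..., v(T)].

    log_p(v_next, v, h, theta_v) returns log p(v(t+1) | v(t), h(t), theta_v);
    f(v_next, v, h, theta_h) is the deterministic hidden update (2).
    Because the hidden state is propagated forward deterministically,
    no marginalisation over h is required.
    """
    h = h1
    L = 0.0
    for t in range(len(V) - 1):
        L += log_p(V[t + 1], V[t], h, theta_v)   # stochastic firing term of (3)
        h = f(V[t + 1], V[t], h, theta_h)        # deterministic propagation (2)
    return L
```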
43 In a biological system it is natural to use gradient ascent training θ ← θ +ηdL/dθ where the learning rate η is chosen small enough to ensure convergence to a local optimum of the likelihood. [sent-73, score-0.116]
44 This batch training procedure is readily convertible to an online form if needed. [sent-74, score-0.152]
45 Highly complex functions f and tables p(v(t + 1)|v(t), h(t)) may be used. [sent-76, score-0.073]
46 In the remaining sections, we apply this framework to some simple models and show how optimal learning rules can be derived for old and new theoretical models. [sent-77, score-0.215]
47 1 Stochastically Spiking Neurons We assume that neuron i fires depending on the membrane potential ai (t) through p(vi (t + 1) = 1|v(t), h(t)) = p(vi (t + 1) = 1|ai (t)) = σ(ai (t)), where σ(x) = 1/(1 + e^{−x}) is the logistic sigmoid. [sent-79, score-0.795]
48 (More complex dependencies on environmental variables are also clearly possible). [sent-80, score-0.117]
49 The log-likelihood of a sequence of visible states V is L = Σ_{t=1}^{T −1} Σ_{i=1}^{V} log σ ((2vi (t + 1) − 1)ai (t)) (8), and the (online) gradient of the log-likelihood is then dL(t + 1)/dwij = (vi (t + 1) − σ(ai (t))) dai (t)/dwij (9), where we used the fact that vi ∈ {0, 1}. [sent-84, score-0.774]
50 The batch gradient is simply given by summing the above online gradient over time. [sent-85, score-0.15]
51 Here wij are parameters of the membrane potential (see below). [sent-86, score-0.714]
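As a small illustration, the following sketch evaluates the firing nonlinearity and the online gradient (9); dai_dw stands for the model-specific derivative dai (t)/dwij introduced in the subsections below, and all names here are assumptions of this sketch.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def online_grad(v_next, a, dai_dw):
    """Online gradient (9): dL(t+1)/dw_ij = (v_i(t+1) - sigma(a_i(t))) * da_i(t)/dw_ij.

    v_next : (V,)   binary firing pattern v(t+1)
    a      : (V,)   membrane potentials a(t)
    dai_dw : (V, V) entry (i, j) holds da_i(t)/dw_ij for the chosen potential model
    """
    return (v_next - sigmoid(a))[:, None] * dai_dw
```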
52 We take (9) as common to the remainder in which we model the membrane potential ai (t) with increasing complexity. [sent-87, score-0.636]
53 2 A simple model of the membrane potential Perhaps the simplest membrane potential model is the Hopfield potential ai (t) ≡ Σ_{j=1}^{V} wij vj (t) − bi (10), where wij characterizes the synaptic efficacy from neuron j (pre-synaptic) to neuron i (post-synaptic), and bi is a threshold. [sent-89, score-2.581]
Figure 2: (a) The membrane potential is a deterministic function of the network state, and (the collection of) membrane potentials influences the next state of the network. (b) Dynamic synapses correspond to hidden variables which influence the membrane potential and update themselves, depending on the firing of the network. Only one membrane potential and one synaptic factor is shown.
For the Hopfield potential, dai (t)/dwij = vj (t), so that (9) gives the learning rule dL/dwij = Σ_{t=1}^{T −1} (vi (t + 1) − σ(ai (t))) vj (t) (11).
57 Note that in the above rule vi (t + 1) refers to the desired known training pattern, and σ(ai (t)) can be interpreted as the average instantaneous firing rate of neuron i at time t + 1 when its inputs are clamped to the known desired values of the network at time t. [sent-96, score-0.642]
58 The above learning rule can be seen as a modification of the standard temporal Hebb learning rule wij = Σ_{t=1}^{T −1} vi (t + 1)vj (t). [sent-98, score-0.768]
59 However, the rule (11) can store a sequence of V linearly independent patterns, much greater than the small fraction of V achievable with the standard correlation-based rule. [sent-99, score-0.16]
60 Biologically, the rule (11) could be implemented by measuring the difference between the desired training state vi (t + 1) of neuron i, and the instantaneous firing rate of neuron i when all other neurons, j ≠ i, are clamped in training states vj (t). [sent-101, score-1.053]
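A compact sketch of maximum-likelihood sequence learning with the Hopfield potential (10) and rule (11) could look as follows; the learning rate, epoch count and function names are illustrative choices of this sketch, not values from the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_hopfield_ml(V_seq, n_epochs=500, eta=0.1):
    """Learn weights w and thresholds b for a binary sequence V_seq of shape (T, V)
    by gradient ascent on the log-likelihood, using rule (11)."""
    T, V = V_seq.shape
    w = np.zeros((V, V))
    b = np.zeros(V)
    for _ in range(n_epochs):
        dw = np.zeros_like(w)
        db = np.zeros_like(b)
        for t in range(T - 1):
            a = w @ V_seq[t] - b                 # Hopfield potential (10)
            err = V_seq[t + 1] - sigmoid(a)      # v_i(t+1) - sigma(a_i(t))
            dw += np.outer(err, V_seq[t])        # rule (11): da_i/dw_ij = v_j(t)
            db += -err                           # da_i/db_i = -1
        w += eta * dw
        b += eta * db
    return w, b
```

The error-correcting factor (vi (t + 1) − σ(ai (t))) is what distinguishes this rule from a pure correlation (Hebbian) rule.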
61 3 Dynamic Synapses In more realistic synaptic models, neurotransmitter generation depends on a finite rate of cell subcomponent production, and the quantity of vesicles released is affected by the history of firing[1]. [sent-103, score-0.33]
62 The depression mechanism affects the impact of spiking on the membrane potential response by moderating terms in the membrane potential ai (t) of the form Σ_j wij vj (t) to Σ_j wij xj (t)vj (t), for depression factors xj (t) ∈ [0, 1]. [sent-104, score-2.639]
63 The depression factors follow deterministic update dynamics, where δt, τ , and U represent the simulation time step, the recovery time constant and the effect of each spike on the available synaptic resources, respectively. [sent-108, score-0.181]
64 Note that these depression factor dynamics are exactly of the form of hidden variables that are not observed, consistent with our framework in section (2), see fig(2)[b]. [sent-109, score-0.521]
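The exact update equation for xj (t) is not reproduced in this extract; the sketch below uses a standard depression update of the Tsodyks–Markram type that is consistent with the parameters δt, τ and U named above, and the default parameter values are arbitrary illustrative choices.

```python
def depression_update(x, v, dt=1.0, tau=20.0, U=0.5):
    """One deterministic update of the depression factors x_j(t) in [0, 1].

    Assumed Tsodyks-Markram-style form (not the paper's exact equation):
    recovery towards 1 with time constant tau, reduction by a fraction U
    whenever the pre-synaptic neuron spikes.
    x, v : (V,) arrays of depression factors and binary spikes v(t).
    """
    return x + dt * (1.0 - x) / tau - U * x * v
```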
65 Whilst some previous models have considered learning rules for dynamic synapses using spiking-rate models [13, 15] we consider learning in a stochastic spiking model. [sent-110, score-0.741]
66 Also, in contrast to a previous study which assumes that the synaptic dynamics modulates baseline Hebbian weights[14], we show below that it is straightforward to include dynamic synapses in a principled way using our learning framework. [sent-111, score-0.681]
67 Since the depression dynamics in this model do not explicitly depend on wij , the gradients are simple to calculate. [sent-112, score-0.649]
68 Note that synaptic facilitation is also straightforward to include in principle[15]. [sent-113, score-0.315]
69 For the Hopfield potential, the learning dynamics is simply given by equations (9,12), with dai (t)/dwij = xj (t)vj (t). [sent-114, score-0.409]
70 In fig(3) we demonstrate learning a random temporal sequence of 20 time steps for an assembly of 50 neurons. [sent-115, score-0.304]
71 After learning wij with our rule, we initialised the trained network in the first state of the training sequence. [sent-116, score-0.083]
72 The remaining states of the sequence were then correctly recalled by iteration of the learned model. [sent-117, score-0.112]
73 For comparison, we plot the results of using the dynamics having set the wij using a temporal Hebb rule. [sent-119, score-0.238]
74 The poor performance of the correlation-based Hebb rule demonstrates the necessity, in general, of coupling a dynamical system with an appropriate learning mechanism which, in this case at least, is readily available. [sent-120, score-0.244]
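A sketch of this recall experiment (50 neurons, a 20-step random sequence), reusing train_hopfield_ml from the sketch above and comparing against a temporal Hebb baseline, might look as follows; thresholding the potential at zero to pick the most probable next state is a simplification of this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
V_seq = (rng.random((20, 50)) < 0.5).astype(float)   # random temporal sequence

w_ml, b_ml = train_hopfield_ml(V_seq)                 # maximum-likelihood rule (11)
w_hebb = sum(np.outer(V_seq[t + 1], V_seq[t]) for t in range(19))  # temporal Hebb rule

def recall(w, b, v0, T):
    """Iterate the learned dynamics deterministically from the first pattern."""
    v, out = v0.copy(), [v0.copy()]
    for _ in range(T - 1):
        v = (w @ v - b > 0).astype(float)             # most probable next state
        out.append(v.copy())
    return np.array(out)

recalled_ml = recall(w_ml, b_ml, V_seq[0], 20)
recalled_hebb = recall(w_hebb, np.zeros(50), V_seq[0], 20)
print("ML rule recall accuracy:  ", (recalled_ml == V_seq).mean())
print("Hebb rule recall accuracy:", (recalled_hebb == V_seq).mean())
```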
75 4 Leaky Integrate and Fire models Leaky integrate and fire models move a step towards biological realism in which the membrane potential increments if it receives an excitatory stimulus (wij > 0), and decrements if it receives an inhibitory stimulus (wij < 0). [sent-121, score-0.519]
76 A model that incorporates such effects is ai (t) = [αai (t − 1) + Σ_j wij vj (t) + θ rest (1 − α)] (1 − vi (t − 1)) + vi (t − 1)θ fired (13). Since vi ∈ {0, 1}, if neuron i fires at time t − 1 the potential is reset to θ fired at time t. [sent-122, score-1.889]
77 Similarly, with no synaptic input, the potential equilibrates to θ rest with time constant −1/ log α. [sent-123, score-0.376]
78 Here α ∈ [0, 1] represents membrane leakage characteristic of this class of models. [sent-124, score-0.271]
Figure 4: Stochastic vesicle release (synaptic dynamic factors not indicated): the membrane potentials a(t), release variables r(t) and firing states v(t) unfold through time t − 1, t, t + 1.
80 Despite the apparent increase in complexity of the membrane potential over the simple Hopfield case, deriving appropriate learning dynamics for this new system is straightforward since, as before, the hidden variables (here the membrane potentials) update in a deterministic fashion. [sent-126, score-1.236]
81 The membrane derivatives are dai (t)/dwij = (1 − vi (t − 1)) [α dai (t − 1)/dwij + vj (t)] (14). By initialising the derivative dai (t = 1)/dwij = 0, equations (9,13,14) define a first order recursion for the gradient which can be used to adapt wij in the usual manner wij ← wij + η dL/dwij . [sent-127, score-2.152]
82 We could also apply synaptic dynamics to this case by replacing the term vj (t) in (14) by xj (t)vj (t). [sent-128, score-0.666]
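A sketch of one step of the membrane recursion (13) and the derivative recursion (14), with the optional depression factors of the previous section included as x, is given below; the function name and the parameter values are illustrative assumptions of this sketch.

```python
def lif_grad_step(a, dadw, v_prev, v, w, x=None,
                  alpha=0.9, theta_rest=0.0, theta_fired=-1.0):
    """One step of the membrane recursion (13) and derivative recursion (14).

    a      : (V,)   membrane potentials a(t-1)
    dadw   : (V, V) derivatives da_i(t-1)/dw_ij
    v_prev : (V,)   firing v(t-1);  v : (V,) firing v(t)
    x      : (V,)   optional depression factors x(t) (defaults to no depression)
    alpha, theta_rest, theta_fired are illustrative parameter values.
    """
    xv = v if x is None else x * v                    # x_j(t) v_j(t) if depressing
    not_fired = 1.0 - v_prev
    a_new = not_fired * (alpha * a + w @ xv + theta_rest * (1 - alpha)) \
            + v_prev * theta_fired                    # eq. (13)
    dadw_new = not_fired[:, None] * (alpha * dadw + xv[None, :])  # eq. (14)
    return a_new, dadw_new
```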
83 5 A Stochastic Vesicle Release Model Neurotransmitter release can be highly stochastic and it would be desirable to include this mechanism in our models. [sent-130, score-0.305]
84 The variables rij (t) represent stochastic vesicle release at the synapse from neuron j to neuron i, with release possible only when the pre-synaptic neuron fires. The membrane potential is then governed in integrate and fire models by ai (t) = [αai (t − 1) + Σ_j wij rij (t) + θ rest (1 − α)] (1 − vi (t − 1)) + vi (t − 1)θ fired (17). This model is schematically depicted in fig(4). [sent-132, score-1.753]
85 Since the unobserved stochastic release variables rij (t) are hidden, this model does not have fully deterministic hidden dynamics. [sent-133, score-0.691]
86 In general, learning in such models is more complex and would require both forward and backward temporal propagations including, undoubtedly, graphical model approximation techniques[7]. [sent-134, score-0.232]
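Since the release equations themselves are not reproduced in this extract, the following generative sketch simply assumes Bernoulli vesicle release with probability p_rel whenever the pre-synaptic neuron fires; the release rule, function name and parameter values are assumptions of this sketch, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(1)

def release_step(a, v_prev, v, w, p_rel=0.5,
                 alpha=0.9, theta_rest=0.0, theta_fired=-1.0):
    """Forward-sample one step of the stochastic vesicle release model (17).

    r_ij(t) is assumed Bernoulli(p_rel) whenever pre-synaptic neuron j fires,
    as a simple stand-in for the unspecified release mechanism.
    """
    V = len(v)
    r = (rng.random((V, V)) < p_rel) * v[None, :]     # hypothetical release variables
    drive = (w * r).sum(axis=1)                       # sum_j w_ij r_ij(t)
    a_new = (1.0 - v_prev) * (alpha * a + drive + theta_rest * (1 - alpha)) \
            + v_prev * theta_fired                    # eq. (17)
    return a_new, r
```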
87 6 Discussion Leaving aside the issue of stochastic vesicle release, a further step in the evolution of membrane complexity is to use Hodgkin-Huxley type dynamics[9]. [sent-135, score-0.504]
88 Whilst this might appear complex, in principle, this is straightforward since the membrane dynamics can be represented by deterministic hidden dynamics. [sent-136, score-0.744]
89 Explicitly summing out the hidden variables would then give a representation of Hodgkin-Huxley dynamics analogous to that of the Spike Response Model (see Gerstner in [10]). [sent-137, score-0.326]
90 Deriving optimal learning in assemblies of stochastic spiking neurons can be achieved using maximum likelihood. [sent-138, score-0.463]
91 This is straightforward in cases for which the latent dynamics is deterministic. [sent-139, score-0.221]
92 It is worth emphasising, therefore, that almost arbitrarily complex spatio-temporal patterns may potentially be learned – and generated under cued retrieval – for very complex neural dynamics. [sent-140, score-0.155]
93 Whilst this framework cannot deal with arbitrarily complex stochastic interactions, it can deal with learning in a class of interesting neural models, and concepts from graphical models can be useful in this area. [sent-141, score-0.3]
94 A more general stochastic framework would need to examine approximate causal learning rules which, despite not being fully optimal, may perform well. [sent-142, score-0.272]
95 Finally, our assumption that the brain operates optimally (albeit within severe constraints) enables us to drop other assumptions about unobserved processes, and leads to models with potentially more predictive power. [sent-143, score-0.137]
96 L. F. Abbott, J. A. Varela, K. Sen and S. B. Nelson, Synaptic depression and cortical gain control, Science 275 (1997), 220–223. [sent-151, score-0.162]
97 D. Barber and F. V. Agakov, Correlated sequence learning in a network of spiking neurons using maximum likelihood, Tech. report, Edinburgh. [sent-156, score-0.38]
98 A. Düring, A. C. C. Coolen and D. Sherrington, Phase diagram and storage capacity of sequence processing neural networks, Journal of Physics A 31 (1998), 8607–8621. [sent-168, score-0.099]
99 R. Kempter, W. Gerstner and J. L. van Hemmen, Hebbian learning and spiking neurons, Physical Review E 59 (1999), 4498–4514. [sent-182, score-0.232]
100 H. Markram, J. Lübke, M. Frotscher and B. Sakmann, Regulation of synaptic efficacy by coincidence of postsynaptic APs and EPSPs, Science 275 (1997), 213–215. [sent-192, score-0.185]
wordName wordTfidf (topN-words)
[('wij', 0.311), ('membrane', 0.271), ('ai', 0.233), ('vj', 0.219), ('vi', 0.213), ('assembly', 0.193), ('synaptic', 0.185), ('spiking', 0.181), ('dynamics', 0.176), ('depression', 0.162), ('neuron', 0.159), ('dwij', 0.144), ('rij', 0.144), ('deterministic', 0.14), ('vesicle', 0.138), ('release', 0.132), ('potential', 0.132), ('xij', 0.122), ('ring', 0.114), ('hebbian', 0.113), ('hidden', 0.112), ('synapses', 0.106), ('hop', 0.105), ('hebb', 0.102), ('dai', 0.096), ('stochastic', 0.095), ('whilst', 0.094), ('rules', 0.093), ('dynamic', 0.088), ('dl', 0.088), ('neurons', 0.088), ('xj', 0.086), ('ired', 0.083), ('readily', 0.074), ('rule', 0.071), ('cellular', 0.07), ('temporal', 0.062), ('barber', 0.061), ('sequence', 0.06), ('cacy', 0.058), ('gerstner', 0.058), ('derivable', 0.055), ('facilitation', 0.055), ('states', 0.052), ('learning', 0.051), ('desired', 0.051), ('batch', 0.05), ('mechanism', 0.048), ('hemmen', 0.048), ('assemblies', 0.048), ('functionality', 0.047), ('cell', 0.047), ('dh', 0.046), ('depicted', 0.045), ('straightforward', 0.045), ('edinburgh', 0.044), ('parameterised', 0.044), ('quantal', 0.044), ('complex', 0.044), ('res', 0.043), ('severe', 0.041), ('eld', 0.04), ('memory', 0.04), ('integrate', 0.04), ('neural', 0.039), ('networks', 0.039), ('leaky', 0.039), ('neurotransmitter', 0.039), ('clamped', 0.039), ('forrest', 0.039), ('variables', 0.038), ('models', 0.038), ('forward', 0.037), ('markram', 0.037), ('gradient', 0.036), ('spikes', 0.035), ('environmental', 0.035), ('factors', 0.034), ('maass', 0.034), ('framework', 0.033), ('processes', 0.033), ('bi', 0.033), ('hill', 0.032), ('state', 0.032), ('spike', 0.032), ('ects', 0.031), ('realistic', 0.03), ('consequences', 0.03), ('unobserved', 0.03), ('include', 0.03), ('rest', 0.03), ('tables', 0.029), ('plasticity', 0.029), ('protein', 0.029), ('log', 0.029), ('rate', 0.029), ('instantaneous', 0.029), ('store', 0.029), ('potentially', 0.028), ('online', 0.028)]
simIndex simValue paperId paperTitle
same-paper 1 0.99999964 129 nips-2002-Learning in Spiking Neural Assemblies
Author: David Barber
Abstract: We consider a statistical framework for learning in a class of networks of spiking neurons. Our aim is to show how optimal local learning rules can be readily derived once the neural dynamics and desired functionality of the neural assembly have been specified, in contrast to other models which assume (sub-optimal) learning rules. Within this framework we derive local rules for learning temporal sequences in a model of spiking neurons and demonstrate its superior performance to correlation (Hebbian) based approaches. We further show how to include mechanisms such as synaptic depression and outline how the framework is readily extensible to learning in networks of highly complex spiking neurons. A stochastic quantal vesicle release mechanism is considered and implications on the complexity of learning discussed. 1
2 0.26570147 50 nips-2002-Circuit Model of Short-Term Synaptic Dynamics
Author: Shih-Chii Liu, Malte Boegershausen, Pascal Suter
Abstract: We describe a model of short-term synaptic depression that is derived from a silicon circuit implementation. The dynamics of this circuit model are similar to the dynamics of some present theoretical models of shortterm depression except that the recovery dynamics of the variable describing the depression is nonlinear and it also depends on the presynaptic frequency. The equations describing the steady-state and transient responses of this synaptic model fit the experimental results obtained from a fabricated silicon network consisting of leaky integrate-and-fire neurons and different types of synapses. We also show experimental data demonstrating the possible computational roles of depression. One possible role of a depressing synapse is that the input can quickly bring the neuron up to threshold when the membrane potential is close to the resting potential.
3 0.24530193 73 nips-2002-Dynamic Bayesian Networks with Deterministic Latent Tables
Author: David Barber
Abstract: The application of latent/hidden variable Dynamic Bayesian Networks is constrained by the complexity of marginalising over latent variables. For this reason either small latent dimensions or Gaussian latent conditional tables linearly dependent on past states are typically considered in order that inference is tractable. We suggest an alternative approach in which the latent variables are modelled using deterministic conditional probability tables. This specialisation has the advantage of tractable inference even for highly complex non-linear/non-Gaussian visible conditional probability tables. This approach enables the consideration of highly complex latent dynamics whilst retaining the benefits of a tractable probabilistic model. 1
4 0.1658064 154 nips-2002-Neuromorphic Bisable VLSI Synapses with Spike-Timing-Dependent Plasticity
Author: Giacomo Indiveri
Abstract: We present analog neuromorphic circuits for implementing bistable synapses with spike-timing-dependent plasticity (STDP) properties. In these types of synapses, the short-term dynamics of the synaptic efficacies are governed by the relative timing of the pre- and post-synaptic spikes, while on long time scales the efficacies tend asymptotically to either a potentiated state or to a depressed one. We fabricated a prototype VLSI chip containing a network of integrate and fire neurons interconnected via bistable STDP synapses. Test results from this chip demonstrate the synapse’s STDP learning properties, and its long-term bistable characteristics.
5 0.16179168 180 nips-2002-Selectivity and Metaplasticity in a Unified Calcium-Dependent Model
Author: Luk Chong Yeung, Brian S. Blais, Leon N. Cooper, Harel Z. Shouval
Abstract: A unified, biophysically motivated Calcium-Dependent Learning model has been shown to account for various rate-based and spike time-dependent paradigms for inducing synaptic plasticity. Here, we investigate the properties of this model for a multi-synapse neuron that receives inputs with different spike-train statistics. In addition, we present a physiological form of metaplasticity, an activity-driven regulation mechanism, that is essential for the robustness of the model. A neuron thus implemented develops stable and selective receptive fields, given various input statistics 1
6 0.16118562 102 nips-2002-Hidden Markov Model of Cortical Synaptic Plasticity: Derivation of the Learning Rule
7 0.15724145 186 nips-2002-Spike Timing-Dependent Plasticity in the Address Domain
8 0.15404019 76 nips-2002-Dynamical Constraints on Computing with Spike Timing in the Cortex
9 0.13980378 171 nips-2002-Reconstructing Stimulus-Driven Neural Networks from Spike Times
10 0.1202294 11 nips-2002-A Model for Real-Time Computation in Generic Neural Microcircuits
11 0.11788365 184 nips-2002-Spectro-Temporal Receptive Fields of Subthreshold Responses in Auditory Cortex
12 0.11775338 116 nips-2002-Interpreting Neural Response Variability as Monte Carlo Sampling of the Posterior
13 0.1124418 71 nips-2002-Dopamine Induced Bistability Enhances Signal Processing in Spiny Neurons
14 0.10469003 151 nips-2002-Multiplicative Updates for Nonnegative Quadratic Programming in Support Vector Machines
15 0.093129776 43 nips-2002-Binary Coding in Auditory Cortex
16 0.082834028 45 nips-2002-Boosted Dyadic Kernel Discriminants
17 0.079109333 160 nips-2002-Optoelectronic Implementation of a FitzHugh-Nagumo Neural Model
18 0.07764367 187 nips-2002-Spikernels: Embedding Spiking Neurons in Inner-Product Spaces
19 0.07763806 141 nips-2002-Maximally Informative Dimensions: Analyzing Neural Responses to Natural Signals
20 0.075385831 28 nips-2002-An Information Theoretic Approach to the Functional Classification of Neurons
topicId topicWeight
[(0, -0.234), (1, 0.23), (2, -0.017), (3, -0.115), (4, 0.05), (5, 0.232), (6, 0.094), (7, 0.074), (8, -0.01), (9, -0.102), (10, 0.026), (11, -0.105), (12, 0.022), (13, 0.017), (14, -0.023), (15, -0.027), (16, -0.031), (17, 0.106), (18, -0.001), (19, 0.106), (20, -0.056), (21, -0.07), (22, 0.045), (23, -0.047), (24, 0.017), (25, 0.091), (26, -0.021), (27, 0.055), (28, 0.044), (29, -0.06), (30, -0.081), (31, 0.162), (32, 0.107), (33, 0.019), (34, 0.164), (35, -0.146), (36, -0.003), (37, 0.191), (38, 0.068), (39, -0.091), (40, -0.033), (41, 0.015), (42, -0.08), (43, -0.048), (44, 0.13), (45, -0.074), (46, -0.003), (47, -0.039), (48, -0.007), (49, 0.113)]
simIndex simValue paperId paperTitle
same-paper 1 0.97001553 129 nips-2002-Learning in Spiking Neural Assemblies
2 0.70256764 50 nips-2002-Circuit Model of Short-Term Synaptic Dynamics
3 0.65343654 73 nips-2002-Dynamic Bayesian Networks with Deterministic Latent Tables
4 0.63292235 180 nips-2002-Selectivity and Metaplasticity in a Unified Calcium-Dependent Model
5 0.59467888 154 nips-2002-Neuromorphic Bisable VLSI Synapses with Spike-Timing-Dependent Plasticity
6 0.53930569 71 nips-2002-Dopamine Induced Bistability Enhances Signal Processing in Spiny Neurons
7 0.53163904 186 nips-2002-Spike Timing-Dependent Plasticity in the Address Domain
8 0.50982541 11 nips-2002-A Model for Real-Time Computation in Generic Neural Microcircuits
9 0.49268633 102 nips-2002-Hidden Markov Model of Cortical Synaptic Plasticity: Derivation of the Learning Rule
10 0.40447026 76 nips-2002-Dynamical Constraints on Computing with Spike Timing in the Cortex
11 0.39839709 171 nips-2002-Reconstructing Stimulus-Driven Neural Networks from Spike Times
12 0.39337492 93 nips-2002-Forward-Decoding Kernel-Based Phone Recognition
13 0.39034662 164 nips-2002-Prediction of Protein Topologies Using Generalized IOHMMs and RNNs
14 0.37675092 160 nips-2002-Optoelectronic Implementation of a FitzHugh-Nagumo Neural Model
15 0.37399679 199 nips-2002-Timing and Partial Observability in the Dopamine System
16 0.3418417 116 nips-2002-Interpreting Neural Response Variability as Monte Carlo Sampling of the Posterior
17 0.32904619 151 nips-2002-Multiplicative Updates for Nonnegative Quadratic Programming in Support Vector Machines
18 0.32854155 5 nips-2002-A Digital Antennal Lobe for Pattern Equalization: Analysis and Design
19 0.32411247 128 nips-2002-Learning a Forward Model of a Reflex
20 0.30095932 127 nips-2002-Learning Sparse Topographic Representations with Products of Student-t Distributions
topicId topicWeight
[(23, 0.011), (42, 0.031), (54, 0.07), (55, 0.028), (57, 0.012), (67, 0.011), (68, 0.074), (74, 0.033), (92, 0.012), (98, 0.622)]
simIndex simValue paperId paperTitle
same-paper 1 0.99109131 129 nips-2002-Learning in Spiking Neural Assemblies
2 0.97507596 103 nips-2002-How Linear are Auditory Cortical Responses?
Author: Maneesh Sahani, Jennifer F. Linden
Abstract: By comparison to some other sensory cortices, the functional properties of cells in the primary auditory cortex are not yet well understood. Recent attempts to obtain a generalized description of auditory cortical responses have often relied upon characterization of the spectrotemporal receptive field (STRF), which amounts to a model of the stimulusresponse function (SRF) that is linear in the spectrogram of the stimulus. How well can such a model account for neural responses at the very first stages of auditory cortical processing? To answer this question, we develop a novel methodology for evaluating the fraction of stimulus-related response power in a population that can be captured by a given type of SRF model. We use this technique to show that, in the thalamo-recipient layers of primary auditory cortex, STRF models account for no more than 40% of the stimulus-related power in neural responses.
3 0.95359689 86 nips-2002-Fast Sparse Gaussian Process Methods: The Informative Vector Machine
Author: Ralf Herbrich, Neil D. Lawrence, Matthias Seeger
Abstract: We present a framework for sparse Gaussian process (GP) methods which uses forward selection with criteria based on informationtheoretic principles, previously suggested for active learning. Our goal is not only to learn d–sparse predictors (which can be evaluated in O(d) rather than O(n), d n, n the number of training points), but also to perform training under strong restrictions on time and memory requirements. The scaling of our method is at most O(n · d2 ), and in large real-world classification experiments we show that it can match prediction performance of the popular support vector machine (SVM), yet can be significantly faster in training. In contrast to the SVM, our approximation produces estimates of predictive probabilities (‘error bars’), allows for Bayesian model selection and is less complex in implementation. 1
4 0.94881499 56 nips-2002-Concentration Inequalities for the Missing Mass and for Histogram Rule Error
Author: Luis E. Ortiz, David A. McAllester
Abstract: This paper gives distribution-free concentration inequalities for the missing mass and the error rate of histogram rules. Negative association methods can be used to reduce these concentration problems to concentration questions about independent sums. Although the sums are independent, they are highly heterogeneous. Such highly heterogeneous independent sums cannot be analyzed using standard concentration inequalities such as Hoeffding’s inequality, the Angluin-Valiant bound, Bernstein’s inequality, Bennett’s inequality, or McDiarmid’s theorem.
5 0.94740719 92 nips-2002-FloatBoost Learning for Classification
Author: Stan Z. Li, Zhenqiu Zhang, Heung-yeung Shum, Hongjiang Zhang
Abstract: AdaBoost [3] minimizes an upper error bound which is an exponential function of the margin on the training set [14]. However, the ultimate goal in applications of pattern classification is always minimum error rate. On the other hand, AdaBoost needs an effective procedure for learning weak classifiers, which by itself is difficult especially for high dimensional data. In this paper, we present a novel procedure, called FloatBoost, for learning a better boosted classifier. FloatBoost uses a backtrack mechanism after each iteration of AdaBoost to remove weak classifiers which cause higher error rates. The resulting float-boosted classifier consists of fewer weak classifiers yet achieves lower error rates than AdaBoost in both training and test. We also propose a statistical model for learning weak classifiers, based on a stagewise approximation of the posterior using an overcomplete set of scalar features. Experimental comparisons of FloatBoost and AdaBoost are provided through a difficult classification problem, face detection, where the goal is to learn from training examples a highly nonlinear classifier to differentiate between face and nonface patterns in a high dimensional space. The results clearly demonstrate the promises made by FloatBoost over AdaBoost.
6 0.89574236 59 nips-2002-Constraint Classification for Multiclass Classification and Ranking
7 0.84315073 79 nips-2002-Evidence Optimization Techniques for Estimating Stimulus-Response Functions
8 0.82043958 50 nips-2002-Circuit Model of Short-Term Synaptic Dynamics
9 0.8132019 184 nips-2002-Spectro-Temporal Receptive Fields of Subthreshold Responses in Auditory Cortex
10 0.80624557 43 nips-2002-Binary Coding in Auditory Cortex
11 0.79507291 12 nips-2002-A Neural Edge-Detection Model for Enhanced Auditory Sensitivity in Modulated Noise
12 0.78046352 102 nips-2002-Hidden Markov Model of Cortical Synaptic Plasticity: Derivation of the Learning Rule
13 0.7724902 110 nips-2002-Incremental Gaussian Processes
14 0.75564426 180 nips-2002-Selectivity and Metaplasticity in a Unified Calcium-Dependent Model
15 0.74754035 116 nips-2002-Interpreting Neural Response Variability as Monte Carlo Sampling of the Posterior
16 0.74656832 73 nips-2002-Dynamic Bayesian Networks with Deterministic Latent Tables
17 0.74587274 199 nips-2002-Timing and Partial Observability in the Dopamine System
18 0.73810238 41 nips-2002-Bayesian Monte Carlo
19 0.73002291 81 nips-2002-Expected and Unexpected Uncertainty: ACh and NE in the Neocortex
20 0.72033942 7 nips-2002-A Hierarchical Bayesian Markovian Model for Motifs in Biopolymer Sequences