nips nips2006 nips2006-187 knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Thomas Voegtlin
Abstract: In biological neurons, the timing of a spike depends on the timing of synaptic currents, in a way that is classically described by the Phase Response Curve. This has implications for temporal coding: an action potential that arrives on a synapse has an implicit meaning, that depends on the position of the postsynaptic neuron on the firing cycle. Here we show that this implicit code can be used to perform computations. Using theta neurons, we derive a spike-timing dependent learning rule from an error criterion. We demonstrate how to train an auto-encoder neural network using this rule. 1
Reference: text
sentIndex sentText sentNum sentScore
1 Abstract. In biological neurons, the timing of a spike depends on the timing of synaptic currents, in a way that is classically described by the Phase Response Curve. [sent-4, score-0.708]
2 This has implications for temporal coding: an action potential that arrives on a synapse has an implicit meaning that depends on the position of the postsynaptic neuron in the firing cycle. [sent-5, score-0.613]
3 Using theta neurons, we derive a spike-timing dependent learning rule from an error criterion. [sent-7, score-0.297]
4 1 Introduction. The temporal coding hypothesis states that information is encoded in the precise timing of action potentials sent by neurons. [sent-9, score-0.367]
5 In order to achieve computations in the time domain, it is thus necessary to have neurons spike at desired times. [sent-10, score-0.807]
6 However, at a more fundamental level, it is also necessary to describe how the timings of action potentials received by a neuron are combined, in a way that is consistent with the neural code. [sent-11, score-0.464]
7 In these models, the membrane potential at the soma of a neuron is a weighted sum of PSPs arriving from dendrites at different times. [sent-13, score-0.396]
8 The spike time of the neuron is defined as the time when its membrane potential first reaches a firing threshold, and it depends on the precise temporal arrangement of PSPs, thus enabling computations in the time domain. [sent-14, score-0.888]
9 A consequence is that the length of the rising segment of post-synaptic potentials limits the available coding interval [1, 2]. [sent-16, score-0.217]
10 This theory takes advantage of the fact that the effect of synaptic currents depends on the internal state of the postsynaptic neuron. [sent-18, score-0.55]
11 For neurons spiking regularly, this dependency is classically described by the Phase Response Curve (PRC) [4]. [sent-19, score-0.523]
12 We use theta neurons, which are mathematically equivalent to quadratic integrate-and-fire neurons [5, 6]. [sent-20, score-0.585]
13 In these neuron models, once the potential has crossed the firing threshold, the neuron is still sensitive to incoming currents, which may change the timing of the next spike. [sent-21, score-0.792]
14 In the proposed model, computations do not rely on the shape of PSPs, which alleviates the restriction imposed by the length of their rising segment. [sent-22, score-0.194]
15 Therefore, we may use a simplified model of synaptic currents: we model them as Dirac impulses, which means that we do not take synaptic time constants into account. [sent-23, score-0.96]
16 Another advantage of our model is that computations do not rely on the delays imposed by inter-neuron transmission; this means that it is not necessary to fine-tune delays in order to learn desired spike times. [sent-24, score-0.544]
17 1 Description of the model. The Theta Neuron. The theta neuron is described by the following differential equation: dθ/dt = (1 − cos θ) + αI(1 + cos θ)   (1), where θ is the “potential” of the neuron, and I is a variable input current, measured in radians per unit of time. [sent-26, score-0.593]
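The following is a minimal sketch of how Equation (1) can be integrated numerically, consistent with the Euler integration used in the simulations described later; the function name, time step, and starting condition are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def simulate_theta_neuron(I, alpha=1.0, dt=1e-4):
    """Euler integration of Eq. (1): dtheta/dt = (1 - cos theta) + alpha*I*(1 + cos theta).

    I is an array giving the input current at each time step (in radians per
    unit of time). A spike is emitted when theta crosses pi, after which the
    phase wraps around to -pi.
    """
    theta = np.zeros(len(I) + 1)
    # Start at the stable fixed point of the constant current I[0] (assumes I[0] <= 0):
    # setting dtheta/dt = 0 in Eq. (1) gives cos(theta) = (1 + alpha*I[0]) / (1 - alpha*I[0]).
    theta[0] = -np.arccos((1 + alpha * I[0]) / (1 - alpha * I[0]))
    spike_steps = []
    for t in range(len(I)):
        dtheta = (1 - np.cos(theta[t])) + alpha * I[t] * (1 + np.cos(theta[t]))
        theta[t + 1] = theta[t] + dt * dtheta
        if theta[t + 1] >= np.pi:  # the neuron fires when theta reaches pi
            spike_steps.append(t + 1)
            theta[t + 1] -= 2 * np.pi
    return theta, spike_steps
```

For I0 > 0 there is no fixed point and the neuron spikes regularly; for I0 < 0 the phase rests at the stable point θ0− until synaptic input pushes it past the unstable point θ0+.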
18 The effect of an input current is not uniform across the circle; currents that occur late (for θ close to π) have little effect on θ, while currents that arrive when θ is close to zero have a much greater effect. [sent-30, score-0.526]
19 Figure 1: Phase circle of the theta model (fixed points θ0+ and θ0−; regimes I > 0 and I < 0). [sent-31, score-0.246]
20 2 Synaptic interactions. The input current I is the sum of a constant current I0 and transient synaptic currents Ii(t), where i ∈ 1..N indexes the synapses: I = I0 + Σ_{i=1..N} Ii(t)   (2). [sent-35, score-0.533] 21 Synaptic currents are modeled as Dirac impulses: Ii(t) = wi δ(t − ti), where ti is the firing time of presynaptic neuron i, and wi is the weight of the synapse. [sent-37, score-1.087]
22 Figure 2: Response properties of the theta model. [sent-39, score-0.209]
23 Curves show the change of firing time tf of a neuron receiving a Dirac current of weight w at time t. [sent-40, score-0.403]
24 Left: For I0 > 0, the neuron spikes regularly (I0 = 0. [sent-41, score-0.482]
25 For w < 0, the current might cancel the spike if it occurs early. [sent-51, score-0.333]
26 Figure 2 shows how the firing time of a theta neuron changes with the time of arrival of a synaptic current. [sent-52, score-0.888]
27 In our time coding model, we view this curve as the transfer function of the neuron; it describes how the neuron converts input spike times into output spike times. [sent-53, score-1.276]
28 Following [2], we consider the mean squared error, E, between desired spike times t̄s and actual spike times ts: E = ⟨(ts − t̄s)²⟩   (3), where ⟨·⟩ denotes the average. [sent-56, score-1.093]
29 Gradient descent on E yields the following stochastic learning rule: ∆wi = −η ∂E/∂wi = −2η(ts − t̄s) ∂ts/∂wi   (4). The partial derivative ∂ts/∂wi expresses the credit assignment problem for synapses. [sent-58, score-0.223]
30 Figure 3: Notations used in the text (the potential θ as a function of time: an input spike of weight wi arriving at time ti takes the potential from θi− to θi+, a small weight change dwi induces a change dθi+, and the neuron fires at time ts). [sent-59, score-0.472]
31 An incoming spike triggers an instantaneous change of the potential θ. [sent-60, score-0.411]
32 A small modification dwi of the synaptic weight wi induces a change dθi+. Let F denote the “remaining time”, that is, the time that remains before the neuron will fire: F(t) = ∫_{θ(t)}^{π} dθ / [(1 − cos θ) + αI(1 + cos θ)]   (5). In our model, I is not continuous, because of Dirac synaptic currents. [sent-64, score-1.07]
33 In addition, we assume that the neuron receives one spike on each of its synapses, and that all synaptic weights are positive. [sent-66, score-0.964]
34 Let tj denote the time of arrival of the action potential on synapse j. [sent-67, score-0.353] 35 Let θj− (resp. θj+) denote the value of the potential just before (resp. after) the synaptic current: θj− = θ(tj−), θj+ = θ(tj+) = θj− + αwj(1 + cos θj−)   (6). We consider the effect of a small change of weight wi. [sent-70, score-0.662]
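Since synaptic currents are Dirac impulses, their effect in a simulation is an instantaneous phase jump rather than a continuous integration step. Below is a hedged sketch of Equation (6); the function name is illustrative:

```python
import numpy as np

def apply_dirac_synapse(theta_minus, w, alpha=1.0):
    """Eq. (6): instantaneous phase jump when a spike of weight w arrives.
    The factor (1 + cos theta) vanishes as theta approaches pi, so inputs
    that arrive late in the firing cycle have little effect."""
    return theta_minus + alpha * w * (1 + np.cos(theta_minus))
```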
36 To keep notations simple, we assume that action potentials are ordered, i.e., tj ≤ tj+1 for all j. [sent-72, score-0.27]
37 The j-th term of this sum depends on the time elapsed between tj and ti. [sent-75, score-0.194]
38 For that reason, we neglect this sum in our stochastic learning rule: ∂ts/∂wi ≈ (∂F/∂θi+)(∂θi+/∂wi)   (9), which yields: ∂ts/∂wi ≈ −α(1 + cos θi−) / [(1 − cos θi+) + αI0(1 + cos θi+)]   (10). Note that this expression is not bounded when θi+ is close to the unstable point θ0+. [sent-77, score-0.523]
39 In that case, θ is in a region where it changes very slowly, and the timing of other action potentials for j > i will mostly determine the firing time ts . [sent-78, score-0.328]
40 For these reasons, we introduce a credit bound, C, and we modify the learning rule as follows: if 0 < −∂ts/∂wi < C, then ∆wi = −2η(ts − t̄s) ∂ts/∂wi   (11); else ∆wi = 2η(ts − t̄s) C   (12). [sent-81, score-0.434]
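As a sketch of how Equations (10)-(12) fit together in practice: the credit can be evaluated whenever a presynaptic spike arrives, and the bounded update applied once the actual and desired firing times are known. Names and calling conventions are assumptions, not the paper's code:

```python
import numpy as np

def synaptic_credit(theta_minus, theta_plus, alpha, I0):
    """Eq. (10): approximation of dts/dwi, evaluated at spike arrival.
    theta_minus / theta_plus are the phases just before / after the input spike."""
    return -alpha * (1 + np.cos(theta_minus)) / (
        (1 - np.cos(theta_plus)) + alpha * I0 * (1 + np.cos(theta_plus)))

def bounded_weight_update(w, t_actual, t_desired, credit, eta, C):
    """Eqs. (11)-(12): gradient step on the squared timing error, with the
    credit -dts/dwi clamped at the bound C."""
    if 0 < -credit < C:
        return w - 2 * eta * (t_actual - t_desired) * credit
    return w + 2 * eta * (t_actual - t_desired) * C
```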
41 The learning rule takes effect at the end of a trial of fixed duration. [sent-83, score-0.19]
42 If a neuron does not fire at all during the trial, then its firing time is considered to be equal to the duration of the trial. [sent-84, score-0.341]
43 For each synapse, it is necessary to compute the credit from Equation (10) every time a current is transmitted. [sent-85, score-0.211]
44 We may relax the assumption that each synapse receives a single action potential; if a presynaptic neuron fires several times before the postsynaptic neuron fires, then the credit corresponding to all spikes is summed. [sent-86, score-1.203]
45 Theta neurons were simulated using Euler integration of Equation (1). [sent-87, score-0.376]
46 The time step must be carefully chosen; if the temporal resolution is too coarse, then the credit assignment problem becomes too difficult, which increases the number of trials necessary for learning. [sent-88, score-0.208]
47 The network has to find a representation of the input that minimizes mean squared reconstruction error. [sent-93, score-0.215]
48 The network has three populations of neurons: (i) an input population X of n neurons, where an input vector is represented using spike times. [sent-94, score-0.611]
49 We call Inter Stimulus Interval (ISI) the interval between the spikes encoding the input and the echo. [sent-95, score-0.251]
50 (ii) An output population Y of m neurons, which is activated by neurons in X. [sent-97, score-0.504]
51 The learning rule updates the feedback connections (wij)i≤n,j≤m from Y to X, comparing spike times in X and in X′. [sent-100, score-0.49]
52 We use I0 < 0, so the response to positive transient currents is approximately linear (see Figure 2). [sent-101, score-0.348]
53 We thus expect neurons to perform linear summation of spike times. [sent-103, score-0.674]
54 If spike times are within the linear part of the response curve, then we expect this network to perform Principal Component Analysis (PCA) in the time domain. [sent-106, score-0.569]
55 This means that spike times can only code for positive values (even though synaptic weights can be of both signs). [sent-108, score-0.843]
56 An input vector is translated into firing times of the input population. [sent-110, score-0.212]
57 Output neurons are activated by input neurons through feed-forward connections. [sent-111, score-0.864]
58 A reconstruction of the input burst is generated through feedback connections. [sent-112, score-0.23]
59 Target firing times are provided by a delayed version of the input burst (echo). [sent-113, score-0.188]
60 In order to code for values of both signs, one would need a transfer function that changes its sign around a time that would code for zero, so that the effect of a current is reversed when its arrival time crosses that reference time. [sent-114, score-0.402]
61 Here we may view the neural code as a positive code: Early spikes code for high values, and late spikes code for values close to zero. [sent-115, score-0.615]
62 In this architecture, it is necessary to ensure that each neuron in Y fires a single spike on each trial. [sent-116, score-0.647]
63 In order to do this, we require that neurons in Y have the same average firing time. [sent-117, score-0.376]
64 For this, we add a centering term to the learning rule: ∆wij = −η ∂E/∂wij − λφj   (13), where λ ∈ ℝ and φj is the average phase of neuron j. [sent-118, score-0.414] 65 φj is a leaky average of the difference between the firing time tj and the average firing time of all neurons in population Y. [sent-119, score-0.651]
66 It is updated after each trial: φj ← τφj + (1 − τ)(tj − (1/m) Σ_{k=1}^{m} tk)   (14). This modification of the learning rule results in neurons that have no preferred firing order. [sent-120, score-0.563]
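A sketch of the centering mechanism of Equations (13)-(14), assuming each neuron in Y fires exactly once per trial (variable names are illustrative):

```python
import numpy as np

def update_centering(phi, t_fire, tau):
    """Eq. (14): leaky average of each neuron's deviation from the population
    mean firing time. phi and t_fire are arrays of length m (one entry per
    neuron in Y)."""
    return tau * phi + (1 - tau) * (t_fire - t_fire.mean())

# The centering term then enters the weight update of Eq. (13):
#   delta_w[i, j] = -eta * dE_dw[i, j] - lam * phi[j]
```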
67 At the beginning of a trial, all neurons were initialized to their stable fixed point. [sent-124, score-0.404]
68 In each experiment, the input vector was encoded in spike times. [sent-128, score-0.461]
69 When doing so, one must make sure that the values taken by the input are within the coding interval of the neurons, i.e., the range of values where the PRC is not zero. [sent-129, score-0.209]
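For illustration, here is a sketch of the positive code described in the text: high values map to early spikes and values near zero to late spikes, with times kept inside the coding interval. The 3 ms centering follows the experiments reported below; the gain is an assumption:

```python
import numpy as np

def encode_positive_code(x, t_center=3e-3, gain=1e-3):
    """Map a non-negative input vector to spike times of the input population:
    early spikes code for high values, late spikes for values close to zero.
    t_center matches the 3 ms centering used in the experiments; the gain is
    illustrative and should keep all times within the coding interval."""
    return t_center - gain * np.asarray(x)
```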
70 In practice, spikes that arrive too late in the firing cycle are not taken into account by the learning rule. [sent-130, score-0.187]
71 In that case, the weights corresponding to other synapses grow overly large, which eventually causes some postsynaptic neurons in X′ to fire before presynaptic neurons in Y (“anticausal spikes”). [sent-131, score-1.066]
72 1 Principal Component Analysis of a Gaussian distribution. A two-dimensional Gaussian random variable was encoded in the spike times of three input neurons. [sent-134, score-0.525]
73 Because the network does not have an absolute time reference, it is necessary to use three input neurons, in order to encode two degrees of freedom in relative spiking times. [sent-137, score-0.362]
74 The output layer had two neurons (one degree of freedom). [sent-138, score-0.42]
75 Input spike times were centered around t = 3 ms, where t = 0 denotes the beginning of a trial. [sent-143, score-0.201]
76 The input vector was encoded in the relative spike times of three input neurons. [sent-156, score-0.599]
77 The direction of the branches results from the synaptic weights of the neurons. [sent-166, score-0.451]
78 This suggests that the response function of neurons is not perfectly linear in the interval where spike times are coded. [sent-168, score-0.879]
79 One degree of freedom per neuron in X′ is used to adapt its mean firing time to the value imposed by the ISI; the smaller the ISI, the larger the weights. [sent-171, score-0.456]
80 2 Encoding natural images. An encoder network was trained on the set of raw natural images used in [9]. [sent-180, score-0.181]
81 The encoder had 64 output neurons and 256 input neurons. [sent-181, score-0.532]
82 The 64 output neurons were trained to represent natural image patches of size 16 × 16. [sent-196, score-0.41]
83 Different grey scales are used in order to display positive and negative weights (black is negative, white is positive). [sent-197, score-0.267]
84 Only positive weights are visible at this scale, because they are much larger than negative weights. [sent-199, score-0.225]
85 Negative weights are visible; positive weights are beyond scale. [sent-203, score-0.246]
86 There is a strong difference of amplitude between positive and negative weights; positive weights typically have values between 0 and 1, while negative weights are one order of magnitude smaller. [sent-206, score-0.382]
87 For that reason, weights are displayed twice, with two different grey scales. [sent-207, score-0.178]
88 An image reconstructed from spike times is shown in Figure 7. [sent-208, score-0.362]
89 The difference in amplitude between positive and negative weights results from the higher sensitivity of the response curves to negative weights, as shown in Figure 2. [sent-213, score-0.384]
90 Synaptic weights with negative values have the ability to strongly delay the output spike, and even to cancel it. [sent-214, score-0.228]
91 One possible explanation lies in the response properties of the theta neurons. [sent-217, score-0.31]
92 The response function is not linear, especially in the case of negative weights (Figure 2). [sent-218, score-0.25]
93 5 Conclusions. We have shown that the dynamic response properties of spiking neurons can be effectively used as transfer functions, in order to perform computations (in this paper, PCA and Nonlinear PCA). [sent-221, score-0.692]
94 A similar proposal was made in [11], where the PRC of neurons was adapted to a biologically realistic STDP rule. [sent-222, score-0.376]
95 We used theta neurons, which are of type I and are equivalent to quadratic integrate-and-fire neurons. [sent-224, score-0.209]
96 Type I neurons have a PRC that is always positive. [sent-225, score-0.376]
97 This means that spike times can encode only positive values. Figure 7: Natural image and reconstruction from spike times. [sent-226, score-0.726]
98 The reconstruction (right) is derived from spike times in X′. [sent-228, score-0.267]
99 Efficient estimation of phase-resetting curves in real neurons and its significance for neural-network modeling. [sent-264, score-0.421]
100 Matching storage and recall: hippocampal spike timing-dependent plasticity and phase response curves. [sent-302, score-0.473]
wordName wordTfidf (topN-words)
[('neurons', 0.376), ('neuron', 0.31), ('spike', 0.298), ('synaptic', 0.254), ('ring', 0.218), ('theta', 0.209), ('wi', 0.19), ('prc', 0.168), ('currents', 0.167), ('cos', 0.153), ('isi', 0.144), ('spikes', 0.137), ('ts', 0.123), ('spiking', 0.109), ('weights', 0.102), ('response', 0.101), ('credit', 0.1), ('tj', 0.099), ('postsynaptic', 0.095), ('branches', 0.095), ('encoded', 0.089), ('rule', 0.088), ('res', 0.088), ('psps', 0.083), ('code', 0.083), ('grey', 0.076), ('network', 0.075), ('input', 0.074), ('phase', 0.074), ('everytime', 0.072), ('presynaptic', 0.071), ('trial', 0.068), ('coding', 0.066), ('reconstruction', 0.066), ('times', 0.064), ('ti', 0.064), ('unstable', 0.064), ('synapse', 0.064), ('potentials', 0.063), ('computations', 0.063), ('timing', 0.059), ('principal', 0.059), ('echo', 0.057), ('potential', 0.054), ('arrival', 0.053), ('action', 0.052), ('burst', 0.05), ('late', 0.05), ('pca', 0.048), ('imposed', 0.048), ('diracs', 0.048), ('oja', 0.048), ('rising', 0.048), ('voegtlin', 0.048), ('delays', 0.048), ('curve', 0.048), ('negative', 0.047), ('population', 0.046), ('synapses', 0.046), ('curves', 0.045), ('crosses', 0.044), ('populations', 0.044), ('output', 0.044), ('transfer', 0.043), ('lasted', 0.042), ('positive', 0.042), ('interval', 0.04), ('feedback', 0.04), ('sin', 0.04), ('necessary', 0.039), ('activated', 0.038), ('encoder', 0.038), ('transient', 0.038), ('classically', 0.038), ('temporal', 0.038), ('dots', 0.037), ('circle', 0.037), ('signs', 0.037), ('wij', 0.037), ('regularly', 0.035), ('leaky', 0.035), ('cancel', 0.035), ('shape', 0.035), ('visible', 0.034), ('images', 0.034), ('effect', 0.034), ('freedom', 0.034), ('membrane', 0.032), ('change', 0.031), ('time', 0.031), ('branch', 0.03), ('centering', 0.03), ('ii', 0.03), ('dirac', 0.029), ('ie', 0.029), ('transmission', 0.028), ('incoming', 0.028), ('initialized', 0.028), ('notations', 0.027), ('complementary', 0.027)]
simIndex simValue paperId paperTitle
same-paper 1 1.0 187 nips-2006-Temporal Coding using the Response Properties of Spiking Neurons
2 0.44066143 197 nips-2006-Uncertainty, phase and oscillatory hippocampal recall
Author: Máté Lengyel, Peter Dayan
Abstract: Many neural areas, notably, the hippocampus, show structured, dynamical, population behavior such as coordinated oscillations. It has long been observed that such oscillations provide a substrate for representing analog information in the firing phases of neurons relative to the underlying population rhythm. However, it has become increasingly clear that it is essential for neural populations to represent uncertainty about the information they capture, and the substantial recent work on neural codes for uncertainty has omitted any analysis of oscillatory systems. Here, we observe that, since neurons in an oscillatory network need not only fire once in each cycle (or even at all), uncertainty about the analog quantities each neuron represents by its firing phase might naturally be reported through the degree of concentration of the spikes that it fires. We apply this theory to memory in a model of oscillatory associative recall in hippocampal area CA3. Although it is not well treated in the literature, representing and manipulating uncertainty is fundamental to competent memory; our theory enables us to view CA3 as an effective uncertainty-aware, retrieval system. 1
3 0.39954022 99 nips-2006-Information Bottleneck Optimization and Independent Component Extraction with Spiking Neurons
Author: Stefan Klampfl, Wolfgang Maass, Robert A. Legenstein
Abstract: The extraction of statistically independent components from high-dimensional multi-sensory input streams is assumed to be an essential component of sensory processing in the brain. Such independent component analysis (or blind source separation) could provide a less redundant representation of information about the external world. Another powerful processing strategy is to extract preferentially those components from high-dimensional input streams that are related to other information sources, such as internal predictions or proprioceptive feedback. This strategy allows the optimization of internal representation according to the information bottleneck method. However, concrete learning rules that implement these general unsupervised learning principles for spiking neurons are still missing. We show how both information bottleneck optimization and the extraction of independent components can in principle be implemented with stochastically spiking neurons with refractoriness. The new learning rule that achieves this is derived from abstract information optimization principles. 1
4 0.38192213 36 nips-2006-Attentional Processing on a Spike-Based VLSI Neural Network
Author: Yingxue Wang, Rodney J. Douglas, Shih-Chii Liu
Abstract: The neurons of the neocortex communicate by asynchronous events called action potentials (or ’spikes’). However, for simplicity of simulation, most models of processing by cortical neural networks have assumed that the activations of their neurons can be approximated by event rates rather than taking account of individual spikes. The obstacle to exploring the more detailed spike processing of these networks has been reduced considerably in recent years by the development of hybrid analog-digital Very-Large Scale Integrated (hVLSI) neural networks composed of spiking neurons that are able to operate in real-time. In this paper we describe such a hVLSI neural network that performs an interesting task of selective attentional processing that was previously described for a simulated ’pointer-map’ rate model by Hahnloser and colleagues. We found that most of the computational features of their rate model can be reproduced in the spiking implementation; but, that spike-based processing requires a modification of the original network architecture in order to memorize a previously attended target. 1
5 0.34793508 162 nips-2006-Predicting spike times from subthreshold dynamics of a neuron
Author: Ryota Kobayashi, Shigeru Shinomoto
Abstract: It has been established that a neuron reproduces highly precise spike response to identical fluctuating input currents. We wish to accurately predict the firing times of a given neuron for any input current. For this purpose we adopt a model that mimics the dynamics of the membrane potential, and then take a cue from its dynamics for predicting the spike occurrence for a novel input current. It is found that the prediction is significantly improved by observing the state space of the membrane potential and its time derivative(s) in advance of a possible spike, in comparison to simply thresholding an instantaneous value of the estimated potential. 1
6 0.34746507 59 nips-2006-Context dependent amplification of both rate and event-correlation in a VLSI network of spiking neurons
7 0.29210559 154 nips-2006-Optimal Change-Detection and Spiking Neurons
8 0.22360083 18 nips-2006-A selective attention multi--chip system with dynamic synapses and spiking neurons
9 0.1770989 145 nips-2006-Neurophysiological Evidence of Cooperative Mechanisms for Stereo Computation
10 0.14819337 17 nips-2006-A recipe for optimizing a time-histogram
11 0.127501 16 nips-2006-A Theory of Retinal Population Coding
12 0.12394959 3 nips-2006-A Complexity-Distortion Approach to Joint Pattern Alignment
13 0.10457277 165 nips-2006-Real-time adaptive information-theoretic optimization of neurophysiology experiments
14 0.10153402 72 nips-2006-Efficient Learning of Sparse Representations with an Energy-Based Model
15 0.10051196 190 nips-2006-The Neurodynamics of Belief Propagation on Binary Markov Random Fields
16 0.098751947 189 nips-2006-Temporal dynamics of information content carried by neurons in the primary visual cortex
17 0.089732192 20 nips-2006-Active learning for misspecified generalized linear models
18 0.086340025 148 nips-2006-Nonlinear physically-based models for decoding motor-cortical population activity
19 0.066093378 76 nips-2006-Emergence of conjunctive visual features by quadratic independent component analysis
20 0.06345395 175 nips-2006-Simplifying Mixture Models through Function Approximation
topicId topicWeight
[(0, -0.238), (1, -0.648), (2, 0.007), (3, 0.175), (4, 0.068), (5, 0.115), (6, -0.056), (7, 0.044), (8, -0.012), (9, 0.039), (10, -0.058), (11, -0.011), (12, 0.066), (13, 0.048), (14, 0.061), (15, 0.022), (16, -0.062), (17, 0.05), (18, -0.005), (19, -0.039), (20, -0.009), (21, -0.036), (22, -0.034), (23, -0.021), (24, 0.013), (25, 0.067), (26, 0.005), (27, 0.035), (28, 0.037), (29, -0.02), (30, 0.009), (31, 0.023), (32, -0.014), (33, 0.003), (34, 0.048), (35, -0.007), (36, -0.048), (37, -0.012), (38, 0.031), (39, 0.017), (40, 0.033), (41, 0.021), (42, 0.0), (43, 0.043), (44, 0.019), (45, -0.035), (46, -0.017), (47, -0.038), (48, 0.026), (49, -0.026)]
simIndex simValue paperId paperTitle
same-paper 1 0.98108006 187 nips-2006-Temporal Coding using the Response Properties of Spiking Neurons
2 0.90819144 99 nips-2006-Information Bottleneck Optimization and Independent Component Extraction with Spiking Neurons
3 0.87496579 197 nips-2006-Uncertainty, phase and oscillatory hippocampal recall
4 0.84697461 36 nips-2006-Attentional Processing on a Spike-Based VLSI Neural Network
5 0.83391768 18 nips-2006-A selective attention multi--chip system with dynamic synapses and spiking neurons
Author: Chiara Bartolozzi, Giacomo Indiveri
Abstract: Selective attention is the strategy used by biological sensory systems to solve the problem of limited parallel processing capacity: salient subregions of the input stimuli are serially processed, while non–salient regions are suppressed. We present an mixed mode analog/digital Very Large Scale Integration implementation of a building block for a multi–chip neuromorphic hardware model of selective attention. We describe the chip’s architecture and its behavior, when its is part of a multi–chip system with a spiking retina as input, and show how it can be used to implement in real-time flexible models of bottom-up attention. 1
7 0.72485209 162 nips-2006-Predicting spike times from subthreshold dynamics of a neuron
8 0.62933314 154 nips-2006-Optimal Change-Detection and Spiking Neurons
9 0.45197102 145 nips-2006-Neurophysiological Evidence of Cooperative Mechanisms for Stereo Computation
10 0.43481109 189 nips-2006-Temporal dynamics of information content carried by neurons in the primary visual cortex
11 0.39344382 17 nips-2006-A recipe for optimizing a time-histogram
12 0.36508301 190 nips-2006-The Neurodynamics of Belief Propagation on Binary Markov Random Fields
13 0.35717946 16 nips-2006-A Theory of Retinal Population Coding
14 0.31175387 107 nips-2006-Large Margin Multi-channel Analog-to-Digital Conversion with Applications to Neural Prosthesis
15 0.28380924 29 nips-2006-An Information Theoretic Framework for Eukaryotic Gradient Sensing
16 0.27921823 148 nips-2006-Nonlinear physically-based models for decoding motor-cortical population activity
17 0.26843026 165 nips-2006-Real-time adaptive information-theoretic optimization of neurophysiology experiments
18 0.26479247 3 nips-2006-A Complexity-Distortion Approach to Joint Pattern Alignment
19 0.25047815 72 nips-2006-Efficient Learning of Sparse Representations with an Energy-Based Model
20 0.21667777 20 nips-2006-Active learning for misspecified generalized linear models
topicId topicWeight
[(1, 0.099), (3, 0.018), (7, 0.055), (9, 0.091), (15, 0.017), (20, 0.011), (22, 0.066), (25, 0.011), (44, 0.046), (57, 0.068), (64, 0.012), (65, 0.034), (69, 0.03), (71, 0.162), (82, 0.2)]
simIndex simValue paperId paperTitle
1 0.90884697 71 nips-2006-Effects of Stress and Genotype on Meta-parameter Dynamics in Reinforcement Learning
Author: Gediminas Lukšys, Jérémie Knüsel, Denis Sheynikhovich, Carmen Sandi, Wulfram Gerstner
Abstract: Stress and genetic background regulate different aspects of behavioral learning through the action of stress hormones and neuromodulators. In reinforcement learning (RL) models, meta-parameters such as learning rate, future reward discount factor, and exploitation-exploration factor, control learning dynamics and performance. They are hypothesized to be related to neuromodulatory levels in the brain. We found that many aspects of animal learning and performance can be described by simple RL models using dynamic control of the meta-parameters. To study the effects of stress and genotype, we carried out 5-hole-box light conditioning and Morris water maze experiments with C57BL/6 and DBA/2 mouse strains. The animals were exposed to different kinds of stress to evaluate its effects on immediate performance as well as on long-term memory. Then, we used RL models to simulate their behavior. For each experimental session, we estimated a set of model meta-parameters that produced the best fit between the model and the animal performance. The dynamics of several estimated meta-parameters were qualitatively similar for the two simulated experiments, and with statistically significant differences between different genetic strains and stress conditions. 1
same-paper 2 0.85027134 187 nips-2006-Temporal Coding using the Response Properties of Spiking Neurons
3 0.73968679 135 nips-2006-Modelling transcriptional regulation using Gaussian Processes
Author: Neil D. Lawrence, Guido Sanguinetti, Magnus Rattray
Abstract: Modelling the dynamics of transcriptional processes in the cell requires the knowledge of a number of key biological quantities. While some of them are relatively easy to measure, such as mRNA decay rates and mRNA abundance levels, it is still very hard to measure the active concentration levels of the transcription factor proteins that drive the process and the sensitivity of target genes to these concentrations. In this paper we show how these quantities for a given transcription factor can be inferred from gene expression levels of a set of known target genes. We treat the protein concentration as a latent function with a Gaussian process prior, and include the sensitivities, mRNA decay rates and baseline expression levels as hyperparameters. We apply this procedure to a human leukemia dataset, focusing on the tumour repressor p53 and obtaining results in good accordance with recent biological studies.
4 0.73236907 191 nips-2006-The Robustness-Performance Tradeoff in Markov Decision Processes
Author: Huan Xu, Shie Mannor
Abstract: Computation of a satisfactory control policy for a Markov decision process when the parameters of the model are not exactly known is a problem encountered in many practical applications. The traditional robust approach is based on a worstcase analysis and may lead to an overly conservative policy. In this paper we consider the tradeoff between nominal performance and the worst case performance over all possible models. Based on parametric linear programming, we propose a method that computes the whole set of Pareto efficient policies in the performancerobustness plane when only the reward parameters are subject to uncertainty. In the more general case when the transition probabilities are also subject to error, we show that the strategy with the “optimal” tradeoff might be non-Markovian and hence is in general not tractable. 1
5 0.72564882 36 nips-2006-Attentional Processing on a Spike-Based VLSI Neural Network
6 0.72022271 145 nips-2006-Neurophysiological Evidence of Cooperative Mechanisms for Stereo Computation
7 0.71988177 56 nips-2006-Conditional Random Sampling: A Sketch-based Sampling Technique for Sparse Data
8 0.65214866 99 nips-2006-Information Bottleneck Optimization and Independent Component Extraction with Spiking Neurons
9 0.64083928 162 nips-2006-Predicting spike times from subthreshold dynamics of a neuron
10 0.63142204 154 nips-2006-Optimal Change-Detection and Spiking Neurons
11 0.62679374 59 nips-2006-Context dependent amplification of both rate and event-correlation in a VLSI network of spiking neurons
12 0.61451072 165 nips-2006-Real-time adaptive information-theoretic optimization of neurophysiology experiments
13 0.59594357 167 nips-2006-Recursive ICA
14 0.59256166 76 nips-2006-Emergence of conjunctive visual features by quadratic independent component analysis
15 0.5833503 16 nips-2006-A Theory of Retinal Population Coding
16 0.58324361 189 nips-2006-Temporal dynamics of information content carried by neurons in the primary visual cortex
17 0.5754351 18 nips-2006-A selective attention multi--chip system with dynamic synapses and spiking neurons
18 0.57177037 75 nips-2006-Efficient sparse coding algorithms
19 0.56844378 65 nips-2006-Denoising and Dimension Reduction in Feature Space
20 0.56606174 197 nips-2006-Uncertainty, phase and oscillatory hippocampal recall