nips nips2006 nips2006-99 knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Stefan Klampfl, Wolfgang Maass, Robert A. Legenstein
Abstract: The extraction of statistically independent components from high-dimensional multi-sensory input streams is assumed to be an essential component of sensory processing in the brain. Such independent component analysis (or blind source separation) could provide a less redundant representation of information about the external world. Another powerful processing strategy is to extract preferentially those components from high-dimensional input streams that are related to other information sources, such as internal predictions or proprioceptive feedback. This strategy allows the optimization of internal representation according to the information bottleneck method. However, concrete learning rules that implement these general unsupervised learning principles for spiking neurons are still missing. We show how both information bottleneck optimization and the extraction of independent components can in principle be implemented with stochastically spiking neurons with refractoriness. The new learning rule that achieves this is derived from abstract information optimization principles. 1
Reference: text
sentIndex sentText sentNum sentScore
1 at Abstract The extraction of statistically independent components from high-dimensional multi-sensory input streams is assumed to be an essential component of sensory processing in the brain. [sent-3, score-0.157]
2 However, concrete learning rules that implement these general unsupervised learning principles for spiking neurons are still missing. [sent-7, score-0.346]
3 We show how both information bottleneck optimization and the extraction of independent components can in principle be implemented with stochastically spiking neurons with refractoriness. [sent-8, score-0.508]
4 The new learning rule that achieves this is derived from abstract information optimization principles. [sent-9, score-0.165]
5 However, it has turned out to be quite difficult to establish links between known learning algorithms that have been derived from these general principles, and learning rules that could possibly be implemented by synaptic plasticity of a spiking neuron. [sent-12, score-0.239]
6 Fortunately, in a simpler context a direct link between an abstract information theoretic optimization goal and a rule for synaptic plasticity has recently been established [3]. [sent-13, score-0.241]
7 The resulting rule for the change of synaptic weights in [3] maximizes the mutual information between pre- and postsynaptic spike trains, under the constraint that the postsynaptic firing rate stays close to some target firing rate. [sent-14, score-1.342]
8 We show in this article that this approach can be extended to situations where, simultaneously, the mutual information between the postsynaptic spike train of the neuron and other signals (such as, for example, the spike trains of other neurons) has to be minimized (Figure 1). [sent-15, score-1.538]
9 This opens the door to the exploration of learning rules for information bottleneck analysis and independent component extraction with spiking neurons that would be optimal from a theoretical perspective. [sent-16, score-0.487]
10 We review in section 2 the neuron model and learning rule from [3]. [sent-17, score-0.418]
11 We show in section 3 how this learning rule can be extended so that it not only maximizes mutual information with some given spike trains and keeps the output firing rate within a desired range, but simultaneously minimizes mutual information with other spike trains, or other time-varying signals. [sent-18, score-1.353]
12 A In an information bottleneck task the learning neuron (neuron 1) wants to maximize the mutual information between its output $Y_1^K$ and the activity of one or several target neurons $Y_2^K, Y_3^K, \ldots$ [sent-20, score-0.993]
13 (which can be functions of the inputs $X^K$ and/or other external signals), while at the same time keeping the mutual information between the inputs $X^K$ and the output $Y_1^K$ as low as possible (and its firing rate within a desired range). [sent-23, score-0.47]
14 Thus the neuron should learn to extract from its high-dimensional input those aspects that are related to these target signals. [sent-24, score-0.518]
15 B Two neurons receiving the same inputs $X^K$ from a common set of presynaptic neurons both learn to maximize information transmission, and simultaneously to keep their outputs $Y_1^K$ and $Y_2^K$ statistically independent. [sent-26, score-0.565]
16 In section 5 we show that a modification of this learning rule allows a spiking neuron to extract information from its input spike trains that is independent from the component extracted by another neuron. [sent-29, score-1.129]
17 2 Neuron model and a basic learning rule. We use the model from [3], which is a stochastically spiking neuron model with refractoriness, where the probability of firing in each time step depends on the current membrane potential and the time since the last output spike. [sent-30, score-0.799]
18 The total membrane potential of a neuron i in time step $t_k = k\Delta t$ is given by $u_i(t_k) = u_r + \sum_{j=1}^{N} w_{ij} \sum_{n=1}^{k} \epsilon(t_k - t_n)\, x_j^n$, (1) where $u_r = -70$ mV is the resting potential and $w_{ij}$ is the weight of synapse j (j = 1, ..., N). [sent-32, score-1.46]
19 An input spike train at synapse j up to the k-th time step is described by a sequence $X_j^k = (x_j^1, x_j^2, \ldots, x_j^k)$ of zeros (no spike) and ones (spike); [sent-36, score-0.564]
20 each presynaptic spike at time $t_n$ ($x_j^n = 1$) evokes a postsynaptic potential (PSP) $\epsilon(t - t_n)$ with exponentially decaying time course with time constant $\tau_m = 10$ ms. [sent-39, score-0.679]
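Written out, Eq. (1) amounts to filtering each input spike train with the PSP kernel and taking a weighted sum. A minimal sketch in Python follows; the unit-amplitude exponential kernel and the millivolt scaling are illustrative assumptions (the text only states an exponentially decaying time course with $\tau_m = 10$ ms), and all names are ours.

```python
import numpy as np

# Minimal sketch of Eq. (1): membrane potential as resting potential plus a
# weighted sum of per-synapse PSP traces. Kernel eps(s) = exp(-s/tau_m) with
# unit amplitude is an assumption; the paper only specifies the time constant.
def membrane_potential(x, w, dt=1e-3, tau_m=10e-3, u_rest=-70.0):
    """x: (num_steps, N) binary input spike array; w: (N,) synaptic weights.
    Returns u_i(t_k) in mV for every time step k."""
    num_steps, N = x.shape
    psp = np.zeros(N)                    # running PSP trace per synapse
    decay = np.exp(-dt / tau_m)
    u = np.empty(num_steps)
    for k in range(num_steps):
        psp = psp * decay + x[k]         # each presynaptic spike adds a unit PSP
        u[k] = u_rest + np.dot(w, psp)   # Eq. (1): u_r + sum_j w_ij * filtered input
    return u
```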
21 The probability $\rho_i^k$ of firing of neuron i in each time step $t_k$ is given by $\rho_i^k = 1 - \exp[-g(u_i(t_k))\, R_i(t_k)\, \Delta t] \approx g(u_i(t_k))\, R_i(t_k)\, \Delta t$, (2) where $g(u) = r_0 \log\{1 + \exp[(u - u_0)/\Delta u]\}$ is a smooth increasing function of the membrane potential u ($u_0 = -65$ mV, $\Delta u = 2$ mV, $r_0 = 11$ Hz). [sent-40, score-0.966]
22 The refractory variable $R_i(t) = \frac{(t - \hat t_i - \tau_{abs})^2}{\tau_{refr}^2 + (t - \hat t_i - \tau_{abs})^2}\, \Theta(t - \hat t_i - \tau_{abs})$ assumes values in [0, 1] and depends on the last firing time $\hat t_i$ of neuron i (absolute refractory period $\tau_{abs} = 3$ ms, relative refractory time $\tau_{refr} = 10$ ms). [sent-42, score-0.594]
23 This model from [3] is a special case of the spike-response model, and with a refractory variable R(t) that depends only on the time since the last postsynaptic event it has renewal properties [4]. [sent-44, score-0.293]
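The stochastic spiking mechanism of Eq. (2) with the refractory variable above can be sketched in a few lines; parameter values are those quoted in the text, while the function names, the interface, and the reconstructed form of R(t) are ours.

```python
import numpy as np

# Sketch of Eq. (2): gain function g(u), refractory variable R(t), and the
# resulting spike probability per time step of width dt.
def g(u, u0=-65.0, du=2.0, r0=11.0):
    """g(u) = r0 * log(1 + exp((u - u0)/du)): smooth, increasing, in Hz."""
    return r0 * np.log1p(np.exp((u - u0) / du))

def refractory(t, t_last, tau_abs=3e-3, tau_refr=10e-3):
    """R(t) in [0, 1], a function of the time since the last output spike."""
    s = t - t_last - tau_abs
    if s <= 0.0:
        return 0.0                        # absolute refractory period
    return s**2 / (tau_refr**2 + s**2)    # relative refractoriness

def spike_probability(u, t, t_last, dt=1e-3):
    """rho^k = 1 - exp(-g(u) R(t) dt), approximately g(u) R(t) dt for small dt."""
    return 1.0 - np.exp(-g(u) * refractory(t, t_last) * dt)
```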
24 The output of neuron i at the k-th time step is denoted by a variable $y_i^k$ that assumes the value 1 if a postsynaptic spike occurred and 0 otherwise. [sent-45, score-1.012]
25 A specific spike train up to the k-th time step is written as $Y_i^k = (y_i^1, y_i^2, \ldots, y_i^k)$. [sent-46, score-0.448]
26 The information transmission between an ensemble of input spike trains $\mathbf{X}^K$ and the output spike train $Y_i^K$ can be quantified by the mutual information [5] $I(\mathbf{X}^K; Y_i^K) = \sum_{X^K, Y_i^K} P(X^K, Y_i^K) \log \frac{P(Y_i^K \mid X^K)}{P(Y_i^K)}$. [sent-50, score-1.121]
27 This distribution was chosen to be that of a constant target firing rate $\tilde g$, accounting for homeostatic processes. [sent-52, score-0.244]
28 Obviously, this is the generic IB scenario applied to spiking neurons (see Figure 1A). [sent-58, score-0.272]
29 A learning rule for extracting independent components with spiking neurons (see section 5) can be derived in a similar manner. [sent-59, score-0.441]
30 For simplicity, we consider the case of an IB optimization for only one target spike train $Y_2^K$, and derive an update rule for the synaptic weights $w_{1j}$ of neuron 1. [sent-60, score-1.032]
31 To maximize this objective function, we derive the weight change $\Delta w_{1j}^k$ during the k-th time step by gradient ascent on (7), assuming that the weights $w_{1j}$ can change between some bounds $0 \le w_{1j} \le w_{\max}$ (we assume $w_{\max} = 1$ throughout this paper). [sent-62, score-0.369]
32 The rate $\bar g_i(t_k) = \langle g(u_i(t_k)) \rangle_{\mathbf{X}^k \mid Y_i^{k-1}}$ denotes an expectation of the firing rate over the input distribution given the postsynaptic history and is implemented as a running average with an exponential time window (with a time constant of 10 ms). [sent-64, score-0.52]
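Such conditional expectations are tracked online. A sketch of the exponentially weighted running average with the 10 ms time constant mentioned in the text (class name and interface are ours):

```python
# Exponentially weighted running average: value <- value + (dt/tau)*(sample - value).
class RunningAverage:
    def __init__(self, dt=1e-3, tau=10e-3, init=0.0):
        self.alpha = dt / tau
        self.value = init

    def update(self, sample):
        self.value += self.alpha * (sample - self.value)
        return self.value
```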
33 Note that all three terms of (7) implicitly depend on $w_{1j}$ because the output distribution $P(Y_1^K)$ changes if we modify the weights $w_{1j}$. [sent-65, score-0.163]
34 In order to get an expression for the weight change in a specific time step $t_k$ we write the probabilities $P(Y_i^K)$ and $P(Y_1^K, Y_2^K)$ occurring in (7) as products over individual time bins, i.e. [sent-67, score-0.672]
35 Here, $\bar g_i(t_k) = \langle g(u_i(t_k)) \rangle_{\mathbf{X}^k \mid Y_i^{k-1}}$ (9) denotes the average firing rate of neuron i and $\bar g_{12}(t_k) = \langle g(u_1(t_k))\, g(u_2(t_k)) \rangle_{\mathbf{X}^k \mid Y_1^{k-1}, Y_2^{k-1}}$ denotes the average product of firing rates of both neurons. [sent-75, score-0.404]
36 The resulting weight update (10) consists of a term $C_{1j}^k$ sensitive to correlations between the output of the neuron and its presynaptic input at synapse j (“correlation term”) and terms $B_1^k$ and $B_{12}^k$ that characterize the postsynaptic state of the neuron (“postsynaptic terms”). [sent-79, score-1.242]
37 In addition, our learning rule contains an extra term $B_{12}^k = F_{12}^k / (\Delta t)^2$ that is sensitive to the statistical dependence between the output spike train of the neuron and the target. [sent-83, score-0.882]
38 In this way, it measures the momentary mutual information between the output of the neuron and the target spike train. [sent-86, score-0.977]
39 For a simplified neuron model without refractoriness (R(t) = 1), the update rule (4) resembles the BCM-rule [6] as shown in [3]. [sent-87, score-0.486]
40 The values $\bar\nu_1^k$, $\bar\nu_2^k$, and $\bar\nu_{12}^k$ are running averages of the output rate $\nu_1^k$, the rate of the target signal $\nu_2^k$, and of the product of these values, $\nu_1^k \nu_2^k$, respectively. [sent-90, score-0.46]
41 Note that this term is zero if the rates of the two neurons are independent. [sent-93, score-0.178]
42 These regimes are separated by a sliding threshold; however, in contrast to the original BCM rule, this threshold does not only depend on the running average $\bar\nu_1^k$ of the postsynaptic rate, but also on the current values of $\nu_2^k$ and $\bar\nu_2^k$. [sent-96, score-0.425]
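The excerpt only characterizes this rate-based dependence term qualitatively (it vanishes when the two rates are independent and enters the sliding threshold together with the running averages above). One natural reading, offered purely as an assumption and not as the paper's exact Eq. (12), is a running covariance estimate:

```python
# Hypothetical reading of the rate-based dependence term: a running estimate of
# <nu1*nu2> - <nu1><nu2>, which is zero when the two rates are independent.
# The averaging time constant and the interface are illustrative choices.
class RateDependence:
    def __init__(self, dt=1e-3, tau=0.1):
        self.a = dt / tau
        self.nu1 = self.nu2 = self.nu12 = 0.0

    def update(self, nu1, nu2):
        self.nu1 += self.a * (nu1 - self.nu1)
        self.nu2 += self.a * (nu2 - self.nu2)
        self.nu12 += self.a * (nu1 * nu2 - self.nu12)
        return self.nu12 - self.nu1 * self.nu2
```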
43 4 Application to Information Bottleneck Optimization. We use a setup as in Figure 1A where we want to maximize the information which the output $Y_1^K$ of a learning neuron conveys about two target signals $Y_2^K$ and $Y_3^K$. [sent-97, score-0.66]
44 If the target signals are statistically independent from each other we can optimize the mutual information to each target signal separately. [sent-98, score-0.585]
45 This leads to an update rule $\frac{\Delta w_{1j}^k}{\Delta t} = -\alpha\, C_{1j}^k \left[ B_1^k(-\gamma) - \beta\,\Delta t\, \bigl(B_{12}^k + B_{13}^k\bigr) \right]$, (14) where $B_{12}^k$ and $B_{13}^k$ are the postsynaptic terms (11) sensitive to the statistical dependence between the output and target signals 1 and 2, respectively. [sent-99, score-0.626]
46 We choose $\tilde g = 30$ Hz for the target firing rate, and we use discrete time with $\Delta t = 1$ ms. [sent-100, score-0.167]
47 In this experiment we demonstrate that it is possible to consider two very different kinds of target signals: one target spike train has a similar rate modulation as one part of the input, while the other target spike train has a high spike-spike correlation with another part of the input. [sent-101, score-1.284]
48 The learning neuron receives input at 100 synapses, which are divided into 4 groups of 25 inputs each. [sent-102, score-0.465]
49 The first two input groups consist of rate-modulated Poisson spike trains (footnote 4; Figure 2A). [sent-103, score-0.458]
50 Spike trains from the remaining groups 3 and 4 are correlated with a coefficient of 0.5 within each group; [sent-104, score-0.266]
51 however, spike trains from different groups are uncorrelated. [sent-105, score-0.499]
52 Correlated spike trains are generated by the procedure described in [7]. [sent-106, score-0.451]
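The exact procedure is that of reference [7]; a common construction with the same effect (pairwise correlation c within a group, obtained by copying spikes from a shared mother train with probability sqrt(c)) is sketched below as an assumption, not as the paper's exact method.

```python
import numpy as np

# Sketch of one standard way to generate num_trains Poisson trains of a given
# rate with pairwise correlation coefficient c: each train copies spikes from a
# shared mother train with probability sqrt(c) (giving correlation ~ c) and
# adds private Poisson spikes so that the single-train rate is preserved.
def correlated_poisson(num_trains, rate, c, duration, dt=1e-3, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    steps = int(duration / dt)
    p = np.sqrt(c)
    mother = rng.random(steps) < rate * dt                     # shared source train
    copies = (rng.random((num_trains, steps)) < p) & mother    # correlated part
    private = rng.random((num_trains, steps)) < (1.0 - p) * rate * dt
    return copies | private                                    # (num_trains, steps) bool
```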
53 The first target signal is chosen to have the same rate modulation as the inputs from group 1, except that Gaussian random noise is superimposed with a standard deviation of 2Hz. [sent-107, score-0.399]
54 The second target spike train is correlated with inputs from group 3 (with a coefficient of 0. [sent-108, score-0.66]
55 Furthermore, both target signals are silent during random intervals. (Footnote 3: In the absence of refractoriness we use an alternative gain function $g_{alt}(u) = [1/g_{\max} + 1/g(u)]^{-1}$ in order to pose an upper limit of $g_{\max} = 100$ Hz on the postsynaptic firing rate.) [sent-110, score-0.511]
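For completeness, the footnote's capped gain function in the same sketch style (g(u) as quoted in Eq. (2); names are ours):

```python
import numpy as np

# Footnote 3's alternative gain function: behaves like g(u) at low rates but
# saturates at g_max = 100 Hz when refractoriness is switched off.
def g(u, u0=-65.0, du=2.0, r0=11.0):
    return r0 * np.log1p(np.exp((u - u0) / du))

def g_alt(u, g_max=100.0):
    return 1.0 / (1.0 / g_max + 1.0 / g(u))
```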
56 04 50 0 0 2500 5000 2500 t [ms] 5000 50 0 0 F MI/KLD of neuron 1 0. [sent-113, score-0.301]
57 01 target 1 [Hz] output [Hz] B 50 synapse idx input 2 [Hz] input 1 [Hz] A 1 x 10 I(output;targets) correlation with targets 0. [sent-114, score-0.634]
58 A Modulation of input rates to input groups 1 and 2. [sent-123, score-0.158]
59 B Evolution of weights during 60 minutes of learning (bright: strong synapses, $w_{ij} \approx 1$; dark: depressed synapses, $w_{ij} \approx 0$). [sent-124, score-0.409]
60 C Output rate and rate of target signal 1 during 5 seconds after learning. [sent-128, score-0.305]
61 D Evolution of the average mutual information per time bin (solid line, left scale) between input and output and the Kullback-Leibler divergence per time bin (dashed line, right scale) as a function of time. [sent-129, score-0.558]
62 E Evolution of the average mutual information per time bin between output and both target spike trains as a function of time. [sent-131, score-0.968]
63 F Trace of the correlation between output rate and rate of target signal 1 (solid line) and the spike-spike correlation (dashed line) between the output and target spike train 2 during learning. [sent-132, score-1.163]
64 At each time step, each target signal is independently set to 0 with a certain probability ($10^{-5}$) and remains silent for a duration chosen from a Gaussian distribution with mean 5 s and SD 1 s (minimum duration is 1 s). [sent-134, score-0.288]
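This silencing mechanism is easy to make concrete; a sketch under the parameters just quoted (function name and interface are ours):

```python
import numpy as np

# Random silencing of a target signal: at each time step the signal is switched
# off with probability 1e-5, for a duration drawn from a Gaussian with mean 5 s
# and SD 1 s, clipped at a minimum of 1 s.
def silence_mask(num_steps, dt=1e-3, p_off=1e-5, mean=5.0, sd=1.0, min_dur=1.0, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    mask = np.ones(num_steps, dtype=bool)      # True = target signal available
    k = 0
    while k < num_steps:
        if rng.random() < p_off:
            dur_steps = int(max(min_dur, rng.normal(mean, sd)) / dt)
            mask[k:k + dur_steps] = False
            k += dur_steps
        else:
            k += 1
    return mask
```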
65 Hence this experiment tests whether learning works even if the target signals are not available all of the time. [sent-135, score-0.197]
66 Figure 2 shows that strong weights evolve for the first and third group of synapses, whereas the efficacies for the remaining inputs are depressed. [sent-136, score-0.226]
67 Both groups with growing weights are correlated with one of the target signals, therefore the mutual information between output and target spike trains increases. [sent-137, score-1.149]
68 Since spike-spike correlations convey more information than rate modulations, synaptic efficacies develop more strongly to group 3 (the group with spike-spike correlations). [sent-138, score-0.345]
69 This results in an initial decrease in correlation with the rate-modulated target to the benefit of higher correlation with the second target. [sent-139, score-0.332]
70 However, after about 30 minutes when the weights become stable, the correlations as well as the mutual information quantities stay roughly constant. [sent-140, score-0.343]
71 An application of the simplified rule (12) to the same task is shown in Figure 3 where it can be seen that strong weights close to wmax are developed for the rate-modulated input. [sent-141, score-0.277]
72 To some extent weights grow also for the inputs with spike-spike correlations in order to reach the constant target firing rate $\tilde g$. [sent-142, score-0.39]
73 In contrast to the spike-based rule, the simplified rule is not able to detect spike-spike correlations between output and target spike trains. [sent-143, score-0.807]
74 (Footnote 4) The rate of the first 25 inputs is modulated by a Gaussian white-noise signal with mean 20 Hz that has been low-pass filtered with a cut-off frequency of 5 Hz. [sent-144, score-0.156]
75 Synapses 26 to 50 receive a rate that has a constant value of 2Hz, except that a burst is initiated at each time step with a probability of 0. [sent-145, score-0.2]
76 A Evolution of weights during 30 minutes of learning (bright: strong synapses, $w_{ij} \approx 1$; dark: depressed synapses, $w_{ij} \approx 0$). [sent-168, score-0.409]
77 B Evolution of the average mutual information per time bin (solid line, left scale) between input and output and the Kullback-Leibler divergence per time bin (dashed line, right scale) as a function of time. [sent-172, score-0.558]
78 C Trace of the correlation between output rate and target rate during learning. [sent-174, score-0.472]
79 5 Extracting Independent Components. With a slight modification in the objective function (7) the learning rule allows us to extract statistically independent components from an ensemble of input spike trains. [sent-176, score-0.547]
80 We consider two neurons receiving the same input at their synapses (see Figure 1B). [sent-177, score-0.35]
81 For both neurons i = 1, 2 we maximize information transmission under the constraint that their outputs stay as statistically independent from each other as possible. [sent-178, score-0.371]
82 Since the same terms (up to the sign) are optimized in (7) and in the modified objective (15), we can derive a gradient ascent rule for the weights of neuron i, $w_{ij}$, analogously to section 3: $\frac{\Delta w_{ij}^k}{\Delta t} = \alpha\, C_{ij}^k \left[ B_i^k(\gamma) - \beta\,\Delta t\, B_{12}^k \right]$. (16) [sent-180, score-0.624]
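Structurally, rule (16) (like (10) and (14)) multiplies a synapse-specific correlation term by a global postsynaptic factor and keeps the weights within their bounds. A skeleton of one update step, with the terms of the paper's Eqs. (8)-(11) assumed to be supplied externally since they are not reproduced in this excerpt:

```python
import numpy as np

# Structural sketch of one step of rule (16). C holds the per-synapse
# correlation terms C_ij^k, B_i the postsynaptic term B_i^k(gamma), and B_12
# the dependence term B_12^k; their definitions are not reproduced here and
# are assumed to be computed elsewhere.
def update_weights(w, C, B_i, B_12, alpha, beta, dt=1e-3, w_max=1.0):
    """w, C: arrays of shape (N,); B_i, B_12: scalars for this time step."""
    dw_dt = alpha * C * (B_i - beta * dt * B_12)   # right-hand side of (16)
    return np.clip(w + dw_dt * dt, 0.0, w_max)     # keep 0 <= w <= w_max
```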
83 Figure 4 shows the results of an experiment where two neurons receive the same Poisson input with a rate of 20 Hz at their 100 synapses. [sent-181, score-0.301]
84 The input is divided into two groups of 40 spike trains each, such that synapses 1 to 40 and 41 to 80 receive correlated input with a correlation coefficient of 0.5 within each group; [sent-182, score-0.94]
85 however, any spike trains belonging to different input groups are uncorrelated. [sent-183, score-0.554]
86 Weights close to the maximal efficacy wmax = 1 are developed for one of the groups of synapses that receives correlated input (group 2 in this case) whereas those for the other correlated group (group 1) as well as those for the uncorrelated group (group 3) stay low. [sent-185, score-0.64]
87 Neuron 2 develops strong weights to the other correlated group of synapses (group 1) whereas the efficacies of the second correlated group (group 2) remain depressed, thereby trying to produce a statistically independent output. [sent-186, score-0.561]
88 For both neurons the mutual information is maximized and the target output distribution of a constant firing rate of 30Hz is approached well. [sent-187, score-0.605]
89 After an initial increase in the mutual information and in the correlation between the outputs, when the weights of both neurons start to grow simultaneously, the amounts of information and correlation drop as both neurons develop strong efficacies to different parts of the input. [sent-188, score-0.753]
90 6 Discussion. Information Bottleneck (IB) and Independent Component Analysis (ICA) have been proposed as general principles for unsupervised learning in lower cortical areas; however, learning rules that can implement these principles with spiking neurons have been missing. [sent-189, score-0.385]
91 In this article we have derived, from information-theoretic principles, learning rules which enable a stochastically spiking neuron to solve these tasks. [sent-190, score-0.586]
92 These learning rules are optimal from the perspective of information theory, but they are not local in the sense that they use only information that is available at a single synapse. [sent-191, score-0.823]
(Figure 4, panels A–F: plot residue removed. Axis and panel labels: weights of neuron 1 and neuron 2, synapse idx, t [min], MI/KLD of neuron 1 and neuron 2, correlation between outputs; see the panel descriptions below.)
94 A,B Evolution of weights during 30 minutes of learning for both postsynaptic neurons (red: strong synapses, $w_{ij} \approx 1$; blue: depressed synapses, $w_{ij} \approx 0$). [sent-213, score-0.769]
95 C Evolution of the average mutual information per time bin between both output spike trains as a function of time. [sent-217, score-0.828]
96 D,E Evolution of the average mutual information per time bin (solid line, left scale) between input and output and the Kullback-Leibler divergence per time bin for both neurons (dashed line, right scale) as a function of time. [sent-218, score-0.702]
97 F Trace of the correlation between both output spike trains during learning. [sent-220, score-0.643]
98 Rather, they tell us what type of information would have to be ideally provided by such an auxiliary network, and how the synapse should change its efficacy in order to approximate a theoretically optimal learning rule. [sent-223, score-0.175]
99 Generalized Bienenstock-Cooper-Munro rule for spiking neurons that maximizes information transmission. [sent-245, score-0.415]
100 Theory for the development of neuron selectivity: orientation specificity and binocular interaction in visual cortex. [sent-271, score-0.301]
wordName wordTfidf (topN-words)
[('tk', 0.569), ('neuron', 0.301), ('spike', 0.285), ('postsynaptic', 0.216), ('yik', 0.204), ('trains', 0.166), ('ring', 0.163), ('synapses', 0.151), ('neurons', 0.144), ('target', 0.14), ('mutual', 0.129), ('spiking', 0.128), ('synapse', 0.124), ('rule', 0.117), ('wij', 0.114), ('output', 0.096), ('correlation', 0.096), ('bottleneck', 0.091), ('xk', 0.089), ('bin', 0.077), ('evolution', 0.076), ('group', 0.073), ('ib', 0.071), ('rate', 0.07), ('bcm', 0.068), ('cacies', 0.068), ('idx', 0.068), ('refractoriness', 0.068), ('wmax', 0.068), ('weights', 0.067), ('presynaptic', 0.063), ('yi', 0.063), ('inputs', 0.061), ('abs', 0.059), ('depressed', 0.059), ('signals', 0.057), ('input', 0.055), ('ui', 0.054), ('correlated', 0.052), ('correlations', 0.052), ('synaptic', 0.051), ('refractory', 0.05), ('train', 0.049), ('groups', 0.048), ('cij', 0.047), ('burst', 0.047), ('membrane', 0.045), ('dkl', 0.04), ('maximize', 0.04), ('statistically', 0.039), ('principles', 0.039), ('stay', 0.039), ('hz', 0.037), ('averages', 0.037), ('rules', 0.035), ('term', 0.034), ('tn', 0.034), ('extraction', 0.034), ('homeostatic', 0.034), ('ster', 0.034), ('stochastically', 0.034), ('calculated', 0.033), ('duration', 0.033), ('gi', 0.033), ('lk', 0.032), ('receive', 0.032), ('minutes', 0.03), ('modulation', 0.03), ('transmission', 0.03), ('silent', 0.03), ('ref', 0.03), ('independent', 0.029), ('uncorrelated', 0.029), ('min', 0.029), ('bright', 0.027), ('ur', 0.027), ('gerstner', 0.027), ('time', 0.027), ('bi', 0.026), ('information', 0.026), ('poisson', 0.026), ('signal', 0.025), ('change', 0.025), ('plasticity', 0.025), ('segments', 0.025), ('ascent', 0.025), ('strong', 0.025), ('simultaneously', 0.024), ('outputs', 0.024), ('coef', 0.024), ('graz', 0.024), ('step', 0.024), ('simpli', 0.023), ('extracting', 0.023), ('article', 0.023), ('sd', 0.023), ('per', 0.022), ('extract', 0.022), ('optimization', 0.022), ('running', 0.022)]
simIndex simValue paperId paperTitle
same-paper 1 0.99999982 99 nips-2006-Information Bottleneck Optimization and Independent Component Extraction with Spiking Neurons
Author: Stefan Klampfl, Wolfgang Maass, Robert A. Legenstein
Abstract: The extraction of statistically independent components from high-dimensional multi-sensory input streams is assumed to be an essential component of sensory processing in the brain. Such independent component analysis (or blind source separation) could provide a less redundant representation of information about the external world. Another powerful processing strategy is to extract preferentially those components from high-dimensional input streams that are related to other information sources, such as internal predictions or proprioceptive feedback. This strategy allows the optimization of internal representation according to the information bottleneck method. However, concrete learning rules that implement these general unsupervised learning principles for spiking neurons are still missing. We show how both information bottleneck optimization and the extraction of independent components can in principle be implemented with stochastically spiking neurons with refractoriness. The new learning rule that achieves this is derived from abstract information optimization principles. 1
2 0.39954022 187 nips-2006-Temporal Coding using the Response Properties of Spiking Neurons
Author: Thomas Voegtlin
Abstract: In biological neurons, the timing of a spike depends on the timing of synaptic currents, in a way that is classically described by the Phase Response Curve. This has implications for temporal coding: an action potential that arrives on a synapse has an implicit meaning, that depends on the position of the postsynaptic neuron on the firing cycle. Here we show that this implicit code can be used to perform computations. Using theta neurons, we derive a spike-timing dependent learning rule from an error criterion. We demonstrate how to train an auto-encoder neural network using this rule. 1
3 0.29162234 162 nips-2006-Predicting spike times from subthreshold dynamics of a neuron
Author: Ryota Kobayashi, Shigeru Shinomoto
Abstract: It has been established that a neuron reproduces highly precise spike response to identical fluctuating input currents. We wish to accurately predict the firing times of a given neuron for any input current. For this purpose we adopt a model that mimics the dynamics of the membrane potential, and then take a cue from its dynamics for predicting the spike occurrence for a novel input current. It is found that the prediction is significantly improved by observing the state space of the membrane potential and its time derivative(s) in advance of a possible spike, in comparison to simply thresholding an instantaneous value of the estimated potential. 1
4 0.28546378 59 nips-2006-Context dependent amplification of both rate and event-correlation in a VLSI network of spiking neurons
Author: Elisabetta Chicca, Giacomo Indiveri, Rodney J. Douglas
Abstract: Cooperative competitive networks are believed to play a central role in cortical processing and have been shown to exhibit a wide set of useful computational properties. We propose a VLSI implementation of a spiking cooperative competitive network and show how it can perform context dependent computation both in the mean firing rate domain and in spike timing correlation space. In the mean rate case the network amplifies the activity of neurons belonging to the selected stimulus and suppresses the activity of neurons receiving weaker stimuli. In the event correlation case, the recurrent network amplifies with a higher gain the correlation between neurons which receive highly correlated inputs while leaving the mean firing rate unaltered. We describe the network architecture and present experimental data demonstrating its context dependent computation capabilities. 1
5 0.23977651 197 nips-2006-Uncertainty, phase and oscillatory hippocampal recall
Author: Máté Lengyel, Peter Dayan
Abstract: Many neural areas, notably, the hippocampus, show structured, dynamical, population behavior such as coordinated oscillations. It has long been observed that such oscillations provide a substrate for representing analog information in the firing phases of neurons relative to the underlying population rhythm. However, it has become increasingly clear that it is essential for neural populations to represent uncertainty about the information they capture, and the substantial recent work on neural codes for uncertainty has omitted any analysis of oscillatory systems. Here, we observe that, since neurons in an oscillatory network need not only fire once in each cycle (or even at all), uncertainty about the analog quantities each neuron represents by its firing phase might naturally be reported through the degree of concentration of the spikes that it fires. We apply this theory to memory in a model of oscillatory associative recall in hippocampal area CA3. Although it is not well treated in the literature, representing and manipulating uncertainty is fundamental to competent memory; our theory enables us to view CA3 as an effective uncertainty-aware, retrieval system. 1
6 0.22719526 36 nips-2006-Attentional Processing on a Spike-Based VLSI Neural Network
7 0.20302029 154 nips-2006-Optimal Change-Detection and Spiking Neurons
8 0.19221246 18 nips-2006-A selective attention multi--chip system with dynamic synapses and spiking neurons
9 0.18197152 17 nips-2006-A recipe for optimizing a time-histogram
10 0.1296906 145 nips-2006-Neurophysiological Evidence of Cooperative Mechanisms for Stereo Computation
11 0.11848892 165 nips-2006-Real-time adaptive information-theoretic optimization of neurophysiology experiments
12 0.071194194 125 nips-2006-Logarithmic Online Regret Bounds for Undiscounted Reinforcement Learning
13 0.070784457 189 nips-2006-Temporal dynamics of information content carried by neurons in the primary visual cortex
14 0.069261231 148 nips-2006-Nonlinear physically-based models for decoding motor-cortical population activity
15 0.068388604 65 nips-2006-Denoising and Dimension Reduction in Feature Space
16 0.068156525 3 nips-2006-A Complexity-Distortion Approach to Joint Pattern Alignment
17 0.063806951 100 nips-2006-Information Bottleneck for Non Co-Occurrence Data
18 0.056734975 190 nips-2006-The Neurodynamics of Belief Propagation on Binary Markov Random Fields
19 0.056461647 34 nips-2006-Approximate Correspondences in High Dimensions
20 0.054554075 16 nips-2006-A Theory of Retinal Population Coding
topicId topicWeight
[(0, -0.197), (1, -0.508), (2, -0.01), (3, 0.146), (4, 0.047), (5, 0.124), (6, -0.024), (7, 0.042), (8, 0.023), (9, -0.008), (10, -0.084), (11, 0.019), (12, 0.039), (13, 0.05), (14, 0.1), (15, 0.014), (16, -0.071), (17, -0.027), (18, -0.0), (19, -0.109), (20, 0.019), (21, -0.019), (22, -0.002), (23, -0.013), (24, 0.013), (25, 0.082), (26, 0.106), (27, 0.016), (28, -0.027), (29, 0.051), (30, 0.008), (31, 0.079), (32, -0.018), (33, -0.007), (34, 0.051), (35, 0.087), (36, -0.053), (37, -0.019), (38, 0.099), (39, -0.034), (40, 0.065), (41, -0.052), (42, -0.051), (43, 0.032), (44, -0.0), (45, -0.0), (46, -0.057), (47, -0.018), (48, -0.031), (49, 0.039)]
simIndex simValue paperId paperTitle
same-paper 1 0.97899085 99 nips-2006-Information Bottleneck Optimization and Independent Component Extraction with Spiking Neurons
Author: Stefan Klampfl, Wolfgang Maass, Robert A. Legenstein
Abstract: The extraction of statistically independent components from high-dimensional multi-sensory input streams is assumed to be an essential component of sensory processing in the brain. Such independent component analysis (or blind source separation) could provide a less redundant representation of information about the external world. Another powerful processing strategy is to extract preferentially those components from high-dimensional input streams that are related to other information sources, such as internal predictions or proprioceptive feedback. This strategy allows the optimization of internal representation according to the information bottleneck method. However, concrete learning rules that implement these general unsupervised learning principles for spiking neurons are still missing. We show how both information bottleneck optimization and the extraction of independent components can in principle be implemented with stochastically spiking neurons with refractoriness. The new learning rule that achieves this is derived from abstract information optimization principles. 1
2 0.87218338 187 nips-2006-Temporal Coding using the Response Properties of Spiking Neurons
Author: Thomas Voegtlin
Abstract: In biological neurons, the timing of a spike depends on the timing of synaptic currents, in a way that is classically described by the Phase Response Curve. This has implications for temporal coding: an action potential that arrives on a synapse has an implicit meaning, that depends on the position of the postsynaptic neuron on the firing cycle. Here we show that this implicit code can be used to perform computations. Using theta neurons, we derive a spike-timing dependent learning rule from an error criterion. We demonstrate how to train an auto-encoder neural network using this rule. 1
3 0.86401838 162 nips-2006-Predicting spike times from subthreshold dynamics of a neuron
Author: Ryota Kobayashi, Shigeru Shinomoto
Abstract: It has been established that a neuron reproduces highly precise spike response to identical fluctuating input currents. We wish to accurately predict the firing times of a given neuron for any input current. For this purpose we adopt a model that mimics the dynamics of the membrane potential, and then take a cue from its dynamics for predicting the spike occurrence for a novel input current. It is found that the prediction is significantly improved by observing the state space of the membrane potential and its time derivative(s) in advance of a possible spike, in comparison to simply thresholding an instantaneous value of the estimated potential. 1
4 0.72834373 197 nips-2006-Uncertainty, phase and oscillatory hippocampal recall
Author: Máté Lengyel, Peter Dayan
Abstract: Many neural areas, notably, the hippocampus, show structured, dynamical, population behavior such as coordinated oscillations. It has long been observed that such oscillations provide a substrate for representing analog information in the firing phases of neurons relative to the underlying population rhythm. However, it has become increasingly clear that it is essential for neural populations to represent uncertainty about the information they capture, and the substantial recent work on neural codes for uncertainty has omitted any analysis of oscillatory systems. Here, we observe that, since neurons in an oscillatory network need not only fire once in each cycle (or even at all), uncertainty about the analog quantities each neuron represents by its firing phase might naturally be reported through the degree of concentration of the spikes that it fires. We apply this theory to memory in a model of oscillatory associative recall in hippocampal area CA3. Although it is not well treated in the literature, representing and manipulating uncertainty is fundamental to competent memory; our theory enables us to view CA3 as an effective uncertainty-aware, retrieval system. 1
5 0.71619982 18 nips-2006-A selective attention multi--chip system with dynamic synapses and spiking neurons
Author: Chiara Bartolozzi, Giacomo Indiveri
Abstract: Selective attention is the strategy used by biological sensory systems to solve the problem of limited parallel processing capacity: salient subregions of the input stimuli are serially processed, while non–salient regions are suppressed. We present an mixed mode analog/digital Very Large Scale Integration implementation of a building block for a multi–chip neuromorphic hardware model of selective attention. We describe the chip’s architecture and its behavior, when its is part of a multi–chip system with a spiking retina as input, and show how it can be used to implement in real-time flexible models of bottom-up attention. 1
6 0.68629086 36 nips-2006-Attentional Processing on a Spike-Based VLSI Neural Network
8 0.62180251 17 nips-2006-A recipe for optimizing a time-histogram
9 0.58059311 154 nips-2006-Optimal Change-Detection and Spiking Neurons
10 0.3381452 189 nips-2006-Temporal dynamics of information content carried by neurons in the primary visual cortex
11 0.29093567 145 nips-2006-Neurophysiological Evidence of Cooperative Mechanisms for Stereo Computation
12 0.27979001 107 nips-2006-Large Margin Multi-channel Analog-to-Digital Conversion with Applications to Neural Prosthesis
13 0.26199669 190 nips-2006-The Neurodynamics of Belief Propagation on Binary Markov Random Fields
14 0.24854505 165 nips-2006-Real-time adaptive information-theoretic optimization of neurophysiology experiments
15 0.24469501 16 nips-2006-A Theory of Retinal Population Coding
16 0.23907857 148 nips-2006-Nonlinear physically-based models for decoding motor-cortical population activity
17 0.22762528 34 nips-2006-Approximate Correspondences in High Dimensions
18 0.20969978 178 nips-2006-Sparse Multinomial Logistic Regression via Bayesian L1 Regularisation
19 0.20503482 100 nips-2006-Information Bottleneck for Non Co-Occurrence Data
20 0.20311531 71 nips-2006-Effects of Stress and Genotype on Meta-parameter Dynamics in Reinforcement Learning
topicId topicWeight
[(1, 0.126), (3, 0.019), (7, 0.053), (9, 0.057), (15, 0.23), (22, 0.054), (44, 0.066), (57, 0.059), (65, 0.033), (69, 0.041), (71, 0.123), (90, 0.012)]
simIndex simValue paperId paperTitle
same-paper 1 0.80313772 99 nips-2006-Information Bottleneck Optimization and Independent Component Extraction with Spiking Neurons
Author: Stefan Klampfl, Wolfgang Maass, Robert A. Legenstein
Abstract: The extraction of statistically independent components from high-dimensional multi-sensory input streams is assumed to be an essential component of sensory processing in the brain. Such independent component analysis (or blind source separation) could provide a less redundant representation of information about the external world. Another powerful processing strategy is to extract preferentially those components from high-dimensional input streams that are related to other information sources, such as internal predictions or proprioceptive feedback. This strategy allows the optimization of internal representation according to the information bottleneck method. However, concrete learning rules that implement these general unsupervised learning principles for spiking neurons are still missing. We show how both information bottleneck optimization and the extraction of independent components can in principle be implemented with stochastically spiking neurons with refractoriness. The new learning rule that achieves this is derived from abstract information optimization principles. 1
2 0.76432103 45 nips-2006-Blind Motion Deblurring Using Image Statistics
Author: Anat Levin
Abstract: We address the problem of blind motion deblurring from a single image, caused by a few moving objects. In such situations only part of the image may be blurred, and the scene consists of layers blurred in different degrees. Most of of existing blind deconvolution research concentrates at recovering a single blurring kernel for the entire image. However, in the case of different motions, the blur cannot be modeled with a single kernel, and trying to deconvolve the entire image with the same kernel will cause serious artifacts. Thus, the task of deblurring needs to involve segmentation of the image into regions with different blurs. Our approach relies on the observation that the statistics of derivative filters in images are significantly changed by blur. Assuming the blur results from a constant velocity motion, we can limit the search to one dimensional box filter blurs. This enables us to model the expected derivatives distributions as a function of the width of the blur kernel. Those distributions are surprisingly powerful in discriminating regions with different blurs. The approach produces convincing deconvolution results on real world images with rich texture.
3 0.68445206 191 nips-2006-The Robustness-Performance Tradeoff in Markov Decision Processes
Author: Huan Xu, Shie Mannor
Abstract: Computation of a satisfactory control policy for a Markov decision process when the parameters of the model are not exactly known is a problem encountered in many practical applications. The traditional robust approach is based on a worstcase analysis and may lead to an overly conservative policy. In this paper we consider the tradeoff between nominal performance and the worst case performance over all possible models. Based on parametric linear programming, we propose a method that computes the whole set of Pareto efficient policies in the performancerobustness plane when only the reward parameters are subject to uncertainty. In the more general case when the transition probabilities are also subject to error, we show that the strategy with the “optimal” tradeoff might be non-Markovian and hence is in general not tractable. 1
4 0.67227536 135 nips-2006-Modelling transcriptional regulation using Gaussian Processes
Author: Neil D. Lawrence, Guido Sanguinetti, Magnus Rattray
Abstract: Modelling the dynamics of transcriptional processes in the cell requires the knowledge of a number of key biological quantities. While some of them are relatively easy to measure, such as mRNA decay rates and mRNA abundance levels, it is still very hard to measure the active concentration levels of the transcription factor proteins that drive the process and the sensitivity of target genes to these concentrations. In this paper we show how these quantities for a given transcription factor can be inferred from gene expression levels of a set of known target genes. We treat the protein concentration as a latent function with a Gaussian process prior, and include the sensitivities, mRNA decay rates and baseline expression levels as hyperparameters. We apply this procedure to a human leukemia dataset, focusing on the tumour repressor p53 and obtaining results in good accordance with recent biological studies.
5 0.65391803 187 nips-2006-Temporal Coding using the Response Properties of Spiking Neurons
Author: Thomas Voegtlin
Abstract: In biological neurons, the timing of a spike depends on the timing of synaptic currents, in a way that is classically described by the Phase Response Curve. This has implications for temporal coding: an action potential that arrives on a synapse has an implicit meaning, that depends on the position of the postsynaptic neuron on the firing cycle. Here we show that this implicit code can be used to perform computations. Using theta neurons, we derive a spike-timing dependent learning rule from an error criterion. We demonstrate how to train an auto-encoder neural network using this rule. 1
6 0.62698555 36 nips-2006-Attentional Processing on a Spike-Based VLSI Neural Network
7 0.62190914 162 nips-2006-Predicting spike times from subthreshold dynamics of a neuron
8 0.61535931 145 nips-2006-Neurophysiological Evidence of Cooperative Mechanisms for Stereo Computation
9 0.60361326 154 nips-2006-Optimal Change-Detection and Spiking Neurons
10 0.60357356 165 nips-2006-Real-time adaptive information-theoretic optimization of neurophysiology experiments
11 0.59196985 167 nips-2006-Recursive ICA
12 0.58913195 59 nips-2006-Context dependent amplification of both rate and event-correlation in a VLSI network of spiking neurons
13 0.58429205 65 nips-2006-Denoising and Dimension Reduction in Feature Space
14 0.57784224 32 nips-2006-Analysis of Empirical Bayesian Methods for Neuroelectromagnetic Source Localization
15 0.57675833 16 nips-2006-A Theory of Retinal Population Coding
16 0.57186341 175 nips-2006-Simplifying Mixture Models through Function Approximation
17 0.57151651 189 nips-2006-Temporal dynamics of information content carried by neurons in the primary visual cortex
18 0.57010728 75 nips-2006-Efficient sparse coding algorithms
19 0.56949365 138 nips-2006-Multi-Task Feature Learning
20 0.5690065 76 nips-2006-Emergence of conjunctive visual features by quadratic independent component analysis