nips nips2007 nips2007-177 knowledge-graph by maker-knowledge-mining

177 nips-2007-Simplified Rules and Theoretical Analysis for Information Bottleneck Optimization and PCA with Spiking Neurons


Source: pdf

Author: Lars Buesing, Wolfgang Maass

Abstract: We show that under suitable assumptions (primarily linearization) a simple and perspicuous online learning rule for Information Bottleneck optimization with spiking neurons can be derived. This rule performs on common benchmark tasks as well as a rather complex rule that has previously been proposed [1]. Furthermore, the transparency of this new learning rule makes a theoretical analysis of its convergence properties feasible. A variation of this learning rule (with sign changes) provides a theoretically founded method for performing Principal Component Analysis (PCA) with spiking neurons. By applying this rule to an ensemble of neurons, different principal components of the input can be extracted. In addition, it is possible to preferentially extract those principal components from incoming signals X that are related or are not related to some additional target signal YT . In a biological interpretation, this target signal YT (also called relevance variable) could represent proprioceptive feedback, input from other sensory modalities, or top-down signals. 1

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Abstract: We show that under suitable assumptions (primarily linearization) a simple and perspicuous online learning rule for Information Bottleneck optimization with spiking neurons can be derived. [sent-3, score-0.725]

2 This rule performs on common benchmark tasks as well as a rather complex rule that has previously been proposed [1]. [sent-4, score-0.534]

3 Furthermore, the transparency of this new learning rule makes a theoretical analysis of its convergence properties feasible. [sent-5, score-0.267]

4 A variation of this learning rule (with sign changes) provides a theoretically founded method for performing Principal Component Analysis (PCA) with spiking neurons. [sent-6, score-0.433]

5 By applying this rule to an ensemble of neurons, different principal components of the input can be extracted. [sent-7, score-0.501]

6 In addition, it is possible to preferentially extract those principal components from incoming signals X that are related or are not related to some additional target signal YT . [sent-8, score-0.563]

7 In a biological interpretation, this target signal YT (also called relevance variable) could represent proprioceptive feedback, input from other sensory modalities, or top-down signals. [sent-9, score-0.421]

8 The learning goal is to learn a transformation from X into another signal Y that extracts only those components from X that are related to the relevance signal YT . [sent-12, score-0.398]

9 In this article Y will simply be the spike output of a neuron that receives the spike trains X as inputs. [sent-14, score-0.852]

10 The starting point for our analysis is the first learning rule for IB optimization for this setup, which has recently been proposed in [1], [3]. [sent-15, score-0.305]

11 Unfortunately, this learning rule is complicated, is restricted to discrete time, and does not permit a theoretical analysis of its behavior. [sent-16, score-0.267]

12 Any online learning rule for IB optimization has to make a number of simplifying assumptions, since true IB optimization can only be carried out in an offline setting. [sent-17, score-0.389]

13 We show here that, with a slightly different set of assumptions than those made in [1] and [3], one arrives at a drastically simpler and intuitively perspicuous online learning rule for IB optimization with spiking neurons. [sent-18, score-0.56]

14 The learning rule in [1] was derived by maximizing the objective function $L_0 = -I(X, Y) + \beta I(Y, Y_T) - \gamma D_{KL}(P(Y) \,\|\, \tilde{P}(Y))$ (1), where the term $D_{KL}(P(Y) \,\|\, \tilde{P}(Y))$ denotes the Kullback-Leibler divergence between the distribution $P(Y)$ and a target distribution $\tilde{P}(Y)$. [sent-19, score-0.378]

15 The target signal YT was assumed to be given by a spike train. [sent-24, score-0.406]

16 The learning rule from [1] (see [3] for a detailed interpretation) is quite involved and requires numerous auxiliary definitions (hence we cannot repeat it in this abstract). [sent-25, score-0.294]

17 … 4 in [3], concerning the expectation value of the neural firing probability $\rho$ at time step k, given the information about the postsynaptic spikes and the target signal spikes up to the preceding time step k − 1 (see our detailed discussion in [4]). [sent-28, score-0.353]

18 In section 2 of this paper, we propose a much simpler learning rule for IB optimization with spiking neurons, which can also be formulated in continuous time. [sent-30, score-0.445]

19 Further simplifications in comparison to [3] are achieved by considering a simpler neuron model (the linear Poisson neuron, see [5]). [sent-32, score-0.226]

20 However we show through computer simulation in [4] that the resulting simple learning rule performs equally well for the more complex neuron model with refractoriness from [1] - [5]. [sent-33, score-0.493]

21 The learning rule presented here can be analyzed by means of the drift function of the corresponding Fokker-Planck equation. [sent-34, score-0.323]

22 A link between the presented learning rule and Principal Component Analysis (PCA) is established in section 5. [sent-36, score-0.267]

23 A more detailed comparison of the learning rule presented here and the one of [3] as well as results of extensive computer tests on common benchmark tasks can be found in [4]. [sent-37, score-0.294]

24 2 Neuron model and learning rule for IB optimization. We consider a linear Poisson neuron with N synapses of weights $w = (w_1, \dots, w_N)$. [sent-38, score-0.679]

25–26 It is driven by the input X, consisting of N spike trains $X_j(t) = \sum_i \delta(t - t^i_j)$, $j \in \{1, \dots, N\}$, where $t^i_j$ denotes the time of the i-th spike at synapse j. [sent-42, score-0.458] [sent-45, score-0.303]

27–28 The membrane potential u(t) of the neuron at time t is given by the weighted sum of the presynaptic activities $\nu(t) = (\nu_1(t), \dots, \nu_N(t))$: $u(t) = \sum_{j=1}^{N} w_j \nu_j(t)$ (2), with $\nu_j(t) = \int_{-\infty}^{t} \epsilon(t - s)\, X_j(s)\, ds$. [sent-46, score-0.289] [sent-49, score-0.225]

29 The kernel $\epsilon(\cdot)$ models the EPSP of a single spike (in simulations $\epsilon(t)$ was chosen to be a decaying exponential with a time constant of $\tau_m = 10$ ms). [sent-51, score-0.278]

30 The postsynaptic neuron spikes at time t with the probability density $g(t) = u(t)/u_0$, with $u_0$ being a normalization constant. [sent-52, score-0.323]

31 The postsynaptic spike train is denoted as $Y(t) = \sum_i \delta(t - t^i)$, with the firing times $t^i$. [sent-53, score-0.372]
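To make the neuron model above concrete, the following Python sketch simulates a discretized linear Poisson neuron: input spikes are filtered by an exponential EPSP kernel to give the activities $\nu_j(t)$, the membrane potential is the weighted sum of Eq. (2), and output spikes are drawn with probability density $g(t) = u(t)/u_0$. This is a minimal sketch under an assumed time discretization; the function name, the time step and the parameter values are illustrative and not taken from the paper.

```python
import numpy as np

def simulate_neuron(spikes, w, dt=1e-3, tau_m=10e-3, u0=100.0, rng=None):
    """Discrete-time sketch of the linear Poisson neuron: Eq. (2) and g(t) = u(t)/u0.

    spikes : (T, N) array of 0/1 input spikes per time bin (the trains X_j)
    w      : (N,) synaptic weight vector
    Returns the membrane potential u, the EPSP traces nu, and the output train y.
    """
    rng = np.random.default_rng() if rng is None else rng
    T, N = spikes.shape
    nu = np.zeros((T, N))            # presynaptic activities nu_j(t)
    u = np.zeros(T)                  # membrane potential u(t)
    y = np.zeros(T)                  # postsynaptic spike train Y(t)
    decay = np.exp(-dt / tau_m)      # exponential EPSP kernel epsilon
    trace = np.zeros(N)
    for t in range(T):
        trace = trace * decay + spikes[t] / tau_m   # convolution of X_j with epsilon
        nu[t] = trace
        u[t] = w @ trace                            # Eq. (2): u(t) = sum_j w_j nu_j(t)
        p = np.clip(u[t] / u0 * dt, 0.0, 1.0)       # spike probability in this bin, g(t) dt
        y[t] = float(rng.random() < p)
    return u, nu, y
```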

32 As in [6], we introduce a further term $L_3$ into the objective function that reflects the higher metabolic costs for the neuron to maintain strong synapses, a natural, simple choice being $L_3 = -\lambda \sum_j w_j^2$. [sent-55, score-0.484]

33 Thus the complete objective function L to maximize is $L = L_{IB} - \lambda \sum_{j=1}^{N} w_j^2 = -I(X, Y) + \beta I(Y, Y_T) - \lambda \sum_{j=1}^{N} w_j^2$ (3). [sent-56, score-0.258]

34 2 The objective function L differs slightly from L0 given in (1), which was optimized in [3]; this change turned out to be advantageous for the PCA learning rule given in section 5, without significantly changing the characteristics of the IB learning rule. [sent-59, score-0.338]

35 The online learning rule governing the change of the weights $w_j(t)$ at time t is obtained by a gradient ascent of the objective function L: $\frac{d}{dt} w_j(t) = \alpha \frac{\partial L}{\partial w_j}$. [sent-60, score-0.852]

36 The operator $F[Y_T](t)$ appearing in (4) is equal to the expectation value of the membrane potential, $\langle u(t) \rangle_{X|Y_T} = E[u(t)\,|\,Y_T]$, given the observations $(Y_T(\tau) \,|\, \tau \in \mathbb{R})$ of the relevance signal; F is thus closely linked to estimation and filtering theory. [sent-65, score-0.196]

37 According to (7), F is approximated by a convolution $u_T(t)$ of the relevance signal $Y_T$ and a suitable prefactor c. [sent-73, score-0.267]

38 Using the above definitions, the resulting learning rule is given by (in vector notation): $\frac{d}{dt} w(t) = \alpha \, \frac{Y(t)\, \nu(t)}{u(t)\, \bar{u}(t)} \left[ -(u(t) - \bar{u}(t)) + c(t)\, \beta\, (u_T(t) - \bar{u}_T(t)) \right] - \alpha \lambda\, w(t)$ (8). [sent-79, score-0.328]

39 Equation (8) will be called the spike-based learning rule, as the postsynaptic spike train Y(t) explicitly appears. [sent-80, score-0.335]
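As a rough illustration of how the spike-based rule (8) could be applied online, the sketch below performs one Euler step of the weight update per time bin. The barred quantities $\bar{u}(t)$ and $\bar{u}_T(t)$ in (8) are read here as averaged versions of $u(t)$ and $u_T(t)$, passed in as precomputed running averages; this reading, the function name, and the choice of estimator are assumptions for illustration rather than the paper's exact procedure.

```python
import numpy as np

def ib_update_spike_based(w, nu, u, u_bar, y, uT, uT_bar, c, alpha, beta, lam, dt):
    """One Euler step of the spike-based IB rule, Eq. (8) (illustrative sketch).

    w      : (N,) current weights           nu     : (N,) EPSP traces nu(t)
    u      : membrane potential u(t)        u_bar  : running average of u(t)
    y      : 0/1 postsynaptic spike Y(t)    uT     : filtered relevance trace u_T(t)
    uT_bar : running average of u_T(t)      c      : prefactor c(t)
    """
    drive = -(u - u_bar) + c * beta * (uT - uT_bar)
    dw = alpha * (y * nu / (u * u_bar + 1e-12)) * drive - alpha * lam * w
    return w + dw * dt

# The rate-based variant (9) replaces the factor y * nu / (u * u_bar) by
# nu / (u0 * u_bar), so the update no longer depends on individual output spikes.
```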

40 An accompanying rate-based learning rule can also be derived: $\frac{d}{dt} w(t) = \alpha \, \frac{\nu(t)}{u_0\, \bar{u}(t)} \left[ -(u(t) - \bar{u}(t)) + c(t)\, \beta\, (u_T(t) - \bar{u}_T(t)) \right] - \alpha \lambda\, w(t)$ (9). [sent-81, score-0.267]

41 3 Analytical results. The learning rules (8) and (9) are stochastic differential equations for the weights $w_j$, driven by the processes $Y(\cdot)$, … [sent-82, score-0.452]

42 Thus, for $\alpha \ll 1$, the temporal evolution of the learning rules (8) and (9) may be studied via the deterministic differential equation $\frac{d}{dt} \hat{w} = A(\hat{w}) = \alpha \, \frac{1}{\nu_0 u_0 z} \left( -C^0 + \beta C^1 \right) \hat{w} - \alpha \lambda \hat{w}$ (10), with $z = \sum_{j=1}^{N} \hat{w}_j$ (11), where z is the total weight. [sent-88, score-0.471]
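For small learning rates the averaged dynamics (10)–(11) can be integrated directly to study convergence, as in the sketch below. Here C0 and C1 are taken to be the input covariance structure and the input–relevance covariance structure appearing in (10); treating both as N×N arrays, and the names drift and integrate_mean_field, are assumptions made for illustration.

```python
import numpy as np

def drift(w_hat, C0, C1, alpha, beta, lam, nu0, u0):
    """Drift A(w_hat) of the averaged weight dynamics, Eqs. (10)-(11) (sketch)."""
    z = w_hat.sum()                                        # Eq. (11): total weight z
    return alpha / (nu0 * u0 * z) * (-C0 + beta * C1) @ w_hat - alpha * lam * w_hat

def integrate_mean_field(w0, C0, C1, alpha, beta, lam, nu0, u0, dt=0.1, steps=20000):
    """Euler integration of d/dt w_hat = A(w_hat), for studying fixed points."""
    w_hat = w0.copy()
    for _ in range(steps):
        w_hat = w_hat + dt * drift(w_hat, C0, C1, alpha, beta, lam, nu0, u0)
    return w_hat
```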

43 Under this assumption the maximal mutual information between the target signal YT (t) and the output of the neuron Y (t) is obtained by a weight vector w = w∗ that is parallel to the covariance vector C T . [sent-94, score-0.585]

44 The temporal evolution of the average weights $\bar{w}_l = \frac{1}{M} \sum_{j \in G_l} w_j$ of the four different synaptic subgroups $G_l$ is shown. [sent-105, score-0.573]

45 C The performance of the rate-based rule (9); results are analogous to the ones of the spike-based rule. [sent-110, score-0.267]

46 The spike trains $X_j$ and $X_k$, $j \neq k$, are statistically independent if they belong to different subgroups; within a subgroup there is a homogeneous covariance term $C^0_{jk} = c_l$, $j \neq k$, for $j, k \in G_l$, which can be due either to spike-spike correlations or to correlations in rate modulations. [sent-116, score-0.534]

47 The covariance between the target signal YT and the spike trains Xj is homogeneous among a subgroup. [sent-117, score-0.588]

48 The N = 100 synapses form M = 4 subgroups $G_l = \{25(l-1)+1, \dots, 25l\}$. [sent-119, score-0.228]

49 Synapses in $G_1$ receive Poisson spike trains of constant rate $\nu_0$ = 20 Hz, which are mutually spike-spike correlated with a correlation coefficient of 0. [sent-126, score-0.435]

50 Spike trains for G3 and G4 are uncorrelated Poisson trains with a common rate modulation, which is equal to low pass filtered white noise (cut-off frequency 5 Hz) with mean ν0 and standard deviation (SD) σ = ν0 /2. [sent-129, score-0.292]

51 Two spike trains for different synapse subgroups are statistically independent. [sent-131, score-0.536]
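The input ensemble described above could be generated roughly as in the following sketch: spike-spike correlated Poisson trains for a subgroup are obtained by thinning a shared "mother" train (the idea behind the method of [9]), and the rate-modulated subgroups share a low-pass-filtered noise signal as a common rate. The construction details, function names and the simple one-pole filter are assumptions for illustration and need not match the exact procedure used in the paper.

```python
import numpy as np

def correlated_poisson_group(T, n, rate, corr, dt=1e-3, rng=None):
    """n spike-spike correlated Poisson trains via thinning of a shared mother train."""
    rng = np.random.default_rng() if rng is None else rng
    mother = rng.random(T) < (rate / corr) * dt             # mother train of rate rate/corr
    copies = rng.random((T, n)) < corr                      # each synapse keeps a spike w.p. corr
    return (mother[:, None] & copies).astype(float)         # pairwise correlation ~ corr

def rate_modulated_group(T, n, rate, sd, cutoff_hz=5.0, dt=1e-3, rng=None):
    """n uncorrelated Poisson trains sharing a common low-pass-filtered rate modulation."""
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.normal(0.0, 1.0, T)
    a = np.exp(-2.0 * np.pi * cutoff_hz * dt)               # crude one-pole low-pass filter
    r = np.zeros(T)
    for t in range(1, T):
        r[t] = a * r[t - 1] + (1.0 - a) * noise[t]
    r = rate + sd * r / (r.std() + 1e-12)                   # mean nu0 and SD sigma
    return (rng.random((T, n)) < np.clip(r, 0.0, None)[:, None] * dt).astype(float)
```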

52 The target signal YT was chosen to be the sum of two Poisson trains. [sent-132, score-0.196]

53 …; the second is a Poisson spike train with the same rate modulation as the spike trains of $G_3$, superimposed with additional white noise of SD 2 Hz. [sent-134, score-0.595]

54 Furthermore, the target signal was turned off during random intervals. [sent-135, score-0.234]

55 The resulting evolution of the weights is shown in figure 1, illustrating the performance of the spike-based rule (8) as well as of the rate-based rule (9). [sent-136, score-0.635]

56 Furthermore, the trajectory of the solution $\hat{w}(t)$ to … (Footnote 5: Spike-spike correlated Poisson spike trains were generated according to the method outlined in [9].) [sent-140, score-0.425]

57 5 Relevance-modulated PCA with spiking neurons. The presented learning rules (8) and (9) exhibit a close relation to Principal Component Analysis (PCA). [sent-146, score-0.383]

58–59 A learning rule which enables the linear Poisson neuron to extract principal components from the input $X(\cdot)$ can be derived by maximizing the following objective function: $L_{PCA} = -L_{IB} - \lambda \sum_{j=1}^{N} w_j^2 = +I(X, Y) - \beta I(Y_T, Y) - \lambda \sum_{j=1}^{N} w_j^2$ (13), which just differs from (3) by a change of sign in front of $L_{IB}$. [sent-147, score-0.817] [sent-148, score-0.509]

60 The resulting learning rule is in close analogy to (8): $\frac{d}{dt} w(t) = \alpha \, \frac{Y(t)\, \nu(t)}{u(t)\, \bar{u}(t)} \left[ (u(t) - \bar{u}(t)) - c(t)\, \beta\, (u_T(t) - \bar{u}_T(t)) \right] - \alpha \lambda\, w(t)$ (14). [sent-149, score-0.267]

61 … of the target signal, it can be seen that the solution $\hat{w}(t)$ of the deterministic equation corresponding to (14) (which is of the same form as (10), with the obvious sign changes) converges to an eigenvector of the covariance matrix $C^0$. [sent-152, score-0.288]

62 Thus, for β = 0 we expect the learning rule (14) to perform PCA for small learning rates α. [sent-153, score-0.267]

63 The rule (14) without the relevance signal is comparable to other PCA rules, e. [sent-154, score-0.488]

64–65 The side information given by the relevance signal $Y_T(\cdot)$ can be used to extract specific principal components from the input; thus we call this paradigm relevance-modulated PCA. [sent-157, score-0.221] [sent-158, score-0.271]

66 Before we consider a concrete example for relevance-modulated PCA, we want to point out a further application of the learning rule (14). [sent-159, score-0.327]

67 The target signal YT can also be used to extract different components from the input with different neurons (see figure 2). [sent-160, score-0.563]

68 In order to prevent all weight vectors from converging towards the same eigenvector of $C^0$ (the principal component), the target signal $Y_T^i$ for neuron i is chosen to be the sum of all output spike trains except $Y_i$: $Y_T^i(t) = \sum_{j=1,\, j \neq i}^{N} Y_j(t)$ (15). [sent-174, score-1.039]

69 If one weight vector $w_i(t)$ is already close to the eigenvector $e_k$ of $C^0$, then by means of (15) the basins of attraction of $e_k$ for the other weight vectors $w_j(t)$, $j \neq i$, are reduced (or even vanish, depending on the value of β). [sent-175, score-0.461]
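The ensemble scheme of Eq. (15) can be sketched as follows: every neuron is updated with the PCA rule (14), and its target trace is derived from the summed output spikes of all other neurons, which discourages two neurons from settling on the same eigenvector. The update structure mirrors the sketch of rule (8) above with the signs flipped; all names and the treatment of the averaged quantities are illustrative assumptions.

```python
import numpy as np

def ensemble_targets(y):
    """Raw target drive for each neuron: the sum of all other neurons' spikes, Eq. (15)."""
    return y.sum() - y          # y is a (m,) vector of 0/1 output spikes

def pca_ensemble_step(W, nu, u, u_bar, y, uT, uT_bar, c, alpha, beta, lam, dt):
    """One step of the PCA rule (14) for m neurons with mutual targets as in Eq. (15).

    W  : (m, N) weight matrix, one row per neuron
    nu : (N,) shared EPSP traces          u, u_bar   : (m,) potentials and their averages
    y  : (m,) 0/1 output spikes           uT, uT_bar : (m,) filtered target traces built
         from ensemble_targets(y) and their averages
    """
    W_new = W.copy()
    for i in range(W.shape[0]):
        drive = (u[i] - u_bar[i]) - c * beta * (uT[i] - uT_bar[i])   # sign-flipped vs. (8)
        dW = alpha * (y[i] * nu / (u[i] * u_bar[i] + 1e-12)) * drive - alpha * lam * W[i]
        W_new[i] = W[i] + dW * dt
    return W_new
```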

70 In practice, this setup is sufficiently robust if only a small number (≤ 4) of different components is to be extracted and if the differences between the eigenvalues $\lambda_i$ of these principal components are not too big. [sent-177, score-0.324]

71 The learning rule considered in [3] displayed a close relation to Independent Component Analysis (ICA). [sent-181, score-0.267]

72 Because of the linear neuron model used here and the linearization of further terms in the derivation, the resulting learning rule (14) performs PCA instead of ICA. [sent-182, score-0.544]

73 The m = 3 neurons for the regular PCA experiment receive the same input X and their weights change according to (14). [sent-184, score-0.316]

74–75 The weights and input spike trains are grouped into four subgroups $G_1, \dots, G_4$, as for the IB optimization discussed … (Footnote 7: Note that the input X may well exhibit a much larger number of principal components.) [sent-185, score-0.601] [sent-188, score-0.213]

76 However it is only possible to extract a limited number of them by different neurons at the same time. [sent-189, score-0.255]

77 Figure 2: A The basic setup for the PCA task: the m different neurons receive the same input X and are expected to extract different principal components of it. [sent-190, score-1.713]

78 B–F The temporal evolution of the average subgroup weights $\tilde{w}_l = \frac{1}{25} \sum_{j \in G_l} w_j$ for the groups $G_1$ (black solid line), $G_2$ (light gray solid line) and $G_3$ (dotted line). [sent-191, score-0.418]

79 B-C Results for the relevance-modulated PCA task: neuron 1 (fig. [sent-192, score-0.226]

80 D-F Results for the regular PCA task: neuron 1 (fig. [sent-195, score-0.226]

81 The only difference is that all groups (except for G4 ) receive spike-spike correlated Poisson spike trains with a correlation coefficient for the groups G1 , G2 , G3 of 0. [sent-200, score-0.435]

82 As can be seen in figure 2 D to F, the different neurons specialize on different principal components, corresponding to the potentiated synaptic subgroups $G_1$, $G_2$ and $G_3$ respectively. [sent-205, score-0.649]

83 …, all neurons tend to specialize on the principal component corresponding to $G_1$ (not shown). [sent-207, score-0.368]

84 As a concrete example for relevance-modulated PCA, we consider the above setup with slight modifications: now we want m = 2 neurons to extract the components $G_2$ and $G_3$ from the input X, and not the principal component $G_1$. [sent-208, score-0.634]

85 This is achieved with an additional relevance signal YT , which is the same for both neurons and has spike-spike correlations with G2 and G3 of 0. [sent-209, score-0.427]

86 The resulting learning rule has exactly the same structure as (14), with an additional term due to $Y_T$. [sent-213, score-0.267]

87 6 Discussion. We have introduced and analyzed a simple and perspicuous rule that enables spiking neurons to perform IB optimization in an online manner. [sent-215, score-0.754]

88 Our simulations show that this rule works as well as the substantially more complex learning rule that had previously been proposed in [3]. [sent-216, score-0.57]

89 It also performs well for more realistic neuron models as indicated in [4]. [sent-217, score-0.226]

90 We have shown that the convergence properties of our simplified IB rule can be analyzed with the help of the Fokker-Planck equation (alternatively one may also use the theoretical framework described in A. [sent-218, score-0.352]

91 The investigation of the weight vectors to which this rule converges reveals interesting relationships to PCA. [sent-220, score-0.346]

92 Apparently, very little is known about learning rules that enable spiking neurons to extract multiple principal components from an input stream (a discussion of a basic learning rule performing PCA is given in chapter 11. [sent-221, score-1.005]

93 We have demonstrated both analytically and through simulations that a slight variation of our new learning rule performs PCA. [sent-224, score-0.303]

94 Our derivation of this rule within the IB framework opens the door to new variations of PCA, where preferentially those components are extracted from a high-dimensional input stream that are, or are not, related to some external relevance variable. [sent-225, score-0.591]

95 The learning rule that we have proposed might in principle be able to extract from high-dimensional sensory input streams X those components that are related to other sensory modalities or to internal expectations and goals. [sent-227, score-0.579]

96 Quantitative biological data on the precise way in which relevance signals YT (such as for example dopamin) might reach neurons in the cortex and modulate their synaptic plasticity are still missing. [sent-228, score-0.501]

97 Information bottleneck optimization and independent component extraction with spiking neurons. [sent-237, score-0.3]

98 Spiking neurons can learn to solve information bottleneck problems and to extract independent components. [sent-252, score-0.344]

99 Learning the structure of correlated synaptic subgroups using stable and competitive spike-timing-dependent plasticity. [sent-294, score-0.289]

100 The Hebb rule for synaptic plasticity: algorithms and implementations. [sent-300, score-0.346]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('yt', 0.399), ('ib', 0.347), ('rule', 0.267), ('ut', 0.265), ('neuron', 0.226), ('wj', 0.225), ('spike', 0.21), ('neurons', 0.165), ('pca', 0.149), ('trains', 0.146), ('spiking', 0.14), ('subgroups', 0.136), ('principal', 0.122), ('signal', 0.118), ('poisson', 0.108), ('relevance', 0.103), ('lib', 0.092), ('synapses', 0.092), ('extract', 0.09), ('bottleneck', 0.089), ('gl', 0.084), ('synaptic', 0.079), ('rules', 0.078), ('target', 0.078), ('plasticity', 0.076), ('perspicuous', 0.069), ('postsynaptic', 0.064), ('eigenvector', 0.062), ('dt', 0.061), ('subgroup', 0.06), ('concrete', 0.06), ('components', 0.059), ('weights', 0.056), ('equation', 0.056), ('analytical', 0.055), ('input', 0.053), ('ms', 0.053), ('setup', 0.052), ('linearization', 0.051), ('signals', 0.05), ('mutual', 0.05), ('ti', 0.049), ('specialize', 0.048), ('online', 0.046), ('buesing', 0.046), ('klamp', 0.046), ('prefactor', 0.046), ('preferentially', 0.046), ('evolution', 0.045), ('synapse', 0.044), ('ek', 0.044), ('weight', 0.043), ('receive', 0.042), ('cij', 0.042), ('correlations', 0.041), ('sensory', 0.041), ('stationary', 0.04), ('critical', 0.04), ('potentiated', 0.04), ('silence', 0.04), ('eigenvalue', 0.038), ('gure', 0.038), ('turned', 0.038), ('optimization', 0.038), ('stable', 0.037), ('correlated', 0.037), ('numerical', 0.037), ('appearing', 0.037), ('graz', 0.037), ('legenstein', 0.037), ('simulations', 0.036), ('covariance', 0.036), ('investigation', 0.036), ('output', 0.034), ('presynaptic', 0.034), ('uctuate', 0.034), ('objective', 0.033), ('component', 0.033), ('spikes', 0.033), ('wl', 0.032), ('decaying', 0.032), ('extracted', 0.032), ('differential', 0.032), ('trajectory', 0.032), ('cortical', 0.031), ('stream', 0.031), ('dkl', 0.031), ('deterministic', 0.03), ('membrane', 0.029), ('modulation', 0.029), ('positively', 0.029), ('analyzed', 0.029), ('sd', 0.028), ('modalities', 0.028), ('biological', 0.028), ('drift', 0.027), ('operator', 0.027), ('detailed', 0.027), ('article', 0.026), ('sign', 0.026)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000004 177 nips-2007-Simplified Rules and Theoretical Analysis for Information Bottleneck Optimization and PCA with Spiking Neurons


2 0.26151755 140 nips-2007-Neural characterization in partially observed populations of spiking neurons

Author: Jonathan W. Pillow, Peter E. Latham

Abstract: Point process encoding models provide powerful statistical methods for understanding the responses of neurons to sensory stimuli. Although these models have been successfully applied to neurons in the early sensory pathway, they have fared less well capturing the response properties of neurons in deeper brain areas, owing in part to the fact that they do not take into account multiple stages of processing. Here we introduce a new twist on the point-process modeling approach: we include unobserved as well as observed spiking neurons in a joint encoding model. The resulting model exhibits richer dynamics and more highly nonlinear response properties, making it more powerful and more flexible for fitting neural data. More importantly, it allows us to estimate connectivity patterns among neurons (both observed and unobserved), and may provide insight into how networks process sensory input. We formulate the estimation procedure using variational EM and the wake-sleep algorithm, and illustrate the model’s performance using a simulated example network consisting of two coupled neurons.

3 0.26050624 117 nips-2007-Learning to classify complex patterns using a VLSI network of spiking neurons

Author: Srinjoy Mitra, Giacomo Indiveri, Stefano Fusi

Abstract: We propose a compact, low power VLSI network of spiking neurons which can learn to classify complex patterns of mean firing rates on–line and in real–time. The network of integrate-and-fire neurons is connected by bistable synapses that can change their weight using a local spike–based plasticity mechanism. Learning is supervised by a teacher which provides an extra input to the output neurons during training. The synaptic weights are updated only if the current generated by the plastic synapses does not match the output desired by the teacher (as in the perceptron learning rule). We present experimental results that demonstrate how this VLSI network is able to robustly classify uncorrelated linearly separable spatial patterns of mean firing rates.

4 0.24502751 205 nips-2007-Theoretical Analysis of Learning with Reward-Modulated Spike-Timing-Dependent Plasticity

Author: Dejan Pecevski, Wolfgang Maass, Robert A. Legenstein

Abstract: Reward-modulated spike-timing-dependent plasticity (STDP) has recently emerged as a candidate for a learning rule that could explain how local learning rules at single synapses support behaviorally relevant adaptive changes in complex networks of spiking neurons. However the potential and limitations of this learning rule could so far only be tested through computer simulations. This article provides tools for an analytic treatment of reward-modulated STDP, which allow us to predict under which conditions reward-modulated STDP will be able to achieve a desired learning effect. In particular, we can produce in this way a theoretical explanation and a computer model for a fundamental experimental finding on biofeedback in monkeys (reported in [1]).

5 0.20303735 33 nips-2007-Bayesian Inference for Spiking Neuron Models with a Sparsity Prior

Author: Sebastian Gerwinn, Matthias Bethge, Jakob H. Macke, Matthias Seeger

Abstract: Generalized linear models are the most commonly used tools to describe the stimulus selectivity of sensory neurons. Here we present a Bayesian treatment of such models. Using the expectation propagation algorithm, we are able to approximate the full posterior distribution over all weights. In addition, we use a Laplacian prior to favor sparse solutions. Therefore, stimulus features that do not critically influence neural activity will be assigned zero weights and thus be effectively excluded by the model. This feature selection mechanism facilitates both the interpretation of the neuron model as well as its predictive abilities. The posterior distribution can be used to obtain confidence intervals which makes it possible to assess the statistical significance of the solution. In neural data analysis, the available amount of experimental measurements is often limited whereas the parameter space is large. In such a situation, both regularization by a sparsity prior and uncertainty estimates for the model parameters are essential. We apply our method to multi-electrode recordings of retinal ganglion cells and use our uncertainty estimate to test the statistical significance of functional couplings between neurons. Furthermore we used the sparsity of the Laplace prior to select those filters from a spike-triggered covariance analysis that are most informative about the neural response. 1

6 0.19010544 14 nips-2007-A configurable analog VLSI neural network with spiking neurons and self-regulating plastic synapses

7 0.18780947 104 nips-2007-Inferring Neural Firing Rates from Spike Trains Using Gaussian Processes

8 0.17405351 26 nips-2007-An online Hebbian learning rule that performs Independent Component Analysis

9 0.17011672 17 nips-2007-A neural network implementing optimal state estimation based on dynamic spike train decoding

10 0.16608268 36 nips-2007-Better than least squares: comparison of objective functions for estimating linear-nonlinear models

11 0.13828681 164 nips-2007-Receptive Fields without Spike-Triggering

12 0.13696676 60 nips-2007-Contraction Properties of VLSI Cooperative Competitive Neural Networks of Spiking Neurons

13 0.12862413 146 nips-2007-On higher-order perceptron algorithms

14 0.1020698 148 nips-2007-Online Linear Regression and Its Application to Model-Based Reinforcement Learning

15 0.099604875 18 nips-2007-A probabilistic model for generating realistic lip movements from speech

16 0.091222294 48 nips-2007-Collective Inference on Markov Models for Modeling Bird Migration

17 0.089685321 110 nips-2007-Learning Bounds for Domain Adaptation

18 0.08850643 188 nips-2007-Subspace-Based Face Recognition in Analog VLSI

19 0.079121687 21 nips-2007-Adaptive Online Gradient Descent

20 0.077992573 204 nips-2007-Theoretical Analysis of Heuristic Search Methods for Online POMDPs


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.253), (1, 0.121), (2, 0.407), (3, 0.125), (4, 0.021), (5, -0.129), (6, 0.163), (7, -0.07), (8, 0.0), (9, 0.052), (10, 0.021), (11, 0.063), (12, 0.012), (13, 0.01), (14, -0.086), (15, -0.026), (16, -0.102), (17, -0.079), (18, -0.042), (19, -0.077), (20, 0.005), (21, -0.053), (22, -0.01), (23, 0.005), (24, -0.024), (25, 0.067), (26, 0.047), (27, 0.021), (28, -0.002), (29, -0.024), (30, 0.011), (31, 0.006), (32, -0.095), (33, 0.056), (34, 0.029), (35, 0.057), (36, -0.126), (37, -0.007), (38, -0.112), (39, 0.071), (40, -0.034), (41, -0.034), (42, -0.059), (43, 0.122), (44, 0.015), (45, -0.03), (46, -0.013), (47, 0.05), (48, 0.064), (49, 0.027)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.97877502 177 nips-2007-Simplified Rules and Theoretical Analysis for Information Bottleneck Optimization and PCA with Spiking Neurons


2 0.78286701 205 nips-2007-Theoretical Analysis of Learning with Reward-Modulated Spike-Timing-Dependent Plasticity

Author: Dejan Pecevski, Wolfgang Maass, Robert A. Legenstein

Abstract: Reward-modulated spike-timing-dependent plasticity (STDP) has recently emerged as a candidate for a learning rule that could explain how local learning rules at single synapses support behaviorally relevant adaptive changes in complex networks of spiking neurons. However the potential and limitations of this learning rule could so far only be tested through computer simulations. This article provides tools for an analytic treatment of reward-modulated STDP, which allow us to predict under which conditions reward-modulated STDP will be able to achieve a desired learning effect. In particular, we can produce in this way a theoretical explanation and a computer model for a fundamental experimental finding on biofeedback in monkeys (reported in [1]).

3 0.72071558 26 nips-2007-An online Hebbian learning rule that performs Independent Component Analysis

Author: Claudia Clopath, André Longtin, Wulfram Gerstner

Abstract: Independent component analysis (ICA) is a powerful method to decouple signals. Most of the algorithms performing ICA do not consider the temporal correlations of the signal, but only higher moments of its amplitude distribution. Moreover, they require some preprocessing of the data (whitening) so as to remove second order correlations. In this paper, we are interested in understanding the neural mechanism responsible for solving ICA. We present an online learning rule that exploits delayed correlations in the input. This rule performs ICA by detecting joint variations in the firing rates of pre- and postsynaptic neurons, similar to a local rate-based Hebbian learning rule. 1

4 0.71343821 117 nips-2007-Learning to classify complex patterns using a VLSI network of spiking neurons

Author: Srinjoy Mitra, Giacomo Indiveri, Stefano Fusi

Abstract: We propose a compact, low power VLSI network of spiking neurons which can learn to classify complex patterns of mean firing rates on–line and in real–time. The network of integrate-and-fire neurons is connected by bistable synapses that can change their weight using a local spike–based plasticity mechanism. Learning is supervised by a teacher which provides an extra input to the output neurons during training. The synaptic weights are updated only if the current generated by the plastic synapses does not match the output desired by the teacher (as in the perceptron learning rule). We present experimental results that demonstrate how this VLSI network is able to robustly classify uncorrelated linearly separable spatial patterns of mean firing rates.

5 0.64974463 14 nips-2007-A configurable analog VLSI neural network with spiking neurons and self-regulating plastic synapses

Author: Massimiliano Giulioni, Mario Pannunzi, Davide Badoni, Vittorio Dante, Paolo D. Giudice

Abstract: We summarize the implementation of an analog VLSI chip hosting a network of 32 integrate-and-fire (IF) neurons with spike-frequency adaptation and 2,048 Hebbian plastic bistable spike-driven stochastic synapses endowed with a selfregulating mechanism which stops unnecessary synaptic changes. The synaptic matrix can be flexibly configured and provides both recurrent and AER-based connectivity with external, AER compliant devices. We demonstrate the ability of the network to efficiently classify overlapping patterns, thanks to the self-regulating mechanism.

6 0.62773037 140 nips-2007-Neural characterization in partially observed populations of spiking neurons

7 0.62247694 60 nips-2007-Contraction Properties of VLSI Cooperative Competitive Neural Networks of Spiking Neurons

8 0.58740878 33 nips-2007-Bayesian Inference for Spiking Neuron Models with a Sparsity Prior

9 0.54872054 36 nips-2007-Better than least squares: comparison of objective functions for estimating linear-nonlinear models

10 0.50745595 17 nips-2007-A neural network implementing optimal state estimation based on dynamic spike train decoding

11 0.50263762 104 nips-2007-Inferring Neural Firing Rates from Spike Trains Using Gaussian Processes

12 0.47779626 164 nips-2007-Receptive Fields without Spike-Triggering

13 0.45599613 146 nips-2007-On higher-order perceptron algorithms

14 0.43498924 25 nips-2007-An in-silico Neural Model of Dynamic Routing through Neuronal Coherence

15 0.38438666 18 nips-2007-A probabilistic model for generating realistic lip movements from speech

16 0.38232642 35 nips-2007-Bayesian binning beats approximate alternatives: estimating peri-stimulus time histograms

17 0.38194469 188 nips-2007-Subspace-Based Face Recognition in Analog VLSI

18 0.37622321 28 nips-2007-Augmented Functional Time Series Representation and Forecasting with Gaussian Processes

19 0.35878491 148 nips-2007-Online Linear Regression and Its Application to Model-Based Reinforcement Learning

20 0.33481869 4 nips-2007-A Constraint Generation Approach to Learning Stable Linear Dynamical Systems


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(5, 0.056), (13, 0.056), (14, 0.011), (16, 0.089), (18, 0.013), (19, 0.035), (21, 0.066), (31, 0.03), (34, 0.053), (35, 0.017), (36, 0.165), (47, 0.106), (49, 0.017), (83, 0.158), (87, 0.018), (90, 0.044)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.94202566 164 nips-2007-Receptive Fields without Spike-Triggering

Author: Guenther Zeck, Matthias Bethge, Jakob H. Macke

Abstract: S timulus selectivity of sensory neurons is often characterized by estimating their receptive field properties such as orientation selectivity. Receptive fields are usually derived from the mean (or covariance) of the spike-triggered stimulus ensemble. This approach treats each spike as an independent message but does not take into account that information might be conveyed through patterns of neural activity that are distributed across space or time. Can we find a concise description for the processing of a whole population of neurons analogous to the receptive field for single neurons? Here, we present a generalization of the linear receptive field which is not bound to be triggered on individual spikes but can be meaningfully linked to distributed response patterns. More precisely, we seek to identify those stimulus features and the corresponding patterns of neural activity that are most reliably coupled. We use an extension of reverse-correlation methods based on canonical correlation analysis. The resulting population receptive fields span the subspace of stimuli that is most informative about the population response. We evaluate our approach using both neuronal models and multi-electrode recordings from rabbit retinal ganglion cells. We show how the model can be extended to capture nonlinear stimulus-response relationships using kernel canonical correlation analysis, which makes it possible to test different coding mechanisms. Our technique can also be used to calculate receptive fields from multi-dimensional neural measurements such as those obtained from dynamic imaging methods. 1

2 0.90487373 142 nips-2007-Non-parametric Modeling of Partially Ranked Data

Author: Guy Lebanon, Yi Mao

Abstract: Statistical models on full and partial rankings of n items are often of limited practical use for large n due to computational consideration. We explore the use of non-parametric models for partially ranked data and derive efficient procedures for their use for large n. The derivations are largely possible through combinatorial and algebraic manipulations based on the lattice of partial rankings. In particular, we demonstrate for the first time a non-parametric coherent and consistent model capable of efficiently aggregating partially ranked data of different types. 1

same-paper 3 0.88322324 177 nips-2007-Simplified Rules and Theoretical Analysis for Information Bottleneck Optimization and PCA with Spiking Neurons

Author: Lars Buesing, Wolfgang Maass

Abstract: We show that under suitable assumptions (primarily linearization) a simple and perspicuous online learning rule for Information Bottleneck optimization with spiking neurons can be derived. This rule performs on common benchmark tasks as well as a rather complex rule that has previously been proposed [1]. Furthermore, the transparency of this new learning rule makes a theoretical analysis of its convergence properties feasible. A variation of this learning rule (with sign changes) provides a theoretically founded method for performing Principal Component Analysis (PCA) with spiking neurons. By applying this rule to an ensemble of neurons, different principal components of the input can be extracted. In addition, it is possible to preferentially extract those principal components from incoming signals X that are related or are not related to some additional target signal YT . In a biological interpretation, this target signal YT (also called relevance variable) could represent proprioceptive feedback, input from other sensory modalities, or top-down signals. 1

4 0.86096996 107 nips-2007-Iterative Non-linear Dimensionality Reduction with Manifold Sculpting

Author: Michael Gashler, Dan Ventura, Tony Martinez

Abstract: Many algorithms have been recently developed for reducing dimensionality by projecting data onto an intrinsic non-linear manifold. Unfortunately, existing algorithms often lose significant precision in this transformation. Manifold Sculpting is a new algorithm that iteratively reduces dimensionality by simulating surface tension in local neighborhoods. We present several experiments that show Manifold Sculpting yields more accurate results than existing algorithms with both generated and natural data-sets. Manifold Sculpting is also able to benefit from both prior dimensionality reduction efforts. 1

5 0.79006171 104 nips-2007-Inferring Neural Firing Rates from Spike Trains Using Gaussian Processes

Author: Maneesh Sahani, Byron M. Yu, John P. Cunningham, Krishna V. Shenoy

Abstract: Neural spike trains present challenges to analytical efforts due to their noisy, spiking nature. Many studies of neuroscientific and neural prosthetic importance rely on a smoothed, denoised estimate of the spike train’s underlying firing rate. Current techniques to find time-varying firing rates require ad hoc choices of parameters, offer no confidence intervals on their estimates, and can obscure potentially important single trial variability. We present a new method, based on a Gaussian Process prior, for inferring probabilistically optimal estimates of firing rate functions underlying single or multiple neural spike trains. We test the performance of the method on simulated data and experimentally gathered neural spike trains, and we demonstrate improvements over conventional estimators. 1

6 0.7883845 140 nips-2007-Neural characterization in partially observed populations of spiking neurons

7 0.77988183 36 nips-2007-Better than least squares: comparison of objective functions for estimating linear-nonlinear models

8 0.77655756 138 nips-2007-Near-Maximum Entropy Models for Binary Neural Representations of Natural Images

9 0.77154666 18 nips-2007-A probabilistic model for generating realistic lip movements from speech

10 0.76906067 115 nips-2007-Learning the 2-D Topology of Images

11 0.76832122 195 nips-2007-The Generalized FITC Approximation

12 0.76549387 180 nips-2007-Sparse Feature Learning for Deep Belief Networks

13 0.76482481 24 nips-2007-An Analysis of Inference with the Universum

14 0.76311648 205 nips-2007-Theoretical Analysis of Learning with Reward-Modulated Spike-Timing-Dependent Plasticity

15 0.76223588 174 nips-2007-Selecting Observations against Adversarial Objectives

16 0.76166147 79 nips-2007-Efficient multiple hyperparameter learning for log-linear models

17 0.76065344 87 nips-2007-Fast Variational Inference for Large-scale Internet Diagnosis

18 0.76056588 158 nips-2007-Probabilistic Matrix Factorization

19 0.76044476 63 nips-2007-Convex Relaxations of Latent Variable Training

20 0.75888711 168 nips-2007-Reinforcement Learning in Continuous Action Spaces through Sequential Monte Carlo Methods