nips nips2007 nips2007-35 knowledge-graph by maker-knowledge-mining

35 nips-2007-Bayesian binning beats approximate alternatives: estimating peri-stimulus time histograms


Source: pdf

Author: Dominik Endres, Mike Oram, Johannes Schindelin, Peter Foldiak

Abstract: The peristimulus time histogram (PSTH) and its more continuous cousin, the spike density function (SDF), are staples in the analytic toolkit of neurophysiologists. The former is usually obtained by binning spike trains, whereas the standard method for the latter is smoothing with a Gaussian kernel. Selection of a bin width or a kernel size is often done in a relatively arbitrary fashion, even though there have been recent attempts to remedy this situation [1, 2]. We develop an exact Bayesian, generative model approach to estimating PSTHs and demonstrate its superiority to competing methods. Further advantages of our scheme include automatic complexity control and error bars on its predictions. 1

Reference: text


Summary: the most important sentences generated by tfidf model

sentIndex sentText sentNum sentScore

1 Bayesian binning beats approximate alternatives: estimating peristimulus time histograms Dominik Endres, Mike Oram, Johannes Schindelin and Peter Földiák, School of Psychology, University of St. [sent-1, score-0.186]

2 Abstract: The peristimulus time histogram (PSTH) and its more continuous cousin, the spike density function (SDF), are staples in the analytic toolkit of neurophysiologists. [sent-4, score-0.246]

3 The former is usually obtained by binning spike trains, whereas the standard method for the latter is smoothing with a Gaussian kernel. [sent-5, score-0.349]

4 Selection of a bin width or a kernel size is often done in a relatively arbitrary fashion, even though there have been recent attempts to remedy this situation [1, 2]. [sent-6, score-0.224]

5 We develop an exact Bayesian, generative model approach to estimating PSTHs and demonstrate its superiority to competing methods. [sent-7, score-0.077]

6 1 Introduction Plotting a peristimulus time histogram (PSTH), or a spike density function (SDF), from spiketrains evoked by and aligned to a stimulus onset is often one of the first steps in the analysis of neurophysiological data. [sent-9, score-0.495]

7 It is an easy way of visualizing certain characteristics of the neural response, such as instantaneous firing rates (or firing probabilities), latencies and response offsets. [sent-10, score-0.063]

8 These measures also implicitly represent a model of the neuron’s response as a function of time and are important parts of their functional description. [sent-11, score-0.037]

9 the choice of time bin size is driven by result expectations as much as by the data. [sent-14, score-0.224]

10 Recently, there have been more principled approaches to the problem of determining the appropriate temporal resolution [1, 2]. [sent-15, score-0.042]

11 We develop an exact Bayesian solution, apply it to real neural data and demonstrate its superiority to competing methods. [sent-16, score-0.06]

12 2 The model: Suppose we wanted to model a PSTH on $[t_{min}, t_{max}]$, which we discretize into $T$ contiguous intervals of duration $\Delta t = (t_{max} - t_{min})/T$ (see fig. [sent-19, score-0.5]

13 We select a discretization fine enough so that we will not observe more than one spike in a $\Delta t$ interval for any given spike train. [sent-21, score-0.436]

14 We model the PSTH by $M+1$ contiguous, non-overlapping bins having inclusive upper boundaries $k_m$, within which the firing probability $P(\mathrm{spike}\,|\,t \in (t_{min} + \Delta t(k_{m-1}+1),\; t_{min} + \Delta t(k_m+1)]) = f_m$ is constant. [sent-24, score-1.236]

15 $M$ is the number of bin boundaries inside $[t_{min}, t_{max}]$. [sent-25, score-0.471]

16 Bottom: The time span between $t_{min}$ and $t_{max}$ is discretized into $T$ intervals of duration $\Delta t = (t_{max} - t_{min})/T$, such that interval $k$ lasts from $k \times \Delta t + t_{min}$ to $(k+1) \times \Delta t + t_{min}$. [sent-27, score-1.27]

17 $\Delta t$ is chosen such that at most one spike is observed per $\Delta t$ interval for any given spike train. [sent-28, score-0.436]
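
To make the discretization concrete, here is a minimal Python sketch (ours, not the authors' code; the function name and interface are assumptions) that turns lists of spike times into the binary interval representation used below:

```python
import numpy as np

def discretize(spike_times_list, t_min, t_max, T):
    """Map N spike trains to a binary (N, T) array z, where z[i, k] = 1 iff
    train i spikes in interval k = [k*dt + t_min, (k+1)*dt + t_min)."""
    dt = (t_max - t_min) / T
    z = np.zeros((len(spike_times_list), T), dtype=int)
    for i, times in enumerate(spike_times_list):
        t = np.asarray(times, dtype=float)
        t = t[(t >= t_min) & (t < t_max)]
        k = np.floor((t - t_min) / dt).astype(int)
        # the model's assumption: dt is fine enough for at most one spike per interval
        assert len(np.unique(k)) == len(k), "choose a finer discretization (larger T)"
        z[i, k] = 1
    return z
```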

18 Then, we model the firing probabilities $P(\mathrm{spike}|t)$ by $M+1 = 4$ contiguous, non-overlapping bins ($M$ is the number of bin boundaries inside the time span $[t_{min}, t_{max}]$), having inclusive upper boundaries $k_m$ and $P(\mathrm{spike}\,|\,t \in (t_{min} + \Delta t(k_{m-1}+1),\; t_{min} + \Delta t(k_m+1)]) = f_m$. [sent-29, score-1.731]

19 The probability of a spiketrain $z^i$ of independent spikes/gaps is then $P(z^i|\{f_m\},\{k_m\},M) = \prod_{m=0}^{M} f_m^{s(z^i,m)}\,(1-f_m)^{g(z^i,m)}$ (1), where $s(z^i,m)$ is the number of spikes and $g(z^i,m)$ is the number of non-spikes, or gaps, in spiketrain $z^i$ in bin $m$, i. [sent-42, score-0.935]

20 between intervals $k_{m-1}+1$ and $k_m$ (both inclusive). [sent-44, score-0.505]

21 In other words, we model the spiketrains by an inhomogeneous Bernoulli process with piecewise constant probabilities. [sent-45, score-0.172]

22 Note that there is no binomial factor associated with the contribution of each bin, because we do not want to ignore the spike timing information within the bins, but rather, we try to build a simplified generative model of the spike train. [sent-47, score-0.411]

23 Therefore, the probability of a (multi)set of spiketrains $\{z^i\} = \{z^1, \ldots$ [sent-48, score-0.147]

24 $\ldots, z^N\}$, assuming independent generation, is $P(\{z^i\}|\{f_m\},\{k_m\},M) = \prod_{i=1}^{N}\prod_{m=0}^{M} f_m^{s(z^i,m)}(1-f_m)^{g(z^i,m)} = \prod_{m=0}^{M} f_m^{s(\{z^i\},m)}(1-f_m)^{g(\{z^i\},m)}$ (2), where $s(\{z^i\},m) = \sum_{i=1}^{N} s(z^i,m)$ (and analogously for $g(\{z^i\},m)$). [sent-51, score-1.22]
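
Eqns. (1) and (2) translate directly into code. The sketch below (our reconstruction; all names are ours) pools spike and gap counts per bin and evaluates the log-likelihood of a spiketrain set under the piecewise-constant Bernoulli model:

```python
import numpy as np

def pooled_counts(z, k, m):
    """s({z}, m) and g({z}, m): pooled spike and gap counts of all trains in bin m.
    z : (N, T) binary array; k : inclusive upper boundaries k_0 < ... < k_M = T-1."""
    lo = 0 if m == 0 else k[m - 1] + 1     # first interval of bin m
    seg = z[:, lo:k[m] + 1]                # bin m spans intervals lo..k[m]
    s = int(seg.sum())                     # spikes
    g = seg.size - s                       # gaps (non-spikes)
    return s, g

def log_likelihood(z, k, f):
    """log P({z} | {f_m}, {k_m}, M), eqn. (2): inhomogeneous Bernoulli process
    with piecewise constant firing probability f_m in bin m."""
    ll = 0.0
    for m in range(len(f)):
        s, g = pooled_counts(z, k, m)
        ll += s * np.log(f[m]) + g * np.log(1.0 - f[m])
    return ll
```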

25 we have no a priori preferences for the firing rates based on the bin boundary positions. [sent-55, score-0.282]

26 Note that the prior of the $f_m$, being continuous model parameters, is a density. [sent-56, score-0.325]

27 Given eqn. (1) and the constraint $f_m \in [0, 1]$, it is natural to choose a conjugate prior $p(\{f_m\}|M) = \prod_{m=0}^{M} B(f_m; \sigma_m, \gamma_m)$. [sent-58, score-0.325]

28 The Beta density is defined in the usual way [3]: $B(p; \sigma, \gamma) = \frac{\Gamma(\sigma+\gamma)}{\Gamma(\sigma)\Gamma(\gamma)}\, p^{\sigma-1}(1-p)^{\gamma-1}$ (5). There are only finitely many configurations of the $k_m$. [sent-60, score-0.454]

29 Assuming we have no preferences for any of them, the prior for the bin boundaries becomes $P(\{k_m\}|M) = 1\big/\binom{T-1}{M}$ [sent-61, score-0.372]

30 (6) where the denominator is just the number of possibilities in which $M$ ordered bin boundaries can be distributed across $T-1$ places (bin boundary $M$ always occupies position $T-1$, see fig. [sent-62, score-0.403]

31 3 Computing the evidence $P(\{z^i\}|M)$: To calculate quantities of interest for a given $M$, e. [sent-64, score-0.045]

32 $P(\{z^i\}|M) = \sum_{k_{M-1}=M-1}\sum_{k_{M-2}=M-2}\cdots\sum_{k_0=0} P(\{z^i\}|\{k_m\},M)\,P(\{k_m\}|M)$ (8), where the summation boundaries are chosen such that the bins are non-overlapping and contiguous, and $P(\{z^i\}|\{k_m\},M) = \int_0^1\! df_0 \cdots \int_0^1\! df_M\; P(\{z^i\}|\{f_m\},\{k_m\},M)\,p(\{f_m\}|M)$. [sent-69, score-0.24]

33 $P(\{z^i\}|\{k_m\},M) = \prod_{m=0}^{M} \frac{\Gamma(\sigma_m+\gamma_m)}{\Gamma(\sigma_m)\Gamma(\gamma_m)} \cdot \frac{\Gamma(s(\{z^i\},m)+\sigma_m)\,\Gamma(g(\{z^i\},m)+\gamma_m)}{\Gamma(s(\{z^i\},m)+\sigma_m+g(\{z^i\},m)+\gamma_m)}$ (10). Computing the sums in eqn. [sent-76, score-0.024]
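
Under the reconstruction of eqn. (10) above, each bin contributes a Beta-Bernoulli ratio of Gamma functions to the evidence. A sketch of the getIEC routine (the name follows the excerpts; working in log space via scipy's gammaln is our choice, made for numerical stability):

```python
import numpy as np
from scipy.special import gammaln

def log_getIEC(z, ks, ke, m, sigma, gamma):
    """log of the integrated evidence contribution of a bin spanning intervals
    ks..ke (inclusive): Gamma(s+sigma_m)Gamma(g+gamma_m)/Gamma(s+sigma_m+g+gamma_m).
    The data-independent factor Gamma(sigma_m+gamma_m)/(Gamma(sigma_m)Gamma(gamma_m))
    is carried separately in pr[M], as in the excerpts."""
    seg = z[:, ks:ke + 1]
    s = int(seg.sum())            # pooled spikes s({z}, ks, ke)
    g = seg.size - s              # pooled gaps  g({z}, ks, ke)
    return (gammaln(s + sigma[m]) + gammaln(g + gamma[m])
            - gammaln(s + sigma[m] + g + gamma[m]))
```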

34 $\{z^i\}$) and store them in the array $\mathrm{pr}[M]$: $\mathrm{pr}[M] := \binom{T-1}{M}^{-1} \prod_{m=0}^{M} \frac{\Gamma(\sigma_m+\gamma_m)}{\Gamma(\sigma_m)\Gamma(\gamma_m)}$. [sent-85, score-0.048]

35 $\sum_{k_{M-1}=M-1}\cdots\sum_{k_0=0}\; \prod_{m=1}^{M} \mathrm{getIEC}(k_{m-1}+1, k_m, m)\; \mathrm{getIEC}(0, k_0, 0)$ (13), with $k_M = T-1$ and the constant of proportionality being $\mathrm{pr}[M]$. [sent-91, score-0.454]

36 Since the summands depend only on two consecutive bin boundaries each, it is possible to apply dynamic programming [8]: rewrite the r. [sent-95, score-0.335]

37 Thus, initialize the array $\mathrm{subE}_0[k] := \mathrm{getIEC}(0, k, 0)$, and iterate for all $m = 1, \ldots$ [sent-107, score-0.032]

38 ; $T-2$ to compute the evidence of a model with its latest boundary at $T-1$. [sent-114, score-0.069]

39 the evidence for a model with $M-1$ bin boundaries. [sent-117, score-0.252]

40 Hence, the array $\mathrm{subE}_{m-1}[k]$ can be reused to store $\mathrm{subE}_m[k]$, if overwritten in reverse order. [sent-119, score-0.048]

41 In pseudo-code ($E[m]$ contains the evidence of a model with $m$ bin boundaries inside $[t_{min}, t_{max}]$ after termination): Table 1: Computing the evidences of models with up to $M$ bin boundaries 1. [sent-120, score-0.834]
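
Here is a Python sketch of the full recursion of Table 1 (our reconstruction from the excerpts, with the sums done in log space via logsumexp; it inherits the O(MT^2) structure described above):

```python
import numpy as np
from scipy.special import gammaln, logsumexp

def log_evidences(z, M_max, sigma, gamma):
    """Return log E[m] for m = 0..M_max bin boundaries (Table 1, our sketch).
    z : (N, T) binary spiketrain array; sigma, gamma : per-bin Beta parameters
    (arrays of length at least M_max + 1)."""
    N, T = z.shape
    cum = np.concatenate([[0], np.cumsum(z.sum(axis=0))])  # pooled spike counts

    def log_iec(ks, ke, m):                 # per-bin evidence term, cf. eqn. (10)
        s = cum[ke + 1] - cum[ks]           # spikes in intervals ks..ke
        g = N * (ke + 1 - ks) - s           # gaps
        return (gammaln(s + sigma[m]) + gammaln(g + gamma[m])
                - gammaln(s + sigma[m] + g + gamma[m]))

    def log_pr(m):                          # log pr[m]: data-independent prefactors
        pref = sum(gammaln(sigma[j] + gamma[j]) - gammaln(sigma[j]) - gammaln(gamma[j])
                   for j in range(m + 1))
        log_binom = gammaln(T) - gammaln(m + 1) - gammaln(T - m)  # log C(T-1, m)
        return pref - log_binom

    subE = np.array([log_iec(0, k, 0) for k in range(T)])   # subE_0[k]
    logE = [log_pr(0) + subE[T - 1]]                        # model with 0 boundaries
    for m in range(1, M_max + 1):
        new = np.full(T, -np.inf)
        for k in range(m, T):               # bin m has its upper boundary at k
            terms = [subE[kp] + log_iec(kp + 1, k, m) for kp in range(m - 1, k)]
            new[k] = logsumexp(terms)
        subE = new                          # array reuse, as described above
        logE.append(log_pr(m) + subE[T - 1])
    return np.array(logE)
```

With $\sigma_m = 1$, $\gamma_m = 32$ for all bins, as in section 5, `np.exp(logE - logsumexp(logE))` turns these log-evidences into the model posterior of eqn. (18) under a uniform prior over $M$.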

42 return E[]. 4 Predictive firing rates and variances: We will now calculate the predictive firing rate $P(\mathrm{spike}|\tilde{k}, \{z^i\}, M)$. [sent-133, score-0.049]

43 For a given configuration of $\{f_m\}$ and $\{k_m\}$, we can write $P(\mathrm{spike}|\tilde{k}, \{f_m\}, \{k_m\}, M) = \sum_{m=0}^{M} f_m\, \mathbf{1}(\tilde{k} \in \{k_{m-1}+1, \ldots, k_m\})$ (16), where the indicator function $\mathbf{1}(x) = 1$ iff $x$ is true and 0 otherwise. [sent-134, score-0.759]

44 Note that the probability of a spike given $\{k_m\}$ and $\{f_m\}$ does not depend on any observed data. [sent-135, score-0.197]

45 Since the bins are non-overlapping, $\tilde{k} \in \{k_{m-1}+1, \ldots, k_m\}$ is true for exactly one summand, and $P(\mathrm{spike}|\tilde{k}, \{z^i\}, \{k_m\})$ evaluates to the corresponding firing rate. [sent-136, score-0.526]

46 (11) with $\mathrm{getIEC}(k_s, k_e, m) := \frac{\Gamma(s(\{z^i\},k_s,k_e) + \mathbf{1}(\tilde{k}\in\{k_s,\ldots,k_e\}) + \sigma_m)\,\Gamma(g(\{z^i\},k_s,k_e) + \gamma_m)}{\Gamma(s(\{z^i\},k_s,k_e) + \mathbf{1}(\tilde{k}\in\{k_s,\ldots,k_e\}) + \sigma_m + g(\{z^i\},k_s,k_e) + \gamma_m)}$ (17), i. [sent-146, score-2.002]

47 we are adding an additional spike to the data at $\tilde{k}$. [sent-148, score-0.197]

48 Call the array returned by this modified algorithm $\tilde{E}_{\tilde{k}}[\,]$; the predictive firing rate is then the ratio $\tilde{E}_{\tilde{k}}[M]/E[M]$. [sent-149, score-0.048]
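
Our reading of eqn. (17) is that the evidence recursion is simply rerun with one virtual spike injected into whichever bin contains $\tilde{k}$, leaving the gap counts unchanged; the sketch below shows only the modified per-bin term, since the surrounding recursion is identical to `log_evidences` above:

```python
from scipy.special import gammaln

def log_getIEC_pred(z, ks, ke, m, k_tilde, sigma, gamma):
    """Per-bin term of eqn. (17), our sketch: add the indicator
    1(k_tilde in {ks..ke}) to the spike count only."""
    seg = z[:, ks:ke + 1]
    spikes = int(seg.sum())
    s = spikes + (1 if ks <= k_tilde <= ke else 0)  # virtual spike, if in this bin
    g = seg.size - spikes                           # gap count is NOT incremented
    return (gammaln(s + sigma[m]) + gammaln(g + gamma[m])
            - gammaln(s + sigma[m] + g + gamma[m]))
```

The predictive firing rate at $\tilde{k}$ is then $\exp(\log \tilde{E}_{\tilde{k}}[M] - \log E[M])$; the $\mathrm{pr}[M]$ prefactors cancel in the ratio.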

49 To evaluate the variance, we need the posterior expectation of $f_m^2$. [sent-152, score-0.339]

50 Model averaging: To choose the best $M$ given $\{z^i\}$, or better, a probable range of $M$s, we need to determine the model posterior $P(M|\{z^i\}) = \frac{P(\{z^i\}|M)\,P(M)}{\sum_m P(\{z^i\}|m)\,P(m)}$ (18), where $P(M)$ is the prior over $M$, which we assume to be uniform. [sent-155, score-0.073]

51 The sum in the denominator runs over all values of $m$ which we choose to include, at most $0 \le m \le T-1$. [sent-156, score-0.027]

52 However, making this decision means ’contriving’ information, namely that all of the posterior probability is concentrated at M . [sent-158, score-0.034]

53 If the posterior of M is unimodal (which it has been in most observed cases, see fig. [sent-162, score-0.034]

54 3, right, for an example), we can then choose the smallest interval of $M$s around the maximum of $P(M|\{z^i\})$ such that $P(M_{min} \le M \le M_{max}|\{z^i\}) \ge 1-\alpha$ (19) and carry out the averages over this range of $M$ after renormalizing the model posterior. [sent-163, score-0.057]
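
A sketch of this model-averaging step (eqns. (18)-(19), our reconstruction; we read the interval criterion as the smallest range holding at least $1-\alpha$ of the posterior mass):

```python
import numpy as np
from scipy.special import logsumexp

def model_posterior(logE):
    """P(M | {z}) from the log-evidences under a uniform prior over M (eqn. 18)."""
    return np.exp(logE - logsumexp(logE))

def credible_M_range(post, alpha=0.05):
    """Smallest interval of Ms around the posterior mode containing at least
    1 - alpha of the mass (eqn. 19), assuming a unimodal posterior."""
    lo = hi = int(np.argmax(post))
    mass = post[lo]
    while mass < 1.0 - alpha:
        left = post[lo - 1] if lo > 0 else -np.inf
        right = post[hi + 1] if hi < len(post) - 1 else -np.inf
        if left >= right:        # grow toward the larger neighbouring mass
            lo -= 1
            mass += post[lo]
        else:
            hi += 1
            mass += post[hi]
    return lo, hi
```

Averaging the predictive rate over $M \in [M_{min}, M_{max}]$ after renormalizing `post[lo:hi+1]` then implements the restricted model averaging described above.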

55 Briefly, extracellular single-unit recordings were made using standard techniques from the upper and lower banks of the anterior part of the superior temporal sulcus (STSa) and the inferior temporal cortex (IT) of two monkeys (Macaca mulatta) performing a visual fixation task. [sent-166, score-0.255]

56 Stimuli were presented for 333 ms followed by a 333 ms inter-stimulus interval in random order. [sent-167, score-0.178]

57 The anterior-posterior extent of the recorded cells was from 7 mm to 9 mm anterior of the interaural plane, consistent with previous studies showing visual responses to static images in this region [10, 11, 12, 13]. [sent-168, score-0.165]

58 The recorded cells were located in the upper bank (TAa, TPO), lower bank (TEa, TEm) and fundus (PGa, IPa) of STS and in the anterior areas of TE (AIT of [14]). [sent-169, score-0.154]

59 These areas are rostral to FST and we collectively call them the anterior STS (STSa), see [15] for further discussion. [sent-170, score-0.057]

60 The recorded firing patterns were turned into distinct samples, each of which contained the spikes from −300 ms before to 600 ms after the stimulus onset with a temporal resolution of 1 ms. [sent-171, score-0.34]

61 2 Inferring PSTHs To see the method in action, we used it to infer a PSTH from 32 spiketrains recorded from one of the available STSa neurons (see fig. [sent-173, score-0.216]

62 Figure 2: Predicting a PSTH/SDF with 3 different methods (x-axis: time, ms after stimulus onset). [sent-182, score-0.137]

63 A: The dataset used in this comparison consisted of 32 spiketrains recorded from an STSa neuron. [sent-183, score-0.185]

64 The thick line represents the predictive firing rate (section 4), the thin lines show the predictive firing rate ±1 standard deviation. [sent-186, score-0.081]

65 C: bar PSTH (solid lines), optimal binsize ≈ 26ms, and line PSTH (dashed lines), optimal binsize ≈ 78ms, computed by the methods described in [1, 2]. [sent-190, score-0.134]

66 D: SDF obtained by smoothing the spike trains with a 10ms Gaussian kernel. [sent-191, score-0.229]

67 The prior parameters were equal for all bins and set to $\sigma_m = 1$ and $\gamma_m = 32$. [sent-195, score-0.092]

68 This corresponds to a prior mean firing probability of $\sigma_m/(\sigma_m+\gamma_m) = 1/33 \approx 0.03$ in each 1 ms time interval (30 spikes/s), which is typical for the neurons in this study. [sent-197, score-0.141]

69 Models with $4 \le M \le 13$ (expected bin sizes between ≈ 23 ms and 148 ms) were included on an α = 0. [sent-198, score-0.224]

70 (19)) in the subsequent calculation of the predictive firing rate (i. [sent-200, score-0.032]

71 Fig. 2, C, shows a bar PSTH and a line PSTH computed with the recently developed methods described in [1, 2]. [sent-205, score-0.068]

72 In this example, the bar PSTH consists of 26 bins. [sent-211, score-0.051]

73 Fig. 2 depicts an SDF obtained by smoothing the spiketrains with a 10 ms wide Gaussian kernel, which is a standard way of calculating SDFs in the neurophysiological literature. [sent-213, score-0.212]

74 All tested methods produce results which are, upon cursory visual inspection, largely consistent with the spiketrains. [sent-214, score-0.029]

75 However, Bayesian binning is better suited than Gaussian smoothing to model steep changes, such as the transient response starting at ≈ 100ms. [sent-215, score-0.225]

76 While the methods from [1, 2] share this advantage, they suffer from two drawbacks: firstly, the bin boundaries are evenly spaced, hence the peak of the transient is later than the scatterplots would suggest. [sent-216, score-0.371]

77 Secondly, because the bin duration is the only parameter of the model, these methods are forced to put many bins even in intervals that are relatively constant, such as the baselines before and after the stimulus-driven response. [sent-217, score-0.377]

78 In contrast, Bayesian binning, being able to put bin boundaries anywhere in the time span of interest, can model the data with fewer bins – the model posterior has its maximum at M = 6 (7 bins), whereas the bar PSTH consists of 26 bins. [sent-218, score-0.516]

79 Figure 3: Left: Comparison of Bayesian binning with competing methods by 5-fold cross-validation (left panel x-axis: CV error relative to Bayesian binning; right panel x-axis: M). [sent-231, score-0.039]

80 The histograms show relative frequencies of CV error differences between 3 competing methods and our Bayesian binning approach. [sent-233, score-0.176]

81 Gaussian: SDFs obtained by Gaussian smoothing of the spiketrains with a 10 ms kernel. [sent-234, score-0.247]

82 Bar PSTH and line PSTH: PSTHs computed by the binning methods described in [1, 2]. [sent-235, score-0.137]

83 The shape is fairly typical for model posteriors computed from the neural data used in this paper: a sharp rise at a moderately low M followed by a maximum (here at M = 6) and an approximately exponential decay. [sent-239, score-0.038]

84 the predictive firing rate, section 4) by choosing a moderately small maximum M . [sent-244, score-0.051]

85 For a more rigorous method comparison, we split the data into distinct sets, each of which contained the responses of a cell to a different stimulus. [sent-245, score-0.039]

86 This procedure yielded 336 sets from 20 cells with at least 20 spiketrains per set. [sent-246, score-0.164]

87 The Gaussian SDFs were discretized into 1 ms time intervals prior to the procedure. [sent-249, score-0.165]

88 Here a positive value indicates that Bayesian binning predicts the test data more accurately than the alternative method. [sent-261, score-0.12]
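
The excerpts do not spell out the CV error itself; purely as an illustrative stand-in (an assumption, not the paper's metric), the sketch below scores a rate estimator by the mean squared difference between its prediction from the training folds and the empirical PSTH of the held-out fold:

```python
import numpy as np

def cv_error(z, predict_rate, n_folds=5, seed=0):
    """5-fold CV score of a rate estimator. predict_rate maps an (n, T) binary
    array of training spiketrains to a length-T firing-rate prediction.
    The squared-error score is our illustrative stand-in for the paper's CV error."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(z)), n_folds)
    err = 0.0
    for f in folds:
        train = np.delete(z, f, axis=0)
        test_psth = z[f].mean(axis=0)   # empirical per-interval spike probability
        err += float(np.mean((predict_rate(train) - test_psth) ** 2))
    return err / n_folds
```

A positive value of `cv_error(z, gaussian_sdf) - cv_error(z, bayesian_rate)` would then correspond to the histograms of fig. 3, with Bayesian binning predicting the held-out data more accurately.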

89 Bayesian binning predicted the data better than the three other methods in at least 295/336 cases, with a minimal difference of ≈ −0. [sent-264, score-0.12]

90 7 Summary We have introduced an exact Bayesian binning method for the estimation of PSTHs. [sent-266, score-0.12]

91 Besides treating uncertainty – a real problem with small neurophysiological datasets – in a principled fashion, it also outperforms competing methods on real neural data. [sent-267, score-0.072]

92 It offers automatic complexity control because the model posterior can be evaluated. [sent-268, score-0.034]

93 While its computational cost is significantly higher than that of the methods we compared it to, it is still fast enough to be useful: evaluating the predictive probability takes less than 1 s on a modern PC, with a small memory footprint (<10 MB for 512 spiketrains). [sent-269, score-0.049]

94 Our method reveals a clear and sharp initial response onset, a distinct transition from the transient to the sustained part of the response and a well-defined offset. [sent-273, score-0.159]

95 A method for selecting the bin size of a time histogram. [sent-287, score-0.224]

96 The temporal precision of neural signals: A unique role for response latency? [sent-329, score-0.079]

97 Visual properties of neurons in a polysensory area in superior temporal sulcus of the macaque. [sent-332, score-0.139]

98 Visual neurons responsive to faces in the monkey temporal cortex. [sent-335, score-0.106]

99 Coding visual images of objects in the inferotemporal cortex of the macaque monkey. [sent-357, score-0.029]

100 Integration of visual and auditory information by superior temporal sulcus neurons responsive to the sight of actions. [sent-369, score-0.201]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('km', 0.454), ('subem', 0.343), ('fm', 0.305), ('getiec', 0.278), ('psth', 0.26), ('tmin', 0.245), ('bin', 0.224), ('spike', 0.197), ('ks', 0.182), ('ke', 0.182), ('spiketrains', 0.147), ('binning', 0.12), ('tmax', 0.117), ('boundaries', 0.111), ('ring', 0.105), ('psths', 0.098), ('sube', 0.082), ('cv', 0.076), ('bins', 0.072), ('ms', 0.068), ('sdf', 0.065), ('stsa', 0.065), ('bayesian', 0.059), ('anterior', 0.057), ('contiguous', 0.057), ('oram', 0.057), ('intervals', 0.051), ('bar', 0.051), ('inclusive', 0.049), ('peristimulus', 0.049), ('sdfs', 0.049), ('sulcus', 0.049), ('onset', 0.044), ('temporal', 0.042), ('interval', 0.042), ('boundary', 0.041), ('spikes', 0.04), ('competing', 0.039), ('recorded', 0.038), ('response', 0.037), ('transient', 0.036), ('pr', 0.036), ('posterior', 0.034), ('binsize', 0.033), ('ldi', 0.033), ('responsive', 0.033), ('rolls', 0.033), ('spiketrain', 0.033), ('sts', 0.033), ('neurophysiological', 0.033), ('smoothing', 0.032), ('array', 0.032), ('predictive', 0.032), ('neurons', 0.031), ('duration', 0.03), ('neurophysiology', 0.03), ('visual', 0.029), ('endres', 0.028), ('gaps', 0.028), ('shimazaki', 0.028), ('evidence', 0.028), ('denominator', 0.027), ('discretized', 0.026), ('latencies', 0.026), ('piecewise', 0.025), ('stimulus', 0.025), ('sums', 0.024), ('andrews', 0.024), ('responses', 0.024), ('span', 0.024), ('crossvalidation', 0.022), ('xiao', 0.022), ('risk', 0.022), ('superiority', 0.021), ('bank', 0.021), ('ek', 0.021), ('effort', 0.021), ('numerator', 0.02), ('prior', 0.02), ('moderately', 0.019), ('sharp', 0.019), ('inside', 0.019), ('probable', 0.019), ('recordings', 0.019), ('virtue', 0.018), ('cells', 0.017), ('generative', 0.017), ('calculate', 0.017), ('evaluating', 0.017), ('superior', 0.017), ('preferences', 0.017), ('histograms', 0.017), ('line', 0.017), ('store', 0.016), ('distinct', 0.015), ('reveals', 0.015), ('averages', 0.015), ('gaussian', 0.015), ('fashion', 0.015), ('psychology', 0.014)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.99999964 35 nips-2007-Bayesian binning beats approximate alternatives: estimating peri-stimulus time histograms

Author: Dominik Endres, Mike Oram, Johannes Schindelin, Peter Foldiak

Abstract: The peristimulus time histogram (PSTH) and its more continuous cousin, the spike density function (SDF), are staples in the analytic toolkit of neurophysiologists. The former is usually obtained by binning spike trains, whereas the standard method for the latter is smoothing with a Gaussian kernel. Selection of a bin width or a kernel size is often done in a relatively arbitrary fashion, even though there have been recent attempts to remedy this situation [1, 2]. We develop an exact Bayesian, generative model approach to estimating PSTHs and demonstrate its superiority to competing methods. Further advantages of our scheme include automatic complexity control and error bars on its predictions. 1

2 0.17035876 104 nips-2007-Inferring Neural Firing Rates from Spike Trains Using Gaussian Processes

Author: Maneesh Sahani, Byron M. Yu, John P. Cunningham, Krishna V. Shenoy

Abstract: Neural spike trains present challenges to analytical efforts due to their noisy, spiking nature. Many studies of neuroscientific and neural prosthetic importance rely on a smoothed, denoised estimate of the spike train’s underlying firing rate. Current techniques to find time-varying firing rates require ad hoc choices of parameters, offer no confidence intervals on their estimates, and can obscure potentially important single trial variability. We present a new method, based on a Gaussian Process prior, for inferring probabilistically optimal estimates of firing rate functions underlying single or multiple neural spike trains. We test the performance of the method on simulated data and experimentally gathered neural spike trains, and we demonstrate improvements over conventional estimators. 1

3 0.13753286 36 nips-2007-Better than least squares: comparison of objective functions for estimating linear-nonlinear models

Author: Tatyana Sharpee

Abstract: This paper compares a family of methods for characterizing neural feature selectivity with natural stimuli in the framework of the linear-nonlinear model. In this model, the neural firing rate is a nonlinear function of a small number of relevant stimulus components. The relevant stimulus dimensions can be found by maximizing one of the family of objective functions, Rényi divergences of different orders [1, 2]. We show that maximizing one of them, Rényi divergence of order 2, is equivalent to least-square fitting of the linear-nonlinear model to neural data. Next, we derive reconstruction errors in relevant dimensions found by maximizing Rényi divergences of arbitrary order in the asymptotic limit of large spike numbers. We find that the smallest errors are obtained with Rényi divergence of order 1, also known as Kullback-Leibler divergence. This corresponds to finding relevant dimensions by maximizing mutual information [2]. We numerically test how these optimization schemes perform in the regime of low signal-to-noise ratio (small number of spikes and increasing neural noise) for model visual neurons. We find that optimization schemes based on either least square fitting or information maximization perform well even when the number of spikes is small. Information maximization provides slightly, but significantly, better reconstructions than least square fitting. This makes the problem of finding relevant dimensions, together with the problem of lossy compression [3], one of the examples where information-theoretic measures are no more data-limited than those derived from least squares. 1

4 0.12787089 140 nips-2007-Neural characterization in partially observed populations of spiking neurons

Author: Jonathan W. Pillow, Peter E. Latham

Abstract: Point process encoding models provide powerful statistical methods for understanding the responses of neurons to sensory stimuli. Although these models have been successfully applied to neurons in the early sensory pathway, they have fared less well capturing the response properties of neurons in deeper brain areas, owing in part to the fact that they do not take into account multiple stages of processing. Here we introduce a new twist on the point-process modeling approach: we include unobserved as well as observed spiking neurons in a joint encoding model. The resulting model exhibits richer dynamics and more highly nonlinear response properties, making it more powerful and more flexible for fitting neural data. More importantly, it allows us to estimate connectivity patterns among neurons (both observed and unobserved), and may provide insight into how networks process sensory input. We formulate the estimation procedure using variational EM and the wake-sleep algorithm, and illustrate the model’s performance using a simulated example network consisting of two coupled neurons.

5 0.10423196 160 nips-2007-Random Features for Large-Scale Kernel Machines

Author: Ali Rahimi, Benjamin Recht

Abstract: To accelerate the training of kernel machines, we propose to map the input data to a randomized low-dimensional feature space and then apply existing fast linear methods. The features are designed so that the inner products of the transformed data are approximately equal to those in the feature space of a user specified shiftinvariant kernel. We explore two sets of random features, provide convergence bounds on their ability to approximate various radial basis kernels, and show that in large-scale classification and regression tasks linear machine learning algorithms applied to these features outperform state-of-the-art large-scale kernel machines. 1

6 0.10245055 33 nips-2007-Bayesian Inference for Spiking Neuron Models with a Sparsity Prior

7 0.081841126 17 nips-2007-A neural network implementing optimal state estimation based on dynamic spike train decoding

8 0.072535478 133 nips-2007-Modelling motion primitives and their timing in biologically executed movements

9 0.071988843 205 nips-2007-Theoretical Analysis of Learning with Reward-Modulated Spike-Timing-Dependent Plasticity

10 0.070556767 177 nips-2007-Simplified Rules and Theoretical Analysis for Information Bottleneck Optimization and PCA with Spiking Neurons

11 0.06226299 164 nips-2007-Receptive Fields without Spike-Triggering

12 0.055444732 117 nips-2007-Learning to classify complex patterns using a VLSI network of spiking neurons

13 0.053467616 25 nips-2007-An in-silico Neural Model of Dynamic Routing through Neuronal Coherence

14 0.042834304 103 nips-2007-Inferring Elapsed Time from Stochastic Neural Processes

15 0.042158194 44 nips-2007-Catching Up Faster in Bayesian Model Selection and Model Averaging

16 0.040197205 126 nips-2007-McRank: Learning to Rank Using Multiple Classification and Gradient Boosting

17 0.036413167 60 nips-2007-Contraction Properties of VLSI Cooperative Competitive Neural Networks of Spiking Neurons

18 0.035202697 16 nips-2007-A learning framework for nearest neighbor search

19 0.035200916 182 nips-2007-Sparse deep belief net model for visual area V2

20 0.032834522 14 nips-2007-A configurable analog VLSI neural network with spiking neurons and self-regulating plastic synapses


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.121), (1, 0.09), (2, 0.175), (3, 0.035), (4, 0.0), (5, -0.042), (6, 0.035), (7, -0.041), (8, -0.021), (9, 0.002), (10, -0.064), (11, -0.013), (12, 0.04), (13, -0.002), (14, -0.023), (15, -0.008), (16, 0.086), (17, 0.106), (18, 0.086), (19, 0.103), (20, -0.021), (21, -0.07), (22, 0.045), (23, 0.146), (24, 0.03), (25, -0.088), (26, -0.012), (27, 0.008), (28, -0.002), (29, -0.033), (30, 0.031), (31, -0.039), (32, 0.066), (33, 0.13), (34, 0.043), (35, 0.075), (36, -0.011), (37, -0.01), (38, -0.058), (39, -0.083), (40, -0.02), (41, -0.06), (42, 0.181), (43, 0.035), (44, -0.103), (45, 0.079), (46, 0.015), (47, 0.01), (48, -0.154), (49, 0.033)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.95422751 35 nips-2007-Bayesian binning beats approximate alternatives: estimating peri-stimulus time histograms

Author: Dominik Endres, Mike Oram, Johannes Schindelin, Peter Foldiak

Abstract: The peristimulus time histogram (PSTH) and its more continuous cousin, the spike density function (SDF) are staples in the analytic toolkit of neurophysiologists. The former is usually obtained by binning spike trains, whereas the standard method for the latter is smoothing with a Gaussian kernel. Selection of a bin width or a kernel size is often done in an relatively arbitrary fashion, even though there have been recent attempts to remedy this situation [1, 2]. We develop an exact Bayesian, generative model approach to estimating PSTHs and demonstate its superiority to competing methods. Further advantages of our scheme include automatic complexity control and error bars on its predictions. 1

2 0.74076533 36 nips-2007-Better than least squares: comparison of objective functions for estimating linear-nonlinear models

Author: Tatyana Sharpee

Abstract: This paper compares a family of methods for characterizing neural feature selectivity with natural stimuli in the framework of the linear-nonlinear model. In this model, the neural firing rate is a nonlinear function of a small number of relevant stimulus components. The relevant stimulus dimensions can be found by maximizing one of the family of objective functions, Rényi divergences of different orders [1, 2]. We show that maximizing one of them, Rényi divergence of order 2, is equivalent to least-square fitting of the linear-nonlinear model to neural data. Next, we derive reconstruction errors in relevant dimensions found by maximizing Rényi divergences of arbitrary order in the asymptotic limit of large spike numbers. We find that the smallest errors are obtained with Rényi divergence of order 1, also known as Kullback-Leibler divergence. This corresponds to finding relevant dimensions by maximizing mutual information [2]. We numerically test how these optimization schemes perform in the regime of low signal-to-noise ratio (small number of spikes and increasing neural noise) for model visual neurons. We find that optimization schemes based on either least square fitting or information maximization perform well even when the number of spikes is small. Information maximization provides slightly, but significantly, better reconstructions than least square fitting. This makes the problem of finding relevant dimensions, together with the problem of lossy compression [3], one of the examples where information-theoretic measures are no more data-limited than those derived from least squares. 1

3 0.64356786 104 nips-2007-Inferring Neural Firing Rates from Spike Trains Using Gaussian Processes

Author: Maneesh Sahani, Byron M. Yu, John P. Cunningham, Krishna V. Shenoy

Abstract: Neural spike trains present challenges to analytical efforts due to their noisy, spiking nature. Many studies of neuroscientific and neural prosthetic importance rely on a smoothed, denoised estimate of the spike train’s underlying firing rate. Current techniques to find time-varying firing rates require ad hoc choices of parameters, offer no confidence intervals on their estimates, and can obscure potentially important single trial variability. We present a new method, based on a Gaussian Process prior, for inferring probabilistically optimal estimates of firing rate functions underlying single or multiple neural spike trains. We test the performance of the method on simulated data and experimentally gathered neural spike trains, and we demonstrate improvements over conventional estimators. 1

4 0.49737182 140 nips-2007-Neural characterization in partially observed populations of spiking neurons

Author: Jonathan W. Pillow, Peter E. Latham

Abstract: Point process encoding models provide powerful statistical methods for understanding the responses of neurons to sensory stimuli. Although these models have been successfully applied to neurons in the early sensory pathway, they have fared less well capturing the response properties of neurons in deeper brain areas, owing in part to the fact that they do not take into account multiple stages of processing. Here we introduce a new twist on the point-process modeling approach: we include unobserved as well as observed spiking neurons in a joint encoding model. The resulting model exhibits richer dynamics and more highly nonlinear response properties, making it more powerful and more flexible for fitting neural data. More importantly, it allows us to estimate connectivity patterns among neurons (both observed and unobserved), and may provide insight into how networks process sensory input. We formulate the estimation procedure using variational EM and the wake-sleep algorithm, and illustrate the model’s performance using a simulated example network consisting of two coupled neurons.

5 0.47334114 33 nips-2007-Bayesian Inference for Spiking Neuron Models with a Sparsity Prior

Author: Sebastian Gerwinn, Matthias Bethge, Jakob H. Macke, Matthias Seeger

Abstract: Generalized linear models are the most commonly used tools to describe the stimulus selectivity of sensory neurons. Here we present a Bayesian treatment of such models. Using the expectation propagation algorithm, we are able to approximate the full posterior distribution over all weights. In addition, we use a Laplacian prior to favor sparse solutions. Therefore, stimulus features that do not critically influence neural activity will be assigned zero weights and thus be effectively excluded by the model. This feature selection mechanism facilitates both the interpretation of the neuron model as well as its predictive abilities. The posterior distribution can be used to obtain confidence intervals which makes it possible to assess the statistical significance of the solution. In neural data analysis, the available amount of experimental measurements is often limited whereas the parameter space is large. In such a situation, both regularization by a sparsity prior and uncertainty estimates for the model parameters are essential. We apply our method to multi-electrode recordings of retinal ganglion cells and use our uncertainty estimate to test the statistical significance of functional couplings between neurons. Furthermore we used the sparsity of the Laplace prior to select those filters from a spike-triggered covariance analysis that are most informative about the neural response. 1

6 0.42899707 133 nips-2007-Modelling motion primitives and their timing in biologically executed movements

7 0.41406715 160 nips-2007-Random Features for Large-Scale Kernel Machines

8 0.36096612 67 nips-2007-Direct Importance Estimation with Model Selection and Its Application to Covariate Shift Adaptation

9 0.35945514 205 nips-2007-Theoretical Analysis of Learning with Reward-Modulated Spike-Timing-Dependent Plasticity

10 0.35385802 164 nips-2007-Receptive Fields without Spike-Triggering

11 0.32811856 77 nips-2007-Efficient Inference for Distributions on Permutations

12 0.32583219 17 nips-2007-A neural network implementing optimal state estimation based on dynamic spike train decoding

13 0.31792074 25 nips-2007-An in-silico Neural Model of Dynamic Routing through Neuronal Coherence

14 0.27959844 89 nips-2007-Feature Selection Methods for Improving Protein Structure Prediction with Rosetta

15 0.27824366 177 nips-2007-Simplified Rules and Theoretical Analysis for Information Bottleneck Optimization and PCA with Spiking Neurons

16 0.26131436 60 nips-2007-Contraction Properties of VLSI Cooperative Competitive Neural Networks of Spiking Neurons

17 0.2552169 16 nips-2007-A learning framework for nearest neighbor search

18 0.25478685 85 nips-2007-Experience-Guided Search: A Theory of Attentional Control

19 0.24511491 79 nips-2007-Efficient multiple hyperparameter learning for log-linear models

20 0.23252051 210 nips-2007-Unconstrained On-line Handwriting Recognition with Recurrent Neural Networks


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(5, 0.032), (13, 0.03), (16, 0.097), (18, 0.021), (19, 0.017), (21, 0.054), (34, 0.016), (35, 0.065), (47, 0.06), (49, 0.016), (83, 0.064), (87, 0.012), (90, 0.048), (97, 0.366)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.76350129 35 nips-2007-Bayesian binning beats approximate alternatives: estimating peri-stimulus time histograms

Author: Dominik Endres, Mike Oram, Johannes Schindelin, Peter Foldiak

Abstract: The peristimulus time histogram (PSTH) and its more continuous cousin, the spike density function (SDF) are staples in the analytic toolkit of neurophysiologists. The former is usually obtained by binning spike trains, whereas the standard method for the latter is smoothing with a Gaussian kernel. Selection of a bin width or a kernel size is often done in an relatively arbitrary fashion, even though there have been recent attempts to remedy this situation [1, 2]. We develop an exact Bayesian, generative model approach to estimating PSTHs and demonstate its superiority to competing methods. Further advantages of our scheme include automatic complexity control and error bars on its predictions. 1

2 0.73198199 133 nips-2007-Modelling motion primitives and their timing in biologically executed movements

Author: Ben Williams, Marc Toussaint, Amos J. Storkey

Abstract: Biological movement is built up of sub-blocks or motion primitives. Such primitives provide a compact representation of movement which is also desirable in robotic control applications. We analyse handwriting data to gain a better understanding of primitives and their timings in biological movements. Inference of the shape and the timing of primitives can be done using a factorial HMM based model, allowing the handwriting to be represented in primitive timing space. This representation provides a distribution of spikes corresponding to the primitive activations, which can also be modelled using HMM architectures. We show how the coupling of the low level primitive model, and the higher level timing model during inference can produce good reconstructions of handwriting, with shared primitives for all characters modelled. This coupled model also captures the variance profile of the dataset which is accounted for by spike timing jitter. The timing code provides a compact representation of the movement while generating a movement without an explicit timing model produces a scribbling style of output. 1

3 0.6437242 202 nips-2007-The discriminant center-surround hypothesis for bottom-up saliency

Author: Dashan Gao, Vijay Mahadevan, Nuno Vasconcelos

Abstract: The classical hypothesis, that bottom-up saliency is a center-surround process, is combined with a more recent hypothesis that all saliency decisions are optimal in a decision-theoretic sense. The combined hypothesis is denoted as discriminant center-surround saliency, and the corresponding optimal saliency architecture is derived. This architecture equates the saliency of each image location to the discriminant power of a set of features with respect to the classification problem that opposes stimuli at center and surround, at that location. It is shown that the resulting saliency detector makes accurate quantitative predictions for various aspects of the psychophysics of human saliency, including non-linear properties beyond the reach of previous saliency models. Furthermore, it is shown that discriminant center-surround saliency can be easily generalized to various stimulus modalities (such as color, orientation and motion), and provides optimal solutions for many other saliency problems of interest for computer vision. Optimal solutions, under this hypothesis, are derived for a number of the former (including static natural images, dense motion fields, and even dynamic textures), and applied to a number of the latter (the prediction of human eye fixations, motion-based saliency in the presence of ego-motion, and motion-based saliency in the presence of highly dynamic backgrounds). In result, discriminant saliency is shown to predict eye fixations better than previous models, and produces background subtraction algorithms that outperform the state-of-the-art in computer vision. 1

4 0.38552058 170 nips-2007-Robust Regression with Twinned Gaussian Processes

Author: Andrew Naish-guzman, Sean Holden

Abstract: We propose a Gaussian process (GP) framework for robust inference in which a GP prior on the mixing weights of a two-component noise model augments the standard process over latent function values. This approach is a generalization of the mixture likelihood used in traditional robust GP regression, and a specialization of the GP mixture models suggested by Tresp [1] and Rasmussen and Ghahramani [2]. The value of this restriction is in its tractable expectation propagation updates, which allow for faster inference and model selection, and better convergence than the standard mixture. An additional benefit over the latter method lies in our ability to incorporate knowledge of the noise domain to influence predictions, and to recover with the predictive distribution information about the outlier distribution via the gating process. The model has asymptotic complexity equal to that of conventional robust methods, but yields more confident predictions on benchmark problems than classical heavy-tailed models and exhibits improved stability for data with clustered corruptions, for which they fail altogether. We show further how our approach can be used without adjustment for more smoothly heteroscedastic data, and suggest how it could be extended to more general noise models. We also address similarities with the work of Goldberg et al. [3].

5 0.38323811 206 nips-2007-Topmoumoute Online Natural Gradient Algorithm

Author: Nicolas L. Roux, Pierre-antoine Manzagol, Yoshua Bengio

Abstract: Guided by the goal of obtaining an optimization algorithm that is both fast and yields good generalization, we study the descent direction maximizing the decrease in generalization error or the probability of not increasing generalization error. The surprising result is that from both the Bayesian and frequentist perspectives this can yield the natural gradient direction. Although that direction can be very expensive to compute we develop an efficient, general, online approximation to the natural gradient descent which is suited to large scale problems. We report experimental results showing much faster convergence in computation time and in number of iterations with TONGA (Topmoumoute Online natural Gradient Algorithm) than with stochastic gradient descent, even on very large datasets.

6 0.38238537 140 nips-2007-Neural characterization in partially observed populations of spiking neurons

7 0.37918308 33 nips-2007-Bayesian Inference for Spiking Neuron Models with a Sparsity Prior

8 0.3791154 60 nips-2007-Contraction Properties of VLSI Cooperative Competitive Neural Networks of Spiking Neurons

9 0.37565091 104 nips-2007-Inferring Neural Firing Rates from Spike Trains Using Gaussian Processes

10 0.36570814 195 nips-2007-The Generalized FITC Approximation

11 0.36531058 36 nips-2007-Better than least squares: comparison of objective functions for estimating linear-nonlinear models

12 0.35990506 164 nips-2007-Receptive Fields without Spike-Triggering

13 0.34515131 130 nips-2007-Modeling Natural Sounds with Modulation Cascade Processes

14 0.3449901 79 nips-2007-Efficient multiple hyperparameter learning for log-linear models

15 0.33964598 205 nips-2007-Theoretical Analysis of Learning with Reward-Modulated Spike-Timing-Dependent Plasticity

16 0.33672071 156 nips-2007-Predictive Matrix-Variate t Models

17 0.3366887 87 nips-2007-Fast Variational Inference for Large-scale Internet Diagnosis

18 0.33435223 177 nips-2007-Simplified Rules and Theoretical Analysis for Information Bottleneck Optimization and PCA with Spiking Neurons

19 0.33341101 158 nips-2007-Probabilistic Matrix Factorization

20 0.33336329 96 nips-2007-Heterogeneous Component Analysis