nips nips2007 nips2007-140 knowledge-graph by maker-knowledge-mining

140 nips-2007-Neural characterization in partially observed populations of spiking neurons


Source: pdf

Author: Jonathan W. Pillow, Peter E. Latham

Abstract: Point process encoding models provide powerful statistical methods for understanding the responses of neurons to sensory stimuli. Although these models have been successfully applied to neurons in the early sensory pathway, they have fared less well capturing the response properties of neurons in deeper brain areas, owing in part to the fact that they do not take into account multiple stages of processing. Here we introduce a new twist on the point-process modeling approach: we include unobserved as well as observed spiking neurons in a joint encoding model. The resulting model exhibits richer dynamics and more highly nonlinear response properties, making it more powerful and more flexible for fitting neural data. More importantly, it allows us to estimate connectivity patterns among neurons (both observed and unobserved), and may provide insight into how networks process sensory input. We formulate the estimation procedure using variational EM and the wake-sleep algorithm, and illustrate the model’s performance using a simulated example network consisting of two coupled neurons.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Neural characterization in partially observed populations of spiking neurons Jonathan W. [sent-1, score-0.548]

2 Abstract: Point process encoding models provide powerful statistical methods for understanding the responses of neurons to sensory stimuli. [sent-8, score-0.513]

3 Although these models have been successfully applied to neurons in the early sensory pathway, they have fared less well capturing the response properties of neurons in deeper brain areas, owing in part to the fact that they do not take into account multiple stages of processing. [sent-9, score-0.734]

4 Here we introduce a new twist on the point-process modeling approach: we include unobserved as well as observed spiking neurons in a joint encoding model. [sent-10, score-0.663]

5 The resulting model exhibits richer dynamics and more highly nonlinear response properties, making it more powerful and more flexible for fitting neural data. [sent-11, score-0.228]

6 More importantly, it allows us to estimate connectivity patterns among neurons (both observed and unobserved), and may provide insight into how networks process sensory input. [sent-12, score-0.45]

7 We formulate the estimation procedure using variational EM and the wake-sleep algorithm, and illustrate the model’s performance using a simulated example network consisting of two coupled neurons. [sent-13, score-0.238]

8 1 Introduction. A central goal of computational neuroscience is to understand how the brain transforms sensory input into spike trains, and considerable effort has focused on the development of statistical models that can describe this transformation. [sent-14, score-0.4]

9 One of the most successful of these is the linear-nonlinear-Poisson (LNP) cascade model, which describes a cell’s response in terms of a linear filter (or receptive field), an output nonlinearity, and an instantaneous spiking point process [1–5]. [sent-15, score-0.403]

10 Recent efforts have generalized this model to incorporate spike-history and multi-neuronal dependencies, which greatly enhances the model’s flexibility, allowing it to capture non-Poisson spiking statistics and joint responses of an entire population of neurons [6–10]. [sent-16, score-0.644]

11 Point process models accurately describe the spiking responses of neurons in the early visual pathway to light, and of cortical neurons to injected currents. [sent-17, score-0.816]

12 Such failings are in some ways not surprising: the cascade model’s stimulus sensitivity is described with a single linear filter, whereas responses in the brain reflect multiple stages of nonlinear processing, adaptation on multiple timescales, and recurrent feedback from higher-level areas. [sent-19, score-0.493]

13 Here we extend the point-process modeling framework to incorporate a set of unobserved or “hidden” neurons, whose spike trains are unknown and treated as hidden or latent variables. [sent-21, score-0.569]

14 The unobserved neurons respond to the stimulus and to synaptic inputs from other neurons, and their spiking activity can in turn affect the responses of the observed neurons. [sent-22, score-1.013]

15 Although this expanded model offers considerably greater flexibility in describing an observed set of neural responses, it is more difficult to fit to data. [sent-25, score-0.208]

16 Computing the likelihood of an observed set of spike trains requires integrating out the probability distribution over hidden activity, and we need sophisticated algorithms to find the maximum likelihood estimate of model parameters. [sent-26, score-0.733]

17 Both algorithms make use of a novel proposal density to capture the dependence of hidden spikes on the observed spike trains, which allows for fast sampling of hidden neurons’ activity. [sent-28, score-0.945]

18 We show that a single-cell model used to characterize the observed neuron performs poorly, while a coupled two-cell model estimated using the wake-sleep algorithm performs much more accurately. [sent-30, score-0.447]

19 2 Multi-neuronal point-process encoding model. We begin with a description of the encoding model, which generalizes the LNP model to incorporate non-Poisson spiking and coupling between neurons. [sent-31, score-0.526]

20 In this section we do not distinguish between observed and unobserved spikes, but will do so in the next. [sent-34, score-0.19]

21 Let xt denote the stimulus at time t, and yt and zt denote the number of spikes elicited by two neurons at t, where t ∈ [0, T] is an index over time. [sent-35, score-0.917]

22 Note that xt is a vector containing all elements of the stimulus that are causally related to the (scalar) responses yt and zt at time t. [sent-36, score-0.607]

23 Typically ∆ is sufficiently small that we observe only zero or one spike in every bin: yt, zt ∈ {0, 1}. [sent-43, score-0.477]

24 The conditional intensity (or instantaneous spike rate) of each cell depends on both the stimulus and the recent spiking history via a bank of linear filters. [sent-44, score-0.994]

25 Let y[t−τ,t) and z[t−τ,t) denote the (vector) spike train histories at time t. [sent-45, score-0.297]

26 The nonlinear function, f, maps the input to the instantaneous spike rate of each cell. [sent-53, score-0.306]
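
As a concrete illustration of this filter-bank form, here is a minimal NumPy sketch of one cell’s conditional intensity; the function and argument names are hypothetical, chosen to mirror the notation above, and this is not the authors’ code.

```python
import numpy as np

def conditional_intensity(k, h_self, h_cross, x_t, self_hist, cross_hist):
    """Instantaneous spike rate of one cell at a single time bin.

    k          : stimulus filter (e.g., ky), same length as x_t
    h_self     : post-spike filter (e.g., hyy)
    h_cross    : coupling filter from the other cell (e.g., hzy)
    x_t        : stimulus vector causally related to the response at t
    self_hist  : own spike-count history, y[t-tau, t)
    cross_hist : other cell's spike-count history, z[t-tau, t)
    """
    net_input = k @ x_t + h_self @ self_hist + h_cross @ cross_hist
    return np.exp(net_input)  # f = exp, as in Fig. 1
```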

27–28 [Figure 1 diagram residue: panel labels only — stimulus filters ky, kz; exponential nonlinearity exp(·); post-spike filters hyy, hzz; coupling filters hzy, hyz; neuron y and neuron z spike outputs; causal-structure panel over y[t−τ,t) and z[t−τ,t).] [sent-55, score-2.562; sent-64, score-0.913]

29 Figure 1: Schematic of generalized linear point-process (glpp) encoding model. [sent-65, score-0.319]

30 a, Diagram of model parameters for a pair of coupled neurons. [sent-66, score-0.157]

31–32 For each cell, the parameters consist of a stimulus filter (e.g., ky), a spike-train history filter (hyy), and a filter capturing coupling from the spike train history of the other cell (hzy). [sent-67, score-0.272; sent-69, score-0.86]

33 The filter outputs are summed, pass through an exponential nonlinearity, and drive spiking via an instantaneous point process. [sent-70, score-0.238]

34 b, Equivalent diagram showing just the parameters of the neuron y, as used for drawing a sample yt . [sent-71, score-0.393]

35 Gray boxes highlight the stimulus vector xt and spike train history vectors that form the input to the model on this time step. [sent-72, score-0.71]

36 c, Simplified graphical model of the glpp causal structure, which allows us to visualize how the likelihood factorizes. [sent-73, score-0.303]

37 Red arrows highlight the dependency structure for a single time bin of the response y3 . [sent-76, score-0.241]

38 Equation 1 is equivalent to f applied to a linear convolution of the stimulus and spike trains with their respective filters; a schematic is shown in figure 1. [sent-78, score-0.68]

39 The probability of observing yt spikes in a bin of size ∆ is given by a Poisson distribution with rate parameter λyt∆: P(yt) = (λyt∆)^yt e^(−λyt∆) / yt!. [sent-79, score-0.387]

40 This factorization is possible because λyt and λzt depend only on the process history up to time t, making yt and zt conditionally independent given the stimulus and spike histories up to t (see Fig. 1c). [sent-82, score-0.829]

41 If the response at time t were to depend on both the past and future response, we would have a causal loop, preventing factorization and making both sampling and likelihood evaluation very difficult. [sent-84, score-0.229]
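
Because each bin depends only on the past, the joint distribution can be sampled ancestrally, one bin at a time. A sketch, reusing the hypothetical conditional_intensity above; the parameter-tuple layout is an assumption, not the paper’s.

```python
def sample_spike_trains(theta, X, T, tau, dt, rng=None):
    """Forward-sample (Y, Z) bin by bin, exploiting the causal factorization."""
    if rng is None:
        rng = np.random.default_rng()
    k_y, h_yy, h_zy, k_z, h_zz, h_yz = theta  # assumed parameter layout
    Y = np.zeros(T, dtype=int)
    Z = np.zeros(T, dtype=int)
    for t in range(tau, T):
        y_hist, z_hist = Y[t - tau:t], Z[t - tau:t]
        lam_y = conditional_intensity(k_y, h_yy, h_zy, X[t], y_hist, z_hist)
        lam_z = conditional_intensity(k_z, h_zz, h_yz, X[t], z_hist, y_hist)
        Y[t] = rng.poisson(lam_y * dt)  # effectively 0 or 1 for small dt
        Z[t] = rng.poisson(lam_z * dt)
    return Y, Z
```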

42 We can write the log-likelihood simply as log P(Y, Z|X, θ) = Σ_t (yt log λyt + zt log λzt − ∆λyt − ∆λzt) + c, (4) where c is a constant that does not depend on θ. [sent-87, score-0.337]
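
Given per-bin intensities, Eq. 4 is a one-liner; a sketch, dropping the constant c (which does not affect fitting):

```python
def log_likelihood(Y, Z, lam_y, lam_z, dt):
    """Point-process log-likelihood of Eq. 4, up to the constant c.
    lam_y, lam_z are arrays of per-bin conditional intensities."""
    return np.sum(Y * np.log(lam_y) + Z * np.log(lam_z)
                  - dt * (lam_y + lam_z))
```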

43 3 Generalized Expectation-Maximization and Wake-Sleep. Maximizing log P(Y, Z|X, θ) is straightforward if both Y and Z are observed, but here we are interested in the case where Y is observed and Z is “hidden”. [sent-88, score-0.155]

44 The log-likelihood of the observed data is given by L(θ) ≡ log P(Y|θ) = log Σ_Z P(Y, Z|θ), (5) where we have dropped X to simplify notation (all probabilities can henceforth be taken to also depend on X). [sent-90, score-0.195]
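
For orientation, the variational free-energy bound that underlies the E- and M-steps below can be restated as follows (this is the standard identity, with Q(Z|Y, φ) the approximating distribution introduced later):

```latex
\mathcal{F}(\theta,\phi)
  = \sum_Z Q(Z\mid Y,\phi)\,\log\frac{P(Y,Z\mid\theta)}{Q(Z\mid Y,\phi)}
  = \log P(Y\mid\theta) - D_{\mathrm{KL}}\!\left(Q(Z\mid Y,\phi)\,\middle\|\,P(Z\mid Y,\theta)\right)
  \le \log P(Y\mid\theta)
```

Equality holds iff Q matches the true posterior, which is why minimizing the KL term tightens the bound.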

45 The variational E-step involves minimizing DKL(Q‖P) with respect to φ, which remains positive if Q does not approximate P exactly; the variational M-step is unchanged from the standard algorithm. [sent-109, score-0.188]

46 In the “wake” step (identical to the M-step), we fit the true model parameters θ by maximizing (an approximation to) the log-probability of the observed data Y . [sent-114, score-0.251]

47 We can therefore think of the wake phase as learning a model of the data (parametrized by θ), and the sleep phase as learning a consistent internal description of that model (parametrized by φ). [sent-116, score-0.4]

48 In the next section we show that considerations of the spike generation process can provide us with a good choice for Q. [sent-119, score-0.26]

49 a, Conditional model schematic, which allows zt to depend on the observed response both before and after t. [sent-121, score-0.483]

50 b, Graphical model showing causal structure of the acausal model, with arrows indicating dependency. [sent-122, score-0.314]

51 The observed spike responses (gray circles) are no longer dependent variables, but regarded as fixed, external data, which is necessary for computing Q(zt|Y, φ). [sent-123, score-0.519]

52 Red arrows illustrate the dependency structure for a single bin of the hidden response, z3 . [sent-124, score-0.251]

53 If zt = 1 (i.e., the hidden neuron spikes at time t), it is highly likely that yt+1 = 1 (i.e., the observed neuron spikes at time t+1). [sent-127, score-0.469]

54 Consequently, under the true P(Z|Y, θ), which is the probability over Z in all time bins given Y in all time bins, if yt+1 = 1 there is a high probability that zt = 1. [sent-130, score-0.291]

55 In other words, zt exhibits an acausal dependence on yt+1. [sent-131, score-0.238]

56 But this acausal dependence is not captured in Equation 3, which expresses the probability over zt as depending only on past events at time t, ignoring the future event yt+1 = 1. [sent-132, score-0.237]

57 Thus we have λ̃zt = exp(k̃z · xt + h̃zz · z[t−τ,t) + h̃zy · y[t−τ,t+τ)). (10) [sent-134, score-0.52]

58 As above, k̃z, h̃zz and h̃zy are linear filters; the important difference is that h̃zy · y[t−τ,t+τ) is a sum over past and future time: from t − τ to t + τ − ∆. [sent-135, score-0.91]

59 For this model, the parameters are φ = (k̃z, h̃zz, h̃zy). [sent-136, score-0.488]
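
A sketch of this proposal intensity, with hypothetical names standing in for the tilde parameters; the only structural change from the encoding model is the widened, acausal window on the observed train.

```python
def acausal_intensity(k_z_t, h_zz_t, h_zy_t, x_t, z_hist, y_window):
    """Proposal intensity for the hidden cell (Eq. 10).

    z_hist   : z[t - tau, t), causal, as in the encoding model
    y_window : y[t - tau, t + tau), extending into the future of the
               observed spike train to capture the acausal dependence
    """
    return np.exp(k_z_t @ x_t + h_zz_t @ z_hist + h_zy_t @ y_window)
```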

60 We now have a straightforward way to implement the wake-sleep algorithm, using samples from Q to perform the wake phase (estimating θ), and samples from P(Y, Z|θ) to perform the sleep phase (estimating φ). [sent-138, score-0.386]

61 The algorithm works as follows: • Wake: Draw samples {Zi} ∼ Q(Z|Y, φ), where Y are the observed spike trains and φ is the current set of parameters for the acausal point-process model Q. [sent-139, score-0.736]

62 • Sleep: Draw samples {Yj, Zj} ∼ P(Y, Z|θ), the true encoding distribution with the current parameters θ. [sent-143, score-0.178]
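
Putting the two phases together, a schematic of the loop; sample_Q, sample_P, grad_logP, and grad_logQ are assumed helper callables (samplers and score functions), theta and phi are treated as flat parameter vectors, and a stochastic gradient step stands in for each full maximization.

```python
def wake_sleep(theta, phi, Y_obs, X, sample_Q, sample_P,
               grad_logP, grad_logQ, n_samples=10, n_iters=100, lr=1e-2):
    """Schematic wake-sleep loop; not the authors' implementation."""
    for _ in range(n_iters):
        # Wake (M-step): impute hidden spikes from Q, then improve the
        # generative parameters theta on the completed data.
        Zs = [sample_Q(phi, Y_obs, X) for _ in range(n_samples)]
        g_theta = np.mean([grad_logP(theta, Y_obs, Z, X) for Z in Zs], axis=0)
        theta = theta + lr * g_theta
        # Sleep: "dream" (Y, Z) pairs from the current model, then improve
        # the recognition parameters phi on the fantasies.
        YZ = [sample_P(theta, X) for _ in range(n_samples)]
        g_phi = np.mean([grad_logQ(phi, Yj, Zj, X) for (Yj, Zj) in YZ], axis=0)
        phi = phi + lr * g_phi
    return theta, phi
```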

63 To perform a variational E-step, we draw samples (as above) from Q and use them to evaluate both the KL divergence DKL(Q(Z|Y, φ) ‖ P(Z|Y, θ)) and its gradient with respect to φ. [sent-150, score-0.163]
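
A minimal Monte Carlo version of that evaluation, again with hypothetical helper callables; since log P(Y|θ) is constant in φ, the estimate below equals the KL divergence only up to that constant, which does not affect the minimization over φ.

```python
def kl_estimate(Zs, logQ, logP_joint, phi, theta, Y, X):
    """Estimate D_KL(Q(Z|Y,phi) || P(Z|Y,theta)) - log P(Y|theta)
    from samples Zs ~ Q(Z|Y, phi)."""
    vals = [logQ(phi, Z, Y, X) - logP_joint(theta, Y, Z, X) for Z in Zs]
    return float(np.mean(vals))
```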

64 Such samples can be used to evaluate the true log-likelihood, for comparison with the variational lower bound, and for noisy gradient ascent of the likelihood to examine how closely these approximate methods converge to the true ML estimate. [sent-154, score-0.238]

65 For fully observed data, such samples also provide a useful means for measuring how much the entropy of one neuron’s response is reduced by knowing the responses of its neighbors. [sent-155, score-0.403]

66 5 Simulations: a two-neuron example To verify the method, we applied it to a pair of neurons (as depicted in fig. [sent-156, score-0.258]

67 1), simulated using a stimulus consisting of a long presentation of white noise. [sent-157, score-0.309]

68 We denoted one of the neurons “observed” and the other “hidden”. [sent-158, score-0.228]

69 The cells have similarly-shaped biphasic stimulus filters with opposite sign, like those commonly observed in ON and OFF retinal ganglion cells. [sent-161, score-0.507]

70 We assume that the ON-like cell is observed, while the OFF-like cell is hidden. [sent-162, score-0.362]

71 The hidden cell has a strong positive coupling filter hzy onto the observed cell, which allows spiking activity in the hidden cell to excite the observed cell (despite the fact that the two cells receive opposite-sign stimulus input). [sent-164, score-1.691]

72 For simplicity, we assume no coupling from the observed to the hidden cell². [sent-165, score-0.551]

73 Fig. 3b shows rasters of the two cells’ responses to repeated presentations of a 1s Gaussian white-noise stimulus with a frame rate of 100 Hz. [sent-170, score-0.43]

74 Note that the temporal structure of the observed cell’s response is strongly correlated with that of the hidden cell due to the strong coupling from hidden to observed (and the fact that the hidden cell receives slightly stronger stimulus drive). [sent-171, score-1.49]

75 Our first task is to examine whether a standard, single-cell glpp model can capture the mapping from stimuli to spike responses. [sent-172, score-0.55]

76 Fig. 3c shows the parameters obtained from such a fit to the observed data, using 10s of the response to a non-repeating white noise stimulus (1000 samples, 251 spikes). [sent-174, score-0.53]

77 Note that the estimated stimulus filter (red) has much lower amplitude than the stimulus filter of the true model (gray). [sent-175, score-0.571]

78 Fig. 3d shows the parameters obtained for an observed and a hidden neuron, estimated using wake-sleep as described in section 4. [sent-177, score-0.287]

79–80 Fig. 3e–f shows a comparison of the performance of the two models, indicating that the coupled model estimated with wake-sleep does a much better job of capturing the temporal structure of the observed neuron’s response (accounting for 60% vs. 15% of …²). [sent-179, score-0.41]

² Although the stimulus and spike-history filters bear a rough similarity to those observed in retinal ganglion cells, the coupling used here is unlike coupling filters observed (to our knowledge) between ON and OFF cells in retinal data; it is assumed purely for demonstration purposes. [sent-180, score-0.886]

81 2 0 -10 -20 hzy 0 -10 -20 0 0 hyy hzz @ coupled-model estimate using variational EM 0. [sent-182, score-0.695]

82 05 A single-cell model estimate GWN stimulus B hidden observed raster raster raster comparison coupled singletrue model cell observed observed 2 0 -2 ky > ? [sent-183, score-1.451]

83 true parameters 2 0 -2 hidden = psth comparison true rate (Hz) 100 0 0. [sent-184, score-0.293]

84 The top row shows the filters determining the input to the observed cell, while the bottom row shows those influencing the hidden cell. [sent-188, score-0.261]

85 b, Raster of spike responses of observed and hidden cells to a repeated, 1s Gaussian white noise stimulus (top). [sent-189, score-1.015]

86 c, Parameter estimates for a single-cell glpp model fit to the observed cell’s response, using just the stimulus and observed data (estimates in red; true observedcell filters in gray). [sent-190, score-0.717]

87 d, Parameters obtained using wake-sleep to estimate a coupled glpp model, again using only the stimulus and observed spike times. [sent-191, score-0.868]

88 e, Response raster of true observed cell (obtained by simulating the true two-cell model), estimated single-cell model and estimated coupled model. [sent-192, score-0.574]

89 f, Peri-stimulus time histogram (PSTH) of the above rasters showing that the coupled model gives much higher accuracy predicting the true response. [sent-193, score-0.204]

90 The single-cell model, by contrast, exhibits much worse performance, which is unsurprising given that the standard glpp encoding model can capture only quasi-linear stimulus dependencies. [sent-195, score-0.594]

91 6 Discussion. Although most statistical models of spike trains posit a direct pathway from sensory stimuli to neuronal responses, neurons are in fact embedded in highly recurrent networks that exhibit dynamics on a broad range of time-scales. [sent-196, score-0.777]

92 To take into account the fact that neural responses are driven by both stimuli and network activity, and to understand the role of network interactions, we proposed a model incorporating both hidden and observed spikes. [sent-197, score-0.645]

93 We regard the observed spike responses as those recorded during a typical experiment, while the responses of unobserved neurons are modeled as latent variables (unrecorded, but exerting influence on the observed responses). [sent-198, score-1.081]

94 The resulting model is tractable, as the latent variables can be integrated out using approximate sampling methods, and optimization using variational EM or wake-sleep provides an approximate maximum likelihood estimate of the model parameters. [sent-199, score-0.225]

95 As shown by a simple example, certain settings of model parameters necessitate the incorporation of unobserved spikes, as the standard single-stage encoding model does not accurately describe the data. [sent-200, score-0.273]

96 The model offers a promising tool for analyzing network structure and networkbased computations carried out in higher sensory areas, particularly in the context where data are only available from a restricted set of neurons recorded within a larger population. [sent-202, score-0.369]

97 A point process framework for relating neural spiking activity to spiking history, neural ensemble and extrinsic covariate effects. [sent-257, score-0.464]

98 Maximum likelihood estimation of cascade point-process neural encoding models. [sent-262, score-0.216]

99 Correlations and coding with multi-neuronal spike trains in primate retina. [sent-276, score-0.348]

100 Analyzing functional connectivity using a network likelihood model of ensemble neural spiking activity. [sent-291, score-0.411]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('spike', 0.26), ('hzy', 0.254), ('stimulus', 0.246), ('neurons', 0.228), ('zt', 0.217), ('hzz', 0.208), ('cell', 0.181), ('spikes', 0.168), ('kz', 0.165), ('spiking', 0.165), ('yt', 0.164), ('acausal', 0.162), ('glpp', 0.162), ('neuron', 0.155), ('hidden', 0.146), ('filters', 0.144), ('responses', 0.144), ('hyy', 0.139), ('hyz', 0.116), ('sleep', 0.116), ('wake', 0.116), ('observed', 0.115), ('ky', 0.113), ('coupling', 0.109), ('response', 0.105), ('kl', 0.1), ('variational', 0.094), ('trains', 0.088), ('schematic', 0.086), ('coupled', 0.085), ('raster', 0.081), ('encoding', 0.08), ('filter', 0.078), ('unobserved', 0.075), ('em', 0.069), ('history', 0.069), ('cells', 0.066), ('pillow', 0.065), ('sensory', 0.061), ('capturing', 0.059), ('xt', 0.058), ('causal', 0.056), ('psth', 0.055), ('bin', 0.055), ('stimuli', 0.052), ('pathway', 0.051), ('arrows', 0.05), ('cascade', 0.05), ('diagram', 0.048), ('neural', 0.047), ('dependence', 0.046), ('lnp', 0.046), ('tractably', 0.046), ('retinal', 0.046), ('model', 0.046), ('connectivity', 0.046), ('instantaneous', 0.046), ('nonlinearity', 0.042), ('bins', 0.041), ('log', 0.04), ('rasters', 0.04), ('refractoriness', 0.04), ('activity', 0.04), ('populations', 0.04), ('likelihood', 0.039), ('samples', 0.039), ('white', 0.038), ('phase', 0.038), ('gray', 0.038), ('neuronal', 0.037), ('receptive', 0.037), ('eden', 0.037), ('glm', 0.037), ('histories', 0.037), ('paninski', 0.034), ('ganglion', 0.034), ('functional', 0.034), ('network', 0.034), ('proposal', 0.034), ('true', 0.033), ('refractory', 0.032), ('maximizing', 0.031), ('generalized', 0.031), ('highlight', 0.031), ('dkl', 0.031), ('exhibits', 0.03), ('depicted', 0.03), ('divergence', 0.03), ('capture', 0.03), ('past', 0.029), ('parametrized', 0.028), ('drive', 0.027), ('brain', 0.027), ('understand', 0.027), ('conditional', 0.027), ('red', 0.027), ('parameters', 0.026), ('stages', 0.026), ('consisting', 0.025), ('neuroscience', 0.025)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000001 140 nips-2007-Neural characterization in partially observed populations of spiking neurons

Author: Jonathan W. Pillow, Peter E. Latham

Abstract: Point process encoding models provide powerful statistical methods for understanding the responses of neurons to sensory stimuli. Although these models have been successfully applied to neurons in the early sensory pathway, they have fared less well capturing the response properties of neurons in deeper brain areas, owing in part to the fact that they do not take into account multiple stages of processing. Here we introduce a new twist on the point-process modeling approach: we include unobserved as well as observed spiking neurons in a joint encoding model. The resulting model exhibits richer dynamics and more highly nonlinear response properties, making it more powerful and more flexible for fitting neural data. More importantly, it allows us to estimate connectivity patterns among neurons (both observed and unobserved), and may provide insight into how networks process sensory input. We formulate the estimation procedure using variational EM and the wake-sleep algorithm, and illustrate the model’s performance using a simulated example network consisting of two coupled neurons.

2 0.38082916 33 nips-2007-Bayesian Inference for Spiking Neuron Models with a Sparsity Prior

Author: Sebastian Gerwinn, Matthias Bethge, Jakob H. Macke, Matthias Seeger

Abstract: Generalized linear models are the most commonly used tools to describe the stimulus selectivity of sensory neurons. Here we present a Bayesian treatment of such models. Using the expectation propagation algorithm, we are able to approximate the full posterior distribution over all weights. In addition, we use a Laplacian prior to favor sparse solutions. Therefore, stimulus features that do not critically influence neural activity will be assigned zero weights and thus be effectively excluded by the model. This feature selection mechanism facilitates both the interpretation of the neuron model as well as its predictive abilities. The posterior distribution can be used to obtain confidence intervals which makes it possible to assess the statistical significance of the solution. In neural data analysis, the available amount of experimental measurements is often limited whereas the parameter space is large. In such a situation, both regularization by a sparsity prior and uncertainty estimates for the model parameters are essential. We apply our method to multi-electrode recordings of retinal ganglion cells and use our uncertainty estimate to test the statistical significance of functional couplings between neurons. Furthermore we used the sparsity of the Laplace prior to select those filters from a spike-triggered covariance analysis that are most informative about the neural response. 1

3 0.28331333 164 nips-2007-Receptive Fields without Spike-Triggering

Author: Guenther Zeck, Matthias Bethge, Jakob H. Macke

Abstract: Stimulus selectivity of sensory neurons is often characterized by estimating their receptive field properties such as orientation selectivity. Receptive fields are usually derived from the mean (or covariance) of the spike-triggered stimulus ensemble. This approach treats each spike as an independent message but does not take into account that information might be conveyed through patterns of neural activity that are distributed across space or time. Can we find a concise description for the processing of a whole population of neurons analogous to the receptive field for single neurons? Here, we present a generalization of the linear receptive field which is not bound to be triggered on individual spikes but can be meaningfully linked to distributed response patterns. More precisely, we seek to identify those stimulus features and the corresponding patterns of neural activity that are most reliably coupled. We use an extension of reverse-correlation methods based on canonical correlation analysis. The resulting population receptive fields span the subspace of stimuli that is most informative about the population response. We evaluate our approach using both neuronal models and multi-electrode recordings from rabbit retinal ganglion cells. We show how the model can be extended to capture nonlinear stimulus-response relationships using kernel canonical correlation analysis, which makes it possible to test different coding mechanisms. Our technique can also be used to calculate receptive fields from multi-dimensional neural measurements such as those obtained from dynamic imaging methods. 1

4 0.26151755 177 nips-2007-Simplified Rules and Theoretical Analysis for Information Bottleneck Optimization and PCA with Spiking Neurons

Author: Lars Buesing, Wolfgang Maass

Abstract: We show that under suitable assumptions (primarily linearization) a simple and perspicuous online learning rule for Information Bottleneck optimization with spiking neurons can be derived. This rule performs on common benchmark tasks as well as a rather complex rule that has previously been proposed [1]. Furthermore, the transparency of this new learning rule makes a theoretical analysis of its convergence properties feasible. A variation of this learning rule (with sign changes) provides a theoretically founded method for performing Principal Component Analysis (PCA) with spiking neurons. By applying this rule to an ensemble of neurons, different principal components of the input can be extracted. In addition, it is possible to preferentially extract those principal components from incoming signals X that are related or are not related to some additional target signal YT . In a biological interpretation, this target signal YT (also called relevance variable) could represent proprioceptive feedback, input from other sensory modalities, or top-down signals. 1

5 0.25493959 36 nips-2007-Better than least squares: comparison of objective functions for estimating linear-nonlinear models

Author: Tatyana Sharpee

Abstract: This paper compares a family of methods for characterizing neural feature selectivity with natural stimuli in the framework of the linear-nonlinear model. In this model, the neural firing rate is a nonlinear function of a small number of relevant stimulus components. The relevant stimulus dimensions can be found by maximizing one of the family of objective functions, Rényi divergences of different orders [1, 2]. We show that maximizing one of them, Rényi divergence of order 2, is equivalent to least-square fitting of the linear-nonlinear model to neural data. Next, we derive reconstruction errors in relevant dimensions found by maximizing Rényi divergences of arbitrary order in the asymptotic limit of large spike numbers. We find that the smallest errors are obtained with Rényi divergence of order 1, also known as Kullback–Leibler divergence. This corresponds to finding relevant dimensions by maximizing mutual information [2]. We numerically test how these optimization schemes perform in the regime of low signal-to-noise ratio (small number of spikes and increasing neural noise) for model visual neurons. We find that optimization schemes based on either least square fitting or information maximization perform well even when the number of spikes is small. Information maximization provides slightly, but significantly, better reconstructions than least square fitting. This makes the problem of finding relevant dimensions, together with the problem of lossy compression [3], one of the examples where information-theoretic measures are no more data limited than those derived from least squares. 1

6 0.22873992 17 nips-2007-A neural network implementing optimal state estimation based on dynamic spike train decoding

7 0.21198839 104 nips-2007-Inferring Neural Firing Rates from Spike Trains Using Gaussian Processes

8 0.18083164 182 nips-2007-Sparse deep belief net model for visual area V2

9 0.1705292 60 nips-2007-Contraction Properties of VLSI Cooperative Competitive Neural Networks of Spiking Neurons

10 0.1633129 205 nips-2007-Theoretical Analysis of Learning with Reward-Modulated Spike-Timing-Dependent Plasticity

11 0.1487256 117 nips-2007-Learning to classify complex patterns using a VLSI network of spiking neurons

12 0.12787089 35 nips-2007-Bayesian binning beats approximate alternatives: estimating peri-stimulus time histograms

13 0.10394513 81 nips-2007-Estimating disparity with confidence from energy neurons

14 0.10128959 14 nips-2007-A configurable analog VLSI neural network with spiking neurons and self-regulating plastic synapses

15 0.091391161 25 nips-2007-An in-silico Neural Model of Dynamic Routing through Neuronal Coherence

16 0.085001536 63 nips-2007-Convex Relaxations of Latent Variable Training

17 0.084881149 111 nips-2007-Learning Horizontal Connections in a Sparse Coding Model of Natural Images

18 0.081543542 213 nips-2007-Variational Inference for Diffusion Processes

19 0.081192903 103 nips-2007-Inferring Elapsed Time from Stochastic Neural Processes

20 0.079580233 146 nips-2007-On higher-order perceptron algorithms


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.273), (1, 0.194), (2, 0.474), (3, 0.038), (4, 0.04), (5, -0.122), (6, 0.029), (7, -0.021), (8, 0.006), (9, -0.013), (10, 0.075), (11, 0.029), (12, 0.035), (13, 0.002), (14, 0.027), (15, 0.024), (16, 0.05), (17, 0.135), (18, 0.139), (19, 0.171), (20, -0.001), (21, -0.03), (22, -0.058), (23, 0.126), (24, 0.014), (25, -0.007), (26, 0.076), (27, -0.017), (28, 0.017), (29, 0.043), (30, -0.063), (31, 0.0), (32, -0.08), (33, -0.043), (34, 0.02), (35, 0.04), (36, -0.005), (37, 0.093), (38, -0.03), (39, 0.048), (40, 0.046), (41, -0.024), (42, 0.024), (43, -0.003), (44, -0.021), (45, -0.027), (46, 0.065), (47, -0.017), (48, 0.053), (49, 0.022)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.96597826 140 nips-2007-Neural characterization in partially observed populations of spiking neurons

Author: Jonathan W. Pillow, Peter E. Latham

Abstract: Point process encoding models provide powerful statistical methods for understanding the responses of neurons to sensory stimuli. Although these models have been successfully applied to neurons in the early sensory pathway, they have fared less well capturing the response properties of neurons in deeper brain areas, owing in part to the fact that they do not take into account multiple stages of processing. Here we introduce a new twist on the point-process modeling approach: we include unobserved as well as observed spiking neurons in a joint encoding model. The resulting model exhibits richer dynamics and more highly nonlinear response properties, making it more powerful and more flexible for fitting neural data. More importantly, it allows us to estimate connectivity patterns among neurons (both observed and unobserved), and may provide insight into how networks process sensory input. We formulate the estimation procedure using variational EM and the wake-sleep algorithm, and illustrate the model’s performance using a simulated example network consisting of two coupled neurons.

2 0.8906607 33 nips-2007-Bayesian Inference for Spiking Neuron Models with a Sparsity Prior

Author: Sebastian Gerwinn, Matthias Bethge, Jakob H. Macke, Matthias Seeger

Abstract: Generalized linear models are the most commonly used tools to describe the stimulus selectivity of sensory neurons. Here we present a Bayesian treatment of such models. Using the expectation propagation algorithm, we are able to approximate the full posterior distribution over all weights. In addition, we use a Laplacian prior to favor sparse solutions. Therefore, stimulus features that do not critically influence neural activity will be assigned zero weights and thus be effectively excluded by the model. This feature selection mechanism facilitates both the interpretation of the neuron model as well as its predictive abilities. The posterior distribution can be used to obtain confidence intervals which makes it possible to assess the statistical significance of the solution. In neural data analysis, the available amount of experimental measurements is often limited whereas the parameter space is large. In such a situation, both regularization by a sparsity prior and uncertainty estimates for the model parameters are essential. We apply our method to multi-electrode recordings of retinal ganglion cells and use our uncertainty estimate to test the statistical significance of functional couplings between neurons. Furthermore we used the sparsity of the Laplace prior to select those filters from a spike-triggered covariance analysis that are most informative about the neural response. 1

3 0.85582495 164 nips-2007-Receptive Fields without Spike-Triggering

Author: Guenther Zeck, Matthias Bethge, Jakob H. Macke

Abstract: Stimulus selectivity of sensory neurons is often characterized by estimating their receptive field properties such as orientation selectivity. Receptive fields are usually derived from the mean (or covariance) of the spike-triggered stimulus ensemble. This approach treats each spike as an independent message but does not take into account that information might be conveyed through patterns of neural activity that are distributed across space or time. Can we find a concise description for the processing of a whole population of neurons analogous to the receptive field for single neurons? Here, we present a generalization of the linear receptive field which is not bound to be triggered on individual spikes but can be meaningfully linked to distributed response patterns. More precisely, we seek to identify those stimulus features and the corresponding patterns of neural activity that are most reliably coupled. We use an extension of reverse-correlation methods based on canonical correlation analysis. The resulting population receptive fields span the subspace of stimuli that is most informative about the population response. We evaluate our approach using both neuronal models and multi-electrode recordings from rabbit retinal ganglion cells. We show how the model can be extended to capture nonlinear stimulus-response relationships using kernel canonical correlation analysis, which makes it possible to test different coding mechanisms. Our technique can also be used to calculate receptive fields from multi-dimensional neural measurements such as those obtained from dynamic imaging methods. 1

4 0.82038981 36 nips-2007-Better than least squares: comparison of objective functions for estimating linear-nonlinear models

Author: Tatyana Sharpee

Abstract: This paper compares a family of methods for characterizing neural feature selectivity with natural stimuli in the framework of the linear-nonlinear model. In this model, the neural firing rate is a nonlinear function of a small number of relevant stimulus components. The relevant stimulus dimensions can be found by maximizing one of the family of objective functions, Rényi divergences of different orders [1, 2]. We show that maximizing one of them, Rényi divergence of order 2, is equivalent to least-square fitting of the linear-nonlinear model to neural data. Next, we derive reconstruction errors in relevant dimensions found by maximizing Rényi divergences of arbitrary order in the asymptotic limit of large spike numbers. We find that the smallest errors are obtained with Rényi divergence of order 1, also known as Kullback–Leibler divergence. This corresponds to finding relevant dimensions by maximizing mutual information [2]. We numerically test how these optimization schemes perform in the regime of low signal-to-noise ratio (small number of spikes and increasing neural noise) for model visual neurons. We find that optimization schemes based on either least square fitting or information maximization perform well even when the number of spikes is small. Information maximization provides slightly, but significantly, better reconstructions than least square fitting. This makes the problem of finding relevant dimensions, together with the problem of lossy compression [3], one of the examples where information-theoretic measures are no more data limited than those derived from least squares. 1

5 0.64058352 177 nips-2007-Simplified Rules and Theoretical Analysis for Information Bottleneck Optimization and PCA with Spiking Neurons

Author: Lars Buesing, Wolfgang Maass

Abstract: We show that under suitable assumptions (primarily linearization) a simple and perspicuous online learning rule for Information Bottleneck optimization with spiking neurons can be derived. This rule performs on common benchmark tasks as well as a rather complex rule that has previously been proposed [1]. Furthermore, the transparency of this new learning rule makes a theoretical analysis of its convergence properties feasible. A variation of this learning rule (with sign changes) provides a theoretically founded method for performing Principal Component Analysis (PCA) with spiking neurons. By applying this rule to an ensemble of neurons, different principal components of the input can be extracted. In addition, it is possible to preferentially extract those principal components from incoming signals X that are related or are not related to some additional target signal YT . In a biological interpretation, this target signal YT (also called relevance variable) could represent proprioceptive feedback, input from other sensory modalities, or top-down signals. 1

6 0.62921554 35 nips-2007-Bayesian binning beats approximate alternatives: estimating peri-stimulus time histograms

7 0.62555748 60 nips-2007-Contraction Properties of VLSI Cooperative Competitive Neural Networks of Spiking Neurons

8 0.61754674 81 nips-2007-Estimating disparity with confidence from energy neurons

9 0.58682358 104 nips-2007-Inferring Neural Firing Rates from Spike Trains Using Gaussian Processes

10 0.58329761 17 nips-2007-A neural network implementing optimal state estimation based on dynamic spike train decoding

11 0.56833911 205 nips-2007-Theoretical Analysis of Learning with Reward-Modulated Spike-Timing-Dependent Plasticity

12 0.44618869 182 nips-2007-Sparse deep belief net model for visual area V2

13 0.40393272 138 nips-2007-Near-Maximum Entropy Models for Binary Neural Representations of Natural Images

14 0.39744517 117 nips-2007-Learning to classify complex patterns using a VLSI network of spiking neurons

15 0.37599421 25 nips-2007-An in-silico Neural Model of Dynamic Routing through Neuronal Coherence

16 0.37392691 26 nips-2007-An online Hebbian learning rule that performs Independent Component Analysis

17 0.36922035 130 nips-2007-Modeling Natural Sounds with Modulation Cascade Processes

18 0.31703448 28 nips-2007-Augmented Functional Time Series Representation and Forecasting with Gaussian Processes

19 0.3014425 111 nips-2007-Learning Horizontal Connections in a Sparse Coding Model of Natural Images

20 0.29091999 203 nips-2007-The rat as particle filter


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(5, 0.064), (13, 0.036), (16, 0.185), (18, 0.024), (19, 0.013), (21, 0.056), (34, 0.034), (35, 0.03), (46, 0.2), (47, 0.071), (83, 0.096), (85, 0.012), (87, 0.016), (90, 0.082)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.83853221 81 nips-2007-Estimating disparity with confidence from energy neurons

Author: Eric K. Tsang, Bertram E. Shi

Abstract: The peak location in a population of phase-tuned neurons has been shown to be a more reliable estimator for disparity than the peak location in a population of position-tuned neurons. Unfortunately, the disparity range covered by a phasetuned population is limited by phase wraparound. Thus, a single population cannot cover the large range of disparities encountered in natural scenes unless the scale of the receptive fields is chosen to be very large, which results in very low resolution depth estimates. Here we describe a biologically plausible measure of the confidence that the stimulus disparity is inside the range covered by a population of phase-tuned neurons. Based upon this confidence measure, we propose an algorithm for disparity estimation that uses many populations of high-resolution phase-tuned neurons that are biased to different disparity ranges via position shifts between the left and right eye receptive fields. The population with the highest confidence is used to estimate the stimulus disparity. We show that this algorithm outperforms a previously proposed coarse-to-fine algorithm for disparity estimation, which uses disparity estimates from coarse scales to select the populations used at finer scales and can effectively detect occlusions.

same-paper 2 0.82993197 140 nips-2007-Neural characterization in partially observed populations of spiking neurons

Author: Jonathan W. Pillow, Peter E. Latham

Abstract: Point process encoding models provide powerful statistical methods for understanding the responses of neurons to sensory stimuli. Although these models have been successfully applied to neurons in the early sensory pathway, they have fared less well capturing the response properties of neurons in deeper brain areas, owing in part to the fact that they do not take into account multiple stages of processing. Here we introduce a new twist on the point-process modeling approach: we include unobserved as well as observed spiking neurons in a joint encoding model. The resulting model exhibits richer dynamics and more highly nonlinear response properties, making it more powerful and more flexible for fitting neural data. More importantly, it allows us to estimate connectivity patterns among neurons (both observed and unobserved), and may provide insight into how networks process sensory input. We formulate the estimation procedure using variational EM and the wake-sleep algorithm, and illustrate the model’s performance using a simulated example network consisting of two coupled neurons.

3 0.75873417 33 nips-2007-Bayesian Inference for Spiking Neuron Models with a Sparsity Prior

Author: Sebastian Gerwinn, Matthias Bethge, Jakob H. Macke, Matthias Seeger

Abstract: Generalized linear models are the most commonly used tools to describe the stimulus selectivity of sensory neurons. Here we present a Bayesian treatment of such models. Using the expectation propagation algorithm, we are able to approximate the full posterior distribution over all weights. In addition, we use a Laplacian prior to favor sparse solutions. Therefore, stimulus features that do not critically influence neural activity will be assigned zero weights and thus be effectively excluded by the model. This feature selection mechanism facilitates both the interpretation of the neuron model as well as its predictive abilities. The posterior distribution can be used to obtain confidence intervals which makes it possible to assess the statistical significance of the solution. In neural data analysis, the available amount of experimental measurements is often limited whereas the parameter space is large. In such a situation, both regularization by a sparsity prior and uncertainty estimates for the model parameters are essential. We apply our method to multi-electrode recordings of retinal ganglion cells and use our uncertainty estimate to test the statistical significance of functional couplings between neurons. Furthermore we used the sparsity of the Laplace prior to select those filters from a spike-triggered covariance analysis that are most informative about the neural response. 1

4 0.75390357 170 nips-2007-Robust Regression with Twinned Gaussian Processes

Author: Andrew Naish-guzman, Sean Holden

Abstract: We propose a Gaussian process (GP) framework for robust inference in which a GP prior on the mixing weights of a two-component noise model augments the standard process over latent function values. This approach is a generalization of the mixture likelihood used in traditional robust GP regression, and a specialization of the GP mixture models suggested by Tresp [1] and Rasmussen and Ghahramani [2]. The value of this restriction is in its tractable expectation propagation updates, which allow for faster inference and model selection, and better convergence than the standard mixture. An additional benefit over the latter method lies in our ability to incorporate knowledge of the noise domain to influence predictions, and to recover with the predictive distribution information about the outlier distribution via the gating process. The model has asymptotic complexity equal to that of conventional robust methods, but yields more confident predictions on benchmark problems than classical heavy-tailed models and exhibits improved stability for data with clustered corruptions, for which they fail altogether. We show further how our approach can be used without adjustment for more smoothly heteroscedastic data, and suggest how it could be extended to more general noise models. We also address similarities with the work of Goldberg et al. [3].

5 0.7536599 108 nips-2007-Kernel Measures of Conditional Dependence

Author: Kenji Fukumizu, Arthur Gretton, Xiaohai Sun, Bernhard Schölkopf

Abstract: We propose a new measure of conditional dependence of random variables, based on normalized cross-covariance operators on reproducing kernel Hilbert spaces. Unlike previous kernel dependence measures, the proposed criterion does not depend on the choice of kernel in the limit of infinite data, for a wide class of kernels. At the same time, it has a straightforward empirical estimate with good convergence behaviour. We discuss the theoretical properties of the measure, and demonstrate its application in experiments. 1

6 0.7536459 60 nips-2007-Contraction Properties of VLSI Cooperative Competitive Neural Networks of Spiking Neurons

7 0.74630916 206 nips-2007-Topmoumoute Online Natural Gradient Algorithm

8 0.73933423 199 nips-2007-The Price of Bandit Information for Online Optimization

9 0.6880787 36 nips-2007-Better than least squares: comparison of objective functions for estimating linear-nonlinear models

10 0.68310398 104 nips-2007-Inferring Neural Firing Rates from Spike Trains Using Gaussian Processes

11 0.65729427 195 nips-2007-The Generalized FITC Approximation

12 0.6448822 164 nips-2007-Receptive Fields without Spike-Triggering

13 0.63318712 189 nips-2007-Supervised Topic Models

14 0.6133796 205 nips-2007-Theoretical Analysis of Learning with Reward-Modulated Spike-Timing-Dependent Plasticity

15 0.60181898 79 nips-2007-Efficient multiple hyperparameter learning for log-linear models

16 0.59409887 177 nips-2007-Simplified Rules and Theoretical Analysis for Information Bottleneck Optimization and PCA with Spiking Neurons

17 0.59196895 87 nips-2007-Fast Variational Inference for Large-scale Internet Diagnosis

18 0.58991307 35 nips-2007-Bayesian binning beats approximate alternatives: estimating peri-stimulus time histograms

19 0.58582038 117 nips-2007-Learning to classify complex patterns using a VLSI network of spiking neurons

20 0.58421427 138 nips-2007-Near-Maximum Entropy Models for Binary Neural Representations of Natural Images