nips nips2011 nips2011-183 knowledge-graph by maker-knowledge-mining

183 nips-2011-Neural Reconstruction with Approximate Message Passing (NeuRAMP)


Source: pdf

Author: Alyson K. Fletcher, Sundeep Rangan, Lav R. Varshney, Aniruddha Bhargava

Abstract: Many functional descriptions of spiking neurons assume a cascade structure where inputs are passed through an initial linear filtering stage that produces a low-dimensional signal that drives subsequent nonlinear stages. This paper presents a novel and systematic parameter estimation procedure for such models and applies the method to two neural estimation problems: (i) compressed-sensing based neural mapping from multi-neuron excitation, and (ii) estimation of neural receptive fields in sensory neurons. The proposed estimation algorithm models the neurons via a graphical model and then estimates the parameters in the model using a recently-developed generalized approximate message passing (GAMP) method. The GAMP method is based on Gaussian approximations of loopy belief propagation. In the neural connectivity problem, the GAMP-based method is shown to be computationally efficient, provides a more exact modeling of the sparsity, can incorporate nonlinearities in the output and significantly outperforms previous compressed-sensing methods. For the receptive field estimation, the GAMP method can also exploit inherent structured sparsity in the linear weights. The method is validated on estimation of linear nonlinear Poisson (LNP) cascade models for receptive fields of salamander retinal ganglion cells. 1

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Many functional descriptions of spiking neurons assume a cascade structure where inputs are passed through an initial linear filtering stage that produces a low-dimensional signal that drives subsequent nonlinear stages. [sent-10, score-0.449]

2 This paper presents a novel and systematic parameter estimation procedure for such models and applies the method to two neural estimation problems: (i) compressed-sensing based neural mapping from multi-neuron excitation, and (ii) estimation of neural receptive fields in sensory neurons. [sent-11, score-0.569]

3 The proposed estimation algorithm models the neurons via a graphical model and then estimates the parameters in the model using a recently-developed generalized approximate message passing (GAMP) method. [sent-12, score-0.292]

4 The GAMP method is based on Gaussian approximations of loopy belief propagation. [sent-13, score-0.111]

5 In the neural connectivity problem, the GAMP-based method is shown to be computationally efficient, provides a more exact modeling of the sparsity, can incorporate nonlinearities in the output and significantly outperforms previous compressed-sensing methods. [sent-14, score-0.169]

6 For the receptive field estimation, the GAMP method can also exploit inherent structured sparsity in the linear weights. [sent-15, score-0.246]

7 The method is validated on estimation of linear nonlinear Poisson (LNP) cascade models for receptive fields of salamander retinal ganglion cells. [sent-16, score-0.652]

8 1 Introduction. Fundamental to describing the behavior of neurons in response to sensory stimuli or to inputs from other neurons is the need for succinct models that can be estimated and validated with limited data. [sent-17, score-0.29]

9 Towards this end, many functional models assume a cascade structure where an initial linear stage combines inputs to produce a low-dimensional output for subsequent nonlinear stages. [sent-18, score-0.286]

10 The linear filtering stage in these models reduces the dimensionality of the parameter estimation problem and provides a simple characterization of a neuron’s receptive field or connectivity. [sent-21, score-0.33]

11 However, even with the dimensionality reduction from assuming such linear stages, parameter estimation may be difficult when the stimulus is high-dimensional or the filter lengths are large. [sent-22, score-0.22]

12 The key insight is that although most experiments for mapping, say, visual receptive fields, expose the neural system under investigation to a large number of stimulus components, the overwhelming majority of the components do not affect the instantaneous spiking rate of any one particular neuron due to anatomical sparsity [5, 6]. [sent-24, score-0.168] [sent-26, score-0.386]

13 Figure 1: Linear nonlinear Poisson (LNP) model for a neuron with n stimuli. The stimulus components u1[t], . . . , un[t] (e.g. an n-pixel image) pass through linear filters (uj * wj)[t], are summed with Gaussian noise d[t] to form z[t], and a nonlinearity followed by a Poisson spike process produces the spike count y[t]. [sent-25, score-0.334]

15 As a result, the linear weights that model the response to these stimulus components will be sparse; most of the coefficients will be zero. [sent-27, score-0.173]

16 For the retina, the stimulus is typically a large image, whereas the receptive field of any individual neuron is usually only a small portion of that image. [sent-28, score-0.391]

17 Similarly, for mapping cortical connectivity to determine the connectome, each neuron is typically only connected to a small fraction of the neurons under test [7]. [sent-29, score-0.427]

18 Due to the sparsity of the weights, estimation can be performed via sparse reconstruction techniques similar to those used in compressed sensing (CS) [8–10]. [sent-30, score-0.361]

19 This paper presents a CS-based estimation of linear neuronal weights via the recently-developed generalized approximate message passing (GAMP) methods from [11] and [12]. [sent-31, score-0.208]

20 GAMP, which builds upon earlier work in [13, 14], is a Gaussian approximation of loopy belief propagation. [sent-32, score-0.111]

21 The benefits of the GAMP method for neural mapping are that it is computationally tractable with large amounts of data, can incorporate very general graphical-model descriptions of the neuron, and provides a method for simultaneously estimating the parameters in the linear and nonlinear stages. [sent-33, score-0.333]

22 In contrast, methods such as the common spike-triggered average (STA) perform separate estimation of the linear and nonlinear components. [sent-34, score-0.207]

23 Following the simulation methodology in [4], we show that the GAMP method offers significantly improved reconstruction of cortical wiring diagrams over other state-of-the-art CS techniques. [sent-35, score-0.144]

24 We also validate the GAMP-based sparse estimation methodology in the problem of fitting LNP models of salamander RGCs. [sent-36, score-0.317]

25 This feature suggests that GAMP-based sparse modeling may be useful in the future for other neurons and more complex models. [sent-41, score-0.192]

26 1 Linear Nonlinear Poisson Model: Mathematical Model. We consider the following simple LNP model for the spiking output of a single neuron under n stimulus components, shown in Fig. 1. [sent-43, score-0.336]

27 Time is divided into intervals t = 0, . . . , T − 1, and we let uj[t] denote the jth stimulus input in the tth time interval, j = 1, . . . , n. [sent-49, score-0.285]

28 For example, if the stimulus is a sequence of images, n would be the number of pixels in each image and uj[t] would be the value of the jth pixel over time. [sent-53, score-0.289]

29 We let y[t] denote the number of spikes in the tth time interval, and the general problem is to find a model that explains the relation between the stimuli uj[t] and the spike outputs y[t]. [sent-54, score-0.328]

30 As the name suggests, the LNP model is a cascade of three stages: linear, nonlinear and Poisson. [sent-55, score-0.135]

31 In the second (nonlinear) stage of the LNP model, the scalar linear output z[t] passes through a memoryless nonlinear random function to produce a spike rate λ[t]. [sent-61, score-0.333]

32 The basic problem is to estimate the parameters θ from the input/output data uj[t] and y[t]. [sent-72, score-0.143]

33 The vector z of linear filter outputs z[t] in (1) can be written as z = Aw, where A is a known block Toeplitz matrix formed from the input data uj[t]. [sent-75, score-0.176]
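
To make the cascade concrete, here is a minimal simulation sketch of the LNP model of Fig. 1, including the block Toeplitz matrix A built from the stimulus. It is illustrative only: all sizes are toy values, the weights are drawn densely, and the softplus nonlinearity is an assumption standing in for the paper's output model (8).

```python
import numpy as np

rng = np.random.default_rng(0)
n, L, T = 16, 30, 2000            # toy sizes: stimuli, filter taps, time bins

# Hypothetical binary stimulus u_j[t], e.g. pixels of a random checkerboard
u = rng.integers(0, 2, size=(T, n)).astype(float)

# Block Toeplitz A with columns ordered (stimulus j, lag l): A[t, j*L + l] = u[t-l, j]
A = np.zeros((T, n * L))
for l in range(L):
    A[l:, l::L] = u[: T - l, :]

w = rng.normal(0.0, 0.1, size=n * L)       # filter weights (dense in this toy)
sigma_d = 0.1
z = A @ w + sigma_d * rng.normal(size=T)   # linear stage plus Gaussian noise d[t]

lam = np.log1p(np.exp(z))                  # assumed softplus nonlinearity -> rate
y = rng.poisson(lam)                       # Poisson spike counts y[t]
```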

34 Once the estimate w = wSTA or wRC has been computed, one can compute an estimate z = Aw for the linear output z and then use any scalar estimation method to find a nonlinear mapping from z[t] to λ[t] based on the outputs y[t]. [sent-79, score-0.421]
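
A sketch of this two-step recipe, reusing A and y from the snippet above. The STA and regularized ("RC") estimators shown here are generic stand-ins; the paper's estimators (6) may include whitening or other normalizations.

```python
# Spike-triggered average, up to scaling: correlate regressors with spike counts
w_sta = A.T @ y / max(float(y.sum()), 1.0)

# Ridge-regularized least squares as an assumed "RC"-style alternative
w_rc = np.linalg.solve(A.T @ A + 1e-2 * np.eye(A.shape[1]), A.T @ y)

# Fit a scalar map z[t] -> rate by binning the linear output and averaging counts
z_hat = A @ w_sta
edges = np.quantile(z_hat, np.linspace(0.0, 1.0, 11))
idx = np.clip(np.digitize(z_hat, edges[1:-1]), 0, 9)
rate_per_bin = np.array([y[idx == k].mean() if np.any(idx == k) else 0.0
                         for k in range(10)])
```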

35 A maximum likelihood (ML) estimate may overcome this problem by jointly optimizing over the nonlinear and linear parameters. [sent-81, score-0.173]

36 In this way, the ML estimate (9) attempts to maximize the goodness of fit by simultaneously searching over the linear and nonlinear parameters. [sent-85, score-0.173]
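
A hedged sketch of such a joint ML fit, reusing A, y and w_sta from the snippets above. The two-parameter softplus output map is an assumed stand-in for the parameterization in (8) and (9), and the numerical-gradient optimizer is practical only at toy sizes.

```python
from scipy.optimize import minimize

def neg_log_lik(theta, A, y):
    # Joint Poisson negative log-likelihood over filter weights w and an
    # assumed output map lam = softplus(a*z + b) with scalar a, b.
    w, a, b = theta[:-2], theta[-2], theta[-1]
    lam = np.log1p(np.exp(a * (A @ w) + b)) + 1e-8
    return float(lam.sum() - y @ np.log(lam))

theta0 = np.concatenate([w_sta, [1.0, 0.0]])   # initialize from the STA estimate
res = minimize(neg_log_lik, theta0, args=(A, y), method="L-BFGS-B")
w_ml = res.x[:-2]
```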

37 As discussed above, the key idea in this work is that most stimulus components have little effect on the spiking output. [sent-88, score-0.188]

38 Most of the filter coefficients wj[ℓ] will be zero, and exploiting this sparsity may reduce the number of measurements needed while maintaining the same estimation accuracy. [sent-89, score-0.189]

39 The sparse nature of the filter coefficients can be modeled with the following group sparsity structure: Let ξj be a binary random variable with ξj = 1 when stimulus j is in the receptive field of the neuron and ξj = 0 when it is not. [sent-90, score-0.546]

40 We call the variables ξj the receptive field indicators, and model these indicators as i.i.d. Bernoulli variables with Pr(ξj = 1) = 1 − Pr(ξj = 0) = ρ, (10) where ρ ∈ [0, 1] is the average fraction of stimuli in the receptive field. [sent-91, score-0.221] [sent-94, score-0.198]

42 We then assume that, given the vector ξ of receptive field indicators, the filter weight coefficients are independent with distribution p(wj[ℓ] | ξ) = p(wj[ℓ] | ξj), where wj[ℓ] = 0 if ξj = 0 and wj[ℓ] ∼ N(0, σx²) if ξj = 1. (11) [sent-95, score-0.302]

43 That is, the linear weight coefficients are zero outside the receptive field and Gaussian within the receptive field. [sent-96, score-0.368]

44 The distribution on w defined by (10) and (11) is often called a group sparse model, since the components of the vector w are zero in groups. [sent-98, score-0.138]
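
Sampling filter weights from the Bernoulli-Gaussian group-sparse prior of (10) and (11) is direct; the sketch below reuses rng, n and L from the earlier snippet.

```python
rho, sigma_x = 0.1, 1.0                 # assumed prior parameters
xi = rng.random(n) < rho                # receptive-field indicators, Bernoulli(rho)
W = rng.normal(0.0, sigma_x, size=(n, L))
W[~xi, :] = 0.0                         # the entire filter is zero outside the field
w_true = W.reshape(-1)                  # stacked to match z = A w
```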

45 Estimation with this sparse structure leads naturally to a compressed sensing problem. [sent-99, score-0.193]

46 Specifically, we are estimating a sparse vector w through a noisy version y of a linear transform z = Aw, which is precisely the problem of compressed sensing [8–10]. [sent-100, score-0.225]

47 With a group structure, one can employ a variety of methods including the group Lasso [19–21] and group orthogonal matching pursuit [22]. [sent-101, score-0.128]
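
For reference, the operation at the core of proximal-gradient solvers for the group Lasso is the group soft-threshold below, which shrinks the norm of each group of L filter taps and zeroes whole groups at once; this is a generic sketch, not the specific solvers of [19–22].

```python
def group_soft_threshold(w, tau, L):
    # Proximal operator of tau * sum_j ||w_j||_2 with groups of L consecutive taps
    W = w.reshape(-1, L)
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return (scale * W).reshape(-1)
```

One proximal-gradient (ISTA) step for the generic objective (1/2)||y − Aw||² + τ Σj ||wj||₂ is then w ← group_soft_threshold(w − η Aᵀ(Aw − y), ητ, L) with step size η ≤ 1/||A||₂².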

48 In the neural model, the spike count y[t] is a nonlinear, random function of the linear output z[t] described by the probability distribution in (8). [sent-103, score-0.194]

49 2 GAMP-Based Sparse Estimation. To address the nonlinearities in the outputs, we use the generalized approximate message passing (GAMP) algorithm [11] with extensions in [12]. [sent-105, score-0.132]

50 To place the neural estimation problem in the GAMP framework, first fix the stimulus input vector u and the nonlinear output parameters α and σd². [sent-107, score-0.365]

51 Then, the conditional joint distribution of the outputs y, linear filter weights w and receptive field indicators ξ factors as p(y, ξ, w | u, α, σd²) = ∏_{j=1}^{n} [ Pr(ξj) ∏_{ℓ=0}^{L−1} p(wj[ℓ] | ξj) ] ∏_{t=0}^{T−1} Pr(y[t] | z[t], α, σd²), with z = Aw. [sent-108, score-0.296]

52 Solid circles are unknown variables, dashed circles are observed variables (in this case, spike counts) and squares are factors in the probability distribution. [sent-110, score-0.156]

53 Inference on graphical models is often performed by some variant of loopy belief propagation (BP). [sent-120, score-0.111]

54 Loopy BP attempts to reduce the joint estimation of all the variables to a sequence of lower dimensional estimation problems associated with each of the factors in the graph. [sent-121, score-0.152]

55 However, exact implementation of loopy BP is intractable for the neural estimation problem: the linear constraints z = Aw create factor nodes that connect each of the variables z[t] to all the variables wj[ℓ] for which uj[t − ℓ] is non-zero. [sent-124, score-0.402]

56 In the RGC experiments below, the pixel values uj[t] are non-zero 50% of the time, so each variable z[t] will be connected, on average, to half of the Ln filter weight coefficients through these factor nodes. [sent-125, score-0.135]

57 Since the cost of exact loopy BP grows exponentially in the degree of the factor nodes, loopy BP would be infeasible for the neural problem, even for moderate values of Ln. [sent-126, score-0.211]

58 The GAMP method reduces the complexity of loopy BP by exploiting the linear nature of the relations between the variables w and z. [sent-127, score-0.117]
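
To show the structure of those iterations, here is a minimal scalar GAMP loop for the simpler setting of an additive-white-Gaussian-noise output channel and an i.i.d. Bernoulli-Gaussian prior. It is a sketch only: the paper's estimator additionally handles the Poisson output channel and the group-sparse prior, and practical GAMP implementations usually add damping.

```python
import numpy as np

def gamp_awgn_bg(A, y, rho, sx2, sv2, n_iter=30):
    """Minimal sum-product GAMP sketch for y = A w + N(0, sv2) with an
    i.i.d. Bernoulli-Gaussian prior on w (assumed simplification)."""
    m, n = A.shape
    A2 = A ** 2
    w_hat = np.zeros(n)
    tau_w = np.full(n, rho * sx2)        # prior variance of each weight
    s = np.zeros(m)
    for _ in range(n_iter):
        # Output (measurement) side: plug-in estimate of z = A w
        tau_p = A2 @ tau_w
        p = A @ w_hat - tau_p * s        # Onsager-corrected mean of z
        z_hat = (tau_p * y + sv2 * p) / (tau_p + sv2)   # AWGN posterior mean
        tau_z = tau_p * sv2 / (tau_p + sv2)
        s = (z_hat - p) / tau_p
        tau_s = (1.0 - tau_z / tau_p) / tau_p
        # Input (prior) side: pseudo-data r = w + N(0, tau_r)
        tau_r = 1.0 / (A2.T @ tau_s)
        r = w_hat + tau_r * (A.T @ s)
        # Exact scalar posterior under the Bernoulli-Gaussian prior
        var_on, var_off = sx2 + tau_r, tau_r
        log_odds = (np.log(rho / (1.0 - rho))
                    + 0.5 * (np.log(var_off / var_on)
                             + r ** 2 * (1.0 / var_off - 1.0 / var_on)))
        pi = 1.0 / (1.0 + np.exp(-np.clip(log_odds, -40.0, 40.0)))
        m_on = (sx2 / (sx2 + tau_r)) * r         # mean if the weight is "on"
        v_on = sx2 * tau_r / (sx2 + tau_r)       # variance if "on"
        w_hat = pi * m_on
        tau_w = pi * (v_on + m_on ** 2) - w_hat ** 2
    return w_hat, pi
```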

59 4 Receptive Fields of Salamander Retinal Ganglion Cells. The sparse LNP model with GAMP-based estimation was evaluated on data from recordings of neural spike trains from salamander retinal ganglion cells exposed to random checkerboard images, following the basic methods of [24]. [sent-130, score-0.616]

60 In the experiment, spikes from individual neurons were measured over an approximately 1900 s period at a sampling interval of 10 ms. [sent-131, score-0.15]

61 During the recordings, the salamander was exposed to 80 × 60 pixel random black-and-white binary images that changed every 3 to 4 sampling intervals. [sent-132, score-0.189]

62 We compared three methods for fitting an L = 30 tap one-dimensional LNP model for the RGC neural responses: (i) truncated STA, (ii) approximate ML, and (iii) GAMP estimation with the sparse LNP model. [sent-137, score-0.228]

63 The truncated STA method was performed by first computing a linear filter estimate as in (6) for the entire 80 × 60 image, and then setting to zero all coefficients outside an 11 × 11 pixel subarea around the pixel with the largest estimated response. [sent-139, score-0.189]

64 The 11 × 11 size was chosen since it is sufficiently large to contain these neurons’ entire receptive fields. [sent-140, score-0.168]

65 From the estimate wSTA of the linear filter coefficients, we compute an estimate z = Aw of the linear filter output. [sent-144, score-0.148]

66 That only a linear polynomial was needed in the output stage is likely because random checkerboard images rarely align with the neuron’s filters and therefore do not excite the neural spiking into a nonlinear regime. [sent-147, score-0.326]

67 We believe that under such experimental conditions, the advantages of the GAMP-based nonlinear estimation would be even larger. [sent-149, score-0.175]

68 The GAMP-based sparse estimation used the STA estimate for initialization, to select the 11 × 11 pixel subarea and the variances σx² in (11). [sent-152, score-0.269]

69 Fig. 3 shows the estimated responses for the STA and GAMP-based sparse LNP estimates for one neuron using three different lengths of training data: 400, 600 and 1000 seconds of the total 1900 seconds of training data. [sent-158, score-0.259]

70 Fig. 3(b) shows the estimated spatial receptive fields plotted as the total magnitude of the 11 × 11 filters. [sent-163, score-0.168]

71 One can immediately see that the GAMP-based sparse estimate is significantly less noisy than the STA estimate, as the smaller, unreliable responses are zeroed out in the GAMP-based sparse LNP estimate. [sent-164, score-0.241]

72 At each training length, each of the three methods (STA, GAMP-based sparse LNP and approximate ML) was used to produce an estimate θ = (w, α, σd²). [sent-168, score-0.118]

73 It can be seen that the GAMP-based sparse LNP estimate significantly outperforms the STA and approximate ML estimates. [sent-171, score-0.118]

74 Figure 4: Prediction accuracy of sparse and non-sparse LNP estimates for data from salamander RGC cells. [sent-174, score-0.2]

75 Based on cross-validation scores, the GAMP-based sparse LNP estimation provides a significantly better estimate for the same amount of training data. [sent-175, score-0.194]

76 Figure 5: Comparison of reconstruction methods on cortical connectome mapping with multi-neuron excitation, based on the simulation model in [4]. [sent-180, score-0.28]

77 In this case, connectivity from n = 500 potential pre-synaptic neurons is estimated from m = 300 measurements, with 40 neurons excited in each measurement. [sent-181, score-0.344]

78 In the simulation, only 6% of the n potential neurons are actually connected to the postsynaptic neuron under test. [sent-182, score-0.261]

79 Indeed, by the measure of the cross-validation score, the sparse LNP estimate with GAMP after only 400 seconds of data was as accurate as the STA estimate with 1000 seconds of data. [sent-188, score-0.21]
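
One plausible way to compute such a cross-validation score is the held-out Poisson log-likelihood per time bin, sketched below; the paper's exact score definition is not reproduced here, so treat this as an assumed stand-in.

```python
import numpy as np
from scipy.special import gammaln

def poisson_cv_score(w_hat, nonlin, A_test, y_test):
    # Average held-out Poisson log-likelihood for a fitted LNP model
    lam = np.maximum(nonlin(A_test @ w_hat), 1e-8)   # predicted rates
    return float(np.mean(y_test * np.log(lam) - lam - gammaln(y_test + 1.0)))
```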

80 5 Neural Mapping via Multi-Neuron Excitation. The GAMP methodology was also applied to neural mapping from multi-neuron excitation, as originally proposed in [4]. [sent-190, score-0.132]

81 A single post-synaptic neuron has connections to n potential pre-synaptic neurons. [sent-191, score-0.111]

82 The standard method to determine which of the n neurons are connected to the postsynaptic neuron is to excite one neuron at a time. [sent-192, score-0.412]

83 This process is wasteful, since only a small fraction of the neurons are typically connected. [sent-193, score-0.116]

84 In the method of [4], multiple neurons are excited in each measurement. [sent-194, score-0.169]

85 Then, exploiting the sparsity in the connectivity, compressed sensing techniques can be used to recover the mapping from m < n measurements. [sent-195, score-0.213]
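
A sketch of this measurement design under the stated simulation assumptions (n = 500 candidate pre-synaptic neurons, m = 300 measurements, 40 neurons excited per measurement, about 6% connectivity); the softplus-Poisson output is again an assumed stand-in for the nonlinear spiking output.

```python
import numpy as np

rng = np.random.default_rng(1)
n_pre, m_meas, k_exc = 500, 300, 40
U = np.zeros((m_meas, n_pre))
for t in range(m_meas):
    U[t, rng.choice(n_pre, size=k_exc, replace=False)] = 1.0   # excited subset

# Sparse ground-truth connectivity: ~6% of candidates actually connected
w_conn = (rng.random(n_pre) < 0.06) * rng.normal(1.0, 0.2, size=n_pre)
y_meas = rng.poisson(np.log1p(np.exp(U @ w_conn)))             # assumed output model
```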

86 Unfortunately, the output stage of spiking neurons is often nonlinear and most CS methods cannot directly incorporate such nonlinearities into the estimation. [sent-196, score-0.385]

87 To validate the methodology, we compared the performance of GAMP to various reconstruction methods following a simulation of mapping of cortical neurons with multi-neuron excitation in [4]. [sent-198, score-0.349]

88 The measurements follow the model of Fig. 1, where the inputs uj[t] are 1 or 0 depending on whether the jth pre-synaptic input is excited in the tth measurement. [sent-200, score-0.254]

89 Each weight has probability ρ = 0.06 of being on (the neuron is connected) or 1 − ρ of being zero (the neuron is not connected). [sent-204, score-0.222]

90 Connectivity detection amounts to determining which of the n pre-synaptic neurons have non-zero weights. [sent-206, score-0.116]
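
Given any estimate w_hat_conn of the connection weights (a hypothetical name; a noisy stand-in is used here, reusing rng and w_conn from the snippet above), the missed-detection and false-alarm probabilities plotted in Fig. 5 can be computed as below; with GAMP one can instead threshold the posterior on-probabilities.

```python
w_hat_conn = w_conn + 0.05 * rng.normal(size=n_pre)   # stand-in estimate, for illustration
detected = np.abs(w_hat_conn) > 0.5                    # arbitrary detection threshold
p_md = float(np.mean(~detected[w_conn != 0]))          # missed-detection probability
p_fa = float(np.mean(detected[w_conn == 0]))           # false-alarm probability
```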

91 It can be seen that the GAMP-based connectivity detection significantly outperforms both non-sparse RC reconstruction and the state-of-the-art greedy sparse method CoSaMP [26, 27]. [sent-210, score-0.181]

92 6 Conclusions and Future Work. A general method for parameter estimation in neural models based on generalized approximate message passing was presented. [sent-211, score-0.217]

93 The GAMP methodology is computationally tractable for large data sets, can exploit sparsity in the linear coefficients and can incorporate a wide range of nonlinear modeling complexities in a systematic manner. [sent-212, score-0.218]

94 Experimental validation of the GAMP-based estimation of a sparse LNP model for salamander RGC cells shows significantly improved prediction in cross-validation over simple non-sparse estimation methods such as STA. [sent-213, score-0.39]

95 Benefits over state-of-the-art sparse reconstruction methods are also apparent in simulated models of cortical mapping with multi-neuron excitation. [sent-214, score-0.229]

96 Going forward, the generality offered by the GAMP model will enable accurate parameter estimation for other complex neural models. [sent-215, score-0.117]

97 An exciting future possibility for cortical mapping is to decode memories, which are thought to be stored as the connectome [7, 28]. [sent-218, score-0.154]

98 There have been several previous suggestions that visual and general cortical regions of the brain may use belief propagation-like algorithms [29, 30]. [sent-221, score-0.145]

99 As such, we assert the biological plausibility of the brain itself using the algorithms presented herein for receptive field and memory decoding. [sent-223, score-0.203]

100 Random sparse linear systems observed via arbitrary channels: A decoupling principle. [sent-319, score-0.108]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('gamp', 0.531), ('lnp', 0.429), ('sta', 0.389), ('receptive', 0.168), ('salamander', 0.124), ('neurons', 0.116), ('stimulus', 0.112), ('lter', 0.112), ('neuron', 0.111), ('uj', 0.101), ('nonlinear', 0.099), ('ml', 0.093), ('rgc', 0.089), ('rc', 0.085), ('loopy', 0.085), ('spike', 0.084), ('excitation', 0.08), ('sparse', 0.076), ('aw', 0.076), ('estimation', 0.076), ('wj', 0.067), ('pr', 0.065), ('cients', 0.064), ('compressed', 0.062), ('ganglion', 0.061), ('poisson', 0.059), ('connectivity', 0.059), ('cortical', 0.057), ('coef', 0.057), ('retinal', 0.056), ('bp', 0.056), ('sensing', 0.055), ('stage', 0.054), ('dmitri', 0.053), ('excited', 0.053), ('wsta', 0.053), ('indicators', 0.053), ('message', 0.052), ('mapping', 0.05), ('passing', 0.048), ('responses', 0.047), ('spiking', 0.047), ('connectome', 0.047), ('reconstruction', 0.046), ('sparsity', 0.046), ('eld', 0.044), ('outputs', 0.043), ('cosamp', 0.043), ('estimate', 0.042), ('methodology', 0.041), ('february', 0.041), ('neural', 0.041), ('pixel', 0.04), ('cells', 0.038), ('compressive', 0.038), ('passed', 0.037), ('output', 0.037), ('tth', 0.036), ('cascade', 0.036), ('jth', 0.036), ('circles', 0.036), ('alyson', 0.035), ('aniruddha', 0.035), ('checkerboard', 0.035), ('excite', 0.035), ('lav', 0.035), ('leonardo', 0.035), ('rangan', 0.035), ('rgcs', 0.035), ('subarea', 0.035), ('tap', 0.035), ('varshney', 0.035), ('wrc', 0.035), ('brain', 0.035), ('non', 0.035), ('spikes', 0.034), ('connected', 0.034), ('retina', 0.033), ('group', 0.033), ('linear', 0.032), ('nonlinearities', 0.032), ('taps', 0.031), ('stimuli', 0.03), ('filter', 0.029), ('components', 0.029), ('pursuit', 0.029), ('markus', 0.029), ('chklovskii', 0.029), ('fletcher', 0.029), ('lters', 0.028), ('inputs', 0.028), ('suggestions', 0.027), ('eero', 0.027), ('alarm', 0.027), ('memoryless', 0.027), ('cs', 0.027), ('belief', 0.026), ('seconds', 0.025), ('elds', 0.025), ('exposed', 0.025)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000001 183 nips-2011-Neural Reconstruction with Approximate Message Passing (NeuRAMP)

Author: Alyson K. Fletcher, Sundeep Rangan, Lav R. Varshney, Aniruddha Bhargava

Abstract: Many functional descriptions of spiking neurons assume a cascade structure where inputs are passed through an initial linear filtering stage that produces a low-dimensional signal that drives subsequent nonlinear stages. This paper presents a novel and systematic parameter estimation procedure for such models and applies the method to two neural estimation problems: (i) compressed-sensing based neural mapping from multi-neuron excitation, and (ii) estimation of neural receptive fields in sensory neurons. The proposed estimation algorithm models the neurons via a graphical model and then estimates the parameters in the model using a recently-developed generalized approximate message passing (GAMP) method. The GAMP method is based on Gaussian approximations of loopy belief propagation. In the neural connectivity problem, the GAMP-based method is shown to be computationally efficient, provides a more exact modeling of the sparsity, can incorporate nonlinearities in the output and significantly outperforms previous compressed-sensing methods. For the receptive field estimation, the GAMP method can also exploit inherent structured sparsity in the linear weights. The method is validated on estimation of linear nonlinear Poisson (LNP) cascade models for receptive fields of salamander retinal ganglion cells. 1

2 0.3450703 44 nips-2011-Bayesian Spike-Triggered Covariance Analysis

Author: Jonathan W. Pillow, Il M. Park

Abstract: Neurons typically respond to a restricted number of stimulus features within the high-dimensional space of natural stimuli. Here we describe an explicit model-based interpretation of traditional estimators for a neuron’s multi-dimensional feature space, which allows for several important generalizations and extensions. First, we show that traditional estimators based on the spike-triggered average (STA) and spike-triggered covariance (STC) can be formalized in terms of the “expected log-likelihood” of a Linear-Nonlinear-Poisson (LNP) model with Gaussian stimuli. This model-based formulation allows us to define maximum-likelihood and Bayesian estimators that are statistically consistent and efficient in a wider variety of settings, such as with naturalistic (non-Gaussian) stimuli. It also allows us to employ Bayesian methods for regularization, smoothing, sparsification, and model comparison, and provides Bayesian confidence intervals on model parameters. We describe an empirical Bayes method for selecting the number of features, and extend the model to accommodate an arbitrary elliptical nonlinear response function, which results in a more powerful and more flexible model for feature space inference. We validate these methods using neural data recorded extracellularly from macaque primary visual cortex. 1

3 0.20259073 82 nips-2011-Efficient coding of natural images with a population of noisy Linear-Nonlinear neurons

Author: Yan Karklin, Eero P. Simoncelli

Abstract: Efficient coding provides a powerful principle for explaining early sensory coding. Most attempts to test this principle have been limited to linear, noiseless models, and when applied to natural images, have yielded oriented filters consistent with responses in primary visual cortex. Here we show that an efficient coding model that incorporates biologically realistic ingredients – input and output noise, nonlinear response functions, and a metabolic cost on the firing rate – predicts receptive fields and response nonlinearities similar to those observed in the retina. Specifically, we develop numerical methods for simultaneously learning the linear filters and response nonlinearities of a population of model neurons, so as to maximize information transmission subject to metabolic costs. When applied to an ensemble of natural images, the method yields filters that are center-surround and nonlinearities that are rectifying. The filters are organized into two populations, with On- and Off-centers, which independently tile the visual space. As observed in the primate retina, the Off-center neurons are more numerous and have filters with smaller spatial extent. In the absence of noise, our method reduces to a generalized version of independent components analysis, with an adapted nonlinear “contrast” function; in this case, the optimal filters are localized and oriented.

4 0.15046322 24 nips-2011-Active learning of neural response functions with Gaussian processes

Author: Mijung Park, Greg Horwitz, Jonathan W. Pillow

Abstract: A sizeable literature has focused on the problem of estimating a low-dimensional feature space for a neuron’s stimulus sensitivity. However, comparatively little work has addressed the problem of estimating the nonlinear function from feature space to spike rate. Here, we use a Gaussian process (GP) prior over the infinite-dimensional space of nonlinear functions to obtain Bayesian estimates of the “nonlinearity” in the linear-nonlinear-Poisson (LNP) encoding model. This approach offers increased flexibility, robustness, and computational tractability compared to traditional methods (e.g., parametric forms, histograms, cubic splines). We then develop a framework for optimal experimental design under the GP-Poisson model using uncertainty sampling. This involves adaptively selecting stimuli according to an information-theoretic criterion, with the goal of characterizing the nonlinearity with as little experimental data as possible. Our framework relies on a method for rapidly updating hyperparameters under a Gaussian approximation to the posterior. We apply these methods to neural data from a color-tuned simple cell in macaque V1, characterizing its nonlinear response function in the 3D space of cone contrasts. We find that it combines cone inputs in a highly nonlinear manner. With simulated experiments, we show that optimal design substantially reduces the amount of data required to estimate these nonlinear combination rules. 1

5 0.14960915 244 nips-2011-Selecting Receptive Fields in Deep Networks

Author: Adam Coates, Andrew Y. Ng

Abstract: Recent deep learning and unsupervised feature learning systems that learn from unlabeled data have achieved high performance in benchmarks by using extremely large architectures with many features (hidden units) at each layer. Unfortunately, for such large architectures the number of parameters can grow quadratically in the width of the network, thus necessitating hand-coded “local receptive fields” that limit the number of connections from lower level features to higher ones (e.g., based on spatial locality). In this paper we propose a fast method to choose these connections that may be incorporated into a wide variety of unsupervised training methods. Specifically, we choose local receptive fields that group together those low-level features that are most similar to each other according to a pairwise similarity metric. This approach allows us to harness the advantages of local receptive fields (such as improved scalability, and reduced data requirements) when we do not know how to specify such receptive fields by hand or where our unsupervised training algorithm has no obvious generalization to a topographic setting. We produce results showing how this method allows us to use even simple unsupervised training algorithms to train successful multi-layered networks that achieve state-of-the-art results on CIFAR and STL datasets: 82.0% and 60.1% accuracy, respectively. 1

6 0.14701921 298 nips-2011-Unsupervised learning models of primary cortical receptive fields and receptive field plasticity

7 0.12115581 302 nips-2011-Variational Learning for Recurrent Spiking Networks

8 0.1192857 133 nips-2011-Inferring spike-timing-dependent plasticity from spike train data

9 0.11476921 135 nips-2011-Information Rates and Optimal Decoding in Large Neural Populations

10 0.10397787 249 nips-2011-Sequence learning with hidden units in spiking neural networks

11 0.10253657 219 nips-2011-Predicting response time and error rates in visual search

12 0.10166419 200 nips-2011-On the Analysis of Multi-Channel Neural Spike Data

13 0.092616111 23 nips-2011-Active dendrites: adaptation to spike-based communication

14 0.091818914 2 nips-2011-A Brain-Machine Interface Operating with a Real-Time Spiking Neural Network Control Algorithm

15 0.091746457 224 nips-2011-Probabilistic Modeling of Dependencies Among Visual Short-Term Memory Representations

16 0.088092752 13 nips-2011-A blind sparse deconvolution method for neural spike identification

17 0.079364717 276 nips-2011-Structured sparse coding via lateral inhibition

18 0.077089198 86 nips-2011-Empirical models of spiking in neural populations

19 0.070240684 261 nips-2011-Sparse Filtering

20 0.069449455 37 nips-2011-Analytical Results for the Error in Filtering of Gaussian Processes


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.181), (1, 0.144), (2, 0.241), (3, -0.035), (4, 0.066), (5, 0.098), (6, 0.014), (7, 0.12), (8, -0.013), (9, -0.067), (10, -0.043), (11, 0.032), (12, -0.028), (13, 0.013), (14, 0.027), (15, 0.043), (16, 0.124), (17, -0.086), (18, 0.015), (19, -0.019), (20, -0.128), (21, -0.094), (22, 0.081), (23, -0.03), (24, 0.046), (25, 0.042), (26, -0.084), (27, 0.089), (28, 0.04), (29, 0.009), (30, 0.023), (31, 0.038), (32, -0.039), (33, 0.025), (34, 0.09), (35, 0.011), (36, -0.153), (37, -0.107), (38, 0.032), (39, -0.036), (40, -0.05), (41, -0.118), (42, 0.029), (43, 0.179), (44, -0.136), (45, -0.113), (46, -0.048), (47, -0.073), (48, 0.129), (49, 0.053)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.92170483 183 nips-2011-Neural Reconstruction with Approximate Message Passing (NeuRAMP)

Author: Alyson K. Fletcher, Sundeep Rangan, Lav R. Varshney, Aniruddha Bhargava

Abstract: Many functional descriptions of spiking neurons assume a cascade structure where inputs are passed through an initial linear filtering stage that produces a low-dimensional signal that drives subsequent nonlinear stages. This paper presents a novel and systematic parameter estimation procedure for such models and applies the method to two neural estimation problems: (i) compressed-sensing based neural mapping from multi-neuron excitation, and (ii) estimation of neural receptive fields in sensory neurons. The proposed estimation algorithm models the neurons via a graphical model and then estimates the parameters in the model using a recently-developed generalized approximate message passing (GAMP) method. The GAMP method is based on Gaussian approximations of loopy belief propagation. In the neural connectivity problem, the GAMP-based method is shown to be computationally efficient, provides a more exact modeling of the sparsity, can incorporate nonlinearities in the output and significantly outperforms previous compressed-sensing methods. For the receptive field estimation, the GAMP method can also exploit inherent structured sparsity in the linear weights. The method is validated on estimation of linear nonlinear Poisson (LNP) cascade models for receptive fields of salamander retinal ganglion cells. 1

2 0.83980614 44 nips-2011-Bayesian Spike-Triggered Covariance Analysis

Author: Jonathan W. Pillow, Il M. Park

Abstract: Neurons typically respond to a restricted number of stimulus features within the high-dimensional space of natural stimuli. Here we describe an explicit model-based interpretation of traditional estimators for a neuron’s multi-dimensional feature space, which allows for several important generalizations and extensions. First, we show that traditional estimators based on the spike-triggered average (STA) and spike-triggered covariance (STC) can be formalized in terms of the “expected log-likelihood” of a Linear-Nonlinear-Poisson (LNP) model with Gaussian stimuli. This model-based formulation allows us to define maximum-likelihood and Bayesian estimators that are statistically consistent and efficient in a wider variety of settings, such as with naturalistic (non-Gaussian) stimuli. It also allows us to employ Bayesian methods for regularization, smoothing, sparsification, and model comparison, and provides Bayesian confidence intervals on model parameters. We describe an empirical Bayes method for selecting the number of features, and extend the model to accommodate an arbitrary elliptical nonlinear response function, which results in a more powerful and more flexible model for feature space inference. We validate these methods using neural data recorded extracellularly from macaque primary visual cortex. 1

3 0.64853883 24 nips-2011-Active learning of neural response functions with Gaussian processes

Author: Mijung Park, Greg Horwitz, Jonathan W. Pillow

Abstract: A sizeable literature has focused on the problem of estimating a low-dimensional feature space for a neuron’s stimulus sensitivity. However, comparatively little work has addressed the problem of estimating the nonlinear function from feature space to spike rate. Here, we use a Gaussian process (GP) prior over the infinite-dimensional space of nonlinear functions to obtain Bayesian estimates of the “nonlinearity” in the linear-nonlinear-Poisson (LNP) encoding model. This approach offers increased flexibility, robustness, and computational tractability compared to traditional methods (e.g., parametric forms, histograms, cubic splines). We then develop a framework for optimal experimental design under the GP-Poisson model using uncertainty sampling. This involves adaptively selecting stimuli according to an information-theoretic criterion, with the goal of characterizing the nonlinearity with as little experimental data as possible. Our framework relies on a method for rapidly updating hyperparameters under a Gaussian approximation to the posterior. We apply these methods to neural data from a color-tuned simple cell in macaque V1, characterizing its nonlinear response function in the 3D space of cone contrasts. We find that it combines cone inputs in a highly nonlinear manner. With simulated experiments, we show that optimal design substantially reduces the amount of data required to estimate these nonlinear combination rules. 1

4 0.63068777 82 nips-2011-Efficient coding of natural images with a population of noisy Linear-Nonlinear neurons

Author: Yan Karklin, Eero P. Simoncelli

Abstract: Efficient coding provides a powerful principle for explaining early sensory coding. Most attempts to test this principle have been limited to linear, noiseless models, and when applied to natural images, have yielded oriented filters consistent with responses in primary visual cortex. Here we show that an efficient coding model that incorporates biologically realistic ingredients – input and output noise, nonlinear response functions, and a metabolic cost on the firing rate – predicts receptive fields and response nonlinearities similar to those observed in the retina. Specifically, we develop numerical methods for simultaneously learning the linear filters and response nonlinearities of a population of model neurons, so as to maximize information transmission subject to metabolic costs. When applied to an ensemble of natural images, the method yields filters that are center-surround and nonlinearities that are rectifying. The filters are organized into two populations, with On- and Off-centers, which independently tile the visual space. As observed in the primate retina, the Off-center neurons are more numerous and have filters with smaller spatial extent. In the absence of noise, our method reduces to a generalized version of independent components analysis, with an adapted nonlinear “contrast” function; in this case, the optimal filters are localized and oriented.

5 0.62741441 298 nips-2011-Unsupervised learning models of primary cortical receptive fields and receptive field plasticity

Author: Maneesh Bhand, Ritvik Mudur, Bipin Suresh, Andrew Saxe, Andrew Y. Ng

Abstract: The efficient coding hypothesis holds that neural receptive fields are adapted to the statistics of the environment, but is agnostic to the timescale of this adaptation, which occurs on both evolutionary and developmental timescales. In this work we focus on that component of adaptation which occurs during an organism’s lifetime, and show that a number of unsupervised feature learning algorithms can account for features of normal receptive field properties across multiple primary sensory cortices. Furthermore, we show that the same algorithms account for altered receptive field properties in response to experimentally altered environmental statistics. Based on these modeling results we propose these models as phenomenological models of receptive field plasticity during an organism’s lifetime. Finally, due to the success of the same models in multiple sensory areas, we suggest that these algorithms may provide a constructive realization of the theory, first proposed by Mountcastle [1], that a qualitatively similar learning algorithm acts throughout primary sensory cortices. 1

6 0.43428051 133 nips-2011-Inferring spike-timing-dependent plasticity from spike train data

7 0.43035147 135 nips-2011-Information Rates and Optimal Decoding in Large Neural Populations

8 0.42729729 244 nips-2011-Selecting Receptive Fields in Deep Networks

9 0.40027806 219 nips-2011-Predicting response time and error rates in visual search

10 0.39534423 224 nips-2011-Probabilistic Modeling of Dependencies Among Visual Short-Term Memory Representations

11 0.39516616 34 nips-2011-An Unsupervised Decontamination Procedure For Improving The Reliability Of Human Judgments

12 0.3911984 2 nips-2011-A Brain-Machine Interface Operating with a Real-Time Spiking Neural Network Control Algorithm

13 0.39074099 13 nips-2011-A blind sparse deconvolution method for neural spike identification

14 0.35925424 200 nips-2011-On the Analysis of Multi-Channel Neural Spike Data

15 0.35098717 302 nips-2011-Variational Learning for Recurrent Spiking Networks

16 0.34826753 23 nips-2011-Active dendrites: adaptation to spike-based communication

17 0.34572062 99 nips-2011-From Stochastic Nonlinear Integrate-and-Fire to Generalized Linear Models

18 0.3359372 276 nips-2011-Structured sparse coding via lateral inhibition

19 0.33340651 85 nips-2011-Emergence of Multiplication in a Biophysical Model of a Wide-Field Visual Neuron for Computing Object Approaches: Dynamics, Peaks, & Fits

20 0.32015359 243 nips-2011-Select and Sample - A Model of Efficient Neural Inference and Learning


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(0, 0.01), (4, 0.029), (20, 0.019), (26, 0.013), (31, 0.093), (33, 0.021), (34, 0.012), (39, 0.024), (43, 0.149), (45, 0.07), (54, 0.188), (57, 0.058), (65, 0.046), (74, 0.069), (83, 0.085), (99, 0.031)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.82718766 183 nips-2011-Neural Reconstruction with Approximate Message Passing (NeuRAMP)

Author: Alyson K. Fletcher, Sundeep Rangan, Lav R. Varshney, Aniruddha Bhargava

Abstract: Many functional descriptions of spiking neurons assume a cascade structure where inputs are passed through an initial linear filtering stage that produces a low-dimensional signal that drives subsequent nonlinear stages. This paper presents a novel and systematic parameter estimation procedure for such models and applies the method to two neural estimation problems: (i) compressed-sensing based neural mapping from multi-neuron excitation, and (ii) estimation of neural receptive fields in sensory neurons. The proposed estimation algorithm models the neurons via a graphical model and then estimates the parameters in the model using a recently-developed generalized approximate message passing (GAMP) method. The GAMP method is based on Gaussian approximations of loopy belief propagation. In the neural connectivity problem, the GAMP-based method is shown to be computationally efficient, provides a more exact modeling of the sparsity, can incorporate nonlinearities in the output and significantly outperforms previous compressed-sensing methods. For the receptive field estimation, the GAMP method can also exploit inherent structured sparsity in the linear weights. The method is validated on estimation of linear nonlinear Poisson (LNP) cascade models for receptive fields of salamander retinal ganglion cells. 1

2 0.7377426 82 nips-2011-Efficient coding of natural images with a population of noisy Linear-Nonlinear neurons

Author: Yan Karklin, Eero P. Simoncelli

Abstract: Efficient coding provides a powerful principle for explaining early sensory coding. Most attempts to test this principle have been limited to linear, noiseless models, and when applied to natural images, have yielded oriented filters consistent with responses in primary visual cortex. Here we show that an efficient coding model that incorporates biologically realistic ingredients – input and output noise, nonlinear response functions, and a metabolic cost on the firing rate – predicts receptive fields and response nonlinearities similar to those observed in the retina. Specifically, we develop numerical methods for simultaneously learning the linear filters and response nonlinearities of a population of model neurons, so as to maximize information transmission subject to metabolic costs. When applied to an ensemble of natural images, the method yields filters that are center-surround and nonlinearities that are rectifying. The filters are organized into two populations, with On- and Off-centers, which independently tile the visual space. As observed in the primate retina, the Off-center neurons are more numerous and have filters with smaller spatial extent. In the absence of noise, our method reduces to a generalized version of independent components analysis, with an adapted nonlinear “contrast” function; in this case, the optimal filters are localized and oriented.

3 0.73044419 135 nips-2011-Information Rates and Optimal Decoding in Large Neural Populations

Author: Kamiar R. Rad, Liam Paninski

Abstract: Many fundamental questions in theoretical neuroscience involve optimal decoding and the computation of Shannon information rates in populations of spiking neurons. In this paper, we apply methods from the asymptotic theory of statistical inference to obtain a clearer analytical understanding of these quantities. We find that for large neural populations carrying a finite total amount of information, the full spiking population response is asymptotically as informative as a single observation from a Gaussian process whose mean and covariance can be characterized explicitly in terms of network and single neuron properties. The Gaussian form of this asymptotic sufficient statistic allows us in certain cases to perform optimal Bayesian decoding by simple linear transformations, and to obtain closed-form expressions of the Shannon information carried by the network. One technical advantage of the theory is that it may be applied easily even to non-Poisson point process network models; for example, we find that under some conditions, neural populations with strong history-dependent (non-Poisson) effects carry exactly the same information as do simpler equivalent populations of non-interacting Poisson neurons with matched firing rates. We argue that our findings help to clarify some results from the recent literature on neural decoding and neuroprosthetic design.

4 0.71773881 273 nips-2011-Structural equations and divisive normalization for energy-dependent component analysis

Author: Jun-ichiro Hirayama, Aapo Hyvärinen

Abstract: Components estimated by independent component analysis and related methods are typically not independent in real data. A very common form of nonlinear dependency between the components is correlations in their variances or energies. Here, we propose a principled probabilistic model to model the energycorrelations between the latent variables. Our two-stage model includes a linear mixing of latent signals into the observed ones like in ICA. The main new feature is a model of the energy-correlations based on the structural equation model (SEM), in particular, a Linear Non-Gaussian SEM. The SEM is closely related to divisive normalization which effectively reduces energy correlation. Our new twostage model enables estimation of both the linear mixing and the interactions related to energy-correlations, without resorting to approximations of the likelihood function or other non-principled approaches. We demonstrate the applicability of our method with synthetic dataset, natural images and brain signals. 1

5 0.70277929 86 nips-2011-Empirical models of spiking in neural populations

Author: Jakob H. Macke, Lars Buesing, John P. Cunningham, Byron M. Yu, Krishna V. Shenoy, Maneesh Sahani

Abstract: Neurons in the neocortex code and compute as part of a locally interconnected population. Large-scale multi-electrode recording makes it possible to access these population processes empirically by fitting statistical models to unaveraged data. What statistical structure best describes the concurrent spiking of cells within a local network? We argue that in the cortex, where firing exhibits extensive correlations in both time and space and where a typical sample of neurons still reflects only a very small fraction of the local population, the most appropriate model captures shared variability by a low-dimensional latent process evolving with smooth dynamics, rather than by putative direct coupling. We test this claim by comparing a latent dynamical model with realistic spiking observations to coupled generalised linear spike-response models (GLMs) using cortical recordings. We find that the latent dynamical approach outperforms the GLM in terms of goodness-of-fit, and reproduces the temporal correlations in the data more accurately. We also compare models whose observation models are derived from either a Gaussian or a point-process model, finding that the non-Gaussian model provides slightly better goodness-of-fit and more realistic population spike counts. 1

6 0.70238096 24 nips-2011-Active learning of neural response functions with Gaussian processes

7 0.70024204 288 nips-2011-Thinning Measurement Models and Questionnaire Design

8 0.7001552 44 nips-2011-Bayesian Spike-Triggered Covariance Analysis

9 0.69838744 258 nips-2011-Sparse Bayesian Multi-Task Learning

10 0.69437122 281 nips-2011-The Doubly Correlated Nonparametric Topic Model

11 0.69399166 133 nips-2011-Inferring spike-timing-dependent plasticity from spike train data

12 0.69238538 83 nips-2011-Efficient inference in matrix-variate Gaussian models with \iid observation noise

13 0.68417662 75 nips-2011-Dynamical segmentation of single trials from population neural data

14 0.68390888 276 nips-2011-Structured sparse coding via lateral inhibition

15 0.68330508 123 nips-2011-How biased are maximum entropy models?

16 0.68249714 219 nips-2011-Predicting response time and error rates in visual search

17 0.68116385 140 nips-2011-Kernel Embeddings of Latent Tree Graphical Models

18 0.68084991 267 nips-2011-Spectral Methods for Learning Multivariate Latent Tree Structure

19 0.67825294 301 nips-2011-Variational Gaussian Process Dynamical Systems

20 0.67719668 116 nips-2011-Hierarchically Supervised Latent Dirichlet Allocation