nips nips2007 nips2007-164 knowledge-graph by maker-knowledge-mining

164 nips-2007-Receptive Fields without Spike-Triggering


Source: pdf

Author: Guenther Zeck, Matthias Bethge, Jakob H. Macke

Abstract: Stimulus selectivity of sensory neurons is often characterized by estimating their receptive field properties such as orientation selectivity. Receptive fields are usually derived from the mean (or covariance) of the spike-triggered stimulus ensemble. This approach treats each spike as an independent message but does not take into account that information might be conveyed through patterns of neural activity that are distributed across space or time. Can we find a concise description for the processing of a whole population of neurons analogous to the receptive field for single neurons? Here, we present a generalization of the linear receptive field which is not bound to be triggered on individual spikes but can be meaningfully linked to distributed response patterns. More precisely, we seek to identify those stimulus features and the corresponding patterns of neural activity that are most reliably coupled. We use an extension of reverse-correlation methods based on canonical correlation analysis. The resulting population receptive fields span the subspace of stimuli that is most informative about the population response. We evaluate our approach using both neuronal models and multi-electrode recordings from rabbit retinal ganglion cells. We show how the model can be extended to capture nonlinear stimulus-response relationships using kernel canonical correlation analysis, which makes it possible to test different coding mechanisms. Our technique can also be used to calculate receptive fields from multi-dimensional neural measurements such as those obtained from dynamic imaging methods. 1

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Max Planck Institute for Biological Cybernetics, Spemannstrasse 41, 72076 Tübingen, Germany. Abstract: Stimulus selectivity of sensory neurons is often characterized by estimating their receptive field properties such as orientation selectivity. [sent-7, score-0.744]

2 Receptive fields are usually derived from the mean (or covariance) of the spike-triggered stimulus ensemble. [sent-8, score-0.239]

3 This approach treats each spike as an independent message but does not take into account that information might be conveyed through patterns of neural activity that are distributed across space or time. [sent-9, score-0.36]

4 Can we find a concise description for the processing of a whole population of neurons analogous to the receptive field for single neurons? [sent-10, score-0.903]

5 Here, we present a generalization of the linear receptive field which is not bound to be triggered on individual spikes but can be meaningfully linked to distributed response patterns. [sent-11, score-0.787]

6 More precisely, we seek to identify those stimulus features and the corresponding patterns of neural activity that are most reliably coupled. [sent-12, score-0.465]

7 We use an extension of reverse-correlation methods based on canonical correlation analysis. [sent-13, score-0.248]

8 The resulting population receptive fields span the subspace of stimuli that is most informative about the population response. [sent-14, score-1.197]

9 We evaluate our approach using both neuronal models and multi-electrode recordings from rabbit retinal ganglion cells. [sent-15, score-0.501]

10 We show how the model can be extended to capture nonlinear stimulus-response relationships using kernel canonical correlation analysis, which makes it possible to test different coding mechanisms. [sent-16, score-0.397]

11 Our technique can also be used to calculate receptive fields from multi-dimensional neural measurements such as those obtained from dynamic imaging methods. [sent-17, score-0.641]

12 The interpretation of these patterns constitutes a challenging problem: for computational tasks like object recognition, it is not clear what information about the image should be extracted and in which format it should be represented. [sent-19, score-0.11]

13 S imilarly, it is difficult to assess what information is conveyed by the multitude of neurons in the visual pathway. [sent-20, score-0.152]

14 Right from the first synapse, the information of an individual photoreceptor is signaled to many different cells with different temporal filtering properties, each of which is only a small unit within a complex neural network [20]. [sent-21, score-0.165]

15 Even if we leave the difficulties imposed by nonlinearities and feedback aside, it is hard to judge what the contribution of any particular neuron is to the information transmitted. [sent-22, score-0.156]

16 The prevalent tool for characterizing the behavior of sensory neurons, the spike-triggered average, is based on a quasi-linear model of neural responses [15]. [sent-23, score-0.302]

17 . . . , y_N)^T denotes the vector of neural responses, x the stimulus parameters, W = (w_1, . . . [sent-27, score-0.291]

18 . . . , w_N)^T the filter matrix, with row k containing the receptive field w_k of neuron k, and ξ the noise. [sent-30, score-0.669]

19 In order to understand the collective behavior of a neuronal population, we rather have to understand the behavior of the matrix W and the structure of the noise correlations Σ_ξ: both of them influence the feature selectivity of the population. [sent-34, score-0.226]

20 Can we find a compact description of the features that a neural ensemble is most sensitive to? [sent-35, score-0.165]

21 In the case of a single cell, the receptive field provides such a description: It can be interpreted as the “favorite stimulus” of the neuron, in the sense that the more similar an input is to the receptive field, the higher is the spiking probability, and thus the firing rate of the neuron. [sent-36, score-1.073]

22 In addition, the receptive field can easily be estimated using a spike-triggered average, which, under certain assumptions, yields the optimal estimate of the receptive field in a linear-nonlinear cascade model [11]. [sent-37, score-1.026]

23 If we are considering an ensemble of neurons rather than a single neuron, it is not obvious what to trigger on: This requires assumptions about what patterns of spikes or modulations in firing rates across the population carry information about the stimulus. [sent-38, score-0.613]

24 Rather than addressing the question “what features of the stimulus are correlated with the occurrence of spikes?”, the question now is: “What stimulus features are correlated with what patterns of spiking activity?” [sent-39, score-0.711]

25 Phrased in the language of information theory, we are searching for the subspace that contains most of the mutual information between sensory inputs and neuronal responses. [sent-41, score-0.294]

26 As an efficient implementation of this strategy, we present an extension of reverse-correlation methods based on canonical correlation analysis. [sent-43, score-0.248]

27 The resulting population receptive fields (PRFs) are not bound to be triggered on individual spikes but are linked to response patterns that are simultaneously determined by the algorithm. [sent-44, score-1.132]

28 We calculate the PRF for a population consisting of uniformly spaced cells with center-surround receptive fields and noise correlations, and estimate the PRF of a population of rabbit retinal ganglion cells from multi-electrode recordings. [sent-45, score-1.669]

29 In addition, we show how our method can be extended to explore different hypotheses about the neural code, such as spike latencies or interval coding, which require nonlinear read out mechanisms. [sent-46, score-0.222]

30 2 From reverse correlation to canonical correlation. We regard the stimulus at time t as a random variable X_t ∈ R^n, and the neural response as Y_t ∈ R^m. [sent-47, score-0.778]

31 For simplicity, we assume that the stimulus consists of Gaussian white noise. [sent-48, score-0.268]

32 The spike-triggered average a of a neuron can be motivated by the fact that it is the direction in stimulus space maximizing the correlation coefficient ρ = Cov(a^T X, Y_1) / √(Var(a^T X) · Var(Y_1)). [sent-51, score-0.182]

33 (2) between the filtered stimulus a^T X and a univariate neural response Y_1. [sent-52, score-0.383]
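To make the reverse-correlation step concrete, here is a minimal sketch (not the authors' code; the filter shape, exponential nonlinearity, and all parameters are illustrative stand-ins) showing that the spike-triggered average of a simulated linear-nonlinear neuron recovers the direction that maximizes this correlation:

```python
import numpy as np

rng = np.random.default_rng(0)
n_dim, n_samples = 20, 50_000

# Gaussian white-noise stimulus, one row per presentation
X = rng.standard_normal((n_samples, n_dim))

# hypothetical linear-nonlinear neuron: filter w, exponential nonlinearity, Poisson spikes
w = np.sin(np.linspace(0, np.pi, n_dim))
w /= np.linalg.norm(w)
rate = np.exp(X @ w - 2.0)
y = rng.poisson(rate)

# spike-triggered average = spike-weighted mean of the stimulus
sta = (X.T @ y) / y.sum()
sta /= np.linalg.norm(sta)

# the STA aligns with the true filter, i.e. it maximizes the
# correlation between the filtered stimulus a^T X and the response
print(np.dot(sta, w))  # close to 1
```

With Gaussian white-noise input, the spike-weighted stimulus mean is an unbiased estimate of the filter direction, which is exactly the correlation-maximizing direction of equation (2).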

34 In the case of a neural population, we are not only looking for the stimulus feature a, but also need to determine what pattern of spiking activity b it is coupled with. [sent-53, score-0.433]

35 (3) ρ_1 = Cov(a_1^T X, b_1^T Y) / √(Var(a_1^T X) · Var(b_1^T Y)). We interpret a_1 as the stimulus filter whose output is maximally correlated with the output of the “response filter” b_1. [sent-55, score-0.304]

36 Thus, we are simultaneously searching for features of the stimulus that the neural system is selective for, and the patterns of activity that it uses to signal the presence or absence of this feature. [sent-56, score-0.499]
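This joint search is exactly linear CCA. A compact sketch (hypothetical variable names; a toy linear population stands in for real data): whiten stimulus and response, then take the SVD of the whitened cross-covariance. The columns of A are candidate population receptive fields, the columns of B the coupled response patterns, and the singular values the canonical correlations ρ_k:

```python
import numpy as np

def cca(X, Y, eps=1e-8):
    """Canonical directions between X (stimuli) and Y (responses), rows = samples."""
    X = X - X.mean(0)
    Y = Y - Y.mean(0)

    # inverse square root of a covariance matrix, for whitening
    def inv_sqrt(C):
        s, U = np.linalg.eigh(C)
        return U @ np.diag(1.0 / np.sqrt(np.maximum(s, eps))) @ U.T

    Wx = inv_sqrt(X.T @ X / len(X))
    Wy = inv_sqrt(Y.T @ Y / len(Y))

    # SVD of the whitened cross-covariance; singular values are the rho_k
    K = Wx @ (X.T @ Y / len(X)) @ Wy
    U, rho, Vt = np.linalg.svd(K)
    A = Wx @ U      # stimulus filters (PRF candidates), sorted by rho
    B = Wy @ Vt.T   # coupled response features
    return A, B, rho

# toy check: the population response is a noisy linear readout of the stimulus
rng = np.random.default_rng(1)
X = rng.standard_normal((10_000, 8))
M = rng.standard_normal((8, 5))
Y = X @ M + 0.5 * rng.standard_normal((10_000, 5))
A, B, rho = cca(X, Y)
print(rho[0])  # close to 1
```

Note this only uses the stimulus covariance, the response covariance, and the cross-covariance, all of which are directly available from a reverse-correlation experiment.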

37 We refer to the vector a_1 as the (first) population receptive field of the population, and b_1 is the response feature corresponding to a_1. [sent-57, score-0.84]

38 If a hypothetical neuron receives input from the population and wants to decode the presence of the stimulus a_1, the weights of the optimal linear readout [16] could be derived from b_1. [sent-58, score-0.464]

39 The population receptive fields and the characteristic patterns are found by a joint optimization in stimulus and response space. [sent-67, score-1.189]

40 Therefore, one does not need to know—or assume—a priori what features the population is sensitive to, or what spike patterns convey the information. [sent-68, score-0.508]

41 The first K PRFs form a basis for the subspace of stimuli that the neural population is most sensitive to, and the individual basis vectors a_k are sorted according to their “informativeness” [13, 17]. [sent-69, score-0.524]

42 The mutual information between two one-dimensional Gaussian variables with correlation ρ is given by MI_Gauss = −(1/2) log(1 − ρ²), so maximizing correlation coefficients is equivalent to maximizing mutual information [3]. [sent-70, score-0.508]
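As a sanity check of this correspondence, a tiny sketch (function name hypothetical) converting correlation coefficients into Gaussian mutual information in nats:

```python
import numpy as np

def gauss_mi(rho):
    """MI (in nats) between two jointly Gaussian variables with correlation rho."""
    rho = np.asarray(rho, dtype=float)
    return -0.5 * np.log(1.0 - rho ** 2)

# monotone in |rho|: maximizing the correlation maximizes the Gaussian MI
print(gauss_mi([0.0, 0.5, 0.9]))  # ~ [0.0, 0.144, 0.830]
```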

43 Assuming the neural response Y to be Gaussian, the subspace spanned by the first K vectors B_K = (b_1, . . . [sent-71, score-0.327]

44 . . . , b_K) is also the K-subspace of stimuli that contains the maximal amount of mutual information between stimuli and neural response. [sent-74, score-0.271]

45 In contrast to oriented PCA, however, CCA does not require one to know explicitly how the response covariance Σ_y = Σ_s + Σ_ξ splits into signal covariance Σ_s and noise covariance Σ_ξ. [sent-77, score-0.196]

46 Instead, it uses the cross-covariance Σ x y which is directly available from reverse correlation experiments. [sent-78, score-0.147]

47 b_K, but also the most predictive stimulus components A_K = (a_1, . . . [sent-82, score-0.239]

48 Since, for elliptically contoured distributions, J(A^T X) does not depend on A, the PRFs can be seen as the solution of a variational approach, maximizing a lower bound on the mutual information. [sent-87, score-0.189]

49 Maximizing mutual information directly is hard: it requires extensive amounts of data, and usually multiple repetitions of the same stimulus sequence. [sent-88, score-0.32]

50 3 The receptive field of a population of neurons. [sent-89, score-0.863]

51 the neuron is not sensitive to the mean luminance of the stimulus. [sent-96, score-0.189]

52 Specifically, we assume exponentially decaying noise correlation with Σ_ξ(s) = exp(−|s|/λ). [sent-99, score-0.189]
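The assumed noise structure can be built as a Toeplitz matrix over cell positions; a sketch (cell count and λ are illustrative, matching the spirit of the uniformly spaced model population):

```python
import numpy as np

n_cells, lam = 32, 4.0

# exponentially decaying noise correlation between cells at distance s
s = np.abs(np.subtract.outer(np.arange(n_cells), np.arange(n_cells)))
Sigma_xi = np.exp(-s / lam)

# symmetric with unit diagonal; positive definite for any lam > 0
print(Sigma_xi[0, :3])  # [1.0, exp(-0.25), exp(-0.5)]
```

Feeding such a Σ_ξ into the CCA machinery shifts the passband of the first population filter, as the surrounding analysis describes.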

53 That is, the first PRF can be used to estimate the passband of the population transfer function. [sent-101, score-0.282]

54 (8) The passband of the first population filter moves as a function of both parameters A and λ. [sent-103, score-0.282]

55 In this case, the mean intensity is the stimulus property that is most faithfully signaled by the ensemble. [sent-109, score-0.286]

56 2 The receptive field of an ensemble of retinal ganglion cells. We mapped the population receptive fields of rabbit retinal ganglion cells recorded in a whole-mount preparation. [sent-124, score-2.204]

57 The neurons were stimulated with a 16 × 16 checkerboard consisting of binary white noise, which was updated every 20 ms. [sent-126, score-0.211]

58 After spike sorting, spike trains from 32 neurons were binned at 20 ms resolution, and the response of a neuron to a stimulus at time t was defined to consist of the spike counts in the 10 bins between 40 ms and 240 ms after t. [sent-128, score-0.729]

59 Thus, each population response Y_t is a 320-dimensional vector. [sent-129, score-0.327]
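The response construction just described (32 neurons, spike counts in ten 20 ms bins from 40 to 240 ms after each stimulus frame) can be sketched as follows; the spike times here are simulated placeholders, not recorded data:

```python
import numpy as np

n_neurons, bin_ms, n_bins, offset_ms = 32, 20, 10, 40
rng = np.random.default_rng(2)

# hypothetical spike times (ms) per neuron, e.g. as produced by spike sorting
spikes = [np.sort(rng.uniform(0, 10_000, rng.integers(50, 200)))
          for _ in range(n_neurons)]

def response_vector(t_ms):
    """Population response Y_t: spike counts in 10 bins, 40-240 ms after time t."""
    edges = t_ms + offset_ms + bin_ms * np.arange(n_bins + 1)
    counts = [np.histogram(s, bins=edges)[0] for s in spikes]
    return np.concatenate(counts)  # 32 neurons x 10 bins = 320 entries

Y = response_vector(1000.0)
print(Y.shape)  # (320,)
```

Stacking one such vector per stimulus frame gives the response matrix that enters the CCA.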

60 2A) displays the first 6 PRFs, the corresponding patterns of neural activity (B), and their correlation coefficients ρ_k (which were calculated using a cross-validation procedure). [sent-131, score-0.404]

61 It can be seen that the PRFs look very different from the usual center-surround structure of retinal ganglion cells. [sent-132, score-0.155]

62 For comparison, we also plotted the single-cell receptive fields in Figure 3. [sent-134, score-0.513]

63 2 C), and their projections into the space spanned by the first 6 PRFs. [sent-135, score-0.114]

64 These plots suggest that a small number of PRFs might be sufficient to approximate each of the receptive fields. [sent-136, score-0.513]

65 The Gaussian mutual information MI_Gauss = −(1/2) Σ_{k=1}^{K} log(1 − ρ_k²) is an estimate of the information contained in the subspace spanned by the first K PRFs. [sent-138, score-0.183]

66 Figure 2: The population receptive fields of a group of 32 retinal ganglion cells. A) The first 6 PRFs, sorted by the correlation coefficient ρ_k. B) The response features b_k coupled with the PRFs. [sent-153, score-1.37]

67 It can be seen that only a subset of neurons contributed to the first 6 PRFs. [sent-156, score-0.115]

68 C) The single-cell receptive fields of 24 neurons from our population, and their projections into the space spanned by the 6 PRFs. [sent-157, score-0.714]

69 B) Gaussian-MI of the subspace spanned by the first K PRFs. [sent-167, score-0.183]

70 4 Nonlinear extensions using Kernel Canonical Correlation Analysis Thus far, our model is completely linear: We assume that the stimulus is linearly related to the neural responses, and we also assume a linear readout of the response. [sent-168, score-0.332]

71 It is worth mentioning that the space of patterns Y itself does not have to be a vector space. [sent-175, score-0.11]

72 We illustrate the concept on simulated data: we will use a similarity measure based on the metric D_interval [19] to estimate the receptive field of a neuron which does not use its firing rate, but rather the occurrence of specific interspike intervals, to convey information about the stimulus. [sent-185, score-0.768]

73 If we consider coding schemes that are based on patterns of spikes, the methods described here become useful even for the analysis of single neurons. [sent-189, score-0.11]

74 Our hypothetical neuron encodes information in a pattern consisting of three spikes, and the relative timing of the second spike is informative about the stimulus: the larger the correlation ⟨r, s_t⟩ between receptive field and stimulus, the shorter the interval. [sent-191, score-1.291]

75 If the receptive field is very dissimilar to the stimulus, the interval is long. [sent-192, score-0.546]

76 While the timing of the spikes relative to each other is precise, there is jitter in the timing of the pattern relative to the stimulus. [sent-193, score-0.189]

77 Figure 4: Coding by spike patterns. A) Receptive field of the neuron described in Section 4. [sent-196, score-0.283]

78 B) A subset of the simulated spike-trains, sorted with respect to the similarity between the shown stimulus and the receptive field of the model. [sent-197, score-0.823]

79 The interval between the first two informative spikes in each trial is highlighted in red. [sent-198, score-0.194]

80 C) Receptive field recovered by Kernel CCA, the correlation coefficient between real and estimated receptive field is 0. [sent-199, score-0.66]

81 D) Receptive field derived using linear decoding, correlation coefficient is 0. [sent-201, score-0.147]

82 Using these spike-trains, we tried to recover the receptive field r without telling the algorithm what the indicating pattern was. [sent-203, score-0.513]

83 Each stimulus was shown only once, and therefore every spike pattern occurred only once. [sent-204, score-0.239]

84 We simulated 5000 stimulus presentations for this model, and applied kernel CCA with a linear kernel on the stimuli and the alignment score on the spike trains. [sent-205, score-0.344]
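A runnable kernel-CCA sketch under stated stand-ins: the paper's alignment-score spike-train kernel cannot be reproduced here, so an RBF kernel on the inputs and a linear kernel on a scalar nonlinear response play its role. The approach — kernel-PCA embeddings followed by regularized linear CCA — is a standard approximation to kernel CCA; the regularizer, bandwidth, and component counts are assumptions:

```python
import numpy as np

def center(K):
    """Double-center a Gram matrix (kernel matrix of mean-subtracted features)."""
    n = len(K)
    J = np.eye(n) - np.ones((n, n)) / n
    return J @ K @ J

def kpca_features(K, n_comp, eps=1e-10):
    """Kernel-PCA embedding: an explicit finite-dimensional feature space for K."""
    s, U = np.linalg.eigh(center(K))
    order = np.argsort(s)[::-1][:n_comp]
    return U[:, order] * np.sqrt(np.maximum(s[order], eps))

def first_canonical_corr(F, G, reg=1e-3):
    """Regularized linear CCA on embeddings = (approximate) kernel CCA."""
    F = F - F.mean(0)
    G = G - G.mean(0)

    def inv_sqrt(C):
        s, U = np.linalg.eigh(C)
        return U @ np.diag(np.maximum(s, 1e-12) ** -0.5) @ U.T

    Cf = F.T @ F / len(F) + reg * np.eye(F.shape[1])
    Cg = G.T @ G / len(G) + reg * np.eye(G.shape[1])
    K = inv_sqrt(Cf) @ (F.T @ G / len(F)) @ inv_sqrt(Cg)
    return np.linalg.svd(K, compute_uv=False)[0]

# toy data: y depends on x1 only through x1^2, a purely nonlinear relation
rng = np.random.default_rng(3)
X = rng.standard_normal((400, 2))
y = (X[:, 0] ** 2 + 0.1 * rng.standard_normal(400))[:, None]

Kx = np.exp(-np.sum((X[:, None] - X[None]) ** 2, -1) / 2.0)  # RBF on inputs
Ky = y @ y.T                                                  # linear kernel on responses

rho = first_canonical_corr(kpca_features(Kx, 30), kpca_features(Ky, 1))
print(rho)  # high, even though x1 and y are linearly uncorrelated
```

Since only Gram matrices enter, the response space need not be a vector space: any positive-definite kernel on spike trains can replace Ky, which is precisely what the alignment score provides.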

85 As many kernels on spike trains are computationally expensive, this trick can result in substantial speed-ups of the computation. [sent-207, score-0.127]

86 The receptive field was recovered (see Figure 4), despite the highly nonlinear encoding mechanism of the neuron. [sent-208, score-0.553]

87 For comparison, we also show what receptive field would be obtained using linear decoding on the indicated bins. [sent-209, score-0.549]

88 Although this neuron model may seem slightly contrived, it is a good proof of concept that, in principle, receptive fields can be estimated even if the firing rate gives no information at all about the stimulus, and the encoding is highly nonlinear. [sent-210, score-0.669]

89 Our algorithm does not only look at patterns that occur more often than expected by chance, but also takes into account to what extent their occurrence is correlated with the sensory input. [sent-211, score-0.189]

90 5 Conclusions We set out to find a useful description of the stimulus-response relationship of an ensemble of neurons akin to the concept of receptive field for single neurons. [sent-212, score-0.708]

91 The population receptive fields are found by a joint optimization over stimuli and spike-patterns, and are thus not bound to be triggered by single spikes. [sent-213, score-0.886]

92 We estimated the PRFs of a group of retinal ganglion cells, and found that the first PRF had most spectral power in the low-frequency bands, consistent with our theoretical analysis. [sent-214, score-0.371]

93 The stimulus we used was a white-noise sequence—it will be interesting to see how the informative subspace and its spectral properties change for different stimuli such as colored noise. [sent-215, score-0.479]

94 The ganglion cell layer of the retina is a system that is relatively well understood at the level of single neurons. [sent-216, score-0.266]

95 However, our approach has the potential to be especially useful in systems in which the functional significance of single cell receptive fields is difficult to interpret. [sent-218, score-0.556]

96 Using CCA, receptive fields can readily be estimated from these kinds of representations without limiting attention to single channels or extracting neural events. [sent-227, score-0.565]

97 Prediction and decoding of retinal ganglion cell responses with a probabilistic spiking model. [sent-311, score-0.514]

98 Multineuronal firing patterns in the signal from eye to brain. [sent-324, score-0.144]

99 Analyzing neural responses to natural signals: maximally informative dimensions. [sent-346, score-0.17]

100 The spatial filtering properties of local edge detectors and brisk-sustained retinal ganglion cells. [sent-369, score-0.345]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('receptive', 0.513), ('prfs', 0.256), ('stimulus', 0.239), ('population', 0.235), ('cca', 0.222), ('prf', 0.21), ('ganglion', 0.19), ('neuron', 0.156), ('retinal', 0.155), ('eld', 0.152), ('correlation', 0.147), ('neurons', 0.115), ('spikes', 0.113), ('patterns', 0.11), ('elds', 0.105), ('canonical', 0.101), ('subspace', 0.097), ('spike', 0.097), ('response', 0.092), ('spanned', 0.086), ('rabbit', 0.081), ('rf', 0.081), ('mutual', 0.081), ('neuronal', 0.075), ('kernel', 0.072), ('cho', 0.07), ('mpg', 0.07), ('ofmachine', 0.07), ('stimuli', 0.069), ('triggered', 0.069), ('cells', 0.066), ('activity', 0.064), ('dc', 0.059), ('neural', 0.052), ('correlations', 0.05), ('pillow', 0.049), ('det', 0.049), ('nr', 0.049), ('informative', 0.048), ('spiking', 0.047), ('ring', 0.047), ('eichhorn', 0.047), ('passband', 0.047), ('pemannstrasse', 0.047), ('signaled', 0.047), ('var', 0.045), ('planck', 0.045), ('neurosci', 0.045), ('cell', 0.043), ('imaging', 0.043), ('responses', 0.043), ('noise', 0.042), ('mi', 0.041), ('sensory', 0.041), ('contoured', 0.041), ('elliptically', 0.041), ('zeck', 0.041), ('phys', 0.041), ('readout', 0.041), ('ensemble', 0.04), ('description', 0.04), ('nonlinear', 0.04), ('lter', 0.039), ('cov', 0.039), ('correlated', 0.038), ('sorted', 0.038), ('timing', 0.038), ('vis', 0.037), ('conveyed', 0.037), ('coding', 0.037), ('decoding', 0.036), ('dimensionality', 0.035), ('rust', 0.035), ('rev', 0.035), ('signal', 0.034), ('sensitive', 0.033), ('interval', 0.033), ('coef', 0.033), ('retina', 0.033), ('convey', 0.033), ('simulated', 0.033), ('calculate', 0.033), ('calculated', 0.031), ('germany', 0.031), ('feature', 0.031), ('trains', 0.03), ('white', 0.029), ('hypothetical', 0.028), ('oriented', 0.028), ('spaced', 0.028), ('selectivity', 0.028), ('tuning', 0.027), ('bingen', 0.027), ('maximally', 0.027), ('cients', 0.026), ('maximizing', 0.026), ('frequency', 0.026), ('spectral', 0.026), ('consisting', 0.025), ('fields', 0.025)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.99999946 164 nips-2007-Receptive Fields without Spike-Triggering

Author: Guenther Zeck, Matthias Bethge, Jakob H. Macke

Abstract: Stimulus selectivity of sensory neurons is often characterized by estimating their receptive field properties such as orientation selectivity. Receptive fields are usually derived from the mean (or covariance) of the spike-triggered stimulus ensemble. This approach treats each spike as an independent message but does not take into account that information might be conveyed through patterns of neural activity that are distributed across space or time. Can we find a concise description for the processing of a whole population of neurons analogous to the receptive field for single neurons? Here, we present a generalization of the linear receptive field which is not bound to be triggered on individual spikes but can be meaningfully linked to distributed response patterns. More precisely, we seek to identify those stimulus features and the corresponding patterns of neural activity that are most reliably coupled. We use an extension of reverse-correlation methods based on canonical correlation analysis. The resulting population receptive fields span the subspace of stimuli that is most informative about the population response. We evaluate our approach using both neuronal models and multi-electrode recordings from rabbit retinal ganglion cells. We show how the model can be extended to capture nonlinear stimulus-response relationships using kernel canonical correlation analysis, which makes it possible to test different coding mechanisms. Our technique can also be used to calculate receptive fields from multi-dimensional neural measurements such as those obtained from dynamic imaging methods. 1

2 0.33765444 33 nips-2007-Bayesian Inference for Spiking Neuron Models with a Sparsity Prior

Author: Sebastian Gerwinn, Matthias Bethge, Jakob H. Macke, Matthias Seeger

Abstract: Generalized linear models are the most commonly used tools to describe the stimulus selectivity of sensory neurons. Here we present a Bayesian treatment of such models. Using the expectation propagation algorithm, we are able to approximate the full posterior distribution over all weights. In addition, we use a Laplacian prior to favor sparse solutions. Therefore, stimulus features that do not critically influence neural activity will be assigned zero weights and thus be effectively excluded by the model. This feature selection mechanism facilitates both the interpretation of the neuron model as well as its predictive abilities. The posterior distribution can be used to obtain confidence intervals which makes it possible to assess the statistical significance of the solution. In neural data analysis, the available amount of experimental measurements is often limited whereas the parameter space is large. In such a situation, both regularization by a sparsity prior and uncertainty estimates for the model parameters are essential. We apply our method to multi-electrode recordings of retinal ganglion cells and use our uncertainty estimate to test the statistical significance of functional couplings between neurons. Furthermore we used the sparsity of the Laplace prior to select those filters from a spike-triggered covariance analysis that are most informative about the neural response. 1

3 0.28331333 140 nips-2007-Neural characterization in partially observed populations of spiking neurons

Author: Jonathan W. Pillow, Peter E. Latham

Abstract: Point process encoding models provide powerful statistical methods for understanding the responses of neurons to sensory stimuli. Although these models have been successfully applied to neurons in the early sensory pathway, they have fared less well capturing the response properties of neurons in deeper brain areas, owing in part to the fact that they do not take into account multiple stages of processing. Here we introduce a new twist on the point-process modeling approach: we include unobserved as well as observed spiking neurons in a joint encoding model. The resulting model exhibits richer dynamics and more highly nonlinear response properties, making it more powerful and more flexible for fitting neural data. More importantly, it allows us to estimate connectivity patterns among neurons (both observed and unobserved), and may provide insight into how networks process sensory input. We formulate the estimation procedure using variational EM and the wake-sleep algorithm, and illustrate the model’s performance using a simulated example network consisting of two coupled neurons.

4 0.18861519 182 nips-2007-Sparse deep belief net model for visual area V2

Author: Honglak Lee, Chaitanya Ekanadham, Andrew Y. Ng

Abstract: Motivated in part by the hierarchical organization of the cortex, a number of algorithms have recently been proposed that try to learn hierarchical, or “deep,” structure from unlabeled data. While several authors have formally or informally compared their algorithms to computations performed in visual area V1 (and the cochlea), little attempt has been made thus far to evaluate these algorithms in terms of their fidelity for mimicking computations at deeper levels in the cortical hierarchy. This paper presents an unsupervised learning model that faithfully mimics certain properties of visual area V2. Specifically, we develop a sparse variant of the deep belief networks of Hinton et al. (2006). We learn two layers of nodes in the network, and demonstrate that the first layer, similar to prior work on sparse coding and ICA, results in localized, oriented, edge filters, similar to the Gabor functions known to model V1 cell receptive fields. Further, the second layer in our model encodes correlations of the first layer responses in the data. Specifically, it picks up both colinear (“contour”) features as well as corners and junctions. More interestingly, in a quantitative comparison, the encoding of these more complex “corner” features matches well with the results from the Ito & Komatsu’s study of biological V2 responses. This suggests that our sparse variant of deep belief networks holds promise for modeling more higher-order features. 1

5 0.18387602 36 nips-2007-Better than least squares: comparison of objective functions for estimating linear-nonlinear models

Author: Tatyana Sharpee

Abstract: This paper compares a family of methods for characterizing neural feature selectivity with natural stimuli in the framework of the linear-nonlinear model. In this model, the neural firing rate is a nonlinear function of a small number of relevant stimulus components. The relevant stimulus dimensions can be found by maximizing one of the family of objective functions, Rényi divergences of different orders [1, 2]. We show that maximizing one of them, Rényi divergence of order 2, is equivalent to least-square fitting of the linear-nonlinear model to neural data. Next, we derive reconstruction errors in relevant dimensions found by maximizing Rényi divergences of arbitrary order in the asymptotic limit of large spike numbers. We find that the smallest errors are obtained with Rényi divergence of order 1, also known as Kullback-Leibler divergence. This corresponds to finding relevant dimensions by maximizing mutual information [2]. We numerically test how these optimization schemes perform in the regime of low signal-to-noise ratio (small number of spikes and increasing neural noise) for model visual neurons. We find that optimization schemes based on either least square fitting or information maximization perform well even when the number of spikes is small. Information maximization provides slightly, but significantly, better reconstructions than least square fitting. This makes the problem of finding relevant dimensions, together with the problem of lossy compression [3], one of the examples where information-theoretic measures are no more data limited than those derived from least squares. 1

6 0.14683871 17 nips-2007-A neural network implementing optimal state estimation based on dynamic spike train decoding

7 0.13828681 177 nips-2007-Simplified Rules and Theoretical Analysis for Information Bottleneck Optimization and PCA with Spiking Neurons

8 0.13364787 81 nips-2007-Estimating disparity with confidence from energy neurons

9 0.12399457 117 nips-2007-Learning to classify complex patterns using a VLSI network of spiking neurons

10 0.11430823 205 nips-2007-Theoretical Analysis of Learning with Reward-Modulated Spike-Timing-Dependent Plasticity

11 0.11037119 111 nips-2007-Learning Horizontal Connections in a Sparse Coding Model of Natural Images

12 0.10937389 60 nips-2007-Contraction Properties of VLSI Cooperative Competitive Neural Networks of Spiking Neurons

13 0.10720682 104 nips-2007-Inferring Neural Firing Rates from Spike Trains Using Gaussian Processes

14 0.098857462 103 nips-2007-Inferring Elapsed Time from Stochastic Neural Processes

15 0.083414696 154 nips-2007-Predicting Brain States from fMRI Data: Incremental Functional Principal Component Regression

16 0.077466071 14 nips-2007-A configurable analog VLSI neural network with spiking neurons and self-regulating plastic synapses

17 0.076077484 25 nips-2007-An in-silico Neural Model of Dynamic Routing through Neuronal Coherence

18 0.07450752 173 nips-2007-Second Order Bilinear Discriminant Analysis for single trial EEG analysis

19 0.072825633 138 nips-2007-Near-Maximum Entropy Models for Binary Neural Representations of Natural Images

20 0.065964043 122 nips-2007-Locality and low-dimensions in the prediction of natural experience from fMRI


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.219), (1, 0.184), (2, 0.36), (3, 0.055), (4, 0.007), (5, -0.036), (6, 0.035), (7, 0.058), (8, 0.023), (9, 0.046), (10, 0.042), (11, -0.022), (12, 0.081), (13, -0.003), (14, 0.07), (15, 0.006), (16, 0.151), (17, 0.08), (18, 0.141), (19, 0.181), (20, -0.005), (21, 0.02), (22, 0.014), (23, 0.057), (24, 0.019), (25, -0.023), (26, 0.054), (27, -0.001), (28, -0.073), (29, 0.061), (30, -0.078), (31, 0.022), (32, -0.076), (33, -0.107), (34, 0.032), (35, -0.034), (36, 0.079), (37, 0.088), (38, 0.084), (39, 0.07), (40, 0.037), (41, -0.055), (42, -0.067), (43, -0.026), (44, -0.037), (45, -0.063), (46, 0.039), (47, 0.032), (48, 0.087), (49, -0.081)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.97396803 164 nips-2007-Receptive Fields without Spike-Triggering

Author: Guenther Zeck, Matthias Bethge, Jakob H. Macke

Abstract: Stimulus selectivity of sensory neurons is often characterized by estimating their receptive field properties such as orientation selectivity. Receptive fields are usually derived from the mean (or covariance) of the spike-triggered stimulus ensemble. This approach treats each spike as an independent message but does not take into account that information might be conveyed through patterns of neural activity that are distributed across space or time. Can we find a concise description for the processing of a whole population of neurons analogous to the receptive field for single neurons? Here, we present a generalization of the linear receptive field which is not bound to be triggered on individual spikes but can be meaningfully linked to distributed response patterns. More precisely, we seek to identify those stimulus features and the corresponding patterns of neural activity that are most reliably coupled. We use an extension of reverse-correlation methods based on canonical correlation analysis. The resulting population receptive fields span the subspace of stimuli that is most informative about the population response. We evaluate our approach using both neuronal models and multi-electrode recordings from rabbit retinal ganglion cells. We show how the model can be extended to capture nonlinear stimulus-response relationships using kernel canonical correlation analysis, which makes it possible to test different coding mechanisms. Our technique can also be used to calculate receptive fields from multi-dimensional neural measurements such as those obtained from dynamic imaging methods.
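The abstract's core computation, finding the stimulus features and response patterns that are most reliably coupled via canonical correlation analysis, can be sketched in a few lines of NumPy. This is a generic regularized CCA on a toy stimulus/response pair, not the authors' code; all variable names and the toy data are illustrative.

```python
import numpy as np

def cca(X, Y, reg=1e-6):
    """Canonical correlation analysis via an SVD of the whitened
    cross-covariance. Columns of A are the stimulus-side filters
    (the "population receptive fields"); columns of B are the
    matched response patterns."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    n = X.shape[0]
    Cxx = X.T @ X / n + reg * np.eye(X.shape[1])
    Cyy = Y.T @ Y / n + reg * np.eye(Y.shape[1])
    Cxy = X.T @ Y / n

    def inv_sqrt(C):
        # symmetric inverse square root via eigendecomposition
        w, V = np.linalg.eigh(C)
        return V @ np.diag(1.0 / np.sqrt(w)) @ V.T

    Kx, Ky = inv_sqrt(Cxx), inv_sqrt(Cyy)
    U, s, Vt = np.linalg.svd(Kx @ Cxy @ Ky, full_matrices=False)
    return s, Kx @ U, Ky @ Vt.T  # correlations, filters, patterns

# toy example: a 2-neuron "population" driven by one hidden stimulus feature
rng = np.random.default_rng(0)
stim = rng.normal(size=(5000, 10))
true_filter = np.zeros(10)
true_filter[3] = 1.0
resp = np.outer(stim @ true_filter, [1.0, -0.5]) + 0.1 * rng.normal(size=(5000, 2))
corrs, filters, patterns = cca(stim, resp)
```

On this toy data the first canonical correlation is close to 1 and the first stimulus filter concentrates its weight on the one informative stimulus dimension, which is exactly the behavior the population receptive field is meant to capture.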

2 0.84311086 33 nips-2007-Bayesian Inference for Spiking Neuron Models with a Sparsity Prior

Author: Sebastian Gerwinn, Matthias Bethge, Jakob H. Macke, Matthias Seeger

Abstract: Generalized linear models are the most commonly used tools to describe the stimulus selectivity of sensory neurons. Here we present a Bayesian treatment of such models. Using the expectation propagation algorithm, we are able to approximate the full posterior distribution over all weights. In addition, we use a Laplacian prior to favor sparse solutions. Therefore, stimulus features that do not critically influence neural activity will be assigned zero weights and thus be effectively excluded by the model. This feature selection mechanism facilitates both the interpretation of the neuron model as well as its predictive abilities. The posterior distribution can be used to obtain confidence intervals which makes it possible to assess the statistical significance of the solution. In neural data analysis, the available amount of experimental measurements is often limited whereas the parameter space is large. In such a situation, both regularization by a sparsity prior and uncertainty estimates for the model parameters are essential. We apply our method to multi-electrode recordings of retinal ganglion cells and use our uncertainty estimate to test the statistical significance of functional couplings between neurons. Furthermore we used the sparsity of the Laplace prior to select those filters from a spike-triggered covariance analysis that are most informative about the neural response.
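The paper approximates the full posterior with expectation propagation; as a rough stand-in, the sparsifying effect of the Laplacian prior can be illustrated with a MAP point estimate, i.e. L1-penalized logistic regression fit by proximal gradient descent (ISTA). The data and names below are invented for illustration and this is not the authors' EP procedure.

```python
import numpy as np

def l1_logistic_map(X, y, lam=0.05, lr=0.1, n_iter=2000):
    """MAP weights for a Bernoulli GLM under a Laplacian (L1) prior,
    fit by proximal gradient descent. The soft-thresholding step is
    what drives irrelevant weights exactly to zero."""
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        grad = X.T @ (p - y) / len(y)       # negative log-likelihood gradient
        w -= lr * grad
        # proximal step for the Laplace prior: soft-threshold
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)
    return w

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 8))
w_true = np.array([2.0, 0, 0, -1.5, 0, 0, 0, 0])   # only 2 relevant features
y = (rng.random(2000) < 1.0 / (1.0 + np.exp(-(X @ w_true)))).astype(float)
w_hat = l1_logistic_map(X, y)
```

The two truly relevant features keep large weights while the six irrelevant ones are shrunk toward zero, mirroring the feature-selection behavior the abstract describes (the full Bayesian treatment additionally yields uncertainty estimates, which this point estimate does not).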

3 0.81720734 140 nips-2007-Neural characterization in partially observed populations of spiking neurons

Author: Jonathan W. Pillow, Peter E. Latham

Abstract: Point process encoding models provide powerful statistical methods for understanding the responses of neurons to sensory stimuli. Although these models have been successfully applied to neurons in the early sensory pathway, they have fared less well capturing the response properties of neurons in deeper brain areas, owing in part to the fact that they do not take into account multiple stages of processing. Here we introduce a new twist on the point-process modeling approach: we include unobserved as well as observed spiking neurons in a joint encoding model. The resulting model exhibits richer dynamics and more highly nonlinear response properties, making it more powerful and more flexible for fitting neural data. More importantly, it allows us to estimate connectivity patterns among neurons (both observed and unobserved), and may provide insight into how networks process sensory input. We formulate the estimation procedure using variational EM and the wake-sleep algorithm, and illustrate the model’s performance using a simulated example network consisting of two coupled neurons.

4 0.78619516 81 nips-2007-Estimating disparity with confidence from energy neurons

Author: Eric K. Tsang, Bertram E. Shi

Abstract: The peak location in a population of phase-tuned neurons has been shown to be a more reliable estimator for disparity than the peak location in a population of position-tuned neurons. Unfortunately, the disparity range covered by a phase-tuned population is limited by phase wraparound. Thus, a single population cannot cover the large range of disparities encountered in natural scenes unless the scale of the receptive fields is chosen to be very large, which results in very low resolution depth estimates. Here we describe a biologically plausible measure of the confidence that the stimulus disparity is inside the range covered by a population of phase-tuned neurons. Based upon this confidence measure, we propose an algorithm for disparity estimation that uses many populations of high-resolution phase-tuned neurons that are biased to different disparity ranges via position shifts between the left and right eye receptive fields. The population with the highest confidence is used to estimate the stimulus disparity. We show that this algorithm outperforms a previously proposed coarse-to-fine algorithm for disparity estimation, which uses disparity estimates from coarse scales to select the populations used at finer scales, and can effectively detect occlusions.

5 0.69355196 36 nips-2007-Better than least squares: comparison of objective functions for estimating linear-nonlinear models

Author: Tatyana Sharpee

Abstract: This paper compares a family of methods for characterizing neural feature selectivity with natural stimuli in the framework of the linear-nonlinear model. In this model, the neural firing rate is a nonlinear function of a small number of relevant stimulus components. The relevant stimulus dimensions can be found by maximizing one of a family of objective functions, Rényi divergences of different orders [1, 2]. We show that maximizing one of them, the Rényi divergence of order 2, is equivalent to least-square fitting of the linear-nonlinear model to neural data. Next, we derive reconstruction errors in relevant dimensions found by maximizing Rényi divergences of arbitrary order in the asymptotic limit of large spike numbers. We find that the smallest errors are obtained with the Rényi divergence of order 1, also known as the Kullback-Leibler divergence. This corresponds to finding relevant dimensions by maximizing mutual information [2]. We numerically test how these optimization schemes perform in the regime of low signal-to-noise ratio (small number of spikes and increasing neural noise) for model visual neurons. We find that optimization schemes based on either least-square fitting or information maximization perform well even when the number of spikes is small. Information maximization provides slightly, but significantly, better reconstructions than least-square fitting. This makes the problem of finding relevant dimensions, together with the problem of lossy compression [3], one of the examples where information-theoretic measures are no more data limited than those derived from least squares.
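The order-1 objective mentioned in the abstract, the KL divergence between the spike-conditional and raw distributions of stimulus projections along a candidate direction, can be estimated with simple histograms. A minimal sketch, assuming binary spike labels and Gaussian stimuli; the bin count and the toy LN model parameters are arbitrary choices, not values from the paper.

```python
import numpy as np

def projection_information(stim, spikes, v, n_bins=20):
    """Histogram estimate of D_KL( P(x | spike) || P(x) ) along the
    candidate direction v, i.e. the order-1 (mutual-information)
    objective for a relevant stimulus dimension. Returned in bits."""
    x = stim @ (v / np.linalg.norm(v))
    edges = np.histogram_bin_edges(x, bins=n_bins)
    p_raw, _ = np.histogram(x, bins=edges)
    p_spk, _ = np.histogram(x, bins=edges, weights=spikes)
    p_raw = p_raw / p_raw.sum()
    p_spk = p_spk / p_spk.sum()
    mask = (p_spk > 0) & (p_raw > 0)
    return float(np.sum(p_spk[mask] * np.log2(p_spk[mask] / p_raw[mask])))

rng = np.random.default_rng(2)
stim = rng.normal(size=(20000, 5))
v_true = np.array([1.0, 0, 0, 0, 0])
rate = 1.0 / (1.0 + np.exp(-3.0 * (stim @ v_true - 1.0)))   # toy LN neuron
spikes = (rng.random(20000) < rate).astype(float)

info_true = projection_information(stim, spikes, v_true)
info_rand = projection_information(stim, spikes, np.array([0, 1.0, 0, 0, 0]))
```

Projecting onto the true relevant dimension yields a large divergence, while an irrelevant direction gives a value near zero (up to histogram bias); maximizing this quantity over directions is the information-maximization scheme the abstract compares against least squares.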

6 0.63553518 60 nips-2007-Contraction Properties of VLSI Cooperative Competitive Neural Networks of Spiking Neurons

7 0.48819622 138 nips-2007-Near-Maximum Entropy Models for Binary Neural Representations of Natural Images

8 0.47778097 182 nips-2007-Sparse deep belief net model for visual area V2

9 0.45129597 17 nips-2007-A neural network implementing optimal state estimation based on dynamic spike train decoding

10 0.45111609 177 nips-2007-Simplified Rules and Theoretical Analysis for Information Bottleneck Optimization and PCA with Spiking Neurons

11 0.43967825 35 nips-2007-Bayesian binning beats approximate alternatives: estimating peri-stimulus time histograms

12 0.43596983 205 nips-2007-Theoretical Analysis of Learning with Reward-Modulated Spike-Timing-Dependent Plasticity

13 0.41870573 104 nips-2007-Inferring Neural Firing Rates from Spike Trains Using Gaussian Processes

14 0.3955217 26 nips-2007-An online Hebbian learning rule that performs Independent Component Analysis

15 0.39432201 25 nips-2007-An in-silico Neural Model of Dynamic Routing through Neuronal Coherence

16 0.36243734 111 nips-2007-Learning Horizontal Connections in a Sparse Coding Model of Natural Images

17 0.31252381 103 nips-2007-Inferring Elapsed Time from Stochastic Neural Processes

18 0.31023848 117 nips-2007-Learning to classify complex patterns using a VLSI network of spiking neurons

19 0.30338672 28 nips-2007-Augmented Functional Time Series Representation and Forecasting with Gaussian Processes

20 0.30192739 59 nips-2007-Continuous Time Particle Filtering for fMRI


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(5, 0.06), (13, 0.034), (16, 0.119), (18, 0.019), (19, 0.03), (21, 0.082), (31, 0.01), (34, 0.028), (35, 0.024), (36, 0.263), (47, 0.086), (49, 0.016), (83, 0.074), (87, 0.016), (90, 0.06)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.81839836 164 nips-2007-Receptive Fields without Spike-Triggering

Author: Guenther Zeck, Matthias Bethge, Jakob H. Macke

Abstract: Stimulus selectivity of sensory neurons is often characterized by estimating their receptive field properties such as orientation selectivity. Receptive fields are usually derived from the mean (or covariance) of the spike-triggered stimulus ensemble. This approach treats each spike as an independent message but does not take into account that information might be conveyed through patterns of neural activity that are distributed across space or time. Can we find a concise description for the processing of a whole population of neurons analogous to the receptive field for single neurons? Here, we present a generalization of the linear receptive field which is not bound to be triggered on individual spikes but can be meaningfully linked to distributed response patterns. More precisely, we seek to identify those stimulus features and the corresponding patterns of neural activity that are most reliably coupled. We use an extension of reverse-correlation methods based on canonical correlation analysis. The resulting population receptive fields span the subspace of stimuli that is most informative about the population response. We evaluate our approach using both neuronal models and multi-electrode recordings from rabbit retinal ganglion cells. We show how the model can be extended to capture nonlinear stimulus-response relationships using kernel canonical correlation analysis, which makes it possible to test different coding mechanisms. Our technique can also be used to calculate receptive fields from multi-dimensional neural measurements such as those obtained from dynamic imaging methods.

2 0.71137381 142 nips-2007-Non-parametric Modeling of Partially Ranked Data

Author: Guy Lebanon, Yi Mao

Abstract: Statistical models on full and partial rankings of n items are often of limited practical use for large n due to computational considerations. We explore the use of non-parametric models for partially ranked data and derive efficient procedures for their use for large n. The derivations are largely possible through combinatorial and algebraic manipulations based on the lattice of partial rankings. In particular, we demonstrate for the first time a non-parametric coherent and consistent model capable of efficiently aggregating partially ranked data of different types.

3 0.63136584 177 nips-2007-Simplified Rules and Theoretical Analysis for Information Bottleneck Optimization and PCA with Spiking Neurons

Author: Lars Buesing, Wolfgang Maass

Abstract: We show that under suitable assumptions (primarily linearization) a simple and perspicuous online learning rule for Information Bottleneck optimization with spiking neurons can be derived. This rule performs on common benchmark tasks as well as a rather complex rule that has previously been proposed [1]. Furthermore, the transparency of this new learning rule makes a theoretical analysis of its convergence properties feasible. A variation of this learning rule (with sign changes) provides a theoretically founded method for performing Principal Component Analysis (PCA) with spiking neurons. By applying this rule to an ensemble of neurons, different principal components of the input can be extracted. In addition, it is possible to preferentially extract those principal components from incoming signals X that are related or are not related to some additional target signal YT. In a biological interpretation, this target signal YT (also called relevance variable) could represent proprioceptive feedback, input from other sensory modalities, or top-down signals.
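The paper derives a spike-based learning rule for PCA; a classical rate-based relative that conveys the same idea is Oja's online Hebbian rule, which converges to the first principal component of its input stream. The sketch below is that textbook rule, not the paper's spiking rule, and the data are synthetic.

```python
import numpy as np

def oja_pc1(X, lr=0.01, n_epochs=20, seed=0):
    """Online Hebbian (Oja) learning of the first principal component.
    Each update strengthens the weight along correlated input (y * x)
    while the -y**2 * w decay term keeps the weight vector normalized."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=X.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(n_epochs):
        for x in X:
            y = w @ x                      # neuron's linear response
            w += lr * y * (x - y * w)      # Hebbian term + Oja decay
    return w / np.linalg.norm(w)

rng = np.random.default_rng(3)
# zero-mean data with dominant variance along the first axis
data = rng.normal(size=(3000, 4)) * np.array([3.0, 1.0, 0.5, 0.2])
w = oja_pc1(data)
```

Since the first axis carries most of the variance, the learned weight vector aligns with it (up to sign); running several such neurons with lateral decorrelation extracts further components, which is the ensemble setting the abstract describes.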

4 0.62215143 107 nips-2007-Iterative Non-linear Dimensionality Reduction with Manifold Sculpting

Author: Michael Gashler, Dan Ventura, Tony Martinez

Abstract: Many algorithms have been recently developed for reducing dimensionality by projecting data onto an intrinsic non-linear manifold. Unfortunately, existing algorithms often lose significant precision in this transformation. Manifold Sculpting is a new algorithm that iteratively reduces dimensionality by simulating surface tension in local neighborhoods. We present several experiments that show Manifold Sculpting yields more accurate results than existing algorithms with both generated and natural data-sets. Manifold Sculpting is also able to benefit from prior dimensionality reduction efforts.

5 0.59686404 33 nips-2007-Bayesian Inference for Spiking Neuron Models with a Sparsity Prior

Author: Sebastian Gerwinn, Matthias Bethge, Jakob H. Macke, Matthias Seeger

Abstract: Generalized linear models are the most commonly used tools to describe the stimulus selectivity of sensory neurons. Here we present a Bayesian treatment of such models. Using the expectation propagation algorithm, we are able to approximate the full posterior distribution over all weights. In addition, we use a Laplacian prior to favor sparse solutions. Therefore, stimulus features that do not critically influence neural activity will be assigned zero weights and thus be effectively excluded by the model. This feature selection mechanism facilitates both the interpretation of the neuron model as well as its predictive abilities. The posterior distribution can be used to obtain confidence intervals which makes it possible to assess the statistical significance of the solution. In neural data analysis, the available amount of experimental measurements is often limited whereas the parameter space is large. In such a situation, both regularization by a sparsity prior and uncertainty estimates for the model parameters are essential. We apply our method to multi-electrode recordings of retinal ganglion cells and use our uncertainty estimate to test the statistical significance of functional couplings between neurons. Furthermore we used the sparsity of the Laplace prior to select those filters from a spike-triggered covariance analysis that are most informative about the neural response.

6 0.58118492 170 nips-2007-Robust Regression with Twinned Gaussian Processes

7 0.58037609 140 nips-2007-Neural characterization in partially observed populations of spiking neurons

8 0.57533085 206 nips-2007-Topmoumoute Online Natural Gradient Algorithm

9 0.57213205 60 nips-2007-Contraction Properties of VLSI Cooperative Competitive Neural Networks of Spiking Neurons

10 0.56310809 104 nips-2007-Inferring Neural Firing Rates from Spike Trains Using Gaussian Processes

11 0.55192131 36 nips-2007-Better than least squares: comparison of objective functions for estimating linear-nonlinear models

12 0.55072552 195 nips-2007-The Generalized FITC Approximation

13 0.52004057 138 nips-2007-Near-Maximum Entropy Models for Binary Neural Representations of Natural Images

14 0.51761746 205 nips-2007-Theoretical Analysis of Learning with Reward-Modulated Spike-Timing-Dependent Plasticity

15 0.5157218 79 nips-2007-Efficient multiple hyperparameter learning for log-linear models

16 0.51254362 93 nips-2007-GRIFT: A graphical model for inferring visual classification features from human data

17 0.51029497 87 nips-2007-Fast Variational Inference for Large-scale Internet Diagnosis

18 0.50895363 28 nips-2007-Augmented Functional Time Series Representation and Forecasting with Gaussian Processes

19 0.50408375 158 nips-2007-Probabilistic Matrix Factorization

20 0.50376678 174 nips-2007-Selecting Observations against Adversarial Objectives