nips nips2013 nips2013-305 knowledge-graph by maker-knowledge-mining

305 nips-2013-Spectral methods for neural characterization using generalized quadratic models


Source: pdf

Author: Il M. Park, Evan W. Archer, Nicholas Priebe, Jonathan W. Pillow

Abstract: We describe a set of fast, tractable methods for characterizing neural responses to high-dimensional sensory stimuli using a model we refer to as the generalized quadratic model (GQM). The GQM consists of a low-rank quadratic function followed by a point nonlinearity and exponential-family noise. The quadratic function characterizes the neuron’s stimulus selectivity in terms of a set of linear receptive fields followed by a quadratic combination rule, and the invertible nonlinearity maps this output to the desired response range. Special cases of the GQM include the 2nd-order Volterra model [1, 2] and the elliptical Linear-Nonlinear-Poisson model [3]. Here we show that for “canonical form” GQMs, spectral decomposition of the first two response-weighted moments yields approximate maximum-likelihood estimators via a quantity called the expected log-likelihood. The resulting theory generalizes moment-based estimators such as the spike-triggered covariance, and, in the Gaussian noise case, provides closed-form estimators under a large class of non-Gaussian stimulus distributions. We show that these estimators are fast and provide highly accurate estimates with far lower computational cost than full maximum likelihood. Moreover, the GQM provides a natural framework for combining multi-dimensional stimulus sensitivity and spike-history dependencies within a single model. We show applications to both analog and spiking data using intracellular recordings of V1 membrane potential and extracellular recordings of retinal spike trains.

Reference: text


Summary: the most important sentences generated by tfidf model

sentIndex sentText sentNum sentScore

1 Spectral methods for neural characterization using generalized quadratic models Il Memming Park∗123, Evan Archer∗13, Nicholas Priebe14, & Jonathan W. [sent-1, score-0.149]

2 Abstract We describe a set of fast, tractable methods for characterizing neural responses to high-dimensional sensory stimuli using a model we refer to as the generalized quadratic model (GQM). [sent-10, score-0.337]

3 The GQM consists of a low-rank quadratic function followed by a point nonlinearity and exponential-family noise. [sent-11, score-0.167]

4 The quadratic function characterizes the neuron’s stimulus selectivity in terms of a set of linear receptive fields followed by a quadratic combination rule, and the invertible nonlinearity maps this output to the desired response range. [sent-12, score-0.677]

5 Here we show that for “canonical form” GQMs, spectral decomposition of the first two response-weighted moments yields approximate maximum-likelihood estimators via a quantity called the expected log-likelihood. [sent-14, score-0.188]

6 The resulting theory generalizes moment-based estimators such as the spike-triggered covariance, and, in the Gaussian noise case, provides closed-form estimators under a large class of non-Gaussian stimulus distributions. [sent-15, score-0.544]

7 We show that these estimators are fast and provide highly accurate estimates with far lower computational cost than full maximum likelihood. [sent-16, score-0.112]

8 Moreover, the GQM provides a natural framework for combining multi-dimensional stimulus sensitivity and spike-history dependencies within a single model. [sent-17, score-0.373]

9 We show applications to both analog and spiking data using intracellular recordings of V1 membrane potential and extracellular recordings of retinal spike trains. [sent-18, score-0.522]

10 1 Introduction Although sensory stimuli are high-dimensional, sensory neurons are typically sensitive to only a small number of stimulus features. [sent-19, score-0.485]

11 These filters, which describe how the stimulus is integrated over space and time, can be considered the first stage in a “cascade” model of neural responses. [sent-21, score-0.368]

12 In the well-known linear-nonlinear-Poisson (LNP) cascade model, filter outputs are combined via a nonlinear function to produce an instantaneous spike rate, which generates spikes via an inhomogeneous Poisson process [4, 5]. [sent-22, score-0.277]
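To make the cascade concrete, here is a minimal Python sketch of an LNP simulation; the stimulus dimensions, filter shape, and offset are made-up values for illustration, not parameters from the paper.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes and filter, chosen only to make the example run.
N, d = 5000, 20
X = rng.standard_normal((N, d))      # white Gaussian stimulus, one row per time bin
w = np.exp(-np.arange(d) / 5.0)
w /= np.linalg.norm(w)               # toy temporal receptive field

rate = np.exp(X @ w - 1.0)           # filter output -> nonlinearity f = exp -> spike rate
y = rng.poisson(rate)                # inhomogeneous Poisson spike counts per bin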

13 The most popular methods for dimensionality reduction with spike train data involve the first two moments of the spike-triggered stimulus distribution: (1) the spike-triggered average (STA) [7–9]; and (2) major and minor eigenvectors of the spike-triggered covariance (STC) matrix [10, 11]. [sent-23, score-0.679]
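Continuing the sketch above, both moments are plain response-weighted averages; the per-spike normalization used here is one common convention and is my assumption.

# Spike-triggered average (STA) and covariance (STC) from (X, y) above.
n_sp = y.sum()
sta = (y @ X) / n_sp
stc = (X.T @ (X * y[:, None])) / n_sp - np.outer(sta, sta)
evals, evecs = np.linalg.eigh(stc)   # major/minor eigenvectors = candidate filters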

14 Related moment-based estimators have also appeared in the statistics literature under the names “inverse regression” and “sufficient dimensionality reduction”, although the connection to STA and STC analysis does not appear to have been noted previously [12, 13]. [sent-25, score-0.108]

15 [Figure 1 labels: Generalized Quadratic Model; linear filters; nonlinear quadratic function; noise; recurrent filters; stimulus; spikes or analog response] [sent-26, score-0.638]

16 Figure 1: Schematic of the generalized quadratic model (GQM) for analog or spike train data. [sent-29, score-0.403]

17 Recently, Park and Pillow [3] described a connection between STA/STC analysis and maximum likelihood estimators based on a quantity called the expected log-likelihood (EL). [sent-32, score-0.107]

18 The EL results from replacing the nonlinear term in the log-likelihood with its expectation over the stimulus distribution. [sent-33, score-0.374]

19 When the stimulus is Gaussian, the EL depends only on moments (mean spike rate, STA, STC, and stimulus mean and covariance) and leads to a closed-form spectral estimate for LNP filters, which has STC analysis as a special case. [sent-34, score-0.917]

20 More recently, Ramirez and Paninski derived EL-based estimators for the linear Gaussian model and proposed fast EL-based inference methods for generalized linear models (GLMs) [14]. [sent-35, score-0.106]

21 Here, we show that the EL framework can be extended to a more general class that we refer to as the generalized quadratic model (GQM). [sent-36, score-0.12]

22 The GQM represents a straightforward extension of the generalized linear model (GLM) [15, 16] wherein the linear predictor is replaced by a quadratic function (Fig. [sent-37, score-0.12]

23 For Gaussian and Poisson GQMs, we derive computationally efficient EL-based estimators that apply to a variety of non-Gaussian stimulus distributions; this substantially extends previous work on the conditions of validity for moment-based estimators [7,17–19]. [sent-39, score-0.497]

24 In the Gaussian case, the EL-based estimator has a closed-form solution that relies only on the first two response-weighted moments and the first four stimulus moments. [sent-40, score-0.379]

25 , where the response depends on multiple projections of the stimulus) and dependencies on spike history. [sent-43, score-0.27]

26 We show that spectral estimates of a low-dimensional feature space are nearly as accurate as maximum likelihood estimates (for GQMs without spike-history), and demonstrate the applicability of GQMs for both analog and spiking data. [sent-44, score-0.216]

27 A GLM has three basic components: a linear stimulus filter, an invertible nonlinearity (or “inverse link” function), and an exponential-family noise model. [sent-46, score-0.46]

28 The GLM describes the conditional response y to a vector stimulus x as: y|x ∼ P(f(w^T x)), (1) where w is the filter, f is the nonlinearity, and P(λ) denotes a noise distribution function with mean λ. [sent-47, score-0.464]
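A one-line sampler for eq. (1), continuing the running example; the choices P = Poisson and f = exp are illustrative, not the only ones the GLM admits.

def glm_sample(X, w, f=np.exp, noise=rng.poisson):
    """Draw y | x ~ P(f(w^T x)) for each stimulus row (eq. 1)."""
    return noise(f(X @ w))

y_glm = glm_sample(X, w)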

29 From the standpoint of dimensionality reduction, the GLM makes the strong modeling assumption that response y depends upon x only via its one-dimensional projection onto w. [sent-48, score-0.129]

30 Spike-triggered covariance analysis and related methods provide low-cost estimates of the filters {w_i} under Poisson or Bernoulli noise models, but only under restrictive conditions on the stimulus distribution (e. [sent-53, score-0.455]

31 Semi-parametric estimators like “maximally informative dimensions” (MID) eliminate these restrictions [20], but do not practically scale beyond two or three filters without additional modeling assumptions [21]. [sent-56, score-0.079]

32 The generalized quadratic model (GQM) provides a tractable middle ground between the GLM and general multi-filter LN models. [sent-57, score-0.12]

33 The GQM allows for multi-dimensional stimulus dependence, yet restricts the nonlinearity to be a transformed quadratic function [22–25]. [sent-58, score-0.506]

34 The GQM can be written: y|x ∼ P(f(Q(x))), (3) where Q(x) = x^T C x + b^T x + a denotes a quadratic function of x, governed by a (possibly low-rank) symmetric matrix C, a vector b, and a scalar a. [sent-59, score-0.093]
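A sketch of the quadratic form and a canonical-form Poisson GQM sample, continuing the example; the rank-1 C built from a hypothetical filter w1 is an illustrative choice.

def Q(X, C, b, a):
    """Q(x) = x^T C x + b^T x + a, evaluated for each row of X."""
    return np.einsum('ni,ij,nj->n', X, C, X) + X @ b + a

w1 = rng.standard_normal(d)
w1 /= np.linalg.norm(w1)             # hypothetical quadratic filter
C = -0.5 * np.outer(w1, w1)          # negative sign -> suppressive quadratic dependence
b, a = w.copy(), -1.0
y_gqm = rng.poisson(np.exp(Q(X, C, b, a)))   # eq. (3) with P = Poisson, f = exp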

35 Note that the GQM may be regarded as a GLM in the space of quadratically transformed stimuli [6], although this approach does not allow Q(x) to be parametrized directly in terms of a projection onto a small number of linear filters. [sent-60, score-0.122]

36 Finally, we show that the GQM provides a natural framework for combining multi-dimensional stimulus sensitivity with dependencies on spike train history or other response covariates. [sent-63, score-0.672]

37 The expected log-likelihood (EL) results from replacing the nonlinear term with its expectation over the stimulus distribution P (x), which in neurophysiology settings is often known a priori to the experimenter. [sent-67, score-0.402]

38 Maximizing the EL results in maximum expected log-likelihood (MEL) estimators that have very low computational cost while achieving nearly the accuracy of full maximum likelihood (ML) estimators. [sent-68, score-0.107]

39 Spectral decompositions derived from the EL provide estimators that generalize STA/STC analysis. [sent-69, score-0.079]

40 In the following, we derive MEL estimators for three special cases—two for the Gaussian noise model, and one for the Poisson noise model. [sent-70, score-0.173]

41 Gaussian GQMs Gaussian noise provides a natural model for analog neural response variables like membrane potential or fluorescence. [sent-72, score-0.278]

42 The canonical nonlinearity for Gaussian noise is the identity function, f(x) = x. [sent-73, score-0.153]

43 The expected log-likelihood results from replacing the troublesome nonlinear term (1/N) Σ_i Q(x_i)^2 by its expectation over the stimulus distribution. [sent-76, score-0.402]

44 This is justified by the law of large numbers, which ... [Footnote 2: When responses y_i are spike counts, these correspond to the STA and STC.] [sent-77, score-0.253]

45 Gaussian stimuli: If the stimuli are drawn from a Gaussian distribution, x ∼ N(0, Σ), then we have (from [26]): E[Q(x)^2] = 2 Tr((CΣ)^2) + Tr(b^T Σ b) + (Tr(CΣ) + a)^2. [sent-80, score-0.2]
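The closed form is easy to sanity-check by Monte Carlo, continuing the example; Σ = I is an arbitrary choice here.

Sigma = np.eye(d)
xs = rng.multivariate_normal(np.zeros(d), Sigma, size=200_000)
mc = np.mean(Q(xs, C, b, a) ** 2)
CS = C @ Sigma
closed = 2 * np.trace(CS @ CS) + b @ Sigma @ b + (np.trace(CS) + a) ** 2
print(mc, closed)                    # the two should agree up to sampling error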

46 Axis-symmetric stimuli: More generally, we can derive the MEL estimator for stimuli with arbitrary axis-symmetric distributions with finite 4th-order moments. [sent-82, score-0.2]

47 a, b, and C, respectively, we obtain conditions for the MEL estimates: ȳ = E[Q(x)] = a + b^T E[x] + Tr(C E[xx^T]); μ = E[Q(x) x] = a E[x] + E[xx^T] b + Σ_{i,j} C_ij E[x_i x_j x]; Λ = E[Q(x) xx^T] = a E[xx^T] + Σ_i b_i E[x_i xx^T] + Σ_{i,j} C_ij E[x_i x_j xx^T], where the subindices within the sums run over vector components. [sent-98, score-0.23]
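The empirical counterparts of ȳ, μ, and Λ are again response-weighted averages; the normalization by N below is my assumption, since the extracted text does not pin the convention down.

ybar = y_gqm.mean()
mu = (y_gqm @ X) / N                          # ~ (1/N) sum_i y_i x_i
Lam = (X.T @ (X * y_gqm[:, None])) / N        # ~ (1/N) sum_i y_i x_i x_i^T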

48 If we further assume that the stimulus is whitened so that E[xx^T] = I, sufficient stimulus statistics are the 4th-order even moments, which we represent with the matrix M_ij = E[x_i^2 x_j^2]. [sent-101, score-0.678]

49 i j In general, when the marginals are not identical but the joint distribution is axis-symmetric, Cii diag(x2 x2 , · · · , x2 x2 ) + i 1 i d Cij E[xi xj xx ] = ij i Cij Mij ei ej (11) i=j = diag(1 (I ◦ C)M ) + C ◦ M ◦ (11 − I). [sent-102, score-0.136]

50 We can solve these sets of linear equations for the diagonal and off-diagonal terms separately, obtaining [C_mel]_ij = Λ_ij / (2 M_ij) for i ≠ j, with the diagonal terms given by the linear solve diag(C_mel) = (M − 11^T)^{-1} Ω, where Ω_i = Λ_ii − ȳ collects the diagonal moment conditions. (12)

[Figure 2 panel labels: 2D stimulus distribution; neural response; assumed stimulus distribution; Gaussian, r^2 = 0.99; time]

51 Figure 2: Maximum expected log-likelihood (MEL) estimators for a Gaussian GQM under different assumptions about the stimulus distribution.

52 Performance is evaluated on a cross-validation test set with no noise for each C, and we see a huge loss in performance as a result of an incorrect assumption about the stimulus distribution.

53 This gives the simplified formula (also given in [27]): [C_mel]_ij = Λ_ij / (2 μ_22) for i ≠ j, and [C_mel]_ii = (Λ_ii − ȳ) / (μ_4 − μ_22) for i = j. (14) When the stimulus is not Gaussian or the marginals not identical, the estimates obtained from (eq.
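A sketch of eq. (14) as reconstructed above, applied to the whitened Gaussian samples xs (for which μ22 = 1 and μ4 = 3); reusing the Poisson-example moments here is purely to show the mechanics, since (14) is stated for the Gaussian-noise GQM.

mu4 = np.mean(xs[:, 0] ** 4)                  # E[x_i^4]
mu22 = np.mean(xs[:, 0] ** 2 * xs[:, 1] ** 2) # E[x_i^2 x_j^2], i != j
C_mel = Lam / (2 * mu22)                      # off-diagonal terms
ii = np.arange(d)
C_mel[ii, ii] = (np.diag(Lam) - ybar) / (mu4 - mu22)   # diagonal terms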

54 Poisson GQM Poisson noise provides a natural model for discrete events like spike counts, and extends easily to point process models for spike trains.

55 The canonical nonlinearity for Poisson noise is exponential, f(x) = exp(x), so the canonical-form Poisson GQM is: y|x ∼ Poiss(exp(Q(x))).

56 Under a zero-mean Gaussian stimulus distribution with covariance Σ, the closed-form MEL estimates are (from [3]): b_mel = (Λ + (1/ȳ^2) μμ^T)^{-1} μ, C_mel = (1/2) (Σ^{-1} − (1/ȳ) (Λ + (1/ȳ^2) μμ^T)^{-1}), (16) where we assume that Λ + (1/ȳ^2) μμ^T is invertible.
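A literal transcription of eq. (16) as reconstructed above; given how badly the equation extracted, the placement of the 1/ȳ factors and the normalization of μ and Λ are my reading and should be checked against the original paper.

A = Lam + np.outer(mu, mu) / ybar**2
A_inv = np.linalg.inv(A)                      # requires invertibility, as the text assumes
b_mel = A_inv @ mu
C_mel_poiss = 0.5 * (np.linalg.inv(Sigma) - A_inv / ybar)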

57 [Figure 3 legend and axes: filter estimation error (L1 error), optimization time (seconds); MELE (1st eigenvector), rank-1 MELE, rank-1 ML] Figure 3: Rank-1 quadratic filter reconstruction performance.

58 Mixture-of-Gaussians stimuli: Results for Gaussian stimuli extend naturally to mixtures of Gaussians, which can be used to approximate arbitrary stimulus distributions.
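For mixture stimuli the EL's expectation term decomposes across components; a sketch for the zero-mean-component case, where the closed form above applies to each component (general component means would need a fuller formula).

def EQ2_gauss(Sigma_k, C, b, a):
    # E[Q(x)^2] for x ~ N(0, Sigma_k), using the Gaussian closed form above
    CS = C @ Sigma_k
    return 2 * np.trace(CS @ CS) + b @ Sigma_k @ b + (np.trace(CS) + a) ** 2

def EQ2_mixture(pis, Sigmas, C, b, a):
    # linearity of expectation: E_mix[g(x)] = sum_k pi_k E_k[g(x)]
    return sum(p_k * EQ2_gauss(S_k, C, b, a) for p_k, S_k in zip(pis, Sigmas))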

59 The EL for mixture-of-Gaussian stimuli can be computed simply via the linearity of expectation. [sent-138, score-0.1]

60 Spectral estimation for low-dimensional models Low-rank parameterization We have so far focused upon MEL estimators for the parameters a, b, and C.

61 Under the GQM, a low-dimensional stimulus dependence is equivalent to having a low-rank C. [sent-144, score-0.339]

62 We can obtain spectral estimates of a low-dimensional GQM by performing an eigenvector decomposition of Cmel and selecting the eigenvectors corresponding to the largest p eigenvalues. [sent-146, score-0.153]
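Continuing the example, the low-dimensional feature space is read off the spectrum of the MEL estimate; ranking by eigenvalue magnitude (rather than signed value) is my choice here, so that suppressive filters are kept too.

p = 2
vals, vecs = np.linalg.eigh(C_mel_poiss)
order = np.argsort(np.abs(vals))[::-1]
W = vecs[:, order[:p]]                         # estimated quadratic filters
C_lowrank = W @ np.diag(vals[order[:p]]) @ W.T # rank-p reconstruction of C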

63 Consistency of subspace estimates If the conditional probability p(y|x) = p(y|β^T x) for a matrix β, the neural feature space is spanned by the columns of β.

64 As a generalization of STC, we introduce moment-based dimensionality reduction [Figure 4 labels: (A) quadratic filter w1; (B) quadratic filter w2; axis: space]

65 Figure 4: GQM fit and prediction for intracellular recording in cat V1 with a trinary noise stimulus.

66 (A) On top, estimated linear (b) and quadratic (w1 and w2) filters for the GQM, lagged by 20 ms.

67 (B) Cross-validated model prediction (red) and n = 94 recordings with repeats of identical stimulus (light grey) along with their mean (black). [sent-172, score-0.392]

68 techniques that recover (portions of) β and show the relationship of these techniques to the MEL estimators of GQM. [sent-175, score-0.079]

69 When the response is binary, this coincides with the traditional STA/STC analysis, which is provably consistent only in the case of stimuli drawn from a spherically symmetric (for STA) or independent Gaussian distribution (for STC) [5]. [sent-177, score-0.178]

70 Below, we argue that this procedure can identify the subspace when y has mean f(β^T x) with finite variance, f is some function, and the stimulus distribution is zero-mean with white covariance, i.

71 Effective estimation of the subspace depends critically on both the stimulus distribution and the form of f . [sent-185, score-0.365]

72 Under the GQM, the eigenvectors of E[y xx^T] are closely related to the expected log-likelihood estimators we derived earlier.
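A sketch of the moment-based subspace estimate for whitened, zero-mean stimuli; flagging directions whose eigenvalues deviate most from the background level ȳ mirrors STC analysis and is my reading of the procedure (M2 here coincides with Lam above).

M2 = (X.T @ (X * y_gqm[:, None])) / N          # ~ E[y x x^T]
vals2, vecs2 = np.linalg.eigh(M2)
dev = np.abs(vals2 - y_gqm.mean())             # deviation from the background ybar * I
beta_hat = vecs2[:, np.argsort(dev)[::-1][:p]] # estimated feature-space basis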

73 Results Intracellular membrane potential We fit a Gaussian GQM to intracellular recordings of membrane potential from a neuron in cat V1, using a 2D spatiotemporal “flickering bars” stimulus aligned with the cell’s preferred orientation (Fig.

74 6 minute recording of responses to a non-repeating trinary noise stimulus.

75 We validated the model using responses to 94 repeats of a 1-second frozen noise stimulus.

76 Although the cell was classified as “simple”, meaning that its response is predominantly linear, the GQM fit reveals two quadratic filters that also influence the membrane potential response.

77 [Figure 5 labels: stimulus filter; gain; linear rate prediction (test data); spike history; GLM and GQM traces; axes: time (stimulus frames), time (ms), 1 sec]

78 Figure 5: (left) GLM and GQM filters fit to spike responses of a retinal ganglion cell stimulated with a 120 Hz binary full-field noise stimulus [28].

79 The GLM has only linear stimulus and spike history filters (top left), while the GQM contains all four filters.

80 Quadratic filter outputs are squared and then subtracted from other inputs, giving them a suppressive effect on spiking (although quadratic excitation is also possible). [sent-213, score-0.187]

81 Retinal ganglion spike train The Poisson GLM provides a popular model for neural spike trains due to its ability to incorporate dependencies on spike history (e.

82 The GLM achieves this by incorporating a one-dimensional linear projection of spike history as an input to the model. [sent-220, score-0.243]

83 In general, however, a spike train may exhibit dependencies on more than one linear projection of spike history. [sent-221, score-0.372]

84 The GQM extends the GLM by allowing multiple stimulus filters and multiple spike-history filters. [sent-222, score-0.339]

85 , as found in complex cells) and produce dynamic spike patterns unachievable by GLMs. [sent-225, score-0.158]

86 We fit a Poisson GQM with a quadratic history filter to data recorded from a retinal ganglion cell driven by a full-field white noise stimulus [28]. [sent-226, score-0.672]

87 For ease of comparison, we fit a Poisson GLM, then added quadratic stimulus and history filters, initialized using a spectral decomposition of the MEL estimate (eq. [sent-227, score-0.536]

88 Both quadratic filters (which enter with negative sign) have a suppressive effect on spiking (Fig.

89 The quadratic stimulus filter induces strong suppression at a delay of 5 frames, while the quadratic spike history filter induces strong suppression during a 50 ms window after a spike. [sent-231, score-0.811]

90 Unlike the GLM, the GQM allows multiple stimulus and history filters and yet remains tractable for likelihood-based inference. [sent-233, score-0.402]

91 We have derived expected log-likelihood estimators in a general form that reveals a deep connection between likelihood-based and moment-based inference methods. [sent-234, score-0.107]

92 We have shown that the GQM performs well on neural data, both for discrete (spiking) and analog (voltage) data.

93 Second-order Volterra filtering and its application to nonlinear system identification.

94 Bayesian inference for spiking neuron models with a sparsity prior. [sent-279, score-0.093]

95 Real-time performance of a movement-sensitive neuron in the blowfly visual system: coding and information transmission in short spike sequences.

96 A point process framework for relating neural spiking activity to spiking history, neural ensemble and extrinsic covariate effects. [sent-342, score-0.182]

97 Dimensionality reduction in neural models: An information-theoretic generalization of spike-triggered average and covariance analysis. [sent-368, score-0.092]

98 Analyzing neural responses to natural signals: maximally informative dimensions. [sent-376, score-0.119]

99 Maximally informative ”stimulus energies” in the analysis of neural responses to natural signals. [sent-402, score-0.094]

100 Precision of spike trains in primate retinal ganglion cells. [sent-446, score-0.264]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('gqm', 0.697), ('stimulus', 0.339), ('mel', 0.18), ('glm', 0.179), ('spike', 0.158), ('cmel', 0.143), ('gqms', 0.127), ('stc', 0.126), ('xx', 0.115), ('poisson', 0.11), ('el', 0.109), ('stimuli', 0.1), ('lters', 0.099), ('quadratic', 0.093), ('sta', 0.09), ('lter', 0.081), ('estimators', 0.079), ('response', 0.078), ('membrane', 0.077), ('nonlinearity', 0.074), ('responses', 0.065), ('pillow', 0.064), ('retinal', 0.064), ('history', 0.063), ('volterra', 0.063), ('spiking', 0.062), ('pp', 0.061), ('tr', 0.058), ('yxx', 0.056), ('filter', 0.055), ('intracellular', 0.052), ('eigenvectors', 0.05), ('lnp', 0.048), ('noise', 0.047), ('analog', 0.047), ('gaussian', 0.043), ('const', 0.042), ('ganglion', 0.042), ('sliced', 0.042), ('ramirez', 0.042), ('spectral', 0.041), ('moments', 0.04), ('mv', 0.04), ('memming', 0.039), ('paninski', 0.038), ('cij', 0.038), ('xxt', 0.036), ('xi', 0.036), ('covariance', 0.036), ('nonlinear', 0.035), ('dependencies', 0.034), ('cascade', 0.033), ('estimates', 0.033), ('glms', 0.033), ('axis', 0.033), ('canonical', 0.032), ('jp', 0.032), ('comput', 0.032), ('bmel', 0.032), ('mele', 0.032), ('suppressive', 0.032), ('trinary', 0.032), ('park', 0.031), ('neuron', 0.031), ('recordings', 0.031), ('yi', 0.03), ('eigenvector', 0.029), ('dimensionality', 0.029), ('neural', 0.029), ('mid', 0.028), ('uzzell', 0.028), ('expected', 0.028), ('spatiotemporal', 0.027), ('generalized', 0.027), ('reduction', 0.027), ('symmetry', 0.026), ('inhomogeneous', 0.026), ('filters', 0.026), ('subspace', 0.026), ('spikes', 0.025), ('maximally', 0.025), ('cell', 0.024), ('rust', 0.023), ('triggered', 0.023), ('sensory', 0.023), ('ml', 0.023), ('projection', 0.022), ('suppression', 0.022), ('voltage', 0.022), ('prediction', 0.022), ('plos', 0.022), ('ms', 0.021), ('spanned', 0.021), ('biol', 0.021), ('cx', 0.021), ('nicholas', 0.021), ('ij', 0.021), ('neuronal', 0.021), ('chichilnisky', 0.02), ('ae', 0.02)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000005 305 nips-2013-Spectral methods for neural characterization using generalized quadratic models

Author: Il M. Park, Evan W. Archer, Nicholas Priebe, Jonathan W. Pillow

Abstract: We describe a set of fast, tractable methods for characterizing neural responses to high-dimensional sensory stimuli using a model we refer to as the generalized quadratic model (GQM). The GQM consists of a low-rank quadratic function followed by a point nonlinearity and exponential-family noise. The quadratic function characterizes the neuron’s stimulus selectivity in terms of a set of linear receptive fields followed by a quadratic combination rule, and the invertible nonlinearity maps this output to the desired response range. Special cases of the GQM include the 2nd-order Volterra model [1, 2] and the elliptical Linear-Nonlinear-Poisson model [3]. Here we show that for “canonical form” GQMs, spectral decomposition of the first two response-weighted moments yields approximate maximum-likelihood estimators via a quantity called the expected log-likelihood. The resulting theory generalizes moment-based estimators such as the spike-triggered covariance, and, in the Gaussian noise case, provides closed-form estimators under a large class of non-Gaussian stimulus distributions. We show that these estimators are fast and provide highly accurate estimates with far lower computational cost than full maximum likelihood. Moreover, the GQM provides a natural framework for combining multi-dimensional stimulus sensitivity and spike-history dependencies within a single model. We show applications to both analog and spiking data using intracellular recordings of V1 membrane potential and extracellular recordings of retinal spike trains.

2 0.19051617 236 nips-2013-Optimal Neural Population Codes for High-dimensional Stimulus Variables

Author: Zhuo Wang, Alan Stocker, Daniel Lee

Abstract: In many neural systems, information about stimulus variables is often represented in a distributed manner by means of a population code. It is generally assumed that the responses of the neural population are tuned to the stimulus statistics, and most prior work has investigated the optimal tuning characteristics of one or a small number of stimulus variables. In this work, we investigate the optimal tuning for diffeomorphic representations of high-dimensional stimuli. We analytically derive the solution that minimizes the L2 reconstruction loss. We compared our solution with other well-known criteria such as maximal mutual information. Our solution suggests that the optimal weights do not necessarily decorrelate the inputs, and the optimal nonlinearity differs from the conventional equalization solution. Results illustrating these optimal representations are shown for some input distributions that may be relevant for understanding the coding of perceptual pathways. 1

3 0.19032356 6 nips-2013-A Determinantal Point Process Latent Variable Model for Inhibition in Neural Spiking Data

Author: Jasper Snoek, Richard Zemel, Ryan P. Adams

Abstract: Point processes are popular models of neural spiking behavior as they provide a statistical distribution over temporal sequences of spikes and help to reveal the complexities underlying a series of recorded action potentials. However, the most common neural point process models, the Poisson process and the gamma renewal process, do not capture interactions and correlations that are critical to modeling populations of neurons. We develop a novel model based on a determinantal point process over latent embeddings of neurons that effectively captures and helps visualize complex inhibitory and competitive interaction. We show that this model is a natural extension of the popular generalized linear model to sets of interacting neurons. The model is extended to incorporate gain control or divisive normalization, and the modulation of neural spiking based on periodic phenomena. Applied to neural spike recordings from the rat hippocampus, we see that the model captures inhibitory relationships, a dichotomy of classes of neurons, and a periodic modulation by the theta rhythm known to be present in the data. 1

4 0.15740241 237 nips-2013-Optimal integration of visual speed across different spatiotemporal frequency channels

Author: Matjaz Jogan, Alan Stocker

Abstract: How do humans perceive the speed of a coherent motion stimulus that contains motion energy in multiple spatiotemporal frequency bands? Here we tested the idea that perceived speed is the result of an integration process that optimally combines speed information across independent spatiotemporal frequency channels. We formalized this hypothesis with a Bayesian observer model that combines the likelihood functions provided by the individual channel responses (cues). We experimentally validated the model with a 2AFC speed discrimination experiment that measured subjects’ perceived speed of drifting sinusoidal gratings with different contrasts and spatial frequencies, and of various combinations of these single gratings. We found that the perceived speeds of the combined stimuli are independent of the relative phase of the underlying grating components. The results also show that the discrimination thresholds are smaller for the combined stimuli than for the individual grating components, supporting the cue combination hypothesis. The proposed Bayesian model fits the data well, accounting for the full psychometric functions of both simple and combined stimuli. Fits are improved if we assume that the channel responses are subject to divisive normalization. Our results provide an important step toward a more complete model of visual motion perception that can predict perceived speeds for coherent motion stimuli of arbitrary spatial structure. 1

5 0.15679045 173 nips-2013-Least Informative Dimensions

Author: Fabian Sinz, Anna Stockl, January Grewe, January Benda

Abstract: We present a novel non-parametric method for finding a subspace of stimulus features that contains all information about the response of a system. Our method generalizes similar approaches to this problem such as spike triggered average, spike triggered covariance, or maximally informative dimensions. Instead of maximizing the mutual information between features and responses directly, we use integral probability metrics in kernel Hilbert spaces to minimize the information between uninformative features and the combination of informative features and responses. Since estimators of these metrics access the data via kernels, are easy to compute, and exhibit good theoretical convergence properties, our method can easily be generalized to populations of neurons or spike patterns. By using a particular expansion of the mutual information, we can show that the informative features must contain all information if we can make the uninformative features independent of the rest. 1

6 0.15013281 262 nips-2013-Real-Time Inference for a Gamma Process Model of Neural Spiking

7 0.11277734 53 nips-2013-Bayesian inference for low rank spatiotemporal neural receptive fields

8 0.10860194 51 nips-2013-Bayesian entropy estimation for binary spike train data using parametric prior knowledge

9 0.10783355 286 nips-2013-Robust learning of low-dimensional dynamics from large neural ensembles

10 0.096595049 121 nips-2013-Firing rate predictions in optimal balanced networks

11 0.094485275 49 nips-2013-Bayesian Inference and Online Experimental Design for Mapping Neural Microcircuits

12 0.086533882 136 nips-2013-Hierarchical Modular Optimization of Convolutional Networks Achieves Representations Similar to Macaque IT and Human Ventral Stream

13 0.085698463 246 nips-2013-Perfect Associative Learning with Spike-Timing-Dependent Plasticity

14 0.078858607 183 nips-2013-Mapping paradigm ontologies to and from the brain

15 0.067757517 205 nips-2013-Multisensory Encoding, Decoding, and Identification

16 0.064823583 141 nips-2013-Inferring neural population dynamics from multiple partial recordings of the same neural circuit

17 0.06384258 217 nips-2013-On Poisson Graphical Models

18 0.058004625 341 nips-2013-Universal models for binary spike patterns using centered Dirichlet processes

19 0.056985229 308 nips-2013-Spike train entropy-rate estimation using hierarchical Dirichlet process priors

20 0.055869866 88 nips-2013-Designed Measurements for Vector Count Data


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.135), (1, 0.084), (2, -0.062), (3, -0.043), (4, -0.236), (5, -0.007), (6, -0.024), (7, -0.025), (8, -0.009), (9, 0.073), (10, -0.032), (11, 0.025), (12, -0.053), (13, -0.05), (14, 0.051), (15, -0.02), (16, 0.011), (17, 0.025), (18, -0.059), (19, -0.008), (20, -0.037), (21, 0.068), (22, -0.061), (23, -0.122), (24, 0.003), (25, -0.06), (26, -0.048), (27, 0.055), (28, 0.046), (29, -0.042), (30, -0.002), (31, -0.078), (32, -0.069), (33, -0.114), (34, -0.014), (35, -0.019), (36, -0.094), (37, -0.083), (38, -0.076), (39, 0.007), (40, 0.037), (41, -0.021), (42, -0.131), (43, -0.044), (44, 0.015), (45, 0.024), (46, 0.023), (47, 0.079), (48, -0.081), (49, -0.058)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.93253249 305 nips-2013-Spectral methods for neural characterization using generalized quadratic models

Author: Il M. Park, Evan W. Archer, Nicholas Priebe, Jonathan W. Pillow

Abstract: We describe a set of fast, tractable methods for characterizing neural responses to high-dimensional sensory stimuli using a model we refer to as the generalized quadratic model (GQM). The GQM consists of a low-rank quadratic function followed by a point nonlinearity and exponential-family noise. The quadratic function characterizes the neuron’s stimulus selectivity in terms of a set of linear receptive fields followed by a quadratic combination rule, and the invertible nonlinearity maps this output to the desired response range. Special cases of the GQM include the 2nd-order Volterra model [1, 2] and the elliptical Linear-Nonlinear-Poisson model [3]. Here we show that for “canonical form” GQMs, spectral decomposition of the first two response-weighted moments yields approximate maximum-likelihood estimators via a quantity called the expected log-likelihood. The resulting theory generalizes moment-based estimators such as the spike-triggered covariance, and, in the Gaussian noise case, provides closed-form estimators under a large class of non-Gaussian stimulus distributions. We show that these estimators are fast and provide highly accurate estimates with far lower computational cost than full maximum likelihood. Moreover, the GQM provides a natural framework for combining multi-dimensional stimulus sensitivity and spike-history dependencies within a single model. We show applications to both analog and spiking data using intracellular recordings of V1 membrane potential and extracellular recordings of retinal spike trains.

2 0.79303479 237 nips-2013-Optimal integration of visual speed across different spatiotemporal frequency channels

Author: Matjaz Jogan, Alan Stocker

Abstract: How do humans perceive the speed of a coherent motion stimulus that contains motion energy in multiple spatiotemporal frequency bands? Here we tested the idea that perceived speed is the result of an integration process that optimally combines speed information across independent spatiotemporal frequency channels. We formalized this hypothesis with a Bayesian observer model that combines the likelihood functions provided by the individual channel responses (cues). We experimentally validated the model with a 2AFC speed discrimination experiment that measured subjects’ perceived speed of drifting sinusoidal gratings with different contrasts and spatial frequencies, and of various combinations of these single gratings. We found that the perceived speeds of the combined stimuli are independent of the relative phase of the underlying grating components. The results also show that the discrimination thresholds are smaller for the combined stimuli than for the individual grating components, supporting the cue combination hypothesis. The proposed Bayesian model fits the data well, accounting for the full psychometric functions of both simple and combined stimuli. Fits are improved if we assume that the channel responses are subject to divisive normalization. Our results provide an important step toward a more complete model of visual motion perception that can predict perceived speeds for coherent motion stimuli of arbitrary spatial structure. 1

3 0.78766459 236 nips-2013-Optimal Neural Population Codes for High-dimensional Stimulus Variables

Author: Zhuo Wang, Alan Stocker, Daniel Lee

Abstract: In many neural systems, information about stimulus variables is often represented in a distributed manner by means of a population code. It is generally assumed that the responses of the neural population are tuned to the stimulus statistics, and most prior work has investigated the optimal tuning characteristics of one or a small number of stimulus variables. In this work, we investigate the optimal tuning for diffeomorphic representations of high-dimensional stimuli. We analytically derive the solution that minimizes the L2 reconstruction loss. We compared our solution with other well-known criteria such as maximal mutual information. Our solution suggests that the optimal weights do not necessarily decorrelate the inputs, and the optimal nonlinearity differs from the conventional equalization solution. Results illustrating these optimal representations are shown for some input distributions that may be relevant for understanding the coding of perceptual pathways. 1

4 0.74125606 205 nips-2013-Multisensory Encoding, Decoding, and Identification

Author: Aurel A. Lazar, Yevgeniy Slutskiy

Abstract: We investigate a spiking neuron model of multisensory integration. Multiple stimuli from different sensory modalities are encoded by a single neural circuit comprised of a multisensory bank of receptive fields in cascade with a population of biophysical spike generators. We demonstrate that stimuli of different dimensions can be faithfully multiplexed and encoded in the spike domain and derive tractable algorithms for decoding each stimulus from the common pool of spikes. We also show that the identification of multisensory processing in a single neuron is dual to the recovery of stimuli encoded with a population of multisensory neurons, and prove that only a projection of the circuit onto input stimuli can be identified. We provide an example of multisensory integration using natural audio and video and discuss the performance of the proposed decoding and identification algorithms. 1

5 0.60238534 262 nips-2013-Real-Time Inference for a Gamma Process Model of Neural Spiking

Author: David Carlson, Vinayak Rao, Joshua T. Vogelstein, Lawrence Carin

Abstract: With simultaneous measurements from ever increasing populations of neurons, there is a growing need for sophisticated tools to recover signals from individual neurons. In electrophysiology experiments, this classically proceeds in a two-step process: (i) threshold the waveforms to detect putative spikes and (ii) cluster the waveforms into single units (neurons). We extend previous Bayesian nonparametric models of neural spiking to jointly detect and cluster neurons using a Gamma process model. Importantly, we develop an online approximate inference scheme enabling real-time analysis, with performance exceeding the previous state-of-the-art. Via exploratory data analysis—using data with partial ground truth as well as two novel data sets—we find several features of our model collectively contribute to our improved performance including: (i) accounting for colored noise, (ii) detecting overlapping spikes, (iii) tracking waveform dynamics, and (iv) using multiple channels. We hope to enable novel experiments simultaneously measuring many thousands of neurons and possibly adapting stimuli dynamically to probe ever deeper into the mysteries of the brain.

6 0.59873933 6 nips-2013-A Determinantal Point Process Latent Variable Model for Inhibition in Neural Spiking Data

7 0.590985 121 nips-2013-Firing rate predictions in optimal balanced networks

8 0.58900636 173 nips-2013-Least Informative Dimensions

9 0.55702531 53 nips-2013-Bayesian inference for low rank spatiotemporal neural receptive fields

10 0.51581317 208 nips-2013-Neural representation of action sequences: how far can a simple snippet-matching model take us?

11 0.50098532 264 nips-2013-Reciprocally Coupled Local Estimators Implement Bayesian Information Integration Distributively

12 0.48432735 51 nips-2013-Bayesian entropy estimation for binary spike train data using parametric prior knowledge

13 0.44650355 49 nips-2013-Bayesian Inference and Online Experimental Design for Mapping Neural Microcircuits

14 0.44208595 136 nips-2013-Hierarchical Modular Optimization of Convolutional Networks Achieves Representations Similar to Macaque IT and Human Ventral Stream

15 0.43296137 341 nips-2013-Universal models for binary spike patterns using centered Dirichlet processes

16 0.40377676 286 nips-2013-Robust learning of low-dimensional dynamics from large neural ensembles

17 0.40314493 88 nips-2013-Designed Measurements for Vector Count Data

18 0.40011433 67 nips-2013-Conditional Random Fields via Univariate Exponential Families

19 0.39827752 284 nips-2013-Robust Spatial Filtering with Beta Divergence

20 0.38023669 246 nips-2013-Perfect Associative Learning with Spike-Timing-Dependent Plasticity


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(2, 0.029), (16, 0.03), (33, 0.101), (34, 0.082), (41, 0.019), (49, 0.068), (56, 0.061), (70, 0.035), (85, 0.025), (89, 0.426), (93, 0.023)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.84563124 305 nips-2013-Spectral methods for neural characterization using generalized quadratic models

Author: Il M. Park, Evan W. Archer, Nicholas Priebe, Jonathan W. Pillow

Abstract: We describe a set of fast, tractable methods for characterizing neural responses to high-dimensional sensory stimuli using a model we refer to as the generalized quadratic model (GQM). The GQM consists of a low-rank quadratic function followed by a point nonlinearity and exponential-family noise. The quadratic function characterizes the neuron’s stimulus selectivity in terms of a set of linear receptive fields followed by a quadratic combination rule, and the invertible nonlinearity maps this output to the desired response range. Special cases of the GQM include the 2nd-order Volterra model [1, 2] and the elliptical Linear-Nonlinear-Poisson model [3]. Here we show that for “canonical form” GQMs, spectral decomposition of the first two response-weighted moments yields approximate maximum-likelihood estimators via a quantity called the expected log-likelihood. The resulting theory generalizes moment-based estimators such as the spike-triggered covariance, and, in the Gaussian noise case, provides closed-form estimators under a large class of non-Gaussian stimulus distributions. We show that these estimators are fast and provide highly accurate estimates with far lower computational cost than full maximum likelihood. Moreover, the GQM provides a natural framework for combining multi-dimensional stimulus sensitivity and spike-history dependencies within a single model. We show applications to both analog and spiking data using intracellular recordings of V1 membrane potential and extracellular recordings of retinal spike trains.

2 0.83075768 10 nips-2013-A Latent Source Model for Nonparametric Time Series Classification

Author: George H. Chen, Stanislav Nikolov, Devavrat Shah

Abstract: For classifying time series, a nearest-neighbor approach is widely used in practice with performance often competitive with or better than more elaborate methods such as neural networks, decision trees, and support vector machines. We develop theoretical justification for the effectiveness of nearest-neighbor-like classification of time series. Our guiding hypothesis is that in many applications, such as forecasting which topics will become trends on Twitter, there aren’t actually that many prototypical time series to begin with, relative to the number of time series we have access to, e.g., topics become trends on Twitter only in a few distinct manners whereas we can collect massive amounts of Twitter data. To operationalize this hypothesis, we propose a latent source model for time series, which naturally leads to a “weighted majority voting” classification rule that can be approximated by a nearest-neighbor classifier. We establish nonasymptotic performance guarantees of both weighted majority voting and nearest-neighbor classification under our model accounting for how much of the time series we observe and the model complexity. Experimental results on synthetic data show weighted majority voting achieving the same misclassification rate as nearest-neighbor classification while observing less of the time series. We then use weighted majority to forecast which news topics on Twitter become trends, where we are able to detect such “trending topics” in advance of Twitter 79% of the time, with a mean early advantage of 1 hour and 26 minutes, a true positive rate of 95%, and a false positive rate of 4%. 1

3 0.81079423 109 nips-2013-Estimating LASSO Risk and Noise Level

Author: Mohsen Bayati, Murat A. Erdogdu, Andrea Montanari

Abstract: We study the fundamental problems of variance and risk estimation in high dimensional statistical modeling. In particular, we consider the problem of learning a coefficient vector θ0 ∈ Rp from noisy linear observations y = Xθ0 + w ∈ Rn (p > n) and the popular estimation procedure of solving the ℓ1-penalized least squares objective known as the LASSO or Basis Pursuit DeNoising (BPDN). In this context, we develop new estimators for the ℓ2 estimation risk ‖θ̂ − θ0‖2 and the variance of the noise when distributions of θ0 and w are unknown. These can be used to select the regularization parameter optimally. Our approach combines Stein’s unbiased risk estimate [Ste81] and the recent results of [BM12a][BM12b] on the analysis of approximate message passing and the risk of LASSO. We establish high-dimensional consistency of our estimators for sequences of matrices X of increasing dimensions, with independent Gaussian entries. We establish validity for a broader class of Gaussian designs, conditional on a certain conjecture from statistical physics. To the best of our knowledge, this result is the first that provides an asymptotically consistent risk estimator for the LASSO solely based on data. In addition, we demonstrate through simulations that our variance estimation outperforms several existing methods in the literature.

4 0.79277396 91 nips-2013-Dirty Statistical Models

Author: Eunho Yang, Pradeep Ravikumar

Abstract: We provide a unified framework for the high-dimensional analysis of “superposition-structured” or “dirty” statistical models: where the model parameters are a superposition of structurally constrained parameters. We allow for any number and types of structures, and any statistical model. We consider the general class of M -estimators that minimize the sum of any loss function, and an instance of what we call a “hybrid” regularization, that is the infimal convolution of weighted regularization functions, one for each structural component. We provide corollaries showcasing our unified framework for varied statistical models such as linear regression, multiple regression and principal component analysis, over varied superposition structures. 1

5 0.72076464 83 nips-2013-Deep Fisher Networks for Large-Scale Image Classification

Author: Karen Simonyan, Andrea Vedaldi, Andrew Zisserman

Abstract: As massively parallel computations have become broadly available with modern GPUs, deep architectures trained on very large datasets have risen in popularity. Discriminatively trained convolutional neural networks, in particular, were recently shown to yield state-of-the-art performance in challenging image classification benchmarks such as ImageNet. However, elements of these architectures are similar to standard hand-crafted representations used in computer vision. In this paper, we explore the extent of this analogy, proposing a version of the stateof-the-art Fisher vector image encoding that can be stacked in multiple layers. This architecture significantly improves on standard Fisher vectors, and obtains competitive results with deep convolutional networks at a smaller computational learning cost. Our hybrid architecture allows us to assess how the performance of a conventional hand-crafted image classification pipeline changes with increased depth. We also show that convolutional networks and Fisher vector encodings are complementary in the sense that their combination further improves the accuracy. 1

6 0.70568788 234 nips-2013-Online Variational Approximations to non-Exponential Family Change Point Models: With Application to Radar Tracking

7 0.59576011 273 nips-2013-Reinforcement Learning in Robust Markov Decision Processes

8 0.55863231 310 nips-2013-Statistical analysis of coupled time series with Kernel Cross-Spectral Density operators.

9 0.55208081 194 nips-2013-Model Selection for High-Dimensional Regression under the Generalized Irrepresentability Condition

10 0.54721969 237 nips-2013-Optimal integration of visual speed across different spatiotemporal frequency channels

11 0.52165926 68 nips-2013-Confidence Intervals and Hypothesis Testing for High-Dimensional Statistical Models

12 0.51756978 130 nips-2013-Generalizing Analytic Shrinkage for Arbitrary Covariance Structures

13 0.51363599 62 nips-2013-Causal Inference on Time Series using Restricted Structural Equation Models

14 0.5041309 116 nips-2013-Fantope Projection and Selection: A near-optimal convex relaxation of sparse PCA

15 0.50400758 49 nips-2013-Bayesian Inference and Online Experimental Design for Mapping Neural Microcircuits

16 0.50318742 147 nips-2013-Lasso Screening Rules via Dual Polytope Projection

17 0.50311226 163 nips-2013-Learning a Deep Compact Image Representation for Visual Tracking

18 0.49889344 302 nips-2013-Sparse Inverse Covariance Estimation with Calibration

19 0.49067208 259 nips-2013-Provable Subspace Clustering: When LRR meets SSC

20 0.48779801 304 nips-2013-Sparse nonnegative deconvolution for compressive calcium imaging: algorithms and phase transitions