nips nips2013 nips2013-236 knowledge-graph by maker-knowledge-mining

236 nips-2013-Optimal Neural Population Codes for High-dimensional Stimulus Variables


Source: pdf

Author: Zhuo Wang, Alan Stocker, Daniel Lee

Abstract: In many neural systems, information about stimulus variables is often represented in a distributed manner by means of a population code. It is generally assumed that the responses of the neural population are tuned to the stimulus statistics, and most prior work has investigated the optimal tuning characteristics of one or a small number of stimulus variables. In this work, we investigate the optimal tuning for diffeomorphic representations of high-dimensional stimuli. We analytically derive the solution that minimizes the L2 reconstruction loss. We compared our solution with other well-known criteria such as maximal mutual information. Our solution suggests that the optimal weights do not necessarily decorrelate the inputs, and the optimal nonlinearity differs from the conventional equalization solution. Results illustrating these optimal representations are shown for some input distributions that may be relevant for understanding the coding of perceptual pathways. 1

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 edu Abstract In many neural systems, information about stimulus variables is often represented in a distributed manner by means of a population code. [sent-9, score-0.664]

2 It is generally assumed that the responses of the neural population are tuned to the stimulus statistics, and most prior work has investigated the optimal tuning characteristics of one or a small number of stimulus variables. [sent-10, score-1.266]

3 In this work, we investigate the optimal tuning for diffeomorphic representations of high-dimensional stimuli. [sent-11, score-0.501]

4 We compared our solution with other well-known criteria such as maximal mutual information. [sent-13, score-0.187]

5 Our solution suggests that the optimal weights do not necessarily decorrelate the inputs, and the optimal nonlinearity differs from the conventional equalization solution. [sent-14, score-0.297]

6 Results illustrating these optimal representations are shown for some input distributions that may be relevant for understanding the coding of perceptual pathways. [sent-15, score-0.216]

7 1 Introduction: There has been much work investigating how information about stimulus variables is represented by a population of neurons in the brain [1]. [sent-16, score-0.738]

8 Studies on motion perception [2, 3] and sound localization [4, 5] have demonstrated that these representations adapt to the stimulus statistics on various time scales [6, 7, 8, 9]. [sent-17, score-0.479]

9 This raises the natural question: what encoding scheme underlies this adaptive process? [sent-18, score-0.159]

10 Some work has focused on the scenario of a single neuron [10, 11, 12, 13, 14, 15], while other work has focused on the population level [16, 17, 18, 19, 20, 21, 22, 23], with different model and noise assumptions. [sent-21, score-0.391]

11 An interesting class of solutions to this question is related to independent component analysis (ICA) [24, 25, 26], which considers maximizing the amount of information in the encoding given a distribution of stimulus inputs. [sent-23, score-0.573]

12 The use of mutual information as a metric to measure neural coding quality has also been discussed in [27]. [sent-24, score-0.294]

13 In this paper, we study Fisher-optimal population codes for the diffeomorphic encoding of stimuli with multivariate Gaussian distributions. [sent-25, score-0.716]

14 Using Fisher information, we investigate the properties of representations that would minimize the L2 reconstruction error assuming an optimal decoder. [sent-26, score-0.178]

15 The optimization problem is derived under a diffeomorphic assumption, i.e. [sent-27, score-0.284]

16 the number of encoding neurons matches the dimensionality of the input and the nonlinearity is monotonic. [sent-29, score-0.332]

17 In this case, the optimal solution can be found analytically and can be given a geometric interpretation. [sent-30, score-0.192]

18 Model and Methods. 1 Encoding and Decoding Model: We consider an $n$-dimensional stimulus input $s = (s_1, \ldots, s_n)$. [sent-33, score-0.42]

19 In general, a population with m neurons can have m individual activation functions, $h_1(s), \ldots, h_m(s)$. [sent-37, score-0.399]

20 However, the encoding process is affected by neural noise. [sent-41, score-0.22]

21 2 Fisher Information Matrix: The Fisher information is a key concept widely used in optimal coding theory. [sent-49, score-0.186]

22 The equivalence of the two noise models can be established via the variance-stabilizing transformation $\tilde{h}_k = 2\sqrt{h_k}$ [29]. [sent-51, score-0.592]
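A minimal numerical sketch (our own, not the paper's code) of why this transformation works: for counts $X \sim \mathrm{Poisson}(\lambda)$, the transformed value $2\sqrt{X}$ has approximately unit variance regardless of $\lambda$, so Poisson noise behaves like constant Gaussian noise after the transform.

```python
# Check that the variance-stabilizing transform 2*sqrt(X) gives Poisson
# counts an approximately constant (unit) variance across rates.
import numpy as np

rng = np.random.default_rng(0)
for rate in [5.0, 20.0, 100.0]:
    counts = rng.poisson(rate, size=200_000)
    stabilized = 2.0 * np.sqrt(counts)
    # Var[2*sqrt(X)] -> 1 for X ~ Poisson(rate) as the rate grows
    print(f"rate={rate:6.1f}  var(counts)={counts.var():8.2f}  "
          f"var(2*sqrt(counts))={stabilized.var():.3f}")
```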

23 3 Cramér–Rao Lower Bound: Ideally, a good neural population code should produce estimates $\hat{s}$ that are close to the true value of the stimulus s. [sent-55, score-0.711]

24 $\mathbb{E}_s\,\|\hat{s} - s\|^2 \ge \mathrm{tr}(I_F(s)^{-1})$ (7). 4 Mutual Information Limit: Another possible measure of neural coding quality is the mutual information. [sent-64, score-0.294]

25 The link between mutual information and the Fisher information matrix was established in [16]. [sent-66, score-0.158]

26 One goal (infomax) is to maximize the mutual information I(r, s) = H(r) − H(r|s). [sent-67, score-0.159]

27 Assuming perfect integration, the first term H(r) asymptotically converges to a constant H(s) for long encoding time because the noise is Gaussian. [sent-68, score-0.207]

28 For each $s^*$, the conditional entropy $H(r \mid s = s^*) \propto -\tfrac{1}{2}\log\det I_F(s^*)$, since $r \mid s^*$ is asymptotically a Gaussian variable with covariance $I_F(s^*)^{-1}$. [sent-70, score-0.175]

29 The infomax objective thus reduces to $\mathbb{E}_s\left[\tfrac{1}{2}\log\det I_F(s)\right]$ (8). 5 Diffeomorphic Population: Before one can formalize the optimal coding problem, some assumptions about the neural population need to be made. [sent-72, score-0.573]

30 Under a diffeomorphic assumption, the number of neurons (m) in the population matches the dimensionality (n) of the input stimulus. [sent-73, score-0.647]

31 Each neuron projects the signal s onto its basis vector $w_k$ and passes the one-dimensional projection $t_k = w_k^T s$ through a sigmoidal tuning curve $h_k(\cdot)$, which is bounded as $0 \le h_k(\cdot) \le 1$. [sent-74, score-1.578]

32 We may assume $\|w_k\| = 1$ since the scale can be compensated by the nonlinearity. [sent-80, score-0.34]

33 Such an encoding scheme is called diffeomorphic because the population establishes a smooth and invertible mapping from the stimulus space s ∈ S to the rate space r ∈ R. [sent-81, score-1.046]
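As a concrete illustration, here is a minimal sketch of this linear-nonlinear encoding for a 2D stimulus; the weights and logistic nonlinearities are arbitrary illustrative choices, not taken from the paper.

```python
# Linear-nonlinear diffeomorphic encoding (cf. Fig. 1b): each neuron projects
# the stimulus onto its unit basis vector w_k and applies a bounded sigmoid.
import numpy as np

def encode(s, W, nonlinearities):
    """r_k = h_k(w_k^T s); W has unit-norm columns w_k."""
    t = W.T @ s                                   # one-dimensional projections t_k
    return np.array([h(tk) for h, tk in zip(nonlinearities, t)])

sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))      # bounded in (0, 1)

# two neurons encoding a 2D stimulus (weights chosen for illustration only)
W = np.array([[1.0, 0.6],
              [0.0, 0.8]])
W /= np.linalg.norm(W, axis=0)                    # enforce ||w_k|| = 1
s = np.array([0.5, -1.2])
print("rates:", encode(s, W, [sigmoid, sigmoid]))
```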

34 Fig. 1a shows how the encoding scheme is implemented by a neural network. [sent-84, score-0.22]

35 Fig. 1b illustrates explicitly how a 2D stimulus s is encoded by two neurons with bases $w_1, w_2$ and nonlinear mappings $h_1, h_2$. [sent-86, score-0.578]

36 Figure 1: (a) Illustration of a neural network with diffeomorphic encoding: an input stimulus s passes through linear weights W and nonlinear maps $h_k(\cdot)$ to produce output rates r. [sent-87, score-0.985]

37 (b) The linear-nonlinear (LN) encoding process for a 2D stimulus s. [sent-88, score-0.909]

38 3 Review of the One-Dimensional Solution: In the case of encoding a one-dimensional stimulus, the diffeomorphic population is just one neuron with a sigmoidal tuning curve r = h(w · s). [sent-89, score-1.035]

39 The only two options, w = ±1, are determined by whether the sigmoidal tuning curve is increasing or decreasing. [sent-90, score-0.249]

40 Now apply Hölder's inequality [30] to the non-negative functions $p(s)/h'(s)^2$ and $h'(s)$: $\int \frac{p(s)}{h'(s)^2}\,ds \cdot \left(\int h'(s)\,ds\right)^2 \ge \left(\int p(s)^{1/3}\,ds\right)^3$ (10), where the first factor is the overall L2 loss and $\int h'(s)\,ds = 1$. The minimum L2 loss is attained by the optimal $h^*(s) \propto \int_{-\infty}^{s} p(t)^{1/3}\,dt$. [sent-94, score-0.531]

41 On the other hand, for the infomax problem we want to maximize I(r, s) because of Eq. (8). [sent-98, score-0.335]

42 By treating the sigmoidal activation function h(s) as a cumulative probability distribution [10], we have $\int p(s)\log h'(s)\,ds \le \int p(s)\log p(s)\,ds$ (11), because the KL divergence $D_{KL}(p\|h') = \int p(s)\log p(s)\,ds - \int p(s)\log h'(s)\,ds$ is non-negative. [sent-101, score-0.599]

43 The optimal solution is $h^*(s) = \int_{-\infty}^{s} p(t)\,dt$ and the optimal value is 2H(p), where H(p) is the differential entropy of the distribution p(s). [sent-102, score-0.219]
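The two one-dimensional solutions can be compared numerically. The sketch below (assuming a standard normal prior, an illustrative choice not from the paper) builds both curves by cumulative integration and checks the Hölder bound of Eq. (10).

```python
# Compare the L2-optimal curve (slope ~ p^{1/3}) with the infomax curve
# (slope = p, i.e. histogram equalization) for an example 1D prior.
import numpy as np

s = np.linspace(-6, 6, 4001)
p = np.exp(-0.5 * s**2) / np.sqrt(2 * np.pi)      # example prior: N(0, 1)

slope_l2 = p ** (1.0 / 3.0)
slope_l2 /= np.trapz(slope_l2, s)                 # normalize so h runs from 0 to 1
slope_mi = p                                      # already integrates to 1

h_l2 = np.cumsum(slope_l2) * (s[1] - s[0])        # discrimax / L2-min curve
h_mi = np.cumsum(slope_mi) * (s[1] - s[0])        # infomax (equalization) curve

# Holder bound (10): loss = integral of p/h'^2 >= (integral of p^{1/3})^3,
# with equality exactly for the L2-optimal slope.
loss = lambda slope: np.trapz(p / slope**2, s)
bound = np.trapz(p ** (1.0 / 3.0), s) ** 3
print(f"L2-min curve:  loss = {loss(slope_l2):.4f}, bound = {bound:.4f}")
print(f"infomax curve: loss = {loss(slope_mi):.4g}  (strictly larger)")
print(f"curves at s=0: h_l2 = {h_l2[len(s)//2]:.3f}, h_mi = {h_mi[len(s)//2]:.3f}")
```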

44 4 Optimal Diffeomorphic Population: In the case of encoding a high-dimensional random stimulus using a diffeomorphic population code, n neurons encode n stimulus dimensions. [sent-105, score-1.556]

45 The gradient of the k-th neuron's tuning curve is $\nabla h_k = h_k'(w_k^T s)\,w_k$, and the Fisher information matrix is thus $I_F(s) = \sum_{k=1}^{n} h_k'(w_k^T s)^2\, w_k w_k^T = W H'^2 W^T$ (12), where $W = (w_1, \ldots, w_n)$ and $H'$ is the diagonal matrix of slopes $h_k'(w_k^T s)$. [sent-106, score-1.328]

46 The loss becomes $L(W, H) = \mathbb{E}_s[\mathrm{tr}(I_F(s)^{-1})] = \sum_{k=1}^{n} [(W^T W)^{-1}]_{kk} \int \frac{p(s)}{h_k'(w_k^T s)^2}\,ds$ (13). If we define the marginal distribution $p_k(t) = \int p(s)\,\delta(t - w_k^T s)\,ds$ (14), then the optimization over $w_k$ and $h_k$ can be decoupled in the following way. [sent-117, score-1.467]

47 For any fixed W , the integral term can be evaluated by marginalizing out all those directions perpendicular to wk . [sent-118, score-0.368]

48 As discussed in section 3, the optimal value $\left(\int p_k(t)^{1/3}\,dt\right)^3$ is attained when $h_k^{*\prime}(t) \propto p_k(t)^{1/3}$. [sent-119, score-0.292]

49 Substituting gives $L_{h^*}(W) = \sum_{k=1}^{n} [(W^T W)^{-1}]_{kk} \left(\int p_k(t)^{1/3}\,dt\right)^3$ (15). In general, analytically optimizing such a term for an arbitrary prior distribution p(s) is intractable. [sent-123, score-0.217]

50 5 Stimulus with Gaussian Prior: We consider the case when the stimulus prior is Gaussian $N(0, \Sigma)$. [sent-125, score-0.415]

51 This assumption allows us to calculate the marginal distribution along any direction $w_k$ as a one-dimensional Gaussian with mean zero and variance $w_k^T \Sigma w_k = (W^T \Sigma W)_{kk}$. [sent-126, score-0.762]
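Under this Gaussian assumption the inner integral of Eq. (15) has a closed form: by direct computation (ours, not quoted from the paper), $\left(\int p_k(t)^{1/3}\,dt\right)^3 = 6\sqrt{3}\,\pi\,\sigma_k^2$ for $p_k = N(0, \sigma_k^2)$, so the loss is proportional to $\sum_k [(W^TW)^{-1}]_{kk}\,(W^T\Sigma W)_{kk}$. A brute-force sketch over 2D unit bases, with an illustrative covariance:

```python
# Evaluate the Gaussian-prior L2 loss (15) for candidate 2D bases and find
# the best pair of tuning directions by grid search.
import numpy as np

def l2_loss(W, Sigma):
    G_inv = np.linalg.inv(W.T @ W)            # Gram matrix inverse
    marg_var = np.diag(W.T @ Sigma @ W)       # marginal variances (W^T Sigma W)_kk
    return 6 * np.sqrt(3) * np.pi * np.sum(np.diag(G_inv) * marg_var)

Sigma = np.diag([4.0, 1.0])                   # anisotropic Gaussian prior (illustrative)

def basis(theta1, theta2):
    """Two unit column vectors at angles theta1, theta2."""
    return np.array([[np.cos(theta1), np.cos(theta2)],
                     [np.sin(theta1), np.sin(theta2)]])

angles = np.linspace(0, np.pi, 90, endpoint=False)
best = min((l2_loss(basis(a, b), Sigma), a, b)
           for a in angles for b in angles if abs(a - b) > 1e-6)
print(f"min loss {best[0]:.3f} at angles "
      f"{np.degrees(best[1]):.0f} and {np.degrees(best[2]):.0f} degrees")
```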

52 Let $\theta_k$ be the angle between $w_k$ and the hyperplane spanned by all other basis vectors (see Fig. 2). [sent-142, score-0.378]

53 The volume of the n-dimensional parallelogram spanned by $\{w_1, \ldots, w_n\}$ factors as the $(n{-}1)$-dimensional base parallelogram times the height: $\mathrm{vol}(w_1, \ldots, w_n) = \mathrm{vol}(\{w_j\}_{j \ne k}) \cdot \|w_k\| \cdot \sin\theta_k$ (18). Figure 2: Illustration of $\theta_k$. [sent-153, score-0.296]

54 Meanwhile, minimizing $[(W^T W)^{-1}]_{kk} = (\sin\theta_k)^{-2}$ strongly penalizes neurons whose tuning directions are similar to those of the rest of the population. [sent-158, score-0.271]
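This identity is easy to verify numerically in 2D; the quick check below is our own, not from the paper.

```python
# For a 2D basis of unit vectors separated by angle theta,
# [(W^T W)^{-1}]_kk = 1/sin(theta)^2, so nearly parallel directions blow up.
import numpy as np

for theta in [np.pi / 2, np.pi / 3, np.pi / 12]:
    W = np.array([[1.0, np.cos(theta)],
                  [0.0, np.sin(theta)]])      # unit columns, angle theta apart
    diag = np.diag(np.linalg.inv(W.T @ W))
    print(f"theta={np.degrees(theta):5.1f} deg  "
          f"[(W^T W)^-1]_kk={diag[0]:.4f}  1/sin^2={1 / np.sin(theta)**2:.4f}")
```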

55 To summarize qualitatively, the optimal population tends to encode the directions with small variance while keeping a certain degree of population diversity. [sent-159, score-0.693]

56 For any covariance matrix Σ, the optimal solution for Eq. [sent-162, score-0.249]

57 6 Comparison with Infomax Solution: Previous studies have focused on finding solutions that maximize the mutual information (infomax) between the stimulus and the neural population response. [sent-174, score-0.862]

58 Mutual information can be maximized if and only if each neuron encodes an independent component of the stimulus and uses the proper nonlinear tuning curve. [sent-176, score-0.628]

59 For a Gaussian prior with covariance $\Sigma$, the infomax solution is $W^*_{\mathrm{info}} = \Sigma^{-1/2} U \;\Rightarrow\; \mathrm{cov}(W^{*T}_{\mathrm{info}} s) = U^T \Sigma^{-1/2} \Sigma\, \Sigma^{-1/2} U = I$ (21), where $\Sigma^{-1/2}$ is the whitening matrix and U is an arbitrary unitary matrix. [sent-178, score-0.56]

60 In the same 2D example where $\Sigma = \mathrm{diag}(\sigma_x^2, \sigma_y^2)$, the family of optimal solutions is parametrized by an angular variable $\phi$: $U(\phi) = \begin{pmatrix} \cos\phi & \sin\phi \\ -\sin\phi & \cos\phi \end{pmatrix}$, $W^*_{\mathrm{info}}(\phi) = \Sigma^{-1/2} U(\phi) = \begin{pmatrix} \cos\phi/\sigma_x & \sin\phi/\sigma_x \\ -\sin\phi/\sigma_y & \cos\phi/\sigma_y \end{pmatrix}$ (22). [sent-180, score-0.495]
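A short sketch verifying the rotational symmetry of the infomax family: $W^*_{\mathrm{info}}(\phi)$ whitens the stimulus for every angle $\phi$. The covariance values below are illustrative.

```python
# The infomax solution (21)-(22) decorrelates the stimulus for any rotation U(phi).
import numpy as np

Sigma = np.diag([4.0, 1.0])                       # sigma_x^2 = 4, sigma_y^2 = 1
Sigma_inv_half = np.diag(1.0 / np.sqrt(np.diag(Sigma)))

def U(phi):
    return np.array([[np.cos(phi),  np.sin(phi)],
                     [-np.sin(phi), np.cos(phi)]])

for phi in [0.0, 0.4, 1.1]:
    W = Sigma_inv_half @ U(phi)
    cov = W.T @ Sigma @ W                         # cov(W^T s) for s ~ N(0, Sigma)
    print(f"phi={phi:.1f}  cov(W^T s) =\n{np.round(cov, 10)}")
```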

61 One observation is that L2-optimal neurons do not fully decorrelate input signals unless the Gaussian prior is spherical. [sent-182, score-0.32]

62 By correlating the input signals and encoding redundant information, the channel signal-to-noise ratio (SNR) can be balanced, reducing the vulnerability of the independent channels with low SNR. [sent-183, score-0.237]

63 Another important observation is that the infomax solution allows a greater degree of symmetry, via the arbitrary unitary matrix U in Eq. (21). [sent-185, score-0.459]

64 (a) The optimal pair of basis vectors w1 , w2 for L2 -min with the prior covariance ellipse is unique unless the prior distribution has rotational symmetry. [sent-190, score-0.3]

65 (b) The loss function, with "+" marking the optimal solution shown in (a). [sent-191, score-0.214]

66 (c) One pair of optimal basis vectors $w_1, w_2$ for infomax with the prior covariance ellipse. [sent-192, score-0.536]

67 (d) The loss function, with "+" marking the optimal solution shown in (c). [sent-193, score-0.214]

68 7 Application – 16-by-16 Gaussian Images: In this section we apply our diffeomorphic coding scheme to an image representation problem. [sent-194, score-0.391]

69 Instead of directly defining the pairwise covariance between pixels of s, we calculate its real Fourier components: $\hat{s} = F^T s \;\Leftrightarrow\; s = F\hat{s}$ (23), where the real Fourier matrix is $F = (f_1, \ldots, f_n)$. [sent-196, score-0.192]

70 The Fourier coefficients are taken to be independent, $\mathrm{cov}(\hat{s}) = D = \mathrm{diag}(\sigma_1^2, \ldots, \sigma_n^2)$, where $\sigma_a^2 \propto |k_a|^{-\beta}$, $\beta > 0$ (24). Therefore the original stimulus s has covariance $\mathrm{cov}(s) = \Sigma = F D F^T$. [sent-203, score-0.452]

71 For the stimulus s with covariance $\Sigma$, one naive choice of L2-optimal filter is simply $W^*_{L2} = \Sigma^{-1/4} \cdot I = F D^{-1/4} F^T$ (25), because $\Sigma^{1/2} = F D^{1/2} F^T$ has constant diagonal terms (see Appendix F for the detailed calculation) and U = I qualifies for the optimal solution family. [sent-205, score-0.591]

72 In Fig. 5(a)-(d), the L2-optimal filter half-decorrelates the input stimulus channels, keeping a balance between the simplicity of the filters and the simplicity of the correlation structure. [sent-215, score-0.525]

73 For each stimulus image s, we calculate $y = W_\gamma^T s$ and $z_k = h_k(y_k) + \eta_k$ to simulate the encoding process. [sent-217, score-0.817]

74 Here $h_k(y) \propto \int_{-\infty}^{y} p_k(t)^{1/3}\,dt$ and $p_k(t)$ is Gaussian $N(0, (W_\gamma^T \Sigma W_\gamma)_{kk})$. [sent-218, score-0.36]
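A sketch of the whole pipeline under stated assumptions: for brevity it uses a 1D analogue of the 16-by-16 image setup, with an orthonormal real Fourier basis, a power-law spectrum, the naive filter of Eq. (25), and the simulated encoding $z_k = h_k(y_k) + \eta_k$; the values of n, $\beta$, and the noise level are illustrative, not the paper's.

```python
# Build the power-law covariance Sigma = F D F^T, form the naive filter
# W = Sigma^{-1/4} = F D^{-1/4} F^T, and simulate one encoding pass.
import numpy as np
from scipy.stats import norm

def real_fourier_basis(n):
    """Orthonormal real Fourier matrix F (n even) plus each column's frequency."""
    j = np.arange(n)
    cols, freqs = [np.ones(n) / np.sqrt(n)], [0]
    for a in range(1, n // 2):
        cols += [np.sqrt(2.0 / n) * np.cos(2 * np.pi * a * j / n),
                 np.sqrt(2.0 / n) * np.sin(2 * np.pi * a * j / n)]
        freqs += [a, a]
    cols.append(np.cos(np.pi * j) / np.sqrt(n))   # Nyquist component
    freqs.append(n // 2)
    return np.column_stack(cols), np.array(freqs)

n, beta, noise_sd = 16, 2.0, 0.05
F, k = real_fourier_basis(n)
d = np.maximum(k, 1.0) ** (-beta)                 # sigma_a^2 ~ |k_a|^{-beta}, Eq. (24)
Sigma = F @ np.diag(d) @ F.T                      # cov(s) = F D F^T
W = F @ np.diag(d ** -0.25) @ F.T                 # naive L2 filter Sigma^{-1/4}, Eq. (25)

# Sigma^{1/2} = F D^{1/2} F^T has constant diagonal (cos/sin pairs at the same
# frequency share power), which is why U = I qualifies here.
print("diag(Sigma^1/2) spread:", np.ptp(np.diag(F @ np.diag(np.sqrt(d)) @ F.T)))

rng = np.random.default_rng(0)
s = rng.multivariate_normal(np.zeros(n), Sigma)   # one stimulus sample
y = W.T @ s
marg_sd = np.sqrt(np.diag(W.T @ Sigma @ W))       # p_k is N(0, (W^T Sigma W)_kk)
# h_k ~ cumulative p_k^{1/3}: for Gaussian p_k this is the CDF of N(0, 3*var_k)
z = norm.cdf(y, scale=np.sqrt(3) * marg_sd) + noise_sd * rng.standard_normal(n)
print("responses z:", np.round(z, 3))
```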

75 8 Discussion and Conclusions: In this paper, we have studied an optimal diffeomorphic neural population code that minimizes the L2 reconstruction error. [sent-224, score-0.74]

76 The population of neurons is assumed to have sigmoidal activation functions encoding linear combinations of a high-dimensional stimulus with a multivariate Gaussian prior. [sent-225, score-1.18]

77 [Figure residue: panels (a)-(e) comparing the naive and infomax solutions via the 2D filter, filter cross-section, 2D correlation, and correlation cross-section.] [sent-230, score-0.384]

78 (d) The cross-section of the 2D correlation of the filtered stimulus, between the neuron and other neurons on the same row. [sent-236, score-0.291]

79 The optimal solution is provided and compared with solutions which maximize the mutual information. [sent-239, score-0.338]

80 In order to derive the optimal solution, we first show that the Poisson noise model is equivalent to constant Gaussian noise under the variance-stabilizing transformation. [sent-240, score-0.249]

81 The general L2 -minimization problem can be simplified and the optimal solution is analytically derived when the stimulus distribution is Gaussian. [sent-243, score-0.567]

82 Compared to the infomax solutions, a careful evaluation and calculation of the Fisher information matrix is needed for L2 minimization. [sent-244, score-0.367]

83 The manifold of L2-optimal solutions possesses a lower-dimensional structure compared to the infomax solution. [sent-245, score-0.465]

84 Instead of decorrelating the input statistics, the L2 -min solution maintains a certain degree of correlation across the channels. [sent-246, score-0.242]

85 Our result suggests that maximizing mutual information and minimizing the overall decoding loss are not the same in general – encoding redundant information can be beneficial to improve reconstruction accuracy. [sent-247, score-0.415]

86 The optimal solution exhibits center-surround receptive fields, but with a decay differing from those found by decorrelating solutions. [sent-250, score-0.184]

87 Information tuning of populations of neurons in primary visual cortex. [sent-254, score-0.243]

88 Optimal neural population coding of an auditory spatial cue. [sent-268, score-0.396]

89 Neural population coding of sound level adapts to stimulus statistics. [sent-277, score-0.75]

90 A simple coding procedure enhances a neuron's information capacity. [sent-283, score-0.242]

91 Non linear neurons in the low noise limit: A factorial code maximizes information transfer, 1994. [sent-287, score-0.23]

92 Optimal neural rate coding leads to bimodal firing rate distributions. [sent-292, score-0.168]

93 Maximally informative stimuli and tuning curves for sigmoidal rate-coding neurons and populations. [sent-298, score-0.346]

94 Optimal neural tuning curves for arbitrary stimulus distributions: Discrimax, infomax and minimum lp loss. [sent-304, score-0.846]

95 Narrow versus wide tuning curves: What's best for a population code? [sent-314, score-0.336]

96 The effect of correlations on the fisher information of population codes. [sent-317, score-0.228]

97 Neural population coding is optimized by discrete tuning curves. [sent-320, score-0.443]

98 Implicit encoding of prior probabilities in optimal neural populations. [sent-326, score-0.339]

99 Error-based analysis of optimal tuning functions explains phenomena observed in sensory neurons. [sent-330, score-0.235]

100 Characterization of minimum error linear coding with sensory and neural noise. [sent-333, score-0.216]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('stimulus', 0.375), ('wk', 0.34), ('infomax', 0.302), ('diffeomorphic', 0.284), ('hk', 0.235), ('kk', 0.234), ('population', 0.228), ('encoding', 0.159), ('neurons', 0.135), ('fisher', 0.129), ('mutual', 0.126), ('ds', 0.115), ('neuron', 0.115), ('tuning', 0.108), ('coding', 0.107), ('sigmoidal', 0.103), ('winfo', 0.099), ('det', 0.098), ('degree', 0.096), ('tr', 0.09), ('pk', 0.087), ('slope', 0.082), ('optimal', 0.079), ('cov', 0.078), ('covariance', 0.077), ('lter', 0.076), ('wn', 0.075), ('sin', 0.067), ('gaussian', 0.065), ('solution', 0.061), ('neural', 0.061), ('decoding', 0.055), ('analytically', 0.052), ('harper', 0.049), ('parallelogram', 0.049), ('rotermund', 0.049), ('calculate', 0.048), ('unitary', 0.048), ('noise', 0.048), ('poisson', 0.048), ('sensory', 0.048), ('code', 0.047), ('codes', 0.045), ('dimensional', 0.045), ('cos', 0.044), ('bethge', 0.044), ('decorrelating', 0.044), ('stocker', 0.044), ('ring', 0.044), ('var', 0.043), ('reconstruction', 0.041), ('philadelphia', 0.041), ('correlation', 0.041), ('prior', 0.04), ('stabilizing', 0.04), ('marking', 0.04), ('decorrelate', 0.04), ('sound', 0.04), ('attained', 0.039), ('solutions', 0.039), ('nonlinearity', 0.038), ('curve', 0.038), ('lh', 0.038), ('dt', 0.038), ('pennsylvania', 0.038), ('basis', 0.038), ('activation', 0.036), ('pixels', 0.035), ('filter', 0.034), ('loss', 0.034), ('variance', 0.034), ('perception', 0.034), ('fourier', 0.034), ('calculation', 0.033), ('ba', 0.033), ('maximize', 0.033), ('rk', 0.032), ('matrix', 0.032), ('ka', 0.032), ('ica', 0.032), ('naive', 0.031), ('diag', 0.031), ('channels', 0.03), ('nonlinear', 0.03), ('plotted', 0.03), ('representations', 0.03), ('sher', 0.029), ('diagonal', 0.029), ('minimize', 0.028), ('dim', 0.028), ('hn', 0.028), ('directions', 0.028), ('pa', 0.028), ('adaptation', 0.028), ('ns', 0.027), ('md', 0.027), ('blind', 0.027), ('unless', 0.026), ('tk', 0.026), ('ap', 0.026)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0 236 nips-2013-Optimal Neural Population Codes for High-dimensional Stimulus Variables

Author: Zhuo Wang, Alan Stocker, Daniel Lee

Abstract: In many neural systems, information about stimulus variables is often represented in a distributed manner by means of a population code. It is generally assumed that the responses of the neural population are tuned to the stimulus statistics, and most prior work has investigated the optimal tuning characteristics of one or a small number of stimulus variables. In this work, we investigate the optimal tuning for diffeomorphic representations of high-dimensional stimuli. We analytically derive the solution that minimizes the L2 reconstruction loss. We compared our solution with other well-known criteria such as maximal mutual information. Our solution suggests that the optimal weights do not necessarily decorrelate the inputs, and the optimal nonlinearity differs from the conventional equalization solution. Results illustrating these optimal representations are shown for some input distributions that may be relevant for understanding the coding of perceptual pathways. 1

2 0.20352426 193 nips-2013-Mixed Optimization for Smooth Functions

Author: Mehrdad Mahdavi, Lijun Zhang, Rong Jin

Abstract: It is well known that the optimal convergence rate for stochastic optimization of √ smooth functions is O(1/ T ), which is same as stochastic optimization of Lipschitz continuous convex functions. This is in contrast to optimizing smooth functions using full gradients, which yields a convergence rate of O(1/T 2 ). In this work, we consider a new setup for optimizing smooth functions, termed as Mixed Optimization, which allows to access both a stochastic oracle and a full gradient oracle. Our goal is to significantly improve the convergence rate of stochastic optimization of smooth functions by having an additional small number of accesses to the full gradient oracle. We show that, with an O(ln T ) calls to the full gradient oracle and an O(T ) calls to the stochastic oracle, the proposed mixed optimization algorithm is able to achieve an optimization error of O(1/T ). 1

3 0.19808584 6 nips-2013-A Determinantal Point Process Latent Variable Model for Inhibition in Neural Spiking Data

Author: Jasper Snoek, Richard Zemel, Ryan P. Adams

Abstract: Point processes are popular models of neural spiking behavior as they provide a statistical distribution over temporal sequences of spikes and help to reveal the complexities underlying a series of recorded action potentials. However, the most common neural point process models, the Poisson process and the gamma renewal process, do not capture interactions and correlations that are critical to modeling populations of neurons. We develop a novel model based on a determinantal point process over latent embeddings of neurons that effectively captures and helps visualize complex inhibitory and competitive interaction. We show that this model is a natural extension of the popular generalized linear model to sets of interacting neurons. The model is extended to incorporate gain control or divisive normalization, and the modulation of neural spiking based on periodic phenomena. Applied to neural spike recordings from the rat hippocampus, we see that the model captures inhibitory relationships, a dichotomy of classes of neurons, and a periodic modulation by the theta rhythm known to be present in the data. 1

4 0.19051617 305 nips-2013-Spectral methods for neural characterization using generalized quadratic models

Author: Il M. Park, Evan W. Archer, Nicholas Priebe, Jonathan W. Pillow

Abstract: We describe a set of fast, tractable methods for characterizing neural responses to high-dimensional sensory stimuli using a model we refer to as the generalized quadratic model (GQM). The GQM consists of a low-rank quadratic function followed by a point nonlinearity and exponential-family noise. The quadratic function characterizes the neuron’s stimulus selectivity in terms of a set linear receptive fields followed by a quadratic combination rule, and the invertible nonlinearity maps this output to the desired response range. Special cases of the GQM include the 2nd-order Volterra model [1, 2] and the elliptical Linear-Nonlinear-Poisson model [3]. Here we show that for “canonical form” GQMs, spectral decomposition of the first two response-weighted moments yields approximate maximumlikelihood estimators via a quantity called the expected log-likelihood. The resulting theory generalizes moment-based estimators such as the spike-triggered covariance, and, in the Gaussian noise case, provides closed-form estimators under a large class of non-Gaussian stimulus distributions. We show that these estimators are fast and provide highly accurate estimates with far lower computational cost than full maximum likelihood. Moreover, the GQM provides a natural framework for combining multi-dimensional stimulus sensitivity and spike-history dependencies within a single model. We show applications to both analog and spiking data using intracellular recordings of V1 membrane potential and extracellular recordings of retinal spike trains. 1

5 0.18413562 49 nips-2013-Bayesian Inference and Online Experimental Design for Mapping Neural Microcircuits

Author: Ben Shababo, Brooks Paige, Ari Pakman, Liam Paninski

Abstract: With the advent of modern stimulation techniques in neuroscience, the opportunity arises to map neuron to neuron connectivity. In this work, we develop a method for efficiently inferring posterior distributions over synaptic strengths in neural microcircuits. The input to our algorithm is data from experiments in which action potentials from putative presynaptic neurons can be evoked while a subthreshold recording is made from a single postsynaptic neuron. We present a realistic statistical model which accounts for the main sources of variability in this experiment and allows for significant prior information about the connectivity and neuronal cell types to be incorporated if available. Due to the technical challenges and sparsity of these systems, it is important to focus experimental time stimulating the neurons whose synaptic strength is most ambiguous, therefore we also develop an online optimal design algorithm for choosing which neurons to stimulate at each trial. 1

6 0.1677482 141 nips-2013-Inferring neural population dynamics from multiple partial recordings of the same neural circuit

7 0.16732731 69 nips-2013-Context-sensitive active sensing in humans

8 0.1667778 237 nips-2013-Optimal integration of visual speed across different spatiotemporal frequency channels

9 0.1619112 121 nips-2013-Firing rate predictions in optimal balanced networks

10 0.12736295 173 nips-2013-Least Informative Dimensions

11 0.11600924 136 nips-2013-Hierarchical Modular Optimization of Convolutional Networks Achieves Representations Similar to Macaque IT and Human Ventral Stream

12 0.11189733 239 nips-2013-Optimistic policy iteration and natural actor-critic: A unifying view and a non-optimality result

13 0.10783771 208 nips-2013-Neural representation of action sequences: how far can a simple snippet-matching model take us?

14 0.093117379 262 nips-2013-Real-Time Inference for a Gamma Process Model of Neural Spiking

15 0.091654226 83 nips-2013-Deep Fisher Networks for Large-Scale Image Classification

16 0.086813174 286 nips-2013-Robust learning of low-dimensional dynamics from large neural ensembles

17 0.0865383 267 nips-2013-Recurrent networks of coupled Winner-Take-All oscillators for solving constraint satisfaction problems

18 0.086402051 205 nips-2013-Multisensory Encoding, Decoding, and Identification

19 0.084443748 351 nips-2013-What Are the Invariant Occlusive Components of Image Patches? A Probabilistic Generative Approach

20 0.083656631 183 nips-2013-Mapping paradigm ontologies to and from the brain


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.202), (1, 0.086), (2, -0.094), (3, -0.062), (4, -0.268), (5, -0.015), (6, -0.096), (7, -0.029), (8, 0.034), (9, 0.118), (10, -0.023), (11, 0.083), (12, -0.02), (13, -0.072), (14, -0.003), (15, -0.013), (16, -0.064), (17, -0.017), (18, -0.076), (19, 0.003), (20, -0.001), (21, 0.035), (22, -0.045), (23, -0.091), (24, -0.019), (25, -0.101), (26, -0.095), (27, 0.118), (28, -0.005), (29, -0.077), (30, -0.038), (31, -0.082), (32, -0.076), (33, -0.094), (34, -0.056), (35, -0.102), (36, 0.02), (37, -0.047), (38, -0.038), (39, 0.055), (40, -0.008), (41, -0.105), (42, -0.11), (43, 0.05), (44, 0.079), (45, -0.005), (46, -0.082), (47, 0.112), (48, -0.08), (49, -0.032)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.95675766 236 nips-2013-Optimal Neural Population Codes for High-dimensional Stimulus Variables

Author: Zhuo Wang, Alan Stocker, Daniel Lee

Abstract: In many neural systems, information about stimulus variables is often represented in a distributed manner by means of a population code. It is generally assumed that the responses of the neural population are tuned to the stimulus statistics, and most prior work has investigated the optimal tuning characteristics of one or a small number of stimulus variables. In this work, we investigate the optimal tuning for diffeomorphic representations of high-dimensional stimuli. We analytically derive the solution that minimizes the L2 reconstruction loss. We compared our solution with other well-known criteria such as maximal mutual information. Our solution suggests that the optimal weights do not necessarily decorrelate the inputs, and the optimal nonlinearity differs from the conventional equalization solution. Results illustrating these optimal representations are shown for some input distributions that may be relevant for understanding the coding of perceptual pathways. 1

2 0.80611849 305 nips-2013-Spectral methods for neural characterization using generalized quadratic models

Author: Il M. Park, Evan W. Archer, Nicholas Priebe, Jonathan W. Pillow

Abstract: We describe a set of fast, tractable methods for characterizing neural responses to high-dimensional sensory stimuli using a model we refer to as the generalized quadratic model (GQM). The GQM consists of a low-rank quadratic function followed by a point nonlinearity and exponential-family noise. The quadratic function characterizes the neuron’s stimulus selectivity in terms of a set linear receptive fields followed by a quadratic combination rule, and the invertible nonlinearity maps this output to the desired response range. Special cases of the GQM include the 2nd-order Volterra model [1, 2] and the elliptical Linear-Nonlinear-Poisson model [3]. Here we show that for “canonical form” GQMs, spectral decomposition of the first two response-weighted moments yields approximate maximumlikelihood estimators via a quantity called the expected log-likelihood. The resulting theory generalizes moment-based estimators such as the spike-triggered covariance, and, in the Gaussian noise case, provides closed-form estimators under a large class of non-Gaussian stimulus distributions. We show that these estimators are fast and provide highly accurate estimates with far lower computational cost than full maximum likelihood. Moreover, the GQM provides a natural framework for combining multi-dimensional stimulus sensitivity and spike-history dependencies within a single model. We show applications to both analog and spiking data using intracellular recordings of V1 membrane potential and extracellular recordings of retinal spike trains. 1

3 0.80464816 237 nips-2013-Optimal integration of visual speed across different spatiotemporal frequency channels

Author: Matjaz Jogan, Alan Stocker

Abstract: How do humans perceive the speed of a coherent motion stimulus that contains motion energy in multiple spatiotemporal frequency bands? Here we tested the idea that perceived speed is the result of an integration process that optimally combines speed information across independent spatiotemporal frequency channels. We formalized this hypothesis with a Bayesian observer model that combines the likelihood functions provided by the individual channel responses (cues). We experimentally validated the model with a 2AFC speed discrimination experiment that measured subjects’ perceived speed of drifting sinusoidal gratings with different contrasts and spatial frequencies, and of various combinations of these single gratings. We found that the perceived speeds of the combined stimuli are independent of the relative phase of the underlying grating components. The results also show that the discrimination thresholds are smaller for the combined stimuli than for the individual grating components, supporting the cue combination hypothesis. The proposed Bayesian model fits the data well, accounting for the full psychometric functions of both simple and combined stimuli. Fits are improved if we assume that the channel responses are subject to divisive normalization. Our results provide an important step toward a more complete model of visual motion perception that can predict perceived speeds for coherent motion stimuli of arbitrary spatial structure. 1

4 0.74983627 205 nips-2013-Multisensory Encoding, Decoding, and Identification

Author: Aurel A. Lazar, Yevgeniy Slutskiy

Abstract: We investigate a spiking neuron model of multisensory integration. Multiple stimuli from different sensory modalities are encoded by a single neural circuit comprised of a multisensory bank of receptive fields in cascade with a population of biophysical spike generators. We demonstrate that stimuli of different dimensions can be faithfully multiplexed and encoded in the spike domain and derive tractable algorithms for decoding each stimulus from the common pool of spikes. We also show that the identification of multisensory processing in a single neuron is dual to the recovery of stimuli encoded with a population of multisensory neurons, and prove that only a projection of the circuit onto input stimuli can be identified. We provide an example of multisensory integration using natural audio and video and discuss the performance of the proposed decoding and identification algorithms. 1

5 0.66665089 121 nips-2013-Firing rate predictions in optimal balanced networks

Author: David G. Barrett, Sophie Denève, Christian K. Machens

Abstract: How are firing rates in a spiking network related to neural input, connectivity and network function? This is an important problem because firing rates are a key measure of network activity, in both the study of neural computation and neural network dynamics. However, it is a difficult problem, because the spiking mechanism of individual neurons is highly non-linear, and these individual neurons interact strongly through connectivity. We develop a new technique for calculating firing rates in optimal balanced networks. These are particularly interesting networks because they provide an optimal spike-based signal representation while producing cortex-like spiking activity through a dynamic balance of excitation and inhibition. We can calculate firing rates by treating balanced network dynamics as an algorithm for optimising signal representation. We identify this algorithm and then calculate firing rates by finding the solution to the algorithm. Our firing rate calculation relates network firing rates directly to network input, connectivity and function. This allows us to explain the function and underlying mechanism of tuning curves in a variety of systems. 1

6 0.65337193 264 nips-2013-Reciprocally Coupled Local Estimators Implement Bayesian Information Integration Distributively

7 0.63959813 208 nips-2013-Neural representation of action sequences: how far can a simple snippet-matching model take us?

8 0.61823481 6 nips-2013-A Determinantal Point Process Latent Variable Model for Inhibition in Neural Spiking Data

9 0.57734281 49 nips-2013-Bayesian Inference and Online Experimental Design for Mapping Neural Microcircuits

10 0.53858697 141 nips-2013-Inferring neural population dynamics from multiple partial recordings of the same neural circuit

11 0.52732021 136 nips-2013-Hierarchical Modular Optimization of Convolutional Networks Achieves Representations Similar to Macaque IT and Human Ventral Stream

12 0.52212143 53 nips-2013-Bayesian inference for low rank spatiotemporal neural receptive fields

13 0.51212811 262 nips-2013-Real-Time Inference for a Gamma Process Model of Neural Spiking

14 0.47994551 210 nips-2013-Noise-Enhanced Associative Memories

15 0.43551597 86 nips-2013-Demixing odors - fast inference in olfaction

16 0.42035967 183 nips-2013-Mapping paradigm ontologies to and from the brain

17 0.41699657 173 nips-2013-Least Informative Dimensions

18 0.40499896 267 nips-2013-Recurrent networks of coupled Winner-Take-All oscillators for solving constraint satisfaction problems

19 0.40391704 351 nips-2013-What Are the Invariant Occlusive Components of Image Patches? A Probabilistic Generative Approach

20 0.37716311 193 nips-2013-Mixed Optimization for Smooth Functions


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(2, 0.016), (16, 0.028), (22, 0.149), (33, 0.135), (34, 0.116), (41, 0.062), (49, 0.098), (56, 0.098), (70, 0.071), (85, 0.022), (89, 0.072), (93, 0.047), (95, 0.013)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.88395005 343 nips-2013-Unsupervised Structure Learning of Stochastic And-Or Grammars

Author: Kewei Tu, Maria Pavlovskaia, Song-Chun Zhu

Abstract: Stochastic And-Or grammars compactly represent both compositionality and reconfigurability and have been used to model different types of data such as images and events. We present a unified formalization of stochastic And-Or grammars that is agnostic to the type of the data being modeled, and propose an unsupervised approach to learning the structures as well as the parameters of such grammars. Starting from a trivial initial grammar, our approach iteratively induces compositions and reconfigurations in a unified manner and optimizes the posterior probability of the grammar. In our empirical evaluation, we applied our approach to learning event grammars and image grammars and achieved comparable or better performance than previous approaches. 1

same-paper 2 0.86063933 236 nips-2013-Optimal Neural Population Codes for High-dimensional Stimulus Variables

Author: Zhuo Wang, Alan Stocker, Daniel Lee

Abstract: In many neural systems, information about stimulus variables is often represented in a distributed manner by means of a population code. It is generally assumed that the responses of the neural population are tuned to the stimulus statistics, and most prior work has investigated the optimal tuning characteristics of one or a small number of stimulus variables. In this work, we investigate the optimal tuning for diffeomorphic representations of high-dimensional stimuli. We analytically derive the solution that minimizes the L2 reconstruction loss. We compared our solution with other well-known criteria such as maximal mutual information. Our solution suggests that the optimal weights do not necessarily decorrelate the inputs, and the optimal nonlinearity differs from the conventional equalization solution. Results illustrating these optimal representations are shown for some input distributions that may be relevant for understanding the coding of perceptual pathways. 1

3 0.8374449 71 nips-2013-Convergence of Monte Carlo Tree Search in Simultaneous Move Games

Author: Viliam Lisy, Vojta Kovarik, Marc Lanctot, Branislav Bosansky

Abstract: unkown-abstract

4 0.82702535 121 nips-2013-Firing rate predictions in optimal balanced networks

Author: David G. Barrett, Sophie Denève, Christian K. Machens

Abstract: How are firing rates in a spiking network related to neural input, connectivity and network function? This is an important problem because firing rates are a key measure of network activity, in both the study of neural computation and neural network dynamics. However, it is a difficult problem, because the spiking mechanism of individual neurons is highly non-linear, and these individual neurons interact strongly through connectivity. We develop a new technique for calculating firing rates in optimal balanced networks. These are particularly interesting networks because they provide an optimal spike-based signal representation while producing cortex-like spiking activity through a dynamic balance of excitation and inhibition. We can calculate firing rates by treating balanced network dynamics as an algorithm for optimising signal representation. We identify this algorithm and then calculate firing rates by finding the solution to the algorithm. Our firing rate calculation relates network firing rates directly to network input, connectivity and function. This allows us to explain the function and underlying mechanism of tuning curves in a variety of systems. 1

5 0.81488991 141 nips-2013-Inferring neural population dynamics from multiple partial recordings of the same neural circuit

Author: Srini Turaga, Lars Buesing, Adam M. Packer, Henry Dalgleish, Noah Pettit, Michael Hausser, Jakob Macke

Abstract: Simultaneous recordings of the activity of large neural populations are extremely valuable as they can be used to infer the dynamics and interactions of neurons in a local circuit, shedding light on the computations performed. It is now possible to measure the activity of hundreds of neurons using 2-photon calcium imaging. However, many computations are thought to involve circuits consisting of thousands of neurons, such as cortical barrels in rodent somatosensory cortex. Here we contribute a statistical method for “stitching” together sequentially imaged sets of neurons into one model by phrasing the problem as fitting a latent dynamical system with missing observations. This method allows us to substantially expand the population-sizes for which population dynamics can be characterized—beyond the number of simultaneously imaged neurons. In particular, we demonstrate using recordings in mouse somatosensory cortex that this method makes it possible to predict noise correlations between non-simultaneously recorded neuron pairs. 1

6 0.8118819 77 nips-2013-Correlations strike back (again): the case of associative memory retrieval

7 0.80814821 303 nips-2013-Sparse Overlapping Sets Lasso for Multitask Learning and its Application to fMRI Analysis

8 0.80622852 262 nips-2013-Real-Time Inference for a Gamma Process Model of Neural Spiking

9 0.80355465 221 nips-2013-On the Expressive Power of Restricted Boltzmann Machines

10 0.80042267 22 nips-2013-Action is in the Eye of the Beholder: Eye-gaze Driven Model for Spatio-Temporal Action Localization

11 0.8001858 49 nips-2013-Bayesian Inference and Online Experimental Design for Mapping Neural Microcircuits

12 0.79465193 64 nips-2013-Compete to Compute

13 0.79341787 345 nips-2013-Variance Reduction for Stochastic Gradient Optimization

14 0.78923464 131 nips-2013-Geometric optimisation on positive definite matrices for elliptically contoured distributions

15 0.78896648 304 nips-2013-Sparse nonnegative deconvolution for compressive calcium imaging: algorithms and phase transitions

16 0.78702348 286 nips-2013-Robust learning of low-dimensional dynamics from large neural ensembles

17 0.78576946 310 nips-2013-Statistical analysis of coupled time series with Kernel Cross-Spectral Density operators.

18 0.78537613 173 nips-2013-Least Informative Dimensions

19 0.78527719 70 nips-2013-Contrastive Learning Using Spectral Methods

20 0.78501529 301 nips-2013-Sparse Additive Text Models with Low Rank Background