nips nips2010 nips2010-81 knowledge-graph by maker-knowledge-mining

81 nips-2010-Evaluating neuronal codes for inference using Fisher information


Source: pdf

Author: Ralf Haefner, Matthias Bethge

Abstract: Many studies have explored the impact of response variability on the quality of sensory codes. The source of this variability is almost always assumed to be intrinsic to the brain. However, when inferring a particular stimulus property, variability associated with other stimulus attributes also effectively acts as noise. Here we study the impact of such stimulus-induced response variability for the case of binocular disparity inference. We characterize the response distribution for the binocular energy model in response to random dot stereograms and find it to be very different from the Poisson-like noise usually assumed. We then compute the Fisher information with respect to binocular disparity, present in the monocular inputs to the standard model of early binocular processing, and thereby obtain an upper bound on how much information a model could theoretically extract from them. Then we analyze the information loss incurred by the different ways of combining those inputs to produce a scalar single-neuron response. We find that in the case of depth inference, monocular stimulus variability places a greater limit on the extractable information than intrinsic neuronal noise for typical spike counts. Furthermore, the largest loss of information is incurred by the standard model for position disparity neurons (tuned-excitatory), which are the most ubiquitous in monkey primary visual cortex, while more information from the inputs is preserved in phase-disparity neurons (tuned-near or tuned-far), primarily found in higher cortical regions.

Reference: text


Summary: the most important sentences generated by tfidf model

sentIndex sentText sentNum sentScore

1 …41, 72076 Tübingen, Germany. Abstract: Many studies have explored the impact of response variability on the quality of sensory codes. [sent-3, score-0.271]

2 The source of this variability is almost always assumed to be intrinsic to the brain. [sent-4, score-0.184]

3 However, when inferring a particular stimulus property, variability associated with other stimulus attributes also effectively acts as noise. [sent-5, score-0.484]

4 Here we study the impact of such stimulus-induced response variability for the case of binocular disparity inference. [sent-6, score-1.266]

5 We characterize the response distribution for the binocular energy model in response to random dot stereograms and find it to be very different from the Poisson-like noise usually assumed. [sent-7, score-0.746]

6 We then compute the Fisher information with respect to binocular disparity, present in the monocular inputs to the standard model of early binocular processing, and thereby obtain an upper bound on how much information a model could theoretically extract from them. [sent-8, score-1.188]

7 We find that in the case of depth inference, monocular stimulus variability places a greater limit on the extractable information than intrinsic neuronal noise for typical spike counts. [sent-10, score-0.836]

8 At the core of our approach lies the interpretation of neuronal response variability due to nuisance stimulus variability as noise. [sent-14, score-0.601]

9 Many theoretical and experimental studies have probed the impact of intrinsic response variability on the quality of sensory codes ([1, 12] and references therein). [sent-15, score-0.396]

10 However, most neurons are responsive to more than one stimulus attribute. [sent-16, score-0.203]

11 So when trying to infer a particular stimulus property, the brain needs to be able to ignore the effect of confounding attributes that also influence the neuron’s response. [sent-17, score-0.224]

12 We propose to evaluate the usefulness of a population code for inference over a particular parameter by treating the neuronal response variability due to nuisance stimulus attributes as noise, equivalent to intrinsic noise (e.g. …). [sent-18, score-0.727]

13 We explore the implications of this new approach for the model system of stereo vision where the inference task is to extract depth from binocular images. [sent-21, score-0.469]

14 Figure 1: Left: Example random dot stereogram (RDS). [Figure 1 panels: left/right images, left/right RFs, tuning curve; axes: response vs. disparity.] [sent-24, score-1.694]

15 in the monocular inputs to the standard model of early binocular processing and thereby obtain an upper bound on how precisely a model could theoretically extract depth. [sent-26, score-0.802]

16 We start by giving a brief introduction to the two principal flavors of the binocular energy model. [sent-30, score-0.403]

17 We then retrace the processing steps and compute the Fisher information with respect to depth inference that is present: first in the monocular inputs, then after binocular combination, and finally for the resulting tuning curves. [sent-31, score-0.841]

18 2 Binocular disparity as a model system Stereo vision has the advantage of a clear separation between the relevant stimulus dimension – binocular disparity – and the confounding or nuisance stimulus attributes – monocular image structure ([9]). [sent-32, score-2.447]

19 The challenge in inferring disparity in image pairs consists in distinguishing true from false matches, regardless of the monocular structures in the two images. [sent-33, score-1.013]

20 The stimuli that test this system in the most general way are random dot stereograms (RDS), which consist of nearly identical dot patterns in either eye (see Figure 1). [sent-34, score-0.323]
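
The RDS construction is simple enough to sketch in code. Below is a minimal, hypothetical 1-D version (the function name make_rds and all parameter values are our own illustration, not from the paper): the right-eye image is the left-eye dot pattern displaced by the stimulus disparity. The uncorrelated (±∞ disparity) condition used later corresponds to drawing the two patterns independently instead.

```python
import numpy as np

def make_rds(n_pixels=200, dot_density=0.5, disparity=4, rng=None):
    """1-D random dot stereogram: the right-eye image is the left-eye
    dot pattern shifted by `disparity` pixels (wrap-around for simplicity)."""
    rng = np.random.default_rng() if rng is None else rng
    # zero-mean +/-1 dots so that linear RF outputs have zero mean
    left = rng.choice([-1.0, 1.0], size=n_pixels, p=[1 - dot_density, dot_density])
    right = np.roll(left, disparity)  # identical monocular structure, displaced between eyes
    return left, right
```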

21–22 Since RDS do not contain any monocular depth cues (e.g. size or perspective), the brain needs to correctly match the monocular image features across eyes to compute disparity. [sent-36, score-0.341] [sent-38, score-0.352]

23 The standard model for binocular processing in primary visual cortex (V1) is the binocular energy model ([5, 10]). [sent-39, score-0.856]

24 It explains the response of disparity-selective V1 neurons by linearly combining the outputs of monocular simple cells and passing the sum through a squaring nonlinearity (illustrated in Figure 1). [sent-40, score-0.59]
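
As a concrete illustration of that combination rule, here is a hedged sketch of one even-symmetric (position-disparity) energy unit; the Gabor RF parameters and helper names are assumptions for illustration, not values from the paper.

```python
import numpy as np

def gabor_rf(x, f=0.1, phase=0.0, sigma=10.0, center=0.0):
    """Monocular simple-cell RF: a Gabor (cosine carrier times Gaussian envelope)."""
    g = np.cos(2 * np.pi * f * (x - center) - phase) * np.exp(-(x - center) ** 2 / (2 * sigma ** 2))
    return g / np.linalg.norm(g)

def energy_response(left, right, rf_e, rf_o):
    """Binocular energy model, equation (1): sum the left- and right-eye
    simple-cell outputs of a quadrature pair, square, and add the subunits."""
    nLe, nLo = rf_e @ left, rf_o @ left
    nRe, nRo = rf_e @ right, rf_o @ right
    return (nLe + nRe) ** 2 + (nLo + nRo) ** 2
```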

25 The disparity tuning curve resulting from the combination in equation (1) is even-symmetric (illustrated in Figure 1 in blue) and is one of two primary types of tuning curves found in cortex ([5]). [sent-44, score-1.072]

26 [Equation (2), the phase-disparity counterpart of equation (1), is garbled in this extraction.] Note that the two models are identical in their monocular inputs and the monocular part of their output (the first four terms in equations 1 and 2) and only differ in their binocular interaction terms (in brackets). [sent-47, score-1.068]

27 The only way in which the first model can implement preferred disparities other than zero is by a positional displacement of the RFs in the two eyes with respect to each other (the disparity tuning curve achieves its maximum when the disparity in the image matches the disparity between the RFs). [sent-48, score-2.375]

28 The second model, on the other hand, achieves non-zero preferred disparities by employing a phase shift between the left and right RF (90 deg in our case). [sent-49, score-0.221]

29 It is therefore considered to be a phase-disparity model, while the first one is called a position-disparity model. [sent-50, score-0.712]
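
Combining the two sketches above, a hedged Monte Carlo estimate of the two tuning curves looks as follows. The phase-disparity unit is written here with the common quadrature sign convention (one subunit carries a minus sign); the paper's exact equation (2) is garbled in this extraction, so treat the sign as an assumption.

```python
import numpy as np

x = np.arange(200)
rf_e = gabor_rf(x, phase=0.0, center=100)        # even-symmetric RF
rf_o = gabor_rf(x, phase=np.pi / 2, center=100)  # odd-symmetric RF (90 deg shift)

rng = np.random.default_rng(0)
disparities = range(-20, 21)
tune_even, tune_odd = [], []
for d in disparities:
    r_e, r_o = [], []
    for _ in range(2000):                         # average over many RDS patterns
        L, R = make_rds(disparity=d, rng=rng)
        nLe, nLo, nRe, nRo = rf_e @ L, rf_o @ L, rf_e @ R, rf_o @ R
        r_e.append((nLe + nRe) ** 2 + (nLo + nRo) ** 2)   # position disparity, eq. (1)
        r_o.append((nLe + nRo) ** 2 + (nLo - nRe) ** 2)   # phase disparity (assumed signs)
    tune_even.append(np.mean(r_e))
    tune_odd.append(np.mean(r_o))
# tune_even should be even-symmetric about d = 0 (tuned-excitatory);
# tune_odd should be odd-symmetric (tuned-near/tuned-far), as in Figure 4A.
```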

30 3 Results: How much information the response of a neuron carries about a particular stimulus attribute depends both on the sensitivity of the response to changes in that attribute and on the variability (or uncertainty) of the response across all stimuli while keeping that attribute fixed. [sent-51, score-0.739]

31 3.1 Response variability: Figure 2 shows the mean of the binocular response of the two models. [sent-54, score-0.689]

32 The variation of the response around the mean due to the variation in monocular image structure in the RDS is shown in Figure 3 (top row) for four exemplary disparities: −1, 0, 1 and uncorrelated (±∞), indicated in Figure 2. [sent-55, score-0.463]

33 Unlike in the commonly assumed case of intrinsic noise, p_d^binoc(r|d) – the stimulus-conditioned response distribution – is far from Poisson or Gaussian. [sent-56, score-0.219]

34 Interestingly, its mode is always at zero – the average response to uncorrelated stimuli – and the fact that the mean depends on the stimulus disparity is primarily due to the disparity-dependence of the skew of the response distribution (Figure 3). [sent-57, score-1.177]
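
A quick way to see this shape empirically is to histogram the binocular (interaction) part of the response over many RDS draws; the sketch below reuses the hypothetical helpers from above and reports mean, median, and skewness per disparity.

```python
import numpy as np

rng = np.random.default_rng(1)

def binoc_samples(d, n=20000):
    """Interaction terms of the energy response at fixed stimulus disparity d."""
    out = np.empty(n)
    for i in range(n):
        L, R = make_rds(disparity=d, rng=rng)
        out[i] = 2 * ((rf_e @ L) * (rf_e @ R) + (rf_o @ L) * (rf_o @ R))
    return out

for d in (-1, 0, 1):
    r = binoc_samples(d)
    skew = np.mean((r - r.mean()) ** 3) / r.std() ** 3
    print(f"d={d:+d}  mean={r.mean():.3f}  median={np.median(r):.3f}  skew={skew:.2f}")
# Expected pattern: the histogram of r peaks near zero for every d, and the
# disparity-dependent mean is carried mostly by the skew, not by a moving mode.
```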

35 The skew in turn depends on the disparity through the disparity-dependent correlation between the RF outputs, as illustrated in Figure 3 (bottom row). [Figure 2 caption fragment: Binocular responses for even (blue) and …] [sent-58, score-0.769]

36 … at the zero disparity, the disparities at which r_odd takes its minimum and maximum, respectively, and the uncorrelated case (infinite disparity). [sent-60, score-0.856]

37 In the case of zero disparity (identical images in left and right eye), the correlation between the outputs of left and right RFs (both even, or both odd) is 1. [sent-63, score-0.861]

38 What is also apparent is that the binocular energy model with phase disparity (where each even-symmetric RF is paired with an odd-symmetric one) never achieves perfect correlation between the left and right eye and only covers smaller values. [sent-65, score-1.286]

39 Fisher information contained in monocular inputs: First, we quantify the information contained in the inputs to the energy model using Fisher information. [sent-69, score-0.607]

40 Since the ν are drawn from identical Gaussians, the mean responses of the … [truncated]. [Footnote 2: We use "position disparity model" and "even-symmetric tuning" interchangeably, as well as "phase disparity model" and "odd-symmetric tuning".] [sent-71, score-1.622]

41 Unfortunately, the term disparity is used both for disparities between the RFs and for disparities between the left and right images (in the stimulus). [sent-72, score-0.924]

42 If not indicated otherwise, we will always refer to stimulus disparity for the rest of the paper. [sent-73, score-0.842]

43 Blue (ν_L vs ν_R for like-symmetric RF pairs) and red (ν_L vs ν_R for unlike-symmetric pairs) colors refer to the model with even-symmetric tuning curve and odd-symmetric tuning curve, respectively. [sent-82, score-0.353]

44 The disparity values for each column are ±∞, −1, 0 and 1, corresponding to those highlighted in Figure 2. [sent-83, score-0.677]

45 Because the binocular part of the energy model response, or disparity tuning curve, is the convolution of the left and right RFs, the phase of the Gabor describing the disparity tuning curve is given by the difference between the phases of the corresponding RFs. [sent-86, score-2.166]

46 We obtain I_inputs(d) = 2 [(1 + a² − c²) a′² + (1 + c² − a²) c′² + 4 a c a′ c′] / (1 − a² − c²)²  (3), where we omitted the stimulus dependence of a(d) and c(d) for clarity of exposition and where ′ denotes the 1st derivative with respect to the stimulus d. [sent-88, score-0.33]
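
Equation (3) is easy to evaluate once a(d) and c(d) are specified. Here is a hedged numerical sketch that models them as even- and odd-phase Gabors of disparity, consistent with the Gabor footnote below; the parameter values and helper names are assumptions.

```python
import numpy as np

def corr_gabor(d, phase, f=0.1, sigma=10.0):
    """Gabor model of the disparity-dependent input correlations a(d), c(d)."""
    return np.cos(2 * np.pi * f * d - phase) * np.exp(-d ** 2 / (2 * sigma ** 2))

def fisher_inputs(d, eps=1e-4):
    """Equation (3), with derivatives a'(d), c'(d) taken numerically."""
    a, c = corr_gabor(d, 0.0), corr_gabor(d, np.pi / 2)
    ap = (corr_gabor(d + eps, 0.0) - corr_gabor(d - eps, 0.0)) / (2 * eps)
    cp = (corr_gabor(d + eps, np.pi / 2) - corr_gabor(d - eps, np.pi / 2)) / (2 * eps)
    num = (1 + a**2 - c**2) * ap**2 + (1 + c**2 - a**2) * cp**2 + 4 * a * c * ap * cp
    return 2 * num / (1 - a**2 - c**2) ** 2

d = np.linspace(-4, 4, 800)  # grid chosen so d = 0, where a = 1 and eq. (3) diverges, is skipped
I_inputs = fisher_inputs(d)
```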

47 We find that the Fisher information available in the inputs diverges at zero disparity (more generally, at the disparity given by the difference between the centers of the left and right RFs). [sent-91, score-0.88]

48 This means that the ability to discriminate zero disparity from … [truncated]. [Footnote: A Gabor function is defined as cos(2πf d − φ) exp[−(d − d₀)²/(2σ²)], where f is the spatial frequency, d is disparity, φ is the Gabor phase, d₀ is the envelope center (set to zero here, WLOG) and σ the envelope bandwidth.] [sent-92, score-0.765]

49 The assumption that the binocular interaction can be modeled by a Gabor is not important for the principal results of this paper. [sent-93, score-0.379]

50 In fact, the formulas for the Fisher information in the monocular inputs and in the disparity tuning curves derived below hold for other (reasonable) choices for a(d) and c(d) as well. [sent-94, score-1.241]

51 Figure 4: A: Disparity tuning curves for the model using position disparity (even) and phase disparity (odd) in blue and red, respectively. [Panel x-axes: disparity d, −4 to 4.] [sent-101, score-0.271]

52 B: Black: Fisher information contained in the monocular inputs. [sent-102, score-0.328]

53 Blue: Fisher information left after combining inputs from left and right eye according to the position disparity model. [sent-103, score-0.997]

54 Red: Fisher information after combining inputs using the phase disparity model. [sent-104, score-0.878]

55 D: Same as C but with added Gaussian noise in the monocular inputs. [sent-109, score-0.328]

56 In reality, intrinsic neuronal variability will limit the Fisher information at zero. [sent-111, score-0.264]

57 Combination of left and right inputs: Next we analyze the information that remains after linearly combining the monocular inputs in the energy model. [sent-114, score-0.623]

58 It follows that the 4-dimensional monocular input space is reduced to a 2-dimensional binocular one for each model, sampled by (ν_L^e + ν_R^e, ν_L^o + ν_R^o) and (ν_L^e + ν_R^o, ν_L^o + ν_R^e), respectively. [sent-115, score-0.644]

59 Again, the marginal distributions are Gaussians with zero mean independent of stimulus disparity. [sent-116, score-0.185]

60 We note here that the Fisher information for the final tuning curve for the position-disparity model is the same as in equation (4) and therefore we will postpone a more detailed discussion of it until section 3. [sent-122, score-0.186]

61 Additive Gaussian noise with variance σ_N² on the monocular filter outputs eliminates the singularity: det C = 1 + σ_N² − a² − c² ≥ σ_N². [sent-127, score-0.394]
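
The determinant bound is a one-line identity once one notes a² + c² ≤ 1; a hedged numerical check, reusing corr_gabor from the sketch above, is:

```python
import numpy as np

sigma_N2 = 0.1                                   # assumed intrinsic noise variance
d = np.linspace(-4, 4, 801)
a, c = corr_gabor(d, 0.0), corr_gabor(d, np.pi / 2)
det_C = 1 + sigma_N2 - a**2 - c**2
assert np.all(det_C >= sigma_N2 - 1e-12)         # a^2 + c^2 = envelope^2 <= 1
```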

62 While it loses 50% of the Fisher information present in the inputs, the Fisher information after combining left and right RF outputs is much larger in this case than for the position disparity model explored above. [sent-130, score-0.895]

63 Why are the two ways of combining the monocular outputs not symmetric? [sent-132, score-0.366]

64 Insight into this question can be gained by looking at the binocular interaction terms in the quadratic expansion of the feature space for the two models. [sent-133, score-0.379]

65 For the position disparity model we obtain the 3-dimensional space (ν_L^e ν_R^e, ν_L^o ν_R^o, ν_L^e ν_R^o + ν_L^o ν_R^e), of which the third dimension cannot contribute to the Fisher information since ν_L^e ν_R^o + ν_L^o ν_R^e = 0. [sent-134, score-0.745]

66 While this is not yet a rigorous analysis of the differences between the models at the stage of binocular combination, it serves as a starting point for a future investigation. [sent-138, score-0.352]

67 Disparity tuning curves: In order to collapse the 2-dimensional binocular inputs into a scalar output that can be coded in the spike rate of a neuron, the energy model postulates a squaring output nonlinearity after each linear combination, followed by summation of the results. [sent-141, score-0.827]

68 … to model the empirically observed relationship between disparity d and mean spike rate r. [sent-148, score-0.723]

69 Remarkably, this is exactly the same amount of information that is available after summing left and right RFs (see equation 4), so none is lost after squaring and combining the quadrature pair. [sent-150, score-0.179]

70 It is also interesting to note that the general form for Ieven (d) differs from the Fisher information based on the Poisson noise model (and ignoring stimulus variability as considered here) only by the exponent of 2 in the denominator. [sent-152, score-0.341]

71 Conversely, it follows that when Fisher information only takes the neuronal noise into consideration, it greatly overestimates the information that the neuron carries with respect to the to-be-inferred stimulus parameter for realistic spike counts (of greater than two). [sent-154, score-0.379]
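
That denominator difference is easy to make concrete. In the hedged sketch below, I_stim uses the squared-denominator form attributed to I_even(d) in the text (up to a constant factor), while I_poisson is the textbook f′(d)²/f(d) for Poisson spiking; the toy tuning curve is an assumption.

```python
import numpy as np

def f_tuning(d):
    """Toy disparity tuning curve: mean spike count per trial (assumed shape)."""
    return 1.0 + 10.0 * np.exp(-d ** 2 / 2.0)

def f_prime(d, eps=1e-4):
    return (f_tuning(d + eps) - f_tuning(d - eps)) / (2 * eps)

d = np.linspace(-4, 4, 801)
I_poisson = f_prime(d) ** 2 / f_tuning(d)        # intrinsic Poisson noise only
I_stim    = f_prime(d) ** 2 / f_tuning(d) ** 2   # stimulus-induced variability (exponent 2)
ratio = I_poisson / I_stim                       # equals f_tuning(d)
```

The ratio equals the mean count itself, which is the sense in which a Poisson-only analysis overestimates the extractable information once typical spike counts exceed one or two.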

72 Fisher information with respect to stimulus variability as considered here is invariant to the absolute height of the tuning curve. [sent-156, score-0.402]

73 However, we can compute the Fisher information for the two implied binocular simple cells instead. [sent-159, score-0.405]

74 and I_odd^simple(d) = c′(d)²/(4[1 + c(d)]²) · 1/(2Γ(1/2)√(1 + c(d))) ∫₀^∞ dr r^(−1/2) (1 − r/(2[1 + c(d)]))² exp(−r/(4[1 + c(d)])) = (1/2) c′(d)²/[1 + c(d)]². The dependence of I_odd^simple on disparity is shown in Figure 4C (red dashed). [sent-161, score-0.699]

75 Intrinsic neuronal variability may provide part of the answer since the difference in Fisher information between both models decreases as intrinsic variability increases. [sent-165, score-0.371]

76 Figure 4D shows the Fisher information after Gaussian noise has been added to the monocular inputs. [sent-166, score-0.345]

77 However, even in this high intrinsic noise regime (noise variance of the same order as the tuning curve amplitude), the model with phase disparity carries significantly more total Fisher information. [sent-167, score-1.051]

78 In that case a(d) is a Gabor function with phase π and becomes zero at zero disparity such that the Fisher information diverges. [sent-172, score-0.802]

79 The energy model as presented thus far models the responses of binocular complex cells. [sent-174, score-0.439]

80 This derivation applies equally to the Fisher information of simple cells with position disparity: substituting a(d) for c(d), we obtain I_even^simple(d) = (1/2) a′(d)²/[1 + a(d)]². [sent-177, score-0.765]
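
Taking the two closed forms at face value (the 1/2 prefactor is our reading of the garbled extraction), the simple-cell comparison of Figure 4C can be sketched as follows, reusing corr_gabor from above:

```python
import numpy as np

def fisher_simple(corr, dcorr):
    """(1/2) corr'(d)^2 / (1 + corr(d))^2, with corr = a(d) or c(d)."""
    return 0.5 * dcorr ** 2 / (1.0 + corr) ** 2

d, eps = np.linspace(-4, 4, 801), 1e-4
a  = corr_gabor(d, 0.0)
c  = corr_gabor(d, np.pi / 2)
ap = (corr_gabor(d + eps, 0.0) - corr_gabor(d - eps, 0.0)) / (2 * eps)
cp = (corr_gabor(d + eps, np.pi / 2) - corr_gabor(d - eps, np.pi / 2)) / (2 * eps)
I_even_simple = fisher_simple(a, ap)   # position disparity simple cell
I_odd_simple  = fisher_simple(c, cp)   # phase disparity simple cell (red dashed in Fig. 4C)
```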

81 4 Discussion: The central idea of our paper is to evaluate the quality of a sensory code with respect to an inference task by taking stimulus variability into account, in particular that induced by irrelevant stimulus attributes. [sent-179, score-0.489]

82 By framing stimulus-induced nuisance variability as noise, we were able to employ the existing framework of Fisher information for evaluating the standard model of early binocular processing with respect to inferring disparity from random dot stereograms. [sent-180, score-1.293]

83 We started by investigating the disparity-conditioned variability of the binocular response in the absence of intrinsic neuronal noise. [sent-181, score-0.714]

84 We found that the response distributions are far from Poisson or Gaussian and – independent of stimulus disparity – are always peaked at zero (the mean response to uncorrelated images). [sent-182, score-1.13]

85 The information contained in the correlations between left and right RF outputs is translated into a modulation of the neuron’s mean firing rate primarily by altering the skew of the response distribution. [sent-183, score-0.303]

86 It is noteworthy that these response distributions are entirely imposed by the sensory system – the combination of the structure of the external world with the internal processing model. [sent-185, score-0.174]

87 Unlike the case of intrinsic noise, which is usually added ad hoc after the neuronal computation has been performed, in our case the computational model impacts the usefulness of the code beyond the traditionally reported tuning functions. [sent-186, score-0.305]

88 Again, the noise correlations due to nuisance stimulus parameters are a direct consequence of the processing model and the structure of the external input. [sent-189, score-0.261]

89 Next we compared the Fisher information available for our inference task at various stages of binocular processing. [sent-190, score-0.387]

90 We computed the Fisher information available in the monocular inputs to binocular neurons in V1, after binocular combination and after the squaring nonlinearity required to translate binocular correlations into mean firing rate modulation. [sent-191, score-1.58]

91 We find that despite the great stimulus variability, the total Fisher information available in the inputs diverges and is only bounded by intrinsic neuronal variability. [sent-192, score-0.429]

92 The same is still true after binocular combination for one flavor of the model considered here – the one employing phase disparity (i.e., pairing unlike RFs in either eye) – but not for the other (position disparity), which has lost most of its information after the initial combination. [sent-193, score-1.176]

93 At this point, our new approach allows us to ask a normative question: In what way should the monocular inputs be combined so as to lose a minimal amount of information about the relevant stimulus dimension? [sent-194, score-0.562]

94 Is the combination proposed by the standard model to obtain even-symmetric tuning curves the only one to do so, or are there others that produce a different tuning curve, with a different response distribution that is more suited to inferring depth? [sent-195, score-0.462]

95 We note that our approach can be readily adapted to other measures like mutual information, or to the framework of neurometric function analysis, to compare the performance of different codes in a disparity discrimination task. [sent-198, score-0.763]

96 One reason that odd-symmetric tuning curves had higher Fisher information in the case we investigated was that odd-symmetric cells produce near-zero responses more often in the context of the energy model. [sent-200, score-0.291]

97 However, it is known from empirical observations that fitting even-symmetric disparity tuning curves requires an additional thresholding output nonlinearity. [sent-201, score-0.861]

98 And finally, we suggest that considering the different shapes of response distributions induced by the specifics of the sensory modality might have an impact on the discussion about probabilistic population codes ([7, 8] and references therein). [sent-203, score-0.257]

99 Cue-integration, for instance, has usually been studied under the assumption of Poisson-like response distributions, assumptions that do not appear to hold in the case of combining disparity cues from different parts of the visual field. [sent-204, score-0.839]

100 Stereoscopic depth discrimination in the visual cortex: neurons ideally suited as disparity detectors. [sent-244, score-0.783]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('disparity', 0.677), ('binocular', 0.352), ('monocular', 0.292), ('fisher', 0.286), ('stimulus', 0.165), ('rfs', 0.161), ('response', 0.115), ('rf', 0.114), ('tuning', 0.113), ('variability', 0.107), ('disparities', 0.094), ('inputs', 0.088), ('intrinsic', 0.077), ('phase', 0.068), ('ieven', 0.067), ('iodd', 0.067), ('gabor', 0.066), ('neuronal', 0.063), ('eye', 0.063), ('curves', 0.054), ('rds', 0.053), ('odd', 0.053), ('energy', 0.051), ('depth', 0.049), ('codes', 0.048), ('outputs', 0.046), ('population', 0.045), ('nuisance', 0.044), ('blue', 0.042), ('curve', 0.04), ('squaring', 0.04), ('neurons', 0.038), ('uncorrelated', 0.038), ('red', 0.037), ('cells', 0.036), ('noise', 0.036), ('quadrature', 0.035), ('wlog', 0.035), ('position', 0.035), ('dot', 0.034), ('sensory', 0.034), ('cortex', 0.033), ('poisson', 0.031), ('spike', 0.03), ('left', 0.03), ('right', 0.029), ('combining', 0.028), ('neuron', 0.027), ('skew', 0.027), ('interaction', 0.027), ('iinputs', 0.027), ('pbinoc', 0.027), ('peven', 0.027), ('rodd', 0.027), ('stereograms', 0.027), ('stereoscopic', 0.027), ('bingen', 0.026), ('inferring', 0.026), ('combination', 0.025), ('envelope', 0.024), ('nonlinearity', 0.024), ('carries', 0.024), ('eyes', 0.024), ('avors', 0.023), ('bethge', 0.023), ('dr', 0.022), ('neurometric', 0.021), ('pairing', 0.021), ('attributes', 0.021), ('early', 0.02), ('confounding', 0.02), ('primarily', 0.02), ('det', 0.02), ('zero', 0.02), ('responses', 0.02), ('contained', 0.019), ('diverges', 0.019), ('displacement', 0.019), ('gc', 0.019), ('bernstein', 0.019), ('lf', 0.019), ('illustrated', 0.019), ('visual', 0.019), ('dashed', 0.019), ('image', 0.018), ('brain', 0.018), ('inference', 0.018), ('extract', 0.018), ('attribute', 0.018), ('grating', 0.018), ('comput', 0.018), ('output', 0.017), ('monkey', 0.017), ('information', 0.017), ('vs', 0.017), ('primary', 0.017), ('neurosci', 0.016), ('stereo', 0.016), ('model', 0.016), ('impact', 0.015)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000002 81 nips-2010-Evaluating neuronal codes for inference using Fisher information

Author: Ralf Haefner, Matthias Bethge

Abstract: Many studies have explored the impact of response variability on the quality of sensory codes. The source of this variability is almost always assumed to be intrinsic to the brain. However, when inferring a particular stimulus property, variability associated with other stimulus attributes also effectively acts as noise. Here we study the impact of such stimulus-induced response variability for the case of binocular disparity inference. We characterize the response distribution for the binocular energy model in response to random dot stereograms and find it to be very different from the Poisson-like noise usually assumed. We then compute the Fisher information with respect to binocular disparity, present in the monocular inputs to the standard model of early binocular processing, and thereby obtain an upper bound on how much information a model could theoretically extract from them. Then we analyze the information loss incurred by the different ways of combining those inputs to produce a scalar single-neuron response. We find that in the case of depth inference, monocular stimulus variability places a greater limit on the extractable information than intrinsic neuronal noise for typical spike counts. Furthermore, the largest loss of information is incurred by the standard model for position disparity neurons (tuned-excitatory), which are the most ubiquitous in monkey primary visual cortex, while more information from the inputs is preserved in phase-disparity neurons (tuned-near or tuned-far), primarily found in higher cortical regions.

2 0.22286481 119 nips-2010-Implicit encoding of prior probabilities in optimal neural populations

Author: Deep Ganguli, Eero P. Simoncelli

Abstract: unkown-abstract

3 0.11659696 161 nips-2010-Linear readout from a neural population with partial correlation data

Author: Adrien Wohrer, Ranulfo Romo, Christian K. Machens

Abstract: How much information does a neural population convey about a stimulus? Answers to this question are known to strongly depend on the correlation of response variability in neural populations. These noise correlations, however, are essentially immeasurable as the number of parameters in a noise correlation matrix grows quadratically with population size. Here, we suggest to bypass this problem by imposing a parametric model on a noise correlation matrix. Our basic assumption is that noise correlations arise due to common inputs between neurons. On average, noise correlations will therefore reflect signal correlations, which can be measured in neural populations. We suggest an explicit parametric dependency between signal and noise correlations. We show how this dependency can be used to ”fill the gaps” in noise correlations matrices using an iterative application of the Wishart distribution over positive definitive matrices. We apply our method to data from the primary somatosensory cortex of monkeys performing a two-alternativeforced choice task. We compare the discrimination thresholds read out from the population of recorded neurons with the discrimination threshold of the monkey and show that our method predicts different results than simpler, average schemes of noise correlations. 1

4 0.10805805 241 nips-2010-Size Matters: Metric Visual Search Constraints from Monocular Metadata

Author: Mario Fritz, Kate Saenko, Trevor Darrell

Abstract: Metric constraints are known to be highly discriminative for many objects, but if training is limited to data captured from a particular 3-D sensor the quantity of training data may be severely limited. In this paper, we show how a crucial aspect of 3-D information–object and feature absolute size–can be added to models learned from commonly available online imagery, without use of any 3-D sensing or reconstruction at training time. Such models can be utilized at test time together with explicit 3-D sensing to perform robust search. Our model uses a “2.1D” local feature, which combines traditional appearance gradient statistics with an estimate of average absolute depth within the local window. We show how category size information can be obtained from online images by exploiting relatively ubiquitous metadata fields specifying camera intrinsics. We develop an efficient metric branch-and-bound algorithm for our search task, imposing 3-D size constraints as part of an optimal search for a set of features which indicate the presence of a category. Experiments on test scenes captured with a traditional stereo rig are shown, exploiting training data from purely monocular sources with associated EXIF metadata.

5 0.10509641 21 nips-2010-Accounting for network effects in neuronal responses using L1 regularized point process models

Author: Ryan Kelly, Matthew Smith, Robert Kass, Tai S. Lee

Abstract: Activity of a neuron, even in the early sensory areas, is not simply a function of its local receptive field or tuning properties, but depends on global context of the stimulus, as well as the neural context. This suggests the activity of the surrounding neurons and global brain states can exert considerable influence on the activity of a neuron. In this paper we implemented an L1 regularized point process model to assess the contribution of multiple factors to the firing rate of many individual units recorded simultaneously from V1 with a 96-electrode “Utah” array. We found that the spikes of surrounding neurons indeed provide strong predictions of a neuron’s response, in addition to the neuron’s receptive field transfer function. We also found that the same spikes could be accounted for with the local field potentials, a surrogate measure of global network states. This work shows that accounting for network fluctuations can improve estimates of single trial firing rate and stimulus-response transfer functions. 1

6 0.094850317 1 nips-2010-(RF)^2 -- Random Forest Random Field

7 0.087087706 127 nips-2010-Inferring Stimulus Selectivity from the Spatial Structure of Neural Network Dynamics

8 0.07182572 268 nips-2010-The Neural Costs of Optimal Control

9 0.063272901 17 nips-2010-A biologically plausible network for the computation of orientation dominance

10 0.060308874 266 nips-2010-The Maximal Causes of Natural Scenes are Edge Filters

11 0.050124586 73 nips-2010-Efficient and Robust Feature Selection via Joint ℓ2,1-Norms Minimization

12 0.049994908 34 nips-2010-Attractor Dynamics with Synaptic Depression

13 0.049362823 10 nips-2010-A Novel Kernel for Learning a Neuron Model from Spike Train Data

14 0.046513375 238 nips-2010-Short-term memory in neuronal networks through dynamical compressed sensing

15 0.044484243 20 nips-2010-A unified model of short-range and long-range motion perception

16 0.043547641 200 nips-2010-Over-complete representations on recurrent neural networks can support persistent percepts

17 0.042370223 272 nips-2010-Towards Holistic Scene Understanding: Feedback Enabled Cascaded Classification Models

18 0.040653192 44 nips-2010-Brain covariance selection: better individual functional connectivity models using population prior

19 0.04008466 96 nips-2010-Fractionally Predictive Spiking Neurons

20 0.038430281 95 nips-2010-Feature Transitions with Saccadic Search: Size, Color, and Orientation Are Not Alike


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.103), (1, 0.043), (2, -0.169), (3, 0.09), (4, 0.05), (5, 0.074), (6, -0.044), (7, 0.027), (8, 0.023), (9, -0.025), (10, 0.045), (11, -0.008), (12, -0.037), (13, 0.006), (14, 0.038), (15, 0.001), (16, -0.019), (17, -0.073), (18, -0.001), (19, 0.155), (20, 0.022), (21, -0.139), (22, 0.148), (23, 0.037), (24, -0.019), (25, 0.045), (26, 0.04), (27, 0.075), (28, -0.047), (29, 0.022), (30, -0.021), (31, -0.106), (32, -0.031), (33, -0.059), (34, -0.002), (35, -0.071), (36, 0.034), (37, -0.038), (38, 0.039), (39, 0.01), (40, 0.083), (41, -0.017), (42, -0.049), (43, -0.085), (44, -0.072), (45, 0.017), (46, -0.057), (47, -0.053), (48, -0.137), (49, -0.106)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.96148449 81 nips-2010-Evaluating neuronal codes for inference using Fisher information

Author: Ralf Haefner, Matthias Bethge

Abstract: Many studies have explored the impact of response variability on the quality of sensory codes. The source of this variability is almost always assumed to be intrinsic to the brain. However, when inferring a particular stimulus property, variability associated with other stimulus attributes also effectively acts as noise. Here we study the impact of such stimulus-induced response variability for the case of binocular disparity inference. We characterize the response distribution for the binocular energy model in response to random dot stereograms and find it to be very different from the Poisson-like noise usually assumed. We then compute the Fisher information with respect to binocular disparity, present in the monocular inputs to the standard model of early binocular processing, and thereby obtain an upper bound on how much information a model could theoretically extract from them. Then we analyze the information loss incurred by the different ways of combining those inputs to produce a scalar single-neuron response. We find that in the case of depth inference, monocular stimulus variability places a greater limit on the extractable information than intrinsic neuronal noise for typical spike counts. Furthermore, the largest loss of information is incurred by the standard model for position disparity neurons (tuned-excitatory), which are the most ubiquitous in monkey primary visual cortex, while more information from the inputs is preserved in phase-disparity neurons (tuned-near or tuned-far), primarily found in higher cortical regions.

2 0.85837293 119 nips-2010-Implicit encoding of prior probabilities in optimal neural populations

Author: Deep Ganguli, Eero P. Simoncelli

Abstract: unkown-abstract

3 0.74287009 161 nips-2010-Linear readout from a neural population with partial correlation data

Author: Adrien Wohrer, Ranulfo Romo, Christian K. Machens

Abstract: How much information does a neural population convey about a stimulus? Answers to this question are known to strongly depend on the correlation of response variability in neural populations. These noise correlations, however, are essentially immeasurable as the number of parameters in a noise correlation matrix grows quadratically with population size. Here, we suggest to bypass this problem by imposing a parametric model on a noise correlation matrix. Our basic assumption is that noise correlations arise due to common inputs between neurons. On average, noise correlations will therefore reflect signal correlations, which can be measured in neural populations. We suggest an explicit parametric dependency between signal and noise correlations. We show how this dependency can be used to ”fill the gaps” in noise correlation matrices using an iterative application of the Wishart distribution over positive definite matrices. We apply our method to data from the primary somatosensory cortex of monkeys performing a two-alternative forced-choice task. We compare the discrimination thresholds read out from the population of recorded neurons with the discrimination threshold of the monkey and show that our method predicts different results than simpler, average schemes of noise correlations.

4 0.66461229 21 nips-2010-Accounting for network effects in neuronal responses using L1 regularized point process models

Author: Ryan Kelly, Matthew Smith, Robert Kass, Tai S. Lee

Abstract: Activity of a neuron, even in the early sensory areas, is not simply a function of its local receptive field or tuning properties, but depends on global context of the stimulus, as well as the neural context. This suggests the activity of the surrounding neurons and global brain states can exert considerable influence on the activity of a neuron. In this paper we implemented an L1 regularized point process model to assess the contribution of multiple factors to the firing rate of many individual units recorded simultaneously from V1 with a 96-electrode “Utah” array. We found that the spikes of surrounding neurons indeed provide strong predictions of a neuron’s response, in addition to the neuron’s receptive field transfer function. We also found that the same spikes could be accounted for with the local field potentials, a surrogate measure of global network states. This work shows that accounting for network fluctuations can improve estimates of single trial firing rate and stimulus-response transfer functions. 1

5 0.53937131 127 nips-2010-Inferring Stimulus Selectivity from the Spatial Structure of Neural Network Dynamics

Author: Kanaka Rajan, L Abbott, Haim Sompolinsky

Abstract: How are the spatial patterns of spontaneous and evoked population responses related? We study the impact of connectivity on the spatial pattern of fluctuations in the input-generated response, by comparing the distribution of evoked and intrinsically generated activity across the different units of a neural network. We develop a complementary approach to principal component analysis in which separate high-variance directions are derived for each input condition. We analyze subspace angles to compute the difference between the shapes of trajectories corresponding to different network states, and the orientation of the low-dimensional subspaces that driven trajectories occupy within the full space of neuronal activity. In addition to revealing how the spatiotemporal structure of spontaneous activity affects input-evoked responses, these methods can be used to infer input selectivity induced by network dynamics from experimentally accessible measures of spontaneous activity (e.g. from voltage- or calcium-sensitive optical imaging experiments). We conclude that the absence of a detailed spatial map of afferent inputs and cortical connectivity does not limit our ability to design spatially extended stimuli that evoke strong responses. 1 Motivation: Stimulus selectivity in neural networks was historically measured directly from input-driven responses [1], and only later were similar selectivity patterns observed in spontaneous activity across the cortical surface [2, 3]. We argue that it is possible to work in the reverse order, and show that analyzing the distribution of spontaneous activity across the different units in the network can inform us about the selectivity of evoked responses to stimulus features, even when no apparent sensory map exists. Sensory-evoked responses are typically divided into a signal component generated by the stimulus and a noise component corresponding to ongoing activity that is not directly related to the stimulus. Subsequent effort focuses on understanding how the signal depends on properties of the stimulus, while the remaining, irregular part of the response is treated as additive noise. The distinction between external stochastic processes and the noise generated deterministically as a function of intrinsic recurrence has been previously studied in chaotic neural networks [4]. It has also been suggested that internally generated noise is not additive and can be more sensitive to the frequency and amplitude of the input, compared to the signal component of the response [5 - 8]. In this paper, we demonstrate that the interaction between deterministic intrinsic noise and the spatial properties of the external stimulus is also complex and nonlinear. We study the impact of network connectivity on the spatial pattern of input-driven responses by comparing the structure of evoked and spontaneous activity, and show how the unique signature of these dynamics determines the selectivity of networks to spatial features of the stimuli driving them. 2 Model description: In this section, we describe the network model and the methods we use to analyze its dynamics. Subsequent sections explore how the spatial patterns of spontaneous and evoked responses are related in terms of the distribution of the activity across the network. Finally, we show how the stimulus selectivity of the network can be inferred from its spontaneous activity patterns.
2.1 Network elements: We build a firing rate model of N interconnected units characterized by a statistical description of the underlying circuitry (as N → ∞, the system “self averages”, making the description independent of a specific network architecture; see also [11, 12]). Each unit is characterized by an activation variable x_i ∀ i = 1, 2, . . . N, and a nonlinear response function r_i which relates to x_i through r_i = R0 + φ(x_i), where φ(x) = R0 tanh(x/R0) for x ≤ 0 and φ(x) = (Rmax − R0) tanh(x/(Rmax − R0)) otherwise (1). Eq. 1 allows us to independently set the maximum firing rate Rmax and the background rate R0 to biologically reasonable values, while retaining a maximum gradient at x = 0 to guarantee the smoothness of the transition to chaos [4]. We introduce a recurrent weight matrix with element J_ij equivalent to the strength of the synapse from unit j → unit i. The individual weights are chosen independently and randomly from a Gaussian distribution with mean and variance given by [J_ij]_J = 0 and [J_ij²]_J = g²/N, where square brackets are ensemble averages [9 - 11, 13]. The control parameter g, which sets the scale of the variance of the synaptic weights, is particularly important in determining whether or not the network produces spontaneous activity with non-trivial dynamics (specifically, g = 0 corresponds to a completely uncoupled network and a network with g = 1 generates non-trivial spontaneous activity [4, 9, 10]). The activation variable for each unit x_i is therefore determined by the relation τ_r dx_i/dt = −x_i + g Σ_{j=1}^{N} J_ij r_j + I_i (2), with the time scale of the network set by the single-neuron time constant τ_r of 10 ms. The amplitude I of an oscillatory external input of frequency f is always the same for each unit, but in some examples shown in this paper we introduce a neuron-specific phase factor θ_i, chosen randomly from a uniform distribution between 0 and 2π, such that I_i = I cos(2πf t + θ_i) ∀ i = 1, 2, . . . N (3). In visually responsive neurons, this mimics a population of simple cells driven by a drifting grating of temporal frequency f, with the different phases arising from offsets in spatial receptive field locations. The randomly assigned phases in our model ensure that the spatial pattern of input is not correlated with the pattern of recurrent connectivity. In our selectivity analysis however (Fig. 3), we replace the random phases with spatial input patterns that are aligned with network connectivity. 2.2 PCA redux: Principal component analysis (PCA) has been applied profitably to neuronal recordings (see for example [14]), but these analyses often plot activity trajectories corresponding to different network states using the fixed principal component coordinates derived from combined activities under all stimulus conditions. Our analysis offers a complementary approach whereby separate principal components are derived for each input condition, and the resulting principal angles reveal not only the difference between the shapes of trajectories corresponding to different network states, but also the orientation of the low-dimensional subspaces these trajectories occupy within the full N-dimensional space of neuronal activity. The instantaneous network state can be described by a point in an N-dimensional space with coordinates equal to the firing rates of the N units. Over time, the network activity traverses a trajectory in this N-dimensional space and PCA can be used to delineate the subspace in which this trajectory lies.
The analysis is done by diagonalizing the equal-time cross-correlation matrix of network firing rates given by D_ij = ⟨(r_i(t) − ⟨r_i⟩)(r_j(t) − ⟨r_j⟩)⟩ (4), where …
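
For readers who want to experiment with this model, here is a hedged simulation sketch of equations (1)–(4) above; the integration step, duration, and g are our assumptions, and the J-scaling convention (unit-variance J multiplied by g in the dynamics) is one consistent reading of the text.

```python
import numpy as np

def phi(x, R0=0.1, Rmax=1.0):
    """Rate nonlinearity of equation (1)."""
    return np.where(x <= 0.0,
                    R0 * np.tanh(x / R0),
                    (Rmax - R0) * np.tanh(x / (Rmax - R0)))

def simulate(N=500, g=1.5, I_amp=0.5, f=5.0, T=2.0, dt=1e-3, tau=0.01, seed=0):
    """Euler integration of equation (2): tau dx/dt = -x + g J r + I(t)."""
    rng = np.random.default_rng(seed)
    J = rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))  # g*J then has variance g^2/N
    theta = rng.uniform(0.0, 2.0 * np.pi, N)            # neuron-specific phases, eq. (3)
    x = rng.normal(0.0, 0.5, N)
    rates = []
    for t in np.arange(0.0, T, dt):
        r = 0.1 + phi(x)                                # r_i = R0 + phi(x_i)
        I = I_amp * np.cos(2.0 * np.pi * f * t + theta)
        x = x + (dt / tau) * (-x + g * (J @ r) + I)
        rates.append(r)
    rates = np.asarray(rates)                           # shape (time steps, N)
    D = np.cov(rates.T)   # equal-time cross-correlation matrix, equation (4)
    return rates, D
```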

6 0.46150237 268 nips-2010-The Neural Costs of Optimal Control

7 0.41831186 17 nips-2010-A biologically plausible network for the computation of orientation dominance

8 0.40483591 95 nips-2010-Feature Transitions with Saccadic Search: Size, Color, and Orientation Are Not Alike

9 0.39430588 157 nips-2010-Learning to localise sounds with spiking neural networks

10 0.3934648 34 nips-2010-Attractor Dynamics with Synaptic Depression

11 0.36441272 121 nips-2010-Improving Human Judgments by Decontaminating Sequential Dependencies

12 0.35931921 1 nips-2010-(RF)^2 -- Random Forest Random Field

13 0.35275465 19 nips-2010-A rational decision making framework for inhibitory control

14 0.34805366 3 nips-2010-A Bayesian Framework for Figure-Ground Interpretation

15 0.31896973 96 nips-2010-Fractionally Predictive Spiking Neurons

16 0.31763434 82 nips-2010-Evaluation of Rarity of Fingerprints in Forensics

17 0.31295174 65 nips-2010-Divisive Normalization: Justification and Effectiveness as Efficient Coding Transform

18 0.29684871 252 nips-2010-SpikeAnts, a spiking neuron network modelling the emergence of organization in a complex system

19 0.29265749 220 nips-2010-Random Projection Trees Revisited

20 0.28711683 200 nips-2010-Over-complete representations on recurrent neural networks can support persistent percepts


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(13, 0.031), (17, 0.013), (27, 0.206), (28, 0.241), (30, 0.039), (35, 0.049), (45, 0.12), (50, 0.043), (52, 0.033), (60, 0.024), (77, 0.063), (90, 0.039)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.81924611 81 nips-2010-Evaluating neuronal codes for inference using Fisher information

Author: Ralf Haefner, Matthias Bethge

Abstract: Many studies have explored the impact of response variability on the quality of sensory codes. The source of this variability is almost always assumed to be intrinsic to the brain. However, when inferring a particular stimulus property, variability associated with other stimulus attributes also effectively acts as noise. Here we study the impact of such stimulus-induced response variability for the case of binocular disparity inference. We characterize the response distribution for the binocular energy model in response to random dot stereograms and find it to be very different from the Poisson-like noise usually assumed. We then compute the Fisher information with respect to binocular disparity, present in the monocular inputs to the standard model of early binocular processing, and thereby obtain an upper bound on how much information a model could theoretically extract from them. Then we analyze the information loss incurred by the different ways of combining those inputs to produce a scalar single-neuron response. We find that in the case of depth inference, monocular stimulus variability places a greater limit on the extractable information than intrinsic neuronal noise for typical spike counts. Furthermore, the largest loss of information is incurred by the standard model for position disparity neurons (tuned-excitatory), which are the most ubiquitous in monkey primary visual cortex, while more information from the inputs is preserved in phase-disparity neurons (tuned-near or tuned-far), primarily found in higher cortical regions.

2 0.71529037 128 nips-2010-Infinite Relational Modeling of Functional Connectivity in Resting State fMRI

Author: Morten Mørup, Kristoffer Madsen, Anne-marie Dogonowski, Hartwig Siebner, Lars K. Hansen

Abstract: Functional magnetic resonance imaging (fMRI) can be applied to study the functional connectivity of the neural elements which form complex network at a whole brain level. Most analyses of functional resting state networks (RSN) have been based on the analysis of correlation between the temporal dynamics of various regions of the brain. While these models can identify coherently behaving groups in terms of correlation they give little insight into how these groups interact. In this paper we take a different view on the analysis of functional resting state networks. Starting from the definition of resting state as functional coherent groups we search for functional units of the brain that communicate with other parts of the brain in a coherent manner as measured by mutual information. We use the infinite relational model (IRM) to quantify functional coherent groups of resting state networks and demonstrate how the extracted component interactions can be used to discriminate between functional resting state activity in multiple sclerosis and normal subjects. 1

3 0.70539945 121 nips-2010-Improving Human Judgments by Decontaminating Sequential Dependencies

Author: Harold Pashler, Matthew Wilder, Robert Lindsey, Matt Jones, Michael C. Mozer, Michael P. Holmes

Abstract: For over half a century, psychologists have been struck by how poor people are at expressing their internal sensations, impressions, and evaluations via rating scales. When individuals make judgments, they are incapable of using an absolute rating scale, and instead rely on reference points from recent experience. This relativity of judgment limits the usefulness of responses provided by individuals to surveys, questionnaires, and evaluation forms. Fortunately, the cognitive processes that transform internal states to responses are not simply noisy, but rather are influenced by recent experience in a lawful manner. We explore techniques to remove sequential dependencies, and thereby decontaminate a series of ratings to obtain more meaningful human judgments. In our formulation, decontamination is fundamentally a problem of inferring latent states (internal sensations) which, because of the relativity of judgment, have temporal dependencies. We propose a decontamination solution using a conditional random field with constraints motivated by psychological theories of relative judgment. Our exploration of decontamination models is supported by two experiments we conducted to obtain ground-truth rating data on a simple length estimation task. Our decontamination techniques yield an over 20% reduction in the error of human judgments. 1

4 0.70415223 119 nips-2010-Implicit encoding of prior probabilities in optimal neural populations

Author: Deep Ganguli, Eero P. Simoncelli

Abstract: unkown-abstract

5 0.7033658 60 nips-2010-Deterministic Single-Pass Algorithm for LDA

Author: Issei Sato, Kenichi Kurihara, Hiroshi Nakagawa

Abstract: We develop a deterministic single-pass algorithm for latent Dirichlet allocation (LDA) in order to process received documents one at a time and then discard them in an excess text stream. Our algorithm does not need to store old statistics for all data. The proposed algorithm is much faster than a batch algorithm and is comparable to the batch algorithm in terms of perplexity in experiments.

6 0.70040667 281 nips-2010-Using body-anchored priors for identifying actions in single images

7 0.69711202 39 nips-2010-Bayesian Action-Graph Games

8 0.68509603 266 nips-2010-The Maximal Causes of Natural Scenes are Edge Filters

9 0.68294883 21 nips-2010-Accounting for network effects in neuronal responses using L1 regularized point process models

10 0.6751433 161 nips-2010-Linear readout from a neural population with partial correlation data

11 0.6605019 127 nips-2010-Inferring Stimulus Selectivity from the Spatial Structure of Neural Network Dynamics

12 0.65644389 97 nips-2010-Functional Geometry Alignment and Localization of Brain Areas

13 0.63637859 54 nips-2010-Copula Processes

14 0.63467896 6 nips-2010-A Discriminative Latent Model of Image Region and Object Tag Correspondence

15 0.6286543 98 nips-2010-Functional form of motion priors in human motion perception

16 0.62249166 200 nips-2010-Over-complete representations on recurrent neural networks can support persistent percepts

17 0.62100393 268 nips-2010-The Neural Costs of Optimal Control

18 0.61561459 44 nips-2010-Brain covariance selection: better individual functional connectivity models using population prior

19 0.61517543 194 nips-2010-Online Learning for Latent Dirichlet Allocation

20 0.6137954 123 nips-2010-Individualized ROI Optimization via Maximization of Group-wise Consistency of Structural and Functional Profiles