nips nips2010 nips2010-268 knowledge-graph by maker-knowledge-mining

268 nips-2010-The Neural Costs of Optimal Control


Source: pdf

Author: Samuel Gershman, Robert Wilson

Abstract: Optimal control entails combining probabilities and utilities. However, for most practical problems, probability densities can be represented only approximately. Choosing an approximation requires balancing the benefits of an accurate approximation against the costs of computing it. We propose a variational framework for achieving this balance and apply it to the problem of how a neural population code should optimally represent a distribution under resource constraints. The essence of our analysis is the conjecture that population codes are organized to maximize a lower bound on the log expected utility. This theory can account for a plethora of experimental data, including the reward-modulation of sensory receptive fields, GABAergic effects on saccadic movements, and risk aversion in decisions under uncertainty.

Reference: text


Summary: the most important sentences generated by the tf-idf model

sentIndex sentText sentNum sentScore

1 We propose a variational framework for achieving this balance and apply it to the problem of how a neural population code should optimally represent a distribution under resource constraints. [sent-7, score-0.574]

2 The essence of our analysis is the conjecture that population codes are organized to maximize a lower bound on the log expected utility. [sent-8, score-0.422]

3 This theory can account for a plethora of experimental data, including the reward-modulation of sensory receptive fields, GABAergic effects on saccadic movements, and risk aversion in decisions under uncertainty. [sent-9, score-0.206]

4 Typically one has a choice of approximation, with more exact approximations demanding more computational resources, a penalty that can be naturally incorporated into the utility function. [sent-12, score-0.429]

5 The question we address in this paper is: given a family of approximations and their associated resource demands, what approximation will lead as close as possible to the optimal control policy? [sent-13, score-0.209]

6 However, maximizing information transfer is only one component of adaptive behavior; the utility of information must be taken into account when choosing a code [15], and this may interact in complicated ways with the computational costs of approximate inference. [sent-16, score-0.619]

7 Central to our analysis is the observation that while this expected utility cannot be maximized directly, it is possible to maximize a variational lower bound on log expected utility (see also [17, 5] for related approaches). [sent-18, score-1.274]

8 We study the properties of this lower bound and show how it accounts for some intriguing empirical properties of neural codes. [sent-19, score-0.158]

9 2 Optimal control with approximate densities: Let a denote an action and s denote a hidden state variable drawn from some probability density p(s). [sent-20, score-0.548]

10 Given a utility function $U(a; s)$, the optimal action $a_p$ is the one that maximizes expected utility $V_p(a)$: $a_p = \arg\max_a V_p(a)$ (1), where $V_p(a) = E_p[U(a; s)] = \int_s p(s)\,U(a; s)\,ds$ (2). [sent-21, score-0.933]

11 Computing the expected utility for each action requires solving a possibly intractable integral. [sent-22, score-0.543]
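A minimal numerical sketch of Eqs. 1-2 on a discretized state space; the Gaussian target density p(s) and the squared-exponential utility U(a; s) below are illustrative assumptions, not the paper's specification.

```python
import numpy as np

# Discretized state space (illustrative grid).
s = np.linspace(-5.0, 5.0, 1001)
ds = s[1] - s[0]

# Assumed target density p(s): standard Gaussian.
p = np.exp(-0.5 * s**2) / np.sqrt(2.0 * np.pi)

def U(a, s):
    """Assumed positive utility U(a; s)."""
    return np.exp(-(a - s) ** 2)

# Eq. 2: V_p(a) = \int p(s) U(a; s) ds, approximated by a Riemann sum.
actions = np.linspace(-3.0, 3.0, 61)
V = np.array([np.sum(p * U(a, s)) * ds for a in actions])

# Eq. 1: the optimal action maximizes expected utility.
a_p = actions[np.argmax(V)]
```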

12 An approximation of expected utility can be obtained by substituting an alternative density q(s) for which the expected utility is tractable. [sent-23, score-1.122]

13 Using an approximate density presents the “meta-decision” of which density to use. [sent-25, score-0.496]

14 If one chooses optimally under q(s), then the expected utility is given by $E_p[U(a_q; s)] = V_p(a_q)$; therefore the optimal density $q^*$ should be chosen according to $q^* = \arg\max_{q \in Q} V_p(a_q)$ (3), where $Q$ is some family of densities. [sent-26, score-0.711]
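Continuing the sketch above, Eq. 3 can be imitated for a small, hypothetical family Q of fixed-width Gaussians; of course, in practice $V_p(a_q)$ is exactly the intractable quantity, which is why the lower bound below is needed.

```python
# Hypothetical family Q: Gaussians of fixed width and varying mean
# (reuses s, ds, p, U and actions from the sketch above).
def gaussian(s, mu, sigma):
    return np.exp(-0.5 * ((s - mu) / sigma) ** 2) / (np.sqrt(2.0 * np.pi) * sigma)

Q = [gaussian(s, mu, 0.5) for mu in np.linspace(-2.0, 2.0, 9)]

def a_q(q):
    """Action that is optimal under the approximate density q."""
    return actions[np.argmax([np.sum(q * U(a, s)) * ds for a in actions])]

# Eq. 3: score each candidate q by the true value of the action it induces.
scores = [np.sum(p * U(a_q(q), s)) * ds for q in Q]
q_star = Q[int(np.argmax(scores))]
```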

15 To interpret Eq. 3, consider the optimization as consisting of two parts: first, select an approximate density q(s) and choose the optimal action with respect to this density; then evaluate the true value of that action under the target density. [sent-28, score-0.556]

16 In general, we cannot optimize this function directly because it requires solving precisely the integral we are trying to avoid: the expected utility under p(s). [sent-30, score-0.451]

17 Examining the utility lower bound, we see that the terms exert conceptually distinct influences: [sent-32, score-0.49]

18 1. A utility component, $E_q[\log U(a; s)]$, the expected log utility under the approximate density. [sent-33, score-0.961]

19 2. A cross-entropy component, $-E_q[\log p(s)]$, reflecting the mismatch between the approximate density and the target density. [sent-35, score-0.372]

20 3. An entropy component, $-E_q[\log q(s)]$, embodying a maximum entropy principle [8]: for a fixed utility and cross-entropy, choose the distribution with maximal entropy. [sent-38, score-0.39]
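A numerical check of this three-term decomposition, continuing the running example (the particular q here is again an assumption): by Jensen's inequality the bound never exceeds $\log V_p(a)$.

```python
# Utility lower bound: E_q[log U] + E_q[log p] - E_q[log q] <= log V_p(a).
def lower_bound(q, a, eps=1e-12):
    w = q * ds                                    # grid weights of q (sum ~ 1)
    utility = np.sum(w * np.log(U(a, s) + eps))   # E_q[log U(a; s)]
    cross_entropy = -np.sum(w * np.log(p + eps))  # -E_q[log p(s)]
    entropy = -np.sum(w * np.log(q + eps))        # -E_q[log q(s)]
    return utility - cross_entropy + entropy

a = 0.0
q = gaussian(s, 0.3, 0.8)
assert lower_bound(q, a) <= np.log(np.sum(p * U(a, s)) * ds) + 1e-6
```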

21 Intuitively, a more accurate approximate density q(s) should incur a larger computational cost. [sent-39, score-0.276]

22 One way to express this notion of cost is to incorporate it directly into the utility function. [sent-40, score-0.437]

23 That is, we consider an augmented utility function U (a, q; s) that depends on the approximate density. [sent-41, score-0.446]

24 We also refer throughout this paper to probability densities over a multidimensional, continuous state variable, but our results still apply to one-dimensional and discrete variables (in which case the probability densities are replaced with probability mass functions). [sent-44, score-0.256]

25 The assumption that the log utility decomposes into additive reward and cost components is intuitive: it implies that reward is measured relative to the computational cost of earning it. [sent-45, score-0.78]

26 In summary, the utility lower bound L(q, a) provides an objective function for simultaneously choosing an action and choosing an approximate density over hidden states. [sent-46, score-0.873]

27 Whereas in classical decision theory, optimization is performed over the action space, in variational decision theory optimization is performed over the joint space of actions and approximate densities. [sent-47, score-0.341]

28 3 Choosing a probabilistic population code: While the theory developed in the previous section applies to any representation scheme, in this section, for illustrative purposes, we focus on one specific family of approximate densities defined by the firing rate of neurons in a network. [sent-49, score-0.549]

29 Specifically, we consider a population of N neurons tasked with encoding a probability density over s. [sent-50, score-0.501]

30 We assume that the kernel density functions are Gaussian, parameterized by a preferred stimulus (mean) $s_n$ and a standard deviation $\sigma_n$: $f_n(s) = \frac{1}{\sqrt{2\pi}\,\sigma_n} \exp\left(-\frac{(s - s_n)^2}{2\sigma_n^2}\right)$ (7). For simplicity, in this paper we will focus on the limiting case in which $\sigma \to 0$. [sent-52, score-0.661]

31 In this case q(s) degenerates onto a collection of delta functions: $q(s) = \frac{1}{Z} \sum_{n=1}^{N} e^{x_n}\,\delta(s - s_n)$ (8), where $\delta(\cdot)$ is the Dirac delta function. [sent-53, score-0.399]

32 This density corresponds to a collection of sharply tuned neurons; provided that the preferred values $\{s_1, \ldots, s_N\}$ … [sent-54, score-0.305]

33 Technically, the lower bound is not well defined in the limit because the target density is non-atomic (i. [sent-63, score-0.431]

34 Approximating Eq. 5 by $E_q[g(s)] \approx Z^{-1} \sum_{n=1}^{N} e^{x_n} g(s_n)$, as we do above, can be justified in terms of first-order Taylor series expansions around the preferred stimuli, which will be arbitrarily accurate as $\sigma \to 0$. [sent-67, score-0.223]
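A sketch of this limiting population code and of the weighted-sum form of the expectations (names and parameters here are illustrative):

```python
# Eq. 8 in the sigma -> 0 limit: weighted delta functions at preferred
# stimuli s_n with softmax weights e^{x_n} / Z.
rng = np.random.default_rng(0)
N = 50
s_pref = np.linspace(-5.0, 5.0, N)      # preferred stimuli s_n
x = rng.standard_normal(N)              # neural activations x_n

w_pop = np.exp(x - x.max())
w_pop /= w_pop.sum()                    # e^{x_n} / Z (numerically stable)

def E_q_pop(g):
    """E_q[g(s)] ~ Z^{-1} sum_n e^{x_n} g(s_n)."""
    return np.sum(w_pop * g(s_pref))
```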

35 Figure 1: Comparison between coding schemes. Panels: (a) probability distributions (probability density vs. s); (b) convolutional coding, (c) gain coding and (d) exponential coding (firing rate vs. neuron number). [sent-69, score-1.542]

36 We next seek a neuronal update rule that performs gradient ascent on the utility lower bound. [sent-74, score-0.44]

37 This update rule defines an attractor network whose Lyapunov function is the (negative) utility lower bound. [sent-76, score-0.44]
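The paper's exact neuronal update rule is not reproduced in this summary; as a stand-in, the sketch below performs plain gradient ascent on the bound over the activations x, using the weighted-sum expectations above. Writing $F(x) = \sum_n w_n (r_n - \log w_n)$ with $w = \mathrm{softmax}(x)$ and $r_n = \log U(a; s_n) + \log p(s_n)$, the softmax Jacobian gives $\partial F / \partial x_k = w_k \left((r_k - \log w_k) - F\right)$.

```python
# Stand-in gradient-ascent dynamics on the bound (not the paper's rule).
def ascend(x, a, lr=0.5, steps=200, eps=1e-12):
    p_n = np.exp(-0.5 * s_pref**2) / np.sqrt(2.0 * np.pi)  # assumed p at s_n
    r = np.log(U(a, s_pref) + eps) + np.log(p_n + eps)
    for _ in range(steps):
        w = np.exp(x - x.max())
        w /= w.sum()
        log_w = np.log(w + eps)
        F = np.sum(w * (r - log_w))             # current bound value
        x = x + lr * w * ((r - log_w) - F)      # analytic softmax gradient
    return x

x_opt = ascend(x.copy(), a=0.0)
```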

38 3.1 Relation to other probability coding schemes (Exponential, convolutional and gain coding): The probability coding scheme proposed in Eq. 8 [sent-80, score-0.774]

39 is closely related to the exponential coding described in [16]. [sent-81, score-0.176]

40 Other related schemes include convolutional coding [28], in which a distribution is encoded by convolving it with a neural tuning function, and gain coding [11, 27], in which the variance of the distribution is inversely proportional to the gain of the neural response. [sent-83, score-0.71]

41 Convolutional coding (Figure 1b) is characterized by a neural response pattern that gets broader as the distribution gets broader. [sent-85, score-0.277]

42 In contrast, gain coding schemes (Figure 1c) posit that changes in uncertainty only change the overall gain, and not the shape, of the neural response. [sent-89, score-0.325]

43 This point is crucial for the biological plausibility of this scheme, as it seems unlikely that these minute differences in population response width would be easily measured experimentally. [sent-94, score-0.175]

44 It is also important to note that both the convolutional and gain coding schemes ignore the utility function in constructing probabilistic representations. [sent-95, score-0.836]

45 As we explore in later sections, rewards and costs place strong constraints on the types of codes that are learned by the variational objective, and the available experimental data is congruent with this view. [sent-96, score-0.368]

46 Psychological phenomena like perceptual multistability [6] and speech perception [21] are parsimoniously explained by a model in which a density over the complete hypothesis space is replaced by a small set of discrete samples. [sent-102, score-0.319]

47 Thus, it is reasonable to ask whether our theory of population coding relates to these sampling-based accounts at the neural level. [sent-103, score-0.336]

48 In fact, we can make this correspondence precise: for any population code of the form in Eq. 8 … [sent-105, score-0.18]

49 The corresponding proposal density takes the form: $\pi(s) \propto \sum_n \frac{p(s_n)}{e^{x_n}}\,\delta(s - s_n)$ (13). [sent-107, score-0.434]

50 This means that optimizing the bound with respect to x is equivalent to selecting a proposal density so as to maximize utility under resource constraints. [sent-108, score-1.042]
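A quick numerical check of the Eq. 13 correspondence, reusing x and s_pref from the sketches above: with a proposal placing mass proportional to $p(s_n)/e^{x_n}$ on the preferred stimuli, self-normalized importance weights recover exactly the population-code weights $e^{x_n}/Z$.

```python
# The population code as a self-normalized importance sampler (Eq. 13).
p_n = np.exp(-0.5 * s_pref**2) / np.sqrt(2.0 * np.pi)  # assumed target at s_n
pi = p_n / np.exp(x)
pi /= pi.sum()                                         # proposal of Eq. 13
is_w = p_n / pi
is_w /= is_w.sum()                                     # normalized IS weights
assert np.allclose(is_w, np.exp(x) / np.exp(x).sum())
```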

51 A related result was shown in [26], though in a more restricted setting: maximal utility is achieved with very few samples when sampling is costly. [sent-110, score-0.39]

52 Similarly, π(s) will be sensitive to the computational costs inherent in the utility lower bound, favoring a small number of samples. [sent-111, score-0.55]

53 In that treatment, the proposal density was assumed to be the prior, leading to the prediction that neurons with preferred stimulus s∗ should occur with frequency proportional to the prior probability of s∗ . [sent-113, score-0.534]

54 One source of evidence for this prediction comes from the oblique effect: the observation that more V1 neurons are tuned to cardinal orientations than to oblique orientations [3], consistent with the statistics of the natural visual environment. [sent-114, score-0.291]

55 In contrast, our model predicts that the proposal density will be sensitive to rewards in addition to the prior, as we argue in Section 5. [sent-115, score-0.293]

56 5 Results: In the following sections, we examine some of the neurophysiological and psychological implications of the variational objective. [sent-117, score-0.237]

57 Tying these diverse topics together is the central idea that utilities, costs and probabilistic beliefs exert a synergistic effect on neural codes and their behavioral outputs. [sent-118, score-0.383]

58 One consequence of the variational objective is that a clear separation of these components in the brain may not exist: rewards and costs infiltrate very early sensory areas. [sent-119, score-0.517]

59 Accumulating evidence indicates that perceptual representations in the brain are modulated by reward expectation. [sent-123, score-0.275]

60 For example, Shuler and Bear [23] paired retinal stimulation of the left and right [sent-124, score-0.193]

61 Figure 2: Probability density of natural sounds and the optimized approximate density (panels labeled "Natural sounds" and "Neural code"), with black lines demarcating the region of behaviorally relevant sounds. [sent-135, score-0.471]

62 eyes with reward after different delays and recorded neurons in primary visual cortex that switched from representing purely physical attributes of the stimulation (e.g., eye of origin) to representing predicted reward timing. [sent-136, score-0.343]

63 Similarly, Serences [20] showed that spatially selective regions of visual cortex are biased by the prior reward associated with different spatial locations. [sent-139, score-0.184]

64 These studies raise the possibility that the brain does not encode probabilistic beliefs separately from reward; indeed, this idea has been enshrined by a recent theoretical account [4]. [sent-140, score-0.232]

65 On the other hand, the variational framework we have described accounts for these findings by showing that decision-making using approximate densities leads automatically to reward-modulated probabilistic beliefs. [sent-142, score-0.439]

66 Machens et al. [12] recorded the responses of grasshopper auditory neurons to different stimulus ensembles and found that the ensembles that elicited the optimal response differed systematically from the natural auditory statistics of the grasshopper’s environment. [sent-146, score-0.522]

67 In particular, the optimal ensembles were restricted to a region of stimulus space in which behaviorally important sounds live, namely species-specific mating signals. [sent-147, score-0.31]

68 As the authors of [12] put it, “an organism may seek to distribute its sensory resources according to the behavioral relevance of the natural stimuli, rather than according to purely statistical principles.” [sent-149, score-0.161]

69 We modeled this phenomenon by constructing a relatively wide density of natural sounds with a narrow region of behaviorally relevant sounds (in which states are twice as rewarding). [sent-150, score-0.509]

70 Figure 2 shows the results, confirming that maximizing the utility lower bound selects a kernel density estimate that is narrower than the target density of natural sounds. [sent-151, score-1.041]
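A sketch of this simulation under assumed parameters (a broad Gaussian density of natural sounds and a narrow band, |s| < 0.5, of behaviorally relevant sounds that are twice as rewarding); the paper's exact settings are not given in this summary. With no action dependence, the optimal weights tilt toward reward times probability, concentrating the code in the rewarded band.

```python
# Assumed grasshopper setup: broad natural density, narrow rewarded band.
p_nat = np.exp(-0.5 * (s_pref / 3.0) ** 2)
p_nat /= p_nat.sum()                                  # natural sound density
reward = np.where(np.abs(s_pref) < 0.5, 2.0, 1.0)     # band is twice as rewarding
r = np.log(reward) + np.log(p_nat + 1e-12)

x_code = np.zeros(N)
for _ in range(500):
    w = np.exp(x_code - x_code.max())
    w /= w.sum()
    log_w = np.log(w + 1e-12)
    F = np.sum(w * (r - log_w))
    x_code += 0.5 * w * ((r - log_w) - F)
# At the optimum w ~ reward * p_nat: narrower than the natural density alone.
```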

71 One method of increasing spiking costs is to enhance GABAergic transmission (e.g., using injections of muscimol, a GABA agonist), hence increasing the metabolic requirements for action potential generation. [sent-156, score-0.165]

72 A second method is by manipulating the availability of glucose [7], either by making the subject hypoglycemic or by administering local infusions of glucose directly into the brain. [sent-157, score-0.202]

73 We predict that increasing spiking costs (either by reducing glucose levels or increasing GABAergic transmission) will result in a diminished ability to detect weak signals embedded in noise. [sent-158, score-0.271]

74 These predictions have received a more direct test in a recent visual search experiment by McPeek and Keller [14], in which muscimol was injected into local regions of the superior colliculus, a brain area known to control saccadic target selection. [sent-160, score-0.494]

75 In the absence of distractors, response latencies to the target were increased when it appeared in the receptive fields of the inhibited neurons. [sent-161, score-0.354]

76 In the presence of distractors, response latencies increased and choice accuracy decreased when the target appeared in the receptive fields of the inhibited neurons. [sent-162, score-0.354]

77 We simulated these findings by constructing a cost-field γ(n) to represent the amount of GABAergic transmission at different neurons induced by muscimol injections. [sent-163, score-0.238]

78 Figure 3: Spiking cost in the superior colliculus. [sent-180, score-0.177]

79 (Left column) Target density, with larger bump in the top panel representing the target; (Center column) neural code under different settings of cost-field γ(n); (Right column) firing rates under different cost-fields. [sent-183, score-0.153]

80 Choice accuracy decreases because the increased cost of spiking in the neurons representing the target location dampens the probability density in that location. [sent-184, score-0.546]

81 Increasing spiking cost also reduces the overall firing rate in the target-representing neurons relative to the distractor-representing neurons. [sent-185, score-0.23]
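The cost-field simulation can be sketched the same way; the additive per-neuron cost below is a hypothetical form consistent with the reward-minus-cost decomposition of the log utility, not the paper's exact parameterization.

```python
# Assumed target-plus-distractor density: larger bump at the target (s = 2),
# smaller bump at a distractor (s = -2).
p_tgt = (0.7 * np.exp(-2.0 * (s_pref - 2.0) ** 2)
         + 0.3 * np.exp(-2.0 * (s_pref + 2.0) ** 2))
p_tgt /= p_tgt.sum()

gamma = np.where(np.abs(s_pref - 2.0) < 1.0, 1.5, 0.0)  # muscimol cost-field
r_cost = np.log(p_tgt + 1e-12) - gamma                  # cost-penalized log utility

x_sc = np.zeros(N)
for _ in range(500):
    w = np.exp(x_sc - x_sc.max())
    w /= w.sum()
    log_w = np.log(w + 1e-12)
    F = np.sum(w * (r_cost - log_w))
    x_sc += 0.5 * w * ((r_cost - log_w) - F)
# With gamma > 0 over the target region, both the encoded density and the
# implied firing at target-representing neurons are dampened, as in Figure 3.
```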

82 5.3 Non-linear probability weighting: In this section, we show that the variational objective provides a new perspective on some well-known peculiarities of human probabilistic judgment. [sent-189, score-0.255]

83 In particular, the ostensibly irrational nonlinear weighting of probabilities in risky choice emerges naturally from optimization of the variational objective under a natural assumption about the ecological distribution of rewards. [sent-190, score-0.193]

84 The variational objective explains these phenomena by virtue of the fact that under neural resource constraints, the approximate density will be biased towards high reward regions of the state space. [sent-193, score-0.746]

85 6 Discussion: We have presented a variational objective function for neural codes that balances motivational, statistical and metabolic demands in the service of optimal behavior. [sent-198, score-0.374]

86 The essential idea is that the intractable problem of computing expected utilities can be finessed by instead computing expected utilities under an approximate density that optimizes a variational lower bound on log expected utility. [sent-199, score-1.005]

87 This lower bound captures the neural costs of optimal control: more accurate approximations will require more metabolic resources, whereas less accurate approximations will diminish the amount of earned reward. [sent-200, score-0.419]

88 Sensory neurons have repeatedly been found to be sensitive to reward contingencies. [sent-211, score-0.336]

89 Intuitively, expending more resources on accurately approximating the complete density of natural sensory statistics is inefficient (from an optimal control perspective) if the behaviorally relevant signals live in a compact subspace. [sent-212, score-0.534]

90 We showed that the approximation that maximizes the utility lower bound concentrates its density within this subspace. [sent-213, score-0.725]

91 Our variational framework differs in important ways from the one recently proposed by Friston [4]. [sent-214, score-0.193]

92 In his treatment, utilities are not represented explicitly at all; rather, they are implicit in the probabilistic structure of the environment. [sent-215, score-0.149]

93 Based on an evolutionary argument, Friston suggests that high utility states are precisely those that have high probability, since otherwise organisms who find themselves frequently in low utility states are unlikely to survive. [sent-216, score-0.78]

94 Thus, adopting a control policy that minimizes a variational upper bound on surprise will lead to optimal behavior. [sent-217, score-0.346]

95 In contrast, our variational framework is motivated by quite different considerations arising from the computational constraints of the brain’s architecture. [sent-221, score-0.193]

96 Nonetheless, these approaches have in common the idea that probabilistic beliefs will be shaped by the utility structure of the environment. [sent-222, score-0.505]

97 The variational framework offers a rather different perspective on bounded rationality; it asserts that humans are indeed trying to find optimal solutions, but subject to certain computational resource constraints. [sent-224, score-0.311]

98 By making explicit what these constraints are, and how they interact at a neural level, our work provides a foundation upon which to develop a more complete neurobiological theory of optimal control under resource constraints. [sent-225, score-0.259]

99 Role of glucose in regulating the brain and cognition. [sent-279, score-0.218]

100 Testing the efficiency of sensory coding with optimal stimulus ensembles. [sent-321, score-0.344]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('utility', 0.39), ('density', 0.22), ('variational', 0.193), ('exn', 0.176), ('coding', 0.176), ('eq', 0.15), ('sn', 0.141), ('neuron', 0.13), ('vp', 0.13), ('densities', 0.128), ('neurons', 0.123), ('resource', 0.118), ('population', 0.117), ('brain', 0.117), ('reward', 0.116), ('muscimol', 0.115), ('costs', 0.11), ('ring', 0.108), ('convolutional', 0.102), ('glucose', 0.101), ('behaviorally', 0.101), ('sensory', 0.097), ('target', 0.096), ('sounds', 0.094), ('action', 0.092), ('utilities', 0.087), ('latencies', 0.087), ('grasshopper', 0.086), ('vul', 0.086), ('aq', 0.075), ('tversky', 0.075), ('machens', 0.075), ('gabaergic', 0.075), ('proposal', 0.073), ('metabolic', 0.073), ('stimulus', 0.071), ('gains', 0.069), ('visual', 0.068), ('codes', 0.065), ('bound', 0.065), ('distractors', 0.065), ('resources', 0.064), ('log', 0.064), ('gain', 0.064), ('code', 0.063), ('receptive', 0.063), ('probabilistic', 0.062), ('firing', 0.062), ('expected', 0.061), ('spiking', 0.06), ('response', 0.058), ('distractor', 0.057), ('gershman', 0.057), ('mcpeek', 0.057), ('multistability', 0.057), ('rationality', 0.057), ('shuler', 0.057), ('approximate', 0.056), ('carlo', 0.053), ('beliefs', 0.053), ('panels', 0.053), ('monte', 0.052), ('control', 0.052), ('exert', 0.05), ('inhibited', 0.05), ('oblique', 0.05), ('lower', 0.05), ('auditory', 0.048), ('cost', 0.047), ('preferred', 0.047), ('panel', 0.047), ('neuroscience', 0.047), ('matt', 0.046), ('neurobiological', 0.046), ('saccadic', 0.046), ('kahneman', 0.046), ('ensembles', 0.044), ('psychological', 0.044), ('grif', 0.043), ('neural', 0.043), ('xn', 0.043), ('perceptual', 0.042), ('schemes', 0.042), ('fn', 0.041), ('encoding', 0.041), ('delta', 0.041), ('culotta', 0.041), ('optimally', 0.04), ('wilson', 0.039), ('calibration', 0.039), ('approximations', 0.039), ('elds', 0.038), ('scheme', 0.038), ('pure', 0.038), ('sharply', 0.038), ('anderson', 0.038), ('schuurmans', 0.037), ('stimulation', 0.036), ('adopting', 0.036), ('friston', 0.036)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0 268 nips-2010-The Neural Costs of Optimal Control

Author: Samuel Gershman, Robert Wilson

Abstract: Optimal control entails combining probabilities and utilities. However, for most practical problems, probability densities can be represented only approximately. Choosing an approximation requires balancing the benefits of an accurate approximation against the costs of computing it. We propose a variational framework for achieving this balance and apply it to the problem of how a neural population code should optimally represent a distribution under resource constraints. The essence of our analysis is the conjecture that population codes are organized to maximize a lower bound on the log expected utility. This theory can account for a plethora of experimental data, including the reward-modulation of sensory receptive fields, GABAergic effects on saccadic movements, and risk aversion in decisions under uncertainty.

2 0.27011311 119 nips-2010-Implicit encoding of prior probabilities in optimal neural populations

Author: Deep Ganguli, Eero P. Simoncelli

Abstract: unknown-abstract

3 0.1634302 161 nips-2010-Linear readout from a neural population with partial correlation data

Author: Adrien Wohrer, Ranulfo Romo, Christian K. Machens

Abstract: How much information does a neural population convey about a stimulus? Answers to this question are known to strongly depend on the correlation of response variability in neural populations. These noise correlations, however, are essentially immeasurable as the number of parameters in a noise correlation matrix grows quadratically with population size. Here, we suggest to bypass this problem by imposing a parametric model on a noise correlation matrix. Our basic assumption is that noise correlations arise due to common inputs between neurons. On average, noise correlations will therefore reflect signal correlations, which can be measured in neural populations. We suggest an explicit parametric dependency between signal and noise correlations. We show how this dependency can be used to “fill the gaps” in noise correlation matrices using an iterative application of the Wishart distribution over positive definite matrices. We apply our method to data from the primary somatosensory cortex of monkeys performing a two-alternative forced-choice task. We compare the discrimination thresholds read out from the population of recorded neurons with the discrimination threshold of the monkey and show that our method predicts different results than simpler, average schemes of noise correlations.

4 0.15043983 59 nips-2010-Deep Coding Network

Author: Yuanqing Lin, Zhang Tong, Shenghuo Zhu, Kai Yu

Abstract: This paper proposes a principled extension of the traditional single-layer flat sparse coding scheme, where a two-layer coding scheme is derived based on theoretical analysis of nonlinear functional approximation that extends recent results for local coordinate coding. The two-layer approach can be easily generalized to deeper structures in a hierarchical multiple-layer manner. Empirically, it is shown that the deep coding approach yields improved performance in benchmark datasets.

5 0.14892301 39 nips-2010-Bayesian Action-Graph Games

Author: Albert X. Jiang, Kevin Leyton-brown

Abstract: Games of incomplete information, or Bayesian games, are an important game-theoretic model and have many applications in economics. We propose Bayesian action-graph games (BAGGs), a novel graphical representation for Bayesian games. BAGGs can represent arbitrary Bayesian games, and furthermore can compactly express Bayesian games exhibiting commonly encountered types of structure including symmetry, action- and type-specific utility independence, and probabilistic independence of type distributions. We provide an algorithm for computing expected utility in BAGGs, and discuss conditions under which the algorithm runs in polynomial time. Bayes-Nash equilibria of BAGGs can be computed by adapting existing algorithms for complete-information normal form games and leveraging our expected utility algorithm. We show both theoretically and empirically that our approaches improve significantly on the state of the art.

6 0.13663979 100 nips-2010-Gaussian Process Preference Elicitation

7 0.1322466 263 nips-2010-Switching state space model for simultaneously estimating state transitions and nonstationary firing rates

8 0.11707685 65 nips-2010-Divisive Normalization: Justification and Effectiveness as Efficient Coding Transform

9 0.11680734 185 nips-2010-Nonparametric Density Estimation for Stochastic Optimization with an Observable State Variable

10 0.11600591 157 nips-2010-Learning to localise sounds with spiking neural networks

11 0.11580511 143 nips-2010-Learning Convolutional Feature Hierarchies for Visual Recognition

12 0.11356699 96 nips-2010-Fractionally Predictive Spiking Neurons

13 0.1123183 21 nips-2010-Accounting for network effects in neuronal responses using L1 regularized point process models

14 0.10654242 194 nips-2010-Online Learning for Latent Dirichlet Allocation

15 0.10352466 44 nips-2010-Brain covariance selection: better individual functional connectivity models using population prior

16 0.099801779 197 nips-2010-Optimal Bayesian Recommendation Sets and Myopically Optimal Choice Query Sets

17 0.092683673 127 nips-2010-Inferring Stimulus Selectivity from the Spatial Structure of Neural Network Dynamics

18 0.092435859 283 nips-2010-Variational Inference over Combinatorial Spaces

19 0.087708525 200 nips-2010-Over-complete representations on recurrent neural networks can support persistent percepts

20 0.08581505 33 nips-2010-Approximate inference in continuous time Gaussian-Jump processes


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.265), (1, -0.026), (2, -0.227), (3, 0.174), (4, 0.027), (5, 0.197), (6, -0.017), (7, 0.038), (8, 0.003), (9, 0.044), (10, 0.05), (11, -0.02), (12, -0.01), (13, -0.16), (14, -0.034), (15, -0.075), (16, -0.037), (17, 0.105), (18, -0.01), (19, 0.125), (20, 0.058), (21, -0.126), (22, 0.112), (23, -0.101), (24, 0.026), (25, -0.139), (26, -0.024), (27, 0.072), (28, -0.09), (29, 0.162), (30, 0.013), (31, -0.008), (32, 0.008), (33, -0.026), (34, 0.064), (35, 0.099), (36, 0.043), (37, 0.035), (38, 0.043), (39, -0.015), (40, 0.005), (41, -0.071), (42, -0.011), (43, -0.044), (44, 0.097), (45, 0.053), (46, -0.087), (47, -0.097), (48, -0.013), (49, -0.009)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.97022814 268 nips-2010-The Neural Costs of Optimal Control

Author: Samuel Gershman, Robert Wilson

Abstract: Optimal control entails combining probabilities and utilities. However, for most practical problems, probability densities can be represented only approximately. Choosing an approximation requires balancing the benefits of an accurate approximation against the costs of computing it. We propose a variational framework for achieving this balance and apply it to the problem of how a neural population code should optimally represent a distribution under resource constraints. The essence of our analysis is the conjecture that population codes are organized to maximize a lower bound on the log expected utility. This theory can account for a plethora of experimental data, including the reward-modulation of sensory receptive fields, GABAergic effects on saccadic movements, and risk aversion in decisions under uncertainty.

2 0.77808434 119 nips-2010-Implicit encoding of prior probabilities in optimal neural populations

Author: Deep Ganguli, Eero P. Simoncelli

Abstract: unknown-abstract

3 0.63922065 161 nips-2010-Linear readout from a neural population with partial correlation data

Author: Adrien Wohrer, Ranulfo Romo, Christian K. Machens

Abstract: How much information does a neural population convey about a stimulus? Answers to this question are known to strongly depend on the correlation of response variability in neural populations. These noise correlations, however, are essentially immeasurable as the number of parameters in a noise correlation matrix grows quadratically with population size. Here, we suggest to bypass this problem by imposing a parametric model on a noise correlation matrix. Our basic assumption is that noise correlations arise due to common inputs between neurons. On average, noise correlations will therefore reflect signal correlations, which can be measured in neural populations. We suggest an explicit parametric dependency between signal and noise correlations. We show how this dependency can be used to “fill the gaps” in noise correlation matrices using an iterative application of the Wishart distribution over positive definite matrices. We apply our method to data from the primary somatosensory cortex of monkeys performing a two-alternative forced-choice task. We compare the discrimination thresholds read out from the population of recorded neurons with the discrimination threshold of the monkey and show that our method predicts different results than simpler, average schemes of noise correlations.

4 0.59816337 81 nips-2010-Evaluating neuronal codes for inference using Fisher information

Author: Haefner Ralf, Matthias Bethge

Abstract: Many studies have explored the impact of response variability on the quality of sensory codes. The source of this variability is almost always assumed to be intrinsic to the brain. However, when inferring a particular stimulus property, variability associated with other stimulus attributes also effectively acts as noise. Here we study the impact of such stimulus-induced response variability for the case of binocular disparity inference. We characterize the response distribution for the binocular energy model in response to random dot stereograms and find it to be very different from the Poisson-like noise usually assumed. We then compute the Fisher information with respect to binocular disparity, present in the monocular inputs to the standard model of early binocular processing, and thereby obtain an upper bound on how much information a model could theoretically extract from them. Then we analyze the information loss incurred by the different ways of combining those inputs to produce a scalar single-neuron response. We find that in the case of depth inference, monocular stimulus variability places a greater limit on the extractable information than intrinsic neuronal noise for typical spike counts. Furthermore, the largest loss of information is incurred by the standard model for position disparity neurons (tuned-excitatory), which are the most ubiquitous in monkey primary visual cortex, while more information from the inputs is preserved in phase-disparity neurons (tuned-near or tuned-far) primarily found in higher cortical regions.

5 0.56944293 157 nips-2010-Learning to localise sounds with spiking neural networks

Author: Dan Goodman, Romain Brette

Abstract: To localise the source of a sound, we use location-specific properties of the signals received at the two ears caused by the asymmetric filtering of the original sound by our head and pinnae, the head-related transfer functions (HRTFs). These HRTFs change throughout an organism’s lifetime, during development for example, and so the required neural circuitry cannot be entirely hardwired. Since HRTFs are not directly accessible from perceptual experience, they can only be inferred from filtered sounds. We present a spiking neural network model of sound localisation based on extracting location-specific synchrony patterns, and a simple supervised algorithm to learn the mapping between synchrony patterns and locations from a set of example sounds, with no previous knowledge of HRTFs. After learning, our model was able to accurately localise new sounds in both azimuth and elevation, including the difficult task of distinguishing sounds coming from the front and back. Keywords: Auditory Perception & Modeling (Primary); Computational Neural Models, Neuroscience, Supervised Learning (Secondary)

6 0.53683114 96 nips-2010-Fractionally Predictive Spiking Neurons

7 0.53075081 197 nips-2010-Optimal Bayesian Recommendation Sets and Myopically Optimal Choice Query Sets

8 0.52051127 21 nips-2010-Accounting for network effects in neuronal responses using L1 regularized point process models

9 0.50644308 65 nips-2010-Divisive Normalization: Justification and Effectiveness as Efficient Coding Transform

10 0.49454549 100 nips-2010-Gaussian Process Preference Elicitation

11 0.46747744 39 nips-2010-Bayesian Action-Graph Games

12 0.46112278 76 nips-2010-Energy Disaggregation via Discriminative Sparse Coding

13 0.45957923 263 nips-2010-Switching state space model for simultaneously estimating state transitions and nonstationary firing rates

14 0.45890215 17 nips-2010-A biologically plausible network for the computation of orientation dominance

15 0.44881114 19 nips-2010-A rational decision making framework for inhibitory control

16 0.44477046 59 nips-2010-Deep Coding Network

17 0.44207403 244 nips-2010-Sodium entry efficiency during action potentials: A novel single-parameter family of Hodgkin-Huxley models

18 0.43039495 266 nips-2010-The Maximal Causes of Natural Scenes are Edge Filters

19 0.41323671 252 nips-2010-SpikeAnts, a spiking neuron network modelling the emergence of organization in a complex system

20 0.41000229 56 nips-2010-Deciphering subsampled data: adaptive compressive sampling as a principle of brain communication


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(13, 0.03), (17, 0.023), (27, 0.172), (30, 0.076), (35, 0.024), (45, 0.211), (50, 0.035), (52, 0.036), (60, 0.039), (63, 0.176), (77, 0.066), (90, 0.034)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.96878558 244 nips-2010-Sodium entry efficiency during action potentials: A novel single-parameter family of Hodgkin-Huxley models

Author: Anand Singh, Renaud Jolivet, Pierre Magistretti, Bruno Weber

Abstract: Sodium entry during an action potential determines the energy efficiency of a neuron. The classic Hodgkin-Huxley model of action potential generation is notoriously inefficient in that regard, with about 4 times more charge flowing through the membrane than the theoretical minimum required to achieve the observed depolarization. Yet, recent experimental results show that mammalian neurons are close to optimal metabolic efficiency and that the dynamics of their voltage-gated channels during the action potential differ significantly from those exhibited by the classic Hodgkin-Huxley model. Nevertheless, the original Hodgkin-Huxley model is still widely used, and rarely to model the squid giant axon from which it was extracted. Here, we introduce a novel family of Hodgkin-Huxley models that correctly account for sodium entry and action potential width, and whose voltage-gated channels display dynamics very similar to the most recent experimental observations in mammalian neurons. We speak of a family of models because the model is parameterized by a single parameter whose variation reproduces the entire range of experimental observations, from cortical pyramidal neurons to Purkinje cells, yielding a very economical framework for modeling a wide range of different central neurons. The present paper demonstrates the performance of this new family of models and discusses its properties.

same-paper 2 0.8984555 268 nips-2010-The Neural Costs of Optimal Control

Author: Samuel Gershman, Robert Wilson

Abstract: Optimal control entails combining probabilities and utilities. However, for most practical problems, probability densities can be represented only approximately. Choosing an approximation requires balancing the benefits of an accurate approximation against the costs of computing it. We propose a variational framework for achieving this balance and apply it to the problem of how a neural population code should optimally represent a distribution under resource constraints. The essence of our analysis is the conjecture that population codes are organized to maximize a lower bound on the log expected utility. This theory can account for a plethora of experimental data, including the reward-modulation of sensory receptive fields, GABAergic effects on saccadic movements, and risk aversion in decisions under uncertainty.

3 0.84221816 161 nips-2010-Linear readout from a neural population with partial correlation data

Author: Adrien Wohrer, Ranulfo Romo, Christian K. Machens

Abstract: How much information does a neural population convey about a stimulus? Answers to this question are known to strongly depend on the correlation of response variability in neural populations. These noise correlations, however, are essentially immeasurable as the number of parameters in a noise correlation matrix grows quadratically with population size. Here, we suggest to bypass this problem by imposing a parametric model on a noise correlation matrix. Our basic assumption is that noise correlations arise due to common inputs between neurons. On average, noise correlations will therefore reflect signal correlations, which can be measured in neural populations. We suggest an explicit parametric dependency between signal and noise correlations. We show how this dependency can be used to “fill the gaps” in noise correlation matrices using an iterative application of the Wishart distribution over positive definite matrices. We apply our method to data from the primary somatosensory cortex of monkeys performing a two-alternative forced-choice task. We compare the discrimination thresholds read out from the population of recorded neurons with the discrimination threshold of the monkey and show that our method predicts different results than simpler, average schemes of noise correlations.

4 0.83786744 21 nips-2010-Accounting for network effects in neuronal responses using L1 regularized point process models

Author: Ryan Kelly, Matthew Smith, Robert Kass, Tai S. Lee

Abstract: Activity of a neuron, even in the early sensory areas, is not simply a function of its local receptive field or tuning properties, but depends on global context of the stimulus, as well as the neural context. This suggests the activity of the surrounding neurons and global brain states can exert considerable influence on the activity of a neuron. In this paper we implemented an L1 regularized point process model to assess the contribution of multiple factors to the firing rate of many individual units recorded simultaneously from V1 with a 96-electrode “Utah” array. We found that the spikes of surrounding neurons indeed provide strong predictions of a neuron’s response, in addition to the neuron’s receptive field transfer function. We also found that the same spikes could be accounted for with the local field potentials, a surrogate measure of global network states. This work shows that accounting for network fluctuations can improve estimates of single trial firing rate and stimulus-response transfer functions.

5 0.83421642 81 nips-2010-Evaluating neuronal codes for inference using Fisher information

Author: Haefner Ralf, Matthias Bethge

Abstract: Many studies have explored the impact of response variability on the quality of sensory codes. The source of this variability is almost always assumed to be intrinsic to the brain. However, when inferring a particular stimulus property, variability associated with other stimulus attributes also effectively acts as noise. Here we study the impact of such stimulus-induced response variability for the case of binocular disparity inference. We characterize the response distribution for the binocular energy model in response to random dot stereograms and find it to be very different from the Poisson-like noise usually assumed. We then compute the Fisher information with respect to binocular disparity, present in the monocular inputs to the standard model of early binocular processing, and thereby obtain an upper bound on how much information a model could theoretically extract from them. Then we analyze the information loss incurred by the different ways of combining those inputs to produce a scalar single-neuron response. We find that in the case of depth inference, monocular stimulus variability places a greater limit on the extractable information than intrinsic neuronal noise for typical spike counts. Furthermore, the largest loss of information is incurred by the standard model for position disparity neurons (tuned-excitatory), which are the most ubiquitous in monkey primary visual cortex, while more information from the inputs is preserved in phase-disparity neurons (tuned-near or tuned-far) primarily found in higher cortical regions.

6 0.83001655 39 nips-2010-Bayesian Action-Graph Games

7 0.82786393 98 nips-2010-Functional form of motion priors in human motion perception

8 0.82694119 6 nips-2010-A Discriminative Latent Model of Image Region and Object Tag Correspondence

9 0.82204998 17 nips-2010-A biologically plausible network for the computation of orientation dominance

10 0.82073033 266 nips-2010-The Maximal Causes of Natural Scenes are Edge Filters

11 0.81962043 200 nips-2010-Over-complete representations on recurrent neural networks can support persistent percepts

12 0.8182348 44 nips-2010-Brain covariance selection: better individual functional connectivity models using population prior

13 0.81756842 121 nips-2010-Improving Human Judgments by Decontaminating Sequential Dependencies

14 0.81649846 194 nips-2010-Online Learning for Latent Dirichlet Allocation

15 0.80571419 19 nips-2010-A rational decision making framework for inhibitory control

16 0.80361259 123 nips-2010-Individualized ROI Optimization via Maximization of Group-wise Consistency of Structural and Functional Profiles

17 0.80072767 127 nips-2010-Inferring Stimulus Selectivity from the Spatial Structure of Neural Network Dynamics

18 0.80029112 56 nips-2010-Deciphering subsampled data: adaptive compressive sampling as a principle of brain communication

19 0.79991364 238 nips-2010-Short-term memory in neuronal networks through dynamical compressed sensing

20 0.7990911 286 nips-2010-Word Features for Latent Dirichlet Allocation