nips nips2003 nips2003-130 knowledge-graph by maker-knowledge-mining

130 nips-2003-Model Uncertainty in Classical Conditioning


Source: pdf

Author: Aaron C. Courville, Geoffrey J. Gordon, David S. Touretzky, Nathaniel D. Daw

Abstract: We develop a framework based on Bayesian model averaging to explain how animals cope with uncertainty about contingencies in classical conditioning experiments. Traditional accounts of conditioning fit parameters within a fixed generative model of reinforcer delivery; uncertainty over the model structure is not considered. We apply the theory to explain the puzzling relationship between second-order conditioning and conditioned inhibition, two similar conditioning regimes that nonetheless result in strongly divergent behavioral outcomes. According to the theory, second-order conditioning results when limited experience leads animals to prefer a simpler world model that produces spurious correlations; conditioned inhibition results when a more complex model is justified by additional experience. 1

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Abstract: We develop a framework based on Bayesian model averaging to explain how animals cope with uncertainty about contingencies in classical conditioning experiments. [sent-11, score-0.765]

2 Traditional accounts of conditioning fit parameters within a fixed generative model of reinforcer delivery; uncertainty over the model structure is not considered. [sent-12, score-0.698]

3 We apply the theory to explain the puzzling relationship between second-order conditioning and conditioned inhibition, two similar conditioning regimes that nonetheless result in strongly divergent behavioral outcomes. [sent-13, score-1.238]

4 According to the theory, second-order conditioning results when limited experience leads animals to prefer a simpler world model that produces spurious correlations; conditioned inhibition results when a more complex model is justified by additional experience. [sent-14, score-1.16]

5 1 Introduction. Most theories of classical conditioning, exemplified by the classic model of Rescorla and Wagner [7], are wholly concerned with parameter learning. [sent-15, score-0.229]

6 They assume a fixed (often implicit) generative model m of reinforcer delivery and treat conditioning as a process of estimating values for the parameters wm of that model. [sent-16, score-1.026]

7 Using the model and the parameters, the probability of reinforcer delivery can be estimated; such estimates are assumed to give rise to conditioned responses in behavioral experiments. [sent-18, score-0.602]

8 More overtly statistical theories have treated uncertainty in the parameter estimates, which can influence predictions and learning [4]. [sent-19, score-0.173]

9 In realistic situations, the underlying contingencies of the environment are complex and unobservable, and it can thus make sense to view the model m as itself uncertain and subject to learning, though (to our knowledge) no explicitly statistical theories of conditioning have yet done so. [sent-20, score-0.604]

10 Under the standard Bayesian approach, such uncertainty can be treated analogously to parameter uncertainty, by representing knowledge about m as a distribution over a set of possible models, conditioned on evidence. [sent-21, score-0.432]

11 This work establishes a relationship between theories of animal learning and a recent line of theory by Tenenbaum and collaborators, which uses similar ideas about Bayesian model learning to explain human causal reasoning [9]. [sent-24, score-0.299]

12 Here we present one of the most interesting and novel applications, an explanation of a rather mysterious classical conditioning phenomenon in which opposite predictions about the likelihood of reinforcement can arise from different amounts of otherwise identical experience [11]. [sent-26, score-0.603]

13 The opposing effects, both well known, are called second-order conditioning and conditioned inhibition. [sent-27, score-0.755]

14 2 A Model of Classical Conditioning. In a conditioning trial, a set of conditioned stimuli CS ≡ {A, B, ...} [sent-29, score-0.865]

15 is presented, potentially accompanied by an unconditioned stimulus or reinforcement signal, US. [sent-32, score-0.241]

16 We represent the jth stimulus with a binary random variable yj such that yj = 1 when the stimulus is present. [sent-33, score-0.35]

17 Here the index j, 1 ≤ j ≤ s, ranges over both the (s − 1) conditioned stimuli and the unconditioned stimulus. [sent-34, score-0.512]

18 The collection of trials within an experimental protocol constitutes a training data set, D = {yjt }, indexed by stimulus j and trial t, 1 ≤ t ≤ T . [sent-35, score-0.391]
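
As an illustrative sketch (not from the paper), D can be written directly as a binary trials-by-stimuli matrix; in the Python/NumPy code below the stimulus ordering and trial counts are arbitrary placeholders chosen to match the A-US and A-X trial types discussed later.

    import numpy as np

    # One row per trial t, one column per stimulus j (here A, X, B, US),
    # with y_jt = 1 when stimulus j is present on trial t.
    STIMULI = ["A", "X", "B", "US"]

    def make_trials(pattern, n):
        """Repeat a single trial type (the set of present stimuli) n times."""
        row = np.array([1 if s in pattern else 0 for s in STIMULI], dtype=int)
        return np.tile(row, (n, 1))

    # A-US reinforced trials interspersed with A-X unreinforced trials.
    D = np.vstack([
        make_trials({"A", "US"}, 10),   # A paired with reinforcement
        make_trials({"A", "X"}, 4),     # A paired with the target stimulus X
    ])
    print(D.shape)                      # (14, 4): T trials by s stimuli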

19 We take the perspective that animals are attempting to recover the generative process underlying the observed stimuli. [sent-36, score-0.162]

20 We claim they assert the existence of latent causes, represented by the binary variables xi ∈ {0, 1}, responsible for evoking the observed stimuli. [sent-37, score-0.184]

21 The relationship between the latent causes and observed stimuli is encoded with a sigmoid belief network. [sent-38, score-0.594]

22 Sigmoid Belief Networks. In sigmoid belief networks, local conditional probabilities are defined as functions of weighted sums of parent nodes. [sent-40, score-0.232]

23 P(yj = 1 | x1, . . . , xc, wm, m) = (1 + exp(−Σi wij xi − wyj))^(−1), (1) and P(yj = 0 | x1, . . . , xc, wm, m) = 1 − P(yj = 1 | x1, . . . , xc, wm, m). [sent-44, score-0.56]

24 The weight, wij , represents the influence of the parent node xi on the child node yj . [sent-51, score-0.166]

25 The bias term wyj encodes the probability of yj in the absence of all parent nodes. [sent-52, score-0.285]

26 The parameter vector wm contains all model parameters for model structure m. [sent-53, score-0.476]

27 The form of the sigmoid belief networks we consider is represented as a directed graphical model in Figure 1a, with the latent causes as parents of the observed stimuli. [sent-54, score-0.524]

28 The latent causes encode the intratrial correlations between stimuli — we do not model the temporal structure of events within a trial. [sent-55, score-0.498]

29 Conditioned on the latent causes, the stimuli are mutually independent. [sent-56, score-0.294]

30 We can express the conditional joint probability of the observed stimuli as P(y1, . . . , ys | x1, . . . , xc, wm, m) = Πj=1..s P(yj | x1, . . . , xc, wm, m). [sent-57, score-0.142]
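
A minimal sketch of Equation 1 and this factored conditional joint, assuming the latent causes and stimuli are held in NumPy arrays (x a length-c binary vector, W a c-by-s matrix of weights wij, w_y the stimulus biases wyj); these variable names are illustrative, not the paper's.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def p_stimuli_given_causes(x, W, w_y):
        """Per-stimulus probabilities P(y_j = 1 | x, w_m, m) from Equation 1."""
        return sigmoid(x @ W + w_y)

    def log_p_trial(y, x, W, w_y):
        """log P(y | x, w_m, m) = sum_j log P(y_j | x, w_m, m), using the
        conditional independence of the stimuli given the latent causes."""
        p = p_stimuli_given_causes(x, W, w_y)
        return np.sum(y * np.log(p) + (1 - y) * np.log1p(-p))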

31 Similarly, we assume that trials are drawn from a stationary process. [sent-61, score-0.216]

32 We do not consider trial order effects, and we assume all trials are mutually independent. [sent-62, score-0.291]

33 (Because of these simplifying assumptions, the present model cannot address a number of phenomena such as the difference between latent inhibition, partial reinforcement, and extinction.) [sent-63, score-0.227]

34 Conditional dependencies are depicted as links between the latent causes (x1, x2) and the observed stimuli (A, B, US) during a trial. [sent-65, score-0.417]

35 Sigmoid belief networks have a number of appealing properties for modeling conditioning. [sent-71, score-0.153]

36 First, the sigmoid belief network is capable of compactly representing correlations between groups of observable stimuli. [sent-72, score-0.212]

37 Without a latent cause, the number of parameters required to represent these correlations would scale exponentially with the number of stimuli. [sent-73, score-0.244]

38 Such additivity has frequently been observed in conditioning experiments [7]. [sent-76, score-0.391]

39 2.1 Prediction under Parameter Uncertainty. Consider a particular network structure, m, with parameters wm. [sent-78, score-0.39]

40 Given m and a set of trials, D, the uncertainty associated with the choice of parameters is represented in a posterior distribution over wm . [sent-79, score-0.517]

41 This posterior is given by Bayes’ rule, p(wm | D, m) ∝ P(D | wm, m) p(wm | m), where P(D | wm, m) is given by Equation 2 and p(wm | m) is the prior distribution over the parameters of m. [sent-80, score-0.449]

42 p(wm | m) = Πij p(wij) · Πi p(wxi) · Πj p(wyj), with Gaussian priors for weights p(wij) = N(0, 3), latent cause biases p(wxi) = N(0, 3), and stimulus biases p(wyj) = N(−15, 1), the latter reflecting an assumption that stimuli are rare in the absence of causes. [sent-82, score-0.535]
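
A sketch of this factored parameter prior; the excerpt does not say whether the second argument of N(·, ·) denotes a variance or a standard deviation, so it is treated here as a variance.

    import numpy as np
    from scipy.stats import norm

    def log_prior(W, w_x, w_y):
        """log p(w_m | m) for weights W (w_ij), cause biases w_x (w_xi),
        and stimulus biases w_y (w_yj)."""
        lp = norm.logpdf(W, loc=0.0, scale=np.sqrt(3.0)).sum()      # p(w_ij) = N(0, 3)
        lp += norm.logpdf(w_x, loc=0.0, scale=np.sqrt(3.0)).sum()   # p(w_xi) = N(0, 3)
        lp += norm.logpdf(w_y, loc=-15.0, scale=1.0).sum()          # p(w_yj) = N(-15, 1)
        return lp

Adding the data log-likelihood of Equation 2 to this quantity gives the unnormalized log posterior over wm.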

43 In conditioning, the test trial measures the conditioned response (CR). [sent-83, score-0.468]

44 This is taken to be a measure of the animal’s estimate of the probability of reinforcement conditioned on the present conditioned stimuli CS . [sent-84, score-0.973]

45 This probability is also conditioned on the absence of the remaining stimuli; however, in the interest of clarity, our notation suppresses these absent stimuli. [sent-85, score-0.434]

46 In the Bayesian framework, given m, this probability, P(US | CS, m, D), is determined by integrating over all values of the parameters weighted by their posterior probability density: P(US | CS, m, D) = ∫ P(US | CS, wm, m, D) p(wm | m, D) dwm. (3) [sent-86, score-0.546]
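
Equation 3 can be approximated by averaging P(US | CS, wm, m) over posterior samples of wm, such as those produced by the MCMC sampler mentioned below. The sketch below marginalizes the latent causes by brute-force enumeration (feasible only for small c) and assumes each cause is active independently with probability given by the sigmoid of its bias wxi; the array layout follows the hypothetical encoding used earlier (last column = US).

    from itertools import product
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def p_us_given_cs(cs_mask, W, w_x, w_y):
        """P(US = 1 | test stimuli present, remaining CSes absent, w_m, m)."""
        cs_mask = np.asarray(cs_mask)
        c, s = W.shape
        num = den = 0.0
        for x in product([0, 1], repeat=c):          # enumerate latent cause patterns
            x = np.array(x)
            p_x = np.prod(np.where(x == 1, sigmoid(w_x), 1.0 - sigmoid(w_x)))
            p_y = sigmoid(x @ W + w_y)               # per-stimulus probabilities
            # probability of the queried CS pattern (US column excluded) ...
            p_cs = np.prod(np.where(cs_mask[:-1] == 1, p_y[:-1], 1.0 - p_y[:-1]))
            den += p_x * p_cs
            num += p_x * p_cs * p_y[-1]              # ... jointly with US = 1
        return num / den

    def predictive_us(cs_mask, posterior_samples):
        """Monte Carlo version of Equation 3: average over samples of (W, w_x, w_y)."""
        return np.mean([p_us_given_cs(cs_mask, W, w_x, w_y)
                        for (W, w_x, w_y) in posterior_samples])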

47 We also encode a further preference for simpler models through the prior over model structure, which we factor as P(m) = P(c) Πi=1..c P(li), where c is the number of latent causes and li is the number of directed links emanating from xi. [sent-91, score-0.492]

48 This strong prior over model structures is required in addition to the automatic Occam’s razor effect in order to explain the animal behaviors we consider. [sent-93, score-0.213]
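
A sketch of the structural prior P(m) = P(c) Πi P(li). The excerpt does not specify the forms of P(c) and P(li), so geometric distributions are used here purely as placeholders that penalize additional latent causes and links.

    import numpy as np

    def log_structure_prior(links_per_cause, p_c=0.5, p_l=0.5):
        """links_per_cause: list of l_i, the number of links leaving cause x_i.
        Geometric placeholders: P(c) = p_c (1 - p_c)^c, P(l_i) = p_l (1 - p_l)^l_i."""
        c = len(links_per_cause)
        log_p = np.log(p_c) + c * np.log(1.0 - p_c)
        for l_i in links_per_cause:
            log_p += np.log(p_l) + l_i * np.log(1.0 - p_l)
        return log_p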

49 In a richer model (including, e.g., temporal ordering effects and multiple perceptual dimensions), model shifts equivalent to the addition of a single latent variable in our setting would introduce a great deal of additional model complexity and require proportionally more evidential justification. [sent-97, score-0.34]

50 Jumps include the addition or removal of links or latent causes, or updates to the stimulus biases or weights. [sent-105, score-0.351]
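
A schematic structure proposal for such a sampler, assuming (hypothetically) that the model structure is stored as a binary cause-by-stimulus adjacency matrix. Only the bookkeeping of the jumps is shown; a complete reversible-jump sampler must also propose parameters for new links or causes and accept each jump with the appropriate Metropolis-Hastings ratio, which is omitted here.

    import numpy as np

    def propose_structure_jump(links, rng):
        """Propose one structural move on a (num_causes x num_stimuli) binary matrix."""
        links = links.copy()
        move = rng.choice(["toggle_link", "add_cause", "remove_cause"])
        if move == "toggle_link" and links.size > 0:
            i = rng.integers(links.shape[0])
            j = rng.integers(links.shape[1])
            links[i, j] = 1 - links[i, j]                 # add or remove a single link
        elif move == "add_cause":
            new_row = rng.integers(0, 2, size=(1, links.shape[1]))
            links = np.vstack([links, new_row])           # new latent cause, random links
        elif move == "remove_cause" and links.shape[0] > 0:
            i = rng.integers(links.shape[0])
            links = np.delete(links, i, axis=0)           # drop an existing latent cause
        return move, links

    # Example: rng = np.random.default_rng(0)
    #          move, new_links = propose_structure_jump(np.ones((1, 4), dtype=int), rng)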

51 3 Second-Order Conditioning and Conditioned Inhibition. We use the model to shed light on the relationship between two classical conditioning phenomena, second-order conditioning and conditioned inhibition. [sent-111, score-1.301]

52 The procedures for establishing a second-order excitor and a conditioned inhibitor are similar, yet the results are drastically different. [sent-112, score-0.591]

53 Both procedures involve two kinds of trials: a conditioned stimulus A is presented with the US (A-US ); and A is also presented with a target conditioned stimulus X in unreinforced trials (A-X). [sent-113, score-1.187]

54 In second order conditioning, X becomes an excitor — it is associated with increased probability of reinforcement, demonstrated by conditioned responding. [sent-114, score-0.483]

55 But in conditioned inhibition, X becomes an inhibitor, i.e., it comes to predict a reduced probability of reinforcement. [sent-115, score-0.178]

56 Under previous theories [8], it might have seemed that the crucial distinction between second-order conditioning and conditioned inhibition had to do with either blocked versus interspersed trials, or with sequential versus simultaneous presentation of the CSes. [sent-121, score-1.114]

57 However, Yin et al. [11] found that using only interspersed trials and simultaneous presentation of the conditioned stimuli, they were able to shift from second-order conditioning to conditioned inhibition simply by increasing the number of A-X pairings. [sent-122, score-1.589]

58 In Figure 2a, we see that P (US | X, D) reveals significant second order conditioning with few A-X trials. [sent-133, score-0.391]

59 With more trials the predicted probability of reinforcement quickly decreases. [sent-134, score-0.351]

60 With few A-X trials there are insufficient data to justify a complicated model that accurately fits the data. [sent-137, score-0.312]

61 Due to the automatic Occam’s razor and the prior preference for simple models, high posterior density is inferred for the simple model of Figure 3a. [sent-138, score-0.213]

62 This model combines the stimuli from all trial types and attributes them to a single latent cause. [sent-139, score-0.412]

63 When X is tested alone, its connection to the US through the latent cause results in a large P (US | X, D). [sent-140, score-0.219]

64 [Figure 2 panel residue] (a) Second-order conditioning; x-axis: number of A−X trials. [sent-151, score-0.216]

65 (b) Summation test; (c) Retardation test; x-axes: number of A−X trials. Figure 2: A summary of the simulation results. [sent-155, score-0.49]

66 For few trials (2 to 8), P (US | X, D) is high, indicative of second-order conditioning. [sent-158, score-0.216]

67 After 10 trials, X is able to significantly reduce the predicted probability of reinforcement generated by the presentation of B. [sent-160, score-0.173]

68 In the model, X is made a conditioned inhibitor by a negative-valued weight between x2 and X. [sent-165, score-0.477]

69 Note that the shift from excitation to inhibition is due to inclusion of uncertainty over models; inferring the parameters with the more complex model fixed would result in immediate inhibition. [sent-167, score-0.316]

70 Yin et al. [11] also conducted a retardation test of conditioned inhibition for X. [sent-169, score-0.68]

71 Our retardation test results are shown in Figure 2 and are in agreement with the findings of Yin et al. [sent-171, score-0.165]

72 A further mystery about conditioned inhibitors, from the perspective of the benchmark theory of Rescorla and Wagner [7], is the nonextinction effect: repeated presentations of a conditioned inhibitor X alone and unreinforced do not extinguish its inhibitory properties. [sent-172, score-1.064]

73 An experiment by Williams and Overmier [10] demonstrated that unpaired presentations of a conditioned inhibitor can actually enhance its ability to suppress responding in a transfer test. [sent-173, score-0.828]

74 Here we used the previous dataset with only 8 A-X pairings and added a number of unpaired presentations of X. [sent-175, score-0.352]

75 The additional unpaired presentations shift the model from a second-order conditioning regime to a conditioned inhibition regime. [sent-176, score-1.279]

76 The extinction trials suppress posterior density over simple models that exhibit a positive correlation between X and US , shifting density to more complex models and unmasking the inhibitor. [sent-177, score-0.477]

77 4 Discussion. We have demonstrated our ideas in the context of a very abstract set of candidate models, ignoring the temporal arrangement of trials and of the events within them. [sent-178, score-0.254]

78 Obviously, both of these issues have important effects, and the present framework can be straightforwardly generalized to account for them, with the addition of temporal dependencies to the latent variables [1] and the removal of the stationarity assumption [4]. [sent-179, score-0.254]

79 An odd but key concept in early models of classical conditioning is the “configural unit,” a detector for a conjunction of co-active stimuli. [sent-180, score-0.509]

80 [Figure 3a residue] (a) Few A-X trials: network diagram over x1 and A, X, B, US. [sent-184, score-0.216]

81 [Figure 3b residue] (b) Many A-X trials: network diagram over x1, x2 and A, X, B, US. [sent-185, score-0.49]

82 (c) Average number of latent causes vs. number of A−X trials (model size over trials). Figure 3: Sigmoid belief networks with high probability density under the posterior. [sent-188, score-0.58]

83 (b) After many A-X pairings: this model exhibits conditioned inhibition. [sent-190, score-0.407]

84 (c) The average number of latent causes as a function of A-X pairings. [sent-191, score-0.274]

85 With a stimulus configuration represented through a latent cause, our theory provides a clearer prescription for how to reason about model structure. [sent-193, score-0.357]

86 Another body of data on which our work may shed light is acquisition of a conditioned response. [sent-195, score-0.43]

87 Some accounts of acquisition (e.g., [4]) propose that animals respond to a conditioned stimulus (CS) when the difference in the reinforcement rate between the presence and absence of the CS satisfies some test of significance. [sent-198, score-0.72]

88 From the perspective of our model, this test looks like a heuristic for choosing between generative models of stimulus delivery that differ as to whether the CS and US are correlated through a shared hidden cause. [sent-199, score-0.318]

89 To our knowledge, the relationship between second-order conditioning and conditioned inhibition has never been explicitly studied using previous theories. [sent-200, score-0.964]

90 This is in part because the majority of classical conditioning theories do not account for second-order conditioning at all, since they typically consider learning only about CS-US but not CS-CS correlations. [sent-201, score-0.968]

91 Second-order conditioning can also be predicted if the A-X pairings cause some sort of representational change so that A’s excitatory associations generalize to X. [sent-203, score-0.56]

92 Yin et al. [11] suggest that if this representational learning is fast (as in [6], though that theory would need to be modified to include any second-order effects) and if conditioned inhibition accrues only gradually by error-driven learning [7], then second-order conditioning will dominate initially. [sent-205, score-0.963]

93 The details of such an account seem never to have been worked out, and even if they were, such a mechanistic theory would be considerably less illuminating than our theory as to the normative reasons why the animals should predict as they do. [sent-206, score-0.177]

94 [Figure 4a residue] Legend: 1, 2, or 3 unpaired X− trials; y-axis: p(wm, m | D). [sent-209, score-0.507]

95 (b) Summation test; x-axis: number of X− trials. Figure 4: Effect of adding unpaired presentations of X on the strength of X as an inhibitor. [sent-217, score-0.51]

96 With only 1 unpaired presentation of X, most models predict a high probability of US (second-order conditioning). [sent-219, score-0.29]

97 With 2 or 3 unpaired presentations of X, models which predict a low P (US | X, B) get more posterior weight (conditioned inhibition). [sent-220, score-0.392]

98 (b) A plot contrasting P (US | B, D) and P (US | X, B, D) as a function of unpaired X trials. [sent-221, score-0.152]

99 Some types of conditioned inhibitors carry collateral excitatory associations. [sent-291, score-0.454]

100 Second-order conditioning and Pavlovian conditioned inhibition: Operational similarities and differences. [sent-299, score-0.755]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('conditioning', 0.391), ('conditioned', 0.364), ('wm', 0.363), ('trials', 0.216), ('latent', 0.184), ('cs', 0.184), ('inhibition', 0.178), ('yin', 0.174), ('unpaired', 0.152), ('sigmoid', 0.119), ('presentations', 0.113), ('inhibitor', 0.113), ('stimuli', 0.11), ('retardation', 0.109), ('theories', 0.105), ('reinforcement', 0.103), ('stimulus', 0.1), ('causes', 0.09), ('excitor', 0.087), ('pairings', 0.087), ('reinforcer', 0.087), ('wyj', 0.087), ('animals', 0.086), ('classical', 0.081), ('delivery', 0.076), ('yj', 0.075), ('trial', 0.075), ('xc', 0.072), ('uncertainty', 0.068), ('acquisition', 0.066), ('contingencies', 0.065), ('dwm', 0.065), ('gural', 0.065), ('pavlovian', 0.065), ('rescorla', 0.065), ('wxi', 0.065), ('bayesian', 0.06), ('belief', 0.06), ('us', 0.06), ('animal', 0.059), ('posterior', 0.059), ('cr', 0.057), ('complicated', 0.053), ('parent', 0.053), ('occam', 0.052), ('razor', 0.052), ('mcmc', 0.051), ('excitatory', 0.047), ('li', 0.047), ('courville', 0.043), ('extinction', 0.043), ('inhibitors', 0.043), ('unreinforced', 0.043), ('yjt', 0.043), ('model', 0.043), ('xb', 0.043), ('generative', 0.039), ('presentation', 0.038), ('wij', 0.038), ('absence', 0.038), ('temporal', 0.038), ('unconditioned', 0.038), ('interspersed', 0.038), ('patterning', 0.038), ('secondorder', 0.038), ('models', 0.037), ('perspective', 0.037), ('bars', 0.036), ('cause', 0.035), ('tone', 0.034), ('wagner', 0.034), ('reversible', 0.034), ('monte', 0.034), ('biases', 0.034), ('correlations', 0.033), ('links', 0.033), ('probability', 0.032), ('wins', 0.032), ('stationarity', 0.032), ('effects', 0.032), ('relationship', 0.031), ('preference', 0.031), ('explain', 0.031), ('predict', 0.031), ('theory', 0.03), ('marginal', 0.029), ('test', 0.029), ('responding', 0.029), ('suppress', 0.029), ('jump', 0.029), ('density', 0.028), ('transfer', 0.028), ('effect', 0.028), ('networks', 0.028), ('experience', 0.028), ('simpler', 0.027), ('ndings', 0.027), ('drastically', 0.027), ('parameters', 0.027), ('et', 0.027)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000012 130 nips-2003-Model Uncertainty in Classical Conditioning

Author: Aaron C. Courville, Geoffrey J. Gordon, David S. Touretzky, Nathaniel D. Daw

Abstract: We develop a framework based on Bayesian model averaging to explain how animals cope with uncertainty about contingencies in classical conditioning experiments. Traditional accounts of conditioning fit parameters within a fixed generative model of reinforcer delivery; uncertainty over the model structure is not considered. We apply the theory to explain the puzzling relationship between second-order conditioning and conditioned inhibition, two similar conditioning regimes that nonetheless result in strongly divergent behavioral outcomes. According to the theory, second-order conditioning results when limited experience leads animals to prefer a simpler world model that produces spurious correlations; conditioned inhibition results when a more complex model is justified by additional experience. 1

2 0.11892293 4 nips-2003-A Biologically Plausible Algorithm for Reinforcement-shaped Representational Learning

Author: Maneesh Sahani

Abstract: Significant plasticity in sensory cortical representations can be driven in mature animals either by behavioural tasks that pair sensory stimuli with reinforcement, or by electrophysiological experiments that pair sensory input with direct stimulation of neuromodulatory nuclei, but usually not by sensory stimuli presented alone. Biologically motivated theories of representational learning, however, have tended to focus on unsupervised mechanisms, which may play a significant role on evolutionary or developmental timescales, but which neglect this essential role of reinforcement in adult plasticity. By contrast, theoretical reinforcement learning has generally dealt with the acquisition of optimal policies for action in an uncertain world, rather than with the concurrent shaping of sensory representations. This paper develops a framework for representational learning which builds on the relative success of unsupervised generativemodelling accounts of cortical encodings to incorporate the effects of reinforcement in a biologically plausible way. 1

3 0.10314795 77 nips-2003-Gaussian Process Latent Variable Models for Visualisation of High Dimensional Data

Author: Neil D. Lawrence

Abstract: In this paper we introduce a new underlying probabilistic model for principal component analysis (PCA). Our formulation interprets PCA as a particular Gaussian process prior on a mapping from a latent space to the observed data-space. We show that if the prior’s covariance function constrains the mappings to be linear the model is equivalent to PCA, we then extend the model by considering less restrictive covariance functions which allow non-linear mappings. This more general Gaussian process latent variable model (GPLVM) is then evaluated as an approach to the visualisation of high dimensional data for three different data-sets. Additionally our non-linear algorithm can be further kernelised leading to ‘twin kernel PCA’ in which a mapping between feature spaces occurs.

4 0.089304194 56 nips-2003-Dopamine Modulation in a Basal Ganglio-Cortical Network of Working Memory

Author: Aaron J. Gruber, Peter Dayan, Boris S. Gutkin, Sara A. Solla

Abstract: Dopamine exerts two classes of effect on the sustained neural activity in prefrontal cortex that underlies working memory. Direct release in the cortex increases the contrast of prefrontal neurons, enhancing the robustness of storage. Release of dopamine in the striatum is associated with salient stimuli and makes medium spiny neurons bistable; this modulation of the output of spiny neurons affects prefrontal cortex so as to indirectly gate access to working memory and additionally damp sensitivity to noise. Existing models have treated dopamine in one or other structure, or have addressed basal ganglia gating of working memory exclusive of dopamine effects. In this paper we combine these mechanisms and explore their joint effect. We model a memory-guided saccade task to illustrate how dopamine’s actions lead to working memory that is selective for salient input and has increased robustness to distraction. 1

5 0.085864969 64 nips-2003-Estimating Internal Variables and Parameters of a Learning Agent by a Particle Filter

Author: Kazuyuki Samejima, Kenji Doya, Yasumasa Ueda, Minoru Kimura

Abstract: When we model a higher order functions, such as learning and memory, we face a difficulty of comparing neural activities with hidden variables that depend on the history of sensory and motor signals and the dynamics of the network. Here, we propose novel method for estimating hidden variables of a learning agent, such as connection weights from sequences of observable variables. Bayesian estimation is a method to estimate the posterior probability of hidden variables from observable data sequence using a dynamic model of hidden and observable variables. In this paper, we apply particle filter for estimating internal parameters and metaparameters of a reinforcement learning model. We verified the effectiveness of the method using both artificial data and real animal behavioral data. 1

6 0.078522585 138 nips-2003-Non-linear CCA and PCA by Alignment of Local Models

7 0.069383673 160 nips-2003-Prediction on Spike Data Using Kernel Algorithms

8 0.069354884 35 nips-2003-Attractive People: Assembling Loose-Limbed Models using Non-parametric Belief Propagation

9 0.069323301 117 nips-2003-Linear Response for Approximate Inference

10 0.059934702 51 nips-2003-Design of Experiments via Information Theory

11 0.057266869 29 nips-2003-Applying Metric-Trees to Belief-Point POMDPs

12 0.057187892 161 nips-2003-Probabilistic Inference in Human Sensorimotor Processing

13 0.056008868 169 nips-2003-Sample Propagation

14 0.055926137 124 nips-2003-Max-Margin Markov Networks

15 0.054683827 49 nips-2003-Decoding V1 Neuronal Activity using Particle Filtering with Volterra Kernels

16 0.052846983 196 nips-2003-Wormholes Improve Contrastive Divergence

17 0.052161694 95 nips-2003-Insights from Machine Learning Applied to Human Visual Classification

18 0.049155921 142 nips-2003-On the Concentration of Expectation and Approximate Inference in Layered Networks

19 0.049013667 125 nips-2003-Maximum Likelihood Estimation of a Stochastic Integrate-and-Fire Neural Model

20 0.04844819 179 nips-2003-Sparse Representation and Its Applications in Blind Source Separation


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.176), (1, 0.032), (2, 0.032), (3, 0.069), (4, -0.008), (5, -0.011), (6, 0.121), (7, -0.007), (8, -0.01), (9, -0.037), (10, 0.022), (11, 0.051), (12, 0.068), (13, -0.038), (14, 0.147), (15, 0.11), (16, -0.043), (17, 0.109), (18, 0.032), (19, 0.045), (20, -0.044), (21, 0.086), (22, -0.056), (23, 0.043), (24, 0.016), (25, -0.008), (26, 0.103), (27, -0.212), (28, 0.023), (29, -0.123), (30, -0.002), (31, 0.034), (32, 0.021), (33, -0.054), (34, 0.067), (35, 0.047), (36, -0.017), (37, 0.026), (38, 0.046), (39, 0.054), (40, 0.034), (41, 0.083), (42, -0.04), (43, -0.087), (44, 0.085), (45, 0.109), (46, -0.005), (47, 0.195), (48, -0.002), (49, -0.104)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.96125299 130 nips-2003-Model Uncertainty in Classical Conditioning

Author: Aaron C. Courville, Geoffrey J. Gordon, David S. Touretzky, Nathaniel D. Daw

Abstract: We develop a framework based on Bayesian model averaging to explain how animals cope with uncertainty about contingencies in classical conditioning experiments. Traditional accounts of conditioning fit parameters within a fixed generative model of reinforcer delivery; uncertainty over the model structure is not considered. We apply the theory to explain the puzzling relationship between second-order conditioning and conditioned inhibition, two similar conditioning regimes that nonetheless result in strongly divergent behavioral outcomes. According to the theory, second-order conditioning results when limited experience leads animals to prefer a simpler world model that produces spurious correlations; conditioned inhibition results when a more complex model is justified by additional experience. 1

2 0.59992564 56 nips-2003-Dopamine Modulation in a Basal Ganglio-Cortical Network of Working Memory

Author: Aaron J. Gruber, Peter Dayan, Boris S. Gutkin, Sara A. Solla

Abstract: Dopamine exerts two classes of effect on the sustained neural activity in prefrontal cortex that underlies working memory. Direct release in the cortex increases the contrast of prefrontal neurons, enhancing the robustness of storage. Release of dopamine in the striatum is associated with salient stimuli and makes medium spiny neurons bistable; this modulation of the output of spiny neurons affects prefrontal cortex so as to indirectly gate access to working memory and additionally damp sensitivity to noise. Existing models have treated dopamine in one or other structure, or have addressed basal ganglia gating of working memory exclusive of dopamine effects. In this paper we combine these mechanisms and explore their joint effect. We model a memory-guided saccade task to illustrate how dopamine’s actions lead to working memory that is selective for salient input and has increased robustness to distraction. 1

3 0.51105863 77 nips-2003-Gaussian Process Latent Variable Models for Visualisation of High Dimensional Data

Author: Neil D. Lawrence

Abstract: In this paper we introduce a new underlying probabilistic model for principal component analysis (PCA). Our formulation interprets PCA as a particular Gaussian process prior on a mapping from a latent space to the observed data-space. We show that if the prior’s covariance function constrains the mappings to be linear the model is equivalent to PCA, we then extend the model by considering less restrictive covariance functions which allow non-linear mappings. This more general Gaussian process latent variable model (GPLVM) is then evaluated as an approach to the visualisation of high dimensional data for three different data-sets. Additionally our non-linear algorithm can be further kernelised leading to ‘twin kernel PCA’ in which a mapping between feature spaces occurs.

4 0.50020766 4 nips-2003-A Biologically Plausible Algorithm for Reinforcement-shaped Representational Learning

Author: Maneesh Sahani

Abstract: Significant plasticity in sensory cortical representations can be driven in mature animals either by behavioural tasks that pair sensory stimuli with reinforcement, or by electrophysiological experiments that pair sensory input with direct stimulation of neuromodulatory nuclei, but usually not by sensory stimuli presented alone. Biologically motivated theories of representational learning, however, have tended to focus on unsupervised mechanisms, which may play a significant role on evolutionary or developmental timescales, but which neglect this essential role of reinforcement in adult plasticity. By contrast, theoretical reinforcement learning has generally dealt with the acquisition of optimal policies for action in an uncertain world, rather than with the concurrent shaping of sensory representations. This paper develops a framework for representational learning which builds on the relative success of unsupervised generativemodelling accounts of cortical encodings to incorporate the effects of reinforcement in a biologically plausible way. 1

5 0.47584108 138 nips-2003-Non-linear CCA and PCA by Alignment of Local Models

Author: Jakob J. Verbeek, Sam T. Roweis, Nikos A. Vlassis

Abstract: We propose a non-linear Canonical Correlation Analysis (CCA) method which works by coordinating or aligning mixtures of linear models. In the same way that CCA extends the idea of PCA, our work extends recent methods for non-linear dimensionality reduction to the case where multiple embeddings of the same underlying low dimensional coordinates are observed, each lying on a different high dimensional manifold. We also show that a special case of our method, when applied to only a single manifold, reduces to the Laplacian Eigenmaps algorithm. As with previous alignment schemes, once the mixture models have been estimated, all of the parameters of our model can be estimated in closed form without local optima in the learning. Experimental results illustrate the viability of the approach as a non-linear extension of CCA. 1

6 0.45123488 58 nips-2003-Efficient Multiscale Sampling from Products of Gaussian Mixtures

7 0.41845524 83 nips-2003-Hierarchical Topic Models and the Nested Chinese Restaurant Process

8 0.38992843 175 nips-2003-Sensory Modality Segregation

9 0.38774943 172 nips-2003-Semi-Supervised Learning with Trees

10 0.3862969 161 nips-2003-Probabilistic Inference in Human Sensorimotor Processing

11 0.36631513 54 nips-2003-Discriminative Fields for Modeling Spatial Dependencies in Natural Images

12 0.36187315 127 nips-2003-Mechanism of Neural Interference by Transcranial Magnetic Stimulation: Network or Single Neuron?

13 0.35604051 140 nips-2003-Nonlinear Processing in LGN Neurons

14 0.3493278 47 nips-2003-Computing Gaussian Mixture Models with EM Using Equivalence Constraints

15 0.33800313 51 nips-2003-Design of Experiments via Information Theory

16 0.32389259 35 nips-2003-Attractive People: Assembling Loose-Limbed Models using Non-parametric Belief Propagation

17 0.31957239 196 nips-2003-Wormholes Improve Contrastive Divergence

18 0.31387386 110 nips-2003-Learning a World Model and Planning with a Self-Organizing, Dynamic Neural System

19 0.31377873 125 nips-2003-Maximum Likelihood Estimation of a Stochastic Integrate-and-Fire Neural Model

20 0.30179542 69 nips-2003-Factorization with Uncertainty and Missing Data: Exploiting Temporal Coherence


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(0, 0.026), (11, 0.013), (29, 0.431), (30, 0.013), (35, 0.036), (53, 0.073), (59, 0.016), (69, 0.018), (71, 0.054), (76, 0.049), (85, 0.077), (91, 0.091), (99, 0.019)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.91603065 130 nips-2003-Model Uncertainty in Classical Conditioning

Author: Aaron C. Courville, Geoffrey J. Gordon, David S. Touretzky, Nathaniel D. Daw

Abstract: We develop a framework based on Bayesian model averaging to explain how animals cope with uncertainty about contingencies in classical conditioning experiments. Traditional accounts of conditioning fit parameters within a fixed generative model of reinforcer delivery; uncertainty over the model structure is not considered. We apply the theory to explain the puzzling relationship between second-order conditioning and conditioned inhibition, two similar conditioning regimes that nonetheless result in strongly divergent behavioral outcomes. According to the theory, second-order conditioning results when limited experience leads animals to prefer a simpler world model that produces spurious correlations; conditioned inhibition results when a more complex model is justified by additional experience. 1

2 0.71071291 114 nips-2003-Limiting Form of the Sample Covariance Eigenspectrum in PCA and Kernel PCA

Author: David Hoyle, Magnus Rattray

Abstract: We derive the limiting form of the eigenvalue spectrum for sample covariance matrices produced from non-isotropic data. For the analysis of standard PCA we study the case where the data has increased variance along a small number of symmetry-breaking directions. The spectrum depends on the strength of the symmetry-breaking signals and on a parameter α which is the ratio of sample size to data dimension. Results are derived in the limit of large data dimension while keeping α fixed. As α increases there are transitions in which delta functions emerge from the upper end of the bulk spectrum, corresponding to the symmetry-breaking directions in the data, and we calculate the bias in the corresponding eigenvalues. For kernel PCA the covariance matrix in feature space may contain symmetry-breaking structure even when the data components are independently distributed with equal variance. We show examples of phase-transition behaviour analogous to the PCA results in this case. 1

3 0.68200934 33 nips-2003-Approximate Planning in POMDPs with Macro-Actions

Author: Georgios Theocharous, Leslie P. Kaelbling

Abstract: Recent research has demonstrated that useful POMDP solutions do not require consideration of the entire belief space. We extend this idea with the notion of temporal abstraction. We present and explore a new reinforcement learning algorithm over grid-points in belief space, which uses macro-actions and Monte Carlo updates of the Q-values. We apply the algorithm to a large scale robot navigation task and demonstrate that with temporal abstraction we can consider an even smaller part of the belief space, we can learn POMDP policies faster, and we can do information gathering more efficiently.

4 0.46743664 66 nips-2003-Extreme Components Analysis

Author: Max Welling, Christopher Williams, Felix V. Agakov

Abstract: Principal components analysis (PCA) is one of the most widely used techniques in machine learning and data mining. Minor components analysis (MCA) is less well known, but can also play an important role in the presence of constraints on the data distribution. In this paper we present a probabilistic model for “extreme components analysis” (XCA) which at the maximum likelihood solution extracts an optimal combination of principal and minor components. For a given number of components, the log-likelihood of the XCA model is guaranteed to be larger or equal than that of the probabilistic models for PCA and MCA. We describe an efficient algorithm to solve for the globally optimal solution. For log-convex spectra we prove that the solution consists of principal components only, while for log-concave spectra the solution consists of minor components. In general, the solution admits a combination of both. In experiments we explore the properties of XCA on some synthetic and real-world datasets.

5 0.46594962 161 nips-2003-Probabilistic Inference in Human Sensorimotor Processing

Author: Konrad P. Körding, Daniel M. Wolpert

Abstract: When we learn a new motor skill, we have to contend with both the variability inherent in our sensors and the task. The sensory uncertainty can be reduced by using information about the distribution of previously experienced tasks. Here we impose a distribution on a novel sensorimotor task and manipulate the variability of the sensory feedback. We show that subjects internally represent both the distribution of the task as well as their sensory uncertainty. Moreover, they combine these two sources of information in a way that is qualitatively predicted by optimal Bayesian processing. We further analyze if the subjects can represent multimodal distributions such as mixtures of Gaussians. The results show that the CNS employs probabilistic models during sensorimotor learning even when the priors are multimodal.

6 0.45724279 4 nips-2003-A Biologically Plausible Algorithm for Reinforcement-shaped Representational Learning

7 0.45371225 125 nips-2003-Maximum Likelihood Estimation of a Stochastic Integrate-and-Fire Neural Model

8 0.4457528 162 nips-2003-Probabilistic Inference of Speech Signals from Phaseless Spectrograms

9 0.44468692 83 nips-2003-Hierarchical Topic Models and the Nested Chinese Restaurant Process

10 0.4419713 12 nips-2003-A Model for Learning the Semantics of Pictures

11 0.44004661 54 nips-2003-Discriminative Fields for Modeling Spatial Dependencies in Natural Images

12 0.43810663 103 nips-2003-Learning Bounds for a Generalized Family of Bayesian Posterior Distributions

13 0.43153155 20 nips-2003-All learning is Local: Multi-agent Learning in Global Reward Games

14 0.43143755 138 nips-2003-Non-linear CCA and PCA by Alignment of Local Models

15 0.42760232 38 nips-2003-Autonomous Helicopter Flight via Reinforcement Learning

16 0.42337516 131 nips-2003-Modeling User Rating Profiles For Collaborative Filtering

17 0.42200023 141 nips-2003-Nonstationary Covariance Functions for Gaussian Process Regression

18 0.41931686 77 nips-2003-Gaussian Process Latent Variable Models for Visualisation of High Dimensional Data

19 0.4192057 116 nips-2003-Linear Program Approximations for Factored Continuous-State Markov Decision Processes

20 0.4184981 177 nips-2003-Simplicial Mixtures of Markov Chains: Distributed Modelling of Dynamic User Profiles