nips nips2007 nips2007-203 knowledge-graph by maker-knowledge-mining

203 nips-2007-The rat as particle filter


Source: pdf

Author: Aaron C. Courville, Nathaniel D. Daw

Abstract: Although theorists have interpreted classical conditioning as a laboratory model of Bayesian belief updating, a recent reanalysis showed that the key features that theoretical models capture about learning are artifacts of averaging over subjects. Rather than learning smoothly to asymptote (reflecting, according to Bayesian models, the gradual tradeoff from prior to posterior as data accumulate), subjects learn suddenly and their predictions fluctuate perpetually. We suggest that abrupt and unstable learning can be modeled by assuming subjects are conducting inference using sequential Monte Carlo sampling with a small number of samples — one, in our simulations. Ensemble behavior resembles exact Bayesian models since, as in particle filters, it averages over many samples. Further, the model is capable of exhibiting sophisticated behaviors like retrospective revaluation at the ensemble level, even given minimally sophisticated individuals that do not track uncertainty in their beliefs over trials. 1

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Abstract: Although theorists have interpreted classical conditioning as a laboratory model of Bayesian belief updating, a recent reanalysis showed that the key features that theoretical models capture about learning are artifacts of averaging over subjects. [sent-7, score-0.208]

2 Rather than learning smoothly to asymptote (reflecting, according to Bayesian models, the gradual tradeoff from prior to posterior as data accumulate), subjects learn suddenly and their predictions fluctuate perpetually. [sent-8, score-0.388]

3 We suggest that abrupt and unstable learning can be modeled by assuming subjects are conducting inference using sequential Monte Carlo sampling with a small number of samples — one, in our simulations. [sent-9, score-0.494]

4 Ensemble behavior resembles exact Bayesian models since, as in particle filters, it averages over many samples. [sent-10, score-0.447]

5 Further, the model is capable of exhibiting sophisticated behaviors like retrospective revaluation at the ensemble level, even given minimally sophisticated individuals that do not track uncertainty in their beliefs over trials. [sent-11, score-0.836]

6 For instance, subjects observing ambiguous or rivalrous visual displays famously report experiencing either percept alternately and exclusively; for even the most fervent Bayesian, it seems impossible simultaneously to interpret the Necker cube as potentially facing either direction [1]. [sent-14, score-0.282]

7 A longstanding laboratory model for the formation of beliefs and their update in light of experience is Pavlovian conditioning in animals, and analogously structured prediction tasks in humans. [sent-15, score-0.329]

8 The data do appear in a number of respects to reflect key features of the Bayesian ideal — specifically, that subjects represent beliefs as distributions with uncertainty and appropriately employ that uncertainty when updating them in light of new evidence. [sent-17, score-0.44]

9-10 Most notable in this respect are retrospective revaluation phenomena (e.g., [7]), which demonstrate that subjects are able to revise previously favored beliefs in a way suggesting that they had entertained alternative hypotheses all along [6]. [sent-18, score-0.239; sent-20, score-0.39]

11 Whereas subject-averaged responses exhibit smooth learning curves approaching asymptote (interpreted by Bayesian modelers as reflecting the gradual tradeoff from prior to posterior as data accumulate), individual records exhibit neither smooth learning nor steady asymptote. [sent-24, score-0.467]

12 Here we suggest that individuals’ behavior in conditioning might be understood in terms of Monte Carlo methods for sequentially sampling different hypotheses (e.g., …). [sent-27, score-0.333]

13 Through the metaphor of particle filtering, it also explains why exact Bayesian reasoning is a good account of the ensemble. [sent-31, score-0.333]

14 To make our point in the most extreme way, and to explore the most novel corner of the model space, we here develop as proof of concept the idea that (as with percepts in the Necker cube) subjects sample only a single hypothesis at a time. [sent-34, score-0.263]

15 That is, we treat them as particle filters employing only one particle. [sent-35, score-0.298]

16 We show that even given individuals of such minimal capacity, sophisticated effects like retrospective revaluation can emerge in the ensemble. [sent-36, score-0.435]

17 1 Model: Conditioning as exact filtering. In conditioning experiments, a subject (say, a dog) experiences outcomes (“reinforcers,” say, food) paired with stimuli (say, a bell). [sent-40, score-0.32]

18 That subjects learn thereby to predict outcomes on the basis of antecedent stimuli is demonstrated by the finding that they emit anticipatory behaviors (such as salivation to the bell) which are taken directly to reflect the expectation of the outcome. [sent-41, score-0.363]

19 Human experiments are analogously structured, but using various cover stories (such as disease diagnosis) and with subjects typically simply asked to state their beliefs about how much they expect the outcome. [sent-42, score-0.39]

20 A standard statistical framing for such a problem [5], which we will adopt here, is to assume that subjects are trying to learn the conditional probability P (r | x) of (real-valued) outcomes r given (vector-valued) stimuli x. [sent-43, score-0.303]

21 If we further assume the weights w can change with time, and take that change as Gaussian diffusion, P(wt+1 | wt) = N(wt, σd² I) (Equation 1), then we complete the well-known generative model for which Bayesian inference about the weights can be accomplished using the Kalman filter algorithm [5]. [sent-47, score-0.384]
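
A minimal sketch of this exact learner in code may help make the later comparison with sampling concrete. It assumes the linear-Gaussian observation model rt = wt · xt + noise implied by the text; the parameter names (sigma_d for the diffusion scale, sigma_o for the observation noise) are illustrative placeholders, not the paper's settings.

```python
import numpy as np

def kalman_step(w, Sigma, x, r, sigma_d=0.1, sigma_o=0.5):
    """One trial of exact Bayesian weight learning (Kalman filter).

    w     : posterior mean over weights, shape (D,)
    Sigma : posterior covariance, shape (D, D)
    x     : stimulus vector for this trial, shape (D,)
    r     : observed real-valued reinforcement
    """
    D = len(w)
    # Predict: weights diffuse between trials, P(w_{t+1} | w_t) = N(w_t, sigma_d^2 I)
    Sigma = Sigma + (sigma_d ** 2) * np.eye(D)
    # Update: observe r = w . x + noise, noise ~ N(0, sigma_o^2)
    S = x @ Sigma @ x + sigma_o ** 2          # predictive variance of r
    k = Sigma @ x / S                         # Kalman gain
    w = w + k * (r - w @ x)                   # mean update (a delta rule with adaptive gain)
    Sigma = Sigma - np.outer(k, x) @ Sigma
    return w, Sigma
```

Anticipatory responding on a test trial is then read out as the predictive mean w @ x (next sentence), with the covariance Sigma supplying the uncertainty that the particle-filter subjects below discard.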

22 Returning to conditioning, a subject’s anticipatory responding to test stimulus xt is taken to be proportional to her expectation about rt conditional on xt, marginalizing out uncertainty over the weights. [sent-54, score-0.482]

23 2 Conditioning as particle filtering. Here we assume instead that subjects do not maintain uncertainty in their posterior beliefs via the covariance Σt, but instead that subject L maintains a point estimate wt^L and treats it as true with certainty. [sent-57, score-0.92]

24 In particular, the mean of the sampling distribution is wt^L + xt+1 κ(rt+1 − xt+1 · wt^L), with a scalar gain κ set by the diffusion and observation variances. [sent-59, score-0.546]
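
As a hedged illustration of this update (one possible implementation, not the authors' code), the sketch below samples a new point estimate for one subject by treating the current estimate as certain and letting one step of drift plus observation noise define the posterior; the gain computed here equals xt+1 times a scalar κ of the kind mentioned above.

```python
import numpy as np

def one_particle_step(w, x, r, sigma_d=0.1, sigma_o=0.5, rng=np.random):
    """One trial for a subject-as-single-particle (no jumps): treat the current
    point estimate w as certain, then sample w_{t+1} from the resulting posterior."""
    D = len(w)
    Sigma = (sigma_d ** 2) * np.eye(D)        # only one step of drift contributes uncertainty
    S = x @ Sigma @ x + sigma_o ** 2          # predictive variance of r
    gain = Sigma @ x / S                      # equals x * kappa for a scalar kappa
    mean = w + gain * (r - w @ x)             # w_t^L + x_{t+1} kappa (r_{t+1} - x_{t+1} . w_t^L)
    cov = Sigma - np.outer(gain, x) @ Sigma
    return rng.multivariate_normal(mean, cov)
```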

25 Of course, the idea of such sampling algorithms is that one can estimate the true posterior over wt by averaging over particles. [sent-63, score-0.395]

26 These importance weights (here, the product of P(rt+1 | xt+1, wt = wt^L) over each t) serve to squelch the contribution of particles whose trajectories turn out to be conditionally more unlikely given subsequent observations. [sent-65, score-0.636]

27 If subjects were to behave in accord with this model, then this would give us some insight into the ensemble average behavior, though if computed without importance reweighting, the ensemble average will appear to learn more slowly than the true posterior. [sent-66, score-0.643]
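
A small sketch of that naive ensemble average, assuming the one_particle_step function above and a list of (x, r) trials (the variable `trials` is assumed, not defined in the source); with no importance reweighting, this averaged curve lags the exact Kalman-filter mean, as the text notes.

```python
import numpy as np

n_subjects, D = 200, 1
subjects = [np.zeros(D) for _ in range(n_subjects)]
learning_curve = []
for x, r in trials:                               # `trials`: assumed list of (stimulus, reward) pairs
    subjects = [one_particle_step(w, x, r) for w in subjects]
    # Ensemble-average prediction, computed without importance weights
    learning_curve.append(np.mean([w @ x for w in subjects]))
```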

28 3 Resampling and jumps. One reason why subjects might employ sampling is that, in generative models more interesting than the toy linear-Gaussian one used here, Bayesian reasoning is notoriously intractable. [sent-68, score-0.618]

29 This is the individual counterpart to the slowness at the ensemble level, where it can be compensated for by importance reweighting and also by resampling (for instance, standard sequential importance resampling; [12, 9]). [sent-71, score-0.856]

30 Resampling kills off conditionally unlikely particles and keeps most samples in conditionally likely parts of the space, with similar and high importance weights. [sent-72, score-0.304]

31 Since optimal reweighting and resampling both involve normalizing importance weights over the ensemble, they are not available to our subject-as-sample. [sent-73, score-0.322]

32 In particular, consider Yu and Dayan’s [13] diffusion-jump model, which replaces Equation 1 with P(wt+1 | wt) = (1 − π)N(wt, σd² I) + πN(0, σj² I) (Equation 3), with σj ≫ σd. [sent-75, score-0.228]

33 If we use Equation 3 together with the one-sample particle filtering scheme of Equation 2, then we simplify the posterior still further by not carrying over uncertainty from trial to trial, but instead only a point estimate. [sent-79, score-0.508]

34 As before, at each step, we sample from the posterior P(wt+1 | wt = wt^L, xt+1, rt+1) given total confidence in our previous estimate. [sent-80, score-0.533]

35 This distribution now has two modes, one representing the posterior given that a jump occurred, the other representing the posterior given no jump. [sent-81, score-0.296]

36 Importantly, we are more likely to infer a jump, and resample from scratch, if the observation rt+1 is far from that expected under the hypothesis of no jump, xt+1 · wt^L. [sent-82, score-0.228]

37 Specifically, the probability that no jump occurred (and that we therefore resample according to the posterior distribution given drift — effectively, the chance that the sample “survives” as it would have in the no-jump Kalman filter) is proportional to P(rt+1 | xt+1, wt = wt^L, no jump). [sent-83, score-0.501]
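
To make the two-mode update concrete, here is a hedged sketch (one possible implementation, not the authors' code) of a single subject's trial under Equation 3: it computes the posterior probability that no jump occurred from the no-jump likelihood described above, then samples the new point estimate from whichever mode was chosen. Parameter values are placeholders.

```python
import numpy as np

def one_particle_jump_step(w, x, r, pi=0.02, sigma_d=0.1, sigma_j=2.0, sigma_o=0.5,
                           rng=np.random):
    """One-particle update under the diffusion-jump prior of Equation 3."""
    D = len(w)

    def mode_posterior(prior_mean, prior_var):
        # Posterior over w_{t+1} and marginal likelihood of r under one mixture mode
        Sigma = prior_var * np.eye(D)
        S = x @ Sigma @ x + sigma_o ** 2
        lik = np.exp(-0.5 * (r - prior_mean @ x) ** 2 / S) / np.sqrt(2 * np.pi * S)
        k = Sigma @ x / S
        return lik, prior_mean + k * (r - prior_mean @ x), Sigma - np.outer(k, x) @ Sigma

    lik_stay, m_stay, C_stay = mode_posterior(w, sigma_d ** 2)            # no jump: drift around w
    lik_jump, m_jump, C_jump = mode_posterior(np.zeros(D), sigma_j ** 2)  # jump: restart from scratch

    # Probability the sample "survives" (no jump), proportional to the no-jump likelihood
    p_stay = (1 - pi) * lik_stay / ((1 - pi) * lik_stay + pi * lik_jump)
    if rng.random() < p_stay:
        return rng.multivariate_normal(m_stay, C_stay)
    return rng.multivariate_normal(m_jump, C_jump)
```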

38 This is also the factor that the trial would contribute to the importance weight in the no-jump Kalman filter model of the previous section. [sent-84, score-0.223]

39 The importance weight, in turn, is also the factor that would determine the chance that a particle would be selected during an exact resampling step [12, 9]. [sent-85, score-0.569]

40 (a) Mean over subjects reveals a smooth, slow acquisition curve (timebase is in sessions). [sent-87, score-0.343]

41 (b) Individual records are noisier, with more abrupt changes (timebase is in trials). [sent-88, score-0.221]

42 (c) Examples of fits to individual records assuming the behavior is piecewise Poisson with abrupt rate shifts. [sent-89, score-0.4]

43 Figure 2: Simple acquisition in conditioning, simulations using particle filter models. [Axis residue omitted: panels (a)–(d), x-axes in trials; panel (d) shows the “dynamic interval.”] [sent-101, score-0.535]

44 (a) Ensemble averages of the jump and no-jump (π = 0) particle filter models of conditioning, plotted against the exact Kalman filter for the same parameters (with π = 0). [sent-105, score-0.333]

45 (b) Two examples of individual subject traces for the no-jump particle filter model. [sent-106, score-0.487]

46 (c) Two examples of individual subject traces for the particle filter model incorporating jumps. [sent-107, score-0.487]

47 (d) Distribution, over individuals using the jump model, of the “dynamic interval” of acquisition, that is, the number of trials over which responding grows from negligible to near-asymptotic levels. [sent-108, score-0.447]

48 There is therefore an analogy between sampling in this model and sampling with resampling in the simpler generative model of Equation 1. [sent-109, score-0.371]

49 Of course, this cannot exactly accomplish optimal resampling, both because the chance that a particle survives should be normalized with respect to the population, and because the distribution from which a non-surviving particle resamples should also depend on the ensemble distribution. [sent-110, score-0.78]

50 We can therefore view the jumps of Equation 3 in two ways. [sent-112, score-0.251]

51 First, they could correctly model a jumpy world; by periodically resetting itself, such a world would be relatively forgiving of the tendency for particles in sequential importance sampling to turn out conditionally unlikely. [sent-113, score-0.418]

52 Alternatively, the jumps can be viewed as a fiction effectively encouraging a sort of resampling to improve the performance of low-sample particle filtering in the non-jumpy world of Equation 1. [sent-114, score-0.734]

53 3 Acquisition. In this and the following section, we illustrate the behavior of individuals and of the ensemble in some simple conditioning tasks, comparing particle filter models with and without jumps (Equations 1 and 3). [sent-116, score-1.053]

54 Figure 1 reproduces some data reanalyzed by Gallistel and colleagues [8], who quantify across a number of experiments what had long been anecdotally known about conditioning: that individual records look nothing like the averages over subjects that have been the focus of much theorizing. [sent-117, score-0.522]

55 Averaged learning curves slowly and smoothly climb toward asymptote (Figure 1a; here the anticipatory behavior measured is pecking by pigeons), just as does the estimate of the mean, ŵA, in the Kalman filter models. [sent-120, score-0.301]

56 Viewed in individual records (Figure 1b), the onset of responding is much more abrupt (often it occurred in a single trial), and the subsequent behavior much more variable. [sent-121, score-0.67]

57 One further anomaly with Bayesian models, even as accounts of the average curves, is that acquisition is absurdly slow from a normative perspective — it emerges long after subjects using reasonable priors would be highly certain to expect reward. [sent-126, score-0.455]

58 This was pointed out by Kakade and Dayan [5], who also suggested an account of why the slow acquisition might actually be normative: it can reflect priors, unaccounted for in the standard analysis, that are induced by pretraining procedures known as hopper training. [sent-127, score-0.225]

59 Figure 2 illustrates individual and group behavior for the two particle filter models. [sent-129, score-0.477]

60 As expected, at the ensemble level (Figure 2a), particle filtering without jumps learns slowly, when averaged without importance weighting or resampling and compared to the optimal Kalman filter for the same parameters. [sent-130, score-0.928]

61 As shown, the inclusion of jumps can speed this up. [sent-131, score-0.251]

62 In individual traces using the jumps model (Figure 2c), frequent sampled jumps, both at and after the acquisition of responding, capture the key qualitative features of the individual records: the abrupt onset and ongoing instability. [sent-132, score-1.175]

63 The inclusion of jumps in the generative model is key to this account: as shown in Figure 2b, without these, behavior changes more smoothly. [sent-133, score-0.376]

64 In the jump model, when a jump is sampled, the posterior distribution conditional on the jump having occurred is centered near the observed rt , meaning that the sampled weight will most likely arrive immediately near its asymptotic level. [sent-134, score-0.693]

65 Figure 2d shows that such an abrupt onset of responding is the modal behavior of individuals. [sent-135, score-0.381]

66 Here (after [8]), we have fit each individual run from the jump-model simulations with a sigmoidal Weibull function, and defined the “dynamic interval” over which acquisition occurs as the number of trials during which this fit function rises from 10% to 90% of its asymptotic level. [sent-136, score-0.32]
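
As an illustration of that measure, here is a sketch under the assumption that the Weibull rise is parameterized as A(1 − exp(−(t/λ)^k)); the exact form and fitting procedure used in [8] may differ. The 10%-to-90% dynamic interval follows from the fitted parameters in closed form.

```python
import numpy as np
from scipy.optimize import curve_fit

def weibull_rise(t, A, lam, k):
    # Sigmoidal Weibull climb toward asymptote A
    return A * (1.0 - np.exp(-(t / lam) ** k))

def dynamic_interval(trial_index, responding):
    """Fit one simulated run and return the number of trials over which the
    fitted curve rises from 10% to 90% of its asymptote."""
    (A, lam, k), _ = curve_fit(weibull_rise, trial_index, responding,
                               p0=[max(responding), len(trial_index) / 2.0, 2.0],
                               maxfev=10000)
    t10 = lam * (-np.log(1 - 0.10)) ** (1.0 / k)
    t90 = lam * (-np.log(1 - 0.90)) ** (1.0 / k)
    return t90 - t10
```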

67 These simulations demonstrate, first, how sequential sampling using a very low number of samples is a good model of the puzzling features of individual behavior in acquisition, and at the same time clarify why subject-averaged records resemble the results of exact inference. [sent-139, score-0.488]

68 Depending on the presumed frequency of jumps (which help to compensate for this problem) the fact that these averages are of course computed without importance weighting may also help to explain the apparent slowness of acquisition. [sent-140, score-0.419]

69 4 Retrospective revaluation. So far, we have shown that sequential sampling provides a good qualitative characterization of individual behavior in the simplest conditioning experiments. [sent-142, score-0.61]

70 These tasks give the best indication that subjects maintain something more than a point estimate of the weights, and instead strongly suggest that they maintain a full joint distribution over them. [sent-144, score-0.228]

71 However, as we will show here, this effect can actually emerge due to covariance information being implicitly represented in the ensemble of beliefs over subjects, even if all the individuals are one-particle samplers. [sent-145, score-0.456]

72 [Figure 3 axis residue: panels show weight B against weight A after AB+ and after B+ training, and average P(r) over trials for the AB+→B+ design.] [sent-147, score-0.247]

73 Figure 3: Simulations of backward blocking effect, using exact Kalman filter (a) and particle filter model with jumps (b). [sent-148, score-1.017]

74 For the particle filter, these are derived from the histogram of individual particles’ joint point beliefs about the weights. [sent-150, score-0.563]

75 Right: Mean beliefs about wA and wB , showing development of backward blocking. [sent-151, score-0.275]

76 A typical task, called backward blocking [7], has two phases. [sent-154, score-0.278]

77 The typical finding is that responding to A is attenuated; the intuition is that the B+ trials suggested that B alone was responsible for the reward received in the AB+ trials, so the association of A with reward is retrospectively discounted. [sent-157, score-0.304]

78 Such retrospective revaluation phenomena are hard to demonstrate in animals (though see [15]) but robust in humans [7]. [sent-158, score-0.28]

79 Contrary to this intuition, Figure 3b demonstrates the same thing in the particle filter model with jumps. [sent-166, score-0.298]

80 At the end of AB+ training, the subjects as an ensemble represent the anti-correlated joint distribution over the weights, even though each individual maintains only a particular point belief. [sent-167, score-0.474]

81 Moreover, B+ training causes an aggregate backward blocking effect. [sent-168, score-0.278]

82 This is because individuals who believe that wA is high tend also to believe that wB is low, which makes them most likely to sample that a jump has occurred during subsequent B+ training. [sent-169, score-0.351]

83 The samples most likely to stay in place already have wA low and wB high; beliefs about wA are, on average, thereby reduced, producing the backward blocking effect in the ensemble. [sent-170, score-0.44]
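
A sketch of that ensemble-level simulation, reusing the one_particle_jump_step function sketched earlier; the trial counts, stimulus coding, and parameter values are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def backward_blocking_ensemble(n_subjects=500, n_trials=100, seed=0):
    """AB+ phase (x = [1, 1], r = 1) followed by B+ phase (x = [0, 1], r = 1).
    Returns the ensemble-mean belief about w_A after each phase; the drop during
    the B+ phase is the aggregate backward blocking effect."""
    rng = np.random.default_rng(seed)
    subjects = [np.zeros(2) for _ in range(n_subjects)]

    def run_phase(x):
        nonlocal subjects
        for _ in range(n_trials):
            subjects = [one_particle_jump_step(w, x, 1.0, rng=rng) for w in subjects]
        return np.mean(subjects, axis=0)

    w_after_AB = run_phase(np.array([1.0, 1.0]))   # after AB+ training
    w_after_B = run_phase(np.array([0.0, 1.0]))    # after B+ training
    return w_after_AB[0], w_after_B[0]             # mean belief about w_A
```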

84 Note that this effect depends on the subjects sampling using a generative model that admits of jumps (Equation 3). [sent-171, score-0.618]

85 5 Discussion. We have suggested that individual subjects in conditioning experiments behave as though they are sequentially sampling hypotheses about the underlying weights: like particle filters using a single sample. [sent-174, score-0.921]

86 It also complements a recent model of human categorization learning [10], which used particle filters to sample (sparsely or even with a single sample) over possible clusterings of stimuli. [sent-179, score-0.298]

87 That work concentrated on trial ordering effects arising from the sparsely represented posterior (see also [16]); here we concentrate on a different set of phenomena related to individual versus ensemble behavior. [sent-180, score-0.406]

88 Gallistel and colleagues’ [8] demonstration that individual learning curves exhibit none of the features of the ensemble average curves that had previously been modeled poses rather a serious challenge for theorists: After all, what does it mean to model only the ensemble? [sent-181, score-0.328]

89 Surely the individual subject is the appropriate focus of theory — particularly given the evolutionary rationale often advanced for Bayesian modeling, that individuals who behave rationally will have higher fitness. [sent-182, score-0.295]

90 (At the group level, there may also be a fitness advantage to spreading different beliefs — say, about productive foraging locations — across subjects rather than having the entire population gravitate toward the “best” belief.) [sent-184, score-0.429]

91 Previous models fail to predict any intersubject variability because they incorporate no variation in either the subjects' beliefs or in their responses given their beliefs. [sent-186, score-0.198]

92 Similarly, nonlinearity in the performance function relating beliefs to response rates might help to account for the sudden onset of responding even if learning is smooth, but would not address the other features of the data. [sent-190, score-0.395]

93 In addition to addressing the empirical problem of fit to the individual, sampling also answers an additional problem with Bayesian models: that they attribute to subjects the capacity for radically intractable calculations. [sent-191, score-0.318]

94 While the simple Kalman filter used here is tractable, there has been a trend in modeling human and animal learning toward assuming subjects perform inference about model structure (e.g., …). [sent-192, score-0.267]

95 While in our model, subjects do not explicitly carry uncertainty about their beliefs from trial to trial, they do maintain hyperparameters (controlling the speed of diffusion, the noise of observations, and the probability of jumps) that serve as a sort of constant proxy for uncertainty. [sent-198, score-0.566]

96 In particular, it would be useful to develop less extreme models in which subjects rely either on sampling with more particles or on some combination of sampling and exact inference. [sent-201, score-0.478]

97 We posit that many of the insights developed here will extend to such models, which seem more realistic since exclusive use of low-sample particle filtering would be extremely brittle and unreliable. [sent-202, score-0.298]

98 The present results on backward blocking stress again the perils of averaging and suggest that data must be analyzed much more delicately if they are ever to bear on issues of distributions and uncertainty. [sent-205, score-0.278]

99 In the case of backward blocking, if our account is correct, there should be a correlation, over individuals, between the degree to which they initially exhibited a low wB and the degree to which they subsequently exhibited a backward blocking effect. [sent-206, score-0.391]

100 Biological significance in forward and backward blocking: Resolution of a discrepancy between animal conditioning and human causal judgment. [sent-256, score-0.28]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('particle', 0.298), ('jumps', 0.251), ('wt', 0.228), ('subjects', 0.228), ('kalman', 0.195), ('conditioning', 0.167), ('wb', 0.166), ('blocking', 0.165), ('beliefs', 0.162), ('filter', 0.161), ('ensemble', 0.143), ('resampling', 0.142), ('jump', 0.142), ('abrupt', 0.126), ('revaluation', 0.124), ('responding', 0.124), ('individuals', 0.118), ('retrospective', 0.115), ('acquisition', 0.115), ('backward', 0.113), ('wa', 0.11), ('gallistel', 0.103), ('ab', 0.103), ('individual', 0.103), ('daw', 0.099), ('records', 0.095), ('importance', 0.094), ('sampling', 0.09), ('rt', 0.09), ('trial', 0.083), ('asymptote', 0.083), ('psychol', 0.082), ('xt', 0.078), ('posterior', 0.077), ('particles', 0.076), ('behavior', 0.076), ('dayan', 0.073), ('kakade', 0.072), ('filtering', 0.071), ('bayesian', 0.071), ('diffusion', 0.069), ('conditionally', 0.067), ('courville', 0.066), ('trials', 0.063), ('anticipatory', 0.062), ('necker', 0.062), ('colleagues', 0.058), ('onset', 0.055), ('sudden', 0.054), ('cube', 0.054), ('pavlovian', 0.054), ('occurred', 0.054), ('reweighting', 0.051), ('uncertainty', 0.05), ('sequential', 0.05), ('generative', 0.049), ('traces', 0.047), ('weight', 0.046), ('sophisticated', 0.045), ('sort', 0.043), ('equation', 0.042), ('balsam', 0.041), ('fairhurst', 0.041), ('forgiving', 0.041), ('hopper', 0.041), ('pigeon', 0.041), ('survives', 0.041), ('theorists', 0.041), ('timebase', 0.041), ('animals', 0.041), ('curves', 0.041), ('paired', 0.04), ('reward', 0.04), ('ac', 0.04), ('simulations', 0.039), ('subject', 0.039), ('toward', 0.039), ('stimuli', 0.039), ('averages', 0.038), ('emerges', 0.038), ('subsequent', 0.037), ('accomplished', 0.037), ('alone', 0.037), ('framing', 0.036), ('slowness', 0.036), ('goodness', 0.036), ('rescorla', 0.036), ('intersubject', 0.036), ('pretraining', 0.036), ('yu', 0.036), ('behave', 0.035), ('weights', 0.035), ('extreme', 0.035), ('exact', 0.035), ('filters', 0.034), ('behaviors', 0.034), ('smooth', 0.034), ('normative', 0.033), ('emerge', 0.033), ('accumulate', 0.033)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000004 203 nips-2007-The rat as particle filter

Author: Aaron C. Courville, Nathaniel D. Daw

Abstract: Although theorists have interpreted classical conditioning as a laboratory model of Bayesian belief updating, a recent reanalysis showed that the key features that theoretical models capture about learning are artifacts of averaging over subjects. Rather than learning smoothly to asymptote (reflecting, according to Bayesian models, the gradual tradeoff from prior to posterior as data accumulate), subjects learn suddenly and their predictions fluctuate perpetually. We suggest that abrupt and unstable learning can be modeled by assuming subjects are conducting inference using sequential Monte Carlo sampling with a small number of samples — one, in our simulations. Ensemble behavior resembles exact Bayesian models since, as in particle filters, it averages over many samples. Further, the model is capable of exhibiting sophisticated behaviors like retrospective revaluation at the ensemble level, even given minimally sophisticated individuals that do not track uncertainty in their beliefs over trials. 1

2 0.12859844 11 nips-2007-A Risk Minimization Principle for a Class of Parzen Estimators

Author: Kristiaan Pelckmans, Johan Suykens, Bart D. Moor

Abstract: This paper1 explores the use of a Maximal Average Margin (MAM) optimality principle for the design of learning algorithms. It is shown that the application of this risk minimization principle results in a class of (computationally) simple learning machines similar to the classical Parzen window classifier. A direct relation with the Rademacher complexities is established, as such facilitating analysis and providing a notion of certainty of prediction. This analysis is related to Support Vector Machines by means of a margin transformation. The power of the MAM principle is illustrated further by application to ordinal regression tasks, resulting in an O(n) algorithm able to process large datasets in reasonable time. 1

3 0.11040587 3 nips-2007-A Bayesian Model of Conditioned Perception

Author: Alan Stocker, Eero P. Simoncelli

Abstract: unkown-abstract

4 0.1067856 125 nips-2007-Markov Chain Monte Carlo with People

Author: Adam Sanborn, Thomas L. Griffiths

Abstract: Many formal models of cognition implicitly use subjective probability distributions to capture the assumptions of human learners. Most applications of these models determine these distributions indirectly. We propose a method for directly determining the assumptions of human learners by sampling from subjective probability distributions. Using a correspondence between a model of human choice and Markov chain Monte Carlo (MCMC), we describe a method for sampling from the distributions over objects that people associate with different categories. In our task, subjects choose whether to accept or reject a proposed change to an object. The task is constructed so that these decisions follow an MCMC acceptance rule, defining a Markov chain for which the stationary distribution is the category distribution. We test this procedure for both artificial categories acquired in the laboratory, and natural categories acquired from experience. 1

5 0.10624608 59 nips-2007-Continuous Time Particle Filtering for fMRI

Author: Lawrence Murray, Amos J. Storkey

Abstract: We construct a biologically motivated stochastic differential model of the neural and hemodynamic activity underlying the observed Blood Oxygen Level Dependent (BOLD) signal in Functional Magnetic Resonance Imaging (fMRI). The model poses a difficult parameter estimation problem, both theoretically due to the nonlinearity and divergence of the differential system, and computationally due to its time and space complexity. We adapt a particle filter and smoother to the task, and discuss some of the practical approaches used to tackle the difficulties, including use of sparse matrices and parallelisation. Results demonstrate the tractability of the approach in its application to an effective connectivity study. 1

6 0.10502413 213 nips-2007-Variational Inference for Diffusion Processes

7 0.098791413 34 nips-2007-Bayesian Policy Learning with Trans-Dimensional MCMC

8 0.098177105 17 nips-2007-A neural network implementing optimal state estimation based on dynamic spike train decoding

9 0.088465847 40 nips-2007-Bundle Methods for Machine Learning

10 0.082253791 33 nips-2007-Bayesian Inference for Spiking Neuron Models with a Sparsity Prior

11 0.08115451 153 nips-2007-People Tracking with the Laplacian Eigenmaps Latent Variable Model

12 0.072755858 114 nips-2007-Learning and using relational theories

13 0.067154661 173 nips-2007-Second Order Bilinear Discriminant Analysis for single trial EEG analysis

14 0.066992946 74 nips-2007-EEG-Based Brain-Computer Interaction: Improved Accuracy by Automatic Single-Trial Error Detection

15 0.06664566 168 nips-2007-Reinforcement Learning in Continuous Action Spaces through Sequential Monte Carlo Methods

16 0.066542618 214 nips-2007-Variational inference for Markov jump processes

17 0.063925698 140 nips-2007-Neural characterization in partially observed populations of spiking neurons

18 0.063624844 100 nips-2007-Hippocampal Contributions to Control: The Third Way

19 0.063520186 145 nips-2007-On Sparsity and Overcompleteness in Image Models

20 0.063368306 148 nips-2007-Online Linear Regression and Its Application to Model-Based Reinforcement Learning


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, -0.212), (1, -0.031), (2, 0.074), (3, -0.031), (4, -0.027), (5, 0.1), (6, 0.003), (7, 0.005), (8, -0.094), (9, -0.151), (10, 0.02), (11, -0.028), (12, -0.045), (13, -0.001), (14, 0.084), (15, 0.032), (16, -0.053), (17, 0.035), (18, 0.108), (19, -0.038), (20, 0.099), (21, 0.013), (22, -0.017), (23, 0.012), (24, 0.004), (25, 0.007), (26, -0.193), (27, -0.008), (28, 0.103), (29, -0.028), (30, 0.069), (31, -0.024), (32, -0.137), (33, -0.102), (34, 0.155), (35, 0.072), (36, 0.171), (37, 0.038), (38, 0.02), (39, -0.104), (40, 0.17), (41, 0.094), (42, -0.008), (43, -0.003), (44, 0.127), (45, 0.121), (46, -0.057), (47, -0.001), (48, 0.053), (49, 0.062)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.9649654 203 nips-2007-The rat as particle filter

Author: Aaron C. Courville, Nathaniel D. Daw

Abstract: Although theorists have interpreted classical conditioning as a laboratory model of Bayesian belief updating, a recent reanalysis showed that the key features that theoretical models capture about learning are artifacts of averaging over subjects. Rather than learning smoothly to asymptote (reflecting, according to Bayesian models, the gradual tradeoff from prior to posterior as data accumulate), subjects learn suddenly and their predictions fluctuate perpetually. We suggest that abrupt and unstable learning can be modeled by assuming subjects are conducting inference using sequential Monte Carlo sampling with a small number of samples — one, in our simulations. Ensemble behavior resembles exact Bayesian models since, as in particle filters, it averages over many samples. Further, the model is capable of exhibiting sophisticated behaviors like retrospective revaluation at the ensemble level, even given minimally sophisticated individuals that do not track uncertainty in their beliefs over trials. 1

2 0.70486599 3 nips-2007-A Bayesian Model of Conditioned Perception

Author: Alan Stocker, Eero P. Simoncelli

Abstract: unkown-abstract

3 0.57795286 125 nips-2007-Markov Chain Monte Carlo with People

Author: Adam Sanborn, Thomas L. Griffiths

Abstract: Many formal models of cognition implicitly use subjective probability distributions to capture the assumptions of human learners. Most applications of these models determine these distributions indirectly. We propose a method for directly determining the assumptions of human learners by sampling from subjective probability distributions. Using a correspondence between a model of human choice and Markov chain Monte Carlo (MCMC), we describe a method for sampling from the distributions over objects that people associate with different categories. In our task, subjects choose whether to accept or reject a proposed change to an object. The task is constructed so that these decisions follow an MCMC acceptance rule, defining a Markov chain for which the stationary distribution is the category distribution. We test this procedure for both artificial categories acquired in the laboratory, and natural categories acquired from experience. 1

4 0.47175583 150 nips-2007-Optimal models of sound localization by barn owls

Author: Brian J. Fischer

Abstract: Sound localization by barn owls is commonly modeled as a matching procedure where localization cues derived from auditory inputs are compared to stored templates. While the matching models can explain properties of neural responses, no model explains how the owl resolves spatial ambiguity in the localization cues to produce accurate localization for sources near the center of gaze. Here, I examine two models for the barn owl’s sound localization behavior. First, I consider a maximum likelihood estimator in order to further evaluate the cue matching model. Second, I consider a maximum a posteriori estimator to test whether a Bayesian model with a prior that emphasizes directions near the center of gaze can reproduce the owl’s localization behavior. I show that the maximum likelihood estimator can not reproduce the owl’s behavior, while the maximum a posteriori estimator is able to match the behavior. This result suggests that the standard cue matching model will not be sufficient to explain sound localization behavior in the barn owl. The Bayesian model provides a new framework for analyzing sound localization in the barn owl and leads to predictions about the owl’s localization behavior.

5 0.46761724 153 nips-2007-People Tracking with the Laplacian Eigenmaps Latent Variable Model

Author: Zhengdong Lu, Cristian Sminchisescu, Miguel Á. Carreira-Perpiñán

Abstract: Reliably recovering 3D human pose from monocular video requires models that bias the estimates towards typical human poses and motions. We construct priors for people tracking using the Laplacian Eigenmaps Latent Variable Model (LELVM). LELVM is a recently introduced probabilistic dimensionality reduction model that combines the advantages of latent variable models—a multimodal probability density for latent and observed variables, and globally differentiable nonlinear mappings for reconstruction and dimensionality reduction—with those of spectral manifold learning methods—no local optima, ability to unfold highly nonlinear manifolds, and good practical scaling to latent spaces of high dimension. LELVM is computationally efficient, simple to learn from sparse training data, and compatible with standard probabilistic trackers such as particle filters. We analyze the performance of a LELVM-based probabilistic sigma point mixture tracker in several real and synthetic human motion sequences and demonstrate that LELVM not only provides sufficient constraints for robust operation in the presence of missing, noisy and ambiguous image measurements, but also compares favorably with alternative trackers based on PCA or GPLVM priors. Recent research in reconstructing articulated human motion has focused on methods that can exploit available prior knowledge on typical human poses or motions in an attempt to build more reliable algorithms. The high-dimensionality of human ambient pose space—between 30-60 joint angles or joint positions depending on the desired accuracy level, makes exhaustive search prohibitively expensive. This has negative impact on existing trackers, which are often not sufficiently reliable at reconstructing human-like poses, self-initializing or recovering from failure. Such difficulties have stimulated research in algorithms and models that reduce the effective working space, either using generic search focusing methods (annealing, state space decomposition, covariance scaling) or by exploiting specific problem structure (e.g. kinematic jumps). Experience with these procedures has nevertheless shown that any search strategy, no matter how effective, can be made significantly more reliable if restricted to low-dimensional state spaces. This permits a more thorough exploration of the typical solution space, for a given, comparatively similar computational effort as a high-dimensional method. The argument correlates well with the belief that the human pose space, although high-dimensional in its natural ambient parameterization, has a significantly lower perceptual (latent or intrinsic) dimensionality, at least in a practical sense—many poses that are possible are so improbable in many real-world situations that it pays off to encode them with low accuracy. A perceptual representation has to be powerful enough to capture the diversity of human poses in a sufficiently broad domain of applicability (the task domain), yet compact and analytically tractable for search and optimization. This justifies the use of models that are nonlinear and low-dimensional (able to unfold highly nonlinear manifolds with low distortion), yet probabilistically motivated and globally continuous for efficient optimization. Reducing dimensionality is not the only goal: perceptual representations have to preserve critical properties of the ambient space. Reliable tracking needs locality: nearby regions in ambient space have to be mapped to nearby regions in latent space. 
If this does not hold, the tracker is forced to make unrealistically large, and difficult to predict jumps in latent space in order to follow smooth trajectories in the joint angle ambient space. 1 In this paper we propose to model priors for articulated motion using a recently introduced probabilistic dimensionality reduction method, the Laplacian Eigenmaps Latent Variable Model (LELVM) [1]. Section 1 discusses the requirements of priors for articulated motion in the context of probabilistic and spectral methods for manifold learning, and section 2 describes LELVM and shows how it combines both types of methods in a principled way. Section 3 describes our tracking framework (using a particle filter) and section 4 shows experiments with synthetic and real human motion sequences using LELVM priors learned from motion-capture data. Related work: There is significant work in human tracking, using both generative and discriminative methods. Due to space limitations, we will focus on the more restricted class of 3D generative algorithms based on learned state priors, and not aim at a full literature review. Deriving compact prior representations for tracking people or other articulated objects is an active research field, steadily growing with the increased availability of human motion capture data. Howe et al. and Sidenbladh et al. [2] propose Gaussian mixture representations of short human motion fragments (snippets) and integrate them in a Bayesian MAP estimation framework that uses 2D human joint measurements, independently tracked by scaled prismatic models [3]. Brand [4] models the human pose manifold using a Gaussian mixture and uses an HMM to infer the mixture component index based on a temporal sequence of human silhouettes. Sidenbladh et al. [5] use similar dynamic priors and exploit ideas in texture synthesis—efficient nearest-neighbor search for similar motion fragments at runtime—in order to build a particle-filter tracker with observation model based on contour and image intensity measurements. Sminchisescu and Jepson [6] propose a low-dimensional probabilistic model based on fitting a parametric reconstruction mapping (sparse radial basis function) and a parametric latent density (Gaussian mixture) to the embedding produced with a spectral method. They track humans walking and involved in conversations using a Bayesian multiple hypotheses framework that fuses contour and intensity measurements. Urtasun et al. [7] use a dynamic MAP estimation framework based on a GPLVM and 2D human joint correspondences obtained from an independent image-based tracker. Li et al. [8] use a coordinated mixture of factor analyzers within a particle filtering framework, in order to reconstruct human motion in multiple views using chamfer matching to score different configuration. Wang et al. [9] learn a latent space with associated dynamics where both the dynamics and observation mapping are Gaussian processes, and Urtasun et al. [10] use it for tracking. Taylor et al. [11] also learn a binary latent space with dynamics (using an energy-based model) but apply it to synthesis, not tracking. Our work learns a static, generative low-dimensional model of poses and integrates it into a particle filter for tracking. We show its ability to work with real or partially missing data and to track multiple activities. 1 Priors for articulated human pose We consider the problem of learning a probabilistic low-dimensional model of human articulated motion. 
Call y ∈ RD the representation in ambient space of the articulated pose of a person. In this paper, y contains the 3D locations of anywhere between 10 and 60 markers located on the person’s joints (other representations such as joint angles are also possible). The values of y have been normalised for translation and rotation in order to remove rigid motion and leave only the articulated motion (see section 3 for how we track the rigid motion). While y is high-dimensional, the motion pattern lives in a low-dimensional manifold because most values of y yield poses that violate body constraints or are simply atypical for the motion type considered. Thus we want to model y in terms of a small number of latent variables x given a collection of poses {yn }N (recorded from a human n=1 with motion-capture technology). The model should satisfy the following: (1) It should define a probability density for x and y, to be able to deal with noise (in the image or marker measurements) and uncertainty (from missing data due to occlusion or markers that drop), and to allow integration in a sequential Bayesian estimation framework. The density model should also be flexible enough to represent multimodal densities. (2) It should define mappings for dimensionality reduction F : y → x and reconstruction f : x → y that apply to any value of x and y (not just those in the training set); and such mappings should be defined on a global coordinate system, be continuous (to avoid physically impossible discontinuities) and differentiable (to allow efficient optimisation when tracking), yet flexible enough to represent the highly nonlinear manifold of articulated poses. From a statistical machine learning point of view, this is precisely what latent variable models (LVMs) do; for example, factor analysis defines linear mappings and Gaussian densities, while the generative topographic mapping (GTM; [12]) defines nonlinear mappings and a Gaussian-mixture density in ambient space. However, factor analysis is too limited to be of practical use, and GTM— 2 while flexible—has two important practical problems: (1) the latent space must be discretised to allow tractable learning and inference, which limits it to very low (2–3) latent dimensions; (2) the parameter estimation is prone to bad local optima that result in highly distorted mappings. Another dimensionality reduction method recently introduced, GPLVM [13], which uses a Gaussian process mapping f (x), partly improves this situation by defining a tunable parameter xn for each data point yn . While still prone to local optima, this allows the use of a better initialisation for {xn }N (obtained from a spectral method, see later). This has prompted the application of n=1 GPLVM for tracking human motion [7]. However, GPLVM has some disadvantages: its training is very costly (each step of the gradient iteration is cubic on the number of training points N , though approximations based on using few points exist); unlike true LVMs, it defines neither a posterior distribution p(x|y) in latent space nor a dimensionality reduction mapping E {x|y}; and the latent representation it obtains is not ideal. For example, for periodic motions such as running or walking, repeated periods (identical up to small noise) can be mapped apart from each other in latent space because nothing constrains xn and xm to be close even when yn = ym (see fig. 3 and [10]). 
There exists a different type of dimensionality reduction methods, spectral methods (such as Isomap, LLE or Laplacian eigenmaps [14]), that have advantages and disadvantages complementary to those of LVMs. They define neither mappings nor densities but just a correspondence (xn , yn ) between points in latent space xn and ambient space yn . However, the training is efficient (a sparse eigenvalue problem) and has no local optima, and often yields a correspondence that successfully models highly nonlinear, convoluted manifolds such as the Swiss roll. While these attractive properties have spurred recent research in spectral methods, their lack of mappings and densities has limited their applicability in people tracking. However, a new model that combines the advantages of LVMs and spectral methods in a principled way has been recently proposed [1], which we briefly describe next. 2 The Laplacian Eigenmaps Latent Variable Model (LELVM) LELVM is based on a natural way of defining an out-of-sample mapping for Laplacian eigenmaps (LE) which, in addition, results in a density model. In LE, typically we first define a k-nearestneighbour graph on the sample data {yn }N and weigh each edge yn ∼ ym by a Gaussian affinity n=1 2 1 function K(yn , ym ) = wnm = exp (− 2 (yn − ym )/σ ). Then the latent points X result from: min tr XLX⊤ s.t. X ∈ RL×N , XDX⊤ = I, XD1 = 0 (1) where we define the matrix XL×N = (x1 , . . . , xN ), the symmetric affinity matrix WN ×N , the deN gree matrix D = diag ( n=1 wnm ), the graph Laplacian matrix L = D−W, and 1 = (1, . . . , 1)⊤ . The constraints eliminate the two trivial solutions X = 0 (by fixing an arbitrary scale) and x1 = · · · = xN (by removing 1, which is an eigenvector of L associated with a zero eigenvalue). The solution is given by the leading u2 , . . . , uL+1 eigenvectors of the normalised affinity matrix 1 1 1 N = D− 2 WD− 2 , namely X = V⊤ D− 2 with VN ×L = (v2 , . . . , vL+1 ) (an a posteriori translated, rotated or uniformly scaled X is equally valid). Following [1], we now define an out-of-sample mapping F(y) = x for a new point y as a semisupervised learning problem, by recomputing the embedding as in (1) (i.e., augmenting the graph Laplacian with the new point), but keeping the old embedding fixed: L K(y) X⊤ min tr ( X x ) K(y)⊤ 1⊤ K(y) (2) x⊤ x∈RL 2 where Kn (y) = K(y, yn ) = exp (− 1 (y − yn )/σ ) for n = 1, . . . , N is the kernel induced by 2 the Gaussian affinity (applied only to the k nearest neighbours of y, i.e., Kn (y) = 0 if y ≁ yn ). This is one natural way of adding a new point to the embedding by keeping existing embedded points fixed. We need not use the constraints from (1) because they would trivially determine x, and the uninteresting solutions X = 0 and X = constant were already removed in the old embedding anyway. The solution yields an out-of-sample dimensionality reduction mapping x = F(y): x = F(y) = X K(y) 1⊤ K(y) N K(y,yn ) PN x n=1 K(y,yn′ ) n ′ = (3) n =1 applicable to any point y (new or old). This mapping is formally identical to a Nadaraya-Watson estimator (kernel regression; [15]) using as data {(xn , yn )}N and the kernel K. We can take this n=1 a step further by defining a LVM that has as joint distribution a kernel density estimate (KDE): p(x, y) = 1 N N n=1 Ky (y, yn )Kx (x, xn ) p(y) = 3 1 N N n=1 Ky (y, yn ) p(x) = 1 N N n=1 Kx (x, xn ) where Ky is proportional to K so it integrates to 1, and Kx is a pdf kernel in x–space. 
Consequently, the marginals in observed and latent space are also KDEs, and the dimensionality reduction and reconstruction mappings are given by kernel regression (the conditional means E {y|x}, E {x|y}): F(y) = N n=1 p(n|y)xn f (x) = N K (x,xn ) PN x y n=1 Kx (x,xn′ ) n ′ = n =1 N n=1 p(n|x)yn . (4) We allow the bandwidths to be different in the latent and ambient spaces: 2 2 1 Kx (x, xn ) ∝ exp (− 1 (x − xn )/σx ) and Ky (y, yn ) ∝ exp (− 2 (y − yn )/σy ). They 2 may be tuned to control the smoothness of the mappings and densities [1]. Thus, LELVM naturally extends a LE embedding (efficiently obtained as a sparse eigenvalue problem with a cost O(N 2 )) to global, continuous, differentiable mappings (NW estimators) and potentially multimodal densities having the form of a Gaussian KDE. This allows easy computation of posterior probabilities such as p(x|y) (unlike GPLVM). It can use a continuous latent space of arbitrary dimension L (unlike GTM) by simply choosing L eigenvectors in the LE embedding. It has no local optima since it is based on the LE embedding. LELVM can learn convoluted mappings (e.g. the Swiss roll) and define maps and densities for them [1]. The only parameters to set are the graph parameters (number of neighbours k, affinity width σ) and the smoothing bandwidths σx , σy . 3 Tracking framework We follow the sequential Bayesian estimation framework, where for state variables s and observation variables z we have the recursive prediction and correction equations: p(st |z0:t−1 ) = p(st |st−1 ) p(st−1 |z0:t−1 ) dst−1 p(st |z0:t ) ∝ p(zt |st ) p(st |z0:t−1 ). (5) L We define the state variables as s = (x, d) where x ∈ R is the low-dim. latent space (for pose) and d ∈ R3 is the centre-of-mass location of the body (in the experiments our state also includes the orientation of the body, but for simplicity here we describe only the translation). The observed variables z consist of image features or the perspective projection of the markers on the camera plane. The mapping from state to observations is (for the markers’ case, assuming M markers): P f x ∈ RL − − → y ∈ R3M −→ ⊕ − − − z ∈ R2M −− − − −→ d ∈ R3 (6) where f is the LELVM reconstruction mapping (learnt from mocap data); ⊕ shifts each 3D marker by d; and P is the perspective projection (pinhole camera), applied to each 3D point separately. Here we use a simple observation model p(zt |st ): Gaussian with mean given by the transformation (6) and isotropic covariance (set by the user to control the influence of measurements in the tracking). We assume known correspondences and observations that are obtained either from the 3D markers (for tracking synthetic data) or 2D tracks obtained from a 2D tracker. Our dynamics model is p(st |st−1 ) ∝ pd (dt |dt−1 ) px (xt |xt−1 ) p(xt ) (7) where both dynamics models for d and x are random walks: Gaussians centred at the previous step value dt−1 and xt−1 , respectively, with isotropic covariance (set by the user to control the influence of dynamics in the tracking); and p(xt ) is the LELVM prior. Thus the overall dynamics predicts states that are both near the previous state and yield feasible poses. Of course, more complex dynamics models could be used if e.g. the speed and direction of movement are known. As tracker we use the Gaussian mixture Sigma-point particle filter (GMSPPF) [16]. This is a particle filter that uses a Gaussian mixture representation for the posterior distribution in state space and updates it with a Sigma-point Kalman filter. 
This Gaussian mixture will be used as proposal distribution to draw the particles. As in other particle filter implementations, the prediction step is carried out by approximating the integral (5) with particles and updating the particles’ weights. Then, a new Gaussian mixture is fitted with a weighted EM algorithm to these particles. This replaces the resampling stage needed by many particle filters and mitigates the problem of sample depletion while also preventing the number of components in the Gaussian mixture from growing over time. The choice of this particular tracker is not critical; we use it to illustrate the fact that LELVM can be introduced in any probabilistic tracker for nonlinear, nongaussian models. Given the corrected distribution p(st |z0:t ), we choose its mean as recovered state (pose and location). It is also possible to choose instead the mode closest to the state at t − 1, which could be found by mean-shift or Newton algorithms [17] since we are using a Gaussian-mixture representation in state space. 4 4 Experiments We demonstrate our low-dimensional tracker on image sequences of people walking and running, both synthetic (fig. 1) and real (fig. 2–3). Fig. 1 shows the model copes well with persistent partial occlusion and severely subsampled training data (A,B), and quantitatively evaluates temporal reconstruction (C). For all our experiments, the LELVM parameters (number of neighbors k, Gaussian affinity σ, and bandwidths σx and σy ) were set manually. We mainly considered 2D latent spaces (for pose, plus 6D for rigid motion), which were expressive enough for our experiments. More complex, higher-dimensional models are straightforward to construct. The initial state distribution p(s0 ) was chosen a broad Gaussian, the dynamics and observation covariance were set manually to control the tracking smoothness, and the GMSPPF tracker used a 5-component Gaussian mixture in latent space (and in the state space of rigid motion) and a small set of 500 particles. The 3D representation we use is a 102-D vector obtained by concatenating the 3D markers coordinates of all the body joints. These would be highly unconstrained if estimated independently, but we only use them as intermediate representation; tracking actually occurs in the latent space, tightly controlled using the LELVM prior. For the synthetic experiments and some of the real experiments (figs. 2–3) the camera parameters and the body proportions were known (for the latter, we used the 2D outputs of [6]). For the CMU mocap video (fig. 2B) we roughly guessed. We used mocap data from several sources (CMU, OSU). As observations we always use 2D marker positions, which, depending on the analyzed sequence were either known (the synthetic case), or provided by an existing tracker [6] or specified manually (fig. 2B). Alternatively 2D point trackers similar to the ones of [7] can be used. The forward generative model is obtained by combining the latent to ambient space mapping (this provides the position of the 3D markers) with a perspective projection transformation. The observation model is a product of Gaussians, each measuring the probability of a particular marker position given its corresponding image point track. 
Experiments with synthetic data: we analyze the performance of our tracker in controlled conditions (noise perturbed synthetically generated image tracks) both under regular circumstances (reasonable sampling of training data) and more severe conditions with subsampled training points and persistent partial occlusion (the man running behind a fence, with many of the 2D marker tracks obstructed). Fig. 1B,C shows both the posterior (filtered) latent space distribution obtained from our tracker, and its mean (we do not show the distribution of the global rigid body motion; in all experiments this is tracked with good accuracy). In the latent space plot shown in fig. 1B, the onset of running (two cycles were used) appears as a separate region external to the main loop. It does not appear in the subsampled training set in fig. 1B, where only one running cycle was used for training and the onset of running was removed. In each case, one can see that the model is able to track quite competently, with a modest decrease in its temporal accuracy, shown in fig. 1C, where the averages are computed per 3D joint (normalised wrt body height). Subsampling causes some ambiguity in the estimate, e.g. see the bimodality in the right plot in fig. 1C. In another set of experiments (not shown) we also tracked using different subsets of 3D markers. The estimates were accurate even when about 30% of the markers were dropped. Experiments with real images: this shows our tracker’s ability to work with real motions of different people, with different body proportions, not in its latent variable model training set (figs. 2–3). We study walking, running and turns. In all cases, tracking and 3D reconstruction are reasonably accurate. We have also run comparisons against low-dimensional models based on PCA and GPLVM (fig. 3). It is important to note that, for LELVM, errors in the pose estimates are primarily caused by mismatches between the mocap data used to learn the LELVM prior and the body proportions of the person in the video. For example, the body proportions of the OSU motion captured walker are quite different from those of the image in fig. 2–3 (e.g. note how the legs of the stick man are shorter relative to the trunk). Likewise, the style of the runner from the OSU data (e.g. the swinging of the arms) is quite different from that of the video. Finally, the interest points tracked by the 2D tracker do not entirely correspond either in number or location to the motion capture markers, and are noisy and sometimes missing. In future work, we plan to include an optimization step to also estimate the body proportions. This would be complicated for a general, unconstrained model because the dimensions of the body couple with the pose, so either one or the other can be changed to improve the tracking error (the observation likelihood can also become singular). But for dedicated prior pose models like ours these difficulties should be significantly reduced. The model simply cannot assume highly unlikely stances—these are either not representable at all, or have reduced probability—and thus avoids compensatory, unrealistic body proportion estimates. 
[Figure 1 plot panels (numeric plot content omitted): A, frames n = 15, 40, 65, 90, 115, 140; B, frames n = 1, 13, 25, 37, 49, 60; C, per-frame RMSE curves.]
Figure 1: OSU running man motion capture data. A: we use 217 datapoints for training LELVM (with added noise) and for tracking. Row 1: tracking in the 2D latent space; the contours (very tight in this sequence) are the posterior probability. Row 2: perspective-projection-based observations with occlusions. Row 3: each quadruplet (a, a′, b, b′) shows the true pose of the running man from front and side views (a, b) and the pose reconstructed by tracking with our model (a′, b′). B: we use the first running cycle for training LELVM and the second cycle for tracking. C: RMSE errors for each frame, for the tracking of A (left plot) and B (middle plot), normalised so that 1 equals the height of the stick man: RMSE(n) = (1/M) ( Σ_{j=1}^{M} ||y_nj − ŷ_nj||² )^{1/2} over all 3D locations of the M markers, i.e., a comparison of the reconstructed stick man ŷ_n with the ground-truth stick man y_n. Right plot: multimodal posterior distribution in pose space for the model of A (frame 42).

Comparison with PCA and GPLVM (fig. 3): for these models, the tracker uses the same GMSPPF setting as for LELVM (number of particles, initialisation, random-walk dynamics, etc.) but with the mapping y = f(x) provided by GPLVM or PCA, and with a uniform prior p(x) in latent space (since neither GPLVM nor the non-probabilistic PCA provides one). The LELVM tracker uses both its f(x) and its latent-space prior p(x), as discussed. All methods use a 2D latent space. We ensured the best possible training of GPLVM by model selection based on multiple runs. For PCA, the latent space looks deceptively good, showing non-intersecting loops. However, (1) individual loops do not collect together as they should (for LELVM they do); (2) worse still, the mapping from 2D to pose space yields a poor observation model. The reason is that the loop in 102-D pose space is nonlinearly bent and a plane can at best intersect it at a few points, so the tracker often stays put at one of those (typically an "average" standing position), since leaving it would increase the error substantially. Using more latent dimensions would improve this, but as LELVM shows, this is not necessary. For GPLVM, we found high sensitivity to filter initialisation: the estimates have high variance across runs and are inaccurate ≈ 80% of the time. When it fails, the GPLVM tracker often freezes in latent space, like PCA. When it does succeed, it produces results comparable with LELVM, although somewhat less accurate visually. Even then, however, GPLVM's latent space consists of continuous chunks spread apart and offset from each other; GPLVM has no incentive to place two x's that map to the same y near each other. This effect, combined with the lack of a data-sensitive, realistic latent-space density p(x), makes GPLVM jump erratically from chunk to chunk, in contrast with LELVM, which smoothly follows the 1D loop.
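For concreteness, the per-frame error of the Figure 1C caption (as reconstructed above) can be computed as follows; taking the vertical extent of the ground-truth markers in the first frame as the stick man's height is an assumption made here purely for the normalisation.

```python
import numpy as np

def rmse_per_frame(y_true, y_hat):
    """RMSE(n) = (1/M) * sqrt( sum_j ||y_nj - yhat_nj||^2 ), normalised so that
    1 equals the stick man's height. Inputs: (T, M, 3) arrays of ground-truth
    and reconstructed 3D marker positions."""
    diff = y_true - y_hat
    M = y_true.shape[1]
    err = np.sqrt((diff ** 2).sum(axis=(1, 2))) / M          # one value per frame
    # assumed body height: vertical (second-coordinate) extent of frame 0
    height = y_true[0, :, 1].max() - y_true[0, :, 1].min()
    return err / height
```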
[Figure 2 plot panels (numeric plot content omitted): A, frames n = 1, 15, 29, 43, 55, 69; B, frames n = 4, 9, 14, 19, 24, 29.]

Figure 2: A: tracking of a video from [6] (turning & walking). We use 220 datapoints (3 full walking cycles) for training LELVM. Row 1: tracking in the 2D latent space; the contours are the estimated posterior probability. Row 2: tracking based on markers; the red dots are the 2D tracks and the green stick man is the 3D reconstruction obtained using our model. Row 3: our 3D reconstruction from a different viewpoint. B: tracking of a person running straight towards the camera; notice the scale changes and the possible forward-backward ambiguities in the 3D estimates. We train the LELVM using 180 datapoints (2.5 running cycles); the 2D tracks were obtained by manually marking the video. In both A and B the mocap training data was for a person different from the one in the video (with different body proportions and motions), and no ground-truth estimate was available for favourable initialisation.

Figure 3: Method comparison, frame 38 (latent-space tracking for LELVM, GPLVM and PCA; plots omitted). PCA and GPLVM map consecutive walking cycles to spatially distinct latent-space regions. Compounded by a data-independent latent prior, the resulting tracker gets easily confused: it jumps across loops and/or stays put, trapped in local optima. In contrast, LELVM is stable and tightly follows a 1D manifold (see videos).

Some GPLVM problems might be alleviated using higher-order dynamics, but our experiments suggest that such modeling sophistication is less crucial if locality constraints are correctly modeled (as in LELVM). We conclude that, compared to LELVM, GPLVM is significantly less robust for tracking, has much higher training overhead, and lacks some operations (e.g. computing latent conditionals based on partly missing ambient data).
5 Conclusion and future work

We have proposed the use of priors based on the Laplacian Eigenmaps Latent Variable Model (LELVM) for people tracking. LELVM is a probabilistic dimensionality reduction method that combines the advantages of latent variable models and spectral manifold learning algorithms: a multimodal probability density over latent and ambient variables, globally differentiable nonlinear mappings for reconstruction and dimensionality reduction, no local optima, the ability to unfold highly nonlinear manifolds, and good practical scaling to latent spaces of high dimension. LELVM is computationally efficient, simple to learn from sparse training data, and compatible with standard probabilistic trackers such as particle filters. Our results using a LELVM-based probabilistic sigma-point mixture tracker on several real and synthetic human motion sequences show that LELVM provides sufficient constraints for robust operation in the presence of missing, noisy and ambiguous image measurements. Comparisons with PCA and GPLVM show that LELVM is superior in terms of accuracy, robustness and computation time. The objective of this paper was to demonstrate the ability of the LELVM prior in a simple setting, using 2D tracks obtained automatically or manually and single-type motions (running, walking). Future work will explore more complex observation models such as silhouettes; the combination of different motion types in the same latent space (whose dimension will exceed 2); and the exploration of multimodal posterior distributions in latent space caused by ambiguities.

Acknowledgments

This work was partially supported by NSF CAREER award IIS–0546857 (MACP), NSF IIS–0535140 and EC MCEXT–025481 (CS). CMU data: http://mocap.cs.cmu.edu (created with funding from NSF EIA–0196217). OSU data: http://accad.osu.edu/research/mocap/mocap data.htm.

References

[1] M. Á. Carreira-Perpiñán and Z. Lu. The Laplacian Eigenmaps Latent Variable Model. In AISTATS, 2007.
[2] N. R. Howe, M. E. Leventon, and W. T. Freeman. Bayesian reconstruction of 3D human motion from single-camera video. In NIPS, volume 12, pages 820–826, 2000.
[3] T.-J. Cham and J. M. Rehg. A multiple hypothesis approach to figure tracking. In CVPR, 1999.
[4] M. Brand. Shadow puppetry. In ICCV, pages 1237–1244, 1999.
[5] H. Sidenbladh, M. J. Black, and L. Sigal. Implicit probabilistic models of human motion for synthesis and tracking. In ECCV, volume 1, pages 784–800, 2002.
[6] C. Sminchisescu and A. Jepson. Generative modeling for continuous non-linearly embedded visual inference. In ICML, pages 759–766, 2004.
[7] R. Urtasun, D. J. Fleet, A. Hertzmann, and P. Fua. Priors for people tracking from small training sets. In ICCV, pages 403–410, 2005.
[8] R. Li, M.-H. Yang, S. Sclaroff, and T.-P. Tian. Monocular tracking of 3D human motion with a coordinated mixture of factor analyzers. In ECCV, volume 2, pages 137–150, 2006.
[9] J. M. Wang, D. Fleet, and A. Hertzmann. Gaussian process dynamical models. In NIPS, volume 18, 2006.
[10] R. Urtasun, D. J. Fleet, and P. Fua. Gaussian process dynamical models for 3D people tracking. In CVPR, pages 238–245, 2006.
[11] G. W. Taylor, G. E. Hinton, and S. Roweis. Modeling human motion using binary latent variables. In NIPS, volume 19, 2007.
[12] C. M. Bishop, M. Svensén, and C. K. I. Williams. GTM: The generative topographic mapping. Neural Computation, 10(1):215–234, January 1998.
[13] N. Lawrence. Probabilistic non-linear principal component analysis with Gaussian process latent variable models. Journal of Machine Learning Research, 6:1783–1816, November 2005.
[14] M. Belkin and P. Niyogi. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation, 15(6):1373–1396, June 2003.
[15] B. W. Silverman. Density Estimation for Statistics and Data Analysis. Chapman & Hall, 1986.
[16] R. van der Merwe and E. A. Wan. Gaussian mixture sigma-point particle filters for sequential probabilistic inference in dynamic state-space models. In ICASSP, volume 6, pages 701–704, 2003.
[17] M. Á. Carreira-Perpiñán. Acceleration strategies for Gaussian mean-shift image segmentation. In CVPR, pages 1160–1167, 2006.

6 0.45741662 59 nips-2007-Continuous Time Particle Filtering for fMRI

7 0.45655692 213 nips-2007-Variational Inference for Diffusion Processes

8 0.45461971 114 nips-2007-Learning and using relational theories

9 0.43594271 17 nips-2007-A neural network implementing optimal state estimation based on dynamic spike train decoding

10 0.42667681 171 nips-2007-Scan Strategies for Meteorological Radars

11 0.40881631 11 nips-2007-A Risk Minimization Principle for a Class of Parzen Estimators

12 0.40570235 214 nips-2007-Variational inference for Markov jump processes

13 0.38690612 198 nips-2007-The Noisy-Logical Distribution and its Application to Causal Inference

14 0.37331948 40 nips-2007-Bundle Methods for Machine Learning

15 0.36516255 87 nips-2007-Fast Variational Inference for Large-scale Internet Diagnosis

16 0.36326975 50 nips-2007-Combined discriminative and generative articulated pose and non-rigid shape estimation

17 0.35394153 31 nips-2007-Bayesian Agglomerative Clustering with Coalescents

18 0.34369478 206 nips-2007-Topmoumoute Online Natural Gradient Algorithm

19 0.34013414 34 nips-2007-Bayesian Policy Learning with Trans-Dimensional MCMC

20 0.33838597 74 nips-2007-EEG-Based Brain-Computer Interaction: Improved Accuracy by Automatic Single-Trial Error Detection


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(5, 0.024), (13, 0.03), (16, 0.025), (18, 0.491), (21, 0.057), (31, 0.016), (34, 0.026), (35, 0.018), (47, 0.08), (49, 0.011), (83, 0.073), (85, 0.023), (87, 0.023), (90, 0.037)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.88995868 203 nips-2007-The rat as particle filter

Author: Aaron C. Courville, Nathaniel D. Daw

Abstract: Although theorists have interpreted classical conditioning as a laboratory model of Bayesian belief updating, a recent reanalysis showed that the key features that theoretical models capture about learning are artifacts of averaging over subjects. Rather than learning smoothly to asymptote (reflecting, according to Bayesian models, the gradual tradeoff from prior to posterior as data accumulate), subjects learn suddenly and their predictions fluctuate perpetually. We suggest that abrupt and unstable learning can be modeled by assuming subjects are conducting inference using sequential Monte Carlo sampling with a small number of samples — one, in our simulations. Ensemble behavior resembles exact Bayesian models since, as in particle filters, it averages over many samples. Further, the model is capable of exhibiting sophisticated behaviors like retrospective revaluation at the ensemble level, even given minimally sophisticated individuals that do not track uncertainty in their beliefs over trials. 1

2 0.83265758 3 nips-2007-A Bayesian Model of Conditioned Perception

Author: Alan Stocker, Eero P. Simoncelli

Abstract: unkown-abstract

3 0.7710824 213 nips-2007-Variational Inference for Diffusion Processes

Author: Cédric Archambeau, Manfred Opper, Yuan Shen, Dan Cornford, John S. Shawe-taylor

Abstract: Diffusion processes are a family of continuous-time continuous-state stochastic processes that are in general only partially observed. The joint estimation of the forcing parameters and the system noise (volatility) in these dynamical systems is a crucial, but non-trivial task, especially when the system is nonlinear and multimodal. We propose a variational treatment of diffusion processes, which allows us to compute type II maximum likelihood estimates of the parameters by simple gradient techniques and which is computationally less demanding than most MCMC approaches. We also show how a cheap estimate of the posterior over the parameters can be constructed based on the variational free energy. 1

4 0.4576667 125 nips-2007-Markov Chain Monte Carlo with People

Author: Adam Sanborn, Thomas L. Griffiths

Abstract: Many formal models of cognition implicitly use subjective probability distributions to capture the assumptions of human learners. Most applications of these models determine these distributions indirectly. We propose a method for directly determining the assumptions of human learners by sampling from subjective probability distributions. Using a correspondence between a model of human choice and Markov chain Monte Carlo (MCMC), we describe a method for sampling from the distributions over objects that people associate with different categories. In our task, subjects choose whether to accept or reject a proposed change to an object. The task is constructed so that these decisions follow an MCMC acceptance rule, defining a Markov chain for which the stationary distribution is the category distribution. We test this procedure for both artificial categories acquired in the laboratory, and natural categories acquired from experience. 1

5 0.44176355 34 nips-2007-Bayesian Policy Learning with Trans-Dimensional MCMC

Author: Matthew Hoffman, Arnaud Doucet, Nando D. Freitas, Ajay Jasra

Abstract: A recently proposed formulation of the stochastic planning and control problem as one of parameter estimation for suitable artificial statistical models has led to the adoption of inference algorithms for this notoriously hard problem. At the algorithmic level, the focus has been on developing Expectation-Maximization (EM) algorithms. In this paper, we begin by making the crucial observation that the stochastic control problem can be reinterpreted as one of trans-dimensional inference. With this new interpretation, we are able to propose a novel reversible jump Markov chain Monte Carlo (MCMC) algorithm that is more efficient than its EM counterparts. Moreover, it enables us to implement full Bayesian policy search, without the need for gradients and with one single Markov chain. The new approach involves sampling directly from a distribution that is proportional to the reward and, consequently, performs better than classic simulations methods in situations where the reward is a rare event.

6 0.40038481 47 nips-2007-Collapsed Variational Inference for HDP

7 0.38566709 51 nips-2007-Comparing Bayesian models for multisensory cue combination without mandatory integration

8 0.38292345 153 nips-2007-People Tracking with the Laplacian Eigenmaps Latent Variable Model

9 0.37717903 74 nips-2007-EEG-Based Brain-Computer Interaction: Improved Accuracy by Automatic Single-Trial Error Detection

10 0.3704226 93 nips-2007-GRIFT: A graphical model for inferring visual classification features from human data

11 0.36206573 214 nips-2007-Variational inference for Markov jump processes

12 0.36102641 202 nips-2007-The discriminant center-surround hypothesis for bottom-up saliency

13 0.35235691 155 nips-2007-Predicting human gaze using low-level saliency combined with face detection

14 0.34844723 87 nips-2007-Fast Variational Inference for Large-scale Internet Diagnosis

15 0.34061974 100 nips-2007-Hippocampal Contributions to Control: The Third Way

16 0.33840862 48 nips-2007-Collective Inference on Markov Models for Modeling Bird Migration

17 0.33520493 198 nips-2007-The Noisy-Logical Distribution and its Application to Causal Inference

18 0.33366039 2 nips-2007-A Bayesian LDA-based model for semi-supervised part-of-speech tagging

19 0.33131614 122 nips-2007-Locality and low-dimensions in the prediction of natural experience from fMRI

20 0.33112746 154 nips-2007-Predicting Brain States from fMRI Data: Incremental Functional Principal Component Regression