nips nips2005 nips2005-140 knowledge-graph by maker-knowledge-mining

140 nips-2005-Nonparametric inference of prior probabilities from Bayes-optimal behavior


Source: pdf

Author: Liam Paninski

Abstract: We discuss a method for obtaining a subject’s a priori beliefs from his/her behavior in a psychophysics context, under the assumption that the behavior is (nearly) optimal from a Bayesian perspective. The method is nonparametric in the sense that we do not assume that the prior belongs to any fixed class of distributions (e.g., Gaussian). Despite this increased generality, the method is relatively simple to implement, being based in the simplest case on a linear programming algorithm, and more generally on a straightforward maximum likelihood or maximum a posteriori formulation, which turns out to be a convex optimization problem (with no non-global local maxima) in many important cases. In addition, we develop methods for analyzing the uncertainty of these estimates. We demonstrate the accuracy of the method in a simple simulated coin-flipping setting; in particular, the method is able to precisely track the evolution of the subject’s posterior distribution as more and more data are observed. We close by briefly discussing an interesting connection to recent models of neural population coding.

Reference: text


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Nonparametric inference of prior probabilities from Bayes-optimal behavior Liam Paninski∗ Department of Statistics, Columbia University liam@stat. [sent-1, score-0.245]

2 edu/∼liam Abstract We discuss a method for obtaining a subject’s a priori beliefs from his/her behavior in a psychophysics context, under the assumption that the behavior is (nearly) optimal from a Bayesian perspective. [sent-6, score-0.041]

3 The method is nonparametric in the sense that we do not assume that the prior belongs to any fixed class of distributions (e. [sent-7, score-0.245]

4 In addition, we develop methods for analyzing the uncertainty of these estimates. [sent-11, score-0.049]

5 We demonstrate the accuracy of the method in a simple simulated coin-flipping setting; in particular, the method is able to precisely track the evolution of the subject’s posterior distribution as more and more data are observed. [sent-12, score-0.146]

6 We close by briefly discussing an interesting connection to recent models of neural population coding. [sent-13, score-0.102]

7 For example, (2) interpret visual motion illusions in terms of a prior weighted towards slow, smooth movements of objects in space. [sent-15, score-0.292]

8 In an experimental context, it is clearly desirable to empirically obtain estimates of the prior the subject is operating under; the idea would be to then compare these experimental estimates of the subject’s prior with the ecological prior he or she “should” have been using. [sent-16, score-1.12]

9 Conversely, such an approach would have the potential to establish that the subject is not behaving Bayes-optimally under any prior, but rather is in fact using a different, nonBayesian strategy. [sent-17, score-0.301]

10 Such tools would also be quite useful in the context of studies of learning and generalization, in which we would like to track the time course of a subject’s adaptation to an experimentally-chosen prior distribution (5). [sent-18, score-0.343]

11 Such estimates of the subject’s prior have in the past been rather qualitative, and/or limited to simple parametric families (e. [sent-19, score-0.319]

12 Dayan for pointing out the connection to neural population coding models. [sent-30, score-0.142]

13 the width of a Gaussian may be fit to the experimental data, but the actual Gaussian identity of the prior is not examined systematically). [sent-32, score-0.245]

14 We first discuss the method in the general case of an arbitrarily-chosen loss function (the “cost” which we assume the subject is attempting to minimize, on average), then examine a few special important cases (e. [sent-34, score-0.293]

15 The algorithms for determining the subject’s prior distributions turn out to be surprisingly quick and easy to code: the basic idea is that each observed stimulus-response pair provides a set of constraints on what the actual prior could be. [sent-37, score-0.715]

16 In the simplest case, these constraints are linear, and the resulting algorithm is simply a version of linear programming, for which very efficient algorithms exist. [sent-38, score-0.237]
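
To make the constraint-building step concrete, here is a minimal Python sketch (not taken from the paper; the parameter grid, the squared-error loss, and all variable names are illustrative assumptions). For each observed pair (x_i, θ̂_i) and each alternative response z, near-optimality of θ̂_i implies one linear inequality on the discretized prior p:

```python
import numpy as np

# Illustrative discretization of the hidden parameter; nothing here is from the paper.
theta = np.linspace(0.0, 1.0, 101)
dtheta = theta[1] - theta[0]

def loss(a, b):
    """Squared-error loss D(a, b); any other loss function could be plugged in."""
    return (a - b) ** 2

def constraint_rows(lik_xi, theta_hat_i, z_grid):
    """Linear constraints implied by one trial (x_i, theta_hat_i).

    lik_xi[k] = p(x_i | theta[k]) is the (known) likelihood on the grid.  If
    theta_hat_i minimizes the posterior expected loss under the true prior p,
    then for every alternative z the row c_z satisfies  c_z @ p <= 0, where
    c_z[k] = p(x_i | theta[k]) * (D(theta_hat_i, theta[k]) - D(z, theta[k])) * dtheta.
    """
    return np.array([lik_xi * (loss(theta_hat_i, theta) - loss(z, theta)) * dtheta
                     for z in z_grid])
```

Stacking these rows over all trials and all alternatives z gives the constraint matrix used in the linear programs sketched below.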

17 Finally, we discuss Bayesian methods for representing the uncertainty in our estimates. [sent-40, score-0.049]

18 The experimental economics literature in particular is quite vast (where the relevance to gambling, price setting, etc. [sent-42, score-0.091]

19 is discussed at length, particularly in settings in which “rational” — expected utilitymaximizing — behavior seems to break down); see, e. [sent-43, score-0.041]

20 Finally, it is worth noting that the question of determining a subject’s (or more precisely, an opponent’s) priors in a gambling context — in particular, in the binary case of whether or not an opponent will accept a bet, given a fixed table of outcomes vs. [sent-50, score-0.316]

21 Nevertheless, we are unaware of any previous application of similar techniques (both for estimating a subject’s true prior and for analyzing the uncertainty associated with these estimates) in the psychophysical or neuroscience literature. [sent-52, score-0.375]

22 General case Our technique for determining the subject’s prior is based on several assumptions (some of which will be relaxed below). [sent-53, score-0.282]

23 To begin, we assume that the subject is behaving optimally in a Bayesian sense. [sent-54, score-0.352]

24 To be precise, we have four ingredients: a prior distribution on some hidden parameter θ; observed input (stimulus) data, dependent in some probabilistic way on θ; the subject’s corresponding output estimates of the underlying θ, given the input data; and finally a loss function D(·, ·). [sent-55, score-0.448]

25 Note that we have also implicitly assumed, in this simplest case, that both the loss D(·, ·) [sent-60, score-0.134]

26 and likelihood functions p(xi | θ) are known, both to the subject and to the experimenter (perhaps from a preceding set of “learning” trials). [sent-62, score-0.439]

27 So how can the experimenter actually estimate p(θ), given the likelihoods p(x|θ), the loss function D(·, ·), [sent-63, score-0.279]

28 and some set of data {xi} with corresponding estimates {θ̂i} minimizing the posterior expected loss (1)? [sent-65, score-0.216]
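
Equation (1) itself is not reproduced in the extracted sentences; for a Bayes-optimal subject it presumably takes the standard posterior-expected-loss form (a reconstruction, not a quotation from the paper):

$$
\hat{\theta}_i \;=\; \arg\min_z \int D(z,\theta)\, p(\theta \mid x_i)\, d\theta
\;=\; \arg\min_z \int D(z,\theta)\, p(x_i \mid \theta)\, p(\theta)\, d\theta ,
$$

where the normalizing constant $p(x_i)$ has been dropped because it does not affect the minimizer. The key point used below is that the right-hand side is linear in the prior $p(\theta)$.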

29 This turns out to be a linear programming problem (11), for which very efficient algorithms exist (e. [sent-66, score-0.125]

30 To see why, first note that the right hand side of expression (1) is linear in the prior p(θ). [sent-70, score-0.289]

31 (See also (10), who noted the same linear programming structure in an application to cost function estimation, rather than the prior estimation examined here.) [sent-72, score-0.459]

32 The solution to the linear programming problem defined by (2-4) isn’t necessarily unique; it corresponds to an intersection of half-spaces, which is convex in general. [sent-73, score-0.189]

33 To come up with a unique solution, we could maximize a concave “regularizing” function on this convex set; possible such functions include, e. [sent-74, score-0.177]

34 An alternative solution would be to modify constraint (4) to $\int p(\theta)\,p(x_i|\theta)\,[D(\hat\theta_i,\theta) - D(z,\theta)]\,d\theta \le -\epsilon \;\; \forall z$, where we can then adjust the slack variable $\epsilon$ until the constraint set shrinks to a single point. [sent-78, score-0.19]

35 This leads directly to another linear programming problem (where we want to make the linear function ǫ as large as possible, under the above constraints). [sent-79, score-0.169]
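
A minimal sketch of this slack-variable linear program on a discretized prior, using scipy.optimize.linprog (the function and variable names, and the use of scipy rather than whatever solver the paper used, are assumptions of this sketch):

```python
import numpy as np
from scipy.optimize import linprog

def estimate_prior_lp(C, dtheta):
    """Maximize the slack eps subject to  C @ p <= -eps,  p >= 0,  sum(p) * dtheta = 1.

    C stacks the per-trial constraint rows (e.g. from constraint_rows above);
    the decision vector is v = [p_1, ..., p_K, eps].  linprog minimizes, so we
    minimize -eps.
    """
    n_con, K = C.shape
    cost = np.zeros(K + 1)
    cost[-1] = -1.0                                   # maximize eps
    A_ub = np.hstack([C, np.ones((n_con, 1))])        # C @ p + eps <= 0
    b_ub = np.zeros(n_con)
    A_eq = np.append(np.full(K, dtheta), 0.0).reshape(1, -1)
    b_eq = [1.0]                                      # prior integrates to one
    bounds = [(0, None)] * K + [(None, None)]         # p >= 0, eps unrestricted
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=bounds, method="highs")
    return res.x[:K], res.x[-1]
```

A strictly negative optimal slack is one diagnostic that no prior makes the observed responses exactly Bayes-optimal, which connects to the point raised in the next few sentences.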

36 That is, what if subjects are not quite behaving optimally with respect to p(θ)? [sent-83, score-0.211]

37 It is possible to detect this situation in the above framework, for example if the slack variable ǫ above is found to be negative. [sent-84, score-0.083]

38 If we assume this decision noise η has a log-concave density (i. [sent-90, score-0.053]

39 , the log of the density is a concave function; e. [sent-92, score-0.113]

40 , Gaussian, or exponential), then so does its integral (12), and the resulting maximum likelihood problem has no non-global local maxima and is therefore solvable by ascent methods. [sent-94, score-0.366]

41 To see this, write the log-likelihood of $(p, \sigma)$ given data $\{x_i, \hat\theta_i\}$ as $L_{\{x_i,\hat\theta_i\}}(p,\sigma) = \sum \log \int_{-\infty}^{u_i(z)} dp(\eta)$, with the sum over the set of all the constraints in (4) and $u_i(z) \equiv \frac{1}{\sigma} \int p(\theta)\,p(x_i|\theta)\,[D(\hat\theta_i,\theta) - D(z,\theta)]\,d\theta$. [sent-95, score-0.241]

42 L is the sum of concave functions in ui , and hence is concave itself, and has no non-global local maxima in these variables; since σ and p are linearly related through ui (and (p, σ) live in a convex set), L has no non-global local maxima in (p, σ), either. [sent-96, score-0.696]

43 Once again, this maximum likelihood problem may be regularized by prior information, maximizing the a posteriori likelihood L(p) − q[p] instead of L(p); this problem is similarly tractable by ascent methods, by the concavity of −q[·] [sent-97, score-0.729]

44 (note that this “soft-constraint” problem reduces exactly to the “hard” constraint problem (4) as the noise σ → 0). [sent-98, score-0.099]
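
Here is a sketch of the resulting concave maximum a posteriori problem for the special case of Gaussian decision noise, so that the integral of the noise density is the normal CDF. The sign convention, the quadratic form chosen for q[p], and the parameter values below are assumptions of this sketch rather than the paper's choices; the sign of u is taken so that near-optimal responses receive high likelihood.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def neg_log_posterior(p, C, sigma, lam):
    """Negative of the (concave) MAP objective for the noisy-decision model.

    Rows of C carry the opposite sign of the LP rows above:
    c_z[k] = p(x_i|theta_k) * (D(z, theta_k) - D(theta_hat_i, theta_k)) * dtheta,
    so u = C @ p / sigma is large when the observed response beats the alternative z
    under the candidate prior p.  log Phi(u) is concave in u and u is linear in p,
    hence the sum is concave in p; lam * ||p||^2 is a simple log-concave regularizer q[p].
    """
    u = C.dot(p) / sigma
    return -np.sum(norm.logcdf(u)) + lam * np.dot(p, p)

def map_prior(C, dtheta, sigma=0.1, lam=1e-3):
    K = C.shape[1]
    p0 = np.full(K, 1.0 / (K * dtheta))               # start at the uniform prior
    res = minimize(neg_log_posterior, p0, args=(C, sigma, lam),
                   bounds=[(0.0, None)] * K,
                   constraints={"type": "eq",
                                "fun": lambda p: p.sum() * dtheta - 1.0},
                   method="SLSQP")
    return res.x
```

Because the objective has no non-global local maxima, any reasonable ascent (here, SLSQP on the negated objective) should reach the global solution.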

45 An additional interesting idea is to use the computed value of η as a kind of outlier test; η large implies the trial was particularly suboptimal. [sent-100, score-0.293]

46 Similar observations have appeared in the context of medical applications of Markov random field methods (13). [sent-102, score-0.081]

47 However, a problem arises: the zero measure, p(θ) ≡ 0, will always trivially satisfy the remaining constraints (2) and (4). [sent-105, score-0.115]

48 This problem could potentially be ameliorated by introducing a convex regularizing term (or equivalently, a log-concave prior) on the total mass $\int p(\theta)\,d\theta$. [sent-106, score-0.134]

49 with the maximum taken over all xi which led to the estimate θ̂i. [sent-107, score-0.254]

50 This setup is perhaps most appropriate for a two-alternative forced choice situation, where the problem is one of classification or discrimination, not estimation. [sent-108, score-0.054]

51 Mean-square and absolute-error regression: Our discussion assumes an even simpler form when the loss function D(. [sent-109, score-0.056]

52 In this case it is convenient to work with a slightly different noise model than the classification noise discussed above; instead, we may model the subject’s responses as optimal plus estimation noise. [sent-112, score-0.153]

53 For squared-error, the optimal θ̂i is known to be uniquely defined as the conditional mean of θ given xi. [sent-113, score-0.146]

54 In the simplest case of Gaussian η, the maximum likelihood problem may be solved by standard nonnegative least-squares (e. [sent-115, score-0.282]
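
A sketch of that nonnegative least-squares step in Python (scipy.optimize.nnls rather than any particular package named in the paper; the soft unit-mass penalty and its weight lam are illustrative assumptions used to avoid the trivial p ≡ 0 solution mentioned earlier):

```python
import numpy as np
from scipy.optimize import nnls

def fit_prior_nnls(A, dtheta, lam=1e3):
    """Squared-error case: rows of A are p(x_i|theta_k) * (theta_k - theta_hat_i) * dtheta,
    so an exactly Bayes-optimal subject satisfies A @ p = 0.  Since p = 0 also solves
    that system, the unit-mass condition is added as a heavily weighted extra row,
    and the result is renormalized afterwards.
    """
    n_trials, K = A.shape
    A_aug = np.vstack([A, np.sqrt(lam) * dtheta * np.ones((1, K))])
    b_aug = np.append(np.zeros(n_trials), np.sqrt(lam))
    p, _ = nnls(A_aug, b_aug)
    return p / (p.sum() * dtheta)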

55 In each case, ηi retains its utility as an outlier score. [sent-119, score-0.139]

56 A worked example: learning the fairness of a coin. In this section we will work through a concrete example, to show how to put the ideas discussed above into practice. [sent-120, score-0.407]

57 We take perhaps the simplest possible example, for clarity: the subject observes some number N of independent, identically distributed coin flips, and on each trial i tells us his/her probability of observing tails on the next trial, given that t = t(i) tails were observed in the first i trials. [sent-121, score-1.236]

58 Here the likelihood functions p(xi | θ) take the standard binomial form $p(t(i) \mid p_{\mathrm{tails}}) = \binom{i}{t}\, p_{\mathrm{tails}}^{t}\,(1 - p_{\mathrm{tails}})^{i-t}$ (note that it is reasonable to assume that these likelihoods are known to the subject, at least approximately, due to the ubiquity of binomial data). [sent-122, score-0.983]

59 Under our assumptions, the subject’s estimates $\hat p_{\mathrm{tails},i}$ are given as the posterior mean of $p_{\mathrm{tails}}$ given the number of tails observed up to trial i. [sent-123, score-0.982]

60 This puts us directly in the mean-square framework discussed in equation (5); we assume Gaussian estimation noise η, and construct a regression matrix A of N rows, with the i-th row given by $p(t(i) \mid p_{\mathrm{tails}})\,(p_{\mathrm{tails}} - \hat p_{\mathrm{tails},i})$. [sent-124, score-0.1]

61 Finally, we estimate $\hat p(\theta) = \arg\min_{p \ge 0,\; \int_0^1 p(\theta)\,d\theta = 1} \; \|Ap\|_2^2 + \epsilon\, q[p]$, for $\epsilon \approx 10^{-7}$; this estimate is equivalent to MAP estimation under a (weak) Gaussian prior on the function p(θ) (truncated so that p(θ) ≥ 0), and is computed using quadprog. [sent-126, score-0.412]
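
Below is a self-contained Python sketch of this worked example. It is not the paper's code: quadprog is a MATLAB routine, and the sketch substitutes a plain nonnegative least-squares fit with a soft unit-mass constraint for the paper's quadratic program with its weak Gaussian smoothing prior. The bimodal "true" prior, the coin probability, the noise level, and all other numbers are purely illustrative.

```python
import numpy as np
from scipy.stats import binom
from scipy.optimize import nnls

rng = np.random.default_rng(0)

# --- illustrative setup: grid, subject's prior, true coin (no values here are from the paper)
theta = np.linspace(0.01, 0.99, 99)                  # grid over p_tails
dtheta = theta[1] - theta[0]
true_prior = np.exp(-0.5 * ((theta - 0.3) / 0.07) ** 2) \
           + np.exp(-0.5 * ((theta - 0.7) / 0.07) ** 2)
true_prior /= true_prior.sum() * dtheta              # a bimodal prior the subject believes in
p_coin, N, noise_sd = 0.5, 150, 0.01                 # true tail probability, trials, report noise

# --- simulate a (nearly) Bayes-optimal subject who reports posterior means
flips = rng.random(N) < p_coin                       # True = tails
t = np.cumsum(flips)                                 # tails observed through trial i
rows, reports = [], []
for i in range(1, N + 1):
    lik = binom.pmf(t[i - 1], i, theta)              # p(t(i) | p_tails) on the grid
    post = lik * true_prior
    post /= post.sum() * dtheta
    report = (theta * post).sum() * dtheta + noise_sd * rng.standard_normal()
    reports.append(report)
    rows.append(lik * (theta - report) * dtheta)     # i-th row of the regression matrix A
A = np.array(rows)

# --- recover the prior: nonnegative least squares with a soft unit-mass constraint
lam = 1e3
A_aug = np.vstack([A, np.sqrt(lam) * dtheta * np.ones((1, theta.size))])
b_aug = np.append(np.zeros(N), np.sqrt(lam))
p_hat, _ = nnls(A_aug, b_aug)
p_hat /= p_hat.sum() * dtheta

# --- compare the subject's true posterior with the one implied by the estimated prior
for n in (50, 100, 150):
    lik_n = binom.pmf(t[n - 1], n, theta)
    est_post = lik_n * p_hat
    est_post /= est_post.sum() * dtheta
    true_post = lik_n * true_prior
    true_post /= true_post.sum() * dtheta
```

Plotting true_prior against p_hat, and true_post against est_post for increasing n, reproduces the kind of comparison shown in the figures described below.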

62 We note in passing that this simple binomial paradigm has potential applications to ideal-observer analysis of classical neuroscientific tasks (e. [sent-128, score-0.093]

63 [Figure 1 panel labels and axis ticks (residue of the figure image): true prior; trial # (i) vs. # tails/i estimate; likelihoods; est prior; est prior.]

64 Figure 1: Learning the fairness of a coin (numerical simulation). [sent-152, score-0.407]

65 The bimodal nature of this prior indicates that the subject expects coins to be unfair (skewed towards heads, $p_{\mathrm{tails}} < 0.5$). [sent-154, score-0.96]

66 Note the systematic deviations of the subject’s estimate from the MLE; these deviations shrink as i increases and the strength of the prior relative to the likelihood term decreases. [sent-160, score-0.495]

67 Third: Binomial likelihood terms $\binom{i}{t}\, p_{\mathrm{tails}}^{t}\,(1 - p_{\mathrm{tails}})^{i-t}$. [sent-161, score-0.517]

68 Color of trace corresponds to trial number i, as indicated in the previous panel (traces are normalized for clarity). [sent-162, score-0.525]

69 Black trace indicates true prior (as in top panel); red indicates estimate ±1 posterior standard error (computed via importance sampling). [sent-164, score-0.508]

70 Black traces indicate the subject’s true posterior after observing 0 (thin trace), 50 (medium trace), and 100 (thick trace) sample coin flips; as more data are observed, the subject becomes more and more confident about the true fairness of the coin (p = . [sent-166, score-1.065]

71 Red traces indicate the estimated posterior given the full 150 or just the last 100 or 50 trials, respectively (errorbars omitted for visibility). [sent-170, score-0.139]

72 Note that the procedure tracks the evolution of the subject’s posterior quite accurately, given relatively few trials. [sent-171, score-0.25]

73 Figures 1-2 demonstrate the accuracy of the estimated p̂(θ); in particular, the bottom panels show that the method accurately tracks the evolution of the model subjects’ posteriors as an increasing amount of data are observed. [sent-176, score-0.15]

74 [Second figure's panel labels and axis ticks (residue of the figure image): true prior; trial # (i) vs. # tails/i estimate; likelihoods; est prior; est prior.]

75 Connection to neural population coding. It is interesting to note a connection to the neural population coding model studied in (14) (with more recent work reviewed in (15)). [sent-202, score-0.234]

76 The basic idea is that neural populations encode not just stimuli, but probability distributions over stimuli (where the distribution describes the uncertainty in the state of the encoded object). [sent-203, score-0.1]

77 Here the experimentally observed data are neural firing rates, which provide constraints on the underlying encoded “prior” distribution in terms of the individual tuning function of each cell in the observed population. [sent-204, score-0.303]

78 $n_i \sim q\!\left(\big(n_i - \int p(\theta) f(x_i,\theta)\,d\theta\big)/\sigma\right)$, with q a log-concave density (typically Gaussian) to preserve the concavity of the log-likelihood; in this case, the scale σ of the noise does not vary with the mean firing rate, as it does in the Poisson model. [sent-208, score-0.303]

79 In both cases, the observed firing rates act as constraints oriented linearly with respect to p; in the latter case, the noise scale σ sets the strength, or confidence, of each such constraint (2, 3). [sent-209, score-0.287]

80 Thus, under this framework, given the simultaneously recorded activity of many cells {ni } and some model for the tuning functions f (xi , θ), we can infer p(θ) (and represent the uncertainty in these estimates) using methods quite similar to those developed above. [sent-210, score-0.146]
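
A sketch of that inference step under the simple additive-Gaussian version of the model (the tuning-function model, the soft unit-mass constraint, and the names below are assumptions of this sketch, not the construction of (14) itself):

```python
import numpy as np
from scipy.optimize import nnls

def decode_encoded_distribution(rates, tuning, dtheta, lam=1e3):
    """Infer the encoded distribution p(theta) from a population of firing rates.

    rates[i] is the observed rate of cell i and tuning[i, k] = f(x_i, theta_k) its
    tuning function on a grid; the model assumes
        n_i ~ sum_k p(theta_k) * f(x_i, theta_k) * dtheta  plus Gaussian noise,
    so recovering p is a nonnegative linear regression.  A heavily weighted extra
    row keeps p close to unit mass; the fit is renormalized at the end.
    """
    K = tuning.shape[1]
    A = np.vstack([tuning * dtheta, np.sqrt(lam) * dtheta * np.ones((1, K))])
    b = np.append(rates, np.sqrt(lam))
    p, _ = nnls(A, b)
    return p / (p.sum() * dtheta)
```

The same MAP machinery sketched earlier could replace the least-squares fit if uncertainty estimates for the decoded p(θ) are wanted.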

81 Directions. The obvious open avenue for future research (aside from application to experimental data) is to relax the assumptions: that the likelihood and cost function are both known, and that the data are observed directly (without any noise). [sent-211, score-0.233]

82 It seems fair to conjecture that the subject can learn the likelihood and cost functions given enough data, but one would like to test this directly, e. [sent-212, score-0.436]

83 and p together, perhaps under restrictions on the form of D(·, ·). [sent-216, score-0.054]

84 As emphasized above, the utility estimation problem has received a great deal of attention, and it is plausible to expect that the methods proposed here for estimation of the prior might be combined with previously-studied methods for utility elicitation and estimation. [sent-219, score-0.611]

85 It is also interesting to consider these elicitation methods in the context of experimental design (8, 17, 18), in which we might actively seek stimuli xi to maximally constrain the possible form of the prior and/or cost function. [sent-220, score-0.633]


similar papers computed by tfidf model

tfidf for this paper:

wordName wordTfidf (topN-words)

[('ptails', 0.352), ('prior', 0.245), ('subject', 0.237), ('fairness', 0.211), ('tails', 0.201), ('coin', 0.196), ('trial', 0.196), ('xi', 0.146), ('maxima', 0.14), ('est', 0.122), ('likelihood', 0.118), ('constraints', 0.115), ('concave', 0.113), ('elicitation', 0.106), ('priors', 0.105), ('ni', 0.097), ('binomial', 0.093), ('liam', 0.092), ('posterior', 0.086), ('posteriori', 0.084), ('experimenter', 0.084), ('utility', 0.083), ('slack', 0.083), ('programming', 0.081), ('likelihoods', 0.079), ('heads', 0.078), ('simplest', 0.078), ('estimates', 0.074), ('trace', 0.074), ('paninski', 0.074), ('observed', 0.073), ('gambling', 0.07), ('hagan', 0.07), ('pouget', 0.07), ('unfair', 0.07), ('regularizing', 0.07), ('behaving', 0.064), ('convex', 0.064), ('ui', 0.063), ('contraint', 0.061), ('opponent', 0.061), ('equalities', 0.061), ('koerding', 0.061), ('evolution', 0.06), ('estimate', 0.06), ('ascent', 0.06), ('dayan', 0.056), ('truncated', 0.056), ('loss', 0.056), ('mle', 0.056), ('outlier', 0.056), ('ips', 0.056), ('expects', 0.056), ('concavity', 0.056), ('quite', 0.055), ('perhaps', 0.054), ('panel', 0.054), ('traces', 0.053), ('noise', 0.053), ('wolpert', 0.052), ('implausible', 0.052), ('population', 0.052), ('stimuli', 0.051), ('optimally', 0.051), ('bayesian', 0.05), ('connection', 0.05), ('uncertainty', 0.049), ('zemel', 0.049), ('tracks', 0.049), ('maximum', 0.048), ('estimation', 0.047), ('motion', 0.047), ('simoncelli', 0.047), ('pt', 0.047), ('median', 0.047), ('constraint', 0.046), ('gaussian', 0.046), ('linear', 0.044), ('context', 0.043), ('true', 0.043), ('object', 0.043), ('cost', 0.042), ('tuning', 0.042), ('posteriors', 0.041), ('psychophysics', 0.041), ('particularly', 0.041), ('subjects', 0.041), ('coding', 0.04), ('ring', 0.04), ('dp', 0.04), ('fair', 0.039), ('clarity', 0.039), ('neuroscience', 0.038), ('appeared', 0.038), ('nonnegative', 0.038), ('inequality', 0.037), ('determining', 0.037), ('velocity', 0.036), ('deviations', 0.036), ('literature', 0.036)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 1.0000005 140 nips-2005-Nonparametric inference of prior probabilities from Bayes-optimal behavior

Author: Liam Paninski

Abstract: We discuss a method for obtaining a subject’s a priori beliefs from his/her behavior in a psychophysics context, under the assumption that the behavior is (nearly) optimal from a Bayesian perspective. The method is nonparametric in the sense that we do not assume that the prior belongs to any fixed class of distributions (e.g., Gaussian). Despite this increased generality, the method is relatively simple to implement, being based in the simplest case on a linear programming algorithm, and more generally on a straightforward maximum likelihood or maximum a posteriori formulation, which turns out to be a convex optimization problem (with no non-global local maxima) in many important cases. In addition, we develop methods for analyzing the uncertainty of these estimates. We demonstrate the accuracy of the method in a simple simulated coin-flipping setting; in particular, the method is able to precisely track the evolution of the subject’s posterior distribution as more and more data are observed. We close by briefly discussing an interesting connection to recent models of neural population coding.

2 0.12456849 202 nips-2005-Variational EM Algorithms for Non-Gaussian Latent Variable Models

Author: Jason Palmer, Kenneth Kreutz-Delgado, Bhaskar D. Rao, David P. Wipf

Abstract: We consider criteria for variational representations of non-Gaussian latent variables, and derive variational EM algorithms in general form. We establish a general equivalence among convex bounding methods, evidence based methods, and ensemble learning/Variational Bayes methods, which has previously been demonstrated only for particular cases.

3 0.11826181 173 nips-2005-Sensory Adaptation within a Bayesian Framework for Perception

Author: Alan Stocker, Eero P. Simoncelli

Abstract: We extend a previously developed Bayesian framework for perception to account for sensory adaptation. We first note that the perceptual effects of adaptation seems inconsistent with an adjustment of the internally represented prior distribution. Instead, we postulate that adaptation increases the signal-to-noise ratio of the measurements by adapting the operational range of the measurement stage to the input range. We show that this changes the likelihood function in such a way that the Bayesian estimator model can account for reported perceptual behavior. In particular, we compare the model’s predictions to human motion discrimination data and demonstrate that the model accounts for the commonly observed perceptual adaptation effects of repulsion and enhanced discriminability.

4 0.1157275 14 nips-2005-A Probabilistic Interpretation of SVMs with an Application to Unbalanced Classification

Author: Yves Grandvalet, Johnny Mariethoz, Samy Bengio

Abstract: In this paper, we show that the hinge loss can be interpreted as the neg-log-likelihood of a semi-parametric model of posterior probabilities. From this point of view, SVMs represent the parametric component of a semi-parametric model fitted by a maximum a posteriori estimation procedure. This connection enables to derive a mapping from SVM scores to estimated posterior probabilities. Unlike previous proposals, the suggested mapping is interval-valued, providing a set of posterior probabilities compatible with each SVM score. This framework offers a new way to adapt the SVM optimization problem to unbalanced classification, when decisions result in unequal (asymmetric) losses. Experiments show improvements over state-of-the-art procedures. 1

5 0.10944305 117 nips-2005-Learning from Data of Variable Quality

Author: Koby Crammer, Michael Kearns, Jennifer Wortman

Abstract: We initiate the study of learning from multiple sources of limited data, each of which may be corrupted at a different rate. We develop a complete theory of which data sources should be used for two fundamental problems: estimating the bias of a coin, and learning a classifier in the presence of label noise. In both cases, efficient algorithms are provided for computing the optimal subset of data. 1

6 0.092505634 194 nips-2005-Top-Down Control of Visual Attention: A Rational Account

7 0.09249521 106 nips-2005-Large-scale biophysical parameter estimation in single neurons via constrained linear regression

8 0.09114223 30 nips-2005-Assessing Approximations for Gaussian Process Classification

9 0.083946809 113 nips-2005-Learning Multiple Related Tasks using Latent Independent Component Analysis

10 0.083651699 192 nips-2005-The Information-Form Data Association Filter

11 0.083358422 67 nips-2005-Extracting Dynamical Structure Embedded in Neural Activity

12 0.082825422 114 nips-2005-Learning Rankings via Convex Hull Separation

13 0.080282778 133 nips-2005-Nested sampling for Potts models

14 0.078338519 129 nips-2005-Modeling Neural Population Spiking Activity with Gibbs Distributions

15 0.076497674 50 nips-2005-Convex Neural Networks

16 0.075757742 197 nips-2005-Unbiased Estimator of Shape Parameter for Spiking Irregularities under Changing Environments

17 0.074082248 38 nips-2005-Beyond Gaussian Processes: On the Distributions of Infinite Networks

18 0.072653085 136 nips-2005-Noise and the two-thirds power Law

19 0.07261233 28 nips-2005-Analyzing Auditory Neurons by Learning Distance Functions

20 0.072497658 58 nips-2005-Divergences, surrogate loss functions and experimental design


similar papers computed by lsi model

lsi for this paper:

topicId topicWeight

[(0, 0.268), (1, 0.007), (2, -0.019), (3, 0.067), (4, 0.129), (5, -0.024), (6, -0.041), (7, -0.104), (8, -0.092), (9, -0.001), (10, 0.071), (11, -0.101), (12, 0.092), (13, 0.092), (14, 0.015), (15, -0.019), (16, 0.011), (17, 0.122), (18, 0.006), (19, 0.083), (20, -0.068), (21, 0.048), (22, 0.063), (23, -0.039), (24, 0.019), (25, 0.112), (26, 0.071), (27, -0.014), (28, -0.014), (29, 0.071), (30, -0.066), (31, 0.037), (32, -0.07), (33, 0.003), (34, -0.081), (35, 0.055), (36, -0.061), (37, -0.078), (38, -0.066), (39, 0.007), (40, 0.047), (41, 0.134), (42, 0.05), (43, -0.037), (44, 0.058), (45, -0.041), (46, -0.042), (47, 0.159), (48, 0.013), (49, -0.109)]

similar papers list:

simIndex simValue paperId paperTitle

same-paper 1 0.96016651 140 nips-2005-Nonparametric inference of prior probabilities from Bayes-optimal behavior

Author: Liam Paninski

Abstract: We discuss a method for obtaining a subject’s a priori beliefs from his/her behavior in a psychophysics context, under the assumption that the behavior is (nearly) optimal from a Bayesian perspective. The method is nonparametric in the sense that we do not assume that the prior belongs to any fixed class of distributions (e.g., Gaussian). Despite this increased generality, the method is relatively simple to implement, being based in the simplest case on a linear programming algorithm, and more generally on a straightforward maximum likelihood or maximum a posteriori formulation, which turns out to be a convex optimization problem (with no non-global local maxima) in many important cases. In addition, we develop methods for analyzing the uncertainty of these estimates. We demonstrate the accuracy of the method in a simple simulated coin-flipping setting; in particular, the method is able to precisely track the evolution of the subject’s posterior distribution as more and more data are observed. We close by briefly discussing an interesting connection to recent models of neural population coding.

2 0.70653659 173 nips-2005-Sensory Adaptation within a Bayesian Framework for Perception

Author: Alan Stocker, Eero P. Simoncelli

Abstract: We extend a previously developed Bayesian framework for perception to account for sensory adaptation. We first note that the perceptual effects of adaptation seems inconsistent with an adjustment of the internally represented prior distribution. Instead, we postulate that adaptation increases the signal-to-noise ratio of the measurements by adapting the operational range of the measurement stage to the input range. We show that this changes the likelihood function in such a way that the Bayesian estimator model can account for reported perceptual behavior. In particular, we compare the model’s predictions to human motion discrimination data and demonstrate that the model accounts for the commonly observed perceptual adaptation effects of repulsion and enhanced discriminability.
Perception as Bayesian Inference. Cambridge University Press, 1996. [3] Y. Weiss, E. Simoncelli, and E. Adelson. Motion illusions as optimal percept. Nature Neuroscience, 5(6):598–604, June 2002. [4] H.B. Barlow. Vision: Coding and Efficiency, chapter A theory about the functional role and synaptic mechanism of visual after-effects, pages 363–375. Cambridge University Press., 1990. [5] M.J. Wainwright. Visual adaptation as optimal information transmission. Vision Research, 39:3960–3974, 1999. [6] N. Brenner, W. Bialek, and R. de Ruyter van Steveninck. Adaptive rescaling maximizes information transmission. Neuron, 26:695–702, June 2000. [7] S.M. Smirnakis, M.J. Berry, D.K. Warland, W. Bialek, and M. Meister. Adaptation of retinal processing to image contrast and spatial scale. Nature, 386:69–73, March 1997. [8] P. Thompson. Velocity after-effects: the effects of adaptation to moving stimuli on the perception of subsequently seen moving stimuli. Vision Research, 21:337–345, 1980. [9] A.T. Smith. Velocity coding: evidence from perceived velocity shifts. Vision Research, 25(12):1969–1976, 1985. [10] P. Schrater and E. Simoncelli. Local velocity representation: evidence from motion adaptation. Vision Research, 38:3899–3912, 1998. [11] C.W. Clifford. Perceptual adaptation: motion parallels orientation. Trends in Cognitive Sciences, 6(3):136–143, March 2002. [12] C. Clifford and P. Wenderoth. Adaptation to temporal modulaton can enhance differential speed sensitivity. Vision Research, 39:4324–4332, 1999. [13] A. Kristjansson. Increased sensitivity to speed changes during adaptation to first-order, but not to second-order motion. Vision Research, 41:1825–1832, 2001. [14] R.E. Phinney, C. Bowd, and R. Patterson. Direction-selective coding of stereoscopic (cyclopean) motion. Vision Research, 37(7):865–869, 1997. [15] N.M. Grzywacz and R.M. Balboa. A Bayesian framework for sensory adaptation. Neural Computation, 14:543–559, 2002. [16] A.A. Stocker and E.P. Simoncelli. Constraining a Bayesian model of human visual speed perception. In Lawrence K. Saul, Yair Weiss, and L´ on Bottou, editors, Advances in Neural Infore mation Processing Systems NIPS 17, pages 1361–1368, Cambridge, MA, 2005. MIT Press. [17] D. Tranchina, J. Gordon, and R.M. Shapley. Retinal light adaptation – evidence for a feedback mechanism. Nature, 310:314–316, July 1984. ´ [18] S. Deneve. Bayesian inference in spiking neurons. In Lawrence K. Saul, Yair Weiss, and L eon Bottou, editors, Adv. Neural Information Processing Systems (NIPS*04), vol 17, Cambridge, MA, 2005. MIT Press. [19] R. Rao. Hierarchical Bayesian inference in networks of spiking neurons. In Lawrence K. Saul, Yair Weiss, and L´ on Bottou, editors, Adv. Neural Information Processing Systems (NIPS*04), e vol 17, Cambridge, MA, 2005. MIT Press.
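Continuing the sketch above, the threshold prediction of Eq. (3) in this entry can be approximated by differentiating the estimator numerically. This reuses the `estimate` and `noise_var` helpers from the previous sketch; the finite-difference step and the proportionality between threshold and standard deviation are assumptions.

```python
import numpy as np

def relative_threshold(theta_stim, adapted, eps=0.5):
    """Eq. (3) of the entry above, approximated numerically:
    var(theta_hat | theta_stim) ~ var(m) * (d theta_hat / d m)^2,
    evaluated at m = theta_stim; the discrimination threshold is taken
    to be proportional to the resulting standard deviation."""
    slope = (estimate(theta_stim + eps, adapted)
             - estimate(theta_stim - eps, adapted)) / (2.0 * eps)
    var_m = float(noise_var(np.array([theta_stim]), adapted)[0])
    return np.sqrt(var_m) * abs(slope)

stims = np.arange(-60.0, 61.0, 5.0)
ratio = [relative_threshold(s, True) / relative_threshold(s, False)
         for s in stims]
# A dip below 1 near the adaptor (better discrimination) with mild
# side-lobes above 1 would mirror Figure 4a; the exact shape depends on
# the assumed noise profile.
```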

3 0.64069396 3 nips-2005-A Bayesian Framework for Tilt Perception and Confidence

Author: Odelia Schwartz, Peter Dayan, Terrence J. Sejnowski

Abstract: The misjudgement of tilt in images lies at the heart of entertaining visual illusions and rigorous perceptual psychophysics. A wealth of findings has attracted many mechanistic models, but few clear computational principles. We adopt a Bayesian approach to perceptual tilt estimation, showing how a smoothness prior offers a powerful way of addressing much confusing data. In particular, we faithfully model recent results showing that confidence in estimation can be systematically affected by the same aspects of images that affect bias. Confidence is central to Bayesian modeling approaches, and is applicable in many other perceptual domains. Perceptual anomalies and illusions, such as the misjudgements of motion and tilt evident in so many psychophysical experiments, have intrigued researchers for decades.1–3 A Bayesian view4–8 has been particularly influential in models of motion processing, treating such anomalies as the normative product of prior information (often statistically codifying Gestalt laws) with likelihood information from the actual scenes presented. Here, we expand the range of statistically normative accounts to tilt estimation, for which there are classes of results (on estimation confidence) that are so far not available for motion. The tilt illusion arises when the perceived tilt of a center target is misjudged (ie bias) in the presence of flankers. Another phenomenon, called Crowding, refers to a loss in the confidence (ie sensitivity) of perceived target tilt in the presence of flankers. Attempts have been made to formalize these phenomena quantitatively. Crowding has been modeled as compulsory feature pooling (ie averaging of orientations), ignoring spatial positions.9, 10 The tilt illusion has been explained by lateral interactions11, 12 in populations of orientationtuned units; and by calibration.13 However, most models of this form cannot explain a number of crucial aspects of the data. First, the geometry of the positional arrangement of the stimuli affects attraction versus repulsion in bias, as emphasized by Kapadia et al14 (figure 1A), and others.15, 16 Second, Solomon et al. recently measured bias and sensitivity simultaneously.11 The rich and surprising range of sensitivities, far from flat as a function of flanker angles (figure 1B), are outside the reach of standard models. Moreover, current explanations do not offer a computational account of tilt perception as the outcome of a normative inference process. Here, we demonstrate that a Bayesian framework for orientation estimation, with a prior favoring smoothness, can naturally explain a range of seemingly puzzling tilt data. We explicitly consider both the geometry of the stimuli, and the issue of confidence in the esti- 6 5 4 3 2 1 0 -1 -2 (B) Attraction Repulsion Sensititvity (1/deg) Bias (deg) (A) 0.6 0.5 0.4 0.3 0.2 0.1 -80 -60 -40 -20 0 20 40 60 80 Flanker tilt (deg) Figure 1: Tilt biases and sensitivities in visual perception. (A) Kapadia et al demonstrated the importance of geometry on tilt bias, with bar stimuli in the fovea (and similar results in the periphery). When 5 degrees clockwise flankers are arranged colinearly, the center target appears attracted in the direction of the flankers; when flankers are lateral, the target appears repulsed. Data are an average of 5 subjects.14 (B) Solomon et al measured both biases and sensitivities for gratings in the visual periphery.11 On the top are example stimuli, with flankers tilted 22.5 degrees clockwise. 
This constitutes the classic tilt illusion, with a repulsive bias percept. In addition, sensitivities vary as a function of flanker angles, in a systematic way (even in cases when there are no biases at all). Sensitivities are given in units of the inverse of standard deviation of the tilt estimate. More detailed data for both experiments are shown in the results section. mation. Bayesian analyses have most frequently been applied to bias. Much less attention has been paid to the equally important phenomenon of sensitivity. This aspect of our model should be applicable to other perceptual domains. In section 1 we formulate the Bayesian model. The prior is determined by the principle of creating a smooth contour between the target and flankers. We describe how to extract the bias and sensitivity. In section 2 we show experimental data of Kapadia et al and Solomon et al, alongside the model simulations, and demonstrate that the model can account for both geometry, and bias and sensitivity measurements in the data. Our results suggest a more unified, rational, approach to understanding tilt perception. 1 Bayesian model Under our Bayesian model, inference is controlled by the posterior distribution over the tilt of the target element. This comes from the combination of a prior favoring smooth configurations of the flankers and target, and the likelihood associated with the actual scene. A complete distribution would consider all possible angles and relative spatial positions of the bars, and marginalize the posterior over all but the tilt of the central element. For simplicity, we make two benign approximations: conditionalizing over (ie clamping) the angles of the flankers, and exploring only a small neighborhood of their positions. We now describe the steps of inference. Smoothness prior: Under these approximations, we consider a given actual configuration (see fig 2A) of flankers f1 = (φ1 , x1 ), f2 = (φ2 , x2 ) and center target c = (φc , xc ), arranged from top to bottom. We have to generate a prior over φc and δ1 = x1 − xc and δ2 = x2 − xc based on the principle of smoothness. As a less benign approximation, we do this in two stages: articulating a principle that determines a single optimal configuration; and generating a prior as a mixture of a Gaussian about this optimum and a uniform distribution, with the mixing proportion of the latter being determined by the smoothness of the optimum. Smoothness has been extensively studied in the computer vision literature.17–20 One widely (B) (C) f1 f1 β1 R Probability max smooth Max smooth target (deg) (A) 40 20 0 -20 c δ1 c -40 Φc f2 f2 1 0.8 0.6 0.4 0.2 0 -80 -60 -40 -20 0 20 40 Flanker tilt (deg) 60 80 -80 -60 -40 20 0 20 40 Flanker tilt (deg) 60 80 Figure 2: Geometry and smoothness for flankers, f1 and f2 , and center target, c. (A) Example actual configuration of flankers and target, aligned along the y axis from top to bottom. (B) The elastica procedure can rotate the target angle (to Φc ) and shift the relative flanker and target positions on the x axis (to δ1 and δ2 ) in its search for the maximally smooth solution. Small spatial shifts (up to 1/15 the size of R) of positions are allowed, but positional shift is overemphasized in the figure for visibility. (C) Top: center tilt that results in maximal smoothness, as a function of flanker tilt. Boxed cartoons show examples for given flanker tilts, of the optimally smooth configuration. 
Note attraction of target towards flankers for small flanker angles; here flankers and target are positioned in a nearly colinear arrangement. Note also repulsion of target away from flankers for intermediate flanker angles. Bottom: P [c, f1 , f2 ] for center tilt that yields maximal smoothness. The y axis is normalized between 0 and 1. used principle, elastica, known even to Euler, has been applied to contour completion21 and other computer vision applications.17 The basic idea is to find the curve with minimum energy (ie, square of curvature). Sharon et al19 showed that the elastica function can be well approximated by a number of simpler forms. We adopt a version that Leung and Malik18 adopted from Sharon et al.19 We assume that the probability for completing a smooth curve, can be factorized into two terms: P [c, f1 , f2 ] = G(c, f1 )G(c, f2 ) (1) with the term G(c, f1 ) (and similarly, G(c, f2 )) written as: R Dβ 2 2 Dβ = β1 + βc − β1 βc (2) − ) where σR σβ and β1 (and similarly, βc ) is the angle between the orientation at f1 , and the line joining f1 and c. The distance between the centers of f1 and c is given by R. The two constants, σβ and σR , control the relative contribution to smoothness of the angle versus the spatial distance. Here, we set σβ = 1, and σR = 1.5. Figure 2B illustrates an example geometry, in which φc , δ1 , and δ2 , have been shifted from the actual scene (of figure 2A). G(c, f1 ) = exp(− We now estimate the smoothest solution for given configurations. Figure 2C shows for given flanker tilts, the center tilt that yields maximal smoothness, and the corresponding probability of smoothness. For near vertical flankers, the spatial lability leads to very weak attraction and high probability of smoothness. As the flanker angle deviates farther from vertical, there is a large repulsion, but also lower probability of smoothness. These observations are key to our model: the maximally smooth center tilt will influence attractive and repulsive interactions of tilt estimation; the probability of smoothness will influence the relative weighting of the prior versus the likelihood. From the smoothness principle, we construct a two dimensional prior (figure 3A). One dimension represents tilt, the other dimension, the overall positional shift between target (B) Likelihood (D) Marginalized Posterior (C) Posterior 20 0.03 10 -10 -20 0 Probability 0 10 Angle Angle Angle 10 0 -10 -20 0.01 -10 -20 0.02 0 -0. 2 0 Position 0.2 (E) Psychometric function 20 -0. 2 0 0.2 -0. 2 0 0.2 Position Position -10 -5 0 Angle 5 10 Probability clockwise (A) Prior 20 1 0.8 0.6 0.4 0.2 0 -20 -10 0 10 20 Target angle (deg) Counter-clockwise Clockwise Figure 3: Bayes model for example flankers and target. (A) Prior 2D distribution for flankers set at 22.5 degrees (note repulsive preference for -5.5 degrees). (B) Likelihood 2D distribution for a target tilt of 3 degrees; (C) Posterior 2D distribution. All 2D distributions are drawn on the same grayscale range, and the presence of a larger baseline in the prior causes it to appear more dimmed. (D) Marginalized posterior, resulting in 1D distribution over tilt. Dashed line represents the mean, with slight preference for negative angle. (E) For this target tilt, we calculate probability clockwise, and obtain one point on psychometric curve. and flankers (called ’position’). 
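The smoothness computation of Eqs. (1)-(2) above can be sketched as follows. The exponent of Eq. (2) is reconstructed from the garbled text as G = exp(-D_beta/sigma_beta^2 - R/sigma_R^2), and the angle conventions, units, and shift grid are assumptions, so this is meant to show the shape of the search in Figure 2B rather than to reproduce Figure 2C exactly.

```python
import numpy as np

SIGMA_BETA, SIGMA_R, R = 1.0, 1.5, 1.0   # constants quoted in the text

def g_term(phi_flank, phi_c, dx):
    """Smoothness term between one flanker and the center target.
    dx: horizontal offset of the flanker relative to the target; the
    flanker sits a vertical distance R away.  Angles are in radians,
    measured from vertical.  Eq. (2) is reconstructed here as
    G = exp(-D_beta / sigma_beta^2 - R / sigma_R^2), with
    D_beta = beta_1^2 + beta_c^2 - beta_1 * beta_c."""
    line = np.arctan2(dx, R)             # tilt of the connecting line
    b1, bc = phi_flank - line, phi_c - line
    d_beta = b1 ** 2 + bc ** 2 - b1 * bc
    return np.exp(-d_beta / SIGMA_BETA ** 2 - R / SIGMA_R ** 2)

def smoothest_center(phi_flank_deg):
    """Center tilt (deg) maximizing P[c,f1,f2] = G(c,f1) * G(c,f2) over a
    grid of candidate tilts and small positional shifts (up to R/15)."""
    phi_f = np.deg2rad(phi_flank_deg)
    shifts = np.linspace(-R / 15.0, R / 15.0, 7)
    best_tilt, best_p = 0.0, -1.0
    for phi_c in np.deg2rad(np.arange(-60.0, 60.5, 1.0)):
        for d1 in shifts:                # upper flanker offset
            for d2 in shifts:            # lower flanker offset
                p = g_term(phi_f, phi_c, d1) * g_term(phi_f, phi_c, -d2)
                if p > best_p:
                    best_tilt, best_p = phi_c, p
    return np.rad2deg(best_tilt), best_p

for tilt in (5.0, 22.5, 45.0):
    print(tilt, smoothest_center(tilt))
```

The returned smoothness value plays the role of P[c, f1, f2], which sets the baseline weight of the two-dimensional prior described next.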
The prior is a 2D Gaussian distribution, sat upon a constant baseline.22 The Gaussian is centered at the estimated smoothest target angle and relative position, and the baseline is determined by the probability of smoothness. The baseline, and its dependence on the flanker orientation, is a key difference from Weiss et al’s Gaussian prior for smooth, slow motion. It can be seen as a mechanism to allow segmentation (see Posterior description below). The standard deviation of the Gaussian is a free parameter. Likelihood: The likelihood over tilt and position (figure 3B) is determined by a 2D Gaussian distribution with an added baseline.22 The Gaussian is centered at the actual target tilt; and at a position taken as zero, since this is the actual position, to which the prior is compared. The standard deviation and baseline constant are free parameters. Posterior and marginalization: The posterior comes from multiplying likelihood and prior (figure 3C) and then marginalizing over position to obtain a 1D distribution over tilt. Figure 3D shows an example in which this distribution is bimodal. Other likelihoods, with closer agreement between target and smooth prior, give unimodal distributions. Note that the bimodality is a direct consequence of having an added baseline to the prior and likelihood (if these were Gaussian without a baseline, the posterior would always be Gaussian). The viewer is effectively assessing whether the target is associated with the same object as the flankers, and this is reflected in the baseline, and consequently, in the bimodality, and confidence estimate. We define α as the mean angle of the 1D posterior distribution (eg, value of dashed line on the x axis), and β as the height of the probability distribution at that mean angle (eg, height of dashed line). The term β is an indication of confidence in the angle estimate, where for larger values we are more certain of the estimate. Decision of probability clockwise: The probability of a clockwise tilt is estimated from the marginalized posterior: 1 P = 1 + exp (3) −α.∗k − log(β+η) where α and β are defined as above, k is a free parameter and η a small constant. Free parameters are set to a single constant value for all flanker and center configurations. Weiss et al use a similar compressive nonlinearity, but without the term β. We also tried a decision function that integrates the posterior, but the resulting curves were far from the sigmoidal nature of the data. Bias and sensitivity: For one target tilt, we generate a single probability and therefore a single point on the psychometric function relating tilt to the probability of choosing clockwise. We generate the full psychometric curve from all target tilts and fit to it a cumulative 60 40 20 -5 0 5 Target tilt (deg) 10 80 60 40 20 0 -10 (C) Data -5 0 5 Target tilt (deg) 10 80 60 40 20 0 -10 (D) Model 100 100 100 80 0 -10 Model Frequency responding clockwise (B) Data Frequency responding clockwise Frequency responding clockwise Frequency responding clockwise (A) 100 -5 0 5 Target tilt (deg) 10 80 60 40 20 0 -10 -5 0 5 10 Target tilt (deg) Figure 4: Kapadia et al data,14 versus Bayesian model. Solid lines are fits to a cumulative Gaussian distribution. (A) Flankers are tilted 5 degrees clockwise (black curve) or anti-clockwise (gray) of vertical, and positioned spatially in a colinear arrangement. The center bar appears tilted in the direction of the flankers (attraction), as can be seen by the attractive shift of the psychometric curve. 
The boxed stimuli cartoon illustrates a vertical target amidst the flankers. (B) Model for colinear bars also produces attraction. (C) Data and (D) model for lateral flankers results in repulsion. All data are collected in the fovea for bars. Gaussian distribution N (µ, σ) (figure 3E). The mean µ of the fit corresponds to the bias, 1 and σ to the sensitivity, or confidence in the bias. The fit to a cumulative Gaussian and extraction of these parameters exactly mimic psychophysical procedures.11 2 Results: data versus model We first consider the geometry of the center and flanker configurations, modeling the full psychometric curve for colinear and parallel flanks (recall that figure 1A showed summary biases). Figure 4A;B demonstrates attraction in the data and model; that is, the psychometric curve is shifted towards the flanker, because of the nature of smooth completions for colinear flankers. Figure 4C;D shows repulsion in the data and model. In this case, the flankers are arranged laterally instead of colinearly. The smoothest solution in the model arises by shifting the target estimate away from the flankers. This shift is rather minor, because the configuration has a low probability of smoothness (similar to figure 2C), and thus the prior exerts only a weak effect. The above results show examples of changes in the psychometric curve, but do not address both bias and, particularly, sensitivity, across a whole range of flanker configurations. Figure 5 depicts biases and sensitivity from Solomon et al, versus the Bayes model. The data are shown for a representative subject, but the qualitative behavior is consistent across all subjects tested. In figure 5A, bias is shown, for the condition that both flankers are tilted at the same angle. The data exhibit small attraction at near vertical flanker angles (this arrangement is close to colinear); large repulsion at intermediate flanker angles of 22.5 and 45 degrees from vertical; and minimal repulsion at large angles from vertical. This behavior is also exhibited in the Bayes model (Figure 5B). For intermediate flanker angles, the smoothest solution in the model is repulsive, and the effect of the prior is strong enough to induce a significant repulsion. For large angles, the prior exerts almost no effect. Interestingly, sensitivity is far from flat in both data and model. In the data (Figure 5C), there is most loss in sensitivity at intermediate flanker angles of 22.5 and 45 degrees (ie, the subject is less certain); and sensitivity is higher for near vertical or near horizontal flankers. The model shows the same qualitative behavior (Figure 5D). In the model, there are two factors driving sensitivity: one is the probability of completing a smooth curvature for a given flanker configuration, as in Figure 2B; this determines the strength of the prior. 
The other factor is certainty in a particular center estimation; this is determined by β, derived from the posterior distribution, and incorporated into the decision stage of the model Data 5 0 -60 -40 -80 -60 -40 -20 0 20 40 Flanker tilt (deg) -20 0 20 40 Flanker tilt (deg) 60 60 80 -60 -40 0.6 0.5 0.4 0.3 0.2 0.1 -20 0 20 40 Flanker tilt (deg) 60 80 60 80 -80 -60 -40 -20 0 20 40 Flanker tilt (deg) -80 -60 -40 -20 0 20 40 Flanker tilt (deg) 60 80 -20 0 20 40 Flanker tilt (deg) 60 80 (F) Bias (deg) 10 5 0 -5 0.6 0.5 0.4 0.3 0.2 0.1 -80 (D) 10 -10 -10 80 Sensitivity (1/deg) -80 5 0 -5 -80 -80 -60 -60 -40 -40 -20 0 20 40 Flanker tilt (deg) -20 0 20 40 Flanker tilt (deg) 60 -10 80 (H) 60 80 Sensitivity (1/deg) Sensititvity (1/deg) Bias (deg) 0.6 0.5 0.4 0.3 0.2 0.1 (G) Sensititvity (1/deg) 0 -5 (C) (E) 5 -5 -10 Model (B) 10 Bias (deg) Bias (deg) (A) 10 0.6 0.5 0.4 0.3 0.2 0.1 -80 -60 -40 Figure 5: Solomon et al data11 (subject FF), versus Bayesian model. (A) Data and (B) model biases with same-tilted flankers; (C) Data and (D) model sensitivities with same-tilted flankers; (E;G) data and (F;H) model as above, but for opposite-tilted flankers (note that opposite-tilted data was collected for less flanker angles). Each point in the figure is derived by fitting a cummulative Gaussian distribution N (µ, σ) to corresponding psychometric curve, and setting bias 1 equal to µ and sensitivity to σ . In all experiments, flanker and target gratings are presented in the visual periphery. Both data and model stimuli are averages of two configurations, on the left hand side (9 O’clock position) and right hand side (3 O’clock position). The configurations are similar to Figure 1 (B), but slightly shifted according to an iso-eccentric circle, so that all stimuli are similarly visible in the periphery. (equation 3). For flankers that are far from vertical, the prior has minimal effect because one cannot find a smooth solution (eg, the likelihood dominates), and thus sensitivity is higher. The low sensitivity at intermediate angles arises because the prior has considerable effect; and there is conflict between the prior (tilt, position), and likelihood (tilt, position). This leads to uncertainty in the target angle estimation . For flankers near vertical, the prior exerts a strong effect; but there is less conflict between the likelihood and prior estimates (tilt, position) for a vertical target. This leads to more confidence in the posterior estimate, and therefore, higher sensitivity. The only aspect that our model does not reproduce is the (more subtle) sensitivity difference between 0 and +/- 5 degree flankers. Figure 5E-H depict data and model for opposite tilted flankers. The bias is now close to zero in the data (Figure 5E) and model (Figure 5F), as would be expected (since the maximally smooth angle is now always roughly vertical). Perhaps more surprisingly, the sensitivities continue to to be non-flat in the data (Figure 5G) and model (Figure 5H). This behavior arises in the model due to the strength of prior, and positional uncertainty. As before, there is most loss in sensitivity at intermediate angles. Note that to fit Kapadia et al, simulations used a constant parameter of k = 9 in equation 3, whereas for the Solomon et al. simulations, k = 2.5. This indicates that, in our model, there was higher confidence in the foveal experiments than in the peripheral ones. 
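Before turning to the discussion, here is a compact, hedged sketch of the prior x likelihood x decision pipeline described in Section 1; the Gaussian widths, baselines, grid ranges, and the reconstructed form of Eq. (3) are assumptions rather than the authors' settings.

```python
import numpy as np

tilt = np.linspace(-30.0, 30.0, 241)     # deg
pos = np.linspace(-0.3, 0.3, 61)         # relative positional shift
T, P = np.meshgrid(tilt, pos, indexing="ij")

def gauss2d(t0, p0, st, sp):
    return np.exp(-0.5 * (((T - t0) / st) ** 2 + ((P - p0) / sp) ** 2))

def prob_clockwise(target_tilt, smooth_tilt, smooth_pos, p_smooth,
                   k=2.5, eta=1e-3):
    """Prior x likelihood -> marginalized posterior -> decision.
    The prior is a Gaussian around the maximally smooth configuration
    mixed with a uniform baseline weighted by (1 - p_smooth); the
    likelihood is a Gaussian at the true target tilt (position 0) plus
    a small baseline.  The decision rule is a reconstruction of Eq. (3):
    P = 1 / (1 + exp(-k * alpha - log(beta + eta)))."""
    prior = p_smooth * gauss2d(smooth_tilt, smooth_pos, 4.0, 0.1) + (1.0 - p_smooth)
    like = gauss2d(target_tilt, 0.0, 3.0, 0.1) + 0.05
    post = prior * like
    post /= post.sum()
    post_tilt = post.sum(axis=1)                  # marginalize over position
    alpha = float((tilt * post_tilt).sum())       # mean of the 1D posterior
    beta = float(post_tilt[np.argmin(np.abs(tilt - alpha))])  # height at mean
    return 1.0 / (1.0 + np.exp(-k * alpha - np.log(beta + eta)))

# One point on a psychometric curve: a 3-degree target whose flankers
# yield a smoothest completion at -5.5 degrees (all values illustrative).
print(prob_clockwise(3.0, -5.5, 0.0, p_smooth=0.4))
```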
3 Discussion We applied a Bayesian framework to the widely studied tilt illusion, and demonstrated the model on examples from two different data sets involving foveal and peripheral estimation. Our results support the appealing hypothesis that perceptual misjudgements are not a consequence of poor system design, but rather can be described as optimal inference.4–8 Our model accounts correctly for both attraction and repulsion, determined by the smoothness prior and the geometry of the scene. We emphasized the issue of estimation confidence. The dataset showing how confidence is affected by the same issues that affect bias,11 was exactly appropriate for a Bayesian formulation; other models in the literature typically do not incorporate confidence in a thoroughly probabilistic manner. In fact, our model fits the confidence (and bias) data more proficiently than an account based on lateral interactions among a population of orientationtuned cells.11 Other Bayesian work, by Stocker et al,6 utilized the full slope of the psychometric curve in fitting a prior and likelihood to motion data, but did not examine the issue of confidence. Estimation confidence plays a central role in Bayesian formulations as a whole. Understanding how priors affect confidence should have direct bearing on many other Bayesian calculations such as multimodal integration.23 Our model is obviously over-simplified in a number of ways. First, we described it in terms of tilts and spatial positions; a more complete version should work in the pixel/filtering domain.18, 19 We have also only considered two flanking elements; the model is extendible to a full-field surround, whereby smoothness operates along a range of geometric directions, and some directions are more (smoothly) dominant than others. Second, the prior is constructed by summarizing the maximal smoothness information; a more probabilistically correct version should capture the full probability of smoothness in its prior. Third, our model does not incorporate a formal noise representation; however, sensitivities could be influenced both by stimulus-driven noise and confidence. Fourth, our model does not address attraction in the so-called indirect tilt illusion, thought to be mediated by a different mechanism. Finally, we have yet to account for neurophysiological data within this framework, and incorporate constraints at the neural implementation level. However, versions of our computations are oft suggested for intra-areal and feedback cortical circuits; and smoothness principles form a key part of the association field connection scheme in Li’s24 dynamical model of contour integration in V1. Our model is connected to a wealth of literature in computer vision and perception. Notably, occlusion and contour completion might be seen as the extreme example in which there is no likelihood information at all for the center target; a host of papers have shown that under these circumstances, smoothness principles such as elastica and variants explain many aspects of perception. The model is also associated with many studies on contour integration motivated by Gestalt principles;25, 26 and exploration of natural scene statistics and Gestalt,27, 28 including the relation to contour grouping within a Bayesian framework.29, 30 Indeed, our model could be modified to include a prior from natural scenes. There are various directions for the experimental test and refinement of our model. 
Most pressing is to determine bias and sensitivity for different center and flanker contrasts. As in the case of motion, our model predicts that when there is more uncertainty in the center element, prior information is more dominant. Another interesting test would be to design a task such that the center element is actually part of a different figure and unrelated to the flankers; our framework predicts that there would be minimal bias, because of segmentation. Our model should also be applied to other tilt-based illusions such as the Fraser spiral and Z¨ llner. Finally, our model can be applied to other perceptual domains;31 and given o the apparent similarities between the tilt illusion and the tilt after-effect, we plan to extend the model to adaptation, by considering smoothness in time as well as space. Acknowledgements This work was funded by the HHMI (OS, TJS) and the Gatsby Charitable Foundation (PD). We are very grateful to Serge Belongie, Leanne Chukoskie, Philip Meier and Joshua Solomon for helpful discussions. References [1] J J Gibson. Adaptation, after-effect, and contrast in the perception of tilted lines. Journal of Experimental Psychology, 20:553–569, 1937. [2] C Blakemore, R H S Carpentar, and M A Georgeson. Lateral inhibition between orientation detectors in the human visual system. Nature, 228:37–39, 1970. [3] J A Stuart and H M Burian. A study of separation difficulty: Its relationship to visual acuity in normal and amblyopic eyes. American Journal of Ophthalmology, 53:471–477, 1962. [4] A Yuille and H H Bulthoff. Perception as bayesian inference. In Knill and Whitman, editors, Bayesian decision theory and psychophysics, pages 123–161. Cambridge University Press, 1996. [5] Y Weiss, E P Simoncelli, and E H Adelson. Motion illusions as optimal percepts. Nature Neuroscience, 5:598–604, 2002. [6] A Stocker and E P Simoncelli. Constraining a bayesian model of human visual speed perception. Adv in Neural Info Processing Systems, 17, 2004. [7] D Kersten, P Mamassian, and A Yuille. Object perception as bayesian inference. Annual Review of Psychology, 55:271–304, 2004. [8] K Kording and D Wolpert. Bayesian integration in sensorimotor learning. Nature, 427:244–247, 2004. [9] L Parkes, J Lund, A Angelucci, J Solomon, and M Morgan. Compulsory averaging of crowded orientation signals in human vision. Nature Neuroscience, 4:739–744, 2001. [10] D G Pelli, M Palomares, and N J Majaj. Crowding is unlike ordinary masking: Distinguishing feature integration from detection. Journal of Vision, 4:1136–1169, 2002. [11] J Solomon, F M Felisberti, and M Morgan. Crowding and the tilt illusion: Toward a unified account. Journal of Vision, 4:500–508, 2004. [12] J A Bednar and R Miikkulainen. Tilt aftereffects in a self-organizing model of the primary visual cortex. Neural Computation, 12:1721–1740, 2000. [13] C W Clifford, P Wenderoth, and B Spehar. A functional angle on some after-effects in cortical vision. Proc Biol Sci, 1454:1705–1710, 2000. [14] M K Kapadia, G Westheimer, and C D Gilbert. Spatial distribution of contextual interactions in primary visual cortex and in visual perception. J Neurophysiology, 4:2048–262, 2000. [15] C C Chen and C W Tyler. Lateral modulation of contrast discrimination: Flanker orientation effects. Journal of Vision, 2:520–530, 2002. [16] I Mareschal, M P Sceniak, and R M Shapley. Contextual influences on orientation discrimination: binding local and global cues. Vision Research, 41:1915–1930, 2001. [17] D Mumford. Elastica and computer vision. 
In Chandrajit Bajaj, editor, Algebraic geometry and its applications. Springer Verlag, 1994. [18] T K Leung and J Malik. Contour continuity in region based image segmentation. In Proc. ECCV, pages 544–559, 1998. [19] E Sharon, A Brandt, and R Basri. Completion energies and scale. IEEE Pat. Anal. Mach. Intell., 22(10), 1997. [20] S W Zucker, C David, A Dobbins, and L Iverson. The organization of curve detection: coarse tangent fields. Computer Graphics and Image Processing, 9(3):213–234, 1988. [21] S Ullman. Filling in the gaps: the shape of subjective contours and a model for their generation. Biological Cybernetics, 25:1–6, 1976. [22] G E Hinton and A D Brown. Spiking boltzmann machines. Adv in Neural Info Processing Systems, 12, 1998. [23] R A Jacobs. What determines visual cue reliability? Trends in Cognitive Sciences, 6:345–350, 2002. [24] Z Li. A saliency map in primary visual cortex. Trends in Cognitive Science, 6:9–16, 2002. [25] D J Field, A Hayes, and R F Hess. Contour integration by the human visual system: evidence for a local “association field”. Vision Research, 33:173–193, 1993. [26] J Beck, A Rosenfeld, and R Ivry. Line segregation. Spatial Vision, 4:75–101, 1989. [27] M Sigman, G A Cecchi, C D Gilbert, and M O Magnasco. On a common circle: Natural scenes and gestalt rules. PNAS, 98(4):1935–1940, 2001. [28] S Mahumad, L R Williams, K K Thornber, and K Xu. Segmentation of multiple salient closed contours from real images. IEEE Pat. Anal. Mach. Intell., 25(4):433–444, 1997. [29] W S Geisler, J S Perry, B J Super, and D P Gallogly. Edge co-occurence in natural images predicts contour grouping performance. Vision Research, 6:711–724, 2001. [30] J H Elder and R M Goldberg. Ecological statistics of gestalt laws for the perceptual organization of contours. Journal of Vision, 4:324–353, 2002. [31] S R Lehky and T J Sejnowski. Neural model of stereoacuity and depth interpolation based on a distributed representation of stereo disparity. Journal of Neuroscience, 10:2281–2299, 1990.
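The cumulative-Gaussian fitting step used throughout the entry above to turn a psychometric curve into a bias and a sensitivity can be sketched as follows; the sample probabilities are made up for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def cum_gauss(x, mu, sigma):
    return norm.cdf(x, loc=mu, scale=sigma)

# Probability of responding "clockwise" at each target tilt, e.g. produced
# by a decision rule like the one above or measured from a subject.
target = np.array([-10.0, -7.5, -5.0, -2.5, 0.0, 2.5, 5.0, 7.5, 10.0])
p_cw = np.array([0.02, 0.05, 0.12, 0.30, 0.55, 0.78, 0.92, 0.97, 0.99])

(mu, sigma), _ = curve_fit(cum_gauss, target, p_cw, p0=(0.0, 3.0))
bias = mu                  # shift of the psychometric curve
sensitivity = 1.0 / sigma  # inverse of the fitted standard deviation
print(bias, sensitivity)
```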

4 0.61766452 156 nips-2005-Prediction and Change Detection

Author: Mark Steyvers, Scott Brown

Abstract: We measure the ability of human observers to predict the next datum in a sequence that is generated by a simple statistical process undergoing change at random points in time. Accurate performance in this task requires the identification of changepoints. We assess individual differences between observers both empirically, and using two kinds of models: a Bayesian approach for change detection and a family of cognitively plausible fast and frugal models. Some individuals detect too many changes and hence perform sub-optimally due to excess variability. Other individuals do not detect enough changes, and perform sub-optimally because they fail to notice short-term temporal trends. 1 Intr oduction Decision-making often requires a rapid response to change. For example, stock analysts need to quickly detect changes in the market in order to adjust investment strategies. Coaches need to track changes in a player’s performance in order to adjust strategy. When tracking changes, there are costs involved when either more or less changes are observed than actually occurred. For example, when using an overly conservative change detection criterion, a stock analyst might miss important short-term trends and interpret them as random fluctuations instead. On the other hand, a change may also be detected too readily. For example, in basketball, a player who makes a series of consecutive baskets is often identified as a “hot hand” player whose underlying ability is perceived to have suddenly increased [1,2]. This might lead to sub-optimal passing strategies, based on random fluctuations. We are interested in explaining individual differences in a sequential prediction task. Observers are shown stimuli generated from a simple statistical process with the task of predicting the next datum in the sequence. The latent parameters of the statistical process change discretely at random points in time. Performance in this task depends on the accurate detection of those changepoints, as well as inference about future outcomes based on the outcomes that followed the most recent inferred changepoint. There is much prior research in statistics on the problem of identifying changepoints [3,4,5]. In this paper, we adopt a Bayesian approach to the changepoint identification problem and develop a simple inference procedure to predict the next datum in a sequence. The Bayesian model serves as an ideal observer model and is useful to characterize the ways in which individuals deviate from optimality. The plan of the paper is as follows. We first introduce the sequential prediction task and discuss a Bayesian analysis of this prediction problem. We then discuss the results from a few individuals in this prediction task and show how the Bayesian approach can capture individual differences with a single “twitchiness” parameter that describes how readily changes are perceived in random sequences. We will show that some individuals are too twitchy: their performance is too variable because they base their predictions on too little of the recent data. Other individuals are not twitchy enough, and they fail to capture fast changes in the data. We also show how behavior can be explained with a set of fast and frugal models [6]. These are cognitively realistic models that operate under plausible computational constraints. 2 A pr ediction task wit h m ult iple c hange points In the prediction task, stimuli are presented sequentially and the task is to predict the next stimulus in the sequence. 
After t trials, the observer has been presented with stimuli y1, y2, …, yt and the task is to make a prediction about yt+1. After the prediction is made, the actual outcome yt+1 is revealed and the next trial proceeds to the prediction of yt+2. This procedure starts with y1 and is repeated for T trials. The observations yt are D-dimensional vectors with elements sampled from binomial distributions. The parameters of those distributions change discretely at random points in time such that the mean increases or decreases after a change point. This generates a sequence of observation vectors, y1, y2, …, yT, where each yt = {yt,1 … yt,D}. Each of the yt,d is sampled from a binomial distribution Bin(θt,d,K), so 0 ≤ yt,d ≤ K. The parameter vector θt ={θt,1 … θt,D} changes depending on the locations of the changepoints. At each time step, xt is a binary indicator for the occurrence of a changepoint occurring at time t+1. The parameter α determines the probability of a change occurring in the sequence. The generative model is specified by the following algorithm: 1. For d=1..D sample θ1,d from a Uniform(0,1) distribution 2. For t=2..T, (a) Sample xt-1 from a Bernoulli(α) distribution (b) If xt-1=0, then θt=θt-1, else for d=1..D sample θt,d from a Uniform(0,1) distribution (c) for d=1..D, sample yt from a Bin(θt,d,K) distribution Table 1 shows some data generated from the changepoint model with T=20, α=.1,and D=1. In the prediction task, y will be observed, but x and θ are not. Table 1: Example data t x θ y 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 0 0 0 1 0 0 1 0 0 0 0 0 1 0 1 0 0 0 0 0 .68 .68 .68 .68 .48 .48 .48 .74 .74 .74 .74 .74 .74 .19 .19 .87 .87 .87 .87 .87 9 7 8 7 4 4 4 9 8 3 6 7 8 2 1 8 9 9 8 8 3 A Bayesian pr ediction m ode l In both our Bayesian and fast-and-frugal analyses, the prediction task is decomposed into two inference procedures. First, the changepoint locations are identified. This is followed by predictive inference for the next outcome based on the most recent changepoint locations. Several Bayesian approaches have been developed for changepoint problems involving single or multiple changepoints [3,5]. We apply a Markov Chain Monte Carlo (MCMC) analysis to approximate the joint posterior distribution over changepoint assignments x while integrating out θ. Gibbs sampling will be used to sample from this posterior marginal distribution. The samples can then be used to predict the next outcome in the sequence. 3.1 I n f e r e nc e f o r c h a n g e p o i n t a s s i g n m e n t s . To apply Gibbs sampling, we evaluate the conditional probability of assigning a changepoint at time i, given all other changepoint assignments and the current α value. By integrating out θ, the conditional probability is P ( xi | x−i , y, α ) = ∫ P ( xi ,θ , α | x− i , y ) (1) θ where x− i represents all switch point assignments except xi. This can be simplified by considering the location of the most recent changepoint preceding and following time i and the outcomes occurring between these locations. Let niL be the number of time steps from the last changepoint up to and including the current time step i such that xi − nL =1 and xi − nL + j =0 for 0 < niL . Similarly, let niR be the number of time steps that i i follow time step i up to the next changepoint such that xi + n R =1 and xi + nR − j =0 for i R i 0 < n . Let y = L i ∑ i − niL < k ≤ i i yk and y = ∑ k < k ≤i + n R yk . 
The update equation for the R i i changepoint assignment can then be simplified to P ( xi = m | x−i ) ∝ ( ) ( ( ) D Γ 1 + y L + y R Γ 1 + Kn L + Kn R − y L − y R ⎧ i, j i, j i i i, j i, j ⎪ (1 − α ) ∏ L R Γ 2 + Kni + Kni ⎪ j =1 ⎪ ⎨ L L L R R R ⎪ D Γ 1 + yi, j Γ 1 + Kni − yi, j Γ 1 + yi, j Γ 1 + Kni − yi, j α∏ ⎪ Γ 2 + KniL Γ 2 + KniR ⎪ j =1 ⎩ ( ) ( ( ) ( ) ( ) ) ( ) m=0 ) (2) m =1 We initialize the Gibbs sampler by sampling each xt from a Bernoulli(α) distribution. All changepoint assignments are then updated sequentially by the Gibbs sampling equation above. The sampler is run for M iterations after which one set of changepoint assignments is saved. The Gibbs sampler is then restarted multiple times until S samples have been collected. Although we could have included an update equation for α, in this analysis we treat α as a known constant. This will be useful when characterizing the differences between human observers in terms of differences in α. 3.2 P r e d i c ti v e i n f er e n ce The next latent parameter value θt+1 and outcome yt+1 can be predicted on the basis of observed outcomes that occurred after the last inferred changepoint: θ t+1, j = t ∑ i =t* +1 yt+1, j = round (θt +1, j K ) yi, j / K , (3) where t* is the location of the most recent change point. By considering multiple Gibbs samples, we get a distribution over outcomes yt+1. We base the model predictions on the mean of this distribution. 3.3 I l l u s t r a t i o n o f m o d e l p er f o r m a n c e Figure 1 illustrates the performance of the model on a one dimensional sequence (D=1) generated from the changepoint model with T=160, α=0.05, and K=10. The Gibbs sampler was run for M=30 iterations and S=200 samples were collected. The top panel shows the actual changepoints (triangles) and the distribution of changepoint assignments averaged over samples. The bottom panel shows the observed data y (thin lines) as well as the θ values in the generative model (rescaled between 0 and 10). At locations with large changes between observations, the marginal changepoint probability is quite high. At other locations, the true change in the mean is very small, and the model is less likely to put in a changepoint. The lower right panel shows the distribution over predicted θt+1 values. xt 1 0.5 0 yt 10 1 5 θt+1 0.5 0 20 40 60 80 100 120 140 160 0 Figure 1. Results of model simulation. 4 Prediction experiment We tested performance of 9 human observers in the prediction task. The observers included the authors, a visitor, and one student who were aware of the statistical nature of the task as well as naïve students. The observers were seated in front of an LCD touch screen displaying a two-dimensional grid of 11 x 11 buttons. The changepoint model was used to generate a sequence of T=1500 stimuli for two binomial variables y1 and y2 (D=2, K=10). The change probability α was set to 0.1. The two variables y1 and y2 specified the two-dimensional button location. The same sequence was used for all observers. On each trial, the observer touched a button on the grid displayed on the touch screen. Following each button press, the button corresponding to the next {y1,y2} outcome in the sequence was highlighted. Observers were instructed to press the button that best predicted the next location of the highlighted button. The 1500 trials were divided into three blocks of 500 trials. Breaks were allowed between blocks. The whole experiment lasted between 15 and 30 minutes. 
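For reference, the stimulus-generation process just described (and specified in Section 2 above) can be sketched as follows; indexing is 0-based and the code is an illustration, not the authors' implementation.

```python
import numpy as np

def generate_sequence(T=1500, D=2, K=10, alpha=0.1, rng=None):
    """Sample from the changepoint model of Section 2 (0-based indices):
    theta is redrawn uniformly whenever a change occurs (probability
    alpha), and each observation y[t, d] ~ Binomial(K, theta[t, d])."""
    rng = np.random.default_rng() if rng is None else rng
    theta = np.empty((T, D))
    x = np.zeros(T, dtype=int)              # change indicators
    y = np.empty((T, D), dtype=int)
    theta[0] = rng.uniform(0.0, 1.0, size=D)
    y[0] = rng.binomial(K, theta[0])
    for t in range(1, T):
        x[t - 1] = rng.binomial(1, alpha)   # change between t-1 and t
        theta[t] = rng.uniform(0.0, 1.0, size=D) if x[t - 1] else theta[t - 1]
        y[t] = rng.binomial(K, theta[t])
    return y, theta, x

# A short one-dimensional sequence in the style of Table 1.
y, theta, x = generate_sequence(T=20, D=1, K=10, alpha=0.1,
                                rng=np.random.default_rng(0))
```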
Figure 2 shows the first 50 trials from the third block of the experiment. The top and bottom panels show the actual outcomes for the y1 and y2 button grid coordinates as well as the predictions for two observers (SB and MY). The figure shows that at trial 15, the y1 and y2 coordinates show a large shift followed by an immediate shift in observer’s MY predictions (on trial 16). Observer SB waits until trial 17 to make a shift. 10 5 0 outcomes SB predictions MY predictions 10 5 0 0 5 10 15 20 25 Trial 30 35 40 45 50 Figure 2. Trial by trial predictions from two observers. 4.1 T a s k er r o r We assessed prediction performance by comparing the prediction with the actual outcome in the sequence. Task error was measured by normalized city-block distance T 1 (4) task error= ∑ yt ,1 − ytO,1 + yt ,2 − ytO,2 (T − 1) t =2 where yO represents the observer’s prediction. Note that the very first trial is excluded from this calculation. Even though more suitable probabilistic measures for prediction error could have been adopted, we wanted to allow comparison of observer’s performance with both probabilistic and non-probabilistic models. Task error ranged from 2.8 (for participant MY) to 3.3 (for ML). We also assessed the performance of five models – their task errors ranged from 2.78 to 3.20. The Bayesian models (Section 3) had the lowest task errors, just below 2.8. This fits with our definition of the Bayesian models as “ideal observer” models – their task error is lower than any other model’s and any human observer’s task error. The fast and frugal models (Section 5) had task errors ranging from 2.85 to 3.20. 5 Modeling R esults We will refer to the models with the following letter codes: B=Bayesian Model, LB=limited Bayesian model, FF1..3=fast and frugal models 1..3. We assessed model fit by comparing the model’s prediction against the human observers’ predictions, again using a normalized city-block distance model error= T 1 ∑ ytM − ytO,1 + ytM − ytO,2 ,1 ,2 (T − 1) t=2 (5) where yM represents the model’s prediction. The model error for each individual observer is shown in Figure 3. It is important to note that because each model is associated with a set of free parameters, the parameters optimized for task error and model error are different. For Figure 3, the parameters were optimized to minimize Equation (5) for each individual observer, showing the extent to which these models can capture the performance of individual observers, not necessarily providing the best task performance. B LB FF1 FF2 MY MS MM EJ FF3 Model Error 2 1.5 1 0.5 0 PH NP DN SB ML 1 Figure 3. Model error for each individual observer. 5.1 B ay e s i a n p re d i ct i o n m o d e l s At each trial t, the model was provided with the sequence of all previous outcomes. The Gibbs sampling and inference procedures from Eq. (2) and (3) were applied with M=30 iterations and S=200 samples. The change probability α was a free parameter. In the full Bayesian model, the whole sequence of observations up to the current trial is available for prediction, leading to a memory requirement of up to T=1500 trials – a psychologically unreasonable assumption. We therefore also simulated a limited Bayesian model (LB) where the observed sequence was truncated to the last 10 outcomes. The LB model showed almost no decrement in task performance compared to the full Bayesian model. Figure 3 also shows that it fit human data quite well. 
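The normalized city-block error of Eqs. (4)-(5) is straightforward to compute; the following sketch pairs it with a naive repeat-the-last-outcome predictor for illustration.

```python
import numpy as np

def cityblock_error(actual, predicted):
    """Normalized city-block distance of Eqs. (4)-(5): the mean over
    trials 2..T of |y_t,1 - p_t,1| + |y_t,2 - p_t,2|."""
    a = np.asarray(actual, dtype=float)[1:]     # first trial excluded
    p = np.asarray(predicted, dtype=float)[1:]
    return float(np.abs(a - p).sum(axis=1).mean())

# Illustrative check with a "repeat the previous outcome" predictor
# (essentially the memoryless FF3 model mentioned below).
actual = np.array([[9, 2], [7, 3], [8, 3], [2, 9], [1, 8]])
preds = np.vstack([actual[:1], actual[:-1]])
print(cityblock_error(actual, preds))
```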
5.2 I n d i v i d u a l D i f f er e nc e s The right-hand panel of Figure 4 plots each observer’s task error as a function of the mean city-block distance between their subsequent button presses. This shows a clear U-shaped function. Observers with very variable predictions (e.g., ML and DN) had large average changes between successive button pushes, and also had large task error: These observers were too “twitchy”. Observers with very small average button changes (e.g., SB and NP) were not twitchy enough, and also had large task error. Observers in the middle had the lowest task error (e.g., MS and MY). The left-hand panel of Figure 4 shows the same data, but with the x-axis based on the Bayesian model fits. Instead of using mean button change distance to index twitchiness (as in 1 Error bars indicate bootstrapped 95% confidence intervals. the right-hand panel), the left-hand panel uses the estimated α parameters from the Bayesian model. A similar U-shaped pattern is observed: individuals with too large or too small α estimates have large task errors. 3.3 DN 3.2 Task Error ML SB 3.2 NP 3.1 Task Error 3.3 PH EJ 3 MM MS MY 2.9 2.8 10 -4 10 -3 10 -2 DN NP 3.1 3 PH EJ MM MS 2.9 B ML SB MY 2.8 10 -1 10 0 0.5 1 α 1.5 2 Mean Button Change 2.5 3 Figure 4. Task error vs. “twitchiness”. Left-hand panel indexes twitchiness using estimated α parameters from Bayesian model fits. Right-hand panel uses mean distance between successive predictions. 5.3 F a s t - a n d - F r u g a l ( F F ) p r e d ic t i o n m o d e l s These models perform the prediction task using simple heuristics that are cognitively plausible. The FF models keep a short memory of previous stimulus values and make predictions using the same two-step process as the Bayesian model. First, a decision is made as to whether the latent parameter θ has changed. Second, remembered stimulus values that occurred after the most recently detected changepoint are used to generate the next prediction. A simple heuristic is used to detect changepoints: If the distance between the most recent observation and prediction is greater than some threshold amount, a change is inferred. We defined the distance between a prediction (p) and an observation (y) as the difference between the log-likelihoods of y assuming θ=p and θ=y. Thus, if fB(.|θ, K) is the binomial density with parameters θ and K, the distance between observation y and prediction p is defined as d(y,p)=log(fB(y|y,K))-log(fB(y|p,K)). A changepoint on time step t+1 is inferred whenever d(yt,pt)>C. The parameter C governs the twitchiness of the model predictions. If C is large, only very dramatic changepoints will be detected, and the model will be too conservative. If C is small, the model will be too twitchy, and will detect changepoints on the basis of small random fluctuations. Predictions are based on the most recent M observations, which are kept in memory, unless a changepoint has been detected in which case only those observations occurring after the changepoint are used for prediction. The prediction for time step t+1 is simply the mean of these observations, say p. Human observers were reticent to make predictions very close to the boundaries. This was modeled by allowing the FF model to change its prediction for the next time step, yt+1, towards the mean prediction (0.5). This change reflects a two-way bet. If the probability of a change occurring is α, the best guess will be 0.5 if that change occurs, or the mean p if the change does not occur. 
Thus, the prediction made is actually yt+1=1/2 α+(1-α)p. Note that we do not allow perfect knowledge of the probability of a changepoint, α. Instead, an estimated value of α is used based on the number of changepoints detected in the data series up to time t. The FF model nests two simpler FF models that are psychologically interesting. If the twitchiness threshold parameter C becomes arbitrarily large, the model never detects a change and instead becomes a continuous running average model. Predictions from this model are simply a boxcar smooth of the data. Alternatively, if we assume no memory the model must based each prediction on only the previous stimulus (i.e., M=1). Above, in Figure 3, we labeled the complete FF model as FF1, the boxcar model as FF2 and the memoryless model FF3. Figure 3 showed that the complete FF model (FF1) fit the data from all observers significantly better than either the boxcar model (FF2) or the memoryless model (FF3). Exceptions were observers PH, DN and ML, for whom all three FF model fit equally well. This result suggests that our observers were (mostly) doing more than just keeping a running average of the data, or using only the most recent observation. The FF1 model fit the data about as well as the Bayesian models for all observers except MY and MS. Note that, in general, the FF1 and Bayesian model fits are very good: the average city block distance between the human data and the model prediction is around 0.75 (out of 10) buttons on both the x- and y-axes. 6 C onclusion We used an online prediction task to study changepoint detection. Human observers had to predict the next observation in stochastic sequences containing random changepoints. We showed that some observers are too “twitchy”: They perform poorly on the prediction task because they see changes where only random fluctuation exists. Other observers are not twitchy enough, and they perform poorly because they fail to see small changes. We developed a Bayesian changepoint detection model that performed the task optimally, and also provided a good fit to human data when sub-optimal parameter settings were used. Finally, we developed a fast-and-frugal model that showed how participants may be able to perform well at the task using minimal information and simple decision heuristics. Acknowledgments We thank Eric-Jan Wagenmakers and Mike Yi for useful discussions related to this work. This work was supported in part by a grant from the US Air Force Office of Scientific Research (AFOSR grant number FA9550-04-1-0317). R e f er e n ce s [1] Gilovich, T., Vallone, R. and Tversky, A. (1985). The hot hand in basketball: on the misperception of random sequences. Cognitive Psychology17, 295-314. [2] Albright, S.C. (1993a). A statistical analysis of hitting streaks in baseball. Journal of the American Statistical Association, 88, 1175-1183. [3] Stephens, D.A. (1994). Bayesian retrospective multiple changepoint identification. Applied Statistics 43(1), 159-178. [4] Carlin, B.P., Gelfand, A.E., & Smith, A.F.M. (1992). Hierarchical Bayesian analysis of changepoint problems. Applied Statistics 41(2), 389-405. [5] Green, P.J. (1995). Reversible jump Markov chain Monte Carlo computation and Bayesian model determination. Biometrika 82(4), 711-732. [6] Gigerenzer, G., & Goldstein, D.G. (1996). Reasoning the fast and frugal way: Models of bounded rationality. Psychological Review, 103, 650-669.
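A hedged, one-dimensional sketch of the fast-and-frugal predictor (FF1) described in Section 5.3 is given below; the rescaling of outcomes by K so that they can serve as binomial parameters, and the handling of the observation on which a changepoint is detected, are assumptions.

```python
import numpy as np
from scipy.stats import binom

def ff_predict(y, K=10, M=10, C=2.0):
    """One-dimensional sketch of the fast-and-frugal model (FF1): keep up
    to M recent outcomes, declare a changepoint when
    d(y, p) = log f_B(y | y/K, K) - log f_B(y | p/K, K) exceeds C,
    and shrink the memory-mean prediction toward the midpoint according
    to an estimated change probability."""
    preds, memory, n_changes = [], [], 0
    p = K / 2.0
    for t, obs in enumerate(y):
        theta_pred = min(max(p / K, 1e-9), 1.0 - 1e-9)
        d = binom.logpmf(obs, K, obs / K) - binom.logpmf(obs, K, theta_pred)
        if d > C:
            memory, n_changes = [], n_changes + 1
        memory = (memory + [int(obs)])[-M:]
        alpha_hat = n_changes / (t + 1.0)
        p = alpha_hat * (K / 2.0) + (1.0 - alpha_hat) * float(np.mean(memory))
        preds.append(p)
    return np.array(preds)        # preds[t] is the guess made for y[t+1]

rng = np.random.default_rng(1)
y = np.concatenate([rng.binomial(10, 0.8, 40), rng.binomial(10, 0.2, 40)])
print(ff_predict(y)[35:45].round(1))   # predictions drop soon after t = 40
```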

5 0.58577859 14 nips-2005-A Probabilistic Interpretation of SVMs with an Application to Unbalanced Classification

Author: Yves Grandvalet, Johnny Mariethoz, Samy Bengio

Abstract: In this paper, we show that the hinge loss can be interpreted as the neg-log-likelihood of a semi-parametric model of posterior probabilities. From this point of view, SVMs represent the parametric component of a semi-parametric model fitted by a maximum a posteriori estimation procedure. This connection makes it possible to derive a mapping from SVM scores to estimated posterior probabilities. Unlike previous proposals, the suggested mapping is interval-valued, providing a set of posterior probabilities compatible with each SVM score. This framework offers a new way to adapt the SVM optimization problem to unbalanced classification, when decisions result in unequal (asymmetric) losses. Experiments show improvements over state-of-the-art procedures. 1

6 0.57918847 106 nips-2005-Large-scale biophysical parameter estimation in single neurons via constrained linear regression

7 0.55950898 30 nips-2005-Assessing Approximations for Gaussian Process Classification

8 0.54093492 93 nips-2005-Ideal Observers for Detecting Motion: Correspondence Noise

9 0.52913225 197 nips-2005-Unbiased Estimator of Shape Parameter for Spiking Irregularities under Changing Environments

10 0.52599365 117 nips-2005-Learning from Data of Variable Quality

11 0.52486771 205 nips-2005-Worst-Case Bounds for Gaussian Process Models

12 0.49868447 133 nips-2005-Nested sampling for Potts models

13 0.48862517 35 nips-2005-Bayesian model learning in human visual perception

14 0.48005852 33 nips-2005-Bayesian Sets

15 0.47866905 34 nips-2005-Bayesian Surprise Attracts Human Attention

16 0.47316 4 nips-2005-A Bayesian Spatial Scan Statistic

17 0.45498127 113 nips-2005-Learning Multiple Related Tasks using Latent Independent Component Analysis

18 0.44471884 44 nips-2005-Computing the Solution Path for the Regularized Support Vector Regression

19 0.44444981 2 nips-2005-A Bayes Rule for Density Matrices

20 0.44380382 92 nips-2005-Hyperparameter and Kernel Learning for Graph Based Semi-Supervised Classification


similar papers computed by lda model

lda for this paper:

topicId topicWeight

[(3, 0.066), (10, 0.042), (11, 0.317), (27, 0.032), (31, 0.029), (34, 0.113), (39, 0.032), (55, 0.04), (57, 0.018), (69, 0.049), (73, 0.027), (88, 0.096), (91, 0.061)]

similar papers list:

simIndex simValue paperId paperTitle

1 0.8448441 183 nips-2005-Stimulus Evoked Independent Factor Analysis of MEG Data with Large Background Activity

Author: Kenneth Hild, Kensuke Sekihara, Hagai T. Attias, Srikantan S. Nagarajan

Abstract: This paper presents a novel technique for analyzing electromagnetic imaging data obtained using the stimulus evoked experimental paradigm. The technique is based on a probabilistic graphical model, which describes the data in terms of underlying evoked and interference sources, and explicitly models the stimulus evoked paradigm. A variational Bayesian EM algorithm infers the model from data, suppresses interference sources, and reconstructs the activity of separated individual brain sources. The new algorithm outperforms existing techniques on two real datasets, as well as on simulated data. 1

same-paper 2 0.81192213 140 nips-2005-Nonparametric inference of prior probabilities from Bayes-optimal behavior

Author: Liam Paninski

Abstract: We discuss a method for obtaining a subject’s a priori beliefs from his/her behavior in a psychophysics context, under the assumption that the behavior is (nearly) optimal from a Bayesian perspective. The method is nonparametric in the sense that we do not assume that the prior belongs to any fixed class of distributions (e.g., Gaussian). Despite this increased generality, the method is relatively simple to implement, being based in the simplest case on a linear programming algorithm, and more generally on a straightforward maximum likelihood or maximum a posteriori formulation, which turns out to be a convex optimization problem (with no non-global local maxima) in many important cases. In addition, we develop methods for analyzing the uncertainty of these estimates. We demonstrate the accuracy of the method in a simple simulated coin-flipping setting; in particular, the method is able to precisely track the evolution of the subject’s posterior distribution as more and more data are observed. We close by briefly discussing an interesting connection to recent models of neural population coding.

3 0.77712679 56 nips-2005-Diffusion Maps, Spectral Clustering and Eigenfunctions of Fokker-Planck Operators

Author: Boaz Nadler, Stephane Lafon, Ioannis Kevrekidis, Ronald R. Coifman

Abstract: This paper presents a diffusion based probabilistic interpretation of spectral clustering and dimensionality reduction algorithms that use the eigenvectors of the normalized graph Laplacian. Given the pairwise adjacency matrix of all points, we define a diffusion distance between any two data points and show that the low dimensional representation of the data by the first few eigenvectors of the corresponding Markov matrix is optimal under a certain mean squared error criterion. Furthermore, assuming that data points are random samples from a density p(x) = exp(-U(x)), we identify these eigenvectors as discrete approximations of eigenfunctions of a Fokker-Planck operator in a potential 2U(x) with reflecting boundary conditions. Finally, applying known results regarding the eigenvalues and eigenfunctions of the continuous Fokker-Planck operator, we provide a mathematical justification for the success of spectral clustering and dimensional reduction algorithms based on these first few eigenvectors. This analysis elucidates, in terms of the characteristics of diffusion processes, many empirical findings regarding spectral clustering algorithms. Keywords: Algorithms and architectures, learning theory. 1

4 0.52868509 74 nips-2005-Faster Rates in Regression via Active Learning

Author: Rebecca Willett, Robert Nowak, Rui M. Castro

Abstract: This paper presents a rigorous statistical analysis characterizing regimes in which active learning significantly outperforms classical passive learning. Active learning algorithms are able to make queries or select sample locations in an online fashion, depending on the results of the previous queries. In some regimes, this extra flexibility leads to significantly faster rates of error decay than those possible in classical passive learning settings. The nature of these regimes is explored by studying fundamental performance limits of active and passive learning in two illustrative nonparametric function classes. In addition to examining the theoretical potential of active learning, this paper describes a practical algorithm capable of exploiting the extra flexibility of the active setting and provably improving upon the classical passive techniques. Our active learning theory and methods show promise in a number of applications, including field estimation using wireless sensor networks and fault line detection. 1

5 0.52788639 47 nips-2005-Consistency of one-class SVM and related algorithms

Author: Régis Vert, Jean-philippe Vert

Abstract: We determine the asymptotic limit of the function computed by support vector machines (SVM) and related algorithms that minimize a regularized empirical convex loss function in the reproducing kernel Hilbert space of the Gaussian RBF kernel, in the situation where the number of examples tends to infinity, the bandwidth of the Gaussian kernel tends to 0, and the regularization parameter is held fixed. Non-asymptotic convergence bounds to this limit in the L2 sense are provided, together with upper bounds on the classification error that is shown to converge to the Bayes risk, therefore proving the Bayes-consistency of a variety of methods although the regularization term does not vanish. These results are particularly relevant to the one-class SVM, for which the regularization can not vanish by construction, and which is shown for the first time to be a consistent density level set estimator. 1

6 0.52396077 23 nips-2005-An Application of Markov Random Fields to Range Sensing

7 0.52353495 144 nips-2005-Off-policy Learning with Options and Recognizers

8 0.52225208 49 nips-2005-Convergence and Consistency of Regularized Boosting Algorithms with Stationary B-Mixing Observations

9 0.51299965 168 nips-2005-Rodeo: Sparse Nonparametric Regression in High Dimensions

10 0.51238072 50 nips-2005-Convex Neural Networks

11 0.51200169 67 nips-2005-Extracting Dynamical Structure Embedded in Neural Activity

12 0.50943196 92 nips-2005-Hyperparameter and Kernel Learning for Graph Based Semi-Supervised Classification

13 0.50919211 136 nips-2005-Noise and the two-thirds power Law

14 0.50535285 32 nips-2005-Augmented Rescorla-Wagner and Maximum Likelihood Estimation

15 0.50344646 3 nips-2005-A Bayesian Framework for Tilt Perception and Confidence

16 0.50245512 184 nips-2005-Structured Prediction via the Extragradient Method

17 0.50196797 94 nips-2005-Identifying Distributed Object Representations in Human Extrastriate Visual Cortex

18 0.50095224 93 nips-2005-Ideal Observers for Detecting Motion: Correspondence Noise

19 0.50078821 173 nips-2005-Sensory Adaptation within a Bayesian Framework for Perception

20 0.50078273 132 nips-2005-Nearest Neighbor Based Feature Selection for Regression and its Application to Neural Activity