andrew_gelman_stats andrew_gelman_stats-2010 andrew_gelman_stats-2010-234 knowledge-graph by maker-knowledge-mining

234 andrew gelman stats-2010-08-25-Modeling constrained parameters


meta info for this blog

Source: html

Introduction: Mike McLaughlin writes: In general, is there any way to do MCMC with a fixed constraint? E.g., suppose I measure the three internal angles of a triangle with errors ~dnorm(0, tau) where tau might be different for the three measurements. This would be an easy BUGS/WinBUGS/JAGS exercise but suppose, in addition, I wanted to include prior information to the effect that the three angles had to total 180 degrees exactly. Is this feasible? Could you point me to any BUGS model in which a constraint of this type is implemented? Note: Even in my own (non-hierarchical) code which tends to be component-wise, random-walk Metropolis with tuned Laplacian proposals, I cannot see how I could incorporate such a constraint. My reply: See page 508 of Bayesian Data Analysis (2nd edition). We have an example of such a model there (from this paper with Bois and Jiang).
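
The BDA example referenced in the reply isn't reproduced here, but one common way to impose an exact sum-to-180 constraint in BUGS/WinBUGS/JAGS is to reparameterize so that the constraint holds by construction, e.g. writing the three angles as 180 times a point on the 2-simplex. A minimal sketch along those lines (my own illustration, not the model from the book; the Dirichlet prior, the vague gamma priors on the precisions, and the names y, theta, p, alpha are all assumptions):

    model {
      # Angles are 180 * a point on the 2-simplex, so they sum to 180 exactly.
      p[1:3] ~ ddirch(alpha[1:3])        # supply alpha = c(1, 1, 1) as data for a uniform prior
      for (i in 1:3) {
        theta[i] <- 180 * p[i]           # true angle i, constrained by construction
        y[i] ~ dnorm(theta[i], tau[i])   # measured angle i, with its own precision
        tau[i] ~ dgamma(0.001, 0.001)    # vague prior on each measurement precision
      }
    }

A simpler alternative is to declare only two free angles and define theta[3] <- 180 - theta[1] - theta[2], which works as long as the priors keep theta[3] positive; the Dirichlet version just makes that positivity automatic.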


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Mike McLaughlin writes: In general, is there any way to do MCMC with a fixed constraint? [sent-1, score-0.094]

2 E.g., suppose I measure the three internal angles of a triangle with errors ~dnorm(0, tau) where tau might be different for the three measurements. [sent-4, score-1.875]

3 This would be an easy BUGS/WinBUGS/JAGS exercise but suppose, in addition, I wanted to include prior information to the effect that the three angles had to total 180 degrees exactly. [sent-5, score-1.36]

4 Could you point me to any BUGS model in which a constraint of this type is implemented? [sent-7, score-0.509]

5 Note: Even in my own (non-hierarchical) code which tends to be component-wise, random-walk Metropolis with tuned Laplacian proposals, I cannot see how I could incorporate such a constraint. [sent-8, score-0.647]

6 My reply: See page 508 of Bayesian Data Analysis (2nd edition). [sent-9, score-0.075]

7 We have an example of such a model there (from this paper with Bois and Jiang). [sent-10, score-0.121]


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('angles', 0.439), ('tau', 0.338), ('constraint', 0.309), ('three', 0.194), ('jiang', 0.189), ('triangle', 0.189), ('mclaughlin', 0.181), ('tuned', 0.181), ('dnorm', 0.169), ('feasible', 0.169), ('bois', 0.161), ('suppose', 0.148), ('metropolis', 0.14), ('tends', 0.134), ('internal', 0.134), ('incorporate', 0.134), ('proposals', 0.132), ('implemented', 0.125), ('bugs', 0.12), ('exercise', 0.12), ('mcmc', 0.12), ('edition', 0.118), ('mike', 0.118), ('degrees', 0.116), ('fixed', 0.094), ('addition', 0.093), ('total', 0.093), ('code', 0.087), ('type', 0.085), ('note', 0.083), ('measure', 0.083), ('errors', 0.08), ('model', 0.079), ('wanted', 0.077), ('page', 0.075), ('include', 0.071), ('easy', 0.07), ('prior', 0.068), ('effect', 0.059), ('could', 0.058), ('reply', 0.058), ('see', 0.053), ('information', 0.053), ('bayesian', 0.052), ('general', 0.051), ('analysis', 0.043), ('paper', 0.042), ('different', 0.04), ('point', 0.036), ('might', 0.036)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0000001 234 andrew gelman stats-2010-08-25-Modeling constrained parameters

2 0.13449201 2342 andrew gelman stats-2014-05-21-Models with constraints

Introduction: I had an interesting conversation with Aki about monotonicity constraints. We were discussing a particular set of Gaussian processes that we were fitting to the arsenic well-switching data (the example from the logistic regression chapter in my book with Jennifer) but some more general issues arose that I thought might interest you. The idea was to fit a model where the response (the logit probability of switching wells) was constrained to be monotonically increasing in your current arsenic level and monotonically decreasing in your current distance to the closest safe well. These constraints seem reasonable enough, but when we actually fit the model we found that doing Bayesian inference with the constraint pulled the estimate, not just toward monotonicity, but to a strong increase (for the increasing relation) or a strong decrease (for the decreasing relation). This makes sense from a statistical standpoint because if you restrict a parameter to be nonnegative, any posterior dis
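
As a toy illustration of that last point (mine, not from the post): if the unconstrained posterior for a parameter theta were standard normal, restricting theta to be nonnegative turns it into a half-normal, whose mean is

\mathrm{E}[\theta \mid \theta \ge 0] = \sqrt{2/\pi} \approx 0.80,

so the constraint does not merely cut the distribution off at zero, it moves the point estimate a substantial distance away from it.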

3 0.11419167 1284 andrew gelman stats-2012-04-26-Modeling probability data

Introduction: Rafael Huber writes: I conducted an experiment in which subjects were asked to estimate the probability of a certain event given a number of pieces of information (like a weather forecaster or a stock-market trader). These probability estimates are the dependent variable of my experiment. My goal is to model the data with a (hierarchical) Bayesian regression. A linear equation with all the presented information (quantified as log odds) defines the mu of a normal likelihood, and the precision tau is another free parameter:

y[r] ~ dnorm( mu[r] , tau[ subj[r] ] )
mu[r] <- b0[ subj[r] ] + b1[ subj[r] ] * x1[r] + b2[ subj[r] ] * x2[r] + b3[ subj[r] ] * x3[r]

My problem is that I do not believe that the normal is the correct probability distribution to model probability data (because the error is bounded). However, until now nobody was able to tell me how I can correctly model probability data. My reply: You can take the logit of the data before analyzing them. That is assuming there
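
A minimal sketch of the logit-of-the-data suggestion (my illustration, not code from the exchange; it reuses the variable names from the snippet above, assumes J subjects indexed by subj[r], and uses a JAGS data block for the transformation):

    data {
      for (r in 1:N) {
        z[r] <- logit(y[r])              # map responses in (0, 1) onto the real line
      }
    }
    model {
      for (r in 1:N) {
        z[r] ~ dnorm(mu[r], tau[subj[r]])
        mu[r] <- b0[subj[r]] + b1[subj[r]] * x1[r] + b2[subj[r]] * x2[r] + b3[subj[r]] * x3[r]
      }
      for (j in 1:J) {
        b0[j] ~ dnorm(0, 1.0E-4)         # weak priors; a hierarchical version would draw
        b1[j] ~ dnorm(0, 1.0E-4)         # these from common group-level distributions
        b2[j] ~ dnorm(0, 1.0E-4)
        b3[j] ~ dnorm(0, 1.0E-4)
        tau[j] ~ dgamma(0.001, 0.001)
      }
    }

The transform requires responses strictly inside (0, 1); exact 0s or 1s would need to be nudged away from the boundary first.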

4 0.11373033 101 andrew gelman stats-2010-06-20-“People with an itch to scratch”

Introduction: Derek Sonderegger writes: I have just finished my Ph.D. in statistics and am currently working in applied statistics (plant ecology) using Bayesian statistics. As the statistician in the group I only ever get the ‘hard analysis’ problems that don’t readily fit into standard models. As I delve into the computational aspects of Bayesian analysis, I find myself increasingly frustrated with the current set of tools. I was delighted to see JAGS 2.0 just came out and spent yesterday happily playing with it. My question is, where do you see the short-term future of Bayesian computing going and what can we do to steer it in a particular direction? In your book with Dr Hill, you mention that you expect BUGS (or its successor) to become increasingly sophisticated and, for example, re-parameterizations that increase convergence rates would be handled automatically. Just as R has been successful because users can extend it, I think progress here also will be made by input from ‘p

5 0.11106409 547 andrew gelman stats-2011-01-31-Using sample size in the prior distribution

Introduction: Mike McLaughlin writes: Consider the Seeds example in vol. 1 of the BUGS examples. There, a binomial likelihood has a p parameter constructed, via logit, from two covariates. What I am wondering is: Would it be legitimate, in a binomial + logit problem like this, to allow binomial p[i] to be a function of the corresponding n[i] or would that amount to using the data in the prior? In other words, in the context of the Seeds example, is r[] the only data or is n[] data as well and therefore not permissible in a prior formulation? I [McLaughlin] currently have a model with a common beta prior for all p[i] but would like to mitigate this commonality (a kind of James-Stein effect) when there are lots of observations for some i. But this seems to feed the data back into the prior. Does it really? It also occurs to me [McLaughlin] that, perhaps, a binomial likelihood is not the one to use here (not flexible enough). My reply: Strictly speaking, “n” is data, and so what you wa
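
For orientation, the binomial-logit structure being discussed looks roughly like this (a from-memory sketch in the spirit of the Seeds example, not a verbatim copy; the coefficient names and vague priors are assumptions):

    model {
      for (i in 1:N) {
        r[i] ~ dbin(p[i], n[i])          # n[i] enters only as the binomial sample size
        logit(p[i]) <- alpha0 + alpha1 * x1[i] + alpha2 * x2[i] + alpha12 * x1[i] * x2[i] + b[i]
        b[i] ~ dnorm(0, tau)             # per-plate random effect
      }
      alpha0 ~ dnorm(0, 1.0E-6)
      alpha1 ~ dnorm(0, 1.0E-6)
      alpha2 ~ dnorm(0, 1.0E-6)
      alpha12 ~ dnorm(0, 1.0E-6)
      tau ~ dgamma(0.001, 0.001)
    }

Letting p[i] depend on n[i] would amount to adding an n[i] term to the linear predictor, which is exactly the move whose legitimacy is being questioned above.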

6 0.10800377 2182 andrew gelman stats-2014-01-22-Spell-checking example demonstrates key aspects of Bayesian data analysis

7 0.095946051 2109 andrew gelman stats-2013-11-21-Hidden dangers of noninformative priors

8 0.089318573 773 andrew gelman stats-2011-06-18-Should we always be using the t and robit instead of the normal and logit?

9 0.088999793 2022 andrew gelman stats-2013-09-13-You heard it here first: Intense exercise can suppress appetite

10 0.085577913 1735 andrew gelman stats-2013-02-24-F-f-f-fake data

11 0.085188806 2145 andrew gelman stats-2013-12-24-Estimating and summarizing inference for hierarchical variance parameters when the number of groups is small

12 0.085115224 472 andrew gelman stats-2010-12-17-So-called fixed and random effects

13 0.084902301 1395 andrew gelman stats-2012-06-27-Cross-validation (What is it good for?)

14 0.080712549 342 andrew gelman stats-2010-10-14-Trying to be precise about vagueness

15 0.080642521 781 andrew gelman stats-2011-06-28-The holes in my philosophy of Bayesian data analysis

16 0.07810168 1106 andrew gelman stats-2012-01-08-Intro to splines—with cool graphs

17 0.077150747 1941 andrew gelman stats-2013-07-16-Priors

18 0.074866213 2273 andrew gelman stats-2014-03-29-References (with code) for Bayesian hierarchical (multilevel) modeling and structural equation modeling

19 0.074451268 41 andrew gelman stats-2010-05-19-Updated R code and data for ARM

20 0.071784601 1955 andrew gelman stats-2013-07-25-Bayes-respecting experimental design and other things


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.111), (1, 0.092), (2, 0.014), (3, 0.013), (4, 0.026), (5, -0.016), (6, 0.024), (7, -0.029), (8, -0.004), (9, 0.0), (10, 0.004), (11, -0.006), (12, -0.009), (13, -0.022), (14, 0.03), (15, 0.036), (16, 0.018), (17, 0.022), (18, -0.006), (19, 0.009), (20, 0.001), (21, 0.055), (22, -0.005), (23, -0.05), (24, -0.042), (25, 0.01), (26, -0.009), (27, -0.027), (28, 0.024), (29, -0.02), (30, -0.027), (31, 0.014), (32, -0.033), (33, -0.002), (34, 0.025), (35, -0.011), (36, -0.017), (37, -0.063), (38, -0.026), (39, 0.032), (40, 0.024), (41, -0.003), (42, -0.004), (43, 0.024), (44, -0.002), (45, -0.01), (46, 0.03), (47, -0.028), (48, -0.016), (49, 0.016)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.96045303 234 andrew gelman stats-2010-08-25-Modeling constrained parameters

2 0.7514829 2342 andrew gelman stats-2014-05-21-Models with constraints

Introduction: I had an interesting conversation with Aki about monotonicity constraints. We were discussing a particular set of Gaussian processes that we were fitting to the arsenic well-switching data (the example from the logistic regression chapter in my book with Jennifer) but some more general issues arose that I thought might interest you. The idea was to fit a model where the response (the logit probability of switching wells) was constrained to be monotonically increasing in your current arsenic level and monotonically decreasing in your current distance to the closest safe well. These constraints seem reasonable enough, but when we actually fit the model we found that doing Bayesian inference with the constraint pulled the estimate, not just toward monotonicity, but to a strong increase (for the increasing relation) or a strong decrease (for the decreasing relation). This makes sense from a statistical standpoint because if you restrict a parameter to be nonnegative, any posterior dis

3 0.70458776 2182 andrew gelman stats-2014-01-22-Spell-checking example demonstrates key aspects of Bayesian data analysis

Introduction: One of the new examples for the third edition of Bayesian Data Analysis is a spell-checking story. Here it is (just start at 2/3 down on the first page, with “Spelling correction”). I like this example—it demonstrates the Bayesian algebra, also gives a sense of the way that probability models (both “likelihood” and “prior”) are constructed from existing assumptions and data. The models aren’t just specified as a mathematical exercise, they represent some statement about reality. And the problem is close enough to our experience that we can consider ways in which the model can be criticized and improved, all in a simple example that has only three possibilities.
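
The "Bayesian algebra" in question is just Bayes' rule with the intended word as the unknown (a schematic, not the notation from the book):

p(\text{word} \mid \text{typed}) \propto p(\text{typed} \mid \text{word}) \, p(\text{word}),

with the prior coming from word frequencies, the likelihood from a model of typing errors, and the "three possibilities" being the three candidate words under consideration.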

4 0.69803101 810 andrew gelman stats-2011-07-20-Adding more information can make the variance go up (depending on your model)

Introduction: Andy McKenzie writes: In their March 9 “counterpoint” in nature biotech to the prospect that we should try to integrate more sources of data in clinical practice (see “point” arguing for this), Isaac Kohane and David Margulies claim that, “Finally, how much better is our new knowledge than older knowledge? When is the incremental benefit of a genomic variant(s) or gene expression profile relative to a family history or classic histopathology insufficient and when does it add rather than subtract variance?” Perhaps I am mistaken (thus this email), but it seems that this claim runs contra to the definition of conditional probability. That is, if you have a hierarchical model, and the family history / classical histopathology already suggests a parameter estimate with some variance, how could the new genomic info possibly increase the variance of that parameter estimate? Surely the question is how much variance the new genomic info reduces and whether it therefore justifies t
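
McKenzie's conditional-probability point can be made precise with the law of total variance (a standard identity, stated here as an illustration rather than a quote from the exchange):

\operatorname{var}(\theta) = \mathrm{E}\big[\operatorname{var}(\theta \mid y)\big] + \operatorname{var}\big(\mathrm{E}[\theta \mid y]\big) \quad\Longrightarrow\quad \mathrm{E}\big[\operatorname{var}(\theta \mid y)\big] \le \operatorname{var}(\theta).

Conditioning on the new data y (here, the genomic information) cannot increase the posterior variance on average, though for a particular observed y it can, and the accounting changes again if incorporating y requires switching to a different model.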

5 0.69192058 1460 andrew gelman stats-2012-08-16-“Real data can be a pain”

Introduction: Michael McLaughlin sent me the following query with the above title. Some time ago, I [McLaughlin] was handed a dataset that needed to be modeled. It was generated as follows:

1. Random navigation errors, historically a binary mixture of normal and Laplace with a common mean, were collected by observation.

2. Sadly, these data were recorded with too few decimal places so that the resulting quantization is clearly visible in a scatterplot.

3. The quantized data were then interpolated (to an unobserved location).

The final result looks like fuzzy points (small scale jitter) at quantized intervals spanning a much larger scale (the parent mixture distribution). This fuzziness, likely ~normal or ~Laplace, results from the interpolation. Otherwise, the data would look like a discrete analogue of the normal/Laplace mixture. I would like to characterize the latent normal/Laplace mixture distribution but the quantization is “getting in the way”. When I tried MCMC on this proble
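
One standard way to handle quantization of this kind (an illustration, not necessarily McLaughlin's eventual solution) is to treat the unrounded value as a latent variable and integrate the continuous density over the rounding bin:

p(y^{\mathrm{obs}} \mid \theta) = \int_{y^{\mathrm{obs}} - h/2}^{\,y^{\mathrm{obs}} + h/2} p(z \mid \theta) \, dz,

where h is the quantization step and p(z \mid \theta) is the continuous model (here a normal/Laplace mixture plus interpolation jitter); in MCMC this is usually implemented by sampling the latent z alongside the parameters.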

6 0.68853581 342 andrew gelman stats-2010-10-14-Trying to be precise about vagueness

7 0.68847132 1868 andrew gelman stats-2013-05-23-Validation of Software for Bayesian Models Using Posterior Quantiles

8 0.68755358 20 andrew gelman stats-2010-05-07-Bayesian hierarchical model for the prediction of soccer results

9 0.6868338 916 andrew gelman stats-2011-09-18-Multimodality in hierarchical models

10 0.68532175 1723 andrew gelman stats-2013-02-15-Wacky priors can work well?

11 0.67882866 1374 andrew gelman stats-2012-06-11-Convergence Monitoring for Non-Identifiable and Non-Parametric Models

12 0.67854095 1422 andrew gelman stats-2012-07-20-Likelihood thresholds and decisions

13 0.67804956 1999 andrew gelman stats-2013-08-27-Bayesian model averaging or fitting a larger model

14 0.67636555 840 andrew gelman stats-2011-08-05-An example of Bayesian model averaging

15 0.67596918 1284 andrew gelman stats-2012-04-26-Modeling probability data

16 0.67059571 1955 andrew gelman stats-2013-07-25-Bayes-respecting experimental design and other things

17 0.67055887 1098 andrew gelman stats-2012-01-04-Bayesian Page Rank?

18 0.66265064 1047 andrew gelman stats-2011-12-08-I Am Too Absolutely Heteroskedastic for This Probit Model

19 0.65438575 2176 andrew gelman stats-2014-01-19-Transformations for non-normal data

20 0.65180987 1686 andrew gelman stats-2013-01-21-Finite-population Anova calculations for models with interactions


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(0, 0.02), (13, 0.318), (16, 0.055), (24, 0.112), (36, 0.039), (53, 0.056), (57, 0.021), (65, 0.042), (99, 0.208)]

similar blogs list:

simIndex simValue blogId blogTitle

1 0.93472588 1514 andrew gelman stats-2012-09-28-AdviseStat 47% Campaign Ad

Introduction: Lee Wilkinson sends me this amusing ad for his new software, AdviseStat: The ad is a parody, but the software is real!

2 0.91807491 345 andrew gelman stats-2010-10-15-Things we do on sabbatical instead of actually working

Introduction: Frank Fischer, a political scientist at Rutgers U., says his alleged plagiarism was mere sloppiness and not all that uncommon in scholarship. I’ve heard about plagiarism but I had no idea it occurred in political science.

3 0.89435375 800 andrew gelman stats-2011-07-13-I like lineplots

Introduction: These particular lineplots are called parallel coordinate plots.

same-blog 4 0.87086308 234 andrew gelman stats-2010-08-25-Modeling constrained parameters

5 0.85861427 1559 andrew gelman stats-2012-11-02-The blog is back

Introduction: We had some security problem: not an actual virus or anything, but a potential leak which caused Google to blacklist us. Cord fixed us and now we’re fine. Good job, Google! Better to find the potential problem before there is any harm!

6 0.82134521 1789 andrew gelman stats-2013-04-05-Elites have alcohol problems too!

7 0.81488812 172 andrew gelman stats-2010-07-30-Why don’t we have peer reviewing for oral presentations?

8 0.77228439 437 andrew gelman stats-2010-11-29-The mystery of the U-shaped relationship between happiness and age

9 0.76535976 1509 andrew gelman stats-2012-09-24-Analyzing photon counts

10 0.76289964 1852 andrew gelman stats-2013-05-12-Crime novels for economists

11 0.75434077 971 andrew gelman stats-2011-10-25-Apply now for Earth Institute postdoctoral fellowships at Columbia University

12 0.75026023 1137 andrew gelman stats-2012-01-24-Difficulties in publishing non-replications of implausible findings

13 0.74978274 424 andrew gelman stats-2010-11-21-Data cleaning tool!

14 0.74775207 597 andrew gelman stats-2011-03-02-RStudio – new cross-platform IDE for R

15 0.74418378 980 andrew gelman stats-2011-10-29-When people meet this guy, can they resist the temptation to ask him what he’s doing for breakfast??

16 0.74188095 2011 andrew gelman stats-2013-09-07-Here’s what happened when I finished my PhD thesis

17 0.73891997 1519 andrew gelman stats-2012-10-02-Job!

18 0.73617959 817 andrew gelman stats-2011-07-23-New blog home

19 0.73492229 1916 andrew gelman stats-2013-06-27-The weirdest thing about the AJPH story

20 0.73163801 1648 andrew gelman stats-2013-01-02-A important new survey of Bayesian predictive methods for model assessment, selection and comparison