andrew_gelman_stats-2012-1208 (knowledge-graph by maker-knowledge-mining)

1208 andrew gelman stats-2012-03-11-Gelman on Hennig on Gelman on Bayes


meta info for this blog

Source: html

Introduction: Deborah Mayo pointed me to this discussion by Christian Hennig of my recent article on Induction and Deduction in Bayesian Data Analysis. A couple days ago I responded to comments by Mayo, Stephen Senn, and Larry Wasserman. I will respond to Hennig by pulling out paragraphs from his discussion and then replying. Hennig: for me the terms “frequentist” and “subjective Bayes” point to interpretations of probability, and not to specific methods of inference. The frequentist one refers to the idea that there is an underlying data generating process that repeatedly throws out data and would approximate the assumed distribution if one could only repeat it infinitely often. Hennig makes the good point that, if this is the way you would define “frequentist” (it’s not how I’d define the term myself, but I’ll use Hennig’s definition here), then it makes sense to be a frequentist in some settings but not others. Dice really can be rolled over and over again; a sample survey of 1500 Americans really does have essentially infinitely many possible outcomes; but there will never be anything like infinitely many presidential elections or infinitely many worldwide flu epidemics.


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 The frequentist one refers to the idea that there is an underlying data generating process that repeatedly throws out data and would approximate the assumed distribution if one could only repeat it infinitely often. [sent-5, score-0.591]

2 Hennig makes the good point that, if this is the way you would define “frequentist” (it’s not how I’d define the term myself, but I’ll use Hennig’s definition here), then it makes sense to be a frequentist in some settings but not others. [sent-6, score-0.312]

3 Dice really can be rolled over and over again; a sample survey of 1500 Americans really does have essentially infinitely many possible outcomes; but there will never be anything like infinitely many presidential elections or infinitely many worldwide flu epidemics. [sent-7, score-0.73]

4 Hennig: The subjective Bayesian one is about quantifying belief in a rational way; following de Finetti, it would in fact be about belief in observable future outcomes of experiments, and not in the truth of models. [sent-8, score-0.439]

5 Priors over model parameters, according to de Finetti, are only technical devices to deal with belief distributions for future outcomes, and should not be interpreted in their own right. [sent-9, score-0.268]

6 I understand the appeal of the pure predictive approach, but what I think is missing here is that what we call “parameters” are often conduits to generalizability of inference. [sent-10, score-0.187]

7 Modeling using latent parameters is more difficult—you have to throw in lots of prior information to get it to work, as we discuss in our article—but, on the plus side, there is biological reason to suspect that these parameters generalize from person to person. [sent-13, score-0.548]

8 Similarly, the de Finetti philosophy (as described by Hennig) might say that parameters are nothing but data’s way of predicting new data. [sent-16, score-0.271]

9 Parameterization encodes knowledge, and parameters with external validity encode knowledge particularly effectively. [sent-18, score-0.166]

10 So I think that it’s a quite serious omission that Gelman doesn’t tell us his interpretation (he may do that elsewhere, though). [sent-20, score-0.14]

11 Indeed, I do give my interpretation of probabilities elsewhere. [sent-21, score-0.143]

12 These were my ideas 20 years ago but I still pretty much hold on to them (except that, as I’ve discussed often on this blog and elsewhere, I’ve moved away from noninformative priors and now I think that weakly informative priors are the way to go). [sent-23, score-0.164]

13 Hennig concludes with a statement of concern about posterior predictive checking. [sent-24, score-0.357]

14 Posterior predictive checks reduce to classical goodness-of-fit tests when the test statistic is pivotal; when this is not the case, there truly is uncertainty about the fit, and I prefer to go the Bayesian route and average over that uncertainty. [sent-26, score-0.261]

15 Whatever you may think about them theoretically, posterior predictive checks really can work. [sent-28, score-0.431]

16 I’ll see exploratory graphs of raw data, pages and pages of density plots of posterior simulations, trace plots and correlation plots of iterative simulations—but no plots comparing model to data. [sent-31, score-0.878]

17 It makes me want to scream scream scream scream scream when statisticians’ philosophical scruples stop them from performing these five simple steps (or, to be precise, performing the simple steps (a), (c), (d), and (e), given that they’ve already done the hard part, which is step (b)). [sent-33, score-1.766]

18 In some settings, a posterior predictive check will essentially never “reject”; that is, there are models that have a very high probability of replicating certain aspects of the data. [sent-35, score-0.413]

19 For example, a normal distribution with flat prior distribution will reproduce the mean (but not necessarily the median) of any dataset. [sent-36, score-0.182]

20 In some of these situations I think it’s a good thing that the posterior predictive check does not “reject”; other times I am unhappy with this property. [sent-37, score-0.357]
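The property described in sentences 18-20 is easy to see by simulation. Below is a minimal sketch (hypothetical data and plug-in choices, not code from the article): for a normal model with a flat prior on the mean, a posterior predictive check on the sample mean essentially never "rejects", because the mean is fitted automatically, while a check on the median can detect misfit when the data are skewed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical observed data: deliberately skewed, so a normal model is wrong.
y = rng.exponential(scale=2.0, size=500)
n, ybar = len(y), y.mean()
sigma = y.std(ddof=1)  # plug-in scale, treated as known for simplicity

# With a flat prior on mu (sigma known), the posterior for mu is N(ybar, sigma^2/n).
n_rep = 5000
mu_draws = rng.normal(ybar, sigma / np.sqrt(n), size=n_rep)

# Replicated datasets, and posterior predictive p-values for two test statistics.
y_rep = rng.normal(mu_draws[:, None], sigma, size=(n_rep, n))
p_mean = (y_rep.mean(axis=1) >= ybar).mean()
p_median = (np.median(y_rep, axis=1) >= np.median(y)).mean()

print("PPC p-value, mean:  ", round(p_mean, 2))    # near 0.5: the mean never "rejects"
print("PPC p-value, median:", round(p_median, 2))  # extreme: the median catches the misfit
```

The mean check is uninformative by construction here, which is exactly the ambivalence expressed in sentence 20: sometimes that automatic fit is desirable, sometimes not.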


similar blogs computed by the tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('hennig', 0.517), ('scream', 0.278), ('infinitely', 0.205), ('predictive', 0.187), ('posterior', 0.17), ('parameters', 0.166), ('finetti', 0.154), ('frequentist', 0.133), ('plots', 0.13), ('toxin', 0.129), ('bayesian', 0.107), ('de', 0.105), ('data', 0.096), ('belief', 0.093), ('blood', 0.089), ('biological', 0.085), ('interpretation', 0.084), ('outcomes', 0.084), ('replicated', 0.083), ('priors', 0.082), ('mayo', 0.078), ('interpretations', 0.077), ('checks', 0.074), ('elsewhere', 0.073), ('reject', 0.073), ('latent', 0.071), ('performing', 0.07), ('model', 0.07), ('simulations', 0.068), ('precise', 0.066), ('subjective', 0.064), ('steps', 0.063), ('distribution', 0.061), ('define', 0.06), ('prior', 0.06), ('pages', 0.059), ('probabilities', 0.059), ('worldwide', 0.059), ('breathtakingly', 0.059), ('fatty', 0.059), ('henning', 0.059), ('settings', 0.059), ('respond', 0.058), ('probability', 0.056), ('flu', 0.056), ('omission', 0.056), ('simple', 0.055), ('checking', 0.055), ('liver', 0.053), ('phenomenological', 0.053)]
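For readers wondering how the wordName/wordTfidf weights above turn into similarity scores, here is a minimal pure-Python sketch (toy documents and a naive whitespace tokenizer, both hypothetical; real pipelines add stemming, stop-word removal, and normalization): each document becomes a tf-idf vector, and similarity between documents is the cosine of the angle between their vectors.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Toy tf-idf: term frequency times smoothed inverse document frequency."""
    tokenized = [doc.lower().split() for doc in docs]
    n_docs = len(tokenized)
    # Document frequency: in how many documents each word appears.
    df = Counter(word for doc in tokenized for word in set(doc))
    vocab = sorted(df)
    idf = {w: math.log(n_docs / df[w]) + 1.0 for w in vocab}
    vectors = []
    for doc in tokenized:
        tf = Counter(doc)
        vectors.append([tf[w] * idf[w] for w in vocab])
    return vocab, vectors

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

docs = [
    "posterior predictive checks for bayesian models",
    "posterior predictive checks and model checking",
    "frequentist interpretations of probability",
]
vocab, vecs = tfidf_vectors(docs)
print(round(cosine(vecs[0], vecs[1]), 3))  # topically close pair: high score
print(round(cosine(vecs[0], vecs[2]), 3))  # no shared words: score of 0
```

The "similar blogs list" below is just this computation run over the whole archive, with the highest-cosine posts reported first.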

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.99999988 1208 andrew gelman stats-2012-03-11-Gelman on Hennig on Gelman on Bayes


2 0.21366319 1363 andrew gelman stats-2012-06-03-Question about predictive checks

Introduction: Klaas Metselaar writes: I [Metselaar] am currently involved in a discussion about the use of the notion “predictive” as used in “posterior predictive check”. I would argue that the notion “predictive” should be reserved for posterior checks using information not used in the determination of the posterior. I quote from the discussion: “However, the predictive uncertainty in a Bayesian calculation requires sampling from all the random variables, and this includes both the model parameters and the residual error”. My [Metselaar's] comment: This may be exactly the point I am worried about: shouldn’t the predictive uncertainty be defined as sampling from the posterior parameter distribution + residual error + sampling from the prediction error distribution? Residual error reduces to measurement error in the case of a model which is perfect for the sample of experiments. Measurement error could be reduced to almost zero by ideal and perfect measurement instruments. I would h

3 0.20863596 291 andrew gelman stats-2010-09-22-Philosophy of Bayes and non-Bayes: A dialogue with Deborah Mayo

Introduction: I sent Deborah Mayo a link to my paper with Cosma Shalizi on the philosophy of statistics, and she sent me the link to this conference which unfortunately already occurred. (It’s too bad, because I’d have liked to have been there.) I summarized my philosophy as follows: I am highly sympathetic to the approach of Lakatos (or of Popper, if you consider Lakatos’s “Popper_2″ to be a reasonable simulation of the true Popperism), in that (a) I view statistical models as being built within theoretical structures, and (b) I see the checking and refutation of models to be a key part of scientific progress. A big problem I have with mainstream Bayesianism is its “inductivist” view that science can operate completely smoothly with posterior updates: the idea that new data causes us to increase the posterior probability of good models and decrease the posterior probability of bad models. I don’t buy that: I see models as ever-changing entities that are flexible and can be patched and ex

4 0.189051 2029 andrew gelman stats-2013-09-18-Understanding posterior p-values

Introduction: David Kaplan writes: I came across your paper “Understanding Posterior Predictive P-values”, and I have a question regarding your statement “If a posterior predictive p-value is 0.4, say, that means that, if we believe the model, we think there is a 40% chance that tomorrow’s value of T(y_rep) will exceed today’s T(y).” This is perfectly understandable to me and represents the idea of calibration. However, I am unsure how this relates to statements about fit. If T is the LR chi-square or Pearson chi-square, then your statement that there is a 40% chance that tomorrows value exceeds today’s value indicates bad fit, I think. Yet, some literature indicates that high p-values suggest good fit. Could you clarify this? My reply: I think that “fit” depends on the question being asked. In this case, I’d say the model fits for this particular purpose, even though it might not fit for other purposes. And here’s the abstract of the paper: Posterior predictive p-values do not i

5 0.18618354 1151 andrew gelman stats-2012-02-03-Philosophy of Bayesian statistics: my reactions to Senn

Introduction: Continuing with my discussion of the articles in the special issue of the journal Rationality, Markets and Morals on the philosophy of Bayesian statistics: Stephen Senn, “You May Believe You Are a Bayesian But You Are Probably Wrong”: I agree with Senn’s comments on the impossibility of the de Finetti subjective Bayesian approach. As I wrote in 2008, if you could really construct a subjective prior you believe in, why not just look at the data and write down your subjective posterior. The immense practical difficulties with any serious system of inference render it absurd to think that it would be possible to just write down a probability distribution to represent uncertainty. I wish, however, that Senn would recognize my Bayesian approach (which is also that of John Carlin, Hal Stern, Don Rubin, and, I believe, others). De Finetti is no longer around, but we are! I have to admit that my own Bayesian views and practices have changed. In particular, I resonate wit

6 0.18045034 1712 andrew gelman stats-2013-02-07-Philosophy and the practice of Bayesian statistics (with all the discussions!)

7 0.17878705 1205 andrew gelman stats-2012-03-09-Coming to agreement on philosophy of statistics

8 0.17573474 1941 andrew gelman stats-2013-07-16-Priors

9 0.174006 781 andrew gelman stats-2011-06-28-The holes in my philosophy of Bayesian data analysis

10 0.17063211 1182 andrew gelman stats-2012-02-24-Untangling the Jeffreys-Lindley paradox

11 0.16916199 754 andrew gelman stats-2011-06-09-Difficulties with Bayesian model averaging

12 0.16884045 1695 andrew gelman stats-2013-01-28-Economists argue about Bayes

13 0.1686186 1155 andrew gelman stats-2012-02-05-What is a prior distribution?

14 0.16639479 1946 andrew gelman stats-2013-07-19-Prior distributions on derived quantities rather than on parameters themselves

15 0.16493796 1247 andrew gelman stats-2012-04-05-More philosophy of Bayes

16 0.16417155 320 andrew gelman stats-2010-10-05-Does posterior predictive model checking fit with the operational subjective approach?

17 0.16407402 1779 andrew gelman stats-2013-03-27-“Two Dogmas of Strong Objective Bayesianism”

18 0.15954989 2200 andrew gelman stats-2014-02-05-Prior distribution for a predicted probability

19 0.15598992 2109 andrew gelman stats-2013-11-21-Hidden dangers of noninformative priors

20 0.14807832 1092 andrew gelman stats-2011-12-29-More by Berger and me on weakly informative priors


similar blogs computed by the lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.248), (1, 0.21), (2, -0.036), (3, 0.104), (4, -0.075), (5, -0.079), (6, 0.021), (7, 0.048), (8, -0.031), (9, -0.034), (10, 0.015), (11, -0.009), (12, -0.015), (13, 0.02), (14, -0.033), (15, 0.012), (16, 0.045), (17, -0.027), (18, -0.002), (19, 0.014), (20, -0.002), (21, -0.0), (22, -0.03), (23, -0.023), (24, -0.008), (25, 0.003), (26, 0.001), (27, 0.028), (28, 0.036), (29, 0.009), (30, -0.021), (31, -0.027), (32, -0.019), (33, 0.008), (34, -0.007), (35, 0.007), (36, 0.007), (37, 0.002), (38, 0.013), (39, -0.008), (40, 0.015), (41, -0.014), (42, 0.046), (43, -0.016), (44, -0.022), (45, -0.027), (46, -0.029), (47, -0.019), (48, 0.004), (49, -0.033)]
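The topicId/topicWeight pairs above come from latent semantic indexing: a truncated SVD of the term-document matrix, which maps each post to a short vector of topic weights. A minimal sketch with a made-up 5-term, 4-document count matrix (the terms and counts are purely illustrative, not taken from the actual corpus):

```python
import numpy as np

# Tiny term-document count matrix: rows are terms, columns are documents.
# Hypothetical terms: [posterior, predictive, frequentist, probability, prior]
X = np.array([
    [2, 1, 0, 0],
    [2, 1, 0, 0],
    [0, 0, 3, 1],
    [0, 0, 1, 2],
    [1, 0, 0, 1],
], dtype=float)

# LSI: keep the k largest singular directions; each document becomes a
# k-vector of topic weights, analogous to the (topicId, topicWeight) pairs.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
doc_topics = (np.diag(s[:k]) @ Vt[:k]).T  # one k-dim topic vector per document

def cos(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

print(round(cos(doc_topics[0], doc_topics[1]), 3))  # documents 0 and 1 share a topic
print(round(cos(doc_topics[0], doc_topics[2]), 3))  # documents 0 and 2 do not
```

Comparing documents in the compressed topic space, rather than the raw word space, is what lets LSI score two posts as similar even when they use different but co-occurring vocabulary.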

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.96454865 1208 andrew gelman stats-2012-03-11-Gelman on Hennig on Gelman on Bayes


2 0.90328544 2027 andrew gelman stats-2013-09-17-Christian Robert on the Jeffreys-Lindley paradox; more generally, it’s good news when philosophical arguments can be transformed into technical modeling issues

Introduction: X writes : This paper discusses the dual interpretation of the Jeffreys– Lindley’s paradox associated with Bayesian posterior probabilities and Bayes factors, both as a differentiation between frequentist and Bayesian statistics and as a pointer to the difficulty of using improper priors while testing. We stress the considerable impact of this paradox on the foundations of both classical and Bayesian statistics. I like this paper in that he is transforming what is often seen as a philosophical argument into a technical issue, in this case a question of priors. Certain conventional priors (the so-called spike and slab) have poor statistical properties in settings such as model comparison (in addition to not making sense as prior distributions of any realistic state of knowledge). This reminds me of the way that we nowadays think about hierarchical models. In the old days there was much thoughtful debate about exchangeability and the so-called Stein paradox that partial pooling

3 0.88148499 1247 andrew gelman stats-2012-04-05-More philosophy of Bayes

Introduction: Konrad Scheffler writes: I was interested by your paper “Induction and deduction in Bayesian data analysis” and was wondering if you would entertain a few questions: – Under the banner of objective Bayesianism, I would posit something like this as a description of Bayesian inference: “Objective Bayesian probability is not a degree of belief (which would necessarily be subjective) but a measure of the plausibility of a hypothesis, conditional on a formally specified information state. One way of specifying a formal information state is to specify a model, which involves specifying both a prior distribution (typically for a set of unobserved variables) and a likelihood function (typically for a set of observed variables, conditioned on the values of the unobserved variables). Bayesian inference involves calculating the objective degree of plausibility of a hypothesis (typically the truth value of the hypothesis is a function of the variables mentioned above) given such a

4 0.87392557 1182 andrew gelman stats-2012-02-24-Untangling the Jeffreys-Lindley paradox

Introduction: Ryan Ickert writes: I was wondering if you’d seen this post , by a particle physicist with some degree of influence. Dr. Dorigo works at CERN and Fermilab. The penultimate paragraph is: From the above expression, the Frequentist researcher concludes that the tracker is indeed biased, and rejects the null hypothesis H0, since there is a less-than-2% probability (P’<α) that a result as the one observed could arise by chance! A Frequentist thus draws, strongly, the opposite conclusion than a Bayesian from the same set of data. How to solve the riddle? He goes on to not solve the riddle. Perhaps you can? Surely with the large sample size they have (n=10^6), the precision on the frequentist p-value is pretty good, is it not? My reply: The first comment on the site (by Anonymous [who, just to be clear, is not me; I have no idea who wrote that comment], 22 Feb 2012, 21:27pm) pretty much nails it: In setting up the Bayesian model, Dorigo assumed a silly distribution on th

5 0.85164565 1723 andrew gelman stats-2013-02-15-Wacky priors can work well?

Introduction: Dave Judkins writes: I would love to see a blog entry on this article , Bayesian Model Selection in High-Dimensional Settings, by Valen Johnson and David Rossell. The simulation results are very encouraging although the choice of colors for some of the graphics is unfortunate. Unless I am colorblind in some way that I am unaware of, they have two thin charcoal lines that are indistinguishable. When Dave Judkins puts in a request, I’ll respond. Also, I’m always happy to see a new Val Johnson paper. Val and I are contemporaries—he and I got our PhD’s at around the same time, with both of us working on Bayesian image reconstruction, then in the early 1990s Val was part of the legendary group at Duke’s Institute of Statistics and Decision Sciences—a veritable ’27 Yankees featuring Mike West, Merlise Clyde, Michael Lavine, Dave Higdon, Peter Mueller, Val, and a bunch of others. I always thought it was too bad they all had to go their separate ways. Val also wrote two classic p

6 0.84078991 811 andrew gelman stats-2011-07-20-Kind of Bayesian

7 0.83908683 2182 andrew gelman stats-2014-01-22-Spell-checking example demonstrates key aspects of Bayesian data analysis

8 0.83613074 781 andrew gelman stats-2011-06-28-The holes in my philosophy of Bayesian data analysis

9 0.83437157 1157 andrew gelman stats-2012-02-07-Philosophy of Bayesian statistics: my reactions to Hendry

10 0.82474512 1713 andrew gelman stats-2013-02-08-P-values and statistical practice

11 0.82177359 754 andrew gelman stats-2011-06-09-Difficulties with Bayesian model averaging

12 0.81899196 320 andrew gelman stats-2010-10-05-Does posterior predictive model checking fit with the operational subjective approach?

13 0.81787544 1779 andrew gelman stats-2013-03-27-“Two Dogmas of Strong Objective Bayesianism”

14 0.81758612 1877 andrew gelman stats-2013-05-30-Infill asymptotics and sprawl asymptotics

15 0.81433958 1332 andrew gelman stats-2012-05-20-Problemen met het boek

16 0.81253177 1946 andrew gelman stats-2013-07-19-Prior distributions on derived quantities rather than on parameters themselves

17 0.80789286 1041 andrew gelman stats-2011-12-04-David MacKay and Occam’s Razor

18 0.80469644 291 andrew gelman stats-2010-09-22-Philosophy of Bayes and non-Bayes: A dialogue with Deborah Mayo

19 0.80451345 1898 andrew gelman stats-2013-06-14-Progress! (on the understanding of the role of randomization in Bayesian inference)

20 0.79726422 1695 andrew gelman stats-2013-01-28-Economists argue about Bayes


similar blogs computed by the lda model

lda for this blog:

topicId topicWeight

[(2, 0.012), (3, 0.083), (15, 0.016), (16, 0.058), (21, 0.036), (24, 0.247), (25, 0.021), (48, 0.015), (61, 0.01), (65, 0.03), (84, 0.024), (86, 0.034), (89, 0.018), (99, 0.248)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.97256404 1208 andrew gelman stats-2012-03-11-Gelman on Hennig on Gelman on Bayes


2 0.96449441 1368 andrew gelman stats-2012-06-06-Question 27 of my final exam for Design and Analysis of Sample Surveys

Introduction: 27. Which of the following problems were identified with the Burnham et al. survey of Iraq mortality? (Indicate all that apply.) (a) The survey used cluster sampling, which is inappropriate for estimating individual outcomes such as death. (b) In their report, Burnham et al. did not identify their primary sampling units. (c) The second-stage sampling was not a probability sample. (d) Survey materials supplied by the authors are incomplete and inconsistent with published descriptions of the survey. Solution to question 26 from yesterday: 26. You have just graded an exam with 28 questions and 15 students. You fit a logistic item-response model estimating ability, difficulty, and discrimination parameters. Which of the following statements are basically true? (Indicate all that apply.) (a) If a question is answered correctly by students with very low and very high ability, but is missed by students in the middle, it will have a high value for its discrimination

3 0.96444803 1367 andrew gelman stats-2012-06-05-Question 26 of my final exam for Design and Analysis of Sample Surveys

Introduction: 26. You have just graded an exam with 28 questions and 15 students. You fit a logistic item-response model estimating ability, difficulty, and discrimination parameters. Which of the following statements are basically true? (Indicate all that apply.) (a) If a question is answered correctly by students with very low and very high ability, but is missed by students in the middle, it will have a high value for its discrimination parameter. (b) It is not possible to fit an item-response model when you have more questions than students. In order to fit the model, you either need to reduce the number of questions (for example, by discarding some questions or by putting together some questions into a combined score) or increase the number of students in the dataset. (c) To keep the model identified, you can set one of the difficulty parameters or one of the ability parameters to zero and set one of the discrimination parameters to 1. (d) If two students answer the same number of q

4 0.95997119 953 andrew gelman stats-2011-10-11-Steve Jobs’s cancer and science-based medicine

Introduction: Interesting discussion from David Gorski (which I found via this link from Joseph Delaney). I don’t have anything really to add to this discussion except to note the value of this sort of anecdote in a statistics discussion. It’s only n=1 and adds almost nothing to the literature on the effectiveness of various treatments, but a story like this can help focus one’s thoughts on the decision problems.

5 0.95929086 1838 andrew gelman stats-2013-05-03-Setting aside the politics, the debate over the new health-care study reveals that we’re moving to a new high standard of statistical journalism

Introduction: Pointing to this news article by Megan McArdle discussing a recent study of Medicaid recipients, Jonathan Falk writes: Forget the interpretation for a moment, and the political spin, but haven’t we reached an interesting point when a journalist says things like: When you do an RCT with more than 12,000 people in it, and your defense of your hypothesis is that maybe the study just didn’t have enough power, what you’re actually saying is “the beneficial effects are probably pretty small”. and A good Bayesian—and aren’t most of us are supposed to be good Bayesians these days?—should be updating in light of this new information. Given this result, what is the likelihood that Obamacare will have a positive impact on the average health of Americans? Every one of us, for or against, should be revising that probability downwards. I’m not saying that you have to revise it to zero; I certainly haven’t. But however high it was yesterday, it should be somewhat lower today. This

6 0.95901597 2029 andrew gelman stats-2013-09-18-Understanding posterior p-values

7 0.95820558 1240 andrew gelman stats-2012-04-02-Blogads update

8 0.95768809 1757 andrew gelman stats-2013-03-11-My problem with the Lindley paradox

9 0.95738363 2312 andrew gelman stats-2014-04-29-Ken Rice presents a unifying approach to statistical inference and hypothesis testing

10 0.95736784 846 andrew gelman stats-2011-08-09-Default priors update?

11 0.95695031 197 andrew gelman stats-2010-08-10-The last great essayist?

12 0.95574617 847 andrew gelman stats-2011-08-10-Using a “pure infographic” to explore differences between information visualization and statistical graphics

13 0.95565319 1087 andrew gelman stats-2011-12-27-“Keeping things unridiculous”: Berger, O’Hagan, and me on weakly informative priors

14 0.95511246 1455 andrew gelman stats-2012-08-12-Probabilistic screening to get an approximate self-weighted sample

15 0.95414555 1080 andrew gelman stats-2011-12-24-Latest in blog advertising

16 0.95341063 1221 andrew gelman stats-2012-03-19-Whassup with deviance having a high posterior correlation with a parameter in the model?

17 0.95300221 1072 andrew gelman stats-2011-12-19-“The difference between . . .”: It’s not just p=.05 vs. p=.06

18 0.95284903 1062 andrew gelman stats-2011-12-16-Mr. Pearson, meet Mr. Mandelbrot: Detecting Novel Associations in Large Data Sets

19 0.95266557 1999 andrew gelman stats-2013-08-27-Bayesian model averaging or fitting a larger model

20 0.95206249 779 andrew gelman stats-2011-06-25-Avoiding boundary estimates using a prior distribution as regularization