andrew_gelman_stats-2011-506 knowledge-graph by maker-knowledge-mining

506 andrew gelman stats-2011-01-06-That silly ESP paper and some silliness in a rebuttal as well


meta info for this blog

Source: html

Introduction: John Talbott points me to this, which I briefly mocked a couple months ago. I largely agree with the critics of this research, but I want to reiterate my point from earlier that all the statistical sophistication in the world won’t help you if you’re studying a null effect. This is not to say that the actual effect is zero—who am I to say?—just that the comments about the high-quality statistics in the article don’t say much to me. There’s lots of discussion of the lack of science underlying ESP claims. I can’t offer anything useful on that account (not being a psychologist, I could imagine all sorts of stories about brain waves or whatever), but I would like to point out something that usually doesn’t seem to get mentioned in these discussions, which is that lots of people want to believe in ESP. After all, it would be cool to read minds. (It wouldn’t be so cool, maybe, if other people could read your mind and you couldn’t read theirs, but I suspect most people don’t think


Summary: the most important sentences generated by tfidf model

sentIndex sentText sentNum sentScore

1 I largely agree with the critics of this research, but I want to reiterate my point from earlier that all the statistical sophistication in the world won’t help you if you’re studying a null effect. [sent-2, score-0.585]

2 It really feels like if you concentrate really hard, you can read minds, or predict the future, or whatever. [sent-10, score-0.31]

3 Heck, when I play squash I always feel that if I really really try hard, I should be able to win every point. [sent-11, score-0.245]

4 The only thing that stops me from really believing this is that I realize that the same logic holds symmetrically for my opponent. [sent-12, score-0.154]

5 But with ESP, absent a controlled study, it’s easy to see evidence all around you supporting your wishful thinking. [sent-13, score-0.247]

6 Recall the experiments reported by Ellen Langer, that people would shake their dice more forcefully when trying to roll high numbers and would roll gently when going for low numbers. [sent-15, score-0.276]

7 And, as David Weakliem and I have discussed, classical statistical methods that work reasonably well when studying moderate or large effects (see the work of Fisher, Snedecor, Cochran, etc.) fall apart in the presence of small effects. [sent-20, score-0.604]

8 I think it’s naive when people implicitly assume the following dichotomy: either a study’s claims are correct, or that study’s statistical methods are weak. [sent-22, score-0.196]

9 ESP is a field of small effects and so ESP researchers use high-quality statistics. [sent-24, score-0.178]

10 To put it another way: whatever methodological errors happen to be in the paper in question probably occur in lots of research papers in “legitimate” psychology research. [sent-25, score-0.172]

11 The difference is that when you’re studying a large, robust phenomenon, little statistical errors won’t be so damaging as in a study of a fragile, possibly zero effect. [sent-26, score-0.462]

12 In some ways, there’s an analogy to the difficulties of using surveys to estimate small proportions, in which case misclassification errors can loom large, as discussed here. [sent-27, score-0.22] (A worked example follows this list.)

13 But I wouldn’t use the Bayesian methods that these critics recommend. [sent-29, score-0.304]

14 And, if you know me at all (in a professional capacity), you’ll know I hate statements like this: Another advantage of the Bayesian test is that it is consistent: as the number of participants grows large, the probability of discovering the true hypothesis approaches 1. [sent-37, score-0.449] (A sketch of this consistency claim follows this list.)

15 I have to go to bed now (no, I’m not going to bed at 9am; I set this blog up to post entries automatically every morning). [sent-39, score-0.234]

16 If you happen to run into an experiment of interest in which psychologists are “discovering a true hypothesis” (in the statistical sense of a precise model), feel free to wake me up and tell me. [sent-40, score-0.245]

17 Anyway, the ESP thing is pretty silly and so there are lots of ways of shooting it down. [sent-42, score-0.15]

18 because often we’re full of uncertainty about more interesting problems. For example, new educational strategies and their effects on different sorts of students. [sent-44, score-0.198]

19 For these sorts of problems, I don’t think that models of null effects, verbal characterizations of Bayes factors, and reassurances about “discovering the true hypothesis” are going to cut it. [sent-45, score-0.32]

20 These methods are important, and I think that, even when criticizing silly studies, we should think carefully about what we’re doing and what our methods are actually purporting to do. [sent-46, score-0.39]
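Two of the statistical points in the summary above are concrete enough to work through in code. First, sentence 12's warning about surveys and small proportions: a minimal sketch in Python, with made-up rates (the post gives no numbers), showing how a 1% misclassification rate swamps a 0.5% true prevalence.

# Worked example for sentence 12, with hypothetical rates.
p_true = 0.005   # true proportion answering "yes" (made up)
fpr    = 0.01    # chance a true "no" is recorded as "yes" (made up)
fnr    = 0.01    # chance a true "yes" is recorded as "no" (made up)

p_observed = p_true * (1 - fnr) + (1 - p_true) * fpr
print(p_observed)   # ~0.0149: roughly triple the true 0.005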
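Second, the consistency claim quoted in sentence 14. A minimal sketch under toy assumptions (a normal model with known variance and a point null, not the actual test in the paper under discussion): when the null happens to be exactly true, the Bayes factor in its favor grows without bound as the number of observations grows. That is the sense in which such a test "discovers the true hypothesis", and it presupposes that one of the two hypotheses is exactly true, which is what sentences 15 and 16 are poking at.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mu_true = 0.0                 # the point null H0: mu = 0 is exactly true here

for n in [10, 100, 1000, 10000]:
    ybar = rng.normal(mu_true, 1, size=n).mean()
    # Marginal density of ybar: N(0, 1/n) under H0, and N(0, 1 + 1/n)
    # under H1: mu ~ N(0, 1). Their ratio is the Bayes factor for H0.
    bf01 = (stats.norm.pdf(ybar, 0, np.sqrt(1 / n))
            / stats.norm.pdf(ybar, 0, np.sqrt(1 + 1 / n)))
    print(n, round(bf01, 1))  # tends to grow with n when H0 holds exactly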
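As for how a summary like the one above could be generated: a minimal sketch, assuming (a guess, since the pipeline behind these sentScore values is not documented here) that each sentence is scored by the total tfidf weight of its terms and the top scorers are kept.

from sklearn.feature_extraction.text import TfidfVectorizer

sentences = [
    "I largely agree with the critics of this research.",
    "ESP is a field of small effects and so ESP researchers use high-quality statistics.",
    "I have to go to bed now.",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(sentences)   # one tfidf row per sentence

scores = X.sum(axis=1).A1                 # total term weight per sentence
for score, sent in sorted(zip(scores, sentences), reverse=True):
    print(f"{score:.3f}  {sent}")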


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('esp', 0.485), ('critics', 0.178), ('discovering', 0.154), ('studying', 0.141), ('bayesian', 0.132), ('wishful', 0.131), ('methods', 0.126), ('null', 0.123), ('hypothesis', 0.118), ('bed', 0.117), ('evidence', 0.116), ('effects', 0.113), ('true', 0.112), ('wagenmakers', 0.108), ('roll', 0.104), ('study', 0.101), ('errors', 0.087), ('large', 0.087), ('correct', 0.086), ('really', 0.086), ('sorts', 0.085), ('lots', 0.085), ('re', 0.078), ('wouldn', 0.076), ('read', 0.075), ('purporting', 0.073), ('backyard', 0.073), ('reiterate', 0.073), ('squash', 0.073), ('weakiem', 0.073), ('statistical', 0.07), ('cool', 0.069), ('symmetrically', 0.068), ('forcefully', 0.068), ('misclassification', 0.068), ('classical', 0.067), ('converted', 0.065), ('ludicrous', 0.065), ('grows', 0.065), ('purportedly', 0.065), ('abundant', 0.065), ('dichotomy', 0.065), ('ellen', 0.065), ('silly', 0.065), ('small', 0.065), ('heck', 0.063), ('snedecor', 0.063), ('wake', 0.063), ('concentrate', 0.063), ('damaging', 0.063)]
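The simValue numbers in the lists below are presumably cosine similarities between tfidf document vectors; the metric is an assumption, since the page does not state it. A minimal sketch with placeholder texts standing in for the full posts:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = {
    "506": "silly esp paper null effect statistical sophistication ...",
    "576": "precognition bem wagenmakers small effects classical methods ...",
    "511": "esp study overestimates shrinkage bayesian statistics ...",
}

ids = list(docs)
X = TfidfVectorizer(stop_words="english").fit_transform(docs.values())
sims = cosine_similarity(X[0], X).ravel()   # blog 506 against every blog

for blog_id, sim in sorted(zip(ids, sims), key=lambda t: -t[1]):
    print(f"{sim:.8f}  {blog_id}")          # 506 matches itself at ~1.0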

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.99999982 506 andrew gelman stats-2011-01-06-That silly ESP paper and some silliness in a rebuttal as well


2 0.44613874 576 andrew gelman stats-2011-02-15-With a bit of precognition, you’d have known I was going to post again on this topic, and with a lot of precognition, you’d have known I was going to post today

Introduction: Chris Masse points me to this response by Daryl Bem and two statisticians (Jessica Utts and Wesley Johnson) to criticisms by Wagenmakers et al. of Bem’s recent ESP study. I have nothing to add but would like to repeat a couple bits of my discussions of last month, of here: Classical statistical methods that work reasonably well when studying moderate or large effects (see the work of Fisher, Snedecor, Cochran, etc.) fall apart in the presence of small effects. I think it’s naive when people implicitly assume that the study’s claims are correct, or the study’s statistical methods are weak. Generally, the smaller the effects you’re studying, the better the statistics you need. ESP is a field of small effects and so ESP researchers use high-quality statistics. To put it another way: whatever methodological errors happen to be in the paper in question probably occur in lots of research papers in “legitimate” psychology research. The difference is that when you’re studying a

3 0.3376331 511 andrew gelman stats-2011-01-11-One more time on that ESP study: The problem of overestimates and the shrinkage solution

Introduction: Benedict Carey writes a follow-up article on ESP studies and Bayesian statistics. (See here for my previous thoughts on the topic.) Everything Carey writes is fine, and he even uses an example I recommended: The statistical approach that has dominated the social sciences for almost a century is called significance testing. The idea is straightforward. A finding from any well-designed study — say, a correlation between a personality trait and the risk of depression — is considered “significant” if its probability of occurring by chance is less than 5 percent. This arbitrary cutoff makes sense when the effect being studied is a large one — for example, when measuring the so-called Stroop effect. This effect predicts that naming the color of a word is faster and more accurate when the word and color match (“red” in red letters) than when they do not (“red” in blue letters), and is very strong in almost everyone. “But if the true effect of what you are measuring is small,” sai

4 0.20050889 762 andrew gelman stats-2011-06-13-How should journals handle replication studies?

Introduction: Sanjay Srivastava reports: Recently Ben Goldacre wrote about a group of researchers (Stuart Ritchie, Chris French, and Richard Wiseman) whose null replication of 3 experiments from the infamous Bem ESP paper was rejected by JPSP – the same journal that published Bem’s paper. Srivastava recognizes that JPSP does not usually publish replications but this is a different story because it’s an anti-replication. Here’s the paradox:

- From a scientific point of view, the Ritchie et al. results are boring. To find out that there’s no evidence for ESP . . . that adds essentially zero to our scientific understanding. What next, a paper demonstrating that pigeons can fly higher than chickens? Maybe an article in the Journal of the Materials Research Society demonstrating that diamonds can scratch marble but not the reverse??

- But from a science-communication perspective, the null replication is a big deal because it adds credence to my hypothesis that the earlier ESP claims

5 0.18807547 1998 andrew gelman stats-2013-08-25-A new Bem theory

Introduction: The other day I was talking with someone who knows Daryl Bem a bit, and he was sharing his thoughts on that notorious ESP paper that was published in a leading journal in the field but then was mocked, shot down, and was repeatedly replicated with no success. My friend said that overall the Bem paper had positive effects in forcing psychologists to think more carefully about what sorts of research results should or should not be published in top journals, the role of replications, and other things. I expressed agreement and shared my thought that, at some level, I don’t think Bem himself fully believes his ESP effects are real. Why do I say this? Because he seemed oddly content to publish results that were not quite conclusive. He ran a bunch of experiments, looked at the data, and computed some post-hoc p-values in the .01 to .05 range. If he really were confident that the phenomenon was real (that is, that the results would apply to new data), then he could’ve easily run the

6 0.17740718 2295 andrew gelman stats-2014-04-18-One-tailed or two-tailed?

7 0.17270106 256 andrew gelman stats-2010-09-04-Noooooooooooooooooooooooooooooooooooooooooooooooo!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

8 0.17108603 1695 andrew gelman stats-2013-01-28-Economists argue about Bayes

9 0.17088759 691 andrew gelman stats-2011-05-03-Psychology researchers discuss ESP

10 0.16903584 1137 andrew gelman stats-2012-01-24-Difficulties in publishing non-replications of implausible findings

11 0.15501757 2149 andrew gelman stats-2013-12-26-Statistical evidence for revised standards

12 0.1522712 2263 andrew gelman stats-2014-03-24-Empirical implications of Empirical Implications of Theoretical Models

13 0.15038259 1205 andrew gelman stats-2012-03-09-Coming to agreement on philosophy of statistics

14 0.14575586 1355 andrew gelman stats-2012-05-31-Lindley’s paradox

15 0.14533728 1974 andrew gelman stats-2013-08-08-Statistical significance and the dangerous lure of certainty

16 0.14479201 1955 andrew gelman stats-2013-07-25-Bayes-respecting experimental design and other things

17 0.14176613 1869 andrew gelman stats-2013-05-24-In which I side with Neyman over Fisher

18 0.14132676 1779 andrew gelman stats-2013-03-27-“Two Dogmas of Strong Objective Bayesianism”

19 0.13837452 1605 andrew gelman stats-2012-12-04-Write This Book

20 0.13486214 1469 andrew gelman stats-2012-08-25-Ways of knowing


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.306), (1, 0.052), (2, -0.05), (3, -0.109), (4, -0.109), (5, -0.071), (6, 0.002), (7, 0.046), (8, 0.016), (9, -0.109), (10, -0.056), (11, 0.009), (12, 0.057), (13, -0.093), (14, 0.058), (15, 0.016), (16, -0.04), (17, -0.006), (18, -0.025), (19, -0.004), (20, -0.022), (21, 0.01), (22, -0.048), (23, 0.012), (24, -0.078), (25, -0.058), (26, -0.011), (27, 0.051), (28, -0.02), (29, -0.02), (30, 0.037), (31, 0.015), (32, 0.051), (33, -0.049), (34, -0.026), (35, -0.039), (36, 0.031), (37, -0.049), (38, -0.052), (39, -0.01), (40, -0.009), (41, 0.081), (42, 0.021), (43, 0.044), (44, 0.017), (45, -0.0), (46, -0.027), (47, 0.018), (48, 0.008), (49, 0.059)]
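A minimal sketch of how topic weights like those above could be produced, assuming "lsi" means truncated SVD of the tfidf matrix; the topicIds running 0-49 suggest a 50-dimensional latent space, though that is an inference from the output, not something the page states.

from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer

# Placeholder corpus; the real run would use every blog post.
docs = [
    "silly esp paper null effect statistics",
    "precognition bem small effects classical methods",
    "p-values significance testing replication",
]

X = TfidfVectorizer(stop_words="english").fit_transform(docs)
# Presumably n_components=50 in the real run; 2 here so the toy corpus suffices.
lsi = TruncatedSVD(n_components=2, random_state=0)
weights = lsi.fit_transform(X)   # one row of topic weights per document

print([(i, round(w, 3)) for i, w in enumerate(weights[0])])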

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.96506858 506 andrew gelman stats-2011-01-06-That silly ESP paper and some silliness in a rebuttal as well


2 0.89533156 511 andrew gelman stats-2011-01-11-One more time on that ESP study: The problem of overestimates and the shrinkage solution


3 0.87908173 576 andrew gelman stats-2011-02-15-With a bit of precognition, you’d have known I was going to post again on this topic, and with a lot of precognition, you’d have known I was going to post today


4 0.86166078 2281 andrew gelman stats-2014-04-04-The Notorious N.H.S.T. presents: Mo P-values Mo Problems

Introduction: A recent discussion between commenters Question and Fernando captured one of the recurrent themes here from the past year.

Question: The problem is simple, the researchers are disproving always false null hypotheses and taking this disproof as near proof that their theory is correct.

Fernando: Whereas it is probably true that researchers misuse NHT, the problem with tabloid science is broader and deeper. It is systemic.

Question: I do not see how anything can be deeper than replacing careful description, prediction, falsification, and independent replication with dynamite plots, p-values, affirming the consequent, and peer review. From my own experience I am confident in saying that confusion caused by NHST is at the root of this problem.

Fernando: Incentives? Impact factors? Publish or die? “Interesting” and “new” above quality and reliability, or actually answering a research question, and a silly and unbecoming obsession with being quoted in NYT, etc. . . . Giv

5 0.85458666 643 andrew gelman stats-2011-04-02-So-called Bayesian hypothesis testing is just as bad as regular hypothesis testing

Introduction: Steve Ziliak points me to this article by the always-excellent Carl Bialik, slamming hypothesis tests. I only wish Carl had talked with me before so hastily posting, though! I would’ve argued with some of the things in the article. In particular, he writes: Reese and Brad Carlin . . . suggest that Bayesian statistics are a better alternative, because they tackle the probability that the hypothesis is true head-on, and incorporate prior knowledge about the variables involved. Brad Carlin does great work in theory, methods, and applications, and I like the bit about the prior knowledge (although I might prefer the more general phrase “additional information”), but I hate that quote! My quick response is that the hypothesis of zero effect is almost never true! The problem with the significance testing framework–Bayesian or otherwise–is in the obsession with the possibility of an exact zero effect. The real concern is not with zero, it’s with claiming a positive effect whe

6 0.83671117 1575 andrew gelman stats-2012-11-12-Thinking like a statistician (continuously) rather than like a civilian (discretely)

7 0.81516129 1883 andrew gelman stats-2013-06-04-Interrogating p-values

8 0.81427085 2149 andrew gelman stats-2013-12-26-Statistical evidence for revised standards

9 0.8109833 2312 andrew gelman stats-2014-04-29-Ken Rice presents a unifying approach to statistical inference and hypothesis testing

10 0.80720133 1760 andrew gelman stats-2013-03-12-Misunderstanding the p-value

11 0.80602098 2040 andrew gelman stats-2013-09-26-Difficulties in making inferences about scientific truth from distributions of published p-values

12 0.79307622 256 andrew gelman stats-2010-09-04-Noooooooooooooooooooooooooooooooooooooooooooooooo!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

13 0.79122514 466 andrew gelman stats-2010-12-13-“The truth wears off: Is there something wrong with the scientific method?”

14 0.7871058 2140 andrew gelman stats-2013-12-19-Revised evidence for statistical standards

15 0.78347898 1355 andrew gelman stats-2012-05-31-Lindley’s paradox

16 0.77881825 2093 andrew gelman stats-2013-11-07-I’m negative on the expression “false positives”

17 0.77788043 2183 andrew gelman stats-2014-01-23-Discussion on preregistration of research studies

18 0.7742269 1171 andrew gelman stats-2012-02-16-“False-positive psychology”

19 0.76887274 2243 andrew gelman stats-2014-03-11-The myth of the myth of the myth of the hot hand

20 0.76079804 2263 andrew gelman stats-2014-03-24-Empirical implications of Empirical Implications of Theoretical Models


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(6, 0.082), (15, 0.084), (16, 0.066), (21, 0.027), (24, 0.183), (30, 0.011), (63, 0.017), (83, 0.013), (84, 0.013), (86, 0.042), (89, 0.018), (99, 0.316)]
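A minimal sketch of how the lda weights above could be produced, assuming a standard latent Dirichlet allocation over raw term counts, with only topics above some small probability threshold reported (which would explain why the topicId list is sparse):

from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Placeholder corpus; the real run would use every blog post.
docs = [
    "silly esp paper null effect statistics",
    "precognition bem small effects classical methods",
    "p-values significance testing replication",
]

counts = CountVectorizer(stop_words="english").fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0)
theta = lda.fit_transform(counts)   # per-document topic proportions

threshold = 0.01
print([(i, round(w, 3)) for i, w in enumerate(theta[0]) if w > threshold])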

similar blogs list:

simIndex simValue blogId blogTitle

1 0.9744699 1409 andrew gelman stats-2012-07-08-Is linear regression unethical in that it gives more weight to cases that are far from the average?

Introduction: I received the following note from someone who’d like to remain anonymous: I read your post on ethics and statistics, and the comments therein, with much interest. I did notice, however, that most of the dialogue was about ethical behavior of scientists. Herein I’d like to suggest a different take, one that focuses on the statistical methods of scientists. For example, fitting a line to a scatter plot of data using OLS [linear regression] gives more weight to outliers. If each data point represents a person we are weighting people differently. And surely the ethical implications are different if we use a least absolute deviation estimator. Recently I reviewed a paper where the authors claimed one advantage of non-parametric rank-based tests is their robustness to outliers. Again, maybe that outlier is the 10th person who dies from an otherwise beneficial medicine. Should we ignore him in assessing the effect of the medicine? I guess this gets me partly into loss f

2 0.97093451 618 andrew gelman stats-2011-03-18-Prior information . . . about the likelihood

Introduction: I read this story by Adrian Chen on Gawker (yeah, yeah, so sue me): Why That ‘NASA Discovers Alien Life’ Story Is Bullshit Fox News has a super-exciting article today: “Exclusive: NASA Scientist claims Evidence of Alien Life on Meteorite.” OMG, aliens exist! Except this NASA scientist has been claiming to have evidence of alien life on meteorites for years. Chen continues with a quote from the Fox News item: [NASA scientist Richard B. Hoover] gave FoxNews.com early access to the out-of-this-world research, published late Friday evening in the March edition of the Journal of Cosmology. In it, Hoover describes the latest findings in his study of an extremely rare class of meteorites, called CI1 carbonaceous chondrites — only nine such meteorites are known to exist on Earth. . . . The bad news is that Hoover reported this same sort of finding in various low-rent venues for several years. Replication, huh? Chen also helpfully points us to the website of the Journal

same-blog 3 0.96924001 506 andrew gelman stats-2011-01-06-That silly ESP paper and some silliness in a rebuttal as well


4 0.96624666 576 andrew gelman stats-2011-02-15-With a bit of precognition, you’d have known I was going to post again on this topic, and with a lot of precognition, you’d have known I was going to post today


5 0.96559453 902 andrew gelman stats-2011-09-12-The importance of style in academic writing

Introduction: In my comments on academic cheating , I briefly discussed the question of how some of these papers could’ve been published in the first place, given that they tend to be of low quality. (It’s rare that people plagiarize the good stuff, and, when they do—for example when a senior scholar takes credit for a junior researcher’s contributions without giving proper credit—there’s not always a paper trail, and there can be legitimate differences of opinion about the relative contributions of the participants.) Anyway, to get back to the cases at hand: how did these rulebreakers get published in the first place? The question here is not how did they get away with cheating but how is it that top journals were publishing mediocre research? In the case of the profs who falsified data (Diederik Stapel) or did not follow scientific protocol (Mark Hauser), the answer is clear: By cheating, they were able to get the sort of too-good-to-be-true results which, if they were true, would be

6 0.96405482 2299 andrew gelman stats-2014-04-21-Stan Model of the Week: Hierarchical Modeling of Supernovas

7 0.96329254 1848 andrew gelman stats-2013-05-09-A tale of two discussion papers

8 0.96280903 1435 andrew gelman stats-2012-07-30-Retracted articles and unethical behavior in economics journals?

9 0.96233308 2244 andrew gelman stats-2014-03-11-What if I were to stop publishing in journals?

10 0.96229756 1162 andrew gelman stats-2012-02-11-Adding an error model to a deterministic model

11 0.96178746 2353 andrew gelman stats-2014-05-30-I posted this as a comment on a sociology blog

12 0.96084791 1779 andrew gelman stats-2013-03-27-“Two Dogmas of Strong Objective Bayesianism”

13 0.96083856 511 andrew gelman stats-2011-01-11-One more time on that ESP study: The problem of overestimates and the shrinkage solution

14 0.96078086 803 andrew gelman stats-2011-07-14-Subtleties with measurement-error models for the evaluation of wacky claims

15 0.95998907 1713 andrew gelman stats-2013-02-08-P-values and statistical practice

16 0.95916653 1247 andrew gelman stats-2012-04-05-More philosophy of Bayes

17 0.95795643 2177 andrew gelman stats-2014-01-19-“The British amateur who debunked the mathematics of happiness”

18 0.95730531 1390 andrew gelman stats-2012-06-23-Traditionalist claims that modern art could just as well be replaced by a “paint-throwing chimp”

19 0.95719826 2055 andrew gelman stats-2013-10-08-A Bayesian approach for peer-review panels? and a speculation about Bruno Frey

20 0.95712817 1906 andrew gelman stats-2013-06-19-“Behind a cancer-treatment firm’s rosy survival claims”