andrew_gelman_stats-2011-797 knowledge-graph by maker-knowledge-mining

797 andrew gelman stats-2011-07-11-How do we evaluate a new and wacky claim?


meta info for this blog

Source: html

Introduction: Around these parts we see a continuing flow of unusual claims supported by some statistical evidence. The claims are varyingly plausible a priori. Some examples (I won’t bother to supply the links; regular readers will remember these examples and newcomers can find them by searching): - Obesity is contagious - People’s names affect where they live, what jobs they take, etc. - Beautiful people are more likely to have girl babies - More attractive instructors have higher teaching evaluations - In a basketball game, it’s better to be behind by a point at halftime than to be ahead by a point - Praying for someone without their knowledge improves their recovery from heart attacks - A variety of claims about ESP How should we think about these claims? The usual approach is to evaluate the statistical evidence–in particular, to look for reasons that the claimed results are not really statistically significant. If nobody can shoot down a claim, it survives. The other part of th


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 The key step is to interpret the findings quantitatively: not just as significant/non-significant but as an effect size, and then looking at the implications of the estimated effect. [sent-10, score-0.399]

2 An easy example is one in which the estimated effect is completely plausible (for example, the incumbency advantage in U. [sent-12, score-0.562]

3 Neither of the examples I consider here is easy: both of the claims are odd but plausible, and both are supported by data, theory, and reasonably sophisticated analysis. [sent-15, score-0.285]

4 The effect of rain on July 4th My co-blogger John Sides linked to an article by Andreas Madestam and David Yanagizawa-Drott that reports that going to July 4th celebrations in childhood had the effect of making people more Republican. [sent-16, score-1.061]

5 We find that days without rain on Fourth of July in childhood have lifelong effects. [sent-18, score-0.504]

6 Our estimates are significant: one Fourth of July without rain before age 18 raises the likelihood of identifying as a Republican by 2 percent and voting for the Republican candidate by 4 percent. [sent-20, score-0.36]

7 Here was John’s reaction: In sum, if you were born before 1970, and experienced sunny July 4th days between the ages of 7-14, and lived in a predominantly Republican county, you may be more Republican as a consequence. [sent-24, score-0.383]

8 One July 4th without rain increases the probability of Republican vote by 4%. [sent-33, score-0.445]

9 ) on the Republican vote and 0% on the Democratic vote, then the effect on the vote share D/(D+R) is 1. [sent-38, score-0.58]

10 ] Does a childhood full of sunny July 4ths really make you 24 percentage points more likely to vote Republican? [sent-43, score-0.402]

11 (The authors find no such effect when considering the weather in a few other days in July. [sent-44, score-0.59]

12 1 of the paper) because not everyone goes to a July 4th celebration and that they don’t actually know the counties where the survey respondents lived as children. [sent-47, score-0.302]

13 It’s hard enough to believe an effect size of 24%, but it’s really hard to believe 24% as an underestimate. [sent-48, score-0.336]

14 The most convincing part of the analysis was that they found no effect of rain on July 2, 3, 5, or 6. [sent-50, score-0.518]

15 The authors predict individual survey respondents given the July 4th weather when they were children, in the counties where they currently reside. [sent-56, score-0.393]

16 Setting aside these measurement issues, the big identification issue is that counties with more rain might be systematically different than counties with less rain. [sent-58, score-0.608]

17 I found these claims varyingly plausible: the business with the grades and the strikeouts sounded like a joke, but the claims about career choices, etc., seemed possible. [sent-67, score-0.695]

18 My first step in trying to understand these claims was to estimate an effect size: my crude estimate was that, if the research findings were correct, that about 1% of people choose their career based on their first names. [sent-68, score-0.549]

19 ) argued that the implied effects were too large to be believed (just as I was arguing above regarding the July 4th study), which makes more plausible his claims that the results arise from methodological artifacts. [sent-70, score-0.391]

20 The J marries J effect should not be much larger than the effect of, say, conditioning on going to the same high-school, having sat next to each other in class for a whole semester. [sent-76, score-0.657]
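
The sentScore values above come from the page's tf-idf summarizer, whose code is not part of this page. As a rough illustration only, one common recipe is to score each sentence by the summed tf-idf weight of its words; a minimal sketch of that recipe, assuming scikit-learn and stand-in sentences rather than the actual extraction pipeline:

```python
# Minimal sketch (assumed, not the site's actual summarizer): rank sentences
# by the total tf-idf weight of their words, as in the sentScore table above.
from sklearn.feature_extraction.text import TfidfVectorizer
import numpy as np

# Stand-in sentences; the real input would be the blog post's own sentences.
sentences = [
    "The key step is to interpret the findings quantitatively, as an effect size.",
    "Days without rain on the Fourth of July in childhood have lifelong effects.",
    "It is hard to believe an effect size of 24 percent as an underestimate.",
]

# One tf-idf row per sentence; the sentence score is the sum of its weights.
X = TfidfVectorizer(stop_words="english").fit_transform(sentences)
scores = np.asarray(X.sum(axis=1)).ravel()

# Print sentences ranked by score, like the sentIndex/sentScore list above.
for rank, i in enumerate(np.argsort(-scores), start=1):
    print(rank, round(float(scores[i]), 3), sentences[i])
```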


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('july', 0.474), ('effect', 0.276), ('rain', 0.242), ('counties', 0.183), ('republican', 0.166), ('claims', 0.163), ('celebrations', 0.162), ('vote', 0.152), ('plausible', 0.152), ('madestam', 0.108), ('varyingly', 0.108), ('days', 0.106), ('weather', 0.105), ('fourth', 0.105), ('childhood', 0.105), ('names', 0.102), ('sunny', 0.098), ('strikeouts', 0.093), ('john', 0.087), ('pelham', 0.083), ('effects', 0.076), ('easy', 0.069), ('lived', 0.068), ('percent', 0.067), ('examples', 0.067), ('county', 0.067), ('esp', 0.066), ('grades', 0.065), ('simonsohn', 0.065), ('estimated', 0.065), ('born', 0.062), ('size', 0.06), ('findings', 0.058), ('larger', 0.056), ('politically', 0.056), ('supported', 0.055), ('authors', 0.054), ('systematic', 0.054), ('career', 0.052), ('without', 0.051), ('respondents', 0.051), ('claim', 0.051), ('choices', 0.051), ('marries', 0.049), ('patriotism', 0.049), ('predominantly', 0.049), ('considering', 0.049), ('reaction', 0.048), ('likely', 0.047), ('stays', 0.046)]
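
The (word, weight) pairs above are the post's top tf-idf terms, and the simValue column in the list below is a tf-idf similarity score. As an illustration only (the site's actual pipeline is not shown), a minimal sketch assuming scikit-learn and toy documents:

```python
# Minimal sketch (assumed): top tf-idf terms for one document and cosine
# similarity of the other documents to it, as in the lists on this page.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy documents; document 0 stands in for this blog post.
docs = [
    "rain on july 4th celebrations and the republican vote",
    "measurement error models for the evaluation of wacky claims",
    "is it plausible that people pick a career based on their first name",
]

vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(docs)

# Top-weighted terms for document 0, like the (word, weight) list above.
terms = vec.get_feature_names_out()
weights = X[0].toarray().ravel()
print(sorted(zip(terms, weights.round(3)), key=lambda t: -t[1])[:5])

# Cosine similarity of every document to document 0, like the simValue column.
print(cosine_similarity(X[0], X).ravel())
```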

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.99999994 797 andrew gelman stats-2011-07-11-How do we evaluate a new and wacky claim?

2 0.36803645 803 andrew gelman stats-2011-07-14-Subtleties with measurement-error models for the evaluation of wacky claims

Introduction: A few days ago I discussed the evaluation of somewhat-plausible claims that are somewhat supported by theory and somewhat supported by statistical evidence. One point I raised was that an implausibly large estimate of effect size can be cause for concern: Uri Simonsohn (the author of the recent rebuttal of the name-choice article by Pelham et al.) argued that the implied effects were too large to be believed (just as I was arguing above regarding the July 4th study), which makes more plausible his claims that the results arise from methodological artifacts. That calculation is straight Bayes: the distribution of systematic errors has much longer tails than the distribution of random errors, so the larger the estimated effect, the more likely it is to be a mistake. This little theoretical result is a bit annoying, because it is the larger effects that are the most interesting!” Larry Bartels notes that my reasoning above is a bit incoherent: I [Bartels] strongly agree with

3 0.19046168 629 andrew gelman stats-2011-03-26-Is it plausible that 1% of people pick a career based on their first name?

Introduction: In my discussion of dentists-named-Dennis study, I referred to my back-of-the-envelope calculation that the effect (if it indeed exists) corresponds to an approximate 1% aggregate chance that you’ll pick a profession based on your first name. Even if there are nearly twice as many dentist Dennises as would be expected from chance alone, the base rate is so low that a shift of 1% of all Dennises would be enough to do this. My point was that (a) even a small effect could show up when looking at low-frequency events such as the choice to pick a particular career or live in a particular city, and (b) any small effects will inherently be difficult to detect in any direct way. Uri Simonsohn (the author of the recent rebuttal of the original name-choice article by Brett Pelham et al.) wrote: In terms of the effect size. I [Simonsohn] think of it differently and see it as too big to be believable. I don’t find it plausible that I can double the odds that my daughter will marry an

4 0.15517104 2255 andrew gelman stats-2014-03-19-How Americans vote

Introduction: An interview with me from 2012 : You’re a statistician and wrote a book,  Red State, Blue State, Rich State, Poor State , looking at why Americans vote the way they do. In an election year I think it would be a good time to revisit that question, not just for people in the US, but anyone around the world who wants to understand the realities – rather than the stereotypes – of how Americans vote. I regret the title I gave my book. I was too greedy. I wanted it to be an airport bestseller because I figured there were millions of people who are interested in politics and some subset of them are always looking at the statistics. It’s got a very grabby title and as a result people underestimated the content. They thought it was a popularisation of my work, or, at best, an expansion of an article we’d written. But it had tons of original material. If I’d given it a more serious, political science-y title, then all sorts of people would have wanted to read it, because they would

5 0.14283402 1607 andrew gelman stats-2012-12-05-The p-value is not . . .

Introduction: From a recent email exchange: I agree that you should never compare p-values directly. The p-value is a strange nonlinear transformation of data that is only interpretable under the null hypothesis. Once you abandon the null (as we do when we observe something with a very low p-value), the p-value itself becomes irrelevant. To put it another way, the p-value is a measure of evidence, it is not an estimate of effect size (as it is often treated, with the idea that a p=.001 effect is larger than a p=.01 effect, etc). Even conditional on sample size, the p-value is not a measure of effect size.

6 0.13830295 2166 andrew gelman stats-2014-01-10-3 years out of date on the whole Dennis the dentist thing!

7 0.1373118 2236 andrew gelman stats-2014-03-07-Selection bias in the reporting of shaky research

8 0.13595146 511 andrew gelman stats-2011-01-11-One more time on that ESP study: The problem of overestimates and the shrinkage solution

9 0.13413174 565 andrew gelman stats-2011-02-09-Dennis the dentist, debunked?

10 0.13413119 1876 andrew gelman stats-2013-05-29-Another one of those “Psychological Science” papers (this time on biceps size and political attitudes among college students)

11 0.131598 506 andrew gelman stats-2011-01-06-That silly ESP paper and some silliness in a rebuttal as well

12 0.13103509 1744 andrew gelman stats-2013-03-01-Why big effects are more important than small effects

13 0.12923707 2180 andrew gelman stats-2014-01-21-Everything I need to know about Bayesian statistics, I learned in eight schools.

14 0.12855315 1910 andrew gelman stats-2013-06-22-Struggles over the criticism of the “cannabis users and IQ change” paper

15 0.12118762 1968 andrew gelman stats-2013-08-05-Evidence on the impact of sustained use of polynomial regression on causal inference (a claim that coal heating is reducing lifespan by 5 years for half a billion people)

16 0.11848124 1860 andrew gelman stats-2013-05-17-How can statisticians help psychologists do their research better?

17 0.11822793 934 andrew gelman stats-2011-09-30-Nooooooooooooooooooo!

18 0.11715909 770 andrew gelman stats-2011-06-15-Still more Mr. P in public health

19 0.1168587 518 andrew gelman stats-2011-01-15-Regression discontinuity designs: looking for the keys under the lamppost?

20 0.11452862 1400 andrew gelman stats-2012-06-29-Decline Effect in Linguistics?


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.259), (1, -0.045), (2, 0.157), (3, -0.093), (4, -0.022), (5, -0.06), (6, -0.018), (7, 0.01), (8, 0.002), (9, -0.066), (10, -0.038), (11, 0.043), (12, 0.064), (13, -0.062), (14, 0.072), (15, 0.049), (16, -0.017), (17, 0.036), (18, -0.022), (19, 0.034), (20, -0.069), (21, -0.009), (22, -0.001), (23, -0.039), (24, -0.006), (25, 0.028), (26, -0.028), (27, 0.066), (28, -0.028), (29, -0.087), (30, -0.024), (31, 0.018), (32, -0.064), (33, -0.03), (34, 0.02), (35, 0.005), (36, -0.031), (37, -0.069), (38, -0.085), (39, -0.018), (40, -0.015), (41, 0.009), (42, -0.056), (43, -0.006), (44, 0.025), (45, 0.014), (46, -0.043), (47, 0.022), (48, 0.017), (49, 0.037)]
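
The [(topicId, topicWeight), ...] pairs above are the post's projection onto latent semantic indexing (LSI) topics; the weights can be negative because LSI is an SVD-based projection rather than a probability distribution. A minimal sketch of the standard gensim LSI workflow, using toy documents rather than the site's actual model:

```python
# Minimal sketch (assumed): project documents onto LSI topics and rank them
# by cosine similarity in topic space, as in the list below.
from gensim import corpora, models, similarities

# Toy tokenized documents; document 0 stands in for this blog post.
texts = [
    ["rain", "july", "republican", "vote", "effect", "counties"],
    ["effect", "size", "plausible", "claims", "evidence"],
    ["names", "career", "dentist", "effect", "plausible"],
]

dictionary = corpora.Dictionary(texts)
bow = [dictionary.doc2bow(t) for t in texts]

# Project each document onto a small number of latent (SVD) topics.
lsi = models.LsiModel(bow, id2word=dictionary, num_topics=2)
print(lsi[bow[0]])   # [(topicId, topicWeight), ...] as in the list above

# Rank documents by cosine similarity to document 0 in topic space.
index = similarities.MatrixSimilarity(lsi[bow])
print(sorted(enumerate(index[lsi[bow[0]]]), key=lambda t: -t[1]))
```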

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.98120946 797 andrew gelman stats-2011-07-11-How do we evaluate a new and wacky claim?

2 0.8666907 629 andrew gelman stats-2011-03-26-Is it plausible that 1% of people pick a career based on their first name?

Introduction: In my discussion of dentists-named-Dennis study, I referred to my back-of-the-envelope calculation that the effect (if it indeed exists) corresponds to an approximate 1% aggregate chance that you’ll pick a profession based on your first name. Even if there are nearly twice as many dentist Dennises as would be expected from chance alone, the base rate is so low that a shift of 1% of all Dennises would be enough to do this. My point was that (a) even a small effect could show up when looking at low-frequency events such as the choice to pick a particular career or live in a particular city, and (b) any small effects will inherently be difficult to detect in any direct way. Uri Simonsohn (the author of the recent rebuttal of the original name-choice article by Brett Pelham et al.) wrote: In terms of the effect size. I [Simonsohn] think of it differently and see it as too big to be believable. I don’t find it plausible that I can double the odds that my daughter will marry an

3 0.86051333 803 andrew gelman stats-2011-07-14-Subtleties with measurement-error models for the evaluation of wacky claims

Introduction: A few days ago I discussed the evaluation of somewhat-plausible claims that are somewhat supported by theory and somewhat supported by statistical evidence. One point I raised was that an implausibly large estimate of effect size can be cause for concern: Uri Simonsohn (the author of the recent rebuttal of the name-choice article by Pelham et al.) argued that the implied effects were too large to be believed (just as I was arguing above regarding the July 4th study), which makes more plausible his claims that the results arise from methodological artifacts. That calculation is straight Bayes: the distribution of systematic errors has much longer tails than the distribution of random errors, so the larger the estimated effect, the more likely it is to be a mistake. This little theoretical result is a bit annoying, because it is the larger effects that are the most interesting!” Larry Bartels notes that my reasoning above is a bit incoherent: I [Bartels] strongly agree with

4 0.848216 2196 andrew gelman stats-2014-02-03-One-way street fallacy again! in reporting of research on brothers and sisters

Introduction: There’s something satisfying about seeing the same error being made by commentators on the left and the right. In this case, we’re talking about the one-way street fallacy , which is the implicit assumption of unidirectionality in a setting that actually has underlying symmetry. 1. A month or so ago we reported on an op-ed by conservative New York Times columnist Ross Douthat, who was discussing recent research exemplified by the headline, “Study: Having daughters makes parents more likely to be Republican.” Douthat wrote all about different effects of having girls, without realizing that the study was comparing parents of girls to parents of boys. He just as well could have talked about the effects of having sons, and how that is associated with voting for Democrats (according to the study). But he did not do so; he was implicitly considering boy children to be the default. 2. A couple days ago, liberal NYT columnist Charles Blow ( link from commenter Steve Sailer) repo

5 0.81302238 1400 andrew gelman stats-2012-06-29-Decline Effect in Linguistics?

Introduction: Josef Fruehwald writes : In the past few years, the empirical foundations of the social sciences, especially Psychology, have been coming under increased scrutiny and criticism. For example, there was the New Yorker piece from 2010 called “The Truth Wears Off” about the “decline effect,” or how the effect size of a phenomenon appears to decrease over time. . . . I [Fruehwald] am a linguist. Do the problems facing psychology face me? To really answer that, I first have to decide which explanation for the decline effect I think is most likely, and I think Andrew Gelman’s proposal is a good candidate: The short story is that if you screen for statistical significance when estimating small effects, you will necessarily overestimate the magnitudes of effects, sometimes by a huge amount. I’ve put together some R code to demonstrate this point. Let’s say I’m looking at two populations, and unknown to me as a researcher, there is a small difference between the two, even though they

6 0.78997552 2223 andrew gelman stats-2014-02-24-“Edlin’s rule” for routinely scaling down published estimates

7 0.78845114 898 andrew gelman stats-2011-09-10-Fourteen magic words: an update

8 0.78634769 2193 andrew gelman stats-2014-01-31-Into the thicket of variation: More on the political orientations of parents of sons and daughters, and a return to the tradeoff between internal and external validity in design and interpretation of research studies

9 0.78427017 1744 andrew gelman stats-2013-03-01-Why big effects are more important than small effects

10 0.77090824 2165 andrew gelman stats-2014-01-09-San Fernando Valley cityscapes: An example of the benefits of fractal devastation?

11 0.76504844 2090 andrew gelman stats-2013-11-05-How much do we trust a new claim that early childhood stimulation raised earnings by 42%?

12 0.7620464 2156 andrew gelman stats-2014-01-01-“Though They May Be Unaware, Newlyweds Implicitly Know Whether Their Marriage Will Be Satisfying”

13 0.75890183 1186 andrew gelman stats-2012-02-27-Confusion from illusory precision

14 0.75846845 963 andrew gelman stats-2011-10-18-Question on Type M errors

15 0.75685984 2336 andrew gelman stats-2014-05-16-How much can we learn about individual-level causal claims from state-level correlations?

16 0.75170904 2040 andrew gelman stats-2013-09-26-Difficulties in making inferences about scientific truth from distributions of published p-values

17 0.74833667 716 andrew gelman stats-2011-05-17-Is the internet causing half the rapes in Norway? I wanna see the scatterplot.

18 0.74629104 1547 andrew gelman stats-2012-10-25-College football, voting, and the law of large numbers

19 0.74362129 511 andrew gelman stats-2011-01-11-One more time on that ESP study: The problem of overestimates and the shrinkage solution

20 0.74267936 7 andrew gelman stats-2010-04-27-Should Mister P be allowed-encouraged to reside in counter-factual populations?


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(2, 0.02), (8, 0.017), (9, 0.025), (13, 0.017), (15, 0.062), (16, 0.056), (21, 0.013), (24, 0.136), (34, 0.014), (48, 0.036), (55, 0.01), (66, 0.071), (86, 0.039), (88, 0.018), (89, 0.023), (99, 0.302)]
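
The LDA weights above, unlike the LSI ones, look like a sparse topic distribution: all weights are non-negative, and topics with negligible probability appear to have been dropped, which is why the listed weights do not sum exactly to 1. A minimal sketch, assuming gensim and toy documents rather than the site's actual model:

```python
# Minimal sketch (assumed): fit an LDA model and print a document's sparse
# topic distribution, as in the [(topicId, topicWeight), ...] list above.
from gensim import corpora, models

# Toy tokenized documents; document 0 stands in for this blog post.
texts = [
    ["rain", "july", "republican", "vote", "effect", "counties"],
    ["effect", "size", "plausible", "claims", "evidence"],
    ["names", "career", "dentist", "effect", "plausible"],
]

dictionary = corpora.Dictionary(texts)
bow = [dictionary.doc2bow(t) for t in texts]

# Fit a small LDA model; the real index presumably uses on the order of 100 topics.
lda = models.LdaModel(bow, id2word=dictionary, num_topics=3,
                      random_state=0, passes=10)

# Topics below the probability threshold are dropped, giving a sparse list.
print(lda.get_document_topics(bow[0], minimum_probability=0.01))
```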

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.97588015 797 andrew gelman stats-2011-07-11-How do we evaluate a new and wacky claim?

2 0.97154123 1200 andrew gelman stats-2012-03-06-Some economists are skeptical about microfoundations

Introduction: A few months ago, I wrote : Economists seem to rely heavily on a sort of folk psychology, a relic of the 1920s-1950s in which people calculate utilities (or act as if they are doing so) in order to make decisions. A central tenet of economics is that inference or policy recommendation be derived from first principles from this folk-psychology model. This just seems silly to me, as if astronomers justified all their calculations with an underlying appeal to Aristotle’s mechanics. Or maybe the better analogy is the Stalinist era in which everything had to be connected to Marxist principles (followed, perhaps, by an equationful explanation of how the world can be interpreted as if Marxism were valid). Mark Thoma and Paul Krugman seem to agree with me on this one (as does my Barnard colleague Rajiv Sethi ). They don’t go so far as to identify utility etc as folk psychology, but maybe that will come next. P.S. Perhaps this will clarify: In a typical economics research pap

3 0.96462518 204 andrew gelman stats-2010-08-12-Sloppily-written slam on moderately celebrated writers is amusing nonetheless

Introduction: Via J. Robert Lennon, I discovered this amusing blog by Anis Shivani on “The 15 Most Overrated Contemporary American Writers.” Lennon found it so annoying that he refused to even link to it, but I actually enjoyed Shivani’s bit of performance art. The literary criticism I see is so focused on individual books that it’s refreshing to see someone take on an entire author’s career in a single paragraph. I agree with Lennon that Shivani’s blog doesn’t have much content–it’s full of terms such as “vacuity” and “pap,” compared to which “trendy” and “fashionable” are precision instruments–but Shivani covers a lot of ground and it’s fun to see this all in one place. My main complaint with Shivani, beyond his sloppy writing (but, hey, it’s just a blog; I’m sure he saves the good stuff for his paid gigs) is his implicit assumption that everyone should agree with him. I’m as big a Kazin fan as anyone, but I still think he completely undervalued Marquand. The other thing I noticed

4 0.96451241 1435 andrew gelman stats-2012-07-30-Retracted articles and unethical behavior in economics journals?

Introduction: Stan Liebowitz writes: Have you ever heard of an article being retracted in economics? I know you have only been doing this for a few years but I suspect that the answer is that none or very few are retracted. No economist would ever deceive another. There is virtually no interest in detecting cheating. And what good would that do if there is no form of punishment? I say this because I think I have found a case in one of our top journals but the editor allowed the authors of the original article to write an anonymous referee report defending themselves and used this report to reject my comment even though an independent referee recommended publication. My reply: I wonder how this sort of thing will change in the future as journals become less important. My impression is that, on one side, researchers are increasingly citing NBER reports, Arxiv preprints, and the like; while, from the other direction, journals such as Science and Nature are developing the reputations of being “t

5 0.96446985 1322 andrew gelman stats-2012-05-15-Question 5 of my final exam for Design and Analysis of Sample Surveys

Introduction: 5. Which of the following better describes changes in public opinion on most issues? (Choose only one.) (a) Dynamic stability: On any given issue, average opinion remains stable but liberals and conservatives move back and forth in opposite directions (the “accordion model”) (b) Uniform swing: Average opinion on an issue can move but the liberals and conservatives don’t move much relative to each other (the distribution of opinions is a “solid block of wood”) (c) Compensating tradeoffs: When considering multiple survey questions on the same general topic, average opinion can move sharply to the left or right on individual questions while the average over all the questions remains stable (the “rubber band model”) Solution to question 4 From yesterday: 4. Researchers have found that survey respondents overreport church attendance. Thus, naive estimates from surveys overstate the percentage of Americans who attend church regularly. Does this have a large impact on estimate

6 0.96199602 1910 andrew gelman stats-2013-06-22-Struggles over the criticism of the “cannabis users and IQ change” paper

7 0.96169609 2244 andrew gelman stats-2014-03-11-What if I were to stop publishing in journals?

8 0.96075332 902 andrew gelman stats-2011-09-12-The importance of style in academic writing

9 0.96047074 1848 andrew gelman stats-2013-05-09-A tale of two discussion papers

10 0.96022999 1271 andrew gelman stats-2012-04-20-Education could use some systematic evaluation

11 0.95996803 167 andrew gelman stats-2010-07-27-Why don’t more medical discoveries become cures?

12 0.95896995 1162 andrew gelman stats-2012-02-11-Adding an error model to a deterministic model

13 0.95840728 1323 andrew gelman stats-2012-05-16-Question 6 of my final exam for Design and Analysis of Sample Surveys

14 0.95826226 1544 andrew gelman stats-2012-10-22-Is it meaningful to talk about a probability of “65.7%” that Obama will win the election?

15 0.95823121 1350 andrew gelman stats-2012-05-28-Value-added assessment: What went wrong?

16 0.95798826 922 andrew gelman stats-2011-09-24-Economists don’t think like accountants—but maybe they should

17 0.95702201 1865 andrew gelman stats-2013-05-20-What happened that the journal Psychological Science published a paper with no identifiable strengths?

18 0.95687938 506 andrew gelman stats-2011-01-06-That silly ESP paper and some silliness in a rebuttal as well

19 0.95659566 982 andrew gelman stats-2011-10-30-“There’s at least as much as an 80 percent chance . . .”

20 0.95654416 1560 andrew gelman stats-2012-11-03-Statistical methods that work in some settings but not others