andrew_gelman_stats andrew_gelman_stats-2011 andrew_gelman_stats-2011-511 knowledge-graph by maker-knowledge-mining

511 andrew gelman stats-2011-01-11-One more time on that ESP study: The problem of overestimates and the shrinkage solution


meta info for this blog

Source: html

Introduction: Benedict Carey writes a follow-up article on ESP studies and Bayesian statistics. (See here for my previous thoughts on the topic.) Everything Carey writes is fine, and he even uses an example I recommended: The statistical approach that has dominated the social sciences for almost a century is called significance testing. The idea is straightforward. A finding from any well-designed study — say, a correlation between a personality trait and the risk of depression — is considered “significant” if its probability of occurring by chance is less than 5 percent. This arbitrary cutoff makes sense when the effect being studied is a large one — for example, when measuring the so-called Stroop effect. This effect predicts that naming the color of a word is faster and more accurate when the word and color match (“red” in red letters) than when they do not (“red” in blue letters), and is very strong in almost everyone. “But if the true effect of what you are measuring is small,” said Andrew Gelman, a professor of statistics and political science at Columbia University, “then by necessity anything you discover is going to be an overestimate” of that effect.
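As a rough illustration of the quoted point (this simulation is mine, not from Carey's article or the original post, and the effect size is a made-up number): when the true effect is small, the estimates that clear the 5 percent cutoff are selected for being large, so they overestimate the effect, and a stricter multiple-comparisons threshold only raises the bar and makes the survivors even larger.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

true_effect = 0.1   # small true effect, in standard-error units (illustrative only)
estimates = rng.normal(true_effect, 1.0, size=200_000)   # unbiased but noisy replications

# Keep only results that clear the usual two-sided 5 percent cutoff (|z| > 1.96).
significant = np.abs(estimates) > 1.96
print(f"true effect:                       {true_effect}")
print(f"mean |estimate|, all studies:      {np.abs(estimates).mean():.2f}")
print(f"mean |estimate|, significant only: {np.abs(estimates[significant]).mean():.2f}")

# A Bonferroni-style correction for, say, 20 comparisons raises the threshold,
# so whatever survives it is selected even more strongly for being large.
z_bonferroni = norm.ppf(1 - 0.05 / (2 * 20))
survivors = np.abs(estimates) > z_bonferroni
print(f"mean |estimate|, Bonferroni only:  {np.abs(estimates[survivors]).mean():.2f}")
```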


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Everything Carey writes is fine, and he even uses an example I recommended: The statistical approach that has dominated the social sciences for almost a century is called significance testing. [sent-3, score-0.281]

2 This arbitrary cutoff makes sense when the effect being studied is a large one — for example, when measuring the so-called Stroop effect. [sent-6, score-0.418]

3 This effect predicts that naming the color of a word is faster and more accurate when the word and color match (“red” in red letters) than when they do not (“red” in blue letters), and is very strong in almost everyone. [sent-7, score-0.358]

4 “But if the true effect of what you are measuring is small,” said Andrew Gelman, a professor of statistics and political science at Columbia University, “then by necessity anything you discover is going to be an overestimate” of that effect. [sent-8, score-0.213]

5 Strictly speaking, one would follow “is less than 5 percent” above with “if the null hypothesis of zero effect were actually true,” but they have serious space limitations, and I doubt many readers would get much out of that elaboration, so I’m happy with what Carey put there. [sent-10, score-0.214]

6 And classical corrections for “multiple comparisons” do not solve the problem: they merely create a more rigorous statistical significance filter, but anything that survives that filter will be even more of an overestimate. [sent-15, score-0.668]

7 Psychologists have experience studying large effects, the sort of study in which data from 24 participants is enough to estimate a main effect and 50 will be enough to estimate interactions of interest. [sent-19, score-0.447]

8 I gave the example of the Stroop effect (they have a nice one of those on display right now at the Natural History Museum) as an example of a large effect where classical statistics will do just fine. [sent-20, score-0.638]

9 My point was, if you’ve gone your whole career studying large effects with methods that work, then it’s natural to think you have great methods. [sent-21, score-0.366]

10 The ESP dude was a victim of his own success: His past accomplishments studying large effects gave him an unwarranted feeling of confidence that his methods would work on small effects. [sent-24, score-0.437]

11 This sort of thing comes up a lot, and in my recent discussion of Efron’s article, I list it as my second meta-principle of statistics, the “methodological attribution problem,” which is that people think that methods that work in one sort of problem will work in others. [sent-25, score-0.209]

12 Shrinkage is key, because if all you use is a statistical significance filter–or even a Bayes factor filter–when all is said and done, you’ll still be left with overestimates. [sent-28, score-0.281]

13 Whatever filter you use–whatever rule you use to decide whether something is worth publishing–I still want to see some modeling and shrinkage (or, at least, some retrospective power analysis) to handle the overestimation problem. [sent-29, score-0.433]

14 One thing that saddens me is that, instead of using the sex-ratio example (which I think would’ve been perfect for this article), Carey uses the following completely fake example: Consider the following experiment. [sent-44, score-0.322]

15 Suppose there was reason to believe that a coin was slightly weighted toward heads. [sent-45, score-0.343]

16 In a test, the coin comes up heads 527 times out of 1,000. [sent-46, score-0.247]

17 And then he goes on to write about coin flipping. [sent-47, score-0.188]

18 But, as I showed in my article with Deb, there is no such thing as a coin weighted to have a probability p (different from 1/2) of heads. [sent-48, score-0.408]

19 I’m also disappointed he didn’t use the famous dead-fish example, where Bennett, Baird, Miller, and Wolford found statistically significant correlations in an MRI of a dead salmon. [sent-55, score-0.302]

20 The correlations were not only statistically significant, they were large and newsworthy! [sent-56, score-0.203]
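Sentences 12 and 13 above describe shrinkage as the remedy for the overestimation problem. A minimal sketch of what that means (my illustration with made-up numbers, not code from the original post): with a normal prior for the effect centered at zero and a normal sampling distribution for the estimate, the posterior mean pulls a "just significant" raw estimate most of the way back toward zero.

```python
def shrink(y, se, tau):
    """Posterior mean of the effect theta, assuming the estimate y ~ N(theta, se^2)
    and the prior theta ~ N(0, tau^2). The raw estimate is pulled toward zero,
    more strongly when the data are noisy relative to plausible effect sizes."""
    weight = tau**2 / (tau**2 + se**2)
    return weight * y

# A barely significant estimate (z = 2.1) under a prior that says true effects are small:
raw_estimate, std_error, prior_sd = 2.1, 1.0, 0.5   # illustrative numbers only
print(f"{shrink(raw_estimate, std_error, prior_sd):.2f}")   # 0.42: most of the raw estimate is gone
```

Sentences 15 through 18 refer to Carey's coin example. The classical calculation behind it is easy to reproduce (this code is mine, not Carey's); note that whether 527 heads in 1,000 flips clears the 5 percent cutoff depends on whether the test is one-sided or two-sided.

```python
from scipy.stats import binomtest

# Null hypothesis: a fair coin, Pr(heads) = 0.5
two_sided = binomtest(527, n=1000, p=0.5)
one_sided = binomtest(527, n=1000, p=0.5, alternative="greater")  # alternative: biased toward heads
print(f"estimated Pr(heads): {two_sided.statistic:.3f}")  # 0.527
print(f"two-sided p-value:   {two_sided.pvalue:.3f}")     # roughly 0.09
print(f"one-sided p-value:   {one_sided.pvalue:.3f}")     # just under 0.05
```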


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('carey', 0.358), ('esp', 0.319), ('filter', 0.237), ('coin', 0.188), ('effect', 0.141), ('significance', 0.132), ('stroop', 0.131), ('shrinkage', 0.127), ('studying', 0.116), ('fake', 0.112), ('classical', 0.111), ('survives', 0.108), ('yarkoni', 0.108), ('large', 0.107), ('cutoff', 0.098), ('correlations', 0.096), ('psychology', 0.092), ('university', 0.085), ('study', 0.083), ('overestimate', 0.081), ('thing', 0.081), ('red', 0.081), ('weighted', 0.081), ('statistical', 0.08), ('pooling', 0.078), ('gather', 0.076), ('effects', 0.074), ('toward', 0.074), ('fisher', 0.073), ('hypothesis', 0.073), ('measuring', 0.072), ('small', 0.071), ('letters', 0.071), ('example', 0.069), ('methods', 0.069), ('use', 0.069), ('partial', 0.069), ('color', 0.068), ('methodological', 0.068), ('significant', 0.068), ('whatever', 0.064), ('didn', 0.064), ('crappier', 0.06), ('mri', 0.06), ('capitalistic', 0.06), ('juxtaposition', 0.06), ('saddens', 0.06), ('sifting', 0.06), ('comes', 0.059), ('article', 0.058)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0000001 511 andrew gelman stats-2011-01-11-One more time on that ESP study: The problem of overestimates and the shrinkage solution

2 0.42127976 576 andrew gelman stats-2011-02-15-With a bit of precognition, you’d have known I was going to post again on this topic, and with a lot of precognition, you’d have known I was going to post today

Introduction: Chris Masse points me to this response by Daryl Bem and two statisticians (Jessica Utts and Wesley Johnson) to criticisms by Wagenmakers et al. of Bem’s recent ESP study. I have nothing to add but would like to repeat a couple bits of my discussions of last month, of here: Classical statistical methods that work reasonably well when studying moderate or large effects (see the work of Fisher, Snedecor, Cochran, etc.) fall apart in the presence of small effects. I think it’s naive when people implicitly assume that the study’s claims are correct, or the study’s statistical methods are weak. Generally, the smaller the effects you’re studying, the better the statistics you need. ESP is a field of small effects and so ESP researchers use high-quality statistics. To put it another way: whatever methodological errors happen to be in the paper in question, probably occur in lots of research papers in “legitimate” psychology research. The difference is that when you’re studying a

3 0.3376331 506 andrew gelman stats-2011-01-06-That silly ESP paper and some silliness in a rebuttal as well

Introduction: John Talbott points me to this, which I briefly mocked a couple months ago. I largely agree with the critics of this research, but I want to reiterate my point from earlier that all the statistical sophistication in the world won’t help you if you’re studying a null effect. This is not to say that the actual effect is zero—who am I to say?—just that the comments about the high-quality statistics in the article don’t say much to me. There’s lots of discussion of the lack of science underlying ESP claims. I can’t offer anything useful on that account (not being a psychologist, I could imagine all sorts of stories about brain waves or whatever), but I would like to point out something that usually doesn’t seem to get mentioned in these discussions, which is that lots of people want to believe in ESP. After all, it would be cool to read minds. (It wouldn’t be so cool, maybe, if other people could read your mind and you couldn’t read theirs, but I suspect most people don’t think

4 0.20141262 899 andrew gelman stats-2011-09-10-The statistical significance filter

Introduction: I’ve talked about this a bit but it’s never had its own blog entry (until now). Statistically significant findings tend to overestimate the magnitude of effects. This holds in general (because E(|x|) > |E(x)|) but even more so if you restrict to statistically significant results. Here’s an example. Suppose a true effect of theta is unbiasedly estimated by y ~ N(theta, 1). Further suppose that we will only consider statistically significant results, that is, cases in which |y| > 2. The estimate “|y| conditional on |y| > 2” is clearly an overestimate of |theta|. First off, if |theta| < 2, the estimate |y| conditional on statistical significance is not only too high in expectation, it's always too high. This is a problem, given that |theta| in reality is probably less than 2. (The low-hanging fruit have already been picked, remember?) But even if |theta| > 2, the estimate |y| conditional on statistical significance will still be too high in expectation. For a discussion o
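The claim in this excerpt is easy to check with a quick simulation (mine, not from the linked post): even when |theta| is larger than 2, the magnitude that survives the significance filter is too big on average.

```python
import numpy as np

rng = np.random.default_rng(1)
for theta in (0.5, 1.0, 2.0, 3.0):
    y = rng.normal(theta, 1.0, size=1_000_000)   # unbiased estimates, y ~ N(theta, 1)
    kept = np.abs(y) > 2                          # the statistical significance filter
    print(f"theta = {theta}: mean |y| given |y| > 2 is {np.abs(y[kept]).mean():.2f}")
```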

5 0.1844883 643 andrew gelman stats-2011-04-02-So-called Bayesian hypothesis testing is just as bad as regular hypothesis testing

Introduction: Steve Ziliak points me to this article by the always-excellent Carl Bialik, slamming hypothesis tests. I only wish Carl had talked with me before so hastily posting, though! I would’ve argued with some of the things in the article. In particular, he writes: Reese and Brad Carlin . . . suggest that Bayesian statistics are a better alternative, because they tackle the probability that the hypothesis is true head-on, and incorporate prior knowledge about the variables involved. Brad Carlin does great work in theory, methods, and applications, and I like the bit about the prior knowledge (although I might prefer the more general phrase “additional information”), but I hate that quote! My quick response is that the hypothesis of zero effect is almost never true! The problem with the significance testing framework–Bayesian or otherwise–is in the obsession with the possibility of an exact zero effect. The real concern is not with zero, it’s with claiming a positive effect whe

6 0.18129332 1695 andrew gelman stats-2013-01-28-Economists argue about Bayes

7 0.15580943 1944 andrew gelman stats-2013-07-18-You’ll get a high Type S error rate if you use classical statistical methods to analyze data from underpowered studies

8 0.15430421 1435 andrew gelman stats-2012-07-30-Retracted articles and unethical behavior in economics journals?

9 0.15136886 466 andrew gelman stats-2010-12-13-“The truth wears off: Is there something wrong with the scientific method?”

10 0.15081114 762 andrew gelman stats-2011-06-13-How should journals handle replication studies?

11 0.15068299 1998 andrew gelman stats-2013-08-25-A new Bem theory

12 0.15044677 1139 andrew gelman stats-2012-01-26-Suggested resolution of the Bem paradox

13 0.14841069 1974 andrew gelman stats-2013-08-08-Statistical significance and the dangerous lure of certainty

14 0.14748621 1869 andrew gelman stats-2013-05-24-In which I side with Neyman over Fisher

15 0.14561628 1137 andrew gelman stats-2012-01-24-Difficulties in publishing non-replications of implausible findings

16 0.14223535 256 andrew gelman stats-2010-09-04-Noooooooooooooooooooooooooooooooooooooooooooooooo!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

17 0.14141624 1575 andrew gelman stats-2012-11-12-Thinking like a statistician (continuously) rather than like a civilian (discretely)

18 0.14132658 1273 andrew gelman stats-2012-04-20-Proposals for alternative review systems for scientific work

19 0.138721 2245 andrew gelman stats-2014-03-12-More on publishing in journals

20 0.13712157 1605 andrew gelman stats-2012-12-04-Write This Book


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.31), (1, 0.012), (2, -0.007), (3, -0.195), (4, -0.065), (5, -0.078), (6, -0.026), (7, 0.01), (8, -0.024), (9, -0.063), (10, -0.036), (11, -0.0), (12, 0.065), (13, -0.078), (14, 0.062), (15, -0.009), (16, -0.056), (17, 0.005), (18, -0.028), (19, -0.011), (20, -0.017), (21, 0.0), (22, 0.002), (23, 0.005), (24, -0.031), (25, -0.052), (26, -0.055), (27, 0.032), (28, -0.037), (29, -0.061), (30, 0.04), (31, -0.0), (32, 0.016), (33, -0.012), (34, -0.026), (35, -0.014), (36, -0.001), (37, -0.041), (38, -0.071), (39, -0.002), (40, -0.013), (41, 0.097), (42, -0.007), (43, 0.057), (44, 0.012), (45, -0.022), (46, -0.007), (47, 0.018), (48, -0.0), (49, 0.016)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.97093612 511 andrew gelman stats-2011-01-11-One more time on that ESP study: The problem of overestimates and the shrinkage solution

2 0.93686581 576 andrew gelman stats-2011-02-15-With a bit of precognition, you’d have known I was going to post again on this topic, and with a lot of precognition, you’d have known I was going to post today

3 0.87066936 506 andrew gelman stats-2011-01-06-That silly ESP paper and some silliness in a rebuttal as well

4 0.85880363 897 andrew gelman stats-2011-09-09-The difference between significant and not significant…

Introduction: E. J. Wagenmakers writes: You may be interested in a recent article [by Nieuwenhuis, Forstmann, and Wagenmakers] showing how often researchers draw conclusions by comparing p-values. As you and Hal Stern have pointed out, this is potentially misleading because the difference between significant and not significant is not necessarily significant. We were really surprised to see how often researchers in the neurosciences make this mistake. In the paper we speculate a little bit on the cause of the error. From their paper: In theory, a comparison of two experimental effects requires a statistical test on their difference. In practice, this comparison is often based on an incorrect procedure involving two separate tests in which researchers conclude that effects differ when one effect is significant (P < 0.05) but the other is not (P > 0.05). We reviewed 513 behavioral, systems and cognitive neuroscience articles in five top-ranking journals (Science, Nature, Nature Neuroscien
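The mistake described here is easy to reproduce with two made-up estimates (a sketch of mine, not from the Nieuwenhuis, Forstmann, and Wagenmakers paper): one effect is individually significant and the other is not, yet a direct test of their difference gives essentially no evidence that the two effects differ.

```python
import math

est_a, se_a = 0.25, 0.10   # z = 2.5, p < 0.05: "significant"
est_b, se_b = 0.10, 0.10   # z = 1.0: "not significant"

# The correct comparison is a test on the difference (assuming independent estimates).
diff = est_a - est_b
se_diff = math.sqrt(se_a**2 + se_b**2)
print(f"z for the difference: {diff / se_diff:.2f}")   # about 1.06, far from significant
```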

5 0.84486902 898 andrew gelman stats-2011-09-10-Fourteen magic words: an update

Introduction: In the discussion of the fourteen magic words that can increase voter turnout by over 10 percentage points, questions were raised about the methods used to estimate the experimental effects. I sent these on to Chris Bryan, the author of the study, and he gave the following response: We’re happy to address the questions that have come up. It’s always noteworthy when a precise psychological manipulation like this one generates a large effect on a meaningful outcome. Such findings illustrate the power of the underlying psychological process. I’ve provided the contingency tables for the two turnout experiments below. As indicated in the paper, the data are analyzed using logistic regressions. The change in chi-squared statistic represents the significance of the noun vs. verb condition variable in predicting turnout; that is, the change in the model’s significance when the condition variable is added. This is a standard way to analyze dichotomous outcomes. Four outliers were excl

6 0.83638972 1883 andrew gelman stats-2013-06-04-Interrogating p-values

7 0.8307364 1171 andrew gelman stats-2012-02-16-“False-positive psychology”

8 0.83069897 2040 andrew gelman stats-2013-09-26-Difficulties in making inferences about scientific truth from distributions of published p-values

9 0.83036244 2223 andrew gelman stats-2014-02-24-“Edlin’s rule” for routinely scaling down published estimates

10 0.82797182 466 andrew gelman stats-2010-12-13-“The truth wears off: Is there something wrong with the scientific method?”

11 0.82281691 1400 andrew gelman stats-2012-06-29-Decline Effect in Linguistics?

12 0.82039213 1944 andrew gelman stats-2013-07-18-You’ll get a high Type S error rate if you use classical statistical methods to analyze data from underpowered studies

13 0.80623209 2241 andrew gelman stats-2014-03-10-Preregistration: what’s in it for you?

14 0.80466479 1963 andrew gelman stats-2013-07-31-Response by Jessica Tracy and Alec Beall to my critique of the methods in their paper, “Women Are More Likely to Wear Red or Pink at Peak Fertility”

15 0.80283934 2093 andrew gelman stats-2013-11-07-I’m negative on the expression “false positives”

16 0.8027547 643 andrew gelman stats-2011-04-02-So-called Bayesian hypothesis testing is just as bad as regular hypothesis testing

17 0.79936039 2183 andrew gelman stats-2014-01-23-Discussion on preregistration of research studies

18 0.79691952 2174 andrew gelman stats-2014-01-17-How to think about the statistical evidence when the statistical evidence can’t be conclusive?

19 0.79336214 2090 andrew gelman stats-2013-11-05-How much do we trust a new claim that early childhood stimulation raised earnings by 42%?

20 0.79233301 1746 andrew gelman stats-2013-03-02-Fishing for cherries


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(9, 0.025), (15, 0.075), (16, 0.066), (18, 0.011), (21, 0.035), (24, 0.257), (27, 0.012), (35, 0.013), (55, 0.013), (86, 0.02), (95, 0.027), (97, 0.015), (99, 0.282)]

similar blogs list:

simIndex simValue blogId blogTitle

1 0.98092163 1240 andrew gelman stats-2012-04-02-Blogads update

Introduction: A few months ago I reported on someone who wanted to insert text links into the blog. I asked her how much they would pay and got no answer. Yesterday, though, I received this reply: Hello Andrew, I am sorry for the delay in getting back to you. I’d like to make a proposal for your site. Please refer below. We would like to place a simple text link ad on page http://andrewgelman.com/2011/07/super_sam_fuld/ to link to *** with the key phrase ***. We will incorporate the key phrase into a sentence so it would read well. Rest assured it won’t sound obnoxious or advertorial. We will then process the final text link code as soon as you agree to our proposal. We can offer you $200 for this with the assumption that you will keep the link “live” on that page for 12 months or longer if you prefer. Please get back to us with a quick reply on your thoughts on this and include your Paypal ID for payment process. Hoping for a positive response from you. I wrote back: Hi,

2 0.97454023 1080 andrew gelman stats-2011-12-24-Latest in blog advertising

Introduction: I received the following message from “Patricia Lopez” of “Premium Link Ads”: Hello, I am interested in placing a text link on your page: http://andrewgelman.com/2011/07/super_sam_fuld/. The link would point to a page on a website that is relevant to your page and may be useful to your site visitors. We would be happy to compensate you for your time if it is something we are able to work out. The best way to reach me is through a direct response to this email. This will help me get back to you about the right link request. Please let me know if you are interested, and if not thanks for your time. Thanks. Usually I just ignore these, but after our recent discussion I decided to reply. I wrote: How much do you pay? But no answer. I wonder what’s going on? I mean, why bother sending the email in the first place if you’re not going to follow up?

3 0.97432733 847 andrew gelman stats-2011-08-10-Using a “pure infographic” to explore differences between information visualization and statistical graphics

Introduction: Our discussion on data visualization continues. On one side are three statisticians–Antony Unwin, Kaiser Fung, and myself. We have been writing about the different goals served by information visualization and statistical graphics. On the other side are graphics experts (sorry for the imprecision, I don’t know exactly what these people do in their day jobs or how they are trained, and I don’t want to mislabel them) such as Robert Kosara and Jen Lowe, who seem a bit annoyed at how my colleagues and myself seem to follow the Tufte strategy of criticizing what we don’t understand. And on the third side are many (most?) academic statisticians, econometricians, etc., who don’t understand or respect graphs and seem to think of visualization as a toy that is unrelated to serious science or statistics. I’m not so interested in the third group right now (I tried to communicate with them in my big articles from 2003 and 2004)–but I am concerned that our dialogue with the graphic

4 0.97413385 1087 andrew gelman stats-2011-12-27-“Keeping things unridiculous”: Berger, O’Hagan, and me on weakly informative priors

Introduction: Deborah Mayo sent me this quote from Jim Berger: Too often I see people pretending to be subjectivists, and then using “weakly informative” priors that the objective Bayesian community knows are terrible and will give ridiculous answers; subjectivism is then being used as a shield to hide ignorance. . . . In my own more provocative moments, I claim that the only true subjectivists are the objective Bayesians, because they refuse to use subjectivism as a shield against criticism of sloppy pseudo-Bayesian practice. This caught my attention because I’ve become more and more convinced that weakly informative priors are the right way to go in many different situations. I don’t think Berger was talking about me, though, as the above quote came from a publication in 2006, at which time I’d only started writing about weakly informative priors. Going back to Berger’s article, I see that his “weakly informative priors” remark was aimed at this article by Anthony O’Hagan, who w

5 0.97320378 1713 andrew gelman stats-2013-02-08-P-values and statistical practice

Introduction: From my new article in the journal Epidemiology: Sander Greenland and Charles Poole accept that P values are here to stay but recognize that some of their most common interpretations have problems. The casual view of the P value as posterior probability of the truth of the null hypothesis is false and not even close to valid under any reasonable model, yet this misunderstanding persists even in high-stakes settings (as discussed, for example, by Greenland in 2011). The formal view of the P value as a probability conditional on the null is mathematically correct but typically irrelevant to research goals (hence, the popularity of alternative—if wrong—interpretations). A Bayesian interpretation based on a spike-and-slab model makes little sense in applied contexts in epidemiology, political science, and other fields in which true effects are typically nonzero and bounded (thus violating both the “spike” and the “slab” parts of the model). I find Greenland and Poole’s perspective t

6 0.9727084 1838 andrew gelman stats-2013-05-03-Setting aside the politics, the debate over the new health-care study reveals that we’re moving to a new high standard of statistical journalism

7 0.97212398 197 andrew gelman stats-2010-08-10-The last great essayist?

8 0.97177601 1355 andrew gelman stats-2012-05-31-Lindley’s paradox

9 0.97101504 2109 andrew gelman stats-2013-11-21-Hidden dangers of noninformative priors

10 0.97083056 2029 andrew gelman stats-2013-09-18-Understanding posterior p-values

11 0.97025484 2312 andrew gelman stats-2014-04-29-Ken Rice presents a unifying approach to statistical inference and hypothesis testing

12 0.97012424 2149 andrew gelman stats-2013-12-26-Statistical evidence for revised standards

13 0.97008967 1208 andrew gelman stats-2012-03-11-Gelman on Hennig on Gelman on Bayes

14 0.97002357 1757 andrew gelman stats-2013-03-11-My problem with the Lindley paradox

15 0.96997499 576 andrew gelman stats-2011-02-15-With a bit of precognition, you’d have known I was going to post again on this topic, and with a lot of precognition, you’d have known I was going to post today

16 0.96994072 1062 andrew gelman stats-2011-12-16-Mr. Pearson, meet Mr. Mandelbrot: Detecting Novel Associations in Large Data Sets

same-blog 17 0.96959364 511 andrew gelman stats-2011-01-11-One more time on that ESP study: The problem of overestimates and the shrinkage solution

18 0.96921951 2208 andrew gelman stats-2014-02-12-How to think about “identifiability” in Bayesian inference?

19 0.96872532 953 andrew gelman stats-2011-10-11-Steve Jobs’s cancer and science-based medicine

20 0.96818423 1155 andrew gelman stats-2012-02-05-What is a prior distribution?