andrew_gelman_stats-2014-2223 knowledge-graph by maker-knowledge-mining

2223 andrew gelman stats-2014-02-24-“Edlin’s rule” for routinely scaling down published estimates


meta info for this blog

Source: html

Introduction: A few months ago I reacted (see further discussion in comments here ) to a recent study on early childhood intervention, in which researchers Paul Gertler, James Heckman, Rodrigo Pinto, Arianna Zanolini, Christel Vermeerch, Susan Walker, Susan M. Chang, and Sally Grantham-McGregor estimated that a particular intervention on young children had raised the incomes of young adults by 42%. I wrote: Major decisions on education policy can turn on the statistical interpretation of small, idiosyncratic data sets — in this case, a study of 129 Jamaican children. . . . Overall, I have no reason to doubt the direction of the effect, namely, that psychosocial stimulation should be good. But I’m skeptical of the claim that income differed by 42%, because of the statistical significance filter . In section 2.3, the authors are doing lots of hypothesizing based on some comparisons being statistically significant and others being non-significant. There’s nothing wrong with speculation, b
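The statistical significance filter is easy to see in a small simulation. Below is a minimal R sketch (my own illustration, not code from the post, with made-up values for the true effect and standard error): an unbiased estimate y ~ N(theta, se) is “published” only when |y/se| > 2, and the published estimates then systematically overstate |theta|.

# Statistical significance filter: a minimal simulation sketch.
# Assumptions (illustrative, not from the post): true effect theta,
# unbiased estimate y ~ N(theta, se), publication only when |z| > 2.
set.seed(2223)
filter_demo <- function(theta, se = 1, n_sims = 1e6) {
  y   <- rnorm(n_sims, mean = theta, sd = se)
  sig <- abs(y / se) > 2                     # the significance filter
  c(true_effect      = theta,
    mean_published   = mean(abs(y[sig])),    # E(|y| given significance)
    exaggeration     = mean(abs(y[sig])) / abs(theta),
    wrong_sign_share = mean(sign(y[sig]) != sign(theta)))
}
filter_demo(theta = 0.5)   # modest true effect: published estimates run ~5 times too large
filter_demo(theta = 0.02)  # near-zero effect: exaggeration by a factor of ~100

The second call matches the beautiful-parents example discussed below: when the true effect is close to zero, the statistically significant estimates are off by orders of magnitude and are about as likely as not to point in the wrong direction.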


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 A few months ago I reacted (see further discussion in comments here ) to a recent study on early childhood intervention, in which researchers Paul Gertler, James Heckman, Rodrigo Pinto, Arianna Zanolini, Christel Vermeerch, Susan Walker, Susan M. [sent-1, score-0.362]

2 Chang, and Sally Grantham-McGregor estimated that a particular intervention on young children had raised incomes on young adults by 42%. [sent-2, score-0.402]

3 It takes a point estimate and standard error for a significant effect (do you need the sample size too?) and divides the point estimate by something and multiplies the standard error by something to get a posterior under the new principle of ignorance. [sent-13, score-0.591] (A minimal sketch of this kind of scaling appears after this list.)

4 That is, suppose you don’t start with priors but I just tell you someone studied something and published a study saying something. [sent-14, score-0.292]

5 If my article had been published in the New York Review of Books rather than Symposium magazine, maybe things would be different, but, then again, I doubt the New York Review of Books would be particularly interested in someone expressing skepticism on early childhood intervention. [sent-24, score-0.572]

6 For example, here’s Charles Murray: The most famous evidence on behalf of early childhood intervention comes from the programs that Heckman describes, Perry Preschool and the Abecedarian Project. [sent-39, score-0.508]

7 In both cases the people who ran the program were also deeply involved in collecting and coding the evaluation data, and they were passionate advocates of early childhood intervention. [sent-42, score-0.351]

8 Given all the reasons above for suspecting that published results are overestimates, I’d guess that, in a fully controlled study with a preregistered analysis, the results for this new study would be less impressive than in the earlier published results. [sent-53, score-0.75]

9 Heckman sees the earlier published results as a lower bound because he sees improvements in the interventions (which makes sense; people are working on these interventions and they should be getting better). [sent-55, score-0.654]

10 I see the published results as an overestimate (but not an “upper bound,” because any estimate based on only 111 kids has got to be too noisy to be considered a “bound” in any sense) based on my generic understanding of open-ended statistical analyses. [sent-56, score-0.466]

11 That would be an Edlin factor of 1/2 (or, as the economists would say, “an elasticity of 0.5”). [sent-60, score-0.393]

12 But it doesn’t seem that any single scale-down factor would work. [sent-62, score-0.341]

13 For example, when somebody-or-another published the claim that beautiful parents were 36% more likely to have girls, I’m pretty sure this was an overestimate of at least a factor of 100 (as well as being just about as likely or not to be in the wrong direction). [sent-63, score-0.489]

14 Using an Edlin factor of 1/2 in that case would be taking the claim way too seriously. [sent-64, score-0.341]

15 On the other hand, I’m pretty sure that, if we routinely scaled all published estimates by 1/2, we’d be a lot closer to the truth than we are now, using a default Edlin factor of 1. [sent-65, score-0.55]

16 Here’s another example where it would’ve helped to have an Edlin factor right from the start. [sent-66, score-0.289]

17 Scale-down factor is kind of ok but I think we could come up with something better. [sent-72, score-0.353]

18 I sent the above to Edlin himself, who wrote: Your refusal to name an Edlin factor seems decidedly non-Bayesian and a bit classical to me. [sent-77, score-0.365]

19 I think an Edlin factor of 1/5 to 1/2 is probably my best guess for that Jamaica intervention example. [sent-86, score-0.505]

20 But in other cases I’d give an Edlin factor of something like 1/100. [sent-87, score-0.353]
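Sentence 3 above describes the mechanics only loosely (divide the point estimate by something, multiply the standard error by something), and the post never commits to a single formula. The following R sketch is just one plausible reading of the rule, with a hypothetical standard error attached to the Jamaica estimate for illustration:

# One plausible reading of an "Edlin factor" adjustment (the post gives
# no exact formula): shrink the point estimate by the chosen factor and,
# optionally, inflate the standard error to widen the interval.
edlin_scale <- function(estimate, se, factor = 0.5, se_inflate = 1) {
  est_new <- factor * estimate
  se_new  <- se_inflate * se
  c(estimate = est_new,
    lower    = est_new - 2 * se_new,   # rough 95% interval
    upper    = est_new + 2 * se_new)
}
# Jamaica example: a published 42% income gain. The se = 15 is hypothetical
# (the post reports none); 1/2 and 1/5 are the factors suggested at the end.
edlin_scale(estimate = 42, se = 15, factor = 1/2)
edlin_scale(estimate = 42, se = 15, factor = 1/5)

With a default factor of 1/2 the headline 42% effect drops to 21%, and with 1/5 to about 8%, which is the sense in which routinely scaling down published estimates would move them closer to the truth.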


similar blogs computed by the tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('edlin', 0.417), ('factor', 0.289), ('heckman', 0.223), ('abecedarian', 0.207), ('educare', 0.207), ('childhood', 0.168), ('intervention', 0.156), ('iron', 0.155), ('gertler', 0.141), ('published', 0.136), ('aaron', 0.135), ('early', 0.102), ('interventions', 0.1), ('bound', 0.095), ('children', 0.093), ('study', 0.092), ('results', 0.091), ('preschool', 0.089), ('programs', 0.082), ('program', 0.081), ('thumb', 0.08), ('perry', 0.08), ('name', 0.076), ('overestimates', 0.074), ('susan', 0.072), ('effect', 0.07), ('law', 0.066), ('replied', 0.066), ('rule', 0.066), ('murray', 0.066), ('sees', 0.066), ('estimates', 0.065), ('overestimate', 0.064), ('something', 0.064), ('expressing', 0.062), ('filter', 0.062), ('universal', 0.062), ('direction', 0.061), ('guess', 0.06), ('routinely', 0.06), ('standard', 0.059), ('estimate', 0.059), ('based', 0.058), ('size', 0.057), ('improvement', 0.055), ('would', 0.052), ('young', 0.052), ('group', 0.051), ('error', 0.05), ('particular', 0.049)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.99999917 2223 andrew gelman stats-2014-02-24-“Edlin’s rule” for routinely scaling down published estimates


2 0.2941286 2090 andrew gelman stats-2013-11-05-How much do we trust a new claim that early childhood stimulation raised earnings by 42%?

Introduction: Hal Pashler wrote in about a recent paper , “Labor Market Returns to Early Childhood Stimulation: a 20-year Followup to an Experimental Intervention in Jamaica,” by Paul Gertler, James Heckman, Rodrigo Pinto, Arianna Zanolini, Christel Vermeerch, Susan Walker, Susan M. Chang, and Sally Grantham-McGregor. Here’s Pashler: Dan Willingham tweeted: @DTWillingham: RCT from Jamaica: Big effects 20 years later of intervention—teaching parenting/child stimulation to moms in poverty http://t.co/rX6904zxvN Browsing pp. 4 ff, it seems the authors are basically saying “hey the stats were challenging, the sample size tiny, other problems, but we solved them all—using innovative methods of our own devising!—and lo and behold, big positive results!”. So this made me think (and tweet) basically that I hope the topic (which is pretty important) will happen to interest Andy Gelman enough to incline him to give us his take. If you happen to have time and interest… My reply became this artic

3 0.17248061 1962 andrew gelman stats-2013-07-30-The Roy causal model?

Introduction: A link from Simon Jackman’s blog led me to an article by James Heckman, Hedibert Lopes, and Remi Piatek from 2011, “Treatment effects: A Bayesian perspective.” I was pleasantly surprised to see this, partly because I didn’t know that Heckman was working on Bayesian methods, and partly because the paper explicitly refers to the “potential outcomes model,” a term I associate with Don Rubin. I’ve had the impression that Heckman and Rubin don’t like each other (I was a student of Rubin and have never met Heckman, so I’m only speaking at second hand here), so I was happy to see some convergence. I was curious how Heckman et al. would source the potential outcome model. They do not refer to Rubin’s 1974 paper or to Neyman’s 1923 paper (which was republished in 1990 and is now taken to be the founding document of the Neyman-Rubin approach to causal inference). Nor, for that matter, do Heckman et al. refer to the more recent developments of these theories by Robins, Pearl, and other

4 0.16067484 2140 andrew gelman stats-2013-12-19-Revised evidence for statistical standards

Introduction: X and I heard about this much-publicized recent paper by Val Johnson, who suggests changing the default level of statistical significance from z=2 to z=3 (or, as he puts it, going from p=.05 to p=.005 or .001). Val argues that you need to go out to 3 standard errors to get a Bayes factor of 25 or 50 in favor of the alternative hypothesis. I don’t really buy this, first because Val’s model is a weird (to me) mixture of two point masses, which he creates in order to make a minimax argument, and second because I don’t see why you need a Bayes factor of 25 to 50 in order to make a claim. I’d think that a factor of 5:1, say, provides strong information already—if you really believe those odds. The real issue, as I see it, is that we’re getting Bayes factors and posterior probabilities we don’t believe, because we’re assuming flat priors that don’t really make sense. This is a topic that’s come up over and over in recent months on this blog, for example in this discussion of why I d

5 0.15919228 1145 andrew gelman stats-2012-01-30-A tax on inequality, or a tax to keep inequality at the current level?

Introduction: My sometime coauthor Aaron Edlin cowrote (with Ian Ayres) an op-ed recommending a clever approach to taxing the rich. In their article they employ a charming bit of economics jargon, using the word “earn” to mean “how much money you make.” They “propose an automatic extra tax on the income of the top 1 percent of earners.” I assume their tax would apply to unearned income as well, but they (or their editor at the Times) are just so used to describing income as “earnings” that they just threw that in. Funny. Also, there’s a part of the article that doesn’t make sense to me. Ayres and Edlin first describe the level of inequality: In 1980 the average 1-percenter made 12.5 times the median income, but in 2006 (the latest year for which data is available) the average income of our richest 1 percent was a whopping 36 times greater than that of the median household. Then they lay out their solution: Enough is enough. . . . we propose an automatic extra tax on the income

6 0.1359109 511 andrew gelman stats-2011-01-11-One more time on that ESP study: The problem of overestimates and the shrinkage solution

7 0.13159211 1695 andrew gelman stats-2013-01-28-Economists argue about Bayes

8 0.12371353 2222 andrew gelman stats-2014-02-24-On deck this week

9 0.12142076 1941 andrew gelman stats-2013-07-16-Priors

10 0.11831102 1955 andrew gelman stats-2013-07-25-Bayes-respecting experimental design and other things

11 0.11816105 2033 andrew gelman stats-2013-09-23-More on Bayesian methods and multilevel modeling

12 0.11735348 1435 andrew gelman stats-2012-07-30-Retracted articles and unethical behavior in economics journals?

13 0.11725619 851 andrew gelman stats-2011-08-12-year + (1|year)

14 0.11413646 1586 andrew gelman stats-2012-11-21-Readings for a two-week segment on Bayesian modeling?

15 0.11391481 2220 andrew gelman stats-2014-02-22-Quickies

16 0.11333941 803 andrew gelman stats-2011-07-14-Subtleties with measurement-error models for the evaluation of wacky claims

17 0.11304613 2134 andrew gelman stats-2013-12-14-Oswald evidence

18 0.11137833 1575 andrew gelman stats-2012-11-12-Thinking like a statistician (continuously) rather than like a civilian (discretely)

19 0.11080459 1910 andrew gelman stats-2013-06-22-Struggles over the criticism of the “cannabis users and IQ change” paper

20 0.11079735 797 andrew gelman stats-2011-07-11-How do we evaluate a new and wacky claim?


similar blogs computed by the lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.262), (1, 0.003), (2, 0.059), (3, -0.123), (4, -0.02), (5, -0.047), (6, 0.057), (7, 0.014), (8, -0.025), (9, -0.012), (10, -0.023), (11, 0.004), (12, 0.03), (13, -0.016), (14, 0.067), (15, 0.024), (16, 0.015), (17, 0.018), (18, 0.01), (19, 0.011), (20, -0.015), (21, -0.0), (22, 0.012), (23, -0.003), (24, -0.012), (25, -0.008), (26, -0.028), (27, 0.019), (28, -0.007), (29, -0.057), (30, -0.025), (31, 0.002), (32, -0.009), (33, 0.013), (34, 0.047), (35, 0.029), (36, -0.029), (37, 0.018), (38, -0.012), (39, -0.036), (40, -0.017), (41, 0.014), (42, -0.029), (43, 0.008), (44, 0.032), (45, -0.033), (46, -0.0), (47, -0.015), (48, -0.006), (49, 0.002)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.97549963 2223 andrew gelman stats-2014-02-24-“Edlin’s rule” for routinely scaling down published estimates


2 0.85706013 2090 andrew gelman stats-2013-11-05-How much do we trust a new claim that early childhood stimulation raised earnings by 42%?


3 0.85674149 1910 andrew gelman stats-2013-06-22-Struggles over the criticism of the “cannabis users and IQ change” paper

Introduction: Ole Rogeberg points me to a discussion of a discussion of a paper: Did pre-release of my [Rogeberg's] PNAS paper on methodological problems with Meier et al’s 2012 paper on cannabis and IQ reduce the chances that it will have its intended effect? In my case, serious methodological issues related to causal inference from non-random observational data became framed as a conflict over conclusions, forcing the original research team to respond rapidly and insufficiently to my concerns, and prompting them to defend their conclusions and original paper in a way that makes a later, more comprehensive reanalysis of their data less likely. This fits with a recurring theme on this blog: the defensiveness of researchers who don’t want to admit they were wrong. Setting aside cases of outright fraud and plagiarism, I think the worst case remains that of psychologists Neil Anderson and Deniz Ones, who denied any problems even in the presence of a smoking gun of a graph revealing their data

4 0.85656017 2042 andrew gelman stats-2013-09-28-Difficulties of using statistical significance (or lack thereof) to sift through and compare research hypotheses

Introduction: Dean Eckles writes: Thought you might be interested in an example that touches on a couple recurring topics: 1. The difference between a statistically significant finding and one that is non-significant need not be itself statistically significant (thus highlighting the problems of using NHST to declare whether an effect exists or not). 2. Continued issues with the credibility of high profile studies of “social contagion”, especially by Christakis and Fowler . A new paper in Archives of Sexual Behavior produces observational estimates of peer effects in sexual behavior and same-sex attraction. In the text, the authors (who include C&F) make repeated comparisons of the results for peer effects in sexual intercourse and those for peer effects in same-sex attraction. However, the 95% CI for the latter actually includes the point estimate for the former! This is most clear in Figure 2, as highlighted by Real Clear Science’s blog post about the study. (Now because there is som

5 0.85263205 2156 andrew gelman stats-2014-01-01-“Though They May Be Unaware, Newlyweds Implicitly Know Whether Their Marriage Will Be Satisfying”

Introduction: Etienne LeBel writes: You’ve probably already seen it, but I thought you could have a lot of fun with this one!! The article , with the admirably clear title given above, is by James McNulty, Michael Olson, Andrea Meltzer, Matthew Shaffer, and begins as follows: For decades, social psychological theories have posited that the automatic processes captured by implicit measures have implications for social outcomes. Yet few studies have demonstrated any long-term implications of automatic processes, and some scholars have begun to question the relevance and even the validity of these theories. At baseline of our longitudinal study, 135 newlywed couples (270 individuals) completed an explicit measure of their conscious attitudes toward their relationship and an implicit measure of their automatic attitudes toward their partner. They then reported their marital satisfaction every 6 months for the next 4 years. We found no correlation between spouses’ automatic and conscious attitu

6 0.84390253 797 andrew gelman stats-2011-07-11-How do we evaluate a new and wacky claim?

7 0.84374082 1963 andrew gelman stats-2013-07-31-Response by Jessica Tracy and Alec Beall to my critique of the methods in their paper, “Women Are More Likely to Wear Red or Pink at Peak Fertility”

8 0.84099716 2030 andrew gelman stats-2013-09-19-Is coffee a killer? I don’t think the effect is as high as was estimated from the highest number that came out of a noisy study

9 0.84022558 706 andrew gelman stats-2011-05-11-The happiness gene: My bottom line (for now)

10 0.83213413 1893 andrew gelman stats-2013-06-11-Folic acid and autism

11 0.83180982 1838 andrew gelman stats-2013-05-03-Setting aside the politics, the debate over the new health-care study reveals that we’re moving to a new high standard of statistical journalism

12 0.83077437 511 andrew gelman stats-2011-01-11-One more time on that ESP study: The problem of overestimates and the shrinkage solution

13 0.82479137 1876 andrew gelman stats-2013-05-29-Another one of those “Psychological Science” papers (this time on biceps size and political attitudes among college students)

14 0.82213473 1114 andrew gelman stats-2012-01-12-Controversy about average personality differences between men and women

15 0.81949878 2114 andrew gelman stats-2013-11-26-“Please make fun of this claim”

16 0.8178311 2174 andrew gelman stats-2014-01-17-How to think about the statistical evidence when the statistical evidence can’t be conclusive?

17 0.81641889 2159 andrew gelman stats-2014-01-04-“Dogs are sensitive to small variations of the Earth’s magnetic field”

18 0.81341207 1364 andrew gelman stats-2012-06-04-Massive confusion about a study that purports to show that exercise may increase heart risk

19 0.81296682 629 andrew gelman stats-2011-03-26-Is it plausible that 1% of people pick a career based on their first name?

20 0.81151754 1150 andrew gelman stats-2012-02-02-The inevitable problems with statistical significance and 95% intervals


similar blogs computed by the lda model

lda for this blog:

topicId topicWeight

[(15, 0.022), (16, 0.076), (24, 0.203), (30, 0.036), (44, 0.054), (53, 0.029), (66, 0.023), (81, 0.022), (82, 0.016), (86, 0.035), (96, 0.017), (98, 0.038), (99, 0.321)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.98213494 2223 andrew gelman stats-2014-02-24-“Edlin’s rule” for routinely scaling down published estimates


2 0.97834826 788 andrew gelman stats-2011-07-06-Early stopping and penalized likelihood

Introduction: Maximum likelihood gives the best fit to the training data but in general overfits, yielding overly-noisy parameter estimates that don’t perform so well when predicting new data. A popular solution to this overfitting problem takes advantage of the iterative nature of most maximum likelihood algorithms by stopping early. In general, an iterative optimization algorithm goes from a starting point to the maximum of some objective function. If the starting point has some good properties, then early stopping can work well, keeping some of the virtues of the starting point while respecting the data. This trick can be performed the other way, too, starting with the data and then processing it to move it toward a model. That’s how the iterative proportional fitting algorithm of Deming and Stephan (1940) works to fit multivariate categorical data to known margins. In any case, the trick is to stop at the right point–not so soon that you’re ignoring the data but not so late that you en

3 0.97708845 1117 andrew gelman stats-2012-01-13-What are the important issues in ethics and statistics? I’m looking for your input!

Introduction: I’ve recently started a regular column on ethics, appearing every three months in Chance magazine . My first column, “Open Data and Open Methods,” is here , and my second column, “Statisticians: When we teach, we don’t practice what we preach” (coauthored with Eric Loken) will be appearing in the next issue. Statistical ethics is a wide-open topic, and I’d be very interested in everyone’s thoughts, questions, and stories. I’d like to get beyond generic questions such as, Is it right to do a randomized trial when you think the treatment is probably better than the control?, and I’d also like to avoid the really easy questions such as, Is it ethical to copy Wikipedia entries and then sell the resulting publication for $2800 a year? [Note to people who are sick of hearing about this particular story: I'll consider stopping my blogging on it, the moment that the people involved consider apologizing for their behavior.] Please insert your thoughts, questions, stories, links, et

4 0.97299874 899 andrew gelman stats-2011-09-10-The statistical significance filter

Introduction: I’ve talked about this a bit but it’s never had its own blog entry (until now). Statistically significant findings tend to overestimate the magnitude of effects. This holds in general (because E(|x|) > |E(x)|) but even more so if you restrict to statistically significant results. Here’s an example. Suppose a true effect of theta is unbiasedly estimated by y ~ N (theta, 1). Further suppose that we will only consider statistically significant results, that is, cases in which |y| > 2. The estimate “|y| conditional on |y|>2” is clearly an overestimate of |theta|. First off, if |theta|<2, the estimate |y| conditional on statistical significance is not only too high in expectation, it’s always too high. This is a problem, given that |theta| in reality is probably less than 2. (The low-hanging fruit have already been picked, remember?) But even if |theta|>2, the estimate |y| conditional on statistical significance will still be too high in expectation. For a discussion o

5 0.9699533 2161 andrew gelman stats-2014-01-07-My recent debugging experience

Introduction: OK, so this sort of thing happens sometimes. I was working on a new idea (still working on it; if it ultimately works out—or if it doesn’t—I’ll let you know) and as part of it I was fitting little models in Stan, in a loop. I thought it would make sense to start with linear regression with normal priors and known data variance, because then the exact solution is Gaussian and I can also work with the problem analytically. So I programmed up the algorithm and, no surprise, it didn’t work. I went through my R code, put in print statements here and there, and cleared out bug after bug until at least it stopped crashing. But the algorithm still wasn’t doing what it was supposed to do. So I decided to do something simpler, and just check that the Stan linear regression gave the same answer as the analytic posterior distribution: I ran Stan for tons of iterations, then computed the sample mean and variance of the simulations. It was an example with two coefficients—I’d originally cho

6 0.96952021 2149 andrew gelman stats-2013-12-26-Statistical evidence for revised standards

7 0.96927857 2210 andrew gelman stats-2014-02-13-Stopping rules and Bayesian analysis

8 0.96900946 2089 andrew gelman stats-2013-11-04-Shlemiel the Software Developer and Unknown Unknowns

9 0.96889597 427 andrew gelman stats-2010-11-23-Bayesian adaptive methods for clinical trials

10 0.9688704 1695 andrew gelman stats-2013-01-28-Economists argue about Bayes

11 0.96840203 2140 andrew gelman stats-2013-12-19-Revised evidence for statistical standards

12 0.96832132 1763 andrew gelman stats-2013-03-14-Everyone’s trading bias for variance at some point, it’s just done at different places in the analyses

13 0.96816921 2109 andrew gelman stats-2013-11-21-Hidden dangers of noninformative priors

14 0.96783817 1644 andrew gelman stats-2012-12-30-Fixed effects, followed by Bayes shrinkage?

15 0.96773124 247 andrew gelman stats-2010-09-01-How does Bayes do it?

16 0.96770954 2040 andrew gelman stats-2013-09-26-Difficulties in making inferences about scientific truth from distributions of published p-values

17 0.96739495 2174 andrew gelman stats-2014-01-17-How to think about the statistical evidence when the statistical evidence can’t be conclusive?

18 0.96732044 669 andrew gelman stats-2011-04-19-The mysterious Gamma (1.4, 0.4)

19 0.96726823 970 andrew gelman stats-2011-10-24-Bell Labs

20 0.96693349 2305 andrew gelman stats-2014-04-25-Revised statistical standards for evidence (comments to Val Johnson’s comments on our comments on Val’s comments on p-values)