andrew_gelman_stats andrew_gelman_stats-2013 andrew_gelman_stats-2013-2090 knowledge-graph by maker-knowledge-mining

2090 andrew gelman stats-2013-11-05-How much do we trust a new claim that early childhood stimulation raised earnings by 42%?


meta info for this blog

Source: html

Introduction: Hal Pashler wrote in about a recent paper, “Labor Market Returns to Early Childhood Stimulation: a 20-year Followup to an Experimental Intervention in Jamaica,” by Paul Gertler, James Heckman, Rodrigo Pinto, Arianna Zanolini, Christel Vermeerch, Susan Walker, Susan M. Chang, and Sally Grantham-McGregor. Here’s Pashler: Dan Willingham tweeted: @DTWillingham: RCT from Jamaica: Big effects 20 years later of intervention—teaching parenting/child stimulation to moms in poverty http://t.co/rX6904zxvN Browsing pp. 4 ff, it seems the authors are basically saying “hey the stats were challenging, the sample size tiny, other problems, but we solved them all—using innovative methods of our own devising!—and lo and behold, big positive results!” So this made me think (and tweet) basically that I hope the topic (which is pretty important) will happen to interest Andy Gelman enough to incline him to give us his take. If you happen to have time and interest… My reply became this article at Symposium magazine.


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Here’s Pashler: Dan Willingham tweeted: @DTWillingham: RCT from Jamaica: Big effects 20 years later of intervention—teaching parenting/child stimulation to moms in poverty http://t. [sent-3, score-0.329]

2 4 ff, it seems the authors are basically saying “hey the stats were challenging, the sample size tiny, other problems, but we solved them all—using innovative methods of our own devising! [sent-5, score-0.161]

3 So this made me think (and tweet) basically that I hope the topic (which is pretty important) will happen to interest Andy Gelman enough to incline him to give us his take. [sent-8, score-0.163]

4 If you happen to have time and interest… My reply became this article at Symposium magazine. [sent-9, score-0.077]

5 In brief: The two key concerns seem to be: (1) very small sample size (thus, unless the effect is huge, it could get lost in the noise) and (2) correlation of the key outcome (earnings) with emigration. [sent-10, score-0.184]

6 And, as always in such settings, I’d like to see the raw comparison—what are these earnings, which, when averaged, differ by 42%? [sent-12, score-0.101]

7 I’d also like to see these data broken down by emigration status. [sent-13, score-0.184]

8 Once I have a handle on the raw comparisons, then I’d like to see how this fits into the regression analyses. [sent-15, score-0.101]

9 Overall I have no reason to doubt the direction of the effect—psychosocial stimulation should be good, right? [sent-16, score-0.249]

10 There they are doing lots of hypothesizing based on some comparisons being statistically significant and others being non-significant (at least, that’s what I think they meant when they wrote of “strong and lasting effects” in one case and “no long-term effect” in the other). [sent-19, score-0.239]

11 There’s nothing wrong with speculation but at some point you’re chasing noise and picking winners, which leads to overestimates of magnitudes of effects. [sent-20, score-0.317]

12 On the other hand, there aren’t a lot of good experimental studies out there, so it does seem like this one should inform policy in some way. [sent-35, score-0.159]

13 In short, we need to keep on thinking of ways to extract the useful information out of this study in a larger policy context. [sent-36, score-0.073]
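The sentScore column above comes from ranking the post's sentences by their tf-idf weight. A minimal sketch of that kind of extractive scoring in plain Python; the actual maker-knowledge-mining pipeline is not shown in this dump, so the tokenization and weighting conventions below are assumptions:

```python
import math
import re
from collections import Counter

def tfidf_sentence_scores(sentences):
    """Score each sentence by the mean tf-idf weight of its words,
    treating each sentence as a 'document' for the idf computation."""
    tokenized = [re.findall(r"[a-z']+", s.lower()) for s in sentences]
    n = len(tokenized)
    # document frequency: in how many sentences each word appears
    df = Counter(w for words in tokenized for w in set(words))
    scores = []
    for words in tokenized:
        tf = Counter(words)
        score = sum(tf[w] * math.log(n / df[w]) for w in tf)
        scores.append(score / max(len(words), 1))
    return scores

sentences = [
    "big effects of early childhood stimulation on earnings",
    "the sample size was tiny",
    "earnings differ by 42 percent between the groups",
]
scores = tfidf_sentence_scores(sentences)
top = max(range(len(sentences)), key=scores.__getitem__)
```

A real summarizer would compute idf over a whole corpus rather than over the sentences of one post, but the ranking logic is the same: sentences rich in rare, distinctive words score higher.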


similar blogs computed by the tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('earnings', 0.296), ('stimulation', 0.249), ('emigration', 0.184), ('gertler', 0.184), ('jamaica', 0.184), ('pashler', 0.166), ('heckman', 0.145), ('symposium', 0.145), ('susan', 0.14), ('intervention', 0.122), ('effect', 0.109), ('noise', 0.102), ('raw', 0.101), ('browsing', 0.092), ('unsatisfactory', 0.092), ('rct', 0.092), ('arianna', 0.092), ('chang', 0.092), ('christel', 0.092), ('pinto', 0.092), ('psychosocial', 0.092), ('rodrigo', 0.092), ('tweeted', 0.092), ('vermeerch', 0.092), ('zanolini', 0.092), ('children', 0.091), ('ff', 0.087), ('devising', 0.087), ('lo', 0.087), ('basically', 0.086), ('experimental', 0.086), ('hypothesizing', 0.083), ('sally', 0.083), ('moms', 0.08), ('groups', 0.08), ('comparisons', 0.078), ('averaged', 0.078), ('lasting', 0.078), ('happen', 0.077), ('walker', 0.076), ('size', 0.075), ('policy', 0.073), ('stratification', 0.072), ('matches', 0.072), ('overestimates', 0.072), ('debunk', 0.072), ('tweet', 0.072), ('chasing', 0.072), ('magnitudes', 0.071), ('followup', 0.071)]
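Each pair above is a word and its tf-idf weight for this post. A small sketch of how such weights are computed; smoothed idf is one common convention, and the dump does not say which variant its pipeline used:

```python
import math
from collections import Counter

def tfidf_weights(doc_words, corpus):
    """tf-idf for the words of one document against a corpus of
    tokenized documents (smoothed idf, a common convention)."""
    n = len(corpus)
    df = Counter(w for doc in corpus for w in set(doc))
    tf = Counter(doc_words)
    total = len(doc_words)
    return {
        w: (tf[w] / total) * math.log((1 + n) / (1 + df[w]))
        for w in tf
    }

# toy corpus; the word lists are invented for illustration
corpus = [
    ["earnings", "stimulation", "jamaica", "intervention"],
    ["priors", "regression", "bayesian"],
    ["earnings", "policy", "noise"],
]
weights = tfidf_weights(corpus[0], corpus)
# 'jamaica' (unique to one document) outweighs 'earnings' (appears in two)
```

This matches the shape of the list above: corpus-wide common words get small weights, and words distinctive to this post ('gertler', 'vermeerch') rise to the top.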

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.99999976 2090 andrew gelman stats-2013-11-05-How much do we trust a new claim that early childhood stimulation raised earnings by 42%?


2 0.2941286 2223 andrew gelman stats-2014-02-24-“Edlin’s rule” for routinely scaling down published estimates

Introduction: A few months ago I reacted (see further discussion in comments here ) to a recent study on early childhood intervention, in which researchers Paul Gertler, James Heckman, Rodrigo Pinto, Arianna Zanolini, Christel Vermeerch, Susan Walker, Susan M. Chang, and Sally Grantham-McGregor estimated that a particular intervention on young children had raised incomes on young adults by 42%. I wrote: Major decisions on education policy can turn on the statistical interpretation of small, idiosyncratic data sets — in this case, a study of 129 Jamaican children. . . . Overall, I have no reason to doubt the direction of the effect, namely, that psychosocial stimulation should be good. But I’m skeptical of the claim that income differed by 42%, due to the reason of the statistical significance filter . In section 2.3, the authors are doing lots of hypothesizing based on some comparisons being statistically significant and others being non-significant. There’s nothing wrong with speculation, b

3 0.14156805 1962 andrew gelman stats-2013-07-30-The Roy causal model?

Introduction: A link from Simon Jackman’s blog led me to an article by James Heckman, Hedibert Lopes, and Remi Piatek from 2011, “Treatment effects: A Bayesian perspective.” I was pleasantly surprised to see this, partly because I didn’t know that Heckman was working on Bayesian methods, and partly because the paper explicitly refers to the “potential outcomes model,” a term I associate with Don Rubin. I’ve had the impression that Heckman and Rubin don’t like each other (I was a student of Rubin and have never met Heckman, so I’m only speaking at second hand here), so I was happy to see some convergence. I was curious how Heckman et al. would source the potential outcome model. They do not refer to Rubin’s 1974 paper or to Neyman’s 1923 paper (which was republished in 1990 and is now taken to be the founding document of the Neyman-Rubin approach to causal inference). Nor, for that matter, do Heckman et al. refer to the more recent developments of these theories by Robins, Pearl, and other

4 0.12629324 2033 andrew gelman stats-2013-09-23-More on Bayesian methods and multilevel modeling

Introduction: Ban Chuan Cheah writes: In a previous post, http://andrewgelman.com/2013/07/30/the-roy-causal-model/ you pointed to a paper on Bayesian methods by Heckman. At around the same time I came across another one of his papers, “The Effects of Cognitive and Noncognitive Abilities on Labor Market Outcomes and Social Behavior (2006)” (http://www.nber.org/papers/w12006 or published version http://www.jstor.org/stable/10.1086/504455). In this paper they implement their model as follows: We use Bayesian Markov chain Monte Carlo methods to compute the sample likelihood. Our use of Bayesian methods is only a computational convenience. Our identification analysis is strictly classical. Under our assumptions, the priors we use are asymptotically irrelevant. Some of the authors have also done something similar earlier in: Hansen, Karsten T. & Heckman, James J. & Mullen, K.J.Kathleen J., 2004. “The effect of schooling and ability on achievement test scores,” Journal of Econometrics, Elsevi

5 0.11967159 511 andrew gelman stats-2011-01-11-One more time on that ESP study: The problem of overestimates and the shrinkage solution

Introduction: Benedict Carey writes a follow-up article on ESP studies and Bayesian statistics. ( See here for my previous thoughts on the topic.) Everything Carey writes is fine, and he even uses an example I recommended: The statistical approach that has dominated the social sciences for almost a century is called significance testing. The idea is straightforward. A finding from any well-designed study — say, a correlation between a personality trait and the risk of depression — is considered “significant” if its probability of occurring by chance is less than 5 percent. This arbitrary cutoff makes sense when the effect being studied is a large one — for example, when measuring the so-called Stroop effect. This effect predicts that naming the color of a word is faster and more accurate when the word and color match (“red” in red letters) than when they do not (“red” in blue letters), and is very strong in almost everyone. “But if the true effect of what you are measuring is small,” sai

6 0.11466945 1744 andrew gelman stats-2013-03-01-Why big effects are more important than small effects

7 0.1109546 1876 andrew gelman stats-2013-05-29-Another one of those “Psychological Science” papers (this time on biceps size and political attitudes among college students)

8 0.10811508 576 andrew gelman stats-2011-02-15-With a bit of precognition, you’d have known I was going to post again on this topic, and with a lot of precognition, you’d have known I was going to post today

9 0.1002329 797 andrew gelman stats-2011-07-11-How do we evaluate a new and wacky claim?

10 0.10014391 803 andrew gelman stats-2011-07-14-Subtleties with measurement-error models for the evaluation of wacky claims

11 0.09889093 2008 andrew gelman stats-2013-09-04-Does it matter that a sample is unrepresentative? It depends on the size of the treatment interactions

12 0.09885297 643 andrew gelman stats-2011-04-02-So-called Bayesian hypothesis testing is just as bad as regular hypothesis testing

13 0.097688295 2042 andrew gelman stats-2013-09-28-Difficulties of using statistical significance (or lack thereof) to sift through and compare research hypotheses

14 0.096405514 2140 andrew gelman stats-2013-12-19-Revised evidence for statistical standards

15 0.095405526 888 andrew gelman stats-2011-09-03-A psychology researcher asks: Is Anova dead?

16 0.093211189 1074 andrew gelman stats-2011-12-20-Reading a research paper != agreeing with its claims

17 0.092459269 1315 andrew gelman stats-2012-05-12-Question 2 of my final exam for Design and Analysis of Sample Surveys

18 0.092249304 695 andrew gelman stats-2011-05-04-Statistics ethics question

19 0.092080295 1400 andrew gelman stats-2012-06-29-Decline Effect in Linguistics?

20 0.091889471 1317 andrew gelman stats-2012-05-13-Question 3 of my final exam for Design and Analysis of Sample Surveys
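The simValue column is presumably a cosine similarity between tf-idf vectors; the same-blog entry scoring ≈1.0 is consistent with a document compared against itself. A sketch with made-up weights in the sparse word-to-weight form used above:

```python
import math

def cosine(u, v):
    """Cosine similarity of two sparse vectors given as word->weight dicts."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

a = {"earnings": 0.296, "stimulation": 0.249, "emigration": 0.184}
b = {"earnings": 0.30, "intervention": 0.12, "stimulation": 0.20}
self_sim = cosine(a, a)   # a document matches itself, similarity ~1.0
cross_sim = cosine(a, b)  # partial word overlap, similarity strictly between 0 and 1
```

The tiny shortfall from 1.0 in the dump's same-blog rows (0.99999976) is just floating-point rounding of exactly this kind of computation.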


similar blogs computed by the lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.224), (1, -0.017), (2, 0.053), (3, -0.108), (4, 0.021), (5, -0.051), (6, -0.003), (7, 0.024), (8, 0.01), (9, 0.0), (10, -0.034), (11, 0.001), (12, 0.04), (13, -0.053), (14, 0.065), (15, 0.017), (16, -0.021), (17, 0.011), (18, -0.006), (19, 0.026), (20, -0.048), (21, 0.013), (22, 0.009), (23, 0.007), (24, -0.023), (25, 0.008), (26, -0.007), (27, 0.009), (28, -0.003), (29, -0.036), (30, -0.013), (31, 0.019), (32, -0.008), (33, -0.003), (34, 0.027), (35, 0.02), (36, -0.048), (37, -0.02), (38, 0.011), (39, -0.013), (40, 0.057), (41, 0.035), (42, -0.024), (43, 0.028), (44, 0.021), (45, -0.041), (46, 0.0), (47, -0.048), (48, -0.002), (49, -0.035)]
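Under LSI, the topicWeight vector above gives the document's coordinates in a latent space: dot products of its term vector with the topic directions found by the SVD. A toy projection; the topic directions here are invented for illustration, not recovered from the dump:

```python
def project(doc_vec, topics):
    """Project a dense document vector onto a set of topic directions
    (rows of `topics`), as LSI does after the SVD step."""
    return [sum(d * t for d, t in zip(doc_vec, topic)) for topic in topics]

# toy 4-term vocabulary, two hypothetical topic directions
doc = [0.296, 0.249, 0.184, 0.0]          # tf-idf weights
topics = [
    [0.5, 0.5, 0.5, 0.5],                 # a "general" direction
    [0.7, -0.7, 0.0, 0.0],                # contrasts term 0 vs term 1
]
topic_weights = project(doc, topics)
```

Negative weights, like the (1, -0.017) entry above, are normal in LSI: unlike lda probabilities, these coordinates are signed, and similarity is then computed by cosine in this low-dimensional space.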

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.97139287 2090 andrew gelman stats-2013-11-05-How much do we trust a new claim that early childhood stimulation raised earnings by 42%?


2 0.88339674 2223 andrew gelman stats-2014-02-24-“Edlin’s rule” for routinely scaling down published estimates

Introduction: A few months ago I reacted (see further discussion in comments here ) to a recent study on early childhood intervention, in which researchers Paul Gertler, James Heckman, Rodrigo Pinto, Arianna Zanolini, Christel Vermeerch, Susan Walker, Susan M. Chang, and Sally Grantham-McGregor estimated that a particular intervention on young children had raised incomes on young adults by 42%. I wrote: Major decisions on education policy can turn on the statistical interpretation of small, idiosyncratic data sets — in this case, a study of 129 Jamaican children. . . . Overall, I have no reason to doubt the direction of the effect, namely, that psychosocial stimulation should be good. But I’m skeptical of the claim that income differed by 42%, due to the reason of the statistical significance filter . In section 2.3, the authors are doing lots of hypothesizing based on some comparisons being statistically significant and others being non-significant. There’s nothing wrong with speculation, b

3 0.85495657 2042 andrew gelman stats-2013-09-28-Difficulties of using statistical significance (or lack thereof) to sift through and compare research hypotheses

Introduction: Dean Eckles writes: Thought you might be interested in an example that touches on a couple recurring topics: 1. The difference between a statistically significant finding and one that is non-significant need not be itself statistically significant (thus highlighting the problems of using NHST to declare whether an effect exists or not). 2. Continued issues with the credibility of high profile studies of “social contagion”, especially by Christakis and Fowler . A new paper in Archives of Sexual Behavior produces observational estimates of peer effects in sexual behavior and same-sex attraction. In the text, the authors (who include C&F;) make repeated comparisons of the results for peer effects in sexual intercourse and those for peer effects in same-sex attraction. However, the 95% CI for the later actually includes the point estimate for the former! This is most clear in Figure 2, as highlighted by Real Clear Science’s blog post about the study. (Now because there is som

4 0.84417093 898 andrew gelman stats-2011-09-10-Fourteen magic words: an update

Introduction: In the discussion of the fourteen magic words that can increase voter turnout by over 10 percentage points , questions were raised about the methods used to estimate the experimental effects. I sent these on to Chris Bryan, the author of the study, and he gave the following response: We’re happy to address the questions that have come up. It’s always noteworthy when a precise psychological manipulation like this one generates a large effect on a meaningful outcome. Such findings illustrate the power of the underlying psychological process. I’ve provided the contingency tables for the two turnout experiments below. As indicated in the paper, the data are analyzed using logistic regressions. The change in chi-squared statistic represents the significance of the noun vs. verb condition variable in predicting turnout; that is, the change in the model’s significance when the condition variable is added. This is a standard way to analyze dichotomous outcomes. Four outliers were excl

5 0.84343195 2159 andrew gelman stats-2014-01-04-“Dogs are sensitive to small variations of the Earth’s magnetic field”

Introduction: Two different people pointed me to this article by Vlastimil Hart et al. in the journal Frontiers in Zoology: It is for the first time that (a) magnetic sensitivity was proved in dogs, (b) a measurable, predictable behavioral reaction upon natural MF fluctuations could be unambiguously proven in a mammal, and (c) high sensitivity to small changes in polarity, rather than in intensity, of MF was identified as biologically meaningful. Our findings open new horizons in magnetoreception research. Since the MF is calm in only about 20 % of the daylight period, our findings might provide an explanation why many magnetoreception experiments were hardly replicable and why directional values of records in diverse observations are frequently compromised by scatter. Tom Passin writes: Here we seem to have multiple comparisons and (to me, at least) dubious plausibility, together with a kind of fanciful topic and a large vaiety of plausible alternative explanations. And another co

6 0.84329116 106 andrew gelman stats-2010-06-23-Scientists can read your mind . . . as long as they’re allowed to look at more than one place in your brain and then make a prediction after seeing what you actually did

7 0.84285355 797 andrew gelman stats-2011-07-11-How do we evaluate a new and wacky claim?

8 0.83609003 1944 andrew gelman stats-2013-07-18-You’ll get a high Type S error rate if you use classical statistical methods to analyze data from underpowered studies

9 0.8350181 2156 andrew gelman stats-2014-01-01-“Though They May Be Unaware, Newlyweds Implicitly Know Whether Their Marriage Will Be Satisfying”

10 0.83333737 2165 andrew gelman stats-2014-01-09-San Fernando Valley cityscapes: An example of the benefits of fractal devastation?

11 0.83216512 511 andrew gelman stats-2011-01-11-One more time on that ESP study: The problem of overestimates and the shrinkage solution

12 0.83022159 1400 andrew gelman stats-2012-06-29-Decline Effect in Linguistics?

13 0.82877994 1910 andrew gelman stats-2013-06-22-Struggles over the criticism of the “cannabis users and IQ change” paper

14 0.82814378 490 andrew gelman stats-2010-12-29-Brain Structure and the Big Five

15 0.82750899 1150 andrew gelman stats-2012-02-02-The inevitable problems with statistical significance and 95% intervals

16 0.82565844 1171 andrew gelman stats-2012-02-16-“False-positive psychology”

17 0.82090056 963 andrew gelman stats-2011-10-18-Question on Type M errors

18 0.81958097 576 andrew gelman stats-2011-02-15-With a bit of precognition, you’d have known I was going to post again on this topic, and with a lot of precognition, you’d have known I was going to post today

19 0.8195591 466 andrew gelman stats-2010-12-13-“The truth wears off: Is there something wrong with the scientific method?”

20 0.81850362 1876 andrew gelman stats-2013-05-29-Another one of those “Psychological Science” papers (this time on biceps size and political attitudes among college students)


similar blogs computed by the lda model

lda for this blog:

topicId topicWeight

[(6, 0.011), (7, 0.021), (16, 0.034), (18, 0.012), (24, 0.288), (65, 0.022), (70, 0.014), (73, 0.011), (81, 0.041), (86, 0.043), (89, 0.036), (90, 0.021), (96, 0.03), (99, 0.307)]
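Under LDA, the pairs above are a sparse probability distribution over topics, with only the non-negligible topics listed. One common way to compare two documents' topic distributions is the Hellinger distance; the dump does not say whether its similarity column uses that measure or plain cosine, so treat this as a generic sketch:

```python
import math

def hellinger(p, q):
    """Hellinger distance between two sparse topic distributions
    given as topicId -> probability dicts. 0 means identical;
    values approach 1 as the distributions stop overlapping."""
    topics = set(p) | set(q)
    s = sum((math.sqrt(p.get(t, 0.0)) - math.sqrt(q.get(t, 0.0))) ** 2
            for t in topics)
    return math.sqrt(s / 2.0)

# toy distributions shaped like the (topicId, topicWeight) pairs above
doc_a = {24: 0.288, 99: 0.307, 16: 0.034}
doc_b = {24: 0.30, 99: 0.28, 81: 0.05}
same = hellinger(doc_a, doc_a)   # identical distributions -> 0.0
diff = hellinger(doc_a, doc_b)   # small distance: heavy topics 24 and 99 agree
```

Note that topics 24 and 99 dominate this post's distribution, which is why the lda similarity list below is driven almost entirely by other posts heavy in those same two topics.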

similar blogs list:

simIndex simValue blogId blogTitle

1 0.98439217 846 andrew gelman stats-2011-08-09-Default priors update?

Introduction: Ryan King writes: I was wondering if you have a brief comment on the state of the art for objective priors for hierarchical generalized linear models (generalized linear mixed models). I have been working off the papers in Bayesian Analysis (2006) 1, Number 3 (Browne and Draper, Kass and Natarajan, Gelman). There seems to have been continuous work for matching priors in linear mixed models, but GLMMs less so because of the lack of an analytic marginal likelihood for the variance components. There are a number of additional suggestions in the literature since 2006, but little robust practical guidance. I’m interested in both mean parameters and the variance components. I’m almost always concerned with logistic random effect models. I’m fascinated by the matching-priors idea of higher-order asymptotic improvements to maximum likelihood, and need to make some kind of defensible default recommendation. Given the massive scale of the datasets (genetics …), extensive sensitivity a

2 0.98275697 779 andrew gelman stats-2011-06-25-Avoiding boundary estimates using a prior distribution as regularization

Introduction: For awhile I’ve been fitting most of my multilevel models using lmer/glmer, which gives point estimates of the group-level variance parameters (maximum marginal likelihood estimate for lmer and an approximation for glmer). I’m usually satisfied with this–sure, point estimation understates the uncertainty in model fitting, but that’s typically the least of our worries. Sometimes, though, lmer/glmer estimates group-level variances at 0 or estimates group-level correlation parameters at +/- 1. Typically, when this happens, it’s not that we’re so sure the variance is close to zero or that the correlation is close to 1 or -1; rather, the marginal likelihood does not provide a lot of information about these parameters of the group-level error distribution. I don’t want point estimates on the boundary. I don’t want to say that the unexplained variance in some dimension is exactly zero. One way to handle this problem is full Bayes: slap a prior on sigma, do your Gibbs and Metropolis

3 0.97963762 197 andrew gelman stats-2010-08-10-The last great essayist?

Introduction: I recently read a bizarre article by Janet Malcolm on a murder trial in NYC. What threw me about the article was that the story was utterly commonplace (by the standards of today’s headlines): divorced mom kills ex-husband in a custody dispute over their four-year-old daughter. The only interesting features were (a) the wife was a doctor and the husband were a dentist, the sort of people you’d expect to sue rather than slay, and (b) the wife hired a hitman from within the insular immigrant community that she (and her husband) belonged to. But, really, neither of these was much of a twist. To add to the non-storyness of it all, there were no other suspects, the evidence against the wife and the hitman was overwhelming, and even the high-paid defense lawyers didn’t seem to be making much of an effort to convince anyone of their client’s innocents. (One of the closing arguments was that one aspect of the wife’s story was so ridiculous that it had to be true. In the lawyer’s wo

4 0.97955394 1072 andrew gelman stats-2011-12-19-“The difference between . . .”: It’s not just p=.05 vs. p=.06

Introduction: The title of this post by Sanjay Srivastava illustrates an annoying misconception that’s crept into the (otherwise delightful) recent publicity related to my article with Hal Stern, he difference between “significant” and “not significant” is not itself statistically significant. When people bring this up, they keep referring to the difference between p=0.05 and p=0.06, making the familiar (and correct) point about the arbitrariness of the conventional p-value threshold of 0.05. And, sure, I agree with this, but everybody knows that already. The point Hal and I were making was that even apparently large differences in p-values are not statistically significant. For example, if you have one study with z=2.5 (almost significant at the 1% level!) and another with z=1 (not statistically significant at all, only 1 se from zero!), then their difference has a z of about 1 (again, not statistically significant at all). So it’s not just a comparison of 0.05 vs. 0.06, even a differenc

5 0.97932452 953 andrew gelman stats-2011-10-11-Steve Jobs’s cancer and science-based medicine

Introduction: Interesting discussion from David Gorski (which I found via this link from Joseph Delaney). I don’t have anything really to add to this discussion except to note the value of this sort of anecdote in a statistics discussion. It’s only n=1 and adds almost nothing to the literature on the effectiveness of various treatments, but a story like this can help focus one’s thoughts on the decision problems.

6 0.97901416 847 andrew gelman stats-2011-08-10-Using a “pure infographic” to explore differences between information visualization and statistical graphics

7 0.97887379 1838 andrew gelman stats-2013-05-03-Setting aside the politics, the debate over the new health-care study reveals that we’re moving to a new high standard of statistical journalism

8 0.97819263 1999 andrew gelman stats-2013-08-27-Bayesian model averaging or fitting a larger model

9 0.97780406 1367 andrew gelman stats-2012-06-05-Question 26 of my final exam for Design and Analysis of Sample Surveys

10 0.97747487 1474 andrew gelman stats-2012-08-29-More on scaled-inverse Wishart and prior independence

11 0.97669661 1368 andrew gelman stats-2012-06-06-Question 27 of my final exam for Design and Analysis of Sample Surveys

12 0.97648895 1087 andrew gelman stats-2011-12-27-“Keeping things unridiculous”: Berger, O’Hagan, and me on weakly informative priors

13 0.97594988 1455 andrew gelman stats-2012-08-12-Probabilistic screening to get an approximate self-weighted sample

14 0.97512305 2017 andrew gelman stats-2013-09-11-“Informative g-Priors for Logistic Regression”

15 0.9749192 414 andrew gelman stats-2010-11-14-“Like a group of teenagers on a bus, they behave in public as if they were in private”

16 0.97459698 2312 andrew gelman stats-2014-04-29-Ken Rice presents a unifying approach to statistical inference and hypothesis testing

17 0.97382575 1208 andrew gelman stats-2012-03-11-Gelman on Hennig on Gelman on Bayes

18 0.97370934 1224 andrew gelman stats-2012-03-21-Teaching velocity and acceleration

19 0.97367537 1240 andrew gelman stats-2012-04-02-Blogads update

20 0.97303343 2247 andrew gelman stats-2014-03-14-The maximal information coefficient