andrew_gelman_stats andrew_gelman_stats-2013 andrew_gelman_stats-2013-2042 knowledge-graph by maker-knowledge-mining

2042 andrew gelman stats-2013-09-28-Difficulties of using statistical significance (or lack thereof) to sift through and compare research hypotheses


meta info for this blog

Source: html

Introduction: Dean Eckles writes: Thought you might be interested in an example that touches on a couple of recurring topics: 1. The difference between a statistically significant finding and one that is non-significant need not be itself statistically significant (thus highlighting the problems of using NHST to declare whether an effect exists or not). 2. Continued issues with the credibility of high-profile studies of “social contagion”, especially by Christakis and Fowler. A new paper in Archives of Sexual Behavior produces observational estimates of peer effects in sexual behavior and same-sex attraction. In the text, the authors (who include C&F) make repeated comparisons of the results for peer effects in sexual intercourse and those for peer effects in same-sex attraction. However, the 95% CI for the latter actually includes the point estimate for the former! This is most clear in Figure 2, as highlighted by Real Clear Science’s blog post about the study. (Now because there is som


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 The difference between a statistically significant finding and one that is non-significant need not be itself statistically significant (thus highlighting the problems of using NHST to declare whether an effect exists or not). [sent-2, score-0.508]
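The arithmetic behind sentence 1 is easy to verify directly. A minimal sketch, with made-up estimates and standard errors (illustrative only, not taken from the paper):

```python
import math

# Hypothetical estimates with standard errors (illustrative only):
est_a, se_a = 25.0, 10.0   # z = 2.5 -> "statistically significant"
est_b, se_b = 10.0, 10.0   # z = 1.0 -> "not significant"

z_a = est_a / se_a
z_b = est_b / se_b

# The comparison that actually matters: the difference between the two
# estimates, with its standard error (assuming independence).
diff = est_a - est_b
se_diff = math.sqrt(se_a**2 + se_b**2)   # ~14.1
z_diff = diff / se_diff                  # ~1.06, well below 1.96

print(z_a, z_b, round(z_diff, 2))
```

One estimate clears the conventional 1.96 threshold and the other does not, yet the difference between them is far from significant, which is exactly why declaring "effect" vs. "no effect" from each test separately is misleading.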

2 A new paper in Archives of Sexual Behavior produces observational estimates of peer effects in sexual behavior and same-sex attraction. [sent-5, score-0.774]

3 In the text, the authors (who include C&F) make repeated comparisons of the results for peer effects in sexual intercourse and those for peer effects in same-sex attraction. [sent-6, score-1.234]

4 (Now because there is some complex dependence structure in the data, perhaps the confidence interval for the contrast between these effects could actually be narrower. [sent-9, score-0.357]

5 One reason the authors like this negative result is that it is an example of where this family of analyses actually produces a null result. [sent-11, score-0.416]

6 The authors make some arguments about having adequate power, but Figure 2 makes it pretty clear that the study is underpowered. [sent-13, score-0.336]

7 It is interesting to see this problem pop up in their work again, given that this was one of the issues with C&F’s earlier social contagion work that could be most directly understood as an error. [sent-14, score-0.399]

8 Recall that C&F used arguments from asymmetry of friendship ties, whereby they got a significant coefficient for friendships reported only by the ego, but not for friendships reported only by the alter. [sent-15, score-0.731]

9 However, the difference between these two coefficients was not actually statistically significant. [sent-16, score-0.319]

10 Now, it may still be that this study should result in a tighter posterior around zero for peer effects in self-reported same-sex attraction than prior to the study. [sent-18, score-0.747]
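The "tighter posterior around zero" point can be illustrated with a minimal normal-normal updating sketch (the prior, estimate, and standard error below are all hypothetical numbers, not from the study):

```python
# Hypothetical normal prior on the peer effect and a noisy near-null estimate:
prior_mean, prior_sd = 0.0, 0.5
data_est, data_se = 0.1, 0.4

# Precision-weighted combination (conjugate normal-normal update):
prior_prec = 1.0 / prior_sd**2
data_prec = 1.0 / data_se**2
post_prec = prior_prec + data_prec
post_mean = (prior_prec * prior_mean + data_prec * data_est) / post_prec
post_sd = post_prec ** -0.5

# Even an underpowered, null-ish study narrows the posterior around zero.
print(round(post_mean, 3), round(post_sd, 3))
```

The posterior sd is smaller than both the prior sd and the study's standard error, so the study is informative even though it is nowhere near "significant" on its own.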

11 Though I think most other evidence would have already suggested that any such effects would be relatively small. [sent-19, score-0.175]

12 I think one challenge with some of this work is that journals or others need to enforce or strongly encourage some of these recommendations, since some of the people doing this work may not really care that much about the truth. [sent-24, score-0.164]

13-14 05]) is a small effect (this is how the authors describe it), but for a behavior with p = 0.047, this actually corresponds to a ~45% relative increase in the probability of the behavior when the alter is a positive case. [sent-33, score-0.256; sent-34, score-0.320]
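The ~45% figure is just small-denominator arithmetic: a small absolute change in a rare behavior is a large relative change. A sketch using the quoted baseline of p = 0.047 with a hypothetical absolute increase:

```python
p_base = 0.047        # baseline probability of the behavior (from the text)
abs_increase = 0.021  # hypothetical absolute increase when the alter is a positive case

rel_increase = abs_increase / p_base   # ~0.45, i.e. roughly a 45% relative increase
print(round(rel_increase, 2))
```

So calling the effect "small" on the absolute or coefficient scale and calling it a ~45% relative increase are both consistent; which scale you quote changes the impression considerably.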

15 In fact, I think it is larger than the relative risks from many of the other C&F social contagion papers. [sent-36, score-0.394]

16 Likewise, the confidence intervals from the logistic regression are so wide as to include many values that seem not very plausible. [sent-37, score-0.244]

17 Perhaps instead all of these effects could be incorporated into a multilevel model that shared information about the sizes of network auto-correlations in various traits and behaviors, thus putting the observed associations in context and appropriately “regularizing” them. [sent-38, score-0.266]
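The multilevel suggestion amounts to partial pooling: shrink each trait's noisy network auto-correlation toward the group mean, more strongly when its standard error is large. A toy sketch (all estimates, standard errors, and the between-trait sd tau are made up, and tau is fixed here rather than estimated as a full multilevel model would do):

```python
# Hypothetical raw auto-correlation estimates for several traits/behaviors,
# each with its standard error:
estimates = [0.30, 0.05, 0.22, -0.10]
ses =       [0.15, 0.20, 0.10,  0.25]
tau = 0.10  # assumed between-trait standard deviation (fixed for the sketch)

grand_mean = sum(estimates) / len(estimates)

# Shrinkage weight per trait: how much to trust the raw estimate.
pooled = []
for y, se in zip(estimates, ses):
    w = tau**2 / (tau**2 + se**2)
    pooled.append(w * y + (1 - w) * grand_mean)

# Noisier estimates are pulled harder toward the grand mean ("regularized").
print([round(p, 3) for p in pooled])
```

This is the sense in which the observed associations get put "in context": no single comparison is read off its own noisy interval in isolation.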

18 The downside is, given available data, I doubt there’d be any interesting results that are statistically significant (in the sense of posterior 95% intervals that exclude zero) without imposing very strong assumptions. [sent-40, score-0.527]

19 In a conventional analysis, these assumptions are imposed via the selection of which comparisons to focus on, but in a full multilevel analysis of all possible comparisons, the researcher might have to more carefully explain why certain comparisons are believed, a priori, to be larger than others. [sent-41, score-0.412]

20 Or else you have to make fewer assumptions and accept the large amount of uncertainty about substantive conclusions of interest. [sent-42, score-0.217]


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('contagion', 0.248), ('attraction', 0.21), ('peer', 0.183), ('sexual', 0.175), ('effects', 0.175), ('friendships', 0.161), ('eckles', 0.14), ('behavior', 0.138), ('comparisons', 0.136), ('statistically', 0.133), ('interpretable', 0.132), ('christakis', 0.122), ('ci', 0.122), ('significant', 0.121), ('authors', 0.118), ('fowler', 0.116), ('actually', 0.105), ('produces', 0.103), ('posterior', 0.099), ('network', 0.091), ('null', 0.09), ('results', 0.089), ('intervals', 0.085), ('work', 0.082), ('logistic', 0.082), ('coefficients', 0.081), ('tribute', 0.08), ('tighter', 0.08), ('removal', 0.08), ('relative', 0.077), ('confidence', 0.077), ('conclusions', 0.077), ('arguments', 0.076), ('homophily', 0.076), ('ego', 0.072), ('friendship', 0.072), ('recurring', 0.072), ('regularizing', 0.072), ('nhst', 0.072), ('clear', 0.072), ('uncertainty', 0.07), ('reported', 0.07), ('assumptions', 0.07), ('imposed', 0.07), ('touches', 0.07), ('adequate', 0.07), ('pointer', 0.07), ('social', 0.069), ('highlighted', 0.068), ('archives', 0.068)]
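The similarity scores in the lists that follow are consistent with cosine similarity between sparse tfidf vectors like the one above; a minimal sketch of that computation (the two word-weight dicts are made up, and the exact pipeline used here is assumed):

```python
import math

# Posts represented as sparse {word: tfidf_weight} vectors (made-up weights):
post_a = {"contagion": 0.248, "attraction": 0.210, "peer": 0.183}
post_b = {"contagion": 0.190, "obesity": 0.220, "peer": 0.110}

def cosine(u, v):
    # Dot product over shared words, normalized by the vector lengths.
    dot = sum(u[w] * v[w] for w in u if w in v)
    norm_u = math.sqrt(sum(x * x for x in u.values()))
    norm_v = math.sqrt(sum(x * x for x in v.values()))
    return dot / (norm_u * norm_v)

print(round(cosine(post_a, post_b), 3))
```

A post compared with itself scores 1.0, which is why the "same-blog" rows below sit at or near the top of each list.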

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0 2042 andrew gelman stats-2013-09-28-Difficulties of using statistical significance (or lack thereof) to sift through and compare research hypotheses


2 0.24751133 1412 andrew gelman stats-2012-07-10-More questions on the contagion of obesity, height, etc.

Introduction: AT discusses [link broken; see P.P.S. below] a new paper of his that casts doubt on the robustness of the controversial Christakis and Fowler papers. AT writes that he ran some simulations of contagion on social networks and found that (a) in a simple model assuming the contagion of the sort hypothesized by Christakis and Fowler, their procedure would indeed give the sorts of estimates they found in their papers, but (b) in another simple model assuming a different sort of contagion, the C&F estimation would give indistinguishable estimates. Thus, if you believe AT’s simulation model, C&F’s procedure cannot statistically distinguish between two sorts of contagion (directional and simultaneous). I have not looked at AT’s paper so I can’t fully comment, but I don’t fully understand his method for simulating network connections. AT uses what he calls a “rewiring” model. This makes sense: as time progresses, we make new friends and lose old ones—but I am confused by the details

3 0.24457714 756 andrew gelman stats-2011-06-10-Christakis-Fowler update

Introduction: After I posted on Russ Lyons’s criticisms of the work of Nicholas Christakis and James Fowler’s work on social networks, several people emailed in with links to related articles. (Nobody wants to comment on the blog anymore; all I get is emails.) Here they are: Political scientists Hans Noel and Brendan Nyhan wrote a paper called “The ‘Unfriending’ Problem: The Consequences of Homophily in Friendship Retention for Causal Estimates of Social Influence” in which they argue that the Christakis-Fowler results are subject to bias because of patterns in the time course of friendships. Statisticians Cosma Shalizi and AT wrote a paper called “Homophily and Contagion Are Generically Confounded in Observational Social Network Studies” arguing that analyses such as those of Christakis and Fowler cannot hope to disentangle different sorts of network effects. And Christakis and Fowler reply to Noel and Nyhan, Shalizi and Thomas, Lyons, and others in an article that begins: H

4 0.23422977 757 andrew gelman stats-2011-06-10-Controversy over the Christakis-Fowler findings on the contagion of obesity

Introduction: Nicholas Christakis and James Fowler are famous for finding that obesity is contagious. Their claims, which have been received with both respect and skepticism (perhaps we need a new word for this: “respecticism”?) are based on analysis of data from the Framingham heart study, a large longitudinal public-health study that happened to have some social network data (for the odd reason that each participant was asked to provide the name of a friend who could help the researchers locate them if they were to move away during the study period. The short story is that if your close contact became obese, you were likely to become obese also. The long story is a debate about the reliability of this finding (that is, can it be explained by measurement error and sampling variability) and its causal implications. This sort of study is in my wheelhouse, as it were, but I have never looked at the Christakis-Fowler work in detail. Thus, my previous and current comments are more along the line

5 0.1787407 1699 andrew gelman stats-2013-01-31-Fowlerpalooza!

Introduction: Russ Lyons points us to a discussion in Statistics in Medicine of the famous claims by Christakis and Fowler on the contagion of obesity etc. James O’Malley and Christakis and Fowler present the positive case. Andrew Thomas and Tyler VanderWeele present constructive criticism. Christakis and Fowler reply . Coincidentally, a couple weeks ago an epidemiologist was explaining to me the differences between the Framingham Heart Study and the Nurses Health Study and why Framingham got the postmenopausal supplement risks right while Nurses got it wrong. P.S. The journal issue also includes a comment on “A distribution-free test of constant mean in linear mixed effects models.” Wow! I had no idea people still did this sort of thing. How horrible. But I guess that’s what half-life is all about. These ideas last forever, they just become less and less relevant to people.

6 0.16499233 1941 andrew gelman stats-2013-07-16-Priors

7 0.16193512 1695 andrew gelman stats-2013-01-28-Economists argue about Bayes

8 0.16171747 1952 andrew gelman stats-2013-07-23-Christakis response to my comment on his comments on social science (or just skip to the P.P.P.S. at the end)

9 0.15492003 1206 andrew gelman stats-2012-03-10-95% intervals that I don’t believe, because they’re from a flat prior I don’t believe

10 0.15308928 2191 andrew gelman stats-2014-01-29-“Questioning The Lancet, PLOS, And Other Surveys On Iraqi Deaths, An Interview With Univ. of London Professor Michael Spagat”

11 0.15299316 888 andrew gelman stats-2011-09-03-A psychology researcher asks: Is Anova dead?

12 0.15197784 1072 andrew gelman stats-2011-12-19-“The difference between . . .”: It’s not just p=.05 vs. p=.06

13 0.14623155 1968 andrew gelman stats-2013-08-05-Evidence on the impact of sustained use of polynomial regression on causal inference (a claim that coal heating is reducing lifespan by 5 years for half a billion people)

14 0.14451648 899 andrew gelman stats-2011-09-10-The statistical significance filter

15 0.14202563 1149 andrew gelman stats-2012-02-01-Philosophy of Bayesian statistics: my reactions to Cox and Mayo

16 0.14055213 1989 andrew gelman stats-2013-08-20-Correcting for multiple comparisons in a Bayesian regression model

17 0.13846748 1672 andrew gelman stats-2013-01-14-How do you think about the values in a confidence interval?

18 0.13818504 1876 andrew gelman stats-2013-05-29-Another one of those “Psychological Science” papers (this time on biceps size and political attitudes among college students)

19 0.1380899 1150 andrew gelman stats-2012-02-02-The inevitable problems with statistical significance and 95% intervals

20 0.1380695 2109 andrew gelman stats-2013-11-21-Hidden dangers of noninformative priors


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.295), (1, 0.068), (2, 0.058), (3, -0.185), (4, 0.008), (5, -0.098), (6, 0.016), (7, -0.018), (8, -0.021), (9, 0.027), (10, -0.042), (11, 0.027), (12, 0.049), (13, -0.084), (14, 0.039), (15, 0.036), (16, -0.051), (17, -0.01), (18, 0.001), (19, -0.014), (20, 0.03), (21, 0.014), (22, 0.015), (23, -0.011), (24, 0.016), (25, -0.034), (26, 0.037), (27, -0.055), (28, -0.009), (29, -0.04), (30, -0.011), (31, -0.005), (32, -0.046), (33, -0.07), (34, 0.09), (35, 0.029), (36, -0.039), (37, 0.031), (38, 0.008), (39, -0.054), (40, 0.019), (41, 0.01), (42, 0.034), (43, -0.02), (44, 0.03), (45, -0.061), (46, -0.045), (47, -0.022), (48, -0.029), (49, -0.035)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.96642011 2042 andrew gelman stats-2013-09-28-Difficulties of using statistical significance (or lack thereof) to sift through and compare research hypotheses


2 0.81972986 2223 andrew gelman stats-2014-02-24-“Edlin’s rule” for routinely scaling down published estimates

Introduction: A few months ago I reacted (see further discussion in comments here ) to a recent study on early childhood intervention, in which researchers Paul Gertler, James Heckman, Rodrigo Pinto, Arianna Zanolini, Christel Vermeerch, Susan Walker, Susan M. Chang, and Sally Grantham-McGregor estimated that a particular intervention on young children had raised incomes on young adults by 42%. I wrote: Major decisions on education policy can turn on the statistical interpretation of small, idiosyncratic data sets — in this case, a study of 129 Jamaican children. . . . Overall, I have no reason to doubt the direction of the effect, namely, that psychosocial stimulation should be good. But I’m skeptical of the claim that income differed by 42%, due to the reason of the statistical significance filter . In section 2.3, the authors are doing lots of hypothesizing based on some comparisons being statistically significant and others being non-significant. There’s nothing wrong with speculation, b

3 0.8155961 2227 andrew gelman stats-2014-02-27-“What Can we Learn from the Many Labs Replication Project?”

Introduction: Aki points us to this discussion from Rolf Zwaan: The first massive replication project in psychology has just reached completion (several others are to follow). . . . What can we learn from the ManyLabs project? The results here show the effect sizes for the replication efforts (in green and grey) as well as the original studies (in blue). The 99% confidence intervals are for the meta-analysis of the effect size (the green dots); the studies are ordered by effect size. Let’s first consider what we canNOT learn from these data. Of the 13 replication attempts (when the first four are taken together), 11 succeeded and 2 did not (in fact, at some point ManyLabs suggests that a third one, Imagined Contact also doesn’t really replicate). We cannot learn from this that the vast majority of psychological findings will replicate . . . But even if we had an accurate estimate of the percentage of findings that replicate, how useful would that be? Rather than trying to arrive at a mo

4 0.80272168 1150 andrew gelman stats-2012-02-02-The inevitable problems with statistical significance and 95% intervals

Introduction: I’m thinking more and more that we have to get rid of statistical significance, 95% intervals, and all the rest, and just come to a more fundamental acceptance of uncertainty. In practice, I think we use confidence intervals and hypothesis tests as a way to avoid acknowledging uncertainty. We set up some rules and then act as if we know what is real and what is not. Even in my own applied work, I’ve often enough presented 95% intervals and gone on from there. But maybe that’s just not right. I was thinking about this after receiving the following email from a psychology student: I [the student] am trying to conceptualize the lessons in your paper with Stern with comparing treatment effects across studies. When trying to understand if a certain intervention works, we must look at what the literature says. However this can be complicated if the literature has divergent results. There are four situations I am thinking of. FOr each of these situations, assume the studies are r

5 0.79467928 2241 andrew gelman stats-2014-03-10-Preregistration: what’s in it for you?

Introduction: Chris Chambers pointed me to a blog by someone called Neuroskeptic who suggested that I preregister my political science studies: So when Andrew Gelman (let’s say) is going to start using a new approach, he goes on Twitter, or on his blog, and posts a bare-bones summary of what he’s going to do. Then he does it. If he finds something interesting, he writes it up as a paper, citing that tweet or post as his preregistration. . . . I think this approach has some benefits but doesn’t really address the issues of preregistration that concern me—but I’d like to spend an entire blog post explaining why. I have two key points: 1. If your study is crap, preregistration might fix it. Preregistration is fine—indeed, the wide acceptance of preregistration might well motivate researchers to not do so many crap studies—but it doesn’t solve fundamental problems of experimental design. 2. “Preregistration” seems to mean different things in different scenarios: A. When the concern is

6 0.79338342 1171 andrew gelman stats-2012-02-16-“False-positive psychology”

7 0.79183096 1963 andrew gelman stats-2013-07-31-Response by Jessica Tracy and Alec Beall to my critique of the methods in their paper, “Women Are More Likely to Wear Red or Pink at Peak Fertility”

8 0.78856385 2156 andrew gelman stats-2014-01-01-“Though They May Be Unaware, Newlyweds Implicitly Know Whether Their Marriage Will Be Satisfying”

9 0.78608024 898 andrew gelman stats-2011-09-10-Fourteen magic words: an update

10 0.78590655 2142 andrew gelman stats-2013-12-21-Chasing the noise

11 0.78412867 2159 andrew gelman stats-2014-01-04-“Dogs are sensitive to small variations of the Earth’s magnetic field”

12 0.77992582 106 andrew gelman stats-2010-06-23-Scientists can read your mind . . . as long as the’re allowed to look at more than one place in your brain and then make a prediction after seeing what you actually did

13 0.77861017 1968 andrew gelman stats-2013-08-05-Evidence on the impact of sustained use of polynomial regression on causal inference (a claim that coal heating is reducing lifespan by 5 years for half a billion people)

14 0.77604055 2090 andrew gelman stats-2013-11-05-How much do we trust a new claim that early childhood stimulation raised earnings by 42%?

15 0.77319133 706 andrew gelman stats-2011-05-11-The happiness gene: My bottom line (for now)

16 0.77305424 897 andrew gelman stats-2011-09-09-The difference between significant and not significant…

17 0.76695865 1910 andrew gelman stats-2013-06-22-Struggles over the criticism of the “cannabis users and IQ change” paper

18 0.7631464 1876 andrew gelman stats-2013-05-29-Another one of those “Psychological Science” papers (this time on biceps size and political attitudes among college students)

19 0.76205444 1971 andrew gelman stats-2013-08-07-I doubt they cheated

20 0.75262856 2355 andrew gelman stats-2014-05-31-Jessica Tracy and Alec Beall (authors of the fertile-women-wear-pink study) comment on our Garden of Forking Paths paper, and I comment on their comments


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(8, 0.012), (15, 0.017), (16, 0.054), (20, 0.019), (21, 0.041), (24, 0.195), (36, 0.015), (40, 0.014), (52, 0.02), (53, 0.022), (59, 0.01), (61, 0.01), (77, 0.024), (81, 0.015), (84, 0.033), (98, 0.024), (99, 0.385)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.99121338 2042 andrew gelman stats-2013-09-28-Difficulties of using statistical significance (or lack thereof) to sift through and compare research hypotheses


2 0.98653287 878 andrew gelman stats-2011-08-29-Infovis, infographics, and data visualization: Where I’m coming from, and where I’d like to go

Introduction: I continue to struggle to convey my thoughts on statistical graphics so I’ll try another approach, this time giving my own story. For newcomers to this discussion: the background is that Antony Unwin and I wrote an article on the different goals embodied in information visualization and statistical graphics, but I have difficulty communicating on this point with the infovis people. Maybe if I tell my own story, and then they tell their stories, this will point a way forward to a more constructive discussion. So here goes. I majored in physics in college and I worked in a couple of research labs during the summer. Physicists graph everything. I did most of my plotting on graph paper–this continued through my second year of grad school–and became expert at putting points at 1/5, 2/5, 3/5, and 4/5 between the x and y grid lines. In grad school in statistics, I continued my physics habits and graphed everything I could. I did notice, though, that the faculty and the other

3 0.98464811 2174 andrew gelman stats-2014-01-17-How to think about the statistical evidence when the statistical evidence can’t be conclusive?

Introduction: There’s a paradigm in applied statistics that goes something like this: 1. There is a scientific or policy question of some theoretical or practical importance. 2. Researchers gather data on relevant outcomes and perform a statistical analysis, ideally leading to a clear conclusion (p less than 0.05, or a strong posterior distribution, or good predictive performance, or high reliability and validity, whatever). 3. This conclusion informs policy. This paradigm has room for positive findings (for example, that a new program is statistically significantly better, or statistically significantly worse than what came before) or negative findings (data are inconclusive, further study is needed), even if negative findings seem less likely to make their way into the textbooks. But what happens when step 2 simply isn’t possible. This came up a few years ago—nearly 10 years ago, now!—with the excellent paper by Donohue and Wolfers which explained why it’s just about impossible to

4 0.98417246 2223 andrew gelman stats-2014-02-24-“Edlin’s rule” for routinely scaling down published estimates


5 0.98416877 2266 andrew gelman stats-2014-03-25-A statistical graphics course and statistical graphics advice

Introduction: Dean Eckles writes: Some of my coworkers at Facebook and I have worked with Udacity to create an online course on exploratory data analysis, including using data visualizations in R as part of EDA. The course has now launched at  https://www.udacity.com/course/ud651  so anyone can take it for free. And Kaiser Fung has  reviewed it . So definitely feel free to promote it! Criticism is also welcome (we are still fine-tuning things and adding more notes throughout). I wrote some more comments about the course  here , including highlighting the interviews with my great coworkers. I didn’t have a chance to look at the course so instead I responded with some generic comments about eda and visualization (in no particular order): - Think of a graph as a comparison. All graphs are comparison (indeed, all statistical analyses are comparisons). If you already have the graph in mind, think of what comparisons it’s enabling. Or if you haven’t settled on the graph yet, think of what

6 0.98384559 2176 andrew gelman stats-2014-01-19-Transformations for non-normal data

7 0.98382866 1247 andrew gelman stats-2012-04-05-More philosophy of Bayes

8 0.98360384 855 andrew gelman stats-2011-08-16-Infovis and statgraphics update update

9 0.98340821 1605 andrew gelman stats-2012-12-04-Write This Book

10 0.98329234 1732 andrew gelman stats-2013-02-22-Evaluating the impacts of welfare reform?

11 0.98314255 262 andrew gelman stats-2010-09-08-Here’s how rumors get started: Lineplots, dotplots, and nonfunctional modernist architecture

12 0.98284864 1149 andrew gelman stats-2012-02-01-Philosophy of Bayesian statistics: my reactions to Cox and Mayo

13 0.9827801 1150 andrew gelman stats-2012-02-02-The inevitable problems with statistical significance and 95% intervals

14 0.98266339 247 andrew gelman stats-2010-09-01-How does Bayes do it?

15 0.98239845 61 andrew gelman stats-2010-05-31-A data visualization manifesto

16 0.9823482 511 andrew gelman stats-2011-01-11-One more time on that ESP study: The problem of overestimates and the shrinkage solution

17 0.98232073 22 andrew gelman stats-2010-05-07-Jenny Davidson wins Mark Van Doren Award, also some reflections on the continuity of work within literary criticism or statistics

18 0.98218244 2220 andrew gelman stats-2014-02-22-Quickies

19 0.98202747 1763 andrew gelman stats-2013-03-14-Everyone’s trading bias for variance at some point, it’s just done at different places in the analyses

20 0.98192549 1974 andrew gelman stats-2013-08-08-Statistical significance and the dangerous lure of certainty