andrew_gelman_stats andrew_gelman_stats-2014 andrew_gelman_stats-2014-2286 knowledge-graph by maker-knowledge-mining

2286 andrew gelman stats-2014-04-08-Understanding Simpson’s paradox using a graph


meta info for this blog

Source: html

Introduction: Joshua Vogelstein pointed me to this post by Michael Nielsen on how to teach Simpson’s paradox. I don’t know if Nielsen (and others) are aware that people have developed some snappy graphical methods for displaying Simpson’s paradox (and, more generally, aggregation issues). We do some of this in our Red State Blue State book, but before that was the BK plot, named by Howard Wainer after a 2001 paper by Stuart Baker and Barnett Kramer, although it apparently appeared earlier in a 1987 paper by Jeon, Chung, and Bae, and doubtless was made by various other people before then. Here’s Wainer’s graphical explication from 2002 (adapted from Baker and Kramer’s 2001 paper): Here’s the version from our 2007 article (with Boris Shor, Joe Bafumi, and David Park): But I recommend Wainer’s article (linked to above) as the first thing to read on the topic of presenting aggregation paradoxes in a clear and grabby way. P.S. Robert Long writes in: I noticed your post ab
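A toy numeric illustration (the counts are made up) shows the aggregation reversal at the heart of the paradox: each subgroup favors one group, yet the pooled totals favor the other.

```python
# Made-up counts for illustration: two departments, acceptance rates by
# group. Purely descriptive -- no treatment or causal question involved.
applicants = {"dept_A": {"men": 800, "women": 100},
              "dept_B": {"men": 200, "women": 800}}
admitted   = {"dept_A": {"men": 480, "women": 70},
              "dept_B": {"men": 20,  "women": 160}}

def rate(dept, group):
    return admitted[dept][group] / applicants[dept][group]

for dept in applicants:
    print(f"{dept}: men {rate(dept, 'men'):.0%}, women {rate(dept, 'women'):.0%}")

overall = {g: sum(admitted[d][g] for d in admitted) /
              sum(applicants[d][g] for d in applicants)
           for g in ("men", "women")}
print(f"overall: men {overall['men']:.0%}, women {overall['women']:.0%}")
# Women have the higher rate in each department; men have the higher rate
# overall, because women disproportionately applied to the tougher department.
```

A BK-style plot displays this same structure graphically: subgroup rates together with subgroup sizes, so the reversal is visible at a glance.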


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Joshua Vogelstein pointed me to this post by Michael Nielsen on how to teach Simpson’s paradox. [sent-1, score-0.092]

2 I don’t know if Nielsen (and others) are aware that people have developed some snappy graphical methods for displaying Simpson’s paradox (and, more generally, aggregation issues). [sent-2, score-0.469]

3 Robert Long writes in: I noticed your post about Simpson’s paradox and wanted to let you know about another nice teaching approach using DAGs based on a paper by Pearl, but implemented in the fantastic DAGitty tool: http://dagitty.net/learn/simpson/ [sent-7, score-0.305]

4 I have used this to teach Simpson’s Paradox to masters-level students recently. [sent-8, score-0.25]

5 The module (led by Mark Gilthorpe) teaches advanced modelling concepts. [sent-9, score-0.071]

6 The DAGitty tool simulates the data which you can give to the students and ask them to explore. [sent-10, score-0.356]

7 You have a main exposure X and outcome Y, and various “potential confounders” Z1, Z2 etc. [sent-11, score-0.109]

8 The beauty of this is that by running models that successively add more of the potential confounders, the estimate for the main exposure X changes from positive to negative and back again. [sent-12, score-0.288]

9 Y~X+Z1 gives a correct estimate but Y~X+Z1+Z2 gives a biased estimate and Y~X+Z1+Z2+Z3 gives the correct estimate and so on. [sent-13, score-0.774]

10 I have mixed feelings about this particular tool as I often work in settings where the concept of “true causal effect” doesn’t mean much. [sent-16, score-0.686]

11 On the other hand, causal reasoning is often a central goal for researchers, in which case this tool could be helpful. [sent-17, score-0.722]

12 Let me clarify the above point about causal inference as it seems to have led to a lot of confusion in the comments: There are examples of conditioning paradoxes in which causal reasoning does not arise. [sent-21, score-1.195]

13 There’s no treatment involved at all, I’m just looking at different sorts of comparisons in the data. [sent-23, score-0.074]

14 Comparisons can be interesting and important even without a causal question. [sent-24, score-0.281]

15 I’m not dismissing the importance of causal inference (obviously not; look at the title of this blog! [sent-25, score-0.441]

16 ), I’m just saying that puzzles of conditioning arise even in non-causal settings, which suggests to me that causal reasoning is not necessary for the understanding of these problems, even though in many settings it can be useful. [sent-26, score-0.781]

17 A helpful analogy here might be to statistics and decision analysis. [sent-27, score-0.27]

18 Statistics is sometimes called the science of decisions, and statistical inference is sometimes framed as a decision problem. [sent-28, score-0.272]

19 I often find this helpful (hence the inclusion of a decision analysis chapter in BDA) but I also have seen many examples of statistical analyses where there is no corresponding decision, where the goal is to learn rather than to decide. [sent-29, score-0.432]

20 Hence, although I often find decision analysis to be useful, I don’t feel that it is a necessary part of the formulation of a statistical problem. [sent-30, score-0.363]
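The DAGitty exercise described above (sentences 6–9) can be sketched in a few lines of simulation. The DAG below is an assumption chosen to reproduce the sign-flipping behavior Long describes, not DAGitty’s actual data-generating process: Z1 is a genuine confounder, while Z2 is a collider that merely looks like one.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000

# Assumed DAG (for illustration only):
#   z1 -> x, z1 -> y     (z1 is a genuine confounder)
#   x  -> y              (true causal effect of x on y is +1.0)
#   z3 -> y
#   x  -> z2 <- z3       (z2 is a collider, not a confounder)
z1 = rng.normal(size=n)
z3 = rng.normal(size=n)
x = 1.5 * z1 + rng.normal(size=n)
y = 1.0 * x - 3.0 * z1 + 3.0 * z3 + rng.normal(size=n)
z2 = x + z3 + rng.normal(size=n)

def x_coef(*covariates):
    """OLS coefficient on x from regressing y on x plus the given covariates."""
    design = np.column_stack([np.ones(n), x, *covariates])
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    return beta[1]

print(f"y ~ x               : {x_coef():+.2f}")           # negative: confounded by z1
print(f"y ~ x + z1          : {x_coef(z1):+.2f}")         # ~ +1.0: backdoor path blocked
print(f"y ~ x + z1 + z2     : {x_coef(z1, z2):+.2f}")     # negative again: conditioning on
                                                          # collider z2 opens x -> z2 <- z3 -> y
print(f"y ~ x + z1 + z2 + z3: {x_coef(z1, z2, z3):+.2f}") # ~ +1.0: adding z3 closes that path
```

The point of the exercise survives the simplification: throwing every “potential confounder” into the model is not a safe default, since whether a covariate reduces or introduces bias depends on the DAG.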


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('simpson', 0.282), ('causal', 0.281), ('wainer', 0.194), ('dagitty', 0.193), ('decision', 0.188), ('tool', 0.181), ('paradoxes', 0.175), ('kramer', 0.175), ('reasoning', 0.167), ('nielsen', 0.158), ('paradox', 0.156), ('aggregation', 0.144), ('baker', 0.136), ('settings', 0.131), ('conditioning', 0.12), ('estimate', 0.109), ('exposure', 0.109), ('gives', 0.103), ('often', 0.093), ('graphical', 0.093), ('teach', 0.092), ('explication', 0.088), ('dags', 0.088), ('dag', 0.088), ('simulates', 0.088), ('students', 0.087), ('led', 0.087), ('hence', 0.085), ('inference', 0.084), ('confounders', 0.083), ('doubtless', 0.083), ('perl', 0.083), ('necessary', 0.082), ('helpful', 0.082), ('adapted', 0.079), ('dismissing', 0.076), ('fantastic', 0.076), ('snappy', 0.076), ('chung', 0.076), ('comparisons', 0.074), ('grabby', 0.074), ('vogelstein', 0.074), ('paper', 0.073), ('bafumi', 0.072), ('boris', 0.071), ('masters', 0.071), ('module', 0.071), ('potential', 0.07), ('correct', 0.069), ('inclusion', 0.069)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0000004 2286 andrew gelman stats-2014-04-08-Understanding Simpson’s paradox using a graph


2 0.20529087 1418 andrew gelman stats-2012-07-16-Long discussion about causal inference and the use of hierarchical models to bridge between different inferential settings

Introduction: Elias Bareinboim asked what I thought about his comment on selection bias in which he referred to a paper by himself and Judea Pearl, “Controlling Selection Bias in Causal Inference.” I replied that I have no problem with what he wrote, but that from my perspective I find it easier to conceptualize such problems in terms of multilevel models. I elaborated on that point in a recent post , “Hierarchical modeling as a framework for extrapolation,” which I think was read by only a few people (I say this because it received only two comments). I don’t think Bareinboim objected to anything I wrote, but like me he is comfortable working within his own framework. He wrote the following to me: In some sense, “not ad hoc” could mean logically consistent. In other words, if one agrees with the assumptions encoded in the model, one must also agree with the conclusions entailed by these assumptions. I am not aware of any other way of doing mathematics. As it turns out, to get causa

3 0.20273584 1939 andrew gelman stats-2013-07-15-Forward causal reasoning statements are about estimation; reverse causal questions are about model checking and hypothesis generation

Introduction: Consider two broad classes of inferential questions : 1. Forward causal inference . What might happen if we do X? What are the effects of smoking on health, the effects of schooling on knowledge, the effect of campaigns on election outcomes, and so forth? 2. Reverse causal inference . What causes Y? Why do more attractive people earn more money? Why do many poor people vote for Republicans and rich people vote for Democrats? Why did the economy collapse? When statisticians and econometricians write about causal inference, they focus on forward causal questions. Rubin always told us: Never ask Why? Only ask What if? And, from the econ perspective, causation is typically framed in terms of manipulations: if x had changed by 1, how much would y be expected to change, holding all else constant? But reverse causal questions are important too. They’re a natural way to think (consider the importance of the word “Why”) and are arguably more important than forward questions.

4 0.15989687 1675 andrew gelman stats-2013-01-15-“10 Things You Need to Know About Causal Effects”

Introduction: Macartan Humphreys pointed me to this excellent guide . Here are the 10 items: 1. A causal claim is a statement about what didn’t happen. 2. There is a fundamental problem of causal inference. 3. You can estimate average causal effects even if you cannot observe any individual causal effects. 4. If you know that, on average, A causes B and that B causes C, this does not mean that you know that A causes C. 5. The counterfactual model is all about contribution, not attribution. 6. X can cause Y even if there is no “causal path” connecting X and Y. 7. Correlation is not causation. 8. X can cause Y even if X is not a necessary condition or a sufficient condition for Y. 9. Estimating average causal effects does not require that treatment and control groups are identical. 10. There is no causation without manipulation. The article follows with crisp discussions of each point. My favorite is item #6, not because it’s the most important but because it brings in some real s

5 0.15954815 879 andrew gelman stats-2011-08-29-New journal on causal inference

Introduction: Judea Pearl is starting an (online) Journal of Causal Inference. The first issue is planned for Fall 2011 and the website is now open for submissions. Here’s the background (from Pearl): Existing discipline-specific journals tend to bury causal analysis in the language and methods of traditional statistical methodologies, creating the inaccurate impression that causal questions can be handled by routine methods of regression, simultaneous equations or logical implications, and glossing over the special ingredients needed for causal analysis. In contrast, Journal of Causal Inference highlights both the uniqueness and interdisciplinary nature of causal research. In addition to significant original research articles, Journal of Causal Inference also welcomes: 1) Submissions that synthesize and assess cross-disciplinary methodological research 2) Submissions that discuss the history of the causal inference field and its philosophical underpinnings 3) Unsolicited short communi

6 0.12099984 1778 andrew gelman stats-2013-03-27-My talk at the University of Michigan today 4pm

7 0.1196712 1888 andrew gelman stats-2013-06-08-New Judea Pearl journal of causal inference

8 0.1155223 2285 andrew gelman stats-2014-04-07-On deck this week

9 0.11396427 1474 andrew gelman stats-2012-08-29-More on scaled-inverse Wishart and prior independence

10 0.11375821 1575 andrew gelman stats-2012-11-12-Thinking like a statistician (continuously) rather than like a civilian (discretely)

11 0.11183321 2204 andrew gelman stats-2014-02-09-Keli Liu and Xiao-Li Meng on Simpson’s paradox

12 0.10999425 878 andrew gelman stats-2011-08-29-Infovis, infographics, and data visualization: Where I’m coming from, and where I’d like to go

13 0.10719091 803 andrew gelman stats-2011-07-14-Subtleties with measurement-error models for the evaluation of wacky claims

14 0.10666004 1336 andrew gelman stats-2012-05-22-Battle of the Repo Man quotes: Reid Hastie’s turn

15 0.10284089 1732 andrew gelman stats-2013-02-22-Evaluating the impacts of welfare reform?

16 0.10189557 2245 andrew gelman stats-2014-03-12-More on publishing in journals

17 0.10072094 960 andrew gelman stats-2011-10-15-The bias-variance tradeoff

18 0.099073179 1409 andrew gelman stats-2012-07-08-Is linear regression unethical in that it gives more weight to cases that are far from the average?

19 0.098493621 2083 andrew gelman stats-2013-10-31-Value-added modeling in education: Gaming the system by sending kids on a field trip at test time

20 0.09775576 763 andrew gelman stats-2011-06-13-Inventor of Connect Four dies at 91


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.207), (1, 0.02), (2, -0.019), (3, -0.052), (4, -0.006), (5, -0.003), (6, -0.06), (7, 0.05), (8, 0.037), (9, 0.026), (10, -0.02), (11, 0.037), (12, 0.011), (13, -0.006), (14, 0.073), (15, -0.007), (16, -0.058), (17, 0.019), (18, -0.072), (19, 0.058), (20, -0.029), (21, -0.062), (22, 0.062), (23, 0.043), (24, 0.095), (25, 0.142), (26, 0.046), (27, -0.007), (28, -0.005), (29, 0.025), (30, 0.025), (31, -0.019), (32, -0.005), (33, -0.026), (34, -0.05), (35, -0.011), (36, 0.058), (37, -0.018), (38, 0.006), (39, 0.031), (40, -0.066), (41, 0.002), (42, -0.059), (43, -0.013), (44, -0.014), (45, 0.014), (46, -0.035), (47, 0.026), (48, 0.008), (49, -0.056)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.98153973 2286 andrew gelman stats-2014-04-08-Understanding Simpson’s paradox using a graph


2 0.83027166 1675 andrew gelman stats-2013-01-15-“10 Things You Need to Know About Causal Effects”

Introduction: Macartan Humphreys pointed me to this excellent guide . Here are the 10 items: 1. A causal claim is a statement about what didn’t happen. 2. There is a fundamental problem of causal inference. 3. You can estimate average causal effects even if you cannot observe any individual causal effects. 4. If you know that, on average, A causes B and that B causes C, this does not mean that you know that A causes C. 5. The counterfactual model is all about contribution, not attribution. 6. X can cause Y even if there is no “causal path” connecting X and Y. 7. Correlation is not causation. 8. X can cause Y even if X is not a necessary condition or a sufficient condition for Y. 9. Estimating average causal effects does not require that treatment and control groups are identical. 10. There is no causation without manipulation. The article follows with crisp discussions of each point. My favorite is item #6, not because it’s the most important but because it brings in some real s

3 0.82342809 1996 andrew gelman stats-2013-08-24-All inference is about generalizing from sample to population

Introduction: Jeff Walker writes: Your blog has skirted around the value of observational studies and chided folks for using causal language when they only have associations but I sense that you ultimately find value in these associations. I would love for you to expand this thought in a blog. Specifically: Does a measured association “suggest” a causal relationship? Are measured associations a good and efficient way to narrow the field of things that should be studied? Of all the things we should pursue, should we start with the stuff that has some largish measured association? Certainly many associations are not directly causal but due to joint association. Similarly, there must be many variables that are directly causally associated ( A -> B) but the effect, measured as an association, is masked by confounders. So if we took the “measured associations are worthwhile” approach, we’d never or rarely find the masked effects. But I’d also like to know if one is more likely to find a large causal

4 0.81309038 550 andrew gelman stats-2011-02-02-An IV won’t save your life if the line is tangled

Introduction: Alex Tabarrok quotes Randall Morck and Bernard Yeung on difficulties with instrumental variables. This reminded me of some related things I’ve written. In the official story the causal question comes first and then the clever researcher comes up with an IV. I suspect that often it’s the other way around: you find a natural experiment and look at the consequences that flow from it. And maybe that’s not such a bad thing. See section 4 of this article . More generally, I think economists and political scientists are currently a bit overinvested in identification strategies. I agree with Heckman’s point (as I understand it) that ultimately we should be building models that work for us rather than always thinking we can get causal inference on the cheap, as it were, by some trick or another. (This is a point I briefly discuss in a couple places here and also in my recent paper for the causality volume that Don Green etc are involved with.) I recently had this discussion wi

5 0.8129676 1136 andrew gelman stats-2012-01-23-Fight! (also a bit of reminiscence at the end)

Introduction: Martin Lindquist and Michael Sobel published a fun little article in Neuroimage on models and assumptions for causal inference with intermediate outcomes. As their subtitle indicates (“A response to the comments on our comment”), this is a topic of some controversy. Lindquist and Sobel write: Our original comment (Lindquist and Sobel, 2011) made explicit the types of assumptions neuroimaging researchers are making when directed graphical models (DGMs), which include certain types of structural equation models (SEMs), are used to estimate causal effects. When these assumptions, which many researchers are not aware of, are not met, parameters of these models should not be interpreted as effects. . . . [Judea] Pearl does not disagree with anything we stated. However, he takes exception to our use of potential outcomes notation, which is the standard notation used in the statistical literature on causal inference, and his comment is devoted to promoting his alternative conventions. [C

6 0.80732185 1492 andrew gelman stats-2012-09-11-Using the “instrumental variables” or “potential outcomes” approach to clarify causal thinking

7 0.80308205 1939 andrew gelman stats-2013-07-15-Forward causal reasoning statements are about estimation; reverse causal questions are about model checking and hypothesis generation

8 0.79974121 1624 andrew gelman stats-2012-12-15-New prize on causality in statistics education

9 0.78916556 1418 andrew gelman stats-2012-07-16-Long discussion about causal inference and the use of hierarchical models to bridge between different inferential settings

10 0.78825402 340 andrew gelman stats-2010-10-13-Randomized experiments, non-randomized experiments, and observational studies

11 0.78190517 1888 andrew gelman stats-2013-06-08-New Judea Pearl journal of causal inference

12 0.77839041 1802 andrew gelman stats-2013-04-14-Detecting predictability in complex ecosystems

13 0.77226615 879 andrew gelman stats-2011-08-29-New journal on causal inference

14 0.76355922 1336 andrew gelman stats-2012-05-22-Battle of the Repo Man quotes: Reid Hastie’s turn

15 0.75569552 1778 andrew gelman stats-2013-03-27-My talk at the University of Michigan today 4pm

16 0.74325758 785 andrew gelman stats-2011-07-02-Experimental reasoning in social science

17 0.7432307 2097 andrew gelman stats-2013-11-11-Why ask why? Forward causal inference and reverse causal questions

18 0.73723716 1732 andrew gelman stats-2013-02-22-Evaluating the impacts of welfare reform?

19 0.73510069 1645 andrew gelman stats-2012-12-31-Statistical modeling, causal inference, and social science

20 0.72895789 807 andrew gelman stats-2011-07-17-Macro causality


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(12, 0.015), (13, 0.017), (16, 0.095), (18, 0.023), (21, 0.04), (24, 0.138), (47, 0.025), (50, 0.029), (55, 0.033), (59, 0.028), (65, 0.018), (76, 0.04), (85, 0.022), (89, 0.011), (99, 0.356)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.98638803 2286 andrew gelman stats-2014-04-08-Understanding Simpson’s paradox using a graph


2 0.97584498 702 andrew gelman stats-2011-05-09-“Discovered: the genetic secret of a happy life”

Introduction: I took the above headline from a news article in the (London) Independent by Jeremy Laurance reporting a study by Jan-Emmanuel De Neve, James Fowler, and Bruno Frey that reportedly just appeared in the Journal of Human Genetics. One of the pleasures of blogging is that I can go beyond the usual journalistic approaches to such a story: (a) puffing it, (b) debunking it, (c) reporting it completely flatly. Even convex combinations of (a), (b), (c) do not allow what I’d like to do, which is to explore the claims and follow wherever my exploration takes me. (And one of the pleasures of building my own audience is that I don’t need to endlessly explain background detail as was needed on a general-public site such as 538.) OK, back to the genetic secret of a happy life. Or, in the words the authors of the study, a gene that “explains less than one percent of the variation in life satisfaction.” “The genetic secret” or “less than one percent of the variation”? Perhaps the secre

3 0.9752143 2350 andrew gelman stats-2014-05-27-A whole fleet of gremlins: Looking more carefully at Richard Tol’s twice-corrected paper, “The Economic Effects of Climate Change”

Introduction: We had a discussion the other day of a paper, “The Economic Effects of Climate Change,” by economist Richard Tol. The paper came to my attention after I saw a notice from Adam Marcus that it was recently revised because of data errors. But after looking at the paper more carefully, I see a bunch of other problems that, to me, make the whole analysis close to useless as it stands. I think this is worth discussing because the paper has been somewhat influential (so far cited 328 times, according to Google Scholar) and has even been cited in the popular press as evidence that “Climate change has done more good than harm so far and is likely to continue doing so for most of this century . . . There are many likely effects of climate change: positive and negative, economic and ecological, humanitarian and financial. And if you aggregate them all, the overall effect is positive today — and likely to stay positive until around 2080. That was the conclusion of Professor Richard Tol

4 0.97466397 2091 andrew gelman stats-2013-11-06-“Marginally significant”

Introduction: Jeremy Fox writes: You’ve probably seen this [by Matthew Hankins]. . . . Everyone else on Twitter already has. It’s a graph of the frequency with which the phrase “marginally significant” occurs in association with different P values. Apparently it’s real data, from a Google Scholar search, though I haven’t tried to replicate the search myself. My reply: I admire the effort that went into the data collection and the excellent display (following Bill Cleveland etc., I’d prefer a landscape rather than portrait orientation of the graph, also I’d prefer a gritty histogram rather than a smooth density, and I don’t like the y-axis going below zero, nor do I like the box around the graph, also there’s that weird R default where the axis labels are so far from the actual axes, I don’t know whassup with that . . . but these are all minor, minor issues, certainly I’ve done much worse myself many times even in published articles; see the presentation here for lots of examples), an

5 0.97349697 2120 andrew gelman stats-2013-12-02-Does a professor’s intervention in online discussions have the effect of prolonging discussion or cutting it off?

Introduction: Usually I don’t post answers to questions right away, but Mark Liberman was kind enough to answer my question yesterday so I think I should reciprocate. Mark asks: I’ve been playing around with data from Coursera transaction logs, for an economics course and a modern poetry course so far. For the Modern Poetry course, where there’s quite a bit of activity in the forums, the instructor (Al Filreis) is interested in what the factors are that lead to discussion threads being longer or shorter. For example, he wonders whether his own (fairly frequent) interventions have the effect of prolonging discussion or cutting it off. Some background explorations are here with the relevant stuff mostly at the end, including this . With respect to Al’s specific question, my thought was to look at each of his comments, each one being the nth in some sequence, and to look at the empirical probability of continuing (at all, or perhaps for at least 1,2,3,… additional turns) in those cases c

6 0.9733572 2142 andrew gelman stats-2013-12-21-Chasing the noise

7 0.9732905 855 andrew gelman stats-2011-08-16-Infovis and statgraphics update update

8 0.97308016 2235 andrew gelman stats-2014-03-06-How much time (if any) should we spend criticizing research that’s fraudulent, crappy, or just plain pointless?

9 0.97267771 32 andrew gelman stats-2010-05-14-Causal inference in economics

10 0.9726721 288 andrew gelman stats-2010-09-21-Discussion of the paper by Girolami and Calderhead on Bayesian computation

11 0.97247225 1452 andrew gelman stats-2012-08-09-Visually weighting regression displays

12 0.97219741 1650 andrew gelman stats-2013-01-03-Did Steven Levitt really believe in 2008 that Obama “would be the greatest president in history”?

13 0.97211653 1289 andrew gelman stats-2012-04-29-We go to war with the data we have, not the data we want

14 0.97204632 1453 andrew gelman stats-2012-08-10-Quotes from me!

15 0.97203362 257 andrew gelman stats-2010-09-04-Question about standard range for social science correlations

16 0.97199219 753 andrew gelman stats-2011-06-09-Allowing interaction terms to vary

17 0.97193623 390 andrew gelman stats-2010-11-02-Fragment of statistical autobiography

18 0.97192311 1596 andrew gelman stats-2012-11-29-More consulting experiences, this time in computational linguistics

19 0.97181201 1529 andrew gelman stats-2012-10-11-Bayesian brains?

20 0.97150433 584 andrew gelman stats-2011-02-22-“Are Wisconsin Public Employees Underpaid?”