andrew_gelman_stats-2013-2115 knowledge-graph by maker-knowledge-mining

2115 andrew gelman stats-2013-11-27-Three unblinded mice


meta info for this blog

Source: html

Introduction: Howard Wainer points us to a recent news article by Jennifer Couzin-Frankel, who writes about the selection bias arising from the routine use of outcome criteria to exclude animals in medical trials. In statistics and econometrics, this is drilled into us: Selection on x is OK, selection on y is not OK. But apparently in biomedical research this principle is not so well known (or, perhaps, it is all too well known). Couzin-Frankel starts with an example of a drug trial in which 3 of the 10 mice in the treatment group were removed from the analysis because they had died from massive strokes. This sounds pretty bad, but it’s even worse than that: this was from a paper under review that “described how a new drug protected a rodent’s brain after a stroke.” Death isn’t a very good way to protect a rodent’s brain! The news article continues: “This isn’t fraud,” says Dirnagl [the outside reviewer who caught this particular problem], who often works with mice. Dropping animals from a research study for any number of reasons, he explains, is an entrenched, accepted part of the culture. . . .
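The “selection on y” problem is easy to check with a small simulation. The sketch below is not from the original post: the 10-vs-10 group sizes echo the mouse example, while the continuous outcome and the rule “drop the three worst-off treated animals” are hypothetical stand-ins for death from stroke. Even when the drug does exactly nothing, outcome-based exclusion manufactures an apparent benefit:

    import numpy as np

    rng = np.random.default_rng(1)
    n_sims, true_effect = 10_000, 0.0   # assume the drug does nothing

    full, selected = [], []
    for _ in range(n_sims):
        control = rng.normal(0.0, 1.0, size=10)            # outcomes for 10 control mice
        treated = rng.normal(true_effect, 1.0, size=10)    # outcomes for 10 treated mice
        kept = np.sort(treated)[3:]                        # selection on y: drop the 3 worst
        full.append(treated.mean() - control.mean())
        selected.append(kept.mean() - control.mean())

    print(f"mean estimated effect, all mice kept:   {np.mean(full):+.2f}")
    print(f"mean estimated effect, 3 worst dropped: {np.mean(selected):+.2f}")

The first average comes out near zero; the second comes out around +0.46 standard deviations, which is pure selection bias. Selecting on x (say, dropping mice by a pre-treatment covariate, applied to both groups alike) would leave the comparison unbiased.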


Summary: the most important sentences, as generated by a tfidf model (a sketch of one possible scoring scheme follows the list)

sentIndex sentText sentNum sentScore

1 Howard Wainer points us to a recent news article by Jennifer Couzin-Frankel, who writes about the selection bias arising from the routine use of outcome criteria to exclude animals in medical trials. [sent-1, score-0.901]

2 Couzin-Frankel starts with an example of a drug trial in which 3 of the 10 mice in the treatment group were removed from the analysis because they had died from massive strokes. [sent-4, score-0.582]

3 This sounds pretty bad, but it’s even worse than that: this was from a paper under review that “described how a new drug protected a rodent’s brain after a stroke. [sent-5, score-0.314]

4 The news article continues: “This isn’t fraud,” says Dirnagl [the outside reviewer who caught this particular problem], who often works with mice. [sent-7, score-0.231]

5 Dropping animals from a research study for any number of reasons, he explains, is an entrenched, accepted part of the culture. [sent-8, score-0.266]

6 … People exclude animals at their whim, they just do it and they don’t report it.” [sent-10, score-0.444]

7 Some animals might be fearful, or biters, or they might just be curled up in the corner, asleep. [sent-15, score-0.266]

8 Accepting uncertainty and embracing variation: Ultimately I think many of these problems come from a fundamental, fundamental misunderstanding: lack of recognition of uncertainty and variability. [sent-26, score-0.44]

9 And, the (implicit) idea is that if it works, it works for everyone. [sent-28, score-0.218]

10 From that perspective, it makes perfect sense to exclude treated mice who die early: these are just noise cases that interfere with the signal. [sent-30, score-0.523]

11 That is, statistics is seen, not as a way to model variation, but as a way to remove uncertainty . [sent-32, score-0.241]

12 There is of course some truth to this attitude—the law of large numbers and all that—but it’s hard to use statistics well if you think you know the answer ahead of time. [sent-33, score-0.38]

13 [And, no, for the anti-Bayesians out there, using a prior distribution is not “thinking you know the answer ahead of time.” [sent-34, score-0.45]

14 A prior distribution expresses what you know before you include new data. [sent-36, score-0.239]

15 Of course it does not imply that you know the answer ahead of time; indeed, the whole point of analyzing new data is that, before seeing such data, you remain uncertain about key aspects of the world. [sent-37, score-0.38]

16 By believing this (or acting as if you believe it), you are denying the existence of variation. [sent-40, score-0.425]

17 By believing this (or acting as if you believe it), you are denying the existence of uncertainty. [sent-44, score-0.425]

18 And this will lead you to brush aside criticisms and think of issues such as selection bias as technicalities rather than serious concerns. [sent-45, score-0.313]

19 So, for all we know, there might be dozens of papers published by that research group, all with results based on discarding dead animals from the treatment group, and we have no way of knowing about it. [sent-54, score-0.464]

20 I guess maybe someone can do a search on all published papers involving mouse trials of drugs for protecting the brain after stroke, just looking at those studies with 10 mice in the control group and 7 mice in the treatment group. [sent-56, score-0.967]
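How the sentScore column above was computed is not documented on this page. A common recipe, sketched here with scikit-learn (the function name and the mean-weight scoring rule are my assumptions, not the pipeline's), is to rank each sentence by the average TF-IDF weight of its terms and report the top few in document order:

    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer

    def tfidf_summary(sentences, top_k=20):
        """Rank sentences by the mean TF-IDF weight of the terms they contain."""
        vec = TfidfVectorizer(stop_words="english")
        X = vec.fit_transform(sentences)                  # one row per sentence
        n_terms = np.maximum((X > 0).sum(axis=1).A1, 1)   # distinct terms per sentence
        scores = X.sum(axis=1).A1 / n_terms               # mean weight per sentence
        top = np.argsort(scores)[::-1][:top_k]
        # return in document order, like the sentIndex column above
        return [(int(i), sentences[i], float(scores[i])) for i in np.sort(top)]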


similar blogs, computed by the tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('animals', 0.266), ('mice', 0.262), ('exclude', 0.178), ('rodent', 0.165), ('whim', 0.165), ('works', 0.151), ('selection', 0.137), ('fraud', 0.134), ('brain', 0.132), ('biomedical', 0.131), ('treatment', 0.127), ('ahead', 0.116), ('denying', 0.114), ('laugh', 0.111), ('group', 0.109), ('believing', 0.106), ('acting', 0.104), ('existence', 0.101), ('variation', 0.101), ('bias', 0.101), ('uncertainty', 0.099), ('worse', 0.098), ('teaching', 0.097), ('answer', 0.095), ('course', 0.085), ('drug', 0.084), ('know', 0.084), ('cases', 0.083), ('news', 0.08), ('distribution', 0.078), ('tool', 0.077), ('ok', 0.077), ('prior', 0.077), ('mouse', 0.075), ('drilled', 0.075), ('brush', 0.075), ('perpetrator', 0.075), ('fundamental', 0.073), ('entrenched', 0.071), ('fearful', 0.071), ('way', 0.071), ('outcome', 0.07), ('medical', 0.069), ('researchers', 0.068), ('lisa', 0.068), ('stroke', 0.068), ('embracing', 0.068), ('whichever', 0.068), ('rahul', 0.068), ('idea', 0.067)]
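A sketch of how a term-weight list like the one above, and the similarity ranking that follows it, might be produced. The maker-knowledge-mining pipeline itself is not shown on this page, so treat this scikit-learn version as one plausible reconstruction. (The same-blog simValue of 1.0000002 below is floating-point noise around an exact self-similarity of 1.)

    from sklearn.feature_extraction.text import TfidfVectorizer

    def top_terms_and_neighbors(corpus, doc_index, top_n=50):
        """Top TF-IDF terms for one post, plus its cosine similarity to every post."""
        vec = TfidfVectorizer(stop_words="english")
        X = vec.fit_transform(corpus)                  # posts x terms, rows L2-normalized
        row = X[doc_index].toarray().ravel()
        terms = vec.get_feature_names_out()
        order = row.argsort()[::-1][:top_n]
        top = [(terms[i], round(float(row[i]), 3)) for i in order if row[i] > 0]
        # rows have unit length, so a dot product is already the cosine similarity
        sims = (X @ X[doc_index].T).toarray().ravel()
        return top, sims.argsort()[::-1], sims         # the post itself ranks first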

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0000002 2115 andrew gelman stats-2013-11-27-Three unblinded mice

Introduction: (same introduction as quoted at the top of this page)

2 0.15674226 1435 andrew gelman stats-2012-07-30-Retracted articles and unethical behavior in economics journals?

Introduction: Stan Liebowitz writes: Have you ever heard of an article being retracted in economics? I know you have only been doing this for a few years but I suspect that the answer is that none or very few are retracted. No economist would ever deceive another. There is virtually no interest in detecting cheating. And what good would that do if there is no form of punishment? I say this because I think I have found a case in one of our top journals but the editor allowed the authors of the original article to write an anonymous referee report defending themselves and used this report to reject my comment even though an independent referee recommended publication. My reply: I wonder how this sort of thing will change in the future as journals become less important. My impression is that, on one side, researchers are increasingly citing NBER reports, Arxiv preprints, and the like; while, from the other direction, journals such as Science and Nature are developing the reputations of being “t

3 0.13861611 1695 andrew gelman stats-2013-01-28-Economists argue about Bayes

Introduction: Robert Bell pointed me to this post by Brad De Long on Bayesian statistics, and then I also noticed this from Noah Smith, who wrote: My impression is that although the Bayesian/Frequentist debate is interesting and intellectually fun, there’s really not much “there” there… despite being so-hip-right-now, Bayesian is not the Statistical Jesus. I’m happy to see the discussion going in this direction. Twenty-five years ago or so, when I got into this biz, there were some serious anti-Bayesian attitudes floating around in mainstream statistics. Discussions in the journals sometimes devolved into debates of the form, “Bayesians: knaves or fools?”. You’d get all sorts of free-floating skepticism about any prior distribution at all, even while people were accepting without question (and doing theory on) logistic regressions, proportional hazards models, and all sorts of strong strong models. (In the subfield of survey sampling, various prominent researchers would refuse to mode

4 0.13843991 411 andrew gelman stats-2010-11-13-Ethical concerns in medical trials

Introduction: I just read this article on the treatment of medical volunteers, written by doctor and bioethicist Carl Ellliott. As a statistician who has done a small amount of consulting for pharmaceutical companies, I have a slightly different perspective. As a doctor, Elliott focuses on individual patients, whereas, as a statistician, I’ve been trained to focus on the goal of accurately estimate treatment effects. I’ll go through Elliott’s article and give my reactions. Elliott: In Miami, investigative reporters for Bloomberg Markets magazine discovered that a contract research organisation called SFBC International was testing drugs on undocumented immigrants in a rundown motel; since that report, the motel has been demolished for fire and safety violations. . . . SFBC had recently been named one of the best small businesses in America by Forbes magazine. The Holiday Inn testing facility was the largest in North America, and had been operating for nearly ten years before inspecto

5 0.13765217 1628 andrew gelman stats-2012-12-17-Statistics in a world where nothing is random

Introduction: Rama Ganesan writes: I think I am having an existential crisis. I used to work with animals (rats, mice, gerbils etc.) Then I started to work in marketing research where we did have some kind of random sampling procedure. So up until a few years ago, I was sort of okay. Now I am teaching marketing research, and I feel like there is no real random sampling anymore. I take pains to get students to understand what random means, and then the whole lot of inferential statistics. Then almost anything they do – the sample is not random. They think I am contradicting myself. They use convenience samples at every turn – for their school work, and the enormous amount on online surveying that gets done. Do you have any suggestions for me? Other than say, something like this . My reply: Statistics does not require randomness. The three essential elements of statistics are measurement, comparison, and variation. Randomness is one way to supply variation, and it’s one way to model

6 0.12743349 2245 andrew gelman stats-2014-03-12-More on publishing in journals

7 0.12728451 1527 andrew gelman stats-2012-10-10-Another reason why you can get good inferences from a bad model

8 0.12565286 2235 andrew gelman stats-2014-03-06-How much time (if any) should we spend criticizing research that’s fraudulent, crappy, or just plain pointless?

9 0.11936584 1414 andrew gelman stats-2012-07-12-Steven Pinker’s unconvincing debunking of group selection

10 0.11935905 1418 andrew gelman stats-2012-07-16-Long discussion about causal inference and the use of hierarchical models to bridge between different inferential settings

11 0.118513 1941 andrew gelman stats-2013-07-16-Priors

12 0.11761227 1878 andrew gelman stats-2013-05-31-How to fix the tabloids? Toward replicable social science research

13 0.11459213 2109 andrew gelman stats-2013-11-21-Hidden dangers of noninformative priors

14 0.11221954 2137 andrew gelman stats-2013-12-17-Replication backlash

15 0.11176213 1155 andrew gelman stats-2012-02-05-What is a prior distribution?

16 0.11157303 2099 andrew gelman stats-2013-11-13-“What are some situations in which the classical approach (or a naive implementation of it, based on cookbook recipes) gives worse results than a Bayesian approach, results that actually impeded the science?”

17 0.1115362 1364 andrew gelman stats-2012-06-04-Massive confusion about a study that purports to show that exercise may increase heart risk

18 0.1096192 1955 andrew gelman stats-2013-07-25-Bayes-respecting experimental design and other things

19 0.10893887 1267 andrew gelman stats-2012-04-17-Hierarchical-multilevel modeling with “big data”

20 0.10835133 1891 andrew gelman stats-2013-06-09-“Heterogeneity of variance in experimental studies: A challenge to conventional interpretations”


similar blogs, computed by the lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.287), (1, 0.0), (2, -0.022), (3, -0.079), (4, -0.028), (5, -0.021), (6, 0.027), (7, 0.035), (8, -0.021), (9, 0.031), (10, -0.002), (11, -0.006), (12, 0.044), (13, -0.047), (14, -0.024), (15, -0.011), (16, -0.007), (17, 0.014), (18, 0.037), (19, -0.014), (20, -0.017), (21, -0.029), (22, -0.043), (23, 0.022), (24, -0.02), (25, 0.058), (26, -0.054), (27, -0.026), (28, -0.027), (29, 0.043), (30, -0.001), (31, 0.019), (32, -0.006), (33, 0.031), (34, -0.014), (35, 0.04), (36, -0.021), (37, 0.025), (38, -0.03), (39, 0.008), (40, 0.049), (41, -0.017), (42, 0.001), (43, -0.053), (44, 0.058), (45, 0.054), (46, 0.039), (47, 0.033), (48, 0.033), (49, -0.001)]
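The 50 topicId/topicWeight pairs above place this post in a 50-dimensional latent semantic space, and the neighbor list below ranks posts by cosine similarity in that space. A gensim-style sketch (num_topics=50 is read off the vector length above; the other settings are assumptions):

    from gensim import corpora, models, similarities

    def lsi_neighbors(tokenized_posts, query_index, num_topics=50):
        """Project TF-IDF vectors into an LSI space and rank posts by similarity."""
        dictionary = corpora.Dictionary(tokenized_posts)
        bow = [dictionary.doc2bow(post) for post in tokenized_posts]
        tfidf = models.TfidfModel(bow)
        lsi = models.LsiModel(tfidf[bow], id2word=dictionary, num_topics=num_topics)
        index = similarities.MatrixSimilarity(lsi[tfidf[bow]])
        sims = index[lsi[tfidf[bow[query_index]]]]     # cosine similarity to every post
        return sorted(enumerate(sims), key=lambda pair: -pair[1])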

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.97316152 2115 andrew gelman stats-2013-11-27-Three unblinded mice

Introduction: (same introduction as quoted at the top of this page)

2 0.8012529 2174 andrew gelman stats-2014-01-17-How to think about the statistical evidence when the statistical evidence can’t be conclusive?

Introduction: There’s a paradigm in applied statistics that goes something like this: 1. There is a scientific or policy question of some theoretical or practical importance. 2. Researchers gather data on relevant outcomes and perform a statistical analysis, ideally leading to a clear conclusion (p less than 0.05, or a strong posterior distribution, or good predictive performance, or high reliability and validity, whatever). 3. This conclusion informs policy. This paradigm has room for positive findings (for example, that a new program is statistically significantly better, or statistically significantly worse than what came before) or negative findings (data are inconclusive, further study is needed), even if negative findings seem less likely to make their way into the textbooks. But what happens when step 2 simply isn’t possible. This came up a few years ago—nearly 10 years ago, now!—with the excellent paper by Donohue and Wolfers which explained why it’s just about impossible to

3 0.79561543 2241 andrew gelman stats-2014-03-10-Preregistration: what’s in it for you?

Introduction: Chris Chambers pointed me to a blog by someone called Neuroskeptic who suggested that I preregister my political science studies: So when Andrew Gelman (let’s say) is going to start using a new approach, he goes on Twitter, or on his blog, and posts a bare-bones summary of what he’s going to do. Then he does it. If he finds something interesting, he writes it up as a paper, citing that tweet or post as his preregistration. . . . I think this approach has some benefits but doesn’t really address the issues of preregistration that concern me—but I’d like to spend an entire blog post explaining why. I have two key points: 1. If your study is crap, preregistration might fix it. Preregistration is fine—indeed, the wide acceptance of preregistration might well motivate researchers to not do so many crap studies—but it doesn’t solve fundamental problems of experimental design. 2. “Preregistration” seems to mean different things in different scenarios: A. When the concern is

4 0.79099268 1793 andrew gelman stats-2013-04-08-The Supreme Court meets the fallacy of the one-sided bet

Introduction: Doug Hartmann writes ( link from Jay Livingston): Justice Antonin Scalia’s comment in the Supreme Court hearings on the U.S. law defining marriage that “there’s considerable disagreement among sociologists as to what the consequences of raising a child in a single-sex family, whether that is harmful to the child or not.” Hartman argues that Scalia is factually incorrect—there is not actually “considerable disagreement among sociologists” on this issue—and quotes a recent report from the American Sociological Association to this effect. Assuming there’s no other considerable group of sociologists (Hartman knows of only one small group) arguing otherwise, it seems that Hartman has a point. Scalia would’ve been better off omitting the phrase “among sociologists”—then he’d have been on safe ground, because you can always find somebody to take a position on the issue. Jerry Falwell’s no longer around but there’s a lot more where he came from. Even among scientists, there’s

5 0.78771132 2355 andrew gelman stats-2014-05-31-Jessica Tracy and Alec Beall (authors of the fertile-women-wear-pink study) comment on our Garden of Forking Paths paper, and I comment on their comments

Introduction: Jessica Tracy and Alec Beall, authors of that paper that claimed that women at peak fertility were more likely to wear red or pink shirts (see further discussion here and here ), and then a later paper that claimed that this happens in some weather but not others, just informed me that they have posted a note in disagreement with an paper by Eric Loken and myself. Our paper is unpublished, but I do have the megaphone of this blog, and Tracy and Beall do not, so I think it’s only fair to link to their note right away. I’ll quote from their note (but if you’re interested, please follow the link and read the whole thing ) and then give some background and my own reaction. Tracy and Beall write: Although Gelman and Loken are using our work as an example of a broader problem that pervades the field–a problem we generally agree about–we are concerned that readers will take their speculations about our methods and analyses as factual claims about our scientific integrity. Fu

6 0.78301722 1114 andrew gelman stats-2012-01-12-Controversy about average personality differences between men and women

7 0.78262907 526 andrew gelman stats-2011-01-19-“If it saves the life of a single child…” and other nonsense

8 0.77888066 1282 andrew gelman stats-2012-04-26-Bad news about (some) statisticians

9 0.77722281 2235 andrew gelman stats-2014-03-06-How much time (if any) should we spend criticizing research that’s fraudulent, crappy, or just plain pointless?

10 0.77318937 2053 andrew gelman stats-2013-10-06-Ideas that spread fast and slow

11 0.77265817 2223 andrew gelman stats-2014-02-24-“Edlin’s rule” for routinely scaling down published estimates

12 0.76782024 618 andrew gelman stats-2011-03-18-Prior information . . . about the likelihood

13 0.76627243 2137 andrew gelman stats-2013-12-17-Replication backlash

14 0.76422971 446 andrew gelman stats-2010-12-03-Is 0.05 too strict as a p-value threshold?

15 0.76225746 1666 andrew gelman stats-2013-01-10-They’d rather be rigorous than right

16 0.76034558 51 andrew gelman stats-2010-05-26-If statistics is so significantly great, why don’t statisticians use statistics?

17 0.76013249 791 andrew gelman stats-2011-07-08-Censoring on one end, “outliers” on the other, what can we do with the middle?

18 0.75931054 399 andrew gelman stats-2010-11-07-Challenges of experimental design; also another rant on the practice of mentioning the publication of an article but not naming its author

19 0.75882274 2099 andrew gelman stats-2013-11-13-“What are some situations in which the classical approach (or a naive implementation of it, based on cookbook recipes) gives worse results than a Bayesian approach, results that actually impeded the science?”

20 0.75841725 411 andrew gelman stats-2010-11-13-Ethical concerns in medical trials


similar blogs, computed by the lda model

lda for this blog:

topicId topicWeight

[(12, 0.01), (15, 0.03), (16, 0.068), (21, 0.03), (22, 0.027), (24, 0.2), (31, 0.017), (45, 0.036), (57, 0.022), (61, 0.016), (82, 0.018), (89, 0.036), (91, 0.017), (95, 0.044), (99, 0.328)]
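Unlike the dense LSI vector, the LDA vector above is sparse: only 15 topics are listed, presumably those whose weight clears the model's reporting threshold (topic 99, at 0.328, dominates). A gensim-style sketch; num_topics=100 is a guess from the largest topic id shown, and the other settings are assumptions:

    from gensim import corpora, models

    def lda_topic_mixture(tokenized_posts, query_index, num_topics=100):
        """Fit LDA and return the sparse topic mixture for one post."""
        dictionary = corpora.Dictionary(tokenized_posts)
        bow = [dictionary.doc2bow(post) for post in tokenized_posts]
        lda = models.LdaModel(bow, id2word=dictionary, num_topics=num_topics,
                              passes=10, random_state=0)
        # topics below the minimum_probability cutoff are omitted from the output,
        # which is why the list above shows only a handful of the topics
        return lda.get_document_topics(bow[query_index])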

similar blogs list:

simIndex simValue blogId blogTitle

1 0.97857511 511 andrew gelman stats-2011-01-11-One more time on that ESP study: The problem of overestimates and the shrinkage solution

Introduction: Benedict Carey writes a follow-up article on ESP studies and Bayesian statistics. ( See here for my previous thoughts on the topic.) Everything Carey writes is fine, and he even uses an example I recommended: The statistical approach that has dominated the social sciences for almost a century is called significance testing. The idea is straightforward. A finding from any well-designed study — say, a correlation between a personality trait and the risk of depression — is considered “significant” if its probability of occurring by chance is less than 5 percent. This arbitrary cutoff makes sense when the effect being studied is a large one — for example, when measuring the so-called Stroop effect. This effect predicts that naming the color of a word is faster and more accurate when the word and color match (“red” in red letters) than when they do not (“red” in blue letters), and is very strong in almost everyone. “But if the true effect of what you are measuring is small,” sai

same-blog 2 0.97653371 2115 andrew gelman stats-2013-11-27-Three unblinded mice

Introduction: (same introduction as quoted at the top of this page)

3 0.97569644 807 andrew gelman stats-2011-07-17-Macro causality

Introduction: David Backus writes: This is from my area of work, macroeconomics. The suggestion here is that the economy is growing slowly because consumers aren’t spending money. But how do we know it’s not the reverse: that consumers are spending less because the economy isn’t doing well. As a teacher, I can tell you that it’s almost impossible to get students to understand that the first statement isn’t obviously true. What I’d call the demand-side story (more spending leads to more output) is everywhere, including this piece, from the usually reliable David Leonhardt. This whole situation reminds me of the story of the village whose inhabitants support themselves by taking in each others’ laundry. I guess we’re rich enough in the U.S. that we can stay afloat for a few decades just buying things from each other? Regarding the causal question, I’d like to move away from the idea of “Does A causes B or does B cause A” and toward a more intervention-based framework (Rubin’s model for

4 0.97529757 1702 andrew gelman stats-2013-02-01-Don’t let your standard errors drive your research agenda

Introduction: Alexis Le Nestour writes: How do you test for no effect? I attended a seminar where the person assumed that a non significant difference between groups implied an absence of effect. In that case, the researcher needed to show that two groups were similar before being hit by a shock conditional on some observable variables. The assumption was that the two groups were similar and that the shock was random. What would be the good way to set up a test in that case? I know you’ve been through that before (http://andrewgelman.com/2009/02/not_statistical/) and there are interesting comments but I wanted to have your opinion on that. My reply: I think you have to get quantitative here. How similar is similar? Don’t let your standard errors drive your research agenda. Or, to put it another way, what would you do if you had all the data? If your sample size were 1 zillion, then everything would statistically distinguishable from everything else. And then you’d have to think about w

5 0.97527373 1946 andrew gelman stats-2013-07-19-Prior distributions on derived quantities rather than on parameters themselves

Introduction: Following up on our discussion of the other day , Nick Firoozye writes: One thing I meant by my initial query (but really didn’t manage to get across) was this: I have no idea what my prior would be on many many models, but just like Utility Theory expects ALL consumers to attach a utility to any and all consumption goods (even those I haven’t seen or heard of), Bayesian Stats (almost) expects the same for priors. (Of course it’s not a religious edict much in the way Utility Theory has, since there is no theory of a “modeler” in the Bayesian paradigm—nonetheless there is still an expectation that we should have priors over all sorts of parameters which mean almost nothing to us). For most models with sufficient complexity, I also have no idea what my informative priors are actually doing and the only way to know anything is through something I can see and experience, through data, not parameters or state variables. My question was more on the—let’s use the prior to come up

6 0.97496355 1644 andrew gelman stats-2012-12-30-Fixed effects, followed by Bayes shrinkage?

7 0.97442156 2109 andrew gelman stats-2013-11-21-Hidden dangers of noninformative priors

8 0.9737857 2340 andrew gelman stats-2014-05-20-Thermodynamic Monte Carlo: Michael Betancourt’s new method for simulating from difficult distributions and evaluating normalizing constants

9 0.9736138 2174 andrew gelman stats-2014-01-17-How to think about the statistical evidence when the statistical evidence can’t be conclusive?

10 0.97346699 247 andrew gelman stats-2010-09-01-How does Bayes do it?

11 0.97315502 1162 andrew gelman stats-2012-02-11-Adding an error model to a deterministic model

12 0.97289187 970 andrew gelman stats-2011-10-24-Bell Labs

13 0.9727934 2086 andrew gelman stats-2013-11-03-How best to compare effects measured in two different time periods?

14 0.9725703 2080 andrew gelman stats-2013-10-28-Writing for free

15 0.97240591 1363 andrew gelman stats-2012-06-03-Question about predictive checks

16 0.97231972 1390 andrew gelman stats-2012-06-23-Traditionalist claims that modern art could just as well be replaced by a “paint-throwing chimp”

17 0.9721303 2154 andrew gelman stats-2013-12-30-Bill Gates’s favorite graph of the year

18 0.9717235 2089 andrew gelman stats-2013-11-04-Shlemiel the Software Developer and Unknown Unknowns

19 0.97158384 400 andrew gelman stats-2010-11-08-Poli sci plagiarism update, and a note about the benefits of not caring

20 0.97147298 447 andrew gelman stats-2010-12-03-Reinventing the wheel, only more so.