
2137 andrew gelman stats-2013-12-17-Replication backlash


meta info for this blog

Source: html

Introduction: Raghuveer Parthasarathy pointed me to an article in Nature by Mina Bissell, who writes, “The push to replicate findings could shelve promising research and unfairly damage the reputations of careful, meticulous scientists.” I can see where she’s coming from: if you work hard day after day in the lab, it’s gotta be a bit frustrating to find all your work questioned, for the frauds of the Dr. Anil Pottis and Diederik Stapels to be treated as a reason for everyone else’s work to be considered guilty until proven innocent. That said, I pretty much disagree with Bissell’s article, and really the best thing I can say about it is that I think it’s a good sign that the push for replication is so strong that now there’s a backlash against it. Traditionally, leading scientists have been able to simply ignore the push for replication. If they are feeling that the replication movement is strong enough that they need to fight it, that to me is good news. I’ll explain a bit in the conte


Summary: the most important sentences generated by the tfidf model

rank sentText [sentNum, sentScore]

1 Raghuveer Parthasarathy pointed me to an article in Nature by Mina Bissell, who writes, “The push to replicate findings could shelve promising research and unfairly damage the reputations of careful, meticulous scientists.” [sent-1, score-0.524]

2 But it is sometimes much easier not to replicate than to replicate studies, because the techniques and reagents are sophisticated, time-consuming and difficult to master. [sent-14, score-0.589]

3 People in my lab often need months — if not a year — to replicate some of the experiments we have done. [sent-16, score-0.608]

4 But with time and careful consideration of experimental conditions, they [Bissell's students and postdocs], and others, have always managed to replicate our previous data. [sent-21, score-0.349]

5 If e-mails and phone calls don’t solve the problems in replication, ask either to go to the original lab to reproduce the data together, or invite someone from their lab to come to yours. [sent-31, score-1.011]

6 If your published material is not clear—if a paper can’t be replicated without emails, phone calls, and a lab visit—this seems like a problem to me! [sent-41, score-0.698]

7 To put it another way, if certain findings are hard to get, requiring lots of lab technique that is nowhere published—and I accept that this is just the way things can be in modern biology—then these findings won’t necessarily apply in future work, and this seems like a serious concern. [sent-43, score-0.392]

8 Bissell gives an example of “a non-malignant human breast cell line that is now used by many for three-dimensional experiments”: A collaborator noticed that her group could not reproduce its own data convincingly when using cells from a cell bank. [sent-67, score-0.368]

9 Sure, that takes some effort by the originating lab, but it might save lots more effort for each of dozens of other labs that are trying to move forward from the published finding. [sent-81, score-0.391]

10 Bissell writes: When researchers at Amgen, a pharmaceutical company in Thousand Oaks, California, failed to replicate many important studies in preclinical cancer research, they tried to contact the authors and exchange materials. [sent-83, score-0.384]

11 If people can’t replicate a published result, what are we supposed to make of it? [sent-87, score-0.411]

12 But if the steps above are taken and the research still cannot be reproduced, then these non-valid findings will eventually be weeded out naturally when other careful scientists repeatedly fail to reproduce them. [sent-90, score-0.434]

13 I think that where Bissell went wrong is by thinking of replication in a defensive way, and thinking of the result being to “damage the reputations of careful, meticulous scientists.” [sent-95, score-0.348]

14 This seems pretty clear: you need multiple failed replications, each involving thoughtful conversation, email, phone, and a physical lab visit. [sent-101, score-0.393]

15 Bissell is criticizing replicators for not having long talks and visits with the original researchers, but the referees don’t do any emails, phone calls, or lab visits at all! [sent-127, score-0.92]

16 If their judgments, based simply on reading the article, carry weight, then it seems odd to me to discount failed replications that are also based on the published record. [sent-128, score-0.422]

17 I’m just saying that, if somebody reads your paper and can’t figure out what you did, and can only do that through lengthy emails, phone conversations, and lab visits, then this is going to limit the contribution your paper can make. [sent-131, score-0.583]

18 A result that is not able to be independently reproduced, that cannot be translated to another lab using what most would regard as standard laboratory procedures (blinding, controls, validated reagents, etc.) is not a result. [sent-134, score-0.413]

19 So maybe we can all be happy: all failed replications can be listed on the website of the original paper (then grumps and skeptics like me will be satisfied), but Bissell and others can continue to believe published results on the grounds that the replications weren’t careful enough. [sent-140, score-0.916]

20 If you fail to replicate a result and you want your failed replication to be published, it should contain full details of your lab setup, with videos as necessary. [sent-142, score-1.067]
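The twenty sentences above are an extractive summary: each sentence of the post is scored by its tf-idf weights and the top scorers are kept. A minimal sketch of that kind of scoring with scikit-learn (the vectorizer settings and the mean-weight scoring rule are assumptions; the pipeline that generated this page may differ):

```python
# Sketch of tf-idf sentence scoring for an extractive summary.
# Assumption: a sentence's score is the mean tf-idf weight of its terms;
# the original pipeline may use a different rule.
from sklearn.feature_extraction.text import TfidfVectorizer

def rank_sentences(sentences, top_k=20):
    vec = TfidfVectorizer(stop_words="english")
    X = vec.fit_transform(sentences)           # one row per sentence
    nnz = X.getnnz(axis=1)                     # nonzero terms per sentence
    scores = X.sum(axis=1).A1 / (nnz + 1e-9)   # mean weight; avoid div by 0
    order = scores.argsort()[::-1][:top_k]
    # Return 0-based index, score, text; the page reports 1-based sent-N.
    return [(int(i), float(scores[i]), sentences[i]) for i in order]
```

Calling `rank_sentences(post_sentences)` on the post's sentences would return (index, score, text) triples comparable to the [sent-N, score] annotations above.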


similar blogs computed by the tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('bissell', 0.469), ('lab', 0.276), ('replicate', 0.267), ('replication', 0.183), ('replications', 0.161), ('published', 0.144), ('cell', 0.133), ('replicators', 0.129), ('phone', 0.125), ('failed', 0.117), ('reproduce', 0.102), ('biology', 0.101), ('visits', 0.099), ('referees', 0.097), ('reproduced', 0.096), ('original', 0.095), ('paper', 0.091), ('calls', 0.087), ('meticulous', 0.083), ('result', 0.082), ('careful', 0.082), ('labs', 0.079), ('emails', 0.076), ('videos', 0.072), ('fail', 0.07), ('biologists', 0.068), ('experiments', 0.065), ('results', 0.065), ('publication', 0.064), ('microenvironment', 0.064), ('research', 0.063), ('replicated', 0.062), ('record', 0.061), ('effort', 0.059), ('weeded', 0.059), ('hold', 0.059), ('yes', 0.058), ('findings', 0.058), ('conditions', 0.058), ('benefit', 0.056), ('reagents', 0.055), ('push', 0.053), ('finding', 0.052), ('declaring', 0.051), ('publish', 0.05), ('takes', 0.05), ('exist', 0.05), ('someone', 0.05), ('treated', 0.049), ('suffering', 0.049)]
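The word table above pairs each term with its tf-idf weight in this post, and the simValue column in the list below is a document-to-document similarity. A plausible reconstruction, assuming tf-idf vectors over the whole blog corpus and cosine similarity between them (the preprocessing details are guesses):

```python
# Sketch: tf-idf weights for one document and its cosine similarity to the
# rest of the corpus. Both tables above could plausibly be built this way;
# the exact tokenization and stop-word handling are assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def tfidf_profile(corpus, doc_index, top_n=50):
    vec = TfidfVectorizer(stop_words="english")
    X = vec.fit_transform(corpus)                      # docs x vocabulary
    row = X[doc_index].toarray().ravel()
    terms = vec.get_feature_names_out()
    top = row.argsort()[::-1][:top_n]
    top_words = [(terms[i], round(float(row[i]), 3)) for i in top]
    sims = cosine_similarity(X[doc_index], X).ravel()  # the simValue column
    return top_words, sims
```

Note that a document's similarity to itself is 1.0 up to floating-point error, which matches the same-blog entry's simValue of 0.99999994 below.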

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.99999994 2137 andrew gelman stats-2013-12-17-Replication backlash


2 0.34905857 2218 andrew gelman stats-2014-02-20-Do differences between biology and statistics explain some of our diverging attitudes regarding criticism and replication of scientific claims?

Introduction: Last month we discussed an opinion piece by Mina Bissell, a nationally-recognized leader in cancer biology. Bissell argued that there was too much of a push to replicate scientific findings. I disagreed , arguing that scientists should want others to be able to replicate their research, that it’s in everyone’s interest if replication can be done as fast and reliably as possible, and that if a published finding cannot be easily replicated, this is at best a failure of communication (in that the conditions for successful replication have not clearly been expressed), or possibly a fragile finding (that is, a phenomenon that appears under some conditions but not others), or at worst a plain old mistake (possibly associated with lab error or maybe with statistical error of some sort, such as jumping to certainty based on a statistically significant claim that arose from multiple comparisons ). So we disagreed. Fair enough. But I got to thinking about a possible source of our diffe

3 0.21859071 2217 andrew gelman stats-2014-02-19-The replication and criticism movement is not about suppressing speculative research; rather, it’s all about enabling science’s fabled self-correcting nature

Introduction: Jeff Leek points to a post by Alex Holcombe, who disputes the idea that science is self-correcting. Holcombe writes [scroll down to get to his part]: The pace of scientific production has quickened, and self-correction has suffered. Findings that might correct old results are considered less interesting than results from more original research questions. Potential corrections are also more contested. As the competition for space in prestigious journals has become increasingly frenzied, doing and publishing studies that would confirm the rapidly accumulating new discoveries, or would correct them, became a losing proposition. Holcombe picks up on some points that we’ve discussed a lot here in the past year. Here’s Holcombe: In certain subfields, almost all new work appears in only a very few journals, all associated with a single professional society. There is then no way around the senior gatekeepers, who may then suppress corrections with impunity. . . . The bias agai

4 0.19144118 1878 andrew gelman stats-2013-05-31-How to fix the tabloids? Toward replicable social science research

Introduction: This seems to be the topic of the week. Yesterday I posted on the sister blog some further thoughts on those “Psychological Science” papers on menstrual cycles, biceps size, and political attitudes, tied to a horrible press release from the journal Psychological Science hyping the biceps and politics study. Then I was pointed to these suggestions from Richard Lucas and M. Brent Donnellan have on improving the replicability and reproducibility of research published in the Journal of Research in Personality: It goes without saying that editors of scientific journals strive to publish research that is not only theoretically interesting but also methodologically rigorous. The goal is to select papers that advance the field. Accordingly, editors want to publish findings that can be reproduced and replicated by other scientists. Unfortunately, there has been a recent “crisis in confidence” among psychologists about the quality of psychological research (Pashler & Wagenmakers, 2012)

5 0.17045094 1865 andrew gelman stats-2013-05-20-What happened that the journal Psychological Science published a paper with no identifiable strengths?

Introduction: The other day we discussed that paper on ovulation and voting (you may recall that the authors reported a scattered bunch of comparisons, significance tests, and p-values, and I recommended that they would’ve done better to simply report complete summaries of their data, so that readers could see the comparisons of interest in full context), and I was thinking a bit more about why I was so bothered that it was published in Psychological Science, which I’d thought of as a serious research journal. My concern isn’t just that that the paper is bad—after all, lots of bad papers get published—but rather that it had nothing really going for it, except that it was headline bait. It was a survey done on Mechanical Turk, that’s it. No clever design, no clever questions, no care in dealing with nonresponse problems, no innovative data analysis, no nothing. The paper had nothing to offer, except that it had no obvious flaws. Psychology is a huge field full of brilliant researchers.

6 0.16789633 700 andrew gelman stats-2011-05-06-Suspicious pattern of too-strong replications of medical research

7 0.16400225 2245 andrew gelman stats-2014-03-12-More on publishing in journals

8 0.16331507 2227 andrew gelman stats-2014-02-27-“What Can we Learn from the Many Labs Replication Project?”

9 0.15970132 1844 andrew gelman stats-2013-05-06-Against optimism about social science

10 0.15114506 1435 andrew gelman stats-2012-07-30-Retracted articles and unethical behavior in economics journals?

11 0.15075365 1139 andrew gelman stats-2012-01-26-Suggested resolution of the Bem paradox

12 0.14968163 2244 andrew gelman stats-2014-03-11-What if I were to stop publishing in journals?

13 0.14832056 1683 andrew gelman stats-2013-01-19-“Confirmation, on the other hand, is not sexy”

14 0.14160736 1273 andrew gelman stats-2012-04-20-Proposals for alternative review systems for scientific work

15 0.14084983 2301 andrew gelman stats-2014-04-22-Ticket to Baaaaarf

16 0.140471 2177 andrew gelman stats-2014-01-19-“The British amateur who debunked the mathematics of happiness”

17 0.13912287 1137 andrew gelman stats-2012-01-24-Difficulties in publishing non-replications of implausible findings

18 0.13471201 2235 andrew gelman stats-2014-03-06-How much time (if any) should we spend criticizing research that’s fraudulent, crappy, or just plain pointless?

19 0.13237834 2032 andrew gelman stats-2013-09-20-“Six red flags for suspect work”

20 0.13077866 762 andrew gelman stats-2011-06-13-How should journals handle replication studies?


similar blogs computed by the lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.279), (1, -0.076), (2, -0.067), (3, -0.161), (4, -0.033), (5, -0.054), (6, 0.02), (7, -0.084), (8, -0.038), (9, 0.011), (10, 0.063), (11, 0.009), (12, -0.041), (13, -0.017), (14, -0.02), (15, -0.011), (16, 0.042), (17, -0.004), (18, 0.013), (19, -0.01), (20, 0.001), (21, 0.027), (22, -0.032), (23, 0.022), (24, -0.046), (25, 0.015), (26, 0.01), (27, -0.011), (28, 0.006), (29, 0.053), (30, -0.028), (31, -0.03), (32, 0.031), (33, 0.003), (34, 0.003), (35, -0.008), (36, -0.027), (37, 0.056), (38, -0.014), (39, -0.006), (40, -0.002), (41, -0.016), (42, 0.024), (43, 0.022), (44, -0.005), (45, -0.001), (46, 0.009), (47, 0.066), (48, 0.025), (49, -0.016)]
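These fifty (topicId, topicWeight) pairs are the post's coordinates in a latent semantic space, and the simValues below are similarities computed in that space. A sketch of the usual LSI recipe (truncated SVD on the tf-idf matrix, then cosine similarity in the reduced space); n_topics=50 matches the topicIds 0-49 above, and the other settings are assumptions:

```python
# Sketch of the LSI step: project tf-idf vectors onto 50 latent
# dimensions with truncated SVD and compare documents there.
# Assumption: X is the tf-idf matrix from the previous step.
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

def lsi_similarities(X, doc_index, n_topics=50):
    svd = TruncatedSVD(n_components=n_topics, random_state=0)
    Z = svd.fit_transform(X)         # docs x topics: rows of topicWeights
    sims = cosine_similarity(Z[doc_index:doc_index + 1], Z).ravel()
    return Z[doc_index], sims        # this doc's topic weights, simValues
```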

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.97591054 2137 andrew gelman stats-2013-12-17-Replication backlash


2 0.9351908 2217 andrew gelman stats-2014-02-19-The replication and criticism movement is not about suppressing speculative research; rather, it’s all about enabling science’s fabled self-correcting nature


3 0.90407205 1844 andrew gelman stats-2013-05-06-Against optimism about social science

Introduction: Social science research has been getting pretty bad press recently, what with the Excel buccaneers who didn’t know how to handle data with different numbers of observations per country, and the psychologist who published dozens of papers based on fabricated data, and the Evilicious guy who wouldn’t let people review his data tapes, etc etc. And that’s not even considering Dr. Anil Potti. On the other hand, the revelation of all these problems can be taken as evidence that things are getting better. Psychology researcher Gary Marcus writes : There is something positive that has come out of the crisis of replicability—something vitally important for all experimental sciences. For years, it was extremely difficult to publish a direct replication, or a failure to replicate an experiment, in a good journal. . . . Now, happily, the scientific culture has changed. . . . The Reproducibility Project, from the Center for Open Science is now underway . . . And sociologist Fabio Rojas

4 0.90342432 1878 andrew gelman stats-2013-05-31-How to fix the tabloids? Toward replicable social science research


5 0.89827716 1273 andrew gelman stats-2012-04-20-Proposals for alternative review systems for scientific work

Introduction: I recently became aware of two new entries in the ever-popular genre of, Our Peer-Review System is in Trouble; How Can We Fix It? Political scientist Brendan Nyhan, commenting on experimental and empirical sciences more generally, focuses on the selection problem that positive rather then negative findings tend to get published, leading via the statistical significance filter to an overestimation of effect sizes. Nyhan recommends that data-collection protocols be published ahead of time, with the commitment to publish the eventual results: In the case of experimental data, a better practice would be for journals to accept articles before the study was conducted. The article should be written up to the point of the results section, which would then be populated using a pre-specified analysis plan submitted by the author. The journal would then allow for post-hoc analysis and interpretation by the author that would be labeled as such and distinguished from the previously submit

6 0.89762765 601 andrew gelman stats-2011-03-05-Against double-blind reviewing: Political science and statistics are not like biology and physics

7 0.89728975 1774 andrew gelman stats-2013-03-22-Likelihood Ratio ≠ 1 Journal

8 0.89702755 1139 andrew gelman stats-2012-01-26-Suggested resolution of the Bem paradox

9 0.88469565 2353 andrew gelman stats-2014-05-30-I posted this as a comment on a sociology blog

10 0.87669766 1671 andrew gelman stats-2013-01-13-Preregistration of Studies and Mock Reports

11 0.87570244 2218 andrew gelman stats-2014-02-20-Do differences between biology and statistics explain some of our diverging attitudes regarding criticism and replication of scientific claims?

12 0.87112975 1435 andrew gelman stats-2012-07-30-Retracted articles and unethical behavior in economics journals?

13 0.87060988 2179 andrew gelman stats-2014-01-20-The AAA Tranche of Subprime Science

14 0.85947597 1998 andrew gelman stats-2013-08-25-A new Bem theory

15 0.84673148 2301 andrew gelman stats-2014-04-22-Ticket to Baaaaarf

16 0.83995986 2006 andrew gelman stats-2013-09-03-Evaluating evidence from published research

17 0.83409035 1683 andrew gelman stats-2013-01-19-“Confirmation, on the other hand, is not sexy”

18 0.83373523 2220 andrew gelman stats-2014-02-22-Quickies

19 0.83216971 1865 andrew gelman stats-2013-05-20-What happened that the journal Psychological Science published a paper with no identifiable strengths?

20 0.8317852 1860 andrew gelman stats-2013-05-17-How can statisticians help psychologists do their research better?


similar blogs computed by the lda model

lda for this blog:

topicId topicWeight

[(8, 0.015), (9, 0.034), (15, 0.066), (16, 0.172), (21, 0.052), (24, 0.125), (57, 0.03), (84, 0.017), (86, 0.035), (89, 0.012), (97, 0.012), (99, 0.29)]
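Unlike the dense LSI vector, the LDA profile is sparse: only topics whose weight clears a probability cutoff are listed. A sketch with gensim, where the corpus preparation, the cutoff, and num_topics=100 (chosen because topicIds up to 99 appear above) are all assumptions:

```python
# Sketch of the LDA step with gensim: the (topicId, topicWeight) pairs
# above are one document's topic distribution, listing only topics above
# a probability cutoff. num_topics and the cutoff are assumptions.
from gensim.corpora import Dictionary
from gensim.models import LdaModel

def lda_topic_profile(tokenized_docs, doc_index, num_topics=100):
    dictionary = Dictionary(tokenized_docs)
    bow = [dictionary.doc2bow(doc) for doc in tokenized_docs]
    lda = LdaModel(bow, id2word=dictionary, num_topics=num_topics,
                   random_state=0)
    # Returns sparse (topic_id, weight) pairs, like the table above.
    return lda.get_document_topics(bow[doc_index], minimum_probability=0.01)
```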

similar blogs list:

simIndex simValue blogId blogTitle

1 0.97350174 1712 andrew gelman stats-2013-02-07-Philosophy and the practice of Bayesian statistics (with all the discussions!)

Introduction: My article with Cosma Shalizi has appeared in the British Journal of Mathematical and Statistical Psychology. I’m so glad this paper has come out. I’d been thinking about writing such a paper for almost 20 years. What got me to actually do it was an invitation a few years ago to write a chapter on Bayesian statistics for a volume on the philosophy of social sciences. Once I started doing that, I realized I had enough for a journal article. I contacted Cosma because he, unlike me, was familiar with the post-1970 philosophy literature (my knowledge went only up to Popper, Kuhn, and Lakatos). We submitted it to a couple statistics journals that didn’t want it (for reasons that weren’t always clear ), but ultimately I think it ended up in the right place, as psychologists have been as serious as anyone in thinking about statistical foundations in recent years. Here’s the issue of the journal , which also includes an introduction, several discussions, and a rejoinder: Prior app

2 0.97227365 411 andrew gelman stats-2010-11-13-Ethical concerns in medical trials

Introduction: I just read this article on the treatment of medical volunteers, written by doctor and bioethicist Carl Ellliott. As a statistician who has done a small amount of consulting for pharmaceutical companies, I have a slightly different perspective. As a doctor, Elliott focuses on individual patients, whereas, as a statistician, I’ve been trained to focus on the goal of accurately estimate treatment effects. I’ll go through Elliott’s article and give my reactions. Elliott: In Miami, investigative reporters for Bloomberg Markets magazine discovered that a contract research organisation called SFBC International was testing drugs on undocumented immigrants in a rundown motel; since that report, the motel has been demolished for fire and safety violations. . . . SFBC had recently been named one of the best small businesses in America by Forbes magazine. The Holiday Inn testing facility was the largest in North America, and had been operating for nearly ten years before inspecto

same-blog 3 0.9710443 2137 andrew gelman stats-2013-12-17-Replication backlash


4 0.96667278 1044 andrew gelman stats-2011-12-06-The K Foundation burns Cosma’s turkey

Introduction: Shalizi delivers a slow, drawn-out illustration of the point that economic efficiency is all about who’s got the $, which isn’t always related to what we would usually call “efficiency” in other settings. (His point is related to my argument that the phrase “willingness to pay” should generally be replaced by “ability to pay.”) The basic story is simple: Good guy needs a turkey, bad guy wants a turkey. Bad guy is willing and able to pay more for the turkey than good guy can afford, hence good guy starves to death. The counterargument is that a market in turkeys will motivate producers to breed more turkeys, ultimately saturating the bad guys’ desires and leaving surplus turkeys for the good guys at a reasonable price. I’m sure there’s a counter-counterargument too, but I don’t want to go there. But what really amused me about Cosma’s essay was how he scrambled the usual cultural/political associations. (I assume he did this on purpose.) In the standard version of t

5 0.96436119 2179 andrew gelman stats-2014-01-20-The AAA Tranche of Subprime Science

Introduction: In our new ethics column for Chance , Eric Loken and I write about our current favorite topic: One of our ongoing themes when discussing scientific ethics is the central role of statistics in recognizing and communicating uncer- tainty. Unfortunately, statistics—and the scientific process more generally—often seems to be used more as a way of laundering uncertainty, processing data until researchers and consumers of research can feel safe acting as if various scientific hypotheses are unquestionably true. . . . We have in mind an analogy with the notorious AAA-class bonds created during the mid-2000s that led to the subprime mortgage crisis. Lower-quality mortgages—that is, mortgages with high probability of default and, thus, high uncertainty—were packaged and transformed into financial instruments that were (in retrospect, falsely) characterized as low risk. There was a tremendous interest in these securities, not just among the most unscrupulous market manipulators, but in a

6 0.96254182 564 andrew gelman stats-2011-02-08-Different attitudes about parenting, possibly deriving from different attitudes about self

7 0.96218514 960 andrew gelman stats-2011-10-15-The bias-variance tradeoff

8 0.96195531 159 andrew gelman stats-2010-07-23-Popular governor, small state

9 0.9611634 586 andrew gelman stats-2011-02-23-A statistical version of Arrow’s paradox

10 0.95779812 481 andrew gelman stats-2010-12-22-The Jumpstart financial literacy survey and the different purposes of tests

11 0.95763326 722 andrew gelman stats-2011-05-20-Why no Wegmania?

12 0.95704722 1878 andrew gelman stats-2013-05-31-How to fix the tabloids? Toward replicable social science research

13 0.95684409 1729 andrew gelman stats-2013-02-20-My beef with Brooks: the alternative to “good statistics” is not “no statistics,” it’s “bad statistics”

14 0.95578778 816 andrew gelman stats-2011-07-22-“Information visualization” vs. “Statistical graphics”

15 0.9543885 503 andrew gelman stats-2011-01-04-Clarity on my email policy

16 0.95394599 177 andrew gelman stats-2010-08-02-Reintegrating rebels into civilian life: Quasi-experimental evidence from Burundi

17 0.95312983 1755 andrew gelman stats-2013-03-09-Plaig

18 0.95269358 1016 andrew gelman stats-2011-11-17-I got 99 comparisons but multiplicity ain’t one

19 0.9526639 2217 andrew gelman stats-2014-02-19-The replication and criticism movement is not about suppressing speculative research; rather, it’s all about enabling science’s fabled self-correcting nature

20 0.95239818 2280 andrew gelman stats-2014-04-03-As the boldest experiment in journalism history, you admit you made a mistake