andrew_gelman_stats andrew_gelman_stats-2014 andrew_gelman_stats-2014-2218 knowledge-graph by maker-knowledge-mining

2218 andrew gelman stats-2014-02-20-Do differences between biology and statistics explain some of our diverging attitudes regarding criticism and replication of scientific claims?


meta info for this blog

Source: html

Introduction: Last month we discussed an opinion piece by Mina Bissell, a nationally-recognized leader in cancer biology. Bissell argued that there was too much of a push to replicate scientific findings. I disagreed, arguing that scientists should want others to be able to replicate their research, that it’s in everyone’s interest if replication can be done as fast and reliably as possible, and that if a published finding cannot be easily replicated, this is at best a failure of communication (in that the conditions for successful replication have not clearly been expressed), or possibly a fragile finding (that is, a phenomenon that appears under some conditions but not others), or at worst a plain old mistake (possibly associated with lab error or maybe with statistical error of some sort, such as jumping to certainty based on a statistically significant claim that arose from multiple comparisons). So we disagreed. Fair enough. But I got to thinking about a possible source of our diffe


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Last month we discussed an opinion piece by Mina Bissell, a nationally-recognized leader in cancer biology. [sent-1, score-0.082]

2 Bissell argued that there was too much of a push to replicate scientific findings. [sent-2, score-0.377]

3 But I got to thinking about a possible source of our differences, arising from the different social and economic structures of our scientific fields. [sent-6, score-0.11]

4 I thought about this after receiving the following in an email from a colleague: The people who dominate both the natural and social sciences primarily think in terms of reputation and career. [sent-7, score-0.209]

5 They think that the point of making a scientific discovery is to publish a paper and further your career. [sent-8, score-0.239]

6 But the whole point of expensive, high-profile research is to save those who don’t have the same funding the trouble of making the discoveries themselves. [sent-12, score-0.083]

7 The discovery is precisely supposed to be something you can demonstrate in an ordinary workaday lab … or it just ain’t yet scientifically “demonstrated”. [sent-13, score-0.478]

8 (Consider, for example, my hobby-like goal of publishing papers in over 100 journals, or my habit of repeatedly googling myself, etc etc. [sent-15, score-0.078]

9 So, in bio, status counts for more, also perhaps there’s more insecurity, even at the top, that you might slip down if you’re not careful. [sent-17, score-0.169]

10 Also perhaps more motivation for people of lower ranks to make a reputation by tangling with someone higher up. [sent-18, score-0.383]

11 Put it all together and you have some toxic politics, much different than what you’ll see in a flatter field such as statistics. [sent-19, score-0.299]
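The sentence scores above come from a tfidf model whose pipeline is not shown here. A minimal pure-Python sketch of one common approach (score each sentence by the mean tf-idf weight of its words, treating each sentence as a document for idf purposes; the tokenizer and weighting below are illustrative assumptions, not the actual pipeline):

```python
import math
import re
from collections import Counter

def tfidf_sentence_scores(sentences):
    """Score each sentence by the mean tf-idf weight of its words,
    treating each sentence as a 'document' for idf purposes."""
    tokenized = [re.findall(r"[a-z']+", s.lower()) for s in sentences]
    n = len(tokenized)
    # document frequency: in how many sentences does each word appear?
    df = Counter(w for toks in tokenized for w in set(toks))
    scores = []
    for toks in tokenized:
        if not toks:
            scores.append(0.0)
            continue
        tf = Counter(toks)
        weight = sum((tf[w] / len(toks)) * math.log(n / df[w]) for w in tf)
        scores.append(weight / len(tf))  # mean over distinct words
    return scores

sentences = [
    "Bissell argued that there was too much of a push to replicate scientific findings.",
    "Replication of scientific findings helps everyone.",
    "Toxic politics differ across academic fields.",
]
for i, (s, sc) in enumerate(zip(sentences, tfidf_sentence_scores(sentences)), 1):
    print(i, round(sc, 3), s[:40])
```

Sentences whose words are rare across the corpus score higher, which is why the quoted-email sentences about discoveries and ordinary labs outrank the boilerplate opening.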


similar blogs computed by the tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('replicate', 0.267), ('bissell', 0.207), ('bio', 0.191), ('graduates', 0.173), ('ordinary', 0.149), ('discovery', 0.129), ('reputation', 0.128), ('replication', 0.125), ('top', 0.122), ('lab', 0.121), ('scientific', 0.11), ('lament', 0.11), ('toxic', 0.11), ('insecurity', 0.11), ('exceeding', 0.11), ('conditions', 0.108), ('possibly', 0.106), ('flatter', 0.103), ('budgets', 0.103), ('mina', 0.103), ('year', 0.102), ('easily', 0.099), ('finding', 0.098), ('impatient', 0.095), ('fairness', 0.093), ('speculate', 0.09), ('celebrating', 0.088), ('reliably', 0.088), ('academic', 0.088), ('ranks', 0.086), ('slip', 0.086), ('fragile', 0.086), ('field', 0.086), ('higher', 0.086), ('rewards', 0.085), ('biologists', 0.085), ('discoveries', 0.083), ('perhaps', 0.083), ('scientists', 0.083), ('leader', 0.082), ('identifies', 0.082), ('salaries', 0.082), ('irritated', 0.081), ('dominate', 0.081), ('aiming', 0.081), ('scientifically', 0.079), ('operating', 0.078), ('googling', 0.078), ('disagreed', 0.078), ('proceed', 0.077)]
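A weight table like this is what a tfidf vectorizer emits for the post's vocabulary. A small self-contained sketch of producing such a ranked word list (the toy corpus and the `log(1 + n/df)` smoothing are illustrative assumptions, not the vectorizer actually used here):

```python
import math
import re
from collections import Counter

def top_tfidf_words(docs, doc_index, topn=5):
    """Return the topn (word, weight) pairs for one document,
    ranked by tf-idf against the rest of the corpus."""
    tokenized = [re.findall(r"[a-z']+", d.lower()) for d in docs]
    n = len(tokenized)
    df = Counter(w for toks in tokenized for w in set(toks))
    toks = tokenized[doc_index]
    tf = Counter(toks)
    # smoothed idf; real vectorizers differ in their exact formula
    weights = {w: (tf[w] / len(toks)) * math.log(1 + n / df[w]) for w in tf}
    return sorted(weights.items(), key=lambda kv: -kv[1])[:topn]

corpus = [
    "replication of scientific findings and the push to replicate",
    "toxic politics and reputation in academic biology",
    "statistics is a flatter field than biology",
]
print(top_tfidf_words(corpus, 0))
```

Words frequent in the document but rare in the corpus (here, 'replication') dominate, mirroring how 'replicate' and 'bissell' top the list above.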

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.99999988 2218 andrew gelman stats-2014-02-20-Do differences between biology and statistics explain some of our diverging attitudes regarding criticism and replication of scientific claims?


2 0.34905857 2137 andrew gelman stats-2013-12-17-Replication backlash

Introduction: Raghuveer Parthasarathy pointed me to an article in Nature by Mina Bissell, who writes, “The push to replicate findings could shelve promising research and unfairly damage the reputations of careful, meticulous scientists.” I can see where she’s coming from: if you work hard day after day in the lab, it’s gotta be a bit frustrating to find all your work questioned, for the frauds of the Dr. Anil Pottis and Diederik Stapels to be treated as a reason for everyone else’s work to be considered guilty until proven innocent. That said, I pretty much disagree with Bissell’s article, and really the best thing I can say about it is that I think it’s a good sign that the push for replication is so strong that now there’s a backlash against it. Traditionally, leading scientists have been able to simply ignore the push for replication. If they are feeling that the replication movement is strong enough that they need to fight it, that to me is good news. I’ll explain a bit in the conte

3 0.13787237 700 andrew gelman stats-2011-05-06-Suspicious pattern of too-strong replications of medical research

Introduction: Howard Wainer writes in the Statistics Forum: The Chinese scientific literature is rarely read or cited outside of China. But the authors of this work are usually knowledgeable of the non-Chinese literature — at least the A-list journals. And so they too try to replicate the alpha finding. But do they? One would think that they would find the same diminished effect size, but they don’t! Instead they replicate the original result, even larger. Here’s one of the graphs: How did this happen? Full story here .

4 0.13106558 1844 andrew gelman stats-2013-05-06-Against optimism about social science

Introduction: Social science research has been getting pretty bad press recently, what with the Excel buccaneers who didn’t know how to handle data with different numbers of observations per country, and the psychologist who published dozens of papers based on fabricated data, and the Evilicious guy who wouldn’t let people review his data tapes, etc etc. And that’s not even considering Dr. Anil Potti. On the other hand, the revelation of all these problems can be taken as evidence that things are getting better. Psychology researcher Gary Marcus writes: There is something positive that has come out of the crisis of replicability—something vitally important for all experimental sciences. For years, it was extremely difficult to publish a direct replication, or a failure to replicate an experiment, in a good journal. . . . Now, happily, the scientific culture has changed. . . . The Reproducibility Project, from the Center for Open Science is now underway . . . And sociologist Fabio Rojas

5 0.12917137 1878 andrew gelman stats-2013-05-31-How to fix the tabloids? Toward replicable social science research

Introduction: This seems to be the topic of the week. Yesterday I posted on the sister blog some further thoughts on those “Psychological Science” papers on menstrual cycles, biceps size, and political attitudes, tied to a horrible press release from the journal Psychological Science hyping the biceps and politics study. Then I was pointed to these suggestions from Richard Lucas and M. Brent Donnellan have on improving the replicability and reproducibility of research published in the Journal of Research in Personality: It goes without saying that editors of scientific journals strive to publish research that is not only theoretically interesting but also methodologically rigorous. The goal is to select papers that advance the field. Accordingly, editors want to publish findings that can be reproduced and replicated by other scientists. Unfortunately, there has been a recent “crisis in confidence” among psychologists about the quality of psychological research (Pashler & Wagenmakers, 2012)

6 0.12751715 1865 andrew gelman stats-2013-05-20-What happened that the journal Psychological Science published a paper with no identifiable strengths?

7 0.12380809 2245 andrew gelman stats-2014-03-12-More on publishing in journals

8 0.11701728 1139 andrew gelman stats-2012-01-26-Suggested resolution of the Bem paradox

9 0.11433097 2217 andrew gelman stats-2014-02-19-The replication and criticism movement is not about suppressing speculative research; rather, it’s all about enabling science’s fabled self-correcting nature

10 0.10264248 2227 andrew gelman stats-2014-02-27-“What Can we Learn from the Many Labs Replication Project?”

11 0.099278018 371 andrew gelman stats-2010-10-26-Musical chairs in econ journals

12 0.098258518 901 andrew gelman stats-2011-09-12-Some thoughts on academic cheating, inspired by Frey, Wegman, Fischer, Hauser, Stapel

13 0.097991407 2244 andrew gelman stats-2014-03-11-What if I were to stop publishing in journals?

14 0.096990518 1952 andrew gelman stats-2013-07-23-Christakis response to my comment on his comments on social science (or just skip to the P.P.P.S. at the end)

15 0.094381399 1671 andrew gelman stats-2013-01-13-Preregistration of Studies and Mock Reports

16 0.093788728 1435 andrew gelman stats-2012-07-30-Retracted articles and unethical behavior in economics journals?

17 0.092838868 1928 andrew gelman stats-2013-07-06-How to think about papers published in low-grade journals?

18 0.092729934 1860 andrew gelman stats-2013-05-17-How can statisticians help psychologists do their research better?

19 0.09250088 1695 andrew gelman stats-2013-01-28-Economists argue about Bayes

20 0.092499278 2235 andrew gelman stats-2014-03-06-How much time (if any) should we spend criticizing research that’s fraudulent, crappy, or just plain pointless?
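Each entry in this list pairs a blog with a cosine similarity between tfidf vectors, which is why the same-blog entry scores 0.99999988, i.e. 1 up to floating-point rounding. A hedged pure-Python sketch of producing such a ranked list (toy documents; the real vectorizer, vocabulary, and corpus are much larger):

```python
import math
import re
from collections import Counter

def tfidf_vectors(docs):
    """Sparse tf-idf vector (dict word -> weight) per document."""
    tokenized = [re.findall(r"[a-z']+", d.lower()) for d in docs]
    n = len(tokenized)
    df = Counter(w for toks in tokenized for w in set(toks))
    return [
        {w: (tf[w] / len(toks)) * math.log(1 + n / df[w]) for w in tf}
        for toks in tokenized
        for tf in [Counter(toks)]
    ]

def cosine(a, b):
    dot = sum(a[w] * b.get(w, 0.0) for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "push to replicate scientific findings",
    "replication backlash against the push to replicate findings",
    "facebook graphs of what people talk about",
]
vecs = tfidf_vectors(docs)
# rank all documents by similarity to document 0, most similar first
sims = sorted(((cosine(vecs[0], v), i) for i, v in enumerate(vecs)), reverse=True)
for s, i in sims:
    print(round(s, 8), i)
```

The query document matches itself with similarity 1, the topically related document scores in between, and the unrelated one scores 0, just like the simValue column above.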


similar blogs computed by the lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.208), (1, -0.08), (2, -0.029), (3, -0.104), (4, -0.045), (5, -0.04), (6, 0.001), (7, -0.039), (8, -0.039), (9, 0.034), (10, 0.021), (11, 0.011), (12, -0.041), (13, 0.005), (14, -0.027), (15, 0.015), (16, 0.024), (17, -0.006), (18, 0.006), (19, 0.006), (20, 0.013), (21, 0.022), (22, -0.036), (23, -0.012), (24, -0.041), (25, -0.019), (26, 0.003), (27, -0.006), (28, -0.03), (29, 0.016), (30, -0.007), (31, 0.001), (32, 0.003), (33, -0.004), (34, 0.019), (35, -0.01), (36, -0.02), (37, 0.029), (38, -0.02), (39, 0.021), (40, 0.018), (41, -0.006), (42, -0.019), (43, 0.001), (44, -0.013), (45, -0.028), (46, 0.004), (47, 0.048), (48, -0.003), (49, -0.031)]
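Topic weights like these come from latent semantic indexing, i.e. a truncated SVD of the term-document matrix. As a hedged, self-contained illustration of the idea (power iteration with deflation on a toy count matrix; production LSI implementations use more robust factorization algorithms and run on tf-idf input):

```python
import math

def matvec(A, v):   # A (rows) times v
    return [sum(a * b for a, b in zip(row, v)) for row in A]

def rmatvec(A, u):  # A transposed times u
    return [sum(A[i][j] * u[i] for i in range(len(A))) for j in range(len(A[0]))]

def norm(v):
    return math.sqrt(sum(x * x for x in v))

def top_singular(A, iters=200):
    """Leading singular triple of A via power iteration on A^T A."""
    v = [1.0] * len(A[0])
    for _ in range(iters):
        w = rmatvec(A, matvec(A, v))
        nw = norm(w)
        v = [x / nw for x in w]
    Av = matvec(A, v)
    s = norm(Av)
    return s, [x / s for x in Av], v

def lsi(A, k=2):
    """Rank-k doc-topic weights: repeated power iteration with deflation."""
    A = [row[:] for row in A]
    doc_topics = [[] for _ in A]
    for _ in range(k):
        s, u, v = top_singular(A)
        for i, ui in enumerate(u):
            doc_topics[i].append(s * ui)   # doc weight on this latent topic
            for j, vj in enumerate(v):
                A[i][j] -= s * ui * vj     # deflate before the next topic
    return doc_topics

counts = [
    [2, 1, 0, 0],   # docs 0 and 1 share one vocabulary block
    [1, 2, 0, 0],
    [0, 0, 2, 1],   # doc 2 uses a disjoint block
]
print(lsi(counts, k=2))
```

The two vocabulary blocks separate cleanly into two latent topics: docs 0 and 1 load on the first, doc 2 on the second, which is the structure the (topicId, topicWeight) pairs above summarize.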

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.96635675 2218 andrew gelman stats-2014-02-20-Do differences between biology and statistics explain some of our diverging attitudes regarding criticism and replication of scientific claims?


2 0.91306025 2137 andrew gelman stats-2013-12-17-Replication backlash


3 0.89881349 1844 andrew gelman stats-2013-05-06-Against optimism about social science


4 0.88884711 1139 andrew gelman stats-2012-01-26-Suggested resolution of the Bem paradox

Introduction: There has been an increasing discussion about the proliferation of flawed research in psychology and medicine, with some landmark events being John Ioannidis’s article, “Why most published research findings are false” (according to Google Scholar, cited 973 times since its appearance in 2005), the scandals of Marc Hauser and Diederik Stapel, two leading psychology professors who resigned after disclosures of scientific misconduct, and Daryl Bem’s dubious recent paper on ESP, published to much fanfare in Journal of Personality and Social Psychology, one of the top journals in the field. Alongside all this are the plagiarism scandals, which are uninteresting from a scientific context but are relevant in that, in many cases, neither the institutions housing the plagiarists nor the editors and publishers of the plagiarized material seem to care. Perhaps these universities and publishers are more worried about bad publicity (and maybe lawsuits, given that many of the plagiarism cas

5 0.88273692 2217 andrew gelman stats-2014-02-19-The replication and criticism movement is not about suppressing speculative research; rather, it’s all about enabling science’s fabled self-correcting nature

Introduction: Jeff Leek points to a post by Alex Holcombe, who disputes the idea that science is self-correcting. Holcombe writes [scroll down to get to his part]: The pace of scientific production has quickened, and self-correction has suffered. Findings that might correct old results are considered less interesting than results from more original research questions. Potential corrections are also more contested. As the competition for space in prestigious journals has become increasingly frenzied, doing and publishing studies that would confirm the rapidly accumulating new discoveries, or would correct them, became a losing proposition. Holcombe picks up on some points that we’ve discussed a lot here in the past year. Here’s Holcombe: In certain subfields, almost all new work appears in only a very few journals, all associated with a single professional society. There is then no way around the senior gatekeepers, who may then suppress corrections with impunity. . . . The bias agai

6 0.88078922 2179 andrew gelman stats-2014-01-20-The AAA Tranche of Subprime Science

7 0.87738597 1878 andrew gelman stats-2013-05-31-How to fix the tabloids? Toward replicable social science research

8 0.87692964 2220 andrew gelman stats-2014-02-22-Quickies

9 0.85842472 601 andrew gelman stats-2011-03-05-Against double-blind reviewing: Political science and statistics are not like biology and physics

10 0.85123366 2301 andrew gelman stats-2014-04-22-Ticket to Baaaaarf

11 0.84130263 1860 andrew gelman stats-2013-05-17-How can statisticians help psychologists do their research better?

12 0.84071034 1435 andrew gelman stats-2012-07-30-Retracted articles and unethical behavior in economics journals?

13 0.83794838 1914 andrew gelman stats-2013-06-25-Is there too much coauthorship in economics (and science more generally)? Or too little?

14 0.83604258 1273 andrew gelman stats-2012-04-20-Proposals for alternative review systems for scientific work

15 0.82940674 1671 andrew gelman stats-2013-01-13-Preregistration of Studies and Mock Reports

16 0.82481617 1599 andrew gelman stats-2012-11-30-“The scientific literature must be cleansed of everything that is fraudulent, especially if it involves the work of a leading academic”

17 0.82208854 2353 andrew gelman stats-2014-05-30-I posted this as a comment on a sociology blog

18 0.82058191 2361 andrew gelman stats-2014-06-06-Hurricanes vs. Himmicanes

19 0.8149814 1683 andrew gelman stats-2013-01-19-“Confirmation, on the other hand, is not sexy”

20 0.81387693 2269 andrew gelman stats-2014-03-27-Beyond the Valley of the Trolls


similar blogs computed by the lda model

lda for this blog:

topicId topicWeight

[(2, 0.02), (12, 0.013), (15, 0.054), (16, 0.129), (21, 0.067), (22, 0.026), (24, 0.127), (30, 0.03), (93, 0.012), (95, 0.038), (98, 0.025), (99, 0.328)]
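LDA topic weights like these are typically estimated by Gibbs sampling or variational inference. A toy collapsed Gibbs sampler, offered as a hedged sketch (the hyperparameters, corpus, and iteration count are illustrative; the model behind the numbers above was presumably trained on the full blog corpus with many more topics):

```python
import random
from collections import defaultdict

def lda_gibbs(docs, k=2, iters=200, alpha=0.1, beta=0.01, seed=0):
    """Collapsed Gibbs sampling for LDA on tokenized docs; returns a
    per-document topic-weight list, each row normalized to sum to 1."""
    rng = random.Random(seed)
    V = len({w for d in docs for w in d})
    ndk = [[0] * k for _ in docs]               # doc-topic counts
    nkw = [defaultdict(int) for _ in range(k)]  # topic-word counts
    nk = [0] * k                                # topic totals
    z = []                                      # current topic of every token
    for d, doc in enumerate(docs):
        zs = []
        for w in doc:
            t = rng.randrange(k)                # random initial assignment
            zs.append(t)
            ndk[d][t] += 1; nkw[t][w] += 1; nk[t] += 1
        z.append(zs)
    for _ in range(iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                t = z[d][i]                     # remove token, then resample
                ndk[d][t] -= 1; nkw[t][w] -= 1; nk[t] -= 1
                weights = [(ndk[d][j] + alpha) * (nkw[j][w] + beta) / (nk[j] + V * beta)
                           for j in range(k)]
                r = rng.random() * sum(weights)
                acc, t = 0.0, k - 1
                for j, wt in enumerate(weights):
                    acc += wt
                    if r < acc:
                        t = j
                        break
                z[d][i] = t
                ndk[d][t] += 1; nkw[t][w] += 1; nk[t] += 1
    return [[(c + alpha) / (len(doc) + k * alpha) for c in ndk[d]]
            for d, doc in enumerate(docs)]

docs = [
    "replicate replication findings replicate".split(),
    "replication findings replicate lab".split(),
    "politics reputation career politics".split(),
]
theta = lda_gibbs(docs)
for d, row in enumerate(theta):
    print(d, [round(w, 3) for w in row])
```

Each document gets a probability distribution over topics; a (topicId, topicWeight) list like the one above is just such a distribution with near-zero topics dropped.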

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.98272252 2218 andrew gelman stats-2014-02-20-Do differences between biology and statistics explain some of our diverging attitudes regarding criticism and replication of scientific claims?


2 0.97757614 1824 andrew gelman stats-2013-04-25-Fascinating graphs from facebook data

Introduction: Yair points us to this page full of wonderful graphs from the Stephen Wolfram blog. Here are a few: And some words: People talk less about video games as they get older, and more about politics and the weather. Men typically talk more about sports and technology than women—and, somewhat surprisingly to me, they also talk more about movies, television and music. Women talk more about pets+animals, family+friends, relationships—and, at least after they reach child-bearing years, health. . . . Some of this is rather depressingly stereotypical. And most of it isn’t terribly surprising to anyone who’s known a reasonable diversity of people of different ages. But what to me is remarkable is how we can see everything laid out in such quantitative detail in the pictures above—kind of a signature of people’s thinking as they go through life. Of course, the pictures above are all based on aggregate data, carefully anonymized. But if we start looking at individuals, we’ll s

3 0.97664833 2137 andrew gelman stats-2013-12-17-Replication backlash


4 0.97431737 586 andrew gelman stats-2011-02-23-A statistical version of Arrow’s paradox

Introduction: Unfortunately, when we deal with scientists, statisticians are often put in a setting reminiscent of Arrow’s paradox, where we are asked to provide estimates that are informative and unbiased and confidence statements that are correct conditional on the data and also on the underlying true parameter. [It's not generally possible for an estimate to do all these things at the same time -- ed.] Larry Wasserman feels that scientists are truly frequentist, and Don Rubin has told me how he feels that scientists interpret all statistical estimates Bayesianly. I have no doubt that both Larry and Don are correct. Voters want lower taxes and more services, and scientists want both Bayesian and frequency coverage; as the saying goes, everybody wants to go to heaven but nobody wants to die.

5 0.97430551 2227 andrew gelman stats-2014-02-27-“What Can we Learn from the Many Labs Replication Project?”

Introduction: Aki points us to this discussion from Rolf Zwaan: The first massive replication project in psychology has just reached completion (several others are to follow). . . . What can we learn from the ManyLabs project? The results here show the effect sizes for the replication efforts (in green and grey) as well as the original studies (in blue). The 99% confidence intervals are for the meta-analysis of the effect size (the green dots); the studies are ordered by effect size. Let’s first consider what we canNOT learn from these data. Of the 13 replication attempts (when the first four are taken together), 11 succeeded and 2 did not (in fact, at some point ManyLabs suggests that a third one, Imagined Contact also doesn’t really replicate). We cannot learn from this that the vast majority of psychological findings will replicate . . . But even if we had an accurate estimate of the percentage of findings that replicate, how useful would that be? Rather than trying to arrive at a mo

6 0.97376966 1878 andrew gelman stats-2013-05-31-How to fix the tabloids? Toward replicable social science research

7 0.97335792 2350 andrew gelman stats-2014-05-27-A whole fleet of gremlins: Looking more carefully at Richard Tol’s twice-corrected paper, “The Economic Effects of Climate Change”

8 0.9713586 2280 andrew gelman stats-2014-04-03-As the boldest experiment in journalism history, you admit you made a mistake

9 0.97124469 1729 andrew gelman stats-2013-02-20-My beef with Brooks: the alternative to “good statistics” is not “no statistics,” it’s “bad statistics”

10 0.96972346 2368 andrew gelman stats-2014-06-11-Bayes in the research conversation

11 0.9688406 879 andrew gelman stats-2011-08-29-New journal on causal inference

12 0.96856636 666 andrew gelman stats-2011-04-18-American Beliefs about Economic Opportunity and Income Inequality

13 0.96829009 2301 andrew gelman stats-2014-04-22-Ticket to Baaaaarf

14 0.9675709 1917 andrew gelman stats-2013-06-28-Econ coauthorship update

15 0.9674511 711 andrew gelman stats-2011-05-14-Steven Rhoads’s book, “The Economist’s View of the World”

16 0.96716315 966 andrew gelman stats-2011-10-20-A qualified but incomplete thanks to Gregg Easterbrook’s editor at Reuters

17 0.96704018 2007 andrew gelman stats-2013-09-03-Popper and Jaynes

18 0.96697277 814 andrew gelman stats-2011-07-21-The powerful consumer?

19 0.96678495 2191 andrew gelman stats-2014-01-29-“Questioning The Lancet, PLOS, And Other Surveys On Iraqi Deaths, An Interview With Univ. of London Professor Michael Spagat”

20 0.96657717 2326 andrew gelman stats-2014-05-08-Discussion with Steven Pinker on research that is attached to data that are so noisy as to be essentially uninformative