andrew_gelman_stats-2013-1865 knowledge-graph by maker-knowledge-mining

1865 andrew gelman stats-2013-05-20-What happened that the journal Psychological Science published a paper with no identifiable strengths?


meta info for this blog

Source: html

Introduction: The other day we discussed that paper on ovulation and voting (you may recall that the authors reported a scattered bunch of comparisons, significance tests, and p-values, and I recommended that they would’ve done better to simply report complete summaries of their data, so that readers could see the comparisons of interest in full context), and I was thinking a bit more about why I was so bothered that it was published in Psychological Science, which I’d thought of as a serious research journal. My concern isn’t just that the paper is bad—after all, lots of bad papers get published—but rather that it had nothing really going for it, except that it was headline bait. It was a survey done on Mechanical Turk, that’s it. No clever design, no clever questions, no care in dealing with nonresponse problems, no innovative data analysis, no nothing. The paper had nothing to offer, except that it had no obvious flaws. Psychology is a huge field full of brilliant researchers.


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 My concern isn’t just that the paper is bad—after all, lots of bad papers get published—but rather that it had nothing really going for it, except that it was headline bait. [sent-2, score-0.797]

2 The paper had nothing to offer, except that it had no obvious flaws. [sent-5, score-0.443]

3 Its top journal can choose among so many papers. [sent-7, score-0.534]

4 To pick this one, a paper that had nothing to offer, that seems to me like a sign of a serious problem. [sent-8, score-0.487]

5 Just to be clear: I’m really really really really not trying to censor such work, and I’m really really really really not saying this work should not be published. [sent-13, score-0.937]

6 What I’m saying is that the top journal in a field should not be publishing such routine work. [sent-14, score-0.826]

7 And, once you decide to start publishing mediocre papers in your top journal, you’re asking for trouble. [sent-19, score-0.784]

8 This was published in top journals and was later found to have some serious methodological issues. [sent-22, score-0.693]

9 OK, they made some mistakes, but I can’t fault a leading journal for publishing this work. [sent-25, score-0.505]

10 The difference is that Kanazawa’s papers were published in a middling place—the Journal of Theoretical Biology—not in a top journal of their field. [sent-34, score-0.948]

11 This paper had a gaping hole (not adjusting for the selection effect arising from less well-funded students dropping out) and I think it was a mistake for it to be published as is—but that’s just something the reviewers didn’t catch. [sent-38, score-0.557]

12 My point is that, in all these cases of the publication of flawed work (and one could add the work of Marc Hauser and Bruno Frey as well), the published papers either had clear strengths or else were not published in top journals. [sent-42, score-1.358]

13 When an interesting, exciting, but flawed paper (such as those by Bem, Hauser, etc) is published in a top journal, that’s too bad, but it’s understandable. [sent-43, score-0.954]

14 When a possibly interesting paper (such as those by Kanazawa) is published in an OK journal, that makes sense too. [sent-44, score-0.492]

15 But when a mediocre paper (which also happens to have serious methodological flaws) is published in a top journal, there’s something seriously wrong going on. [sent-46, score-1.242]

16 There are lots of things that can make a research paper special, and this paper had none of those things (unless anything combining voting and sex in an election year is considered special). [sent-47, score-0.696]

17 In fact, I’ve refrained from linking to the paper here, just to give the authors a break. [sent-51, score-0.416]

18 I’ve done lots of little studies that happened to be flawed, and sometimes my flawed work gets published. [sent-54, score-0.487]

19 I’m criticizing the journal for publishing a mediocre paper with little to offer. [sent-57, score-1.13]

20 That’s not just a retrospective mistake; it seems like a problem with their policies that they would think that such an unremarkable paper could even be seriously considered for publication in the top journal of their field. [sent-58, score-0.992]
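
The per-sentence scores above come from the tfidf summarizer: each sentence is weighted by how much tfidf mass its terms carry. Below is a minimal sketch of how such scores could be reproduced with scikit-learn; the mining tool's actual tokenizer, corpus, and normalization are not documented here, so everything in the example is an illustrative assumption.

from sklearn.feature_extraction.text import TfidfVectorizer

def score_sentences(sentences, corpus):
    # fit idf weights on the full blog corpus, then score each candidate
    # sentence by the total tfidf mass of its terms
    vectorizer = TfidfVectorizer(stop_words="english")
    vectorizer.fit(corpus)
    return vectorizer.transform(sentences).sum(axis=1).A1

# toy usage: two stand-in sentences serve as the whole corpus
corpus = [
    "The paper had nothing to offer, except that it had no obvious flaws.",
    "I am criticizing the journal for publishing a mediocre paper.",
]
print(score_sentences(corpus, corpus))

With a real corpus of all posts, the idf weights would downweight blog-wide filler words and favor distinctive terms like "mediocre" and "journal," which is consistent with the word list in the next section.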


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('journal', 0.296), ('paper', 0.266), ('top', 0.238), ('published', 0.226), ('flawed', 0.224), ('mediocre', 0.216), ('publishing', 0.209), ('kanazawa', 0.185), ('methodological', 0.121), ('papers', 0.121), ('hamilton', 0.12), ('ovulation', 0.12), ('methodologically', 0.117), ('strengths', 0.114), ('nothing', 0.113), ('mistakes', 0.111), ('serious', 0.108), ('really', 0.108), ('nonresponse', 0.103), ('voting', 0.102), ('hauser', 0.1), ('special', 0.092), ('bem', 0.091), ('clever', 0.088), ('flaws', 0.087), ('field', 0.083), ('claiming', 0.081), ('criticizing', 0.08), ('authors', 0.079), ('editors', 0.078), ('work', 0.073), ('ok', 0.073), ('ve', 0.072), ('refrained', 0.071), ('mturk', 0.071), ('schoolyard', 0.067), ('middling', 0.067), ('seriously', 0.067), ('offer', 0.065), ('mistake', 0.065), ('done', 0.065), ('except', 0.064), ('bad', 0.063), ('publication', 0.063), ('little', 0.063), ('lots', 0.062), ('unremarkable', 0.062), ('scattered', 0.062), ('comparisons', 0.06), ('study', 0.059)]
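
The tuples above are this post's tfidf vector (wordName, wordTfidf), and the simValue numbers in the list that follows are presumably cosine similarities between such vectors. A hedged sketch of that neighbor ranking, with short hypothetical snippets standing in for the full post texts:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

posts = {
    "1865": "journal paper top published flawed mediocre publishing",
    "1928": "papers published low-grade journals tabloids mistakes",
}  # hypothetical stand-ins for the full post texts

ids = list(posts)
X = TfidfVectorizer().fit_transform(posts.values())
sims = cosine_similarity(X)  # sims[i, j] plays the role of simValue

# rank neighbors of post 1865, most similar first, excluding itself
q = ids.index("1865")
print(sorted(((sims[q, j], ids[j]) for j in range(len(ids)) if j != q), reverse=True))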

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0 1865 andrew gelman stats-2013-05-20-What happened that the journal Psychological Science published a paper with no identifiable strengths?


2 0.31238997 1928 andrew gelman stats-2013-07-06-How to think about papers published in low-grade journals?

Introduction: We’ve had lots of lively discussions of fatally-flawed papers that have been published in top, top journals such as the American Economic Review or the Journal of Personality and Social Psychology or the American Sociological Review or the tabloids. And we also know about mistakes that make their way into mid-ranking outlets such as the Journal of Theoretical Biology. But what about results that appear in the lower tier of legitimate journals? I was thinking about this after reading a post by Dan Kahan slamming a paper that recently appeared in PLOS-One. I won’t discuss the paper itself here because that’s not my point. Rather, I had some thoughts regarding Kahan’s annoyance that a paper with fatal errors was published at all. I commented as follows: Read between the lines. The paper originally was released in 2009 and was published in 2013 in PLOS-One, which is one step above appearing on Arxiv. PLOS-One publishes some good things (so does Arxiv) but it’s the place

3 0.27593288 1139 andrew gelman stats-2012-01-26-Suggested resolution of the Bem paradox

Introduction: There has been an increasing discussion about the proliferation of flawed research in psychology and medicine, with some landmark events being John Ioannidis’s article, “Why most published research findings are false” (according to Google Scholar, cited 973 times since its appearance in 2005), the scandals of Marc Hauser and Diederik Stapel, two leading psychology professors who resigned after disclosures of scientific misconduct, and Daryl Bem’s dubious recent paper on ESP, published to much fanfare in the Journal of Personality and Social Psychology, one of the top journals in the field. Alongside all this are the plagiarism scandals, which are uninteresting from a scientific context but are relevant in that, in many cases, neither the institutions housing the plagiarists nor the editors and publishers of the plagiarized material seem to care. Perhaps these universities and publishers are more worried about bad publicity (and maybe lawsuits, given that many of the plagiarism cas

4 0.27154002 2245 andrew gelman stats-2014-03-12-More on publishing in journals

Introduction: I’m postponing today’s scheduled post (“Empirical implications of Empirical Implications of Theoretical Models”) to continue the lively discussion from yesterday, What if I were to stop publishing in journals? An example: my papers with Basbøll. Thomas Basbøll and I got into a long discussion on our blogs about business school professor Karl Weick and other cases of plagiarism (copying text without attribution). We felt it useful to take our ideas to the next level and write them up as a manuscript, which we ended up splitting into two papers. At that point I put some effort into getting these papers published, which I eventually did: To throw away data: Plagiarism as a statistical crime went into American Scientist and When do stories work? Evidence and illustration in the social sciences will appear in Sociological Methods and Research. The second paper, in particular, took some effort to place; I got some advice from colleagues in sociology as to where

5 0.25584888 902 andrew gelman stats-2011-09-12-The importance of style in academic writing

Introduction: In my comments on academic cheating, I briefly discussed the question of how some of these papers could’ve been published in the first place, given that they tend to be of low quality. (It’s rare that people plagiarize the good stuff, and, when they do—for example when a senior scholar takes credit for a junior researcher’s contributions without giving proper credit—there’s not always a paper trail, and there can be legitimate differences of opinion about the relative contributions of the participants.) Anyway, to get back to the cases at hand: how did these rulebreakers get published in the first place? The question here is not how did they get away with cheating but how is it that top journals were publishing mediocre research? In the case of the profs who falsified data (Diederik Stapel) or did not follow scientific protocol (Marc Hauser), the answer is clear: By cheating, they were able to get the sort of too-good-to-be-true results which, if they were true, would be

6 0.24512893 1435 andrew gelman stats-2012-07-30-Retracted articles and unethical behavior in economics journals?

7 0.23430806 1393 andrew gelman stats-2012-06-26-The reverse-journal-submission system

8 0.23023306 1860 andrew gelman stats-2013-05-17-How can statisticians help psychologists do their research better?

9 0.21275835 2233 andrew gelman stats-2014-03-04-Literal vs. rhetorical

10 0.20422363 1998 andrew gelman stats-2013-08-25-A new Bem theory

11 0.19694427 1878 andrew gelman stats-2013-05-31-How to fix the tabloids? Toward replicable social science research

12 0.19581854 2353 andrew gelman stats-2014-05-30-I posted this as a comment on a sociology blog

13 0.19560464 2244 andrew gelman stats-2014-03-11-What if I were to stop publishing in journals?

14 0.1953962 2269 andrew gelman stats-2014-03-27-Beyond the Valley of the Trolls

15 0.19277236 675 andrew gelman stats-2011-04-22-Arrow’s other theorem

16 0.19243652 2235 andrew gelman stats-2014-03-06-How much time (if any) should we spend criticizing research that’s fraudulent, crappy, or just plain pointless?

17 0.18500675 2217 andrew gelman stats-2014-02-19-The replication and criticism movement is not about suppressing speculative research; rather, it’s all about enabling science’s fabled self-correcting nature

18 0.18354788 1844 andrew gelman stats-2013-05-06-Against optimism about social science

19 0.17496361 371 andrew gelman stats-2010-10-26-Musical chairs in econ journals

20 0.17334381 1074 andrew gelman stats-2011-12-20-Reading a research paper != agreeing with its claims


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.279), (1, -0.113), (2, -0.084), (3, -0.221), (4, -0.097), (5, -0.113), (6, 0.029), (7, -0.152), (8, -0.068), (9, -0.014), (10, 0.211), (11, 0.044), (12, -0.125), (13, 0.01), (14, 0.032), (15, -0.11), (16, 0.035), (17, 0.043), (18, -0.038), (19, 0.008), (20, -0.027), (21, 0.037), (22, 0.035), (23, -0.008), (24, -0.029), (25, -0.014), (26, -0.077), (27, -0.012), (28, 0.004), (29, 0.023), (30, -0.034), (31, -0.033), (32, 0.01), (33, -0.034), (34, -0.034), (35, 0.004), (36, 0.016), (37, 0.056), (38, -0.106), (39, 0.063), (40, 0.009), (41, 0.039), (42, -0.043), (43, -0.01), (44, -0.021), (45, 0.024), (46, 0.024), (47, 0.04), (48, -0.008), (49, 0.015)]
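
The 50 (topicId, topicWeight) pairs above are this post's coordinates in a latent semantic space. A minimal sketch of that step, assuming LSI is implemented as a truncated SVD of the tfidf matrix; the corpus texts and the reduced topic count below are placeholders, not the mining tool's actual settings:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "journal paper top published flawed mediocre",    # hypothetical post 1865
    "papers published low-grade journals mistakes",   # hypothetical post 1928
    "reverse journal submission publish influential", # hypothetical post 1393
]
X = TfidfVectorizer().fit_transform(docs)
lsi = TruncatedSVD(n_components=2, random_state=0)  # the listing above uses 50
Z = lsi.fit_transform(X)  # rows are per-post topic weights, as listed above
print(cosine_similarity(Z[:1], Z))  # similarity of the first post to all posts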

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.99057931 1865 andrew gelman stats-2013-05-20-What happened that the journal Psychological Science published a paper with no identifiable strengths?


2 0.94667459 1928 andrew gelman stats-2013-07-06-How to think about papers published in low-grade journals?


3 0.91543669 1393 andrew gelman stats-2012-06-26-The reverse-journal-submission system

Introduction: I’ve whined before in this space that some of my most important, innovative, and influential papers are really hard to get published. I’ll go through endless hassle with a journal or sometimes several journals until I find some place willing to publish. It’s just irritating. I was thinking about this recently because a colleague and I just finished a paper that I love love love. But I can’t figure out where to submit it. This is a paper for which I would prefer the so-called reverse-journal-submission approach. Instead of sending the paper to journal after journal after journal, waiting years until an acceptance (recall that, unless you’re Bruno Frey, you’re not allowed to submit the same paper to multiple journals simultaneously), you post the paper on a public site, and then journals compete to see who gets to publish it. I think that system would work well with a paper like this which is offbeat but has a nontrivial chance of becoming highly influential. P.S. Just to clar

4 0.90145147 883 andrew gelman stats-2011-09-01-Arrow’s theorem update

Introduction: Someone pointed me to this letter to Bruno Frey from the editor of the Journal of Economic Perspectives. (Background here, also more here from Olaf Storbeck.) The journal editor was upset about Frey’s self-plagiarism, and Frey responded with an apology: It was a grave mistake on our part for which we deeply apologize. It should never have happened. This is deplorable. . . . Please be assured that we take all precautions and measures that this unfortunate event does not happen again, with any journal. What I wonder is: How “deplorable” does Frey really think this is? You don’t publish a paper in 5 different places by accident! Is Frey saying that he knew this was deplorable back then and he did it anyway, based on a calculation balancing the gains from multiple publications vs. the potential losses if he got caught? Or is he saying that the conduct is deplorable, but he didn’t realize it was deplorable when he did it? My guess is that Frey does not actually think the r

5 0.90068024 1137 andrew gelman stats-2012-01-24-Difficulties in publishing non-replications of implausible findings

Introduction: Eric Tassone points me to this news article by Christopher Shea on the challenges of debunking ESP. Shea writes: Earlier this year, a major psychology journal published a paper suggesting that there was some evidence for “pre-cognition,” a form of ESP. Stuart Ritchie, a doctoral student at the University of Edinburgh, is part of a team that tried, but failed, to replicate those results. Here, he tells the Chronicle of Higher Education’s Tom Bartlett about the difficulties he’s had getting the results published. Several journals told the team they wouldn’t publish a study that did no more than disprove a previous study. . . . An editor at another journal said he’d “only accept our paper if we ran a fourth experiment where we got a believer [in ESP] to run all the participants, to control for . . . experimenter effects.” My reaction is, this isn’t as easy a question as it might seem. At first, one’s reaction might share Ritchie’s frustration that a shoddy paper by Bem got p

6 0.89530909 834 andrew gelman stats-2011-08-01-I owe it all to the haters

7 0.89138669 2353 andrew gelman stats-2014-05-30-I posted this as a comment on a sociology blog

8 0.88308704 1435 andrew gelman stats-2012-07-30-Retracted articles and unethical behavior in economics journals?

9 0.88198584 1321 andrew gelman stats-2012-05-15-A statistical research project: Weeding out the fraudulent citations

10 0.88130254 1139 andrew gelman stats-2012-01-26-Suggested resolution of the Bem paradox

11 0.87268674 1998 andrew gelman stats-2013-08-25-A new Bem theory

12 0.87003863 2233 andrew gelman stats-2014-03-04-Literal vs. rhetorical

13 0.86963308 371 andrew gelman stats-2010-10-26-Musical chairs in econ journals

14 0.86900645 1122 andrew gelman stats-2012-01-16-“Groundbreaking or Definitive? Journals Need to Pick One”

15 0.86837041 2217 andrew gelman stats-2014-02-19-The replication and criticism movement is not about suppressing speculative research; rather, it’s all about enabling science’s fabled self-correcting nature

16 0.84552729 675 andrew gelman stats-2011-04-22-Arrow’s other theorem

17 0.84093374 762 andrew gelman stats-2011-06-13-How should journals handle replication studies?

18 0.83102244 1118 andrew gelman stats-2012-01-14-A model rejection letter

19 0.82313275 1273 andrew gelman stats-2012-04-20-Proposals for alternative review systems for scientific work

20 0.82244092 2004 andrew gelman stats-2013-09-01-Post-publication peer review: How it (sometimes) really works


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(15, 0.139), (16, 0.119), (18, 0.021), (21, 0.017), (24, 0.142), (28, 0.016), (45, 0.017), (48, 0.053), (63, 0.017), (86, 0.019), (99, 0.311)]
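
The sparse (topicId, topicWeight) pairs above read like an LDA document-topic mixture: the ids run up to 99, suggesting roughly 100 topics, with near-zero topics omitted. A hedged sketch of that step; the corpus, topic count, and reporting threshold are assumptions for illustration:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "journal paper top published flawed mediocre publishing",    # hypothetical
    "artificial scarcity journal publication hiring promotion",  # hypothetical
]
counts = CountVectorizer().fit_transform(docs)  # LDA wants raw counts, not tfidf
lda = LatentDirichletAllocation(n_components=2, random_state=0)  # ~100 above
theta = lda.fit_transform(counts)  # rows sum to 1: per-post topic mixtures
# report only the non-negligible topics, as in the listing above
print([(t, round(w, 3)) for t, w in enumerate(theta[0]) if w > 0.01])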

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.9719733 1865 andrew gelman stats-2013-05-20-What happened that the journal Psychological Science published a paper with no identifiable strengths?


2 0.96828246 2353 andrew gelman stats-2014-05-30-I posted this as a comment on a sociology blog

Introduction: I discussed two problems: 1. An artificial scarcity applied to journal publication, a scarcity which I believe is being enforced based on a monetary principle of not wanting to reduce the value of publication. The problem is that journals don’t just spread information and improve communication, they also represent chits for hiring and promotion. I’d prefer to separate these two aspects of publication. To keep these functions tied together seems to me like a terrible mistake. It would be as if, instead of using dollar bills as currency, we were to just use paper, and then if the government kept paper artificially scarce to retain the value of money, so that we were reduced to scratching notes to each other on walls and tables. 2. The discontinuous way in which unpublished papers and submissions to journals are taken as highly suspect and requiring a strong justification of all methods and assumptions, but once a paper becomes published its conclusions are taken as true unless

3 0.96802604 945 andrew gelman stats-2011-10-06-W’man < W’pedia, again

Introduction: Blogger Deep Climate looks at another paper by the 2002 recipient of the American Statistical Association’s Founders award. This time it’s not funny, it’s just sad. Here’s Wikipedia on simulated annealing: By analogy with this physical process, each step of the SA algorithm replaces the current solution by a random “nearby” solution, chosen with a probability that depends on the difference between the corresponding function values and on a global parameter T (called the temperature), that is gradually decreased during the process. The dependency is such that the current solution changes almost randomly when T is large, but increasingly “downhill” as T goes to zero. The allowance for “uphill” moves saves the method from becoming stuck at local minima—which are the bane of greedier methods. And here’s Wegman: During each step of the algorithm, the variable that will eventually represent the minimum is replaced by a random solution that is chosen according to a temperature
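
The Wikipedia passage quoted in that excerpt is a complete description of the simulated annealing accept/reject rule: always take downhill moves, take uphill moves with probability exp(-delta/T), and cool T toward zero. For concreteness, a minimal runnable sketch of that rule; the objective, neighborhood, and cooling schedule are illustrative assumptions, not anything from the papers under discussion:

import math, random

def anneal(f, x0, neighbor, t0=1.0, cooling=0.995, steps=10_000):
    x, fx = x0, f(x0)
    t = t0
    for _ in range(steps):
        y = neighbor(x)
        fy = f(y)
        # uphill moves are allowed with temperature-dependent probability,
        # which is what keeps the search from sticking at local minima
        if fy < fx or random.random() < math.exp((fx - fy) / t):
            x, fx = y, fy
        t *= cooling  # gradually decrease the temperature
    return x, fx

# toy usage: minimize a bumpy 1-D function
f = lambda x: x * x + 10 * math.sin(x)
print(anneal(f, x0=5.0, neighbor=lambda x: x + random.uniform(-1, 1)))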

4 0.96733654 1779 andrew gelman stats-2013-03-27-“Two Dogmas of Strong Objective Bayesianism”

Introduction: Prasanta Bandyopadhyay and Gordon Brittan write : We introduce a distinction, unnoticed in the literature, between four varieties of objective Bayesianism. What we call ‘strong objective Bayesianism’ is characterized by two claims, that all scientific inference is ‘logical’ and that, given the same background information two agents will ascribe a unique probability to their priors. We think that neither of these claims can be sustained; in this sense, they are ‘dogmatic’. The first fails to recognize that some scientific inference, in particular that concerning evidential relations, is not (in the appropriate sense) logical, the second fails to provide a non-question-begging account of ‘same background information’. We urge that a suitably objective Bayesian account of scientific inference does not require either of the claims. Finally, we argue that Bayesianism needs to be fine-grained in the same way that Bayesians fine-grain their beliefs. I have not read their paper in detai

5 0.9634589 329 andrew gelman stats-2010-10-08-More on those dudes who will pay your professor $8000 to assign a book to your class, and related stories about small-time sleazoids

Introduction: After noticing these remarks on expensive textbooks and this comment on the company that bribes professors to use their books, Preston McAfee pointed me to this update (complete with a picture of some guy who keeps threatening to sue him but never gets around to it). The story McAfee tells is sad but also hilarious. Especially the part about “smuck.” It all looks like one more symptom of the imploding market for books. Prices for intro stat and econ books go up and up (even mediocre textbooks routinely cost $150), and the publishers put more and more effort into promotion. McAfee adds: I [McAfee] hope a publisher sues me about posting the articles I wrote. Even a takedown notice would be fun. I would be pretty happy to start posting about that, especially when some of them are charging $30 per article. Ted Bergstrom and I used state Freedom of Information acts to extract the journal price deals at state university libraries. We have about 35 of them so far. Like te

6 0.96098852 1908 andrew gelman stats-2013-06-21-Interpreting interactions in discrete-data regression

7 0.96093911 902 andrew gelman stats-2011-09-12-The importance of style in academic writing

8 0.96041518 133 andrew gelman stats-2010-07-08-Gratuitous use of “Bayesian Statistics,” a branding issue?

9 0.96026063 1541 andrew gelman stats-2012-10-19-Statistical discrimination again

10 0.95997047 2191 andrew gelman stats-2014-01-29-“Questioning The Lancet, PLOS, And Other Surveys On Iraqi Deaths, An Interview With Univ. of London Professor Michael Spagat”

11 0.95951217 2217 andrew gelman stats-2014-02-19-The replication and criticism movement is not about suppressing speculative research; rather, it’s all about enabling science’s fabled self-correcting nature

12 0.95680016 274 andrew gelman stats-2010-09-14-Battle of the Americans: Writer at the American Enterprise Institute disparages the American Political Science Association

13 0.95467508 1998 andrew gelman stats-2013-08-25-A new Bem theory

14 0.95383674 1435 andrew gelman stats-2012-07-30-Retracted articles and unethical behavior in economics journals?

15 0.95370924 2227 andrew gelman stats-2014-02-27-“What Can we Learn from the Many Labs Replication Project?”

16 0.95276916 2177 andrew gelman stats-2014-01-19-“The British amateur who debunked the mathematics of happiness”

17 0.95193493 1878 andrew gelman stats-2013-05-31-How to fix the tabloids? Toward replicable social science research

18 0.95142257 1774 andrew gelman stats-2013-03-22-Likelihood Ratio ≠ 1 Journal

19 0.95118964 576 andrew gelman stats-2011-02-15-With a bit of precognition, you’d have known I was going to post again on this topic, and with a lot of precognition, you’d have known I was going to post today

20 0.95094156 1848 andrew gelman stats-2013-05-09-A tale of two discussion papers