andrew_gelman_stats-2012-1139 knowledge-graph by maker-knowledge-mining

1139 andrew gelman stats-2012-01-26-Suggested resolution of the Bem paradox


meta info for this blog

Source: html

Introduction: There has been increasing discussion about the proliferation of flawed research in psychology and medicine, with some landmark events being John Ioannidis’s article, “Why most published research findings are false” (according to Google Scholar, cited 973 times since its appearance in 2005), the scandals of Marc Hauser and Diederik Stapel, two leading psychology professors who resigned after disclosures of scientific misconduct, and Daryl Bem’s dubious recent paper on ESP, published to much fanfare in the Journal of Personality and Social Psychology, one of the top journals in the field. Alongside all this are the plagiarism scandals, which are uninteresting from a scientific standpoint but are relevant in that, in many cases, neither the institutions housing the plagiarists nor the editors and publishers of the plagiarized material seem to care. Perhaps these universities and publishers are more worried about bad publicity (and maybe lawsuits, given that many of the plagiarism cases …


Summary: the most important sentences generated by the tfidf model (a sketch of this kind of scoring follows the list)

sentIndex sentText sentNum sentScore

1 Alongside all this are the plagiarism scandals, which are uninteresting from a scientific standpoint but are relevant in that, in many cases, neither the institutions housing the plagiarists nor the editors and publishers of the plagiarized material seem to care. [sent-2, score-0.482]

2 Perhaps these universities and publishers are more worried about bad publicity (and maybe lawsuits, given that many of the plagiarism cases involve law professors) than they are about scholarly misconduct. [sent-3, score-0.487]

3 Before going on, perhaps it’s worth briefly reviewing who is hurt by the publication of flawed research. [sent-4, score-0.236]

4 - Fake science news bumping real science news off the front page. [sent-7, score-0.27]

5 - When the errors and scandals come to light, a decline in the prestige of higher-quality scientific work. [sent-8, score-0.28]

6 I’m most interested in presumably sincere and honest scientific efforts that are misunderstood and misrepresented into more than they really are (the breakthrough-of-the-week mentality criticized by Ioannidis and exemplified by Bem). [sent-12, score-0.279]

7 As noted above, the cases of outright fraud have little scientific interest but I brought them up to indicate that, even in extreme cases, the groups whose reputations seem at risk from the unethical behavior often seem more inclined to bury the evidence than to stop the madness. [sent-13, score-0.592]

8 If universities, publishers, and editors are inclined to look away when confronted with out-and-out fraud and plagiarism, we can hardly be surprised if they’re not aggressive against merely dubious research claims. [sent-14, score-0.546]

9 In the last section of this post, I briefly discuss several examples of dubious research that I’ve encountered, just to give a sense of the difficulties that can arise in evaluating such reports. [sent-15, score-0.341]

10 Recall that the Bem paper was published, which means in some sense that its reviewers thought the paper’s flaws were no worse than what usually gets published in JPSP. [sent-26, score-0.245]

11 Long-term, sure, we’d like to improve methodological rigor, but in the meantime a key problem with Bem’s paper was not just its methodological flaws, it was also the implausibility of the claimed results. [sent-27, score-0.418]

12 Instead of publishing speculative results in top journals such as JPSP, Science, Nature, etc. [sent-29, score-0.235]

13 For example, Bem could publish his experiments in some specialized journal of psychological measurement. [sent-31, score-0.435]

14 (I assume there’s also a journal of parapsychology but that’s probably just for true believers; it’s fair enough that Bem etc would like to publish somewhere that outsiders would respect. [sent-34, score-0.237]

15 ) Under this system, JPSP could feel free to reject the Bem paper on the grounds that it’s too speculative to get the journal’s implicit endorsement. [sent-35, score-0.306]

16 This is not suppression or censorship or anything like it, it’s just a recommendation that the paper be sent to a more specialized journal where there will be a chance for criticism and replication. [sent-36, score-0.358]

17 At some point, if the findings are tested and replicated and seem to hold up, then it could be time for a publication in JPSP, Science, or Nature. [sent-37, score-0.288]

18 I’ve encountered a lot of these borderline research findings over the past several years, and my own reaction is typically formed by some mix of my personal scientific knowledge, the statistical work involved, and my general impressions. [sent-44, score-0.409]

19 I’m as well prepared as anyone to evaluate research claims, but as a consumer I can be pretty credulous when the research is not close to my expertise. [sent-62, score-0.292]

20 If there is any coherent message from the above examples, it is that my own rules for how to evaluate research claims are not clear, even to me. [sent-63, score-0.235]
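The sentence scores above come from a tfidf-based extractive summarizer. Below is a minimal sketch of how such scores might be computed, assuming sklearn, English stop-word removal, and mean-tfidf scoring; the dataset's actual tokenization and score scaling are not documented here, so treat every detail as an assumption.

```python
# Minimal sketch: score each sentence by the mean tfidf weight of its terms,
# one plausible way to produce sentScore-style values like those above.
# Tokenization, stop-word handling, and normalization are all assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer

def score_sentences(sentences, corpus):
    # Fit on the full corpus so weights reflect how rare a term is globally
    vectorizer = TfidfVectorizer(stop_words="english")
    vectorizer.fit(corpus)
    scored = []
    for index, sentence in enumerate(sentences):
        vec = vectorizer.transform([sentence])
        # Average tfidf weight over the sentence's in-vocabulary terms
        score = vec.sum() / vec.nnz if vec.nnz else 0.0
        scored.append((index, sentence, float(score)))
    # The highest-scoring sentences form the extractive summary
    return sorted(scored, key=lambda item: item[2], reverse=True)
```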


similar blogs computed by the tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('bem', 0.343), ('jpsp', 0.215), ('dubious', 0.161), ('inclined', 0.145), ('scientific', 0.144), ('specialized', 0.136), ('scandals', 0.136), ('methodological', 0.123), ('journal', 0.121), ('research', 0.117), ('publish', 0.116), ('publicity', 0.115), ('publishers', 0.114), ('ioannides', 0.108), ('flawed', 0.102), ('paper', 0.101), ('cases', 0.099), ('journals', 0.098), ('plagiarism', 0.095), ('skeptical', 0.089), ('findings', 0.085), ('psychology', 0.083), ('science', 0.078), ('published', 0.078), ('speculative', 0.077), ('publication', 0.071), ('problem', 0.071), ('seem', 0.07), ('criticized', 0.069), ('reject', 0.066), ('honest', 0.066), ('flaws', 0.066), ('fraud', 0.064), ('universities', 0.064), ('encountered', 0.063), ('briefly', 0.063), ('could', 0.062), ('professors', 0.062), ('skepticism', 0.062), ('potentially', 0.061), ('medicine', 0.061), ('top', 0.06), ('articles', 0.06), ('review', 0.06), ('claims', 0.06), ('individual', 0.059), ('editors', 0.059), ('evaluate', 0.058), ('personally', 0.057), ('news', 0.057)]
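A list like the one above can be read directly off a document's tfidf vector. Here is a minimal sketch, assuming sklearn and a topN of 50 to match the length of the list shown; the real pipeline's preprocessing is unknown, and the function name is illustrative.

```python
# Minimal sketch: extract the topN (word, tfidf-weight) pairs for one document,
# in the spirit of the wordName/wordTfidf list above. topn=50 matches the list
# shown here; stop-word choice and rounding are assumptions.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def top_words(doc_index, corpus, topn=50):
    vectorizer = TfidfVectorizer(stop_words="english")
    matrix = vectorizer.fit_transform(corpus)      # one row per blog
    vocab = np.array(vectorizer.get_feature_names_out())
    row = matrix[doc_index].toarray().ravel()      # tfidf vector of this blog
    order = row.argsort()[::-1][:topn]             # largest weights first
    return [(vocab[i], round(float(row[i]), 3)) for i in order]
```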

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0000004 1139 andrew gelman stats-2012-01-26-Suggested resolution of the Bem paradox


2 0.33308756 1998 andrew gelman stats-2013-08-25-A new Bem theory

Introduction: The other day I was talking with someone who knows Daryl Bem a bit, and he was sharing his thoughts on that notorious ESP paper that was published in a leading journal in the field but then was mocked, shot down, and was repeatedly replicated with no success. My friend said that overall the Bem paper had positive effects in forcing psychologists to think more carefully about what sorts of research results should or should not be published in top journals, the role of replications, and other things. I expressed agreement and shared my thought that, at some level, I don’t think Bem himself fully believes his ESP effects are real. Why do I say this? Because he seemed oddly content to publish results that were not quite conclusive. He ran a bunch of experiments, looked at the data, and computed some post-hoc p-values in the .01 to .05 range. If he really were confident that the phenomenon was real (that is, that the results would apply to new data), then he could’ve easily run the

3 0.31386223 762 andrew gelman stats-2011-06-13-How should journals handle replication studies?

Introduction: Sanjay Srivastava reports: Recently Ben Goldacre wrote about a group of researchers (Stuart Ritchie, Chris French, and Richard Wiseman) whose null replication of 3 experiments from the infamous Bem ESP paper was rejected by JPSP – the same journal that published Bem’s paper. Srivastava recognizes that JPSP does not usually publish replications but this is a different story because it’s an anti-replication. Here’s the paradox: - From a scientific point of view, the Ritchie et al. results are boring. To find out that there’s no evidence for ESP . . . that adds essentially zero to our scientific understanding. What next, a paper demonstrating that pigeons can fly higher than chickens? Maybe an article in the Journal of the Materials Research Society demonstrating that diamonds can scratch marble but not the reverse?? - But from a science-communication perspective, the null replication is a big deal because it adds credence to my hypothesis that the earlier ESP claims

4 0.27593288 1865 andrew gelman stats-2013-05-20-What happened that the journal Psychological Science published a paper with no identifiable strengths?

Introduction: The other day we discussed that paper on ovulation and voting (you may recall that the authors reported a scattered bunch of comparisons, significance tests, and p-values, and I recommended that they would’ve done better to simply report complete summaries of their data, so that readers could see the comparisons of interest in full context), and I was thinking a bit more about why I was so bothered that it was published in Psychological Science, which I’d thought of as a serious research journal. My concern isn’t just that the paper is bad—after all, lots of bad papers get published—but rather that it had nothing really going for it, except that it was headline bait. It was a survey done on Mechanical Turk, that’s it. No clever design, no clever questions, no care in dealing with nonresponse problems, no innovative data analysis, no nothing. The paper had nothing to offer, except that it had no obvious flaws. Psychology is a huge field full of brilliant researchers.

5 0.25003755 1878 andrew gelman stats-2013-05-31-How to fix the tabloids? Toward replicable social science research

Introduction: This seems to be the topic of the week. Yesterday I posted on the sister blog some further thoughts on those “Psychological Science” papers on menstrual cycles, biceps size, and political attitudes, tied to a horrible press release from the journal Psychological Science hyping the biceps and politics study. Then I was pointed to these suggestions from Richard Lucas and M. Brent Donnellan on improving the replicability and reproducibility of research published in the Journal of Research in Personality: It goes without saying that editors of scientific journals strive to publish research that is not only theoretically interesting but also methodologically rigorous. The goal is to select papers that advance the field. Accordingly, editors want to publish findings that can be reproduced and replicated by other scientists. Unfortunately, there has been a recent “crisis in confidence” among psychologists about the quality of psychological research (Pashler & Wagenmakers, 2012)

6 0.24089664 1435 andrew gelman stats-2012-07-30-Retracted articles and unethical behavior in economics journals?

7 0.2312264 2245 andrew gelman stats-2014-03-12-More on publishing in journals

8 0.20397706 576 andrew gelman stats-2011-02-15-With a bit of precognition, you’d have known I was going to post again on this topic, and with a lot of precognition, you’d have known I was going to post today

9 0.20327136 2217 andrew gelman stats-2014-02-19-The replication and criticism movement is not about suppressing speculative research; rather, it’s all about enabling science’s fabled self-correcting nature

10 0.19859788 1928 andrew gelman stats-2013-07-06-How to think about papers published in low-grade journals?

11 0.18103032 1844 andrew gelman stats-2013-05-06-Against optimism about social science

12 0.17954127 1860 andrew gelman stats-2013-05-17-How can statisticians help psychologists do their research better?

13 0.17550536 2235 andrew gelman stats-2014-03-06-How much time (if any) should we spend criticizing research that’s fraudulent, crappy, or just plain pointless?

14 0.17495009 2244 andrew gelman stats-2014-03-11-What if I were to stop publishing in journals?

15 0.17031756 2006 andrew gelman stats-2013-09-03-Evaluating evidence from published research

16 0.15954246 2269 andrew gelman stats-2014-03-27-Beyond the Valley of the Trolls

17 0.15897164 2353 andrew gelman stats-2014-05-30-I posted this as a comment on a sociology blog

18 0.15683956 1137 andrew gelman stats-2012-01-24-Difficulties in publishing non-replications of implausible findings

19 0.15582083 1272 andrew gelman stats-2012-04-20-More proposals to reform the peer-review system

20 0.15500799 901 andrew gelman stats-2011-09-12-Some thoughts on academic cheating, inspired by Frey, Wegman, Fischer, Hauser, Stapel
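Similarity lists like the one above are typically built by taking cosine similarity between tfidf vectors. A minimal sketch follows, assuming sklearn; the names are illustrative, not the dataset's actual pipeline. Note that the query blog ranks first against itself with a simValue of about 1.0, and floating-point rounding can even push that slightly above 1, as in the 1.0000004 shown above.

```python
# Minimal sketch: rank blogs by cosine similarity of tfidf vectors, producing
# (simIndex, simValue, blogId)-style rows like those above. Function and
# variable names are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def similar_blogs(blog_ids, texts, query_id, topn=20):
    matrix = TfidfVectorizer(stop_words="english").fit_transform(texts)
    q = blog_ids.index(query_id)
    sims = cosine_similarity(matrix[q], matrix).ravel()
    # The query blog itself comes out on top with similarity ~1.0
    ranked = sims.argsort()[::-1][:topn + 1]
    return [(rank + 1, float(sims[i]), blog_ids[i])
            for rank, i in enumerate(ranked)]
```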


similar blogs computed by the lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.303), (1, -0.102), (2, -0.11), (3, -0.228), (4, -0.129), (5, -0.095), (6, 0.002), (7, -0.12), (8, -0.059), (9, 0.019), (10, 0.089), (11, 0.019), (12, -0.076), (13, -0.007), (14, -0.004), (15, -0.058), (16, 0.003), (17, 0.018), (18, 0.006), (19, -0.024), (20, -0.011), (21, 0.015), (22, -0.01), (23, -0.006), (24, -0.058), (25, -0.041), (26, -0.035), (27, -0.006), (28, -0.037), (29, -0.02), (30, -0.008), (31, 0.011), (32, 0.009), (33, 0.001), (34, -0.009), (35, -0.003), (36, -0.012), (37, 0.003), (38, -0.046), (39, 0.014), (40, -0.019), (41, 0.064), (42, -0.003), (43, 0.003), (44, 0.006), (45, 0.026), (46, 0.006), (47, 0.061), (48, -0.007), (49, 0.027)]
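The 50-entry (topicId, topicWeight) vector above is what latent semantic indexing produces: each blog's tfidf vector is projected into a low-dimensional latent space, and similarity is then computed between the projected rows. A minimal sketch with truncated SVD (the standard LSI construction) is below; only the dimensionality of 50 is taken from the vector above, and everything else is an assumption.

```python
# Minimal sketch: LSI as truncated SVD over the tfidf matrix. n_topics=50
# matches the length of the topic vector above; all other settings are assumed.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD

def lsi_vectors(texts, n_topics=50):
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(texts)
    svd = TruncatedSVD(n_components=n_topics)
    # One row per blog: its (topicId -> topicWeight) coordinates in latent space
    return svd.fit_transform(tfidf)
```

Cosine similarity over these rows then yields simValue rankings such as the list that follows, exactly as in the tfidf case.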

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.98158491 1139 andrew gelman stats-2012-01-26-Suggested resolution of the Bem paradox


2 0.92975974 2217 andrew gelman stats-2014-02-19-The replication and criticism movement is not about suppressing speculative research; rather, it’s all about enabling science’s fabled self-correcting nature

Introduction: Jeff Leek points to a post by Alex Holcombe, who disputes the idea that science is self-correcting. Holcombe writes [scroll down to get to his part]: The pace of scientific production has quickened, and self-correction has suffered. Findings that might correct old results are considered less interesting than results from more original research questions. Potential corrections are also more contested. As the competition for space in prestigious journals has become increasingly frenzied, doing and publishing studies that would confirm the rapidly accumulating new discoveries, or would correct them, became a losing proposition. Holcombe picks up on some points that we’ve discussed a lot here in the past year. Here’s Holcombe: In certain subfields, almost all new work appears in only a very few journals, all associated with a single professional society. There is then no way around the senior gatekeepers, who may then suppress corrections with impunity. . . . The bias agai

3 0.92878497 762 andrew gelman stats-2011-06-13-How should journals handle replication studies?

Introduction: Sanjay Srivastava reports: Recently Ben Goldacre wrote about a group of researchers (Stuart Ritchie, Chris French, and Richard Wiseman) whose null replication of 3 experiments from the infamous Bem ESP paper was rejected by JPSP – the same journal that published Bem’s paper. Srivastava recognizes that JPSP does not usually publish replications but this is a different story because it’s an anti-replication. Here’s the paradox: - From a scientific point of view, the Ritchie et al. results are boring. To find out that there’s no evidence for ESP . . . that adds essentially zero to our scientific understanding. What next, a paper demonstrating that pigeons can fly higher than chickens? Maybe an article in the Journal of the Materials Research Society demonstrating that diamonds can scratch marble but not the reverse?? - But from a science-communication perspective, the null replication is a big deal because it adds credence to my hypothesis that the earlier ESP claims

4 0.92394656 1878 andrew gelman stats-2013-05-31-How to fix the tabloids? Toward replicable social science research

Introduction: This seems to be the topic of the week. Yesterday I posted on the sister blog some further thoughts on those “Psychological Science” papers on menstrual cycles, biceps size, and political attitudes, tied to a horrible press release from the journal Psychological Science hyping the biceps and politics study. Then I was pointed to these suggestions from Richard Lucas and M. Brent Donnellan on improving the replicability and reproducibility of research published in the Journal of Research in Personality: It goes without saying that editors of scientific journals strive to publish research that is not only theoretically interesting but also methodologically rigorous. The goal is to select papers that advance the field. Accordingly, editors want to publish findings that can be reproduced and replicated by other scientists. Unfortunately, there has been a recent “crisis in confidence” among psychologists about the quality of psychological research (Pashler & Wagenmakers, 2012)

5 0.91399467 1998 andrew gelman stats-2013-08-25-A new Bem theory

Introduction: The other day I was talking with someone who knows Daryl Bem a bit, and he was sharing his thoughts on that notorious ESP paper that was published in a leading journal in the field but then was mocked, shot down, and was repeatedly replicated with no success. My friend said that overall the Bem paper had positive effects in forcing psychologists to think more carefully about what sorts of research results should or should not be published in top journals, the role of replications, and other things. I expressed agreement and shared my thought that, at some level, I don’t think Bem himself fully believes his ESP effects are real. Why do I say this? Because he seemed oddly content to publish results that were not quite conclusive. He ran a bunch of experiments, looked at the data, and computed some post-hoc p-values in the .01 to .05 range. If he really were confident that the phenomenon was real (that is, that the results would apply to new data), then he could’ve easily run the

6 0.91191727 1435 andrew gelman stats-2012-07-30-Retracted articles and unethical behavior in economics journals?

7 0.90119809 1954 andrew gelman stats-2013-07-24-Too Good To Be True: The Scientific Mass Production of Spurious Statistical Significance

8 0.90111578 1865 andrew gelman stats-2013-05-20-What happened that the journal Psychological Science published a paper with no identifiable strengths?

9 0.89263314 2353 andrew gelman stats-2014-05-30-I posted this as a comment on a sociology blog

10 0.87404722 2179 andrew gelman stats-2014-01-20-The AAA Tranche of Subprime Science

11 0.87019062 2137 andrew gelman stats-2013-12-17-Replication backlash

12 0.86026841 1844 andrew gelman stats-2013-05-06-Against optimism about social science

13 0.85827911 1137 andrew gelman stats-2012-01-24-Difficulties in publishing non-replications of implausible findings

14 0.85446531 2215 andrew gelman stats-2014-02-17-The Washington Post reprints university press releases without editing them

15 0.85250354 601 andrew gelman stats-2011-03-05-Against double-blind reviewing: Political science and statistics are not like biology and physics

16 0.84856415 834 andrew gelman stats-2011-08-01-I owe it all to the haters

17 0.84772456 1928 andrew gelman stats-2013-07-06-How to think about papers published in low-grade journals?

18 0.84380209 1321 andrew gelman stats-2012-05-15-A statistical research project: Weeding out the fraudulent citations

19 0.84060806 1122 andrew gelman stats-2012-01-16-“Groundbreaking or Definitive? Journals Need to Pick One”

20 0.83645809 1273 andrew gelman stats-2012-04-20-Proposals for alternative review systems for scientific work


similar blogs computed by the lda model

lda for this blog:

topicId topicWeight

[(2, 0.012), (15, 0.097), (16, 0.067), (21, 0.031), (24, 0.118), (27, 0.023), (53, 0.015), (55, 0.012), (57, 0.018), (62, 0.092), (72, 0.01), (76, 0.013), (77, 0.011), (86, 0.026), (98, 0.014), (99, 0.286)]
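The sparse (topicId, topicWeight) pairs above are a document-topic distribution from LDA, with near-zero topics omitted. A minimal sketch is below, assuming sklearn, 100 topics (the topic ids above run up to 99), and a small threshold for dropping weak topics; all three are guesses, not documented settings of this dataset.

```python
# Minimal sketch: per-document LDA topic weights, thresholded into sparse
# (topicId, topicWeight) pairs like those above. n_topics=100 is inferred from
# topic ids up to 99 in the list above; the threshold is purely an assumption.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

def lda_topic_weights(texts, n_topics=100, threshold=0.01):
    counts = CountVectorizer(stop_words="english").fit_transform(texts)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    weights = lda.fit_transform(counts)  # each row sums to ~1 over topics
    return [[(topic, round(float(w), 3)) for topic, w in enumerate(row)
             if w > threshold]
            for row in weights]
```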

similar blogs list:

simIndex simValue blogId blogTitle

1 0.96736979 715 andrew gelman stats-2011-05-16-“It doesn’t matter if you believe in God. What matters is if God believes in you.”

Introduction: Mark Chaves sent me this great article on religion and religious practice: After reading a book or article in the scientific study of religion, I [Chaves] wonder if you ever find yourself thinking, “I just don’t believe it.” I have this experience uncomfortably often, and I think it’s because of a pervasive problem in the scientific study of religion. I want to describe that problem and how to overcome it. The problem is illustrated in a story told by Meyer Fortes. He once asked a rainmaker in a native culture he was studying to perform the rainmaking ceremony for him. The rainmaker refused, replying: “Don’t be a fool, whoever makes a rain-making ceremony in the dry season?” The problem is illustrated in a different way in a story told by Jay Demerath. He was in Israel, visiting friends for a Sabbath dinner. The man of the house, a conservative rabbi, stopped in the middle of chanting the prayers to say cheerfully: “You know, we don’t believe in any of this. But then in Judai

same-blog 2 0.96629763 1139 andrew gelman stats-2012-01-26-Suggested resolution of the Bem paradox


3 0.95093906 1779 andrew gelman stats-2013-03-27-“Two Dogmas of Strong Objective Bayesianism”

Introduction: Prasanta Bandyopadhyay and Gordon Brittan write: We introduce a distinction, unnoticed in the literature, between four varieties of objective Bayesianism. What we call ‘strong objective Bayesianism’ is characterized by two claims, that all scientific inference is ‘logical’ and that, given the same background information two agents will ascribe a unique probability to their priors. We think that neither of these claims can be sustained; in this sense, they are ‘dogmatic’. The first fails to recognize that some scientific inference, in particular that concerning evidential relations, is not (in the appropriate sense) logical, the second fails to provide a non-question-begging account of ‘same background information’. We urge that a suitably objective Bayesian account of scientific inference does not require either of the claims. Finally, we argue that Bayesianism needs to be fine-grained in the same way that Bayesians fine-grain their beliefs. I have not read their paper in detai

4 0.94768244 902 andrew gelman stats-2011-09-12-The importance of style in academic writing

Introduction: In my comments on academic cheating , I briefly discussed the question of how some of these papers could’ve been published in the first place, given that they tend to be of low quality. (It’s rare that people plagiarize the good stuff, and, when they do—for example when a senior scholar takes credit for a junior researcher’s contributions without giving proper credit—there’s not always a paper trail, and there can be legitimate differences of opinion about the relative contributions of the participants.) Anyway, to get back to the cases at hand: how did these rulebreakers get published in the first place? The question here is not how did they get away with cheating but how is it that top journals were publishing mediocre research? In the case of the profs who falsified data (Diederik Stapel) or did not follow scientific protocol (Mark Hauser), the answer is clear: By cheating, they were able to get the sort of too-good-to-be-true results which, if they were true, would be

5 0.9471491 274 andrew gelman stats-2010-09-14-Battle of the Americans: Writer at the American Enterprise Institute disparages the American Political Science Association

Introduction: Steven Hayward at the American Enterprise Institute wrote an article, sure to attract the attention of people such as myself, entitled, “The irrelevance of modern political science,” in which he discusses some silly-sounding papers presented at the recent American Political Science Association meeting and then moves to a larger critique of quantitative political science: I [Hayward] have often taken a random article from the American Political Science Review, which resembles a mathematical journal on most of its pages, and asked students if they can envision this method providing the mathematical formula that will deliver peace in the Middle East. Even the dullest students usually grasp the point without difficulty. At the sister blog, John Sides discusses and dismisses Hayward’s arguments, pointing out that, among other things, political science might very well be useful even if it doesn’t deliver peace in the Middle East. After all, the U.S. Army didn’t deliver peace in the Midd

6 0.94640398 2353 andrew gelman stats-2014-05-30-I posted this as a comment on a sociology blog

7 0.94619006 1865 andrew gelman stats-2013-05-20-What happened that the journal Psychological Science published a paper with no identifiable strengths?

8 0.94610488 1435 andrew gelman stats-2012-07-30-Retracted articles and unethical behavior in economics journals?

9 0.9453789 1848 andrew gelman stats-2013-05-09-A tale of two discussion papers

10 0.94512379 1414 andrew gelman stats-2012-07-12-Steven Pinker’s unconvincing debunking of group selection

11 0.94398022 2191 andrew gelman stats-2014-01-29-“Questioning The Lancet, PLOS, And Other Surveys On Iraqi Deaths, An Interview With Univ. of London Professor Michael Spagat”

12 0.94314855 2177 andrew gelman stats-2014-01-19-“The British amateur who debunked the mathematics of happiness”

13 0.94067222 1774 andrew gelman stats-2013-03-22-Likelihood Ratio ≠ 1 Journal

14 0.94066674 2217 andrew gelman stats-2014-02-19-The replication and criticism movement is not about suppressing speculative research; rather, it’s all about enabling science’s fabled self-correcting nature

15 0.94062984 2244 andrew gelman stats-2014-03-11-What if I were to stop publishing in journals?

16 0.9402445 1273 andrew gelman stats-2012-04-20-Proposals for alternative review systems for scientific work

17 0.94001198 2277 andrew gelman stats-2014-03-31-The most-cited statistics papers ever

18 0.940005 1683 andrew gelman stats-2013-01-19-“Confirmation, on the other hand, is not sexy”

19 0.93993956 1395 andrew gelman stats-2012-06-27-Cross-validation (What is it good for?)

20 0.93978393 1998 andrew gelman stats-2013-08-25-A new Bem theory