andrew_gelman_stats-2013-1998 knowledge-graph by maker-knowledge-mining

1998 andrew gelman stats-2013-08-25-A new Bem theory


Meta info for this blog

Source: html

Introduction: The other day I was talking with someone who knows Daryl Bem a bit, and he was sharing his thoughts on that notorious ESP paper that was published in a leading journal in the field but then was mocked, shot down, and was repeatedly replicated with no success. My friend said that overall the Bem paper had positive effects in forcing psychologists to think more carefully about what sorts of research results should or should not be published in top journals, the role of replications, and other things. I expressed agreement and shared my thought that, at some level, I don’t think Bem himself fully believes his ESP effects are real. Why do I say this? Because he seemed oddly content to publish results that were not quite conclusive. He ran a bunch of experiments, looked at the data, and computed some post-hoc p-values in the .01 to .05 range. If he really were confident that the phenomenon was real (that is, that the results would apply to new data), then he could’ve easily run the


Summary: the most important sentences generated by the tf-idf model

sentIndex sentText sentNum sentScore

1 The other day I was talking with someone who knows Daryl Bem a bit, and he was sharing his thoughts on that notorious ESP paper that was published in a leading journal in the field but then was mocked, shot down, and was repeatedly replicated with no success. [sent-1, score-0.692]

2 My friend said that overall the Bem paper had positive effects in forcing psychologists to think more carefully about what sorts of research results should or should not be published in top journals, the role of replications, and other things. [sent-2, score-0.77]

3 I expressed agreement and shared my thought that, at some level, I don’t think Bem himself fully believes his ESP effects are real. [sent-3, score-0.288]

4 Because he seemed oddly content to publish results that were not quite conclusive. [sent-5, score-0.176]

5 He ran a bunch of experiments, looked at the data, and computed some post-hoc p-values in the .01 to .05 range. [sent-6, score-0.168]

6 If he really were confident that the phenomenon was real (that is, that the results would apply to new data), then he could’ve easily run the experiments on a bunch more students, gathering enough data so that nobody could doubt his claims. [sent-9, score-0.535]

7 Instead, once he felt he’d reached the statistical significance plateau, he stopped and submitted to the journal. [sent-11, score-0.216]

8 This behavior is consistent with the idea that he did not want to push his claims further, instead wanting to get into print before any new data could reveal problems with his study. [sent-12, score-0.459]

9 But, rather than this publication making the result more plausible, the reverse happened: the implausible claims reduced the perceived validity of psychology studies more generally. [sent-15, score-0.425]

10 The journal didn’t establish the truth of the finding; instead, the finding dragged the journal down. [sent-16, score-0.523]

11 My friend then unleashed an amazing theory: that Bem really really doesn’t believe these ESP claims, that he did this whole project with a straight face to demonstrate problems with our current system of statistical/scientific research and publishing. [sent-17, score-0.359]

12 Never breaking character, Bem will take this secret to his grave. [sent-18, score-0.147]

13 I don’t know, but my friend is the one who knows Bem, and that’s what he tells me. [sent-19, score-0.348]
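The sentText/sentScore table above ranks sentences by tf-idf weight. The dump does not show its exact formula, so the following is a minimal reconstruction assuming the common tf × smoothed-idf scheme; the function name `tfidf_sentence_scores` and the tokenizer are mine, not the pipeline's:

```python
import math
import re

def tfidf_sentence_scores(sentences, corpus_docs):
    """Score each sentence by the sum of its words' tf-idf weights.

    `corpus_docs` is a list of other documents used to estimate idf.
    This is a hypothetical sketch of the kind of scoring behind the
    sentScore column, not the dump's actual code.
    """
    tokenize = lambda text: re.findall(r"[a-z']+", text.lower())
    doc_sets = [set(tokenize(d)) for d in corpus_docs]
    n_docs = len(doc_sets)

    def idf(word):
        # smoothed inverse document frequency
        df = sum(1 for d in doc_sets if word in d)
        return math.log((1 + n_docs) / (1 + df)) + 1

    scores = []
    for sent in sentences:
        words = tokenize(sent)
        if not words:
            scores.append(0.0)
            continue
        # term frequency normalized by sentence length
        tf = {w: words.count(w) / len(words) for w in set(words)}
        scores.append(sum(tf[w] * idf(w) for w in tf))
    return scores
```

Sentences containing corpus-rare words (here, names like "Bem") score higher than sentences built from common words, which is why the name-heavy sentences top the table.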


Similar blogs computed by the tf-idf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('bem', 0.595), ('friend', 0.242), ('esp', 0.234), ('claims', 0.129), ('plateau', 0.117), ('unleashed', 0.117), ('instead', 0.113), ('journal', 0.112), ('dragged', 0.11), ('knows', 0.106), ('finding', 0.104), ('experiments', 0.104), ('published', 0.101), ('forcing', 0.098), ('results', 0.097), ('didn', 0.094), ('bunch', 0.092), ('mocked', 0.092), ('gathering', 0.092), ('ironically', 0.087), ('top', 0.087), ('establish', 0.085), ('replicated', 0.082), ('replications', 0.08), ('daryl', 0.08), ('confident', 0.08), ('oddly', 0.079), ('perceived', 0.077), ('implausible', 0.077), ('breaking', 0.076), ('print', 0.076), ('computed', 0.076), ('shot', 0.074), ('believes', 0.074), ('reached', 0.074), ('stopped', 0.074), ('paper', 0.073), ('reduced', 0.073), ('sharing', 0.073), ('effects', 0.072), ('character', 0.072), ('agreement', 0.072), ('repeatedly', 0.071), ('wanting', 0.071), ('secret', 0.071), ('push', 0.07), ('shared', 0.07), ('phenomenon', 0.07), ('reverse', 0.069), ('submitted', 0.068)]
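A `wordName wordTfidf (topN-words)` listing like the one above can be produced by L2-normalizing a document's tf-idf vector and keeping the largest entries. A sketch under that assumption (the normalization choice and the name `top_tfidf_words` are mine; the dump doesn't state its settings):

```python
import math
import re
from collections import Counter

def top_tfidf_words(doc, corpus_docs, n=10):
    """Return the document's top-n (word, weight) pairs by tf-idf,
    with the weight vector L2-normalized. A hypothetical sketch of
    how the topN-words listing could be generated."""
    tokenize = lambda text: re.findall(r"[a-z']+", text.lower())
    doc_sets = [set(tokenize(d)) for d in corpus_docs]
    n_docs = len(doc_sets)
    counts = Counter(tokenize(doc))
    weights = {}
    for word, c in counts.items():
        # raw count times smoothed idf
        df = sum(1 for s in doc_sets if word in s)
        idf = math.log((1 + n_docs) / (1 + df)) + 1
        weights[word] = c * idf
    # L2-normalize so weights are comparable across documents
    norm = math.sqrt(sum(w * w for w in weights.values())) or 1.0
    ranked = sorted(((w, v / norm) for w, v in weights.items()),
                    key=lambda p: p[1], reverse=True)
    return ranked[:n]
```

The dominant weight for 'bem' (0.595) fits this picture: a word that is frequent in this post but rare elsewhere in the corpus gets both a high count and a high idf.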

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.99999994 1998 andrew gelman stats-2013-08-25-A new Bem theory


2 0.33308756 1139 andrew gelman stats-2012-01-26-Suggested resolution of the Bem paradox

Introduction: There has been an increasing discussion about the proliferation of flawed research in psychology and medicine, with some landmark events being John Ioannides’s article , “Why most published research findings are false” (according to Google Scholar, cited 973 times since its appearance in 2005), the scandals of Marc Hauser and Diederik Stapel, two leading psychology professors who resigned after disclosures of scientific misconduct, and Daryl Bem’s dubious recent paper on ESP, published to much fanfare in Journal of Personality and Social Psychology, one of the top journals in the field. Alongside all this are the plagiarism scandals, which are uninteresting from a scientific context but are relevant in that, in many cases, neither the institutions housing the plagiarists nor the editors and publishers of the plagiarized material seem to care. Perhaps these universities and publishers are more worried about bad publicity (and maybe lawsuits, given that many of the plagiarism cas

3 0.31302384 576 andrew gelman stats-2011-02-15-With a bit of precognition, you’d have known I was going to post again on this topic, and with a lot of precognition, you’d have known I was going to post today

Introduction: Chris Masse points me to this response by Daryl Bem and two statisticians (Jessica Utts and Wesley Johnson) to criticisms by Wagenmakers et.al. of Bem’s recent ESP study. I have nothing to add but would like to repeat a couple bits of my discussions of last month, of here : Classical statistical methods that work reasonably well when studying moderate or large effects (see the work of Fisher, Snedecor, Cochran, etc.) fall apart in the presence of small effects. I think it’s naive when people implicitly assume that the study’s claims are correct, or the study’s statistical methods are weak. Generally, the smaller the effects you’re studying, the better the statistics you need. ESP is a field of small effects and so ESP researchers use high-quality statistics. To put it another way: whatever methodological errors happen to be in the paper in question, probably occur in lots of researcher papers in “legitimate” psychology research. The difference is that when you’re studying a

4 0.30569017 762 andrew gelman stats-2011-06-13-How should journals handle replication studies?

Introduction: Sanjay Srivastava reports : Recently Ben Goldacre wrote about a group of researchers (Stuart Ritchie, Chris French, and Richard Wiseman) whose null replication of 3 experiments from the infamous Bem ESP paper was rejected by JPSP – the same journal that published Bem’s paper. Srivastava recognizes that JPSP does not usually publish replications but this is a different story because it’s an anti-replication. Here’s the paradox: - From a scientific point of view, the Ritchie et al. results are boring. To find out that there’s no evidence for ESP . . . that adds essentially zero to our scientific understanding. What next, a paper demonstrating that pigeons can fly higher than chickens? Maybe an article in the Journal of the Materials Research Society demonstrating that diamonds can scratch marble but not the reverse?? - But from a science-communication perspective, the null replication is a big deal because it adds credence to my hypothesis that the earlier ESP claims

5 0.20422363 1865 andrew gelman stats-2013-05-20-What happened that the journal Psychological Science published a paper with no identifiable strengths?

Introduction: The other day we discussed that paper on ovulation and voting (you may recall that the authors reported a scattered bunch of comparisons, significance tests, and p-values, and I recommended that they would’ve done better to simply report complete summaries of their data, so that readers could see the comparisons of interest in full context), and I was thinking a bit more about why I was so bothered that it was published in Psychological Science, which I’d thought of as a serious research journal. My concern isn’t just that that the paper is bad—after all, lots of bad papers get published—but rather that it had nothing really going for it, except that it was headline bait. It was a survey done on Mechanical Turk, that’s it. No clever design, no clever questions, no care in dealing with nonresponse problems, no innovative data analysis, no nothing. The paper had nothing to offer, except that it had no obvious flaws. Psychology is a huge field full of brilliant researchers.

6 0.18807547 506 andrew gelman stats-2011-01-06-That silly ESP paper and some silliness in a rebuttal as well

7 0.18730894 1137 andrew gelman stats-2012-01-24-Difficulties in publishing non-replications of implausible findings

8 0.16268609 1435 andrew gelman stats-2012-07-30-Retracted articles and unethical behavior in economics journals?

9 0.15241969 995 andrew gelman stats-2011-11-06-Statistical models and actual models

10 0.15068299 511 andrew gelman stats-2011-01-11-One more time on that ESP study: The problem of overestimates and the shrinkage solution

11 0.14880246 1826 andrew gelman stats-2013-04-26-“A Vast Graveyard of Undead Theories: Publication Bias and Psychological Science’s Aversion to the Null”

12 0.13064535 1974 andrew gelman stats-2013-08-08-Statistical significance and the dangerous lure of certainty

13 0.1306196 1860 andrew gelman stats-2013-05-17-How can statisticians help psychologists do their research better?

14 0.12481496 2217 andrew gelman stats-2014-02-19-The replication and criticism movement is not about suppressing speculative research; rather, it’s all about enabling science’s fabled self-correcting nature

15 0.12236651 2014 andrew gelman stats-2013-09-09-False memories and statistical analysis

16 0.11859749 2137 andrew gelman stats-2013-12-17-Replication backlash

17 0.11704412 2241 andrew gelman stats-2014-03-10-Preregistration: what’s in it for you?

18 0.11440656 1832 andrew gelman stats-2013-04-29-The blogroll

19 0.11352526 1928 andrew gelman stats-2013-07-06-How to think about papers published in low-grade journals?

20 0.11188142 2245 andrew gelman stats-2014-03-12-More on publishing in journals


Similar blogs computed by the LSI model

lsi for this blog:

topicId topicWeight

[(0, 0.17), (1, -0.059), (2, -0.064), (3, -0.167), (4, -0.063), (5, -0.073), (6, 0.023), (7, -0.083), (8, -0.035), (9, -0.008), (10, 0.067), (11, 0.039), (12, -0.053), (13, -0.056), (14, 0.021), (15, -0.052), (16, -0.001), (17, 0.01), (18, -0.009), (19, 0.006), (20, -0.035), (21, 0.009), (22, -0.012), (23, 0.01), (24, -0.075), (25, -0.042), (26, -0.034), (27, 0.013), (28, 0.009), (29, -0.017), (30, 0.008), (31, -0.03), (32, 0.022), (33, -0.042), (34, -0.024), (35, -0.043), (36, -0.023), (37, 0.031), (38, -0.061), (39, 0.025), (40, -0.022), (41, 0.069), (42, 0.028), (43, 0.071), (44, 0.013), (45, 0.006), (46, -0.009), (47, 0.067), (48, -0.01), (49, 0.047)]
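Topic-weight vectors like the one above are normally compared with cosine similarity to produce the simValue rankings that follow. A minimal sketch, assuming cosine is indeed the measure this pipeline uses (the dump doesn't say):

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two topic-weight vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    if nu == 0 or nv == 0:
        return 0.0
    return dot / (nu * nv)

def rank_by_similarity(query_vec, blog_vecs):
    """Return (blogId, simValue) pairs sorted descending, mirroring
    the `simIndex simValue blogId` tables. Names are illustrative."""
    ranked = [(bid, cosine_similarity(query_vec, vec))
              for bid, vec in blog_vecs.items()]
    return sorted(ranked, key=lambda p: p[1], reverse=True)
```

This also explains why the same-blog entry tops every table with a simValue near 1.0: a vector compared against itself has cosine similarity exactly 1, with small deviations coming from numerical precision or truncated topic expansions.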

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.97322351 1998 andrew gelman stats-2013-08-25-A new Bem theory


2 0.89895856 762 andrew gelman stats-2011-06-13-How should journals handle replication studies?


3 0.8682493 1139 andrew gelman stats-2012-01-26-Suggested resolution of the Bem paradox


4 0.8506372 1865 andrew gelman stats-2013-05-20-What happened that the journal Psychological Science published a paper with no identifiable strengths?


5 0.83506685 1137 andrew gelman stats-2012-01-24-Difficulties in publishing non-replications of implausible findings

Introduction: Eric Tassone points me to this news article by Christopher Shea on the challenges of debunking ESP. Shea writes : Earlier this year, a major psychology journal published a paper suggesting that there was some evidence for “pre-cognition,” a form of ESP. Stuart Ritchie, a doctoral student at the University of Edinburgh, is part of a team that tried, but failed, to replicate those results. Here, he tells the Chronicle of Higher Education’s Tom Bartlett about the difficulties he’s had getting the results published. Several journals told the team they wouldn’t publish a study that did no more than disprove a previous study. . . . An editor at another journal said he’d “only accept our paper if we ran a fourth experiment where we got a believer [in ESP] to run all the participants, to control for . . . experimenter effects.” My reaction is, this isn’t as easy a question as it might seem. At first, one’s reaction might share Ritchie’s frustration that a shoddy paper by Bem got p

6 0.82569677 1878 andrew gelman stats-2013-05-31-How to fix the tabloids? Toward replicable social science research

7 0.81816399 2217 andrew gelman stats-2014-02-19-The replication and criticism movement is not about suppressing speculative research; rather, it’s all about enabling science’s fabled self-correcting nature

8 0.78413194 2353 andrew gelman stats-2014-05-30-I posted this as a comment on a sociology blog

9 0.77958292 2137 andrew gelman stats-2013-12-17-Replication backlash

10 0.77791655 1774 andrew gelman stats-2013-03-22-Likelihood Ratio ≠ 1 Journal

11 0.7726081 1954 andrew gelman stats-2013-07-24-Too Good To Be True: The Scientific Mass Production of Spurious Statistical Significance

12 0.7715826 1273 andrew gelman stats-2012-04-20-Proposals for alternative review systems for scientific work

13 0.77106178 1435 andrew gelman stats-2012-07-30-Retracted articles and unethical behavior in economics journals?

14 0.76945329 1928 andrew gelman stats-2013-07-06-How to think about papers published in low-grade journals?

15 0.74649107 883 andrew gelman stats-2011-09-01-Arrow’s theorem update

16 0.74286264 2179 andrew gelman stats-2014-01-20-The AAA Tranche of Subprime Science

17 0.73859304 576 andrew gelman stats-2011-02-15-With a bit of precognition, you’d have known I was going to post again on this topic, and with a lot of precognition, you’d have known I was going to post today

18 0.73637003 1122 andrew gelman stats-2012-01-16-“Groundbreaking or Definitive? Journals Need to Pick One”

19 0.73597205 1393 andrew gelman stats-2012-06-26-The reverse-journal-submission system

20 0.73443979 1272 andrew gelman stats-2012-04-20-More proposals to reform the peer-review system


Similar blogs computed by the LDA model

lda for this blog:

topicId topicWeight

[(9, 0.043), (15, 0.189), (16, 0.054), (18, 0.03), (21, 0.036), (24, 0.091), (36, 0.014), (55, 0.072), (63, 0.024), (68, 0.01), (82, 0.024), (95, 0.013), (99, 0.296)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.96657974 1998 andrew gelman stats-2013-08-25-A new Bem theory


2 0.96396506 329 andrew gelman stats-2010-10-08-More on those dudes who will pay your professor $8000 to assign a book to your class, and related stories about small-time sleazoids

Introduction: After noticing these remarks on expensive textbooks and this comment on the company that bribes professors to use their books, Preston McAfee pointed me to this update (complete with a picture of some guy who keeps threatening to sue him but never gets around to it). The story McAfee tells is sad but also hilarious. Especially the part about “smuck.” It all looks like one more symptom of the imploding market for books. Prices for intro stat and econ books go up and up (even mediocre textbooks routinely cost $150), and the publishers put more and more effort into promotion. McAfee adds: I [McAfee] hope a publisher sues me about posting the articles I wrote. Even a takedown notice would be fun. I would be pretty happy to start posting about that, especially when some of them are charging $30 per article. Ted Bergstrom and I used state Freedom of Information acts to extract the journal price deals at state university libraries. We have about 35 of them so far. Like te

3 0.9614231 1541 andrew gelman stats-2012-10-19-Statistical discrimination again

Introduction: Mark Johnstone writes: I’ve recently been investigating a new European Court of Justice ruling on insurance calculations (on behalf of MoneySuperMarket) and I found something related to statistics that caught my attention. . . . The ruling (which comes into effect in December 2012) states that insurers in Europe can no longer provide different premiums based on gender. Despite the fact that women are statistically safer drivers, unless it’s biologically proven there is a causal relationship between being female and being a safer driver, this is now seen as an act of discrimination (more on this from the Wall Street Journal). However, where do you stop with this? What about age? What about other factors? And what does this mean for the application of statistics in general? Is it inherently unjust in this context? One proposal has been to fit ‘black boxes’ into cars so more individual data can be collected, as opposed to relying heavily on aggregates. For fans of data and s

4 0.9601354 834 andrew gelman stats-2011-08-01-I owe it all to the haters

Introduction: Sometimes when I submit an article to a journal it is accepted right away or with minor alterations. But many of my favorite articles were rejected or had to go through an exhausting series of revisions. For example, this influential article had a very hostile referee and we had to seriously push the journal editor to accept it. This one was rejected by one or two journals before finally appearing with discussion. This paper was rejected by the American Political Science Review with no chance of revision and we had to publish it in the British Journal of Political Science, which was a bit odd given that the article was 100% about American politics. And when I submitted this instant classic (actually at the invitation of the editor), the referees found it to be trivial, and the editor did me the favor of publishing it but only by officially labeling it as a discussion of another article that appeared in the same issue. Some of my most influential papers were accepted right

5 0.95808423 1908 andrew gelman stats-2013-06-21-Interpreting interactions in discrete-data regression

Introduction: Mike Johns writes: Are you familiar with the work of Ai and Norton on interactions in logit/probit models? I’d be curious to hear your thoughts. Ai, C.R. and Norton E.C. 2003. Interaction terms in logit and probit models. Economics Letters 80(1): 123-129. A peer ref just cited this paper in reaction to a logistic model we tested and claimed that the “only” way to test an interaction in logit/probit regression is to use the cross derivative method of Ai & Norton. I’ve never heard of this issue or method. It leaves me wondering what the interaction term actually tests (something Ai & Norton don’t discuss) and why such an important discovery is not more widely known. Is this an issue that is of particular relevance to econometric analysis because they approach interactions from the difference-in-difference perspective? Full disclosure, I’m coming from a social science/epi background. Thus, i’m not interested in the d-in-d estimator; I want to know if any variables modify the rela

6 0.95686543 1624 andrew gelman stats-2012-12-15-New prize on causality in statstistics education

7 0.95056999 133 andrew gelman stats-2010-07-08-Gratuitous use of “Bayesian Statistics,” a branding issue?

8 0.94846416 1794 andrew gelman stats-2013-04-09-My talks in DC and Baltimore this week

9 0.94791484 908 andrew gelman stats-2011-09-14-Type M errors in the lab

10 0.94790947 945 andrew gelman stats-2011-10-06-W’man < W’pedia, again

11 0.940772 1833 andrew gelman stats-2013-04-30-“Tragedy of the science-communication commons”

12 0.93845975 1081 andrew gelman stats-2011-12-24-Statistical ethics violation

13 0.93628263 2278 andrew gelman stats-2014-04-01-Association for Psychological Science announces a new journal

14 0.93047702 274 andrew gelman stats-2010-09-14-Battle of the Americans: Writer at the American Enterprise Institute disparages the American Political Science Association

15 0.92878538 1393 andrew gelman stats-2012-06-26-The reverse-journal-submission system

16 0.92870808 762 andrew gelman stats-2011-06-13-How should journals handle replication studies?

17 0.92656779 1779 andrew gelman stats-2013-03-27-“Two Dogmas of Strong Objective Bayesianism”

18 0.91820109 2191 andrew gelman stats-2014-01-29-“Questioning The Lancet, PLOS, And Other Surveys On Iraqi Deaths, An Interview With Univ. of London Professor Michael Spagat”

19 0.9162581 1385 andrew gelman stats-2012-06-20-Reconciling different claims about working-class voters

20 0.91591704 1888 andrew gelman stats-2013-06-08-New Judea Pearl journal of causal inference