andrew_gelman_stats andrew_gelman_stats-2012 andrew_gelman_stats-2012-1626 knowledge-graph by maker-knowledge-mining

1626 andrew gelman stats-2012-12-16-The lamest, grudgingest, non-retraction retraction ever


meta info for this blog

Source: html

Introduction: In politics we’re familiar with the non-apology apology (well described in Wikipedia as “a statement that has the form of an apology but does not express the expected contrition”). Here’s the scientific equivalent: the non-retraction retraction. Sanjay Srivastava points to an amusing yet barfable story of a pair of researchers who (inadvertently, I assume) made a data coding error and were eventually moved to issue a correction notice, but even then refused to fully admit their error. As Srivastava puts it, the story “ended up with Lew [Goldberg] and colleagues [Kibeom Lee and Michael Ashton] publishing a comment on an erratum – the only time I’ve ever heard of that happening in a scientific journal.” From the comment on the erratum: In their “erratum and addendum,” Anderson and Ones (this issue) explained that we had brought their attention to the “potential” of a “possible” misalignment and described the results computed from re-aligned data as being based on a “post-ho


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 In politics we’re familiar with the non-apology apology (well described in Wikipedia as “a statement that has the form of an apology but does not express the expected contrition”). [sent-1, score-0.359]

2 Here’s the scientific equivalent: the non-retraction retraction. [sent-2, score-0.088]

3 Sanjay Srivastava points to an amusing yet barfable story of a pair of researchers who (inadvertently, I assume) made a data coding error and were eventually moved to issue a correction notice, but even then refused to fully admit their error. [sent-3, score-0.494]

4 As Srivastava puts it, the story “ended up with Lew [Goldberg] and colleagues [Kibeom Lee and Michael Ashton] publishing a comment on an erratum – the only time I’ve ever heard of that happening in a scientific journal. [sent-4, score-0.467]

5 That is, Anderson and Ones did not plainly describe the mismatch problem as an error; instead they presented the new results merely as an alternative, supplementary reanalysis. [sent-6, score-0.159]

6 And here’s the unusual rejoinder to the comment on the correction. [sent-7, score-0.148]

7 It’s pretty annoying that, even to the end, they refuse to admit their mistake, instead referring to “clerical errors as those alleged by Goldberg et al. [sent-8, score-0.634]

8 ” and concluding: When any call is made for the retraction of two peer-reviewed and published articles, the onus of proof is on the claimant and the duty of scientific care and caution is manifestly high. [sent-9, score-0.949]

9 (2008) have offered only circumstantial and superficial explanations . [sent-11, score-0.162]

10 do not and cannot provide irrefutable proof of the alleged clerical errors. [sent-15, score-1.084]

11 To call for the retraction of peer-reviewed, published papers on the basis of alleged clerical errors in data handling is sanctimoniously misguided. [sent-16, score-0.787]

12 That’s the best they can do: “no irrefutable proof”? [sent-18, score-0.332]

13 That’s like something the killer says in the last act of a Columbo episode, right before the detective tricks him into giving himself away. [sent-20, score-0.207]

14 Once you say “no irrefutable proof,” you’ve already effectively admitted that you did it. [sent-21, score-0.396]

15 By the way, here’s the “sanctimonious” graph from Goldberg et al. [sent-23, score-0.149]

16 featuring the “no irrefutable proof”: It’s unscientific behavior not to admit error. [sent-24, score-0.612]
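The sentScore column above comes from a tfidf model ranking sentences. The actual pipeline is not shown in this dump; a minimal sketch of one plausible scheme (score a sentence by the summed tf-idf weight of its words, treating each sentence as its own document) might look like:

```python
import math
from collections import Counter

def tfidf_sentence_scores(sentences):
    """Score sentences by the summed tf-idf weight of their words.

    Each sentence is treated as a document: idf = log(N / df), tf is
    the count within the sentence. This is a plausible reconstruction,
    not the actual pipeline that produced the scores above.
    """
    tokenized = [s.lower().split() for s in sentences]
    n = len(tokenized)
    df = Counter()  # document frequency: how many sentences contain each word
    for words in tokenized:
        df.update(set(words))
    scores = []
    for words in tokenized:
        tf = Counter(words)
        scores.append(sum(tf[w] * math.log(n / df[w]) for w in tf))
    return scores

sentences = [
    "the onus of proof is on the claimant",
    "no irrefutable proof of the alleged clerical errors",
    "that is the best they can do",
]
scores = tfidf_sentence_scores(sentences)
```

Words shared by every sentence (like "the") get idf = 0 and contribute nothing, so sentences full of distinctive terms ("irrefutable", "claimant") score higher, which matches the pattern in the list above.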


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('goldberg', 0.383), ('irrefutable', 0.332), ('erratum', 0.29), ('proof', 0.286), ('clerical', 0.249), ('alleged', 0.217), ('et', 0.149), ('srivastava', 0.142), ('apology', 0.139), ('admit', 0.13), ('anderson', 0.128), ('retraction', 0.123), ('error', 0.093), ('comment', 0.089), ('scientific', 0.088), ('ashton', 0.088), ('circumstantial', 0.088), ('barfable', 0.088), ('claimant', 0.088), ('misalignment', 0.088), ('onus', 0.088), ('plainly', 0.088), ('ones', 0.085), ('addendum', 0.083), ('detective', 0.083), ('columbo', 0.083), ('unscientific', 0.083), ('described', 0.081), ('concluding', 0.079), ('manifestly', 0.077), ('superficial', 0.074), ('caution', 0.072), ('supplementary', 0.071), ('errors', 0.071), ('inadvertently', 0.069), ('sanjay', 0.069), ('episode', 0.068), ('featuring', 0.067), ('refuse', 0.067), ('call', 0.066), ('yet', 0.066), ('admitted', 0.064), ('killer', 0.062), ('tricks', 0.062), ('handling', 0.061), ('duty', 0.061), ('rejoinder', 0.059), ('refused', 0.059), ('issue', 0.058), ('computed', 0.057)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.99999994 1626 andrew gelman stats-2012-12-16-The lamest, grudgingest, non-retraction retraction ever


2 0.18796988 1826 andrew gelman stats-2013-04-26-“A Vast Graveyard of Undead Theories: Publication Bias and Psychological Science’s Aversion to the Null”

Introduction: Erin Jonaitis points us to this article by Christopher Ferguson and Moritz Heene, who write: Publication bias remains a controversial issue in psychological science. . . . that the field often constructs arguments to block the publication and interpretation of null results and that null results may be further extinguished through questionable researcher practices. Given that science is dependent on the process of falsification, we argue that these problems reduce psychological science’s capability to have a proper mechanism for theory falsification, thus resulting in the promulgation of numerous “undead” theories that are ideologically popular but have little basis in fact. They mention the infamous Daryl Bem article. It is pretty much only because Bem’s claims are (presumably) false that they got published in a major research journal. Had the claims been true—that is, had Bem run identical experiments, analyzed his data more carefully and objectively, and reported that the r

3 0.10172868 2272 andrew gelman stats-2014-03-29-I agree with this comment

Introduction: The anonymous commenter puts it well : The problem is simple, the researchers are disproving always false null hypotheses and taking this disproof as near proof that their theory is correct.

4 0.10062143 1521 andrew gelman stats-2012-10-04-Columbo does posterior predictive checks

Introduction: I’m already on record as saying that Ronald Reagan was a statistician so I think this is ok too . . . Here’s what Columbo does. He hears the killer’s story and he takes it very seriously (it’s murder, and Columbo never jokes about murder), examines all its implications, and finds where it doesn’t fit the data. Then Columbo carefully examines the discrepancies, tries some model expansion, and eventually concludes that he’s proved there’s a problem. OK, now you’re saying: Yeah, yeah, sure, but how does that differ from any other fictional detective? The difference, I think, is that the tradition is for the detective to find clues and use these to come up with hypotheses, or to trap the killer via internal contradictions in his or her statement. I see Columbo is different—and more in keeping with chapter 6 of Bayesian Data Analysis—in that he is taking the killer’s story seriously and exploring all its implications. That’s the essence of predictive model checking: you t

5 0.09127029 2103 andrew gelman stats-2013-11-16-Objects of the class “Objects of the class”

Introduction: Objects of the class “Foghorn Leghorn” : parodies that are more famous than the original. (“It would be as if everybody were familiar with Duchamp’s Mona-Lisa-with-a-moustache while never having heard of Leonardo’s version.”) Objects of the class “Whoopi Goldberg” : actors who are undeniably talented but are almost always in bad movies, or at least movies that aren’t worthy of their talent. (The opposite: William Holden.) Objects of the class “Weekend at Bernie’s” : low-quality movie, nobody’s actually seen it, but everybody knows what it’s about. (Other examples: Heathers and Zelig.) I love these. We need some more.

6 0.087011404 2337 andrew gelman stats-2014-05-18-Never back down: The culture of poverty and the culture of journalism

7 0.080662221 345 andrew gelman stats-2010-10-15-Things we do on sabbatical instead of actually working

8 0.079387814 1759 andrew gelman stats-2013-03-12-How tall is Jon Lee Anderson?

9 0.078028083 762 andrew gelman stats-2011-06-13-How should journals handle replication studies?

10 0.075411439 2006 andrew gelman stats-2013-09-03-Evaluating evidence from published research

11 0.07392852 576 andrew gelman stats-2011-02-15-With a bit of precognition, you’d have known I was going to post again on this topic, and with a lot of precognition, you’d have known I was going to post today

12 0.072032548 1 andrew gelman stats-2010-04-22-Political Belief Networks: Socio-cognitive Heterogeneity in American Public Opinion

13 0.071132183 562 andrew gelman stats-2011-02-06-Statistician cracks Toronto lottery

14 0.0675731 1435 andrew gelman stats-2012-07-30-Retracted articles and unethical behavior in economics journals?

15 0.067348145 1805 andrew gelman stats-2013-04-16-Memo to Reinhart and Rogoff: I think it’s best to admit your errors and go on from there

16 0.06608884 1844 andrew gelman stats-2013-05-06-Against optimism about social science

17 0.06289091 2353 andrew gelman stats-2014-05-30-I posted this as a comment on a sociology blog

18 0.062807232 1878 andrew gelman stats-2013-05-31-How to fix the tabloids? Toward replicable social science research

19 0.062401697 1762 andrew gelman stats-2013-03-13-“I have no idea who Catalina Garcia is, but she makes a decent ruler”: I don’t know if John Lee “little twerp” Anderson actually suffers from tall-person syndrome, but he is indeed tall

20 0.061844595 1588 andrew gelman stats-2012-11-23-No one knows what it’s like to be the bad man


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.106), (1, -0.025), (2, -0.014), (3, -0.059), (4, -0.019), (5, -0.052), (6, 0.001), (7, -0.025), (8, 0.0), (9, -0.02), (10, 0.012), (11, 0.014), (12, -0.042), (13, -0.01), (14, -0.017), (15, -0.005), (16, 0.002), (17, -0.018), (18, 0.006), (19, -0.025), (20, 0.006), (21, 0.012), (22, -0.015), (23, -0.02), (24, -0.024), (25, 0.007), (26, 0.025), (27, 0.032), (28, 0.018), (29, -0.014), (30, 0.028), (31, 0.064), (32, 0.014), (33, 0.005), (34, 0.012), (35, -0.008), (36, -0.032), (37, -0.023), (38, 0.014), (39, -0.02), (40, -0.007), (41, -0.016), (42, -0.007), (43, 0.001), (44, -0.034), (45, 0.018), (46, 0.007), (47, 0.013), (48, -0.002), (49, 0.001)]
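The simValue entries in these lists are consistent with cosine similarity between per-blog topic-weight vectors (note the same-blog entry scores near 1). That is an assumption about the pipeline; a minimal sketch:

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two topic-weight vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

# First few LSI topic weights from the table above:
this_blog = [0.106, -0.025, -0.014, -0.059]
# A vector compared with itself scores 1.0 (up to floating-point error):
self_sim = cosine_similarity(this_blog, this_blog)
```

The same-blog simValue of 0.94358885 rather than exactly 1 suggests the production pipeline involves truncation or normalization details not recoverable from this dump.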

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.94358885 1626 andrew gelman stats-2012-12-16-The lamest, grudgingest, non-retraction retraction ever


2 0.72112417 1171 andrew gelman stats-2012-02-16-“False-positive psychology”

Introduction: Everybody’s talkin bout this paper by Joseph Simmons, Leif Nelson and Uri Simonsohn, who write : Despite empirical psychologists’ nominal endorsement of a low rate of false-positive findings (≤ .05), flexibility in data collection, analysis, and reporting dramatically increases actual false-positive rates. In many cases, a researcher is more likely to falsely find evidence that an effect exists than to correctly find evidence that it does not. We [Simmons, Nelson, and Simonsohn] present computer simulations and a pair of actual experiments that demonstrate how unacceptably easy it is to accumulate (and report) statistically significant evidence for a false hypothesis. Second, we suggest a simple, low-cost, and straightforwardly effective disclosure-based solution to this problem. The solution involves six concrete requirements for authors and four guidelines for reviewers, all of which impose a minimal burden on the publication process. Whatever you think about these recommend

3 0.71368492 1844 andrew gelman stats-2013-05-06-Against optimism about social science

Introduction: Social science research has been getting pretty bad press recently, what with the Excel buccaneers who didn’t know how to handle data with different numbers of observations per country, and the psychologist who published dozens of papers based on fabricated data, and the Evilicious guy who wouldn’t let people review his data tapes, etc etc. And that’s not even considering Dr. Anil Potti. On the other hand, the revelation of all these problems can be taken as evidence that things are getting better. Psychology researcher Gary Marcus writes : There is something positive that has come out of the crisis of replicability—something vitally important for all experimental sciences. For years, it was extremely difficult to publish a direct replication, or a failure to replicate an experiment, in a good journal. . . . Now, happily, the scientific culture has changed. . . . The Reproducibility Project, from the Center for Open Science is now underway . . . And sociologist Fabio Rojas

4 0.70430571 1826 andrew gelman stats-2013-04-26-“A Vast Graveyard of Undead Theories: Publication Bias and Psychological Science’s Aversion to the Null”

Introduction: Erin Jonaitis points us to this article by Christopher Ferguson and Moritz Heene, who write: Publication bias remains a controversial issue in psychological science. . . . that the field often constructs arguments to block the publication and interpretation of null results and that null results may be further extinguished through questionable researcher practices. Given that science is dependent on the process of falsification, we argue that these problems reduce psychological science’s capability to have a proper mechanism for theory falsification, thus resulting in the promulgation of numerous “undead” theories that are ideologically popular but have little basis in fact. They mention the infamous Daryl Bem article. It is pretty much only because Bem’s claims are (presumably) false that they got published in a major research journal. Had the claims been true—that is, had Bem run identical experiments, analyzed his data more carefully and objectively, and reported that the r

5 0.69888121 2269 andrew gelman stats-2014-03-27-Beyond the Valley of the Trolls

Introduction: In a further discussion of the discussion about the discussion of a paper in Administrative Science Quarterly, Thomas Basbøll writes: I [Basbøll] feel “entitled”, if that’s the right word (actually, I’d say I feel privileged), to express my opinions to anyone who wants to listen, and while I think it does say something about an author whether or not they answer a question (where what it says depends very much on the quality of the question), I don’t think the author has any obligation to me to respond immediately. If I succeed in raising doubts about something in the minds of many readers, then that’s obviously something an author should take seriously. The point is that an author has a responsibility to the readership of the paper, not any one critic. I agree that the ultimate audience is the scholarly community (and, beyond that, the general public) and that the critic is just serving as a conduit, the person who poses the Q in the Q-and-A. That said, I get frustrated frust

6 0.69884938 2040 andrew gelman stats-2013-09-26-Difficulties in making inferences about scientific truth from distributions of published p-values

7 0.68651724 2179 andrew gelman stats-2014-01-20-The AAA Tranche of Subprime Science

8 0.6844936 2006 andrew gelman stats-2013-09-03-Evaluating evidence from published research

9 0.68192369 762 andrew gelman stats-2011-06-13-How should journals handle replication studies?

10 0.68153143 1599 andrew gelman stats-2012-11-30-“The scientific literature must be cleansed of everything that is fraudulent, especially if it involves the work of a leading academic”

11 0.68113446 2217 andrew gelman stats-2014-02-19-The replication and criticism movement is not about suppressing speculative research; rather, it’s all about enabling science’s fabled self-correcting nature

12 0.67769575 1139 andrew gelman stats-2012-01-26-Suggested resolution of the Bem paradox

13 0.67487174 1588 andrew gelman stats-2012-11-23-No one knows what it’s like to be the bad man

14 0.67304814 2353 andrew gelman stats-2014-05-30-I posted this as a comment on a sociology blog

15 0.67295772 1760 andrew gelman stats-2013-03-12-Misunderstanding the p-value

16 0.67239249 1883 andrew gelman stats-2013-06-04-Interrogating p-values

17 0.66514468 2350 andrew gelman stats-2014-05-27-A whole fleet of gremlins: Looking more carefully at Richard Tol’s twice-corrected paper, “The Economic Effects of Climate Change”

18 0.66381383 2137 andrew gelman stats-2013-12-17-Replication backlash

19 0.66315395 601 andrew gelman stats-2011-03-05-Against double-blind reviewing: Political science and statistics are not like biology and physics

20 0.662669 2004 andrew gelman stats-2013-09-01-Post-publication peer review: How it (sometimes) really works


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(13, 0.012), (15, 0.024), (16, 0.062), (21, 0.066), (24, 0.098), (41, 0.255), (42, 0.012), (64, 0.024), (77, 0.016), (81, 0.024), (86, 0.031), (89, 0.043), (99, 0.178)]
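Unlike the dense LSI vector above, the lda weights list only nonzero topics as (topicId, topicWeight) pairs. Before comparing two blogs, such sparse vectors would be expanded to a common length; a sketch, assuming a 100-topic model (the true dimensionality is not stated in this dump):

```python
def to_dense(pairs, n_topics=100):
    """Expand sparse [(topic_id, weight), ...] pairs into a dense vector.

    n_topics=100 is an assumption for illustration; the actual LDA
    model's topic count is not given in the dump.
    """
    v = [0.0] * n_topics
    for topic_id, weight in pairs:
        v[topic_id] = weight
    return v

# First few (topicId, topicWeight) pairs from the lda weights above:
vec = to_dense([(13, 0.012), (15, 0.024), (16, 0.062), (41, 0.255)])
```

Once densified, the same cosine-similarity ranking used for the LSI vectors applies directly.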

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.87164402 1626 andrew gelman stats-2012-12-16-The lamest, grudgingest, non-retraction retraction ever


2 0.83482891 303 andrew gelman stats-2010-09-28-“Genomics” vs. genetics

Introduction: John Cook and Joseph Delaney point to an article by Yurii Aulchenko et al., who write: 54 loci showing strong statistical evidence for association to human height were described, providing us with potential genomic means of human height prediction. In a population-based study of 5748 people, we find that a 54-loci genomic profile explained 4-6% of the sex- and age-adjusted height variance, and had limited ability to discriminate tall/short people. . . . In a family-based study of 550 people, with both parents having height measurements, we find that the Galtonian mid-parental prediction method explained 40% of the sex- and age-adjusted height variance, and showed high discriminative accuracy. . . . The message is that the simple approach of predicting child’s height using a regression model given parents’ average height performs much better than the method they have based on combining 54 genes. They also find that, if you start with the prediction based on parents’ heigh

3 0.79535306 1214 andrew gelman stats-2012-03-15-Of forecasts and graph theory and characterizing a statistical method by the information it uses

Introduction: Wayne Folta points me to “EigenBracket 2012: Using Graph Theory to Predict NCAA March Madness Basketball” and writes, “I [Folta] have got to believe that he’s simply re-invented a statistical method in a graph-ish context, but don’t know enough to judge.” I have not looked in detail at the method being presented here—I’m not much of college basketball fan—but I’d like to use this as an excuse to make one of my favorite general point, which is that a good way to characterize any statistical method is by what information it uses. The basketball ranking method here uses score differentials between teams in the past season. On the plus side, that is better than simply using one-loss records (which (a) discards score differentials and (b) discards information on who played whom). On the minus side, the method appears to be discretizing the scores (thus throwing away information on the exact score differential) and doesn’t use any external information such as external ratings. A

4 0.78433365 685 andrew gelman stats-2011-04-29-Data mining and allergies

Introduction: With all this data floating around, there are some interesting analyses one can do. I came across “The Association of Tree Pollen Concentration Peaks and Allergy Medication Sales in New York City: 2003-2008″ by Perry Sheffield . There they correlate pollen counts with anti-allergy medicine sales – and indeed find that two days after high pollen counts, the medicine sales are the highest. Of course, it would be interesting to play with the data to see *what* tree is actually causing the sales to increase the most. Perhaps this would help the arborists what trees to plant. At the moment they seem to be following a rather sexist approach to tree planting: Ogren says the city could solve the problem by planting only female trees, which don’t produce pollen like male trees do. City arborists shy away from females because many produce messy – or in the case of ginkgos, smelly – fruit that litters sidewalks. In Ogren’s opinion, that’s a mistake. He says the females only pro

5 0.77332699 516 andrew gelman stats-2011-01-14-A new idea for a science core course based entirely on computer simulation

Introduction: Columbia College has for many years had a Core Curriculum, in which students read classics such as Plato (in translation) etc. A few years ago they created a Science core course. There was always some confusion about this idea: On one hand, how much would college freshmen really learn about science by reading the classic writings of Galileo, Laplace, Darwin, Einstein, etc.? And they certainly wouldn’t get much out by puzzling over the latest issues of Nature, Cell, and Physical Review Letters. On the other hand, what’s the point of having them read Dawkins, Gould, or even Brian Greene? These sorts of popularizations give you a sense of modern science (even to the extent of conveying some of the debates in these fields), but reading them might not give the same intellectual engagement that you’d get from wrestling with the Bible or Shakespeare. I have a different idea. What about structuring the entire course around computer programming and simulation? Start with a few weeks t

6 0.76131558 1300 andrew gelman stats-2012-05-05-Recently in the sister blog

7 0.75282419 454 andrew gelman stats-2010-12-07-Diabetes stops at the state line?

8 0.75226468 2185 andrew gelman stats-2014-01-25-Xihong Lin on sparsity and density

9 0.74412644 1669 andrew gelman stats-2013-01-12-The power of the puzzlegraph

10 0.74362701 1013 andrew gelman stats-2011-11-16-My talk at Math for America on Saturday

11 0.72916436 1297 andrew gelman stats-2012-05-03-New New York data research organizations

12 0.72858739 2262 andrew gelman stats-2014-03-23-Win probabilities during a sporting event

13 0.72486293 2204 andrew gelman stats-2014-02-09-Keli Liu and Xiao-Li Meng on Simpson’s paradox

14 0.72332835 1895 andrew gelman stats-2013-06-12-Peter Thiel is writing another book!

15 0.71293211 2311 andrew gelman stats-2014-04-29-Bayesian Uncertainty Quantification for Differential Equations!

16 0.70584321 1923 andrew gelman stats-2013-07-03-Bayes pays!

17 0.70500177 447 andrew gelman stats-2010-12-03-Reinventing the wheel, only more so.

18 0.70409948 2224 andrew gelman stats-2014-02-25-Basketball Stats: Don’t model the probability of win, model the expected score differential.

19 0.69715142 1019 andrew gelman stats-2011-11-19-Validation of Software for Bayesian Models Using Posterior Quantiles

20 0.69544017 1816 andrew gelman stats-2013-04-21-Exponential increase in the number of stat majors