andrew_gelman_stats-2013-2032 knowledge-graph by maker-knowledge-mining

2032 andrew gelman stats-2013-09-20-“Six red flags for suspect work”


meta info for this blog

Source: html

Introduction: Raghu Parthasarathy sends along this article by C. Glenn Begley listing six questions to ask when worried about unreplicable work in biology: Were experiments performed blinded? (Even animal studies should be blinded when it comes to the recording and interpretation of the data—do you hear that, Mark Hauser?) Were basic experiments repeated? (“If reports fail to state that experiments were repeated, be sceptical.”) Were all the results presented? (That one’s a biggie.) Were there positive and negative controls? (He offers some details from lab experiments.) Were reagents validated? (Whatever.) Were statistical tests appropriate? (I don’t like the idea of statistical “tests” at all, but I agree with his general point.)


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Raghu Parthasarathy sends along this article by C. [sent-1, score-0.225]

2 Glenn Begley listing six questions to ask when worried about unreplicable work in biology: Were experiments performed blinded? [sent-2, score-1.158]

3 (Even animal studies should be blinded when it comes to the recording and interpretation of the data—do you hear that, Mark Hauser? [sent-3, score-1.096]

4 (“If reports fail to state that experiments were repeated, be sceptical. [sent-5, score-0.574]

5 (I don’t like the idea of statistical “tests” at all, but I agree with his general point. [sent-13, score-0.272]
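The summary above ranks a post's sentences by tfidf weight. A minimal sketch of that extractive-summary step, scoring each sentence by the summed tf-idf weight of its words; the real pipeline's tokenization and normalization are unknown, so lowercase whitespace tokens are assumed here, and the example sentences are shortened fragments from the post, not the pipeline's actual input:

```python
import math
from collections import Counter

def tfidf_sentence_scores(sentences):
    """Score each sentence by the summed tf-idf weight of its words.

    Treats each sentence as a "document"; words shared by every
    sentence get idf = log(n/n) = 0 and contribute nothing.
    """
    docs = [s.lower().split() for s in sentences]
    n = len(docs)
    # document frequency: number of sentences containing each word
    df = Counter(w for d in docs for w in set(d))
    scores = []
    for d in docs:
        tf = Counter(d)
        scores.append(sum(tf[w] * math.log(n / df[w]) for w in tf))
    return scores

sents = [
    "were experiments performed blinded",
    "were basic experiments repeated",
    "were all the results presented",
]
print(tfidf_sentence_scores(sents))
```

The top-scoring sentences under this kind of weighting are the ones reproduced in the list above.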


similar blogs computed by the tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('blinded', 0.405), ('experiments', 0.286), ('repeated', 0.276), ('biggie', 0.215), ('reagents', 0.203), ('parthasarathy', 0.203), ('recording', 0.194), ('tests', 0.188), ('glenn', 0.182), ('validated', 0.182), ('begley', 0.173), ('unreplicable', 0.169), ('listing', 0.158), ('animal', 0.154), ('hauser', 0.152), ('controls', 0.138), ('offers', 0.137), ('fail', 0.128), ('biology', 0.123), ('worried', 0.121), ('performed', 0.121), ('lab', 0.119), ('six', 0.112), ('sends', 0.112), ('interpretation', 0.102), ('presented', 0.099), ('hear', 0.097), ('negative', 0.093), ('mark', 0.092), ('appropriate', 0.09), ('reports', 0.088), ('basic', 0.087), ('positive', 0.085), ('details', 0.084), ('statistical', 0.083), ('ask', 0.082), ('studies', 0.073), ('state', 0.072), ('questions', 0.072), ('along', 0.071), ('comes', 0.071), ('agree', 0.063), ('results', 0.059), ('general', 0.055), ('idea', 0.048), ('article', 0.042), ('work', 0.037), ('even', 0.033), ('data', 0.029), ('like', 0.023)]
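The simValue numbers in the lists below are plausibly cosine similarities between sparse tf-idf vectors in exactly the (term, weight) format shown above. A sketch under that assumption (the pipeline's actual similarity measure and normalization are not stated); the second vector here is invented for illustration:

```python
import math

def cosine_sim(vec_a, vec_b):
    """Cosine similarity between two sparse tf-idf vectors
    given as lists of (term, weight) pairs."""
    a, b = dict(vec_a), dict(vec_b)
    dot = sum(w * b[t] for t, w in a.items() if t in b)
    norm_a = math.sqrt(sum(w * w for w in a.values()))
    norm_b = math.sqrt(sum(w * w for w in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# first few weights from the list above; other_blog is hypothetical
this_blog = [('blinded', 0.405), ('experiments', 0.286), ('repeated', 0.276)]
other_blog = [('experiments', 0.31), ('controls', 0.12)]
print(cosine_sim(this_blog, other_blog))
```

A blog compared against itself scores 1.0, which matches the same-blog rows below.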

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0 2032 andrew gelman stats-2013-09-20-“Six red flags for suspect work”


2 0.16783582 299 andrew gelman stats-2010-09-27-what is = what “should be” ??

Introduction: This hidden assumption is a biggie.

3 0.13237834 2137 andrew gelman stats-2013-12-17-Replication backlash

Introduction: Raghuveer Parthasarathy pointed me to an article in Nature by Mina Bissell, who writes , “The push to replicate findings could shelve promising research and unfairly damage the reputations of careful, meticulous scientists.” I can see where she’s coming from: if you work hard day after day in the lab, it’s gotta be a bit frustrating to find all your work questioned, for the frauds of the Dr. Anil Pottis and Diederik Stapels to be treated as a reason for everyone else’s work to be considered guilty until proven innocent. That said, I pretty much disagree with Bissell’s article, and really the best thing I can say about it is that I think it’s a good sign that the push for replication is so strong that now there’s a backlash against it. Traditionally, leading scientists have been able to simply ignore the push for replication. If they are feeling that the replication movement is strong enough that they need to fight it, that to me is good news. I’ll explain a bit in the conte

4 0.10190772 2354 andrew gelman stats-2014-05-30-Mmm, statistical significance . . . Evilicious!

Introduction: Just in case you didn’t check Retraction Watch yet today , Carolyn Johnson reports: The committee painstakingly reconstructed the process of data analysis and determined that Hauser had changed values, causing the result to be statistically significant, an important criterion showing that findings are probably not due to chance. As the man said : His resignation is a serious loss for Harvard, and given the nature of the attack on him, for science generally. As a statistician, I don’t mind if someone is attacked because of cheating with data. Johnson concludes her news article in a pleasantly balanced way: The committee said it carefully considered Hauser’s allegation that people in his laboratory conspired against him, due to academic rivalry and disgruntlement, but did not find evidence to support the idea. The committee also acknowledged that many of Hauser’s overall findings about the cognitive abilities of animals may stand. His results that showed that animals

5 0.09424597 1645 andrew gelman stats-2012-12-31-Statistical modeling, causal inference, and social science

Introduction: Interesting discussion by Berk Ozler (which I found following links from Tyler Cowen) of a study by Erwin Bulte, Lei Pan, Joseph Hella, Gonne Beekman, and Salvatore di Falco that compares two agricultural experiments, one blinded and one unblinded. Bulte et al. find much different results in the two experiments and attribute the difference to expectation effects (when people know they’re receiving an experiment they behave differently); Ozler is skeptical and attributes the different outcomes to various practical differences in implementation of the two experiments. I’m reminded somehow of the notorious sham experiment on the dead chickens, a story that was good for endless discussion in my Bayesian statistics class last semester. I think we can all agree that dead chickens won’t exhibit a placebo effect. Live farmers, though, that’s another story. I don’t have any stake in this particular fight, but on quick reading I’m sympathetic to Ozler’s argument that this all is wel

6 0.094057716 1612 andrew gelman stats-2012-12-08-The Case for More False Positives in Anti-doping Testing

7 0.092236184 167 andrew gelman stats-2010-07-27-Why don’t more medical discoveries become cures?

8 0.088544883 233 andrew gelman stats-2010-08-25-Lauryn Hill update

9 0.084237143 1692 andrew gelman stats-2013-01-25-Freakonomics Experiments

10 0.084186576 676 andrew gelman stats-2011-04-23-The payoff: $650. The odds: 1 in 500,000.

11 0.07832326 2312 andrew gelman stats-2014-04-29-Ken Rice presents a unifying approach to statistical inference and hypothesis testing

12 0.077597946 1901 andrew gelman stats-2013-06-16-Evilicious: Why We Evolved a Taste for Being Bad

13 0.076508641 2132 andrew gelman stats-2013-12-13-And now, here’s something that would make Ed Tufte spin in his . . . ummm, Tufte’s still around, actually, so let’s just say I don’t think he’d like it!

14 0.074472249 357 andrew gelman stats-2010-10-20-Sas and R

15 0.070620224 908 andrew gelman stats-2011-09-14-Type M errors in the lab

16 0.067016013 1171 andrew gelman stats-2012-02-16-“False-positive psychology”

17 0.066245586 2303 andrew gelman stats-2014-04-23-Thinking of doing a list experiment? Here’s a list of reasons why you should think again

18 0.065217271 340 andrew gelman stats-2010-10-13-Randomized experiments, non-randomized experiments, and observational studies

19 0.065014273 1752 andrew gelman stats-2013-03-06-Online Education and Jazz

20 0.06477835 86 andrew gelman stats-2010-06-14-“Too much data”?


similar blogs computed by the lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.086), (1, -0.009), (2, -0.013), (3, -0.062), (4, -0.008), (5, -0.005), (6, -0.028), (7, 0.01), (8, -0.012), (9, -0.001), (10, -0.022), (11, 0.005), (12, -0.005), (13, -0.053), (14, 0.001), (15, -0.007), (16, 0.004), (17, -0.008), (18, -0.014), (19, -0.02), (20, -0.022), (21, -0.004), (22, 0.001), (23, 0.021), (24, 0.005), (25, 0.022), (26, 0.019), (27, -0.014), (28, 0.026), (29, 0.017), (30, 0.007), (31, -0.013), (32, 0.048), (33, -0.002), (34, 0.012), (35, 0.004), (36, -0.0), (37, 0.003), (38, 0.016), (39, 0.028), (40, -0.008), (41, 0.004), (42, 0.039), (43, 0.01), (44, -0.017), (45, 0.011), (46, -0.035), (47, 0.06), (48, 0.007), (49, 0.01)]
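The 50 signed topic weights above are latent-semantic-indexing coordinates: each document's tf-idf vector projected onto the top singular vectors of the term-document matrix, with similarity then computed in that low-rank space. A minimal sketch of the idea with a made-up 3-document, 4-term matrix (the real model's vocabulary, weighting, and rank are unknown):

```python
import numpy as np

def lsi_embed(X, k):
    """Project document rows of a tf-idf matrix X onto the top-k
    right singular vectors; rows of the result are the documents'
    k-dimensional latent (topic) coordinates."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:k].T

def cos(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# toy tf-idf matrix: docs 0 and 1 share terms, doc 2 does not
X = np.array([
    [1.0, 0.8, 0.0, 0.0],
    [0.9, 0.7, 0.1, 0.0],
    [0.0, 0.1, 1.0, 0.9],
])
Z = lsi_embed(X, k=2)
print(cos(Z[0], Z[1]), cos(Z[0], Z[2]))
```

Documents that share vocabulary land close together in the latent space (high cosine), which is what the simValue column below reports.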

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.93592089 2032 andrew gelman stats-2013-09-20-“Six red flags for suspect work”


2 0.66095525 32 andrew gelman stats-2010-05-14-Causal inference in economics

Introduction: Aaron Edlin points me to this issue of the Journal of Economic Perspectives that focuses on statistical methods for causal inference in economics. (Michael Bishop’s page provides some links .) To quickly summarize my reactions to Angrist and Pischke’s book: I pretty much agree with them that the potential-outcomes or natural-experiment approach is the most useful way to think about causality in economics and related fields. My main amendments to Angrist and Pischke would be to recognize that: 1. Modeling is important, especially modeling of interactions . It’s unfortunate to see a debate between experimentalists and modelers. Some experimenters (not Angrist and Pischke) make the mistake of avoiding models: Once they have their experimental data, they check their brains at the door and do nothing but simple differences, not realizing how much more can be learned. Conversely, some modelers are unduly dismissive of experiments and formal observational studies, forgetting t

3 0.62178504 2326 andrew gelman stats-2014-05-08-Discussion with Steven Pinker on research that is attached to data that are so noisy as to be essentially uninformative

Introduction: I pointed Steven Pinker to my post, How much time (if any) should we spend criticizing research that’s fraudulent, crappy, or just plain pointless? , and he responded: Clearly it *is* important to call out publicized research whose conclusions are likely to be false. The only danger is that it’s so easy and fun to criticize, with all the perks of intellectual and moral superiority for so little cost, that there is a moral hazard to go overboard and become a professional slasher and snarker. (That’s a common phenomenon among literary critics, especially in the UK.) There’s also the risk of altering the incentive structure for innovative research, so that researchers stick to the safest kinds of paradigm-twiddling. I think these two considerations were what my late colleague Dan Wegner had in mind when he made the bumbler-pointer contrast — he himself was certainly a discerning critic of social science research. [Just to clarify: Wegner is the person who talked about bumblers and po

4 0.61643171 1171 andrew gelman stats-2012-02-16-“False-positive psychology”

Introduction: Everybody’s talkin bout this paper by Joseph Simmons, Leif Nelson and Uri Simonsohn, who write : Despite empirical psychologists’ nominal endorsement of a low rate of false-positive findings (≤ .05), flexibility in data collection, analysis, and reporting dramatically increases actual false-positive rates. In many cases, a researcher is more likely to falsely find evidence that an effect exists than to correctly find evidence that it does not. We [Simmons, Nelson, and Simonsohn] present computer simulations and a pair of actual experiments that demonstrate how unacceptably easy it is to accumulate (and report) statistically significant evidence for a false hypothesis. Second, we suggest a simple, low-cost, and straightforwardly effective disclosure-based solution to this problem. The solution involves six concrete requirements for authors and four guidelines for reviewers, all of which impose a minimal burden on the publication process. Whatever you think about these recommend

5 0.61254585 2093 andrew gelman stats-2013-11-07-I’m negative on the expression “false positives”

Introduction: After seeing a document sent to me and others regarding the crisis of spurious, statistically-significant research findings in psychology research, I had the following reaction: I am unhappy with the use in the document of the phrase “false positives.” I feel that this expression is unhelpful as it frames science in terms of “true” and “false” claims, which I don’t think is particularly accurate. In particular, in most of the recent disputed Psych Science type studies (the ESP study excepted, perhaps), there is little doubt that there is _some_ underlying effect. The issue, as I see it, as that the underlying effects are much smaller, and much more variable, than mainstream researchers imagine. So what happens is that Psych Science or Nature or whatever will publish a result that is purported to be some sort of universal truth, but it is actually a pattern specific to one data set, one population, and one experimental condition. In a sense, yes, these journals are publishing

6 0.59956431 1671 andrew gelman stats-2013-01-13-Preregistration of Studies and Mock Reports

7 0.59921473 1703 andrew gelman stats-2013-02-02-Interaction-based feature selection and classification for high-dimensional biological data

8 0.59913713 1449 andrew gelman stats-2012-08-08-Gregor Mendel’s suspicious data

9 0.59715378 1826 andrew gelman stats-2013-04-26-“A Vast Graveyard of Undead Theories: Publication Bias and Psychological Science’s Aversion to the Null”

10 0.59592003 2137 andrew gelman stats-2013-12-17-Replication backlash

11 0.59368622 785 andrew gelman stats-2011-07-02-Experimental reasoning in social science

12 0.59150231 864 andrew gelman stats-2011-08-21-Going viral — not!

13 0.59148991 410 andrew gelman stats-2010-11-12-“The Wald method has been the subject of extensive criticism by statisticians for exaggerating results”

14 0.58796924 314 andrew gelman stats-2010-10-03-Disconnect between drug and medical device approval

15 0.58402377 1226 andrew gelman stats-2012-03-22-Story time meets the all-else-equal fallacy and the fallacy of measurement

16 0.5832538 897 andrew gelman stats-2011-09-09-The difference between significant and not significant…

17 0.58219159 2241 andrew gelman stats-2014-03-10-Preregistration: what’s in it for you?

18 0.58216619 1883 andrew gelman stats-2013-06-04-Interrogating p-values

19 0.57911623 1878 andrew gelman stats-2013-05-31-How to fix the tabloids? Toward replicable social science research

20 0.57293493 2050 andrew gelman stats-2013-10-04-Discussion with Dan Kahan on political polarization, partisan information processing. And, more generally, the role of theory in empirical social science


similar blogs computed by the lda model

lda for this blog:

topicId topicWeight

[(9, 0.091), (10, 0.06), (15, 0.135), (28, 0.023), (37, 0.072), (86, 0.024), (87, 0.106), (89, 0.027), (95, 0.017), (99, 0.318)]
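The LDA weights above are a truncated topic distribution in (topicId, topicWeight) form, and blog-to-blog similarity is some distance between those distributions. One common choice is 1 minus the Hellinger distance, sketched below; whether the original pipeline used Hellinger, cosine, or something else is an assumption, and the second distribution is invented for illustration:

```python
import math

def hellinger_sim(p, q):
    """Similarity between two sparse topic distributions given as
    (topicId, topicWeight) pairs: 1 - Hellinger distance. Topics
    missing from a list are treated as weight 0 (the lists above
    are truncated, so masses need not sum to 1)."""
    pd_, qd = dict(p), dict(q)
    h2 = sum((math.sqrt(pd_.get(t, 0.0)) - math.sqrt(qd.get(t, 0.0))) ** 2
             for t in set(pd_) | set(qd)) / 2.0
    return 1.0 - math.sqrt(h2)

# first few topic weights from the list above; `other` is hypothetical
this_blog = [(9, 0.091), (15, 0.135), (87, 0.106), (99, 0.318)]
other = [(15, 0.12), (87, 0.09), (99, 0.30), (42, 0.05)]
print(hellinger_sim(this_blog, other))
```

Identical distributions score 1.0, matching the near-1 same-blog entries in the lists.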

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.93102628 2032 andrew gelman stats-2013-09-20-“Six red flags for suspect work”


2 0.88738286 2278 andrew gelman stats-2014-04-01-Association for Psychological Science announces a new journal

Introduction: The Association for Psychological Science, the leading organization of research psychologists, announced a long-awaited new journal, Speculations on Psychological Science . From the official APS press release: Speculations on Psychological Science, the flagship journal of the Association for Psychological Science, will publish cutting-edge research articles, short reports, and research reports spanning the entire spectrum of the science of psychology. We anticipate that Speculations on Psychological Science will be the highest ranked empirical journal in psychology. We recognize that many of the most noteworthy published claims in psychology and related fields are not well supported by data, hence the need for a journal for the publication of such exciting speculations without misleading claims of certainty. - Sigmund Watson, Prof. (Ret.) Miskatonic University, and editor-in-chief, Speculations on Psychological Science I applaud this development. Indeed, I’ve been talking ab

3 0.88203591 2268 andrew gelman stats-2014-03-26-New research journal on observational studies

Introduction: Dylan Small writes: I am starting an observational studies journal that aims to publish papers on all aspects of observational studies, including study protocols for observational studies, methodologies for observational studies, descriptions of data sets for observational studies, software for observational studies and analyses of observational studies. One of the goals of the journal is to promote the planning of observational studies and to publish study plans for observational studies, like study plans are published for major clinical trials. Regular readers will know my suggestion that scientific journals move away from the idea of being unique publishers of new material and move toward a “newsletter” approach, recommending papers from Arxiv, SSRN, etc. So, instead of going through exhausting review and revision processes, the journal editors would read and review recent preprints on observational studies and then, each month or quarter or whatever, produce a list of pap

4 0.87490678 1888 andrew gelman stats-2013-06-08-New Judea Pearl journal of causal inference

Introduction: Pearl reports that his Journal of Causal Inference has just posted its first issue , which contains a mix of theoretical and applied papers. Pearl writes that they welcome submissions on all aspects of causal inference.

5 0.87176156 1624 andrew gelman stats-2012-12-15-New prize on causality in statstistics education

Introduction: Judea Pearl writes: Can you post the announcement below on your blog? And, by all means, if you find heresy in my interview with Ron Wasserstein, feel free to criticize it with your readers. I responded that I’m not religious, so he’ll have to look for someone else if he’s looking for findings of heresy. I did, however, want to share his announcement: The American Statistical Association has announced a new Prize , “Causality in Statistics Education”, aimed to encourage the teaching of basic causal inference in introductory statistics courses. The motivations for the prize are discussed in an interview I [Pearl] gave to Ron Wasserstein. I hope readers of this list will participate, either by innovating new tools for teaching causation or by nominating candidates who deserve the prize. And speaking about education, Bryant and I [Pearl] have revised our survey of econometrics textbooks, and would love to hear your suggestions on how to restore causal inference to e

6 0.87090945 2171 andrew gelman stats-2014-01-13-Postdoc with Liz Stuart on propensity score methods when the covariates are measured with error

7 0.86358571 1833 andrew gelman stats-2013-04-30-“Tragedy of the science-communication commons”

8 0.86311722 1678 andrew gelman stats-2013-01-17-Wanted: 365 stories of statistics

9 0.8612268 1998 andrew gelman stats-2013-08-25-A new Bem theory

10 0.85647309 2014 andrew gelman stats-2013-09-09-False memories and statistical analysis

11 0.84934407 1334 andrew gelman stats-2012-05-21-Question 11 of my final exam for Design and Analysis of Sample Surveys

12 0.84918928 838 andrew gelman stats-2011-08-04-Retraction Watch

13 0.84859544 2302 andrew gelman stats-2014-04-23-A short questionnaire regarding the subjective assessment of evidence

14 0.84762686 1908 andrew gelman stats-2013-06-21-Interpreting interactions in discrete-data regression

15 0.8462345 1393 andrew gelman stats-2012-06-26-The reverse-journal-submission system

16 0.84614861 1122 andrew gelman stats-2012-01-16-“Groundbreaking or Definitive? Journals Need to Pick One”

17 0.84516037 1385 andrew gelman stats-2012-06-20-Reconciling different claims about working-class voters

18 0.8446154 554 andrew gelman stats-2011-02-04-An addition to the model-makers’ oath

19 0.84424263 1441 andrew gelman stats-2012-08-02-“Based on my experiences, I think you could make general progress by constructing a solution to your specific problem.”

20 0.8441487 1794 andrew gelman stats-2013-04-09-My talks in DC and Baltimore this week