andrew_gelman_stats andrew_gelman_stats-2013 andrew_gelman_stats-2013-1671 knowledge-graph by maker-knowledge-mining

1671 andrew gelman stats-2013-01-13-Preregistration of Studies and Mock Reports


meta info for this blog

Source: html

Introduction: The traditional system of scientific and scholarly publishing is breaking down in two different directions. On one hand, we are moving away from relying on a small set of journals as gatekeepers: the number of papers and research projects is increasing, the number of publication outlets is increasing, and important manuscripts are being posted on SSRN, Arxiv, and other nonrefereed sites. At the same time, many researchers are worried about the profusion of published claims that turn out not to replicate or, in plain language, to be false. This concern is not new–some prominent discussions include Rosenthal (1979), Ioannidis (2005), and Vul et al. (2009)–but there is a growing sense that the scientific signal is being swamped by noise. I recently had the opportunity to comment in the journal Political Analysis on two papers, one by Humphreys, Sierra, and Windt, and one by Monogan, on the preregistration of studies and mock reports. Here’s the issue of the journal. Given the hi


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 The traditional system of scientific and scholarly publishing is breaking down in two different directions. [sent-1, score-0.285]

2 On one hand, we are moving away from relying on a small set of journals as gatekeepers: the number of papers and research projects is increasing, the number of publication outlets is increasing, and important manuscripts are being posted on SSRN, Arxiv, and other nonrefereed sites. [sent-2, score-0.757]

3 At the same time, many researchers are worried about the profusion of published claims that turn out not to replicate or, in plain language, to be false. [sent-3, score-0.516]

4 This concern is not new–some prominent discussions include Rosenthal (1979), Ioannidis (2005), and Vul et al. [sent-4, score-0.164]

5 (2009)–but there is a growing sense that the scientific signal is being swamped by noise. [sent-5, score-0.413]

6 I recently had the opportunity to comment in the journal Political Analysis on two papers, one by Humphreys, Sierra, and Windt, and one by Monogan, on the preregistration of studies and mock reports. [sent-6, score-0.671]

7 Given the high cost of collecting data compared with the relatively low cost of writing a mock report, I recommend the “mock report” strategy be done more often, especially for researchers planning a new and expensive study. [sent-8, score-1.184]

8 The mock report is a form of pilot study and has similar virtues. [sent-9, score-0.75]

9 In the long term, I believe we as social scientists need to move beyond the paradigm in which a single study can establish a definitive result. [sent-10, score-0.502]

10 But registration of studies seems like a useful step in any case. [sent-12, score-0.267]
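The sentScore values above come from a tfidf-based extractive summarizer. The exact scoring formula behind sentScore isn't given, but one common scheme is to score each sentence by the total tfidf mass of its terms. A minimal sketch under that assumption (the sentences here are illustrative, and this is not the original pipeline's code):

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy sentences standing in for the post's sentences (illustrative only)
sentences = [
    "The traditional system of scientific and scholarly publishing is breaking down.",
    "Registration of studies seems like a useful step in any case.",
    "Here is the issue of the journal.",
]

vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(sentences)

# One plausible sentence score: the sum of the sentence's tfidf weights
scores = X.sum(axis=1).A1

# Rank sentences by score, highest first, as in the summary table above
ranked = sorted(zip(scores, sentences), reverse=True)
for score, sent in ranked:
    print(round(float(score), 3), sent)
```

Longer, term-rich sentences naturally accumulate more tfidf mass, which is consistent with the long sentences above scoring highest.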


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('mock', 0.402), ('increasing', 0.157), ('report', 0.157), ('studies', 0.152), ('papers', 0.152), ('nonsignificant', 0.149), ('profusion', 0.149), ('gatekeepers', 0.14), ('procedural', 0.14), ('ssrn', 0.134), ('dichotomy', 0.134), ('rosenthal', 0.134), ('swamped', 0.134), ('cost', 0.132), ('vul', 0.126), ('manuscripts', 0.126), ('humphreys', 0.123), ('innovations', 0.117), ('preregistration', 0.117), ('registration', 0.115), ('outlets', 0.115), ('ioannidis', 0.11), ('beyond', 0.109), ('pilot', 0.108), ('establish', 0.108), ('hand', 0.105), ('definitive', 0.104), ('relying', 0.103), ('plain', 0.102), ('scientific', 0.099), ('collecting', 0.099), ('paradigm', 0.098), ('arxiv', 0.098), ('integration', 0.098), ('breaking', 0.098), ('growing', 0.091), ('replicate', 0.091), ('researchers', 0.09), ('signal', 0.089), ('prominent', 0.088), ('number', 0.088), ('scholarly', 0.088), ('expensive', 0.086), ('projects', 0.085), ('worried', 0.084), ('planning', 0.083), ('study', 0.083), ('relatively', 0.08), ('new', 0.08), ('concern', 0.076)]

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.99999982 1671 andrew gelman stats-2013-01-13-Preregistration of Studies and Mock Reports


2 0.17590931 1878 andrew gelman stats-2013-05-31-How to fix the tabloids? Toward replicable social science research

Introduction: This seems to be the topic of the week. Yesterday I posted on the sister blog some further thoughts on those “Psychological Science” papers on menstrual cycles, biceps size, and political attitudes, tied to a horrible press release from the journal Psychological Science hyping the biceps and politics study. Then I was pointed to these suggestions from Richard Lucas and M. Brent Donnellan have on improving the replicability and reproducibility of research published in the Journal of Research in Personality: It goes without saying that editors of scientific journals strive to publish research that is not only theoretically interesting but also methodologically rigorous. The goal is to select papers that advance the field. Accordingly, editors want to publish findings that can be reproduced and replicated by other scientists. Unfortunately, there has been a recent “crisis in confidence” among psychologists about the quality of psychological research (Pashler & Wagenmakers, 2012)

3 0.15654081 2241 andrew gelman stats-2014-03-10-Preregistration: what’s in it for you?

Introduction: Chris Chambers pointed me to a blog by someone called Neuroskeptic who suggested that I preregister my political science studies: So when Andrew Gelman (let’s say) is going to start using a new approach, he goes on Twitter, or on his blog, and posts a bare-bones summary of what he’s going to do. Then he does it. If he finds something interesting, he writes it up as a paper, citing that tweet or post as his preregistration. . . . I think this approach has some benefits but doesn’t really address the issues of preregistration that concern me—but I’d like to spend an entire blog post explaining why. I have two key points: 1. If your study is crap, preregistration might fix it. Preregistration is fine—indeed, the wide acceptance of preregistration might well motivate researchers to not do so many crap studies—but it doesn’t solve fundamental problems of experimental design. 2. “Preregistration” seems to mean different things in different scenarios: A. When the concern is

4 0.14583822 2245 andrew gelman stats-2014-03-12-More on publishing in journals

Introduction: I’m postponing today’s scheduled post (“Empirical implications of Empirical Implications of Theoretical Models”) to continue the lively discussion from yesterday, What if I were to stop publishing in journals? . An example: my papers with Basbøll Thomas Basbøll and I got into a long discussion on our blogs about business school professor Karl Weick and other cases of plagiarism copying text without attribution. We felt it useful to take our ideas to the next level and write them up as a manuscript, which ended up being logical to split into two papers. At that point I put some effort into getting these papers published, which I eventually did: To throw away data: Plagiarism as a statistical crime went into American Scientist and When do stories work? Evidence and illustration in the social sciences will appear in Sociological Methods and Research. The second paper, in particular, took some effort to place; I got some advice from colleagues in sociology as to where

5 0.14004658 1435 andrew gelman stats-2012-07-30-Retracted articles and unethical behavior in economics journals?

Introduction: Stan Liebowitz writes: Have you ever heard of an article being retracted in economics? I know you have only been doing this for a few years but I suspect that the answer is that none or very few are retracted. No economist would ever deceive another. There is virtually no interest in detecting cheating. And what good would that do if there is no form of punishment? I say this because I think I have found a case in one of our top journals but the editor allowed the authors of the original article to write an anonymous referee report defending themselves and used this report to reject my comment even though an independent referee recommended publication. My reply: I wonder how this sort of thing will change in the future as journals become less important. My impression is that, on one side, researchers are increasingly citing NBER reports, Arxiv preprints, and the like; while, from the other direction, journals such as Science and Nature are developing the reputations of being “t

6 0.13998936 2268 andrew gelman stats-2014-03-26-New research journal on observational studies

7 0.1263126 1683 andrew gelman stats-2013-01-19-“Confirmation, on the other hand, is not sexy”

8 0.12260039 1139 andrew gelman stats-2012-01-26-Suggested resolution of the Bem paradox

9 0.12119923 1209 andrew gelman stats-2012-03-12-As a Bayesian I want scientists to report their data non-Bayesianly

10 0.12063613 1928 andrew gelman stats-2013-07-06-How to think about papers published in low-grade journals?

11 0.11467488 2244 andrew gelman stats-2014-03-11-What if I were to stop publishing in journals?

12 0.10657631 2137 andrew gelman stats-2013-12-17-Replication backlash

13 0.10567283 1865 andrew gelman stats-2013-05-20-What happened that the journal Psychological Science published a paper with no identifiable strengths?

14 0.10489032 695 andrew gelman stats-2011-05-04-Statistics ethics question

15 0.1044811 2232 andrew gelman stats-2014-03-03-What is the appropriate time scale for blogging—the day or the week?

16 0.1018936 902 andrew gelman stats-2011-09-12-The importance of style in academic writing

17 0.10153849 2040 andrew gelman stats-2013-09-26-Difficulties in making inferences about scientific truth from distributions of published p-values

18 0.10031221 1774 andrew gelman stats-2013-03-22-Likelihood Ratio ≠ 1 Journal

19 0.098974995 2217 andrew gelman stats-2014-02-19-The replication and criticism movement is not about suppressing speculative research; rather, it’s all about enabling science’s fabled self-correcting nature

20 0.098788336 1273 andrew gelman stats-2012-04-20-Proposals for alternative review systems for scientific work
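The simValue column above is presumably the cosine similarity between tfidf vectors, which would explain why the same-blog entry scores ≈ 1. A hedged sketch of how such a ranked list can be computed (the corpus is an illustrative stand-in):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy corpus standing in for the blog posts (illustrative only)
docs = [
    "preregistration of studies and mock reports",
    "how to fix the tabloids toward replicable social science research",
    "demographics and financial crises",
]

X = TfidfVectorizer(stop_words="english").fit_transform(docs)
sims = cosine_similarity(X)

# A document's similarity with itself is 1.0 up to floating point,
# matching the same-blog simValue of ~0.9999998 above
query = 0
ranked = sorted(enumerate(sims[query]), key=lambda t: -t[1])
for idx, val in ranked:
    print(idx, round(float(val), 4))
```

Because tfidf rows are non-negative, these similarities fall in [0, 1], matching the range of the simValue column.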


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.178), (1, -0.043), (2, -0.031), (3, -0.16), (4, -0.046), (5, -0.049), (6, -0.008), (7, -0.07), (8, -0.066), (9, 0.0), (10, 0.052), (11, 0.031), (12, -0.017), (13, 0.003), (14, 0.006), (15, -0.012), (16, 0.018), (17, 0.008), (18, 0.004), (19, 0.003), (20, 0.001), (21, 0.025), (22, -0.032), (23, 0.001), (24, -0.018), (25, 0.027), (26, 0.042), (27, 0.004), (28, 0.023), (29, -0.019), (30, -0.04), (31, -0.046), (32, 0.013), (33, 0.026), (34, 0.024), (35, 0.061), (36, -0.06), (37, 0.035), (38, 0.027), (39, 0.005), (40, 0.037), (41, -0.002), (42, 0.016), (43, 0.053), (44, 0.043), (45, -0.043), (46, -0.042), (47, 0.032), (48, -0.009), (49, -0.028)]
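LSI topic weights like the vector above are the projection of a document's tfidf vector onto the top singular directions of the term–document matrix. A minimal sketch using scikit-learn's TruncatedSVD, the usual way to approximate LSI (the corpus and dimensionality are illustrative; the list above uses 50 components):

```python
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer

# Toy corpus standing in for the blog posts (illustrative only)
docs = [
    "preregistration of studies and mock reports",
    "replication and criticism in social science",
    "journals publishing papers and preprints",
    "observational studies and study protocols",
]

X = TfidfVectorizer(stop_words="english").fit_transform(docs)

# Project each document into a low-dimensional latent semantic space
svd = TruncatedSVD(n_components=2, random_state=0)
weights = svd.fit_transform(X)

# Each row is one document's (topicId, topicWeight) vector, as listed above
for topic_id, w in enumerate(weights[0]):
    print(topic_id, round(float(w), 3))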

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.98638904 1671 andrew gelman stats-2013-01-13-Preregistration of Studies and Mock Reports


2 0.88506746 2268 andrew gelman stats-2014-03-26-New research journal on observational studies

Introduction: Dylan Small writes: I am starting an observational studies journal that aims to publish papers on all aspects of observational studies, including study protocols for observational studies, methodologies for observational studies, descriptions of data sets for observational studies, software for observational studies and analyses of observational studies. One of the goals of the journal is to promote the planning of observational studies and to publish study plans for observational studies, like study plans are published for major clinical trials. Regular readers will know my suggestion that scientific journals move away from the idea of being unique publishers of new material and move toward a “newsletter” approach, recommending papers from Arxiv, SSRN, etc. So, instead of going through exhausting review and revision processes, the journal editors would read and review recent preprints on observational studies and then, each month or quarter or whatever, produce a list of pap

3 0.85280174 908 andrew gelman stats-2011-09-14-Type M errors in the lab

Introduction: Jeff points us to this news article by Asher Mullard: Bayer halts nearly two-thirds of its target-validation projects because in-house experimental findings fail to match up with published literature claims, finds a first-of-a-kind analysis on data irreproducibility. An unspoken industry rule alleges that at least 50% of published studies from academic laboratories cannot be repeated in an industrial setting, wrote venture capitalist Bruce Booth in a recent blog post. A first-of-a-kind analysis of Bayer’s internal efforts to validate ‘new drug target’ claims now not only supports this view but suggests that 50% may be an underestimate; the company’s in-house experimental data do not match literature claims in 65% of target-validation projects, leading to project discontinuation. . . . Khusru Asadullah, Head of Target Discovery at Bayer, and his colleagues looked back at 67 target-validation projects, covering the majority of Bayer’s work in oncology, women’s health and cardiov

4 0.83379656 1774 andrew gelman stats-2013-03-22-Likelihood Ratio ≠ 1 Journal

Introduction: Dan Kahan writes : The basic idea . . . is to promote identification of study designs that scholars who disagree about a proposition would agree would generate evidence relevant to their competing conjectures—regardless of what studies based on such designs actually find. Articles proposing designs of this sort would be selected for publication and only then be carried out, by the proposing researchers with funding from the journal, which would publish the results too. Now I [Kahan] am aware of a set of real journals that have a similar motivation. One is the Journal of Articles in Support of the Null Hypothesis, which as its title implies publishes papers reporting studies that fail to “reject” the null. Like JASNH, LR ≠1J would try to offset the “file drawer” bias and like bad consequences associated with the convention of publishing only findings that are “significant at p < 0.05." But it would try to do more. By publishing studies that are deemed to have valid designs an

5 0.83256251 2179 andrew gelman stats-2014-01-20-The AAA Tranche of Subprime Science

Introduction: In our new ethics column for Chance , Eric Loken and I write about our current favorite topic: One of our ongoing themes when discussing scientific ethics is the central role of statistics in recognizing and communicating uncer- tainty. Unfortunately, statistics—and the scientific process more generally—often seems to be used more as a way of laundering uncertainty, processing data until researchers and consumers of research can feel safe acting as if various scientific hypotheses are unquestionably true. . . . We have in mind an analogy with the notorious AAA-class bonds created during the mid-2000s that led to the subprime mortgage crisis. Lower-quality mortgages—that is, mortgages with high probability of default and, thus, high uncertainty—were packaged and transformed into financial instruments that were (in retrospect, falsely) characterized as low risk. There was a tremendous interest in these securities, not just among the most unscrupulous market manipulators, but in a

6 0.82775086 1878 andrew gelman stats-2013-05-31-How to fix the tabloids? Toward replicable social science research

7 0.81458062 1959 andrew gelman stats-2013-07-28-50 shades of gray: A research story

8 0.80074084 2301 andrew gelman stats-2014-04-22-Ticket to Baaaaarf

9 0.79206872 2220 andrew gelman stats-2014-02-22-Quickies

10 0.781789 2137 andrew gelman stats-2013-12-17-Replication backlash

11 0.7743988 2217 andrew gelman stats-2014-02-19-The replication and criticism movement is not about suppressing speculative research; rather, it’s all about enabling science’s fabled self-correcting nature

12 0.77369767 1055 andrew gelman stats-2011-12-13-Data sharing update

13 0.77041757 1163 andrew gelman stats-2012-02-12-Meta-analysis, game theory, and incentives to do replicable research

14 0.76306993 1273 andrew gelman stats-2012-04-20-Proposals for alternative review systems for scientific work

15 0.75909048 1291 andrew gelman stats-2012-04-30-Systematic review of publication bias in studies on publication bias

16 0.75274199 1122 andrew gelman stats-2012-01-16-“Groundbreaking or Definitive? Journals Need to Pick One”

17 0.75105977 1683 andrew gelman stats-2013-01-19-“Confirmation, on the other hand, is not sexy”

18 0.74942595 1139 andrew gelman stats-2012-01-26-Suggested resolution of the Bem paradox

19 0.74885845 1054 andrew gelman stats-2011-12-12-More frustrations trying to replicate an analysis published in a reputable journal

20 0.7433663 933 andrew gelman stats-2011-09-30-More bad news: The (mis)reporting of statistical results in psychology journals


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(2, 0.025), (15, 0.042), (16, 0.112), (18, 0.013), (24, 0.174), (40, 0.112), (42, 0.01), (47, 0.011), (64, 0.014), (65, 0.015), (69, 0.014), (81, 0.04), (86, 0.04), (89, 0.012), (95, 0.02), (98, 0.011), (99, 0.256)]
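LDA topic weights, unlike LSI's, are per-document topic proportions: non-negative and summing to 1 (the sparse list above presumably shows only weights above some threshold, with topic 99 dominating at 0.256). A hedged sketch with scikit-learn (LDA is conventionally fit on raw term counts rather than tfidf; the corpus and topic count are illustrative):

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Toy corpus standing in for the blog posts (illustrative only)
docs = [
    "preregistration of studies and mock reports",
    "replication and criticism in social science",
    "journals publishing papers and preprints",
    "observational studies and study protocols",
]

# LDA operates on term counts rather than tfidf weights
X = CountVectorizer(stop_words="english").fit_transform(docs)

lda = LatentDirichletAllocation(n_components=3, random_state=0)
theta = lda.fit_transform(X)

# Each row is a document's topic-proportion vector; rows sum to 1,
# like the (topicId, topicWeight) pairs above
for topic_id, w in enumerate(theta[0]):
    print(topic_id, round(float(w), 3))
```

Documents can then be compared by their topic-proportion vectors, which is how the lda-based similarity list below would be built.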

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.96488488 1671 andrew gelman stats-2013-01-13-Preregistration of Studies and Mock Reports


2 0.9534564 1679 andrew gelman stats-2013-01-18-Is it really true that only 8% of people who buy Herbalife products are Herbalife distributors?

Introduction: A reporter emailed me the other day with a question about a case I’d never heard of before, a company called Herbalife that is being accused of being a pyramid scheme. The reporter pointed me to this document which describes a survey conducted by “a third party firm called Lieberman Research”: Two independent studies took place using real time (aka “river”) sampling, in which respondents were intercepted across a wide array of websites Sample size of 2,000 adults 18+ matched to U.S. census on age, gender, income, region and ethnicity “River sampling” in this case appears to mean, according to the reporter, that “people were invited into it through online ads.” The survey found that 5% of U.S. households had purchased Herbalife products during the past three months (with a “0.8% margin of error,” ha ha ha). They they did a multiplication and a division to estimate that only 8% of households who bought these products were Herbalife distributors: 480,000 active distributor

3 0.95284092 149 andrew gelman stats-2010-07-16-Demographics: what variable best predicts a financial crisis?

Introduction: A few weeks ago I wrote about the importance of demographics in political trends . Today I’d like to show you how demographics help predict financial crises. Here are a few examples of countries with major crises. The working-age population in Japan peaked in the 1995 census . The 1995 Financial Crisis in Japan The working-age USA population growth slows down to unprecedented levels in 2008 (see figure below) Financial crisis of 2007-2010 . (Also, notice previous dips in 2001, 1991 and 1981, and consider the list of recessions .) China’s working-age population, age 15 to 64, has grown continuously. The labor pool will peak in 2015 and then decline. There are more charts in Demography and Growth report by the Reserve Bank of Australia: Wikipedia surveys the causes of the financial crisis, such as “liquidity shortfall in the United States banking system caused by the overvaluation of assets”. Oh my! Slightly better than the usu

4 0.94128788 1945 andrew gelman stats-2013-07-18-“How big is your chance of dying in an ordinary play?”

Introduction: At first glance, that’s what I thought Tyler Cowen was asking . I assumed he was asking about the characters, not the audience, as watching a play seems like a pretty safe activity (A. Lincoln excepted). Characters in plays die all the time. I wonder what the chance is? Something between 5% and 10%, I’d guess. I’d guess your chance of dying (as a character) in a movie would be higher. On the other hand, movies have lots of extras who just show up and leave; if you count them maybe the risk isn’t so high. Perhaps the right way to do this is to weight people by screen time? P.S. The Mezzanine aside, works of art and literature tend to focus on the dramatic moments of lives, so it makes sense that death will be overrepresented.

5 0.92977637 548 andrew gelman stats-2011-02-01-What goes around . . .

Introduction: A few weeks ago I delivered a 10-minute talk on statistical graphics that went so well, it was the best-received talk I’ve ever given. The crowd was raucous. Then some poor sap had to go on after me. He started by saying that my talk was a hard act to follow. And, indeed, the audience politely listened but did not really get involved in his presentation. Boy did I feel smug. More recently I gave a talk on Stan, at an entirely different venue. And this time the story was the exact opposite. Jim Demmel spoke first and gave a wonderful talk on optimization for linear algebra (it was an applied math conference). Then I followed, and I never really grabbed the crowd. My talk was not a disaster but it didn’t really work. This was particularly frustrating because I’m really excited about Stan and this was a group of researchers I wouldn’t usually have a chance to reach. It was the plenary session at the conference. Anyway, now I know how that guy felt from last month. My talk

6 0.92939806 1198 andrew gelman stats-2012-03-05-A cloud with a silver lining

7 0.92909783 1019 andrew gelman stats-2011-11-19-Validation of Software for Bayesian Models Using Posterior Quantiles

8 0.92482984 243 andrew gelman stats-2010-08-30-Computer models of the oil spill

9 0.91887552 2248 andrew gelman stats-2014-03-15-Problematic interpretations of confidence intervals

10 0.91800332 2179 andrew gelman stats-2014-01-20-The AAA Tranche of Subprime Science

11 0.9169634 447 andrew gelman stats-2010-12-03-Reinventing the wheel, only more so.

12 0.91691953 639 andrew gelman stats-2011-03-31-Bayes: radical, liberal, or conservative?

13 0.91682583 898 andrew gelman stats-2011-09-10-Fourteen magic words: an update

14 0.91662931 1171 andrew gelman stats-2012-02-16-“False-positive psychology”

15 0.91636258 1637 andrew gelman stats-2012-12-24-Textbook for data visualization?

16 0.91593492 1206 andrew gelman stats-2012-03-10-95% intervals that I don’t believe, because they’re from a flat prior I don’t believe

17 0.91570199 799 andrew gelman stats-2011-07-13-Hypothesis testing with multiple imputations

18 0.91529 1422 andrew gelman stats-2012-07-20-Likelihood thresholds and decisions

19 0.91436964 1117 andrew gelman stats-2012-01-13-What are the important issues in ethics and statistics? I’m looking for your input!

20 0.91403222 783 andrew gelman stats-2011-06-30-Don’t stop being a statistician once the analysis is done