andrew_gelman_stats andrew_gelman_stats-2012 andrew_gelman_stats-2012-1122 knowledge-graph by maker-knowledge-mining

1122 andrew gelman stats-2012-01-16-“Groundbreaking or Definitive? Journals Need to Pick One”


meta information for this blog

Source: html

Introduction: Sanjay Srivastava writes : As long as a journal pursues a strategy of publishing “wow” studies, it will inevitably contain more unreplicable findings and unsupportable conclusions than equally rigorous but more “boring” journals. Groundbreaking will always be higher-risk. And definitive will be the territory of journals that publish meta-analyses and reviews. . . . Most conclusions, even those in peer-reviewed papers in rigorous journals, should be regarded as tentative at best; but press releases and other public communication rarely convey that. . . . His message to all of us: Our standard response to a paper in Science, Nature, or Psychological Science should be “wow, that’ll be really interesting if it replicates.” And in our teaching and our engagement with the press and public, we need to make clear why that is the most enthusiastic response we can justify.


Summary: the most important sentences generated by the tfidf model (a sketch of this kind of sentence scoring follows the list)

sentIndex sentText [sent-sentNum, score-sentScore]

1 Sanjay Srivastava writes : As long as a journal pursues a strategy of publishing “wow” studies, it will inevitably contain more unreplicable findings and unsupportable conclusions than equally rigorous but more “boring” journals. [sent-1, score-1.539]

2 And definitive will be the territory of journals that publish meta-analyses and reviews. [sent-3, score-0.631]

3 Most conclusions, even those in peer-reviewed papers in rigorous journals, should be regarded as tentative at best; but press releases and other public communication rarely convey that. [sent-7, score-1.646]

4 His message to all of us: Our standard response to a paper in Science, Nature, or Psychological Science should be “wow, that’ll be really interesting if it replicates. [sent-11, score-0.458]

5 ” And in our teaching and our engagement with the press and public, we need to make clear why that is the most enthusiastic response we can justify. [sent-12, score-0.985]
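The [sent-N, score-S] annotations above pair each extracted sentence with its position in the post and a tf-idf score. The dataset does not document its exact pipeline, but the sketch below shows one common way to produce such a ranking: score each sentence by the sum of its tf-idf term weights. The sentence splitting, the scikit-learn vectorizer settings, and the function name rank_sentences are illustrative assumptions, not the dataset's actual code.

# A minimal sketch of tf-idf sentence scoring for extractive summaries.
# Assumptions (not from the source): naive period-based sentence splitting and
# scikit-learn's default TfidfVectorizer settings.
from sklearn.feature_extraction.text import TfidfVectorizer

def rank_sentences(post_text, top_k=5):
    # Naive segmentation; a real pipeline would use a proper sentence tokenizer.
    sentences = [s.strip() for s in post_text.split(".") if s.strip()]
    vec = TfidfVectorizer(stop_words="english")
    X = vec.fit_transform(sentences)      # one tf-idf row per sentence
    scores = X.sum(axis=1).A1             # sentence score = summed term weights
    order = scores.argsort()[::-1][:top_k]
    # Return (sentence index, text, score), mirroring the [sent-N, score-S] rows.
    return [(int(i), sentences[i], float(scores[i])) for i in order]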


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('wow', 0.284), ('rigorous', 0.277), ('press', 0.225), ('groundbreaking', 0.219), ('conclusions', 0.209), ('journals', 0.199), ('enthusiastic', 0.198), ('tentative', 0.198), ('territory', 0.185), ('releases', 0.185), ('regarded', 0.176), ('srivastava', 0.176), ('sanjay', 0.173), ('unreplicable', 0.173), ('inevitably', 0.169), ('engagement', 0.164), ('definitive', 0.153), ('justify', 0.151), ('response', 0.148), ('public', 0.141), ('boring', 0.141), ('contain', 0.137), ('equally', 0.136), ('convey', 0.131), ('rarely', 0.126), ('communication', 0.113), ('strategy', 0.108), ('publishing', 0.107), ('psychological', 0.107), ('science', 0.106), ('message', 0.102), ('nature', 0.095), ('publish', 0.094), ('teaching', 0.094), ('findings', 0.087), ('studies', 0.075), ('papers', 0.074), ('journal', 0.07), ('standard', 0.069), ('long', 0.066), ('clear', 0.065), ('best', 0.059), ('always', 0.057), ('interesting', 0.056), ('need', 0.055), ('us', 0.055), ('ll', 0.05), ('paper', 0.046), ('really', 0.037), ('make', 0.036)]
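Each (word, weight) pair above is a tf-idf score for this post. A minimal sketch of how such top-N word weights can be computed with scikit-learn follows; the preprocessing and weighting choices here are assumptions, so the numbers will not exactly reproduce the table.

# Sketch: top-N tf-idf words for one post, given the full corpus of post texts.
from sklearn.feature_extraction.text import TfidfVectorizer

def top_words(corpus, doc_index, n=50):
    vec = TfidfVectorizer(stop_words="english")
    X = vec.fit_transform(corpus)             # (n_posts, n_terms) sparse matrix
    terms = vec.get_feature_names_out()
    row = X[doc_index].toarray().ravel()      # tf-idf weights for this post
    top = row.argsort()[::-1][:n]
    return [(terms[i], round(float(row[i]), 3)) for i in top if row[i] > 0]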

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0 1122 andrew gelman stats-2012-01-16-“Groundbreaking or Definitive? Journals Need to Pick One”

Introduction: Sanjay Srivastava writes : As long as a journal pursues a strategy of publishing “wow” studies, it will inevitably contain more unreplicable findings and unsupportable conclusions than equally rigorous but more “boring” journals. Groundbreaking will always be higher-risk. And definitive will be the territory of journals that publish meta-analyses and reviews. . . . Most conclusions, even those in peer-reviewed papers in rigorous journals, should be regarded as tentative at best; but press releases and other public communication rarely convey that. . . . His message to all of us: Our standard response to a paper in Science, Nature, or Psychological Science should be “wow, that’ll be really interesting if it replicates.” And in our teaching and our engagement with the press and public, we need to make clear why that is the most enthusiastic response we can justify.

2 0.19983521 2215 andrew gelman stats-2014-02-17-The Washington Post reprints university press releases without editing them

Introduction: Somebody points me to this horrifying exposé by Paul Raeburn on a new series by the Washington Post where they reprint press releases as if they are actual news. And the gimmick is, the reason why it’s appearing on this blog, is that these are university press releases on science stories . What could possibly go wrong there? After all, Steve Chaplin, a self-identified “science-writing PIO from an R1,” writes in a comment to Raeburn’s post: We write about peer-reviewed research accepted for publication or published by the world’s leading scientific journals after that research has been determined to be legitimate. Repeatability of new research is a publication requisite. I emphasized that last sentence myself because it was such a stunner. Do people really think that??? So I guess what he’s saying is, they don’t do press releases for articles from Psychological Science or the Journal of Personality and Social Psychology . But I wonder how the profs in the psych d

3 0.15747464 1878 andrew gelman stats-2013-05-31-How to fix the tabloids? Toward replicable social science research

Introduction: This seems to be the topic of the week. Yesterday I posted on the sister blog some further thoughts on those “Psychological Science” papers on menstrual cycles, biceps size, and political attitudes, tied to a horrible press release from the journal Psychological Science hyping the biceps and politics study. Then I was pointed to these suggestions from Richard Lucas and M. Brent Donnellan have on improving the replicability and reproducibility of research published in the Journal of Research in Personality: It goes without saying that editors of scientific journals strive to publish research that is not only theoretically interesting but also methodologically rigorous. The goal is to select papers that advance the field. Accordingly, editors want to publish findings that can be reproduced and replicated by other scientists. Unfortunately, there has been a recent “crisis in confidence” among psychologists about the quality of psychological research (Pashler & Wagenmakers, 2012)

4 0.14741738 1928 andrew gelman stats-2013-07-06-How to think about papers published in low-grade journals?

Introduction: We’ve had lots of lively discussions of fatally-flawed papers that have been published in top, top journals such as the American Economic Review or the Journal of Personality and Social Psychology or the American Sociological Review or the tabloids . And we also know about mistakes that make their way into mid-ranking outlets such as the Journal of Theoretical Biology. But what about results that appear in the lower tier of legitimate journals? I was thinking about this after reading a post by Dan Kahan slamming a paper that recently appeared in PLOS-One. I won’t discuss the paper itself here because that’s not my point. Rather, I had some thoughts regarding Kahan’s annoyance that a paper with fatal errors was published at all. I commented as follows: Read between the lines. The paper originally was released in 2009 and was published in 2013 in PLOS-One, which is one step above appearing on Arxiv. PLOS-One publishes some good things (so does Arxiv) but it’s the place

5 0.13023682 371 andrew gelman stats-2010-10-26-Musical chairs in econ journals

Introduction: Tyler Cowen links to a paper by Bruno Frey on the lack of space for articles in economics journals. Frey writes: To further their careers, [academic economists] are required to publish in A-journals, but for the vast majority this is impossible because there are few slots open in such journals. Such academic competition maybe useful to generate hard work, however, there may be serious negative consequences: the wrong output may be produced in an inefficient way, the wrong people may be selected, and losers may react in a harmful way. According to Frey, the consensus is that there are only five top economics journals–and one of those five is Econometrica, which is so specialized that I’d say that, for most academic economists, there are only four top places they can publish. The difficulty is that demand for these slots outpaces supply: for example, in 2007 there were only 275 articles in all these journals combined (or 224 if you exclude Econometrica), while “a rough estim

6 0.12233819 1393 andrew gelman stats-2012-06-26-The reverse-journal-submission system

7 0.11927658 2278 andrew gelman stats-2014-04-01-Association for Psychological Science announces a new journal

8 0.11544818 1338 andrew gelman stats-2012-05-23-Advice on writing research articles

9 0.11276412 2245 andrew gelman stats-2014-03-12-More on publishing in journals

10 0.10916723 2301 andrew gelman stats-2014-04-22-Ticket to Baaaaarf

11 0.1030677 2217 andrew gelman stats-2014-02-19-The replication and criticism movement is not about suppressing speculative research; rather, it’s all about enabling science’s fabled self-correcting nature

12 0.10144781 1833 andrew gelman stats-2013-04-30-“Tragedy of the science-communication commons”

13 0.099943057 2014 andrew gelman stats-2013-09-09-False memories and statistical analysis

14 0.09821748 2013 andrew gelman stats-2013-09-08-What we need here is some peer review for statistical graphics

15 0.096195929 2353 andrew gelman stats-2014-05-30-I posted this as a comment on a sociology blog

16 0.095680155 1139 andrew gelman stats-2012-01-26-Suggested resolution of the Bem paradox

17 0.093554832 1865 andrew gelman stats-2013-05-20-What happened that the journal Psychological Science published a paper with no identifiable strengths?

18 0.093227707 762 andrew gelman stats-2011-06-13-How should journals handle replication studies?

19 0.09054964 1435 andrew gelman stats-2012-07-30-Retracted articles and unethical behavior in economics journals?

20 0.085516237 540 andrew gelman stats-2011-01-26-Teaching evaluations, instructor effectiveness, the Journal of Political Economy, and the Holy Roman Empire
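The simIndex/simValue entries above rank other posts by their similarity to this one under the tf-idf model. A standard way to compute such a ranking is cosine similarity between tf-idf document vectors, sketched below; the exact values depend on preprocessing choices that are not documented here.

# Sketch: rank posts by cosine similarity of tf-idf vectors, producing
# (simValue, blogId)-style rows. The query post itself ranks first with ~1.0.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def similar_posts(corpus, blog_ids, query_index, top_k=20):
    X = TfidfVectorizer(stop_words="english").fit_transform(corpus)
    sims = cosine_similarity(X[query_index], X).ravel()
    order = sims.argsort()[::-1][: top_k + 1]
    return [(float(sims[i]), blog_ids[i]) for i in order]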


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.099), (1, -0.052), (2, -0.061), (3, -0.113), (4, -0.051), (5, -0.041), (6, -0.023), (7, -0.079), (8, -0.054), (9, 0.01), (10, 0.104), (11, 0.028), (12, -0.042), (13, 0.021), (14, 0.001), (15, -0.054), (16, -0.0), (17, 0.035), (18, -0.019), (19, -0.02), (20, 0.015), (21, 0.007), (22, 0.003), (23, -0.001), (24, -0.028), (25, 0.017), (26, 0.018), (27, 0.0), (28, -0.002), (29, -0.001), (30, -0.019), (31, -0.035), (32, -0.015), (33, -0.006), (34, 0.004), (35, 0.009), (36, -0.021), (37, 0.034), (38, 0.01), (39, 0.006), (40, 0.016), (41, -0.013), (42, -0.01), (43, 0.005), (44, -0.018), (45, 0.034), (46, 0.018), (47, 0.031), (48, -0.025), (49, 0.021)]
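The (topicId, topicWeight) pairs above are this post's coordinates in a latent semantic indexing (LSI) space. A minimal gensim sketch follows; the number of topics (50 here), the tf-idf preprocessing, and the tokenization are assumptions rather than the dataset's documented settings.

# Sketch: LSI topic weights for one post, plus similarities to all other posts.
from gensim import corpora, models, similarities

def lsi_topics_and_sims(tokenized_posts, query_index, num_topics=50):
    dictionary = corpora.Dictionary(tokenized_posts)
    bow = [dictionary.doc2bow(toks) for toks in tokenized_posts]
    tfidf = models.TfidfModel(bow)
    lsi = models.LsiModel(tfidf[bow], id2word=dictionary, num_topics=num_topics)
    doc_topics = lsi[tfidf[bow[query_index]]]        # [(topicId, topicWeight), ...]
    index = similarities.MatrixSimilarity(lsi[tfidf[bow]])
    sims = index[lsi[tfidf[bow[query_index]]]]       # cosine similarity to each post
    return doc_topics, sims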

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.99044454 1122 andrew gelman stats-2012-01-16-“Groundbreaking or Definitive? Journals Need to Pick One”

Introduction: Sanjay Srivastava writes : As long as a journal pursues a strategy of publishing “wow” studies, it will inevitably contain more unreplicable findings and unsupportable conclusions than equally rigorous but more “boring” journals. Groundbreaking will always be higher-risk. And definitive will be the territory of journals that publish meta-analyses and reviews. . . . Most conclusions, even those in peer-reviewed papers in rigorous journals, should be regarded as tentative at best; but press releases and other public communication rarely convey that. . . . His message to all of us: Our standard response to a paper in Science, Nature, or Psychological Science should be “wow, that’ll be really interesting if it replicates.” And in our teaching and our engagement with the press and public, we need to make clear why that is the most enthusiastic response we can justify.

2 0.90221632 1321 andrew gelman stats-2012-05-15-A statistical research project: Weeding out the fraudulent citations

Introduction: John Mashey points me to a blog post by Phil Davis on “the emergence of a citation cartel.” Davis tells the story: Cell Transplantation is a medical journal published by the Cognizant Communication Corporation of Putnam Valley, New York. In recent years, its impact factor has been growing rapidly. In 2006, it was 3.482 [I think he means "3.5"---ed.]. In 2010, it had almost doubled to 6.204. When you look at which journals cite Cell Transplantation, two journals stand out noticeably: the Medical Science Monitor, and The Scientific World Journal. According to the JCR, neither of these journals cited Cell Transplantation until 2010. Then, in 2010, a review article was published in the Medical Science Monitor citing 490 articles, 445 of which were to papers published in Cell Transplantation. All 445 citations pointed to papers published in 2008 or 2009 — the citation window from which the journal’s 2010 impact factor was derived. Of the remaining 45 citations, 44 cited the Me

3 0.88061905 1928 andrew gelman stats-2013-07-06-How to think about papers published in low-grade journals?

Introduction: We’ve had lots of lively discussions of fatally-flawed papers that have been published in top, top journals such as the American Economic Review or the Journal of Personality and Social Psychology or the American Sociological Review or the tabloids . And we also know about mistakes that make their way into mid-ranking outlets such as the Journal of Theoretical Biology. But what about results that appear in the lower tier of legitimate journals? I was thinking about this after reading a post by Dan Kahan slamming a paper that recently appeared in PLOS-One. I won’t discuss the paper itself here because that’s not my point. Rather, I had some thoughts regarding Kahan’s annoyance that a paper with fatal errors was published at all. I commented as follows: Read between the lines. The paper originally was released in 2009 and was published in 2013 in PLOS-One, which is one step above appearing on Arxiv. PLOS-One publishes some good things (so does Arxiv) but it’s the place

4 0.84618479 2278 andrew gelman stats-2014-04-01-Association for Psychological Science announces a new journal

Introduction: The Association for Psychological Science, the leading organization of research psychologists, announced a long-awaited new journal, Speculations on Psychological Science . From the official APS press release: Speculations on Psychological Science, the flagship journal of the Association for Psychological Science, will publish cutting-edge research articles, short reports, and research reports spanning the entire spectrum of the science of psychology. We anticipate that Speculations on Psychological Science will be the highest ranked empirical journal in psychology. We recognize that many of the most noteworthy published claims in psychology and related fields are not well supported by data, hence the need for a journal for the publication of such exciting speculations without misleading claims of certainty. - Sigmund Watson, Prof. (Ret.) Miskatonic University, and editor-in-chief, Speculations on Psychological Science I applaud this development. Indeed, I’ve been talking ab

5 0.83948463 1291 andrew gelman stats-2012-04-30-Systematic review of publication bias in studies on publication bias

Introduction: Via Yalda Afshar , a 2005 paper by Hans-Hermann Dubben and Hans-Peter Beck-Bornholdt: Publication bias is a well known phenomenon in clinical literature, in which positive results have a better chance of being published, are published earlier, and are published in journals with higher impact factors. Conclusions exclusively based on published studies, therefore, can be misleading. Selective under-reporting of research might be more widespread and more likely to have adverse consequences for patients than publication of deliberately falsified data. We investigated whether there is preferential publication of positive papers on publication bias. They conclude, “We found no evidence of publication bias in reports on publication bias.” But of course that’s the sort of finding regarding publication bias of findings on publication bias that you’d expect would get published. What we really need is a careful meta-analysis to estimate the level of publication bias in studies of publi

6 0.83579302 1118 andrew gelman stats-2012-01-14-A model rejection letter

7 0.83278221 1865 andrew gelman stats-2013-05-20-What happened that the journal Psychological Science published a paper with no identifiable strengths?

8 0.82427722 2217 andrew gelman stats-2014-02-19-The replication and criticism movement is not about suppressing speculative research; rather, it’s all about enabling science’s fabled self-correcting nature

9 0.8235724 834 andrew gelman stats-2011-08-01-I owe it all to the haters

10 0.81228536 1878 andrew gelman stats-2013-05-31-How to fix the tabloids? Toward replicable social science research

11 0.80908179 2353 andrew gelman stats-2014-05-30-I posted this as a comment on a sociology blog

12 0.79982281 762 andrew gelman stats-2011-06-13-How should journals handle replication studies?

13 0.79729593 1954 andrew gelman stats-2013-07-24-Too Good To Be True: The Scientific Mass Production of Spurious Statistical Significance

14 0.79682833 2268 andrew gelman stats-2014-03-26-New research journal on observational studies

15 0.79512388 1393 andrew gelman stats-2012-06-26-The reverse-journal-submission system

16 0.79401743 1139 andrew gelman stats-2012-01-26-Suggested resolution of the Bem paradox

17 0.79393864 1774 andrew gelman stats-2013-03-22-Likelihood Ratio ≠ 1 Journal

18 0.77021116 601 andrew gelman stats-2011-03-05-Against double-blind reviewing: Political science and statistics are not like biology and physics

19 0.76879549 2215 andrew gelman stats-2014-02-17-The Washington Post reprints university press releases without editing them

20 0.76280934 838 andrew gelman stats-2011-08-04-Retraction Watch


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(2, 0.022), (6, 0.04), (10, 0.249), (13, 0.02), (15, 0.085), (16, 0.033), (24, 0.071), (27, 0.028), (55, 0.015), (77, 0.043), (99, 0.275)]
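The (topicId, topicWeight) pairs above are this post's topic mixture under a latent Dirichlet allocation (LDA) model. A minimal gensim sketch follows; the topic count, training passes, and tokenization are assumptions, not the dataset's settings.

# Sketch: per-post LDA topic weights, as (topicId, topicWeight) pairs.
from gensim import corpora, models

def lda_topics(tokenized_posts, query_index, num_topics=100):
    dictionary = corpora.Dictionary(tokenized_posts)
    bow = [dictionary.doc2bow(toks) for toks in tokenized_posts]
    lda = models.LdaModel(bow, id2word=dictionary, num_topics=num_topics,
                          passes=5, random_state=0)
    return lda.get_document_topics(bow[query_index])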

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.93718314 1122 andrew gelman stats-2012-01-16-“Groundbreaking or Definitive? Journals Need to Pick One”

Introduction: Sanjay Srivastava writes : As long as a journal pursues a strategy of publishing “wow” studies, it will inevitably contain more unreplicable findings and unsupportable conclusions than equally rigorous but more “boring” journals. Groundbreaking will always be higher-risk. And definitive will be the territory of journals that publish meta-analyses and reviews. . . . Most conclusions, even those in peer-reviewed papers in rigorous journals, should be regarded as tentative at best; but press releases and other public communication rarely convey that. . . . His message to all of us: Our standard response to a paper in Science, Nature, or Psychological Science should be “wow, that’ll be really interesting if it replicates.” And in our teaching and our engagement with the press and public, we need to make clear why that is the most enthusiastic response we can justify.

2 0.8978315 2215 andrew gelman stats-2014-02-17-The Washington Post reprints university press releases without editing them

Introduction: Somebody points me to this horrifying exposé by Paul Raeburn on a new series by the Washington Post where they reprint press releases as if they are actual news. And the gimmick is, the reason why it’s appearing on this blog, is that these are university press releases on science stories . What could possibly go wrong there? After all, Steve Chaplin, a self-identified “science-writing PIO from an R1,” writes in a comment to Raeburn’s post: We write about peer-reviewed research accepted for publication or published by the world’s leading scientific journals after that research has been determined to be legitimate. Repeatability of new research is a publication requisite. I emphasized that last sentence myself because it was such a stunner. Do people really think that??? So I guess what he’s saying is, they don’t do press releases for articles from Psychological Science or the Journal of Personality and Social Psychology . But I wonder how the profs in the psych d

3 0.86865795 78 andrew gelman stats-2010-06-10-Hey, where’s my kickback?

Introduction: I keep hearing about textbook publishers who practically bribe instructors to assign their textbooks to students. And then I received this (unsolicited) email: You have recently been sent Pearson (Allyn & Bacon, Longman, Prentice Hall) texts to review for your summer and fall courses. As a thank you for reviewing our texts, I would like to invite you to participate in a brief survey (attached). If you have any questions about the survey, are not sure which books you have been sent, or if you would like to receive instructor’s materials, desk copies, etc. please let me know! If you have recently received your course assignments – let me know as well . Additionally, if you have decided to use a Pearson book in your summer or fall courses, I will provide you with an ISBN that will include discounts and resources for your students at no extra cost! All you have to do is answer the 3 simple questions on the attached survey and you will receive a $10.00 Dunkin Donuts gift card.

4 0.84192002 487 andrew gelman stats-2010-12-27-Alfred Kahn

Introduction: Appointed “inflation czar” in late 1970s, Alfred Kahn is most famous for deregulating the airline industry. At the time this seemed to make sense, although in retrospect I’m less a fan of consumer-driven policies than I used to be. When I was a kid we subscribed to Consumer Reports and so I just assumed that everything that was good for the consumer–lower prices, better products, etc.–was a good thing. Upon reflection, though, I think it’s a mistake to focus too narrowly on the interests of consumers. For example (from my Taleb review a couple years ago): The discussion on page 112 of how Ralph Nader saved lives (mostly via seat belts in cars) reminds me of his car-bumper campaign in the 1970s. My dad subscribed to Consumer Reports then (he still does, actually, and I think reads it for pleasure–it must be one of those Depression-mentality things), and at one point they were pushing heavily for the 5-mph bumpers. Apparently there was some federal regulation about how strong

5 0.81685317 2257 andrew gelman stats-2014-03-20-The candy weighing demonstration, or, the unwisdom of crowds

Introduction: From 2008: The candy weighing demonstration, or, the unwisdom of crowds My favorite statistics demonstration is the one with the bag of candies. I’ve elaborated upon it since including it in the Teaching Statistics book and I thought these tips might be useful to some of you. Preparation Buy 100 candies of different sizes and shapes and put them in a bag (the plastic bag from the store is fine). Get something like 20 large full-sized candy bars, 20 or 30 little things like mini Snickers bars and mini Peppermint Patties. And then 50 or 60 really little things like tiny Tootsie Rolls, lollipops, and individually-wrapped Life Savers. Count and make sure it’s exactly 100. You also need a digital kitchen scale that reads out in grams. Also bring a sealed envelope inside of which is a note (details below). When you get into the room, unobtrusively put the note somewhere, for example between two books on a shelf or behind a window shade. Setup Hold up the back of cand

6 0.81402957 1059 andrew gelman stats-2011-12-14-Looking at many comparisons may increase the risk of finding something statistically significant by epidemiologists, a population with relatively low multilevel modeling consumption

7 0.80254751 2014 andrew gelman stats-2013-09-09-False memories and statistical analysis

8 0.79338181 1744 andrew gelman stats-2013-03-01-Why big effects are more important than small effects

9 0.79148179 1064 andrew gelman stats-2011-12-16-The benefit of the continuous color scale

10 0.79061347 37 andrew gelman stats-2010-05-17-Is chartjunk really “more useful” than plain graphs? I don’t think so.

11 0.78897965 344 andrew gelman stats-2010-10-15-Story time

12 0.78466159 1974 andrew gelman stats-2013-08-08-Statistical significance and the dangerous lure of certainty

13 0.77855468 2032 andrew gelman stats-2013-09-20-“Six red flags for suspect work”

14 0.77570832 1810 andrew gelman stats-2013-04-17-Subway series

15 0.76847321 1402 andrew gelman stats-2012-07-01-Ice cream! and temperature

16 0.76178145 1859 andrew gelman stats-2013-05-16-How do we choose our default methods?

17 0.75986749 306 andrew gelman stats-2010-09-29-Statistics and the end of time

18 0.7559213 2171 andrew gelman stats-2014-01-13-Postdoc with Liz Stuart on propensity score methods when the covariates are measured with error

19 0.75315237 357 andrew gelman stats-2010-10-20-Sas and R

20 0.75306225 2317 andrew gelman stats-2014-05-04-Honored oldsters write about statistics