andrew_gelman_stats andrew_gelman_stats-2014 andrew_gelman_stats-2014-2353 knowledge-graph by maker-knowledge-mining

2353 andrew gelman stats-2014-05-30-I posted this as a comment on a sociology blog


meta info for this blog

Source: html

Introduction: I discussed two problems: 1. An artificial scarcity applied to journal publication, a scarcity which I believe is being enforced based on a monetary principle of not wanting to reduce the value of publication. The problem is that journals don’t just spread information and improve communication, they also represent chits for hiring and promotion. I’d prefer to separate these two aspects of publication. To keep these functions tied together seems to me like a terrible mistake. It would be as if, instead of using dollar bills as currency, we were to just use paper, and then the government kept paper artificially scarce to retain the value of money, so that we were reduced to scratching notes to each other on walls and tables. 2. The discontinuous way in which unpublished papers and submissions to journals are taken as highly suspect and requiring a strong justification of all methods and assumptions, but once a paper becomes published its conclusions are taken as true unless strongly demonstrated otherwise.


Summary: the most important sentences generated by the tfidf model (a sketch of the scoring step follows the list)

sentIndex sentText sentNum sentScore

1 An artificial scarcity applied to journal publication, a scarcity which I believe is being enforced based on a monetary principle of not wanting to reduce the value of publication. [sent-2, score-0.41]

2 The problem is that journals don’t just spread information and improve communication, they also represent chits for hiring and promotion. [sent-3, score-0.631]

3 It would be as if, instead of using dollar bills as currency, we were to just use paper, and then the government kept paper artificially scarce to retain the value of money, so that we were reduced to scratching notes to each other on walls and tables. [sent-6, score-0.396]

4 The discontinuous way in which unpublished papers and submissions to journals are taken as highly suspect and requiring a strong justification of all methods and assumptions, but once a paper becomes published its conclusions are taken as true unless strongly demonstrated otherwise. [sent-8, score-0.509]

5 And my comment was: I am irritated that ASR published a paper with serious statistical flaws but then did not publish my letter pointing out the flaws. [sent-10, score-0.892]

6 I think this is a systematic problem with journals, that the informal rules for publication (that findings be substantively important and statistically significant) bias things toward the publication of exaggerated claims. [sent-11, score-0.69]

7 To say it again: I understand and appreciate the rationale for wanting to publish major papers with major claims. [sent-15, score-0.616]

8 It may be that in this particular situation my letter did not warrant publication (I think it did, but that’s my perspective), but in any case I think the reluctance to print criticisms is a major problem with lots of journals. [sent-19, score-1.255]

9 Related to this is the idea that journals don’t just spread information and improve communication, they also represent chits for hiring and promotion. [sent-21, score-0.548]

10 From that perspective, I can see the attitude of, “We can’t just publish every critical letter, then people will do nothing but criticism as it’s so cheap compared to original research.” [sent-22, score-0.5]

11 But that attitude irritates me because I wasn’t writing that letter to get a chit; I was writing the letter as a public service. [sent-23, score-0.622]

12 I’d have no problems if critical letters were identified as such in the publication record so the whole chit issue wouldn’t have to come up. [sent-24, score-0.696]

13 Brayden King added: My sense is that most journals rarely publish letters of response because they don’t like to use the print space. [sent-25, score-0.542]

14 They also had some specific criticisms of my letter but I think the non-importance was key. [sent-38, score-0.498]

15 I demonstrated that the article had statistical flaws but I did not demonstrate that the flaws would have serious impact on the article’s major conclusions. [sent-39, score-0.814]

16 I think I could’ve done this but it would’ve required more work, and I was already having lots of problems getting the data (not the fault of the author of the article, it was a problem with the keepers of the dataset). [sent-40, score-0.413]

17 Here’s what I wrote in my article about the episode: The asymmetry is as follows: Hamilton’s paper represents a major research effort, whereas my criticism took very little effort (given my existing understanding of selection bias and causal inference). [sent-41, score-0.614]

18 Indeed, I am pretty sure the original paper would have needed serious revision and would have been required to fix the problem. [sent-43, score-0.559]

19 As a referee, I would not need to offer an independent data analysis and proof that the statistical error would have a major effect on the conclusions. [sent-46, score-0.464]

20 I think it would be appropriate to publish my letter (and I’d have no problem if, in the review process for my letter, I’d been told to add a paragraph emphasizing that I had not demonstrated that the statistical error had a major effect on the conclusions). [sent-50, score-1.077]
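
The sentence scores above come out of a tf-idf step: each sentence is weighted by the rarity-adjusted frequency of its terms, and the highest-scoring sentences are kept as the summary. The exact pipeline behind this page isn't shown, so here is a minimal sketch of one way such scores could be computed, assuming Python with scikit-learn; top_sentences and post_text are illustrative names, not part of the original tooling.

    # Minimal sketch: score sentences by the sum of their tf-idf term weights
    # and keep the top k as an extractive summary. Assumes scikit-learn.
    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer

    def top_sentences(post_text, k=20):
        # Crude sentence split; a real pipeline would use a proper tokenizer.
        sentences = [s.strip() for s in post_text.split(". ") if s.strip()]
        tfidf = TfidfVectorizer(stop_words="english")
        X = tfidf.fit_transform(sentences)              # (n_sentences, n_terms)
        scores = np.asarray(X.sum(axis=1)).ravel()      # one heuristic score per sentence
        ranked = sorted(zip(scores, range(len(sentences)), sentences), reverse=True)
        return [(idx, round(float(score), 3), sent) for score, idx, sent in ranked[:k]]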


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('asr', 0.416), ('letter', 0.28), ('chit', 0.208), ('major', 0.188), ('publication', 0.167), ('publish', 0.163), ('journals', 0.144), ('criticisms', 0.136), ('scarcity', 0.126), ('chits', 0.126), ('print', 0.123), ('brayden', 0.119), ('paper', 0.118), ('fault', 0.118), ('flaws', 0.116), ('demonstrated', 0.114), ('original', 0.114), ('warrant', 0.114), ('bias', 0.113), ('letters', 0.112), ('article', 0.111), ('would', 0.097), ('referee', 0.09), ('hiring', 0.084), ('online', 0.084), ('criticism', 0.084), ('problem', 0.083), ('think', 0.082), ('proof', 0.082), ('journal', 0.081), ('toward', 0.078), ('wanting', 0.077), ('critical', 0.077), ('published', 0.073), ('spread', 0.072), ('serious', 0.072), ('review', 0.07), ('pointing', 0.07), ('problems', 0.069), ('communication', 0.065), ('scratching', 0.063), ('appendices', 0.063), ('specially', 0.063), ('issue', 0.063), ('improve', 0.062), ('attitude', 0.062), ('required', 0.061), ('represent', 0.06), ('conclusions', 0.06), ('underneath', 0.06)]
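
The word weights above are this post's tf-idf vector, and the similarity scores in the list below are presumably cosine similarities between such vectors. A minimal sketch of that comparison, assuming Python with scikit-learn; corpus, titles, and similar_posts are illustrative names:

    # Minimal sketch: represent each post as a tf-idf vector and rank the other
    # posts by cosine similarity to this one. Assumes scikit-learn.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    def similar_posts(corpus, titles, query_index=0, k=20):
        tfidf = TfidfVectorizer(stop_words="english")
        X = tfidf.fit_transform(corpus)                      # (n_posts, n_terms)
        sims = cosine_similarity(X[query_index], X).ravel()  # similarity to every post
        order = sims.argsort()[::-1]                         # most similar first
        return [(titles[i], round(float(sims[i]), 8)) for i in order[:k]]

The query post scores 1.0 against itself, which is why the "same-blog" entry tops the list below.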

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0 2353 andrew gelman stats-2014-05-30-I posted this as a comment on a sociology blog

2 0.24219474 2217 andrew gelman stats-2014-02-19-The replication and criticism movement is not about suppressing speculative research; rather, it’s all about enabling science’s fabled self-correcting nature

Introduction: Jeff Leek points to a post by Alex Holcombe, who disputes the idea that science is self-correcting. Holcombe writes [scroll down to get to his part]: The pace of scientific production has quickened, and self-correction has suffered. Findings that might correct old results are considered less interesting than results from more original research questions. Potential corrections are also more contested. As the competition for space in prestigious journals has become increasingly frenzied, doing and publishing studies that would confirm the rapidly accumulating new discoveries, or would correct them, became a losing proposition. Holcombe picks up on some points that we’ve discussed a lot here in the past year. Here’s Holcombe: In certain subfields, almost all new work appears in only a very few journals, all associated with a single professional society. There is then no way around the senior gatekeepers, who may then suppress corrections with impunity. . . . The bias agai

3 0.20597017 2245 andrew gelman stats-2014-03-12-More on publishing in journals

Introduction: I’m postponing today’s scheduled post (“Empirical implications of Empirical Implications of Theoretical Models”) to continue the lively discussion from yesterday, What if I were to stop publishing in journals? . An example: my papers with Basbøll Thomas Basbøll and I got into a long discussion on our blogs about business school professor Karl Weick and other cases of plagiarism copying text without attribution. We felt it useful to take our ideas to the next level and write them up as a manuscript, which ended up being logical to split into two papers. At that point I put some effort into getting these papers published, which I eventually did: To throw away data: Plagiarism as a statistical crime went into American Scientist and When do stories work? Evidence and illustration in the social sciences will appear in Sociological Methods and Research. The second paper, in particular, took some effort to place; I got some advice from colleagues in sociology as to where

4 0.2050399 1291 andrew gelman stats-2012-04-30-Systematic review of publication bias in studies on publication bias

Introduction: Via Yalda Afshar , a 2005 paper by Hans-Hermann Dubben and Hans-Peter Beck-Bornholdt: Publication bias is a well known phenomenon in clinical literature, in which positive results have a better chance of being published, are published earlier, and are published in journals with higher impact factors. Conclusions exclusively based on published studies, therefore, can be misleading. Selective under-reporting of research might be more widespread and more likely to have adverse consequences for patients than publication of deliberately falsified data. We investigated whether there is preferential publication of positive papers on publication bias. They conclude, “We found no evidence of publication bias in reports on publication bias.” But of course that’s the sort of finding regarding publication bias of findings on publication bias that you’d expect would get published. What we really need is a careful meta-analysis to estimate the level of publication bias in studies of publi

5 0.2046091 2244 andrew gelman stats-2014-03-11-What if I were to stop publishing in journals?

Introduction: In our recent discussion of modes of publication, Joseph Wilson wrote, “The single best reform science can make right now is to decouple publication from career advancement, thereby reducing the number of publications by an order of magnitude and then move to an entirely disjointed, informal, online free-for-all communication system for research results.” My first thought on this was: Sure, yeah, that makes sense. But then I got to thinking: what would it really mean to decouple publication from career advancement? This is too late for me—I’m middle-aged and have no career advancement in my future—but it got me thinking more carefully about the role of publication in the research process, and this seemed worth a blog (the simplest sort of publication available to me). However, somewhere between writing the above paragraphs and writing the blog entry, I forgot exactly what I was going to say! I guess I should’ve just typed it all in then. In the old days I just wouldn’t run this

6 0.19581854 1865 andrew gelman stats-2013-05-20-What happened that the journal Psychological Science published a paper with no identifiable strengths?

7 0.19310759 1928 andrew gelman stats-2013-07-06-How to think about papers published in low-grade journals?

8 0.19001456 1435 andrew gelman stats-2012-07-30-Retracted articles and unethical behavior in economics journals?

9 0.17898372 1844 andrew gelman stats-2013-05-06-Against optimism about social science

10 0.16400902 834 andrew gelman stats-2011-08-01-I owe it all to the haters

11 0.15897164 1139 andrew gelman stats-2012-01-26-Suggested resolution of the Bem paradox

12 0.15404686 2235 andrew gelman stats-2014-03-06-How much time (if any) should we spend criticizing research that’s fraudulent, crappy, or just plain pointless?

13 0.15323582 1393 andrew gelman stats-2012-06-26-The reverse-journal-submission system

14 0.15219374 371 andrew gelman stats-2010-10-26-Musical chairs in econ journals

15 0.15179078 1878 andrew gelman stats-2013-05-31-How to fix the tabloids? Toward replicable social science research

16 0.14968373 2177 andrew gelman stats-2014-01-19-“The British amateur who debunked the mathematics of happiness”

17 0.14804874 2233 andrew gelman stats-2014-03-04-Literal vs. rhetorical

18 0.13915816 2006 andrew gelman stats-2013-09-03-Evaluating evidence from published research

19 0.13843118 2211 andrew gelman stats-2014-02-14-The popularity of certain baby names is falling off the clifffffffffffff

20 0.13691694 838 andrew gelman stats-2011-08-04-Retraction Watch


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.242), (1, -0.059), (2, -0.083), (3, -0.153), (4, -0.075), (5, -0.092), (6, 0.019), (7, -0.115), (8, -0.041), (9, 0.007), (10, 0.13), (11, 0.012), (12, -0.092), (13, 0.008), (14, 0.003), (15, -0.046), (16, 0.012), (17, 0.042), (18, -0.026), (19, 0.0), (20, 0.029), (21, 0.041), (22, 0.047), (23, 0.013), (24, -0.015), (25, 0.036), (26, -0.001), (27, -0.013), (28, -0.009), (29, 0.022), (30, -0.016), (31, -0.025), (32, -0.001), (33, 0.03), (34, -0.036), (35, -0.026), (36, -0.028), (37, 0.011), (38, -0.016), (39, 0.013), (40, -0.031), (41, -0.021), (42, -0.032), (43, 0.018), (44, 0.001), (45, 0.05), (46, 0.003), (47, 0.037), (48, -0.006), (49, 0.012)]
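
The 50 numbers above are this post's weights on the LSI topics, and the similarities below are computed in that reduced topic space rather than over raw words. A minimal sketch of the idea, assuming Python with scikit-learn (TruncatedSVD as the LSI step); lsi_similarities and corpus are illustrative names, and 50 components is inferred from the length of the weight vector above:

    # Minimal sketch: project tf-idf vectors onto 50 latent topics with a
    # truncated SVD (the usual LSI construction), then compare posts there.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD
    from sklearn.metrics.pairwise import cosine_similarity

    def lsi_similarities(corpus, n_topics=50, query_index=0):
        X = TfidfVectorizer(stop_words="english").fit_transform(corpus)
        lsi = TruncatedSVD(n_components=n_topics, random_state=0)
        topics = lsi.fit_transform(X)            # (n_posts, n_topics)
        sims = cosine_similarity(topics[query_index:query_index + 1], topics).ravel()
        return topics[query_index], sims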

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.96929306 2353 andrew gelman stats-2014-05-30-I posted this as a comment on a sociology blog

2 0.91994655 834 andrew gelman stats-2011-08-01-I owe it all to the haters

Introduction: Sometimes when I submit an article to a journal it is accepted right away or with minor alterations. But many of my favorite articles were rejected or had to go through an exhausting series of revisions. For example, this influential article had a very hostile referee and we had to seriously push the journal editor to accept it. This one was rejected by one or two journals before finally appearing with discussion. This paper was rejected by the American Political Science Review with no chance of revision and we had to publish it in the British Journal of Political Science, which was a bit odd given that the article was 100% about American politics. And when I submitted this instant classic (actually at the invitation of the editor), the referees found it to be trivial, and the editor did me the favor of publishing it but only by officially labeling it as a discussion of another article that appeared in the same issue. Some of my most influential papers were accepted right

3 0.91828686 1321 andrew gelman stats-2012-05-15-A statistical research project: Weeding out the fraudulent citations

Introduction: John Mashey points me to a blog post by Phil Davis on “the emergence of a citation cartel.” Davis tells the story: Cell Transplantation is a medical journal published by the Cognizant Communication Corporation of Putnam Valley, New York. In recent years, its impact factor has been growing rapidly. In 2006, it was 3.482 [I think he means "3.5"---ed.]. In 2010, it had almost doubled to 6.204. When you look at which journals cite Cell Transplantation, two journals stand out noticeably: the Medical Science Monitor, and The Scientific World Journal. According to the JCR, neither of these journals cited Cell Transplantation until 2010. Then, in 2010, a review article was published in the Medical Science Monitor citing 490 articles, 445 of which were to papers published in Cell Transplantation. All 445 citations pointed to papers published in 2008 or 2009 — the citation window from which the journal’s 2010 impact factor was derived. Of the remaining 45 citations, 44 cited the Me

4 0.90795875 2217 andrew gelman stats-2014-02-19-The replication and criticism movement is not about suppressing speculative research; rather, it’s all about enabling science’s fabled self-correcting nature

5 0.90627074 1865 andrew gelman stats-2013-05-20-What happened that the journal Psychological Science published a paper with no identifiable strengths?

Introduction: The other day we discussed that paper on ovulation and voting (you may recall that the authors reported a scattered bunch of comparisons, significance tests, and p-values, and I recommended that they would’ve done better to simply report complete summaries of their data, so that readers could see the comparisons of interest in full context), and I was thinking a bit more about why I was so bothered that it was published in Psychological Science, which I’d thought of as a serious research journal. My concern isn’t just that that the paper is bad—after all, lots of bad papers get published—but rather that it had nothing really going for it, except that it was headline bait. It was a survey done on Mechanical Turk, that’s it. No clever design, no clever questions, no care in dealing with nonresponse problems, no innovative data analysis, no nothing. The paper had nothing to offer, except that it had no obvious flaws. Psychology is a huge field full of brilliant researchers.

6 0.90273631 1928 andrew gelman stats-2013-07-06-How to think about papers published in low-grade journals?

7 0.89054435 1122 andrew gelman stats-2012-01-16-“Groundbreaking or Definitive? Journals Need to Pick One”

8 0.8840524 1435 andrew gelman stats-2012-07-30-Retracted articles and unethical behavior in economics journals?

9 0.88348758 1139 andrew gelman stats-2012-01-26-Suggested resolution of the Bem paradox

10 0.87832105 2233 andrew gelman stats-2014-03-04-Literal vs. rhetorical

11 0.87033486 1137 andrew gelman stats-2012-01-24-Difficulties in publishing non-replications of implausible findings

12 0.86888272 1774 andrew gelman stats-2013-03-22-Likelihood Ratio ≠ 1 Journal

13 0.86592734 601 andrew gelman stats-2011-03-05-Against double-blind reviewing: Political science and statistics are not like biology and physics

14 0.86465216 1291 andrew gelman stats-2012-04-30-Systematic review of publication bias in studies on publication bias

15 0.86122066 1654 andrew gelman stats-2013-01-04-“Don’t think of it as duplication. Think of it as a single paper in a superposition of two quantum journals.”

16 0.86035275 762 andrew gelman stats-2011-06-13-How should journals handle replication studies?

17 0.86022854 1273 andrew gelman stats-2012-04-20-Proposals for alternative review systems for scientific work

18 0.8585856 883 andrew gelman stats-2011-09-01-Arrow’s theorem update

19 0.85389614 1118 andrew gelman stats-2012-01-14-A model rejection letter

20 0.84790856 1393 andrew gelman stats-2012-06-26-The reverse-journal-submission system


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(9, 0.037), (15, 0.16), (16, 0.101), (18, 0.015), (21, 0.03), (24, 0.163), (50, 0.012), (65, 0.012), (72, 0.018), (77, 0.014), (79, 0.016), (96, 0.03), (99, 0.265)]
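
The pairs above are (topicId, topicWeight) from the LDA model: each post gets a distribution over topics, and only topics carrying non-negligible weight are listed. A minimal sketch, assuming Python with scikit-learn; lda_topic_weights and corpus are illustrative names, and the topic count of 100 is a guess consistent with the topic ids shown (which run up to 99):

    # Minimal sketch: fit LDA on raw term counts and read off this post's
    # topic distribution, dropping topics with tiny weight. Assumes scikit-learn.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    def lda_topic_weights(corpus, n_topics=100, query_index=0, min_weight=0.01):
        counts = CountVectorizer(stop_words="english").fit_transform(corpus)
        lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
        theta = lda.fit_transform(counts)        # (n_posts, n_topics), rows sum to 1
        weights = theta[query_index]
        return [(t, round(float(w), 3)) for t, w in enumerate(weights) if w >= min_weight]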

similar blogs list:

simIndex simValue blogId blogTitle

1 0.9724797 945 andrew gelman stats-2011-10-06-W’man < W’pedia, again

Introduction: Blogger Deep Climate looks at another paper by the 2002 recipient of the American Statistical Association’s Founders award. This time it’s not funny, it’s just sad. Here’s Wikipedia on simulated annealing: By analogy with this physical process, each step of the SA algorithm replaces the current solution by a random “nearby” solution, chosen with a probability that depends on the difference between the corresponding function values and on a global parameter T (called the temperature), that is gradually decreased during the process. The dependency is such that the current solution changes almost randomly when T is large, but increasingly “downhill” as T goes to zero. The allowance for “uphill” moves saves the method from becoming stuck at local minima—which are the bane of greedier methods. And here’s Wegman: During each step of the algorithm, the variable that will eventually represent the minimum is replaced by a random solution that is chosen according to a temperature

2 0.96434301 1541 andrew gelman stats-2012-10-19-Statistical discrimination again

Introduction: Mark Johnstone writes: I’ve recently been investigating a new European Court of Justice ruling on insurance calculations (on behalf of MoneySuperMarket) and I found something related to statistics that caught my attention. . . . The ruling (which comes into effect in December 2012) states that insurers in Europe can no longer provide different premiums based on gender. Despite the fact that women are statistically safer drivers, unless it’s biologically proven there is a causal relationship between being female and being a safer driver, this is now seen as an act of discrimination (more on this from the Wall Street Journal). However, where do you stop with this? What about age? What about other factors? And what does this mean for the application of statistics in general? Is it inherently unjust in this context? One proposal has been to fit ‘black boxes’ into cars so more individual data can be collected, as opposed to relying heavily on aggregates. For fans of data and s

3 0.95875156 1081 andrew gelman stats-2011-12-24-Statistical ethics violation

Introduction: A colleague writes: When I was in NYC I went to this party by group of Japanese bio-scientists. There, one guy told me about how the biggest pharmaceutical company in Japan did their statistics. They ran 100 different tests and reported the most significant one. (This was in 2006 and he said they stopped doing this few years back so they were doing this until pretty recently…) I’m not sure if this was 100 multiple comparison or 100 different kinds of test but I’m sure they wouldn’t want to disclose their data… Ouch!

same-blog 4 0.9576515 2353 andrew gelman stats-2014-05-30-I posted this as a comment on a sociology blog

5 0.95692509 329 andrew gelman stats-2010-10-08-More on those dudes who will pay your professor $8000 to assign a book to your class, and related stories about small-time sleazoids

Introduction: After noticing these remarks on expensive textbooks and this comment on the company that bribes professors to use their books, Preston McAfee pointed me to this update (complete with a picture of some guy who keeps threatening to sue him but never gets around to it). The story McAfee tells is sad but also hilarious. Especially the part about “smuck.” It all looks like one more symptom of the imploding market for books. Prices for intro stat and econ books go up and up (even mediocre textbooks routinely cost $150), and the publishers put more and more effort into promotion. McAfee adds: I [McAfee] hope a publisher sues me about posting the articles I wrote. Even a takedown notice would be fun. I would be pretty happy to start posting about that, especially when some of them are charging $30 per article. Ted Bergstrom and I used state Freedom of Information acts to extract the journal price deals at state university libraries. We have about 35 of them so far. Like te

6 0.95670259 133 andrew gelman stats-2010-07-08-Gratuitous use of “Bayesian Statistics,” a branding issue?

7 0.94894767 762 andrew gelman stats-2011-06-13-How should journals handle replication studies?

8 0.94882226 576 andrew gelman stats-2011-02-15-With a bit of precognition, you’d have known I was going to post again on this topic, and with a lot of precognition, you’d have known I was going to post today

9 0.9476949 1800 andrew gelman stats-2013-04-12-Too tired to mock

10 0.94739771 1779 andrew gelman stats-2013-03-27-“Two Dogmas of Strong Objective Bayesianism”

11 0.9462837 1908 andrew gelman stats-2013-06-21-Interpreting interactions in discrete-data regression

12 0.94331467 902 andrew gelman stats-2011-09-12-The importance of style in academic writing

13 0.94186068 1794 andrew gelman stats-2013-04-09-My talks in DC and Baltimore this week

14 0.94111192 834 andrew gelman stats-2011-08-01-I owe it all to the haters

15 0.93610758 1998 andrew gelman stats-2013-08-25-A new Bem theory

16 0.93567508 1865 andrew gelman stats-2013-05-20-What happened that the journal Psychological Science published a paper with no identifiable strengths?

17 0.93452078 2191 andrew gelman stats-2014-01-29-“Questioning The Lancet, PLOS, And Other Surveys On Iraqi Deaths, An Interview With Univ. of London Professor Michael Spagat”

18 0.93430829 274 andrew gelman stats-2010-09-14-Battle of the Americans: Writer at the American Enterprise Institute disparages the American Political Science Association

19 0.93349147 803 andrew gelman stats-2011-07-14-Subtleties with measurement-error models for the evaluation of wacky claims

20 0.9300065 1774 andrew gelman stats-2013-03-22-Likelihood Ratio ≠ 1 Journal