andrew_gelman_stats andrew_gelman_stats-2013 andrew_gelman_stats-2013-2055 knowledge-graph by maker-knowledge-mining

2055 andrew gelman stats-2013-10-08-A Bayesian approach for peer-review panels? and a speculation about Bruno Frey


meta information for this blog

Source: html

Introduction: Daniel Sgroi and Andrew Oswald write: Many governments wish to assess the quality of their universities. A prominent example is the UK’s new Research Excellence Framework (REF) 2014. In the REF, peer-review panels will be provided with information on publications and citations. This paper suggests a way in which panels could choose the weights to attach to these two indicators. The analysis draws in an intuitive way on the concept of Bayesian updating (where citations gradually reveal information about the initially imperfectly-observed importance of the research). Our study should not be interpreted as the argument that only mechanistic measures ought to be used in a REF. I agree that, if you’re going to choose a weighted average, it makes sense to think about where the weights are coming from. Some aspects of Sgroi and Oswald’s proposal remind me of the old idea of evaluating journal articles by expected number of total citations. The idea is that you’d use four pieces of information on an article: its field of study, the impact factor of the journal where it was published, its total number of citations so far, and the number of years since appearance of the paper.
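The binary-state Bayesian updating the post goes on to discuss can be sketched in a few lines. This is an illustrative toy, not Sgroi and Oswald's actual model: the prior, the two likelihoods, and the reduction of each year's citations to a binary "highly cited" signal are all assumptions made up for the example.

```python
# Toy binary-state Bayesian updating: state a = "outstanding",
# state b = "not outstanding". Each year's citation count is collapsed
# into a binary signal ("highly cited year" or not). All numbers are
# illustrative, not taken from the paper.

def update(prior_a, like_a, like_b):
    """Bayes' rule: posterior P(type a) after observing one signal."""
    num = prior_a * like_a
    return num / (num + (1 - prior_a) * like_b)

# Assumed prior (e.g. informed by journal impact factor) and likelihoods:
# a highly cited year occurs with prob 0.6 for type-a papers, 0.1 for type-b.
p = 0.2
for _ in range(3):          # three highly cited years in a row
    p = update(p, 0.6, 0.1)

print(round(p, 3))          # -> 0.982; the posterior climbs as evidence accrues
```

The point the example makes concrete: citations accumulate gradually, so the estimate moves from a journal-based prior toward a citation-informed posterior, which is exactly the intuition the abstract describes.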


Summary: the most important sentences generated by the tf-idf model

sentIndex sentText sentNum sentScore

1 In the REF, peer-review panels will be provided with information on publications and citations. [sent-3, score-0.184]

2 This paper suggests a way in which panels could choose the weights to attach to these two indicators. [sent-4, score-0.442]

3 The analysis draws in an intuitive way on the concept of Bayesian updating (where citations gradually reveal information about the initially imperfectly-observed importance of the research). [sent-5, score-0.521]

4 Some aspects of Sgroi and Oswald’s proposal remind me of the old idea of evaluating journal articles by expected number of total citations. [sent-8, score-0.327]

5 The idea is that you’d use four pieces of information on an article: its field of study, the impact factor of the journal where it was published, its total number of citations so far, and the number of years since appearance of the paper. [sent-9, score-0.925]

6 If the paper is old enough, you can just take its total number of citations and map that on to some asymptoting curve to get a predicted total. [sent-10, score-0.644]

7 The point here is not that citations are all, but rather that, to the extent that you care about impact, it makes sense to count citations in a context that adjusts for how long the paper has been sitting out there. [sent-12, score-0.88]

8 I haven’t read Sgroi and Oswald’s paper in detail, but I will comment on its general approach. [sent-13, score-0.2]

9 Here’s how they put it: Our later proposal boils down to a rather intuitive idea. [sent-14, score-0.17]

10 It is that of using citations gradually to update an initial estimate (the Prior) of a journal article’s quality to form instead a considered, more informed estimate (the Posterior) of its quality. [sent-15, score-0.599]

11 The simplest possible way to model this intention is to think in terms of a simple binary partitioning of the state space. [sent-18, score-0.237]

12 Essentially, either a paper submitted to the REF is considered to be making an outstanding contribution or not. [sent-19, score-0.476]

13 On that basis we can specify the state space to be ω ∈ Ω = {a, b} where “a” is taken to mean “making an outstanding contribution to setting agendas” and “b” is taken to mean “not doing so”. [sent-20, score-0.267]

14 Following the more general theoretical literature on testing and evaluation, such as Gill and Sgroi (2008, 2012), a Bayesian model in this context is simply one that produces a posterior probability, p_i, that a given submitted paper indexed by i is of type a rather than type b. [sent-21, score-0.495]

15 To the extent that citations are measuring quality or impact, I think that the underlying quantity (as well as its measures) have to be continuous. [sent-23, score-0.567]

16 It doesn’t make sense to me to characterize papers as type a or type b. [sent-24, score-0.22]

17 So I am sympathetic to the general ideas of this paper, but the particular approach they use doesn’t seem quite right to me. [sent-26, score-0.148]

18 Or, perhaps I should say, the particular approach they use doesn’t seem quite right to me, but I am sympathetic to the general ideas of this paper. [sent-27, score-0.148]

19 It would take a lot of chutzpah for Frey to criticize formal ways of measuring research performance. [sent-35, score-0.271]

20 I was curious so I googled one of the articles and found this, by Bruno Frey and Margit Osterloh, which begins as follows: Research rankings based on publications and citations today dominate governance of academia. [sent-37, score-0.48]
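The "asymptoting curve" mentioned in extract 6 can be sketched with a saturating exponential. The curve family and the rate constant k below are assumptions for illustration; any field-calibrated citation-growth curve could be substituted.

```python
import math

# Assume cumulative citations follow c(t) = C_total * (1 - exp(-k * t)).
# Then the predicted lifetime total is the observed count divided by the
# fraction of the total expected to have accrued by age t. The rate k is
# a hypothetical, field-specific constant.

def predicted_total(citations_so_far, years_since_pub, k=0.15):
    accrued_fraction = 1 - math.exp(-k * years_since_pub)
    return citations_so_far / accrued_fraction

print(round(predicted_total(40, 5)))   # young paper: predicted total well above 40
print(round(predicted_total(40, 30)))  # old paper: predicted total close to 40
```

This captures the adjustment extract 7 asks for: the same citation count means more for a recent paper than for one that has been "sitting out there" for decades.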


similar blogs computed by the tf-idf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('citations', 0.344), ('frey', 0.318), ('sgroi', 0.283), ('ref', 0.243), ('osterloh', 0.212), ('outstanding', 0.159), ('oswald', 0.159), ('paper', 0.134), ('impact', 0.123), ('taste', 0.12), ('agendas', 0.116), ('type', 0.11), ('panels', 0.109), ('contribution', 0.108), ('intention', 0.104), ('measures', 0.096), ('intuitive', 0.092), ('total', 0.09), ('quality', 0.087), ('gradually', 0.085), ('journal', 0.083), ('sympathetic', 0.082), ('weights', 0.081), ('proposal', 0.078), ('measuring', 0.078), ('number', 0.076), ('field', 0.076), ('submitted', 0.075), ('publications', 0.075), ('binary', 0.072), ('formal', 0.072), ('general', 0.066), ('advancing', 0.064), ('distasteful', 0.064), ('substitution', 0.064), ('research', 0.063), ('attach', 0.061), ('governance', 0.061), ('partitioning', 0.061), ('doping', 0.061), ('consistent', 0.059), ('wrongly', 0.058), ('manufacturers', 0.058), ('adjusts', 0.058), ('chutzpah', 0.058), ('gill', 0.058), ('hat', 0.058), ('underlying', 0.058), ('four', 0.057), ('choose', 0.057)]
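One plausible reading of how the sentence scores above relate to this word table: score each sentence by summing the tf-idf weights of the words it contains. The scoring rule and the tokenizer below are guesses at what the mining tool does, not its documented behavior, and only a handful of the weights are reproduced.

```python
import re

# A few of the per-word tf-idf weights from the table above (abbreviated).
tfidf = {'citations': 0.344, 'frey': 0.318, 'sgroi': 0.283,
         'ref': 0.243, 'paper': 0.134, 'panels': 0.109}

def sentence_score(sentence):
    """Sum the tf-idf weights of a sentence's lowercased word tokens."""
    words = re.findall(r"[a-z']+", sentence.lower())
    return round(sum(tfidf.get(w, 0.0) for w in words), 3)

print(sentence_score("In the REF, peer-review panels will be provided "
                     "with information on publications and citations."))
```

With this toy rule the sample sentence scores 0.696 (ref + panels + citations); the tool's actual scores differ because it normalizes and uses the full vocabulary, but the principle of ranking sentences by their high-weight words is the same.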

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.99999994 2055 andrew gelman stats-2013-10-08-A Bayesian approach for peer-review panels? and a speculation about Bruno Frey


2 0.24204554 883 andrew gelman stats-2011-09-01-Arrow’s theorem update

Introduction: Someone pointed me to this letter to Bruno Frey from the editor of the Journal of Economic Perspectives. ( Background here , also more here from Olaf Storbeck.) The journal editor was upset about Frey’s self-plagiarism, and Frey responded with an apology: It was a grave mistake on our part for which we deeply apologize. It should never have happened. This is deplorable. . . . Please be assured that we take all precautions and measures that this unfortunate event does not happen again, with any journal. What I wonder is: How “deplorable” does Frey really think this is? You don’t publish a paper in 5 different places by accident! Is Frey saying that he knew this was deplorable back then and he did it anyway, based on calculation balancing the gains from multiple publications vs. the potential losses if he got caught? Or is he saying that the conduct is deplorable, but he didn’t realize it was deplorable when he did it? My guess is that Frey does not actually think the r

3 0.19077486 901 andrew gelman stats-2011-09-12-Some thoughts on academic cheating, inspired by Frey, Wegman, Fischer, Hauser, Stapel

Introduction: As regular readers of this blog are aware, I am fascinated by academic and scientific cheating and the excuses people give for it. Bruno Frey and colleagues published a single article (with only minor variants) in five different major journals, and these articles did not cite each other. And there have been several other cases of his self-plagiarism (see this review from Olaf Storbeck). I do not mind the general practice of repeating oneself for different audiences—in the social sciences, we call this Arrow’s Theorem —but in this case Frey seems to have gone a bit too far. Blogger Economic Logic has looked into this and concluded that this sort of common practice is standard in “the context of the German(-speaking) academic environment,” and what sets Frey apart is not his self-plagiarism or even his brazenness but rather his practice of doing it in high-visibility journals. Economic Logic writes that “[Frey's] contribution is pedagogical, he found a good and interesting

4 0.14741296 371 andrew gelman stats-2010-10-26-Musical chairs in econ journals

Introduction: Tyler Cowen links to a paper by Bruno Frey on the lack of space for articles in economics journals. Frey writes: To further their careers, [academic economists] are required to publish in A-journals, but for the vast majority this is impossible because there are few slots open in such journals. Such academic competition maybe useful to generate hard work, however, there may be serious negative consequences: the wrong output may be produced in an inefficient way, the wrong people may be selected, and losers may react in a harmful way. According to Frey, the consensus is that there are only five top economics journals–and one of those five is Econometrica, which is so specialized that I’d say that, for most academic economists, there are only four top places they can publish. The difficulty is that demand for these slots outpaces supply: for example, in 2007 there were only 275 articles in all these journals combined (or 224 if you exclude Econometrica), while “a rough estim

5 0.14362593 675 andrew gelman stats-2011-04-22-Arrow’s other theorem

Introduction: I received the following email from someone who’d like to remain anonymous: Lately I [the anonymous correspondent] witnessed that Bruno Frey has published two articles in two well known referreed journals on the Titanic disaster that try to explain survival rates of passenger on board. The articles were published in the Journal of Economic Perspectives and Rationality & Society . While looking up the name of the second journal where I stumbled across the article I even saw that they put the message in a third journal, the Proceedings of the National Academy of Sciences United States of America . To say it in Sopranos like style – with all due respect, I know Bruno Frey from conferences, I really appreciate his take on economics as a social science and he has really published more interesting stuff that most economists ever will. But putting the same message into three journals gives me headaches for at least two reasons: 1) When building a track record and scientific rep

6 0.13875765 902 andrew gelman stats-2011-09-12-The importance of style in academic writing

7 0.13270865 1272 andrew gelman stats-2012-04-20-More proposals to reform the peer-review system

8 0.13245861 1435 andrew gelman stats-2012-07-30-Retracted articles and unethical behavior in economics journals?

9 0.13128425 1865 andrew gelman stats-2013-05-20-What happened that the journal Psychological Science published a paper with no identifiable strengths?

10 0.12994194 1393 andrew gelman stats-2012-06-26-The reverse-journal-submission system

11 0.11472003 2245 andrew gelman stats-2014-03-12-More on publishing in journals

12 0.11202887 1054 andrew gelman stats-2011-12-12-More frustrations trying to replicate an analysis published in a reputable journal

13 0.1037875 1321 andrew gelman stats-2012-05-15-A statistical research project: Weeding out the fraudulent citations

14 0.10354101 2244 andrew gelman stats-2014-03-11-What if I were to stop publishing in journals?

15 0.10035498 1588 andrew gelman stats-2012-11-23-No one knows what it’s like to be the bad man

16 0.098980166 1247 andrew gelman stats-2012-04-05-More philosophy of Bayes

17 0.097727984 1695 andrew gelman stats-2013-01-28-Economists argue about Bayes

18 0.09556105 937 andrew gelman stats-2011-10-02-That advice not to work so hard

19 0.094378233 2269 andrew gelman stats-2014-03-27-Beyond the Valley of the Trolls

20 0.090409786 2134 andrew gelman stats-2013-12-14-Oswald evidence


similar blogs computed by the LSI model

lsi for this blog:

topicId topicWeight

[(0, 0.206), (1, 0.016), (2, -0.018), (3, -0.081), (4, -0.056), (5, -0.054), (6, 0.027), (7, -0.06), (8, -0.024), (9, -0.001), (10, 0.079), (11, -0.001), (12, -0.068), (13, -0.003), (14, -0.02), (15, -0.017), (16, 0.06), (17, 0.01), (18, -0.022), (19, 0.005), (20, 0.009), (21, 0.041), (22, 0.02), (23, -0.01), (24, 0.007), (25, -0.003), (26, -0.022), (27, 0.025), (28, 0.002), (29, -0.005), (30, 0.031), (31, -0.025), (32, 0.017), (33, 0.004), (34, 0.008), (35, -0.009), (36, 0.014), (37, 0.013), (38, -0.015), (39, 0.027), (40, -0.004), (41, -0.042), (42, -0.008), (43, -0.036), (44, -0.042), (45, 0.01), (46, 0.061), (47, -0.018), (48, -0.008), (49, 0.039)]
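The simValue numbers in lists like the one below are presumably cosine similarities between posts' LSI topic-weight vectors (like the one just above). A minimal sketch, using the first four of this post's weights plus a hypothetical neighbor:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length dense vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# First four LSI weights of this post, and a made-up similar document.
doc_a = [0.206, 0.016, -0.018, -0.081]
doc_b = [0.190, 0.020, -0.015, -0.070]
print(round(cosine(doc_a, doc_b), 3))
```

Cosine similarity is the standard choice for LSI spaces because it ignores document length and compares only the direction of the topic vectors; that is consistent with the same-blog entry scoring essentially 1.0 against itself.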

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.97164661 2055 andrew gelman stats-2013-10-08-A Bayesian approach for peer-review panels? and a speculation about Bruno Frey


2 0.85673672 1435 andrew gelman stats-2012-07-30-Retracted articles and unethical behavior in economics journals?

Introduction: Stan Liebowitz writes: Have you ever heard of an article being retracted in economics? I know you have only been doing this for a few years but I suspect that the answer is that none or very few are retracted. No economist would ever deceive another. There is virtually no interest in detecting cheating. And what good would that do if there is no form of punishment? I say this because I think I have found a case in one of our top journals but the editor allowed the authors of the original article to write an anonymous referee report defending themselves and used this report to reject my comment even though an independent referee recommended publication. My reply: I wonder how this sort of thing will change in the future as journals become less important. My impression is that, on one side, researchers are increasingly citing NBER reports, Arxiv preprints, and the like; while, from the other direction, journals such as Science and Nature are developing the reputations of being “t

3 0.84374183 2353 andrew gelman stats-2014-05-30-I posted this as a comment on a sociology blog

Introduction: I discussed two problems: 1. An artificial scarcity applied to journal publication, a scarcity which I believe is being enforced based on a monetary principle of not wanting to reduce the value of publication. The problem is that journals don’t just spread information and improve communication, they also represent chits for hiring and promotion. I’d prefer to separate these two aspects of publication. To keep these functions tied together seems to me like a terrible mistake. It would be as if, instead of using dollar bills as currency, we were to just use paper , and then if the government kept paper artificially scarce to retain the value of money, so that we were reduced to scratching notes to each other on walls and tables. 2. The discontinuous way in which unpublished papers and submissions to journals are taken as highly suspect and requiring a strong justification of all methods and assumptions, but once a paper becomes published its conclusions are taken as true unless

4 0.83922756 883 andrew gelman stats-2011-09-01-Arrow’s theorem update

Introduction: Someone pointed me to this letter to Bruno Frey from the editor of the Journal of Economic Perspectives. ( Background here , also more here from Olaf Storbeck.) The journal editor was upset about Frey’s self-plagiarism, and Frey responded with an apology: It was a grave mistake on our part for which we deeply apologize. It should never have happened. This is deplorable. . . . Please be assured that we take all precautions and measures that this unfortunate event does not happen again, with any journal. What I wonder is: How “deplorable” does Frey really think this is? You don’t publish a paper in 5 different places by accident! Is Frey saying that he knew this was deplorable back then and he did it anyway, based on calculation balancing the gains from multiple publications vs. the potential losses if he got caught? Or is he saying that the conduct is deplorable, but he didn’t realize it was deplorable when he did it? My guess is that Frey does not actually think the r

5 0.83854026 1654 andrew gelman stats-2013-01-04-“Don’t think of it as duplication. Think of it as a single paper in a superposition of two quantum journals.”

Introduction: Adam Marcus at Retraction Watch reports on a physicist at the University of Toronto who had this unfortunate thing happen to him: This article has been retracted at the request of the Editor-in-Chief and first and corresponding author. The article was largely a duplication of a paper that had already appeared in ACS Nano, 4 (2010) 3374–3380, http://dx.doi.org/10.1021/nn100335g. The first and the corresponding authors (Kramer and Sargent) would like to apologize for this administrative error on their part . . . “Administrative error” . . . I love that! Is that what the robber says when he knocks over a liquor store and gets caught? As Marcus points out, the two papers have different titles and a different order of authors, which makes it less plausible that this was an administrative mistake (as could happen, for example, if a secretary was given a list of journals to submit the paper to, and accidentally submitted it to the second journal on the list without realizing it

6 0.83598518 1585 andrew gelman stats-2012-11-20-“I know you aren’t the plagiarism police, but . . .”

7 0.83491892 675 andrew gelman stats-2011-04-22-Arrow’s other theorem

8 0.81539828 1865 andrew gelman stats-2013-05-20-What happened that the journal Psychological Science published a paper with no identifiable strengths?

9 0.81383562 371 andrew gelman stats-2010-10-26-Musical chairs in econ journals

10 0.81190068 2269 andrew gelman stats-2014-03-27-Beyond the Valley of the Trolls

11 0.80076861 1272 andrew gelman stats-2012-04-20-More proposals to reform the peer-review system

12 0.79820108 902 andrew gelman stats-2011-09-12-The importance of style in academic writing

13 0.79377419 1273 andrew gelman stats-2012-04-20-Proposals for alternative review systems for scientific work

14 0.79132968 1928 andrew gelman stats-2013-07-06-How to think about papers published in low-grade journals?

15 0.78948647 601 andrew gelman stats-2011-03-05-Against double-blind reviewing: Political science and statistics are not like biology and physics

16 0.78707188 834 andrew gelman stats-2011-08-01-I owe it all to the haters

17 0.78664994 2233 andrew gelman stats-2014-03-04-Literal vs. rhetorical

18 0.78583729 1393 andrew gelman stats-2012-06-26-The reverse-journal-submission system

19 0.77993691 2217 andrew gelman stats-2014-02-19-The replication and criticism movement is not about suppressing speculative research; rather, it’s all about enabling science’s fabled self-correcting nature

20 0.77789718 1139 andrew gelman stats-2012-01-26-Suggested resolution of the Bem paradox


similar blogs computed by the LDA model

lda for this blog:

topicId topicWeight

[(15, 0.05), (16, 0.066), (22, 0.037), (24, 0.204), (28, 0.028), (36, 0.047), (56, 0.013), (57, 0.029), (58, 0.012), (63, 0.012), (65, 0.041), (68, 0.032), (76, 0.025), (86, 0.059), (88, 0.013), (99, 0.232)]
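For the LDA list below, each post is a sparse topic distribution like the (topicId, topicWeight) pairs above. One common way to compare two such distributions is Hellinger affinity; whether this tool uses that metric (rather than, say, cosine over the same sparse vectors) is an assumption, and the weights shown above are only a partial, non-normalized subset, so the numbers here are purely illustrative.

```python
import math

def hellinger_sim(p, q):
    """1 minus Hellinger distance between sparse distributions
    (dicts mapping topicId -> weight); 1.0 means identical."""
    topics = set(p) | set(q)
    dist = math.sqrt(0.5 * sum(
        (math.sqrt(p.get(t, 0.0)) - math.sqrt(q.get(t, 0.0))) ** 2
        for t in topics))
    return 1.0 - dist

# A few of this post's LDA topic weights, and a hypothetical similar post.
post_a = {24: 0.204, 99: 0.232, 16: 0.066}
post_b = {24: 0.180, 99: 0.250, 15: 0.050}
print(round(hellinger_sim(post_a, post_b), 3))
```

Sparse dictionaries keep the comparison cheap: only topics present in either post contribute, mirroring the compact (topicId, topicWeight) representation the tool prints.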

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.96636307 2055 andrew gelman stats-2013-10-08-A Bayesian approach for peer-review panels? and a speculation about Bruno Frey


2 0.95255685 1367 andrew gelman stats-2012-06-05-Question 26 of my final exam for Design and Analysis of Sample Surveys

Introduction: 26. You have just graded an exam with 28 questions and 15 students. You fit a logistic item-response model estimating ability, difficulty, and discrimination parameters. Which of the following statements are basically true? (Indicate all that apply.) (a) If a question is answered correctly by students with very low and very high ability, but is missed by students in the middle, it will have a high value for its discrimination parameter. (b) It is not possible to fit an item-response model when you have more questions than students. In order to fit the model, you either need to reduce the number of questions (for example, by discarding some questions or by putting together some questions into a combined score) or increase the number of students in the dataset. (c) To keep the model identified, you can set one of the difficulty parameters or one of the ability parameters to zero and set one of the discrimination parameters to 1. (d) If two students answer the same number of q

3 0.95056498 351 andrew gelman stats-2010-10-18-“I was finding the test so irritating and boring that I just started to click through as fast as I could”

Introduction: In this article , Oliver Sacks talks about his extreme difficulty in recognizing people (even close friends) and places (even extremely familiar locations such as his apartment and his office). After reading this, I started to wonder if I have a very mild case of face-blindness. I’m very good at recognizing places, but I’m not good at faces. And I can’t really visualize faces at all. Like Sacks and some of his correspondents, I often have to do it by cheating, by recognizing certain landmarks that I can remember, thus coding the face linguistically rather than visually. (On the other hand, when thinking about mathematics or statistics, I’m very visual, as readers of this blog can attest.) Anyway, in searching for the link to Sacks’s article, I came across the “ Cambridge Face Memory Test .” My reaction when taking this test was mostly irritation. I just found it annoying to stare at all these unadorned faces, and in my attempt to memorize them, I was trying to use trick

4 0.94955391 494 andrew gelman stats-2010-12-31-Type S error rates for classical and Bayesian single and multiple comparison procedures

Introduction: Type S error: When your estimate is the wrong sign, compared to the true value of the parameter Type M error: When the magnitude of your estimate is far off, compared to the true value of the parameter More here.

5 0.9469831 1240 andrew gelman stats-2012-04-02-Blogads update

Introduction: A few months ago I reported on someone who wanted to insert text links into the blog. I asked her how much they would pay and got no answer. Yesterday, though, I received this reply: Hello Andrew, I am sorry for the delay in getting back to you. I’d like to make a proposal for your site. Please refer below. We would like to place a simple text link ad on page http://andrewgelman.com/2011/07/super_sam_fuld/ to link to *** with the key phrase ***. We will incorporate the key phrase into a sentence so it would read well. Rest assured it won’t sound obnoxious or advertorial. We will then process the final text link code as soon as you agree to our proposal. We can offer you $200 for this with the assumption that you will keep the link “live” on that page for 12 months or longer if you prefer. Please get back to us with a quick reply on your thoughts on this and include your Paypal ID for payment process. Hoping for a positive response from you. I wrote back: Hi,

6 0.94344962 1474 andrew gelman stats-2012-08-29-More on scaled-inverse Wishart and prior independence

7 0.94254136 846 andrew gelman stats-2011-08-09-Default priors update?

8 0.94220245 1713 andrew gelman stats-2013-02-08-P-values and statistical practice

9 0.94007295 779 andrew gelman stats-2011-06-25-Avoiding boundary estimates using a prior distribution as regularization

10 0.94005454 2299 andrew gelman stats-2014-04-21-Stan Model of the Week: Hierarchical Modeling of Supernovas

11 0.93902683 2161 andrew gelman stats-2014-01-07-My recent debugging experience

12 0.93897641 1637 andrew gelman stats-2012-12-24-Textbook for data visualization?

13 0.93790507 2224 andrew gelman stats-2014-02-25-Basketball Stats: Don’t model the probability of win, model the expected score differential.

14 0.9374336 1087 andrew gelman stats-2011-12-27-“Keeping things unridiculous”: Berger, O’Hagan, and me on weakly informative priors

15 0.9365859 899 andrew gelman stats-2011-09-10-The statistical significance filter

16 0.93655002 994 andrew gelman stats-2011-11-06-Josh Tenenbaum presents . . . a model of folk physics!

17 0.93588823 1454 andrew gelman stats-2012-08-11-Weakly informative priors for Bayesian nonparametric models?

18 0.93574166 1208 andrew gelman stats-2012-03-11-Gelman on Hennig on Gelman on Bayes

19 0.93547535 953 andrew gelman stats-2011-10-11-Steve Jobs’s cancer and science-based medicine

20 0.93502879 1155 andrew gelman stats-2012-02-05-What is a prior distribution?