hunch_net hunch_net-2005 hunch_net-2005-40 knowledge-graph by maker-knowledge-mining

40 hunch net-2005-03-13-Avoiding Bad Reviewing


meta info for this blog

Source: html

Introduction: If we accept that bad reviewing often occurs and want to fix it, the question is “how”? Reviewing is done by paper writers just like yourself, so a good proxy for this question is asking “How can I be a better reviewer?” Here are a few things I’ve learned by trial (and error), as a paper writer, and as a reviewer. The secret ingredient is careful thought. There is no good substitution for a deep and careful understanding. Avoid reviewing papers that you feel competitive about. You almost certainly will be asked to review papers that feel competitive if you work on subjects of common interest. But, the feeling of competition can easily lead to bad judgement. If you feel biased for some other reason, then you should avoid reviewing. For example… Feeling angry or threatened by a paper is a form of bias. See above. Double blind yourself (avoid looking at the name even in a single-blind situation). The significant effect of a name you recognize is making you pay close a


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 If we accept that bad reviewing often occurs and want to fix it, the question is “how”? [sent-1, score-0.329]

2 Avoid reviewing papers that you feel competitive about. [sent-6, score-0.531]

3 You almost certainly will be asked to review papers that feel competitive if you work on subjects of common interest. [sent-7, score-0.643]

4 Double blind yourself (avoid looking at the name even in a single-blind situation). [sent-12, score-0.417]

5 The significant effect of a name you recognize is making you pay close attention to a paper. [sent-13, score-0.44]

6 Since not paying enough attention is a standard failure mode of reviewers, a name you recognize is inevitably unfair to authors you do not recognize. [sent-14, score-0.614]

7 For conferences there is a tendency to review papers right at the deadline. [sent-16, score-0.589]

8 This tendency can easily result in misjudgements because you do not have the opportunity to really understand what a paper is saying. [sent-17, score-0.388]

9 If you don’t have time to really understand the papers that you review, then you should say “no” to review requests. [sent-20, score-0.437]

10 Always try to make review comments nonpersonal and constructive, especially in a rejection. [sent-24, score-0.345]

11 Given the above observations, a few suggestions for improved review organization can be derived. [sent-25, score-0.345]

12 A common argument against double blind reviewing is that it is often defeatable. [sent-27, score-0.79]

13 The reason why double blind reviewing is helpful is that a typical reviewer who wants to review well is aided by the elimination of side information which should not affect the acceptance of a paper. [sent-29, score-1.469]

14 Another reason why double blind reviewing is “right” is that it simply appears fairer. [sent-31, score-0.877]

15 Consequently, instead of having many paper reviews due on one day, having them due at the rate of one-per-day (or an even slower rate) may be helpful. [sent-35, score-0.53]

16 But a great researcher with many papers to review can only be a mediocre reviewer due to lack of available attention and time. [sent-39, score-0.75]

17 A typical issue in reviewing a paper is that some detail is unintentionally (and accidentally) unclear. [sent-42, score-0.526]

18 This communication can be easily set up to respect the double blind guarantee by routing through the conference site. [sent-44, score-0.854]

19 Access to other reviews should not be available until after completing your own review. [sent-52, score-0.306]

20 Allowing early access to other reviews increases noise by decreasing independence amongst reviewers. [sent-54, score-0.409]
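
The [sent-N, score-X] annotations above record each extracted sentence's position in the post and its model score, but the scoring rule itself is not shown. Below is a minimal sketch of one common tfidf sentence-scoring heuristic; the sentence splitting, the use of a mean (rather than a sum) of term weights, the choice of scikit-learn, and all names are illustrative assumptions, not the pipeline's actual code.

from sklearn.feature_extraction.text import TfidfVectorizer

def score_sentences(post_text, corpus):
    # Fit tfidf on the whole corpus of posts, then score each sentence of this
    # post by the mean tfidf weight of its non-zero terms. Splitting on ". "
    # is a deliberate simplification.
    vectorizer = TfidfVectorizer(stop_words="english")
    vectorizer.fit(corpus)
    sentences = [s.strip() for s in post_text.split(". ") if s.strip()]
    matrix = vectorizer.transform(sentences)          # one tfidf row per sentence
    scored = []
    for i in range(len(sentences)):
        row = matrix.getrow(i)
        score = float(row.sum()) / max(row.nnz, 1)    # mean weight of non-zero terms
        scored.append((i + 1, sentences[i], round(score, 3)))
    # The highest-scoring sentences form the extractive summary.
    return sorted(scored, key=lambda t: t[2], reverse=True)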


similar blogs computed by the tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('review', 0.345), ('blind', 0.286), ('double', 0.271), ('reviews', 0.235), ('reviewing', 0.233), ('paper', 0.153), ('tendency', 0.152), ('communication', 0.148), ('attention', 0.133), ('name', 0.131), ('feeling', 0.125), ('conflict', 0.11), ('feel', 0.11), ('reviewer', 0.109), ('recognize', 0.107), ('authors', 0.106), ('constructive', 0.101), ('deadlines', 0.099), ('independence', 0.099), ('competitive', 0.096), ('bad', 0.096), ('pc', 0.094), ('avoid', 0.093), ('papers', 0.092), ('careful', 0.091), ('aaai', 0.089), ('reason', 0.087), ('consequently', 0.084), ('easily', 0.083), ('allowing', 0.077), ('access', 0.075), ('please', 0.073), ('completing', 0.071), ('inevitably', 0.071), ('overconfident', 0.071), ('staggered', 0.071), ('unintentionally', 0.071), ('due', 0.071), ('typical', 0.069), ('effect', 0.069), ('reviewers', 0.066), ('prioritization', 0.066), ('writer', 0.066), ('confident', 0.066), ('ingredient', 0.066), ('overconfidence', 0.066), ('routing', 0.066), ('secret', 0.066), ('unfair', 0.066), ('writers', 0.066)]
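
The (wordName, wordTfidf) pairs above are this post's highest-weighted terms. A plausible way to produce such a list, assuming a tfidf model fit over the full hunch_net corpus, is sketched below; the variable names and the choice of scikit-learn are assumptions for illustration.

from sklearn.feature_extraction.text import TfidfVectorizer

def top_terms(corpus, doc_index, n=50):
    # Rows of the tfidf matrix are posts, columns are vocabulary terms.
    vectorizer = TfidfVectorizer(stop_words="english")
    tfidf = vectorizer.fit_transform(corpus)
    terms = vectorizer.get_feature_names_out()
    row = tfidf.getrow(doc_index).toarray().ravel()
    ranked = row.argsort()[::-1][:n]                  # indices of the n largest weights
    return [(terms[j], round(float(row[j]), 3)) for j in ranked]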

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0000002 40 hunch net-2005-03-13-Avoiding Bad Reviewing

Introduction: If we accept that bad reviewing often occurs and want to fix it, the question is “how”? Reviewing is done by paper writers just like yourself, so a good proxy for this question is asking “How can I be a better reviewer?” Here are a few things I’ve learned by trial (and error), as a paper writer, and as a reviewer. The secret ingredient is careful thought. There is no good substitution for a deep and careful understanding. Avoid reviewing papers that you feel competitive about. You almost certainly will be asked to review papers that feel competitive if you work on subjects of common interest. But, the feeling of competition can easily lead to bad judgement. If you feel biased for some other reason, then you should avoid reviewing. For example… Feeling angry or threatened by a paper is a form of bias. See above. Double blind yourself (avoid looking at the name even in a single-blind situation). The significant effect of a name you recognize is making you pay close a

2 0.41189337 65 hunch net-2005-05-02-Reviewing techniques for conferences

Introduction: The many reviews following the many paper deadlines are just about over. AAAI and ICML in particular were experimenting with several reviewing techniques. Double Blind: AAAI and ICML were both double blind this year. It seemed (overall) beneficial, but two problems arose. For theoretical papers, with a lot to say, authors often leave out the proofs. This is very hard to cope with under a double blind review because (1) you can not trust the authors got the proof right but (2) a blanket “reject” hits many probably-good papers. Perhaps authors should more strongly favor proof-complete papers sent to double blind conferences. On the author side, double blind reviewing is actually somewhat disruptive to research. In particular, it discourages the author from talking about the subject, which is one of the mechanisms of research. This is not a great drawback, but it is one not previously appreciated. Author feedback: AAAI and ICML did author feedback this year. It seem

3 0.38006705 484 hunch net-2013-06-16-Representative Reviewing

Introduction: When thinking about how best to review papers, it seems helpful to have some conception of what good reviewing is. As far as I can tell, this is almost always only discussed in the specific context of a paper (i.e. your rejected paper), or at most an area (i.e. what a “good paper” looks like for that area) rather than general principles. Neither individual papers nor areas are sufficiently general for a large conference—every paper differs in the details, and what if you want to build a new area and/or cross areas? An unavoidable reason for reviewing is that the community of research is too large. In particular, it is not possible for a researcher to read every paper which someone thinks might be of interest. This reason for reviewing exists independent of constraints on rooms or scheduling formats of individual conferences. Indeed, history suggests that physical constraints are relatively meaningless over the long term — growing conferences simply use more rooms and/or change fo

4 0.35897681 395 hunch net-2010-04-26-Compassionate Reviewing

Introduction: Most long conversations between academics seem to converge on the topic of reviewing where almost no one is happy. A basic question is: Should most people be happy? The case against is straightforward. Anyone who watches the flow of papers realizes that most papers amount to little in the longer term. By its nature research is brutal, where the second-best method is worthless, and the second person to discover things typically gets no credit. If you think about this for a moment, it’s very different from most other human endeavors. The second best migrant laborer, construction worker, manager, conductor, quarterback, etc… all can manage quite well. If a reviewer has even a vaguely predictive sense of what’s important in the longer term, then most people submitting papers will be unhappy. But this argument unravels, in my experience. Perhaps half of reviews are thoughtless or simply wrong with a small part being simply malicious. And yet, I’m sure that most reviewers genuine

5 0.31485173 116 hunch net-2005-09-30-Research in conferences

Introduction: Conferences exist as part of the process of doing research. They provide many roles including “announcing research”, “meeting people”, and “point of reference”. Not all conferences are alike so a basic question is: “to what extent do individual conferences attempt to aid research?” This question is very difficult to answer in any satisfying way. What we can do is compare details of the process across multiple conferences. Comments The average quality of comments across conferences can vary dramatically. At one extreme, the tradition in CS theory conferences is to provide essentially zero feedback. At the other extreme, some conferences have a strong tradition of providing detailed constructive feedback. Detailed feedback can give authors significant guidance about how to improve research. This is the most subjective entry. Blind Virtually all conferences offer single blind review where authors do not know reviewers. Some also provide double blind review where rev

6 0.25109765 461 hunch net-2012-04-09-ICML author feedback is open

7 0.25003272 452 hunch net-2012-01-04-Why ICML? and the summer conferences

8 0.24801515 320 hunch net-2008-10-14-Who is Responsible for a Bad Review?

9 0.24645315 315 hunch net-2008-09-03-Bidding Problems

10 0.23498695 485 hunch net-2013-06-29-The Benefits of Double-Blind Review

11 0.23386025 343 hunch net-2009-02-18-Decision by Vetocracy

12 0.22786742 207 hunch net-2006-09-12-Incentive Compatible Reviewing

13 0.220571 437 hunch net-2011-07-10-ICML 2011 and the future

14 0.21040258 318 hunch net-2008-09-26-The SODA Program Committee

15 0.1991069 221 hunch net-2006-12-04-Structural Problems in NIPS Decision Making

16 0.19854367 331 hunch net-2008-12-12-Summer Conferences

17 0.19827494 38 hunch net-2005-03-09-Bad Reviewing

18 0.1896776 453 hunch net-2012-01-28-Why COLT?

19 0.17369208 454 hunch net-2012-01-30-ICML Posters and Scope

20 0.1726833 304 hunch net-2008-06-27-Reviewing Horror Stories
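
The simValue column in the list above is consistent with a cosine similarity between tfidf vectors (the same-blog entry scores essentially 1.0), though the exact metric used by the pipeline is not stated. A minimal sketch under that assumption, with all names illustrative:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def most_similar(corpus, doc_index, k=20):
    # Compare this post's tfidf vector against every post in the corpus and
    # return the k closest, highest similarity first (the post itself ranks
    # first with similarity ~1.0).
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(corpus)
    sims = cosine_similarity(tfidf[doc_index], tfidf).ravel()
    order = sims.argsort()[::-1][:k]
    return [(int(j), round(float(sims[j]), 8)) for j in order]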


similar blogs computed by the lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.285), (1, -0.292), (2, 0.366), (3, 0.1), (4, 0.017), (5, 0.026), (6, -0.012), (7, 0.013), (8, 0.001), (9, 0.023), (10, -0.049), (11, -0.026), (12, 0.146), (13, -0.044), (14, -0.117), (15, 0.016), (16, -0.05), (17, -0.035), (18, -0.027), (19, -0.003), (20, -0.008), (21, -0.035), (22, 0.006), (23, 0.014), (24, 0.059), (25, 0.001), (26, -0.006), (27, 0.011), (28, -0.103), (29, 0.079), (30, 0.013), (31, 0.052), (32, -0.042), (33, -0.042), (34, -0.026), (35, -0.062), (36, -0.023), (37, -0.059), (38, -0.095), (39, -0.014), (40, 0.008), (41, -0.083), (42, 0.076), (43, 0.02), (44, -0.021), (45, 0.036), (46, 0.007), (47, -0.064), (48, -0.011), (49, -0.052)]
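
The (topicId, topicWeight) pairs above are this post's coordinates in an LSI (latent semantic indexing) space; the negative entries are expected, since LSI weights are SVD projections rather than probabilities. LSI is classically a truncated SVD of the tfidf term-document matrix, so a minimal sketch along those lines follows; the 50-component rank and all names are assumptions.

from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer

def lsi_weights(corpus, doc_index, n_topics=50):
    # Project every post's tfidf vector onto a rank-n_topics SVD basis; the
    # requested post's row gives its (topicId, topicWeight) pairs.
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(corpus)
    svd = TruncatedSVD(n_components=n_topics, random_state=0)
    doc_topics = svd.fit_transform(tfidf)
    return [(t, round(float(w), 3)) for t, w in enumerate(doc_topics[doc_index])]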

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.98577917 40 hunch net-2005-03-13-Avoiding Bad Reviewing

Introduction: If we accept that bad reviewing often occurs and want to fix it, the question is “how”? Reviewing is done by paper writers just like yourself, so a good proxy for this question is asking “How can I be a better reviewer?” Here are a few things I’ve learned by trial (and error), as a paper writer, and as a reviewer. The secret ingredient is careful thought. There is no good substitution for a deep and careful understanding. Avoid reviewing papers that you feel competitive about. You almost certainly will be asked to review papers that feel competitive if you work on subjects of common interest. But, the feeling of competition can easily lead to bad judgement. If you feel biased for some other reason, then you should avoid reviewing. For example… Feeling angry or threatened by a paper is a form of bias. See above. Double blind yourself (avoid looking at the name even in a single-blind situation). The significant effect of a name you recognize is making you pay close a

2 0.89590061 395 hunch net-2010-04-26-Compassionate Reviewing

Introduction: Most long conversations between academics seem to converge on the topic of reviewing where almost no one is happy. A basic question is: Should most people be happy? The case against is straightforward. Anyone who watches the flow of papers realizes that most papers amount to little in the longer term. By its nature research is brutal, where the second-best method is worthless, and the second person to discover things typically gets no credit. If you think about this for a moment, it’s very different from most other human endeavors. The second best migrant laborer, construction worker, manager, conductor, quarterback, etc… all can manage quite well. If a reviewer has even a vaguely predictive sense of what’s important in the longer term, then most people submitting papers will be unhappy. But this argument unravels, in my experience. Perhaps half of reviews are thoughtless or simply wrong with a small part being simply malicious. And yet, I’m sure that most reviewers genuine

3 0.8673237 116 hunch net-2005-09-30-Research in conferences

Introduction: Conferences exist as part of the process of doing research. They provide many roles including “announcing research”, “meeting people”, and “point of reference”. Not all conferences are alike so a basic question is: “to what extent do individual conferences attempt to aid research?” This question is very difficult to answer in any satisfying way. What we can do is compare details of the process across multiple conferences. Comments The average quality of comments across conferences can vary dramatically. At one extreme, the tradition in CS theory conferences is to provide essentially zero feedback. At the other extreme, some conferences have a strong tradition of providing detailed constructive feedback. Detailed feedback can give authors significant guidance about how to improve research. This is the most subjective entry. Blind Virtually all conferences offer single blind review where authors do not know reviewers. Some also provide double blind review where rev

4 0.86447036 65 hunch net-2005-05-02-Reviewing techniques for conferences

Introduction: The many reviews following the many paper deadlines are just about over. AAAI and ICML in particular were experimenting with several reviewing techniques. Double Blind: AAAI and ICML were both double blind this year. It seemed (overall) beneficial, but two problems arose. For theoretical papers, with a lot to say, authors often leave out the proofs. This is very hard to cope with under a double blind review because (1) you can not trust the authors got the proof right but (2) a blanket “reject” hits many probably-good papers. Perhaps authors should more strongly favor proof-complete papers sent to double blind conferences. On the author side, double blind reviewing is actually somewhat disruptive to research. In particular, it discourages the author from talking about the subject, which is one of the mechanisms of research. This is not a great drawback, but it is one not previously appreciated. Author feedback: AAAI and ICML did author feedback this year. It seem

5 0.86277467 484 hunch net-2013-06-16-Representative Reviewing

Introduction: When thinking about how best to review papers, it seems helpful to have some conception of what good reviewing is. As far as I can tell, this is almost always only discussed in the specific context of a paper (i.e. your rejected paper), or at most an area (i.e. what a “good paper” looks like for that area) rather than general principles. Neither individual papers nor areas are sufficiently general for a large conference—every paper differs in the details, and what if you want to build a new area and/or cross areas? An unavoidable reason for reviewing is that the community of research is too large. In particular, it is not possible for a researcher to read every paper which someone thinks might be of interest. This reason for reviewing exists independent of constraints on rooms or scheduling formats of individual conferences. Indeed, history suggests that physical constraints are relatively meaningless over the long term — growing conferences simply use more rooms and/or change fo

6 0.82845551 315 hunch net-2008-09-03-Bidding Problems

7 0.81994075 207 hunch net-2006-09-12-Incentive Compatible Reviewing

8 0.81069142 461 hunch net-2012-04-09-ICML author feedback is open

9 0.78029567 485 hunch net-2013-06-29-The Benefits of Double-Blind Review

10 0.7320323 320 hunch net-2008-10-14-Who is Responsible for a Bad Review?

11 0.68837488 363 hunch net-2009-07-09-The Machine Learning Forum

12 0.68660206 318 hunch net-2008-09-26-The SODA Program Committee

13 0.67519611 468 hunch net-2012-06-29-ICML survey and comments

14 0.67165136 437 hunch net-2011-07-10-ICML 2011 and the future

15 0.66611379 343 hunch net-2009-02-18-Decision by Vetocracy

16 0.66376156 221 hunch net-2006-12-04-Structural Problems in NIPS Decision Making

17 0.66265827 463 hunch net-2012-05-02-ICML: Behind the Scenes

18 0.6610837 38 hunch net-2005-03-09-Bad Reviewing

19 0.64677638 382 hunch net-2009-12-09-Future Publication Models @ NIPS

20 0.58123666 304 hunch net-2008-06-27-Reviewing Horror Stories


similar blogs computed by the lda model

lda for this blog:

topicId topicWeight

[(1, 0.03), (2, 0.011), (3, 0.082), (10, 0.036), (24, 0.019), (27, 0.184), (31, 0.016), (38, 0.027), (42, 0.037), (48, 0.013), (53, 0.057), (55, 0.225), (77, 0.01), (79, 0.01), (94, 0.103), (95, 0.034)]
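
The lda weights above form a sparse topic distribution for the post: they sum to roughly 1, with negligible topics omitted. Unlike LSI, LDA is normally fit on raw term counts rather than tfidf; a minimal sketch under that assumption follows, with the topic count, cutoff, and names chosen purely for illustration.

from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

def lda_weights(corpus, doc_index, n_topics=100, min_weight=0.01):
    # Fit LDA on term counts; each post gets a probability distribution over
    # topics, and only the non-negligible entries are reported.
    counts = CountVectorizer(stop_words="english").fit_transform(corpus)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    doc_topics = lda.fit_transform(counts)
    return [(t, round(float(w), 3))
            for t, w in enumerate(doc_topics[doc_index]) if w > min_weight]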

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.98031312 40 hunch net-2005-03-13-Avoiding Bad Reviewing

Introduction: If we accept that bad reviewing often occurs and want to fix it, the question is “how”? Reviewing is done by paper writers just like yourself, so a good proxy for this question is asking “How can I be a better reviewer?” Here are a few things I’ve learned by trial (and error), as a paper writer, and as a reviewer. The secret ingredient is careful thought. There is no good substitution for a deep and careful understanding. Avoid reviewing papers that you feel competitive about. You almost certainly will be asked to review papers that feel competitive if you work on subjects of common interest. But, the feeling of competition can easily lead to bad judgement. If you feel biased for some other reason, then you should avoid reviewing. For example… Feeling angry or threatened by a paper is a form of bias. See above. Double blind yourself (avoid looking at the name even in a single-blind situation). The significant effect of a name you recognize is making you pay close a

2 0.9517526 484 hunch net-2013-06-16-Representative Reviewing

Introduction: When thinking about how best to review papers, it seems helpful to have some conception of what good reviewing is. As far as I can tell, this is almost always only discussed in the specific context of a paper (i.e. your rejected paper), or at most an area (i.e. what a “good paper” looks like for that area) rather than general principles. Neither individual papers nor areas are sufficiently general for a large conference—every paper differs in the details, and what if you want to build a new area and/or cross areas? An unavoidable reason for reviewing is that the community of research is too large. In particular, it is not possible for a researcher to read every paper which someone thinks might be of interest. This reason for reviewing exists independent of constraints on rooms or scheduling formats of individual conferences. Indeed, history suggests that physical constraints are relatively meaningless over the long term — growing conferences simply use more rooms and/or change fo

3 0.94218361 452 hunch net-2012-01-04-Why ICML? and the summer conferences

Introduction: Here’s a quick reference for summer ML-related conferences sorted by due date:

Conference  Due date  Location  Reviewing
KDD  Feb 10  August 12-16, Beijing, China  Single Blind
COLT  Feb 14  June 25-June 27, Edinburgh, Scotland  Single Blind? (historically)
ICML  Feb 24  June 26-July 1, Edinburgh, Scotland  Double Blind, author response, zero SPOF
UAI  March 30  August 15-17, Catalina Islands, California  Double Blind, author response

Geographically, this is greatly dispersed and the UAI/KDD conflict is unfortunate. Machine Learning conferences are triannual now, between NIPS, AIStat, and ICML. This has not always been the case: the academic default is annual summer conferences, then NIPS started with a December conference, and now AIStat has grown into an April conference. However, the first claim is not quite correct. NIPS and AIStat have few competing venues while ICML implicitly competes with many other conf

4 0.93735117 453 hunch net-2012-01-28-Why COLT?

Introduction: By Shie and Nati. Following John’s advertisement for submitting to ICML, we thought it appropriate to highlight the advantages of COLT, and the reasons it is often the best place for theory papers. We would like to emphasize that we both respect ICML, and are active in ICML, both as authors and as area chairs, and certainly are not arguing that ICML is a bad place for your papers. For many papers, ICML is the best venue. But for many theory papers, COLT is a better and more appropriate place. Why should you submit to COLT? By-and-large, theory papers go to COLT. This is the tradition of the field and most theory papers are sent to COLT. This is the place to present your ground-breaking theorems and new models that will shape the theory of machine learning. COLT is more focused than ICML with a single track session. Unlike ICML, the norm in COLT is for people to sit through most sessions, and hear most of the talks presented. There is also often a lively discussion followi

5 0.93208003 395 hunch net-2010-04-26-Compassionate Reviewing

Introduction: Most long conversations between academics seem to converge on the topic of reviewing where almost no one is happy. A basic question is: Should most people be happy? The case against is straightforward. Anyone who watches the flow of papers realizes that most papers amount to little in the longer term. By its nature research is brutal, where the second-best method is worthless, and the second person to discover things typically gets no credit. If you think about this for a moment, it’s very different from most other human endeavors. The second best migrant laborer, construction worker, manager, conductor, quarterback, etc… all can manage quite well. If a reviewer has even a vaguely predictive sense of what’s important in the longer term, then most people submitting papers will be unhappy. But this argument unravels, in my experience. Perhaps half of reviews are thoughtless or simply wrong with a small part being simply malicious. And yet, I’m sure that most reviewers genuine

6 0.9259885 116 hunch net-2005-09-30-Research in conferences

7 0.92226052 437 hunch net-2011-07-10-ICML 2011 and the future

8 0.91806567 423 hunch net-2011-02-02-User preferences for search engines

9 0.91793048 454 hunch net-2012-01-30-ICML Posters and Scope

10 0.91610569 270 hunch net-2007-11-02-The Machine Learning Award goes to …

11 0.9128986 89 hunch net-2005-07-04-The Health of COLT

12 0.90459621 461 hunch net-2012-04-09-ICML author feedback is open

13 0.89545768 96 hunch net-2005-07-21-Six Months

14 0.89514625 485 hunch net-2013-06-29-The Benefits of Double-Blind Review

15 0.89432126 204 hunch net-2006-08-28-Learning Theory standards for NIPS 2006

16 0.8894338 65 hunch net-2005-05-02-Reviewing techniques for conferences

17 0.88750935 379 hunch net-2009-11-23-ICML 2009 Workshops (and Tutorials)

18 0.88604796 463 hunch net-2012-05-02-ICML: Behind the Scenes

19 0.88082314 320 hunch net-2008-10-14-Who is Responsible for a Bad Review?

20 0.87950569 315 hunch net-2008-09-03-Bidding Problems