hunch_net hunch_net-2005 hunch_net-2005-65 knowledge-graph by maker-knowledge-mining

65 hunch net-2005-05-02-Reviewing techniques for conferences


meta info for this blog

Source: html

Introduction: The many reviews following the many paper deadlines are just about over. AAAI and ICML in particular were experimenting with several reviewing techniques. Double Blind: AAAI and ICML were both double blind this year. It seemed (overall) beneficial, but two problems arose. For theoretical papers, with a lot to say, authors often leave out the proofs. This is very hard to cope with under a double blind review because (1) you can not trust the authors got the proof right but (2) a blanket “reject” hits many probably-good papers. Perhaps authors should more strongly favor proof-complete papers sent to double blind conferences. On the author side, double blind reviewing is actually somewhat disruptive to research. In particular, it discourages the author from talking about the subject, which is one of the mechanisms of research. This is not a great drawback, but it is one not previously appreciated. Author feedback: AAAI and ICML did author feedback this year. It seem


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 The many reviews following the many paper deadlines are just about over. [sent-1, score-0.219]

2 AAAI and ICML in particular were experimenting with several reviewing techniques. [sent-2, score-0.277]

3 Double Blind: AAAI and ICML were both double blind this year. [sent-3, score-0.682]

4 It seemed (overall) beneficial, but two problems arose. [sent-4, score-0.221]

5 For theoretical papers, with a lot to say, authors often leave out the proofs. [sent-5, score-0.341]

6 This is very hard to cope with under a double blind review because (1) you can not trust the authors got the proof right but (2) a blanket “reject” hits many probably-good papers. [sent-6, score-1.376]

7 Perhaps authors should more strongly favor proof-complete papers sent to double blind conferences. [sent-7, score-1.171]

8 On the author side, double blind reviewing is actually somewhat disruptive to research. [sent-8, score-1.367]

9 In particular, it discourages the author from talking about the subject, which is one of the mechanisms of research. [sent-9, score-0.481]

10 This is not a great drawback, but it is one not previously appreciated. [sent-10, score-0.07]

11 Author feedback: AAAI and ICML did author feedback this year. [sent-11, score-0.497]

12 The ICML-style author feedback (more space, no requirement of attacking the review to respond), appeared somewhat more helpful and natural. [sent-13, score-1.187]

13 It seems ok to pass a compliment from author to reviewer. [sent-14, score-0.493]

14 Discussion Periods: AAAI seemed more natural than ICML with respect to discussion periods. [sent-15, score-0.319]

15 For ICML, there were “dead times” when reviews were submitted but discussions amongst reviewers were not encouraged. [sent-16, score-0.294]

16 This has the drawback of letting people forget their review before discussing it. [sent-17, score-0.593]


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('blind', 0.351), ('double', 0.331), ('author', 0.329), ('aaai', 0.305), ('seemed', 0.221), ('icml', 0.19), ('authors', 0.182), ('feedback', 0.168), ('review', 0.161), ('drawback', 0.155), ('reviews', 0.134), ('disruptive', 0.122), ('somewhat', 0.12), ('reviewing', 0.114), ('attacking', 0.113), ('periods', 0.113), ('letting', 0.102), ('requirement', 0.102), ('helpful', 0.102), ('trust', 0.098), ('dead', 0.098), ('discussion', 0.098), ('beneficial', 0.095), ('got', 0.095), ('respond', 0.095), ('sent', 0.095), ('appeared', 0.092), ('cope', 0.092), ('forget', 0.092), ('leave', 0.085), ('experimenting', 0.085), ('deadlines', 0.085), ('favor', 0.085), ('discussing', 0.083), ('pass', 0.083), ('talking', 0.081), ('ok', 0.081), ('submitted', 0.081), ('discussions', 0.079), ('particular', 0.078), ('reject', 0.075), ('lot', 0.074), ('mechanisms', 0.071), ('previously', 0.07), ('proof', 0.066), ('side', 0.065), ('strongly', 0.064), ('papers', 0.063), ('times', 0.062), ('overall', 0.061)]
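The word weights above are tf-idf scores, and the sentence scores in the summary are built from them. As a rough illustration of how such scores can be computed (a minimal sketch; the actual pipeline behind this dump is not documented here, and the function and variable names are hypothetical):

```python
import math
from collections import Counter

def tfidf_sentence_scores(sentences):
    """Score each sentence (a list of tokens) by the sum of its
    length-normalized tf-idf word weights, treating each sentence
    as one document of the corpus."""
    docs = [Counter(s) for s in sentences]
    n = len(docs)
    df = Counter()                      # document frequency per word
    for d in docs:
        df.update(d.keys())
    scores = []
    for d in docs:
        total = sum(d.values())
        score = sum((tf / total) * math.log(n / df[w]) for w, tf in d.items())
        scores.append(score)
    return scores
```

Sentences dominated by corpus-wide common words score near zero, while sentences with distinctive vocabulary (like "blind" and "double" here) score higher, which is how a ranked summary like the one above can be extracted.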

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0000001 65 hunch net-2005-05-02-Reviewing techniques for conferences


2 0.41189337 40 hunch net-2005-03-13-Avoiding Bad Reviewing

Introduction: If we accept that bad reviewing often occurs and want to fix it, the question is “how”? Reviewing is done by paper writers just like yourself, so a good proxy for this question is asking “How can I be a better reviewer?” Here are a few things I’ve learned by trial (and error), as a paper writer, and as a reviewer. The secret ingredient is careful thought. There is no good substitution for a deep and careful understanding. Avoid reviewing papers that you feel competitive about. You almost certainly will be asked to review papers that feel competitive if you work on subjects of common interest. But, the feeling of competition can easily lead to bad judgement. If you feel biased for some other reason, then you should avoid reviewing. For example… Feeling angry or threatened by a paper is a form of bias. See above. Double blind yourself (avoid looking at the name even in a single-blind situation). The significant effect of a name you recognize is making you pay close a

3 0.33740813 395 hunch net-2010-04-26-Compassionate Reviewing

Introduction: Most long conversations between academics seem to converge on the topic of reviewing where almost no one is happy. A basic question is: Should most people be happy? The case against is straightforward. Anyone who watches the flow of papers realizes that most papers amount to little in the longer term. By it’s nature research is brutal, where the second-best method is worthless, and the second person to discover things typically gets no credit. If you think about this for a moment, it’s very different from most other human endeavors. The second best migrant laborer, construction worker, manager, conductor, quarterback, etc… all can manage quite well. If a reviewer has even a vaguely predictive sense of what’s important in the longer term, then most people submitting papers will be unhappy. But this argument unravels, in my experience. Perhaps half of reviews are thoughtless or simply wrong with a small part being simply malicious. And yet, I’m sure that most reviewers genuine

4 0.30060193 116 hunch net-2005-09-30-Research in conferences

Introduction: Conferences exist as part of the process of doing research. They provide many roles including “announcing research”, “meeting people”, and “point of reference”. Not all conferences are alike so a basic question is: “to what extent do individual conferences attempt to aid research?” This question is very difficult to answer in any satisfying way. What we can do is compare details of the process across multiple conferences. Comments The average quality of comments across conferences can vary dramatically. At one extreme, the tradition in CS theory conferences is to provide essentially zero feedback. At the other extreme, some conferences have a strong tradition of providing detailed constructive feedback. Detailed feedback can give authors significant guidance about how to improve research. This is the most subjective entry. Blind Virtually all conferences offer single blind review where authors do not know reviewers. Some also provide double blind review where rev

5 0.29739371 331 hunch net-2008-12-12-Summer Conferences

Introduction: Here’s a handy table for the summer conferences. Conference Deadline Reviewer Targeting Double Blind Author Feedback Location Date ICML ( wrong ICML ) January 26 Yes Yes Yes Montreal, Canada June 14-17 COLT February 13 No No Yes Montreal June 19-21 UAI March 13 No Yes No Montreal June 19-21 KDD February 2/6 No No No Paris, France June 28-July 1 Reviewer targeting is new this year. The idea is that many poor decisions happen because the papers go to reviewers who are unqualified, and the hope is that allowing authors to point out who is qualified results in better decisions. In my experience, this is a reasonable idea to test. Both UAI and COLT are experimenting this year as well with double blind and author feedback, respectively. Of the two, I believe author feedback is more important, as I’ve seen it make a difference. However, I still consider double blind reviewing a net wi

6 0.28777182 452 hunch net-2012-01-04-Why ICML? and the summer conferences

7 0.23857301 484 hunch net-2013-06-16-Representative Reviewing

8 0.20989299 453 hunch net-2012-01-28-Why COLT?

9 0.20818952 461 hunch net-2012-04-09-ICML author feedback is open

10 0.20224999 468 hunch net-2012-06-29-ICML survey and comments

11 0.19763142 437 hunch net-2011-07-10-ICML 2011 and the future

12 0.16324444 387 hunch net-2010-01-19-Deadline Season, 2010

13 0.15821511 207 hunch net-2006-09-12-Incentive Compatible Reviewing

14 0.15475786 485 hunch net-2013-06-29-The Benefits of Double-Blind Review

15 0.15447351 454 hunch net-2012-01-30-ICML Posters and Scope

16 0.15021996 318 hunch net-2008-09-26-The SODA Program Committee

17 0.14564925 422 hunch net-2011-01-16-2011 Summer Conference Deadline Season

18 0.14400958 315 hunch net-2008-09-03-Bidding Problems

19 0.14393586 403 hunch net-2010-07-18-ICML & COLT 2010

20 0.14065877 466 hunch net-2012-06-05-ICML acceptance statistics


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.19), (1, -0.31), (2, 0.334), (3, 0.003), (4, 0.014), (5, -0.068), (6, -0.061), (7, -0.046), (8, 0.031), (9, 0.038), (10, -0.087), (11, -0.02), (12, 0.071), (13, 0.003), (14, -0.101), (15, 0.048), (16, -0.068), (17, -0.131), (18, -0.041), (19, -0.025), (20, 0.026), (21, -0.095), (22, 0.015), (23, 0.063), (24, 0.14), (25, -0.067), (26, -0.017), (27, 0.065), (28, -0.122), (29, 0.131), (30, -0.078), (31, 0.005), (32, -0.023), (33, -0.004), (34, -0.005), (35, -0.047), (36, -0.015), (37, -0.076), (38, -0.03), (39, -0.029), (40, 0.058), (41, -0.069), (42, 0.05), (43, 0.044), (44, 0.01), (45, 0.099), (46, 0.042), (47, 0.016), (48, -0.062), (49, 0.006)]
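The LSI topic weights above place the post in a low-dimensional latent space. A minimal sketch of that projection via a truncated SVD (assuming a term-document weight matrix such as tf-idf as input; names are hypothetical and this is not the exact model that produced these numbers):

```python
import numpy as np

def lsi_topic_weights(term_doc, k):
    """Project documents into a k-dimensional latent semantic space.
    term_doc: (terms x docs) weight matrix, e.g. tf-idf.
    Returns a (docs x k) matrix of topic weights."""
    U, S, Vt = np.linalg.svd(term_doc, full_matrices=False)
    # Each document's coordinates are its right-singular-vector entries
    # scaled by the corresponding singular values.
    return Vt[:k].T * S[:k]
```

Similarity scores like the simValue column in the list below can then be taken as cosine similarities between rows of this matrix.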

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.99198383 65 hunch net-2005-05-02-Reviewing techniques for conferences


2 0.8311733 395 hunch net-2010-04-26-Compassionate Reviewing


3 0.81859457 40 hunch net-2005-03-13-Avoiding Bad Reviewing


4 0.79860139 116 hunch net-2005-09-30-Research in conferences


5 0.75873065 331 hunch net-2008-12-12-Summer Conferences


6 0.6772573 468 hunch net-2012-06-29-ICML survey and comments

7 0.63511115 484 hunch net-2013-06-16-Representative Reviewing

8 0.62764579 452 hunch net-2012-01-04-Why ICML? and the summer conferences

9 0.60476625 461 hunch net-2012-04-09-ICML author feedback is open

10 0.59422255 315 hunch net-2008-09-03-Bidding Problems

11 0.59042257 207 hunch net-2006-09-12-Incentive Compatible Reviewing

12 0.58032721 485 hunch net-2013-06-29-The Benefits of Double-Blind Review

13 0.56283176 437 hunch net-2011-07-10-ICML 2011 and the future

14 0.52644008 382 hunch net-2009-12-09-Future Publication Models @ NIPS

15 0.52485198 318 hunch net-2008-09-26-The SODA Program Committee

16 0.51266795 387 hunch net-2010-01-19-Deadline Season, 2010

17 0.47764656 363 hunch net-2009-07-09-The Machine Learning Forum

18 0.47637978 453 hunch net-2012-01-28-Why COLT?

19 0.46937093 226 hunch net-2007-01-04-2007 Summer Machine Learning Conferences

20 0.46166 422 hunch net-2011-01-16-2011 Summer Conference Deadline Season


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(3, 0.047), (27, 0.158), (38, 0.056), (53, 0.022), (55, 0.293), (68, 0.196), (94, 0.04), (95, 0.05), (96, 0.023)]
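The LDA topic weights above are per-document topic proportions. One common way to estimate such proportions is collapsed Gibbs sampling; the sketch below is a toy implementation under assumed hyperparameters (alpha, beta) and hypothetical names, not the model that generated these numbers:

```python
import random

def lda_gibbs(docs, n_topics, n_iter=100, alpha=0.1, beta=0.01, seed=0):
    """Toy collapsed Gibbs sampler for LDA.
    docs: list of token lists. Returns per-document topic proportions."""
    rng = random.Random(seed)
    vocab = sorted({w for d in docs for w in d})
    V = len(vocab)
    widx = {w: i for i, w in enumerate(vocab)}
    # Random initial topic assignment for every token.
    z = [[rng.randrange(n_topics) for _ in d] for d in docs]
    ndk = [[0] * n_topics for _ in docs]          # doc -> topic counts
    nkw = [[0] * V for _ in range(n_topics)]      # topic -> word counts
    nk = [0] * n_topics                           # topic totals
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            k = z[d][i]
            ndk[d][k] += 1
            nkw[k][widx[w]] += 1
            nk[k] += 1
    for _ in range(n_iter):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k, wi = z[d][i], widx[w]
                ndk[d][k] -= 1; nkw[k][wi] -= 1; nk[k] -= 1
                # Conditional distribution over topics for this token.
                probs = [(ndk[d][t] + alpha) * (nkw[t][wi] + beta) / (nk[t] + V * beta)
                         for t in range(n_topics)]
                r = rng.random() * sum(probs)
                acc, new_k = 0.0, n_topics - 1    # fallback for round-off
                for t, p in enumerate(probs):
                    acc += p
                    if r < acc:
                        new_k = t
                        break
                z[d][i] = new_k
                ndk[d][new_k] += 1; nkw[new_k][wi] += 1; nk[new_k] += 1
    # Smoothed topic proportions per document (each row sums to 1).
    return [[(ndk[d][t] + alpha) / (len(docs[d]) + n_topics * alpha)
             for t in range(n_topics)] for d in range(len(docs))]
```

Each returned row is a (topicId, topicWeight) profile like the one above, and document similarity can again be measured by comparing these profiles.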

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.97914487 65 hunch net-2005-05-02-Reviewing techniques for conferences


2 0.85560334 270 hunch net-2007-11-02-The Machine Learning Award goes to …

Introduction: Perhaps the biggest CS prize for research is the Turing Award , which has a $0.25M cash prize associated with it. It appears none of the prizes so far have been for anything like machine learning (the closest are perhaps database awards). In CS theory, there is the Gödel Prize which is smaller and newer, offering a $5K prize along and perhaps (more importantly) recognition. One such award has been given for Machine Learning, to Robert Schapire and Yoav Freund for Adaboost. In Machine Learning, there seems to be no equivalent of these sorts of prizes. There are several plausible reasons for this: There is no coherent community. People drift in and out of the central conferences all the time. Most of the author names from 10 years ago do not occur in the conferences of today. In addition, the entire subject area is fairly new. There are at least a core group of people who have stayed around. Machine Learning work doesn’t last Almost every paper is fo

3 0.85399902 395 hunch net-2010-04-26-Compassionate Reviewing


4 0.84576261 90 hunch net-2005-07-07-The Limits of Learning Theory

Introduction: Suppose we had an infinitely powerful mathematician sitting in a room and proving theorems about learning. Could he solve machine learning? The answer is “no”. This answer is both obvious and sometimes underappreciated. There are several ways to conclude that some bias is necessary in order to succesfully learn. For example, suppose we are trying to solve classification. At prediction time, we observe some features X and want to make a prediction of either 0 or 1 . Bias is what makes us prefer one answer over the other based on past experience. In order to learn we must: Have a bias. Always predicting 0 is as likely as 1 is useless. Have the “right” bias. Predicting 1 when the answer is 0 is also not helpful. The implication of “have a bias” is that we can not design effective learning algorithms with “a uniform prior over all possibilities”. The implication of “have the ‘right’ bias” is that our mathematician fails since “right” is defined wi

5 0.84031445 453 hunch net-2012-01-28-Why COLT?

Introduction: By Shie and Nati Following John’s advertisement for submitting to ICML, we thought it appropriate to highlight the advantages of COLT, and the reasons it is often the best place for theory papers. We would like to emphasize that we both respect ICML, and are active in ICML, both as authors and as area chairs, and certainly are not arguing that ICML is a bad place for your papers. For many papers, ICML is the best venue. But for many theory papers, COLT is a better and more appropriate place. Why should you submit to COLT? By-and-large, theory papers go to COLT. This is the tradition of the field and most theory papers are sent to COLT. This is the place to present your ground-breaking theorems and new models that will shape the theory of machine learning. COLT is more focused then ICML with a single track session. Unlike ICML, the norm in COLT is for people to sit through most sessions, and hear most of the talks presented. There is also often a lively discussion followi

6 0.83730304 448 hunch net-2011-10-24-2011 ML symposium and the bears

7 0.82892084 302 hunch net-2008-05-25-Inappropriate Mathematics for Machine Learning

8 0.82798207 20 hunch net-2005-02-15-ESPgame and image labeling

9 0.81771821 446 hunch net-2011-10-03-Monday announcements

10 0.81625384 271 hunch net-2007-11-05-CMU wins DARPA Urban Challenge

11 0.81374806 110 hunch net-2005-09-10-“Failure” is an option

12 0.81184375 452 hunch net-2012-01-04-Why ICML? and the summer conferences

13 0.79956543 40 hunch net-2005-03-13-Avoiding Bad Reviewing

14 0.79868776 331 hunch net-2008-12-12-Summer Conferences

15 0.79468554 472 hunch net-2012-08-27-NYAS ML 2012 and ICML 2013

16 0.78817034 443 hunch net-2011-09-03-Fall Machine Learning Events

17 0.78299326 48 hunch net-2005-03-29-Academic Mechanism Design

18 0.77974129 484 hunch net-2013-06-16-Representative Reviewing

19 0.77445751 428 hunch net-2011-03-27-Vowpal Wabbit, v5.1

20 0.7663554 116 hunch net-2005-09-30-Research in conferences