hunch_net-2012-461 knowledge-graph by maker-knowledge-mining

461 hunch net-2012-04-09-ICML author feedback is open


meta info for this blog

Source: html

Introduction: ICML author feedback is open as of last night, late. When the reviewing deadline passed Wednesday night 15% of reviews were still missing, much higher than I expected. Between late reviews coming in, ACs working overtime through the weekend, and people willing to help in the pinch another ~390 reviews came in, reducing the missing mass to 0.2%. Nailing that last bit and a similar quantity of papers with uniformly low confidence reviews is what remains to be done in terms of basic reviews. We are trying to make all of those happen this week so authors have some chance to respond. I was surprised by the quantity of late reviews, and I think that’s an area where ICML needs to improve in future years. Good reviews are not done in a rush—they are done by setting aside time (like an afternoon), and carefully reading the paper while thinking about implications. Many reviewers do this well but a significant minority aren’t good at scheduling their personal time. In this situation there are several ways to fail:


Summary: the most important sentences generated by the tfidf model (a sketch of one plausible scoring scheme follows the list)

sentIndex sentText sentNum sentScore

1 When the reviewing deadline passed Wednesday night 15% of reviews were still missing, much higher than I expected. [sent-2, score-0.599]

2 Between late reviews coming in, ACs working overtime through the weekend, and people willing to help in the pinch another ~390 reviews came in, reducing the missing mass to 0.2%. [sent-3, score-0.902]

3 Nailing that last bit and a similar quantity of papers with uniformly low confidence reviews is what remains to be done in terms of basic reviews. [sent-5, score-0.649]

4 We are trying to make all of those happen this week so authors have some chance to respond. [sent-6, score-0.269]

5 Good reviews are not done in a rush—they are done by setting aside time (like an afternoon), and carefully reading the paper while thinking about implications. [sent-8, score-0.613]

6 Many reviewers do this well but a significant minority aren’t good at scheduling their personal time. [sent-9, score-0.428]

7 The worst failure mode by far is the last one for Program Chairs and Area Chairs, because they must catch and fix all the failures at the last minute. [sent-13, score-0.507]

8 I expect the second failure mode also impacts the quality of reviews because high speed reviewing of a deep paper often doesn’t work. [sent-14, score-0.922]

9 To do this, we’re going to pass a flake list for failure mode 3 to future program chairs who will hopefully further encourage people to schedule time well and review carefully. [sent-16, score-0.592]

10 If my experience is any guide, plenty of authors will feel disappointed by the reviews. [sent-17, score-0.27]

11 And part of it may be that the authors simply are far more expert in their subject than reviewers. [sent-21, score-0.333]

12 In author responses, my personal tendency is to be blunter than most people when reviewers make errors. [sent-22, score-0.455]

13 You should be sympathetic to reviewers who have voluntarily put significant time into reviewing your paper, but you should also use the channel to communicate real information. [sent-24, score-0.681]

14 Remotivating your paper almost never works, so concentrate on getting across errors in understanding by reviewers or answer their direct questions. [sent-25, score-0.353]

15 We did not include reviewer scores in author feedback, although we do plan to include them when the decision is made. [sent-26, score-0.54]

16 Scores should not be regarded as final by any party, since author feedback and discussion can significantly alter a reviewer’s understanding of the paper. [sent-27, score-0.538]

17 Encouraging reviewers to incorporate this additional information well before settling on a final score is one of my goals. [sent-28, score-0.274]

18 We did allow resubmission of the paper with the author response, similar to what Geoff Gordon did as program chair for AIStat. [sent-29, score-0.449]

19 This solves two problems: It helps authors create a more polished draft, and it avoids forcing an overly constrained channel in the communication. [sent-30, score-0.499]

20 If an equation has a bug, you can write it out bug free in mathematical notation rather than trying to describe by reference how to alter the equation in author response. [sent-31, score-0.779]
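How the tfidf model turns term weights into the sentence scores above is not spelled out in this dump. Below is a minimal sketch of tfidf-based sentence ranking, assuming (hypothetically) that a sentence's score is its total tfidf weight divided by its number of distinct terms; the sentences are drawn from the post, but the scoring rule and scale are illustrative, not the pipeline's actual code.

```python
# Hypothetical sketch of tfidf sentence scoring; not the pipeline's actual code.
from sklearn.feature_extraction.text import TfidfVectorizer

def rank_sentences(sentences, top_k=5):
    """Rank sentences by tfidf mass; returns (index, score, text) tuples."""
    vectorizer = TfidfVectorizer(stop_words="english")
    tfidf = vectorizer.fit_transform(sentences)  # one row per sentence
    # Assumed score: total tfidf weight over the sentence's distinct terms.
    scores = tfidf.sum(axis=1).A1 / (tfidf.getnnz(axis=1) + 1)
    order = sorted(range(len(sentences)), key=lambda i: -scores[i])
    return [(i, float(scores[i]), sentences[i]) for i in order[:top_k]]

sentences = [
    "When the reviewing deadline passed Wednesday night 15% of reviews were still missing.",
    "We are trying to make all of those happen this week so authors have some chance to respond.",
    "Good reviews are not done in a rush.",
]
for idx, score, text in rank_sentences(sentences, top_k=2):
    print(f"[sent-{idx}, score-{score:.3f}] {text}")
```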


similar blogs computed by the tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('reviews', 0.353), ('warning', 0.211), ('authors', 0.2), ('reviewers', 0.187), ('author', 0.18), ('mode', 0.167), ('channel', 0.149), ('equation', 0.141), ('chairs', 0.138), ('night', 0.134), ('part', 0.133), ('alter', 0.124), ('bug', 0.124), ('scores', 0.124), ('failure', 0.124), ('reviewing', 0.112), ('last', 0.108), ('late', 0.106), ('quantity', 0.106), ('give', 0.104), ('paper', 0.096), ('program', 0.093), ('missing', 0.09), ('personal', 0.088), ('final', 0.087), ('done', 0.082), ('reviewer', 0.082), ('adjusted', 0.08), ('wednesday', 0.08), ('norms', 0.08), ('afternoon', 0.08), ('resubmission', 0.08), ('voluntarily', 0.08), ('polished', 0.08), ('weekend', 0.08), ('significant', 0.079), ('include', 0.077), ('regarded', 0.074), ('sympathetic', 0.074), ('minority', 0.074), ('feedback', 0.073), ('concentrate', 0.07), ('forcing', 0.07), ('gordon', 0.07), ('finish', 0.07), ('schedule', 0.07), ('disappointed', 0.07), ('impacts', 0.07), ('responses', 0.07), ('trying', 0.069)]
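A (wordName, wordTfidf) list like the one above can be reproduced by fitting a tfidf vectorizer over the full blog corpus and sorting one blog's weights. This is a sketch under that assumption; the corpus strings are placeholders, and the real pipeline's preprocessing and normalization are unknown.

```python
# Sketch: top-N tfidf words for one document; corpus strings are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = [
    "reviews authors reviewers author feedback channel equation chairs",  # blog 461
    "reviewing papers conference area chairs constraints rooms",          # blog 484
    "bad review theorem memorization appendix citations",                 # blog 320
]
vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(corpus)

doc = tfidf[0].toarray().ravel()            # tfidf vector for this blog
vocab = vectorizer.get_feature_names_out()  # column index -> word
top = sorted(zip(vocab, doc), key=lambda p: -p[1])
print([(w, round(float(v), 3)) for w, v in top if v > 0][:50])
```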

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.9999997 461 hunch net-2012-04-09-ICML author feedback is open


2 0.36941931 484 hunch net-2013-06-16-Representative Reviewing

Introduction: When thinking about how best to review papers, it seems helpful to have some conception of what good reviewing is. As far as I can tell, this is almost always only discussed in the specific context of a paper (i.e. your rejected paper), or at most an area (i.e. what a “good paper” looks like for that area) rather than general principles. Neither individual papers or areas are sufficiently general for a large conference—every paper differs in the details, and what if you want to build a new area and/or cross areas? An unavoidable reason for reviewing is that the community of research is too large. In particular, it is not possible for a researcher to read every paper which someone thinks might be of interest. This reason for reviewing exists independent of constraints on rooms or scheduling formats of individual conferences. Indeed, history suggests that physical constraints are relatively meaningless over the long term — growing conferences simply use more rooms and/or change fo

3 0.29013142 320 hunch net-2008-10-14-Who is Responsible for a Bad Review?

Introduction: Although I’m greatly interested in machine learning, I think it must be admitted that there is a large amount of low quality logic being used in reviews. The problem is bad enough that sometimes I wonder if the Byzantine generals limit has been exceeded. For example, I’ve seen recent reviews where the given reasons for rejecting are: [ NIPS ] Theorem A is uninteresting because Theorem B is uninteresting. [ UAI ] When you learn by memorization, the problem addressed is trivial. [NIPS] The proof is in the appendix. [NIPS] This has been done before. (… but not giving any relevant citations) Just for the record I want to point out what’s wrong with these reviews. A future world in which such reasons never come up again would be great, but I’m sure these errors will be committed many times more in the future. This is nonsense. A theorem should be evaluated based on it’s merits, rather than the merits of another theorem. Learning by memorization requires an expon

4 0.25109765 40 hunch net-2005-03-13-Avoiding Bad Reviewing

Introduction: If we accept that bad reviewing often occurs and want to fix it, the question is “how”? Reviewing is done by paper writers just like yourself, so a good proxy for this question is asking “How can I be a better reviewer?” Here are a few things I’ve learned by trial (and error), as a paper writer, and as a reviewer. The secret ingredient is careful thought. There is no good substitution for a deep and careful understanding. Avoid reviewing papers that you feel competitive about. You almost certainly will be asked to review papers that feel competitive if you work on subjects of common interest. But, the feeling of competition can easily lead to bad judgement. If you feel biased for some other reason, then you should avoid reviewing. For example… Feeling angry or threatened by a paper is a form of bias. See above. Double blind yourself (avoid looking at the name even in a single-blind situation). The significant effect of a name you recognize is making you pay close a

5 0.24722043 315 hunch net-2008-09-03-Bidding Problems

Introduction: One way that many conferences in machine learning assign reviewers to papers is via bidding, which has steps something like: Invite people to review Accept papers Reviewers look at title and abstract and state the papers they are interested in reviewing. Some massaging happens, but reviewers often get approximately the papers they bid for. At the ICML business meeting, Andrew McCallum suggested getting rid of bidding for papers. A couple reasons were given: Privacy The title and abstract of the entire set of papers is visible to every participating reviewer. Some authors might be uncomfortable about this for submitted papers. I’m not sympathetic to this reason: the point of submitting a paper to review is to publish it, so the value (if any) of not publishing a part of it a little bit earlier seems limited. Cliques A bidding system is gameable. If you have 3 buddies and you inform each other of your submissions, you can each bid for your friend’s papers a

6 0.23712303 437 hunch net-2011-07-10-ICML 2011 and the future

7 0.22720419 207 hunch net-2006-09-12-Incentive Compatible Reviewing

8 0.20818952 65 hunch net-2005-05-02-Reviewing techniques for conferences

9 0.20383964 453 hunch net-2012-01-28-Why COLT?

10 0.20183015 343 hunch net-2009-02-18-Decision by Vetocracy

11 0.19703211 468 hunch net-2012-06-29-ICML survey and comments

12 0.19337027 463 hunch net-2012-05-02-ICML: Behind the Scenes

13 0.18220192 318 hunch net-2008-09-26-The SODA Program Committee

14 0.18215625 454 hunch net-2012-01-30-ICML Posters and Scope

15 0.17766879 116 hunch net-2005-09-30-Research in conferences

16 0.16602641 304 hunch net-2008-06-27-Reviewing Horror Stories

17 0.16193335 395 hunch net-2010-04-26-Compassionate Reviewing

18 0.15414342 466 hunch net-2012-06-05-ICML acceptance statistics

19 0.14324453 38 hunch net-2005-03-09-Bad Reviewing

20 0.13940327 363 hunch net-2009-07-09-The Machine Learning Forum
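The (simIndex, simValue) ranking above is consistent with cosine similarity between tfidf vectors, which would also explain the same-blog entry scoring ~1.0. A sketch under that assumption, with placeholder corpus strings:

```python
# Sketch: rank blogs by cosine similarity of tfidf vectors (assumed measure).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "reviews authors reviewers author feedback channel equation",  # blog 461
    "reviewing papers conference area chairs constraints rooms",   # blog 484
    "bidding reviewers papers title abstract privacy cliques",     # blog 315
]
tfidf = TfidfVectorizer(stop_words="english").fit_transform(corpus)

sims = cosine_similarity(tfidf[0], tfidf).ravel()  # blog 461 against all blogs
for rank, j in enumerate(sims.argsort()[::-1], start=1):
    print(rank, round(float(sims[j]), 8), f"blog index {j}")
# The query blog ranks first with similarity ~1.0, matching the same-blog row.
```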


similar blogs computed by the lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.266), (1, -0.248), (2, 0.268), (3, 0.121), (4, 0.031), (5, 0.089), (6, -0.03), (7, 0.032), (8, -0.044), (9, -0.014), (10, -0.012), (11, -0.061), (12, 0.084), (13, -0.031), (14, -0.072), (15, 0.004), (16, 0.023), (17, 0.006), (18, -0.022), (19, 0.013), (20, 0.039), (21, 0.016), (22, 0.026), (23, -0.004), (24, -0.083), (25, 0.045), (26, 0.022), (27, -0.067), (28, 0.028), (29, 0.116), (30, 0.026), (31, -0.043), (32, 0.027), (33, 0.014), (34, -0.027), (35, 0.017), (36, 0.02), (37, -0.039), (38, -0.055), (39, -0.044), (40, 0.023), (41, -0.054), (42, -0.054), (43, -0.014), (44, 0.005), (45, 0.018), (46, -0.008), (47, 0.016), (48, 0.062), (49, -0.039)]
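The 50 (topicId, topicWeight) pairs above suggest a 50-dimensional latent semantic indexing of the corpus. Below is a minimal sketch using truncated SVD over tfidf features, scaled down to 2 components for a toy corpus; the actual LSI implementation and dimensionality are assumptions.

```python
# Sketch: LSI topic weights via truncated SVD; toy corpus, 2 topics instead of 50.
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = [
    "reviews authors reviewers author feedback channel equation",  # blog 461
    "reviewing papers conference area chairs constraints rooms",   # blog 484
    "bidding reviewers papers title abstract privacy cliques",     # blog 315
]
tfidf = TfidfVectorizer(stop_words="english").fit_transform(corpus)

lsi = TruncatedSVD(n_components=2, random_state=0)
doc_topics = lsi.fit_transform(tfidf)  # one row of topic weights per blog
print([(t, round(float(w), 3)) for t, w in enumerate(doc_topics[0])])
```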

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.98559242 461 hunch net-2012-04-09-ICML author feedback is open


2 0.91762608 484 hunch net-2013-06-16-Representative Reviewing


3 0.86665231 315 hunch net-2008-09-03-Bidding Problems


4 0.85208488 320 hunch net-2008-10-14-Who is Responsible for a Bad Review?


5 0.83906311 463 hunch net-2012-05-02-ICML: Behind the Scenes

Introduction: This is a rather long post, detailing the ICML 2012 review process. The goal is to make the process more transparent, help authors understand how we came to a decision, and discuss the strengths and weaknesses of this process for future conference organizers. Microsoft’s Conference Management Toolkit (CMT) We chose to use CMT over other conference management software mainly because of its rich toolkit. The interface is sub-optimal (to say the least!) but it has extensive capabilities (to handle bids, author response, resubmissions, etc.), good import/export mechanisms (to process the data elsewhere), excellent technical support (to answer late night emails, add new functionalities). Overall, it was the right choice, although we hope a designer will look at that interface sometime soon! Toronto Matching System (TMS) TMS is now being used by many major conferences in our field (including NIPS and UAI). It is an automated system (developed by Laurent Charlin and Rich Ze

6 0.83035821 207 hunch net-2006-09-12-Incentive Compatible Reviewing

7 0.81609571 40 hunch net-2005-03-13-Avoiding Bad Reviewing

8 0.76998991 38 hunch net-2005-03-09-Bad Reviewing

9 0.7661584 318 hunch net-2008-09-26-The SODA Program Committee

10 0.75600767 221 hunch net-2006-12-04-Structural Problems in NIPS Decision Making

11 0.7482062 304 hunch net-2008-06-27-Reviewing Horror Stories

12 0.73248309 437 hunch net-2011-07-10-ICML 2011 and the future

13 0.72969401 343 hunch net-2009-02-18-Decision by Vetocracy

14 0.72766405 485 hunch net-2013-06-29-The Benefits of Double-Blind Review

15 0.7241354 395 hunch net-2010-04-26-Compassionate Reviewing

16 0.68978691 468 hunch net-2012-06-29-ICML survey and comments

17 0.669864 116 hunch net-2005-09-30-Research in conferences

18 0.66979593 363 hunch net-2009-07-09-The Machine Learning Forum

19 0.65182912 238 hunch net-2007-04-13-What to do with an unreasonable conditional accept

20 0.64633083 333 hunch net-2008-12-27-Adversarial Academia


similar blogs computed by the lda model

lda for this blog:

topicId topicWeight

[(3, 0.078), (10, 0.024), (27, 0.158), (29, 0.011), (38, 0.021), (48, 0.027), (53, 0.084), (55, 0.121), (57, 0.208), (64, 0.015), (92, 0.031), (94, 0.102), (95, 0.047)]
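The lda weights above read as a per-document topic mixture. A sketch assuming raw term counts feed a standard LDA model; the topic count, priors, and preprocessing are assumptions, and the corpus strings are placeholders.

```python
# Sketch: per-blog LDA topic mixture; hyperparameters are assumptions.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

corpus = [
    "reviews authors reviewers author feedback channel equation",  # blog 461
    "reinforcement learning competition benchmark helicopter",     # blog 268
    "bad reviewing competitive papers bias double blind",          # blog 40
]
counts = CountVectorizer(stop_words="english").fit_transform(corpus)

lda = LatentDirichletAllocation(n_components=3, random_state=0)
doc_topics = lda.fit_transform(counts)  # rows sum to 1: topic mixture per blog
print([(t, round(float(w), 3)) for t, w in enumerate(doc_topics[0]) if w > 0.01])
```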

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.90202606 461 hunch net-2012-04-09-ICML author feedback is open


2 0.88698912 268 hunch net-2007-10-19-Second Annual Reinforcement Learning Competition

Introduction: The Second Annual Reinforcement Learning Competition is about to get started. The aim of the competition is to facilitate direct comparisons between various learning methods on important and realistic domains. This year’s event will feature well-known benchmark domains as well as more challenging problems of real-world complexity, such as helicopter control and robot soccer keepaway. The competition begins on November 1st, 2007 when training software is released. Results must be submitted by July 1st, 2008. The competition will culminate in an event at ICML-08 in Helsinki, Finland, at which the winners will be announced. For more information, visit the competition website.

3 0.73476714 40 hunch net-2005-03-13-Avoiding Bad Reviewing


4 0.73395598 437 hunch net-2011-07-10-ICML 2011 and the future

Introduction: Unfortunately, I ended up sick for much of this ICML. I did manage to catch one interesting paper: Richard Socher , Cliff Lin , Andrew Y. Ng , and Christopher D. Manning Parsing Natural Scenes and Natural Language with Recursive Neural Networks . I invited Richard to share his list of interesting papers, so hopefully we’ll hear from him soon. In the meantime, Paul and Hal have posted some lists. the future Joelle and I are program chairs for ICML 2012 in Edinburgh , which I previously enjoyed visiting in 2005 . This is a huge responsibility, that we hope to accomplish well. A part of this (perhaps the most fun part), is imagining how we can make ICML better. A key and critical constraint is choosing things that can be accomplished. So far we have: Colocation . The first thing we looked into was potential colocations. We quickly discovered that many other conferences precomitted their location. For the future, getting a colocation with ACL or SIGI

5 0.72985017 484 hunch net-2013-06-16-Representative Reviewing


6 0.7247445 423 hunch net-2011-02-02-User preferences for search engines

7 0.71617031 96 hunch net-2005-07-21-Six Months

8 0.71473467 141 hunch net-2005-12-17-Workshops as Franchise Conferences

9 0.71349382 116 hunch net-2005-09-30-Research in conferences

10 0.71149749 75 hunch net-2005-05-28-Running A Machine Learning Summer School

11 0.71062875 463 hunch net-2012-05-02-ICML: Behind the Scenes

12 0.70848912 207 hunch net-2006-09-12-Incentive Compatible Reviewing

13 0.70524555 286 hunch net-2008-01-25-Turing’s Club for Machine Learning

14 0.70350051 320 hunch net-2008-10-14-Who is Responsible for a Bad Review?

15 0.70336223 343 hunch net-2009-02-18-Decision by Vetocracy

16 0.70282191 134 hunch net-2005-12-01-The Webscience Future

17 0.70114243 204 hunch net-2006-08-28-Learning Theory standards for NIPS 2006

18 0.70048362 297 hunch net-2008-04-22-Taking the next step

19 0.69990659 403 hunch net-2010-07-18-ICML & COLT 2010

20 0.69968927 452 hunch net-2012-01-04-Why ICML? and the summer conferences