hunch_net hunch_net-2008 hunch_net-2008-315 knowledge-graph by maker-knowledge-mining

315 hunch net-2008-09-03-Bidding Problems


meta info for this blog

Source: html

Introduction: One way that many conferences in machine learning assign reviewers to papers is via bidding, which has steps something like: invite people to review; accept papers; reviewers look at title and abstract and state the papers they are interested in reviewing. Some massaging happens, but reviewers often get approximately the papers they bid for. At the ICML business meeting, Andrew McCallum suggested getting rid of bidding for papers. A couple of reasons were given. Privacy: the title and abstract of the entire set of papers is visible to every participating reviewer. Some authors might be uncomfortable about this for submitted papers. I’m not sympathetic to this reason: the point of submitting a paper to review is to publish it, so the value (if any) of not publishing a part of it a little bit earlier seems limited. Cliques: a bidding system is gameable. If you have 3 buddies and you inform each other of your submissions, you can each bid for your friend’s papers and express a disinterest in others. …
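
The bidding workflow above is, at heart, an assignment problem. As a rough illustration only (a hypothetical sketch, not the process any particular conference uses), here is how a greedy bid-based assignment could look in Python; the 3-reviewers-per-paper quota, the reviewer load cap, and all names are assumptions for the example.

# Hypothetical sketch of greedy bid-based reviewer assignment.
# Assumptions (not from the post): each paper needs `per_paper` reviewers,
# each reviewer handles at most `load` papers, and bidders are preferred
# over randomly drafted reviewers when filling remaining slots.
import random

def assign_reviewers(papers, reviewers, bids, per_paper=3, load=6):
    """bids maps reviewer id -> set of paper ids that reviewer bid on."""
    assigned = {p: [] for p in papers}
    counts = {r: 0 for r in reviewers}

    for p in papers:
        # First pass: reviewers who bid on this paper and still have capacity.
        interested = [r for r in reviewers
                      if p in bids.get(r, set()) and counts[r] < load]
        random.shuffle(interested)
        for r in interested[:per_paper]:
            assigned[p].append(r)
            counts[r] += 1
        # Second pass ("some massaging happens"): fill any remaining slots.
        backups = [r for r in reviewers
                   if r not in assigned[p] and counts[r] < load]
        random.shuffle(backups)
        while len(assigned[p]) < per_paper and backups:
            r = backups.pop()
            assigned[p].append(r)
            counts[r] += 1
    return assigned

# Toy usage with made-up ids.
papers = ["p1", "p2"]
reviewers = ["r1", "r2", "r3", "r4"]
bids = {"r1": {"p1"}, "r2": {"p1"}, "r3": {"p2"}}
print(assign_reviewers(papers, reviewers, bids))

Under a scheme like this, the clique and torpedo problems discussed below amount to adversarial choices of the bids dictionary.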


Summary: the most important sentences generated by the tfidf model (a scoring sketch follows the sentence list)

sentIndex sentText sentNum sentScore

1 One way that many conferences in machine learning assign reviewers to papers is via bidding, which has steps something like: invite people to review; accept papers; reviewers look at title and abstract and state the papers they are interested in reviewing. [sent-1, score-1.346]

2 Some massaging happens, but reviewers often get approximately the papers they bid for. [sent-2, score-0.885]

3 At the ICML business meeting, Andrew McCallum suggested getting rid of bidding for papers. [sent-3, score-0.47]

4 A couple of reasons were given. Privacy: the title and abstract of the entire set of papers is visible to every participating reviewer. [sent-4, score-0.54]

5 I’m not sympathetic to this reason: the point of submitting a paper to review is to publish it, so the value (if any) of not publishing a part of it a little bit earlier seems limited. [sent-6, score-0.334]

6 If you have 3 buddies and you inform each other of your submissions, you can each bid for your friend’s papers and express a disinterest in others. [sent-8, score-0.497]

7 There are reasonable odds that at least two of your friends (out of 3 reviewers) will get your papers, and with 2 adamantly positive reviews, your paper has good odds of acceptance. [sent-9, score-0.41]

8 It’s important to recall that there are good aspects of a bidding system. [sent-13, score-0.481]

9 If reviewers are nonstrategic (like I am), they simply pick the papers that seem the most interesting. [sent-14, score-0.534]

10 Having reviewers review the papers that most interest them isn’t terrible—it means they pay close attention and generally write better reviews than a disinterested reviewer might. [sent-15, score-1.144]

11 In many situations, simply finding reviewers who are willing to do an attentive thorough review is challenging. [sent-16, score-0.51]

12 However, since ICML I’ve come to believe there is a more serious flaw than any of the above: torpedo reviewing. [sent-17, score-0.387]

13 If a research direction is controversial in the sense that just 2-or-3 out of hundreds of reviewers object to it, those 2 or 3 people can bid for the paper, give it terrible reviews, and prevent publication. [sent-18, score-0.901]

14 A basic question is: “Does torpedo reviewing actually happen?” [sent-20, score-0.387]

15 As an author, I’ve seen several reviews poor enough that a torpedo reviewer is a plausible explanation. [sent-22, score-0.682]

16 In talking to other people, I know that some folks do a lesser form: they intentionally bid for papers that they want to reject on the theory that rejections are less work than possible acceptances. [sent-23, score-0.77]

17 Even without more substantial evidence (it is hard to gather, after all), it’s clear that the potential for torpedo reviewing is real in a bidding system, and if done well by the reviewers, perhaps even undetectable. [sent-24, score-0.937]

18 The fundamental issue is: “How do you choose who reviews a paper?” [sent-25, score-0.283]

19 We’ve discussed bidding above, but other approaches have their own advantages and drawbacks. [sent-26, score-0.411]

20 The simplest approach I have right now is “choose diversely”: perhaps a reviewer from bidding, a reviewer from assignment by a PC/SPC/area chair, and another reviewer from assignment by a different PC/SPC/area chair. [sent-27, score-0.878]
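
The sentence scores above come from a tfidf model, per the section header. The exact pipeline behind this page is not documented, so the following is only a minimal sketch of one plausible scoring rule (sum of a sentence's tfidf weights) using scikit-learn; the toy sentences and the scoring rule itself are assumptions.

# Minimal sketch of tfidf-based sentence scoring; an illustration of the
# general idea, not the exact pipeline that generated the scores above.
# Assumes scikit-learn is installed.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

sentences = [
    "One way that many conferences assign reviewers to papers is via bidding.",
    "Some massaging happens, but reviewers often get the papers they bid for.",
    "Torpedo reviewing lets a few reviewers bid for a paper and kill it.",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(sentences)             # one tfidf row per sentence

# Assumed scoring rule: a sentence's score is the sum of its tfidf weights.
scores = np.asarray(X.sum(axis=1)).ravel()
for rank, (sent, score) in enumerate(
        sorted(zip(sentences, scores), key=lambda t: -t[1]), start=1):
    print(f"{rank} {sent} [score={score:.3f}]")

Under a sum rule like this, longer sentences with many distinctive words tend to score higher.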


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('bidding', 0.411), ('reviewers', 0.318), ('torpedo', 0.281), ('bid', 0.281), ('papers', 0.216), ('reviews', 0.208), ('reviewer', 0.193), ('review', 0.133), ('terrible', 0.117), ('odds', 0.114), ('assignment', 0.114), ('friends', 0.11), ('reviewing', 0.106), ('title', 0.098), ('abstract', 0.09), ('chair', 0.09), ('disinterested', 0.076), ('retarding', 0.076), ('indefinitely', 0.076), ('kill', 0.076), ('issue', 0.075), ('paper', 0.072), ('perhaps', 0.071), ('recall', 0.07), ('mccallum', 0.07), ('intentionally', 0.07), ('lesser', 0.07), ('visible', 0.07), ('folks', 0.07), ('massaging', 0.07), ('sympathetic', 0.07), ('community', 0.069), ('ve', 0.069), ('evidence', 0.068), ('succeeds', 0.066), ('gather', 0.066), ('participating', 0.066), ('rejections', 0.063), ('repeated', 0.063), ('hundreds', 0.063), ('uncomfortable', 0.061), ('controversial', 0.061), ('prevent', 0.061), ('thorough', 0.059), ('publish', 0.059), ('anecdotal', 0.059), ('suggested', 0.059), ('friend', 0.059), ('assign', 0.059), ('net', 0.059)]
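
The word weights above and the similarity values below are tfidf artifacts. As a hedged sketch (assuming a standard scikit-learn pipeline, not necessarily the one that generated this page), here is how top-weighted words and cosine-similarity rankings of this kind could be computed; the toy corpus and blog ids are made up.

# Sketch: top tfidf words for one post, plus tfidf cosine similarity to other posts.
# Assumes scikit-learn >= 1.0 (for get_feature_names_out); corpus is a toy stand-in.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

posts = {
    "315 Bidding Problems": "bidding reviewers papers bid torpedo reviewing reviews",
    "484 Representative Reviewing": "reviewing papers reviewers conference principles",
    "425 Yahoo ML grant": "grant machine learning funding students deadline",
}
ids = list(posts)
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(posts.values())

# Top-weighted words for the first post (analogous to the topN-words list above).
weights = X[0].toarray().ravel()
vocab = vectorizer.get_feature_names_out()
top_words = sorted(zip(vocab, weights), key=lambda t: -t[1])[:5]
print([(w, round(v, 3)) for w, v in top_words])

# Cosine similarity of the first post to every post (analogous to simValue below).
sims = cosine_similarity(X[0], X).ravel()
for blog_id, s in sorted(zip(ids, sims), key=lambda t: -t[1]):
    print(f"{s:.3f}  {blog_id}")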

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0 315 hunch net-2008-09-03-Bidding Problems


2 0.35442632 484 hunch net-2013-06-16-Representative Reviewing

Introduction: When thinking about how best to review papers, it seems helpful to have some conception of what good reviewing is. As far as I can tell, this is almost always only discussed in the specific context of a paper (i.e. your rejected paper), or at most an area (i.e. what a “good paper” looks like for that area) rather than general principles. Neither individual papers nor areas are sufficiently general for a large conference—every paper differs in the details, and what if you want to build a new area and/or cross areas? An unavoidable reason for reviewing is that the community of research is too large. In particular, it is not possible for a researcher to read every paper which someone thinks might be of interest. This reason for reviewing exists independent of constraints on rooms or scheduling formats of individual conferences. Indeed, history suggests that physical constraints are relatively meaningless over the long term — growing conferences simply use more rooms and/or change fo

3 0.32725072 343 hunch net-2009-02-18-Decision by Vetocracy

Introduction: Few would mistake the process of academic paper review for a fair process, but sometimes the unfairness seems particularly striking. This is most easily seen by comparison: Paper Banditron Offset Tree Notes Problem Scope Multiclass problems where only the loss of one choice can be probed. Strictly greater: Cost sensitive multiclass problems where only the loss of one choice can be probed. Often generalizations don’t matter. That’s not the case here, since every plausible application I’ve thought of involves loss functions substantially different from 0/1. What’s new Analysis and Experiments Algorithm, Analysis, and Experiments As far as I know, the essence of the more general problem was first stated and analyzed with the EXP4 algorithm (page 16) (1998). It’s also the time horizon 1 simplification of the Reinforcement Learning setting for the random trajectory method (page 15) (2002). The Banditron algorithm itself is functionally identi

4 0.26844916 463 hunch net-2012-05-02-ICML: Behind the Scenes

Introduction: This is a rather long post, detailing the ICML 2012 review process. The goal is to make the process more transparent, help authors understand how we came to a decision, and discuss the strengths and weaknesses of this process for future conference organizers. Microsoft’s Conference Management Toolkit (CMT) We chose to use CMT over other conference management software mainly because of its rich toolkit. The interface is sub-optimal (to say the least!) but it has extensive capabilities (to handle bids, author response, resubmissions, etc.), good import/export mechanisms (to process the data elsewhere), excellent technical support (to answer late night emails, add new functionalities). Overall, it was the right choice, although we hope a designer will look at that interface sometime soon! Toronto Matching System (TMS) TMS is now being used by many major conferences in our field (including NIPS and UAI). It is an automated system (developed by Laurent Charlin and Rich Ze

5 0.24722043 461 hunch net-2012-04-09-ICML author feedback is open

Introduction: as of last night, late. When the reviewing deadline passed Wednesday night 15% of reviews were still missing, much higher than I expected. Between late reviews coming in, ACs working overtime through the weekend, and people willing to help in the pinch another ~390 reviews came in, reducing the missing mass to 0.2%. Nailing that last bit and a similar quantity of papers with uniformly low confidence reviews is what remains to be done in terms of basic reviews. We are trying to make all of those happen this week so authors have some chance to respond. I was surprised by the quantity of late reviews, and I think that’s an area where ICML needs to improve in future years. Good reviews are not done in a rush—they are done by setting aside time (like an afternoon), and carefully reading the paper while thinking about implications. Many reviewers do this well but a significant minority aren’t good at scheduling their personal time. In this situation there are several ways to fail:

6 0.24645315 40 hunch net-2005-03-13-Avoiding Bad Reviewing

7 0.23742361 320 hunch net-2008-10-14-Who is Responsible for a Bad Review?

8 0.21301629 207 hunch net-2006-09-12-Incentive Compatible Reviewing

9 0.20308201 395 hunch net-2010-04-26-Compassionate Reviewing

10 0.19883481 38 hunch net-2005-03-09-Bad Reviewing

11 0.1952371 304 hunch net-2008-06-27-Reviewing Horror Stories

12 0.18990414 318 hunch net-2008-09-26-The SODA Program Committee

13 0.18885912 437 hunch net-2011-07-10-ICML 2011 and the future

14 0.17963935 454 hunch net-2012-01-30-ICML Posters and Scope

15 0.17316076 333 hunch net-2008-12-27-Adversarial Academia

16 0.17112726 98 hunch net-2005-07-27-Not goal metrics

17 0.16911963 116 hunch net-2005-09-30-Research in conferences

18 0.16517678 485 hunch net-2013-06-29-The Benefits of Double-Blind Review

19 0.16174006 453 hunch net-2012-01-28-Why COLT?

20 0.15999943 221 hunch net-2006-12-04-Structural Problems in NIPS Decision Making


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.256), (1, -0.237), (2, 0.302), (3, 0.157), (4, 0.036), (5, 0.1), (6, -0.023), (7, 0.021), (8, -0.002), (9, -0.029), (10, 0.012), (11, -0.035), (12, 0.07), (13, -0.038), (14, -0.099), (15, 0.025), (16, -0.007), (17, 0.015), (18, 0.021), (19, -0.042), (20, -0.011), (21, -0.012), (22, -0.027), (23, -0.017), (24, -0.026), (25, 0.023), (26, 0.024), (27, -0.02), (28, 0.059), (29, 0.01), (30, 0.067), (31, 0.008), (32, 0.02), (33, 0.04), (34, -0.034), (35, -0.007), (36, -0.02), (37, -0.006), (38, -0.011), (39, -0.029), (40, -0.009), (41, -0.002), (42, 0.007), (43, -0.033), (44, 0.0), (45, -0.028), (46, -0.023), (47, -0.041), (48, 0.032), (49, -0.042)]
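
The (topicId, topicWeight) pairs above come from an LSI model. A common recipe for such weights is a truncated SVD of the tfidf matrix; the sketch below shows that recipe with scikit-learn. The corpus, the number of components, and the preprocessing are assumptions, not a description of how this page was actually built.

# Sketch of LSI-style topic weights: truncated SVD over a tfidf matrix.
# Assumes scikit-learn is installed; corpus and n_components are toy assumptions.
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "bidding reviewers papers bid reviews torpedo reviewing",
    "conference program committee papers decisions",
    "machine learning theory algorithms bounds",
    "reviews reviewers bad reviewing incentives",
]
X = TfidfVectorizer().fit_transform(docs)

lsi = TruncatedSVD(n_components=3, random_state=0)
doc_topics = lsi.fit_transform(X)   # one row of LSI topic weights per document

# Weights for the first document, analogous to the (topicId, topicWeight) list above.
print([(i, round(w, 3)) for i, w in enumerate(doc_topics[0])])

Document-to-document similarity in the reduced space (the simValue column below) would then typically be a cosine similarity between rows of doc_topics.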

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.97430265 315 hunch net-2008-09-03-Bidding Problems


2 0.94944835 484 hunch net-2013-06-16-Representative Reviewing

Introduction: When thinking about how best to review papers, it seems helpful to have some conception of what good reviewing is. As far as I can tell, this is almost always only discussed in the specific context of a paper (i.e. your rejected paper), or at most an area (i.e. what a “good paper” looks like for that area) rather than general principles. Neither individual papers nor areas are sufficiently general for a large conference—every paper differs in the details, and what if you want to build a new area and/or cross areas? An unavoidable reason for reviewing is that the community of research is too large. In particular, it is not possible for a researcher to read every paper which someone thinks might be of interest. This reason for reviewing exists independent of constraints on rooms or scheduling formats of individual conferences. Indeed, history suggests that physical constraints are relatively meaningless over the long term — growing conferences simply use more rooms and/or change fo

3 0.90220594 461 hunch net-2012-04-09-ICML author feedback is open

Introduction: as of last night, late. When the reviewing deadline passed Wednesday night 15% of reviews were still missing, much higher than I expected. Between late reviews coming in, ACs working overtime through the weekend, and people willing to help in the pinch another ~390 reviews came in, reducing the missing mass to 0.2%. Nailing that last bit and a similar quantity of papers with uniformly low confidence reviews is what remains to be done in terms of basic reviews. We are trying to make all of those happen this week so authors have some chance to respond. I was surprised by the quantity of late reviews, and I think that’s an area where ICML needs to improve in future years. Good reviews are not done in a rush—they are done by setting aside time (like an afternoon), and carefully reading the paper while thinking about implications. Many reviewers do this well but a significant minority aren’t good at scheduling their personal time. In this situation there are several ways to fail:

4 0.89306945 320 hunch net-2008-10-14-Who is Responsible for a Bad Review?

Introduction: Although I’m greatly interested in machine learning, I think it must be admitted that there is a large amount of low quality logic being used in reviews. The problem is bad enough that sometimes I wonder if the Byzantine generals limit has been exceeded. For example, I’ve seen recent reviews where the given reasons for rejecting are: [ NIPS ] Theorem A is uninteresting because Theorem B is uninteresting. [ UAI ] When you learn by memorization, the problem addressed is trivial. [NIPS] The proof is in the appendix. [NIPS] This has been done before. (… but not giving any relevant citations) Just for the record I want to point out what’s wrong with these reviews. A future world in which such reasons never come up again would be great, but I’m sure these errors will be committed many times more in the future. This is nonsense. A theorem should be evaluated based on its merits, rather than the merits of another theorem. Learning by memorization requires an expon

5 0.88580799 463 hunch net-2012-05-02-ICML: Behind the Scenes

Introduction: This is a rather long post, detailing the ICML 2012 review process. The goal is to make the process more transparent, help authors understand how we came to a decision, and discuss the strengths and weaknesses of this process for future conference organizers. Microsoft’s Conference Management Toolkit (CMT) We chose to use CMT over other conference management software mainly because of its rich toolkit. The interface is sub-optimal (to say the least!) but it has extensive capabilities (to handle bids, author response, resubmissions, etc.), good import/export mechanisms (to process the data elsewhere), excellent technical support (to answer late night emails, add new functionalities). Overall, it was the right choice, although we hope a designer will look at that interface sometime soon! Toronto Matching System (TMS) TMS is now being used by many major conferences in our field (including NIPS and UAI). It is an automated system (developed by Laurent Charlin and Rich Ze

6 0.85765672 207 hunch net-2006-09-12-Incentive Compatible Reviewing

7 0.85700148 40 hunch net-2005-03-13-Avoiding Bad Reviewing

8 0.85574454 38 hunch net-2005-03-09-Bad Reviewing

9 0.80947864 221 hunch net-2006-12-04-Structural Problems in NIPS Decision Making

10 0.80514878 485 hunch net-2013-06-29-The Benefits of Double-Blind Review

11 0.80052805 318 hunch net-2008-09-26-The SODA Program Committee

12 0.78066564 343 hunch net-2009-02-18-Decision by Vetocracy

13 0.77682418 304 hunch net-2008-06-27-Reviewing Horror Stories

14 0.7585535 395 hunch net-2010-04-26-Compassionate Reviewing

15 0.72570384 363 hunch net-2009-07-09-The Machine Learning Forum

16 0.72160411 52 hunch net-2005-04-04-Grounds for Rejection

17 0.70960826 116 hunch net-2005-09-30-Research in conferences

18 0.70305032 437 hunch net-2011-07-10-ICML 2011 and the future

19 0.70198464 333 hunch net-2008-12-27-Adversarial Academia

20 0.67864799 238 hunch net-2007-04-13-What to do with an unreasonable conditional accept


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(3, 0.041), (27, 0.211), (36, 0.283), (38, 0.027), (53, 0.016), (55, 0.135), (89, 0.022), (92, 0.016), (94, 0.124), (95, 0.044)]
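
The topic weights in this final section come from an LDA model. The sketch below shows one conventional way to get a per-document topic mixture with scikit-learn's LatentDirichletAllocation over raw word counts; as before, the corpus, topic count, and preprocessing are assumptions rather than the actual pipeline behind this page.

# Sketch of LDA topic weights: LatentDirichletAllocation over a count matrix.
# Assumes scikit-learn is installed; corpus and n_components are toy assumptions.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "bidding reviewers papers bid reviews torpedo reviewing",
    "conference program committee papers decisions",
    "machine learning theory algorithms bounds",
    "reviews reviewers bad reviewing incentives",
]
counts = CountVectorizer().fit_transform(docs)

lda = LatentDirichletAllocation(n_components=3, random_state=0)
doc_topics = lda.fit_transform(counts)  # each row is a topic mixture summing to 1

# Mixture for the first document, analogous to the (topicId, topicWeight) list above.
print([(i, round(w, 3)) for i, w in enumerate(doc_topics[0])])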

similar blogs list:

simIndex simValue blogId blogTitle

1 0.90356982 425 hunch net-2011-02-25-Yahoo! Machine Learning grant due March 11

Introduction: Yahoo!’s Key Scientific Challenges for Machine Learning grant applications are due March 11. If you are a student working on relevant research, please consider applying. It’s for $5K of unrestricted funding.

same-blog 2 0.86713076 315 hunch net-2008-09-03-Bidding Problems


3 0.70626968 463 hunch net-2012-05-02-ICML: Behind the Scenes

Introduction: This is a rather long post, detailing the ICML 2012 review process. The goal is to make the process more transparent, help authors understand how we came to a decision, and discuss the strengths and weaknesses of this process for future conference organizers. Microsoft’s Conference Management Toolkit (CMT) We chose to use CMT over other conference management software mainly because of its rich toolkit. The interface is sub-optimal (to say the least!) but it has extensive capabilities (to handle bids, author response, resubmissions, etc.), good import/export mechanisms (to process the data elsewhere), excellent technical support (to answer late night emails, add new functionalities). Overall, it was the right choice, although we hope a designer will look at that interface sometime soon! Toronto Matching System (TMS) TMS is now being used by many major conferences in our field (including NIPS and UAI). It is an automated system (developed by Laurent Charlin and Rich Ze

4 0.68407154 40 hunch net-2005-03-13-Avoiding Bad Reviewing

Introduction: If we accept that bad reviewing often occurs and want to fix it, the question is “how”? Reviewing is done by paper writers just like yourself, so a good proxy for this question is asking “How can I be a better reviewer?” Here are a few things I’ve learned by trial (and error), as a paper writer, and as a reviewer. The secret ingredient is careful thought. There is no good substitution for a deep and careful understanding. Avoid reviewing papers that you feel competitive about. You almost certainly will be asked to review papers that feel competitive if you work on subjects of common interest. But, the feeling of competition can easily lead to bad judgement. If you feel biased for some other reason, then you should avoid reviewing. For example… Feeling angry or threatened by a paper is a form of bias. See above. Double blind yourself (avoid looking at the name even in a single-blind situation). The significant effect of a name you recognize is making you pay close a

5 0.68201011 379 hunch net-2009-11-23-ICML 2009 Workshops (and Tutorials)

Introduction: I’m the workshops chair for ICML this year. As such, I would like to personally encourage people to consider running a workshop. My general view of workshops is that they are excellent as opportunities to discuss and develop research directions—some of my best work has come from collaborations at workshops and several workshops have substantially altered my thinking about various problems. My experience running workshops is that setting them up and making them fly often appears much harder than it actually is, and the workshops often come off much better than expected in the end. Submissions are due January 18, two weeks before papers. Similarly, Ben Taskar is looking for good tutorials , which is complementary. Workshops are about exploring a subject, while a tutorial is about distilling it down into an easily taught essence, a vital part of the research process. Tutorials are due February 13, two weeks after papers.

6 0.67791861 423 hunch net-2011-02-02-User preferences for search engines

7 0.67765093 221 hunch net-2006-12-04-Structural Problems in NIPS Decision Making

8 0.67653435 343 hunch net-2009-02-18-Decision by Vetocracy

9 0.67554587 320 hunch net-2008-10-14-Who is Responsible for a Bad Review?

10 0.67517745 51 hunch net-2005-04-01-The Producer-Consumer Model of Research

11 0.67459399 437 hunch net-2011-07-10-ICML 2011 and the future

12 0.67450488 132 hunch net-2005-11-26-The Design of an Optimal Research Environment

13 0.67131555 96 hunch net-2005-07-21-Six Months

14 0.67121977 286 hunch net-2008-01-25-Turing’s Club for Machine Learning

15 0.67030001 95 hunch net-2005-07-14-What Learning Theory might do

16 0.66900021 484 hunch net-2013-06-16-Representative Reviewing

17 0.66774273 371 hunch net-2009-09-21-Netflix finishes (and starts)

18 0.66649902 424 hunch net-2011-02-17-What does Watson mean?

19 0.66635525 419 hunch net-2010-12-04-Vowpal Wabbit, version 5.0, and the second heresy

20 0.66620135 204 hunch net-2006-08-28-Learning Theory standards for NIPS 2006