hunch_net hunch_net-2013 hunch_net-2013-484 knowledge-graph by maker-knowledge-mining

484 hunch net-2013-06-16-Representative Reviewing


meta info for this blog

Source: html

Introduction: When thinking about how best to review papers, it seems helpful to have some conception of what good reviewing is. As far as I can tell, this is almost always only discussed in the specific context of a paper (i.e. your rejected paper), or at most an area (i.e. what a “good paper” looks like for that area) rather than general principles. Neither individual papers or areas are sufficiently general for a large conference—every paper differs in the details, and what if you want to build a new area and/or cross areas? An unavoidable reason for reviewing is that the community of research is too large. In particular, it is not possible for a researcher to read every paper which someone thinks might be of interest. This reason for reviewing exists independent of constraints on rooms or scheduling formats of individual conferences. Indeed, history suggests that physical constraints are relatively meaningless over the long term — growing conferences simply use more rooms and/or change fo


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 When thinking about how best to review papers, it seems helpful to have some conception of what good reviewing is. [sent-1, score-0.499]

2 When reviewer/paper assignment is automated based on an affinity graph, the affinity graph may be low quality or the constraint on the maximum number of papers per reviewer can easily leave some papers with low affinity to all reviewers orphaned. [sent-17, score-1.94]

3 I’ve seen this happen at the beginning of the reviewing process, but the more insidious case is when it happens at the end, where people are pressed for time and low quality judgements can become common. [sent-19, score-0.765]

4 For ICML, there are about 3 levels of “reviewer”: the program chair who is responsible for all papers, the area chair who is responsible for organizing reviewing on a subset of papers, and the program committee member/reviewer who has primary responsibility for reviewing. [sent-22, score-1.083]

5 We used a constraint system to assign the first reviewer to each paper and two area chairs to each paper. [sent-25, score-0.776]

6 Then, we asked each area chair to find one reviewer for each paper. [sent-26, score-0.583]

7 This does not happen in the standard review process, because the reviews of others are not visible to other reviewers until they are complete. [sent-34, score-0.652]

8 A more subtle form of bias is when one reviewer is simply much louder or charismatic than others. [sent-35, score-0.506]

9 A primary issue here is time: most reviewers will submit a review within a time constraint, but it may not be high quality due to limits on time. [sent-38, score-0.677]

10 Another significant issue in reviewer quality is motivation. [sent-43, score-0.524]

11 Making reviewers not anonymous to each other helps with motivation as poor reviews will at least be known to some. [sent-44, score-0.578]

12 A third form of low quality review is based on miscommunication. [sent-47, score-0.665]

13 Sometimes this comes in the form of giving each area chair a budget of papers to “champion”. [sent-52, score-0.554]

14 Sometimes this comes in the form of an area chair deciding to override all reviews and either accept or more likely reject a paper. [sent-53, score-0.657]

15 If the reviewers disagreed, we asked the two area chairs to make decisions and if they agreed, that was the decision. [sent-58, score-0.507]

16 Overall, I think it’s a significant long term positive for a conference as “insiders” naturally become more concerned with review quality and “outsiders” are more prone to submit. [sent-64, score-0.509]

17 This improves review quality by placing a check on unfair reviews and reducing miscommunication at some cost in time. [sent-68, score-0.705]

18 This allows authors to better communicate complex ideas, at the potential cost of reviewer time. [sent-70, score-0.506]

19 These approaches can be accommodated by simply hiding authors’ names for a fixed period of 2 months while the initial review process is ongoing. [sent-76, score-0.656]

20 If a paper is rejected in a representative reviewing process, then perhaps it is just not of sufficient interest. [sent-78, score-0.492]
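The sentScore values above appear to be tf-idf based importance scores for individual sentences of the post. As a rough illustration only, the sketch below ranks sentences by the summed tf-idf weight of their words using scikit-learn; the actual maker-knowledge-mining pipeline is not documented on this page, so the vectorizer settings and the scoring rule are assumptions.

```python
# Hypothetical sketch: rank the sentences of a post by summed tf-idf weight of their words.
# This is an assumption about how the sentNum/sentScore values above could be produced;
# the actual pipeline behind this page is not documented here.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def rank_sentences(sentences, top_k=20):
    """Return (index, sentence, score) triples, highest tf-idf score first."""
    vectorizer = TfidfVectorizer(stop_words="english")
    X = vectorizer.fit_transform(sentences)              # one row of tf-idf weights per sentence
    scores = np.asarray(X.sum(axis=1)).ravel()           # summed weight of the words in each sentence
    order = scores.argsort()[::-1][:top_k]
    return [(i, sentences[i], scores[i]) for i in order]

# Toy usage with sentences quoted from the post above.
post_sentences = [
    "When thinking about how best to review papers, it seems helpful to have some conception of what good reviewing is.",
    "An unavoidable reason for reviewing is that the community of research is too large.",
    "Another significant issue in reviewer quality is motivation.",
]
for idx, sent, score in rank_sentences(post_sentences, top_k=3):
    print(f"[sent-{idx}, score-{score:.3f}] {sent}")
```

Under this scoring rule, sentences dense in post-specific terms ("reviewer", "affinity", "area chair") accumulate weight, which is consistent with the high-scoring sentences listed above.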
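Sentences 2 and 5 above describe automated reviewer/paper assignment from an affinity graph under a cap on papers per reviewer. The post does not spell out the constraint system used for ICML, so the following is only a sketch of one standard formulation: maximum-affinity bipartite matching with reviewer capacities, solved with SciPy's Hungarian-algorithm solver. The affinity numbers and the capacity trick (duplicating reviewer columns) are illustrative assumptions, not the post's method.

```python
# Hypothetical sketch of affinity-based reviewer assignment with a per-reviewer paper cap.
# The post's actual constraint system is unspecified; this is one common formulation.
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_reviewers(affinity, max_papers_per_reviewer):
    """affinity: papers x reviewers array; returns {paper_index: reviewer_index}."""
    # Duplicate each reviewer column once per allowed paper so the capacity limit
    # becomes an ordinary one-to-one matching constraint.
    cost = -np.repeat(affinity, max_papers_per_reviewer, axis=1)
    papers, slots = linear_sum_assignment(cost)          # minimize negative affinity = maximize affinity
    return {int(p): int(s) // max_papers_per_reviewer for p, s in zip(papers, slots)}

# Toy example: 4 papers, 2 reviewers, at most 2 papers per reviewer.
affinity = np.array([[0.9, 0.1],
                     [0.2, 0.8],
                     [0.7, 0.3],
                     [0.1, 0.6]])
print(assign_reviewers(affinity, max_papers_per_reviewer=2))
```

The failure mode in sentence 2 is visible in this formulation: a paper with low affinity to every reviewer is still assigned to someone, but only because the matching forces it, not because anyone is a good fit.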


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('reviewing', 0.267), ('reviewer', 0.247), ('review', 0.232), ('reviewers', 0.226), ('quality', 0.219), ('affinity', 0.199), ('reviews', 0.194), ('area', 0.18), ('chair', 0.156), ('low', 0.145), ('authors', 0.131), ('paper', 0.126), ('constraint', 0.122), ('failure', 0.113), ('acs', 0.11), ('modes', 0.106), ('chairs', 0.101), ('dictatorship', 0.099), ('representative', 0.099), ('author', 0.099), ('papers', 0.091), ('process', 0.09), ('rooms', 0.088), ('motivation', 0.086), ('program', 0.085), ('reduces', 0.077), ('appendix', 0.077), ('responsible', 0.077), ('initial', 0.077), ('seen', 0.076), ('agreed', 0.074), ('bias', 0.072), ('helps', 0.072), ('assignments', 0.071), ('form', 0.069), ('person', 0.069), ('communicate', 0.068), ('constrained', 0.068), ('formats', 0.068), ('tried', 0.066), ('period', 0.066), ('minimizing', 0.066), ('objective', 0.066), ('cost', 0.06), ('simply', 0.06), ('beginning', 0.058), ('subtle', 0.058), ('significant', 0.058), ('comes', 0.058), ('graph', 0.057)]
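The wordName/wordTfidf pairs above are the post's top-weighted terms in a tf-idf vector space, and the simValue column in the lists that follow is presumably a similarity between such vectors (cosine similarity is the usual choice). A minimal sketch under those assumptions follows; the real corpus, tokenization, and similarity function are not given on this page.

```python
# Hypothetical sketch: per-post tf-idf term weights and pairwise similarity between posts.
# The corpus contents, preprocessing, and choice of cosine similarity are assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

posts = {
    "484 Representative Reviewing": "When thinking about how best to review papers ...",
    "320 Who is Responsible for a Bad Review?": "Although I'm greatly interested in machine learning ...",
    "40 Avoiding Bad Reviewing": "If we accept that bad reviewing often occurs ...",
}
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(posts.values())             # rows: posts, columns: vocabulary

# Top-weighted words for the first post (analogous to the wordName/wordTfidf list above).
vocab = vectorizer.get_feature_names_out()
row = X[0].toarray().ravel()
top = row.argsort()[::-1][:10]
print([(vocab[i], round(float(row[i]), 3)) for i in top])

# Pairwise similarities between posts (analogous to the simValue column in the lists below).
print(cosine_similarity(X))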

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.99999982 484 hunch net-2013-06-16-Representative Reviewing

Introduction: When thinking about how best to review papers, it seems helpful to have some conception of what good reviewing is. As far as I can tell, this is almost always only discussed in the specific context of a paper (i.e. your rejected paper), or at most an area (i.e. what a “good paper” looks like for that area) rather than general principles. Neither individual papers or areas are sufficiently general for a large conference—every paper differs in the details, and what if you want to build a new area and/or cross areas? An unavoidable reason for reviewing is that the community of research is too large. In particular, it is not possible for a researcher to read every paper which someone thinks might be of interest. This reason for reviewing exists independent of constraints on rooms or scheduling formats of individual conferences. Indeed, history suggests that physical constraints are relatively meaningless over the long term — growing conferences simply use more rooms and/or change fo

2 0.39773861 320 hunch net-2008-10-14-Who is Responsible for a Bad Review?

Introduction: Although I’m greatly interested in machine learning, I think it must be admitted that there is a large amount of low quality logic being used in reviews. The problem is bad enough that sometimes I wonder if the Byzantine generals limit has been exceeded. For example, I’ve seen recent reviews where the given reasons for rejecting are: [ NIPS ] Theorem A is uninteresting because Theorem B is uninteresting. [ UAI ] When you learn by memorization, the problem addressed is trivial. [NIPS] The proof is in the appendix. [NIPS] This has been done before. (… but not giving any relevant citations) Just for the record I want to point out what’s wrong with these reviews. A future world in which such reasons never come up again would be great, but I’m sure these errors will be committed many times more in the future. This is nonsense. A theorem should be evaluated based on it’s merits, rather than the merits of another theorem. Learning by memorization requires an expon

3 0.38006705 40 hunch net-2005-03-13-Avoiding Bad Reviewing

Introduction: If we accept that bad reviewing often occurs and want to fix it, the question is “how”? Reviewing is done by paper writers just like yourself, so a good proxy for this question is asking “How can I be a better reviewer?” Here are a few things I’ve learned by trial (and error), as a paper writer, and as a reviewer. The secret ingredient is careful thought. There is no good substitution for a deep and careful understanding. Avoid reviewing papers that you feel competitive about. You almost certainly will be asked to review papers that feel competitive if you work on subjects of common interest. But, the feeling of competition can easily lead to bad judgement. If you feel biased for some other reason, then you should avoid reviewing. For example… Feeling angry or threatened by a paper is a form of bias. See above. Double blind yourself (avoid looking at the name even in a single-blind situation). The significant effect of a name you recognize is making you pay close a

4 0.36941931 461 hunch net-2012-04-09-ICML author feedback is open

Introduction: as of last night, late. When the reviewing deadline passed Wednesday night 15% of reviews were still missing, much higher than I expected. Between late reviews coming in, ACs working overtime through the weekend, and people willing to help in the pinch another ~390 reviews came in, reducing the missing mass to 0.2%. Nailing that last bit and a similar quantity of papers with uniformly low confidence reviews is what remains to be done in terms of basic reviews. We are trying to make all of those happen this week so authors have some chance to respond. I was surprised by the quantity of late reviews, and I think that’s an area where ICML needs to improve in future years. Good reviews are not done in a rush—they are done by setting aside time (like an afternoon), and carefully reading the paper while thinking about implications. Many reviewers do this well but a significant minority aren’t good at scheduling their personal time. In this situation there are several ways to fail:

5 0.36079019 343 hunch net-2009-02-18-Decision by Vetocracy

Introduction: Few would mistake the process of academic paper review for a fair process, but sometimes the unfairness seems particularly striking. This is most easily seen by comparison: Paper Banditron Offset Tree Notes Problem Scope Multiclass problems where only the loss of one choice can be probed. Strictly greater: Cost sensitive multiclass problems where only the loss of one choice can be probed. Often generalizations don’t matter. That’s not the case here, since every plausible application I’ve thought of involves loss functions substantially different from 0/1. What’s new Analysis and Experiments Algorithm, Analysis, and Experiments As far as I know, the essence of the more general problem was first stated and analyzed with the EXP4 algorithm (page 16) (1998). It’s also the time horizon 1 simplification of the Reinforcement Learning setting for the random trajectory method (page 15) (2002). The Banditron algorithm itself is functionally identi

6 0.35442632 315 hunch net-2008-09-03-Bidding Problems

7 0.32791859 437 hunch net-2011-07-10-ICML 2011 and the future

8 0.32001153 395 hunch net-2010-04-26-Compassionate Reviewing

9 0.30475056 207 hunch net-2006-09-12-Incentive Compatible Reviewing

10 0.27610016 116 hunch net-2005-09-30-Research in conferences

11 0.27355349 304 hunch net-2008-06-27-Reviewing Horror Stories

12 0.26740739 38 hunch net-2005-03-09-Bad Reviewing

13 0.26622063 453 hunch net-2012-01-28-Why COLT?

14 0.26443759 318 hunch net-2008-09-26-The SODA Program Committee

15 0.25275147 454 hunch net-2012-01-30-ICML Posters and Scope

16 0.25115946 463 hunch net-2012-05-02-ICML: Behind the Scenes

17 0.2440666 485 hunch net-2013-06-29-The Benefits of Double-Blind Review

18 0.23857301 65 hunch net-2005-05-02-Reviewing techniques for conferences

19 0.23458549 468 hunch net-2012-06-29-ICML survey and comments

20 0.20335461 98 hunch net-2005-07-27-Not goal metrics


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.357), (1, -0.315), (2, 0.397), (3, 0.166), (4, 0.037), (5, 0.126), (6, -0.002), (7, 0.045), (8, -0.046), (9, -0.017), (10, -0.02), (11, -0.057), (12, 0.139), (13, -0.035), (14, -0.129), (15, -0.017), (16, 0.012), (17, 0.003), (18, -0.031), (19, -0.006), (20, -0.018), (21, 0.004), (22, 0.014), (23, -0.017), (24, -0.011), (25, 0.081), (26, 0.04), (27, -0.018), (28, 0.068), (29, 0.064), (30, 0.053), (31, -0.034), (32, 0.025), (33, 0.015), (34, -0.065), (35, 0.016), (36, -0.005), (37, -0.028), (38, -0.031), (39, -0.051), (40, 0.01), (41, -0.053), (42, 0.041), (43, -0.005), (44, 0.004), (45, -0.038), (46, 0.014), (47, 0.001), (48, -0.029), (49, -0.052)]
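The (topicId, topicWeight) pairs above are coordinates of the post in a latent semantic indexing space. LSI is typically a truncated SVD of the tf-idf term-document matrix; the sketch below assumes that, with n_topics=50 chosen only to match the 50 weights listed, since the page does not specify the model's configuration.

```python
# Hypothetical sketch: LSI-style topic weights via a truncated SVD of the tf-idf matrix.
# n_topics=50 is an assumption chosen to match the 50 (topicId, topicWeight) pairs above;
# the actual model behind these numbers is not specified on this page.
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer

def lsi_topic_weights(posts, n_topics=50):
    """Map each post (a string) to a dense vector of latent-topic weights."""
    tfidf = TfidfVectorizer(stop_words="english")
    X = tfidf.fit_transform(posts)                       # sparse tf-idf term-document matrix
    svd = TruncatedSVD(n_components=n_topics, random_state=0)
    return svd.fit_transform(X)                          # one n_topics-dimensional row per post

# Usage (posts would be the full text of every hunch.net entry):
# weights = lsi_topic_weights(all_posts)
# print(list(enumerate(weights[0].round(3))))            # (topicId, topicWeight) pairs for one post
```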

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.98814863 484 hunch net-2013-06-16-Representative Reviewing

Introduction: When thinking about how best to review papers, it seems helpful to have some conception of what good reviewing is. As far as I can tell, this is almost always only discussed in the specific context of a paper (i.e. your rejected paper), or at most an area (i.e. what a “good paper” looks like for that area) rather than general principles. Neither individual papers or areas are sufficiently general for a large conference—every paper differs in the details, and what if you want to build a new area and/or cross areas? An unavoidable reason for reviewing is that the community of research is too large. In particular, it is not possible for a researcher to read every paper which someone thinks might be of interest. This reason for reviewing exists independent of constraints on rooms or scheduling formats of individual conferences. Indeed, history suggests that physical constraints are relatively meaningless over the long term — growing conferences simply use more rooms and/or change fo

2 0.93101847 461 hunch net-2012-04-09-ICML author feedback is open

Introduction: as of last night, late. When the reviewing deadline passed Wednesday night 15% of reviews were still missing, much higher than I expected. Between late reviews coming in, ACs working overtime through the weekend, and people willing to help in the pinch another ~390 reviews came in, reducing the missing mass to 0.2%. Nailing that last bit and a similar quantity of papers with uniformly low confidence reviews is what remains to be done in terms of basic reviews. We are trying to make all of those happen this week so authors have some chance to respond. I was surprised by the quantity of late reviews, and I think that’s an area where ICML needs to improve in future years. Good reviews are not done in a rush—they are done by setting aside time (like an afternoon), and carefully reading the paper while thinking about implications. Many reviewers do this well but a significant minority aren’t good at scheduling their personal time. In this situation there are several ways to fail:

3 0.92052656 315 hunch net-2008-09-03-Bidding Problems

Introduction: One way that many conferences in machine learning assign reviewers to papers is via bidding, which has steps something like: Invite people to review Accept papers Reviewers look at title and abstract and state the papers they are interested in reviewing. Some massaging happens, but reviewers often get approximately the papers they bid for. At the ICML business meeting, Andrew McCallum suggested getting rid of bidding for papers. A couple reasons were given: Privacy The title and abstract of the entire set of papers is visible to every participating reviewer. Some authors might be uncomfortable about this for submitted papers. I’m not sympathetic to this reason: the point of submitting a paper to review is to publish it, so the value (if any) of not publishing a part of it a little bit earlier seems limited. Cliques A bidding system is gameable. If you have 3 buddies and you inform each other of your submissions, you can each bid for your friend’s papers a

4 0.89055437 320 hunch net-2008-10-14-Who is Responsible for a Bad Review?

Introduction: Although I’m greatly interested in machine learning, I think it must be admitted that there is a large amount of low quality logic being used in reviews. The problem is bad enough that sometimes I wonder if the Byzantine generals limit has been exceeded. For example, I’ve seen recent reviews where the given reasons for rejecting are: [ NIPS ] Theorem A is uninteresting because Theorem B is uninteresting. [ UAI ] When you learn by memorization, the problem addressed is trivial. [NIPS] The proof is in the appendix. [NIPS] This has been done before. (… but not giving any relevant citations) Just for the record I want to point out what’s wrong with these reviews. A future world in which such reasons never come up again would be great, but I’m sure these errors will be committed many times more in the future. This is nonsense. A theorem should be evaluated based on it’s merits, rather than the merits of another theorem. Learning by memorization requires an expon

5 0.88279343 40 hunch net-2005-03-13-Avoiding Bad Reviewing

Introduction: If we accept that bad reviewing often occurs and want to fix it, the question is “how”? Reviewing is done by paper writers just like yourself, so a good proxy for this question is asking “How can I be a better reviewer?” Here are a few things I’ve learned by trial (and error), as a paper writer, and as a reviewer. The secret ingredient is careful thought. There is no good substitution for a deep and careful understanding. Avoid reviewing papers that you feel competitive about. You almost certainly will be asked to review papers that feel competitive if you work on subjects of common interest. But, the feeling of competition can easily lead to bad judgement. If you feel biased for some other reason, then you should avoid reviewing. For example… Feeling angry or threatened by a paper is a form of bias. See above. Double blind yourself (avoid looking at the name even in a single-blind situation). The significant effect of a name you recognize is making you pay close a

6 0.84956127 463 hunch net-2012-05-02-ICML: Behind the Scenes

7 0.84754258 207 hunch net-2006-09-12-Incentive Compatible Reviewing

8 0.82907563 38 hunch net-2005-03-09-Bad Reviewing

9 0.81491798 485 hunch net-2013-06-29-The Benefits of Double-Blind Review

10 0.79855394 221 hunch net-2006-12-04-Structural Problems in NIPS Decision Making

11 0.79016322 395 hunch net-2010-04-26-Compassionate Reviewing

12 0.7897442 318 hunch net-2008-09-26-The SODA Program Committee

13 0.78791457 343 hunch net-2009-02-18-Decision by Vetocracy

14 0.76287311 304 hunch net-2008-06-27-Reviewing Horror Stories

15 0.75665098 437 hunch net-2011-07-10-ICML 2011 and the future

16 0.75496554 116 hunch net-2005-09-30-Research in conferences

17 0.69664693 333 hunch net-2008-12-27-Adversarial Academia

18 0.69051504 363 hunch net-2009-07-09-The Machine Learning Forum

19 0.67311388 65 hunch net-2005-05-02-Reviewing techniques for conferences

20 0.66637218 468 hunch net-2012-06-29-ICML survey and comments


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(3, 0.215), (10, 0.039), (27, 0.238), (36, 0.01), (38, 0.032), (48, 0.014), (53, 0.067), (55, 0.207), (83, 0.017), (94, 0.054), (95, 0.037)]
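The LDA weights above are sparse: only topics with non-negligible probability for this post are listed, which is how gensim reports per-document topic mixtures. The sketch below uses gensim as an assumption (the page does not say which implementation produced these numbers), with num_topics=100 guessed only because the topic ids above run up to 95.

```python
# Hypothetical sketch: per-post LDA topic mixtures reported as sparse (topicId, topicWeight) pairs.
# The library (gensim), num_topics=100, and the probability threshold are assumptions.
from gensim import corpora, models

def lda_topic_weights(tokenized_posts, num_topics=100):
    """tokenized_posts: list of word lists; returns sparse (topicId, weight) pairs per post."""
    dictionary = corpora.Dictionary(tokenized_posts)
    bow = [dictionary.doc2bow(tokens) for tokens in tokenized_posts]
    lda = models.LdaModel(bow, num_topics=num_topics, id2word=dictionary, random_state=0)
    return [lda.get_document_topics(doc, minimum_probability=0.01) for doc in bow]

# Usage (tokenized_posts would be word lists for every hunch.net entry):
# weights = lda_topic_weights(tokenized_posts)
# print(weights[0])                                      # e.g. [(3, 0.215), (27, 0.238), (55, 0.207), ...]
```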

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.94975477 484 hunch net-2013-06-16-Representative Reviewing

Introduction: When thinking about how best to review papers, it seems helpful to have some conception of what good reviewing is. As far as I can tell, this is almost always only discussed in the specific context of a paper (i.e. your rejected paper), or at most an area (i.e. what a “good paper” looks like for that area) rather than general principles. Neither individual papers or areas are sufficiently general for a large conference—every paper differs in the details, and what if you want to build a new area and/or cross areas? An unavoidable reason for reviewing is that the community of research is too large. In particular, it is not possible for a researcher to read every paper which someone thinks might be of interest. This reason for reviewing exists independent of constraints on rooms or scheduling formats of individual conferences. Indeed, history suggests that physical constraints are relatively meaningless over the long term — growing conferences simply use more rooms and/or change fo

2 0.90145528 40 hunch net-2005-03-13-Avoiding Bad Reviewing

Introduction: If we accept that bad reviewing often occurs and want to fix it, the question is “how”? Reviewing is done by paper writers just like yourself, so a good proxy for this question is asking “How can I be a better reviewer?” Here are a few things I’ve learned by trial (and error), as a paper writer, and as a reviewer. The secret ingredient is careful thought. There is no good substitution for a deep and careful understanding. Avoid reviewing papers that you feel competitive about. You almost certainly will be asked to review papers that feel competitive if you work on subjects of common interest. But, the feeling of competition can easily lead to bad judgement. If you feel biased for some other reason, then you should avoid reviewing. For example… Feeling angry or threatened by a paper is a form of bias. See above. Double blind yourself (avoid looking at the name even in a single-blind situation). The significant effect of a name you recognize is making you pay close a

3 0.8998059 31 hunch net-2005-02-26-Problem: Reductions and Relative Ranking Metrics

Introduction: This, again, is something of a research direction rather than a single problem. There are several metrics people care about which depend upon the relative ranking of examples and there are sometimes good reasons to care about such metrics. Examples include AROC , “F1″, the proportion of the time that the top ranked element is in some class, the proportion of the top 10 examples in some class ( google ‘s problem), the lowest ranked example of some class, and the “sort distance” from a predicted ranking to a correct ranking. See here for an example of some of these. Problem What does the ability to classify well imply about performance under these metrics? Past Work Probabilistic classification under squared error can be solved with a classifier. A counterexample shows this does not imply a good AROC. Sample complexity bounds for AROC (and here ). A paper on “ Learning to Order Things “. Difficulty Several of these may be easy. Some of them may be h

4 0.88888848 298 hunch net-2008-04-26-Eliminating the Birthday Paradox for Universal Features

Introduction: I want to expand on this post which describes one of the core tricks for making Vowpal Wabbit fast and easy to use when learning from text. The central trick is converting a word (or any other parseable quantity) into a number via a hash function. Kishore tells me this is a relatively old trick in NLP land, but it has some added advantages when doing online learning, because you can learn directly from the existing data without preprocessing the data to create features (destroying the online property) or using an expensive hashtable lookup (slowing things down). A central concern for this approach is collisions, which create a loss of information. If you use m features in an index space of size n the birthday paradox suggests a collision if m > n 0.5 , essentially because there are m 2 pairs. This is pretty bad, because it says that with a vocabulary of 10 5 features, you might need to have 10 10 entries in your table. It turns out that redundancy is gr

5 0.88862395 401 hunch net-2010-06-20-2010 ICML discussion site

Introduction: A substantial difficulty with the 2009 and 2008 ICML discussion system was a communication vacuum, where authors were not informed of comments, and commenters were not informed of responses to their comments without explicit monitoring. Mark Reid has setup a new discussion system for 2010 with the goal of addressing this. Mark didn’t want to make it to intrusive, so you must opt-in. As an author, find your paper and “Subscribe by email” to the comments. As a commenter, you have the option of providing an email for follow-up notification.

6 0.8844772 213 hunch net-2006-10-08-Incompatibilities between classical confidence intervals and learning.

7 0.88336951 243 hunch net-2007-05-08-Conditional Tournaments for Multiclass to Binary

8 0.87941015 289 hunch net-2008-02-17-The Meaning of Confidence

9 0.87278652 391 hunch net-2010-03-15-The Efficient Robust Conditional Probability Estimation Problem

10 0.86883646 461 hunch net-2012-04-09-ICML author feedback is open

11 0.85600752 183 hunch net-2006-06-14-Explorations of Exploration

12 0.84787869 320 hunch net-2008-10-14-Who is Responsible for a Bad Review?

13 0.8477239 437 hunch net-2011-07-10-ICML 2011 and the future

14 0.84058499 452 hunch net-2012-01-04-Why ICML? and the summer conferences

15 0.83722377 116 hunch net-2005-09-30-Research in conferences

16 0.83039778 463 hunch net-2012-05-02-ICML: Behind the Scenes

17 0.82802767 466 hunch net-2012-06-05-ICML acceptance statistics

18 0.82628763 403 hunch net-2010-07-18-ICML & COLT 2010

19 0.8256315 89 hunch net-2005-07-04-The Health of COLT

20 0.82331038 343 hunch net-2009-02-18-Decision by Vetocracy