hunch_net hunch_net-2005 hunch_net-2005-38 knowledge-graph by maker-knowledge-mining

38 hunch net-2005-03-09-Bad Reviewing


meta info for this blog

Source: html

Introduction: This is a difficult subject to talk about for many reasons, but a discussion may be helpful. Bad reviewing is a problem in academia. The first step in understanding this is admitting to the problem, so here is a short list of examples of bad reviewing. Reviewer disbelieves theorem proof (ICML), or disbelieves a theorem, citing a trivially false counterexample. (COLT) Reviewer internally swaps quantifiers in a theorem, concludes it has been done before and is trivial. (NIPS) Reviewer believes a technique will not work despite experimental validation. (COLT) Reviewers fail to notice a flaw in a theorem statement (CRYPTO). Reviewer erroneously claims that it has been done before (NIPS, SODA, JMLR)—(complete with references!) Reviewer inverts the message of a paper and concludes it says nothing important. (NIPS*2) Reviewer fails to distinguish between a DAG and a tree (SODA). Reviewer is enthusiastic about a paper but clearly does not understand it (ICML). Reviewer erroneously believes that the “birthday paradox” is relevant (CCS).


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 This is a difficult subject to talk about for many reasons, but a discussion may be helpful. [sent-1, score-0.095]

2 The first step in understanding this is admitting to the problem, so here is a short list of examples of bad reviewing. [sent-3, score-0.39]

3 Reviewer disbelieves theorem proof (ICML), or disbelieves a theorem, citing a trivially false counterexample. [sent-4, score-0.57]

4 (COLT) Reviewer internally swaps quantifiers in a theorem, concludes it has been done before and is trivial. [sent-5, score-0.519]

5 (NIPS) Reviewer believes a technique will not work despite experimental validation. [sent-6, score-0.086]

6 (COLT) Reviewers fail to notice a flaw in a theorem statement (CRYPTO). [sent-7, score-0.455]

7 Reviewer erroneously claims that it has been done before (NIPS, SODA, JMLR)—(complete with references!) [sent-8, score-0.298]

8 Reviewer inverts the message of a paper and concludes it says nothing important. [sent-9, score-0.425]

9 (NIPS*2) Reviewer fails to distinguish between a DAG and a tree (SODA). [sent-10, score-0.07]

10 Reviewer is enthusiastic about a paper but clearly does not understand it (ICML). [sent-11, score-0.313]

11 Reviewer erroneously believes that the “birthday paradox” is relevant (CCS). [sent-12, score-0.222]

12 The above is only for cases where there were sufficient reviewer comments to actually understand reviewer failure modes. [sent-13, score-1.291]

13 Many reviewers fail to leave sufficient comments and it’s easy to imagine they commit similar mistakes. [sent-14, score-0.574]

14 Bad reviewing should be clearly distinguished from rejections—note that some of the above examples are actually accepts. [sent-15, score-0.421]

15 The standard psychological reaction to any rejected paper is trying to find fault with the reviewers. [sent-16, score-0.302]

16 You, as a paper writer, have invested significant work (weeks? [sent-17, score-0.196]

17 ) in the process of creating a paper, so it is extremely difficult to step back and read the reviews objectively. [sent-20, score-0.185]

18 One characteristic distinguishing a bad review from a rejection is that it bothers you years later. [sent-21, score-0.559]

19 If we accept that bad reviewing happens and want to address the issue, we are left with a very difficult problem. [sent-22, score-0.454]
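
The sentence scores above come from a tfidf model. As a rough, hypothetical sketch of the idea (this is not the maker-knowledge-mining pipeline itself; its sentence splitting, tokenization, and exact scoring are assumptions here), each sentence can be scored by the summed tfidf weight of its terms and the list ranked by that score:

```python
# Hypothetical sketch: score sentences by summed tfidf weight, keep the top ones.
from sklearn.feature_extraction.text import TfidfVectorizer

document = ("This is a difficult subject to talk about for many reasons. "
            "Bad reviewing is a problem in academia. "
            "The first step in understanding this is admitting to the problem.")
sentences = [s.strip() for s in document.split(". ") if s.strip()]  # naive splitter

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(sentences)      # one row per sentence

scores = tfidf.sum(axis=1).A1                    # summed term weights per sentence
for score, sent in sorted(zip(scores, sentences), reverse=True):
    print(f"{score:.3f}  {sent}")                # analogue of the sentScore column
```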


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('reviewer', 0.502), ('concludes', 0.222), ('erroneously', 0.222), ('bad', 0.221), ('theorem', 0.19), ('soda', 0.158), ('reviewing', 0.138), ('nips', 0.12), ('paper', 0.117), ('fail', 0.115), ('clearly', 0.105), ('comments', 0.103), ('commit', 0.099), ('dag', 0.099), ('disbelieve', 0.099), ('internally', 0.099), ('quantifiers', 0.099), ('reaction', 0.099), ('swaps', 0.099), ('sufficient', 0.097), ('difficult', 0.095), ('reviewers', 0.092), ('birthday', 0.091), ('distinguished', 0.091), ('enthusiastic', 0.091), ('paradox', 0.091), ('references', 0.091), ('trivially', 0.091), ('writer', 0.091), ('step', 0.09), ('years', 0.09), ('actually', 0.087), ('characteristic', 0.086), ('believes', 0.086), ('distinguishing', 0.086), ('fault', 0.086), ('message', 0.086), ('jmlr', 0.082), ('rejections', 0.082), ('colt', 0.081), ('admitting', 0.079), ('crypto', 0.079), ('invested', 0.079), ('rejection', 0.076), ('claims', 0.076), ('notice', 0.076), ('flaw', 0.074), ('distinguish', 0.07), ('months', 0.068), ('leave', 0.068)]
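
The (word, weight) pairs above are the top tfidf terms for this post, and the simValue column in the list below is a similarity computed from such vectors. A minimal sketch with scikit-learn follows; the `posts` corpus is a toy stand-in, and the real pipeline's preprocessing and exact similarity measure are assumptions (cosine similarity of tfidf vectors is simply the usual choice):

```python
# Hypothetical sketch: tfidf vectors per post, top terms, and cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

posts = [                                        # toy stand-ins for the real posts
    "bad reviewing is a problem in academia",    # 38 Bad Reviewing
    "who is responsible for a bad review",       # 320
    "representative reviewing for large conferences",  # 484
]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(posts)          # rows = posts, columns = terms

# Top-weighted terms for this post, analogous to the (word, weight) list above.
weights = tfidf[0].toarray().ravel()
terms = vectorizer.get_feature_names_out()
print(sorted(zip(terms, weights), key=lambda t: -t[1])[:5])

# Cosine similarity against every post, analogous to simValue in the list below.
print(cosine_similarity(tfidf[0], tfidf).ravel())
```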

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0 38 hunch net-2005-03-09-Bad Reviewing


2 0.28989938 343 hunch net-2009-02-18-Decision by Vetocracy

Introduction: Few would mistake the process of academic paper review for a fair process, but sometimes the unfairness seems particularly striking. This is most easily seen by comparison: Paper Banditron Offset Tree Notes Problem Scope Multiclass problems where only the loss of one choice can be probed. Strictly greater: Cost sensitive multiclass problems where only the loss of one choice can be probed. Often generalizations don’t matter. That’s not the case here, since every plausible application I’ve thought of involves loss functions substantially different from 0/1. What’s new Analysis and Experiments Algorithm, Analysis, and Experiments As far as I know, the essence of the more general problem was first stated and analyzed with the EXP4 algorithm (page 16) (1998). It’s also the time horizon 1 simplification of the Reinforcement Learning setting for the random trajectory method (page 15) (2002). The Banditron algorithm itself is functionally identi

3 0.26740739 484 hunch net-2013-06-16-Representative Reviewing

Introduction: When thinking about how best to review papers, it seems helpful to have some conception of what good reviewing is. As far as I can tell, this is almost always only discussed in the specific context of a paper (i.e. your rejected paper), or at most an area (i.e. what a “good paper” looks like for that area) rather than general principles. Neither individual papers nor areas are sufficiently general for a large conference—every paper differs in the details, and what if you want to build a new area and/or cross areas? An unavoidable reason for reviewing is that the community of research is too large. In particular, it is not possible for a researcher to read every paper which someone thinks might be of interest. This reason for reviewing exists independent of constraints on rooms or scheduling formats of individual conferences. Indeed, history suggests that physical constraints are relatively meaningless over the long term — growing conferences simply use more rooms and/or change fo

4 0.23255721 320 hunch net-2008-10-14-Who is Responsible for a Bad Review?

Introduction: Although I’m greatly interested in machine learning, I think it must be admitted that there is a large amount of low quality logic being used in reviews. The problem is bad enough that sometimes I wonder if the Byzantine generals limit has been exceeded. For example, I’ve seen recent reviews where the given reasons for rejecting are: [ NIPS ] Theorem A is uninteresting because Theorem B is uninteresting. [ UAI ] When you learn by memorization, the problem addressed is trivial. [NIPS] The proof is in the appendix. [NIPS] This has been done before. (… but not giving any relevant citations) Just for the record I want to point out what’s wrong with these reviews. A future world in which such reasons never come up again would be great, but I’m sure these errors will be committed many times more in the future. This is nonsense. A theorem should be evaluated based on its merits, rather than the merits of another theorem. Learning by memorization requires an expon

5 0.22150099 304 hunch net-2008-06-27-Reviewing Horror Stories

Introduction: Essentially everyone who writes research papers suffers rejections. They always sting immediately, but upon further reflection many of these rejections come to seem reasonable. Maybe the equations had too many typos or maybe the topic just isn’t as important as was originally thought. A few rejections do not come to seem acceptable, and these form the basis of reviewing horror stories, a great material for conversations. I’ve decided to share three of mine, now all safely a bit distant in the past. Prediction Theory for Classification Tutorial . This is a tutorial about tight sample complexity bounds for classification that I submitted to JMLR . The first decision I heard was a reject which appeared quite unjust to me—for example one of the reviewers appeared to claim that all the content was in standard statistics books. Upon further inquiry, several citations were given, none of which actually covered the content. Later, I was shocked to hear the paper was accepted. App

6 0.21151029 207 hunch net-2006-09-12-Incentive Compatible Reviewing

7 0.19883481 315 hunch net-2008-09-03-Bidding Problems

8 0.19827494 40 hunch net-2005-03-13-Avoiding Bad Reviewing

9 0.15660539 52 hunch net-2005-04-04-Grounds for Rejection

10 0.1539274 333 hunch net-2008-12-27-Adversarial Academia

11 0.14696799 395 hunch net-2010-04-26-Compassionate Reviewing

12 0.14324453 461 hunch net-2012-04-09-ICML author feedback is open

13 0.14193313 437 hunch net-2011-07-10-ICML 2011 and the future

14 0.14020979 116 hunch net-2005-09-30-Research in conferences

15 0.13978164 318 hunch net-2008-09-26-The SODA Program Committee

16 0.12602957 98 hunch net-2005-07-27-Not goal metrics

17 0.12373362 204 hunch net-2006-08-28-Learning Theory standards for NIPS 2006

18 0.12331507 463 hunch net-2012-05-02-ICML: Behind the Scenes

19 0.11321685 452 hunch net-2012-01-04-Why ICML? and the summer conferences

20 0.11320861 453 hunch net-2012-01-28-Why COLT?


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.205), (1, -0.165), (2, 0.258), (3, 0.111), (4, 0.035), (5, 0.085), (6, 0.026), (7, 0.009), (8, -0.004), (9, 0.021), (10, 0.012), (11, -0.031), (12, 0.061), (13, -0.004), (14, -0.051), (15, 0.004), (16, 0.026), (17, 0.014), (18, 0.016), (19, -0.003), (20, -0.074), (21, 0.044), (22, -0.02), (23, -0.098), (24, -0.044), (25, 0.038), (26, 0.028), (27, -0.061), (28, 0.076), (29, -0.056), (30, -0.009), (31, 0.023), (32, -0.057), (33, 0.063), (34, -0.043), (35, 0.017), (36, -0.057), (37, 0.076), (38, -0.04), (39, -0.054), (40, -0.026), (41, -0.01), (42, -0.037), (43, -0.036), (44, 0.056), (45, -0.098), (46, -0.022), (47, 0.024), (48, 0.047), (49, -0.076)]
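
The (topicId, topicWeight) pairs above are this post's coordinates in a 50-dimensional LSI space, and the simValue column below is similarity measured in that space. A minimal sketch, assuming LSI is built as a truncated SVD of the tfidf matrix (the standard construction; the model actually used for this page is not specified), follows:

```python
# Hypothetical sketch: LSI as truncated SVD of the tfidf matrix, then
# similarity between posts in the reduced topic space.
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

posts = [                                            # toy stand-ins for the real posts
    "bad reviewing is a problem in academia",
    "who is responsible for a bad review",
    "representative reviewing for large conferences",
    "bidding problems in conference reviewer assignment",
]

tfidf = TfidfVectorizer(stop_words="english").fit_transform(posts)
lsi = TruncatedSVD(n_components=2, random_state=0)   # the listing above uses 50 topics
topics = lsi.fit_transform(tfidf)                    # rows = posts, columns = topics

print(list(enumerate(topics[0])))                    # (topicId, topicWeight) pairs
print(cosine_similarity(topics[:1], topics).ravel()) # LSI-space simValue analogue
```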

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.98764127 38 hunch net-2005-03-09-Bad Reviewing


2 0.85563385 320 hunch net-2008-10-14-Who is Responsible for a Bad Review?


3 0.82675511 484 hunch net-2013-06-16-Representative Reviewing


4 0.81951034 315 hunch net-2008-09-03-Bidding Problems

Introduction: One way that many conferences in machine learning assign reviewers to papers is via bidding, which has steps something like: Invite people to review Accept papers Reviewers look at title and abstract and state the papers they are interested in reviewing. Some massaging happens, but reviewers often get approximately the papers they bid for. At the ICML business meeting, Andrew McCallum suggested getting rid of bidding for papers. A couple reasons were given: Privacy The title and abstract of the entire set of papers is visible to every participating reviewer. Some authors might be uncomfortable about this for submitted papers. I’m not sympathetic to this reason: the point of submitting a paper to review is to publish it, so the value (if any) of not publishing a part of it a little bit earlier seems limited. Cliques A bidding system is gameable. If you have 3 buddies and you inform each other of your submissions, you can each bid for your friend’s papers a

5 0.77705866 52 hunch net-2005-04-04-Grounds for Rejection

Introduction: It’s reviewing season right now, so I thought I would list (at a high level) the sorts of problems which I see in papers. Hopefully, this will help us all write better papers. The following flaws are fatal to any paper: Incorrect theorem or lemma statements A typo might be “ok”, if it can be understood. Any theorem or lemma which indicates an incorrect understanding of reality must be rejected. Not doing so would severely harm the integrity of the conference. A paper rejected for this reason must be fixed. Lack of Understanding If a paper is understood by none of the (typically 3) reviewers then it must be rejected for the same reason. This is more controversial than it sounds because there are some people who maximize paper complexity in the hope of impressing the reviewer. The tactic sometimes succeeds with some reviewers (but not with me). As a reviewer, I sometimes get lost for stupid reasons. This is why an anonymized communication channel with the author can

6 0.77188188 461 hunch net-2012-04-09-ICML author feedback is open

7 0.76665235 463 hunch net-2012-05-02-ICML: Behind the Scenes

8 0.75442028 207 hunch net-2006-09-12-Incentive Compatible Reviewing

9 0.74815059 343 hunch net-2009-02-18-Decision by Vetocracy

10 0.73236352 238 hunch net-2007-04-13-What to do with an unreasonable conditional accept

11 0.72518468 221 hunch net-2006-12-04-Structural Problems in NIPS Decision Making

12 0.72347248 304 hunch net-2008-06-27-Reviewing Horror Stories

13 0.67733383 40 hunch net-2005-03-13-Avoiding Bad Reviewing

14 0.65808362 485 hunch net-2013-06-29-The Benefits of Double-Blind Review

15 0.64933366 333 hunch net-2008-12-27-Adversarial Academia

16 0.6329754 318 hunch net-2008-09-26-The SODA Program Committee

17 0.6015777 98 hunch net-2005-07-27-Not goal metrics

18 0.57062477 437 hunch net-2011-07-10-ICML 2011 and the future

19 0.55918998 204 hunch net-2006-08-28-Learning Theory standards for NIPS 2006

20 0.55500275 395 hunch net-2010-04-26-Compassionate Reviewing


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(10, 0.427), (27, 0.174), (38, 0.025), (53, 0.06), (55, 0.135), (94, 0.046), (95, 0.038)]
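
Here the (topicId, topicWeight) pairs are this post's mixture over LDA topics; only topics with noticeable weight appear, which is why the topic ids are sparse. A minimal sketch, assuming an LDA model fit on word counts (for example scikit-learn's LatentDirichletAllocation; the implementation behind this page is not specified), follows:

```python
# Hypothetical sketch: LDA topic mixtures over word counts, then similarity
# between posts in topic space.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

posts = [                                            # toy stand-ins for the real posts
    "bad reviewing is a problem in academia",
    "machine learning symposium with invited talks and a poster session",
    "who is responsible for a bad review",
]

counts = CountVectorizer(stop_words="english").fit_transform(posts)
lda = LatentDirichletAllocation(n_components=3, random_state=0)
doc_topics = lda.fit_transform(counts)               # each row sums to ~1

print(list(enumerate(doc_topics[0])))                # (topicId, topicWeight) pairs
print(cosine_similarity(doc_topics[:1], doc_topics).ravel())
```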

similar blogs list:

simIndex simValue blogId blogTitle

1 0.99065375 474 hunch net-2012-10-18-7th Annual Machine Learning Symposium

Introduction: A reminder that the New York Academy of Sciences will be hosting the  7th Annual Machine Learning Symposium tomorrow from 9:30am. The main program will feature invited talks from Peter Bartlett ,  William Freeman , and Vladimir Vapnik , along with numerous spotlight talks and a poster session. Following the main program, hackNY and Microsoft Research are sponsoring a networking hour with talks from machine learning practitioners at NYC startups (specifically bit.ly , Buzzfeed , Chartbeat , and Sense Networks , Visual Revenue ). This should be of great interest to everyone considering working in machine learning.

same-blog 2 0.96716893 38 hunch net-2005-03-09-Bad Reviewing


3 0.91661441 199 hunch net-2006-07-26-Two more UAI papers of interest

Introduction: In addition to Ed Snelson’s paper, there were (at least) two other papers that caught my eye at UAI. One was this paper by Sanjoy Dasgupta, Daniel Hsu and Nakul Verma at UCSD which shows in a surprisingly general and strong way that almost all linear projections of any jointly distributed vector random variable with finite first and second moments look spherical and unimodal (in fact look like a scale mixture of Gaussians). Great result, as you’d expect from Sanjoy. The other paper which I found intriguing but which I just haven’t groked yet is this beast by Manfred and Dima Kuzmin. You can check out the (beautiful) slides if that helps. I feel like there is something deep here, but my brain is too small to understand it. The COLT and last NIPS papers/slides are also on Manfred’s page. Hopefully someone here can illuminate.

4 0.90617234 55 hunch net-2005-04-10-Is the Goal Understanding or Prediction?

Introduction: Steve Smale and I have a debate about goals of learning theory. Steve likes theorems with a dependence on unobservable quantities. For example, if $D$ is a distribution over a space $X \times [0,1]$, you can state a theorem about the error rate dependent on the variance, $E_{(x,y)\sim D}\,(y - E_{y'\sim D|x}[y'])^2$. I dislike this, because I want to use the theorems to produce code solving learning problems. Since I don’t know (and can’t measure) the variance, a theorem depending on the variance does not help me—I would not know what variance to plug into the learning algorithm. Recast more broadly, this is a debate between “declarative” and “operative” mathematics. A strong example of “declarative” mathematics is “a new kind of science”. Roughly speaking, the goal of this kind of approach seems to be finding a way to explain the observations we make. Examples include “some things are unpredictable”, “a phase transition exists”, etc… “Operative” mathematics helps you make predictions a

5 0.90344119 434 hunch net-2011-05-09-CI Fellows, again

Introduction: Lev and Hal point out the CI Fellows program is on again for this year. Lev visited me for a year under this program, and I quite enjoyed it. Due May 31.

6 0.85849673 182 hunch net-2006-06-05-Server Shift, Site Tweaks, Suggestions?

7 0.85268563 240 hunch net-2007-04-21-Videolectures.net

8 0.70671099 454 hunch net-2012-01-30-ICML Posters and Scope

9 0.67594278 332 hunch net-2008-12-23-Use of Learning Theory

10 0.57618964 207 hunch net-2006-09-12-Incentive Compatible Reviewing

11 0.56425905 343 hunch net-2009-02-18-Decision by Vetocracy

12 0.56043851 437 hunch net-2011-07-10-ICML 2011 and the future

13 0.55004388 40 hunch net-2005-03-13-Avoiding Bad Reviewing

14 0.54313147 51 hunch net-2005-04-01-The Producer-Consumer Model of Research

15 0.53817081 484 hunch net-2013-06-16-Representative Reviewing

16 0.53557563 363 hunch net-2009-07-09-The Machine Learning Forum

17 0.53481859 279 hunch net-2007-12-19-Cool and interesting things seen at NIPS

18 0.53381848 320 hunch net-2008-10-14-Who is Responsible for a Bad Review?

19 0.53332353 464 hunch net-2012-05-03-Microsoft Research, New York City

20 0.52959591 463 hunch net-2012-05-02-ICML: Behind the Scenes