hunch_net hunch_net-2005 hunch_net-2005-52 knowledge-graph by maker-knowledge-mining

52 hunch net-2005-04-04-Grounds for Rejection


meta info for this blog

Source: html

Introduction: It’s reviewing season right now, so I thought I would list (at a high level) the sorts of problems which I see in papers. Hopefully, this will help us all write better papers. The following flaws are fatal to any paper: Incorrect theorem or lemma statements A typo might be “ok”, if it can be understood. Any theorem or lemma which indicates an incorrect understanding of reality must be rejected. Not doing so would severely harm the integrity of the conference. A paper rejected for this reason must be fixed. Lack of Understanding If a paper is understood by none of the (typically 3) reviewers then it must be rejected for the same reason. This is more controversial than it sounds because there are some people who maximize paper complexity in the hope of impressing the reviewer. The tactic sometimes succeeds with some reviewers (but not with me). As a reviewer, I sometimes get lost for stupid reasons. This is why an anonymized communication channel with the author can


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 It’s reviewing season right now, so I thought I would list (at a high level) the sorts of problems which I see in papers. [sent-1, score-0.21]

2 The following flaws are fatal to any paper: Incorrect theorem or lemma statements. A typo might be “ok”, if it can be understood. [sent-3, score-0.741]

3 Any theorem or lemma which indicates an incorrect understanding of reality must be rejected. [sent-4, score-0.959]

4 Not doing so would severely harm the integrity of the conference. [sent-5, score-0.342]

5 Lack of Understanding. If a paper is understood by none of the (typically 3) reviewers then it must be rejected for the same reason. [sent-7, score-0.655]

6 This is more controversial than it sounds because there are some people who maximize paper complexity in the hope of impressing the reviewer. [sent-8, score-0.578]

7 The tactic sometimes succeeds with some reviewers (but not with me). [sent-9, score-0.289]

8 As a reviewer, I sometimes get lost for stupid reasons. [sent-10, score-0.193]

9 This is why an anonymized communication channel with the author can be very helpful. [sent-11, score-0.302]

10 Bad idea. Rarely, a paper comes along with an obviously bad idea. [sent-12, score-0.357]

11 These also must be rejected for the integrity of science. The following flaws have a strong negative impact on my opinion of the paper. [sent-13, score-0.989]

12 “Kneecapping the giants” papers take a previously published idea, cripple it, and then come up with an improvement on the crippled version. [sent-15, score-0.074]

13 This often looks great experimentally, but is unconvincing because it does not improve on the state of the art. [sent-16, score-0.104]

14 The paper emphasizes experimental evidence on datasets specially created to show the good performance of their algorithm. [sent-18, score-0.368]

15 Unfortunately, because learning is worst-case-impossible, I have little trust that performing well on a toy dataset implies good performance on real-world datasets. [sent-19, score-0.392]

16 My actual standard for reviewing is quite low, and I’m happy to approve of incremental improvements. [sent-20, score-0.583]

17 Unfortunately, even that standard is such that I suggest rejection on most reviewed papers. [sent-21, score-0.269]
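The sentence scores above come from a tfidf model. The exact maker-knowledge-mining pipeline is not shown here, so the following is only a minimal sketch, assuming each sentence is scored by the average tf-idf weight of its tokens; the function and variable names are illustrative, not taken from the real pipeline.

```python
# Hedged sketch: score sentences by the mean tf-idf weight of their tokens.
# This is one common way to produce "sentScore"-style values; it is an
# assumption, not the documented method behind the numbers above.
import math
from collections import Counter

def tfidf_weights(tokenized_posts):
    """Per-post dict of token -> tf-idf weight over a corpus of tokenized posts."""
    n = len(tokenized_posts)
    df = Counter(w for post in tokenized_posts for w in set(post))
    weights = []
    for post in tokenized_posts:
        tf = Counter(post)
        weights.append({w: (c / len(post)) * math.log(n / df[w]) for w, c in tf.items()})
    return weights

def score_sentences(tokenized_sentences, post_weights):
    """Average tf-idf weight of the tokens in each sentence of one post."""
    return [sum(post_weights.get(w, 0.0) for w in s) / max(len(s), 1)
            for s in tokenized_sentences]
```

Under such a scheme, sentences containing rare post-specific words (e.g. “kneecapping” and “giants”, which top the word-weight list further down) would tend to score highest.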


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('kneecapping', 0.268), ('giants', 0.238), ('integrity', 0.238), ('rejected', 0.226), ('lemma', 0.22), ('incorrect', 0.208), ('flaws', 0.19), ('paper', 0.17), ('must', 0.148), ('unfortunately', 0.124), ('approve', 0.119), ('impressing', 0.119), ('stupid', 0.119), ('toys', 0.119), ('typo', 0.119), ('theorem', 0.114), ('reviewers', 0.111), ('reviewing', 0.111), ('experimentally', 0.11), ('anonymized', 0.11), ('channel', 0.11), ('bad', 0.106), ('toy', 0.104), ('emphasizes', 0.104), ('harm', 0.104), ('succeeds', 0.104), ('unconvincing', 0.104), ('season', 0.099), ('indicates', 0.099), ('maximize', 0.099), ('performing', 0.099), ('following', 0.098), ('controversial', 0.095), ('sounds', 0.095), ('trust', 0.095), ('performance', 0.094), ('standard', 0.093), ('rejection', 0.092), ('incremental', 0.092), ('happy', 0.089), ('opinion', 0.089), ('reality', 0.086), ('reviewed', 0.084), ('understanding', 0.084), ('communication', 0.082), ('idea', 0.081), ('ok', 0.079), ('actual', 0.079), ('sometimes', 0.074), ('published', 0.074)]
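The "similar blogs" lists that follow are presumably nearest neighbours under this tf-idf representation. A minimal sketch of that computation, assuming plain cosine similarity between whole-post tf-idf vectors; `posts` (a list of raw post texts) and `post_index` (the index of this post) are hypothetical names, not part of the original pipeline.

```python
# Sketch only: cosine similarity between tf-idf vectors of whole posts.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(posts)              # one tf-idf row per blog post
sims = cosine_similarity(X[post_index], X)[0]    # similarity of this post to every post
ranking = sims.argsort()[::-1]                   # most similar first
```

The top match is the post itself with a score of about 1; the 1.0000001 reported below is consistent with floating-point noise on a cosine self-similarity of exactly 1.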

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0000001 52 hunch net-2005-04-04-Grounds for Rejection

Introduction: It’s reviewing season right now, so I thought I would list (at a high level) the sorts of problems which I see in papers. Hopefully, this will help us all write better papers. The following flaws are fatal to any paper: Incorrect theorem or lemma statements A typo might be “ok”, if it can be understood. Any theorem or lemma which indicates an incorrect understanding of reality must be rejected. Not doing so would severely harm the integrity of the conference. A paper rejected for this reason must be fixed. Lack of Understanding If a paper is understood by none of the (typically 3) reviewers then it must be rejected for the same reason. This is more controversial than it sounds because there are some people who maximize paper complexity in the hope of impressing the reviewer. The tactic sometimes succeeds with some reviewers (but not with me). As a reviewer, I sometimes get lost for stupid reasons. This is why an anonymized communication channel with the author can

2 0.15660539 38 hunch net-2005-03-09-Bad Reviewing

Introduction: This is a difficult subject to talk about for many reasons, but a discussion may be helpful. Bad reviewing is a problem in academia. The first step in understanding this is admitting to the problem, so here is a short list of examples of bad reviewing. Reviewer disbelieves theorem proof (ICML), or disbelieve theorem with a trivially false counterexample. (COLT) Reviewer internally swaps quantifiers in a theorem, concludes it has been done before and is trivial. (NIPS) Reviewer believes a technique will not work despite experimental validation. (COLT) Reviewers fail to notice flaw in theorem statement (CRYPTO). Reviewer erroneously claims that it has been done before (NIPS, SODA, JMLR)—(complete with references!) Reviewer inverts the message of a paper and concludes it says nothing important. (NIPS*2) Reviewer fails to distinguish between a DAG and a tree (SODA). Reviewer is enthusiastic about paper but clearly does not understand (ICML). Reviewer erroneously

3 0.15618528 484 hunch net-2013-06-16-Representative Reviewing

Introduction: When thinking about how best to review papers, it seems helpful to have some conception of what good reviewing is. As far as I can tell, this is almost always only discussed in the specific context of a paper (i.e. your rejected paper), or at most an area (i.e. what a “good paper” looks like for that area) rather than general principles. Neither individual papers or areas are sufficiently general for a large conference—every paper differs in the details, and what if you want to build a new area and/or cross areas? An unavoidable reason for reviewing is that the community of research is too large. In particular, it is not possible for a researcher to read every paper which someone thinks might be of interest. This reason for reviewing exists independent of constraints on rooms or scheduling formats of individual conferences. Indeed, history suggests that physical constraints are relatively meaningless over the long term — growing conferences simply use more rooms and/or change fo

4 0.15304293 343 hunch net-2009-02-18-Decision by Vetocracy

Introduction: Few would mistake the process of academic paper review for a fair process, but sometimes the unfairness seems particularly striking. This is most easily seen by comparison: Paper Banditron Offset Tree Notes Problem Scope Multiclass problems where only the loss of one choice can be probed. Strictly greater: Cost sensitive multiclass problems where only the loss of one choice can be probed. Often generalizations don’t matter. That’s not the case here, since every plausible application I’ve thought of involves loss functions substantially different from 0/1. What’s new Analysis and Experiments Algorithm, Analysis, and Experiments As far as I know, the essence of the more general problem was first stated and analyzed with the EXP4 algorithm (page 16) (1998). It’s also the time horizon 1 simplification of the Reinforcement Learning setting for the random trajectory method (page 15) (2002). The Banditron algorithm itself is functionally identi

5 0.13554174 320 hunch net-2008-10-14-Who is Responsible for a Bad Review?

Introduction: Although I’m greatly interested in machine learning, I think it must be admitted that there is a large amount of low quality logic being used in reviews. The problem is bad enough that sometimes I wonder if the Byzantine generals limit has been exceeded. For example, I’ve seen recent reviews where the given reasons for rejecting are: [ NIPS ] Theorem A is uninteresting because Theorem B is uninteresting. [ UAI ] When you learn by memorization, the problem addressed is trivial. [NIPS] The proof is in the appendix. [NIPS] This has been done before. (… but not giving any relevant citations) Just for the record I want to point out what’s wrong with these reviews. A future world in which such reasons never come up again would be great, but I’m sure these errors will be committed many times more in the future. This is nonsense. A theorem should be evaluated based on it’s merits, rather than the merits of another theorem. Learning by memorization requires an expon

6 0.12912384 304 hunch net-2008-06-27-Reviewing Horror Stories

7 0.12892784 233 hunch net-2007-02-16-The Forgetting

8 0.12861708 395 hunch net-2010-04-26-Compassionate Reviewing

9 0.12171853 207 hunch net-2006-09-12-Incentive Compatible Reviewing

10 0.1214142 454 hunch net-2012-01-30-ICML Posters and Scope

11 0.11775981 315 hunch net-2008-09-03-Bidding Problems

12 0.11575833 40 hunch net-2005-03-13-Avoiding Bad Reviewing

13 0.11242525 204 hunch net-2006-08-28-Learning Theory standards for NIPS 2006

14 0.11105658 98 hunch net-2005-07-27-Not goal metrics

15 0.1109352 461 hunch net-2012-04-09-ICML author feedback is open

16 0.10844799 325 hunch net-2008-11-10-ICML Reviewing Criteria

17 0.10706714 177 hunch net-2006-05-05-An ICML reject

18 0.10108203 19 hunch net-2005-02-14-Clever Methods of Overfitting

19 0.094023056 318 hunch net-2008-09-26-The SODA Program Committee

20 0.090045825 23 hunch net-2005-02-19-Loss Functions for Discriminative Training of Energy-Based Models


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.19), (1, -0.082), (2, 0.167), (3, 0.088), (4, 0.022), (5, 0.031), (6, 0.007), (7, 0.01), (8, 0.019), (9, -0.003), (10, -0.007), (11, 0.04), (12, 0.043), (13, -0.018), (14, -0.032), (15, -0.013), (16, 0.029), (17, 0.029), (18, 0.052), (19, -0.025), (20, -0.013), (21, 0.026), (22, -0.066), (23, -0.074), (24, -0.04), (25, -0.026), (26, -0.025), (27, -0.101), (28, 0.001), (29, -0.075), (30, -0.042), (31, 0.062), (32, -0.02), (33, 0.03), (34, 0.042), (35, -0.06), (36, -0.063), (37, 0.055), (38, 0.015), (39, 0.035), (40, -0.008), (41, 0.009), (42, -0.016), (43, 0.007), (44, 0.089), (45, -0.012), (46, -0.055), (47, 0.018), (48, 0.007), (49, -0.023)]
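The 50 topic weights above look like coordinates in a reduced LSI space. LSI is conventionally a truncated SVD of the tf-idf matrix, with similarity taken as cosine distance between the low-dimensional vectors; here is a sketch under that assumption. The 50-dimensional setting is inferred from the weight vector above, and `posts`/`post_index` are again assumed names.

```python
# Hedged sketch: LSI as tf-idf followed by truncated SVD, then cosine similarity
# in a ~50-dimensional topic space. Not the documented pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

X = TfidfVectorizer(stop_words="english").fit_transform(posts)   # posts: assumed list of post texts
topics = TruncatedSVD(n_components=50).fit_transform(X)          # dense topic coordinates per post
lsi_sims = cosine_similarity(topics[post_index:post_index + 1], topics)[0]
```

Note that the self-similarity reported below is 0.975 rather than exactly 1, so the actual pipeline evidently applies some further transformation; the sketch is only meant to convey the general shape of the computation.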

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.97535974 52 hunch net-2005-04-04-Grounds for Rejection

Introduction: It’s reviewing season right now, so I thought I would list (at a high level) the sorts of problems which I see in papers. Hopefully, this will help us all write better papers. The following flaws are fatal to any paper: Incorrect theorem or lemma statements A typo might be “ok”, if it can be understood. Any theorem or lemma which indicates an incorrect understanding of reality must be rejected. Not doing so would severely harm the integrity of the conference. A paper rejected for this reason must be fixed. Lack of Understanding If a paper is understood by none of the (typically 3) reviewers then it must be rejected for the same reason. This is more controversial than it sounds because there are some people who maximize paper complexity in the hope of impressing the reviewer. The tactic sometimes succeeds with some reviewers (but not with me). As a reviewer, I sometimes get lost for stupid reasons. This is why an anonymized communication channel with the author can

2 0.80109084 38 hunch net-2005-03-09-Bad Reviewing

Introduction: This is a difficult subject to talk about for many reasons, but a discussion may be helpful. Bad reviewing is a problem in academia. The first step in understanding this is admitting to the problem, so here is a short list of examples of bad reviewing. Reviewer disbelieves theorem proof (ICML), or disbelieve theorem with a trivially false counterexample. (COLT) Reviewer internally swaps quantifiers in a theorem, concludes it has been done before and is trivial. (NIPS) Reviewer believes a technique will not work despite experimental validation. (COLT) Reviewers fail to notice flaw in theorem statement (CRYPTO). Reviewer erroneously claims that it has been done before (NIPS, SODA, JMLR)—(complete with references!) Reviewer inverts the message of a paper and concludes it says nothing important. (NIPS*2) Reviewer fails to distinguish between a DAG and a tree (SODA). Reviewer is enthusiastic about paper but clearly does not understand (ICML). Reviewer erroneously

3 0.74108392 315 hunch net-2008-09-03-Bidding Problems

Introduction: One way that many conferences in machine learning assign reviewers to papers is via bidding, which has steps something like: Invite people to review Accept papers Reviewers look at title and abstract and state the papers they are interested in reviewing. Some massaging happens, but reviewers often get approximately the papers they bid for. At the ICML business meeting, Andrew McCallum suggested getting rid of bidding for papers. A couple reasons were given: Privacy The title and abstract of the entire set of papers is visible to every participating reviewer. Some authors might be uncomfortable about this for submitted papers. I’m not sympathetic to this reason: the point of submitting a paper to review is to publish it, so the value (if any) of not publishing a part of it a little bit earlier seems limited. Cliques A bidding system is gameable. If you have 3 buddies and you inform each other of your submissions, you can each bid for your friend’s papers a

4 0.73642361 320 hunch net-2008-10-14-Who is Responsible for a Bad Review?

Introduction: Although I’m greatly interested in machine learning, I think it must be admitted that there is a large amount of low quality logic being used in reviews. The problem is bad enough that sometimes I wonder if the Byzantine generals limit has been exceeded. For example, I’ve seen recent reviews where the given reasons for rejecting are: [ NIPS ] Theorem A is uninteresting because Theorem B is uninteresting. [ UAI ] When you learn by memorization, the problem addressed is trivial. [NIPS] The proof is in the appendix. [NIPS] This has been done before. (… but not giving any relevant citations) Just for the record I want to point out what’s wrong with these reviews. A future world in which such reasons never come up again would be great, but I’m sure these errors will be committed many times more in the future. This is nonsense. A theorem should be evaluated based on it’s merits, rather than the merits of another theorem. Learning by memorization requires an expon

5 0.72403383 207 hunch net-2006-09-12-Incentive Compatible Reviewing

Introduction: Reviewing is a fairly formal process which is integral to the way academia is run. Given this integral nature, the quality of reviewing is often frustrating. I’ve seen plenty of examples of false statements, misbeliefs, reading what isn’t written, etc…, and I’m sure many other people have as well. Recently, mechanisms like double blind review and author feedback have been introduced to try to make the process more fair and accurate in many machine learning (and related) conferences. My personal experience is that these mechanisms help, especially the author feedback. Nevertheless, some problems remain. The game theory take on reviewing is that the incentive for truthful reviewing isn’t there. Since reviewers are also authors, there are sometimes perverse incentives created and acted upon. (Incidentially, these incentives can be both positive and negative.) Setting up a truthful reviewing system is tricky because their is no final reference truth available in any acce

6 0.72297525 98 hunch net-2005-07-27-Not goal metrics

7 0.70627761 204 hunch net-2006-08-28-Learning Theory standards for NIPS 2006

8 0.70127809 221 hunch net-2006-12-04-Structural Problems in NIPS Decision Making

9 0.69991481 343 hunch net-2009-02-18-Decision by Vetocracy

10 0.68793827 304 hunch net-2008-06-27-Reviewing Horror Stories

11 0.67562342 233 hunch net-2007-02-16-The Forgetting

12 0.6647737 484 hunch net-2013-06-16-Representative Reviewing

13 0.64321166 238 hunch net-2007-04-13-What to do with an unreasonable conditional accept

14 0.63805401 463 hunch net-2012-05-02-ICML: Behind the Scenes

15 0.63504958 177 hunch net-2006-05-05-An ICML reject

16 0.62908417 40 hunch net-2005-03-13-Avoiding Bad Reviewing

17 0.62282288 318 hunch net-2008-09-26-The SODA Program Committee

18 0.62004524 461 hunch net-2012-04-09-ICML author feedback is open

19 0.60702467 30 hunch net-2005-02-25-Why Papers?

20 0.59815931 202 hunch net-2006-08-10-Precision is not accuracy


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(3, 0.04), (27, 0.207), (38, 0.025), (53, 0.094), (55, 0.053), (83, 0.363), (94, 0.103), (95, 0.024)]
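The sparse (topicId, topicWeight) pairs above read like an LDA topic distribution for this post. A plausible but unverified reconstruction of the similarity computation: fit LDA on word counts, take each post's topic distribution, and compare distributions. `n_components=100` is a guess based on the topic ids reaching the 90s, and `posts`/`post_index` are assumed names as before.

```python
# Sketch only: per-post LDA topic distributions compared with cosine similarity.
# The real pipeline may use a different library, topic count, or distance.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.metrics.pairwise import cosine_similarity

counts = CountVectorizer(stop_words="english").fit_transform(posts)
lda = LatentDirichletAllocation(n_components=100, random_state=0)
theta = lda.fit_transform(counts)          # rows are per-post topic distributions
lda_sims = cosine_similarity(theta[post_index:post_index + 1], theta)[0]
```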

similar blogs list:

simIndex simValue blogId blogTitle

1 0.92262763 261 hunch net-2007-08-28-Live ML Class

Introduction: Davor and Chunnan point out that MLSS 2007 in Tuebingen has live video for the majority of the world that is not there (heh).

2 0.89395404 321 hunch net-2008-10-19-NIPS 2008 workshop on Kernel Learning

Introduction: We’d like to invite hunch.net readers to participate in the NIPS 2008 workshop on kernel learning. While the main focus is on automatically learning kernels from data, we are also also looking at the broader questions of feature selection, multi-task learning and multi-view learning. There are no restrictions on the learning problem being addressed (regression, classification, etc), and both theoretical and applied work will be considered. The deadline for submissions is October 24 . More detail can be found here . Corinna Cortes, Arthur Gretton, Gert Lanckriet, Mehryar Mohri, Afshin Rostamizadeh

same-blog 3 0.89345431 52 hunch net-2005-04-04-Grounds for Rejection

Introduction: It’s reviewing season right now, so I thought I would list (at a high level) the sorts of problems which I see in papers. Hopefully, this will help us all write better papers. The following flaws are fatal to any paper: Incorrect theorem or lemma statements A typo might be “ok”, if it can be understood. Any theorem or lemma which indicates an incorrect understanding of reality must be rejected. Not doing so would severely harm the integrity of the conference. A paper rejected for this reason must be fixed. Lack of Understanding If a paper is understood by none of the (typically 3) reviewers then it must be rejected for the same reason. This is more controversial than it sounds because there are some people who maximize paper complexity in the hope of impressing the reviewer. The tactic sometimes succeeds with some reviewers (but not with me). As a reviewer, I sometimes get lost for stupid reasons. This is why an anonymized communication channel with the author can

4 0.85995513 135 hunch net-2005-12-04-Watchword: model

Introduction: In everyday use a model is a system which explains the behavior of some system, hopefully at the level where some alteration of the model predicts some alteration of the real-world system. In machine learning “model” has several variant definitions. Everyday . The common definition is sometimes used. Parameterized . Sometimes model is a short-hand for “parameterized model”. Here, it refers to a model with unspecified free parameters. In the Bayesian learning approach, you typically have a prior over (everyday) models. Predictive . Even further from everyday use is the predictive model. Examples of this are “my model is a decision tree” or “my model is a support vector machine”. Here, there is no real sense in which an SVM explains the underlying process. For example, an SVM tells us nothing in particular about how alterations to the real-world system would create a change. Which definition is being used at any particular time is important information. For examp

5 0.80259365 228 hunch net-2007-01-15-The Machine Learning Department

Introduction: Carnegie Mellon School of Computer Science has the first academic Machine Learning department . This department already existed as the Center for Automated Learning and Discovery , but recently changed it’s name. The reason for changing the name is obvious: very few people think of themselves as “Automated Learner and Discoverers”, but there are number of people who think of themselves as “Machine Learners”. Machine learning is both more succinct and recognizable—good properties for a name. A more interesting question is “Should there be a Machine Learning Department?”. Tom Mitchell has a relevant whitepaper claiming that machine learning is answering a different question than other fields or departments. The fundamental debate here is “Is machine learning different from statistics?” At a cultural level, there is no real debate: they are different. Machine learning is characterized by several very active large peer reviewed conferences, operating in a computer

6 0.74104923 98 hunch net-2005-07-27-Not goal metrics

7 0.58261788 158 hunch net-2006-02-24-A Fundamentalist Organization of Machine Learning

8 0.5771786 345 hunch net-2009-03-08-Prediction Science

9 0.57230061 286 hunch net-2008-01-25-Turing’s Club for Machine Learning

10 0.56952363 435 hunch net-2011-05-16-Research Directions for Machine Learning and Algorithms

11 0.56729835 347 hunch net-2009-03-26-Machine Learning is too easy

12 0.56621128 351 hunch net-2009-05-02-Wielding a New Abstraction

13 0.56466037 359 hunch net-2009-06-03-Functionally defined Nonlinear Dynamic Models

14 0.56297362 95 hunch net-2005-07-14-What Learning Theory might do

15 0.56185114 41 hunch net-2005-03-15-The State of Tight Bounds

16 0.56178164 256 hunch net-2007-07-20-Motivation should be the Responsibility of the Reviewer

17 0.56153136 370 hunch net-2009-09-18-Necessary and Sufficient Research

18 0.56109154 237 hunch net-2007-04-02-Contextual Scaling

19 0.56095755 3 hunch net-2005-01-24-The Humanloop Spectrum of Machine Learning

20 0.56019974 258 hunch net-2007-08-12-Exponentiated Gradient