hunch_net-2008-325: knowledge-graph by maker-knowledge-mining

325 hunch net-2008-11-10-ICML Reviewing Criteria


meta information for this blog

Source: html

Introduction: Michael Littman and Leon Bottou have decided to use a franchise program chair approach to reviewing at ICML this year. I’ll be one of the area chairs, so I wanted to mention a few things if you are thinking about naming me. I take reviewing seriously. That means papers to be reviewed are read, the implications are considered, and decisions are only made after that. I do my best to be fair, and there are zero subjects that I consider categorical rejects. I don’t consider several arguments for rejection-not-on-the-merits reasonable. I am generally interested in papers that (a) analyze new models of machine learning, (b) provide new algorithms, and (c) show that they work empirically on plausibly real problems. If a paper has the trifecta, I’m particularly interested. With 2 out of 3, I might be interested. I often find papers with only one element harder to accept, including papers with just (a). I’m a bit tough. I rarely jump-up-and-down about a paper, because I believe that great progress is rarely made.


Summary: the most important sentences generated by the tfidf model (a sketch of one way such scores could be computed appears after the list)

sentIndex sentText sentNum sentScore

1 Michael Littman and Leon Bottou have decided to use a franchise program chair approach to reviewing at ICML this year. [sent-1, score-0.473]

2 I’ll be one of the area chairs, so I wanted to mention a few things if you are thinking about naming me. [sent-2, score-0.199]

3 That means papers to be reviewed are read, the implications are considered, and decisions are only made after that. [sent-4, score-0.398]

4 I do my best to be fair, and there are zero subjects that I consider categorical rejects. [sent-5, score-0.594]

5 I don’t consider several arguments for rejection-not-on-the-merits reasonable. [sent-6, score-0.256]

6 I am generally interested in papers that (a) analyze new models of machine learning, (b) provide new algorithms, and (c) show that they work empirically on plausibly real problems. [sent-7, score-0.939]

7 I often find papers with only one element harder to accept, including papers with just (a). [sent-10, score-0.487]

8 I rarely jump-up-and-down about a paper, because I believe that great progress is rarely made. [sent-12, score-0.481]

9 I’m not very interested in new algorithms with the same theorems as older algorithms. [sent-13, score-0.599]

10 I’m also cautious about new analysis for older algorithms, since I like to see analysis driving algorithm rather than vice-versa. [sent-14, score-0.789]

11 I prioritize a proof-of-possibility over a quantitative improvement. [sent-15, score-0.408]

12 I consider quantitative improvements of small constant factors in sample complexity significant. [sent-16, score-0.865]

13 For computational complexity, I generally want to see at least an order of magnitude improvement. [sent-17, score-0.268]

14 I generally disregard any experiments on toy data, because I’ve found that toy data and real data can too-easily differ in their behavior. [sent-18, score-1.103]

15 My personal interests are pretty well covered by existing papers, but this is perhaps not too important a criterion, compared to the above, as I easily believe other subjects are interesting. [sent-19, score-0.78]
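
The exact scoring pipeline behind the list above is not part of this page. As a rough illustration only, here is a minimal sketch of one plausible rule (sum of a sentence's tfidf weights, normalized by its length), assuming Python and scikit-learn; all variable names are illustrative and the real scores above may be normalized differently.

from sklearn.feature_extraction.text import TfidfVectorizer

# Sentences of the post (only the first two are spelled out here for brevity).
sentences = [
    "Michael Littman and Leon Bottou have decided to use a franchise program chair approach to reviewing at ICML this year.",
    "I'll be one of the area chairs, so I wanted to mention a few things if you are thinking about naming me.",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(sentences)              # one tfidf row per sentence

# Score = sum of a sentence's tfidf weights, normalized by its word count,
# so longer sentences are not automatically favored.
scores = X.sum(axis=1).A1 / [max(len(s.split()), 1) for s in sentences]

for rank, i in enumerate(scores.argsort()[::-1], start=1):
    print(rank, sentences[i], "[sent-%d, score-%.3f]" % (i + 1, scores[i]))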


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('quantitative', 0.277), ('toy', 0.261), ('older', 0.201), ('subjects', 0.193), ('papers', 0.193), ('rarely', 0.186), ('generally', 0.169), ('consider', 0.155), ('franchise', 0.149), ('categorical', 0.149), ('cautious', 0.149), ('factors', 0.149), ('reviewing', 0.139), ('prioritize', 0.131), ('littman', 0.131), ('analysis', 0.116), ('complexity', 0.115), ('data', 0.114), ('bottou', 0.112), ('mention', 0.112), ('believe', 0.109), ('criteria', 0.109), ('new', 0.106), ('reviewed', 0.106), ('interests', 0.106), ('interested', 0.104), ('leon', 0.103), ('algorithms', 0.102), ('driving', 0.101), ('element', 0.101), ('arguments', 0.101), ('magnitude', 0.099), ('implications', 0.099), ('decided', 0.097), ('zero', 0.097), ('differ', 0.097), ('covered', 0.097), ('analyze', 0.091), ('fair', 0.091), ('chair', 0.088), ('real', 0.087), ('wanted', 0.087), ('considered', 0.087), ('constant', 0.086), ('michael', 0.086), ('theorems', 0.086), ('chairs', 0.086), ('improvements', 0.083), ('plausibly', 0.083), ('personal', 0.082)]
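
The (word, weight) pairs above read as a per-post tfidf vector, and the simValue column in the list below is most naturally interpreted as cosine similarity between such vectors. A minimal sketch of both steps, assuming scikit-learn; the tiny posts dictionary and the choice of 50 top words are illustrative, not taken from the original pipeline.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Tiny illustrative corpus keyed by blogId; a real run would use every hunch.net post.
posts = {
    "325": "Michael Littman and Leon Bottou have decided to use a franchise program chair approach to reviewing at ICML this year.",
    "454": "Normally, I don't indulge in posters for ICML, but this year is naturally an exception for me.",
    "204": "Bob Williamson and I are the learning theory PC members at NIPS this year.",
}

ids = list(posts)
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform([posts[i] for i in ids])   # posts x vocabulary

query = ids.index("325")                                 # this post
row = X[query].toarray().ravel()

# Top-weighted words for the query post, analogous to the (word, weight) list above.
vocab = vectorizer.get_feature_names_out()
top_words = sorted(zip(vocab, row), key=lambda t: -t[1])[:50]

# Cosine similarity against every post, analogous to the simValue ranking below.
sims = cosine_similarity(X[query], X).ravel()
ranked = sorted(zip(ids, sims), key=lambda t: -t[1])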

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0000001 325 hunch net-2008-11-10-ICML Reviewing Criteria


2 0.22904359 454 hunch net-2012-01-30-ICML Posters and Scope

Introduction: Normally, I don’t indulge in posters for ICML , but this year is naturally an exception for me. If you want one, there are a small number left here , if you sign up before February. It also seems worthwhile to give some sense of the scope and reviewing criteria for ICML for authors considering submitting papers. At ICML, the (very large) program committee does the reviewing which informs final decisions by area chairs on most papers. Program chairs setup the process, deal with exceptions or disagreements, and provide advice for the reviewing process. Providing advice is tricky (and easily misleading) because a conference is a community, and in the end the aggregate interests of the community determine the conference. Nevertheless, as a program chair this year it seems worthwhile to state the overall philosophy I have and what I plan to encourage (and occasionally discourage). At the highest level, I believe ICML exists to further research into machine learning, which I gene

3 0.1631639 437 hunch net-2011-07-10-ICML 2011 and the future

Introduction: Unfortunately, I ended up sick for much of this ICML. I did manage to catch one interesting paper: Richard Socher , Cliff Lin , Andrew Y. Ng , and Christopher D. Manning Parsing Natural Scenes and Natural Language with Recursive Neural Networks . I invited Richard to share his list of interesting papers, so hopefully we’ll hear from him soon. In the meantime, Paul and Hal have posted some lists. the future Joelle and I are program chairs for ICML 2012 in Edinburgh , which I previously enjoyed visiting in 2005 . This is a huge responsibility, that we hope to accomplish well. A part of this (perhaps the most fun part), is imagining how we can make ICML better. A key and critical constraint is choosing things that can be accomplished. So far we have: Colocation . The first thing we looked into was potential colocations. We quickly discovered that many other conferences precomitted their location. For the future, getting a colocation with ACL or SIGI

4 0.15129197 204 hunch net-2006-08-28-Learning Theory standards for NIPS 2006

Introduction: Bob Williamson and I are the learning theory PC members at NIPS this year. This is some attempt to state the standards and tests I applied to the papers. I think it is a good idea to talk about this for two reasons: Making community standards a matter of public record seems healthy. It give us a chance to debate what is and is not the right standard. It might even give us a bit more consistency across the years. It may save us all time. There are a number of papers submitted which just aren’t there yet. Avoiding submitting is the right decision in this case. There are several criteria for judging a paper. All of these were active this year. Some criteria are uncontroversial while others may be so. The paper must have a theorem establishing something new for which it is possible to derive high confidence in the correctness of the results. A surprising number of papers fail this test. This criteria seems essential to the definition of “theory”. Missing theo

5 0.14476521 484 hunch net-2013-06-16-Representative Reviewing

Introduction: When thinking about how best to review papers, it seems helpful to have some conception of what good reviewing is. As far as I can tell, this is almost always only discussed in the specific context of a paper (i.e. your rejected paper), or at most an area (i.e. what a “good paper” looks like for that area) rather than general principles. Neither individual papers or areas are sufficiently general for a large conference—every paper differs in the details, and what if you want to build a new area and/or cross areas? An unavoidable reason for reviewing is that the community of research is too large. In particular, it is not possible for a researcher to read every paper which someone thinks might be of interest. This reason for reviewing exists independent of constraints on rooms or scheduling formats of individual conferences. Indeed, history suggests that physical constraints are relatively meaningless over the long term — growing conferences simply use more rooms and/or change fo

6 0.14257488 395 hunch net-2010-04-26-Compassionate Reviewing

7 0.13988352 403 hunch net-2010-07-18-ICML & COLT 2010

8 0.13862659 343 hunch net-2009-02-18-Decision by Vetocracy

9 0.13785857 320 hunch net-2008-10-14-Who is Responsible for a Bad Review?

10 0.13488646 233 hunch net-2007-02-16-The Forgetting

11 0.12724316 452 hunch net-2012-01-04-Why ICML? and the summer conferences

12 0.12637028 318 hunch net-2008-09-26-The SODA Program Committee

13 0.12340264 304 hunch net-2008-06-27-Reviewing Horror Stories

14 0.12283596 30 hunch net-2005-02-25-Why Papers?

15 0.11832941 453 hunch net-2012-01-28-Why COLT?

16 0.11831463 332 hunch net-2008-12-23-Use of Learning Theory

17 0.11459301 315 hunch net-2008-09-03-Bidding Problems

18 0.10844799 52 hunch net-2005-04-04-Grounds for Rejection

19 0.10576347 466 hunch net-2012-06-05-ICML acceptance statistics

20 0.10306152 40 hunch net-2005-03-13-Avoiding Bad Reviewing


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.259), (1, -0.09), (2, 0.091), (3, 0.03), (4, 0.105), (5, 0.041), (6, -0.035), (7, 0.017), (8, -0.008), (9, -0.053), (10, 0.051), (11, 0.01), (12, -0.01), (13, -0.011), (14, 0.001), (15, -0.015), (16, -0.004), (17, 0.02), (18, 0.012), (19, -0.053), (20, 0.117), (21, -0.002), (22, -0.029), (23, -0.03), (24, -0.0), (25, -0.054), (26, -0.002), (27, -0.071), (28, 0.007), (29, -0.101), (30, 0.022), (31, -0.031), (32, 0.038), (33, 0.027), (34, 0.044), (35, -0.041), (36, 0.076), (37, 0.014), (38, 0.051), (39, -0.018), (40, -0.051), (41, 0.141), (42, 0.026), (43, 0.077), (44, 0.036), (45, 0.035), (46, 0.041), (47, -0.04), (48, 0.05), (49, 0.057)]
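
The 50 numbers above look like this post's coordinates after a latent semantic indexing (lsi) step, i.e. a truncated SVD of the tfidf matrix, with similar blogs then ranked by cosine similarity in that reduced space. A minimal sketch, continuing from the tfidf sketch earlier (it reuses X, ids, and query); n_components is kept tiny only so the toy corpus runs, whereas the listing above shows 50 topic weights.

from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Project the tfidf matrix into a low-dimensional latent topic space.
svd = TruncatedSVD(n_components=2, random_state=0)    # the page appears to use 50 topics
Z = svd.fit_transform(X)                              # posts x topics, rows like the topicWeight list above

# Rank posts by cosine similarity of their topic vectors.
lsi_sims = cosine_similarity(Z[query:query + 1], Z).ravel()
lsi_ranked = sorted(zip(ids, lsi_sims), key=lambda t: -t[1])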

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.98258239 325 hunch net-2008-11-10-ICML Reviewing Criteria


2 0.73298955 454 hunch net-2012-01-30-ICML Posters and Scope

Introduction: Normally, I don’t indulge in posters for ICML , but this year is naturally an exception for me. If you want one, there are a small number left here , if you sign up before February. It also seems worthwhile to give some sense of the scope and reviewing criteria for ICML for authors considering submitting papers. At ICML, the (very large) program committee does the reviewing which informs final decisions by area chairs on most papers. Program chairs setup the process, deal with exceptions or disagreements, and provide advice for the reviewing process. Providing advice is tricky (and easily misleading) because a conference is a community, and in the end the aggregate interests of the community determine the conference. Nevertheless, as a program chair this year it seems worthwhile to state the overall philosophy I have and what I plan to encourage (and occasionally discourage). At the highest level, I believe ICML exists to further research into machine learning, which I gene

3 0.71969748 318 hunch net-2008-09-26-The SODA Program Committee

Introduction: Claire asked me to be on the SODA program committee this year, which was quite a bit of work. I had a relatively light load—merely 49 theory papers. Many of these papers were not on subjects that I was expert about, so (as is common for theory conferences) I found various reviewers that I trusted to help review the papers. I ended up reviewing about 1/3 personally. There were a couple instances where I ended up overruling a subreviewer whose logic seemed off, but otherwise I generally let their reviews stand. There are some differences in standards for paper reviews between the machine learning and theory communities. In machine learning it is expected that a review be detailed, while in the theory community this is often not the case. Every paper given to me ended up with a review varying between somewhat and very detailed. I’m sure not every author was happy with the outcome. While we did our best to make good decisions, they were difficult decisions to make. For exam

4 0.71919078 304 hunch net-2008-06-27-Reviewing Horror Stories

Introduction: Essentially everyone who writes research papers suffers rejections. They always sting immediately, but upon further reflection many of these rejections come to seem reasonable. Maybe the equations had too many typos or maybe the topic just isn’t as important as was originally thought. A few rejections do not come to seem acceptable, and these form the basis of reviewing horror stories, a great material for conversations. I’ve decided to share three of mine, now all safely a bit distant in the past. Prediction Theory for Classification Tutorial . This is a tutorial about tight sample complexity bounds for classification that I submitted to JMLR . The first decision I heard was a reject which appeared quite unjust to me—for example one of the reviewers appeared to claim that all the content was in standard statistics books. Upon further inquiry, several citations were given, none of which actually covered the content. Later, I was shocked to hear the paper was accepted. App

5 0.7055558 204 hunch net-2006-08-28-Learning Theory standards for NIPS 2006

Introduction: Bob Williamson and I are the learning theory PC members at NIPS this year. This is some attempt to state the standards and tests I applied to the papers. I think it is a good idea to talk about this for two reasons: Making community standards a matter of public record seems healthy. It give us a chance to debate what is and is not the right standard. It might even give us a bit more consistency across the years. It may save us all time. There are a number of papers submitted which just aren’t there yet. Avoiding submitting is the right decision in this case. There are several criteria for judging a paper. All of these were active this year. Some criteria are uncontroversial while others may be so. The paper must have a theorem establishing something new for which it is possible to derive high confidence in the correctness of the results. A surprising number of papers fail this test. This criteria seems essential to the definition of “theory”. Missing theo

6 0.6950981 233 hunch net-2007-02-16-The Forgetting

7 0.63922369 30 hunch net-2005-02-25-Why Papers?

8 0.63755685 188 hunch net-2006-06-30-ICML papers

9 0.63546664 315 hunch net-2008-09-03-Bidding Problems

10 0.62989545 52 hunch net-2005-04-04-Grounds for Rejection

11 0.62097734 221 hunch net-2006-12-04-Structural Problems in NIPS Decision Making

12 0.61030865 403 hunch net-2010-07-18-ICML & COLT 2010

13 0.60775673 343 hunch net-2009-02-18-Decision by Vetocracy

14 0.60718244 437 hunch net-2011-07-10-ICML 2011 and the future

15 0.59228379 463 hunch net-2012-05-02-ICML: Behind the Scenes

16 0.58270025 306 hunch net-2008-07-02-Proprietary Data in Academic Research?

17 0.57484794 368 hunch net-2009-08-26-Another 10-year paper in Machine Learning

18 0.57307404 466 hunch net-2012-06-05-ICML acceptance statistics

19 0.57059366 98 hunch net-2005-07-27-Not goal metrics

20 0.57020301 288 hunch net-2008-02-10-Complexity Illness


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(9, 0.044), (10, 0.04), (27, 0.301), (38, 0.081), (53, 0.013), (55, 0.131), (81, 0.184), (94, 0.083), (95, 0.023)]
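
The pairs above are a per-document topic distribution from an lda (latent Dirichlet allocation) model: each entry is (topicId, topicWeight), and the weights over all topics sum to roughly one. A minimal sketch of producing such a distribution, again continuing from the earlier sketch (it reuses posts, ids, and query); lda is normally fit on raw term counts rather than tfidf, and the topic count here is kept tiny only so the toy corpus runs, whereas the topic ids above run into the 90s.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# lda works on raw term counts rather than tfidf weights.
counts = CountVectorizer(stop_words="english").fit_transform([posts[i] for i in ids])

lda = LatentDirichletAllocation(n_components=5, random_state=0)
theta = lda.fit_transform(counts)                     # posts x topics, each row sums to ~1

# Non-negligible topics for the query post, analogous to the (topicId, topicWeight) pairs above.
doc_topics = [(t, round(w, 3)) for t, w in enumerate(theta[query]) if w > 0.01]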

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.94122499 325 hunch net-2008-11-10-ICML Reviewing Criteria


2 0.8599695 343 hunch net-2009-02-18-Decision by Vetocracy

Introduction: Few would mistake the process of academic paper review for a fair process, but sometimes the unfairness seems particularly striking. This is most easily seen by comparison: Paper Banditron Offset Tree Notes Problem Scope Multiclass problems where only the loss of one choice can be probed. Strictly greater: Cost sensitive multiclass problems where only the loss of one choice can be probed. Often generalizations don’t matter. That’s not the case here, since every plausible application I’ve thought of involves loss functions substantially different from 0/1. What’s new Analysis and Experiments Algorithm, Analysis, and Experiments As far as I know, the essence of the more general problem was first stated and analyzed with the EXP4 algorithm (page 16) (1998). It’s also the time horizon 1 simplification of the Reinforcement Learning setting for the random trajectory method (page 15) (2002). The Banditron algorithm itself is functionally identi

3 0.85389644 51 hunch net-2005-04-01-The Producer-Consumer Model of Research

Introduction: In the quest to understand what good reviewing is, perhaps it’s worthwhile to think about what good research is. One way to think about good research is in terms of a producer/consumer model. In the producer/consumer model of research, for any element of research there are producers (authors and coauthors of papers, for example) and consumers (people who use the papers to make new papers or code solving problems). An produced bit of research is judged as “good” if it is used by many consumers. There are two basic questions which immediately arise: Is this a good model of research? Are there alternatives? The producer/consumer model has some difficulties which can be (partially) addressed. Disconnect. A group of people doing research on some subject may become disconnected from the rest of the world. Each person uses the research of other people in the group so it appears good research is being done, but the group has no impact on the rest of the world. One way

4 0.84933364 360 hunch net-2009-06-15-In Active Learning, the question changes

Introduction: A little over 4 years ago, Sanjoy made a post saying roughly “we should study active learning theoretically, because not much is understood”. At the time, we did not understand basic things such as whether or not it was possible to PAC-learn with an active algorithm without making strong assumptions about the noise rate. In other words, the fundamental question was “can we do it?” The nature of the question has fundamentally changed in my mind. The answer is to the previous question is “yes”, both information theoretically and computationally, most places where supervised learning could be applied. In many situation, the question has now changed to: “is it worth it?” Is the programming and computational overhead low enough to make the label cost savings of active learning worthwhile? Currently, there are situations where this question could go either way. Much of the challenge for the future is in figuring out how to make active learning easier or more worthwhile.

5 0.84676373 320 hunch net-2008-10-14-Who is Responsible for a Bad Review?

Introduction: Although I’m greatly interested in machine learning, I think it must be admitted that there is a large amount of low quality logic being used in reviews. The problem is bad enough that sometimes I wonder if the Byzantine generals limit has been exceeded. For example, I’ve seen recent reviews where the given reasons for rejecting are: [ NIPS ] Theorem A is uninteresting because Theorem B is uninteresting. [ UAI ] When you learn by memorization, the problem addressed is trivial. [NIPS] The proof is in the appendix. [NIPS] This has been done before. (… but not giving any relevant citations) Just for the record I want to point out what’s wrong with these reviews. A future world in which such reasons never come up again would be great, but I’m sure these errors will be committed many times more in the future. This is nonsense. A theorem should be evaluated based on it’s merits, rather than the merits of another theorem. Learning by memorization requires an expon

6 0.84465158 304 hunch net-2008-06-27-Reviewing Horror Stories

7 0.84448433 406 hunch net-2010-08-22-KDD 2010

8 0.84395158 95 hunch net-2005-07-14-What Learning Theory might do

9 0.84335619 160 hunch net-2006-03-02-Why do people count for learning?

10 0.84299254 196 hunch net-2006-07-13-Regression vs. Classification as a Primitive

11 0.84251499 230 hunch net-2007-02-02-Thoughts regarding “Is machine learning different from statistics?”

12 0.8418752 194 hunch net-2006-07-11-New Models

13 0.84137487 44 hunch net-2005-03-21-Research Styles in Machine Learning

14 0.84056646 259 hunch net-2007-08-19-Choice of Metrics

15 0.83952236 309 hunch net-2008-07-10-Interesting papers, ICML 2008

16 0.83900404 220 hunch net-2006-11-27-Continuizing Solutions

17 0.83680469 378 hunch net-2009-11-15-The Other Online Learning

18 0.83604604 225 hunch net-2007-01-02-Retrospective

19 0.83508247 235 hunch net-2007-03-03-All Models of Learning have Flaws

20 0.83364898 432 hunch net-2011-04-20-The End of the Beginning of Active Learning