hunch_net hunch_net-2006 hunch_net-2006-180 knowledge-graph by maker-knowledge-mining
Source: html
Introduction: John Platt, who is the PC chair for NIPS 2006, has organized a NIPS paper evaluation criteria document with input from the program committee and others. The document contains specific advice about what is appropriate for the various subareas within NIPS. It may be very helpful, because the standards of evaluation for papers vary significantly. This is a bit of an experiment: the hope is that by carefully thinking about and stating what is important, authors can better understand whether and where their work fits. Update: see the general submission page and the author instructions, including how to submit an appendix.
sentIndex sentText sentNum sentScore
1 John Platt, who is the PC chair for NIPS 2006, has organized a NIPS paper evaluation criteria document with input from the program committee and others. [sent-1, score-1.523]
2 The document contains specific advice about what is appropriate for the various subareas within NIPS. [sent-2, score-1.244]
3 It may be very helpful, because the standards of evaluation for papers vary significantly. [sent-3, score-0.77]
4 This is a bit of an experiment: the hope is that by carefully thinking about and stating what is important, authors can better understand whether and where their work fits. [sent-4, score-1.008]
5 Update: see the general submission page and the author instructions, including how to submit an appendix. [sent-5, score-1.072]
wordName wordTfidf (topN-words)
[('document', 0.431), ('evaluation', 0.302), ('instruction', 0.228), ('appendix', 0.216), ('nips', 0.2), ('stating', 0.191), ('standards', 0.185), ('criteria', 0.179), ('organized', 0.175), ('contains', 0.175), ('varies', 0.17), ('submission', 0.17), ('submit', 0.16), ('committee', 0.154), ('advice', 0.151), ('update', 0.141), ('experiment', 0.139), ('appropriate', 0.137), ('specific', 0.135), ('page', 0.132), ('john', 0.13), ('input', 0.128), ('within', 0.124), ('authors', 0.123), ('carefully', 0.123), ('author', 0.11), ('thinking', 0.108), ('whether', 0.104), ('helpful', 0.103), ('including', 0.1), ('program', 0.095), ('various', 0.091), ('hope', 0.089), ('understand', 0.083), ('bit', 0.074), ('important', 0.071), ('general', 0.066), ('papers', 0.064), ('better', 0.061), ('paper', 0.059), ('work', 0.052), ('may', 0.049)]
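The exact term-weighting scheme behind these wordTfidf scores is not documented in this dump. Below is a minimal sketch of one standard TF-IDF computation, assuming a plain word tokenizer, raw term frequency, and log-scaled inverse document frequency; the toy corpus, function names, and weighting variant are illustrative assumptions, not the pipeline actually used.

```python
import math
import re
from collections import Counter

def tokenize(text):
    # Lowercase word tokenizer; the tokenizer used by the mining pipeline is unknown.
    return re.findall(r"[a-z]+", text.lower())

def tfidf(doc, corpus):
    """Return {word: tf-idf} for one document against a corpus of documents.

    Uses raw term frequency and log-scaled inverse document frequency;
    the wordTfidf scores above may come from a different variant.
    """
    docs_tokens = [tokenize(d) for d in corpus]
    n_docs = len(docs_tokens)
    # Document frequency: how many documents contain each word.
    df = Counter()
    for tokens in docs_tokens:
        df.update(set(tokens))
    tf = Counter(tokenize(doc))
    return {word: count * math.log(n_docs / df[word]) for word, count in tf.items()}

# Hypothetical usage: score the post's introduction against a toy corpus of posts.
corpus = [
    "John Platt has organized a NIPS paper evaluation criteria document.",
    "COLT is often the best place for theory papers.",
    "Reviewing horror stories are great material for conversations.",
]
scores = tfidf(corpus[0], corpus)
for word, score in sorted(scores.items(), key=lambda item: -item[1])[:5]:
    print(f"{word}: {score:.3f}")
```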
simIndex simValue blogId blogTitle
same-blog 1 1.0 180 hunch net-2006-05-21-NIPS paper evaluation criteria
2 0.17260499 453 hunch net-2012-01-28-Why COLT?
Introduction: By Shie and Nati. Following John’s advertisement for submitting to ICML, we thought it appropriate to highlight the advantages of COLT, and the reasons it is often the best place for theory papers. We would like to emphasize that we both respect ICML, and are active in ICML, both as authors and as area chairs, and certainly are not arguing that ICML is a bad place for your papers. For many papers, ICML is the best venue. But for many theory papers, COLT is a better and more appropriate place. Why should you submit to COLT? By and large, theory papers go to COLT. This is the tradition of the field and most theory papers are sent to COLT. This is the place to present your ground-breaking theorems and new models that will shape the theory of machine learning. COLT is more focused than ICML, with a single-track session. Unlike ICML, the norm at COLT is for people to sit through most sessions, and hear most of the talks presented. There is also often a lively discussion followi
3 0.14084181 204 hunch net-2006-08-28-Learning Theory standards for NIPS 2006
Introduction: Bob Williamson and I are the learning theory PC members at NIPS this year. This is some attempt to state the standards and tests I applied to the papers. I think it is a good idea to talk about this for two reasons: Making community standards a matter of public record seems healthy. It gives us a chance to debate what is and is not the right standard. It might even give us a bit more consistency across the years. It may save us all time. There are a number of papers submitted which just aren’t there yet. Avoiding submitting is the right decision in this case. There are several criteria for judging a paper. All of these were active this year. Some criteria are uncontroversial while others may not be so. The paper must have a theorem establishing something new for which it is possible to derive high confidence in the correctness of the results. A surprising number of papers fail this test. This criterion seems essential to the definition of “theory”. Missing theo
4 0.12751128 304 hunch net-2008-06-27-Reviewing Horror Stories
Introduction: Essentially everyone who writes research papers suffers rejections. They always sting immediately, but upon further reflection many of these rejections come to seem reasonable. Maybe the equations had too many typos or maybe the topic just isn’t as important as was originally thought. A few rejections do not come to seem acceptable, and these form the basis of reviewing horror stories, great material for conversations. I’ve decided to share three of mine, now all safely a bit distant in the past. Prediction Theory for Classification Tutorial. This is a tutorial about tight sample complexity bounds for classification that I submitted to JMLR. The first decision I heard was a reject which appeared quite unjust to me; for example, one of the reviewers appeared to claim that all the content was in standard statistics books. Upon further inquiry, several citations were given, none of which actually covered the content. Later, I was shocked to hear the paper was accepted. App
5 0.11460889 454 hunch net-2012-01-30-ICML Posters and Scope
Introduction: Normally, I don’t indulge in posters for ICML, but this year is naturally an exception for me. If you want one, there are a small number left here, if you sign up before February. It also seems worthwhile to give some sense of the scope and reviewing criteria for ICML for authors considering submitting papers. At ICML, the (very large) program committee does the reviewing which informs final decisions by area chairs on most papers. Program chairs set up the process, deal with exceptions or disagreements, and provide advice for the reviewing process. Providing advice is tricky (and easily misleading) because a conference is a community, and in the end the aggregate interests of the community determine the conference. Nevertheless, as a program chair this year it seems worthwhile to state the overall philosophy I have and what I plan to encourage (and occasionally discourage). At the highest level, I believe ICML exists to further research into machine learning, which I gene
6 0.11404837 320 hunch net-2008-10-14-Who is Responsible for a Bad Review?
7 0.1000344 472 hunch net-2012-08-27-NYAS ML 2012 and ICML 2013
8 0.097485662 437 hunch net-2011-07-10-ICML 2011 and the future
9 0.097227797 484 hunch net-2013-06-16-Representative Reviewing
10 0.096261673 221 hunch net-2006-12-04-Structural Problems in NIPS Decision Making
11 0.091305852 488 hunch net-2013-08-31-Extreme Classification workshop at NIPS
12 0.079299606 409 hunch net-2010-09-13-AIStats
13 0.077270426 403 hunch net-2010-07-18-ICML & COLT 2010
14 0.075482577 116 hunch net-2005-09-30-Research in conferences
15 0.074758992 65 hunch net-2005-05-02-Reviewing techniques for conferences
16 0.074613124 318 hunch net-2008-09-26-The SODA Program Committee
17 0.072166286 461 hunch net-2012-04-09-ICML author feedback is open
18 0.071310341 233 hunch net-2007-02-16-The Forgetting
19 0.070084125 71 hunch net-2005-05-14-NIPS
20 0.069630988 288 hunch net-2008-02-10-Complexity Illness
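How the simValue scores pairing this post with related posts were produced is likewise unspecified. A common choice for this kind of ranking is cosine similarity between per-post word-weight vectors; the sketch below works under that assumption, using hypothetical vectors in the style of the wordTfidf list above.

```python
import math

def cosine_similarity(vec_a, vec_b):
    """Cosine similarity between two sparse word-weight dicts."""
    shared = set(vec_a) & set(vec_b)
    dot = sum(vec_a[w] * vec_b[w] for w in shared)
    norm_a = math.sqrt(sum(v * v for v in vec_a.values()))
    norm_b = math.sqrt(sum(v * v for v in vec_b.values()))
    if norm_a == 0.0 or norm_b == 0.0:
        return 0.0
    return dot / (norm_a * norm_b)

# Hypothetical word-weight vectors; the real vectors and the real similarity
# measure behind the simValue column are not given in this dump.
post_180 = {"document": 0.431, "evaluation": 0.302, "nips": 0.2, "criteria": 0.179}
post_453 = {"colt": 0.5, "theory": 0.3, "papers": 0.25, "criteria": 0.1}
print(round(cosine_similarity(post_180, post_453), 3))
```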
topicId topicWeight
[(0, 0.132), (1, -0.104), (2, 0.081), (3, -0.007), (4, 0.048), (5, 0.067), (6, 0.009), (7, -0.02), (8, -0.0), (9, -0.047), (10, 0.012), (11, -0.05), (12, -0.036), (13, 0.023), (14, -0.03), (15, -0.058), (16, 0.032), (17, -0.041), (18, -0.003), (19, 0.012), (20, -0.0), (21, 0.089), (22, 0.025), (23, -0.026), (24, -0.004), (25, 0.038), (26, 0.005), (27, 0.013), (28, -0.019), (29, -0.064), (30, 0.046), (31, -0.035), (32, -0.03), (33, 0.006), (34, -0.006), (35, 0.08), (36, 0.015), (37, 0.122), (38, 0.006), (39, 0.035), (40, 0.079), (41, 0.066), (42, 0.019), (43, 0.053), (44, -0.037), (45, 0.02), (46, 0.082), (47, -0.096), (48, -0.02), (49, 0.023)]
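The topicWeight vector above includes negative entries, which is consistent with a latent semantic (SVD-based) projection rather than a probabilistic topic model, though that is only an inference from the signs. Purely as an illustration of how signed topic weights can arise, here is a minimal numpy sketch that projects hypothetical TF-IDF rows onto their top singular directions; the matrix, vocabulary size, and number of topics are all assumptions.

```python
import numpy as np

# Hypothetical document-by-word TF-IDF matrix (4 documents, 6 vocabulary words).
# The real vocabulary, weighting, and dimensionality are not given in this dump.
X = np.array([
    [0.43, 0.30, 0.20, 0.00, 0.00, 0.18],
    [0.00, 0.10, 0.05, 0.50, 0.30, 0.00],
    [0.10, 0.00, 0.15, 0.40, 0.35, 0.05],
    [0.35, 0.25, 0.30, 0.00, 0.05, 0.20],
])

# Truncated SVD: keep the top k latent directions ("topics").
k = 2
U, S, Vt = np.linalg.svd(X, full_matrices=False)
topic_weights = U[:, :k] * S[:k]  # each row: one document's signed topic weights

print(np.round(topic_weights, 3))
```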
simIndex simValue blogId blogTitle
same-blog 1 0.98584837 180 hunch net-2006-05-21-NIPS paper evaluation criteria
2 0.66168427 304 hunch net-2008-06-27-Reviewing Horror Stories
3 0.63214964 409 hunch net-2010-09-13-AIStats
Introduction: Geoff Gordon points out AIStats 2011 in Ft. Lauderdale, Florida. The call for papers is now out, due Nov. 1. The plan is to experiment with the review process to encourage quality in several ways. I expect to submit a paper and would encourage others with good research to do likewise.
4 0.62135607 221 hunch net-2006-12-04-Structural Problems in NIPS Decision Making
Introduction: This is a very difficult post to write, because it is about a perennially touchy subject. Nevertheless, it is an important one which needs to be thought about carefully. There are a few things which should be understood: The system is changing and responsive. We-the-authors are we-the-reviewers, we-the-PC, and even we-the-NIPS-board. NIPS has implemented ‘secondary program chairs’, ‘author response’, and ‘double blind reviewing’ in the last few years to help with the decision process, and more changes may happen in the future. Agreement creates a perception of correctness. When any PC meets and makes a group decision about a paper, there is a strong tendency for the reinforcement inherent in a group decision to create the perception of correctness. For the many people who have been on the NIPS PC it’s reasonable to entertain a healthy skepticism in the face of this reinforcing certainty. This post is about structural problems. What problems arise because of the structure
5 0.57162571 318 hunch net-2008-09-26-The SODA Program Committee
Introduction: Claire asked me to be on the SODA program committee this year, which was quite a bit of work. I had a relatively light load—merely 49 theory papers. Many of these papers were not on subjects that I was expert about, so (as is common for theory conferences) I found various reviewers that I trusted to help review the papers. I ended up reviewing about 1/3 personally. There were a couple instances where I ended up overruling a subreviewer whose logic seemed off, but otherwise I generally let their reviews stand. There are some differences in standards for paper reviews between the machine learning and theory communities. In machine learning it is expected that a review be detailed, while in the theory community this is often not the case. Every paper given to me ended up with a review varying between somewhat and very detailed. I’m sure not every author was happy with the outcome. While we did our best to make good decisions, they were difficult decisions to make. For exam
6 0.56844079 453 hunch net-2012-01-28-Why COLT?
7 0.55871975 320 hunch net-2008-10-14-Who is Responsible for a Bad Review?
8 0.53230637 437 hunch net-2011-07-10-ICML 2011 and the future
9 0.51935267 238 hunch net-2007-04-13-What to do with an unreasonable conditional accept
10 0.51612216 325 hunch net-2008-11-10-ICML Reviewing Criteria
11 0.50658476 484 hunch net-2013-06-16-Representative Reviewing
12 0.5032258 199 hunch net-2006-07-26-Two more UAI papers of interest
13 0.4998152 488 hunch net-2013-08-31-Extreme Classification workshop at NIPS
14 0.49943805 417 hunch net-2010-11-18-ICML 2011 – Call for Tutorials
15 0.48170513 463 hunch net-2012-05-02-ICML: Behind the Scenes
16 0.48105845 30 hunch net-2005-02-25-Why Papers?
17 0.48081303 315 hunch net-2008-09-03-Bidding Problems
18 0.45840523 88 hunch net-2005-07-01-The Role of Impromptu Talks
19 0.45824516 288 hunch net-2008-02-10-Complexity Illness
20 0.45436251 454 hunch net-2012-01-30-ICML Posters and Scope
topicId topicWeight
[(10, 0.073), (27, 0.109), (53, 0.07), (55, 0.251), (67, 0.356)]
simIndex simValue blogId blogTitle
same-blog 1 0.91993165 180 hunch net-2006-05-21-NIPS paper evaluation criteria
2 0.83736151 192 hunch net-2006-07-08-Some recent papers
Introduction: It was a fine time for learning in Pittsburgh. John and Sam mentioned some of my favorites. Here’s a few more worth checking out: Online Multitask Learning Ofer Dekel, Phil Long, Yoram Singer This is on my reading list. Definitely an area I’m interested in. Maximum Entropy Distribution Estimation with Generalized Regularization Miroslav Dudík, Robert E. Schapire Learning near-optimal policies with Bellman-residual minimization based fitted policy iteration and a single sample path András Antos, Csaba Szepesvári, Rémi Munos Again, on the list to read. I saw Csaba and Remi talk about this and related work at an ICML Workshop on Kernel Reinforcement Learning. The big question in my head is how this compares/contrasts with existing work in reductions to reinforcement learning. Are there advantages/disadvantages? Higher Order Learning On Graphs by Sameer Agarwal, Kristin Branson, and Serge Belongie, looks to be interesting. They seem to poo-poo “tensorization
3 0.69319737 296 hunch net-2008-04-21-The Science 2.0 article
Introduction: I found the article about science using modern tools interesting , especially the part about ‘blogophobia’, which in my experience is often a substantial issue: many potential guest posters aren’t quite ready, because of the fear of a permanent public mistake, because it is particularly hard to write about the unknown (the essence of research), and because the system for public credit doesn’t yet really handle blog posts. So far, science has been relatively resistant to discussing research on blogs. Some things need to change to get there. Public tolerance of the occasional mistake is essential, as is a willingness to cite (and credit) blogs as freely as papers. I’ve often run into another reason for holding back myself: I don’t want to overtalk my own research. Nevertheless, I’m slowly changing to the opinion that I’m holding back too much: the real power of a blog in research is that it can be used to confer with many people, and that just makes research work better.
4 0.64126134 463 hunch net-2012-05-02-ICML: Behind the Scenes
Introduction: This is a rather long post, detailing the ICML 2012 review process. The goal is to make the process more transparent, help authors understand how we came to a decision, and discuss the strengths and weaknesses of this process for future conference organizers. Microsoft’s Conference Management Toolkit (CMT) We chose to use CMT over other conference management software mainly because of its rich toolkit. The interface is sub-optimal (to say the least!) but it has extensive capabilities (to handle bids, author response, resubmissions, etc.), good import/export mechanisms (to process the data elsewhere), excellent technical support (to answer late night emails, add new functionalities). Overall, it was the right choice, although we hope a designer will look at that interface sometime soon! Toronto Matching System (TMS) TMS is now being used by many major conferences in our field (including NIPS and UAI). It is an automated system (developed by Laurent Charlin and Rich Ze
5 0.61083114 331 hunch net-2008-12-12-Summer Conferences
Introduction: Here’s a handy table for the summer conferences.
Conference         Deadline       Reviewer Targeting   Double Blind   Author Feedback   Location           Date
ICML (wrong ICML)  January 26     Yes                  Yes            Yes               Montreal, Canada   June 14-17
COLT               February 13    No                   No             Yes               Montreal           June 19-21
UAI                March 13       No                   Yes            No                Montreal           June 19-21
KDD                February 2/6   No                   No             No                Paris, France      June 28-July 1
Reviewer targeting is new this year. The idea is that many poor decisions happen because the papers go to reviewers who are unqualified, and the hope is that allowing authors to point out who is qualified results in better decisions. In my experience, this is a reasonable idea to test. Both UAI and COLT are experimenting this year as well with double blind and author feedback, respectively. Of the two, I believe author feedback is more important, as I’ve seen it make a difference. However, I still consider double blind reviewing a net wi
6 0.60802305 90 hunch net-2005-07-07-The Limits of Learning Theory
7 0.60051376 20 hunch net-2005-02-15-ESPgame and image labeling
8 0.59529847 271 hunch net-2007-11-05-CMU wins DARPA Urban Challenge
9 0.59481883 395 hunch net-2010-04-26-Compassionate Reviewing
10 0.59464258 448 hunch net-2011-10-24-2011 ML symposium and the bears
11 0.59332556 270 hunch net-2007-11-02-The Machine Learning Award goes to …
12 0.59232867 302 hunch net-2008-05-25-Inappropriate Mathematics for Machine Learning
13 0.59027737 446 hunch net-2011-10-03-Monday announcements
14 0.57433122 472 hunch net-2012-08-27-NYAS ML 2012 and ICML 2013
15 0.57364386 453 hunch net-2012-01-28-Why COLT?
16 0.57267225 240 hunch net-2007-04-21-Videolectures.net
17 0.57246631 182 hunch net-2006-06-05-Server Shift, Site Tweaks, Suggestions?
18 0.56975251 387 hunch net-2010-01-19-Deadline Season, 2010
19 0.56056416 356 hunch net-2009-05-24-2009 ICML discussion site
20 0.55553526 452 hunch net-2012-01-04-Why ICML? and the summer conferences