hunch_net-2012-454 knowledge-graph by maker-knowledge-mining

454 hunch net-2012-01-30-ICML Posters and Scope


meta info for this blog

Source: html

Introduction: Normally, I don’t indulge in posters for ICML, but this year is naturally an exception for me. If you want one, there are a small number left here, if you sign up before February. It also seems worthwhile to give some sense of the scope and reviewing criteria for ICML for authors considering submitting papers. At ICML, the (very large) program committee does the reviewing which informs final decisions by area chairs on most papers. Program chairs set up the process, deal with exceptions or disagreements, and provide advice for the reviewing process. Providing advice is tricky (and easily misleading) because a conference is a community, and in the end the aggregate interests of the community determine the conference. Nevertheless, as a program chair this year it seems worthwhile to state the overall philosophy I have and what I plan to encourage (and occasionally discourage). At the highest level, I believe ICML exists to further research into machine learning, which I gene


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 It also seems worthwhile to give some sense of the scope and reviewing criteria for ICML for authors considering submitting papers. [sent-3, score-0.508]

2 Program chairs set up the process, deal with exceptions or disagreements, and provide advice for the reviewing process. [sent-5, score-0.408]

3 Exhibiting new things that you can do with ML provides good reference points for what is possible, provides a sense of what works, and compelling new ideas about what to work on can be valuable to the community. [sent-20, score-0.417]

4 New Algorithms: Often, authors find that existing learning algorithms for solving some problem are lacking in some way, so they propose new, better algorithms. [sent-24, score-0.666]

5 For these papers it’s important to have an empirical comparison to existing baselines. [sent-27, score-0.522]

6 Some authors use synthetic datasets which do not seem significant to me, because good results on such datasets may not transfer to real-world problems well, as the real world tends to be quite a bit more complex than the synthetic processes which are natural to program. [sent-29, score-0.83]

7 One problem with relying on real datasets is dataset selection—choosing the dataset for which your algorithm seems to perform best. [sent-31, score-0.385]

8 Asking around a bit when developing the paper might help here, but in the end this can be a tough judgement call: Is the paper convincing enough that people interested in solving the problem should use this algorithm? [sent-35, score-0.504]

9 Another class of new algorithms papers is new algorithms for new areas of machine learning, blending into the previous category. [sent-36, score-0.856]

10 For papers like this, one way I’ve seen difficulties is when authors are very invested in a particular approach to solving the problem. [sent-38, score-0.36]

11 Another difficulty I’ve observed is that reviewers used to well-studied problems reject an interesting paper because (essentially) they assume that the authors left out a good baseline which does not exist. [sent-40, score-0.65]

12 To prevent the first, authors who ask around might get some valuable early feedback. [sent-41, score-0.358]

13 Algorithmic studies: A relatively rare but potentially valuable form of paper is an algorithmic study. [sent-43, score-0.484]

14 Here, the authors do not propose a new algorithm, but instead do a comprehensive empirical comparison of different algorithms. [sent-44, score-0.809]

15 The standards here are quite high—the empirical comparison needs to be first-class to convince people, so the empirical comparison comments under new algorithms apply strongly. [sent-45, score-0.906]

16 I am personally most interested in theory that helps us design new learning algorithms, but broadly interested in what is possible. [sent-48, score-0.482]

17 In essence, authors who choose to analyze an existing algorithm are sometimes forced to make many unnatural assumptions for the theory to be correct. [sent-53, score-0.517]

18 At the extreme, you might have a new algorithm which solves a new X well, empirically and theoretically. [sent-56, score-0.376]

19 Reviewers can fall into a trap where they are most interested in 1 of the 4 questions answered above, and find 1/4 of the paper devoted to their question relatively weak compared to the paper that devotes all the pages to the same question. [sent-57, score-0.73]

20 The exception: The set of papers I expect to see at ICML is more diverse than the above—there are often exceptions of one sort or another. [sent-59, score-0.378]
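Each extracted sentence above carries a tfidf-based score. As a minimal sketch of how such scores could be computed (the mining pipeline's actual scoring rule is not published, so the rule and the names weights and sentences below are assumptions), one can average the tfidf weights of a sentence's words:

# Hypothetical sketch: score each sentence by the mean tfidf weight of its
# words, then keep the top k. weights maps word -> tfidf weight for this
# post, as in the (wordName, wordTfidf) list further down; the real
# pipeline's scoring rule is an assumption.
def score_sentence(sentence, weights):
    words = [w.strip(".,!?") for w in sentence.lower().split()]
    return sum(weights.get(w, 0.0) for w in words) / max(len(words), 1)

def top_sentences(sentences, weights, k=20):
    scored = ((score_sentence(s, weights), i, s) for i, s in enumerate(sentences))
    return sorted(scored, reverse=True)[:k]

# Usage with a few of the published weights:
weights = {"authors": 0.237, "comparison": 0.182, "icml": 0.167}
print(top_sentences(["ICML authors propose a comparison.", "Unrelated text."], weights, k=1))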


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('authors', 0.237), ('comparison', 0.182), ('icml', 0.167), ('exceptions', 0.156), ('new', 0.148), ('empirical', 0.143), ('paper', 0.128), ('theory', 0.126), ('papers', 0.123), ('datasets', 0.123), ('valuable', 0.121), ('answered', 0.119), ('reviewers', 0.111), ('reviewing', 0.111), ('algorithms', 0.108), ('interesting', 0.105), ('synthetic', 0.104), ('interested', 0.104), ('questions', 0.104), ('propose', 0.099), ('exception', 0.099), ('dataset', 0.091), ('generally', 0.09), ('rare', 0.089), ('criteria', 0.087), ('interests', 0.084), ('asking', 0.082), ('ml', 0.081), ('algorithm', 0.08), ('conference', 0.08), ('tricky', 0.077), ('broad', 0.076), ('question', 0.075), ('existing', 0.074), ('algorithmic', 0.074), ('encourage', 0.074), ('worthwhile', 0.073), ('advice', 0.073), ('areas', 0.073), ('help', 0.072), ('relatively', 0.072), ('bit', 0.072), ('call', 0.072), ('another', 0.071), ('left', 0.069), ('program', 0.069), ('chairs', 0.068), ('natural', 0.067), ('three', 0.067), ('plausibly', 0.066)]
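The (wordName, wordTfidf) pairs above are standard tfidf weights. A minimal sketch of how they could be reproduced with scikit-learn follows; the corpus, tokenizer, and weighting options of the original pipeline are unknown, so posts here is a hypothetical stand-in for the full hunch.net archive:

# Hypothetical sketch: per-post tfidf weights with scikit-learn. All
# preprocessing choices below are assumptions, not the original pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer

posts = [
    "Normally, I don't indulge in posters for ICML, but this year ...",
    "Here's a quick reference for summer ML-related conferences ...",
]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(posts)  # shape: (n_posts, n_terms)

# Top-weighted terms for post 0, mirroring the (wordName, wordTfidf) pairs above.
row = tfidf[0].toarray().ravel()
terms = vectorizer.get_feature_names_out()
print(sorted(zip(terms, row), key=lambda t: -t[1])[:10])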

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.99999934 454 hunch net-2012-01-30-ICML Posters and Scope


2 0.26928386 452 hunch net-2012-01-04-Why ICML? and the summer conferences

Introduction: Here’s a quick reference for summer ML-related conferences sorted by due date:

Conference  Due date  Location                                     Reviewing
KDD         Feb 10    August 12-16, Beijing, China                 Single Blind
COLT        Feb 14    June 25-June 27, Edinburgh, Scotland         Single Blind? (historically)
ICML        Feb 24    June 26-July 1, Edinburgh, Scotland          Double Blind, author response, zero SPOF
UAI         March 30  August 15-17, Catalina Islands, California   Double Blind, author response

Geographically, this is greatly dispersed and the UAI/KDD conflict is unfortunate. Machine Learning conferences are triannual now, between NIPS, AIStat, and ICML. This has not always been the case: the academic default is annual summer conferences, then NIPS started with a December conference, and now AIStat has grown into an April conference. However, the first claim is not quite correct. NIPS and AIStat have few competing venues while ICML implicitly competes with many other conf

3 0.25275147 484 hunch net-2013-06-16-Representative Reviewing

Introduction: When thinking about how best to review papers, it seems helpful to have some conception of what good reviewing is. As far as I can tell, this is almost always only discussed in the specific context of a paper (i.e. your rejected paper), or at most an area (i.e. what a “good paper” looks like for that area) rather than general principles. Neither individual papers nor areas are sufficiently general for a large conference—every paper differs in the details, and what if you want to build a new area and/or cross areas? An unavoidable reason for reviewing is that the community of research is too large. In particular, it is not possible for a researcher to read every paper which someone thinks might be of interest. This reason for reviewing exists independent of constraints on rooms or scheduling formats of individual conferences. Indeed, history suggests that physical constraints are relatively meaningless over the long term — growing conferences simply use more rooms and/or change fo

4 0.24667071 437 hunch net-2011-07-10-ICML 2011 and the future

Introduction: Unfortunately, I ended up sick for much of this ICML. I did manage to catch one interesting paper: Richard Socher, Cliff Lin, Andrew Y. Ng, and Christopher D. Manning, Parsing Natural Scenes and Natural Language with Recursive Neural Networks. I invited Richard to share his list of interesting papers, so hopefully we’ll hear from him soon. In the meantime, Paul and Hal have posted some lists. The future: Joelle and I are program chairs for ICML 2012 in Edinburgh, which I previously enjoyed visiting in 2005. This is a huge responsibility, that we hope to accomplish well. A part of this (perhaps the most fun part), is imagining how we can make ICML better. A key and critical constraint is choosing things that can be accomplished. So far we have: Colocation. The first thing we looked into was potential colocations. We quickly discovered that many other conferences precommitted their location. For the future, getting a colocation with ACL or SIGI

5 0.2368523 453 hunch net-2012-01-28-Why COLT?

Introduction: By Shie and Nati. Following John’s advertisement for submitting to ICML, we thought it appropriate to highlight the advantages of COLT, and the reasons it is often the best place for theory papers. We would like to emphasize that we both respect ICML, and are active in ICML, both as authors and as area chairs, and certainly are not arguing that ICML is a bad place for your papers. For many papers, ICML is the best venue. But for many theory papers, COLT is a better and more appropriate place. Why should you submit to COLT? By-and-large, theory papers go to COLT. This is the tradition of the field and most theory papers are sent to COLT. This is the place to present your ground-breaking theorems and new models that will shape the theory of machine learning. COLT is more focused than ICML with a single track session. Unlike ICML, the norm in COLT is for people to sit through most sessions, and hear most of the talks presented. There is also often a lively discussion followi

6 0.2366171 332 hunch net-2008-12-23-Use of Learning Theory

7 0.22904649 343 hunch net-2009-02-18-Decision by Vetocracy

8 0.22904359 325 hunch net-2008-11-10-ICML Reviewing Criteria

9 0.21019347 204 hunch net-2006-08-28-Learning Theory standards for NIPS 2006

10 0.2055268 395 hunch net-2010-04-26-Compassionate Reviewing

11 0.20538136 403 hunch net-2010-07-18-ICML & COLT 2010

12 0.18893285 320 hunch net-2008-10-14-Who is Responsible for a Bad Review?

13 0.18215625 461 hunch net-2012-04-09-ICML author feedback is open

14 0.17963935 315 hunch net-2008-09-03-Bidding Problems

15 0.17882884 318 hunch net-2008-09-26-The SODA Program Committee

16 0.17745242 304 hunch net-2008-06-27-Reviewing Horror Stories

17 0.17369208 40 hunch net-2005-03-13-Avoiding Bad Reviewing

18 0.1728583 256 hunch net-2007-07-20-Motivation should be the Responsibility of the Reviewer

19 0.17131402 116 hunch net-2005-09-30-Research in conferences

20 0.16945647 435 hunch net-2011-05-16-Research Directions for Machine Learning and Algorithms
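The simValue column is consistent with cosine similarity between tfidf vectors: note the same-blog entry scores essentially 1.0 against itself. A minimal sketch, reusing the hypothetical tfidf matrix from the sketch above:

# Hypothetical sketch: rank posts by cosine similarity of their tfidf
# vectors, reproducing the simIndex/simValue ordering. tfidf is the
# (n_posts, n_terms) matrix from the earlier sketch.
from sklearn.metrics.pairwise import cosine_similarity

sims = cosine_similarity(tfidf[0], tfidf).ravel()  # post 0 vs. every post
for idx in sims.argsort()[::-1][:20]:              # most similar first
    print(idx, round(float(sims[idx]), 8))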


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.442), (1, -0.167), (2, 0.11), (3, 0.081), (4, 0.057), (5, 0.011), (6, -0.005), (7, -0.004), (8, -0.014), (9, -0.033), (10, -0.013), (11, -0.021), (12, -0.011), (13, 0.09), (14, 0.037), (15, -0.007), (16, 0.076), (17, -0.06), (18, -0.047), (19, -0.001), (20, 0.06), (21, -0.03), (22, -0.011), (23, -0.021), (24, -0.003), (25, -0.084), (26, -0.017), (27, -0.094), (28, -0.057), (29, -0.053), (30, -0.041), (31, -0.007), (32, -0.012), (33, 0.046), (34, -0.006), (35, -0.04), (36, 0.033), (37, -0.037), (38, 0.081), (39, -0.025), (40, 0.052), (41, 0.051), (42, 0.031), (43, 0.037), (44, 0.083), (45, -0.041), (46, 0.038), (47, -0.026), (48, -0.075), (49, 0.064)]
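The 50-entry (topicId, topicWeight) vector above is what an LSI model produces for a single document. A minimal sketch under stated assumptions (50 components to match the vector length, trained on the same hypothetical tfidf matrix; it needs a corpus with more than 50 distinct terms to run):

# Hypothetical sketch of the lsi step: a 50-dimensional truncated SVD of
# the tfidf matrix. The dimensionality and training corpus are guesses.
from sklearn.decomposition import TruncatedSVD

lsi = TruncatedSVD(n_components=50, random_state=0)
doc_topics = lsi.fit_transform(tfidf)  # shape: (n_posts, 50)
print([(i, round(float(w), 3)) for i, w in enumerate(doc_topics[0])])

The similar-blogs list below can then be produced by cosine similarity in this 50-dimensional topic space rather than in raw term space.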

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.97771281 454 hunch net-2012-01-30-ICML Posters and Scope


2 0.79687732 437 hunch net-2011-07-10-ICML 2011 and the future


3 0.79357255 325 hunch net-2008-11-10-ICML Reviewing Criteria

Introduction: Michael Littman and Leon Bottou have decided to use a franchise program chair approach to reviewing at ICML this year. I’ll be one of the area chairs, so I wanted to mention a few things if you are thinking about naming me. I take reviewing seriously. That means papers to be reviewed are read, the implications are considered, and decisions are only made after that. I do my best to be fair, and there are zero subjects that I consider categorical rejects. I don’t consider several arguments for rejection-not-on-the-merits reasonable. I am generally interested in papers that (a) analyze new models of machine learning, (b) provide new algorithms, and (c) show that they work empirically on plausibly real problems. If a paper has the trifecta, I’m particularly interested. With 2 out of 3, I might be interested. I often find papers with only one element harder to accept, including papers with just (a). I’m a bit tough. I rarely jump-up-and-down about a paper, because I b

4 0.76671219 343 hunch net-2009-02-18-Decision by Vetocracy

Introduction: Few would mistake the process of academic paper review for a fair process, but sometimes the unfairness seems particularly striking. This is most easily seen by comparing two papers, the Banditron and the Offset Tree:

Problem Scope. Banditron: multiclass problems where only the loss of one choice can be probed. Offset Tree: strictly greater, cost sensitive multiclass problems where only the loss of one choice can be probed. Notes: often generalizations don’t matter. That’s not the case here, since every plausible application I’ve thought of involves loss functions substantially different from 0/1.

What’s new. Banditron: analysis and experiments. Offset Tree: algorithm, analysis, and experiments. Notes: as far as I know, the essence of the more general problem was first stated and analyzed with the EXP4 algorithm (page 16) (1998). It’s also the time horizon 1 simplification of the Reinforcement Learning setting for the random trajectory method (page 15) (2002). The Banditron algorithm itself is functionally identi

5 0.76012665 318 hunch net-2008-09-26-The SODA Program Committee

Introduction: Claire asked me to be on the SODA program committee this year, which was quite a bit of work. I had a relatively light load—merely 49 theory papers. Many of these papers were not on subjects that I was expert about, so (as is common for theory conferences) I found various reviewers that I trusted to help review the papers. I ended up reviewing about 1/3 personally. There were a couple instances where I ended up overruling a subreviewer whose logic seemed off, but otherwise I generally let their reviews stand. There are some differences in standards for paper reviews between the machine learning and theory communities. In machine learning it is expected that a review be detailed, while in the theory community this is often not the case. Every paper given to me ended up with a review varying between somewhat and very detailed. I’m sure not every author was happy with the outcome. While we did our best to make good decisions, they were difficult decisions to make. For exam

6 0.75074416 256 hunch net-2007-07-20-Motivation should be the Responsibility of the Reviewer

7 0.74794561 452 hunch net-2012-01-04-Why ICML? and the summer conferences

8 0.74465114 333 hunch net-2008-12-27-Adversarial Academia

9 0.73077428 395 hunch net-2010-04-26-Compassionate Reviewing

10 0.71165133 304 hunch net-2008-06-27-Reviewing Horror Stories

11 0.70292997 221 hunch net-2006-12-04-Structural Problems in NIPS Decision Making

12 0.70002562 204 hunch net-2006-08-28-Learning Theory standards for NIPS 2006

13 0.69273251 315 hunch net-2008-09-03-Bidding Problems

14 0.68947965 52 hunch net-2005-04-04-Grounds for Rejection

15 0.6873222 202 hunch net-2006-08-10-Precision is not accuracy

16 0.68077594 177 hunch net-2006-05-05-An ICML reject

17 0.67779601 468 hunch net-2012-06-29-ICML survey and comments

18 0.67750388 484 hunch net-2013-06-16-Representative Reviewing

19 0.67741263 42 hunch net-2005-03-17-Going all the Way, Sometimes

20 0.67263997 320 hunch net-2008-10-14-Who is Responsible for a Bad Review?


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(10, 0.226), (27, 0.192), (38, 0.078), (51, 0.026), (53, 0.08), (55, 0.192), (94, 0.072), (95, 0.051)]
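The sparse (topicId, topicWeight) vector above is typical LDA output: most topics get negligible mass and only the heavy ones are listed. A minimal sketch under stated assumptions (100 topics, consistent with topic ids up to 95 in the list; hyperparameters and corpus are guesses, and posts is the hypothetical list from the tfidf sketch):

# Hypothetical sketch of the lda step: per-post topic proportions from a
# bag-of-words matrix. n_components=100 and all other settings are guesses.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

counts = CountVectorizer(stop_words="english").fit_transform(posts)
lda = LatentDirichletAllocation(n_components=100, random_state=0)
doc_topics = lda.fit_transform(counts)  # rows sum to 1

# Keep only the heavy topics, as in the (topicId, topicWeight) list above.
print([(t, round(float(w), 3)) for t, w in enumerate(doc_topics[0]) if w > 0.02])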

similar blogs list:

simIndex simValue blogId blogTitle

1 0.9673062 38 hunch net-2005-03-09-Bad Reviewing

Introduction: This is a difficult subject to talk about for many reasons, but a discussion may be helpful. Bad reviewing is a problem in academia. The first step in understanding this is admitting to the problem, so here is a short list of examples of bad reviewing.

Reviewer disbelieves theorem proof (ICML), or disbelieves theorem with a trivially false counterexample. (COLT)
Reviewer internally swaps quantifiers in a theorem, concludes it has been done before and is trivial. (NIPS)
Reviewer believes a technique will not work despite experimental validation. (COLT)
Reviewers fail to notice flaw in theorem statement (CRYPTO).
Reviewer erroneously claims that it has been done before (NIPS, SODA, JMLR)—(complete with references!)
Reviewer inverts the message of a paper and concludes it says nothing important. (NIPS*2)
Reviewer fails to distinguish between a DAG and a tree (SODA).
Reviewer is enthusiastic about paper but clearly does not understand (ICML).
Reviewer erroneously

2 0.95681208 55 hunch net-2005-04-10-Is the Goal Understanding or Prediction?

Introduction: Steve Smale and I have a debate about goals of learning theory. Steve likes theorems with a dependence on unobservable quantities. For example, if D is a distribution over a space X x [0,1], you can state a theorem about the error rate dependent on the variance, E_{(x,y)~D}(y - E_{y'~D|x}[y'])^2. I dislike this, because I want to use the theorems to produce code solving learning problems. Since I don’t know (and can’t measure) the variance, a theorem depending on the variance does not help me—I would not know what variance to plug into the learning algorithm. Recast more broadly, this is a debate between “declarative” and “operative” mathematics. A strong example of “declarative” mathematics is “a new kind of science”. Roughly speaking, the goal of this kind of approach seems to be finding a way to explain the observations we make. Examples include “some things are unpredictable”, “a phase transition exists”, etc… “Operative” mathematics helps you make predictions a

3 0.93465209 240 hunch net-2007-04-21-Videolectures.net

Introduction: Davor has been working to set up videolectures.net, which is the new site for the many lectures mentioned here. (Tragically, they seem to only be available in windows media format.) I went through my own projects and added a few links to the videos. The day when every result is a set of {paper, slides, video} isn’t quite here yet, but it’s within sight. (For many papers, of course, code is a 4th component.)

same-blog 4 0.92830414 454 hunch net-2012-01-30-ICML Posters and Scope


5 0.89175433 199 hunch net-2006-07-26-Two more UAI papers of interest

Introduction: In addition to Ed Snelson’s paper, there were (at least) two other papers that caught my eye at UAI. One was this paper by Sanjoy Dasgupta, Daniel Hsu and Nakul Verma at UCSD which shows in a surprisingly general and strong way that almost all linear projections of any jointly distributed vector random variable with finite first and second moments look spherical and unimodal (in fact look like a scale mixture of Gaussians). Great result, as you’d expect from Sanjoy. The other paper which I found intriguing but which I just haven’t grokked yet is this beast by Manfred and Dima Kuzmin. You can check out the (beautiful) slides if that helps. I feel like there is something deep here, but my brain is too small to understand it. The COLT and last NIPS papers/slides are also on Manfred’s page. Hopefully someone here can illuminate.

6 0.88990319 182 hunch net-2006-06-05-Server Shift, Site Tweaks, Suggestions?

7 0.85481894 434 hunch net-2011-05-09-CI Fellows, again

8 0.84362066 474 hunch net-2012-10-18-7th Annual Machine Learning Symposium

9 0.83329034 437 hunch net-2011-07-10-ICML 2011 and the future

10 0.82074344 40 hunch net-2005-03-13-Avoiding Bad Reviewing

11 0.81918389 332 hunch net-2008-12-23-Use of Learning Theory

12 0.80903071 207 hunch net-2006-09-12-Incentive Compatible Reviewing

13 0.80764878 343 hunch net-2009-02-18-Decision by Vetocracy

14 0.80002975 204 hunch net-2006-08-28-Learning Theory standards for NIPS 2006

15 0.79817736 464 hunch net-2012-05-03-Microsoft Research, New York City

16 0.79606521 484 hunch net-2013-06-16-Representative Reviewing

17 0.78762329 51 hunch net-2005-04-01-The Producer-Consumer Model of Research

18 0.7870245 466 hunch net-2012-06-05-ICML acceptance statistics

19 0.78551364 395 hunch net-2010-04-26-Compassionate Reviewing

20 0.78334266 256 hunch net-2007-07-20-Motivation should be the Responsibility of the Reviewer