knowledge-graph by maker-knowledge-mining

437 hunch net-2011-07-10-ICML 2011 and the future


meta info for this blog

Source: html

Introduction: Unfortunately, I ended up sick for much of this ICML. I did manage to catch one interesting paper: Richard Socher, Cliff Lin, Andrew Y. Ng, and Christopher D. Manning, Parsing Natural Scenes and Natural Language with Recursive Neural Networks. I invited Richard to share his list of interesting papers, so hopefully we’ll hear from him soon. In the meantime, Paul and Hal have posted some lists. The future: Joelle and I are program chairs for ICML 2012 in Edinburgh, which I previously enjoyed visiting in 2005. This is a huge responsibility that we hope to accomplish well. A part of this (perhaps the most fun part) is imagining how we can make ICML better. A key and critical constraint is choosing things that can be accomplished. So far we have: Colocation. The first thing we looked into was potential colocations. We quickly discovered that many other conferences precommitted their location. For the future, getting a colocation with ACL or SIGI


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 If that can be done, I believe there is substantial interest—I understand there was substantial interest in the joint symposium this year. [sent-15, score-0.549]

2 What we did manage was achieving a colocation with COLT and there is an outside chance that a machine learning summer school will precede the main conference. [sent-16, score-0.362]

3 The colocation with COLT is in both time and space, with COLT organized as (essentially) a separate track in a nearby building. [sent-17, score-0.247]

4 There is a small chance we’ll be able to organize a machine learning summer school as a prequel, which would be quite cool, but several things have to break right for this to occur. [sent-21, score-0.292]

5 Two possibilities here are impromptu talks and perhaps a joint open problems session with COLT. [sent-28, score-0.392]

6 I am personally a big believer in workshops as a mechanism for further research, so I hope this works out well. [sent-40, score-0.241]

7 I tend to believe that we should be shifting to a journal format for ICML papers, as per many past discussions. [sent-42, score-0.34]

8 My best guess is that this would never displace the baseline conference review process, but it would help some papers that don’t naturally fit into a conference format while keeping quality high. [sent-46, score-0.594]

9 Nevertheless, some basic goals are: Double Blind [routine now] Two identical papers with different authors should have the same chance of success. [sent-50, score-0.392]

10 In terms of reviewing quality, I think double blind makes little difference in the short term, but the public commitment to fair reviewing makes a real difference in the long term. [sent-51, score-0.656]

11 Author Feedback [routine now] Author feedback makes a difference in only a small minority of decisions, but I believe its effect is larger as (a) reviewer quality improves and (b) reviewer understanding improves. [sent-52, score-0.747]

12 Somewhat less routine, we are seeking a mechanism for authors to be able to provide feedback if additional reviews are requested, as I’ve become cautious of the late-breaking highly negative review. [sent-54, score-0.452]

13 Geoff Gordon tweaked AIStats this year to allow authors to revise papers during feedback. [sent-56, score-0.411]

14 Our plan at the moment is that one review will be assigned by bidding, one by a primary area chair, and one by a secondary area chair. [sent-62, score-0.325]

15 This made a difference on about 5-10% of decisions, and (I believe) improved overall quality a bit. [sent-65, score-0.249]

16 Nevertheless, we believe it is important to try to do the reviewing both quickly and well. [sent-69, score-0.434]

17 Doing things quickly implies that we can push the submission deadline back later, providing authors more time to make quality papers. [sent-70, score-0.574]

18 Altogether, we believe at the moment that two weeks can be shaved from our reviewing process. [sent-76, score-0.404]

19 This provides a lasting backing store for ICML papers, as well as a mechanism for revisions. [sent-86, score-0.216]

20 Implementing all the changes above is ambitious, but I believe feasible and that each is individually beneficial and to some extent individually evaluatable. [sent-89, score-0.36]
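For reference, scores like those above come from a tfidf model: each sentence is weighted by the tfidf mass of its terms, and the highest-scoring sentences are kept. The following is a minimal sketch of that general technique, assuming scikit-learn's TfidfVectorizer; the exact pipeline behind the numbers above is not shown, so treat the stop-word list and length normalization as assumptions.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def top_sentences(sentences, k=20):
    # Each row of X is the tfidf vector of one sentence of the document.
    X = TfidfVectorizer(stop_words="english").fit_transform(sentences)
    # Score = total tfidf mass, normalized by the number of distinct terms
    # so long sentences are not automatically favored (an assumption).
    n_terms = np.maximum(X.getnnz(axis=1), 1)
    scores = np.asarray(X.sum(axis=1)).ravel() / n_terms
    order = np.argsort(scores)[::-1][:k]
    return [(int(i), round(float(scores[i]), 3), sentences[i]) for i in order]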


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('believe', 0.192), ('authors', 0.183), ('conference', 0.158), ('colocation', 0.157), ('reviewing', 0.147), ('quality', 0.142), ('papers', 0.136), ('workshops', 0.13), ('routine', 0.126), ('icml', 0.114), ('talks', 0.112), ('joint', 0.111), ('mechanism', 0.111), ('colt', 0.108), ('difference', 0.107), ('lasting', 0.105), ('decisions', 0.103), ('interest', 0.098), ('impromptu', 0.097), ('quickly', 0.095), ('website', 0.093), ('revise', 0.092), ('richard', 0.092), ('track', 0.09), ('jmlr', 0.088), ('assigned', 0.088), ('reviews', 0.086), ('area', 0.086), ('individually', 0.084), ('things', 0.082), ('reviewer', 0.08), ('traditionally', 0.079), ('posters', 0.076), ('makes', 0.074), ('journal', 0.074), ('organize', 0.074), ('shifting', 0.074), ('hear', 0.074), ('substantial', 0.074), ('chance', 0.073), ('session', 0.072), ('tutorials', 0.072), ('submission', 0.072), ('feedback', 0.072), ('manage', 0.069), ('single', 0.069), ('nevertheless', 0.067), ('moment', 0.065), ('ve', 0.063), ('school', 0.063)]
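The simValue entries in the lists below are consistent with cosine similarity between tfidf vectors of whole posts; note that the same-blog entry scores essentially 1.0 (the 1.0000012 below is floating-point rounding). A minimal sketch of that computation, assuming scikit-learn and a hypothetical list posts of raw post texts:

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def similar_posts(posts, query_idx, k=20):
    # One tfidf vector per post, fit over the whole blog corpus.
    X = TfidfVectorizer(stop_words="english").fit_transform(posts)
    # Cosine similarity of the query post against every post; the query
    # itself ranks first with similarity ~1.0 (tiny excursions past 1.0
    # are floating-point noise, as in the 1.0000012 entry below).
    sims = cosine_similarity(X[query_idx], X).ravel()
    order = np.argsort(sims)[::-1][:k]
    return [(int(i), float(sims[i])) for i in order]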

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0000012 437 hunch net-2011-07-10-ICML 2011 and the future


2 0.32791859 484 hunch net-2013-06-16-Representative Reviewing

Introduction: When thinking about how best to review papers, it seems helpful to have some conception of what good reviewing is. As far as I can tell, this is almost always discussed only in the specific context of a paper (i.e. your rejected paper), or at most an area (i.e. what a “good paper” looks like for that area), rather than in terms of general principles. Neither individual papers nor areas are sufficiently general for a large conference—every paper differs in the details, and what if you want to build a new area and/or cross areas? An unavoidable reason for reviewing is that the community of research is too large. In particular, it is not possible for a researcher to read every paper which someone thinks might be of interest. This reason for reviewing exists independent of constraints on rooms or scheduling formats of individual conferences. Indeed, history suggests that physical constraints are relatively meaningless over the long term — growing conferences simply use more rooms and/or change fo

3 0.29541099 453 hunch net-2012-01-28-Why COLT?

Introduction: By Shie and Nati. Following John’s advertisement for submitting to ICML, we thought it appropriate to highlight the advantages of COLT, and the reasons it is often the best place for theory papers. We would like to emphasize that we both respect ICML, and are active in ICML, both as authors and as area chairs, and certainly are not arguing that ICML is a bad place for your papers. For many papers, ICML is the best venue. But for many theory papers, COLT is a better and more appropriate place. Why should you submit to COLT? By and large, theory papers go to COLT. This is the tradition of the field and most theory papers are sent to COLT. This is the place to present your ground-breaking theorems and new models that will shape the theory of machine learning. COLT is more focused than ICML, with a single-track session. Unlike ICML, the norm in COLT is for people to sit through most sessions, and hear most of the talks presented. There is also often a lively discussion followi

4 0.2613301 452 hunch net-2012-01-04-Why ICML? and the summer conferences

Introduction: Here’s a quick reference for summer ML-related conferences sorted by due date:

Conference  Due date  Location                                     Reviewing
KDD         Feb 10    August 12-16, Beijing, China                 Single Blind
COLT        Feb 14    June 25-June 27, Edinburgh, Scotland         Single Blind? (historically)
ICML        Feb 24    June 26-July 1, Edinburgh, Scotland          Double Blind, author response, zero SPOF
UAI         March 30  August 15-17, Catalina Islands, California   Double Blind, author response

Geographically, this is greatly dispersed and the UAI/KDD conflict is unfortunate. Machine Learning conferences are triannual now, between NIPS, AIStat, and ICML. This has not always been the case: the academic default is annual summer conferences, then NIPS started with a December conference, and now AIStat has grown into an April conference. However, the first claim is not quite correct. NIPS and AIStat have few competing venues while ICML implicitly competes with many other conf

5 0.24947953 320 hunch net-2008-10-14-Who is Responsible for a Bad Review?

Introduction: Although I’m greatly interested in machine learning, I think it must be admitted that there is a large amount of low quality logic being used in reviews. The problem is bad enough that sometimes I wonder if the Byzantine generals limit has been exceeded. For example, I’ve seen recent reviews where the given reasons for rejecting are: [NIPS] Theorem A is uninteresting because Theorem B is uninteresting. [UAI] When you learn by memorization, the problem addressed is trivial. [NIPS] The proof is in the appendix. [NIPS] This has been done before. (… but not giving any relevant citations) Just for the record I want to point out what’s wrong with these reviews. A future world in which such reasons never come up again would be great, but I’m sure these errors will be committed many times more in the future. This is nonsense. A theorem should be evaluated based on its merits, rather than the merits of another theorem. Learning by memorization requires an expon

6 0.24667071 454 hunch net-2012-01-30-ICML Posters and Scope

7 0.24431685 116 hunch net-2005-09-30-Research in conferences

8 0.23712303 461 hunch net-2012-04-09-ICML author feedback is open

9 0.2345365 318 hunch net-2008-09-26-The SODA Program Committee

10 0.23366518 343 hunch net-2009-02-18-Decision by Vetocracy

11 0.23326327 395 hunch net-2010-04-26-Compassionate Reviewing

12 0.23066622 207 hunch net-2006-09-12-Incentive Compatible Reviewing

13 0.22981119 221 hunch net-2006-12-04-Structural Problems in NIPS Decision Making

14 0.22963612 141 hunch net-2005-12-17-Workshops as Franchise Conferences

15 0.22247857 304 hunch net-2008-06-27-Reviewing Horror Stories

16 0.220571 40 hunch net-2005-03-13-Avoiding Bad Reviewing

17 0.21288082 468 hunch net-2012-06-29-ICML survey and comments

18 0.19763142 65 hunch net-2005-05-02-Reviewing techniques for conferences

19 0.19224717 403 hunch net-2010-07-18-ICML & COLT 2010

20 0.18885912 315 hunch net-2008-09-03-Bidding Problems


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.399), (1, -0.365), (2, 0.141), (3, -0.025), (4, 0.03), (5, 0.117), (6, 0.018), (7, 0.014), (8, -0.013), (9, 0.008), (10, -0.034), (11, 0.019), (12, 0.045), (13, -0.004), (14, 0.037), (15, 0.001), (16, 0.062), (17, 0.026), (18, -0.0), (19, 0.052), (20, 0.068), (21, 0.002), (22, 0.091), (23, -0.004), (24, -0.02), (25, 0.054), (26, -0.011), (27, 0.025), (28, 0.052), (29, 0.041), (30, 0.039), (31, -0.016), (32, 0.033), (33, -0.007), (34, 0.002), (35, 0.015), (36, -0.03), (37, -0.001), (38, 0.02), (39, -0.073), (40, 0.026), (41, -0.018), (42, 0.03), (43, -0.001), (44, 0.034), (45, 0.03), (46, 0.036), (47, 0.0), (48, -0.083), (49, 0.042)]
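These 50 topicId/topicWeight pairs are the post's coordinates in a 50-dimensional latent semantic space, so similar posts can match on related vocabulary rather than exact word overlap. A sketch of how such a representation might be produced, assuming scikit-learn's TruncatedSVD (the usual LSI/LSA decomposition) applied to the tfidf matrix X and query index query_idx from the earlier sketch:

from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Project tfidf vectors onto 50 latent topics, matching the 50 weights listed.
lsi = TruncatedSVD(n_components=50, random_state=0)
Z = lsi.fit_transform(X)  # Z[i] is the 50-dim topic-weight vector of post i

# Similarity is again cosine, but computed in topic space.
sims = cosine_similarity(Z[query_idx:query_idx + 1], Z).ravel()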

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.98442036 437 hunch net-2011-07-10-ICML 2011 and the future


2 0.82680333 221 hunch net-2006-12-04-Structural Problems in NIPS Decision Making

Introduction: This is a very difficult post to write, because it is about a perennially touchy subject. Nevertheless, it is an important one which needs to be thought about carefully. There are a few things which should be understood: The system is changing and responsive. We-the-authors are we-the-reviewers, we-the-PC, and even we-the-NIPS-board. NIPS has implemented ‘secondary program chairs’, ‘author response’, and ‘double blind reviewing’ in the last few years to help with the decision process, and more changes may happen in the future. Agreement creates a perception of correctness. When any PC meets and makes a group decision about a paper, there is a strong tendency for the reinforcement inherent in a group decision to create the perception of correctness. For the many people who have been on the NIPS PC it’s reasonable to entertain a healthy skepticism in the face of this reinforcing certainty. This post is about structural problems. What problems arise because of the structure

3 0.82601625 318 hunch net-2008-09-26-The SODA Program Committee

Introduction: Claire asked me to be on the SODA program committee this year, which was quite a bit of work. I had a relatively light load—merely 49 theory papers. Many of these papers were not on subjects that I was expert about, so (as is common for theory conferences) I found various reviewers that I trusted to help review the papers. I ended up reviewing about 1/3 personally. There were a couple instances where I ended up overruling a subreviewer whose logic seemed off, but otherwise I generally let their reviews stand. There are some differences in standards for paper reviews between the machine learning and theory communities. In machine learning it is expected that a review be detailed, while in the theory community this is often not the case. Every paper given to me ended up with a review varying between somewhat and very detailed. I’m sure not every author was happy with the outcome. While we did our best to make good decisions, they were difficult decisions to make. For exam

4 0.81061924 484 hunch net-2013-06-16-Representative Reviewing


5 0.79395825 461 hunch net-2012-04-09-ICML author feedback is open

Introduction: as of last night, late. When the reviewing deadline passed Wednesday night, 15% of reviews were still missing, much higher than I expected. Between late reviews coming in, ACs working overtime through the weekend, and people willing to help in the pinch, another ~390 reviews came in, reducing the missing mass to 0.2%. Nailing that last bit and a similar quantity of papers with uniformly low confidence reviews is what remains to be done in terms of basic reviews. We are trying to make all of those happen this week so authors have some chance to respond. I was surprised by the quantity of late reviews, and I think that’s an area where ICML needs to improve in future years. Good reviews are not done in a rush—they are done by setting aside time (like an afternoon), and carefully reading the paper while thinking about implications. Many reviewers do this well but a significant minority aren’t good at scheduling their personal time. In this situation there are several ways to fail:

6 0.76085013 453 hunch net-2012-01-28-Why COLT?

7 0.75934029 463 hunch net-2012-05-02-ICML: Behind the Scenes

8 0.75182796 320 hunch net-2008-10-14-Who is Responsible for a Bad Review?

9 0.74973488 315 hunch net-2008-09-03-Bidding Problems

10 0.73800528 468 hunch net-2012-06-29-ICML survey and comments

11 0.73760843 343 hunch net-2009-02-18-Decision by Vetocracy

12 0.73748708 304 hunch net-2008-06-27-Reviewing Horror Stories

13 0.73285538 207 hunch net-2006-09-12-Incentive Compatible Reviewing

14 0.72032535 116 hunch net-2005-09-30-Research in conferences

15 0.71567154 40 hunch net-2005-03-13-Avoiding Bad Reviewing

16 0.71105695 395 hunch net-2010-04-26-Compassionate Reviewing

17 0.69942021 452 hunch net-2012-01-04-Why ICML? and the summer conferences

18 0.69869304 333 hunch net-2008-12-27-Adversarial Academia

19 0.68057013 454 hunch net-2012-01-30-ICML Posters and Scope

20 0.67101377 447 hunch net-2011-10-10-ML Symposium and ICML details


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(3, 0.02), (4, 0.014), (9, 0.01), (10, 0.042), (26, 0.015), (27, 0.206), (34, 0.018), (38, 0.053), (48, 0.018), (53, 0.077), (55, 0.16), (90, 0.014), (92, 0.145), (94, 0.086), (95, 0.048)]
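The LDA weights above are a sparse topic distribution: only topics with non-negligible mass are listed, and the ids running up to 95 suggest a model with on the order of 100 topics (the exact count is an assumption here). A minimal sketch with scikit-learn, reusing the hypothetical posts and query_idx from above; unlike tfidf/LSI, LDA is fit on raw term counts:

from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# LDA models each post as a mixture of topics and expects integer counts.
counts = CountVectorizer(stop_words="english").fit_transform(posts)
lda = LatentDirichletAllocation(n_components=100, random_state=0)
theta = lda.fit_transform(counts)  # theta[i] is the topic distribution of post i

# Cosine over topic distributions; distribution-aware measures such as
# Jensen-Shannon divergence are a common alternative.
sims = cosine_similarity(theta[query_idx:query_idx + 1], theta).ravel()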

similar blogs list:

simIndex simValue blogId blogTitle

1 0.98154831 203 hunch net-2006-08-18-Report of MLSS 2006 Taipei

Introduction: The 2006 Machine Learning Summer School in Taipei, Taiwan ended on August 4, 2006. It was a very exciting two weeks for a record crowd of 245 participants (including speakers and organizers) from 18 countries. We had a lineup of speakers that is hard to match for other similar events (see our WIKI for more information). With this lineup, it is difficult for us as organizers to screw it up too badly. Also, since we have pretty good infrastructure for international meetings and experienced staff at NTUST and Academia Sinica, plus the reputation established by the previous MLSS series, it was relatively easy for us to attract registrations, and we simply enjoyed this two-week-long party of machine learning. At the end of MLSS we distributed a survey form for participants to fill in. I will report what we found from this survey, together with the registration data and word-of-mouth from participants. The first question is designed to find out how our participants learned about MLSS

2 0.96605599 238 hunch net-2007-04-13-What to do with an unreasonable conditional accept

Introduction: Last year about this time, we received a conditional accept for the searn paper, which asked us to reference a paper that was not reasonable to cite because there was strictly more relevant work by the same authors that we already cited. We wrote a response explaining this, and didn’t cite it in the final draft, giving the SPC an excuse to reject the paper, leading to unhappiness for all. Later, Sanjoy Dasgupta suggested that an alternative was to talk to the PC chair instead, as soon as you see that a conditional accept is unreasonable. William Cohen and I spoke about this by email, the relevant bit of which is: If an SPC asks for a revision that is inappropriate, the correct action is to contact the chairs as soon as the decision is made, clearly explaining what the problem is, so we can decide whether or not to over-rule the SPC. As you say, this is extra work for us chairs, but that’s part of the job, and we’re willing to do that sort of work to improve the ov

same-blog 3 0.94058794 437 hunch net-2011-07-10-ICML 2011 and the future


4 0.90679723 272 hunch net-2007-11-14-BellKor wins Netflix

Introduction: … but only the little prize. The BellKor team focused on integrating predictions from many different methods. The base methods consist of: (1) Nearest Neighbor Methods, (2) Matrix Factorization Methods (asymmetric and symmetric), (3) Linear Regression on various feature spaces, and (4) Restricted Boltzmann Machines. The final predictor was an ensemble (as was reasonable to expect), although it’s a little bit more complicated than just a weighted average—it’s essentially a customized learning algorithm. Base approaches (1)-(3) seem like relatively well-known approaches (although I haven’t seen the asymmetric factorization variant before). RBMs are the new approach. The writeup is pretty clear for more details. The contestants are close to reaching the big prize, but the last 1.5% is probably at least as hard as what’s been done. A few new structurally different methods for making predictions may need to be discovered and added into the mixture. In other words, research may be require

5 0.88539255 463 hunch net-2012-05-02-ICML: Behind the Scenes

Introduction: This is a rather long post, detailing the ICML 2012 review process. The goal is to make the process more transparent, help authors understand how we came to a decision, and discuss the strengths and weaknesses of this process for future conference organizers. Microsoft’s Conference Management Toolkit (CMT) We chose to use CMT over other conference management software mainly because of its rich toolkit. The interface is sub-optimal (to say the least!) but it has extensive capabilities (to handle bids, author response, resubmissions, etc.), good import/export mechanisms (to process the data elsewhere), excellent technical support (to answer late night emails, add new functionalities). Overall, it was the right choice, although we hope a designer will look at that interface sometime soon! Toronto Matching System (TMS) TMS is now being used by many major conferences in our field (including NIPS and UAI). It is an automated system (developed by Laurent Charlin and Rich Ze

6 0.87847447 454 hunch net-2012-01-30-ICML Posters and Scope

7 0.8770479 461 hunch net-2012-04-09-ICML author feedback is open

8 0.87366074 452 hunch net-2012-01-04-Why ICML? and the summer conferences

9 0.87276292 75 hunch net-2005-05-28-Running A Machine Learning Summer School

10 0.87200159 40 hunch net-2005-03-13-Avoiding Bad Reviewing

11 0.87026572 207 hunch net-2006-09-12-Incentive Compatible Reviewing

12 0.86306918 204 hunch net-2006-08-28-Learning Theory standards for NIPS 2006

13 0.86242032 343 hunch net-2009-02-18-Decision by Vetocracy

14 0.86055344 320 hunch net-2008-10-14-Who is Responsible for a Bad Review?

15 0.8602823 116 hunch net-2005-09-30-Research in conferences

16 0.8573193 466 hunch net-2012-06-05-ICML acceptance statistics

17 0.85729229 484 hunch net-2013-06-16-Representative Reviewing

18 0.85625422 333 hunch net-2008-12-27-Adversarial Academia

19 0.85593599 403 hunch net-2010-07-18-ICML & COLT 2010

20 0.8542791 464 hunch net-2012-05-03-Microsoft Research, New York City