hunch_net hunch_net-2012 hunch_net-2012-468 knowledge-graph by maker-knowledge-mining

468 hunch net-2012-06-29-ICML survey and comments


meta info for this blog

Source: html

Introduction: Just about nothing could keep me from attending ICML, except for Dora who arrived on Monday. Consequently, I have only secondhand reports that the conference is going well. For those who are remote (like me) or after the conference (like everyone), Mark Reid has set up the ICML discussion site where you can comment on any paper or subscribe to papers. Authors are automatically subscribed to their own papers, so it should be possible to have a discussion significantly after the fact, as people desire. We also conducted a survey before the conference and have the survey results now. These can be compared with the ICML 2010 survey results. Looking at the comparable questions, we can sometimes order the answers to have scores ranging from 0 to 3 or 0 to 4, with 3 or 4 being best and 0 worst, then compute the average difference between 2012 and 2010. Glancing through them, I see: Most people found the papers they reviewed a good fit for their expertise (-.037 w.r.t 20
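The year-over-year comparison described above can be sketched in a few lines: each comparable question's answers are mapped onto a 0-3 or 0-4 scale (higher is better), averaged per year, and the 2012 minus 2010 difference is reported. The answer values below are made up for illustration only; they are not the actual survey data.

```python
def mean(xs):
    # Arithmetic mean of a list of numeric answers.
    return sum(xs) / len(xs)

def score_delta(answers_2012, answers_2010):
    """Average 2012 score minus average 2010 score for one question."""
    return mean(answers_2012) - mean(answers_2010)

# Hypothetical answers for one question scored 0..3 (3 = best):
delta = score_delta([3, 2, 3, 2, 3], [3, 3, 3, 2, 3])
print(round(delta, 3))  # → -0.2
```

A small negative delta, as in this illustration, would mean respondents rated the question slightly lower in 2012 than in 2010.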


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Just about nothing could keep me from attending ICML, except for Dora who arrived on Monday. [sent-1, score-0.108]

2 Consequently, I have only secondhand reports that the conference is going well. [sent-2, score-0.217]

3 For those who are remote (like me) or after the conference (like everyone), Mark Reid has set up the ICML discussion site where you can comment on any paper or subscribe to papers. [sent-3, score-0.536]

4 Authors are automatically subscribed to their own papers, so it should be possible to have a discussion significantly after the fact, as people desire. [sent-4, score-0.433]

5 We also conducted a survey before the conference and have the survey results now. [sent-5, score-0.677]

6 This can be compared with the ICML 2010 survey results. [sent-6, score-0.23]

7 Looking at the comparable questions, we can sometimes order the answers to have scores ranging from 0 to 3 or 0 to 4 with 3 or 4 being best and 0 worst, then compute the average difference between 2012 and 2010. [sent-7, score-0.431]

8 Glancing through them, I see: Most people found the papers they reviewed a good fit for their expertise (-. [sent-8, score-0.182]

9 Achieving this was one of our subgoals in the pursuit of high quality decisions. [sent-12, score-0.478]

10 This was something that we worried about significantly in shifting the paper deadline and otherwise massaging the schedule. [sent-14, score-0.406]

11 Most people also thought the review period was sufficiently long and most reviews were high quality (+. [sent-15, score-0.457]

12 About 1/4 of reviewers say that author response changed their mind on a paper and 2/3 of reviewers say discussion changed their mind on a paper. [sent-19, score-1.559]

13 The expectation of decision impact from author response is reduced from 2010 (-. [sent-20, score-0.675]

14 The existence of author response is overwhelmingly preferred. [sent-22, score-0.472]

15 A substantial bump in reviewing quality was a primary goal. [sent-32, score-0.248]

16 The ACs spent substantially more time (43 hours on average) than PC members (28 hours on average). [sent-33, score-0.324]

17 The AC load we had this year was probably too high and will need to be reduced somewhat for next year. [sent-35, score-0.479]

18 2/3 of authors prefer the option to revise a paper during author response. [sent-36, score-0.579]

19 The choice of how to deal with increased submissions is deeply undecided, with a slight preference for short talk+poster as we did. [sent-37, score-0.31]

20 There is a strong preference for COLT and UAI colocation with the next tier of preference for IJCAI, KDD, AAAI, and CVPR. [sent-39, score-0.593]


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('survey', 0.23), ('preference', 0.21), ('author', 0.193), ('response', 0.192), ('acs', 0.18), ('average', 0.163), ('hours', 0.162), ('reduced', 0.153), ('quality', 0.147), ('changed', 0.143), ('expectation', 0.137), ('icml', 0.135), ('mind', 0.13), ('discussion', 0.129), ('high', 0.128), ('conference', 0.109), ('arrived', 0.108), ('conducted', 0.108), ('reports', 0.108), ('subgoals', 0.108), ('ac', 0.108), ('authors', 0.107), ('somewhat', 0.106), ('paper', 0.103), ('significantly', 0.103), ('people', 0.101), ('reviewers', 0.101), ('reviewing', 0.101), ('subscribed', 0.1), ('slight', 0.1), ('remote', 0.1), ('worried', 0.1), ('ranging', 0.1), ('massaging', 0.1), ('cvpr', 0.1), ('say', 0.097), ('revise', 0.095), ('pursuit', 0.095), ('subscribe', 0.095), ('next', 0.092), ('agrees', 0.09), ('existence', 0.087), ('comparable', 0.084), ('reid', 0.084), ('scores', 0.084), ('ijcai', 0.081), ('expertise', 0.081), ('period', 0.081), ('colocation', 0.081), ('option', 0.081)]
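The word weights and per-sentence scores above can be produced by a model along these lines: treat each sentence as a "document", derive a word's idf from how many sentences contain it, and score a sentence by summing its words' tf-idf weights. This is a minimal sketch of that idea, not the exact model used to build the tables; the toy sentences are illustrative.

```python
import math
from collections import Counter

def tfidf_tables(sentences):
    """Return (aggregate word weights, per-sentence tf-idf scores)."""
    docs = [s.lower().split() for s in sentences]
    n = len(docs)
    # Document frequency: in how many sentences does each word appear?
    df = Counter(w for d in docs for w in set(d))
    idf = {w: math.log(n / df[w]) for w in df}
    word_weight = Counter()
    sent_scores = []
    for d in docs:
        tf = Counter(d)
        weights = {w: (tf[w] / len(d)) * idf[w] for w in tf}
        sent_scores.append(sum(weights.values()))  # sentScore analogue
        for w, v in weights.items():
            word_weight[w] += v                    # wordTfidf analogue
    return word_weight, sent_scores

words, scores = tfidf_tables([
    "the survey results now",
    "the icml survey",
    "author response changed their mind",
])
print(words.most_common(3))                   # top words, as in the list above
print(max(range(3), key=scores.__getitem__))  # index of the top sentence
```

Ranking sentences by this score and keeping the top few is one simple way to get a summary like the sentIndex/sentScore table above.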

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0 468 hunch net-2012-06-29-ICML survey and comments


2 0.23458549 484 hunch net-2013-06-16-Representative Reviewing

Introduction: When thinking about how best to review papers, it seems helpful to have some conception of what good reviewing is. As far as I can tell, this is almost always only discussed in the specific context of a paper (i.e. your rejected paper), or at most an area (i.e. what a “good paper” looks like for that area) rather than general principles. Neither individual papers or areas are sufficiently general for a large conference—every paper differs in the details, and what if you want to build a new area and/or cross areas? An unavoidable reason for reviewing is that the community of research is too large. In particular, it is not possible for a researcher to read every paper which someone thinks might be of interest. This reason for reviewing exists independent of constraints on rooms or scheduling formats of individual conferences. Indeed, history suggests that physical constraints are relatively meaningless over the long term — growing conferences simply use more rooms and/or change fo

3 0.22974727 403 hunch net-2010-07-18-ICML & COLT 2010

Introduction: The papers which interested me most at ICML and COLT 2010 were: Thomas Walsh , Kaushik Subramanian , Michael Littman and Carlos Diuk Generalizing Apprenticeship Learning across Hypothesis Classes . This paper formalizes and provides algorithms with guarantees for mixed-mode apprenticeship and traditional reinforcement learning algorithms, allowing RL algorithms that perform better than for either setting alone. István Szita and Csaba Szepesvári Model-based reinforcement learning with nearly tight exploration complexity bounds . This paper and another represent the frontier of best-known algorithm for Reinforcement Learning in a Markov Decision Process. James Martens Deep learning via Hessian-free optimization . About a new not-quite-online second order gradient algorithm for learning deep functional structures. Potentially this is very powerful because while people have often talked about end-to-end learning, it has rarely worked in practice. Chrisoph

4 0.21328501 116 hunch net-2005-09-30-Research in conferences

Introduction: Conferences exist as part of the process of doing research. They provide many roles including “announcing research”, “meeting people”, and “point of reference”. Not all conferences are alike so a basic question is: “to what extent do individual conferences attempt to aid research?” This question is very difficult to answer in any satisfying way. What we can do is compare details of the process across multiple conferences. Comments The average quality of comments across conferences can vary dramatically. At one extreme, the tradition in CS theory conferences is to provide essentially zero feedback. At the other extreme, some conferences have a strong tradition of providing detailed constructive feedback. Detailed feedback can give authors significant guidance about how to improve research. This is the most subjective entry. Blind Virtually all conferences offer single blind review where authors do not know reviewers. Some also provide double blind review where rev

5 0.21288082 437 hunch net-2011-07-10-ICML 2011 and the future

Introduction: Unfortunately, I ended up sick for much of this ICML. I did manage to catch one interesting paper: Richard Socher , Cliff Lin , Andrew Y. Ng , and Christopher D. Manning Parsing Natural Scenes and Natural Language with Recursive Neural Networks . I invited Richard to share his list of interesting papers, so hopefully we’ll hear from him soon. In the meantime, Paul and Hal have posted some lists. the future Joelle and I are program chairs for ICML 2012 in Edinburgh , which I previously enjoyed visiting in 2005 . This is a huge responsibility, that we hope to accomplish well. A part of this (perhaps the most fun part), is imagining how we can make ICML better. A key and critical constraint is choosing things that can be accomplished. So far we have: Colocation . The first thing we looked into was potential colocations. We quickly discovered that many other conferences precommitted their location. For the future, getting a colocation with ACL or SIGI

6 0.20490986 452 hunch net-2012-01-04-Why ICML? and the summer conferences

7 0.20224999 65 hunch net-2005-05-02-Reviewing techniques for conferences

8 0.19703211 461 hunch net-2012-04-09-ICML author feedback is open

9 0.19044225 453 hunch net-2012-01-28-Why COLT?

10 0.18485835 318 hunch net-2008-09-26-The SODA Program Committee

11 0.18121721 401 hunch net-2010-06-20-2010 ICML discussion site

12 0.17990722 356 hunch net-2009-05-24-2009 ICML discussion site

13 0.1779197 466 hunch net-2012-06-05-ICML acceptance statistics

14 0.17762142 221 hunch net-2006-12-04-Structural Problems in NIPS Decision Making

15 0.17025806 343 hunch net-2009-02-18-Decision by Vetocracy

16 0.1687243 320 hunch net-2008-10-14-Who is Responsible for a Bad Review?

17 0.15257895 207 hunch net-2006-09-12-Incentive Compatible Reviewing

18 0.15247418 40 hunch net-2005-03-13-Avoiding Bad Reviewing

19 0.15228869 395 hunch net-2010-04-26-Compassionate Reviewing

20 0.14705889 454 hunch net-2012-01-30-ICML Posters and Scope


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.243), (1, -0.274), (2, 0.208), (3, -0.008), (4, 0.05), (5, 0.024), (6, -0.037), (7, -0.039), (8, -0.036), (9, 0.003), (10, -0.02), (11, -0.003), (12, -0.083), (13, 0.06), (14, 0.073), (15, 0.002), (16, -0.144), (17, -0.048), (18, -0.06), (19, 0.035), (20, 0.024), (21, -0.078), (22, 0.009), (23, 0.076), (24, -0.013), (25, -0.057), (26, 0.024), (27, -0.066), (28, 0.023), (29, 0.076), (30, 0.008), (31, -0.049), (32, 0.012), (33, 0.012), (34, -0.006), (35, 0.006), (36, 0.008), (37, -0.0), (38, 0.022), (39, 0.058), (40, -0.026), (41, -0.064), (42, 0.012), (43, -0.026), (44, -0.001), (45, 0.064), (46, -0.016), (47, 0.026), (48, 0.018), (49, -0.018)]
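The lsi topic weights above place each post in a low-dimensional "topic space" obtained from a truncated SVD of the term-document matrix, and the simValue column is consistent with cosine similarity in that space. A hedged sketch of that pipeline, with a tiny made-up count matrix standing in for the real term-document data:

```python
import numpy as np

def lsi_similarity(X, k=2):
    """X: (n_docs, n_terms) matrix. Cosine similarities in k-dim LSI space."""
    # Truncated SVD: keep the top-k right singular directions as "topics".
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    topics = X @ Vt[:k].T  # document coordinates in topic space
    # Normalize rows so dot products become cosine similarities.
    norms = np.linalg.norm(topics, axis=1, keepdims=True)
    topics = topics / np.where(norms == 0, 1, norms)
    return topics @ topics.T

# Toy data: docs 0 and 1 share terms, doc 2 uses a disjoint term.
X = np.array([[2., 1., 0.], [1., 2., 0.], [0., 0., 3.]])
sims = lsi_similarity(X, k=2)
print(sims[0])  # doc 0 is similar to doc 1 and orthogonal to doc 2
```

Ranking other posts by their row of `sims` would yield a "similar blogs" list like the one below.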

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.98091078 468 hunch net-2012-06-29-ICML survey and comments


2 0.73010641 65 hunch net-2005-05-02-Reviewing techniques for conferences

Introduction: The many reviews following the many paper deadlines are just about over. AAAI and ICML in particular were experimenting with several reviewing techniques. Double Blind: AAAI and ICML were both double blind this year. It seemed (overall) beneficial, but two problems arose. For theoretical papers, with a lot to say, authors often leave out the proofs. This is very hard to cope with under a double blind review because (1) you can not trust the authors got the proof right but (2) a blanket “reject” hits many probably-good papers. Perhaps authors should more strongly favor proof-complete papers sent to double blind conferences. On the author side, double blind reviewing is actually somewhat disruptive to research. In particular, it discourages the author from talking about the subject, which is one of the mechanisms of research. This is not a great drawback, but it is one not previously appreciated. Author feedback: AAAI and ICML did author feedback this year. It seem

3 0.70991093 452 hunch net-2012-01-04-Why ICML? and the summer conferences

Introduction: Here’s a quick reference for summer ML-related conferences sorted by due date: Conference Due date Location Reviewing KDD Feb 10 August 12-16, Beijing, China Single Blind COLT Feb 14 June 25-June 27, Edinburgh, Scotland Single Blind? (historically) ICML Feb 24 June 26-July 1, Edinburgh, Scotland Double Blind, author response, zero SPOF UAI March 30 August 15-17, Catalina Islands, California Double Blind, author response Geographically, this is greatly dispersed and the UAI/KDD conflict is unfortunate. Machine Learning conferences are triannual now, between NIPS , AIStat , and ICML . This has not always been the case: the academic default is annual summer conferences, then NIPS started with a December conference, and now AIStat has grown into an April conference. However, the first claim is not quite correct. NIPS and AIStat have few competing venues while ICML implicitly competes with many other conf

4 0.70180571 437 hunch net-2011-07-10-ICML 2011 and the future

Introduction: Unfortunately, I ended up sick for much of this ICML. I did manage to catch one interesting paper: Richard Socher , Cliff Lin , Andrew Y. Ng , and Christopher D. Manning Parsing Natural Scenes and Natural Language with Recursive Neural Networks . I invited Richard to share his list of interesting papers, so hopefully we’ll hear from him soon. In the meantime, Paul and Hal have posted some lists. the future Joelle and I are program chairs for ICML 2012 in Edinburgh , which I previously enjoyed visiting in 2005 . This is a huge responsibility, that we hope to accomplish well. A part of this (perhaps the most fun part), is imagining how we can make ICML better. A key and critical constraint is choosing things that can be accomplished. So far we have: Colocation . The first thing we looked into was potential colocations. We quickly discovered that many other conferences precommitted their location. For the future, getting a colocation with ACL or SIGI

5 0.70112717 401 hunch net-2010-06-20-2010 ICML discussion site

Introduction: A substantial difficulty with the 2009 and 2008 ICML discussion system was a communication vacuum, where authors were not informed of comments, and commenters were not informed of responses to their comments without explicit monitoring. Mark Reid has set up a new discussion system for 2010 with the goal of addressing this. Mark didn’t want to make it too intrusive, so you must opt-in. As an author, find your paper and “Subscribe by email” to the comments. As a commenter, you have the option of providing an email for follow-up notification.

6 0.6986568 305 hunch net-2008-06-30-ICML has a comment system

7 0.69765341 461 hunch net-2012-04-09-ICML author feedback is open

8 0.6905933 356 hunch net-2009-05-24-2009 ICML discussion site

9 0.67396045 318 hunch net-2008-09-26-The SODA Program Committee

10 0.67391855 40 hunch net-2005-03-13-Avoiding Bad Reviewing

11 0.67356962 403 hunch net-2010-07-18-ICML & COLT 2010

12 0.66918498 453 hunch net-2012-01-28-Why COLT?

13 0.66590405 315 hunch net-2008-09-03-Bidding Problems

14 0.65942329 463 hunch net-2012-05-02-ICML: Behind the Scenes

15 0.65871215 484 hunch net-2013-06-16-Representative Reviewing

16 0.65174621 116 hunch net-2005-09-30-Research in conferences

17 0.64510375 395 hunch net-2010-04-26-Compassionate Reviewing

18 0.63483083 221 hunch net-2006-12-04-Structural Problems in NIPS Decision Making

19 0.61488581 447 hunch net-2011-10-10-ML Symposium and ICML details

20 0.60946453 117 hunch net-2005-10-03-Not ICML


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(3, 0.032), (10, 0.025), (27, 0.166), (38, 0.011), (48, 0.307), (53, 0.071), (55, 0.141), (94, 0.085), (95, 0.082)]
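The lda list above represents each post as sparse (topicId, topicWeight) pairs over a shared topic model. A hedged sketch of how such sparse topic vectors could be compared to produce the simValue column, assuming cosine similarity; the topic ids and weights are copied from the table purely as an illustration:

```python
import math

def cosine_sparse(a, b):
    """Cosine similarity between two sparse (id, weight) vectors."""
    da, db = dict(a), dict(b)
    dot = sum(v * db.get(k, 0.0) for k, v in da.items())
    na = math.sqrt(sum(v * v for v in da.values()))
    nb = math.sqrt(sum(v * v for v in db.values()))
    return dot / (na * nb) if na and nb else 0.0

# Sparse topic vector for this post, from the table above (illustrative):
this_post = [(3, 0.032), (10, 0.025), (27, 0.166), (48, 0.307), (55, 0.141)]
print(round(cosine_sparse(this_post, this_post), 5))  # → 1.0 for the same post
```

Posts sharing heavy topics (like topic 48 here) would score high; posts with disjoint topic support would score 0.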

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.94502312 468 hunch net-2012-06-29-ICML survey and comments


2 0.89950502 232 hunch net-2007-02-11-24

Introduction: To commemorate the Twenty Fourth Annual International Conference on Machine Learning (ICML-07), the FOX Network has decided to launch a new spin-off series in prime time. Through unofficial sources, I have obtained the story arc for the first season, which appears frighteningly realistic.

3 0.85825872 46 hunch net-2005-03-24-The Role of Workshops

Introduction: A good workshop is often far more interesting than the papers at a conference. This happens because a workshop has a much tighter focus than a conference. Since you choose the workshops fitting your interest, the increased relevance can greatly enhance the level of your interest and attention. Roughly speaking, a workshop program consists of elements related to a subject of your interest. The main conference program consists of elements related to someone’s interest (which is rarely your own). Workshops are more about doing research while conferences are more about presenting research. Several conferences have associated workshop programs, some with deadlines due shortly. ICML workshops Due April 1 IJCAI workshops Deadlines Vary KDD workshops Not yet finalized Anyone going to these conferences should examine the workshops and see if any are of interest. (If none are, then maybe you should organize one next year.)

4 0.84951073 445 hunch net-2011-09-28-Somebody’s Eating Your Lunch

Introduction: Since we last discussed the other online learning , Stanford has very visibly started pushing mass teaching in AI , Machine Learning , and Databases . In retrospect, it’s not too surprising that the next step up in serious online teaching experiments are occurring at the computer science department of a university embedded in the land of startups. Numbers on the order of 100000 are quite significant—similar in scale to the number of computer science undergraduate students/year in the US. Although these populations surely differ, the fact that they could overlap is worth considering for the future. It’s too soon to say how successful these classes will be and there are many easy criticisms to make: Registration != Learning … but if only 1/10th complete these classes, the scale of teaching still surpasses the scale of any traditional process. 1st year excitement != nth year routine … but if only 1/10th take future classes, the scale of teaching still surpass

5 0.8483572 318 hunch net-2008-09-26-The SODA Program Committee

Introduction: Claire asked me to be on the SODA program committee this year, which was quite a bit of work. I had a relatively light load—merely 49 theory papers. Many of these papers were not on subjects that I was expert about, so (as is common for theory conferences) I found various reviewers that I trusted to help review the papers. I ended up reviewing about 1/3 personally. There were a couple instances where I ended up overruling a subreviewer whose logic seemed off, but otherwise I generally let their reviews stand. There are some differences in standards for paper reviews between the machine learning and theory communities. In machine learning it is expected that a review be detailed, while in the theory community this is often not the case. Every paper given to me ended up with a review varying between somewhat and very detailed. I’m sure not every author was happy with the outcome. While we did our best to make good decisions, they were difficult decisions to make. For exam

6 0.83425051 303 hunch net-2008-06-09-The Minimum Sample Complexity of Importance Weighting

7 0.72720855 327 hunch net-2008-11-16-Observations on Linearity for Reductions to Regression

8 0.70207894 466 hunch net-2012-06-05-ICML acceptance statistics

9 0.67602015 116 hunch net-2005-09-30-Research in conferences

10 0.65531009 75 hunch net-2005-05-28-Running A Machine Learning Summer School

11 0.65400577 403 hunch net-2010-07-18-ICML & COLT 2010

12 0.65298146 461 hunch net-2012-04-09-ICML author feedback is open

13 0.65004855 464 hunch net-2012-05-03-Microsoft Research, New York City

14 0.64019418 105 hunch net-2005-08-23-(Dis)similarities between academia and open source programmers

15 0.63723338 463 hunch net-2012-05-02-ICML: Behind the Scenes

16 0.63630748 452 hunch net-2012-01-04-Why ICML? and the summer conferences

17 0.63467097 449 hunch net-2011-11-26-Giving Thanks

18 0.62767583 203 hunch net-2006-08-18-Report of MLSS 2006 Taipei

19 0.62480223 40 hunch net-2005-03-13-Avoiding Bad Reviewing

20 0.6242497 333 hunch net-2008-12-27-Adversarial Academia