hunch_net hunch_net-2012 hunch_net-2012-452 knowledge-graph by maker-knowledge-mining

452 hunch net-2012-01-04-Why ICML? and the summer conferences


meta info for this blog

Source: html

Introduction: Here’s a quick reference for summer ML-related conferences sorted by due date:

Conference  Due date  Location                                     Reviewing
KDD         Feb 10    August 12-16, Beijing, China                 Single Blind
COLT        Feb 14    June 25-June 27, Edinburgh, Scotland         Single Blind? (historically)
ICML        Feb 24    June 26-July 1, Edinburgh, Scotland          Double Blind, author response, zero SPOF
UAI         March 30  August 15-17, Catalina Islands, California   Double Blind, author response

Geographically, this is greatly dispersed and the UAI/KDD conflict is unfortunate. Machine Learning conferences are triannual now, between NIPS, AIStat, and ICML. This has not always been the case: the academic default is annual summer conferences, then NIPS started with a December conference, and now AIStat has grown into an April conference. However, the first claim is not quite correct. NIPS and AIStat have few competing venues while ICML implicitly competes with many other conferences accepting machine learning related papers.


Summary: the most important sentences generated by tfidf model

sentIndex sentText sentNum sentScore

1 NIPS and AIStat have few competing venues while ICML implicitly competes with many other conferences accepting machine learning related papers. [sent-6, score-0.326]

2 COLT was historically a conference for learning-interested Computer Science theory people. [sent-8, score-0.349]

3 A significant subset of COLT papers could easily be published at ICML instead. [sent-10, score-0.345]

4 ICML now has a significant theory community, including many pure theory papers and significant overlap with COLT attendees. [sent-11, score-0.689]

5 Good candidates for an ICML submission are learning theory papers motivated by real machine learning problems (example: the agnostic active learning paper) or which propose and analyze new plausibly useful algorithms (example: the adaptive gradient papers). [sent-12, score-0.561]

6 The other is that ICML is committed to fair reviewing—papers are double blind so reviewers are not forced to take into account the author identity. [sent-18, score-0.57]

7 Plenty of people will argue that author names don’t matter to them, but I’ve personally seen several cases as a reviewer where author identity affected the decision, typically towards favoring insiders or bigwigs at theory conferences as common sense would suggest. [sent-19, score-0.56]

8 The double blind aspect of ICML reviewing is an open invitation to outsiders to submit to ICML. [sent-20, score-0.485]

9 Many UAI papers could easily go to ICML because they are explicitly about machine learning or connections with machine learning. [sent-21, score-0.596]

10 For example, pure prediction markets are a stretch for ICML, but connections between machine learning and prediction markets, which seem to come up in multiple ways, are a good fit. [sent-22, score-0.463]

11 ICML provides a significantly larger potential audience and, due to its size, tends to be more diverse. [sent-31, score-0.284]

12 KDD is a large conference (a bit larger than ICML by attendance) which, as I understand it, initially started from the viewpoint of database people trying to do interesting things with the data they had. [sent-32, score-0.286]

13 Significant parts of the academic track are about machine learning technology and could have been submitted to ICML instead. [sent-34, score-0.342]

14 I was impressed by the double robust sampling work and the out of core learning paper is cool. [sent-35, score-0.325]

15 KDD doesn’t do double blind review, which was discussed above. [sent-39, score-0.386]

16 To me, a more significant drawback of KDD is the ACM paywall. [sent-40, score-0.285]

17 Depending on many circumstances, ICML might be a good candidate for a place to send a paper on a new empirically useful piece of machine learning technology. [sent-52, score-0.321]

18 Machine Learning has grown radically and gone industrial over the last decade, providing plenty of motivation for a conference on developing new core machine learning technology. [sent-54, score-0.477]

19 In most cases, the best place to send a paper is to the conference where it will be most appreciated. [sent-56, score-0.318]

20 So, when the choice is unclear, sending the paper to a conference designed simultaneously for fair high quality reviewing and broad distribution of your work is a good call as it provides the most meaningful acceptance. [sent-58, score-0.482]
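
How these sentence scores are produced is not shown on this page. As a minimal sketch of tfidf-based sentence scoring, assuming a scikit-learn pipeline (the sentence texts and the averaging rule are illustrative assumptions, not the verbatim maker-knowledge-mining code):

# Hypothetical sketch: rank a post's sentences by average tfidf weight.
# Assumes scikit-learn; the actual pipeline behind the scores above is not shown.
from sklearn.feature_extraction.text import TfidfVectorizer

sentences = [
    "NIPS and AIStat have few competing venues while ICML implicitly competes ...",
    "COLT was historically a conference for learning-interested CS theory people.",
    # ... the remaining sentences of the post ...
]

vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(sentences)                 # rows: sentences, cols: terms

# Score each sentence by the mean tfidf weight of the terms it contains.
scores = X.sum(axis=1).A1 / (X != 0).sum(axis=1).A1
for score, sent in sorted(zip(scores, sentences), reverse=True):
    print(round(score, 3), sent[:60])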


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('icml', 0.364), ('blind', 0.213), ('paywall', 0.18), ('double', 0.173), ('conferences', 0.163), ('kdd', 0.162), ('conference', 0.16), ('feb', 0.148), ('acm', 0.139), ('colt', 0.131), ('aistat', 0.128), ('author', 0.119), ('overlap', 0.116), ('uai', 0.115), ('significant', 0.105), ('machine', 0.1), ('reviewing', 0.099), ('scotland', 0.098), ('circumstances', 0.098), ('papers', 0.096), ('theory', 0.096), ('edinburgh', 0.093), ('historically', 0.093), ('connections', 0.093), ('paper', 0.089), ('easily', 0.083), ('grown', 0.082), ('year', 0.08), ('motivated', 0.08), ('august', 0.075), ('pure', 0.075), ('tends', 0.075), ('community', 0.073), ('audience', 0.072), ('plenty', 0.072), ('date', 0.07), ('june', 0.07), ('send', 0.069), ('provides', 0.069), ('larger', 0.068), ('prediction', 0.066), ('fair', 0.065), ('nips', 0.065), ('learning', 0.063), ('personally', 0.063), ('response', 0.063), ('could', 0.061), ('academic', 0.059), ('technology', 0.059), ('viewpoint', 0.058)]
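
A minimal sketch of how a topN-words list like the one above could be extracted, again assuming a scikit-learn TfidfVectorizer (an assumption about tooling; the corpus contents are placeholders):

# Hypothetical sketch: top-50 tfidf terms for one post within a corpus.
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = ["...full text of this post...", "...texts of other hunch.net posts..."]
vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(corpus)

weights = X[0].toarray().ravel()                 # tfidf weights for post 0
terms = vec.get_feature_names_out()
top50 = sorted(zip(terms, weights), key=lambda p: -p[1])[:50]
print([(t, round(w, 3)) for t, w in top50])      # e.g. [('icml', 0.364), ...]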

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0000001 452 hunch net-2012-01-04-Why ICML? and the summer conferences


2 0.31203076 453 hunch net-2012-01-28-Why COLT?

Introduction: By Shie and Nati Following John’s advertisement for submitting to ICML, we thought it appropriate to highlight the advantages of COLT, and the reasons it is often the best place for theory papers. We would like to emphasize that we both respect ICML, and are active in ICML, both as authors and as area chairs, and certainly are not arguing that ICML is a bad place for your papers. For many papers, ICML is the best venue. But for many theory papers, COLT is a better and more appropriate place. Why should you submit to COLT? By-and-large, theory papers go to COLT. This is the tradition of the field and most theory papers are sent to COLT. This is the place to present your ground-breaking theorems and new models that will shape the theory of machine learning. COLT is more focused than ICML, with a single track session. Unlike ICML, the norm in COLT is for people to sit through most sessions, and hear most of the talks presented. There is also often a lively discussion followi

3 0.28777182 65 hunch net-2005-05-02-Reviewing techniques for conferences

Introduction: The many reviews following the many paper deadlines are just about over. AAAI and ICML in particular were experimenting with several reviewing techniques. Double Blind: AAAI and ICML were both double blind this year. It seemed (overall) beneficial, but two problems arose. For theoretical papers, with a lot to say, authors often leave out the proofs. This is very hard to cope with under a double blind review because (1) you can not trust the authors got the proof right but (2) a blanket “reject” hits many probably-good papers. Perhaps authors should more strongly favor proof-complete papers sent to double blind conferences. On the author side, double blind reviewing is actually somewhat disruptive to research. In particular, it discourages the author from talking about the subject, which is one of the mechanisms of research. This is not a great drawback, but it is one not previously appreciated. Author feedback: AAAI and ICML did author feedback this year. It seem

4 0.2846576 395 hunch net-2010-04-26-Compassionate Reviewing

Introduction: Most long conversations between academics seem to converge on the topic of reviewing where almost no one is happy. A basic question is: Should most people be happy? The case against is straightforward. Anyone who watches the flow of papers realizes that most papers amount to little in the longer term. By its nature, research is brutal, where the second-best method is worthless, and the second person to discover things typically gets no credit. If you think about this for a moment, it’s very different from most other human endeavors. The second best migrant laborer, construction worker, manager, conductor, quarterback, etc… all can manage quite well. If a reviewer has even a vaguely predictive sense of what’s important in the longer term, then most people submitting papers will be unhappy. But this argument unravels, in my experience. Perhaps half of reviews are thoughtless or simply wrong with a small part being simply malicious. And yet, I’m sure that most reviewers genuine

5 0.26928386 454 hunch net-2012-01-30-ICML Posters and Scope

Introduction: Normally, I don’t indulge in posters for ICML, but this year is naturally an exception for me. If you want one, there are a small number left here, if you sign up before February. It also seems worthwhile to give some sense of the scope and reviewing criteria for ICML for authors considering submitting papers. At ICML, the (very large) program committee does the reviewing which informs final decisions by area chairs on most papers. Program chairs set up the process, deal with exceptions or disagreements, and provide advice for the reviewing process. Providing advice is tricky (and easily misleading) because a conference is a community, and in the end the aggregate interests of the community determine the conference. Nevertheless, as a program chair this year it seems worthwhile to state the overall philosophy I have and what I plan to encourage (and occasionally discourage). At the highest level, I believe ICML exists to further research into machine learning, which I gene

6 0.2613301 437 hunch net-2011-07-10-ICML 2011 and the future

7 0.25808507 331 hunch net-2008-12-12-Summer Conferences

8 0.25003272 40 hunch net-2005-03-13-Avoiding Bad Reviewing

9 0.24148722 116 hunch net-2005-09-30-Research in conferences

10 0.21246901 422 hunch net-2011-01-16-2011 Summer Conference Deadline Season

11 0.21133138 387 hunch net-2010-01-19-Deadline Season, 2010

12 0.20490986 468 hunch net-2012-06-29-ICML survey and comments

13 0.2028162 89 hunch net-2005-07-04-The Health of COLT

14 0.19522096 484 hunch net-2013-06-16-Representative Reviewing

15 0.18876611 403 hunch net-2010-07-18-ICML & COLT 2010

16 0.17451827 283 hunch net-2008-01-07-2008 Summer Machine Learning Conference Schedule

17 0.17241347 17 hunch net-2005-02-10-Conferences, Dates, Locations

18 0.16939621 21 hunch net-2005-02-17-Learning Research Programs

19 0.16918556 141 hunch net-2005-12-17-Workshops as Franchise Conferences

20 0.16733252 472 hunch net-2012-08-27-NYAS ML 2012 and ICML 2013
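
The simValue numbers in these lists behave like cosine similarities between tfidf vectors: the same-blog entry scores approximately 1, and floating-point error would explain the 1.0000001 above. A minimal sketch under that assumption (post texts are placeholders):

# Hypothetical sketch: rank posts by cosine similarity of tfidf vectors.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

posts = {452: "...this post...", 453: "...Why COLT?...", 65: "...Reviewing techniques..."}
ids = list(posts)
X = TfidfVectorizer(stop_words="english").fit_transform(posts.values())

sims = cosine_similarity(X[0], X).ravel()        # similarity of post 452 to all
for i in sims.argsort()[::-1]:
    print(ids[i], round(float(sims[i]), 8))      # the post itself scores ~1.0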


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.361), (1, -0.357), (2, 0.131), (3, -0.145), (4, 0.012), (5, -0.15), (6, -0.08), (7, 0.0), (8, -0.006), (9, 0.034), (10, -0.025), (11, -0.001), (12, -0.042), (13, 0.071), (14, 0.069), (15, -0.025), (16, 0.018), (17, -0.059), (18, -0.05), (19, -0.006), (20, 0.066), (21, -0.093), (22, 0.052), (23, 0.134), (24, 0.011), (25, -0.07), (26, -0.013), (27, 0.09), (28, -0.093), (29, 0.071), (30, -0.022), (31, 0.056), (32, -0.087), (33, 0.062), (34, -0.006), (35, -0.038), (36, -0.005), (37, -0.017), (38, 0.059), (39, 0.023), (40, -0.027), (41, -0.007), (42, 0.047), (43, 0.009), (44, 0.009), (45, 0.022), (46, 0.008), (47, -0.044), (48, -0.04), (49, 0.031)]
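
The 50 (topicId, topicWeight) pairs above are this post's coordinates in a latent semantic space. A minimal sketch of LSI via truncated SVD of the tfidf matrix, assuming scikit-learn (gensim's LsiModel would be an equivalent choice; the tool actually used is not identified here):

# Hypothetical sketch: project tfidf vectors into a 50-topic LSI space.
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = ["...texts of many hunch.net posts..."]  # needs many documents in practice
X = TfidfVectorizer(stop_words="english").fit_transform(corpus)

lsi = TruncatedSVD(n_components=50, random_state=0)  # 50 topics, as listed above
Z = lsi.fit_transform(X)                             # rows: posts in topic space
print([(k, round(w, 3)) for k, w in enumerate(Z[0])])  # e.g. [(0, 0.361), ...]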

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.96493685 452 hunch net-2012-01-04-Why ICML? and the summer conferences


2 0.77111799 468 hunch net-2012-06-29-ICML survey and comments

Introduction: Just about nothing could keep me from attending ICML, except for Dora, who arrived on Monday. Consequently, I have only secondhand reports that the conference is going well. For those who are remote (like me) or after the conference (like everyone), Mark Reid has set up the ICML discussion site where you can comment on any paper or subscribe to papers. Authors are automatically subscribed to their own papers, so it should be possible to have a discussion significantly after the fact, as people desire. We also conducted a survey before the conference and have the survey results now. This can be compared with the ICML 2010 survey results. Looking at the comparable questions, we can sometimes order the answers to have scores ranging from 0 to 3 or 0 to 4 with 3 or 4 being best and 0 worst, then compute the average difference between 2012 and 2010. Glancing through them, I see: Most people found the papers they reviewed a good fit for their expertise (-.037 w.r.t 20

3 0.74625552 65 hunch net-2005-05-02-Reviewing techniques for conferences


4 0.74547535 453 hunch net-2012-01-28-Why COLT?


5 0.74031597 437 hunch net-2011-07-10-ICML 2011 and the future

Introduction: Unfortunately, I ended up sick for much of this ICML. I did manage to catch one interesting paper: Richard Socher, Cliff Lin, Andrew Y. Ng, and Christopher D. Manning, Parsing Natural Scenes and Natural Language with Recursive Neural Networks. I invited Richard to share his list of interesting papers, so hopefully we’ll hear from him soon. In the meantime, Paul and Hal have posted some lists. the future Joelle and I are program chairs for ICML 2012 in Edinburgh, which I previously enjoyed visiting in 2005. This is a huge responsibility that we hope to accomplish well. A part of this (perhaps the most fun part) is imagining how we can make ICML better. A key and critical constraint is choosing things that can be accomplished. So far we have: Colocation. The first thing we looked into was potential colocations. We quickly discovered that many other conferences precommitted their location. For the future, getting a colocation with ACL or SIGI

6 0.69027859 403 hunch net-2010-07-18-ICML & COLT 2010

7 0.68764305 331 hunch net-2008-12-12-Summer Conferences

8 0.67553014 116 hunch net-2005-09-30-Research in conferences

9 0.6717242 447 hunch net-2011-10-10-ML Symposium and ICML details

10 0.66897994 395 hunch net-2010-04-26-Compassionate Reviewing

11 0.66414648 454 hunch net-2012-01-30-ICML Posters and Scope

12 0.65621448 387 hunch net-2010-01-19-Deadline Season, 2010

13 0.65177566 89 hunch net-2005-07-04-The Health of COLT

14 0.64728349 422 hunch net-2011-01-16-2011 Summer Conference Deadline Season

15 0.62616551 416 hunch net-2010-10-29-To Vidoelecture or not

16 0.62403482 40 hunch net-2005-03-13-Avoiding Bad Reviewing

17 0.60673171 318 hunch net-2008-09-26-The SODA Program Committee

18 0.60457563 283 hunch net-2008-01-07-2008 Summer Machine Learning Conference Schedule

19 0.59202188 21 hunch net-2005-02-17-Learning Research Programs

20 0.58849108 184 hunch net-2006-06-15-IJCAI is out of season


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(27, 0.158), (38, 0.04), (42, 0.033), (48, 0.029), (49, 0.023), (53, 0.082), (55, 0.227), (56, 0.017), (58, 0.011), (64, 0.012), (73, 0.141), (88, 0.01), (92, 0.015), (94, 0.07), (95, 0.045)]
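
The lda weights above read as a sparse topic mixture for this post, with only the non-negligible topics listed. A minimal sketch, assuming scikit-learn's LatentDirichletAllocation over raw term counts and a guessed topic count of 100 (topic ids above run up to 95; both the tool and the count are assumptions):

# Hypothetical sketch: per-post LDA topic mixture over term counts.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

corpus = ["...texts of many hunch.net posts..."]  # needs many documents in practice
counts = CountVectorizer(stop_words="english").fit_transform(corpus)

lda = LatentDirichletAllocation(n_components=100, random_state=0)
theta = lda.fit_transform(counts)                # each row is a topic mixture
# Keep only topics above a small threshold, matching the sparse list above.
print([(k, round(w, 3)) for k, w in enumerate(theta[0]) if w > 0.01])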

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.9176591 452 hunch net-2012-01-04-Why ICML? and the summer conferences


2 0.89029789 257 hunch net-2007-07-28-Asking questions

Introduction: There are very substantial differences in how question asking is viewed culturally. For example, all of the following are common: If no one asks a question, then no one is paying attention. To ask a question is disrespectful of the speaker. Asking a question is admitting your own ignorance. The first view seems to be the right one for research, for several reasons. Research is quite hard—it’s difficult to guess how people won’t understand something in advance while preparing a presentation. Consequently, it’s very common to lose people. No worthwhile presenter wants that. Real understanding is precious. By asking a question, you are really declaring “I want to understand”, and everyone should respect that. Asking a question wakes you up. I don’t mean from “asleep” to “awake” but from “awake” to “really awake”. It’s easy to drift through something sort-of-understanding. When you ask a question, especially because you are on the spot, you will do much better.

3 0.87939274 395 hunch net-2010-04-26-Compassionate Reviewing


4 0.86543554 270 hunch net-2007-11-02-The Machine Learning Award goes to …

Introduction: Perhaps the biggest CS prize for research is the Turing Award, which has a $0.25M cash prize associated with it. It appears none of the prizes so far have been for anything like machine learning (the closest are perhaps database awards). In CS theory, there is the Gödel Prize, which is smaller and newer, offering a $5K prize along with (perhaps more importantly) recognition. One such award has been given for Machine Learning, to Robert Schapire and Yoav Freund for Adaboost. In Machine Learning, there seems to be no equivalent of these sorts of prizes. There are several plausible reasons for this: There is no coherent community. People drift in and out of the central conferences all the time. Most of the author names from 10 years ago do not occur in the conferences of today. In addition, the entire subject area is fairly new. There are at least a core group of people who have stayed around. Machine Learning work doesn’t last. Almost every paper is fo

5 0.86498052 453 hunch net-2012-01-28-Why COLT?


6 0.86029702 116 hunch net-2005-09-30-Research in conferences

7 0.85601175 40 hunch net-2005-03-13-Avoiding Bad Reviewing

8 0.84788823 90 hunch net-2005-07-07-The Limits of Learning Theory

9 0.84226471 437 hunch net-2011-07-10-ICML 2011 and the future

10 0.837865 476 hunch net-2012-12-29-Simons Institute Big Data Program

11 0.83353394 89 hunch net-2005-07-04-The Health of COLT

12 0.82926363 454 hunch net-2012-01-30-ICML Posters and Scope

13 0.82450163 331 hunch net-2008-12-12-Summer Conferences

14 0.81537628 466 hunch net-2012-06-05-ICML acceptance statistics

15 0.81523556 448 hunch net-2011-10-24-2011 ML symposium and the bears

16 0.81504524 484 hunch net-2013-06-16-Representative Reviewing

17 0.81461179 65 hunch net-2005-05-02-Reviewing techniques for conferences

18 0.81177795 423 hunch net-2011-02-02-User preferences for search engines

19 0.80901891 464 hunch net-2012-05-03-Microsoft Research, New York City

20 0.80563569 485 hunch net-2013-06-29-The Benefits of Double-Blind Review