hunch_net hunch_net-2012 hunch_net-2012-453 knowledge-graph by maker-knowledge-mining

453 hunch net-2012-01-28-Why COLT?


meta info for this blog

Source: html

Introduction: By Shie and Nati. Following John’s advertisement for submitting to ICML, we thought it appropriate to highlight the advantages of COLT, and the reasons it is often the best place for theory papers. We would like to emphasize that we both respect ICML, and are active in ICML, both as authors and as area chairs, and certainly are not arguing that ICML is a bad place for your papers. For many papers, ICML is the best venue. But for many theory papers, COLT is a better and more appropriate place. Why should you submit to COLT? By-and-large, theory papers go to COLT. This is the tradition of the field and most theory papers are sent to COLT. This is the place to present your ground-breaking theorems and new models that will shape the theory of machine learning. COLT is more focused than ICML, with a single-track session. Unlike ICML, the norm in COLT is for people to sit through most sessions, and hear most of the talks presented. There is also often a lively discussion followi


Summary: the most important sentences generated by tfidf model

sentIndex sentText sentNum sentScore

1 By Shie and Nati. Following John’s advertisement for submitting to ICML, we thought it appropriate to highlight the advantages of COLT, and the reasons it is often the best place for theory papers. [sent-1, score-0.46]

2 We would like to emphasize that we both respect ICML, and are active in ICML, both as authors and as area chairs, and certainly are not arguing that ICML is a bad place for your papers. [sent-2, score-0.381]

3 This is the tradition of the field and most theory papers are sent to COLT. [sent-7, score-0.363]

4 This is the place to present your ground-breaking theorems and new models that will shape the theory of machine learning. [sent-8, score-0.265]

5 Additionally, this year COLT and ICML are tightly co-located, with joint plenary sessions (i. [sent-13, score-0.222]

6 some COLT papers will be presented in a plenary session to the entire combined COLT/ICML audience, as will some ICML papers), and many other opportunities for exposure to the wider ICML audience. [sent-15, score-0.428]

7 And so, by submitting to COLT, you have the potential of reaching both the captive theory audience at COLT and the wider ML audience at ICML. [sent-16, score-0.536]

8 The COLT program committee is composed entirely of established, mostly fairly senior, researchers. [sent-18, score-0.604]

9 Program committee members read and review papers themselves, or potentially use a sub-reviewer that they know personally and carefully select for the paper, but still check and maintain responsibility for the review. [sent-19, score-0.986]

10 Your paper will get reviewed by at least three program committee members, who will likely be experts on the topics covered by the paper. [sent-20, score-0.686]

11 The reviewing process is less rushed and program committee members (and sub-reviewers, where appropriate) are expected to do a careful job on each and every paper. [sent-23, score-1.051]

12 All papers are then discussed by the program committee, and there is generally significant and meaningful discussion of papers. [sent-24, score-0.394]

13 This also means the COLT reviewing process is far from having a “single point of failure”, as the paper will be carefully considered and argued for by multiple (senior) program committee members. [sent-25, score-0.908]

14 Program committee members have access to the author identities (as do area chairs in ICML), as this is essential in order to select sub-reviewers. [sent-28, score-0.952]

15 However, the author names do not appear on the papers, both to reduce the effect of first impressions and to allow program committee members to utilize reviewers who are truly blind to the authors’ identities. [sent-29, score-1.167]

16 It should be noted that the COLT anonymization guidelines are a bit more relaxed, which we hope makes it easier to create an anonymized version for conference submission (authors are still allowed, and even encouraged, to post their papers online, with their names on them of course). [sent-30, score-0.238]

17 Frankly, with the higher-quality, less random reviews, we feel it is not needed, and the hassle to authors and program committee members is not worth it. [sent-32, score-1.116]

18 However, the tradition in COLT, which we plan to follow, is to contact authors as needed during the review and discussion process to ask for clarification on issues that came up during review. [sent-33, score-0.519]

19 In particular, if a concern is raised on the soundness or other technical aspect of a paper, the authors will be contacted to give them a chance to set things straight. [sent-34, score-0.3]

20 But no, there is no generic author response where authors can argue and plead for acceptance. [sent-35, score-0.275]
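The sentNum/sentScore pairs above come from scoring each extracted sentence with a tfidf model. As a rough illustration, here is a minimal sketch of tfidf-based sentence scoring, assuming scikit-learn's TfidfVectorizer; the scoring rule (mean tfidf weight per sentence) is an assumption and will not reproduce the exact scores shown.

```python
# Minimal sketch of tfidf sentence scoring (assumed scorer: mean tfidf
# weight over a sentence's nonzero terms). Illustrative only; it will not
# reproduce the sentScore values in the list above.
from sklearn.feature_extraction.text import TfidfVectorizer

def top_sentences(sentences, k=3):
    """Rank sentences by the mean tfidf weight of their terms."""
    vectorizer = TfidfVectorizer(stop_words="english")
    matrix = vectorizer.fit_transform(sentences)  # one row per sentence
    # Mean tfidf weight over each sentence's nonzero terms.
    scores = matrix.sum(axis=1).A1 / (matrix.getnnz(axis=1) + 1e-9)
    return sorted(zip(scores, sentences), reverse=True)[:k]

sentences = [
    "For many theory papers, COLT is a better and more appropriate place.",
    "The COLT program committee is composed entirely of established researchers.",
    "But no, there is no generic author response.",
]
for score, sent in top_sentences(sentences, k=2):
    print(f"{score:.3f}  {sent}")
```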


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('colt', 0.451), ('committee', 0.359), ('program', 0.245), ('members', 0.224), ('icml', 0.215), ('authors', 0.172), ('papers', 0.149), ('plenary', 0.13), ('theory', 0.122), ('audience', 0.117), ('author', 0.103), ('review', 0.101), ('senior', 0.101), ('chairs', 0.099), ('appropriate', 0.096), ('select', 0.096), ('wider', 0.096), ('tradition', 0.092), ('sessions', 0.092), ('names', 0.089), ('place', 0.085), ('submitting', 0.084), ('process', 0.084), ('paper', 0.082), ('reviewers', 0.081), ('reviewing', 0.081), ('single', 0.076), ('submit', 0.075), ('advantages', 0.073), ('area', 0.071), ('technical', 0.07), ('needed', 0.07), ('blind', 0.066), ('reviews', 0.063), ('less', 0.058), ('hassle', 0.058), ('shape', 0.058), ('dedicated', 0.058), ('relaxed', 0.058), ('contacted', 0.058), ('additionally', 0.058), ('rigorous', 0.058), ('rebuttal', 0.058), ('seniority', 0.058), ('carefully', 0.057), ('arguing', 0.053), ('considerations', 0.053), ('frequently', 0.053), ('exposure', 0.053), ('impressions', 0.053)]
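The (wordName, wordTfidf) pairs above are the post's top-weighted tfidf terms, and the simValue numbers in the list below are similarity scores between posts. A hedged sketch of both computations, assuming scikit-learn and placeholder post texts (the mining pipeline's exact preprocessing is not specified here):

```python
# Sketch: top tfidf terms per post and cosine similarity between posts.
# Post texts below are placeholders, not the real blog contents.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

posts = {
    453: "COLT theory papers program committee reviewing plenary",
    452: "ICML summer conferences reviewing author response",
    324: "COLT colocation UAI ICML attendance grants",
}
ids = list(posts)
vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(posts[i] for i in ids)

# Top-weighted words for post 453, analogous to the topN-words list above.
row = X[ids.index(453)].toarray().ravel()
vocab = vectorizer.get_feature_names_out()
top = sorted(zip(vocab, row), key=lambda p: -p[1])[:5]
print([(w, round(v, 3)) for w, v in top])

# Cosine similarity of post 453 against all posts (the simValue column).
sims = cosine_similarity(X[ids.index(453)], X).ravel()
print(sorted(zip(sims, ids), reverse=True))
```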

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0000001 453 hunch net-2012-01-28-Why COLT?


2 0.31203076 452 hunch net-2012-01-04-Why ICML? and the summer conferences

Introduction: Here’s a quick reference for summer ML-related conferences sorted by due date:

Conference  Due date  Location  Reviewing
KDD   Feb 10    August 12-16, Beijing, China   Single Blind
COLT  Feb 14    June 25-June 27, Edinburgh, Scotland   Single Blind? (historically)
ICML  Feb 24    June 26-July 1, Edinburgh, Scotland   Double Blind, author response, zero SPOF
UAI   March 30  August 15-17, Catalina Islands, California   Double Blind, author response

Geographically, this is greatly dispersed and the UAI/KDD conflict is unfortunate. Machine Learning conferences are triannual now, between NIPS, AIStat, and ICML. This has not always been the case: the academic default is annual summer conferences, then NIPS started with a December conference, and now AIStat has grown into an April conference. However, the first claim is not quite correct. NIPS and AIStat have few competing venues while ICML implicitly competes with many other conf

3 0.29541099 437 hunch net-2011-07-10-ICML 2011 and the future

Introduction: Unfortunately, I ended up sick for much of this ICML. I did manage to catch one interesting paper: Richard Socher, Cliff Lin, Andrew Y. Ng, and Christopher D. Manning, Parsing Natural Scenes and Natural Language with Recursive Neural Networks. I invited Richard to share his list of interesting papers, so hopefully we’ll hear from him soon. In the meantime, Paul and Hal have posted some lists. the future Joelle and I are program chairs for ICML 2012 in Edinburgh, which I previously enjoyed visiting in 2005. This is a huge responsibility that we hope to accomplish well. A part of this (perhaps the most fun part) is imagining how we can make ICML better. A key and critical constraint is choosing things that can be accomplished. So far we have: Colocation. The first thing we looked into was potential colocations. We quickly discovered that many other conferences precommitted their location. For the future, getting a colocation with ACL or SIGI

4 0.29299176 89 hunch net-2005-07-04-The Health of COLT

Introduction: The health of COLT (Conference on Learning Theory or Computational Learning Theory depending on who you ask) has been questioned over the last few years. Low points for the conference occurred when EuroCOLT merged with COLT in 2001, and the attendance at the 2002 Sydney COLT fell to a new low. This occurred in the general context of machine learning conferences rising in both number and size over the last decade. Any discussion of why COLT has had difficulties is inherently controversial as is any story about well-intentioned people making the wrong decisions. Nevertheless, this may be worth discussing in the hope of avoiding problems in the future and general understanding. In any such discussion there is a strong tendency to identify with a conference/community in a patriotic manner that is detrimental to thinking. Keep in mind that conferences exist to further research. My understanding (I wasn’t around) is that COLT started as a subcommunity of the computer science

5 0.26622063 484 hunch net-2013-06-16-Representative Reviewing

Introduction: When thinking about how best to review papers, it seems helpful to have some conception of what good reviewing is. As far as I can tell, this is almost always only discussed in the specific context of a paper (i.e. your rejected paper), or at most an area (i.e. what a “good paper” looks like for that area) rather than general principles. Neither individual papers nor areas are sufficiently general for a large conference—every paper differs in the details, and what if you want to build a new area and/or cross areas? An unavoidable reason for reviewing is that the community of research is too large. In particular, it is not possible for a researcher to read every paper which someone thinks might be of interest. This reason for reviewing exists independent of constraints on rooms or scheduling formats of individual conferences. Indeed, history suggests that physical constraints are relatively meaningless over the long term — growing conferences simply use more rooms and/or change fo

6 0.23999655 304 hunch net-2008-06-27-Reviewing Horror Stories

7 0.2368523 454 hunch net-2012-01-30-ICML Posters and Scope

8 0.23497254 318 hunch net-2008-09-26-The SODA Program Committee

9 0.21029922 458 hunch net-2012-03-06-COLT-ICML Open Questions and ICML Instructions

10 0.20989299 65 hunch net-2005-05-02-Reviewing techniques for conferences

11 0.20383964 461 hunch net-2012-04-09-ICML author feedback is open

12 0.20288488 324 hunch net-2008-11-09-A Healthy COLT

13 0.20005669 86 hunch net-2005-06-28-The cross validation problem: cash reward

14 0.19305907 320 hunch net-2008-10-14-Who is Responsible for a Bad Review?

15 0.19044225 468 hunch net-2012-06-29-ICML survey and comments

16 0.1896776 40 hunch net-2005-03-13-Avoiding Bad Reviewing

17 0.18808562 403 hunch net-2010-07-18-ICML & COLT 2010

18 0.18466169 221 hunch net-2006-12-04-Structural Problems in NIPS Decision Making

19 0.18292062 116 hunch net-2005-09-30-Research in conferences

20 0.17260499 180 hunch net-2006-05-21-NIPS paper evaluation criteria


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.27), (1, -0.331), (2, 0.258), (3, -0.035), (4, 0.041), (5, -0.012), (6, -0.068), (7, -0.029), (8, -0.078), (9, -0.077), (10, 0.071), (11, 0.023), (12, -0.077), (13, 0.1), (14, 0.146), (15, -0.074), (16, 0.165), (17, -0.039), (18, -0.084), (19, 0.023), (20, 0.032), (21, 0.064), (22, 0.171), (23, 0.102), (24, -0.008), (25, 0.043), (26, 0.025), (27, 0.054), (28, 0.044), (29, 0.044), (30, 0.015), (31, -0.003), (32, 0.019), (33, 0.05), (34, -0.053), (35, 0.083), (36, 0.008), (37, 0.039), (38, 0.047), (39, -0.07), (40, 0.005), (41, 0.037), (42, -0.039), (43, -0.023), (44, -0.017), (45, 0.002), (46, 0.027), (47, -0.064), (48, -0.017), (49, 0.082)]
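The (topicId, topicWeight) pairs above are consistent with LSI: tfidf document vectors projected onto latent topics via a truncated SVD, with similarity then measured in topic space. A minimal sketch under that assumption (the topic count and all texts are illustrative, not the pipeline's actual configuration):

```python
# Sketch of LSI: truncated SVD over a tfidf matrix, then cosine
# similarity in the reduced topic space. All inputs are placeholders.
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

docs = [
    "COLT theory papers program committee reviewing",
    "ICML conference reviewing author response",
    "NIPS decision making program committee",
]
X = TfidfVectorizer().fit_transform(docs)

# The lists above use 50 topics; 2 suffices for this toy corpus.
svd = TruncatedSVD(n_components=2, random_state=0)
Z = svd.fit_transform(X)  # one (topicId -> topicWeight) row per document

print(Z[0])                                  # topic weights for doc 0
print(cosine_similarity(Z[:1], Z).ravel())   # LSI-space similarities
```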

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.99143827 453 hunch net-2012-01-28-Why COLT?


2 0.79518157 324 hunch net-2008-11-09-A Healthy COLT

Introduction: A while ago, we discussed the health of COLT. COLT 2008 substantially addressed my concerns. The papers were diverse and several were interesting. Attendance was up, which is particularly notable in Europe. In my opinion, the colocation with UAI and ICML was the best colocation since 1998. And, perhaps best of all, registration ended up being free for all students due to various grants from the Academy of Finland, Google, IBM, and Yahoo. A basic question is: what went right? There seem to be several answers. Cost-wise, COLT had sufficient grants to alleviate the high cost of the Euro and location at a university substantially reduces the cost compared to a hotel. Organization-wise, the Finns were great with hordes of volunteers helping set everything up. Having too many volunteers is a good failure mode. Organization-wise, it was clear that all 3 program chairs were cooperating in designing the program. Facilities-wise, proximity in time and space made

3 0.74142122 437 hunch net-2011-07-10-ICML 2011 and the future

Introduction: Unfortunately, I ended up sick for much of this ICML. I did manage to catch one interesting paper: Richard Socher, Cliff Lin, Andrew Y. Ng, and Christopher D. Manning, Parsing Natural Scenes and Natural Language with Recursive Neural Networks. I invited Richard to share his list of interesting papers, so hopefully we’ll hear from him soon. In the meantime, Paul and Hal have posted some lists. the future Joelle and I are program chairs for ICML 2012 in Edinburgh, which I previously enjoyed visiting in 2005. This is a huge responsibility that we hope to accomplish well. A part of this (perhaps the most fun part) is imagining how we can make ICML better. A key and critical constraint is choosing things that can be accomplished. So far we have: Colocation. The first thing we looked into was potential colocations. We quickly discovered that many other conferences precommitted their location. For the future, getting a colocation with ACL or SIGI

4 0.73709726 318 hunch net-2008-09-26-The SODA Program Committee

Introduction: Claire asked me to be on the SODA program committee this year, which was quite a bit of work. I had a relatively light load—merely 49 theory papers. Many of these papers were not on subjects that I was expert about, so (as is common for theory conferences) I found various reviewers that I trusted to help review the papers. I ended up reviewing about 1/3 personally. There were a couple instances where I ended up overruling a subreviewer whose logic seemed off, but otherwise I generally let their reviews stand. There are some differences in standards for paper reviews between the machine learning and theory communities. In machine learning it is expected that a review be detailed, while in the theory community this is often not the case. Every paper given to me ended up with a review varying between somewhat and very detailed. I’m sure not every author was happy with the outcome. While we did our best to make good decisions, they were difficult decisions to make. For exam

5 0.70902508 89 hunch net-2005-07-04-The Health of COLT

Introduction: The health of COLT (Conference on Learning Theory or Computational Learning Theory depending on who you ask) has been questioned over the last few years. Low points for the conference occurred when EuroCOLT merged with COLT in 2001, and the attendance at the 2002 Sydney COLT fell to a new low. This occurred in the general context of machine learning conferences rising in both number and size over the last decade. Any discussion of why COLT has had difficulties is inherently controversial as is any story about well-intentioned people making the wrong decisions. Nevertheless, this may be worth discussing in the hope of avoiding problems in the future and general understanding. In any such discussion there is a strong tendency to identify with a conference/community in a patriotic manner that is detrimental to thinking. Keep in mind that conferences exist to further research. My understanding (I wasn’t around) is that COLT started as a subcommunity of the computer science

6 0.70138901 394 hunch net-2010-04-24-COLT Treasurer is now Phil Long

7 0.69719398 88 hunch net-2005-07-01-The Role of Impromptu Talks

8 0.68071723 452 hunch net-2012-01-04-Why ICML? and the summer conferences

9 0.6716584 468 hunch net-2012-06-29-ICML survey and comments

10 0.65428144 221 hunch net-2006-12-04-Structural Problems in NIPS Decision Making

11 0.65379328 447 hunch net-2011-10-10-ML Symposium and ICML details

12 0.64339626 304 hunch net-2008-06-27-Reviewing Horror Stories

13 0.62438226 463 hunch net-2012-05-02-ICML: Behind the Scenes

14 0.62371123 461 hunch net-2012-04-09-ICML author feedback is open

15 0.61092019 484 hunch net-2013-06-16-Representative Reviewing

16 0.60027844 180 hunch net-2006-05-21-NIPS paper evaluation criteria

17 0.59890628 458 hunch net-2012-03-06-COLT-ICML Open Questions and ICML Instructions

18 0.58381242 320 hunch net-2008-10-14-Who is Responsible for a Bad Review?

19 0.57631028 315 hunch net-2008-09-03-Bidding Problems

20 0.56627268 403 hunch net-2010-07-18-ICML & COLT 2010


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(10, 0.03), (27, 0.178), (38, 0.025), (48, 0.035), (53, 0.02), (55, 0.272), (60, 0.195), (92, 0.025), (94, 0.089), (95, 0.043)]
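The sparse (topicId, topicWeight) pairs above look like an LDA document-topic distribution with near-zero weights dropped. A hedged sketch, assuming scikit-learn's LatentDirichletAllocation; the topic count, threshold, and texts are all assumptions:

```python
# Sketch of LDA topic weights: fit on raw term counts, then keep only
# topics above a small threshold, as in the sparse list above.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

docs = [
    "COLT theory papers program committee reviewing",
    "ICML conference reviewing author response rebuttal",
    "Peekaboom dataset images vision game play",
]
counts = CountVectorizer().fit_transform(docs)  # LDA expects raw counts

lda = LatentDirichletAllocation(n_components=3, random_state=0)
theta = lda.fit_transform(counts)  # rows sum to 1: per-doc topic weights

for doc_id, weights in enumerate(theta):
    kept = [(t, round(w, 3)) for t, w in enumerate(weights) if w > 0.1]
    print(doc_id, kept)  # sparse (topicId, topicWeight) style output
```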

similar blogs list:

simIndex simValue blogId blogTitle

1 0.92692441 159 hunch net-2006-02-27-The Peekaboom Dataset

Introduction: Luis von Ahn’s Peekaboom project has yielded data (830MB). Peekaboom is the second attempt (after Espgame) to produce a dataset which is useful for learning to solve vision problems based on voluntary game play. As a second attempt, it is meant to address all of the shortcomings of the first attempt. In particular: The locations of specific objects are provided by the data. The data collection is far more complete and extensive. The data consists of: The source images. (1 file per image, just short of 60K images.) The in-game events. (1 file per image, in a lispy syntax.) A description of the event language. There is a great deal of very specific and relevant data here so the hope that this will help solve vision problems seems quite reasonable.

same-blog 2 0.91584665 453 hunch net-2012-01-28-Why COLT?


3 0.85636759 395 hunch net-2010-04-26-Compassionate Reviewing

Introduction: Most long conversations between academics seem to converge on the topic of reviewing, where almost no one is happy. A basic question is: Should most people be happy? The case against is straightforward. Anyone who watches the flow of papers realizes that most papers amount to little in the longer term. By its nature research is brutal, where the second-best method is worthless, and the second person to discover things typically gets no credit. If you think about this for a moment, it’s very different from most other human endeavors. The second best migrant laborer, construction worker, manager, conductor, quarterback, etc… all can manage quite well. If a reviewer has even a vaguely predictive sense of what’s important in the longer term, then most people submitting papers will be unhappy. But this argument unravels, in my experience. Perhaps half of reviews are thoughtless or simply wrong with a small part being simply malicious. And yet, I’m sure that most reviewers genuine

4 0.84741467 270 hunch net-2007-11-02-The Machine Learning Award goes to …

Introduction: Perhaps the biggest CS prize for research is the Turing Award, which has a $0.25M cash prize associated with it. It appears none of the prizes so far have been for anything like machine learning (the closest are perhaps database awards). In CS theory, there is the Gödel Prize, which is smaller and newer, offering a $5K prize and, perhaps more importantly, recognition. One such award has been given for Machine Learning, to Robert Schapire and Yoav Freund for Adaboost. In Machine Learning, there seems to be no equivalent of these sorts of prizes. There are several plausible reasons for this: There is no coherent community. People drift in and out of the central conferences all the time. Most of the author names from 10 years ago do not occur in the conferences of today. In addition, the entire subject area is fairly new. There is at least a core group of people who have stayed around. Machine Learning work doesn’t last: almost every paper is fo

5 0.84134829 424 hunch net-2011-02-17-What does Watson mean?

Introduction: Watson convincingly beat the best champion Jeopardy! players. The apparent significance of this varies hugely, depending on your background knowledge about the related machine learning, NLP, and search technology. For a random person, this might seem evidence of serious machine intelligence, while for people working on the system itself, it probably seems like a reasonably good assemblage of existing technologies with several twists to make the entire system work. Above all, I think we should congratulate the people who managed to put together and execute this project—many years of effort by a diverse set of highly skilled people were needed to make this happen. In academia, it’s pretty difficult for one professor to assemble that quantity of talent, and in industry it’s rarely the case that such a capable group has both a worthwhile project and the support needed to pursue something like this for several years before success. Alina invited me to the Jeopardy watching party

6 0.83204406 452 hunch net-2012-01-04-Why ICML? and the summer conferences

7 0.82721794 90 hunch net-2005-07-07-The Limits of Learning Theory

8 0.82488614 379 hunch net-2009-11-23-ICML 2009 Workshops (and Tutorials)

9 0.82407582 40 hunch net-2005-03-13-Avoiding Bad Reviewing

10 0.81907743 448 hunch net-2011-10-24-2011 ML symposium and the bears

11 0.80652112 302 hunch net-2008-05-25-Inappropriate Mathematics for Machine Learning

12 0.80173767 20 hunch net-2005-02-15-ESPgame and image labeling

13 0.80127585 116 hunch net-2005-09-30-Research in conferences

14 0.80022842 65 hunch net-2005-05-02-Reviewing techniques for conferences

15 0.79937828 454 hunch net-2012-01-30-ICML Posters and Scope

16 0.79124999 437 hunch net-2011-07-10-ICML 2011 and the future

17 0.79101259 446 hunch net-2011-10-03-Monday announcements

18 0.78585017 271 hunch net-2007-11-05-CMU wins DARPA Urban Challenge

19 0.78276277 331 hunch net-2008-12-12-Summer Conferences

20 0.78084022 160 hunch net-2006-03-02-Why do people count for learning?