hunch_net hunch_net-2008 hunch_net-2008-324 knowledge-graph by maker-knowledge-mining

324 hunch net-2008-11-09-A Healthy COLT


meta info for this blog

Source: html

Introduction: A while ago, we discussed the health of COLT. COLT 2008 substantially addressed my concerns. The papers were diverse and several were interesting. Attendance was up, which is particularly notable in Europe. In my opinion, the colocation with UAI and ICML was the best colocation since 1998. And, perhaps best of all, registration ended up being free for all students due to various grants from the Academy of Finland, Google, IBM, and Yahoo. A basic question is: what went right? There seem to be several answers. Cost-wise, COLT had sufficient grants to alleviate the high cost of the Euro, and location at a university substantially reduces the cost compared to a hotel. Organization-wise, the Finns were great with hordes of volunteers helping set everything up. Having too many volunteers is a good failure mode. Organization-wise, it was clear that all 3 program chairs were cooperating in designing the program. Facilities-wise, proximity in time and space made the colocation much more real than many others have been in the past. Program-wise, COLT notably had two younger program chairs, Tong and Rocco, which seemed to work well.


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 A while ago, we discussed the health of COLT. [sent-1, score-0.281]

2 COLT 2008 substantially addressed my concerns. [sent-2, score-0.243]

3 The papers were diverse and several were interesting. [sent-3, score-0.161]

4 Attendance was up, which is particularly notable in Europe. [sent-4, score-0.118]

5 In my opinion, the colocation with UAI and ICML was the best colocation since 1998. [sent-5, score-0.82]

6 And, perhaps best of all, registration ended up being free for all students due to various grants from the Academy of Finland, Google, IBM, and Yahoo. [sent-6, score-0.717]

7 Cost-wise, COLT had sufficient grants to alleviate the high cost of the Euro, and location at a university substantially reduces the cost compared to a hotel. [sent-9, score-1.262]

8 Organization-wise, the Finns were great with hordes of volunteers helping set everything up. [sent-10, score-0.668]

9 Having too many volunteers is a good failure mode. [sent-11, score-0.384]

10 Organization-wise, it was clear that all 3 program chairs were cooperating in designing the program. [sent-12, score-0.482]

11 Facilities-wise, proximity in time and space made the colocation much more real than many others have been in the past. [sent-13, score-0.659]

12 Program-wise, COLT notably had two younger program chairs, Tong and Rocco, which seemed to work well. [sent-14, score-0.515]
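As a rough illustration of how such a list can be produced: score every sentence by the total tfidf weight of its words, then keep the highest-scoring sentences in document order. The sketch below is a minimal reconstruction under that assumption; the tokenizer, the scoring rule, and the `summarize` helper are illustrative, not the mining pipeline's actual code.

```python
# A minimal sketch of tfidf-based extractive summarization (an assumption
# about how the list above was produced, not the pipeline's actual code).
from sklearn.feature_extraction.text import TfidfVectorizer

def summarize(sentences, top_k=12):
    """Score each sentence by the sum of its tfidf weights; return the
    top_k sentences with their indices and scores, in document order."""
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(sentences)
    scores = tfidf.sum(axis=1).A1  # one aggregate weight per sentence
    top = sorted(range(len(sentences)), key=scores.__getitem__, reverse=True)[:top_k]
    return [(i, scores[i], sentences[i]) for i in sorted(top)]

sentences = [
    "A while ago, we discussed the health of COLT.",
    "COLT 2008 substantially addressed my concerns.",
    "The papers were diverse and several were interesting.",
]
for i, score, text in summarize(sentences, top_k=2):
    print(f"{text} [sent-{i}, score-{score:.3f}]")
```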


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('colocation', 0.364), ('volunteers', 0.301), ('colt', 0.267), ('grants', 0.251), ('chairs', 0.186), ('proximity', 0.162), ('younger', 0.162), ('hordes', 0.162), ('euro', 0.162), ('rocco', 0.15), ('finland', 0.15), ('cost', 0.148), ('reduces', 0.142), ('academy', 0.135), ('substantially', 0.134), ('notably', 0.13), ('program', 0.125), ('opinion', 0.121), ('tong', 0.121), ('helping', 0.118), ('notable', 0.118), ('ended', 0.115), ('health', 0.115), ('addressed', 0.109), ('went', 0.109), ('ibm', 0.107), ('university', 0.107), ('attendance', 0.105), ('designing', 0.103), ('registration', 0.101), ('diverse', 0.099), ('location', 0.099), ('seemed', 0.098), ('yahoo', 0.094), ('best', 0.092), ('google', 0.092), ('compared', 0.089), ('uai', 0.088), ('ago', 0.087), ('everything', 0.087), ('failure', 0.083), ('students', 0.083), ('sufficient', 0.08), ('discussed', 0.079), ('free', 0.075), ('space', 0.072), ('clear', 0.068), ('high', 0.064), ('several', 0.062), ('others', 0.061)]
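The (wordName, wordTfidf) pairs above are simply the heaviest coordinates of this post's tfidf vector, and the ranked list that follows can be reproduced by cosine similarity between such vectors. A minimal sketch under that assumption, with a three-post placeholder corpus standing in for the hunch.net archive:

```python
# A minimal sketch of tfidf top-word extraction and cosine-similarity
# ranking; the tiny corpus and titles are placeholders, not the real archive.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

titles = ["A Healthy COLT", "The Health of COLT", "Why COLT?"]
texts = [
    "COLT 2008 substantially addressed my concerns about the health of COLT",
    "the health of COLT has been questioned over the last few years",
    "for many theory papers COLT is a better and more appropriate place",
]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(texts)  # shape: (n_posts, n_terms)

# Top words of post 0, analogous to the (wordName, wordTfidf) list above.
weights = tfidf[0].toarray().ravel()
terms = vectorizer.get_feature_names_out()
print(sorted(zip(terms, weights.round(3)), key=lambda t: -t[1])[:5])

# Similarity of post 0 to every post, analogous to the ranked list below.
sims = cosine_similarity(tfidf[0], tfidf).ravel()
for i in sims.argsort()[::-1]:
    print(f"{sims[i]:.3f}  {titles[i]}")
```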

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0 324 hunch net-2008-11-09-A Healthy COLT


2 0.2175453 89 hunch net-2005-07-04-The Health of COLT

Introduction: The health of COLT (Conference on Learning Theory or Computational Learning Theory depending on who you ask) has been questioned over the last few years. Low points for the conference occurred when EuroCOLT merged with COLT in 2001, and the attendance at the 2002 Sydney COLT fell to a new low. This occurred in the general context of machine learning conferences rising in both number and size over the last decade. Any discussion of why COLT has had difficulties is inherently controversial as is any story about well-intentioned people making the wrong decisions. Nevertheless, this may be worth discussing in the hope of avoiding problems in the future and general understanding. In any such discussion there is a strong tendency to identify with a conference/community in a patriotic manner that is detrimental to thinking. Keep in mind that conferences exist to further research. My understanding (I wasn’t around) is that COLT started as a subcommunity of the computer science

3 0.20288488 453 hunch net-2012-01-28-Why COLT?

Introduction: By Shie and Nati. Following John’s advertisement for submitting to ICML, we thought it appropriate to highlight the advantages of COLT, and the reasons it is often the best place for theory papers. We would like to emphasize that we both respect ICML, and are active in ICML, both as authors and as area chairs, and certainly are not arguing that ICML is a bad place for your papers. For many papers, ICML is the best venue. But for many theory papers, COLT is a better and more appropriate place. Why should you submit to COLT? By-and-large, theory papers go to COLT. This is the tradition of the field and most theory papers are sent to COLT. This is the place to present your ground-breaking theorems and new models that will shape the theory of machine learning. COLT is more focused than ICML with a single track session. Unlike ICML, the norm in COLT is for people to sit through most sessions, and hear most of the talks presented. There is also often a lively discussion followi

4 0.14521825 309 hunch net-2008-07-10-Interesting papers, ICML 2008

Introduction: Here are some papers from ICML 2008 that I found interesting. Risi Kondor and Karsten Borgwardt, The Skew Spectrum of Graphs. This paper is about a new family of functions on graphs which is invariant under node label permutation. They show that these quantities appear to yield good features for learning. Sanjoy Dasgupta and Daniel Hsu, Hierarchical sampling for active learning. This is the first published practical consistent active learning algorithm. The abstract is also pretty impressive. Lihong Li, Michael Littman, and Thomas Walsh, Knows What It Knows: A Framework For Self-Aware Learning. This is an attempt to create learning algorithms that know when they err (other work includes Vovk). It’s not yet clear to me what the right model for feature-dependent confidence intervals is. Novi Quadrianto, Alex Smola, Tiberio Caetano, and Quoc Viet Le, Estimating Labels from Label Proportions. This is an example of learning in a speciali

5 0.14105982 437 hunch net-2011-07-10-ICML 2011 and the future

Introduction: Unfortunately, I ended up sick for much of this ICML. I did manage to catch one interesting paper: Richard Socher, Cliff Lin, Andrew Y. Ng, and Christopher D. Manning, Parsing Natural Scenes and Natural Language with Recursive Neural Networks. I invited Richard to share his list of interesting papers, so hopefully we’ll hear from him soon. In the meantime, Paul and Hal have posted some lists. The future: Joelle and I are program chairs for ICML 2012 in Edinburgh, which I previously enjoyed visiting in 2005. This is a huge responsibility that we hope to accomplish well. A part of this (perhaps the most fun part) is imagining how we can make ICML better. A key and critical constraint is choosing things that can be accomplished. So far we have: Colocation. The first thing we looked into was potential colocations. We quickly discovered that many other conferences precommitted their location. For the future, getting a colocation with ACL or SIGI

6 0.11662339 452 hunch net-2012-01-04-Why ICML? and the summer conferences

7 0.11244772 86 hunch net-2005-06-28-The cross validation problem: cash reward

8 0.10790846 416 hunch net-2010-10-29-To Vidoelecture or not

9 0.098026283 403 hunch net-2010-07-18-ICML & COLT 2010

10 0.096015528 374 hunch net-2009-10-10-ALT 2009

11 0.095974162 283 hunch net-2008-01-07-2008 Summer Machine Learning Conference Schedule

12 0.093488432 468 hunch net-2012-06-29-ICML survey and comments

13 0.092619516 482 hunch net-2013-05-04-COLT and ICML registration

14 0.091640614 394 hunch net-2010-04-24-COLT Treasurer is now Phil Long

15 0.088349998 422 hunch net-2011-01-16-2011 Summer Conference Deadline Season

16 0.086231731 132 hunch net-2005-11-26-The Design of an Optimal Research Environment

17 0.086072795 484 hunch net-2013-06-16-Representative Reviewing

18 0.085460074 47 hunch net-2005-03-28-Open Problems for Colt

19 0.084804058 75 hunch net-2005-05-28-Running A Machine Learning Summer School

20 0.08306966 242 hunch net-2007-04-30-COLT 2007


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.157), (1, -0.145), (2, -0.004), (3, -0.069), (4, -0.023), (5, -0.064), (6, -0.054), (7, 0.037), (8, -0.069), (9, -0.007), (10, 0.079), (11, 0.059), (12, -0.095), (13, 0.063), (14, 0.122), (15, -0.003), (16, 0.117), (17, 0.005), (18, -0.072), (19, 0.025), (20, 0.068), (21, 0.121), (22, 0.182), (23, 0.072), (24, -0.029), (25, 0.082), (26, 0.073), (27, 0.014), (28, 0.026), (29, 0.037), (30, 0.024), (31, -0.049), (32, 0.011), (33, 0.072), (34, 0.001), (35, 0.071), (36, 0.023), (37, 0.002), (38, -0.031), (39, -0.028), (40, 0.012), (41, -0.022), (42, -0.033), (43, -0.054), (44, -0.005), (45, -0.081), (46, 0.053), (47, -0.009), (48, 0.015), (49, 0.063)]
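These (topicId, topicWeight) pairs look like a 50-dimensional latent semantic indexing embedding, i.e. a truncated SVD applied to the tfidf matrix. The sketch below is a minimal reconstruction under that assumption; the corpus is a placeholder, and only 2 components are used so the toy example runs.

```python
# A minimal sketch of LSI topic weights via truncated SVD on tfidf vectors;
# the corpus and the dimensionality are placeholder assumptions.
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

texts = [
    "COLT 2008 substantially addressed concerns about the health of COLT",
    "the health of COLT has been questioned over the last few years",
    "theory papers are traditionally sent to COLT",
    "interesting papers at ICML 2008 on active learning",
]

tfidf = TfidfVectorizer(stop_words="english").fit_transform(texts)
lsi = TruncatedSVD(n_components=2, random_state=0)  # the list above suggests 50
topic_weights = lsi.fit_transform(tfidf)            # shape: (n_posts, n_topics)

print(list(enumerate(topic_weights[0].round(3))))   # (topicId, topicWeight) for post 0
# Cosine similarity in the latent space yields a ranked list like the one below.
print(cosine_similarity(topic_weights[:1], topic_weights).round(3))
```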

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.98596096 324 hunch net-2008-11-09-A Healthy COLT


2 0.7435053 453 hunch net-2012-01-28-Why COLT?

Introduction: By Shie and Nati. Following John’s advertisement for submitting to ICML, we thought it appropriate to highlight the advantages of COLT, and the reasons it is often the best place for theory papers. We would like to emphasize that we both respect ICML, and are active in ICML, both as authors and as area chairs, and certainly are not arguing that ICML is a bad place for your papers. For many papers, ICML is the best venue. But for many theory papers, COLT is a better and more appropriate place. Why should you submit to COLT? By-and-large, theory papers go to COLT. This is the tradition of the field and most theory papers are sent to COLT. This is the place to present your ground-breaking theorems and new models that will shape the theory of machine learning. COLT is more focused than ICML with a single track session. Unlike ICML, the norm in COLT is for people to sit through most sessions, and hear most of the talks presented. There is also often a lively discussion followi

3 0.70554203 394 hunch net-2010-04-24-COLT Treasurer is now Phil Long

Introduction: For about 5 years, I’ve been the treasurer of the Association for Computational Learning, otherwise known as COLT, taking over from John Case before me. A transfer of duties to Phil Long is now about complete. This probably matters to almost no one, but I wanted to describe things a bit for those interested. The immediate impetus for this decision was unhappiness over reviewing decisions at COLT 2009, one as an author and several as a member of the program committee. I seem to have disagreements fairly often about what is important work, partly because I’m focused on learning theory with practical implications, partly because I define learning theory more broadly than is typical amongst COLT members, and partly because COLT suffers a bit from insider-clique issues. The degree to which these issues come up varies substantially each year, so last year is not predictive of this one. And, it’s important to understand that COLT remains healthy with these issues not nearly so bad

4 0.70329952 89 hunch net-2005-07-04-The Health of COLT

Introduction: The health of COLT (Conference on Learning Theory or Computational Learning Theory depending on who you ask) has been questioned over the last few years. Low points for the conference occurred when EuroCOLT merged with COLT in 2001, and the attendance at the 2002 Sydney COLT fell to a new low. This occurred in the general context of machine learning conferences rising in both number and size over the last decade. Any discussion of why COLT has had difficulties is inherently controversial as is any story about well-intentioned people making the wrong decisions. Nevertheless, this may be worth discussing in the hope of avoiding problems in the future and general understanding. In any such discussion there is a strong tendency to identify with a conference/community in a patriotic manner that is detrimental to thinking. Keep in mind that conferences exist to further research. My understanding (I wasn’t around) is that COLT started as a subcommunity of the computer science

5 0.64751059 86 hunch net-2005-06-28-The cross validation problem: cash reward

Introduction: I just presented the cross validation problem at COLT. The problem now has a cash prize (up to $500) associated with it—see the presentation for details. The write-up for COLT.

6 0.57574689 458 hunch net-2012-03-06-COLT-ICML Open Questions and ICML Instructions

7 0.57343924 88 hunch net-2005-07-01-The Role of Impromptu Talks

8 0.55676794 242 hunch net-2007-04-30-COLT 2007

9 0.4968828 184 hunch net-2006-06-15-IJCAI is out of season

10 0.4902364 437 hunch net-2011-07-10-ICML 2011 and the future

11 0.48568419 482 hunch net-2013-05-04-COLT and ICML registration

12 0.47705004 452 hunch net-2012-01-04-Why ICML? and the summer conferences

13 0.46810547 447 hunch net-2011-10-10-ML Symposium and ICML details

14 0.45332223 374 hunch net-2009-10-10-ALT 2009

15 0.45218399 429 hunch net-2011-04-06-COLT open questions

16 0.44628128 416 hunch net-2010-10-29-To Vidoelecture or not

17 0.44080502 47 hunch net-2005-03-28-Open Problems for Colt

18 0.41967249 368 hunch net-2009-08-26-Another 10-year paper in Machine Learning

19 0.41921568 464 hunch net-2012-05-03-Microsoft Research, New York City

20 0.41584486 147 hunch net-2006-01-08-Debugging Your Brain


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(27, 0.134), (51, 0.46), (53, 0.053), (55, 0.172), (95, 0.067)]
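The sparse (topicId, topicWeight) pairs here are consistent with per-document topic proportions from latent Dirichlet allocation, reported only where the weight is non-negligible. A minimal sketch under that assumption; the corpus, the topic count, and the 0.05 reporting threshold are all placeholders.

```python
# A minimal sketch of per-post LDA topic weights; corpus, topic count,
# and the reporting threshold are assumptions, not the pipeline's settings.
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

texts = [
    "COLT 2008 substantially addressed concerns about the health of COLT",
    "the health of COLT has been questioned over the last few years",
    "theory papers are traditionally sent to COLT",
    "interesting papers at ICML 2008 on active learning",
]

counts = CountVectorizer(stop_words="english").fit_transform(texts)
lda = LatentDirichletAllocation(n_components=3, random_state=0)
doc_topics = lda.fit_transform(counts)  # each row is a topic distribution

# Report only topics with non-negligible weight, as in the list above.
print([(t, round(w, 3)) for t, w in enumerate(doc_topics[0]) if w > 0.05])
```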

similar blogs list:

simIndex simValue blogId blogTitle

1 0.95630383 489 hunch net-2013-09-20-No NY ML Symposium in 2013, and some good news

Introduction: There will be no New York ML Symposium this year. The core issue is that NYAS is disorganized by people leaving, pushing back the date, with the current candidate being a spring symposium on March 28. Gunnar and I were outvoted here—we were gung ho on organizing a fall symposium, but the rest of the committee wants to wait. In some good news, most of the ICML 2012 videos have been restored from a deep backup.

same-blog 2 0.87961864 324 hunch net-2008-11-09-A Healthy COLT


3 0.71750689 393 hunch net-2010-04-14-MLcomp: a website for objectively comparing ML algorithms

Introduction: Much of the success and popularity of machine learning has been driven by its practical impact. Of course, the evaluation of empirical work is an integral part of the field. But are the existing mechanisms for evaluating algorithms and comparing results good enough? We (Percy and Jake) believe there are currently a number of shortcomings: Incomplete Disclosure: You read a paper that proposes Algorithm A which is shown to outperform SVMs on two datasets. Great. But what about on other datasets? How sensitive is this result? What about compute time – does the algorithm take two seconds on a laptop or two weeks on a 100-node cluster? Lack of Standardization: Algorithm A beats Algorithm B on one version of a dataset. Algorithm B beats Algorithm A on another version yet uses slightly different preprocessing. Though doing a head-on comparison would be ideal, it would be tedious since the programs probably use different dataset formats and have a large array of options

4 0.70581418 334 hunch net-2009-01-07-Interesting Papers at SODA 2009

Introduction: Several talks seem potentially interesting to ML folks at this year’s SODA. Maria-Florina Balcan, Avrim Blum, and Anupam Gupta, Approximate Clustering without the Approximation. This paper gives reasonable algorithms with provable approximation guarantees for k-median and other notions of clustering. It’s conceptually interesting, because it’s the second example I’ve seen where NP hardness is subverted by changing the problem definition in a subtle but reasonable way. Essentially, they show that if any near-approximation to an optimal solution is good, then it’s computationally easy to find a near-optimal solution. This subtle shift bears serious thought. A similar one occurred in our ranking paper with respect to minimum feedback arcset. With two known examples, it suggests that many more NP-complete problems might be finessed into irrelevance in this style. Yury Lifshits and Shengyu Zhang, Combinatorial Algorithms for Nearest Neighbors, Near-Duplicates, and Smal

5 0.66004449 179 hunch net-2006-05-16-The value of the orthodox view of Boosting

Introduction: The term “boosting” comes from the idea of using a meta-algorithm which takes “weak” learners (that may be able to only barely predict slightly better than random) and turns them into strongly capable learners (which predict very well). Adaboost in 1995 was the first widely used (and useful) boosting algorithm, although there were theoretical boosting algorithms floating around since 1990 (see the bottom of this page). Since then, many different interpretations of why boosting works have arisen. There is significant discussion about these different views in the Annals of Statistics, including a response by Yoav Freund and Robert Schapire. I believe there is a great deal of value to be found in the original view of boosting (meta-algorithm for creating a strong learner from a weak learner). This is not a claim that one particular viewpoint obviates the value of all others, but rather that no other viewpoint seems to really capture important properties. Comparing wit

6 0.60128248 300 hunch net-2008-04-30-Concerns about the Large Scale Learning Challenge

7 0.46062899 235 hunch net-2007-03-03-All Models of Learning have Flaws

8 0.45100474 281 hunch net-2007-12-21-Vowpal Wabbit Code Release

9 0.44567117 416 hunch net-2010-10-29-To Vidoelecture or not

10 0.43182227 270 hunch net-2007-11-02-The Machine Learning Award goes to …

11 0.43140554 395 hunch net-2010-04-26-Compassionate Reviewing

12 0.42901927 452 hunch net-2012-01-04-Why ICML? and the summer conferences

13 0.42592353 90 hunch net-2005-07-07-The Limits of Learning Theory

14 0.42521018 403 hunch net-2010-07-18-ICML & COLT 2010

15 0.42448506 454 hunch net-2012-01-30-ICML Posters and Scope

16 0.42078778 439 hunch net-2011-08-01-Interesting papers at COLT 2011

17 0.42063993 453 hunch net-2012-01-28-Why COLT?

18 0.41526517 204 hunch net-2006-08-28-Learning Theory standards for NIPS 2006

19 0.41473317 309 hunch net-2008-07-10-Interesting papers, ICML 2008

20 0.41240332 406 hunch net-2010-08-22-KDD 2010