hunch_net hunch_net-2005 hunch_net-2005-47 knowledge-graph by maker-knowledge-mining

47 hunch net-2005-03-28-Open Problems for Colt


meta info for this blog

Source: html

Introduction: Adam Klivans and Rocco Servedio are looking for open (learning theory) problems for COLT. This is a good idea in the same way that the KDDcup challenge is a good idea: crisp problem definitions that anyone can attack yield solutions that advance science.


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Adam Klivans and Rocco Servedio are looking for open (learning theory) problems for COLT. [sent-1, score-0.384]

2 This is a good idea in the same way that the KDDcup challenge is a good idea: crisp problem definitions that anyone can attack yield solutions that advance science. [sent-2, score-2.239]


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('klivans', 0.332), ('servedio', 0.332), ('crisp', 0.307), ('kddcup', 0.307), ('rocco', 0.307), ('definitions', 0.241), ('attack', 0.235), ('adam', 0.229), ('idea', 0.225), ('advance', 0.214), ('yield', 0.18), ('challenge', 0.173), ('looking', 0.171), ('solutions', 0.171), ('science', 0.14), ('open', 0.14), ('anyone', 0.137), ('colt', 0.137), ('good', 0.106), ('theory', 0.1), ('way', 0.084), ('problems', 0.073), ('problem', 0.06), ('learning', 0.02)]
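The word weights above can be turned into the post-to-post scores shown below via cosine similarity over tf-idf vectors. A minimal sketch of that mechanic, assuming plain log-idf weighting and pre-tokenized posts (the actual pipeline behind these numbers may normalize differently):

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Compute simple tf * log-idf vectors (as sparse dicts) for tokenized docs."""
    n = len(docs)
    df = Counter()                      # document frequency of each word
    for doc in docs:
        df.update(set(doc))
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append({w: tf[w] * math.log(n / df[w]) for w in tf})
    return vectors

def cosine(u, v):
    """Cosine similarity between two sparse dict vectors."""
    dot = sum(u[w] * v.get(w, 0.0) for w in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical tokenized post titles, for illustration only
vecs = tfidf_vectors([
    ["open", "problems", "colt"],
    ["colt", "open", "questions"],
    ["clustering", "algorithms"],
])
```

Posts sharing rare words (here "colt", "open") score higher than unrelated posts, which is why the top-ranked similar blogs below are the other COLT open-problem posts.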

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.99999994 47 hunch net-2005-03-28-Open Problems for Colt

Introduction: Adam Klivans and Rocco Servedio are looking for open (learning theory) problems for COLT. This is a good idea in the same way that the KDDcup challenge is a good idea: crisp problem definitions that anyone can attack yield solutions that advance science.

2 0.22838546 326 hunch net-2008-11-11-COLT CFP

Introduction: Adam Klivans points out the COLT call for papers. The important points are: Due Feb 13. Montreal, June 18-21. This year, there is author feedback.

3 0.14465047 429 hunch net-2011-04-06-COLT open questions

Introduction: Alina and Jake point out the COLT Call for Open Questions due May 11. In general, this is cool, and worth doing if you can come up with a crisp question. In my case, I particularly enjoyed crafting an open question in precisely such a form that a critic targeting my papers would be forced to confront their fallacy or make a case for the reward. But less esoterically, this is a way to get the attention of some very smart people focused on a problem that really matters, which is the real value.

4 0.10375197 334 hunch net-2009-01-07-Interesting Papers at SODA 2009

Introduction: Several talks seem potentially interesting to ML folks at this year’s SODA. Maria-Florina Balcan, Avrim Blum, and Anupam Gupta, Approximate Clustering without the Approximation. This paper gives reasonable algorithms with provable approximation guarantees for k-median and other notions of clustering. It’s conceptually interesting because it’s the second example I’ve seen where NP hardness is subverted by changing the problem definition in a subtle but reasonable way. Essentially, they show that if any near-approximation to an optimal solution is good, then it’s computationally easy to find a near-optimal solution. This subtle shift bears serious thought. A similar one occurred in our ranking paper with respect to minimum feedback arcset. With two known examples, it suggests that many more NP-complete problems might be finessed into irrelevance in this style. Yury Lifshits and Shengyu Zhang, Combinatorial Algorithms for Nearest Neighbors, Near-Duplicates, and Smal

5 0.10339832 292 hunch net-2008-03-15-COLT Open Problems

Introduction: COLT has a call for open problems due March 21. I encourage anyone with a specifiable open problem to write it down and send it in. Just the effort of specifying an open problem precisely and concisely has been very helpful for my own solutions, and there is a substantial chance others will solve it. To increase the chance someone will take it up, you can even put a bounty on the solution. (Perhaps I should raise the $500 bounty on the K-fold cross-validation problem as it hasn’t yet been solved).

6 0.10139763 89 hunch net-2005-07-04-The Health of COLT

7 0.085460074 324 hunch net-2008-11-09-A Healthy COLT

8 0.085432321 100 hunch net-2005-08-04-Why Reinforcement Learning is Important

9 0.080594257 458 hunch net-2012-03-06-COLT-ICML Open Questions and ICML Instructions

10 0.073821709 453 hunch net-2012-01-28-Why COLT?

11 0.070283964 239 hunch net-2007-04-18-$50K Spock Challenge

12 0.068577424 86 hunch net-2005-06-28-The cross validation problem: cash reward

13 0.067689806 400 hunch net-2010-06-13-The Good News on Exploration and Learning

14 0.06057616 332 hunch net-2008-12-23-Use of Learning Theory

15 0.059997387 48 hunch net-2005-03-29-Academic Mechanism Design

16 0.059196714 105 hunch net-2005-08-23-(Dis)similarities between academia and open source programmers

17 0.056142569 481 hunch net-2013-04-15-NEML II

18 0.055492707 61 hunch net-2005-04-25-Embeddings: what are they good for?

19 0.054751068 96 hunch net-2005-07-21-Six Months

20 0.054398909 64 hunch net-2005-04-28-Science Fiction and Research


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.096), (1, -0.031), (2, -0.019), (3, -0.008), (4, -0.035), (5, -0.059), (6, -0.001), (7, 0.004), (8, -0.009), (9, -0.024), (10, 0.02), (11, 0.074), (12, -0.096), (13, 0.067), (14, 0.072), (15, -0.03), (16, 0.128), (17, -0.057), (18, -0.023), (19, 0.044), (20, -0.119), (21, 0.185), (22, 0.017), (23, -0.003), (24, -0.006), (25, 0.017), (26, 0.084), (27, 0.217), (28, -0.012), (29, 0.029), (30, 0.035), (31, 0.066), (32, 0.011), (33, 0.105), (34, 0.026), (35, -0.053), (36, -0.08), (37, 0.05), (38, 0.074), (39, -0.007), (40, 0.048), (41, 0.054), (42, -0.055), (43, -0.025), (44, 0.03), (45, 0.098), (46, 0.08), (47, -0.016), (48, 0.032), (49, -0.007)]
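The fifty topic weights above are the post's coordinates in a latent semantic space. A minimal sketch of how LSI produces such coordinates, assuming a truncated SVD of a term-document count matrix (the dimensionality and weighting used to generate the numbers above are not specified here):

```python
import numpy as np

def lsi_doc_vectors(term_doc, k):
    """Truncated SVD of a terms x docs matrix; returns a k x docs matrix
    whose columns are per-document topic-weight vectors."""
    u, s, vt = np.linalg.svd(term_doc, full_matrices=False)
    return np.diag(s[:k]) @ vt[:k]

def cosine(a, b):
    """Cosine similarity between two dense vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy terms x docs matrix: docs 0 and 1 share vocabulary, doc 2 does not
X = np.array([[1, 1, 0],
              [1, 1, 0],
              [0, 0, 1],
              [0, 0, 1]], dtype=float)
D = lsi_doc_vectors(X, 2)
```

Cosine similarity between columns of `D` then plays the role of the simValue scores in the list below.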

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.96899259 47 hunch net-2005-03-28-Open Problems for Colt

Introduction: Adam Klivans and Rocco Servedio are looking for open (learning theory) problems for COLT. This is a good idea in the same way that the KDDcup challenge is a good idea: crisp problem definitions that anyone can attack yield solutions that advance science.

2 0.73088115 429 hunch net-2011-04-06-COLT open questions

Introduction: Alina and Jake point out the COLT Call for Open Questions due May 11. In general, this is cool, and worth doing if you can come up with a crisp question. In my case, I particularly enjoyed crafting an open question in precisely such a form that a critic targeting my papers would be forced to confront their fallacy or make a case for the reward. But less esoterically, this is a way to get the attention of some very smart people focused on a problem that really matters, which is the real value.

3 0.66127503 292 hunch net-2008-03-15-COLT Open Problems

Introduction: COLT has a call for open problems due March 21. I encourage anyone with a specifiable open problem to write it down and send it in. Just the effort of specifying an open problem precisely and concisely has been very helpful for my own solutions, and there is a substantial chance others will solve it. To increase the chance someone will take it up, you can even put a bounty on the solution. (Perhaps I should raise the $500 bounty on the K-fold cross-validation problem as it hasn’t yet been solved).

4 0.61158228 326 hunch net-2008-11-11-COLT CFP

Introduction: Adam Klivans points out the COLT call for papers. The important points are: Due Feb 13. Montreal, June 18-21. This year, there is author feedback.

5 0.52877009 458 hunch net-2012-03-06-COLT-ICML Open Questions and ICML Instructions

Introduction: Sasha is the open problems chair for both COLT and ICML. Open problems will be presented in a joint session in the evening of the COLT/ICML overlap day. COLT has a history of open sessions, but this is new for ICML. If you have a difficult theoretically definable problem in machine learning, consider submitting it for review, due March 16. You’ll benefit three ways: The effort of writing down a precise formulation of what you want often helps you understand the nature of the problem. Your problem will be officially published and citable. You might have it solved by some very intelligent bored people. The general idea could easily be applied to any problem which can be crisply stated with an easily verifiable solution, and we may consider expanding this in later years, but for this year all problems need to be of a theoretical variety. Joelle and I (and Mahdi, and Laurent) finished an initial assignment of Program Committee and Area Chairs to pap

6 0.52769125 86 hunch net-2005-06-28-The cross validation problem: cash reward

7 0.48190853 89 hunch net-2005-07-04-The Health of COLT

8 0.445151 324 hunch net-2008-11-09-A Healthy COLT

9 0.42603168 88 hunch net-2005-07-01-The Role of Impromptu Talks

10 0.39916244 82 hunch net-2005-06-17-Reopening RL->Classification

11 0.39427975 453 hunch net-2012-01-28-Why COLT?

12 0.37585545 418 hunch net-2010-12-02-Traffic Prediction Problem

13 0.35271618 184 hunch net-2006-06-15-IJCAI is out of season

14 0.35064021 291 hunch net-2008-03-07-Spock Challenge Winners

15 0.34295136 394 hunch net-2010-04-24-COLT Treasurer is now Phil Long

16 0.33762273 220 hunch net-2006-11-27-Continuizing Solutions

17 0.33172822 64 hunch net-2005-04-28-Science Fiction and Research

18 0.32454085 481 hunch net-2013-04-15-NEML II

19 0.31068039 409 hunch net-2010-09-13-AIStats

20 0.30102861 94 hunch net-2005-07-13-Text Entailment at AAAI


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(2, 0.408), (27, 0.175), (53, 0.112), (55, 0.128)]
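Unlike the dense LSI vector, the LDA weights above are sparse (topicId, topicWeight) pairs. The simValue scores below are presumably a similarity (e.g. cosine) over such vectors; a minimal sketch of that assumption, with illustrative made-up weights:

```python
def dense_topics(sparse, num_topics):
    """Expand sparse (topicId, topicWeight) pairs, like the list above,
    into a dense topic vector of length num_topics."""
    v = [0.0] * num_topics
    for t, w in sparse:
        v[t] = w
    return v

def cosine(u, v):
    """Cosine similarity between two dense list vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv) if nu and nv else 0.0

# This post's lda weights, against two hypothetical other posts
this_post = dense_topics([(2, 0.408), (27, 0.175), (53, 0.112), (55, 0.128)], 60)
overlapping = dense_topics([(2, 0.4), (53, 0.1)], 60)     # shares topics 2, 53
disjoint = dense_topics([(7, 0.5)], 60)                   # no shared topics
```

Posts with overlapping topic mass rank above posts with disjoint topics, matching the ordering in the list below.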

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.9130798 47 hunch net-2005-03-28-Open Problems for Colt

Introduction: Adam Klivans and Rocco Servedio are looking for open (learning theory) problems for COLT. This is a good idea in the same way that the KDDcup challenge is a good idea: crisp problem definitions that anyone can attack yield solutions that advance science.

2 0.6374386 207 hunch net-2006-09-12-Incentive Compatible Reviewing

Introduction: Reviewing is a fairly formal process which is integral to the way academia is run. Given this integral nature, the quality of reviewing is often frustrating. I’ve seen plenty of examples of false statements, misbeliefs, reading what isn’t written, etc…, and I’m sure many other people have as well. Recently, mechanisms like double blind review and author feedback have been introduced to try to make the process more fair and accurate in many machine learning (and related) conferences. My personal experience is that these mechanisms help, especially the author feedback. Nevertheless, some problems remain. The game theory take on reviewing is that the incentive for truthful reviewing isn’t there. Since reviewers are also authors, there are sometimes perverse incentives created and acted upon. (Incidentally, these incentives can be both positive and negative.) Setting up a truthful reviewing system is tricky because there is no final reference truth available in any acce

3 0.54832584 244 hunch net-2007-05-09-The Missing Bound

Introduction: Sham Kakade points out that we are missing a bound. Suppose we have m samples x drawn IID from some distribution D. Through the magic of the exponential moment method we know that: If the range of x is bounded by an interval of size I, a Chernoff/Hoeffding style bound gives us a bound on the deviations like O(I/m^0.5) (at least in crude form). A proof is on page 9 here. If the range of x is bounded, and the variance (or a bound on the variance) is known, then Bennett’s bound can give tighter results (*). This can be a huge improvement when the true variance is small. What’s missing here is a bound that depends on the observed variance rather than a bound on the variance. This means that many people attempt to use Bennett’s bound (incorrectly) by plugging the observed variance in as the true variance, invalidating the bound application. Most of the time, they get away with it, but this is a dangerous move when doing machine learning. In machine learning,

4 0.48644993 131 hunch net-2005-11-16-The Everything Ensemble Edge

Introduction: Rich Caruana, Alexandru Niculescu, Geoff Crew, and Alex Ksikes have done a lot of empirical testing which shows that using all methods to make a prediction is more powerful than using any single method. This is in rough agreement with the Bayesian way of solving problems, but based upon a different (essentially empirical) motivation. A rough summary is: Take all of {decision trees, boosted decision trees, bagged decision trees, boosted decision stumps, K nearest neighbors, neural networks, SVM} with all reasonable parameter settings. Run the methods on each of 8 problems with a large test set, calibrating margins using either sigmoid fitting or isotonic regression. For each loss of {accuracy, area under the ROC curve, cross entropy, squared error, etc…} evaluate the average performance of the method. A series of conclusions can be drawn from the observations. (Calibrated) boosted decision trees appear to perform best, in general although support v

5 0.47999132 116 hunch net-2005-09-30-Research in conferences

Introduction: Conferences exist as part of the process of doing research. They provide many roles including “announcing research”, “meeting people”, and “point of reference”. Not all conferences are alike, so a basic question is: “to what extent do individual conferences attempt to aid research?” This question is very difficult to answer in any satisfying way. What we can do is compare details of the process across multiple conferences. Comments: The average quality of comments across conferences can vary dramatically. At one extreme, the tradition in CS theory conferences is to provide essentially zero feedback. At the other extreme, some conferences have a strong tradition of providing detailed constructive feedback. Detailed feedback can give authors significant guidance about how to improve research. This is the most subjective entry. Blind: Virtually all conferences offer single blind review where authors do not know reviewers. Some also provide double blind review where rev

6 0.47407132 452 hunch net-2012-01-04-Why ICML? and the summer conferences

7 0.47166288 458 hunch net-2012-03-06-COLT-ICML Open Questions and ICML Instructions

8 0.47148737 225 hunch net-2007-01-02-Retrospective

9 0.469275 437 hunch net-2011-07-10-ICML 2011 and the future

10 0.46845567 151 hunch net-2006-01-25-1 year

11 0.46120024 484 hunch net-2013-06-16-Representative Reviewing

12 0.4598892 201 hunch net-2006-08-07-The Call of the Deep

13 0.45968038 134 hunch net-2005-12-01-The Webscience Future

14 0.45821542 89 hunch net-2005-07-04-The Health of COLT

15 0.45735708 466 hunch net-2012-06-05-ICML acceptance statistics

16 0.45710257 22 hunch net-2005-02-18-What it means to do research.

17 0.45661306 77 hunch net-2005-05-29-Maximum Margin Mismatch?

18 0.45620292 149 hunch net-2006-01-18-Is Multitask Learning Black-Boxable?

19 0.45491302 407 hunch net-2010-08-23-Boosted Decision Trees for Deep Learning

20 0.45467162 454 hunch net-2012-01-30-ICML Posters and Scope