hunch_net hunch_net-2011 hunch_net-2011-429 knowledge-graph by maker-knowledge-mining

429 hunch net-2011-04-06-COLT open questions


meta info for this blog

Source: html

Introduction: Alina and Jake point out the COLT Call for Open Questions due May 11. In general, this is cool, and worth doing if you can come up with a crisp question. In my case, I particularly enjoyed crafting an open question with precisely a form such that a critic targeting my papers would be forced to confront their fallacy or make a case for the reward. But less esoterically, this is a way to get the attention of some very smart people focused on a problem that really matters, which is the real value.


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Alina and Jake point out the COLT Call for Open Questions due May 11. [sent-1, score-0.184]

2 In general, this is cool, and worth doing if you can come up with a crisp question. [sent-2, score-0.534]

3 In my case, I particularly enjoyed crafting an open question with precisely a form such that a critic targeting my papers would be forced to confront their fallacy or make a case for the reward. [sent-3, score-2.325]

4 But less esoterically, this is a way to get the attention of some very smart people focused on a problem that really matters, which is the real value. [sent-4, score-1.104]
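
The scores above suggest each sentence is ranked by its tfidf weight. As a rough illustration only (the actual maker-knowledge-mining scorer is not documented here, and scikit-learn is an assumed stand-in), one common way to produce such a ranking is to fit tfidf over the post's sentences and score each sentence by the sum of its term weights:

```python
# Minimal sketch (assumed, not the pipeline behind this page): rank sentences
# by summed tfidf weight, a simple proxy for "most important sentences".
from sklearn.feature_extraction.text import TfidfVectorizer

sentences = [
    "Alina and Jake point out the COLT Call for Open Questions due May 11.",
    "In general, this is cool, and worth doing if you can come up with a crisp question.",
    "In my case, I particularly enjoyed crafting an open question with precisely a form "
    "such that a critic targeting my papers would be forced to confront their fallacy "
    "or make a case for the reward.",
    "But less esoterically, this is a way to get the attention of some very smart people "
    "focused on a problem that really matters, which is the real value.",
]

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(sentences)   # sentence-by-term tfidf matrix
scores = X.sum(axis=1).A1                 # one score per sentence: sum of its term weights
for rank, idx in enumerate(scores.argsort()[::-1], start=1):
    print(f"{rank}  score={scores[idx]:.3f}  {sentences[idx]}")
```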


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('confront', 0.294), ('fallacy', 0.294), ('crisp', 0.273), ('open', 0.249), ('targeting', 0.236), ('jake', 0.227), ('matters', 0.22), ('case', 0.209), ('forced', 0.203), ('smart', 0.194), ('attention', 0.183), ('precisely', 0.183), ('alina', 0.183), ('cool', 0.177), ('call', 0.177), ('focused', 0.174), ('enjoyed', 0.164), ('worth', 0.155), ('questions', 0.128), ('colt', 0.121), ('really', 0.108), ('come', 0.106), ('value', 0.104), ('less', 0.099), ('due', 0.097), ('particularly', 0.095), ('question', 0.092), ('form', 0.092), ('get', 0.087), ('point', 0.087), ('real', 0.086), ('general', 0.079), ('papers', 0.076), ('way', 0.074), ('would', 0.073), ('make', 0.065), ('may', 0.058), ('problem', 0.053), ('people', 0.046)]
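
The (wordName, wordTfidf) pairs above and the simValue rankings below can be reproduced in spirit with a sketch like the following. The vectorizer settings, stop-word handling, and use of cosine similarity are assumptions, and the short snippets stand in for full post bodies:

```python
# Minimal sketch, assuming per-post tfidf weights from a bag-of-words model
# and post-to-post similarity measured by cosine similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

posts = {
    "429 COLT open questions": "Alina and Jake point out the COLT Call for Open Questions ...",
    "292 COLT Open Problems": "COLT has a call for open problems due March 21 ...",
    "47 Open Problems for Colt": "Adam Klivans and Rocco Servedio are looking for open problems ...",
}

vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(posts.values())     # one tfidf row per post
words = vec.get_feature_names_out()       # assumes a recent scikit-learn

# top-weighted words for this post, analogous to the (wordName, wordTfidf) list above
row = X[0].toarray().ravel()
print(sorted(zip(words, row), key=lambda p: -p[1])[:10])

# similarity of this post to every post, analogous to the simValue column below
sims = cosine_similarity(X[0], X)[0]
for title, sim in sorted(zip(posts, sims), key=lambda p: -p[1]):
    print(f"{sim:.3f}  {title}")
```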

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0 429 hunch net-2011-04-06-COLT open questions


2 0.15673554 292 hunch net-2008-03-15-COLT Open Problems

Introduction: COLT has a call for open problems due March 21. I encourage anyone with a specifiable open problem to write it down and send it in. Just the effort of specifying an open problem precisely and concisely has been very helpful for my own solutions, and there is a substantial chance others will solve it. To increase the chance someone will take it up, you can even put a bounty on the solution. (Perhaps I should raise the $500 bounty on the K-fold cross-validation problem as it hasn’t yet been solved).

3 0.14465047 47 hunch net-2005-03-28-Open Problems for Colt

Introduction: Adam Klivans and Rocco Servedio are looking for open (learning theory) problems for COLT. This is a good idea in the same way that the KDDcup challenge is a good idea: crisp problem definitions that anyone can attack yield solutions that advance science.

4 0.12849604 367 hunch net-2009-08-16-Centmail comments

Introduction: Centmail is a scheme which makes charity donations have a secondary value, as a stamp for email. When discussed on newscientist, slashdot, and others, some of the comments make the academic review process appear thoughtful. Some prominent fallacies are: Costing money fallacy. Some commenters appear to believe the system charges money per email. Instead, the basic idea is that users get an extra benefit from donations to a charity and participation is strictly voluntary. The solution to this fallacy is simply reading the details. Single solution fallacy. Some commenters seem to think this is proposed as a complete solution to spam, and since not everyone will opt to participate, it won’t work. But a complete solution is not at all necessary or even possible given the flag-day problem. Deployed machine learning systems for fighting spam are great at taking advantage of a partial solution. The solution to this fallacy is learning about machine learning. In the

5 0.098885179 105 hunch net-2005-08-23-(Dis)similarities between academia and open source programmers

Introduction: Martin Pool and I recently discussed the similarities and differences between academia and open source programming. Similarities: Cost profile Research and programming share approximately the same cost profile: A large upfront effort is required to produce something useful, and then “anyone” can use it. (The “anyone” is not quite right for either group because only sufficiently technical people could use it.) Wealth profile A “wealthy” academic or open source programmer is someone who has contributed a lot to other people in research or programs. Much of academia is a “gift culture”: whoever gives the most is most respected. Problems Both academia and open source programming suffer from similar problems. Whether or not (and which) open source program is used are perhaps too-often personality driven rather than driven by capability or usefulness. Similar phenomena can happen in academia with respect to directions of research. Funding is often a problem for

6 0.095324591 458 hunch net-2012-03-06-COLT-ICML Open Questions and ICML Instructions

7 0.087276079 453 hunch net-2012-01-28-Why COLT?

8 0.087018058 326 hunch net-2008-11-11-COLT CFP

9 0.085076846 424 hunch net-2011-02-17-What does Watson mean?

10 0.082048595 454 hunch net-2012-01-30-ICML Posters and Scope

11 0.080040872 273 hunch net-2007-11-16-MLSS 2008

12 0.079259321 82 hunch net-2005-06-17-Reopening RL->Classification

13 0.077830918 89 hunch net-2005-07-04-The Health of COLT

14 0.075323477 452 hunch net-2012-01-04-Why ICML? and the summer conferences

15 0.073047221 331 hunch net-2008-12-12-Summer Conferences

16 0.071823508 360 hunch net-2009-06-15-In Active Learning, the question changes

17 0.070044853 383 hunch net-2009-12-09-Inherent Uncertainty

18 0.069023088 42 hunch net-2005-03-17-Going all the Way, Sometimes

19 0.066806421 336 hunch net-2009-01-19-Netflix prize within epsilon

20 0.066773891 40 hunch net-2005-03-13-Avoiding Bad Reviewing


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.141), (1, -0.053), (2, -0.013), (3, 0.004), (4, -0.037), (5, -0.031), (6, -0.002), (7, -0.041), (8, -0.022), (9, -0.039), (10, 0.009), (11, 0.023), (12, -0.07), (13, 0.098), (14, 0.022), (15, -0.045), (16, 0.07), (17, -0.004), (18, -0.067), (19, 0.06), (20, -0.149), (21, 0.202), (22, 0.095), (23, -0.045), (24, 0.014), (25, -0.012), (26, 0.041), (27, 0.114), (28, 0.054), (29, 0.04), (30, 0.026), (31, 0.002), (32, -0.0), (33, 0.014), (34, 0.022), (35, -0.104), (36, -0.06), (37, -0.038), (38, 0.08), (39, -0.054), (40, 0.036), (41, 0.02), (42, 0.038), (43, 0.003), (44, 0.002), (45, 0.024), (46, 0.005), (47, 0.062), (48, -0.059), (49, -0.013)]
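
The topic weights above read as a per-post embedding in a latent semantic indexing (LSI) space. A plausible way to produce such weights, and the similarity list that follows, is a truncated SVD of the tfidf matrix; the sketch below assumes scikit-learn and a toy corpus, while the real preprocessing and component count (the page lists 50 topics) are unknown:

```python
# Minimal sketch (assumed pipeline): LSI = truncated SVD on the tfidf matrix;
# each post gets a vector of (topicId, topicWeight) entries, and similar posts
# are those with high cosine similarity in that reduced space.
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

texts = [
    "Alina and Jake point out the COLT Call for Open Questions due May 11.",
    "COLT has a call for open problems due March 21, and you can put a bounty on a solution.",
    "Adam Klivans and Rocco Servedio are looking for open learning theory problems for COLT.",
    "Centmail is a scheme which makes charity donations have a secondary value as an email stamp.",
]

X = TfidfVectorizer(stop_words="english").fit_transform(texts)
lsi = TruncatedSVD(n_components=3, random_state=0)   # the page appears to use ~50 topics
Z = lsi.fit_transform(X)                             # rows are per-post topic weights

print(list(enumerate(Z[0].round(3))))                # (topicId, topicWeight) pairs for post 0
print(cosine_similarity(Z[:1], Z)[0].round(3))       # simValue of post 0 against all posts
```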

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.98300523 429 hunch net-2011-04-06-COLT open questions


2 0.8269614 292 hunch net-2008-03-15-COLT Open Problems

Introduction: COLT has a call for open problems due March 21. I encourage anyone with a specifiable open problem to write it down and send it in. Just the effort of specifying an open problem precisely and concisely has been very helpful for my own solutions, and there is a substantial chance others will solve it. To increase the chance someone will take it up, you can even put a bounty on the solution. (Perhaps I should raise the $500 bounty on the K-fold cross-validation problem as it hasn’t yet been solved).

3 0.74865407 47 hunch net-2005-03-28-Open Problems for Colt

Introduction: Adam Klivans and Rocco Servedio are looking for open (learning theory) problems for COLT. This is a good idea in the same way that the KDDcup challenge is a good idea: crisp problem definitions that anyone can attack yield solutions that advance science.

4 0.65081757 458 hunch net-2012-03-06-COLT-ICML Open Questions and ICML Instructions

Introduction: Sasha is the open problems chair for both COLT and ICML. Open problems will be presented in a joint session in the evening of the COLT/ICML overlap day. COLT has a history of open sessions, but this is new for ICML. If you have a difficult theoretically definable problem in machine learning, consider submitting it for review, due March 16. You’ll benefit three ways: The effort of writing down a precise formulation of what you want often helps you understand the nature of the problem. Your problem will be officially published and citable. You might have it solved by some very intelligent bored people. The general idea could easily be applied to any problem which can be crisply stated with an easily verifiable solution, and we may consider expanding this in later years, but for this year all problems need to be of a theoretical variety. Joelle and I (and Mahdi, and Laurent) finished an initial assignment of Program Committee and Area Chairs to pap

5 0.55995435 105 hunch net-2005-08-23-(Dis)similarities between academia and open source programmers

Introduction: Martin Pool and I recently discussed the similarities and differences between academia and open source programming. Similarities: Cost profile Research and programming share approximately the same cost profile: A large upfront effort is required to produce something useful, and then “anyone” can use it. (The “anyone” is not quite right for either group because only sufficiently technical people could use it.) Wealth profile A “wealthy” academic or open source programmer is someone who has contributed a lot to other people in research or programs. Much of academia is a “gift culture”: whoever gives the most is most respected. Problems Both academia and open source programming suffer from similar problems. Whether or not (and which) open source program is used are perhaps too-often personality driven rather than driven by capability or usefulness. Similar phenomena can happen in academia with respect to directions of research. Funding is often a problem for

6 0.49360153 326 hunch net-2008-11-11-COLT CFP

7 0.49332505 82 hunch net-2005-06-17-Reopening RL->Classification

8 0.46019611 88 hunch net-2005-07-01-The Role of Impromptu Talks

9 0.45887613 273 hunch net-2007-11-16-MLSS 2008

10 0.44884446 394 hunch net-2010-04-24-COLT Treasurer is now Phil Long

11 0.44656837 324 hunch net-2008-11-09-A Healthy COLT

12 0.44468036 86 hunch net-2005-06-28-The cross validation problem: cash reward

13 0.42165762 294 hunch net-2008-04-12-Blog compromised

14 0.41637954 297 hunch net-2008-04-22-Taking the next step

15 0.40397349 29 hunch net-2005-02-25-Solution: Reinforcement Learning with Classification

16 0.40251103 453 hunch net-2012-01-28-Why COLT?

17 0.39504901 367 hunch net-2009-08-16-Centmail comments

18 0.3916581 89 hunch net-2005-07-04-The Health of COLT

19 0.37743971 25 hunch net-2005-02-20-At One Month

20 0.3691701 424 hunch net-2011-02-17-What does Watson mean?


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(27, 0.201), (55, 0.17), (72, 0.343), (94, 0.122), (95, 0.021)]
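
The LDA weights above read as a sparse per-post topic distribution: (topicId, topicWeight) pairs that sum to roughly one. As a rough, assumed illustration (the actual topic model, vocabulary, and topic count behind this page are not specified), such a distribution can be produced with scikit-learn's LatentDirichletAllocation over raw word counts:

```python
# Minimal sketch (assumed): fit LDA on word counts, then read off each post's
# topic distribution; entries like (27, 0.201) above are (topicId, topicWeight).
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

texts = [
    "Alina and Jake point out the COLT Call for Open Questions due May 11.",
    "COLT has a call for open problems due March 21, with a bounty on solutions.",
    "Lev Reyzin points out the CI Fellows program is renewed with applications due May 23.",
    "Centmail makes charity donations double as a stamp for email to fight spam.",
]

counts = CountVectorizer(stop_words="english").fit_transform(texts)
lda = LatentDirichletAllocation(n_components=4, random_state=0)  # page appears to use ~100 topics
theta = lda.fit_transform(counts)        # rows are per-post topic distributions (sum to 1)

# print only the topics with non-negligible weight, mirroring the sparse list above
print([(topic_id, round(w, 3)) for topic_id, w in enumerate(theta[0]) if w > 0.05])
```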

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.90358216 429 hunch net-2011-04-06-COLT open questions


2 0.87636441 396 hunch net-2010-04-28-CI Fellows program renewed

Introduction: Lev Reyzin points out the CI Fellows program is renewed. CI Fellows are essentially NSF funded computer science postdocs for universities and industry research labs. I’ve been lucky and happy to have Lev visit me for a year under last year’s program, so I strongly recommend participating if it suits you. As with last year, the application timeline is very short, with everything due by May 23.

3 0.82497847 294 hunch net-2008-04-12-Blog compromised

Introduction: Iain noticed that hunch.net had zero width divs hiding spammy URLs. Some investigation reveals that the wordpress version being used (2.0.3) had security flaws. I’ve upgraded to the latest, rotated passwords, and removed the spammy URLs. I don’t believe any content was lost. You can check your own and other sites for a similar problem by grepping for “width:0” or “width: 0” in the delivered html source.

4 0.72897106 78 hunch net-2005-06-06-Exact Online Learning for Classification

Introduction: Jacob Abernethy and I have found a computationally tractable method for computing an optimal (or near optimal depending on setting) master algorithm combining expert predictions addressing this open problem. A draft is here. The effect of this improvement seems to be about a factor of 2 decrease in the regret (= error rate minus best possible error rate) for the low error rate situation. (At large error rates, there may be no significant difference.) There are some unfinished details still to consider: When we remove all of the approximation slack from online learning, is the result a satisfying learning algorithm, in practice? I consider online learning to be one of the more compelling methods of analyzing and deriving algorithms, but that expectation must be either met or not by this algorithm. Some extra details: The algorithm is optimal given a small amount of side information (k in the draft). What is the best way to remove this side information? The removal

5 0.66200012 230 hunch net-2007-02-02-Thoughts regarding “Is machine learning different from statistics?”

Introduction: Given John’s recent posts on CMU’s new machine learning department and “Deep Learning,” I asked for an opportunity to give a computational learning theory perspective on these issues. To my mind, the answer to the question “Are the core problems from machine learning different from the core problems of statistics?” is a clear Yes. The point of this post is to describe a core problem in machine learning that is computational in nature and will appeal to statistical learning folk (as an extreme example note that if P=NP – which, for all we know, is true – then we would suddenly find almost all of our favorite machine learning problems considerably more tractable). If the central question of statistical learning theory were crudely summarized as “given a hypothesis with a certain loss bound over a test set, how well will it generalize?” then the central question of computational learning theory might be “how can we find such a hypothesis efficiently (e.g., in polynomial-time)?” With t

6 0.62461984 379 hunch net-2009-11-23-ICML 2009 Workshops (and Tutorials)

7 0.61891741 40 hunch net-2005-03-13-Avoiding Bad Reviewing

8 0.60877442 315 hunch net-2008-09-03-Bidding Problems

9 0.60694188 453 hunch net-2012-01-28-Why COLT?

10 0.60613906 452 hunch net-2012-01-04-Why ICML? and the summer conferences

11 0.60256279 51 hunch net-2005-04-01-The Producer-Consumer Model of Research

12 0.60125601 221 hunch net-2006-12-04-Structural Problems in NIPS Decision Making

13 0.59872919 423 hunch net-2011-02-02-User preferences for search engines

14 0.59697729 437 hunch net-2011-07-10-ICML 2011 and the future

15 0.59671623 116 hunch net-2005-09-30-Research in conferences

16 0.59652454 96 hunch net-2005-07-21-Six Months

17 0.59471583 325 hunch net-2008-11-10-ICML Reviewing Criteria

18 0.59421462 395 hunch net-2010-04-26-Compassionate Reviewing

19 0.59362781 463 hunch net-2012-05-02-ICML: Behind the Scenes

20 0.59341282 343 hunch net-2009-02-18-Decision by Vetocracy