COLT Open Problems
hunch.net, 2008-03-15
COLT has a call for open problems due March 21. I encourage anyone with a specifiable open problem to write it down and send it in. Just the effort of specifying an open problem precisely and concisely has been very helpful for my own solutions, and there is a substantial chance others will solve it. To increase the chance someone will take it up, you can even put a bounty on the solution. (Perhaps I should raise the $500 bounty on the K-fold cross-validation problem, as it hasn't yet been solved.)