hunch_net hunch_net-2010 hunch_net-2010-389 knowledge-graph by maker-knowledge-mining
Source: html
Introduction: Yahoo! is sponsoring two machine learning events that might interest people. The Key Scientific Challenges program (due March 5) for Machine Learning and Statistics offers $5K (plus bonuses) for graduate students working on a core problem of interest to Y! If you are already working on one of these problems, there is no reason not to submit, and if you aren’t, you might want to think about it for next year, as I am confident they all press the boundary of the possible in Machine Learning. There are 7 days left. The Learning to Rank challenge (due May 31) offers an $8K first prize for the best ranking algorithm on a real (and really used) dataset for search ranking, with presentations at an ICML workshop. Unlike the Netflix competition, there are prizes for 2nd, 3rd, and 4th place, perhaps avoiding the heartbreak The Ensemble encountered. If you think you know how to rank, you should give it a try, and we might all learn something. There are 3 months left.
sentIndex sentText sentNum sentScore
1 is sponsoring two machine learning events that might interest people. [sent-2, score-0.64]
2 The Key Scientific Challenges program (due March 5) for Machine Learning and Statistics offers $5K (plus bonuses) for graduate students working on a core problem of interest to Y! [sent-3, score-0.977]
3 If you are already working on one of these problems, there is no reason not to submit, and if you aren’t you might want to think about it for next year, as I am confident they all press the boundary of the possible in Machine Learning. [sent-4, score-1.158]
4 The Learning to Rank challenge (due May 31) offers an $8K first prize for the best ranking algorithm on a real (and really used) dataset for search ranking, with presentations at an ICML workshop. [sent-6, score-1.18]
5 Unlike the Netflix competition, there are prizes for 2nd, 3rd, and 4th place, perhaps avoiding the heartbreak the ensemble encountered. [sent-7, score-0.426]
6 If you think you know how to rank, you should give it a try, and we might all learn something. [sent-8, score-0.378]
wordName wordTfidf (topN-words)
[('rank', 0.321), ('offers', 0.308), ('ranking', 0.249), ('bonuses', 0.192), ('press', 0.192), ('confident', 0.178), ('sponsoring', 0.168), ('boundary', 0.168), ('prizes', 0.16), ('plus', 0.154), ('interest', 0.144), ('scientific', 0.14), ('working', 0.137), ('months', 0.133), ('challenges', 0.133), ('avoiding', 0.133), ('ensemble', 0.133), ('due', 0.127), ('prize', 0.127), ('might', 0.126), ('days', 0.124), ('submit', 0.124), ('presentations', 0.124), ('graduate', 0.124), ('netflix', 0.124), ('unlike', 0.12), ('events', 0.118), ('march', 0.118), ('competition', 0.114), ('left', 0.112), ('yahoo', 0.112), ('think', 0.106), ('search', 0.103), ('statistics', 0.103), ('challenge', 0.1), ('key', 0.099), ('students', 0.099), ('dataset', 0.098), ('place', 0.094), ('core', 0.091), ('already', 0.091), ('aren', 0.089), ('machine', 0.084), ('give', 0.083), ('next', 0.081), ('try', 0.079), ('reason', 0.079), ('program', 0.074), ('really', 0.071), ('learn', 0.063)]
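The word weights above look like TF-IDF scores over the post's vocabulary, and the per-sentence scores earlier look like aggregates of the weights of the words each sentence contains. A minimal sketch of that kind of pipeline, in Python, assuming a simple regex tokenizer and treating each blog post as one document (the actual mining code behind these numbers is not shown, so the names and details here are illustrative only):

import math
import re
from collections import Counter

def tokenize(text):
    # Hypothetical tokenizer: lowercase alphabetic tokens only.
    return re.findall(r"[a-z]+", text.lower())

def tfidf_weights(post_text, corpus_tokens):
    """Weight each word in post_text by term frequency times inverse
    document frequency; corpus_tokens is a list of token sets, one per post."""
    tokens = tokenize(post_text)
    tf = Counter(tokens)
    n_docs = len(corpus_tokens)
    weights = {}
    for word, count in tf.items():
        df = sum(1 for doc in corpus_tokens if word in doc)
        idf = math.log((n_docs + 1) / (df + 1))  # smoothed inverse document frequency
        weights[word] = (count / len(tokens)) * idf
    return weights

def sentence_scores(sentences, weights):
    # One plausible way to produce per-sentence scores like the sentScore
    # column above: sum the weights of the words in each sentence.
    return [sum(weights.get(t, 0.0) for t in tokenize(s)) for s in sentences]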
simIndex simValue blogId blogTitle
same-blog 1 1.0 389 hunch net-2010-02-26-Yahoo! ML events
2 0.21283787 425 hunch net-2011-02-25-Yahoo! Machine Learning grant due March 11
Introduction: Yahoo!’s Key Scientific Challenges for Machine Learning grant applications are due March 11. If you are a student working on relevant research, please consider applying. It’s for $5K of unrestricted funding.
3 0.16203298 457 hunch net-2012-02-29-Key Scientific Challenges and the Franklin Symposium
Introduction: For graduate students, the Yahoo! Key Scientific Challenges program, which includes machine learning, is on again, due March 9. The application is easy and the $5K award is high-quality “no strings attached” funding. Consider submitting. Those in Washington DC, Philadelphia, and New York may consider attending the Franklin Institute Symposium April 25, which has several speakers and an award for V. Attendance is free with an RSVP.
4 0.13231359 371 hunch net-2009-09-21-Netflix finishes (and starts)
Introduction: I attended the Netflix prize ceremony this morning. The press conference part is covered fine elsewhere, with the basic outcome being that BellKor’s Pragmatic Chaos won over The Ensemble by 15-20 minutes, because they were tied in performance on the ultimate holdout set. I’m sure the individual participants will have many chances to speak about the solution. One of these is Bell at the NYAS ML symposium on Nov. 6. Several additional details may interest ML people. The degree of overfitting exhibited by the difference in performance on the leaderboard test set and the ultimate hold out set was small, but determining at .02 to .03%. A tie was possible, because the rules cut off measurements below the fourth digit based on significance concerns. In actuality, of course, the scores do differ before rounding, but everyone I spoke to claimed not to know how. The complete dataset has been released on UCI, so each team could compute their own score to whatever accu
5 0.12948102 156 hunch net-2006-02-11-Yahoo’s Learning Problems.
Introduction: I just visited Yahoo Research which has several fundamental learning problems near to (or beyond) the set of problems we know how to solve well. Here are 3 of them. Ranking This is the canonical problem of all search engines. It is made extra difficult for several reasons. There is relatively little “good” supervised learning data and a great deal of data with some signal (such as click through rates). The learning must occur in a partially adversarial environment. Many people very actively attempt to place themselves at the top of rankings. It is not even quite clear whether the problem should be posed as ‘ranking’ or as ‘regression’ which is then used to produce a ranking. Collaborative filtering Yahoo has a large number of recommendation systems for music, movies, etc… In these sorts of systems, users specify how they liked a set of things, and then the system can (hopefully) find some more examples of things they might like by reasoning across multiple
6 0.12902462 339 hunch net-2009-01-27-Key Scientific Challenges
7 0.11749841 113 hunch net-2005-09-19-NIPS Workshops
8 0.11056939 301 hunch net-2008-05-23-Three levels of addressing the Netflix Prize
9 0.1075526 304 hunch net-2008-06-27-Reviewing Horror Stories
10 0.10071271 362 hunch net-2009-06-26-Netflix nearly done
11 0.10034648 310 hunch net-2008-07-15-Interesting papers at COLT (and a bit of UAI & workshops)
12 0.096719794 336 hunch net-2009-01-19-Netflix prize within epsilon
13 0.096209273 464 hunch net-2012-05-03-Microsoft Research, New York City
14 0.094701849 228 hunch net-2007-01-15-The Machine Learning Department
15 0.092388511 175 hunch net-2006-04-30-John Langford –> Yahoo Research, NY
16 0.090622015 375 hunch net-2009-10-26-NIPS workshops
17 0.084290884 452 hunch net-2012-01-04-Why ICML? and the summer conferences
18 0.083217286 273 hunch net-2007-11-16-MLSS 2008
19 0.08267232 239 hunch net-2007-04-18-$50K Spock Challenge
20 0.078596093 454 hunch net-2012-01-30-ICML Posters and Scope
topicId topicWeight
[(0, 0.176), (1, -0.071), (2, -0.12), (3, -0.026), (4, -0.07), (5, 0.004), (6, -0.044), (7, 0.05), (8, -0.085), (9, -0.126), (10, 0.001), (11, 0.192), (12, 0.009), (13, 0.038), (14, -0.001), (15, 0.089), (16, 0.013), (17, -0.074), (18, -0.028), (19, -0.106), (20, 0.004), (21, 0.076), (22, 0.077), (23, -0.134), (24, 0.042), (25, 0.059), (26, 0.081), (27, -0.125), (28, -0.028), (29, 0.089), (30, -0.068), (31, 0.141), (32, -0.044), (33, -0.052), (34, -0.07), (35, 0.013), (36, -0.061), (37, 0.015), (38, 0.012), (39, -0.009), (40, -0.009), (41, -0.049), (42, 0.136), (43, 0.034), (44, -0.027), (45, 0.07), (46, 0.004), (47, -0.061), (48, 0.063), (49, -0.004)]
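The topic weights above are a fixed-length vector representation of this post, and the simValue numbers in these lists are plausibly similarities between such vectors. A minimal sketch of one common choice, cosine similarity, in Python (the mining pipeline's actual similarity measure is not specified here, so treat this as illustrative):

import math

def cosine_similarity(u, v):
    # Cosine similarity between two equal-length topic-weight vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    if norm_u == 0.0 or norm_v == 0.0:
        return 0.0
    return dot / (norm_u * norm_v)

# Hypothetical usage: rank other posts by similarity to this one.
# this_post = [0.176, -0.071, -0.12, ...]          # topic weights listed above
# ranked = sorted(other_posts.items(),
#                 key=lambda kv: cosine_similarity(this_post, kv[1]),
#                 reverse=True)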
simIndex simValue blogId blogTitle
same-blog 1 0.95007765 389 hunch net-2010-02-26-Yahoo! ML events
2 0.71852237 457 hunch net-2012-02-29-Key Scientific Challenges and the Franklin Symposium
3 0.68090075 339 hunch net-2009-01-27-Key Scientific Challenges
Introduction: Yahoo released the Key Scientific Challenges program. There is a Machine Learning list I worked on and a Statistics list which Deepak worked on. I’m hoping this is taken quite seriously by graduate students. The primary value is that it gave us a chance to sit down and publicly specify directions of research which would be valuable to make progress on. A good strategy for a beginning graduate student is to pick one of these directions, pursue it, and make substantial advances for a PhD. The directions are sufficiently general that I’m sure any serious advance has applications well beyond Yahoo. A secondary point (which I’m sure is primary for many) is that there is money for graduate students here. It’s unrestricted, so you can use it for any reasonable travel, supplies, etc…
4 0.67533672 425 hunch net-2011-02-25-Yahoo! Machine Learning grant due March 11
5 0.53112584 156 hunch net-2006-02-11-Yahoo’s Learning Problems.
6 0.5192067 335 hunch net-2009-01-08-Predictive Analytics World
7 0.48196194 171 hunch net-2006-04-09-Progress in Machine Translation
8 0.46699569 336 hunch net-2009-01-19-Netflix prize within epsilon
9 0.4666357 276 hunch net-2007-12-10-Learning Track of International Planning Competition
10 0.46359083 290 hunch net-2008-02-27-The Stats Handicap
11 0.45278463 270 hunch net-2007-11-02-The Machine Learning Award goes to …
12 0.43474641 63 hunch net-2005-04-27-DARPA project: LAGR
13 0.43378609 375 hunch net-2009-10-26-NIPS workshops
14 0.42199039 175 hunch net-2006-04-30-John Langford –> Yahoo Research, NY
15 0.41639817 178 hunch net-2006-05-08-Big machine learning
16 0.39430597 273 hunch net-2007-11-16-MLSS 2008
17 0.39168212 464 hunch net-2012-05-03-Microsoft Research, New York City
18 0.39013287 172 hunch net-2006-04-14-JMLR is a success
19 0.38729405 458 hunch net-2012-03-06-COLT-ICML Open Questions and ICML Instructions
20 0.37871316 371 hunch net-2009-09-21-Netflix finishes (and starts)
topicId topicWeight
[(27, 0.135), (55, 0.102), (94, 0.054), (95, 0.592)]
simIndex simValue blogId blogTitle
1 0.98962092 479 hunch net-2013-01-31-Remote large scale learning class participation
Introduction: Yann and I have arranged so that people who are interested in our large scale machine learning class and not able to attend in person can follow along via two methods. Videos will be posted with about a 1 day delay on techtalks. This is a side-by-side capture of video+slides from Weyond. We are experimenting with Piazza as a discussion forum. Anyone is welcome to subscribe to Piazza and ask questions there, where I will be monitoring things. update2: Sign up here. The first lecture is up now, including the revised version of the slides which fixes a few typos and rounds out references.
2 0.98587704 390 hunch net-2010-03-12-Netflix Challenge 2 Canceled
Introduction: The second Netflix prize is canceled due to privacy problems. I continue to believe my original assessment of this paper, that the privacy break was somewhat overstated. I still haven’t seen any serious privacy failures on the scale of the AOL search log release. I expect privacy concerns to continue to be a big issue when dealing with data releases by companies or governments. The theory of maintaining privacy while using data is improving, but it is not yet in a state where the limits of what’s possible are clear, let alone how to achieve these limits in a manner friendly to a prediction competition.
3 0.96998632 319 hunch net-2008-10-01-NIPS 2008 workshop on ‘Learning over Empirical Hypothesis Spaces’
Introduction: This workshop asks for insights into how far we may/can push the theoretical boundary of using data in the design of learning machines. Can we express our classification rule in terms of the sample, or do we have to stick to a core assumption of classical statistical learning theory, namely that the hypothesis space is to be defined independent from the sample? This workshop is particularly interested in – but not restricted to – the ‘luckiness framework’ and the recently introduced notion of ‘compatibility functions’ in a semi-supervised learning context (more information can be found at http://www.kuleuven.be/wehys).
4 0.95182276 30 hunch net-2005-02-25-Why Papers?
Introduction: Makc asked a good question in comments—”Why bother to make a paper, at all?” There are several reasons for writing papers which may not be immediately obvious to people not in academia. The basic idea is that papers have considerably more utility than the obvious “present an idea”. Papers are formalized units of work. Academics (especially young ones) are often judged on the number of papers they produce. Papers have a formalized method of citing and crediting others—the bibliography. Academics (especially older ones) are often judged on the number of citations they receive. Papers enable a “more fair” anonymous review. Conferences receive many papers, from which a subset are selected. Discussion forums are inherently not anonymous for anyone who wants to build a reputation for good work. Papers are an excuse to meet your friends. Papers are the content of conferences, but much of what you do is talk to friends about interesting problems while there. Sometimes yo
same-blog 5 0.93100131 389 hunch net-2010-02-26-Yahoo! ML events
6 0.8764652 456 hunch net-2012-02-24-ICML+50%
7 0.86758482 127 hunch net-2005-11-02-Progress in Active Learning
8 0.80962098 344 hunch net-2009-02-22-Effective Research Funding
9 0.78106761 373 hunch net-2009-10-03-Static vs. Dynamic multiclass prediction
10 0.7750867 462 hunch net-2012-04-20-Both new: STOC workshops and NEML
11 0.71962851 234 hunch net-2007-02-22-Create Your Own ICML Workshop
12 0.69977236 105 hunch net-2005-08-23-(Dis)similarities between academia and open source programmers
13 0.57622504 7 hunch net-2005-01-31-Watchword: Assumption
14 0.54079187 464 hunch net-2012-05-03-Microsoft Research, New York City
15 0.53976953 455 hunch net-2012-02-20-Berkeley Streaming Data Workshop
16 0.53835344 445 hunch net-2011-09-28-Somebody’s Eating Your Lunch
17 0.53131974 275 hunch net-2007-11-29-The Netflix Crack
18 0.51789701 216 hunch net-2006-11-02-2006 NIPS workshops
19 0.51786745 36 hunch net-2005-03-05-Funding Research
20 0.51722622 466 hunch net-2012-06-05-ICML acceptance statistics