hunch_net hunch_net-2011 hunch_net-2011-427 knowledge-graph by maker-knowledge-mining

427 hunch net-2011-03-20-KDD Cup 2011


meta info for this blog

Source: html

Introduction: Yehuda points out KDD-Cup 2011, which Markus and Gideon helped set up. This is a prediction and recommendation contest for music. In addition to being a fun chance to show your expertise, there are cash prizes of $5K/$2K/$1K.


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Yehuda points out KDD-Cup 2011, which Markus and Gideon helped set up. [sent-1, score-0.377]

2 This is a prediction and recommendation contest for music. [sent-2, score-0.633]

3 In addition to being a fun chance to show your expertise, there are cash prizes of $5K/$2K/$1K. [sent-3, score-1.413]
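
A minimal sketch of how sentence scores like the ones above might be produced, assuming each sentence is scored by the summed tfidf weight of its terms; the exact scoring used by the mining pipeline is not documented here, so this is illustrative only.

# Sketch: score sentences by summed tfidf weight (an assumption about the pipeline).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

sentences = [
    "Yehuda points out KDD-Cup 2011, which Markus and Gideon helped set up.",
    "This is a prediction and recommendation contest for music.",
    "In addition to being a fun chance to show your expertise, there are cash prizes of $5K/$2K/$1K.",
]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(sentences)        # one row of tfidf weights per sentence

scores = np.asarray(tfidf.sum(axis=1)).ravel()     # summed tfidf weight per sentence
for score, sent in sorted(zip(scores, sentences), reverse=True):
    print(f"{score:.3f}  {sent}")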


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('gideon', 0.371), ('markus', 0.343), ('cash', 0.309), ('prizes', 0.309), ('yehuda', 0.309), ('expertise', 0.277), ('recommendation', 0.262), ('contest', 0.256), ('fun', 0.231), ('helped', 0.227), ('show', 0.198), ('addition', 0.193), ('chance', 0.173), ('points', 0.15), ('prediction', 0.115)]
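
The simValue column in the list below can be read as a similarity between post vectors. Here is a minimal sketch, assuming the tfidf model represents each post as a tfidf vector and ranks other posts by cosine similarity; the post texts are shortened stand-ins for the actual intros, and the preprocessing is not the pipeline's documented configuration.

# Sketch: rank posts by cosine similarity of their tfidf vectors (an assumption).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

posts = {
    "427": "Yehuda points out KDD-Cup 2011, a prediction and recommendation contest for music with cash prizes of $5K/$2K/$1K.",
    "211": "Netflix is running a contest to improve recommender prediction systems; a 10% improvement yields a $1M prize.",
    "86": "The cross validation problem now has a cash prize of up to $500 associated with it.",
}

ids = list(posts)
tfidf = TfidfVectorizer(stop_words="english").fit_transform(posts.values())

query = ids.index("427")                           # compare every post to post 427
sims = cosine_similarity(tfidf[query], tfidf).ravel()
for blog_id, sim in sorted(zip(ids, sims), key=lambda p: -p[1]):
    print(f"{sim:.3f}  hunch net post {blog_id}")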

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0 427 hunch net-2011-03-20-KDD Cup 2011

Introduction: Yehuda points out KDD-Cup 2011, which Markus and Gideon helped set up. This is a prediction and recommendation contest for music. In addition to being a fun chance to show your expertise, there are cash prizes of $5K/$2K/$1K.

2 0.17979431 211 hunch net-2006-10-02-$1M Netflix prediction contest

Introduction: Netflix is running a contest to improve recommender prediction systems. A 10% improvement over their current system yields a $1M prize. Failing that, the best smaller improvement yields a smaller $50K prize. This contest looks quite real, and the $50K prize money is almost certainly achievable with a bit of thought. The contest also comes with a dataset which is apparently 2 orders of magnitude larger than any other public recommendation system datasets.

3 0.14014338 371 hunch net-2009-09-21-Netflix finishes (and starts)

Introduction: I attended the Netflix prize ceremony this morning. The press conference part is covered fine elsewhere, with the basic outcome being that BellKor’s Pragmatic Chaos won over The Ensemble by 15-20 minutes, because they were tied in performance on the ultimate holdout set. I’m sure the individual participants will have many chances to speak about the solution. One of these is Bell at the NYAS ML symposium on Nov. 6. Several additional details may interest ML people. The degree of overfitting exhibited by the difference in performance on the leaderboard test set and the ultimate hold out set was small, but determining at .02 to .03%. A tie was possible, because the rules cut off measurements below the fourth digit based on significance concerns. In actuality, of course, the scores do differ before rounding, but everyone I spoke to claimed not to know how. The complete dataset has been released on UCI, so each team could compute their own score to whatever accuracy

4 0.13142632 86 hunch net-2005-06-28-The cross validation problem: cash reward

Introduction: I just presented the cross validation problem at COLT. The problem now has a cash prize (up to $500) associated with it—see the presentation for details. The write-up for colt.

5 0.1170435 362 hunch net-2009-06-26-Netflix nearly done

Introduction: A $1M qualifying result was achieved on the public Netflix test set by a 3-way ensemble team. This is just in time for Yehuda’s presentation at KDD, which I’m sure will be one of the best attended ever. This isn’t quite over—there are a few days for another super-conglomerate team to come together and there is some small chance that the performance is nonrepresentative of the final test set, but I expect not. Regardless of the final outcome, the biggest lesson for ML from the Netflix contest has been the formidable performance edge of ensemble methods.

6 0.091328546 92 hunch net-2005-07-11-AAAI blog

7 0.089430764 430 hunch net-2011-04-11-The Heritage Health Prize

8 0.07931786 444 hunch net-2011-09-07-KDD and MUCMD 2011

9 0.078870237 155 hunch net-2006-02-07-Pittsburgh Mind Reading Competition

10 0.073833279 129 hunch net-2005-11-07-Prediction Competitions

11 0.072048739 172 hunch net-2006-04-14-JMLR is a success

12 0.061499871 42 hunch net-2005-03-17-Going all the Way, Sometimes

13 0.059164792 270 hunch net-2007-11-02-The Machine Learning Award goes to …

14 0.058605954 447 hunch net-2011-10-10-ML Symposium and ICML details

15 0.054071724 369 hunch net-2009-08-27-New York Area Machine Learning Events

16 0.051406857 410 hunch net-2010-09-17-New York Area Machine Learning Events

17 0.050905362 326 hunch net-2008-11-11-COLT CFP

18 0.049568236 389 hunch net-2010-02-26-Yahoo! ML events

19 0.046780422 156 hunch net-2006-02-11-Yahoo’s Learning Problems.

20 0.045797773 418 hunch net-2010-12-02-Traffic Prediction Problem


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.046), (1, -0.007), (2, -0.037), (3, -0.034), (4, -0.036), (5, 0.029), (6, -0.084), (7, -0.041), (8, -0.031), (9, -0.082), (10, -0.065), (11, 0.198), (12, -0.015), (13, 0.02), (14, -0.012), (15, 0.018), (16, 0.035), (17, -0.012), (18, -0.004), (19, -0.04), (20, -0.071), (21, 0.022), (22, 0.022), (23, -0.053), (24, 0.037), (25, -0.002), (26, -0.028), (27, -0.094), (28, -0.074), (29, 0.033), (30, 0.103), (31, -0.044), (32, 0.086), (33, 0.061), (34, 0.027), (35, -0.01), (36, -0.098), (37, 0.034), (38, -0.033), (39, 0.023), (40, -0.124), (41, 0.026), (42, 0.047), (43, 0.058), (44, 0.148), (45, -0.026), (46, -0.106), (47, 0.069), (48, -0.004), (49, -0.002)]
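
The 50 numbers above are this post's coordinates in a latent space. A minimal sketch of that step, assuming the lsi model is a truncated SVD of the tfidf matrix with posts compared by cosine similarity of their latent coordinates; the toy corpus and reduced dimensionality are stand-ins for the real post collection.

# Sketch: LSI as truncated SVD over tfidf vectors (an assumption about the pipeline).
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy stand-ins for the full set of hunch net posts.
posts = [
    "KDD-Cup 2011 is a prediction and recommendation contest for music with cash prizes.",
    "Netflix is running a contest to improve recommender prediction systems with a $1M prize.",
    "NIPS is the big winter conference of learning, held in Vancouver and Whistler.",
]

tfidf = TfidfVectorizer(stop_words="english").fit_transform(posts)

# The 50 weights listed above suggest a 50-dimensional latent space; a toy
# corpus of three posts only supports a couple of dimensions.
lsi = TruncatedSVD(n_components=min(50, len(posts) - 1), random_state=0)
latent = lsi.fit_transform(tfidf)                      # one row of topic weights per post

sims = cosine_similarity(latent[:1], latent).ravel()   # similarity of post 0 to all posts
print([round(float(s), 3) for s in sims])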

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.99346483 427 hunch net-2011-03-20-KDD Cup 2011

Introduction: Yehuda points out KDD-Cup 2011, which Markus and Gideon helped set up. This is a prediction and recommendation contest for music. In addition to being a fun chance to show your expertise, there are cash prizes of $5K/$2K/$1K.

2 0.8084603 211 hunch net-2006-10-02-$1M Netflix prediction contest

Introduction: Netflix is running a contest to improve recommender prediction systems. A 10% improvement over their current system yields a $1M prize. Failing that, the best smaller improvement yields a smaller $50K prize. This contest looks quite real, and the $50K prize money is almost certainly achievable with a bit of thought. The contest also comes with a dataset which is apparently 2 orders of magnitude larger than any other public recommendation system datasets.

3 0.51480412 362 hunch net-2009-06-26-Netflix nearly done

Introduction: A $1M qualifying result was achieved on the public Netflix test set by a 3-way ensemble team. This is just in time for Yehuda’s presentation at KDD, which I’m sure will be one of the best attended ever. This isn’t quite over—there are a few days for another super-conglomerate team to come together and there is some small chance that the performance is nonrepresentative of the final test set, but I expect not. Regardless of the final outcome, the biggest lesson for ML from the Netflix contest has been the formidable performance edge of ensemble methods.

4 0.50606936 371 hunch net-2009-09-21-Netflix finishes (and starts)

Introduction: I attended the Netflix prize ceremony this morning. The press conference part is covered fine elsewhere, with the basic outcome being that BellKor’s Pragmatic Chaos won over The Ensemble by 15-20 minutes, because they were tied in performance on the ultimate holdout set. I’m sure the individual participants will have many chances to speak about the solution. One of these is Bell at the NYAS ML symposium on Nov. 6. Several additional details may interest ML people. The degree of overfitting exhibited by the difference in performance on the leaderboard test set and the ultimate hold out set was small, but determining at .02 to .03%. A tie was possible, because the rules cut off measurements below the fourth digit based on significance concerns. In actuality, of course, the scores do differ before rounding, but everyone I spoke to claimed not to know how. The complete dataset has been released on UCI, so each team could compute their own score to whatever accuracy

5 0.491301 418 hunch net-2010-12-02-Traffic Prediction Problem

Introduction: Slashdot points out the Traffic Prediction Challenge which looks pretty fun. The temporal aspect seems to be very common in many real-world problems and somewhat understudied.

6 0.48375294 446 hunch net-2011-10-03-Monday announcements

7 0.46969929 430 hunch net-2011-04-11-The Heritage Health Prize

8 0.4135924 129 hunch net-2005-11-07-Prediction Competitions

9 0.34337357 459 hunch net-2012-03-13-The Submodularity workshop and Lucca Professorship

10 0.3068701 336 hunch net-2009-01-19-Netflix prize within epsilon

11 0.303083 190 hunch net-2006-07-06-Branch Prediction Competition

12 0.29223904 433 hunch net-2011-04-23-ICML workshops due

13 0.27463403 389 hunch net-2010-02-26-Yahoo! ML events

14 0.2741777 86 hunch net-2005-06-28-The cross validation problem: cash reward

15 0.27096415 312 hunch net-2008-08-04-Electoralmarkets.com

16 0.26964125 284 hunch net-2008-01-18-Datasets

17 0.26554078 19 hunch net-2005-02-14-Clever Methods of Overfitting

18 0.26477408 119 hunch net-2005-10-08-We have a winner

19 0.25660616 421 hunch net-2011-01-03-Herman Goldstine 2011

20 0.25575453 155 hunch net-2006-02-07-Pittsburgh Mind Reading Competition


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(39, 0.79)]
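
The single entry above says topic 39 carries roughly 0.79 of this post's topic mass. A minimal sketch of the step, assuming the lda model is a standard LDA fit over word counts with posts compared by their topic distributions; the topic count and toy corpus are illustrative, not the pipeline's actual settings.

# Sketch: LDA topic distributions and similarity between posts (an assumption).
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

posts = [
    "KDD-Cup 2011 is a prediction and recommendation contest for music with cash prizes.",
    "Netflix is running a contest to improve recommender prediction systems with a $1M prize.",
    "NIPS is the big winter conference of learning, held in Vancouver and Whistler.",
]

counts = CountVectorizer(stop_words="english").fit_transform(posts)

# A real run would use many more topics (the listed topic id is 39); a toy
# corpus only supports a handful.
lda = LatentDirichletAllocation(n_components=5, random_state=0)
theta = lda.fit_transform(counts)                      # per-post topic distributions

sims = cosine_similarity(theta[:1], theta).ravel()     # similarity of post 0 to all posts
print([round(float(s), 3) for s in sims])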

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.91483504 427 hunch net-2011-03-20-KDD Cup 2011

Introduction: Yehuda points out KDD-Cup 2011, which Markus and Gideon helped set up. This is a prediction and recommendation contest for music. In addition to being a fun chance to show your expertise, there are cash prizes of $5K/$2K/$1K.

2 0.51583129 71 hunch net-2005-05-14-NIPS

Introduction: NIPS is the big winter conference of learning. Paper due date: June 3rd. (Tweaked thanks to Fei Sha.) Location: Vancouver (main program) Dec. 5-8 and Whistler (workshops) Dec 9-10, BC, Canada. NIPS is larger than all of the other learning conferences, partly because it’s the only one at that time of year. I recommend the workshops, which are often quite interesting and energetic.

3 0.31701437 301 hunch net-2008-05-23-Three levels of addressing the Netflix Prize

Introduction: In October 2006, the online movie renter, Netflix, announced the Netflix Prize contest. They published a comprehensive dataset including more than 100 million movie ratings, which were performed by about 480,000 real customers on 17,770 movies. Competitors in the challenge are required to estimate a few million ratings. To win the “grand prize,” they need to deliver a 10% improvement in the prediction error compared with the results of Cinematch, Netflix’s proprietary recommender system. Best current results deliver 9.12% improvement, which is quite close to the 10% goal, yet painfully distant. The Netflix Prize breathed new life and excitement into recommender systems research. The competition allowed the wide research community to access a large scale, real life dataset. Beyond this, the competition changed the rules of the game. Claiming that your nice idea could outperform some mediocre algorithms on some toy dataset is no longer acceptable. Now researcher

4 0.30822101 475 hunch net-2012-10-26-ML Symposium and Strata-Hadoop World

Introduction: The New York ML symposium was last Friday. There were 303 registrations, up a bit from last year. I particularly enjoyed talks by Bill Freeman on vision and ML, Jon Lenchner on strategy in Jeopardy, and Tara N. Sainath and Brian Kingsbury on deep learning for speech recognition. If anyone has suggestions or thoughts for next year, please speak up. I also attended Strata + Hadoop World for the first time. This is primarily a trade conference rather than an academic conference, but I found it pretty interesting as a first time attendee. This is ground zero for the Big data buzzword, and I see now why. It’s about data, and the word “big” is so ambiguous that everyone can lay claim to it. There were essentially zero academic talks. Instead, the focus was on war stories, product announcements, and education. The general level of education is much lower—explaining Machine Learning to the SQL educated is the primary operating point. Nevertheless that’s happening, a

5 0.03410266 371 hunch net-2009-09-21-Netflix finishes (and starts)

Introduction: I attended the Netflix prize ceremony this morning. The press conference part is covered fine elsewhere, with the basic outcome being that BellKor’s Pragmatic Chaos won over The Ensemble by 15-20 minutes, because they were tied in performance on the ultimate holdout set. I’m sure the individual participants will have many chances to speak about the solution. One of these is Bell at the NYAS ML symposium on Nov. 6. Several additional details may interest ML people. The degree of overfitting exhibited by the difference in performance on the leaderboard test set and the ultimate hold out set was small, but determining at .02 to .03%. A tie was possible, because the rules cut off measurements below the fourth digit based on significance concerns. In actuality, of course, the scores do differ before rounding, but everyone I spoke to claimed not to know how. The complete dataset has been released on UCI, so each team could compute their own score to whatever accuracy

6 0.0 1 hunch net-2005-01-19-Why I decided to run a weblog.

7 0.0 2 hunch net-2005-01-24-Holy grails of machine learning?

8 0.0 3 hunch net-2005-01-24-The Humanloop Spectrum of Machine Learning

9 0.0 4 hunch net-2005-01-26-Summer Schools

10 0.0 5 hunch net-2005-01-26-Watchword: Probability

11 0.0 6 hunch net-2005-01-27-Learning Complete Problems

12 0.0 7 hunch net-2005-01-31-Watchword: Assumption

13 0.0 8 hunch net-2005-02-01-NIPS: Online Bayes

14 0.0 9 hunch net-2005-02-01-Watchword: Loss

15 0.0 10 hunch net-2005-02-02-Kolmogorov Complexity and Googling

16 0.0 11 hunch net-2005-02-02-Paper Deadlines

17 0.0 12 hunch net-2005-02-03-Learning Theory, by assumption

18 0.0 13 hunch net-2005-02-04-JMLG

19 0.0 14 hunch net-2005-02-07-The State of the Reduction

20 0.0 15 hunch net-2005-02-08-Some Links