hunch_net hunch_net-2009 hunch_net-2009-339 knowledge-graph by maker-knowledge-mining

339 hunch net-2009-01-27-Key Scientific Challenges


meta info for this blog

Source: html

Introduction: Yahoo released the Key Scientific Challenges program. There is a Machine Learning list I worked on and a Statistics list which Deepak worked on. I’m hoping this is taken quite seriously by graduate students. The primary value is that it gave us a chance to sit down and publicly specify directions of research which would be valuable to make progress on. A good strategy for a beginning graduate student is to pick one of these directions, pursue it, and make substantial advances for a PhD. The directions are sufficiently general that I’m sure any serious advance has applications well beyond Yahoo. A secondary point (which I’m sure is primary for many) is that there is money for graduate students here. It’s unrestricted, so you can use it for any reasonable travel, supplies, etc…


Summary: the most important sentences generated by the tf-idf model

sentIndex sentText sentNum sentScore

1 Yahoo released the Key Scientific Challenges program. [sent-1, score-0.129]

2 There is a Machine Learning list I worked on and a Statistics list which Deepak worked on. [sent-2, score-0.706]

3 I’m hoping this is taken quite seriously by graduate students. [sent-3, score-0.826]

4 The primary value is that it gave us a chance to sit down and publicly specify directions of research which would be valuable to make progress on. [sent-4, score-1.649]

5 A good strategy for a beginning graduate student is to pick one of these directions, pursue it, and make substantial advances for a PhD. [sent-5, score-1.336]

6 The directions are sufficiently general that I’m sure any serious advance has applications well beyond Yahoo. [sent-6, score-1.105]

7 A secondary point (which I’m sure is primary for many) is that there is money for graduate students here. [sent-7, score-1.117]
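The sentence scores above could plausibly come from summing tf-idf weights over each sentence's words. A minimal sketch of that scoring scheme (an assumption about how such a model works, not the pipeline's actual code):

```python
# Hypothetical sketch: extractive summarization by tf-idf sentence scoring.
# The exact weighting used by the pipeline is not shown here; this is one
# standard variant (smoothed idf, length-normalized tf).
import math
from collections import Counter

def sentence_scores(sentences, corpus):
    """Score each sentence by the length-normalized sum of tf-idf weights."""
    docs = [set(d.lower().split()) for d in corpus]
    n = len(docs)

    def idf(word):
        # smoothed inverse document frequency over the background corpus
        return math.log((1 + n) / (1 + sum(word in d for d in docs))) + 1

    scores = []
    for sent in sentences:
        words = sent.lower().split()
        tf = Counter(words)
        scores.append(sum((c / len(words)) * idf(w) for w, c in tf.items()))
    return scores
```

Sentences dense in corpus-distinctive words (like “graduate” or “directions” in the weight list below) then rank highest, which matches the per-sentence scores shown.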


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('graduate', 0.372), ('directions', 0.351), ('primary', 0.213), ('unrestricted', 0.192), ('worked', 0.186), ('deepak', 0.177), ('sure', 0.168), ('pursue', 0.167), ('advances', 0.167), ('list', 0.167), ('sit', 0.16), ('hoping', 0.16), ('secondary', 0.148), ('publicly', 0.139), ('scientific', 0.139), ('seriously', 0.139), ('travel', 0.136), ('challenges', 0.132), ('released', 0.129), ('valuable', 0.129), ('sufficiently', 0.129), ('strategy', 0.129), ('beginning', 0.126), ('advance', 0.124), ('gave', 0.122), ('money', 0.117), ('beyond', 0.113), ('specify', 0.113), ('pick', 0.113), ('yahoo', 0.111), ('student', 0.11), ('taken', 0.102), ('statistics', 0.102), ('key', 0.099), ('students', 0.099), ('progress', 0.096), ('serious', 0.094), ('chance', 0.089), ('make', 0.085), ('etc', 0.08), ('applications', 0.075), ('value', 0.068), ('substantial', 0.067), ('reasonable', 0.063), ('us', 0.062), ('point', 0.057), ('quite', 0.053), ('general', 0.051), ('would', 0.048), ('research', 0.042)]
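Given per-document weight vectors like the list above, the similarity values in these lists are most likely cosine similarities; a same-blog score of 0.99999988 is exactly what cosine of a vector with itself yields up to floating-point error. A sketch under that assumption:

```python
# Sketch (assumed approach): cosine similarity between sparse tf-idf vectors,
# represented as {word: weight} dicts like the word-weight list above.
import math

def cosine_sim(a, b):
    """Cosine similarity between two {word: weight} dicts."""
    dot = sum(w * b.get(word, 0.0) for word, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0
```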

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.99999988 339 hunch net-2009-01-27-Key Scientific Challenges


2 0.26499856 425 hunch net-2011-02-25-Yahoo! Machine Learning grant due March 11

Introduction: Yahoo!’s Key Scientific Challenges for Machine Learning grant applications are due March 11. If you are a student working on relevant research, please consider applying. It’s for $5K of unrestricted funding.

3 0.16170736 457 hunch net-2012-02-29-Key Scientific Challenges and the Franklin Symposium

Introduction: For graduate students, the Yahoo! Key Scientific Challenges program, including machine learning, is on again, due March 9. The application is easy and the $5K award is high-quality “no strings attached” funding. Consider submitting. Those in Washington DC, Philadelphia, and New York may consider attending the Franklin Institute Symposium April 25, which has several speakers and an award for V . Attendance is free with an RSVP.

4 0.14931008 110 hunch net-2005-09-10-“Failure” is an option

Introduction: This is about the hard choices that graduate students must make. The cultural definition of success in academic research is to: Produce good research which many other people appreciate. Produce many students who go on to do the same. There are fundamental reasons why this is success in the local culture. Good research appreciated by others means access to jobs. Many students successful in the same way implies that there are a number of people who think in a similar way and appreciate your work. In order to graduate, a PhD student must live in an academic culture for a period of several years. It is common to adopt the culture’s definition of success during this time. It’s also common for many PhD students to discover they are not suited to an academic research lifestyle. This collision of values and abilities naturally results in depression. The most fundamental advice when this happens is: change something. Pick a new advisor. Pick a new research topic. Or leave th

5 0.12902462 389 hunch net-2010-02-26-Yahoo! ML events

Introduction: Yahoo! is sponsoring two machine learning events that might interest people. The Key Scientific Challenges program (due March 5) for Machine Learning and Statistics offers $5K (plus bonuses) for graduate students working on a core problem of interest to Y! If you are already working on one of these problems, there is no reason not to submit, and if you aren’t you might want to think about it for next year, as I am confident they all press the boundary of the possible in Machine Learning. There are 7 days left. The Learning to Rank challenge (due May 31) offers an $8K first prize for the best ranking algorithm on a real (and really used) dataset for search ranking, with presentations at an ICML workshop. Unlike the Netflix competition, there are prizes for 2nd, 3rd, and 4th place, perhaps avoiding the heartbreak the ensemble encountered. If you think you know how to rank, you should give it a try, and we might all learn something. There are 3 months left.

6 0.10688657 175 hunch net-2006-04-30-John Langford -> Yahoo Research, NY

7 0.089244731 449 hunch net-2011-11-26-Giving Thanks

8 0.080932632 156 hunch net-2006-02-11-Yahoo’s Learning Problems.

9 0.080469191 115 hunch net-2005-09-26-Prediction Bounds as the Mathematics of Science

10 0.078570232 329 hunch net-2008-11-28-A Bumper Crop of Machine Learning Graduates

11 0.07653743 445 hunch net-2011-09-28-Somebody’s Eating Your Lunch

12 0.074802019 378 hunch net-2009-11-15-The Other Online Learning

13 0.072607286 295 hunch net-2008-04-12-It Doesn’t Stop

14 0.072573528 414 hunch net-2010-10-17-Partha Niyogi has died

15 0.070193738 464 hunch net-2012-05-03-Microsoft Research, New York City

16 0.0669083 73 hunch net-2005-05-17-A Short Guide to PhD Graduate Study

17 0.064604327 290 hunch net-2008-02-27-The Stats Handicap

18 0.063372433 435 hunch net-2011-05-16-Research Directions for Machine Learning and Algorithms

19 0.060369819 370 hunch net-2009-09-18-Necessary and Sufficient Research

20 0.060349677 397 hunch net-2010-05-02-What’s the difference between gambling and rewarding good prediction?


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.125), (1, -0.035), (2, -0.11), (3, 0.071), (4, -0.087), (5, -0.039), (6, -0.006), (7, 0.047), (8, -0.1), (9, -0.029), (10, 0.083), (11, 0.047), (12, 0.007), (13, -0.01), (14, 0.02), (15, 0.087), (16, -0.07), (17, -0.002), (18, -0.035), (19, -0.109), (20, 0.096), (21, 0.062), (22, 0.078), (23, -0.098), (24, 0.104), (25, 0.014), (26, 0.062), (27, -0.124), (28, 0.046), (29, 0.075), (30, -0.045), (31, 0.043), (32, -0.033), (33, 0.005), (34, 0.046), (35, 0.081), (36, -0.099), (37, 0.012), (38, -0.027), (39, 0.056), (40, -0.101), (41, 0.018), (42, 0.136), (43, -0.02), (44, -0.015), (45, 0.081), (46, -0.027), (47, -0.016), (48, -0.047), (49, -0.019)]
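The 50 topic weights above are coordinates in a latent semantic space. Assuming LSI here means the standard construction, truncated SVD of the term-document matrix, a document's weights could be computed as:

```python
# Sketch, assuming LSI = truncated SVD of the term-document matrix.
# Negative weights, as in the list above, are normal for LSI: singular
# vectors are not constrained to be non-negative.
import numpy as np

def lsi_doc_topics(term_doc, k):
    """Embed documents into a k-dimensional latent space via truncated SVD.

    term_doc: (n_terms, n_docs) weight matrix. Returns (n_docs, k) topic weights.
    """
    U, S, Vt = np.linalg.svd(term_doc, full_matrices=False)
    # document coordinates: rows of V scaled by the top-k singular values
    return Vt[:k, :].T * S[:k]
```

Similarities between documents can then be taken as cosines in this k-dimensional space rather than over raw word counts.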

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.97863936 339 hunch net-2009-01-27-Key Scientific Challenges


2 0.79307812 425 hunch net-2011-02-25-Yahoo! Machine Learning grant due March 11


3 0.75318897 457 hunch net-2012-02-29-Key Scientific Challenges and the Franklin Symposium


4 0.63138568 389 hunch net-2010-02-26-Yahoo! ML events


5 0.62224293 175 hunch net-2006-04-30-John Langford -> Yahoo Research, NY

Introduction: I will join Yahoo Research (in New York) after my contract ends at TTI-Chicago . The deciding reasons are: Yahoo is running into many hard learning problems. This is precisely the situation where basic research might hope to have the greatest impact. Yahoo Research understands research including publishing, conferences, etc… Yahoo Research is growing, so there is a chance I can help it grow well. Yahoo understands the internet, including (but not at all limited to) experimenting with research blogs. In the end, Yahoo Research seems like the place where I might have a chance to make the greatest difference. Yahoo (as a company) has made a strong bet on Yahoo Research. We-the-researchers all hope that bet will pay off, and this seems plausible. I’ll certainly have fun trying.

6 0.58632332 110 hunch net-2005-09-10-“Failure” is an option

7 0.57537305 290 hunch net-2008-02-27-The Stats Handicap

8 0.53013951 73 hunch net-2005-05-17-A Short Guide to PhD Graduate Study

9 0.49320447 464 hunch net-2012-05-03-Microsoft Research, New York City

10 0.49091992 156 hunch net-2006-02-11-Yahoo’s Learning Problems.

11 0.48606455 335 hunch net-2009-01-08-Predictive Analytics World

12 0.44595554 449 hunch net-2011-11-26-Giving Thanks

13 0.44145665 270 hunch net-2007-11-02-The Machine Learning Award goes to …

14 0.42565092 493 hunch net-2014-02-16-Metacademy: a package manager for knowledge

15 0.41498235 69 hunch net-2005-05-11-Visa Casualties

16 0.41166496 48 hunch net-2005-03-29-Academic Mechanism Design

17 0.41045341 414 hunch net-2010-10-17-Partha Niyogi has died

18 0.40184018 142 hunch net-2005-12-22-Yes, I am applying

19 0.39398283 178 hunch net-2006-05-08-Big machine learning

20 0.36251417 195 hunch net-2006-07-12-Who is having visa problems reaching US conferences?


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(27, 0.146), (36, 0.106), (38, 0.619)]
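The sparse (topicId, topicWeight) pairs above are an LDA topic mixture with near-zero topics dropped. One common way to compare two such mixtures is Hellinger similarity; whether this pipeline uses it is an assumption, but it is a standard choice for LDA outputs:

```python
# Sketch: 1 - Hellinger distance between two sparse LDA topic mixtures.
# n_topics=40 is an assumption consistent with the topic ids above (max 38).
import math

def hellinger_sim(a, b, n_topics=40):
    """Similarity between two sparse mixtures given as (topicId, weight) pairs."""
    da, db = dict(a), dict(b)
    dist = math.sqrt(
        sum((math.sqrt(da.get(t, 0.0)) - math.sqrt(db.get(t, 0.0))) ** 2
            for t in range(n_topics)) / 2.0
    )
    return 1.0 - dist
```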

similar blogs list:

simIndex simValue blogId blogTitle

1 0.96815658 125 hunch net-2005-10-20-Machine Learning in the News

Introduction: The New York Times had a short interview about machine learning in datamining being used pervasively by the IRS and large corporations to predict who to audit and who to target for various marketing campaigns. This is a big application area of machine learning. It can be harmful (learning + databases = another way to invade privacy) or beneficial (as google demonstrates, better targeting of marketing campaigns is far less annoying). This is yet more evidence that we can not rely upon “I’m just another fish in the school” logic for our expectations about treatment by government and large corporations.

same-blog 2 0.94746244 339 hunch net-2009-01-27-Key Scientific Challenges


3 0.91576719 488 hunch net-2013-08-31-Extreme Classification workshop at NIPS

Introduction: Manik and I are organizing the extreme classification workshop at NIPS this year. We have a number of good speakers lined up, but I would further encourage anyone working in the area to submit an abstract by October 9. I believe this is an idea whose time has now come. The NIPS website doesn’t have other workshops listed yet, but I expect several others to be of significant interest.

4 0.90415132 181 hunch net-2006-05-23-What is the best regret transform reduction from multiclass to binary?

Introduction: This post is about an open problem in learning reductions. Background A reduction might transform a multiclass prediction problem where there are k possible labels into a binary learning problem where there are only 2 possible labels. On this induced binary problem we might learn a binary classifier with some error rate e . After subtracting the minimum possible (Bayes) error rate b , we get a regret r = e – b . The PECOC (Probabilistic Error Correcting Output Code) reduction has the property that binary regret r implies multiclass regret at most 4r^0.5 . The problem This is not the “rightest” answer. Consider the k=2 case, where we reduce binary to binary. There exists a reduction (the identity) with the property that regret r implies regret r . This is substantially superior to the transform given by the PECOC reduction, which suggests that a better reduction may exist for general k . For example, we can not rule out the possibility that a reduction

5 0.89750272 83 hunch net-2005-06-18-Lower Bounds for Learning Reductions

Introduction: Learning reductions transform a solver of one type of learning problem into a solver of another type of learning problem. When we analyze these for robustness we can make statements of the form “Reduction R has the property that regret r (or loss) on subproblems of type A implies regret at most f(r) on the original problem of type B ”. A lower bound for a learning reduction would have the form “for all reductions R , there exists a learning problem of type B and learning algorithm for problems of type A where regret r on induced problems implies at least regret f(r) for B ”. The pursuit of lower bounds is often questionable because, unlike upper bounds, they do not yield practical algorithms. Nevertheless, they may be helpful as a tool for thinking about what is learnable and how learnable it is. This has already come up here and here . At the moment, there is no coherent theory of lower bounds for learning reductions, and we have little understa

6 0.8560282 170 hunch net-2006-04-06-Bounds greater than 1

7 0.75146192 353 hunch net-2009-05-08-Computability in Artificial Intelligence

8 0.74544704 233 hunch net-2007-02-16-The Forgetting

9 0.71101767 236 hunch net-2007-03-15-Alternative Machine Learning Reductions Definitions

10 0.70863849 251 hunch net-2007-06-24-Interesting Papers at ICML 2007

11 0.6505664 239 hunch net-2007-04-18-$50K Spock Challenge

12 0.61293429 72 hunch net-2005-05-16-Regret minimizing vs error limiting reductions

13 0.59475058 284 hunch net-2008-01-18-Datasets

14 0.53759414 26 hunch net-2005-02-21-Problem: Cross Validation

15 0.49515998 82 hunch net-2005-06-17-Reopening RL->Classification

16 0.47421619 19 hunch net-2005-02-14-Clever Methods of Overfitting

17 0.46606433 131 hunch net-2005-11-16-The Everything Ensemble Edge

18 0.46229464 49 hunch net-2005-03-30-What can Type Theory teach us about Machine Learning?

19 0.4611918 327 hunch net-2008-11-16-Observations on Linearity for Reductions to Regression

20 0.45426404 391 hunch net-2010-03-15-The Efficient Robust Conditional Probability Estimation Problem