hunch_net hunch_net-2005 hunch_net-2005-119 knowledge-graph by maker-knowledge-mining

119 hunch net-2005-10-08-We have a winner


meta info for this blog

Source: html

Introduction: The DARPA Grand Challenge is a big contest for autonomous robot vehicle driving. It was first run in 2004, and all teams did badly. This year was notably different, with the Stanford and CMU teams successfully completing the course. A number of details are here and Wikipedia has continuing coverage. A formal winner hasn’t been declared yet, although Stanford completed the course quickest. The Stanford and CMU teams deserve a large round of applause, as they have strongly demonstrated the feasibility of autonomous vehicles. The good news for machine learning is that the Stanford team (at least) is using some machine learning techniques.


Summary: the most important sentences generated by the tfidf model (a minimal scoring sketch follows the list)

sentIndex sentText sentNum sentScore

1 The DARPA Grand Challenge is a big contest for autonomous robot vehicle driving. [sent-1, score-0.767]

2 It was run once in 2004 for the first time and all teams did badly. [sent-2, score-0.523]

3 This year was notably different with the Stanford and CMU teams successfully completing the course. [sent-3, score-0.873]

4 A number of details are here and wikipedia has continuing coverage . [sent-4, score-0.484]

5 A formal winner hasn’t been declared yet although Stanford completed the course quickest. [sent-5, score-0.675]

6 The Stanford and CMU teams deserve a large round of applause as they have strongly demonstrated the feasibility of autonomous vehicles. [sent-6, score-1.355]

7 The good news for machine learning is that the Stanford team (at least) is using some machine learning techniques. [sent-7, score-0.396]
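The scores above come from a tf-idf model over the blog archive. As a minimal sketch of this kind of sentence scoring (not the actual pipeline behind this page; the corpus and tokenizer below are hypothetical stand-ins), each sentence can be scored by summing the tf-idf weights of its words:

```python
import math
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z0-9]+", text.lower())

def tfidf_weights(documents):
    """One tf-idf weight per word, taking the max over documents for simplicity."""
    n_docs = len(documents)
    docs = [tokenize(d) for d in documents]
    df = Counter(word for doc in docs for word in set(doc))
    weights = {}
    for doc in docs:
        tf = Counter(doc)
        for word, count in tf.items():
            idf = math.log(n_docs / df[word])
            weights[word] = max(weights.get(word, 0.0), (count / len(doc)) * idf)
    return weights

def score_sentence(sentence, weights):
    """Score a sentence by summing the tf-idf weights of its tokens."""
    return sum(weights.get(t, 0.0) for t in tokenize(sentence))

# Hypothetical mini-corpus standing in for the blog archive.
corpus = [
    "The DARPA Grand Challenge is a contest for autonomous robot vehicle driving.",
    "Stanford and CMU teams successfully completed the course this year.",
    "The Netflix prize is a contest for movie rating prediction.",
]
weights = tfidf_weights(corpus)

sentences = [
    "The DARPA Grand Challenge is a big contest for autonomous robot vehicle driving.",
    "It was first run in 2004 and all teams did badly.",
]
for i, s in enumerate(sentences, 1):
    print(i, round(score_sentence(s, weights), 3), s)
```

Under this kind of scoring, sentences containing rare, post-specific words (e.g. "stanford", "autonomous") outrank generic ones, which matches the ordering in the listing above.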


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('stanford', 0.557), ('teams', 0.394), ('autonomous', 0.338), ('cmu', 0.213), ('completing', 0.15), ('declared', 0.15), ('deserve', 0.15), ('vehicle', 0.15), ('completed', 0.139), ('continuing', 0.131), ('coverage', 0.131), ('feasibility', 0.131), ('succesfully', 0.125), ('team', 0.125), ('robot', 0.12), ('notably', 0.12), ('wikipedia', 0.12), ('demonstrated', 0.116), ('formal', 0.116), ('hasn', 0.112), ('round', 0.112), ('darpa', 0.106), ('contest', 0.104), ('winner', 0.101), ('news', 0.086), ('strongly', 0.078), ('course', 0.069), ('details', 0.066), ('run', 0.063), ('techniques', 0.062), ('big', 0.055), ('although', 0.052), ('least', 0.048), ('yet', 0.048), ('year', 0.045), ('machine', 0.044), ('different', 0.039), ('using', 0.037), ('first', 0.036), ('number', 0.036), ('large', 0.036), ('time', 0.03), ('good', 0.024), ('learning', 0.018)]
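These are the largest tf-idf weights in this post's vocabulary. A minimal sketch, assuming plain cosine similarity between scikit-learn tf-idf vectors, of how such weights could drive the similar-blogs ranking that follows (the post texts and vectorizer settings here are illustrative stand-ins, not the actual pipeline):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical stand-ins for post texts; the real listing is built over the full archive.
posts = {
    "119 We have a winner": "The DARPA Grand Challenge is a contest for autonomous "
                            "robot vehicle driving won by the Stanford team",
    "271 CMU wins DARPA Urban Challenge": "CMU first, Stanford second, and Virginia "
                                          "Tech third in the DARPA Urban Challenge",
    "336 Netflix prize within epsilon": "The Netflix prize competitors are close to "
                                        "winning the million dollar prize",
}

titles = list(posts)
X = TfidfVectorizer(stop_words="english").fit_transform(posts.values())
sims = cosine_similarity(X[0], X).ravel()  # similarity of post 119 to every post

# Rank posts by similarity, mirroring the simIndex/simValue columns below.
for rank, i in enumerate(sims.argsort()[::-1], 1):
    print(rank, round(float(sims[i]), 3), titles[i])
```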

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0 119 hunch net-2005-10-08-We have a winner

Introduction: The DARPA Grand Challenge is a big contest for autonomous robot vehicle driving. It was first run in 2004, and all teams did badly. This year was notably different, with the Stanford and CMU teams successfully completing the course. A number of details are here and Wikipedia has continuing coverage. A formal winner hasn’t been declared yet, although Stanford completed the course quickest. The Stanford and CMU teams deserve a large round of applause, as they have strongly demonstrated the feasibility of autonomous vehicles. The good news for machine learning is that the Stanford team (at least) is using some machine learning techniques.

2 0.24087399 271 hunch net-2007-11-05-CMU wins DARPA Urban Challenge

Introduction: The results have been posted, with CMU first, Stanford second, and Virginia Tech third. Considering that this was an open event (at least for people in the US), this was a very strong showing for research at universities (instead of defense contractors, for example). Some details should become public at the NIPS workshops. Slashdot has a post with many comments.

3 0.11200967 59 hunch net-2005-04-22-New Blog: [Lowerbounds,Upperbounds]

Introduction: Maverick Woo and the Aladdin group at CMU have started a CS theory-related blog here.

4 0.10754501 336 hunch net-2009-01-19-Netflix prize within epsilon

Introduction: The competitors for the Netflix Prize are tantalizingly close to winning the million dollar prize. This year, BellKor and Commendo Research sent a combined solution that won the progress prize. Reading the writeups is instructive. Several aspects of solutions are taken for granted, including stochastic gradient descent, ensemble prediction, and targeting residuals (a form of boosting). Relative to last year, it appears that many approaches have added parameterizations, especially for the purpose of modeling through time. The big question is: will they make the big prize? At this point, the level of complexity in entering the competition is prohibitive, so perhaps only the existing competitors will continue to try. (This equation might change drastically if the teams open source their existing solutions, including parameter settings.) One fear is that the progress is asymptoting on the wrong side of the 10% threshold. In the first year, the teams progressed through

5 0.083657153 445 hunch net-2011-09-28-Somebody’s Eating Your Lunch

Introduction: Since we last discussed the other online learning, Stanford has very visibly started pushing mass teaching in AI, Machine Learning, and Databases. In retrospect, it’s not too surprising that the next step up in serious online teaching experiments is occurring at the computer science department of a university embedded in the land of startups. Numbers on the order of 100,000 are quite significant, similar in scale to the number of computer science undergraduate students/year in the US. Although these populations surely differ, the fact that they could overlap is worth considering for the future. It’s too soon to say how successful these classes will be and there are many easy criticisms to make: Registration != Learning … but if only 1/10th complete these classes, the scale of teaching still surpasses the scale of any traditional process. 1st year excitement != nth year routine … but if only 1/10th take future classes, the scale of teaching still surpass

6 0.07169348 371 hunch net-2009-09-21-Netflix finishes (and starts)

7 0.065184757 362 hunch net-2009-06-26-Netflix nearly done

8 0.062361952 372 hunch net-2009-09-29-Machine Learning Protests at the G20

9 0.059095658 430 hunch net-2011-04-11-The Heritage Health Prize

10 0.054633617 134 hunch net-2005-12-01-The Webscience Future

11 0.054552738 208 hunch net-2006-09-18-What is missing for online collaborative research?

12 0.051445402 211 hunch net-2006-10-02-$1M Netflix prediction contest

13 0.04943151 449 hunch net-2011-11-26-Giving Thanks

14 0.047924206 129 hunch net-2005-11-07-Prediction Competitions

15 0.047789875 50 hunch net-2005-04-01-Basic computer science research takes a hit

16 0.043157734 300 hunch net-2008-04-30-Concerns about the Large Scale Learning Challenge

17 0.034861412 369 hunch net-2009-08-27-New York Area Machine Learning Events

18 0.033896152 138 hunch net-2005-12-09-Some NIPS papers

19 0.033571929 272 hunch net-2007-11-14-BellKor wins Netflix

20 0.033000056 491 hunch net-2013-11-21-Ben Taskar is gone


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.058), (1, -0.011), (2, -0.054), (3, 0.009), (4, -0.03), (5, 0.018), (6, -0.031), (7, -0.0), (8, -0.014), (9, -0.005), (10, -0.035), (11, 0.1), (12, -0.007), (13, -0.039), (14, -0.058), (15, -0.012), (16, 0.016), (17, -0.043), (18, -0.008), (19, 0.021), (20, -0.069), (21, 0.012), (22, -0.007), (23, -0.035), (24, 0.027), (25, -0.067), (26, -0.009), (27, 0.042), (28, 0.05), (29, 0.015), (30, 0.058), (31, -0.026), (32, 0.014), (33, 0.03), (34, -0.079), (35, -0.009), (36, 0.046), (37, -0.083), (38, 0.078), (39, -0.007), (40, -0.045), (41, 0.015), (42, 0.029), (43, 0.056), (44, 0.042), (45, -0.074), (46, 0.052), (47, 0.029), (48, 0.115), (49, -0.04)]
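The vector above gives this post's weight on each of 50 latent LSI topics. A rough sketch of producing such topic weights, assuming the common recipe of a truncated SVD over tf-idf vectors followed by cosine similarity in the reduced space (the real corpus, topic count, and preprocessing are not documented here):

```python
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import Normalizer

# Hypothetical mini-archive; 50 components would match the topic vector above,
# but the tiny corpus here only supports a few.
posts = [
    "The DARPA Grand Challenge is a contest for autonomous robot vehicle driving",
    "CMU wins the DARPA Urban Challenge with Stanford second",
    "The Netflix prize competitors are close to winning the million dollar prize",
    "The machine learning department at CMU protested the G20 summit in Pittsburgh",
]

lsi = make_pipeline(
    TfidfVectorizer(stop_words="english"),
    TruncatedSVD(n_components=3, random_state=0),  # 3 latent topics for the toy corpus
    Normalizer(copy=False),
)
topic_vectors = lsi.fit_transform(posts)           # one row of topic weights per post
sims = cosine_similarity(topic_vectors[:1], topic_vectors).ravel()

for weight_vector, sim, post in zip(topic_vectors, sims, posts):
    print(round(float(sim), 3), [round(float(w), 3) for w in weight_vector], post[:40])
```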

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.93093598 119 hunch net-2005-10-08-We have a winner

Introduction: The DARPA Grand Challenge is a big contest for autonomous robot vehicle driving. It was first run in 2004, and all teams did badly. This year was notably different, with the Stanford and CMU teams successfully completing the course. A number of details are here and Wikipedia has continuing coverage. A formal winner hasn’t been declared yet, although Stanford completed the course quickest. The Stanford and CMU teams deserve a large round of applause, as they have strongly demonstrated the feasibility of autonomous vehicles. The good news for machine learning is that the Stanford team (at least) is using some machine learning techniques.

2 0.68666601 271 hunch net-2007-11-05-CMU wins DARPA Urban Challenge

Introduction: The results have been posted, with CMU first, Stanford second, and Virginia Tech third. Considering that this was an open event (at least for people in the US), this was a very strong showing for research at universities (instead of defense contractors, for example). Some details should become public at the NIPS workshops. Slashdot has a post with many comments.

3 0.56005913 372 hunch net-2009-09-29-Machine Learning Protests at the G20

Introduction: The machine learning department at CMU turned out en masse to protest the G20 summit in Pittsburgh. Arthur Gretton uploaded some great photos covering the event.

4 0.54796654 336 hunch net-2009-01-19-Netflix prize within epsilon

Introduction: The competitors for the Netflix Prize are tantalizingly close to winning the million dollar prize. This year, BellKor and Commendo Research sent a combined solution that won the progress prize. Reading the writeups is instructive. Several aspects of solutions are taken for granted, including stochastic gradient descent, ensemble prediction, and targeting residuals (a form of boosting). Relative to last year, it appears that many approaches have added parameterizations, especially for the purpose of modeling through time. The big question is: will they make the big prize? At this point, the level of complexity in entering the competition is prohibitive, so perhaps only the existing competitors will continue to try. (This equation might change drastically if the teams open source their existing solutions, including parameter settings.) One fear is that the progress is asymptoting on the wrong side of the 10% threshold. In the first year, the teams progressed through

5 0.46469972 105 hunch net-2005-08-23-(Dis)similarities between academia and open source programmers

Introduction: Martin Pool and I recently discussed the similarities and differences between academia and open source programming. Similarities: Cost profile Research and programming share approximately the same cost profile: A large upfront effort is required to produce something useful, and then “anyone” can use it. (The “anyone” is not quite right for either group because only sufficiently technical people could use it.) Wealth profile A “wealthy” academic or open source programmer is someone who has contributed a lot to other people in research or programs. Much of academia is a “gift culture”: whoever gives the most is most respected. Problems Both academia and open source programming suffer from similar problems. Whether or not (and which) open source programs are used is perhaps too often personality-driven rather than driven by capability or usefulness. Similar phenomena can happen in academia with respect to directions of research. Funding is often a problem for

6 0.43981689 371 hunch net-2009-09-21-Netflix finishes (and starts)

7 0.43526623 59 hunch net-2005-04-22-New Blog: [Lowerbounds,Upperbounds]

8 0.41888967 275 hunch net-2007-11-29-The Netflix Crack

9 0.39347774 430 hunch net-2011-04-11-The Heritage Health Prize

10 0.39213043 211 hunch net-2006-10-02-$1M Netflix prediction contest

11 0.3888433 29 hunch net-2005-02-25-Solution: Reinforcement Learning with Classification

12 0.36871809 313 hunch net-2008-08-18-Radford Neal starts a blog

13 0.36841932 268 hunch net-2007-10-19-Second Annual Reinforcement Learning Competition

14 0.35987031 270 hunch net-2007-11-02-The Machine Learning Award goes to …

15 0.34298241 301 hunch net-2008-05-23-Three levels of addressing the Netflix Prize

16 0.32957944 424 hunch net-2011-02-17-What does Watson mean?

17 0.32572266 362 hunch net-2009-06-26-Netflix nearly done

18 0.31377047 427 hunch net-2011-03-20-KDD Cup 2011

19 0.30580732 78 hunch net-2005-06-06-Exact Online Learning for Classification

20 0.30395773 190 hunch net-2006-07-06-Branch Prediction Competition


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(27, 0.054), (31, 0.629), (55, 0.122), (94, 0.034)]
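Here the post loads mostly on a single topic (31). As a minimal sketch of obtaining such a per-document topic distribution, assuming scikit-learn's LatentDirichletAllocation over raw term counts (the archive and topic count behind the actual listing are unknown, so the numbers below are purely illustrative):

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

# Hypothetical mini-archive; the real model evidently has at least ~95 topics
# (topic 94 appears above), far more than the 3 used here.
posts = [
    "The DARPA Grand Challenge is a contest for autonomous robot vehicle driving",
    "CMU wins the DARPA Urban Challenge with Stanford second and Virginia Tech third",
    "The New York Times has an article on the growth of spam and spam filtering",
    "The Netflix prize competitors are close to winning the million dollar prize",
]

counts = CountVectorizer(stop_words="english").fit_transform(posts)
lda = LatentDirichletAllocation(n_components=3, random_state=0)
doc_topics = lda.fit_transform(counts)  # each row sums to 1: per-post topic weights

# (topicId, topicWeight) pairs for the first post, analogous to the listing above.
print([(t, round(float(w), 3)) for t, w in enumerate(doc_topics[0]) if w > 0.05])
```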

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.9189648 119 hunch net-2005-10-08-We have a winner

Introduction: The DARPA Grand Challenge is a big contest for autonomous robot vehicle driving. It was first run in 2004, and all teams did badly. This year was notably different, with the Stanford and CMU teams successfully completing the course. A number of details are here and Wikipedia has continuing coverage. A formal winner hasn’t been declared yet, although Stanford completed the course quickest. The Stanford and CMU teams deserve a large round of applause, as they have strongly demonstrated the feasibility of autonomous vehicles. The good news for machine learning is that the Stanford team (at least) is using some machine learning techniques.

2 0.54529232 223 hunch net-2006-12-06-The Spam Problem

Introduction: The New York Times has an article on the growth of spam. Interesting facts include: 9/10 of all email is spam, spam source identification is nearly useless due to botnet spam senders, and image-based spam (emails which consist of an image only) is growing. Estimates of the cost of spam are almost certainly far too low, because they do not account for the cost in time lost by people. The image-based spam which is currently penetrating many filters should be catchable with a more sophisticated application of machine learning technology. For the spam I see, the rendered images come in only a few formats, which would be easy to recognize via a support vector machine (with RBF kernel), neural network, or even nearest-neighbor architecture. The mechanics of setting this up to run efficiently is the only real challenge. This is the next step in the spam war. The response to this system is to make the image-based spam even more random. We should (essentially) expect to see

3 0.20982246 448 hunch net-2011-10-24-2011 ML symposium and the bears

Introduction: The New York ML symposium was last Friday. Attendance was 268, significantly larger than last year. My impression was that the event mostly still fit the space, although it was crowded. If anyone has suggestions for next year, speak up. The best student paper award went to Sergiu Goschin for a cool video of how his system learned to play video games (I can’t find the paper online yet). Choosing amongst the submitted talks was pretty difficult this year, as there were many similarly good ones. By coincidence all the invited talks were (at least potentially) about faster learning algorithms. Stephen Boyd talked about ADMM. Leon Bottou spoke on single pass online learning via averaged SGD. Yoav Freund talked about parameter-free hedging. In Yoav’s case the talk was mostly about a better theoretical learning algorithm, but it has the potential to unlock an exponential computational complexity improvement via oraclization of experts algorithms… but some serious

4 0.20808674 302 hunch net-2008-05-25-Inappropriate Mathematics for Machine Learning

Introduction: Reviewers and students are sometimes greatly concerned by the distinction between: An open set and a closed set. A supremum and a maximum. An event which happens with probability 1 and an event that always happens. I don’t appreciate this distinction in machine learning & learning theory. All machine learning takes place (by definition) on a machine where every parameter has finite precision. Consequently, every set is closed, a maximal element always exists, and probability 1 events always happen. The fundamental issue here is that substantial parts of mathematics don’t appear well-matched to computation in the physical world, because the mathematics has concerns which are unphysical. This mismatched mathematics makes irrelevant distinctions. We can ask “what mathematics is appropriate to computation?” Andrej has convinced me that a pretty good answer to this question is constructive mathematics. So, here’s a basic challenge: Can anyone name a situati

5 0.20746657 270 hunch net-2007-11-02-The Machine Learning Award goes to …

Introduction: Perhaps the biggest CS prize for research is the Turing Award, which has a $0.25M cash prize associated with it. It appears none of the prizes so far have been for anything like machine learning (the closest are perhaps database awards). In CS theory, there is the Gödel Prize, which is smaller and newer, offering a $5K prize along with (perhaps more importantly) recognition. One such award has been given for Machine Learning, to Robert Schapire and Yoav Freund for Adaboost. In Machine Learning, there seems to be no equivalent of these sorts of prizes. There are several plausible reasons for this: There is no coherent community. People drift in and out of the central conferences all the time. Most of the author names from 10 years ago do not occur in the conferences of today. In addition, the entire subject area is fairly new. There are at least a core group of people who have stayed around. Machine Learning work doesn’t last Almost every paper is fo

6 0.20698409 90 hunch net-2005-07-07-The Limits of Learning Theory

7 0.20670775 20 hunch net-2005-02-15-ESPgame and image labeling

8 0.20670015 453 hunch net-2012-01-28-Why COLT?

9 0.2064127 446 hunch net-2011-10-03-Monday announcements

10 0.20581448 395 hunch net-2010-04-26-Compassionate Reviewing

11 0.20483637 271 hunch net-2007-11-05-CMU wins DARPA Urban Challenge

12 0.19905536 472 hunch net-2012-08-27-NYAS ML 2012 and ICML 2013

13 0.19685695 331 hunch net-2008-12-12-Summer Conferences

14 0.19340084 65 hunch net-2005-05-02-Reviewing techniques for conferences

15 0.19311091 452 hunch net-2012-01-04-Why ICML? and the summer conferences

16 0.19019823 40 hunch net-2005-03-13-Avoiding Bad Reviewing

17 0.1886176 326 hunch net-2008-11-11-COLT CFP

18 0.1886176 465 hunch net-2012-05-12-ICML accepted papers and early registration

19 0.18522492 387 hunch net-2010-01-19-Deadline Season, 2010

20 0.183939 116 hunch net-2005-09-30-Research in conferences