hunch_net hunch_net-2005 hunch_net-2005-63 knowledge-graph by maker-knowledge-mining

63 hunch net-2005-04-27-DARPA project: LAGR


meta info for this blog

Source: html

Introduction: Larry Jackel has set up the LAGR (“Learning Applied to Ground Robotics”) project (and competition), which seems to be quite well designed. Features include: Many participants (8 going on 12?) Standardized hardware. In the DARPA Grand Challenge, contestants entering with motorcycles are at a severe disadvantage to those entering with a Hummer. Similarly, contestants using more powerful sensors can gain huge advantages. Monthly contests, with full feedback (but since the hardware is standardized, only code is shipped). One of the premises of the program is that robust systems are desired. Monthly evaluations at different locations can help measure this and provide data. Attacks a known hard problem (cross-country driving).


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Larry Jackel has set up the LAGR (“Learning Applied to Ground Robotics”) project (and competition) which seems to be quite well designed. [sent-1, score-0.19]

2 Features include: Many participants (8 going on 12? [sent-2, score-0.174]

3 In the DARPA grand challenge contestants entering with motorcycles are at a severe disadvantage to those entering with a Hummer. [sent-4, score-1.392]

4 Similarly, contestants using more powerful sensors can gain huge advantages. [sent-5, score-0.834]

5 Monthly contests, with full feedback (but since the hardware is standardized, only code is shipped). [sent-6, score-0.458]

6 One of the premises of the program is that robust systems are desired. [sent-7, score-0.235]

7 Monthly evaluations at different locations can help measure this and provide data. [sent-8, score-0.606]
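The sentence scores above come from a tfidf-based ranking. A minimal sketch of one plausible scheme (hypothetical; the actual scoring used by the mining pipeline is not specified) is to score each sentence by the mean tfidf weight of its tokens:

```python
# Hypothetical sketch: rank sentences by the mean tf-idf weight of their
# tokens, given a precomputed {term: weight} map like the word list below.
def sentence_scores(sentences, weights):
    """sentences: list of token lists; weights: {term: tfidf weight}."""
    return [
        sum(weights.get(tok, 0.0) for tok in sent) / max(len(sent), 1)
        for sent in sentences
    ]

# Tiny illustrative example using weights taken from the list below.
weights = {"monthly": 0.389, "contestants": 0.32, "standardized": 0.302}
sents = [["monthly", "contests"], ["robust", "systems"]]
scores = sentence_scores(sents, weights)  # first sentence scores higher
```

Unmatched tokens contribute zero, so sentences built from high-weight terms rise to the top, which matches the ordering shown in the list above.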


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('monthly', 0.389), ('contestants', 0.32), ('entering', 0.302), ('standardized', 0.302), ('contests', 0.173), ('evaluations', 0.173), ('shipped', 0.173), ('lagr', 0.16), ('sensors', 0.16), ('attacks', 0.151), ('country', 0.151), ('larry', 0.151), ('locations', 0.151), ('grand', 0.144), ('ground', 0.144), ('hardware', 0.144), ('robotics', 0.129), ('disadvantage', 0.125), ('darpa', 0.122), ('gain', 0.119), ('driving', 0.116), ('cross', 0.109), ('severe', 0.109), ('powerful', 0.106), ('full', 0.102), ('competition', 0.102), ('participants', 0.1), ('robust', 0.099), ('project', 0.097), ('similarly', 0.092), ('measure', 0.09), ('challenge', 0.09), ('huge', 0.087), ('code', 0.084), ('include', 0.083), ('feedback', 0.079), ('provide', 0.078), ('going', 0.074), ('features', 0.073), ('applied', 0.073), ('known', 0.071), ('help', 0.07), ('systems', 0.069), ('program', 0.067), ('hard', 0.057), ('since', 0.049), ('quite', 0.048), ('set', 0.045), ('different', 0.044), ('using', 0.042)]
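A word list like the one above can be produced by computing tfidf weights over the whole blog corpus and keeping the top-N terms. The sketch below uses raw term frequency times a smoothed idf; this exact weighting scheme is an assumption, since the pipeline's formula is not documented here:

```python
import math
from collections import Counter

# Illustrative tf-idf (raw tf x smoothed idf) with top-N selection.
# The actual weighting used to produce the list above is unknown.
def tfidf_top_words(docs, doc_index, n=3):
    """docs: list of token lists. Returns top-n (word, weight) pairs."""
    N = len(docs)
    df = Counter()                      # document frequency per term
    for d in docs:
        df.update(set(d))
    tf = Counter(docs[doc_index])       # term frequency in the target doc
    w = {t: tf[t] * math.log((1 + N) / (1 + df[t])) for t in tf}
    return sorted(w.items(), key=lambda kv: -kv[1])[:n]

docs = [["a", "b", "b"], ["a", "c"]]
top = tfidf_top_words(docs, 0, n=2)     # "b" outranks "a" (which is in every doc)
```

Terms that appear in every document get idf near zero, which is why corpus-common words are absent from the top of the list above.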

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0000001 63 hunch net-2005-04-27-DARPA project: LAGR


2 0.095207922 365 hunch net-2009-07-31-Vowpal Wabbit Open Source Project

Introduction: Today brings a new release of the Vowpal Wabbit fast online learning software. This time, unlike the previous release, the project itself is going open source, developing via github. For example, the latest and greatest can be downloaded via: git clone git://github.com/JohnLangford/vowpal_wabbit.git If you aren’t familiar with git, it’s a distributed version control system which supports quick and easy branching, as well as reconciliation. This version of the code is confirmed to compile without complaint on at least some flavors of OSX as well as Linux boxes. As much of the point of this project is pushing the limits of fast and effective machine learning, let me mention a few datapoints from my experience. The program can effectively scale up to batch-style training on sparse terafeature (i.e. 10^12 sparse feature) size datasets. The limiting factor is typically i/o. I started using the real datasets from the large-scale learning workshop as a conve

3 0.077059038 143 hunch net-2005-12-27-Automated Labeling

Introduction: One of the common trends in machine learning has been an emphasis on the use of unlabeled data. The argument goes something like “there aren’t many labeled web pages out there, but there are a huge number of web pages, so we must find a way to take advantage of them.” There are several standard approaches for doing this: Unsupervised Learning. You use only unlabeled data. In a typical application, you cluster the data and hope that the clusters somehow correspond to what you care about. Semisupervised Learning. You use both unlabeled and labeled data to build a predictor. The unlabeled data influences the learned predictor in some way. Active Learning. You have unlabeled data and access to a labeling oracle. You interactively choose which examples to label so as to optimize prediction accuracy. It seems there is a fourth approach worth serious investigation: automated labeling. The approach goes as follows: Identify some subset of observed values to predict

4 0.074187927 371 hunch net-2009-09-21-Netflix finishes (and starts)

Introduction: I attended the Netflix prize ceremony this morning. The press conference part is covered fine elsewhere, with the basic outcome being that BellKor’s Pragmatic Chaos won over The Ensemble by 15-20 minutes, because they were tied in performance on the ultimate holdout set. I’m sure the individual participants will have many chances to speak about the solution. One of these is Bell at the NYAS ML symposium on Nov. 6. Several additional details may interest ML people. The degree of overfitting exhibited by the difference in performance on the leaderboard test set and the ultimate holdout set was small but determining, at .02 to .03%. A tie was possible, because the rules cut off measurements below the fourth digit based on significance concerns. In actuality, of course, the scores do differ before rounding, but everyone I spoke to claimed not to know how. The complete dataset has been released on UCI, so each team could compute their own score to whatever accu

5 0.067814857 345 hunch net-2009-03-08-Prediction Science

Introduction: One view of machine learning is that it’s about how to program computers to predict well. This suggests a broader research program centered around the more pervasive goal of simply predicting well. There are many distinct strands of this broader research program which are only partially unified. Here are the ones that I know of: Learning Theory. Learning theory focuses on several topics related to the dynamics and process of prediction. Convergence bounds like the VC bound give an intellectual foundation to many learning algorithms. Online learning algorithms like Weighted Majority provide an alternate purely game theoretic foundation for learning. Boosting algorithms yield algorithms for purifying prediction ability. Reduction algorithms provide means for changing esoteric problems into well known ones. Machine Learning. A great deal of experience has accumulated in practical algorithm design from a mixture of paradigms, including bayesian, biological, opt

6 0.06034857 203 hunch net-2006-08-18-Report of MLSS 2006 Taipei

7 0.059444945 26 hunch net-2005-02-21-Problem: Cross Validation

8 0.058356628 268 hunch net-2007-10-19-Second Annual Reinforcement Learning Competition

9 0.058074109 272 hunch net-2007-11-14-BellKor wins Netflix

10 0.055930078 276 hunch net-2007-12-10-Learning Track of International Planning Competition

11 0.055772882 155 hunch net-2006-02-07-Pittsburgh Mind Reading Competition

12 0.049494397 158 hunch net-2006-02-24-A Fundamentalist Organization of Machine Learning

13 0.048822179 229 hunch net-2007-01-26-Parallel Machine Learning Problems

14 0.047231767 465 hunch net-2012-05-12-ICML accepted papers and early registration

15 0.047186792 403 hunch net-2010-07-18-ICML & COLT 2010

16 0.046996631 81 hunch net-2005-06-13-Wikis for Summer Schools and Workshops

17 0.046478834 19 hunch net-2005-02-14-Clever Methods of Overfitting

18 0.046193503 336 hunch net-2009-01-19-Netflix prize within epsilon

19 0.04552589 190 hunch net-2006-07-06-Branch Prediction Competition

20 0.044971276 467 hunch net-2012-06-15-Normal Deviate and the UCSC Machine Learning Summer School
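The simValue column above is presumably a vector similarity between tfidf representations. A minimal sketch, assuming cosine similarity over sparse {term: weight} dicts (the standard choice, though the pipeline's exact measure is not stated):

```python
import math

# Sketch of the similarity implied by "simValue": cosine similarity
# between sparse tf-idf vectors stored as {term: weight} dicts.
def cosine(u, v):
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

u = {"monthly": 0.389, "contests": 0.173}
self_sim = cosine(u, u)                       # ~1.0, up to float error
disjoint = cosine({"a": 1.0}, {"b": 1.0})     # 0.0, no shared terms
```

A document compared with itself scores 1.0 up to floating-point error, which explains the same-blog value of 1.0000001 in the list above.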


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.083), (1, 0.004), (2, -0.03), (3, 0.002), (4, 0.005), (5, 0.0), (6, -0.033), (7, 0.008), (8, -0.026), (9, 0.005), (10, -0.04), (11, 0.06), (12, 0.006), (13, -0.036), (14, -0.017), (15, -0.03), (16, 0.045), (17, 0.005), (18, 0.016), (19, 0.028), (20, 0.053), (21, -0.004), (22, 0.009), (23, -0.054), (24, -0.017), (25, -0.02), (26, 0.005), (27, -0.004), (28, -0.023), (29, 0.035), (30, 0.024), (31, 0.026), (32, 0.056), (33, -0.047), (34, -0.081), (35, 0.027), (36, -0.027), (37, 0.049), (38, 0.04), (39, -0.049), (40, 0.014), (41, 0.038), (42, 0.024), (43, -0.036), (44, -0.044), (45, 0.074), (46, 0.037), (47, 0.045), (48, 0.095), (49, 0.036)]
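LSI derives topic weights like those above by projecting documents onto the leading singular directions of the term-document matrix. As a pure-Python sketch (a real implementation would use a full truncated SVD; this finds just the leading direction by power iteration on X^T X, and all inputs are illustrative):

```python
import math

# Pure-Python sketch of one LSI dimension: power iteration on X^T X to
# find the leading right singular direction of a document-term matrix X
# (list of rows). Real LSI keeps many directions via truncated SVD.
def top_singular_direction(X, iters=100):
    n_terms = len(X[0])
    v = [1.0] * n_terms
    for _ in range(iters):
        u = [sum(row[j] * v[j] for j in range(n_terms)) for row in X]
        w = [sum(X[i][j] * u[i] for i in range(len(X))) for j in range(n_terms)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v  # each document's weight on this topic is its row dotted with v

# Rank-1 example: both docs use terms 0 and 2, never term 1.
X = [[1.0, 0.0, 1.0], [2.0, 0.0, 2.0]]
v = top_singular_direction(X)
```

For this rank-1 matrix the direction converges to (1, 0, 1)/sqrt(2), so the unused middle term gets zero weight and the two documents load on the topic in proportion 1:2.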

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.9603554 63 hunch net-2005-04-27-DARPA project: LAGR


2 0.55003595 155 hunch net-2006-02-07-Pittsburgh Mind Reading Competition

Introduction: Francisco Pereira points out a fun Prediction Competition. Francisco says: DARPA is sponsoring a competition to analyze data from an unusual functional Magnetic Resonance Imaging experiment. Subjects watch videos inside the scanner while fMRI data are acquired. Unbeknownst to these subjects, the videos have been seen by a panel of other subjects that labeled each instant with labels in categories such as representation (are there tools, body parts, motion, sound), location, presence of actors, emotional content, etc. The challenge is to predict all of these different labels on an instant-by-instant basis from the fMRI data. A few reasons why this is particularly interesting: This is beyond the current state of the art, but not inconceivably hard. This is a new type of experiment design that current analysis methods cannot deal with. This is an opportunity to work with a heavily examined and preprocessed neuroimaging dataset. DARPA is offering prizes!

3 0.49222538 56 hunch net-2005-04-14-Families of Learning Theory Statements

Introduction: The diagram above shows a very broad viewpoint of learning theory, organized as arrow / typical statement / examples. Past->Past: Some prediction algorithm A does almost as well as any of a set of algorithms (Weighted Majority). Past->Future: Assuming independent samples, past performance predicts future performance (PAC analysis, ERM analysis). Future->Future: Future prediction performance on subproblems implies future prediction performance using algorithm A (ECOC, Probing). A basic question is: Are there other varieties of statements of this type? Avrim noted that there are also “arrows between arrows”: generic methods for transforming between Past->Past statements and Past->Future statements. Are there others?

4 0.48492187 171 hunch net-2006-04-09-Progress in Machine Translation

Introduction: I just visited ISI where Daniel Marcu and others are working on machine translation. Apparently, machine translation is rapidly improving. A particularly dramatic year was 2002->2003 when systems switched from word-based translation to phrase-based translation. From a (now famous) slide by Charles Wayne at DARPA (which funds much of the work on machine translation) here is some anecdotal evidence: 2002 2003 insistent Wednesday may recurred her trips to Libya tomorrow for flying. Cairo 6-4 ( AFP ) – An official announced today in the Egyptian lines company for flying Tuesday is a company “insistent for flying” may resumed a consideration of a day Wednesday tomorrow her trips to Libya of Security Council decision trace international the imposed ban comment. And said the official “the institution sent a speech to Ministry of Foreign Affairs of lifting on Libya air, a situation her recieving replying are so a trip will pull to Libya a morning Wednesday.” E

5 0.48098105 268 hunch net-2007-10-19-Second Annual Reinforcement Learning Competition

Introduction: The Second Annual Reinforcement Learning Competition is about to get started. The aim of the competition is to facilitate direct comparisons between various learning methods on important and realistic domains. This year’s event will feature well-known benchmark domains as well as more challenging problems of real-world complexity, such as helicopter control and robot soccer keepaway. The competition begins on November 1st, 2007 when training software is released. Results must be submitted by July 1st, 2008. The competition will culminate in an event at ICML-08 in Helsinki, Finland, at which the winners will be announced. For more information, visit the competition website.

6 0.44561607 81 hunch net-2005-06-13-Wikis for Summer Schools and Workshops

7 0.41339973 190 hunch net-2006-07-06-Branch Prediction Competition

8 0.4093149 365 hunch net-2009-07-31-Vowpal Wabbit Open Source Project

9 0.40533885 276 hunch net-2007-12-10-Learning Track of International Planning Competition

10 0.40194535 487 hunch net-2013-07-24-ICML 2012 videos lost

11 0.39354053 112 hunch net-2005-09-14-The Predictionist Viewpoint

12 0.38302138 138 hunch net-2005-12-09-Some NIPS papers

13 0.38270429 389 hunch net-2010-02-26-Yahoo! ML events

14 0.37981817 143 hunch net-2005-12-27-Automated Labeling

15 0.36585405 262 hunch net-2007-09-16-Optimizing Machine Learning Programs

16 0.36107183 32 hunch net-2005-02-27-Antilearning: When proximity goes bad

17 0.35296732 75 hunch net-2005-05-28-Running A Machine Learning Summer School

18 0.34620416 119 hunch net-2005-10-08-We have a winner

19 0.34023607 49 hunch net-2005-03-30-What can Type Theory teach us about Machine Learning?

20 0.33484811 210 hunch net-2006-09-28-Programming Languages for Machine Learning Implementations


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(27, 0.063), (37, 0.571), (53, 0.037), (55, 0.056), (94, 0.14)]
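LDA represents each blog as a distribution over topics, like the sparse {topicId: weight} list above. One standard way to compare such distributions (an assumption here; the pipeline's actual similarity measure is not stated) is the Hellinger distance:

```python
import math

# Sketch: compare two sparse LDA topic-weight dicts ({topicId: weight})
# with Hellinger distance, a common distance between topic distributions.
# 0.0 means identical distributions; 1.0 means disjoint topic support.
def hellinger(p, q):
    topics = set(p) | set(q)
    s = sum(
        (math.sqrt(p.get(t, 0.0)) - math.sqrt(q.get(t, 0.0))) ** 2
        for t in topics
    )
    return math.sqrt(s / 2.0)

p = {27: 0.063, 37: 0.571, 94: 0.14}   # weights taken from the list above
same = hellinger(p, p)                  # 0.0
far = hellinger({0: 1.0}, {1: 1.0})     # 1.0
```

Unlike cosine similarity, Hellinger respects the probabilistic nature of topic weights, which is why it is often preferred for LDA outputs.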

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.95122838 63 hunch net-2005-04-27-DARPA project: LAGR


2 0.82850236 431 hunch net-2011-04-18-A paper not at Snowbird

Introduction: Unfortunately, a scheduling failure meant I missed all of AIStat and most of the learning workshop, otherwise known as Snowbird, when it’s at Snowbird. At Snowbird, the talk on Sum-Product networks by Hoifung Poon stood out to me (Pedro Domingos is a coauthor). The basic point was that by appropriately constructing networks based on sums and products, the normalization problem in probabilistic models is eliminated, yielding a highly tractable yet flexible representation+learning algorithm. As an algorithm, this is noticeably cleaner than deep belief networks, with a claim to being an order of magnitude faster and working better on an image completion task. Snowbird doesn’t have real papers, just the abstract above. I look forward to seeing the paper. (added: Rodrigo points out the deep learning workshop draft.)

3 0.58293182 368 hunch net-2009-08-26-Another 10-year paper in Machine Learning

Introduction: When I was thinking about the best “10 year paper” for ICML, I also took a look at a few other conferences. Here is one from 10 years ago that interested me: David McAllester, PAC-Bayesian Model Averaging, COLT 1999. 2001 Journal Draft. Prior to this paper, the only mechanism known for controlling or estimating the necessary sample complexity for learning over continuously parameterized predictors was VC theory and variants, all of which suffered from a basic problem: they were incredibly pessimistic in practice. This meant that only very gross guidance could be provided for learning algorithm design. The PAC-Bayes bound provided an alternative approach to sample complexity bounds which was radically tighter, quantitatively. It also imported and explained many of the motivations for Bayesian learning in a way that learning theory and perhaps optimization people might appreciate. Since this paper came out, there have been a number of moderately successful attempts t

4 0.55471241 1 hunch net-2005-01-19-Why I decided to run a weblog.

Introduction: I have decided to run a weblog on machine learning and learning theory research. Here are some reasons: 1) Weblogs enable new functionality: Public comment on papers. No mechanism for this exists at conferences and most journals. I have encountered it once for a science paper. Some communities have mailing lists supporting this, but not machine learning or learning theory. I have often read papers and found myself wishing there was some method to consider others’ questions and read the replies. Conference shortlists. One of the most common conversations at a conference is “what did you find interesting?” There is no explicit mechanism for sharing this information at conferences, and it’s easy to imagine that it would be handy to do so. Evaluation and comment on research directions. Papers are almost exclusively about new research, rather than evaluation (and consideration) of research directions. This last role is satisfied by funding agencies to some extent, but

5 0.5503543 138 hunch net-2005-12-09-Some NIPS papers

Introduction: Here is a set of papers that I found interesting (and why). A PAC-Bayes approach to the Set Covering Machine improves the set covering machine. The set covering machine approach is a new way to do classification characterized by a very close connection between theory and algorithm. At this point, the approach seems to be competing well with SVMs in about all dimensions: similar computational speed, similar accuracy, stronger learning theory guarantees, more general information source (a kernel has strictly more structure than a metric), and more sparsity. Developing a classification algorithm is not very easy, but the results so far are encouraging. Off-Road Obstacle Avoidance through End-to-End Learning and Learning Depth from Single Monocular Images both effectively showed that depth information can be predicted from camera images (using notably different techniques). This ability is strongly enabling because cameras are cheap, tiny, light, and potentially provider lo

6 0.42829683 21 hunch net-2005-02-17-Learning Research Programs

7 0.3099511 194 hunch net-2006-07-11-New Models

8 0.30633503 144 hunch net-2005-12-28-Yet more nips thoughts

9 0.28137037 276 hunch net-2007-12-10-Learning Track of International Planning Competition

10 0.27436322 81 hunch net-2005-06-13-Wikis for Summer Schools and Workshops

11 0.27380276 115 hunch net-2005-09-26-Prediction Bounds as the Mathematics of Science

12 0.26602969 42 hunch net-2005-03-17-Going all the Way, Sometimes

13 0.26532036 120 hunch net-2005-10-10-Predictive Search is Coming

14 0.26487941 346 hunch net-2009-03-18-Parallel ML primitives

15 0.26387656 35 hunch net-2005-03-04-The Big O and Constants in Learning

16 0.26151109 221 hunch net-2006-12-04-Structural Problems in NIPS Decision Making

17 0.25199634 143 hunch net-2005-12-27-Automated Labeling

18 0.23974031 229 hunch net-2007-01-26-Parallel Machine Learning Problems

19 0.23524919 136 hunch net-2005-12-07-Is the Google way the way for machine learning?

20 0.23465149 286 hunch net-2008-01-25-Turing’s Club for Machine Learning