hunch_net hunch_net-2005 hunch_net-2005-53 knowledge-graph by maker-knowledge-mining
Source: html
Introduction: Geoff Gordon made an interesting presentation at the Snowbird learning workshop discussing the use of no-regret algorithms for several robot-related learning problems. There seems to be a draft here. This seems interesting in two ways: Drawback Removal One of the significant problems with these online algorithms is that they can’t cope with structure very easily. This drawback is addressed for certain structures. Experiments One criticism of such algorithms is that they are too “worst case”. Several experiments suggest that protecting yourself against this worst case does not necessarily incur a great loss.
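For readers unfamiliar with the no-regret setting, the sketch below may help make "regret" concrete. It is the classic exponential-weights (Hedge) update over K experts, not the structured algorithm from Gordon's talk; the function name, step size, and toy loss matrix are illustrative assumptions rather than anything from the post.

```python
import numpy as np

def hedge(loss_matrix, eta):
    """Exponential-weights (Hedge) over a T x K matrix of expert losses in [0, 1].

    Returns the learner's cumulative expected loss and its regret, i.e.
    cumulative loss minus the loss of the best single expert in hindsight.
    """
    T, K = loss_matrix.shape
    weights = np.ones(K)
    learner_loss = 0.0
    for t in range(T):
        p = weights / weights.sum()                # play the normalized weight vector
        learner_loss += p @ loss_matrix[t]         # expected loss this round
        weights *= np.exp(-eta * loss_matrix[t])   # shrink weights of experts that did badly
    best_expert_loss = loss_matrix.sum(axis=0).min()
    return learner_loss, learner_loss - best_expert_loss

# Toy run: 3 experts, 1000 rounds of arbitrary losses; with eta ~ sqrt(ln K / T)
# the regret grows only like sqrt(T ln K), i.e. sublinearly in T.
rng = np.random.default_rng(0)
losses = rng.random((1000, 3))
total, regret = hedge(losses, eta=np.sqrt(np.log(3) / 1000))
print(f"learner loss {total:.1f}, regret {regret:.1f}")
```

The "worst case" guarantee referred to above is that the regret stays sublinear in the number of rounds no matter how the loss sequence is chosen.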
sentIndex sentText sentNum sentScore
1 Geoff Gordon made an interesting presentation at the Snowbird learning workshop discussing the use of no-regret algorithms for several robot-related learning problems. [sent-1, score-1.435]
2 This seems interesting in two ways: Drawback Removal One of the significant problems with these online algorithms is that they can’t cope with structure very easily. [sent-3, score-1.03]
3 Experiments One criticism of such algorithms is that they are too “worst case”. [sent-5, score-0.371]
4 Several experiments suggest that protecting yourself against this worst case does not necessarily incur a great loss. [sent-6, score-1.422]
wordName wordTfidf (topN-words)
[('drawback', 0.329), ('worst', 0.317), ('experiments', 0.281), ('incur', 0.259), ('gordon', 0.227), ('removal', 0.227), ('snowbird', 0.216), ('cope', 0.194), ('criticism', 0.194), ('case', 0.184), ('geoff', 0.183), ('draft', 0.179), ('algorithms', 0.177), ('discussing', 0.175), ('addressed', 0.175), ('certain', 0.153), ('interesting', 0.152), ('suggest', 0.151), ('necessarily', 0.151), ('presentation', 0.142), ('structure', 0.119), ('use', 0.11), ('seems', 0.105), ('loss', 0.103), ('workshop', 0.101), ('several', 0.099), ('ways', 0.095), ('made', 0.091), ('significant', 0.085), ('online', 0.082), ('great', 0.079), ('two', 0.059), ('one', 0.059), ('problems', 0.057), ('learning', 0.031)]
simIndex simValue blogId blogTitle
same-blog 1 0.99999994 53 hunch net-2005-04-06-Structured Regret Minimization
2 0.15391311 431 hunch net-2011-04-18-A paper not at Snowbird
Introduction: Unfortunately, a scheduling failure meant I missed all of AIStat and most of the learning workshop, otherwise known as Snowbird, when it’s at Snowbird. At Snowbird, the talk on Sum-Product networks by Hoifung Poon stood out to me (Pedro Domingos is a coauthor). The basic point was that by appropriately constructing networks based on sums and products, the normalization problem in probabilistic models is eliminated, yielding a highly tractable yet flexible representation+learning algorithm. As an algorithm, this is noticeably cleaner than deep belief networks with a claim to being an order of magnitude faster and working better on an image completion task. Snowbird doesn’t have real papers—just the abstract above. I look forward to seeing the paper. (Added: Rodrigo points out the deep learning workshop draft.)
3 0.12958115 409 hunch net-2010-09-13-AIStats
Introduction: Geoff Gordon points out AIStats 2011 in Ft. Lauderdale, Florida. The call for papers is now out, due Nov. 1. The plan is to experiment with the review process to encourage quality in several ways. I expect to submit a paper and would encourage others with good research to do likewise.
4 0.12121099 306 hunch net-2008-07-02-Proprietary Data in Academic Research?
Introduction: Should results of experiments on proprietary datasets be in the academic research literature? The arguments I can imagine in the “against” column are: Experiments are not repeatable. Repeatability in experiments is essential to science because it allows others to compare new methods with old and discover which is better. It’s unfair. Academics who don’t have insider access to proprietary data are at a substantial disadvantage when competing with others who do. I’m unsympathetic to argument (2). To me, it looks like there are simply some resource constraints, and these should not prevent research progress. For example, we wouldn’t prevent publishing about particle accelerator experiments by physicists at CERN because physicists at CMU couldn’t run their own experiments. Argument (1) seems like a real issue. The argument for is: Yes, they are another form of evidence that an algorithm is good. The degree to which they are evidence is less than for public
5 0.10699689 78 hunch net-2005-06-06-Exact Online Learning for Classification
Introduction: Jacob Abernethy and I have found a computationally tractable method for computing an optimal (or near optimal depending on setting) master algorithm combining expert predictions, addressing this open problem. A draft is here. The effect of this improvement seems to be about a factor of 2 decrease in the regret (= error rate minus best possible error rate) for the low error rate situation. (At large error rates, there may be no significant difference.) There are some unfinished details still to consider: When we remove all of the approximation slack from online learning, is the result a satisfying learning algorithm, in practice? I consider online learning to be one of the more compelling methods of analyzing and deriving algorithms, but that expectation must be either met or not by this algorithm. Some extra details: The algorithm is optimal given a small amount of side information (k in the draft). What is the best way to remove this side information? The removal
6 0.10267436 127 hunch net-2005-11-02-Progress in Active Learning
7 0.09821108 65 hunch net-2005-05-02-Reviewing techniques for conferences
8 0.095508121 80 hunch net-2005-06-10-Workshops are not Conferences
9 0.093406558 54 hunch net-2005-04-08-Fast SVMs
10 0.09015549 177 hunch net-2006-05-05-An ICML reject
11 0.086489998 148 hunch net-2006-01-13-Benchmarks for RL
12 0.086425275 371 hunch net-2009-09-21-Netflix finishes (and starts)
13 0.084555231 9 hunch net-2005-02-01-Watchword: Loss
14 0.083756879 109 hunch net-2005-09-08-Online Learning as the Mathematics of Accountability
15 0.083450668 343 hunch net-2009-02-18-Decision by Vetocracy
16 0.081772149 307 hunch net-2008-07-04-More Presentation Preparation
17 0.081507459 79 hunch net-2005-06-08-Question: “When is the right time to insert the loss function?”
18 0.074548408 245 hunch net-2007-05-12-Loss Function Semantics
19 0.074001424 115 hunch net-2005-09-26-Prediction Bounds as the Mathematics of Science
20 0.07152234 332 hunch net-2008-12-23-Use of Learning Theory
topicId topicWeight
[(0, 0.143), (1, 0.041), (2, 0.004), (3, -0.07), (4, 0.006), (5, 0.089), (6, -0.038), (7, -0.021), (8, 0.02), (9, 0.061), (10, 0.018), (11, -0.028), (12, 0.034), (13, -0.017), (14, 0.03), (15, 0.025), (16, 0.037), (17, 0.025), (18, -0.151), (19, -0.005), (20, -0.031), (21, -0.013), (22, -0.061), (23, 0.041), (24, 0.066), (25, 0.018), (26, 0.061), (27, -0.011), (28, -0.02), (29, 0.098), (30, -0.138), (31, -0.012), (32, -0.03), (33, 0.09), (34, 0.118), (35, -0.088), (36, 0.088), (37, 0.001), (38, 0.016), (39, -0.005), (40, -0.018), (41, 0.086), (42, -0.077), (43, 0.147), (44, 0.068), (45, 0.021), (46, 0.063), (47, 0.057), (48, 0.043), (49, 0.08)]
simIndex simValue blogId blogTitle
same-blog 1 0.95923388 53 hunch net-2005-04-06-Structured Regret Minimization
2 0.61747289 431 hunch net-2011-04-18-A paper not at Snowbird
3 0.59903616 54 hunch net-2005-04-08-Fast SVMs
Introduction: There was a presentation at Snowbird about parallelized support vector machines. In many cases, people parallelize by ignoring serial operations, but that is not what happened here—they parallelize with optimizations. Consequently, this seems to be the fastest SVM in existence. There is a related paper here.
4 0.4693813 346 hunch net-2009-03-18-Parallel ML primitives
Introduction: Previously, we discussed parallel machine learning a bit. As parallel ML is rather difficult, I’d like to describe my thinking at the moment, and ask for advice from the rest of the world. This is particularly relevant right now, as I’m attending a workshop tomorrow on parallel ML. Parallelizing slow algorithms seems uncompelling. Parallelizing many algorithms also seems uncompelling, because the effort required to parallelize is substantial. This leaves the question: Which one fast algorithm is the best to parallelize? What is a substantially different second? One compellingly fast simple algorithm is online gradient descent on a linear representation. This is the core of Leon’s sgd code and Vowpal Wabbit. Antoine Bordes showed a variant was competitive in the large scale learning challenge. It’s also a decades old primitive which has been reused in many algorithms, and continues to be reused. It also applies to online learning rather than just online optimiz
5 0.4657926 80 hunch net-2005-06-10-Workshops are not Conferences
Introduction: … and you should use that fact. A workshop differs from a conference in that it is about a focused group of people worrying about a focused topic. It also differs in that a workshop is typically a “one-time affair” rather than a series. (The Snowbird learning workshop counts as a conference in this respect.) A common failure mode of both organizers and speakers at a workshop is to treat it as a conference. This is “ok”, but it is not really taking advantage of the situation. Here are some things I’ve learned: For speakers: A smaller audience means it can be more interactive. Interactive means a better chance to avoid losing your audience and a more interesting presentation (because you can adapt to your audience). Greater focus amongst the participants means you can get to the heart of the matter more easily, and discuss tradeoffs more carefully. Unlike conferences, relevance is more valued than newness. For organizers: Not everything needs to be in a conference st
6 0.45084038 109 hunch net-2005-09-08-Online Learning as the Mathematics of Accountability
7 0.44803885 404 hunch net-2010-08-20-The Workshop on Cores, Clusters, and Clouds
8 0.4453356 334 hunch net-2009-01-07-Interesting Papers at SODA 2009
9 0.4219574 307 hunch net-2008-07-04-More Presentation Preparation
10 0.42152449 126 hunch net-2005-10-26-Fallback Analysis is a Secret to Useful Algorithms
11 0.41781765 451 hunch net-2011-12-13-Vowpal Wabbit version 6.1 & the NIPS tutorial
12 0.41695312 229 hunch net-2007-01-26-Parallel Machine Learning Problems
13 0.41677019 442 hunch net-2011-08-20-The Large Scale Learning Survey Tutorial
14 0.40495113 28 hunch net-2005-02-25-Problem: Online Learning
15 0.39798012 306 hunch net-2008-07-02-Proprietary Data in Academic Research?
16 0.39405072 148 hunch net-2006-01-13-Benchmarks for RL
17 0.39076951 219 hunch net-2006-11-22-Explicit Randomization in Learning algorithms
18 0.39025646 163 hunch net-2006-03-12-Online learning or online preservation of learning?
19 0.38836795 234 hunch net-2007-02-22-Create Your Own ICML Workshop
20 0.38589972 374 hunch net-2009-10-10-ALT 2009
topicId topicWeight
[(27, 0.198), (94, 0.102), (96, 0.547)]
simIndex simValue blogId blogTitle
same-blog 1 0.80683768 53 hunch net-2005-04-06-Structured Regret Minimization
2 0.78150928 175 hunch net-2006-04-30-John Langford –> Yahoo Research, NY
Introduction: I will join Yahoo Research (in New York) after my contract ends at TTI-Chicago. The deciding reasons are: Yahoo is running into many hard learning problems. This is precisely the situation where basic research might hope to have the greatest impact. Yahoo Research understands research including publishing, conferences, etc… Yahoo Research is growing, so there is a chance I can help it grow well. Yahoo understands the internet, including (but not at all limited to) experimenting with research blogs. In the end, Yahoo Research seems like the place where I might have a chance to make the greatest difference. Yahoo (as a company) has made a strong bet on Yahoo Research. We-the-researchers all hope that bet will pay off, and this seems plausible. I’ll certainly have fun trying.
3 0.64292634 104 hunch net-2005-08-22-Do you believe in induction?
Introduction: Foster Provost gave a talk at the ICML metalearning workshop on “metalearning” and the “no free lunch theorem” which seems worth summarizing. As a review: the no free lunch theorem is the most complicated way we know of to say that a bias is required in order to learn. The simplest way to see this is in a nonprobabilistic setting. If you are given examples of the form (x,y) and you wish to predict y from x then any prediction mechanism errs half the time in expectation over all sequences of examples. The proof of this is very simple: on every example a predictor must make some prediction and by symmetry over the set of sequences it will be wrong half the time and right half the time. The basic idea of this proof has been applied to many other settings. The simplistic interpretation of this theorem which many people jump to is “machine learning is dead” since there can be no single learning algorithm which can solve all learning problems. This is the wrong way to thi
4 0.55296057 443 hunch net-2011-09-03-Fall Machine Learning Events
Introduction: Many Machine Learning related events are coming up this fall. September 9, abstracts for the New York Machine Learning Symposium are due. Send a 2 page pdf, if interested, and note that we: widened submissions to be from anybody rather than students. set aside a larger fraction of time for contributed submissions. September 15, there is a machine learning meetup, where I’ll be discussing terascale learning at AOL. September 16, there is a CS & Econ day at New York Academy of Sciences. This is not ML focused, but it’s easy to imagine interest. September 23 and later, NIPS workshop submissions start coming due. As usual, there are too many good ones, so I won’t be able to attend all those that interest me. I do hope some workshop makers consider ICML this coming summer, as we are increasing to a 2 day format for you. Here are a few that interest me: Big Learning is about dealing with lots of data. Abstracts are due September 30. The Bayes
5 0.40415826 105 hunch net-2005-08-23-(Dis)similarities between academia and open source programmers
Introduction: Martin Pool and I recently discussed the similarities and differences between academia and open source programming. Similarities: Cost profile Research and programming share approximately the same cost profile: A large upfront effort is required to produce something useful, and then “anyone” can use it. (The “anyone” is not quite right for either group because only sufficiently technical people could use it.) Wealth profile A “wealthy” academic or open source programmer is someone who has contributed a lot to other people in research or programs. Much of academia is a “gift culture”: whoever gives the most is most respected. Problems Both academia and open source programming suffer from similar problems. Whether or not (and which) open source program is used is perhaps too often personality-driven rather than driven by capability or usefulness. Similar phenomena can happen in academia with respect to directions of research. Funding is often a problem for
6 0.37207279 478 hunch net-2013-01-07-NYU Large Scale Machine Learning Class
7 0.35843837 252 hunch net-2007-07-01-Watchword: Online Learning
8 0.35363013 426 hunch net-2011-03-19-The Ideal Large Scale Learning Class
9 0.3533814 258 hunch net-2007-08-12-Exponentiated Gradient
10 0.35264114 45 hunch net-2005-03-22-Active learning
11 0.35241076 352 hunch net-2009-05-06-Machine Learning to AI
12 0.35124403 43 hunch net-2005-03-18-Binomial Weighting
13 0.34897545 400 hunch net-2010-06-13-The Good News on Exploration and Learning
14 0.34799266 190 hunch net-2006-07-06-Branch Prediction Competition
15 0.34742156 345 hunch net-2009-03-08-Prediction Science
16 0.34725186 244 hunch net-2007-05-09-The Missing Bound
17 0.34633002 311 hunch net-2008-07-26-Compositional Machine Learning Algorithm Design
18 0.34624517 337 hunch net-2009-01-21-Nearly all natural problems require nonlinearity
19 0.34621409 41 hunch net-2005-03-15-The State of Tight Bounds
20 0.34563732 351 hunch net-2009-05-02-Wielding a New Abstraction