knowledge-graph by maker-knowledge-mining

188 hunch net-2006-06-30-ICML papers


meta info for this blog

Source: html

Introduction: Here are some ICML papers which interested me. Arindam Banerjee had a paper which notes that PAC-Bayes bounds, a core theorem in online learning, and statements about the optimality of Bayesian learning all share a core inequality in their proofs. Pieter Abbeel, Morgan Quigley, and Andrew Y. Ng have a paper discussing RL techniques for learning given a bad (but not too bad) model of the world. Nina Balcan and Avrim Blum have a paper which discusses how to learn given a similarity function rather than a kernel. A similarity function requires less structure than a kernel, implying that a learning algorithm using a similarity function might be applied in situations where no effective kernel is evident. Nathan Ratliff, Drew Bagnell, and Marty Zinkevich have a paper describing an algorithm which attempts to fuse A* path planning with learning of transition costs based on human demonstration. Papers (2), (3), and (4) all seem like an initial pass at solving interesting problems which push the domain in which learning is applicable. I’d like to encourage discussion of what papers interested you and why. Maybe we’ll all learn a little bit, and it’s very likely that we all missed interesting papers in a multitrack conference.
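The Balcan–Blum point that a similarity function needs less structure than a kernel can be made concrete with the landmark trick: represent each example by its similarities to a random sample of landmark examples, then train an ordinary linear separator on that representation. The sketch below is a minimal illustration of that idea, not the paper's construction; the toy data, the sigmoid similarity, and all names are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def sim(a, b):
    # Hypothetical similarity: the sigmoid similarity is not positive
    # semi-definite in general, so it is not a valid kernel.
    return np.tanh(a @ b + 1.0)

def landmark_features(X, landmarks):
    """Represent each example by its similarities to the landmark examples."""
    return np.array([[sim(x, l) for l in landmarks] for x in X])

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))               # toy data
y = (X[:, 0] + X[:, 1] > 0).astype(int)     # toy labels

landmarks = X[rng.choice(len(X), size=20, replace=False)]
clf = LogisticRegression().fit(landmark_features(X, landmarks), y)
print("train accuracy:", clf.score(landmark_features(X, landmarks), y))
```

Because the similarity is only evaluated pointwise, nothing here requires it to be symmetric or positive semi-definite, which is exactly the extra structure a kernel would demand.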


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Arindam Banerjee had a paper which notes that PAC-Bayes bounds, a core theorem in online learning, and statements about the optimality of Bayesian learning all share a core inequality in their proofs. [sent-2, score-1.021]

2 Pieter Abbeel, Morgan Quigley, and Andrew Y. Ng have a paper discussing RL techniques for learning given a bad (but not too bad) model of the world. [sent-4, score-0.492]

3 Nina Balcan and Avrim Blum have a paper which discusses how to learn given a similarity function rather than a kernel. [sent-5, score-1.047]

4 A similarity function requires less structure than a kernel, implying that a learning algorithm using a similarity function might be applied in situations where no effective kernel is evident. [sent-6, score-1.573]

5 Nathan Ratliff, Drew Bagnell, and Marty Zinkevich have a paper describing an algorithm which attempts to fuse A* path planning with learning of transition costs based on human demonstration. [sent-7, score-0.986]

6 Papers (2), (3), and (4) all seem like an initial pass at solving interesting problems which push the domain in which learning is applicable. [sent-8, score-0.533]

7 I’d like to encourage discussion of what papers interested you and why. [sent-9, score-0.377]

8 Maybe we’ll all learn a little bit, and it’s very likely that we all missed interesting papers in a multitrack conference. [sent-10, score-0.701]
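An extractive summary of this kind can be sketched in a few lines: vectorize the sentences with tfidf and rank them by the total weight of their terms. This is a rough illustration under assumed details (the scoring rule and the example sentences are mine), not the pipeline that produced the list above.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

def top_sentences(sentences, k=3):
    """Score each sentence by the total tfidf weight of its terms."""
    vectorizer = TfidfVectorizer(stop_words="english")
    weights = vectorizer.fit_transform(sentences)  # one row per sentence
    scores = weights.sum(axis=1).A1                # row sums, flattened
    ranked = sorted(zip(scores, sentences), reverse=True)
    return [(round(float(s), 3), t) for s, t in ranked[:k]]

sentences = [
    "Arindam Banerjee had a paper on a core inequality shared by several proofs.",
    "Nina Balcan and Avrim Blum discuss learning with a similarity function.",
    "Maybe we will all learn a little bit at a multitrack conference.",
]
for score, text in top_sentences(sentences, k=2):
    print(score, text)
```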


similar blogs computed by the tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('similarity', 0.384), ('function', 0.184), ('kernel', 0.176), ('papers', 0.165), ('marty', 0.16), ('zinkevich', 0.16), ('paper', 0.152), ('core', 0.152), ('abbeel', 0.148), ('fuse', 0.148), ('pieter', 0.148), ('multitrack', 0.148), ('bad', 0.143), ('blum', 0.14), ('optimality', 0.14), ('balcan', 0.14), ('discusses', 0.133), ('nina', 0.133), ('bagnell', 0.128), ('transition', 0.128), ('push', 0.128), ('describing', 0.128), ('inequality', 0.128), ('path', 0.124), ('avrim', 0.12), ('ng', 0.116), ('drew', 0.116), ('interested', 0.112), ('costs', 0.11), ('missed', 0.11), ('domain', 0.11), ('discussing', 0.108), ('pass', 0.108), ('notes', 0.106), ('learn', 0.105), ('rl', 0.103), ('andrew', 0.101), ('attempts', 0.101), ('encourage', 0.1), ('statements', 0.096), ('situations', 0.096), ('planning', 0.095), ('share', 0.095), ('interesting', 0.094), ('initial', 0.093), ('maybe', 0.089), ('given', 0.089), ('implying', 0.084), ('requires', 0.081), ('likely', 0.079)]
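The (word, weight) pairs above can be reproduced in spirit with the same vectorizer. Here is a small sketch that extracts the top-weighted terms of one document from a corpus; the function name and parameters are hypothetical.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

def top_words(corpus, doc_index=0, n=10):
    """Return the n highest-weighted (word, tfidf) pairs for one document."""
    vectorizer = TfidfVectorizer(stop_words="english")
    weights = vectorizer.fit_transform(corpus)     # documents x vocabulary
    row = weights[doc_index].toarray().ravel()     # weights for one document
    vocab = vectorizer.get_feature_names_out()
    order = row.argsort()[::-1][:n]                # largest weights first
    return [(vocab[i], round(float(row[i]), 3)) for i in order]
```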

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0000001 188 hunch net-2006-06-30-ICML papers


2 0.16069943 101 hunch net-2005-08-08-Apprenticeship Reinforcement Learning for Control

Introduction: Pieter Abbeel presented a paper with Andrew Ng at ICML on Exploration and Apprenticeship Learning in Reinforcement Learning. The basic idea of this algorithm is: (1) Collect data from a human controlling a machine. (2) Build a transition model based upon the experience. (3) Build a policy which optimizes the transition model. (4) Evaluate the policy. (5) If it works well, halt; otherwise add the experience into the pool and go to (2). The paper proves that this technique will converge to some policy with expected performance near human expected performance assuming the world fits certain assumptions (MDP or linear dynamics). This general idea of apprenticeship learning (i.e. incorporating data from an expert) seems very compelling because (a) humans often learn this way and (b) much harder problems can be solved. For (a), the notion of teaching is about transferring knowledge from an expert to novices, often via demonstration. To see (b), note that we can create intricate rei
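The numbered steps translate directly into a control loop. Below is a schematic sketch of that loop; every callable (build_model, plan, evaluate, good_enough) is a hypothetical stand-in, not the paper's implementation.

```python
def apprenticeship_rl(human_demos, build_model, plan, evaluate,
                      good_enough, max_iters=50):
    """Schematic apprenticeship loop: fit a transition model to pooled
    experience, plan against it, and keep adding the resulting experience
    until the learned policy performs well."""
    experience = list(human_demos)          # (1) start from human demonstrations
    policy = None
    for _ in range(max_iters):
        model = build_model(experience)     # (2) fit transition model to the pool
        policy = plan(model)                # (3) optimize a policy for the model
        result, trace = evaluate(policy)    # (4) run the policy for real
        if good_enough(result):             # (5) halt near expert performance
            break
        experience.extend(trace)            # otherwise pool it and go to (2)
    return policy
```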

3 0.12567431 8 hunch net-2005-02-01-NIPS: Online Bayes

Introduction: One nice use for this blog is to consider and discuss papers that have appeared at recent conferences. I really enjoyed Andrew Ng and Sham Kakade’s paper Online Bounds for Bayesian Algorithms. From the paper: The philosophy taken in the Bayesian methodology is often at odds with that in the online learning community…. the online learning setting makes rather minimal assumptions on the conditions under which the data are being presented to the learner — usually, Nature could provide examples in an adversarial manner. We study the performance of Bayesian algorithms in a more adversarial setting… We provide competitive bounds when the cost function is the log loss, and we compare our performance to the best model in our model class (as in the experts setting). It’s a very nice analysis of some of my favorite algorithms that all hinges around a beautiful theorem: Let Q be any distribution over parameters theta. Then for all sequences S: \(L_{\mathrm{Bayes}}(S) \le L_Q(S)\)
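The displayed inequality is cut off by the excerpt. Reconstructed from memory (so treat it as a sketch of the standard Bayesian mixture bound rather than a quotation), with \(P\) the prior over parameters and \(L_Q(S) = \mathbb{E}_{\theta \sim Q}[L_\theta(S)]\), the full statement reads:

```latex
% Standard Bayesian mixture bound for log loss (reconstruction, not a quote):
% P is the prior, Q any distribution over parameters theta,
% L_theta(S) the cumulative log loss of theta on sequence S.
\[
  L_{\mathrm{Bayes}}(S) \;\le\; L_Q(S) + \mathrm{KL}(Q \,\|\, P)
  \quad \text{for all distributions } Q \text{ and all sequences } S.
\]
```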

4 0.11876392 334 hunch net-2009-01-07-Interesting Papers at SODA 2009

Introduction: Several talks seem potentially interesting to ML folks at this year’s SODA. Maria-Florina Balcan, Avrim Blum, and Anupam Gupta, Approximate Clustering without the Approximation. This paper gives reasonable algorithms with provable approximation guarantees for k-median and other notions of clustering. It’s conceptually interesting, because it’s the second example I’ve seen where NP hardness is subverted by changing the problem definition in a subtle but reasonable way. Essentially, they show that if any near-approximation to an optimal solution is good, then it’s computationally easy to find a near-optimal solution. This subtle shift bears serious thought. A similar one occurred in our ranking paper with respect to minimum feedback arc set. With two known examples, it suggests that many more NP-complete problems might be finessed into irrelevance in this style. Yury Lifshits and Shengyu Zhang, Combinatorial Algorithms for Nearest Neighbors, Near-Duplicates, and Smal

5 0.11466659 454 hunch net-2012-01-30-ICML Posters and Scope

Introduction: Normally, I don’t indulge in posters for ICML, but this year is naturally an exception for me. If you want one, there are a small number left here, if you sign up before February. It also seems worthwhile to give some sense of the scope and reviewing criteria for ICML for authors considering submitting papers. At ICML, the (very large) program committee does the reviewing which informs final decisions by area chairs on most papers. Program chairs set up the process, deal with exceptions or disagreements, and provide advice for the reviewing process. Providing advice is tricky (and easily misleading) because a conference is a community, and in the end the aggregate interests of the community determine the conference. Nevertheless, as a program chair this year it seems worthwhile to state the overall philosophy I have and what I plan to encourage (and occasionally discourage). At the highest level, I believe ICML exists to further research into machine learning, which I gene

6 0.10940952 276 hunch net-2007-12-10-Learning Track of International Planning Competition

7 0.10764015 237 hunch net-2007-04-02-Contextual Scaling

8 0.10029401 235 hunch net-2007-03-03-All Models of Learning have Flaws

9 0.098699123 186 hunch net-2006-06-24-Online convex optimization at COLT

10 0.092547968 30 hunch net-2005-02-25-Why Papers?

11 0.091353983 3 hunch net-2005-01-24-The Humanloop Spectrum of Machine Learning

12 0.090631366 437 hunch net-2011-07-10-ICML 2011 and the future

13 0.089846775 233 hunch net-2007-02-16-The Forgetting

14 0.088950641 438 hunch net-2011-07-11-Interesting Neural Network Papers at ICML 2011

15 0.0884903 385 hunch net-2009-12-27-Interesting things at NIPS 2009

16 0.086299315 256 hunch net-2007-07-20-Motivation should be the Responsibility of the Reviewer

17 0.085674852 251 hunch net-2007-06-24-Interesting Papers at ICML 2007

18 0.083066188 343 hunch net-2009-02-18-Decision by Vetocracy

19 0.081322797 204 hunch net-2006-08-28-Learning Theory standards for NIPS 2006

20 0.080273971 368 hunch net-2009-08-26-Another 10-year paper in Machine Learning


similar blogs computed by the lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.197), (1, -0.01), (2, 0.065), (3, -0.021), (4, 0.098), (5, 0.001), (6, -0.021), (7, -0.023), (8, 0.061), (9, -0.018), (10, 0.043), (11, -0.01), (12, -0.078), (13, -0.009), (14, 0.083), (15, 0.022), (16, 0.009), (17, 0.005), (18, 0.062), (19, -0.018), (20, 0.031), (21, -0.045), (22, -0.135), (23, -0.035), (24, 0.047), (25, 0.052), (26, 0.039), (27, -0.002), (28, -0.008), (29, -0.033), (30, -0.01), (31, 0.001), (32, 0.028), (33, -0.054), (34, 0.029), (35, -0.062), (36, -0.028), (37, 0.018), (38, 0.083), (39, -0.049), (40, 0.044), (41, -0.017), (42, 0.034), (43, 0.088), (44, 0.035), (45, -0.072), (46, -0.001), (47, -0.025), (48, -0.001), (49, -0.036)]
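A similarity list of this kind is typically computed by projecting tfidf vectors onto latent topics with a truncated SVD (that is, lsi) and ranking by cosine similarity. The sketch below illustrates that recipe under assumed names and parameters; it is not the actual pipeline behind this page.

```python
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def lsi_similar(posts, query_index=0, n_topics=50):
    """Rank posts by cosine similarity to one query post in lsi topic space."""
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(posts)
    n_components = min(n_topics, tfidf.shape[1] - 1, len(posts) - 1)
    topics = TruncatedSVD(n_components=n_components).fit_transform(tfidf)
    sims = cosine_similarity(topics[query_index:query_index + 1], topics).ravel()
    return sims.argsort()[::-1]  # post indices, most similar first
```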

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.95572865 188 hunch net-2006-06-30-ICML papers


2 0.70311016 101 hunch net-2005-08-08-Apprenticeship Reinforcement Learning for Control


3 0.67354083 189 hunch net-2006-07-05-more icml papers

Introduction: Here are a few other papers I enjoyed from ICML06. Topic Models: Dynamic Topic Models (David Blei, John Lafferty). A nice model for how topics in LDA-type models can evolve over time, using a linear dynamical system on the natural parameters and a very clever structured variational approximation (in which the mean field parameters are pseudo-observations of a virtual LDS). As in all Blei papers, he makes it look easy, but it is extremely impressive. Pachinko Allocation (Wei Li, Andrew McCallum). A very elegant (but computationally challenging) model which induces correlation amongst topics using a multi-level DAG whose interior nodes are “super-topics” and “sub-topics” and whose leaves are the vocabulary words. Makes the slumbering monster of structure learning stir. Sequence Analysis (I missed these talks since I was chairing another session): Online Decoding of Markov Models with Latency Constraints (Mukund Narasimhan, Paul Viola, Michael Shilman). An “a

4 0.66998267 97 hunch net-2005-07-23-Interesting papers at ACL

Introduction: A recent discussion indicated that one goal of this blog might be to allow people to post comments about recent papers that they liked. I think this could potentially be very useful, especially for those with diverse interests but only finite time to read through conference proceedings. ACL 2005 recently completed, and here are four papers from that conference that I thought were either good or perhaps of interest to a machine learning audience. David Chiang, A Hierarchical Phrase-Based Model for Statistical Machine Translation. (Best paper award.) This paper takes the standard phrase-based MT model that is popular in our field (basically, translate a sentence by individually translating phrases and reordering them according to a complicated statistical model) and extends it to take into account hierarchy in phrases, so that you can learn things like “X’s Y” -> “Y de X” in Chinese, where X and Y are arbitrary phrases. This takes a step toward linguistic syntax for MT, whic

5 0.63945138 280 hunch net-2007-12-20-Cool and Interesting things at NIPS, take three

Introduction: Following up on Hal Daume’s post and John’s post on cool and interesting things seen at NIPS, I’ll post my own little list of neat papers here as well. Of course it’s going to be biased towards what I think is interesting. Also, I have to say that I hadn’t been able to see many papers this year at NIPS due to myself being too busy, so please feel free to contribute the papers that you liked. 1. P. Mudigonda, V. Kolmogorov, P. Torr. An Analysis of Convex Relaxations for MAP Estimation. A surprising paper which shows that many of the more sophisticated convex relaxations that had been proposed recently turn out to be subsumed by the simplest LP relaxation. Be careful next time you try a cool new convex relaxation! 2. D. Sontag, T. Jaakkola. New Outer Bounds on the Marginal Polytope. The title says it all. The marginal polytope is the set of local marginal distributions over subsets of variables that are globally consistent in the sense that there is at least one distributio

6 0.63734609 361 hunch net-2009-06-24-Interesting papers at UAICMOLT 2009

7 0.62266523 334 hunch net-2009-01-07-Interesting Papers at SODA 2009

8 0.6117723 403 hunch net-2010-07-18-ICML & COLT 2010

9 0.59920722 192 hunch net-2006-07-08-Some recent papers

10 0.59359848 256 hunch net-2007-07-20-Motivation should be the Responsibility of the Reviewer

11 0.58922797 325 hunch net-2008-11-10-ICML Reviewing Criteria

12 0.58826411 385 hunch net-2009-12-27-Interesting things at NIPS 2009

13 0.58195496 77 hunch net-2005-05-29-Maximum Margin Mismatch?

14 0.57326758 454 hunch net-2012-01-30-ICML Posters and Scope

15 0.57203007 406 hunch net-2010-08-22-KDD 2010

16 0.56864959 251 hunch net-2007-06-24-Interesting Papers at ICML 2007

17 0.56846744 204 hunch net-2006-08-28-Learning Theory standards for NIPS 2006

18 0.56820273 8 hunch net-2005-02-01-NIPS: Online Bayes

19 0.56563163 438 hunch net-2011-07-11-Interesting Neural Network Papers at ICML 2011

20 0.56423891 233 hunch net-2007-02-16-The Forgetting


similar blogs computed by the lda model

lda for this blog:

topicId topicWeight

[(27, 0.223), (34, 0.423), (38, 0.012), (53, 0.09), (55, 0.049), (94, 0.098)]
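The (topicId, topicWeight) pairs above are a document's fitted topic mixture. A minimal sketch with a standard LDA implementation follows; the parameter values and the 0.01 threshold for dropping near-zero topics are assumptions.

```python
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer

def lda_topic_weights(posts, query_index=0, n_topics=100):
    """Return the (topicId, topicWeight) pairs for one post, keeping
    only topics whose weight is non-negligible."""
    counts = CountVectorizer(stop_words="english").fit_transform(posts)
    lda = LatentDirichletAllocation(n_components=n_topics, random_state=0)
    doc_topics = lda.fit_transform(counts)   # each row sums to 1
    row = doc_topics[query_index]
    return [(i, round(float(w), 3)) for i, w in enumerate(row) if w > 0.01]
```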

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.86962175 188 hunch net-2006-06-30-ICML papers


2 0.84811401 15 hunch net-2005-02-08-Some Links

Introduction: Yaroslav Bulatov collects some links to other technical blogs.

3 0.84252977 153 hunch net-2006-02-02-Introspectionism as a Disease

Introduction: In the AI-related parts of machine learning, it is often tempting to examine how you do things in order to imagine how a machine should do things. This is introspection, and it can easily go awry. I will call introspection gone awry introspectionism. Introspectionism is almost unique to AI (and the AI-related parts of machine learning) and it can lead to huge wasted effort in research. It’s easiest to show how introspectionism arises by an example. Suppose we want to solve the problem of navigating a robot from point A to point B given a camera. Then, the following research action plan might seem natural when you examine your own capabilities: (1) Build an edge detector for still images. (2) Build an object recognition system given the edge detector. (3) Build a system to predict distance and orientation to objects given the object recognition system. (4) Build a system to plan a path through the scene you construct from {object identification, distance, orientation} predictions.

4 0.7845183 415 hunch net-2010-10-28-NY ML Symposium 2010

Introduction: About 200 people attended the 2010 NYAS ML Symposium this year. (It was about 170 last year.) I particularly enjoyed several talks. Yann has a new live demo of (limited) real-time object recognition learning. Sanjoy gave a fairly convincing and comprehensible explanation of why a modified form of single-linkage clustering is consistent in higher dimensions, and why consistency is a critical feature for clustering algorithms. I’m curious how well this algorithm works in practice. Matt Hoffman’s poster covering online LDA seemed pretty convincing to me as an algorithmic improvement. This year, we allocated more time towards posters & poster spotlights. For next year, we are considering some further changes. The format has traditionally been 4 invited Professor speakers, with posters and poster spotlight for students. Demand from other parties to participate is growing, for example from postdocs and startups in the area. Another growing concern is the fa

5 0.74646074 82 hunch net-2005-06-17-Reopening RL->Classification

Introduction: In research, it’s often the case that solving a problem helps you realize that it wasn’t the right problem to solve. This is the case for the “reduce RL to classification” problem, with the solution hinted at here and turned into a paper here. The essential difficulty is that the method of stating and analyzing reductions ends up being nonalgorithmic (unlike previous reductions) unless you work with learning from teleoperated robots as Greg Grudic does. The difficulty here is due to the reduction being dependent on the optimal policy (which a human teleoperator might simulate, but which is otherwise unavailable). So, this problem is “open” again with the caveat that this time we want a more algorithmic solution. Whether or not this is feasible at all is still unclear and evidence in either direction would greatly interest me. A positive answer might have many practical implications in the long run.

6 0.62723964 457 hunch net-2012-02-29-Key Scientific Challenges and the Franklin Symposium

7 0.54742926 101 hunch net-2005-08-08-Apprenticeship Reinforcement Learning for Control

8 0.52695924 426 hunch net-2011-03-19-The Ideal Large Scale Learning Class

9 0.52233458 33 hunch net-2005-02-28-Regularization

10 0.52045572 424 hunch net-2011-02-17-What does Watson mean?

11 0.5189659 435 hunch net-2011-05-16-Research Directions for Machine Learning and Algorithms

12 0.51844448 347 hunch net-2009-03-26-Machine Learning is too easy

13 0.5158428 286 hunch net-2008-01-25-Turing’s Club for Machine Learning

14 0.51528025 158 hunch net-2006-02-24-A Fundamentalist Organization of Machine Learning

15 0.51457334 258 hunch net-2007-08-12-Exponentiated Gradient

16 0.51263469 337 hunch net-2009-01-21-Nearly all natural problems require nonlinearity

17 0.51235896 351 hunch net-2009-05-02-Wielding a New Abstraction

18 0.51081461 41 hunch net-2005-03-15-The State of Tight Bounds

19 0.50965965 359 hunch net-2009-06-03-Functionally defined Nonlinear Dynamic Models

20 0.50961787 95 hunch net-2005-07-14-What Learning Theory might do