hunch_net 2008-293 knowledge-graph by maker-knowledge-mining
Source: html
Introduction: A new direction of research seems to be arising in machine learning: Interactive Machine Learning. This isn’t a familiar term, although it does include some familiar subjects. What is Interactive Machine Learning? The fundamental requirements are that the learning algorithm (a) interacts with the world and (b) learns. For our purposes, let’s define learning as efficiently competing with a large set of possible predictors. Examples include:

Online learning against an adversary (Avrim’s Notes). The interaction is almost trivial: the learning algorithm makes a prediction and then receives feedback. The learning is choosing based upon the advice of many experts.

Active Learning. In active learning, the interaction is choosing which examples to label, and the learning is choosing from amongst a large set of hypotheses.

Contextual Bandits. The interaction is choosing one of several actions and learning only the value of the chosen action (weaker than active learning feedback).
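The "competing with a large set of predictors" definition, and the experts setting in particular, can be made concrete with the exponential weights (Hedge) algorithm. The sketch below is illustrative only and not from the post; the toy alternating target sequence, the expert pool, and the choice eta=0.5 are assumptions.

```python
import math

def hedge(experts, rounds, eta=0.5):
    """Exponential-weights ("Hedge") prediction with expert advice.

    experts: list of functions mapping a round index t to a prediction
    in {0, 1}. Returns the learner's total number of mistakes.
    """
    weights = [1.0] * len(experts)
    mistakes = 0
    for t in range(rounds):
        truth = t % 2  # toy target sequence (an assumption): alternating labels
        advice = [e(t) for e in experts]
        # Predict by weighted majority vote over the experts' advice.
        vote_for_one = sum(w for w, a in zip(weights, advice) if a == 1)
        prediction = 1 if vote_for_one >= sum(weights) / 2 else 0
        if prediction != truth:
            mistakes += 1
        # Multiplicatively shrink the weight of every expert that erred.
        weights = [w * math.exp(-eta * (a != truth))
                   for w, a in zip(weights, advice)]
    return mistakes

# One expert tracks the pattern perfectly; the others are constants or
# exactly wrong. The learner quickly concentrates weight on the good one.
experts = [lambda t: t % 2, lambda t: 0, lambda t: 1, lambda t: 1 - t % 2]
print(hedge(experts, rounds=100))
```

Note how the interaction is exactly the trivial loop described above: predict, then receive feedback; learning is the reweighting of the expert pool.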
More forms of interaction will doubtless be noted and tackled as time progresses. I created a webpage for my own research on interactive learning which helps define the above subjects a bit more.

There are several learning settings which fail either the interaction or the learning test. The basic paradigm in supervised learning is that you ask experts to label examples, and then you learn a predictor based upon the predictions of these experts. The interaction is there, but the set of policies learned over is still too limited—essentially the policies just memorize what to do in each state. All of these not-quite-interactive-learning topics are of course very useful background information for interactive machine learning.
We know from other fields and various examples that interaction is very powerful. From online learning against an adversary, we know that independence of samples is unnecessary in an interactive setting—in fact you can even function against an adversary. From active learning, we know that interaction sometimes allows us to use exponentially fewer labeled samples than in supervised learning. From contextual bandits, we gain the ability to learn in settings where traditional supervised learning just doesn’t apply. From complexity theory we have “IP = PSPACE”, roughly: interactive proofs are as powerful as polynomial space algorithms, which is a strong statement about the power of interaction.
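The "exponentially fewer labeled samples" claim for active learning has a classic concrete instance: learning a one-dimensional threshold classifier by binary search over sorted unlabeled points, which needs O(log n) label queries instead of n. This is a hypothetical sketch; the function names and the toy threshold are assumptions, not anything from the post.

```python
def learn_threshold_actively(points, oracle):
    """Binary-search active learning of a 1-d threshold classifier.

    points: sorted list of unlabeled values; oracle(x) returns the label
    (0 below the threshold, 1 at or above it). Uses O(log n) label
    queries instead of labeling all n points.
    """
    queries = 0
    lo, hi = 0, len(points)  # the threshold index lies in [lo, hi]
    while lo < hi:
        mid = (lo + hi) // 2
        queries += 1  # each oracle call is one label query
        if oracle(points[mid]) == 1:
            hi = mid
        else:
            lo = mid + 1
    return lo, queries  # index of first positive point, labels used

points = [i / 1000 for i in range(1000)]
index, queries = learn_threshold_actively(points, lambda x: int(x >= 0.42))
print(index, queries)
```

With 1000 unlabeled points, the learner locates the decision boundary with about 10 label queries, where passive supervised learning would label all 1000.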
Several other variations of interactive settings have been proposed and analyzed. There are plenty of kinds of natural interaction which haven’t been formalized or analyzed. Many people doing machine learning want to reach AI, and it seems clear that any AI must engage in interactive learning. Some of the techniques for other methods of interactive learning may be helpful. How do we blend interactive and noninteractive learning? Are there general methods for reducing interactive learning problems to supervised learning problems (which we know better)?
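On the closing question of reducing interactive learning problems to supervised learning problems, one standard bridge is importance weighting: for example, evaluating an arbitrary policy from logged bandit data with inverse propensity scoring, after which ordinary supervised tools can be applied. The following is an illustrative sketch only; the toy world, function names, and parameters are assumptions, not the post's method.

```python
import random

def ips_policy_value(logged, policy):
    """Unbiased off-policy value estimate via inverse propensity scoring.

    logged: list of (x, a, r, p) tuples, where action a was chosen with
    probability p by the logging policy in context x, and reward r was
    observed for that action only.
    policy: function mapping a context x to an action.
    """
    total = 0.0
    for x, a, r, p in logged:
        if policy(x) == a:
            total += r / p  # reweight to correct for the logging policy
    return total / len(logged)

# Toy world (an assumption): 2 contexts, 2 actions; choosing action == context
# yields reward 1, anything else 0. Log data under a uniform-random policy.
random.seed(0)
logged = []
for _ in range(5000):
    x = random.randint(0, 1)
    a = random.randint(0, 1)  # logging policy is uniform, so p = 0.5
    r = 1.0 if a == x else 0.0
    logged.append((x, a, r, 0.5))

good = lambda x: x      # true expected reward 1.0
bad = lambda x: 1 - x   # true expected reward 0.0
print(ips_policy_value(logged, good), ips_policy_value(logged, bad))
```

The estimator recovers roughly 1.0 for the good policy and 0.0 for the bad one, despite only ever seeing the reward of the single logged action per round.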
simIndex simValue blogId blogTitle
same-blog 1 1.0000001 293 hunch net-2008-03-23-Interactive Machine Learning
2 0.23362526 400 hunch net-2010-06-13-The Good News on Exploration and Learning
Introduction: Consider the contextual bandit setting where, repeatedly: A context x is observed. An action a is taken given the context x . A reward r is observed, dependent on x and a . Where the goal of a learning agent is to find a policy for step 2 achieving a large expected reward. This setting is of obvious importance, because in the real world we typically make decisions based on some set of information and then get feedback only about the single action taken. It also fundamentally differs from supervised learning settings because knowing the value of one action is not equivalent to knowing the value of all actions. A decade ago the best machine learning techniques for this setting were implausibly inefficient. Dean Foster once told me he thought the area was a research sinkhole with little progress to be expected. Now we are on the verge of being able to routinely attack these problems, in almost exactly the same sense that we routinely attack bread and but
3 0.21743451 269 hunch net-2007-10-24-Contextual Bandits
Introduction: One of the fundamental underpinnings of the internet is advertising based content. This has become much more effective due to targeted advertising where ads are specifically matched to interests. Everyone is familiar with this, because everyone uses search engines and all search engines try to make money this way. The problem of matching ads to interests is a natural machine learning problem in some ways since there is much information in who clicks on what. A fundamental problem with this information is that it is not supervised—in particular a click-or-not on one ad doesn’t generally tell you if a different ad would have been clicked on. This implies we have a fundamental exploration problem. A standard mathematical setting for this situation is “ k -Armed Bandits”, often with various relevant embellishments. The k -Armed Bandit setting works on a round-by-round basis. On each round: A policy chooses arm a from 1 of k arms (i.e. 1 of k ads). The world reveals t
4 0.16934831 360 hunch net-2009-06-15-In Active Learning, the question changes
Introduction: A little over 4 years ago, Sanjoy made a post saying roughly “we should study active learning theoretically, because not much is understood”. At the time, we did not understand basic things such as whether or not it was possible to PAC-learn with an active algorithm without making strong assumptions about the noise rate. In other words, the fundamental question was “can we do it?” The nature of the question has fundamentally changed in my mind. The answer to the previous question is “yes”, both information theoretically and computationally, in most places where supervised learning could be applied. In many situations, the question has now changed to: “is it worth it?” Is the programming and computational overhead low enough to make the label cost savings of active learning worthwhile? Currently, there are situations where this question could go either way. Much of the challenge for the future is in figuring out how to make active learning easier or more worthwhile.
5 0.166596 264 hunch net-2007-09-30-NIPS workshops are out.
Introduction: Here . I’m particularly interested in the Web Search , Efficient ML , and (of course) Learning Problem Design workshops but there are many others to check out as well. Workshops are a great chance to make progress on or learn about a topic. Relevance and interaction amongst diverse people can sometimes be magical.
6 0.15775268 432 hunch net-2011-04-20-The End of the Beginning of Active Learning
7 0.15009263 388 hunch net-2010-01-24-Specializations of the Master Problem
8 0.14948905 370 hunch net-2009-09-18-Necessary and Sufficient Research
9 0.14328513 410 hunch net-2010-09-17-New York Area Machine Learning Events
10 0.14047515 279 hunch net-2007-12-19-Cool and interesting things seen at NIPS
11 0.13459204 299 hunch net-2008-04-27-Watchword: Supervised Learning
12 0.1317035 376 hunch net-2009-11-06-Yisong Yue on Self-improving Systems
13 0.13113771 375 hunch net-2009-10-26-NIPS workshops
14 0.12521029 183 hunch net-2006-06-14-Explorations of Exploration
15 0.11749483 317 hunch net-2008-09-12-How do we get weak action dependence for learning with partial observations?
16 0.1169645 347 hunch net-2009-03-26-Machine Learning is too easy
17 0.11681201 435 hunch net-2011-05-16-Research Directions for Machine Learning and Algorithms
18 0.11434552 309 hunch net-2008-07-10-Interesting papers, ICML 2008
19 0.10850855 353 hunch net-2009-05-08-Computability in Artificial Intelligence
20 0.10617594 45 hunch net-2005-03-22-Active learning
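The contextual bandit setting described in the similar posts above (observe a context, choose an action, see only that action's reward) can be illustrated with a minimal epsilon-greedy loop. This is a hypothetical sketch: the function name, the toy world in which action x is best in context x, and all parameter choices are assumptions.

```python
import random

def epsilon_greedy_contextual(rounds=10000, n_contexts=3, n_actions=3,
                              epsilon=0.1, seed=1):
    """Minimal contextual bandit loop: only the chosen action's reward is
    observed, so the learner must explore to discover better actions."""
    random.seed(seed)
    # Per (context, action) running reward estimates and pull counts.
    value = [[0.0] * n_actions for _ in range(n_contexts)]
    count = [[0] * n_actions for _ in range(n_contexts)]
    total = 0.0
    for t in range(rounds):
        x = random.randrange(n_contexts)           # a context is observed
        if random.random() < epsilon:              # explore uniformly
            a = random.randrange(n_actions)
        else:                                      # exploit current estimates
            a = max(range(n_actions), key=lambda i: value[x][i])
        r = 1.0 if a == x else 0.0                 # reward of the chosen action only
        total += r
        count[x][a] += 1
        value[x][a] += (r - value[x][a]) / count[x][a]
    return total / rounds

print(round(epsilon_greedy_contextual(), 3))
```

After a short exploration phase the learner matches each context to its best action, so the average reward approaches (1 - epsilon) plus the small reward exploration still collects.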
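The round-by-round k-Armed Bandit protocol described above (a policy chooses one of k arms, then the world reveals a reward) is commonly attacked with optimism-based strategies such as UCB1, which pulls the arm maximizing its empirical mean plus a confidence bonus. The sketch below is illustrative only; the Bernoulli reward model and all parameters are assumptions.

```python
import math
import random

def ucb1(means, rounds=20000, seed=2):
    """UCB1 for the k-armed bandit with Bernoulli arms: pull the arm
    maximizing empirical mean plus a bonus that shrinks with pulls."""
    random.seed(seed)
    k = len(means)
    pulls = [0] * k
    avg = [0.0] * k
    total = 0.0
    for t in range(1, rounds + 1):
        if t <= k:
            a = t - 1  # pull every arm once to initialize the estimates
        else:
            a = max(range(k),
                    key=lambda i: avg[i] + math.sqrt(2 * math.log(t) / pulls[i]))
        r = 1.0 if random.random() < means[a] else 0.0  # Bernoulli reward
        total += r
        pulls[a] += 1
        avg[a] += (r - avg[a]) / pulls[a]
    return total / rounds, pulls

avg_reward, pulls = ucb1([0.2, 0.5, 0.8])
print(round(avg_reward, 3), pulls)
```

The pull counts concentrate on the best arm while the shrinking bonus guarantees every arm is still tried often enough to rule it out with confidence.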