hunch_net hunch_net-2009 hunch_net-2009-338 knowledge-graph by maker-knowledge-mining

338 hunch net-2009-01-23-An Active Learning Survey


meta info for this blog

Source: html

Introduction: Burr Settles wrote a fairly comprehensive survey of active learning. He intends to maintain and update the survey, so send him any suggestions you have.


Summary: the most important sentences generated by the tfidf model

sentIndex sentText sentNum sentScore

1 Burr Settles wrote a fairly comprehensive survey of active learning. [sent-1, score-1.448]

2 He intends to maintain and update the survey, so send him any suggestions you have. [sent-2, score-0.985]


similar blogs computed by tfidf model

tfidf for this blog:

wordName wordTfidf (topN-words)

[('survey', 0.509), ('burr', 0.36), ('settles', 0.36), ('maintain', 0.333), ('wrote', 0.315), ('comprehensive', 0.269), ('send', 0.233), ('suggestions', 0.213), ('update', 0.206), ('fairly', 0.183), ('active', 0.151), ('learning', 0.021)]
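The weights above are tf-idf scores for terms in this post; the "similar blogs" lists below are presumably ranked by comparing such weight vectors. As a rough, hedged sketch of how that kind of ranking can be computed, using scikit-learn on a made-up toy corpus rather than the actual maker-knowledge-mining pipeline:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy stand-in corpus; the real post texts and pipeline are not reproduced here.
posts = [
    "Burr Settles wrote a fairly comprehensive survey of active learning.",
    "Progress in active learning since Sanjoy pointed out the lack of theory.",
    "The ideal large scale learning class covers online gradient descent.",
    "ICML survey results compared with the 2010 survey.",
]

# Represent each post as a tf-idf weight vector over the vocabulary.
X = TfidfVectorizer(stop_words="english").fit_transform(posts)

# Rank the other posts by cosine similarity to post 0 (highest first).
sims = cosine_similarity(X[0], X).ravel()
for idx in sims.argsort()[::-1]:
    print(idx, round(float(sims[idx]), 3))
```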

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 1.0 338 hunch net-2009-01-23-An Active Learning Survey

Introduction: Burr Settles wrote a fairly comprehensive survey of active learning. He intends to maintain and update the survey, so send him any suggestions you have.

2 0.11698352 468 hunch net-2012-06-29-ICML survey and comments

Introduction: Just about nothing could keep me from attending ICML, except for Dora who arrived on Monday. Consequently, I have only secondhand reports that the conference is going well. For those who are remote (like me) or after the conference (like everyone), Mark Reid has set up the ICML discussion site where you can comment on any paper or subscribe to papers. Authors are automatically subscribed to their own papers, so it should be possible to have a discussion significantly after the fact, as people desire. We also conducted a survey before the conference and have the survey results now. This can be compared with the ICML 2010 survey results. Looking at the comparable questions, we can sometimes order the answers to have scores ranging from 0 to 3 or 0 to 4 with 3 or 4 being best and 0 worst, then compute the average difference between 2012 and 2010. Glancing through them, I see: Most people found the papers they reviewed a good fit for their expertise (-.037 w.r.t 20

3 0.10839057 426 hunch net-2011-03-19-The Ideal Large Scale Learning Class

Introduction: At NIPS, Andrew Ng asked me what should be in a large scale learning class. After some discussion with him and Nando and mulling it over a bit, these are the topics that I think should be covered. There are many different kinds of scaling. Scaling in examples: This is the most basic kind of scaling. Online Gradient Descent: This is an old algorithm; I'm not sure if anyone can be credited with it in particular. Perhaps the Perceptron is a good precursor, but substantial improvements come from the notion of a loss function, of which squared loss, logistic loss, hinge loss, and quantile loss are all worth covering. It's important to cover the semantics of these loss functions as well. Vowpal Wabbit is a reasonably fast codebase implementing these. Second Order Gradient Descent methods: For some problems, methods taking into account second derivative information can be more effective. I've seen preconditioned conjugate gradient work well, for which Jonath
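The excerpt above describes online gradient descent over a choice of loss functions. Below is a minimal sketch of that idea in plain Python/NumPy; it is illustrative only, not the Vowpal Wabbit implementation, and the function names and toy data are made up:

```python
import numpy as np

# Minimal online gradient descent for a linear predictor, sketching the idea described above.

def squared_loss_grad(pred, y):
    # derivative of 0.5 * (pred - y)^2 with respect to pred
    return pred - y

def logistic_loss_grad(pred, y):
    # y in {-1, +1}; derivative of log(1 + exp(-y * pred)) with respect to pred
    return -y / (1.0 + np.exp(y * pred))

def hinge_loss_grad(pred, y):
    # y in {-1, +1}; a subgradient of max(0, 1 - y * pred) with respect to pred
    return -y if y * pred < 1 else 0.0

def online_gradient_descent(examples, dim, loss_grad, eta=0.1):
    w = np.zeros(dim)
    for x, y in examples:
        pred = w.dot(x)
        w -= eta * loss_grad(pred, y) * x  # one gradient step per example
    return w

# Toy usage: two Gaussian clusters, labels in {-1, +1}, hinge loss.
rng = np.random.default_rng(0)
examples = [(rng.normal(loc=y, size=2), y) for y in rng.choice([-1.0, 1.0], size=200)]
w = online_gradient_descent(examples, dim=2, loss_grad=hinge_loss_grad)
print("learned weights:", w)
```

Swapping loss_grad between the squared, logistic, and hinge variants changes the semantics of the predictor without changing the update loop, which is the point the excerpt makes about covering loss functions.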

4 0.072569959 403 hunch net-2010-07-18-ICML & COLT 2010

Introduction: The papers which interested me most at ICML and COLT 2010 were: Thomas Walsh, Kaushik Subramanian, Michael Littman, and Carlos Diuk, Generalizing Apprenticeship Learning across Hypothesis Classes. This paper formalizes and provides algorithms with guarantees for mixed-mode apprenticeship and traditional reinforcement learning algorithms, allowing RL algorithms that perform better than for either setting alone. István Szita and Csaba Szepesvári, Model-based reinforcement learning with nearly tight exploration complexity bounds. This paper and another represent the frontier of best-known algorithms for Reinforcement Learning in a Markov Decision Process. James Martens, Deep learning via Hessian-free optimization. About a new not-quite-online second order gradient algorithm for learning deep functional structures. Potentially this is very powerful because while people have often talked about end-to-end learning, it has rarely worked in practice. Chrisoph

5 0.068900928 127 hunch net-2005-11-02-Progress in Active Learning

Introduction: Several bits of progress have been made since Sanjoy pointed out the significant lack of theoretical understanding of active learning. This is an update on the progress I know of. As a refresher, active learning as meant here is: There is a source of unlabeled data. There is an oracle from which labels can be requested for unlabeled data produced by the source. The goal is to perform well with minimal use of the oracle. Here is what I’ve learned: Sanjoy has developed sufficient and semi-necessary conditions for active learning given the assumptions of IID data and “realizability” (that one of the classifiers is a correct classifier). Nina, Alina, and I developed an algorithm for active learning relying on only the assumption of IID data. A draft is here. Nicolo, Claudio, and Luca showed that it is possible to do active learning in an entirely adversarial setting for linear threshold classifiers here. This was published a year or two ago and I r
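The excerpt above defines the active learning setting: an unlabeled source, a label oracle, and the goal of querying the oracle as little as possible. Below is a minimal pool-based sketch of that interaction loop; the uncertainty-sampling heuristic is only one simple illustration, not any of the algorithms mentioned in the post, and the data and helper names are made up:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Sketch of the pool-based protocol: unlabeled pool, label oracle, limited query budget.
rng = np.random.default_rng(0)
X_pool = rng.normal(size=(500, 2))
hidden_labels = (X_pool[:, 0] + X_pool[:, 1] > 0).astype(int)  # only visible through the oracle

def oracle(i):
    return hidden_labels[i]  # each call is one label request

# Seed with one example of each class, then query the most uncertain point each round.
labeled = [int(np.argmax(hidden_labels)), int(np.argmin(hidden_labels))]
labels = {i: oracle(i) for i in labeled}

budget = 20
for _ in range(budget):
    clf = LogisticRegression().fit(X_pool[labeled], [labels[i] for i in labeled])
    p = clf.predict_proba(X_pool)[:, 1]
    uncertainty = -np.abs(p - 0.5)   # predictions closest to 0.5 are least certain
    uncertainty[labeled] = -np.inf   # never re-query an already labeled point
    i = int(np.argmax(uncertainty))
    labels[i] = oracle(i)
    labeled.append(i)

print("labels requested:", len(labeled))
```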

6 0.066814497 360 hunch net-2009-06-15-In Active Learning, the question changes

7 0.063761696 442 hunch net-2011-08-20-The Large Scale Learning Survey Tutorial

8 0.059751697 432 hunch net-2011-04-20-The End of the Beginning of Active Learning

9 0.055101637 278 hunch net-2007-12-17-New Machine Learning mailing list

10 0.043970872 45 hunch net-2005-03-22-Active learning

11 0.039890941 452 hunch net-2012-01-04-Why ICML? and the summer conferences

12 0.038802646 203 hunch net-2006-08-18-Report of MLSS 2006 Taipei

13 0.037754256 419 hunch net-2010-12-04-Vowpal Wabbit, version 5.0, and the second heresy

14 0.037521642 238 hunch net-2007-04-13-What to do with an unreasonable conditional accept

15 0.037347294 387 hunch net-2010-01-19-Deadline Season, 2010

16 0.036442861 292 hunch net-2008-03-15-COLT Open Problems

17 0.035981596 207 hunch net-2006-09-12-Incentive Compatible Reviewing

18 0.035031177 48 hunch net-2005-03-29-Academic Mechanism Design

19 0.033844817 494 hunch net-2014-03-11-The New York ML Symposium, take 2

20 0.033363186 109 hunch net-2005-09-08-Online Learning as the Mathematics of Accountability


similar blogs computed by lsi model

lsi for this blog:

topicId topicWeight

[(0, 0.034), (1, -0.013), (2, 0.0), (3, -0.021), (4, 0.032), (5, 0.009), (6, -0.027), (7, -0.026), (8, -0.016), (9, 0.017), (10, 0.039), (11, -0.024), (12, -0.016), (13, 0.05), (14, -0.024), (15, -0.016), (16, -0.035), (17, 0.013), (18, -0.014), (19, 0.033), (20, 0.003), (21, -0.015), (22, 0.049), (23, -0.004), (24, 0.005), (25, -0.11), (26, -0.017), (27, 0.026), (28, 0.042), (29, 0.021), (30, 0.068), (31, 0.028), (32, -0.042), (33, -0.036), (34, 0.056), (35, 0.032), (36, 0.018), (37, 0.041), (38, 0.03), (39, 0.063), (40, -0.035), (41, -0.041), (42, -0.022), (43, -0.072), (44, 0.013), (45, 0.064), (46, 0.002), (47, 0.014), (48, -0.047), (49, 0.005)]
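The topic weights above come from an LSI (latent semantic indexing) model. As a hedged sketch of how such topic vectors can be built and compared, using scikit-learn's TruncatedSVD on a made-up toy corpus rather than the actual pipeline:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Toy stand-in corpus; the real posts and pipeline are not reproduced here.
posts = [
    "Burr Settles wrote a fairly comprehensive survey of active learning.",
    "Progress in active learning and theoretical understanding.",
    "The ideal large scale learning class covers online gradient descent.",
    "Vowpal Wabbit is a fast open source online learning codebase.",
]

# LSI: factor the tf-idf matrix into a small number of latent topics with a truncated SVD,
# then compare posts by cosine similarity of their topic-weight vectors.
tfidf = TfidfVectorizer(stop_words="english").fit_transform(posts)
topic_weights = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)

sims = cosine_similarity(topic_weights[0:1], topic_weights).ravel()
for idx in sims.argsort()[::-1]:
    print(idx, round(float(sims[idx]), 3))
```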

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.96023005 338 hunch net-2009-01-23-An Active Learning Survey

Introduction: Burr Settles wrote a fairly comprehensive survey of active learning. He intends to maintain and update the survey, so send him any suggestions you have.

2 0.5753513 127 hunch net-2005-11-02-Progress in Active Learning

Introduction: Several bits of progress have been made since Sanjoy pointed out the significant lack of theoretical understanding of active learning. This is an update on the progress I know of. As a refresher, active learning as meant here is: There is a source of unlabeled data. There is an oracle from which labels can be requested for unlabeled data produced by the source. The goal is to perform well with minimal use of the oracle. Here is what I’ve learned: Sanjoy has developed sufficient and semi-necessary conditions for active learning given the assumptions of IID data and “realizability” (that one of the classifiers is a correct classifier). Nina, Alina, and I developed an algorithm for active learning relying on only the assumption of IID data. A draft is here. Nicolo, Claudio, and Luca showed that it is possible to do active learning in an entirely adversarial setting for linear threshold classifiers here. This was published a year or two ago and I r

3 0.50996071 360 hunch net-2009-06-15-In Active Learning, the question changes

Introduction: A little over 4 years ago, Sanjoy made a post saying roughly “we should study active learning theoretically, because not much is understood”. At the time, we did not understand basic things such as whether or not it was possible to PAC-learn with an active algorithm without making strong assumptions about the noise rate. In other words, the fundamental question was “can we do it?” The nature of the question has fundamentally changed in my mind. The answer to the previous question is “yes”, both information theoretically and computationally, in most places where supervised learning could be applied. In many situations, the question has now changed to: “is it worth it?” Is the programming and computational overhead low enough to make the label cost savings of active learning worthwhile? Currently, there are situations where this question could go either way. Much of the challenge for the future is in figuring out how to make active learning easier or more worthwhile.

4 0.50270158 432 hunch net-2011-04-20-The End of the Beginning of Active Learning

Introduction: This post is by Daniel Hsu and John Langford. In selective sampling style active learning, a learning algorithm chooses which examples to label. We now have an active learning algorithm that is: Efficient in label complexity, unlabeled complexity, and computational complexity. Competitive with supervised learning anywhere that supervised learning works. Compatible with online learning, with any optimization-based learning algorithm, with any loss function, with offline testing, and even with changing learning algorithms. Empirically effective. The basic idea is to combine disagreement region-based sampling with importance weighting: an example is selected to be labeled with probability proportional to how useful it is for distinguishing among near-optimal classifiers, and labeled examples are importance-weighted by the inverse of these probabilities. The combination of these simple ideas removes the sampling bias problem that has plagued many previous he
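The excerpt above describes requesting a label with some probability and importance-weighting labeled examples by the inverse of that probability. Below is a minimal sketch of just that mechanism; the query_probability heuristic is a made-up placeholder, since computing the real query probability from disagreement among near-optimal classifiers is the substance of the actual algorithm and is not shown here:

```python
import numpy as np

# Sketch of selective sampling with inverse-probability importance weighting.

def query_probability(x, w, p_min=0.1):
    # Placeholder heuristic: query more often near the current decision boundary.
    margin = abs(np.dot(w, x))
    return max(p_min, min(1.0, 1.0 / (1.0 + margin)))

def iw_active_learning(stream, oracle, dim, eta=0.1):
    rng = np.random.default_rng(0)
    w = np.zeros(dim)
    queries = 0
    for x in stream:
        p = query_probability(x, w)
        if rng.random() < p:             # label requested with probability p
            y = oracle(x)                # y in {-1, +1}
            queries += 1
            importance = 1.0 / p         # inverse-probability weight removes sampling bias
            if y * np.dot(w, x) < 1:     # importance-weighted hinge-loss update
                w += eta * importance * y * x
    return w, queries

# Toy usage with a linearly separable stream and a noiseless oracle.
rng = np.random.default_rng(1)
stream = [rng.normal(size=2) for _ in range(1000)]
oracle = lambda x: 1.0 if x[0] + x[1] > 0 else -1.0
w, queries = iw_active_learning(stream, oracle, dim=2)
print("weights:", w, "labels requested:", queries)
```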

5 0.35793838 310 hunch net-2008-07-15-Interesting papers at COLT (and a bit of UAI & workshops)

Introduction: Here are a few papers from COLT 2008 that I found interesting. Maria-Florina Balcan, Steve Hanneke, and Jenn Wortman, The True Sample Complexity of Active Learning. This paper shows that in an asymptotic setting, active learning is always better than supervised learning (although the gap may be small). This is evidence that the only thing in the way of universal active learning is us knowing how to do it properly. Nir Ailon and Mehryar Mohri, An Efficient Reduction of Ranking to Classification. This paper shows how to robustly rank n objects with n log(n) classifications using a quicksort based algorithm. The result is applicable to many ranking loss functions and has implications for others. Michael Kearns and Jennifer Wortman, Learning from Collective Behavior. This is about learning in a new model, where the goal is to predict how a collection of interacting agents behave. One claim is that learning in this setting can be reduced to IID lear

6 0.35699296 45 hunch net-2005-03-22-Active learning

7 0.31699556 377 hunch net-2009-11-09-NYAS ML Symposium this year.

8 0.31090221 492 hunch net-2013-12-01-NIPS tutorials and Vowpal Wabbit 7.4

9 0.31024289 278 hunch net-2007-12-17-New Machine Learning mailing list

10 0.30760199 403 hunch net-2010-07-18-ICML & COLT 2010

11 0.30364165 416 hunch net-2010-10-29-To Vidoelecture or not

12 0.29976457 426 hunch net-2011-03-19-The Ideal Large Scale Learning Class

13 0.29468894 354 hunch net-2009-05-17-Server Update

14 0.28927749 143 hunch net-2005-12-27-Automated Labeling

15 0.28529757 279 hunch net-2007-12-19-Cool and interesting things seen at NIPS

16 0.28420407 468 hunch net-2012-06-29-ICML survey and comments

17 0.28308859 473 hunch net-2012-09-29-Vowpal Wabbit, version 7.0

18 0.26117036 12 hunch net-2005-02-03-Learning Theory, by assumption

19 0.25917339 57 hunch net-2005-04-16-Which Assumptions are Reasonable?

20 0.25912556 419 hunch net-2010-12-04-Vowpal Wabbit, version 5.0, and the second heresy


similar blogs computed by lda model

lda for this blog:

topicId topicWeight

[(49, 0.757)]
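The single topic weight above comes from an LDA (latent Dirichlet allocation) model. As a hedged sketch, again using scikit-learn on a made-up toy corpus rather than the actual pipeline:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.metrics.pairwise import cosine_similarity

# Toy stand-in corpus; the real posts and pipeline are not reproduced here.
posts = [
    "Burr Settles wrote a fairly comprehensive survey of active learning.",
    "Interesting papers at NIPS on deep networks and unsupervised learning.",
    "Vowpal Wabbit is a fast open source online learning codebase.",
    "Site tweak: the markdown filter should make commenting easier.",
]

# LDA works on raw term counts and assigns each post a distribution over topics;
# posts are then compared by similarity of their topic-weight vectors.
counts = CountVectorizer(stop_words="english").fit_transform(posts)
topic_weights = LatentDirichletAllocation(n_components=2, random_state=0).fit_transform(counts)

sims = cosine_similarity(topic_weights[0:1], topic_weights).ravel()
for idx in sims.argsort()[::-1]:
    print(idx, round(float(sims[idx]), 3))
```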

similar blogs list:

simIndex simValue blogId blogTitle

same-blog 1 0.96345431 338 hunch net-2009-01-23-An Active Learning Survey

Introduction: Burr Settles wrote a fairly comprehensive survey of active learning. He intends to maintain and update the survey, so send him any suggestions you have.

2 0.81125671 224 hunch net-2006-12-12-Interesting Papers at NIPS 2006

Introduction: Here are some papers that I found surprisingly interesting. Yoshua Bengio, Pascal Lamblin, Dan Popovici, Hugo Larochelle, Greedy Layer-wise Training of Deep Networks. Empirically investigates some of the design choices behind deep belief networks. Long Zhu, Yuanhao Chen, Alan Yuille, Unsupervised Learning of a Probabilistic Grammar for Object Detection and Parsing. An unsupervised method for detecting objects using simple feature filters that works remarkably well on the (supervised) caltech-101 dataset. Shai Ben-David, John Blitzer, Koby Crammer, and Fernando Pereira, Analysis of Representations for Domain Adaptation. This is the first analysis I’ve seen of learning with respect to samples drawn differently from the evaluation distribution which depends on reasonable measurable quantities. All of these papers turn out to have a common theme: the power of unlabeled data to do generically useful things.

3 0.76878291 122 hunch net-2005-10-13-Site tweak

Introduction: Several people have had difficulty with comments, which seem to allow a significantly poorer markup language than posts. The set of allowed html tags has been increased and the markdown filter has been put in place to try to make commenting easier. I’ll put some examples into the comments of this post.

4 0.45400417 365 hunch net-2009-07-31-Vowpal Wabbit Open Source Project

Introduction: Today brings a new release of the Vowpal Wabbit fast online learning software. This time, unlike the previous release, the project itself is going open source, developing via github. For example, the latest and greatest can be downloaded via: git clone git://github.com/JohnLangford/vowpal_wabbit.git If you aren’t familiar with git, it’s a distributed version control system which supports quick and easy branching, as well as reconciliation. This version of the code is confirmed to compile without complaint on at least some flavors of OSX as well as Linux boxes. As much of the point of this project is pushing the limits of fast and effective machine learning, let me mention a few datapoints from my experience. The program can effectively scale up to batch-style training on sparse terafeature (i.e. 10^12 sparse feature) size datasets. The limiting factor is typically i/o. I started using the real datasets from the large-scale learning workshop as a conve

5 0.42207664 37 hunch net-2005-03-08-Fast Physics for Learning

Introduction: While everyone is silently working on ICML submissions, I found this discussion about a fast physics simulator chip interesting from a learning viewpoint. In many cases, learning attempts to predict the outcome of physical processes. Access to a fast simulator for these processes might be quite helpful in predicting the outcome. Bayesian learning in particular may directly benefit while many other algorithms (like support vector machines) might have their speed greatly increased. The biggest drawback is that writing software for these odd architectures is always difficult and time consuming, but a several-orders-of-magnitude speedup might make that worthwhile.

6 0.35567796 23 hunch net-2005-02-19-Loss Functions for Discriminative Training of Energy-Based Models

7 0.34041664 348 hunch net-2009-04-02-Asymmophobia

8 0.16275045 359 hunch net-2009-06-03-Functionally defined Nonlinear Dynamic Models

9 0.13632908 438 hunch net-2011-07-11-Interesting Neural Network Papers at ICML 2011

10 0.12166054 292 hunch net-2008-03-15-COLT Open Problems

11 0.090467222 431 hunch net-2011-04-18-A paper not at Snowbird

12 0.084632605 144 hunch net-2005-12-28-Yet more nips thoughts

13 0.078743652 280 hunch net-2007-12-20-Cool and Interesting things at NIPS, take three

14 0.07250654 426 hunch net-2011-03-19-The Ideal Large Scale Learning Class

15 0.064466588 268 hunch net-2007-10-19-Second Annual Reinforcement Learning Competition

16 0.06393858 493 hunch net-2014-02-16-Metacademy: a package manager for knowledge

17 0.053408574 349 hunch net-2009-04-21-Interesting Presentations at Snowbird

18 0.050967973 444 hunch net-2011-09-07-KDD and MUCMD 2011

19 0.048587825 381 hunch net-2009-12-07-Vowpal Wabbit version 4.0, and a NIPS heresy

20 0.04837257 262 hunch net-2007-09-16-Optimizing Machine Learning Programs