
Sufficient Conditions for Agnostic Active Learnable


Source: pdf

Author: Liwei Wang

Abstract: We study pool-based active learning in the presence of noise, i.e., the agnostic setting. Previous work has shown that the effectiveness of agnostic active learning depends on the learning problem and the hypothesis space. Although there are many cases in which active learning is very useful, it is also easy to construct examples on which no active learning algorithm has any advantage over passive learning. In this paper, we propose intuitively reasonable sufficient conditions under which an agnostic active learning algorithm is strictly superior to passive supervised learning. We show that, under a certain noise condition, if the Bayes classification boundary and the underlying distribution are smooth to a finite order, active learning achieves a polynomial improvement in the label complexity; if the boundary and the distribution are infinitely smooth, the improvement is exponential.
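As a rough illustration of the two regimes (our notation, not the paper's theorem statement; the exponents a and b below are hypothetical), let n(\epsilon) denote the number of label requests needed to guarantee excess error at most \epsilon:

    n_{\text{passive}}(\epsilon) = \Theta(\epsilon^{-a})          \quad \text{for some } a > 0,
    n_{\text{active}}(\epsilon)  = O(\epsilon^{-(a-b)})           \quad \text{for some } 0 < b < a \quad \text{(polynomial improvement)},
    n_{\text{active}}(\epsilon)  = O(\mathrm{polylog}(1/\epsilon)) \quad \text{(exponential improvement)}.

Here "exponential improvement" is used in the standard sense of the active learning literature: the label complexity drops from polynomial in 1/\epsilon to polylogarithmic in 1/\epsilon.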


References

[Ang88] D. Angluin. Queries and concept learning. Machine Learning, 2:319–342, 1988.
[BAL06] M.-F. Balcan, A. Beygelzimer, and J. Langford. Agnostic active learning. In 23rd International Conference on Machine Learning, 2006.
[CN07] R. Castro and R. Nowak. Minimax bounds for active learning. In 20th Annual Conference on Learning Theory, 2007.
[Das05] S. Dasgupta. Coarse sample complexity bounds for active learning. In Advances in Neural Information Processing Systems, 2005.
[DHM07] S. Dasgupta, D. Hsu, and C. Monteleoni. A general agnostic active learning algorithm. In Advances in Neural Information Processing Systems, 2007.
[Dud99] R. M. Dudley. Uniform Central Limit Theorems. Cambridge University Press, 1999.
[Fri09] E. Friedman. Active learning for smooth problems. In 22nd Annual Conference on Learning Theory, 2009.
[GKW03] E. Giné, V. Koltchinskii, and J. Wellner. Ratio limit theorems for empirical processes. Stochastic Inequalities and Applications, 56:249–278, 2003.
[Han07] S. Hanneke. A bound on the label complexity of agnostic active learning. In 24th International Conference on Machine Learning, 2007.
[Han09] S. Hanneke. Adaptive rates of convergence in active learning. In 22nd Annual Conference on Learning Theory, 2009.
[Kaa06] M. Kääriäinen. Active learning in the non-realizable case. In 17th International Conference on Algorithmic Learning Theory, 2006.
[Tsy04] A. Tsybakov. Optimal aggregation of classifiers in statistical learning. The Annals of Statistics, 32:135–166, 2004.
[vdG00] S. van de Geer. Applications of Empirical Process Theory. Cambridge University Press, 2000.
[vdVW96] A. van der Vaart and J. Wellner. Weak Convergence and Empirical Processes: With Applications to Statistics. Springer-Verlag, 1996.