nips nips2013 nips2013-149 nips2013-149-reference knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Wenjie Luo, Alex Schwing, Raquel Urtasun
Abstract: In this paper we present active learning algorithms in the context of structured prediction problems. To reduce the amount of labeling necessary to learn good models, our algorithms operate on weakly labeled data and query additional labels based on the entropies of local marginals, which are a good surrogate for uncertainty. We demonstrate the effectiveness of our approach on the task of 3D layout prediction from single images, and show that good models are learned when labeling only a handful of random variables. In particular, the same performance as using the full training set can be obtained while labeling only ∼10% of the random variables.
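To make the selection criterion in the abstract concrete, the sketch below (not the authors' implementation) shows how entropies of local marginals could drive the query step: given approximate marginals for each unlabeled random variable, the variables with the highest marginal entropy are queried first. The names marginal_entropy and select_queries, and the randomly generated beliefs, are illustrative placeholders; in the paper the marginals would come from approximate inference in the structured model.

# Minimal sketch of entropy-based query selection for active learning
# over the random variables of a structured model. The beliefs used here
# are random placeholders standing in for approximate local marginals.

import numpy as np

def marginal_entropy(p, eps=1e-12):
    """Shannon entropy of a discrete marginal distribution p."""
    p = np.clip(p, eps, 1.0)
    return -np.sum(p * np.log(p))

def select_queries(marginals, budget):
    """Indices of the `budget` variables with the highest marginal entropy."""
    entropies = np.array([marginal_entropy(p) for p in marginals])
    return np.argsort(-entropies)[:budget]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Placeholder: 100 random variables, each with a 4-state local marginal.
    beliefs = rng.dirichlet(np.ones(4), size=100)
    print("query these variables next:", select_queries(beliefs, budget=5))

In this sketch the annotator would label the returned variables, the model would be re-trained on the enlarged (still weakly labeled) training set, and the marginals re-estimated before the next round of queries.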
[1] P. Auer, N. Cesa-Bianchi, and P. Fischer. Finite-time analysis of the multi-armed bandit problem. Machine Learning, 2002.
[2] P. Auer, N. Cesa-Bianchi, Y. Freund, and R. Schapire. The non-stochastic multi-armed bandit problem. SIAM J. on Computing, 2002.
[3] N. Cesa-Bianchi and G. Lugosi. Prediction, Learning and Games. Cambridge University Press, 2006.
[4] D. Cohn, L. Atlas, and R. Ladner. Improving generalization with active learning. Machine Learning, 1994.
[5] D. Cohn, L. Atlas, R. Ladner, M. El-Sharkawi, R. Marks II, M. Aggoune, and D. Park. Training connectionist networks with queries and selective sampling. In Proc. NIPS, 1990.
[6] D. Cohn, Z. Ghahramani, and M. I. Jordan. Active learning with statistical models. J. of Artificial Intelligence Research, 1996.
[7] A. Culotta and A. McCallum. Reducing labeling effort for structured prediction tasks. In Proc. AAAI, 2005.
[8] A. Farhangfar, R. Greiner, and C. Szepesvari. Learning to Segment from a Few Well-Selected Training Images. In Proc. ICML, 2009.
[9] A. Fathi, M. F. Balcan, X. Ren, and J. M. Rehg. Combining Self Training and Active Learning for Video Segmentation. In Proc. BMVC, 2011.
[10] T. Hazan and A. Shashua. Norm-Product Belief Propagation: Primal-Dual Message-Passing for LP-Relaxation and Approximate-Inference. Trans. Information Theory, 2010.
[11] T. Hazan and R. Urtasun. A Primal-Dual Message-Passing Algorithm for Approximated Large Scale Structured Prediction. In Proc. NIPS, 2010.
[12] V. Hedau, D. Hoiem, and D. A. Forsyth. Recovering the Spatial Layout of Cluttered Rooms. In Proc. ICCV, 2009.
[13] D. Hoiem, A. A. Efros, and M. Hebert. Recovering Surface Layout from an Image. IJCV, 2007.
[14] A. Kapoor, K. Grauman, R. Urtasun, and T. Darrell. Active Learning with Gaussian Processes for Object Categorization. In Proc. ICCV, 2007.
[15] P. Kohli and P. Torr. Measuring Uncertainty in Graph Cut Solutions - Efficiently Computing Min-marginal Energies using Dynamic Graph Cuts. In Proc. ECCV, 2006.
[16] J. Lafferty, A. McCallum, and F. Pereira. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proc. ICML, 2001.
[17] D. C. Lee, A. Gupta, M. Hebert, and T. Kanade. Estimating Spatial Layout of Rooms using Volumetric Reasoning about Objects and Surfaces. In Proc. NIPS, 2010.
[18] D. C. Lee, M. Hebert, and T. Kanade. Geometric Reasoning for Single Image Structure Recovery. In Proc. CVPR, 2009.
[19] D. Lewis and J. Catlett. Heterogeneous uncertainty sampling for supervised learning. In Proc. ICML, 1994.
[20] D. Lewis and W. Gale. A sequential algorithm for training text classifiers. In Proc. Research and Development in Info. Retrieval, 1994.
[21] T. Mensink, J. Verbeek, and G. Csurka. Learning Structured Prediction Models for Interactive Image Labeling. In Proc. CVPR, 2011.
[22] D. Roth and K. Small. Margin-based Active Learning for Structured Output Spaces. In Proc. ECML, 2006.
[23] C. Rother, V. Kolmogorov, and A. Blake. GrabCut: Interactive Foreground Extraction using Iterated Graph Cuts. In Proc. SIGGRAPH, 2004.
[24] N. Roy and A. McCallum. Toward optimal active learning through sampling estimation of error reduction. In Proc. ICML, 2001.
[25] T. Scheffer, C. Decomain, and S. Wrobel. Active hidden Markov models for information extraction. In Proc. Int’l Conf. Advances in Intelligent Data Analysis, 2001.
[26] A. G. Schwing, T. Hazan, M. Pollefeys, and R. Urtasun. Distributed Message Passing for Large Scale Graphical Models. In Proc. CVPR, 2011.
[27] A. G. Schwing, T. Hazan, M. Pollefeys, and R. Urtasun. Efficient Structured Prediction for 3D Indoor Scene Understanding. In Proc. CVPR, 2012.
[28] A. G. Schwing, T. Hazan, M. Pollefeys, and R. Urtasun. Efficient Structured Prediction with Latent Variables for General Graphical Models. In Proc. ICML, 2012.
[29] B. Settles, M. Craven, and S. Ray. Multiple-instance active learning. In Proc. NIPS, 2008.
[30] P. Shivaswamy and T. Joachims. Online Structured Prediction via Coactive Learning. In Proc. ICML, 2012.
[31] B. Siddiquie and A. Gupta. Beyond Active Noun Tagging: Modeling Contextual Interactions for Multi-Class Active Learning. In Proc. CVPR, 2010.
[32] S. Tong and D. Koller. Support vector machine active learning with applications to text classification. JMLR, 2001.
[33] I. Tsochantaridis, T. Joachims, T. Hofmann, and Y. Altun. Large Margin Methods for Structured and Interdependent Output Variables. JMLR, 2005.
[34] A. Vezhnevets, V. Ferrari, and J. M. Buhmann. Active Learning for Semantic Segmentation with Expected Change. In Proc. CVPR, 2012.
[35] S. Vijayanarasimhan and K. Grauman. Cost-Sensitive Active Visual Category Learning. IJCV, 2010.
[36] S. Vijayanarasimhan and K. Grauman. Active Frame Selection for Label Propagation in Videos. In Proc. ECCV, 2012.
[37] A. L. Yuille and A. Rangarajan. The Concave-Convex Procedure. Neural Computation, 2003.