
212 nips-2009-Semi-Supervised Learning in Gigantic Image Collections


Source: pdf

Author: Rob Fergus, Yair Weiss, Antonio Torralba

Abstract: With the advent of the Internet it is now possible to collect hundreds of millions of images. These images come with varying degrees of label information. “Clean labels” can be manually obtained on a small fraction, “noisy labels” may be extracted automatically from surrounding text, while for most images there are no labels at all. Semi-supervised learning is a principled framework for combining these different label sources. However, it scales polynomially with the number of images, making it impractical for use on gigantic collections with hundreds of millions of images and thousands of classes. In this paper we show how to utilize recent results in machine learning to obtain highly efficient approximations for semi-supervised learning that are linear in the number of images. Specifically, we use the convergence of the eigenvectors of the normalized graph Laplacian to eigenfunctions of weighted Laplace-Beltrami operators. Our algorithm enables us to apply semi-supervised learning to a database of 80 million images gathered from the Internet.
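
The method the abstract sketches is graph-Laplacian semi-supervised learning restricted to the smoothest eigenvectors of the normalized graph Laplacian, which the paper then approximates with eigenfunctions of the data density so that the cost grows linearly in the number of images. Below is a minimal sketch of only the reduced-eigenvector formulation (cf. [2, 26, 28]); the function name and the parameters sigma, k, and lam are illustrative assumptions, and the dense O(n^2) affinity construction is not the paper's linear-time eigenfunction approximation, only the objective it approximates.

```python
# Sketch: graph-Laplacian SSL with a reduced eigenvector basis (assumed
# Gaussian affinity; parameter values are illustrative, not the paper's).
import numpy as np

def reduced_laplacian_ssl(X, y, labeled_mask, k=10, sigma=1.0, lam=10.0):
    """X: (n, d) features; y: (n,) labels in {-1, +1}, ignored where unlabeled;
    labeled_mask: (n,) bool. Returns a soft label for every point."""
    # Gaussian affinity W and normalized Laplacian L = I - D^{-1/2} W D^{-1/2}
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    W = np.exp(-sq_dists / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    d_inv_sqrt = 1.0 / np.sqrt(W.sum(axis=1) + 1e-12)
    L = np.eye(len(X)) - d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]

    # The k smallest eigenvectors of L span the smoothest functions on the graph.
    evals, U = np.linalg.eigh(L)
    evals, U = evals[:k], U[:, :k]

    # Restrict f = U @ alpha and minimize
    #   lam * alpha^T diag(evals) alpha  +  sum_{labeled i} (f_i - y_i)^2,
    # which reduces the n x n problem to a k x k linear system.
    m = labeled_mask.astype(float)
    A = lam * np.diag(evals) + U.T @ (m[:, None] * U)
    b = U.T @ (m * y)
    return U @ np.linalg.solve(A, b)
```

In the paper's setting this eigendecomposition itself would be prohibitive; per the abstract, the eigenvectors are replaced by eigenfunctions of weighted Laplace-Beltrami operators estimated from the data density, which is what brings the overall cost down to linear in the number of images.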


reference text

[1] M. Belkin and P. Niyogi. Towards a theoretical foundation for Laplacian-based manifold methods. Journal of Computer and System Sciences, 2007.

[2] M. Belkin, P. Niyogi, and V. Sindhwani. Manifold regularization: A geometric framework for learning from labeled and unlabeled examples. JMLR, 7:2399–2434, 2006.

[3] Y. Bengio, O. Delalleau, N. L. Roux, J.-F. Paiement, P. Vincent, and M. Ouimet. Learning eigenfunctions links spectral embedding and kernel PCA. Neural Computation, 16(10):2197–2219, 2004.

[4] T. Berg and D. Forsyth. Animals on the web. In CVPR, pages 1463–1470, 2006.

[5] O. Chapelle, B. Schölkopf, and A. Zien. Semi-Supervised Learning. MIT Press, 2006.

[6] R. R. Coifman, S. Lafon, A. Lee, M. Maggioni, B. Nadler, F. Warner, and S. Zucker. Geometric diffusions as a tool for harmonic analysis and structure definition of data, Part I: Diffusion maps. PNAS, 102(21):7426–7431, 2005.

[7] R. Datta, D. Joshi, J. Li, and J. Z. Wang. Image retrieval: Ideas, influences, and trends of the new age. ACM Computing Surveys, 2008.

[8] R. Fergus, L. Fei-Fei, P. Perona, and A. Zisserman. Learning object categories from Google's image search. In ICCV, volume 2, pages 1816–1823, Oct. 2005.

[9] J. Garcke and M. Griebel. Semi-supervised learning with sparse grids. In ICML workshop on learning with partially classified training data, 2005.

[10] A. Kapoor, K. Grauman, R. Urtasun, and T. Darrell. Active learning with Gaussian processes for object categorization. In CVPR, 2007.

[11] A. Krizhevsky and G. E. Hinton. Learning multiple layers of features from tiny images. Technical report, Computer Science Department, University of Toronto, 2009.

[12] S. Kumar, M. Mohri, and A. Talwalkar. Sampling techniques for the Nyström method. In AISTATS, 2009.

[13] L. J. Li, G. Wang, and L. Fei-Fei. Optimol: automatic object picture collection via incremental model learning. In CVPR, 2007.

[14] B. Nadler, S. Lafon, R. R. Coifman, and I. G. Kevrekidis. Diffusion maps, spectral clustering and reaction coordinates of dynamical systems. Applied and Computational Harmonic Analysis, 21:113–127, 2006.

[15] A. Oliva and A. Torralba. Modeling the shape of the scene: a holistic representation of the spatial envelope. IJCV, 42:145–175, 2001.

[16] B. C. Russell, A. Torralba, K. P. Murphy, and W. T. Freeman. LabelMe: a database and web-based tool for image annotation. IJCV, 77(1):157–173, 2008.

[17] B. Schölkopf and A. Smola. Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, 2002.

[18] A. Talwalkar, S. Kumar, and H. Rowley. Large-scale manifold learning. In CVPR, 2008.

[19] A. Torralba, R. Fergus, and W. T. Freeman. 80 million tiny images: a large database for nonparametric object and scene recognition. IEEE PAMI, 30(11):1958–1970, November 2008.

[20] I. Tsang and J. Kwok. Large-scale sparsified manifold regularization. In NIPS, 2006.

[21] L. von Ahn. The ESP game, 2006.

[22] S. Vijayanarasimhan and K. Grauman. Keywords to visual categories: Multiple-instance learning for weakly supervised object categorization. In CVPR, 2008.

[23] Y. Weiss, A. Torralba, and R. Fergus. Spectral hashing. In NIPS, 2008.

[24] B. Yao, X. Yang, and S. C. Zhu. Introduction to a large scale general purpose ground truth dataset: methodology, annotation tool, and benchmarks. In EMMCVPR, 2007.

[25] K. Yu, S. Yu, and V. Tresp. Blockwise supervised inference on large graphs. In ICML workshop on learning with partially classified training data, 2005.

[26] D. Zhou, O. Bousquet, T. N. Lal, J. Weston, and B. Schölkopf. Learning with local and global consistency. In NIPS, 2004.

[27] X. Zhu. Semi-supervised learning literature survey. Technical Report 1530, University of Wisconsin Madison, 2008.

[28] X. Zhu, Z. Ghahramani, and J. Lafferty. Semi-supervised learning using Gaussian fields and harmonic functions. In ICML, pages 912–919, 2003.

[29] X. Zhu and J. Lafferty. Harmonic mixtures: combining mixture models and graph-based methods for inductive and scalable semi-supervised learning. In ICML, 2005.