
248 nips-2011-Semi-supervised Regression via Parallel Field Regularization


Source: pdf

Author: Binbin Lin, Chiyuan Zhang, Xiaofei He

Abstract: This paper studies the problem of semi-supervised learning from the vector field perspective. Much of the existing work uses the graph Laplacian to ensure the smoothness of the prediction function on the data manifold. However, beyond smoothness, recent theoretical work suggests that we should ensure second order smoothness to achieve faster rates of convergence for semi-supervised regression problems. To achieve this goal, we show that second order smoothness measures the linearity of the function, and that the gradient field of a linear function must be a parallel vector field. Consequently, we propose to find a function that minimizes the empirical error while requiring its gradient field to be as parallel as possible. We give a continuous objective function on the manifold and discuss how to discretize it using random points. The discretized optimization problem turns out to be a sparse linear system, which can be solved very efficiently. The experimental results demonstrate the effectiveness of our proposed approach.
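To make the computational shape of the abstract's final step concrete, the sketch below assembles and solves a sparse linear system for graph-based semi-supervised regression. It is a minimal illustration, not the paper's parallel field discretization: it substitutes the simpler first-order graph Laplacian penalty for the second-order parallel field regularizer, which additionally couples an estimated gradient field at every sample point. The function name, the kNN graph construction, and the weight lam are illustrative assumptions, not from the paper.

import numpy as np
import scipy.sparse as sp
from scipy.sparse.csgraph import laplacian
from scipy.sparse.linalg import spsolve
from sklearn.neighbors import kneighbors_graph

def laplacian_regularized_regression(X, y_labeled, labeled_idx, k=10, lam=1e-2):
    # Sketch only: solves (J + lam * L) f = J y, where J is the diagonal
    # indicator of labeled points and L = D - W is the unnormalized graph
    # Laplacian built from a kNN graph over all n points.
    n = X.shape[0]
    W = kneighbors_graph(X, k, mode='connectivity', include_self=False)
    W = 0.5 * (W + W.T)               # symmetrize the kNN adjacency
    L = laplacian(W)                  # L = D - W, stays sparse
    j = np.zeros(n)
    j[labeled_idx] = 1.0
    J = sp.diags(j)                   # selects the labeled points
    y = np.zeros(n)
    y[labeled_idx] = y_labeled        # targets padded with zeros at unlabeled points
    # One sparse linear solve yields predictions at all n points.
    return spsolve((J + lam * L).tocsr(), J @ y)

In the actual method the unknowns are the function values together with a vector field at each point, but by the abstract's own account the discretized problem still reduces to a single sparse linear system of this kind.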


reference text

[1] J. Tenenbaum, V. de Silva, and J. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290(5500):2319–2323, 2000.

[2] D. L. Donoho and C. E. Grimes. Hessian eigenmaps: Locally linear embedding techniques for high-dimensional data. Proceedings of the National Academy of Sciences of the United States of America, 100(10):5591–5596, 2003.

[3] S. Roweis and L. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290(5500):2323–2326, 2000.

[4] M. Belkin and P. Niyogi. Laplacian eigenmaps and spectral techniques for embedding and clustering. In Advances in Neural Information Processing Systems 14, pages 585–591, 2001.

[5] F. R. K. Chung. Spectral Graph Theory, volume 92 of Regional Conference Series in Mathematics. AMS, 1997.

[6] X. Zhu and J. Lafferty. Semi-supervised learning using Gaussian fields and harmonic functions. In Proceedings of the 20th International Conference on Machine Learning, 2003.

[7] D. Zhou, O. Bousquet, T. N. Lal, J. Weston, and B. Schölkopf. Learning with local and global consistency. In Advances in Neural Information Processing Systems 16, 2003.

[8] M. Belkin, I. Matveeva, and P. Niyogi. Regularization and semi-supervised learning on large graphs. In Conference on Learning Theory, pages 624–638, 2004.

[9] J. Lafferty and L. Wasserman. Statistical analysis of semi-supervised regression. In Advances in Neural Information Processing Systems 20, pages 801–808, 2007.

[10] P. Petersen. Riemannian Geometry. Springer, New York, 1998.

[11] G. H. Golub and C. F. Van Loan. Matrix Computations. Johns Hopkins University Press, 3rd edition, 1996.

[12] B. Chow, P. Lu, and L. Ni. Hamilton’s Ricci Flow. AMS, Providence, Rhode Island, 2006.

[13] M. Belkin and P. Niyogi. Towards a theoretical foundation for Laplacian-based manifold methods. In Conference on Learning Theory, pages 486–500, 2005.

[14] M. Hein, J.-Y. Audibert, and U. von Luxburg. From graphs to manifolds - weak and strong pointwise consistency of graph Laplacians. In Conference on Learning Theory, pages 470–485, 2005.

[15] K. I. Kim, F. Steinke, and M. Hein. Semi-supervised regression using Hessian energy with an application to semi-supervised dimensionality reduction. In Advances in Neural Information Processing Systems 22, pages 979–987, 2009.