NIPS 2009, Paper 169
Source: pdf
Authors: Kai Yu, Tong Zhang, Yihong Gong
Abstract: This paper introduces a new method for semi-supervised learning on high-dimensional nonlinear manifolds, consisting of an unsupervised basis-learning phase followed by a supervised function-learning phase. The learned bases provide a set of anchor points that form a local coordinate system, such that each data point x on the manifold can be locally approximated by a linear combination of its nearby anchor points; the linear weights become its local coordinate coding. We show that a high-dimensional nonlinear function can be approximated by a global linear function with respect to this coding scheme, and that the approximation quality is ensured by the locality of the coding. The method turns a difficult nonlinear learning problem into a simple global linear learning problem, which overcomes some drawbacks of traditional local learning methods.
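The coding step described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's exact algorithm: it assumes anchor points have already been learned (e.g. by k-means or the paper's basis-learning phase), and it computes sum-to-one reconstruction weights over the nearest anchors via the standard LLE-style closed form; the function name and the choice of solver are hypothetical.

```python
import numpy as np

def local_coordinate_coding(x, anchors, num_neighbors=3):
    """Encode x as affine weights over its nearest anchor points.

    Solves  min_w ||x - sum_j w_j c_j||^2  s.t.  sum_j w_j = 1
    over the num_neighbors anchors c_j closest to x, so that x is
    locally reconstructed from nearby anchors (a sketch of the
    local coordinate coding idea, not the paper's exact solver).
    """
    # Pick the nearest anchors to x.
    dists = np.linalg.norm(anchors - x, axis=1)
    idx = np.argsort(dists)[:num_neighbors]
    C = anchors[idx]                      # shape (k, d)

    # Shifted Gram matrix G_jk = (c_j - x) . (c_k - x);
    # solving G w = 1 and normalizing enforces sum(w) = 1.
    G = (C - x) @ (C - x).T
    G = G + 1e-8 * (np.trace(G) + 1.0) * np.eye(len(idx))  # regularize
    w = np.linalg.solve(G, np.ones(len(idx)))
    w = w / w.sum()                       # sum-to-one constraint

    # Embed the local weights into a sparse code over all anchors.
    code = np.zeros(len(anchors))
    code[idx] = w
    return code
```

Once every point is encoded this way, the supervised phase reduces to fitting any global linear model (e.g. a linear SVM or ridge regression) on the codes, which is the "simple global linear learning problem" the abstract refers to.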
[1] Mikhail Belkin and Partha Niyogi. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation, 15:1373 – 1396, 2003.
[2] Leon Bottou and Vladimir Vapnik. Local learning algorithms. Neural Computation, 4:888 – 900, 1992.
[3] Robert M. Gray and David L. Neuhoff. Quantization. IEEE Transactions on Information Theory, pages 2325 – 2383, 1998.
[4] Trevor Hastie and Clive Loader. Local regression: Automatic kernel carpentry. Statistical Science, 8:139 – 143, 1993.
[5] Geoffrey E. Hinton and Ruslan R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313:504 – 507, 2006.
[6] Honglak Lee, Alexis Battle, Rajat Raina, and Andrew Y. Ng. Efficient sparse coding algorithms. Advances in Neural Information Processing Systems (NIPS) 19, 2007.
[7] Rajat Raina, Alexis Battle, Honglak Lee, Benjamin Packer, and Andrew Y. Ng. Self-taught learning: Transfer learning from unlabeled data. International Conference on Machine Learning, 2007.
[8] Sam Roweis and Lawrence Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290:2323 – 2326, 2000.
[9] Joshua B. Tenenbaum, Vin De Silva, and John C. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290:2319 – 2323, 2000.
[10] Alon Zakai and Ya’acov Ritov. Consistency and localizability. Journal of Machine Learning Research, 10:827 – 856, 2009.