
NIPS 2009, Paper 91: Fast, smooth and adaptive regression in metric spaces


Source: pdf

Author: Samory Kpotufe

Abstract: It was recently shown that certain nonparametric regressors can escape the curse of dimensionality when the intrinsic dimension of data is low ([1, 2]). We prove some stronger results in more general settings. In particular, we consider a regressor which, by combining aspects of both tree-based regression and kernel regression, adapts to intrinsic dimension, operates on general metrics, yields a smooth function, and evaluates in time O(log n). We derive a tight convergence rate of the form n^{-2/(2+d)}, where d is the Assouad dimension of the input space.
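For intuition only, here is a minimal sketch in Python of plain Nadaraya-Watson kernel regression over an arbitrary metric, the baseline the abstract builds on. This is not the paper's estimator: the paper combines a tree partition with kernel weights to reach O(log n) evaluation, whereas the sketch below evaluates in O(n) per query. The names (kernel_regress, metric, bandwidth h) are illustrative assumptions, not notation from the paper.

    import math

    def kernel_regress(query, data, labels, metric, h):
        # Nadaraya-Watson estimate: weight each training point by a
        # Gaussian kernel applied to its metric distance from the query.
        weights = [math.exp(-(metric(x, query) / h) ** 2) for x in data]
        total = sum(weights)
        if total == 0.0:
            # No training point carries weight; fall back to the global mean.
            return sum(labels) / len(labels)
        return sum(w * y for w, y in zip(weights, labels)) / total

    # Toy usage on the real line with d(x, y) = |x - y|.
    data = [0.0, 0.5, 1.0, 1.5, 2.0]
    labels = [x ** 2 for x in data]
    print(kernel_regress(0.75, data, labels, lambda x, y: abs(x - y), h=0.5))

Because the estimate depends only on pairwise distances, the same code runs unchanged on any metric space; the paper's point is that the achievable rate n^{-2/(2+d)} is governed by the intrinsic (Assouad) dimension d of that space rather than any ambient dimension.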


References

[1] P. Bickel and B. Li. Local polynomial regression on unknown manifolds. Technical report, Department of Statistics, UC Berkeley, 2006.

[2] S. Kpotufe. Escaping the curse of dimensionality with a tree-based regressor. COLT, 2009.

[3] C. J. Stone. Optimal rates of convergence for nonparametric estimators. Ann. Statist., 8:1348–1360, 1980.

[4] C. J. Stone. Optimal global rates of convergence for nonparametric regression. Ann. Statist., 10:1040–1053, 1982.

[5] S. Roweis and L. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290:2323–2326, 2000.

[6] M. Belkin and P. Niyogi. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation, 15:1373–1396, 2003.

[7] J. B. Tenenbaum, V. de Silva, and J. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290:2319–2323, 2000.

[8] S. Dasgupta and Y. Freund. Random projection trees and low dimensional manifolds. STOC, 2008.

[9] K. Clarkson. Nearest-neighbor searching and metric space dimensions. Nearest-Neighbor Methods for Learning and Vision: Theory and Practice, 2005.

[10] S. Kulkarni and S. Posner. Rates of convergence of nearest neighbor estimation under arbitrary sampling. IEEE Transactions on Information Theory, 41, 1995.

[11] S. Schaal and C. Atkeson. Robot juggling: an implementation of memory-based learning. IEEE Control Systems Magazine, 1994.

[12] L. Gyorfi, M. Kohler, A. Krzyzak, and H. Walk. A Distribution-Free Theory of Nonparametric Regression. Springer, New York, NY, 2002.

[13] D. Lee and A. Gray. Faster Gaussian summation: theory and experiment. UAI, 2006.

[14] D. Lee and A. Gray. Fast high-dimensional kernel summations using the Monte Carlo multipole method. NIPS, 2008.

[15] C. Atkeson, A. Moore, and S. Schaal. Locally weighted learning. Artificial Intelligence Review, 11:11–73, 1997.

[16] A. Beygelzimer, S. Kakade, and J. Langford. Cover trees for nearest neighbor. ICML, 2006.

[17] R. Krauthgamer and J. Lee. Navigating nets: simple algorithms for proximity search. SODA, 2004.