nips2008-122
Authors: Haixuan Yang, Irwin King, Michael Lyu
Abstract: Regularized Least Squares (RLS) algorithms can avoid over-fitting and express their solutions as kernel expansions. However, we observe that current RLS algorithms cannot satisfactorily explain even why a constant function is penalized. Based on the intuition that a good kernel-based inductive function should be consistent with both the data and the kernel, we propose a novel learning scheme. Its advantages lie in its corresponding Representer Theorem, its clear interpretation of which functions should not be penalized, and the promising accuracy improvements it shows in a number of experiments. Furthermore, we provide a detailed technical description of heat kernels, which serves as an example of how to apply similar techniques to other kernels. Our work is a preliminary step in a new direction: exploring the varying consistency between inductive functions and kernels under various distributions.
[1] Mikhail Belkin, Partha Niyogi, and Vikas Sindhwani. Manifold regularization: A geometric framework for learning from labeled and unlabeled examples. Journal of Machine Learning Research, 7:2399–2434, 2006.
[2] F. Cucker and S. Smale. On the mathematical foundations of learning. Bulletin (New Series) of the American Mathematical Society, 39(1):1–49, 2002.
[3] Lokenath Debnath and Piotr Mikusinski. Introduction to Hilbert Spaces with Applications. Academic Press, San Diego, second edition, 1999.
[4] T. Evgeniou, M. Pontil, and T. Poggio. Regularization networks and support vector machines. Advances in Computational Mathematics, 13:1–50, 2000.
[5] T. Hastie and C. Loader. Local regression: Automatic kernel carpentry. Statistical Science, 8(1):120–129, 1993.
[6] John Lafferty and Guy Lebanon. Diffusion kernels on statistical manifolds. Journal of Machine Learning Research, 6:129–163, 2005.
[7] Wenye Li, Kin-Hong Lee, and Kwong-Sak Leung. Generalized regularized least-squares learning with predefined features in a Hilbert space. In NIPS, 2006.
[8] E. A. Nadaraya. On estimating regression. Theory of Probability and Its Applications, 9(1):141–142, 1964.
[9] R. M. Rifkin and R. A. Lippert. Notes on regularized least-squares. Technical Report 2007-019, Massachusetts Institute of Technology, 2007.
[10] S. Rosenberg. The Laplacian on a Riemannian Manifold. Cambridge University Press, 1997.
[11] Bernhard Schölkopf, Ralf Herbrich, and Alex J. Smola. A generalized representer theorem. In COLT, 2001.
[12] I. Schönberg. Spline functions and the problem of graduation. Proc. Nat. Acad. Sci. USA, 52:947–950, 1964.
[13] A. N. Tikhonov and V. Y. Arsenin. Solutions of Ill-posed Problems. W. H. Winston, 1977.
[14] G. S. Watson. Smooth regression analysis. Sankhyā, Series A, 26:359–372, 1964.