
127 nips-2004-Neighbourhood Components Analysis


Source: pdf

Author: Jacob Goldberger, Geoffrey E. Hinton, Sam T. Roweis, Ruslan Salakhutdinov

Abstract: In this paper we propose a novel method for learning a Mahalanobis distance measure to be used in the KNN classification algorithm. The algorithm directly maximizes a stochastic variant of the leave-one-out KNN score on the training set. It can also learn a low-dimensional linear embedding of labeled data that can be used for data visualization and fast classification. Unlike other methods, our classification model is non-parametric, making no assumptions about the shape of the class distributions or the boundaries between them. The performance of the method is demonstrated on several data sets, both for metric learning and linear dimensionality reduction.
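Since the abstract describes the method only in prose, a minimal NumPy sketch of the objective it refers to may help: the stochastic leave-one-out KNN score, in which each point picks a neighbour with probability given by a softmax over negative squared distances in the learned linear embedding. This is an illustrative reading of the abstract, not the authors' implementation; the function name nca_objective and the synthetic data below are assumptions.

```python
import numpy as np

def nca_objective(A, X, y):
    """Stochastic leave-one-out KNN score f(A) (sketch, assumed form).

    A: (d_out, d_in) linear map; the learned Mahalanobis metric is Q = A.T @ A.
    X: (n, d_in) training points.
    y: (n,) integer class labels.
    Returns sum_i p_i, the expected number of correctly classified points.
    """
    Z = X @ A.T                                  # embed points: (n, d_out)
    diff = Z[:, None, :] - Z[None, :, :]         # pairwise differences
    sq = (diff ** 2).sum(axis=-1)                # squared distances ||A x_i - A x_j||^2
    np.fill_diagonal(sq, np.inf)                 # exclude self-matches (leave-one-out)
    logits = -sq
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)            # p_ij: prob. that j is i's stochastic neighbour
    same_class = (y[:, None] == y[None, :])      # mask of same-label pairs
    return (p * same_class).sum()                # f(A) = sum_i p_i

# Illustrative use on synthetic data (shapes and values are assumptions):
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = rng.integers(0, 3, size=100)
A = 0.1 * rng.normal(size=(2, 5))   # a 2-D embedding, as in the visualization use-case
print(nca_objective(A, X, y))       # score in [0, 100); gradient ascent on A maximizes it
```

In the paper this score is maximized by gradient methods over the entries of A; in a sketch like this, any generic optimizer applied to the flattened matrix would serve the same role.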


Reference text

[1] A. Bar-Hillel, T. Hertz, N. Shental, and D. Weinshall. Learning distance functions using equivalence relations. In International Conference on Machine Learning, 2003.

[2] L. Chen, H. Liao, M. Ko, J. Lin, and G. Yu. A new LDA-based face recognition system which can solve the small sample size problem. Pattern Recognition, 33(10):1713–1726, 2000.

[3] R. A. Fisher. The use of multiple measurements in taxonomic problems. Annals of Eugenics, 7:179–188, 1936.

[4] J. Friedman, J. Bentley, and R. Finkel. An algorithm for finding best matches in logarithmic expected time. ACM Transactions on Mathematical Software, 3(3):209–226, 1977.

[5] Y. Koren and L. Carmel. Robust linear dimensionality reduction. IEEE Transactions on Visualization and Computer Graphics, 10(4):459–470, 2004.

[6] D. Lowe. Similarity metric learning for a variable kernel classifier. Neural Computation, 7(1):72–85, 1995.

[7] E. P. Xing, A. Y. Ng, M. I. Jordan, and S. Russell. Distance metric learning, with application to clustering with side-information. In Advances in Neural Information Processing Systems, 2003.