nips2004-125 — reference knowledge graph
Source: pdf
Author: Roland Memisevic, Geoffrey E. Hinton
Abstract: We describe a way of using multiple different types of similarity relationship to learn a low-dimensional embedding of a dataset. Our method chooses different, possibly overlapping representations of similarity by individually reweighting the dimensions of a common underlying latent space. When applied to a single similarity relation that is based on Euclidean distances between the input data points, the method reduces to simple dimensionality reduction. If additional information is available about the dataset or about subsets of it, we can use this information to clean up or otherwise improve the embedding. We demonstrate the potential usefulness of this form of semi-supervised dimensionality reduction on some simple examples.
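To make the abstract's mechanism concrete, below is a minimal sketch (not the authors' code) of the core idea: several similarity relations share one latent space Y, and each relation sees Y through its own nonnegative diagonal reweighting of the latent dimensions. The sketch assumes an SNE-style KL objective per relation (in the spirit of reference [3]); the toy data, the side-information relation built from class labels, the step sizes, and the nonnegativity clipping are all illustrative assumptions rather than the paper's exact formulation.

import numpy as np

rng = np.random.default_rng(0)

def cond_probs(dist2):
    # Row-wise similarities p(j|i) proportional to exp(-dist2_ij), self excluded.
    d = dist2.copy()
    np.fill_diagonal(d, np.inf)                       # no self-similarity
    e = np.exp(-(d - d.min(axis=1, keepdims=True)))   # stabilised row softmax
    return e / e.sum(axis=1, keepdims=True)

def weighted_dist2(Y, r):
    # Pairwise squared distances under the diagonal metric r: sum_k r_k (y_ik - y_jk)^2.
    diff2 = (Y[:, None, :] - Y[None, :, :]) ** 2      # shape (N, N, d)
    return diff2 @ r

# Toy data: two 5-D Gaussian clusters, with cluster labels as side information.
N, d_latent = 60, 2
X = np.vstack([rng.normal(0, 1, (N // 2, 5)),
               rng.normal(3, 1, (N // 2, 5))])
labels = np.repeat([0, 1], N // 2)

# Relation 1: Euclidean similarities of the raw inputs.
dx2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
P1 = cond_probs(dx2 / dx2.mean())

# Relation 2: side information -- uniform similarity within a class.
same = (labels[:, None] == labels[None, :]).astype(float)
np.fill_diagonal(same, 0.0)
P2 = same / same.sum(axis=1, keepdims=True)

Ps = [P1, P2]
Y = 1e-2 * rng.normal(size=(N, d_latent))             # shared latent space
R = [np.ones(d_latent) for _ in Ps]                   # per-relation diagonal weights

lr_y, lr_r = 0.2, 0.01                                # illustrative step sizes
for step in range(500):
    gY = np.zeros_like(Y)
    for c, P in enumerate(Ps):
        Q = cond_probs(weighted_dist2(Y, R[c]))
        M = P - Q                                     # dKL/d(dist2) per ordered pair
        S = M + M.T
        # Gradient of the KL objective w.r.t. Y under the weighted metric.
        gY += 2 * R[c] * (S.sum(1)[:, None] * Y - S @ Y)
        diff2 = (Y[:, None, :] - Y[None, :, :]) ** 2
        gR = np.einsum('ij,ijk->k', M, diff2)         # gradient w.r.t. the weights
        R[c] = np.maximum(R[c] - lr_r * gR, 0.0)      # keep weights nonnegative
    Y -= lr_y * gY

print("learned per-relation weights:", [np.round(r, 3) for r in R])

With a single Euclidean relation and fixed unit weights, the loop reduces to plain SNE-style dimensionality reduction, matching the special case the abstract mentions; with the label relation added, each relation can specialise to a subset of the shared latent dimensions through its learned weights.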
[1] Joshua B. Tenenbaum, Vin de Silva, and John C. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290(5500):2319–2323, 2000.
[2] S. T. Roweis and L. K. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290(5500):2323–2326, 2000.
[3] Geoffrey Hinton and Sam Roweis. Stochastic neighbor embedding. In Advances in Neural Information Processing Systems 15, pages 833–840. MIT Press, 2003.
[4] A. Paccanaro and G. E. Hinton. Learning hierarchical structures with linear relational embedding. In Advances in Neural Information Processing Systems 14, Cambridge, MA, 2002. MIT Press.
[5] David Cohn. Informed projections. In Advances in Neural Information Processing Systems 15, pages 849–856. MIT Press, 2003.
[6] Joshua B. Tenenbaum and William T. Freeman. Separating style and content with bilinear models. Neural Computation, 12(6):1247–1283, 2000.
[7] Eric P. Xing, Andrew Y. Ng, Michael I. Jordan, and Stuart Russell. Distance metric learning with application to clustering with side-information. In Advances in Neural Information Processing Systems 15, pages 505–512. MIT Press, Cambridge, MA, 2003.
[8] Tijl De Bie, Michinari Momma, and Nello Cristianini. Efficiently learning the metric using side-information. In Proc. of the 14th International Conference on Algorithmic Learning Theory, 2003.
[9] J. Douglas Carroll and Jih-Jie Chang. Analysis of individual differences in multidimensional scaling via an N-way generalization of "Eckart-Young" decomposition. Psychometrika, 35(3):283–319, 1970.
[10] J. H. Ham, D. D. Lee, and L. K. Saul. Learning high dimensional correspondences from low dimensional manifolds. In Proceedings of the ICML 2003 Workshop on The Continuum from Labeled to Unlabeled Data in Machine Learning and Data Mining, pages 34–41, Washington, D.C., 2003.
[11] Jakob J. Verbeek, Sam T. Roweis, and Nikos Vlassis. Non-linear CCA and PCA by alignment of local models. In Advances in Neural Information Processing Systems 16. MIT Press, Cambridge, MA, 2004.
[12] S. A. Nene, S. K. Nayar, and H. Murase. Columbia Object Image Library (COIL-20). Technical Report CUCS-005-96, Columbia University, 1996.
[13] Daniel B. Graham and Nigel M. Allinson. Characterizing virtual eigensignatures for general purpose face recognition. In Face Recognition: From Theory to Applications, volume 163 of NATO ASI Series F, Computer and Systems Sciences, pages 446–456. Springer, 1998.