Title: Parametric Embedding for Class Visualization
Author: Tomoharu Iwata, Kazumi Saito, Naonori Ueda, Sean Stromsten, Thomas L. Griffiths, Joshua B. Tenenbaum
Abstract: In this paper, we propose a new method, Parametric Embedding (PE), for visualizing the posteriors estimated by a mixture model. PE simultaneously embeds both objects and their classes in a low-dimensional space. PE takes as input a set of class posterior vectors for given data points and tries to preserve the posterior structure in an embedding space by minimizing a sum of Kullback-Leibler divergences, under the assumption that samples are generated by a Gaussian mixture with equal covariances in the embedding space. PE has many potential uses depending on the source of the input data, providing insight into the classifier's behavior in supervised, semi-supervised, and unsupervised settings. The PE algorithm has a computational advantage over conventional embedding methods based on pairwise object relations, since its complexity scales with the product of the number of objects and the number of classes. We demonstrate PE by visualizing supervised categorization of web pages, semi-supervised categorization of digits, and the relations of words and latent topics found by an unsupervised algorithm, Latent Dirichlet Allocation.
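The abstract's description of the objective is concrete enough to sketch. Below is a minimal gradient-descent illustration in Python, assuming equal class priors and unit covariances in the embedding space; the function name pe_embed, the step size, and the iteration count are illustrative choices, not taken from the paper.

import numpy as np

def pe_embed(P, dim=2, iters=1000, lr=0.1, seed=0):
    # P: (N, K) row-stochastic matrix of class posteriors p(c_k | x_n).
    # Returns object coordinates X (N, dim) and class means Phi (K, dim).
    rng = np.random.default_rng(seed)
    N, K = P.shape
    X = 0.01 * rng.standard_normal((N, dim))
    Phi = 0.01 * rng.standard_normal((K, dim))
    for _ in range(iters):
        # Posteriors induced by a Gaussian mixture with equal covariances:
        # q_nk proportional to exp(-||x_n - phi_k||^2 / 2), normalized over k.
        diff = X[:, None, :] - Phi[None, :, :]          # (N, K, dim)
        logits = -0.5 * (diff ** 2).sum(-1)             # (N, K)
        logits -= logits.max(axis=1, keepdims=True)     # numerical stability
        Q = np.exp(logits)
        Q /= Q.sum(axis=1, keepdims=True)
        # Gradients of E = sum_n KL(p_n || q_n); cost per step is O(N*K).
        R = P - Q                                       # (N, K)
        grad_X = (R[:, :, None] * diff).sum(axis=1)     # (N, dim)
        grad_Phi = -(R[:, :, None] * diff).sum(axis=0)  # (K, dim)
        X -= lr * grad_X
        Phi -= lr * grad_Phi
    return X, Phi

Each iteration touches only the N-by-K distance matrix, which is the O(NK) scaling the abstract contrasts with pairwise embedding methods such as stochastic neighbor embedding [4]; plotting the returned X with Phi overlaid places each object near the classes in which it has high posterior probability.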
References:
[1] D. Blei, A. Ng, and M. Jordan. Latent Dirichlet allocation. NIPS 15, 2002.
[2] V. de Silva and J. B. Tenenbaum. Global versus local methods in nonlinear dimensionality reduction. NIPS 15, pp. 705–712, 2002.
[3] R. Fisher. The use of multiple measurements in taxonomic problems. Annals of Eugenics 7, pp. 179–188, 1936.
[4] G. Hinton and S. Roweis. Stochastic neighbor embedding. NIPS 15, 2002.
[5] I. T. Jolliffe. Principal Component Analysis. Springer, 1986.
[6] J. Tenenbaum, V. de Silva, and J. Langford. A global geometric framework for nonlinear dimensionality reduction. Science 290, pp. 2319–2323, 2000.
[7] W. Torgerson. Theory and Methods of Scaling. Wiley, New York, 1958.