
Fusion of Similarity Data in Clustering


Source: pdf

Author: Tilman Lange, Joachim M. Buhmann

Abstract: Fusing multiple information sources can yield significant benefits for learning tasks. Many studies have focused on fusing information in supervised learning contexts. We present an approach that exploits multiple information sources, given in the form of similarity data, for unsupervised learning. Based on the similarity information, the clustering task is phrased as a non-negative matrix factorization problem of a mixture of similarity measurements. The trade-off between the informativeness of data sources and the sparseness of their mixture is controlled by an entropy-based weighting mechanism. For model selection, a stability-based approach is employed to ensure that the most self-consistent hypothesis is selected. Experiments demonstrate the performance of the method on toy as well as real-world data sets.
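To make the abstract's pipeline concrete, below is a minimal sketch in Python of its first two ingredients: a symmetric non-negative factorization of a weighted mixture of similarity matrices, with an entropy-style softmax reweighting of the sources. The update rules (Lee–Seung-style multiplicative NMF, softmax over per-source reconstruction errors), the function name, and the `beta` parameter are illustrative assumptions standing in for the authors' exact algorithm, whose derivation is not given in the abstract.

```python
# Sketch: clustering by NMF of a weighted mixture of similarity matrices.
# NOT the authors' exact method; generic multiplicative updates plus a
# softmax surrogate for the entropy-based weighting mechanism.
import numpy as np

def cluster_similarity_mixture(sims, k, beta=5.0, n_iter=200, seed=0):
    """sims: list of (n, n) non-negative, symmetric similarity matrices.
    k: number of clusters.  beta: entropy trade-off; small beta keeps the
    source weights near-uniform, large beta concentrates them on the
    sources that the factorization reconstructs best."""
    rng = np.random.default_rng(seed)
    n, m = sims[0].shape[0], len(sims)
    alpha = np.full(m, 1.0 / m)        # mixture weights on the simplex
    H = rng.random((n, k)) + 1e-3      # non-negative factor, S ~ H H^T

    for _ in range(n_iter):
        S = sum(a * Sl for a, Sl in zip(alpha, sims))
        # Multiplicative update for symmetric NMF, min ||S - H H^T||^2.
        H *= (S @ H) / (H @ (H.T @ H) + 1e-12)
        # Entropy-regularized weight update: softmax of negative
        # per-source reconstruction errors (hedged surrogate for the
        # paper's informativeness-vs-sparseness trade-off).
        errs = np.array([np.linalg.norm(Sl - H @ H.T) ** 2 for Sl in sims])
        alpha = np.exp(-beta * errs / errs.max())
        alpha /= alpha.sum()

    labels = H.argmax(axis=1)          # hard cluster assignment
    return labels, alpha
```

In practice the number of clusters k would not be fixed by hand but chosen by the stability-based model selection procedure the abstract refers to; the weights alpha indicate how informative each similarity source is for the final partition.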


References

[1] F. R. Bach and M. I. Jordan. Learning spectral clustering. In NIPS, volume 16. MIT Press, 2004.

[2] J. Burbea and C. R. Rao. On the convexity of some divergence measures based on entropy functions. IEEE Trans. Inform. Theory, 28(3), 1982.

[3] K. Crammer, J. Keshet, and Y. Singer. Kernel design using boosting. In NIPS, volume 15. MIT Press, 2003.

[4] B. Fischer, V. Roth, and J. M. Buhmann. Clustering with the connectivity kernel. In NIPS, volume 16. MIT Press, 2004.

[5] T. Hofmann. Unsupervised learning by probabilistic latent semantic analysis. Mach. Learn., 42(1-2):177–196, 2001.

[6] E. T. Jaynes. Information theory and statistical mechanics, I and II. Physical Review, 106:620–630 and 108:171–190, 1957.

[7] G. R. G. Lanckriet, M. Deng, N. Cristianini, M. I. Jordan, and W. S. Noble. Kernel-based data fusion and its application to protein function prediction in yeast. In Pacific Symposium on Biocomputing, pages 300–311, 2004.

[8] K. Lange. Optimization. Springer Texts in Statistics. Springer, 2004.

[9] T. Lange, M. Braun, V. Roth, and J. M. Buhmann. Stability-based model selection. In NIPS, volume 15. MIT Press, 2003.

[10] M. H. C. Law, M. A. T. Figueiredo, and A. K. Jain. Simultaneous feature selection and clustering using mixture models. IEEE Trans. Pattern Anal. Mach. Intell., 26(9):1154–1166, 2004.

[11] D. D. Lee and H. S. Seung. Algorithms for non-negative matrix factorization. In NIPS, volume 13, pages 556–562. MIT Press, 2001.

[12] B. S. Manjunath and W. Y. Ma. Texture features for browsing and retrieval of image data. IEEE Trans. Pattern Anal. Mach. Intell., 18(8):837–842, 1996.

[13] D. S. Modha and W. S. Spangler. Feature weighting in k-means clustering. Mach. Learn., 52(3):217–237, 2003.

[14] V. Roth and T. Lange. Feature selection in clustering problems. In NIPS, volume 16. MIT Press, 2004.

[15] J. Shi and J. Malik. Normalized cuts and image segmentation. IEEE Trans. Pattern Anal. Mach. Intell., 22(8):888–905, 2000.

[16] C. K. I. Williams and M. Seeger. Using the Nyström method to speed up kernel machines. In NIPS, volume 13. MIT Press, 2001.

[17] E. Xing, A. Ng, M. Jordan, and S. Russell. Distance metric learning with application to clustering with side-information. In NIPS, volume 15. MIT Press, 2003.

[18] W. Xu, X. Liu, and Y. Gong. Document clustering based on non-negative matrix factorization. In SIGIR ’03, pages 267–273. ACM Press, 2003.