
Sparse Manifold Clustering and Embedding (NIPS 2011, paper 263)


Source: pdf

Authors: Ehsan Elhamifar, René Vidal

Abstract: We propose an algorithm called Sparse Manifold Clustering and Embedding (SMCE) for simultaneous clustering and dimensionality reduction of data lying in multiple nonlinear manifolds. Similar to most dimensionality reduction methods, SMCE finds a small neighborhood around each data point and connects each point to its neighbors with appropriate weights. The key difference is that SMCE finds both the neighbors and the weights automatically. This is done by solving a sparse optimization problem, which encourages selecting nearby points that lie in the same manifold and approximately span a low-dimensional affine subspace. The optimal solution encodes information that can be used for clustering and dimensionality reduction using spectral clustering and embedding. Moreover, the size of the optimal neighborhood of a data point, which can be different for different points, provides an estimate of the dimension of the manifold to which the point belongs. Experiments demonstrate that our method can effectively handle multiple manifolds that are very close to each other as well as manifolds with non-uniform sampling and holes, and can estimate the intrinsic dimensions of the manifolds.
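
The abstract sketches a concrete pipeline: a per-point sparse program selects neighbors and weights, and spectral methods on the resulting similarity graph perform the clustering and embedding. Below is a minimal sketch of that idea, assuming a weighted-l1 objective with an affine constraint; the parameter `lam`, the proximity weighting `q`, and the use of cvxpy and scikit-learn are illustrative choices, not the authors' reference implementation.

```python
# Hedged sketch of the SMCE idea from the abstract: for each point, a sparse
# optimization selects a few nearby points that approximately span an affine
# subspace around it; the resulting weights feed spectral clustering.
import numpy as np
import cvxpy as cp
from sklearn.cluster import SpectralClustering

def smce_weights(X, lam=10.0):
    """X: (N, D) data matrix. Returns an (N, N) nonnegative weight matrix."""
    N = X.shape[0]
    C = np.zeros((N, N))
    for i in range(N):
        diffs = np.delete(X, i, axis=0) - X[i]        # vectors to the other points
        dists = np.linalg.norm(diffs, axis=1)
        D = (diffs / dists[:, None]).T                # unit directions, shape (D, N-1)
        q = dists / dists.sum()                       # farther points penalized more
        c = cp.Variable(N - 1)
        # Sparse affine reconstruction: prefer few, nearby directions whose
        # weighted combination nearly vanishes, with weights summing to one.
        obj = cp.Minimize(lam * cp.norm1(cp.multiply(q, c))
                          + 0.5 * cp.sum_squares(D @ c))
        cp.Problem(obj, [cp.sum(c) == 1]).solve()
        w = np.insert(c.value, i, 0.0)                # re-insert the self index
        C[i] = np.abs(w) / np.abs(w).max()            # normalized, nonnegative row
    return C

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy example: two noisy 1-D manifolds (concentric circles) in the plane.
    t = rng.uniform(0, 2 * np.pi, 100)
    circle = np.c_[np.cos(t), np.sin(t)]
    X = np.vstack([circle, 2 * circle]) + 0.01 * rng.standard_normal((200, 2))
    C = smce_weights(X)
    W = np.maximum(C, C.T)                            # symmetric similarity graph
    labels = SpectralClustering(n_clusters=2, affinity="precomputed").fit_predict(W)
```

In this sketch, the number of significantly nonzero weights in each row of C plays the role of the per-point neighborhood size that the abstract ties to intrinsic dimension.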


References

[1] S. Roweis and L. Saul, “Nonlinear dimensionality reduction by locally linear embedding,” Science, vol. 290, no. 5500, pp. 2323–2326, 2000.

[2] D. Donoho and C. Grimes, “Hessian eigenmaps: Locally linear embedding techniques for high-dimensional data,” Proceedings of the National Academy of Sciences, vol. 100, no. 10, pp. 5591–5596, 2003.

[3] M. Belkin and P. Niyogi, “Laplacian eigenmaps and spectral techniques for embedding and clustering,” in Neural Information Processing Systems, 2002, pp. 585–591.

[4] J. B. Tenenbaum, V. de Silva, and J. C. Langford, “A global geometric framework for nonlinear dimensionality reduction,” Science, vol. 290, no. 5500, pp. 2319–2323, 2000.

[5] K. Q. Weinberger and L. Saul, “Unsupervised learning of image manifolds by semidefinite programming,” in IEEE Conference on Computer Vision and Pattern Recognition, 2004, pp. 988–995.

[6] B. Shaw and T. Jebara, “Minimum volume embedding,” in Artificial Intelligence and Statistics, 2007.

[7] ——, “Structure preserving embedding,” in International Conference on Machine Learning, 2009.

[8] R. Vidal, “Subspace clustering,” IEEE Signal Processing Magazine, vol. 28, no. 2, pp. 52–68, 2011.

[9] D. Barbará and P. Chen, “Using the fractal dimension to cluster datasets,” in KDD ’00: Proceedings of the sixth ACM SIGKDD international conference on Knowledge discovery and data mining, 2000, pp. 260–264.

[10] P. Mordohai and G. G. Medioni, “Unsupervised dimensionality estimation and manifold learning in high-dimensional spaces by tensor voting,” in International Joint Conference on Artificial Intelligence, 2005, pp. 798–803.

[11] A. Gionis, A. Hinneburg, S. Papadimitriou, and P. Tsaparas, “Dimension induced clustering,” in KDD ’05: Proceedings of the eleventh ACM SIGKDD international conference on Knowledge discovery in data mining, 2005, pp. 51–60.

[12] E. Levina and P. J. Bickel, “Maximum likelihood estimation of intrinsic dimension,” in Neural Information Processing Systems, 2004.

[13] G. Haro, G. Randall, and G. Sapiro, “Translated Poisson mixture model for stratification learning,” International Journal of Computer Vision, 2008.

[14] M. Polito and P. Perona, “Grouping and dimensionality reduction by locally linear embedding,” in Neural Information Processing Systems, 2002.

[15] A. Goh and R. Vidal, “Segmenting motions of different types by unsupervised manifold clustering,” in IEEE Conference on Computer Vision and Pattern Recognition, 2007.

[16] ——, “Clustering and dimensionality reduction on Riemannian manifolds,” in IEEE Conference on Computer Vision and Pattern Recognition, 2008.

[17] D. Donoho and X. Huo, “Uncertainty principles and ideal atomic decomposition,” IEEE Trans. Information Theory, vol. 47, no. 7, pp. 2845–2862, Nov. 2001.

[18] R. Tibshirani, “Regression shrinkage and selection via the lasso,” Journal of the Royal Statistical Society B, vol. 58, no. 1, pp. 267–288, 1996.

[19] A. Ng, Y. Weiss, and M. Jordan, “On spectral clustering: analysis and an algorithm,” in Neural Information Processing Systems, 2001, pp. 849–856.

[20] U. von Luxburg, “A tutorial on spectral clustering,” Statistics and Computing, vol. 17, no. 4, pp. 395–416, 2007.

[21] Z. Zhang and H. Zha, “Principal manifolds and nonlinear dimensionality reduction via tangent space alignment,” SIAM J. Sci. Comput., vol. 26, no. 1, pp. 313–338, 2005.

[22] K.-C. Lee, J. Ho, and D. Kriegman, “Acquiring linear subspaces for face recognition under variable lighting,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 5, pp. 684–698, 2005.

[23] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner, “Gradient-based learning applied to document recognition,” Proceedings of the IEEE, vol. 86, no. 11, pp. 2278–2324, 1998.