
189 nips-2005-Tensor Subspace Analysis


Source: pdf

Author: Xiaofei He, Deng Cai, Partha Niyogi

Abstract: Previous work has demonstrated that the image variations of many objects (human faces in particular) under variable lighting can be effectively modeled by low-dimensional linear spaces. Typical linear subspace learning algorithms include Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), and Locality Preserving Projection (LPP). All of these methods treat an n1 × n2 image as a high-dimensional vector in R^(n1×n2), whereas an image represented in the plane is intrinsically a matrix. In this paper, we propose a new algorithm called Tensor Subspace Analysis (TSA). TSA considers an image as a second-order tensor in R^(n1) ⊗ R^(n2), where R^(n1) and R^(n2) are two vector spaces. The relationships between the column vectors of the image matrix, and between its row vectors, can be naturally characterized by TSA. TSA detects the intrinsic local geometric structure of the tensor space by learning a lower-dimensional tensor subspace. We compare our proposed approach with the PCA, LDA, and LPP methods on two standard databases. Experimental results demonstrate that TSA achieves a better recognition rate while being much more efficient.
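The core idea in the abstract — treating an n1 × n2 image as a second-order tensor and reducing its row and column spaces separately — can be illustrated with a short sketch. The following is a minimal, hypothetical implementation of an LPP-style bilinear projection in the spirit of TSA: it builds a simple 0/1 k-nearest-neighbor affinity graph (the paper's own weighting scheme is not reproduced here) and alternates two small generalized eigenproblems for the projections U and V. All names (tsa_sketch, d1, d2, n_iters) are illustrative assumptions, not the authors' code.

```python
# Minimal sketch of a TSA-style bilinear projection (hypothetical names;
# 0/1 k-NN graph weights stand in for the paper's exact weighting).
# Each image X_i (n1 x n2) is embedded as Y_i = U.T @ X_i @ V.
import numpy as np
from scipy.linalg import eigh

def tsa_sketch(images, d1, d2, k=5, n_iters=5, eps=1e-8):
    """Learn U (n1 x d1) and V (n2 x d2) so that neighboring images
    stay close in the reduced tensor space."""
    X = np.asarray(images, dtype=float)          # shape (m, n1, n2)
    m, n1, n2 = X.shape

    # k-NN affinity graph on vectorized images (0/1 weights).
    flat = X.reshape(m, -1)
    dist = ((flat[:, None, :] - flat[None, :, :]) ** 2).sum(-1)
    W = np.zeros((m, m))
    for i in range(m):
        W[i, np.argsort(dist[i])[1:k + 1]] = 1.0
    W = np.maximum(W, W.T)                       # symmetrize
    d = W.sum(axis=1)                            # vertex degrees

    U, V = np.eye(n1)[:, :d1], np.eye(n2)[:, :d2]
    for _ in range(n_iters):
        # Fix V, update U: smallest generalized eigenvectors of
        #   (D_V - W_V) u = lambda * D_V u,   with
        #   D_V = sum_i d_i (X_i V)(X_i V)^T,
        #   W_V = sum_ij W_ij (X_i V)(X_j V)^T.
        XV = X @ V                               # (m, n1, d2)
        DV = np.einsum('i,iad,ibd->ab', d, XV, XV)
        WV = np.einsum('ij,iad,jbd->ab', W, XV, XV)
        _, vecs = eigh(DV - WV, DV + eps * np.eye(n1))
        U = vecs[:, :d1]

        # Fix U, update V symmetrically, using X_i^T U.
        XtU = X.transpose(0, 2, 1) @ U           # (m, n2, d1)
        DU = np.einsum('i,iad,ibd->ab', d, XtU, XtU)
        WU = np.einsum('ij,iad,jbd->ab', W, XtU, XtU)
        _, vecs = eigh(DU - WU, DU + eps * np.eye(n2))
        V = vecs[:, :d2]
    return U, V

# Toy usage: 40 random 32 x 28 "images" reduced to 4 x 4 tensors.
imgs = np.random.rand(40, 32, 28)
U, V = tsa_sketch(imgs, d1=4, d2=4)
Y = U.T @ imgs @ V                               # (40, 4, 4) embeddings
```

Each image is embedded as a small d1 × d2 matrix, which is why tensor-based methods of this kind solve eigenproblems of size n1 × n1 and n2 × n2 rather than the (n1·n2) × (n1·n2) problems faced by vector-space methods such as PCA or LPP — consistent with the efficiency claim in the abstract.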


References

[1] P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman, “Eigenfaces vs. Fisherfaces: Recognition Using Class Specific Linear Projection,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 711-720, July 1997.

[2] M. Belkin and P. Niyogi, “Laplacian Eigenmaps and Spectral Techniques for Embedding and Clustering,” Advances in Neural Information Processing Systems 14, 2001.

[3] F. R. K. Chung, Spectral Graph Theory, Regional Conference Series in Mathematics, no. 92, 1997.

[4] X. He and P. Niyogi, “Locality Preserving Projections,” Advances in Neural Information Processing Systems 16, Vancouver, Canada, December 2003.

[5] X. He, S. Yan, Y. Hu, P. Niyogi, and H.-J. Zhang, “Face Recognition Using Laplacianfaces,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 27, no. 3, 2005.

[6] S. Roweis and L. K. Saul, “Nonlinear Dimensionality Reduction by Locally Linear Embedding,” Science, vol. 290, 22 December 2000.

[7] J. B. Tenenbaum, V. de Silva, and J. C. Langford, “A Global Geometric Framework for Nonlinear Dimensionality Reduction,” Science, vol. 290, 22 December 2000.

[8] M. Turk and A. Pentland, “Eigenfaces for Recognition,” Journal of Cognitive Neuroscience, vol. 3, no. 1, pp. 71-86, 1991.

[9] M. A. O. Vasilescu and D. Terzopoulos, “Multilinear Subspace Analysis for Image Ensembles,” IEEE Conference on Computer Vision and Pattern Recognition, 2003.

[10] K. Q. Weinberger and L. K. Saul, “Unsupervised Learning of Image Manifolds by Semidefinite Programming,” IEEE Conference on Computer Vision and Pattern Recognition, Washington, DC, 2004.

[11] J. Yang, D. Zhang, A. Frangi, and J. Yang, “Two-Dimensional PCA: A New Approach to Appearance-Based Face Representation and Recognition,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 26, no. 1, 2004.

[12] J. Ye, R. Janardan, and Q. Li, “Two-Dimensional Linear Discriminant Analysis,” Advances in Neural Information Processing Systems 17, 2004.