
Sparse Representation for Signal Classification (NIPS 2006, paper 179)


Source: pdf

Author: Ke Huang, Selin Aviyente

Abstract: In this paper, we discuss the application of sparse representation (factorization) of signals over an overcomplete basis (dictionary) to signal classification. The sparse representation of a signal over an overcomplete dictionary is found by optimizing an objective function with two terms: one that measures the signal reconstruction error and one that measures the sparsity of the representation. This objective works well in applications where signals must be reconstructed, such as coding and denoising. Discriminative methods, such as linear discriminant analysis (LDA), are better suited for classification tasks; however, they are usually sensitive to signal corruption because they lack the properties needed for signal reconstruction. In this paper, we present a theoretical framework for signal classification with sparse representation. The approach combines the discrimination power of discriminative methods with the reconstruction property and sparsity of the sparse representation, which allows it to handle signal corruptions: noise, missing data, and outliers. The proposed approach is therefore capable of robust classification with a sparse representation of signals. The theoretical results are demonstrated on signal classification tasks, showing that the proposed approach outperforms both standard discriminative methods and the standard sparse representation when signals are corrupted.
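The two-term objective described in the abstract is the standard L1-regularized reconstruction problem, min_a (1/2)||x - Da||^2 + lambda*||a||_1, where D is an overcomplete dictionary and a is the sparse coefficient vector. The following NumPy sketch solves it with iterative soft-thresholding (ISTA); it is an illustration of the generic sparse-coding step, not the authors' implementation, and the dictionary size, lambda, and iteration count are arbitrary assumptions.

```python
import numpy as np

def sparse_code(x, D, lam=0.1, n_iter=200):
    """Solve min_a 0.5*||x - D a||_2^2 + lam*||a||_1 via ISTA.

    D is an (n x k) overcomplete dictionary (k > n); returns the
    sparse coefficient vector a.
    """
    a = np.zeros(D.shape[1])
    # Step size 1/L, where L = ||D||_2^2 is the Lipschitz constant
    # of the gradient of the reconstruction term.
    L = np.linalg.norm(D, 2) ** 2
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)                # gradient of 0.5*||x - Da||^2
        z = a - grad / L                        # gradient descent step
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return a

# Toy usage: recover a sparse code from a noisy synthetic signal.
rng = np.random.default_rng(0)
D = rng.standard_normal((32, 64))
D /= np.linalg.norm(D, axis=0)                  # unit-norm dictionary atoms
a_true = np.zeros(64)
a_true[[3, 17, 40]] = [1.0, -0.5, 0.8]          # 3 active atoms
x = D @ a_true + 0.01 * rng.standard_normal(32)
a_hat = sparse_code(x, D, lam=0.05)
print("nonzero coefficients:", np.sum(np.abs(a_hat) > 1e-3))
```

The reconstruction term is what gives the method its robustness to corruption; a purely discriminative criterion such as LDA would fit a projection to the (possibly corrupted) training signals with no mechanism for explaining noise or missing data away.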


References

[1] S. Mallat and Z. Zhang, “Matching pursuits with time-frequency dictionaries,” IEEE Trans. on Signal Processing, vol. 41, pp. 3397–3415, 1993.

[2] Y. Pati, R. Rezaiifar, and P. Krishnaprasad, “Orthogonal matching pursuit: Recursive function approximation with applications to wavelet decomposition,” in 27th Annual Asilomar Conference on Signals, Systems, and Computers, 1993.

[3] S. Chen, D. Donoho, and M. Saunders, “Atomic decomposition by basis pursuit,” SIAM J. Scientific Computing, vol. 20, no. 1, pp. 33–61, 1999.

[4] I. Drori and D. Donoho, “Solution of L1 minimization problems by LARS/Homotopy methods,” in ICASSP, 2006, vol. 3, pp. 636–639.

[5] M. Aharon, M. Elad, and A. Bruckstein, “K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation,” IEEE Trans. on Signal Processing, to appear.

[6] J. Starck, M. Elad, and D. Donoho, “Image decomposition via the combination of sparse representation and a variational approach,” IEEE Trans. on Image Processing, vol. 14, no. 10, pp. 1570–1582, 2005.

[7] Y. Li, A. Cichocki, and S. Amari, “Analysis of sparse representation and blind source separation,” Neural Computation, vol. 16, no. 6, pp. 1193–1234, 2004.

[8] B. Olshausen, P. Sallee, and M. Lewicki, “Learning sparse image codes using a wavelet pyramid architecture,” in NIPS, 2001, pp. 887–893.

[9] M. Elad and M. Aharon, “Image denoising via learned dictionaries and sparse representation,” in CVPR, 2006.

[10] M. Elad, B. Matalon, and M. Zibulevsky, “Image denoising with shrinkage and redundant representation,” in CVPR, 2006.

[11] D. Donoho and X. Huo, “Uncertainty principles and ideal atomic decomposition,” IEEE Trans. on Information Theory, vol. 47, no. 7, pp. 2845–2862, 2001.

[12] Y. Lin and D. Lee, “Bayesian L1-norm sparse learning,” in ICASSP, 2006, vol. 5, pp. 605–608.

[13] D. Wipf and B. Rao, “Sparse Bayesian learning for basis selection,” IEEE Trans. on Signal Processing, vol. 52, no. 8, pp. 2153–2164, 2004.

[14] R. Duda, P. Hart, and D. Stork, Pattern classification (2nd ed.), Wiley-Interscience, 2000.

[15] P. Belhumeur, J. Hespanha, and D. Kriegman, “Eigenfaces vs. Fisherfaces: Recognition using class specific linear projection,” IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 711–720, 1997.

[16] A. Martinez and A. Kak, “PCA versus LDA,” IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 23, no. 2, pp. 228–233, 2001.

[17] S. Fidler, D. Skocaj, and A. Leonardis, “Combining reconstructive and discriminative subspace methods for robust classification and regression by subsampling,” IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 28, no. 3, pp. 337–350, 2006.

[18] M. Elad, J. Starck, P. Querre, and D. Donoho, “Simultaneous cartoon and texture image inpainting using morphological component analysis (MCA),” Journal on Applied and Computational Harmonic Analysis, vol. 19, pp. 340–358, 2005.

[19] A. Leonardis and H. Bischof, “Robust recognition using eigenimages,” Computer Vision and Image Understanding, vol. 78, pp. 99–118, 2000.

[20] J. Tropp, A. Gilbert, and M. Strauss, “Algorithms for simultaneous sparse approximation. Part I: Greedy pursuit,” Signal Processing, special issue on Sparse approximations in signal and image processing, vol. 86, no. 4, pp. 572–588, 2006.

[21] J. Tropp, A. Gilbert, and M. Strauss, “Algorithms for simultaneous sparse approximation. Part II: Convex relaxation,” Signal Processing, special issue on Sparse approximations in signal and image processing, vol. 86, no. 4, pp. 589–602, 2006.

[22] USPS Handwritten Digit Database, available at: http://www.cs.toronto.edu/~roweis/data.html.

[23] R. Kohavi, “A study of cross-validation and bootstrap for accuracy estimation and model selection,” in IJCAI, 1995, pp. 1137–1145.

[24] M. Tipping, “Sparse Bayesian learning and the relevance vector machine,” Journal of Machine Learning Research, vol. 1, pp. 211–244, 2001.