79 nips-2009-Efficient Recovery of Jointly Sparse Vectors


Source: pdf

Author: Liang Sun, Jun Liu, Jianhui Chen, Jieping Ye

Abstract: We consider the reconstruction of sparse signals in the multiple measurement vector (MMV) model, in which the signal, represented as a matrix, consists of a set of jointly sparse vectors. MMV is an extension of the single measurement vector (SMV) model employed in standard compressive sensing (CS). Recent theoretical studies focus on the convex relaxation of the MMV problem based on (2,1)-norm minimization, which is an extension of the well-known 1-norm minimization employed in SMV. However, the resulting convex optimization problem in MMV is significantly more difficult to solve than the one in SMV. Existing algorithms reformulate it as a second-order cone programming (SOCP) or semidefinite programming (SDP) problem, which is computationally expensive to solve for problems of moderate size. In this paper, we propose a new (dual) reformulation of the convex optimization problem in MMV and develop an efficient algorithm based on the prox-method. Interestingly, our theoretical analysis reveals the close connection between the proposed reformulation and multiple kernel learning. Our simulation studies demonstrate the scalability of the proposed algorithm.
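
For concreteness, the two convex programs contrasted above can be written out as follows. This is a sketch of the standard formulations, not a verbatim excerpt from the paper; the symbols A (measurement matrix), b and B (measurements), x (sparse vector), and X (jointly sparse matrix with rows x^i) are generic notation chosen here.

% SMV: standard compressive sensing recovers a sparse vector x from b = Ax
% via 1-norm minimization.
\[
  \min_{x} \; \|x\|_{1} \quad \text{subject to} \quad Ax = b
\]
% MMV: the jointly sparse columns of X share a common support; the (2,1)-norm
% sums the Euclidean norms of the rows x^i of X, driving entire rows to zero.
\[
  \min_{X} \; \|X\|_{2,1} = \sum_{i} \|x^{i}\|_{2} \quad \text{subject to} \quad AX = B
\]

With a single measurement vector (B a single column), the second program reduces to the first, which is the sense in which MMV extends SMV.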


reference text

[1] A. Ben-Tal and A. Nemirovski. Non-Euclidean restricted memory level method for large-scale convex optimization. Mathematical Programming, 102(3):407–456, 2005.

[2] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, Cambridge, UK, 2004.

[3] E. Candès. Compressive sampling. In International Congress of Mathematicians, number 3, pages 1433–1452, Madrid, Spain, 2006.

[4] E. Candès, J. Romberg, and T. Tao. Robust uncertainty principles: exact signal reconstruction from highly incomplete frequency information. IEEE Transactions on Information Theory, 52(2):489–509, 2006.

[5] J. Chen and X. Huo. Theoretical results on sparse representations of multiple-measurement vectors. IEEE Transactions on Signal Processing, 54(12):4634–4643, 2006.

[6] S.F. Cotter and B.D. Rao. Sparse channel estimation via matching pursuit with application to equalization. IEEE Transactions on Communications, 50(3):374–377, 2002.

[7] S.F. Cotter, B.D. Rao, K. Engan, and K. Kreutz-Delgado. Sparse solutions to linear inverse problems with multiple measurement vectors. IEEE Transactions on Signal Processing, 53(7):2477–2488, 2005.

[8] D.L. Donoho. Compressed sensing. IEEE Transactions on Information Theory, 52(4):1289–1306, 2006.

[9] D.L. Duttweiler. Proportionate normalized least-mean-squares adaptation in echo cancelers. IEEE Transactions on Speech and Audio Processing, 8(5):508–518, 2000.

[10] Y.C. Eldar and M. Mishali. Robust recovery of signals from a structured union of subspaces. To appear in IEEE Transactions on Information Theory, 2009.

[11] S. Erickson and C. Sabatti. Empirical Bayes estimation of a sparse vector of gene expression changes. Statistical Applications in Genetics and Molecular Biology, 4(1):22, 2008.

[12] I.F. Gorodnitsky, J.S. George, and B.D. Rao. Neuromagnetic source imaging with FOCUSS: a recursive weighted minimum norm algorithm. Electroencephalography and Clinical Neurophysiology, 95(4):231–251, 1995.

[13] J.-B. Hiriart-Urruty and C. Lemaréchal. Convex Analysis and Minimization Algorithms I: Fundamentals (Grundlehren der Mathematischen Wissenschaften). Springer, Berlin, 1993.

[14] G.R.G. Lanckriet, N. Cristianini, P. Bartlett, L. El Ghaoui, and M.I. Jordan. Learning the kernel matrix with semidefinite programming. Journal of Machine Learning Research, 5:27–72, 2004.

[15] M. Lobo, L. Vandenberghe, S. Boyd, and H. Lebret. Applications of second-order cone programming. Linear Algebra and its Applications, 284(1-3):193–228, 1998.

[16] M. Stojnic, F. Parvaresh, and B. Hassibi. On the reconstruction of block-sparse signals with an optimal number of measurements. CoRR, 2008.

[17] D. Malioutov, M. Cetin, and A. Willsky. Source localization by enforcing sparsity through a Laplacian. In IEEE Workshop on Statistical Signal Processing, pages 553–556, 2003.

[18] M. Mishali and Y.C. Eldar. Reduce and boost: Recovering arbitrary sets of jointly sparse vectors. IEEE Transactions on Signal Processing, 56(10):4692–4702, 2008.

[19] A. Nemirovski. Prox-method with rate of convergence O(1/t) for variational inequalities with Lipschitz continuous monotone operators and smooth convex-concave saddle point problems. SIAM Journal on Optimization, 15(1):229–251, 2005.

[20] Y.E. Nesterov and A.S. Nemirovskii. Interior-point Polynomial Algorithms in Convex Programming. SIAM Publications, Philadelphia, PA, 1994.

[21] R.G. Baraniuk, V. Cevher, M. Duarte, and C. Hegde. Model-based compressive sensing. Submitted to IEEE Transactions on Information Theory, 2008.

[22] S. Sonnenburg, G. Rätsch, C. Schäfer, and B. Schölkopf. Large scale multiple kernel learning. Journal of Machine Learning Research, 7:1531–1565, 2006.

[23] J.F. Sturm. Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones. Optimization Methods and Software, 11–12:625–653, 1999.

[24] J.A. Tropp. Algorithms for simultaneous sparse approximation. Part II: Convex relaxation. Signal Processing, 86(3):589–602, 2006.

[25] J.A. Tropp, A.C. Gilbert, and M.J. Strauss. Algorithms for simultaneous sparse approximation. Part I: Greedy pursuit. Signal Processing, 86(3):572–588, 2006.

[26] E. van den Berg and M.P. Friedlander. Joint-sparse recovery from multiple measurements. Technical Report, Department of Computer Science, University of British Columbia, 2009.

[27] Z. Xu, R. Jin, I. King, and M.R. Lyu. An extended level method for efficient multiple kernel learning. In Advances in Neural Information Processing Systems, pages 1825–1832, 2008.