
259 nips-2011-Sparse Estimation with Structured Dictionaries


Source: pdf

Author: David P. Wipf

Abstract: In the vast majority of recent work on sparse estimation algorithms, performance has been evaluated using ideal or quasi-ideal dictionaries (e.g., random Gaussian or Fourier) characterized by unit ℓ2 norm, incoherent columns or features. But such dictionaries, largely restricted to idealized compressive sensing applications, represent only a subset of those actually used in practice. In contrast, herein sparse estimation is considered in the context of structured dictionaries possibly exhibiting high coherence between arbitrary groups of columns and/or rows. Sparse penalized regression models are analyzed with the purpose of finding, to the extent possible, regimes of dictionary-invariant performance. In particular, a Type II Bayesian estimator with a dictionary-dependent sparsity penalty is shown to have a number of desirable invariance properties leading to provable advantages over more conventional penalties such as the ℓ1 norm, especially in areas where existing theoretical recovery guarantees no longer hold. This can translate into improved performance in applications such as model selection with correlated features, source localization, and compressive sensing with constrained measurement directions.
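To make the Type I vs. Type II comparison in the abstract concrete, below is a minimal numerical sketch, not the paper's algorithm verbatim: it contrasts a standard ℓ1-penalized (Type I) estimator, solved with iterative soft thresholding (ISTA), against a basic Type II sparse Bayesian learning estimator with EM hyperparameter updates in the spirit of Tipping [14] and Wipf [17], run on a deliberately coherent dictionary. The dictionary construction, noise level, regularization weight, and iteration counts are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Deliberately coherent dictionary: every column shares a common component,
# so arbitrary groups of columns are highly correlated (far from the ideal
# i.i.d. Gaussian case where standard ell_1 guarantees apply).
n, m, k = 50, 100, 4
shared = rng.standard_normal((n, 1))
Phi = 0.3 * rng.standard_normal((n, m)) + shared
Phi /= np.linalg.norm(Phi, axis=0)               # unit ell-2 norm columns

x_true = np.zeros(m)
support = rng.choice(m, size=k, replace=False)
x_true[support] = rng.standard_normal(k)
y = Phi @ x_true                                 # noiseless measurements

def ista(Phi, y, lam=1e-4, iters=5000):
    """Type I: ell_1-penalized regression solved by iterative soft thresholding."""
    L = np.linalg.norm(Phi, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(Phi.shape[1])
    for _ in range(iters):
        z = x - Phi.T @ (Phi @ x - y) / L        # gradient step on the data fit
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

def sbl_em(Phi, y, noise_var=1e-6, iters=300):
    """Type II: sparse Bayesian learning with EM updates on hyperparameters gamma."""
    n, m = Phi.shape
    gamma = np.ones(m)
    for _ in range(iters):
        # Marginal covariance Sigma_y = noise_var*I + Phi diag(gamma) Phi^T
        Sigma_y = noise_var * np.eye(n) + (Phi * gamma) @ Phi.T
        mu = gamma * (Phi.T @ np.linalg.solve(Sigma_y, y))      # posterior mean
        B = np.linalg.solve(Sigma_y, Phi)                       # Sigma_y^{-1} Phi
        diag_Sigma = gamma - gamma**2 * np.sum(Phi * B, axis=0) # posterior variances
        gamma = mu**2 + np.maximum(diag_Sigma, 0.0)             # EM update for gamma
    return mu

for name, x_hat in [("ell_1 (ISTA)", ista(Phi, y)), ("Type II (SBL)", sbl_em(Phi, y))]:
    est = np.sort(np.argsort(-np.abs(x_hat))[:k])
    print(f"{name:14s} top-{k} support {est}  (true {np.sort(support)})")
```

With columns this coherent, the incoherence-based recovery guarantees for ℓ1 no longer apply, which is exactly the regime the paper targets; the Type II run illustrates how the dictionary-dependent penalty, implicit in marginalizing over gamma, can remain effective where the fixed ℓ1 penalty degrades.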


reference text

[1] S. Baillet, J.C. Mosher, and R.M. Leahy, “Electromagnetic brain mapping,” IEEE Signal Processing Magazine, pp. 14–30, Nov. 2001.

[2] A.M. Bruckstein, M. Elad, and M. Zibulevsky, “A non-negative and sparse enough solution of an underdetermined linear system of equations is unique,” IEEE Trans. Information Theory, vol. 54, no. 11, pp. 4813–4820, Nov. 2008.

[3] E. Candès, J. Romberg, and T. Tao, “Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information,” IEEE Trans. Information Theory, vol. 52, no. 2, pp. 489–509, Feb. 2006.

[4] E. Candès, M. Wakin, and S. Boyd, “Enhancing sparsity by reweighted ℓ1 minimization,” J. Fourier Anal. Appl., vol. 14, no. 5, pp. 877–905, 2008.

[5] R. Chartrand and W. Yin, “Iteratively reweighted algorithms for compressive sensing,” Proc. Int. Conf. Acoustics, Speech, and Signal Processing, 2008.

[6] S.F. Cotter, B.D. Rao, K. Engan, and K. Kreutz-Delgado, “Sparse solutions to linear inverse problems with multiple measurement vectors,” IEEE Trans. Signal Processing, vol. 53, no. 7, pp. 2477–2488, April 2005.

[7] D.L. Donoho and M. Elad, “Optimally sparse representation in general (nonorthogonal) dictionaries via ℓ1 minimization,” Proc. National Academy of Sciences, vol. 100, no. 5, pp. 2197– 2202, March 2003.

[8] J.J. Fuchs, “On sparse representations in arbitrary redundant bases,” IEEE Trans. Information Theory, vol. 50, no. 6, pp. 1341–1344, June 2004.

[9] S. Ji, D. Dunson, and L. Carin, “Multi-task compressive sensing,” IEEE Trans. Signal Processing, vol. 57, no. 1, pp. 92–106, Jan. 2009.

[10] D.M. Malioutov, M. Çetin, and A.S. Willsky, “A sparse signal reconstruction perspective for source localization with sensor arrays,” IEEE Trans. Signal Processing, vol. 53, no. 8, pp. 3010–3022, August 2005.

[11] B.D. Rao, K. Engan, S. F. Cotter, J. Palmer, and K. Kreutz-Delgado, “Subset selection in noise based on diversity measure minimization,” IEEE Trans. Signal Processing, vol. 51, no. 3, pp. 760–770, March 2003.

[12] J. Sarvas, “Basic mathematical and electromagnetic concepts of the biomagnetic inverse problem,” Phys. Med. Biol., vol. 32, pp. 11–22, 1987.

[13] J.G. Silva, J.S. Marques, and J.M. Lemos, “Selecting landmark points for sparse manifold learning,” Advances in Neural Information Processing Systems 18, pp. 1241–1248, 2006.

[14] M.E. Tipping, “Sparse Bayesian learning and the relevance vector machine,” Journal of Machine Learning Research, vol. 1, pp. 211–244, 2001.

[15] J.A. Tropp, “Algorithms for simultaneous sparse approximation. Part II: Convex relaxation,” Signal Processing, vol. 86, pp. 589–602, April 2006.

[16] M.B. Wakin, M.F. Duarte, S. Sarvotham, D. Baron, and R.G. Baraniuk, “Recovery of jointly sparse signals from a few random projections,” Advances in Neural Information Processing Systems 18, pp. 1433–1440, 2006.

[17] D.P. Wipf, Bayesian Methods for Finding Sparse Representations, PhD Thesis, University of California, San Diego, 2006.

[18] D.P. Wipf and S. Nagarajan, “Iterative reweighted ℓ1 and ℓ2 methods for finding sparse solutions,” IEEE J. Selected Topics in Signal Processing (Special Issue on Compressive Sensing), vol. 4, no. 2, April 2010.