nips2008-215: reference knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Volkan Cevher, Marco F. Duarte, Chinmay Hegde, Richard Baraniuk
Abstract: Compressive Sensing (CS) combines sampling and compression into a single sub-Nyquist linear measurement process for sparse and compressible signals. In this paper, we extend the theory of CS to include signals that are concisely represented in terms of a graphical model. In particular, we use Markov Random Fields (MRFs) to represent sparse signals whose nonzero coefficients are clustered. Our new model-based recovery algorithm, dubbed Lattice Matching Pursuit (LaMP), stably recovers MRF-modeled signals using many fewer measurements and computations than the current state-of-the-art algorithms.
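To make the recovery loop sketched in the abstract concrete, here is a minimal, self-contained Python sketch of a greedy model-based recovery iteration in the spirit of LaMP. Everything in it is illustrative rather than the authors' code: the function name lamp_sketch and the synthetic test signal are hypothetical, and the support-selection step, which in the actual algorithm solves an Ising-model MAP problem via graph cuts [13-16] to favor clustered supports, is replaced here by a plain top-K magnitude threshold.

```python
# Minimal sketch of a greedy, model-based recovery loop in the spirit of LaMP.
# NOTE: the real LaMP support step runs graph cuts on a lattice MRF (Ising
# model) to favor clustered supports; the top-K threshold below is only a
# placeholder stand-in, not the paper's method.
import numpy as np

def lamp_sketch(Phi, y, K, n_iters=20):
    """Estimate a K-sparse x from measurements y = Phi @ x."""
    M, N = Phi.shape
    x_hat = np.zeros(N)
    for _ in range(n_iters):
        residual = y - Phi @ x_hat
        proxy = Phi.T @ residual            # correlate residual with columns
        # --- model-based support step (placeholder) ---
        # LaMP would compute an MRF MAP estimate via graph cuts here;
        # we simply keep the K largest entries of the merged estimate.
        support = np.argsort(np.abs(x_hat + proxy))[-K:]
        # least-squares fit of the signal on the chosen support
        sol, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        x_new = np.zeros(N)
        x_new[support] = sol
        if np.allclose(x_new, x_hat, atol=1e-10):
            break
        x_hat = x_new
    return x_hat

# Tiny usage example with a synthetic clustered-sparse signal.
rng = np.random.default_rng(0)
N, M, K = 256, 96, 12
x = np.zeros(N)
x[100:100 + K] = rng.standard_normal(K)     # one contiguous cluster
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
y = Phi @ x
x_rec = lamp_sketch(Phi, y, K)
print("relative error:", np.linalg.norm(x_rec - x) / np.linalg.norm(x))
```

The placeholder support step discards the clustering prior that gives LaMP its measurement savings; substituting a graph-cut MAP estimate over the signal's lattice MRF is precisely the step that distinguishes the paper's algorithm from generic greedy pursuits such as CoSaMP [4].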
[1] D. L. Donoho. Compressed sensing. IEEE Trans. Info. Theory, 52(4):1289–1306, April 2006.
[2] E. J. Candès. Compressive sampling. In Proc. International Congress of Mathematicians, volume 3, pages 1433–1452, Madrid, Spain, 2006.
[3] S. L. Lauritzen. Graphical Models. Oxford University Press, 1996.
[4] D. Needell and J. Tropp. CoSaMP: Iterative signal recovery from incomplete and inaccurate samples. Applied and Computational Harmonic Analysis, June 2008. To appear.
[5] C. La and M. N. Do. Tree-based orthogonal matching pursuit algorithm for signal reconstruction. In IEEE Int. Conf. Image Processing (ICIP), pages 1277–1280, Atlanta, GA, Oct. 2006.
[6] M. F. Duarte, M. B. Wakin, and R. G. Baraniuk. Wavelet-domain compressive signal reconstruction using a hidden Markov tree model. In ICASSP, pages 5137–5140, Las Vegas, NV, April 2008.
[7] V. Cevher, A. Sankaranarayanan, M. F. Duarte, D. Reddy, R. G. Baraniuk, and R. Chellappa. Compressive sensing for background subtraction. In ECCV, Marseille, France, Oct. 2008.
[8] R. G. Baraniuk, M. Davenport, R. A. DeVore, and M. B. Wakin. A simple proof of the restricted isometry property for random matrices. 2006. To appear in Const. Approx.
[9] T. Blumensath and M. E. Davies. Sampling theorems for signals from the union of linear subspaces. 2007. Preprint.
[10] B. M. McCoy and T. T. Wu. The two-dimensional Ising model. Harvard University Press, 1973.
[11] M. J. Wainwright, P. Ravikumar, and J. D. Lafferty. High-dimensional graphical model selection using ℓ1-regularized logistic regression. In Proc. of Advances in NIPS, 2006.
[12] D. P. Wipf and B. D. Rao. Sparse Bayesian learning for basis selection. IEEE Trans. Sig. Proc., 52(8):2153–2164, August 2004.
[13] J. Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann Publishers, 1988.
[14] V. Kolmogorov and R. Zabih. What energy functions can be minimized via graph cuts? IEEE Trans. on Pattern Anal. and Mach. Int., 26(2):147–159, 2004.
[15] Y. Boykov, O. Veksler, and R. Zabih. Fast approximate energy minimization via graph cuts. IEEE Trans. on Pattern Anal. and Mach. Int., 23(11):1222–1239, Nov. 2001.
[16] Y. Boykov and V. Kolmogorov. An experimental comparison of min-cut/max-flow algorithms for energy minimization in vision. IEEE Trans. on Pattern Anal. and Mach. Int., 26(9):1124–1137, Sept. 2004.
[17] E. T. Hale, W. Yin, and Y. Zhang. A fixed-point continuation method for ℓ1-regularized minimization with applications to compressed sensing. Technical Report TR07-07, Rice University, CAAM Dept., 2007.