NIPS 2011, paper 264: reference knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Alexandra Carpentier, Odalric-ambrym Maillard, Rémi Munos
Abstract: We consider the problem of recovering the parameter α ∈ R^K of a sparse function f (i.e., the number of non-zero entries of α is small compared to the number K of features) given noisy evaluations of f at a set of well-chosen sampling points. We introduce an additional randomization process, called Brownian sensing, based on the computation of stochastic integrals, which produces a Gaussian sensing matrix for which good recovery properties are proven, independently of the number N of sampling points, even when the features are arbitrarily non-orthogonal. Under the assumption that f is Hölder continuous with exponent at least 1/2, we provide an estimate α̂ of the parameter such that ‖α̂ − α‖₂ = O(‖η‖₂/√N), where η is the observation noise. The method uses a set of sampling points uniformly distributed along a one-dimensional curve selected according to the features. We report numerical experiments illustrating our method.
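To make the recovery setting of the abstract concrete, the following is a minimal numerical sketch: a sparse parameter α is recovered from N noisy linear measurements taken through a Gaussian sensing matrix, using iterative soft-thresholding (ISTA) as a stand-in ℓ1 solver. This only illustrates the generic sparse-recovery step; the paper's actual Brownian-sensing construction (stochastic integrals along a curve chosen from the features) is not reproduced here, and all sizes and constants below are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Problem sizes: K features, of which only s are active; N noisy measurements.
K, s, N = 64, 4, 32

# Ground-truth sparse parameter alpha (s non-zero entries).
alpha = np.zeros(K)
support = rng.choice(K, size=s, replace=False)
alpha[support] = rng.standard_normal(s)

# Gaussian sensing matrix; the Brownian-sensing construction in the paper also
# yields a Gaussian matrix, but here we simply draw one directly.
A = rng.standard_normal((N, K)) / np.sqrt(N)
eta = 0.01 * rng.standard_normal(N)   # observation noise
y = A @ alpha + eta

# ISTA: iterative soft-thresholding for l1-regularised least squares.
lam = 0.01
L = np.linalg.norm(A, 2) ** 2         # Lipschitz constant of the gradient
x = np.zeros(K)
for _ in range(5000):
    grad = A.T @ (A @ x - y)
    z = x - grad / L
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)

# The estimate x should be close to alpha in l2 norm.
print(np.linalg.norm(x - alpha))
```

With N = 32 measurements of a 4-sparse vector in dimension 64, the ℓ2 recovery error is small, consistent with the O(‖η‖₂/√N) behaviour the abstract describes for the full method.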
[1] R. Baraniuk, M. Davenport, R. DeVore, and M. Wakin. A simple proof of the restricted isometry property for random matrices. Constructive Approximation, 28(3):253–263, 2008.
[2] G. Bennett. Probability inequalities for the sum of independent random variables. Journal of the American Statistical Association, 57(297):33–45, 1962.
[3] E. Candès and J. Romberg. Sparsity and incoherence in compressive sampling. Inverse Problems, 23:969–985, 2007.
[4] E. Candès and T. Tao. The Dantzig selector: statistical estimation when p is much larger than n. Annals of Statistics, 35(6):2313–2351, 2007.
[5] E.J. Candès, J. Romberg, and T. Tao. Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Transactions on Information Theory, 52(2):489–509, 2006.
[6] E.J. Candès, J.K. Romberg, and T. Tao. Stable signal recovery from incomplete and inaccurate measurements. Communications on Pure and Applied Mathematics, 59(8):1207–1223, 2006.
[7] D.L. Donoho. Compressed sensing. IEEE Transactions on Information Theory, 52(4):1289– 1306, 2006.
[8] D.L. Donoho and P.B. Stark. Uncertainty principles and signal recovery. SIAM Journal on Applied Mathematics, 49(3):906–931, 1989.
[9] M. Fornasier and H. Rauhut. Compressive Sensing. In O. Scherzer, editor, Handbook of Mathematical Methods in Imaging. Springer, to appear.
[10] S. Foucart and M.J. Lai. Sparsest solutions of underdetermined linear systems via ℓq-minimization for 0 < q ≤ 1. Applied and Computational Harmonic Analysis, 26(3):395–407, 2009.
[11] V. Koltchinskii. The Dantzig selector and sparsity oracle inequalities. Bernoulli, 15(3):799– 828, 2009.
[12] H. Rauhut. Compressive Sensing and Structured Random Matrices. Theoretical Foundations and Numerical Methods for Sparse Recovery, 9, 2010.
[13] H. Rauhut and R. Ward. Sparse Legendre expansions via ℓ1-minimization. arXiv preprint arXiv:1003.0251, 2010.
[14] M. Rudelson and R. Vershynin. On sparse reconstruction from Fourier and Gaussian measurements. Communications on Pure and Applied Mathematics, 61(8):1025–1045, 2008.
[15] Robert Tibshirani. Regression shrinkage and selection via the Lasso. Journal of the Royal Statistical Society, Series B, 58(1):267–288, 1996.
[16] Sara A. van de Geer. The deterministic lasso. Seminar für Statistik, Eidgenössische Technische Hochschule (ETH) Zürich, 2007.
[17] Sara A. van de Geer and Peter Bühlmann. On the conditions used to prove oracle results for the lasso. Electronic Journal of Statistics, 3:1360–1392, 2009.
[18] P. Zhao and B. Yu. On model selection consistency of Lasso. The Journal of Machine Learning Research, 7:2541–2563, 2006.