91 nips-2013-Dirty Statistical Models


Author: Eunho Yang, Pradeep Ravikumar

Abstract: We provide a unified framework for the high-dimensional analysis of “superposition-structured” or “dirty” statistical models, in which the model parameters are a superposition of structurally constrained parameters. We allow for any number and type of structures, and any statistical model. We consider the general class of M-estimators that minimize the sum of a loss function and an instance of what we call “hybrid” regularization: the infimal convolution of weighted regularization functions, one for each structural component. We provide corollaries showcasing our unified framework for varied statistical models such as linear regression, multiple regression, and principal component analysis, over varied superposition structures.
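To make the setup concrete, below is a minimal illustrative sketch, not the paper's own method: a proximal-gradient solver for one simple superposition-structured instance, linear regression whose observations are contaminated by sparse gross corruptions, so the fit is y ≈ X·theta_s + theta_c with a separately weighted l1 penalty on each component. The names dirty_lasso, lam_s, and lam_c are hypothetical, and the algorithmic choice is an assumption; the paper analyzes estimators of this superposition form but does not prescribe this solver.

import numpy as np

def soft_threshold(v, t):
    # Elementwise soft-thresholding: the proximal operator of t * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def dirty_lasso(X, y, lam_s, lam_c, n_iters=500):
    # Proximal gradient descent for
    #   min_{theta_s, theta_c} 0.5*||y - X theta_s - theta_c||^2
    #                          + lam_s*||theta_s||_1 + lam_c*||theta_c||_1
    n, p = X.shape
    theta_s = np.zeros(p)   # structurally sparse regression coefficients
    theta_c = np.zeros(n)   # sparse gross corruptions of the observations
    # Step size 1/L, where L is the squared spectral norm of the joint design [X, I].
    L = np.linalg.norm(np.hstack([X, np.eye(n)]), 2) ** 2
    for _ in range(n_iters):
        r = X @ theta_s + theta_c - y   # shared residual; gradient of the smooth loss
        theta_s = soft_threshold(theta_s - (X.T @ r) / L, lam_s / L)
        theta_c = soft_threshold(theta_c - r / L, lam_c / L)
    return theta_s, theta_c

# Usage: recover a sparse signal when a few observations are grossly corrupted.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
theta_true = np.zeros(20)
theta_true[:3] = 1.0
y = X @ theta_true
y[:5] += 10.0                           # gross corruptions on a few observations
theta_s_hat, theta_c_hat = dirty_lasso(X, y, lam_s=0.1, lam_c=1.0)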


References

[1] A. Agarwal, S. Negahban, and M. J. Wainwright. Noisy matrix decomposition via convex relaxation: Optimal rates in high dimensions. Annals of Statistics, 40(2):1171–1197, 2012.

[2] E. J. Candès, J. K. Romberg, and T. Tao. Stable signal recovery from incomplete and inaccurate measurements. Communications on Pure and Applied Mathematics, 59(8):1207–1223, 2006.

[3] E. J. Candès, X. Li, Y. Ma, and J. Wright. Robust principal component analysis? Journal of the ACM, 58(3), May 2011.

[4] V. Chandrasekaran, B. Recht, P. A. Parrilo, and A. S. Willsky. The convex geometry of linear inverse problems. In 48th Annual Allerton Conference on Communication, Control and Computing, 2010.

[5] V. Chandrasekaran, S. Sanghavi, P. A. Parrilo, and A. S. Willsky. Rank-sparsity incoherence for matrix decomposition. SIAM Journal on Optimization, 21(2), 2011.

[6] V. Chandrasekaran, P. A. Parrilo, and A. S. Willsky. Latent variable graphical model selection via convex optimization. Annals of Statistics (with discussion), 40(4), 2012.

[7] D. Hsu, S. M. Kakade, and T. Zhang. Robust matrix decomposition with sparse corruptions. IEEE Transactions on Information Theory, 57:7221–7234, 2011.

[8] A. Jalali, P. Ravikumar, S. Sanghavi, and C. Ruan. A dirty model for multi-task learning. In Neural Information Processing Systems (NIPS) 23, 2010.

[9] M. McCoy and J. A. Tropp. Two proposals for robust PCA using semidefinite programming. Electronic Journal of Statistics, 5:1123–1160, 2011.

[10] S. Negahban and M. J. Wainwright. Estimation of (near) low-rank matrices with noise and high-dimensional scaling. Annals of Statistics, 39(2):1069–1097, 2011.

[11] S. Negahban, P. Ravikumar, M. J. Wainwright, and B. Yu. A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers. Statistical Science, 27(4):538–557, 2012.

[12] G. Raskutti, M. J. Wainwright, and B. Yu. Restricted eigenvalue properties for correlated Gaussian designs. Journal of Machine Learning Research (JMLR), 11:2241–2259, 2010.

[13] R. Vershynin. Introduction to the non-asymptotic analysis of random matrices. In Compressed Sensing: Theory and Applications. Cambridge University Press, 2012.

[14] H. Xu and C. Leng. Robust multi-task regression with grossly corrupted observations. In International Conference on Artificial Intelligence and Statistics (AISTATS), 2012.

[15] H. Xu, C. Caramanis, and S. Sanghavi. Robust PCA via outlier pursuit. IEEE Transactions on Information Theory, 58(5):3047–3064, 2012.