JMLR 2010, paper 72 (jmlr2010-72)
Source: pdf
Authors: Raghunandan H. Keshavan, Andrea Montanari, Sewoong Oh
Abstract: Given a matrix M of low rank, we consider the problem of reconstructing it from noisy observations of a small, random subset of its entries. The problem arises in a variety of applications, from collaborative filtering (the ‘Netflix problem’) to structure-from-motion and positioning. We study a low-complexity algorithm introduced by Keshavan, Montanari, and Oh (2010), based on a combination of spectral techniques and manifold optimization, that we call here OptSpace. We prove performance guarantees that are order-optimal in a number of circumstances. Keywords: matrix completion, low-rank matrices, spectral methods, manifold optimization
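The abstract describes OptSpace as combining spectral techniques with manifold optimization. As an illustration of the spectral idea only (a minimal sketch, not the authors' full algorithm with trimming and manifold refinement), the snippet below rescales a zero-filled observation matrix by the inverse sampling rate and keeps its top rank-r SVD component; the function name `spectral_estimate` and the NumPy setup are illustrative assumptions, not from the paper.

```python
import numpy as np

def spectral_estimate(M_obs, mask, r):
    """Rank-r spectral estimate of a partially observed matrix.

    M_obs: matrix with unobserved entries set to 0
    mask:  boolean array, True where an entry was observed
    r:     target rank

    Illustrative sketch: rescaling by 1/p (p = fraction of observed
    entries) makes the zero-filled matrix an unbiased estimate of M,
    and rank-r truncation suppresses the sampling noise.
    """
    m, n = M_obs.shape
    p = mask.sum() / (m * n)          # empirical sampling rate
    M_scaled = M_obs / p              # unbiased estimate of M
    U, s, Vt = np.linalg.svd(M_scaled, full_matrices=False)
    # Keep only the top-r singular triplets
    return U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]
```

On a random rank-2 matrix with half of the entries observed, this estimate is already substantially closer to the full matrix (in Frobenius norm) than the rescaled observations alone; in OptSpace it serves only as the starting point for the manifold-optimization phase.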
P.-A. Absil, R. Mahony, and R. Sepulchre. Optimization Algorithms on Matrix Manifolds. Princeton University Press, 2008.
D. Achlioptas and F. McSherry. Fast computation of low-rank matrix approximations. J. ACM, 54(2):9, 2007.
J.-F. Cai, E. J. Candès, and Z. Shen. A singular value thresholding algorithm for matrix completion. arXiv:0810.3286, 2008.
E. J. Candès and Y. Plan. Matrix completion with noise. arXiv:0903.3131, 2009.
E. J. Candès and B. Recht. Exact matrix completion via convex optimization. arXiv:0805.4471, 2008.
E. J. Candès and T. Tao. The power of convex relaxation: Near-optimal matrix completion. arXiv:0903.1476, 2009.
A. Edelman, T. A. Arias, and S. T. Smith. The geometry of algorithms with orthogonality constraints. SIAM J. Matr. Anal. Appl., 20:303–353, 1999.
M. Fazel. Matrix Rank Minimization with Applications. PhD thesis, Stanford University, 2002.
A. Frieze, R. Kannan, and S. Vempala. Fast Monte Carlo algorithms for finding low-rank approximations. J. ACM, 51(6):1025–1041, 2004. ISSN 0004-5411.
R. H. Keshavan and S. Oh. OptSpace: A gradient descent algorithm on the Grassmann manifold for matrix completion. arXiv:0910.5260, 2009.
R. H. Keshavan, A. Montanari, and S. Oh. Matrix completion from a few entries. IEEE Trans. Inform. Theory, 56(6):2980–2998, June 2010.
K. Lee and Y. Bresler. ADMiRA: Atomic decomposition for minimum rank approximation. arXiv:0905.0044, 2009.
S. Ma, D. Goldfarb, and L. Chen. Fixed point and Bregman iterative methods for matrix rank minimization. arXiv:0905.1643, 2009.
B. Recht, M. Fazel, and P. Parrilo. Guaranteed minimum rank solutions of matrix equations via nuclear norm minimization. arXiv:0706.4138, 2007.
R. Salakhutdinov and A. Mnih. Probabilistic matrix factorization. In Advances in Neural Information Processing Systems, volume 20, 2008.
R. Salakhutdinov and N. Srebro. Collaborative filtering in a non-uniform world: Learning with the weighted trace norm. arXiv:1002.2780, 2010.
R. Salakhutdinov, A. Mnih, and G. Hinton. Restricted Boltzmann machines for collaborative filtering. In Proceedings of the International Conference on Machine Learning, volume 24, pages 791–798, 2007.
Y. Seginer. The expected norm of random matrices. Comb. Probab. Comput., 9:149–166, March 2000. ISSN 0963-5483. doi: 10.1017/S096354830000420X. URL http://portal.acm.org/citation.cfm?id=971471.971475.
N. Srebro and T. S. Jaakkola. Weighted low-rank approximations. In 20th International Conference on Machine Learning, pages 720–727. AAAI Press, 2003.
N. Srebro, J. D. M. Rennie, and T. S. Jaakkola. Maximum-margin matrix factorization. In Advances in Neural Information Processing Systems 17, pages 1329–1336. MIT Press, 2005.
M. Talagrand. A new look at independence. The Annals of Probability, 24(1):1–34, 1996. ISSN 0091-1798. URL http://www.jstor.org/stable/2244830.
K. Toh and S. Yun. An accelerated proximal gradient algorithm for nuclear norm regularized least squares problems. http://www.math.nus.edu.sg/~matys, 2009.