
JMLR 2011, paper 6: A Simpler Approach to Matrix Completion



Author: Benjamin Recht

Abstract: This paper provides the best bounds to date on the number of randomly sampled entries required to reconstruct an unknown low-rank matrix. These results improve on prior work by Candès and Recht (2009), Candès and Tao (2009), and Keshavan et al. (2009). The reconstruction is accomplished by minimizing the nuclear norm, or sum of the singular values, of the hidden matrix subject to agreement with the provided entries. If the underlying matrix satisfies a certain incoherence condition, then the number of entries required is equal to a quadratic logarithmic factor times the number of parameters in the singular value decomposition. The proof of this assertion is short, self-contained, and uses very elementary analysis. The novel techniques herein are based on recent work in quantum information theory.

Keywords: matrix completion, low-rank matrices, convex optimization, nuclear norm minimization, random matrices, operator Chernoff bound, compressed sensing
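The nuclear-norm program described in the abstract can be approximated numerically by singular value thresholding, a proximal iteration in the style of Cai, Candès, and Shen (2010). The sketch below is illustrative only and is not the paper's analysis; the names `svt_complete`, `tau`, and `step` are my own, and the parameter values are unturned defaults.

```python
import numpy as np

def svt_complete(M_obs, mask, tau=5.0, step=1.0, iters=300):
    """Toy singular value thresholding for nuclear-norm matrix completion.

    M_obs: matrix with unobserved entries zeroed out.
    mask:  boolean array marking the observed entries.
    tau:   shrinkage level applied to the singular values.
    """
    Y = np.zeros_like(M_obs)  # dual variable, supported on observed entries
    X = np.zeros_like(M_obs)
    for _ in range(iters):
        # Shrink the singular values of Y by tau (proximal step for the
        # nuclear norm); small singular values are set to zero.
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        X = U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
        # Gradient ascent on the dual: push X toward the observed entries.
        Y = Y + step * mask * (M_obs - X)
    return X

# Rank-1 ground truth with roughly half the entries observed.
rng = np.random.default_rng(0)
M = rng.standard_normal((8, 1)) @ rng.standard_normal((1, 8))
mask = rng.random(M.shape) < 0.5
X = svt_complete(M * mask, mask)
```

As the iteration proceeds, `X` matches the observed entries increasingly well while keeping its nuclear norm small; under the incoherence and sampling conditions the paper studies, the minimum-nuclear-norm solution coincides with the hidden low-rank matrix.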


References

Rudolf Ahlswede and Andreas Winter. Strong converse for identification via quantum channels. IEEE Transactions on Information Theory, 48(3):569–579, 2002.

Andreas Argyriou, Charles A. Micchelli, and Massimiliano Pontil. Convex multi-task feature learning. Machine Learning, 2008. Published online first at http://www.springerlink.com/.

Carolyn Beck and Raffaello D'Andrea. Computational study and comparisons of LFT reducibility methods. In Proceedings of the American Control Conference, 1998.

Emmanuel Candès and Benjamin Recht. Exact matrix completion via convex optimization. Foundations of Computational Mathematics, 9(6):717–772, 2009.

Emmanuel J. Candès and Yaniv Plan. Matrix completion with noise. Proceedings of the IEEE, 98(6):925–936, 2009.

Emmanuel J. Candès and Terence Tao. The power of convex relaxation: Near-optimal matrix completion. IEEE Transactions on Information Theory, 56(5):2053–2080, 2009.

Emmanuel J. Candès, Justin Romberg, and Terence Tao. Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Transactions on Information Theory, 52(2):489–509, 2006. ISSN 0018-9448.

Alexander L. Chistov and Dima Yu. Grigoriev. Complexity of quantifier elimination in the theory of algebraically closed fields. In Proceedings of the 11th Symposium on Mathematical Foundations of Computer Science, volume 176 of Lecture Notes in Computer Science, pages 17–31. Springer Verlag, 1984.

Victor H. de la Peña and Stephen J. Montgomery-Smith. Decoupling inequalities for the tail probabilities of multivariate U-statistics. Annals of Probability, 23(2):806–816, 1995. ISSN 0091-1798.

Maryam Fazel. Matrix Rank Minimization with Applications. PhD thesis, Stanford University, 2002.

Maryam Fazel, Haitham Hindi, and Stephen Boyd. A rank minimization heuristic with application to minimum order system approximation. In Proceedings of the American Control Conference, 2001.

Jean Jacques Fuchs. On sparse representations in arbitrary redundant bases. IEEE Transactions on Information Theory, 50:1341–1344, 2004.

Sidney Golden. Lower bounds for the Helmholtz function. Physical Review, 137B(4):B1127–B1128, 1965.

Gaston H. Gonnet. Expected length of the longest probe sequence in hash code searching. Journal of the Association for Computing Machinery, 28(2):289–304, 1981.

David Gross. Recovering low-rank matrices from few coefficients in any basis. IEEE Transactions on Information Theory, 57:1548–1566, 2011.

David Gross, Yi-Kai Liu, Steven T. Flammia, Stephen Becker, and Jens Eisert. Quantum state tomography via compressed sensing. Physical Review Letters, 105(15):150401, 2010.

Torben Hagerup and Christine Rüb. A guided tour of Chernoff bounds. Information Processing Letters, 33:305–308, 1990.

Raghunandan H. Keshavan, Andrea Montanari, and Sewoong Oh. Matrix completion from a few entries. IEEE Transactions on Information Theory, 56(6):2980–2998, 2009.

Yehuda Koren, Robert Bell, and Chris Volinsky. Matrix factorization techniques for recommender systems. Computer, 42(8):30–37, 2009.

Nathan Linial, Eran London, and Yuri Rabinovich. The geometry of graphs and some of its algorithmic applications. Combinatorica, 15:215–245, 1995.

Mehran Mesbahi and George P. Papavassilopoulos. On the rank minimization problem over a positive semidefinite linear matrix inequality. IEEE Transactions on Automatic Control, 42(2):239–243, 1997.

Guillaume Obozinski, Ben Taskar, and Michael I. Jordan. Joint covariate selection and joint subspace selection for multiple classification problems. Statistics and Computing, pages 1–22, 2009.

Dmitry Panchenko. MIT 18.465: Statistical Learning Theory. MIT OpenCourseWare, http://ocw.mit.edu/OcwWeb/Mathematics/18-465Spring-2007/CourseHome/, 2007.

Benjamin Recht, Maryam Fazel, and Pablo Parrilo. Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. SIAM Review, 52(3):471–501, 2010.

Jason D. M. Rennie and Nathan Srebro. Fast maximum margin matrix factorization for collaborative prediction. In Proceedings of the International Conference on Machine Learning, 2005.

Anthony Man-Cho So and Yinyu Ye. Theory of semidefinite programming for sensor network localization. Mathematical Programming, Series B, 109:367–384, 2007.

Nathan Srebro. Learning with Matrix Factorizations. PhD thesis, Massachusetts Institute of Technology, 2004.

Colin J. Thompson. Inequality with applications in statistical mechanics. Journal of Mathematical Physics, 6(11):1812–1823, 1965.

Kilian Q. Weinberger and Lawrence K. Saul. Unsupervised learning of image manifolds by semidefinite programming. International Journal of Computer Vision, 70(1):77–90, 2006.