nips nips2011 nips2011-202 nips2011-202-reference knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Nati Srebro, Karthik Sridharan, Ambuj Tewari
Abstract: We show that for a general class of convex online learning problems, Mirror Descent can always achieve a (nearly) optimal regret guarantee.
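To make the abstract's algorithm concrete, here is a minimal sketch of one standard Mirror Descent instantiation: the entropy mirror map on the probability simplex, which yields the exponentiated-gradient update of Kivinen and Warmuth [11]. This is an illustrative sketch only, not code from the paper; the function name online_mirror_descent, the step size eta, and the uniform initialization are all assumptions made for the example.

    import numpy as np

    def online_mirror_descent(gradients, dim, eta=0.1):
        # Sketch: online mirror descent with the entropy mirror map on the
        # probability simplex (the exponentiated-gradient update [11]).
        # `gradients` is an iterable of loss (sub)gradients, one per round;
        # `eta` is a hypothetical, untuned step size.
        w = np.full(dim, 1.0 / dim)               # start at the uniform distribution
        iterates = [w.copy()]
        for g in gradients:
            w = w * np.exp(-eta * np.asarray(g))  # multiplicative (mirror) step
            w = w / w.sum()                       # Bregman projection onto the simplex
            iterates.append(w.copy())
        return iterates

    # Example: three rounds of linear losses in dimension 4.
    ws = online_mirror_descent([[1, 0, 0, 0], [0, 1, 0, 0], [1, 1, 0, 0]], dim=4)

Swapping in a different mirror map (e.g. half the squared Euclidean norm, which recovers online gradient descent [27]) changes only the update and projection steps; that flexibility across geometries is what the abstract's generality claim refers to.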
[1] J. Abernethy, P. L. Bartlett, A. Rakhlin, and A. Tewari. Optimal strategies and minimax lower bounds for online convex games. In Proceedings of the 21st Annual Conference on Learning Theory, 2008.
[2] A. Agarwal, P. L. Bartlett, P. Ravikumar, and M. J. Wainwright. Information-theoretic lower bounds on the oracle complexity of convex optimization.
[3] K. Ball, E. A. Carlen, and E. H. Lieb. Sharp uniform convexity and smoothness inequalities for trace norms. Invent. Math., 115:463–482, 1994.
[4] A. Beck and M. Teboulle. Mirror descent and nonlinear projected subgradient methods for convex optimization. Operations Research Letters, 31(3):167–175, 2003.
[5] H. D. Block. The perceptron: A model for brain functioning. Reviews of Modern Physics, 34:123–135, 1962. Reprinted in "Neurocomputing" by Anderson and Rosenfeld.
[6] V. Chandrasekaran, S. Sanghavi, P. Parrilo, and A. Willsky. Sparse and low-rank matrix decompositions. In IFAC Symposium on System Identification, 2009.
[7] J. Duchi, S. Shalev-Shwartz, Y. Singer, and T. Chandra. Efficient projections onto the ℓ1-ball for learning in high dimensions. In Proceedings of the 25th International Conference on Machine Learning, 2008.
[8] A. Jalali, P. Ravikumar, S. Sanghavi, and C. Ruan. A dirty model for multi-task learning. In NIPS, December 2010.
[9] A. Juditsky, G. Lan, A. Nemirovski, and A. Shapiro. Stochastic approximation approach to stochastic programming. SIAM Journal on Optimization, 19(4):1574–1609, 2009.
[10] S. M. Kakade, S. Shalev-Shwartz, and A. Tewari. On the duality of strong convexity and strong smoothness: Learning applications and matrix regularization, 2010.
[11] J. Kivinen and M. Warmuth. Exponentiated gradient versus gradient descent for linear predictors. Information and Computation, 132(1):1–64, January 1997.
[12] J. Langford, L. Li, and T. Zhang. Sparse online learning via truncated gradient. In Advances in Neural Information Processing Systems 21, pages 905–912, 2009.
[13] N. Littlestone. Learning quickly when irrelevant attributes abound: A new linear-threshold algorithm. Machine Learning, 2:285–318, 1988.
[14] A. Nemirovski and D. Yudin. On Cesàro's convergence of the gradient descent method for finding saddle points of convex-concave functions. Doklady Akademii Nauk SSSR, 239(4), 1978.
[15] A. Nemirovski and D. Yudin. Problem complexity and method efficiency in optimization. Nauka Publishers, Moscow, 1978.
[16] G. Pisier. Martingales with values in uniformly convex spaces. Israel Journal of Mathematics, 20(3–4):326–350, 1975.
[17] G. Pisier. Martingales in Banach spaces (in connection with type and cotype). Winter School/IHP Graduate Course, 2011.
[18] A. Rakhlin, K. Sridharan, and A. Tewari. Online learning: Random averages, combinatorial parameters, and learnability. NIPS, 2010.
[19] S. Shalev-Shwartz, O. Shamir, N. Srebro, and K. Sridharan. Stochastic convex optimization. In COLT, 2009.
[20] S. Shalev-Shwartz and Y. Singer. Convex repeated games and Fenchel duality. Advances in Neural Information Processing Systems, 19:1265, 2007.
[21] N. Srebro, J. D. M. Rennie, and T. S. Jaakkola. Maximum-margin matrix factorization. In Advances in Neural Information Processing Systems 17, pages 1329–1336. MIT Press, 2005.
[22] N. Srebro and A. Shraibman. Rank, trace-norm and max-norm. In Proceedings of the 18th Annual Conference on Learning Theory, pages 545–560. Springer-Verlag, 2005.
[23] N. Srebro and A. Tewari. Stochastic optimization for machine learning. Tutorial, ICML, 2010.
[24] K. Sridharan and A. Tewari. Convex games in Banach spaces. In Proceedings of the 23rd Annual Conference on Learning Theory, 2010.
[25] S. Shalev-Shwartz. Online Learning: Theory, Algorithms, and Applications. PhD thesis, Hebrew University of Jerusalem, 2007.
[26] M. K. Warmuth and D. Kuzmin. Randomized online PCA algorithms with regret bounds that are logarithmic in the dimension, 2007.
[27] M. Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In ICML, 2003.
[28] H. Zou and T. Hastie. Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society, Series B, 67:301–320, 2005.