
NIPS 2007, Paper 21: Adaptive Online Gradient Descent


Source: pdf

Author: Elad Hazan, Alexander Rakhlin, Peter L. Bartlett

Abstract: We study the rates of growth of the regret in online convex optimization. First, we show that a simple extension of the algorithm of Hazan et al. eliminates the need for a priori knowledge of the lower bound on the second derivatives of the observed functions. We then provide an algorithm, Adaptive Online Gradient Descent, which interpolates between the results of Zinkevich for linear functions and of Hazan et al. for strongly convex functions, achieving intermediate rates between √T and log T. Furthermore, we show strong optimality of the algorithm. Finally, we provide an extension of our results to general norms.
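The abstract describes a projected online gradient descent scheme whose step sizes reflect how much curvature (strong convexity) the observed losses carry, interpolating between the √T regret of Zinkevich [2] for linear losses and the log T regret of Hazan et al. [3] for strongly convex losses. The Python sketch below illustrates that mechanism under simplifying assumptions: the function names (project_l2_ball, online_gradient_descent), the l2-ball domain, and the particular step-size schedule are illustrative choices, not the paper's exact Adaptive Online Gradient Descent rule.

import numpy as np


def project_l2_ball(x, radius=1.0):
    """Euclidean projection onto the l2 ball of the given radius."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)


def online_gradient_descent(gradient, curvatures, dim, radius=1.0):
    """Projected online gradient descent over an l2 ball (illustrative sketch).

    gradient(t, x): returns the gradient of the round-t loss at the point x.
    curvatures[t-1]: strong-convexity parameter of the round-t loss (0 if linear).

    Step-size schedule (a simplification, not the paper's adaptive rule):
      - once curvature has accumulated, eta_t = 1 / sum_{s<=t} lambda_s  (log T regime)
      - otherwise                       eta_t = radius / sqrt(t)         (sqrt(T) regime)
    """
    x = np.zeros(dim)
    cumulative_curvature = 0.0
    iterates = []
    for t in range(1, len(curvatures) + 1):
        iterates.append(x.copy())
        g = gradient(t, x)
        cumulative_curvature += curvatures[t - 1]
        if cumulative_curvature > 0:
            eta = 1.0 / cumulative_curvature
        else:
            eta = radius / np.sqrt(t)
        x = project_l2_ball(x - eta * g, radius)
    return iterates


# Hypothetical usage: quadratic losses f_t(x) = 0.5 * ||x - z_t||^2, each 1-strongly convex.
rng = np.random.default_rng(0)
targets = rng.normal(size=(100, 5))
iterates = online_gradient_descent(
    gradient=lambda t, x: x - targets[t - 1],
    curvatures=[1.0] * 100,
    dim=5,
)

The point the abstract emphasizes is that the curvature need not be known a priori; in this sketch the schedule simply accumulates whatever strong-convexity parameters the losses report, falling back to a 1/√t decay when none has appeared.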


reference text

[1] Nicolò Cesa-Bianchi and Gábor Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006.

[2] Martin Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In ICML, pages 928–936, 2003.

[3] Elad Hazan, Adam Kalai, Satyen Kale, and Amit Agarwal. Logarithmic regret algorithms for online convex optimization. In COLT, pages 499–513, 2006.

[4] Shai Shalev-Shwartz and Yoram Singer. Convex repeated games and Fenchel duality. In B. Schölkopf, J. Platt, and T. Hoffman, editors, Advances in Neural Information Processing Systems 19. MIT Press, Cambridge, MA, 2007.

[5] Shai Shalev-Shwartz and Yoram Singer. Logarithmic regret algorithms for strongly convex repeated games. Technical Report 2007-42, The Hebrew University, 2007.

[6] C. Gentile and M. K. Warmuth. Proving relative loss bounds for on-line learning algorithms using Bregman divergences. COLT Tutorial, 2000.