NIPS 2011, paper 98: reference knowledge graph (maker-knowledge-mining)
Source: pdf
Author: Shie Mannor, Ohad Shamir
Abstract: We consider an adversarial online learning setting where a decision maker can choose an action in every stage of the game. In addition to observing the reward of the chosen action, the decision maker gets side observations on the reward he would have obtained had he chosen some of the other actions. The observation structure is encoded as a graph, where node i is linked to node j if sampling i provides information on the reward of j. This setting naturally interpolates between the well-known "experts" setting, where the decision maker can view all rewards, and the multi-armed bandit setting, where the decision maker can only view the reward of the chosen action. We develop practical algorithms with provable regret guarantees, which depend on non-trivial graph-theoretic properties of the information feedback structure. We also provide partially matching lower bounds.
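To make the graph-encoded feedback concrete, below is a minimal exponential-weights sketch (in Python) that uses side observations via importance weighting: every action whose reward is revealed this round is updated with an estimate divided by the probability of observing it. This is an illustrative assumption, not the paper's algorithm or its tuning; the feedback graph, the learning rate eta, and the rewards(t, j) callable are hypothetical placeholders, and rewards are assumed to lie in [0, 1].

    # Minimal sketch (not the paper's algorithm) of exponential weights
    # with graph-structured side observations.
    import math
    import random

    def observers(graph, j):
        """Actions whose play reveals the reward of action j (including j itself)."""
        return [i for i in graph if j in graph[i] or i == j]

    def run(graph, rewards, T, eta=0.1):
        actions = list(graph)
        weights = {a: 1.0 for a in actions}
        for t in range(T):
            total = sum(weights.values())
            probs = {a: w / total for a, w in weights.items()}
            chosen = random.choices(actions, weights=[probs[a] for a in actions])[0]
            # The chosen action reveals its own reward and those of its out-neighbours.
            observed = set(graph[chosen]) | {chosen}
            for j in observed:
                # Probability that action j's reward is observed this round.
                p_obs = sum(probs[i] for i in observers(graph, j))
                estimate = rewards(t, j) / p_obs        # importance-weighted estimate
                weights[j] *= math.exp(eta * estimate)  # exponential-weights update
        return probs

For instance, with graph = {0: [1], 1: [0, 2], 2: []}, playing action 1 also reveals the rewards of actions 0 and 2 in that round, so their estimates are updated without having been played; with an empty graph the sketch degenerates to a bandit-style update, and with a complete graph to the experts setting.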
[1] R. Agrawal. The continuum-armed bandit problem. SIAM J. Control and Optimization, 33:1926–1951, 1995.
[2] H. Arslan, Z. N. Chen, and M. G. Di Benedetto. Ultra Wideband Wireless Communication. Wiley-Interscience, 2006.
[3] J.-Y. Audibert and S. Bubeck. Minimax policies for adversarial and stochastic bandits. In COLT, 2009.
[4] P. Auer, N. Cesa-Bianchi, Y. Freund, and R. Schapire. The nonstochastic multiarmed bandit problem. SIAM J. Comput., 32(1):48–77, 2002.
[5] V. Baston. Some cyclic inequalities. Proceedings of the Edinburgh Mathematical Society (Series 2), 19:115–118, 1974.
[6] N. Cesa-Bianchi and G. Lugosi. Prediction, learning, and games. Cambridge University Press, 2006.
[7] N. Cesa-Bianchi and G. Lugosi. Combinatorial bandits. In COLT, 2009.
[8] R. Kleinberg, A. Slivkins, and E. Upfal. Multi-armed bandits in metric spaces. In STOC, pages 681–690, 2008.
[9] J. Langford and T. Zhang. The epoch-greedy algorithm for multi-armed bandits with side information. In NIPS, 2007.
[10] L. Li, W. Chu, J. Langford, and R. Schapire. A contextual-bandit approach to personalized news article recommendation. In Proceedings of the 19th International Conference on World Wide Web, pages 661–670. ACM, 2010.
[11] P. Rusmevichientong and J. Tsitsiklis. Linearly parameterized bandits. Math. Oper. Res., 35(2):395–411, 2010.
[12] D. Zuckerman. Linear degree extractors and the inapproximability of max clique and chromatic number. Theory of Computing, 3(1):103–128, 2007.