
64 nips-2004-Experts in a Markov Decision Process


Source: pdf

Author: Eyal Even-Dar, Sham M. Kakade, Yishay Mansour

Abstract: We consider an MDP setting in which the reward function is allowed to change at each time step of play (possibly in an adversarial manner), while the dynamics remain fixed. Analogous to the experts setting, we ask how well an agent can do compared to the reward achieved by the best stationary policy over time. We provide efficient algorithms whose regret bounds have no dependence on the size of the state space; instead, these bounds depend only on a certain horizon time of the process and logarithmically on the number of actions. We also show that when the dynamics are allowed to change over time, the problem becomes computationally hard.
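
To make the experts analogy concrete, the Python sketch below runs a multiplicative-weights (experts-style) learner over actions independently at each state of an MDP with fixed transition dynamics and rewards that may change every step. This is only an illustrative sketch under simplifying assumptions (full-information reward feedback, a fixed learning rate, and the hypothetical `transitions` / `reward_fn` interfaces); it is not the paper's exact algorithm and it carries none of the paper's regret guarantees.

import numpy as np

# Sketch: one multiplicative-weights learner over actions per state,
# in an MDP with fixed dynamics and time-varying rewards.
# All names and interfaces here are illustrative assumptions,
# not the authors' construction.

class PerStateExperts:
    def __init__(self, n_states, n_actions, eta=0.1):
        self.eta = eta
        # One weight vector over actions for every state.
        self.weights = np.ones((n_states, n_actions))

    def policy(self, state):
        w = self.weights[state]
        return w / w.sum()

    def act(self, state, rng):
        return rng.choice(len(self.weights[state]), p=self.policy(state))

    def update(self, state, rewards_for_state):
        # Multiplicative-weights update with the reward vector observed
        # for this state at the current step (full-information feedback
        # is an assumption of this sketch).
        self.weights[state] *= np.exp(self.eta * rewards_for_state)


def run(transitions, reward_fn, horizon, seed=0):
    """Simulate the learner. `transitions[s, a]` is a distribution over
    next states and `reward_fn(t)` returns an (n_states, n_actions)
    reward matrix for step t; both are hypothetical inputs."""
    rng = np.random.default_rng(seed)
    n_states, n_actions, _ = transitions.shape
    learner = PerStateExperts(n_states, n_actions)
    state, total = 0, 0.0
    for t in range(horizon):
        rewards = reward_fn(t)          # rewards may change each step
        a = learner.act(state, rng)
        total += rewards[state, a]
        learner.update(state, rewards[state])
        state = rng.choice(n_states, p=transitions[state, a])
    return total


if __name__ == "__main__":
    # Toy usage: 3 states, 2 actions, randomly drifting rewards.
    rng = np.random.default_rng(1)
    P = rng.dirichlet(np.ones(3), size=(3, 2))
    print(run(P, lambda t: rng.random((3, 2)), horizon=1000))
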


reference text

Y. Freund and R. Schapire. Adaptive game playing using multiplicative weights. Games and Economic Behavior, 29:79–103, 1999.

J. Hastad. Some optimal inapproximability results. J. ACM, 48(4):798–859, 2001.

S. Kakade. On the Sample Complexity of Reinforcement Learning. PhD thesis, University College London, 2003.

A. Kalai and S. Vempala. Efficient algorithms for on-line optimization. In Proceedings of COLT, 2003.

M. Kearns and S. Singh. Near-optimal reinforcement learning in polynomial time. In Proceedings of ICML, 1998.

H. McMahan, G. Gordon, and A. Blum. Planning in the presence of cost functions controlled by an adversary. In Proceedings of the 20th ICML, 2003.