nips nips2007 nips2007-191 nips2007-191-reference knowledge-graph by maker-knowledge-mining
Source: pdf
Author: Marcus Hutter, Shane Legg
Abstract: We derive an equation for temporal difference learning from statistical principles. Specifically, we start with the variational principle and then bootstrap to produce an updating rule for discounted state value estimates. The resulting equation is similar to the standard equation for temporal difference learning with eligibility traces, so-called TD(λ); however, it lacks the parameter α that specifies the learning rate. In place of this free parameter there is now an equation for the learning rate that is specific to each state transition. We experimentally test this new learning rule against TD(λ) and find that it offers superior performance in various settings. Finally, we make some preliminary investigations into how to extend our new temporal difference algorithm to reinforcement learning. To do this we combine our update equation with both Watkins' Q(λ) and Sarsa(λ) and find that it again offers superior performance without a learning rate parameter.
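To fix notation for the abstract above, the following is a minimal Python sketch of standard tabular TD(λ) with accumulating eligibility traces, the baseline the paper compares against; the free step-size alpha in this sketch is the parameter that the paper's derivation replaces with a per-transition learning-rate equation (not reproduced here). All names and values (n_states, alpha, gamma, lam, the toy episode) are illustrative assumptions, not the authors' code.

    # Minimal sketch of tabular TD(lambda) with accumulating eligibility traces.
    # The constant step size `alpha` below is the free parameter that the paper's
    # update rule eliminates; everything here is standard TD(lambda), not the
    # paper's learning-rate-free variant.
    import numpy as np

    def td_lambda_episode(transitions, V, alpha=0.1, gamma=0.95, lam=0.8):
        """Update state-value estimates V in place over one episode.

        transitions: iterable of (state, reward, next_state) tuples,
                     with next_state = None at termination.
        """
        e = np.zeros_like(V)                   # eligibility traces
        for s, r, s_next in transitions:
            v_next = 0.0 if s_next is None else V[s_next]
            delta = r + gamma * v_next - V[s]  # TD error for this transition
            e[s] += 1.0                        # accumulate trace for current state
            V += alpha * delta * e             # credit all recently visited states
            e *= gamma * lam                   # decay traces
        return V

    # Illustrative usage on a 3-state chain ending in a terminal state.
    V = np.zeros(3)
    episode = [(0, 0.0, 1), (1, 0.0, 2), (2, 1.0, None)]
    td_lambda_episode(episode, V)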
[1] A. P. George and W. B. Powell. Adaptive stepsizes for recursive estimation with applications in approximate dynamic programming. Machine Learning, 65(1):167–198, 2006.
[2] M. G. Lagoudakis and R. Parr. Least-squares policy iteration. Journal of Machine Learning Research, 4:1107–1149, 2003.
[3] J. Peng and R. J. Williams. Incremental multi-step Q-learning. Machine Learning, 22:283–290, 1996.
[4] G. A. Rummery. Problem solving with reinforcement learning. PhD thesis, Cambridge University, 1995.
[5] G. A. Rummery and M. Niranjan. On-line Q-learning using connectionist systems. Technical Report CUED/F-INFENG/TR 166, Engineering Department, Cambridge University, 1994.
[6] R. Sutton and A. Barto. Reinforcement learning: An introduction. MIT Press, Cambridge, MA, 1998.
[7] R. S. Sutton. Learning to predict by the methods of temporal differences. Machine Learning, 3:9–44, 1988.
[8] C. J. C. H. Watkins. Learning from Delayed Rewards. PhD thesis, King's College, Cambridge, 1989.
[9] I. H. Witten. An adaptive optimal controller for discrete-time Markov environments. Information and Control, 34:286–295, 1977.