jmlr jmlr2005 jmlr2005-5 jmlr2005-5-reference knowledge-graph by maker-knowledge-mining

5 jmlr-2005-A Generalization Error for Q-Learning


Source: pdf

Author: Susan A. Murphy

Abstract: Planning problems that involve learning a policy from a single training set of finite horizon trajectories arise in both social science and medical fields. We consider Q-learning with function approximation for this setting and derive an upper bound on the generalization error. This upper bound is in terms of quantities minimized by a Q-learning algorithm, the complexity of the approximation space, and an approximation term due to the mismatch between Q-learning and the goal of learning a policy that maximizes the value function.

Keywords: multistage decisions, dynamic programming, reinforcement learning, batch data
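
Since the abstract concerns Q-learning with function approximation fit to a single batch of finite-horizon trajectories, the sketch below illustrates that general setup: stage-wise Q-functions fit by backward recursion, with a linear least-squares regression at each stage and a greedy policy read off at the end. This is a minimal illustration under stated assumptions, not the estimator analyzed in the paper; the function names, the feature map featurize, and the linear fit are choices made here for concreteness.

# Minimal sketch (illustrative, not the paper's exact algorithm): batch
# Q-learning with linear function approximation over finite-horizon
# trajectories, fit by backward recursion on least squares.
import numpy as np

def fit_q_functions(trajectories, featurize, n_actions, horizon):
    """trajectories: list of trajectories, each a list of (state, action, reward)
    tuples of length `horizon`. featurize(state, action) -> 1-D feature vector.
    Returns one weight vector per stage, so Q_t(s, a) ~ weights[t] . featurize(s, a)."""
    weights = [None] * horizon
    for t in reversed(range(horizon)):
        X, y = [], []
        for traj in trajectories:
            state, action, reward = traj[t]
            target = reward
            if t + 1 < horizon:
                next_state = traj[t + 1][0]
                # Bellman-style target: immediate reward plus the best
                # estimated next-stage Q-value under the already-fitted Q_{t+1}.
                target += max(
                    weights[t + 1] @ featurize(next_state, a)
                    for a in range(n_actions)
                )
            X.append(featurize(state, action))
            y.append(target)
        X, y = np.asarray(X), np.asarray(y)
        # Least-squares fit of the stage-t Q-function.
        weights[t], *_ = np.linalg.lstsq(X, y, rcond=None)
    return weights

def greedy_policy(weights, featurize, n_actions):
    """Greedy policy derived from the fitted stage-wise Q-functions."""
    def act(t, state):
        return max(range(n_actions), key=lambda a: weights[t] @ featurize(state, a))
    return act

Under this reading, the greedy policy returned at the end plays the role of the learned policy, and the mismatch the abstract mentions is between minimizing the regression errors above and maximizing the value of that policy.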


reference text

M. Altfeld and B. D. Walker. Less is more? STI in acute and chronic HIV-1 infection. Nature Medicine, 7:881–884, 2001.
M. Anthony and P. L. Bartlett. Neural Network Learning: Theoretical Foundations. Cambridge, UK: Cambridge University Press, 1999.
L. Baird. Advantage updating. Technical Report WL-TR-93-1146, Wright-Patterson Air Force Base, 1993.
J. Baxter and P. L. Bartlett. Infinite-horizon policy-gradient estimation. Journal of Artificial Intelligence Research, 15:319–350, 2001.
R. E. Bellman. Dynamic Programming. Princeton: Princeton University Press, 1957.
D. P. Bertsekas and J. N. Tsitsiklis. Neuro-Dynamic Programming. Belmont, MA: Athena Scientific, 1996.
R. K. Brooner and M. Kidorf. Using behavioral reinforcement to improve methadone treatment participation. Science and Practice Perspectives, 1:38–48, 2002.
M. Fava, A. J. Rush, M. H. Trivedi, A. A. Nierenberg, M. E. Thase, H. A. Sackeim, F. M. Quitkin, S. Wisniewski, P. W. Lavori, J. F. Rosenbaum, and D. J. Kupfer. Background and rationale for the sequenced treatment alternatives to relieve depression (STAR*D) study. Psychiatric Clinics of North America, 26(3):457–494, 2003.
C. N. Fiechter. Efficient reinforcement learning. In Proceedings of the Seventh Annual ACM Conference on Computational Learning Theory (COLT 1994), pages 88–97, New Brunswick, NJ, 1994.
C. N. Fiechter. Expected mistake bound model for on-line reinforcement learning. In Proceedings of the Fourteenth International Conference on Machine Learning (ICML 1997), Douglas H. Fisher (Ed.), pages 116–124, Nashville, TN, 1997.
S. M. Kakade. On the Sample Complexity of Reinforcement Learning. Ph.D. thesis, University College London, 2003.
M. Kearns, Y. Mansour, and A. Y. Ng. A sparse sampling algorithm for near-optimal planning in large Markov decision processes. Machine Learning, 49(2-3):193–208, 2002.
M. Kearns, Y. Mansour, and A. Y. Ng. Approximate planning in large POMDPs via reusable trajectories. In Advances in Neural Information Processing Systems 12, MIT Press, 2000.
D. Ormoneit and S. Sen. Kernel-based reinforcement learning. Machine Learning, 49(2-3):161–178, 2002.
L. Peshkin and C. R. Shelton. Learning from scarce experience. In Proceedings of the Nineteenth International Conference on Machine Learning (ICML 2002), Claude Sammut and Achim G. Hoffmann (Eds.), pages 498–505, Sydney, Australia, 2002.
A. J. Rush, M. L. Crismon, T. M. Kashner, M. G. Toprac, T. J. Carmody, M. H. Trivedi, T. Suppes, A. L. Miller, M. M. Biggs, K. Shores-Wilson, B. P. Witte, S. P. Shon, W. V. Rago, K. Z. Altshuler, and the TMAP Research Group. Texas medication algorithm project, phase 3 (TMAP-3): rationale and study design. Journal of Clinical Psychiatry, 64(4):357–369, 2003.
L. S. Schneider, P. N. Tariot, C. G. Lyketsos, K. S. Dagerman, K. L. Davis, S. Davis, J. K. Hsiao, D. V. Jeste, I. R. Katz, J. T. Olin, B. G. Pollock, P. V. Rabins, R. A. Rosenheck, G. W. Small, B. Lebowitz, and J. A. Lieberman. National Institute of Mental Health clinical antipsychotic trials of intervention effectiveness (CATIE). American Journal of Geriatric Psychiatry, 9(4):346–360, 2001.
R. E. Schapire, P. Bartlett, Y. Freund, and W. S. Lee. Boosting the margin: a new explanation for the effectiveness of voting methods. The Annals of Statistics, 26(5):1651–1686, 1998.
D. I. Simester, P. Sun, and J. N. Tsitsiklis. Dynamic catalog mailing policies. Unpublished manuscript, available electronically at http://web.mit.edu/jnt/www/Papers/P-03-sun-catalogrev2.pdf, 2003.
R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. Cambridge, MA: The MIT Press, 1998.
P. F. Thall, R. E. Millikan, and H. G. Sung. Evaluating multiple treatment courses in clinical trials. Statistics in Medicine, 19:1011–1028, 2000.
P. F. Thall, H. G. Sung, and E. H. Estey. Selecting therapeutic strategies based on efficacy and death in multicourse clinical trials. Journal of the American Statistical Association, 97:29–39, 2002.
J. N. Tsitsiklis and B. Van Roy. Feature-based methods for large scale dynamic programming. Machine Learning, 22:59–94, 1996.
J. N. Tsitsiklis and B. Van Roy. An analysis of temporal-difference learning with function approximation. IEEE Transactions on Automatic Control, 42(5):674–690, 1997.
A. W. van der Vaart and J. A. Wellner. Weak Convergence and Empirical Processes. Springer, New York, 1996.
C. J. C. H. Watkins. Learning from Delayed Rewards. Ph.D. thesis, Cambridge University, 1989.