Authors: Michael P. Holmes, Charles L. Isbell Jr.
Abstract: Schema learning is a way to discover probabilistic, constructivist, predictive action models (schemas) from experience. It includes methods for finding and using hidden state to make predictions more accurate. We extend the original schema mechanism [1] to handle arbitrary discrete-valued sensors, improve the original learning criteria to handle POMDP domains, and better maintain hidden state by using schema predictions. These extensions show large improvements over the original schema mechanism in several rewardless POMDPs, and achieve very low prediction error in a difficult speech modeling task. Further, we compare extended schema learning to the recently introduced predictive state representations [2], and find their predictions of next-step action effects to be approximately equal in accuracy. This work lays the foundation for a schema-based system of integrated learning and planning.
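For readers unfamiliar with Drescher-style schemas, the following minimal Python sketch illustrates the basic structure the abstract refers to: a schema is a context/action/result triple whose reliability, the empirical P(result | context, action), is estimated from experience. This is an illustrative assumption, not the paper's extended mechanism; it omits the extensions described above (arbitrary discrete-valued sensors, POMDP-robust learning criteria, hidden-state maintenance), and the names Schema, applies, update, and reliability are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class Schema:
        context: dict      # sensor -> value required to hold before acting
        action: str        # the action this schema is about
        result: dict       # sensor -> value predicted to hold after acting
        activations: int = 0   # times the context held and the action was taken
        successes: int = 0     # times the predicted result then followed

        def applies(self, obs: dict) -> bool:
            # The schema is applicable when every context condition holds.
            return all(obs.get(s) == v for s, v in self.context.items())

        def update(self, obs_before: dict, action: str, obs_after: dict) -> None:
            # Record one experience tuple (obs_before, action, obs_after).
            if action == self.action and self.applies(obs_before):
                self.activations += 1
                if all(obs_after.get(s) == v for s, v in self.result.items()):
                    self.successes += 1

        @property
        def reliability(self) -> float:
            # Empirical estimate of P(result | context, action).
            return self.successes / self.activations if self.activations else 0.0

    # Example: one successful activation of a hypothetical grasping schema.
    s = Schema(context={"hand_empty": 1}, action="grasp", result={"holding": 1})
    s.update({"hand_empty": 1}, "grasp", {"holding": 1})
    print(s.reliability)  # 1.0 after a single successful activation

In Drescher's framework, learning proceeds by spinning off new schemas with refined contexts or results when the reliability statistics warrant it; the paper's contribution is to make those statistics and the hidden-state machinery work in partially observable domains.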
[1] G. Drescher. Made-up minds: a constructivist approach to artificial intelligence. MIT Press, 1991.
[2] M. L. Littman, R. S. Sutton, and S. Singh. Predictive representations of state. In Advances in Neural Information Processing Systems, pages 1555–1561. MIT Press, 2002.
[3] R. E. Fikes and N. J. Nilsson. STRIPS: a new approach to the application of theorem proving to problem solving. Artificial Intelligence, 2:189–208, 1971.
[4] C. T. Morrison, T. Oates, and G. King. Grounding the unobservable in the observable: the role and representation of hidden state in concept formation and refinement. In AAAI Spring Symposium on Learning Grounded Representations, pages 45–49. AAAI Press, 2001.
[5] S. Singh, M. L. Littman, N. K. Jong, D. Pardoe, and P. Stone. Learning predictive state representations. In International Conference on Machine Learning, pages 712–719. AAAI Press, 2003.
[6] M. Kudo, J. Toyama, and M. Shimbo. Multidimensional curve classification using passingthrough regions. Pattern Recognition Letters, 20(11–13):1103–1111, 1999.
[7] X. Wang. Learning by observation and practice: An incremental approach for planning operator acquisition. In International Conference on Machine Learning, pages 549–557. AAAI Press, 1995.
[8] Y. Gil. Learning by experimentation: Incremental refinement of incomplete planning domains. In International Conference on Machine Learning, pages 87–95. AAAI Press, 1994.
[9] W.-M. Shen. Discovery as autonomous learning from the environment. Machine Learning, 12:143–165, 1993.
[10] Scott Benson. Inductive learning of reactive action models. In International Conference on Machine Learning, pages 47–54. AAAI Press, 1995.
[11] N. Balac, D. M. Gaines, and D. Fisher. Using regression trees to learn action models. In IEEE Systems, Man and Cybernetics Conference, 2000.
[12] A. W. McCallum. Reinforcement Learning with Selective Perception and Hidden State. PhD thesis, University of Rochester, 1995.