
NIPS 2001, Paper 128: Multiagent Planning with Factored MDPs


Source: pdf

Authors: Carlos Guestrin, Daphne Koller, Ronald Parr

Abstract: We present a principled and efficient planning algorithm for cooperative multiagent dynamic systems. A striking feature of our method is that the coordination and communication between the agents are not imposed, but derived directly from the system dynamics and the function approximation architecture. We view the entire multiagent system as a single, large Markov decision process (MDP), which we assume can be represented in a factored way using a dynamic Bayesian network (DBN). The action space of the resulting MDP is the joint action space of the entire set of agents. Our approach is based on the use of factored linear value functions as an approximation to the joint value function. This factorization of the value function allows the agents to coordinate their actions at runtime using a natural message passing scheme. We provide a simple and efficient method for computing such an approximate value function by solving a single linear program, whose size is determined by the interaction between the value function structure and the DBN. We thereby avoid the exponential blowup in the state and action spaces. We show that our approach compares favorably with approaches based on reward sharing. We also show that our algorithm is an efficient alternative to more complicated algorithms even in the single-agent case.
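
To make the abstract's two computational ideas concrete: the approximate value function is a weighted sum of small-scope basis functions, and its weights come from a single LP. The following is a generic sketch of that approximate-linear-programming formulation, after Schweitzer and Seidmann [12] (see also [3]); here alpha denotes a state-relevance weighting, and the paper's algorithm derives a compact constraint set from the DBN rather than enumerating the constraints below, so read this as the template it specializes, not the final LP:

    % Factored linear value function: each basis function h_i depends
    % only on a small subset x_i of the state variables.
    V(\mathbf{x}) = \sum_i w_i\, h_i(\mathbf{x}_i)

    % Approximate LP for the weights; naively, one constraint per
    % state/joint-action pair (x, a).
    \min_{\mathbf{w}} \sum_{\mathbf{x}} \alpha(\mathbf{x}) \sum_i w_i\, h_i(\mathbf{x}_i)
    \quad \text{s.t.} \quad
    \sum_i w_i\, h_i(\mathbf{x}_i) \ge R(\mathbf{x}, \mathbf{a}) + \gamma \sum_{\mathbf{x}'} P(\mathbf{x}' \mid \mathbf{x}, \mathbf{a}) \sum_i w_i\, h_i(\mathbf{x}'_i)
    \qquad \forall\, \mathbf{x}, \mathbf{a}

The runtime coordination works because such a value function induces a Q-function that decomposes into local terms, each depending on only a few agents' actions; maximizing their sum over the joint action is variable elimination (non-serial dynamic programming [1], in the spirit of bucket elimination [5]), which is the "natural message passing scheme" of the abstract. Below is a minimal Python sketch of that elimination scheme; the representation of Q-factors as (scope, payoff table) pairs, the function name, and the binary action set are illustrative assumptions, not taken from the paper:

    from itertools import product

    # A Q-factor is a (scope, table) pair: `scope` is a tuple of agent ids,
    # and `table` maps an action tuple (one action per scope agent) to a
    # payoff. In the paper these local factors fall out of the factored
    # value function and the DBN; here they are opaque inputs (assumption).

    def coordinate(factors, elimination_order, actions):
        """Choose a joint action maximizing the sum of local Q-factors
        via variable elimination."""
        best_response = {}
        for agent in elimination_order:
            involved = [f for f in factors if agent in f[0]]
            factors = [f for f in factors if agent not in f[0]]
            # New scope: every other agent the involved factors touch.
            scope = tuple(sorted({v for s, _ in involved for v in s} - {agent}))
            table, argmax = {}, {}
            for others in product(actions, repeat=len(scope)):
                ctx = dict(zip(scope, others))
                def value(a):  # payoff of the involved factors if `agent` plays a
                    ctx[agent] = a
                    return sum(t[tuple(ctx[v] for v in s)] for s, t in involved)
                best = max(actions, key=value)
                table[others], argmax[others] = value(best), best
            factors.append((scope, table))          # the "message" passed onward
            best_response[agent] = (scope, argmax)  # conditional plan for backtracking
        # Recover a maximizing joint action in reverse elimination order.
        joint = {}
        for agent in reversed(elimination_order):
            scope, argmax = best_response[agent]
            joint[agent] = argmax[tuple(joint[v] for v in scope)]
        return joint

    # Chain-structured example: Q1(a1, a2) + Q2(a2, a3) with binary actions.
    Q1 = ((1, 2), {(0, 0): 1.0, (0, 1): 0.0, (1, 0): 0.0, (1, 1): 2.0})
    Q2 = ((2, 3), {(0, 0): 0.5, (0, 1): 1.5, (1, 0): 0.0, (1, 1): 1.0})
    print(coordinate([Q1, Q2], elimination_order=[1, 2, 3], actions=(0, 1)))
    # {3: 1, 2: 1, 1: 1}: joint action (1, 1, 1), total payoff 3.0

Note how eliminating agent 1 produces a table over agent 2's action alone, the message agent 1 would send to agent 2; each agent only ever exchanges messages with agents that share a Q-factor with it, mirroring the claim that the communication structure is derived from the problem rather than imposed.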


References

[1] U. Bertele and F. Brioschi. Nonserial Dynamic Programming. Academic Press, 1972.

[2] C. Boutilier, T. Dean, and S. Hanks. Decision theoretic planning: Structural assumptions and computational leverage. Journal of Artificial Intelligence Research, 11:1–94, 1999.

[3] D.P. de Farias and B. Van Roy. The linear programming approach to approximate dynamic programming. Submitted to the IEEE Transactions on Automatic Control, January 2001.

[4] T. Dean and K. Kanazawa. A model for reasoning about persistence and causation. Computational Intelligence, 5(3):142–150, 1989.

[5] R. Dechter. Bucket elimination: A unifying framework for reasoning. Artificial Intelligence, 113(1–2):41–85, 1999.

[6] C. Guestrin, D. Koller, and R. Parr. Max-norm projections for factored MDPs. In Proc. 17th IJCAI, 2001.

[7] F. Jensen, F. Jensen, and S. Dittmer. From influence diagrams to junction trees. In Proc. 10th UAI, pages 367–373, 1994.

[8] D. Koller and R. Parr. Computing factored value functions for policies in structured MDPs. In Proc. 16th IJCAI, 1999.

[9] D. Koller and R. Parr. Policy iteration for factored MDPs. In Proc. 16th UAI, 2000.

[10] L. Peshkin, N. Meuleau, K. Kim, and L. Kaelbling. Learning to cooperate via policy search. In Proc. 16th UAI, 2000.

[11] J. Schneider, W. Wong, A. Moore, and M. Riedmiller. Distributed value functions. In Proc. 16th ICML, 1999.

[12] P. Schweitzer and A. Seidmann. Generalized polynomial approximations in Markovian decision processes. Journal of Mathematical Analysis and Applications, 110:568–582, 1985.

[13] D. Wolpert, K. Wheeler, and K. Tumer. General principles of learning-based multi-agent systems. In Proc. 3rd Agents Conference, 1999.