MultiAgentDecisionProcess, Release 0.2.1
MultiAgentDecisionProcess Reference Documentation

Introduction

MultiAgentDecisionProcess (MADP) is a toolbox for scientific research in decision-theoretic planning and learning in multiagent systems. It is designed to be rather general, but most effort has been put into planning algorithms for discrete Dec-POMDPs.

The PDF doc/MADPToolbox.pdf provides more general background on MADP models, and documents the overall design principles as well as details of the index and history representations.

Authors: Frans Oliehoek and Matthijs Spaan.

MADP Libraries

The framework consists of several parts, grouped into different libraries. The base library (libMADPBase) contains:

  • Auxiliary functionality for manipulating indices, exception handling, and printing: E, IndexTools, PrintTools. Some project-wide definitions are stored in the Globals namespace.
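To illustrate the kind of index manipulation this refers to (a sketch of the general technique, not the IndexTools API itself): mapping between a tuple of individual indices, one per agent, and a single joint index is a mixed-radix conversion.

```python
# Mixed-radix conversion between individual indices (one per agent) and a
# single joint index, the kind of functionality IndexTools provides.
# This is an illustrative sketch, NOT the MADP API.

def individual_to_joint(indices, nr_elems):
    """Fold individual indices into a single joint index."""
    joint = 0
    for i, n in zip(indices, nr_elems):
        joint = joint * n + i
    return joint

def joint_to_individual(joint, nr_elems):
    """Unfold a joint index back into individual indices."""
    indices = []
    for n in reversed(nr_elems):
        indices.append(joint % n)
        joint //= n
    return list(reversed(indices))

# Two agents with 3 and 2 actions: the joint action (a0=2, a1=1)
nr = [3, 2]
j = individual_to_joint([2, 1], nr)      # 2*2 + 1 = 5
assert joint_to_individual(j, nr) == [2, 1]
```

The same scheme extends to joint observations and joint observation histories, which is why a planner can store values indexed by a single integer per joint object.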

The parser library (libMADPParser) depends only on the base library, and contains:

  • A parser for dpomdp problem specifications, a file format for discrete Dec-POMDPs. A set of benchmark problem files can be found in the problems/ directory, and the dpomdp syntax is documented in example.dpomdp. The format is based on Tony Cassandra's POMDP file format; the formal specification is found in dpomdp.spirit. The parser is built on the Boost Spirit library. See MADPParser.
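For orientation, a dectiger-style specification looks roughly like the fragment below. This is a hand-written sketch in the spirit of the format, not an excerpt from a shipped problem file; example.dpomdp is the authoritative syntax reference.

```text
agents: 2
discount: 1.0
values: reward
states: tiger-left tiger-right
start: uniform
actions:
listen open-left open-right
listen open-left open-right
observations:
hear-left hear-right
hear-left hear-right

# transitions: T: <joint action> : <from state> : <to state> : prob
T: listen listen : tiger-left : tiger-left : 1.0
# observations: O: <joint action> : <to state> : <joint observation> : prob
O: listen listen : tiger-left : hear-left hear-left : 0.7225
# rewards: R: <joint action> : <from state> : <to state> : <joint obs> : reward
R: listen listen : tiger-left : * : * : -2
```

Per-agent action and observation lists appear on one line per agent, and `*` acts as a wildcard, as in Cassandra's single-agent POMDP format.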

The support library (libMADPSupport) contains basic data types and support useful for planning:

  • Functionality for handling command-line arguments is provided by ArgumentHandlers.

Finally, the planning library (libMADPplanning) contains functionality shared by planning algorithms, as well as a number of solution methods.

  • POMDP solution techniques: Perseus.
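To convey the idea behind Perseus, here is a compact sketch of randomized point-based value iteration on the standard single-agent tiger POMDP. This is an illustration of the algorithm, not MADP's implementation; the problem parameters are the usual tiger values (listening costs 1, observations are 85% accurate, opening the correct door yields +10, the wrong door -100).

```python
import random

# States: 0 = tiger-left, 1 = tiger-right.
GAMMA = 0.95
ACTIONS = ["listen", "open-left", "open-right"]
R = {"listen": [-1.0, -1.0],          # R[a][s]
     "open-left": [-100.0, 10.0],
     "open-right": [10.0, -100.0]}

def obs_prob(a, s2, o):   # o: 0 = hear-left, 1 = hear-right
    if a == "listen":
        return 0.85 if o == s2 else 0.15
    return 0.5            # opening a door is uninformative

def trans(a, s, s2):      # listening keeps the state; opening resets it
    if a == "listen":
        return 1.0 if s == s2 else 0.0
    return 0.5

def value(b, V):
    return max(b[0] * a[0] + b[1] * a[1] for a in V)

def backup(b, V):
    """Point-based Bellman backup at belief b, returning a new alpha vector."""
    best = None
    for a in ACTIONS:
        g = [R[a][0], R[a][1]]
        for o in (0, 1):
            # g_{a,o}^i(s) = sum_{s'} O(o|s',a) T(s'|s,a) alpha_i(s')
            cands = [[sum(obs_prob(a, s2, o) * trans(a, s, s2) * alpha[s2]
                          for s2 in (0, 1)) for s in (0, 1)] for alpha in V]
            gi = max(cands, key=lambda v: b[0] * v[0] + b[1] * v[1])
            g = [g[s] + GAMMA * gi[s] for s in (0, 1)]
        if best is None or b[0] * g[0] + b[1] * g[1] > b[0] * best[0] + b[1] * best[1]:
            best = g
    return best

# Perseus: back up randomly chosen beliefs until every belief in B improved,
# keeping the old best alpha for beliefs whose backup did not help.
random.seed(0)
B = [(p, 1 - p) for p in (0.0, 0.1, 0.3, 0.5, 0.7, 0.9, 1.0)]
V = [[min(min(r) for r in R.values()) / (1 - GAMMA)] * 2]  # pessimistic init
for _ in range(200):
    todo, Vnew = list(B), []
    while todo:
        b = random.choice(todo)
        alpha = backup(b, V)
        if b[0] * alpha[0] + b[1] * alpha[1] < value(b, V):
            alpha = max(V, key=lambda a: b[0] * a[0] + b[1] * a[1])
        Vnew.append(alpha)
        todo = [b2 for b2 in todo if value(b2, Vnew) < value(b2, V)]
    V = Vnew
```

The key property sketched here is that each iteration never decreases the value at any belief in B, while usually backing up far fewer points than B contains.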

Programs using the MADP libraries

The src/examples/ and src/utils/ directories contain a number of programs that use the MADP libraries. Running a binary with the --help argument displays a short usage summary.

  • JESP runs the JESPDynamicProgrammingPlanner on a dpomdp problem specification. For instance,
    JESP -h 3 <PATH_TO>/dectiger.dpomdp
    
    or
    JESP -h 3 DT
    
    runs JESP for horizon 3 on the DecTiger problem. The first invocation parses the dectiger.dpomdp file; the second uses the ProblemDecTiger class. Many more problem files are provided in the problems/ directory.
  • Perseus runs the Perseus POMDP or Bayesian game (BG) planner.
  • printProblem loads a dpomdp problem description and prints it to standard output. printJointPolicyPureVector prints the joint policy corresponding to a given index.
  • evaluateJointPolicyPureVector simulates a particular joint policy for a problem. evaluateRandomPolicy uses a policy that chooses actions uniformly at random.
  • analyzeRewardResults and getAvgReward print information about the expected reward of simulation runs, saved using SimulationResult::Save().
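The idea behind JESP (Joint Equilibrium-based Search for Policies) is alternating best response: hold every other agent's policy fixed, compute a best response for one agent, and cycle until no agent can improve. The sketch below shows that dynamic on a one-shot common-payoff game, where a "policy" is just an action; the real planner best-responds over full Dec-POMDP policies via dynamic programming. The payoff matrix is made up for illustration.

```python
# Alternating best response on a one-shot common-payoff game, illustrating
# the local search underlying JESP. This is a conceptual sketch, not the
# JESPDynamicProgrammingPlanner itself.

# Common payoff U[a0][a1] for a 3x3 coordination game (hypothetical values).
U = [[5, 0, 0],
     [0, 2, 0],
     [0, 0, 8]]

def best_response(U, fixed, agent):
    """Best action for `agent` when the other agent plays `fixed`."""
    if agent == 0:
        return max(range(3), key=lambda a: U[a][fixed])
    return max(range(3), key=lambda a: U[fixed][a])

def jesp_like(U, start):
    """Cycle over agents, best-responding until no one can improve."""
    joint = list(start)
    while True:
        improved = False
        for agent in (0, 1):
            other = joint[1 - agent]
            br = best_response(U, other, agent)
            new = U[br][other] if agent == 0 else U[other][br]
            if new > U[joint[0]][joint[1]]:
                joint[agent] = br
                improved = True
        if not improved:
            return tuple(joint)

# From (1, 1) the search is stuck at payoff 2; from (0, 2) it reaches
# the best joint action (2, 2) with payoff 8.
assert jesp_like(U, (1, 1)) == (1, 1)
assert jesp_like(U, (2, 2)) == (2, 2)
```

The result is a Nash equilibrium of the common-payoff game, which may be only locally optimal, as the first assertion shows; in practice one typically runs such a search from multiple starting points.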

Acknowledgments

The work reported here is part of the Interactive Collaborative Information Systems (ICIS) project, supported by the Dutch Ministry of Economic Affairs, grant nr: BSIK03024. This work was partially supported by Fundação para a Ciência e a Tecnologia (ISR/IST pluriannual funding) through the POS_Conhecimento Program that includes FEDER funds and through grant PTDC/EEA-ACR/73266/2006.