 ArgumentHandlers | ArgumentHandlers contains functionality for parsing and handling command-line arguments |
  Arguments | Arguments contains all defined parameters to be set on the command line |
 ArgumentUtils | ArgumentUtils provides a way to get problem instantiations directly from the command-line arguments |
 BeliefValue | BeliefValue is a namespace for functions that compute the value of a particular belief; a sketch of this computation follows the table |
 BGIP_SolverType | BGIP_SolverType specifies the available solver types for identical-payoff Bayesian games |
 directories | directories contains functionality for handling directories and file names |
 Globals | Globals contains several definitions global to the MADP toolbox |
 GMAAtype | GMAAtype specifies the GMAA planner types (e.g., MAAstar, kGMAA) |
 IndexTools | IndexTools contains functionality for manipulating indices |
 JESPtype | JESPtype specifies the JESP planner types (e.g., exhaustive, dynamic programming) |
 PolicyGlobals | PolicyGlobals contains definitions global to policies |
 PrintTools | PrintTools contains functionality for printing vectors etc |
 ProblemType | ProblemType specifies the built-in problem types (e.g., DecTiger, FireFighting) |
 qheur | qheur specifies the Q-value heuristic types (e.g., QMDP, QPOMDP, QBG) |
 std | STL namespace |
  less< JointPolicyValuePair * > | Overload the less<Type> template for JointPolicyValuePair* (we want less to give an ordering according to values, not addresses); a sketch of this idiom follows the table |
  less< JPPVValuePair * > | Overload the less<Type> template for JPPVValuePair* (we want less to give an ordering according to values, not addresses) |
  less< PartialJointPolicyValuePair * > | Overload the less<Type> template for PartialJointPolicyValuePair* (we want less to give an ordering according to values, not addresses) |
  less< PartialJPDPValuePair * > | Overload the less<Type> template for PartialJPDPValuePair* (we want less to give an ordering according to values, not addresses) |
 StringTools | StringTools is a namespace that contains utility functions for strings |
 TimeTools | TimeTools contains utility functions for dealing with time |
 VectorTools | VectorTools contains utility functions for vectors |
 Action | Action is a class that represents actions |
 ActionDiscrete | ActionDiscrete represents discrete actions |
 ActionHistory | ActionHistory represents an action history of a single agent |
 ActionHistoryTree | ActionHistoryTree is a wrapper for ActionHistory |
 ActionObservationHistory | ActionObservationHistory represents an action-observation history of an agent |
 ActionObservationHistoryTree | ActionObservationHistoryTree is a wrapper for ActionObservationHistory |
 Agent | Agent represents an agent |
 AgentBG | AgentBG represents an agent which uses a BG-based policy |
 AgentDecPOMDPDiscrete | AgentDecPOMDPDiscrete represents an agent in a discrete DecPOMDP setting |
 AgentDelayedSharedObservations | AgentDelayedSharedObservations represents an agent that acts on local observations and the shared observation at the previous time step |
 AgentFullyObservable | AgentFullyObservable represents an agent that receives the true state, the joint observation and also the reward signal |
 AgentLocalObservations | AgentLocalObservations represents an agent that acts on local observations |
 AgentPOMDP | AgentPOMDP represents an agent which uses a POMDP-based policy |
 AgentQMDP | AgentQMDP represents an agent which uses a QMDP-based policy |
 AgentRandom | AgentRandom represents an agent which chooses actions uniformly at random |
 AgentSharedObservations | AgentSharedObservations represents an agent that benefits from free communication, i.e., it can share all its observations |
 AlphaVector | AlphaVector represents an alpha vector used in POMDP solving |
 AlphaVectorBG | AlphaVectorBG implements Bayesian Game specific functionality for alpha-vector based planning |
 AlphaVectorPlanning | AlphaVectorPlanning provides base functionality for alpha-vector based POMDP or BG techniques |
 AlphaVectorPOMDP | AlphaVectorPOMDP implements POMDP specific functionality for alpha-vector based planning |
 BayesianGame | BayesianGame is a class that represents a general Bayesian game in which each agent has its own utility function |
 BayesianGameBase | BayesianGameBase is a class that represents a Bayesian game |
 BayesianGameForDecPOMDPStage | BayesianGameForDecPOMDPStage represents a BG for a single stage |
 BayesianGameForDecPOMDPStageInterface | BayesianGameForDecPOMDPStageInterface is an interface for Bayesian games that represent a single stage of a Dec-POMDP |
 BayesianGameIdenticalPayoff | BayesianGameIdenticalPayoff is a class that represents a Bayesian game with identical payoffs |
 BayesianGameIdenticalPayoffInterface | BayesianGameIdenticalPayoffInterface provides an interface for Bayesian Games with identical payoffs |
 BayesianGameIdenticalPayoffSolver | BayesianGameIdenticalPayoffSolver is an interface for solvers for Bayesian games with identical payoff |
 Belief | Belief represents a probability distribution over the state space |
 BeliefInterface | BeliefInterface is an interface for beliefs, i.e., probability distributions over the state space |
 BeliefIterator | BeliefIterator is an iterator for dense beliefs |
 BeliefIteratorGeneric | BeliefIteratorGeneric is an iterator for beliefs |
 BeliefIteratorInterface | BeliefIteratorInterface is an interface for iterators over beliefs |
 BeliefIteratorSparse | BeliefIteratorSparse is an iterator for sparse beliefs |
 BeliefSparse | BeliefSparse represents a probability distribution over the state space |
 BGforStageCreation | BGforStageCreation is a class that provides some functions to aid the construction of Bayesian games for a stage of a Dec-POMDP |
 BGIP_SolverAlternatingMaximization | BGIP_SolverAlternatingMaximization implements an approximate solver for identical payoff Bayesian games, based on alternating maximization |
 BGIP_SolverBruteForceSearch | BGIP_SolverBruteForceSearch is a class that performs brute-force search for identical-payoff Bayesian games |
 BGIP_SolverCreator_AM | BGIP_SolverCreator_AM returns an Alternating Maximization BGIP_Solver |
 BGIP_SolverCreator_BFS | BGIP_SolverCreator_BFS returns a Brute Force Search BGIP_Solver |
 BGIP_SolverCreatorInterface | BGIP_SolverCreatorInterface is an interface for classes that create BGIP solvers |
 BGIP_SolverRandom | BGIP_SolverRandom creates random solutions to Bayesian games for testing purposes |
 BGIPSolution | BGIPSolution represents a solution for BayesianGameIdenticalPayoff |
 BruteForceSearchPlanner | BruteForceSearchPlanner implements an exact solution algorithm |
 DecPOMDP | DecPOMDP is a simple implementation of DecPOMDPInterface |
 DecPOMDPDiscrete | DecPOMDPDiscrete represents a discrete Dec-POMDP model |
 DecPOMDPDiscreteInterface | DecPOMDPDiscreteInterface is the interface for a discrete Dec-POMDP model: it defines the set/get reward functions |
 DecPOMDPInterface | DecPOMDPInterface is an interface for DecPOMDPs |
 DICEPSPlanner | DICEPSPlanner implements the Direct Cross-Entropy Policy Search method |
 DiscreteEntity | DiscreteEntity is a general class for tracking discrete entities |
 E | E is a class that represents a basic exception |
 EInvalidIndex | EInvalidIndex represents an invalid index exception |
 ENotCached | ENotCached represents an exception thrown when requested data is not cached |
 EOverflow | EOverflow represents an integer overflow exception |
 EParse | EParse represents a parser exception |
 FixedCapacityPriorityQueue | FixedCapacityPriorityQueue is a class that represents a priority queue with a fixed maximum size; a sketch of the idea follows the table |
 GeneralizedMAAStarPlanner | GeneralizedMAAStarPlanner is a class that represents the Generalized MAA* planner class |
 GeneralizedMAAStarPlannerForDecPOMDPDiscrete | GeneralizedMAAStarPlannerForDecPOMDPDiscrete is a class that represents the Generalized MAA* planner |
 GMAA_kGMAA | GMAA_kGMAA is a class that represents a GMAA planner that performs k-GMAA |
 GMAA_MAAstar | GMAA_MAAstar is a class that represents a planner that performs MAA* as described by Szer et al. |
 History | History is a general class for histories |
 IndividualBeliefJESP | IndividualBeliefJESP stores individual beliefs for the JESP algorithm |
 IndividualHistory | IndividualHistory represents a history for a single agent |
 Interface_ProblemToPolicyDiscrete | Interface_ProblemToPolicyDiscrete is an interface from discrete problems to policies |
 Interface_ProblemToPolicyDiscretePure | Interface_ProblemToPolicyDiscretePure is an interface from discrete problems to pure policies |
 JESPDynamicProgrammingPlanner | JESPDynamicProgrammingPlanner plans with the DP JESP algorithm |
 JESPExhaustivePlanner | JESPExhaustivePlanner plans with the Exhaustive JESP algorithm |
 JointAction | JointAction represents a joint action |
 JointActionDiscrete | JointActionDiscrete represents discrete joint actions |
 JointActionHistory | JointActionHistory represents a joint action history |
 JointActionHistoryTree | JointActionHistoryTree is a wrapper for JointActionHistory |
 JointActionObservationHistory | JointActionObservationHistory represents a joint action observation history |
 JointActionObservationHistoryTree | JointActionObservationHistoryTree is derived from TreeNode and, like ObservationHistoryTree, is a wrapper for JointActionObservationHistory |
 JointBelief | JointBelief stores a joint belief, represented as a regular (dense) vector of doubles |
 JointBeliefInterface | JointBeliefInterface represents an interface for joint beliefs |
 JointBeliefSparse | JointBeliefSparse represents a sparse joint belief |
 JointHistory | JointHistory represents a joint history, i.e., a history for each agent |
 JointObservation | JointObservation represents joint observations |
 JointObservationDiscrete | JointObservationDiscrete represents discrete joint observations |
 JointObservationHistory | JointObservationHistory represents a joint observation history |
 JointObservationHistoryTree | JointObservationHistoryTree is a class that represents a wrapper for the JointObservationHistory class |
 JointPolicy | JointPolicy is a class that represents a joint policy |
 JointPolicyDiscrete | JointPolicyDiscrete is a class that represents a discrete joint policy |
 JointPolicyDiscretePure | JointPolicyDiscretePure represents a pure joint policy for a discrete MADP |
 JointPolicyPureVector | JointPolicyPureVector represents a discrete pure joint policy |
 JointPolicyValuePair | JointPolicyValuePair is a wrapper for a joint policy and its heuristic value |
 JPolComponent_VectorImplementation | JPolComponent_VectorImplementation implements functionality common to several joint policy implementations |
 JPPVIndexValuePair | JPPVIndexValuePair represents a (JointPolicyPureVector,Value) pair |
 JPPVValuePair | JPPVValuePair represents a (JointPolicyPureVector,Value) pair, which stores the full JointPolicyPureVector |
 MADPComponentDiscreteActions | MADPComponentDiscreteActions contains functionality for discrete action spaces |
 MADPComponentDiscreteObservations | MADPComponentDiscreteObservations contains functionality for discrete observation spaces |
 MADPComponentDiscreteStates | MADPComponentDiscreteStates is a class that represents a discrete state space |
 MADPParser | MADPParser is a general class for parsers in MADP |
 MDPSolver | MDPSolver is an interface for MDP solvers |
 MDPValueIteration | MDPValueIteration implements value iteration for MDPs |
 MultiAgentDecisionProcess | MultiAgentDecisionProcess is a class that defines the primary properties of a decision process |
 MultiAgentDecisionProcessDiscrete | MultiAgentDecisionProcessDiscrete defines the primary properties of a discrete decision process |
 MultiAgentDecisionProcessDiscreteInterface | MultiAgentDecisionProcessDiscreteInterface is an abstract base class that defines publicly accessible member functions that a discrete multiagent decision process must implement |
 MultiAgentDecisionProcessInterface | MultiAgentDecisionProcessInterface is an abstract base class that declares the primary properties of a multiagent decision process |
 NamedDescribedEntity | NamedDescribedEntity represents entities with a name and a description |
 NullPlanner | NullPlanner represents a planner which does nothing, but can be used to instantiate a PlanningUnitDecPOMDPDiscrete |
 NullPlannerTOI | NullPlannerTOI represents a planner which does nothing, but can be used to instantiate a PlanningUnitTOIDecPOMDPDiscrete |
 Observation | Observation represents observations |
 ObservationDiscrete | ObservationDiscrete represents discrete observations |
 ObservationHistory | ObservationHistory represents an observation history of a single agent |
 ObservationHistoryTree | ObservationHistoryTree is a wrapper for the ObservationHistory class |
 ObservationModel | ObservationModel represents the observation model in a decision process |
 ObservationModelDiscrete | ObservationModelDiscrete represents a discrete observation model |
 ObservationModelMapping | ObservationModelMapping implements an ObservationModelDiscrete |
 ObservationModelMappingSparse | ObservationModelMappingSparse implements an ObservationModelDiscrete |
 OGet | OGet can be used for direct access to the observation model |
 OGet_ObservationModelMapping | OGet_ObservationModelMapping can be used for direct access to a ObservationModelMapping |
 OGet_ObservationModelMappingSparse | OGet_ObservationModelMappingSparse can be used for direct access to a ObservationModelMappingSparse |
 ParserInterface | ParserInterface is an interface for parsers |
 ParserTOICompactRewardDecPOMDPDiscrete | ParserTOICompactRewardDecPOMDPDiscrete is a parser for TOICompactRewardDecPOMDPDiscrete |
 ParserTOIDecMDPDiscrete | ParserTOIDecMDPDiscrete is a parser for TOIDecMDPDiscrete |
 ParserTOIDecPOMDPDiscrete | ParserTOIDecPOMDPDiscrete is a parser for TOIDecPOMDPDiscrete |
 ParserTOIFactoredRewardDecPOMDPDiscrete | ParserTOIFactoredRewardDecPOMDPDiscrete is a parser for TOIFactoredRewardDecPOMDPDiscrete |
 PartialJointPolicy | PartialJointPolicy represents a joint policy that is only specified for t time steps instead of for every time step |
 PartialJointPolicyDiscretePure | PartialJointPolicyDiscretePure is a discrete and pure PartialJointPolicy |
 PartialJointPolicyPureVector | PartialJointPolicyPureVector implements a PartialJointPolicy using a mapping of history indices to actions |
 PartialJointPolicyValuePair | PartialJointPolicyValuePair is a wrapper for a partial joint policy and its heuristic value |
 PartialJPDPValuePair | PartialJPDPValuePair represents a (PartialJointPolicyDiscretePure,Value) pair, which stores the full PartialJointPolicyDiscretePure |
 PartialJPPVIndexValuePair | PartialJPPVIndexValuePair represents a (PartialJointPolicyPureVector,Value) pair |
 PartialPolicyPoolInterface | PartialPolicyPoolInterface is an interface for PolicyPools containing Partial Joint Policies |
 PartialPolicyPoolItemInterface | PartialPolicyPoolItemInterface is a class that gives the interface for a PolicyPoolItem |
 Perseus | Perseus contains basic functionality for the Perseus planner |
 PerseusBGPlanner | PerseusBGPlanner implements the Perseus planning algorithm for BGs |
 PerseusPOMDPPlanner | PerseusPOMDPPlanner implements the Perseus planning algorithm for POMDPs |
 PerseusQFunctionPlanner | PerseusQFunctionPlanner is a Perseus planner that uses QFunctions |
 PerseusStationary | PerseusStationary is Perseus for stationary policies |
 PlanningUnit | PlanningUnit represents a planning unit, i.e., a planning algorithm |
 PlanningUnitDecPOMDPDiscrete | PlanningUnitDecPOMDPDiscrete represents a planning unit for discrete Dec-POMDPs |
 PlanningUnitMADPDiscrete | PlanningUnitMADPDiscrete represents a Planning unit for a discrete MADP (discrete actions, observations and states) |
 PlanningUnitMADPDiscreteParameters | PlanningUnitMADPDiscreteParameters stores parameters of PlanningUnitMADPDiscrete |
 PlanningUnitTOIDecPOMDPDiscrete | PlanningUnitTOIDecPOMDPDiscrete represents a planning unit for transition observation independent discrete Dec-POMDPs |
 Policy | Policy is a class that represents a policy for a single agent |
 PolicyDiscrete | PolicyDiscrete is a class that represents a discrete policy |
 PolicyDiscretePure | PolicyDiscretePure is an abstract class that represents a pure policy for a discrete MADP |
 PolicyPoolInterface | PolicyPoolInterface is an interface for PolicyPools containing fully defined Joint Policies |
 PolicyPoolItemInterface | PolicyPoolItemInterface is a class that gives the interface for a PolicyPoolItem |
 PolicyPoolJPolValPair | PolicyPoolJPolValPair is a policy pool with joint policy - value pairs |
 PolicyPoolPartialJPolValPair | PolicyPoolPartialJPolValPair is a policy pool with partial joint policy - value pairs |
 PolicyPureVector | PolicyPureVector is a class that represents a pure (=deterministic) policy |
 POSG | POSG is a simple implementation of POSGInterface |
 POSGDiscrete | POSGDiscrete represents a discrete POSG model |
 POSGDiscreteInterface | POSGDiscreteInterface is the interface for a discrete POSG model: it defines the set/get reward functions |
 POSGInterface | POSGInterface is an interface for POSGs |
 ProblemDecTiger | ProblemDecTiger implements the DecTiger problem |
 ProblemFireFighting | ProblemFireFighting is a class that represents the firefighting problem as described in the GMAA reference (see DOC-references.h) |
 QAV | QAV implements a QFunctionJointBelief using a planner based on alpha functions, for instance the Perseus planners |
 QAVParameters | QAVParameters stores parameters for QAV |
 QBG | QBG is a class that represents the QBG heuristic |
 QFunction | QFunction is an empty abstract base class for Q functions |
 QFunctionForDecPOMDP | QFunctionForDecPOMDP is a class that represents a Q function for a Dec-POMDP |
 QFunctionForDecPOMDPInterface | QFunctionForDecPOMDPInterface is an interface for Q functions for a Dec-POMDP |
 QFunctionInterface | QFunctionInterface is an empty abstract base class defining the Q-function interface |
 QFunctionJAOH | QFunctionJAOH represents a Q-function that operates on joint action-observation histories |
 QFunctionJAOHInterface | QFunctionJAOHInterface is a class that is an interface for heuristics of the shape Q(JointActionObservationHistory, JointAction) |
 QFunctionJAOHTree | QFunctionJAOHTree represents a QFunctionJAOH that stores Q-values in a tree |
 QFunctionJointBelief | QFunctionJointBelief represents a Q-function that operates on joint beliefs |
 QFunctionJointBeliefInterface | QFunctionJointBeliefInterface is an interface for QFunctionJointBelief |
 QMDP | QMDP is a class that represents the QMDP heuristic |
 QPOMDP | QPOMDP is a class that represents the QPOMDP heuristic |
 QTable | QTable implements QTableInterface using a full matrix |
 QTableInterface | QTableInterface is the abstract base class for Q(., a) functions |
 Referrer | Referrer is a template class that represents objects that refer to another object |
 RewardModel | RewardModel represents the reward model in a decision process |
 RewardModelMapping | RewardModelMapping represents a discrete reward model |
 RewardModelMappingSparse | RewardModelMappingSparse represents a discrete reward model |
 RewardModelTOISparse | RewardModelTOISparse represents a discrete reward model based on vectors of states and actions. |
 RGet | RGet can be used for direct access to a reward model |
 RGet_RewardModelMapping | RGet_RewardModelMapping can be used for direct access to a RewardModelMapping |
 RGet_RewardModelMappingSparse | RGet_RewardModelMappingSparse can be used for direct access to a RewardModelMappingSparse |
 Simulation | Simulation is a class that simulates policies in order to test their control quality |
 SimulationAgent | SimulationAgent represents an agent for the Simulation class |
 SimulationDecPOMDPDiscrete | SimulationDecPOMDPDiscrete simulates policies in DecPOMDPDiscrete models |
 SimulationResult | SimulationResult stores the results from simulating a joint policy, the obtained rewards in particular |
 State | State is a class that represents states |
 StateDiscrete | StateDiscrete represents discrete states |
 StateDistribution | StateDistribution is an interface for probability distributions over states |
 StateDistributionVector | StateDistributionVector represents a probability distribution over states as a vector of doubles |
 TGet | TGet can be used for direct access to the transition model |
 TGet_TransitionModelMapping | TGet_TransitionModelMapping can be used for direct access to a TransitionModelMapping |
 TGet_TransitionModelMappingSparse | TGet_TransitionModelMappingSparse can be used for direct access to a TransitionModelMappingSparse |
 TimedAlgorithm | TimedAlgorithm allows for easy timekeeping of parts of an algorithm |
 Timing | Timing provides a simple way of timing code; a sketch of the bookkeeping follows the table |
  Times | Stores the start and end of a timespan, in clock cycles |
 TOICompactRewardDecPOMDPDiscrete | TOICompactRewardDecPOMDPDiscrete is a class that represents a transition observation independent Dec-POMDP, in which the reward is the sum of each agent's individual reward plus some shared reward |
 TOIDecMDPDiscrete | TOIDecMDPDiscrete is a class that represents a transition observation independent discrete Dec-MDP |
 TOIDecPOMDPDiscrete | TOIDecPOMDPDiscrete is a class that represents a transition observation independent discrete Dec-POMDP |
 TOIFactoredRewardDecPOMDPDiscrete | TOIFactoredRewardDecPOMDPDiscrete is a class that represents a transition observation independent Dec-POMDP, in which the reward is the sum of each agent's individual reward plus some shared reward |
 TransitionModel | TransitionModel represents the transition model in a decision process |
 TransitionModelDiscrete | TransitionModelDiscrete represents a discrete transition model |
 TransitionModelMapping | TransitionModelMapping implements a TransitionModelDiscrete |
 TransitionModelMappingSparse | TransitionModelMappingSparse implements a TransitionModelDiscrete |
 TransitionObservationIndependentMADPDiscrete | TransitionObservationIndependentMADPDiscrete is a base class that defines the primary properties of a transition and observation independent decision process |
 TreeNode | TreeNode represents a node in a tree of histories, for instance observation histories |
 Type | Type is an abstract class that represents a type (e.g., of an agent in a Bayesian game) |
 Type_AOHIndex | Type_AOHIndex is an implementation (extension) of Type that encodes a type as an action-observation history index |
 ValueFunction | ValueFunction is a class that represents a value function of a joint policy |
 ValueFunctionDecPOMDPDiscrete | ValueFunctionDecPOMDPDiscrete represents and calculates the value function of a (pure) joint policy for a discrete Dec-POMDP |
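
The sketches below illustrate a few of the entries above; they are minimal C++ examples under stated assumptions, not the MADP implementations.

BeliefValue computes the value of a particular belief, which for a set of alpha vectors is V(b) = max_alpha alpha . b. A self-contained sketch of that computation (ValueOfBelief and all names here are illustrative, not the MADP API):

    #include <iostream>
    #include <limits>
    #include <numeric>
    #include <vector>

    // Value of a belief under a set of alpha vectors: V(b) = max_alpha alpha . b
    double ValueOfBelief(const std::vector<double>& belief,
                         const std::vector<std::vector<double>>& alphaVectors)
    {
        double best = -std::numeric_limits<double>::infinity();
        for (const auto& alpha : alphaVectors)
        {
            // inner product of this alpha vector with the belief
            double v = std::inner_product(alpha.begin(), alpha.end(),
                                          belief.begin(), 0.0);
            if (v > best)
                best = v;
        }
        return best;
    }

    int main()
    {
        std::vector<double> b = {0.25, 0.75};  // belief over two states
        std::vector<std::vector<double>> alphas = {{1.0, 0.0}, {0.2, 0.6}};
        std::cout << ValueOfBelief(b, alphas) << '\n';  // max(0.25, 0.5) = 0.5
    }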
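
The std::less overloads listed under the std namespace exist because ordered containers such as std::priority_queue compare raw pointers by address unless told otherwise. A sketch of the idiom, with a hypothetical PolicyValuePair standing in for the MADP pair classes:

    #include <functional>
    #include <iostream>
    #include <queue>

    struct PolicyValuePair  // hypothetical stand-in for e.g. JointPolicyValuePair
    {
        double value;       // heuristic value of the policy
    };

    namespace std {
    // Specialize std::less so containers order these pointers by value,
    // not by memory address.
    template <>
    struct less<PolicyValuePair*>
    {
        bool operator()(const PolicyValuePair* a, const PolicyValuePair* b) const
        { return a->value < b->value; }
    };
    }

    int main()
    {
        std::priority_queue<PolicyValuePair*> pool;  // picks up std::less<PolicyValuePair*>
        PolicyValuePair p1{0.3}, p2{0.9};
        pool.push(&p1);
        pool.push(&p2);
        std::cout << pool.top()->value << '\n';      // 0.9: highest value on top
    }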
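
FixedCapacityPriorityQueue bounds memory by keeping only a fixed number of elements. A sketch of the assumed semantics (BoundedQueue is a hypothetical name, not the MADP class): once full, a new element either evicts the current worst or is discarded.

    #include <cstddef>
    #include <iostream>
    #include <set>

    class BoundedQueue  // hypothetical fixed-capacity priority queue
    {
        std::multiset<double> _items;  // ordered, smallest first
        std::size_t _capacity;
    public:
        explicit BoundedQueue(std::size_t capacity) : _capacity(capacity) {}

        void Insert(double priority)
        {
            if (_items.size() < _capacity)
                _items.insert(priority);
            else if (priority > *_items.begin())
            {
                _items.erase(_items.begin());  // evict the current worst
                _items.insert(priority);
            }
            // otherwise the queue is full and the new element is no better
        }

        double Best() const { return *_items.rbegin(); }
    };

    int main()
    {
        BoundedQueue q(2);
        q.Insert(0.1); q.Insert(0.7); q.Insert(0.4);  // 0.1 gets evicted
        std::cout << q.Best() << '\n';                // 0.7
    }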
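
Timing's nested Times record stores the start and end of a timespan in clock ticks. A generic sketch of that bookkeeping with std::clock (an assumption about the mechanism, not the MADP interface):

    #include <ctime>
    #include <iostream>

    struct Timespan  // hypothetical analogue of Timing::Times
    {
        std::clock_t start, end;  // start and end of the span, in clock ticks
        double Seconds() const
        { return static_cast<double>(end - start) / CLOCKS_PER_SEC; }
    };

    int main()
    {
        Timespan t;
        t.start = std::clock();
        volatile double x = 0;             // some work to time
        for (int i = 0; i < 1000000; ++i)
            x += i * 0.5;
        t.end = std::clock();
        std::cout << "took " << t.Seconds() << " s\n";
    }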