 ActionHistoryTree | ActionHistoryTree is a wrapper for ActionHistory |
 AlphaVector | AlphaVector represents an alpha vector used in POMDP solving |
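In alpha-vector methods, the value of a belief is the maximum, over the stored vectors, of the inner product with that belief. A minimal sketch of that evaluation (hypothetical names, not the MADP API):

```cpp
#include <algorithm>
#include <limits>
#include <numeric>
#include <vector>

// V(b) = max over alpha vectors of <alpha, b>.
// Illustrative only; names are hypothetical, not the MADP API.
double ValueOfBelief(const std::vector<double>& belief,
                     const std::vector<std::vector<double>>& alphaVectors)
{
    double best = -std::numeric_limits<double>::infinity();
    for (const auto& alpha : alphaVectors)
        best = std::max(best, std::inner_product(belief.begin(), belief.end(),
                                                 alpha.begin(), 0.0));
    return best;
}
```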
 ArgumentHandlers::Arguments | Arguments contains all defined parameters to be set on the command line |
 BayesianGameForDecPOMDPStageInterface | BayesianGameForDecPOMDPStageInterface is an interface for Bayesian games that model a single stage of a Dec-POMDP |
  BayesianGameForDecPOMDPStage | BayesianGameForDecPOMDPStage represents a BG for a single stage |
 BeliefInterface | BeliefInterface is an interface for beliefs, i.e., probability distributions over the state space |
  Belief | Belief represents a probability distribution over the state space |
   IndividualBeliefJESP | IndividualBeliefJESP stores individual beliefs for the JESP algorithm |
   JointBelief | JointBelief stores a joint belief, represented as a regular (dense) vector of doubles |
  BeliefSparse | BeliefSparse represents a probability distribution over the state space |
   JointBeliefSparse | JointBeliefSparse represents a sparse joint belief |
  JointBeliefInterface | JointBeliefInterface represents an interface for joint beliefs |
   JointBelief | JointBelief stores a joint belief, represented as a regular (dense) vector of doubles |
   JointBeliefSparse | JointBeliefSparse represents a sparse joint belief |
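The dense variants (Belief, JointBelief) store one probability per state, while the sparse variants keep only the non-zero entries. Illustrative layouts of the two choices (not the MADP implementation):

```cpp
#include <cstddef>
#include <map>
#include <vector>

// Dense belief: one double per state, O(|S|) memory, O(1) lookup.
using DenseBelief = std::vector<double>;

// Sparse belief: only states with non-zero probability are stored,
// which pays off when the belief has small support.
using SparseBelief = std::map<std::size_t, double>;

double Get(const SparseBelief& b, std::size_t stateI)
{
    auto it = b.find(stateI);
    return it == b.end() ? 0.0 : it->second;
}
```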
 BeliefIteratorGeneric | BeliefIteratorGeneric is an iterator for beliefs |
 BeliefIteratorInterface | BeliefIteratorInterface is an interface for iterators over beliefs |
  BeliefIterator | BeliefIterator is an iterator for dense beliefs |
  BeliefIteratorSparse | BeliefIteratorSparse is an iterator for sparse beliefs |
 BGforStageCreation | BGforStageCreation is a class that provides some functions to aid the construction of Bayesian games for a stage of a Dec-POMDP |
 BGIP_SolverCreatorInterface< JP > | BGIP_SolverCreatorInterface is an interface for classes that create BGIP solvers |
  BGIP_SolverCreator_AM< JP > | BGIP_SolverCreator_AM returns an Alternating Maximization BGIP_Solver |
  BGIP_SolverCreator_BFS< JP > | BGIP_SolverCreator_BFS returns a Brute Force Search BGIP_Solver |
 BGIPSolution | BGIPSolution represents a solution for BayesianGameIdenticalPayoff |
 DiscreteEntity | DiscreteEntity is a general class for tracking discrete entities |
  ActionDiscrete | ActionDiscrete represents discrete actions |
  Agent | Agent represents an agent |
  JointActionDiscrete | JointActionDiscrete represents discrete joint actions |
  JointObservationDiscrete | JointObservationDiscrete represents discrete joint observations |
  ObservationDiscrete | ObservationDiscrete represents discrete observations |
  StateDiscrete | StateDiscrete represents discrete states |
 E | E is a class that represents a basic exception |
  EInvalidIndex | EInvalidIndex represents an invalid index exception |
  ENotCached | ENotCached represents an exception thrown when requested data has not been cached |
  EOverflow | EOverflow represents an integer overflow exception |
  EParse | EParse represents a parser exception |
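The exception classes form a conventional base-plus-specializations hierarchy; a minimal sketch of the pattern (the real E class may differ, e.g., it need not derive from std::exception):

```cpp
#include <exception>
#include <string>

// Illustrative base exception carrying a message, with thin
// specializations, in the spirit of the E hierarchy above.
class BasicException : public std::exception {
    std::string _msg;
public:
    explicit BasicException(std::string msg) : _msg(std::move(msg)) {}
    const char* what() const noexcept override { return _msg.c_str(); }
};

struct InvalidIndexException : BasicException { using BasicException::BasicException; };
struct NotCachedException : BasicException { using BasicException::BasicException; };
```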
 FixedCapacityPriorityQueue< T > | FixedCapacityPriorityQueue is a class that represents a priority queue with a fixed size |
 History | History is a general class for histories |
  IndividualHistory | IndividualHistory represents a history for a single agent |
   ActionHistory | ActionHistory represents an action history of a single agent |
   ActionObservationHistory | ActionObservationHistory represents an action-observation history of an agent |
   ObservationHistory | ObservationHistory represents an observation history of a single agent |
  JointHistory | JointHistory represents a joint history, i.e., a history for each agent |
   JointActionHistory | JointActionHistory represents a joint action history |
   JointActionObservationHistory | JointActionObservationHistory represents a joint action observation history |
   JointObservationHistory | JointObservationHistory represents a joint observation history |
 Interface_ProblemToPolicyDiscrete | Interface_ProblemToPolicyDiscrete is an interface from discrete problems to policies |
  Interface_ProblemToPolicyDiscretePure | Interface_ProblemToPolicyDiscretePure is an interface from discrete problems to pure policies |
   BayesianGameBase | BayesianGameBase is a class that represents a Bayesian game |
    BayesianGame | BayesianGame is a class that represents a general Bayesian game in which each agent has its own utility function |
    BayesianGameIdenticalPayoffInterface | BayesianGameIdenticalPayoffInterface provides an interface for Bayesian Games with identical payoffs |
     BayesianGameIdenticalPayoff | BayesianGameIdenticalPayoff is a class that represents a Bayesian game with identical payoffs |
      BayesianGameForDecPOMDPStage | BayesianGameForDecPOMDPStage represents a BG for a single stage |
   PlanningUnitMADPDiscrete | PlanningUnitMADPDiscrete represents a planning unit for a discrete MADP (discrete actions, observations and states) |
    PlanningUnitDecPOMDPDiscrete | PlanningUnitDecPOMDPDiscrete represents a planning unit for discrete Dec-POMDPs |
     BruteForceSearchPlanner | BruteForceSearchPlanner implements an exact solution algorithm |
     DICEPSPlanner | DICEPSPlanner implements the Direct Cross-Entropy Policy Search method |
     GeneralizedMAAStarPlannerForDecPOMDPDiscrete | GeneralizedMAAStarPlannerForDecPOMDPDiscrete is a class that represents the Generalized MAA* planner |
      GMAA_kGMAA | GMAA_kGMAA is a class that represents a GMAA planner that performs k-GMAA, i.e., at each stage it retains only the k best-ranked joint Bayesian-game policies |
      GMAA_MAAstar | GMAA_MAAstar is a class that represents a planner that performs MAA* as described by Szer et al. |
     JESPDynamicProgrammingPlanner | JESPDynamicProgrammingPlanner plans with the DP JESP algorithm |
     JESPExhaustivePlanner | JESPExhaustivePlanner plans with the Exhaustive JESP algorithm |
     NullPlanner | NullPlanner represents a planner which does nothing, but can be used to instantiate a PlanningUnitDecPOMDPDiscrete |
     PlanningUnitTOIDecPOMDPDiscrete | PlanningUnitTOIDecPOMDPDiscrete represents a planning unit for transition observation independent discrete Dec-POMDPs |
      NullPlannerTOI | NullPlannerTOI represents a planner which does nothing, but can be used to instantiate a PlanningUnitTOIDecPOMDPDiscrete |
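These planning units are typically used as in the toolbox's introductory example, which solves the DecTiger problem with exhaustive JESP for horizon 3. The sketch below follows that example from memory, so headers and signatures may differ between MADP versions:

```cpp
#include <iostream>

#include "ProblemDecTiger.h"
#include "JESPExhaustivePlanner.h"

int main()
{
    // Construct the DecTiger problem and a JESP planner for horizon 3.
    ProblemDecTiger dectiger;
    JESPExhaustivePlanner jesp(3, &dectiger);

    // Compute and print a joint policy and its expected reward.
    jesp.Plan();
    std::cout << jesp.GetExpectedReward() << std::endl;
    std::cout << jesp.GetJointPolicy()->SoftPrint() << std::endl;
    return 0;
}
```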
 JointAction | JointAction represents a joint action |
  JointActionDiscrete | JointActionDiscrete represents discrete joint actions |
 JointActionHistoryTree | JointActionHistoryTree is a wrapper for JointActionHistory |
 JointObservation | JointObservation represents joint observations |
  JointObservationDiscrete | JointObservationDiscrete represents discrete joint observations |
 JointPolicy | JointPolicy is a class that represents a joint policy |
  JointPolicyDiscrete | JointPolicyDiscrete is a class that represents a discrete joint policy |
   JointPolicyDiscretePure | JointPolicyDiscretePure represents a pure joint policy for a discrete MADP |
    JointPolicyPureVector | JointPolicyPureVector represents a discrete pure joint policy |
    PartialJointPolicyDiscretePure | PartialJointPolicyDiscretePure is a discrete and pure PartialJointPolicy |
     PartialJointPolicyPureVector | PartialJointPolicyPureVector implements a PartialJointPolicy using a mapping of history indices to actions |
 JPolComponent_VectorImplementation | JPolComponent_VectorImplementation implements functionality common to several joint policy implementations |
  JointPolicyPureVector | JointPolicyPureVector represents a discrete pure joint policy |
  PartialJointPolicyPureVector | PartialJointPolicyPureVector implements a PartialJointPolicy using a mapping of history indices to actions |
 std::less< JointPolicyValuePair * > | Overload the less<Type> template for JointPolicyValuePair* (we want less to give an ordering according to values, not addresses...) |
 std::less< JPPVValuePair * > | Overload the less<Type> template for JPPVValuePair* (we want less to give an ordering according to values, not addresses...) |
 std::less< PartialJointPolicyValuePair * > | Overload the less<Type> template for PartialJointPolicyValuePair* (we want less to give an ordering according to values, not addresses...) |
 std::less< PartialJPDPValuePair * > | Overload the less<Type> template for PartialJPDPValuePair* (we want less to give an ordering according to values, not addresses...) |
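The intent of these overloads is that a std::priority_queue of pointers orders by heuristic value instead of by address. The general pattern, with a hypothetical pair type:

```cpp
#include <queue>

// Hypothetical stand-in for the (policy, value) pair classes.
struct ValuePair { double value; };

namespace std {
// With this specialization, priority_queue<ValuePair*> becomes a
// max-heap on value rather than on pointer addresses.
template <> struct less<ValuePair*> {
    bool operator()(const ValuePair* a, const ValuePair* b) const
    { return a->value < b->value; }
};
}

int main()
{
    ValuePair p1{1.5}, p2{2.5};
    std::priority_queue<ValuePair*> q;
    q.push(&p1);
    q.push(&p2);
    return q.top()->value == 2.5 ? 0 : 1;  // top is the highest value
}
```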
 MADPParser | MADPParser is a general class for parsers in MADP |
 MDPSolver | MDPSolver is an interface for MDP solvers |
  MDPValueIteration | MDPValueIteration implements value iteration for MDPs |
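MDPValueIteration computes values by repeated Bellman backups, Q(s,a) = R(s,a) + gamma * sum_s' T(s,a,s') V(s'). A generic sketch of the algorithm over dense transition and reward tables (hypothetical layout, not MADP's interface):

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <vector>

// T indexed as T[s][a][s'], R as R[s][a]; iterate until the largest
// change in V drops below epsilon. Names are hypothetical.
std::vector<double> ValueIteration(
    const std::vector<std::vector<std::vector<double>>>& T,
    const std::vector<std::vector<double>>& R,
    double gamma, double epsilon)
{
    const std::size_t nS = T.size(), nA = R[0].size();
    std::vector<double> V(nS, 0.0);
    double delta;
    do {
        delta = 0.0;
        for (std::size_t s = 0; s < nS; ++s) {
            double best = -1e300;
            for (std::size_t a = 0; a < nA; ++a) {
                double q = R[s][a];
                for (std::size_t s2 = 0; s2 < nS; ++s2)
                    q += gamma * T[s][a][s2] * V[s2];
                best = std::max(best, q);
            }
            delta = std::max(delta, std::fabs(best - V[s]));
            V[s] = best;
        }
    } while (delta > epsilon);
    return V;
}
```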
 MultiAgentDecisionProcessInterface | MultiAgentDecisionProcessInterface is an abstract base class that declares the primary properties of a multiagent decision process |
  MultiAgentDecisionProcess | MultiAgentDecisionProcess is a class that defines the primary properties of a decision process |
   MultiAgentDecisionProcessDiscrete | MultiAgentDecisionProcessDiscrete defines the primary properties of a discrete decision process |
    DecPOMDPDiscrete | DecPOMDPDiscrete represents a discrete DEC-POMDP model |
     ProblemDecTiger | ProblemDecTiger implements the DecTiger problem |
     ProblemFireFighting | ProblemFireFighting is a class that represents the firefighting problem as described in the GMAA reference (see DOC-references.h) |
    POSGDiscrete | POSGDiscrete represents a discrete POSG model |
   TransitionObservationIndependentMADPDiscrete | TransitionObservationIndependentMADPDiscrete is a base class that defines the primary properties of a transition and observation independent decision process |
    TOIDecPOMDPDiscrete | TOIDecPOMDPDiscrete is a class that represents a transition observation independent discrete DecPOMDP |
     TOICompactRewardDecPOMDPDiscrete | TOICompactRewardDecPOMDPDiscrete is a class that represents a transition observation independent Dec-POMDP, in which the reward is the sum of each agent's individual reward plus some shared reward |
     TOIDecMDPDiscrete | TOIDecMDPDiscrete is a class that represents a transition observation independent discrete DecMDP |
     TOIFactoredRewardDecPOMDPDiscrete | TOIFactoredRewardDecPOMDPDiscrete is a class that represents a transition observation independent Dec-POMDP, in which the reward is the sum of each agent's individual reward plus some shared reward |
  MultiAgentDecisionProcessDiscreteInterface | MultiAgentDecisionProcessDiscreteInterface is an abstract base class that defines publicly accessible member functions that a discrete multiagent decision process must implement |
   MADPComponentDiscreteActions | MADPComponentDiscreteActions contains functionality for discrete action spaces |
    MultiAgentDecisionProcessDiscrete | MultiAgentDecisionProcessDiscrete defines the primary properties of a discrete decision process |
   MADPComponentDiscreteObservations | MADPComponentDiscreteObservations contains functionality for discrete observation spaces |
    MultiAgentDecisionProcessDiscrete | MultiAgentDecisionProcessDiscrete defines the primary properties of a discrete decision process |
   MADPComponentDiscreteStates | MADPComponentDiscreteStates is a class that represents a discrete state space |
    MultiAgentDecisionProcessDiscrete | MultiAgentDecisionProcessDiscrete defines the primary properties of a discrete decision process |
   MultiAgentDecisionProcessDiscrete | MultiAgentDecisionProcessDiscrete defines the primary properties of a discrete decision process |
   POSGDiscreteInterface | POSGDiscreteInterface is the interface for a discrete POSG model: it defines the set/get reward functions |
    DecPOMDPDiscreteInterface | DecPOMDPDiscreteInterface is the interface for a discrete DEC-POMDP model: it defines the set/get reward functions |
     DecPOMDPDiscrete | DecPOMDPDiscrete represents a discrete DEC-POMDP model |
     TOIDecPOMDPDiscrete | TOIDecPOMDPDiscrete is a class that represents a transition observation independent discrete DecPOMDP |
    POSGDiscrete | POSGDiscrete represents a discrete POSG model |
   TransitionObservationIndependentMADPDiscrete | TransitionObservationIndependentMADPDiscrete is a base class that defines the primary properties of a transition and observation independent decision process |
  POSGInterface | POSGInterface is an interface for POSGs |
   DecPOMDPInterface | DecPOMDPInterface is an interface for DecPOMDPs |
    DecPOMDP | DecPOMDP is a simple implementation of DecPOMDPInterface |
     DecPOMDPDiscrete | DecPOMDPDiscrete represents a discrete DEC-POMDP model |
     TOIDecPOMDPDiscrete | TOIDecPOMDPDiscrete is a class that represents a transition observation independent discrete DecPOMDP |
    DecPOMDPDiscreteInterface | DecPOMDPDiscreteInterface is the interface for a discrete DEC-POMDP model: it defines the set/get reward functions |
   POSG | POSG is a simple implementation of POSGInterface |
    POSGDiscrete | POSGDiscrete represents a discrete POSG model |
   POSGDiscreteInterface | POSGDiscreteInterface is the interface for a discrete POSG model: it defines the set/get reward functions |
 NamedDescribedEntity | NamedDescribedEntity represents named entities |
  Action | Action is a class that represents actions |
   ActionDiscrete | ActionDiscrete represents discrete actions |
  Agent | Agent represents an agent |
  MultiAgentDecisionProcess | MultiAgentDecisionProcess is a class that defines the primary properties of a decision process |
  Observation | Observation represents observations |
   ObservationDiscrete | ObservationDiscrete represents discrete observations |
  State | State is a class that represents states |
   StateDiscrete | StateDiscrete represents discrete states |
 ObservationModel | ObservationModel represents the observation model in a decision process |
  ObservationModelDiscrete | ObservationModelDiscrete represents a discrete observation model |
   ObservationModelMapping | ObservationModelMapping implements an ObservationModelDiscrete |
   ObservationModelMappingSparse | ObservationModelMappingSparse implements an ObservationModelDiscrete |
 OGet | OGet can be used for direct access to the observation model |
  OGet_ObservationModelMapping | OGet_ObservationModelMapping can be used for direct access to a ObservationModelMapping |
  OGet_ObservationModelMappingSparse | OGet_ObservationModelMappingSparse can be used for direct access to a ObservationModelMappingSparse |
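The purpose of the TGet/OGet/RGet helpers is to give inner loops direct access to the concrete model representation rather than going through the abstract model interface on every call. A sketch of that idea over a flattened dense table (hypothetical names, not the MADP classes):

```cpp
#include <cstddef>
#include <vector>

// Dense observation table O[ja][s'][jo] flattened into one vector;
// Get() is non-virtual and trivially inlinable, which is the point
// of the direct-access helpers.
class ObservationTable {
    std::vector<double> _p;
    std::size_t _nS, _nJO;
public:
    ObservationTable(std::size_t nJA, std::size_t nS, std::size_t nJO)
        : _p(nJA * nS * nJO, 0.0), _nS(nS), _nJO(nJO) {}
    double Get(std::size_t ja, std::size_t sucS, std::size_t jo) const
    { return _p[(ja * _nS + sucS) * _nJO + jo]; }
};
```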
 ParserInterface | ParserInterface is an interface for parsers |
  ParserTOIDecMDPDiscrete | ParserTOIDecMDPDiscrete is a parser for TOIDecMDPDiscrete |
  ParserTOIDecPOMDPDiscrete | ParserTOIDecPOMDPDiscrete is a parser for TOIDecPOMDPDiscrete |
   ParserTOICompactRewardDecPOMDPDiscrete | ParserTOICompactRewardDecPOMDPDiscrete is a parser for TOICompactRewardDecPOMDPDiscrete |
   ParserTOIFactoredRewardDecPOMDPDiscrete | ParserTOIFactoredRewardDecPOMDPDiscrete is a parser for TransitionObservationIndependentFactoredRewardDecPOMDPDiscrete |
 PartialJointPolicy | PartialJointPolicy represents a joint policy that is only specified for t time steps instead of for every time step |
  PartialJointPolicyDiscretePure | PartialJointPolicyDiscretePure is a discrete and pure PartialJointPolicy |
 PartialPolicyPoolInterface | PartialPolicyPoolInterface is an interface for PolicyPools containing Partial Joint Policies |
  PolicyPoolPartialJPolValPair | PolicyPoolPartialJPolValPair is a policy pool with partial joint policy - value pairs |
 PartialPolicyPoolItemInterface | PartialPolicyPoolItemInterface is a class that gives the interface for a PolicyPoolItem |
  PartialJointPolicyValuePair | PartialJointPolicyValuePair is a wrapper for a partial joint policy and its heuristic value |
   PartialJPDPValuePair | PartialJPDPValuePair represents a (PartialJointPolicyDiscretePure,Value) pair, which stores the full PartialJointPolicyDiscretePure |
   PartialJPPVIndexValuePair | PartialJPPVIndexValuePair represents a (PartialJointPolicyPureVector,Value) pair |
 PlanningUnit | PlanningUnit represents a planning unit, i.e., a planning algorithm |
  PlanningUnitMADPDiscrete | PlanningUnitMADPDiscrete represents a planning unit for a discrete MADP (discrete actions, observations and states) |
 PlanningUnitMADPDiscreteParameters | PlanningUnitMADPDiscreteParameters stores parameters of PlanningUnitMADPDiscrete |
 Policy | Policy is a class that represents a policy for a single agent |
  PolicyDiscrete | PolicyDiscrete is a class that represents a discrete policy |
   PolicyDiscretePure | PolicyDiscretePure is an abstract class that represents a pure policy for a discrete MADP |
    PolicyPureVector | PolicyPureVector is a class that represents a pure (=deterministic) policy |
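A pure discrete policy is a deterministic map from history indices to action indices, which the *PureVector classes store as a flat vector. Conceptually (hypothetical names):

```cpp
#include <cstddef>
#include <vector>

// Conceptual core of PolicyPureVector: entry h holds the action
// taken after observation history h.
struct PurePolicySketch {
    std::vector<std::size_t> actionForHistory;

    std::size_t GetActionIndex(std::size_t historyI) const
    { return actionForHistory[historyI]; }

    void SetAction(std::size_t historyI, std::size_t actionI)
    { actionForHistory[historyI] = actionI; }
};
```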
 PolicyPoolInterface | PolicyPoolInterface is an interface for PolicyPools containing fully defined Joint Policies |
  PolicyPoolJPolValPair | PolicyPoolJPolValPair is a policy pool with joint policy - value pairs |
 PolicyPoolItemInterface | PolicyPoolItemInterface is a class that gives the interface for a PolicyPoolItem |
  JointPolicyValuePair | JointPolicyValuePair is a wrapper for a joint policy and its heuristic value |
   JPPVIndexValuePair | JPPVIndexValuePair represents a (JointPolicyPureVector,Value) pair |
   JPPVValuePair | JPPVValuePair represents a (JointPolicyPureVector,Value) pair, which stores the full JointPolicyPureVector |
 QAVParameters | QAVParameters stores the parameters used by QAV |
 QFunctionInterface | QFunctionInterface is an abstract base class for Q-functions; it declares no functionality itself |
  QFunction | QFunction is an abstract base class for Q-functions that adds no functionality of its own |
   QFunctionForDecPOMDP | QFunctionForDecPOMDP is a class that represents a Q function for a Dec-POMDP |
    QFunctionJAOH | QFunctionJAOH represents a Q-function that operates on joint action-observation histories |
     QFunctionJAOHTree | QFunctionJAOHTree is a QFunctionJAOH that stores Q-values in a tree |
      QBG | QBG is a class that represents the QBG heuristic |
      QPOMDP | QPOMDP is a class that represents the QPOMDP heuristic |
     QMDP | QMDP is a class that represents the QMDP heuristic |
    QFunctionJointBelief | QFunctionJointBelief represents a Q-function that operates on joint beliefs |
     QAV< P > | QAV implements a QFunctionJointBelief using a planner based on alpha functions, for instance the Perseus planners |
  QFunctionForDecPOMDPInterface | QFunctionForDecPOMDPInterface is a class that represents a Q function for a Dec-POMDP |
   QFunctionForDecPOMDP | QFunctionForDecPOMDP is a class that represents a Q function for a Dec-POMDP |
   QFunctionJAOHInterface | QFunctionJAOHInterface is a class that is an interface for heuristics of the shape Q(JointActionObservationHistory, JointAction) |
    QFunctionJAOH | QFunctionJAOH represents a Q-function that operates on joint action-observation histories |
   QFunctionJointBeliefInterface | QFunctionJointBeliefInterface is an interface for QFunctionJointBelief |
    QFunctionJointBelief | QFunctionJointBelief represents a Q-function that operates on joint beliefs |
    QMDP | QMDP is a class that represents the QMDP heuristic |
 QTableInterface | QTableInterface is the abstract base class for Q(., a) functions |
  QTable | QTable implements QTableInterface using a full matrix |
  RewardModel | RewardModel represents the reward model in a decision process |
   RewardModelMapping | RewardModelMapping represents a discrete reward model |
   RewardModelMappingSparse | RewardModelMappingSparse represents a discrete reward model |
 Referrer< T > | Referrer is a template class that represents objects that refer to another object |
 Referrer< BayesianGameIdenticalPayoffInterface > | |
  BayesianGameIdenticalPayoffSolver< JP > | BayesianGameIdenticalPayoffSolver is an interface for solvers for Bayesian games with identical payoff |
   BGIP_SolverAlternatingMaximization< JP > | BGIP_SolverAlternatingMaximization implements an approximate solver for identical payoff Bayesian games, based on alternating maximization |
   BGIP_SolverBruteForceSearch< JP > | BGIP_SolverBruteForceSearch is a class that performs Brute force search for identical payoff Bayesian Games |
  BayesianGameIdenticalPayoffSolver< JointPolicyPureVector > | |
   BGIP_SolverRandom | BGIP_SolverRandom creates random solutions to Bayesian games for testing purposes |
 Referrer< DecPOMDPDiscreteInterface > | |
  PlanningUnitDecPOMDPDiscrete | PlanningUnitDecPOMDPDiscrete represents a planning unit for discrete Dec-POMDPs |
 Referrer< JointPolicyDiscretePure > | |
  ValueFunctionDecPOMDPDiscrete | ValueFunctionDecPOMDPDiscrete represents and calculates the value function of a (pure) joint policy for a discrete Dec-POMDP |
 Referrer< MultiAgentDecisionProcessDiscreteInterface > | |
  PlanningUnitMADPDiscrete | PlanningUnitMADPDiscrete represents a planning unit for a discrete MADP (discrete actions, observations and states) |
 Referrer< PlanningUnitDecPOMDPDiscrete > | |
  ValueFunctionDecPOMDPDiscrete | ValueFunctionDecPOMDPDiscrete represents and calculates the value function of a (pure) joint policy for a discrete Dec-POMDP |
 Referrer< PlanningUnitMADPDiscrete > | |
  ActionHistory | ActionHistory represents an action history of a single agent |
  ActionObservationHistory | ActionObservationHistory represents an action-observation history of an agent |
  JointActionHistory | JointActionHistory represents a joint action history |
  JointActionObservationHistory | JointActionObservationHistory represents a joint action observation history |
  JointObservationHistory | JointObservationHistory represents a joint observation history |
  ObservationHistory | ObservationHistory represents an observation history of a single agent |
 Referrer< TOIDecPOMDPDiscrete > | |
  PlanningUnitTOIDecPOMDPDiscrete | PlanningUnitTOIDecPOMDPDiscrete represents a planning unit for transition observation independent discrete Dec-POMDPs |
 RewardModelTOISparse | RewardModelTOISparse represents a discrete reward model based on vectors of states and actions. |
 RGet | RGet can be used for direct access to a reward model |
  RGet_RewardModelMapping | RGet_RewardModelMapping can be used for direct access to a RewardModelMapping |
  RGet_RewardModelMappingSparse | RGet_RewardModelMappingSparse can be used for direct access to a RewardModelMappingSparse |
 Simulation | Simulation is a class that simulates policies in order to test their control quality |
  SimulationDecPOMDPDiscrete | SimulationDecPOMDPDiscrete simulates policies for a DecPOMDPDiscrete |
 SimulationAgent | SimulationAgent represents an agent for use with the Simulation class |
  AgentDecPOMDPDiscrete | AgentDecPOMDPDiscrete represents an agent in a discrete DecPOMDP setting |
   AgentDelayedSharedObservations | AgentDelayedSharedObservations represents an agent that acts on local observations and the shared observation at the previous time step |
    AgentBG | AgentBG represents an agent which uses a BG-based policy |
   AgentFullyObservable | AgentFullyObservable represents an agent that receives the true state, the joint observation and also the reward signal |
    AgentRandom | AgentRandom represents an agent which chooses action uniformly at random |
   AgentLocalObservations | AgentLocalObservations represents an agent that acts on local observations |
    AgentRandom | AgentRandom represents an agent which chooses action uniformly at random |
   AgentSharedObservations | AgentSharedObservations represents an agent that benefits from free communication, i.e., it can share all its observations |
    AgentPOMDP | AgentPOMDP represents an agent which uses a POMDP-based policy |
    AgentQMDP | AgentQMDP represents an agent which uses a QMDP-based policy |
 SimulationResult | SimulationResult stores the results from simulating a joint policy, the obtained rewards in particular |
 StateDistribution | StateDistribution is an interface for probability distributions over states |
  StateDistributionVector | StateDistributionVector represents a probability distribution over states as a vector of doubles |
 TGet | TGet can be used for direct access to the transition model |
  TGet_TransitionModelMapping | TGet_TransitionModelMapping can be used for direct access to a TransitionModelMapping |
  TGet_TransitionModelMappingSparse | TGet_TransitionModelMappingSparse can be used for direct access to a TransitionModelMappingSparse |
 TimedAlgorithm | TimedAlgorithm allows for easy timekeeping of parts of an algorithm |
  AlphaVectorPlanning | AlphaVectorPlanning provides base functionality for alpha-vector based POMDP or BG techniques |
   AlphaVectorBG | AlphaVectorBG implements Bayesian Game specific functionality for alpha-vector based planning |
    PerseusBGPlanner | PerseusBGPlanner implements the Perseus planning algorithm for BGs |
   AlphaVectorPOMDP | AlphaVectorPOMDP implements POMDP specific functionality for alpha-vector based planning |
    PerseusPOMDPPlanner | PerseusPOMDPPlanner implements the Perseus planning algorithm for POMDPs |
   Perseus | Perseus contains basic functionality for the Perseus planner (a sketch of the shared improvement loop follows this hierarchy) |
    PerseusStationary | PerseusStationary is Perseus for stationary policies |
     PerseusPOMDPPlanner | PerseusPOMDPPlanner implements the Perseus planning algorithm for POMDPs |
     PerseusQFunctionPlanner | PerseusQFunctionPlanner is a Perseus planner that uses QFunctions |
      PerseusBGPlanner | PerseusBGPlanner implements the Perseus planning algorithm for BGs |
  DICEPSPlanner | DICEPSPlanner implements the Direct Cross-Entropy Policy Search method |
  GeneralizedMAAStarPlanner | GeneralizedMAAStarPlanner is a class that represents the Generalized MAA* planner class |
   GeneralizedMAAStarPlannerForDecPOMDPDiscrete | GeneralizedMAAStarPlannerForDecPOMDPDiscrete is a class that represents the Generalized MAA* planner |
  MDPValueIteration | MDPValueIteration implements value iteration for MDPs |
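The Perseus planners above share the randomized point-based improvement stage: back up randomly sampled beliefs and stop as soon as every sampled belief's value has improved. A sketch of that stage with the single-belief backup abstracted away (illustrative, not MADP's interface):

```cpp
#include <algorithm>
#include <cstdlib>
#include <functional>
#include <vector>

using Belief = std::vector<double>;
using Alpha  = std::vector<double>;

double Dot(const Belief& b, const Alpha& a)
{
    double s = 0.0;
    for (std::size_t i = 0; i < b.size(); ++i) s += b[i] * a[i];
    return s;
}

double Value(const Belief& b, const std::vector<Alpha>& V)
{
    double best = -1e300;
    for (const auto& a : V) best = std::max(best, Dot(b, a));
    return best;
}

// One Perseus improvement stage: 'backup' returns the best alpha
// vector for a single belief under the current value function V.
std::vector<Alpha> PerseusStage(
    const std::vector<Belief>& B, const std::vector<Alpha>& V,
    const std::function<Alpha(const Belief&, const std::vector<Alpha>&)>& backup)
{
    std::vector<Alpha> Vnext;
    std::vector<const Belief*> todo;
    for (const auto& b : B) todo.push_back(&b);
    while (!todo.empty()) {
        const Belief& b = *todo[std::rand() % todo.size()];
        Alpha a = backup(b, V);
        if (Dot(b, a) < Value(b, V)) {
            // Backup did not improve b: keep its best old vector instead.
            double best = -1e300;
            for (const auto& old : V)
                if (Dot(b, old) > best) { best = Dot(b, old); a = old; }
        }
        Vnext.push_back(a);
        // Only beliefs not yet improved by Vnext remain to be processed.
        std::vector<const Belief*> rest;
        for (const Belief* bp : todo)
            if (Value(*bp, Vnext) < Value(*bp, V)) rest.push_back(bp);
        todo.swap(rest);
    }
    return Vnext;
}
```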
 Timing::Times | Stores the start and end of a timespan, in clock cycles |
 Timing | Timing provides a simple way of timing code |
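Timing and TimedAlgorithm wrap named start/stop spans around parts of an algorithm. The same effect in portable C++ (a sketch with a hypothetical API; the MADP class records clock cycles):

```cpp
#include <chrono>
#include <map>
#include <string>

// Minimal named-span timer in the spirit of Timing: Start()/Stop()
// accumulate wall-clock seconds per identifier.
class TimerSketch {
    using Clock = std::chrono::steady_clock;
    std::map<std::string, Clock::time_point> _open;
    std::map<std::string, double> _total;
public:
    void Start(const std::string& id) { _open[id] = Clock::now(); }
    void Stop(const std::string& id)
    {
        std::chrono::duration<double> d = Clock::now() - _open[id];
        _total[id] += d.count();
    }
    double TotalSeconds(const std::string& id) const
    {
        auto it = _total.find(id);
        return it == _total.end() ? 0.0 : it->second;
    }
};
```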
 TransitionModel | TransitionModel represents the transition model in a decision process |
  TransitionModelDiscrete | TransitionModelDiscrete represents a discrete transition model |
   TransitionModelMapping | TransitionModelMapping implements a TransitionModelDiscrete |
   TransitionModelMappingSparse | TransitionModelMappingSparse implements a TransitionModelDiscrete |
 TreeNode< Tcontained > | TreeNode represents a node in a tree of histories, for instance observation histories |
 TreeNode< ActionObservationHistory > | |
  ActionObservationHistoryTree | ActionObservationHistoryTree is a wrapper for ActionObservationHistory |
 TreeNode< JointActionObservationHistory > | |
  JointActionObservationHistoryTree | JointActionObservationHistoryTree is a wrapper for JointActionObservationHistory, derived from TreeNode and similar to ObservationHistoryTree |
 TreeNode< JointObservationHistory > | |
  JointObservationHistoryTree | JointObservationHistoryTree is a wrapper for the JointObservationHistory class |
 TreeNode< ObservationHistory > | |
  ObservationHistoryTree | ObservationHistoryTree is a wrapper for the ObservationHistory class |
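All of these *HistoryTree wrappers follow the same structure: a node owns one contained history plus one successor node per successor index (e.g., per observation), so histories can be extended and looked up by walking the tree. A structural sketch (hypothetical names):

```cpp
#include <cstddef>
#include <memory>
#include <utility>
#include <vector>

// Structural sketch of TreeNode<Tcontained>: each node wraps one
// history object and owns a successor per index.
template <typename Tcontained>
struct TreeNodeSketch {
    Tcontained contained;
    std::vector<std::unique_ptr<TreeNodeSketch>> successors;

    TreeNodeSketch(Tcontained c, std::size_t branching)
        : contained(std::move(c)), successors(branching) {}

    TreeNodeSketch* Successor(std::size_t i) const { return successors[i].get(); }
};
```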
 Type | Type is an abstract class that represents a type (e.g., the type of an agent in a Bayesian game) |
  Type_AOHIndex | Type_AOHIndex is an implementation (extension) of Type that represents a type as an action-observation history index, e.g., in a Bayesian game |
 ValueFunction | ValueFunction is a class that represents a value function of a joint policy |
  ValueFunctionDecPOMDPDiscrete | ValueFunctionDecPOMDPDiscrete represents and calculates the value function of a (pure) joint policy for a discrete Dec-POMDP |