MultiAgentDecisionProcess  Release 0.2.1
Class List
Here are the classes, structs, unions and interfaces with brief descriptions; a minimal usage sketch showing how the main classes fit together follows the class list:
Namespaces

ArgumentHandlers contains functionality for parsing and handling command-line arguments
ArgumentUtils provides a way to get problem instantiations directly from the command-line arguments
BeliefValue is a namespace for functions that compute the value of a particular belief
BGIP_SolverType
directories
Globals contains several definitions global to the MADP toolbox
GMAAtype
IndexTools contains functionality for manipulating indices (see the sketch following this list)
JESPtype
PolicyGlobals
PrintTools contains functionality for printing vectors, etc.
ProblemType is a class that represents TODO: fill out..
qheur
std is the STL namespace
StringTools is a namespace that contains utility functions for strings
TimeTools
VectorTools
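The index manipulation that IndexTools provides centers on converting between individual indices (one per agent) and a single joint index, which is mixed-radix arithmetic. The sketch below is a minimal, self-contained illustration of that conversion; the names IndividualToJointIndices and JointToIndividualIndices and their signatures are assumptions in the spirit of the brief above, not necessarily the exact API of this release.

#include <cstddef>
#include <iostream>
#include <vector>

// A joint index packs one index per agent into a single number using a
// mixed-radix scheme: agent 0's index is the most significant "digit".
// These are hypothetical stand-ins for the IndexTools conversions; check
// IndexTools.h for the actual signatures in this release.
typedef unsigned int Index;

Index IndividualToJointIndices(const std::vector<Index>& indices,
                               const std::vector<size_t>& nrElems)
{
    Index joint = 0;
    for (size_t i = 0; i < indices.size(); ++i)
        joint = joint * nrElems[i] + indices[i]; // shift in the next digit
    return joint;
}

std::vector<Index> JointToIndividualIndices(Index jointI,
                                            const std::vector<size_t>& nrElems)
{
    std::vector<Index> indices(nrElems.size());
    for (size_t i = nrElems.size(); i-- > 0; ) // peel digits off the back
    {
        indices[i] = jointI % nrElems[i];
        jointI /= nrElems[i];
    }
    return indices;
}

int main()
{
    // Two agents with 3 and 2 actions: joint action (2, 1) maps to 2*2+1 = 5.
    std::vector<size_t> nrActions;
    nrActions.push_back(3);
    nrActions.push_back(2);
    std::vector<Index> ind;
    ind.push_back(2);
    ind.push_back(1);

    Index joint = IndividualToJointIndices(ind, nrActions);
    std::cout << "joint index: " << joint << std::endl;  // prints 5

    std::vector<Index> back = JointToIndividualIndices(joint, nrActions);
    std::cout << back[0] << "," << back[1] << std::endl; // prints 2,1
    return 0;
}

The same scheme underlies joint actions, joint observations, and history indices throughout the toolbox, which is why a single namespace of index utilities suffices.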
Classes

Action is a class that represents actions
ActionDiscrete represents discrete actions
ActionHistory represents an action history of a single agent
ActionHistoryTree is a wrapper for ActionHistory
ActionObservationHistory represents an action-observation history of an agent
ActionObservationHistoryTree is a wrapper for ActionObservationHistory
Agent represents an agent
AgentBG represents an agent which uses a BG-based policy
AgentDecPOMDPDiscrete represents an agent in a discrete Dec-POMDP setting
AgentDelayedSharedObservations represents an agent that acts on local observations and the shared observation at the previous time step
AgentFullyObservable represents an agent that receives the true state, the joint observation and also the reward signal
AgentLocalObservations represents an agent that acts on local observations
AgentPOMDP represents an agent which uses a POMDP-based policy
AgentQMDP represents an agent which uses a QMDP-based policy
AgentRandom represents an agent which chooses actions uniformly at random
AgentSharedObservations represents an agent that benefits from free communication, i.e., it can share all its observations
AlphaVector represents an alpha vector used in POMDP solving
AlphaVectorBG implements Bayesian game specific functionality for alpha-vector based planning
AlphaVectorPlanning provides base functionality for alpha-vector based POMDP or BG techniques
AlphaVectorPOMDP implements POMDP specific functionality for alpha-vector based planning
BayesianGame is a class that represents a general Bayesian game in which each agent has its own utility function
BayesianGameBase is a class that represents a Bayesian game
BayesianGameForDecPOMDPStage represents a BG for a single stage
BayesianGameForDecPOMDPStageInterface is a class that represents TODO: fill out..
BayesianGameIdenticalPayoff is a class that represents a Bayesian game with identical payoffs
BayesianGameIdenticalPayoffInterface provides an interface for Bayesian games with identical payoffs
BayesianGameIdenticalPayoffSolver is an interface for solvers for Bayesian games with identical payoffs
Belief represents a probability distribution over the state space
BeliefInterface is an interface for beliefs, i.e., probability distributions over the state space
BeliefIterator is an iterator for dense beliefs
BeliefIteratorGeneric is an iterator for beliefs
BeliefIteratorInterface is an interface for iterators over beliefs
BeliefIteratorSparse is an iterator for sparse beliefs
BeliefSparse represents a probability distribution over the state space, stored sparsely
BGforStageCreation is a class that provides some functions to aid the construction of Bayesian games for a stage of a Dec-POMDP
BGIP_SolverAlternatingMaximization implements an approximate solver for identical-payoff Bayesian games, based on alternating maximization
BGIP_SolverBruteForceSearch is a class that performs brute-force search for identical-payoff Bayesian games
BGIP_SolverCreator_AM returns an alternating maximization BGIP_Solver
BGIP_SolverCreator_BFS returns a brute-force search BGIP_Solver
BGIP_SolverCreatorInterface is an interface for classes that create BGIP solvers
BGIP_SolverRandom creates random solutions to Bayesian games for testing purposes
BGIPSolution represents a solution for BayesianGameIdenticalPayoff
BruteForceSearchPlanner implements an exact solution algorithm
DecPOMDP is a simple implementation of DecPOMDPInterface
DecPOMDPDiscrete represents a discrete Dec-POMDP model
DecPOMDPDiscreteInterface is the interface for a discrete Dec-POMDP model: it defines the set/get reward functions
DecPOMDPInterface is an interface for Dec-POMDPs
DICEPSPlanner implements the Direct Cross-Entropy Policy Search method
DiscreteEntity is a general class for tracking discrete entities
E is a class that represents a basic exception
EInvalidIndex represents an invalid index exception
ENotCached represents an exception thrown when requested data has not been cached
EOverflow represents an integer overflow exception
EParse represents a parser exception
FixedCapacityPriorityQueue is a class that represents a priority queue with a fixed size
GeneralizedMAAStarPlanner is a class that represents the Generalized MAA* planner
GeneralizedMAAStarPlannerForDecPOMDPDiscrete is a class that represents the Generalized MAA* planner for discrete Dec-POMDPs
GMAA_kGMAA is a class that represents a GMAA planner that performs k-GMAA
GMAA_MAAstar is a class that represents a planner that performs MAA* as described by Szer et al.
History is a general class for histories
IndividualBeliefJESP stores individual beliefs for the JESP algorithm
IndividualHistory represents a history for a single agent
Interface_ProblemToPolicyDiscrete is an interface from discrete problems to policies
Interface_ProblemToPolicyDiscretePure is an interface from discrete problems to pure policies
JESPDynamicProgrammingPlanner plans with the dynamic programming JESP algorithm
JESPExhaustivePlanner plans with the exhaustive JESP algorithm
JointAction represents a joint action
JointActionDiscrete represents discrete joint actions
JointActionHistory represents a joint action history
JointActionHistoryTree is a wrapper for JointActionHistory
JointActionObservationHistory represents a joint action-observation history
JointActionObservationHistoryTree is derived from TreeNode and is similar to ObservationHistoryTree
JointBelief stores a joint belief, represented as a regular (dense) vector of doubles
JointBeliefInterface represents an interface for joint beliefs
JointBeliefSparse represents a sparse joint belief
JointHistory represents a joint history, i.e., a history for each agent
JointObservation represents joint observations
JointObservationDiscrete represents discrete joint observations
JointObservationHistory represents a joint observation history
JointObservationHistoryTree is a wrapper for the JointObservationHistory class
JointPolicy is a class that represents a joint policy
JointPolicyDiscrete is a class that represents a discrete joint policy
JointPolicyDiscretePure represents a pure joint policy for a discrete MADP
JointPolicyPureVector represents a discrete pure joint policy
JointPolicyValuePair is a wrapper for a partial joint policy and its heuristic value
JPolComponent_VectorImplementation implements functionality common to several joint policy implementations
JPPVIndexValuePair represents a (JointPolicyPureVector, value) pair
JPPVValuePair represents a (JointPolicyPureVector, value) pair, which stores the full JointPolicyPureVector
MADPComponentDiscreteActions contains functionality for discrete action spaces
MADPComponentDiscreteObservations contains functionality for discrete observation spaces
MADPComponentDiscreteStates is a class that represents a discrete state space
MADPParser is a general class for parsers in MADP
MDPSolver is an interface for MDP solvers
MDPValueIteration implements value iteration for MDPs
MultiAgentDecisionProcess is a class that defines the primary properties of a decision process
MultiAgentDecisionProcessDiscrete defines the primary properties of a discrete decision process
MultiAgentDecisionProcessDiscreteInterface is an abstract base class that defines publicly accessible member functions that a discrete multiagent decision process must implement
MultiAgentDecisionProcessInterface is an abstract base class that declares the primary properties of a multiagent decision process
NamedDescribedEntity represents named entities
NullPlanner represents a planner which does nothing, but can be used to instantiate a PlanningUnitDecPOMDPDiscrete
NullPlannerTOI represents a planner which does nothing, but can be used to instantiate a PlanningUnitTOIDecPOMDPDiscrete
Observation represents observations
ObservationDiscrete represents discrete observations
ObservationHistory represents an observation history of a single agent
ObservationHistoryTree is a wrapper for the ObservationHistory class
ObservationModel represents the observation model in a decision process
ObservationModelDiscrete represents a discrete observation model
ObservationModelMapping implements an ObservationModelDiscrete
ObservationModelMappingSparse implements an ObservationModelDiscrete
OGet can be used for direct access to the observation model
OGet_ObservationModelMapping can be used for direct access to an ObservationModelMapping
OGet_ObservationModelMappingSparse can be used for direct access to an ObservationModelMappingSparse
ParserInterface is an interface for parsers
ParserTOICompactRewardDecPOMDPDiscrete is a parser for TOICompactRewardDecPOMDPDiscrete
ParserTOIDecMDPDiscrete is a parser for TOIDecMDPDiscrete
ParserTOIDecPOMDPDiscrete is a parser for TOIDecPOMDPDiscrete
ParserTOIFactoredRewardDecPOMDPDiscrete is a parser for TransitionObservationIndependentFactoredRewardDecPOMDPDiscrete
PartialJointPolicy represents a joint policy that is only specified for the first t time steps instead of for every time step
PartialJointPolicyDiscretePure is a discrete and pure PartialJointPolicy
PartialJointPolicyPureVector implements a PartialJointPolicy using a mapping of history indices to actions
PartialJointPolicyValuePair is a wrapper for a partial joint policy and its heuristic value
PartialJPDPValuePair represents a (PartialJointPolicyDiscretePure, value) pair, which stores the full PartialJointPolicyDiscretePure
PartialJPPVIndexValuePair represents a (PartialJointPolicyPureVector, value) pair
PartialPolicyPoolInterface is an interface for policy pools containing partial joint policies
PartialPolicyPoolItemInterface is a class that gives the interface for a PolicyPoolItem
Perseus contains basic functionality for the Perseus planner
PerseusBGPlanner implements the Perseus planning algorithm for BGs
PerseusPOMDPPlanner implements the Perseus planning algorithm for POMDPs
PerseusQFunctionPlanner is a Perseus planner that uses QFunctions
PerseusStationary is Perseus for stationary policies
PlanningUnit represents a planning unit, i.e., a planning algorithm
PlanningUnitDecPOMDPDiscrete represents a planning unit for discrete Dec-POMDPs
PlanningUnitMADPDiscrete represents a planning unit for a discrete MADP (discrete actions, observations and states)
PlanningUnitMADPDiscreteParameters stores parameters of PlanningUnitMADPDiscrete
PlanningUnitTOIDecPOMDPDiscrete represents a planning unit for transition observation independent discrete Dec-POMDPs
Policy is a class that represents a policy for a single agent
PolicyDiscrete is a class that represents a discrete policy
PolicyDiscretePure is an abstract class that represents a pure policy for a discrete MADP
PolicyPoolInterface is an interface for policy pools containing fully defined joint policies
PolicyPoolItemInterface is a class that gives the interface for a PolicyPoolItem
PolicyPoolJPolValPair is a policy pool with (joint policy, value) pairs
PolicyPoolPartialJPolValPair is a policy pool with (partial joint policy, value) pairs
PolicyPureVector is a class that represents a pure (= deterministic) policy
POSG is a simple implementation of POSGInterface
POSGDiscrete represents a discrete POSG model
POSGDiscreteInterface is the interface for a discrete POSG model: it defines the set/get reward functions
POSGInterface is an interface for POSGs
ProblemDecTiger implements the DecTiger problem
ProblemFireFighting is a class that represents the firefighting problem as described in [GMAA] (see DOC-references.h)
QAV implements a QFunctionJointBelief using a planner based on alpha functions, for instance the Perseus planners
QAVParameters
QBG is a class that represents the QBG heuristic
QFunction is an abstract base class containing nothing
QFunctionForDecPOMDP is a class that represents a Q-function for a Dec-POMDP
QFunctionForDecPOMDPInterface is a class that represents an interface for Q-functions for a Dec-POMDP
QFunctionInterface is an abstract base class containing nothing
QFunctionJAOH represents a Q-function that operates on joint action-observation histories
QFunctionJAOHInterface is a class that is an interface for heuristics of the shape Q(JointActionObservationHistory, JointAction)
QFunctionJAOHTree represents a QFunctionJAOH that stores Q-values in a tree
QFunctionJointBelief represents a Q-function that operates on joint beliefs
QFunctionJointBeliefInterface is an interface for QFunctionJointBelief
QMDP is a class that represents the QMDP heuristic
QPOMDP is a class that represents the QPOMDP heuristic
QTable implements QTableInterface using a full matrix
QTableInterface is the abstract base class for Q(., a) functions
Referrer is a template class that represents objects that refer to another object
RewardModel represents the reward model in a decision process
RewardModelMapping represents a discrete reward model
RewardModelMappingSparse represents a sparse discrete reward model
RewardModelTOISparse represents a discrete reward model based on vectors of states and actions
RGet can be used for direct access to a reward model
RGet_RewardModelMapping can be used for direct access to a RewardModelMapping
RGet_RewardModelMappingSparse can be used for direct access to a RewardModelMappingSparse
Simulation is a class that simulates policies in order to test their control quality
SimulationAgent represents an agent for the Simulation class
SimulationDecPOMDPDiscrete simulates policies on DecPOMDPDiscrete models
SimulationResult stores the results from simulating a joint policy, in particular the obtained rewards
State is a class that represents states
StateDiscrete represents discrete states
StateDistribution is an interface for probability distributions over states
StateDistributionVector represents a probability distribution over states as a vector of doubles
TGet can be used for direct access to the transition model
TGet_TransitionModelMapping can be used for direct access to a TransitionModelMapping
TGet_TransitionModelMappingSparse can be used for direct access to a TransitionModelMappingSparse
TimedAlgorithm allows for easy timekeeping of parts of an algorithm
Timing provides a simple way of timing code
TOICompactRewardDecPOMDPDiscrete is a class that represents a transition observation independent Dec-POMDP in which the reward is the sum of each agent's individual reward plus some shared reward
TOIDecMDPDiscrete is a class that represents a transition observation independent discrete Dec-MDP
TOIDecPOMDPDiscrete is a class that represents a transition observation independent discrete Dec-POMDP
TOIFactoredRewardDecPOMDPDiscrete is a class that represents a transition observation independent Dec-POMDP in which the reward is the sum of each agent's individual reward plus some shared reward
TransitionModel represents the transition model in a decision process
TransitionModelDiscrete represents a discrete transition model
TransitionModelMapping implements a TransitionModelDiscrete
TransitionModelMappingSparse implements a TransitionModelDiscrete
TransitionObservationIndependentMADPDiscrete is a base class that defines the primary properties of a transition and observation independent decision process
TreeNode represents a node in a tree of histories, for instance observation histories
Type is an abstract class that represents a Type
Type_AOHIndex is an implementation (extension) of Type, representing a type by an action-observation history index
ValueFunction is a class that represents a value function of a joint policy
ValueFunctionDecPOMDPDiscrete represents and calculates the value function of a (pure) joint policy for a discrete Dec-POMDP
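To show how the problem, planner, and policy classes listed above fit together, here is a minimal planning sketch in the spirit of the example in the MADP documentation: it instantiates the built-in ProblemDecTiger and solves it for horizon 3 with JESPExhaustivePlanner. The header names and the exact constructor and method signatures are assumptions to be verified against this release.

#include <iostream>
#include "ProblemDecTiger.h"
#include "JESPExhaustivePlanner.h"

using namespace std;

int main()
{
    // A built-in benchmark problem (a DecPOMDPDiscrete, see the list above).
    ProblemDecTiger dectiger;

    // A planning unit: exhaustive JESP over a horizon of 3 time steps.
    JESPExhaustivePlanner jesp(3, &dectiger);
    jesp.Plan();

    // Inspect the computed joint policy and its expected value.
    cout << jesp.GetExpectedReward() << endl;
    cout << jesp.GetJointPolicy()->SoftPrint() << endl;
    return 0;
}

Other planners listed above (BruteForceSearchPlanner, DICEPSPlanner, the GMAA variants) broadly follow the same pattern: construct a planning unit for a problem instance, call Plan(), and then query the resulting joint policy and its value.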