MultiAgentDecisionProcess
Release 0.2.1
MultiAgentDecisionProcessDiscreteInterface is an abstract base class that defines publicly accessible member functions that a discrete multiagent decision process must implement.
#include <MultiAgentDecisionProcessDiscreteInterface.h>
Public Member Functions

virtual MultiAgentDecisionProcessDiscreteInterface * Clone () const =0
    Returns a pointer to a copy of this class.
virtual const Action * GetAction (Index agentI, Index a) const =0
    Return a pointer to the a-th action of agent agentI.
virtual double GetInitialStateProbability (Index sI) const =0
virtual const StateDistribution * GetISD () const =0
    Returns the complete initial state distribution.
virtual const JointAction * GetJointAction (Index i) const =0
    Return a pointer to the i-th joint action.
virtual const JointObservation * GetJointObservation (Index i) const =0
    Return a pointer to the i-th joint observation.
virtual const std::vector< size_t > & GetNrActions () const =0
    Return the vector with the number of actions for each agent.
virtual size_t GetNrActions (Index AgentI) const =0
    Return the number of actions of agent agentI.
virtual size_t GetNrJointActions () const =0
    Return the number of joint actions.
virtual size_t GetNrJointObservations () const =0
    Return the number of joint observations.
virtual const std::vector< size_t > & GetNrObservations () const =0
    Return the vector with the number of observations for each agent.
virtual size_t GetNrObservations (Index AgentI) const =0
    Return the number of observations of agent agentI.
virtual size_t GetNrStates () const =0
    Return the number of states.
virtual const Observation * GetObservation (Index agentI, Index a) const =0
    Return a pointer to the a-th observation of agent agentI.
virtual const ObservationModelDiscrete * GetObservationModelDiscretePtr () const =0
    Returns a pointer to the underlying observation model.
virtual double GetObservationProbability (Index jaI, Index sucSI, Index joI) const =0
    Return the probability of joint observation joI: P(joI|jaI,sucSI).
virtual OGet * GetOGet () const =0
virtual const State * GetState (Index i) const =0
    Returns a pointer to state i.
virtual TGet * GetTGet () const =0
virtual const TransitionModelDiscrete * GetTransitionModelDiscretePtr () const =0
    Returns a pointer to the underlying transition model.
virtual double GetTransitionProbability (Index sI, Index jaI, Index sucSI) const =0
    Return the probability of successor state sucSI: P(sucSI|sI,jaI).
virtual Index IndividualToJointActionIndices (const Index *AI_ar) const =0
    Returns the joint action index that corresponds to the array of specified individual action indices.
virtual Index IndividualToJointActionIndices (const std::vector< Index > &indivActionIndices) const =0
    Returns the joint action index that corresponds to the vector of specified individual action indices.
virtual Index IndividualToJointObservationIndices (const std::vector< Index > &indivObservationIndices) const =0
    Returns the joint observation index that corresponds to the vector of specified individual observation indices.
virtual const std::vector< Index > & JointToIndividualActionIndices (Index jaI) const =0
    Returns a vector of individual action indices corresponding to joint action index jaI.
virtual const std::vector< Index > & JointToIndividualObservationIndices (Index joI) const =0
    Returns a vector of individual observation indices corresponding to joint observation index joI.
virtual Index SampleInitialState (void) const =0
    Sample a state according to the initial state PDF.
virtual Index SampleJointObservation (Index jaI, Index sucI) const =0
    Sample an observation; needed for simulations.
virtual Index SampleSuccessorState (Index sI, Index jaI) const =0
    Sample a successor state; needed by simulations.
virtual std::string SoftPrint () const =0
    Prints some information on the MultiAgentDecisionProcessDiscreteInterface.
virtual std::string SoftPrintState (Index sI) const =0
virtual ~MultiAgentDecisionProcessDiscreteInterface ()
    Destructor (a virtual destructor needs an implementation, so it cannot be made pure abstract).

Public Member Functions inherited from MultiAgentDecisionProcessInterface

virtual size_t GetNrAgents () const =0
    Return the number of agents.
virtual std::string GetUnixName () const =0
    Returns the base part of the problem filename.
virtual ~MultiAgentDecisionProcessInterface ()
    Destructor.
MultiAgentDecisionProcessDiscreteInterface is an abstract base class that defines publicly accessible member functions that a discrete multiagent decision process must implement.
This interface is currently implemented by MultiAgentDecisionProcessDiscrete and MultiAgentDecisionProcessDiscreteFactoredStates.
The functions this interface defines relate to actions, observations, and transition and observation probabilities.
Definition at line 66 of file MultiAgentDecisionProcessDiscreteInterface.h.
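The per-member documentation below is easier to read with a usage sketch in mind. The following hypothetical helper (not library code; it assumes only this header and the Index type it exposes) enumerates the model through the interface and counts the successor states reachable with non-zero probability:

```cpp
// Hypothetical helper, not library code: counts, for every (state, joint
// action) pair, how many successor states have non-zero probability.
#include <cstddef>
#include <iostream>
#include "MultiAgentDecisionProcessDiscreteInterface.h"

void PrintReachability(const MultiAgentDecisionProcessDiscreteInterface& madp)
{
    for (Index s = 0; s < madp.GetNrStates(); ++s)
        for (Index ja = 0; ja < madp.GetNrJointActions(); ++ja)
        {
            std::size_t nrReachable = 0;
            for (Index sucS = 0; sucS < madp.GetNrStates(); ++sucS)
                if (madp.GetTransitionProbability(s, ja, sucS) > 0.0)
                    ++nrReachable;
            std::cout << "s=" << s << " ja=" << ja << ": "
                      << nrReachable << " reachable successor state(s)\n";
        }
}
```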
~MultiAgentDecisionProcessDiscreteInterface () [inline, virtual]
Destructor (a virtual destructor needs an implementation, so it cannot be made pure abstract).
Definition at line 76 of file MultiAgentDecisionProcessDiscreteInterface.h.
MultiAgentDecisionProcessDiscreteInterface * Clone () const [pure virtual]
Returns a pointer to a copy of this class.
Implements MultiAgentDecisionProcessInterface.
Implemented in TOIDecPOMDPDiscrete, MultiAgentDecisionProcessDiscrete, DecPOMDPDiscrete, POSGDiscrete, DecPOMDPDiscreteInterface, TOICompactRewardDecPOMDPDiscrete, TOIFactoredRewardDecPOMDPDiscrete, and POSGDiscreteInterface.
const Action * GetAction (Index agentI, Index a) const [pure virtual]
Return a pointer to the a-th action of agent agentI.
Implemented in TransitionObservationIndependentMADPDiscrete, and MADPComponentDiscreteActions.
double GetInitialStateProbability (Index sI) const [pure virtual]
Implemented in TransitionObservationIndependentMADPDiscrete, and MADPComponentDiscreteStates.
const StateDistribution * GetISD () const [pure virtual]
Returns the complete initial state distribution.
Implemented in TransitionObservationIndependentMADPDiscrete, and MADPComponentDiscreteStates.
Referenced by PlanningUnitMADPDiscrete::GetNewJointBeliefFromISD(), PlanningUnitMADPDiscrete::InitializeJointActionObservationHistories(), and AlphaVectorPlanning::SampleBeliefs().
const JointAction * GetJointAction (Index i) const [pure virtual]
Return a pointer to the i-th joint action.
Implemented in TransitionObservationIndependentMADPDiscrete, and MADPComponentDiscreteActions.
Referenced by AlphaVectorPlanning::ExportPOMDPFile().
const JointObservation * GetJointObservation (Index i) const [pure virtual]
Return a pointer to the i-th joint observation.
Implemented in TransitionObservationIndependentMADPDiscrete, and MADPComponentDiscreteObservations.
Referenced by AlphaVectorPlanning::ExportPOMDPFile().
const std::vector< size_t > & GetNrActions () const [pure virtual]
Return the vector with the number of actions for each agent.
Implemented in TransitionObservationIndependentMADPDiscrete, and MADPComponentDiscreteActions.
Referenced by AgentBG::AgentBG(), AlphaVectorBG::AlphaVectorBG(), TransitionObservationIndependentMADPDiscrete::CreateJointActions(), TransitionObservationIndependentMADPDiscrete::GetNrActions(), PlanningUnitMADPDiscrete::InitializeActionHistories(), PlanningUnitMADPDiscrete::InitializeActionObservationHistories(), DICEPSPlanner::Plan(), and DICEPSPlanner::UpdateCEProbDistribution().
size_t GetNrActions (Index AgentI) const [pure virtual]
Return the number of actions of agent agentI.
Implemented in TransitionObservationIndependentMADPDiscrete, and MADPComponentDiscreteActions.
size_t GetNrJointActions () const [pure virtual]
Return the number of joint actions.
Implemented in TransitionObservationIndependentMADPDiscrete, and MADPComponentDiscreteActions.
Referenced by TOIDecPOMDPDiscrete::CreateNewRewardModel(), ParserTOICompactRewardDecPOMDPDiscrete::ParseRewards(), ParserTOIDecPOMDPDiscrete::ParseRewards(), and TOICompactRewardDecPOMDPDiscrete::SetInitialized().
size_t GetNrJointObservations () const [pure virtual]
Return the number of joint observations.
Implemented in TransitionObservationIndependentMADPDiscrete, and MADPComponentDiscreteObservations.
const std::vector< size_t > & GetNrObservations () const [pure virtual]
Return the vector with the number of observations for each agent.
Implemented in TransitionObservationIndependentMADPDiscrete, and MADPComponentDiscreteObservations.
Referenced by AlphaVectorBG::AlphaVectorBG(), TransitionObservationIndependentMADPDiscrete::CreateJointObservations(), TransitionObservationIndependentMADPDiscrete::GetNrObservations(), PlanningUnitMADPDiscrete::InitializeActionObservationHistories(), and PlanningUnitMADPDiscrete::InitializeObservationHistories().
size_t GetNrObservations (Index AgentI) const [pure virtual]
Return the number of observations of agent agentI.
Implemented in TransitionObservationIndependentMADPDiscrete, and MADPComponentDiscreteObservations.
size_t GetNrStates () const [pure virtual]
Return the number of states.
Implemented in TransitionObservationIndependentMADPDiscrete, and MADPComponentDiscreteStates.
Referenced by TransitionObservationIndependentMADPDiscrete::CreateJointStates(), TOIDecPOMDPDiscrete::CreateNewRewardModel(), PlanningUnitMADPDiscrete::GetNrStates(), ParserTOICompactRewardDecPOMDPDiscrete::ParseRewards(), ParserTOIDecPOMDPDiscrete::ParseRewards(), TOICompactRewardDecPOMDPDiscrete::SetInitialized(), and JointBelief::Update().
const Observation * GetObservation (Index agentI, Index a) const [pure virtual]
Return a pointer to the a-th observation of agent agentI.
Implemented in TransitionObservationIndependentMADPDiscrete, and MADPComponentDiscreteObservations.
const ObservationModelDiscrete * GetObservationModelDiscretePtr () const [pure virtual]
Returns a pointer to the underlying observation model.
If speed is required (for instance, when looping through all states), an algorithm can request this pointer and then obtain a pointer to the actual implementation type via runtime type identification (i.e., typeid and dynamic_cast).
Implemented in TransitionObservationIndependentMADPDiscrete, and MultiAgentDecisionProcessDiscrete.
Referenced by JointBeliefSparse::Update().
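A sketch of the RTTI pattern described above. The concrete type ObservationModelMapping and its Get() accessor are assumptions used only for illustration; substitute whatever implementation type your problem actually uses:

```cpp
// A sketch only. ObservationModelMapping and its Get() accessor are
// assumptions for illustration, not taken from this page; the generic
// fallback through the interface is always available.
#include "MultiAgentDecisionProcessDiscreteInterface.h"
#include "ObservationModelMapping.h" // assumed concrete-model header

double SumObservationProbs(const MultiAgentDecisionProcessDiscreteInterface& madp,
                           Index jaI, Index sucSI)
{
    const ObservationModelDiscrete* om = madp.GetObservationModelDiscretePtr();
    const ObservationModelMapping* omm =
        dynamic_cast<const ObservationModelMapping*>(om); // RTTI downcast
    double sum = 0.0;
    for (Index joI = 0; joI < madp.GetNrJointObservations(); ++joI)
        sum += (omm != 0)
            ? omm->Get(jaI, sucSI, joI)                        // fast, concrete path
            : madp.GetObservationProbability(jaI, sucSI, joI); // virtual fallback
    return sum; // should be 1.0: the joint observations form a distribution
}
```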
double GetObservationProbability (Index jaI, Index sucSI, Index joI) const [pure virtual]
Return the probability of joint observation joI: P(joI|jaI,sucSI).
Implemented in TransitionObservationIndependentMADPDiscrete, and MultiAgentDecisionProcessDiscrete.
Referenced by SimulationDecPOMDPDiscrete::Step(), JointBelief::Update(), and JointBeliefSparse::UpdateSlow().
OGet * GetOGet () const [pure virtual]
Implemented in TransitionObservationIndependentMADPDiscrete, and MultiAgentDecisionProcessDiscrete.
const State * GetState (Index i) const [pure virtual]
Returns a pointer to state i.
Implemented in TransitionObservationIndependentMADPDiscrete, and MADPComponentDiscreteStates.
Referenced by TransitionObservationIndependentMADPDiscrete::CreateJointStates(), AlphaVectorPlanning::ExportPOMDPFile(), TOICompactRewardDecPOMDPDiscrete::GetReward(), TransitionObservationIndependentMADPDiscrete::GetState(), SimulationDecPOMDPDiscrete::RunSimulations(), and SimulationDecPOMDPDiscrete::Step().
TGet * GetTGet () const [pure virtual]
Implemented in TransitionObservationIndependentMADPDiscrete, and MultiAgentDecisionProcessDiscrete.
Referenced by JointBelief::Update(), and JointBeliefSparse::Update().
const TransitionModelDiscrete * GetTransitionModelDiscretePtr () const [pure virtual]
Returns a pointer to the underlying transition model.
If speed is required (for instance, when looping through all states), an algorithm can request this pointer and then obtain a pointer to the actual implementation type via runtime type identification (i.e., typeid and dynamic_cast).
Implemented in TransitionObservationIndependentMADPDiscrete, and MultiAgentDecisionProcessDiscrete.
double GetTransitionProbability (Index sI, Index jaI, Index sucSI) const [pure virtual]
Return the probability of successor state sucSI: P(sucSI|sI,jaI).
Implemented in TransitionObservationIndependentMADPDiscrete, and MultiAgentDecisionProcessDiscrete.
Referenced by SimulationDecPOMDPDiscrete::Step(), JointBelief::Update(), and JointBeliefSparse::UpdateSlow().
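Together with GetObservationProbability(), this accessor is all an exact belief update needs: b'(s') is proportional to P(joI|jaI,s') times the sum over s of P(s'|s,jaI)*b(s). A minimal sketch (a simplified stand-in for JointBelief::Update, not the library's implementation):

```cpp
// A sketch, not the library's JointBelief::Update: exact belief update
// from a prior belief b over states, after joint action jaI and joint
// observation joI.
#include <vector>
#include "MultiAgentDecisionProcessDiscreteInterface.h"

std::vector<double> UpdateBelief(const MultiAgentDecisionProcessDiscreteInterface& madp,
                                 const std::vector<double>& b, Index jaI, Index joI)
{
    std::vector<double> bNew(madp.GetNrStates(), 0.0);
    double norm = 0.0;
    for (Index sucSI = 0; sucSI < bNew.size(); ++sucSI)
    {
        double p = 0.0;
        for (Index sI = 0; sI < b.size(); ++sI)                // sum_s P(s'|s,ja) b(s)
            p += madp.GetTransitionProbability(sI, jaI, sucSI) * b[sI];
        p *= madp.GetObservationProbability(jaI, sucSI, joI);  // times P(o|ja,s')
        bNew[sucSI] = p;
        norm += p;
    }
    for (Index sucSI = 0; norm > 0.0 && sucSI < bNew.size(); ++sucSI)
        bNew[sucSI] /= norm;                                   // normalize
    return bNew;
}
```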
Index IndividualToJointActionIndices (const Index *AI_ar) const [pure virtual]
Returns the joint action index that corresponds to the array of specified individual action indices.
Implemented in TransitionObservationIndependentMADPDiscrete, and MADPComponentDiscreteActions.
Index IndividualToJointActionIndices (const std::vector< Index > &indivActionIndices) const [pure virtual]
Returns the joint action index that corresponds to the vector of specified individual action indices.
Implemented in TransitionObservationIndependentMADPDiscrete, and MADPComponentDiscreteActions.
Index IndividualToJointObservationIndices (const std::vector< Index > &indivObservationIndices) const [pure virtual]
Returns the joint observation index that corresponds to the vector of specified individual observation indices.
Implemented in TransitionObservationIndependentMADPDiscrete, and MADPComponentDiscreteObservations.
const std::vector< Index > & JointToIndividualActionIndices (Index jaI) const [pure virtual]
Returns a vector of individual action indices corresponding to joint action index jaI.
Implemented in TransitionObservationIndependentMADPDiscrete, and MADPComponentDiscreteActions.
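A small illustrative sketch (not library code) of the round trip between the two index spaces, using GetNrAgents() from the inherited interface:

```cpp
// Illustrative round trip between individual and joint action indices.
#include <cassert>
#include <vector>
#include "MultiAgentDecisionProcessDiscreteInterface.h"

void CheckActionIndexRoundTrip(const MultiAgentDecisionProcessDiscreteInterface& madp)
{
    // every agent selects its action with index 0
    std::vector<Index> indiv(madp.GetNrAgents(), 0);
    Index jaI = madp.IndividualToJointActionIndices(indiv);
    const std::vector<Index>& back = madp.JointToIndividualActionIndices(jaI);
    for (Index agI = 0; agI < madp.GetNrAgents(); ++agI)
        assert(back[agI] == indiv[agI]); // the conversion is a bijection
}
```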
const std::vector< Index > & JointToIndividualObservationIndices (Index joI) const [pure virtual]
Returns a vector of individual observation indices corresponding to joint observation index joI.
Implemented in TransitionObservationIndependentMADPDiscrete, and MADPComponentDiscreteObservations.
Index SampleInitialState (void) const [pure virtual]
Sample a state according to the initial state PDF.
Implemented in TransitionObservationIndependentMADPDiscrete, and MADPComponentDiscreteStates.
Referenced by SimulationDecPOMDPDiscrete::RunSimulation(), SimulationDecPOMDPDiscrete::RunSimulations(), AlphaVectorPlanning::SampleBeliefs(), and TransitionObservationIndependentMADPDiscrete::SampleInitialStates().
Index SampleJointObservation (Index jaI, Index sucI) const [pure virtual]
Sample an observation - needed for simulations.
Implemented in TransitionObservationIndependentMADPDiscrete, and MultiAgentDecisionProcessDiscrete.
Referenced by AlphaVectorPlanning::SampleBeliefs(), and SimulationDecPOMDPDiscrete::Step().
Index SampleSuccessorState (Index sI, Index jaI) const [pure virtual]
Sample a successor state - needed by simulations.
Implemented in TransitionObservationIndependentMADPDiscrete, and MultiAgentDecisionProcessDiscrete.
Referenced by AlphaVectorPlanning::SampleBeliefs(), and SimulationDecPOMDPDiscrete::Step().
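Taken together, SampleInitialState(), SampleSuccessorState(), and SampleJointObservation() are all a basic simulation loop needs. A sketch (a simplified stand-in for SimulationDecPOMDPDiscrete, not library code; the fixed per-step joint action list is a made-up simplification):

```cpp
// Simplified episode simulation driven by the three sampling functions.
#include <cstddef>
#include <iostream>
#include <vector>
#include "MultiAgentDecisionProcessDiscreteInterface.h"

void RunEpisode(const MultiAgentDecisionProcessDiscreteInterface& madp,
                const std::vector<Index>& jointActionPerStep) // assumed open-loop plan
{
    Index sI = madp.SampleInitialState();                     // s_0 ~ ISD
    for (std::size_t t = 0; t < jointActionPerStep.size(); ++t)
    {
        Index jaI = jointActionPerStep[t];
        Index sucSI = madp.SampleSuccessorState(sI, jaI);     // s' ~ P(.|s,ja)
        Index joI = madp.SampleJointObservation(jaI, sucSI);  // o  ~ P(.|ja,s')
        std::cout << "t=" << t << " s=" << sI << " ja=" << jaI
                  << " s'=" << sucSI << " o=" << joI << std::endl;
        sI = sucSI;
    }
}
```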
std::string SoftPrint () const [pure virtual]
Prints some information on the MultiAgentDecisionProcessDiscreteInterface.
Implemented in TransitionObservationIndependentMADPDiscrete, MADPComponentDiscreteActions, MADPComponentDiscreteObservations, MultiAgentDecisionProcessDiscrete, TOIDecPOMDPDiscrete, MADPComponentDiscreteStates, DecPOMDPDiscrete, POSGDiscrete, TOICompactRewardDecPOMDPDiscrete, and TOIFactoredRewardDecPOMDPDiscrete.
Referenced by PlanningUnitMADPDiscrete::SetProblem().
std::string SoftPrintState (Index sI) const [pure virtual]
Implemented in TransitionObservationIndependentMADPDiscrete, and MADPComponentDiscreteStates.
Referenced by TransitionObservationIndependentMADPDiscrete::SoftPrintState().