MultiAgentDecisionProcess
Release 0.2.1
DecPOMDPDiscrete represents a discrete DEC-POMDP model.
#include <DecPOMDPDiscrete.h>
Public Member Functions
virtual DecPOMDPDiscrete * | Clone () const |
Returns a pointer to a copy of this class. | |
void | CreateNewRewardModel () |
Creates a new reward model. | |
void | CreateNewRewardModelForAgent (Index agentI) |
Implementation of POSGDiscreteInterface. | |
DecPOMDPDiscrete (std::string name="received unspec. by DecPOMDPDiscrete", std::string descr="received unspec. by DecPOMDPDiscrete", std::string pf="received unspec. by DecPOMDPDiscrete") | |
Default constructor. | |
void | ExtractMADPDiscrete (MultiAgentDecisionProcessDiscrete *madp) |
Get the MADPDiscrete components from this DecPOMDPDiscrete. | |
double | GetReward (Index sI, Index jaI) const |
Return the reward for state, joint action indices. | |
double | GetReward (State *s, JointAction *ja) const |
Implements the DecPOMDPInterface. | |
double | GetRewardForAgent (Index agentI, State *s, JointAction *ja) const |
Function that returns the reward for a state and joint action. | |
double | GetRewardForAgent (Index agentI, Index sI, Index jaI) const |
Return the reward for state, joint action indices. | |
RewardModel * | GetRewardModelPtr () const |
Get a pointer to the reward model. | |
RGet * | GetRGet () const |
bool | SetInitialized (bool b) |
Sets _m_initialized to b. | |
void | SetReward (Index sI, Index jaI, double r) |
Set the reward for state, joint action indices. | |
void | SetReward (Index sI, Index jaI, Index sucSI, double r) |
Set the reward for state, joint action, successor state indices. | |
void | SetReward (Index sI, Index jaI, Index sucSI, Index joI, double r) |
Set the reward for state, joint action, successor state, joint observation indices. | |
void | SetReward (State *s, JointAction *ja, double r) |
Implements the DecPOMDPInterface. | |
void | SetRewardForAgent (Index agentI, State *s, JointAction *ja, double r) |
Function that sets the reward for an agent, state and joint action. | |
void | SetRewardForAgent (Index agentI, Index sI, Index jaI, double r) |
Set the reward for state, joint action indices. | |
void | SetRewardForAgent (Index agentI, Index sI, Index jaI, Index sucSI, double r) |
Set the reward for state, joint action, successor state indices. | |
void | SetRewardForAgent (Index agentI, Index sI, Index jaI, Index sucSI, Index joI, double r) |
Set the reward for state, joint action, successor state, joint observation indices. | |
std::string | SoftPrint () const |
Prints some information on the DecPOMDPDiscrete. | |
~DecPOMDPDiscrete () | |
Destructor. | |
virtual | ~DecPOMDPDiscreteInterface () |
Virtual destructor. | |
virtual | ~POSGDiscreteInterface () |
Destructor. (A virtual destructor cannot be made pure abstract.) | |
virtual | ~MultiAgentDecisionProcessDiscreteInterface () |
Destructor. (A virtual destructor cannot be made pure abstract.) | |
virtual | ~MultiAgentDecisionProcessInterface () |
Destructor. | |
virtual | ~POSGInterface () |
Virtual destructor. | |
virtual | ~DecPOMDPInterface () |
Virtual destructor. | |
void | CreateNewObservationModel () |
Creates a new observation model mapping. | |
void | CreateNewTransitionModel () |
Creates a new transition model mapping. | |
const ObservationModelDiscrete * | GetObservationModelDiscretePtr () const |
Returns a pointer to the underlying observation model. | |
double | GetObservationProbability (Index jaI, Index sucSI, Index joI) const |
Return the probability of joint observation joI: P(joI|jaI,sucSI). | |
OGet * | GetOGet () const |
bool | GetSparse () const |
Are we using sparse transition and observation models? | |
TGet * | GetTGet () const |
const TransitionModelDiscrete * | GetTransitionModelDiscretePtr () const |
Returns a pointer to the underlying transition model. | |
double | GetTransitionProbability (Index sI, Index jaI, Index sucSI) const |
Return the probability of successor state sucSI: P(sucSI|sI,jaI). | |
bool | Initialize () |
A function that can be called by other classes in order to request a MultiAgentDecisionProcessDiscrete to (try to) initialize. | |
MultiAgentDecisionProcessDiscrete () | |
Default constructor. | |
MultiAgentDecisionProcessDiscrete (std::string name="received unspec. by MultiAgentDecisionProcessDiscrete", std::string descr="received unspec.by MultiAgentDecisionProcessDiscrete", std::string pf="received unspec. by MultiAgentDecisionProcessDiscrete") | |
Constructor that sets the name, description, and problem file. | |
MultiAgentDecisionProcessDiscrete (int nrAgents, int nrS, std::string name="received unspec. by MultiAgentDecisionProcessDiscrete", std::string descr="received unspec.by MultiAgentDecisionProcessDiscrete", std::string pf="received unspec. by MultiAgentDecisionProcessDiscrete") | |
Constructor that sets the number of agents, the number of states, and the name, description, and problem file. | |
void | Print () const |
Prints some information on the MultiAgentDecisionProcessDiscrete. | |
Index | SampleJointObservation (Index jaI, Index sucI) const |
Sample an observation. | |
Index | SampleSuccessorState (Index sI, Index jaI) const |
Sample a successor state. | |
void | SetObservationModelPtr (ObservationModelDiscrete *ptr) |
Set the observation model. | |
void | SetObservationProbability (Index jaI, Index sucSI, Index joI, double p) |
Set the probability of joint observation joI: P(joI|jaI,sucSI). | |
void | SetSparse (bool sparse) |
Indicate whether sparse transition and observation models should be used. | |
void | SetTransitionModelPtr (TransitionModelDiscrete *ptr) |
Set the transition model. | |
void | SetTransitionProbability (Index sI, Index jaI, Index sucSI, double p) |
Set the probability of successor state sucSI: P(sucSI|sI,jaI). | |
~MultiAgentDecisionProcessDiscrete () | |
Destructor. | |
DecPOMDP () | |
Default constructor. sets RewardType to REWARD and discount to 1.0. | |
double | GetDiscount () const |
Returns the discount parameter. | |
double | GetDiscountForAgent (Index agentI) const |
Returns the discount parameter. | |
reward_t | GetRewardType () const |
Returns the reward type. | |
reward_t | GetRewardTypeForAgent (Index agentI) const |
Returns the reward type. | |
void | SetDiscount (double d) |
Sets the discount parameter to d. | |
void | SetDiscountForAgent (Index agentI, double d) |
Sets the discount parameter for a given agent (needed for POSGInterface). | |
void | SetRewardType (reward_t r) |
Sets the reward type to reward_t r. | |
void | SetRewardTypeForAgent (Index agentI, reward_t r) |
Sets the reward type to reward_t r. |
Protected Attributes | |
RewardModel * | _m_p_rModel |
The reward model used by DecPOMDPDiscrete. |
Private Attributes | |
bool | _m_initialized |
Boolean that tracks whether this DecPOMDP is initialized. |
DecPOMDPDiscrete represents a discrete DEC-POMDP model.
It implements DecPOMDPDiscreteInterface.
It also inherits from MultiAgentDecisionProcessDiscrete and DecPOMDP,
and thus implements DecPOMDPInterface, MultiAgentDecisionProcessDiscreteInterface, and MultiAgentDecisionProcessInterface.
Definition at line 56 of file DecPOMDPDiscrete.h.
DecPOMDPDiscrete::DecPOMDPDiscrete ( std::string name = "received unspec. by DecPOMDPDiscrete",
                                     std::string descr = "received unspec. by DecPOMDPDiscrete",
                                     std::string pf = "received unspec. by DecPOMDPDiscrete" )
Default constructor.
Constructor that sets the name, description, and problem file, and subsequently loads this problem file.
Definition at line 40 of file DecPOMDPDiscrete.cpp.
References _m_initialized, and _m_p_rModel.
Referenced by Clone().
DecPOMDPDiscrete::~DecPOMDPDiscrete ( )
Destructor.
Definition at line 47 of file DecPOMDPDiscrete.cpp.
References _m_p_rModel, and DEBUG_DPOMDPD.
virtual DecPOMDPDiscrete* DecPOMDPDiscrete::Clone ( ) const [inline, virtual]
Returns a pointer to a copy of this class.
Implements DecPOMDPDiscreteInterface.
Definition at line 171 of file DecPOMDPDiscrete.h.
References DecPOMDPDiscrete().
void DecPOMDPDiscrete::CreateNewRewardModel ( ) [virtual]
Creates a new reward model.
Implements DecPOMDPDiscreteInterface.
Definition at line 65 of file DecPOMDPDiscrete.cpp.
References _m_initialized, _m_p_rModel, MADPComponentDiscreteActions::GetNrJointActions(), MADPComponentDiscreteStates::GetNrStates(), and MultiAgentDecisionProcessDiscrete::GetSparse().
Referenced by CreateNewRewardModelForAgent(), ProblemDecTiger::ProblemDecTiger(), and ProblemFireFighting::ProblemFireFighting().
void DecPOMDPDiscrete::CreateNewRewardModelForAgent ( Index agentI ) [inline, virtual]
Implementation of POSGDiscreteInterface.
Implements POSGDiscreteInterface.
Definition at line 149 of file DecPOMDPDiscrete.h.
References CreateNewRewardModel().
void DecPOMDPDiscrete::ExtractMADPDiscrete ( MultiAgentDecisionProcessDiscrete * madp )
Get the MADPDiscrete components from this DecPOMDPDiscrete.
Definition at line 113 of file DecPOMDPDiscrete.cpp.
References MADPComponentDiscreteActions::AddAction(), MADPComponentDiscreteObservations::AddObservation(), MADPComponentDiscreteStates::AddState(), MADPComponentDiscreteActions::ConstructJointActions(), MADPComponentDiscreteObservations::ConstructJointObservations(), MADPComponentDiscreteActions::GetAction(), NamedDescribedEntity::GetDescription(), MADPComponentDiscreteStates::GetISD(), NamedDescribedEntity::GetName(), MADPComponentDiscreteActions::GetNrActions(), MultiAgentDecisionProcess::GetNrAgents(), MADPComponentDiscreteObservations::GetNrObservations(), MADPComponentDiscreteStates::GetNrStates(), MADPComponentDiscreteObservations::GetObservation(), MultiAgentDecisionProcessDiscrete::GetObservationModelDiscretePtr(), MADPComponentDiscreteStates::GetState(), MultiAgentDecisionProcessDiscrete::GetTransitionModelDiscretePtr(), MultiAgentDecisionProcessDiscrete::Initialize(), NamedDescribedEntity::SetDescription(), MADPComponentDiscreteStates::SetISD(), NamedDescribedEntity::SetName(), MultiAgentDecisionProcess::SetNrAgents(), MultiAgentDecisionProcessDiscrete::SetObservationModelPtr(), and MultiAgentDecisionProcessDiscrete::SetTransitionModelPtr().
Referenced by ParserTOICompactRewardDecPOMDPDiscrete::StoreDecPOMDP(), ParserTOIFactoredRewardDecPOMDPDiscrete::StoreDecPOMDP(), and ParserTOIDecPOMDPDiscrete::StoreDecPOMDP().
double DecPOMDPDiscrete::GetReward ( Index sI, Index jaI ) const [inline, virtual]
Return the reward for state, joint action indices.
Implements DecPOMDPDiscreteInterface.
Definition at line 104 of file DecPOMDPDiscrete.h.
References _m_p_rModel, and RewardModel::Get().
Referenced by GetReward(), GetRewardForAgent(), and SetReward().
double DecPOMDPDiscrete::GetReward ( State * s, JointAction * ja ) const [inline, virtual]
Implements the DecPOMDPInterface.
Implements DecPOMDPInterface.
Definition at line 119 of file DecPOMDPDiscrete.h.
References GetReward().
double DecPOMDPDiscrete::GetRewardForAgent ( Index agentI, State * s, JointAction * ja ) const [inline, virtual]
Function that returns the reward for a state and joint action.
This should be very generic.
Implements POSGInterface.
Definition at line 143 of file DecPOMDPDiscrete.h.
References GetReward().
double DecPOMDPDiscrete::GetRewardForAgent ( Index agentI, Index sI, Index jaI ) const [inline, virtual]
Return the reward for state, joint action indices.
Implements POSGDiscreteInterface.
Definition at line 166 of file DecPOMDPDiscrete.h.
References GetReward().
RewardModel* DecPOMDPDiscrete::GetRewardModelPtr ( ) const [inline]
Get a pointer to the reward model.
Definition at line 114 of file DecPOMDPDiscrete.h.
References _m_p_rModel.
Referenced by ParserTOICompactRewardDecPOMDPDiscrete::StoreDecPOMDP(), and ParserTOIFactoredRewardDecPOMDPDiscrete::StoreDecPOMDP().
RGet* DecPOMDPDiscrete::GetRGet ( ) const [virtual]
Implements DecPOMDPDiscreteInterface.
Definition at line 76 of file DecPOMDPDiscrete.cpp.
References _m_p_rModel.
bool DecPOMDPDiscrete::SetInitialized ( bool b )
Sets _m_initialized to b.
When setting to true, a verification of member elements is performed (i.e., a check that all vectors have the correct size and non-zero entries).
Reimplemented from MultiAgentDecisionProcessDiscrete.
Definition at line 54 of file DecPOMDPDiscrete.cpp.
References _m_initialized, and MultiAgentDecisionProcessDiscrete::SetInitialized().
Referenced by ProblemDecTiger::ProblemDecTiger(), and ProblemFireFighting::ProblemFireFighting().
void DecPOMDPDiscrete::SetReward ( Index sI, Index jaI, double r ) [inline, virtual]
Set the reward for state, joint action indices.
Implements DecPOMDPDiscreteInterface.
Definition at line 92 of file DecPOMDPDiscrete.h.
References _m_p_rModel, and RewardModel::Set().
Referenced by ProblemFireFighting::FillRewardModel(), ProblemDecTiger::FillRewardModel(), SetReward(), and SetRewardForAgent().
void DecPOMDPDiscrete::SetReward ( Index sI, Index jaI, Index sucSI, double r ) [virtual]
Set the reward for state, joint action, successor state indices.
Implements DecPOMDPDiscreteInterface.
Definition at line 99 of file DecPOMDPDiscrete.cpp.
References GetReward(), MultiAgentDecisionProcessDiscrete::GetTransitionProbability(), and SetReward().
void DecPOMDPDiscrete::SetReward ( Index sI, Index jaI, Index sucSI, Index joI, double r ) [virtual]
Set the reward for state, joint action, successor state, joint observation indices.
Implements DecPOMDPDiscreteInterface.
Definition at line 106 of file DecPOMDPDiscrete.cpp.
void DecPOMDPDiscrete::SetReward ( State * s, JointAction * ja, double r ) [inline, virtual]
Implements the DecPOMDPInterface.
Implements DecPOMDPInterface.
Definition at line 126 of file DecPOMDPDiscrete.h.
References SetReward().
void DecPOMDPDiscrete::SetRewardForAgent ( Index agentI, State * s, JointAction * ja, double r ) [inline, virtual]
Function that sets the reward for an agent, state and joint action.
This should be very generic.
Implements POSGInterface.
Definition at line 139 of file DecPOMDPDiscrete.h.
References SetReward().
void DecPOMDPDiscrete::SetRewardForAgent ( Index agentI, Index sI, Index jaI, double r ) [inline, virtual]
Set the reward for state, joint action indices.
Implements POSGDiscreteInterface.
Definition at line 152 of file DecPOMDPDiscrete.h.
References SetReward().
void DecPOMDPDiscrete::SetRewardForAgent ( Index agentI, Index sI, Index jaI, Index sucSI, double r ) [inline, virtual]
Set the reward for state, joint action, successor state indices.
Implements POSGDiscreteInterface.
Definition at line 156 of file DecPOMDPDiscrete.h.
References SetReward().
void DecPOMDPDiscrete::SetRewardForAgent ( Index agentI, Index sI, Index jaI, Index sucSI, Index joI, double r ) [inline, virtual]
Set the reward for state, joint action, successor state, joint observation indices.
Implements POSGDiscreteInterface.
Definition at line 161 of file DecPOMDPDiscrete.h.
References SetReward().
std::string DecPOMDPDiscrete::SoftPrint ( ) const
Prints some information on the DecPOMDPDiscrete.
Reimplemented from DecPOMDP.
Definition at line 82 of file DecPOMDPDiscrete.cpp.
References _m_initialized, _m_p_rModel, and RewardModel::SoftPrint().
bool DecPOMDPDiscrete::_m_initialized [private]
Boolean that tracks whether this DecPOMDP is initialized.
Definition at line 63 of file DecPOMDPDiscrete.h.
Referenced by CreateNewRewardModel(), DecPOMDPDiscrete(), SetInitialized(), and SoftPrint().
RewardModel* DecPOMDPDiscrete::_m_p_rModel [protected]
The reward model used by DecPOMDPDiscrete.
Definition at line 68 of file DecPOMDPDiscrete.h.
Referenced by CreateNewRewardModel(), DecPOMDPDiscrete(), GetReward(), GetRewardModelPtr(), GetRGet(), SetReward(), SoftPrint(), and ~DecPOMDPDiscrete().