MultiAgentDecisionProcess  Release 0.2.1
DecPOMDPDiscreteInterface Class Reference

DecPOMDPDiscreteInterface is the interface for a discrete DEC-POMDP model: it defines the set/get reward functions. More...

#include <DecPOMDPDiscreteInterface.h>

Inheritance diagram for DecPOMDPDiscreteInterface:
Collaboration diagram for DecPOMDPDiscreteInterface:

Public Member Functions

virtual DecPOMDPDiscreteInterface * Clone () const =0
 Returns a pointer to a copy of this class.
virtual void CreateNewRewardModel ()=0
 Creates a new reward model mapping.
virtual double GetReward (Index sI, Index jaI) const =0
 Return the reward for state, joint action indices.
virtual RGet * GetRGet () const =0
virtual void SetReward (Index sI, Index jaI, double r)=0
 Set the reward for state, joint action indices.
virtual void SetReward (Index sI, Index jaI, Index sucSI, double r)=0
 Set the reward for state, joint action, successor state indices.
virtual void SetReward (Index sI, Index jaI, Index sucSI, Index joI, double r)=0
 Set the reward for state, joint action, successor state, joint observation indices.
virtual ~DecPOMDPDiscreteInterface ()
 Destructor. Also imports the GetReward functions from the base class into the current scope.
Public Member Functions inherited from POSGDiscreteInterface
virtual void CreateNewRewardModelForAgent (Index agentI)=0
 Creates a new reward model mapping.
virtual double GetRewardForAgent (Index agentI, Index sI, Index jaI) const =0
 Return the reward for state, joint action indices.
virtual void SetRewardForAgent (Index agentI, Index sI, Index jaI, double r)=0
 Set the reward for state, joint action indices.
virtual void SetRewardForAgent (Index agentI, Index sI, Index jaI, Index sucSI, double r)=0
 Set the reward for state, joint action, successor state indices.
virtual void SetRewardForAgent (Index agentI, Index sI, Index jaI, Index sucSI, Index joI, double r)=0
 Set the reward for state, joint action, successor state, joint observation indices.
virtual ~POSGDiscreteInterface ()
 Destructor. (A virtual destructor can't be made pure abstract.)
Public Member Functions inherited from MultiAgentDecisionProcessDiscreteInterface
virtual const Action * GetAction (Index agentI, Index a) const =0
 Return a ref to the a-th action of agent agentI.
virtual double GetInitialStateProbability (Index sI) const =0
virtual const StateDistribution * GetISD () const =0
 Returns the complete initial state distribution.
virtual const JointAction * GetJointAction (Index i) const =0
 Return a ref to the i-th joint action.
virtual const JointObservation * GetJointObservation (Index i) const =0
 Return a ref to the i-th joint observation.
virtual const std::vector< size_t > & GetNrActions () const =0
 Return the number of actions vector.
virtual size_t GetNrActions (Index AgentI) const =0
 Return the number of actions of agent agentI.
virtual size_t GetNrJointActions () const =0
 Return the number of joint actions.
virtual size_t GetNrJointObservations () const =0
 Return the number of joint observations.
virtual const std::vector< size_t > & GetNrObservations () const =0
 Return the number of observations vector.
virtual size_t GetNrObservations (Index AgentI) const =0
 Return the number of observations of agent agentI.
virtual size_t GetNrStates () const =0
 Return the number of states.
virtual const Observation * GetObservation (Index agentI, Index a) const =0
 Return a ref to the a-th observation of agent agentI.
virtual const ObservationModelDiscrete * GetObservationModelDiscretePtr () const =0
 Returns a pointer to the underlying observation model.
virtual double GetObservationProbability (Index jaI, Index sucSI, Index joI) const =0
 Return the probability of joint observation joI: P(joI|jaI,sucSI).
virtual OGet * GetOGet () const =0
virtual const State * GetState (Index i) const =0
 Returns a pointer to state i.
virtual TGet * GetTGet () const =0
virtual const TransitionModelDiscrete * GetTransitionModelDiscretePtr () const =0
 Returns a pointer to the underlying transition model.
virtual double GetTransitionProbability (Index sI, Index jaI, Index sucSI) const =0
 Return the probability of successor state sucSI: P(sucSI|sI,jaI).
virtual Index IndividualToJointActionIndices (const Index *AI_ar) const =0
 Returns the joint action index that corresponds to the array of specified individual action indices.
virtual Index IndividualToJointActionIndices (const std::vector< Index > &indivActionIndices) const =0
 Returns the joint action index that corresponds to the vector of specified individual action indices.
virtual Index IndividualToJointObservationIndices (const std::vector< Index > &indivObservationIndices) const =0
 Returns the joint observation index that corresponds to the vector of specified individual observation indices.
virtual const std::vector< Index > & JointToIndividualActionIndices (Index jaI) const =0
 Returns a vector of indices to individual action indices.
virtual const std::vector< Index > & JointToIndividualObservationIndices (Index joI) const =0
 Returns a vector of indices to individual observation indices.
virtual Index SampleInitialState (void) const =0
 Sample a state according to the initial state PDF.
virtual Index SampleJointObservation (Index jaI, Index sucI) const =0
 Sample an observation - needed for simulations.
virtual Index SampleSuccessorState (Index sI, Index jaI) const =0
 Sample a successor state - needed by simulations.
virtual std::string SoftPrint () const =0
 Prints some information on the MultiAgentDecisionProcessDiscreteInterface.
virtual std::string SoftPrintState (Index sI) const =0
virtual ~MultiAgentDecisionProcessDiscreteInterface ()
 Destructor. (A virtual destructor can't be made pure abstract.)
Public Member Functions inherited from MultiAgentDecisionProcessInterface
virtual size_t GetNrAgents () const =0
 Return the number of agents.
virtual std::string GetUnixName () const =0
 Returns the base part of the problem filename.
virtual ~MultiAgentDecisionProcessInterface ()
 Destructor.
Public Member Functions inherited from POSGInterface
virtual double GetDiscountForAgent (Index agentI) const =0
 Returns the discount parameter.
virtual double GetRewardForAgent (Index agentI, State *s, JointAction *ja) const =0
 Function that returns the reward for a state and joint action.
virtual reward_t GetRewardTypeForAgent (Index agentI) const =0
 Returns the reward type.
virtual void SetDiscountForAgent (Index agentI, double d)=0
 Sets the discount parameter to 0 < d <= 1.
virtual void SetRewardForAgent (Index agentI, State *s, JointAction *ja, double r)=0
 Function that sets the reward for an agent, state and joint action.
virtual void SetRewardTypeForAgent (Index agentI, reward_t r)=0
 Sets the reward type to reward_t r.
virtual ~POSGInterface ()
 Virtual destructor.
Public Member Functions inherited from DecPOMDPInterface
virtual double GetDiscount () const =0
 Returns the discount parameter.
virtual double GetReward (State *s, JointAction *ja) const =0
 Function that returns the reward for a state and joint action.
virtual reward_t GetRewardType () const =0
 Returns the reward type.
virtual void SetDiscount (double d)=0
 Sets the discount parameter to 0 < d <= 1.
virtual void SetReward (State *s, JointAction *ja, double r)=0
 Function that sets the reward for a state and joint action.
virtual void SetRewardType (reward_t r)=0
 Sets the reward type to reward_t r.
virtual ~DecPOMDPInterface ()
 Virtual destructor.

Detailed Description

DecPOMDPDiscreteInterface is the interface for a discrete DEC-POMDP model: it defines the set/get reward functions.

DecPOMDPDiscreteInterface is an interface (i.e. pure abstract class) for a discrete DEC-POMDP model. This means that there is a single reward function and that states, actions and observations are discrete.

Classes that implement this interface are, for instance, DecPOMDPDiscrete and TransitionObservationIndependentDecPOMDPDiscrete.

Definition at line 51 of file DecPOMDPDiscreteInterface.h.
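
As a rough illustration of how the interface is typically used, the sketch below simulates a single episode by sampling states and observations and accumulating discounted reward. It assumes a fully initialized model (such as a DecPOMDPDiscrete produced by the problem parser) and that the Index type is available from the library headers; the helper name SimulateEpisode and the fixed choice of joint action 0 are made up for the example.

    // Illustrative only (not part of the library): simulate one episode
    // through the interface. Assumes a fully initialized model (e.g. a
    // DecPOMDPDiscrete loaded by the parser).
    #include <cstddef>
    #include <DecPOMDPDiscreteInterface.h>

    double SimulateEpisode(const DecPOMDPDiscreteInterface& decpomdp,
                           size_t horizon)
    {
        double sumReward = 0.0;
        double discount  = 1.0;
        Index sI = decpomdp.SampleInitialState();      // s_0 ~ initial state distribution
        for(size_t t = 0; t < horizon; ++t)
        {
            Index jaI   = 0;  // a planner would supply the joint action here
            Index sucSI = decpomdp.SampleSuccessorState(sI, jaI);       // s' ~ P(s'|s,ja)
            Index joI   = decpomdp.SampleJointObservation(jaI, sucSI);  // jo ~ P(jo|ja,s')
            sumReward  += discount * decpomdp.GetReward(sI, jaI);
            discount   *= decpomdp.GetDiscount();
            sI = sucSI;
            (void) joI;  // observations would normally drive the agents' policies
        }
        return sumReward;
    }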

Constructor & Destructor Documentation

virtual DecPOMDPDiscreteInterface::~DecPOMDPDiscreteInterface ( )
inline virtual

Destructor. Also imports the GetReward functions from the base class into the current scope.

A virtual destructor can't be made pure abstract, hence this destructor is not declared pure virtual.

Definition at line 69 of file DecPOMDPDiscreteInterface.h.

Member Function Documentation

virtual DecPOMDPDiscreteInterface* DecPOMDPDiscreteInterface::Clone ( ) const
pure virtual

Returns a pointer to a copy of this class.

Implements DecPOMDPInterface.

Implemented in TOIDecPOMDPDiscrete, DecPOMDPDiscrete, TOICompactRewardDecPOMDPDiscrete, and TOIFactoredRewardDecPOMDPDiscrete.
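
One possible way to use Clone() is polymorphic copying, for instance when experimenting with a modified model while keeping the original intact. In the sketch below the caller is assumed to own, and eventually delete, the returned copy (check the concrete implementation you use); the helper name TweakCopy is made up.

    // Illustrative only: modify a copy of the model without touching the original.
    void TweakCopy(const DecPOMDPDiscreteInterface& original)
    {
        DecPOMDPDiscreteInterface* copy = original.Clone();
        copy->SetDiscount(0.95);   // only the copy is affected
        // ... hand the copy to a planner, run experiments, etc. ...
        delete copy;               // caller releases the clone (assumption)
    }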

virtual void DecPOMDPDiscreteInterface::CreateNewRewardModel ( )
pure virtual

Creates a new reward model mapping.

Implemented in TOIDecPOMDPDiscrete, and DecPOMDPDiscrete.

virtual double DecPOMDPDiscreteInterface::GetReward ( Index  sI,
Index  jaI 
) const
pure virtual

Return the reward for state, joint action indices.

Implemented in TOIDecPOMDPDiscrete, DecPOMDPDiscrete, TOICompactRewardDecPOMDPDiscrete, and TOIFactoredRewardDecPOMDPDiscrete.

Referenced by SimulationDecPOMDPDiscrete::Step().
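
As a small usage sketch, GetReward() can be combined with GetInitialStateProbability() (inherited from MultiAgentDecisionProcessDiscreteInterface) to compute the expected immediate reward of a joint action under the initial state distribution. The helper name below is made up.

    // Illustrative only: expected immediate reward of joint action jaI
    // under the initial state distribution.
    double ExpectedImmediateReward(const DecPOMDPDiscreteInterface& decpomdp,
                                   Index jaI)
    {
        double expR = 0.0;
        for(Index sI = 0; sI < decpomdp.GetNrStates(); ++sI)
            expR += decpomdp.GetInitialStateProbability(sI)
                  * decpomdp.GetReward(sI, jaI);
        return expR;
    }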

virtual RGet* DecPOMDPDiscreteInterface::GetRGet ( ) const
pure virtual

Implemented in TOIDecPOMDPDiscrete, and DecPOMDPDiscrete.

virtual void DecPOMDPDiscreteInterface::SetReward ( Index  sI,
Index  jaI,
double  r 
)
pure virtual

Set the reward for state, joint action indices.

Implemented in TOIDecPOMDPDiscrete, and DecPOMDPDiscrete.

virtual void DecPOMDPDiscreteInterface::SetReward ( Index  sI,
Index  jaI,
Index  sucSI,
double  r 
)
pure virtual

Set the reward for state, joint action, successor state indices.

Implemented in TOIDecPOMDPDiscrete, and DecPOMDPDiscrete.

virtual void DecPOMDPDiscreteInterface::SetReward ( Index  sI,
Index  jaI,
Index  sucSI,
Index  joI,
double  r 
)
pure virtual

Set the reward for state, joint action, successor state, joint observation indices.

Implemented in TOIDecPOMDPDiscrete, and DecPOMDPDiscrete.
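
A concrete model is typically populated by first calling CreateNewRewardModel() and then filling in entries with SetReward(). The sketch below is illustrative only: the reward values and the helper name FillRewards are made up.

    // Illustrative only: populate the reward model of a concrete
    // implementation (e.g. DecPOMDPDiscrete) after states and joint
    // actions have been defined.
    void FillRewards(DecPOMDPDiscreteInterface& decpomdp)
    {
        decpomdp.CreateNewRewardModel();  // allocate an empty R(s,ja) mapping
        for(Index sI = 0; sI < decpomdp.GetNrStates(); ++sI)
            for(Index jaI = 0; jaI < decpomdp.GetNrJointActions(); ++jaI)
                decpomdp.SetReward(sI, jaI, -1.0);   // uniform step cost
        decpomdp.SetReward(0, 0, 10.0);  // override an individual entry
    }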


The documentation for this class was generated from the following file:
DecPOMDPDiscreteInterface.h