MultiAgentDecisionProcess
Release 0.2.1
Globals contains several definitions global to the MADP toolbox.
Typedefs
typedef unsigned int Index
    A general index.
typedef unsigned long long int LIndex
    A long long index.
Enumerations
enum reward_t { REWARD, COST }
    Inherited from Tony's POMDP file format.
Functions
bool EqualProbability (double p1, double p2)
bool EqualReward (double r1, double r2)
Variables
const Index INITIAL_JAOHI = 0
    The initial (=empty) joint action-observation history index.
const Index INITIAL_JOHI = 0
    The initial (=empty) joint observation history index.
const unsigned int MAXHORIZON = 999999
    The highest horizon we will consider.
const double PROB_PRECISION = 1e-8
    The precision for probabilities.
const double REWARD_PRECISION = 1e-8
    Used to determine when two (immediate) rewards are considered equal.
Globals contains several definitions global to the MADP toolbox.
typedef unsigned int Globals::Index
typedef unsigned long long int Globals::LIndex
enum Globals::reward_t
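The declarations above can be pictured together as a minimal sketch of the namespace; this is a hypothetical reconstruction of what Globals.h plausibly contains, not the actual header.

```cpp
#include <cassert>

namespace Globals {

// A general index.
typedef unsigned int Index;

// A long long index, for quantities (such as joint history indices)
// that can grow beyond the range of a 32-bit Index.
typedef unsigned long long int LIndex;

// Inherited from Tony's POMDP file format: a value function can be
// expressed either as a reward to maximize or a cost to minimize.
enum reward_t { REWARD, COST };

} // namespace Globals
```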
bool Globals::EqualProbability (double p1, double p2)
Definition at line 32 of file Globals.cpp.
References PROB_PRECISION.
Referenced by BayesianGameBase::SanityCheck(), and BayesianGameBase::SanityCheckBGBase().
bool Globals::EqualReward (double r1, double r2)
Definition at line 37 of file Globals.cpp.
References REWARD_PRECISION.
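Both comparison functions amount to a tolerance check against the corresponding precision constant. The following is a minimal sketch consistent with the documentation above; the actual bodies in Globals.cpp may differ in detail (for example, strict vs. non-strict inequality).

```cpp
#include <cmath>

namespace Globals {

const double PROB_PRECISION   = 1e-8;
const double REWARD_PRECISION = 1e-8;

// Two probabilities are considered equal when they differ by
// less than PROB_PRECISION.
bool EqualProbability(double p1, double p2)
{
    return std::fabs(p1 - p2) < PROB_PRECISION;
}

// Likewise for immediate rewards, using REWARD_PRECISION.
bool EqualReward(double r1, double r2)
{
    return std::fabs(r1 - r2) < REWARD_PRECISION;
}

} // namespace Globals
```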
const Index Globals::INITIAL_JAOHI = 0
The initial (=empty) joint action-observation history index.
Definition at line 69 of file Globals.h.
Referenced by QFunctionJAOHTree::ComputeQ(), and PlanningUnitMADPDiscrete::GetJAOHProbs().
const Index Globals::INITIAL_JOHI = 0
The initial (=empty) joint observation history index.
Definition at line 67 of file Globals.h.
Referenced by ValueFunctionDecPOMDPDiscrete::CalculateV0RecursivelyCached(), ValueFunctionDecPOMDPDiscrete::CalculateV0RecursivelyNotCached(), and SimulationDecPOMDPDiscrete::RunSimulation().
const unsigned int Globals::MAXHORIZON = 999999
The highest horizon we will consider.
When the horizon of a problem is set to this value, we consider it an infinite-horizon problem.
Definition at line 53 of file Globals.h.
Referenced by PlanningUnitMADPDiscrete::Deinitialize(), Perseus::GetInitialValueFunction(), BayesianGameBase::GetNrPolicyDomainElements(), PlanningUnitMADPDiscrete::GetNrPolicyDomainElements(), MDPValueIteration::Initialize(), PlanningUnitMADPDiscrete::Initialize(), SimulationDecPOMDPDiscrete::Initialize(), MDPSolver::Print(), AlphaVectorPlanning::SampleBeliefs(), and PlanningUnitDecPOMDPDiscrete::SanityCheck().
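The convention described above (a horizon equal to MAXHORIZON means "infinite horizon") can be illustrated with a small sketch; IsInfiniteHorizon is a hypothetical helper name, not a function of the toolbox.

```cpp
namespace Globals {
const unsigned int MAXHORIZON = 999999;
}

// Hypothetical helper illustrating the MAXHORIZON convention: a problem
// whose horizon is set to MAXHORIZON is treated as infinite-horizon;
// any smaller value denotes a finite-horizon problem.
bool IsInfiniteHorizon(unsigned int horizon)
{
    return horizon == Globals::MAXHORIZON;
}
```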
const double Globals::PROB_PRECISION = 1e-8
The precision for probabilities.
Used to determine when two probabilities are considered equal, for instance when converting full beliefs to sparse beliefs.
Definition at line 59 of file Globals.h.
Referenced by QPOMDP::ComputeRecursively(), QBG::ComputeRecursively(), EqualProbability(), AlphaVectorPlanning::GetDuplicateIndices(), PlanningUnitDecPOMDPDiscrete::SanityCheck(), MultiAgentDecisionProcessDiscrete::SanityCheck(), Belief::SanityCheck(), BeliefSparse::SanityCheck(), ObservationModelMappingSparse::Set(), TransitionModelMappingSparse::Set(), MADPComponentDiscreteStates::SetInitialized(), JointBeliefSparse::Update(), JointBeliefSparse::UpdateSlow(), and AlphaVectorPlanning::VectorIsInValueFunction().
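The "full beliefs to sparse beliefs" use case mentioned above can be sketched as follows; ToSparseBelief is a hypothetical illustration, not the toolbox's actual conversion routine (see e.g. BeliefSparse and JointBeliefSparse for the real implementations).

```cpp
#include <map>
#include <vector>

namespace Globals {
const double PROB_PRECISION = 1e-8;
}

// Hypothetical full-to-sparse belief conversion: state probabilities
// below PROB_PRECISION are treated as zero and omitted from the
// sparse representation.
std::map<unsigned int, double>
ToSparseBelief(const std::vector<double>& full)
{
    std::map<unsigned int, double> sparse;
    for (unsigned int s = 0; s < full.size(); ++s)
        if (full[s] > Globals::PROB_PRECISION)  // drop (near-)zero entries
            sparse[s] = full[s];
    return sparse;
}
```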
const double Globals::REWARD_PRECISION = 1e-8
Used to determine when two (immediate) rewards are considered equal.
Definition at line 61 of file Globals.h.
Referenced by EqualReward(), and RewardModelMappingSparse::Set().