Here is a list of all class members with links to the classes they belong to:
- _ -
- _m_action
: AlphaVector
- _m_actionHistoryTreeRootPointers
: PlanningUnitMADPDiscrete
- _m_actionHistoryTreeVectors
: PlanningUnitMADPDiscrete
- _m_actionI
: ActionHistory
- _m_actionObservationHistoryTreeRootPointers
: PlanningUnitMADPDiscrete
- _m_actionObservationHistoryTreeVectors
: PlanningUnitMADPDiscrete
- _m_actionStepSize
: TransitionObservationIndependentMADPDiscrete
, MADPComponentDiscreteActions
- _m_actionVecs
: MADPComponentDiscreteActions
- _m_agentI
: IndividualBeliefJESP
, IndividualHistory
, PlanningUnit
, Policy
, PolicyPureVector
- _m_agents
: MultiAgentDecisionProcess
- _m_ahI
: ActionObservationHistory
- _m_aIndexVector
: JointActionDiscrete
- _m_aIs
: AgentBG
- _m_alpha
: DICEPSPlanner
- _m_aohI
: Type_AOHIndex
- _m_apVector
: JointActionDiscrete
- _m_areCachedImmediateRewards
: BayesianGameForDecPOMDPStage
- _m_avg_reward
: SimulationResult
- _m_b
: Belief
, BeliefSparse
- _m_backupType
: PerseusBGPlanner
- _m_belief
: BeliefIterator
, BeliefIteratorSparse
- _m_beliefs
: PerseusStationary
- _m_beliefsInitialized
: Perseus
- _m_bestValue
: Perseus
- _m_betaI
: AlphaVector
- _m_bgBaseFilename
: GeneralizedMAAStarPlanner
- _m_bgCounter
: GeneralizedMAAStarPlanner
- _m_bgip
: AgentBG
, AlphaVectorBG
- _m_cached
: ValueFunctionDecPOMDPDiscrete
- _m_cachedAllJointActions
: MADPComponentDiscreteActions
- _m_cachedAllJointObservations
: MADPComponentDiscreteObservations
- _m_capacity
: FixedCapacityPriorityQueue< T >
- _m_computeVectorForEachBelief
: Perseus
- _m_containedElem
: TreeNode< Tcontained >
- _m_containsEmptyJOI
: JointObservationHistory
- _m_containsEmptyOI
: ObservationHistory
- _m_depth
: JointPolicy
, Policy
- _m_description
: NamedDescribedEntity
- _m_discount
: DecPOMDP
, POSG
- _m_domainToActionIndices
: PolicyPureVector
- _m_dryrun
: Perseus
- _m_error
: E
- _m_expectedRewardFoundPolicy
: BruteForceSearchPlanner
, DICEPSPlanner
, GeneralizedMAAStarPlanner
, JESPDynamicProgrammingPlanner
, JESPExhaustivePlanner
- _m_finiteHorizon
: MDPValueIteration
- _m_firstAHIforT
: PlanningUnitMADPDiscrete
- _m_firstAOHIforT
: PlanningUnitMADPDiscrete
- _m_firstJAHIforT
: PlanningUnitMADPDiscrete
- _m_firstJAOHIforT
: PlanningUnitMADPDiscrete
- _m_firstJOHIforT
: PlanningUnitMADPDiscrete
- _m_firstOHIforT
: PlanningUnitMADPDiscrete
- _m_foundPolicy
: BruteForceSearchPlanner
, DICEPSPlanner
, GeneralizedMAAStarPlanner
, JESPDynamicProgrammingPlanner
, JESPExhaustivePlanner
- _m_h
: ValueFunctionDecPOMDPDiscrete
- _m_horizon
: PlanningUnit
, SimulationDecPOMDPDiscrete
, SimulationResult
- _m_i
: BeliefIterator
, BeliefIteratorSparse
- _m_I_PTPD
: PolicyDiscrete
- _m_id
: SimulationAgent
- _m_idc
: JPolComponent_VectorImplementation
- _m_identification
: Perseus
- _m_immR
: BayesianGameForDecPOMDPStage
- _m_includePositions
: ProblemFireFighting
- _m_index
: DiscreteEntity
, TreeNode< Tcontained >
- _m_indexDomCat
: JointPolicyDiscrete
, PolicyDiscrete
- _m_indexValid
: TreeNode< Tcontained >
- _m_indivActionIndices
: JointPolicyPureVector
, JPolComponent_VectorImplementation
- _m_individualActionHistories
: JointActionHistory
, PlanningUnitMADPDiscreteParameters
- _m_individualActionObservationHistories
: JointActionObservationHistory
, PlanningUnitMADPDiscreteParameters
- _m_individualDecPOMDPDs
: TOIDecPOMDPDiscrete
- _m_individualMADPDs
: TransitionObservationIndependentMADPDiscrete
- _m_individualObservationHistories
: JointObservationHistory
, PlanningUnitMADPDiscreteParameters
- _m_indivObs
: TransitionObservationIndependentMADPDiscrete
- _m_indivPols_PolicyPureVector
: JointPolicyPureVector
, JPolComponent_VectorImplementation
- _m_indivStateIndices
: TransitionObservationIndependentMADPDiscrete
- _m_indivStateIndicesMap
: TransitionObservationIndependentMADPDiscrete
- _m_initialized
: DecPOMDPDiscrete
, MADPComponentDiscreteActions
, MADPComponentDiscreteObservations
, MADPComponentDiscreteStates
, MultiAgentDecisionProcessDiscrete
, POSG
, POSGDiscrete
, TOICompactRewardDecPOMDPDiscrete
, TOIDecMDPDiscrete
, TOIDecPOMDPDiscrete
, TOIFactoredRewardDecPOMDPDiscrete
, TransitionObservationIndependentMADPDiscrete
, PlanningUnitMADPDiscrete
, AlphaVectorPlanning
, BayesianGame
, BayesianGameBase
, BayesianGameIdenticalPayoff
, MDPValueIteration
, QFunctionJAOHTree
, QMDP
- _m_initializeWithImmediateReward
: Perseus
- _m_initializeWithZero
: Perseus
- _m_initialStateDistribution
: MADPComponentDiscreteStates
, TransitionObservationIndependentMADPDiscrete
- _m_intermediateResultFile
: GeneralizedMAAStarPlanner
- _m_intermediateResultsFilename
: SimulationDecPOMDPDiscrete
- _m_intermediateTimingFilename
: GeneralizedMAAStarPlanner
- _m_isEmpty
: ActionHistory
, JointActionHistory
- _m_it
: BeliefIteratorGeneric
- _m_ja_str
: RewardModelMapping
, RewardModelMappingSparse
, RewardModelTOISparse
- _m_jaI
: JointActionObservationHistory
- _m_jaIfirst
: AgentBG
- _m_jaohConditionalProbs
: PlanningUnitMADPDiscrete
- _m_jaohProbs
: PlanningUnitMADPDiscrete
- _m_jb
: AgentPOMDP
, AgentQMDP
- _m_jBeliefCache
: PlanningUnitMADPDiscrete
- _m_JBs
: BayesianGameForDecPOMDPStage
- _m_joI
: JointActionObservationHistory
- _m_jointActionHistories
: PlanningUnitMADPDiscreteParameters
- _m_jointActionHistoryTreeRoot
: PlanningUnitMADPDiscrete
- _m_jointActionHistoryTreeVector
: PlanningUnitMADPDiscrete
- _m_jointActionI
: JointActionHistory
- _m_jointActionIndices
: MADPComponentDiscreteActions
- _m_jointActionMap
: TransitionObservationIndependentMADPDiscrete
- _m_jointActionObservationHistories
: PlanningUnitMADPDiscreteParameters
- _m_jointActionObservationHistoryTreeMap
: PlanningUnitMADPDiscrete
- _m_jointActionObservationHistoryTreeRoot
: PlanningUnitMADPDiscrete
- _m_jointActionObservationHistoryTreeVector
: PlanningUnitMADPDiscrete
- _m_jointActionVec
: MADPComponentDiscreteActions
, TransitionObservationIndependentMADPDiscrete
- _m_JointBeliefs
: PlanningUnitMADPDiscreteParameters
- _m_jointIndicesValid
: MADPComponentDiscreteActions
, MADPComponentDiscreteObservations
- _m_jointObs
: TransitionObservationIndependentMADPDiscrete
- _m_jointObservationHistories
: PlanningUnitMADPDiscreteParameters
- _m_jointObservationHistoryTreeRoot
: PlanningUnitMADPDiscrete
- _m_jointObservationHistoryTreeVector
: PlanningUnitMADPDiscrete
- _m_jointObservationI
: JointObservationHistory
- _m_jointObservationIndices
: MADPComponentDiscreteObservations
- _m_jointObservationVec
: MADPComponentDiscreteObservations
- _m_jointObsMap
: TransitionObservationIndependentMADPDiscrete
- _m_jointStates
: TransitionObservationIndependentMADPDiscrete
- _m_jointStatesMap
: TransitionObservationIndependentMADPDiscrete
- _m_jointToIndActionCache
: TransitionObservationIndependentMADPDiscrete
- _m_jointToIndObsCache
: TransitionObservationIndependentMADPDiscrete
- _m_jointToIndTypes
: BayesianGameBase
- _m_jointToIndTypesMap
: BayesianGameBase
- _m_jpol
: AgentBG
, JPPVIndexValuePair
, JPPVValuePair
, PartialJPDPValuePair
, PartialJPPVIndexValuePair
, ValueFunctionDecPOMDPDiscrete
- _m_jpolDepth
: JPPVIndexValuePair
, PartialJPPVIndexValuePair
- _m_jpolIndex
: BGIPSolution
, JPPVIndexValuePair
, PartialJPPVIndexValuePair
- _m_jpolIndices
: BGIPSolution
- _m_jpvpQueue_p
: PolicyPoolJPolValPair
, PolicyPoolPartialJPolValPair
- _m_jTypeProbs
: BayesianGameBase
- _m_jTypeProbsSparse
: BayesianGameBase
- _m_l
: FixedCapacityPriorityQueue< T >
- _m_length
: History
- _m_maximumNumberOfIterations
: Perseus
- _m_maxJPolPoolSize
: GeneralizedMAAStarPlanner
- _m_minimumNumberOfIterations
: Perseus
- _m_name
: NamedDescribedEntity
- _m_newBGIP_Solver
: GMAA_kGMAA
- _m_nodeType
: ActionObservationHistoryTree
, JointActionObservationHistoryTree
- _m_noJointModels
: TransitionObservationIndependentMADPDiscrete
- _m_nr_agents
: MADPComponentDiscreteActions
, TransitionObservationIndependentMADPDiscrete
- _m_nr_stored
: SimulationResult
- _m_nrActionHistories
: PlanningUnitMADPDiscrete
- _m_nrActionHistoriesT
: PlanningUnitMADPDiscrete
- _m_nrActionObservationHistories
: PlanningUnitMADPDiscrete
- _m_nrActionObservationHistoriesT
: PlanningUnitMADPDiscrete
- _m_nrActions
: MADPComponentDiscreteActions
, BayesianGameBase
- _m_nrAgents
: MultiAgentDecisionProcess
, POSG
, IndividualBeliefJESP
, JointPolicy
, ProblemFireFighting
, BayesianGameBase
- _m_nrEvalRuns
: DICEPSPlanner
- _m_nrFireLevels
: ProblemFireFighting
- _m_nrFLs_vec
: ProblemFireFighting
- _m_nrHouses
: ProblemFireFighting
- _m_nrIndivActions
: TransitionObservationIndependentMADPDiscrete
- _m_nrIndivObs
: TransitionObservationIndependentMADPDiscrete
- _m_nrIndivStates
: TransitionObservationIndependentMADPDiscrete
- _m_nrIterations
: DICEPSPlanner
- _m_nrJA
: BayesianGameBase
- _m_nrJO
: ValueFunctionDecPOMDPDiscrete
- _m_nrJOH
: ValueFunctionDecPOMDPDiscrete
- _m_nrJointActionHistories
: PlanningUnitMADPDiscrete
- _m_nrJointActionHistoriesT
: PlanningUnitMADPDiscrete
- _m_nrJointActionObservationHistories
: PlanningUnitMADPDiscrete
- _m_nrJointActionObservationHistoriesT
: PlanningUnitMADPDiscrete
- _m_nrJointActions
: RewardModel
, MADPComponentDiscreteActions
, ObservationModelDiscrete
, TransitionModelDiscrete
, TransitionObservationIndependentMADPDiscrete
- _m_nrJointFirelevels
: ProblemFireFighting
- _m_nrJointObservationHistories
: PlanningUnitMADPDiscrete
- _m_nrJointObservationHistoriesT
: PlanningUnitMADPDiscrete
- _m_nrJointObservations
: TransitionObservationIndependentMADPDiscrete
, MADPComponentDiscreteObservations
, ObservationModelDiscrete
- _m_nrJointPoliciesForUpdate
: DICEPSPlanner
- _m_nrJointStates
: TransitionObservationIndependentMADPDiscrete
- _m_nrJPolBGsEvaluated
: GeneralizedMAAStarPlanner
- _m_nrJTypes
: BayesianGameBase
- _m_nrObservationHistories
: PlanningUnitMADPDiscrete
- _m_nrObservationHistoriesT
: PlanningUnitMADPDiscrete
- _m_nrObservations
: MADPComponentDiscreteObservations
- _m_nrOH_others
: IndividualBeliefJESP
- _m_nrPerStateFeatureVec
: ProblemFireFighting
- _m_nrPoliciesToProcess
: GeneralizedMAAStarPlanner
- _m_nrRestarts
: BGIP_SolverCreator_AM< JP >
, BGIP_SolverAlternatingMaximization< JP >
, DICEPSPlanner
- _m_nrRuns
: Simulation
- _m_nrS
: ValueFunctionDecPOMDPDiscrete
- _m_nrSampledJointPolicies
: DICEPSPlanner
- _m_nrSolutions
: BGIP_SolverBruteForceSearch< JP >
, BGIP_SolverCreator_AM< JP >
, BGIP_SolverCreator_BFS< JP >
, BGIPSolution
- _m_nrStateFeatures
: ProblemFireFighting
- _m_nrStates
: MADPComponentDiscreteStates
, ObservationModelDiscrete
, TransitionModelDiscrete
, RewardModel
- _m_nrTwoAgentActions
: TOICompactRewardDecPOMDPDiscrete
- _m_nrTwoAgentStates
: TOICompactRewardDecPOMDPDiscrete
- _m_nrTypes
: BayesianGameBase
- _m_O
: ObservationModelMapping
, ObservationModelMappingSparse
, OGet_ObservationModelMapping
, AlphaVectorPlanning
, OGet_ObservationModelMappingSparse
- _m_observationHistoryTreeRootPointers
: PlanningUnitMADPDiscrete
- _m_observationHistoryTreeVectors
: PlanningUnitMADPDiscrete
- _m_observationI
: ObservationHistory
- _m_observationStepSize
: MADPComponentDiscreteObservations
- _m_observationVecs
: MADPComponentDiscreteObservations
- _m_ohI
: ActionObservationHistory
- _m_oIndexVector
: JointObservationDiscrete
- _m_oIs
: AgentBG
- _m_opVector
: JointObservationDiscrete
- _m_Os
: AlphaVectorPlanning
- _m_OsForBackup
: AlphaVectorPlanning
- _m_others
: IndividualBeliefJESP
- _m_outputConvergenceFile
: DICEPSPlanner
- _m_outputConvergenceStatistics
: DICEPSPlanner
- _m_p
: QAV< P >
, QMDP
- _m_p_oModel
: MultiAgentDecisionProcessDiscrete
, TransitionObservationIndependentMADPDiscrete
- _m_p_rModel
: DecPOMDPDiscrete
, POSGDiscrete
, TOIDecPOMDPDiscrete
- _m_p_rModels
: TOIFactoredRewardDecPOMDPDiscrete
, TOICompactRewardDecPOMDPDiscrete
- _m_p_tModel
: MultiAgentDecisionProcessDiscrete
, TransitionObservationIndependentMADPDiscrete
- _m_p_V
: ValueFunctionDecPOMDPDiscrete
- _m_params
: PlanningUnitMADPDiscrete
- _m_pastR
: PartialJPPVIndexValuePair
- _m_pastReward
: PartialJointPolicy
- _m_payoff
: BGIPSolution
- _m_pJPol
: BayesianGameForDecPOMDPStageInterface
- _m_policy
: BGIPSolution
- _m_pred
: ActionObservationHistory
, ActionHistory
, JointActionObservationHistory
, ObservationHistory
, JointObservationHistory
, TreeNode< Tcontained >
, JointActionHistory
- _m_prevJaI
: AgentPOMDP
, AgentQMDP
- _m_prevJaIs
: AgentBG
- _m_prevJB
: AgentBG
- _m_prevJoIs
: AgentBG
- _m_problem
: ParserTOIDecPOMDPDiscrete
, ParserTOIFactoredRewardDecPOMDPDiscrete
, PlanningUnit
, ParserTOIDecMDPDiscrete
, ParserTOICompactRewardDecPOMDPDiscrete
- _m_problemFile
: MultiAgentDecisionProcess
- _m_PTPD
: JointPolicyDiscrete
- _m_PTPDP
: JPolComponent_VectorImplementation
- _m_pu
: AlphaVectorPlanning
, JPPVIndexValuePair
, BGIPSolution
, MDPSolver
, Perseus
, ValueFunctionDecPOMDPDiscrete
, PartialJPPVIndexValuePair
, SimulationDecPOMDPDiscrete
, QFunctionForDecPOMDP
, BayesianGameForDecPOMDPStage
, AgentDecPOMDPDiscrete
- _m_pumadp
: IndividualBeliefJESP
- _m_q
: BGIPSolution
- _m_Q
: AgentQMDP
- _m_QBG
: AgentBG
- _m_qFunction
: PerseusStationary
- _m_qHeuristic
: BayesianGameForDecPOMDPStage
, GeneralizedMAAStarPlannerForDecPOMDPDiscrete
- _m_QPOMDP
: AgentPOMDP
- _m_QValues
: QFunctionJAOH
, MDPValueIteration
- _m_R
: RewardModelMapping
, RewardModelMappingSparse
, RGet_RewardModelMappingSparse
, RewardModelTOISparse
, RGet_RewardModelMapping
- _m_random_seed
: Simulation
, SimulationResult
- _m_referred
: Referrer< T >
- _m_results_f
: BayesianGameIdenticalPayoffSolver< JP >
- _m_resultsFilename
: Perseus
- _m_rewards
: SimulationResult
- _m_rewardType
: DecPOMDP
, POSG
- _m_s_str
: RewardModelMapping
, RewardModelMappingSparse
, RewardModelTOISparse
- _m_saveIntermediateResults
: SimulationDecPOMDPDiscrete
- _m_saveIntermediateTiming
: GeneralizedMAAStarPlanner
- _m_sc
: Type
- _m_seed
: PlanningUnit
- _m_sizeVec
: IndividualBeliefJESP
- _m_slack
: GeneralizedMAAStarPlanner
- _m_solution
: BayesianGameIdenticalPayoffSolver< JP >
- _m_sparse
: TransitionObservationIndependentMADPDiscrete
, MultiAgentDecisionProcessDiscrete
- _m_stage
: IndividualBeliefJESP
- _m_stateVec
: MADPComponentDiscreteStates
- _m_stepSizeActions
: BayesianGameBase
- _m_stepsizeJOHOH
: IndividualBeliefJESP
- _m_stepsizeSJOH
: IndividualBeliefJESP
- _m_stepSizeTypes
: BayesianGameBase
- _m_storeIntermediateValueFunctions
: Perseus
- _m_storeTimings
: Perseus
- _m_successor
: TreeNode< Tcontained >
- _m_T
: TransitionModelMapping
, TransitionModelMappingSparse
, TGet_TransitionModelMapping
, TGet_TransitionModelMappingSparse
, AlphaVectorPlanning
- _m_t
: AgentBG
, AgentPOMDP
, AgentQMDP
, BayesianGameForDecPOMDPStageInterface
- _m_timeAtInitialization
: Timing
- _m_timer
: TimedAlgorithm
- _m_timesMap
: Timing
- _m_timings_f
: BayesianGameIdenticalPayoffSolver< JP >
- _m_Ts
: AlphaVectorPlanning
- _m_TsForBackup
: AlphaVectorPlanning
- _m_TsOsForBackup
: AlphaVectorPlanning
- _m_unixName
: MultiAgentDecisionProcess
- _m_use_gamma
: DICEPSPlanner
- _m_useJaohQValuesCache
: QMDP
- _m_useSparse
: AlphaVectorPlanning
, BayesianGameBase
- _m_useSparseBeliefs
: PlanningUnitMADPDiscreteParameters
, GeneralizedMAAStarPlanner
- _m_utilFuncs
: BayesianGame
- _m_utilFunction
: BayesianGameIdenticalPayoff
- _m_V_initialized
: ValueFunctionDecPOMDPDiscrete
- _m_val
: JointPolicyValuePair
, PartialJointPolicyValuePair
- _m_valueFunction
: PerseusStationary
- _m_valueFunctionFilename
: Perseus
- _m_values
: AlphaVector
- _m_verbose
: DICEPSPlanner
, BGIP_SolverAlternatingMaximization< JP >
, BGIP_SolverCreator_BFS< JP >
, Perseus
, SimulationAgent
, Simulation
, BGIP_SolverCreator_AM< JP >
- _m_verboseness
: GeneralizedMAAStarPlanner
, BayesianGameBase
- _m_verbosity
: BGIP_SolverBruteForceSearch< JP >
- _m_writeAnyTimeResults
: BayesianGameIdenticalPayoffSolver< JP >