Approximate Inference For First-Order Probabilistic Languages
Hanna Pasula
Stuart Russell

Abstract: A new, general approach is described for approximate inference in first-order probabilistic languages, using Markov chain Monte Carlo (MCMC) techniques in the space of concrete possible worlds underlying any given knowledge base. The simplicity of the approach and its lazy construction of possible worlds make it possible to consider quite expressive languages. In particular, we consider two extensions to the basic relational probability models (RPMs) defined by Koller and Pfeffer, both of which have caused difficulties for exact algorithms. The first extension deals with uncertainty about relations among objects, where MCMC samples over relational structures. The second extension deals with uncertainty about the identity of individuals, where MCMC samples over sets of equivalence classes of objects. In both cases, we identify types of probability distributions that allow local decomposition of inference while encoding possible domains in a plausible way. We apply our algorithms to simple examples and show that the MCMC approach scales well.
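The core idea of sampling over concrete possible worlds can be illustrated with a generic Metropolis-Hastings loop. The sketch below is not the authors' implementation; it is a minimal illustration in which a "world" is any hashable state, and `propose` and `log_prob` (both names chosen here for illustration) stand in for a proposal over relational structures or equivalence-class assignments and an unnormalized world probability, respectively.

```python
import math
import random

def metropolis_hastings(init_world, propose, log_prob, n_steps, seed=0):
    """Generic Metropolis-Hastings over a discrete space of possible worlds.

    propose(world, rng) -> (new_world, log_q_fwd, log_q_bwd), where the
    log_q terms are the forward and backward proposal log-probabilities.
    log_prob(world) -> unnormalized log probability of the world.
    Returns the list of visited worlds (one per step).
    """
    rng = random.Random(seed)
    world = init_world
    lp = log_prob(world)
    samples = []
    for _ in range(n_steps):
        new_world, lq_fwd, lq_bwd = propose(world, rng)
        new_lp = log_prob(new_world)
        # Standard MH acceptance ratio in log space.
        log_accept = (new_lp - lp) + (lq_bwd - lq_fwd)
        if math.log(rng.random()) < log_accept:
            world, lp = new_world, new_lp
        samples.append(world)
    return samples

# Toy example: two possible worlds with unnormalized weights 1 and 3,
# so the target distribution is (0.25, 0.75). The proposal simply flips
# to the other world and is symmetric (both log_q terms are 0).
def flip(world, rng):
    return 1 - world, 0.0, 0.0

def toy_log_prob(world):
    return math.log(3.0) if world == 1 else 0.0

samples = metropolis_hastings(0, flip, toy_log_prob, 20000, seed=1)
freq = samples.count(1) / len(samples)  # should be close to 0.75
```

In the paper's setting, the proposal would modify one relation instance or split/merge one equivalence class at a time, so that `log_prob` can be recomputed locally rather than over the whole world.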

Appeared in: Proceedings of the International Joint Conference on Artificial Intelligence 2001 (IJCAI-01).

Download: PS version


Hanna Pasula
Last modified: Mon Aug 16 19:22:13 EDT 2004