
"Probabilistic ILP" - Tutorial


Content

The tutorial will focus on settings for probabilistic inductive logic programming (PILP). The outline and the content of the tutorial will be adapted from:

  • L. De Raedt, K. Kersting. Probabilistic Inductive Logic Programming. Invited paper in S. Ben-David, J. Case and A. Maruoka, editors, Proceedings of the 15th International Conference on Algorithmic Learning Theory (ALT-2004), pages 19-36. Padova, Italy, October 2-5, 2004.

  • L. De Raedt, K. Kersting. Probabilistic Logic Learning. In ACM-SIGKDD Explorations, special issue on Multi-Relational Data Mining, Vol. 5(1), pp. 31-48, July 2003, [link]
In particular, we start from inductive logic programming (ILP) and show how the classical ILP settings can be extended to the probabilistic case, yielding learning from probabilistic entailment, learning from probabilistic interpretations, and learning from probabilistic proofs.

For each learning setting, we will survey state-of-the-art statistical relational learning approaches. More precisely, we will focus on probabilistic-logical models (PLMs) which fit the setting most naturally. PLMs integrate a traditional probabilistic model with a first-order logical or relational representation language. For instance, Bayesian networks or probabilistic context-free grammars are selected and upgraded by incorporating entity-relationship (ER) models, Datalog, or Prolog. Depending on the selected probabilistic model, some learning settings are more natural than others. For instance, PLMs upgrading Bayesian networks determine probabilities in terms of possible worlds and, hence, are natural candidates for learning from probabilistic interpretations, whereas PLMs extending probabilistic context-free grammars specify probabilities by means of possible proofs with respect to a given goal and, hence, are natural candidates for learning from probabilistic proofs. This is not a hard classification, of course; most PLMs can in principle be learned within all three learning settings.
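
The following toy Python sketch is our own illustration, not part of the tutorial material: the grammar rules, ground facts, and probabilities are invented. It contrasts the two semantics: a proof is scored by multiplying the probabilities of the clauses it uses, while an interpretation is scored as a joint distribution over ground facts.

    # Toy illustration (invented rules, facts, and probabilities; not any
    # particular PLM) of the two semantics described above.

    # Proof-based semantics (PCFG/SLP style): the probability of a single
    # proof is the product of the probabilities of the clauses it applies.
    clause_probs = {"s -> np vp": 1.0, "np -> ann": 0.4, "np -> bob": 0.6,
                    "vp -> runs": 0.7, "vp -> walks": 0.3}

    def proof_probability(clauses_used):
        """Probability of one proof: product over the clauses it applies."""
        p = 1.0
        for clause in clauses_used:
            p *= clause_probs[clause]
        return p

    print(proof_probability(["s -> np vp", "np -> ann", "vp -> runs"]))  # ~0.28

    # Possible-world semantics (Bayesian-network style): the probability of
    # an interpretation is a joint distribution over all ground facts; here
    # the two facts are independent for simplicity.
    fact_probs = {"smokes(ann)": 0.3, "smokes(bob)": 0.2}

    def world_probability(world):
        """Probability of one interpretation (a truth assignment to facts)."""
        p = 1.0
        for fact, prob_true in fact_probs.items():
            p *= prob_true if world[fact] else 1.0 - prob_true
        return p

    print(world_probability({"smokes(ann)": True, "smokes(bob)": False}))  # ~0.24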

The structure of the tutorial will be roughly as follows:

1) Introduction to PILP (10 min.):
  • objectives of PILP
  • a motivating application
2) Foundations of PILP (30 min.): basic concepts from
  • logic programming,
  • traditional probabilistic models, and
  • inductive logic programming.
3) Probabilistic ILP (10 min.):
  • general problem definition of PILP,
  • probabilistic covers relation,
  • parameter estimation, and
  • structure learning (see the sketch below).
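
The following toy Python sketch is our own illustration (the hypothesis, predicates, and data are invented): it shows the probabilistic covers relation as P(example | hypothesis) and maximum likelihood parameter estimation by relative frequency counting from fully observed examples.

    # Toy illustration (invented hypothesis, predicates, and data; not any
    # concrete PILP system) of the probabilistic covers relation and maximum
    # likelihood parameter estimation. The hypothesis is a single conditional
    # probability table P(head | body).

    def covers(cpt, example):
        """Probabilistic covers relation: P(example | hypothesis)."""
        body, head = example
        return cpt[body][head]

    def estimate_parameters(examples):
        """ML estimation from fully observed examples: the CPT entry for
        (body, head) is the relative frequency of head given body."""
        counts = {}
        for body, head in examples:
            counts.setdefault(body, {}).setdefault(head, 0)
            counts[body][head] += 1
        return {body: {head: n / sum(heads.values())
                       for head, n in heads.items()}
                for body, heads in counts.items()}

    # Invented data: does alarm(X) hold given whether burglary(X) holds?
    examples = [(True, True), (True, True), (True, False),
                (False, False), (False, False), (False, True)]
    cpt = estimate_parameters(examples)
    print(covers(cpt, (True, True)))   # 2/3
    print(covers(cpt, (False, True)))  # 1/3
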
4) Learning Settings for PILP (90 min.)
  • Probabilistic Entailment (30 min.):
    David Poole's probabilistic Horn abduction (PHA) [Poo93],
    Muggleton's stochastic logic programs (SLPs) [Mug96, Cus00],
    and Sato's PRISM [Sat95, SK01] (see the sketch below).
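
As a toy illustration of this setting (our own Python sketch; the facts, goals, and probabilities are invented, and the syntax is neither PRISM nor SLPs), the probability that a goal is entailed can be computed as a sum over its explanations, assuming the explanations are mutually exclusive:

    # Toy illustration: the probability that a goal is entailed is the sum,
    # over its mutually exclusive explanations, of the product of the
    # probabilistic facts each explanation uses.
    fact_probs = {"gene(a)": 0.6, "gene(b)": 0.4}

    explanations = {  # each explanation: the probabilistic facts it needs
        "bloodtype(ab)": [["gene(a)", "gene(b)"]],
        "bloodtype(a)": [["gene(a)"]],
    }

    def prob_entailed(goal):
        """P(goal): sum over explanations of the product of fact probs."""
        total = 0.0
        for explanation in explanations[goal]:
            p = 1.0
            for fact in explanation:
                p *= fact_probs[fact]
            total += p
        return total

    print(prob_entailed("bloodtype(ab)"))  # ~0.24
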
Break (30 min.)
(contd.)
  • Probabilistic Interpretations (30 min.):
    Ngo and Haddawy's probabilistic-logic programs (PLPs) [NH97],
    Koller et al.'s probabilistic relational models (PRMs)
    [FGKP99, Pfe00, Get01], and Kersting and De Raedt's Bayesian
    logic programs (BLPs) [KD01a, KD01b] (see the sketch below).
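
A toy Python sketch in the spirit of this setting (our own illustration; the clause, constants, and CPT entries are invented): grounding a single Bayesian clause yields a small Bayesian network whose joint distribution scores interpretations.

    # Toy illustration in the spirit of Bayesian logic programs: the clause
    # alarm(X) | burglary(X) is grounded for each constant, and the resulting
    # ground Bayesian network assigns a joint probability to interpretations.
    p_burglary = 0.1
    p_alarm_given = {True: 0.9, False: 0.05}  # CPT for alarm(X) | burglary(X)
    constants = ["ann", "bob"]

    def interpretation_probability(world):
        """Joint probability of a full interpretation under the network."""
        p = 1.0
        for c in constants:
            b = world["burglary(%s)" % c]
            a = world["alarm(%s)" % c]
            p *= p_burglary if b else 1.0 - p_burglary
            p *= p_alarm_given[b] if a else 1.0 - p_alarm_given[b]
        return p

    world = {"burglary(ann)": True, "alarm(ann)": True,
             "burglary(bob)": False, "alarm(bob)": False}
    print(interpretation_probability(world))  # ~0.077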

  • Probabilistic Proofs (30 min.):
    Muggleton's stochastic logic programs (SLPs) [Mug96, Cus00], and
    Anderson et al.'s and Kersting et al.'s relational Markov models
    [ADWeld02, KRKD03] (see the sketch below).
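
A toy Python sketch of this setting (our own illustration; the state space and probabilities are invented): a Markov model over ground atoms scores a state sequence by the product of its transition probabilities, in the spirit of relational Markov models for web navigation.

    # Toy illustration: a Markov model whose states are ground atoms; a
    # state sequence is scored by the product of its transition probabilities.
    transitions = {
        ("page(home)", "page(search)"): 0.7,
        ("page(home)", "page(help)"): 0.3,
        ("page(search)", "page(home)"): 1.0,
    }

    def sequence_probability(states):
        """Probability of a state sequence: product of transition probs."""
        p = 1.0
        for s, t in zip(states, states[1:]):
            p *= transitions.get((s, t), 0.0)  # unseen transitions get 0
        return p

    print(sequence_probability(["page(home)", "page(search)", "page(home)"]))  # 0.7
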
5) Applications of PILP (10 min.): link mining and bioinformatics applications.