
BLOG 0.3 (14 August 2008)

You can also see a list of changes in the current version (and previous versions).

- BLOG 0.2 (14 December 2007)
- BLOG 0.1.6 (16 March 2007)
- BLOG 0.1.5 (13 January 2006)
- BLOG 0.1.4 (21 December 2005)
- BLOG 0.1.3 (6 November 2005)
- BLOG 0.1.2 (27 September 2005)
- BLOG 0.1.1 (21 September 2005)
- BLOG 0.1 (10 September 2005)

Copyright (c) 2007, 2008, Massachusetts Institute of Technology

Copyright (c) 2005, 2006, Regents of the University of California

All rights reserved.

Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:

- Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
- Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
- Neither the name of the University of California, Berkeley nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

The BLOG Inference Engine also includes several pieces of third-party software: a modified version of the CUP v0.10k parser generator; a modified version of the JLex 1.2.6 lexical analyzer generator; the JAMA 1.0.1 matrix package; and the JUnit 4.5 unit testing framework. These third-party packages have their own open-source licenses.

Compiling the code requires the `make` utility and the ability to run a shell script. Decompressing the ZIP archive above will yield a directory called `blog-0.3`. You can put this directory wherever you like. To compile the source code, `cd` into this directory and type `make`. If `make` finishes with no errors, the inference engine is ready to run.

The `blog-0.3` directory contains a shell script called `runblog`, which invokes `java` with the proper classpath. There is also a subdirectory called `examples` that contains several example BLOG models. To run inference on the urn-and-balls scenario from our paper, give the command:
```
./runblog examples/balls/poisson-prior-noisy.mblog examples/balls/all-same.eblog examples/balls/num-balls.qblog
```

The program will do inference and print out the posterior distribution over the number of balls in the urn, given 10 draws that all appear to be the same color. By default, the program does 10,000 samples of likelihood weighting. You can compare its output to the correct posterior distribution, which is given in a comment at the end of `examples/balls/poisson-prior-noisy.mblog`.
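Likelihood weighting fixes the evidence variables at their observed values, samples the remaining variables from their priors, and weights each sample by the likelihood of the evidence. The Java sketch below illustrates the idea on a toy two-cause network; it is not the engine's implementation, and the model and its probabilities are made up for illustration:

```java
import java.util.Random;

public class LikelihoodWeighting {
    // Toy network: Rain and Sprinkler cause WetGrass.
    // Estimate P(Rain = true | WetGrass = true) by sampling the unobserved
    // causes and weighting each sample by P(evidence | sampled causes).
    static double estimateRainGivenWet(int numSamples, long seed) {
        Random rng = new Random(seed);
        double weightRain = 0.0, weightTotal = 0.0;
        for (int i = 0; i < numSamples; i++) {
            boolean rain = rng.nextDouble() < 0.2;       // prior P(Rain = true)
            boolean sprinkler = rng.nextDouble() < 0.1;  // prior P(Sprinkler = true)
            // Weight = likelihood of the observed evidence, WetGrass = true.
            double w;
            if (rain && sprinkler)  w = 0.99;
            else if (rain)          w = 0.90;
            else if (sprinkler)     w = 0.80;
            else                    w = 0.00;
            weightTotal += w;
            if (rain) weightRain += w;
        }
        return weightRain / weightTotal;
    }

    public static void main(String[] args) {
        System.out.println("P(Rain | WetGrass) ~= " + estimateRainGivenWet(10000, 42L));
    }
}
```

For this toy model the exact posterior is 0.1818 / 0.2458, roughly 0.74, so a run with 10,000 samples should land close to that value.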
To find out how to do more with the BLOG Inference Engine, please see the user manual.

The BLOG Inference Engine can parse any model written in the BLOG language that we introduced in our SRL-04 and IJCAI-05 papers. It includes a full set of built-in types (integers, strings, real numbers, vectors, matrices) and can use arbitrary conditional probability distributions (CPDs) in the form of Java classes that implement a certain interface. Models can include arbitrary first-order formulas.
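In this plug-in scheme, a CPD is simply a Java class the engine can instantiate, query for probabilities, and sample from. The exact interface is documented in the user manual; the sketch below is only a hypothetical version of its shape (the interface and method names here are illustrative, not the engine's actual API), using a geometric distribution as the example:

```java
import java.util.Random;

// Hypothetical interface, for illustration only; see the user manual
// for the interface the engine actually requires.
interface CondProbDistrib {
    double getProb(double[] args, int value); // P(value | args)
    int sampleVal(double[] args, Random rng); // draw a value given args
}

// Geometric distribution over {0, 1, 2, ...}: value k has probability
// (1 - alpha)^k * alpha, with alpha taken from the CPD's arguments.
class Geometric implements CondProbDistrib {
    public double getProb(double[] args, int value) {
        double alpha = args[0];
        return Math.pow(1 - alpha, value) * alpha;
    }

    public int sampleVal(double[] args, Random rng) {
        double alpha = args[0];
        int k = 0;
        while (rng.nextDouble() >= alpha) k++; // count failures before the first success
        return k;
    }
}
```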

As noted in our papers, some BLOG models do not actually define unique probability distributions, because they contain cycles or infinitely receding chains. The current version of the Inference Engine does not make any effort to detect whether a model is well-defined or not. On some ill-defined models, the inference algorithms will end up in infinite loops.

This version of the Inference Engine includes three general-purpose inference algorithms: rejection sampling (as in our IJCAI-05 paper), likelihood weighting (as in our AISTATS-05 paper), and a Metropolis-Hastings algorithm where the proposal distribution just samples values for variables given their parents. These algorithms are very slow, but they can still yield interesting results on toy problems. The Inference Engine also allows modelers to plug in their own Metropolis-Hastings proposal distributions: the proposal distribution can propose arbitrary changes to the current world, and the engine will compute the acceptance probability. We include a hand-crafted split-merge proposal distribution for the urn-and-balls scenario as an example.
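The acceptance probability the engine computes is the standard Metropolis-Hastings ratio: if the current world is x and the proposed world is x', the move is accepted with probability min(1, p(x') q(x | x') / (p(x) q(x' | x))). A minimal sketch of that computation (not the engine's code; it works in log space, which is the usual way to avoid underflow on large worlds):

```java
import java.util.Random;

public class MHAcceptance {
    // Acceptance probability for a proposed move, computed in log space:
    // min(1, exp((log p(x') + log q(x|x')) - (log p(x) + log q(x'|x)))).
    static double acceptProb(double logPNew, double logPOld,
                             double logQBack, double logQFwd) {
        double logRatio = (logPNew + logQBack) - (logPOld + logQFwd);
        return Math.min(1.0, Math.exp(logRatio));
    }

    // Decide whether to accept the proposed world, given a uniform draw.
    static boolean accept(double logPNew, double logPOld,
                          double logQBack, double logQFwd, Random rng) {
        return rng.nextDouble() < acceptProb(logPNew, logPOld, logQBack, logQFwd);
    }
}
```

With a symmetric proposal (q(x | x') = q(x' | x)) the q terms cancel and the ratio reduces to p(x') / p(x), which is why simple proposals are easy to plug in.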

The inference engine also includes two exact inference algorithms that work on BLOG models with known objects (that is, with no number statements). One of these is the variable elimination algorithm; the other is first-order variable elimination with counting formulas (C-FOVE) as described in our AAAI 2008 paper.
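Variable elimination works by repeatedly summing a variable out of the product of the factors that mention it. The toy Java sketch below (illustrative only, with made-up numbers; it is plain variable elimination on a fixed chain, not C-FOVE) eliminates the variables of a three-node chain A -> B -> C to obtain P(C):

```java
public class ChainVE {
    // Compute P(C = true) in the chain A -> B -> C by eliminating A, then B.
    // pA = P(A = true); pBgivenA[a] = P(B = true | A = a);
    // pCgivenB[b] = P(C = true | B = b), where false = 0 and true = 1.
    static double probC(double pA, double[] pBgivenA, double[] pCgivenB) {
        // Eliminate A: P(B = true) = sum_a P(A = a) * P(B = true | A = a).
        double pB = (1 - pA) * pBgivenA[0] + pA * pBgivenA[1];
        // Eliminate B: P(C = true) = sum_b P(B = b) * P(C = true | B = b).
        return (1 - pB) * pCgivenB[0] + pB * pCgivenB[1];
    }

    public static void main(String[] args) {
        double pC = probC(0.3, new double[]{0.2, 0.9}, new double[]{0.1, 0.8});
        System.out.println("P(C = true) = " + pC);
    }
}
```

Each elimination step here produces a smaller factor over the remaining variables; C-FOVE applies the same idea at the first-order level, summing out whole families of random variables at once.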

We plan to include parameter estimation capabilities -- specifically Monte Carlo EM -- in a future version of the engine. However, the current version has no learning code.

The main reason we're releasing the BLOG Inference Engine code is so that other people can use it, evaluate the strengths and weaknesses of the BLOG language, and develop new inference and learning algorithms. It would also be great to have help improving the Inference Engine's interface and building utilities to work with it. If you have feedback, bug reports, ideas for improvement, or new code, please send email to Brian Milch at *<first initial><last name>*@gmail.com.