BLOG 0.3 (14 August 2008)
You can also see a list of changes in the current version (and previous versions).
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Compiling and running the engine requires a Java development kit, the make utility, and the ability to run a shell script.
Decompressing the ZIP archive above will yield a directory called blog-0.3. You can put this directory wherever you like. To compile the source code, cd into this directory and type make. If make finishes with no errors, the inference engine is ready to run.
The top-level directory contains a shell script called runblog, which invokes java with the proper classpath. There is also a subdirectory called examples that contains several example BLOG models. To run inference on the urn-and-balls scenario from our paper, give the command:
./runblog examples/balls/poisson-prior-noisy.mblog examples/balls/all-same.eblog examples/balls/num-balls.qblog

The program will do inference and print out the posterior distribution over the number of balls in the urn, given 10 draws that all appear to be the same color. By default, the program does 10,000 samples of likelihood weighting. You can compare its output to the correct posterior distribution, which is given in a comment at the end of the example file.
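To give a feel for what that run computes, here is a simplified sketch of likelihood weighting on an urn model. The specific parameters (a Poisson(6) prior on the number of balls, uniform ball colors, 20% observation noise) are illustrative assumptions and may not match the values in the actual example files.

```java
import java.util.Random;

// Likelihood weighting for a simplified urn model: sample a world from
// the prior, weight it by the likelihood of the observed draws, and
// histogram the weights by the number of balls. Parameters are made up
// for illustration, not taken from the BLOG example files.
class UrnLW {
    static final Random rng = new Random(42); // fixed seed for repeatability

    // Knuth's method for sampling from a Poisson distribution.
    static int samplePoisson(double lambda) {
        double limit = Math.exp(-lambda), p = 1.0;
        int k = 0;
        do { k++; p *= rng.nextDouble(); } while (p > limit);
        return k - 1;
    }

    // Posterior over the number of balls given numDraws draws that all
    // looked blue, estimated from numSamples weighted samples.
    static double[] posteriorNumBalls(int numDraws, int maxBalls, int numSamples) {
        double[] weightByCount = new double[maxBalls + 1];
        for (int s = 0; s < numSamples; s++) {
            int n = samplePoisson(6.0);            // prior on number of balls
            if (n == 0 || n > maxBalls) continue;  // no balls: likelihood 0
            boolean[] isBlue = new boolean[n];
            for (int i = 0; i < n; i++) isBlue[i] = rng.nextBoolean();
            double w = 1.0;
            for (int d = 0; d < numDraws; d++) {
                int ball = rng.nextInt(n);         // draw a ball uniformly
                w *= isBlue[ball] ? 0.8 : 0.2;     // P(observed blue | true color)
            }
            weightByCount[n] += w;
        }
        double total = 0.0;
        for (double w : weightByCount) total += w;
        for (int i = 0; i <= maxBalls; i++) weightByCount[i] /= total;
        return weightByCount;
    }
}
```

Because every sample contributes a weight rather than being accepted or rejected, likelihood weighting makes progress even when the evidence (10 identically colored draws) would be rare under rejection sampling.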
To find out how to do more with the BLOG Inference Engine, please see the user manual.
The BLOG Inference Engine version 0.3 can parse any model written in the BLOG language that we introduced in our SRL-04 and IJCAI-05 papers. It includes a full set of built-in types (integers, strings, real numbers, vectors, matrices) and can use arbitrary conditional probability distributions (CPDs) in the form of Java classes that implement a certain interface. Models can include arbitrary first-order formulas.
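To illustrate the shape of such a plug-in CPD, here is a sketch. The interface name and method signatures below are stand-ins, not the engine's actual API; consult the source code docs for the real interface.

```java
import java.util.List;
import java.util.Random;

// Hypothetical stand-in for the engine's CPD interface: a CPD must be
// able to report the probability of a value and to sample a value,
// both conditioned on a list of CPD arguments.
interface CondProbDistrib {
    double getProb(List<Object> args, Object value);  // P(value | args)
    Object sampleVal(List<Object> args, Random rng);  // draw a value given args
}

// Example: a Bernoulli CPD whose success probability is its first argument.
class Bernoulli implements CondProbDistrib {
    public double getProb(List<Object> args, Object value) {
        double p = (Double) args.get(0);
        return ((Boolean) value) ? p : 1.0 - p;
    }
    public Object sampleVal(List<Object> args, Random rng) {
        double p = (Double) args.get(0);
        return rng.nextDouble() < p;
    }
}
```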
As noted in our papers, some BLOG models do not actually define unique probability distributions, because they contain cycles or infinitely receding chains. The current version of the Inference Engine does not make any effort to detect whether a model is well-defined or not. On some ill-defined models, the inference algorithms will end up in infinite loops.
This version of the Inference Engine includes three general-purpose inference algorithms: rejection sampling (as in our IJCAI-05 paper), likelihood weighting (as in our AISTATS-05 paper), and a Metropolis-Hastings algorithm where the proposal distribution just samples values for variables given their parents. These algorithms are very slow, but they can still yield interesting results on toy problems. The Inference Engine also allows modelers to plug in their own Metropolis-Hastings proposal distributions: the proposal distribution can propose arbitrary changes to the current world, and the engine will compute the acceptance probability. We include a hand-crafted split-merge proposal distribution for the urn-and-balls scenario as an example.
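The acceptance computation the engine performs for a user-supplied proposal is the standard Metropolis-Hastings ratio. The class below is a minimal sketch of that arithmetic in log space, not the engine's actual API:

```java
// Generic Metropolis-Hastings acceptance probability: the engine scores
// the current and proposed worlds and the proposal densities in both
// directions, and accepts with probability
//   min(1, p(x') q(x | x') / (p(x) q(x' | x))).
// All quantities are passed as (possibly unnormalized) log values.
class MetropolisHastings {
    // logQFwd = log q(proposed | current); logQBack = log q(current | proposed)
    static double acceptProb(double logPCurrent, double logPProposed,
                             double logQFwd, double logQBack) {
        double logRatio = (logPProposed + logQBack) - (logPCurrent + logQFwd);
        return Math.min(1.0, Math.exp(logRatio));
    }
}
```

Working in log space matters in practice: a world's probability is a product over many variables and underflows quickly as a plain double.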
The inference engine also includes two exact inference algorithms that work on BLOG models with known objects (that is, with no number statements). One of these is the variable elimination algorithm; the other is first-order variable elimination with counting formulas (C-FOVE) as described in our AAAI 2008 paper.
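The core step of variable elimination is summing a variable out of the product of the factors that mention it. Here is a minimal worked instance for a two-variable network A → B with made-up probabilities, eliminating A to obtain the marginal of B:

```java
// Tiny instance of variable elimination on a known-object model:
// binary variables A -> B; eliminate A to get the marginal P(B).
class EliminateExample {
    // pA = P(A = true); pBgivenA[a] = P(B = true | A = a), a in {0, 1}.
    static double[] marginalB(double pA, double[] pBgivenA) {
        double[] pAvals = { 1.0 - pA, pA };
        double[] pB = new double[2]; // pB[1] = P(B = true)
        for (int a = 0; a < 2; a++) {
            pB[1] += pAvals[a] * pBgivenA[a];          // sum out A
            pB[0] += pAvals[a] * (1.0 - pBgivenA[a]);
        }
        return pB;
    }
}
```

C-FOVE generalizes this step to whole groups of interchangeable objects at once, using counting formulas instead of enumerating each object's variable separately.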
We plan to include parameter estimation capabilities -- specifically Monte Carlo EM -- in a future version of the engine. However, the current version has no learning code.
The main reason we're releasing the BLOG Inference Engine code is so that other people can use it, evaluate the strengths and weaknesses of the BLOG language, and develop new inference and learning algorithms. It would also be great to have help improving the Inference Engine's interface and building utilities to work with it. If you have feedback, bug reports, ideas for improvement, or new code, please send email to Brian Milch at <first initial><last name>@gmail.com.