README

This is a MATLAB implementation of our hierarchical context model for object recognition on the SUN 09 dataset [project webpage]. This package contains code to load the annotated images and the pre-computed baseline detector outputs, train our context model, and evaluate its performance on SUN 09. We use the baseline detectors by Felzenszwalb et al., and the package contains a part of the LabelMe toolbox to manage the SUN 09 dataset. You can use the pre-computed baseline detector outputs to evaluate your own context model and compare it with our hierarchical context model.

For questions, please contact Myung Jin Choi.

Download and Installation

Go to the project webpage. Download hcontext_code.tar and datasetMar.tar and extract them in the same directory. If you would like to display detection results on images, download sun09.tar as well and extract it in the same directory.
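The extraction step can also be done from within MATLAB using the built-in untar function. A minimal sketch; the target directory below is a placeholder, not a path from the package:

```matlab
% Extract the downloaded archives into a single directory.
% homeDir here is a placeholder; use the directory where you saved the tars.
homeDir = '/path/to/sun09';
untar('hcontext_code.tar', homeDir);
untar('datasetMar.tar', homeDir);
untar('sun09.tar', homeDir);   % optional: only needed to display results on images
```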

Configuration

configure.m sets the paths and file names used to save and load datasets and models. Set homeDir to the directory in which you extracted the tar files.
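The variable name homeDir comes from the README; the path below is only an example of the line you would edit in configure.m:

```matlab
% In configure.m: point homeDir at the directory containing the
% extracted tar files (example path; replace with your own).
homeDir = '/path/to/sun09/';
```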

Training

train.m loads the ground-truth labels, baseline detector outputs, and the list of object categories, and trains the prior model, the measurement model, and the gist predictions of our hierarchical context model. The outputs of this script are pre-computed and stored in the directory models/.

Testing and Evaluating Performance

test.m loads the baseline detector outputs (in dataset/) and the context model (in models/), and evaluates the performance on the test set of SUN 09. You can choose different options (e.g., whether to use gist predictions, and whether to use binary models only or the location models as well). Since the inference algorithm uses a sampling technique, the performance differs slightly between runs. If you would like to get the same results in every run, set useSamples = false to use MAP estimates instead of samples.
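For example, to make runs reproducible as described above (only useSamples is a name confirmed by the README; whether it is set in the workspace or edited inside test.m is an assumption):

```matlab
% Use MAP estimates instead of samples so every run gives the same numbers.
useSamples = false;
test;   % run the evaluation script
```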

If you would like to evaluate your own context model using the pre-computed baseline detector outputs, replace the call to apply_hcontext with your own function. To use our performance evaluation function with respect to the baseline detectors, replace lines 34-84 in scripts/apply_hcontext.m with your own algorithm, which should adjust the score vector new_score. The first elements [1:Ncategories] are Prob(object i is present in the image), and the remaining elements [Ncategories+1:end] are Prob(candidate window k is a correct detection).
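An illustrative sketch of the new_score layout described above; the variable names presenceScores and detectionScores are hypothetical, introduced here only to show the indexing:

```matlab
% new_score is one vector: per-category presence probabilities followed
% by per-window detection probabilities.
presenceScores  = new_score(1:Ncategories);       % Prob(object i present in image)
detectionScores = new_score(Ncategories+1:end);   % Prob(window k is a correct detection)
```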

Miscellaneous

installDatabase.m reads the raw image and annotation files and creates MATLAB structures with a random train/test split. If you use this script to load the dataset, you will also need to run the baseline detectors again, since our stored detector outputs are based on the pre-defined train/test split. The output of the script is stored in dataset/sun09_groundTruth.mat.

detectors2LM.m reads baseline detector outputs and creates MATLAB structures. It also selects the list of object categories to be included in the context model. The outputs are stored in dataset/sun09_detectorOutputs.mat and dataset/sun09_objectCategories.
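The stored structures can be loaded directly for inspection. A sketch, assuming homeDir is set as in configure.m; the file names are those given in this README:

```matlab
% Load the pre-computed ground truth and detector outputs.
load(fullfile(homeDir, 'dataset', 'sun09_groundTruth.mat'));
load(fullfile(homeDir, 'dataset', 'sun09_detectorOutputs.mat'));
```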

out_of_context.m loads the annotated images and detector outputs in the out-of-context dataset and detects out-of-context objects in each image. You can choose to use either the ground-truth labels or the detector outputs only.