Burst Image Deblurring Using Permutation Invariant Convolutional Neural Networks
Miika Aittala and Fredo Durand

This archive contains a supplemental appendix document and result images on datasets from previous work, as well as on our new datasets. Due to file size constraints, this package only includes results on full bursts (without the progressions shown in the paper), and all images are heavily compressed. These images are therefore provided for visual reference only and should not be used to evaluate algorithms. We will release the uncompressed data, results, source code and pre-trained weights for the method separately to facilitate future research.

The datasets we used from Delbracio et al. (2015) and Kohler et al. (2012), as well as result images from methods prior to Wieschollek et al. (2017), can be found at:

http://dev.ipol.im/~mdelbra/fba/
https://github.com/cgtuebingen/learning-blind-motion-deblurring/releases
http://webdav.is.mpg.de/pixel/benchmark4camerashake/src_files/Instructions_authors.html

The new datasets we introduce are 'books', 'car_snow', 'dishwasher', 'street' and 'utensils'. The files in each dataset folder are as follows (see the loading sketch after this list):

- inputs/: the input images of the burst (for our new datasets only)
- OURS.jpg: our result when fed with all images of the burst
- rdn.jpg: result of Wieschollek et al. (2017) on the same input, computed using their software implementation from the address above, or provided by the authors

The 'books' dataset also contains a validation photo, which can be used to verify that the recovered text is accurate, to the extent that it is readable.
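For convenience, the following is a minimal sketch of how one might load a dataset folder laid out as described above. The folder and file names ('inputs/', 'OURS.jpg', 'rdn.jpg') follow the listing; everything else (the load_dataset helper, the use of Pillow and NumPy) is illustrative and not part of the released code.

    # load_dataset.py -- illustrative only; assumes Pillow and NumPy are installed.
    from pathlib import Path

    import numpy as np
    from PIL import Image


    def load_dataset(folder):
        """Load the burst inputs and the two result images from one dataset folder."""
        folder = Path(folder)

        # Burst input frames are provided only for the new datasets.
        inputs_dir = folder / "inputs"
        burst = []
        if inputs_dir.is_dir():
            for path in sorted(inputs_dir.iterdir()):
                burst.append(np.asarray(Image.open(path).convert("RGB")))

        # Our result and the result of Wieschollek et al. (2017).
        ours = np.asarray(Image.open(folder / "OURS.jpg").convert("RGB"))
        rdn = np.asarray(Image.open(folder / "rdn.jpg").convert("RGB"))
        return burst, ours, rdn


    if __name__ == "__main__":
        burst, ours, rdn = load_dataset("books")
        print(f"{len(burst)} burst frames, result shape {ours.shape}")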