Local Laplacian Filters: Edge-aware Image Processing with a Laplacian Pyramid

ACM Transactions on Graphics (Proceedings of SIGGRAPH 2011)
Communications of the ACM (March 2015, Vol. 58, No. 3)
Sylvain Paris, Adobe
Samuel W. Hasinoff, Toyota Technological Institute at Chicago and MIT CSAIL
Jan Kautz, University College London

Abstract

The Laplacian pyramid is ubiquitous for decomposing images into multiple scales and is widely used for image analysis. However, because it is constructed with spatially invariant Gaussian kernels, the Laplacian pyramid is widely believed to be unable to represent edges well and to be ill-suited for edge-aware operations such as edge-preserving smoothing and tone mapping. To tackle these tasks, a wealth of alternative techniques and representations have been proposed, e.g., anisotropic diffusion, neighborhood filtering, and specialized wavelet bases. While these methods have demonstrated successful results, they come at the price of additional complexity, often accompanied by higher computational cost or the need to post-process the generated results. In this paper, we show state-of-the-art edge-aware processing using standard Laplacian pyramids. We characterize edges with a simple threshold on pixel values that allows us to differentiate large-scale edges from small-scale details. Building upon this result, we propose a set of image filters to achieve edge-preserving smoothing, detail enhancement, tone mapping, and inverse tone mapping. The advantage of our approach is its simplicity and flexibility, relying only on simple point-wise nonlinearities and small Gaussian convolutions; no optimization or post-processing is required. As we demonstrate, our method produces consistently high-quality results, without degrading edges or introducing halos.
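
To make the "simple point-wise nonlinearity" mentioned in the abstract concrete, the sketch below shows one possible remapping function: pixel differences within a threshold sigma_r of a reference value g are treated as small-scale detail and reshaped by an exponent alpha, while larger differences are treated as large-scale edges and scaled by beta. The function and parameter names (remap, g, sigma_r, alpha, beta) are illustrative assumptions for this page, not the paper's reference implementation.

import numpy as np

def remap(image, g, sigma_r, alpha=0.5, beta=1.0):
    # Illustrative point-wise nonlinearity (assumed names and form):
    # differences from the reference value g smaller than sigma_r are
    # treated as detail; larger differences are treated as edges.
    d = image - g
    detail = np.abs(d) <= sigma_r
    out = np.empty_like(image, dtype=np.float64)
    # Detail range: nonlinear reshaping of small differences around g.
    out[detail] = g + np.sign(d[detail]) * sigma_r * (np.abs(d[detail]) / sigma_r) ** alpha
    # Edge range: affine treatment of large differences, keeping their sign.
    out[~detail] = g + np.sign(d[~detail]) * (beta * (np.abs(d[~detail]) - sigma_r) + sigma_r)
    return out

Under these assumptions, such a remapping would be applied around each Gaussian pyramid coefficient before the corresponding Laplacian coefficient is computed; alpha below 1 would amplify detail, alpha above 1 would smooth it, and beta would control how strongly edge amplitudes are compressed or expanded.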

Download

Supplemental material

Acknowledgments

We would like to thank Ted Adelson, Bill Freeman and Frédo Durand for inspiring discussions and encouragement, Alan Erickson for the Orion image, and the anonymous reviewers for their constructive comments. This work was supported in part by an NSERC Postdoctoral Fellowship, the Quanta T-Party, NGA NEGI-1582-04-0004, MURI Grant N00014-06-1-0734, and gifts from Microsoft, Google and Adobe. We thank Farbman et al. and Li et al. for their help with comparisons.