Abstract

We present an automated approach for high-quality preview of feature-film rendering during lighting design. Similar to previous work, we use a deep-framebuffer shaded on the GPU to achieve interactive performance. Our first contribution is to generate the deep-framebuffer and corresponding shaders automatically through data-flow analysis and compilation of the original scene. Cache compression reduces automatically-generated deep-framebuffers to a reasonable size for complex production scenes and shaders. We also propose a new structure, the indirect framebuffer, that decouples shading samples from final pixels and allows a deep-framebuffer to handle antialiasing, motion blur, and transparency efficiently. Progressive refinement enables fast feedback at coarser resolution. We demonstrate our approach in real-world production.
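
To make the indirect framebuffer idea concrete, here is a minimal C++ sketch. The names (DeepSample, Contribution, resolve) and data layout are hypothetical illustrations of the decoupling described in the abstract, not the paper's actual implementation: each pixel blends a variable-length span of (shading sample, weight) pairs over radiance re-shaded from the cached deep-framebuffer.

```cpp
// Illustrative sketch only; names and layout are assumptions, not the
// paper's implementation.
#include <cstdint>
#include <vector>

// One deep-framebuffer entry: cached per-sample shader inputs that stay
// fixed while lights change (e.g., position, normal, texture lookups).
struct DeepSample {
    float position[3];
    float normal[3];
    float albedo[3];
};

// The indirect framebuffer decouples shading samples from pixels: several
// pixels can reuse one shading sample, and one pixel can blend many
// (antialiasing, motion blur, transparency).
struct Contribution {
    uint32_t sampleIndex;  // index into the deep-framebuffer
    float    weight;       // filter / coverage / opacity weight
};

struct IndirectFramebuffer {
    std::vector<uint32_t>     pixelOffset;    // numPixels + 1 span offsets
    std::vector<Contribution> contributions;  // shared, variable-length spans
};

// Resolve pass: after re-shading the deep-framebuffer under new lighting,
// composite each pixel as a weighted sum of its shading samples.
void resolve(const IndirectFramebuffer& ifb,
             const std::vector<float>& shadedRadiance,  // 3 floats per sample
             std::vector<float>& pixels)                // 3 floats per pixel
{
    const size_t numPixels = ifb.pixelOffset.size() - 1;
    pixels.assign(numPixels * 3, 0.0f);
    for (size_t p = 0; p < numPixels; ++p) {
        for (uint32_t i = ifb.pixelOffset[p]; i < ifb.pixelOffset[p + 1]; ++i) {
            const Contribution& c = ifb.contributions[i];
            for (int k = 0; k < 3; ++k)
                pixels[p * 3 + k] +=
                    c.weight * shadedRadiance[c.sampleIndex * 3 + k];
        }
    }
}
```

Keeping the weights in a shared offset-indexed list means the (potentially expensive) shading work scales with the number of distinct shading samples rather than with subpixel, temporal, or transparency sample counts; only the cheap weighted sum touches every contribution.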

Files

Paper: PDF (1.9 MB) · ACM DL (Author-Izer)
Talk: Keynote (33 MB) · PDF (17 MB) · HTML · PowerPoint (converted, 27 MB)
Citation: BibTeX

Hindsights

None yet.

Acknowledgments

Numerous people have contributed to this project in its many years of exploration and implementation.

This work started under the advising of Pat Hanrahan, initially in collaboration with Ujval Kapasi. Alex Aiken and John Kodumal proposed dependence analysis by graph reachability and provided the first analysis library we used. Matt Pharr, John Owens, Aaron Lefohn, Eric Chan, and many members of the Stanford and MIT Graphics Labs provided years of essential advice and feedback.

Tippett Studio took a great risk in actively supporting early research. Dan Goldman introduced the work to ILM, where Alan Trombla, Ed Hanway, and Steve Sullivan have overseen it. Many developers have contributed code, including Sebastian Fernandez, Peter Murphy, Simon Premože, and Aaron Luk. Hilmar Koch, Paul Churchill, Tom Martinek, and Charles Rose provided a critical artist's perspective early in the design. Dan Wexler, Larry Gritz, and Reid Gershbein provided useful explanations of commercial lighting technologies.

We thank Michael Bay for graciously sharing unreleased images from his movie, Dan Piponi for generating our hair data, and the anonymous reviewers for their insightful discussion and criticism. Sara Su, Sylvain Paris, Ravi Ramamoorthi, Kevin Egan, Aner Ben-Artzi, and Kayvon Fatahalian provided critical feedback during writing.

This work was supported by NSF CAREER award 0447561, "Transient Signal Processing for Realistic Imagery," an NSF Graduate Research Fellowship, an NVIDIA Graduate Fellowship, a Ford Foundation Graduate Fellowship, a Microsoft Research New Faculty Fellowship, and a Sloan fellowship.

Talk

(The embedded version has a few bugs. Download the Keynote or PDF above for the best rendition.)