A Virtual Exposure Camera Model

Leonard McMillan
University of North Carolina at Chapel Hill

Modern digital imaging and processing enable us to combine the strengths of video and still photography to provide new capabilities. In this talk, I suggest a virtual exposure camera model, in which a spatiotemporal volume of pixel histories is used for dynamic image enhancement. It enables the capture of high dynamic range videos, on-the-imager tone mapping, and adaptive noise reduction. We have applied these methods to enhancing underexposed, low-dynamic-range videos. Each pixel's virtual exposure is set independently based on a dynamic function of its spatial neighborhood and temporal history. Temporal integration enables us to expand the image's dynamic range while simultaneously reducing its noise. Our non-linear exposure variation and denoising filters smoothly transition from temporal to spatial filters in areas of motion. Our virtual exposure camera model also supports temporally coherent, per-frame tone mapping. Our system outputs restored video sequences with significantly reduced noise and dramatically improved detail.
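The core idea can be illustrated with a short sketch: treat each pixel's history as a running "long exposure" that is trusted where the scene is static and handed off to a purely spatial filter where motion is detected. The Python fragment below is a minimal illustration of that per-pixel blending under assumptions of my own (the frame layout, the uniform_filter choice, and the motion_threshold and spatial_size parameters); it is not the system described in the talk, whose transition between temporal and spatial filtering is smooth and non-linear rather than the hard linear clip used here for brevity.

    # Minimal sketch of a per-pixel "virtual exposure": each pixel blends a
    # temporal running average (a long exposure) with a spatially filtered
    # version of the current frame (a short, motion-safe exposure), weighted
    # by a simple per-pixel motion measure. Parameters are illustrative only.
    import numpy as np
    from scipy.ndimage import uniform_filter

    def virtual_exposure(frames, motion_threshold=0.05, spatial_size=5):
        """frames: float array of shape (T, H, W), values assumed in [0, 1]."""
        temporal_avg = frames[0].astype(np.float64)
        output = np.empty_like(frames, dtype=np.float64)
        for t, frame in enumerate(frames):
            frame = frame.astype(np.float64)
            # Spatial filter stands in for a short exposure that tolerates motion.
            spatial = uniform_filter(frame, size=spatial_size)
            # Per-pixel motion measure: deviation from the temporal history.
            motion = np.abs(frame - temporal_avg)
            # Weight: 0 -> trust temporal history, 1 -> fall back to spatial filter.
            w = np.clip(motion / motion_threshold, 0.0, 1.0)
            # Accumulate the temporal average only where the scene appears static.
            temporal_avg = (1.0 - w) * (0.9 * temporal_avg + 0.1 * frame) + w * frame
            output[t] = (1.0 - w) * temporal_avg + w * spatial
        return output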

Biography:

Leonard McMillan is an Associate Professor of Computer Science at the University of North Carolina at Chapel Hill. Leonard is a pioneer in the field of image-based rendering. Image-based rendering attempts to render novel views from collections of reference images rather than from geometric models. Leonard also works in a wide range of related areas including computer vision, multimedia, image processing, and computer architecture.