Data-driven Hallucination for Different Times of Day from a Single Outdoor Photo
Given a single input image (courtesy of Ken Cheng), our approach hallucinates the same scene at a different time of day, e.g., from
blue hour (just after sunset) to night in the example above. The method uses a database of time-lapse videos to infer the transformation
for hallucinating the new time of day.
Publication
YiChang Shih, Sylvain Paris, Frédo Durand, and William T. Freeman,
Data-driven Hallucination for Different Times of Day from a Single Outdoor Photo,
to appear in SIGGRAPH Asia 2013
Abstract
We introduce “time hallucination”: synthesizing a plausible image
at a different time of day from an input image. This challenging
task often requires dramatically altering the color appearance of the
picture. In this paper, we introduce the first data-driven approach
to automatically creating a plausible-looking photo that appears as
though it were taken at a different time of day. The time of day is
specified by a semantic time label, such as “night”.
Our approach relies on a database of time-lapse videos of various
scenes. These videos provide rich information about the variations
in color appearance of a scene throughout the day. Our method
transfers the color appearance from videos with a similar scene as
the input photo. We propose a locally affine model learned from
the video for the transfer, allowing our model to synthesize new
color data while retaining image details. We show that this model
can hallucinate a wide range of different times of day. The model
generates a large sparse linear system, which can be solved by
off-the-shelf solvers. We validate our method by transforming
photos of various outdoor scenes to four times of
interest: daytime, the golden hour, the blue hour, and nighttime.
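For readers who want a concrete picture of the model, below is a minimal sketch in Python/SciPy (not the authors' code) of the two ingredients described above: a per-patch affine color transform fitted on a matched time-lapse frame pair, and a sparse linear system that adopts the transferred colors while keeping the input's details. The frame pair vid_src (a frame at the input's time of day) and vid_tgt (a frame at the target time), the patch size, and the weight lam are illustrative assumptions; retrieving and registering the matching video from the database is assumed to have happened already.

import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def fit_and_apply_local_affine(img, vid_src, vid_tgt, patch=16):
    # Fit a 4x3 affine color transform on each patch of the matched
    # time-lapse frame pair (vid_src -> vid_tgt) by least squares,
    # then apply it to the corresponding patch of the input photo.
    H, W, _ = img.shape
    out = np.empty_like(img)
    for y in range(0, H, patch):
        for x in range(0, W, patch):
            S = vid_src[y:y+patch, x:x+patch].reshape(-1, 3)
            T = vid_tgt[y:y+patch, x:x+patch].reshape(-1, 3)
            S1 = np.hstack([S, np.ones((len(S), 1))])   # homogeneous colors
            A, *_ = np.linalg.lstsq(S1, T, rcond=None)  # solve T ~ S1 @ A
            P = img[y:y+patch, x:x+patch].reshape(-1, 3)
            P1 = np.hstack([P, np.ones((len(P), 1))])
            out[y:y+patch, x:x+patch] = (P1 @ A).reshape(img[y:y+patch, x:x+patch].shape)
    return out

def detail_preserving_solve(img, target, lam=2.0):
    # Solve (Id + lam * L) o = target + lam * L @ img per color channel,
    # where L is the sparse Laplacian of the image grid: the result keeps
    # the transferred colors while preserving the input's gradients.
    H, W, _ = img.shape
    n = H * W
    def lap1d(m):
        return sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(m, m))
    L = sp.kron(sp.eye(H), lap1d(W)) + sp.kron(lap1d(H), sp.eye(W))
    A = (sp.eye(n) + lam * L).tocsc()  # large, sparse, solved with an off-the-shelf solver
    out = np.empty_like(img)
    for c in range(3):
        b = target[:, :, c].ravel() + lam * (L @ img[:, :, c].ravel())
        out[:, :, c] = spsolve(A, b).reshape(H, W)
    return np.clip(out, 0.0, 1.0)

# Usage: all images are float arrays in [0, 1] with the same H x W x 3 shape.
# result = detail_preserving_solve(img, fit_and_apply_local_affine(img, vid_src, vid_tgt))

This is only a toy version of the model: the paper's actual energy and constraints differ, but the sketch shows how locally affine color transfer leads to a large sparse system that standard solvers handle.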
Video
Code & data & pictures
Update (2014/03/01)
Update (2014/10/15)
Supplemental materials
Acknowledgement
We thank Jianxiong Xiao for his help and advice on the scene matching
code and the SIGGRAPH Asia reviewers for their comments, and we acknowledge
funding from NSF grants No. 0964004 and CGV-1111415.