Data-driven Hallucination for Different Times of Day from a Single Outdoor Photo

Teaser: Given a single input image (courtesy of Ken Cheng), our approach hallucinates the same scene at a different time of day, e.g., from blue hour (just after sunset) to night in the example above. Our approach uses a database of time-lapse videos to infer the transformation that hallucinates the new time of day.

Publication

YiChang Shih, Sylvain Paris, Frédo Durand, and William T. Freeman, Data-driven Hallucination for Different Times of Day from a Single Outdoor Photo, SIGGRAPH Asia 2013

Abstract

We introduce “time hallucination”: synthesizing a plausible image at a different time of day from an input image. This challenging task often requires dramatically altering the color appearance of the picture. In this paper, we introduce the first data-driven approach to automatically creating a plausible-looking photo that appears as though it were taken at a different time of day. The time of day is specified by a semantic time label, such as “night”. Our approach relies on a database of time-lapse videos of various scenes. These videos provide rich information about the variations in color appearance of a scene throughout the day. Our method transfers the color appearance from videos with a scene similar to the input photo. We propose a locally affine model learned from the video for the transfer, allowing our model to synthesize new color data while retaining image details. We show that this model can hallucinate a wide range of different times of day. The model generates a large sparse linear system, which can be solved by off-the-shelf solvers. We validate our method by transforming photos of various outdoor scenes to four times of interest: daytime, the golden hour, the blue hour, and nighttime.
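For intuition, here is a minimal MATLAB sketch of the locally affine transfer idea: fit a ridge-regularized affine color map per patch from the matched video frame to the target-time frame, then apply that map to the input. This is only an illustration under assumed parameters (patch size, regularization weight, function name); the actual method couples neighboring transforms through the large sparse linear system mentioned above, rather than fitting each patch independently as done here.

```matlab
% Illustrative sketch of a per-patch locally affine color transfer.
% Assumptions (not the paper's parameters): input, matchFrame, and
% targetFrame are HxWx3 doubles in [0,1], with the video frames already
% warped into alignment with the input photo.
function out = localAffineTransferSketch(input, matchFrame, targetFrame)
    [H, W, ~] = size(input);
    p = 15;          % patch size (assumed)
    lambda = 1e-2;   % ridge regularization weight (assumed)
    out = zeros(H, W, 3);
    for i = 1:p:H
        for j = 1:p:W
            ri = i:min(i+p-1, H);
            rj = j:min(j+p-1, W);
            % Stack patch colors as N x 3 matrices.
            M = reshape(matchFrame(ri, rj, :), [], 3);
            T = reshape(targetFrame(ri, rj, :), [], 3);
            X = [M, ones(size(M, 1), 1)];   % N x 4 affine design matrix
            % Ridge-regularized least squares: find A with T ~ X * A.
            A = (X' * X + lambda * eye(4)) \ (X' * T);   % 4 x 3
            % Apply the fitted affine map to the input patch.
            I = reshape(input(ri, rj, :), [], 3);
            Y = [I, ones(size(I, 1), 1)] * A;
            out(ri, rj, :) = reshape(Y, numel(ri), numel(rj), 3);
        end
    end
    out = min(max(out, 0), 1);   % clamp to the valid color range
end
```

In the paper, a smoothness term ties neighboring affine transforms together, which turns the fit into one global sparse system; the independent per-patch fit above is just the simplest way to see why an affine map can change colors while preserving image detail.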

Video

Code, data & pictures

  • Code (in MATLAB, not yet packaged for release; includes one video and a test case, 327 MB)
  • Data (463 videos, 9.8 GB)
  • The video retrieval code adapted from Xiao et al. is not included, but their original code can be found here; a toy sketch of this retrieval step follows this list.
  • Some results from the paper, including input, output, match, and target frames for comparison (5 MB).
  • Frequently asked questions about the code.
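As a rough picture of the retrieval step above, the sketch below does a nearest-neighbor search over per-video scene descriptors. The computeDescriptor function is a hypothetical stand-in for the feature extraction in Xiao et al.'s scene matching code, not an actual function from it.

```matlab
% Toy sketch of descriptor-based video retrieval (illustration only).
% dbDescriptors: N x D matrix, one scene descriptor per database video.
% computeDescriptor is a hypothetical stand-in for the actual feature
% extraction in Xiao et al.'s code.
function idx = retrieveVideosSketch(inputImage, dbDescriptors, k)
    q = computeDescriptor(inputImage);                 % 1 x D query descriptor
    d = sum(bsxfun(@minus, dbDescriptors, q).^2, 2);   % squared L2 distances
    [~, order] = sort(d, 'ascend');
    idx = order(1:k);                                  % indices of k closest videos
end
```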
Update (2014/03/01)

  • New result computed on the high-resolution teaser input (1920x1080):
  • Input. Image courtesy of Kevin Cheng.
  • Output. The result is somewhat different from the paper's, since dense matching produces slightly different results on low-res and high-res inputs.

Update (2014/10/15)

  • Time-of-day presentation given by Sylvain Paris at Adobe MAX 2014


  • The Starry Night result in the presentation

Supplemental materials

  • We test our method on input images from the MIT-Adobe FiveK dataset on this website.
  • We evaluate our model and compare it with related work in this document.
Acknowledgements

We thank Jianxiong Xiao for his help and advice on the scene matching code and the SIGGRAPH Asia reviewers for their comments. We acknowledge funding from NSF grants No. 0964004 and CGV-1111415.