Improving Super-Resolution Enhancement of Video by Using Optical Flow


Chris Crutchfield
MIT


Abstract

In the literature there has been much research into two methods of attacking the super-resolution problem: optical flow-based techniques, which align low-resolution images as samples of a target high-resolution image, and learning-based techniques, which estimate perceptually plausible high-frequency components of a low-resolution image. Both of these approaches have been naturally extended to image sequences from video, yet to date there have been no investigations into combining them to obviate the problems associated with each method individually. We show how to merge these two disparate approaches to attack two problems associated with super-resolution for video: removing temporal artifacts ("flicker") and improving image quality.
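To make the first ingredient concrete, the sketch below illustrates the kind of optical flow-based alignment the abstract refers to: each low-resolution frame is warped onto a reference frame so that the aligned frames can be treated as additional samples of the underlying high-resolution scene. This is a minimal illustration using OpenCV's Farneback dense optical flow, not the specific flow estimator or fusion step used in this project; the function name align_frames_to_reference and all parameter settings are assumptions for illustration only.

    import cv2
    import numpy as np

    def align_frames_to_reference(frames, ref_index=0):
        """Warp each low-resolution frame onto a reference frame using
        dense optical flow (illustrative sketch, not the report's method)."""
        ref_gray = cv2.cvtColor(frames[ref_index], cv2.COLOR_BGR2GRAY)
        h, w = ref_gray.shape
        # Pixel coordinate grid of the reference frame.
        grid_x, grid_y = np.meshgrid(np.arange(w, dtype=np.float32),
                                     np.arange(h, dtype=np.float32))
        aligned = []
        for i, frame in enumerate(frames):
            if i == ref_index:
                aligned.append(frame.copy())
                continue
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            # Dense flow from the reference to the current frame:
            # flow[y, x] gives the displacement of reference pixel (x, y).
            flow = cv2.calcOpticalFlowFarneback(
                ref_gray, gray, None,
                pyr_scale=0.5, levels=3, winsize=15,
                iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
            # Sample the current frame at the flowed coordinates, pulling it
            # back into the reference frame's coordinate system.
            map_x = grid_x + flow[..., 0]
            map_y = grid_y + flow[..., 1]
            warped = cv2.remap(frame, map_x, map_y, cv2.INTER_LINEAR)
            aligned.append(warped)
        return aligned

Once aligned in this way, the warped frames provide multiple subpixel-shifted observations of the same scene, which is what allows multi-frame super-resolution to recover detail beyond a single low-resolution image; the learning-based component described in the abstract would then supply plausible high-frequency content that alignment alone cannot recover.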
