CG2Real: Improving the Realism of Computer Generated Images using a Large Collection of Photographs


Micah K. Johnson (MIT), Kevin Dale (Harvard University), Shai Avidan (Adobe Systems, Inc.), Hanspeter Pfister (Harvard University), William T. Freeman (MIT CSAIL), Wojciech Matusik (Adobe Systems, Inc.)


An overview of our system. Given a CG image, we query a large collection of photographs to retrieve the most similar images. The user selects the k closest matches, and the real and CG images are cosegmented to identify similar regions. Finally, the real images drive local style transfer algorithms that upgrade the color, tone, and/or texture of the CG image.
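The retrieval step described above can be illustrated with a minimal sketch. Here the global structure of an image is summarized by a coarse grid of mean colors, and the k nearest photographs are found by Euclidean distance between descriptors; this is a crude stand-in for the actual global descriptors used by the system, and the function names are hypothetical.

```python
import numpy as np

def global_descriptor(img, grid=8):
    """Summarize an (H, W, 3) float image as a coarse grid of mean colors.
    A crude stand-in (assumption) for the global structure descriptors
    used for retrieval in the paper."""
    h, w, c = img.shape
    ys = np.linspace(0, h, grid + 1, dtype=int)
    xs = np.linspace(0, w, grid + 1, dtype=int)
    desc = np.empty((grid, grid, c))
    for i in range(grid):
        for j in range(grid):
            desc[i, j] = img[ys[i]:ys[i + 1], xs[j]:xs[j + 1]].mean(axis=(0, 1))
    return desc.ravel()

def retrieve_k_nearest(cg_img, photo_collection, k=5):
    """Return indices of the k photographs whose global descriptors are
    closest (Euclidean distance) to the CG image's descriptor."""
    q = global_descriptor(cg_img)
    dists = [np.linalg.norm(global_descriptor(p) - q) for p in photo_collection]
    return np.argsort(dists)[:k]
```

In the full system the user then picks the best of these k candidates by hand, since a coarse descriptor match does not guarantee semantic similarity.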




Computer graphics (CG) has achieved a high level of realism, producing strikingly vivid images. This realism, however, comes at the cost of long and often expensive manual modeling, and humans can most often still distinguish CG images from real photographs. We present a novel method, simple and accessible to novice users, for making CG images look more realistic. Our system uses a large collection of photographs gathered from online repositories. Given a CG image, we retrieve a small number of real images with similar global structure. We identify corresponding regions between the CG and real images using a novel mean-shift cosegmentation algorithm. The user can then automatically transfer color, tone, and texture from matching regions to the CG image. Our system uses only image processing operations and does not require a 3D model of the scene, making it fast and easy to integrate into digital content creation workflows. Results of a user study show that our improved CG images appear more realistic than the originals.
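To give a concrete flavor of the color-transfer step, the sketch below matches the per-channel mean and standard deviation of a CG image to those of a real exemplar (Reinhard-style statistics matching). This is one common instantiation of color transfer, not necessarily the exact algorithm used in the paper, and it operates globally rather than on cosegmented regions as the full system does.

```python
import numpy as np

def transfer_color(cg, real, eps=1e-8):
    """Shift and scale each channel of `cg` so its mean and standard
    deviation match those of `real`. Both inputs are assumed to be
    float arrays of shape (H, W, 3) with values in [0, 1]. A minimal
    sketch of statistics-based color transfer, not the paper's exact
    region-wise method."""
    cg_mean, cg_std = cg.mean(axis=(0, 1)), cg.std(axis=(0, 1))
    re_mean, re_std = real.mean(axis=(0, 1)), real.std(axis=(0, 1))
    out = (cg - cg_mean) * (re_std / (cg_std + eps)) + re_mean
    return np.clip(out, 0.0, 1.0)
```

In practice such transfers are often performed in a decorrelated color space (e.g. Lab) rather than directly on RGB channels, which tends to produce more natural results.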



Micah K. Johnson, Kevin Dale, Shai Avidan, Hanspeter Pfister, William T. Freeman, Wojciech Matusik. CG2Real: Improving the Realism of Computer Generated Images using a Large Collection of Photographs. IEEE TVCG, 2010.