Rendering Fake Soft Shadows with Smoothies

Eric Chan and Frédo Durand

Computer Science and Artificial Intelligence Laboratory
Massachusetts Institute of Technology

Proceedings of the Eurographics Symposium on Rendering 2003


We present a new method for real-time rendering of shadows in dynamic scenes. Our approach builds on the shadow map algorithm by attaching geometric primitives that we call "smoothies" to the objects' silhouettes. The smoothies give rise to fake shadows that appear qualitatively like soft shadows, without the cost of densely sampling an area light source. The soft shadow edges hide objectionable aliasing artifacts that are noticeable with ordinary shadow maps. Our algorithm computes shadows efficiently in image space and maps well to programmable graphics hardware. We present results from several example scenes rendered in real-time.


paper   pdf  (2.9 MB)
submission video (DivX 5.0.3)   avi  (29.9 MB)
comparison video (DivX 5.0.3)   avi  (1.9 MB)
demo video (DivX 5.0.3)   avi  (13.5 MB)
bibtex   bib
slides   html


The two images on the right were generated using a modified version of NVIDIA's md2shader demo, originally written by Mark Kilgard.

The models shown in the first four columns are from the De Espona 3D Models Enciclopedia.

If you're having trouble playing the videos (e.g., the audio plays fine but the video stops), try upgrading to the latest version of the DivX codec. Our videos are encoded with DivX 5.0.3 and are incompatible with codec versions 5.0.2 and earlier.


Shader Code

You can download the vertex and fragment programs used in the smoothie algorithm. The programs are written in Cg.

Pass 1 stores the eye-linear z values in a floating-point buffer:
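
As a concrete illustration of what this pass computes, here is a minimal Python sketch of eye-linear depth (not the actual Cg program; the far-plane normalization and function names are assumptions for illustration):

```python
import numpy as np

def eye_linear_z(p_world, modelview, far_plane):
    """Transform a world-space point into eye space and return its
    linear depth, normalized by the far plane.  (This normalization
    convention is an assumption; the shader may differ.)"""
    p_eye = modelview @ np.append(p_world, 1.0)
    # OpenGL eye space looks down -z, so depth is -z_eye.
    return -p_eye[2] / far_plane

# With an identity modelview, a point 5 units in front of the camera
# has normalized eye-linear depth 5 / 100.
z = eye_linear_z(np.array([0.0, 0.0, -5.0]), np.eye(4), far_plane=100.0)
```

Storing a linear depth in a floating-point buffer avoids the precision loss of the usual non-linear post-projection z.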

Pass 3 computes alpha values and stores them in the smoothie buffer:
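
A hedged sketch of this computation: the alpha value depends on the fragment's position across the smoothie and on the relative depths of blocker and receiver, so that shadows sharpen as the receiver approaches the blocker. The linear ramp and the exact scaling below are illustrative assumptions, not the paper's precise formula:

```python
def smoothie_alpha(t, z_blocker, z_receiver):
    """t in [0, 1] is the normalized position across the smoothie quad,
    with t = 0 at the silhouette edge.  The ramp is compressed or
    stretched by the blocker/receiver depth ratio (illustrative
    scaling; see the paper for the exact expression)."""
    if z_receiver <= z_blocker:
        return 1.0  # receiver at or in front of the blocker: fully lit
    ratio = (z_receiver - z_blocker) / z_receiver
    return min(max(t / ratio, 0.0), 1.0)
```

A receiver close behind the blocker (small ratio) makes alpha reach 1 quickly, giving a sharp shadow edge, while a distant receiver gets a wide, soft ramp.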

Pass 4 performs the shadow queries and filtering:
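
The structure of this query can be sketched in Python (the actual fragment program is in Cg; the bias value and sample layout here are illustrative):

```python
def shadow_query(z_pixel, z_blocker, z_smoothie, alpha, bias=1e-3):
    """One shadow lookup; returns light attenuation in [0, 1].
    The comparison against the smoothie's depth keeps blocker pixels
    near a silhouette from shadowing themselves with the smoothie's
    alpha value."""
    if z_pixel > z_blocker + bias:
        return 0.0      # behind a blocker: umbra
    if z_pixel > z_smoothie + bias:
        return alpha    # behind a smoothie: penumbra ramp
    return 1.0          # fully lit

def filtered_shadow(samples):
    """Average several neighboring queries, percentage-closer style."""
    results = [shadow_query(*s) for s in samples]
    return sum(results) / len(results)

# Three neighboring samples: one in umbra, one in penumbra, one lit.
s = filtered_shadow([(0.90, 0.5, 0.45, 0.3),
                     (0.48, 0.5, 0.45, 0.3),
                     (0.40, 0.5, 0.45, 0.3)])
```

Averaging the query results across neighboring samples smooths the transition between the umbra, penumbra, and lit regions.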

Related Work

Chris Wyman and Charles Hansen wrote a paper titled Penumbra Maps that describes a similar technique, developed independently of ours. Their paper also appears in the Proceedings of the Eurographics Symposium on Rendering 2003. There are two minor differences between our approaches. First, their penumbra-casting geometric primitives are "cones" and "sheets," a construction described by Eric Haines in a JGT paper; we use different (but essentially equivalent) primitives called smoothies, which are just screen-aligned quadrilaterals. Second, they store only the blockers' depth values in a buffer, whereas we store the depth values of both the blockers and the smoothies. Although our variation requires extra storage and an extra depth comparison, it allows us to handle surfaces that act only as shadow receivers.

Storing the depth values of the smoothies (or, equivalently, of the cones and sheets) also improves robustness near the silhouettes of smooth surfaces. If we omit the comparison against the smoothie's depth value, then pixels that lie on the blocker near a silhouette may suffer self-shadowing artifacts from the smoothie's alpha value, due to the nature of discrete buffer sampling. Including the comparison minimizes these artifacts.

The method presented in this paper is also related to the method described in US patent application no. US 2003/0112237A1, filed by Marco Corbetta on behalf of Crytek GmbH in December 2001. The two methods were developed independently.

Ulf Assarsson and Tomas Akenine-Möller have written a number of papers on interactive soft shadow rendering. Their methods focus on extending shadow volumes instead of shadow maps.

Pradeep Sen, Mike Cammarano, and Pat Hanrahan have developed a cool edge representation of shadows called the shadow silhouette map. Although they focus on hard shadows, our methods are related because we both exploit object-space representations of the blockers' silhouettes. They use silhouette maps, in which each texel stores a point on a silhouette edge and hence serves as a piecewise-linear approximation to the true shadow silhouette; we use the smoothie buffer, in which each texel stores a depth value and a precomputed alpha value based on the distance from the blocker to the receiver.


Errata

Figure 11 on page 9 of the paper shows our Cg shader code. The vertex shader for pass 4 has a bug: the last line should read

    projRect = mul(lightClip, pEye);

instead of

    float4 projCoords = mul(lightClip, pEye);

Thanks to Jean-François St-Amour for pointing this out. The code provided on this web page contains the fix.

Copyright Notice

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or direct commercial advantage and that copies show this notice on the first page or initial screen of a display along with the full citation. Copyrights for components of this work owned by others than the publisher must be honoured and acknowledged.
