The definitions describe the simple setup in
Fig. 1 and many other settings: Shadows and
component events are ubiquitous, showing up whenever moving sensors
are placed inside environments. Here are three additional motivational examples; many others could be presented. In
Fig. 4(a), omni-directional, infinite range
sensors partition the 2D environment into polygonal shadows. The
component events happen exactly when the sensors cross inflections and bitangents (see aspect graphs [29]), which gives rise to the concept of gaps and gap navigation trees, as discussed in [40]. If the sensors have
limited viewing angle [9] or limited range (Fig. 4(b)), alternative models governing the visible and shadow regions are obtained. In Fig. 4(c), fixed infrared beams and surveillance cameras are placed inside a building, creating a set of three fixed shadows. Such a setting is common in offices, museums, and shopping malls. As a last example, Fig. 4(d) shows a simplified mobile sensor network with coverage holes. In this case, the joint sensing range of the sensor nodes is the FOV and the coverage holes are the shadows, which may fluctuate continually even if the sensor nodes remain stationary (consider the fluctuation of cellphone signals).
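As a minimal sketch of the limited-range and coverage-hole models above, the shadows can be viewed as set differences: the FOV is the union of the sensing disks clipped to the environment, and each connected component of the complement is a shadow. The following assumes the shapely Python library, a polygonal environment, and a common sensing radius; the function and parameter names (shadow_components, sensing_range) are illustrative rather than part of any cited system.

```python
# A hedged sketch of the limited-range model in Fig. 4(b) and the
# coverage holes in Fig. 4(d). Assumes the shapely library; all names
# below are illustrative, not from the paper.
from shapely.geometry import Point, Polygon
from shapely.ops import unary_union

def shadow_components(environment, sensors, sensing_range):
    """Return the shadow components (coverage holes) as polygons."""
    # FOV: union of the sensing disks, clipped to the environment.
    fov = unary_union([Point(p).buffer(sensing_range) for p in sensors])
    fov = fov.intersection(environment)
    # Shadows: the part of the environment the FOV fails to cover.
    shadow = environment.difference(fov)
    if shadow.is_empty:
        return []
    # A MultiPolygon stores one geometry per connected component.
    return list(getattr(shadow, "geoms", [shadow]))

# Example: a 10x10 environment partially covered by two sensors.
env = Polygon([(0, 0), (10, 0), (10, 10), (0, 10)])
holes = shadow_components(env, [(2, 2), (8, 8)], sensing_range=4.0)
print(len(holes))  # number of shadow components
```

Note that each shapely buffer is only a polygonal approximation of a disk; this is adequate here because only the connectivity structure of the shadow region matters.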
For some environments, shadows are readily available or can be computed effectively with high accuracy, as is the case for visibility sensors placed in 2D polygonal environments. In other cases, shadows are not easy to extract. For example, estimating coverage holes in a wireless sensor network is hard because it is virtually impossible to know whether a given point is covered unless a probe is dispatched to that point to check. It is also well known that 3D visibility structures are difficult to compute [25,30]. Even though
we do not claim to overcome such inherent difficulties in acquiring
visibility regions and/or shadows, the method presented here applies as
long as a reasonably accurate characterization of the shadows is
available.
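Given shadow snapshots from any such characterization, the component events themselves could be detected by matching components across successive time steps. The sketch below assumes the snapshots are lists of shapely polygons (e.g., as returned by the hypothetical shadow_components above) and uses overlap as a proxy for continuity; the matching rule is one possible illustration, not the method developed in this paper.

```python
# A hedged sketch of component-event detection from two successive
# shadow snapshots (lists of shapely polygons). Overlap is used as a
# proxy for continuity, which assumes the snapshots are closely spaced
# in time.
def component_events(old, new):
    events = []
    for n in new:
        parents = [o for o in old if o.intersects(n)]
        if not parents:
            events.append(("appear", n))     # a new component emerged
        elif len(parents) > 1:
            events.append(("merge", n))      # several components fused
    for o in old:
        children = [n for n in new if n.intersects(o)]
        if not children:
            events.append(("disappear", o))  # a component vanished
        elif len(children) > 1:
            events.append(("split", o))      # one component divided
    return events
```

Overlap-based matching is only a heuristic: if snapshots are spaced too far apart in time, a moving component may leave its old footprint entirely and be misreported as a disappearance followed by an appearance.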