next up previous
Next: Shadow sequences Up: Component Events, Shadows, and Previous: Component events and shadows

Shadows are everywhere

The definitions describe the simple setup in Fig. 1 as well as many other settings: shadows and component events are ubiquitous, arising whenever moving sensors are placed inside an environment. Here are three additional motivating examples; many others could be given. In Fig. 4(a), omni-directional, infinite-range sensors partition the 2D environment into polygonal shadows. The component events occur exactly when the sensors make inflection and bitangent crossings (see aspect graphs [29]), which gives rise to the concept of gaps and gap navigation trees discussed in [40]. If the sensors have a limited viewing angle [9] or limited range (Fig. 4(b)), alternate models governing the visible and shadow regions are obtained. In Fig. 4(c), fixed infrared beams and surveillance cameras are placed inside a building, creating a set of three fixed shadows $ s_1, s_2, s_3$ . Such a setting is common in offices, museums, and shopping malls. As a last example, Fig. 4(d) shows a simplified mobile sensor network with coverage holes. In this case, the joint sensing range of the sensor nodes is the FOV and the coverage holes are the shadows, which fluctuate continuously even when the sensor nodes remain stationary (consider cellphone signals).

Figure 4: a) Two robots (white discs) carrying omni-directional, infinite-range sensors partition the environment into seven shadows. b) When the sensing range is limited, the topology of the shadows changes; only two shadows remain. c) An indoor environment guarded by fixed beam sensors (red line segments) and cameras (yellow cones); there are three connected shadow components. d) A simple mobile sensor network in which the white discs are mobile sensing nodes and the shaded regions are their current sensing ranges; there are two shadows, with $ s_1$ being unbounded.
\begin{figure}\begin{center}
\begin{tabular}{cc}
\epsfig{figure=figures/shadow...
...,width=0.4\textwidth} \\
(c) & (d) \\
\end{tabular}\end{center}
\end{figure}
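As a concrete illustration of the limited-range model in Fig. 4(b), a point lies in a shadow exactly when it is either out of sensing range or occluded by an obstacle. The following is a minimal sketch of this test; the function names (`in_shadow`, `_segments_cross`) and the polygonal-obstacle representation are illustrative assumptions, not constructs from the text.

```python
# Sketch: point-in-shadow test for one omni-directional sensor with
# limited range r in a 2D environment with polygonal obstacles.
import math

def _ccw(a, b, c):
    # Signed area test: positive iff a, b, c are counter-clockwise.
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def _segments_cross(p1, p2, q1, q2):
    # Proper (interior) intersection test for segments p1p2 and q1q2.
    d1, d2 = _ccw(q1, q2, p1), _ccw(q1, q2, p2)
    d3, d4 = _ccw(p1, p2, q1), _ccw(p1, p2, q2)
    return d1 * d2 < 0 and d3 * d4 < 0

def in_shadow(sensor, point, obstacles, r):
    """point is in shadow iff it is out of range or the line of sight
    from sensor crosses an edge of some obstacle polygon."""
    if math.dist(sensor, point) > r:
        return True
    for poly in obstacles:
        n = len(poly)
        for i in range(n):
            if _segments_cross(sensor, point, poly[i], poly[(i + 1) % n]):
                return True
    return False
```

Shrinking r in this model merges or removes visible regions, which is exactly the topology change between Fig. 4(a) and Fig. 4(b).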

For some environments, shadows are readily available or can be computed efficiently and with high accuracy, e.g., for visibility sensors placed in 2D polygonal environments. In other cases, shadows are not easy to extract. For example, estimating coverage holes in a wireless sensor network is hard, since it is virtually impossible to know whether a point $ p$ is covered unless a probe is dispatched to $ p$ to check. It is also well known that the 3D visibility structure is difficult to compute [25,30]. Although we do not claim to overcome such inherent difficulties in acquiring visibility regions and/or shadows, the method presented here applies as long as a reasonably accurate characterization of the shadows is available.
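The point-probing difficulty mentioned above can be made concrete: with only probe access, a coverage-hole estimate is necessarily approximate, e.g., via random sampling. The sketch below assumes disc-shaped sensing ranges and a rectangular region; the names (`is_covered`, `hole_fraction`) and the Monte Carlo approach are illustrative, not the paper's method.

```python
# Sketch: estimating the shadow (uncovered) fraction of a sensor
# network by probing sample points, assuming disc sensing ranges.
import math
import random

def is_covered(p, nodes, r):
    # A probe at p reports covered iff some node senses it.
    return any(math.dist(p, q) <= r for q in nodes)

def hole_fraction(nodes, r, region, samples=10000, seed=0):
    """Monte Carlo estimate of the uncovered fraction of an
    axis-aligned rectangle ((xmin, ymin), (xmax, ymax))."""
    (xmin, ymin), (xmax, ymax) = region
    rng = random.Random(seed)
    holes = 0
    for _ in range(samples):
        p = (rng.uniform(xmin, xmax), rng.uniform(ymin, ymax))
        if not is_covered(p, nodes, r):
            holes += 1
    return holes / samples
```

Such sampling yields only a statistical characterization of the shadows, which is consistent with the requirement that a reasonably accurate, not exact, characterization suffices.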


Jingjin Yu 2011-01-18