Publications 
Mohsen Ghaffari and
Bernhard Haeupler,
Optimal Error Rates for Interactive Coding II: Efficiency and List Decoding, manuscript, 2013.
[Abstract]
[PDF]
[arXiv]
We study interactive coding schemes that allow one to simulate any $n$-round interactive protocol over an adversarial channel that is allowed to corrupt a constant fraction of all transmissions. Our coding schemes are the first to simultaneously (i) tolerate the optimal error rates, (ii) be computationally efficient, and (iii) use an almost linear number of rounds.
The said coding schemes address four different settings of interest: We give a computationally efficient non-adaptive coding scheme that tolerates any error rate below 1/4 and uses $O(n)$ rounds. While this error rate is optimal for non-adaptive protocols, it is known that the error rate can be improved to 2/7 for adaptive coding schemes and to 1/2 if the decoding requirement is relaxed to list decoding. We show furthermore that an error rate of 1/3 is achievable, and in fact optimal, if only one party is required to decode. For these last three settings, we give computationally efficient coding schemes that tolerate these optimal error rates and use $n 2^{O(\log^* n \log{\log^* n})}$ rounds. Prior coding schemes achieving these optimal error rates required $\Theta(n^2)$ rounds.
We obtain these results by first considering channels that are controlled by a computationally bounded adversary. This simpler cryptographic setting provides valuable insights and reveals intimate connections between list decodable interactive coding schemes and the standard unique decoding requirement. We then show that these connections carry over to the information theoretic setting. In fact, all our results are obtained by first taking an inefficient list decodable coding scheme, then boosting its efficiency, and lastly applying a new generic reduction to the particular unique decoding setting in question. We believe these insights and individual building blocks to be of independent interest.

We present time-efficient distributed algorithms for decomposing graphs with large edge or vertex connectivity into multiple spanning or dominating trees, respectively. As their primary applications, these decompositions allow us to achieve information flow with size close to the connectivity by parallelizing it along the trees. More specifically, our distributed decomposition algorithms are as follows:
(I) A decomposition of each undirected graph with vertex-connectivity $k$ into (fractionally) vertex-disjoint weighted dominating trees with total weight $\Omega(\frac{k}{\log n})$, in $\widetilde{O}(D+\sqrt{n})$ rounds.
(II) A decomposition of each undirected graph with edge-connectivity $\lambda$ into (fractionally) edge-disjoint weighted spanning trees with total weight $\lceil\frac{\lambda-1}{2}\rceil(1-\varepsilon)$, in $\widetilde{O}(D+\sqrt{n\lambda})$ rounds.
We also show round complexity lower bounds of $\tilde{\Omega}(D+\sqrt{\frac{n}{k}})$ and $\tilde{\Omega}(D+\sqrt{\frac{n}{\lambda}})$ for the above two decompositions, using techniques of [Das Sarma et al., STOC'11]. Moreover, our vertex-connectivity decomposition extends to centralized algorithms and improves the time complexity of [Censor-Hillel et al., SODA'14] from $O(n^3)$ to near-optimal $\tilde{O}(m)$.
As corollaries, we also get distributed oblivious routing broadcast with $O(1)$-competitive edge-congestion and $O(\log n)$-competitive vertex-congestion. Furthermore, the vertex connectivity decomposition leads to near-time-optimal $O(\log n)$-approximation of vertex connectivity: centralized $\widetilde{O}(m)$ and distributed $\tilde{O}(D+\sqrt{n})$. The former moves toward the 1974 conjecture of Aho, Hopcroft, and Ullman postulating an $O(m)$ centralized exact algorithm, while the latter is the first distributed vertex connectivity approximation.
Mohsen Ghaffari and
Bernhard Haeupler, and Madhu Sudan,
Optimal Error Rates for Interactive Coding I: Adaptivity and Other Settings, ACM Symposium on Theory of Computing (STOC) 2014.
[Abstract]
[PDF]
[arXiv]
We consider the task of interactive communication in the presence of adversarial errors and present tight bounds on the tolerable error rates in a number of different settings.
Most significantly, we explore adaptive interactive communication, where the communicating parties decide who should speak next based on the history of the interaction. Braverman and Rao [STOC'11] show that non-adaptively one can code for any constant error rate below 1/4 but not more. They asked whether this bound could be improved using adaptivity. We answer this open question in the affirmative (with a slightly different collection of resources): Our adaptive coding scheme tolerates any error rate below 2/7, and we show that tolerating a higher error rate is impossible. We also show that in the setting of Franklin et al. [CRYPTO'13], where parties share randomness not known to the adversary, adaptivity increases the tolerable error rate from 1/2 to 2/3. For list-decodable interactive communications, where each party outputs a constant-size list of possible outcomes, the tight tolerable error rate is 1/2.
Our negative results hold even if the communication and computation are unbounded, whereas for our positive results communication and computation are polynomially bounded. Most prior work considered coding schemes with a linear amount of communication, while allowing unbounded computation. We argue that studying tolerable error rates in this relaxed context helps to identify a setting's intrinsic optimal error rate. We set forward a strong working hypothesis which stipulates that for any setting the maximum tolerable error rate is independent of many computational and communication complexity measures. We believe this hypothesis to be a powerful guideline for the design of simple, natural, and efficient coding schemes and for understanding the (im)possibilities of coding for interactive communications.

Edge connectivity and vertex connectivity are two fundamental concepts in graph theory. Although by now there is a good understanding of the structure of graphs based on their edge connectivity, our knowledge in the case of vertex connectivity is much more limited. An essential tool in capturing edge connectivity is the classical result of Tutte and Nash-Williams from 1961, which shows that a $\lambda$-edge-connected graph contains $\lceil(\lambda-1)/2\rceil$ edge-disjoint spanning trees.
We argue that connected dominating set partitions and packings are the natural analogues of edgedisjoint spanning trees in the context of vertex connectivity and we use them to obtain structural results about vertex connectivity in the spirit of those for edge connectivity.
More specifically, connected dominating set (CDS) partitions and packings are counterparts of edge-disjoint spanning trees, focusing on vertex-disjointness rather than edge-disjointness, and their sizes are always upper bounded by the vertex connectivity $k$.
We constructively show that every $k$-vertex-connected graph with $n$ nodes has CDS packings and partitions with sizes, respectively, $\Omega(k/\log n)$ and $\Omega(k/\log^5 n)$, and we prove that the former bound is existentially optimal.
Beautiful results by Karger show that when edges of a $\lambda$-edge-connected graph are independently sampled with probability $p$, the sampled graph has edge connectivity $\tilde{\Omega}(\lambda p)$. Obtaining such a result for vertex sampling remained open. We illustrate the strength of our approach by proving that when vertices of a $k$-vertex-connected graph are independently sampled with probability $p$, the graph induced by the sampled vertices has vertex connectivity $\tilde{\Omega}(kp^2)$. This bound is optimal up to polylog factors and is proven by building an $\tilde{\Omega}(kp^2)$-size CDS packing on the sampled vertices while sampling happens.
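Karger's edge-sampling statement can be checked numerically on a toy instance. The sketch below is ours, not the paper's: the brute-force `edge_connectivity` helper, the choice of the clique $K_{12}$, the sampling probability $p = 0.5$, and the random seed are all illustrative assumptions.

```python
import itertools
import random

def edge_connectivity(n, edges):
    """Exact global min cut by brute force over all bipartitions.
    Fine for small n: there are 2^(n-1) cuts to check."""
    best = len(edges)
    for mask in range(1, 1 << (n - 1)):  # node n-1 is pinned to side 0
        cut = sum(1 for u, v in edges
                  if ((mask >> u) & 1) != ((mask >> v) & 1))
        best = min(best, cut)
    return best

random.seed(7)
n = 12
clique = [(u, v) for u, v in itertools.combinations(range(n), 2)]
lam = edge_connectivity(n, clique)        # K_12 has lambda = n - 1 = 11

p = 0.5
sampled = [e for e in clique if random.random() < p]
lam_p = edge_connectivity(n, sampled)     # concentrates near lambda * p
print(lam, lam_p)
```

Repeating the sampling step many times shows the sampled connectivity clustering around $\lambda p$, matching the qualitative statement above.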
As an additional important application, we show CDS packings to be tightly related to the throughput of routing-based algorithms and use our new toolbox to yield a routing-based broadcast algorithm with optimal throughput $\Omega(k/\log n + 1)$, improving the (previously best-known) trivial throughput of $\Theta(1)$.
Noga Alon, Mohsen Ghaffari,
Bernhard Haeupler, and
Majid Khabbazian,
Broadcast Throughput in Radio Networks: Routing vs. Network Coding, ACM-SIAM Symposium on Discrete Algorithms (SODA) 2014.
[Abstract] [PDF]
The broadcast throughput in a network is defined as the average number of messages that can be transmitted per unit time from a given source to all other nodes when time goes to infinity.
Classical broadcast algorithms treat messages as atomic tokens and route them from the source to the receivers by making intermediate nodes store and forward messages. The more recent network coding approach, in contrast, prompts intermediate nodes to mix and code together messages. It has been shown that certain wired networks have an asymptotic network coding gap, that is, they have asymptotically higher broadcast throughput when using network coding compared to routing. Whether such a gap exists for wireless networks has been an open question of great interest. We approach this question by studying the broadcast throughput of the radio network model which has been a standard mathematical model to study wireless communication.
We show that there is a family of radio networks with a tight $\Theta(\log \log n)$ network coding gap, that is, networks in which the asymptotic throughput achievable via routing messages is a $\Theta(\log \log n)$ factor smaller than that of the optimal network coding algorithm. We also provide new tight upper and lower bounds that show that the asymptotic worst-case broadcast throughput over all networks with $n$ nodes is $\Theta(1/\log n)$ messages per round for both routing and network coding.
Mohsen Ghaffari and Fabian Kuhn,
Distributed Minimum Cut Approximation
International Symposium on DIStributed Computing (DISC) 2013.
Winner of the DISC'13 Best Paper Award.
[Abstract]
[PDF]
[arXiv]
We study the problem of computing approximate minimum edge cuts by distributed algorithms. We present two randomized approximation algorithms that both run in a standard synchronous message passing model where in each round, $O(\log n)$ bits can be transmitted over every edge (a.k.a. the CONGEST model). The first algorithm is based on a simple and new approach for analyzing random edge sampling, which we call the random layering technique. For any weighted graph and any $\epsilon \in (0, 1)$, the algorithm finds a cut of size at most $O(\epsilon^{-1}\lambda)$ in $O(D) + \tilde{O}(n^{1/2 + \epsilon})$ rounds, where $\lambda$, $D$ and $n$ are respectively the minimum-cut size, the diameter, and the number of nodes of the network. The $\tilde{O}$-notation hides polylogarithmic factors in $n$. In addition, using the outline of a centralized algorithm due to Matula [SODA '93], we present a randomized algorithm to compute a cut of size at most $(2+\epsilon)\lambda$ in $\tilde{O}((D+\sqrt{n})/\epsilon^5)$ rounds for any $\epsilon>0$. The time complexities of our algorithms almost match the $\tilde{\Omega}(D + \sqrt{n})$ lower bound of Das Sarma et al. [STOC '11], thus leading to an answer to an open question raised by Elkin [SIGACT News '04] and Das Sarma et al. [STOC '11].
To complement our upper bound results, we also strengthen the $\tilde{\Omega}(D + \sqrt{n})$ lower bound of Das Sarma et al. by extending it to unweighted graphs. We show that the same lower bound also holds for unweighted multigraphs (or equivalently for weighted graphs in which $O(w\log n)$ bits can be transmitted in each round over an edge of weight $w$). For unweighted simple graphs, we show that computing an $\alpha$approximate minimum cut requires time at least $\tilde{\Omega}(D + \sqrt{n}/\alpha^{1/4})$.
Mohsen Ghaffari and
Bernhard Haeupler,
Fast Structuring of Radio Networks for Multi-Message Communications, International Symposium on DIStributed Computing (DISC) 2013.
[Abstract]
[PDF]
We introduce collision-free layerings as a powerful way to structure distributed radio networks. These layerings can replace hard-to-compute BFS trees in many contexts while having an efficient randomized construction. We demonstrate their versatility by using them to provide near-optimal distributed algorithms for several multi-message communication primitives.
Designing efficient communication primitives for radio networks has a rich history that began 25 years ago when Bar-Yehuda et al. introduced fast randomized algorithms for broadcasting and for constructing a BFS tree. Their BFS-tree construction time was $O(D \log^2 n)$ rounds, where $D$ is the network diameter and $n$ is the number of nodes. Since then, the complexity of a broadcast has been resolved to be $T_{BC} = \Theta(D \log \frac{n}{D} + \log^2 n)$ rounds. On the other hand, BFS trees have been used as a crucial building block for many communication primitives, and the BFS-tree construction time remained a bottleneck for these primitives.
We introduce collision-free layerings that can be used in place of BFS trees, and we give a randomized construction of these layerings that runs in nearly broadcast time, that is, whp in $T_{Lay} = O(D \log \frac{n}{D} + \log^{2+\epsilon} n)$ rounds for any constant $\epsilon>0$. We then use these layerings to obtain: (1) A randomized $k$-message broadcast running whp in $O(T_{Lay} + k \log n)$ rounds. (2) A randomized algorithm for gathering $k$ messages running whp in $O(T_{Lay} + k)$ rounds. These algorithms are optimal up to the small difference in the additive polylogarithmic term between $T_{BC}$ and $T_{Lay}$. Moreover, they imply the first optimal $O(n \log n)$-round randomized gossip algorithm.
Mohsen Ghaffari,
Bernhard Haeupler, and
Majid Khabbazian,
Randomized Broadcast in Radio Networks with Collision Detection, ACM Symposium on Principles of Distributed Computing (PODC) 2013.
Invited to the Distributed Computing Journal's Special Issue for PODC'13
[Abstract]
[PDF]
We present a randomized distributed algorithm that in radio networks with collision detection broadcasts a single message in $O(D + \log^6 n)$ rounds, with high probability\footnote{We use the phrase ``high probability'' to indicate a probability at least $1 - \frac{1}{n^c}$, for a constant $c\geq 1$, where $n$ is the network size.}. This time complexity is most interesting because of its optimal additive dependence on the network diameter $D$. It improves over the currently best known $O(D\log\frac{n}{D} + \log^2 n)$ algorithms, due to Czumaj and Rytter [FOCS 2003], and Kowalski and Pelc [PODC 2003]. These algorithms were designed for the model without collision detection and are optimal in that model. However, as explicitly stated by Peleg in his 2007 survey on broadcast in radio networks, it had remained an open question whether the bound can be improved with collision detection.
We also study distributed algorithms for broadcasting $k$ messages from a single source to all nodes. This problem is a natural and important generalization of the single-message broadcast problem, but is in fact considerably more challenging and less understood. We show the following results: If the network topology is known to all nodes, then a $k$-message broadcast can be performed in $O(D + k\log n + \log^2 n)$ rounds, with high probability. If the topology is not known, but collision detection is available, then a $k$-message broadcast can be performed in $O(D + k\log n + \log^6 n)$ rounds, with high probability. The first bound is optimal and the second is optimal modulo the additive $O(\log^6 n)$ term.
Mohsen Ghaffari,
Nancy Lynch, and
Calvin Newport,
The Cost of Radio Network Broadcast for Different Models of Unreliable Links, ACM Symposium on Principles of Distributed Computing (PODC) 2013.
[Abstract]
[PDF] [Press Coverage: MIT News]
We study upper and lower bounds for the global and local broadcast problems in the dual graph model combined with different strength adversaries. The dual graph model is a generalization of the standard graph-based radio network model that includes unreliable links controlled by an adversary. It is motivated by the ubiquity of unreliable links in real wireless networks. Existing results in this model assume an offline adaptive adversary, the strongest type of adversary considered in standard randomized analysis. In this paper, we study the two other standard types of adversaries: online adaptive and oblivious. Our goal is to find a model that captures the unpredictable behavior of real networks while still allowing for efficient broadcast solutions.
For the online adaptive dual graph model, we prove a lower bound that shows the existence of constant-diameter graphs in which both types of broadcast require $\Omega(n/\log n)$ rounds, for network size $n$. This result is within log factors of the (near) tight upper bound for the offline adaptive setting. For the oblivious dual graph model, we describe a global broadcast algorithm that solves the problem in $O(D \log n + \log^2 n)$ rounds for network diameter $D$, but prove a lower bound of $\Omega(\sqrt{n}/\log n)$ rounds for local broadcast in this same setting. Finally, under the assumption of geographic constraints on the network graph, we describe a local broadcast algorithm that requires only $O(\log^2 n \log \Delta)$ rounds in the oblivious model, for maximum degree $\Delta$. In addition to the theoretical interest of these results, we argue that the oblivious model (with geographic constraints) captures enough behavior of real networks to render our efficient algorithms useful for real deployments.
Sebastian Daum, Mohsen Ghaffari,
Seth Gilbert,
Fabian Kuhn,
and
Calvin Newport,
Maximal Independent Sets in Multichannel Radio Networks, ACM Symposium on Principles of Distributed Computing (PODC) 2013.
[Abstract]
[PDF]
We present new upper bounds for fundamental problems in multichannel wireless networks. These bounds address the benefits of dynamic spectrum access, i.e., to what extent multiple communication channels can be used to improve performance. In more detail, we study a multichannel generalization of the standard graph-based wireless model without collision detection, and assume the network topology satisfies polynomially bounded independence.
Our core technical result is an algorithm that constructs a maximal independent set (MIS) in $O(\frac{\log^2 n}{F})+\softO(\log{n})$ rounds, in networks of size $n$ with $F$ channels, where the $\softO$-notation hides polynomial factors in $\log\log n$. Moreover, we use this MIS algorithm as a subroutine to build a constant-degree connected dominating set in the same asymptotic time. Leveraging this structure, we are able to solve global broadcast and leader election within $O(D + \frac{\log^2 n}{F})+\softO(\log{n})$ rounds, where $D$ is the diameter of the graph, and multi-message broadcast of $k$ messages in $O(D + k + \frac{\log^2 n}{F})+\softO(\log n)$ rounds for unrestricted message size (with a slowdown of only a $\log$ factor on the $k$ term under the assumption of restricted message size).
In all five cases above, we prove: (a) our results hold with high probability (i.e., at least $1-1/n$); (b) our results are within $\poly{\log\log{n}}$-factors of the relevant lower bounds for multichannel networks; and (c) our results beat the relevant lower bounds for single-channel networks. These new (near) optimal algorithms significantly expand the number of problems now known to be solvable faster in multichannel versus single-channel wireless networks.
Mohsen Ghaffari and
Bernhard Haeupler,
Near-Optimal Leader Election in Multi-Hop Radio Networks, ACM-SIAM Symposium on Discrete Algorithms (SODA) 2013.
[Abstract]
[PDF]
[arXiv]
We design leader election protocols for multi-hop radio networks that elect a leader in almost the same time $T_{BC}$ that it takes for broadcasting one message (one ID). For the setting without collision detection, our algorithm runs whp. in $O(D \log \frac{n}{D} + \log^3 n) \cdot \min\{\log\log n, \log \frac{n}{D}\}$ rounds on any $n$-node network with diameter $D$. Since $T_{BC} = \Theta(D \log \frac{n}{D} + \log^2 n)$ is a lower bound, our upper bound is optimal up to a factor of at most $\log \log n$ and the extra $\log n$ factor on the additive term. Our algorithm is furthermore the first $O(n)$-time algorithm for this setting.
Our algorithm improves over a 23-year-old simulation approach of Bar-Yehuda, Goldreich and Itai with an $O(T_{BC} \log n)$ running time: In 1987 they designed a fast broadcast protocol, and subsequently in 1989 they showed how it can be used to simulate one round of a single-hop network that has collision detection in $T_{BC}$ time. The prime application of this simulation was to simulate Willard's single-hop leader election protocol, which elects a leader in $O(\log n)$ rounds whp. and $O(\log \log n)$ rounds in expectation. While it was subsequently shown that Willard's bounds are tight, it was unclear whether the simulation approach is optimal. Our results break this barrier and essentially remove the logarithmic slowdown over the broadcast time $T_{BC}$. This is achieved by moving away from the simulation approach.
We also give an $O(D + \log n \log \log n) \cdot \min\{\log \log n, \log \frac{n}{D}\} = O(D + \log n) \cdot O(\log \log n)^2$ leader election algorithm for the setting with collision detection (even with single-bit messages). This is optimal up to $\log \log n$ factors and improves over a deterministic algorithm that requires $\Theta(n)$ rounds independently of $D$.
Our almost-optimal leader election protocols are especially important because countless communication protocols in radio networks use leader election as a crucial first step to solve various, seemingly unrelated, communication primitives such as gathering, multiple unicasts, or multiple broadcasts. Even though leader election seems easier than these tasks, its best-known $O(T_{BC} \log n)$ running time had become a bottleneck, preventing optimal algorithms. Breaking the simulation barrier for leader election in this paper has subsequently led to the development of near-optimal protocols for these communication primitives.
Mohsen Ghaffari,
Seth Gilbert,
Calvin Newport, and Henry Tan,
Optimal Broadcast in Shared Spectrum Radio Networks, International Conference on Principles of Distributed Systems (OPODIS) 2012.
[Abstract]
[PDF]
This paper studies single-hop broadcast in a shared spectrum radio network. The problem requires a source to deliver a message to $n$ receivers, where only a polynomial upper bound on $n$ is known. The model assumes that in each round, each device can participate on 1 out of $C \geq 1$ available communication channels, up to $t < C$ of which might be disrupted, preventing communication. This disruption captures the unpredictable message loss that plagues real shared spectrum networks.
The best existing solution to the problem, which comes from the systems literature, requires $O(\frac{Ct}{C-t}\log n)$ rounds. Our algorithm, by contrast, solves the problem in $O(\frac{C}{C-t}\lceil\frac{t}{n}\rceil \log n)$ rounds when $C \geq \log n$, and in $O(\frac{C}{C-t}\log n \cdot \log\log n)$ rounds when $C$ is smaller. It accomplishes this improvement by deploying a self-regulating relay strategy in which receivers that already know useful information coordinate themselves to efficiently assist the source's broadcast. We conclude by proving these bounds tight for most cases.
Mohsen Ghaffari,
Bernhard Haeupler,
Nancy Lynch, and
Calvin Newport,
Bounds on Contention Management in Radio Networks, International Symposium on DIStributed Computing (DISC) 2012.
[Abstract]
[PDF]
The local broadcast problem assumes that processes in a wireless network are provided messages, one by one, that must be delivered to their neighbors. In this paper, we prove tight bounds for this problem in two wellstudied wireless network models: the classical model, in which links are reliable and collisions consistent, and the more recent dual graph model, which introduces unreliable edges. Our results prove that the Decay strategy, commonly used for local broadcast in the classical setting, is optimal. They also establish a separation between the two models, proving that the dual graph setting is strictly harder than the classical setting, with respect to this primitive.
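As a rough illustration of the Decay strategy mentioned above: in each round every still-active contender transmits and then stays active with probability 1/2, and the receiver decodes whenever exactly one neighbor transmits. The simulation below, including its parameters and seed, is our own sketch rather than the paper's model.

```python
import random

def decay_phase(d, rounds, rng):
    """One Decay phase among d contenders: all start active; each round,
    every active node transmits, then stays active with probability 1/2.
    The receiver decodes in any round with exactly one transmitter."""
    active = d
    for _ in range(rounds):
        if active == 1:
            return True   # sole transmitter this round: no collision
        # each active node independently keeps transmitting with prob 1/2
        active = sum(1 for _ in range(active) if rng.random() < 0.5)
    return False          # all nodes went silent without a solo round

rng = random.Random(42)
d, rounds, trials = 32, 12, 2000
wins = sum(decay_phase(d, rounds, rng) for _ in range(trials))
print(wins / trials)      # empirically a constant success probability
```

Running this shows a per-phase success probability around a constant (roughly 0.7 in our runs), which is the property that makes repeating the phase $O(\log n)$ times sufficient for local broadcast.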
Mohsen Ghaffari,
Nancy Lynch, and
Srikanth Sastry,
Leader Election Using Loneliness Detection, International Symposium on DIStributed Computing (DISC) 2011 + Distributed Computing Journal 2012.
[Abstract]
[PDF]
We consider the problem of leader election (LE) in single-hop radio networks with synchronized time slots for transmitting and receiving messages. We assume that the actual number n of processes is unknown, while the size u of the ID space is known, but possibly much larger. We consider two types of collision detection: strong (SCD), whereby all processes detect collisions, and weak (WCD), whereby only non-transmitting processes detect collisions.
We introduce loneliness detection (LD) as a key subproblem for solving LE in WCD systems. LD informs all processes whether the system contains exactly one process or more than one. We show that LD captures the difference in power between SCD and WCD, by providing an implementation of SCD over WCD and LD. We present two algorithms that solve deterministic and probabilistic LD in WCD systems with time costs of O(log u/n) and O(min(log u/n, log(1/epsilon)/n)), respectively, where epsilon is the error probability. We also provide matching lower bounds.
We present two algorithms that solve deterministic and probabilistic LE in SCD systems with time costs of O(log u) and O(min(log u, log log n+ log(1/epsilon))), respectively, where epsilon is the error probability. We provide matching lower bounds.
Mohsen Ghaffari,
Behnoosh Hariri, and
Shervin Shirmohammadi,
On the Necessity of Using Delaunay Triangulation Substrate in Greedy Routing Based Networks, IEEE Communications Letters 2010.
[Abstract]
[PDF]
Large-scale decentralized communication systems have motivated a new trend towards online routing, where routing decisions are performed based on a limited and localized knowledge of the network. Geometrical greedy routing has been among the simplest and most common online routing schemes. While a geometrical online routing scheme is expected to deliver each packet to the point in the network that is closest to the destination, geometrical greedy routing, when applied over generalized substrate graphs, does not guarantee such delivery, as its forwarding decision might trap packets at a local minimum instead. This letter investigates the necessary and sufficient conditions on greedy-supporting graphs that would guarantee such delivery when used as a greedy routing substrate.
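A minimal sketch of why the substrate matters for greedy geometric routing; the coordinates, both graphs, and the `greedy_route` helper below are hypothetical illustrations, not taken from the letter. On the sparse graph, greedy forwarding stalls at a local minimum, while one extra triangulation-style edge restores delivery.

```python
import math

def greedy_route(pos, adj, src, dst, max_hops=20):
    """Greedy geometric forwarding: hand the packet to the neighbor
    strictly closest to the destination; fail at a local minimum."""
    dist = lambda u, v: math.dist(pos[u], pos[v])
    path, cur = [src], src
    for _ in range(max_hops):
        if cur == dst:
            return path
        nxt = min(adj[cur], key=lambda v: dist(v, dst))
        if dist(nxt, dst) >= dist(cur, dst):
            return None        # no neighbor is closer: delivery fails
        cur = nxt
        path.append(cur)
    return None

pos = {"s": (0, 0), "a": (-1, 2), "b": (-1, -2), "m": (2, 3), "t": (5, 0)}
# sparse substrate: both neighbors of s are farther from t than s itself
sparse = {"s": ["a", "b"], "a": ["s", "m"], "b": ["s"],
          "m": ["a", "t"], "t": ["m"]}
# adding the edge s-m (as a triangulated substrate would) removes the
# local minimum at s
tri = {u: list(vs) for u, vs in sparse.items()}
tri["s"].append("m"); tri["m"].append("s")

print(greedy_route(pos, sparse, "s", "t"))  # None: stuck at s
print(greedy_route(pos, tri, "s", "t"))     # ['s', 'm', 't']
```

The failure on `sparse` is exactly the local-minimum phenomenon the letter discusses, and the success on `tri` reflects the guaranteed-delivery property a suitable (e.g. Delaunay) substrate provides.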
Mohsen Ghaffari,
Behnoosh Hariri, and
Shervin Shirmohammadi,
A Delaunay Triangulation Architecture Supporting Churn and User Mobility in MMVEs, ACM Workshop on Network and Operating System Support for Digital Audio and Video (NOSSDAV) 2009.
[Abstract]
[PDF]
This article proposes a new distributed architecture for update message exchange in massively multiuser virtual environments (MMVEs). MMVE applications require delivery of updates among various locations in the virtual environment. The proposed architecture exploits the location addressing of geometrical routing in order to alleviate the need for IP-specific queries. However, the use of geometrical routing requires a careful choice of overlay to achieve high performance in terms of minimizing the delay. At the same time, the MMVE is dynamic, in the sense that users are constantly moving in the 3D virtual space. As such, our architecture uses a distributed topology control scheme that aims at maintaining the required QoS to best support greedy geometrical routing, despite user mobility or churn. We further demonstrate the functionality and performance of the proposed scheme through both theory and simulations.
Mohsen Ghaffari and
Farid Ashtiani,
A New Routing Algorithm for Sparse Vehicular Ad-Hoc Networks with Moving Destinations, IEEE Wireless Communications and Networking Conference (WCNC) 2009.
[Abstract]
In this paper, we propose the object pursuing based efficient routing algorithm (OPERA), suitable for vehicular ad hoc networks (VANETs), especially in sparse situations. The proposed algorithm is applicable for both moving and fixed destinations. It is based on considering static nodes at each intersection. In this algorithm, we optimize the decision making at intersections with respect to the connectivity and feasibility of the roads. To this end, we consider the average delay of each road as the connectivity metric, and the vehicle availability in the transmission range of the intersection as the feasibility metric. By exploiting the related metrics, we select the next road to forward the packet in order to minimize the overall delay. We also include a pursuing phase in our algorithm, in order to capture moving destinations. The simulation results indicate the superiority of our proposed algorithm compared to previous ones.
