### Algorithms and Complexity Seminar

## Distributed Computation in Dynamic Networks

### Rotem Oshman (MIT)

What is the power of distributed computation in worst-case dynamic
networks? We consider an always-connected but constantly changing
network whose communication links are controlled by a worst-case
adversary. Nodes initially know nothing but their own unique
identifiers, messages are restricted to O(log n) bits, and nodes do
not even know who will receive the messages they broadcast. We show
that it is nevertheless possible to compute any function of the nodes'
initial states in O(n^2) rounds. In contrast to other models of
dynamic computation (e.g., self-stabilizing computation and population
protocols), in our model nodes eventually halt and output the result
of the computation.
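The abstract does not spell out the protocol, but a minimal simulation sketch conveys why O(n^2) rounds suffice: if every node repeatedly broadcasts the smallest not-yet-output token it knows, then in each round the always-connected graph carries that token to at least one new node, so one token is learned by everyone within n-1 rounds, and n such phases disseminate all n tokens. The adversary below (a fresh random connected graph each round) is a hypothetical stand-in for the worst-case adversary, used only for illustration:

```python
import random

def random_connected_graph(n, rng):
    # Stand-in adversary: a random spanning tree plus a few extra edges,
    # redrawn every round. (The real model allows any connected graph.)
    nodes = list(range(n))
    rng.shuffle(nodes)
    edges = set()
    for i in range(1, n):
        u, v = nodes[i], nodes[rng.randrange(i)]
        edges.add((min(u, v), max(u, v)))
    for _ in range(n):
        u, v = rng.sample(range(n), 2)
        edges.add((min(u, v), max(u, v)))
    return edges

def disseminate(n, rng):
    # Node i starts with token i; knows[i] is the set of tokens i has learned,
    # committed[i] the tokens i has already flooded and output.
    knows = [{i} for i in range(n)]
    committed = [set() for _ in range(n)]
    rounds = 0
    for _ in range(n):            # n phases, one token committed per phase
        for _ in range(n - 1):    # n-1 rounds flood the phase's minimum token
            edges = random_connected_graph(n, rng)
            msg = []
            for i in range(n):
                pending = knows[i] - committed[i]
                msg.append(min(pending) if pending else None)
            rounds += 1
            for u, v in edges:    # broadcast: both endpoints hear each other
                if msg[v] is not None:
                    knows[u].add(msg[v])
                if msg[u] is not None:
                    knows[v].add(msg[u])
        for i in range(n):        # everyone now knows the phase minimum
            committed[i].add(min(knows[i] - committed[i]))
    return knows, rounds
```

After `disseminate(n, rng)` every node's `knows` set contains all n tokens within n(n-1) rounds, regardless of how the graph changes from round to round, since connectivity alone guarantees the current minimum token reaches one new node per round.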

If the network enjoys some degree of stability, we can use it to speed
up the computation. We introduce a stability measure called T-interval
connectivity, which asserts that in every window of T consecutive
rounds, there exists some connected subgraph which remains stable
during those T rounds. We show that in T-interval connected networks
it is possible to compute any function in O(n + n^2/T) rounds. We also
describe lower bounds on the problem of disseminating k pieces of
information in a dynamic network.
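As an illustration not taken from the talk, T-interval connectivity of a recorded sequence of per-round edge sets can be checked directly: a connected spanning subgraph stable throughout a window exists exactly when the intersection of the window's edge sets is connected. A small sketch, assuming vertices are labeled 0..n-1 and edges are sorted pairs:

```python
def is_connected(n, edges):
    # Union-find connectivity check over vertices 0..n-1.
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for u, v in edges:
        parent[find(u)] = find(v)
    return len({find(v) for v in range(n)}) == 1

def t_interval_connected(n, graphs, T):
    # graphs: list of edge sets, one per round. The network is T-interval
    # connected iff every window of T consecutive rounds shares a stable
    # connected spanning subgraph, i.e. the windowed edge intersection
    # is connected.
    for start in range(len(graphs) - T + 1):
        stable = set.intersection(*graphs[start:start + T])
        if not is_connected(n, stable):
            return False
    return True
```

For example, a 3-node network that alternates between the paths 0-1-2 and 1-2-0 is 1-interval connected (every round's graph is connected) but not 2-interval connected, since consecutive rounds share only the edge (1,2).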

Joint work with Fabian Kuhn and Nancy Lynch.