Taming Aggressive Replication in the Pangaea Wide-Area File System
Yasushi Saito, Christos Karamanolis, Magnus Karlsson, and Mallik Mahalingam, HP Labs

The goal is to take advantage of the locality of corporate campuses. The authors assume that servers trust each other and that "eventual consistency" is acceptable. This consistency is achieved through "pervasive replication": a file has many replicas, and any replica may be changed without notifying the others; changes are then gradually propagated to the other copies of the file. The local server process is implemented with SFS loopback.

A random graph is created independently for each file/directory, and the edges of the graph are used for update propagation. (How is a random graph created without centralized knowledge?) A few replicas of each file are designated "golden" and serve as the focal points for backup; "bronze" replicas are created on each open. Adding and deleting replicas is a challenge.

"Harbingers" are used to handle propagation delay: they are transferred before the actual updates, letting the upper (server) layer know that an update is coming.

Evaluation: with 4 replicas, Pangaea's performance is comparable to NFS when copying files. Another experiment looks at Usenet-style collaboration (like newsgroups). The graphs show access time decreasing as the number of replicas increases (this is good). The authors also measured bandwidth usage during the experiments.
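
The propagation scheme described above can be sketched in a few lines. This is a toy model, not Pangaea's actual protocol: the class and function names are hypothetical, the graph is kept connected by a ring plus random chords (the real system has its own graph-maintenance protocol), and the update body is applied as soon as the harbinger arrives rather than pulled separately.

```python
import random
from dataclasses import dataclass, field

@dataclass
class Replica:
    name: str
    neighbors: list = field(default_factory=list)   # edges of the per-file random graph
    seen: set = field(default_factory=set)          # update IDs already received
    content: str = ""

    def receive_harbinger(self, update_id, body):
        # A duplicate harbinger stops the flood here, so each update
        # traverses each edge at most once in each direction.
        if update_id in self.seen:
            return
        self.seen.add(update_id)
        # Simplification: in Pangaea the small harbinger travels ahead of the
        # (possibly large) update body; here we apply the body immediately.
        self.content = body
        for n in self.neighbors:
            n.receive_harbinger(update_id, body)

def build_random_graph(replicas, extra_edges=4):
    # Toy stand-in for graph creation: a ring guarantees connectivity,
    # random chords add the redundancy a random graph would provide.
    for a, b in zip(replicas, replicas[1:] + replicas[:1]):
        a.neighbors.append(b)
        b.neighbors.append(a)
    for _ in range(extra_edges):
        a, b = random.sample(replicas, 2)
        if b not in a.neighbors:
            a.neighbors.append(b)
            b.neighbors.append(a)

replicas = [Replica(f"r{i}") for i in range(8)]
build_random_graph(replicas)

# A write at one replica floods to all others along the graph edges.
replicas[0].receive_harbinger("update-1", "new file contents")
print(all(r.content == "new file contents" for r in replicas))  # → True
```

Because the flood is suppressed by the `seen` set, redundant edges cost only one extra small message, which is how the random graph tolerates node and link failures without a central coordinator.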