Efficient Erasure Correcting Codes by Luby, Mitzenmacher, Shokrollahi, and Spielman

Both Reed-Solomon (RS) and Tornado codes are forward error correction codes, but the former encode and decode slowly with guaranteed recovery over an erasure channel, while the latter encode and decode quickly with only a probabilistically small chance of failure. Both are block codes, as distinct from Hamming codes and other codes which encode one bit at a time. RS codes are parameterizable based on the amount of erasure the application needs to tolerate. For example, a codeword of length n is composed of k plaintext symbols and 2s redundant symbols, which allow recovery from up to 2s erased symbols (or correction of s symbol errors). Similarly, a Tornado code of length n is made up of k = (1-p)n data symbols and pn redundant symbols, where p is the maximum rate of erasure. The (1-p)n original symbols can then be recovered, with high probability, from a random set of slightly more than (1-p)n of the received symbols.

RS views the symbols to be encoded as the coefficients of a polynomial over a field F. To supply redundancy, it evaluates that polynomial at 2s additional distinct non-zero points of F. In contrast, Tornado codes build a bipartite graph which defines a mapping from message bits to check bits: each check bit is the XOR of the message bits whose nodes have edges to that check bit. (A toy sketch of each scheme appears at the end of these notes.)

A Digital Fountain Approach to Asynchronous Reliable Multicast by Byers, Luby, and Mitzenmacher

A digital fountain is an idealized data-streaming server: a client that wants a message of k packets can listen to the stream until it has received any k encoded packets, in any order, and then decode the original message. The essential problem with multiple clients asking for the same piece of data at different, overlapping times is the loss of packets over the network and the subsequent retransmission of lost packets by the server. The idea is a hybrid of the data carousel, which repeatedly loops through the data packets and forces clients that lose packets to wait until the next pass through the loop, and encoding the data with an erasure-resilient code like RS. (A small simulation of the fountain-as-carousel idea appears at the end of these notes, after the coding sketches.) Other systems that rely on hierarchies of servers often run into trouble keeping caches up to date and with logging protocol-specific messages (at least at Vividon, where I worked, they did :).

Some comments on the paper itself:

1) The first sentence names "rich content," which to me means primarily streaming video, as the focal application. The "Requirements for an Ideal Protocol" section, however, describes a software download application. To me, software download places accuracy over timeliness, while streaming can accept some loss but timing is very important and noticeable to the user. I think the real application here is streaming, and the "Requirements" section should reflect this.

2) In practice, the stretch factor, which determines the redundancy of packets on the fly, seems like it could weigh down the fountain server. One approach, which we used at Vividon, was to preprocess streams at different quality-of-service levels and then pick a stream based on the client's bandwidth. You could use a similar scheme here, either by running multiple fountains, each at a particular bandwidth (which could be tricky because clients sometimes bounce between QoS levels), or by having a single fountain cache the encoding for each stretch factor after the first requester at that level.
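
To make the RS description above concrete, here is a toy Python sketch of RS-style erasure coding over a small prime field. This is my own illustration rather than anything from the paper: real RS codes work over GF(2^m) with far more efficient encoders and decoders, and the field size, evaluation points, and function names here are all made up. The k message symbols become polynomial coefficients, the codeword is that polynomial evaluated at n distinct non-zero points, and any k surviving evaluations recover the message by Lagrange interpolation.

    P = 257  # small prime field; each message symbol must be in 0..P-1

    def poly_mul_linear(poly, a):
        # multiply the polynomial (coefficient list, low order first) by (x - a) mod P
        out = [0] * (len(poly) + 1)
        for i, c in enumerate(poly):
            out[i + 1] = (out[i + 1] + c) % P
            out[i] = (out[i] - a * c) % P
        return out

    def rs_encode(message, n):
        # treat the k message symbols as coefficients and evaluate at the n
        # distinct non-zero points x = 1..n (the extra n - k points are the redundancy)
        return [(x, sum(c * pow(x, i, P) for i, c in enumerate(message)) % P)
                for x in range(1, n + 1)]

    def rs_decode(points, k):
        # recover the k coefficients from any k surviving (x, y) pairs
        # by Lagrange interpolation over GF(P)
        xs = [x for x, _ in points[:k]]
        ys = [y for _, y in points[:k]]
        coeffs = [0] * k
        for j in range(k):
            basis, denom = [1], 1
            for m in range(k):
                if m != j:
                    basis = poly_mul_linear(basis, xs[m])
                    denom = denom * (xs[j] - xs[m]) % P
            scale = ys[j] * pow(denom, P - 2, P) % P  # modular inverse via Fermat
            for i in range(k):
                coeffs[i] = (coeffs[i] + scale * basis[i]) % P
        return coeffs

    # any 5 of the 9 encoded points recover the 5 message symbols
    msg = [72, 101, 108, 108, 111]
    points = rs_encode(msg, n=9)
    assert rs_decode(points[3:8], k=5) == msg

The quadratic-time interpolation here is exactly the "slow decode" that the Tornado construction is trying to avoid.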
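
In the same spirit, a toy sketch of the Tornado-code idea for a single layer of checks. The bipartite graph here is hand-picked and tiny; the paper's real contribution is the randomized graph construction and degree sequences that make this peeling process succeed with high probability at near-optimal rates. Each check symbol is the XOR of its message-node neighbors, and an erased message symbol is recovered whenever some check is missing exactly one neighbor.

    from functools import reduce

    # check i is connected to these message positions (hand-picked toy graph)
    GRAPH = [[0, 1, 2], [1, 3], [0, 2, 3], [2, 3]]

    def encode(message):
        # each check symbol is the XOR of its message-node neighbors
        return [reduce(lambda a, b: a ^ b, (message[v] for v in nbrs))
                for nbrs in GRAPH]

    def decode(received, checks):
        # received has None at erased positions; peel checks that are
        # missing exactly one neighbor until nothing more can be recovered
        msg = list(received)
        progress = True
        while progress and None in msg:
            progress = False
            for nbrs, c in zip(GRAPH, checks):
                missing = [v for v in nbrs if msg[v] is None]
                if len(missing) == 1:
                    # XOR the check with its known neighbors to fill the hole
                    msg[missing[0]] = reduce(lambda a, b: a ^ b,
                                             (msg[u] for u in nbrs if u != missing[0]),
                                             c)
                    progress = True
        return msg

    message = [0xDE, 0xAD, 0xBE, 0xEF]
    checks = encode(message)
    assert decode([0xDE, None, 0xBE, None], checks) == message

Because decoding is just XORs along edges of a sparse graph, both encoding and decoding run in time proportional to the number of edges, which is where the speed advantage over RS comes from.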
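
Finally, a small simulation of the fountain-as-carousel idea, reusing rs_encode and rs_decode from the first sketch as the (slow) erasure code; a real fountain would use Tornado codes, and the loss rate and packet counts below are arbitrary choices of mine. The server endlessly cycles through encoded packets, and a client simply listens until it holds any k distinct ones.

    import itertools, random

    def fountain(packets):
        # endlessly cycle through the encoded packets, like a data carousel
        return itertools.cycle(enumerate(packets))

    def listen(stream, k, loss_rate=0.3, seed=0):
        # a client joins the stream, loses packets at random, and stops
        # as soon as it holds any k distinct encoded packets
        rng = random.Random(seed)
        have = {}
        heard = 0
        for pkt_id, payload in stream:
            heard += 1
            if rng.random() < loss_rate:
                continue  # erased on the channel
            have[pkt_id] = payload
            if len(have) >= k:
                return list(have.values()), heard

    # with the RS sketch above as the erasure code:
    packets = rs_encode([72, 101, 108, 108, 111], n=12)
    got, heard = listen(fountain(packets), k=5, seed=1)
    assert rs_decode(got, k=5) == [72, 101, 108, 108, 111]

The point of the hybrid is visible here: no retransmission requests, and a lossier client just listens a little longer rather than waiting for the next full pass of the carousel.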