A Case for Redundant Arrays of Inexpensive Disks (RAID) (Patterson, 1988)
Jonathan Ledlie
CS736, February 16, 2000

Patterson et al. provide an elegant solution to the "pending I/O crisis" with their idea for making the sum greater than its parts. In fact, the paper is so admired that the database community's SIGMOD has given it its "Test of Time" award.

Beyond the actual implementations of the levels of RAID, this paper has at least three strong points. First, it accurately recognizes the impending bottleneck that users will soon face: while processors and RAM are getting faster and cheaper, disks, particularly their seek times, are remaining nearly constant. The ramification is that even though we believe our applications could be running hundreds of times faster, they can run only one order of magnitude faster, due to the I/O bottleneck. I found this an interesting juxtaposition to the Impact paper, which argued that RAM and cache performance is a significant bottleneck.

Second, instead of focusing solely on performance, as most operating systems evaluations do, Patterson's idea takes cost into account. Cost, unlike ease of use, is fairly easily quantifiable, yet it is more often than not left out of the equation.

Third, in evaluating their idea, the Berkeley group clearly differentiates between the two main kinds of troublesome I/O loads: large transfers of huge supercomputer files, and many small, random, non-repeated seeks, as in database applications. They leave the reader with a relatively clear feeling as to what to choose for her particular installation.

One problem with RAID is that, even though the cost of the hardware for buying the drives will be less, it is a complicated system, and the cost of your RAID maintainer will be higher: the hardware is cheaper, but the human cost in expertise is probably greater.
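
To make the first point concrete: the one-order-of-magnitude ceiling follows from Amdahl's law, which the paper applies to an application that spends 10% of its time in I/O. Below is a minimal sketch of that arithmetic in Python; the 90/10 split and the speedup factors are the paper's illustrative numbers, not measurements.

    # Amdahl's law: overall speedup when only the CPU-bound fraction of a
    # workload accelerates and the I/O-bound remainder stays fixed.
    def amdahl_speedup(accelerated_fraction: float, factor: float) -> float:
        return 1.0 / ((1.0 - accelerated_fraction) + accelerated_fraction / factor)

    cpu_fraction = 0.9  # 90% CPU time, 10% I/O, per the paper's example
    for cpu_speedup in (10, 100, 1000):
        print(f"CPU {cpu_speedup:>4}x faster -> "
              f"{amdahl_speedup(cpu_fraction, cpu_speedup):.1f}x overall")
    # Prints roughly 5.3x, 9.2x, and 9.9x: no matter how much faster the
    # CPU gets, the unimproved disk caps the application near one order
    # of magnitude, which is exactly the "pending I/O crisis".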