http://people.csail.mit.edu/jaffer/CNS
Computer Natural Science
In the 1980s personal computers were simple. Running time and space could be calculated from examining source code. Tests of programs and performance gave fairly repeatable results. From a computer science perspective, those were the good old days.
In the 1990s PCs gained two-level caches, virtual memory, and demand paging. As Interpreter Speed details, these changes invalidated much of what we knew about running time.
The 2000s have seen the advent of automatic software updates, including an explosion of undesired "updates" from worms and viruses. Despite my ISP's best efforts, worm-infected hosts pummel my firewall with 100 packets per hour: 2400 per day, about 17000 per week. Parasitic infection has become a fact of computer life.
On a computer not isolated, eviscerated, and fossilized for the purpose of testing or measuring program performance, one can have no reasonable expectation of repeatability of results over spans longer than a week. The acquisition and maintenance of test computers has become a task comparable to the raising of laboratory animals. Unless extreme care is taken, computers won't be exactly the same. Infection can invalidate weeks of work.
Current software viruses have human authors. As computer systems grow more numerous and complicated, new strains of virus may arise independently. "1997 Outbreak of Virus Infecting .050 100-pin D-connectors" explores the etiology of a spontaneous hardware virus.
The CVS source-control system performs a process similar to recombination on its genetic material (files). As in multicellular organisms, this can harbor cancer. Malignant Tumor Affecting CVS File reports such an occurrence found in the field.
Erann Gat is pushing the envelope of computer science. Lisp as an Alternative to Java not only measures execution time and memory usage, but measures development time (14 coders) as well. Here is experimental data to counter prejudices against Lisp and Scheme!
After reading Towards Principled Experimental Study of Autonomous Mobile Robots, it seems a wonder that any autonomous robots not characterized in this manner ever worked.
In an adversarial computing environment, testing only those failures we can imagine is insufficient. George J. Carrette popularized Monte-Carlo computer systems testing with his 1990 program CRASHME.
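CRASHME itself stress-tested operating systems by executing strings of random bytes as machine code. The same Monte-Carlo idea applies at the level of an ordinary function: generate pseudo-random inputs and record which ones crash it. Here is a minimal sketch in Python; the harness, the example target `parse_length_prefixed`, and the input generator are all hypothetical illustrations, not part of CRASHME.

```python
import random

def monte_carlo_test(fn, gen_input, trials=1000, seed=42):
    """Feed fn pseudo-random inputs; collect any that raise an exception."""
    rng = random.Random(seed)  # seeded, so a failing run can be reproduced
    failures = []
    for _ in range(trials):
        data = gen_input(rng)
        try:
            fn(data)
        except Exception as exc:
            failures.append((data, exc))
    return failures

# Example target: a naive parser that chokes on an empty buffer.
def parse_length_prefixed(buf):
    n = buf[0]               # IndexError when buf is empty
    return buf[1:1 + n]

def random_bytes(rng):
    return bytes(rng.randrange(256) for _ in range(rng.randrange(8)))

failures = monte_carlo_test(parse_length_prefixed, random_bytes)
print(len(failures), "crashing inputs found")
```

The point, as with CRASHME, is that random inputs reach failure modes no one thought to imagine, and a recorded seed turns each discovered crash into a repeatable test case.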
Are there any computer users who have not witnessed inexplicable behavior from their machines? I see users go through complicated rituals trying to work around their computer's idiosyncrasies. Those with clout receive visits from system administrators who reinstall package after package. As often as not, it brings no improvement; they continue their rituals until the computer is replaced. Replacement brings only partial relief, as it trades one set of idiosyncrasies for another.
The science fiction has come true. Networked personal computers exhibit emergent behavior that makes each day's interaction an adventure. Running programs seem to age and require periodic reboot. Machines abruptly freeze and issue edicts to their users. Commands break for a few weeks; then recover. Documents written a month ago are rejected by their programs.
Computer science has become a natural science replete with common puzzling phenomena to explore. I have spent many a scandisk pondering why inserting a text-box in a MS-Word document on Windows98 induced complete paralysis in a Dell PC half of the time.
Copyright © 2002, 2003, 2004, 2006 Aubrey Jaffer
I am a guest and not a member of the MIT Computer Science and Artificial Intelligence Laboratory.
My actions and comments do not reflect in any way on MIT.
agj @ alum.mit.edu