Interpreter Latency

"Interpreter Speed" explains the necessities that bore the SCM interpreter. But focusing only on the speed of executing repetitive computations would produce a much less useful programming environment than SCM.

What Software Developers Do All Day

Many people who use computers program them, although they may be unaware that their activity could be classified as such. Spreadsheets and scripts, databases and report generation can involve changing the instructions (often stored with the data) given to the application; in a word, programming.

There are computer users who don't program. When something doesn't work correctly, they are stuck. For this group, startup latency is less of an issue.

The rest of us spend significant amounts of time repeating this sequence: edit the program, restart it, and test the change.

For software developers, this cycle time is often the primary impediment to quick debugging and progress. If one has a very fast restart, then it is often quicker to try several variations or polarities than it is to stare at the code and count the fenceposts. I find this especially true when debugging code written by others.

Some would argue that program interactivity obviates the need for frequent restarting. But I would be surprised to find an experienced software developer who hasn't wasted hours of interactive modifications in a session, only to have those modifications crash the program on restart.

Low-Latency Programming

I have used low-latency programming techniques from SCM's inception, although I hadn't thought of them in those terms until reading a posting from Tom Lord about Guile (a descendant of SCM): "Re: Rscheme etc."

For another thing, all this talk about "replacing Guile", especially by an implementation based on compiling, is a little silly. Guile deliberately has all sorts of yummy dynamic behaviors that would be difficult and pointless to try to get right in a compiler (examples: lazily computed top-levels, Guile's low-level macro system, and very-low-latency eval).

What follows describes how SCM achieves its low restart latency. These are not profound ideas. Success is simply a matter of addressing the latency of each link in the chain between Scheme source and execution. Most of these techniques should readily apply to interpreters for other languages.

Low-latency programming principally requires consideration of disk seek times and cache activity. Early, thoughtful organization of source files is a good investment of one's time. Restraining expectations for, and dependence on, compilers and core interpreters also helps prevent the bottlenecks which lead to poor latency.

Precomputing file locations and lazy in-place optimizations were ideas I had not seen elsewhere. The others are natural consequences of looking beyond the CPU when thinking about programs.

Copyright © 2003 Aubrey Jaffer

I am a guest and not a member of the MIT Computer Science and Artificial Intelligence Laboratory.  My actions and comments do not reflect in any way on MIT.