
Re: Diversity - existence, value, and pursuit.



Simon, relax -- I'm sorry, and there's no need for you to go anywhere.
You belong here at least as much as I do.  We clearly got off on the
wrong foot.  You write aggressively, and so do I.  Not a good mix.

Let me try to make my point again, politely. I read your message as
saying there was an inherent contradiction in the two prior posts.
The same person said "Scheme can run as fast as C" and then that "no
bytecode interpreter [or some such term] can be as fast as C".  If
that's contradictory, the implication I draw from it is that you think
all Scheme systems must be implemented as bytecode interpreters.

My point is that that's not true, either in theory or in practice.
There are several honest-to-goodness Scheme *compilers* out in the
world.  Some of them compile to C, and use C as their semi-portable
assembly language.  Some will even generate assembly instructions for
their native architecture.  And many of these yield very high
performance systems.  Indeed, some, like Bigloo and Stalin, are
designed specifically to produce code that competes with hand-written
C code (both in performance and in object code size), and do extremely
well in some domains (command-line scripts, numerical processing, etc.).
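
To make the "C as assembly language" idea concrete, here is a toy
sketch -- emphatically not how Bigloo, Stalin, or any real compiler
works, just something I'm making up for illustration -- of an emitter
that turns a one-argument arithmetic definition into C text:

  ;; Toy compile-to-C sketch.  Handles only numbers, variables, and
  ;; two-argument + and *; real compilers also deal with tagging,
  ;; closures, GC, tail calls, and much more.
  (define (emit-expr e)
    (cond ((number? e) (number->string e))
          ((symbol? e) (symbol->string e))
          ((pair? e)                      ; e.g. (* x x) => "(x * x)"
           (string-append
            "(" (emit-expr (cadr e))
            " " (symbol->string (car e)) " "
            (emit-expr (caddr e)) ")"))
          (else (error "unsupported expression" e))))

  (define (emit-c-definition name arg body)
    (string-append
     "long " (symbol->string name)
     "(long " (symbol->string arg) ") {\n"
     "  return " (emit-expr body) ";\n"
     "}\n"))

  ;; (display (emit-c-definition 'square 'x '(* x x))) prints:
  ;;   long square(long x) {
  ;;     return (x * x);
  ;;   }

Once you have C text like that, an ordinary C compiler does the rest,
which is exactly why C makes a workable semi-portable assembly language.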

Let me expand on the phrase that caused umbrage.  Lots of systems have 
a REPL (read-eval-print loop) interface.  Watch:

  > 3                                 ; me
  3                                   ; Scheme
  > (+ 1 2)                           ; me
  3                                   ; Scheme
  > (define (square x) (* x x))       ; me
  > (square 10)                       ; me again
  100                                 ; Scheme

Now compare this VERY carefully against the following transcript:

  > 3                                 ; me
  3                                   ; Scheme
  > (+ 1 2)                           ; me
  3                                   ; Scheme
  > (define (square x) (* x x))       ; me
  > (square 10)                       ; me again
  100                                 ; Scheme

See the difference?  Obviously, the first was generated by PLT Scheme,
a (sort of) bytecode interpreter, while the second was generated by
Chez Scheme, which compiles each expression to machine code, sticks it 
in memory, and jumps there to execute it.
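
(If it helps to see the shape of that, here's a toy read-compile-run
loop in Scheme.  The "compiler" below is a made-up stand-in -- it just
wraps the host's eval, assuming your Scheme provides
interaction-environment -- but the structure is the point: nothing
about a prompt requires an interpreter underneath.)

  ;; Toy sketch of a compiler-backed REPL.  `native-compile' is a
  ;; placeholder: a real system would generate machine code and hand
  ;; back something executable; here we just close over eval.
  (define (native-compile expr)
    (lambda () (eval expr (interaction-environment))))

  (define (compiler-repl)
    (display "> ")
    (let ((expr (read)))
      (if (eof-object? expr)
          'done
          (begin
            (write ((native-compile expr)))   ; "compile", then run it
            (newline)
            (compiler-repl)))))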

Hence, it's critical to distinguish between an "interactive
programming environment" and an "interpreter".  People used to the C
world are conditioned to thinking an "interpreter" is something that
consumes and evaluates an expression at a time.  But that's not what
an interpreter is!  An interpreter is merely an implementation
strategy.  Now it's true that it's often easier for an interpreter to
accept an expression at a time.  However, one could imagine there are
some interpreters whose interaction looks like

  % interp -f "filename.lang"
  100
  % ...

In contrast, in the Lisp/Smalltalk/... world, honest-to-goodness
compilers have offered interaction capability for a long time -- so
there's no contradiction between performance and interaction.  (This
is equally true in ML.  Indeed, *all* the major ML implementations are 
compilers, most of them extremely innovative ones, and yet they all 
offer an interactive interface.)

Some systems, in fact, have both an interpreter and a compiler, and
examine the size and nature of the actual expression to decide whether
it would be quicker to just run it in the interpreter (no time wasted
on code generation and scaffold erection) or to generate machine code
and execute that by jumping to it.  You'd be hard pressed to tell
whether the interpreter or the compiler was running at a given instant.
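
Here's a toy sketch of that size-based dispatch, again purely for
illustration -- the cutoff, the hand-rolled mini-interpreter, and the
fallback to the host's eval (which in such a system would be the
compiler) are all inventions of mine, not anyone's actual internals:

  ;; Count the nodes in an expression; a crude stand-in for the
  ;; "size and nature" judgment a real system would make.
  (define (expression-size e)
    (if (pair? e)
        (apply + 1 (map expression-size e))
        1))

  ;; A deliberately tiny interpreter: numbers plus n-ary + and *.
  (define (tiny-interpret e)
    (cond ((number? e) e)
          ((and (pair? e) (eq? (car e) '+))
           (apply + (map tiny-interpret (cdr e))))
          ((and (pair? e) (eq? (car e) '*))
           (apply * (map tiny-interpret (cdr e))))
          (else (eval e (interaction-environment)))))

  (define (run-expression e)
    (if (< (expression-size e) 10)       ; small: skip codegen entirely
        (tiny-interpret e)
        (eval e (interaction-environment))))

  ;; (run-expression '(+ 1 2))                    => 3, interpreted
  ;; (run-expression '((lambda (x) (* x x)) 10))  => 100, via eval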

Don't judge a book by its cover (except Java, pg).

Shriram