Re: following up on speed
David B. Tucker wrote:
>On Sat, Jul 20, 2002 at 03:02:52AM -0400, Daniel Weinreb wrote:
>>Michael Vanier wrote:
>>>Hmm, I think I wasn't being clear enough. What I meant is that in C, there
>>>is a tendency to use arrays and structs (however deeply nested) for all
>>>data structures (I've worked on several large projects where this was the
>>>case), whereas a lot of fairly simple data structures that one might find
>>>in any textbook (linked lists, doubly-linked lists, trees, priority queues,
>>>etc. etc.) are avoided because of the memory management headaches that
>>>would arise. In contrast, in languages like (say) ML, it's so trivial to
>>>write this kind of data structure that it's almost automatic.
>>Well, when you contrast languages "like C" against those "like ML",
>>you are not necessarily comparing "malloc/free languages" against
>>"GC languages". Maybe you're comparing "no standard library"
>>languages to "standard library" languages. In modern C++ with STL,
>>all that stuff is pretty easy to do, too.
>Are you saying that the STL obviates the need for GC?
No, certainly not.
I'm saying that the reason people use arrays and structs in C for all
data structures is not because of the GC issue, but simply because they
don't have a standard library of data structures. In C++, for example,
the GC story is pretty much the same as the GC story for C, but there's
this STL library, and people (presumably, nowadays) use the STL rather
than using arrays and structs for basic data structures.
> I'm not an STL
>expert, so perhaps you can enlighten me. Say I need to implement a
>data structure for an abstract syntax tree. What does the STL provide
>that automatically manages the memory of that data structure?
Well, I wasn't saying that it automatically manages the memory. I was
only saying that someone
programming in C++ with STL would not cobble together data structures
out of arrays and
structs, but rather would use convenient STL classes, just as a Java
programmer would use
Java library classes. That's all I was saying.
>>At MIT, we were taught to think about semantics and abstraction much
>>more than we were taught
>>to think about performance. There is a real danger in taking this
>>approach too far.
>How far is "too far", and what is this "real danger" you speak of?
The danger is that the programmer becomes entirely insensitive to
performance, and ends up writing code that's incredibly slow, and
cannot be fixed without a major redesign.
I'll never forget one story I was told (though I admit that I have not
confirmed it all that well.)
At Symbolics, one of our big customers was the MCC research consortium
in Austin TX.
One of the groups there was building a CAD system, designed to run on
many Symbolics Lisp Machines. Eventually the project failed, because
it was too slow.
I was told that the MCC researchers went back to their bosses and said,
well, it was all the
fault of Symbolics, because we built it in Lisp and Lisp is slow and
Lisp Machines are slow.
When we (at Symbolics) investigated (and we were certainly very much at
fault for not having
kept in touch with these guys all along and nipping this whole thing in
the bud!), we discovered
(so I am told) that they were modelling every little object in the CAD
system as a distributed object, in such a way that every time you
needed to follow a link from, say, a transistor to a wire that the
transistor was connected to, what would happen is that you'd do a
"get-the-wire-connected-to-my-collector-node" operation on the object
representing the transistor, and it would do a LAN multicast in order
to find the object representing the wire.
Every dereference in the whole system did a LAN multicast, so naturally
real CAD programs were
spending all their time doing this. If they had done the same
architecture in C on Sun workstations,
exactly the same thing would have happened. But when their research
project failed, they needed to
find an easy target, and since "everyone knew" that Lisp was slow, their
claim that it was our fault
was found to be very plausible.
Apologies if there's anyone on the list from MCC and this story is
inaccurate, but that's how it was
told to me.
So that's the "danger" that I meant, and this was a case of going "too
far": I'm sure the multicast
stuff made everything very flexible and modular (i.e. you can move
objects from one machine to
another and everything "just works"), and presumably they just didn't
understand the performance
implications of what they were doing.
>In my experience, I've seen exactly the opposite -- that students have
>to worry so much about bit-level performance that they can't grasp
>even simple abstractions.