Re: following up on speed
> Date: Sat, 20 Jul 2002 10:59:36 -0400
> From: "David B. Tucker" <email@example.com>
> On Sat, Jul 20, 2002 at 03:02:52AM -0400, Daniel Weinreb wrote:
> > Michael Vanier wrote:
> > >
> > >Hmm, I think I wasn't being clear enough. What I meant is that in C, there
> > >is a tendency to use arrays and structs (however deeply nested) for all
> > >data structures (I've worked on several large projects where this was the
> > >case), whereas a lot of fairly simple data structures that one might find
> > >in any textbook (linked lists, doubly-linked lists, trees, priority queues,
> > >etc. etc.) are avoided because of the memory management headaches that
> > >would arise. In contrast, in languages like (say) ML, it's so trivial to
> > >write this kind of data structure that it's almost automatic.
> > >
> > Well, when you contrast languages "like C" against those "like ML", you
> > are not necessarily comparing "malloc/free languages" against "GC
> > languages". Maybe you're really comparing "no standard library"
> > languages to "standard library" languages. In modern C++ with STL, all
> > that stuff is pretty easy to do, too.
> Are you saying that the STL obviates the need for GC? I'm not an STL
> expert, so perhaps you can enlighten me. Say I need to implement a
> data structure for an abstract syntax tree. What does the STL provide
> that automatically manages the memory of that data structure?
I don't think the STL obviates the need for GC except in comparatively
simple cases. This is the biggest current weakness in C++ as far as I'm
concerned, although it is a fixable weakness. Even Bjarne Stroustrup has
said many times that he believes in the usefulness of optional GC, but the
C++ standard hasn't embraced this yet (probably because of the extreme
machine-dependence of GC, although you could still define a standard
interface to a GC).
> > At MIT, we were taught to think about semantics and abstraction much
> > more than we were taught to think about performance. There is a real
> > danger in taking this approach too far.
> How far is "too far", and what is this "real danger" you speak of?
> In my experience, I've seen exactly the opposite -- that students have
> to worry so much about bit-level performance that they can't grasp
> even simple abstractions.
This has overwhelmingly been my experience too (if you substitute "prefer
to worry" for "have to worry"). We routinely get students in CS 1 at
Caltech (taught using Scheme) who appear to know nothing about programming
other than bit and pointer manipulations, and who have a tough time with
any nontrivial abstractions (e.g. functions as arguments). In addition,
they don't seem to understand asymptotic complexity of algorithms, although
many of them are quite adept at micro-optimization (involving, you guessed
it, bits and pointers). So we focus on the big picture to try to create