
Re: case against XP (was: PG: Hackers and Painters)



[Neel Krishnaswami <neelk@alum.mit.edu>]
> 
> For me, tests work like a REPL with memory.  A lot of my hacking was
> (and is) trying things out at the interactive prompt, one expression
> at a time. Nowadays, whenever I try out an experiment like that, I
> immediately turn it into a test. This way I can have a permanent
> record of my experiments: it's the programmer's equivalent of a
> scientist's lab notebook, or an artist's sketchbook.
> 

Absolutely excellent analogy! This also applies to the common informal
testing methodology of "just fire up the system and try it." Testing
has other benefits, though, besides the obvious. Here are two:

 - the quality and modularity of my code increased a lot. Half the
   pain of retro-fitting tests to code comes in the form of having to
   rewrite code to be more testable. I've always found that in the
   end, I wish it had been written that way in the first place. In my
   experience, "more testable" is pretty much equivalent to "cleaner
   and more pleasant." This is hard to articulate precisely, but since
   unit testing requires being able to isolate the component (by this
   I simply mean chunk of code-stuff, in your current language) you're
   trying to test, it encourages you to make dependencies explicit and
   well-defined, make components less tightly coupled, keep
   functionality together that belongs together, and so on. This, in
   turn, makes you a better programmer (or so I'd like to think).
 - unit tests are great executable documentation! When trying to
   figure out what another programmer (or a past and distant version
   of yourself) had in mind, unit tests give you an excellent picture
   of what they were thinking. In this case, the lack of coverage in a
   particular area is often as useful as the coverage... You can tell
   exactly what the programmer considered the "normal uses" of this
   component, what she considered the boundary conditions, what things
   were initially overlooked and so on. This may not seem like much,
   but whether you're supposed to be writing code that "correctly
   uses" a component, or working on the component itself, this kind of
   insight is absolutely invaluable. And there's much less chance of
   it being out-of-sync/incorrect than there is with separate
   documentation (design docs, comments, old mailing list posts,
   whatever).
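The experiment-to-test workflow Neel describes might look something
like this (a minimal sketch in Python; parse_version is a hypothetical
function standing in for whatever you were poking at the prompt):

```python
import unittest

def parse_version(s):
    """Split a dotted version string into a tuple of ints."""
    return tuple(int(part) for part in s.split("."))

# At the REPL you might have tried:
#     >>> parse_version("1.2.3")
#     (1, 2, 3)
# Capturing that experiment as a test makes it a permanent record:
class TestParseVersion(unittest.TestCase):
    def test_simple_version(self):
        # The exact expression tried at the prompt, preserved.
        self.assertEqual(parse_version("1.2.3"), (1, 2, 3))

    def test_two_components(self):
        self.assertEqual(parse_version("10.0"), (10, 0))

if __name__ == "__main__":
    unittest.main()
```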
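The "more testable means cleaner" point in the first bullet can be
made concrete with a tiny sketch (Python, hypothetical names): a
function that reaches for a hidden dependency can't be isolated, while
passing the dependency in makes it both testable and less coupled.

```python
import datetime

# Before: the function grabs the clock itself. A test can't pin the
# output, because the dependency on "now" is implicit.
def report_untestable():
    return "generated at " + datetime.datetime.now().isoformat()

# After: the dependency is explicit, so a test can supply a fixed
# clock and the component is isolated from the real environment.
def report(now):
    return "generated at " + now.isoformat()

# The component can now be tested in complete isolation:
fixed = datetime.datetime(2004, 5, 1, 12, 0, 0)
assert report(fixed) == "generated at 2004-05-01T12:00:00"
```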
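And the executable-documentation point is easy to see in practice:
reading a test file like this hypothetical one (Python, invented
names) tells you at a glance what the author considered the normal
uses and the boundary conditions, and what they never thought about.

```python
import unittest

def chunk(items, size):
    """Split a list into consecutive sublists of at most `size` items."""
    return [items[i:i + size] for i in range(0, len(items), size)]

class TestChunk(unittest.TestCase):
    # The "normal use" the author had in mind:
    def test_even_split(self):
        self.assertEqual(chunk([1, 2, 3, 4], 2), [[1, 2], [3, 4]])

    # The boundary conditions they considered...
    def test_ragged_tail(self):
        self.assertEqual(chunk([1, 2, 3], 2), [[1, 2], [3]])

    def test_empty_input(self):
        self.assertEqual(chunk([], 3), [])

    # ...and the gap is informative too: there is no test for
    # size == 0, so a caller probably shouldn't rely on that case.

if __name__ == "__main__":
    unittest.main()
```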

As to whether passing unit tests should be the sole criterion for
declaring your software system complete and ready for production, I
certainly wouldn't think so... There are still functional/acceptance
tests and performance/stress tests (automated and reproducible on
demand, if possible), the actual customer requirements (or stories, or
whatever the XP jargon is), and finally and most importantly, actual
customer signoff, which, if you're really working as closely as you'd
like to be with your customer (client, users, whatever), should happen
incrementally and often. Unit tests are fantastic, but they're not the
silver bullet...

And just to state for the record, I am absolutely not a methodology
expert in any sense, and I'm not an XP evangelist or anything of the
sort... I believe the most important thing for any group of programmers
is simply to think on a regular basis about how they write software
and to ask themselves a few questions: Are we happy, and mostly
enjoying our work? Are we writing the best software we could be
writing? Is there anything we want to try, even temporarily? What
problems have we been having, individually and as a group? What is
really most important to us? I don't see any need for a silver-bullet
methodology beyond that. If the shoe fits, wear it until you find one
that fits better...

Matt

-- 
Matt Hellige                  matt@immute.net
http://matt.immute.net