
Re: case against XP (was: PG: Hackers and Painters)




> 
>    This is what test-driven development looks like from the outside -- as 
>    if people wrote unit tests against their complete
>    understanding of the application and hence, potentially left things 
>    uncovered.  Test-driven development doesn't allow
>    a feature to be added to the application code unless it is put in to 
>    make a test work.  There is effectively _no_ code
>    that doesn't correspond to some unit test or other, so it's not easy to 
>    find holes in the test suite.  I know from
>    personal experience that refactoring is vastly easier and more reliable 
>    when the code has been developed using TDD.
> 
> Jerry -- I'm confused.  How do you write unit tests, say, for the
> Travelling Salesman Problem? :)  And even if you did write a couple of
> test instances, how does it help you write the code to solve the
> problem?  (I've seen plenty of such code that handled test suites
> fine, but crashed horribly or performed poorly when faced with real
> production problems.)
> 
> I just don't get the point about how one can write code driven
> *solely* by tests.  I think that would be *impossible*, not only
> because of bugs that the programmer has not thought of (as the
> article says) but also because most intelligent programs in my
> experience are either heavily data- or UI-driven, and it is simply
> impossible to get good coverage of problem instances.  Sure, one can
> use carefully chosen edge cases to write tests, but ..
> 


It's certainly the case that some kinds of programs are more difficult
to test than others.  However, I find the assertion that it's impossible
to get good coverage a little strong.  People do use carefully chosen
edge cases.  In fact, one of the interesting aspects of the TDD approach
is the deliberate, ongoing attempt to break the program.  Every test
iteration starts with the thought: "How can I break this program now?"
If I can come up with a way, I write a test and demonstrate that the
program breaks.  Then I fix it and try again.  Eventually you run out
of ways to break the program.  This runs entirely counter to the way
I believe most people think about their programs.  Most people tend to
think about the "happy path" nearly all the time and only later (if then)
wonder about how things might go wrong.  (Note that I'm not talking
about checking error return codes, etc. -- I'm speaking more at the
functional level).

I think that this deliberate focus on how to break the code
is one of the most important aspects of TDD.
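
To make that concrete, here's roughly what a couple of those "try to
break it" tests look like, using a toy tour-length function as the
victim.  This is just a sketch in Java/JUnit -- the class and method
names are invented for the example, not taken from our code:

    import junit.framework.TestCase;

    public class TourLengthTest extends TestCase {

        // Toy code under test: length of a closed tour over a distance matrix.
        static double tourLength(double[][] dist, int[] tour) {
            double total = 0.0;
            for (int i = 0; i < tour.length; i++) {
                int from = tour[i];
                int to = tour[(i + 1) % tour.length];   // wrap back to the start
                total += dist[from][to];
            }
            return total;
        }

        // "How can I break this now?"  A single-city tour shouldn't add anything.
        public void testSingleCityTourHasZeroLength() {
            double[][] dist = { { 0.0 } };
            assertEquals(0.0, tourLength(dist, new int[] { 0 }), 1e-9);
        }

        // An empty tour is another obvious attempt to break it.
        public void testEmptyTourHasZeroLength() {
            double[][] dist = { { 0.0 } };
            assertEquals(0.0, tourLength(dist, new int[0]), 1e-9);
        }

        // And one happy-path case: a 3-city triangle with known length.
        public void testTriangleTourLength() {
            double[][] dist = { { 0, 3, 4 },
                                { 3, 0, 5 },
                                { 4, 5, 0 } };
            assertEquals(12.0, tourLength(dist, new int[] { 0, 1, 2 }), 1e-9);
        }
    }

Nothing here proves a TSP solver is optimal, of course, but it does pin
down the behavior you can state precisely, and each new "how do I break
it" idea becomes another test.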


> Care to explain?  (Please note that I am not saying tests are
> meaningless or that TDD has no applicability -- all I'm saying is that
> I find the dogma re. TDD in XP puzzling.)  Especially because, in my
> experience, programmers will look where the light is... emphasize
> tests a bit too much and you'll get a lot of your hours/$$ spent on
> verifying that your computer can indeed add and subtract numbers.
> Stated differently, what percentage of the tests you wrote in TDD/XP
> were truly meaningful, or were you somehow smart enough during the
> initial stages of a project to write truly useful tests right from the
> get-go? :)
> 


Okay, I'll try to elaborate.  First of all, when doing TDD, writing tests
has a much different feel than the more common practice of trying to
test pre-written code.  Writing the tests is a big part of the design
process.  I think this is the point missed by most people who haven't
tried TDD.  The article I linked in my last mail gives a good description
of the process.  Writing the tests _is_ design.
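
A tiny illustration of what I mean (the class, its constructor, and the
method names below are all invented for the example -- none of this is
from a real project): the test is written before RouteCache exists, and
writing it is where the API decisions get made.  The compile failure is
the first "red".

    import junit.framework.TestCase;

    public class RouteCacheTest extends TestCase {

        // RouteCache doesn't exist yet when this is written.  Deciding
        // what the constructor takes and what the methods are called
        // *is* the design work.
        public void testCacheReturnsWhatWasStored() {
            RouteCache cache = new RouteCache(10);        // capacity chosen here
            cache.put("NYC->BOS", 215.0);                 // key/value shape chosen here
            assertEquals(215.0, cache.get("NYC->BOS"), 1e-9);
        }

        public void testMissingRouteIsReportedAsAbsent() {
            RouteCache cache = new RouteCache(10);
            assertFalse(cache.contains("NYC->LAX"));      // query method chosen here
        }
    }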

A traditional design process is broken into phases:  Requirements,
Functional Spec, Design, Implementation (lather, rinse, repeat).  With
TDD, these are somewhat merged.  You're forced to think about the
requirements and functional specification up front to a degree in order
to write the tests.  You can't write a test unless you know what the
code is supposed to do.  However, the design is grounded by
implementation as you move along.

All the tests are useful since the test suite constitutes the
specification for the program's behavior.  Each piece of functionality
is written to satisfy a test.

It's important to understand how central refactoring is to this
methodology, however.  The basic approach is:

Write a test that fails,
Write code to make it pass,
Refactor to remove redundancy,
Repeat, until you're unable to construct a failing test.
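
For anyone who hasn't watched that loop in action, here's one iteration
frozen just after the "make it pass" step (a toy example, not from our
codebase):

    import junit.framework.TestCase;

    public class PriceTest extends TestCase {

        // Written first; it failed because Price didn't exist yet.
        public void testEmptyOrderCostsNothing() {
            assertEquals(0.0, Price.total(new double[0]), 1e-9);
        }

        // Written next; this is what forced total() to actually add things up.
        public void testTwoItemsAreSummed() {
            assertEquals(8.0, Price.total(new double[] { 5.0, 3.0 }), 1e-9);
        }
    }

    class Price {
        // The simplest code that makes both tests pass.  The refactoring
        // step kicks in later, once further tests introduce duplication
        // (tax, discounts, ...) worth removing.
        static double total(double[] items) {
            double sum = 0.0;
            for (int i = 0; i < items.length; i++) {
                sum += items[i];
            }
            return sum;
        }
    }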

I find this approach has the following effects:

1) I'm much more aware of interactions between parts of the API (because
I'm using it, rather than just thinking about it)

2) My code is more testable (no surprise here) since I'm writing it
in response to tests.

3) The code is better factored.  As I said, refactoring is central to
this process, but it's also enabled by it.  A comprehensive test suite
allows radical refactoring with confidence that you haven't broken
anything.

3b) When requirements change late in the game, it's far easier to update
the code since you know right away if your changes are correct.

4) I go down fewer blind alleys and have fewer surprises late in the game.

5) It's easier to do collaborative development since everyone knows if
a new change has broken anything (just run the tests).
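
("Just run the tests" really is that mechanical.  With JUnit it's
typically one aggregate suite that any developer or a nightly job can
kick off; the class names below are the made-up ones from the sketches
above.)

    import junit.framework.Test;
    import junit.framework.TestSuite;

    public class AllTests {
        // One suite that pulls in every unit test; run it before you
        // check in, and nightly from cron.
        public static Test suite() {
            TestSuite suite = new TestSuite("all unit tests");
            suite.addTestSuite(TourLengthTest.class);
            suite.addTestSuite(PriceTest.class);
            return suite;
        }

        public static void main(String[] args) {
            junit.textui.TestRunner.run(suite());
        }
    }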


There are definitely places where taking this approach becomes
difficult.  Some software needs an elaborate environment in which to
operate.  This makes it difficult to set up tests.  Various sorts of
mock environments and inputs can deal with this to a degree, but it's
still an issue.  Our team just finished a year-long project that
included a web GUI, a lot of client/server code in Java and C++, SSL,
RMI, lots of concurrency, etc.  It was definitely tough to test some of
these features in a simple unit-test framework.
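
To give a flavor of what I mean by a mock environment (a generic
sketch, not our actual code): the class under test talks to an
interface, and the test hands it a fake that needs no network, no SSL,
and no server setup.

    import junit.framework.TestCase;

    // The production code depends on this interface, not on a real socket.
    interface StatusChannel {
        String fetchStatus(String host);
    }

    // Test double: a canned, in-memory stand-in for the real server.
    class FakeStatusChannel implements StatusChannel {
        public String fetchStatus(String host) {
            return "OK";
        }
    }

    // The class under test never knows the difference.
    class HealthMonitor {
        private final StatusChannel channel;
        HealthMonitor(StatusChannel channel) { this.channel = channel; }
        boolean isHealthy(String host) {
            return "OK".equals(channel.fetchStatus(host));
        }
    }

    public class HealthMonitorTest extends TestCase {
        public void testHealthyWhenServerSaysOk() {
            HealthMonitor monitor = new HealthMonitor(new FakeStatusChannel());
            assertTrue(monitor.isHealthy("db1"));
        }
    }

The real implementation of the interface only needs to be exercised by
the slower, higher-level tests.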

We're at the point now that new bugs tend to be extremely subtle, and
it often takes far longer to construct test cases that trigger them
than it does to write the fixes.  We still do it, though.  In fact, our
model is that we don't start fixing the bug until we have a test that
reproduces it.  We now have close to 1000 unit tests in the system
(along with other higher-level functional tests) and the unit tests are
run every night.
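
"A test that reproduces it" is usually just one more unit test pinned
to the bug report.  A made-up example of the shape these take (both the
bug and the code here are invented):

    import junit.framework.TestCase;

    public class Bug1234RegressionTest extends TestCase {

        // Toy stand-in for the real code: the (invented) bug report said
        // that counts written with a leading "+" crashed the parser.
        static int parseCount(String text) {
            if (text.startsWith("+")) {
                text = text.substring(1);    // the fix: strip the sign first
            }
            return Integer.parseInt(text);
        }

        // Written before the fix went in; it failed against the broken
        // code, and it stays in the nightly run forever afterwards.
        public void testLeadingPlusSignIsAccepted() {
            assertEquals(7, parseCount("+7"));
        }
    }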

I've had multiple opportunities to test the "TDD allows large-scale
refactoring" theory in the last year, and I've been very pleased with
the result.  By definition, if the tests pass, you didn't break
anything. :-)  If something does break, we add a test that catches it
and move on.  It's very liberating.  I've worked on a lot of
complicated programs before that became difficult to change later
because you're not sure of the side-effects of a change.  I don't worry
about that any more.