
Legacy Driven Development

Here is an interesting problem that I ran into. I needed to produce an XML document for an external system to consume. This is a fairly complex document format, and there are a lot of scenarios to support. I began to test drive the creation of the XML document, but it turned out that I kept having to make changes as I ran into more scenarios that invalidated previous assumptions I had made.

Now, we are talking about a very short iteration cycle. I might write a test to validate an assumption (attempting to put two items in the same container should throw) and an hour later realize that it is legal, if strange, behavior. The tests became a pain point; I had to keep updating things because the invariants that they were based upon were wrong.
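To make the pain concrete, here is a minimal sketch of the kind of invariant test I kept having to throw away. The post doesn't show the original code, so the language (Python) and all of the names (order_document, Container, ContainerFullError) are hypothetical:

```python
import pytest

# Hypothetical stand-ins for the real document model, which isn't shown.
from order_document import Container, ContainerFullError


def test_container_rejects_second_item():
    # Assumption at the time: a container may hold only one item.
    # An hour later that turned out to be legal, if strange, behavior,
    # so the test had to be rewritten, not the production code.
    container = Container("box-1")
    container.add_item("widget-a")
    with pytest.raises(ContainerFullError):
        container.add_item("widget-b")
```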

At that point, I decided that TDD was exactly the wrong approach for this scenario. Therefore, I decided to fall back to the old "trial and error" method: in this case, producing the XML and comparing it to known good output using a diff tool.
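As a rough sketch of that loop, assuming the output is compared against a saved known-good file (Python here, and the file layout is my own invention; the post doesn't show the actual tooling):

```python
import difflib
from pathlib import Path


def diff_against_known_good(generated_xml: str, known_good_path: Path) -> str:
    """Return a unified diff between the current output and the last
    output that was manually verified as correct."""
    expected = known_good_path.read_text().splitlines(keepends=True)
    actual = generated_xml.splitlines(keepends=True)
    return "".join(difflib.unified_diff(
        expected, actual,
        fromfile=str(known_good_path), tofile="current output",
    ))


# Typical cycle: regenerate, read the diff by hand, and if the change is
# intentional, overwrite the known-good file with the new output.
# print(diff_against_known_good(produce_xml("scenario1"), Path("scenario1.xml")))
```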

The friction in the process went down significantly, because I didn't have to go and fix the tests all the time. I did break things that used to work, but I caught them mostly with manual diff checks.

So far, not a really interesting story. What is interesting is what happened when I decided that I had done enough work to consider most scenarios complete. I took all the scenarios and started generating tests for them, so that for each scenario I now have a test that pins the current behavior of the system. This is blind testing. That is, I assume that the system is working correctly, and I want to ensure that it keeps working in this way. I am not sure what each test is doing, but the current behavior is assumed to be correct until proven otherwise.
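A minimal sketch of what those generated blind tests (characterization tests, in Feathers's terminology) can look like, assuming one approved output file was captured per scenario; produce_xml and the scenarios directory are hypothetical names:

```python
from pathlib import Path

import pytest

from document_generator import produce_xml  # hypothetical entry point

# One approved file per scenario, captured from the current (assumed
# correct) behavior of the system.
SCENARIOS = sorted(Path("scenarios").glob("*.approved.xml"))


@pytest.mark.parametrize("approved", SCENARIOS, ids=lambda p: p.stem)
def test_output_matches_approved(approved: Path):
    # Blind test: whatever the system produced when the file was
    # captured is assumed correct until proven otherwise.
    scenario = approved.name.replace(".approved.xml", "")
    assert produce_xml(scenario) == approved.read_text()
```

Each new scenario only needs a captured file; the suite picks it up automatically, which is how a few minutes of capturing can yield hundreds of tests.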

Now I am back to having my usual safety net, and it is a lot of fun to go from zero tests to nearly five hundred tests in a few minutes.

This doesn't prove that the behavior of the system is correct, but it does ensure no regressions and makes sure that we have a stable platform to work from. We might find a bug, but then we can fix it in safety.

I don't recommend this approach for general use, but for this case, it has proven to be very useful.

Comments

08/27/2008 07:58 PM by Steve Smith

Hi Oren,

Yes, this is something Feathers talks about in his Legacy Code book, where he equates such tests to clamping down a piece of wood one is working with to ensure it doesn't move while it's being worked upon. The tests help ensure the system keeps doing what it did before, which is valuable when you start hacking at it. One quick typo, in your 2nd to last paragraph I think you meant to say "...but it does ensure no regression..."

08/27/2008 10:22 PM by Jason Stangroome

Last time I was doing something similar to this I implemented the diff between my output and the known good output as an automated test.

08/28/2008 01:36 AM by jdn

This is nice. I can relate.
