Writing good tests is much harder than writing good code!
Last updated at 3:04 pm UTC on 8 November 2003

Torsten.Bergmann@phaidros.com wrote to the list that writing good tests is much harder than writing good code!

Ingo Hohmann and Stef Ducasse agreed, and Hannes Hirzel asked why this is the case:

Roel Wuyts came up with the following explanation:

For me it is hard for several reasons, depending on when in the development cycle you write the test:
1 - test first, before implementing new stuff: here you are writing tests for something that does not exist yet. The problem is that you are actually pinning down requirements for a problem or a domain that you do not really know well yet. This is not a real problem, but it is hard. I have nothing against it (you have to do it sooner or later, and doing it sooner, in the XP way, poses less risk).
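Pinning down a requirement before the code exists might look like the following sketch (Python's unittest is used here purely for illustration; the ShoppingCart class and its interface are made-up assumptions, written the way a test-first programmer would invent them before any implementation):

```python
import unittest

class ShoppingCart:
    # Minimal stub so this sketch runs; in real test-first work this
    # body would be written only after watching the tests fail.
    def __init__(self):
        self._items = []
    def add(self, price):
        self._items.append(price)
    def total(self):
        return sum(self._items)

class ShoppingCartTests(unittest.TestCase):
    # These tests state the requirements: an empty cart totals zero,
    # and adding an item raises the total accordingly.
    def test_new_cart_is_empty(self):
        self.assertEqual(ShoppingCart().total(), 0)
    def test_adding_item_increases_total(self):
        cart = ShoppingCart()
        cart.add(5)
        self.assertEqual(cart.total(), 5)

if __name__ == "__main__":
    unittest.main()
```

The hard part Roel describes is exactly the invention of names like `add` and `total` here: the test fixes an interface for a domain you do not fully know yet.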

2 - writing tests after writing code: then you are writing tests for documentation purposes (of a kind). This is hard because you want to cover the implementation you have, so there is a lot of work to do, and you want to make sure that all the invariants you implemented are covered. Because you did not start by writing tests, this is a lot of work and can become quite tricky. Sometimes you have to change your implementation in order to be able to write tests (certain accessors are needed, singleton patterns have to be broken up to add a second instance for testing purposes, ...).
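The singleton problem can be sketched like this (Python; the Configuration class is a hypothetical example): to test code that depends on a singleton, the class often has to grow a hook that lets a test replace or reset the single instance.

```python
class Configuration:
    # Hypothetical singleton. The _reset hook below exists only so
    # tests can start from a clean instance; production code never
    # calls it. This is the kind of change to the implementation that
    # writing tests after the fact can force.
    _instance = None

    @classmethod
    def default(cls):
        # Lazily create the one shared instance.
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance

    @classmethod
    def _reset(cls):
        # Test-only hook: "breaking up" the singleton so a test can
        # install a fresh (or fake) second instance.
        cls._instance = None

    def __init__(self):
        self.settings = {}

# In a test:
Configuration._reset()
fresh = Configuration.default()
fresh.settings["debug"] = True
assert Configuration.default() is fresh
```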

3 - writing tests for unknown code: before reengineering, it is typically good to write tests so that you know that you did not break (too much) when changing unknown things. This is the same problem as in (2), except that here you do not even know the system, its invariants, ... So it gets hard. Of course, reengineering is hard anyway...
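One common way to cope with this (often called a characterization test; the function legacy_format here is a made-up stand-in for unknown legacy code) is to run the code once, record whatever it currently returns, and pin that down as the expected value, so that later changes which alter the behavior are flagged:

```python
def legacy_format(amount):
    # Stand-in for unknown legacy code whose exact rules we have not
    # read; we only want to freeze its current observable behavior.
    return "$%.2f" % (amount / 100)

# Characterization test: the expected strings were obtained simply by
# running the code and recording its output, not by reading any spec.
assert legacy_format(0) == "$0.00"
assert legacy_format(1234) == "$12.34"
```

Such tests do not tell you whether the behavior is *right*, only whether it *changed*, which is exactly what you need before reengineering.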

4 - enforcing application rules: recently I began coding some meta-tests (and I start to see them in other places as well): tests that express patterns the code should stick to. For example, all subclasses of a certain class have to override a class method, so I wrote a test to check this. Or you can write a test that checks that for every class in your system there is a corresponding test class (called NameOfClassTests) that tests all its methods. Or ... or ... or ... That can be hard because you are writing rules to express your architecture, making it explicit and testable.
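A meta-test of the first kind (every subclass must override a given method) might look like this in Python; the Shape hierarchy is invented for illustration. The check looks in each class's own __dict__, so a definition merely inherited from a superclass does not count as an override:

```python
class Shape:
    def area(self):
        raise NotImplementedError

class Circle(Shape):
    def area(self):
        return 3.14159 * 1.0 ** 2

class Square(Shape):
    def area(self):
        return 1.0 * 1.0

def subclasses_missing_override(base, method_name):
    # A subclass overrides the method only if the name appears in its
    # own __dict__, not just somewhere up the inheritance chain.
    return [cls.__name__ for cls in base.__subclasses__()
            if method_name not in cls.__dict__]

# Meta-test: every Shape subclass must define its own area().
assert subclasses_missing_override(Shape, "area") == []
```

The rule "all subclasses of Shape override area" is now explicit and executable; adding a subclass without an override makes the test fail, which is what Roel means by making the architecture testable.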

What do you think?