I recently went to a Devs in the ‘Ditch meetup at 7digital to hear Chris O’Dell (@ChrisAnnODell) explain 7digital’s journey to Continuous Delivery and Steve Freeman (@sf105) speak on GOOS and system testing. We had some useful discussions on dependency injection and how to use logging well, and Steve’s perspectives on ‘code shapes’ and the purpose of tests were revealing.
What is the Purpose of Tests?
Steve is one of the XTC (eXtreme Tuesday Club) practitioners who have advocated TDD since the early 2000s, well before TDD became common practice, and he is co-author with Nat Pryce (@natpryce) of Growing Object-Oriented Software Guided by Tests (“GOOS”). If there were a single point that would sum up Steve’s talk (and the GOOS book), it would be this: stop thinking of tests as verifying correctness; instead, see tests as a structure around which to grow the code.
This way of thinking about code and the purpose of tests – code growing over time using a substrate or trellis of tests – is very different from that common in the traditional object-oriented school of the late 1980s and 1990s, and probably very different from how many people view tests. We can sum up the GOOS approach with the question: What is the purpose of tests? Do we write tests in order to verify correctness, or to guide the development of the code? GOOS leans very much towards the second use (guiding the development of code).
If we see tests as a trellis or support structure around which to grow code, then it follows that once the code has reached a certain maturity, we should be happy to remove some of the supporting tests, as they have served their purpose. ‘Pruning’ tests in this way helps to keep the test suite relevant and rapid to execute. The need to prune tests is just one of many points of agreement between GOOS and Working Effectively with Legacy Code by Michael Feathers; see The Long Tail of Technical Debt on Michael’s blog for an excellent post on pruning tests. Also, given that an increasing amount of the code we write these days lives in the infrastructure sphere, how can we apply TDD/GOOS principles to infrastructure automation code, such as Chef, Puppet, and the like?
Steve made several throw-away comments about code shapes and OO design which I thought were interesting. In the list which follows, I have mapped these statements to the relevant sections in the GOOS book:
“Things that make testing easier tend also to make good OO design”.
See p.57 – I think this maxim applies to other aspects of software design, not just the OO aspect. In particular, at thetrainline.com, we have found that those things which make build, deployment and integration testing easier are also generally good for the software system as a whole.
“Use tests to help you think about how to design your code”.
See p.229 – ‘listen to the tests’.
“‘Shapes’ in code are important”
See p.240-241 – the ‘confused object’ and ‘too many dependencies’ problems.
“Symmetry in code is often a clue that tests and code are well-designed”
See p.107-108 – shows why the ‘imbalance’ between two similar methods is probably a poor design.
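I don’t have the book’s exact example to hand, so here is a made-up Java sketch (the Playlist class and its method names are my own invention) of the kind of imbalance I think Steve means: when two methods that should mirror each other speak different languages, the tests can’t mirror each other either, and that asymmetry is a design clue.

```java
import java.util.ArrayList;
import java.util.List;

class Playlist {
    private final List<String> tracks = new ArrayList<>();

    // Imbalanced pair: one method speaks in track ids, the other in
    // positions. The tests for these two won't read as mirror images,
    // which hints that the abstraction is confused.
    void add(String trackId) { tracks.add(trackId); }
    void removeAt(int index) { tracks.remove(index); }

    // Symmetric pair: both methods speak the same domain language,
    // so their tests mirror each other naturally.
    void include(String trackId) { tracks.add(trackId); }
    void exclude(String trackId) { tracks.remove(trackId); }

    int size() { return tracks.size(); }
}
```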
I also like the concept of emergent design which is woven throughout GOOS, e.g. p.137 – contrast this with the Big Up-front Design / Ivory Tower Architect view of designing software, which is increasingly discredited, imo.
Dependency Injection – use it wisely
There was a lively discussion about dependency injection (DI) frameworks and object graphs. My colleague Attila noted that these days it’s easy to be a lazy developer and rely entirely on DI to construct objects, avoiding the use of new altogether; however, relying on DI to new up every object can result in a huge, inefficient object graph.
Steve agreed, saying he uses Spring and similar DI frameworks only for external components of the system which are not under his control. For his own code, he prefers to let the code itself determine the best place to construct object graphs, by which I think he meant that he uses new in a small number of places (i.e. the uses of new are clustered).
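Here is a minimal Java sketch of what I understood Steve to mean – all class and interface names are hypothetical, not from his talk. Dependencies arrive through constructors, and the uses of new are clustered in one ‘composition root’ rather than delegated wholesale to a DI container:

```java
// Hypothetical domain interfaces -- illustrative names only.
interface CatalogueService { String titleFor(String trackId); }
interface OrderProcessor { void process(String trackId); }

// Stand-in implementation; a real one might call a catalogue HTTP API.
class HttpCatalogueService implements CatalogueService {
    public String titleFor(String trackId) { return "title-for-" + trackId; }
}

class DefaultOrderProcessor implements OrderProcessor {
    private final CatalogueService catalogue;
    // The dependency comes in via the constructor, so a unit test can
    // pass in a fake without any framework involvement.
    DefaultOrderProcessor(CatalogueService catalogue) { this.catalogue = catalogue; }
    public void process(String trackId) {
        System.out.println("Processing " + catalogue.titleFor(trackId));
    }
}

// The 'composition root': the one cluster of `new` calls that wires
// the object graph together.
class Main {
    public static void main(String[] args) {
        CatalogueService catalogue = new HttpCatalogueService();
        OrderProcessor orders = new DefaultOrderProcessor(catalogue);
        orders.process("42");
    }
}
```

A DI framework could still manage the truly external pieces (data sources, message queues), while the domain wiring stays visible in plain code.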
Logging Done Well
Steve made a point of emphasising the importance of good, well-structured, well-formatted logging. His example of the ship’s log was very useful for me: a concrete case where logging was, and still is, essential and must conform to pre-defined standards.
A ship’s log is a matter of public record – it has a defined and accepted format. Here we can see dates and times recorded in a standard pattern, which is more than can be said for some software even in 2012!
In software systems, logging is often added as an afterthought to aid debugging awkward problems, rather than as the first-class feature it should be (see p.233-5 of GOOS). When logging is done well, it can be used to provide a near-real-time view of how the system is behaving, through use of tools such as Syslog, Logstash and Splunk.
However, if logging is a feature, how do we test it? We need to use an abstraction between the domain code and the logging mechanism, a Collaborator in GOOS terms. The abstraction layer also helps us to avoid logorrhoea (think: rolls and rolls of toilet paper!) by allowing us to restrict what actually gets logged to disk or Syslog based on configurable rules.
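A minimal Java sketch of the idea, with hypothetical names of my own (not the book’s example): the domain code depends on a narrow, domain-flavoured Collaborator interface rather than on a concrete logging framework, so a test can substitute a fake, and the production implementation decides formatting and filtering.

```java
// Hypothetical logging Collaborator: a narrow interface expressed in
// domain terms, which domain code depends on instead of a log framework.
interface AuditTrail {
    void recordPaymentFailure(String customerId, String reason);
}

class PaymentService {
    private final AuditTrail audit;
    PaymentService(AuditTrail audit) { this.audit = audit; }

    void takePayment(String customerId, boolean cardAccepted) {
        if (!cardAccepted) {
            // The log event is a first-class, testable domain event.
            audit.recordPaymentFailure(customerId, "card declined");
        }
    }
}

// Production implementation: translates domain events into formatted
// log lines. Configurable rules here could also curb logorrhoea by
// deciding what actually reaches disk or Syslog.
class ConsoleAuditTrail implements AuditTrail {
    public void recordPaymentFailure(String customerId, String reason) {
        System.out.println("PAYMENT_FAILURE customer=" + customerId
                + " reason=" + reason);
    }
}
```

In a unit test, AuditTrail is just another collaborator to mock or fake, so the logging behaviour is verified like any other feature of the system.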
I recently wrote about a technique very similar to the GOOS logging Collaborator to configure logging levels without recompilation in Software Development Practice Journal; you can read the re-post on my blog here.
It was a packed session at the 7digital offices, and the free beer and chocolates were really appreciated! http://www.meetup.com/devs-in-the-ditch/photos/11185832/ Thanks to Steve, Chris and the 7digital team for a really useful event.
Have you read the GOOS book? I’d love to hear how your experiences map to the GOOS way of writing code.