Stefan Tilkov (@stilkov) from innoQ gave an excellent talk on the importance of a “system-of-systems approach” to software architecture (Breaking the Monolith, slides [PDF, 1MB]). [Update: the video is now online here: http://www.infoq.com/presentations/Breaking-the-Monolith]
In essence, he argued for a distinction between micro-architecture (the design of the individual [sub]system) and macro architecture (the design of interacting systems).
Stefan rightly characterized the use of the classic N-tier “architecture” pattern of UI+Logic+Persistence for every system as “too generic and lazy”. On the other hand, assuming that “1 Project = 1 System” is also very bad news for software architecture; that is, allowing the system design to be driven by the arbitrary constraints of budget allocation leads to poor design.
System Boundaries and Modularization
Stefan talked next about the need to be very clear about the system boundaries, and pick the appropriate modularization strategy for the size of the system.
- For each distinct subsystem, consider using separate persistence, domain model, UI, and even implementation strategies – there is no ‘one size fits all’
- Consider different software technologies for different subsystems (the most appropriate in each case)
- Consider using different database technologies (RDBMS vs NoSQL)
- “Integrating through the database is the single worst design decision you can take”
It seems to me that this common-sense approach is often in danger of being ignored by inexperienced software architects looking for what might be called the ‘latest shiny’ approach, mandating the use of <name your TLA> across the board.
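The alternative to integrating through a shared database is for each subsystem to own its data and expose it only through an explicit contract. A minimal sketch of the idea (the `OrderService` class, the payload shape, and all names here are hypothetical illustrations, not from the talk):

```python
import json

# Hypothetical example: a billing system asks the order system for data
# through a published contract, instead of reading its tables directly.

class OrderService:
    """Owns the orders data; other systems never touch its database."""

    def __init__(self):
        # The internal storage format is a private, system-internal decision.
        self._orders = {42: {"customer": "ACME", "total_cents": 9900}}

    def get_order(self, order_id: int) -> str:
        # The JSON payload is the cross-system contract; the underlying
        # schema can change freely as long as this shape is preserved.
        order = self._orders[order_id]
        return json.dumps({"id": order_id, "total_cents": order["total_cents"]})


# The consumer depends only on the contract, never on the schema.
billing_view = json.loads(OrderService().get_order(42))
print(billing_view["total_cents"])
```

In a real deployment the call would go over HTTP or messaging rather than an in-process method, but the design point is the same: the database stays hidden behind the owning system.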
Cross-System vs. System-Internal
Distinguish Cross-System (fundamental) vs. System-internal (transitory) decisions. Aspects such as programming language and persistence should be open to change within a [sub]system every few years, whereas more fundamental design decisions such as data formats and [sub]system responsibilities should be designed for a longer lifetime.
- Cross-System: stick with these for 5+ years
- Communication protocols
- Data formats
- Logging & monitoring
- Data redundancy
- System-internal: can be changed every year or two
- Programming languages
- Dev tools and frameworks
- Coding guidelines
The result of following this approach ought to be that newer technologies can be introduced where appropriate and beneficial (at the System-Internal level) without requiring the re-work of the rest of the system, as contracts and data formats remain the same; a good balance of flexibility and stability.
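The separation above can be made concrete with a small sketch: two generations of a subsystem's internals emit the same wire format, so consumers never notice the internal rewrite. (The wire format and all names are invented for illustration.)

```python
import json
from dataclasses import dataclass

WIRE_VERSION = 1  # cross-system: changes rarely, by agreement between systems

def to_wire_v1(customer_name: str, cents: int) -> str:
    """The cross-system data format: stable for years."""
    return json.dumps(
        {"v": WIRE_VERSION, "customer": customer_name, "total_cents": cents},
        sort_keys=True,
    )

# Generation 1: a plain dict-based domain model.
order_gen1 = {"name": "ACME", "cents": 9900}
payload_gen1 = to_wire_v1(order_gen1["name"], order_gen1["cents"])

# Generation 2: the subsystem was rewritten around a dataclass —
# a system-internal change that never reaches the wire.
@dataclass
class Order:
    name: str
    cents: int

order_gen2 = Order("ACME", 9900)
payload_gen2 = to_wire_v1(order_gen2.name, order_gen2.cents)

assert payload_gen1 == payload_gen2  # the cross-system contract is stable
```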
Timelines for System Evolution
One of the recommendations in Stefan’s talk which resonated the most with me was the need for a timeline for the evolution of both the System-Internal rules and the Cross-System rules.
Such an approach allows you to:
- Identify when approaches need to change and plan them in
- Have “versions” of architectural rules/guidelines and understand which “versions” of (Domain, Cross-system, System-internal) work with each other.
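One simple way to make those "versions" operational is to record which combinations of rule sets are known to work together, so that upgrades can be planned on the timeline rather than discovered in production. A hypothetical sketch (the version labels and compatibility pairs are invented):

```python
# Hypothetical sketch: a compatibility record between "versions" of the
# cross-system and system-internal architectural rule sets.

COMPATIBLE = {
    # (cross_system_rules, system_internal_rules) pairs known to coexist
    ("cross-v1", "internal-v1"),
    ("cross-v1", "internal-v2"),  # internal rules evolved, contracts unchanged
    ("cross-v2", "internal-v2"),
}

def can_deploy(cross: str, internal: str) -> bool:
    """Check a planned combination against the known-good matrix."""
    return (cross, internal) in COMPATIBLE

print(can_deploy("cross-v1", "internal-v2"))  # True
print(can_deploy("cross-v2", "internal-v1"))  # False: plan a migration first
```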
Any interesting (and valuable) software system is likely to be a changing, evolving thing; rather than a sculpture (fixed in form), software should be treated more as a ‘live performance’.
Stefan rounded off the talk with some key points:
- Data redundancy is good for you – sharding data between systems helps to maintain good design
- If a central datastore is available, people will use this (developers are lazy!)
- Consider using Edge-side integration (ESI) Caches; like SSI but at a proxy/edge cache
- Browser-side integration: use the tabs in the browser rather than re-implementing chromes within the browser chrome
- “Distributed transactions are very bad”
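To illustrate the ESI point: an edge cache assembles the page from independently cached fragments, much as SSI does on the origin server, but at the proxy layer. A minimal fragment using the standard `<esi:include>` tag (the URLs are placeholders; which subsystem serves each fragment is an assumption for illustration):

```html
<!-- Assembled by the proxy/edge cache, not by the application servers. -->
<html>
  <body>
    <!-- Each fragment can come from a different subsystem and carry
         its own cache lifetime. -->
    <esi:include src="https://example.com/header" />
    <esi:include src="https://example.com/orders/summary" />
  </body>
</html>
```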
I thought this was a great talk, with plenty of solid, common-sense advice.