
Continuous Delivery: Tools, Collaboration, and Conway’s Law – slides from QCon London

Continuous Delivery for databases: microservices, team structures, and Conway’s Law

The way we think about data and databases must adapt to fit dynamic cloud infrastructure and Continuous Delivery. The need for rapid deployment and feedback on software changes, the growing complexity of modern distributed systems, and powerful new tooling are together driving significant changes to the way we design, build, and operate software systems. These changes require new ways of writing code, new team structures, and new ownership models for software, all of which in turn have implications for data and databases.

Read the full article on Simple Talk: Continuous Delivery for Databases: Microservices, Team Structures, and Conway’s Law.

(These slides were presented in a talk I gave at the develop:BBC 2014 conference on 13th November in London.)

Using ball flow exercises to highlight bottlenecks in software delivery

One of the exercises we do during the Experience DevOps workshop is a ball flow scenario inspired by the Ball Point game. We set up different combinations and topologies of Dev teams and Ops teams, and then see how many balls the teams can pass in (say) 60 seconds.

[Image: Experience DevOps ball flow exercise]

When we ran the workshop recently in Bangalore, we had a large number of participants, which enabled some interesting experimentation with the topology of the teams. In the exercise, the ‘Dev’ team takes balls from the ‘backlog’ and eventually passes them to the ‘Ops’ team, who must ‘make the features live’ under rules designed to simulate real-world physical constraints.

[Image: Experience DevOps workshop in Bangalore]

With the large group of participants in Bangalore, we experimented with multiple value streams (or products). After ‘warm-up’ runs using a single ‘product’ (value stream) – and therefore a single Dev team and a single Ops team – we split people into two separate ‘Dev’ teams (one team per product, each with its own backlog) but kept a single Ops team servicing both Dev teams:

[Image: ball flow with two Dev teams and one Ops team]

In this topology, the teams were able to roughly match the throughput and error rate from before the Dev people were split into two teams (around 16 balls per minute, with around 4 defects). Then we removed the single ‘Ops’ team, and instead aligned half of the ‘Ops’ people with one product (value stream) and half with the other, creating ‘product teams’ (or ‘service’ teams):

[Image: ball flow with two Dev teams and two Ops teams]

The results were striking: the overall throughput across all teams more than doubled with end-to-end service teams compared to the shared Ops team. One service team managed 16 balls with no defects, and the second managed 20 balls with one defect – a total of 36 balls and a single defect, compared to the 16 balls and 4 defects managed by the 2x Dev + 1x Ops topology:

[Image: ball flow results with service teams]

It was clear that – in this scenario – the single Ops team acted as a bottleneck. Part of the reason was the (simulated) shared infrastructure: when we split people into service teams, we also split the infrastructure, so that each service team deployed to its own set of ‘servers’.
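
To make the bottleneck effect concrete, here is a back-of-the-envelope model in Python. All of the numbers (per-ball handling times, the context-switching cost, how often the shared team switches products) are invented for illustration – they are not measurements from the workshop – but the model captures the hypothesis: a shared Ops team loses time switching between products and their infrastructure, while dedicated service teams do not.

    # Toy model of the ball-flow exercise. All timings below are
    # illustrative assumptions, not measurements from the workshop.

    def balls_per_run(seconds, per_ball, switch_cost=0.0, switch_prob=0.0):
        """Throughput of one Ops queue, given an average time per ball
        plus an optional context-switching penalty."""
        avg_time = per_ball + switch_cost * switch_prob
        return int(seconds // avg_time)

    # One shared Ops team serving two Dev streams: roughly half the
    # time the next ball belongs to the *other* product, forcing a
    # switch of context and (simulated) servers.
    shared = balls_per_run(60, per_ball=2.0, switch_cost=3.0, switch_prob=0.5)

    # Two service teams: each is half the size (slower per ball) but
    # owns its own stream and servers, so it never pays a switch cost.
    service = 2 * balls_per_run(60, per_ball=3.0)

    print("shared Ops team:", shared, "balls/min")   # -> 17
    print("service teams:  ", service, "balls/min")  # -> 40

Even this crude model shows how a shared team's throughput can collapse without anyone in that team being ‘slow’: the queueing and switching overhead alone is enough.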

The simplicity of the exercise, and the speed with which different topologies and constraints can be tried out, make this ball flow game very useful for exploring different team topologies in a DevOps context.

Continuous Delivery eBook from Zend – views from 29 authors

I was recently asked to contribute to an eBook from Zend about moving to Continuous Delivery (CD). The 29 authors in the book share a wide range of experience with CD, and there is plenty of useful advice; the contributions from Mathias Meyer (@roidrage), Kate Matsudaira (@katemats), and Jamie Ingilby (@jamiei) are particularly worth reading, I think.

In my section of the book I explain how we used ThoughtWorks Go to model the testing and release steps (effectively part of the value stream), and how this won us trust from several different people and teams during a move to CD. Using a prototype also helped us to validate the activities undertaken:

We tried to empathize with their situation and, using role-based security in the deployment pipeline, uncovered enough information to give them a sense of visibility and control.

Without being able to easily visualise and communicate the activities we were automating, progress would have been slow or even blocked.
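
As an illustration of the kind of role-based gating we relied on, here is a small Python sketch. It is purely conceptual – GoCD expresses this in its own server configuration, not in Python – and the stage names and roles below are hypothetical:

    # Conceptual sketch of role-gated pipeline stages (hypothetical
    # names; this is not GoCD's API or configuration format).
    from dataclasses import dataclass, field

    @dataclass
    class Stage:
        name: str
        approver_roles: set = field(default_factory=set)  # empty = automatic

    def can_trigger(user_roles, stage):
        """A stage with no approver roles runs automatically; otherwise
        the triggering user must hold at least one of the listed roles."""
        return not stage.approver_roles or bool(user_roles & stage.approver_roles)

    pipeline = [
        Stage("build"),
        Stage("acceptance-tests"),
        Stage("deploy-to-staging", approver_roles={"qa", "ops"}),
        Stage("deploy-to-production", approver_roles={"ops"}),
    ]

    for stage in pipeline:
        print(stage.name, "-> triggerable by ops:", can_trigger({"ops"}, stage))

Giving each team a stage (and a role) of its own in the pipeline is what provided the sense of visibility and control mentioned above.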

Get a free copy of the eBook here: http://bit.ly/ZendCDbook

[Image: Zend Continuous Delivery eBook cover]

Using Chef for infrastructure automation – reading list

I have recently read (and re-read) several books on Chef so that I can recommend books to clients who are starting out with infrastructure automation (and to remind myself of the more obscure uses of knife, encrypted data bags, and so on). In this post I comment on these books:

  • Chef Infrastructure Automation Cookbook by Matthias Marschall
  • Managing Windows Servers with Chef by John Ewart
  • Test-Driven Infrastructure with Chef (2nd Edition) by Stephen Nelson-Smith
  • Automation Through Chef Opscode by Navin Sabharwal and Manak Wadhwa

Summary: read Chef Infrastructure Automation Cookbook for a good introduction to Chef on both Linux and Windows; read Managing Windows Servers with Chef if you manage many Windows machines; but most of all read Test-Driven Infrastructure with Chef because without a test-driven approach your infrastructure code will rapidly become tangled, unsupported, and obsolete.
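
The examples in these books use Chef's Ruby tooling (ChefSpec, Test Kitchen, and so on). Purely as a sketch of the test-first idea in Python, here is roughly what an equivalent server test looks like with the Testinfra library; the nginx service is a hypothetical example, not taken from any of the books:

    # test_webserver.py -- a sketch of test-driven infrastructure using
    # the Python Testinfra library (the books themselves use Chef's Ruby
    # tooling). 'nginx' here is a hypothetical example service.
    # Run with something like: py.test --hosts='ssh://web1' test_webserver.py

    def test_nginx_is_installed(host):
        assert host.package("nginx").is_installed

    def test_nginx_is_running_and_enabled(host):
        nginx = host.service("nginx")
        assert nginx.is_running
        assert nginx.is_enabled

The point the books make applies whatever the tooling: write the assertion about the server's desired state first, then write the cookbook code that makes it pass.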

Continue reading Using Chef for infrastructure automation – reading list