I’ve been using deployment pipelines since 2011, starting with GoCD and then other tools. A few months ago, I joined DevOps experts Helen Beal of Ranger4 and Sam Fell and Anders Wallgren of Electric Cloud to discuss deployment pipelines for modern software delivery, as part of the Continuous Discussions (#c9d9) series (episode 88).
(YouTube video segments below)
- The concept of the deployment pipeline was defined and popularised by the 2010 book Continuous Delivery by Jez Humble and Dave Farley.
- Deployment pipelines are a really key concept for modern software delivery – all changes flow through the deployment pipeline.
- The fast feedback from deployment pipelines can and should completely change the way we approach software. We can expect rapid feedback on changes which encourages us to make smaller, more frequent code check-ins.
- Value Stream Mapping can be a powerful way to uncover large wait times in the delivery flow.
- A “walking skeleton” deployment pipeline – essentially modelling the current approval/change flow, but with empty stages in a tool – helps us to sense-check the current state: “do we really need an approval gate at this point?”
Background to deployment pipelines
Matthew Skelton: So, the concept of the deployment pipeline was popularized by Jez Humble and Dave Farley in the book “Continuous Delivery”, published in 2010. That’s where the terminology was really solidified, although some people had been building what ended up being called deployment pipelines since about 2005. The concepts and practices are quite well matured now, but the deployment pipeline wasn’t really a named concept until 2010 and the Humble and Farley book.
It was extremely useful, in my opinion, how they characterized what a deployment pipeline is: the route to live, from version control to the production environment – or as close to production as we can get, if we’re working in embedded systems or somewhere else where we can’t deploy all the time. That’s the key middle bit of the value stream for software development. The way they characterized it was so good, both in terms of the moving parts and in terms of the fact that almost everything was automated but you could still have manual gates where you needed a manual decision. They took a whole load of really good practices that lots of people were already doing, gave it a name, put some parameters around it, and made it really easy to understand and define what we mean by a deployment pipeline. It was a super useful starting point for thinking about deployment pipelines and building them.
What is the Continuous Delivery mindset?
Matthew Skelton: I think it’s an interesting point, and what we’re seeing here is this: we started with continuous integration as a concept back in at least the early 2000s, possibly before that, with the first build servers, like CruiseControl, coming out around then. That was the first phase of what you might call taking a more and more industrial approach to aspects of what we’re doing.
I don’t mean that software development becomes completely regular, because that’s not what software is. But seeing the value in automating things, getting fast feedback from that automation, and fixing problems really quickly – that mindset is very different from tinkering away with code that might be ready at some point, and perhaps it will work out and come together, if we can be bothered to integrate it with other people’s code.
It’s a very different mindset. We’re proving ourselves all the time – a much more scientific way of working. Let’s try to falsify this hypothesis by running a load test against it and look at the results. If it fails, that’s not a problem – it’s not that I’m a bad coder if the test fails; actually, it’s a good thing.
Sam Fell & Anders Wallgren: That’s right.
Matthew Skelton: It’s something I couldn’t see before – it’s a very different mindset from how a lot of people have worked with software in the past. I think it’s a great step forward, because we’re no longer worried about producing huge numbers of artifacts and binaries. That’s fine – we’ve got some nice cleardown patterns that we can just run against the artifact repository. Very simple stuff: “has it [the artifact] ever been used?” is the first one.
When I was working at a place in London a few years ago, we had an artifact repository distributed across four or five sites around the world, because we had a distributed team, so we needed to manage the number of artifacts in the pipeline quite carefully. The first cleardown rule we implemented was: has that package ever been used? We found that 70 percent of the stuff had never been used, because a newer version had come along afterwards and that was used instead. So our cleardown policy was very simple: if a package has never been used, get rid of it after a period of time.
That’s just an example of the way in which, once we’ve started to make things more industrial, more repeatable, more automated, we can apply rules like this. Some of the problems that would have been crazy to deal with in the past – if we’d been doing manual builds and keeping all that stuff – just go away, because we’ve automated that element.
It’s nice to be in a position where we’re taking these kinds of patterns and making things more automated. We can build more useful assumptions on top of our automated, more industrialized tooling and ways of working, and do things in a much better way. That’s one of the real key values of deployment pipelines in this space.
The value of Value Stream Mapping (VSM)
Matthew Skelton: Yeah, we’ve done quite a lot on [value stream mapping] and it’s always very revealing for the organization and for the people in the team. For a software development team, we tend to start just with version control and watch the flow of change going into production, although, as you said, it’s really valuable to extend it further left – back towards requirements, business requirements, and so on. That’s useful to do too, particularly if you’re working with stakeholders higher up in the organization, or at least with more responsibility.
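The core calculation behind a value stream map – splitting each stage’s elapsed time into waiting versus actual work to find the large wait times – can be sketched as below. The stage names and timestamps are invented for illustration; in practice they would come from your tracking or pipeline tooling.

```python
from datetime import datetime

# Hypothetical value stream data: for each stage, when work arrived,
# when active work started, and when it finished.
STAGES = [
    # (stage, arrived, started, finished)
    ("code review", datetime(2017, 6, 1, 9, 0), datetime(2017, 6, 2, 14, 0), datetime(2017, 6, 2, 15, 0)),
    ("test", datetime(2017, 6, 2, 15, 0), datetime(2017, 6, 5, 10, 0), datetime(2017, 6, 5, 12, 0)),
    ("deploy approval", datetime(2017, 6, 5, 12, 0), datetime(2017, 6, 9, 16, 0), datetime(2017, 6, 9, 16, 30)),
]

def wait_vs_work(stages):
    """Split each stage's elapsed time into waiting (arrived -> started)
    and working (started -> finished), to surface where flow stalls."""
    report = {}
    for name, arrived, started, finished in stages:
        wait = (started - arrived).total_seconds() / 3600
        work = (finished - started).total_seconds() / 3600
        report[name] = {"wait_h": wait, "work_h": work}
    return report

for name, r in wait_vs_work(STAGES).items():
    print(f"{name}: waited {r['wait_h']:.0f}h, worked {r['work_h']:.0f}h")
```

Even on toy numbers like these, the pattern a value stream map typically reveals shows up immediately: minutes or hours of actual work sitting behind days of queueing.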
Sam Fell: Why? Why are we doing this? Why is the whole stream being instantiated?
Matthew Skelton: Exactly. But even if you just take it from version control to production, make sure that people can see that journey. To flip this on its head: those organizations that can’t represent that flow visually, that don’t have a single tool that actually controls it, generally end up in a pickle. This is what I see time and time again.
When they’ve segmented the flow from version control into production across several different tools, or where it’s not possible to visualize or see what’s going on – you have to go into some crazy tool over here, run a little manual script to kick off the next phase, that kind of stuff – it wastes so much time, causes so much confusion, and makes onboarding and diagnostics very difficult.
People are not thinking about it as a proper thing. This is our mechanism for software delivery, for value delivery – why would we not want to invest in this thing in a way which….
Anders Wallgren: … it’s not even necessarily just technical issues that you’re uncovering, you know, it’s business issues, it’s functionalities actually…
Matthew Skelton: Exactly. That’s exactly the reason to do it, and the reason to put in place what Jez Humble and Dave Farley called a “walking skeleton” deployment pipeline: whichever tool we’re using, we initially just create a series of steps and model the current flow. If we’ve got seventeen different test stages, we have seventeen different stages in our deployment pipeline to start with, and then we have the conversation with people: “Is this what you really want? Are you sure this is really what you need?”
Even if loads of the stuff isn’t automated yet, we can still model that flow through to production. It allows us to solve a lot of challenges early – security, network partitioning, the different access requirements for different teams, needing security clearance for one area, all that stuff. We can sort that out while all we’re deploying is readme.txt – one file, we just deploy that one file and see what happens, because once we’ve solved that challenge, there’s a whole lot of stuff we have already solved. Then we can deploy helloworld.java, and then something a bit more interesting – but the key thing is we’re not leaving that until the very end.
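The walking skeleton idea can be sketched as plain data plus a trivial runner: every stage starts out as an empty step that just passes one file through, and the manual gates are marked explicitly so the “do we really need this gate?” conversation happens against something concrete. The stage names here are illustrative, not from any particular organization or tool.

```python
# A walking-skeleton pipeline modelled as data: the stages mirror the
# organization's current approval/change flow (names are hypothetical).
STAGES = [
    {"name": "build", "manual_gate": False},
    {"name": "integration test", "manual_gate": False},
    {"name": "UAT", "manual_gate": True},
    {"name": "security review", "manual_gate": True},
    {"name": "production", "manual_gate": False},
]

def run_skeleton(stages, artifact="readme.txt"):
    """Walk one artifact through every stage. Each stage is an empty step
    that only records it was reached, which is enough to prove that
    access, networking, and permissions work end to end."""
    log = []
    for stage in stages:
        gate = " [manual gate]" if stage["manual_gate"] else ""
        log.append(f"{stage['name']}{gate}: deployed {artifact}")
    return log

for line in run_skeleton(STAGES):
    print(line)
```

Once readme.txt flows through every stage, each empty step can be replaced with a real build, test, or deployment action without disturbing the end-to-end path that has already been proven.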
Thanks to Helen Beal, Sam Fell, and Anders Wallgren for a great discussion!
Watch the whole #c9d9 episode: