New eBook on Continuous Delivery with Windows and .NET

Back in 2010 when Jez Humble and Dave Farley wrote their ground-breaking book Continuous Delivery, the Windows and .NET platforms lagged behind the Linux/Mac world in terms of automation capability. That is no longer the case – every core feature in Windows and .NET now has a PowerShell API, and all the core tooling needed for Continuous Delivery – package management, artifact repositories, build servers, deployment pipeline tools, infrastructure automation, monitoring, and logging – is now available natively on Windows/.NET.

Chris O’Dell (@ChrisAnnODell) and I decided we should explain how to make Continuous Delivery work with Windows and .NET, and thanks to the great editorial team at O’Reilly, we’ve published a short eBook:

[Book cover: Continuous Delivery with Windows and .NET]

The dedicated book website is at CDwithWindows.net and O’Reilly have published the first chapter of the book online as an article: Introduction to Continuous Delivery with Windows. We’d love your feedback: book@cdwithwindows.net

UPDATE: we’ll be at both PIPELINE Conference (March 23 2016) and WinOps Conference (May 24 2016) with printed copies of the book.

Note: we began writing the book in August 2015, and it’s astonishing (and exciting!) how much has changed in the 8 months since then, with Windows Nano Server, Azure and Windows support for Docker and containers, .NET Core, SQL Server on Linux, and even SSH for Windows. These and more recent developments do not feature in the book – perhaps we’ll do an updated version soon.

Continuous Delivery for databases: microservices, team structures, and Conway’s Law

The way we think about data and databases must adapt to fit with dynamic cloud infrastructure and Continuous Delivery. The need for rapid deployments and feedback from software changes combined with an increase in complexity of modern distributed systems and powerful new tooling are together driving significant changes to the way we design, build, and operate software systems. These changes require new ways of writing code, new team structures, and new ownership models for software systems, all of which in turn have implications for data and databases.

Read the full article on Simple Talk: Continuous Delivery for Databases: Microservices, Team Structures, and Conway’s Law.

(These slides were presented in a talk I gave at the develop:BBC 2014 conference on 13th November in London)

Deployability for databases for Continuous Delivery – article on Simple Talk

I wrote an article recently for the Simple Talk website called Common database deployment blockers and Continuous Delivery headaches, where I outline some of the common problems preventing databases from being deployable – a major blocker to Continuous Delivery.

Deployability is now a first-class concern for databases, and there are several technical choices (conscious and accidental) which combine to block the deployability of databases. Can we improve database deployability and enable true Continuous Delivery for our software systems? Of course we can, but first we have to see the problems.

The recommendations include:

  1. Minimize changes in Production
  2. Reduce accidental complexity
  3. Archive, distinguish, and split data
  4. Name things transparently
  5. Source Business Intelligence from a data warehouse
  6. Value more highly the need for change
  7. Avoid Production-only tooling and config where possible [I mention this in my talk How to choose tools for DevOps and Continuous Delivery]

Addressing these individually perhaps doesn’t seem too challenging, but tackling deployability as a whole requires close, effective collaboration between developers, DBAs, and operations teams to achieve the right balance between rapid deployment and access to data.

Deployability for Databases

Read the full article here: https://www.simple-talk.com/sql/database-administration/common-database-deployment-blockers-and-continuous-delivery-headaches/

Roundup: Patterns for Performance and Operability

I recently posted a review of Patterns for Performance and Operability by Ford et al. on the SoftwareOperability website. I think that this book is exceptionally useful in its treatment of both performance and operability, and anyone who cares about how well software works in Production should buy and read a copy (there are paper and eBook editions).

Two other reviews might be useful too: my colleague Anant East (Head of Architecture and Infrastructure, thetrainline.com) wrote up a detailed review of Patterns for Performance and Operability on the tech blog at thetrainline.com, and I posted a short review on Amazon.

Comic Relief, @garethr, @LordCope, and CloudFoundry at QConLondon 2013

I attended QConLondon 2013 last week; what I took from the first four sessions in the Building for Clouds track was that cloud API and infrastructure automation tools have now solved most of the ‘easy’ cloud problems, but harder challenges (such as automating clusters) remain. The sessions were from Tim Savage (@timjsavage) and Zenon Hannick (@zenonhannick) on Comic Relief’s unique challenges with performance testing, Gareth Rushgrove (@garethr) on how to avoid PaaS lock-in, Stephen Nelson-Smith (@LordCope) on how to use Chef to give you ‘optionality’ with different cloud vendors, and Andrew Crump (@acrmp) and Chris Hedley (@ChristHedley) on the CloudFoundry cloud platform.

Continue reading

Tune logging levels in Production without recompiling code

This article first appeared in Software Development Practice, Issue 1, published by IAP (ISSN 2050-1455).

Abstract

When raising log events in code it can be difficult to choose a severity level (such as Error, Warning, etc.) which will be appropriate for Production; moreover, the severity of an event type may need to be changed after the application has been deployed based on experience of running the application. Different environments (Development (Dev), User Acceptance Testing (UAT), Non-Functional Testing (NFT), Production, etc.) may also require different severity levels for testing purposes. We do not want to recompile an application just to change log severity levels; therefore, the severity level of all events should be configurable for each application or component, and be decoupled from event-raising code, allowing us to tune the severity without recompiling the code.

A simple way to achieve this power and flexibility is to define a set of known event IDs using a sparse enumeration (an enum in C#, Java, and C++), combined with event-ID-to-severity mappings held in application configuration, allowing each event to be logged at its configured severity, and the severity to be changed easily after deployment.
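
A minimal C# sketch of this approach is shown below. The event names, the ConfiguredSeverityLogger class, and the hard-coded mapping are hypothetical illustrations, not code from the article: a sparse enum defines stable event IDs, and an ID-to-severity mapping (in practice loaded from application configuration) decides the severity at logging time, so severities can be tuned per environment without recompiling.

```csharp
using System;
using System.Collections.Generic;

// Sparse, stable event IDs: the gaps leave room for future events in each range.
// These names and numbers are illustrative only.
public enum LogEventId
{
    CacheServiceStarted = 1000,
    PageCachePurged = 1300,
    DatabaseConnectionTimeout = 5100
}

public enum Severity { Debug, Info, Warning, Error }

public class ConfiguredSeverityLogger
{
    private readonly IDictionary<LogEventId, Severity> _severityMap;
    private readonly Severity _defaultSeverity;

    // In a real application the map would be read from App.config or a settings
    // file at startup (or on change), rather than passed in by hand.
    public ConfiguredSeverityLogger(IDictionary<LogEventId, Severity> severityMap,
                                    Severity defaultSeverity = Severity.Warning)
    {
        _severityMap = severityMap;
        _defaultSeverity = defaultSeverity;
    }

    public void Log(LogEventId eventId, string message)
    {
        // The severity comes from configuration, not from the call site,
        // so it can be re-tuned after deployment without recompiling.
        var severity = _severityMap.TryGetValue(eventId, out var configured)
            ? configured
            : _defaultSeverity;

        Console.WriteLine($"{DateTime.UtcNow:o} [{severity}] {(int)eventId} {eventId}: {message}");
    }
}

public static class Example
{
    public static void Main()
    {
        // A mapping as it might look after being loaded from configuration;
        // promoting DatabaseConnectionTimeout to Error in Production needs no rebuild.
        var map = new Dictionary<LogEventId, Severity>
        {
            [LogEventId.CacheServiceStarted] = Severity.Info,
            [LogEventId.DatabaseConnectionTimeout] = Severity.Error
        };

        var logger = new ConfiguredSeverityLogger(map);
        logger.Log(LogEventId.DatabaseConnectionTimeout, "Timed out connecting to the orders database");
    }
}
```

The mapping is built by hand here only to keep the sketch self-contained; the point is that the call site names the event, and the environment's configuration decides how loudly it is reported.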

Continue reading

GOOS at 7digital – Code Shapes, the Purpose of Tests, and Logging Done Well

I recently went to a Devs in the ‘Ditch meetup at 7digital to hear Chris O’Dell (@ChrisAnnODell) explain 7digital’s journey to Continuous Delivery and Steve Freeman (@sf105) speak on GOOS and system testing. We had some useful discussions on dependency injection and how to use logging well, and Steve’s perspectives on ‘code shapes’ and the purpose of tests were revealing.

Continue reading