Tune logging levels in Production without recompiling code

This article first appeared in Software Development Practice, Issue 1, published by IAP (ISSN 2050-1455).


When raising log events in code it can be difficult to choose a severity level (such as Error, Warning, etc.) which will be appropriate for Production; moreover, the severity of an event type may need to be changed after the application has been deployed based on experience of running the application. Different environments (Development (Dev), User Acceptance Testing (UAT), Non-Functional Testing (NFT), Production, etc.) may also require different severity levels for testing purposes. We do not want to recompile an application just to change log severity levels; therefore, the severity level of all events should be configurable for each application or component, and be decoupled from event-raising code, allowing us to tune the severity without recompiling the code.

A simple way to achieve this power and flexibility is to define a set of known event IDs by using a sparse enumeration (enum in C#, Java, and C++), combined with event-ID-to-severity mappings contained in application configuration, allowing the event to be logged with the appropriate configured severity, and for the severity to be changed easily after deployment.
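As a minimal sketch of this approach in Java (all event names, numeric IDs, and severities below are illustrative, and the mapping is hard-coded for brevity where a real application would load it from external configuration at startup):

```java
import java.util.Map;
import java.util.logging.Level;
import java.util.logging.Logger;

public class ConfigurableEvents {

    // Sparse enumeration of known event IDs; the numeric IDs stay stable
    // across releases so configuration can refer to them reliably.
    enum EventId {
        CACHE_MISS(1001),
        DB_TIMEOUT(2001),
        PAYMENT_REJECTED(3001);

        final int id;
        EventId(int id) { this.id = id; }
    }

    // Event-ID-to-severity mapping. In a real application this map would be
    // populated from configuration, so an operator can retune severities per
    // environment (Dev, UAT, NFT, Production) without recompiling the code.
    static final Map<EventId, Level> severity = Map.of(
            EventId.CACHE_MISS, Level.FINE,        // tuned down for Production
            EventId.DB_TIMEOUT, Level.WARNING,
            EventId.PAYMENT_REJECTED, Level.SEVERE);

    static final Logger log = Logger.getLogger("app");

    // Call sites name only the event; the severity comes from the mapping.
    static void raise(EventId event, String message) {
        log.log(severity.getOrDefault(event, Level.INFO),
                "[{0}] {1}", new Object[] { event.id, message });
    }

    public static void main(String[] args) {
        raise(EventId.DB_TIMEOUT, "Orders DB did not respond within 5s");
    }
}
```

Retuning DB_TIMEOUT from Warning to Error (or down to Information) then becomes a configuration change, not a rebuild.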


Continuous Delivery in London – recent events and new Meetup group

The last few months in London have seen a surge in interest in Continuous Delivery by companies wanting to speed up delivery of their web-based software systems. See below for a summary of the events I have been fortunate to be involved with (I know there are many more); if you’re interested in Continuous Delivery and you’re in or close to London, then join the London Continuous Delivery meetup group (#londoncd) and let’s share our experience.

London Continuous Delivery logo (by @DaveNolan)

James Betteley: Continuous Delivery using Maven

James Betteley gave a useful talk at BCS on how he used Maven in a Continuous Delivery context alongside Artifactory, Nexus and Sonar. James blogged about using Maven for Continuous Delivery back in February 2012: well worth a read, even if you’re not using the Java+Maven stack.


In July, Christopher Marsh of AKQA talked about his company’s success with Continuous Delivery on a London-based project for a large client organisation. They used GO from ThoughtWorks Studios to implement their deployment pipeline.


I went to the London offices of 7digital for a Devs In The ‘Ditch session last week, where Chris O’Dell explained how they have moved from painful, irregular releases with tightly-coupled code to frequent small releases and a service-oriented approach. The transformation took two years, and they now restrict deployable units to about a day’s worth of work to make deployment easier. GOOS co-author Steve Freeman also gave a useful talk on full-system testing, which is crucial to get right in a Continuous Delivery context.

ThoughtWorks Studios GO 12.3

The ThoughtWorks Studios product team have changed the pricing model with the 12.3 version of their agile release management tool GO: the free Community edition now has feature parity with the pay-for editions, including previously enterprise-only features such as LDAP and Environments. This means that small teams can make full use of the excellent deployment pipeline features of GO without the price tag. I was always a bit reluctant to recommend GO before now because the free version was feature-limited, but with all features now available in all editions, I have to say that for modelling and implementing deployment pipelines, there is no other tool which comes close to GO.

WebPerfDays EU 2012

I was fortunate to be able to present at WebPerfDays EU 2012 on how build and deployment shapes software architecture at thetrainline.com [slides] along with Andie and Oddur from CCPGames. Three of the many really excellent discussions that came up were:

  1. Why you should design your pipelines up front [more on this from me soon…]
  2. How to get real ownership of software (e.g. service/product teams, devs on call, etc.)
  3. Jenkins vs TravisCI vs TW GO for deployment pipeline automation

Slides: How build and deployment shapes software architecture at thetrainline.com

In the end, we had to be ‘evicted’ from the room; we could have gone on discussing for another hour! Apparently, one major UK publisher had nearly 10 staff in the session, and rated it the best session at WebPerfDays. It was so great to be among such brilliant minds and conversations, which led me to…

A London Continuous Delivery meetup group

Based on conversations and discussions at WebPerfDays, it was clear that a London-based meetup group centered on Continuous Delivery would be interesting for quite a few people and organisations.

A few of us agreed to get things off the ground, and we’re now on Meetup.com at London Continuous Delivery (http://www.meetup.com/London-Continuous-Delivery/) and on Twitter with #londoncd. Any help, donations, perks, etc. are very welcome.

Moving a News Website to a Different Content Management System

How does a small but nationally visible non-profit organisation go about moving their news-focused website to a different content management system? A good friend of mine works for a non-profit news organisation and asked me this question recently, so I put together some very brief notes based on some of the website migrations I have done over the last few years for similar organisations (a charity, an industry trade body, and a specialist news publisher).

moving books

The client being a small non-profit organisation constrains the solution to a license-free (non-commercial) content management system (CMS): at an industry standard of around $20k and upwards, the license costs for commercial CMS products tend to be out of the range of small non-profit budgets.

There are therefore five key things to consider for migrating an existing news-focused website for a non-profit organisation to a new CMS:

  1. What open-source technology should you use?
  2. To what extent should you customise the technology and how?
  3. What is a reasonable cost – one-off and ongoing support?
  4. Who should undertake the implementation?
  5. Where should the site be hosted?

There are plenty of other problems to solve once the initial decisions have been taken, but for a simple, news-focused website, with content only in English, we don’t need to worry about managing translations or dealing with transactional workflows. So what’s next?


Merge tracking with Subversion 1.6

I am now running Subversion 1.6 for my client’s SVN repositories. I upgraded mainly to take advantage of the merge tracking introduced in Subversion 1.5 and improved in 1.6. In particular, Subversion now creates its own “mergeinfo” entries, so you no longer have to use the svnmerge.py script. Subversion 1.6 also has better detection of “tree conflicts” – essentially, problems in the local working copy caused by renames, missing files, and so on.

I used the new release of VisualSVN Server to install and upgrade SVN in a painless way (see earlier post on VisualSVN Server). The new version installs over the top of the previous one, so back up your SVN repositories first.

An important part of the Subversion philosophy is “don’t break things”. Even though VisualSVN Server 2.0 runs Subversion 1.6, the underlying repository format is not upgraded automatically, meaning the new merge tracking feature is not yet available. To enable it, we simply need to run “svnadmin upgrade PATH”, like this:

> svnadmin upgrade D:\Data\Svn\DevDoctor
Repository lock acquired.
Please wait; upgrading the repository may take some time...

Upgrade completed.

Following a working copy svn update, you can run svn merge. In this case, I used TortoiseSVN’s merge wizard.

We were merging from the development branch, so I selected “Reintegrate a branch”. Once the correct settings have been chosen, you can even run a “Test merge”, which gives you a report of what would happen if you went ahead with the merge. When the changes are successfully merged, TortoiseSVN shows the results.
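For reference, the same reintegrate merge can be run from the command line, from an up-to-date working copy of trunk (the branch path here is illustrative; ^ is SVN 1.6 shorthand for the repository root URL):

```
> svn merge --reintegrate ^/branches/dev
```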

As normal, the merge needs to be committed, but the crucial difference is that after the commit succeeds, Subversion itself has tracked the merge using the mergeinfo property: an svn:mergeinfo property is set on the folder, recording the branch from which the merges were done and the revision numbers (here, 400-409).
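The same tracking data can be inspected from the command line with svn propget, run at the root of the working copy (output illustrative):

```
> svn propget svn:mergeinfo .
/branches/dev:400-409
```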

All this means that merging (both from branches to trunk, and from trunk to branches) is much less tricky and error-prone than before (with Subversion 1.4). Subversion 1.6 is also noticeably quicker than 1.4 for all operations so far.

Further reading: http://scm.jadeferret.com/subversion-16-new-features-explained/

Oracle on Windows Seminar

I recently attended an “Oracle on Windows” seminar in London, organised by Oracle and sponsored by Quantix.

There were four speakers:

  1. Mark Whitehorn – independent DB consultant
  2. Julian Boneham – Quantix
  3. Paul Brankin – Oracle
  4. Jules Lane – Oracle

The talk from Quantix was unfeasibly dull; full of sales-speak (“…all vertical markets…”) and delivered by a bloke who clearly wasn’t interested.

Paul Brankin talked about data silos: how this architecture arises (it’s simple to manage initially), how it becomes limiting or fragmenting in business terms (due to inaccessible data, and under- or over-utilization of hardware resources dedicated to a single silo), and how Oracle’s Real Application Clusters (RAC) can help solve the problem. Specifically, Oracle has an Active-Active database failover solution, whereas Microsoft’s SQL Server has only Active-Passive (in the form of mirroring).

Jules Lane gave an overview of the Oracle middleware application stack. Interestingly, he said “…we are not really expecting people to write Java code any more…”, expecting people instead to rely on component configuration and code-generation tools alone when building middleware applications. The Oracle BPEL Process Manager is akin to BizTalk as a business process orchestrator, although it seems more advanced. There was no mention of WF/WCF in Jules’s talk, though of course this technology is still fairly new. Also interesting was the Oracle Web Services Manager, which allows policy-based access control to web services, including ASP.NET web services.

In general, the talks by Oracle and Quantix were disappointing; they were generally too sales-focussed, and their “Oracle on Windows” pitch was somewhat embarrassed, as if Windows were something they only supported grudgingly. Far more engaging was the first session, by the independent consultant Mark Whitehorn.

Mark – much to the later ire of the Oracle speakers – said categorically that all three major database engines (DB2, Oracle and SQL Server) are extremely competent, and that debates about their relative merits are pretty arcane and irrelevant. He noted that many people choose databases on religious grounds, proffering out-of-date evidence for why one engine is superior to another (e.g. “it’s a poor man’s Sybase fork” or “it cannot even row-lock”).

He stressed the need to look at the other kinds of tools and features available for these engines as more important reasons to choose one over the other:

  • Analysis (e.g. Business Intelligence [BI] tools)
  • Middleware connectivity
  • Server/Database Management

Mark then went on to give some lucid examples of real-world BI analysis (on 150-year-old plant specimen records collected by Charles Darwin!) to demonstrate how useful this kind of analysis can be in combination with human domain experts.

He finished by commenting on the new spatial data types offered by the three database engines, and how some amazing results can be had using “mashups” (think Google Maps).

Mark was an extremely entertaining speaker, who clearly is an outstanding specialist in his field, and it was a pleasure to listen to what he had to say. By contrast, the other speakers seemed rather awkward and apologetic! It was clear from this seminar that Oracle is still well-placed for extremely high-end database applications, but similar resiliency CAN be implemented using SQL Server, at a reduced cost. All three vendors, but especially Oracle and Microsoft, are increasingly competing head to head for the Enterprise AND medium-size database markets, and this trend is set to continue for the next five years at least.