Cloud Deployments – Alex Papadimoulis at QConLondon 2013

Alex Papadimoulis (@apapadimoulis) of Inedo (and TheDailyWTF) recently gave a really useful talk on deployments for cloud-based software systems at QConLondon 2013 [slides, PDF, 1.6MB].

He stressed the importance of finding the appropriate deployment (distribution + delivery) model for each application, and of keeping deployments as simple as possible. In fact, we can take the best practices from Continuous Integration and apply them to deployment.

Continue reading Cloud Deployments – Alex Papadimoulis at QConLondon 2013

What Makes an Effective Build and Deployment Radiator Screen?

Build screens (or build monitors, or information radiators) are an important tool for achieving Continuous Integration and trapping errors early. When the number of build jobs becomes large, it can be tempting to hide ‘successful’ jobs to save space, but we found that this caused problems. I realised that people need to know the context for the red jobs if they are to take prompt action to fix failing builds, so it’s important to represent the full state of all builds by showing the green jobs too.
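
As a rough illustration (not from the original post), the sketch below polls a Jenkins-style JSON API and prints every job, green ones included, so the radiator always shows the full picture. The server URL and the use of Jenkins’s job “color” field are assumptions.

import json
import urllib.request

JENKINS_URL = "http://ci.example.com"  # assumed CI server address

def fetch_jobs():
    # Jenkins exposes job names and status colours via its JSON API.
    url = JENKINS_URL + "/api/json?tree=jobs[name,color]"
    with urllib.request.urlopen(url) as response:
        return json.load(response)["jobs"]

def render(jobs):
    # Show every job, not just the failing ones, so red builds have context.
    for job in jobs:
        status = "FAILING" if job["color"].startswith("red") else "ok"
        print(f"{job['name']:40s} {status}")

if __name__ == "__main__":
    render(fetch_jobs())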

Continue reading What Makes an Effective Build and Deployment Radiator Screen?

Speed up Web Applications with SSL Offloading

Web sites and web applications are increasingly using secure connections (HTTPS) for all traffic, not just obviously sensitive data, as a way to guard against security threats. However, HTTPS requires encryption/decryption of data, which is computationally intensive. Web applications can therefore benefit from “offloading” the encryption/decryption processing required for HTTPS to specialised hardware devices.
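
By way of illustration (not from the original post), here is a minimal sketch of TLS termination in software: the proxy accepts HTTPS connections, does the encryption/decryption itself, and forwards plain unencrypted traffic to a backend application server. The certificate paths, ports and backend address are placeholders; in production this role is usually played by a dedicated appliance or load balancer rather than a Python script.

import asyncio
import ssl

CERT_FILE = "server.crt"   # placeholder certificate
KEY_FILE = "server.key"    # placeholder private key
BACKEND_HOST, BACKEND_PORT = "127.0.0.1", 8080  # plain-HTTP app server
LISTEN_PORT = 443

async def pipe(reader, writer):
    # Copy bytes from one stream to the other until EOF.
    try:
        while data := await reader.read(65536):
            writer.write(data)
            await writer.drain()
    finally:
        writer.close()

async def handle_client(client_reader, client_writer):
    # TLS has already been terminated by the listening socket below, so the
    # backend connection is plain TCP: the app server never pays the
    # encryption cost.
    backend_reader, backend_writer = await asyncio.open_connection(
        BACKEND_HOST, BACKEND_PORT)
    await asyncio.gather(
        pipe(client_reader, backend_writer),
        pipe(backend_reader, client_writer))

async def main():
    tls = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    tls.load_cert_chain(CERT_FILE, KEY_FILE)
    server = await asyncio.start_server(
        handle_client, "0.0.0.0", LISTEN_PORT, ssl=tls)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())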

Continue reading Speed up Web Applications with SSL Offloading

Controlling optimization and debug info in Release builds in .NET applications

Here is some interesting information on controlling optimization and debug info for deployed .NET applications:

…the little-known, little-used [.NET Framework Debugging Control] section of a .INI file. These settings help guide and control the JIT. From MSDN:

This JIT configuration has two aspects:

  • You can request the JIT-compiler to generate tracking information. This makes it possible for the debugger to match up a chain of MSIL with its machine code counterpart, and to track where local variables and function arguments are stored.
  • You can request the JIT-compiler to not optimize the resulting machine code.

So Mark suggested this (emphasis mine):

You can have the best of both worlds with a rather neat trick. The major differences between the default debug build and default release build are that when doing a default release build, optimization is turned on and debug symbols are not emitted. So:

    • Step 1: Change your release config to emit debug symbols. This has virtually no effect on the performance of your app, and is very useful if (when?) you need to debug a release build of your app.
    • Step 2: Compile using your new release build config, i.e. *with* debug symbols and *with* optimization. Note that 99% of code optimization is done by the JIT compiler, not the language compiler, so read on…
    • Step 3: Create a text file in your app’s folder called xxxx.exe.ini (or dll or whatever), where xxxx is the name of your executable. This text file should initially look like:


[.NET Framework Debugging Control]
GenerateTrackingInfo=0
AllowOptimize=1

    • Step 4: With these settings, your app runs at full speed. When you want to debug your app by turning on debug tracking and possibly turning off (CIL) code optimization, just use the following settings:


[.NET Framework Debugging Control]
GenerateTrackingInfo=1
AllowOptimize=0

[http://www.hanselman.com/blog/DebugVsReleaseTheBestOfBothWorlds.aspx]
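
To make switching between the two modes less fiddly, a small helper script can write the xxxx.exe.ini file for you. This is only a convenience sketch, not part of the quoted post; the values are exactly those from the two configurations shown above.

import sys
from pathlib import Path

# Values copied from the [.NET Framework Debugging Control] examples above:
# "release" runs at full speed, "debug" turns on JIT tracking and turns off
# optimization.
MODES = {
    "release": {"GenerateTrackingInfo": 0, "AllowOptimize": 1},
    "debug": {"GenerateTrackingInfo": 1, "AllowOptimize": 0},
}

def write_debug_control(exe_path, mode):
    # The .ini file sits next to the executable and is named xxxx.exe.ini.
    exe = Path(exe_path)
    ini_path = exe.parent / (exe.name + ".ini")
    lines = ["[.NET Framework Debugging Control]"]
    lines += [f"{key}={value}" for key, value in MODES[mode].items()]
    ini_path.write_text("\n".join(lines) + "\n")
    return ini_path

if __name__ == "__main__":
    # Usage: python set_debug_control.py MyApp.exe debug
    print("Wrote", write_debug_control(sys.argv[1], sys.argv[2]))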