January 7, 2017

Continuous Integration and Deployment Basics for .NET Developers - Part 4


There are a number of strategies for versioning your .NET application, but your choice basically comes down to one of two options - do you want MSBuild to manage the version number, or do you want your build server to have input into the version number assigned to your binaries?

Managed by MSBuild

This is by far the easiest approach: all you have to do is modify a single line in the AssemblyInfo.cs file, so that

[assembly: AssemblyVersion("1.0.0.0")]

is changed to read

[assembly: AssemblyVersion("1.0.*")]

At build time, MSBuild will automatically assign the build and revision numbers, and in many cases this is good enough.
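Incidentally, the auto-generated numbers aren't random: with a wildcard version, the compiler sets the build number to the number of days since January 1, 2000 and the revision to half the number of seconds since local midnight. A quick sketch of that derivation (in Python, purely for illustration):

```python
from datetime import datetime

def auto_version(now):
    """Mimic the build/revision numbers the C# compiler generates
    for an AssemblyVersion of "1.0.*"."""
    # Build number: days elapsed since January 1, 2000
    build = (now.date() - datetime(2000, 1, 1).date()).days
    # Revision: seconds since local midnight, divided by 2
    midnight = now.replace(hour=0, minute=0, second=0, microsecond=0)
    revision = int((now - midnight).total_seconds() // 2)
    return "1.0.{0}.{1}".format(build, revision)

print(auto_version(datetime(2017, 1, 7, 12, 0, 0)))  # -> 1.0.6216.21600
```

This is why two wildcard builds on the same day always get increasing revision numbers - which is the only thing that makes the scheme workable at all.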

This has its advantages and disadvantages:
  • The major advantage is that it's simple and neat and doesn't require much effort.
  • The major disadvantage is that there's no control over the version number that is assigned and traceability becomes more complicated - your QA team are going to be your most likely candidates for complaint if you use this approach.
This approach is documented fairly well; a number of StackOverflow posts cover the technique comprehensively.

If you don't require the ability to track a binary's authenticity through your system, then quite probably this is the approach you will want to take. It's minimal overhead to make it happen and will give you versioned binaries.

If you want to manage your version numbering from a centralized location, you can add a GlobalAssemblyInfo.cs file - really just an AssemblyInfo.cs - to your master project. It can be identical to the AssemblyInfo.cs, so just rename that file. Then go to each of your other projects and remove the AssemblyInfo.cs files those projects contain.
  • Right click the project
  • Add existing item
  • Find your GlobalAssemblyInfo.cs
  • Instead of clicking Add, click the down arrow next to it - choose Add As Link
  • Move the file to the Properties folder
Now all of your projects within your solution will have a globally managed version format. This isn't to be confused with the version number itself, which may vary from build to build.
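As an illustration, the shared file might contain little more than the attributes you want applied solution-wide (the company and product names here are placeholders):

```csharp
// GlobalAssemblyInfo.cs - linked into every project in the solution
using System.Reflection;

[assembly: AssemblyCompany("Example Co")]
[assembly: AssemblyProduct("Example Product")]
[assembly: AssemblyVersion("1.0.*")]
[assembly: AssemblyFileVersion("1.0.0.0")]
```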

Managed by the Build Server

An alternative is to have your build server handle versioning at compile time, applying to each build artifact a semantic version that meets industry recommendations such as those described at http://www.SemVer.org. To do this, we need a pre-build script that the build server can run to set the version code in each of the AssemblyInfo.cs files.

I tend to prefer to be able to track a binary back to a specific build in my build system, so I have a PowerShell script that I use to parse out the original number and replace it with one generated by my build system. The basic gist of the script follows.

The concept here is to just wrap it in a function and pass in the build version number as a parameter. I've made some cosmetic adjustments as necessary, but basically, given a path to an AssemblyInfo.cs, this script will parse out the AssemblyVersion and replace it with the version info you specify in the input arguments.

function Set-BuildVersion {
    param(
        [Parameter(Mandatory, ValueFromPipeline, ValueFromPipelineByPropertyName)]
        [Alias('FullName')]
        [string] $AssemblyInfoPath,
        $VersionMajor,
        $VersionMinor,
        $VersionBuild,
        $VersionMeta
    )
    process {
        $pattern = '\[assembly: AssemblyVersion\("(?<Major>\d+)\.(?<Minor>\d+)\.(?<Build>(?:\*|\d+))(?:\.(?<Revision>\d+))?"\)\]'
        (Get-Content $AssemblyInfoPath) | ForEach-Object {
            if ($_ -match $pattern) {
                # We have found the matching line.
                # Edit the version number and put it back.
                $fileVersion = $matches
                $major = $fileVersion.Major
                if ($VersionMajor -ne $null) { $major = $VersionMajor }
                $minor = $fileVersion.Minor
                if ($VersionMinor -ne $null) { $minor = $VersionMinor }
                $build = $fileVersion.Build
                if ($VersionBuild -ne $null) { $build = $VersionBuild }
                $newVersion = "{0}.{1}.{2}" -f $major, $minor, $build
                if ($VersionMeta -ne $null) { $newVersion = "{0}-{1}" -f $newVersion, $VersionMeta }
                '[assembly: AssemblyVersion("{0}")]' -f $newVersion
            } else {
                # Output the line as-is
                $_
            }
        } | Set-Content $AssemblyInfoPath
    }
}

Get-ChildItem -Path . -Filter AssemblyInfo.cs -Recurse | Set-BuildVersion -VersionBuild $buildVersion

This script should be run in a Pre-Build step triggered by the build server. Because of the nature of versioning, it's something that only needs to happen at build time on the build server when producing artifacts for release. It's not something that necessarily needs to run as part of a local build if you're compiling to run on your local machine for debug purposes. Given that it's a generic script, it can easily be added to a PSGet repository and pulled down by the build server at build time to generate version numbers for your binaries. Its lifecycle can be handled exactly as you would handle publishing any other library.

If you wished to run this as a prebuild step in your local builds, you'd need to add a PowerShell step to your project files that executed the PowerShell script and allowed properties to be either defaulted or passed from the commandline when you execute MSBuild.
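For example, a hook along these lines could be added to a .csproj file - the script path and the $(BuildVersion) property are assumptions to adapt to your own layout:

```xml
<Target Name="StampVersion" BeforeTargets="BeforeBuild"
        Condition=" '$(BuildVersion)' != '' ">
  <!-- Only stamps when the build server passes /p:BuildVersion=n to MSBuild -->
  <Exec Command="powershell.exe -NoProfile -ExecutionPolicy Bypass -File &quot;$(SolutionDir)build\Set-BuildVersion.ps1&quot; -VersionBuild $(BuildVersion)" />
</Target>
```

Local builds skip the target because the property is empty by default; the build server supplies it on the MSBuild commandline.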

More to come...

Continuous Integration and Deployment Basics for .NET Developers - Part 3

Dependency Resolution

The first thing to address is that you don't really want external dependencies stored in your source control system if you can help it. It's just extra clutter. Ideally, you just want your code in there where possible.

External dependencies are better satisfied at build time - especially if they can be obtained in some stable fashion from some other dependency management system such as NuGet. If you have external dependencies that aren't NuGet packages, the best options at this time appear to be:
  • Package them into a NuGet package and store them in a local instance of a NuGet server so that the build server can pull them down and include them in the build (more ideal).
  • Add them to a folder in your solution and store them in source control (less ideal).
There are scenarios where storing dependencies in source control makes sense. These decisions should be considered carefully on a case-by-case basis.

I tend to prefer NuGet - or, more specifically, at this time I'm quite enjoying Inedo's ProGet. You can make your own choice; there are a number of ways of satisfying NuGet dependencies at build time. I like ProGet because its interface is simple and intuitive, it allows me centralized management of multiple different types of feed all in one place, and it ties into Active Directory nicely for authentication and authorization.

Presently the NuGet team has decreed that dependencies shall be satisfied outside of compilation and be handled by the NuGet.exe commandline tool. Prior to this, NuGet added build targets and binaries to a .nuget folder within your solution. Both approaches still work at the time of writing. Obviously, having the targets and binaries in .nuget folders in your source control system means that every project you have in source control carries these binaries and target files... so just clutter, really.

I prefer to not add NuGet to my application stack and keep my codebase as pure as possible. I download the NuGet commandline tool to a folder on my build server and add the folder to my path to reference it. I can then handle dependency resolution for any project by running the commandline tool as a prebuild step. This means that my build server doesn't end up with 100 instances of the NuGet commandline tool floating around in various project directories, it doesn't end up checked into my source control system for every project that requires it and developers don't even need it on their machines because Visual Studio handles NuGet dependency resolution quite nicely. In my mind, this is the most efficient approach.
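Concretely, the pre-build step amounts to a single command (the solution name is a placeholder, and the feed URL would point at your own internal NuGet or ProGet server):

```shell
# Restore all package dependencies for the solution before MSBuild runs.
# -NonInteractive stops any credential prompt from hanging the build.
nuget restore MySolution.sln -NonInteractive -Source http://proget.local/nuget/Default
```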
  • NuGet commandline tool: https://dist.nuget.org/index.html
  • Inedo ProGet: https://inedo.com/proget/download
That about covers dependency resolution.

Continue to Part 4 - Versioning

January 6, 2017

Continuous Integration and Deployment Basics for .NET Developers - Part 2

Okay, so as a .NET developer, the first thing I didn't understand was the major components of a deployment of a .NET application, some of which you will already be familiar with in passing - such as dependency resolution. If you've already used NuGet, you'll be somewhat familiar with this. You may be familiar with Build in some sense; we'll go into more detail, as just understanding how to hit F5 to compile your application is only the very topmost tip of the iceberg. You're likely also familiar with unit testing. So here are the major pieces as I've come to know and understand them:
  1. Dependency Resolution - Obviously our application has prerequisites and libraries it needs to compile. These dependencies are usually satisfied by Visual Studio when you compile, but our build server needs to resolve these dependencies ahead of attempting to compile our code as it doesn't use Visual Studio to do the build.
  2. Version Numbering - Our assemblies will need a unique version number assigned prior to each build. We need to be able to trace which assemblies have been tested and signed off for release. We can't do that without assemblies having unique version numbers assigned.
  3. Build - Our code obviously needs to compile. Without compiled code, we've got nothing to deploy. This will include applying version numbers, unit testing and some limited integration testing. It will be handy if we get to understand the content of our project files which contain the file references, targets and property definitions required to load and compile our application in the correct order.
  4. Testing - Now that we've built our code, we need to test the assemblies for correctness by running our unit and integration tests. Because the application hasn't yet been deployed, any tests that run at this stage cannot require access to infrastructure.
  5. Configuration Transform - Configurations are different for each deployment environment, transforms are run to turn configuration files into templated files that can have environment specific values applied at deployment time.
  6. Packaging - Now our code is built and tested, it needs to be packaged for deployment. This packaged artifact is immutable, it contains the exact binaries that were built and unit tested in step 3, no exceptions.
  7. Deployment - The artifacts get deployed to your test environment where your automated test suite will run the functional tests to prove that the feature changes you made do so successfully and without any regression. Upon success, they may be further deployed to subsequent environments including staging and production.
  8. Functional Testing - Some testing requires infrastructure, and so it can't be completed until the application has been deployed to an environment. So now we're deployed to our target environment, we can run our functional tests. Obviously these will need to be packaged up and deployed and run on a server in our target environment.
Although I understood some of these pieces prior to my foray into the DevOps world, it turns out that my understanding as a developer didn't really cut it when it came to deployment. There are caveats to various pieces that, as a developer, I never really had to pay attention to - for instance:
  • Versioning using the 1.0.* that we've all come across just doesn't quite cut it in most deployment environments. If you need to be able to track versions back to a release on your deployment server or a specific build on your build server, your version numbering needs to incorporate these facts.
  • Build - well, there is so much more to building your application than just hitting F5 and hoping it compiles. Your project files are highly configurable if you take the time to grok them.
  • Integration Testing can only run on the build server if it has no infrastructure requirements, thus any integration tests must have their own mock repositories or they will only be able to be run post-deployment in a target environment.
  • Configuration Transforms aren't to populate settings with environment specific values. They're to sanitize your configurations so that your deployment system can hydrate the settings with environment specific values - many of which for security purposes will never be visible to developers.
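To make that last point concrete, here's a tiny sketch (Python, purely illustrative) of deployment-time hydration using Octopus-style #{Variable} tokens - the sanitized configuration contains only tokens, and the deployment system holds the environment-specific values:

```python
import re

def hydrate(template, env_values):
    """Replace #{Name} tokens with environment-specific values
    supplied by the deployment system at deploy time."""
    return re.sub(r"#\{(\w+)\}", lambda m: env_values[m.group(1)], template)

template = '<add key="Db" value="#{ConnectionString}" />'
print(hydrate(template, {"ConnectionString": "Server=prod-sql;Database=Shop"}))
# prints: <add key="Db" value="Server=prod-sql;Database=Shop" />
```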
As I continue with future posts, I will go over my notes for each of these pieces and discuss things that need to be considered to get a deployment up and running.

Continue to Part 3 - Dependency Resolution

Continuous Integration and Deployment Basics for .NET Developers - Part 1

If you've tripped over this blog post, you're already likely aware of what continuous integration and deployment is and why it's needed.

A little background...

I'm a .NET developer by trade. I've been developing .NET applications almost as long as .NET has been a thing. I've been tinkering with systems automation in some shape or form for most of my career, but really only stumbled into an official job title by accident when a client asked me to become a player on a DevOps task force for their ebook ecosystem. For the past few gigs, I've been playing around with Continuous Integration (CI) and Continuous Deployment (CD) of scalable applications in .NET environments.

There's a lot I didn't know about DevOps when I got started, and a lot I still don't know. I've tried to keep notes along the way to look back on and remind myself from time to time what I've learned - much of which I sorely wish someone had shown me rather than having to find the information the hard way. I'm going to try to distill my notes down to the important pieces in the hope that they can help other .NET developers cross the chasm and begin to understand the DevOps world.

From the projects I've worked with, it's become evident that there's a lot about continuous integration and deployment that developers are either shielded from or ignorant about - details our projects would benefit from greatly if we made certain deployment considerations at the initial design of our code. These considerations are rarely asked of developers when writing their code; if they were, our builds and deployments would be 100 times easier, and our applications would be more production ready and in many cases more scalable too.

In the next few blog posts I will try to break down each of the pieces necessary to deploy a production-ready application and the considerations required to form a coherent deployment strategy.

I cannot obviously cover every tool you may use in your production environment, so I have picked a couple of easily accessible off the shelf tools that will cover the main paradigms and hopefully help you to bridge the gap between your development knowledge and what is needed for deployment. The paradigms are very similar across most deployment tools, once you understand one, the concepts are reasonably easily transferable to others.

The tools I'm going to be using for this blog series are:
  • Visual Studio 2015 - There's no real dependency on this version, if you've got 2013 or 2010, they should be perfectly adequate to get you through this series. I don't think there's anything inherently 2015 that I depend upon [though perhaps if I do, people can comment].
  • GitLab - Obviously you can use GitHub, Git, TFS, SVN or whichever source control system you enjoy most or are using in your environment. I made the leap to Git from TFS and SVN a number of years back and it's now my source control system of choice. I like GitLab's features for my personal projects. It's freely downloadable from gitlab.com
  • ProGet - ProGet is a commercial version of NuGet available from Inedo.com. You can easily use NuGet in place of this - or even shared folders if you wish. My reasoning for using ProGet is that I enjoy the facility to host multiple managed feeds in a single intuitive professional user interface. You can use the Free edition for your own projects. The basic license is easily capable of supporting many enterprise ready development teams and the licensing cost is very competitive.
  • MSBuild and Jenkins - Jenkins is open source and has relatively comprehensive installers for every major platform, making it easy to follow along regardless of your choice of build and deployment platform. Their plugin ecosystem maintained by a highly engaged community makes this a highly accessible build system. The bulk of your necessary understanding will be less about Jenkins and more about the structure of your project build target files.
  • Octopus Deploy - Octopus Deploy is fast gaining traction as an industry standard for deployment, and with their rapid pace of releases, highly engaged support team and community and competitive licensing model, this is arguably the most accessible deployment system on the market at this time. The community edition is more than adequate for personal projects or proofs of concept. For enterprise systems, most clients I've worked with have yet to stray beyond the $5,000 (at time of writing) Enterprise Edition. The price may seem high until you compare it to the competition, such as IBM's UrbanCode, and find that it's really minuscule. Not that I'm knocking UrbanCode - it's an excellent product and there is much value in it.
In a working production environment, there's a good chance that each of these components will be on separate servers, but they do run perfectly adequately for tinkering around with on a single system.

I want to emphasise that these tools are just that - tools; there's nothing inherently different about them from most of the alternatives. I'd recommend concentrating on the paradigms more so than the tools, because you will likely find that you won't be using this toolset in its entirety at your current or future clients - or perhaps you will, it's not my place to speculate. You may even wish to follow along with the series and use completely different tools - hopefully you will understand the concepts I present well enough to apply them in the tools you're using.

I have separate servers set up for each in a VirtualBox environment. GitLab only runs on Linux, and Jenkins not only seems to play nicer on Linux, but most of the documentation you can find appears to be written for Linux, so it just makes life easier to go with that. There are many tutorials on how to install and configure these if you feel you need to install them to follow along with these posts.

If you prefer not to have a Linux environment set up, GitHub has a version for Windows, or you can use a repository hosted at GitHub.com to save hosting your own at all. The caveat is that your build system needs access to your repository to function, so you can either hook it to your local repository or accept that you will need to be connected to the internet to run your build. For the purposes of stability, I prefer to have my central repository somewhere locally accessible.

If you prefer to run Jenkins on Windows, it works well - I've run it on Windows and it has a Windows installer. Configuration of your builds is virtually identical.

One consideration for how you set up your infrastructure is to ask yourself: if my build server loses connectivity to this resource, how will it impact my ability to build my code? Prime candidates for this question are your source code repository and your dependency repository (i.e. NuGet). I prefer to ensure these are under my control somewhere on my local network, where connectivity isn't ever going to take down my build or deployment.

Continue to Part 2 - Steps to manage your deployment