January 7, 2017

Continuous Integration and Deployment Basics for .NET Developers - Part 4


There are a number of strategies for versioning your .NET application, but your versioning strategy is basically going to come down to one of two options - do you want MSBuild to manage the version number, or do you want your build server to have input into the version number assigned to your binaries?

Managed by MSBuild

This is by far the easiest approach. All you have to do is modify a single line in the AssemblyInfo.cs file so that

[assembly: AssemblyVersion("1.0.0.0")]

is changed to read

[assembly: AssemblyVersion("1.0.*")]

At build time, MSBuild will automatically assign the build and revision numbers, and in many cases this is good enough.
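If you want to confirm what MSBuild generated, you can read the stamped version back out of a compiled assembly from a PowerShell prompt. This is just a quick sketch - the path here is an example, so point it at one of your own binaries:

```powershell
# The path is an example - substitute one of your own compiled assemblies
[System.Reflection.AssemblyName]::GetAssemblyName("C:\build\MyApp\bin\Release\MyApp.dll").Version
```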

This has its advantages and disadvantages:
  • The major advantage is that it's simple and neat and doesn't require much effort.
  • The major disadvantage is that there's no control over the version number that is assigned and traceability becomes more complicated - your QA team are going to be your most likely candidates for complaint if you use this approach.
This approach is documented fairly well, and a number of StackOverflow posts cover the technique comprehensively.


If you don't require the ability to track a binary's authenticity through your system, then quite probably this is the approach you will want to take. It's minimal overhead to make it happen and will give you versioned binaries.

If you want to manage your version numbering from a centralized location, you can add a GlobalAssemblyInfo.cs file - really just a renamed AssemblyInfo.cs - to your master project. It can be identical to the existing AssemblyInfo.cs, so just rename that file. Then go to each of your other projects, remove the AssemblyInfo.cs that those projects contain, and:
  • Right click the project
  • Add existing item
  • Find your GlobalAssemblyInfo.cs
  • Instead of clicking Add, click the down arrow next to it - choose Add As Link
  • Move the file to the Properties folder
Now all of the projects within your solution share a globally managed version format. This isn't to be confused with the version number itself, which may vary from build to build.
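Behind the scenes, the Add As Link step just records a relative path in the project file. In an old-style (pre-SDK) csproj, the linked entry looks something like this - the relative path is an example and depends on where your GlobalAssemblyInfo.cs actually lives:

```xml
<ItemGroup>
  <!-- The file stays in the master project; this project only references it -->
  <Compile Include="..\GlobalAssemblyInfo.cs">
    <Link>Properties\GlobalAssemblyInfo.cs</Link>
  </Compile>
</ItemGroup>
```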

Managed by the Build Server

An alternative is to have your build server handle versioning at compile time, applying a semantic version to each build artifact that meets industry recommendations such as those described at http://www.SemVer.org. To do this, we need a pre-build script that the build server can initiate to set the version in each of the AssemblyInfo.cs files.

I tend to prefer to be able to trace a binary back to a specific build in my build system, so I have a PowerShell script that parses out the original version number and replaces it with one generated by the build system.

The concept is to wrap the parse-and-replace logic in a function and pass the build version number in as a parameter. I've made some cosmetic adjustments as necessary, but basically, given a path to an AssemblyInfo.cs file, this script parses out the AssemblyVersion attribute and changes it to the version you specify in the input arguments.
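To make the transformation concrete, here's a before-and-after of the line the script rewrites (1.2.347 is an invented build-server number):

```csharp
// Before (as committed to source control):
[assembly: AssemblyVersion("1.0.*")]

// After the pre-build step has run:
[assembly: AssemblyVersion("1.2.347")]
```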

function Set-BuildVersion {
    param(
        [Parameter(Mandatory = $true, ValueFromPipeline = $true)]
        $AssemblyInfoPath,
        $VersionMajor,
        $VersionMinor,
        $VersionBuild,
        $VersionMeta
    )
    process {
        $pattern = '\[assembly: AssemblyVersion\("(?<Major>\d+)\.(?<Minor>\d+)\.(?<Build>(?:\*|\d+))(?:\.(?<Revision>\d+))?"\)\]'
        (Get-Content $AssemblyInfoPath) | ForEach-Object {
            if ($_ -match $pattern) {
                # We have found the matching line.
                # Edit the version number and put the line back.
                $fileVersion = $matches
                $major = $fileVersion.Major
                if ($VersionMajor -ne $null) {
                    $major = $VersionMajor
                }
                $minor = $fileVersion.Minor
                if ($VersionMinor -ne $null) {
                    $minor = $VersionMinor
                }
                $build = $fileVersion.Build
                if ($VersionBuild -ne $null) {
                    $build = $VersionBuild
                }
                $newVersion = "{0}.{1}.{2}" -f $major, $minor, $build

                if ($VersionMeta -ne $null) {
                    $newVersion = "{0}-{1}" -f $newVersion, $VersionMeta
                }
                '[assembly: AssemblyVersion("{0}")]' -f $newVersion
            } else {
                # Output the line as is
                $_
            }
        } | Set-Content $AssemblyInfoPath
    }
}

# $buildVersion is expected to be supplied by the build server
Get-ChildItem -Path . -Filter AssemblyInfo.cs -Recurse |
    Select-Object -ExpandProperty FullName |
    Set-BuildVersion -VersionBuild $buildVersion

This script should be run in a Pre-Build step triggered by the build server. Because of the nature of versioning, it's something that only needs to happen at build time on the build server when producing artifacts for release. It's not something that necessarily needs to run as part of a local build if you're compiling to run on your local machine for debug purposes. Given that it's a generic script, it can easily be added to a PSGet repository and pulled down by the build server at build time to generate version numbers for your binaries. Its lifecycle can be handled exactly as you would handle publishing any other library.
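As a sketch of how the build server might trigger it, a Windows batch build step in Jenkins could invoke the script like this - the script filename is an assumption, while %BUILD_NUMBER% is the environment variable Jenkins sets for each build:

```
powershell -NoProfile -ExecutionPolicy Bypass ^
    -File Set-BuildVersion.ps1 -VersionBuild %BUILD_NUMBER%
```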

If you wished to run this as a prebuild step in your local builds, you'd need to add a PowerShell step to your project files that executed the PowerShell script and allowed properties to be either defaulted or passed from the commandline when you execute MSBuild.

More to come...

Continuous Integration and Deployment Basics for .NET Developers - Part 3

Dependency Resolution

The first thing to address is that you don't really want external dependencies stored in your source control system if you can help it. It's just extra clutter. Ideally, you just want your code in there where possible.

External dependencies are better satisfied at build time - especially if they can be obtained in some stable fashion from some other dependency management system such as NuGet. If you have external dependencies that aren't NuGet packages, the best options at this time appear to be:
  • Package them into a NuGet package and store them in a local instance of a NuGet server so that the build server can pull them down and include them in the build (more ideal).
  • Add them to a folder in your solution and store them in source control (less ideal).
There are scenarios where storing dependencies in source control makes sense. These decisions should be considered carefully on a case-by-case basis.
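For the first option, wrapping loose binaries into a NuGet package only needs a small .nuspec file. This is a minimal sketch - the id, version and file paths are all invented:

```xml
<?xml version="1.0"?>
<package>
  <metadata>
    <id>ThirdParty.Native</id>
    <version>1.0.0</version>
    <authors>YourTeam</authors>
    <description>Repackaged third-party binaries for internal builds.</description>
  </metadata>
  <files>
    <!-- Copy the loose binaries into the package's lib folder -->
    <file src="lib\ThirdParty.dll" target="lib\net45" />
  </files>
</package>
```

Running `nuget pack` against this file produces a .nupkg that you can push to your internal feed for the build server to consume.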

I tend to prefer NuGet, or more specifically at this time I'm quite enjoying Inedo's ProGet. You can make your own choice; there are a number of ways of satisfying NuGet dependencies at build time. I like ProGet because its interface is simple and intuitive, it allows me centralized management of multiple different types of feed all in one place, and it ties into Active Directory nicely for authentication and authorization.

Presently the NuGet team has decreed that dependencies shall be satisfied outside of the compilation step and handled by the NuGet.exe commandline tool. Prior to this, NuGet added build targets and binaries to a .nuget folder within your solution. Both approaches still currently work. Obviously, having the targets and binaries in .nuget folders in your source control system means that every project you have in source control carries copies of these binaries and target files... so, just clutter really.

I prefer to not add NuGet to my application stack and keep my codebase as pure as possible. I download the NuGet commandline tool to a folder on my build server and add the folder to my path to reference it. I can then handle dependency resolution for any project by running the commandline tool as a prebuild step. This means that my build server doesn't end up with 100 instances of the NuGet commandline tool floating around in various project directories, it doesn't end up checked into my source control system for every project that requires it and developers don't even need it on their machines because Visual Studio handles NuGet dependency resolution quite nicely. In my mind, this is the most efficient approach.
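As a sketch, the pre-build step is a single command - the solution name and feed URL here are placeholders for your own:

```
nuget restore MySolution.sln -Source http://proget.local/nuget/Default
```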
  • NuGet commandline tool: https://dist.nuget.org/index.html
  • Inedo ProGet: https://inedo.com/proget/download
This about covers dependency resolution.

Continue to Part 4 - Versioning

January 6, 2017

Continuous Integration and Deployment Basics for .NET Developers - Part 2

Okay, so as a .NET developer, the first thing I didn't understand was the set of major components that make up a deployment of a .NET application. Some of them you will already be familiar with in passing, such as dependency resolution - if you've used NuGet, you'll be somewhat familiar with this. You may be familiar with the build in some sense; we'll go into more detail, as knowing how to hit F5 to compile your application is only the very topmost tip of the iceberg. You're likely also familiar with unit testing. Here are the major pieces as I've come to know and understand them:
  1. Dependency Resolution - Obviously our application has prerequisites and libraries it needs to compile. These dependencies are usually satisfied by Visual Studio when you compile, but our build server needs to resolve these dependencies ahead of attempting to compile our code as it doesn't use Visual Studio to do the build.
  2. Version Numbering - Our assemblies will need a unique version number assigned prior to each build. We need to be able to trace which assemblies have been tested and signed off for release. We can't do that without assemblies having unique version numbers assigned.
  3. Build - Our code obviously needs to compile. Without compiled code, we've got nothing to deploy. This will include applying version numbers, unit testing and some limited integration testing. It will be handy if we get to understand the content of our project files which contain the file references, targets and property definitions required to load and compile our application in the correct order.
  4. Testing - Now that we've built our code, we need to test the assemblies for correctness by running our unit and integration tests. Because the application hasn't yet been deployed, any tests that run at this stage cannot require access to infrastructure.
  5. Configuration Transform - Configurations are different for each deployment environment, transforms are run to turn configuration files into templated files that can have environment specific values applied at deployment time.
  6. Packaging - Now our code is built and tested, it needs to be packaged for deployment. This packaged artifact is immutable, it contains the exact binaries that were built and unit tested in step 3, no exceptions.
  7. Deployment - The artifacts get deployed to your test environment where your automated test suite will run the functional tests to prove that the feature changes you made do so successfully and without any regression. Upon success, they may be further deployed to subsequent environments including staging and production.
  8. Functional Testing - Some testing requires infrastructure, and so it can't be completed until the application has been deployed to an environment. So now we're deployed to our target environment, we can run our functional tests. Obviously these will need to be packaged up and deployed and run on a server in our target environment.
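As a sketch of step 5, a Web.Release.config transform can replace a developer connection string with a deployment-time placeholder. The #{...} token syntax here is Octopus-style and the names are invented:

```xml
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <connectionStrings>
    <!-- Swap the dev connection string for a token the deployment system fills in -->
    <add name="MainDb"
         connectionString="#{MainDbConnectionString}"
         xdt:Transform="SetAttributes" xdt:Locator="Match(name)" />
  </connectionStrings>
</configuration>
```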
Although I understood some of these pieces prior to my foray into the DevOps world, it turns out that my understanding as a developer didn't really cut it when it came to deployment. There are caveats to various pieces that, as a developer, I never really had to pay attention to - for instance:
  • Versioning using the 1.0.* that we've all come across just doesn't quite cut it in most deployment environments. If you need to be able to track versions back to a release on your deployment server or a specific build on your build server, your version numbering needs to incorporate these facts.
  • Build - well, there is so much more to building your application than just hitting F5 and hoping it compiles. Your project files are highly configurable if you take the time to grok them.
  • Integration Testing can only run on the build server if it has no infrastructure requirements, thus any integration tests must have their own mock repositories or they will only be able to be run post-deployment in a target environment.
  • Configuration Transforms aren't there to populate settings with environment-specific values. They're there to sanitize your configurations so that your deployment system can hydrate the settings with environment-specific values - many of which, for security purposes, will never be visible to developers.
As I continue with future posts, I will go over my notes for each of these pieces and discuss things that need to be considered to get a deployment up and running.

Continue to Part 3 - Dependency Resolution

Continuous Integration and Deployment Basics for .NET Developers - Part 1

If you've tripped over this blog post, you're already likely aware of what continuous integration and deployment is and why it's needed.

A little background...

I'm a .NET developer by trade. I've been developing .NET applications almost as long as .NET has been a thing. I've been tinkering with systems automation in some shape or form for most of my career, but really only stumbled into an official job title by accident when a client asked me to become a player on a DevOps task force for their ebook ecosystem. For the past few gigs, I've been playing around with Continuous Integration (CI) and Continuous Deployment (CD) of scalable applications in .NET environments.

There's a lot I didn't know about DevOps when I got started, and a lot I still don't know. I've tried to keep notes along the way to look back on and remind myself from time to time what I've learned - much of which I sorely wish someone had shown me, rather than having to find the information the hard way. I'm going to try to distill my notes down to the important pieces in the hope that they can help other .NET developers cross the chasm and begin to understand the DevOps world.

From the projects I've worked with, it's become evident that there's a lot about continuous integration and deployment that developers are either shielded from or ignorant of - details our projects would benefit from greatly if we made certain deployment considerations part of the initial design of our code. These considerations are rarely asked of developers when writing their code; if they were, our builds and deployments would be 100 times easier, our applications would be more production ready, and in many cases more scalable too.

In the next few blog posts I will try to break down each of the pieces necessary to deploy a production-ready application and the considerations required to form a coherent deployment strategy.

I cannot obviously cover every tool you may use in your production environment, so I have picked a couple of easily accessible off the shelf tools that will cover the main paradigms and hopefully help you to bridge the gap between your development knowledge and what is needed for deployment. The paradigms are very similar across most deployment tools, once you understand one, the concepts are reasonably easily transferable to others.

The tools I'm going to be using for this blog series are:
  • Visual Studio 2015 - There's no real dependency on this version; if you've got 2013 or 2010, they should be perfectly adequate to get you through this series. I don't think there's anything inherently 2015-specific that I depend upon [though if there is, people can comment].
  • GitLab - Obviously you can use GitHub, Git, TFS, SVN or whichever source control system you enjoy most or are using in your environment. I made the leap to Git from TFS and SVN a number of years back and it's now my source control system of choice. I like GitLab's features for my personal projects. It's freely downloadable from gitlab.com
  • ProGet - ProGet is a commercial NuGet package server available from Inedo.com. You can easily use plain NuGet in its place - or even shared folders if you wish. My reasoning for using ProGet is that I enjoy the ability to host multiple managed feeds in a single intuitive, professional user interface. You can use the Free edition for your own projects. The basic license is easily capable of supporting many enterprise-ready development teams and the licensing cost is very competitive.
  • MSBuild and Jenkins - Jenkins is open source and has relatively comprehensive installers for every major platform, making it easy to follow along regardless of your choice of build and deployment platform. Their plugin ecosystem maintained by a highly engaged community makes this a highly accessible build system. The bulk of your necessary understanding will be less about Jenkins and more about the structure of your project build target files.
  • Octopus Deploy - Octopus Deploy is fast gaining traction as an industry standard for deployment, and with their rapid pace of releases, highly engaged support team and community, and competitive licensing model, it is arguably the most accessible deployment system on the market at this time. The community edition is more than adequate for personal projects or proofs of concept. For enterprise systems, most clients I've worked with have yet to stray beyond the $5,000 (at time of writing) Enterprise Edition. The price may seem high until you compare it to the competition and find that it's really minuscule... especially when compared to offerings like IBM's UrbanCode. Not that I'm knocking UrbanCode - it's an excellent product and there is much value in it.
In a working production environment, there's a good chance that each of these components will be on separate servers, but they do run perfectly adequately for tinkering around with on a single system.

I want to emphasise that these tools are just that: tools. There's nothing inherently different about them from most of the alternatives. I'd recommend concentrating on the paradigms more so than the tools, because you will likely find that you won't be using this toolset in its entirety at your current or future clients - or perhaps you will; it's not my place to speculate. You may even wish to follow along with this series of blog posts using completely different tools - hopefully you will understand the concepts I present well enough to apply them in the tools you're using.

I have separate servers set up for each in a VirtualBox environment, as GitLab only runs on Linux, and Jenkins not only seems to play nicer on Linux but most of the documentation you can find appears to be written for Linux; it just makes life easier to go with that. There are many tutorials on how to install and configure these if you feel you need to install them to follow along with these posts.

If you prefer not to have to set up a Linux environment, GitHub has a version for Windows, or you can use a repository hosted at GitHub.com to save hosting your own version at all. The caveat is that your build system needs access to your repository to function, so you can either hook it to your local repository or accept that you will need to be connected to the internet to run your build. For the purposes of stability, I prefer to have my central repository somewhere locally accessible.

If you prefer to run Jenkins on Windows, it works well - I've run it on Windows and it has a Windows installer. Configuration of your builds is virtually identical.

One consideration for how you set up your infrastructure is to ask yourself: if my build server loses connectivity to this resource, how will it impact my ability to build my code? Prime candidates for this question are your source code repository and your dependency repository (i.e. NuGet). I prefer to ensure these are under my control, somewhere on my local network, so that connectivity isn't ever going to take down my build or deployment.

Continue to Part 2 - Steps to manage your deployment

October 1, 2013

Home network security [part 1] - for mortals

There's so much to discuss when it comes to security surrounding your home computers and your home network that I can't possibly write it all in a single blog post, or even two, and there is much I need to learn along the way.  There is much that I have spent less time thinking about than I should have and, like you, I probably have many questions that I should have found answers to before now... I think the biggest question on my mind right now is: what can be trusted?

It seems like nothing that we're told can be trusted is as trustworthy as we're led to believe. That said, up until now, we've basically been trusting our emotional bias towards certain brands: Microsoft, Apple, Linksys, Dell, Samsung, iOS, Android et al.

So we need to evaluate a few things...

- What software can I trust?  Is Windows trustworthy?  Is Mac OS X?  Is there any operating system I can trust?
- What hardware can I trust?  Can I even trust the actual computer I'm using right now?  Can I trust my phone?
- What online services can I trust?  Is online backup actually safe, or am I backing up all my local PRIVATE data and entrusting it to servers on the internet hosted by companies that may not deserve that trust?
- What can I do to protect myself?
- Are there any companies that I can truly entrust my data to in any form?

Those questions are actually a lot bigger than they appear at first glance, and each should be given its own level of gravity. I won't cover them in this blog post; instead I'll start by elaborating on the issue that I mentioned in my previous post, the one relating to HTTPS - the thing that says it's safe to enter your personal information on a website.

The technologies that we're led to put our trust in by the media are SSL and TLS - you know, the ones that put the little padlock in your address bar and claim to be secure. I'm going to give you a basic crash course on the infrastructure that holds together our online security. Don't be scared off; I'm going to purposely gloss over the heavy technical information because it only serves to complicate things and won't give you a clear picture of the overall problem.

Let's say you go to your online banking website (just an example - a purchase from Tesco or Wal-mart uses exactly the same technology). The first thing you may notice when logging in is that the address bar may have changed colour depending on the browser you use, a padlock will have appeared somewhere on your screen, and the address will start with HTTPS instead of HTTP. These are the indicators that you're led to believe keep you safe and say that it's fine to shop online using your credit card details... indeed, they're the hallmarks of the security infrastructure that's been set up to protect you. Let's look behind the scenes to get an idea of what's actually going on...

The site you are visiting has acquired what's called a digital certificate that's supposed to verify the authenticity of the computer (the web server) that's sending your computer that web page.  A digital certificate is something that supposedly cannot be forged and is somewhat analogous to your passport or identity card. That is, it's legally binding; it cannot be repudiated. That server is bona fide... allegedly.

Of course, bona fides are only as good as the authority that provides them - pretty much like our passports: if we can't trust the authority that issued the passport, then we can't trust the passport. So, how can you trust the authority?  Well, in a nutshell, because we're told to - does that sound right to you? Me neither.  So what happens in the internet world is what's called a "chain of trust"... this means that the website you're visiting was given its certificate by an authority more trustworthy than it is. Likewise, that authority was provided with its certificate by someone more trustworthy than it, and so on, all the way up to a top-level authority whom we're told to trust just because someone big (say, the government, or Microsoft) says it's okay, you can trust them.

Well, the big top-level authority that a vast number of certificates are provided by is a US company called Verisign. I'm not knocking Verisign, and I'm not setting out to make these guys seem bad. They're providing a service to the best of their ability and, god love 'em, they do it pretty well. The problem is that the system is flawed - not because you can't trust them, but because they're not in a position where you should trust them... here's why...

Verisign is a US company, and consequently bound by US law, which may or may not be comparable to the law of your country of residence. Recently there has been a spate of incidents where it has come to light that US companies have been compelled to violate laws they would otherwise be bound by (with legal impunity) in order for authorities to spy on people around the world, including their own citizens. It would be easy for a bad actor (a bad actor in this sense is anyone who has malicious intent and shouldn't be trusted) to set up a fake website and compel Verisign (or any certificate authority in their chain of trust), through legal means, blackmail or coercion, to provide a certificate of authenticity saying that their website is the real McCoy. They can then redirect your traffic to their web server, which looks exactly like the original and still shows the padlock and the other security cues that tell you this site is safe. For all intents and purposes, it looks exactly like the original. Even if you had the technical ability to pull up the certificate and display it on your screen, you couldn't tell the difference. In fact, there would be very little to give the game away, only subtle clues that most everyday users would never notice... for instance, the IP address of the web server may suddenly appear to be located in a different country than the original - but then again, it may not.

When you enter a website address in your web browser, a few things happen [if you don't know what an IP address is, it's basically the phone number of your computer on the internet].

  1. You connect your computer to a trusted router - probably your home router, but could easily be the Wifi at the office, Starbucks, the airport or some other public network.
  2. You open your web browser and enter a web address in the address bar.
  3. Your computer checks in a local database called a cache to see if it already has an IP address for that website.
  4. If your computer has the IP address in its cache, we jump to step 10
  5. If your computer doesn't have the IP address in its cache, it goes to a list of available DNS servers. DNS stands for Domain Name System; it's basically a phone book used to look up the IP address for the website you entered. This list of DNS servers is usually provided automatically by your Internet Service Provider to your router when it connects to the internet, and when you connect to your wifi, your computer gets the list from the router so it can ask for IP addresses.
  6. Your computer sends the server part of the address - the bit between the https:// and the next /, for instance www.myonlinebank.com or www.walmart.com - to the first DNS server in the list.
  7. The DNS server looks to see if it has the IP address for the server you requested, if it does, it sends your computer back the IP address.
  8. If the DNS server didn't find an IP address, your computer asks the next one in the list until it finds an IP address.
  9. If none of the DNS servers found an IP address your computer receives an "unknown host" response and your web browser displays an ugly message to say it couldn't find what you're looking for and you curse.
  10. If your computer has found an IP address then it sends the address you entered in the web browser to that IP address.
  11. The computer at that IP address sends back a web page signed with a certificate - the one we discussed earlier.
  12. Your web browser checks the certificate to see if it's authentic and activates all the pretty security features on your browser that tells you the page is authentic and secure.

In that process, there are a whole heap of places that can be attacked to get between you and the real server in order to get at your information...

  1. The wifi on your router can be hacked and reprogrammed to maliciously gain access to your local network and steal data directly from your local computers via a number of attacks.
  2. Someone can gain a bad certificate and pretend to be a legitimate website, but this requires redirecting you to their site instead of the original... this can be done by modifying the programming of your router so that all DNS queries are routed to their DNS servers, or by modifying the real DNS servers you use to direct traffic to their website.
  3. Someone can find a way to install a certificate of trust on your computer and use a certificate signed by that certificate of trust so that everything still looks secure in your web browser, even with a certificate that's not been signed by a trustworthy authority.
  4. Someone could compel a trusted authority, such as Verisign to provide them with a "legitimate" certificate for a rogue website to obtain your information.

There is a less broken approach using an infrastructure called "Web of Trust", but honestly, that's not much less broken than the Chain of Trust that web browsers are configured for. I will cover that in another post.

So there you have it, that is why putting your faith blindly in HTTPS is not secure. What can you do to secure yourself against these issues? Am I saying don't use online banking or make purchases online? No, I'm not saying that at all. I am saying, be careful which sites you trust, and don't necessarily trust them just because your browser says to. If you see anything suspicious, like marketing emails from companies you don't normally get email from, emails asking you to log in to update your security information, or emails claiming to be from the security departments of various companies, beware. Chances are you can still make purchases from your usual online stores and chances are, your usual banking website is legitimate. If in doubt, open your web browser manually and type the address into the address bar yourself. Don't Google for your bank website, don't open it from links in an email. Be cautious.

There are some things we should also do to help mitigate certain attack approaches (what we call attack vectors):

  1. Never ever connect to a wifi access point that you can't be sure you can trust. The minute you do, all of the data you transmit can be logged by whoever is in control of that access point.
  2. If you must connect to a public wifi access point, find the provider of the access point and ask them the name of it, don't assume that the ones you see can all be trusted.  It's easy to set up a fake wifi access point in a coffee shop and start harvesting people's data. [Side note, never plug your iPhone into a charger that you can't trust, there are attacks that can install malicious software on your iPhone to harvest your data as well]
  3. Log in to your home router right now and make sure you've changed the following:

  • The router should definitely not have the default network name, so change it from Linksys or DLink or whatever it was when you got it to something else. It shouldn't be anything that can easily be identified by your address, location or person. Someone shouldn't be able to identify and access your network just by knowing you unless you've given them the access information.
  • The router should definitely not have the default password, and in fact, if you have a router that lets you change the default username, change that too.
  • The router should not have remote administration enabled unless you absolutely need it, if it is enabled, don't use the default port for remote administration. Changing it isn't that much of a hindrance because an attacker with a port scanner will still find the open port, but it's one extra step they must take, reducing the chances you'll be attacked by some kid without much of a clue.
  • The router should be using at least WPA2 security. Not that this is foolproof, there are known hacks that can bypass it, but it's much safer than WEP and WPA, which any 15 year old with some readily available software can break. A shared key is also a security risk in itself; if you can configure enterprise grade security on your router, giving each user their own username and password, it's a lot safer. I'll cover configuring better security protocols on your router in a later blog post.
  • Make sure your shared key is strong: longer than 8 characters, preferably a combination of lower and uppercase letters, numbers and other symbols.
  • If you expect to have anyone other than your local network using your wifi - for instance family and friends, set up guest wifi and use wireless isolation to make sure that every computer that is connected to your guest network is isolated in their own space.
  • Configure wireless access MAC authentication. The MAC address is a physical address assigned to the network card installed in your computer. On its own, this is no hindrance to an attacker, as the MAC address can be faked, but in combination with everything else, it's one extra step for someone to bypass.
  • If you can, reduce the wifi signal so that your wifi access point can't be connected to from outside the house. If an attacker can't hear the signal, they can't connect.
  • Find a list of DNS servers that can be trusted and configure your router to use those instead of trusting your ISP to provide them. Ideally they'll be hosted in an independent state that is known for strict privacy laws - such as Iceland or Switzerland.
  • The clock in your router is used to synchronize features that may include some security level features. If you have the ability to configure an NTP server, make sure you configure an NTP server you trust - for instance ch.pool.ntp.org in Switzerland [don't take my word for this].
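On the subject of strong shared keys: rather than inventing a password yourself, you can generate one programmatically from a cryptographically secure random source. Here's a minimal Python sketch using only the standard library; the 20-character default and the symbol set are my own choices, not any standard, so adjust them to whatever your router accepts.

```python
import secrets
import string

def generate_psk(length=20):
    """Generate a random WPA2 pre-shared key using a CSPRNG.

    Mixes lower and uppercase letters, digits and symbols, comfortably
    past the 8-character WPA2 minimum (the maximum for a passphrase is
    63 characters).
    """
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*()-_=+"
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_psk())
```

The `secrets` module exists precisely for this kind of job; the everyday `random` module is not safe for generating passwords because its output is predictable.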

That should keep you relatively safe for the moment... we'll cover WPA2 Enterprise security and how you can get that installed on your home router for better security in the next post.

As always, anyone that has further information that would be helpful in addition to this post, please post in the comments. I look forward to hearing from you.

September 27, 2013

An exploration of computer (in)security - for mortals...

Well it's been a while since my last blog post as I haven't really felt like I've had much of any relevance to contribute to the world. Lately I've been feeling a pull to contribute more and have been mulling over issues thrown into sharp relief by the revelations of the Edward Snowden saga.

By now, I'm not sure there are too many people in the world who haven't heard of this guy or what he has brought to light. For those who have no idea what I'm talking about or why you should care, here it is in a nutshell:

Edward Snowden was a member of staff working on behalf of a private US company contracted to the National Security Agency as a systems administrator. The NSA is America's version of GCHQ, the communications gathering centre behind MI5/MI6. Earlier this year, after flying to Hong Kong, he leaked a huge trove of information to Glenn Greenwald, a reporter for the UK newspaper The Guardian, documenting gob-smacking NSA abuses of the US constitution with huge global ramifications for all forms of communication. After leaking this information he fled to Russia, allegedly en route to Bolivia or Guatemala to escape the long arm of the CIA. The US revoked his passport mid-flight, leaving him trapped for a few weeks in the transit zone of Moscow's Sheremetyevo airport. After Bolivia granted Snowden political asylum, the US violated international law by grounding the Bolivian presidential plane in Spain on suspicion that it was being used to smuggle Snowden out of Russia, something South American governments are all still furious about. The Russians eventually granted him political asylum, and he was allowed to leave the airport transit zone and remain in Russia for a year while he finds alternative means to drop off the grid and escape the CIA for good.

The information released by the Guardian sent the head of the NSA, Keith B. Alexander, to court to account for the actions of his office, where he committed perjury by denying having spied on the American people; evidence later proved this was a lie. During the course of all this, it came to light that a secret court had been granting secret decisions on secret law enforcement requests for information on people. The Foreign Intelligence Surveillance Court (FISC for short) has been secretly approving all kinds of illegal surveillance activities on the grounds that because the court provides legal oversight, the surveillance activities aren't illegal, except that the court wasn't democratically elected (on the grounds that it was secret) and doesn't appear to answer to anyone. The outcome of these decisions is that companies are forced to provide access to any data the US government says it wants, and the company is gagged from discussing it under penalty of... who knows what the penalty is for breaking a gag order. Prison, I guess, or the use of whatever trove of information the NSA has against you to discredit you, close down your company and destroy the remainder of whatever life and freedom you thought you had.

It turns out that the NSA is basically recording every internet transmission, including those of both the American people and everyone else around the world, and has the ability to read basically anything deemed "secure" by current internet security protocols. The technologies we are all sold as keeping our vital personal information secure from prying eyes, the technology that puts the little padlock in your browser address bar that you're told to trust as the gold standard for your internet safety, labelled SSL or TLS and other three letter acronyms designed to make your eyes glaze over, can easily be bypassed by the US government (I'll explain the details of that revelation in a future post).
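To make that padlock less mysterious: all it means is that your browser validated the site's certificate against a list of trusted authorities. You can perform the same check yourself. Here's a small Python sketch using the standard library; the hostname is just an example, substitute any site you like.

```python
import socket
import ssl

def get_certificate_info(hostname, port=443):
    """Connect to a TLS server and return its validated certificate.

    This performs the same chain validation a browser does behind the
    padlock icon: if the certificate can't be traced back to a trusted
    root certificate authority, the handshake raises ssl.SSLError.
    """
    context = ssl.create_default_context()  # loads the system's trusted CAs
    with socket.create_connection((hostname, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=hostname) as tls:
            return tls.getpeercert()
```

Calling `get_certificate_info("www.example.org")` returns the subject, issuer and expiry date your browser checked for you. Notice that the whole scheme rests on trusting those root authorities; that trust is exactly what the revelations above call into question.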

So that's the situation in a nutshell - everything you're told is "safe and secure" on the internet should be questioned - everything!  Just about everything the public is taught about internet security by the media is at best inaccurate and at worst, a lie. Every transmission is recorded, most things can be read easily and most of what's left can be decoded with a little effort. Nothing is as cut and dry as you're led to believe, nothing you do online is as safe or secure as you're led to believe and probably take for granted.

I hear you say, "So what, I'm in the UK, the NSA has no legal jurisdiction here." You may have a point, but any information you transmit over the internet most likely flows through the US, or is hosted on servers based in the US or run by US companies. That means your information is fair game... and the US is not above having you extradited for things that are a crime in the US but not in your home country (such as Richard O'Dwyer, whose extradition they sought for providing links on his website to copyright-infringing material). And if they can't get you legally extradited, they'll quite happily kidnap you and return you to the US for trial, in what they call extraordinary rendition.

Anyone that's known me well knows that I've always had a vague fascination with encryption and cryptography; vague in the sense that any 10 year old boy shown how to write secret messages is fascinated by it. For those who are unaware of what encryption and cryptography are: it is the art of concealing a message by scrambling it up using a process that only the intended recipient can later unscramble to reveal the original message you wrote. In the meantime, anyone else looking at it won't be able to read it, nor could they modify it without the intended recipient knowing.

I've tinkered with many cryptographic and security tools in my life but have never really taken to heart just how seriously I should take it... after all, I'm a nobody, I don't do anything interesting, at least, not in the eyes of any government. I'm not a conspiracy theorist, particularly. I don't have enough political influence to be a threat to anyone, so why should I really care?

In the past couple of weeks, I've started a journey into exploring computer and internet security in the hope that I can pull together a solution that will help everyone become not only more security conscious but actually build the basic skills to manage our own computer security more comprehensively.

I will be blogging about my discoveries in (hopefully) plain English that my kids will be able to understand growing up - I want them to be able to take charge of their own computer and internet security as that starts to become important.  So stick around and hopefully my discoveries will spur insights that are of benefit to more than just me and my family, but to everyone reading my blog as well.

I encourage everyone to contribute in the comments, this is a journey for me as much as it is for everyone else. I don't by any stretch consider myself an expert in this arena yet, there is much I don't know. We're all in this together and I hope it turns out to be as fascinating to everyone else as it is to me, after all, the outcome of this journey will hopefully help keep all of our information as private as I believe it should be - and hopefully you do too.

September 28, 2010

Nine non-development jobs professional developers would benefit from doing

Nine non-development jobs that professional software developers would benefit from doing once in their lives - not including the most obvious Software Engineer :P

  • Front line technical support - This will teach you that more often than not, business metrics [as flawed as they are from your developer point of view] drive the business, not your idealism towards your software. Your 'skewed' idealistic point of view has no weight in the call centre. This will give you an insight into just how much money large corporations waste on supporting each and every bug your software goes out with.
  • Second line technical support - Dealing with the simplest of problems that front line technical support can't figure out within the definition of their required metrics - i.e. their average call time, calls handled per shift etc.
  • Third line technical support - Dealing with calls from customers that neither front line nor second line technical support can figure out. Of the technical support jobs, this is the most fun. You only get the intriguing problems that nobody else can figure out.
  • Customer service in a call centre - Dealing with random calls regarding software, from "what does it do" to "I have no idea what I'm doing". You will have to find many ways of presenting the same information in different ways because not everyone 'gets it' the way you think they should. This will teach you patience... with a safety net, you can always put the customer on hold and freak out about how stupid they are.
  • Desk side support - Dealing with customers in person, sitting by the customer and fixing their problem for them or showing them how to fix it for themselves. The biggest lesson you will learn from this job is patience - without a safety net, you can't freak out because the customer is right there. Whether you consider the person an idiot or the smartest person you've met in your life, you have to keep an even temper and demonstrate compassion for their situation - even if underneath you're fuming.
  • Retail sales - The customer is important. This job will teach you many things: Personal interaction with unfamiliar people, body language, anticipation of customer needs and the value added upsell.
  • Integration specialist for someone else's software - Going to client sites and installing software into their production environments. This will teach you what happens in corporations once your software makes it into the wild. Just because it works on your test servers, with your test data doesn't mean it's going to work in the wild, on someone else's server, under someone else's control, with their data. It will also teach you the lengths you need to go to in order to integrate your software with other business applications.
  • Graphic Design - What's the point being able to make great software if it looks ugly?
  • Typesetting/Copy writing - What's the point in being able to make great software if nobody can read it?

If anyone's got any other non-development jobs that would be useful for rounding out a professional software developer, please comment! :)