Category Archives: Cloud

OSS Watch joins G-Cloud

OSS Watch has been awarded a place on G-Cloud, the UK Government cloud procurement initiative, and will be making a range of services available through the CloudStore.

Government Procurement Service Supplier logo

The UK Government has been working to make the purchasing of public sector ICT as simple and transparent as possible. All services listed on the CloudStore are part of the G-Cloud framework and so immediately available for the public sector to procure and use.

This means that the public sector can now procure OSS Watch services quickly, easily and cheaply through the CloudStore. This includes our full range of consultancy services, available under “Lot 4: Specialist Cloud Services”.

For the past 10 years, OSS Watch has provided independent, non-advocacy information and consultancy on all aspects of Open Source for the UK education sector, and we’re excited to be able to offer our services and expertise direct to the public sector for the first time.

For more information, see the G-Cloud information page on our website, or contact us.

Running open source virtual machines… on Microsoft Windows Azure? Welcome to the VM Depot

Last week I gave a talk on open source as part of a Microsoft Azure for Education day at UCL in London. I was sharing the stage with Stephen Lamb from Microsoft, who gave a great overview of the various open source projects that Microsoft are engaged in, including Node.js and PHP for Windows. But the main highlight was VM Depot.


VM Depot is a way to upload, share, and deploy virtual machine images on Microsoft’s Windows Azure cloud platform. For example, you can easily find common open source packages such as Drupal and WordPress on various Linux operating systems available as VMs, so that you can create and run your own instances.

This makes it very easy to get started with open source packages, as the dependencies, related components and configuration are all set up and ready to use – for many packages this means just doing your own customisation, such as setting up your web domain and personalising the user interface.

As well as the usual suspects such as Drupal, the VM Depot can host all kinds of other software; for example, you can deploy the Open Data portal platform CKAN. This opens up possibilities for using the service for more niche requirements, for example you could create a VM image of your research software and dataset to make it easier for reviewers to run your experiments. Or you can modify an existing image to include extensions and enhancements that may target a more specialist audience, for example you could create a WordPress image with templates and add-ons to run as an overlay journal rather than a regular blog.

So why is Microsoft doing this?

Well, it seems to fit as part of the drive towards Microsoft being less of a software company and more of a device manufacturer and cloud services provider. When it comes to offering cloud services, it’s less important what your customers choose to run on them than making sure they can run whatever they need. For most organisations that usually means a mixture of closed source and open source packages; by offering the VM Depot, Microsoft can serve these customers as part of an existing relationship, rather than force them to go to other service providers for running open source products.

Microsoft have certainly come a long way since the infamous “cancer” remark.

For more on Microsoft and Open Source, check out Microsoft Open Technologies.

Open Source and Open Standards key to future of public sector IT

Last week Open Source, Open Standards 2013 took place in London, an event focussed on the public sector. Naturally, these being two topics we’re very keen on here at OSS Watch, I went along too.

Overall the key message to take away from the event was just how central to public sector IT strategy these two themes have become, and also how policy is being rapidly turned into practice, everywhere from the NHS to local government.

Tariq Rashid, the Open Source policy lead for the UK Government, spoke of the need for IT to be focussed on user needs, and to deliver sustained value, by moving from “special” software procured for the public sector, to services delivered using commodified IT.

Even where services are unique to the public sector, Rashid and other speakers at the event made the case that most elements of such services can be delivered by building on commodified IT. For example, the open source CMS Drupal is used for delivering increasing numbers of public sector IT services, and the Government Digital Service builds its services from open source components.

The two strategies of Open Source and Open Standards are necessary as they create the ‘competitive tension’ needed to drive down cost and improve sustainability.

Mark Bohannon of Red Hat gave an overview of the global landscape of Open Source in government, in the US and UK, and identified the UK policies as being particularly forward looking. Mark positioned Cloud and Big Data as two key areas where Open Source and Open Standards were critical, calling out OpenStack and Hadoop as particular cases, and also provided some great case studies on open source from the military and from space exploration.

Mark made the point that Open Source and Open Standards underpin a more fundamental change in IT, away from big IT projects towards IT that is agile, modular and responsive to user needs.

Ian Levy of CESG dispelled some myths around security and Open Source (“If anyone in UK government says CESG has banned open source send their name to me and I’ll have them killed”) and made the case for a common sense approach to security, whether the software or service is open source or closed source.

Mark Taylor from Sirius has long been an advocate for open source in the public sector, and it was good to be at a point where the message has been heeded! He began with a nice Schopenhauer quote:

All truth passes through three stages. First, it is ridiculed. Second, it is violently opposed. Third, it is accepted as being self-evident.

In the talk he provided lots of practical advice for public sector organisations on putting Open Source into practice, including calling on those writing tenders to focus on user needs instead of naming technology solutions. Mark also gave a workshop later in the day where he continued this theme, expanding on how public sector organisations and companies had made transitions to open source. It’s not easy to summarise here in a post, but I found the information very practical and useful; for example, when transitioning IT, start with the systems furthest away from users, such as backend services and infrastructure, to avoid sparking the usual neophobia that comes with changing technologies for users.

Inderjit Singh gave an overview of the NHS standards-based approach to IT, with some nice background on which approaches had been tried and where the current strategy is going. The current approach has been to use a programme of change projects involving SMEs, which has engaged 40 new suppliers and is accelerating the take-up of the standards.

Singh asserted that standards are fundamental for enabling an open architecture, and that open source and open standards go hand in hand in delivering value for users.

After some workshop sessions, we had Alasdair Mangham from the London borough of Camden giving us a look into how they’ve been building services using open source software in collaboration with SMEs. This involved a major shift in contracting – rather than write a huge set of requirements in a tender document, they disaggregated the project and bought in specialist capabilities (in usability, service design, SOA and so on) as needed, in smaller chunks of time, using an agile process.

Graham Mellin gave an overview of the Met Office’s new space weather system, built using open standards and open source software; for their own specialist systems they decided to go down the route of making them Open Source, rather than sharing with a private partner, as a result of an exploitation planning process.

I met with a lot of people at the event, from suppliers, local government, NHS and national government departments, and it was good to get a sense of how the public sector is moving – whatever the pace in individual areas – towards this vision of more affordable, sustainable and user focussed IT, and better utilising the capabilities of UK SMEs and startups.

As we pointed out recently in our post in the Guardian, Higher Education in particular is in a strong position in this area as a result of past investments in Open Source and Open Standards, and we now need to think about how we take that forwards.

As Mark Taylor pointed out in his talk, the public sector accounts for over half of IT spend in the UK – and we can choose to either unite and use that market power to shape the future, or be divided up and conquered.

Shallow versus Deep Differentiation: Do we need more copyleft in the cloud?

In a previous post I discussed two different models for open source services; the “secret source” model, which is based on providing a differentiated offering on top of an open source stack, and a copyleft model using licenses that address the “ASP loophole” such as AGPL.

Another way of looking at these two models is in terms of the level and characteristics of differentiation that they afford.

Shallow versus Deep

Sea and clouds

If a service offering – and this applies whether it’s a SaaS solution, infrastructure virtualization or anything in between – uses a copyleft license such as AGPL, then this tends to encourage shallow differentiation. By this I mean that the service offered to users by different providers is differentiated in a way that does not involve significant changes to the core codebase. For example, service providers may differentiate on price points, service packages, location, and reputation, while offering the same solution.

There can also be differentiation at the technology level including custom configurations, styling and so on, or added features; however under an AGPL-style license these are also typically distributed under AGPL, so if service providers do want to extend and enhance the codebase, this is contributed back to the community. If a provider really did want to provide deep differentiation, it would effectively have to create a fork.

If a service offering instead builds on top of an open source stack using a permissive license such as the Apache license, then it becomes possible for providers to offer deep differentiation in the services they provide; they are at liberty to make significant changes to the software without contributing this back to the community developing the core codebase. This is because, under the terms of most open source licenses, providing an online service using software is not considered “distribution” of the software.

What does this all mean?

For service providers this presents something of a quandary. On the one hand, a common software base represents a significant cost saving as development effort is pooled, reducing waste. On the other, there is a clear business case for greater differentiation to compete as the market becomes more crowded.

How this is resolved is something of a philosophical question.

It may be that, acting out of self-interest, service providers will over time balance out the issues of differentiation and pooled development regardless of any kind of licensing constraint; the cost savings and reduced risk offered by pooling development effort for the core codebase will be clear and significant, and providers will apply deep differentiation only where there is very clear benefit in doing so, while contributing back to the core codebase for everything else.

Alternatively, service providers may rush to differentiate deeply from the outset, leaving the core codebase starved of resources while each provider focusses on their own enhancements. In this scenario, copyleft licensing would be needed to sustain a critical mass of contributions to the core.

Which is it to be?

Given that OpenStack and Apache CloudStack, two of the main cloud infrastructure-as-a-service projects, are both licensed under the Apache license, we can observe over the coming year or two which seems to be the likely scenario. Under the first model, we should see the developer community and contributions for these projects continue to grow, irrespective of how deeply providers differentiate services based on the software.

Under the second scenario, we should see something rather different, in that the viability of the project should suffer even as the number of providers building services on them grows.

As of now, both projects seem to be growing in terms of contributors; here’s OpenStack (source: Ohloh):

OpenStack contributors per month; rising from 50 in 2011 to 250 in 2013

… and here is CloudStack (source: Ohloh):

CloudStack contributors per month; around 25 over 2011-2012, rising steeply to 60 in 2013

(Both projects have slightly lower numbers of commits, though that can simply reflect the greater maturity of the codebases rather than reduced viability, which is why I’ve focussed on the number of contributors.)
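To make the metric concrete, here’s a hypothetical sketch (not Ohloh’s actual method, and using made-up commit data) of counting distinct committers per month from a list of (date, author) commit records:

```python
# Count distinct committers per calendar month from commit records.
# The commit data here is invented purely for illustration.
from collections import defaultdict

commits = [("2013-01-05", "alice"), ("2013-01-20", "bob"),
           ("2013-01-21", "alice"), ("2013-02-02", "carol")]

per_month = defaultdict(set)
for date, author in commits:
    per_month[date[:7]].add(author)   # key on "YYYY-MM"

print({month: len(authors) for month, authors in sorted(per_month.items())})
# → {'2013-01': 2, '2013-02': 1}
```

Note that alice appears twice in January but is only counted once: it’s the number of distinct people contributing, not the commit volume, that this metric tracks.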

If the concerns over “deep differentiation” turn out to be justified, then community engagement in these two projects should suffer as effort is diverted into differentiated offerings built on them, rather than channelled into contributions back to the core projects.

Is deep differentiation really an issue for cloud?

Deep and shallow differentiation is a concept borrowed from marketing, and is sometimes used to refer to how easy it is for a competitor to copy a service offering. One example is the Domino’s Pizza “hot in 30 minutes or it’s free” service promise: it would be difficult for a competitor to copy this offering without actually changing the nature of its operation to match, as it can’t just copy the tagline without risking giving away free pizza and going out of business.

In cloud services, it’s arguable how much differentiation will be in terms of software functionality and capabilities, and how much in the operational and marketing aspects of the services: things like pricing, reliability, support, speed, ease of use, ease of payment and so on.

If the key to success in cloud lies amongst the latter, then it really doesn’t matter that most providers use basically the same software, and providers will want to take advantage of having a common, high-quality software platform with pooled development costs.

A further problem with deep differentiation in the software stack is that it could impact portability and interoperability – having extra features is great, but cloud customers also value the ability to choose providers and to switch when they need to. Providers converging on a few popular open source cloud offerings is another kind of standardisation, complementing interoperability standards such as OVF, and one that gives customers confidence that they aren’t being locked in; as well as being able to move to another provider, they also get the option to in-source the solution if they so wish.

Are there better reasons for copyleft?

It remains to be seen whether there really is a problem with the open cloud, and whether copyleft is an answer. Personally I’m not convinced there is.

However, that doesn’t mean copyleft on services isn’t important; on the contrary I think that licenses such as AGPL offer organisations a useful option when looking to make their services open.

Recent examples such as EdX highlight that AGPL is a viable alternative for licensing software that runs services, and with greater awareness among service providers we may see more usage of it in future. For example, for the public sector it may offer an appropriate approach for making government shared services open source.

(Sea and cloud photo by Michiel Jelijs licensed under CC-BY 2.0)

Are you SaaSing me?

In the final post of my trio deconstructing the “cloud”, I’m going to be looking at Software-as-a-Service, or SaaS. SaaS is the next layer of the “cloud” stack, sitting atop Platform-as-a-Service (PaaS) (when it’s been built using such a platform) and/or Infrastructure-as-a-Service (IaaS).

Software-as-a-Service is usually presented to the user as a web application. It is distinct from a standalone web application in that each customer has their own instance of the software, rather than just having an account on a larger system.

For example, LinkedIn wouldn’t be considered SaaS since you’re not getting an instance of the LinkedIn software, just a user account. However, signing up for a blog on gives you your own WordPress instance, on its own subdomain, with its own set of users. This would be considered SaaS.

There are more complex examples, such as SourceForge, which gives you an account within its system but also instances of software for your project, such as bug trackers and wikis. Google Docs also blurs this definition.

SaaS providers will usually offer multiple tiers of service, ranging from a highly limited free account to a well-provisioned or even “unlimited” paid account.

There are two key advantages of SaaS. Firstly, it completely removes the administrative overhead of deploying software. Usually a few clicks of a web interface is all it takes to “install” your SaaS instance, and the provider takes care of the computing and storage resources required. Secondly, you can access the software from anywhere. As long as a machine has an Internet connection and a web browser, no further setup is usually required for end users.

There are of course potential issues to be aware of. Unlike having software deployed locally, or a web application deployed in-house, you’re unlikely to have direct access to your data (and depending on the terms of service, you might not even own it). All data will be stored by your SaaS provider and presented through the application. This makes backing up or archiving data locally difficult if not impossible, and requires absolute trust and confidence in your provider and their security policies.

While having your software available from any machine with a web browser and Internet connection is very convenient, this can be a double-edged sword in a situation where one of these isn’t available.

Open source software appears in the SaaS world in several guises, as Scott discussed in this week’s post. Some SaaS products may be built using permissively-licenced components, but with some proprietary code sticking it all together.

Of course, a complete open source software product may be offered as a service. WordPress is released under the GNU General Public Licence (GPL), but is offered as both a free and a commercial service at and by other providers. ownCloud is released under the Affero General Public Licence (AGPL) and is available as a service from various providers. The terms of the AGPL, unlike those of the GPL, mean that even when the software is provided as a service, the service provider must allow you to download the source code.

The benefits of choosing an open source product when selecting SaaS aren’t as clear-cut as for the lower layers of the “cloud” stack. If the product isn’t released under the AGPL, the service provider doesn’t have to distribute the source code. If the software is released under a different open source licence, you will probably be able to download it elsewhere and run your own instance locally as a contingency. However, without access to your data, the utility of this contingency is limited.

SaaS solutions using a combination of open source and proprietary components, as far as the customer is concerned, may as well be entirely proprietary. The provider may be developing and releasing the open source components, and these may be useful in other systems. However, in terms of the service being provided, the fact that some parts are open source doesn’t directly benefit the customer, only the provider.

When considering open source SaaS solutions, the key factor to look for is the portability of data. WordPress can publish your data in standard RSS formats. Some products will have an “export” feature, allowing you to download a copy of your data. In these cases, you realise the benefits of choosing an open source product, as you can easily move your data to another provider or an in-house copy of the software.
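As a rough sketch of what checking data portability can look like, here’s a minimal example using Python’s standard library to pull post titles and links out of an RSS 2.0 document; the feed itself is made up for illustration, and real exports will vary by provider:

```python
# Parse a (made-up) RSS 2.0 feed and extract post titles and links,
# the kind of data you'd want to carry to another provider.
import xml.etree.ElementTree as ET

rss = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Blog</title>
    <item><title>First post</title><link>http://example.org/1</link></item>
    <item><title>Second post</title><link>http://example.org/2</link></item>
  </channel>
</rss>"""

root = ET.fromstring(rss)
posts = [(item.findtext("title"), item.findtext("link"))
         for item in root.iter("item")]
print(posts)
# → [('First post', 'http://example.org/1'), ('Second post', 'http://example.org/2')]
```

If you can round-trip your content through a standard format like this, switching providers (or moving in-house) becomes a practical option rather than a theoretical one.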

How Can Services Be Open Source?

For many organisations, the key dilemma they face in procurement is not between open source and closed source software, but between deploying software and using online services – whether that’s infrastructure, platform, or shared services. Open source plays a part in all of these kinds of services, but it can do so in two distinctly different ways, which carry different advantages and risks for providers and customers.

Cloud Quine

A fun diversion for computer scientists is the quine: a program that prints its own source code, including the code used to print the source code! Something about this concept attracted free software activists such as Bradley Kuhn, particularly as software was increasingly deployed as web services rather than distributed and run directly on users’ machines – a situation not covered by existing licensing (the “ASP loophole”).
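For the curious, a minimal quine in Python looks like this – running it prints exactly its own two lines of source:

```python
# A minimal Python quine: the program's output is its own source code.
# %r inserts the repr of s (quotes and escapes included), and %% is a literal %.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

The trick is that the string contains a template of the whole program, and formatting the string with itself reproduces both the assignment and the print statement.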

What if you could license software used to run a service, so that users could always download it, including any modifications the service provider had implemented?

This is the idea behind the Affero GPL (AGPL) license. If a piece of software uses the AGPL, then when it’s deployed to run a service – whether that’s to provide infrastructure-as-a-service or an application as a shared service – users can download the complete source code, including all modifications used by the service provider.

So, for example, if you take an AGPL-licensed program such as StatusNet, a Twitter-like status-sharing system, and you deploy it, make some tweaks to the user interface and add a few features, anyone can download your modified version and run their own instance of it. Another example is ownCloud, which is offered either under the AGPL or under a commercial license.

For customers this does offer some interesting risk mitigation; if for whatever reason you no longer wish to use your existing service provider, you have the capability to switch to an identical service run either in-house or by another provider, as the complete source code for running the service is available. However, for a service provider it may be less attractive, as you have limited ability to differentiate your offering in the service itself – though you can still excel at customer service, reliability, pricing and other non-software aspects.

Secret Sourcery

However, while Kuhn, Moglen and others were developing the AGPL, an alternative model for open source was emerging, based on using liberal licenses such as MIT, BSD and ASL.

In this model, while the core software used for offering the service is open source, some of the “glue” code that binds it together, and typically the user experience offered for the service, is closed source.

For example, while most of the software used to run GitHub is open source, GitHub itself cannot be downloaded and run elsewhere, as some of its code is closed source; a strategy that Tom Preston-Werner outlines in his post “Open Source (Almost) Everything”.

For service providers, this model does offer the capability to differentiate their offering more deeply – whether that’s a better user experience or extra features.

For customers, however, it does create more of a problem, as it’s less obvious how you might be able to migrate to another provider: unlike in the AGPL world, you don’t have access to the complete platform. It’s also possible for a service provider to fold, leaving the services it ran impossible to recreate. This is a business risk, and is one reason why open standards for cloud services are such a critical (and highly contested) area.

However, it’s also a question of degree – for some services there is very little provided above and beyond an open source software stack, and the value added is in areas such as pricing, reliability and customer service. For example, a provider may offer CloudStack or OpenStack largely as-is, or provide hosting of largely unmodified open source applications such as Moodle.

For others, the service is largely proprietary with a few bits of open source software at a fairly low level, representing a more serious level of potential lock-in depending on how well the service supports open standards.

Each service provider can therefore present quite a complex picture in terms of risk mitigation.

Making a choice

Today, the “secret source” approach is by far the most common model for services – whether that’s infrastructure as a service or shared application services – and is also likely to be the model used by most of the shared service providers currently used by universities and colleges.

But are they aware of the pros and cons of this model? Or that there might even be a choice of model? Would they have a preference for one model over the other for services offered by sector organisations such as JISC?

I think these are questions worth asking for the education sector. For the wider world, “secret source” is clearly dominant, but if we want to go down that route too, it would be better to do it through choice rather than ignorance.

See also: Google and the Affero GPL and SaaS: Who Shares Wins?

Developers in the Mist

In my last post I started dissecting the term “cloud”, looking at Infrastructure-as-a-Service (IaaS) and what it means in terms of Open Source Software.  This time I’ll be moving to the next layer of the cloud stack to look at Platform-as-a-Service, or PaaS.

PaaS provides a way of provisioning development environments and tools on top of an IaaS system.  PaaS allows you to easily deploy the tools required for developing and running applications; languages, runtime environments and testing tools, with the inherent flexibility of resource allocation that comes with running on IaaS.

For example, you might want to start developing a Ruby on Rails application.  Without PaaS, you might create a new virtual machine, set up the operating system, install a database, Ruby, Rails and all their dependencies, configure remote access to the machine, and so on.  With PaaS you can simply use the provided tools to deploy a pre-configured Ruby on Rails platform to your specifications.

The key benefits here are that developers can focus on developing applications rather than setting up their environment and managing server resources, and that custom-built applications can scale easily to meet demand.

You can get PaaS solutions from several of the usual suspects.  Windows Azure offers PaaS tools to run on top of its IaaS offerings, while Google App Engine allows you to develop and run your software in Google-land.  Both of these systems are proprietary, purely sold as a service.

Open Source PaaS offerings are also emerging. Red Hat’s OpenShift and VMware’s Cloud Foundry are both platforms providing similar features to the proprietary competitors, but are available under an Open Source licence.  Red Hat and VMware both offer their respective solutions as supported commercial services, but as is the nature of Open Source, there are other companies selling services based on the same systems.

The key differentiator touted by the Open Source solutions as opposed to their proprietary competitors is freedom from lock-in, although OpenShift and Cloud Foundry present this benefit differently.  OpenShift focuses on support for portable technologies such as Java and Open Source languages as a means of allowing you to take your applications elsewhere if you choose.  Cloud Foundry provides support for similar technologies, but focuses on avoiding lock-in to a single IaaS platform.  This means that Cloud Foundry will run on VMWare’s own vSphere infrastructure, OpenStack, or even Amazon’s EC2. The OpenShift website does make mention of OpenShift running on Amazon Web Services, but most of the documentation refers to running on OpenStack.

Of course, the other benefit of Open Source solutions is that you’re not tied to using a commercial service at all.  If you’ve got a supported IaaS system running in house, or rented infrastructure from an external service, you can deploy your own PaaS atop it and let your staff spend more time developing and less time setting up platforms.

Unclouding the issue

You can’t do anything with software at the moment, Open Source or otherwise, without the word “cloud” being thrown about.  If you’re looking at options for software you might want to use, it can be hard to see through the hype of this term, and understand what product you’re actually talking about.  This article is going to look at one area of the “cloud” ecosystem and the relevance to Open Source Software.

You might hear someone speak about “cloud servers” or “servers in the cloud”.  What’s probably being spoken about here is a product called Infrastructure-as-a-Service (IaaS).

Traditionally, if you wanted a server, you had the option of buying or renting physical machines and either hosting them in house, or in a data centre through a co-location service.  More recently, we’ve seen these physical machines replaced with virtual machines, allowing several server systems to be run on the same piece of hardware. Again, these can be hosted internally or externally.

IaaS takes this to the next level, abstracting away the underlying hardware from the customer. An IaaS service is typically housed in a large data centre, where servers’ resources are pooled into a cluster.  Virtual servers can then be provisioned on-demand, with underlying software taking care of which hardware is actually doing the work.
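As a toy illustration of that placement decision (not any real IaaS scheduler or API – the function and data here are invented), consider a sketch like this:

```python
# A toy model of IaaS placement: given pooled hosts, pick one with enough
# free capacity for a requested virtual server. Names and fields are
# purely illustrative, not any real cloud platform's API.
def place_vm(hosts, cpus_needed):
    """Return the name of the host with the most free CPUs that can fit
    the request, or None if the pool is exhausted."""
    candidates = [h for h in hosts if h["free_cpus"] >= cpus_needed]
    if not candidates:
        return None
    best = max(candidates, key=lambda h: h["free_cpus"])
    best["free_cpus"] -= cpus_needed
    return best["name"]

pool = [{"name": "host-a", "free_cpus": 8},
        {"name": "host-b", "free_cpus": 4}]
print(place_vm(pool, 6))  # → host-a
print(place_vm(pool, 4))  # → host-b (host-a only has 2 CPUs left)
```

A real scheduler also weighs memory, storage, network locality and failure domains, but the principle is the same: the customer asks for capacity, and software decides which hardware actually does the work.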

This model provides excellent flexibility and scalability, as new virtual servers can be provisioned and resources allocated to meet the demands of the services they are running.  It also holds the potential for cost saving – different types of customers will have different peaks in demand, which can be balanced across the shared infrastructure.  This can lower the overall computing capacity required, in turn lowering costs.
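A quick back-of-the-envelope example, with made-up demand figures, shows why pooling lowers the required capacity:

```python
# Two customers whose demand peaks at different times need less total
# capacity when they share hardware. Figures are invented for illustration.
customer_a = [10, 2, 2, 10]   # CPUs needed in four time slots
customer_b = [2, 10, 10, 2]

separate = max(customer_a) + max(customer_b)                 # provision each for its own peak
pooled = max(a + b for a, b in zip(customer_a, customer_b))  # provision for the peak of the sum

print(separate, pooled)  # → 20 12
```

Because the peaks don’t coincide, the shared pool needs 12 CPUs where separately provisioned systems would need 20 – and that saving is what a provider can pass on in pricing.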

Set-ups like this are one of the reasons we use the term “cloud” – the blurring of division between pieces of hardware to make one ubiquitous “blob” of computing resource.

The best known example of IaaS is without a doubt Amazon’s Elastic Compute Cloud, or EC2.  Amazon owns several data centres across the globe.  When renting a server from EC2, you simply specify the resources you require and the location.  The system takes care of the rest and presents you with a remote login to your server.  EC2 is used by sites like Reddit and Foursquare to give them the ability to scale in line with demand.

Of course, EC2 isn’t the only player in the space.  There are two high-profile examples of Open Source platforms that can be used to provide IaaS: OpenStack and CloudStack.  OpenStack is produced by the OpenStack Foundation, originally founded by NASA and Rackspace but now comprising a sector-spanning group of technology companies.  Several companies in the Foundation run public cloud services on the OpenStack platform, in competition with EC2.  CloudStack was originally developed by, who were bought out by desktop virtualisation giant Citrix. Citrix subsequently open sourced the CloudStack system through the Apache Software Foundation.  CloudStack is used by big name brands such as BT and GoDaddy, as well as some smaller ones.

If your infrastructure is being provided as a service, it might not be immediately apparent why it matters if the underlying technology is Open Source. Your primary concern is likely to be what software you’re running on the servers that you’re renting.  However, it certainly warrants some consideration.

The first aspect to look at is the choice it affords you.  If you want a solution running on OpenStack, there are numerous companies for you to choose from, while knowing you’re getting the same product backed by the same group of vendors.  These companies will still want to compete between themselves, be it on price, the types of server they offer, or the management tools they provide.  Also, as the underlying system is the same across vendors, you avoid lock-in.

Another factor that shouldn’t be overlooked is that you don’t have to use OpenStack or CloudStack as a service from someone else.  If you’ve got a data centre in your organisation, you can run your own, private IaaS system.  This isn’t for everyone, and does rather muddy the “as-a-Service” term, but you can still think of it as a service provided internally.  Rather than having to provision VMs in response to individual requests, running a private IaaS system would allow users to scale solutions to meet their needs at any given time.

Private IaaS systems can have a disadvantage: systems running on a private cloud are likely to have similar peaks in demand, reducing the potential for cost savings.

This isn’t the end of the cloud story.  There are other “as-a-Service” products that are marketed under the “cloud” banner, and there’s Open Source to be found in all of them.  We’ll be looking at them in future articles.