Locked into Free Software? Unpicking Kuali’s AGPL Strategy

Chuck Severance recently published a post entitled How to Achieve Vendor Lock-in with a Legit Open Source License – Affero GPL, in which he criticises the use of the AGPL license, particularly its use – or at least its intended use – by Kuali. Chuck’s post is well worth reading – especially if you have an interest in the Kuali education ERP system. What I’m going to discuss here are some of the details and implications of AGPL, in particular where my take on things differs from the views Chuck expresses in his post.

Lock – image by subcircle, CC-BY

Copyleft licenses such as the GPL and AGPL are more restrictive than the so-called permissive licenses such as the Apache Software License and MIT-style licenses. The intent behind the additional restrictions is, from the point of view of the Free Software movement, to ensure the continuation of Free Software. The GPL requires that any modifications to code it covers also be licensed under the GPL if they are distributed.

With the advent of the web and cloud services, the nature of software distribution has changed; GPL software can be – and is – used to run web services. However, using a web service is not considered distributing the software, so companies and organisations using GPL-licensed code to run their sites are not required to distribute any modified source code.

Today, most cloud services operate what might be described as the “secret source” model. This uses a combination of Open Source, Free Software and proprietary code to deliver services. Sometimes the service provider will contribute back to the software projects they make use of, as this helps improve the quality of the software and helps build a sustainable community – but they are under no obligation to do so unless they actually choose to distribute code rather than use it to run a service.

The AGPL license, on the other hand, effectively treats the deployment of websites and services as “distribution”: anyone using modified AGPL-licensed code to run a service must also offer the corresponding source code to that service’s users.
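
To make the obligation concrete, here is a minimal sketch – my own illustration in TypeScript with Express, not code from any of the projects discussed here – of how a service running modified AGPL code might satisfy the network provision by advertising where its corresponding source lives. The endpoint name and repository URL are assumptions.

    import express from "express";

    const app = express();

    // Hypothetical URL of the repository holding the exact source,
    // including local modifications, that this service is running.
    const CORRESPONDING_SOURCE_URL = "https://example.org/git/our-service";

    // AGPLv3 requires that users interacting with a modified version
    // over a network be offered access to the corresponding source;
    // a well-known endpoint advertising it is one simple approach.
    app.get("/source", (_req, res) => {
      res.json({
        license: "AGPL-3.0",
        correspondingSource: CORRESPONDING_SOURCE_URL,
      });
    });

    app.listen(3000);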

AGPL has been used by projects such as Diaspora, StatusNet (the software originally behind Identi.ca – it now uses pump.io), the CKAN public data portal software developed by the Open Knowledge Foundation, and MIT’s EdX software.

[UPDATE 20 September 2014: EdX has since relicensed its AGPL component under the Apache License]

We’ve also discussed before on this blog the proposition – made quite forcefully by Eben Moglen – that the cloud needs more copyleft. Moglen has also spoken in defence of the AGPL as one of the means whereby Free Software works with cloud services.

So where is the problem?

The problem is that the restrictions of AGPL, like GPL before it, can give rise to bad business practice as well as good practice.

In a talk at Open World Forum in 2012, Bradley Kuhn, one of the original authors of AGPL, reflected that, at that time, some of the most popular uses of AGPL were effectively “shakedown practices” (in his words). In a similar manner to how GPL is sometimes used in a “bait and switch” business model, AGPL can be used to discourage use of code by competitors.

For example, as a service provider you can release the code to your service as AGPL, knowing that no-one else can run a competing service without sharing their modifications with you. In this way you can ensure that all services based on the code have effectively the same level of capabilities. This makes sense when thinking about the distributed social networking projects I mentioned earlier, as there is greater benefit in having a consistent distributed social network than having feature differentiation among hosts.

However, in many other applications, differentiation in services is a good thing for users. For an ERP system like Kuali, there is little likelihood of anyone adopting the system without needing to make modifications – and releasing them back under the AGPL. It would certainly be difficult for another SaaS provider to compete with Kuali using Kuali’s own software on the basis of extra features, as any improvements they made would need to be shared back with Kuali anyway. They would need to compete in other areas, such as price or support options.

But back to Chuck’s post – what do we make of the arguments he makes against AGPL? He writes:

If we look back at the four principles of open source that I used to start this article, we quickly can see how AGPL3 has allowed clever commercial companies to subvert the goals of Open Source to their own ends:

  • Access to the source of any given work – By encouraging companies to only open source a subset of their overall software, AGPL3 ensures that we will never see the source of the part (b) of their work and that we will only see the part (a) code until the company sells itself or goes public.
  • Free Remix and Redistribution of Any Given Work – This is true unless the remixing includes enhancing the AGPL work with proprietary value-add. But the owner of the AGPL-licensed software is completely free to mix in proprietary goodness – but no other company is allowed to do so.
  • End to Predatory Vendor Lock-In – Properly used, AGPL3 is the perfect tool to enable predatory vendor lock-in. Clueless consumers think they are purchasing an “open source” product with an exit strategy – but they are not.
  • Higher Degree of Cooperation – AGPL3 ensures that the copyright holder has complete and total control of how a cooperative community builds around software that they hold the copyright to. Those that contribute improvements to AGPL3-licensed software line the pockets of commercial company that owns the copyright on the software.

On the first point, access to source code, I don’t think there is anything special about AGPL. Companies like Twitter and Facebook already use this model, opening some parts of their code as Open Source while keeping other parts proprietary. Making the open parts AGPL differs only in that competitors using them must also release their source code, so I don’t think this is a valid point overall.

On the second point, mixing in other code, Chuck is making the point that the copyright owner has more rights than third parties, which is unarguably true. It’s also true of other licenses. I think it’s certainly the case that, for a service provider, AGPL offers some competitive advantage.

Chuck’s third point, that AGPL enables predatory lock-in, is an interesting one. There is nothing to prevent anyone from forking an AGPL project – it just has to remain AGPL. However, the copyright owner is the only party that is able to create proprietary extensions to the code without releasing them, which can be used to give an advantage.

However, this is a two-edged sword, as we’ve seen already with MySQL and MariaDB; Oracle adding proprietary components to MySQL is one of the practices that led to the MariaDB fork. Likewise, if Kuali uses its code ownership prerogative to add proprietary components to its SaaS offering, that may precipitate a fork. Such a fork would not have the ability to add improvements without distributing source code, but would instead have to differentiate itself in other ways – such as customer trust.

Finally, Chuck argues that AGPL discourages cooperation. I don’t think AGPL does this any more than GPL already does for Linux or desktop applications; what is new is extending that model to web services. However, it certainly does offer less freedom to its developer community than MIT or ASL – which is the point.

In the end customers do make choices between proprietary, Open Source, and Free Software, and companies have a range of business models they can operate when it comes to using and distributing code as part of their service offerings.

As Chuck writes:

It never bothers me when corporations try to make money – that is their purpose and I am glad they do it. But it bothers me when someone plays a shell game to suppress or eliminate an open source community. But frankly – even with that – corporations will and should take advantage of every trick in the book – and AGPL3 is the “new trick”.

As we’ve seen before, there are models that companies can use that take advantage of the characteristics of copyleft licenses and use them in a very non-open fashion.

There are also other routes to take in managing a project to ensure that this doesn’t happen; for example, adopting a meritocratic governance model and using open development practices mitigates the risk of the copyright owners acting against the interests of the user and developer community. However, there is nothing obliging Kuali, as a private company, to operate in a way that respects Free Software principles other than the terms of the license itself – which, of course, as copyright owner it is free to change.

In summary, there is nothing inherently anti-open in the AGPL license itself, but combined with a closed governance model it can support business practices that are antithetical to what we would normally consider “open”.

Choosing the AGPL doesn’t automatically mean that Kuali is about to engage in bad business practices, but it does mean that the governance structure the company chooses needs to be scrutinised carefully.

Kuali governance change may herald end of ‘Community Source’ model

For quite some time now at OSS Watch we’ve struggled with the model of “Community Source” promoted by some projects within the Higher Education sector. Originating with Sakai, and then continuing with Kuali, the term always seemed confusing, given that it simply meant a consortium-governed project that released code under an open-source license.

As a governance model, a consortium differs from a meritocracy (as practised by the Apache Software Foundation), a benevolent dictatorship, and a single-company-driven model. It prioritises agreement amongst managers rather than developers, for example.

We produced several resources (Community Source vs. Open Source and The Community Source Development Model) to try to disambiguate both the term and the practices that go along with it, although these were never particularly popular, especially with some of the people involved in the projects themselves. If anything I believe we erred on the side of being too generous.

However, all this is about to become, well, academic. Sakai merged with JaSig to form the Apereo Foundation, which is taking a more meritocratic route, and the most high-profile project using the Community Source model – the education ERP project Kuali – has announced a move to a company-based governance model instead.

I think my colleague Wilbert Kraan summed up Community Source quite nicely in a tweet:

‘Community source’ probably reassured nervous suits when OSS was new to HE, but may not have had much purpose since

Michael Feldstein also provides a more in-depth analysis in his post Community Source Is Dead.

There’s good coverage elsewhere of the Kuali decision, so I won’t reiterate it here.

A few months ago we had a conversation with Jisc about its prospect to alumnus challenge, where the topic of Kuali came up. Back then we were concerned that its governance model made it difficult to assess the degree of influence that UK institutions or Jisc might exercise without making a significant financial contribution (rather than, as in a meritocracy, making a commitment to use and develop the software).

It’s hard to say right now whether the move to a for-profit company will make things easier or more difficult – as Michael points out in his post,

Shifting the main stakeholders in the project from consortium partners to company investors and board members does not require a change in … mindset

We’ll have to see how the changes pan out in Kuali. But for now we can at least stop talking about Community Source. I never liked the term anyway.

Ohloh to be renamed Black Duck Open Hub

Here at OSS Watch we’re big fans – and users – of Ohloh, the site that helps you analyze open source software repositories, for example when evaluating the sustainability of projects.

Since Black Duck bought the site back in 2010 there haven’t been any obvious changes.

Until now that is:

Dear Ohloh user,

We would like you to be the first to know of an exciting update to Ohloh.net. This week, Ohloh will be changing its name to the Black Duck Open Hub.

Since 2010, we have supported the Ohloh community, now consisting of over 250,000 users, with a stream of new features and functions – all to remain freely available. The name change to Black Duck Open Hub reflects an increasing commitment on our part to the developer community, as well as anyone who wants to learn about the world of open source.

So, goodbye Ohloh, hello Black Duck Open Hub!

I have to admit it doesn’t exactly roll off the tongue that easily, and I’m not looking forward to correcting all the mentions of it in various OSS Watch publications either, but hey, things move on!

Open Source Phones at MWC

Mobile World Congress is running this week in Barcelona. While it’s predictable that we’ve seen lots of Android phones, including the big unveiling of the Galaxy S5 from Samsung, I’ve found it interesting to see the coverage of the other devices powered by open source technologies.

Mozilla announced their plans for a smartphone that could retail for as little as $25. It’s based on a new system-on-chip platform that integrates a 1GHz processor, 1GB of RAM and 2GB of flash memory, and will of course be running the open source Firefox OS.  It’s very much an entry level smartphone, but the $25 price point gives real weight to Mozilla’s ambition to target the “next billion” web users in developing countries.

Ubuntu Touch is finally seeing the light of day on 2 phones, one from Chinese manufacturer Meizu and one from Spanish manufacturer Bq.  Both phones are currently sold running Android, but will ship with Ubuntu later this year.  The phones’ internals have high-end performance in mind, with the Meizu sporting an 8-core processor and 2GB of RAM, clearly chosen to deliver Ubuntu’s fabled “convergence story”.

Rumours have abounded this year that Nokia has been planning to release an Android smartphone, and at MWC they confirmed the rumours were true – sort of. “Nokia X” will be a fork of Android with its own app store (as well as third-party ones) and a custom interface that borrows elements from Nokia’s Asha platform and Windows Phone. Questions had been raised over whether Microsoft’s takeover of Nokia’s smartphone business would prevent an Android-based Nokia being possible. However, Microsoft’s vice-president for operating systems Joe Belfiore said “Whatever they do, we’re very supportive of them,” while Nokia’s Stephen Elop maintains that the Windows-based Lumia range is still their primary smartphone product.

A slightly more left-field offering comes in the shape of Samsung’s Gear 2 “smartwatch” running Tizen, the apparently-not-dead-after-all successor to Maemo, Meego, LiMo, and all those other Linux-based mobile operating systems that never quite made it.  The device is designed to link up to the Samsung Galaxy range of Android phones, but with the dropping of “Galaxy” from the Gear’s branding, perhaps we’ll be seeing a new brand of Tizen powered smartphones from Samsung in the future.

Leveling the playing field for open source in education and public sector

Last week I presented at ALT-C in Nottingham on the topic of open source in education and the public sector.

This was partly to invite people to participate in Open Source Options for Education, and partly to open up discussions around software procurement policies and processes in the sector.

The discussion tended to confirm our survey findings that procurement practice around open source options varies a lot within institutions, resulting in different biases both for and against FOSS; the degree to which FOSS is considered for procurement in education can differ considerably depending on who you’re dealing with. Getting a more balanced approach to closed and open source software would therefore require engaging all of the different groups involved in software procurement to develop their understanding and practices.

Here are the slides:


BBC creates HTML5 TV appliance

The BBC R&D labs have recently been busy working on a TV prototyping appliance called the egBox. The idea behind it is to create a minimum viable product of an HTML5-based TV as the base platform for experimenting with new features.

The appliance uses HTML5 and the WebM video codec, runs services using Node.js, and is being used as the basis for various technology experiments at the BBC. The set of components used by the egBox is one that many developers will be familiar with – node, redis, socket.io, backbone.js – on top of which the developers are working on areas such as TV authentication.

What is interesting is that, while the idea is to create a minimum viable TV product as the basis of other prototypes, the nature of the egBox stack suggests lots of interesting ways to extend it. For example, as it uses Node.js, you might make use of the Webinos platform to offer apps.
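
As a rough illustration of how compact such a stack can be – this is not egBox code, just a minimal sketch wiring together the named server-side components (backbone.js runs on the client and is omitted) – a Node.js service combining Express, socket.io and redis might start out like this:

    import express from "express";
    import { createServer } from "http";
    import { Server } from "socket.io";
    import { createClient } from "redis";

    const app = express();
    const httpServer = createServer(app);
    const io = new Server(httpServer); // realtime channel to the TV UI
    const redis = createClient();      // shared state store

    // Hypothetical "now playing" state, pushed to each screen on connect.
    io.on("connection", async (socket) => {
      const nowPlaying = await redis.get("nowPlaying");
      socket.emit("nowPlaying", nowPlaying ?? "nothing");
    });

    // A plain HTTP health check alongside the socket traffic.
    app.get("/status", (_req, res) => {
      res.send("ok");
    });

    async function main() {
      await redis.connect();
      httpServer.listen(3000);
    }

    main();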

At the time of writing the egBox code is not open source; however, that is the eventual plan.

The usual sticking point for any foray into TV using open source is actually working with TV services themselves. The “Smart TV” space not only has a whole raft of competing and conflicting standards, but most of the consortia operate on a paid membership basis, some even requiring signing NDAs to read the documentation; this is something I covered in a post a few years ago.

Things have improved since then, but there is still a long way to go. Ironically, a W3C standard for encrypted media content might actually be a good thing – or at least, a less-bad-thing – for open source TV, as W3C standards are royalty-free and publicly available unlike many of the specifications developed within the TV industry.

The upshot is that any open source project looking to work with a good range of TV services is likely to have to pony up some membership fees, and potentially keep some important parts of the codebase closed source to avoid issues with partners and consortia.

Still, it’s going to be interesting to see what’s possible with the egBox.

You can find out more on the egBox over at the BBC’s R&D blog.

How “open source” is the Minnowboard?

This week, Intel announced the Minnowboard, a small embedded development board akin to the RaspberryPi, BeagleBoard and similar devices. The point that grabbed my attention is that it’s being touted as an “open source computer”. The device is shipped running Ångström and is compatible with the Yocto project for building custom embedded Linux systems, but while there are many devices available that run Linux, the term “open source computer” is seldom bandied about. So just how “open source” is the Minnowboard?

The Minnowboard viewed from above, with labelled external interfaces

For a start, the board uses Intel chips, which is usually a good sign that the drivers required will be open source, without requiring any binary blobs in the Linux kernel. Furthermore, the UEFI system is open source. This is the code which executes when the computer first powers up and launches the operating system’s boot loader, and making this open source allows hackers to write their own pre-OS applications and utilities for the Intel platform, an opportunity we don’t often see on consumer devices.

Update: Scott Garman’s comment below clarifies the situation regarding graphics drivers and initialisation code.  A proprietary driver is required for hardware accelerated graphics.

However, moving away from the software, there’s a clear message that the Minnowboard is meant to be “open source hardware”. There are of course competing definitions from groups like OHANDA and OSHWA as to what qualifies as “open hardware” and “open source hardware” – one definition we heard at Open Source Junction required that all components be available from multiple sources, which is never going to be the case here – but a reasonable metric in this case would be: is one free to modify the design and make a similar device?

The language on the site certainly seems to suggest that this is the intention. The Minnowboard Design Goals page clearly states:

Our goal was to create a customizable developer board that uses Intel® Architecture and can be easily replicated. It is a simple board layout that lends itself to customization. The hardware design is open. We used open source software as much as possible. We used standard (not cutting edge) components that are in stock and affordable, to keep the cost down.

Also, the introductory video explicitly says that the technical documentation will be available under Creative Commons licenses allowing people to modify the designs without signing an NDA.
That said, this documentation isn’t currently on the website; the only reference is a notice saying that “By August 2013 we will post all board documentation, software releases, links to Google Group forums, where to buy, and information of interest to the community.” We’ll just have to be patient.

Update: The schematics, design files and bill of materials are now available on the Technical Features page of the Minnowboard website.

Minnowboard with a Type C “lure” expansion template attached. There are three formats for lure expansion boards – the Type C template shown here is the smallest.

Another vector to openness for the Minnowboard is the opportunity to create daughter boards dubbed lures. These are akin to Arduino shields and allow additional components to be plugged in to the main board to expand its capabilities. There are already several designs taking shape, and there’s certainly the potential for a community to arise around the creation of these lures.

What Intel have produced is an open platform with standard components and interfaces for prototyping and developing embedded systems. Unlike the RaspberryPi, which is designed as a learning device, the Minnowboard’s design (once released) could represent a starting point for both hobbyist projects and commercial products, without any royalties to be paid to its original designers (except, of course, that you’ll need to buy your chips from Intel). From Intel’s point of view, this is clearly a move to gain traction in the ARM-dominated market of embedded systems. As far as calling it an “open source computer” goes, once the designs are published, I think they’ll have done a pretty good job of justifying the term.

Images from Minnowboard Wiki users Jayneil and Prpplague, used under CC-BY-SA.

Running open source virtual machines… on Microsoft Windows Azure? Welcome to the VM Depot

Last week I gave a talk on open source as part of a Microsoft Azure for Education day at UCL in London. I was sharing the stage with Stephen Lamb from Microsoft, who gave a great overview of the various open source projects that Microsoft are engaged in, including Node.js and PHP for Windows. But the main highlight was VM Depot.


VM Depot is a way to upload, share, and deploy virtual machine images on Microsoft’s Windows Azure cloud platform. For example, you can easily find common open source packages such as Drupal and WordPress on various Linux operating systems available as VMs, so that you can create and run your own instances.

This makes it very easy to get started with open source packages, as all the dependencies, related components and configuration are set up and ready to use – for many packages this means just doing your own customisation for things like your web domain and personalising the user interface.

As well as the usual suspects such as Drupal, the VM Depot can host all kinds of other software; for example, you can deploy the Open Data portal platform CKAN. This opens up possibilities for using the service for more niche requirements: you could create a VM image of your research software and dataset to make it easier for reviewers to run your experiments, or modify an existing image to include extensions and enhancements targeting a more specialist audience – for example, a WordPress image with templates and add-ons to run as an overlay journal rather than a regular blog.

So why is Microsoft doing this?

Well, it seems to fit as part of the drive towards Microsoft being less of a software company and more of a device manufacturer and cloud services provider. When it comes to offering cloud services, what your customers choose to run matters less than making sure they can run whatever they need. For most organisations that usually means a mixture of closed source and open source packages; by offering the VM Depot, Microsoft can serve these customers as part of an existing relationship, rather than force them to go to other service providers to run open source products.

Microsoft have certainly come a long way since the infamous “cancer” remark.

For more on Microsoft and Open Source, check out Microsoft Open Technologies.

Is MariaDB replacing MySQL?

Over the past month there has been a steady trickle of announcements from organisations switching from MySQL to MariaDB.

Seal vs Dolphin!

From next month, MariaDB will replace MySQL as the default database in Fedora. Red Hat has announced it’s doing the same, and even Wikimedia has started using it.

So what is MariaDB, why is the switch happening, and what are the implications?

MariaDB is a fork of MySQL intended to be, as far as possible, a drop-in replacement for MySQL in most cases. It has pretty much the same features, with some extras, and claims performance improvements over the original.
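
The “drop-in” claim is easy to picture from the client side: code written against a MySQL driver should work unchanged when the server behind it is MariaDB, since the wire protocol, default port and SQL dialect are compatible. Here is a minimal sketch using the Node.js mysql package – the hostname and credentials are placeholders:

    import mysql from "mysql";

    // The same connection settings work whether the server at this
    // address is MySQL or MariaDB; nothing in the client changes.
    const connection = mysql.createConnection({
      host: "db.example.org", // placeholder host
      user: "app",
      password: "secret",
      database: "appdb",
    });

    connection.connect();

    connection.query("SELECT VERSION() AS version", (err, rows) => {
      if (err) throw err;
      // e.g. "5.5.x" for MySQL, "5.5.x-MariaDB" for MariaDB
      console.log(rows[0].version);
      connection.end();
    });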

However, I don’t think it’s really the pros and cons of the technology that matter so much as the backstory.

MySQL was acquired by Sun Microsystems in 2008. However, one of the original developers of MySQL, ‘Monty’ Widenius, was unhappy with the way things were working out at Sun, and left to start his own company and his own fork of MySQL, MariaDB. The later acquisition of Sun by Oracle was also viewed negatively by Widenius, who issued a call to arms to “save MySQL”.

However, Widenius wasn’t the only one concerned about the future of MySQL, and other forks appeared such as the Facebook MySQL fork and Drizzle.

When comparing MariaDB and MySQL, we’re comparing cultures of development as well as feature sets.

Oracle is perceived – rightly or wrongly – as having a conflict of interest as the owner of MySQL and also the supplier of its major closed-source competitor, leading people to suspect that the company is being slow to develop the database further.

It is also seen as being less open in its development process, and as losing the trust of the community. For example, where Oracle has added major new capabilities to MySQL, these are often closed-source extensions. Oracle also stopped using a public bug tracker and switched to its internal system instead. These moves – and others – are cited as examples of Oracle moving away from the open development model that made MySQL successful in the first place.

However, is MariaDB any more “open” than Oracle?

MariaDB is managed by the MariaDB Foundation, which has been developing its community governance model. The Foundation’s board includes some well-known figures from the Open Source world, including Simon Phipps of the Open Source Initiative and Andrew Katz of Moorcrofts; in April, Phipps was elected as the MariaDB Foundation’s CEO. On the commercial side, MariaDB service providers include Monty Program, the company founded by Widenius, and SkySQL, founded by other former MySQL employees.

This separation in governance between service providers and the development community counters the perception of “conflict of interest” found with Oracle and MySQL, and also means that the Foundation can implement policies and practices that support openness and transparency in the running of the project.

Are openness and transparency reason enough to switch?

One of the claims made by MariaDB is that it applies community patches and implements new features much more rapidly than Oracle. If MariaDB is iterating faster than MySQL, fixing bugs and implementing new technologies, without sacrificing stability and quality, then this is a key product differentiator.

So for some, switching to MariaDB may mean they are more comfortable with the governance and management of the community. For others, it’s the benefits that stem from that openness that will drive them to switch.

(It’s difficult to determine which is the prime motivator, as even where companies switch out of a concern over Oracle’s stewardship of MySQL they are more likely to couch this in terms of the technical advantages of MariaDB rather than blatantly come out and say it.)

What this story also shows is how the strategic use of forking – the “nuclear option” of open source – can be a viable strategy once a project becomes perceived by its community as going down the wrong path. And probably a better option than waiting until the company discontinues the product, and then attempting to resurrect it.

Will MariaDB replace MySQL in the long run? Widenius certainly thinks so. However, as Daniel Bartholomew points out, “the competition between MariaDB and MySQL can only be good.”

And whatever happens at least we won’t need to stop using the acronym LAMP.

Are you looking to switch to MariaDB? If so, why? Let us know in the comments.

Photo by bnorwood, used under a CC-BY Creative Commons License

Shallow versus Deep Differentiation: Do we need more copyleft in the cloud?

In a previous post I discussed two different models for open source services; the “secret source” model, which is based on providing a differentiated offering on top of an open source stack, and a copyleft model using licenses that address the “ASP loophole” such as AGPL.

Another way of looking at these two models is in terms of the level and characteristics of differentiation that they afford.

Shallow versus Deep

Sea and clouds

If a service offering – and this applies whether it’s a SaaS solution, infrastructure virtualisation or anything in between – uses a copyleft license such as AGPL, then this tends to encourage shallow differentiation. By this I mean that the service offered to users by different providers is differentiated in a way that does not involve significant changes to the core codebase. For example, service providers may differentiate on price points, service packages, location and reputation, while offering the same solution.

There can also be differentiation at the technology level, including custom configurations, styling and so on, or added features; however, under an AGPL-style license these must also be made available under the AGPL, so if service providers do want to extend and enhance the codebase, those changes are contributed back to the community. If a provider really did want deep differentiation, it would effectively have to create a fork.

If a service offering instead builds on top of an open source stack using a permissive license such as the Apache license, then it becomes possible for providers to offer deep differentiation in the services they provide; they are at liberty to make significant changes to the software without contributing this back to the community developing the core codebase. This is because, under the terms of most open source licenses, providing an online service using software is not considered “distribution” of the software.

What does this all mean?

For service providers this presents something of a quandary. On the one hand, a common software base represents a significant cost saving as development effort is pooled, reducing waste. On the other, there is a clear business case for greater differentiation to compete as the market becomes more crowded.

How this is resolved is something of a philosophical question.

It may be that, acting out of self-interest, service providers will over time balance out the issues of differentiation and pooled development regardless of any kind of licensing constraint; the cost savings and reduced risk offered by pooling development effort for the core codebase will be clear and significant, and providers will apply deep differentiation only where there is very clear benefit in doing so, while contributing back to the core codebase for everything else.

Alternatively, service providers may rush to differentiate deeply from the outset, leaving the core codebase starved of resources while each provider focusses on their own enhancements. In this scenario, copyleft licensing would be needed to sustain a critical mass of contributions to the core.

Which is it to be?

Given that OpenStack and Apache CloudStack, two of the main cloud infrastructure-as-a-service projects, are both licensed under the Apache license, we can observe over the coming year or two which seems to be the likely scenario. Under the first model, we should see the developer community and contributions for these projects continue to grow, irrespective of how deeply providers differentiate services based on the software.

Under the second scenario, we should see something rather different, in that the viability of the project should suffer even as the number of providers building services on them grows.

As of now, both projects seem to be growing in terms of contributors; here’s OpenStack (source: Ohloh):

OpenStack contributors per month; rising from 50 in 2011 to 250 in 2013

… and here is CloudStack (source: Ohloh):

CloudStack contributors per month; around 25 over 2011-2012, rising steeply to 60 in 2013

(Both projects show slightly lower numbers of commits, though that can simply reflect the greater maturity of their codebases rather than reduced viability, which is why I’ve focussed on the number of contributors.)
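
For anyone wanting to reproduce this kind of measurement without Ohloh, here is a rough sketch of one way to do it – my own illustration, assuming a local clone of the project and git on the PATH – which counts unique author emails per month:

    import { execSync } from "child_process";

    // Emit "YYYY-MM author@email" for every commit in the current repo.
    const log = execSync(
      'git log --date=format:%Y-%m --pretty=format:"%ad %ae"',
      { encoding: "utf8" }
    );

    // Bucket unique author emails by month.
    const byMonth = new Map<string, Set<string>>();
    for (const line of log.split("\n")) {
      const [month, email] = line.split(" ");
      if (!month || !email) continue;
      if (!byMonth.has(month)) byMonth.set(month, new Set());
      byMonth.get(month)!.add(email);
    }

    // Print "YYYY-MM contributor-count" in chronological order.
    for (const month of [...byMonth.keys()].sort()) {
      console.log(month, byMonth.get(month)!.size);
    }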

If the concerns over “deep differentiation” turn out to be justified, then community engagement in these two projects should suffer as effort is diverted into differentiated offerings built on them, rather than channelled into contributions back to the core projects.

Is deep differentiation really an issue for cloud?

Deep and shallow differentiation is a concept borrowed from marketing, and is sometimes used to refer to how easy it is for a competitor to copy a service offering. One example is the Domino’s Pizza “hot in 30 minutes or it’s free” service promise: it would be difficult for a competitor to copy this offering without actually changing the nature of its operation to match – it can’t just copy the tagline without risking giving away free pizza and going out of business.

In cloud services, it’s arguable how much differentiation will be in terms of software functionality and capability, and how much in the operational and marketing aspects of the services: things like pricing, reliability, support, speed, ease of use, ease of payment and so on.

If the key to success in the cloud lies in the latter, then it really doesn’t matter that most providers use basically the same software, and providers will want to take advantage of having a common, high-quality software platform with pooled development costs.

A further problem with deep differentiation in the software stack is that it could impact portability and interoperability – having extra features is great, but cloud customers also value the ability to choose providers and to switch when they need to. Providers converging on a few popular open source cloud offerings is another kind of standardisation, complementing interoperability standards such as OVF, and one that gives customers confidence that they aren’t being locked in; as well as being able to move to another provider, they also get the option to in-source the solution if they so wish.

Are there better reasons for copyleft?

It remains to be seen whether there really is a problem with the open cloud, and whether copyleft is an answer. Personally I’m not convinced there is.

However, that doesn’t mean copyleft on services isn’t important; on the contrary I think that licenses such as AGPL offer organisations a useful option when looking to make their services open.

Recent examples such as EdX highlight that AGPL is a viable alternative for licensing software that runs services, and that perhaps with greater awareness among service providers we may see more usage of it in future. For example, for the public sector it may offer an appropriate approach for making government shared services open source.

(Sea and cloud photo by Michiel Jelijs licensed under CC-BY 2.0)