Locked into Free Software? Unpicking Kuali’s AGPL Strategy

Chuck Severance recently published a post entitled How to Achieve Vendor Lock-in with a Legit Open Source License – Affero GPL, in which he criticises the use of the AGPL license, particularly its use – or at least its intended use – by Kuali. Chuck’s post is well worth reading, especially if you have an interest in the Kuali education ERP system. What I’m going to discuss here are some of the details and implications of AGPL, in particular where my take on things differs from the views Chuck expresses in his post.

Lock – image by subcircle, CC BY

Copyleft licenses such as GPL and AGPL are more restrictive than the so-called permissive licenses such as the Apache Software License and MIT-style licenses. The intent behind the additional restrictions is, from the point of view of the Free Software movement, to ensure the continuation of Free Software. The GPL license requires any modifications of code it covers to also be GPL if distributed.

With the advent of the web and cloud services, the nature of software distribution has changed; GPL software can be – and is – used to run web services. However, using a web service is not considered distributing the software, so companies and organisations using GPL-licensed code to run their sites are not required to distribute any modified source code.

Today, most cloud services operate what might be described as the “secret source” model. This uses a combination of Open Source, Free Software and proprietary code to deliver services. Sometimes the service provider will contribute back to the software projects they make use of, as this helps improve the quality of the software and helps build a sustainable community – but they are under no obligation to do so unless they actually choose to distribute code rather than use it to run a service.

The AGPL license, on the other hand, treats running the software as a website or service much like distribution, and compels anyone using a modified version to run a service to make the modified source code available to that service’s users.

AGPL has been used by projects such as Diaspora, StatusNet (the software originally behind Identi.ca – it now uses pump.io), the CKAN public data portal software developed by the Open Knowledge Foundation, and MIT’s EdX software.

[UPDATE 20 September 2014: EdX has since relicensed its AGPL component under the Apache License]

We’ve also discussed before on this blog the proposition – made quite forcefully by Eben Moglen – that the cloud needs more copyleft. Moglen has also spoken in defence of the AGPL as one of the means whereby Free Software works with cloud services.

So where is the problem?

The problem is that the restrictions of AGPL, like GPL before it, can give rise to bad business practice as well as good practice.

In a talk at Open World Forum in 2012, Bradley Kuhn, one of the original authors of AGPL, reflected that, at that time, some of the most popular uses of AGPL were effectively “shakedown practices” (in his words). In a similar manner to how GPL is sometimes used in a “bait and switch” business model, AGPL can be used to discourage use of code by competitors.

For example, as a service provider you can release the code to your service as AGPL, knowing that no one else can run a competing service without also releasing their modifications. In this way you can ensure that all services based on the code have effectively the same capabilities. This makes sense for the distributed social networking projects I mentioned earlier, as there is greater benefit in having a consistent distributed social network than in having feature differentiation among hosts.

However, in many other applications, differentiation in services is a good thing for users. For an ERP system like Kuali, there is little likelihood of anyone adopting such a system without needing to make modifications – and releasing them back under AGPL. It would certainly be difficult for another SaaS provider to compete with Kuali using its own software on the basis of extra features, as any improvements they made would have to be shared back anyway. They would need to compete in other areas, such as price or support options.

But back to Chuck’s post – what do we make of the arguments he makes against AGPL?

If we look back at the four principles of open source that I used to start this article, we quickly can see how AGPL3 has allowed clever commercial companies to subvert the goals of Open Source to their own ends:

  • Access to the source of any given work – By encouraging companies to only open source a subset of their overall software, AGPL3 ensures that we will never see the source of the part (b) of their work and that we will only see the part (a) code until the company sells itself or goes public.
  • Free Remix and Redistribution of Any Given Work – This is true unless the remixing includes enhancing the AGPL work with proprietary value-add. But the owner of the AGPL-licensed software is completely free to mix in proprietary goodness – but no other company is allowed to do so.
  • End to Predatory Vendor Lock-In – Properly used, AGPL3 is the perfect tool to enable predatory vendor lock-in. Clueless consumers think they are purchasing an “open source” product with an exit strategy – but they are not.
  • Higher Degree of Cooperation – AGPL3 ensures that the copyright holder has complete and total control of how a cooperative community builds around software that they hold the copyright to. Those that contribute improvements to AGPL3-licensed software line the pockets of commercial company that owns the copyright on the software.

On the first point, access to source code, I don’t think there is anything special about AGPL. Companies like Twitter and Facebook already use this model, opening some parts of their code as Open Source, while keeping other parts proprietary. Making the open parts AGPL makes a difference in that competitors also need to release source code, so I think overall this isn’t a valid point.

On the second point, mixing in other code, Chuck is making the point that the copyright owner has more rights than third parties, which is unarguably true. It’s also true of other licenses. I think it’s certainly the case that, for a service provider, AGPL offers some competitive advantage.

Chuck’s third point, that AGPL enables predatory lock-in, is an interesting one. There is nothing to prevent anyone from forking an AGPL project – it just has to remain AGPL. However, the copyright owner is the only party that is able to create proprietary extensions to the code without releasing them, which can be used to give an advantage.

However, this is a double-edged sword, as we’ve already seen with MySQL and MariaDB; Oracle adding proprietary components to MySQL was one of the practices that led to the MariaDB fork. Likewise, if Kuali uses its code-ownership prerogative to add proprietary components to its SaaS offering, that may precipitate a fork. Such a fork would not be able to add improvements without distributing the source code, and would instead have to differentiate itself in other ways – such as customer trust.

Finally, Chuck argues that AGPL discourages cooperation. I don’t think AGPL does this any more than GPL already does for Linux or desktop applications; what is new is extending that model to web services. However, it certainly does offer less freedom to its developer community than MIT or ASL – which is the point.

In the end customers do make choices between proprietary, Open Source, and Free Software, and companies have a range of business models they can operate when it comes to using and distributing code as part of their service offerings.

As Chuck writes:

It never bothers me when corporations try to make money – that is their purpose and I am glad they do it. But it bothers me when someone plays a shell game to suppress or eliminate an open source community. But frankly – even with that – corporations will and should take advantage of every trick in the book – and AGPL3 is the “new trick”.

As we’ve seen before, there are models that companies can use that take advantage of the characteristics of copyleft licenses and use them in a very non-open fashion.

There are also other routes to take in managing a project to ensure that this doesn’t happen; for example, adopting a meritocratic governance model and using open development practices mitigates the risk of the copyright owners acting against the interests of the user and developer community. However, as a private company, Kuali is not obliged to operate in a way that respects Free Software principles by anything other than the terms of the license itself – which, of course, as copyright owner it is free to change.

In summary, there is nothing inherently anti-open in the AGPL license itself, but combined with a closed governance model it can support business practices that are antithetical to what we would normally consider “open”.

Choosing the AGPL doesn’t automatically mean that Kuali is about to engage in bad business practices, but it does mean that the governance structure the company chooses needs to be scrutinised carefully.

Kuali governance change may herald end of ‘Community Source’ model

For quite some time now at OSS Watch we’ve struggled with the model of “Community Source” promoted by some projects within the Higher Education sector. Originating with Sakai, and then continuing with Kuali, the term always seemed confusing, given that it simply meant a consortium-governed project that released code under an open-source license.

As a governance model, a consortium differs from a meritocracy (as practised by the Apache Software Foundation), from a benevolent dictatorship, and from a single-company-driven model. It prioritises agreement amongst managers rather than developers, for example.

We produced several resources (Community Source vs. Open Source and The Community Source Development Model) to try to disambiguate both the term and the practices that go along with it, although these were never particularly popular, especially with some of the people involved in the projects themselves. If anything I believe we erred on the side of being too generous.

However, all this is about to become, well, academic. Sakai merged with JaSig to form the Apereo Foundation, which is taking a more meritocratic route, and the most high-profile project using the Community Source model – the education ERP project Kuali – has announced a move to a company-based governance model instead.

I think my colleague Wilbert Kraan summed up Community Source quite nicely in a tweet:

‘Community source’ probably reassured nervous suits when OSS was new to HE, but may not have had much purpose since

Michael Feldstein also provides a more in-depth analysis in his post Community Source Is Dead.

There’s good coverage elsewhere of the Kuali decision, so I won’t reiterate it here.

A few months ago we had a conversation with Jisc about its “prospect to alumnus” challenge, where the topic of Kuali came up. Back then we were concerned that Kuali’s governance model made it difficult to assess the degree of influence that UK institutions or Jisc might exercise without making a significant financial contribution (rather than, as in a meritocracy, by making a commitment to use and develop the software).

It’s hard to say right now whether the move to a for-profit company will make things easier or more difficult – as Michael points out in his post,

Shifting the main stakeholders in the project from consortium partners to company investors and board members does not require a change in … mindset

We’ll have to see how the changes pan out in Kuali. But for now we can at least stop talking about Community Source. I never liked the term anyway.

5 lessons for OER from Open Source and Free Software

While the OER community owes some of its genesis to the open source and free software movements, there are some aspects of how and why these movements work that I think are missing or need greater emphasis.

Open Education Week 2014

1. It’s not what you share, it’s how you create it

One of the distinctive elements of the open source software movement is the open development project. These are projects where software is developed cooperatively (though not necessarily collaboratively) in public, often by people contributing from multiple organisations. All the processes that lead to the creation and release of software – design, development, testing, planning – happen using publicly visible tools. Projects also actively try to grow their contributor base.

When a project has open and transparent governance, it’s much easier to encourage people to voluntarily contribute effort, free of charge, that far exceeds what you could afford to pay for within a closed in-house project. (Of course, you have to give up a lot of control, but really, what was that worth?)

While there are some cooperative projects in the OER space, for example some of the open textbook projects, for the most part the act of creating the resources tends to be private; either the resources are created and released by individuals working alone, or developed by media teams privately within universities.

Also, in the open source world it’s very common for multiple companies to put effort into the same software projects as a way of reducing their development costs and improving the quality and sustainability of the software. I can’t think offhand of any examples of education organisations collaborating on designing materials on a larger scale – for example, cooperating to build a complete course.

Generally, the kind of open source activity OER most often resembles is the “code dump” where an organisation sticks an open license on something it has essentially abandoned. Instead, OER needs to be about open cooperation and open process right from the moment an idea for a resource occurs.

Admittedly, the most popular forms of OER today tend to be things like individual photos, PowerPoint slides, and podcasts. That may partly be because there is no open content creation culture that makes bigger pieces easier to produce.

2. Always provide “source code”

Many OERs are distributed without any sort of “source code”. In this respect, license aside, they don’t resemble open source software so much as “freeware” distributed as executables you can’t easily pick apart and modify.

Distributing the original components of a resource makes it much easier to modify and improve. For example, where the resource is in a composite format such as a PDF, eBook or slideshow, provide all the embedded images separately too, in their original resolution, or in their original editable forms for illustrations. For documents, provide the original layout files from the DTP software used to produce them (but see also point 5).

Even where an OER is a single photo, it doesn’t hurt to distribute the original raw image as well as the final optimised version. Likewise for a podcast or video the original lossless recordings can be made available, as individual clips suitable for re-editing.

Without “source code”, resources are hard to modify and improve upon.

3. Have an infrastructure to support the processes, not just the outputs

So far, OER infrastructure has mostly been about building repositories of finished artefacts but not the infrastructure for collaboratively creating artefacts in the open (wikis being an obvious exception).

I think a good starting point would be to promote GitHub as the go-to tool for managing the OER production process. (I’m not the only one to suggest this; Audrey Watters has also blogged the idea.)

It’s such an easy way to create projects that are open from the outset, and it has a built-in mechanism for creating derivative works and contributing back improvements. It may not be the most obvious thing to use from the point of view of educators, but I think it would make it much clearer how to create OERs as an open process.
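To make that fork-and-pull-request mechanism a little more concrete, here’s a rough sketch of the workflow driven through GitHub’s public REST API from Python. Most people would simply use the website or git itself; the repository, user and branch names below are hypothetical examples, and the access token is assumed to be in an environment variable.

```python
# A minimal sketch of GitHub's fork-and-pull-request mechanism driven from
# Python via the public GitHub REST API, using the 'requests' library.
import os
import requests

API = "https://api.github.com"
HEADERS = {
    "Accept": "application/vnd.github+json",
    "Authorization": f"token {os.environ['GITHUB_TOKEN']}",  # personal access token
}

UPSTREAM = "some-university/open-stats-course"  # hypothetical OER repository

# 1. Create a personal fork of the upstream OER repository (the derivative work).
fork = requests.post(f"{API}/repos/{UPSTREAM}/forks", headers=HEADERS)
fork.raise_for_status()
print("Forked to:", fork.json()["full_name"])

# 2. After committing improvements to a branch of the fork, offer them back
#    upstream as a pull request.
pr = requests.post(
    f"{API}/repos/{UPSTREAM}/pulls",
    headers=HEADERS,
    json={
        "title": "Update week 3 exercises",
        "head": "myusername:week3-fixes",  # hypothetical user and branch
        "base": "main",
        "body": "Reworked the week 3 exercises and fixed two broken links.",
    },
)
pr.raise_for_status()
print("Opened pull request:", pr.json()["html_url"])
```

The point is not that educators should script the API, but that the “propose a change, review it, merge it” loop is built into the platform rather than bolted on afterwards.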

There have also been initiatives to do a sort of “GitHub for education” such as CourseFork that may fill the gap.

4. Have some clear principles that define what it is, and what it isn’t

There has been a lot written about OER (perhaps too much!). However, what there isn’t is a clear set of criteria that something must meet to be considered OER.

For Free Software we have the Four Freedoms as defined by FSF:

  • Freedom 0: The freedom to run the program for any purpose.
  • Freedom 1: The freedom to study how the program works, and change it to make it do what you wish.
  • Freedom 2: The freedom to redistribute copies so you can help your neighbor.
  • Freedom 3: The freedom to improve the program, and release your improvements (and modified versions in general) to the public, so that the whole community benefits.

If a piece of software doesn’t support all of these freedoms, it cannot be called Free Software. And there is a whole army of people out there who will make your life miserable if it doesn’t and you try to pass it off as such.

Likewise, to be “open source” means to support the complete Open Source Definition published by OSI. Again, if you try to pass off a project as being open source when it doesn’t support all of the points of the definition, there are a lot of people who will be happy to point out the error of your ways. And quite possibly sue you if you misuse one of the licenses.

If it isn’t open source according to the OSI definition, or free software according to the FSF definition, it isn’t some sort of “open software”. End of. There is no grey area.

It’s also worth pointing out that while there is a lot of overlap between Free Software and Open Source at a functional level, how the criteria are expressed is also fundamentally important to their respective cultures and viewpoints.

The same distinctive viewpoints or cultures that underlie Free Software vs. Open Source are also present within what might be called the “OER movement”, and there has been some discussion of the differences between what might broadly be called “open”, “free”, and “gratis” OERs which could be a starting point.

However, while there are a lot of definitions of OER floating around, no such recognised definitions and labels have emerged – no banners to rally to for those espousing these distinctions.

Now it may seem odd to suggest splitting into factions would be a way forward for a movement, but the tension between the Free Software and Open Source camps has I think been a net positive (of course those in each camp might disagree!) By aligning yourself with one or the other group you are making it clear what you stand for. You’ll probably also spend more of your time criticising the other group, and less time on infighting within your group!

Until some clear lines are drawn about what it really stands for, OER will continue to be whatever you want to make of it according to any of the dozens of competing definitions, leaving it vulnerable to openwashing.

5. Don’t make OERs that require proprietary software

OK, so most teachers and students still use Microsoft Office, and many designers use Adobe tools. However, it’s not that hard to develop resources that can be opened with and edited using free or open source software.

The key to this is to develop resources using open standards that allow interoperability with a wider range of tools.

This could become more of an issue if (or rather when) MOOC platforms start to “embrace and extend” common formats for authors to make use of their platform features. Again, there are open standards (such as IMS LTI and the Experience API) that mitigate this. This is of course where CETIS comes in!
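To give a flavour of what that portability looks like in practice, here’s a minimal sketch of recording a learner’s activity as an Experience API (xAPI) statement sent to a Learning Record Store over plain HTTP from Python. Because the statement format is an open specification, any compliant platform can read the same record back; the endpoint, credentials and activity IDs below are made-up examples.

```python
# A minimal sketch of sending one xAPI statement to a Learning Record Store.
# The LRS endpoint, credentials and activity IDs are hypothetical.
import requests

LRS_ENDPOINT = "https://lrs.example.ac.uk/xapi"   # hypothetical LRS
AUTH = ("lrs_key", "lrs_secret")                  # hypothetical HTTP Basic credentials

statement = {
    "actor": {"mbox": "mailto:learner@example.ac.uk", "name": "Example Learner"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-GB": "completed"},
    },
    "object": {
        "id": "https://example.ac.uk/oer/statistics-101/week-3",
        "definition": {"name": {"en-GB": "Statistics 101, week 3"}},
    },
}

response = requests.post(
    f"{LRS_ENDPOINT}/statements",
    json=statement,
    auth=AUTH,
    headers={"X-Experience-API-Version": "1.0.3"},
)
response.raise_for_status()
print("Stored statement id:", response.json()[0])  # the LRS returns a list of statement ids
```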

Is that it?

As I mentioned at the beginning of this post, OER is to some extent inspired by Open Source and Free Software, so it already incorporates many of the important lessons learned, such as building on (and to some extent simplifying and improving) the concept of free and open licenses. However, it’s about more than just licensing!

There may be other useful lessons to be learned and parallels drawn – add your own in the comments.

Originally posted on Scott’s personal blog

Open Source Options for Education updated

We’ve just updated our Open Source Options for Education list, which provides alternatives to common proprietary software used in schools, colleges and universities. Most of the software we list is suggested by the academic and open source communities via our publicly editable version. Some new software we’ve added in this update includes:

SageMath

SageMath is a package made from over 100 open source components including R and Python with the goal of creating “a viable free open source alternative to Magma, Maple, Mathematica and Matlab.”  Supported by the University of Washington, the project is currently trialling SageMath Cloud, a hosted service allowing instant access to the suite of SageMath tools with no setup required.
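As a taste of the territory SageMath covers, here’s a short example session – a rough sketch assuming a standard Sage installation. Sage’s input language is Python with a light preparser, so ^ means exponentiation here.

```python
# Run inside a Sage session (Sage's input language is Python with a preparser).
x = var('x')                      # declare a symbolic variable

# Exact symbolic calculus, the sort of task Maple or Mathematica is used for:
antiderivative = integrate(sin(x) * exp(x), x)
limit_value = limit(sin(x) / x, x=0)
roots = solve(x^2 - 2 == 0, x)    # exact roots, not floating-point approximations

print(antiderivative)             # an exact expression in sin, cos and e^x
print(limit_value)                # 1
print(roots)                      # [x == -sqrt(2), x == sqrt(2)]

# Plotting is built in as well:
plot(sin(x) * exp(-x), (x, 0, 10)).save('damped.png')
```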

R and R Commander

R is the go-to language for open source statistical analysis, and R Commander provides a graphical interface to make running R commands easier. Steven Muegge got in touch to let us know that he uses the two projects for teaching graduate research methods at Carleton University. Thanks, Steven!

Gibbon

Gibbon is a management system combining features of a VLE (such as resource sharing, activities and markbooks) with those of an MIS (such as attendance, timetables, and student information). The system was developed by the International College of Hong Kong. Thanks to Ross Parker for letting us know about Gibbon.

OwnCloud Documents

The recent release of OwnCloud 6 includes a new tool called OwnCloud Documents, allowing real-time collaboration on text documents. Collaborators can be other users on the OwnCloud system, or anonymous users with a link from the author. With support for LDAP and Active Directory, could this represent a viable alternative to Google Docs for privacy-conscious institutions?

Running Doom 3 BFG Edition on Linux

Over the weekend I started playing Doom 3 BFG Edition, a re-release of the mid-00s first-person shooter. The reason I’m talking about this here is that, as we’ve discussed before, id Software, who make Doom 3, have a policy of open-sourcing the code for their games.

Doom Disks – image by Pelle Wessman, CC BY-SA

Doom 3 and the BFG Edition are no different in this regard, both being open sourced and the original Doom 3 even receiving an official Linux port.  However, id never ported the BFG Edition to Linux.

Predictably, this hasn’t posed a problem for the open source community, and a bit of googling turned up a GitHub fork of id’s code with support for Linux. The code relies on the SDL and OpenAL libraries to handle input and audio respectively, but once those dependencies were installed, it compiled for me on Ubuntu 13.10 and 12.04 LTS with no problems.

The resulting binary isn’t enough on its own: actually playing the game requires the commercial data files, which aren’t distributed freely. Since the game is distributed using the Steam DRM system, you need to install a copy of the Windows Steam client, install the game, then copy the files into place.

It’s possible to install the game files using WINE, but I was using a laptop which happened to be dual-booting Windows so I installed the game as normal on Windows, then switched to Linux and created a symbolic link to the data files on the Windows disk partition.

There are a few caveats to note about the open-source version of this game. Firstly, trying to run the game with AMD graphics caused it to crash with OpenGL errors. Reading bug reports suggests this may be a problem with driver compatibility (some people have got it working), but using a system with NVidia graphics worked flawlessly. The game also uses a couple of non-free components which can’t be included in the GPL code: the Bink video codec and the “Carmack’s Reverse” shadow stencilling technique licensed from Creative. This means that the odd in-game video is missing, although this doesn’t really detract from the gameplay as the audio still plays.

The ease with which I was able to find a way to play this unsupported Windows game natively on Linux is a real testament to the open source community’s ability and willingness to solve problems and share the solutions. I was also really impressed by how well the game ran under these circumstances, showing how bright a future Linux has as a gaming platform.

Releasing a new Open Source Project – BitTorrent Sync Indicator

BitTorrent, creators of the highly popular distributed peer-to-peer file sharing protocol, recently released BitTorrent Sync, a solution for syncing folders between machines based on the BitTorrent protocol.  BTSync provides a fully distributed and encrypted alternative to services like Dropbox where all your data is synced through a third-party server.

BTSync has been released for Windows, Mac, Linux and other platforms, although the user experience on Linux isn’t quite as polished as on its counterparts – the only interface provided is via a local webserver accessed through your browser, while Windows and Mac get a nice desktop GUI with a system tray indicator. I found this a pain, as I’d sometimes finish making changes to a synced file and want to shut my computer down quickly, but had to open my browser first to check whether the file had finished syncing.

While BTSync isn’t Open Source, the developers are very open to feedback from users and developers. I quickly realised that I’d be able to use data from the web interface to create a desktop indicator for Linux, so in the open source tradition of scratching my own itch, I wrote a Python script that gave me an indicator showing whether a file was syncing. When it was workable, I stuck it on GitHub with an open source licence and made a post on the BitTorrent Labs forum.
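The idea behind the script is very simple; the sketch below is a stripped-down, hypothetical illustration rather than the real code – the web UI’s endpoints are undocumented, so the URL and JSON fields here are invented, and the actual script updates an Ubuntu AppIndicator icon rather than printing to a terminal.

```python
# A much-simplified sketch of the idea behind the indicator: poll the local
# BTSync web UI for transfer activity and surface it outside the browser.
# The URL, credentials and JSON fields below are hypothetical placeholders.
import time
import requests

GUI_URL = "http://localhost:8888/gui/status"      # hypothetical status endpoint
AUTH = ("admin", "password")                      # web UI credentials, if set

def syncing() -> bool:
    """Return True if any folder currently has transfers in progress."""
    data = requests.get(GUI_URL, auth=AUTH, timeout=5).json()
    return any(folder.get("transferring") for folder in data.get("folders", []))

if __name__ == "__main__":
    while True:
        state = "syncing..." if syncing() else "up to date"
        print(f"BitTorrent Sync: {state}")
        time.sleep(10)   # the real indicator polls on a timer and swaps its icon
```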

I then noticed another post on the forum by a developer called Leo Moll – he was packaging BitTorrent Sync for Ubuntu and Debian, and as I’d written my script with Ubuntu in mind, I asked if he’d like to include it in his packages. He agreed, and before long my indicator could be installed alongside a well-integrated BitTorrent Sync client.

This is when things really took off. With it being so easy to get hold of my indicator, people started using it and reporting bugs on the GitHub page. Almost as quickly, they started submitting patches. I got a new set of better animated icons for the indicator, various bugfixes for cases I hadn’t come across, new feature requests, and even someone packaging the indicator for Arch Linux.

Alongside this Leo and I were contacted by another developer who was packaging BitTorrent Sync for Debian and Ubuntu.  We had a discussion and worked out where best to focus our efforts to avoid duplicating each other’s work and creating conflicting packages.  Leo and I are now discussing merging our codebases to streamline our work and allow for better integration.

In the space of a month, what started as a little hack to make my life a bit easier has become a vibrant project with an engaged community of developers and users. The real key, I think, has been to make it as simple as possible for users to run the software, and to show that I’m listening and responding to feedback.

CIO.com – How to Choose the Best License for Your Open Source Software Project

You need sound coding skills to create good software, but the success of an open source project can also depend on something much less glamorous: your choice of software license.

Last week I spoke to Paul Rubens of CIO.com about the issues that need to be considered when deciding which licence to use when releasing your code, including why a licence is necessary, the varieties of Free and Open Source Software licences, and how you provide licenses for the non-software parts of your project.

You can read the full article at CIO.com.

Is Open Source Insecure?

tl;dr: Open Source is inherently no more or less secure than closed source software.

Banksy stencil with security camera

For a more thorough answer to this question, we’ve just updated our briefing note, “Is Open Source Software Insecure? An Introduction To The Issues”, in which we look at some of the ways in which software is considered secure, and at some of the common claims both for and against the security of Free & Open Source Software.

On the whole there are no significant differences in security between closed and open source software as a category. The key differences are between individual products, and the governance processes around security – something which applies to both closed and open source software.

Claims that Open Source is inherently insecure – or, conversely, that it is inherently more secure – are unfounded and should be challenged, particularly in the process of selecting and procuring software. Accepting such a generalisation may actually be increasing security risks for the organisation, by excluding the most fit-for-purpose solutions from consideration.

Photo by nolifebeforecoffee of a stencil by Banksy.

OSS Watch at ALT-C next week

I’ll be at ALT-C next Tuesday to talk about Open Source Options for Education as part of the “OERs and OSSs” session in the morning. I’ll also be around for the rest of the day, so feel free to collar me for a chat about anything OSS-related!

Here’s the session details to whet your appetite:

Levelling the playing field for open source in education and public sector

Open Source Software (OSS) offers many potential benefits for procuring organisations, including reduced costs and greater flexibility.

The UK Cabinet Office has taken an active role in levelling the playing field for Open Source Software (OSS) in the procurement of IT systems in the public sector.

This has included a set of open standards principles that favour Royalty-Free standards (UK Cabinet Office 2012a), and a procurement toolkit that includes open source options for commonly procured types of system (UK Cabinet Office 2012b).

These interventions are necessary, as many organisations in the public sector have procurement policies and processes that – whether intentionally or otherwise – exclude open source alternatives from selection, even where it would save organisations money or provide them with the systems that best fit their needs.

This also applies within the education sectors, and OSS Watch, based at the University of Oxford, has worked with the Cabinet Office on extending this guidance to the education sector, publishing Open Source Options for Education (Johnson et al., 2012). This lists open source alternatives for many IT solutions used in education, including subject-specific applications.

In this session we will introduce Open Source Options and the Cabinet Office guidance, and explain how it can be used to open up procurement in education institutions.
We will also invite delegates to contribute their own suggestions for open source alternatives they have used in their own work to include in the options.

UK Cabinet Office. (2012a). Open Standards: Open Opportunities – flexibility and efficiency in government IT. https://www.gov.uk/government/consultations/open-standards-open-opportunities-flexibility-and-efficiency-in-government-it

UK Cabinet Office. (2012b). Open Source Options. https://www.gov.uk/government/publications/open-source-procurement-toolkit

Johnson, M., Wilson, S., Wilson, R. (2012). Open Source Options for Education. http://www.oss-watch.ac.uk/resources/ossoptionseducation