Locked into Free Software? Unpicking Kuali’s AGPL Strategy

Chuck Severance recently published a post entitled How to Achieve Vendor Lock-in with a Legit Open Source License – Affero GPL, in which he criticises the use of the AGPL license, particularly its use – or at least, intended use – by Kuali. Chuck’s post is well worth reading – especially if you have an interest in the Kuali education ERP system. What I’m going to discuss here are some of the details and implications of AGPL, in particular where my take on things differs from the views that Chuck expresses in his post.


image by subcircle CC-BY

Copyleft licenses such as GPL and AGPL are more restrictive than the so-called permissive licenses such as the Apache Software License and MIT-style licenses. The intent behind the additional restrictions is, from the point of view of the Free Software movement, to ensure the continuation of Free Software. The GPL license requires any modifications of code it covers to also be GPL if distributed.

With the advent of the web and cloud services, the nature of software distribution has changed; GPL software can – and is – used to run web services. However, using a web service is not considered distributing the software, and so companies and organisations using GPL-licensed code to run their site are not required to distribute any modified source code.

Today, most cloud services operate what might be described as the “secret source” model. This uses a combination of Open Source, Free Software and proprietary code to deliver services. Sometimes the service provider will contribute back to the software projects they make use of, as this helps improve the quality of the software and helps build a sustainable community – but they are under no obligation to do so unless they actually choose to distribute code rather than use it to run a service.

The AGPL license, on the other hand, treats deployment of websites and services as “distribution”, and compels anyone using the software to run a service to also distribute the modified source code.

AGPL has been used by projects such as Diaspora, StatusNet (the software originally behind Identi.ca – it now uses pump.io), the CKAN public data portal software developed by the Open Knowledge Foundation, and MIT’s EdX software.

We’ve also discussed before on this blog the proposition – made quite forcefully by Eben Moglen – that the cloud needs more copyleft. Moglen has also spoken in defence of the AGPL as one of the means whereby Free Software works with cloud services.

So where is the problem?

The problem is that the restrictions of AGPL, like GPL before it, can give rise to bad business practice as well as good practice.

In a talk at Open World Forum in 2012, Bradley Kuhn, one of the original authors of AGPL, reflected that, at that time, some of the most popular uses of AGPL were effectively “shakedown practices” (in his words). In a similar manner to how GPL is sometimes used in a “bait and switch” business model, AGPL can be used to discourage use of code by competitors.

For example, as a service provider you can release the code to your service as AGPL, knowing that no-one else can run a competing service without sharing their modifications with you. In this way you can ensure that all services based on the code have effectively the same level of capabilities. This makes sense when thinking about the distributed social networking projects I mentioned earlier, as there is greater benefit in having a consistent distributed social network than having feature differentiation among hosts.

However, in many other applications, differentiation in services is a good thing for users. For an ERP system like Kuali, there is little likelihood of anyone adopting such a system without needing to make modifications – and releasing them back under AGPL. It would certainly be difficult for another SaaS provider using Kuali’s software to compete on extra features, as any improvements they made would automatically have to be shared back with Kuali anyway. They would instead need to compete in other areas, such as price or support options.

But back to Chuck’s post – what do we make of the arguments he makes against AGPL? He writes:

If we look back at the four principles of open source that I used to start this article, we quickly can see how AGPL3 has allowed clever commercial companies to subvert the goals of Open Source to their own ends:

  • Access to the source of any given work – By encouraging companies to only open source a subset of their overall software, AGPL3 ensures that we will never see the source of the part (b) of their work and that we will only see the part (a) code until the company sells itself or goes public.
  • Free Remix and Redistribution of Any Given Work – This is true unless the remixing includes enhancing the AGPL work with proprietary value-add. But the owner of the AGPL-licensed software is completely free to mix in proprietary goodness – but no other company is allowed to do so.
  • End to Predatory Vendor Lock-In – Properly used, AGPL3 is the perfect tool to enable predatory vendor lock-in. Clueless consumers think they are purchasing an “open source” product with an exit strategy – but they are not.
  • Higher Degree of Cooperation – AGPL3 ensures that the copyright holder has complete and total control of how a cooperative community builds around software that they hold the copyright to. Those that contribute improvements to AGPL3-licensed software line the pockets of the commercial company that owns the copyright on the software.

On the first point, access to source code, I don’t think there is anything special about AGPL. Companies like Twitter and Facebook already use this model, opening some parts of their code as Open Source while keeping other parts proprietary. Making the open parts AGPL does make a difference, in that competitors must also release their changes to that code – but withholding the proprietary parts is not specific to AGPL, so overall I don’t think this is a valid point.

On the second point, mixing in other code, Chuck is making the point that the copyright owner has more rights than third parties, which is unarguably true. It’s also true of other licenses. I think it’s certainly the case that, for a service provider, AGPL offers some competitive advantage.

Chuck’s third point, that AGPL enables predatory lock-in, is an interesting one. There is nothing to prevent anyone from forking an AGPL project – it just has to remain AGPL. However, the copyright owner is the only party that is able to create proprietary extensions to the code without releasing them, which can be used to give an advantage.

However, this is a two-edged sword, as we’ve seen already with MySQL and MariaDB; Oracle adding proprietary components to MySQL is one of the practices that led to the MariaDB fork. Likewise, if Kuali uses its code ownership prerogative to add proprietary components to its SaaS offering, that may precipitate a fork. Such a fork would not have the ability to add improvements without distributing source code, but would instead have to differentiate itself in other ways – such as customer trust.

Finally, Chuck argues that AGPL discourages cooperation. I don’t think AGPL does this any more than GPL already does for Linux or desktop applications; what is new is extending that model to web services. However, it certainly does offer less freedom to its developer community than MIT or ASL – which is the point.

In the end customers do make choices between proprietary, Open Source, and Free Software, and companies have a range of business models they can operate when it comes to using and distributing code as part of their service offerings.

As Chuck writes:

It never bothers me when corporations try to make money – that is their purpose and I am glad they do it. But it bothers me when someone plays a shell game to suppress or eliminate an open source community. But frankly – even with that – corporations will and should take advantage of every trick in the book – and AGPL3 is the “new trick”.

As we’ve seen before, there are models that companies can use that take advantage of the characteristics of copyleft licenses and use them in a very non-open fashion.

There are also other routes to take in managing a project to ensure that this doesn’t happen; for example, adopting a meritocratic governance model and using open development practices mitigates the risk of the copyright owners acting against the interests of the user and developer community. However, as a private company there is nothing obliging Kuali to operate in a way that respects Free Software principles other than the terms of the license itself – which, of course, as copyright owner it is free to change.

In summary, there is nothing inherently anti-open in the AGPL license itself, but combined with a closed governance model it can support business practices that are antithetical to what we would normally consider “open”.

Choosing the AGPL doesn’t automatically mean that Kuali is about to engage in bad business practices, but it does mean that the governance structure the company chooses needs to be scrutinised carefully.

Kuali governance change may herald end of ‘Community Source’ model

For quite some time now at OSS Watch we’ve struggled with the model of “Community Source” promoted by some projects within the Higher Education sector. Originating with Sakai, and then continuing with Kuali, the term always seemed confusing, given that it simply meant a consortium-governed project that released code under an open-source license.

As a governance model, a consortium differs from a meritocracy (as practised by the Apache Software Foundation), from a benevolent dictatorship, and from a single-company-driven model. It prioritises agreement amongst managers rather than developers, for example.

We produced several resources (Community Source vs. Open Source and The Community Source Development Model) to try to disambiguate both the term and the practices that go along with it, although these were never particularly popular, especially with some of the people involved in the projects themselves. If anything I believe we erred on the side of being too generous.

However, all this is about to become, well, academic. Sakai merged with JaSig to form the Apereo Foundation, which is taking a more meritocratic route, and the most high-profile project using the Community Source model – the education ERP project Kuali – has announced a move to a company-based governance model instead.

I think my colleague Wilbert Kraan summed up Community Source quite nicely in a tweet:

‘Community source’ probably reassured nervous suits when OSS was new to HE, but may not have had much purpose since

Michael Feldstein also provides a more in-depth analysis in his post Community Source Is Dead.

There’s good coverage elsewhere of the Kuali decision, so I won’t reiterate it here.

A few months ago we had a conversation with Jisc about its prospect to alumnus challenge, where the topic of Kuali came up. Back then we were concerned that its governance model made it difficult to assess the degree of influence that UK institutions or Jisc might exercise without making a significant financial contribution (rather than, as in a meritocracy, making a commitment to use and develop the software).

It’s hard to say right now whether the move to a for-profit company will make things easier or more difficult – as Michael points out in his post,

Shifting the main stakeholders in the project from consortium partners to company investors and board members does not require a change in … mindset

We’ll have to see how the changes pan out in Kuali. But for now we can at least stop talking about Community Source. I never liked the term anyway.

Rogō: an open source solution for high-stakes assessment

Rogo screenshots

If you’re looking for an open source library for adding some whizz-bang stuff to your website, or an open source text editor or graphics programme, then you’ve got plenty of options to choose from. But when it comes to solutions for core business functions, then your choices are usually a bit more restricted. So when I heard about Rogō I was immediately interested, as it tackles a business problem in education for which there are very few solutions – managing high stakes, summative assessment. So I caught up with Simon Wilkinson from the project at the ALT-C conference in Manchester to find out more.

There are plenty of projects out there that let you create and deliver formative, low-stakes assessments – self tests and diagnostics for example – but summative assessment is a whole different class of problem, where the critical features are things like security, reliability, performance and trust.

Rogō started out in 2003 as TouchStone, an in-house assessment management system used by schools within the Faculty of Medicine and Health Sciences at the University of Nottingham, before being selected by central IT services to be a core supported assessment platform at the University.

Since then the software has changed its name (Rogō is Latin for “I ask”, in case you were wondering) and has been transitioning from an in-house system to a much more generic open source solution with the help of JISC (and OSS Watch of course).

Over the past year the Rogō team has been working with five other institutions, helping them install the software, integrate it with their systems and run assessments. This experience is then informing the community strategy for Rogō. Following on from a consultation with OSS Watch the team added a public issue tracker, selected a license, and started developing its community engagement processes.

However, as Simon Wilkinson at Rogō admits, the project is still very much at a “fledgling” stage as an open project. For Rogō, one key motivation for going the open source route is that “it mitigates against risk”; or as Simon puts it “if we [the core team at Nottingham] all get run over by a double decker the community can still pick it up.”

To support this objective, the team are working to foster a developer community, starting with their set of partner institutions to see how closely they are able to engage with the project – can they move from being prospective users piloting the system, to active users contributing bug reports and requirements, to contributors of actual code?

“Where we are now is a bit mixed; some of our partners struggled to get the software installed but they’re all there now, and Oxford are the most prolific in terms of posting tickets”. But how can they move from here to contributing to the code?

“One weakness identified of Rogō as a result of the review [performed by OSS Watch] was that we don’t have a clearly articulated design” says Wilkinson, “With a clearer articulation I think more people will come on board, and build using the same design ethos”. To this end, the team are focussing on improving documentation – in particular how they explain the overall structure and concepts of the platform to make it much easier for potential developers to understand how the system works, and what core terms and concepts mean; the term “course”, for example, means something different in Rogō than it does in Moodle.

For Rogō to go from a fledgling to a sustainable open project, the team need to look at a wide range of issues – how they can support and foster a more diverse user and developer community, how to improve documentation and installation processes, how to relate the software to services such as support, consultancy and hosting. As with any enterprise solution, software-as-a-service (SaaS) is also one of the options the team needs to consider as well as locally hosted and integrated systems.

A lot of work also needs to go into making core services such as authentication more pluggable, to support commonly used single-sign-on services such as CoSign (used by institutions including the University of Leicester) or WebAuth (used by Oxford). This is going to be a particularly good test of how well Rogō can move beyond being a single-institution system: integrating with various kinds of authentication services is a necessity for other organisations to adopt Rogō – but it’s also something which doesn’t provide any direct benefit to Nottingham, where the integration is already complete.

Looking ahead, I think Rogō has some key advantages over other “fledgling” open source projects. It has a long track record of use at Nottingham, something very important for establishing trust in a system that performs such a critical role. It is also in a niche which has very few other solutions – there is, for example, only one real closed-source competitor in UK HE – with the potential to gain a significant core of committed users and developers. I think we’re going to hear a lot more of Rogō in the future. And maybe even learn how to say it properly.

If you’d like to find out more about Rogō, you can visit the Rogō site at the University of Nottingham.

If you have a project in your institution and are looking to go open source, visit the OSS Watch site for more information and to find out how to get in touch with us.

Thanks to Simon Wilkinson for taking the time to meet up with me, and to ALT-C 2012.

Microsoft’s OOXML Wins ISO Approval

Perhaps wary that the date might detract from the news, ISO – the International Organization for Standardization – waited until today before announcing that Microsoft’s Office Open XML (OOXML) document description schema has finally been accepted as an ISO standard as of April 1, 2008. There has been a long and bitter battle over whether this schema should be adopted. For one thing, an ISO-approved XML standard for describing office documents already exists in the form of OpenDocument, created in association with Sun Microsystems by the Organization for the Advancement of Structured Information Standards, or OASIS. Many argue that having multiple standards for the same objects defeats the purpose of establishing standards in the first place. While this is on the face of it a reasonable argument, it seems a little Utopian to expect complete global unanimity on these subjects, particularly where such valuable commercial interests are at stake. After all, the world has not even managed to agree on a standard standards body, so expecting agreement at any lower level seems over-optimistic. Microsoft’s OOXML has been a standard according to ECMA International since 2006, while OASIS approved OpenDocument back in 2005.

So why is there such bitterness over this issue? Well, some of it comes from the perception that OOXML is in itself an inadequate standard which has triumphed through Microsoft’s expertise at lobbying ISO member bodies for their votes. Critics point out that the standard itself is incredibly long and complex – over six thousand pages. It has also been widely observed that rather than trying to select a set of characteristics that need to be described in order to define a document minimally and efficiently, OOXML instead describes a huge set of overlapping characteristics that define the many different ways Microsoft has described documents over the almost twenty-year life of the Microsoft Office product. It is easy to see why they have done this; it greatly facilitates conversion of all legacy documents into the new format. Still, it also results in a swollen specification that competitors will find very difficult to implement in their products. For example, OOXML defines many functions such as shapeLayoutLikeWW8, which instructs a rendering application to arrange text around a shape in the same way as Microsoft’s Word 97. Clearly Microsoft will have an advantage over competitors in making their products reliably behave in these ways.

Back in September 2007 OOXML lost an adoption vote at ISO, partly as a result of muscular lobbying from the free and open source communities, and hundreds of changes to the standard were requested by the voting members. While many of these were implemented by Microsoft and ECMA, the majority remained unimplemented at the time of OOXML’s approval.

Another controversial aspect of the OOXML standard is Microsoft’s patent non-enforcement promise that accompanies it. International standards must at the very least include fair and non-discriminatory terms for the licensing of patents that their use might infringe. Generally the standards bodies prefer that associated patents are licensed at no cost, and this is essentially what Microsoft has done with their Open Specification Promise. It promises that Microsoft will not enforce their patents against anyone as a result of their activities implementing OOXML readers, writers or renderers. However, Microsoft makes no explicit promise that subsequent versions of OOXML will also be covered, merely saying that they aim to continue the promise in areas where they continue to engage with open standards bodies. This has alarmed many people, who point to a possible future where everyone has adopted OOXML only to find that Microsoft withdraws from engagement with standards bodies and also withdraws its patent promise for subsequent versions. In comparison, Sun’s Non-Assertion Covenant for OpenDocument offers a perpetual promise not to sue covering both version 1.0 and all subsequent versions. In the run-up to ISO’s decision, the Software Freedom Law Center (SFLC), a free-and-open-source-supporting public interest legal practice, released a document filled with dire warnings about Microsoft’s patent promise, telling anyone writing software under the GNU General Public License to shun it. SFLC’s argument is twofold. Firstly, they argue that, despite the promise, a piece of multi-purpose code might be protected when used to implement the standard but infringing when used for something else. Secondly, they argue that Microsoft’s failure to extend the promise to future revisions of OOXML means that projects attempting to progressively implement newer versions of the standard may hit a legal brick wall down the line.

Are these worries justified? Certainly the SFLC’s first point is well taken, given the propensity of free and open source developers to repurpose code. The second point is less persuasive, I think, and a little opaquely worded in their document. To be clear, implementations of the current version of OOXML will always be protected from patent action by Microsoft, whether they withdraw the promise from future versions or not (provided the code in question is actually used to implement the standard). As to whether Microsoft will actually withdraw the promise from future versions, that is difficult to predict. Microsoft got into the open standards game in the first place in order to win procurement contracts – often in the public sector – where open standards are listed as pre-requisites. While it may be notionally possible for Microsoft to partially re-enclose their format by either withdrawing the promise from a future version or withdrawing from the open standards process altogether, the practicality of such a move would depend heavily on how Microsoft’s users would respond to it. Thus the future of the standard really depends less on Microsoft’s whim and more on ourselves and the organisations for which we work.

XCRI: standard course information

At the recent IWMW, I went to a session on XCRI. Unfortunately I was too busy listening to take detailed notes, and the presentation slides don’t appear to be on the web.

XCRI is a new standard for exchanging post-compulsory course information. Universities, further education, adult learning centres, vocational agencies and continuing professional development providers can all publish information about their courses, enabling careers advisers, institutions and government agencies to find the relevant information on courses in order to encourage people to enrol in them.

Previously there was no standard format for such information and the main consumers of it all require it in different forms. UCAS is a major consumer, as are any number of different government schemes aimed at increasing the take-up of educational opportunities and regional development programs aiming to tackle unemployment by retraining and upskilling. Institutions also typically have their own course catalogue of some description too. Keeping all of these in sync, both with each other and with what students of the course actually get taught is a significant challenge.

XCRI is an XML standard similar in nature to Atom: (a) it’s plain XML (for those people who want to keep things simple) with a mapping to RDF (for those wanting generalised knowledge representation); (b) it uses as few tags as possible, and wherever possible those tags reuse definitions widely used elsewhere; (c) a feed is a list of items.

To make publishing XCRI easier, the standard assumes (but doesn’t enforce) that the feed is merely a text file on a webserver representing all of an institution’s forthcoming courses. This is to explicitly encourage batch export and validation of XCRI from legacy systems, which is expected to be the dominant form of generation for most institutions for some time.
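Because a feed is just a plain XML file listing items, consuming one needs nothing beyond a standard XML parser. Here is a sketch of reading such a feed; note that the element names used below (catalog, provider, course, title, start) are a simplified, hypothetical subset for illustration, not the actual XCRI schema.

```python
# Sketch: parsing a simplified, XCRI-style course feed with the
# Python standard library. Element names are illustrative only.
import xml.etree.ElementTree as ET

FEED = """\
<catalog>
  <provider>
    <title>Example University</title>
    <course>
      <title>Introduction to Open Source</title>
      <start>2008-09-01</start>
    </course>
    <course>
      <title>XML for Beginners</title>
      <start>2008-10-01</start>
    </course>
  </provider>
</catalog>"""


def list_courses(xml_text: str) -> list[tuple[str, str]]:
    """Return (course title, start date) pairs from a feed."""
    root = ET.fromstring(xml_text)
    return [
        (course.findtext("title"), course.findtext("start"))
        for course in root.iter("course")
    ]
```

The flat “one file, one list of courses” shape is what makes the batch-export route from legacy systems plausible: an institution can regenerate the whole file nightly, and consumers like UCAS or careers services can re-read it whole.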

XCRI is a new standard, and its website is still under construction, but some of the community members have websites with decent information on XCRI. Indeed the community building around XCRI is very impressive, with support from a wide variety of institutions.

If you’ve got an open source or open development project you’re trying to build a community around, why not join the new community-development mailing list that we at OSS Watch have recently started? Unfortunately, no, we can’t claim the success of XCRI had anything to do with us, but we can certainly answer your questions and give you pointers.

Microsoft, Verisign and Partners to Collaborate with OpenID

OpenID is an open, decentralised, free framework for user-centric digital identity. The goal is to release every part of this work under the most liberal licences possible, so there’s no money or licensing or registering required to play. It benefits the community as a whole if something like this exists, and we’re all a part of the community.

Microsoft and VeriSign, along with other partners, have announced that they “will collaborate on interoperability between OpenID and Windows CardSpace(TM) to make the Internet safer and easier to use.”

What interests me in this announcement is the word “collaborate”. I can almost hear the MS sceptics groaning, but is this announcement different?

OpenID was originally specified without any specific authentication method in mind. Brad Fitzpatrick, the original creator of OpenID, said, “Now people ask me what I think about Microsoft supporting it, using their InfoCards as the method of authentication…. I think it’s great! So far I’ve seen Kerberos integration for OpenID, voiceprint biometric auth (call a number and read some words), Jabber JID-Ping auth, etc…. all have different trade-offs between convenience and security. But as more people have CardSpace on their machines, users should get both convenience and security.”

CardSpace is claimed to provide significant anti-phishing, privacy, and convenience benefits to users. Scott Kveton, CEO of JanRain (another of the partners in this agreement), says, “Windows CardSpace is shipping with Vista today and is a well thought-out technology that helps address many of the privacy and security concerns that people have had with OpenID. OpenID helps users describe their identity across many sites in a public fashion. The two together are very complementary products and each has its strength.”

This looks like a true collaboration between the OpenID community, Microsoft and others. From what I have seen, all parties are happy with the deal and there appears to be no evidence of one “side” having to compromise. A true victory for open development? I think so, but only time will tell for certain.