Kuali governance change may herald end of ‘Community Source’ model

For quite some time now at OSS Watch we’ve struggled with the model of “Community Source” promoted by some projects within the Higher Education sector. Originating with Sakai, and then continuing with Kuali, the term always seemed confusing, given that it simply meant a consortium-governed project that released code under an open-source license.

As a governance model, a consortium differs from a meritocracy (as practised by the Apache Software Foundation), a benevolent dictatorship, and a single-company-driven model. It prioritises agreement amongst managers rather than developers, for example.

We produced several resources (Community Source vs. Open Source and The Community Source Development Model) to try to disambiguate both the term and the practices that go along with it, although these were never particularly popular, especially with some of the people involved in the projects themselves. If anything I believe we erred on the side of being too generous.

However, all this is about to become, well, academic. Sakai merged with JaSig to form the Apereo Foundation, which is taking a more meritocratic route, and the most high-profile project using the Community Source model – the education ERP project Kuali – has announced a move to a company-based governance model instead.

I think my colleague Wilbert Kraan summed up Community Source quite nicely in a tweet:

‘Community source’ probably reassured nervous suits when OSS was new to HE, but may not have had much purpose since

Michael Feldstein also provides a more in-depth analysis in his post Community Source Is Dead.

There’s good coverage of the Kuali decision elsewhere, so I won’t reiterate it here.

A few months ago we had a conversation with Jisc about its “prospect to alumnus” challenge, where the topic of Kuali came up. Back then we were concerned that Kuali’s governance model made it difficult to assess the degree of influence that UK institutions or Jisc might exercise without making a significant financial contribution (rather than, as in a meritocracy, making a commitment to use and develop the software).

It’s hard to say right now whether the move to a for-profit company will make things easier or more difficult – as Michael points out in his post,

Shifting the main stakeholders in the project from consortium partners to company investors and board members does not require a change in … mindset

We’ll have to see how the changes pan out in Kuali. But for now we can at least stop talking about Community Source. I never liked the term anyway.

Rogō: an open source solution for high-stakes assessment

[Image: Rogō screenshots]

If you’re looking for an open source library for adding some whizz-bang stuff to your website, or an open source text editor or graphics program, then you’ve got plenty of options to choose from. But when it comes to solutions for core business functions, your choices are usually rather more restricted. So when I heard about Rogō I was immediately interested, as it tackles a business problem in education for which there are very few solutions – managing high-stakes, summative assessment. So I caught up with Simon Wilkinson from the project at the ALT-C conference in Manchester to find out more.

There are plenty of projects out there that let you create and deliver formative, low-stakes assessments – self tests and diagnostics for example – but summative assessment is a whole different class of problem, where the critical features are things like security, reliability, performance and trust.

Rogō started out in 2003 as TouchStone, an in-house assessment management system used by schools within the Faculty of Medicine and Health Sciences at the University of Nottingham, before being selected by central IT services to be a core supported assessment platform at the University.

Since then the software has changed its name (Rogō is Latin for “I ask”, in case you were wondering) and has been transitioning from an in-house system to a much more generic open source solution with the help of JISC (and OSS Watch, of course).

Over the past year the Rogō team has been working with five other institutions, helping them install the software, integrate it with their systems and run assessments. This experience is then informing the community strategy for Rogō. Following on from a consultation with OSS Watch the team added a public issue tracker, selected a license, and started developing its community engagement processes.

However, as Simon Wilkinson at Rogō admits, the project is still very much at a “fledgling” stage as an open project. For Rogō, one key motivation for going the open source route is that “it mitigates against risk”; or as Simon puts it “if we [the core team at Nottingham] all get run over by a double decker the community can still pick it up.”

To support this objective, the team are working to foster a developer community, starting with their set of partner institutions to see how closely each is able to engage with the project – can they move from being prospective users piloting the system, to active users contributing bug reports and requirements, to actually contributing code?

“Where we are now is a bit mixed; some of our partners struggled to get the software installed but they’re all there now, and Oxford are the most prolific in terms of posting tickets”. But how can they move from here to contributing to the code?

“One weakness identified of Rogō as a result of the review [performed by OSS Watch] was that we don’t have a clearly articulated design” says Wilkinson, “With a clearer articulation I think more people will come on board, and build using the same design ethos”. To this end, the team are focussing on improving documentation – in particular how they explain the overall structure and concepts of the platform to make it much easier for potential developers to understand how the system works, and what core terms and concepts mean; the term “course”, for example, means something different in Rogō than it does in Moodle.

For Rogō to go from a fledgling to a sustainable open project, the team need to look at a wide range of issues – how to support and foster a more diverse user and developer community, how to improve documentation and installation processes, and how to relate the software to services such as support, consultancy and hosting. As with any enterprise solution, software-as-a-service (SaaS) is one of the options the team need to consider alongside locally hosted and integrated systems.

A lot of work also needs to go into making core services such as authentication more pluggable, to support commonly used single-sign-on services such as CoSign (used by institutions including the University of Leicester) or WebAuth (used by Oxford). This is going to be a particularly good test of how well Rogō can move beyond being a single-institution system: integrating with various kinds of authentication services is a necessity for other organisations to adopt Rogō – but it’s also something which doesn’t provide any direct benefit to Nottingham, where the integration is already complete.
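
To make the idea of a pluggable service concrete, here is a minimal sketch of what such an authentication abstraction can look like. It is written in Python purely for illustration – it is not taken from the Rogō codebase, and all the class and field names are invented:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Request:
    """Minimal stand-in for an incoming HTTP request."""
    environ: dict = field(default_factory=dict)  # server-set variables
    params: dict = field(default_factory=dict)   # submitted form fields

class AuthBackend(ABC):
    """Interface that every single-sign-on integration implements."""

    @abstractmethod
    def authenticate(self, request: Request) -> Optional[str]:
        """Return a username for a valid request, or None to pass."""

class RemoteUserBackend(AuthBackend):
    """WebAuth-style SSO: the web server validates the user's ticket
    and exposes the result as a REMOTE_USER variable."""

    def authenticate(self, request: Request) -> Optional[str]:
        return request.environ.get("REMOTE_USER")

class LocalPasswordBackend(AuthBackend):
    """Fallback local accounts (plain-text passwords only for brevity)."""

    def __init__(self, users: dict):
        self.users = users  # {username: password}

    def authenticate(self, request: Request) -> Optional[str]:
        user = request.params.get("username")
        if user and self.users.get(user) == request.params.get("password"):
            return user
        return None

def authenticate(request: Request, backends: list) -> Optional[str]:
    """Try each configured backend in turn; each institution decides
    which backends to enable and in what order."""
    for backend in backends:
        user = backend.authenticate(request)
        if user:
            return user
    return None

# Example: an institution enabling SSO first, local accounts second.
backends = [RemoteUserBackend(), LocalPasswordBackend({"alice": "s3cret"})]
print(authenticate(Request(environ={"REMOTE_USER": "bob"}), backends))  # bob
```

The point of the design is that the core application only ever calls authenticate(); supporting CoSign, WebAuth or anything else then means writing one new backend class and enabling it in local configuration, rather than changing core code.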

Looking ahead, I think Rogō has some key advantages over other “fledgling” open source projects. It has a long track record of use at Nottingham, something very important for establishing trust in a system that performs such a critical role. It is also in a niche which has very few other solutions – there is, for example, only one real closed-source competitor in UK HE – with the potential to gain a significant core of committed users and developers. I think we’re going to hear a lot more of Rogō in the future. And maybe even learn how to say it properly.

If you’d like to find out more about Rogō, you can visit the Rogō site at the University of Nottingham.

If you have a project in your institution and are looking to go open source, visit the OSS Watch site for more information and to find out how to get in touch with us.

Thanks to Simon Wilkinson for taking the time to meet up with me, and to ALT-C 2012.

Microsoft’s OOXML Wins ISO Approval

Perhaps wary that the date might detract from the news, ISO – the International Organization for Standardization – waited until today before announcing that Microsoft’s Office Open XML (OOXML) document description schema has finally been accepted as an ISO standard as of April 1, 2008. There has been a long and bitter battle over whether this schema should be adopted. For one thing, an ISO-approved XML standard for describing office documents already exists in the form of OpenDocument, created by the Organization for the Advancement of Structured Information Standards (OASIS) in association with Sun Microsystems. Many argue that having multiple standards for the same objects defeats the purpose of establishing standards in the first place. While this is on the face of it a reasonable argument, it seems a little Utopian to expect complete global unanimity on these subjects, particularly where such valuable commercial interests are at stake. After all, the world has not even managed to agree on a standard standards body, so expecting agreement at any lower level seems over-optimistic. Microsoft’s OOXML has been a standard according to ECMA International since 2006, while OASIS approved OpenDocument back in 2005.

So why is there such bitterness over this issue? Well, some of it comes from the perception that OOXML is in itself an inadequate standard which has triumphed through Microsoft’s expertise at lobbying ISO member bodies for their votes. Critics point out that the standard is itself incredibly long and complex – over six thousand pages. It has also been widely observed that rather than trying to select a set of characteristics that need to be described in order to define a document minimally and efficiently, OOXML instead describes a huge set of overlapping characteristics that define the many different ways Microsoft has described documents over the almost twenty-year life of the Microsoft Office product. It is easy to see why they have done this; it greatly facilitates conversion of all legacy documents into the new format. Still, it also results in a swollen specification that competitors will find very difficult to implement in their products. For example, OOXML defines many settings, such as shapeLayoutLikeWW8, which instructs a rendering application to arrange text around a shape in the same way as Microsoft’s Word 97. Clearly Microsoft will have an advantage over competitors in making their products reliably behave in these ways.
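
To make the shapeLayoutLikeWW8 example concrete: in documents using the transitional WordprocessingML schema, these legacy behaviours appear as flag elements inside <w:compat> in the word/settings.xml part of a .docx package. The sketch below is my own illustration rather than any standard tooling; it simply lists whichever flags a given document declares:

```python
import sys
import zipfile
import xml.etree.ElementTree as ET

# The WordprocessingML namespace used by OOXML word-processing parts.
W = "{http://schemas.openxmlformats.org/wordprocessingml/2006/main}"

def compat_flags(docx_path: str) -> list:
    """List the compatibility flags (e.g. shapeLayoutLikeWW8) declared
    in a .docx file's word/settings.xml part."""
    with zipfile.ZipFile(docx_path) as package:  # a .docx is a ZIP archive
        settings = ET.fromstring(package.read("word/settings.xml"))
    compat = settings.find(W + "compat")
    if compat is None:
        return []
    # Each child of <w:compat> switches on one legacy rendering behaviour.
    return [child.tag.split("}", 1)[1] for child in compat]

if __name__ == "__main__":
    for flag in compat_flags(sys.argv[1]):
        print(flag)
```

Run against a document converted from an old Word file, a scan like this can surface a string of such backwards-compatibility switches – a small window onto why the specification runs to thousands of pages.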

Back in September 2007 OOXML lost an adoption vote at ISO, partly as a result of muscular lobbying from the free and open source communities, and hundreds of changes to the standard were requested by the voting members. While many of these were implemented by Microsoft and ECMA, the majority remained unimplemented at the time of OOXML’s approval.

Another controversial aspect of the OOXML standard is Microsoft’s patent non-enforcement promise that accompanies it. International standards must at the very least include fair and non-discriminatory terms for the licensing of patents that their use might infringe. Generally the standards bodies prefer that associated patents are licensed at no cost, and this is essentially what Microsoft has done with their Open Specification Promise. It promises that Microsoft will not enforce their patents against anyone as a result of their activities implementing OOXML readers, writers or renderers. However, Microsoft make no explicit commitment that subsequent versions of OOXML will also be covered, merely saying that they aim to continue the promise in areas where they continue to engage with open standards bodies. This has alarmed many people, who point to a possible future where everyone has adopted OOXML only to find that Microsoft withdraw from engagement with standards bodies and also withdraw their patent promise for subsequent versions. In comparison, Sun’s Non-Assertion Covenant for OpenDocument offers a perpetual promise not to sue for both version 1.0 and all subsequent versions. In the run-up to ISO’s decision, the Software Freedom Law Center (SFLC), a free-and-open-source-supporting public interest legal practice, released a document filled with dire warnings about Microsoft’s patent promise, telling anyone writing software under the GNU General Public License to shun it. SFLC’s argument is twofold. Firstly, they argue that, despite the promise, a piece of multi-purpose code might be protected when used to implement the standard but infringing when used for something else. Secondly, they argue that Microsoft’s failure to extend the promise to future revisions of OOXML means that projects attempting to progressively implement newer and newer versions of the standard may hit a legal brick wall down the line.

Are these worries justified? Certainly the SFLC’s first point is well taken, given the propensity of free and open source developers to repurpose code. The second point is less persuasive, I think, and a little opaquely worded in their document. To be clear, implementations of the current version of OOXML will always be protected from patent action by Microsoft, whether they withdraw the promise from future versions or not (provided the code in question is actually used to implement the standard). As to whether Microsoft will actually withdraw the promise from future versions, that is difficult to predict. Microsoft got into the open standards game in the first place in order to win procurement contracts – often in the public sector – where open standards are listed as pre-requisites. While it may be notionally possible for Microsoft to partially re-enclose their format by either withdrawing the promise from a future version or withdrawing from the open standards process altogether, the practicality of such a move would depend heavily on how Microsoft’s users would respond to it. Thus the future of the standard really depends less on Microsoft’s whim and more on ourselves and the organisations for which we work.

XCRI: standard course information

At the recent IWMW (Institutional Web Management Workshop), I went to a session on XCRI. Unfortunately I was too busy listening to take detailed notes, and the presentation slides don’t appear to be on the web.

XCRI is a new standard for exchanging post-compulsory course information. Universities, further education, adult learning centres, vocational agencies and continuing professional development providers can all publish information about their courses, enabling careers advisers, institutions and government agencies to find the relevant information on courses in order to encourage people to enrol in them.

Previously there was no standard format for such information, and the main consumers all required it in different forms. UCAS is a major consumer, as are any number of government schemes aimed at increasing the take-up of educational opportunities, and regional development programmes aiming to tackle unemployment by retraining and upskilling. Institutions also typically have their own course catalogue of some description. Keeping all of these in sync, both with each other and with what students of the course actually get taught, is a significant challenge.

XCRI is an XML standard similar in nature to Atom: (a) it’s plain XML (for those people who want to keep things simple) with a mapping to RDF (for those wanting generalised knowledge representation); (b) it has as few tags as possible, and wherever possible those tags reuse definitions widely used elsewhere; (c) a feed is a list of items.

To make publishing XCRI easier, the standard assumes (but doesn’t enforce) that the feed is merely a text file on a webserver representing all of an institution’s forthcoming courses. This is to explicitly encourage batch export and validation of XCRI from legacy systems, which is expected to be the dominant form of generation for most institutions for some time.
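
To give a flavour of what such a batch export might look like, here is a short sketch that turns a handful of course records into a single static XML file. The element names are illustrative only, not lifted from the XCRI schema itself; the Atom-like shape – one root element wrapping a flat list of items – is the point:

```python
import xml.etree.ElementTree as ET

# Illustrative course records as they might come out of a student
# records system; the field names here are invented for this sketch.
courses = [
    {"id": "CS101", "title": "Introduction to Programming",
     "start": "2008-09-29", "url": "http://example.ac.uk/courses/cs101"},
    {"id": "HI205", "title": "Early Modern Europe",
     "start": "2009-01-12", "url": "http://example.ac.uk/courses/hi205"},
]

# An Atom-like feed: one root element wrapping a flat list of items.
catalog = ET.Element("catalog")
for record in courses:
    course = ET.SubElement(catalog, "course")
    ET.SubElement(course, "identifier").text = record["id"]
    ET.SubElement(course, "title").text = record["title"]
    ET.SubElement(course, "start").text = record["start"]
    ET.SubElement(course, "url").text = record["url"]

# Write the whole catalogue out as one static file, ready to be dropped
# onto a web server for consumers to fetch and validate.
ET.ElementTree(catalog).write("courses.xml", encoding="utf-8",
                              xml_declaration=True)
```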

XCRI is a new standard, and its website is still under construction, but some of the community members have websites with decent information on XCRI. Indeed, the community building around XCRI is very impressive, with support from a wide variety of institutions.

If you’ve got an open source or open development project you’re trying to build a community around, why not join the new community-development mailing list that we at OSS Watch have recently started? Unfortunately, no, we can’t claim the success of XCRI had anything to do with us, but we can certainly answer your questions and give you pointers.

Microsoft, Verisign and Partners to Collaborate with OpenID

OpenID is an open, decentralised, free framework for user-centric digital identity. The goal is to release every part of this work under the most liberal licences possible, so there’s no money or licensing or registering required to play. It benefits the community as a whole if something like this exists, and we’re all a part of the community.

Microsoft and VeriSign, along with other partners, have announced that they “will collaborate on interoperability between OpenID and Windows CardSpace(TM) to make the Internet safer and easier to use.”

What interests me in this announcement is the word “collaborate”. I can almost hear the MS sceptics groaning, but is this announcement different?

OpenID was originally specified without any specific authentication method in mind. Brad Fitzpatrick, the original creator of OpenID, said, “Now people ask me what I think about Microsoft supporting it, using their InfoCards as the method of authentication…. I think it’s great! So far I’ve seen Kerberos integration for OpenID, voiceprint biometric auth (call a number and read some words), Jabber JID-Ping auth, etc…. all have different trade-offs between convenience and security. But as more people have CardSpace on their machines, users should get both convenience and security.”
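
This agnosticism is possible because the relying party never sees the authentication step at all: it only has to discover where the user’s identity provider lives, then hand the user’s browser over to it. As a rough sketch of that discovery step, assuming the OpenID 1.x convention of a link rel="openid.server" tag on the claimed identifier’s page (a real library would also handle Yadis/XRDS discovery and messier HTML):

```python
import re
import urllib.request

def discover_openid_server(identity_url: str):
    """Fetch a claimed identifier's page and extract the provider
    endpoint advertised via <link rel="openid.server" href="...">.
    The regex assumes rel appears before href; a real OpenID library
    parses the HTML properly and handles either attribute order."""
    with urllib.request.urlopen(identity_url) as response:
        html = response.read().decode("utf-8", errors="replace")
    match = re.search(
        r'<link[^>]*rel=["\']openid\.server["\'][^>]*href=["\']([^"\']+)',
        html, re.IGNORECASE)
    return match.group(1) if match else None

# The relying party then redirects the user's browser to this endpoint;
# whether the provider checks a password, an InfoCard or a voiceprint
# is entirely the provider's business.
```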

CardSpace is claimed to provide significant anti-phishing, privacy, and convenience benefits to users. Scott Kveton, CEO of JanRain (another of the partners in this agreement), says, “Windows CardSpace is shipping with Vista today and is a well thought-out technology that helps address many of the privacy and security concerns that people have had with OpenID. OpenID helps users describe their identity across many sites in a public fashion. The two together are very complimentary products and each has its strength.”

This looks like a true collaboration between the OpenID community, Microsoft and others. From what I have seen, all parties are happy with the deal and there appears to be no evidence of one “side” having to compromise. A true victory for open development? I think so, but only time will tell for certain.