About Mark Johnson

Mark Johnson is Development Manager at OSS Watch

Open source lecture capture at The University of Manchester

We received an excellent contribution to our Open Source Options for Education list this week, in the shape of a real-world usage example of Opencast Matterhorn at the University of Manchester.  The previous examples of Matterhorn usage we’ve had on the list have been documentation of pilot projects, so it’s great to have such an in-depth look at a full scale deployment to refer to.

Cervino (Matterhorn) by Eider Palmou. CC-BY

The case study looks at the University’s move from a pilot project, with 10 machines and the proprietary Podcast Producer 2 software, to a deployment across 120 rooms using Opencast Matterhorn.  During the roll-out, the University adopted an opt-out policy, meaning all lectures are captured by default, collecting around 1000 hours of media per week.

The University has no policy to preferentially select open source or proprietary software.  However, Matterhorn gave the University some specific advantages.  The lack of licensing or compulsory support costs kept costs down and, combined with the system’s modularity, allowed them to scale the solution from an initial roll-out to an institution-wide solution in a flexible way.  The open nature also allowed customisations (such as connecting to the University timetable system) to be added and modified as requirements developed, without additional permissions being sought.  These advantages combined to provide a cost-effective solution within a tight timescale.

If your institution uses an open source solution for an educational application or service, let us know about it and we’ll include it in the Open Source Options for Education list.

If all bugs are shallow, why was Heartbleed only just fixed?

This week the Internet’s been ablaze with news of another security flaw in a widely used open source project, this time a bug in OpenSSL dubbed “Heartbleed”.

This is the third high-profile security issue in as many months. In each case the code was not only open but being used by thousands of people including some of the world’s largest technology companies, and had been in place for a significant length of time.

In his 1999 essay The Cathedral and The Bazaar, Eric Raymond stated that “Given enough eyeballs, all bugs are shallow.”  If this rule (dubbed Linus’s Law by Raymond, for Linus Torvalds) still holds true, then how can these flaws have existed for so long before being fixed?

Let’s start by looking at what we mean by a “bug”.  Generally speaking, the term “bug” refers to any defect in the code of a program whereby it doesn’t function as required.  That definition certainly applies in all these cases, but for a bug to be reported, it has to affect people in a noticeable way.  The particular variety of bug we’re talking about here, security flaws in encryption libraries, doesn’t noticeably affect the general population of users, even at the scale we’re seeing.  It’s only when people try to use the software in a way other than intended, or specifically audit the code looking for such issues, that these bugs become apparent.
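To make that distinction concrete, here is a minimal, purely hypothetical sketch (in Python, not OpenSSL’s actual C code) of the kind of flaw involved: a handler that trusts a client-supplied length.  Honest requests behave exactly as intended, so ordinary use never surfaces the bug; only a deliberately malformed request, or a code audit, exposes it.

```python
# A toy sketch of a Heartbleed-style over-read (invented names; this is
# not OpenSSL's real code). "memory" holds a 4-byte payload with
# unrelated secret data sitting immediately after it.
memory = bytearray(b"PING" + b"...secret-key-material...")

def heartbeat(claimed_length: int) -> bytes:
    # BUG: the client's claimed length is trusted, so a reply can run
    # past the payload into the secret that follows it.
    return bytes(memory[:claimed_length])

def heartbeat_fixed(claimed_length: int, actual_length: int = 4) -> bytes:
    # The fix is a single bounds check on the claimed length.
    if claimed_length > actual_length:
        raise ValueError("claimed length exceeds actual payload")
    return bytes(memory[:claimed_length])

print(heartbeat(4))   # b'PING' - honest requests behave exactly as intended
print(heartbeat(26))  # leaks the secret - only misuse reveals the bug
```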

When Raymond talks about bugs being shallow, he’s really talking about how many people looking at a given problem will find the cause and solution more quickly than one person looking at that problem.  In the essay, Raymond quotes Torvalds saying “I’ll go on record as saying that finding [the bug] is the bigger challenge.”

So the problem we’ve been seeing here isn’t that the bugs took a long time to diagnose and fix; it’s that their lack of impact on the intended use of the software means they’ve taken a long time to be noticed.  Linus’s Law still holds true, but it’s not a panacea for security.  The recent events affirm that neither open nor closed code is inherently more secure.

For more about security in open source software, check out our briefing on the subject.

A new Voice in the crowd

Six months ago, four journalists quit their respective jobs at the leading UK Linux magazine, Linux Format.  Today, a new magazine hit the shelves of the country’s newsagents: Linux Voice.

Linux Voice issue 1 on a shelf

Linux Voice on the shelves of a popular high-street newsagent (yes, that one)

With the same team behind it, Linux Voice has the same feel and a similar structure to old issues of Linux Format.  However, Linux Voice aims to be different from other Linux publications in three key ways:

  1. It’s independent, so it’s answerable only to its readers.
  2. Nine months after publication, all issues will be licensed CC-BY-SA.
  3. 50% of the profits at the end of each financial year will be donated to free software projects, as chosen by the readers.

Linux Voice’s Copyright notice, including an automatic re-licensing clause

By founding itself on these principles, Linux Voice embodies in a publication the spirit of the community it serves, which provides a compelling USP for free software fans.  On top of that, Linux Voice was able to get started thanks to a very successful crowdfunding campaign on IndieGoGo, giving the community a real sense of ownership.

Aside from the business model, the first issue contains some great content.  There’s a two-page section on games for Linux, which would have been hard to fill two years ago, but is now sure to grow.  There’s a round-up of encryption tools looking at security, usability and performance, to help average users keep their data safe.  There’s a bundle of features and tutorials, including homebrew monitoring with a Raspberry Pi and PGP email encryption.  Plus, of course, letters from users, news, and the usual regulars you’d expect from any magazine.

I’m particularly impressed by what appears to be a series of articles about the work of some of the female pioneers of computing. Issue 1 contains a tutorial looking at the work of Ada Lovelace, and Issue 2 promises to bring us the work of Grace Hopper.  It’s great to see a publication shining the spotlight on some of the early hackers, and it’s fascinating to see how it was done before the days of IDEs, text editors, or even in some cases electricity!

For your £6 (less if you subscribe) you get 114 pages jammed with great content, plus a DVD with various Linux distros and other software to play with.  Well worth it in my opinion, and I look forward to Issue 2!

Open Source Phones at MWC

Mobile World Congress is running this week in Barcelona.  While it’s predictable that we’ve seen lots of Android phones, including the big unveiling of the Galaxy S5 from Samsung, I’ve found it interesting to see the coverage of the other devices powered by open source technologies.

Mozilla announced their plans for a smartphone that could retail for as little as $25. It’s based on a new system-on-chip platform that integrates a 1GHz processor, 1GB of RAM and 2GB of flash memory, and will of course be running the open source Firefox OS.  It’s very much an entry level smartphone, but the $25 price point gives real weight to Mozilla’s ambition to target the “next billion” web users in developing countries.

Ubuntu Touch is finally seeing the light of day on 2 phones, one from Chinese manufacturer Meizu and one from Spanish manufacturer Bq.  Both phones are currently sold running Android, but will ship with Ubuntu later this year.  The phones’ internals have high-end performance in mind, with the Meizu sporting an 8-core processor and 2GB of RAM, clearly chosen to deliver Ubuntu’s fabled “convergence story”.

Rumours have abounded this year that Nokia have been planning to release an Android smartphone, and at MWC they confirmed the rumours were true, sort of.  “Nokia X” will be a fork of Android with its own app store (as well as third-party ones) and a custom interface that borrows elements from Nokia’s Asha platform and Windows Phone.  Questions were raised at the rumour mill over whether Microsoft’s takeover of Nokia’s smartphone business would prevent an Android-based Nokia being possible.  However, Microsoft’s vice-president for operating systems Joe Belfiore said “Whatever they do, we’re very supportive of them,” while Nokia’s Stephen Elop maintains that the Windows-based Lumia range is still their primary smartphone product.

A slightly more left-field offering comes in the shape of Samsung’s Gear 2 “smartwatch” running Tizen, the apparently-not-dead-after-all successor to Maemo, Meego, LiMo, and all those other Linux-based mobile operating systems that never quite made it.  The device is designed to link up to the Samsung Galaxy range of Android phones, but with the dropping of “Galaxy” from the Gear’s branding, perhaps we’ll be seeing a new brand of Tizen powered smartphones from Samsung in the future.

OSS Watch publishes National Software Survey 2013

OSS Watch, supported by Jisc, has conducted the National Software Survey roughly every two years since 2003. The survey studies the status of open and closed source software in both Further Education (FE) and Higher Education (HE) institutions in the UK. OSS Watch is a non-advocacy information service covering free and open source software. We do not necessarily advocate the adoption of free and open source software. We do however advocate the consideration of all viable software solutions – free or constrained, open or closed – as the best means of achieving value for money during procurement.

Throughout this report the term “open source” is used for brevity’s sake to indicate both free software and open source software.

Summary of National Software Survey findings - findings can be found in full in the report linked below

Looking back over 10 years of surveys, we can see how open source has grown in terms of its impact on ICT in the HE and FE sectors. For example, when we first ran our survey in 2003, the term “open source” was to be found in only 30% of ICT policies – and in some of those it was because open source software was prohibited! In our 2013 survey we now find open source considered as an option in the majority of institutions.

Open source software has also grown as an option for procurement; while only a small number of institutions use mostly open source software, all institutions now report they use a mix of open source and closed source.

However, the picture is not all positive for open source advocates, and we’ve noticed the differences between HE and FE becoming more pronounced.

You can read the full report online, or download the PDF from the OSS Watch website.

Is license compatibility worth worrying about?

At FOSDEM last weekend I saw an excellent talk by Richard Fontana entitled Taking License Compatibility Semi-Seriously.  The talk looked at the origins of compatibility issues between free and open source software licences, how efforts have been made to either address them directly or dodge around them, and whether it’s worth worrying about them in the first place.  This post will summarise the talk and delve into some of the points I found most interesting.

The idea of FOSS license compatibility isn’t one that was created alongside the FOSS movements, but rather one that came about when projects started to combine code released under different licences, particularly copyleft and non-copyleft licenses.  As such, there’s no real definition of what license compatibility means, and so people tend to defer to received doctrine (such as the FSF’s list of GPL compatible licenses), or leave it up to lawyers to sort out.

Early versions of KDE and Qt created the biggest license compatibility issue in the FOSS world.  Qt’s original proprietary license, and later the QPL under which it was relicensed, were considered incompatible with the GPLv2 under which the KDE project (or at least parts of it) was licensed.  Qt is now dual-licensed under the LGPL or a commercial proprietary license, which resolves this incompatibility, but the FSF also suggest a remedy whereby a specific exception is added to the QPL allowing differently-licensed software to be treated under the terms of the GPL.

Another common incompatibility issue with FOSS licenses has arisen where projects have wanted to combine GPLv2 code with ASLv2 code.  The FSF consider the patent termination and indemnification provisions in ASLv2 to make it incompatible with GPLv2; however, they believed these provisions to be a good thing, so they ensured that GPLv3 was compatible with it.  Indeed, the GPLv3 went on to codify what it meant for another license to be compatible with it.

While this means at first glance that only code explicitly licensed as GPLv3 and ASLv2 can be used together, while GPLv2 and ASLv2 cannot, this isn’t necessarily the case.  The FSF encouraged projects to license their code “GPLv2 or later”, in the hope that when future versions of the license were released, projects would transition to the new license and in doing so benefit from features such as ASLv2 compatibility.  However, this method of licensing can be interpreted as “GPLv2 with the option to treat it as GPLv3 instead”, meaning that for the purposes of compatibility the code can be treated as GPLv3, while remaining “GPLv2 or later”.

This has the opposite effect of the FSF’s intention by encouraging projects to remain “GPLv2 or later” for the added flexibility it provides while avoiding forcing licensees to be bound by parts of GPLv3 that either party may not like.

While the above trick won’t work for code licensed “GPLv2 only”, a similar thing is possible for code licensed “LGPLv2 only”.  As the LGPLv2 is intended for library code, it contains a clause allowing you to re-license the code to GPLv2 or any later version, in case you wanted to include it in non-library software.  This means that you could, for the purposes of compatibility, treat the code as GPLv3.  The Artistic License 2.0 and EUPL contain similar re-licensing clauses.

What all of this shows us is that while it’s a complex issue, it’s a somewhat artificial one, and there are all sorts of tricks one can use to circumvent it.  In practice, these compatibility “rules” are rarely followed, and rarely enforced.

In response to this, Richard Fontana suggests that we borrow the idea of “duck typing” from programming to make our lives easier.  If a FOSS project wants to combine some code under the GPL with code under a more permissive, possibly incompatible license, as long as they’re willing to follow the convention of distributing the source as though it was all GPL, the community still gets the benefit without the additional headache of worrying over which bits are allowed to be combined with which.
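For readers who don’t know the programming idea being borrowed, the sketch below shows duck typing in Python: a function that never checks an object’s type, only that it behaves the way we need.  Fontana’s suggestion applies the same attitude to licensing: if the combined work is distributed as though it were all GPL-licensed source, treat it that way, rather than agonising over the formal classification of each part.

```python
# A minimal illustration of duck typing: nothing checks what class an
# object is, only whether it supports the behaviour that's needed.
class Duck:
    def quack(self):
        return "Quack!"

class Person:
    def quack(self):
        return "I'm quacking like a duck."

def make_it_quack(thing):
    # No isinstance() check: if it quacks, that's good enough.
    return thing.quack()

print(make_it_quack(Duck()))
print(make_it_quack(Person()))
```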

Commitment Gradients in OSS projects

In a recent training session, I discussed commitment gradients – how much extra effort is involved to move between each stage of involvement within a project.  After the session I was asked for some examples of commitment gradients and how it’s possible to make them shallower, so it’s easier for people to progress their involvement in a project.

A desirable commitment gradient, showing a gradual progression in the effort required between stages of involvement in a project

This graph represents a desirable commitment gradient.  The move from knowing about the project to using and discussing it is fairly trivial.  Reporting bugs requires some extra knowledge, e.g. using the bug tracker, but isn’t a significantly harder step.  Contributing patches is slightly harder as it requires knowledge of the programming language and awareness of things such as coding styles.  Finally, moving into a leadership role requires significant additional effort, as leaders need to have an awareness of all aspects of the project, including understanding of the governance model, as well as having gained the confidence of other community members.
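One rough way to picture this is to treat the “gradient” as the extra effort between consecutive stages.  The sketch below uses entirely made-up effort scores to reproduce the shape of the desirable curve described above.

```python
# Illustrative (invented) effort scores for each stage of involvement;
# the "commitment gradient" is the jump between consecutive stages.
stages = [
    ("Knowing about the project", 1),
    ("Using and discussing it",   2),
    ("Reporting bugs",            4),
    ("Contributing patches",      7),
    ("Taking a leadership role", 15),
]

for (from_stage, a), (to_stage, b) in zip(stages, stages[1:]):
    print(f"{from_stage} -> {to_stage}: +{b - a} effort")
```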

Using the software

A graph showing a commitment gradient which jumps suddenly between knowing about a project and being able to use its outputs

This graph represents a project where the software is so hard to install that you need to have intimate knowledge of the project to even get it working.  For example, if configuration settings are hard-coded, setting the software up involves knowledge of the language, changing the code, then compiling it yourself before you even get started.  By this point, you know the software so well that there’s nearly no extra effort required for the following stages, but most people won’t bother, and your user base will suffer.

To make the commitment gradient lower at this stage, a project should make it easy to acquire, install and configure its outputs.  For example, having packaged versions in the software repositories or app stores for the target platforms makes installation easier.  Where this isn’t appropriate, an automated installer requiring little technical knowledge (such as that used by WordPress) can be used as an option for beginners, with a more configurable “expert’s mode” available for more experienced users.  For configuration, being able to change settings through the software’s interface rather than in code or configuration files is helpful.
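As a purely hypothetical illustration of the difference this makes, the first function below bakes its settings into the code, so changing them means editing the source and redeploying; the second reads the same settings from a file that an automated installer or a settings screen could write.

```python
# A hypothetical contrast between hard-coded and configurable settings.
import configparser

def connect_hardcoded():
    # Changing these values means editing the code itself - the situation
    # that makes a project hard to set up.
    host, port = "db.example.org", 5432
    return host, port

def connect_from_config(path="settings.ini"):
    # The same values read from a configuration file, which an installer
    # or an admin interface can write without touching the code.
    config = configparser.ConfigParser()
    config.read(path)
    host = config.get("database", "host", fallback="localhost")
    port = config.getint("database", "port", fallback=5432)
    return host, port

print(connect_from_config())  # falls back to defaults if no file exists
```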

Discussing the project

A graph showing a commitment gradient that jumps suddenly between being able to use a project and being able to engage in community discussions

This graph represents a project where the software is easy to use, but the community has an elitist attitude and is hostile to newcomers.  Responses to questions asked assume deep technical understanding of the software, and people who don’t have such understanding are expected to find out for themselves before they engage with those who do.

The solution to this is to promote a culture of moderation and mentorship which ensures that discussions are conducted in a tolerant way that allows newcomers to learn.

Another issue at this stage may be the technology used – for example, if all user support takes place on Usenet newsgroups, many people won’t know how to access them, or the conventions they are expected to follow.  Using channels that new users will be more familiar with, such as web forums or social media, can help lower the commitment gradient here.

Reporting bugs

The step from discussing the project to reporting a bug can be steep where the project uses a complex bug tracker, where there is an involved process to get access to the tracker, and where gathering the information required to submit a useful report involves intimate knowledge of the software.

The Ubuntu project lowers the gradient at this stage through use of the ubuntu-bug utility.  Any user can run the command ubuntu-bug <software name>, and have a template bug report generated with all the required information about their environment, and any relevant logs or crash reports.  All they then need to do is write a description of the problem.  Again, a culture of moderation and mentorship is useful here to help guide people into writing useful reports.
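The sketch below is not how ubuntu-bug (or Apport, the tool behind it) actually works; it is just a hypothetical illustration of the principle: collect the environment details automatically so the reporter only has to describe the problem.

```python
# A toy illustration of pre-filling a bug report with environment details,
# in the spirit of ubuntu-bug; this is not Apport's real implementation.
import platform

def bug_report_template(package: str) -> str:
    return "\n".join([
        f"Package: {package}",
        f"OS: {platform.system()} {platform.release()}",
        f"Architecture: {platform.machine()}",
        "",
        "Description of the problem:",
        "<what did you expect to happen, and what actually happened?>",
    ])

print(bug_report_template("example-package"))
```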

Submitting Patches

Submitting patches inevitably involves a step up in terms of effort, as the contributor needs sufficient knowledge of the programming language, the source code, a development environment and so on.  However, the commitment gradient can be made too steep if contributors are expected to follow a complex or poorly documented coding style, if they are expected to do a lot of manual testing before submission, or if the actual submission process is esoteric.

The main way to lower the gradient here is documentation, and automation where possible.  Coding styles should be well-defined and documented.  Tests should be automated using a unit testing tool.  The submission process should be well documented; using a well-known workflow such as GitHub’s pull requests can help here.
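For example, even a minimal automated test (here using Python’s built-in unittest, purely as an illustration; the function under test is invented) lets a contributor check a change with a single command rather than working through a manual test plan.

```python
# A minimal automated test: one command replaces a manual testing checklist.
import unittest

def slugify(title: str) -> str:
    """Hypothetical project function: turn a page title into a URL slug."""
    return "-".join(title.lower().split())

class TestSlugify(unittest.TestCase):
    def test_spaces_become_hyphens(self):
        self.assertEqual(slugify("Commitment Gradients"), "commitment-gradients")

    def test_already_lowercase_is_unchanged(self):
        self.assertEqual(slugify("moodle"), "moodle")

if __name__ == "__main__":
    unittest.main()
```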

The Moodle community has a tool called “code checker”, packaged as a Moodle plugin, which allows developers to analyse their code to ensure it meets the project’s coding style.  Developers can quickly identify and fix any issues before submission, and reviewers can quickly direct them to instructions on how to fix any discrepancies.

Taking Charge

Again, a large step up at this stage is inevitable, and in some respects desirable, as a project probably doesn’t want to be led by someone who hasn’t shown sufficient commitment.  There may also be legal requirements for the people in charge to adhere to.

However, excessive or unclear requirements for how a person might get voting rights within a project may make this step too large, so these need to be fair and well-documented.  Also, a leader will need to have a good understanding of the project’s governance model and its decision-making process, so these need to be well-documented too.

If a project is large enough, it may be possible to allow different levels of commitment at this stage, so not everyone who has a say on technical issues is also required to, for example, make budget decisions.

Open Source Options for Education updated

We’ve just updated our Open Source Options for Education list, providing a list of alternatives to common proprietary software used in schools, colleges and universities.  Most of the software we list is provided by the academic and open source communities via our publicly editable version.  Some new software we’ve added in this update includes:

SageMath

SageMath is a package made from over 100 open source components, including R and Python, with the goal of creating “a viable free open source alternative to Magma, Maple, Mathematica and Matlab.”  Supported by the University of Washington, the project is currently trialling SageMath Cloud, a hosted service allowing instant access to the suite of SageMath tools with no setup required.
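As a taste of what that means in practice, a short session might look like the sketch below (assuming a local SageMath installation run with Sage’s own Python interpreter, or a SageMath Cloud notebook; Sage exposes its functionality as a Python library).

```python
# A short SageMath session; assumes Sage is installed and the script is
# run with Sage's interpreter (e.g. `sage -python`) or in a Sage notebook.
from sage.all import var, integrate, sin, factor, solve

x = var('x')
print(integrate(x * sin(x), x))   # symbolic integration
print(factor(2**64 - 1))          # exact integer factorisation
print(solve(x**2 - 2 == 0, x))    # exact symbolic roots
```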

R and R Commander

R is the go-to language for open source statistical analysis, and R Commander provides a graphical interface to make running R commands easier. Steven Muegge got in touch to let us know that he uses the two projects for teaching graduate research methods at Carleton University. Thanks, Steven!

Gibbon

Gibbon is a management system combining features of a VLE (such as resource sharing, activities and markbooks) and an MIS (such as attendance, timetables and student information).  The system was developed by the International College of Hong Kong.  Thanks to Ross Parker for letting us know about Gibbon.

OwnCloud Documents

The recent release of OwnCloud 6 includes a new tool called OwnCloud Documents, allowing real-time collaboration on text documents.  Collaborators can be other users on the OwnCloud system, or anonymous users with the link from the author.  With support for LDAP and Active Directory, could this represent a viable alternative to Google Docs for privacy-conscious institutions?

Koha trademark case settled

Earlier in the year, I wrote a case study on Koha, the open source library management system released under the GPL, detailing the history of the project and how the sale of assets had created confusion and disagreements between the Horowhenua Library Trust (HLT), who originally commissioned the system, and PTFS, who now hold the copyright for most of the project’s original assets and publish their own fork under the name LibLime Koha.

At the time of writing, the major issue at hand was PTFS’s trademark application for the mark KOHA in New Zealand, which HLT and Catalyst IT, who provide commercial support for Koha, were opposing.  This month, the case was settled, with the commissioner ruling against PTFS and rejecting the application.

HLT and Catalyst opposed the application on 6 grounds:

  1. The mark was likely to deceive or cause confusion.
  2. The application for the mark was contrary to New Zealand law (specifically, The Fair Trading Act 1986), on the basis of ground 1.
  3. Use of the mark would amount to passing off, also in breach of New Zealand law.
  4. The mark was identical to an existing trade mark in use in New Zealand.
  5. PTFS wasn’t the rightful owner of the mark, HLT was.
  6. The application was made in bad faith, on the basis that HLT owns the mark.

Interestingly, grounds 3, 4, and 5 were rejected by the commissioner, largely on the grounds that HLT’s use of the name Koha didn’t constitute a trade mark.  When HLT originally open sourced Koha, the evidence presented showed that it intended Koha to be given away for free so other libraries could benefit from it.  The commissioner didn’t consider this to constitute trading, and therefore Koha, while identical to the mark being registered, didn’t constitute a trade mark.

As ground 5 didn’t show HLT to be the rightful owner, ground 6 was also rejected: PTFS couldn’t be said to be acting in bad faith by trying to register a mark that clearly belonged to someone else.

However, HLT and Catalyst’s success in this case hinges on the fact that when the trademark application was made in 2010, HLT’s Koha software had existed for 10 years and was well known in New Zealand’s library sector.  Since the commissioner considered the mark being registered to be identical to the name Koha, and HLT’s software to be the same class of product as PTFS’s, it was found that the two could be confused by a substantial number of people, allowing ground 1 to succeed.

Furthermore, the cited sections of the Fair Trading Act had a similar but stricter requirement: that there be no real risk of such confusion or deception occurring.  The commissioner believed that, due to Koha’s prominence in the industry, there was a real risk in this case, allowing ground 2 to succeed.

The application for the trade mark has now been defeated, with HLT and Catalyst being awarded nearly 7,500 NZD in legal costs between them.  What effect this will have on the use of the Koha name in New Zealand isn’t clear – since HLT have been shown not to own the mark themselves, they are unlikely to be able to stop PTFS from using the name in New Zealand should they choose to.  However, the Koha community in New Zealand can now rest easy knowing that they won’t be stopped from continuing to use the name as they always have.

I hope that other open source software projects use the case of Koha as a lesson to ensure that their branding and IP are well managed, so that cases like this can be avoided.

You can read the Commissioner’s full ruling here.

TYPO3 Communications Workshop

In November OSS Watch travelled to Altlenigen, near Mannheim in Germany, to run a two-day workshop for the TYPO3 community as part of their Marketing Sprint Week.

Across the two days, Scott Wilson and I presented sessions on the varieties of communities and why we form them, communication within online communities, governance of free and open source software projects, leadership, and conflict resolution.

Typo 3 - Content Management is Magic

While we were only able to have a small group from the vast community of TYPO3 at the workshop, those who did attend represented a range of teams from the community, including developers from both the TYPO3 CMS and TYPO3 Neos projects, as well as members of the marketing team and the community manager, Ben van ‘t Ende.

One of the great things to see was how open and honest the attendees were about the issues we discussed and the challenges they faced.  A few points were of particular interest to me.

When we discussed the reasons we form communities, there was no clear agreement on what the shared interest of the TYPO3 community was.  Defining this will be key to driving towards the community’s common goals in the future.

The community uses a myriad of communication channels, and the purpose of each isn’t always clear-cut.  There’s also been a general lack of a moderation culture, which has led to a few poisonous people getting out of hand.  Instilling a sense of shared values and leading by example is needed over time to help ensure that discussions remain constructive.

There is a visible lack of diversity in the community, both shallow-level (most contributors are white, male and located in Germany) and deep-level (there are lots of highly skilled developers, but fewer who are learning or who come from other disciplines).  These issues could affect the long-term sustainability of the community if the barriers for new, less-skilled contributors are too high.  Engagement with users and less technical members of the community will also be key to shaping the community’s goals.

These problems aren’t the kind that will be fixed by a two-day workshop or a change of policy; it’s going to take commitment and leadership from those who believe in the community to move things forward.  One thing that we definitely saw from this workshop is that those people are present and highly active in the TYPO3 community.  We look forward to the possibility of working together again.

Our presentation slides from the workshop can be found on SlideShare.

A special thanks to Christian Händel for making sure we made it to Altlenigen and back!