Category Archives: Legal

Open Source Software Licensing Trends

This is a guest post from Jim Farmer, Chairman of Instructional Media + Magic Inc. Jim has also written a series of feature articles on open source for Informa’s London-based Intellectual Property Magazine.

Higher education has traditionally been a knowledge “sharing” environment. Early software was exchanged without licenses and, in practice, without restrictions. As the monetization of intellectual property, including software, has become pervasive, more restrictive software licenses have been introduced and enforced. These licenses impose legal duties on the user of “open source software” that can be unexpected and have undesirable consequences.

The first license restrictions were a series of “copyleft” licenses that imposed a duty on any user who modifies open source software to share those modifications with others. In addition, the terms and conditions of the original license must be passed on to all subsequent users of the modified software. Richard Stallman is credited with launching the free software movement, and he used software licensing to enforce this desired behaviour. In practice the open source community was already sharing software, so the “copyleft” licenses were not a substantial burden. Disputes were avoided by an email or telephone request, almost always honoured.

Some open source software from higher education became commercial software products with proprietary licenses. Examples include North Carolina State University’s statistical package that led to SAS, and the University of Chicago’s package that led to SPSS. The commercial contribution was documentation and standardized, stable versions of the software. Subsequently this strategy was used by Red Hat to introduce Red Hat Linux.

Extending Stallman’s practice of imposing duty, the recent and rarely used Affero license has imposed additional and potentially burdensome restrictions on distribution of modifications made to software used as a service over a network.

Higher education is becoming more sensitive to these license restrictions. Three recent licensing choices illustrate the trade-off decisions that were made.

edX Seeks More Software Users

Harvard University and MIT had adopted the Affero software license for their edX learning technology platform. In September, Ned Batchelder, edX Software Architect, wrote “…one license does not fit all purposes, which is why we’ve decided to relicense one part, our XBlock API, under Apache 2.0.”

As part of its license compliance software and services, Black Duck compiles data on the use of the various licenses. Using these data, Figure 1 illustrates edX’s shift from restrictive to permissive licensing. The data suggest edX’s action was consistent with trends in open source licensing.

Graph showing license usage in open source software; Affero is used by less than 1% of projects and ranked 16th most popular; Apache 2.0 is ranked 3rd most popular, after GPL 2.0 and MIT.

Figure 1 – Use of Open Source Software Licenses

Batchelder describes the motivation for the change:

The XBlock API will only succeed to the extent that it is widely adopted, and we are committed to encouraging broad adoption by anyone interested in using it. For that reason, we’re changing the license on the XBlock API from AGPL to Apache 2.0.


The Apache license is permissive: it lets adopters and extenders do what they want with their changes. They can release them under a copyleft license like AGPL, or a permissive license like Apache, or even keep them closed-source.

Using Black Duck data for 2009 and 2015, the licensing trends in Figure 2 show the sharp increases in use of the MIT and Apache permissive licenses.

Figure 2. Trends in license use from 2009 to 2015, showing increases for MIT and ASL and decreases for GPL and LGPL

Figure 2 – Change in Use 2009 to 2015

According to Black Duck’s data on the use of software licenses, Apache 2.0 – used by 19% of projects – has moved from 7th to 3rd most used software license. The GNU General Public License is still the most frequently used at 25%. However, the GPL has lost 21.4 percentage points of share since 2009, while Apache has gained 12.4. The least restrictive MIT license grew from 3.3% to 19.0% during the same period to become the second most frequently used open source software license.

The least restrictive MIT license imposes few requirements: it disclaims warranties, so you cannot sue the licensor if the software fails to do what you thought it should (“fitness for purpose”), and it mandates attribution via reproduction of the copyright statement.
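That attribution duty is mechanical enough to check automatically. Below is a minimal sketch, not from the article, of verifying that a redistributed copy of MIT-licensed software still carries the required notice; the file names and notice fragment are illustrative assumptions:

```python
# Hypothetical sketch: check that a redistributed copy of MIT-licensed
# software retains the required copyright/permission notice.
import tempfile
from pathlib import Path

# The MIT license requires the copyright notice to be reproduced in all copies.
REQUIRED_NOTICE = "Copyright (c)"

def retains_attribution(dist_dir: str) -> bool:
    """Return True if a LICENSE-like file in dist_dir contains the notice."""
    for name in ("LICENSE", "LICENSE.txt", "COPYING"):
        path = Path(dist_dir) / name
        if path.exists() and REQUIRED_NOTICE in path.read_text():
            return True
    return False

# Toy distribution that keeps the notice, and one that drops it.
kept = tempfile.mkdtemp()
Path(kept, "LICENSE").write_text("MIT License\n\nCopyright (c) 2015 Example Author\n")
dropped = tempfile.mkdtemp()

print(retains_attribution(kept))     # True
print(retains_attribution(dropped))  # False
```

A check like this is a convenience, not legal advice; a distribution that fails it has simply lost the one thing the MIT license insists on.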

There is also a difference based on the purpose of the license. Figure 3 shows the differences in license use among developers of open source software, in downloads of software selected for use, and in what companies deploy. For enterprise use the Apache license is the most common.

Figure 3 – License Use by Purpose

Donnie Berkholz at RedMonk quantified the shift toward permissive licensing using data from July 2012, summarizing his results as the ratio of permissive to copyleft licenses. The results are shown in Figure 4. For both Java and JavaScript, two of the most frequently used languages, permissive licenses overtook copyleft licenses in 2008. Cumulatively, by 2010 the majority of open source software licenses were permissive.

Figure 4: Upwards trend for permissive licensing (source: redmonk)

Figure 4 – Shift of Open Source Software to Permissive Licensing.

In December 2014 ZDNet’s Steven J. Vaughan-Nichols summarized:

“The three primary permissive license choices (Apache/BSD/MIT) … collectively are employed by 42 percent. They represent, in fact, three of the five most popular licenses in use today.” These permissive licenses have been gaining ground at GPL’s expense. The two biggest gainers, the Apache and MIT licenses, were up 27 percent, while the GPLv2, Linux’s license, has declined by 24 percent.

He also reported that in July 2013 Aaron Williamson, senior staff counsel at the Software Freedom Law Center, documented that 85.1 percent of GitHub programs had no license. He commented:

Yes, without any license, your code defaults to falling under copyright law. In that case, legally speaking no one can reproduce, distribute, or create derivative works from your work. You may or may not want that. In any case, that’s only the theory. In practice you’d find defending your rights to be difficult.

The primary edX learning system continues to use the Affero license. Apereo Foundation’s Sakai learning system is licensed under Apache; Moodle uses the GPL license.

edX’s move to a less restrictive license will likely increase use. To gain additional users, perhaps the Apache license should be used for the edX learning system as well.

Kuali Foundation Seeks to Protect Cloud User Market

Administrative software being developed by the participants in the Kuali Foundation was licensed under the Educational Community License (ECL), an OSI (Open Source Initiative) approved special purpose license for higher education software based on the Apache license. In August the Kuali Foundation Chair Brad Wheeler announced “… the Kuali Foundation is creating a Professional Open Source commercial entity.” He also said “Kuali software now and in the future will remain open source and available for download and local implementations.” The same day the Kuali Foundation posted Brad Wheeler’s blog Kuali 2.0 FAQs. He wrote “The current plan is for the Kuali codebase to be forked and re-licensed under Affero General Public License (AGPL). AGPL allows customers to download and use the code at will, but requires partners trying to monetize the software to contribute code changes back to Kuali. This is intended to discourage partners/Kuali Commercial Affiliates (KCAs) from receiving revenue from hosting Kuali software, but does not prohibit them.”

The Foundation asked its participants to transfer their software development to Kuali Inc. and use its proposed cloud-based systems. The Kuali Foundation continues to make available the current version of its software under the ECL. The cloud versions also include software proprietary to Kuali Inc.

On September 8, 2014, Chuck Severance wrote:

… the successful use of AGPL3 to found and fund “open source” companies that can protect their intellectual property and force vendor lock-in *is* the “change” that has happened in [Kuali’s] past decade that underlies both of these announcements and the makes a pivot away from open source and to professional open source an investment with the potential for high returns to its shareholders.

Severance suggested how to achieve “high returns:”

First take VC [venture capitalists] money and develop some new piece of software. Divide the software into two parts – (a) the part that looks nice but is missing major functionality and (b) the super-awesome add-ons to that software that really rock. You license (a) using the AGPL3 and license (b) as all rights reserved and never release that source code.


You then stand up a cloud instance of the software that combines (a) and (b) and not allow any self-hosted versions of the software which might entail handing your (b) source code to your customers.

On October 2 at Educause, reporting for e-Literate on the Kuali session, Phil Hill identified “(b):”

The back-and-forth involved trying to get a clear answer, and the answer is that the multi-tenant framework to be developed / owned by KualiCo will not be open source – it will be proprietary code. I asked Joel Dehlin for additional context after the session, and he explained that all Kuali functionality will be open source, but the infrastructure to allow cloud hosting is not open source.

Referring to multi-tenancy, Inside Higher Ed’s Carl Straumsheim described the purpose of “(b)” confirming Chuck Severance’s scenario:

“I’ll be very blunt here,” [Kuali’s Barry] Walsh said. “It’s a commercial protection — that’s all it is.”

In a 10 September blog post, Locked into Free Software? Unpicking Kuali’s AGPL Strategy, OSS Watch’s Scott Wilson considered the implications of the AGPL. He pointed out that “The GPL license requires any modifications of code it covers to also be GPL if distributed [emphasis added].” The use of a cloud-based service is not considered distribution of code, so a user could offer a cloud service without making modifications available to the community. Wilson wrote:

The AGPL license, on the other hand, treats deployment of websites and services as “distribution”, and compels [his emphasis] anyone using the software to run a service to also distribute the modified source code.

Wilson also reported that Bradley Kuhn, one of the original authors of the AGPL, said in a talk at Open World Forum in 2012 that at that time some of the most popular uses of AGPL were effectively “shakedown practices” (in his words). This unfortunate characterization may rarely be true.

The AGPL license does meet the Open Source Initiative’s criteria for an open source license. But the pressures of monetization cause its terms to be used in ways inconsistent with the connotation of “open source.”

Oracle Builds a Community?

On September 29th at Oracle World, Oracle announced their Oracle Student Cloud and their investment in the Oracle Customer Strategic Design Program. Embry-Riddle Aeronautical University, the University of Texas System and the University of Wisconsin-Madison will participate “to provide guidance and domain expertise that will help shape the design and development of Oracle Student Cloud.” A press release described the initiative:

  • Each university will work with Oracle through significant milestones and releases, providing guidance and expertise to develop an industry-leading product. The growth of non-traditional programs is an important trend for these customers, and the first release of Oracle Student Cloud is expected to include flexible core structures and an extensible architecture to manage a variety of traditional and non-traditional educational offerings.
  • Oracle Student Cloud will feature a compelling mobile user interface that enables customers to extend, brand, and differentiate the student experience for each institution.
  • The first phase of Oracle Student Cloud is designed to support the core capabilities of enrolment, payment, and assessment. Oracle Student Cloud will embed CRM-based functionality throughout the solution to promote engagement and collaboration, along with a business intelligence foundation to provide customers with actionable insight into their student operations.

The Design Program could be interpreted as combining the contributions of a community as found in open source development, and a proprietary model that would use the standard Oracle license. If successful this innovation could benefit both Oracle and colleges and universities.

In an October 7 blog post, Cole Clark, Global Vice President of the Education and Research industry, reflected on Oracle World. He included Stanford University as a participant. He also said a fifth partner, in Europe, would be named the following week at the Utrecht (NL) Higher Education User Group meeting.

He wrote:

We believe this [Oracle Customer Strategic Design Program] gives us a broad spectrum of the higher ed panoply from which to draw a great deal of insight and council [counsel] as we build the next generation student system in the cloud with mobile and social attributes at the core of the development initiative.

He also described the role of open source software:

Don’t get me wrong; there are definitely areas where Kuali (and other open source initiatives) fill gaps that the private sector will likely never pursue – Coeus [research administration] and the open library environment are excellent examples.  Parts of Unizen may be another.  But in the broader areas … where ample (and growing) competition exists to drive innovation up and costs down, there is no justification for investing shrinking resources in higher education on software development and support.

The contribution expected of the participants (guidance and domain expertise), together with their diverse needs and competencies, suggests they will supply functional requirements and designs of student services that improve the Oracle software. The reference to the growth of non-traditional programs demonstrates sensitivity to needs unmet by current student systems. If these are incorporated into the Oracle product, it will benefit Oracle’s college and university customers, and perhaps be available earlier than other alternatives.

Incorporating customer feedback on products is becoming a standard industry practice for consumer goods. If broadly implemented, Clark’s innovation could change the relationship between higher education and software suppliers.

There is one concern. Oracle declined to answer the question whether the participants would be required to sign non-disclosure agreements. If they are, many of the benefits of the broad, open communications found in open source development projects may be lost.


  1. The data on the shift from restrictive to permissive licensing suggests, but does not confirm, broader participation in and use of software under permissive licenses. edX may want to consider relicensing the learning platform itself under an Apache license to attract more users of its software.
  2. Kuali Inc.’s experience introducing the Affero license demonstrates how restrictions can be perceived based, in part, on the intent of the copyright holder. The many yet-undefined terms that could be a “cause of action” enabling a copyright holder to bring a legal action against a user present risks that may require the advice of a licensing specialist or an intellectual property attorney to fully understand.
  3. Oracle Higher Education may benefit colleges and universities by introducing broad collaboration similar to that of open source communities. That should be encouraged. But implementation may be fragile in the sense that participants, users, and prospects are likely sceptical of success. Complete transparency and open communication about the work of the Strategic Design Program may make the true purpose better known and the results more widely used.

The emergence of “intellectual property”—software licenses in these cases—has created monetary incentives for copyright holders. Assessment of licensing restrictions and risks should now be incorporated into all information technology decisions.

This guest post is (c) Jim Farmer, and is licensed under the Creative Commons Attribution 4.0 International license. The graphic in Figure 4 is by Donnie Berkholz of RedMonk, and licensed under the Creative Commons Attribution ShareAlike 3.0 license.

Koha trademark case settled

[UPDATE 04/06/13] Since this blog post was written, the trademark for Koha in New Zealand has been granted to HLT.  See Chris’s comment below.

Earlier in the year, I wrote a case study on Koha, the open source library management system released under the GPL, detailing the history of the project and how the sale of assets had created confusion and disagreements between the Horowhenua Library Trust (HLT) who originally commissioned the system, and PTFS who now holds the copyright for most of the project’s original assets, publishing their own fork under the name LibLime Koha.

At the time of writing, the major issue at hand was PTFS’s trademark application for the mark KOHA in New Zealand, which HLT and Catalyst IT who provide commercial support for Koha were opposing.  This month, the case was settled, with the commissioner ruling against PTFS and rejecting the application.

HLT and Catalyst opposed the application on 6 grounds:

  1. The mark was likely to deceive or cause confusion.
  2. The application for the mark was contrary to New Zealand law (specifically, The Fair Trading Act 1986), on the basis of ground 1.
  3. Use of the mark would amount to passing off, also in breach of New Zealand law.
  4. The mark was identical to an existing trade mark in use in New Zealand.
  5. PTFS wasn’t the rightful owner of the mark, HLT was.
  6. The application was made in bad faith, on the basis that HLT owns the mark.

Interestingly, grounds 3, 4, and 5 were rejected by the commissioner, largely on the grounds that HLT’s use of the name Koha didn’t constitute a trade mark.  When HLT originally open sourced Koha, the evidence presented showed that it intended Koha to be given away for free so other libraries could benefit from it.  The commissioner didn’t consider this to constitute trading, and therefore Koha, while identical to the mark being registered, didn’t constitute a trade mark.

As ground 5 didn’t show HLT to be the rightful owner, ground 6 was also rejected: PTFS couldn’t be seen as acting in bad faith by trying to register a mark that clearly belonged to someone else.

However, HLT and Catalyst’s success in this case hinges on the fact that when the trademark application was made in 2010, HLT’s Koha software had existed for 10 years and was well known in New Zealand’s library sector.  Since the commissioner considered the mark being registered to be identical to the name Koha, and HLT’s software to be the same class of product as PTFS’s, it was found that the two could be confused by a substantial number of people, allowing ground 1 to succeed.

Furthermore, the cited sections of the Fair Trading Act had a similar but stricter requirement that there not be a real risk that such a confusion or deception might happen.  The commissioner believed that due to Koha’s prominence in the industry there was a real risk in this case, allowing ground 2 to succeed.

The application for the trade mark has now been defeated, with HLT and Catalyst being awarded nearly 7,500 NZD in legal costs between them.  What effect this will have on the use of the Koha name in New Zealand isn’t clear – since HLT have been shown not to own the mark themselves, they are unlikely to be able to stop PTFS from using the name in New Zealand should they choose to.  However, the Koha community in New Zealand can now rest easy knowing that they won’t be stopped from continuing to use the name as they always have.

I hope that other open source software projects use the case of Koha as a lesson to ensure that your branding and IP is well-managed, so that cases like this can be avoided.

You can read the Commissioner’s full ruling here.

How to Choose the Best License for Your Open Source Software Project

You need sound coding skills to create good software, but the success of an open source project can also depend on something much less glamorous: your choice of software license.

Last week I spoke to Paul Rubens about the issues that need to be considered when deciding which licence to use when releasing your code, including why a licence is necessary, the varieties of Free and Open Source Software licences, and how to provide licences for the non-software parts of your project.

You can read the full article at

Open Hardware at CERN

Last week I took the opportunity to visit Oxford’s Hackspace and see a talk by Javier Serrano of CERN.  Serrano has been working together with Moorcrofts, an Oxford-based legal firm, on the latest version of CERN’s Open Hardware Licence (OHL).

CERN’s systems have unique requirements in terms of scale, synchronisation and geographic distribution.  As a result, a lot of their hardware is produced to bespoke specifications.


Serrano spoke about the models available when considering closed/open and commercial/non-commercial licensing.  Due to the long lifespan and iterative nature of CERN’s systems, a commercial proprietary solution would create vendor lock-in which isn’t acceptable for their requirements.  An open solution without commercialisation wouldn’t be sustainable.  He concluded that an open, commercialised solution provides “the best of both worlds” in terms of sustainable support and sharing of knowledge, which is one of CERN’s core goals.

The licence itself takes inspiration from the GNU GPL for software, with modifications to make it more applicable to hardware.  The licence is designed to cover the documentation for the hardware (such as CAD files and bills of materials), allowing the documents to be distributed, modified, and used to manufacture products, provided that the documentation is made accessible to those receiving the products.

Serrano described the licence as “weak-copyleft”.  It is designed to ensure that modifications to the design, used complete or in part, are shared back to the community.  However, it does not attempt to stipulate that the designs of other products that are integrated or linked with the OHL products also have their designs shared.

Similarly, the licence contains a patent grant to any patents owned by the designer, but it doesn’t attempt to make this reciprocal – the licensee isn’t required to license their own patents back to the licensor.

A final notable feature of the licence is the stipulation that, alongside any trademarks and copyright notices, any references to the location of the documentation must not be removed from the designs.  This means, for example, that a URL to access the documentation could be included in the top copper layer of a PCB – this would ensure that anyone receiving the board would have access to the designs.

Serrano finished by introducing White Rabbit – a network time protocol which improves on the Precise Time Protocol standard to synchronise networked nodes with tolerance of under a nanosecond. The documentation for the hardware implementing White Rabbit is released under the CERN OHL.

A big thanks to Oxhack for hosting the event, and Moorcrofts for sponsoring it.

Image Credit: Large Hadron Detail (Fred Benson) by Michael Maniberg

Should you use Creative Commons licenses for software?

Creative commons logos, ending in a question mark

tl;dr: no.

Creative Commons is a great way to license documentation, websites, articles, artwork, and other media assets associated with a software project, but source code has some special characteristics that are better suited to the licenses recommended by the Open Source Initiative and the Free Software Foundation.

To find out why, take a look at our recently updated briefing note on Creative Commons licensing and Open Content, where we’ve added a section on this question.

FRAND and policy: Obama vetoes ITC iPhone and iPad ban

Even for those of us who find smartphone patent litigation interesting, the sheer number of decisions and reversals across so many countries can be hard to track. The general impression one gains is that every major player is seeking to gain a stake in the profits of every other major player by winning patent litigation against them. You could almost be forgiven for wondering if – should they all get their way and create channels of profit-sharing connecting them all – they wouldn’t end up being a kind of de facto single entity which could then cancel their individual litigation budgets and instead put some more effort into innovation. Indeed some would argue that this kind of patent enforcement frenzy is a sign of a saturated market in which innovation has slowed and the players are forced to grow their profits not by expanding the customer base with attractive new features but by squabbling over the money already developed customers are spending.

So while it can be depressing to monitor, sometimes the smartphone litigation gangfight can throw up interesting policy decisions. This happened on Saturday, when President Obama decided to veto a ban imposed by the International Trade Commission (ITC, but not the Thunderbirds one). The ITC offers, among other things, a quick method of banning the distribution of products that you feel are infringing your patents. Well, I say quick… it still takes a year or so, but that’s lightning fast compared to many patent litigations conducted in court. So the ITC can act quickly, but on the down side (a) it can’t award damages, only prevent sales, and (b) even if you win, the President can still decide that, for ‘policy reasons’, the ban should not happen. This last risk is not generally considered to be huge, however; indeed no President since Reagan had vetoed an ITC decision. So Samsung probably felt that when the ITC granted their request to prevent the sale of certain models of Apple’s iPhone 4 and certain iPads (models 1 and 2) there was a good chance of it happening. (Having said that, it is ironic that the last time a veto did happen, back in 1987, it was in Samsung’s favour, and covered the distribution of their 16 and 32 KB RAM chips.)

Samsung’s patent covers technology necessary for implementing the CDMA standard, which is used for making data connections to certain US mobile providers. As what is called a ‘Standards-essential Patent’ (SEP), the patent in question has to be licensed to competitors, and on what are known as FRAND (Fair, Reasonable and Non-discriminatory) terms. We have blogged about the interaction between FRAND and free and open source software before, and we discuss the issues in our briefing note ‘Open Standards and Open Source‘. When the owner of a SEP uses it to try to ban a competitor’s product, therefore, there will be some serious discussion of whether the owner has – as they must – already offered a licence to that competitor under fair and reasonable terms. What does fair and reasonable mean, in dollars? Unfortunately no-one knows. It’s one of those ‘litigate and see’ things. In this case, we know that Apple and Samsung have negotiated and failed to reach agreement. As Fortune magazine notes, while the details of this negotiation are largely private, a dissenting view attached to the ITC decision gives a small insight into that process, and implies that Samsung were perhaps attempting to close a so-called ‘tying’ deal. This works a lot like a satellite or cable TV package: the things you actually want are only available with a load of extras that you don’t necessarily want. Here Samsung may have been trying to force Apple to license additional patents in exchange for a workable deal on the CDMA essential patent.

This Presidential veto is interesting for a number of reasons. Its rarity makes it news, but also gives us an indication that the issues it addresses are considered very important to US trade policy. It would be possible to view this as self-interested and protectionist – after all, Apple must be one of the US’s largest taxpayers. However in practice Apple would just have to settle if the Presidential veto had not been applied, and it is not as though they could not afford to do so. So it seems more likely that the policy imperative here was associated with dispelling doubt about whether FRAND is really a workable model. By vetoing the ITC ban, the President sends a clear message that trying to expand the boundaries of ‘fairness’ to include things that are really quite ‘unfair’, and having endless arguments about what constitutes ‘fair’, is damaging to a healthy IT market.

From the point of view of the free and open source community though, it is also an interesting decision. As our briefing note linked above points out, implementing even open standards in FOSS can be non-trivial. When the UK Cabinet Office decided to define open standards as those available on a royalty free basis (partly to encourage FOSS software provision to government), there was some grumbling among standards definition bodies (indeed I was present at an event in Brussels on FRAND just after the policy was announced, and when it was mentioned in the room it clearly irked more than a few attendees). However the veto shows that the current system of agreeing to be ‘fair’ then immediately disagreeing about what ‘fair’ means is terminally broken. In this context the Cabinet Office’s decision to provide something a bit more defined than just ‘fair’ seems well justified, whether it was decided upon with FOSS in mind or not.

Digital Exhaustion

When I was a poor student I was extremely grateful that the local bookshop had both a new and a second hand section. Text books were and are expensive, and I would always check the used section for a usable if dog-eared copy of whatever text I was seeking. At the end of each term I would lug a stack of books up to the second hand department and recover some beer money.

These days students will probably acquire at least some of their textbooks in digital form, to be read on tablets or ereaders. Often these copies will be cheaper than their physical equivalents, but as things stand right now, they lack the end-of-term-beer-money-cash-in value that their heavier cousins still enjoy. Why is this?

It might make more sense to ask why we can resell the physical copies. After all, books are copyright works, and one of the exclusive rights a copyright owner enjoys is the right of distribution. Why can I distribute a book to which I don’t own the copyright when I sell it second hand? It’s because of what is called ‘exhaustion’. Exhaustion means that, for any given copy of a copyright or patented item, the copyright or patent owner’s rights run out when it is sold for the first time. This leaves the first buyer free to resell that copy of the item without the rights holder complaining that the sale infringes their rights. Why does the law explicitly allow this? It’s chiefly because the reverse situation, in which the rights holder controls every subsequent sale, is generally considered to give the rights holder too much power to fix prices and therefore distort the market in ways that are societally undesirable.

So if resale of copyright and patented items is societally beneficial, shouldn’t it be possible to resell digital copies in the same way we can resell physical copies? As things stand I can resell my paperbacks but not my ebooks, my CDs but not my mp3s, my games on DVD but not the apps on my phone. The technologies that prevent users from illegally duplicating digital copies (and which are illegal to disable) also prevent this kind of resale. Only the controller of the necessary encryption keys can permit the transfer required for resale. This issue has become more pressing recently for a number of separate reasons.

Firstly, the large-scale digital retailers Apple and Amazon have both obtained patents for systems of resale for digital items. These systems are interesting in that they both seem to presuppose that the rights holder can enforce their rights after first sale (in Amazon’s case by effectively destroying the item after a certain number of resales, and in Apple’s case by enforcing a cut of the resale price being handed over to the rights holder).

Secondly, the announcements of the next generation of home consoles (Xbox One and PS4) have both led to speculation that the second hand games market will be restrained by technologies built into the new systems. It has been clear for years that games creators and console manufacturers resent the second hand games market. Second hand sales – it is argued – reduce the market for new copies and new games, and none of the second hand price goes to the console or game manufacturers. However, when the game disc is all that is required to play the game, there is little they can do to prevent that disc changing hands after first sale. In the case at least of the Xbox One, it seems clear that some kind of controlled resale of games will be implemented, although the details are yet to be fully announced. This would – it seems likely – involve unique identifiers being assigned to each copy of a game (whether on disc or downloaded), with registration of these identifiers against a user account being a technological necessity for a user to play. Thus resale would necessarily involve the console manufacturer’s consent (to transfer the game ID from one account to another), whether it was legally required or not, and at that point of transfer it seems likely that some kind of levy may end up being charged.

This would all be rather disheartening if it were not for the third development, which was last year’s European Court of Justice decision in the case of Usedsoft v Oracle. This case concerned whether a company (Usedsoft) could resell licences to Oracle software that it had bought from legitimate Oracle licensees. The question being considered was, in essence, whether a combination of transfer of a copy and transfer of a licence to use the copy amounted to a ‘first sale’ that could trigger rights exhaustion. Oracle argued that it couldn’t, and that what they were doing was selling perpetual licences that could not be transferred from one user to another. Usedsoft argued that in effect Oracle were selling copies of the software and so could not control resale. The ECJ agreed with Usedsoft, and ruled that when you provide a perpetual licence and a copy of the licensed software you are selling a copy, and lose rights over that particular copy.

The fallout from this decision has yet to fully play out. Law firm Linklaters advises its software-developing readers to consider stopping selling perpetual licences altogether, instead moving to a rental or service provision model. It’s hard to see this working well for consumer products, however. Would customers accept annual renewals to keep their books, games and music? Probably not, unless the initial prices were a lot lower, and that is not an attractive prospect for creators and distributors.

So how does this affect free and open source software? In the worst case, one could potentially argue that the responsibilities associated with various FOSS licences, such as attribution, copyleft and source provision, only apply to the first acquirer of the software, and that – as a result of exhaustion – responsibilities associated with that copy no longer apply once it passes from a first acquirer to later downstream users. As the FOSS model relies on all copies requiring the same compliance with the licence, this could conceivably be an ugly problem. Whether it actually is a problem depends to a large extent on what we consider to be a ‘sale’. The Usedsoft judgement has this to say on the subject:

According to a commonly accepted definition, a ‘sale’ is an agreement by which a person, in return for payment, transfers to another person his rights of ownership in an item of tangible or intangible property belonging to him.

Using this definition it would seem fairly clear that the usual method of acquiring FOSS does not fit the pattern for a ‘sale’, due to the absence of a payment in exchange for rights of ownership over a copy. However, it should be noted that Oracle argued that it was handing out the copies for free and charging only for the licences, and that therefore there was no sale of an item – an argument the ECJ rejected. As I am not a lawyer I can’t really give an informed opinion on how closely the judgement might bear on the FOSS model. I have certainly heard a number of people who are lawyers express very similar reluctance to give an opinion.

What seems clear is that older notions associated with physical items, first sale and exhaustion are hard to apply in the digital world, where the idea of a discrete ‘item’ is problematic. To play devil’s advocate, we can probably assume that the idea of exhaustion of rights was first conceived when it seemed obvious that a new copy would have certain advantages over a resold copy in the market, in terms of absence of physical wear and tear. Is exhaustion as fair a principle in the digital world, where the resold item is identical to the original? We can also see that the very idea of a discrete copy of a digital item is something innately tied to copy protection and digital signing technologies. If we cannot identify a particular ‘item’ we cannot know if it has been resold. While as consumers we may want a healthy second hand market in digital items, in practice such a market may well require the embracing of the kind of copy protection technologies that – up to now – consumer groups have tended to decry. What we can say is that, as technology and law continue to develop, the issue of rights in digital copies will need a clearer resolution than we have now.

Unlicensed code: is it ever OK?

Car With No License Plate

In an earlier post Mark Johnson responded to recent commentary about unlicensed code on GitHub. Mark criticised the idea put forward by some pundits that developers not licensing their software projects amounted to some kind of movement. Instead Mark sees it as emerging from a lack of education (or, quite possibly, sheer laziness). He also reiterated the point that a lack of licensing clarity discourages community and harms reuse and sustainability of software. Experienced developers won’t touch unlicensed code because they have no legal right to use it.

However, I decided to follow up by seeing if I could start from the other end of the argument and identify some good – or at least acceptable – cases for where you might legitimately make your source code available intentionally without applying a license.

Here’s what I’ve come up with.

Deferring the licensing decision

Licenses interact with your choice of business model. For example, some licenses are more useful than others when pursuing a dual-licensing strategy; some make more sense for software that provides online services; and each license provides some degree of advantage over others for particular cases (if they didn’t, there wouldn’t be so many of them!).

However, for some projects it’s hard to identify early on what the business case is going to be, or even whether there is likely to be any point in developing one.

Your software experiment may turn into a liberally-licensed library, a copyleft and commercial dual-licensed application, or a service offered under something like the CPAL or AGPL, but maybe it’s too early to tell. Should you keep it under wraps while you work out where it’s going, or share it now and risk selecting the wrong license?

Releasing your code with no license while you are still deciding on an appropriate model is one possible option. The downside of this is that no-one will really be able to reuse your code until you do apply a license, and it is also likely to deter potential collaborators.

So even here I’d still recommend choosing a license and revisiting the choice later as the project matures: as the owner of the intellectual property for your software you always have the option of changing your mind, and your license, later on.

Changing licenses for software can be controversial and difficult, but at least you have more chance of developing a user community and partner network to have this argument with by making the initial code available under a recognised license.

Note that it’s also much easier to change from a more restrictive license to a more permissive one than the other way around.

Software as evidence

There is a type of project where releasing code but not licensing it (effectively sharing code with all rights reserved) may make sense. This is where you have no interest in anyone else actually reusing your code, or building on it, or contributing to it!

Why on earth would you want to do that? Well, when the purpose of releasing the code is not to create viable software, but instead to provide transparency and reproducibility.

For example, if you have written software as part of an experiment, and you need reviewers to be able to replicate or inspect your work. In this case, there is no real expectation that anyone will take your code and reuse it for something else, or integrate it into any kind of distribution.

So maybe then you can just distribute the code, but not as open source or free software?

One reason why that may not be a great idea is that the judgement that no-one else can make use of the code is just your perspective; from another perspective, maybe your code has a value you don’t realise. As Scott Hanselman points out, you can think of these kinds of projects as a “Garage Sale” where one developer’s junk is another’s treasure.

You may also be concerned that, by distributing your code under an open source license, you may be raising expectations of what the code is for, or inviting a critique of your software development skills; this is a theme that Neil Chue Hong picks up on in a post over on the SSI blog. (Neil even points to a special license, the CRAPL, aimed at this sort of case.)

Even for very specialised academic code aimed at a single objective for a single paper in a specialist journal, the case can be made for releasing the code as Free or Open Source software.

(For another good discussion of this topic, see Randall LeVeque’s post Top ten reasons to not share your code (and why you should anyway).)

The Contractual Obligation Software Project

Sometimes you get to work on a project as part of some sort of funded initiative, which, while not stipulating sharing your code as open source, does expect you to at least make the code “available” in some fashion.

So, like an artist locked into a record contract, when the funding runs out you may be tempted to just make a code dump somewhere in order to meet your obligations, and in a fit of spite not even bother to put a license on it.

However, the “garage sale” metaphor works well here, too. Maybe the project or initiative didn’t exactly set the world on fire, but maybe some of the code written in the process could still be salvaged for something.

Gists and examples

You often find code snippets in blog posts or as solutions to questions on StackOverflow. This code is very rarely explicitly licensed, but the assumption is that it’s usually OK to copy and paste it without worrying too much about licensing. If you’re conscientious, you can always pop in a comment with a link to where you found it.

However, there are also grey areas, such as Gists, which are a bit more than a few lines of code, but not quite projects in their own right.

Even with a small snippet of code, it’s not always clear whether or not copyright protection applies. For example, a lengthy example of how a standard Java library should be used would probably not be protected, as it doesn’t involve much creativity. However, a two-line program that offers a novel solution to a problem could well be considered protected under copyright.

So, in some cases you may be justified in not bothering with a license for a snippet or Gist, but to avoid all uncertainty it’s still better to put in a license header, or at least make it clear that you’re willing to license the code to anyone who thinks it necessary.
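Even a very short comment header removes the ambiguity. As a sketch (the filename, author and year here are invented for illustration), a gist-sized snippet might carry an SPDX-style identifier like this:

```python
# SPDX-License-Identifier: MIT
# Copyright (c) 2013 A. N. Author
#
# flatten_snippet.py -- a hypothetical gist-sized utility:
# flatten one level of a nested list.

def flatten(nested):
    """Return a single list containing the items of each sublist."""
    return [item for sublist in nested for item in sublist]

if __name__ == "__main__":
    print(flatten([[1, 2], [3], [4, 5]]))  # prints [1, 2, 3, 4, 5]
```

The single identifier line is enough for both humans and automated licence scanners to establish the terms of reuse, without turning a ten-line snippet into a project.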

Creating Fear, Uncertainty and Doubt

OK, I wouldn’t say this is a great reason, but it could be a reason.

Maybe you really do want to make people uncertain about whether they can use your code because … well, because that’s the way you roll.

Maybe you’re happy to license your code, but only to people who ask you nicely first, and you don’t want them to be able to distribute their code as free or open source software for some reason.

Or maybe you are looking to bait the incautious into copying your software so you can threaten them with lawyers and shake them down for money, because you are actually a Copyright Troll.

A copyright troll

Not necessarily wise, but not necessarily evil either

From this brief excursion I would conclude that distributing unlicensed code is never a great idea, and rarely even a good one, but I can see there are circumstances where you might consider doing it. In each case, though, there is usually a better option worth taking.

Car image by Su-May. Copyright Troll image by redtimmy.

Unlicensed code: Movement or Madness?

One of the hot topics of commentary on open source development at the moment is the licensing situation on GitHub.  When code is committed to GitHub, the copyright owner (usually the author or their employer) retains all rights to the code, and anyone wishing to re-use the code (by downloading it, or by “forking” and modifying it) is bound by the terms of the license the code is published under.  The point of discussion in this case is that many (indeed, the majority) of repositories on GitHub contain no license file at all.

There are two troubling points to the commentary on this phenomenon.  The first is that some discussions suggest that publishing with no license is “highly permissive”, implicitly allowing anyone to take the code and do with it as they wish.

In fact, it’s usually the case that having no license on your code is equivalent to having an “All Rights Reserved” notice, preventing any re-use of your code at all.  Whether it’s the copyright holder’s intention to enforce these rights isn’t being made clear, but it’ll be enough to put off any company who might want to engage with such a project under an open development model.

The second troubling point is that commentators are time and again dressing this up as a wilful movement.  James Governor coined the term “Post Open Source Software”, while Matt Asay claims “Open Source Is Old School, Says The GitHub Generation”.  These commentaries seem to imply that there’s some sort of “No License Manifesto” being championed (in a similar fashion to the Agile Manifesto, perhaps).

The only movement I’ve seen which would be akin to this is the Unlicense, which encourages authors to wilfully lay aside any claims to their rights – effectively a Public Domain dedication, which Glyn Moody has suggested is the way forward for open source.

However, what we’ve seen on GitHub shows no such conscious setting aside of rights; it shows a lack of education.  Publishing articles that tout release without a license as how all the cool new kids are working encourages behaviour which could prove damaging to the development of a project’s community, and the wider community in turn.

Fortunately there are voices of reason in these discussions.  Stephen Walli of the Outercurve Foundation points out that governance == community.  If a project seeks to “fuck the license and governance” as James Governor suggests, then they risk doing the same to their community by alienating contributors (particularly those that are part of a larger organisation, rather than individual developers), as these contributors have no predictable structure to work within.

If the project lead might turn around and say “I don’t feel like accepting your contributions, and by the way, if you keep using my code I’ll sue you”, you’ve got very little incentive to work with them.

By neglecting your community in this way, your project is at risk of being limited to a few individual contributors who know and trust one another implicitly.  I can’t believe that developers seeking to allow permissive use of their code would be happy with this as an outcome.

GitHub haven’t yet made any suggestion that they feel this is a problem they should work to solve.  It’s our responsibility as a community to ensure that we educate newcomers to become responsible open source citizens, rather than encouraging them to follow established bad practices.

Licensing and governance analysis form two cornerstones of OSS Watch’s openness rating.  If you’d like advice on how to improve your project’s management of these areas, please get in touch.

4 Tips for Keeping on Top of Project Dependencies

Almost any software project involves working with dependencies – from single-purpose libraries to complete frameworks. When you’re working on a project it’s tempting to bring in libraries, focus on meeting the user need, and figure out the niceties later. However, a little thought early on can go a long way.

Photo of a stack of cards

This is because every dependency can bring its own licensing obligations that affect how you are able to distribute your own software. In some cases, in order to release the software under a particular license you may end up having to rewrite substantial amounts of software to remove reliance on a library or framework that is distributed under an incompatible license.

So there is a tradeoff between being agile and productive in the short term and the risk of needing to do a costly refactoring triggered by a compatibility check before – or even worse, after – a release.

For larger projects, and organisations with multiple projects, this starts to stray into the territory of open source policies and compliance processes, but for this post let’s just focus on the basics for small projects.

1. Make it routine

A good strategy is to build good dependency management practices into your general software development practices – similar to the concept of building in quality or building in security.

In other words, given that the cost of fixing things later can be significant, it’s worth investing in the practices and tools that can ensure potential issues are spotted and fixed earlier.

At its simplest, this can just mean developing a greater awareness as an individual developer of where your code comes from, knowing that what you reuse can limit your choices for how you license and distribute your own code.

So in practical terms, this means being careful about copying and pasting code from the web, and making sure you know the licenses of any dependencies, preferably before working with them, but certainly before building any reliance on them into your code.
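As a rough illustration of what that check can look like in practice, here is a sketch in Python using only the standard library’s importlib.metadata module (available from Python 3.8). Note that many packages record their licence only in their trove classifiers, so anything reported as UNKNOWN still needs checking by hand:

```python
# List each installed package with its declared license, so that
# surprises (e.g. a copyleft dependency in a permissively-licensed
# project) are spotted early rather than at release time.
from importlib.metadata import distributions

def license_report():
    """Return a sorted list of (package, license) tuples for the
    distributions installed in the current environment."""
    report = []
    for dist in distributions():
        name = dist.metadata.get("Name", "unknown")
        lic = dist.metadata.get("License") or "UNKNOWN"
        report.append((name, lic))
    return sorted(report)

if __name__ == "__main__":
    for name, lic in license_report():
        print(f"{name}: {lic}")
```

This only reports what each package declares about itself; it says nothing about the licences of *their* dependencies, which is exactly why the fuller tooling discussed below exists.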

It may also make sense to handle any required attribution notices for inclusion in a NOTICE and README as you go along, rather than just rely on a release audit to always pick them up.

2. Let tools take some of the strain

There are also tools that can help make things easier. For example, if you use Maven for Java projects, there is a License Validator plugin that can help flag up problems as part of your compile and build process.

Alternatively, Ninka is an open source tool for scanning files for licenses and copyrights. While it can’t follow import declarations or dynamically linked libraries, it can be useful for periodically checking builds. A similar project is Apache RAT (Release Audit Tool), which was originally created within the Apache Software Foundation for reviewing releases made in the Apache Incubator.

For larger projects and organisations there are also complete open source policy compliance solutions, such as Protex from Black Duck or Discovery from OpenLogic.

It’s also worth pointing out that, while tools can be a part of the solution – and can be invaluable for large projects – ultimately it’s still your responsibility to make sure you meet the obligations of the software you are reusing.

3. Remember to check more than just the licences!

If a dependency has a compatible licence, that’s great. But what if the project that distributes it doesn’t bother checking its own dependencies?

This is where it’s good to have an idea about the governance and processes of projects you depend on.

There aren’t just licensing risks associated with dependencies – if you rely heavily on a library that has only one or two developers, you also run the risk that it may become a “zombie” project, with implications for the rest of your code if, for example, security patches are no longer being applied.

A zombie

Beware of zombie projects!

The commercial tools mentioned above are also typically backed by a knowledge base that can flag up other issues with dependencies, such as governance or sustainability problems. However, just checking the project’s page on Ohloh is often good enough for most smaller projects to confirm that a library is still “live”.

If you need to know more about the sustainability of a particular project, OSS Watch can carry out an Openness Review to check its viability using a range of factors – get in touch with us if you want to know more.

4. Keep track of past decisions and share knowledge with colleagues

Some organisations make use of component registries to keep track of which components they approve for use in their software projects. This can save developers time spent researching the same libraries, but it makes most sense when you have a lot of projects that need the same kinds of components, so that standardising on a shared set of libraries pays off.

Another reason for using a registry is where you need to perform more detailed evaluations, for example for security, and so checking a dependency is more involved than just figuring out which license it uses, and that the project isn’t dead.

Some examples of commercial registries are Sonatype Component Lifecycle Management and Black Duck Code Center. Again, for a smaller project or an organisation with a relatively small set of projects this can be overkill, and just having a shared document somewhere where you can keep note of which libraries you’ve used can be effective.

For example, you could share a spreadsheet with colleagues containing some basic information on each library like what version you’re using, what license it’s under and the date and results of any investigations you’ve done into sustainability, security or risk assessment.

Is it worth it?

Reusing code is good practice and should save you time and expense – so it’s annoying if the administration associated with it starts affecting your productivity.

You can make a judgement call about what level of risk you feel is acceptable; for example, on an internal-only research project the risk of having to undergo a major refactoring should the project be successful may be one worth taking.

However, for a production system, or a component that is itself intended for reuse, you may just have to accept that you have to be a bit more diligent in how you reuse code.

Photo by DieselDemon used under CC-BY-2.0.