Untangling the university web presence with OpenScholar

This is a guest post from the OpenScholar team at Gizra. A lot of public sector organisations have recently moved to an open source CMS solution, citing the benefits not just in cost but also in flexibility, and it's great to see examples of universities following suit. If your university has a similar experience, tell us about it in the comments!

As in many fields, the introduction of the web into higher education took place gradually and unevenly. This led many academic staff, projects and even whole departments to build their own web presence independently of each other, using their personal or department budgets to hire external help and grad students to create their websites.

Naturally, this fairly quickly left the Ivory Tower looking more like the Tower of Babel in terms of web presence: universities found they had scores of sites running on various incompatible environments, increasingly difficult to maintain, update or apply security patches to – a situation that still bogs down many academic IT departments.

Many institutions are attempting to fix this by standardizing on a single CMS, often an open source one. When Harvard University faced the problem, it decided to take it one step further and create a CMS focused on academic use.

As a basis, it picked Drupal, one of the most widely used Open Source CMS solutions, powering civic and commercial websites such as WhiteHouse.gov, The Economist, Twitter’s developer website and many others, which already had a strong academic presence. Harvard used Drupal as the base for its own distribution named OpenScholar, which essentially bundles specific backend modules (e.g. bibliography handling) along with a user interface tailored for users in academia.

As the project progressed, we at Gizra were called in for a short consulting gig based on our experience releasing the Organic Groups module for Drupal. That gig morphed into a three-year engagement which, at its peak, employed four full-time developers on our end and an equal number on Harvard's.

The result is a system that aims to solve the woes of both content creators and IT admins. Academic staff are provided with an intuitive UI for smooth website creation. Templates already incorporate the common (and some less common) elements used in such sites: for example, a professor can sign in and have a basic template created. She can then choose to have a calendar on the right sidebar, a blog in the middle, a bibliography page linked in the footer and so on – all with an easy-to-use drag & drop interface.

For the IT side, this helps reduce the amount of user support required, but more critically the system also provides a single, unified codebase upon which all of the institution's websites are built. Upgrading to a new version or applying a security patch is done in one place, as opposed to keeping dozens of different environments up to date.

OpenScholar now runs all of Harvard University's websites – 5,120 at the time of writing – and is starting to be used at Princeton, Berkeley, Virginia Tech and others. Drupal's excellent multilingual support is helping it spread worldwide, and we've recently helped the Hebrew University of Jerusalem add support for right-to-left text, enabling easy creation and management of websites in Hebrew, Farsi, Arabic and other languages.

Leading Drupal cloud hosting providers Acquia and Pantheon now offer a turnkey solution for easily setting up highly optimized, elastic OpenScholar environments without the need for local installation and maintenance at all. For organizations wishing to keep their servers on-site, we’re collaborating with Zend Technologies on a packaged solution that will allow installing a complete secure and optimized OpenScholar environment locally from scratch.

Following the success at Harvard, the OpenScholar team continues to develop the core as well as adding more UI elements in response to professors' and departments' demands. A RESTful API is now being developed which will allow easier integration with existing systems, as well as a smoother and more sophisticated front end.

For more information on OpenScholar, visit the OpenScholar website.

Open Source Software Licensing Trends

This is a guest post from Jim Farmer, Chairman of Instructional Media + Magic Inc. Jim has also written a series of feature articles on open source for Informa’s London-based Intellectual Property Magazine.

Higher education has traditionally been a knowledge "sharing" environment. Early software was exchanged without license and, in practice, without restrictions. As the monetization of intellectual property, including software, becomes pervasive, more restrictive software licenses have been introduced and enforced. These licenses impose legal duties on users of "open source software" that can be unexpected and have undesirable consequences.

The first license restrictions were a series of "copyleft" licenses that imposed a duty on any user who modifies open source software to share those modifications with others. In addition, the same terms and conditions are required of all subsequent users of the modified software. Richard Stallman is credited with launching the free software movement; he used software licensing to enforce this desired behaviour. In practice the open source community was already sharing software, so the "copyleft" licenses were not a substantial burden. Disputes were avoided by an email or telephone request, almost always honoured.

Some open source software from higher education became commercial software products with proprietary licenses. Examples include North Carolina State University's statistical package that led to SAS, and the University of Chicago's package that led to SPSS. The vendors' contribution was documentation and standardized, stable versions of the software. Subsequently this strategy was used by Red Hat to introduce Red Hat Linux.

Extending Stallman’s practice of imposing duty, the recent and rarely used Affero license has imposed additional and potentially burdensome restrictions on distribution of modifications made to software used as a service over a network.

Higher education is becoming more sensitive to these license restrictions. There are three recent licensing choices that illustrate the trade-off decisions that were made.

edX Seeks More Software Users

Harvard University and MIT had adopted the Affero software license for their edX learning technology platform. In September, Ned Batchelder, edX Software Architect, wrote "…one license does not fit all purposes, which is why we’ve decided to relicense one part, our XBlock API, under Apache 2.0."

As part of its license compliance software and services, Black Duck compiles data on the use of the various licenses. Using this data, the edX shift from restrictive to permissive licensing is illustrated in Figure 1. The data suggests edX's action was consistent with trends in open source licensing.

Graph showing license usage in open source software; Affero is less than 1% and ranked 16th most popular; Apache 2.0 is ranked 3rd most popular, after GPL 2.0 and MIT.

Figure 1 – Use of Open Source Software Licenses

Batchelder describes the motivation for the change:

The XBlock API will only succeed to the extent that it is widely adopted, and we are committed to encouraging broad adoption by anyone interested in using it. For that reason, we’re changing the license on the XBlock API from AGPL to Apache 2.0.


The Apache license is permissive: it lets adopters and extenders do what they want with their changes. They can release them under a copyleft license like AGPL, or a permissive license like Apache, or even keep them closed-source.

Using Black Duck data for 2009 and 2015, the licensing trends in Figure 2 show the sharp increases in use of the MIT and Apache permissive licenses.

Figure 2.Trends in license use from 2009-2015, showing increases for MIT and ASL, decrease in GPL and LGPL

Figure 2 – Change in Use 2009 to 2015

According to Black Duck's data on the use of software licenses, Apache 2.0 – used by 19% – has moved from 7th to 3rd most used software license. The GNU General Public License is still the most frequently used, at 25%. However, the GPL has lost 21.4 percentage points of user share since 2009, while Apache has gained 12.4. The least restrictive MIT license grew from 3.3% to 19.0% during the same period to become the second most frequently used open source software license.
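Figures like these can be read two ways: as changes in percentage points of share, or as relative growth. A short worked example using the MIT figures quoted above makes the distinction explicit (the arithmetic is illustrative; the share values are the ones reported in the text):

```python
# MIT license share of surveyed projects, as quoted from Black Duck data.
mit_2009, mit_2015 = 3.3, 19.0

# Absolute change in percentage points: simple subtraction of shares.
point_change = mit_2015 - mit_2009  # 15.7 points

# Relative change: growth as a fraction of the 2009 share.
relative_change = (mit_2015 - mit_2009) / mit_2009 * 100

print(f"MIT gained {point_change:.1f} points "
      f"({relative_change:.0f}% relative growth)")
```

The quoted GPL and Apache figures read most consistently as percentage-point changes: a GPL at 25% today, down 21.4 points, implies roughly 46% in 2009, which matches the period's survey data.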

The least restrictive MIT license has few restrictions: you cannot sue MIT on the grounds that the software didn't do what you thought it should ("fitness for purpose"), and it mandates attribution via reproduction of the copyright statement.

There is also a difference based on the purpose for which license use is measured. Figure 3 shows the differences between the licenses developers apply to open source software, the licenses of software selected for download, and the licenses of software companies use. For enterprise use the Apache license is the most used.

Figure 3 – License Use by Purpose

Donnie Berkholz at RedMonk quantified the shift toward permissive licensing using data from July 2012. He summarized his results using the ratio of permissive to copyleft licenses, shown in Figure 4. For both Java and JavaScript – two of the most frequently used languages – permissive licenses overtook copyleft licenses in 2008. Cumulatively, by 2010 the majority of open source software licenses were permissive.

Figure 4: Upwards trend for permissive licensing (source: redmonk)

Figure 4 – Shift of Open Source Software to Permissive Licensing.

In December 2014 ZDNet's Steven J. Vaughan-Nichols summarized:

“The three primary permissive license choices (Apache/BSD/MIT) … collectively are employed by 42 percent. They represent, in fact, three of the five most popular licenses in use today.” These permissive licenses have been gaining ground at GPL’s expense. The two biggest gainers, the Apache and MIT licenses, were up 27 percent, while the GPLv2, Linux’s license, has declined by 24 percent.

He also reported that in July 2013 Aaron Williamson, senior staff counsel at the Software Freedom Law Center, documented that 85.1 percent of GitHub programs had no license. He commented:

Yes, without any license, your code defaults to falling under copyright law. In that case, legally speaking no one can reproduce, distribute, or create derivative works from your work. You may or may not want that. In any case, that’s only the theory. In practice you’d find defending your rights to be difficult.
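License-detection tools generally make this determination by looking for a conventionally named license file at the repository root. A minimal sketch of that check (the filename list reflects common convention and is an assumption, not Williamson's actual methodology):

```python
from pathlib import Path

# Conventional filenames that license detectors look for at the repo root.
LICENSE_NAMES = {"license", "license.txt", "license.md",
                 "copying", "copying.txt"}

def has_license_file(repo_root):
    """Return True if the repository root contains a recognisable license file."""
    root = Path(repo_root)
    return any(p.name.lower() in LICENSE_NAMES
               for p in root.iterdir() if p.is_file())
```

A project for which this check fails falls back to default copyright, as described above: all rights reserved, whether or not the author intended that.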

The primary edX learning system continues to use the Affero license. Apereo Foundation’s Sakai learning system is licensed under Apache; Moodle uses the GPL license.

edX’s move to a less restrictive license will likely increase use. To gain additional users, perhaps the Apache license should be used for the edX learning system as well.

Kuali Foundation Seeks to Protect Cloud User Market

Administrative software being developed by the participants in the Kuali Foundation was licensed under the Educational Community License (ECL) – an OSI (Open Source Initiative) approved special-purpose license for higher education software based on the Apache license. In August the Kuali Foundation Chair Brad Wheeler announced "… the Kuali Foundation is creating a Professional Open Source commercial entity." He also said "Kuali software now and in the future will remain open source and available for download and local implementations." The same day the Kuali Foundation posted Brad Wheeler's blog Kuali 2.0 FAQs. He wrote "The current plan is for the Kuali codebase to be forked and re-licensed under the Affero General Public License (AGPL). AGPL allows customers to download and use the code at will, but requires partners trying to monetize the software to contribute code changes back to Kuali. This is intended to discourage partners/Kuali Commercial Affiliates (KCAs) from receiving revenue from hosting Kuali software, but does not prohibit them."

The Foundation asked its participants to transfer their software development to Kuali Inc. and to use its proposed cloud-based systems. The Kuali Foundation continues to make available the current version of its software under the ECL. The cloud versions also include software proprietary to Kuali Inc.

On September 8, 2014, Chuck Severance wrote:

… the successful use of AGPL3 to found and fund “open source” companies that can protect their intellectual property and force vendor lock-in *is* the “change” that has happened in [Kuali’s] past decade that underlies both of these announcements and the makes a pivot away from open source and to professional open source an investment with the potential for high returns to its shareholders.

Severance suggested how to achieve “high returns:”

First take VC [venture capitalists] money and develop some new piece of software. Divide the software into two parts – (a) the part that looks nice but is missing major functionality and (b) the super-awesome add-ons to that software that really rock. You license (a) using the AGPL3 and license (b) as all rights reserved and never release that source code.


You then stand up a cloud instance of the software that combines (a) and (b) and not allow any self-hosted versions of the software which might entail handing your (b) source code to your customers.

On October 2 at Educause, reporting for e-Literate on the Kuali session, Phil Hill identified “(b):”

The back-and-forth involved trying to get a clear answer, and the answer is that the multi-tenant framework to be developed / owned by KualiCo will not be open source – it will be proprietary code. I asked Joel Dehlin for additional context after the session, and he explained that all Kuali functionality will be open source, but the infrastructure to allow cloud hosting is not open source.

Referring to multi-tenancy, Inside Higher Ed’s Carl Straumsheim described the purpose of “(b)” confirming Chuck Severance’s scenario:

“I’ll be very blunt here,” [Kuali’s Barry] Walsh said. “It’s a commercial protection — that’s all it is.”

In a 10 September blog post, Locked into Free Software? Unpicking Kuali's AGPL Strategy, OSS Watch's Scott Wilson considered the implications of the AGPL. He pointed out that "The GPL license requires any modifications of code it covers to also be GPL if distributed [emphasis added]." The use of a cloud-based service is not considered distribution of code, so a user could offer a cloud service without making modifications available to the community. Wilson wrote:

The AGPL license, on the other hand, treats deployment of websites and services as “distribution”, and compels [his emphasis] anyone using the software to run a service to also distribute the modified source code.

Wilson also reported that Bradley Kuhn, one of the original authors of the AGPL, said in a talk at the Open World Forum in 2012 that "… at that time, some of the most popular uses of AGPL were effectively 'shakedown practices'" (in his words). This unfortunate characterization may rarely be true.

The AGPL license does meet the Open Source Initiative's criteria for an open source license. But the pressures of monetization cause its terms to be used in ways inconsistent with the connotation of "open source."

Oracle Builds a Community?

On September 29th at Oracle World, Oracle announced its Oracle Student Cloud and its investment in the Oracle Customer Strategic Design Program. Embry-Riddle Aeronautical University, the University of Texas System and the University of Wisconsin-Madison will participate "to provide guidance and domain expertise that will help shape the design and development of Oracle Student Cloud." A press release described the initiative:

  • Each university will work with Oracle through significant milestones and releases, providing guidance and expertise to develop an industry-leading product. The growth of non-traditional programs is an important trend for these customers, and the first release of Oracle Student Cloud is expected to include flexible core structures and an extensible architecture to manage a variety of traditional and non-traditional educational offerings.
  • Oracle Student Cloud will feature a compelling mobile user interface that enables customers to extend, brand, and differentiate the student experience for each institution.
  • The first phase of Oracle Student Cloud is designed to support the core capabilities of enrolment, payment, and assessment. Oracle Student Cloud will embed CRM-based functionality throughout the solution to promote engagement and collaboration, along with a business intelligence foundation to provide customers with actionable insight into their student operations.

The Design Program could be interpreted as combining the contributions of a community, as found in open source development, with a proprietary model that would use the standard Oracle license. If successful, this innovation could benefit both Oracle and colleges and universities.

In an October 7 blog post, Cole Clark, Global Vice President for the Education and Research industry, reflected on Oracle World. He included Stanford University as a participant. He also said a fifth partner, in Europe, would be named the following week at the Higher Education User Group meeting in Utrecht, NL.

He wrote:

We believe this [Oracle Customer Strategic Design Program] gives us a broad spectrum of the higher ed panoply from which to draw a great deal of insight and council [counsel] as we build the next generation student system in the cloud with mobile and social attributes at the core of the development initiative.

He also described the role of open source software:

Don’t get me wrong; there are definitely areas where Kuali (and other open source initiatives) fill gaps that the private sector will likely never pursue – Coeus [research administration] and the open library environment are excellent examples.  Parts of Unizen may be another.  But in the broader areas … where ample (and growing) competition exists to drive innovation up and costs down, there is no justification for investing shrinking resources in higher education on software development and support.

The description of the contribution expected of the participants – guidance and domain expertise – and their diverse needs and competencies suggest functional requirements and designs of student services that would improve the Oracle software. The reference to the growth of non-traditional programs demonstrated sensitivity to needs unmet by current student systems. If these are incorporated into the Oracle product, it would benefit their college and university customers, and perhaps be available earlier than other alternatives.

Incorporating customer feedback on products is becoming a standard industry practice for consumer goods. If broadly implemented, Clark's innovation could change the relationship between higher education and software suppliers.

There is one concern. Oracle declined to answer the question of whether the participants would be required to sign non-disclosure agreements. If they are, many of the benefits of the broad, open communications found in open source development projects may be lost.


  1. The data on the shift from restrictive to permissive licensing suggests, but does not confirm, broader participation in and use of software under permissive licenses. edX may want to consider relicensing the learning platform itself under an Apache license to attract more users of its software.
  2. Kuali Inc.'s experience introducing the Affero license demonstrates how restrictions can be perceived based, in part, on the intent of the copyright holder. The many yet-undefined terms that could be a "cause of action" enabling a copyright holder to bring legal action against a user present risks; the advice of a licensing specialist or an intellectual property attorney may be needed to understand them fully.
  3. Oracle Higher Education may benefit colleges and universities by introducing broad collaboration similar to that of open source communities. That should be encouraged. But implementation may be fragile in the sense that participants, users, and prospects are likely to be sceptical of success. Complete transparency and open communication about the work of the Strategic Design Program may make the true purpose better known and the results more widely used.

The emergence of “intellectual property”—software licenses in these cases—has created monetary incentives for copyright holders. Assessment of licensing restrictions and risks should now be incorporated into all information technology decisions.

This guest post is (c) Jim Farmer, and is licensed under the Creative Commons Attribution 4.0 International license. The graphic in Figure 4 is by Donnie Berkholz of RedMonk, and licensed under the Creative Commons Attribution ShareAlike 3.0 license.

Project Yamina: Encouraging the next generation

This is a guest post from Hunter from Project Yamina, one of the student-led projects that won a place in the Jisc Student Innovation programme. Here at OSS Watch we’re supporting the programme over the summer and advising students on their projects.

Hi, my name is Hunter and I'm responsible for a new website called "Project Yamina". This summer, I'm part of the JISC Summer of Student Innovation. JISC wanted new ideas – from students – that would show how technology can improve students' lives. Their hope is that, with the assistance of funding, the twenty successful projects will create something worthwhile by November.

Project Yamina Logo

Project Yamina started as a first year university design project. The brief was the workplace: we had to come up with something that would change the work environment for the better. Looking at research, I saw there were many careers that people viewed as more suited to men – jobs that people imagined few women worked in. For example, women are very under-represented in Free Software and Open Source communities. I thought that this was crazy, and came up with the idea of changing the workplace – and these attitudes – by finding a way to encourage more girls to enter some of these jobs.

The idea of Project Yamina is for it to be an online magazine. Something full of interesting profiles (on both women and careers), personal essays, helpful facts and tips, alongside news items. Then, I hope girls will look at the site, and discover a woman who is a (for example) scientist, coder, police officer or sniper. Perhaps she’ll think it sounds interesting, and it’s a career she would like to do too.

I'm looking for people to be featured as profiles – this means answering a handful of questions for me, or writing a short essay on any topic to do with your work and experiences. Don't worry, it doesn't have to be anything too formal – I want the website to be fun for everyone involved. Of course, you're welcome to write an essay and be a profile too! I'm extremely keen to have people from all backgrounds involved, especially those who would like to talk about how they overcame adversity, even if you'd like to stay anonymous.

To find out more, visit the website: http://projectyamina.strikingly.com or the blog: http://projectyamina.tumblr.com

I can be contacted at: http://projectyamina.strikingly.com/#become-involved

Overlooked Open Source Tools for Libraries

This guest post has been contributed by Nicole C. Engard, author of Practical Open Source Software for Libraries.

I’ve been teaching and researching open source software for what seems like ages, so sometimes I forget that other librarians don’t necessarily know about all of the great open source tools I do. There are a few must have applications for every library (and a few more specialized tools) that I’d like to share with you all.


Open Innovation at the Open Source Junction

This guest post has been contributed by Ross Gardler of OpenDirective. Ross is Vice President of Community Development at The Apache Software Foundation and a mentor at the Outercurve Foundation. Ross has been active in open development of open source software for over ten years.

Over the last couple of years OSS Watch has run a series of three events called the Open Source Junction (OSJ). These events aimed to bring together academia and the commercial sector to foster communication, collaboration and open innovation. They were kicked off by Gabriel Hanganu while I was a member of the OSS Watch team; the second two editions were held after I had left to start OpenDirective, but having understood OSS Watch's vision for them, I continued to participate.

With the explicit goal of building a network to surface new opportunities for collaboration between academic and commercial participants, OSS Watch had decided to tackle some significant challenges. Consequently, the OSS Watch team planned to work between events to help build on these opportunities. Each subsequent event sought to broaden these participation networks even further.

Since OpenDirective was created with the express intention of helping to take research outputs to market through open innovation, the OSJ objectives are well aligned with our own strategic goals. In this post, at the request of OSS Watch, I will revisit these three events and explore some of the initiatives that can, at least in part, be credited to one or more of them; in so doing I hope to demonstrate the value of the Open Source Junction events. In a subsequent post I'll make some recommendations for future events of this type.

The first Open Source Junction was held on 29-30 March 2011 and focused on "open source cross-platform mobile apps" and "how to manage the co-production of cross-platform mobile apps in an open development context". The structure of the event was a fairly comfortable one for workshop regulars: presentations were interspersed with occasional "interactive" sessions that focused on introducing participants' interests and skills. As it was a two-day event, the evening social activities were important in building a sense of participation, but in the main this event sought to build a base level of knowledge about one another as well as a basic understanding of common collaboration practices in open source software development. OSS Watch's own event report concluded that "given the enthusiastic response to the event and the firm prospect of future collaboration, the community's life force is already looking strong." Certainly a number of attendees indicated they had new opportunities to explore.

By the time the second OSJ came around on 5-6 July 2011, some of these opportunities had progressed and small sub-communities were forming within the larger OSJ network. The second event focused a little more tightly on specific areas of interest as identified by participants in the first OSJ. The organisers defined the focus as "context-aware mobile technologies". The OSS Watch report for this event says "a huge array of services [were] offered and requested… aided by the format of the event, with numerous … interactive sessions built in". Indeed, this event was more interactive than the first, achieved by having more sessions designed specifically to get people thinking beyond their self-defined boundaries of expertise.

By the end of the second event the stage had been set, the main actors had been identified and preparations for the third event began. It was at this third event the script would be written and roles cast.

Open Source Junction 3 was held almost a year after the first on 20-21 March 2012. This event, once again, built on previous events. However, this time the goal was not to narrow the focus further but instead to introduce a new, but related, topic and potential collaborators who specialised in this topic area. The third event therefore had a broad focus of “mobile technologies and the cloud”. It brought together the mobile skills identified in previous events with a broader set of skills around cloud based delivery systems. The goal was to bring a new angle to the existing community which would offer up previously undiscovered opportunities.

In order to realise the OSJ objectives this final event was significantly more interactive than earlier editions, with eight fully interactive sessions compared to two at OSJ1 and four at OSJ2. The first day consisted of a series of ice-breakers and introduction sessions, leading to a second day designed to generate concrete ideas for collaboration. These sessions were carefully managed to ensure that, where possible, the right people connected with the right people. Whilst there was plenty of opportunity for chance meetings, the OSS Watch team had pre-identified overlaps between many participants and worked hard to help individuals discover and explore these potential touch points.

OSS Watch’s event report concluded that “OSJ3 built on the solid foundations of the past events with connections that had been made at previous OSJs, such as Cloud4All and Webinos, taking concrete steps forwards. With the ever increasing focus on interactivity at this event, many new connections, such as linking MAAVIS and Cellularity, were formed. Over the coming months OSS Watch will seek to assist in the further development of these relationships in order to ensure that they continue to feel that sharing early, sharing often is both comfortable and productive. For starters, OSS Watch is working with the OMELETTE and WSO2 teams to explore whether MyCocktail should enter the Apache Software Foundation alongside the Wookie and Rave (which themselves benefited from previous OSS Watch support).”

So it would seem the three events built up a number of promises of collaboration. In researching this post I decided to revisit the opportunities identified in the final OSS Watch report. My concern was that events such as these often result in potential collaborations that are never followed up; in reality, full in-boxes quickly force out new plans despite our best intentions. However, on this occasion I'm pleased to report that many of the hottest opportunities have indeed taken measurable steps forward.

The potential for MyCocktail to enter the Apache Software Foundation alongside Wookie and Rave (both of which were represented at all three OSJs) was examined by OSS Watch and WSO2 (a participant in OSJ3). At the time of writing this has not progressed significantly: initial explorations indicated that, although the project is very interesting, there is little motivation within the MyCocktail team. The MyCocktail code is open source and, in theory, could be reused easily by others. However, in practice, long-term maintenance is important, and it would seem that in this case long-term maintenance is unlikely to be present. This is the only one of the three concrete opportunities identified in the report that has not led to a success.

Cloud4All and Webinos have engaged in a number of strategic discussions. These have resulted in the Cloud4All team exploring the use of an Android node.js port delivered by the Webinos team. This work is still at the experimental stage, but implementation was started at a recent Cloud4All hackathon event in Austria. Unfortunately Webinos team members were unable to be present at that event, although it is still hoped that active cross-pollination, as opposed to passive code-sharing, is possible. This kind of reuse, particularly if active collaboration is undertaken, will bring significant benefits to both projects. It is clear that this potential collaboration was initially identified and explored as a result of representatives of each project attending all three OSJ events; indeed, OSS Watch facilitated a "connect session" at OSJ3.

MAAVIS was also represented at all three events; however, its emerging partnership with Cellularity was a result of the final broadening of scope at OSJ3, since Cellularity was a newcomer at that time. The Cellularity approach to building a form of private cloud infrastructure using custom hardware and open source software sparked ideas for delivery of the MAAVIS product. A subsequent demonstration of Cellularity's commercial hardware offering, facilitated by OSS Watch, allowed this concept to be examined further. Also present at this meeting were representatives of the JISC-funded DataFlow project.

DataFlow was not present at any OSJ event, but synergies were subsequently identified by OSS Watch and OpenDirective staff when analysing event outputs. Both the MAAVIS and DataBank opportunities are still being explored, and OpenDirective have submitted proposals for product prototyping to the Technology Strategy Board as a result of these discussions. If successful, this proposal will see the prototyping of a new open hardware and software framework that could integrate all three projects in a final marketable product. Watch this space for updates in the coming months.

It is likely that other opportunities were identified by participants at the OSJ events; I have only explored the ones I am aware of. However, even if the successes identified above are the only direct results of this work by OSS Watch, I think it is safe to say that the Open Source Junction events were a significant success. As OSS Watch enters a new funding period in the coming month, I hope to see more events like these.

What makes a community led project work?

This guest post has been contributed by Ross Gardler of OpenDirective. Ross is Vice President of Community Development at The Apache Software Foundation and a mentor at the Outercurve Foundation. Ross has been active in open development of open source software for over ten years.

OSS Watch has been participating in the development of Apache Rave, a ‘next-generation portal engine, supporting (Open)Social Gadgets as well as W3C widgets’. As Sander observes in this blog, the Rave ecosystem is made up of a ‘diverse range of collaborators’ from both the academic and commercial sectors. These partners are sharing resources in order to build a critical piece of software at lower cost as well as to increase innovation around that product.

A few days ago I posted an evaluation of the Apache OpenOffice project’s journey through the Apache Incubator (all code entering the Apache Software Foundation (ASF) must pass through the incubator). That post looked at what makes an Apache project different from many other open source projects. This post repeats many of the same points, but rather than examining them from the point of view of OpenOffice, I will examine why the predominantly academic team behind Apache Rave chose to go to the ASF.

Continue reading

Top 10 IP and licensing tips when licensing open data and open content

This guest post has been contributed by Naomi Korn and is based on a series of 10 Minute Blog entries that Naomi has written for the JISC-funded OER IPR Support project, for which she is the Project Director. Naomi is the co-author, together with Charles Oppenheim, of Licensing Open Data: A Practical Guide.

Editor’s note: This post addresses IP issues surrounding open data and open content rather than open source software. Whilst open data and content is outside OSS Watch’s remit it is, of course, pertinent to the world of open source software and we welcome Naomi’s thoughts and expertise.

1. Identify the IPR and other legal issues which may be associated with the data and content you wish to license. For example, even if there are no underlying IPR issues in your data and content, you may be constrained by contractual terms and conditions underpinning the supply of data etc. from third parties to you. You can read more about this at http://www.jisclegal.ac.uk/Projects/TransferandUseofBibliographicRecords.aspx

2. Don’t forget to identify all the layers of rights. There may be more than one layer of copyright material, as well as other types of IP (such as Performers’ rights) and other legal issues (such as Data Protection), all of which will need identifying and managing.

3. Decide how ‘open’ you wish your data and content licence to be. Issues that may need to be addressed include:

– controlling use for non-commercial purposes only vs. allowing commercial exploitation by third parties and encouraging BCE
– requirements for attribution vs. the resulting possibility of attribution stacking
– controlling reuse and repurposing vs. sacrificing potential interoperability when blending with content and data, as well as software, licensed under more open terms.

4. Remember that the more ‘open’ the use and repurposing of your content and data, the greater the risk if you have not cleared all the rights. This is particularly pertinent for in-copyright materials for which the rights holders are either unknown or cannot be traced (so-called ‘orphan works’). In these situations, the OER IPR Support Risk Management Calculator can be used to establish an indicative risk score, which can help inform decisions relating to risk management.

5. Risk management is increasingly important in the provision of access to open content where it may not be clear who created what and who owns what rights (if any). An organisation’s approach to risk management should be supported from the bottom up, by a realistic understanding of the nature of the work and its proposed use, and from the top down, by recognition that an organisation’s understanding and acceptance of necessary risks needs to be agreed, captured in policies and, where possible, mitigated. This is an important component in the development of an appropriate corporate governance framework to support the delivery of open content and open data.

6. Consider how the licensing of your data and content relates to the licensing of other types of materials, such as open source software, and whether one broader licence, such as the Open Government Licence (which covers data, software, content etc.), might be more beneficial than multiple licences.

7. Clear permissions with any third parties (as per 1 above), making sure that permissions that are sought are either the same or more than the permissions that you then grant under your selected open licence – never less! The support video profiled on the OER IPR Support webpages can provide more insight about this issue.

8. Remember, open licences are often irrevocable, global and in perpetuity, so make sure that you are happy with what you intend to do with your data and content before you license it out. At worst, openly licensed resources can be removed from the web, but permissions granted up to that point cannot be revoked.

9. Get permissions in writing (such as emails) from any third-party rights holders. Verbal permission is not adequate.

10. Extract key information relating to third-party permissions and store it in a suitable, centrally accessible system to prevent the ‘siloing’ of core rights management information. This is particularly important where projects are funded for a specific period of time, such as JISC projects, but the permissions to use the materials may be subject to certain limitations and/or crediting requirements; it also ensures that there is a place to record rights holders’ contact details in case further contact is required.

An open letter to OSS developers: thank you!

This guest post is contributed by Donna Reish, who writes on the topic of best universities.

Dear OSS developers,

I wanted to write to say thank you for the work that you do. Thank you for the hours you put into your projects. Thank you for developing them and updating them. Thank you for keeping them free! And thank you for thinking up and creating the tools that make my job easier.

As a freelance writer, I cannot earn a living without having excellent tools: a working computer, pens and paper, internet access, image-editing abilities, and a word processor. The health of my business depends on how well these tools work for me as I complete my projects.

At the same time, I’m appalled by the cost associated with some of the options out there. Adobe InDesign and Microsoft Office Suite are both quite expensive, and I have a hard time justifying diverting my money to pay for those when my income is already squeezed as tightly as it is.

Instead, I have found that products created as openly as possible and provided for free have done wonders for my business. I’m speaking, most specifically of course, about OpenOffice.org, which, as you well know, includes a word processor, Writer, that more than allows me to accomplish all of my basic writing tasks.

I think one of the beautiful things about open source applications, like OpenOffice’s word processor, is that they integrate with other applications almost seamlessly. In the case of word processors, I can save a document I’m working on in such a way that someone with Microsoft Word can read and edit it just as easily. When I coordinate with my clients, I don’t have to jump through a lot of hoops to convert the file into a format they can read or edit. As someone who doesn’t quite know how computer programming works, I treat such compatibility like a miracle on earth!

Another open source application that I’ve found incredibly helpful for my freelancing business is GTD-free, an open source productivity application that helps me implement the ‘getting things done’ method of personal productivity management. When I freelance, I often juggle multiple projects, many of which have different deadlines and requirements, so I need a good method of keeping track of it all in one place. I used to use a Moleskine notebook, but I found that constantly writing things down was getting to be a task in and of itself. The switch to this application made my life so much easier.

Finally, I know I owe open source developers a lot, but if you have better suggestions regarding productivity apps, feel free to share your comments! I’ve been really happy so far with the tools I’m using, but I’m always looking for ways to improve.

Anyhow, these are some of the real world benefits for which the work you do is indirectly responsible! Thank you again.

Donna Reish

Editor’s note: Donna’s letter is an excellent example of someone acting in the evangelising role. The evangelist is an important role within an open source community and is discussed, along with all the other community roles, in the OSS Watch briefing note ‘Roles in open source projects’.

OSS Watch Open Source Junction, Oxford, 28–29 March 2011

This guest post was written by Michelle Pauli, who also wrote the live blog at Open Source Junction.

‘More people pooling more resources in new ways is the history of civilisation’
Howard Rheingold

Open source software features, in some form, in just about every mobile device. This has created huge opportunities for innovation, communication and collaboration, and there is wide interest in mobile apps in the developer, consumer and business world. Yet, so far, there have been few attempts to bring together commercial and academic developers working on mobile apps in order to build partnerships based on lessons learned from open source development.

Open Source Junction, with its goal of building a sustainable community of stakeholders interested in mobile technologies, did just that. The first in a series of planned events, this two-day meeting focusing on cross-platform mobile apps gathered participants from all sectors to not only discuss innovation and collaboration but also take the first steps towards making it happen.

Open innovation

The 21st-century model of an organisation is ‘default to open’, declared Roland Harwood of 100% Open, citing Wikileaks as a topical example. Setting the scene for the networking elements of the event, he explained that open innovation is less about the ‘what’ than the ‘who’. It recognises that not all the smartest people work for us, so we need to move from the conceptual position that value lies in what we hold in our heads to the understanding that value lies in who we have around us. Or, as 100% Open put it, ‘innovating with partners by sharing the risks and rewards’.

Quoting the writer JG Ballard, Roland suggested that ‘the future reveals itself through the peripheral’ and said that we all need to be better at spotting what’s coming from outside our own sector. ‘Talk to lots of people and don’t stay in your own bubbles,’ he urged.

He had some powerful examples of companies that had opened up and reaped the rewards. These ranged from Lego’s inspired tolerance of copyright infringement that has made its Mindstorms range such a success, to Local Motors, a car sales company where customers have a hand in building the cars (think ‘beer and welding evenings’). He also namechecked Mozilla and Android to demonstrate that open source is mainstream business now.

A slight note of cynicism entered the discussion when Roland was asked if, with some of the ‘customer-led innovation projects’ he described, there was an element of companies trying to get customers to do their marketing work for them. ‘It’s a fine line,’ he admitted. ‘But it is also possible to have a more two-way relationship with customers so that it is not just a one-way street based on selling.’

In any case, with open innovation, coming up with ideas is rarely the problem. The hard work lies in making them happen and the challenge is to not only recognise a good idea (which is crucial at the start of the process) but also to recognise the effort involved in taking it forward.

Culture clash?

One of the reasons why implementing a good idea can be hard work comes down to clashes of cultures. There can be inertia and distrust between innovators and corporate bodies and collaboration can be perceived as risky. Roland described the ‘airlock solution’ that his organisation has pioneered to reassure both parties that ideas can be discussed in a confidential and ‘safe’ space.

Gabriel Hanganu, community development manager at OSS Watch, brought the issue home to the particular audience at Open Source Junction by focusing in on academic/business partnerships. He cited some fascinating surveys, including one conducted among UK academics by the Advanced Institute of Management Research, which found that academics are five times more likely to be entrepreneurial than the general public.

Another, a 2010 survey by UK Innovation Research Centre, found that most academics engage with industry to further their research. They are also interested in the impact of their research – its practical applications. Few academics engage with industry for purely financial gain and, increasingly, they are looking to build research networks.

On the industry side, there is a general distrust of academic business ability: it is felt that academics cannot, and do not want to, conduct outsourced research delivered in short timeframes. But, said Gabriel, industry needs to accept universities as equal partners, valued for their strengths.

For Gabriel, the key is practice-led transformation: it is not enough just to change perception of each other’s sector or have policies to work towards a common goal – you need open development to create the change from within.

An example of this kind of creative partnership in action came from the University of Oxford’s John Lyle, with his presentation on Webinos. This is a European Union project to produce a cross-device runtime environment for web applications. The idea is that fragmentation increases when you move from mobile to TV, laptops, navigation devices, etc. – at the moment, you can’t play a game on your mobile, walk into your house and seamlessly transfer your playing experience to your TV, for example. Webinos aims to resolve this by delivering an open web platform to allow apps to run across mobile, home media, PC and in-car devices.

Some hard questions were asked about how similar Webinos is to other projects around open web apps, and about the feasibility of trying to generalise a user interface, but the really exciting thing about Webinos is its success in bringing together a wide, pan-European, cross-sector consortium. It consists of 22 founding members from nine countries, and the industrial partners include Samsung, Sony Ericsson, Deutsche Telekom and BMW.

As well as Webinos’s founding partnership, John said that the project is committed to creating ‘a worldwide open source community driving and using the results’.

How will that be achieved? Although 15% of Webinos’s funding has been earmarked for community-building, John was warned by Gabriel that ‘in our experience people talk a lot about building community but not a lot happens until the end, when funding has run out and it’s all unsustainable. At OSS Watch we advise people to think about sustainability right from the start.’


So, given that the aim of Open Source Junction is to build a community, what does that mean and how can it be done?

Ross Gardler, manager of OSS Watch and vice-president of community development at the Apache Software Foundation, tackled the topic head on and pinpointed the importance of a governance model: it is the structure that underpins how decisions are made and by whom, how conflicts are resolved, and how the project sustains itself.

There are two extremes of open source governance: benevolent dictatorship and meritocracy, with the main difference showing up in how conflicts are resolved. Benevolent dictatorship requires ‘genius’, including very strong interpersonal skills; meritocracies do not have that problem, but they can stagnate if not managed well.

But, whichever you go for, said Ross, ‘it is extremely hard to build a community. So get on with building it and stop agonising about it!’

According to Stephen Walli, technical director of the Outercurve Foundation, the ‘campfire rule’ is that we’ve understood communities ‘since you had a campfire and I wanted to sit beside it’ and so open source communities are nothing new. But, again, the governance system needs to be resolved early on.

It is also crucial to make it as simple as possible for people to get involved: ‘The magic can happen on day one but you have to tell people what you want and how they can do it – you have to make it easy for them,’ he said.

Businesses often look to foundations as IP packagers and liability firewalls so they can grow their community more easily. The nine biggest open source projects in the world are based in foundations, Stephen added.

Sander van der Waal of OSS Watch offered some guidance on easing the open development process and advised that there are two essentials for collaboration: information and communication. A good issue-tracking system is crucial to both, along with a functioning mailing list.

The positive impact that successful community-building can offer was amply demonstrated by Scott Wilson from CETIS at the University of Bolton. He described how the Wookie widget project started out as a tiny deliverable worked on by a small number of people from one organisation funded from one source for a fixed time. Thanks to its entry into the Apache Incubator, it is now a viable, sustainable piece of open source software and the result is better software than they could have created alone, more interesting research opportunities, far greater impact and a wealth of new partnerships. It has even made money, and did so quickly.

But what can also damage a community beyond repair? Filthy lucre, said Ross. ‘Money ruins everything! Do not have money inside your community! It does not have any place there. It has to be an even playing field. If someone can buy influence then your community is broken,’ he emphasised.

Business sense

While money may cause problems within the governance of a community, it’s also the reason that open source communities need a business model to be sustainable.

Nick Allott, founder of NquiringMinds, raised some eyebrows in the audience with his claim that ‘code is a liability not an asset’, because, as Ross concurred, maintaining software costs money. ‘You have to account for that even if it is open source,’ he said. ‘Someone has to fix the bugs and get the servers back up and all sorts of things and that costs money. You have to generate some money and so someone somewhere has to have a business model. It might be to make money or it might be to reduce costs. If you do not do that then you will fail.’

What kinds of business models are out there? Quite a few, it seems. Potential business models include advertising, dual licensing and packaging for hardware and services (such as warranties, support or customisations). In the mobile app space, people are making money from app sales, upgrades and in-app sales, advertising and server-side revenue. However, the mobile app market is too young for anyone to really know the future – we don’t know what will be commoditised and what the healthy revenue streams will be in five years.

Nick took a look at some of the murkier methods used by bigger business in the open source space, based on growing the ecosystem, controlling the ecosystem and devaluing competitors’ assets. ‘Open source is not always nice and friendly,’ he warned. ‘There are ways to make revenue from open source, but the big players play a different game – to reduce costs and take out competition. Open source can have profound ecosystem effects: you can kill business overnight.’

Of course, there can also be partnerships that are not based on profit. Iris Lapinski of the educational charity Apps for Good offered an inspiring take on collaborative mobile app development with Transit. It’s a Bengali translation app that came out of a course run in Tower Hamlets with a group of girls who realised that there was a problem with communication between their English-speaking teachers and Bengali-speaking parents. Apps for Good brings in experts, from business executives to designers and developers, to work with the young mobile entrepreneurs on a voluntary basis.

Native v web…

Transit will be a native app, unlike most of the apps featured in case studies during the event. The native app v web app dilemma was a thread running through many discussions.

According to Tim Fernando from Oxford University Computing Services, who spoke about the very neat Mobile Oxford app and its associated Molly open source project, ‘If you are working in education, native apps are quite a dangerous route to go down because of renewing code each year, app store commitments and so on.’

App store terms and conditions are certainly an issue for open source developers. However, when asked ‘Are app stores evil?’, Rowan Wilson of OSS Watch took a measured line.

‘They are not evil by default – arguably Maemo repositories were the first app stores,’ he said. ‘The concept itself is not necessarily undermining to open source. Where they are not the sole channel of distribution the problems are significantly reduced. But they do introduce a new form of fragmentation and it can mean that you do not necessarily look outside the one marketplace you see when you get your device.’

Despite the appeal for developers of the web app over the native app, it was also recognised that apps for the iPhone appeal as a ‘shiny new thing’ to vice-chancellors.

Mike Jones from the University of Bristol, whose MyMobileBristol web app provides time- and location-sensitive information for students (such as the nearest available computer terminal and the next bus to the halls of residence), commented that people ask ‘is it on the app store?’ ‘It doesn’t need to be, but they think that to access something it has to be on the app store. There are also people in the university who worry that the university does not have a brand presence on the app store.’

Where next?

One of the most important elements of the event was the ‘speed dating’ session, in which participants introduced themselves to each other and sought synergies between skills, needs and projects. It was the first step in developing the nascent Open Source Junction community, and a number of potential partnerships were identified immediately.

In the closing session, OSS Watch’s Gabriel Hanganu identified three key areas for the future of Open Source Junction – open development, sustainability and marketing – and said that ‘depending on how these are addressed, the community will live or die’. Given the enthusiastic response to the event and the firm prospect of future collaboration, the community’s life force is already looking strong.

If you enjoyed reading this report, you may also like to see Michelle’s mini-interviews with some of the attendees.

Further reading


Open Source Junction blog (including live blog, slides and photos)
Programme and speaker bios

100% Open
Apps for Good
Mobile Oxford
Outercurve Foundation

Further information from OSS Watch:

App stores and openness
Free and open source software in mobile devices
Open innovation in software
How to build an open source community
Roles in open source projects
Wookie: a case study in sustainability

Fixing the Web with the help of the open source community

This guest post was written by Dr Gail Bradbrook, who works for Citizens Online, a charity that promotes digital inclusion.

Fix the Web is, in the jargon of the day, a crowd-sourcing project with the aim of changing the face of web accessibility. It is led by Citizens Online, the national charity I work for. A couple of years ago, we did some work with the EC on their strategy for digital inclusion (the use of technology by disadvantaged people).

At a European level, the progress on ensuring that all disabled people have a good internet experience was shockingly bad. EU countries signed up to the Riga target in 2006, which said that by 2010 all public sector websites should be accessible. I don’t think they have dared to measure it this year! In 2007 it had improved by only 2%, so they are moving their target to 2015.

We are probably at about 40% in the UK, according to Socitm research, but of course that is just the public sector. The private sector is not as ‘good’, and there is clearly still a long way to go. Using the latest (2008) WCAG 2.0 standards (the basis of the recently launched BS 8878) would seriously diminish (to nearly zero!) the number of accessible sites. What struck me was that the attempts to rectify this situation were very top-down: useful, but nonetheless limited attempts to draw up standards and promote them, build business cases, etc.

I asked myself where the voice of the average disabled person was in all this, and what role social media and ‘good geekery’ could play (I’m a self-confessed ‘Geek Groupie’ at Stroud’s Barcamp!). Fix the Web was born out of those considerations and discussions with stakeholders. We got some funding from the Nominet Trust to take it forwards. I was always certain that the open source community would be central to the success of the project (though I had to stop referring to you good folks as ‘hacktivists’ because people thought I was proposing something illegal!).

The simple idea is that we want to make reporting inaccessible websites as easy as possible for disabled people. They can highlight any problems they are having in less than 60 seconds, then quickly move on, without the burden of finding the right person to contact, and then constructing a considered email or filling out a form (which may finish with an inaccessible CAPTCHA!).

People can choose from a few options when reporting a problem: using a form on the site (http://www.fixtheweb.net), via Twitter (#fixtheweb #fail, the URL and the problem) or by emailing post@fixtheweb.net. However, my ‘dream’ was a clickable toolbar button that would capture the website details and provide the easiest option. Steve Lee from Full Measure brokered an introduction – as part of his OSS Watch support activities provided to ATBar – to the folks at Southampton University who are developing the ATbar (formerly funded by TechDis). The development team of Sebastian Skuse, Dr Mike Wald and E A Draffan from the Learning Societies Lab at Southampton have collaborated with Fix the Web to create a special Fix the Web button on the toolbar, not only making the reporting process as fast as possible but also opening up the project to the 2 million current users of the toolbar.
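To make the toolbar idea concrete: the heart of such a button is simply capturing the current page’s address and passing it, together with a short description of the problem, to a reporting form. The sketch below shows that step in JavaScript; the ‘/report’ path and the query parameter names are hypothetical illustrations, not the actual ATbar or fixtheweb.net interface.

```javascript
// Build a pre-filled report URL from the page being reported.
// The "/report" path and parameter names are assumptions for
// illustration; the real Fix the Web form may differ.
function buildReportUrl(pageUrl, problem) {
  const base = "http://www.fixtheweb.net/report";
  return (
    base +
    "?url=" + encodeURIComponent(pageUrl) +
    "&problem=" + encodeURIComponent(problem)
  );
}

// In a toolbar button or bookmarklet, pageUrl would come from
// window.location.href and the user would type the problem text.
```

A button that opens a URL built this way spares the reporter from hunting for the right contact address or filling out a long form, which is exactly the under-60-seconds experience described above.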

The idea of the toolbar has also been supported by JISC-funded OSS Watch, which provides advice on the use, development and licensing of open source software. The team aims to build a community around the project and take it forward through its recently awarded JISC REALISE project. Over the last five months, there have been over 1.8 million ‘toolbar hits’ on the ATBar.

The underlying ethos of Fix the Web is about raising awareness across the whole spectrum of understanding of this issue: those who are clueless will get to hear about it, those who forget to consider it will find it further forward in their thinking, those who know something will learn more, and so on. And it is about empathising with people and the barriers they face, whether in knowledge, power or current budgets, and working with them, rather than naming and shaming.

It would be great to get more open source folks involved in the project. You don’t need to be an expert in web accessibility to join in, but you may improve your knowledge by doing so. Volunteering takes place online, in your own time. This is very much about a lot of people doing a little and over time collectively helping to Fix the Web we all love.