Category Archives: Mobile technology

Open Source Phones at MWC

Mobile World Congress is running this week in Barcelona. While it’s predictable that we’ve seen lots of Android phones, including the big unveiling of the Galaxy S5 from Samsung, I’ve found it interesting to see the coverage of the other devices powered by open source technologies.

Mozilla announced their plans for a smartphone that could retail for as little as $25. It’s based on a new system-on-chip platform that integrates a 1GHz processor, 1GB of RAM and 2GB of flash memory, and will of course be running the open source Firefox OS.  It’s very much an entry level smartphone, but the $25 price point gives real weight to Mozilla’s ambition to target the “next billion” web users in developing countries.

Ubuntu Touch is finally seeing the light of day on two phones, one from Chinese manufacturer Meizu and one from Spanish manufacturer Bq. Both phones are currently sold running Android, but will ship with Ubuntu later this year. The phones’ internals have high-end performance in mind, with the Meizu sporting an 8-core processor and 2GB of RAM, clearly chosen to deliver Ubuntu’s fabled “convergence story”.

Rumours have abounded this year that Nokia have been planning to release an Android smartphone, and at MWC they confirmed the rumours were true, sort of. “Nokia X” will be a fork of Android with its own app store (as well as third-party ones) and a custom interface that borrows elements from Nokia’s Asha platform and Windows Phone. Questions were raised at the rumour mill over whether Microsoft’s takeover of Nokia’s smartphone business would prevent an Android-based Nokia being possible. However, Microsoft’s vice-president for operating systems Joe Belfiore said “Whatever they do, we’re very supportive of them,” while Nokia’s Stephen Elop maintains that the Windows-based Lumia range is still their primary smartphone product.

A slightly more left-field offering comes in the shape of Samsung’s Gear 2 “smartwatch” running Tizen, the apparently-not-dead-after-all successor to Maemo, Meego, LiMo, and all those other Linux-based mobile operating systems that never quite made it. The device is designed to link up to the Samsung Galaxy range of Android phones, but with the dropping of “Galaxy” from the Gear’s branding, perhaps we’ll be seeing a new brand of Tizen-powered smartphones from Samsung in the future.

GotoFail, Open Source and Edward Snowden

On Friday Apple released a patch for a flaw in one of their core security libraries. The library is used both in Apple’s mobile operating system iOS, and their desktop operating system OSX. As of today, the desktop version has yet to be patched. This flaw, and its aftermath, are interesting for a number of reasons.

Firstly, it’s very serious. The bug means that insecure network connections are falsely identified as secure by the operating system. This means that the flaw has an impact across numerous programs; anything that relies on the operating system to negotiate a secure connection could potentially be affected. This makes a whole range of services like web and mail vulnerable to so-called ‘man-in-the-middle’ attacks where a disreputable network host intercepts your network traffic, and potentially thereby gains access to your personal information.

Secondly, the flaw was dumb. The code in question includes an unnecessarily duplicated ‘goto’, highlighted here:

It looks like a cut-and-paste error, as the rogue ‘goto’ is indented as though it is conditional when – unlike the one above it – it is not. There are many reasons a bug like this ought not to get through quality assurance. It results in unreachable code, which the compiler would normally warn about. It would have been obvious if the code had been run through a tool that checks coding style, another common best practice precaution. Apple have received a huge amount of criticism for both the severity and the ‘simplicity’ of this bug.

Thirdly, and this is where we take a turn into the world of free and open source software, the code in question is part of Apple’s open source release programme. That is why I can post an image of the source code up there, and why critics of Apple have been able to see exactly how dumb this bug is. So one effect of Apple making the code open source has been that – arguably – it has increased the anger and ridicule to which they have been exposed. Without the source being available, we would have a far less clear idea of how dumb a mistake this was. Alright, one might argue, open source release makes your mistakes clear, but it also lets anyone fix them. That is a good trade-off, you might say. Unfortunately, in this case, it is not that simple. Despite being open source, the security framework in question is not provided by Apple in a state which makes it easy to modify and rebuild. Third party hackers have found it easier to fix the OSX bug by patching the faulty binary – normally a much more difficult route – rather than using Apple’s open source code to compile a fixed binary.

It is often argued that one key benefit of open source is that it permits code review by anyone. In this case, though, despite being a key security implementation and being available to review for over a year, this bug was not seemingly identified via source review. For me, this once again underlines that – while universal code review is a notional benefit of open source release – in practice it is universal ability to fix bugs once they’re found that is the strongest argument for source availability strengthening security. In this case Apple facilitated the former goal but made the latter problematic, and thereby in my opinion seriously reduced the security benefit open source might have brought.

Finally, it is interesting to note that a large number of commentators have asked whether this bug might have been deliberate. In the atmosphere of caution over security brought about by Edward Snowden’s revelations, these questions naturally arise. Did Apple deliberately break their own security at the request of the authorities? Obviously we cannot know. However it is interesting to note the relation between that possibility and the idea that open source is a weapon against deliberate implantation of flaws in software.

Bruce Schneier, the security analyst brought in by The Guardian to comment on Snowden’s original documents, noted in his commentary that the use of free and open source software was a means of combating national security agencies and their nasty habit of implanting and exploiting software flaws. After all if you can study the source you can see the backdoors, right? Leaving aside the issue of compromised compiler binaries, which might poison your binaries even when the source is ‘clean’, the GotoFail incident raises another question about the efficacy of open source as a weapon against government snooping. Whether deliberate or not, this flaw has been available for review for over a year.

The internet is throbbing with the schadenfreude of programmers and others attacking Apple over their dumbness. Yet isn’t another lesson of this debacle that we cannot rely on open source release on its own to act as a guarantee that our security critical code is neither compromised nor just plain bad?

Ubuntu Edge – Crowdfunding the Formula 1 of phones

Today Canonical announced the next step in their project to bring Ubuntu to the phone.  A new potentially record-breaking IndieGoGo crowd-funding campaign has been launched for the “Ubuntu Edge” – a device which seeks to bring together all the latest developments in mobile technology into a limited-run consumer device… that will run Ubuntu, of course.

In the campaign’s promotional video, Canonical’s founder Mark Shuttleworth reveals that consumer mobile devices don’t benefit from the newest existing technologies, as phone companies won’t use them until they’re already being produced at scale. Shuttleworth goes on to describe the Ubuntu Edge phone with the latest and greatest technology – a sapphire crystal screen (which won’t scratch unless you’ve got diamonds in your pocket), a silicon-anode battery, high-spec CPU, RAM and storage, plus a camera and screen that focus on the features that apply to common usage, rather than just high pixel density.

The phone will be running the latest version of the open source Ubuntu OS for phones (commonly called “Ubuntu Touch”), which is seeking to provide full device convergence – essentially allowing your phone to be your PC when connected to peripherals – and with the specs on offer, that certainly seems realistic.

The phone will also be able to run Android, the current market leader and competing open source offering. In fact, the IndieGoGo page touts the Ubuntu Edge as an “Open Device”, designed to allow users to tinker with it and run their own software. However this is probably a move to placate those who have fears about switching to a platform with a relatively young ecosystem – there’s a wealth of developers producing apps for Android, but not so many for Ubuntu Touch.

There’s a feeling that Canonical are going for broke with this campaign. The goal is set at a whopping $32,000,000 (yes, that’s thirty-two million dollars), which is far more than even the most successful crowd-funded projects of the past have raised, and the price for actually getting your hands on a device is $830 (£540), or $600 (£390) if you back during the first 24 hours. If the campaign doesn’t reach its goal, Canonical will focus solely on “consumer phones” using current technologies, where it will have a battle with the established players.

This could be a huge breakthrough in what companies supported by a community of users can achieve through crowd-funding campaigns, or it could be an embarrassing failure for Canonical and, by extension, the Ubuntu community. Ubuntu Touch has been a rocky road for community relations, as the company’s goals have been seen by some to drive decisions at the expense of community consultation. The success or failure of Ubuntu Touch could be a tipping point either way in this relationship.

Comment on this story by the open source community has only just started, and there’s already plenty being said for and against the campaign.  I’ll look back in a month and we can see how it went!

Ring ring ring ring ring ring ring, Ubuntu Phone

Yesterday evening Canonical posted a much-hyped “virtual keynote” (video of Mark Shuttleworth) announcing Ubuntu for phones. This heralds a new open source phone OS entering the scene, and I’d like to take a look at some of the points from the 20 minute video that I found interesting, as well as how it’s been received by the wider technology community.

The first thing that interests me about Ubuntu for phones is that it’s built on an Android base. Rather than trying to reinvent the wheel, Canonical built on the work that the Android Open Source Project has already done to support mobile hardware on Linux. For Canonical, this means a lot less development work to worry about. For users and manufacturers, it means a phone that can run a vanilla open source Android ROM (such as Google’s Nexus phones) can also run Ubuntu.

Despite this, it will not run Android apps, or as TechCrunch pejoratively put it: “Android apps can’t even run on the Ubuntu Phone OS”. This makes sense to me, however.

Ubuntu’s offering to developers is twofold: firstly, first-class citizenship for web apps using the Unity Web API that landed in Ubuntu 12.10 – the same code used to integrate with the Unity desktop will allow apps to integrate with the phone’s interface (it’s not clear if this is also called “Unity”). Secondly, native apps written in QML – a great framework for building interfaces with the Qt toolkit (again, Canonical aren’t reinventing the wheel, which is good). As such, the business logic of apps will be written in C++ (with OpenGL supported for graphical flair), with JavaScript providing the interface “glue”.

Adding a Java/Dalvik VM to support Android apps would be a big drain on resources, and as Mark Shuttleworth said recently, would cause more problems than it solves for the Ubuntu ecosystem.

In the video, Mark also mentions your Ubuntu Phone being your PC. Whether this means a blown-up version of the touch UI when attached to a screen and keyboard, or a full-blown desktop environment a la Ubuntu for Android, isn’t clear. It’s quite a compelling feature though, especially for businesses, who would have fewer devices to manage and be able to do so through Canonical’s Landscape product.

Towards the end of the video, Mark mentions that “a few lucky members of community have had early private access” to some parts of the system. Presumably this is the first of the “Skunkworks” projects that Mark announced recently on his blog, where Canonical are bringing in contributors from the community to projects that would otherwise be completely behind closed doors. Hopefully we’ll hear more about the work these community members have been doing in due course.

I could go on about this, but I’ll leave it there for now. I’m sure we’ll be discussing Ubuntu for Phones at length on the next series of the Ubuntu UK Podcast. If you’ve got any thoughts you’d like to share, leave them in the comments below!

Open WebOS: It’s alive!

After being axed by HP last year, WebOS has re-emerged as an open source OS ready to be ported to a range of devices.

Being on the web end of the web-vs-native debate going on in the mobile world, I was always a bit of a fan of WebOS, even though it always seemed a bit doomed when positioned against the might of the Android and iOS platforms. However, rooting for the underdog aside, is there any reason to be optimistic about its chances to survive in its new open source form?

The closest comparison I can make to the WebOS story would be that of Symbian, which moved to the Symbian Foundation in 2009 with the intention of going open source.  However, by 2011, Nokia had reversed its course and started to wind down the foundation. So at least WebOS has got further than Symbian with what appears to be a fully open 1.0 release of the OS.

In general though, the strategy of taking a product and moving it to become OSS doesn’t have a good track record, not least because it’s also synonymous with “zombie projects” – for example, Jaiku was acquired by Google in 2007, then in 2009 it was announced the software was going open source while at the same time the whole thing was effectively being abandoned.

Looking back over the history of WebOS in the tech press is a bit depressing: “The Lonesome Death Of WebOS”, “can WebOS be revived?”, “HP Kills Off webOS”. It’s the Operating System That Refuses To Die!

Look out! It’s a zombie project!

When Steve Winston in the video below says “we will continue to develop Open WebOS” I can sense the “HP Finally Drives Stake Through Heart of WebOS” article being readied for the following month.

(Embedded YouTube video)

However there are some encouraging signs. The Enyojs project has developed from being the framework for WebOS applications into a successful generic framework for building web apps for any platform. Perhaps other aspects of WebOS are similarly viable, even if the OS itself never catches on.

There is also the idea presented in the video above of targeting the market for kiosks and other types of devices outside the premium tablet and smartphone world. It seems kind of logical, but at the same time also a bit desperate, especially given the success of Android.

Perhaps the lesson here is that, if you want to create a successful platform, go with open source as a starting strategy, not an emergency back-up plan.

Open Source Junction 3: mobile and cloud, Oxford 20-21 March 2012

Mobile technologies have become an integral part of our lives. Research indicates that by 2015 80% of people accessing the Internet will be doing so from mobile devices. Mobile applications and services are changing the way we engage with the web, and to a certain extent with each other.

At the same time, cloud technologies deliver better and better IT services. From email and content storage to complex computing and development platforms, users can access clouds via simple browsers, thus eliminating the need for end-user applications and high-power computers.

In UK Higher Education, cloud solutions are an integral part of a JISC programme aimed at helping universities and colleges deliver better efficiency and value for money through the development of shared services. As pointed out by Rachel Bruce, JISC’s Innovation Director for digital infrastructure, cloud solutions are increasingly attractive to HE institutions. They allow universities to reduce environmental and financial costs, share the load of maintaining a physical infrastructure, be flexible and operate on a pay-as-you-go basis, access data and applications from any location, and make scientific experiments easier to reproduce.

Star pupils

Google has been in the news repeatedly over the last six months for closing down some of its many, many side projects. In general these are being mothballed in perpetuity, but in some cases, there is a transition plan. This is the case for Google Sky Map, and for our community it’s an interesting variation on the more traditional ‘open source it and hope it takes off’ approach that industry players like Nokia have tried in the past, and HP seem about to try again.

FOSS Focus

I was thoroughly drawn in by the Amazon ‘Black Friday’ event last week, buying both a phone and a camera, against my better judgement and to the disgust of my bank manager. While trying to suppress my buyer’s remorse by searching the internet for all the marvellous capabilities of my soon-to-arrive devices, I noticed that the camera, a Canon Powershot SX220 HS, was one of the models capable of running a piece of open source software called CHDK, released under the GNU GPLv2. This program leverages the fact that the camera will execute anything that looks even remotely like a firmware update located on its SD card, without requiring a digital signature, allowing an adjunct to the device’s firmware to be executed every time it starts up. You can even place the program on the SD card and select whether it is booted or not by changing the ‘write protect’ switch on the card.

Once the software is booted the user has access to an almost ridiculously long list of tweaks and features, including saving pictures as ‘RAW’ (meaning that the data from the camera’s sensor is saved to the card unaltered, rather than being crushed down into a smaller JPEG file), greater control over exposure times and the ability to construct more complicated ‘bracketing’, meaning that a series of shots can be taken with differing focal lengths or levels of white balance, allowing creation of HDR images and focus-stacked images. Even more geektastic is the ability to script the functionality of the camera using uBASIC or Lua, allowing a user to build functionality like time lapse photography and the taking of pictures only when motion is detected.

One question that remained with me, even as I contemplated spending more unwise pounds, was what Canon’s attitude was to this project. After all, some of the functionality that CHDK contains can be obtained from Canon in its more expensive models. They could close the technical loophole that allows the additional software to be run fairly easily, so one must assume that they do not see the existence of open source expansions of their equipment as a threat to their business model. Might they even see it as a selling point? Certainly it seems that running CHDK is likely to void your warranty, so perhaps the existence of a group of customers who opt out of expensive warranty provision is seen as a bonus.

Discovering this went some way towards alleviating the guilt of my spending. After all, I had got a large amount of functionality at essentially half price. But… if you want to run long scripts then you really need the attachment that lets the camera run off mains power. That’s not such a bargain. I could also probably do with a better tripod… Oh dear…

So. Farewell Then Flash Player Mobile…

(Apologies to E J Thribb)

Adobe’s announcement that it will be dropping Flash Player for mobile devices from its future plans has been widely interpreted as a victory for Apple, and in particular their late Chief Executive Steve Jobs. Perhaps because of his essay Thoughts on Flash, the absence of Flash technology from Apple mobile devices has seemed to be a personal decision of Jobs. In that essay Jobs made a number of points about exactly why he saw Flash as a detrimental technology, certainly for Apple mobile devices, and to some extent for Apple computers in general (“We also know first hand that Flash is the number one reason Macs crash.”) Competing phone and tablet makers pointed to their devices’ ability to run Flash, although not always that well. Apple’s refusal to engage with Flash on mobile led Adobe to declare that Apple mobile devices could not access the ‘full web’, particularly the video content that at that time was most frequently packaged as a Flash object.

In fact, Flash became so synonymous with web video packaging that it is easy to forget that it started life (as FutureSplash Animator back in the dark days of dial-up) as primarily a vector graphics animation package. It provided the possibility of small files containing large images and enhanced interactivity (this was before Javascript was well supported or unified). Gradually Javascript and broadband made these selling points moot, and Flash then made the leap into bundling video codecs and providing a unified way for browsers to display video. Now, partly as a result of Apple’s stand against it, native video decoding in the browser combined with HTML 5’s <video> tag have once again made Flash less relevant. Now, it could be argued, the chief use of Flash is in rapid prototyping and dissemination of simple web games, many of which go on to spawn native titles on the bigger gaming platforms. Even that function is likely to be somewhat supplanted by HTML 5 apps in the future.

So has Flash as a whole been doomed by Apple’s tactics? It’s worth looking at that Steve Jobs essay again to find out. Among the many legitimate criticisms he levels (instability, insecurity, poor efficiency) Jobs also attacked the fact that Flash provided a path to developing mobile apps (both in the browser and compiled to native code) that was out of the control of the platform owner:

We know from painful experience that letting a third party layer of software come between the platform and the developer ultimately results in sub-standard apps and hinders the enhancement and progress of the platform. If developers grow dependent on third party development libraries and tools, they can only take advantage of platform enhancements if and when the third party chooses to adopt the new features. We cannot be at the mercy of a third party deciding if and when they will make our enhancements available to our developers.

This becomes even worse if the third party is supplying a cross platform development tool. The third party may not adopt enhancements from one platform unless they are available on all of their supported platforms. Hence developers only have access to the lowest common denominator set of features. Again, we cannot accept an outcome where developers are blocked from using our innovations and enhancements because they are not available on our competitor’s platforms.

Flash is a cross platform development tool. It is not Adobe’s goal to help developers write the best iPhone, iPod and iPad apps. It is their goal to help developers write cross platform apps. And Adobe has been painfully slow to adopt enhancements to Apple’s platforms. For example, although Mac OS X has been shipping for almost 10 years now, Adobe just adopted it fully (Cocoa) two weeks ago when they shipped CS5. Adobe was the last major third party developer to fully adopt Mac OS X.

Our motivation is simple – we want to provide the most advanced and innovative platform to our developers, and we want them to stand directly on the shoulders of this platform and create the best apps the world has ever seen. We want to continually enhance the platform so developers can create even more amazing, powerful, fun and useful applications. Everyone wins – we sell more devices because we have the best apps, developers reach a wider and wider audience and customer base, and users are continually delighted by the best and broadest selection of apps on any platform.

In other words: Apple did not want development tools for their mobile devices to exist which were not under their control. Jobs cited this as ‘the most important reason’ that he rejected Adobe’s technology. Yet only months after that essay Apple backtracked on the restriction and permitted Flash tools to compile Adobe Flash Actionscript applications for submission to the App Store. Indeed, if you don’t own a Mac, using Flash to generate iOS apps is one of the very few alternatives available to you. In fact, Flash Builder, the tool which does the actual compiling of Actionscript programs into iOS applications, is really the open source development environment Eclipse distributed with a proprietary Adobe plugin. Potentially the concession won by Adobe could lead to entirely open source tool chains for the development of iOS apps.

So while Jobs’ tauntings over the possibility of a robust and useful Flash Player Mobile (“We have routinely asked Adobe to show us Flash performing well on a mobile device, any mobile device, for a few years now. We have never seen it”) have proved prescient, in fact Adobe won a crucial concession from Apple almost a year ago. That concession widened the potential role of open source tools in developing iOS apps (it’s always been possible for Android). Adobe have also announced that they intend to ‘aggressively contribute’ to HTML5, perhaps indicating that they will be extending their development tools to allow Actionscript programs to be emitted as HTML5 web apps.

What really emerges from the struggles over mobile Flash is a strong sense of the entropy of the mobile device space at the moment. Rhetoric is deployed and attitudes struck, only for the originators to back away in a matter of months. App Store policies change monthly and with them the possibility of using open source code on the devices they serve. Competitive head-butting between closed source behemoths (like Adobe and Apple) can result in the opening up of data standards (as the pressure from iOS users has resulted in more HTML5 compliant video on the web – although let’s not get into the patents around those). For open source authors and proponents, the mobile space remains a changing and challenging environment. What the skirmishes around Flash demonstrate, though, is that the struggle of closed source vendors for competitive advantage can provide opportunities for free and open source code.

Build a better mousetrap

“Build a better mousetrap, and the world will beat a path to your door” as Wikipedia informs me Ralph Waldo Emerson never quite said. The point – that real innovation sells itself – remains true today. Indeed it could be argued that the average consumer is more engaged with the heartbeat of technological innovation now than ever before, with software releases making headlines among the more traditional stories of war and celebrity.

Emerson’s non-quote does raise a question, however. How do we identify technology which is better? With mousetraps there are some fairly obvious metrics relating to mouse mortality and cheese preservation, but not all inventions are as easy to benchmark. The last few weeks have seen announcements of upgrades to the world’s two most commonly used smartphone operating systems: Apple’s iOS (version 5) and Google’s Android (version 4). Each brings a raft of new features, although in both cases it has to be said that these new features are no longer as core to the operation of the device as innovations in earlier versions were. Voice-operated search and facial recognition are nice, but hardly essential elements of a mobile computer, at least for now. Perhaps lost in the combative comparisons deployed by proponents of each OS is the fact that a genuinely key ability – web browsing – is implemented on both platforms using essentially the same code: the WebKit open source project. While newer functionality is added by Google and Apple to differentiate the competing products, it pays them to cooperate on key, unavoidable elements of their offerings.

Given this, it’s fair to repeat the question: how do we identify real innovation? The newer differentiating features appear to be the cutting edge of endeavour, but their very newness is a demonstration that – up to now at least – they have not been essential elements of the technology in question. Some of them will die away despite their novelty, having never truly improved the invention that they embellish. Like a cheese grater on your mousetrap, it’s possibly a nice idea and undoubtedly novel, but how useful is it really? Only time will tell, and in the meantime better springs, and better browsers, are being developed.

So perhaps the question needs to be: “looking back at innovations that have proved to be key, how do they tend to develop?” Using the answer to this, we might be able to form some techniques for looking at our cutting-edge-but-possibly-pointless innovations and making guesses about their eventual utility. We might even be able to identify over-arching strategies for conducting and rewarding innovation…

Here we get into an argument that flared up earlier this month, when a video of Francis Gurry, the Director General of the UN’s World Intellectual Property Organization (WIPO) back in June was discovered by the internet commentating community. Gurry was speaking to sum up his views on a debate which had just taken place on ‘Accelerating Growth and Development’ in relation to invention and intellectual property. Gurry’s argument was seemingly  summed up by the headline on the BoingBoing article which drew it to the internet’s attention: “WIPO boss: the Web would have been better if it was patented and its users had to pay license fees”. Reading the article, though, even the quote that BoingBoing had pulled failed to use that emotive word ‘better’:

Intellectual property is a very flexible instrument. So, for example, had the world wide web been able to be patented, and I think that is a question in itself, perhaps the amount of investment that has gone into or would be able to go into basic science would be different. If you had found a very flexible licensing model, in which the burden for the innovation of the world wide web had been shared across the whole user community in a very fair and reasonable manner, with a modest contribution for everyone for this wonderful innovation, it would have enabled enormous investment in turn in further basic research. And that is the sort of flexibility that is built into the intellectual property system. It is not a rigid system.

Reaction to the video from proponents of open content and open source across the internet was voluble and aggravated. Gurry was accused of being ideologically indoctrinated and blinkered, tied to anachronistic models of IP registration and exploitation even in the face of the incredible growth and success of the web largely without the intervention of these models. In fact though, the most that Gurry says is that the web would have been ‘different’. Taken in the context of the statements which preceded it (and which you can hear by downloading the video), in which the value of the traditional IP systems had been questioned repeatedly, Gurry’s statements do not really support the distillation they were given, and which caused so much anger. He is trying to argue that the web could have grown within more traditional licensing structures. Whether he is right about this or not, he is not claiming here that it would have been ‘better’ under those circumstances.

The anger and confusion here are natural, though. The battle lines between proponents of the traditional and the more ‘open’ approaches to innovation (and here we should note that the buzz phrase ‘open innovation’ often itself refers to deeply traditional IP exploitation patterns) have long been drawn, and the forces on both sides are keen to tackle and destroy the arguments of their opponents wherever they see them. The web is often perceived  – with much justification – as a triumph of innovation outside the traditional IP exploitation framework. To hear someone perceived as being part of the old-guard even discussing it can seem presumptuous to some ears. Yet in reality the implied dichotomy here is simplistic. The open licensing movements themselves are underpinned by the arcane operations of traditional licensing and exploitation. While they may give these operations an innovative twist, they could not be enforced or defended without them. Conversely, Gurry’s example of why  the patent regime is beneficial fails to address the criticisms of openness proponents. He points to the publication framework implicit in the current patent system, and makes the comparison between the saxophone – which has fully documented design documents available thanks to its having been patented – and the violin – where many secrets of producing the greatest instruments have been lost through secrecy and the passage of time. This critique – while interesting – is almost wholly inappropriate as a defence of the current system in opposition to more open models. In the modern case, both models involve complete publication – the distinction lies in how benefits are reaped from exploitation and by whom.

Given the frequent failures of either side in this debate to engage with what the other is actually saying – illustrated by this sad tale –  it’s not surprising that telling which innovations are better remains hard. While ideology is important, it can often obscure our view of what actually matters to most people: how many mice are killed (or indeed captured).