03.06.07

30 Days of Freedom

Posted in Essays, Free Culture, Tools at 7:48 am by Erik

Slashdot just publicized 30 Days With Ubuntu, a review of the Ubuntu Linux distribution and its strengths and shortcomings. I found the review to be honest and accurate. It is purely technical and does not argue the ethics of free/libre software. I want to use it as an opportunity to reflect on where this movement is today, and why it matters. It’s not an in-depth analysis, but it might be interesting to some readers (and writing such things always helps me to hone my beliefs, my arguments, and my rhetoric). In this article, I will refer to free/libre software as “Free Code”, which is a bit of an experiment and which I will explain at the end.

I’ve been a Kubuntu user for a year now (and a Debian user before that for 3 1/2 years), so I have plenty of opinions of my own, but I want to focus on the main criticisms identified by the reviewer:

  1. support for recent hardware is incomplete – some things don’t work or require a lot of tweaking
  2. no commercial games, except through (crappy) emulation
  3. no good video editing code, no good PhotoShop replacement
  4. 64-bit version sucks in many ways

These are remarkably few criticisms, which gives an indication of how far Linux and Ubuntu have come. I’ll try to address each of them briefly.

Hardware support

This is going to be a tricky issue unless and until we get mainstream adoption. That’s not entirely a chicken/egg problem, because it does not concern machines which are specifically selected or built for the purpose of running Linux (which is of course how the vast majority of users get Windows on their computers in the first place). I think three things mainly have to happen to make progress:

  • We need to build a really, really sexy Linux-based PC that everyone wants to have, get some large vendor to support it, and market it widely. See [[FriendlyPC]] for some ideas on this. The OLPC effort will be a good case study on what to do and what to avoid. (I’m particularly interested in whether Squeak will prove to be a viable development platform or not.) This isn’t likely to happen within 2 years. Maybe use the Wikipedia brand? If WP regains significant credibility, a “Wikipedia PC” could become quite the item.
  • Users of particular hardware need to be able to find each other, and pool resources (money, activism & coding ability), to make hardware support happen. If a major distributor like Ubuntu integrated such a matching tool, it could make a significant difference in the speed with which new hardware is supported. This goes for general code needs as well, of course, but as the review shows, these are fairly well met already.
  • There needs to be a single certification program that is supported by all major Linux vendors & companies. I don’t want to look for “Red Hat enabled” or “Ubuntu supported”; I want to know whether recent versions of any of these distributions will work with the hardware or not. And once the certification program has gained acceptance, you can do a bait and switch and withdraw certification from non-free vendors. :-)

Of course, an increasing market dominance of a single Linux-based platform will also make things easier. My general philosophy on competition is that it should only exist where it makes eminent sense, i.e., it should grow on the basis of irreconcilable disagreements about philosophy, direction, management, or architecture, not on the simple desire of some people to make more money than others. This ties into my belief that we need to gradually change from a profit-driven society to a cooperative one. Hence, I support rallying around the most progressive Linux distribution, which appears to be Ubuntu/Debian at this point. (That said, I would prefer the for-profit which supports Ubuntu, Canonical, to be fully owned by the non-profit, which is the model chosen by the Mozilla Foundation.)

Some users believe strongly in making existing non-free drivers as easy to obtain and use as possible. I’m not completely opposed to the idea, but we need a more creative solution than this. My suggestion is what I call “/usr/bin/indulgence”, something like Automatix, but more deeply embedded into the OS, which would make installing any non-free code (from the lowest to the highest level) a trivial procedure, but which would also ask the user very kindly to support an equivalent Free Code implementation in any of the three ways mentioned earlier: money, activism, or code.
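To make the “/usr/bin/indulgence” idea concrete, here is a minimal sketch of what such a tool’s core logic might look like. This is purely illustrative: the package names, the mapping, and the appeal text are all my own assumptions, not part of any real installer.

```python
# Hypothetical sketch of the "/usr/bin/indulgence" idea: install any
# non-free package trivially, but append a friendly appeal to support
# the Free Code equivalent with money, activism, or code.
# All package names below are illustrative assumptions.

# Map of non-free packages to their closest Free Code equivalents.
EQUIVALENTS = {
    "nvidia-driver": "nouveau",
    "flash-player": "gnash",
}

def indulgence(package: str) -> str:
    """Return the message shown when installing a non-free package.

    If a Free Code equivalent is known, the message includes the appeal;
    otherwise the package is simply installed without comment.
    """
    free = EQUIVALENTS.get(package)
    if free is None:
        return f"Installing {package}."
    return (
        f"Installing {package}. A Free Code equivalent, {free}, exists. "
        "Please consider supporting it with money, activism, or code."
    )
```

The point of the design is that the appeal is attached at install time, at the exact moment the user chooses the non-free option, rather than buried in documentation.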

I object to handling such matters carelessly and to the vociferous “You people are idiots for not making this as easy as possible” arguments that are sometimes made; these are short-sighted and uncreative (as is the ideological opposition to even discussing the issue).

Commercial games and emulation

The reviewer points out that Linux is not a state of the art gaming platform. I’m quite happy with that. Free Code games give me an occasional distraction, but are not of sufficient depth to be seriously addictive. (Some Battle for Wesnoth or NetHack players might object to the previous statement.) I think this is something parents should take into account, and we should communicate it as a plus when talking to offices, governments, and home users.

If you know me, you may be surprised to learn that I’m in favor of some governmental regulation on games; not on grounds of violence or sex, but on grounds of addictiveness. That is a topic for another entry — but I personally see mainstream games on Linux as a non-issue. Your Mileage May Vary. If you want more distractions, there are thousands of games from the 1980s and 1990s that will work perfectly in emulators which pretend your machine is a Super Nintendo, a Commodore Amiga, or a DOS PC. Search for “Abandonware” and “ROMs” on BitTorrent et al.

I’m more interested in how the Free Code ethos, combined with continued innovation in funding and collaboration, could result in Free Games that are technically modern, but built with more than just the interest in getting as many players as possible to pay a monthly subscription fee. Games that teach things. Games that make people do things. Games that demand and encourage creativity. Games where the builders care about the lives of the players, and not just their money. I’ll be happy to promote and help with that. Getting the latest Blizzard game to run nicely? Not so much.

Missing code

You know that Linux is ready for governments and businesses when a 30-day review points out DVD and photo editing as the main weaknesses — and not because there are no Free Code replacements, but because they aren’t quite good enough yet. The reviewer only tried two applications, GIMP and Kino. I share his feelings towards the GIMP photo editor, which I regard as an “old school” Free Code project where the developers would rather tell the users why their program is, in fact, highly usable than conduct serious usability tests and make improvements. To be fair, the existing GIMP user base, which is used to the current implementation, may also resist significant changes.

That is not to say that the quite remarkable GIMP functionality could not be wrapped into a nicer user interface. GIMPShop is one such attempt, which I have not tried. I hope that it will become a well-maintained fork; I don’t have much hope for GIMP itself to improve in the UI department. I am personally partial to Krita which, while still young, seems to have generally made the right implementation decisions, and is truly user-focused (as is all of KDE — I love those guys). I am not a professional photo editor, so I don’t know how mature Krita is for serious work. It is good enough for everything I do.

As for video editing, it would have been nice to read about some alternatives to Kino, such as the popular Cinelerra. As an outsider, I expect it will take a couple of years for one of these solutions to become truly competitive. Fortunately, many end users have very basic needs (cutting, titling, some effects), which the existing solutions cover, as the reviewer acknowledges.

There are, of course, countless areas which the reviewer does not touch upon. Personally, I think two important missing components for home users are high quality speech recognition (which many users will expect now that Vista is introducing it to the mainstream), and Free OCR. Here I’m not even talking about quality; the existing packages are, frankly, useless (I have scanned, OCR’d and proofread books, so I know a little bit about the topic). While setting my father up with Kubuntu, I played with the Tesseract engine, which was recently freed. It produced the best results of all the ones I’ve tested, but still did poorly compared to proprietary solutions (perhaps due to a lack of preprocessing). Frankly, without some serious financial support I doubt we’ll make much headway anytime soon. If the Wikimedia Foundation had more resources to spare, I’d advocate funding this, as it would greatly benefit Wikisource.

64 bit support

The reviewer points out a few glitches in the apps and drivers of the 64 bit version of Ubuntu. I cannot see a single argument in the article why this even matters to end users, except that “Vista has it” and that Ubuntu will have to provide good 64 bit support “to be taken seriously.” For the most part, this seems to be an issue of buzzword compliance, and I’m confident that the glitches will be worked out long before it starts to be relevant (i.e. when regular users start to operate on huge amounts of data in the course of their daily work).

Instead of chasing after an implementation detail of doubtful significance, I find the opposite direction of innovation much more fascinating: getting Linux to run well on low-end machines and embedded systems. If you want, you can run Linux on a 486 PC from 15 years ago using DeLi Linux, though you’ll have to do without some modern applications, simply because of their resource demands. Many others are up-to-date, however, and there are even well-maintained text-mode web browsers for Linux, not to mention highly capable text-based authoring environments like emacs and vim.


Where next?

Given the state of Linux today, there is really no excuse for any government not to switch to Free Code. Certainly, there are still unmet needs, but if governments collaborate, these will be addressed quickly. Which, incidentally, is also a lovely way to bring people from different cultures together. Imagine Free Code hackers from Venezuela, Norway, and Pakistan working together on the same codebase. Such scenarios are, of course, already playing out, but would be much more common if any taxpayer-funded code needed to be freely licensed.

I believe this is inevitable, and that we’re going to continue to see significant progress within governments and academia. One could compare the Free Code progress to any civil rights struggle. Some will consider this an exaggeration, but one should not underestimate how much is connected to it, especially 20 to 30 years down the line: the involvement of developing nations in the information society, the control over the media platform through which everything will be created and played, the potential for reform to capitalism itself — and more. While Free Code may not save lives in ways which are as obvious as, say, an HIV drug or a law against anti-gay violence, it is the enabling support layer of a Free Culture which is very much connected to such issues: to education and awareness, to media decentralization, to intellectual property law, and so on and so forth.

Accordingly, we can expect some governments to take reactionary positions for decades (regardless of the quality of free solutions and undeniable cost savings), while others are already well in the process of migrating. This may sound obvious, but is a bit counterintuitive to many politically naive hackers: No matter how good a job we do, we will continue to meet resistance and opposition on all levels.

As for businesses migrating their desktops, I believe it’s going to be a slower process. There are many contributing factors here: the ignorance of decision-makers, the feeling that anything “free” is suspicious, the sustained marketing effort of proprietary vendors, the reluctance to commit to approaches which require cooperation with competitors, the short term thinking that often dominates company policies, and last but not least, the common lack of any ethical perspective. Of course, there are also more practical grounds for opposition, but in general, I think the corporate sector (in spite of the adoption of Free Code on the server) is going to be the hardest to convert. I’m supportive of helping especially small businesses make the switch, in spite of my reservations about the profit motive. (Realistically, we’re going to have to live with capitalism for some time …)

Let me explain the reason I used the idiosyncratic term “Free Code” in this article. “Free Software” is disappointingly ambiguous, and “Open Source” is morally sterile. The word “code” points to what matters most: the instructions, the recipes underlying an application, which can be remixed, shared and rearranged. And “Free Code”, like “Free Culture”, calls to mind freedom over any vague notion of “openness” or “sources”. Last but not least, it carries the ominous double meaning that Lessig pointed out: code is law. Code increasingly influences and shapes our society, the more networked it becomes. Who would you rather have in charge of writing law: a monopoly-centered oligarchy, or anyone who has the will and the ability to do so? If you realize what code truly is, the conclusions become inescapable.

“Free Code” is perhaps unlikely to catch on, but sometimes I just want to try out a phrase to see how it feels. I’ve also always found that “software” is an undeniably silly term. It’s easy to ridicule people who would care about such a thing. It’s “soft.” It’s a commodity. To call a coder a software developer industrializes and trivializes their role. Coding should be a creative process which is deeply rooted in social and political awareness. People should be proud to say: “I am not a ‘software developer’. I am a Free Coder.”

05.01.06

RfC: A Free Content and Expression Definition

Posted in Essays, Ideas at 5:09 pm by Erik

If you distribute this announcement, please make an addition to /Log so we can avoid duplicates.

The free culture movement is growing. Hackers have created a completely free operating system called GNU/Linux that can be used and shared by anyone for any purpose. A community of volunteers has built the largest encyclopedia in history, Wikipedia, which is used by more people every day than CNN.com or AOL.com. Thousands of individuals have chosen to upload photos to Flickr.com under free licenses. But – just a minute. What exactly is a “free license”?

In the free software world, the two primary definitions – the Free Software Definition and the Open Source Definition – are both fairly clear about what uses must be allowed. Free software can be freely copied, modified, modified and copied, sold, taken apart and put back together. However, no similar standard exists in the sphere of free content and free expressions.

We believe that the highest standard of freedom should be sought for as many works as possible. And we seek to define this standard of freedom clearly. We call this definition the “Free Content and Expression Definition”, and we call works which are covered by this definition “free content” or “free expressions”.

Neither these names nor the text of the definition itself are final yet. In the spirit of free and open collaboration, we invite your feedback and changes. The definition is published in a wiki. You can find it at:

http://freedomdefined.org/ or http://freecontentdefinition.org/

Please use the URL <http://freedomdefined.org/static/> (including the trailing slash) when submitting this link to high-traffic websites.

There is a stable and an unstable version of the definition. The stable version is protected, while the unstable one may be edited by anyone. Be bold and make changes to the unstable version, or make suggestions on the discussion page. Over time, we hope to reach a consensus. Four moderators will be assisting this process:

  • Erik Möller – co-initiator of the definition. Free software developer, author and long time Wikimedian, where he initiated two projects: Wikinews and the Wikimedia Commons.
  • Benjamin Mako Hill – co-initiator of the definition. Debian hacker and author of the Debian GNU/Linux 3.1 Bible, board member of Software in the Public Interest, Software Freedom International, and the Ubuntu Foundation.
  • Mia Garlick. General Counsel at Creative Commons, and an expert on IP law. Creative Commons is, of course, the project which offers many easy-to-use licenses to authors and artists, some of which are free content licenses and some of which are not.
  • Angela Beesley. One of the two elected trustees of the Wikimedia Foundation. Co-founder and Vice President of Wikia, Inc.

None of the moderators is acting here in an official capacity related to their affiliations. Please treat their comments as personal opinion unless otherwise noted. The Creative Commons project has welcomed the effort to clearly classify existing groups of licenses, and will work to supplement this definition with one which covers a larger class of licenses and works.

In addition to changes to the definition itself, we invite you to submit logos that can be attached to works or licenses which are free under this definition:

http://freedomdefined.org/Logo_contest

One note on the choice of name. Not all people will be happy to label their works “content”, as it is also a term that is heavily used in commerce. This is why the initiators of the definition compromised on the name “Free Content and Expression Definition” for the definition itself. We are suggesting “Free Expression” as an alternative term that may lend itself particularly to usage in the context of artistic works. However, we remain open on discussing the issue of naming, and invite your feedback in this regard.

We encourage you to join the open editing phase, to take part in the logo contest, or to provide feedback. We aim to release a 1.0 version of this definition fairly soon.

Please forward this announcement to other relevant message boards and mailing lists.

Thanks for your time,

Erik Möller and Benjamin Mako Hill

12.08.05

Lessig thinks we need NC licenses. I think we should make them obsolete.

Posted in Essays at 5:17 pm by Erik

In his latest “Creative Commons in Review” column, Lawrence Lessig has responded at length to my article “The Case for Free Use: Reasons Not to Use a Creative Commons -NC License”. He agrees with the gist of the article, but points out:

For example, imagine you’re in a band and you’ve recorded a new song. You’re happy to have it spread around the Internet. But you’re not keen that Sony include it on a CD — at least without asking you first. If you release the song under a simple Attribution license there’s no reason Sony (or anyone else) couldn’t take your song and sell it. And I personally see nothing wrong with you wanting to reserve your commercial rights so that Sony has to ask you permission (and pay you) before they can profit from your music.

Let’s not forget that the CC-NC license alone guarantees that the work, if it is of interest to anyone, will be freely available on the Net. This is really important, and Lessig does not mention it — perhaps he thinks it is obvious, but I do not believe it is. Anyone who uses a CC-NC license must understand that they are giving away their work to the world for free. Even for large files, bandwidth costs have become negligible thanks to new distribution mechanisms such as BitTorrent and public media archives like Ourmedia. Today, anyone can essentially distribute any large static media file in demand to anyone else, for free. You can even do it without advertising.

This is only part of the media revolution. The other part is new mechanisms to tell people about interesting resources. For textual content, blogs are already doing a pretty good job. Social tagging, collaborative filtering – all this is happening. In combination, this means that there is a clear trend that any freely licensed work (NC or not) that is of high value to many will find its way through the network. In fact, to a lesser extent, this is even true for proprietary content — but here the content industry can impose regulations to push the distribution into darknets. For files that can be freely copied, the Net can develop its full strengths as a medium to spread memes and build mindshare.

Yes, there are still millions of people who have never read a blog. But as a new generation is growing up with these tools — the generation which is the primary target audience of much of the music we’re talking about — this is changing rapidly.

So, if Sony managed to make a buck off a work that is freely available throughout the Net, then I very much doubt they would be able to do so for much longer. I’m also quite sure that the artist in question could then easily land a contract and go proprietary, if they wanted to. There will very likely always be distribution and marketing platforms that could get away with charging a small amount for a song — but arguably, they are performing a valuable service to the artist in return, by getting the word out about their music. It’s also quite likely in the current media landscape that such platforms would use freely licensed content as teaser material for their commercial offerings, and make it available for free download.

It is true that the argument cannot be entirely discounted. We are living in times of transition, where it is possible to make money off people’s ignorance or their attachment to traditional distribution media, such as CDs. But if anything, this transition will be accelerated by putting more content under truly free licenses.

As I’ve said before, I hope that Creative Commons will inform creators about the consequences of the NC licensing choice. Lessig signals that this may soon be done, and that’s great. But the really interesting question, in my mind, is not how to stop companies like Sony from commercially exploiting freely available works — it’s how to build an economy of goodwill, one where creators of free quality content are rewarded fairly.

These two issues are very much related. The reason people choose the NC license is that they (often subconsciously) feel that somehow, some day, they might make lots of money with their content — and they don’t want anyone else to do it. They may also feel that, with an NC license, they can still keep some control over how frequently their work is distributed across the planet (as noted above, this is not true).

If we can demonstrate that content creators can make money without utilizing copyright, then much of the rationale for using an NC license disappears. All that remains to be done then is to improve these mechanisms to the point where they become the dominant way to find and pay for content.

Take these nifty buttons:

They practically cry out to be turned into the decentralized infrastructure to promote such a platform. Instead of pointing to static license pages, they could point to dynamic pages about the creators and their works. These pages would allow visitors to not only donate money, but also to make suggestions for new works (work for hire), or to pledge to support a project or cause defined by the creator.

The pledging could work similarly to Pledgebank — only if enough other people sign up to meet the goal does anyone have to pay. And one of the cool things about Pledgebank is that it is open-ended. It doesn’t have to be about money, and it doesn’t have to benefit the original creator.
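The threshold logic behind such pledging is simple enough to sketch. This is a minimal illustration under my own assumptions (names and amounts are invented), not a description of how Pledgebank actually works internally.

```python
# Minimal sketch of a threshold pledge: individual pledges only become
# binding once their combined total meets the creator's goal; otherwise
# nobody pays anything.

def collect(pledges: dict, goal: int) -> int:
    """Return the amount to collect from all pledgers combined.

    pledges maps a pledger's name to the amount they promised.
    If the total meets or exceeds the goal, the full total is
    collected; if the goal is not reached, nothing is collected.
    """
    total = sum(pledges.values())
    return total if total >= goal else 0
```

The all-or-nothing rule is what makes pledging attractive to supporters: no one risks paying for a project that never reaches critical mass.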

Once you have such a platform, you can bootstrap it into something ever more powerful. You can add group-forming features where Creative Commons users can join forces to support particular causes or projects, or vote on how donated money is spent. You can organize fundraisers to make old proprietary content freely available. You can improve the functionality for work-for-hire projects (milestones, specifications, collaborative funding). You can add better search and discovery tools. You can improve usability. It’s a practically open-ended project.

With or without a license that prohibits commercial use, these mechanisms are needed to make the free content economy work. Here’s a challenge to Larry: Turn the Creative Commons directory into a platform for discovering content and remunerating creators. Make it an open source project and get the best brains in the field of social networking to work on this — but put some paid developers on the task to make sure the job gets done. Nobody on the planet is in a better position than you are to get such a project off the ground.