The history of AltaVista, Yahoo! and Google visualized as GIF animations. Would be even cooler with a slider.
Either webcomics artists are an unusually innovative bunch, or I’m reading too much of their work. In any case, on the heels of OhNoRobot, Biggest Webcomic Loser is another very interesting (and somewhat bizarre) project coming from the comics community. Overweight comics artists have decided to join forces to lose money and raise fat for charity. Er, sorry, the other way around. It works like this:
- Each artist defines a personal weight goal.
- The site is updated regularly with new comics from the artists who have signed up for the project.
- Visitors can choose to pledge financial support to UNICEF on a per-pound basis (apparently they’re not innovative enough to use the metric system).
- The financial support pledged per pound is indicated next to each comic.
The total amount pledged across all comics already exceeds $5K, and given that the project has only been running for a few weeks, it could go much higher. Once any artist has reached their weight loss goal, those who have pledged to support “them” (i.e. UNICEF) will be reminded to make a donation.
The one thing I would change is to try to pick a general theme for the comics to follow — otherwise they tend to focus on the idea itself, which might not be the best way to attract readers who do not have the same, er, problem.
It’s a bit strange to tie this specifically to the webcomics community, but it has the advantage of providing interesting new content (comics) every day. I wonder if something similar might work for blogs. See Pledgebank for a very cool generic pledging system.
For those interested in the progress of Wikidata and the “Ultimate Wiktionary”, we have just posted a prototype containing lexicological data from the General Multilingual Environmental Thesaurus (GEMET) used in the European Union (over 70,000 words in more than 20 languages imported so far). This is a first read-only stab at a complex structured data wiki application. I’ve posted further details on wikitech-l.
This one is a simple idea, and it may already exist. I’d like a way to maintain a personal blacklist of sites which I never wish to visit. When browsing the web, all links that (visibly or invisibly) point to these sites should then be somehow marked. Preferably, that should be done in a low-overhead way that doesn’t require rewriting every webpage. I’d be happy to hover over a link to see whether it is blacklisted or not, although color-coding or similar would of course be more user-friendly.
I do realize that some search engines offer blacklists as part of their “personalized search”, but I’d rather host this list on my machine and not have it tied to a global identity, requiring me to let Google et al. set eternal cookies to use them. Besides, this would also show bad links on other websites such as Wikipedia, which might come in handy.
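To make this concrete, here is a minimal sketch of how such a marker could work as a Greasemonkey-style userscript, written in TypeScript. The blacklist entries and the styling choices are mine, purely for illustration; a real version would load the list from local storage or a file.

```typescript
// Hypothetical userscript sketch: mark links that point to blacklisted sites.
// The entries below are placeholders, not real sites.
const blacklist: string[] = ["annoying-lyrics.example", "scam-archive.example"];

function isBlacklisted(href: string): boolean {
  try {
    const host = new URL(href, document.baseURI).hostname;
    // Match the domain itself or any of its subdomains.
    return blacklist.some((d) => host === d || host.endsWith("." + d));
  } catch {
    return false; // unparsable href: leave the link alone
  }
}

// Low overhead: decorate matching links instead of rewriting the whole page.
for (const a of Array.from(document.querySelectorAll<HTMLAnchorElement>("a[href]"))) {
  if (isBlacklisted(a.href)) {
    a.style.textDecoration = "line-through"; // visible cue
    a.title = "On your personal blacklist";  // shown on hover
  }
}
```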
One personal use for this is searching for lyrics. There are too many lyrics sites that are full of floating, animated, annoying ads of different types, some of them making it past Firefox’s popup and ad blocking. Incidentally, this is the primary result of the music industry’s campaign against lyrics websites: the big lyrics archives that are left are scammers. Thanks, guys! (I was in the process of downloading all the lyrics from www.lyrics.ch when they shut it down. Unfortunately, I only ever got to the letter H…)
The Internet Archive is a great invention, providing a view through the history of a webpage. It is also a great tool for investigative journalism and academic research. Whether it’s about the history of a dubious company, or a page whose content has mysteriously changed, the Internet Archive adds wiki-like versioning to webpages that otherwise would not have it. To avoid massive copyright problems, the Archive has made two crucial compromises: It does not show pages less than 6 months old, and it retroactively deletes material when the site owners want it to (see FAQ). In fact, owners don’t even need to ask — they just have to put a special robots.txt file on their webservers, and the next time the crawlers see the site, it is removed from the Archive.
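The exclusion mechanism is just the ordinary robots.txt convention; to my knowledge, a rule like the following (ia_archiver being the Archive’s crawler) is all it takes:

```
User-agent: ia_archiver
Disallow: /
```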
An archive where material can disappear from one day to the next without notice is quite a bizarre thing. Links to archived web pages become invalid. When a site is removed from the Archive, it appears as if it had never been there. It can be entertaining to browse the history of popular webpages, but if anything controversial is immediately removed at the owner’s whim, without so much as a verification process, then the tool loses much of its value for serious research. It’s the controversial stuff that needs to be archived the most.
If the Internet Archive doesn’t fix this flaw, it needs to be replaced with a solution that doesn’t have it, such as a decentralized storage network. As a temporary hack, it would be useful if someone set up a “Wayback Wayback Machine”, a site which, on request, crawls all revisions of a website in the Internet Archive, stores them, and makes them available to researchers who can provide credentials. This would only help to protect the record of websites where it seems likely that they might be deleted in the future (scams, phishing sites, etc.) and someone thinks of requesting a secure copy (in which case they could also manually download and save the revisions). A better long-term solution is needed, and it would be best if it was provided by the Internet Archive itself.
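A rough sketch of the mirroring step follows, assuming nothing more than the Archive’s snapshot URL scheme (web.archive.org/web/&lt;timestamp&gt;/&lt;url&gt;); where the list of capture timestamps comes from is left open.

```typescript
// Hypothetical sketch of a "Wayback Wayback Machine" mirroring step.
async function mirrorSnapshots(target: string, timestamps: string[]): Promise<Map<string, string>> {
  const copies = new Map<string, string>(); // timestamp -> archived HTML
  for (const ts of timestamps) {
    const res = await fetch(`https://web.archive.org/web/${ts}/${target}`);
    if (res.ok) {
      // Keep a local copy so the revision survives deletion upstream.
      copies.set(ts, await res.text());
    }
  }
  return copies;
}

// Usage (timestamps are made up for illustration):
// mirrorSnapshots("http://example.com/", ["20040115000000", "20041120000000"]);
```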
Until then, whenever you see something unusual in the Wayback Machine, remember to make a copy. It might not be there tomorrow.
In his latest “Creative Commons in Review” column, Lawrence Lessig has responded at length to my article “The Case for Free Use: Reasons Not to Use a Creative Commons -NC License”. He agrees with the gist of the article, but points out:
For example, imagine you’re in a band and you’ve recorded a new song. You’re happy to have it spread around the Internet. But you’re not keen that Sony include it on a CD — at least without asking you first. If you release the song under a simple Attribution license there’s no reason Sony (or anyone else) couldn’t take your song and sell it. And I personally see nothing wrong with you wanting to reserve your commercial rights so that Sony has to ask you permission (and pay you) before they can profit from your music.
Let’s not forget that the CC-NC license alone guarantees that the work, if it is of interest to anyone, will be freely available on the Net. This is really important, and Lessig does not mention it — perhaps he thinks it is obvious, but I do not believe it is. Anyone who uses a CC-NC license must understand that they are giving away their work to the world for free. Even for large files, bandwidth costs have become negligible thanks to new distribution mechanisms such as BitTorrent and public media archives like Ourmedia. Today, anyone can distribute essentially any large static media file that is in demand to anyone else, for free. You can even do it without advertising.
This is only part of the media revolution. The other part is new mechanisms to tell people about interesting resources. For textual content, blogs are already doing a pretty good job. Social tagging, collaborative filtering – all of this is happening. In combination, this means there is a clear trend: any freely licensed work (NC or not) that is of high value to many will find its way through the network. In fact, to a lesser extent, this is even true for proprietary content — but here the content industry can impose regulations that push the distribution into darknets. For files that can be freely copied, the Net can develop its full strength as a medium to spread memes and build mindshare.
Yes, there are still millions of people who have never read a blog. But as a new generation is growing up with these tools — the generation which is the primary target audience of much of the music we’re talking about — this is changing rapidly.
So, if Sony managed to make a buck off a work that is freely available throughout the Net, I very much doubt they would be able to do so for much longer. I’m also quite sure that the artist in question could then easily land a contract and go proprietary, if they wanted to. There will very likely always be distribution and marketing platforms that can get away with charging a small amount for a song — but arguably, they perform a valuable service to the artist in return, by getting the word out about their music. It’s also quite likely in the current media landscape that such platforms would use freely licensed content as teaser material for their commercial offerings, and make it available for free download.
It is true that the argument cannot be entirely discounted. We are living in times of transition, where it is possible to make money off people’s ignorance or their attachment to traditional distribution media, such as CDs. But if anything, this transition will be accelerated by putting more content under truly free licenses.
As I’ve said before, I hope that Creative Commons will inform creators about the consequences of the NC licensing choice. Lessig signals that this may soon be done, and that’s great. But the really interesting question, in my mind, is not how to stop companies like Sony from commercially exploiting freely available works — it’s how to build an economy of goodwill, one where creators of free quality content are rewarded fairly.
These two issues are very much related. The reason people choose the NC license is that they (often subconsciously) feel that somehow, some day, they might make lots of money with their content — and they don’t want anyone else to do it. They may also feel that, with an NC license, they can still keep some control over how widely their work is distributed across the planet (as noted above, this is not true).
If we can demonstrate that content creators can make money without relying on copyright, then much of the rationale for using an NC license disappears. All that remains to be done is to improve these mechanisms to the point where they become the dominant way to find and pay for content.
Take these nifty buttons:
They practically cry out to be turned into the decentralized infrastructure to promote such a platform. Instead of pointing to static license pages, they could point to dynamic pages about the creators and their works. These pages would allow visitors to not only donate money, but also to make suggestions for new works (work for hire), or to pledge to support a project or cause defined by the creator.
The pledging could work similarly to Pledgebank — only if enough other people sign up to meet the goal does anyone have to pay. And one of the cool things about Pledgebank is that it is open-ended: it doesn’t have to be about money, and it doesn’t have to benefit the original creator.
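The conditional-commitment rule itself is simple; a toy model (all names invented) might look like this:

```typescript
// Toy model of Pledgebank-style conditional pledging: nobody is asked to act
// until the signup goal is met.
interface Pledge {
  creator: string;   // who defined the goal
  goal: number;      // required number of signers
  signers: string[]; // people who have pledged so far
}

function pledgeIsBinding(p: Pledge): boolean {
  // Signers are only reminded to follow through once this returns true.
  return p.signers.length >= p.goal;
}

const drive: Pledge = { creator: "alice", goal: 100, signers: ["bob", "carol"] };
console.log(pledgeIsBinding(drive)); // false until 100 people have signed up
```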
Once you have such a platform, you can bootstrap it into something ever more powerful. You can add group-forming features where Creative Commons users can join forces to support particular causes or projects, or vote on how donated money is spent. You can organize fundraisers to make old proprietary content freely available. You can improve the functionality for work-for-hire projects (milestones, specifications, collaborative funding). You can add better search and discovery tools. You can improve usability. It’s a practically open-ended project.
With or without a license that prohibits commercial use, these mechanisms are needed to make the free content economy work. Here’s a challenge to Larry: Turn the Creative Commons directory into a platform for discovering content and remunerating creators. Make it an open source project and get the best brains in the field of social networking to work on it — but put some paid developers on the task to make sure the job gets done. Nobody on the planet is in a better position than you are to get such a project off the ground.
In case you missed it, the log from [[Democracy|DTECHCON]] on Saturday is here. A small crowd gathered to discuss different projects such as Plexnet, Parlement and Demosphere. A project which doesn’t have a website of its own yet is InfoLibre by François A. Bradet. As I understand it, he wants to use mechanisms of roleplaying games to motivate people to participate in productive online environments. After the chat was officially over, Wybo Wiersma dropped in to discuss LogiLogi; a separate log is available for this.
All in all, I think the model of holding a conference on IRC is quite workable, though we haven’t yet tested it with a large crowd. I think it’s a good way to connect people who might otherwise not meet, and let them know about different projects in a short time. It would have been cool to have Tom Steinberg from mysociety, but he never responded to my invitation – the mysociety projects came up in the discussion a couple of times, especially Pledgebank.
Alas, I wrote the manifesto too shortly before the conference; if I had taken the time to polish and publish it earlier, it could have brought some additional attention to DTECHCON. As it is, I still think it was a worthwhile and useful exercise, and I very much intend to repeat it at some point. For those interested in democracy online, the Demosphere wiki has a cool list of related projects.
[[Freedom Tools|Here’s an idea to distribute tools and knowledge to fight censorship.]] Excerpt:
Perhaps a simple PHP or Perl script in combination with a central web repository of “freedom tools” and documentation could enable more people to help others to exercise their human rights.
The web script would be installed by anyone who wants to help people to get secure, censorship-resistant, anonymous Net access. The script would be called “rename-me.php” or similar, and the user installing it would be asked to give it an arbitrary name of their choosing.
After placing it in a writable directory on their server and executing it, the script would download a signed archive (ZIP file) from the central “freedom tools” repository. It would also output a bunch of HTML for the webmaster to put on their website. This HTML would contain a reference to a nice button image which would function as a link to the script.
On subsequent calls, the script would produce a download page: an HTML skeleton that redirects the visitor to the ZIP file. The download page could be customized by the webmaster. As an added bonus, the script could perform platform (operating system) detection and redirect to a platform-specific archive.
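To illustrate the two modes described above: the excerpt imagines PHP or Perl, but here is the same first-run/subsequent-run flow sketched in TypeScript for concreteness, with the repository URL, file names, and port as invented placeholders.

```typescript
// Sketch of the two modes of the "freedom tools" script described above.
import * as fs from "node:fs";
import * as http from "node:http";

const REPO_URL = "https://freedom-tools.example/tools.zip"; // hypothetical central repository
const LOCAL_ZIP = "tools.zip";

async function firstRun(): Promise<void> {
  // First call: fetch the (ideally signed) archive into the writable directory.
  const res = await fetch(REPO_URL);
  fs.writeFileSync(LOCAL_ZIP, Buffer.from(await res.arrayBuffer()));
  // Emit the HTML the webmaster pastes into their site: a button image
  // that links back to this script.
  console.log('<a href="/download"><img src="/freedom-button.png" alt="Get freedom tools"></a>');
}

function downloadPage(req: http.IncomingMessage, res: http.ServerResponse): void {
  // Optional platform detection: pick an archive matching the visitor's OS.
  const ua = req.headers["user-agent"] ?? "";
  const file = /Windows/.test(ua) ? "tools-win.zip" : LOCAL_ZIP;
  // Serve a customizable HTML skeleton that redirects to the ZIP file.
  res.writeHead(200, { "Content-Type": "text/html" });
  res.end(`<html><head><meta http-equiv="refresh" content="2; url=/${file}"></head>` +
    `<body>Your download of ${file} should begin shortly.</body></html>`);
}

// Subsequent calls: serve the download page.
if (!fs.existsSync(LOCAL_ZIP)) {
  void firstRun();
} else {
  http.createServer(downloadPage).listen(8080);
}
```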
See the [[Freedom Tools|wiki page]] for what this ZIP file could contain, and how such a system could be used to create thousands of entrypoints into free, secure networks. Please [[edit:Freedom Tools|edit the idea]] or add your comments!
In order to promote [[Democracy|DTECHCON]] tomorrow, I have written a small Manifesto. Excerpt:
It is not a single man who created this world. It is not even a few hundred. It is all of us who create it and have done so for millennia. The great inventor is nothing without the resources to implement his idea. His implementation is worthless without the people who use it and refine it. Even then, many inventions cannot function if society does not accommodate or even opposes them.
The collaborative nature of modern civilization makes it quite resilient. Take one human out, and it still works. Take ten thousand out, and it still works. Take a million out – it still works. There is no single person you can kill to kill society. Collectively, we have achieved greatness — while individually, we are extremely fallible. So fallible that one of the biggest health problems in modern societies is obesity.
Yet, given the complexity of even just a single aspect of our economy — how does the apple get into the supermarket? — and the fallibility of individuals, there appears to be a rather glaring flaw in our social design:
- Our world is created by the actions of the many, not of the few.
- Yet our society still solves its problems through representation of the many by the few.
I have submitted it to Kuro5hin, but I encourage you to edit the wiki version.
MeatballWiki is a very interesting wiki which discusses online communities, with a strong focus on wikis. It’s been around for ages, but now they have hacked some neat AJAX functionality into the site (via Ajaxian). RecentChanges fetches new changes from the server at regular intervals and allows you to filter the view using Meatball’s unique concept of EditCategories without reloading the page. Page histories use a split-pane model where diffs are fetched from the server without reloading the page. I’m missing an obvious way to bookmark diffs, though. As for Recent Changes, the coolest thing is real-time changes. Wikipedia used to have them on the web, but the script wasn’t built to scale. You can still get them via IRC on irc.wikimedia.org, though, and there are cool tools like CryptoDerk’s VandalFighter which make use of that.
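The underlying pattern is plain AJAX polling. A minimal sketch in TypeScript follows; the /recentchanges.json endpoint and its response shape are invented, assuming it returns changes newer than a given timestamp, oldest first.

```typescript
// Sketch of the AJAX polling pattern: fetch new changes at regular intervals
// and splice them into the page without a reload.
interface Change { page: string; author: string; timestamp: string; }

let newestSeen = ""; // timestamp of the newest change already on the page

async function pollChanges(): Promise<void> {
  const res = await fetch(`/recentchanges.json?since=${encodeURIComponent(newestSeen)}`);
  const changes: Change[] = await res.json();
  const list = document.getElementById("recentchanges")!;
  for (const c of changes) {
    const li = document.createElement("li");
    li.textContent = `${c.timestamp} ${c.page} (${c.author})`;
    list.prepend(li); // newest entry on top, no page reload
    newestSeen = c.timestamp;
  }
}

setInterval(pollChanges, 30_000); // poll every 30 seconds
```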