Wikimedia Brand Survey

Posted in Wikis at 8:32 am by Erik

The Wikimedia Foundation, which runs Wikipedia, has a jungle of brands: logos, project names, and corresponding domain names. For many of the project names, there are even localized variants in different languages. But even the basic names are confusing, such as the Wikimedia/Wikipedia similarity.

With that leading introduction ;-) , I’d like to invite you to complete the Wikimedia brand survey if you are interested in such matters & find the time.


Referencing source code

Posted in Wikis at 7:59 am by Erik

Following up: The NetHack Wikia is actually using Wikipedia’s referencing extension to link directly to individual lines of the NetHack source code. It’s a pretty great wiki, too.



Posted in Free Culture, Wikis at 8:08 am by Erik

The discovery of Gliese 581 c is a watershed moment in the search for extrasolar planets and alien life. What folly to view religion as revelation, when it is science that is unwrapping the universe like a giant birthday present, making visible entire worlds one by one, in the unimaginably vast candy store of billions of observable galaxies. One of the most promising missions among the planet hunters is COROT, a space telescope operated by the French and European Space Agencies. And, of course, when I wanted to see what the state of that mission is, I intuitively looked it up on Wikipedia.

Purely by coincidence, COROT found its first planet just yesterday. Not only was this noted in the Wikipedia article about COROT, the planet itself already has an entry of its own. Thus, I did not learn about the discovery through the numerous RSS feeds and news websites I follow (including Wikinews), but through Wikipedia. We call Wikipedia an encyclopedia — but it is clearly much more than any encyclopedia history has ever seen.

I am hardly the first person to notice this, and indeed, the New York Times recently devoted an article to exploring Wikipedia’s coverage of the Virginia Tech massacre. How can one make more intelligent use of the news-like characteristics of Wikipedia and combine them in meaningful ways with our news-dedicated project, Wikinews?

I’ve personally subscribed to the history RSS feeds of a number of articles of interest (accessible from the bottom left corner of an article’s “history” page). These feeds deliver diffs of the latest changes to the article, which can be useful in order to, say, notice that one of your favorite bands has released a new album. But of course you will get a lot of crud, including vandalism and boring maintenance edits. There are simple ways to make feeds smarter — only pushing changes into the feed once an article has stabilized, filtering out minor edits, and so on.
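A minimal sketch of such a smarter-feed filter, assuming a hypothetical entry format (a real version would parse the Atom/RSS feed that MediaWiki serves from a history page, and would need far better heuristics):

```python
# Sketch of a "smarter feed" filter for article-history entries.
# The entry dictionaries here are hypothetical stand-ins for parsed feed items.

def is_interesting(entry):
    """Return True if an edit entry looks worth pushing into a feed."""
    summary = entry.get("summary", "").lower()
    # Skip edits the author flagged as minor.
    if entry.get("minor", False):
        return False
    # Crude keyword heuristics for vandalism cleanup and routine maintenance.
    noise = ("revert", "rv ", "undo", "typo", "interwiki", "cleanup")
    return not any(word in summary for word in noise)

def filter_feed(entries):
    """Keep only the entries worth notifying a subscriber about."""
    return [e for e in entries if is_interesting(e)]

edits = [
    {"summary": "New album 'Exile' added to discography", "minor": False},
    {"summary": "Reverted vandalism by 192.0.2.5", "minor": False},
    {"summary": "typo", "minor": True},
]
print([e["summary"] for e in filter_feed(edits)])
# → ["New album 'Exile' added to discography"]
```

The real work, of course, lies in the heuristics; a production filter would look at edit sizes, editor reputation, and article stability rather than keywords.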

Structured data will also allow for some interesting feed possibilities: if an album is an object associated with a band, then it is possible to be notified if there are specific changes to the existing objects or additions of new ones. This general principle can be applied wonderfully broadly, turning any wiki into a universal event notification mechanism. (Alert me when person X dies / a conference of type Y happens / an astronomical object with the characteristics A, B, and C is discovered.) Wikipedia (and its structured data repository) will be the single most useful one, but specialized wikis will of course thrive and benefit from the same technology.
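To illustrate, a toy sketch of that notification principle: subscriptions become predicates matched against structured objects whenever one is added or changed. The object schema and all names are invented for illustration:

```python
# Sketch of a wiki as a "universal event notification mechanism".
# Subscriptions are predicates over structured objects; the schema is made up.

subscriptions = [
    # "Alert me when an extrasolar planet lighter than 10 Earth masses appears."
    {"name": "light exoplanets",
     "match": lambda obj: obj.get("type") == "exoplanet"
                          and obj.get("mass_earths", float("inf")) < 10},
]

def notify(obj):
    """Return the names of subscriptions triggered by a new or changed object."""
    return [s["name"] for s in subscriptions if s["match"](obj)]

# A new structured object, e.g. created when the Gliese 581 c article appears:
planet = {"type": "exoplanet", "name": "Gliese 581 c", "mass_earths": 5.0}
print(notify(planet))  # → ['light exoplanets']
```

The same matching step works for deaths, conferences, album releases, or any other object type the structured-data layer knows about.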

In the department of less remote possibilities: back in February, I described an RSS extension I’d like to see. It would allow portals to be transformed into mini-news sites linking directly to relevant Wikipedia articles. In general, the more ways we have to publish RSS automatically or semi-automatically, the better; the community will innovate around the technology.

Our separate Wikinews project remains justifiable as a differently styled, more detailed and granular view of events of the day largely irrespective of their historical significance. But I believe we should try to make the two projects interact more neatly when it comes to major events. Cross-wiki content transclusion in combination with the ever-elusive single user login might spur some interesting collaborations, particularly about components that are useful to both projects (timelines, infoboxes, and so on). Perhaps even the idea of shared fair use content is not entirely blasphemous in this context.

The increasing use of Wikipedia as a news source in its own right will only strengthen its cultural significance in ways that we have yet to understand.

Is Wikipedia complete?

Posted in Wikis at 5:59 am by Erik

Sage Ross reports in the latest Wikipedia Signpost about an interesting experiment at George Mason University where history students were asked to write articles about a subject not already covered in the English Wikipedia. It is interesting to read the course blog for the students’ impressions of Wikipedia. (The talk page of the Signpost article lists some of the articles they created.)

There are many observations one can make about this experiment, but I want to focus on just one. Many of the students had great trouble finding a topic that was not already covered by Wikipedia. Some of those who settled on one did not realize that an article about their topic existed under a different title (or chose to ignore it, wanting to provide “their own perspective” instead). This was fascinating to me, because I would have expected this to be the easiest part of their assignment. Granted, it was complicated by the requirement that the students create a new article. But let’s think a little about the common notion that the English Wikipedia is “basically complete”.

Wikipedia provides anyone with plenty of guidance on what to write about. There is, of course, the gigantic directory of requested articles, which is growing faster than old requests are being fulfilled. Moreover, even when browsing any Wikipedia article about history, you will notice the occasional red link. Their frequency increases as you go past the history of North America and Europe. Beyond history, there are countless specialized pages waiting to be written — articles about species, geographical entities, astronomical objects, and so forth. But here, we are still only talking about horizontal growth. The perfect Wikipedia article allows near-unlimited exploration and is supported by rich media, source text, news, references, structured data … and every single article that currently exists can be improved in this regard. Only a very tiny fraction of articles has reached our current “featured article” standard. This standard and its interpretation have changed significantly over time.

In fact, perhaps the “perfect” article cannot exist, as our conception of knowledge is constantly changing. Here are just some expectations that I think we will have of future articles, in rough order of appearance:

  1. Structured data. If we deploy technology like Semantic MediaWiki or OmegaWiki, we will have to rethink the ways in which we deal with structured data such as the information in most infoboxes. Much of the data currently in human- or bot-maintained lists will be automatically obtained from the structured data embedded into or associated with articles. As existing scientific databases are wikified, these too will become connected with our own content, and it will become possible to navigate directly to the latest scientific results as they are being collected. Of course, even simple structured data functionality poses very serious scalability issues, and we will likely see these efforts evolve separately from the main Wikipedia content for a while. But as the technology matures, the need for integration will increase — and Wikipedians will be expected to hunt for as many sources of data as possible to enrich any given article.
  2. More free content. Vast archives of materials are waiting to be liberated from copyright restrictions, and any single source can add great value. Aside from any massive philanthropic content liberation campaigns and the advances of the open access movement, I hope and believe that reform of the incredibly unbalanced international system of copyright law is possible. Even shaving as much as 30 years off current copyright terms would unlock decades of cultural wealth. Lastly, Wikipedia’s own influence continues to grow, and the importance of having content in Wikipedia may often outweigh any arguments against free content licensing.
  3. Deep sourcing. I have already explored this notion here: Whether we are writing about games, software, or videos, I expect that our models of referencing will require radical innovation to reference deep segments of the content. The best reference is one which allows me to go directly to the relevant piece of code, text, sound or video — but that will of course only be possible for transparent, open access resources.
  4. Levels of knowledge. We have different levels of detail within each Wikipedia, but the current Wikipedias are essentially written for intelligent, educated readers. We should have materials for different reading levels, and summaries of complex subjects written for readers with little pre-existing knowledge. Simple English and Wikijunior are first attempts to make this happen, but we should have a more abstract perspective on how to best represent these different levels of knowledge throughout projects and languages.
  5. Less language-centric views. Right now, references tend to be to works in the language of the respective Wikipedia. However, even following the interwiki links, one can often discover sources in other languages on the same topic, which may very well be much richer and more useful. As our cross-language communication tools improve, our expectation will be to present the views of more than one language space on a given topic. Breakthroughs in freely available machine translation tools could have a massive transformational impact, but even a less ambitious project like Wikicat and the associated ideas could revolutionize the way we look at sources.
  6. More data types. We are very image-rich, but still have few other media. Virtually every article can be served by video content, be it clips from a documentary or an actual recording of the subject. Even original documentary material made through wiki collaboration is a possibility. As for sounds, every musical instrument, every animal that makes sounds, every politician or activist, should have sound files associated with their article.

    In terms of images and tables, their prevalence and quality will increase further as we deploy new extensions such as WikiTeX, which are essentially integrated authoring tools for specialized content such as chessboard patterns, relational diagrams, or music scores. We can and do support all this content already, but the easier it becomes to create it, the more widely it will be used. (And, of course, syntax-driven authoring is hardly the peak of usability.) One particular killer application could result from more intelligent generation of SVG images using text parameters. This is not trivial (the text needs to be rendered within a given “hot spot area” of the image), but not impossible either.

  7. “Sociality”. Presently we only encourage community building for the explicit purpose of creating reference works. Wikiversity is a notable exception with the desire to form learning communities. But why should it not be possible for me to connect easily with students doing their thesis on a particular Wikipedia topic, or researchers who specialize in it? The existing WikiProjects, portals and IRC channels are also seeds for interest communities around particular topics. I believe it is inevitable that these seeds will grow into broader discussion and research areas, partially as part of project convergence. We should stop being afraid of such communities of interest: a community of interest that is strongly connected to Wikipedia may very well be preferable to one which is not, even if it is about Pokemon.
  8. Project convergence. Our current sister project templates are dumb, dead links. Imagine being able to navigate the annotated text of a book from Wikisource directly from within the related Wikipedia article, seeing a sidebar with the latest Wikinews stories on a given topic, scrollable galleries from Commons, or quiz questions from Wikiversity. One should frown upon buzzwords like “web 2.0” or “mash-ups”, but some of the underlying ideas are worth exploring. One of my favorite UI paradigms that is enabled by AJAX is the infinite loader. The loveliest example of this is Google Reader, which allows you to scroll through the archives of any news source, until it runs out of data, without ever reloading a page. We need similar boundless knowledge exploration tools. As we build them, and integrate our projects in other ways, the distinction between the different “Wiki-somethings” will blur, and the expectation for quality content from our sister projects will increase.
  9. Simple interactive content. There’s not really much that is stopping us from integrating the countless open source Java- and Flash-based learning applets that are out there into Wikipedia, except for free-as-in-freedom and security issues. At least Java should be “open enough” soon, and Flash might get a decent open source implementation. As for security review, I believe that open source, combined with a simple trust model and a healthy dose of “assume good faith” will be sufficient.
  10. Machinima. A type of video, machinima are 3D films that are relatively easy to create, typically made using the movie-recording capabilities of computer games. Their quality is driven by the multimedia capabilities of PCs and game consoles, and by the games written for them. Games are a multi-billion-dollar industry that may eventually eclipse even moviemaking, so continued innovation is inevitable. Machinima can be used to re-enact any sequence of events using cutting-edge 3D graphics. A military simulation with good machinima capabilities may very well lead to the first massive use of this technology to enrich Wikipedia articles about historical battles with amateur re-creations thereof.
  11. Interactive 3D content. Second Life is trying to become the “3D web” by making much of its technology available under open source conditions. Perhaps it will succeed, perhaps not. I expect that real mass adoption of 3D technology in an everyday context will only occur together with stereoscopic displays. “Virtual Reality” has become one of those technologies that, like video conferencing, has been predicted so frequently and imagined in so much detail without significant mass use beyond gaming that many people have stopped believing in it — but eventually, 3D navigation may become the standard method by which most of us access content of any type. As is so often the case, this change is gradual, and the new 3D capabilities of both the Linux and the Windows desktop are first humble steps in this direction.

    Most imagined 3D user interfaces have focused on simple metaphors such as “avatars”, buildings, “flying”, and so on. I expect that 3D interfaces will draw from these metaphors, but they will be governed by user needs for efficient ways to locate content, places, and things. (At least within the open source culture, technology tends to be driven by user needs, not by a top down hype machine.) Sometimes those tools will be visual, sometimes verbal, sometimes social. So I’m not convinced that we will access all Wikipedia content through intelligent avatars who answer questions using speech recognition and artificial intelligence. :-)

    In the end, the narratives of these 3D worlds may end up being more dream-like than reality-like in their chaotic structure and convergence of sensory stimuli. But I do believe that users will want to participate in interactive, social learning environments (bringing the experience of a well-designed museum exhibit to the Internet as a whole), and that these will blend with purely textual explanations.

  12. Intelligent learning systems. We know that people learn with different efficacy under different conditions, but unfortunately, things aren’t very simple beyond that — no single model of learning styles has found strong empirical support. I believe that computer-facilitated learning can theoretically adapt as well to the complexity of a human neural network as human-mediated learning, if not more so. An ILS would likely rely on a vast database of information for any single learner, a database that would have to keep track of much of their activities online. (This is not necessarily a privacy issue if the database is stored locally and encrypted.) Moreover, it would have to tap into participatory activities and teacher assessments. Therefore, I expect that advanced systems of this nature are still quite a remote possibility. But if they can be built, I think they will radically alter the way we learn, and impose new requirements on the content of any learning resource.
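Returning to item 6’s idea of generating SVG images from text parameters: a minimal sketch of rendering a caption into a fixed “hot spot area” of an image. The template, dimensions, and the crude character-width fitting rule are all invented for illustration — real text fitting would need font metrics.

```python
# Sketch of parameterized SVG generation with a caption "hot spot".
# All dimensions and the fitting rule are made up for illustration.

def labelled_diagram(label, hot_w=180, char_w=9):
    """Render a base shape plus a caption constrained to hot_w pixels."""
    max_chars = hot_w // char_w
    if len(label) > max_chars:
        # Crude fitting rule: truncate with an ellipsis if the label
        # would overflow the hot spot (real code would measure glyphs).
        label = label[:max_chars - 1] + "\u2026"
    return (
        '<svg xmlns="http://www.w3.org/2000/svg" width="200" height="120">'
        '<rect x="10" y="10" width="180" height="70" fill="#cde"/>'
        f'<text x="10" y="105" font-size="14">{label}</text>'
        "</svg>"
    )

svg = labelled_diagram("Sicilian Defence, Najdorf Variation")
print(svg.count("<text"))  # → 1
```

The wiki-facing version of this would take the label (and perhaps colors or coordinates) straight from template parameters, so editors never touch the SVG itself.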

These are just some developments that are (somewhat) predictable with our current technological horizon. We have no idea how knowledge might be transformed by new communication tools, nanotech, artificial intelligence, neural interfaces, or anything else we may dream up. But even within the limits of today’s tech, the notion that Wikipedia is “finished” in any meaningful way is very alien to me.