Bringing Wikipedia articles up to a quality standard we can be proud of will require more than just “stable versions” (frozen revisions that community members claim to be of a given quality standard). Take the article on Mitt Romney, one of the many people hoping to become the next president of the United States. The article describes Romney’s record as governor of Massachusetts with the following words:
Romney was sworn in as the 70th governor of Massachusetts on January 2, 2003, along with Lieutenant Governor Kerry Healey. Within one year of taking office, Romney eliminated a 3 billion dollar budget deficit. During this time he did not raise taxes or debt. He also proceeded to end his term with a 1 billion dollar surplus as well as lower taxes and a lower unemployment rate.
All this information is properly referenced and sourced to … Romney for President, Inc. Of course, the article will eventually become more sane, but this is the state it’s been in for weeks, and this is what we currently serve readers looking for information about this particular candidate. And it’s quite likely that such a revision would at least have been approved as “non-vandalized” under a stable version system.
Yet, is the answer to give up on the idea of radically open editing? The source of the problem here seems to be not so much that “anyone can edit”, but that the people who do edit are self-selected. And for many topics, self-selection leads to bias. Whether it’s Mormons writing about Mormonism, Pokemon lovers writing about Pokemon characters, or teenage Mitt Romney supporters writing about Mitt Romney, the problem shows up on thousands of topics. Sometimes different self-selected factions counter each other’s bias, but that is obviously not something one can rely on, especially when one faction wins a particular war of attrition.
Putting stronger emphasis on professional expertise will not address this problem, and indeed, one will find examples of the same self-selection bias in more expert-driven communities like Citizendium (e.g. an article on chiropractic largely written by a chiropractor). All one can hope for from self-selected experts is that their bias is more intelligently disguised. Are volunteer communities doomed to self-selection bias? Well, dealing with the problem requires first recognizing it as such. And currently recognition of the problem on Wikipedia is very limited. Indeed, suggestions of self-selection bias are usually countered with replies such as “judge the article, not the authors”, often followed by reference to the “no personal attacks” policy. Outside clear commercial interests, Wikipedians are ill-prepared to deal with their own bias.
It also seems clear that a broad recusal & disclosure policy that would extend the current “conflict of interest” guidelines would go too far. Firstly, it would simply lead to much self-selection bias being hidden from view: The editor promoting Romney’s campaign on MySpace would simply remove the reference to that MySpace page from their userpage. Secondly, biased or not, self-selected editors will often be the best-informed about a particular subject. Rather than trying to remove them from the set of editors working on a particular article, it generally seems wiser to broaden the set to include more independent voices.
I believe we need to think of this as a socio-technical problem: How do we get a large number of relatively random, but highly trusted contributors to carefully look at a particular article and scan it for bias? Clearly, NPOV dispute tags aren’t sufficient: POV fighters have an interest in removing them as soon as possible, and given the sheer number of such tags, they no longer serve as sufficient motivation for the average editor. Furthermore, the articles which people choose to “fix” are again highly self-selected.
As just one possible alternative, imagine that some trusted (elected?) group of users could flag articles for “bias review”. They would set a number of reviewers, anywhere from 10 to 100, to be randomly selected from the pool of active editors. Those people would get a note: “The article XY has been flagged for bias review. You have been randomly selected as a reviewer. Do you accept?” If a user does not accept, the review notice would automatically be propagated to another random user. In combination with stable quality versions, this could help get many independent voices to look for obvious signs of bias. One might also consider encouraging the development of article forks by separate workgroups, and letting readers decide (by discussion or vote) which one is the least biased.
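To make the mechanics concrete, here is a minimal sketch of the selection logic described above: draw editors at random from the active pool, and whenever an invited editor declines, propagate the invitation to another randomly chosen editor until the target number of reviewers is reached. All names here (`select_reviewers`, the `accepts` callback) are hypothetical illustrations, not any existing MediaWiki feature.

```python
import random

def select_reviewers(active_editors, num_reviewers, accepts):
    """Randomly draw reviewers for a flagged article.

    `active_editors` is the pool of eligible editors;
    `accepts` is a callback that says whether a given editor
    agrees to review. Declined invitations simply pass to the
    next randomly chosen editor, as the proposal suggests.
    """
    pool = list(active_editors)
    random.shuffle(pool)  # random order = random selection without repeats
    reviewers = []
    for editor in pool:
        if len(reviewers) == num_reviewers:
            break
        if accepts(editor):
            reviewers.append(editor)
    return reviewers

# Example: pick 10 reviewers from 1000 active editors,
# where each invited editor accepts about half the time.
editors = [f"Editor{i}" for i in range(1000)]
chosen = select_reviewers(editors, 10, accepts=lambda e: random.random() < 0.5)
```

The key property is that reviewers are chosen by the system rather than self-selected, which is exactly what the NPOV-tag approach fails to guarantee.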
Do you have other ideas? Whatever the solution, I do believe that we need to start thinking seriously about the problem if we want Wikipedia to be useful in any area of “contested knowledge”. And we need to start experimenting, rather than waiting endlessly for a consensus that will never come. Right now, thousands of contested articles are dominated by factions fighting POV wars of attrition. That cannot be the final answer.