Lost in the Crowd: Web 3.0 Part 1

Rochelle Terman
October 14, 2011
[Image: a word cloud of data-related terms, including "data" and "verify".]
This week, my entire geeked-out household is restless with excitement because the new iPhone 4S is coming out, which really should be called iPhone 5 because it’s just, well, crazy awesome cool.
 
Something that is (a) crazy awesome cool and (b) newish usually attracts a generational suffix in contemporary American discourse – think American Dream 2.0, Parenting 2.0, and Obama 2.0 (could he be on the horizon?).
 
Besides the iPhone, another major release – web 3.0 – has become the topic of some buzz around the Berkeley campus after Dan Whaley, a veteran internet entrepreneur, gave a talk at the Bear’s Lair last week about his new project, designed to increase the quality of information on the Internet. Web 3.0 doesn’t actually refer to any new technology, but rather to a new modality governing the relationship between our on- and offline lives.
 
To understand web 3.0, it’s important to understand its predecessor, web 2.0, which itself abandoned the static, unidirectional model of web 1.0 in favor of a more dynamic and democratic online experience. According to UC Berkeley Professor Peter Sahlins (History):
 
Web 2.0 is founded on the notion of social networking and online collaboration, but also on passing the monopoly of content creation to all possible users… With web 2.0, everyone is a photographer, a filmmaker, a writer, a poet—and all products are exchanged and shared. Knowledge and art are in this modality democratic by nature and must remain that way: to use the old liberal metaphor, in the marketplace of ideas, truth will triumph (and Wikipedia can be trusted).
 
The trend of leveraging mass collaboration depends essentially on the idea that an open call to an undefined group of people gathers those who are inherently most fit to perform the task at hand. Coupled with the anonymity associated with large crowds, the design attempts to solve complex problems with ‘decentralized’ and ‘democratic’ filtering processes that supposedly surface the most relevant and fresh ideas. Wikipedia, here, is not just a random example but perhaps the archetype of web 2.0, with its emphasis on crowd-sourcing, anonymity, and equality among users.
 
But as any prof knows, Wikipedia has its own problems, particularly the fact that it keeps coming up as an authoritative source in undergraduate papers on everything from Durkheim to diplomats. Various commentators from Andrew Keen to Nicholas Carr have criticized web 2.0 for creating a ‘cult of digital narcissism and amateurism’ that excludes experts from their legitimate role as the creators of knowledge in favor of a misguided populism disguised as democracy.
 
On the other hand, as much as we’d like to think all users are created equal in a web 2.0 world, they’re really not. Much like ‘marketplace’ metaphors raise the question of whether participants are truly equal and interchangeable, the discussion of web 2.0 ignores the user inequality caused by digital divides, particular internet cultures, and ingrained epistemological norms. The fact that Wikipedia editors are overwhelmingly male, coupled with the acknowledgment that its policies on ‘reliable sources’ fundamentally rest on traditional scholarly material, should give us pause whenever we encounter claims about how ‘revolutionary’ Wikipedia and crowd-sourcing are. At the end of the day, it still helps to have a Ph.D., even on the web.
 
Nor should we be foolish enough to believe that crowd-sourcing automatically benefits all equally. As Andrea Grove points out, crowd-sourcing originally had an economic impetus: corporations used crowd-sourcing techniques as a way to save money. It was only when Silicon Valley start-ups, with an eye towards open-source values in opposition to Microsoft and Apple, integrated the crowd-sourcing model for their own geeky gains that we began to think of crowd-sourcing as a form of nerd-led subversion.
 
Perhaps the real problem with crowd-sourcing is not that everyone is equal on the web – which is decidedly untrue – but rather the confusion, once we’re dealing with avatars and interfaces, between who is the most knowledgeable versus who is loudest. Can the real expert please stand up?
 
Web 3.0, according to Professor Sahlins, attempts to solve this problem by reintroducing the boundary-keeping functions of specialists and experts, while maintaining the values of ‘openness’ and accessibility that animated previous generations of internet technology. As one commentator put it, “The age of the expert is coming.”
 
Which brings me back to Dan Whaley, who has proposed his own solution to the problems of web 2.0: Hypothes.is, a non-profit, open-source browser overlay that enables crowd-sourced peer review of any website through sentence-level annotation. Unlike on Wikipedia, influence on Hypothes.is is based on a commentator’s track record, or reputation. A metric for the credibility of an article will synthesize the accumulated critique it receives, weighted by the reputation and domain expertise of those providing the critique. It is worth noting that many of the project’s advisors are affiliated with the University of California.
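To make the idea concrete: the post only says that an article’s credibility score depends on the reputation of those critiquing it, so here is one hypothetical sketch of what such a metric could look like – a reputation-weighted average of annotation scores. The function name, the score scale, and the weighting scheme are all my own illustrative assumptions, not Hypothes.is’s actual algorithm.

```python
# Hypothetical sketch of a reputation-weighted credibility metric.
# Each annotation is a pair: (score in [-1, 1], annotator_reputation >= 0).
# This is one plausible reading of the post's description, not the
# real Hypothes.is implementation.

def credibility(annotations):
    """Return a reputation-weighted average of annotation scores."""
    total_weight = sum(rep for _, rep in annotations)
    if total_weight == 0:
        return 0.0  # no (weighted) critique yet: neutral credibility
    return sum(score * rep for score, rep in annotations) / total_weight

# One high-reputation critic (score -1, reputation 10) outweighs
# two low-reputation supporters (score +1, reputation 1 each):
print(credibility([(-1.0, 10.0), (1.0, 1.0), (1.0, 1.0)]))
```

The design choice this illustrates is exactly the shift the post describes: on Wikipedia every editor’s voice counts roughly equally, whereas under a scheme like this, who is speaking matters as much as how many are speaking.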
 
A lot of questions are left unanswered in this new regime. How will reputation be measured? What will determine whether expert-led crowd-sourcing leads to innovation or simply regression toward the mean? And what is the relationship between meritocracy, credentialism, and anonymity?
 
To gain insight into these questions, it is ultimately crucial to engage in the debates surrounding anonymity on the web, for another essential feature of web 3.0 is a movement away from pure digital anonymity. For more on how this relates to crowd-sourcing, experts, and knowledge production, check out my next post on pseudonymity.
Continue to Part 2 of this post