The global buildup of Internet connectivity and growing availability of inexpensive computing and communication devices have made the World Wide Web a virtual continent that is borderless. Anyone in the world with a computer and Internet access can now explore, join, build, or abandon any Web community at any time.
This new freedom is often attributed to the “Web 2.0 era” of services and applications that let webizens easily share opinions and resources. Consequently, users can collectively contribute to a Web presence and generate massive amounts of content through virtual collaboration.
“AN ATTITUDE NOT A TECHNOLOGY”
Tim O’Reilly was among the first to evangelize the concept of Web 2.0, coining the phrase in 2004. He reflected a year later that “One of the key lessons of the Web 2.0 era is this: Users add value…. Therefore, Web 2.0 companies set inclusive defaults for aggregating user data and building value as a side-effect of ordinary use of the application” ( www.oreillynet.com/pub/a/oreilly/tim/news/2005/09/30/what-is-web-20.html).
Following O’Reilly’s definition, Web 2.0 technologies provide rich and lightweight online tools that let users contribute new data that can be aggregated to harness a community’s “collective intelligence.” However, Web 2.0 should not be equated with such technologies.
In his Internet Alchemy blog, Ian Davis asserts that “Web 2.0 is an attitude not a technology” (http://iandavis.com/blog/2005/07/talis-web-20-and-all-that). “It’s about enabling and encouraging participation through open applications and services,” he adds. “By open I mean technically open with appropriate APIs but also, more importantly, socially open, with rights granted to use the content in new and exciting contexts.”
Web 2.0 thus represents a paradigm shift in how people use the Web. While most users were once limited to passively viewing Web sites created by a small number of providers with markup and programming skills, now nearly everyone can actively contribute content online. Technologies are important tools, but they are secondary to achieving the greater goal of promoting free and open access to knowledge.
Toward that end, Web 2.0 systems should be simple, scalable, and sensible.
Not all users are technically savvy. A Web 2.0 system should provide a simple interface so that even the least sophisticated webizen can contribute input. Simplicity is important so that ordinary users, not just experts, can build and use the Web.
All webizens should have an equal opportunity to participate in Web 2.0 systems. Popular systems must employ fair and widely accepted protocols to accommodate numerous users without discrimination. Scalability is especially important on the Web given its global reach.
A Web 2.0 system should be able to digest all legible input, regardless of the source, and produce sensible conclusions. This could be as simple as using visitor counts to identify the most popular pages or materials, or as sophisticated as doing trend analysis similar to that used by program trading in stock markets.
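As a minimal illustration of the simple end of that spectrum, the sketch below ranks pages by raw visit count. The access log and page names are invented for the example; a real site would read such data from its server logs.

```python
from collections import Counter

# Hypothetical access log: each entry is the page a visitor requested.
access_log = [
    "/home", "/videos/cat", "/home", "/blog/web20",
    "/videos/cat", "/videos/cat", "/blog/web20", "/home",
]

def most_popular(log, n=3):
    """Rank pages by raw visit count -- the simplest 'sensible conclusion'."""
    return Counter(log).most_common(n)

print(most_popular(access_log))
```

More sophisticated trend analysis would replace the counter with time-windowed statistics, but the pipeline shape, raw input in, ranked conclusion out, stays the same.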
FACILITATING USER PARTICIPATION
There is no one set of technologies that every Web 2.0 system uses. Any Web-based software that lets users create and update content is arguably a Web 2.0 technology. However, several families of technologies that encourage user participation and social networking are associated with the Web 2.0 era.
Other new technologies make it easy for Web services to connect to multiple data and information sources. XML-RPC, Representational State Transfer (REST), RSS, Atom, mashups, and similar technologies facilitate the subscription, propagation, reuse, and intermixing of Web content.
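To make the subscription side of this concrete, here is a minimal sketch of how a feed reader might extract item titles from an RSS 2.0 document using only the Python standard library. The feed content is invented for illustration; a real client would fetch the XML over HTTP from a publisher.

```python
import xml.etree.ElementTree as ET

# A tiny invented RSS 2.0 feed; a real reader would fetch this over HTTP.
FEED = """\
<rss version="2.0">
  <channel>
    <title>Example Blog</title>
    <item><title>Web 2.0 is an attitude</title><link>http://example.org/1</link></item>
    <item><title>Mashups in practice</title><link>http://example.org/2</link></item>
  </channel>
</rss>"""

def item_titles(feed_xml):
    """Return the title of every <item> in an RSS 2.0 channel."""
    root = ET.fromstring(feed_xml)
    return [item.findtext("title") for item in root.iter("item")]

print(item_titles(FEED))
```

Because RSS and Atom fix the vocabulary of elements, the same few lines work against any conforming publisher, which is precisely what makes content propagation and reuse cheap.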
Perhaps the most important resource for Web 2.0 is the user. Providing friendly tools for user participation in content creation, consumption, and distribution has been the key to success (and failure) for many startups in the Web 2.0 era. Technologies such as blogs, wikis, podcasts, and vodcasts foster the growth of new Web communities.
Technologies are also in place to make Web sites more scalable. For example, Google and Yahoo! process most requests in less than a second, and connections to popular user-based Web sites such as YouTube and Flickr are nearly effortless.
Compared to technologies that make Web sites simpler to use and more scalable, those designed to produce and manage collective intelligence are relatively immature. Implementing scalability can indeed be challenging, but sensibility can be achieved at widely varying levels of sophistication.
Hit counters roughly indicate Web sites’ relative popularity, while the volume of user comments provides a measure of user participation. However, these and other such simple metrics do not necessarily communicate the value of online content.
Some Web sites drill further down by asking users to indicate whether certain information is helpful or even to rate it on, say, a scale of 1 to 10 and then indexing the results. Nevertheless, the widespread reluctance of many people to provide feedback severely limits the effectiveness of such mechanisms.
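One common remedy for sparse feedback, not described in the article but widely used, is a damped (Bayesian-style) mean that pulls items with few votes toward a site-wide prior, so that a handful of extreme ratings cannot dominate. The prior and weight values below are illustrative knobs, not figures from the text.

```python
def damped_rating(ratings, prior=5.0, weight=10):
    """Damped mean on a 1-10 scale: items with few votes stay near the prior.

    `prior` and `weight` are illustrative assumptions, not values from
    the article; sites tune them against their own feedback volumes.
    """
    return (prior * weight + sum(ratings)) / (weight + len(ratings))

# Two perfect 10s barely move the score; fifty 10s dominate it.
print(round(damped_rating([10, 10]), 2))
print(round(damped_rating([10] * 50), 2))
```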
Most current Web 2.0 sites were originally designed to be either user data repositories (such as YouTube and Flickr) or social networks (like MySpace and Xanga). They thus lack structured intelligence and present popular results in an ad hoc manner. Finding meaningful information can be almost impossible; most of the time, bumping into something interesting is pure luck.
To address this problem, some Web sites feature recommender systems that employ filtering technologies to point users to objects of interest. Collaborative filtering provides personalized recommendations based on individual user preferences as well as those of other users with similar interests, while content-based filtering analyzes and rates the content of information sources to create profiles of users’ interests.
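A minimal sketch of user-based collaborative filtering follows: measure similarity between users over the items they have both rated, then recommend unseen items liked by the most similar peer. The users, items, and ratings are invented; production systems use far larger matrices and more robust similarity measures.

```python
from math import sqrt

# Invented ratings matrix: user -> {item: rating on a 1-5 scale}.
ratings = {
    "ann": {"video1": 5, "video2": 3, "video3": 4},
    "bob": {"video1": 4, "video2": 2, "video3": 5},
    "cat": {"video1": 1, "video2": 5},
}

def cosine(u, v):
    """Cosine similarity over the items two users have both rated."""
    common = set(u) & set(v)
    if not common:
        return 0.0
    dot = sum(u[i] * v[i] for i in common)
    return dot / (sqrt(sum(u[i] ** 2 for i in common)) *
                  sqrt(sum(v[i] ** 2 for i in common)))

def recommend(user, k=1):
    """Suggest unseen items rated highly by the most similar other user."""
    me = ratings[user]
    peers = sorted((o for o in ratings if o != user),
                   key=lambda o: cosine(me, ratings[o]), reverse=True)
    best_peer = ratings[peers[0]]
    unseen = [(item, r) for item, r in best_peer.items() if item not in me]
    return sorted(unseen, key=lambda x: -x[1])[:k]

print(recommend("cat"))
```

Content-based filtering would replace the user-to-user similarity with a comparison between item features and a profile built from the items a user already likes.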
Most Web 2.0 sites include search engines to help users locate content others have created. These systems retrieve information by inspecting keyword metatags embedded by the author. However, such tags might be created randomly and not correlate with the actual content.
Newer versions of search engines use a combination of data content (term frequency and density), data context (file name and domain name), and the number of incoming links (PageRank data). Web 2.0 site developers must continue developing better techniques to provide more effective search capability, especially for multimedia content.
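The link-based signal mentioned above can be sketched with PageRank’s power iteration: each page repeatedly distributes its rank across its outgoing links, damped toward a uniform baseline. The four-page link graph is invented for illustration, and dangling pages (with no outlinks) would need extra handling this sketch omits.

```python
# Hypothetical 4-page link graph: page -> pages it links to.
links = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}

def pagerank(graph, damping=0.85, iterations=50):
    """Power-iteration PageRank; assumes every page has at least one outlink."""
    n = len(graph)
    rank = {page: 1.0 / n for page in graph}
    for _ in range(iterations):
        new = {page: (1 - damping) / n for page in graph}
        for page, outlinks in graph.items():
            share = rank[page] / len(outlinks)
            for target in outlinks:
                new[target] += damping * share
        rank = new
    return rank

ranks = pagerank(links)
print(max(ranks, key=ranks.get))  # the page with the most incoming links wins
```

Real engines blend such link scores with the term-frequency and context signals noted above; the weighting among them is where much of the engineering effort goes.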
Mashups are a simple and powerful Web 2.0 content creation/reuse technology that lets users integrate information from multiple sources to provide an enriched experience. For example, it’s possible to build a Web site that shows application-specific data next to photos selected from Flickr at run time or atop locations displayed on a Google map. The content origins of newly created pages can be explicitly acknowledged or embedded in the production process.
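At its core, a mashup is a run-time join across services on a shared key, here a city name linking an application’s own records to a photo service’s results. Both data sources below are invented stand-ins for real service responses such as Flickr search output.

```python
# Invented outputs from two services: the application's own event list
# and a photo service's results, joined on a shared location key.
events = [
    {"city": "Taipei", "event": "Street fair"},
    {"city": "Irvine", "event": "Tech meetup"},
]
photos = {"Taipei": "http://photos.example/taipei.jpg"}

def mashup(events, photos):
    """Enrich each event with a photo when the photo service has one."""
    return [dict(e, photo=photos.get(e["city"])) for e in events]

for item in mashup(events, photos):
    print(item)
```

Note that the second event gets no photo: a mashup is only as complete as its weakest source, which previews the accountability issue discussed next.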
A mashup’s quality of service depends on that of its component services—low-quality output from one service can degrade the quality of its successors. Thus, when a mashup contains many service providers, determining individual services’ accountability is necessary to properly attribute credit, identify the root of a problem, or improve the complete process (Y. Zhang, K-J. Lin, and Jane Y.J. Hsu, “Accountability Monitoring and Reasoning in Service-Oriented Architectures,” Service-Oriented Computing and Applications, vol. 1, no. 1, 2007, pp. 35-50).
Web 2.0 has the democratic goal of allowing—in fact, encouraging—all webizens to create, share, distribute, and enjoy ideas and information. To reach this goal, Web-based systems must be simple to use, highly scalable, and rich in sensible content. Among these qualities, sensibility is the hardest to master and will experience the most technological breakthroughs.
Only when this goal is accomplished will it be possible to identify the common set of Web 2.0 capabilities requiring support in all “webfront” devices, much as PC desktops now offer standard Web connection and browser features.