Web 2.0

Harnessing the Collective Intelligence

Abstract

Web 2.0 is the business revolution in the computer industry caused by the move to the internet as platform, and an attempt to understand the rules for success on that new platform. Chief among those rules is this: Build applications that harness network effects to get better the more people use them. More precisely, it's all about "harnessing collective intelligence".

The basic characteristics of Web 2.0 include:

  1. Don't treat software as an artifact, but as a process of engagement with your users. ("The perpetual beta")
  2. Open your data and services for re-use by others, and re-use the data and services of others whenever possible. ("Small pieces loosely joined")
  3. Don't think of applications that reside on either client or server, but build applications that reside in the space between devices. ("Software above the level of a single device")
  4. Remember that in a network environment, open APIs and standard protocols win, but this doesn't mean that the idea of competitive advantage goes away. ("The law of conservation of attractive profits")
  5. Chief among the future sources of lock-in and competitive advantage will be data, whether through increasing returns from user-generated data, through owning a namespace, or through proprietary file formats. ("Data is the Intel Inside")

Meanwhile, with the world already said to be moving towards Web 3.0, some industry experts refuse to accept the term Web 2.0 as a new-age revolution at all.

Web 2.0: The Beginning

The bursting of the dot-com bubble in the fall of 2001 marked a turning point for the web. The concept of "Web 2.0" began with a conference brainstorming session between Tim O'Reilly and MediaLive International. Dale Dougherty, web pioneer and O'Reilly VP, noted that far from having "crashed", the web was more important than ever, with exciting new applications and sites popping up with surprising regularity. What's more, the companies that had survived the collapse seemed to have some things in common. Everyone agreed that the dot-com collapse marked some kind of turning point for the web, and so the Web 2.0 Conference was born. In this initial brainstorming, they formulated their sense of Web 2.0 by example as shown.

[Figure: example sites and services of Web 1.0 alongside their Web 2.0 counterparts]

1. The Web as a Platform


Like many important concepts, Web 2.0 doesn't have a hard boundary, but rather, a gravitational core. We can visualize Web 2.0 (see Figure below) as a set of principles and practices that tie together a veritable solar system of sites that demonstrate some or all of those principles, at a varying distance from that core.

[Figure: Web 2.0 meme map]

At the first Web 2.0 conference, in October 2004, John Battelle and Tim O'Reilly listed a preliminary set of principles in their opening talk. The first of those principles was "The web as a platform." Yet that was also a rallying cry of Web 1.0 champion Netscape. What's more, two of the initial Web 1.0 exemplars, DoubleClick and Akamai, were both pioneers in treating the web as a platform. People don't often think of them as "web services", but in fact, ad serving was the first widely deployed web service, and the first widely deployed "mashup" (to use another term that has gained currency of late). Akamai also treats the network as the platform, and at a deeper level of the stack, building a transparent caching and content delivery network that eases bandwidth congestion. Nonetheless, these pioneers provide useful contrasts because later entrants have taken their solutions to the same problems even further, understanding something deeper about the nature of the new platform. Let's consider some essential differences between Web 1.0 and Web 2.0 in three well-known cases.

Netscape vs. Google

If Netscape was the standard bearer for Web 1.0, Google is most certainly the standard bearer for Web 2.0, if only because their respective IPOs were defining events for each era. Netscape framed "the web as platform" in terms of the old software paradigm: their flagship product was the web browser, a desktop application. Google, by contrast, began its life as a native web application, never sold or packaged, but delivered as a service, with customers paying, directly or indirectly, for the use of that service. None of the trappings of the old software industry are present. No scheduled software releases, just continuous improvement. No licensing or sale, just usage. No porting to different platforms so that customers can run the software on their own equipment, just a massively scalable collection of commodity PCs running open source operating systems plus home-grown applications and utilities that no one outside the company ever gets to see.

The value of the software is proportional to the scale and dynamism of the data it helps to manage.

DoubleClick vs. Overture and AdSense

DoubleClick harnesses software as a service, has a core competency in data management, and was a pioneer in web services long before web services even had a name. However, DoubleClick was ultimately limited by its business model. It bought into the '90s notion that the web was about publishing, not participation; that advertisers, not consumers, ought to call the shots; that size mattered, and that the internet was increasingly being dominated by the top websites as measured by MediaMetrix and other web ad scoring companies. The success of Overture (later renamed Yahoo! Search Marketing) and Google's AdSense came from an understanding of what Chris Anderson refers to as "the long tail," the collective power of the small sites that make up the bulk of the web's content. DoubleClick's offerings require a formal sales contract, limiting their market to the few thousand largest websites. Overture and Google figured out how to enable ad placement on virtually any web page. What's more, they eschewed publisher/ad-agency friendly advertising formats such as banner ads and pop-ups in favor of minimally intrusive, context-sensitive, consumer-friendly text advertising.

Leverage customer self-service and algorithmic data management to reach out to the entire web, to the edges and not just the center, to the long tail and not just the head.

Akamai vs. BitTorrent

Like DoubleClick, Akamai is optimized to do business with the head, not the tail, with the center, not the edges. While it serves the benefit of the individuals at the edge of the web by smoothing their access to the high-demand sites at the center, it collects its revenue from those central sites. BitTorrent, like other pioneers in the P2P movement, takes a radical approach to internet decentralization. Every client is also a server; files are broken up into fragments that can be served from multiple locations, transparently harnessing the network of downloaders to provide both bandwidth and data to other users. BitTorrent thus demonstrates a key Web 2.0 principle: the service automatically gets better the more people use it. While Akamai must add servers to improve service, every BitTorrent consumer brings his own resources to the party. There's an implicit "architecture of participation", a built-in ethic of cooperation, in which the service acts primarily as an intelligent broker, connecting the edges to each other and harnessing the power of the users themselves.
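To make the fragment-serving mechanics concrete, here is a minimal TypeScript sketch (for Node) of the piece-hashing scheme that lets a BitTorrent-style client accept fragments from any peer: the file is split into fixed-size pieces and each piece is hashed, so a downloader can verify every fragment independently of who served it. The file name is hypothetical, and a real client would stream pieces rather than read the whole file into memory.

```typescript
// piece_hashes.ts -- split a file into fixed-size pieces and hash each one,
// so a downloader can verify a fragment no matter which peer served it.
import { createHash } from "node:crypto";
import { readFileSync } from "node:fs";

const PIECE_LENGTH = 256 * 1024; // 256 KiB, a common BitTorrent piece size

function pieceHashes(path: string): string[] {
  const data = readFileSync(path);
  const hashes: string[] = [];
  for (let offset = 0; offset < data.length; offset += PIECE_LENGTH) {
    const piece = data.subarray(offset, offset + PIECE_LENGTH);
    // BitTorrent v1 uses SHA-1 over each piece; the hashes travel in the
    // .torrent metadata, while the pieces travel from whichever peers have them.
    hashes.push(createHash("sha1").update(piece).digest("hex"));
  }
  return hashes;
}

// A downloader re-hashes each received piece and keeps it only if it matches.
console.log(pieceHashes("some-file.iso").slice(0, 3)); // hypothetical file
```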

2. Harnessing Collective Intelligence: Examples


The central principle behind the success of the giants born in the Web 1.0 era who have survived to lead the Web 2.0 era is that they have embraced the power of the web to harness collective intelligence:

  • Hyperlinking is the foundation of the web. As users add new content, and new sites, it is bound into the structure of the web by other users discovering the content and linking to it.
  • Yahoo!, the first great internet success story, was born as a catalog, or directory of links, an aggregation of the best work of thousands, then millions of web users. While Yahoo! has since moved into the business of creating many types of content, its role as a portal to the collective work of the net's users remains the core of its value.
  • Google's breakthrough in search, which quickly made it the undisputed search market leader, was PageRank, a method of using the link structure of the web rather than just the characteristics of documents to provide better search results. (A toy version of the idea is sketched after this list.)
  • eBay's product is the collective activity of all its users; like the web itself, eBay grows organically in response to user activity, and the company's role is as an enabler of a context in which that user activity can happen.
  • Amazon sells the same products as competitors. But Amazon has made a science of user engagement. They have an order of magnitude more user reviews, invitations to participate in varied ways on virtually every page--and even more importantly, they use user activity to produce better search results. Amazon insiders call this the "flow" around products.
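As a flavor of how link structure alone can rank pages, here is a toy TypeScript sketch of PageRank-style power iteration over a hypothetical three-page graph. It is a simplified illustration (no handling of dangling pages, no convergence test), not Google's production algorithm.

```typescript
// pagerank.ts -- a toy power-iteration PageRank over a tiny link graph,
// showing how link structure (not document content) yields a ranking.
type Graph = Record<string, string[]>; // page -> pages it links out to

function pageRank(graph: Graph, damping = 0.85, iterations = 50): Record<string, number> {
  const pages = Object.keys(graph);
  const n = pages.length;
  const rank: Record<string, number> = {};
  for (const p of pages) rank[p] = 1 / n; // start with rank spread evenly

  for (let i = 0; i < iterations; i++) {
    const next: Record<string, number> = {};
    for (const p of pages) next[p] = (1 - damping) / n; // "random surfer" floor
    for (const p of pages) {
      // A link is a vote: each page splits its current rank among its out-links.
      for (const target of graph[p]) next[target] += damping * rank[p] / graph[p].length;
    }
    for (const p of pages) rank[p] = next[p];
  }
  return rank;
}

// "home" is linked to by both other pages, so it should come out on top.
console.log(pageRank({ home: ["about"], about: ["home"], blog: ["home"] }));
```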

Now, innovative companies that pick up on this insight and perhaps extend it even further are making their mark on the web:

  • Wikipedia, an online encyclopedia based on the unlikely notion that an entry can be added by any web user, and edited by any other, is a radical experiment in trust.
  • Sites like del.icio.us and Flickr have pioneered a concept that some people call "folksonomy" (in contrast to taxonomy), a style of collaborative categorization of sites using freely chosen keywords, often referred to as tags. Tagging allows for the kind of multiple, overlapping associations that the brain itself uses, rather than rigid categories. In the canonical example, a Flickr photo of a puppy might be tagged both "puppy" and "cute"--allowing for retrieval along natural axes generated by user activity. (A minimal tag index is sketched after this list.)
  • Collaborative spam filtering products like Cloudmark aggregate the individual decisions of email users about what is and is not spam, outperforming conventional systems that rely on analysis of the messages themselves.
  • It is a truism that the greatest internet success stories don't advertise their products. Their adoption is driven by "viral marketing"--that is, recommendations propagating directly from one user to another.
  • Much of the infrastructure of the web relies on the peer-production methods of open source, in themselves an instance of collective, net-enabled intelligence. There are more than 100,000 open source software projects listed on SourceForge.net. Anyone can add a project, anyone can download and use the code, and new projects migrate from the edges to the center as a result of users putting them to work, an organic software adoption process relying almost entirely on viral marketing.
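At its core, a folksonomy needs nothing more exotic than an inverted index from freely chosen tags to items. The TypeScript sketch below (with hypothetical photo ids) shows how one item can sit under several overlapping tags and be retrieved along any of those axes.

```typescript
// tags.ts -- a folksonomy is little more than an inverted index from
// freely chosen tags to the items users applied them to.
const tagIndex = new Map<string, Set<string>>();

function tag(item: string, ...tags: string[]): void {
  for (const t of tags) {
    if (!tagIndex.has(t)) tagIndex.set(t, new Set());
    tagIndex.get(t)!.add(item); // one item can live under many overlapping tags
  }
}

function itemsTagged(t: string): string[] {
  return [...(tagIndex.get(t) ?? [])];
}

// The canonical Flickr example: the same photo retrievable along either axis.
tag("photo-123", "puppy", "cute");
tag("photo-456", "cute", "kitten");
console.log(itemsTagged("cute"));  // ["photo-123", "photo-456"]
console.log(itemsTagged("puppy")); // ["photo-123"]
```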

Network effects from user contributions are the key to market dominance in the Web 2.0 era.

Blogging and the Wisdom of Crowds

One of the most highly touted features of the Web 2.0 era is the rise of blogging. At its most basic, a blog is just a personal home page in diary format. As Rich Skrenta notes, the chronological organization of a blog "seems like a trivial difference, but it drives an entirely different delivery, advertising and value chain."

One of the things that have made a difference is a technology called RSS. RSS allows someone to link not just to a page, but to subscribe to it, with notification every time that page changes. Skrenta calls this "the incremental web." Others call it the "live web". What's dynamic about the live web is not just the pages, but the links. A link to a weblog is expected to point to a perennially changing page, with "permalinks" for any individual entry, and notification for each change. An RSS feed is thus a much stronger link than, say, a bookmark or a link to a single page.
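A rough TypeScript sketch of what "subscribing" means in practice: poll a feed URL on a schedule and surface entries not seen before. The feed URL is hypothetical, and the XML handling is deliberately crude; a real aggregator would use a proper feed parser.

```typescript
// rss_poll.ts -- subscribing to a page, in miniature: poll an RSS feed and
// report entries that have appeared since the last check.
const FEED_URL = "https://example.com/blog/rss.xml"; // hypothetical feed
const seen = new Set<string>();

async function checkFeed(): Promise<void> {
  const xml = await (await fetch(FEED_URL)).text();
  // Crude extraction of each <item>'s <title>; enough to show the idea.
  for (const match of xml.matchAll(/<item>[\s\S]*?<title>(.*?)<\/title>/g)) {
    const title = match[1];
    if (!seen.has(title)) {
      seen.add(title);
      console.log("new entry:", title); // the "notification" half of subscribing
    }
  }
}

// The subscription: not one fetch of a page, but a standing poll of its feed.
setInterval(checkFeed, 15 * 60 * 1000); // every 15 minutes
checkFeed();
```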

For the first time it became relatively easy to gesture directly at a highly specific post on someone else's site and talk about it. Discussion emerged. Chat emerged. And - as a result - friendships emerged or became more entrenched. The permalink was the first - and most successful - attempt to build bridges between weblogs.

3. Data is the Next Intel Inside

Every significant internet application to date has been backed by a specialized database. In the internet era, we can already see a number of cases where control over the database has led to market control and outsized financial returns: Verisign's domain name registry, for example, or the map data supplied by NavTeq and Digital Globe. Data is indeed the Intel Inside of these applications, a sole source component in systems whose software infrastructure is largely open source or otherwise commodified. We expect to see battles between data suppliers and application vendors in the next few years, as both realize just how important certain classes of data will become as building blocks for Web 2.0 applications.

For example, in the area of identity, PayPal, Amazon's 1-click, and the millions of users of communications systems may all be legitimate contenders to build a network-wide identity database. (In this regard, Google's recent attempt to use cell phone numbers as an identifier for Gmail accounts may be a step towards embracing and extending the phone system.) Meanwhile, startups like Sxip are exploring the potential of federated identity, in quest of a kind of "distributed 1-click" that will provide a seamless Web 2.0 identity subsystem. In the area of calendaring, EVDB is an attempt to build the world's largest shared calendar via a wiki-style architecture of participation.

4. End of the Software Release Cycle

One of the defining characteristics of internet era software is that it is delivered as a service, not as a product.

  1. Operations must become a core competency. So fundamental is the shift from software as artifact to software as service that the software will cease to perform unless it is maintained on a daily basis.
  2. Users must be treated as co-developers, in a reflection of open source development practices (even if the software in question is unlikely to be released under an open source license.) The open source dictum, "release early and release often" in fact has morphed into an even more radical position, "the perpetual beta," in which the product is developed in the open, with new features slipstreamed in on a monthly, weekly, or even daily basis. It's no accident that services such as Gmail, Google Maps, Flickr, del.icio.us, and the like may be expected to bear a "Beta" logo for years at a time.

5. Lightweight Programming Models

Lightweight business models are a natural concomitant of lightweight programming and lightweight connections. The Web 2.0 mindset is good at re-use. This is called "innovation in assembly." When commodity components are abundant, we can create value simply by assembling them in novel or effective ways.

  1. Support lightweight programming models that allow for loosely coupled systems. The complexity of the corporate-sponsored web services stack is designed to enable tight coupling. While this is necessary in many cases, many of the most interesting applications can indeed remain loosely coupled and even fragile, making the Web 2.0 mindset different from the traditional IT mindset!
  2. Think syndication, not coordination. Simple web services, like RSS (Really Simple Syndication) and REST-based (Representational State Transfer) web services, are about syndicating data outwards, not controlling what happens when it gets to the other end of the connection. This idea is fundamental to the internet itself, a reflection of what is known as the end-to-end principle. (A minimal syndication endpoint is sketched after this list.)
  3. Design for "hackability" and remixability. Systems like the original web, RSS, and AJAX all have this in common: the barriers to re-use are extremely low. Much of the useful software is actually open source, but even when it isn't, there is little in the way of intellectual property protection. The web browser's "View Source" option made it possible for any user to copy any other user's web page. The most successful web services are those that have been easiest to take in new directions unimagined by their creators. The phrase "some rights reserved," which was popularized by the Creative Commons to contrast with the more typical "all rights reserved," is a useful guidepost.
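As a minimal illustration of "syndication, not coordination", the TypeScript sketch below (for Node, with hypothetical entries) exposes data at a stable URL in a plain representation and imposes nothing on whoever consumes it.

```typescript
// syndicate.ts -- syndication, not coordination: a minimal endpoint that
// pushes structured data outward and places no constraints on the consumer.
import { createServer } from "node:http";

// Hypothetical data; a real service would read this from its database.
const entries = [
  { title: "Hello, world", link: "/posts/1", updated: "2005-10-01" },
  { title: "The perpetual beta", link: "/posts/2", updated: "2005-10-08" },
];

createServer((req, res) => {
  if (req.url === "/feed.json") {
    // REST-style syndication: a stable URL, a plain representation, no session,
    // no handshake -- whatever happens at the other end is not our concern.
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify(entries));
  } else {
    res.writeHead(404);
    res.end();
  }
}).listen(8080, () => console.log("syndicating on http://localhost:8080/feed.json"));
```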

6. Software above the Level of a Single Device

Web 2.0 is no longer limited to the PC platform. "Useful software written above the level of the single device will command high margins for a long time to come."

iTunes is the best exemplar of this principle. This application seamlessly reaches from the handheld device to a massive web back-end, with the PC acting as a local cache and control station. TiVo is another good example. They are services, not packaged applications. Real-time traffic monitoring, flash mobs, and citizen journalism are only a few of the early warning signs of the capabilities of the new platform.

7. Rich User Experiences

We're entering an unprecedented period of user interface innovation, as web developers are finally able to build web applications as rich as local PC-based applications. In the '90s, JavaScript and then DHTML were introduced as lightweight ways to provide client-side programmability and richer user experiences. Several years ago, Macromedia and Laszlo Systems coined the term "Rich Internet Applications" to highlight the capabilities of Flash to deliver not just multimedia content but also GUI-style application experiences.

However, the potential of the web to deliver full-scale applications didn't hit the mainstream till Google introduced Gmail and Google Maps, web-based applications with rich user interfaces and PC-equivalent interactivity. It's easy to see how Web 2.0 will also remake the address book, treating the local address book on the PC or phone merely as a cache of the contacts we've explicitly asked the system to remember. Web 2.0 word processors will support wiki-style collaborative editing, not just standalone documents.
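The interactivity of applications like Gmail and Google Maps rests on asynchronous in-page updates, the pattern discussed as Ajax in the next section. Here is a browser-side TypeScript sketch of that pattern, with a hypothetical endpoint and element ids:

```typescript
// search_as_you_type.ts -- the interaction pattern behind Gmail/Maps-style
// richness: fetch data asynchronously and update the page in place, with no
// full-page reload. The /suggest endpoint and element ids are hypothetical.
const input = document.querySelector<HTMLInputElement>("#query")!;
const results = document.querySelector<HTMLUListElement>("#results")!;

input.addEventListener("input", async () => {
  const q = input.value.trim();
  if (!q) { results.replaceChildren(); return; }

  // The asynchronous request: the user keeps typing while it is in flight.
  const suggestions: string[] = await (
    await fetch(`/suggest?q=${encodeURIComponent(q)}`)
  ).json();

  // The in-place update: only this list changes, not the whole document.
  results.replaceChildren(
    ...suggestions.map(s => Object.assign(document.createElement("li"), { textContent: s }))
  );
});
```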

Hierarchy of Web 2.0 Application: Levels of the Game

"Web 2.0's most important step forward seems to be the widespread adoption of Ajax." Alas, that is a common misconception. Just because something uses Ajax and is presented on the web doesn't make it a Web 2.0 application. This confusion lead to the definition of a hierarchy of "Web 2.0-ness":

Level 3: The application could ONLY exist on the net, and draws its essential power from the network and the connections it makes possible between people or applications. They harness network effects to get better the more people use them. eBay, craigslist, Wikipedia, del.icio.us, Skype, and Dodgeball meet this test. They are fundamentally driven by shared online activity. The web itself has this character, which Google and other search engines have then leveraged. In the hierarchy of Web 2.0 applications, the highest level is to embrace the network, to understand what creates network effects, and then to harness them in everything you do.

Level 2: The application can exist offline, but it is uniquely advantaged by being online. Flickr is a great example. You can have a local photo management application (like iPhoto) but the application gains remarkable power by leveraging an online community. In fact, the shared photo database, the online community, and the artifacts it creates (like the tag database) are central to what distinguishes Flickr from its offline counterparts. And its fuller embrace of the internet (for example, that the default state of uploaded photos is "public") is what distinguishes it from its online predecessors.

Level 1: The application can and does exist successfully offline, but it gains additional features by being online. Writely is a great example. If you want to do collaborative editing, its online component is terrific, but if you want to write alone, as Fallows did, it gives you little benefit (other than availability from computers other than your own.)

Level 0: The application has primarily taken hold online, but it would work just as well offline if you had all the data in a local cache. MapQuest, Yahoo! Local, and Google Maps are all in this category (but mashups like housingmaps.com are at Level 3.) To the extent that online mapping applications harness user contributions, they jump to Level 2.

Criticism: Does it Really Exist?

Given the lack of set standards as to what "Web 2.0" actually means, implies, or requires, the term can mean radically different things to different people. Some critics note that the definition has no tight, explicit structure, and question whether 'revolution' is even the right concept to introduce: Web 2.0 is not just a change of direction; it is intended to signify a broader, more sustainable way of building for the web.

Many of the ideas of Web 2.0 already featured on networked systems well before the term "Web 2.0" emerged. Amazon.com, for instance, has allowed users to write reviews and consumer guides since its launch in 1995, in a form of self-publishing. Amazon also opened its API to outside developers in 2002. Prior art also comes from research in computer-supported collaborative learning and computer-supported cooperative work and from established products like Lotus Notes and Lotus Domino.

Conversely, when a web-site proclaims itself "Web 2.0" for the use of some trivial feature (such as blogs or gradient-boxes) observers may generally consider it more an attempt at self-promotion than an actual endorsement of the ideas behind Web 2.0. "Web 2.0" in such circumstances has sometimes sunk simply to the status of a marketing buzzword, like "synergy," that can mean whatever a salesperson wants it to mean, with little connection to most of the worthy but (currently) unrelated ideas originally brought together under the "Web 2.0" banner.

The argument also exists that "Web 2.0" does not represent a new version of the World Wide Web at all, but merely continues to use "Web 1.0" technologies and concepts. Clearly, techniques such as AJAX are not a replacement for underlying protocols like HTTP but an additional layer of abstraction on top of them.

Other criticism has included the term "a second bubble" (referring to the dot-com bubble of circa 1995-2001), suggesting that too many Web 2.0 companies attempt to develop the same product while lacking viable business models. The Economist has written of "Bubble 2.0." Web 2.0 is ultimately based on trust, and trust will sometimes be broken. However, the human spirit is a wonderful thing, and the fact that we can build applications that let us cooperate in new ways gives outlet to that spirit.

Web 2.0: Different Views

There are at least three incompatible definitions floating around.

For Tim O'Reilly (the initiator of the concept), Web 2.0 is a mishmash of tools and sites that foster collaboration and participation. Flickr, YouTube, MySpace, Wikipedia, and the entire blogosphere are examples. We don't buy stuff from them--we use them to share digital assets and link up with other people. Podcasting is a Web 2.0 technology, because it's almost as easy to create a podcast as to listen to one. The more time we put into a Web 2.0 site--tagging photos, posting comments, editing wiki entries--the better it works for everyone. In a nutshell, "Web 2.0 is made of people!"

Web developers use Web 2.0 a second way, to refer to the software and languages used to build the gee-whiz features of these sites. Ajax, tag clouds, and wikis are basic components of many collaborative sites. In general, Web 2.0 tools are free, easy to master, and easy to interconnect. Google Maps + Wikipedia = Placeopedia! But the definition runs aground when Web 2.0 technologies power Gap.com, an impressive but collaboration-free shopping experience. "Ajax without participation doesn't make for Web 2.0."

A third definition gets thrown around in Silicon Valley. A "Web 2.0 play" is a bid to make money by funding a bring-your-own-content site. It's a long-shot but low-risk investment that could become the next Google. Or at least the next thing Google buys. No warehouses full of inventory, no sprawling staff, and no NASA-grade supply chain management systems. Dodgeball and Digg are good examples of popular sites started on a shoestring. Google snapped up Dodgeball last year; Digg's imminent acquisition is a foregone conclusion among valley wags. But buyout-hungry entrepreneurs now slap the 2.0 moniker willy-nilly on mobile services and browser applications that are neither built on Ajax nor made of people.

The Road Ahead: Dawn of Web 3.0

While there is still a hot debate on the existence of Web 2.0, the world is witnessing a new, forthcoming revolution: Web 3.0. Web 3.0 is a term that has been coined to describe the evolution of Web usage and interaction, including transforming the Web into a database, a move towards making content accessible to multiple non-browser applications, the leveraging of artificial intelligence technologies and the Semantic Web, and three-dimensional interaction and collaboration. The term Web 3.0 became a subject of increasing interest and debate from late 2006 into 2007.

This is the final step in the decomposition of monolithic web pages into discrete components: the trinity of Presentation (HTML and (X)HTML), Logic (Web Services APIs), and Data (Data Models). It transitions web containment from web pages to web data. Its emergence simplifies the development and deployment of data-model-driven composite applications that provide easy, transparent and organized access to "the world's data, information, and knowledge."
