Is Web 2.0 Following in the Footsteps of Its Big Brother, Chasing “Popularity”?

March 18, 2006 at 9:27 pm | Posted in Blog

Since Larry Page invented PageRank, Web 1.0, or at least Web 1.0 search, has gone down the tricky path of using “popularity” as the major criterion for ranking the information shown to netizens.

It gave birth to a whole cottage industry of smoke and mirrors respectably called Search Engine Optimization (SEO). We have no doubt that Larry Page and Sergey Brin had the noblest and most sensible intentions when they changed the world with their algorithms; however, as often happens, the road to hell is paved with good intentions.

When we type a search query into Google, the Web pages retrieved are sorted by their popularity, i.e., the number of inbound links, rather than by their relevance. One can argue that both of these terms are vague.
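To make “popularity” concrete, here is a minimal sketch in Python of ranking pages by a crude PageRank-style score computed purely from inbound links; the tiny link graph and page names are invented for illustration, and Google’s real algorithm is of course far more elaborate:

```python
# A minimal, illustrative PageRank-style ranking: pages with more (and better
# connected) inbound links float to the top, regardless of topical relevance.
# The link graph below is invented purely for illustration.
links = {
    "popular-portal.com": ["niche-blog.org", "tiny-site.net", "other-portal.com"],
    "other-portal.com":   ["popular-portal.com", "tiny-site.net"],
    "niche-blog.org":     ["tiny-site.net"],
    "tiny-site.net":      [],
}

def pagerank(inbound, damping=0.85, iterations=50):
    """Crude power-iteration PageRank over a dict of page -> list of linking pages."""
    pages = list(inbound)
    rank = {p: 1.0 / len(pages) for p in pages}
    # Out-degree of q = how many pages q links into.
    outdegree = {p: sum(p in srcs for srcs in inbound.values()) for p in pages}
    for _ in range(iterations):
        rank = {
            p: (1 - damping) / len(pages)
               + damping * sum(rank[q] / max(outdegree[q], 1) for q in inbound[p])
            for p in pages
        }
    return rank

for page, score in sorted(pagerank(links).items(), key=lambda kv: -kv[1]):
    print(f"{score:.3f}  {page}")
```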

This argument goes back well into the Information Retrieval age, with its Recall vs. Precision tradeoff.

We refer you to a great comment on this subject by Dennis D. McDonald on another great piece by Dave Taylor:

Back when the earth was still cooling and I was in graduate school studying information retrieval systems, we spent a lot of time discussing measures of “precision” versus “recall”. I don’t know if people still study these things since retrieval systems have changed so much, but I think the concepts are still useful.  

“Precision” was a measure of the proportion of retrieved items that were relevant to your query. “Recall” was a measure of the proportion of items that were relevant to your query that were actually retrieved by the system.

Many papers were written on the mathematics behind these two concepts, but for me the relative value of the two measures always related back to the type of query you were asking the system to help you with. Were you, say, at the beginning of a search where you needed to cast a net as widely as possible? Or were you trying to locate a specific fact and needed to zero in on relevant information as specifically and as quickly as possible? Variables impacting your accomplishment of these goals included the completeness of the underlying database being searched and the sophistication of the search technology in converting your query into operation against the database.

Plus, what was the value of the time you could devote to the research? Could you afford to scan through a large number of retrieved but irrelevant items to locate the one or two items you actually needed? Or was it more important for the system to do the screening for you?

I think that measures of popularity have a time and place. I would never confuse “relevance” with “popularity” though since the terms are so relative. But I can conceive of many situations where knowing what a lot of people are talking about is an important and valuable thing to know.  
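Those two measures are easy to pin down with a small worked example. A minimal sketch in Python, with a made-up query result purely for illustration:

```python
# Precision: what fraction of the retrieved items are relevant?
# Recall:    what fraction of the relevant items were actually retrieved?
# The document ids below are made up purely for illustration.
retrieved = {"doc1", "doc2", "doc3", "doc4", "doc5"}   # what the engine returned
relevant  = {"doc2", "doc4", "doc9", "doc12"}          # what actually answers the query

hits = retrieved & relevant

precision = len(hits) / len(retrieved)   # 2 / 5 = 0.40
recall    = len(hits) / len(relevant)    # 2 / 4 = 0.50

print(f"precision = {precision:.2f}, recall = {recall:.2f}")
```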

Now enter Web 2.0, whose appearance coincided with the birth of a very sharp concept, the “Long Tail,” coined by Chris Anderson, who noted that a relative handful of weblogs have many links going into them, but “the long tail” of millions of weblogs have only a handful of links going into them. Beginning with a series of speeches in early 2004 and culminating in the publication of a Wired magazine article in October 2004, Anderson described the effects of the long tail on current and future business models. While the concept is definitely not a new one and originates from the famous Pareto 80-20 rule, in a nutshell it analyzes information and product delivery from and to those minor players who make up the vast majority of Web participants (consumers, sites, bloggers, etc.), and whom statistics, rather politically incorrectly, calls the “noise.”
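To see why those “minor players” matter in aggregate, here is a small illustrative sketch that assumes a Zipf-like distribution of inbound links across blogs; the blog count and the 1/rank law are arbitrary assumptions, not measured data:

```python
# Illustrative only: assume inbound links across N blogs follow a Zipf-like
# power law (links to the blog of rank r proportional to 1/r). The head gets
# the lion's share per blog, but the long tail is huge in aggregate.
N = 1_000_000                      # number of blogs (arbitrary assumption)
links = [1.0 / r for r in range(1, N + 1)]
total = sum(links)

head = sum(links[:1000]) / total   # top 1,000 blogs (0.1% of all blogs)
tail = sum(links[1000:]) / total   # the remaining 999,000 blogs

print(f"head (top 0.1%) share of links: {head:.0%}")
print(f"long tail share of links:       {tail:.0%}")
```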

Concerning Web 2.0, Chris Anderson wrote that one of the Web 2.0 patterns is that small sites make up the bulk of the internet’s content, and narrow niches make up the bulk of the internet’s possible applications. Therefore: leverage customer self-service and algorithmic data management to reach out to the entire web, to the edges and not just the center, to the long tail and not just the head.

Is it true? Let’s analyze two prominent Web 2.0 players and trendsetters: Digg and Memeorandum.

According to Wikipedia, this is how Digg works:

Readers can view all of the stories that have been submitted by fellow users in the “digg all” section of the site. Once a story has received enough “diggs”, roughly 30 or more within a certain time period, it appears on Digg’s front page. Should the story not receive enough diggs, or if enough users make use of the problem report feature to point out issues with the submission, the story will remain in the “digg all” area.

Thus Digg, like Google before it, works on the premise that the most popular is the best.
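As a toy model, that promotion rule is just a popularity threshold within a time window. Here is a minimal sketch in Python; the threshold, window, and story data are assumptions for illustration, not Digg’s actual values:

```python
from datetime import datetime, timedelta

# Toy model of the promotion rule described above: a story reaches the front
# page once it collects enough "diggs" within a time window. The threshold,
# window and sample data are assumptions for illustration, not Digg's values.
DIGG_THRESHOLD = 30
WINDOW = timedelta(hours=24)

def front_page_worthy(digg_timestamps, now):
    """True if at least DIGG_THRESHOLD diggs arrived within the last WINDOW."""
    recent = [t for t in digg_timestamps if now - t <= WINDOW]
    return len(recent) >= DIGG_THRESHOLD

now = datetime(2006, 3, 18, 21, 0)
story_diggs = [now - timedelta(minutes=10 * i) for i in range(45)]   # 45 recent diggs

print(front_page_worthy(story_diggs, now))   # True: popular enough, relevance untested
```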

How about Memeorandum? This is what TechCrunch, the Consumer Reports of Web 2.0, says about the service:

Memeorandum is a way to track blog conversations relating to political or tech issues (Gabe can and probably will add additional verticals in the future) in a highly effective manner. When you go to the site you see what is being talked about the most in the blogsphere at that moment. The most highly linked articles appear at the top and in bigger font sizes. Less popular items are below. Super-popular items eventually are pushed down as newer popular stuff goes up.

Here’s how it works: A post is written. People start to link to it. If enough people link and it becomes very popular, it goes up in the “New Item Finder” area in the top right. If more people link, it will go up in the main area. If a link includes conversation and discourse (substantial text in addition to the link), the linking blog is noted underneath the popular post.

You probably noticed this part: “The most highly linked articles appear at the top and in bigger font sizes. Less popular items are below.” It reminds us of Google Web Search, where the really relevant items very often show up somewhere on page 10 of the search results, where nobody will ever read them. Welcome to the Popularity Club!
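Mechanically, the scheme TechCrunch describes boils down to ordering posts by inbound link counts and scaling the headline size to match. A minimal sketch of that idea in Python; the post data and font-size mapping are made up for illustration, not Memeorandum’s actual logic:

```python
# Toy model of the ranking described above: posts are ordered by how many
# blogs link to them, and more-linked posts get bigger headlines. The post
# data and the font-size mapping are made up, not Memeorandum's actual logic.
posts = [
    {"title": "Big vendor ships something", "inbound_links": 120},
    {"title": "Thoughtful niche analysis",  "inbound_links": 3},
    {"title": "Popular meme of the day",    "inbound_links": 85},
]

def font_size(links, base=12, step=2, cap=32):
    """More inbound links -> bigger font, up to a cap."""
    return min(base + step * links // 10, cap)

for post in sorted(posts, key=lambda p: -p["inbound_links"]):
    print(f'<h3 style="font-size:{font_size(post["inbound_links"])}px">'
          f'{post["title"]} ({post["inbound_links"]} links)</h3>')
```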

It is a pity that Web 2.0, democratic by nature, is deviating from its own path and chasing mostly the popular information sources, especially since Big Brother Google is finally considering changing its search algorithms to balance popularity and relevance.

RSS newsmastering, by contrast, can keep up with the most relevant Web sources without sorting them by popularity. This way, RSS NewsRadars cover any subject, no matter how broad or narrow it is.
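A minimal sketch of that idea, assuming the feedparser library and a naive keyword match as the “relevance” test; the feed URLs and keywords are placeholders:

```python
import feedparser   # assumed available: pip install feedparser

# A naive newsmastering sketch: pull a set of feeds and keep items that match
# the subject keywords, with no popularity ranking at all. The feed URLs and
# keywords below are placeholders for illustration.
FEEDS = [
    "https://example.org/niche-blog/feed",
    "https://example.org/big-portal/feed",
]
KEYWORDS = {"web 2.0", "long tail", "rss"}

def relevant(entry):
    """Crude relevance test: any keyword appears in the title or summary."""
    text = (entry.get("title", "") + " " + entry.get("summary", "")).lower()
    return any(kw in text for kw in KEYWORDS)

newsradar = []
for url in FEEDS:
    for entry in feedparser.parse(url).entries:
        if relevant(entry):
            newsradar.append((entry.get("title"), entry.get("link")))

for title, link in newsradar:
    print(title, "->", link)
```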
