
From Privacy to Penalties: Is Google giving users what they asked for? (Part II)

Posted on 1.19.2015

:: By Ben Oren, Dynamic Search ::


This is the second part of a three-part series of articles (Read Part 1) examining Google’s practices and standing policies in relation to its declared objectives of increasing access to information and improving user experience, as well as its since-dropped corporate motto “don’t be evil.” Among the practices shown to contradict these stated objectives are penalties, dealt with broadly in the previous article in this series. The current article addresses another practice that has steadily escalated: the gradual marginalization of organic search results. The final part in this series will deal with Google’s repeated user privacy violations.

How Google Took Over Organic Results

Originally, Google was a simple search engine. Its simplicity, more than anything else, helped it gain popularity. Over the years, it has changed significantly, both visually and technically. If we compare search results pages from the late ’90s to those of today, it’s easy to spot countless differences. Nowadays, the top 10 organic results often can’t even be seen on the first page, due to all the images, videos, news items and other elements Google has introduced.

My colleague Branko Rihtman wrote a post addressing these changes and illustrated them perfectly. This is what Google’s search results pages looked like 10 years ago:

And here’s one of the layouts that can be seen nowadays:

The above configuration is still somewhat reasonable: although only five regular organic results are displayed, local searches do warrant the display of local results. This layout dates from when Google still stuck to its original purpose of emphasizing the end user’s experience, but things took a turn for the worse when Google Shopping (and travel offers) started to infiltrate results pages.

In 2011, the U.S. Senate held a hearing at which Jeffrey Katz, CEO of price-comparison giant Nextag, testified:

"But what Google engineering giveth, Google marketing taketh away. Google abandoned these core principles when they started interfering with profits and profit growth. Today, Google doesn’t play fair. Google rigs its results, biasing in favor of Google Shopping and against competitors like us. As a result, Nextag’s access is more and more discriminated against. Not because our service has gotten worse – in fact, our service is much better than it has ever been – but because we compete with Google where it matters most, for very lucrative shopping users."

Once Google effectively began selling through its own website, we started stumbling upon sponsored ads within search results. The example below shows a search for hotels in Las Vegas; note the extent of Google’s takeover of organic results, primarily through sponsored ads that display ratings and pricing, parameters it doesn’t make available to any of its competitors’ results:

Behind every such change, the first explanation Google provides has to do with the benefit to users. While many of the changes made through the years do simplify the user experience, Google’s continued exploitation of its popularity (and of that ‘user experience’ argument) to benefit its own bottom line is troubling, especially since it denies businesses a fair chance at equal visibility in organic results. The example above shows unfairness toward huge, profitable Las Vegas hotels, which we may not feel sorry for and which may be able to compensate with other forms of visibility, but similar examples can be found in smaller niches. Imagine, for instance, a small town with several inns that aren’t given a fair chance to reach potential clients.

Once again, the unfair competition extends further, even to non-profit organizations such as Wikipedia. Several months ago, Matt Cutts tweeted a link to a new form that users can fill out to report scraped content and scraper sites.

Screenshot taken from searchengineland.com on 08/15/14.

This tweet spread like wildfire after it came to light that Google itself was in breach of these guidelines, using scraped content from other websites when it suited it. In an absurdly apt example, one Twitter user pointed out that Google had lifted the very definition of “scraper site” directly from Wikipedia:

Screenshot taken from searchengineland.com on 08/15/14.

When inputting the search phrase ‘what is a scraper site’, Twitter user Dan Barker saw Google’s definition displayed at the top of the search results as part of its Knowledge Graph. By displaying this info, taken directly from Wikipedia, Google essentially canceled out all other results: hardly any user will click through to the URLs below after seeing an abridged version of what they searched for right then and there. Google severely damaged Wikipedia’s chances of getting traffic, and thus potential support for its free-information mission, even though Wikipedia created the content and was the most relevant result, as evidenced by its appearance at the top of the organic results.

Read Part 1, Read Part 3 of this three-part series


Ben Oren specializes in handling Web marketing efforts and boosting online conversion for large corporations in highly competitive niches, mostly in the U.S. and Europe. He is the Head Marketing Consultant at Dynamic Search, a reputable U.S.-based Web marketing agency handling small and medium clients worldwide, as well as the Director of Web Marketing at WhiteWeb and a contributor to leading industry publications.
