Google Search Result Quality Evaluators

Google's search quality evaluation process may have been running for years.

SearchBistro recently posted a 22-page PDF titled General Guidelines on Random-Query Evaluation, last revised December 31, 2003. In addition to posting the Random-Query Evaluation PDF, Henk van Ess has recently posted:

  • examples of offensive (or low quality) sites

  • some whitelisted sites:
    Here is a non-exhaustive "white list" of the sites whose pages are not to be rated as Offensive (nor as Erroneous):
    Kelkoo, Shopping.com, dealtime.com, bizrate.com, bizrate.lycos.com, dooyoo.com;

  • tips for rating sites:
    If it's a machine-generated, no-added-value affiliate site, it's Spam. If it provides some unique value, for example customer feedback or local information, it should be rated on the merit scale even if it has some affiliate links. Similarly, if a game site allows you to download a game without being intrusive (i.e. installing spyware without notice), it should be rated on the merit scale instead of Spam.

  • how reviewers communicate to resolve discrepancies when their quality scores are far apart

Search Classification Types:
The Google review guide classifies searches as being

  • navigational (example: a search for United Airlines)

  • informational (example: how do I..)
  • transactional (example: buy 18K White Gold Omega Watch)
  • any mixture of the above categories.
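
To make the mixture idea concrete, here is a minimal sketch of that taxonomy as a data structure. The three category names come from the guidelines; the example queries, the mixed-intent entry, and the classify_query helper are purely illustrative assumptions, not anything from Google.

```python
# Hypothetical sketch of the query-intent taxonomy described above.
# Category names come from the rater guidelines; the example queries
# and the lookup helper are illustrative assumptions only.
NAVIGATIONAL = "navigational"    # user wants a specific site, e.g. United Airlines
INFORMATIONAL = "informational"  # user wants information, e.g. "how do I..."
TRANSACTIONAL = "transactional"  # user wants to buy or otherwise transact

# A query can carry a mixture of intents, so each query maps to a set of categories.
EXAMPLE_QUERIES = {
    "united airlines": {NAVIGATIONAL},
    "how do i tie a bow tie": {INFORMATIONAL},
    "buy 18k white gold omega watch": {TRANSACTIONAL},
    "omega watch": {INFORMATIONAL, TRANSACTIONAL},  # plausibly mixed intent
}

def classify_query(query: str) -> set[str]:
    """Toy lookup: return the intent set for a known example query."""
    return EXAMPLE_QUERIES.get(query.lower(), {INFORMATIONAL})
```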

Resource Quality Rating:
Google then asks raters to classify the results returned for random queries using the following categories:

  • Vital

    • most queries, especially generic-type queries, do not have a Vital result.

    • Vital result example: search for Ask Jeeves returns www.ask.com.
  • Useful
    • these should have some of the following characteristics (although a page will likely not exhibit all of them): comprehensive, high quality, answers the search query with precision, timely, authoritative.

    • This is the highest rating attainable for most pages on most search queries.
    • Useful result example: search for USA Patriot Act returning the ACLU page covering the USA Patriot Act.
    • For some plural queries, such as Newspapers in Scotland, the best results may be lists of related sites. Reviewers must also check some links on the page to ensure the page is functional.
  • Relevant
    • One step down from Useful. Relevant results may satisfy only one important facet of a query, whereas Useful results are expected to be more broad and thorough.

    • Results that would have been Vital if a more common interpretation did not overshadow them are considered Relevant.
  • Not Relevant
    • Not Relevant results are related to the topic but do not help users.

    • If a person searching for Real Estate finds a San Diego Real Estate website, that result would probably not be Relevant, since most people searching for real estate do not live in or want to move specifically to San Diego.
    • Just as the San Diego example is too narrow geographically, other sites can be too narrow in non-location-based ways, such as being outdated or addressing only a narrow subset of the query.
  • Off Topic
    • Is not a useful page. Irrelevant.

    • Usually occurs when text matching algorithms do not account for some terms that can have multiple meanings.
  • Offensive
    • Pages or sites that generally do not hold merit for any query.

    • Example Offensive sites: spyware, unrequested porn, AdSense scrapers and other keyword-net type sites, etc.
  • Erroneous
  • Didn't Load
  • Foreign Language
  • Unrated

Vital through Offensive are listed in order of quality, the higher the better. Erroneous through Unrated are cast as non-votes. When in doubt between two rating values, raters are expected to choose the lower one.
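
As a rough illustration of that ordering and the tie-break rule, here is a minimal sketch, assuming the category names above; the function names and code structure are my own, not anything from the guidelines.

```python
# Hypothetical sketch of the rating scale summarized above. The ordering
# (Offensive lowest, Vital highest) and the "rate lower when in doubt"
# rule come from this summary; the code structure is illustrative only.
MERIT_SCALE = ["Offensive", "Off Topic", "Not Relevant", "Relevant", "Useful", "Vital"]
NON_VOTES = {"Erroneous", "Didn't Load", "Foreign Language", "Unrated"}  # cast as non-votes

def is_vote(rating: str) -> bool:
    """Only ratings on the merit scale count toward a result's score."""
    return rating in MERIT_SCALE

def resolve_doubt(rating_a: str, rating_b: str) -> str:
    """When torn between two merit ratings, take the lower of the two."""
    return min(rating_a, rating_b, key=MERIT_SCALE.index)
```

For example, a rater torn between Useful and Relevant would record Relevant, the lower of the two.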

Why this is Important:
Learning what Google wants its evaluators to look for makes it easier to understand how to deliver what the search engines want.

This post was a quick review of General Guidelines on Random-Query Evaluation. If you are heavily interested in SEO, it is well worth your time to read the original document, which lists many more examples and goes into far greater detail than this post.

Random Thoughts:
Considering how relatively low the wages are for these positions ($10 - $20 an hour), you have to wonder:

  • why it took so long for this information to come out

  • if some of these people are using the information they gained from participating in other ways
  • if these people know anything about Google's business model, and how much THEY could be making on a per-click basis if they created well-cited content that fit Google's guidelines.
  • and, on a far-off tangent, what would happen if Google's business model made self-employment so profitable that they could not afford to pay workers?
Published: June 6, 2005 by Aaron Wall in google technology

Comments

June 6, 2005 - 11:37pm

Brilliant stuff, Aaron - thanks for the quick review, it is much appreciated.

platformbeds
November 15, 2007 - 6:15am

Well, i guess for now, google is the best search engine in terms of returning qualified and relevant results. But i heard that wikipedia is in the process of launching a search engine, it will remember the activities of users and determine the best qualified results based on user experience. For example, in google, when you search 'platform beds' the top results are commercial sites. The content-driven site that provides valuable info bestplatformbeds.com ranked 5th the time i check it
