Secret Matt Cutts Video Unveiled

Google Expands Snippets & Related Searches Word Relationships

Google announced that they are rolling out a new technology to better understand word relationships and extend their snippets on longer search queries.

Starting today, we're deploying a new technology that can better understand associations and concepts related to your search, and one of its first applications lets us offer you even more useful related searches (the terms found at the bottom, and sometimes at the top, of the search results page).

Note that they claimed that this is "one of its first applications." If they can improve relevancy by integrating this technology directly into the core search algorithms then it will lower the importance of on-page optimization (since you only need to be close rather than use the specific words that were searched for). Such a change would likely decrease the traffic to low-authority sites held up largely by strong site structure and on-page SEO, while increasing the amount of traffic going to high-authority sites and well-branded sites that are poorly structured from an SEO perspective.

I am not sure if this sort of algorithm change would favor shorter or longer content pages. In most cases I would guess longer pages, provided they were kept on theme and broken up into relevant chunks. The expanded snippets on longer search queries show a lot more information directly in Google's search results, which helps thicker pages show off their offering better than thinner pages can, but it also cedes more control of the information to Google, as they can show close to 250 characters in the search results.

If the technology were applied to anchor text it might also limit the value of anchor text manipulation by boosting the value of related phrases (if Google knows that the word Salesforce is relevant to CRM then they might count that anchor text more).
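To make that concrete, here is a minimal sketch (my own illustration, not Google's implementation) of how an engine might give partial anchor text credit for terms it believes are related to the query. The relatedness table, weights, and function names are all invented:

```python
# Hypothetical sketch: crediting anchors that use terms related to the
# target phrase, rather than only exact-match anchor text. The
# relatedness table is a stand-in for whatever word-association model
# the engine actually uses; the numbers are made up.

RELATED_TERMS = {
    "crm": {"salesforce": 0.8, "customer relationship management": 0.9},
}

def anchor_score(anchor_text: str, query: str) -> float:
    """Exact-match anchors get full credit; anchors containing a known
    related term get partial credit proportional to relatedness."""
    anchor = anchor_text.lower()
    query = query.lower()
    if query in anchor:
        return 1.0
    best = 0.0
    for term, weight in RELATED_TERMS.get(query, {}).items():
        if term in anchor:
            best = max(best, weight)
    return best

print(anchor_score("best CRM software", "CRM"))  # 1.0 (exact match)
print(anchor_score("Salesforce review", "CRM"))  # 0.8 (related term)
print(anchor_score("cute cat pictures", "CRM"))  # 0.0 (unrelated)
```

Under a model like this, hammering one exact anchor phrase buys you less, because varied but related anchors start passing value too.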

Greg Sterling noted that this change came from the Orion technology that was purchased by Google from Ori Allon in 2006. He also interviewed them:

I spoke yesterday to Google and Ori Allon. To the extent that I understood his discussion of the way Orion’s technology had been applied to refinements here’s what’s going on at a high level: pages are being scanned in “real-time” by Google after a query is entered. Conceptually and contextually related sites/pages are then identified and expressed in the form of the improved refinements. This is not solely keyword based but derived from an “understanding” of content and context.

It is hard to speculate if/when this technology will move from sideshow to big deal. The current usage is fairly trivial, but it could become much more deeply ingrained in many parts of the relevancy algorithms.

As search engines get more sophisticated about how they model word relationships (on branded and non-branded search queries), that becomes one more thing that can be optimized, though likely one that will require a holistic marketing strategy, because you will need to create a lot of co-citation (or some other signal of relevancy) across many pages on the web.

A couple years ago Lord Maurice Saatchi described his brand strategy as being built on One Word Equity.

In this new business model, companies seek to build one word equity - to define the one characteristic they most want instantly associated with their brand around the world, and then own it. That is one-word equity.

It is the modern equivalent of the best location in the high street, except the location is in the mind.

Danny Sullivan Highlights Google's 2 Tier Justice System

Danny highlighted how many aggregators of aggregators and content cesspools are bogusly clogging up Google's search results with sites that would be viewed as spam if their owners were not socially well connected:

You kind of feel sorry for Joe Schmoe. Build a name by once having worked for Apple or by having written a few marketing books, and you seem to get much better treatment than Joe would get if he pulled the same SEO play stunts.

Alltop, Mahalo, Squidoo -- none of them dominate Google. But seriously, Squidoo has a PR8 home page? Alltop has a PR7? Search Engine Land, which actually produces original content, sits with a PR6 -- but these guys that simply compile content from others get a big fat PR kiss on the lips?

Hey, I don't fret about PR scores. I know how meaningless they can be. But Joe Schmoe who tried to launch one of these types of sites wouldn't get any PR at all. Google would have shut them down long ago. Lesson here? To be a really successful SEO, get successful at something else, then jump into your SEO play.

Danny Sullivan is probably the only neutral reporter in the search space with a decade-plus of experience AND a background in traditional journalism. He is usually quite even-handed, so for him to say that, you know Google is screwing up pretty badly.

If you are good at public relations you can have all the PageRank you want. Can't afford a proper public relations campaign? Have no brand equity other than being branded as an SEO? You are the scum that makes the internet a cesspool. Better luck next life!

If you can't be found you don't exist. As Google's "spam team" grows more subjective with its definition of spam (hey, I know him, it's not spam; never heard of him, it's spam; etc.) the web loses out on its diversity. Meanwhile, how about you view some great fraudulent government grant ads through AdWords.

Google's public relations team publicly lied about cleaning those fraudulent ads up.

"Our AdWords Content Policy does not permit ads for sites that make false claims, and we investigate and remove any ads that violate our policies," said Google in a statement e-mailed to ClickZ News. "We have discussed these issues with the Federal Trade Commission and reaffirmed our commitment to protecting users from scam ads."

The above LIE was quoted from an article published 3 weeks ago, but Google is still making over $10,000 a day carpet-bombing searchers with that reverse billing fraud (and probably tens of thousands more on the content network).

Spam is only spam *if* the spammer is not paying Google and they are too small to fight back against the often arbitrary and unjust decisions of the irrational Google engineers who "fight spam" while turning a blind eye to grant scam ads.

Pretty worthless hypocrisy, Google. Who is trying to turn the web into a cesspool full of fraudulent ads and corporate misinformation? This company:

Phorm/Google Behavioral Ad Targeting - Based on Your Browsing Data

Phorm, a UK company that partnered with BT to run secret trials to target ads based on usage data, was roasted by the media with article titles like Phorm’s All-Seeing Parasite Cookie.

Google, which has long stayed away from behavioral targeting due to privacy (and negative publicity) concerns, announced they are jumping into the behavioral ad targeting market:

Google will use data it collects about what Web sites users visit and what it knows about the content of those sites to sort its massive audience of users into groups such as hockey fans or travel enthusiasts. The data won't be drawn from users' search queries, but from text files known as cookies that Google installs on the Web browsers of users who visit pages where it serves ads.

DoubleClick, AdSense, Google Toolbar, Gmail, Youtube, Blogger, Google Groups, Google Checkout, Google Chrome, Google Analytics...there are lots of ways to track you, even if you do not want to be tracked. Google will allow users to opt out of such targeting, with yet another cookie, but if you clear cookies then you are back in the matrix again.

And while Google claims they are not using search queries in their current behavioral targeting, Danny Sullivan wrote:

Google confirmed in a session I moderated at the Omniture Summit last month that they have tested behaviorial targeted ads using past search history data. Again, that doesn’t seem to be part of this release, but it could come in the future.

As noted in the early Google research paper titled The Anatomy of a Large-Scale Hypertextual Web Search Engine:

we expect that advertising funded search engines will be inherently biased towards the advertisers and away from the needs of the consumers.

Indeed. Tim Berners-Lee, creator of the world wide web, spoke out against behavioral targeting:

"People use the web in a crisis, when wondering whether they have a sexually transmitted disease, or cancer, when wondering if they are homosexual and whether to talk about it … to discuss political views."
...
"The power of this information is so great that the commercial incentive for companies or individuals to misuse it will be huge," he said. "It is absolutely essential to have absolute clarity that it is illegal."

If Google continues down this path unquestioned, then in due time you may not be able to get health insurance because of a web page you viewed or a keyword you trusted Google enough to search for. Better luck next life!

Download SEO Book Torrent: Should Google Recommend That?

In the following video Matt Cutts highlighted that he did not feel the update was driven by brand, but rather by concepts of trust, PageRank, and authority:

RankPulse, a tool I used in my analysis of the algorithm change, is powered by the Google SOAP API, which Google will soon stop supporting. Matt played down the size of the algorithm update made by a Googler named Vince. But John Andrews takes a contrarian view, looking at Google's behavior after the algorithm update was analyzed:

You might say that Google’s API,via custom third-party innovations like RankPulse.com, enabled us to “organize the world’s information and make it universally accessible and useful” (which is Google’s corporate mission statement, by the way).

It sure seems contradictory for Google, a company based on the collection and permanent storage of others’ web page content, to forbid others from doing the same. It is also quite egregious for Google to expect to operate secretly, with no accountability (such as might be obtained through archiving of Google results), when Google exerts so much influence over Internet commerce.

One of Google's initial complaints, as mentioned by Joshua Sciarrino, was that search information was too secretive:

At the same time, search engines have migrated from the academic domain to the commercial. Up until now most search engine development has gone on at companies with little publication of technical details. This causes search engine technology to remain largely a black art and to be advertising oriented (see Appendix A). With Google, we have a strong goal to push more development and understanding into the academic realm. ... However, it is very difficult to get this (academic) data, mainly because it is considered commercially valuable.

As Google gobbles up your content while shielding its results from unauthorized access, it creates a weakness which a new search service could exploit...by being far more open.

While Google doesn't want anyone to access their proprietary business secrets, if you search for my brand they recommend you look for a torrent to go download an old copy of my ebook.

Sounds like a fair trade, eh? No big deal. Google is a common carrier, and intends to use that to their business advantage whenever and wherever possible.

I hope you (and your business model) are not allergic to peanuts!

Hugo Guzman: Deconstructing the Google 'Brand?' Algorithm

So here we are on Tuesday, March 3rd, and I’m still trying to fully digest the implications of Aaron’s “Heavy Emphasis on Branding” post from last Wednesday, February 25. The data that was presented, the context that was provided, and the labyrinth of insightful user comments that were spawned left me reeling for days. So much so that I wouldn’t be surprised if the annals of SEO history record February 25, 2009 as the infamous “Aaron Wall” update.

In all seriousness, though, this really is a big deal, especially for folks like me who spend their days attempting to optimize mainstream “Big Brand” web sites for a living. I’m fortunate enough to work for an interactive agency that takes SEO seriously, and my team strives to deliver a truly comprehensive approach to SEO – blending site-side factors, link building, social media elements, and analytics. We usually do a pretty darn good job, despite the myriad obstacles and pitfalls associated with trying to implement SEO for a large, lumbering, Fortune 500 web portal. And sadly, like many big firms out there, we have occasionally chalked up our shortcomings to a lack of implementation and cooperation on the part of the client. It’s that typical “not our fault, it’s a crappy big brand site” copout that many of us have heard a thousand times before.

Then along comes Aaron with his revelations about Google’s recent algorithm shift and its ramifications for big brands, and all hell breaks loose:

  • I immediately spiral into self-doubt regarding my own and my team’s marketing abilities
  • I start scrambling to deconstruct this alleged algorithm shift
  • I start emailing all of my senior team members asking them to attempt deconstructing the algorithm shift
  • They roll their eyes, and one of them tells me to stop sending so many random emails at 10 o’clock at night

I’ve calmed down a bit since then, but I’m still hard at work trying to figure out exactly what levers have caused certain “Big Brand” sites to skyrocket in the SERPs while others remain mired in search engine mediocrity. As with most things in life, the best course of action is to introduce a bit of the old scientific method, systematically isolating variables in an attempt to identify predictable patterns that can be replicated.

After taking a high-level look at each of the keywords outlined in Aaron’s post, and the corresponding brand sites that made the jump onto the front page, several possible culprits become apparent. Here are a couple that jumped out at me:

Social Media Signals – companies like University of Phoenix have made a concerted effort to engage users via social media channels, and those social reverberations could be a key facet in Google’s newly refined algorithm, especially if some of those reverberations include mention of the phrase “online degree.”

Increased weighting of anchor text within internal site linkage – companies like American Airlines seem to be leveraging both their own internal site pages as well as partner sites to increase the volume of anchor text occurrences for the term “airline tickets” (although they’re missing out on some seriously low-hanging fruit by failing to optimize the alt attribute on their global logo image link). If Google has decided to increase the potency of this element, then large brand portals with voluminous internal pages and partner sites (or branded micro sites) could gain an upper hand for highly competitive terms.
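That alt attribute gap is easy to audit programmatically. Here is a rough sketch using requests and BeautifulSoup; the URL and target phrase below are placeholders, not a real audit:

```python
# Rough sketch: flag linked images whose alt text misses the target
# phrase (an internal anchor text opportunity). The URL and phrase
# below are illustrative. Requires: pip install requests beautifulsoup4
import requests
from bs4 import BeautifulSoup

def audit_image_links(url: str, target_phrase: str) -> None:
    """Print image links with empty or off-target alt attributes."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    for a in soup.find_all("a", href=True):
        for img in a.find_all("img"):
            alt = (img.get("alt") or "").strip()
            if target_phrase.lower() not in alt.lower():
                print(f"{a['href']}: alt={alt!r} lacks {target_phrase!r}")

audit_image_links("https://www.example.com/", "airline tickets")
```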

Increased sensitivity to offline marketing campaigns – Perhaps Google’s algorithm is getting better at recognizing site traffic associated with offline marketing campaigns. This would be extremely difficult to do without having direct access to a site’s analytics data (although Google Analytics conspiracy theorists are convinced that this is already the case for sites using GA), but perhaps Google is using signals such as the relative volume of specific search queries (e.g. branded queries like “State Farm”) and somehow tying that data back to terms that the algorithm associates with the given brand query (e.g. State Farm = Auto Insurance).
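To illustrate the kind of signal being speculated about, here is a toy sketch with entirely made-up numbers: scale a brand's share of overall query volume into a ranking boost on the category term the algorithm associates with that brand. Nothing here reflects Google's actual data, mappings, or weights:

```python
# Toy sketch of the speculated signal: brands that searchers ask for by
# name get a boost on their associated category term. The mapping,
# volumes, and scaling constant are all fabricated for illustration.

BRAND_CATEGORIES = {"state farm": "auto insurance"}  # assumed mapping
QUERY_VOLUME = {"state farm": 450_000}               # fake monthly counts
TOTAL_VOLUME = 50_000_000

def brand_boost(brand: str, category: str) -> float:
    """Boost proportional to the brand's share of total query volume,
    applied only to the category the brand is associated with."""
    if BRAND_CATEGORIES.get(brand) != category:
        return 0.0
    share = QUERY_VOLUME.get(brand, 0) / TOTAL_VOLUME
    return min(1.0, 100 * share)  # squash into [0, 1]

print(brand_boost("state farm", "auto insurance"))  # 0.9
print(brand_boost("state farm", "diets"))           # 0.0
```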

Disclaimer: I haven’t been able to actually test these hypotheses out thoroughly or with any real semblance of scientific method. After all, it’s only been five days since I read the post, and I do have other things to do besides ponder the ramifications of this alleged algorithm shift (it’s 10pm so I have to start annoying my team with random emails again).

Besides, Google’s results could roll back at any moment, rendering all of these insights (nearly) moot. Still, if you’re in any way involved in optimizing web sites for big brands (or if you just want to improve your eye for SEO) it’s probably a good idea to start doing a little scientific testing of your own.

If you liked this post (or even if you thought it was a flaming pile of dog excrement) feel free to reach out to me via my Twitter handle: http://twitter.com/hugoguzman11

Big Brands? Google Brand Promotion: New Search Engine Rankings Place Heavy Emphasis on Branding

Originally, when we published this, we were going to make it subscriber-only content, but the change is so important that I thought we should share some of it with the entire SEO industry. This post starts off with a brief history of recent algorithm updates, and shows the enormous weight Google is placing on branded search results.

The Google Florida Update

I got started in the search field in 2003, and one of the things that helped put my name on the map was writing about the November 14th Google Florida update in a cheeky article titled Google Sells Christmas [1]. To this day many are not certain exactly what Google changed back then, but the algorithm update seemed to hit a lot of low-level SEO techniques. Many pages that exhibited the following characteristics simply disappeared from the search results:

  • repetitive inbound anchor text with little diversity
  • heavy repetition of the keyword phrase in the page title and on the page
  • words in a phrase appearing in close proximity, with few occurrences of the keywords spread apart
  • a lack of related/supporting vocabulary in the page copy

The Google Florida update was the first update that made SEO complicated enough that most people could not figure out how to do it. Before that update all you needed to do was buy and/or trade links with your target keyword in the link anchor text, and after enough repetition you stood a good chance of ranking.
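For illustration, here is a toy reconstruction of the first suspected signal from the list above: scoring how repetitive a page's inbound anchor text profile is. This is my speculation about the filter's shape, not Google's code, and the example data is invented:

```python
# Toy reconstruction of a suspected Florida-era signal: anchor text
# diversity. A link profile dominated by one exact phrase looks
# manufactured; naturally earned links vary. Example data is invented.
from collections import Counter

def anchor_diversity(anchors: list[str]) -> float:
    """Fraction of inbound anchors that are NOT the single most common
    phrase. 0.0 means every anchor is identical."""
    counts = Counter(a.lower().strip() for a in anchors)
    most_common = counts.most_common(1)[0][1]
    return 1.0 - most_common / len(anchors)

organic = ["Acme Widgets", "acme", "great widget shop",
           "www.acme.com", "this review"]
manipulated = ["cheap widgets"] * 9 + ["Acme Widgets"]

print(anchor_diversity(organic))      # 0.8 -> looks natural
print(anchor_diversity(manipulated))  # 0.1 -> looks manufactured
```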

Google Austin, Other Filters/Penalties/Updates/etc.

In the years since, Google has worked on creating other filters and penalties. At one point they tried so hard to stop artificial anchor text manipulation that they accidentally filtered out some brands for their official names [2].

The algorithms have grown so complex on some fronts that Google engineers do not even know about some of the filters/penalties/bugs (the difference between the 3 labels often being an issue of semantics). In December 2007, a lot of pages that ranked #1 suddenly ended up ranking no better than position #6 [3] for their core target keyword (and many related keywords). When questioned about this, Matt Cutts denied the problem, only acknowledging it once a fix was already rolling out. [4]

When Barry asked me about "position 6" in late December, I said that I didn't know of anything that would cause that. But about a week or so after that, my attention was brought to something that could exhibit that behavior. We're in the process of changing the behavior; I think the change is live at some datacenters already and will be live at most data centers in the next few weeks.

Recent Structural Changes to the Search Results

Google helped change the structure of the web in January 2005 when they proposed the link rel=nofollow attribute [5]. Originally it was billed as a way to stop blog comment spam, but by September of the same year Matt Cutts had changed his tune: you were considered a spammer if you were buying links without using rel=nofollow on them. Matt Cutts documented some of his repeated warnings on the Google Webmaster Central blog. [6]

A bunch of allegedly "social" websites have adopted the nofollow attribute, [7] turning their users into digital share-croppers [8] and eroding the link value [9] that came as part of being a well known publisher who created link-worthy content.

In May of 2007 Google rolled out Universal Search [10], which mixes select content from vertical search databases directly into the organic search results. This promoted:

  • Google News
  • Youtube videos (and other video content)
  • Google Product Search
  • Google Maps/Local
  • select other Google verticals, like Google Books

These 3 moves (rel=nofollow, social media, and universal search), coupled with over 10,000 remote quality raters [11], have made it much harder to manipulate the search results quickly and cheaply unless you have a legitimate, well trusted site that many people vouch for. (And it does not hurt to have spent a couple hours reading their 2003, 2005, and 2007 remote quality guidelines that were leaked into the SEO industry. [12])

Tracking Users Limits Need for "Random" Walk

The PageRank model is an algorithm built on a random walk of links on the web graph. But if you have enough usage data, you may not need to base your view of the web on that model, since you can use actual surfing data to help influence the search results. Microsoft has done research on this concept under the name BrowseRank. [13] In Internet Explorer 8, usage data is sent to Microsoft by default.
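For reference, the random walk behind PageRank boils down to a short power iteration. Here is a minimal sketch over a made-up link graph, using the 0.85 damping factor from the original paper; BrowseRank's twist is essentially to replace these uniform random-surfer assumptions with transition probabilities estimated from real browsing sessions:

```python
# Minimal PageRank power iteration over a toy link graph, illustrating
# the "random walk" model referenced above. The graph is made up;
# damping of 0.85 comes from the original PageRank paper.

def pagerank(links: dict[str, list[str]], damping: float = 0.85,
             iterations: int = 50) -> dict[str, float]:
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        # Every page starts with the "random jump" share...
        new = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:  # dangling page: spread its rank evenly
                for p in pages:
                    new[p] += damping * rank[page] / n
            else:             # ...plus the shares flowing along links
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new[target] += share
        rank = new
    return rank

graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
for page, score in sorted(pagerank(graph).items(), key=lambda kv: -kv[1]):
    print(page, round(score, 3))
```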

Google's Chrome browser phones home [14] and Google also has the ability to track people (and how they interact with content) through Google Accounts, Google Analytics, Google AdSense, DoubleClick, Google AdWords, Google Reader, iGoogle, Feedburner, and Youtube.

Yesterday we launched a well-received linkbait, and the same day our rankings for our most valuable keywords lifted in both Live and Google. Part of that may have been the new links, but I would be willing to bet some of it was caused by tens of thousands of users finding their way to our site.

Google's Eric Schmidt Offers Great SEO Advice

If you ask Matt Cutts what big SEO changes are coming up he will tell you to "make great content" and so on...never wanting to reveal the weaknesses of their search algorithms. Eric Schmidt, on the other hand, frequently talks to media and investors with the intent of pushing Google's agenda and all the exciting stuff that is coming out. In the last 6 months Mr. Schmidt has made a couple of statements that smart SEOs should incorporate into their optimization strategies - one on brands [15], and another on word relationships [16].

Here is Mr. Schmidt's take on brands from last October:

The internet is fast becoming a "cesspool" where false information thrives, Google CEO Eric Schmidt said yesterday. Speaking with an audience of magazine executives visiting the Google campus here as part of their annual industry conference, he said their brands were increasingly important signals that content can be trusted.

"Brands are the solution, not the problem," Mr. Schmidt said. "Brands are how you sort out the cesspool."

"Brand affinity is clearly hard wired," he said. "It is so fundamental to human existence that it's not going away. It must have a genetic component."

And here is his take on word relationships from the most recent earnings call:

“Wouldn’t it be nice if Google understood the meaning of your phrase rather than just the words that are in that phrase? We have a lot of discoveries in that area that are going to roll out in the next little while.”

The January 18th Google Update Was Bigger Than Florida, but Few People Noticed It

Tools like RankPulse [17] allow you to track the day to day Google ranking changes for many keywords.

4 airlines recently began ranking for "airline tickets"

At least 90% of the first page of search results for auto insurance is owned by large national brands.

3 boot brands / manufacturers rose from nowhere to ranking at the top of the search results.

3 of the most well recognized diet programs began ranking for diets.

4 multi-billion dollar health insurance providers just began ranking, with Aetna bouncing between positions #1 and 2.

3 of the largest online education providers began ranking for online degree.

5 watch brands jumped onto the first page of search results for watches. To be honest I have never heard of Nixon Now.

The above are just some examples. Radioshack.com recently started ranking for electronics and Hallmark.com just started ranking for gifts. The examples do not list all brands that are ranking, only brands that just started ranking. Add in other brands that were already ranking, and in some cases brands own 80% or 90% of the first page search results for some of the most valuable keywords. There are thousands of other such examples across all industries if you take the time to do the research, but the trend is clear - Google is promoting brands for big money core category keywords.

Want to read the rest of our analysis? If you are a subscriber you can access it here.

Mahalo Caught Spamming Google With PageRank Funneling Link Scheme

Jason "SEO is dead" Calacanis, founder of Mahalo, used "SEO is dead" as a publicity stunt to help launch his made-for-AdSense scraper website. In the past we have noted how he was caught ranking pages without any original content - in clear violation of Google's guidelines. And now he has taken his spam strategy one step further, by creating a widget that bloggers can embed on their blogs.

The following link list looks like something you would find on an autogenerated spam website, but was actually on Hack A Day, a well respected technology blog with lots of PageRank.

  • Note that the links are not delivered in JavaScript and do not use nofollow.
  • The links are repetitive and spammy.
  • The links have no contextual relevance.
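Checking whether such widget links actually pass PageRank takes a few lines of scripting. A rough sketch (the URL is a placeholder) that lists the followed links in a page's raw HTML:

```python
# Rough sketch: list the followed (PageRank-passing) links on a page,
# i.e. plain HTML anchors without rel="nofollow". Links rendered via
# JavaScript would not appear in the raw HTML at all.
# Requires: pip install requests beautifulsoup4
import requests
from bs4 import BeautifulSoup

def followed_links(url: str) -> list[str]:
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    return [
        a["href"]
        for a in soup.find_all("a", href=True)
        if "nofollow" not in (a.get("rel") or [])  # rel parses as a list
    ]

for href in followed_links("https://www.example.com/"):
    print(href)
```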

This activity is in stark contrast to Google's webmaster guidelines:

Your site's ranking in Google search results is partly based on analysis of those sites that link to you. The quantity, quality, and relevance of links count towards your rating. The sites that link to you can provide context about the subject matter of your site, and can indicate its quality and popularity. However, some webmasters engage in link exchange schemes and build partner pages exclusively for the sake of cross-linking, disregarding the quality of the links, the sources, and the long-term impact it will have on their sites. This is in violation of Google's webmaster guidelines and can negatively impact your site's ranking in search results. Examples of link schemes can include:

  • Links intended to manipulate PageRank
  • Links to web spammers or bad neighborhoods on the web
  • Excessive reciprocal links or excessive link exchanging ("Link to me and I'll link to you.")
  • Buying or selling links that pass PageRank


The above links not only appear on Hack A Day; Mahalo is actually creating a "Mahalo Blog Network" that cross-links to other Mahalo-promoting blogs and exists for the purpose of flowing PageRank into high-paying Mahalo pages.

Back around the last time Jason was calling SEO spam, he was promoting Weblogs Inc., and his blog revenues relied heavily on selling PageRank from his blogs to casino websites.

Do the venture capitalists that invested in Mahalo support such Google gaming and PageRank selling strategies? When will Google act on this blatant violation of their guidelines? Jason has a clear history of operating outside the spirit of those guidelines, and if Google lets this slide then many other people are going to start spamming them too. Google has an obligation to protect searchers from such devious behavior; letting it slide would promote the creation of more spam.

Update: This Looks Worse Than I Originally Thought!

While leveraging blog sidebars to pump PageRank and anchor text is pretty bad, at least it was not in the editorial content of blog posts. But it looks like many Mahalo employees not only put links in their sidebars, but they publish posts that consist of little but a link laundry list pointing at various seasonally hot parts of the Mahalo site.

The above is just a small sample of such posts promoting Mahalo. There are probably hundreds or thousands of such posts floating around the web. What makes that strategy any better than the "evil" Pay Per Post strategy that Jason Calacanis was allegedly against? I guess it is only bad when someone else is profiting from it.

Did Google Actually Penalize Google Japan?

After Google Japan got caught buying paid blog reviews, it was claimed that Google penalized their own site. Sure, their toolbar PageRank score took a hit, but did the penalty do anything to their actual rankings? Not so far as I can tell.

Search Google for John Chow or Text Link Ads and try to find the official branded sites...that is what a real penalty looks like. It looks like The SEO Commandments don't apply equally to everyone.

Thou shalt bear witness against all thy competitors, spying and snitching and ratting on them whenever thou perceivest a purported spam causing grief to Mine index and My corporate ego. And My profits. For thus shalt thou spare Me labor and the expense of attending to Mine Own job. And if thou wilt not lay it to heart to give glory to My name in this manner, behold, I will corrupt thy ranking, and spread dung upon thy name, and castigate thee as unethical, and thine SEO agency shall be damned and misranked in all eternity. For verily, I am a jealous Search Engine.

Ad Networks - "Partners" Hoarding Publisher Data For Profit

Are the big networks trying to lock up their data?

It would appear that some big players are trying to muscle in between the user and the webmaster by limiting the webmaster's access to valuable statistical data.

The excellent SmackDown blog has a post about Google reportedly testing Ajax results in the main SERPs.

Sounds innocuous enough, right?

Trouble is, what happens to existing tools? Plugins? Rank checkers? Stats and other referral tracking packages? All tools that rely on Google passing data in order to work.

Many tool vendors would likely adapt, but as Michael points out, what happens if all the referral data shows as coming from Google.com i.e. no keyword data is passed?

Browsers do not include that data in the referrer string, and it is never sent to the server. Therefore, all referrals from a Google AJAX driven search currently make it look as if you are getting traffic from Google’s homepage itself. Now, while this kind of information showing up in your tracking programs might be quite a boost to the ego if you don’t know any better, and will work wonders for picking up women in bars (”guess who links to me from their homepage, baby!”), for actual keyword tracking it is of course utterly useless.
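To see why the keyword disappears, compare a classic Google referrer with an AJAX-style one. Analytics packages pull the keyword from the referrer's query string, roughly like this simplified sketch:

```python
# Simplified sketch of how analytics packages extract the keyword from
# a Google referrer. Classic result clicks carry the query in the "q"
# parameter; AJAX-style results keep it in the URL fragment, which
# browsers never send to the server, so nothing can be extracted.
from urllib.parse import urlparse, parse_qs

def keyword_from_referrer(referrer: str) -> str | None:
    parsed = urlparse(referrer)
    if "google." not in parsed.netloc:
        return None
    return parse_qs(parsed.query).get("q", [None])[0]

classic = "http://www.google.com/search?q=seo+book&hl=en"
ajax = "http://www.google.com/"  # the #q=seo+book fragment never arrives

print(keyword_from_referrer(classic))  # seo book
print(keyword_from_referrer(ajax))     # None
```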

Perhaps the only place you'll be able to get this data is Google Analytics? Is this the next step - a lock-in?

It has happened before.

Remember the changes to AdSense? Google introduced a new form of tracking code that third party tools can't read. However, that data is available within Google Analytics.

This obviously puts other tracking vendors at a competitive disadvantage, and signals to the webmaster community just where the ownership of that data lies.

Data Lock-In

There appears to be an emerging trend whereby networks are leveraging their power against the interests of individual webmasters in terms of data ownership. Having been locked out themselves for a few years, the middle men are trying to squeeze their way back in again.

Take a look at the new contracts of GroupM, the world's largest buyer of online media, as detailed in GroupM Revises Terms For All Online Ad Buys, Claims Data Is 'Confidential' on MediaPost:

The wording in GroupM's new T&Cs, which are attached to all the insertion orders and contracts it submits to online publishers beginning this year, amends the current industry standard by adding the following: "Notwithstanding the foregoing or any other provision herein to the contrary, it is expressly agreed that all data generated or collected by Media Company in performing under this Agreement shall be deemed 'Confidential Information' of Agency/Advertiser." ... Experts familiar with online advertising contracts say the term is a smoking gun, because it raises a broader industry debate over who actually owns the data generated when an advertiser serves an ad on a publisher's page. Is it the advertiser's data? Is it the agency's data? Is it the publisher's data? Under the current industry standard, the data is considered "co-owned" by all sides of the process, but some believe the new GroupM wording seeks to shift the rights over data ownership exclusively to the advertiser and the agency.

The article also suggests that other ad providers may follow suit. What this may mean is that you can't leverage your data in other ways. You might not even be able to collect it.

Whilst this issue has popped up again of late, it is nothing new. There has long been a battle for consumer data because it is so valuable. The ad networks can create a lot of valuable data as a by-product of their advertising placement, because they can leverage network effects and scale in the way the individual webmaster cannot. Naturally the next step is to lock it up and protect it.

The cost of protecting that data may come at the webmaster's expense. As the MediaPost article asks, who does the data belong to? The publisher or the ad network? Both?

Traditionally, it's been both. But that might be about to change, if the above contract is anything to go by.

Forced Partnerships

Incidentally, other contracts really push the boat out when it comes to depriving webmasters of control. TechCrunch reported that the Glam Network, a large ad provider made up of advertising affiliates, includes this little clause in their contract:

10. Right of First Refusal
a. Notice. If at any time Affiliate proposes to sell, license, lease or otherwise transfer all or any portion of its interest in any of the Affiliate Websites, then Affiliate shall promptly give Glam written notice of Affiliate’s intention to sell....

Essentially, if you want to sell your website, and you've agreed to these terms, then Glam has the right of first refusal on the sale! Nice.

What this all might lead to is less ownership, less control, and less flexibility for the individual webmaster when dealing with big networks.

Or perhaps, in the case of Google, they're going to find other ways to pass data and just haven't outlined how yet.

One to keep a close eye on, methinks...
