Affiliate marketing is a business model whereby the marketer is paid a commission based on the products or services they help to sell.
Typically, a merchant provides the billing, the stock, the handling, and the customer service function, whereas the affiliate finds the buyers, and directs them to the merchant's site.
This business model is a natural fit for search marketers. The search marketer need only get the traffic by way of search rankings or PPC, and the profits come flooding in.
That's the theory, anyway.
The reality is that most affiliate marketers aren't making much, if anything. You'll find thousands of e-books promising instant riches by way of affiliate marketing; however, the people making the money tend to be the people selling the books!
In this guide, we'll show you how affiliate marketing really works. We'll look at the nature of the game, the obstacles, and the SEO techniques and strategies you can use to create profitable and defensible revenue streams.
History Of Affiliate Marketing
Commission selling and revenue sharing is nothing new. It pre-dates the internet. However, unlike off-line equivalents, the internet version requires little active selling effort on the part of the affiliate, other than placing internet-based advertising. Needless to say, affiliate marketing on the internet took off quickly.
According to Wikipedia,
"The consensus of marketers and adult industry insiders is that Cybererotica was either the first or among the early innovators in affiliate marketing with a cost per click program.[2] During November 1994, CDNOW launched its BuyWeb program. With this program CDNOW was the first non-adult website to introduce the concept of an affiliate or associate program with its idea of click-through purchasing. CDNOW had the idea that music-oriented websites could review or list albums on their pages that their visitors may be interested in purchasing. These websites could also offer a link that would take the visitor directly to CDNOW to purchase the albums."
How Affiliate Programs Work
Most affiliate programs work like this:
A visitor arrives at a site run by an affiliate
The visitor clicks on a link that leads to the merchant's site
The lead is identified as belonging to the affiliate by way of a tracking code, a cookie, or URL referral
If the visitor then buys a product or service, the affiliate receives a commission (a minimal sketch of this flow follows the list)
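As a rough illustration of that flow, here is a minimal Python sketch of affiliate attribution. The aff_id parameter name, the 10% commission rate, and the 30-day cookie window are illustrative assumptions, not any particular network's implementation.

# Minimal sketch of affiliate attribution (illustrative parameter names only).
from urllib.parse import urlparse, parse_qs
from datetime import datetime, timedelta

ATTRIBUTION_WINDOW = timedelta(days=30)  # assumed cookie lifetime

def affiliate_from_url(url):
    """Extract the affiliate ID from an inbound click, if present."""
    params = parse_qs(urlparse(url).query)
    return params.get("aff_id", [None])[0]

def record_click(url, cookie_jar, now=None):
    """Store the affiliate ID and click time in the visitor's cookie jar."""
    aff_id = affiliate_from_url(url)
    if aff_id:
        cookie_jar["aff_id"] = aff_id
        cookie_jar["clicked_at"] = (now or datetime.utcnow()).isoformat()

def commission_for_sale(cookie_jar, sale_amount, rate=0.10, now=None):
    """Credit a commission if the sale falls inside the attribution window."""
    if "aff_id" not in cookie_jar:
        return None, 0.0
    clicked_at = datetime.fromisoformat(cookie_jar["clicked_at"])
    if (now or datetime.utcnow()) - clicked_at > ATTRIBUTION_WINDOW:
        return None, 0.0
    return cookie_jar["aff_id"], round(sale_amount * rate, 2)

# A click followed by a $100 sale credits affiliate "A123" with $10.
jar = {}
record_click("https://merchant.example.com/widget?aff_id=A123", jar)
print(commission_for_sale(jar, 100.00))  # ('A123', 10.0)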
Another common model involves capturing sales leads and then selling them on to a merchant, e.g. mortgage applications.
The benefit to the merchant is a low-risk sales channel. The merchant only has to pay the affiliate if a visitor buys something, or qualifies as a genuine lead.
Therefore, most of the risk in affiliate marketing lies with the affiliate. The affiliate must buy or earn the traffic, but will only make any money if the traffic actually converts into a buyer or a lead.
How To Become An Affiliate
Becoming an affiliate is, in most cases, easy. You fill out a form and you're good to go.
Incidentally, we have one of our own :)
The hard part is getting high quality visitor traffic. A lot of people are fighting for that same traffic, which means it is going to require time, effort and cost to acquire. In order to make a successful business out of being an affiliate, you need to get traffic at a lower cost than you can "sell" it.
Let's take a look at a PPC approach to illustrate the affiliate business model:
You sign up for an affiliate program
You choose a product to market
You place PPC ads on Google AdWords at $0.50 per click
These PPC ads lead to your landing page
The landing page contains your affiliate link, leading to the merchant site
If the visitor buys, you make a commission of $100.00
If you bought one hundred clicks at $0.50 each and one of those visitors converted, your cost of advertising was $50 and your affiliate payment was $100, so your profit was $50. If you can repeat this automated feat a few hundred times per day, you'll soon be driving that new Ferrari.
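To make the arithmetic explicit, here is a small sketch of the same calculation in Python; the 1% conversion rate is simply the one implied by the example above (one sale per hundred clicks).

# Break-even arithmetic for the PPC affiliate example above.
def affiliate_profit(clicks, cost_per_click, conversion_rate, commission):
    ad_spend = clicks * cost_per_click
    revenue = clicks * conversion_rate * commission
    return revenue - ad_spend

# 100 clicks at $0.50, a 1% conversion rate, and a $100 commission:
print(affiliate_profit(100, 0.50, 0.01, 100.00))  # 50.0 profit

# Break-even cost per click = conversion_rate * commission:
print(0.01 * 100.00)  # $1.00 - bid above this and the campaign loses money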
The problem is, of course, that many other affiliates, and the merchants themselves, are doing likewise. This drives up the cost of the clicks and reduces the margin available to the affiliate. Typically, affiliates have little or no control over the margins they can attach to the products or services of the merchant.
The PPC marketers tend to work on slim margins and high volumes. Those who can do high volumes tend to have more leverage with their merchants in terms of margin. Another barrier for the new entrant is that PPC accounts, like Adwords, build up a credibility history, which can give high volume players lower prices.
SEO Tactics And Techniques
Unlike PPC, SEO doesn't involve a cost per click charge. Therefore, if an SEO can rank pages, s/he stands to make significant margins. The SEO still has competition from other SEOs chasing the same rankings, but it is harder for new entrants to unseat the SEOs' positions, as often happens in PPC.
There are a number of different ways to approach SEO affiliate marketing, from building a site consisting of a merchant's product inventory, to building a comparison shopping site, a review site, or simply creating a collection of on-topic landing pages.
In choosing your approach, it pays to keep one thing in mind:
Google doesn't like you
Google's Quality Rater Guidelines
A few years ago, search engines weren't very good at spotting duplicate pages.
If you performed a search in any of the competitive PPC areas - in affiliate marketing, PPC also means "Pills, Porn & Casinos" - you'd likely see thousands of near-identical sites. The search results "street" was chock full of pimps, and the law was pretty much powerless to stop them. Take out a few pimps, and there would be tens of thousands stepping up to take their place.
These days, Google does a better job of weeding out duplicate content, and what it deems "low value" pages. This has led to significant changes in the way affiliates approach affiliate marketing.
To understand the approach you'll need to take, let's firstly look at how Google classifies affiliate sites.
The Google Quality Rater Guidelines, a document which is reportedly a training course for Google's army of human spam police, was leaked to the internet in 2007.
This document provides valuable insights into how Google classifies spam. Anyone interested in SEO will find this document essential reading, but it is of particular interest to affiliates. The document singles out affiliates for special mention on a number of occasions. Those mentions are usually followed closely by the word "spam".
State your reason for assigning “Spam”, “Maybe Spam”, and “Malicious” flags. For example, Sneaky redirect to eBay....Amazon thin affiliate
Major cosmopolitan cities are preferred targets for spammers, especially hotel affiliates. Such results should be labeled as Spam, even if they have relevance to the query
Thin Affiliates: Spammers make money when a transaction is made after the user has clicked through to the merchant’s site
Types Of Spam - A thin affiliate is a page that exists to deliver a visitor to a page on another domain with a different owner. Keywords deliver visitors to the affiliate page, and links on the affiliate page deliver visitors to the second page, which is owned by a real merchant.
Not great news for the budding SEO affiliate.
However, Google does make a distinction:
Thin affiliates are bad
Other types of affiliates are not
What is a Thin Affiliate?
Google defines a thin affiliate as:
"...a page that exists to deliver a visitor to a page on another domain with a different owner. The thin affiliate site contains text and perhaps images copied from the merchant site. It offers no (or very little) value-added service while earning its commission."
Google also states the types of affiliate pages it deems to be acceptable.
"If a page offers some value in addition to its links to the merchant, then it is not a thin affiliate. For example, if the affiliate offers price comparison functionality, or displays product reviews, recipes, lyrics, etc., it is not a thin affiliate, and, therefore, not Spam"
Google gives examples such as www.shopping.com, www.pricegrabber.com, and Kelkoo.co.uk. These sites are deemed acceptable because they provide extra value.
So, if you want to fly under Google's spam radar, you should aim to become a "fat" affiliate*. In order to do this, you should look to add value of an informational nature.
*Note: you can still be a thin affiliate and make money. There are plenty of these sites in the index. However, they are engaged in an arms race with Google, and it is a race that Google will likely win. Most of a hardcore thin affiliate's effort goes into staying one step ahead of Google, which can be a lot more work than simply building more valuable content.
Create Valuable Content
The Search Quality document also tells us what type of content Google finds valuable.
Google has a content rating scale which consists of five grades: Vital, Useful, Relevant, Not Relevant, and Off-Topic. In order to escape Google's spam filters and hand edits, you must fit into the first three grades.
The Search Quality document goes into great detail in terms of defining these grades, but the most important point to remember is that a page is rated based on the match to the *concept* of the query, not the presence or absence of the query term on the page. What this means is that it isn't good enough for a page simply to mention a keyword term; the page must "answer" the visitor's query. This blows away a lot of conventional theory on SEO - relevance isn't just about adding keywords.
Google doesn't treat all search queries the same. Google separates queries out into three categories: navigational, informational, or transactional.
A navigational query is a query intended to locate a specific web page. For example, "yahoo mail". The searcher clearly wants to find the Yahoo Mail service, not information about Yahoo Mail, or where to buy mail services.
An informational query seeks information on a topic. For example, "tsunami". A Wikipedia article providing information about a tsunami would be a relevant result.
A transactional query seeks to complete a transaction on the Web – for money or free – of a product or service. For example, "Beatles poster". A relevant result would be a page on which to purchase such a poster.
Affiliates often focus on transactional queries. These queries indicate a person is some way along the sales funnel and is ready to buy, which suits the affiliate just fine.
However, the danger is that if you focus solely on transactional queries, you may well be labeled a thin affiliate, especially if your next step is to link to a merchant's site. Google will look to differentiate the affiliate sites from the merchant site, leave the merchant site in the results, and flush the thin affiliates. After all, the thin affiliates are adding no value to the transactional sales process.
Create Added Value
A good approach, and one used by many super affiliates, is to create hybrid sites.
A hybrid site covers both informational and transactional queries. There are a number of reasons why doing this is a good idea.
Firstly, Google is more likely to identify your pages as "useful" if you add value to the sales process. For example, rather than just having a transactional landing page that repeats the same offer the merchant is making, you might create a page that compares the relative merits of various products.
Secondly, Google's algorithms are constantly changing in favor of high quality, authoritative content. Not only does the content need to be authoritative, but it needs to appear on an authoritative domain in order to rank well. In order to be perceived as a quality domain, you'll need high quality linkage data. Consider how difficult these links are going to be to obtain for a purely commercial site, let alone a purely commercial affiliate site.
If you provide genuinely useful information, you'll achieve three things. One, it will be easier to get links. Two, you will build some brand equity that can be used for other purposes if the merchant doesn't work out, e.g. AdSense. Lastly, it will be less likely you'll be taken out by Google. Google wants genuinely useful information in its index.
The downside is that useful information can take a long time to build. The alternative, however, is engaging in a cat-and-mouse arms race with Google, which can also be time consuming.
Check out this useful tool from Microsoft AdCenter Labs: http://adlab.microsoft.com/Online-Commercial-Intention/Default.aspx
Even a shopping oriented site, such as shopping.com, is predicted to have visitor traffic intent that is 23% transactional, and 77% informational. If this fat affiliate site didn't provide a high level of information, it would miss out on a lot of traffic.
Tips & Tricks
Disguise Your Shopping Cart
One spam flag is a transaction occurring on a different site, i.e. the merchant's site.
Look for affiliate programs that will let you host the shopping cart on your own site. You can pass the information to the merchant at the very last stage of the transaction, thus hiding it from all but the most determined quality rater.
Also keep in mind Google's guidelines for recognizing true merchants:
A “view your shopping cart” link that stays on the same site and updates when you add items to it
A return policy with a physical address
A shipping charge calculator
A “wish list” link, or a link to postpone purchase of an item until later
A way to track FedEx orders
A user forum, the ability to register or login, a gift registry
An invitation to become an affiliate of that site
If you're handling transactional queries, the more of these signals you can include, the more likely you are to stay beneath Google's radar.
Redirects
If you do redirect to a merchant, try to cloak your redirects in scripts. It is less likely Google will follow these links to the merchant site. It also helps protect you from unscrupulous operators who may steal affiliate referrals.
This is by no means foolproof, however, and unlikely to pass human review.
When you feel unsure if a page is spam, ask yourself the following: if I remove the copied content, scraped news feeds, fake forums and blogs, thin affiliate links, and parked/expired domain links, and all that is left is PPC ads and sponsored links, then the page is probably spam.
Think reviews, comparisons, context, sales questions and answers, buying advice, trends, statistics, social elements, discussion, competitions, awards, etc.
Target The Regions
The algorithms used on some regional variations of Google can be more forgiving of thin affiliate sites than Google.com. This is because the depth of content in the regions can be shallow, so in order to show enough results, Google often lets through content it wouldn't in deeper markets, like .com.
If you're having trouble competing in the US space, try out some regional affiliate programs. Not only is the algorithm more forgiving, but the competition is decreased. The downside is the lower traffic numbers overall.
Coupons
Offering coupons can be a great strategy, as it helps differentiate your offering from other affiliates, and everyone loves a bargain.
Here's an example Aaron is using to promote AdCenter.
Load Up On Relevant Keywords
Look at how many keywords are integrated into this page.
Keywords include:
2008
pubcon
webmasterworld
webmaster world
coupon
coupons
code
discount
promo
promotional
las vegas
november
conference
brett tabke
That page contains low volume, but high value keywords, including coupon keywords. Yet, the page doesn't look spammy.
The demographics of MSN and Yahoo are different from those of Google.
It is fair to characterize most MSN users as less technically savvy than users of the other two engines. There is also anecdotal evidence to suggest they are more interested in shopping than in research. Yahoo lies somewhere between the two points. The users of these engines may not care so much about advertising, so don't overlook these valuable channels.
They also don't appear to be as good as Google at filtering out thin affiliates.
Brand Terms
If you can, try to target brand-related terms.
Brand terms tend to be transactional, especially if you directly target such terms. For example, "discounts on Toshiba televisions".
It can be difficult for PPC marketers to bid on brand terms because of legal issues, but not for SEOs. The main obstacle you'll come across is Google's new brand-oriented search algorithm.
Down Market
The current economic crisis might be good news for affiliates. People with less disposable income tend to be more interested in value. Consumers have an ingrained perception that the best prices can be found online.
So, in your copy, emphasize convenience and savings, and communicate the value proposition. Price comparison affiliate approaches should do well in the current market.
Program Selection
While outside the scope of this document, here are a few tips on how to select a merchant.
Avoid saturated affiliate areas
If you can, bypass affiliate programs altogether and approach merchants directly. This can be more work, but it can pay off handsomely if you have the niche largely to yourself.
Don't pay any attention to anyone waving around an oversized check as proof of their earnings
What they're not telling you is how much they spent. If their check is for $1M and they spent $2M, all they are saying is "I lost $1M and - hey - you can too!"
Don't listen to anyone who tells you what specific area to get into, either in a free forum or for $97
Why would any affiliate give away a treasure map? The answer is they don't. You'll usually see this type of information long after a niche has become heavily saturated. Most affiliates work a niche, then move on once the area becomes too well known to others.
Do research niches
Approach affiliate marketing like you would any other business, and ask the same questions. Is this niche crowded? How will I differentiate my offering? What can I do that my competitors don't do? Is there a market for this product or service?
A major key to success in affiliate marketing is niche selection, so study up on this area. You'll be head and shoulders above the rest :)
Some larger online publishers are facing declining display ads with a bold strategy: bigger, louder, and more obnoxious ad units. AdWeek reports:
The fixed panel, a 336-by-860-pixel banner that is wider than the standard skyscraper and follows users as they scroll down the page.
The XXL box, a 468-by-648-pixel unit that can expand with video.
The pushdown, a 970-by-418-pixel placement that takes up over half of the page before rolling up.
We recently added a slideup and a popup to the site here, but you should be able to click them once and not see them again (at least until you clear cookies), and at least they are marketing our own site.
But the idea of making larger and more obnoxious ad units some sort of standard for cross-selling seems to be against what is working. Most of Google's ad revenues come from tiny text ads that are relevant to user demand. One of the best ways to have relevant ads is to create what users want and sell it. If they are going to spend that many pixels on ads, then rather than making bigger ad units publishers should use that content area to sell, add premium services to their sites, and start selling content.
Phorm, a UK company that partnered with BT to run secret trials to target ads based on usage data, was roasted by the media with article titles like Phorm’s All-Seeing Parasite Cookie.
Google will use data it collects about what Web sites users visit and what it knows about the content of those sites to sort its massive audience of users into groups such as hockey fans or travel enthusiasts. The data won't be drawn from users' search queries, but from text files known as cookies that Google installs on the Web browsers of users who visit pages where it serves ads.
DoubleClick, AdSense, Google Toolbar, Gmail, Youtube, Blogger, Google Groups, Google Checkout, Google Chrome, Google Analytics...there are lots of ways to track you, even if you do not want to be tracked. Google will allow users to opt out of such targeting, with yet another cookie, but if you clear cookies then you are back in the matrix again.
And while Google claims they are not using search queries in their current behavioral targeting, Danny Sullivan wrote:
Google confirmed in a session I moderated at the Omniture Summit last month that they have tested behaviorial targeted ads using past search history data. Again, that doesn’t seem to be part of this release, but it could come in the future.
"People use the web in a crisis, when wondering whether they have a sexually transmitted disease, or cancer, when wondering if they are homosexual and whether to talk about it … to discuss political views."
...
"The power of this information is so great that the commercial incentive for companies or individuals to misuse it will be huge," he said. "It is absolutely essential to have absolute clarity that it is illegal."
If Google continues down this path unquestioned, then in due time you may not be able to get health insurance because of a web page you viewed or a keyword you trusted Google enough to search for. Better luck next life!
Brian Clark notes the rise of the word authenticity in the field of marketing. Clay Shirky wrote that transparency is the new marketing. For individuals these are true, because if some people grow to like you, some will grow to hate you, and it can wear you down to try to fight off a bad reputation in a sound-bite culture. Just ask Jim Cramer about Bear Stearns.
But for larger companies, perceived transparency & authenticity are far more important than actually having either.
Google's Free Ride
Because Google provides a valuable service for free and is easy for end users to like, they can get away with a lot of stuff that a company like Microsoft could never do.
Paid Posts
Google's search engineers have waged a war on paid blog posts. Google Japan knew they were operating outside of the guidelines when they were buying links in blogs. Google attempted to sweep the story under the rug until they knew it was going to get enough exposure that they couldn't effectively bury it, and then it became a case study for their anti-spam team, when they applied a fake penalty against their own site.
Click Fraud
One of my AdWords content campaigns had an image ad unit that was pulling a 2% clickthrough rate and getting over 600 clicks a day, for a cost near $100 a day. I have already blocked a lot of the nasty no-value exclusion categories but I spent the last couple days blocking some more, and the above $100 a day spend was reduced to $5.42 once the fraud and skimming was removed.
I know Google says they offer some refunds for fraudulent clicks, but some of the stuff in their networks is so fraudulent that the only way to police it is to remove it completely. Over the past year some of these fraudulent sites have got hundreds and thousands of dollars from me for setting up nothing more than a thinly veiled click fraud botnet - feels more like Yahoo! than Google when I looked at the traffic, but at least I was able to filter it out after they stole some of my money.
Such fraud damages the entire ad ecosystem - advertisers have their budgets depleted due to click fraud, legitimate publishers are paid less because content is viewed to have less value when so much of the advertiser ad budgets are blown on click fraud, and web users see lower quality ads because some of the best ads had their budgets blown on click fraud.
Support Piracy
BearShare is an official Google search partner. At one point years ago I saw ads for a company supporting illegal downloads of their very own copyrighted work. Now Google has moved on to promoting keywords with Torrent in them on brand searches.
If the domain name has leech, domain, or torrent in it then you know most advertisers do not want to subsidize it. Google needs to make a category for downloads and warez sites. The only reason they have not is because doing so would make people realize how bad some of the content network is.
Reverse Billing Fraud
Many services are marketed as being free or complimentary or just pay shipping, and then in 6-point text in the footer there is a disclaimer about how your credit card will be charged $20 three times a month until you notice it. These typically promote broad market fads & services like acai berry diets, ringtones, credit checks, and free government grants. A group of con men / scam artists jumps from one opportunity to the next to scam consumers using the Google ad network.
The government grants scam made enough of a footprint that the FTC is getting involved. Google claims that they are sorting out the issue:
"Our AdWords Content Policy does not permit ads for sites that make false claims, and we investigate and remove any ads that violate our policies," said Google in a statement e-mailed to ClickZ News. "We have discussed these issues with the Federal Trade Commission and reaffirmed our commitment to protecting users from scam ads."
The Opera CEO believes that Microsoft giving users the option to turn IE 8 off is not enough, and the EU is trying to rip Microsoft apart day by day. Even if Microsoft is able to buy Yahoo! Search it will not be enough to compete. Why? Even when you use Microsoft's products they recommend Google's stuff over their own.
Google's Momentum
While Microsoft is busy fighting off competitors in many markets, Google keeps gaining market-share and market leverage in new verticals through soft-bundling.
One of my clients that did not use Google Checkout simply had to stop advertising on AdWords until Checkout was enabled, because without it the usually slim profit margins on AdWords turned negative. It turns out that pricing ads based on "quality" with a discount for higher clickthrough rates allows the highest quality advertiser to rank #1, so long as they are using Google Checkout to give Google more market data and another chance at monetization. ;)
The Google Chrome browser is recommended on Youtube, advertised all over the AdSense network, and pushed via bundling partnerships with the likes of the Real player and Divx. Its release forced Microsoft to make some of their security products free, and will keep costing both companies money, with the hope that it costs Microsoft more than it costs Google.
Google is still crying to the EU that Microsoft's business behaviors are uncompetitive. Once you get branded it is hard to get un-branded. In spite of the recent slowdown in search volume Google still has a lot of market momentum behind them. You will know Google is in trouble when the market stops giving them leeway and treats them more like Microsoft.
How You Can Apply This to Your Business Today
1.) Always push to own a market default position and once you achieve that position keep investing in maintaining it, while reminding the market that your growth was organic due to your superior quality.
2.) Even if the business model is not there you can always create one/bolt one on if you get enough exposure. In the age of soft bundling it is no accident that we offer some of the best free distributed SEO tools like the SEO Toolbar and SEO 4 Firefox, with the intent of helping push this site into default market status.
3.) What sorts of distributed marketing can your site benefit from?
When I got on the web, one of the first mistakes I made was trying to go after the cheap traffic on the second and third tier networks. I arbitraged one of them and was pulling a 300% ROI selling them back their own traffic - but the wankers never paid me a cent.
It can be appealing to think of how to do things cheaper...and sure in the short term it might make sense to do something half-way to get it up and going and then to keep making incremental improvements. But people like Philip M. Parker have created automated technologies to write books. Cheaper is a hard way to compete in the content business.
If you are just fishing for nickels and do not intend to take the market head on then any of the following can take you out:
algorithm updates
remote quality rater or spam report
self-sabotage through doing something a little too clever
more established niche competitors accidentally or intentionally copying you
large general competitors accidentally or intentionally copying you
With search, many early successes will be longtail keywords, but eventually you want to go after the biggest and best that you can achieve. Why? Once you have status you enjoy cumulative advantage in everything else you do. Things that are somewhat remarkable become quite remarkable just because of who is doing them.
It can take a long time to work yourself to the top of a hierarchy. Most people who succeed ignore the hierarchy and look for a way to dominate a related niche.
Ideally, a new player wants to come in with a fresh approach that doesn’t necessarily threaten the existing hierarchy. This allows you to develop an audience by sharing with existing players, not necessarily competing with them.
What you’re looking to do is intensify the niche by doing something more, or differently (or maybe even better) than the existing players. You do this by first evaluating and understanding where the niche is currently, and position your content in a way that pushes the envelope.
Unless you are really well established there is a lot of uncertainty in what you do online. Each additional investment can seem like you are getting closer to the point of diminishing returns, wasting your time. But then surprisingly one day things go way better than expected, and your work is received much better than expected. You get a dopamine rush and the sun shines a bit brighter. Network effects kick in and you have reached the point of increasing returns - where each $ invested returns 10s or 100s of dollars back. I think Seth refers to this concept as The Dip - it's what separates market winners from people who wasted their time.
But you usually have to lose $50,000 to $100,000 in sweat equity to get to that point, at least on your first successful project. The good news is that once you have already done that work nobody (except for you) can really take it away from you. Even if Google or some other market maker does not like you then you still have other social leverage and exposure which can be used to help generate revenue, launch new websites and projects, or (God forbid) get a real job.
Solves my buying too many books and bookshelves problem
You can store notes in it (everything is backed up on Amazon's servers)
You can search against all your books and notes in it (which really turns it into a powerful reference library...makes me want to buy about 3 or 4 of them to store different topics in). This should be VERY powerful for looking well researched and finding money quotes. Steven Johnson (one of my favorite authors) uses Devonthink when he writes a book.
It has an audio/reader version baked in
It has an Oxford dictionary baked in
New books are typically only $9.99 and take less than a minute to download
It starts off where you last read
While it has many shades of gray, it lacks color and does not have a touch screen interface. It is a nice device and will make moving far easier than it would have been if I kept buying so many physical books.
If books get more interactive, with more permeable barriers when they are digitized, then they may play a much bigger role in the web graph. Google's copyright settlement with authors and publishers may make Google more likely to promote books:
“When someone goes to Google, they've got a question in mind and an answer they need,” Jennie Johnson, a Google spokesperson, told DMNews. “We don't really care where [on the Web] that answer comes from. If it comes from a book, great; if it comes from a Web page, fine.”
One of the things I regret over the past couple years is that I let my reading slide. If early usage is any indication of future usage then hopefully the Kindle will help me get into reading more often.
I have been a follower of Jonathan Mendez's Optimize and Prophesize for a while, and recently interviewed him.
At SES in New York you are speaking on a panel titled "search becomes the display OS" - what does that mean, why is this shift happening?
The shift is part of the Darwinian evolution of the web. Many people have mistakenly viewed search as a channel when in reality it is a behavior. It is the way people use the web. This is clear as YouTube is now the #2 search engine, Facebook, eBay & Craigslist are in the top 10 search engines and Twitter is trying to position itself as a real-time search. Search is integral to the web experience.
From the display standpoint we need to keep in mind that this medium does not need ads to support it nor are ads part of the experience. Display advertising was built as a parallel platform - not weaved into the web like search but placed on top of it. Display has always had its own ecosystem of real estate, content and serving that is separate from the public web.
What we’ve witnessed with display’s lackluster performance and the inevitable crash of CPM rates is the idea of it being a stand-alone platform was wrong. Display needs to be an application that is integrated with web platform and the way people use the web. It should be based on user controls and rules based delivery of content. To truly be relevant and useful for people, publishers and advertisers, it must become a web service like search.
Search is currently at the center of the web. Do you foresee any technologies or services that might shift its position?
On the contrary I think it will become more entrenched and important since everyday millions of new pieces of content data keep getting added to the web and older content gets digitized. As I mentioned search is basic human behavior. We all go online with a goal in mind to either recover information and content or discover information and content. Those behaviors are primal. No technology or services will shift them.
How do you place value on a search impression?
The value is based on what you do with the information. Impressions are the ultimate arbiter of interest and demand. Of course if you go to Google trends often you will become somewhat worried about the collective psyche of this country. In all seriousness however, this is business intelligence. Quick story about impressions - a few years ago I was working with a big client and they were launching a new product. We had purchased the category kw for this product over a year and it hardly had any impressions. We strongly advised them against spending two million dollars to launch this product because there was no demand for it. They didn’t listen to the “search” guys. Within a year the CMO was fired because the product flopped. So in that instance I would say those couple hundred impressions were worth two million dollars.
One of the most powerful pieces of search is that the ad unit looks just like the content. What can publishers do to maximize ad integration without risking their perceived credibility?
In my experience you add credibility as a publisher if you provide helpful, useful and interesting content. There’s no reason that can’t be an ad. Most everyone I know has clicked on a Google ad. Sometimes it is preferable. This creates value to Google as a publisher. Ads that are helpful and interesting will add value to other publishers in the same manner because they are helping their visitors. People rarely forget who helped them in a useful way whether it be a website or “in real life.” In fact there is a large intangible value that is not even being captured when this happens. I think some people even refer to it as branding.
What can publishers and vendors learn from the dominance of search when thinking about how to build and brand their websites? What are some easy ways to make our user experiences more relevant?
Give people control over the delivery of content. The most successful online segmentation strategy is when a person tells you what they want -- self-segmentation. That is the beauty of search. The keyword is the ultimate expression of people’s goals. No website or advertiser knows more about what I want than I do! It explains why the best and most successful experiences on the web (Google, eBay, Craigslist, Yelp) have query fields and lots of text links and it is something I always keep in mind in doing page design. As far as branding I think that goes back to what we were just talking about, the site experience. Great experiences build brands and that is the same online as well as off. Keep in mind all of this should be tested and optimized. It is no accident that Google is the #1 brand in the world without spending a penny on advertising. From day one no one has tested online experience more than Google.
Many people have been promoting Twitter as a Google-killer in real-time search. Why are they wrong?
You mean besides the fact that Google made $21B in ad revenue last year, has $8B in cash, and owns half a million servers, while Twitter search has probably 10 employees and no revenue?
There are some major problems with RTS. First let’s start with the way people search. This has been studied and very clearly defined over the years by brilliant people like Andrei Broder, Daniel Rose and others. I recently took the query classifications they defined and applied them to RTS (http://www.optimizeandprophesize.com/jonathan_mendezs_blog/2009/02/misguided-notions-a-study-of-the-value-creation-in-realtime-search.html). I came to the conclusion that with optimal RTS - which is a huge challenge, as I’ll get to - less than 20% of all queries would benefit in any way from RTS.
As far as the technical challenges go, spamming would be very hard to filter in real time. Also, authority, as we know from PageRank, is a fundamental driver of relevance. How do you define authority in real time? If you do not rank results, then is it just a noisy stream? I’ve come to the conclusion that if RTS becomes anything useful it will be a search vertical, like travel. Helpful for certain things but nowhere near a primary search tool. It is still a great addition to the web but not something Google needs to be concerned with. In fact I think Google is in the position to provide RTS for the entire web, which is much more useful than RTS for a single app.
How slow and painful will the transition of ad dollars from offline to online be? What will be the catalyst that allows ad agencies to push search and online aggressively?
Very slow, but this shouldn’t be painful. We know the attention is online so dollars will continue to increase, but I think a $25 billion online industry is pretty good right now. It’s grown much faster the past few years than even the most bullish forecasts from five years ago. The catalyst will be innovation and the businesses themselves that must demand performance. Bill Gross, the inventor of PPC, said it best: “the true value of the Internet is in its accountability…performance guarantees have to be the model for paying for media.” As soon as we embrace performance for all advertising, even so-called brand advertising, we will prove our value and grow our industry. Google stands as proof of concept for this. But the battle over performance will be long and bloody. In just the past couple of weeks we’ve had groups like the IAB and the AAAA speaking out against performance and metrics. This type of rhetoric and their fear of accountability are actually helping to slow down the transition.
How many newspaper companies do you see lasting through this economic downturn?
Not too many. Besides the fact that their authority over the past years has waned with bloggers and false reporting the real problem is that newspapers are not an efficient means of information compared to everything else we have today. What percentage of the paper is relevant or interesting to you? 5%? 15%? Yet you are paying for the entire paper when you buy it. Doesn’t make sense. We used to have town criers too, but then newspapers came along. I don’t think most people will miss them. Times have changed. Maybe we’ve just come full circle – people getting their news from other people they trust is the best way to disseminate information. Who trusts the papers?
What will the online vs offline divide look like in 2 years? 5 years? 10 years?
I’m not so sure we’ll have a divide in 5 or 10 years. The kids graduating high school this year were 8 years old when Google was started. I see kids 4 and 5 years old naturally manipulating iPhones. Many of us have persistent web connection and we like it - we feel uncomfortable without it! Of course it’s nice to get off the grid sometimes but what is happening with digital technology is the great story of our age. Everything is becoming digital, addressable and connected via the web. All of us lucky enough to be working here will reap the rewards of that in the coming years because the growth of digital will far outpace the amount of talent in the workforce. We should have bigger paychecks in 5 years!
Many people focus on one particular segment of the market, whereas you seem to have a well-rounded knowledge of SEO, PPC, user experience, and conversion strategies. How did you find the time to tie all these different disciplines together?
Well, I’ve been at it 11 years so that accounts for the time. It is corny but I love the web and I am passionate about trying to make it more relevant to everything we do. Looking back my career path from Site>Email>SEO>UX>SEM>LPO>Display, it seems like a very natural progression to me. Basically, with one stop for UX I have just been a marketer trying to stay ahead of the advances in marketing technology. Also, I love learning how people use the web and all the disciplines I have worked in are fundamentally rooted in the same thing -- understanding people’s goals and optimizing the delivery and presentation of information to meet those goals. As an industry we tend to divide the web into vectors but we often lose sight that the web experience for people is linear. The more holistic understanding we have generally the better our results.
_____________________
Thanks Jonathan! To read more of his thoughts check out Optimize and Prophesize.
In this guide, we'll introduce designers and developers to SEO, and recommend low-impact, highly effective strategies to integrate SEO into the work-flow.
Background
What Is SEO?
Search Engine Optimization is the process of making a site more visible in search engine results. The higher the visibility, the more traffic a site should receive.
In short, a search engine is a database. The database is populated by data the search engine gathers. It gathers this information by sending out a spider. A spider is a program that resembles a web browser, albeit a very basic one. It gathers the HTML code and sends it back to the database for sorting.
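As a rough sketch of what such a spider does, the following Python snippet fetches a page, collects the links it finds, and keeps the visible text for indexing. Real crawlers are vastly more sophisticated; the URL shown is only a placeholder.

# Minimal crawler sketch: fetch a page, collect its links and visible text.
from html.parser import HTMLParser
from urllib.request import urlopen

class PageSpider(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []   # href values found on the page
        self.text = []    # visible text fragments

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

    def handle_data(self, data):
        if data.strip():
            self.text.append(data.strip())

def crawl(url):
    html = urlopen(url).read().decode("utf-8", errors="replace")
    spider = PageSpider()
    spider.feed(html)
    return spider.links, " ".join(spider.text)

# links, text = crawl("https://www.example.com/")  # placeholder URL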
The main areas of concern are:
Can the spider crawl the site?
Will it rank the site pages above those of the competition?
SEO helps ensure those two things happen.
What Does SEO Involve?
The two main elements of SEO are:
On-site factors
Inbound linking from external sites
The aim is to ensure the site pages can be crawled by search engine spiders, and to increase the likelihood those pages will rank well. The exact formula whereby a page ranks is a matter of guesswork. However, there are core aspects which, if integrated well, significantly increase the chances of ranking.
Firstly, pages must be crawlable by a search spider.
Secondly, the pages should be in a format likely to result in search traffic.
This typically involves aligning page content with keyword queries. For example, if a website owner wants to rank for the search term "web designers Seattle", they may include a page entitled "Web Designers Seattle" on the site, and that page will contain tightly focused information on that one topic. The ranking process is more complicated than this, but the important point to note is that searcher intent and site content should be closely aligned.
Thirdly, pages should have a number of links pointing to them from other sites. This aspect is largely out of the hands of designers and programmers, but if the content is not easy to link to, then problems can arise.
So, what aspects do you need to integrate into your process?
Let's start with designers*.
*There is a high degree of cross-over in terms of tasks between designers and programmers, so you should familiarize yourself with both sets of guidelines below
Notes For Designers
Failure to integrate SEO into the work process can lead to significant site redesign later on; however, integration is relatively painless if implemented early.
SEO is most effective when it forms part of the site brief and specification.
Separate Content From Presentation
The leaner the code, the faster a page will load. The faster a page loads, the more likely it is that the spider will be able to crawl the entire site. Bloated code can also cause problems in terms of the weighting the search engine algorithms give to elements on the page. Style sheets, JavaScript, and other code should be called from a separate file, rather than embedded.
Navigate The Site With A Text Browser
A search engine spider is similar in function to a text-only browser. So, if you can navigate a site using a text-only browser, the site will be able to be crawled by a search spider.
Whilst spiders are getting better at handling media content, they can have problems both crawling it and deciphering it.
If your site features a lot of media content, particularly in terms of navigation, make sure you have an alternative navigation option that a spider can follow. Spiders prefer HTML links in compliance with W3C coding standards.
If your pages feature a lot of graphical or video content, use body content to describe it.
Flash
It's a myth you can't use Flash.
The key to using Flash lies in the method of integration. Avoid using Flash as the main method of navigation and content delivery. Instead, embed Flash within HTML pages that contain other HTML content and links.
The search engines are getting better at crawling Flash, but they don't handle it as well as they handle HTML. If a search engine experiences problems indexing a site, it is likely to back out, leaving other parts of the site unindexed.
It can also be difficult, and sometimes impossible, to link to individual Flash pages. Pages that aren't well linked are less likely to show up in search results.
Adobe has a solution that involves an API and the Flash Player run-time, which allows search engines to read the content of SWF files. You can find out more in "The Black Magic Of Flash SEO"
If a site must remain all Flash, create a "printer friendly" version of the site that contains all the same text data as the Flash site. If this version of the site is well linked, the search engines will crawl it.
Keep in mind these are workarounds and are not ideal. If your competitors' sites make it easy for the search engine by not using Flash, they are likely to rank ahead of you, all else being equal.
Keep your important pages within easy reach. That typically means close to the top of the hierarchy.
If possible, try to incorporate a site map and make sure it is linked to from every page. That way, the spider is less likely to miss crawling content.
Use Text Headings
Headings should use HTML text, as opposed to graphics. Search engines place importance on keyword terms contained within links and heading tags.
If you need to use graphic headings for some pages, ensure the alt tags are populated, and/or the page degrades to a text default if graphics are turned off.
It is also highly recommended that you repeat the heading in HTML text somewhere else on the page, or in the link text of any page pointing to that page.
Mark-Up The Code
The following tags should be populated with keyword data. The SEO will want to alter the content of these fields.
Title
Alt
Meta Description
Other tags often manipulated by SEOs include:
H1, H2, H3 etc
Bold or Strong
Hyperlinks - particularly the link text
Body text
Frames
Where possible, avoid using frames.
If the site isn't coded correctly, the search engine may simply index the master frame, or the search visitor may arrive at a page that is impossible to navigate, because the page is viewed out of context.
If you do use frames, use a simulation spider to highlight any problem areas. You could also use the noframes tag, although the search engines aren't overly fond of data contained in this tag due to abuse, i.e. showing one thing to viewers and something different to spiders.
The search engines look for keywords in file names in order to help make sense of a page. Where possible, use descriptive and meaningful file names, such as acme.com/hello-world.htm, as opposed to numbers, such as acme.com/1234567.php.
Likewise, directories should also be descriptive and clearly organized.
If you need to use coded URLs, a suitable workaround is to use URL rewriting. Here's a tutorial about URL redirection, and other useful URL techniques.
URL Structure & Canonicalization
Try to avoid placing too many parameters in URLs. This will dilute the descriptive benefits of the file name, making it difficult for the search engine to derive meaning.
Canonicalization is the process by which URLs are standardized. For example, www.acme.com and www.acme.com/ are treated as the same page, even though the syntax of the URL is different. Problems can occur when the search engine doesn't normalize URLs properly. For example, a search engine might see http://www.acme.com and http://acme.com as different pages. In this instance, the search engine has the host names confused.
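As a rough sketch of what normalization involves, the following Python function collapses some common variants of a URL onto one form. The specific rules (lowercase the host, prefer a "www." host, drop trailing slashes) are illustrative choices, not the search engines' actual logic.

# Illustrative URL normalization: map common variants to one canonical form.
from urllib.parse import urlsplit, urlunsplit

def canonicalize(url, prefer_www=True):
    scheme, host, path, query, _ = urlsplit(url.strip())
    host = host.lower()
    if prefer_www and not host.startswith("www."):
        host = "www." + host
    path = path.rstrip("/") or "/"   # treat /folder and /folder/ alike
    return urlunsplit((scheme.lower(), host, path, query, ""))

for variant in ("http://acme.com", "http://www.acme.com/", "HTTP://WWW.ACME.COM"):
    print(canonicalize(variant))     # all print http://www.acme.com/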
It is preferable to link to a folder URL, e.g. site.com/folder/, as opposed to an absolute URL which specifies the file name. The link structures SEOs set up are sensitive to changes in the way the document is specified. For example, inbound links may be lost, which may lead to a loss in ranking.
Major switches later on - say, a change in CMS - can cause headaches and a considerable amount of reworking if all the URLs are specified explicitly.
Links
Internal links should follow W3C guidelines. Search engines place value on keywords contained in standard text links.
It is best to avoid coded, graphical or scripted links. If using these types of links, duplicate the linking structure elsewhere on the page. In the footer, for example.
There should be no more than 100 links per page, and ideally a lot less.
Sitemaps are easy to create, and provide an elegant and painless workaround if your internal linking structure is not optimized.
By including a site map, you also provide a fallback for any crawl-related issues you may not be aware of.
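Alongside an HTML site map page, many sites also publish an XML sitemap (the kind you can submit via Google Webmaster Central, discussed later). Here is a minimal sketch that writes one with Python's standard library; the URLs are placeholders, and real sitemaps can also carry optional fields such as lastmod and changefreq.

# Minimal XML sitemap generator (sitemaps.org protocol, required fields only).
import xml.etree.ElementTree as ET

def build_sitemap(urls, path="sitemap.xml"):
    ns = "http://www.sitemaps.org/schemas/sitemap/0.9"
    urlset = ET.Element("urlset", xmlns=ns)
    for url in urls:
        loc = ET.SubElement(ET.SubElement(urlset, "url"), "loc")
        loc.text = url
    ET.ElementTree(urlset).write(path, encoding="utf-8", xml_declaration=True)

build_sitemap([
    "https://www.example.com/",          # placeholder URLs
    "https://www.example.com/about/",
    "https://www.example.com/contact/",
])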
Custom 404
If a search engine spider encounters a 404 page, it may back out if there are no links to follow off that page.
Create a customized 404 page that contains links to parts of the site that contain further links and aren't likely to be removed, e.g. the home page and site map.
Robots.txt
The robots.txt file is the first file a spider reads. You can specify any areas which you don't want the search engine to index. Not all search engines will obey the directives specified in robots.txt, so don't use this as your sole means of ensuring data stays out of the public domain.
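For reference, this is roughly how a well-behaved crawler consults those directives, using Python's built-in robots.txt parser; the domain and user-agent string are placeholders.

# Check robots.txt the way a polite crawler would (placeholder domain).
from urllib.robotparser import RobotFileParser

rp = RobotFileParser("https://www.example.com/robots.txt")
rp.read()  # fetch and parse the robots.txt file

for path in ("/", "/private/report.html"):
    allowed = rp.can_fetch("ExampleBot", "https://www.example.com" + path)
    print(path, "->", "allowed" if allowed else "disallowed")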
Points Of Conflict Between SEO & Design/Development, And How To Resolve Them
A lot of conflict can be resolved simply by involving the SEO early in the site development process. The benefit in doing so is far less pain for designers and developers than if SEO is bolted on as an after-thought. It's like building a house, then deciding you need to relocate the kitchen.
SEO should be part of the design and development specification.
But what if you have a legacy site?
Here are some requirements you'll likely hear from SEOs:
1. The Site Design Needs To Be Changed
SEO is mostly about providing the search engine with text data. If a site doesn't have much in the way of text, the SEO will look to add text to existing pages, and beef up the site with additional text content.
There was a time when SEOs only needed to add meta-data, however search engines no longer pay much attention to meta data. They look at two areas: visible site content and link structures.
The biggest impact to design is likely to be on the home page. The home page usually carries the most weight with search engines.
Here are some suggestions for preserving design and integrating SEO:
Add text below the fold
Replace graphical headings with text headings
Separate form from content
Include a printer-friendly version of the site
2. We Need To Provide Link-Worthy Content
Search engines place a lot of value on links from external sites. If your site doesn't contain much in the way of notable content, it will be difficult to get links.
Consider integrating a publishing model. For example, you could add a blog, news feed or forum.
Do you have a lot of offline content that could be published online? Whilst such content may not be considered important in terms of your web presence, it can be used to cast a wide net. The more content you have, the more pages you'll likely see in the index, and the more search visitors you'll receive. This content can be relegated in the hierarchy so that non-search visitors aren't distracted by it, and your main site design isn't affected.
Such content is especially useful if it is informational, as opposed to commercial, in nature. It is often difficult to get links to purely commercial content.
Tools & Techniques For Monitoring, Diagnosing & Rectifying SEO Problems
If a site can't be crawled, then it won't appear in search engine result pages. Use the following tools to detect and fix problems.
Google Webmaster Central
Google Webmaster Central provides a host of diagnostic information and is available free from Google. The service will tell you what pages couldn't be crawled, what the errors were, how many duplicate tags you have, and visitor information.
You can also build and submit your XML sitemap, which can solve a number of crawl issues.
Text Browser
If you can successfully navigate around your site using a text browser, then your site can be crawled by a spider.
Pay careful attention to any pages that present navigation problems, and resolve them by placing text links on/to these pages.
SEOBook Toolbar
The SEOBook Toolbar provides details about page popularity, the level of competition, and on-page diagnostics.
One way you can use this tool is to determine if the search engine knows what your page is about by looking at the frequency of words on the page. If those words are mostly off-topic, you may need to include more keyword phrases throughout your copy.
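You can approximate that kind of check with a few lines of Python. This sketch simply counts on-page words after dropping a handful of stop words; it is an assumption about how such a report might work, not the toolbar's actual method.

# Rough on-page keyword frequency report (not the toolbar's actual method).
import re
from collections import Counter

STOP_WORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "for", "is", "on"}

def keyword_frequencies(page_text, top_n=10):
    words = re.findall(r"[a-z']+", page_text.lower())
    counts = Counter(w for w in words if w not in STOP_WORDS)
    return counts.most_common(top_n)

sample = "Web designers in Seattle. Our Seattle web design studio builds fast sites."
print(keyword_frequencies(sample))
# e.g. [('web', 2), ('seattle', 2), ('designers', 1), ('our', 1), ...]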
SEOBook Health Check
The SEOBook Health Check tool also provides a wealth of troubleshooting data. You can determine if you've got error handling issues, duplicate content, and canonical URL issues.
A lot of our best SEO tips are shared on the blog here. That strategy originally came to be because my original business model (for this site) was to sell an ebook, and it was hard to stuff everything inside 1 ebook and expect it to come out congruent, especially
while selling it to a wide audience
when revising it many times
with SEO touching upon so many other disciplines like psychology, sociology, public relations, branding, advertising, content creation, information architecture, social networking, algorithm testing, etc.
Admittedly the ebook was a work in progress. As the search algorithms evolved and my knowledge of the field of marketing improved there were always new ideas I could add (or remove or change)...things where I said "hey, I could make this part way better." But to be able to do that, you have to be able to look at your old work and admit where you were wrong or ignorant (or correct, but shortsighted).
After 4 years of making such updates, you get a lot better at seeing some of your own flaws and thinking about things you could do better, and you get better at seeing underlying trends in the search algorithms...especially as you grow your sites, track the search results, read customer feedback, read search research, read algorithm patents, read Google's internal company documents, and listen to engineers speak in Fed Speak. Each data point adds value to the next.
When I was new to the SEO field, learning SEO was much less complex because the algorithms were less complex and because the market did not have the noise it has in it today. Today there is no shortage of complexity in the SEO industry. But then the SEO industry is made to seem even more complex than it is by people playing semantic games, people willfully misinforming others, and those so desperate for attention that they are willing to write anything in hopes of getting a link or a mention in social media channels.
Rather than calling the update an update (as they are traditionally called), Matt Cutts preferred to call what we saw as an update a change, but as Michael Gray mentioned, those semantics are irrelevant unless Matt chooses to share more information.
When you go around stating there was no update (your definition), when we can clearly see there was an update (our definition), we’ve got a problem. It looks like you’re trying to perform some Jedi mind trick, if you keep repeating there was no update and waving your hand eventually we’ll all believe you. Even worse it’s like you’re trying to tell us what we’re seeing isn’t really there and this is one of those “these aren’t my pants officer” moments from cops.
Even after Matt Cutts said in a video that they made a change, people began passing around that video on Twitter noting how I was wrong about the update and that there was no update. Some of them were probably the same people who denounced the position 6 issue we mentioned - a penalty/filter that was denied, changed/fixed, and then - and only then - a glitch.
I am not sure what sort of bizarro world those "I told you it was not an update" people claiming to be SEOs come from, but I thank them for polluting the free SEO content available on the web and misinforming so many people...they are part of what makes our training program so popular and profitable. They also make the search results less competitive - anyone who is listening to them is heading in the wrong direction building a weak foundation. :)
In the following video Matt Cutts highlighted that he did not feel that the update was driven by brand, but more in concepts of trust, PageRank, and authority:
RankPulse, a tool I used in my analysis of the algorithm change, is powered by the Google SOAP API, which Google will soon stop supporting. Matt played down the size of the algorithm update made by a Googler named Vince. But John Andrews takes a contrarian view, looking at Google's behavior after the algorithm update was analyzed:
You might say that Google’s API,via custom third-party innovations like RankPulse.com, enabled us to “organize the world’s information and make it universally accessible and useful” (which is Google’s corporate mission statement, by the way).
It sure seems contradictory for Google, a company based on the collection and permanent storage of others’ web page content, to forbid others from doing the same. It is also quite egregious for Google to expect to operate secretly, with no accountability (such as might be obtained through archiving of Google results), when Google exerts so much influence over Internet commerce.
At the same time, search engines have migrated from the academic domain to the commercial. Up until now most search engine development has gone on at companies with little publication of technical details. This causes search engine technology to remain largely a black art and to be advertising oriented (see Appendix A). With Google, we have a strong goal to push more development and understanding into the academic realm.......However, it is very difficult to get this (academic) data, mainly because it is considered commercially valuable.
As Google gobbles up your content while shielding its results from unauthorized access, it creates a weakness which a new search service could exploit...by being far more open.
While Google doesn't want anyone to access their proprietary business secrets, if you search for my brand they recommend you look for a torrent to go download an old copy of my ebook.
Sounds like a fair trade, eh? No big deal. Google is a common carrier, and intends to use that to their business advantage whenever and wherever possible.
I hope you (and your business model) are not allergic to peanuts!