A/B testing is an internet marketing standard. In order to optimize response rates, you compare one page against another. You run with the page that gives you the best response rates.
But anyone who has tried A/B testing will know that whilst it sounds simple in concept, it can be problematic in execution. For example, it can be difficult to determine if what you’re seeing is a tangible difference in customer behaviour or simply a result of chance. Is A/B testing an appropriate choice in all cases? Or is it best suited to specific applications? Does A/B testing obscure what customers really want?
In this article, we’ll look at some of the gotchas for those new to A/B testing.
1. Insufficient Sample Size
You set up a test. You’ve got one page featuring call-to-action A and another featuring call-to-action B. You enable your PPC campaign and leave it running for a day.
When you stop the test, you find call-to-action A converted at twice the rate of call-to-action B. So call-to-action A is the winner: run with it and eliminate option B.
But this would be a mistake.
The sample size may be insufficient. If we only tested one hundred clicks, we might see a big difference in results between the two pages, yet that difference may vanish by the time we reach 1,000 clicks. In fact, the result may even be reversed!
So, how do we determine a sample size that is statistically significant? This excellent article explains the maths. However, there are various online sample size calculators that will do the calculations for you, including Evan’s. Most A/B tracking tools will include sample size calculators, but it’s a good idea to understand what they’re calculating, and how, to ensure the accuracy of your tests.
In short, make sure you've tested enough of the audience to determine a trend.
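To make that concrete, here’s a minimal sketch of the standard two-proportion power calculation those calculators perform. The function name and the 5% significance / 80% power defaults are just conventional illustrations, not taken from any particular tool:

```python
import math
from statistics import NormalDist

def sample_size_per_variation(p1, p2, alpha=0.05, power=0.80):
    """Visitors needed in EACH variation to reliably detect a move
    in conversion rate from p1 to p2 (two-sided test)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # e.g. 1.96 for 95% confidence
    z_beta = z.inv_cdf(power)           # e.g. 0.84 for 80% power
    p_bar = (p1 + p2) / 2
    n = ((z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
          + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
         / (p1 - p2) ** 2)
    return math.ceil(n)

# Detecting a lift from a 3% to a 4% conversion rate takes roughly
# 5,300 visitors per variation -- far more than a hundred clicks.
print(sample_size_per_variation(0.03, 0.04))
```

The point is less the formula than the order of magnitude: small relative lifts on low baseline conversion rates demand thousands of visitors per variation before a “winner” means anything.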
2. Collateral Damage
We might want to test a call-to-action metric: the number of people who click the “find out more” link on a landing page. We find that a lot more people click the link when we use the wording “find out more” than when we use “buy now”.
But what if the conversion rate for those who actually make a purchase falls as a result? We achieved higher click-thrus on one landing page at the expense of actual sales.
This is why it’s important to be clear about the end goal when designing and executing tests. Also, ensure we look at the process as a whole, especially when we’re chopping the process up into bits for testing purposes. Does a change in one place affect something else further down the line?
In this example, you might A/B test the landing page whilst keeping an eye on your total customer numbers, deeming the change effective only if customer numbers also rise. If your aim was only to increase click-thru, say to boost quality scores, then the change was effective.
3. What, Not Why
In the example above, we know the “what”. We changed the wording of a call-to-action link and achieved higher click-thrus, although we’re still in the dark as to why. We’re equally in the dark as to why the change of wording resulted in fewer sales.
Was it because we attracted more people who were information seekers? Were buyers confused about the nature of the site? Did visitors think they couldn’t buy from us? Were they price shoppers who wanted to compare price information up front?
We don’t really know.
But that’s good, so long as we keep asking questions. These types of questions lead to more ideas for A/B tests. By turning testing into an ongoing process, supported by asking more and hopefully better questions, we’re more likely to discover a whole range of “why’s”.
4. Small Might Be A Problem
If you’re a small company competing directly with big companies, you may already be on the back foot when it comes to A/B testing.
It’s clear that A/B testing’s very modularity can cause problems. What about cases where only a few tests can be run at once? A/B testing makes sense on big websites, where you can run hundreds of tests per day across hundreds of thousands of hits, but in channels like direct mail only a few offers can be tested at a time. The variance these tests reveal is often so low that meaningful statistical analysis is impossible.
Put simply, you might not have the traffic to generate statistically significant results. There’s no easy way around this problem, but the answer may lie in getting tricky with the maths.
Experimental design massively and deliberately increases the amount of variance in direct marketing campaigns. It lets marketers project the impact of many variables by testing just a few of them. Mathematical formulas use a subset of combinations of variables to represent the complexity of all the original variables. That allows the marketing organization to more quickly adjust messages and offers and, based on the responses, to improve marketing effectiveness and the company’s overall economics.
Another thing to consider: if you’re certain a bigger company is running A/B tests, and achieving good results, then “steal” their landing page. Of course, you can’t really steal it, but you can be “influenced by” their approach: take their ideas for landing pages and test them against your existing pages.
What your competitors do is often a good starting point for your own tests. Try taking their approach and refine it.
5. Alternatives To A/B Testing
The multi-armed bandit problem takes its terminology from a casino. You are faced with a wall of slot machines, each with its own lever. You suspect that some slot machines pay out more frequently than others. How can you learn which machine is the best, and get the most coins in the fewest trials?
Like many techniques in machine learning, the simplest strategy is hard to beat. More complicated techniques are worth considering, but they may eke out only a few hundredths of a percentage point of performance.
A multi-armed bandit algorithm aggressively (and greedily) optimizes for the currently best-performing variation, so the worse-performing versions end up receiving very little traffic (mostly during the exploratory 10% phase). With so little traffic, when you try to calculate statistical significance there is still a lot of uncertainty about whether a variation is “really” worse-performing, or whether its poor showing is due to random chance. So a multi-armed bandit algorithm takes a lot more traffic to declare statistical significance than the simple randomization of A/B testing. (But, of course, in a multi-armed bandit campaign the average conversion rate is higher.)
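As a rough sketch of the idea, here is a plain epsilon-greedy strategy with the 10% exploration share mentioned above. The function name and the simulated conversion rates are purely illustrative:

```python
import random

def epsilon_greedy(conversions, visits, epsilon=0.10):
    """Pick a variation: explore a random arm 10% of the time,
    otherwise exploit the best observed conversion rate so far."""
    if random.random() < epsilon:
        return random.randrange(len(visits))              # explore
    rates = [c / v if v else 0.0 for c, v in zip(conversions, visits)]
    return max(range(len(rates)), key=rates.__getitem__)  # exploit

# Simulated campaign: the second variation truly converts better (5% vs. 3%).
true_rates = [0.03, 0.05]
conversions, visits = [0, 0], [0, 0]
random.seed(1)
for _ in range(10_000):
    arm = epsilon_greedy(conversions, visits)
    visits[arm] += 1
    conversions[arm] += random.random() < true_rates[arm]
```

Once the better arm’s observed rate pulls ahead it soaks up roughly 95% of the remaining traffic, which is exactly why the losing arm never accumulates enough impressions for a tight significance estimate.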
Multivariate testing may be suitable if you’re testing a combination of variables, as opposed to just one, e.g.:
Product Image: Big vs. Medium vs. Small
Price Text Style: Bold vs. Normal
Price Text Color: Blue vs. Black vs. Red
There would be 3 × 2 × 3 = 18 different versions to test.
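To see how the versions multiply, a quick sketch (the variable names are just for illustration):

```python
from itertools import product

image_sizes = ["Big", "Medium", "Small"]
text_styles = ["Bold", "Normal"]
text_colors = ["Blue", "Black", "Red"]

# Full factorial: every combination of every variable.
versions = list(product(image_sizes, text_styles, text_colors))
print(len(versions))  # 18
print(versions[0])    # ('Big', 'Bold', 'Blue')
```

Add one more three-way variable and you’re at 54 versions, which is how the traffic requirements get out of hand so quickly.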
The problem with multivariate tests is they can get complicated pretty quickly and require a lot of traffic to produce statistically significant results. One advantage of multivariate testing over A/B testing is that it can tell you which part of the page is most influential. Was it a graphic? A headline? A video? If you're testing a page using an A/B test, you won't know. Multivariate testing will tell you which page sections influence the conversion rate and which don’t.
6. Methodology Is Only One Part Of The Puzzle
So is A/B testing worthwhile? Are the alternatives better?
The methodology we choose will only be as good as the test design. If tests are poorly designed, then the maths, the tests, the data and the software tools won’t be much use.
Start by first asking yourself a question, something along the lines of, “Why is the engagement rate of my site lower than that of my competitors?” Collect information about your product from customers before setting up any big test. If you plan to test your tagline, run a quick survey among your customers asking how they would define your product.
Secondly, consider the limits of testing. Testing can be a bit of a heartless exercise. It’s cold. We can’t really test how memorable and how liked one design is over the other, and typically have to go by instinct on some questions. Sometimes, certain designs just work for our audience, and other designs don’t. How do we test if we're winning not just business, but also hearts and minds?
Does it mean we really understand our customers if they click this version over that one? We might see how they react to an offer, but that doesn’t mean we understand their desires and needs. If we’re getting click-backs most of the time, then it’s pretty clear we don’t understand the visitors. Changing a graphic here, and wording there, isn’t going to help if the underlying offer is not what potential customers want. No amount of testing ad copy will sell a pink train.
The understanding of customers is gained in part by tests, and in part by direct experience with customers and the market we’re in. Understanding comes from empathy. From asking questions. From listening to, and understanding, the answers. From knowing what’s good, and bad, about your competitors. From providing options. From open communication channels. From reassuring people. You're probably armed with this information already, and that information is highly useful when it comes to constructing effective tests.
Do you really need A/B testing? Used well, it can markedly improve and hone offers. But it isn’t a magic bullet. Understanding your audience is the most important thing. Google, a company that uses testing extensively, seems most vulnerable in areas that require a more intuitive understanding of people. Google Glass is a prime example of failing to understand social context. Apple, on the other hand, was driven more by an intuitive approach. Jobs: “We built [the Mac] for ourselves. We were the group of people who were going to judge whether it was great or not. We weren’t going to go out and do market research.”
A/B testing can work wonders, just so long as it isn’t used as a substitute for understanding people.
Last October Vedran Tomic wrote a guide to local SEO which has since become one of the more popular pages on our site, so we decided to follow up with a Q&A on some of the latest changes in local search.
Q: Google appears to have settled their monopolistic abuse charges in Europe. As part of that settlement they have to list 3 competing offers in their result set from other vertical databases. If Google charges for the particular type of listing then these competitors compete in an ad auction, whereas if the vertical is free those clicks to competitors are free. How long do we have until Google's local product has a paid inclusion element to it?
A: The local advertising market is huge. It's a market Google still hasn't mastered, and one still dominated by IYP platforms.
Since search in general is stagnant, Google will be looking to increase their share of the market.
That was obvious to anyone who covered Google's attempt to acquire Groupon, since social couponing is mostly a local marketing phenomenon.
Their new dashboard is not only more stable with a slicker interface, but also capable of facilitating any paid inclusion module.
I would guess that Google will not wait a long time to launch a paid inclusion product or something similar, since they want to keep their shareholders happy.
Q: In the past there have been fiascos with things like local page cross-integration with Google+. How "solved" are these problems, and how hard is it to isolate these sorts of issues from other potential issues?
A: Traditionally, Google had the most trouble with their "local" products. Over the years, they were losing listings, reviews, merging listings, duplicating them etc. Someone called their attempts "a train wreck at the junction." They were also notoriously bad with providing guidance that would help local businesses navigate the complexity of the environment Google created.
Google has also faced some branding challenges - confusing even the most seasoned local search professionals with their branding.
Having said that, things have been changing for the better. Google has introduced phone support which is, I must say, very useful. In addition, the changes they made in the way they deal with local data have made things more stable.
However, I'd still say that Google's local products are their biggest challenge.
Q: Yelp just had strong quarterly results and Yahoo! has recently added a knowledge-graph-like pane to their search results. How important is local search on platforms away from Google? How aligned are the various local platforms on ranking criteria?
A: Just like organic search is mostly about two functions - importance and relevance, local search is about location prominence, proximity and relevance (where location prominence is an equivalent to importance in general SEO).
All local search platforms have ranking factors that are based on these principles.
The only difference is what they consider ranking signals and the weight they place on each. For example, to rank high in Yahoo! Local, one needs to be very close to the centroid of the town, have something in the business title that matches the search query, and have a few reviews.
Google is more sophisticated, but the principles are the same.
The less sophisticated local search platforms use fewer signals in their algorithms, and usually lean more heavily on proximity as a ranking signal.
It's also important to note that local search functions as a very interconnected ecosystem, and that changes made in order to boost visibility in one platform, might hurt you in another.
Q: There was a Google patent where they mentioned using driving directions to help as a relevancy signal. And Bing recently invested in and licensed data from Foursquare. Are these the sorts of signals you see taking weight from things like proximity over time?
A: I see these signals increasing in importance over time, as they would be useful ranking signals. However, to Google, local search is also about location sensitivity, and these signals will probably not be used outside of that context.
If you read the patent named "Methods And Systems For Improving A Search Ranking Using Location Awareness" (Amit Singhal is one of the inventors), you will see that Google is, in fact, aware that people have different sensitivities for different types of services/queries. You don't necessarily care where your plumber comes from, but you do care where the pizza places are when you search for pizza in your location.
I don't see driving directions as a signal ever de-throning proximity, because proximity is closer to the nature of the offline/online interaction.
Q: There are many different local directories which are highly relevant to local, while there are also vertical specific directories which might be tied to travel reviews or listing doctors. Some of these services (say like OpenTable) also manage bookings and so on. How important is it that local businesses "spread around" their marketing efforts? When does it make sense to focus deeply on a specific platform or channel vs to promote on many of them?
A: This is a great question, Aaron! About 5 years ago, I believed that the only true game in town for any local business was Google. This was because, at that time, I wasn't invested in proper measurement of outcomes and metrics such as cost of customer acquisition, lead acquisition etc.
Local businesses, famous for their lack of budgets, should always give vertical platforms a try, even IYP-type sites. Here's why:
one needs to decrease dependence on Google because it's an increasingly fickle channel of traffic acquisition (Penguin and Panda didn't spare local websites),
sometimes, those vertical websites can produce great returns. I was positively surprised by the number of inquiries/leads one of our law firm clients got from a well known vertical platform.
using different marketing channels and measuring the right things can improve your marketing skills.
Keep in mind, basics need to be covered first: data aggregators, Google Places, creating a professional/usable/persuasive website, as well as developing a measurement model.
Q: What is the difference between incentivizing a reasonable number of reviews & being so aggressive that something is likely to be flagged as spam? How do you draw the line with trying to encourage customer reviews?
A: Reviews and review management have always been tricky, as well as important. We know two objective things about reviews:
consumers care about reviews when making a purchase and
reviews are important for your local search visibility.
Every local search/review platform worth its salt will have a policy in place discouraging incentivized or "bought" reviews. They will enforce this policy using algorithms or humans. We all know that.
Small and medium-sized businesses make the mistake of trying to get as many reviews as humanly possible, and directing them at one or two local search platforms. Here, they make two mistakes:
1. they're driven by a belief that one needs a huge number of reviews on Google and
2. one needs to direct all their review efforts at Google.
This behavior gets them flagged, algorithmically or manually. Neither Google nor Yelp wants you to solicit reviews.
However, if you change your approach from aggressively asking for reviews to a survey-based approach, you should be fine.
What do I mean by that?
A survey-based approach means you solicit your customers' opinions on different services/products to improve your operations - and then ask them to share their opinion on the web while giving them plenty of choices.
This approach will get you much further than mindlessly begging people for reviews and sending them to Google.
The problem with drawing a clear line between right and wrong ways of handling reviews, as far as Google goes, lies in their constantly changing review guidelines.
Things to remember are: try to get reviews on plenty of sites, while surveying your customers and never get too aggressive. Slow and steady wins the race.
Q: On many local searches people are now getting carouseled away from generic searches toward branded searches before clicking through, and then there is keyword(not provided) on top of that. What are some of the more cost efficient ways a small business can track & improve their ranking performance when so much of the performance data is hidden/disconnected?
A: Are you referring to ranking in Maps or in the organic part of the results? I'm asking because Google doesn't blend anymore.
Q: I meant organic search
A: OK. My advice has always been to not obsess over rankings, but over customer acquisition numbers, leads, lifetime customer value etc.
However, rankings are objectively a very important piece of the puzzle. Here are my suggestions when it comes to more cost efficient ways to track and improve ranking performance:
When it comes to tracking, I'd use Advanced Web Ranking (AWR) or Authority Labs, both of which are not very expensive.
Improving ranking performance is another story. Local websites should be optimized based on the same principles that would work for any site (copy should be written for conversion, pages should be focused on narrow topics, titles should be written for clickthrough rates etc).
On the link building side of things, I'd suggest taking care of data aggregators first, as a very impactful yet cost-effective strategy. Then I would go after vertical platforms that link directly to a website and have profiles chock-full of structured data. I would also make sure to join relevant industry and business associations, and generally go after links that only a real local business can get, or that come as a result of broader marketing initiatives. For example, one can organize events in the offline world that result in links and citations, effectively increasing search visibility without spending too much.
Q: If you are a local locksmith, how do you rise above the spam which people have publicly complained about for at least 5 years straight now?
A: If I were a local locksmith, I would seriously consider moving my operations close to the centroid of my town/city. I would also make sure my business data across the web is highly consistent.
In addition, I would make sure to facilitate getting reviews on many platforms. If that weren't enough (as it often isn't in many markets), I would be public about Google's inability to handle locksmith spam in my town, using their forums and any other medium.
Q: In many cities do you feel the potential ROI would be high enough to justify paying for downtown real estate then? Or would you suggest having a mailing related address or such?
A: The ROI of getting a legitimate downtown address would greatly depend on customer lifetime value. For example, if I were a personal injury attorney in a major city, I would definitely consider opening a small office near a center of my city/town.
Another thing to consider would be the search radius/location sensitivity. If the location sensitivity for a set of keywords is high, I would be more inclined to invest in a downtown office.
I wouldn't advocate PO boxes or virtual offices, since Google is getting more aggressive about weeding those out.
Q: Google recently started supporting microformats for things like hours of operation, phone numbers, and menus. How important is it for local businesses to use these sorts of features?
A: It is not a crucial ranking factor, and is unlikely to be any time in the near future. However, Google tends to reward businesses that embrace their new features - at least in local search. I would definitely recommend embracing microformats in local search.
Q: As a blogger I've noticed an increase in comment spam with NAP information in it. Do you see Google eventually penalizing people for that? Is this likely to turn into yet another commonplace form of negative SEO?
A: This is a difficult question. Knowing how Google operates, it's possible they start penalizing that practice. However, I don't see that type of spam being particularly effective.
Most blogs cannot do much to enhance location prominence. But if that turned into a negative SEO avenue, I'd say Google wouldn't handle it well (based on their track record).
Q: Last year you wrote a popular guide to local search. What major changes have happened to the ecosystem since then? Would you change any of the advice you gave back then? Or has local search started to become more stable recently?
A: There weren't huge changes in the local ecosystem. Google has made a lot of progress in transferring accounts to the new dashboard, improving the Bulk upload function. They also changed their UX slightly.
Moz entered the local search space with their Moz Local product.
Q: When doing a local SEO campaign, how much of the workload tends to be upfront stuff versus ongoing maintenance work? For many campaigns is a one-off effort enough to last for a significant period of time? How do you determine the best approach for a client in terms of figuring out the mix of upfront versus maintenance and how long it will take results to show and so on?
A: This largely depends on the objective of the campaign, the market and the budget. There are verticals where local Internet marketing is extremely competitive, and tends to be a constant battle.
Some markets, on the other hand, are easy and can largely be a one-off thing. For example, if you're a plumber or an electrician in a small town with a service area limited to that town, you really don't need much maintenance, if any.
However, if you are a roofing company that wants to be a market leader in greater Houston, TX, your approach has to be much different.
The upfront work tends to be more intense if the business has NAP inconsistencies, never did any Internet marketing and doesn't excel at offline marketing.
If you're a brand offline and know how to tie your offline and online marketing efforts together, you will have a much easier time getting the most out of the web.
In most smaller markets, the results can be seen in a span of just a few months. More competitive markets, in my experience, require more time and a larger investment.
Q: When does it make sense for a local business to DIY versus hiring help? What tools do you recommend they use if they do it themselves?
A: If a local business owner is in a position where doing local Internet marketing is their highest-value activity, it makes sense to do it themselves.
However, more often than not, this is not the case even for the smallest of businesses. Being successful in local Internet marketing in a small market is not that difficult. But it does come with a learning curve and a cost in time.
Having said that, if the market is not that competitive, taking care of data aggregators, a few major local search platforms and acquisition of a handful of industry links would do the trick.
For data aggregators, one might go directly to them or use a tool such as UBM or Moz Local.
To dig for citations, Whitespark's citation tool is pretty good and not that expensive.
Q: The WSJ recently published a fairly unflattering article about some of the larger local search firms, which primarily manage AdWords for tens of thousands of clients and rely on aggressive outbound marketing to offset high levels of churn. Should a small business consider paid search and local as separate from one another, or part of the same thing? If someone hires help on these fronts, where's the best place to find responsive help?
A: "Big box" local search companies were always better about client acquisition than performance. It always seemed as if performance wasn't an integral part of their business model.
However, small businesses cannot take that approach to performance. Generally speaking, the more the web is connected to the business, the better off a small business is. This means a local Internet marketing strategy should start with business objectives.
Everyone should ask themselves 2 questions:
1. What's my lifetime customer value?
2. How much can I afford to spend on acquiring a customer?
Every online marketing endeavor should be judged through this lens. This means greater integration.
Q: What are some of the best resources people can use to get the fundamentals of local search & to keep up with the changing search landscape?
Vedran Tomic is a member of SEOBook and founder of Local Ants LLC, a local internet marketing agency. Please feel free to use the comments below to ask any local search questions you have, as Vedran will be checking in periodically to answer them over the next couple days.
There's the safe way & the high risk approach. The shortcut takers & those who win through hard work & superior offering.
One is white hat and the other is black hat.
With the increasing search ecosystem instability over the past couple years, some see these labels constantly sliding, sometimes on an ex-post-facto basis, turning thousands of white hats into black hats arbitrarily overnight.
It's fantastic journalism & an important read for anyone who considers themselves an SEO.
Take the offline analog to Google's search "quality" guidelines & in spirit Google repeatedly violated every single one of them.
Creating links that weren’t editorially placed or vouched for by the site’s owner on a page (otherwise known as unnatural links) can be considered a violation of our guidelines: advertorials or native advertising where payment is received for articles that include links that pass PageRank.
Advertorials are spam, except when they are not: "the staff and professors at GMU’s law center were in regular contact with Google executives, who supplied them with the company’s arguments against antitrust action and helped them get favorable op-ed pieces published"
Don't deceive your users.
Ads should be clearly labeled, except when they are not: "GMU officials later told Dellarocas they were planning to have him participate from the audience," which is just like an infomercial that must be labeled as an advertisement!
Preventing Money from Manipulating Editorial
Make reasonable efforts to ensure that advertisements do not affect search engine rankings. For example, Google's AdSense ads and DoubleClick links are blocked from being crawled by a robots.txt file.
Money influencing outcomes is wrong, except when it's not: "Google’s lobbying corps — now numbering more than 100 — is split equally, like its campaign donations, among Democrats and Republicans. ... Google became the second-largest corporate spender on lobbying in the United States in 2012."
The best way to get other sites to create high-quality, relevant links to yours is to create unique, relevant content that can naturally gain popularity in the Internet community. Creating good content pays off: Links are usually editorial votes given by choice, and the more useful content you have, the greater the chances someone else will find that content valuable to their readers and link to it.
Payment should be disclosed, except when it shouldn't: "The school and Google staffers worked to organize a second academic conference focused on search. This time, however, Google’s involvement was not publicly disclosed."
Cloaking refers to the practice of presenting different content or URLs to human users and search engines. Cloaking is considered a violation of Google’s Webmaster Guidelines because it provides our users with different results than they expected.
Cloaking is evil, except when it's not: Even as Google executives peppered the GMU staff with suggestions of speakers and guests to invite to the event, the company asked the school not to broadcast its involvement. “We will certainly limit who we announce publicly from Google.”
...and on and on and on...
It's not safe to assume that just because a specific deceptive technique isn't included on this page, Google approves of it.
And while they may not approve of something, that doesn't mean they avoid the strategy when mapping out their own approach.
There's a lesson & it isn't a particularly subtle one.
There’s a case study on Moz on how to get your site back following a link penalty. An SEO working on a client’s site describes what happened when the client got hit with a link penalty. Even though the penalty didn’t appear to be their fault, it still took months to get their rankings back.
Some sites aren't that lucky. Some sites don’t get their rankings back at all.
The penalty was due to a false positive. A dubious site had linked out to a number of credible sites in order to disguise its true link target. The client’s site was one of those credible sites, mistaken by Google for a bad actor. It goes to show how easily credible sites can get hit by negative SEO, and variations thereof.
There’s a tactic in there, of course.
Take Out Your Competitors
Tired of trying to rank better? Need a quicker way? Have we got a deal for you!
Simply build a dubious link site, point some rogue links at sites positioned above yours and wait for Google’s algorithm to do the rest. If you want to get a bit tricky, link out to other legitimate sites, too. Like Wikipedia. Google, even. This will likely confuse the algorithm for a sufficient length of time, giving your tactic time to work.
Those competitors who get hit, and who are smart enough to work out what’s going on, may report your link site, but, hey, there are plenty more link sites where that came from. Roll another one out, and repeat. So long as your link site can’t be connected with you - different PC, different IP address, etc - then what have you got to lose? Nothing much. What have your competitors got to lose? Rank, a lot of time, effort, and the very real risk they won’t get back into Google’s good books. And that’s assuming they work out why they lost rankings.
I’m not advocating this tactic, of course. But we all know it’s out there. It is being used. And the real-world example above shows how easy it is to do. One day, it might be used against you, or your clients.
Grossly unfair, but what can you do about it?
Defensive Traffic Strategy
Pleading to Google is not much of a strategy. Apart from anything else, it’s an acknowledgement that the power is not in your hands, but in the hands of an unregulated arbiter who likely views you as a bit of an annoyance. It’s no wonder SEO has become so neurotic.
It used to be the case that competitors could not take you out by pointing unwanted links at you. No longer. So even more control has been taken away from the webmaster.
The way to manage this risk is the same way risk is managed in finance. Risk can be reduced using diversification. You could invest all your money in one company, or you could split it between multiple companies, banks, bonds and other investment classes. If you’re invested in one company, and they go belly up, you lose everything. If you invest in multiple companies and investment classes, then you’re not as affected if one company gets taken out. In other words, don’t put all your eggs in one basket.
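To make the diversification idea concrete, here's a small sketch that borrows a standard finance measure, the Herfindahl-Hirschman concentration index, and applies it to traffic sources. The traffic figures and source names below are made up for illustration; the point is simply that a site living almost entirely off organic search scores near 1.0 (all eggs in one basket), while balanced streams score much lower.

```python
# Illustrative sketch: measure how concentrated your traffic is using a
# Herfindahl-Hirschman-style index (sum of squared shares).
# All numbers below are hypothetical example data.

def concentration_index(visits_by_source):
    """Return the sum of squared traffic shares.

    1.0 means every visit comes from a single source; values near
    1/n mean traffic is spread evenly across n sources.
    """
    total = sum(visits_by_source.values())
    return sum((v / total) ** 2 for v in visits_by_source.values())

# A site relying almost entirely on organic search...
risky = {"organic": 9000, "ppc": 500, "email": 300, "social": 200}

# ...versus one with balanced traffic streams.
diversified = {"organic": 3000, "ppc": 2500, "email": 2200, "social": 2300}

print(round(concentration_index(risky), 2))        # close to 1.0: fragile
print(round(concentration_index(diversified), 2))  # much lower: resilient
```

If a penalty wiped out the organic stream, the first site loses 90% of its visits overnight; the second loses 30% and keeps operating while it recovers.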
It’s the same with web traffic.
1. Multiple Traffic Streams
If you only run one site, try to ensure your traffic is balanced. Some traffic from organic search, some from PPC, some from other sites, some from advertisements, some from offline advertising, some from email lists, some from social media, and so on. If you get taken out in organic search, it won’t kill you. Alternative traffic streams buy you time to get your rankings back.
2. Multiple Pages And Sites
A “web site” is a construct. Is it a construct applicable to a web that mostly orients around individual pages? If you think in terms of pages, as opposed to a site, then it opens up more opportunities for diversification.
Pages can, of course, be located anywhere, not just on your site. These may take the form of well-written, evergreen articles published on other popular sites. Take a look at the top sites in closely related niches and see if there are any opportunities to publish your content on them. Not only does this make your link graph look good, so long as it’s not overt, you’ll also achieve more diversity.
Will Scott creatively defines the concept of barnacle SEO as follows:
“Attaching oneself to a large fixed object and waiting for the customers to float by in the current. Directly applied to local search, this means optimizing your profiles or business pages on a well-trusted, high-ranking directory and working to promote those profiles instead of — or in tandem with — your own website.”
You could also build multiple sites. Why have just one site when you can have five? Sure, there’s more overhead, and it won’t be appropriate in all cases, but again, the multiple site strategy is making a comeback due to Google escalating the risk of having only one site. This strategy also helps get your eggs into multiple baskets.
3. Prepare For the Worst
If you've got most of your traffic coming from organic search, then you’re taking a high risk approach. You should manage that risk down with diversification strategies first. Part of the strategy for dealing with negative SEO is not to make yourself so vulnerable to it in the first place.
If you do get hit, have a plan ready to go to limit the time you’re out of the game. The cynical might suggest building a name big enough that Google looks bad if it doesn’t show your site.
Lyrics site Rap Genius says that it is no longer penalized within Google after taking action to correct “unnatural links” that it helped create. The site was hit with a penalty for 10 days, which meant people seeking it by name couldn’t find it.
For everyone else, here’s a pretty thorough guide about how to get back in.
Have your “plead with Google” gambit ready to go at a moment's notice. The lead time to get back into Google can be long, so the sooner you get onto it, the better. Of course, this is really the last course of action. It’s preferable not to make yourself that vulnerable in the first place.