One approach to search marketing is to treat the search traffic as a side-effect of a digital marketing strategy. I’m sure Google would love SEOs to think this way, although possibly not when it comes to PPC! Even if you’re taking a more direct, rankings-driven approach, the engagement and relevancy scores that come from delivering what the customer values should serve you well, too.
In this article, we’ll look at a content strategy based on value-based marketing. Many of these concepts may be familiar, but bundled together they provide an alternative model for providing search services, rather than one based on technical quick fixes and rank. If you want to broaden the value of your SEO offering beyond that first click, and pick up a few ideas on talking about value, then this post is for you.
In any case, the days of being able to rank well without providing value beyond the click are numbered. Search is becoming more about providing meaning to visitors and less about providing keyword relevance to search engines.
Discover and quantify your customers' wants and needs
Commit to the most important things that will impact your customers
Create customer value that is meaningful and understandable
Assess how you did at creating true customer value
Improve your value package to keep your customers coming back
Customers compare your offer against those of competitors, and divide the benefits by the cost to arrive at value. Marketing determines and communicates that value.
This is the step beyond keyword matching. When we use keyword matching, we’re trying to determine intent. We’re doing a little demographic breakdown. This next step is to find out what the customer values. If we give the customer what they value, they’re more likely to engage and less likely to click back.
What Does The Customer Value?
A key question of marketing is “which customers does this business serve?” It seems like an obvious question, but it can be difficult to answer. Does a gym serve people who want to get fit? Yes, but all gyms do that, so how would they be differentiated?
Obviously, a gym serves people who live in a certain area. So, if our gym is in Manhattan, our customer becomes “someone who wants to get fit in Manhattan”. Perhaps our gym is upmarket and expensive. So, our customer becomes “people who want to get fit in Manhattan and be pampered and are prepared to pay more for it”. And so on, and so on. They’re really questions and statements about the value proposition as perceived by the customer, and then delivered by the business.
So, value-based marketing is about delivering value to a customer. This syncs with Google’s proclaimed goal in search, which is to put users first by delivering results they deem to have value, and not just pages that match a keyword term. Keywords need to be seen in a wider context, and that context is pretty difficult to establish if you’re standing outside the search engine looking in, so thinking in terms of concepts related to the value proposition might be a good way to go.
Value Based SEO Strategy
The common SEO approach, for many years, has started with keywords. It should start with customers and the business.
The first question is “who is the target market?” The next is “what do they value?”
Relate what they value to the business. What is the value proposition of the business? Is it aligned? What would make a customer value this business offering over those of competitors? It might be price. It might be convenience. It’s probably a mix of various things, but be sure to nail down the specific value propositions.
Then think of some customer questions around these value propositions. What would be the likely customer objections to buying this product? What would be points that need clarifying? How does this offer differ from other similar offers? What is better about this product or service? What are the perceived problems in this industry? What are the perceived problems with this product or service? What is difficult or confusing about it? What could go wrong with it? What risks are involved? What aspects have turned off previous customers? What complaints did they make?
Make a list of such questions. These are your article topics.
You can glean this information by interviewing either customers or the business owner. Each question, and its accompanying answer, becomes an article topic on your site, although not necessarily in Q&A format. The idea is to create a list of topics as a basis for articles that address specific points, and objections, relating to the value proposition.
For example, buying SEO services is a risk. Customers want to know if the money they spend is going to give them a return. So, a valuable article might be a case study on how the company provided return on spend in the past, and the process by which it will achieve similar results in future. Another example might be a buyer concerned about the reliability of a make of car. A page dedicated to reliability comparisons, and another page outlining the customer care after-sale plan would provide value. Note how these articles aren’t keyword driven, but value driven.
Ever come across a FAQ that isn’t really a FAQ? Dreamed-up questions? They’re frustrating, and of little value if the information doesn’t directly relate to the value we seek. Information should be relevant and specific so when people land on the site, there’s more chance they will perceive value, at least in terms of addressing the questions already on their mind.
Compare this approach with generic copy around a keyword term. A page talking about “SEO” in response to the keyword term “SEO” might closely match that term, so it’s a relevance match, but unless it’s tied into providing the customer the value they seek, it’s probably not of much use. Finding relevance matches is no longer a problem for users. Finding value matches often is. Even if you’re keyword focused, adding these articles provides semantic variation that may capture keyword searches that don’t appear in keyword tools.
Keyword relevance was a strategy devised at a time when information was less readily available and search engines weren't as powerful. Finding something relevant was more hit-and-miss than it is today. These days, there are likely thousands, if not millions, of pages that will meet relevance criteria in terms of keyword matching, so the next step is to meet value criteria. Providing value is less likely to earn a click-back, and more likely to create engagement, than mere on-topic matching.
The Value Chain
Deliver value. Once people perceive value, then we have to deliver it. Marketing, and SEO in particular, used to be about getting people over the threshold. Today, businesses have to work harder to differentiate themselves and a sound way of doing this is to deliver on promises made.
So the value is in the experience. Why do we return to Amazon? It’s likely due to the end-to-end experience of delivering value. Any online e-commerce store can deliver relevance; where competition is that fierce, Google can afford to be selective.
In the long term, delivering value should drive down the cost of marketing as the site is more likely to enjoy repeat custom. As Google pushes more and more results beneath the fold, the cost of acquisition is increasing, so we need to treat each click like gold.
Monitor value. Does the firm keep delivering value? To the same level? Because people talk. They talk on Twitter, Facebook, and the rest. We want them talking in a good way, but even if they talk in a negative way, it can still be useful. Their complaints can be used as topics for articles. They can be used to monitor value, refine the offer, and correct problems as they arise. Those social signals, whilst not a guaranteed ranking boost, are still signals. We need to adopt strategies whereby we listen to all the signals, so as to better understand our customers and provide more value, and hopefully enjoy a search traffic boost as a welcome side-effect, so long as Google is also trying to determine what users value.
Not sounding like SEO? Well, it’s not optimizing for search engines, but for people. If Google is to provide value, then it needs to ensure results are not just relevant, but offer genuine value to end users. Does Google do this? In many cases, not yet, but all their rhetoric and technical changes suggest that providing value is at the ideological heart of what they do. So the search results will most likely, in time, reflect the value people seek, and not just relevance.
Today, signals such as keyword co-occurrence, user behavior, and previous searches do in fact inform context around search queries, which impact the SERP landscape. Note I didn’t say the signals “impact rankings,” even though rank changes can, in some cases, be involved. That’s because there’s a difference. Google can make a change to the SERP landscape to impact 90 percent of queries and not actually cause any noticeable impact on rankings.
The way to get the context right, and get positive user behaviour signals, and align with their previous searches, is to first understand what people value.
A product, in and of itself, is really only half of what you are selling to your clients. The other half of the equation is the "experience".
It sounds a bit "fluffy", but in my career as a service provider, and in my purchasing history as a consumer, the experience matters. I would even go so far as to say that in some very noticeable cases the experience can outweigh the product itself (to some extent, anyway).
These halves, the product and the experience, can cut both ways.
Sometimes a product is so good that the experience can be average or even below average and the provider will still make out and sometimes the experience is so fantastic that an otherwise average or above average product is elevated to what can be priced as a premium product or service.
Let's get a few obvious variables out of the way first. It is understood that:
Experience matters more to some people than others
Experience matters more in certain industries than others
The actual product matters more to some
The actual product matters more in some industries
If we stipulate that the four scenarios mentioned above are true, which they are, it still doesn't change the basic premise: you are probably leaving revenue and growth on the table if you settle on one side or the other.
While it's true that you can be successful even if your product-to-experience ratio is like a seesaw heavily weighted in one direction, it is also true that you would probably be more successful if you made both the best each could be.
Defining Where Product Meets Experience
I'll lay out a couple of examples here to help illustrate the point:
The "Big Four" in the link research tools space: Ahrefs, Link Research Tools, Majestic, and Open Site Explorer
The two more well-known "tool/reporting suites", Raven and Moz, outside of much more expensive enterprise toolkits
In my experience Ahrefs has been the best combination of product and experience, especially lately. Their dataset continues to grow and recent UI changes have made it even easier to use. Exports are super fast and I’ve had quick and useful interactions with their support staff. Perhaps it isn’t a coincidence that, from groups of folks I interact with and follow online, Ahrefs continues to pop up more often in conversation than not.
To me, Majestic and Link Research Tools are examples of where the product is really, really strong (copious amounts of data across many segments) but the UI/UX is not quite as good as the others. I realize some of this is subjective but in other comparisons online this seems to be a prevailing theme.
Open Site Explorer has a fantastic UI/UX, but the data can be a bit behind the others, and getting data out (exporting) is a bit more of a chore than point, click, download. It seems like, over a period of time, OSE has had a rougher road to data and update growth than the other tools I mentioned.
In the case of the two more popular reporting and research suites, Moz and Raven, Raven has really caught up with (if not surpassed) Moz in terms of UI/UX. Raven pulls in data from multiple sources, including Moz, and has quite a few more features (which are easier to get to and cross-reference) than Moz.
Moz may not be interested in getting into some of the other pieces of the online marketing puzzle that Raven is into but I think it’s still a valid comparison based on the very similar, basic purpose of each tool suite.
Assessing Your Current Position
When assessing or reassessing your products and offerings, a lot of it goes back to targeting the right market.
Is the market big enough to warrant investment into a product?
How many different segments of a given market do you need to appeal to?
Where’s the balance between feature bloat (think Zoho CRM) versus “good enough” functionality with an eye towards an incredible UX (think Highrise CRM)?
If the market isn’t big enough and you have to go outside your initial target, how will that affect the balance between the functionality of your product and the experience for your users, customers, or clients?
If you are providing SEO services your "functionality" might be how easy it is to determine the reports you provide and their relationship(s) to a client's profitability or goals (or both). Your "experience" is likely a combination of things:
The graphical presentation of your documents
The language used in your reports and other interactions with the client
The consistency of your "brand" across the web
The consistency of your brand presentation (website, invoices, reports, etc)
Client ability to access reports and information quickly without having to ask you for it
Consistency of your information delivery (are you always on-time, late, or erratic with due dates, meetings, etc)
When you break down what you think is your "product" and "experience", you'll likely find that it is pretty simple to develop a plan to improve both, rather than trotting out the vague "let's do great things" company line that no one really understands but just nods at.
Example of Experience in Action
In just about every Consumer Reports survey, Apple comes out on top for customer satisfaction. Apple, whether you like their products/"culture" or not, creates a fairly reliable, if expensive, end-to-end experience. This is doubly true if you live near an Apple store.
If you look at laptop failure rates, Apple is generally in the middle of the pack. There are other things that go into the Apple experience (using the OS and such), but part of the reason people are willing to pay that premium is due to their support options and ability to fix bugs fairly quickly.
To tie this into our industry, I think Moz is a good parallel example here. Their design is generally heralded as being quite pleasant and it's pretty easy to use their tools; there isn't a steep learning curve to using most of their products.
I think their product presentation is top notch, even though I generally prefer some of their competitors' products. They are pretty active on social media and their support is generally very good.
So, in the case of Moz, it's pretty clear that people are willing to pay for less robust data, or at least fewer features and options, partly (or wholly) due to their product experience and product presentation.
Redesigning Your Experience
You might already have some of these but it's worthwhile to revisit a very basic style guide (excluding audience development):
Consistent logo and colors
Vocabulary and Language Style (the tone of your brand, is it My Brand or MyBrand or myBrand, etc)
The marketing strategy based on high rankings against keyword terms is about gaining a steady flow of new visitors. If a site ranks better than competing sites, this steady stream of new visitors advantages the top sites at the expense of those beneath them.
The selling point of SEO is a strong one. The client gets a constant flow of new visitors and enjoys competitive advantage, just so long as they maintain rank.
A close partner of SEO is PPC. Like SEO, PPC delivers a stream of new visitors, and if you bid well, and have relevant advertisements, then you enjoy a competitive advantage. Unlike PPC, SEO does not cost per click, or, to be more accurate, it should cost a lot less per click once the SEO's fees are taken into account, so SEO has enjoyed a stronger selling point. Also, the organic search results typically enjoy a higher level of trust from search engine users.
"91% prefer using natural search results when looking to buy a product or service online." [Source: Tamar Search Attitudes Report, Tamar, July 2010]
Rain On The Parade
Either by coincidence or design, Google’s algorithm shifts have made SEO less of a sure proposition.
If you rank well, the upside is still there, but because the result is less certain than it used to be, and the work more involved than ever, the risk, and costs in general, have increased. The more risky SEO becomes in terms of getting results, the more Adwords looks attractive, as at least results are assured, so long as spend is sufficient.
Adwords is a brilliant system. For Google. It’s also a brilliant system for those advertisers who can find a niche that doesn’t suffer high levels of competition. The trouble is competition levels are typically high.
Because competition is high, and Adwords is an auction model, bid prices must rise. As bid prices rise, only those companies that can achieve ROI at high costs per click will be left bidding. The higher their ROI, the higher the bid prices can conceivably go. Their competitors, if they are to keep up, will do likewise.
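The bid-escalation logic above comes down to simple break-even arithmetic: the most an advertiser can rationally pay per click is conversion rate multiplied by profit per conversion. A minimal sketch (the function name and all figures are hypothetical, purely to illustrate the logic):

```python
def max_profitable_cpc(conversion_rate, profit_per_conversion):
    """Break-even cost per click: bid above this and each click loses money."""
    return conversion_rate * profit_per_conversion

# Hypothetical numbers: 2% of clicks convert, each conversion nets $150 profit.
print(max_profitable_cpc(0.02, 150))  # 3.0 -> bids above $3.00/click are unprofitable
```

As competitors improve their own ROI (higher conversion rates, fatter margins), their break-even ceiling rises, and the auction drags everyone's bids up toward it.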
So, the PPC advertiser focused on customer acquisition as a means of growing the company will be passing more and more of their profits to Google in the form of higher and higher click prices. If a company wants to grow by customer acquisition, via the search channel, then they’ll face higher and higher costs. It can be difficult to maintain ROI via PPC over time, which is why SEO is appealing. It’s little wonder Google has their guns pointed at SEO.
A fundamental problem with Adwords, and SEO in general, is that basing marketing success around customer acquisition alone is a poor long term strategy.
More on that point soon….
White-Hat SEO Is Dead
It’s surprising a term such as “white hat SEO” was ever taken seriously.
Any attempt to game a search engine’s algorithm, as far as the search engine is concerned, is going to be frowned upon. What is gaming, if not reverse engineering the search engine's ranking criteria and looking to gain a higher rank than a site would otherwise merit? Acquiring links, and writing keyword-focused articles, for the purpose of gaining a higher rank in a search engine is an attempt at rank manipulation. The only thing that varies is the degree.
Not that there’s anything wrong with that, as far as marketers are concerned.
The search marketing industry line has been that so long as you avoided “bad behaviour”, your site stood a high chance of ranking well. Ask people for links. Find keywords with traffic. Publish pages focused on those topics. There used to be more certainty of outcome.
If the outcome is not assured, then so long as a site is crawlable, why would you need an SEO? You just need to publish and see where Google ranks you. Unless the SEO is manipulating rank, then where is the value proposition over and above simply publishing crawlable content? Really, SEO is a polite way of saying “gaming the system”.
Those who let themselves be defined by Google can now be seen scrambling to redefine themselves. “Inbound marketers” is one term being used a lot. There’s nothing wrong with this, of course, although you’d be hard pressed to call it Search Engine Optimization. It’s PR. It’s marketing. It’s content production. The side effect of such activity might be a high ranking in the search engines (wink, wink). It’s like Fight Club. The first rule of Fight Club is…
A few years back, we predicted that the last SEOs standing would be blackhat, and that’s turned out to be true. The term SEO has been successfully co-opted and marginalized. You can still successfully game the system with disposable domains, by aggressively targeting keywords, and buying lots of links and/or building link networks, but there’s no way that’s compliant with Google’s definitions of acceptable use. It would be very difficult to sell that to a client without full disclosure. Even with full disclosure, I’m sure it’s a hard sell.
But I digress….
Optimization In The New Environment
The blackhats will continue on as usual. They never took direction from search engines, anyway.
Many SEOs are looking to blend a number of initiatives together to take the emphasis off search. Some call it inbound. In practice, it blends marketing, content production and PR. It's a lot less about algo hacking.
For it to work well, and to get great results in search, the SEO model needs to be turned on its head. It’s still about getting people to a site, but because the cost of getting people to a site has increased, every visitor must count. For this channel to maintain value, more focus must go on what happens after the click.
If the offer is not right, and the path to that offer isn’t right, then it’s like having people turn up for a concert when the band hasn’t rehearsed. At the point the audience turns up, the band must deliver what the audience wants, or the audience isn’t coming back. The band's popularity will quickly fade.
This didn’t really matter too much in the past when it was relatively cheap to position in the SERPs. If you received a lot of slightly off-topic traffic, big deal, it’s not like it cost anything. Or much. These days, because it’s growing ever more costly to position, we’re increasingly challenged by the “growth by acquisition” problem.
Consider optimizing in two areas, if you haven’t already.
1. Offer Optimization
We know that if searchers don’t find what they want, they click back. The click-back presents two problems. First, you just wasted time and money getting that visitor to your site. Second, it’s likely that Google is measuring click-backs in order to help determine relevancy.
How do you know if your offer is relevant to users?
The time-tested way is to examine the 4 Ps: product, price, promotion, and place. Place doesn’t matter so much, as we’re talking about the internet, although if you’ve got some local-centric product or service, then it’s a good idea to focus on it. Promotion is what SEOs do. They get people over the threshold.
However, two areas worth paying attention to are product and price. In order to optimize product, we need to ask some fundamental questions:
Does the customer want this product or service?
What needs does it satisfy? Is this obvious within a few seconds of viewing the page?
What features does it have to meet these needs? Are these explained?
Are there any features you've missed out? Have you explained all the features that meet the need?
Are you including costly features that the customer won't actually use?
How and where will the customer use it?
What does it look like? How will customers experience it?
What size(s), color(s) should it be?
What is it to be called?
How is it branded?
How is it differentiated versus your competitors?
What is the most it can cost to provide, and still be sold sufficiently profitably?
SEOs are only going to have so much control over these aspects, especially if they’re working for a client. However, it still pays to ask these questions, regardless. If the client can’t answer them, then you may be dealing with a client who has no strategic advantage over competitors. They are likely running a me-too site. Such sites are difficult to position from scratch.
Unless you're pretty aggressive, taking on me-too sites will make your life difficult in terms of SEO, so thinking about strategic advantage can be a good way to screen clients. If they have no underlying business advantage, ask yourself whether you really want to be doing SEO for these people.
In terms of price:
What is the value of the product or service to the buyer?
Are there established price points for products or services in this area?
Is the customer price sensitive? Will a small decrease in price gain you extra market share? Or will a small increase be indiscernible, and so gain you extra profit margin?
What discounts should be offered to trade customers, or to other specific segments of your market?
How will your price compare with your competitors?
Again, even if you have little or no control over these aspects, then it still pays to ask the questions. You're looking for underlying business advantage that you can leverage.
Once we’ve optimized the offer, we then look at conversion.
2. Conversion Optimization
There’s the obvious conversion most search marketers know about. People arrive at a landing page. Some people buy what’s on offer, and some leave. So the conversion rate equals (total conversions / number of views) x 100.
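That formula is trivial to compute per page. A minimal sketch (the function name is illustrative, and the zero-visit guard is a practical assumption):

```python
def conversion_rate(conversions, visits):
    """Conversion rate as a percentage: (conversions / visits) * 100."""
    if visits == 0:
        return 0.0  # avoid dividing by zero for pages with no traffic yet
    return conversions / visits * 100

# Hypothetical landing page: 1,200 views, 30 purchases.
print(conversion_rate(30, 1200))  # 2.5 (percent)
```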
However, when it comes to SEO, it’s not just about the conversion rate of a landing page. Unlike PPC, you don’t have precise control over the entry page. So, optimizing for conversion is about looking at every single page on which people enter your site, and optimizing each page as if it were an entry point.
What do you want people to do when they land on your page?
Have a desired action in mind for every page. It might be a sign-up. It might be to encourage a bookmark. It might be to buy something. It might be to tweet. Whatever it is, we need to make the terms of engagement clear for the visitor on each page - with a big, yellow highlight on the term “engagement”! Remember, Google is likely looking at bounce-back rates. So there is a conversion rate for every single page on your site, and they’re likely all different.
Think about the shopping cart process. Is a buyer, particularly a mobile buyer, going to wade through multiple forms? Or could the sale be made in as few clicks as possible? Would integrating Paypal or Amazon payments lift your conversion rates? What’s your site speed like? The faster, the better, obviously. A lot of conversion is about streamlining things - from processes, to navigation to site speed.
At this point, a lot of people will be wondering how to measure and quantify all this. How to track conversion funnels across a big site. It’s true, it’s difficult. In many cases, it’s pretty much impossible to get adequate sample sizes.
However, that’s not a good reason to avoid conversion optimization. You can measure it in broad terms, and get more incremental as time goes on. Changes across pages and paths can be small, and difficult to spot individually, but there is sufficient evidence that companies who employ conversion optimization can enjoy significant gains, especially if they haven't focused on these areas in the past.
While you could quantify every step of the way, and some companies certainly do, there are probably a lot of easy wins that can be gained merely by following these two general concepts - optimizing the offer, and then optimizing (streamlining) the pages and paths that lead to that offer. If something is obscure, make it obvious. If you want the visitor to do something, make sure the desired action is writ large. If something is slow, make it faster.
Do it across every offer, page and path in your site and watch the results.
"Content is king" is one of those “truthy” things some marketers preach. However, in most businesses the bottom line is king, attention is queen, and content can be used as a means to get both, but it depends.
The problem is that content is easy to produce. Machines can produce content. They can tirelessly churn out screeds of content every second. Even if they didn’t, billions of people on the internet are perfectly capable of adding to the monolithic content pile at similar rates.
Low barriers to content production and distribution mean the internet has turned a lot of content into near worthless commodity. Getting and maintaining attention is the tricky part, and once a business has that, then the benefits can flow through to the bottom line.
Some content is valuable, of course. Producing valuable content can earn attention. The content that gets the most attention is typically something for which an audience has a strong need, yet can’t easily get elsewhere, and is published in a place they're likely to see. Or someone they know is likely to see. An article on title tags will likely get buried. An article on the secret code to cracking Google's Hummingbird algorithms will likely crash your server.
Up until the point everyone else has worked out how to crack them, too, of course.
What Content Does The User Want?
Content can become king if the audience bestows favor upon it. Content producers need to figure out what content the audience wants. Perversely, Google has chosen to make this task even more difficult than before by withholding keyword data. Between Google’s supposed “privacy” drive, Hummingbird supposedly using semantic analysis, and Penguin/Panda supposedly using engagement metrics, page-level and path-level optimization are worth focusing upon going forward.
If you haven’t done one for a while, now is probably a good time to take stock and undertake a content audit.
You Have Valuable Historical Information
If you’ve got historical keyword data, archive it now. It will give you an advantage over those who follow you from this point on. Going forward, it will be much more expensive to acquire this data.
Run an audit on your existing content. What content works best? What type of content is it? Video? Text? What’s the content about? What keywords did people use to find it previously? Match content against your historical keyword data.
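One way to run that matching step is to group your archived keyword rows under the pages they landed on, so the audit shows which content earned search traffic and for what. A minimal sketch, using entirely hypothetical export rows and column meanings (your analytics export will differ):

```python
from collections import defaultdict

# Hypothetical rows from an archived analytics export:
# (landing page, keyword that drove the visit, visits)
rows = [
    ("/pricing", "seo services cost", 120),
    ("/pricing", "how much does seo cost", 80),
    ("/blog/case-study", "seo roi case study", 60),
]

# Aggregate visits and keywords per landing page.
by_page = defaultdict(lambda: {"visits": 0, "keywords": []})
for page, keyword, visits in rows:
    by_page[page]["visits"] += visits
    by_page[page]["keywords"].append(keyword)

# Rank pages by historical search traffic, most visited first.
for page, data in sorted(by_page.items(), key=lambda kv: -kv[1]["visits"]):
    print(page, data["visits"], data["keywords"])
```

Even this crude grouping tells you which topics proved their demand before the keyword data dried up.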
If keywords can no longer suggest content demand, then how do we know what the visitor wants in terms of content? We must seek to understand the audience at a deeper level. Take a more fuzzy approach.
Watch Activity Signals
Analytics can get pretty addictive and many tools let you watch what visitors do in real time. Monitor engagement levels on your pages. What is a user doing on that page? Are they reading? Contributing? Clicking back and forward looking for something else?
Ensure pages with high engagement are featured prominently in your information architecture. Relegate or fix low-engagement pages. Segment out your content so you know which is the most popular, in terms of landings, and link that information back to ranking reports. This way, you can approximate keywords and stay focused on the content users find most relevant and engaging. Segment out your audience, too. Different visitors respond to different things. Do you know which group favours what? What do older people go for? What do younger people go for? Here are a few ideas on how to segment users.
User behavior is getting increasingly complex. It takes multiple visits to purchase, from multiple channels/influences. Hence the addition of user segmentation allows us to focus on people. (For these exact reasons multi-channel funnels analysis and attribution modeling are so important!)
At the moment, in web analytics solutions, people are defined by the first-party cookie stored on their browser. Less than ideal, but 100x better than what we had previously. Over time, as we all expand to Universal Analytics, perhaps we will have more options to track the same person, after explicitly asking for permission, across browsers, channels and devices.
If Google won’t give you keywords, build your own keyword database. Think about ways you can encourage people to use your in-site search. Watch the content they search for and consume the most. Another way of looking at site search is to provide navigation links that emphasize different keyword terms. For example, you could place these high up on your page, with each offering a different option relating to related keyword terms. Take note of which keyword terms visitors favour over others.
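At its simplest, such a homegrown keyword database is just a frequency count over your own site-search logs, a data source Google can't withhold. A minimal sketch with hypothetical queries:

```python
from collections import Counter

# Hypothetical in-site search queries pulled from your own logs.
queries = [
    "reliability comparison",
    "after-sale plan",
    "reliability comparison",
    "pricing",
    "reliability comparison",
]

# Normalize and count: the most-searched terms are the content
# topics visitors are telling you they value.
keyword_db = Counter(q.strip().lower() for q in queries)
print(keyword_db.most_common(3))
```

The top entries become candidate article topics, in the visitors' own vocabulary rather than a keyword tool's.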
In the good old days, people dutifully used site navigation at the left, right, or top of a website. But two websites have fundamentally altered how we navigate the web: Amazon, because the site is so big, sells so many things, and is so complicated that many of us go directly to the site search box on arrival; and Google, which has trained us to show up, type what we want, and hit the search button. Now when people show up at a website, many of them ignore our lovingly crafted navigational elements and jump straight to the site search box. The increased use of site search as a core navigation method makes it very important to understand the data that site search generates.
Where does attention flow from? Social media? A mention is great, but if no attention flows over that link to your content, then it might be a misleading metric. Are people sharing your content? What topics and content get shared the most?
Again, this comes back to understanding the audience: both what they’re talking about and what actions they take as a result. In “Digital Marketing Analytics: Making Sense Of Consumer Data”, the authors recommend creating a “learning agenda”. Rather than just looking for mentions and volume of mentions, focus on specific brand or service attributes. Think about the specific questions you want answered by visitors, as if those visitors were sitting in front of you.
For example, how are consumers reacting to prices in your niche? What are their complaints? What do they wish would happen? Are people talking negatively about something? Are they talking positively about something? Who are the new competitors in this space?
Those are pretty rich signals. We can then link this back to content by addressing those issues within our content.
The independent webmaster has taken a beating over the last couple of years. Risk has become harder to spread, labor costs have gone up, outreach has become more difficult and more expensive as Google's webspam team and the growing ranks of the Search Police spread the FUD far and wide.
The web is still a great place to be and still offers incredible opportunity that is largely unavailable, without much more capital intensive risk, in the offline world.
There's still plenty of success to be had in the web-based business model but like any strategy we have to refine it from time to time. I thought I'd share the core processes I go through when starting a new site.
Look for Signal, Look Past the Noise
Online marketers, celebrities, and brands pretty much power the Twittersphere and the 140 character limit invariably leads to statements full of bluster (and shallowness) like:
Links are dead
Forget links get social likes, +1's, RT's, and so on
Guest posting is dead
Infographics are dead
SEO is dead
None of that is true but when folks try to become prognosticators they will just keep saying the same thing over and over, with some slight re-framing, until they finally get it right.
All you have to do is look at the really ridiculous statements over the years about how ranking "doesn't matter". Those statements go back to at least 2006 or so, which is craziness.
Or look at the past couple years where we get "social shares are the new link" shoved down our throats despite the data that flies in the face of that statement, at least as it pertains to organic search growth.
Yet, years later both of these "industry trends" would have cost you significant amounts of revenue and search share. We don't have to debate the spam links vs non-spam links here either. No one here is advocating for you to build crappy links and you don't need to.
Establishing Your Portfolio
It's quite likely, as an independent webmaster, that you will have sites that serve different purposes. I have sites that:
are actively being built into online brands (or trying at least :D )
exist as pure, longer-standing SEO plays that are cash cows used to fund more sustainable long-term projects
are built to initially live off of paid traffic, direct outreach, and/or social campaigns with organic search as a tertiary method of traffic acquisition
exist solely to test new ideas or new products before building an actual site/brand
I also work with a select type of client. One thing I found helpful was to set up a spreadsheet with some very basic information to help me keep track of things at a 10,000-foot view.
So I have a column for:
Purpose Tag (one of the areas I described above)
Net Monthly Revenue (multiple columns)
Rolling 12 month Net Revenue
Same monthly/rolling numbers for costs
From there, I do a quick chart to show what areas most of the revenue is coming from and where the investment is going. Over time, I try to make sure the online brand area (where we are getting traffic and revenue from a healthy mix of multiple sources) is outpacing the pure SEO plays in both areas, and we try to shy away from making too many expensive pure SEO plays where no mid-to-long-term "brandability" exists.
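The spreadsheet columns above translate directly into a small script. This is a sketch only: the purpose tags, months, and figures are invented, and the layout is just one way to get the same 10,000-foot view in code rather than a spreadsheet.

```python
# Sketch of the portfolio tracker: monthly revenue and costs per site
# bucket (purpose tag), summed to a net figure per bucket.
import pandas as pd

rows = [
    # (month, purpose_tag, revenue, costs) -- all values hypothetical
    ("2013-01", "brand",    4000, 1500),
    ("2013-01", "seo_play", 6000,  500),
    ("2013-02", "brand",    4500, 1500),
    ("2013-02", "seo_play", 5800,  500),
]
df = pd.DataFrame(rows, columns=["month", "purpose", "revenue", "costs"])
df["net"] = df["revenue"] - df["costs"]

# 10,000-foot view: net revenue by purpose tag, to check whether the
# brand bucket is outpacing the pure SEO plays over time.
summary = df.groupby("purpose")["net"].sum()
print(summary)
```

With twelve or more months of rows, the rolling 12-month net column from the spreadsheet is a one-liner per bucket: `df.groupby("purpose")["net"].transform(lambda s: s.rolling(12, min_periods=1).sum())`.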
We also like to see growth in client areas as well, but only for the right kind of client. The wrong kind of client can have a really destructive effect on a small team.
Staying small, lean, and profitable are also big keys to this strategy. If you are up against it on debt and overhead you will probably be less likely to make the proper decisions for your long-term viability on the 'net.
Considerations When Starting a New Site
I think most small teams or individual publishers can probably handle 2-3 branded sites at a time (stipulating that a branded site is one where there are just about all elements of online marketing involved). The first step I take is to determine what bucket the site will go in.
A testing site is easy enough to decide on. I might have an idea for a new product, so I'll just throw up a small WordPress site, a landing page, and test it out via PPC. Part of the initial research here is to determine whether there is any existing "search" demand or whether you'll be tasked with creating demand on your own.
You can certainly build an online product that will be driven, initially, mostly by offline demand if you have the right networking in place. For the most part we try to stick to stuff where there is some initial demand online as the offline networking component tends to involve, in my experience, a lot more initial work, more stakeholders, etc.
When we look at a "product" we consider the following as "stuff" we could sell:
Certainly a site can have any combination of those elements but generally those are the three basic types of things we'd consider selling. From there we would want to figure out:
brand name and domain (I prefer one or two word domains here, keyword not required)
search volume estimates and the length of the tail for each core keyword
if conversations are taking place across the web for the broader topic or lateral topics where we can insert ourselves/product
if our product can be a niche of an already successful, broader product offering
if the product has a reasonable chance of success in the social media realm
if we can make it better than what exists now
Example of a Product Idea
So one example I'll give, as I also dabble in real estate a bit, is a CRM/PM solution for real estate investors. Most of the products out there aren't what I would consider "good": they are either just not very good or require some hook into a complex solution like Microsoft Dynamics CRM.
There's demand for the product on the web and there's a lot that could be done, more elegantly, with technologies that are available today to help connect all the things that go into an investment decision and investment management.
This is something I'm kicking around and it's a good example of our strategy of trying to find a successful, broad market where opportunity exists for niches to be served in a more direct, elegant manner.
We could do 2 of 3 product types here, but would likely start with just the online product itself and maybe hang training or courses off of it later.
You Need a Product
If you want to stick around online, I believe you need at least one product and brand that can sustain the up-and-down nature of search cycles. You could argue that client work is your product, and I'd buy that.
However, I think client work is still an area where you are more beholden to the decisions of others, in a more abrupt fashion (internal client spend decisions, taking things in-house, etc), than you are if you have your own product or service especially at the price points charged to clients.
I could also make the case that if you are selling direct to consumers you are beholden to them as well. Yet I think the risk is better spread out over a SaaS model, subscription model, or direct product model than it is selling to either a handful of large clients, or handfuls of large clients that require a large team of people and all that goes into managing a team like that.
There still is a ton of opportunity on the web, there is no doubt about that. The practice of finding a broad market and picking a niche in there has worked out well for us in the last year or so.
In some areas we start off with no connections at all. So in areas where we are behind the 8-ball on relationships, we will often hire writers from boards like ProBlogger.Net, where we will specifically ask for folks who are in that industry with an existing site and active social following to write for us.
We will also ask them to promote what they write for us on their social channels and site while hooking their authorship profile into the posts they do for us. This helps us, in certain industries anyway, really grow an audience for short money and establish relationships with established, trusted people in the space.
Finding that balance between passion and monetary potential is difficult and there's often some level of tradeoff. If you use the items I listed earlier as a guide to determine how to move forward with an idea, or if moving forward even makes sense for the idea, then I think you'll be starting off in a solid position.
The last couple of years have been really turbulent but that also has created more opportunities in different areas and while it's nice to throw out the word "diversify" it's also good to take a more boots on the ground approach than a theoretical one.
The core hallmarks of a traditional SEO campaign are still largely the same but there's no reason why you can't stick around and take advantage of these opportunities, especially with all the experience you have in multiple areas of online marketing from being an independent webmaster in the golden age of SEO.
One of the problems with analysing data is the potential to get trapped in the past, when we could be imagining the future. Past performance can be no indication of future success, especially when it comes to Google’s shifting whims.
We see problems, we devise a solution. But projecting forward by measuring the past, and coming up with “the best solution” may lead to missing some obvious opportunities.
In 1972, psychologist, architect and design researcher Bryan Lawson created an empirical study to understand the difference between problem-based solvers and solution-based solvers. He took two groups of students – final year students in architecture and post-graduate science students – and asked them to create one-story structures from a set of colored blocks. The perimeter of the building was to optimize either the red or the blue color, however, there were unspecified rules governing the placement and relationship of some of the blocks.
Lawson found that:
The scientists adopted a technique of trying out a series of designs which used as many different blocks and combinations of blocks as possible as quickly as possible. Thus they tried to maximize the information available to them about the allowed combinations. If they could discover the rule governing which combinations of blocks were allowed they could then search for an arrangement which would optimize the required color around the design. By contrast, the architects selected their blocks in order to achieve the appropriately colored perimeter. If this proved not to be an acceptable combination, then the next most favorably colored block combination would be substituted and so on until an acceptable solution was discovered.
Nigel Cross concludes from Lawson's studies that "scientific problem solving is done by analysis, while designers problem solve through synthesis".
Design thinking tends to start with the solution, rather than the problem. A lot of problem based-thinking focuses on finding the one correct solution to a problem, whereas design thinking tends to offer a variety of solutions around a common theme. It’s a different mindset.
One of the criticisms of Google, made by Google’s former design leader Douglas Bowman, was that Google were too data-centric in their decision making:
When a company is filled with engineers, it turns to engineering to solve problems. Reduce each decision to a simple logic problem. Remove all subjectivity and just look at the data...that data eventually becomes a crutch for every decision, paralyzing the company and preventing it from making any daring design decisions…
There’s nothing wrong with being data-driven, of course. It’s essential. However, if companies only think in those terms, then they may be missing opportunities. If we imagine “what could be”, rather than looking at “what was”, opportunities present themselves. Google realise this, too, which is why they have Google X, a division devoted to imagining the future.
What search terms might people use that don’t necessarily show up in keyword mining tools? What search terms will people use six months from now in our vertical? Will customers contact us more often if we target them this way, rather than that way? Does our copy connect with our customers, or just search engines? Given Google is withholding more search referral data, which is making it harder to target keywords, adding some design thinking to the mix, if you don’t already, might prove useful.
Tools For Design Thinking
In the book, Designing For Growth, authors Jeanne Liedtka and Tim Ogilvie outline some tools for thinking about opportunities and business in ways that aren’t data-driven. One famous proponent of the intuitive, design-led approach was, of course, Steve Jobs.
It's really hard to design products by focus groups. A lot of times, people don't know what they want until you show it to them
The iPhone or iPad couldn’t have been designed by looking solely at the past. They mostly came about because Jobs had an innate understanding of what people wanted. He was proven right by the resulting sales volume.
Design starts with empathy. It forces you to put yourself in the customer's shoes. It means identifying real people with real problems.
In order to do this, we need to put past data aside and watch people, listen to people, and talk with people. The simple act of doing this is a rich source of keyword and business ideas because people often frame a problem in ways you may not expect.
For example, a lot of people see stopping smoking as a goal-setting issue, like a fitness regime, rather than a medical issue. Advertising copy based around medical terminology and keywords might not work as well as copy oriented around goal setting and achieving physical fitness. This shift in the frame of reference certainly conjures up an entirely different world of ad copy, and possibly keywords, too. That different frame might be difficult to determine from analytics and keyword trends alone, but might be relatively easy to spot simply by talking to potential customers.
Designing For Growth is worth a read if you’re feeling bogged down in data and looking for new ways to tackle problems and develop new opportunities. I don’t think there’s anything particularly new in it, and it can come across as "the shiny new buzzword" at times, but the fundamental ideas are strong. I think there is value in applying some of these ideas directly to current SEO issues.
Designing For Growth recommends asking the following questions.
What is the current reality? What is the problem your customers are trying to solve? Xerox solved a problem customers didn’t even know they had when it invented the fax machine. Same goes for the Polaroid camera. And the microwave oven. Customers probably couldn’t describe those things until they saw and understood them, but the problem would have been evident had someone looked closely at the problems they faced, i.e., people really wanted faster, easier ways of completing common tasks.
What do your customers most dislike about the current state of affairs? About your industry? How often do you ask them?
One way of representing this information is with a flowchart. Map the current user experience: from when they have a problem, to imagining keywords, to searching, to seeing the results, to clicking on one of those results, to finding your site, to interacting with your site, to taking the desired action. Could any of the results or steps be better?
Usability tests use the same method. It’s good to watch actual customers as they do this, if possible. Conduct a few interviews. Ask questions. Listen to the language people use. We can glean some of this information from data mining, but there’s a lot more we can get by direct observation, especially when people don’t click on something, as non-activity seldom registers in a meaningful way in analytics.
What would “something better” look like?
Rather than think in terms of what is practical and the constraints that might prevent you from doing something, imagine what an ideal solution would look like if it weren’t for those practicalities and constraints.
A lot of usability testers create personas. These are fictional characters based on real or potential customers and are used to try to gain an understanding of what they might search for, what problems they are trying to solve, and what they expect to see on our site. Is this persona a busy person? Well educated? Do they use the internet a lot? Are they buying for themselves, or on behalf of others? Do they tend to react emotionally, or are they logical? What incentives would this persona respond to?
Personas tend to work best when they’re based on actual people. Watch and observe. Read up on relevant case studies. Trawl back through your emails from customers. Make use of story-boards to capture their potential actions and thoughts. Stories are great ways to understand motivations and thoughts.
What are those things your competition does, and how could they be better? What would those things look like in the best possible world, a world free of constraints?
“What wows” is especially important for social media and SEO going forward.
Those other sites are not bringing additional value. While they’re not duplicates they bring nothing new to the table. It’s not that there’s anything wrong with what these people have done, but they should not expect this type of content to rank.
Google would seek to detect that there is no real differentiation between these results and show only one of them, so we could offer users different types of sites in the other search results.
Cutts talks about the creation of new value. If one site is saying pretty much the same as another site, then those sites may not be duplicates, but one is not adding much in the way of value, either. The new site may be relegated simply for being “too samey”.
"I don't fucking want innovation," an anonymous ex-employee recalls Pincus saying in 2010, according to the SF Weekly. "You're not smarter than your competitor. Just copy what they do and do it until you get their numbers."
Generally speaking, up-and-coming sites should focus on wowing their audience with added depth and/or a new perspective. This, in turn, means having something worth remarking upon, which then attracts mentions across social media, and generates more links.
Is this certain to happen? Nothing is certain as far as Google is concerned. They could still bury you on a whim, but wowing an audience is a better bet than simply imitating long-established players using similar content and link structures. At some point, those long-established players had to wow their audience to get the attention and rankings they enjoy today. They did something remarkably different at some point. Instead of digging the same hole deeper, dig a new hole.
In SEO, change tends to be experimental. It’s iterative. We’re not quite sure what works ahead of time, and no amount of measuring the past tells us all we want to know, but we try a few things and see what works. If a site is not ranking well, we try something else, until it does.
Which leads us to….
Do searchers go for it? Do they do that thing we want them to do, which is click on an ad, or sign up, or buy something?
SEOs are pretty accomplished at this step. Experimentation in areas that are difficult to quantify - the algorithms - has been an intrinsic part of SEO.
The tricky part is not all things work the same everywhere & much like modern health pathologies, Google has clever delays in their algorithms:
Many modern public health pathologies – obesity, substance abuse, smoking – share a common trait: the people affected by them are failing to manage something whose cause and effect are separated by a huge amount of time and space. If every drag on a cigarette brought up a tumour, it would be much harder to start smoking and much easier to quit.
One site's rankings are more stable because another person can't get around the sandbox or their links get them penalized. The same strategy and those same links might work great for another site.
Changes in user behavior are more directly & immediately measurable than SEO.
Consider using change experiments as an opportunity to open up a conversation with potential users. “Do you like our changes? Tell us”. Perhaps use a prompt asking people to initiate a chat, or to participate in a poll. Engagement like that has many benefits: it will likely prevent a fast click back, you get to see the words people use and how they frame their problems, and you learn more about them. You become more responsive and empathetic to their needs.
Beyond Design Thinking
There’s more detail to design thinking, but, really, it’s mostly just common sense. Another framework to add, especially if you feel you’re getting stuck in faceless data.
Design thinking is not a panacea. It is a process, just as Six Sigma is a process. Both have their place in the modern enterprise. The quest for efficiency hasn't gone away and in fact, in our economically straitened times, it's sensible to search for ever more rigorous savings anywhere you can
What's best about it, I feel, is that this type of thinking helps break strategy and data problems down and gives them a human face.
In this world, designers can continue to create extraordinary value. They are the people who have, or could have, the laterality needed to solve problems, the sensing skills needed to hear what the world wants, and the databases required to build for the long haul and the big trajectories. Designers can be definers, making the world more intelligible, more habitable
Actually, they’re not words, they’re acronyms, but you get my drift, I’m sure :)
It must be difficult for SEO providers to stay on the “good and pure” side of SEO when the definitions are constantly shifting. Recently we’ve seen one prominent SEO tool provider rebrand as an “inbound marketing” tools provider and it’s not difficult to appreciate the reasons why.
SEO, to a lot of people, means spam. The term SEO is lumbered, rightly or wrongly, with negative connotations.
Consider email marketing.
Is all email marketing spam? Many would consider it annoying, but obviously not all email marketing is spam.
There is legitimate email marketing, whereby people opt-in to receive email messages they consider valuable. It is an industry worth around $2.468 billion. There are legitimate agencies providing campaign services, reputable tools vendors providing tools, and it can achieve measurable marketing results where everyone wins.
Yet, most email marketing is spam. Most of it is annoying. Most of it is irrelevant. According to a Microsoft security report, 97% of all email circulating is spam.
So, only around 3% of all email is legitimate. 3% of email is wanted. Relevant. Requested.
One wonders how much SEO is legitimate? I guess it depends what we mean by legitimate, but if we accept the definition I’ve used - “something relevant wanted by the user” - then, at a guess, I’d say most SEO these days is legitimate, simply because being off-topic is not rewarded. Most SEOs provide on-topic content, and encourage businesses to publish it - free - on the web. If anything, SEOs could be accused of being too on-topic.
The proof can be found in the SERPs. A site is requested by the user. If a listed site matches their query, then the user probably deems it to be relevant. They might find that degree of relevance, personally, to be somewhat lacking, in which case they’ll click back, but we don’t have a situation where search results are rendered irrelevant by the presence of SEO.
Generally speaking, search appears to work well in terms of delivering relevance. SEO could be considered cleaner than email marketing in that SEOs are obsessed with being relevant to a user. The majority of email marketers, on the other hand, couldn't seem to care less about what is relevant, just so long as they get something, anything, in front of you. In search, if a site matches the search query, and the visitor likes it enough to register positive quality metrics, then what does it matter how it got there?
It probably depends on whose business case we’re talking about.
Matt Cutts has released a new video on Advertorials and Native Advertising.
Matt makes a good case. He reminds us of the idea on which Google was founded, namely citation. If people think a document is important, or interesting, they link to it.
This idea came from academia. The more an academic document is cited, and cited by those with authority, the more relevant that document is likely to be. Nothing wrong with that idea, however some of the time, it doesn’t work. In academic circles, citation is prone to corruption. One example is self-citation.
But really, excessive self-citation is for amateurs: the real thing is forming a “citation cartel”, as Phil Davis from The Scholarly Kitchen puts it. In April this year, after receiving a “tip from a concerned scientist”, Davis did some detective work using the JCR data and found that several journals published reviews citing an unusually high number of articles fitting the JIF window from other journals. In one case, the Medical Science Monitor published a 2010 review citing 490 articles; 445 of them were published in 2008-09 in the journal Cell Transplantation (44 of the other 45 were for articles from Medical Science Monitor published in 2008-09 as well). Three of the authors were Cell Transplantation editors.
So, even in academia, self-serving linking gets pumped and manipulated. When this idea is applied to the unregulated web, where there are vast sums of money at stake, you can see how citation very quickly changes into something else.
There is no way linking is going to stay “pure” in such an environment.
The debate around “paid links” and “paid placement” has been done over and over again, but in summary, the definition of “paid” is inherently problematic. For example, some sites invite guest posting and pay the writers nothing in monetary terms, but the payment is a link back to the writer’s site. The article is a form of paid placement; it’s just that no money changes hands. Is the article truly editorial?
It’s a bit grey.
A lot of the time, such articles pump the writer's business interests. Is that paid content, and does it need to be disclosed? Does it need to be disclosed to both readers and search engines? I think Matt's video suggests the article itself isn't a problem, as utility is provided, but a link from said article may need to be no-followed in order to stay within Google's guidelines.
Matt wants to see clear and conspicuous disclosure of advertorial content. Paid links, likewise. The disclosure should be made both to search engines and readers.
Which is interesting.
Why would a disclosure need to be made to a search engine spider? Granted, it makes Google’s job easier, but I’m not sure why publishers would want to make Google’s job easier, especially if there’s nothing in it for the publishers.
But here comes the stick, and not just from the web spam team.
Google News have stated that if a publication is taking money for paid content and not adequately disclosing that fact - in Google’s view - to both readers and search engines, then that publication may be kicked from Google News. In so doing, Google increase the risk to the publisher, and therefore the cost, of accepting paid links or paid placement.
So, that’s why a publisher will want to make Google’s job easier. If they don’t, they run the risk of invisibility.
Now, on one level, this sounds fair and reasonable. The most “merit worthy” content should be at the top. A ranking should not depend on how deep your pockets are i.e. the more links you can buy, the more merit you have.
However, one of the problems is that the search results already work this way. Big brands often do well in the SERPs due to reputation gained, in no small part, from massive advertising spend that has the side effect, or sometimes direct effect, of inbound links. Do these large brands therefore have “more merit” by virtue of their deeper pockets?
SEO has helped level the playing field for small businesses, in particular. The little guy didn’t have deep pockets, but he could play the game smarter by figuring out what the search engines wanted, algorithmically speaking, and giving it to them.
I can understand Google’s point of view. If I were Google, I’d probably think the same way. I’d love a situation where editorial was editorial, and business was PPC. SEO, to me, would mean making a site crawlable and understandable to both visitors and bots, but that’s the end of it. Anything outside that would be search engine spam. It’s neat. It’s got nice rounded edges. It would fit my business plan.
But real life is messier.
If a publisher doesn’t have the promotion budget of a major brand, and they don’t have enough money to outbid big brands on PPC, then they risk being invisible on search engines. Google search is pervasive, and if you’re not visible in Google search, then it’s a lot harder to make a living on the web. The risk of being banned for not following the guidelines is the same as the risk of playing the game within the guidelines, but not ranking. That risk is invisibility.
Is the fact a small business plays a game that is already stacked against them, by using SEO, “bad”? If they have to play harder than the big brands just to compete, and perhaps become a big brand themselves one day, then who can really blame them? Can a result that is relevant, as far as the user is concerned, still really be labelled “spam”? Is that more to do with the search engine's business case than actual end-user dissatisfaction?
Publishers and SEOs should think carefully before buying into the construct that SEO, beyond Google’s narrow definition, is spam. Also consider that the more people who can be convinced to switch to PPC and/or stick to just making sites more crawlable, then the more spoils for those who couldn’t care less how SEO is labelled.
It would be great if quality content succeeded in the SERPs on merit, alone. This would encourage people to create quality content. But when other aspects are rewarded, then those aspects will be played.
Perhaps if the search engines could be explicit about what they want, and reward it when they’re delivered it, then everyone’s happy.
I guess the algorithms just aren’t that clever yet.
Jon Henshaw put the hammer down on inbound marketing, highlighting how the purveyors of "the message" often do the opposite of what they preach. So much of the marketing I see around that phrase is either of the "clueless newb" variety, or paid push marketing of some stripe.
@seobook why don't you follow more of your followers?
One of the clueless newb examples smacked me in the face last week on Twitter, where some "HubSpot certified partner" (according to his Twitter profile) complained about me not following enough of our followers, then sent a follow-up spam asking if I saw his article about SEO.
The SEO article was worse than useless. It suggested that you shouldn't be "obvious" & that you should "naturally attract links." Yet the article itself was a thin guest post containing the anchor text search engine optimization deep linking to his own site. The same guy has a "book" titled Findability: Why Search Engine Optimization is Dying.
Why not promote the word findability with the deep link if he wants to claim that SEO is dying? Who writes about how something is dying, yet still targets it instead of the alleged solution they have in hand?
If a person wants to claim that anchor text is effective, or that push marketing is key to success, it is hard to refute those assertions. But if you are pushy & aggressive with anchor text, then the message of "being natural" and "just let things flow" is at best inauthentic, which is why sites like Shitbound.org exist. ;)
Some of the people who wanted to lose the SEO label suggested their reasoning was that the acronym SEO was stigmatized. And yet, only a day after rebranding, these same folks that claim they will hold SEO near and dear forever are already outing SEOs.
Then he told me he wasn’t seeing any results from following all the high-flown rhetoric of the “inbound marketing, content marketing” tool vendor. “Last month, I was around 520 visitors. This month, we’re at 587.”
Want to get to 1,000? Work and wait and believe for another year or two. Want to get to 10,000? Forget it.
You could grow old waiting for the inbound marketing fairy tale to come true.
Of course I commented on the above post & asked Andrew if he could put "inbound marketer" in the post title, since that's who was apparently selling hammed up SEO solutions.
In response to Henshaw's post (& some critical comments) calling inbound marketing incomplete marketing, Dharmesh Shah wrote:
"When we talk about marketing, we position classical outbound techniques as generally being less effective (and more expensive) over time. Not that they’re completely useless — just that they don’t work as well as they once did, and that this trend would continue."
Hugh MacLeod is brilliant with words. He doesn't lose things in translation. His job is distilling messages to their core. And what did his commissions for HubSpot state?
thankfully consigning traditional marketing to the dustbin of history since 2006
traditional marketing is easy. all you have to do is pretend it works
the good news is, your customers are just as sick of traditional marketing as you are
hey, remember when traditional marketing used to work? neither do we
traditional marketing doesn't work. it never did
Claiming that "traditional marketing" doesn't work - and never did, would indeed be claiming that classical marketing techniques are ineffective / useless.
If something "doesn't work" it is thus "useless."
You never hear a person say "my hammer works great, it's useless!"
As always, watch what people do rather than what they say.
When prescription and behavior are not aligned, it is the behavior that is worth emulating.
That's equally true for a keyword-rich deep link in a post telling you to let SEO happen naturally, and for people who relabel things while telling you not to do what they are doing.
If "traditional marketing" doesn't work AND they are preaching against it, why do they keep doing it?
Measuring PPC and SEO is relatively straightforward. But how do we go about credibly measuring social media campaigns, and wider public relations and audience awareness campaigns?
As the hype level of social media starts to fall, then more questions are asked about return on investment. During the early days of anything, the hype of the new is enough to sustain an endeavor. People don't want to miss out. If their competitors are doing it, that's often seen as good enough reason to do it, too.
You may be familiar with this graph. It's called the hype cycle and is typically used to demonstrate the maturity, adoption and social application of specific technologies:
Where would social media marketing be on this graph?
I think a reasonable guess, if we're seeing more and more discussion about ROI, is somewhere on the "slope of enlightenment". In this article, we’ll look at ways to measure social media performance by grounding it in the only criteria that truly matter - business fundamentals.
We’ve talked about the Cluetrain Manifesto and how the world changed when corporations could no longer control the message. If the message can no longer be controlled, then measuring the effectiveness of public relations becomes even more problematic.
PR used to be about crafting a message and placing it, and nurturing the relationships that allowed that to happen. With the advent of social media, that’s still true, but the scope has expanded exponentially - everyone can now repeat, run with, distort, reconfigure and reinvent the messages. Controlling the message was always difficult, but now it’s impossible.
On the plus side, it’s now much easier to measure and quantify the effectiveness of public relations activity due to the wealth of web data and tools to track what people are saying, to whom, and when.
The Same, Only Different
As much as things change, the more they stay the same. PR and social media is still about relationships. And getting relationships right pays off:
Today, I want to write about something I’d like to call the “Tim Ferriss Effect.” It’s not exclusive to Tim Ferriss, but he is I believe the marquee example of a major shift that has happened in the last 5 years within the world of book promotion. Here’s the basic idea: When trying to promote a book, the main place you want coverage is on a popular single-author blog or site related to your topic.....The post opened with Tim briefly explaining how he knew me, endorsing me as a person, and describing the book (with a link to my book.) It then went directly into my guest post – there was not even an explicit call to action to buy my book or even any positive statements about my book. An hour later, I was #45 on Amazon’s best seller list.
Public relations is more than about selling, of course. It’s also about managing reputation. It’s about getting audiences to maintain a certain point of view. Social media provides the opportunity to talk to customers and the public directly by using technology to dis-intermediate the traditional gatekeepers.
Can We Really Measure PR & Social Media Performance?
How do you measure the value of a relationship?
How can you really tell if people feel good enough about your product or service to buy it, and that “feeling good” was the direct result of editorial placement by a well-connected public relations professional?
Can you imagine another marketing discipline that used dozens of methods for measuring results? Take search engine marketing for example. The standards are pretty cut and dry: visitors, page views, time on site, cost per click, etc. For email marketing, we have delivery, open rates, click thru, unsubscribes, opt-ins, etc.
In previous articles, we’ve looked at how data-driven marketing can save time and be more effective. The same is true of social media, but given it’s not an exact science, it’s a question of finding an appropriate framework.
Does sending out weekly press releases result in more income? How about tweeting 20 times a day? How much are 5,000 followers on Facebook worth? Without a framework to measure performance, there’s no way of knowing.
Furthermore, there’s no agreed industry standard.
In direct marketing channels, such as SEO and PPC, measurement is fairly straightforward. We count cost per click, number of visitors, conversion rate, time on site, and so on. But how do we measure public relations? How do we measure influence and awareness?
PR firms have often developed their own in-house terms of measurement. The problem is that without industry standards, success criteria can become arbitrary and often used simply to show the agency in a good light and thus validate their fees.
Some agencies use publicity results, such as the number of mentions in the press, or the type of mention, i.e. prestigious placement. Some use advertising value equivalency, i.e. what the editorial coverage would cost if it were bought as advertising space. Some use public opinion measures, such as polls, focus groups and surveys, whilst others compare mentions and placement vs competitors, i.e. whoever has more or better mentions wins. Most use a combination, depending on the nature of the campaign.
Most business people would agree that measurement is a good thing. If we’re spending money, we need to know what we’re getting for that money. If we provide social media services to clients, we need to demonstrate what we’re doing works, so they’ll devote more budget to it in future. If the competition is using this channel, then we need to know if we’re using it better, or worse, than they are.
Perhaps the most significant reason why we measure is to know if we’ve met a desired outcome. To do that we must ignore gut feelings and focus on whether an outcome was achieved.
Why wouldn’t we measure?
Some people don’t like the accountability. Some feel more comfortable with an intuitive approach. It can be difficult for some to accept that their pet theories have little substance when put to the test. It seems like more work. It seems like more expense. It’s just too hard. When it comes to social media, some question whether it can be done at all.
For proof, look no further than The Atlantic, which shook the social media realm recently with its expose of “dark social” – the idea that the channels we fret over measuring like Facebook and Twitter represent only a small fraction of the social activity that’s really going on. The article shares evidence that reveals that the vast majority of sharing is still done through channels like email and IM that are nearly impossible to measure (and thus, dark).
And it's not like a lot of organizations are falling over themselves to get measurement done:
According to a Hypatia Research report, "Benchmarking Social Community Platform Investments & ROI," only 40% of companies measure social media performance on a quarterly or annual basis, while almost 13% of the organizations surveyed do not measure ROI from social media at all, and another 18% said they do so only on an ad hoc basis. (Hypatia didn't specify what response the other 29% gave.)
If we agree that measurement is a good thing and can lead to greater efficiency and better decision making, then the fact your competition may not be measuring well, or at all, presents a great opportunity. We should strive to measure social media ROI, as providers or consumers, or it becomes difficult to justify spend. The argument that we can't measure because we don’t know all the effects of our actions isn’t a reason not to measure what we can.
Marketing has never been an exact science.
What Should We Measure?
Measurement should be linked back to business objectives.
In “Measure What Matters”, Katie Delahaye Paine outlines seven steps to social media measurement. I liked these seven steps, because they aren’t exclusive to social media. They’re the basis for measuring any business strategy and similar measures have been used in marketing for a long time.
It’s all about proving something works, and then using the results to enhance future performance. The book is a great source for those interested in reading further on this topic, which I’ll outline here.
1. What Are Your Objectives?
Any marketing objective should serve a business objective. For example, “increase sales by X by October 31st”.
Having specific, business driven objectives gets rid of conjecture and focuses campaigns. Someone could claim that spending 30 days tweeting a new message a day is a great thing to do, but if, at the end of it, a business objective wasn’t met, then what was the point?
Let’s say an objective is “increase sales of shoes compared to last December’s figures”. What might the social strategy look like? It might consist of time-limited offers, as opposed to more general awareness messages. What if the objective was to “get 5,000 New Yorkers to mention the brand before Christmas”? This would lend itself to viral campaigns, targeted locally. Linking the campaign to specific business objectives will likely change the approach.
If you have multiple objectives, you can always split them up into different campaigns so you can measure the effectiveness of each separately. Objectives typically fall into sales, positioning, or education categories.
2. Who Is The Audience?
Who are you talking to? And how will you know if you’ve reached them? Once you have reached them, what is it you want them to do? How will this help your business?
Your target audience is likely varied. Different audiences could be industry people, customers, supplier organizations, media outlets, and so on. Whilst the message may be seen by all audiences, you should be clear about which messages are intended for who, and what you want them to do next. The messages will be different for each group as each group likely picks up on different things.
Attach a value to each group. Is a media organization picking up on a message more valuable than a non-customer doing so? Again, this should be anchored to a business requirement. “We need media outlets following us so they may run more of our stories in future. Our research shows more stories has led to increased sales volume in the past”. Then a measure might be to count the number of media industry followers, and to monitor the number of stories they produce.
3. Know Your Costs
What does it cost you to run social media campaigns? How much time will it take? How does this compare to other types of campaigns? What is your opportunity cost? How much does it cost to measure the campaign?
As Delahaye Paine puts it, it’s the “I” in ROI.
4. Establish Benchmarks
Testing is comparative, so have something to compare against.
You can compare yourself against competitors, and/or your own past performance. You can compare social media campaigns against other marketing campaigns. What do those campaigns usually achieve? Do social media campaigns work better, or worse, in terms of achieving business goals?
In terms of ROI, what’s a social media “page view” worth? You could compare this against the cost of a click in PPC.
5. Define KPIs
Once you’ve determined objectives, defined the audience, and established benchmarks, you should establish criteria for success.
For example, the objective might be to increase media industry followers. The audience is the media industry and the benchmark is the current number of media industry followers. The KPI would be the number of new media industry followers signed up, as measured by classifying followers into subgroups and conducting a headcount.
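The media-follower KPI above can be sketched in code. This is a hypothetical illustration: the bio keywords, follower data, and benchmark figure are all invented for the example, and a real implementation would pull follower profiles from your social platform's API.

```python
# Hypothetical sketch: classify followers into subgroups by bio keywords,
# then report the media-industry KPI against a benchmark headcount.
# Keywords and follower bios below are illustrative assumptions.

MEDIA_KEYWORDS = {"journalist", "editor", "reporter", "producer", "news"}

def classify(follower_bio: str) -> str:
    """Bucket a follower as 'media' or 'other' based on bio keywords."""
    bio = follower_bio.lower()
    if any(word in bio for word in MEDIA_KEYWORDS):
        return "media"
    return "other"

def kpi_new_media_followers(follower_bios: list, benchmark: int) -> int:
    """KPI: media-industry followers gained over the benchmark count."""
    media_count = sum(1 for bio in follower_bios if classify(bio) == "media")
    return media_count - benchmark

followers = [
    "Tech journalist covering startups",
    "Dad, runner, coffee addict",
    "News editor at a regional daily",
]
print(kpi_new_media_followers(followers, benchmark=1))  # prints 1
```

The same classify-then-count pattern works for any audience subgroup you defined in step 2, so one script can report a KPI per audience.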
Measuring the KPI will differ depending on objective, of course. If you’re measuring the number of mentions in the press vs your competitor, that’s pretty easy to quantify.
“Raising awareness” is somewhat more difficult, however once you have a measurement system in place, you can start to break down the concept of “awareness” into measurable components. Awareness of what? By whom? What constitutes awareness? How do people signal they’re aware of you? And so on.
6. Data Collection Tools
How will you collect measurement data?
Content analysis of social or traditional media
Primary research via online, mail or phone survey
There are an overwhelming number of tools available, a survey of which is outside the scope of this article. No tool can measure “reputation” or “awareness” or “credibility” by itself, but it can produce usable data if we break those areas down into suitable metrics. For example, “awareness” could be quantified by “page views + a survey of a statistically valid sample”.
Half the battle is asking the right questions.
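To make the idea concrete, here is one way "awareness" could be decomposed into measurable components, as a rough sketch. The blend of page views and survey recall, the 50/50 weighting, and all the figures are assumptions invented for illustration, not an industry-standard formula.

```python
# Hypothetical sketch: blend traffic growth and aided-recall survey
# results into a single awareness score. Weights and inputs are
# illustrative assumptions only.

def awareness_index(page_views, pv_benchmark, aware, surveyed, weight=0.5):
    """Combine a traffic score (1.0 = at benchmark) with the fraction
    of surveyed people who recall the brand."""
    traffic_score = page_views / pv_benchmark
    recall_score = aware / surveyed
    return weight * traffic_score + (1 - weight) * recall_score

# 12,000 page views vs. a 10,000 benchmark; 180 of 400 surveyed aware
score = awareness_index(page_views=12000, pv_benchmark=10000,
                        aware=180, surveyed=400)
print(f"Awareness index: {score:.2f}")
```

Whatever components you choose, the point is that once "awareness" is expressed as numbers, it can be benchmarked and tracked over time like any other KPI.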
7. Take Action
A measurement process is about iteration. You do something, get the results back, act on them and make changes, and arrive at a new status quo. You then do something starting from that new point, and so on. It’s an ongoing process of optimization.
Were objectives met? What conclusions can you draw?
Those seven steps will be familiar to anyone who has measured marketing campaigns and business performance. They’re grounded in the fundamentals. Without relating social media metrics back to the underlying fundamentals, we can never be sure if what we’re doing is making a difference, or is worthwhile. Is 5,000 Twitter followers a good thing?
What business problem does it address?
Did You Make A Return?
You invested time and money. Did you get a return?
If you’ve linked your social media campaigns back to business objectives you should have a much clearer idea. Your return will depend on the nature of your business, of course, but it could be quantified in terms of sales, cost savings, avoiding costs or building an audience.
In terms of SEO, we’ve long advocated building brand. Having people conduct brand searches is a form of insurance against Google demoting your site. If you have brand search volume, and Google don’t return you for brand searches, then Google looks deficient.
So, one goal of social media that gels with SEO is to increase brand awareness. You establish a benchmark of branded searches based on current activity. You run your social media campaigns, and then see if branded searches increase.
Granted, this is a fuzzy measure, especially if you have other awareness campaigns running, as you can’t be certain cause and effect. However, it’s a good start. You could give it a bit more depth by integrating a short poll for visitors i.e. “did you hear about us on Twitter/Facebook/Other?”.
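The benchmark-then-compare approach above can be sketched as a simple before/after calculation. The monthly search counts here are made-up numbers; real figures would come from your analytics or search console data.

```python
# Illustrative sketch: percentage lift in branded searches after a
# social campaign, against a pre-campaign benchmark. Figures invented.

def branded_search_lift(before: list, after: list) -> float:
    """Percentage change in average monthly branded search volume."""
    baseline = sum(before) / len(before)
    current = sum(after) / len(after)
    return (current - baseline) / baseline * 100

# Three months pre-campaign vs. three months post-campaign (hypothetical)
lift = branded_search_lift([1200, 1150, 1250], [1400, 1500, 1450])
print(f"Branded search lift: {lift:.1f}%")  # prints 20.8%
```

Averaging several months on each side smooths out seasonal noise a little, though as noted above, it can't isolate the campaign from other awareness activity running at the same time.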
Mechanics Of Measurement
Measuring social media isn’t that difficult. In fact, we could just as easily use search metrics in many cases. What is the cost per view? What is the cost per click? Did the click from a social media campaign convert to desired action? What was your business objective for the social media campaign? To get more leads? If so, then count the leads. How much did each lead cost to acquire? How does that cost compare to other channels, like PPC? What is the cost of customer acquisition via social media?
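The cost-per-lead comparison described above is simple arithmetic. All the spend and lead figures below are hypothetical, purely to show the shape of the calculation.

```python
# Back-of-envelope sketch: cost per lead for a social campaign vs. PPC.
# All figures are invented for illustration.

def cost_per_lead(spend: float, leads: int) -> float:
    """Acquisition cost per lead; infinite if the channel produced none."""
    return spend / leads if leads else float("inf")

social = cost_per_lead(spend=2000.0, leads=40)   # $50 per lead
ppc = cost_per_lead(spend=3000.0, leads=75)      # $40 per lead
print(f"Social: ${social:.2f}/lead, PPC: ${ppc:.2f}/lead")
```

With both channels expressed in the same unit, budget allocation becomes a comparison rather than a debate.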
In this way, we could split social media into a customer service side and a marketing side. Engaging with your customers on Facebook may not be all that measurable in terms of direct marketing effects; it’s more of a customer service function. As such, budget for the soft side of social media need not come out of marketing budgets, but customer service budgets. This could still be measured, of course, by running customer satisfaction surveys.
Is Social Media Marketing Public Relations?
Look around the web for definitions of the differences between PR and social media, and you’ll find a lot of vague definitions.
Social media is a tool often used for the purpose of public relations. The purpose is to create awareness and nurture and guide relationships.
Public relations is sometimes viewed as a bit of a scam. It’s an area that sucks money, yet can often struggle to prove its worth, often relying on fuzzy, feel-good proclamations of success and vague metrics. It doesn’t help that clients can have unrealistic expectations of PR, and that some PR firms are only too happy to promise the moon:
PR is nothing like the dark, scary world that people make it out to be—but it is a new one for most. And knowing the ropes ahead of time can save you from setting impossibly high expectations or getting overpromised and oversold by the firm you hire. I’ve seen more than my fair share of clients bringing in a PR firm with the hopes that it’ll save their company or propel a small, just-launched start-up into an insta-Facebook. And unfortunately, I’ve also seen PR firms make these types of promises. Guess what? They’re never kept.
Internet marketing, in general, has a credibility problem when it doesn’t link activity back to business objectives.
Part of that perception, in relation to social media, comes from the fact public relations is difficult to control:
The main conduit to mass publics, particularly with a consumer issue such as rail travel or policing, are the mainstream media. Unlike advertising, which has total control of its message, PR cannot convey information without the influence of opinion, much of it editorial. How does the consumer know what is fact, and what has influenced the presentation of that fact?
But lack of control of the message, as the Cluetrain Manifesto points out, is the nature of the environment in which we exist. Our only choice, if we are to prosper in the digital environment, is to embrace the chaos.
Shouldn’t PR just happen? If you’re good, people just know? Well, even Google, that well known, engineering-driven advertising company has PR deeply embedded from almost day one:
David Krane was there from day one as Google's first public relations official. He's had a hand in almost every single public launch of a Google product since the debut of Google.com in 1999.
Good PR is nurtured. It’s a process. The way to find out if it’s good PR or ineffective PR is to measure it. PR isn’t a scam, anymore so than any other marketing activity is a scam. We can find out if it’s worthwhile only by tracking and measuring and linking that measurement back to a business case. Scams lack transparency.
The way to get transparency is to measure and quantify.
Do users find big headlines more relevant? Does using long text lead to more, or less, visitor engagement? Is that latest change to the shopping cart going to make things worse? Are your links just the right shade of blue?
If you want to put an end to tiresome subjective arguments about page length, or the merits of your clients latest idea, which is to turn their website pink, then adopting an experimental process for web publishing can be a good option.
If you don’t currently use an experiment-driven publishing approach, then this article is for you. We’ll look at ways to bake experiments into your web site, the myriad of opportunities testing creates, how it can help your SEO, and ways to mitigate cultural problems.
The merits of any change should be derived from the results of the change under a controlled test. This process is common in PPC, however many SEOs will no doubt wonder how such an approach will affect their SEO.
We’ve gotten several questions recently about whether website testing—such as A/B or multivariate testing—affects a site’s performance in search results. We’re glad you’re asking, because we’re glad you’re testing! A/B and multivariate testing are great ways of making sure that what you’re offering really appeals to your users
Post-panda, being more relevant to visitors, not just machines, is important. User engagement is more important. If you don’t closely align your site with user expectations and optimize for engagement, then it will likely suffer.
The new SEO, at least as far as Panda is concerned, is about pushing your best quality stuff and the complete removal of low-quality or overhead pages from the indexes. Which means it’s not as easy anymore to compete by simply producing pages at scale, unless they’re created with quality in mind. Which means for some sites, SEO just got a whole lot harder.
Experiments can help us achieve greater relevance.
If It Ain’t Broke, Fix It
One reason for resisting experiment-driven decisions is to not mess with success. However, I’m sure we all suspect most pages and processes can be made better.
If we implement data-driven experiments, we’re more likely to spot the winners and losers in the first place. What pages lead to the most sales? Why? What keywords are leading to the best outcomes? We identify these pages, and we nurture them. Perhaps you already experiment in some areas on your site, but what would happen if you treated most aspects of your site as controlled experiments?
We also need to cut losers.
If pages aren’t getting much engagement, we need to identify them, improve them, or cut them. The Panda update was about levels of engagement, and too many poorly performing pages will drag your site down. Run with the winners, cut the losers, and have a methodology in place that enables you to spot them, optimize them, and cut them if they aren’t performing.
Testing Methodology For Marketers
Tests are based on the same principles used to conduct scientific experiments. The process involves data gathering, designing experiments, running experiments, analyzing the results, and making changes.
1. Set A Goal
A goal should be simple i.e. “increase the signup rate of the newsletter”.
We could fail in this goal (decreased signups), succeed (increased signups), or stay the same. The goal should also deliver genuine business value.
There can often be multiple goals. For example, “increase email signups AND Facebook likes OR ensure signups don’t decrease by more than 5%”. However, if you can get it down to one goal, you’ll make life easier, especially when starting out. You can always break down multiple goals into separate experiments.
2. Create A Hypothesis
What do you suspect will happen as a result of your test? i.e. “if we strip all other distractions from the email sign up page, then sign-ups will increase”.
The hypothesis can be stated as an improvement, or preventing a negative, or finding something that is wrong. Mostly, we’re concerned with improving things - extracting more positive performance out of the same pages, or set of pages.
“Will the new video on the email sign-up page result in more email signups?” Only one way to find out. And once you have found out, you can run with it or replace it safe in the knowledge it's not just someone's opinion. The question will move from “just how cool is this video!” (subjective) to “does this video result in more email sign-ups?”. A strategy based on experiments eliminates most subjective questions, or shifts them to areas that don’t really affect the business case.
The video sales page significantly increased the number of visitors who clicked to the price/guarantee page by 46.15%....Video converts! It did so when mentioned in a “call to action” (a 14.18% increase) and also when used to sell (35% and 46.15% increases in two different tests)
When crafting a hypothesis, you should keep business value clearly in mind. If the hypothesis suggests a change that doesn’t add real value, then testing it is likely a waste of time and money. It creates an opportunity cost for other tests that do matter.
When selecting areas to test, you should start by looking at the areas which matter most to the business, and the majority of users. For example, an e-commerce site would likely focus on product search, product descriptions, and the shopping cart. The About Page - not so much.
Order areas to test in terms of importance and go for the low-hanging fruit first. If you can demonstrate significant gains early on, it will boost your confidence and validate your approach. As experimental testing becomes part of your process, you can move on to more granular testing. Ideally, you want to end up with a culture whereby most site changes have some sort of test associated with them, even if it’s just to compare performance against the previous version.
Look through your stats to find pages or paths with high abandonment rates or high bounce rates. If these pages are important in terms of business value, then prioritize these for testing. It’s important to order these pages in terms of business value, because high abandonment rates or bounce rates on pages that don’t deliver value isn’t a significant issue. It’s probably more a case of “should these pages exist at all”?
3. Run An A/B or Multivariate Test
Two of the most common testing methodologies in direct response marketing are A/B testing and multivariate testing.
A/B Testing, otherwise known as split testing, is when you compare one version of a page against another. You collect data on how each page performs, relative to the other.
Version A is typically the current, or favored version of a page, whilst page B differs slightly, and is used as a test against page A. Any aspect of the page can be tested, from headline, to copy, to images, to color, all with the aim of improving a desired outcome. The data regarding performance of each page is tested, the winner is adopted, and the loser rejected.
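One mechanical detail of A/B testing is assigning visitors to variants consistently, so a returning visitor always sees the same version. Here's a minimal sketch of one common approach, hashing the visitor ID; the variant names and 50/50 split are assumptions for illustration, and testing tools handle this for you.

```python
# Minimal sketch of deterministic A/B bucketing: hash the visitor ID so
# the same visitor always lands in the same variant across sessions.

import hashlib

def assign_variant(visitor_id: str, variants=("A", "B")) -> str:
    """Deterministically map a visitor to one of the variants."""
    digest = hashlib.md5(visitor_id.encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# Same visitor, same variant, every time
print(assign_variant("visitor-123"))
print(assign_variant("visitor-123"))  # identical to the line above
```

Deterministic bucketing matters because a visitor who sees version A today and version B tomorrow pollutes the data for both.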
Multivariate testing is more complicated. Multivariate testing is when more than one element is tested at any one time. It’s like performing multiple A/B tests on the same page, at the same time. Multivariate testing can test the effectiveness of many different combinations of elements.
Which method should you use?
In most cases, in my experience, A/B testing is sufficient, but it depends. In the interest of time, value and sanity, it’s more important and productive to select the right things to test i.e. the changes that lead to the most business value.
As your test culture develops, you can go more and more granular. The slightly different shade of blue might be important to Google, but it’s probably not that important to sites with less traffic. But, keep in mind, assumptions should be tested ;) Your mileage may vary.
There are various tools available to help you run these tests. I have no association with any of these, but here are a few to check out:
4. Check For Statistical Significance
Statistical significance refers to two related notions: the p-value, the probability that observations at least as extreme as the data would occur by chance under a given null hypothesis; and the Type I error rate α (the false positive rate) of a statistical hypothesis test, the probability of incorrectly rejecting a given null hypothesis in favor of an alternative hypothesis.
In short, you need enough visitors taking an action to decide it is not likely to have occurred randomly, but is most likely attributable to a specific cause i.e. the change you made.
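A common way to run that check for conversion rates is a two-proportion z-test, sketched below with only the standard library. The conversion counts are invented; most testing tools do this calculation for you.

```python
# Illustrative two-proportion z-test: did variant B's conversion rate
# beat A's by more than chance would explain? Counts are made up.

import math

def z_test(conv_a, n_a, conv_b, n_b):
    """Z-score for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# 200/5000 conversions on A vs. 260/5000 on B (hypothetical)
z = z_test(conv_a=200, n_a=5000, conv_b=260, n_b=5000)
print(f"z = {z:.2f}")  # |z| > 1.96 means significant at the 95% level
```

The practical takeaway is the sample-size requirement: small visitor counts produce z-scores near zero for almost any change, which is why underpowered tests "fail" even when the variant is genuinely better.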
5. Run With The Winners
Run with the winners, cut the losers, rinse and repeat. Keep in mind that you may need to retest at different times, as the audience can change, or their motivations change, depending on underlying changes in your industry. Testing, like great SEO, is best seen as an ongoing process.
Make the most of every visitor who arrives on your site, because they’re only ever going to get more expensive.
Here’s an interesting seminar where the results of hundreds of experiments were reduced down to three fundamental lessons:
a) How can I increase specificity? Use quantifiable, specific information as it relates to the value proposition
b) How can I increase continuity? Always carry across the key message using repetition
c) How can I increase relevance? Use metrics to ask “why”
Often, tests will fail.
Changing content can sometimes make little, if any, difference. Other times, the difference will be significant. But even when tests fail to show a difference, it still gives you information you can use. These might be areas in which designers, and other vested interests, can stretch their wings, and you know that it won’t necessarily affect business value in terms of conversion.
Sometimes, the test itself wasn't designed well. It might not have been given enough time to run. It might not have been linked to a business case. Tests tend to get better as we gain more experience, but having a process in place is the important thing.
You might also find that your existing page works just great and doesn’t need changing. Again, it’s good to know. You can then try replicating this success in areas where the site isn’t performing so well.
Failure and mistakes are inevitable. Knowing this, we put mechanisms in place to spot failures and mistakes early, rather than later. Structured failure is a badge of honor!
Thomas Edison performed 9,000 experiments before coming up with a successful version of the light bulb. Students of entrepreneurship talk about the J-curve of returns: the failures come early and often and the successes take time. America has proved to be more entrepreneurial than Europe in large part because it has embraced a culture of “failing forward”, as a common tech-industry phrase puts it: in Germany bankruptcy can end your business career whereas in Silicon Valley it is almost a badge of honour.
Or perhaps it’s because some of the best ideas in tech today have come from those that weren’t so good. (Remember, Apple's first tablet device was called the Newton.)
There’s a word used to describe this get-over-it mentality that I heard over and over on my trip through Silicon Valley and San Francisco this week: “pivot”.
Experimentation, and measuring results, will highlight failure. This can be a hard thing to take, and especially hard to take when our beloved, pet theories turn out to be more myth than reality. In this respect, testing can seem harsh and unkind. But failure should be seen for what it is - one step in a process leading towards success. It’s about trying stuff out in the knowledge some of it isn’t going to work, and some of it will, but we can’t be expected to know which until we try it.
In The Lean Startup, Eric Ries talks about the benefits of using lean methodologies to take a product from not-so-good to great, using systematic testing:
If your first product sucks, at least not too many people will know about it. But that is the best time to make mistakes, as long as you learn from them to make the product better. “It is inevitable that the first product is going to be bad in some ways,” he says. The Lean Startup methodology is a way to systematically test a company’s product ideas.
Fail early and fail often. “Our goal is to learn as quickly as possible,” he says.
Given that testing can be incremental, we don’t have to fail big. Swapping one graphic position for another could barely be considered a failure, and that’s what a testing process is about. It’s incremental and iterative, and one failure or success doesn’t matter much, so long as it’s all heading in the direction of achieving a business goal.
It’s about turning the dogs into winners, and making the winners even bigger winners.
Feel Vs Experimentation
Web publishing decisions are often based on intuition, historical precedence - “we’ve always done it this way” - or by copying the competition. Graphic designers know about colour psychology, typography and layout. There is plenty of room for conflict.
Douglas Bowman, a graphic designer at Google, left the company because he felt it relied too much on data-driven decisions, and not enough on the opinions of designers:
Yes, it’s true that a team at Google couldn’t decide between two blues, so they’re testing 41 shades between each blue to see which one performs better. I had a recent debate over whether a border should be 3, 4 or 5 pixels wide, and was asked to prove my case. I can’t operate in an environment like that. I’ve grown tired of debating such minuscule design decisions. There are more exciting design problems in this world to tackle.
That probably doesn’t come as a surprise to any Google watchers. Google is driven by engineers. In Google’s defense, they have such a massive user base that minor changes can have significant impact, so their approach is understandable.
Putting emotion, and habit, aside is not easy.
However, experimentation doesn’t need to exclude visual designers. Visual design is valuable. It helps visitors identify and remember brands. It can convey professionalism and status. It helps people make positive associations.
But being relevant is also design.
Adopting an experimentation methodology means designers can work on a number of different designs and get to see how the public really does react to their work. Design X converted better than design Y, layout Q works best for form design, buttons A, B and C work better than buttons J, K and L, and so on. It’s a further opportunity to validate creative ideas.
Part of getting experimentation right has to do with an organization's culture. Obviously, it’s much easier if everyone is working towards a common goal, i.e. “all work, and all decisions made, should serve a business goal, as opposed to serving personal ego”.
All aspects of web publishing can be tested, although asking the right questions about what to test is important. Some aspects may not make a measurable difference in terms of conversion. A logo, for example. A visual designer could focus on that page element, whilst the conversion process might rely heavily on the layout of the form. Both the conversion expert and the design expert get to win, yet not stamp on each other's toes.
One of the great aspects of data-driven decision making is that common, long-held assumptions get challenged, often with surprising results. How long does it take to film a fight scene? The movie industry says 30 days.
Experts go with what they know. And they’ll often insist something needs to take a long time. But when you don’t have tons of resources, you need to ask if there’s a simpler, judo way to get the impact you desire. Sometimes there’s a better way than the “best” way. I thought of this while watching “The Fighter” over the weekend. There’s a making-of extra on the DVD where Mark Wahlberg, who starred in and produced the film, talks about how all the fight scenes were filmed with an actual HBO fight crew. He mentions that going this route allowed them to shoot these scenes in a fraction of the time it usually takes.
How many aspects of your site are based on assumption? Could those assumptions be masking opportunities or failure?
Some experiments, if poorly designed, don’t lead to more business success. If an experiment isn’t focused on improving a business case, then it’s probably just wasted time. That time could have been better spent devising and running better experiments.
In Agile software development methodologies, the question is always asked: “how does this change/feature provide value to the customer?” The underlying motive is “how does this change/feature provide value to the business?” This is a good way to prioritize test cases. Those that potentially provide the most value, such as landing page optimization on PPC campaigns, are likely to have a higher priority than, say, features available to forum users.
I hope this article has given you some food for thought and that you'll consider adding some experiment-based processes to your mix. Here are some of the sources used in this article, and further reading: