Getting Granular With User Generated Content

The stock market had a flash crash today after someone hacked the AP account & made a fake announcement about bombs going off at the White House. Recently Twitter's search functionality has grown so inundated with spam that I don't even look at brand-related searches much anymore. While you can block individual users, blocking doesn't keep them out of search results, so various affiliate bots spam just about any semi-branded search.

Of course, for as spammy as the service is now, it was worse during the explosive growth period, when Twitter had fewer than 10 employees fighting spam:

Twitter says its "spammy" tweet rate of 1.5% in 2010 was down from 11% in 2009.

If you want to show growth by any means necessary, engagement by a spam bot is still engagement & still lifts the valuation of the company.

Many of the social sites make no effort to police spam & only combat it after users flag it. Consider Eric Schmidt's interview with Julian Assange, where Schmidt stated:

  • "We [YouTube] can't review every submission, so basically the crowd marks it if it is a problem post publication."
  • "You have a different model, right. You require human editors." (on Wikileaks vs YouTube)

We would post editorial content more often, but we are sort of debating opening up a social platform so that we can focus on the user without having to bear any editorial costs until after the fact. Profit margins are apparently better that way.

As Google drives smaller sites out of the index & ranks junk content based on no factor other than it being on a trusted site, they create the incentive for spammers to ride on the social platforms.

All aboard. And try not to step on any toes!

When I do some product related searches (eg: brand name & shoe model) almost the whole result set for the first 5 or 10 pages is garbage.

  • Blogspot.com subdomains
  • Appspot.com subdomains
  • YouTube accounts
  • Google+ accounts
  • sites.google.com
  • Wordpress.com subdomains
  • Facebook Notes & pages
  • Tweets
  • Slideshare
  • LinkedIn
  • blog.yahoo.com
  • subdomains off of various other free hosts

It comes as no surprise that Eric Schmidt fundamentally believes that "disinformation becomes so easy to generate because of, because complexity overwhelms knowledge, that it is in the people's interest, if you will over the next decade, to build disinformation generating systems, this is true for corporations, for marketing, for governments and so on."

Of course he made no mention of Google's role in the above problem. When they are not issuing threats & penalties to smaller independent webmasters, they are just a passive omniscient observer.

With all these business models, there is a core model of building up a solid stream of usage data & then tricking users or looking the other way when things get out of hand. Consider these tips on YouTube from Google's Lane Shackleton:

  • "Search is a way for a user to explicitly call out the content that they want. If a friend told me about an Audi ad, then I might go seek that out through search. It’s a strong signal of intent, and it’s a strong signal that someone found out about that content in some way."
  • "you blur the lines between advertising and content. That’s really what we’ve been advocating our advertisers to do."
  • "you’re making thoughtful content for a purpose. So if you want something to get shared a lot, you may skew towards doing something like a prank"

Harlem Shake & Idiocracy: the innovative way forward to improve humanity.

Life is a prank.

This "spam is fine, so long as it is user generated" stuff has gotten so out of hand that Google is now implementing granular page-level penalties. When those granular penalties hit major sites Google suggests that those sites may receive clear advice on what to fix, just by contacting Google:

Hubert said that if people file a reconsideration request, they should “get a clear answer” about what’s wrong. There’s a bit of a Catch-22 there. How can you file a reconsideration request showing you’ve removed the bad stuff, if the only way you can get a clear answer about the bad stuff to remove is to file a reconsideration request?

The answer is that technically, you can request reconsideration without removing anything. The form doesn’t actually require you to remove bad stuff. That’s just the general advice you’ll often hear Google say, when it comes to making such a request. That’s also good advice if you do know what’s wrong.

But if you’re confused and need more advice, you can file the form asking for specifics about what needs to be removed. Then have patience.

In the past I referenced that there is no difference between a formal white list & overly-aggressive penalties coupled with loose exemptions for select parties.

The moral of the story is that if you are going to spam, you should make it look like a user of your site did it, that way you

  • are above judgement
  • receive only a limited granular penalty
  • get explicit & direct feedback on what to fix

Experiment Driven Web Publishing

Do users find big headlines more relevant? Does using long text lead to more, or less, visitor engagement? Is that latest change to the shopping cart going to make things worse? Are your links just the right shade of blue?

If you want to put an end to tiresome subjective arguments about page length, or the merits of your client's latest idea, which is to turn their website pink, then adopting an experimental process for web publishing can be a good option.

If you don’t currently use an experiment-driven publishing approach, then this article is for you. We’ll look at ways to bake experiments into your web site, the myriad of opportunities testing creates, how it can help your SEO, and ways to mitigate cultural problems.

Controlled Experiments

The merits of any change should be derived from the results of the change under a controlled test. This process is common in PPC, however many SEOs will no doubt wonder how such an approach will affect their SEO.

Well, Google encourages it.

We’ve gotten several questions recently about whether website testing—such as A/B or multivariate testing—affects a site’s performance in search results. We’re glad you’re asking, because we’re glad you’re testing! A/B and multivariate testing are great ways of making sure that what you’re offering really appeals to your users

Post-panda, being more relevant to visitors, not just machines, is important. User engagement is more important. If you don’t closely align your site with user expectations and optimize for engagement, then it will likely suffer.

The new SEO, at least as far as Panda is concerned, is about pushing your best quality stuff and the complete removal of low-quality or overhead pages from the indexes. Which means it’s not as easy anymore to compete by simply producing pages at scale, unless they’re created with quality in mind. Which means for some sites, SEO just got a whole lot harder.

Experiments can help us achieve greater relevance.

If It Ain’t Broke, Fix It

One reason for resisting experiment-driven decisions is to not mess with success. However, I’m sure we all suspect most pages and processes can be made better.

If we implement data-driven experiments, we’re more likely to spot the winners and losers in the first place. What pages lead to the most sales? Why? What keywords are leading to the best outcomes? We identify these pages, and we nurture them. Perhaps you already experiment in some areas on your site, but what would happen if you treated most aspects of your site as controlled experiments?

We also need to cut losers.

If pages aren’t getting much engagement, we need to identify them, improve them, or cut them. The Panda update was about levels of engagement, and too many poorly performing pages will drag your site down. Run with the winners, cut the losers, and have a methodology in place that enables you to spot them, optimize them, and cut them if they aren’t performing.

Testing Methodology For Marketers

Tests are based on the same principles used to conduct scientific experiments. The process involves data gathering, designing experiments, running experiments, analyzing the results, and making changes.

1. Set A Goal

A goal should be simple, e.g. “increase the signup rate of the newsletter”.

We could fail in this goal (decreased signups), succeed (increased signups), or stay the same. The goal should also deliver genuine business value.

There can often be multiple goals. For example, “increase email signups AND Facebook likes OR ensure signups don’t decrease by more than 5%”. However, if you can get it down to one goal, you’ll make life easier, especially when starting out. You can always break down multiple goals into separate experiments.

2. Create A Hypothesis

What do you suspect will happen as a result of your test? E.g. “if we strip all other distractions from the email sign-up page, then sign-ups will increase”.

The hypothesis can be stated as an improvement, or preventing a negative, or finding something that is wrong. Mostly, we’re concerned with improving things - extracting more positive performance out of the same pages, or set of pages.

“Will the new video on the email sign-up page result in more email signups?” Only one way to find out. And once you have found out, you can run with it or replace it safe in the knowledge it's not just someone's opinion. The question will move from “just how cool is this video!” (subjective) to “does this video result in more email sign-ups?”. A strategy based on experiments eliminates most subjective questions, or shifts them to areas that don’t really affect the business case.

The video sales page significantly increased the number of visitors who clicked to the price/guarantee page by 46.15%....Video converts! It did so when mentioned in a “call to action” (a 14.18% increase) and also when used to sell (35% and 46.15% increases in two different tests)

When crafting a hypothesis, you should keep business value clearly in mind. If the hypothesis suggests a change that doesn’t add real value, then testing it is likely a waste of time and money. It creates an opportunity cost for other tests that do matter.

When selecting areas to test, you should start by looking at the areas which matter most to the business, and the majority of users. For example, an e-commerce site would likely focus on product search, product descriptions, and the shopping cart. The About Page - not so much.

Order areas to test in terms of importance and go for the low-hanging fruit first. If you can demonstrate significant gains early on, then it will boost your confidence and validate your approach. As experimental testing becomes part of your process, you can move on to more granular testing. Ideally, you want to end up with a culture whereby most site changes have some sort of test associated with them, even if it’s just to compare performance against the previous version.

Look through your stats to find pages or paths with high abandonment rates or high bounce rates. If these pages are important in terms of business value, then prioritize these for testing. It’s important to order these pages in terms of business value, because high abandonment rates or bounce rates on pages that don’t deliver value isn’t a significant issue. It’s probably more a case of “should these pages exist at all”?

3. Run An A/B or Multivariate Test

Two of the most common testing methodologies in direct response marketing are A/B testing and multivariate testing.

A/B Testing, otherwise known as split testing, is when you compare one version of a page against another. You collect data on how each page performs, relative to the other.

Version A is typically the current, or favored version of a page, whilst page B differs slightly, and is used as a test against page A. Any aspect of the page can be tested, from headline, to copy, to images, to color, all with the aim of improving a desired outcome. The data regarding performance of each page is tested, the winner is adopted, and the loser rejected.

Multivariate testing is more complicated. Multivariate testing is when more than one element is tested at any one time. It’s like performing multiple A/B tests on the same page, at the same time. Multivariate testing can test the effectiveness of many different combinations of elements.
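Whichever method you choose, the mechanical core of a test is splitting visitors into stable buckets. Here's a minimal sketch of one common approach (the function name and experiment labels are my own illustration, not from any particular tool): hash the visitor id together with the experiment name, so the same visitor always sees the same variant, and different experiments split independently.

```python
import hashlib

def assign_variant(user_id: str, experiment: str, variants=("A", "B")) -> str:
    """Deterministically assign a visitor to a test variant.

    Hashing the user id with the experiment name keeps the split stable
    across visits and uncorrelated between different experiments.
    """
    digest = hashlib.md5(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# The same visitor always lands in the same bucket for a given experiment:
print(assign_variant("visitor-42", "signup-page"))
```

A multivariate test is the same idea applied per element: pass a different `experiment` label for each element under test (headline, image, button) and each one splits independently.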

Which method should you use?

In most cases, in my experience, A/B testing is sufficient, but it depends. In the interest of time, value and sanity, it’s more important and productive to select the right things to test i.e. the changes that lead to the most business value.

As your test culture develops, you can go more and more granular. The slightly different shade of blue might be important to Google, but it’s probably not that important to sites with less traffic. But, keep in mind, assumptions should be tested ;) Your mileage may vary.

There are various tools available to help you run these tests. I have no association with any of these, but here are a few to check out:

4. Ensure Statistical Significance

Tests need to show statistical significance. What does statistically significant mean?

For those who are comfortable with statistics:

Statistical significance is used to refer to two separate notions: the p-value, the probability that observations as extreme as the data would occur by chance in a given single null hypothesis; or the Type I error rate α (false positive rate) of a statistical hypothesis test, the probability of incorrectly rejecting a given null hypothesis in favor of a second alternative hypothesis

For those of you who, like me, prefer a more straightforward explanation, here’s also a good explanation in relation to PPC, and a video explaining statistical significance in reference to A/B tests.

In short, you need enough visitors taking an action to decide it is not likely to have occurred randomly, but is most likely attributable to a specific cause i.e. the change you made.
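To make that concrete, here's a minimal sketch of a two-proportion z-test in plain Python, the standard way to check whether two conversion rates differ by more than chance. The visitor and conversion numbers are invented for illustration.

```python
import math

def ab_significance(conversions_a, visitors_a, conversions_b, visitors_b):
    """Two-proportion z-test: is B's conversion rate really different from A's?"""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    # Pooled rate under the null hypothesis that A and B convert identically
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    # Two-tailed p-value via the normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# 2.0% baseline vs 2.6% variant over 10,000 visitors each:
z, p = ab_significance(200, 10000, 260, 10000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A p-value below 0.05 is the conventional threshold for calling a result significant at the 95% level; with fewer visitors, the same 0.6-point lift would not clear it, which is why underpowered tests mislead.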

5. Run With The Winners

Run with the winners, cut the losers, rinse and repeat. Keep in mind that you may need to retest at different times, as the audience can change, or their motivations change, depending on underlying changes in your industry. Testing, like great SEO, is best seen as an ongoing process.

Make the most of every visitor who arrives on your site, because they’re only ever going to get more expensive.

Here’s an interesting seminar where the results of hundreds of experiments were reduced down to three fundamental lessons:

  • a) How can I increase specificity? Use quantifiable, specific information as it relates to the value proposition
  • b) How can I increase continuity? Always carry across the key message using repetition
  • c) How can I increase relevance? Use metrics to ask “why”

Tests Fail

Often, tests will fail.

Changing content can sometimes make little, if any, difference. Other times, the difference will be significant. But even when tests fail to show a difference, it still gives you information you can use. These might be areas in which designers, and other vested interests, can stretch their wings, and you know that it won’t necessarily affect business value in terms of conversion.

Sometimes, the test itself wasn't designed well. It might not have been given enough time to run. It might not have been linked to a business case. Tests tend to get better as we gain more experience, but having a process in place is the important thing.

You might also find that your existing page works just great and doesn’t need changing. Again, it’s good to know. You can then try replicating this success in areas where the site isn’t performing so well.

Enjoy Failing

“Fail fast, fail early and fail often.”

Failure and mistakes are inevitable. Knowing this, we put mechanisms in place to spot failures and mistakes early, rather than later. Structured failure is a badge of honor!

Thomas Edison performed 9,000 experiments before coming up with a successful version of the light bulb. Students of entrepreneurship talk about the J-curve of returns: the failures come early and often and the successes take time. America has proved to be more entrepreneurial than Europe in large part because it has embraced a culture of “failing forward” as a common tech-industry phrase puts it: in Germany bankruptcy can end your business career whereas in Silicon Valley it is almost a badge of honour

Silicon Valley even comes up with euphemisms, like “pivot”, which weaves failure into the fabric of success.

Or perhaps it’s because some of the best ideas in tech today have come from those that weren’t so good. (Remember, Apple's first tablet device was called the Newton.)
There’s a word used to describe this get-over-it mentality that I heard over and over on my trip through Silicon Valley and San Francisco this week: “Pivot“

Experimentation, and measuring results, will highlight failure. This can be a hard thing to take, and especially hard to take when our beloved, pet theories turn out to be more myth than reality. In this respect, testing can seem harsh and unkind. But failure should be seen for what it is - one step in a process leading towards success. It’s about trying stuff out in the knowledge some of it isn’t going to work, and some of it will, but we can’t be expected to know which until we try it.

In The Lean Startup, Eric Ries talks about the benefits of using lean methodologies to take a product from not-so-good to great, using systematic testing:

If your first product sucks, at least not too many people will know about it. But that is the best time to make mistakes, as long as you learn from them to make the product better. “It is inevitable that the first product is going to be bad in some ways,” he says. The Lean Startup methodology is a way to systematically test a company’s product ideas.
Fail early and fail often. “Our goal is to learn as quickly as possible,” he says

Given testing can be incremental, we don’t have to fail big. Swapping one graphic position for another could barely be considered a failure, and that’s what a testing process is about. It’s incremental, and iterative, and one failure or success doesn’t matter much, so long as it’s all heading in the direction of achieving a business goal.

It’s about turning the dogs into winners, and making the winners even bigger winners.

Feel Vs Experimentation

Web publishing decisions are often based on intuition, historical precedence - “we’ve always done it this way” - or by copying the competition. Graphic designers know about colour psychology, typography and layout. There is plenty of room for conflict.

Douglas Bowman, a graphic designer at Google, left because he felt the company relied too much on data-driven decisions, and not enough on the opinions of designers:

Yes, it’s true that a team at Google couldn’t decide between two blues, so they’re testing 41 shades between each blue to see which one performs better. I had a recent debate over whether a border should be 3, 4 or 5 pixels wide, and was asked to prove my case. I can’t operate in an environment like that. I’ve grown tired of debating such minuscule design decisions. There are more exciting design problems in this world to tackle.

That probably doesn’t come as a surprise to any Google watchers. Google is driven by engineers. In Google’s defense, they have such a massive user base that minor changes can have significant impact, so their approach is understandable.

Integrate Design

Putting emotion, and habit, aside is not easy.

However, experimentation doesn’t need to exclude visual designers. Visual design is valuable. It helps visitors identify and remember brands. It can convey professionalism and status. It helps people make positive associations.

But being relevant is also design.

Adopting an experimentation methodology means designers can work on a number of different designs and get to see how the public really does react to their work. Design X converted better than design Y, layout Q works best for form design, buttons A, B and C work better than buttons J, K and L, and so on. It’s a further opportunity to validate creative ideas.

Cultural Shift

Part of getting experimentation right has to do with an organization's culture. Obviously, it’s much easier if everyone is working towards a common goal, i.e. “all work, and all decisions made, should serve a business goal, as opposed to serving personal ego”.

All aspects of web publishing can be tested, although asking the right questions about what to test is important. Some aspects may not make a measurable difference in terms of conversion. A logo, for example. A visual designer could focus on that page element, whilst the conversion process might rely heavily on the layout of the form. Both the conversion expert and the design expert get to win, yet not stamp on each other's toes.

One of the great aspects of data-driven decision making is that common, long-held assumptions get challenged, often with surprising results. How long does it take to film a fight scene? The movie industry says 30 days.

Mark Wahlberg challenged that assumption and did it in three:

Experts go with what they know. And they’ll often insist something needs to take a long time. But when you don’t have tons of resources, you need to ask if there’s a simpler, judo way to get the impact you desire. Sometimes there’s a better way than the “best” way. I thought of this while watching “The Fighter” over the weekend. There’s a making of extra on the DVD where Mark Wahlberg, who starred in and produced the film, talks about how all the fight scenes were filmed with an actual HBO fight crew. He mentions that going this route allowed them to shoot these scenes in a fraction of the time it usually takes

How many aspects of your site are based on assumption? Could those assumptions be masking opportunities or failure?

Winning Experiments

Some experiments, if poorly designed, don’t lead to more business success. If an experiment isn’t focused on improving a business case, then it’s probably just wasted time. That time could have been better spent devising and running better experiments.

In Agile software design methodologies, the question is always asked “how does this change/feature provide value to the customer”. The underlying motive is “how does this change/feature provide value to the business”. This is a good way to prioritize test cases. Those that potentially provide the most value, such as landing page optimization on PPC campaigns, are likely to have a higher priority than, say, features available to forum users.

Further Reading

I hope this article has given you some food for thought and that you'll consider adding some experiment-based processes to your mix. Here are some of the sources used in this article, and further reading:

What Types of Sites Actually Remove Links?

Since the disavow tool came out, SEOs have been sending thousands of "remove my link" requests daily. Some of them come off as polite, some lie & claim that the person linking is at huge risk of their own rankings tanking, some lie with faux legal risks, some come with "extortionisty" threats that if they don't do it the sender will report the site to Google or try to get the web host to take down the site, and some come with payment/bribery offers.

If you want results from Google's jackassery game you either pay heavily with your time, pay with cash, or risk your reputation by threatening or lying broadly to others.

At the same time, Google has suggested that anyone who would want payment to remove links is operating below board. But if you receive these inbound emails (often from anonymous Gmail accounts) you not only have to account for the time it would take to find the links & edit your HTML, but you also have to determine if the person sending the link removal request represents the actual site, or if it is someone trying to screw over one of their competitors. Then, if you confirm that the request is legitimate, you either need to further expand your page's content to make up for the loss of that resource or find a suitable replacement for the link that was removed. All this takes time. And if that time is from an employee that means money.

There have been hints that if a website is disavowed some number of times that data can be used to further go out & manually penalize more websites, or create link classifications for spam.

... oh no ...

Social engineering is the most profitable form of engineering going on in the 'Plex.

The last rub is this: if you do value your own time at nothing in a misguided effort to help third parties (who may have spammed up your site for links & then often follow it up with lying to you to achieve their own selfish goals), how does that reflect on your priorities and the (lack of) quality in your website?

If you contacted the large branded websites that Google is biasing their algorithms toward promoting, do you think those websites would actually waste their time & resources removing links to third party websites? For free?

Color me skeptical.

As a thought experiment, look through your backlinks for a few spam links that you know are hosted by Google (eg: Google Groups, YouTube, Blogspot, etc.) and try to get Google's webmaster to help remove those links for you & let us know how well that works out for you.

Some of the larger monopolies & oligopolies don't offer particularly useful customer service to their paying customers. For example, track how long it takes you to get a person on the other end of the phone with a telecom giant, a cable company, or a mega bank. Better yet, look at how long it took AdWords to openly offer phone support & the non-support they offer AdSense publishers (remember the bit about Larry Page believing that "the whole idea of customer support was ridiculous"?)

For the non-customer Google may simply recommend that the best strategy is to "start over."

When Google aggregates Webmaster Tools link data from penalized websites they can easily make 2 lists:

  • sites frequently disavowed
  • sites with links frequently removed

If both lists are equally bad, then you are best off ignoring the removal requests & spending your time & resources improving your site.

If I had to guess, I would imagine that being on the list of "these are the spam links I was able to remove" is worse than being on the list of "these are the links I am unsure about & want to disavow just in case."

What say you?

Creating Effective Advertising

The Atlantic published an interesting chart comparing print advertising spend with internet advertising spend:

So, print advertising is tanking. Internet advertising, whilst growing, is not growing particularly fast, and certainly isn’t catching up to fill the titanic-sized gap left by print.

As a result, a number of publishers who rely on advertising for the lion's share of their revenue are either struggling, going belly up, or changing their models.

The Need For More Effective Advertising

We recently looked at paywalls. More and more publishers are going the paywall route, the latest major publisher being The Washington Post.

Given the ongoing devaluation of content by aggregators and their advertising networks, few can blame them. However, paywalls aren’t the only solution. Part of the problem with internet advertising is that as soon as people get used to seeing it they tend to block it out, so it becomes less effective.

We looked at the problems with display advertising. Federated Media abandoned the format and will adopt a more “social” media strategy.

We also looked at the rise of Native Advertising, which is advertising that tightly integrates with content to the point where it’s difficult to tell the two apart. This opens up a new angle for SEOs looking to place links.

The reason the advertising gap isn’t closing is due to a number of factors. It’s partly historical, but it’s also to do with effectiveness, especially when it comes to display advertising. If advertisers aren’t seeing a return, then they won’t advertise.

Inventory is expanding a lot faster than the ability or desire of advertisements to fill it, which is not a good situation for publishers. So, internet publishers are experimenting with ideas on how to be more effective. If native advertising and social are deemed more effective, then that is the way publishers will go.

People just don't like being advertised at.

The ClueTrain Manifesto

The Cluetrain Manifesto predicted much of what we see happening today. Written in 2000 by Rick Levine, Christopher Locke, Doc Searls, and David Weinberger, the Cluetrain Manifesto riffed on the idea that markets are conversations, and consumers aren't just passive observers:

A powerful global conversation has begun. Through the Internet, people are discovering and inventing new ways to share relevant knowledge with blinding speed. As a direct result, markets are getting smarter—and getting smarter faster than most companies

That seems obvious now, but it was a pretty radical idea back then. The book was written before blogs became popular. It was way before anyone had heard of a social network, or before anyone had done any tweeting.

Consumers were no longer passive, they were just as likely to engage and create, and they would certainly talk back, and ultimately shape the message if they didn't like it. The traditional top-down advertising industry, and publishing industry, has been turned on its head. The consumers are publishers, and they’re not sitting around being broadcast at.

The advertising industry has been struggling to find answers, not entirely successfully, ever since.

Move Away From Display And Towards Engagement

In order for marketing to be effective on the web, it needs to engage an audience that ignores the broadcast message. This is the reason advertising is starting to look more like content. It's trying to engage people using the forms they already use in their personal communication.

For example, this example mimics a blog post encouraging people to share. It pretty much is a blog post, but it’s also an advertisement. It meets the customer on their terms, in their space and on their level. For better or worse, the lines are growing increasingly blurred.

Facebook's Managing Editor, Dan Fletcher, has just stood down, reasoning:

The company "doesn't need reporters," Fletcher said, because it has a billion members who can provide content. "You guys are the reporters," Fletcher told the audience. "There is no more engaging content Facebook could produce than you talking to your family and friends."

People aren't reporters in the journalistic sense, but his statement suggests where the revenue for advertising lies, which is in between people’s conversations. As a side note, you may notice that article is “brought to you by our sponsor”. Most of the links go through bit.ly, however they could just as easily be straight links.

The implication is that a lot of people aren't even listening to reporters anymore; they want to know about the world as filtered through the eyes of their friends and families. The latter has happened since time began, but only recently has advertising leaped directly into that conversation. Whether that is a good thing or not, or welcomed, is another matter, but it is happening.

Two Types Of Advertisements

Advertising takes two main forms. Institutional, or “brand” advertising, and direct response advertising. SEOs are mainly concerned with direct response advertising.

Direct-Response Marketing is a type of marketing designed to generate an immediate response from consumers, where each consumer response (and purchase) can be measured, and attributed to individual advertisements. This form of marketing is differentiated from other marketing approaches, primarily because there are no intermediaries such as retailers between the buyer and seller, and therefore the buyer must contact the seller directly to purchase products or services.

However, brand advertising is the form around which much of the advertising industry is based:

Brand ads, also known as "space ads," strive to build (or refresh) the prospect's awareness and favorable view of the company or its product or service. For example, most billboards are brand ads.

Online, direct response works well, but only if the product or service suits that approach. Generally speaking, a lot of new-to-market products and services, and luxury goods, don’t suit direct advertising particularly well, unless they’re being marketed on complementary attributes, such as price or convenience.

The companies that produce goods and services that don’t suit direct marketing aren't spending as much online.

But curious changes are afoot.

What's Happening At Facebook?

Those who advertise on Facebook will have noticed the click-through rate. Generally, it's pretty low, suggesting direct response isn't working well in that environment.

Click-through rates on Facebook ads only averaged 0.05% in 2010, down from 0.06% in 2009 and well short of what’s considered to be the industry average of 0.10%. That’s according to a Webtrends report that examined 11,000 Facebook ads, first reported upon by ClickZ.

It’s not really surprising, given that Facebook’s user base are Cluetrain passengers, even if most have never heard of it:

Facebook, a hugely popular free service that’s supported solely through advertising, yet is packed with users who are actively hostile to the idea of being marketed to on their cherished social network... this is what I hear from readers every time I write about the online ad economy, especially ads on Facebook: “I don’t know how Facebook will ever make any money—I never click on Web ads!”

But a new study indicates click-thru rates on Facebook might not matter much. The display value of the advertising has been linked back to product purchases, and the results are an eye-opener:

Whether you know it or not—even if you consider yourself skeptical of marketing—the ads you see on Facebook are working. Sponsored messages in your feed are changing your behavior—they’re getting you and your friends to buy certain products instead of others, and that’s happening despite the fact that you’re not clicking, and even if you think you’re ignoring the ads... This isn’t conjecture. It’s science. It’s based on a remarkable set of in-depth studies that Facebook has conducted to show whether and how its users respond to ads on the site. The studies demonstrate that Facebook ads influence purchases and that clicks don’t matter.

Granted, such a study is self-serving, but if it's true, and the finding translates to many advertisers, then that's interesting. Display, engagement, institutional, and direct marketing all seem to be melding together into "content". SEOs who want to get their links in the middle of content will be in there, too.

You may notice the Cluetrain-style language in the following Forbes post:

Some innovative companies, like Vine and smartsy, are catching on to this wave by creating apps and software that allows a dialogue between a brand and its audience when and where the consumer wants. Such technology opens a realm of nearly endless possibilities of content creation while increasing conversion rates dramatically. Audience participation isn’t just allowed; it’s encouraged. Hell, it’s necessary. By not only providing consumers with information in the moment of their interest, but also engaging them in conversation and empowering them to create their own content, we can drastically increase the relevancy of messaging and its authenticity.

Technology Has Finally Caught Up With The Cluetrain

Before the internet, it wasn’t really possible to engage consumers in conversations, except in very limited ways. Technology wasn’t up to the task.

But now it is.

The conversation was heralded in the Cluetrain Manifesto over a decade ago. People don’t want to just be passive consumers of marketing messages – they want engagement. The new advertising trends are all about increasing that level of engagement, and advertisers are doing it, in part, by blurring the lines between advertising and content.