A Declining Internet

For as broad and difficult a problem as running a search engine is, and for how many competing interests are involved, Google ran a pretty clean show back when Matt Cutts was there. Some of what they did before the algorithms could catch up was of course fearmongering (e.g. if you sell links you might be promoting fake brain cancer solutions), but Google generally did a pretty good job balancing organic and paid search.

Early on, search ads were clearly labeled, and then less so. Ad density was light, and then less so.

On the stock chart it appears as a somewhat regular series of compounded growth, but it is really a series of decisions: what to measure, what to optimize, what to subsidize, and what to sacrifice.

Savvy publishers could ride whatever signals were over-counted (keyword repetition, links early on, focused link anchor text, keyword domains, etc.) and catch new tech waves (like blogging or select social media channels) to keep growing as the web evolved. In some cases what was once a signal of quality would later become an anomaly ... the thing that boosted your rank for years eventually started to suppress your rank as new signals were created and signals composed of ratios of other signals got folded into ranking and re-ranking.

Over time, as organic growth became harder, the money guys started to override the talent, like in 2019 when a Google revenue "code yellow" had the ads team push the organic search and Chrome teams to intentionally degrade user experience to drive increased search query volume:

“I think it is good for us to aspire to query growth and to aspire to more users. But I think we are getting too involved with ads for the good of the product and company.” - Googler Ben Gomes

A healthy and sustainable ecosystem relies upon the players at the center operating a clean show.

If they decide not to, and eat the entire pie, things fall apart.

One set of short-term optimizations is another set of long-term failures.

The specificity of an eHow article gives it a good IR score, AdSense pays for a thousand similar articles to be created, and then the "optimized" ecosystem takes on a shallow sameness, which requires creating new ranking signals.

Q1 of 2025 was the first quarter in the history of the company in which the Google partner network represented less than 10% of Google ad revenues.

Google's fortunes have never been more misaligned with web publishers than they are today. This statement becomes more true each day that passes.

That ecosystem of partners is hundreds of thousands of publishers representing millions of employees. Each with their own costs and personal optimization decisions.

Publishers create feature works which are expensive, and then cross-subsidize that expensive work with cheaper & more profitable pieces. They receive search traffic to some types of pages which seemingly outperform today and think that is a strategy which will carry them into the future, though hitting the numbers today can mean missing them next year, as the ranking signal mix squeezes the profits out of those "optimizations," and what drove higher traffic today becomes part of a negative sitewide classifier that lowers rankings across the board in the future.

Last August Googler Ryan Moulton published a graph of newspaper employment from 2010 until now, showing about a 70% decline. That figure also doesn't factor in that many mastheads have been rolled up by private equity players which lever them up with debt and use all the remaining blood to make interest payments - sometimes to themselves - while sticking other taxpayers with the losses from underfunded pension plans.

The quality of the internet that we've enjoyed for the last 20 years was an overhang from when print journalism still made money. The market for professionally written text is now just really small, if it exists at all.

Ryan was asked "what do you believe is the real cause for the decline in search quality, then? Or do you think there hasn't been a decline?"

His now deleted response stated "It's complicated. I think it's both higher expectations and a declining internet. People expect a lot more from their search results than they used to, while the market for actually writing content has basically disappeared."

The above is the already baked cake we are starting from.

The cake where blogs were replaced with social feeds, newspapers got rolled up by private equity players, and larger broad "authority" branded sites partnered with money guys to paste on affiliate sections while indy affiliate sites were buried ... the algorithmic artifacts of Google first promoting the funding of eHow, then responding to the success of entities like Demand Media with Vince, Panda, Penguin, and the Helpful Content Update.

The next layer of the icky blurry line is AI.

“We have 3 options: (1) Search doesn’t erode, (2) we lose Search traffic to Gemini, (3) we lose Search traffic to ChatGPT. (1) is preferred but the worst case is (3) so we should support (2)” - Google's Nick Fox

So long as Google survives, everything else is non-essential. ;)

AI Overview distribution is up 116% over the past couple of months.

Google features Reddit *a lot* in their search results. Other smaller forums, not so much. A company consisting of many forums recently saw a negative impact from algorithm updates earlier this year.

Going back to that whole bit about how not fully disclosing economic incentives risks promoting fake brain cancer cures ... well, how are AI search results constructed? How well do they cite their sources? And are the sources they cite also using AI to generate content?

"its gotten much worse in that "AI" is now, on many "search engines", replacing the first listings which obfuscates entirely where its alleged "answer" came from, and given that AI often "hallucinates", basically making things up to a degree that the output is either flawed or false, without attribution as to how it arrived at that statement, you've essentially destroyed what was "search." ... unlike paid search which at least in theory can be differentiated (assuming the search company is honest about what they're promoting for money) that is not possible when an alleged "AI" presents the claimed answers because both the direct references and who paid for promotion, if anyone is almost-always missing. This is, from my point of view anyway, extremely bad because if, for example, I want to learn about "total return swaps" who the source of the information might be is rather important -- there are people who are absolutely experts (e.g. Janet Tavakoli) and then there are those who are not. What did the "AI" response use and how accurate is its summary? I have no way to know yet the claimed "answer" is presented to me." - Karl Denninger

The eating of the ecosystem is so thorough Google now has money to invest in Saudi Arabian AI funds.

Periodically ViperChill highlights big media conglomerates which dominate the Google organic search results.

One of the strongest horizontal publishing plays online has been IAC. They've grown brands like Expedia, Match.com, Ticketmaster, Lending Tree, Vimeo, and HSN. They always show up in the big publishers dominating Google charts. In 2012 they bought About.com from the New York Times and broke About.com into vertical sites like The Spruce, Very Well, The Balance, TripSavvy, and Lifewire. They have some old sites like Investopedia from their 2013 ValueClick deal. And then they bought out the magazine publisher Meredith, which publishes titles like People, Better Homes and Gardens, Parents, and Travel + Leisure. What does their performance look like? Not particularly good!

DDM reported just 1% year-over-year growth in digital advertising revenue for the quarter. It posted $393.1 million in overall revenue, also up 1% YOY. DDM saw a 3% YOY decline in core user sessions, which caused a dip in programmatic ad revenue. Part of that downturn in user engagement was related to weakening referral traffic from search platforms. For example, DDM is starting to see Google Search’s AI Overviews eat into its traffic.

Google's early growth was organic through superior technology, then clever marketing via their toolbar, and later a set of forced bundlings on Android combined with payola for default search placements in third party web browsers. A few years ago the UK government did a study which claimed that even if Microsoft gave Apple a 100% revshare on Bing they still couldn't compete with Google's bid for default search placement in Apple Safari.

Microsoft offered over a 100% ad revshare to set Bing as the default search engine and went so far as discussing selling Bing to Apple in 2018 - but Apple stuck with Google's deal.

In search, if you are not on Google you don't exist.

As Google grew out various verticals they also created ranking signals which in some cases were parasitical, or in other cases purely anticompetitive. To this day Google is facing billions of dollars in new suits across Europe for their shopping search strategy.

The Obama administration was an extension of Google, so the FTC gave Google a pass in spite of discovering some clearly anticompetitive behavior with real consumer harm. The Wall Street Journal published a series of articles after obtaining half the pages of the FTC's research into Google's conduct:

"Although Google originally sought to demote all comparison shopping websites, after Google raters provided negative feedback to such a widespread demotion, Google implemented the current iteration of its so-called 'diversity' algorithm."

What good is a rating panel if you get to keep re-asking the question in slightly different ways until you get the answer you want? And then place a lower quality clone front and center simply because it is associated with the home team?

"Google took unusual steps to "automatically boost the ranking of its own vertical properties above that of competitors,” the report said. “For example, where Google’s algorithms deemed a comparison shopping website relevant to a user’s query, Google automatically returned Google Product Search – above any rival comparison shopping websites. Similarly, when Google’s algorithms deemed local websites, such as Yelp or CitySearch, relevant to a user’s query, Google automatically returned Google Local at the top of the [search page].”"

The forced ranking of house properties is even worse when one recalls they were borrowing third party content without permission to populate those verticals.

Now with AI there is a blurry line of borrowing where many things are simply probabilistic. And, technically, Google could claim they sourced content from a third party which stole the original work or was a syndicator of it.

As Google kept eating the pie they repeatedly overrode user privacy to boost their ad income, while using privacy as an excuse to kneecap competing ad networks.

Remember the old FTC settlement over Google's violation of Safari browser cookies? That is the same Google which planned on deprecating third party cookies in Chrome and was even testing hiding user IP addresses so that other ad networks would be screwed. Better yet, online businesses might need to pay Google a subscription fee of some sort to efficiently filter through the fraud conducted in their web browser.

HTTPS everywhere was about blocking data leakage to other ad networks.

AMP was all about stopping header bidding. It gave preferential SERP placement in exchange for using a Google-only ad stack.

Even as Google was dumping tech costs on publishers, they were taking a huge rake of the ad revenue from the ad serving layer: "Google's own documents show that Google has siphoned off thirty-five cents of each advertising dollar that flows through Google's ad tech tools."

After acquiring DoubleClick to further monopolize the online ad market, Google merged user data for their own ad targeting, while hashing the data to block publishers from matching profiles:

"In 2016, as part of Project Narnia, Google changed that policy, combining all user data into a single user identification that proved invaluable to Google's efforts to build and maintain its monopoly across the ad tech industry. ... After the DoubleClick acquisition, Google "hashed" (i.e., masked) the user identifiers that publishers previously were able to share with other ad technology providers to improve internet user identification and tracking, impeding their ability to identify the best matches between advertisers and publisher inventory in the same way that Google Ads can. Of course, any puported concern about user privacy was purely pretextual; Google was more than happy to exploit its users' privacy when it furthered its own economic interests."

"In terms of cost, I really don't think the O&O impact has been understood too, especially on YouTube." - Googler David Mitby

"Did we tee up the real $ price tag of privacy?" - Googler Sissie Hsiao

Google continues to spend billions settling privacy-related cases. Settling those suits out of court is better than having full discovery be used to generate a daisy chain of additional lawsuits.

As the Google lawsuits pile up, the evidence of how they stacked the deck becomes clearer.

Google's Hyung-Jin Kim Shares Google Search Ranking Signals

On February 18, 2025 Google's Hyung-Jin Kim was interviewed about Google's ranking signals. Below are notes from that interview.

"Hand Crafting" of Signals

Almost every signal, aside from RankBrain and DeepRank (which are LLM-based), is hand-crafted and thus able to be analyzed and adjusted by engineers.

  • To develop and use these signals, engineers look at data and then take a sigmoid or other function and figure out the threshold to use. So, the "hand crafting" means that Google takes all those sigmoids and figures out the thresholds (a rough sketch follows this list).
    • In the extreme, hand-crafting means that Google looks at the relevant data and picks the mid-point manually.
    • For the majority of signals, Google takes the relevant data (e.g., webpage content and structure, user clicks, and label data from human raters) and then performs a regression.
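To make that concrete, here is a minimal sketch of what squashing a raw signal through a hand-picked sigmoid might look like. The signal (a long-click rate), the threshold, and the steepness are all invented for illustration; this is the shape of the technique described above, not Google's actual code:

```python
import math

def sigmoid(x: float, midpoint: float, steepness: float) -> float:
    """Map a raw signal value onto a 0..1 score around a hand-picked threshold."""
    return 1.0 / (1.0 + math.exp(-steepness * (x - midpoint)))

# Invented raw signal: the fraction of "long clicks" a page earns for a query.
# An engineer eyeballs the data, decides 0.35 separates good from bad pages,
# and hand-picks a steepness so the curve fits the observed distribution.
long_click_rate = 0.52
score = sigmoid(long_click_rate, midpoint=0.35, steepness=12.0)
print(f"signal score: {score:.3f}")  # ~0.885, well past the threshold
```

Because the midpoint and steepness are explicit numbers an engineer chose, anyone debugging the signal knows exactly which knob to turn - which is the whole argument for hand-crafting.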

Navboost. This was HJ's second signal project at Google. HJ has many patents related to Navboost and he spent many years developing it.

ABC signals. These are the three fundamental signals. All three were developed by engineers. They are raw, ...

  • Anchors (A) - a source page pointing to a target page (links). ...
  • Body (B) - terms in the document ...
  • Clicks (C) - historically, how long a user stayed at a particular linked page before bouncing back to the SERP. ...

ABC signals are the key components of topicality (or a base score), which is Google's determination of how the document is relevant to a query.

  • T* (Topicality) effectively combines (at least) these three ranking signals in a relatively hand-crafted way. ... Google uses to judge the relevance of the document based on the query term.
  • It took a significant effort to move from topicality (which is at its core a standard "old style" information retrieval ("IR") metric) ... signal. It was in a constant state of development from its origin until about 5 years ago. Now there is less change.
    • Ranking development (especially topicality) involves solving many complex mathematical problems.
    • For topicality, there might be a team of ... engineers working continuously on these hard problems within a given project. (A toy illustration of blending the ABC signals follows these notes.)
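As a toy illustration of the kind of hand-crafted blend described above - the real transforms, weights, and interactions are not public, so every number here is invented:

```python
def topicality(anchors: float, body: float, clicks: float) -> float:
    """Toy hand-crafted blend of the A, B, and C signals into a base score.

    Purely illustrative weights; not Google's actual formula.
    """
    return 0.3 * anchors + 0.4 * body + 0.3 * clicks

# A page with decent anchors, strong on-page relevance, and good click history:
print(topicality(anchors=0.6, body=0.8, clicks=0.7))  # ~0.71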

The reason why the vast majority of signals are hand-crafted is that if anything breaks Google knows what to fix. Google wants their signals to be fully transparent so they can troubleshoot and improve them.

  • Microsoft builds very complex systems using ML techniques to optimize functions. So it's hard to fix things - e.g., to know where to go and how to fix the function. And deep learning has made that even worse.
  • This is a big advantage of Google over Bing and others. Google faced many challenges and was able to respond.
    • Google can modify how a signal responds to edge cases, for example in response to various media/public attention challenges ...
    • Finding the correct edges for these adjustments is difficult, but would be easy to reverse engineer and copy from looking at the data.

Ranking Signals "Curves"

Google engineers plot ranking signal curves.

The curve fitting is happening at every single level of signals.

If Google is forced to give information on clicks, URLs, and the query, it would be easy for competitors to figure out the high-level buckets that compose the final IR score. High-level buckets are:

  • ABC — topicality
    • Topicality is connected to a given query
  • Navboost
  • Quality
    • Generally static across multiple queries and not connected to a specific query.
    • However, in some cases the Quality signal incorporates information from the query in addition to the static signal. For example, a site may have high quality but general information, so a query interpreted as seeking very narrow/technical information may be used to direct to a quality site that is more technical. (A toy sketch of how these buckets might compose follows.)
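A hypothetical sketch of why that distinction matters in practice: Quality can be keyed by site and reused across queries, while topicality and Navboost depend on the query. All names and numbers below are invented, not Google's data structures:

```python
# Quality is static, keyed per site; Navboost is keyed per (query, url).
quality = {"example.com": 0.82, "contentfarm.example": 0.31}
navboost = {("best dog beds", "example.com/beds"): 0.12}

def final_ir_score(query: str, url: str, site: str, topicality: float) -> float:
    """Toy composition of the high-level buckets into a final score."""
    return topicality + navboost.get((query, url), 0.0) + quality.get(site, 0.5)

print(final_ir_score("best dog beds", "example.com/beds", "example.com", 0.71))  # ~1.65
```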

Q* (page quality (i.e., the notion of trustworthiness)) is incredibly important. If competitors see the logs, then they have a notion of “authority” for a given site.

Quality score is hugely important even today. Page quality is something people complain about the most.

  • HJ started the page quality team ~ 17 years ago.
  • That was around the time when the issue with content farms appeared.
    • Content farms paid students 50 cents per article and they wrote 1000s of articles on each topic. Google had a huge problem with that. That's why Google started the team to figure out the authoritative source.
    • Nowadays, people still complain about the quality and AI makes it worse.

Q* is about ... This was and continues to be a lot of work but could be easily reverse engineered because Q is largely static and largely related to the site rather than the query.

Other Signals

  • eDeepRank. eDeepRank is an LLM system that uses BERT, transformers. Essentially, eDeepRank tries to take LLM-based signals and decompose them into components to make them more transparent. HJ doesn't have much knowledge on the details of eDeepRank.
  • PageRank. This is a single signal relating to distance from a known good source, and it is used as an input to the Quality score (a simplified sketch follows this list).
  • ... (popularity) signal that uses Chrome data.
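The "distance from a known good source" idea can be sketched as a breadth-first walk out from a seed set, with trust decaying per hop. This is a generic seed-distance illustration in the spirit of published trust-propagation schemes, not Google's actual implementation; the graph is invented:

```python
from collections import deque

def distance_from_seeds(graph: dict, seeds: set) -> dict:
    """Breadth-first link distance from a set of known-good seed sites."""
    dist = {s: 0 for s in seeds}
    queue = deque(seeds)
    while queue:
        node = queue.popleft()
        for neighbor in graph.get(node, []):
            if neighbor not in dist:
                dist[neighbor] = dist[node] + 1
                queue.append(neighbor)
    return dist

# Invented link graph; trust decays with each hop away from the seed.
graph = {"seed.example": ["a.example"], "a.example": ["b.example"]}
dist = distance_from_seeds(graph, {"seed.example"})
scores = {site: 1.0 / (1 + d) for site, d in dist.items()}
print(scores)  # seed 1.0, one hop 0.5, two hops ~0.33
```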

Search Index

  • HJ's definition is that the search index is composed of the actual content that is crawled - titles and bodies and nothing else, i.e., the inverted index (a minimal illustration follows these notes).
  • There are also other separate specialized inverted indexes for other things, such as feeds from Twitter, Macy's etc. They are stored separately from the index for the organic results. When HJ says index, he means only for the 10 blue links, but as noted below, some signals are stored for convenience within the search index.
  • Query-based signals are not stored, but computed at the time of query.
    • Q* - largely static but in certain instances affected by the query and has to be computed online (see above)
  • Non-query-based signals are often stored in separate tables off to the side of the index and looked up separately, but for convenience Google stores some signals in the search index.
    • This way of storing the signals allowed Google to ...
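For readers unfamiliar with the term, a minimal inverted index maps each term to the documents containing it, so query-time lookup becomes an intersection of posting lists. A toy sketch with an invented two-document corpus:

```python
from collections import defaultdict

# Toy corpus: document id -> text. A real index also stores positions, etc.
docs = {1: "best dog beds for large dogs", 2: "dog training tips"}

index = defaultdict(set)  # term -> ids of documents containing that term
for doc_id, text in docs.items():
    for term in text.split():
        index[term].add(doc_id)

def lookup(query: str) -> set:
    """Return ids of documents containing every query term."""
    terms = query.split()
    results = set(index.get(terms[0], set()))
    for term in terms[1:]:
        results &= index.get(term, set())
    return results

print(lookup("dog beds"))  # {1}
```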

User-Side Data

By User Side Data, Google's search engineers mean user interaction data, not the content/data that was created by users. E.g., links between pages that are created by people are not User Side data.

Search Features

  • There are different search features - 10 blue links as well as other verticals (knowledge panels, etc). They all have their own ranking.
  • Tangram (fka Tetris). HJ started the project to create Tangram to apply the basic principle of search to all of the features.
  • Tangram/Tetris is another algorithm that was difficult to figure out how to do well but would be easy to reverse engineer if Google were required to disclose its click/query data. By observing the log data, it is easy to reverse engineer and to determine when to show the features and when to not.
  • Knowledge Graph. A separate team (not HJ's) was involved in its development.
  • Knowledge Graph is used beyond being shown on the SERP panel.
    • Example — “porky pig” feature. If people query about the relation of a famous person, Knowledge Graph tells traditional search the name of the relation and the famous person, to improve search results - Barack Obama's wife's height query example.
  • Self-help suicide box example. Incredibly important to figure it out right, and tons of work went into it, figuring out the curves, threshold, etc. With the log data, this could be easily figured out and reverse engineered, without having to do any of the work that Google did.

Reverse Engineering of Signals

There was a leak of Google documents which named certain components of Google's ranking system, but the documents don't go into specifics of the curves and thresholds.

The documents alone do not give you enough details to figure it out, but the data likely does.

Google Antitrust Leaked Documents

User interaction signals

Create relevancy signals out of user reads, clicks, scrolls, and mouse hovers.

Not how search works

Search does not work by delivering results which match a query that ends at the user. This view of search is incomplete.

How search works

The flow of the engagement metrics from the end user / searcher back to the search engine helps the search engine refine the result set.

Fake document understanding

Google looks at the actions of searchers much more than they look at the raw documents. If a document elicits a positive reaction from searchers, that is taken as proof the document is good. If a document elicits negative reactions, they presume the document is bad.

Google learns from searchers

The result set is designed not just to serve the user, but to create an interaction set where Google can learn from the user & incorporate logged user data into influencing the rankings for future searches.

Dialog is the source of the magic

Each user interaction gives Google data to refine their ranking algorithms and make search smarter.

Happy users provide informed user interactions

Informed user interactions are part of a virtuous cycle which allows Google to better train their models & understand language patterns, then use that understanding to deliver a more relevant search result set.
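To make the mechanics concrete, here is a purely illustrative sketch of one such interaction signal: the share of a result's clicks that were "long clicks" rather than quick bounces back to the SERP. The 60-second cutoff and all the names are invented:

```python
from dataclasses import dataclass

@dataclass
class Click:
    url: str
    dwell_seconds: float

def click_signal(clicks, url, long_click_cutoff=60.0):
    """Share of this URL's clicks that were 'long clicks' (the user stayed)."""
    relevant = [c for c in clicks if c.url == url]
    if not relevant:
        return 0.0
    long_clicks = sum(1 for c in relevant if c.dwell_seconds >= long_click_cutoff)
    return long_clicks / len(relevant)

# Invented click log: two satisfied visits, one quick bounce back to the SERP.
log = [Click("a.example", 240.0), Click("a.example", 5.0), Click("a.example", 90.0)]
print(click_signal(log, "a.example"))  # 2 of 3 long clicks -> ~0.67
```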

Prior user behavior is used as a baseline.

Google is not pushing search personalization anywhere near as hard as they once did (at least not outside of localization), but in the above Google states prior user selections are one of Google's strongest ranking signals.

Once again, rather than understanding documents directly, they can consider the users who chose the documents. Users can be mapped based on actions outside of standard demographics, so that more similar users are given more weight in their interactions with the result set choices.

Google revenue growth is consistent

Core Google ad revenue grows much more consistently than any other large media business, growing at 20% to 22% year after year for 8 of 9 years, with the one outlier year seeing 30% growth.

Apple is paid by Google to not compete in search.

Apple got around a 50% revshare from the mid-2000s on through to the iPhone deal renewal.

Manipulating ad auctions

Google artificially inflates the ad rank of the runner-up in some ad auctions to bleed the auction winner dry. Ad pricing is not based on any sort of honest auction mechanism; rather, Google looks across at your bids and your reactions to price gouging to keep increasing the ad prices they charge you.
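A toy second-price auction shows the lever being described: inflate the runner-up's bid and the winner's price rises with no change in real competition. All numbers here are invented:

```python
def second_price(bids: dict, runner_up_multiplier: float = 1.0):
    """Second-price auction: winner pays the runner-up's (possibly inflated) bid."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, winning_bid = ranked[0]
    price = min(winning_bid, ranked[1][1] * runner_up_multiplier)
    return winner, price

bids = {"brand": 5.00, "competitor": 2.00}
print(second_price(bids))                            # ('brand', 2.0) honest auction
print(second_price(bids, runner_up_multiplier=2.0))  # ('brand', 4.0) inflated floor
```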

Organics below the fold

Google not only pushes down the organic result set with 3 or 4 ads above the regular results, but then they can include other selections scraped from across the web in an information-lite format to try to focus attention back upward. Then after users get past a singular organic search result it is time to redirect user attention once again using a "People also ask" box.

Google can further segment user demand via ecommerce website styled filters, though some of the filters offered may be for other websites, in addition to things like size, weight, color, price, and location.

The Magical Black Box

Google's mission statement is "organize the world's information and make it universally accessible and useful."

That mission is so profound & so important the associated court documents in their antitrust cases must be withheld from public consumption.

Before document sharing was disallowed, some were shared publicly.

Internal emails stated:

  • Hal Varian was off in his public interviews where he suggested it was the algorithms rather than the amount of data which was the prime driver of relevancy.
  • Apple would not get any revshare if there was a user choice screen & must set Google as the default search engine to qualify for any revshare.
  • Google has a policy of being vague about using clickstream data to influence ranking, though they have heavily relied upon clickstream data to influence ranking. Advances in machine learning have made it easier to score content to where the clickstream data had become less important.
  • When Apple Maps launched & Google Maps lost the default position on iOS Google Maps lost 60% of their iOS distribution, and that was with how poorly the Apple Maps roll out went.
  • Google sometimes subverted their typical auction dynamics and would flip the order of the top 2 ads to boost ad revenues.
  • Google had a policy of "shaking the cushions" to hit the quarterly numbers by changing advertiser ad prices without informing advertisers that they'd be competing in a rigged auction with artificially manipulated shill bids from the auctioneer competing against them.

When Google talked about hitting the quarterly numbers by shaking the cushions, the 5% number which was shared skewed a bit low:

For a brand campaign focused on a niche product, she said the average CPC at $11.74 surged to $25.85 over the last six months, amounting to a 108% increase. However, there wasn’t an incremental return on sales.

“The level to which [price manipulations] happens is what we don’t know,” said Yang. “It’s shady business practices because there’s no regulation. They regulate themselves.”

Early in the history of search ads Google blocked trademark keyword bidding. They later allowed it. When keyword bidding on trademarks was allowed it led to a conundrum for some advertisers. If you do not defend your trademark you could lose it, but if you agree with competitors not to bid on each other's trademarks the FTC could come after you - like they did with 1-800 Contacts. This setup forces many brands to participate in auctions where they are arbitraging their own pre-existing brand equity. The ad auctioneer runs shady auctions where it looks across at your account behavior and bids, then adjusts bid floors to suck more money out of you. This amounts to something akin to the bid jamming that was done in early Overture, except it is the house itself doing it to you! The last auction I remember like that was SnapNames, where a criminal named Nelson Brady on the executive team used the handle halvarez to leverage participant max bids and put in bids just under their bids. The goal of his fraud? To hit the numbers & get an earn out bonus - similar to how Google insiders were discussing "shaking the cushions" to hit the number.

Halvarez created a program which looked across aggregate bid data, joined auctions which had only one other participant, and then used the one-way view of competing bids to put in a shill bid to drive up costs - which sure sounds conceptually similar to Google's "shaking the cushions."

"Just looking at this very tactically, and sorry to go into this level of detail, but based on where we are I'm afraid it's warranted. We are short __% queries and are ahead on ads launches so are short __% revenue vs. plan. If we don't hit plan, our sales team doesn't get its quota for the second quarter in a row and we miss the street's expectations again, which is not what Ruth signaled to the street so we get punished pretty badly in the market. We are shaking the cushions on launches and have some candidates in May that will help, but if these break in mid-late May we only get half a quarter of impact or less, which means we need __% excess to where we are today and can't do it alone. The Search team is working together with us to accelerate a launch out of a new mobile layout by the end of May that will be very revenue positive (exact numbers still moving), but that still won't be enough. Our best shot at making the quarter is if we get an injection of at least __%, ideally __%, queries ASAP from Chrome. Some folks on our side are running a more detailed, Finance-based, what-if analysis on this and should be done with that in a couple of days, but I expect that these will be the rough numbers.

The question we are all faced with is how badly do we want to hit our numbers this quarter? We need to make this choice ASAP. I care more about revenue than the average person but think we can all agree that for all of our teams trying to live in high cost areas another $___,___ in stock price loss will not be great for morale, not to mention the huge impact on our sales team." - Google VP Jerry Dischler

Google is also pushing advertisers away from keyword-based bidding and toward a portfolio approach of automated bidding called Performance Max, where you give Google your credit card and budget then they bid as they wish. By blending everything into a single soup you may not know where the waste is & it may not be particularly easy to opt out of poorly performing areas. Remember enhanced AdWords campaigns?

Google continues to blur dataflow outside of their ad auctions to try to bring more of the ad spend into their auctions.

The amount Google is paying Apple to be the default search provider is staggering.

Tens of billions of dollars is a huge payday. No way Google would hyper-optimize other aspects of their business (locating data centers near dams, prohibiting use of credit card payments for large advertisers, cutting away ad agency management fees, buying Android, launching Chrome, using broken HTML on YouTube to make it render slowly on Firefox & Microsoft Edge to push Chrome distribution, all the dirty stuff Google did to violate user privacy with overriding Safari cookies, buying DoubleClick, stealing the ad spend from banned publishers rather than rebating it to advertisers, creating a proprietary version of HTML & force ranking it above other results to stop header bidding, & then routing around their internal firewall on display ads to give their house ads the advantage in their ad auctions, etc etc etc) and then just throw over a billion dollars a month needlessly at a syndication partner.

For perspective on the scale of those payments consider that it wasn't that long ago Yahoo! was considered a big player in search and Apollo bought Yahoo! plus AOL from Verizon for about $5 billion & then was quickly able to sell branding & technology rights in Japan to Softbank for $1.6 billion & other miscellaneous assets for nearly a half-billion, reducing the net cost to only $3 billion.

If Google loses this lawsuit and the payments to Apple are declared illegal, that would be a huge revenue (and profit) hit for Apple. Apple would be forced to roll out their own search engine. This would cut away at least 30% of the search market from Google & it would give publishers another distribution channel. Most likely Apple Search would launch with a lower ad density than Google has for short term PR purposes & publishers would have a year or two of enhanced distribution before Apple's ad load matched Google's ad load.

It is hard to overstate how strong Apple's brand is. For many people the cell phone is like a family member. I recently went to upgrade my phone and Apple's local store closed early in the evening at 8pm. The next day when they opened at 10 there was a line to wait in to enter the store, like someone was trying to get concert tickets. Each privacy snafu from Google helps strengthen Apple's relative brand position.

Google has also diluted the quality of their own brand by rewriting search queries excessively to redirect traffic flows toward more commercial interests. Wired covered how Project Mercury works:

This onscreen Google slide had to do with a “semantic matching” overhaul to its SERP algorithm. When you enter a query, you might expect a search engine to incorporate synonyms into the algorithm as well as text phrase pairings in natural language processing. But this overhaul went further, actually altering queries to generate more commercial results. ... Most scams follow an elementary bait-and-switch technique, where the scoundrel lures you in with attractive bait and then, at the right time, switches to a different option. But Google “innovated” by reversing the scam, first switching your query, then letting you believe you were getting the best search engine results. This is a magic trick that Google could only pull off after monopolizing the search engine market, giving consumers the false impression that it is incomparably great, only because you’ve grown so accustomed to it.

The mobile search results on Google require at least a screen or two of scrolls to get to the organic results if there is a hint of commercial intent behind the search query. Once they have monetized the real estate they are reliant on broader economic growth & using ad buy bundling to drive cross-subsidies of other non-search ad inventory, which may contain more than a bit of fraud. Performance Max may max out your spend without actually performing for anybody other than Google.

Google not only shill bid on lower competition terms to squeeze defensive brand bids and boost auction floor pricing, but they also implemented shill bids in competitive ad auctions:

Michael Whinston, a professor of economics at the Massachusetts Institute of Technology, said Friday that Google modified the way it sold text ads via “Project Momiji” – named for the wooden Japanese dolls that have a hidden space for friends to exchange secret messages. The shift sought “to raise the prices against the highest bidder,” Whinston told Judge Amit Mehta in federal court in Washington.

While Google's search marketshare is rock solid, the number of search engines available has increased significantly over the past few years. Not only is there Bing and DuckDuckGo but the tail is longer than it was a few years back. In addition to regional players like Baidu and Yandex there's now Brave Search, Mojeek, Qwant, Yep, and You. GigaBlast and Neeva went away, but anything that prohibits selling defaults to a company with over 90% marketshare will likely lead to dozens more players joining the search game. Search traffic will remain lucrative for whoever can capture it, as no matter how much Google tries to obfuscate marketing data the search query reflects the intent of the end user.

“Search advertising is one of the world’s greatest business models ever created…there are certainly illicit businesses (cigarettes or drugs) that could rival these economics, but we are fortunate to have an amazing business.” - Google VP of Finance Mike Roszak

AI-Driven Search

I just dusted off the login here to realize I hadn't posted in about a half-year & figured it was time to write another one. ;)

Yandex Source Code Leak

Some of Yandex's old source code was leaked, and few cared about the ranking factors shared in the leak.

Mike King made a series of Tweets on the leak.

The signals used for ranking included things like link age, and user click data including visit frequency and dwell time.

Google came from behind and was eating Yandex's lunch in search in Russia, particularly by leveraging search default bundling in Android. When the Russian antitrust regulator nixed that bundling, Yandex regained strength. Of course the war in Ukraine has made everything crazy in terms of geopolitics. That's one reason almost nobody cared about the Yandex data leak. The other reason is few could probably make sense of what all the signals are or how to influence them.

The complexity of search - when it is a big black box which has big swings 3 or 4 times a year - shifts any successful long-term online publisher away from being overly focused on information retrieval and ranking algorithms toward the other aspects of publishing which will hopefully paper over SEO issues. Signs of a successful & sustainable website include:

  • It remains operational even if a major traffic source goes away.
  • People actively seek it out.
  • If a major traffic source cuts its distribution people notice & expend more effort to seek it out.

As black box as search is today, it is only going to get worse in the coming years.

ChatGPT Hype

The hype surrounding ChatGPT is hard to miss. Fastest growing user base. Bing integration. A sitting judge using the software to help write documents for the court. And, of course, the get-rich-quick crew is out in full force.

Some enterprising people with specific professional licenses may be able to mint money for a window of time, but for most people the way to make money with AI will be doing something that AI can not replicate.

Bing Integration of OpenAI Technology

The New Bing integrated OpenAI's ChatGPT technology to allow chat-based search sessions which ingest web content and use it to create something new, giving users direct answers and allowing re-probing for refinements. Microsoft stated the AI features also improved their core rankings outside of the chat model: "Applying AI to core search algorithm. We’ve also applied the AI model to our core Bing search ranking engine, which led to the largest jump in relevance in two decades. With this AI model, even basic search queries are more accurate and more relevant."

Fawning Coverage

Some of the tech analysis around the AI algorithms is more than a bit absurd. Consider this passage:

the information users input into the system serves as a way to improve the product. Each query serves as a form of feedback. For instance, each ChatGPT answer includes thumbs up and thumbs down buttons. A popup window prompts users to write down the “ideal answer,” helping the software learn from its mistakes.

A long time ago the Google Toolbar had a smiley face and a frown face on it. The signal there was basically pure spam. At one point Matt Cutts mentioned Google would look at things that got a lot of upvotes to see how else they were spamming. Direct Hit was also spammed into oblivion many years before that.

In some ways the current AI search stuff is trying to re-create Ask Jeeves, but Ask had already lost to Google long ago. The other thing AI search is similar to is voice assistant search. Maybe the voice assistant search stuff which has largely failed will get a new wave of innovation, but the current AI search stuff is simply a text interface of the voice search stuff with a rewrite of the content.

High Confidence, But Often Wrong

There are two other big issues with correcting an oracle.

  • You'll lose your trust in an oracle when you repeatedly have to correct it.
  • If you know the oracle is awful in your narrow niche of expertise you probably won't trust it on important issues elsewhere.

Beyond those issues there is the concept of blame or fault. When a search engine returns a menu of options if you pick something that doesn't work you'll probably blame yourself. Whereas if there is only a single answer you'll lay blame on the oracle. In the answer set you'll get a mix of great answers, spam, advocacy, confirmation bias, politically correct censorship, & a backward looking consensus...but you'll get only a single answer at a time & have to know enough background & have enough topical expertise to try to categorize it & understand the parts that were left out.

We are making it easier and cheaper to use software to re-represent existing works, at the same time we are attaching onerous legal liabilities to building something new.

Creating A Fuzzy JPEG

This New Yorker article did a good job explaining the concept of lossy compression:

"The fact that Xerox photocopiers use a lossy compression format instead of a lossless one isn’t, in itself, a problem. The problem is that the photocopiers were degrading the image in a subtle way, in which the compression artifacts weren’t immediately recognizable. If the photocopier simply produced blurry printouts, everyone would know that they weren’t accurate reproductions of the originals. What led to problems was the fact that the photocopier was producing numbers that were readable but incorrect; it made the copies seem accurate when they weren’t. ... If you ask GPT-3 (the large-language model that ChatGPT was built from) to add or subtract a pair of numbers, it almost always responds with the correct answer when the numbers have only two digits. But its accuracy worsens significantly with larger numbers, falling to ten per cent when the numbers have five digits. Most of the correct answers that GPT-3 gives are not found on the Web—there aren’t many Web pages that contain the text “245 + 821,” for example—so it’s not engaged in simple memorization. But, despite ingesting a vast amount of information, it hasn’t been able to derive the principles of arithmetic, either. A close examination of GPT-3’s incorrect answers suggests that it doesn’t carry the “1” when performing arithmetic."

Exciting New Content Farms

Ted Chiang then goes on to explain the punchline ... we are hyping up eHow 2.0:

Even if it is possible to restrict large language models from engaging in fabrication, should we use them to generate Web content? This would make sense only if our goal is to repackage information that’s already available on the Web. Some companies exist to do just that—we usually call them content mills. Perhaps the blurriness of large language models will be useful to them, as a way of avoiding copyright infringement. Generally speaking, though, I’d say that anything that’s good for content mills is not good for people searching for information. The rise of this type of repackaging is what makes it harder for us to find what we’re looking for online right now; the more that text generated by large language models gets published on the Web, the more the Web becomes a blurrier version of itself.

The same New Yorker article mentioned the concept that if the AI was great it should trust its own output as input for making new versions of its own algorithms, but how could it score itself against itself when its own flaws are embedded recursively in layers throughout algorithmic iteration without any source labeling?

Testing on your training data is considered a cardinal error in machine learning. Using prior output as an input creates similar problems.
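For anyone who hasn't internalized that rule, the standard guard is a holdout split: score the model only on data it never saw during training. A minimal sketch with a toy dataset:

```python
import random

data = [(x, 2 * x + 1) for x in range(100)]  # toy labeled dataset
random.seed(0)
random.shuffle(data)

split = int(0.8 * len(data))
train, test = data[:split], data[split:]  # fit on train, score ONLY on test
print(len(train), len(test))              # 80 20
```

Feeding a model's own prior output back in as training data defeats this guard in a subtler way: the "held out" data is no longer independent of the model.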

Each time AI eats a layer of the value chain it leaves holes in the ecosystem, where the primary solution is to pay for what was once free. Even the "buy nothing" movements have a commercial goal worth fighting over.

As AI offers celebrity voices, impersonates friends, tracks people, automates marketing, and creates deepfake celebrity-like content, it will move more of social media away from ad revenue toward a subscription-based model. Twitter's default "for you" tab will only recommend content from paying subscribers. People will subscribe to and pay for a confirmation bias they know (even - or especially - if it is not approved within the state-preferred set of biases), provided there is a person & a personality associated with it. They'll also want any conversations with AI agents to remain private.

When the AI stuff was a ragtag startup with little to lose the label "open" was important to draw interest. As commercial prospects improved with the launch of GPT-4 they shifted away from the "open," explaining the need for secrecy for both safety and competitive reasons. Much of the wow factor in generative AI is in recycling something while dropping the source to make something appear new while being anything but. And then the first big money number is the justification for further investments in add ons & competitors.

Google's AI Strategy

Google fast followed Bing's news with a vaporware announcement of Bard. Some are reading Google letting someone else go first as a sign Google is behind the times and is getting caught out by an upstart.

Google bought DeepMind in 2014 for around $600 million. They've long believed in AI technology, and clearly lead the category, but they haven't been using it to re-represent third party content in the SERPs to the degree Microsoft is now doing in Bing.

My view is Google had to let someone else go first in order to defuse any associated antitrust heat. "Hey, we are just competing, and are trying to change with changing consumer expectations" is an easier sell when someone else goes first. One could argue the piss poor reception to the Bard announcement is actually good for Google in the long term, as it makes them look like they have stronger competition than they do, rather than being a series of overlapping monopoly market positions (in search, web browser, web analytics, mobile operating system, display ads, etc.)

Google may well have major cultural problems, but "They are all the natural consequences of having a money-printing machine called “Ads” that has kept growing relentlessly every year, hiding all other sins. (1) no mission, (2) no urgency, (3) delusions of exceptionalism, (4) mismanagement," though Google is not far behind in AI. Look at how fast they opened up Bard to end users.

AI = Money / Increased Market Cap

The capital markets are the scorecard for capitalism. It is hard to miss how much the market loved the Bing news for Microsoft & how bad the news was for Google.

Millions Suddenly Excited About Bing

In a couple days over a million people signed up to join a Bing wait list.

Your Margin is My Opportunity

Microsoft is pitching this as a margin compression play for Google that may also impact their TAC spend.

ChatGPT costs around a couple cents per conversation: "Sam, you mentioned in a tweet that ChatGPT is extremely expensive on the order of pennies per query, which is an astronomical cost in tech. SA: Per conversation, not per query."

The other side of potential margin compression comes from requiring additional computing power to deliver results:

Our sources indicate that Google runs ~320,000 search queries per second. Compare this to Google’s Search business segment, which saw revenue of $162.45 billion in 2022, and you get to an average revenue per query of 1.61 cents. From here, Google has to pay for a tremendous amount of overhead from compute and networking for searches, advertising, web crawling, model development, employees, etc. A noteworthy line item in Google’s cost structure is that they paid in the neighborhood of ~$20B to be the default search engine on Apple’s products.
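Working through the arithmetic in that estimate (using the quoted figures, which are themselves third-party estimates):

```python
queries_per_second = 320_000
seconds_per_year = 60 * 60 * 24 * 365                     # ~31.5M seconds
queries_per_year = queries_per_second * seconds_per_year  # ~1.01e13 queries
search_revenue = 162.45e9                                 # quoted 2022 Search revenue, USD

revenue_per_query = search_revenue / queries_per_year
print(f"{revenue_per_query * 100:.2f} cents per query")   # ~1.61 cents
```

Against that ~1.61 cents of revenue per query, a chat interface costing "pennies per conversation" is a meaningful chunk of the margin.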

Beyond offering a conversational interface, Bing is also integrating AI content directly in their search results on some search queries. It goes *BELOW* all the ads & *ABOVE* the organic results.

The above sort of visual separator eye candy has historically had a net effect of shifting click distributions away from organics toward the ads. It is why Google features "people also ask" and similar in their search results.

AI is the New Crypto

Microsoft is pitching that even when AI is wrong it can offer "usefully" wrong answers. And a lot of the "useful" wrong stuff can also be harmful: "there are a ton of very real ways in which this technology can be used for harm. Just a few: Generating spam, Automated romance scams, Trolling and hate speech, Fake news and disinformation, Automated radicalization (I worry about this one a lot)"

"I knew I had just seen the most important advance in technology since the graphical user interface. This inspired me to think about all the things that AI can achieve in the next five to 10 years. The development of AI is as fundamental as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone. It will change the way people work, learn, travel, get health care, and communicate with each other. Entire industries will reorient around it. Businesses will distinguish themselves by how well they use it." - Bill Gates

Since AI is the new crypto, everyone is integrating it, if only in press release format, while banks ban it. All of Microsoft's consumer-facing & business-facing products are getting integrations. Google is treating AI as the new Google+.

Remember all the hype around STEM? If only we could churn out more programmers! Learn to code!

Well, how does that work out if the following is true?

"The world now realizes that maybe human language is a perfectly good computer programming language, and that we've democratized computer programming for everyone, almost anyone who could explain in human language a particular task to be performed." - Nvidia CEO Jensen Huang

AI is now all over Windows. And for a cherry on top of the hype cycle:

A gradual transition gives people, policymakers, and institutions time to understand what’s happening, personally experience the benefits and downsides of these systems, adapt our economy, and to put regulation in place. It also allows for society and AI to co-evolve, and for people collectively to figure out what they want while the stakes are relatively low.

We believe that democratized access will also lead to more and better research, decentralized power, more benefits, and a broader set of people contributing new ideas. As our systems get closer to AGI, we are becoming increasingly cautious with the creation and deployment of our models.

We have a nonprofit that governs us and lets us operate for the good of humanity (and can override any for-profit interests), including letting us do things like cancel our equity obligations to shareholders if needed for safety and sponsor the world’s most comprehensive UBI experiment.

Algorithmic Publishing

The algorithms that allow dirt cheap quick rewrites won't be used just by search engines re-representing publisher content, but also by publishers to churn out bulk content on the cheap.

After Red Ventures acquired CNET they started publishing AI content. The series of tech articles covering that AI content lasted about a month and only ended recently. In the past that is the sort of coverage which would have led to a manual penalty, but with the current antitrust heat Google can't really afford to rock the boat & prove their market power that way. In fact, Google's editorial stance is now such that Red Ventures can do journalist layoffs in close proximity to that AI PR blunder.

Men's Journal also had AI content problems.

AI content poured into a trusted brand monetizes the existing brand equity until people (and algorithms) learn not to trust the brands that have been monetized that way.

A funny sidebar here is the original farmer update that aimed at eHow skipped hitting eHow because so many journalists were writing about how horrible eHow was. These collective efforts to find the best of the worst of eHow & constantly writing about it made eHow look like a legitimately sought after branded destination. Google only downranked eHow after collecting end user data on a toolbar where angry journalists facing less secure job prospects could vote to nuke eHow, thus creating the "signal" that eHow rankings deserve to be torched. Demand Media's Livestrong ranked well far longer than eHow did.

Enshittification

The process of pouring low cost backfill into a trusted masthead is the general evolution of online media ecosystems:

This strategy meant that it became progressively harder for shoppers to find things anywhere except Amazon, which meant that they only searched on Amazon, which meant that sellers had to sell on Amazon. That's when Amazon started to harvest the surplus from its business customers and send it to Amazon's shareholders. Today, Marketplace sellers are handing 45%+ of the sale price to Amazon in junk fees. The company's $31b "advertising" program is really a payola scheme that pits sellers against each other, forcing them to bid on the chance to be at the top of your search. ... once those publications were dependent on Facebook for their traffic, it dialed down their traffic. First, it choked off traffic to publications that used Facebook to run excerpts with links to their own sites, as a way of driving publications into supplying fulltext feeds inside Facebook's walled garden. This made publications truly dependent on Facebook – their readers no longer visited the publications' websites, they just tuned into them on Facebook. The publications were hostage to those readers, who were hostage to each other. Facebook stopped showing readers the articles publications ran, tuning The Algorithm to suppress posts from publications unless they paid to "boost" their articles to the readers who had explicitly subscribed to them and asked Facebook to put them in their feeds. ... "Monetize" is a terrible word that tacitly admits that there is no such thing as an "Attention Economy." You can't use attention as a medium of exchange. You can't use it as a store of value. You can't use it as a unit of account. Attention is like cryptocurrency: a worthless token that is only valuable to the extent that you can trick or coerce someone into parting with "fiat" currency in exchange for it. You have to "monetize" it – that is, you have to exchange the fake money for real money. ... Even with that foundational understanding of enshittification, Google has been unable to resist its siren song. Today's Google results are an increasingly useless morass of self-preferencing links to its own products, ads for products that aren't good enough to float to the top of the list on its own, and parasitic SEO junk piggybacking on the former.

Bing finally won a PR battle against Google & now Microsoft is shooting themselves in the foot by undermining the magic & imagination of the narrative: pushing stricter chat limits, increasing search API fees, testing ads in the AI search results, and threatening to cut off search syndication partners if their index is used to feed AI chatbots.

The enshittification concept feels more like a universal law than a theory.

When Yahoo, Twitter & Facebook underperform and the biggest winners like Google, Microsoft, and Amazon are doing big layoff rounds, everyone is getting squeezed.

AI rewrites accelerate the squeeze:

"When WIRED asked the Bing chatbot about the best dog beds according to The New York Times product review site Wirecutter, which is behind a metered paywall, it quickly reeled off the publication’s top three picks, with brief descriptions for each." ... "OpenAI is not known to have paid to license all that content, though it has licensed images from the stock image library Shutterstock to provide training data for its work on generating images."

The above is what Paul Kedrosky was talking about when he wrote of AI rewrites in search being a Tragedy of the Commons problem.

A parallel problem is the increased cost of getting your science fiction short story read when magazines shut down submissions due to a rash of AI-spam submissions:

The rise of AI-powered chatbots is wreaking havoc on the literary world. Sci-fi publication Clarkesworld Magazine is temporarily suspending short story submissions, citing a surge in people using AI chatbots to “plagiarize” their writing.

The magazine announced the suspension days after Clarkesworld editor Neil Clarke warned about AI-written works posing a threat to the entire short-story ecosystem.

Warnings Serving As Strategy Maps

"He who fights with monsters might take care lest he thereby become a monster. And if you gaze for long into an abyss, the abyss gazes also into you." - Nietzsche

Going full circle here, early Google warned against ad-driven search engines, then Google became the largest ad play in the world. Similarly ...

Elon wants to create a non-woke AI, but he'll still have some free speech issues.

Over time more of the web will be "good enough" rewrites, and the JPEG will keep getting fuzzier:

"This new generation of chat-based search engines are better described as “answer engines” that can, in a sense, “show their work” by giving links to the webpages they deliver and summarize. But for an answer engine to have real utility, we’re going to have to trust it enough, most of the time, that we accept those answers at face value. ... The greater concentration of power is all the more important because this technology is both incredibly powerful and inherently flawed: it has a tendency to confidently deliver incorrect information. This means that step one in making this technology mainstream is building it, and step two is minimizing the variety and number of mistakes it inevitably makes. Trust in AI, in other words, will become the new moat that big technology companies will fight to defend. Lose the user’s trust often enough, and they might abandon your product. For example: In November, Meta made available to the public an AI chat-based search engine for scientific knowledge called Galactica. Perhaps it was in part the engine’s target audience—scientists—but the incorrect answers it sometimes offered inspired such withering criticism that Meta shut down public access to it after just three days, said Meta chief AI scientist Yann LeCun in a recent talk."

Check out the sentence Google chose to bold here:

As the economy becomes increasingly digital, AI algorithms have deep implications across it. Things like voice rights, knock-offs, virtual re-representations, source attribution, copyright of input, copyright of output, and similar are obvious. But how far do we allow algorithms to track a person's character flaws and exploit them? Horse racing ads that follow a gambling addict around the web, or a girl with anorexia who keeps clicking on weight loss ads.

One of the biggest use cases for paid AI chatbots so far is fantasy sexting. It is far easier to program a lovebot filled with confirmation bias than it is to improve oneself. Digital soma.

When AI is connected directly to the Internet and automates away many white collar jobs what comes next? As AI does everything for you do the profit margins shift across from core product sales to hidden junk fees (e.g. ticket scalper marketplaces or ordering flowers for Mother's Day where you get charged separately for shipping, handling, care, weekend shipping, Sunday shipping, holiday shipping)?

"LLMs aren’t just the biggest change since social, mobile, or cloud–they’re the biggest thing since the World Wide Web. And on the coding front, they’re the biggest thing since IDEs and Stack Overflow, and may well eclipse them both. But most of the engineers I personally know are sort of squinting at it and thinking, “Is this another crypto?” Even the devs at Sourcegraph are skeptical. I mean, what engineer isn’t. Being skeptical is a survival skill. ... The punchline, and it’s honestly one of the hardest things to explain, so I’m going the faith-based route today, is that all the winners in the AI space will have data moats." - Steve Yegge

Monopoly Bundling

The thing that makes the AI algorithms particularly dangerous is not just that they are often wrong while appearing high-confidence; it is that they are tied to monopoly platforms which impact so many other layers of the economy. If Google pays Apple billions to be the default search provider on iPhone, any error in the AI on a particular topic will hit a whole lot of people on Android & Apple devices until the problem becomes a media issue & gets fixed.

The analogy here would be if Coca-Cola had a poisoned batch while also pouring Pepsi's products: the contamination hits both brands.

These cloud platforms also want to help retailers manage in-store inventory:

Google Cloud said Friday its algorithm can recognize and analyze the availability of consumer packaged goods products on shelves from videos and images provided by the retailer’s own ceiling-mounted cameras, camera-equipped self-driving robots or store associates. The tool, which is now in preview, will become broadly available in the coming months, it said. ... Walmart Inc. notably ended its effort to use roving robots in store aisles to keep track of its inventory in 2020 because it found different, sometimes simpler solutions that proved just as useful, said people familiar with the situation.

Microsoft has a browser extension for adding coupons to website checkouts. Google is also adding coupon features to their SERPs.

Every ad network can use any OS, email, or web browser hooks to try to reset user defaults & suck users into that particular ecosystem.

AI Boundaries

Generative AI algorithms will always have a bias toward being backward looking, as they can only recreate content based on other ingested content that has gone through some editorial process. AI will also overemphasize the recent past, as more dated cultural references represent an unneeded risk & most forms of spam will target things that are sought after today. Algorithmic publishing will lead to more content created each day.

From a risk perspective it makes sense for AI algorithms to promote consensus views while omitting or understating the fringe. Promoting fringe views represents risk. Promoting consensus does not.

Each AI algorithm has limits & boundaries, with humans controlling where they are set. Injection attacks can help explore some of the boundaries, but they'll be patched until probed again.

Boundaries will often be set by changing political winds:

"The tech giant plans to release a series of short videos highlighting the techniques common to many misleading claims. The videos will appear as advertisements on platforms like Facebook, YouTube or TikTok in Germany. A similar campaign in India is also in the works. It’s an approach called prebunking, which involves teaching people how to spot false claims before they encounter them. The strategy is gaining support among researchers and tech companies. ... When catalyzed by algorithms, misleading claims can discourage people from getting vaccines, spread authoritarian propaganda, foment distrust in democratic institutions and spur violence."

Stating facts about population subgroups will be limited in some ways to minimize perceived racism or sexism, or to protect the fund flows that benefit fringe fake-victim groups. Never trust Marxists who own multiple mansions.

At the same time individual journalists can drop napalm on any person who shares too many politically incorrect facts.

Some things are quickly labeled or debunked. Other things are blown out of proportion to scare and manipulate people:

Dr. Ioannidis et al. found that across 31 national seroprevalence studies in the pre-vaccine era, the median IFR was 0.0003% at 0-19 years, 0.003% at 20-29 years, 0.011% at 30-39 years, 0.035% at 40-49 years, 0.129% at 50-59 years, and 0.501% at 60-69 years. This comes out to 0.035% for those aged 0-59 and 0.095% for those aged 0-69.

The covid response cycle sacrificed childhood development (and small businesses) to offer fake protections to unhealthy elderly people (and bountiful subsidies to large "essential" corporations).

‘Civilisation and barbarism are not different kinds of society. They are found – intertwined – whenever human beings come together.’ This is true whether the civilisation be Aztec or Covidian. A future historian may compare the superstitions of the Aztec to those of the Covidian. The ridiculous masks, the ineffective lockdowns, the cult-like obedience to authority. It’s almost too perfect that Aztec nobility identified themselves by walking with a flower held under the nose.

A lot of children had their childhoods destroyed by the idiotic lockdowns. And a lot of those children are now destroying the lives of other children:

In the U.S., homicides committed by juveniles acting alone rose 30% in 2020 from a year earlier, while those committed by multiple juveniles increased 66%. The number of killings committed by children under 14 was the highest in two decades, according to the most recent federal data.

Now we get to pile inflation and job insecurity on top of those headwinds to see more violence.

The developmental damage (school closed, stressed out parents, hidden faces, less robust immune systems, limited social development) is hard to overstate:

The problem with this is that the harm of performative art in this regard is not speculative, particularly in young children where language development is occurring and we know a huge percentage of said learning comes from facial expressions which of course a mask prevents from being seen. Every single person involved in this must face criminal sanction and prison for the deliberate harm inflicted upon young children without any evidence of benefit to anyone. When the harm is obvious and clear but the benefit dubious proceeding with a given action is both stupid and criminal.

Some entities will claim their own statements are conspiracy theory, even when directly quoted:

“If Russia invades . . . there will be no longer a Nord Stream 2. We will bring an end to it.” - President Joseph R. Biden

In an age of deep fakes, confirmation bias driven fast social shares (filter bubble), legal threats, increased authenticity of impersonation technology, AI algorithms which sort & rewrite media, & secret censorship programs ... who do you trust? How are people informed when nation states offer free global internet access with a thumb on the scale of truth, even as aggregators block access to certain sources demanding payments?

Lab leaks sure sound a lot like an outbreak of chocolatey goodness in Hershey, PA!

"The fact that protesters could be at once both the victims and perpetrators of misinformation simply shows how pernicious misinformation is in modern society." - Canadian Justice Paul Rouleau

What is freedom?

By 2016, however, the WEF types who’d grown used to skiing at Davos unmolested and cheering on from Manhattan penthouses those thrilling electoral face-offs between one Yale Bonesman and another suddenly had to deal with — political unrest? Occupy Wall Street was one thing. That could have been over with one blast of the hose. But Trump? Brexit? Catalan independence? These were the types of problems you read about in places like Albania or Myanmar. It couldn’t be countenanced in London or New York, not for a moment. Nobody wanted elections with real stakes, yet suddenly the vote was not only consequential again, but “often existentially so,” as American Enterprise Institute fellow Dalibor Rohac sighed. So a new P.R. campaign was born, selling a generation of upper-class kids on the idea of freedom as a stalking-horse for race hatred, ignorance, piles, and every other bad thing a person of means can imagine.

New Google Ad Labeling

TechCrunch recently highlighted how Google is changing their ad labeling on mobile devices.

A few big changes include:

  • ad label removed from individual ad units
  • the unit-level label is replaced by a favicon
  • a "Sponsored" label above ads
  • the URL shows to the right of the favicon & the site title now appears in a slightly larger font above the URL

An example of the new layout is here:
2022 Google SERP layouts with new ad labeling

Displaying a site title & the favicon will allow advertisers to get brand exposure even if they don't get the click, while the extra emphasis on site name could shift ad clicks away from unbranded sites toward branded sites. It may also cause a lift in clicks on precisely matching domains, though that remains to be seen & likely depends upon many other factors. The favicon and site name in the ads likely impact consumer recall, which can bleed into organic rankings.

After TechCrunch made the above post, a Google spokesperson chimed in with an update:

Changes to the appearance of Search ads and ads labeling are the result of rigorous user testing across many different dimensions and methodologies, including user understanding and response, advertiser quality and effectiveness, and overall impact of the Search experience. We’ve been conducting these tests for more than a year to ensure that users can identify the source of their Search ads and where they are coming from, and that paid content is clearly labeled and distinguishable from search results as Google Search continues to evolve.

The fact it was pre-announced & tested for so long indicates both that it is likely to last a while and that it will, in aggregate, shift clicks away from the organic result set to the paid ads.

Google Helpful Content Update

Granular Panda

Reading the tea leaves on the pre-announced Google "helpful content" update rolling out next week & over the following couple of weeks for English-language queries, it sounds like a second and perhaps more granular version of Panda which can take in additional signals, including how unique the page-level content is & the language structure on the pages.

Like Panda, the algorithm will update periodically across time & impact websites on a sitewide basis.
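Google has not published the mechanics, but a sitewide classifier conceptually behaves something like the toy sketch below. Everything here - the scores, threshold, multiplier, and function names - is my own assumption for illustration, not anything Google has confirmed:

```python
# Toy sketch of a sitewide "helpfulness" classifier - NOT Google's actual system.
# Per-page scores roll up into one site-level score; when the site score is weak,
# every page on the site gets a ranking demotion multiplier.

def site_score(page_scores):
    """Aggregate per-page helpfulness scores (0.0-1.0) into one site-level score."""
    return sum(page_scores) / len(page_scores)

def adjusted_rank_score(base_score, site_helpfulness, threshold=0.5, demotion=0.7):
    """Apply the sitewide demotion when the aggregate site score is weak."""
    if site_helpfulness < threshold:
        return base_score * demotion  # drags down even the strong pages
    return base_score

pages = [0.9, 0.4, 0.3, 0.2, 0.35]     # one strong page, lots of thin ones
site = site_score(pages)               # 0.43, below the 0.5 threshold
print(adjusted_rank_score(0.9, site))  # ~0.63: the strong page is demoted too
```

The key property, as with Panda, is that the score is sitewide: thin content in one section can drag down rankings on pages that are individually strong.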

Cold Hot Takes

The update hasn't even rolled out yet, but I have seen some write-ups which conclude by telling people to use an on-page SEO tool, tweets where people complained about low-end affiliate marketing, and gems like a guide suggesting empathy is important while linking repeatedly to advice on doing x or y "at scale."

Trashing affiliates is a great sales angle for enterprise SEO consultants since the successful indy affiliate often knows more about SEO than they do, the successful affiliate would never become their client, and the corporation that is getting their asses handed to them by an affiliate would like to think this person has the key to re-balance the market in their own favor.

My favorite pre-analysis was a person who specialized in ghostwriting books for CEOs Tweeting that SEO has made the web too inauthentic and too corporate. That guy earned a star & a warm spot in my heart.

Profitable Publishing

Of course everything in publishing is trade offs. That is why CEOs hire ghostwriters to write books for them, hire book launch specialists to manipulate the best seller lists, or even write messaging books in the first place. To some Dan Price was a hero advocating for greater equality and human dignity. To others he was a sort of male feminist superhero, with all the Harvey Weinstein that typically entails.

Anyone who has done 100 interviews with journalists sees ones that do their job by the book and aim to inform their readers to the best of their abilities (my experiences with the Wall Street Journal & PBS were aligned with this sort of ideal) and then total hatchet jobs where a journalist plants a quote they themselves wrote & then attributes it to you (e.g. a London Times freelance journalist).

There are many dimensions to publishing:

  • depth
  • purpose
  • timing
  • audience
  • language
  • experience
  • format
  • passion
  • uniqueness
  • frequency

Blogs to Feeds

For a long time indy blogs punched well above their weight due to the incestuous nature of cross-referencing each other, the speed of publishing when breaking news, and how easy feed readers made it to subscribe to your favorite blogs. Google Reader then ate the feed reader market & shut down. And many bloggers who had unique things to say eventually started to repeat themselves. Or their passions & interests changed. Or their market niche disappeared as markets moved on. Starting over is hard & staying current after the passion fades is difficult. Plus if you were rather successful it is easy to become self absorbed and/or lose the hunger and drive that initially made you successful.

Around the same time blogs started sliding people spent more and more time on various social networks which hyper-optimized the slot machine type dopamine rush people get from refreshing the feed. Social media largely replaced blogs, while legacy media publishers got faster at putting out incomplete news stories to be updated as they gather more news. TikTok is an obvious destination point for that dopamine rush - billions of short pieces of content which can be consumed quickly and shared - where the user engagement metrics for each user are tracked and aggregated across each snippet of media to drive further distribution.

Burnout & Changing Priorities

I know one of the reasons I blog less than I used to is a lot of the things I would write would be repeats. Another big reason was that when my wife was pregnant I decided to shut down our membership site so I could take her for a decently long walk almost every day, keeping her health great when it came time to give birth & ensuring I had spare capacity if anything went wrong with the pregnancy. When I was a kid my dad was only around for a few summers, and I wanted to be better than that for my own kid.

The other reason I cut back on blogging is at some point search went from an endless blue-water market to a zero-sum game to a negative-sum game (as ad clicks displaced organic clicks). And in such an environment, if you have a sustainable competitive advantage it is best to lean into it yourself as hard as you can rather than sharing it with others. Like when we had an office here, the link builders I trained were getting awesome unpaid links from high-trust sources for what backed out to about $25 of labor time (and no more than double that after factoring in office equipment, rent, etc.).

If I share that script / process on the blog publicly I would move the economics against myself. At the end of the day business is margins, strategy, market, and efficiency. Any market worth being in is going to have competition, so you need to have some efficiency or strategic differentiators if you are going to have sustainable profit margins. I've paid others many multiples of that for link building for many years back when links were the primary thing driving rankings.

I don't know the business model where sharing the above script earns more than it costs. Does one launch a Substack priced at like $500 or $1,000 a month offering a detailed guide each month? How many people adopt the script before the response rates fall & the lost edge costs more than the subscriptions bring in? My issue with consulting is I always wanted to over-deliver for clients & always ended up selling myself short when compared to publishing, so I just stick with a few great clients and a bit of this and that vs going too deep & scaling up there. Plus I had friends who went big, and when some of their clients were acquired the acquirer bragged about the SEO, that led to a penalty, and then the acquirer threw the SEO under the bus and had their business torched.

When you have a kid, seeing them learn and seeing wonderment in their eyes is as good as life gets, but if you undermine your profit margins you'd also be directly undermining your own child's future ... often to help people who may not even like you anyhow. That is about as self-defeating as it gets, particularly as politics grow more polarized & many begin to view retribution as a core function of government.

I believe there are no limits to the retributive and malicious use of taxation as a political weapon. I believe there are no limits to the retributive and malicious use of spending as a political reward.

Margins

The role of search engines is to suck as much of the margins as they can out of publishing while trying to put some baseline floor on content quality so that people still prefer to use a search engine rather than some other reference resource. Google sees memes like "add Reddit to the end of your search for real content" as an attack on their own brand. Google needs periodic large shake-ups to reaffirm their importance, maintain narrative control around innovation, and shake out players with excessive profit margins who were too well aligned with the current local maxima. Google needs aggressive SEO efforts with large profits to carry an "or else" career risk in order to rein in such efforts.

You can see the intent for career risk in how the algorithm will wait months to clear the flag:

Google said the helpful content update system is automated, regularly evaluating content. So the algorithm is constantly looking at your content and assigning scores to it. But that does not mean that if you fix your content today, your site will recover tomorrow. Google told me there is this validation period, a waiting period, for Google to trust that you really are committed to updating your content - not just updating it today, having Google rank you better, and then putting your content back to the way it was. Google needs you to prove, over several months - yes - several months - that your content is actually helpful in the long run.

If you thought a site was quality, it had some issues, the issues were cleaned up, and you were still going to wait to rank it appropriately ... the sole and explicit purpose of that delay is career risk to others, to prevent them flying too close to the sun - to drive self-regulation out of fear.
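If that description is accurate, the clearing logic behaves like hysteresis: one good evaluation is not enough, the improvement has to hold across consecutive checks, and a revert restarts the clock. A minimal sketch of the idea, with the window length and threshold invented purely for illustration:

```python
# Illustrative "validation period" logic - the threshold and window are made up.
# A sitewide flag only clears after the site scores above the bar for several
# consecutive evaluations, so briefly fixing content and reverting resets the clock.

def flag_cleared(monthly_scores, threshold=0.5, required_consecutive=4):
    """Return True only if the most recent evaluations all clear the bar."""
    recent = monthly_scores[-required_consecutive:]
    return len(recent) == required_consecutive and all(s >= threshold for s in recent)

history = [0.3, 0.7, 0.7, 0.3, 0.7, 0.7, 0.7]  # improved, reverted, improved again
print(flag_cleared(history))          # False: only 3 consecutive clean months
print(flag_cleared(history + [0.7]))  # True: 4 consecutive clean months
```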

Brand counts for a lot in search & so does buying the default placement position - look at how much Google pays Apple to not compete in search, or look at how Google had that illegal ad auction bid rigging gentleman's agreement with Facebook to not compete with a header bidding solution so Google could maintain their outsized profit margins on ad serving on third party websites.

Business ultimately is competition. Does Google serve your ads? What are the prices charged to players on each side of each auction & how much rake can the auctioneer capture for themselves?

The Auctioneer's Shill Bid - Google Halverez (beta)

That is why we see Google embedding more features directly in their search results where they force rank their vertical listings above the organic listings. Their vertical ads are almost always placed above organics & below the text AdWords ads. Such vertical results could be thought of as a category-based shill bid to try to drive attention back upward, or move traffic into a parallel page where there is another chance to show more ads.

This post stated:

Google runs its search engine partly on its internally developed Cloud TPU chips. The chips, which the company also makes available to other organizations through its cloud platform, are specifically optimized for artificial intelligence workloads. Google’s newest Cloud TPU can provide up to 275 teraflops of performance, which is equivalent to 275 trillion computing operations per second.

Now that computing power can be run across:

  • millions of books Google has indexed
  • particular publishers Google considers "above board" like Reuters, AP, the New York Times, the Wall Street Journal, etc.
  • historically archived content from trusted publishers before "optimizing for search" was actually a thing

... and used to model language usage, contrasting it against the language usage of publishers known to have weak engagement / satisfaction metrics.

Low-end outsourced content & almost-good-enough AI content will likely tank. Similarly, textually unique content which says nothing original or is just slapped together will likely get downranked as well.
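One hedged guess at what "model language usage" against weak publishers could mean in practice: score a page under two language models, one fit on trusted corpora and one fit on known-thin content, and use the gap as a signal. A toy bigram version, where the corpora and scoring are invented for illustration and no part of this is a confirmed Google signal:

```python
# Toy contrast of two bigram language models - purely illustrative.
# Text that reads more like the "thin content" corpus than the trusted
# corpus comes out with a negative quality gap.
import math
from collections import Counter

def bigram_counts(corpus):
    tokens = corpus.lower().split()
    return Counter(zip(tokens, tokens[1:]))

def log_likelihood(text, counts, smoothing=1.0):
    tokens = text.lower().split()
    total = sum(counts.values()) + smoothing * (len(counts) + 1)
    return sum(math.log((counts[bg] + smoothing) / total)
               for bg in zip(tokens, tokens[1:]))

trusted = bigram_counts("the study found that results varied across regions over time")
thin = bigram_counts("best cheap deals best price buy now best cheap price deals")

def quality_gap(text):
    """Positive when text reads more like the trusted corpus than the thin one."""
    return log_likelihood(text, trusted) - log_likelihood(text, thin)

print(quality_gap("the study found that prices varied"))  # > 0: trusted-ish
print(quality_gap("best cheap deals buy now"))            # < 0: thin-content-ish
```

Scaled up to millions of books and archived pre-SEO pages on TPU-class hardware, the same basic contrast becomes a cheap way to flag content that statistically resembles the low-engagement pile.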

Expect Volatility

They would not have pre-announced the update & given some people embargoed exclusives unless there was going to be a lot of volatility. As is typical with the bigger updates, they will almost certainly roll out multiple other updates sandwiched together to help obfuscate what signals they are using & misdirect people reading too much into the winners and losers lists.

Here are some questions Google asked:

  • Do you have an existing or intended audience for your business or site that would find the content useful if they came directly to you?
  • Does your content clearly demonstrate first-hand expertise and a depth of knowledge (for example, expertise that comes from having actually used a product or service, or visiting a place)?
  • Does your site have a primary purpose or focus?
  • After reading your content, will someone leave feeling they’ve learned enough about a topic to help achieve their goal?
  • Will someone reading your content leave feeling like they’ve had a satisfying experience?
  • Are you keeping in mind our guidance for core updates and for product reviews?

As a person who has ... erm ... put a thumb on the scale for a couple decades now, one can feel the algorithmic signals approximated by the above questions.

To the above questions they added:

  • Is the content primarily to attract people from search engines, rather than made for humans?
  • Are you producing lots of content on different topics in hopes that some of it might perform well in search results?
  • Are you using extensive automation to produce content on many topics?
  • Are you mainly summarizing what others have to say without adding much value?
  • Are you writing about things simply because they seem trending and not because you'd write about them otherwise for your existing audience?
  • Does your content leave readers feeling like they need to search again to get better information from other sources?
  • Are you writing to a particular word count because you've heard or read that Google has a preferred word count? (No, we don't).
  • Did you decide to enter some niche topic area without any real expertise, but instead mainly because you thought you'd get search traffic?
  • Does your content promise to answer a question that actually has no answer, such as suggesting there's a release date for a product, movie, or TV show when one isn't confirmed?

Some of those indicate where Google believes the boundaries of their own role as a publisher are & that you should stay out of their lane. :D

Barrier to Entry vs Personality

One of the interesting things about the broader scope of algorithm shifts is that each thing that makes the algorithms more complex, increases barrier to entry, and increases cost ultimately increases the chunk size of competition. And when that is done, the macroparasite is being preferred over the microparasite. Conceptually Google has a lot of reasons to have that bias or preference:

  • fewer entities to police (lower cost)
  • more data to use to police each entity (higher confidence)
  • easier to do direct deals with players which can move the needle (more scale)
  • if markets get too consolidated Google can always launch a vertical service & tip the scale back in the other direction (I see your Amazon ad revenue and I raise you free product listing ads, aggregated third party reviews, in-SERP product comparison features, and a "People Also Ask" unit)
  • the macroparasites have more "sameness" between them (making it easier for Google to create a competitive clone or copy)

So long as Google maintains a monopoly on web search the bias toward macroparasites works for them. It gives Google the outsized margins which ensure healthy Alphabet profits even if the median of Google's 156,000+ employees pulls down nearly $300,000 a year. People can not see what has no distribution, people do not know what exists in invisibility, nor do they know which innovations were held back and what does not exist due to the current incentive structures in our monopoly-controlled publishing ecosystem.

I think when people complain about the web being inauthentic what they are really complaining about is the algorithmic choices & publishing shifts that did away with the indy blogs and replaced them with dopamine-feed viral tricks and the same big-box scaled players which operate multiple parallel sites, to where you are getting the same machinery and content production house behind multiple consecutive listings. They are complaining about the efforts to snuff out the microparasite also scrubbing away personality, joy, love, quirkiness, weirdness, and the zany stuff you would not typically find on content-by-factory-order websites.

Let's Go With Consensus Here!

The above leads you down well worn paths, rather than the magic of serendipity & a personality worn on your sleeve that turns some people on while turning other people off.

Text which is roughly aligned with a backward looking consensus rather than at the forefront of a field.

History is written by the victors. Consensus is politically driven, backward looking, and has key messages memory holed.

Some COVID-19 Fun to "Fact" Check

I spent New Year's in China before the COVID-19 crisis hit & got sick when I got back. I used so much caffeine the day I moved over a half dozen computers between office buildings while sick. A week later, when news of the COVID-19 crisis started leaking on Twitter, I thought wow, this looks even worse than what I just had. In the fullness of time I think I had it before it was a crisis. Everyone in my family got sick, along with multiple people from the office. Then the COVID-19 crisis news came out & only later, when it was shown that comorbidities and the elderly had the worst outcomes, did I realize they were likely the same thing. Then after the crisis had been announced someone else from the office building I was in got it & then one day it was illegal to go into the office. The lockdown where I lived was longer than the original lockdown in Wuhan. Those lockdowns destroyed millions of lives.

The reason the response to the COVID-19 virus was so extreme was huge parts of politically interested parties wanted to stop at nothing to see orange man ejected from the White House. So early on when he blocked flights from China you had prominent people in political circles calling him xenophobic, and then the head of public health in New York City was telling you it was safe to ride the subway and go about your ordinary daily life. That turned out to be deadly partisan hackery & ignorance pitched as enlightenment, leading to her resignation.

Then the virus spread wildly, as one would expect it to. And draconian lockdowns tanked the economy to ensure orange man was gone, mail-in voting was widespread, and the election was secured.

Some of the most ridiculous heroes during this period wrote books about being a hero. Andrew "killer" Cuomo had time to write his "did you ever know that I'm your hero" book while he simultaneously ordered senior living homes to take in COVID-19 positive patients. Due to fecal-oral transmission and poor health outcomes for senior citizens sick enough to be in a senior living home, his policies led to the manslaughter of thousands of senior citizens.

You couldn't go to a funeral and say goodbye because you might kill someone else's grandma, but if you were marching for social justice (and ONLY social justice) that stuff was immune to the virus.

Suggesting looking at the root problems like no dad in the home is considered sexist, racist, or both. Meanwhile social justice organizations champion tearing down the nuclear family in spite of the fact that if you tear down the family all you are left with is the collective AND "mandatory collectivism has ended in misery wherever it’s been tried."

Of course the social justice stuff embeds the false narrative of victimhood, which then turns many of the fake victims into monsters who destroy the lives of others - but we are all in this together.

Absolutely nobody could have predicted the rise of murder & violent crime as we emptied the prisons & decriminalized large swaths of the penal code. Plus since many crimes are repeatedly ignored people stop reporting lesser crimes, so the New York Times can tell you not to worry overall crime is down.

In Seattle if someone rapes you the police probably won't even take a report to investigate it unless (in some cases?) you are a child. What are police protecting society from if rape is a freebie that doesn't really matter? Why pay taxes or have government at all?

What Google Wants

The above sidebar is the sort of content Google would not want to rank in their search results. :D

They want to rank text which is perhaps factually correct (even if it intentionally omits the sort of stuff included above), and maybe even current and informed, but done in such a way where you do not feel you know the author the way you might think you do if you read a great novel. Or hard biased content which purports to support some view and narrative, but is ultimately all just an act, where everything which could be of substance is ultimately subsumed by sales & marketing.

The Market for Something to Believe In is Infinite

Each re-representation mash-up of content in the search results decontextualizes the in-depth experience & passion we crave. Each same "big box" content factory where a backed entity can withstand algorithmic volatility & buy up other publishers to carry learnings across to establish (and monetize) a consensus creates more of a bland sameness.

That barrier to entry & bland sameness is likely part of the reason for the recent growth of Substack, which acts much like a blog did 15 or 20 years ago - you go direct to the source without all the layers of intermediaries & dumbing down you get as a side effect of the scaled & polished publishing process.

Automating Ourselves Out of Existence

Time has grown more scarce after having a child, so I rarely blog anymore. Though I thought it probably made sense to make at least a quarterly(ish) post so people know I still exist.

One of the big things I have been noticing over the past year or so is an increasing level of automation in ways that are not particularly brilliant. :D

Just from this past week I've had 3 "treat" encounters on this front.

One marketplace closed my account after I made a bunch of big purchases, likely presuming the purchases were fraudulent based on the volume, the new account & an IP address in an emerging-market economy. I never asked for a refund or anything like that, but when I believe in something I usually push pretty hard, so I bought a lot. What was dumb about that is they took a person who would have been a whale client & whom they were repeatedly targeting with ads & turned him into a person who would not recommend them ... after he was a paying client who spent a lot and had zero specific customer interactions or requests ... an all-profit-margin client who spent big and was then discarded. Dumb.

Similarly, one ad network automatically closed my account after I had not used it for a while. When I went to reactivate it, the person in customer support told me it would be easier to just create a new account, as reactivating the old one would take half a week or more. I said ok, went to set up a new account, and it was auto-banned without any disclosed reason. I asked for feedback as to why and they said they could not offer any, but the ban was permanent and lifetime.

A few months went by, I wondered what was up with that, and I logged into my inactive account & set up a subaccount, which worked right away. Weird. But even there they offer automated suggestions and feedback on improving your account performance, and some of them were just not rooted in fact. Worse yet, the default targeting options are set so overly broad they can cause account issues in a country like Vietnam, to where if you click to approve (or even auto-approve!) their automated suggestions you then get notifications about how you are violating some sort of ToS or guidelines ... if they can run that logic *after* you activate *their* suggestions, why wouldn't they run it earlier? How well do they think you will trust & believe in their automated optimization tips if, after you follow them, you get warning pop-overs?

Another big bonus recently was a client being mentioned in a stray spam email. The email wasn't from the client or me, but the fact that a random page on their site was mentioned in a stray spoofed email that got flagged as spam meant that when the ticket notification the host sent wound up in spam, the client never saw it, and the host simply took their site offline. Based on a single email sent from some other server.

Upon calling the host with a friendly WTF, the host explained to the customer that they had so many customers they have to automate everything. At the same time, when it came to restoring the hosting the client was paying for, they suggested the client boot in secure mode, run Apache commands x and y, etc. ... even though they knew the problem was not with the server, but an overaggressive automated response to a stray mention in a single spam email sent by some third party.

When the host tried to explain that they "have to" automate everything because they have so many customers, the customer quickly cut them off with "No, that is a business choice. You could charge different prices or choose to reach out to people who have spent tens of thousands on hosting and have not had any issues in years." He also mentioned how emails can be sent to spam, or be sent to an inbox on the very web host that went offline & was then inaccessible. Then the lovely customer support person stated "I have heard that complaint before," meaning they are aware of the issue but do not see it as an issue for them. When the customer said they should follow up any emails with an SMS for servers going offline, the person said you could do it on your end & then later sent them a 14-page guide for how to integrate the Twilio API.

Nothing in the world is fair. Nothing in the world is equal. But there are smart ways to run a business & dumb ways to run a business.

If you have enough time to write a 14-page integration guide it probably makes sense to just incorporate the feature into the service so the guide is unneeded!
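For scale: a server-down SMS alert with Twilio's Python helper library is roughly a dozen lines. The credentials and phone numbers below are placeholders, and error handling is omitted:

```python
# Minimal sketch of a server-offline SMS alert using Twilio's Python library.
# The account SID, auth token, and both phone numbers are placeholders.
from twilio.rest import Client

def alert_customer(host_status, customer_phone):
    if host_status == "offline":
        client = Client("ACXXXXXXXXXXXXXXXX", "your_auth_token")  # placeholder creds
        client.messages.create(
            body="Your server has been taken offline. Please check your support ticket.",
            from_="+15550000000",  # the host's notification number (placeholder)
            to=customer_phone,
        )

alert_customer("offline", "+15551234567")
```

A dozen lines on the host's side, versus a 14-page guide pushing the integration onto every customer.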

Businesses should treat their heavy spenders or customers with a long history of a clean account with more care than a newly opened account. I had a big hedge fund as a client who would sometimes want rush work done & would do stuff like "hey good job there, throw in an extra $10,000 for yourself as a bonus" on the calls. Whenever they called or emailed they got a quick response. :D

I sort of get that one small marketplace presuming my purchases might have been a scam based on how many I made, how new my account was, and how small the marketplace was, but the hosting companies & ad networks that are worth 9 to 12 figures should generally do a bit better. Though in many ways the market cap is a sign the entity is insulated from market pressures & can automate away customer service, hoping their existing base is big enough to offset the customer support horror stories that undermine their brand.

It works.

At least for a while.

A parallel to the above is my Facebook ad account, which was closed about a half decade ago due to a geographic mismatch. That got removed, but then only sort of halfway. If I go to run ads it says that I can't, but if I go to request an account review to once again explain the geographic difference, I can't even get the form to submit unless I edit the HTML of the page on the fly to seed the correct data into the form field, as by default it says I can not request a review since I have no ad account.

The flip side of the above is that if that level of automation can torch existing paid accounts, you have to expect the big-data search & social companies are taking a rather skeptical view of new sites or players wanting to rank freely in their organic search results or social feeds. With that being the case, it helps to seed what you can to provide many signals that may remove some of the risk of getting sorted into the bad pile.

I have seen loads of people have their YouTube or Facebook or whatever such account get torched & only override the automated technocratic persona non grata policies by having followers on another channel share their dire situation so it could get flagged for human review and restoration. If that happens to established & widely followed players who have spent years investing in a platform, the odds of it happening to most newer sites & players are quite high.

You can play it safe and never say anything interesting, ensuring you are well within the Overton Window in all aspects of life. That, though, almost certainly guarantees failure, as it is hard to catch up or build momentum if your defining attribute is being a conformist.

Engineering Search Outcomes

Kent Walker promotes public policies which advantage the Google monopoly.

His role doing that means he has to write some really bad hot takes that lack context or intentionally & dishonestly redirect attention away from core issues - that's his job.

With that in mind, his most recent blog post defending the Google monopoly was exceptional.

Force Ranking of Inferior Search Results

"When you have an urgent question — like “stroke symptoms” — Google Search could be barred from giving you immediate and clear information, and instead be required to direct you to a mix of low quality results."

On some search queries users get a wall of Google ads, the force-ranked Google insert (or sometimes multiple of them, with local & ecommerce), and then there can even be a "people also ask" box above the first organic result.

The idea that organic results must be low quality if not owned & operated indicates 1 of the following 3 must be true:

  • they should not be in search
  • their content scraping & various revenue shifting scams with their ad tech stack demonetized legit publishers
  • their forced rank of their own content is stripping them of the signals needed to rank websites & pages

Whenever Google puts a "people also ask" box above the first organic result that is them saying they did not know what to rank, or they are just trying to create a visual block to push the organic result set down the page and user attention back up toward the ads.

The problem Google describes is easy to solve. Either of the following would work.

  • Have an API that allows user choice (to set rich snippet or vertical defaults in various categories), or
  • If the vertical inserts remain Google-only then for Google to justify force ranking their own results above the organic result set Google should also be required to rank those same results above all of their ads, so that Google is demonetizing Google along with the rest of the ecosystem, rather than just demonetizing third parties.

If the thesis is that this information needs to be front and center & that it is a matter of life or death, then asking searchers to first scroll past a page or two of ads is not particularly legitimate.

Spam & Security

"when you use Google Search or Google Play, we might have to give equal prominence to a raft of spammy and low-quality services."

Many of the worst versions of spam that have repeatedly made news headlines like fake tech support, fake government document providers, and fake locksmiths were buying distribution through Google Ads or were featured in the search results through Google force ranking their own local search offering even though they knew the results were vastly inferior to Yelp.

If Google did not force rank Google local results above the rest of the organic result set then the fake locksmiths would not have ranked.

I have lost count of how many articles I have read about hundreds or thousands of fake apps in the Google Play store which existed to defraud advertisers or commit identity theft, but there have been literally thousands of such articles. I see a similar headline at least once a month without even looking for them. Here is one from this week, about scammers monetizing the popularity of Wordle with fake apps.

Making matters worse, some of the tech support scams showed the URL of a real business and rerouted the call through a Google number directly to a scammer. A searcher who trusted Google & sees Apple.com or Dell.com on Google Ads in the search results then got connected with a scammer who would commit identity theft or encrypt their computer then demand ransom cryptocurrency payments to decrypt it.

After making the ads harder to run for scammers Google decided the problem was too hard & expensive to sort out so they also blocked legitimate computer repair shops.

Sometimes Google considers something spam strictly due to financial considerations.

Their old remote rater documents stated *HELPFUL* hotel affiliate websites should be labeled as spam.

Years later the big OTAs are complaining about Google eating their lunch, as Google is twice as big as the next player.

At one point Google got busted for helping an advertiser route around the automated safety features built into their ad network so that they could pay Google to run ads promoting illegal steroids.

With cartels, you can only buy illegal goods and services from the cartel if you don't want to suffer ill consequences. The same appears to be true here.

The China Problem

"Handicapping America’s technology leaders would threaten our leading sources of research and development spending — just as bipartisan voices in Congress are recognizing the need to increase American R&D investment to stay competitive in the global race for AI, quantum, and other advanced technologies."

"We are patriotic, but China ..." is a favorite misdirection of a tech monopolist.

The problem with that is while Eric Schmidt warns it is a national emergency if China overtakes the US in AI tech, Google also operates an AI tech lab in China.

In other words, Eric Schmidt is trying to warn you about himself and his business interests at Google.

Duplicitous? Absolutely.

Patriotic? Less than Chamath!

Inflation

"the online services targeted by these bills have reduced prices; these bills say nothing about sectors where prices have actually been rising and contributing to inflation."

Technology is no doubt deflationary (moving bits on an optical line is cheaper than printing out a book and shipping it across the world) BUT some dominant channels have increased the cost of distribution by increasing the chunk size of information and withholding performance information.

Before Google Analytics was "free" there was a rich and vibrant set of competition in web analytics software with lots of innovation from players like ClickTracks.

Most competing solutions went away.

Google moved away from an installed licensing model to a hosted service where they can change the price upon contract renewal.

Google Search hid progressively more performance information over time, only sampled data from larger data sets, & now you can sign up for Google Analytics 360 starting at only $150,000 per year.

The hidden search performance data also has many layers to that onion. Not only does Google not show keyword referrers on organic search, but they often don't show your paid search keywords either, and they keep extending out keyword targeting broader than advertisers intend.

Google used to pay Brad Geddes to run official Google AdWords ad training seminars for advertisers, so the idea that *he* has to express his frustrations on Twitter is an indication of how little effort Google is putting into having open communications channels or caring about what their advertisers think.

This is in accordance with the Google customer service philosophy:

he told her that the whole idea of customer support was ridiculous. Rather than assuming the unscalable task of answering users one by one, Page said, Google should enable users to answer one another's questions.

Those paying for ads get the above "serve yourself" treatment, all the while Google regularly resets user default ad settings to extend ad distribution, automatically add keywords, shift to enhanced AdWords campaigns, etc.

Then there are other features which would be beneficial and offered in a competitive market that have been deprioritized. Many years ago eBay did a study which showed their branded Google AdWords ad buys were cannibalistic to eBay profits. Google maintained most advertisers could not conduct such a study because it would be too expensive and Google does not make the feature set available as part of their ad suite.

Missing Information

"When you search for local businesses, Google Search and Maps may be prohibited from highlighting information we gather about hours of operation, contact information, and reviews. That could hurt small businesses and local retailers, as well as their customers."

Claiming reviews or an attempt to offer a comprehensive set of accurate review data as a strong point would be economical with the truth.

Back when I had a local business page my only review was from a locksmith spammer / scammer who praised his own two businesses, trashed a dozen other local locksmiths, crapped on a couple local SEO services, and joked about how a local mover smashed the guts out of his dog. Scammer fake reviewer's name was rather sophisticated ... it was ... Loop Dee Loop

About a decade back when Google was clearly losing Google took Yelp reviews wholesale (sometimes without even attributing them to Yelp!) and told Yelp that if they did not want Google stealing their work and displacing them with a copy of it then they should block GoogleBot. Google offered the same sort of advice / threat to TripAdvisor.

A few years before that Google temporarily "forgot" to show phone numbers on local listings.

After Yelp turned down an acquisition offer from Google, & Yelp did a great job making some people aware of how Google was stealing their reviews wholesale without attribution, Google bought Zagat & Frommer's to augment the Google local review data and then sold those businesses off.

This is sort of the same playbook Google has run in the past elsewhere. After Groupon said no to Google's acquisition offer, Google quickly provided daily deal ads to over a dozen Groupon competitors to help commoditize the Groupon offering and market position.

Ultimately with the above sort of stuff Google is primarily a volume aggregator, or has lower editorial costs than pure plays due to the ability to force-bundle their own distribution. And they use the ability to rank themselves above a neutral algorithmic position as a core part of their biz dev strategy. When shopping search engines were popular, Google kept rewording the question set they sent remote raters to justify rank demotions for shopping search engines. Google also came up with innovative ranking "signals" like concurrent ranking of their own vertical search offering whenever competitors x or y are shown in the result set & rolled out a "diversity" algorithm to limit how many comparison shopping sites could appear in the search results. The intent of the change was strictly anti-competitive:

"Although Google originally sought to demote all comparison shopping websites, after Google raters provided negative feedback to such a widespread demotion, Google implemented the current iteration of its so-called 'diversity' algorithm."

As a matter of fact, part of one of many document dumps in recent years went further than the old concurrent ranking signal: a rank-x-above-y feature whose guide highlighted how YouTube can be hard-coded at the #1 ranking position.

If you re-represent content & can force-rank yourself #1 (with larger listings), that can be used to force other players onto your platform on your terms. Back when YouTube was much less of a sure thing, Google suggested they could threaten to change copyright.
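Mechanically, pinning an owned property to the top is trivial; something like this illustrative re-rank pass, where the function and trigger are my invention rather than anything from the document dumps:

```python
# Illustrative "force rank" pass - purely invented to show the concept.
# An owned property is pinned to position 1 regardless of its organic score.

def force_rank(results, owned_domain="youtube.com"):
    """Move the first result from the owned domain to position 1, if present."""
    for i, result in enumerate(results):
        if owned_domain in result["url"]:
            return [result] + results[:i] + results[i + 1:]
    return results

organic = [
    {"url": "https://vimeo.com/123", "score": 0.95},
    {"url": "https://example.com/video", "score": 0.90},
    {"url": "https://youtube.com/watch?v=abc", "score": 0.60},
]
print(force_rank(organic)[0]["url"])  # youtube.com jumps two higher-scoring pages
```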

This same approach to "relevancy" is everywhere.

Did you watermark your images? Well, shame on you, as that is good for a rank demotion.

And if there are photos which are deemed illegal Google will make you file an endless series of DMCA removal requests even though they already had the image fingerprinted.

Now there are some issues where there is missing information. These areas involve original reporting on local politics & are called news deserts. As the ad pie has consolidated around Google & Facebook that has left many newspapers high and dry.

Private equity players like Alden Global Capital buy up newspapers, fire journalists, and monetize brand equity as they drive the papers into the ground.

If you are sub-scale maybe Google steals your money or hits you with a false positive algorithm flag that has you seeking professional mental health help.

Big players get a slower bloodletting.

Google has maintained they do not make any money from news search, but the states' lawsuit around ad tech made it clear Google promoted AMP for anti-competitive purposes to block header bidding, lied to news publishers to get them to adopt AMP and eat the tech costs of implementation, did a deal with their biggest competitor in online advertising (Facebook) to maintain the status quo, charged over double what their competitors do for ad tech, and had a variety of bid-rigging auction manipulation algorithms they used to keep funneling more money to themselves.

Internally they had an OKR to make *most* search clicks land on AMP pages within a year of launch:

"AMP launched as an open source project in October 2015, with 26 publishers and over 40 publications already publishing AMP files for our preview demo. Our team built g.co/ampdemo and is now racing towards launching it for all of our users. We're responsible for the AMP @ Google integrations, particularly focusing on Search, our most visible product. We have a Google-wide 2016 OKR to deliver! By the end of 2016, our goal is that 50%+ of content consumed through Search is being consumed through AMP."

You don't get over half the web to shift to a proprietary version of HTML in under a year without a lot of manipulation.

Jasper.ai Review

Background / Intro

One of my longtime friends, who was in Internet marketing long before I was, hit me up on Skype about a week ago praising Jasper.ai. I have to think long and hard to recall any other time he has really pitched or recommended something like that & really I just can't think of one. The following day my wife Giovanna mentioned something to me and I was like "oh you should check out this thing my buddy recommended yesterday" and then I looked and realized they were both talking about the same thing. :D

I have a general heuristic that if people I trust recommend things I put them near the top of the "to do" list and if multiple people I trust do that I pull out the credit card and run at it.

Unfortunately I have been a bit burned out recently and launched a new site which I have put a few hundred hours into, so I haven't had the time to do too much testing, BUT I have a writer who works for me who has a master's degree in writing, and figured she could do a solid review. And she did. :D

She is maybe even a bit more cynical than I am (is that even possible?) and a certified cat lady who loves writing, reading, poetry and is more into a soft sell versus aggressive sales.

Full disclosure...the above link and the one at the end of this post are affiliate links, but they had zero impact on the shape or format of the review. The reviewer was completely disconnected from the affiliate program and I pulled out my credit card to pay for the software for her to test it out.

With that tiny bit of a micro-introduction, the rest of the post from here on out is hers. I may have made a couple minor edits for clarity (and probably introduced a few errors she will choke me for. :D) but otherwise the rest of this post is all her ...

An In-depth Review of the Conversion.ai Writing Software

Considering the possibilities of artificial intelligence (AI), we picture robots doing tasks autonomously like humans. With a computer’s brain power, productivity is accelerated significantly. We also expect AI programs to have the capability to evolve intelligently the longer they are used. These types of AI employ “machine learning,” or deep learning to solve problems.

AI technology can be leveraged by various industries, especially writing. Recently, I learned about the Conversion.ai copywriting tool. It uses machine learning and claims to write “high converting copy” for websites, ads, landing pages, emails, etc. The software is geared towards writers, marketers, entrepreneurs, and agencies that benefit from creating engaging and effective copy. To date, companies such as Hubspot, Shopify, and Salesforce are known to use the software. Currently, it’s offering a 7-day free trial with 20,000 word credits.

To give you the lowdown on Conversion.ai, I wrote an in-depth review of how this software works. I’ll go through its various features and show examples of how I used them. I’ll include the advantages of using Conversion.ai’s Jasper (that’s what it’s called) in writing scenarios. More importantly, I’ll discuss challenges and specific limitations this tool might present.

Assistance in Creating High Conversion Copy

As a writer doing web copy for 10 years, including the time I took a post-grad creative writing degree, I grabbed the opportunity to try this AI software. For starters, it struck me how Conversion.ai claims to provide “high converting copy” for increased conversion and higher ROI. Such claims are a tall order. If you’ve been in the marketing or sales industry, you’d know conversion depends on so many other factors, such as the quality of the actual product, customer support, price, etc. It’s not just how well copy is written, though it’s a vital part. But anyway, upon more research, I learned the app generates copy based on proven high conversion sales and marketing messages.

To be honest, I have mixed feelings about this conversion strategy. I believe it’s a double-edged sword. This is not to undermine facts or measurable data. But basing content creation on “proven content” means you’re likely using the same phrases, techniques, and styles already used by successful competitors. This serves as a springboard for ideas of course, so you know what’s already there. However, it can be an echo chamber. Marketers must not forget that execution must still be fresh. Otherwise, you’ll sound like everyone else.

Next, while it seems sustainable, it also sounds pretty safe. If your product or service is not that distinct, you must put in extra effort to create content that stands out. This applies to all aspects of the marketing strategy, not just writing content. It’s a crucial principle I learned after reading Purple Cow by Seth Godin (thanks for the book suggestion, Aaron!).

Depending on your product or service, Conversion.ai will generate the kind of copy consumers keep responding to. Based on the samples it generated, I’d say it really does come up with engaging copy, though it needs editing. If your business must rewrite product descriptions for an extensive inventory, Conversion.ai can cut the time in half. It can help automate description rewriting without hiring more writers, saving both money and time.

What did I learn? Conversion.ai can make writing and editing faster, yes, especially for low-level content focused on descriptions. It can also help test the strength of your ideas for more creative campaigns. However, it still takes solid direction and creativity to drive good marketing copy forward. In other words, it’s only as good as the writer using it. As a content creator, you cannot rely on it solely for creativity. But as an enhancer, it will significantly help push ideas forward, organize campaigns, and structure engaging copy effectively.

When you use this app, it offers many different features that help create and organize content. It also customizes copy for various media platforms. Beyond rewriting, it even has special brainstorming tools designed to help writers consider ideas from various angles. This can add more flavor and uniqueness to a campaign.

At the end of the day, what will set your copy apart is the strength of your ideas and your communication strategy. How you customize content for a business is still entirely up to you. AI writing tools like Conversion.ai can only help enhance your content and the ideas behind it. It’s a far cry from creating truly unique concepts for your campaign, but it definitely helps.

Conversion.ai Writing Features & How They Work

This AI writing app comes with plenty of “writing templates” customized to help you write with a specific framework or media platform in mind. Currently, Conversion.ai offers 39 different writing templates, or content building blocks. I’ll go into detail on how the main ones work.

For company or product descriptions, Conversion.ai has a “Start Here” step-by-step guide, which says users should alternate between the Product Description and Content Improver templates until they find the right mix. For this review, though, I just focused on how to use the templates for different writing projects. The app comes with video instructions as well as a live training call if you need further help using it.

Each template asks you to input a description of what you want to write about, limited to 600 characters. That description is the sole basis Jasper uses to generate ways to write or expand your content. It also helps you brainstorm and structure ideas for an article or campaign.

One issue: the 600-character limit can hinder pasting the full AI-generated content back into the template for further improvement. Jasper churns out marketing copy of more than 600 characters, so if you want to run the improved copy through again, you might have to do it in two batches. In any case, Jasper can generate as many improved writing samples as you need.
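If you end up batching copy like this often, a tiny helper script can do the splitting for you. Below is a minimal sketch in Python; the only detail taken from the app is its 600-character input limit, and the function name and splitting rule are my own hypothetical choices, not anything Conversion.ai provides.

    def chunk_text(text: str, limit: int = 600) -> list[str]:
        """Split copy into batches of at most `limit` characters,
        breaking on word boundaries so nothing is cut mid-word."""
        chunks, current = [], ""
        for word in text.split():
            # +1 accounts for the space that joins the words
            if current and len(current) + 1 + len(word) > limit:
                chunks.append(current)
                current = word
            else:
                current = f"{current} {word}" if current else word
        if current:
            chunks.append(current)
        return chunks

    # Paste each batch into the template separately.
    improved_copy = "..."  # the AI-generated copy you want to run again
    for i, batch in enumerate(chunk_text(improved_copy), 1):
        print(f"Batch {i}: {len(batch)} characters")

Each batch then fits back into the 600-character description box on its own.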

To give you a better idea, here are different Conversion.ai templates and how they work. This is going to take a while, so have your coffee ready.

Long-form Assistant

This is for longer articles, emails, scripts, and stories. It’s also suggested for writing books. It has two modes: a blank document where you can start typing freely, and an assistant workflow. The blank document also lets you access the rest of the writing templates, listed vertically. The long-form assistant workflow, on the other hand, is where the app asks you to describe the content you want to create. Consider this carefully. The better you can articulate your topic, the higher quality content Jasper can help generate.

For the example, suppose I want to write about what it took to finally release Zack Snyder’s 2021 Justice League. I want to write this feature article for my film and culture website.

Jasper asks for a maximum of three keywords. It’s optional, but I presume adding keywords will help Jasper generate more relevant content. Next, it prompts you to write a working title and start an introductory paragraph. Once you write your initial title, it will generate other title ideas.

For the title, I wrote: The Challenges of Releasing Zack Snyder’s Justice League 2021
It generated the following suggestions:

Jasper generated a long list of appropriate and catchy titles. I personally liked “The Tumultuous Journey of Zack Snyder’s Justice League 2021,” though I’d shorten it a bit. I also liked “The Darkest, Toughest Production in Film History,” which basically tells audiences how hard it was to produce this version of the film. Though it’s an exaggeration, it can be qualified in the article. The other suggestions with “Behind the Scenes” and “Towards a Better Script” were more specific angles I’d like to explore too. Generally, I think it’s a great tool for making title writing faster.

So far, so good, right? But when I got to content suggestions, it got a bit messy, mainly because the suggested content was too general. It also tended to suggest expansions that were incoherent with the main topic. This did not sound very promising. Moreover, the AI added bits of information such as dates and other details, and I had to check whether they were accurate. In a previous test I did (and even for this example), it got a date completely wrong. So I advise writers to always double-check the details suggested by Jasper.

Take a look below. The top paragraph was the one I wrote, while the ones below are suggested by Jasper.

Content suggestions from Jasper:

1) You might not be able to see Zack Snyder’s Justice League on the big screen, but you can catch it online in 2021. It was released digitally by Warner Bros. on November 27th 2020 and we’ve been watching it slowly climb the ranks of IMDB since then. The film had a troubled production between 2016 and 2017 with the script going through major changes before and during shooting. But now that all is said and done, what did people think? Let’s find out!

2) Justice League is a film that was released on November 17, 2017. The movie had been preceded by Batman v Superman: Dawn of Justice and Wonder Woman. It’s also the fifth installment in the DC Extended Universe (DCEU). Despite being generally well-received by critics, it was considered to be a disappointment at the box office. In this blog post, we’ll take you behind the scenes and show you what went wrong with Zack Snyder’s 2021 Justice League film production!

3) Director Zack Snyder’s Justice League film was one of the most anticipated films of 2017. Unfortunately, the film had a troubled production and was not released until 2021. That didn’t stop it from becoming one of the highest-grossing films of all time. What does this have to do with SEO? When you release content on your website, be sure to include keywords that are relevant to your business and industry!

At first glance, the introduction actually looks appealing to Justice League fans. Many people have been anticipating it on the big screen. But upon further reading, compared to the paragraph I wrote, the AI content had many problems. Besides being too general, it had issues with imprecise details. Though you can argue these problems are small and editable, fixing them adds time to your work. It can even derail you from focusing on your topic.

The first AI suggestion said the Snyder Cut was released digitally by Warner Bros. on November 27, 2020. Upon further research, I found no confirmation of this. There was a YouTube video “speculating” its release in November 2020, but from the looks of it, that did not pan out. Officially, Zack Snyder’s Justice League was released on March 18, 2021 on the HBO Max streaming platform, according to Rotten Tomatoes. And yes, it has been climbing the ranks since its digital release.

If you’re not careful about fact-checking, you might end up with misleading information. And frankly, I feel some of the other suggestions tend toward fluff. What you can do is choose the best suggestions and put them together into one coherent paragraph. The first suggestion ended the introduction with “But now that all is said and done, what did people think? Let’s find out!” While it’s something I want to touch on eventually, it is not the main focus of my introduction. The AI was not sensitive enough to sense this follow-up was out of place. I’d rather get to the details of the challenging production. If I use this suggestion, I’ll have to edit it into “Let’s take a look at what it took to deliver the highly anticipated Snyder Cut,” or something to that effect.

The second example, on the other hand, was quite a miss. It started by talking about the 2017 Justice League film. While it’s good to expound on how the project started, it got lost discussing the 2017 version. Worse, it did not transition smoothly into the 2021 Snyder Cut. If I read this introduction, I’d be confused into thinking the article was about the 2017 Justice League. Finally, it awkwardly ended the paragraph with “we’ll take you behind the scenes and show you what went wrong with Zack Snyder’s 2021 Justice League film production!” Besides the wordy sentence, it suddenly talks about the 2021 Justice League out of nowhere. I would not phrase the production’s challenges as something that went wrong; that’s unnecessary hype. It’s confusing, and just an example of bad writing. Again, while it can be fixed with editing, I feel I’m better off writing on my own.

Finally, the third example actually started okay. But then it started talking about SEO out of nowhere. I don’t know where that came from or why the AI did that, but I’ll count it as a totally unusable suggestion from the app. I reckon there might be more of those glitches if I generate more content suggestions from Jasper.

SIDEBAR FROM AARON: COUGH. SEO IS EVERYTHING. HOW DO I REEEEECH DEZ KIDZ

These were nuances the AI was not able to catch. Its output is probably even based on trending articles at the time, which tended toward hype and dated showbiz information. And though the suggestions were interesting, they were mostly too general or against the direction I needed. If the information isn’t accurate here, imagine what that would mean for health or political articles. But to be fair, it did generate other usable suggestions that needed less serious edits. Those are worth looking into.

However, by this time, I felt I was better off writing the feature without the app, at least for this example. I guess it’s really hit or miss. Even with so many content suggestions, you can still end up with inappropriate samples alongside the good ones. But at least you get a good title out of it. Personally, I’d rather go straight to researching on my own.

Framework Templates

Conversion.ai allows you to write copy based on marketing frameworks that professionals have used for years. It’s ideal for brands, products, and services you need to promote. This feature includes the following templates:

  • AIDA Framework: The AIDA template stands for Attention, Interest, Desire, and Action. This basically divides your copy into sections drawing attention from consumers and piquing their interest. The suggested copy also includes content that appeals to the consumer’s desire, then ends with a call to action.
  • PAS Framework: The PAS template generates copy that states the consumer’s Problem, Agitates it, and presents a Solution. It’s focused on how a particular product will help solve a consumer’s problem.
  • Before-After-Bridge Framework: Also known as the BAB framework, this copywriting structure revolves around getting a consumer from a bad place to a better one. It shows the before-and-after scenario of benefiting from a product.

For this example, I used the AIDA template for an imagined non-invasive weight loss service company. The new company promotes fitness and advocates against fad diets. It performs non-surgical weight loss procedures, such as wraps and thermomagnetic massages.
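As a quick aside, the AIDA structure itself is simple enough to lay out by hand before you ever open the tool. Here’s a minimal sketch of that idea in Python; the section names come from the framework, the sample answers come from my made-up weight loss company, and the helper function is purely hypothetical, not something Conversion.ai exposes.

    # Hypothetical helper: lay out a campaign brief in AIDA order.
    AIDA_SECTIONS = ("Attention", "Interest", "Desire", "Action")

    def aida_brief(points: dict[str, str]) -> str:
        """Render one line per AIDA section, in framework order."""
        return "\n".join(f"{name}: {points[name]}" for name in AIDA_SECTIONS)

    print(aida_brief({
        "Attention": "Tired of fad diets that never stick?",
        "Interest":  "Non-surgical procedures like wraps and thermomagnetic massages.",
        "Desire":    "Imagine losing weight safely, with no downtime.",
        "Action":    "Book a consultation today.",
    }))

Filling in those four slots yourself, even roughly, gives the tool (or a human editor) much more to work with than a bare description.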

Again, Jasper asks for a description. It also requires you to specify the tone of the copy; I entered “friendly” and “professional” in the tone box. See my input below.

Here’s the first suggestion from Jasper:

Based on this example, I’d say the AI-generated content is quite engaging. It tried to have a personal touch by letting the customer know the company is here to help. The writing empathizes with consumers who have a hard time losing weight. However, since this is for a new company, the line “We have helped thousands of people lose weight and get in shape” does not apply. As a writer, I’d simply remove it and replace it with a statement of intent to help more people lose weight and get in shape.

I pulled out at least 6 different content suggestions. From these, writers could take the best parts and edit them into one strong description. On its own, the content would still benefit from a lot of editing. Here are some issues you might encounter while generating copy suggestions:

  • Hard-Sell Copy. The sample content can be hard sell, even if you specify a professional tone of voice. It tends to use exclamation marks (!) in nearly every sample. I believe this depends on the product or service you are writing about. Certain products or services may sell more with the hard-sell approach, so the AI suggests this strategy. It may also simply be the “proven” way to communicate with consumers. But if you’re going against this direction, it’s a nuance the AI tool might miss. If your business or client specifically avoids exclamation marks in your copy, be ready to make the necessary edits.
  • Can Be Wordy, Long, and Redundant. In terms of style, here’s where you can’t rely on Jasper to write the entire thing. If you happen to input a long and detailed product description, the AI has a tendency to generate wordy variations of the copy. Some details also come out redundant. By copywriting standards, this needs tightening. Conciseness can be an issue, most notably if you’re not used to brevity. Thus, I believe this tool will best benefit writers and editors who have considerable experience crafting straightforward copy.

Product Description

The app comes with a special template for creating product descriptions. If you have a large inventory of product information for rewriting, this is the right tool to use. It even comes with an optional language output translation feature, which is available in other templates too.

However, the language feature is limited. I tried Thai, Italian, and Japanese, and it generated only a few suggestions, some mixed with English. Same with Punjabi and Vietnamese. In other templates, it just kept making suggestions in English. Filipino is not recognized by the AI at all, which likely means it cannot translate a number of other languages either. This feature obviously needs development. But since it’s not the main feature, I doubt they’ll invest much in improving it.

For this example, I used an imagined tire center that offers products and services throughout the U.S. I specifically wrote that it’s the second most affordable tire center in the country. I asked for a professional and witty tone. I’m not at all fluent in Spanish, but I selected Spanish in the output language box.

Below is the first suggested copy in Spanish:

When translated through Google, it reads:

“Don’t think twice, Adam’s Tire Center is your best option because it offers the largest range of products for cars and wagons. Join our satisfied customers and insure your tires with the Road Hazard Warranty service. Call or visit our sales center in Miami, FL, where we are honored to help you.”

Obviously, I can’t comment much on the accuracy of the translation, though I certainly have doubts about how well writing in another language can capture certain styles and tones. But right now, what concerns me more is the tendency to use superlative descriptions that might not accurately fit the brand. Things like “we offer the largest range of products” should probably be tweaked to “we offer a wide range of products.” If your tire center does not offer the largest inventory, you should not be writing that. It also assumed a specific location, which prompts the writer to include the actual business location (a good nudge). Again, the AI copy would benefit from fine-tuning to make it specific to your product or service.

Now, back to English. Here are three other content samples generated by Jasper:

The English AI-generated samples are not so bad. But in the last sample, there is a tendency toward hard-sell terms like “unmatched in quality” that you need to watch out for. You can take the best parts and put them into one solid brand description. But again, these tend to be wordy and long. It would help to run the descriptions through the Hemingway app or Grammarly to make them tight and concise.
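You don’t even need a full editing app for a first pass at wordiness. Here’s a rough, hypothetical illustration of the idea in Python; the 20-word threshold is an arbitrary choice of mine, and real tools like Hemingway check far more than sentence length.

    import re

    def flag_wordy_sentences(copy: str, max_words: int = 20) -> list[str]:
        """Return sentences that exceed a crude word-count threshold."""
        sentences = re.split(r"(?<=[.!?])\s+", copy.strip())
        return [s for s in sentences if len(s.split()) > max_words]

    ai_copy = "Your AI-generated tire center description goes here."
    for sentence in flag_wordy_sentences(ai_copy):
        print("Consider tightening:", sentence)

Anything the script flags is a candidate for splitting or trimming before the copy goes live.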

Content Improver

Using the Content Improver template helps you tweak the product or service descriptions you came up with. To show you how it works, I fed it the tire center description I had edited together from Jasper’s earlier suggestions.

For this example, I placed professional and witty under tone of voice.

Suggested content from Jasper:

Based on the sample suggestions, I’d say the first two can pretty much stand on their own. They are straightforward pieces of copy that address consumer needs with a direct call to action. Though the first one may sound a bit informal, it might fit the consumer demographic you are targeting. The last example gets a bit wordy but can be fixed with a couple of edits. The major issue is the phone number (555-5555), which the AI mistook for an address.

Marketing Angles

Besides churning out copy suggestions, Conversion.ai has a brainstorming tool. It basically takes your product or service and comes up with various campaign ideas to promote it. If you’re running out of concepts for promotion, Jasper leverages your product’s features and strengths. I appreciate that it tried to come up with benefit-driven copy based on the example I provided.

For this example, the product I used is gym management software. It helps gym owners manage activities and schedules and handle payments. The software aims to help gyms run more efficiently.

I personally find the following suggestions helpful in pushing the strengths of a product. I would definitely use this tool for brainstorming ideas. Here’s what Jasper generated:

Unique Value Propositions

Another intriguing feature is the unique value proposition (UVP) template. A UVP is essentially a clear statement that describes the benefit your product offers. It also captures how you address your customer’s needs and what distinguishes you from the competition.

If you have a product or service, the template claims to generate copy that describes your product’s unique advantage in a remarkable way. To test how this works, I used the previous example, the gym software. It came up with several statements that emphasized the product’s benefits. See Jasper’s suggestions below. Personally, I like the idea of software that helps me make more money with less work.

Feature Benefit

The feature benefit template comes up with a list of advantages offered by your product. For this example, the product is a camisole for plus-size women. You’ll see how it took the paragraph details and turned those features into bulleted benefits. It’s a useful tool if you want to break down your product’s unique selling points so you can further emphasize them in your campaign.

Persuasive Bullet Points

Another related function is the persuasive bullet points template, which is very similar to the feature benefit template. Personally, I think you’d use either this or the feature benefit template if you want to highlight product advantages in bullet points. The difference is that this template doesn’t categorize benefits as emotional or standard advantages.

Copy Headline and Sub-headline Templates

Conversion.ai also comes with copy headline and sub-headline templates. They claim the AI is “trained with formulas from the world’s best copywriters.” It’s also guaranteed to create “high-converting headlines” for businesses. At this point, the only way to know whether the copy actually converts is to see real results, and my review can’t prove any of that. But it would be interesting to hear about results from companies that have been using this software.

  • Perfect Headline: For this template, I used the earlier example that provides non-invasive weight loss services. You’ll see the product description I used, followed by the suggestions made by Jasper. I specifically liked the headline “Science-based approach to safe, effective fat loss.” It’s exactly the concept I was going for.

  • Website Sub-headline: I used the same product description for the sub-headline, along with the headline Jasper generated, “Science-based approach to safe, effective fat loss.” Of Jasper’s suggestions, I liked the last one, which emphasizes non-invasive slimming and tells consumers the procedure is safe. Though it tends to be wordy, I appreciate that it provides different ways you can get your message across.

Sentence Expander

Another interesting feature is the sentence expander. It claims to make your sentence longer and more creative. I guess it should help you get to another thought if you’ve hit writer’s block, though I was wary of what kind of suggestions it might give. When I tried it, it turned out to be just another way to rewrite your sentence in a longer, more detailed form.

In any case, see my sentence below.

Here’s what Jasper generated:

I’m actually not a fan of long-winded sentences. However, I do appreciate the extra details added by the AI. I could use these suggestions with further edits. But realistically, if I’m writing an article, I’d skip this and go directly to what I’m trying to say. That would save me time. If I want to talk about the negative psychological effects of social distancing, I’d write that point by point. My idea of expansion is moving an argument forward, not merely adding more details to what was already said.

Creative Story

Here’s an interesting template I was curious to try. I wondered how Jasper would develop a two-sentence plot. It’s fascinating to see how an AI that uses “proven high conversion data” would suggest story development.

For my example, I took a horror story plot inspired by the Bone Turner from a popular horror podcast called The Magnus Archives. See my plot description and the suggestions made by Jasper.

Story suggestions by Jasper:

I have to say, these are very interesting ideas for an introduction. It’s also funny that it used the name “Jonathon,” because the main character in The Magnus Archives is actually named “Jonathan.” I half suspect that was on purpose, since the AI probably knows the Bone Turner is from a popular online show.

In any case, I particularly liked the second suggestion. With some editing and fine-tuning, you could fix the details to fit the story in your head. On the other hand, I’m wary authors might rely too much on this to bridge plot gaps. While it’s amusing, it’s more compelling to read plot twists and resolutions that are not forced. At this stage, I’m still not convinced the AI can make a story without contrived plot twists.

Email Subject Lines

Besides creative writing tools, Conversion.ai also has templates for email marketing. This feature is made for businesses or individuals who want to promote products and services via email. The app claims to come up with catchy subject lines that draw consumers to open your email. In this example, I used an imaginary cake shop that delivers throughout LA. Jasper came up with a long list of creative subject lines that were spot on for the example. Since I am a cake person, I’d likely open this kind of email.

Personal and Company Bio

You can also generate creative personal and company bios through Jasper. If you’re running a personal blog or website, Jasper generates personal bios in first-person or third-person POV, whichever you are more comfortable with. I’m actually pleased with what the AI suggested. It’s a good start, because I find it hard to write about myself.

The example below is not me, of course. I made up Jessica Ackerman as the founder of Mad Cakes in LA.

Here’s what Jasper generated:

It does sound like a personalized bio, especially with the detail about cuddling with cats and dogs. Again, I’d edit it to be more particular about details. Other than that, I think it’s a good tool to use.

Next, Jasper also generates company bios that sound professional. I put in three sentences about a company that boosts website conversions for businesses. I was surprised how long the suggestions were. The AI also invented the names of clients the company has supposedly serviced (TripAdvisor, Yelp, etc.). Again, it’s important to edit or remove particular details like this; otherwise, you might publish copy with misleading claims.

Suggestions from Jasper:

Real Estate Listing – Residential

You can use this template to write creative, descriptive residential listings. It’s helpful for real estate agents and people who are planning to sell their property. The following shows information about a house for sale, followed by listing suggestions from Jasper.

Suggestions from Jasper:

It’s interesting how the suggested content appeals to the consumer’s idea of a perfect home. It tries to paint a picture of affluent living just based on the golf course description I supplied. But again, for accuracy, these added details should be edited by the writer.

Templates for Specific Online Platforms

Besides articles and product or brand descriptions, Conversion.ai provides special writing features for online platforms, including Facebook, Instagram, YouTube, Google, and Amazon. The AI’s content suggestions are based on posts and ads that have generated high traffic on these platforms. I think this is a good tool to use if you want an edge over what already sells.

  • Facebook Ad Headline: Makes catchy headlines for FB ads and claims to increase the chances of clicks that lead to sales.
  • Facebook Ad Primary Text: Claims to generate high converting copy for FB ad’s primary text section.

For the Facebook ad headline, my example is a cake shop that delivers a wide assortment of cakes in Los Angeles. It specifically mentions delivering cakes “within an hour or your money back.” Here’s the example and Jasper’s suggested content.

AI ad sample headlines:

I must say these sound like fun and friendly FB headlines. I personally would like a last-minute dessert, and if I don’t have time to pick up a cake, I’d certainly like one delivered. I’m just not sure about “Get 500 Instagram Followers”; that suggestion is out of place. Overall, I’d use this tool for a fresh and exciting FB headline.

Here’s the AI sample for Facebook ad primary text:

Based on the FB text sample, the AI instantly suggested giving away free cake. Most of the generated samples headed in this direction. It didn’t just generate engaging copy; it likely showed what other cake shops do to draw more customers. I think promos and free cake are a great marketing strategy. I also like that it suggested catchy hashtags. But again, I’d fix the wordy, adjective-ridden descriptions. With a little editing, the samples read more smoothly. Other than that, it’s a fast way to come up with social media copy.

Photo Post Captions for Instagram

You can use the app for a company or store’s IG account. Here are some samples based on a Mad Cakes Black Chocolate Indulgence photo. If you need ideas for your IG post, this tool can suggest copy that’s simple and straightforward for the platform. Depending on your product or service, it suggests content that typically targets your customer base.

Video Writing Templates for YouTube

Next, Conversion.ai offers specialized templates for videos, specifically for platforms such as YouTube, though I think you can use the content similarly if you’re posting on other video sites. The suggestions are based on content with high traffic on YouTube. The video tools include the following features:

Video Topic Ideas: For brainstorming video content concepts that rank well on YouTube. For example, your initial topic is baking a homemade cake. It’s a useful tool for finding out what people are actually interested in, and it gives you an idea of what to work on right off the bat. Here are the AI’s suggestions; it mentions cake-baking video concepts many people look for:

Video Script Outline: Helps make script outlines for any topic. However, this works best for how-to and listicle-type videos, not ones with a narrative. The example below is for how to spot the aurora borealis, or Northern Lights. From the AI suggestions, you can choose the best strategies and come up with your own outline. I noticed many suggestions can be too general, aside from the more specific ones I posted below. It’s still best to do your own research to make your video content more nuanced and unique; otherwise, you may just parrot what other content creators have already done.

Video Titles: Like the other templates, there’s also a video title feature. As an example, many YouTube users like to create content about shows or films. Suppose you want to make a feature about the anime Attack on Titan. The AI actually came up with pretty awesome titles and topics you can start researching. Since these are based on high-traffic fan searches, what you can do is watch what’s already out there. That will help you come up with unique insights about the show that haven’t been tackled. Again, try to focus on what would set your content apart from what’s already there.

Blog Post Templates

Conversion.ai provides templates that help you conceptualize blog posts for your brands. It has tools to help you brainstorm topic ideas and outline your content. These suggestions are all based on high-ranking topics on Google. It also comes with features that help compose blog post introductions and conclusions.

  • Blog Post Topic Ideas
  • Blog Post Outline
  • Blog Post Intro Paragraph
  • Blog Post Conclusion Paragraph

For the example, let’s focus on the topic template. I used the earlier example, Best Shape, which is an imaginary non-invasive weight loss service. See the AI’s suggestions below.

Jasper’s results show topics that trend around non-invasive weight loss methods. It’s always good to know the trending topics around your market, so for blog topic ideas, I think Conversion.ai will be a really useful tool. It’s also worth using to structure your outline, especially if you’re having trouble with organization.

That said, after getting topic ideas, you can start writing your post without the app. You won’t need it, especially if you already have an idea of what to write. It’s still better to do proper research than rely on the app to add information to your post. As you’ve noticed, it has a tendency to supply incorrect information, which you must diligently edit.

Would I Recommend This Software?

After crash-testing Conversion.ai, I would recommend this tool to agencies or individuals that deal with extensive online copywriting and product rewrites. They will benefit the most by cutting down the time-consuming process of writing product descriptions. I would also recommend it for businesses that run social media campaigns, including Google and Amazon ads. It will help generate and organize copy ideas faster, especially if you have a lot of products and services to promote. And because the AI suggestions are based on high-ranking topics, you get a better idea of what your client base is looking for. It can also enhance messaging concepts and help brainstorm new campaign ideas for a product or brand. Just remember to always edit the content suggestions.

On the other hand, I would not recommend this app for long-form writing. I do not think it is ideal for any writing that requires a lot of research. Because the AI suggestions tend toward incorrect information, you’re better off researching current data on your own. It’s an interesting tool for writing stories, but I also worry authors might become too reliant on the app for plot ideas. There is a difference between carefully worded prose and the long-winded sentences composed by this app. Human writing is still more precise in its expression, which the AI has yet to learn.

While it’s a good tool to have, the bottom line is, you still need to edit your content. It will help you structure your outline and compose your post. However, the impetus for writing and the direction it will take is still on you, the writer. My verdict? AI writing technology won’t fully replace humans anytime soon.

Update: This article was updated in May of 2022 to reflect Conversion.ai's AI writing bot name changed from Jarvis to Jasper. No other changes have been made since the original publication of this article.
