Updated: January 2nd 2024

CTR (click-through rate) has received a lot of attention in recent months, as more studies have emerged that both support and dispute the hypothesis that it is a ranking factor.

On the public record, Google has stated they don’t use pogo-sticking as a ranking factor, which makes sense given how the modern user behaves on desktop and mobile.

I have my own conclusions on CTR as a ranking factor for individual websites, and I’ll share my personal thoughts in the conclusion – but first I want to assess the case for and against, based on anecdotal evidence and studies generated by SEOs, as well as Google studies, patents, and research.

CTR (Click Through Rate)

A lot of recent fervor and backing for CTR as a ranking factor initially came from a Google paper, titled Incorporating Clicks, Attention, and Satisfaction into a Search Engine Result Page Evaluation Model.

The document outlines two initial challenges that this hypothesis faces, and these are down to the introduction of new SERP features by Google themselves and standard user behavior.

First, the presence of such elements on a search engine result page (SERP) may lead to the absence of clicks, which is, however, not related to dissatisfaction, so-called “good abandonments.”

This refers to featured snippets and other special content result blocks leading to a “zero-click” search.

Second, the non-linear layout and visual difference of SERP items may lead to non-trivial patterns of user attention, which is not captured by existing evaluation metrics.

This refers to the fact that, as users, we don’t go through search results in a linear fashion and often open multiple tabs, meaning the behavior is erratic and unpredictable.

The paper also lists four core keywords immediately after the abstract – the second of which, click models, is sketched in code below:

  • User behavior
  • Click models
  • Mouse movement
  • Good abandonment
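To make “click models” concrete, here is a minimal sketch of the classic cascade model, one of the simpler models in this line of research: the user scans results top to bottom and clicks the first attractive one. The sessions and the helper function are illustrative assumptions, not code or data from the Google paper.

```python
# Cascade click model: assume the user examines results top-to-bottom and
# stops at the first click. Everything above a click was examined and
# skipped; everything below was never seen. All data here is fabricated.
from collections import defaultdict

def estimate_attractiveness(sessions):
    """sessions: (ranked doc-id list, index of the click or None) pairs."""
    examined = defaultdict(int)
    clicked = defaultdict(int)
    for ranking, click_idx in sessions:
        last = click_idx if click_idx is not None else len(ranking) - 1
        for pos in range(last + 1):
            examined[ranking[pos]] += 1
            if pos == click_idx:
                clicked[ranking[pos]] += 1
    return {doc: clicked[doc] / examined[doc] for doc in examined}

sessions = [
    (["a", "b", "c"], 1),     # skipped a, clicked b
    (["a", "c", "b"], 0),     # clicked a immediately
    (["b", "a", "c"], None),  # abandonment: no click at all
]
print(estimate_attractiveness(sessions))  # {'a': 0.33..., 'b': 0.5, 'c': 0.0}
```

Note the final session: a naive cascade model scores abandonment as dissatisfaction at every position, which is precisely the “good abandonment” problem the paper sets out to address.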

The study also claimed that 42% of quality raters clicked on a SERP result only to confirm that the information initially presented on the SERP (in summary, through meta information) was correct – meaning that additional clicks on results do not correlate with a “poor” set of initial results.

Notably, in assessing satisfaction the paper does not mention “dwell time”, bounce rate, or average time on site.

This then brings us on to a second Google research paper, Learning to Rank with Selection Bias in Personal Search, which for me contains two of the most important statements on CTR within its first paragraph. These being:

Click-through data has proven to be a critical resource for improving search ranking quality.

And…

Though a large amount of click data can be easily collected by search engines, various biases make it difficult to fully leverage this type of data.

So is Google able to collect this information? Yes. Is Google able to apply this information across the entire web corpus in a meaningful and scalable fashion? No.
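The bias problem the second paper tackles can be illustrated with inverse propensity weighting, the standard counter-measure in the learning-to-rank literature. The propensity figures below are invented for illustration; the paper’s actual estimation procedure is more involved.

```python
# Position bias: a click at position 4 says more about relevance than a
# click at position 1, because position 1 is examined far more often.
# Inverse propensity weighting up-weights clicks at rarely-examined
# positions so training data isn't dominated by whatever already ranks well.
propensity = {1: 0.68, 2: 0.42, 3: 0.30, 4: 0.22, 5: 0.17}  # fabricated

clicks = [  # (query, document, position at which it was clicked)
    ("best laptop", "doc_a", 1),
    ("best laptop", "doc_b", 4),
]

for query, doc, pos in clicks:
    weight = 1.0 / propensity[pos]
    print(f"{query} / {doc}: click at #{pos} -> training weight {weight:.2f}")
```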

A Google Ranking Engineer Perspective

Another angle comes from SMX West 2016, where Google engineer Paul Haahr talked about how Google works from the perspective of an engineer. The slides, unfortunately, are no longer publicly available through SlideShare; however, the talk is available via YouTube.

There are some key slides that Paul shares which offer insight into the question of CTR as a ranking factor:

Google is transparent that it looks for changes in click patterns in live traffic experiments, with the caveat that this may be harder than you might expect – which makes sense if you look at the previously mentioned research papers.

It’s worth highlighting that, as Paul talks, he refers to click patterns as one of the metrics used to determine the success rate of an A/B test version – not as experiments run separately from the A/B tests.

Caveat: “Interpreting live experiments is actually a really challenging thing”

In Haahr’s own words, a naive interpretation of Algorithm B putting P₂ ahead of P₁ and receiving no click would be that the change was “bad” – yet there are a number of reasons why there could have been no click in this instance.

The Q&A hosted by Danny Sullivan with Paul Haahr and Gary Illyes from the same conference went on to reiterate how difficult it is to interpret the findings from live experiments.

There are so many experiments that we’ve done that have very misleading live metrics and we really need to dig into them.

Haahr also gave an example of a long-running Google experiment where they swapped results #2 and #4, using 0.2% of search engine users as the data set. From the analysis, they found that in the B test, where #4 sat in the place of #2, more users clicked on #1.
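Mechanically, such a swap test is straightforward; the hard part is the interpretation. Here is a hypothetical sketch of the serving side – the bucket hashing, holdout size, and ranking data are all my own illustrative assumptions:

```python
# Sketch of a live swap experiment: a tiny slice of users sees positions
# #2 and #4 exchanged, and click behavior is compared between buckets.
import hashlib

def bucket(user_id, experiment="swap-2-4", holdout=0.002):
    """Deterministically assign ~0.2% of users to the treatment bucket."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "treatment" if int(digest, 16) % 10_000 < holdout * 10_000 else "control"

def serve(ranking, user_id):
    if bucket(user_id) == "treatment":
        ranking = ranking[:]  # copy so the control ranking is untouched
        ranking[1], ranking[3] = ranking[3], ranking[1]  # swap #2 and #4
    return ranking

print(serve(["r1", "r2", "r3", "r4", "r5"], user_id="user-42"))
```

As Haahr’s example shows, the outcome can be counter-intuitive: the swap moved clicks to #1, not between the swapped results, which is exactly why naive readings of live metrics mislead.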

Haahr, also during the Q&A, revealed that #10 typically gets more clicks than positions #8 and #9 put together (but still fewer than #7).

AJ Kohn, Article 2015

In 2015, AJ Kohn set out to answer the same question: is click-through rate a ranking signal? For me, it’s still a great post four years later.

In the post, AJ also addresses the question from a Bing standpoint, with a quote from Duane Forrester – taken from an interview with Eric Enge in 2011 – on whether or not Bing uses CTR as a ranking signal.

Earlier Research Papers & Studies

The above papers and Paul Haahr’s SMX talk are from the 2016/2017 era of search, and CTR as a ranking input predates this. There are a large number of research papers, studies, and Google patents that touch upon the topic, but for me the important ones to be aware of are:

Thorsten Joachims, 2002

Joachims, a researcher associated with Cornell University, shows just how old research into CTR is. His paper Optimizing Search Engines using Clickthrough Data makes two key points (a short sketch of the idea follows the quotes):

  • The key insight is that such clickthrough data can provide training data in the form of relative preferences.
  • …there is a dependence between the links presented to the user and those for which the system receives feedback.
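Joachims’ insight is that a click is not an absolute relevance label but a relative preference: the clicked document beats the documents skipped above it. Here is a toy version of that extraction; the helper is illustrative, not code from the paper.

```python
# Joachims' "Click > Skip Above" heuristic: a clicked result is preferred
# over every unclicked result ranked above it. The resulting pairs become
# training data for a pairwise learning-to-rank model (a ranking SVM in
# the original paper).
def preferences_from_clicks(ranking, clicked_positions):
    prefs = []
    clicked = set(clicked_positions)
    for pos in clicked_positions:
        for above in range(pos):
            if above not in clicked:
                prefs.append((ranking[pos], ranking[above]))  # (better, worse)
    return prefs

# The user saw a, b, c, d and clicked only the third result:
print(preferences_from_clicks(["a", "b", "c", "d"], [2]))
# [('c', 'a'), ('c', 'b')]: c is preferred over the skipped a and b,
# but nothing is learned about d, which may never have been examined.
```

The second quote falls straight out of the example: no feedback is ever collected for documents the ranking never surfaced.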

Filip Radlinski & Thorsten Joachims, 2005

From 2005, Joachims and Radlinski, both of Cornell, published the paper Evaluating the Robustness of Learning from Implicit Feedback – in essence, machine learning from simulated CTR.

The goal of this paper is to understand when CTR data is useful and when CTR data is biased and less useful.

This paper is especially interesting because it introduces the possibility of modeling user behavior and using that data instead of actual user behavior. It also brings in reinforcement learning, a branch of machine learning.

Filip Radlinski, Thorsten Joachims & Robert Kleinberg, 2008

Again featuring Radlinski and Joachims, this time joined by Kleinberg, all still of Cornell, this paper (Learning Diverse Rankings with Multi-Armed Bandits) introduced the notion of user intent and the importance of showing a diversified set of results to satisfy the most users.

And satisfying the most users means understanding which results lead to the fewest clicks back to the search engine – also known as abandonment, a term well used in Google papers on the subject.

Satisfying all users means showing different kinds of webpages. The user intent for many search queries is different – meaning SERP diversification and non-linear SERPs are required.

A conclusion from this paper being:

We expect such an algorithm to perform best when few documents are prone to radical shifts in popularity.

And for a large proportion of the web corpus this will ring true, as not a lot changes in terms of user intent behind established topics and queries.
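For the curious, here is a minimal, assumption-heavy sketch of the ranked-bandits idea: one bandit per result slot, each learning which document earns the first click given what the slots above already chose. An epsilon-greedy learner stands in for the paper’s bandit algorithm.

```python
# Ranked bandits sketch: one epsilon-greedy bandit per SERP slot. A document
# earns credit only when it takes the FIRST click, so once slot 1 covers the
# majority intent, lower slots are pushed toward other intents: the "diverse
# rankings" of the paper's title. Illustrative, not the paper's exact algorithm.
import random

class SlotBandit:
    def __init__(self, docs, epsilon=0.1):
        self.docs, self.epsilon = docs, epsilon
        self.pulls = {d: 0 for d in docs}
        self.wins = {d: 0 for d in docs}

    def choose(self, excluded):
        candidates = [d for d in self.docs if d not in excluded]
        if random.random() < self.epsilon:
            return random.choice(candidates)  # explore
        return max(candidates, key=lambda d: self.wins[d] / (self.pulls[d] or 1))

    def update(self, doc, first_click):
        self.pulls[doc] += 1
        self.wins[doc] += int(first_click)

docs = ["news-article", "brand-site", "wiki-page", "video"]
bandits = [SlotBandit(docs) for _ in range(3)]  # a three-slot SERP

ranking = []
for bandit in bandits:  # each slot excludes what the slots above picked
    ranking.append(bandit.choose(excluded=set(ranking)))
print(ranking)
```

Run over many simulated sessions with mixed intents, the top slot converges on the majority interpretation while the lower slots mop up the minority intents.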

CTR Studies, Texts & Experiments

One of my biggest recommendations to anyone starting out in SEO is to read as much as possible on a topic, including the arguments for and against. For me, the prominent studies and experiments on CTR as a ranking factor are:

Ricardo Baeza-Yates, 1999

Ricardo Baeza-Yates (Yahoo!) says in his book, Modern Information Retrieval, that click-through data can be a clearer and more definitive signal than PageRank data in determining the quality of a result.

Yahoo even released a patent, filed in 2006, discussing substituting CTR data for PageRank data.

Bartosz Góralewicz, 2015

Via Search Engine Land, Bartosz Góralewicz shared the results of an experiment he conducted that suggests click-through rate (CTR) from search is not a ranking factor.

Cesarino Morellato & Andrea Scarpetta, 2015

Again via Search Engine Land, Italian SEO professionals Morellato and Scarpetta performed an experiment – albeit with some biases – that concluded that CTR from search is a ranking factor.

Natzir Turrado, 2015

In his study titled CTR affects SEO, but why it cannot be manipulated artificially (written in Spanish, so you will need to use Google Translate), Turrado explores how CTR does indeed affect SEO but cannot be artificially manipulated.

Eric Enge, 2016

In 2016, Eric published a study titled Why CTR is(n’t) a ranking factor on the then Stone Temple blog, now Perficient.

To summarize, Google uses controlled click-through rate testing to validate the quality of their search results.

Eric’s study also correlates with a webinar-style Q&A he held with Andrey Lipattsev, Search Quality Senior Strategist at Google, Rand Fishkin, and Ammon Johns.

It’s also worth noting that Rand Fishkin has run experiments and found correlations between CTR and rank changes.

Nathan Veenstra, 2016

In Nathan’s post, he addresses both CTR and bounce rate and their merits as ranking factors.

The post, Why bounce rate is not an SEO factor, is in Dutch (original title Waarom bounce rate geen SEO-factor is), but Google translate works well.

What Has Google Had To Say?

I don’t feel any analysis post like this would be complete without including insights from John Mueller and Gary Illyes, who have the unenviable job of putting up with SEO questions 24/7/365.

John Mueller

At BrightonSEO in April 2019, John had a live Q&A that Distilled has since analyzed and looked to read between the lines of his answers.

The question posed to John was: Surely at that point, John, you would start using signals from users, right? You would start looking at which results are clicked through most frequently, would you start looking at stuff like that at that point?

To which John replied:

I don’t think we would use that for direct ranking like that. We use signals like that to analyze the algorithms in general, because across a million different search queries we can figure out like which one tends to be more correct or not, depending on where people click. But for one specific query for like a handful of pages, it can go in so many different directions.

Which fits in with the comments made by Paul Haahr at SMX West 2016 in the Q&A, as well as the thoughts and conclusions of Bartosz Góralewicz and Eric Enge.

Gary Illyes

At SMX Advanced in 2015, Gary Illyes spoke about how Google uses clicks made in the search results in two different ways – for evaluation and for experimentation – but not for ranking.

He does say they see those who are trying to induce noise into the clicks and for this reason they know using those types of clicks for ranking would not be good.

In other words, CTR would be too easily manipulated for it to be used for ranking purposes. This is interesting, as it contradicts the studies run by Rand Fishkin, where results were influenced through an additional 600–700 clicks.

However, a month after Gary Illyes made these statements, Google released a patent showing how they may rank pages in part based upon user feedback (clicks) in response to rankings for those pages.

The patent titled Modifying search result ranking based on a temporal element of user feedback seems to correlate with Paul Haahr’s comments a year later – however, for a full breakdown of this patent I recommend Bill Slawski’s article.

Is CTR A Ranking Factor?

(In my opinion) CTR is not a ranking factor.

However, I do believe that CTR data is used as part of the wider algorithm assessment, and for high-focus queries (those that drive very high levels of traffic), it’s likely that CTR gets more weighting.

Logically, basic machine learning requires:

  • An input
  • An algorithm (processing)
  • An evaluation

However, we have very little insight into how this operates across the web corpus as a whole, or whether it runs at any greater scale.
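To make the distinction concrete, here is a minimal sketch of clicks used to evaluate an algorithm change rather than to rank an individual page, which is the role Google’s statements consistently describe. The metric choice and numbers are my own illustrative assumptions.

```python
# Clicks as EVALUATION, not as a per-page ranking input: compare an
# aggregate click metric between control and treatment buckets, then keep
# or revert the algorithm change. No individual page is boosted by its
# own clicks.
def mean_reciprocal_rank(first_click_positions):
    """first_click_positions: 1-based position of the first click per session."""
    return sum(1.0 / p for p in first_click_positions) / len(first_click_positions)

control = [1, 2, 1, 4, 1, 3]    # fabricated first-click positions (algorithm A)
treatment = [1, 1, 2, 1, 3, 1]  # fabricated first-click positions (algorithm B)

lift = mean_reciprocal_rank(treatment) - mean_reciprocal_rank(control)
print(f"MRR lift: {lift:+.3f}")  # positive lift is evidence the change helped
```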

In my opinion, CTR combines with other determinations and efforts from Google, meaning that it can’t be a standalone ranking factor.

A good example of this would be a high competition (ergo high search volume due to high PPC attention) search phrase with a small number of dominant interpretations.

As the exact search intent cannot be defined, Google needs to cater to a wide variety of results, including both commercial and non-commercial webpages, and special content result blocks such as the knowledge graph panel, featured snippets, the AJAX question loader, and related searches.

As a result, “more clicks = higher rankings” can’t hold as a linear rule, because a lot of SERPs aren’t linear in output, so a non-commercial result at #2 may get fewer clicks than a commercial result at #5.

In this instance, CTR couldn’t directly influence the rankings of those individual results. But, combined with the Search Quality Raters and the assessment of the algorithm around that query (and related ones), it could cause Google to reconsider what the most common dominant interpretation is (i.e. it may be commercial rather than non-commercial), so the SERPs would change accordingly, either permanently or as a test.

Counter Argument: Google’s Navboost

Navboost utilizes search signals predominantly based on user interactions, specifically clicks. This Google ranking component is designed to improve search outcomes for navigational queries: its core objective is to decipher the intent of such queries, ensuring the delivery of pertinent and precise results for navigation-focused searches.

In a recent statement, Google’s Pandu Nayak offered insights into the intricate workings of the company’s search algorithms, highlighting the multifaceted approach to ranking tens of thousands of documents. The revelation came as part of trial testimony, where Nayak explained the complex interplay of various signals that Google employs to refine its search results.

According to Nayak, the process starts with evaluating a vast array of documents using a combination of core signals, including topicality, page rank, and localization. These signals work in unison to assess and score the content, narrowing it down to the most relevant few hundred documents for any given query.

Navboost is described as a specialized component that enhances navigation-related searches.
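Public descriptions stop short of implementation detail, but the general shape of a click-informed boost for a navigational query might look like this hypothetical sketch – the blending weight, scores, and click-share figures are all invented, and the testimony does not specify any formula:

```python
# Hypothetical Navboost-style re-scoring for a navigational query: blend a
# page's base relevance score with its historical share of clicks for that
# query. Weights and data are invented for illustration only.
def navigational_boost(base_scores, click_share, weight=0.3):
    return {
        doc: (1 - weight) * score + weight * click_share.get(doc, 0.0)
        for doc, score in base_scores.items()
    }

base_scores = {"brand.com": 0.71, "wiki/brand": 0.74, "review-site": 0.69}
click_share = {"brand.com": 0.85, "wiki/brand": 0.10}  # fabricated history

print(navigational_boost(base_scores, click_share))
# brand.com overtakes wiki/brand once historical clicks are blended in.
```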

So could you describe Navboost (which uses click data) as a ranking signal for some queries? Yes. But is that enough to say, as a broad sweeping statement, that CTR is a ranking signal? In my opinion, probably not.

Should I Optimise For CTR?

Yes, but not with the goal of it “increasing your rankings”. Making your results attractive for clicks within SERPs is a given, and I feel this has got lost in translation in recent years, with people selling it as a ranking silver bullet.

We know that studies have shown 42% of users click to reaffirm statements made in the summary, so let’s make title tags and meta descriptions impactful and offer user value – and not worry about a zero-click search being bad, as Google has made a lot of effort to understand “good abandonment” and what that looks like in terms of user behavior.

Please let me know in the comments below if you agree with my conclusion, or if you feel I’ve missed research, studies or papers to include in this analysis! Always happy to update.

 
