Navboost Browser Bias – Why CTR Manipulation In Chrome Matters

For years, Google employees have denied that the Chrome browser itself has any impact on page rankings. Matt Cutts went so far as to say that ‘Google’s organic algorithm does not use any Google Chrome data’.

However, with the recent leaks about Google Navboost, the ranking system the search giant uses to help order results, we know this claim to be either misleading or outright false. Several metrics in the leaked documentation refer directly to the Chrome browser.

Does that mean that CTR manipulation in Google Chrome somehow carries more weight? And what does this imply as far as topics such as antitrust are concerned?

We’ll dive into these questions and many more as we examine the potential impact of Navboost browser bias!

What is Google Navboost?

In brief, Google Navboost is one of the main systems Google Search uses to determine ranking order. It takes a site’s click and interaction history from the past 13 months and runs it through a series of analyses. Based on factors ranging from user clicks to site authority to page titles, along with dozens of other data points, Navboost pushes sites up and down the rankings.

Navboost was first mentioned during the Google antitrust trial but has since been exposed far more thoroughly by the Google Navboost API leak, which we covered in May 2024.

What Evidence Exists for Any Navboost Browser Bias?

As Rand Fishkin pointed out in his initial report about the leak, the informant he was speaking to made this startling claim:

‘Google utilizes cookie history, logged-in Chrome data, and pattern detection (referred to in the leak as “unsquashed” clicks versus “squashed” clicks) as effective means for fighting manual & automated click spam.’

And when we dug into the code, we came to the same conclusion. There are several references to Chrome within the leaked API docs, which Google has confirmed are authentic. Some of them refer to the Chromecast (which is another kettle of fish entirely), but others reference the Chrome browser itself.

The main references include:

chromeInTotal (type: number(), default: nil) – Site-level Chrome views.

This is present in the NSR Data module.

topUrl (type: list(GoogleApi.ContentWarehouse.V1.Model.QualitySitemapTopURL.t), default: nil) – A list of top urls with highest two_level_score, i.e., chrome_trans_clicks.

This reference is in the Sitemap Target Group module.

uniqueChromeViews (type: integer(), default: nil) – Number of unique visits in Chrome.

And this appears in the Video Content Search module.

Additionally, the leaked API docs contain several references to Chromium. Most of them are fairly mundane, but others refer directly to trace data captured from the browser:

chromiumTrace (type: GoogleApi.ContentWarehouse.V1.Model.HtmlrenderWebkitHeadlessProtoChromiumTrace.t, default: nil) – Contains chromium trace generated during page rendering. This is present if a chromium_trace_config was provided in the request.

This appears in the Webkit Proto Render Response section.

chromiumTrace (type: String.t, default: nil) – Populated if Chromium traces are requested in JSON format.

chromiumTraceProto (type: String.t, default: nil) – Populated if Chromium traces are requested in PROTO format.

These both appear in the Proto Chromium Trace docs.

No direct references to Blink, the Chrome rendering engine, were found. We tried.

The exact weight of each of these attributes is unknown; in fact, weighting isn’t covered anywhere in the API leak. But this could be considered evidence that clicks in Chrome, and therefore CTR manipulation within anything that looks like an authenticated Chrome browser, are treated differently than clicks from other browsers.

Why Is a Possible Chrome Bias in Navboost Important?

There are two reasons why a Chrome bias in Navboost might become a central theme in the examination of the API leaks.

The first is antitrust. Google is already under investigation for using its power to manipulate the search and advertising markets. If it came to light that using the Chrome browser gave some websites an edge in search results, a lot of people would have something to say about it.

The other browser makers, of course, should be livid if the use of Chrome has a positive bias on Google Search. With Chrome holding over 65% of the market as of mid-2024, and its closest competitor, Safari, hovering right around 18%, it is a hammer blow to find out that Chrome clicks get special treatment within the ranking algorithm.

Google has some thin layer of protection here because the weights of each attribute’s impact on search haven’t been revealed. But expect legal discovery on this if one of the other browsers decides to take action.

The second reason that Navboost’s Chrome bias matters sits within the field of CTR manipulation. If clicks from Chrome matter more (or less for some reason), or can be otherwise manipulated to change page rank order, the field of CTR manipulation needs to consider the impact.

It’s likely to take months of effort and experimentation to discover the exact relationship between browser type and the various ways that sites are positively and negatively impacted by Chrome vs. non-Chrome browsers. But there are a few things we can do right now to gauge the short-term impact of these new discoveries.


Scenarios to Test Navboost Browser Bias Using Typical CTR Manipulation Techniques

It might seem like the CTR test for a Chrome bias in Navboost is obvious: Just use more Chrome browsers and see if that helps or hurts the results. But that doesn’t really scratch the surface. We need to account for several different scenarios in order to see which criteria Google really cares about.

The most scientific way to compare two techniques is A/B testing: two similar sites targeting the same SEO key phrases. They can’t be word-for-word identical, but they should be as close as possible without crossing into outright plagiarism. The only difference between the sites will be the browsers that visit them, as detailed below.
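As a sketch of how the comparison step itself could be scored, here is a minimal permutation test on daily SERP positions for the two sites. All the data below is hypothetical, and this only illustrates the statistics of the A/B comparison, not anything about Google’s internals:

```python
import random
import statistics

def permutation_test(ranks_a, ranks_b, n_resamples=10_000, seed=42):
    """Two-sided permutation test on the difference in mean rank position.

    ranks_a / ranks_b: daily SERP positions for the two test sites
    (lower is better). Returns the observed difference and a p-value.
    """
    rng = random.Random(seed)
    observed = statistics.mean(ranks_a) - statistics.mean(ranks_b)
    pooled = list(ranks_a) + list(ranks_b)
    n_a = len(ranks_a)
    extreme = 0
    for _ in range(n_resamples):
        rng.shuffle(pooled)
        diff = statistics.mean(pooled[:n_a]) - statistics.mean(pooled[n_a:])
        if abs(diff) >= abs(observed):
            extreme += 1
    return observed, extreme / n_resamples

# Hypothetical daily rank positions over two weeks:
site_a = [8, 7, 7, 6, 6, 5, 5, 5, 4, 4, 4, 3, 3, 3]   # visited via Chrome
site_b = [8, 8, 7, 7, 7, 7, 6, 6, 6, 6, 5, 5, 5, 5]   # visited via Firefox

diff, p = permutation_test(site_a, site_b)
```

A negative `diff` means the Chrome-visited site held better (lower) positions on average; the p-value tells you whether that gap is distinguishable from noise.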

The first test is using a Chrome user agent vs. actually using Chrome. This scenario shows how deep the integration needs to be in order to trigger the Chrome bias detailed in the API. Forging a Chrome user agent is trivial and can be done en masse with virtual browsers; actually running Chrome is far more CPU- and memory-intensive.
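To make that distinction concrete, here is a hedged sketch of the two setups. The user-agent string and profile path are illustrative placeholders; the command-line flags shown are standard Chrome switches:

```python
# Two ways to "be Chrome": wearing the user-agent header vs. driving a
# real Chrome process. The UA string and profile path below are
# placeholders for illustration only.

CHROME_UA = ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
             "AppleWebKit/537.36 (KHTML, like Gecko) "
             "Chrome/126.0.0.0 Safari/537.36")

def spoofed_headers():
    """Cheap option: any HTTP client sending a Chrome user-agent header."""
    return {"User-Agent": CHROME_UA, "Accept-Language": "en-US,en;q=0.9"}

def real_chrome_command(url, user_data_dir="/tmp/ab-test-profile"):
    """Heavy option: the argv for launching an actual headless Chrome
    process with its own profile directory (placeholder path)."""
    return ["google-chrome", "--headless=new",
            f"--user-data-dir={user_data_dir}", url]
```

The first option costs almost nothing per visit; the second ties up a full browser process per visit, which is exactly the cost difference the A/B test is meant to justify (or not).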

The second test is unauthenticated Chrome vs. logged-in Chrome. This scenario will show whether there’s any additional value or legitimacy attached to having a logged-in Google account when using Chrome.

The third test is Chrome vs. Firefox. This will measure how much of a ‘bonus’ Chrome browsers get within Navboost.

And of course, once all of the above has been established, testing combinations of the above scenarios will be required. This should solidly establish a benchmark, and detect any cumulative effects that these conditions might have.
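Enumerating every combination is just bookkeeping. A quick sketch, where the dimension labels are ours rather than Google’s, and each cell in the grid would get its own A/B site pair:

```python
from itertools import product

# The three dimensions from the tests above (labels are ours, not Google's):
browsers = ["chrome", "firefox"]
integration = ["real_browser", "ua_spoof_only"]
auth = ["logged_in", "logged_out"]

cells = [
    {"browser": b, "integration": i, "auth": a}
    for b, i, a in product(browsers, integration, auth)
]
# Some cells may be dropped in practice (e.g. a logged-in Google account
# makes little sense for a bare user-agent spoof).
```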

After a few months of running these scenarios, any Navboost bias towards (or against) Chrome should be evident. Then the optimal setup can be used on live sites.

How Do We Know That CTR Manipulation Works at All?

The API leak showed how Google treats CTR metrics. The API recognizes good clicks, bad clicks, last longest clicks, unicorn clicks, and overall impressions. That means the metrics that CTR manipulation uses, including dwell time and positive click ratios, very much matter in overall page ranking.
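As an illustration of how clicks might be bucketed into those categories: the category names below come from the leak, but the dwell-time thresholds are our own guesses, not leaked values.

```python
def classify_click(dwell_seconds, returned_to_serp):
    """Toy classifier for the click categories named in the leak.

    The category names come from the leaked API; the dwell-time
    thresholds here are illustrative guesses, not leaked values.
    """
    if returned_to_serp and dwell_seconds < 10:
        return "bad_click"      # quick bounce back to the results page
    if not returned_to_serp or dwell_seconds >= 60:
        return "good_click"     # the visit appears to have satisfied
    return "neutral"

def last_longest_click(session_clicks):
    """Pick the 'last longest click' of a search session: among clicks
    tied for the longest dwell, take the most recent one."""
    return max(session_clicks, key=lambda c: (c["dwell"], c["time"]))
```

Whatever Google’s real thresholds are, the point is that dwell time and bounce behavior, not the raw click count, are what get scored.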

This is all despite the fact that official Google reps misinformed the public about the importance of click dynamics, deep scrolling, and other UX metrics. They also said that CTR wasn’t a metric they used for core search rankings, which of course was disproved by the recent leak.

Of course, these techniques need to be used in conjunction with best practices. There’s a reason every SEO company says that site performance is an important factor. There’s a reason why E-E-A-T (Experience, Expertise, Authoritativeness, and Trustworthiness) signals are increasingly emphasized on successful websites. There’s a reason why relevance is still king.

But CTR manipulation takes that solid base and improves upon it. It’s one of the few ways that smaller sites can compete with the megalithic entities that dominate Google’s top rankings day in and day out. One of the other findings of the Navboost API leaks was a built-in bias towards larger companies, with the search engine assuming that their authority and level of trust start from a higher baseline. Something needs to be done to level the playing field.

As of Mid 2024, Have Any Lawsuits or Antitrust Charges Resulted from the Leak?

No. As of the second week of July 2024, nobody has officially brought a lawsuit against Google specifically citing browser bias in search.

But that’s not exactly a shock. It will take competitors several weeks to determine whether they have a case, and whether they’re willing to make the financial commitment such a legal effort requires.

When you go up against Google, you go up against a team of over 1,400 legal experts worldwide. That’s a massive in-house force. Smaller organizations would quickly be buried in paperwork from the motions that would be filed.

In short: Apple and Microsoft have the best chance of turning this issue into a legitimate legal battle. None of the smaller browsers have the resources to tackle Google on their own, though they might team up with one of the bigger entities so that the suit would represent a larger fraction of the browser landscape.

What about a government antitrust case? Those happen all the time, and Google is no stranger to them. They’re currently in the deliberation phase of a multi-billion-dollar antitrust lawsuit over their payments to other companies to be the default search engine on their products.

If the recent API leak is to spawn another one of these, expect it to be at least two years before we hear anything about it. It takes several rounds of subpoenas, hundreds of people doing in-depth investigations, dozens of interviews, and a lot of late nights in order to piece together an antitrust case that has a chance of sticking.

And the sad truth behind this revelation? This case simply might be too technical for most governments to want to take on. They might very well decide that a corporate lawsuit is a more appropriate way to resolve this issue, since getting support from politicians and taxpayers would involve an in-depth education campaign about how search works. Either way, it’s not likely to be at the top of anyone’s priority list in the short term.

Finding the Right CTR Manipulation to Take Advantage of Navboost’s Browser Bias

In order to get the best possible results that are in line with what we currently know about Navboost’s browser bias, the company doing the CTR manipulation needs to have a few specific features:

First, they should be using actual web browsers, not just forging a user agent. Unless the A/B testing proves otherwise, it’s worth the additional CPU and memory investment to open websites in a real web browser.

Second, the method should be driven by user signals. Anyone can brute-force a massive number of searches and clicks. Good CTR manipulation companies click, scroll, and linger to signal that a high-quality user experience is taking place. As we know from the leak, those metrics are recorded and used by Navboost at some level.

Third, the company needs to randomize key browser features and system metrics to defeat browser fingerprinting. Fingerprinting is rapidly replacing cookies as a way to identify and track users; without some level of randomization, a search engine will be able to figure out that all of these different ‘users’ are actually coming from the same place.
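A minimal sketch of what per-visit randomization could look like. The value pools below are purely illustrative; a production system would sample from real, current device distributions:

```python
import random

# Illustrative pools only; real systems should mirror actual market shares.
VIEWPORTS = [(1920, 1080), (1536, 864), (1366, 768), (2560, 1440)]
LANGUAGES = ["en-US", "en-GB", "de-DE", "fr-FR"]
TIMEZONES = ["America/New_York", "Europe/London", "Europe/Berlin"]

def random_profile(rng=random):
    """Generate a per-visit browser profile so repeated visits don't
    share an identical fingerprint."""
    return {
        "viewport": rng.choice(VIEWPORTS),
        "language": rng.choice(LANGUAGES),
        "timezone": rng.choice(TIMEZONES),
        # Jitter hardware-ish values that fingerprinting scripts read:
        "device_memory_gb": rng.choice([4, 8, 16]),
        "hardware_concurrency": rng.choice([4, 8, 12, 16]),
    }
```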

Of course, Top of the Results has all of these features and is very likely cheaper and more effective than any home-grown solution. Contact us for more information on our methodologies and our monthly plans.