TL;DR

Always make sure your website allows older browsers such as Chrome 41 to access your site so Google can index it properly. Do not prompt users to upgrade their browser; render the site as normal. Googlebot Smartphone has been known to use an outdated browser. WAY TO GO GOOGLE!

The Real Story – For all the SEO nerds and the naturally curious

A few weeks back, a website we work with saw a dramatic drop in Google organic traffic.

This is every SEO’s worst nightmare, but have no fear – it was time to get to work! While there have been algorithm updates, the timing of our client’s traffic loss did not align with them. There were no manual actions in Google Search Console, and rankings and traffic dropped across all pages, so it wasn’t a page- or template-specific issue, which led us to believe it was some sort of technical issue.

Logging into Google Search Console, we found URLs that Google was treating as noindex because they were being blocked by the robots.txt file.

[Screenshot: Google Search Console reporting the URL as blocked by robots.txt]

After crawling the live URL with Google Search Console’s URL Inspection tool, we no longer saw the robots.txt issue, and the URL should have been indexable.

[Screenshot: Google Search Console live test showing the URL as available to index]

Our first conclusion was that at some point the robots.txt file had accidentally been switched to disallow the whole site, but had since been reverted. This is fairly common: it happens when a staging website’s robots.txt file, which contains the Disallow: / directive, is pushed to production. Problem solved, we are HEROES!
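For reference, a staging robots.txt that blocks the whole site differs from a typical wide-open production file by a single character:

```
# Staging: blocks all crawlers from the entire site
User-agent: *
Disallow: /

# Production: an empty Disallow allows everything
User-agent: *
Disallow:
```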

WRONG!

We recommended requesting indexing of the site’s URLs in batches to help Google rediscover the site’s pages. A few days later we started to see rankings and traffic slowly recover, but when we attempted to fetch live URLs via GSC, Google failed to fetch them and returned the error “Failed: Crawl anomaly”.

[Screenshot: Google Search Console “Failed: Crawl anomaly” error]

Around the time of de-indexing, we also noticed a large spike in 404 errors. The spike led us to believe that the de-indexation and these errors must be related. But robots.txt blocking should not cause 404 errors.

[Screenshot: spike in 404 errors crawled by the Googlebot Smartphone user-agent]

We noticed that the user-agent for both the URL Inspection tool and the Coverage report was “Googlebot Smartphone,” which makes sense, since Google has moved to mobile-first indexing.

To further test whether we were looking at a user-agent-specific issue, we attempted to fetch the page via Google’s Mobile-Friendly Test tool, which uses the Googlebot Smartphone user-agent. The test returned an error message.

[Screenshot: Mobile-Friendly Test “Page cannot be reached” error]

That struck us as odd, as we have used this tool plenty of times during SEO audits and rarely, if ever, received this “Page cannot be reached” message.

Next, we used a fetching tool from TechnicalSEO.com to mimic the “Googlebot Smartphone” user-agent and see what it was being served, and we got a 404 error:

[Screenshot: fetch tool returning a 404 error for the Googlebot Smartphone user-agent]
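You can reproduce this kind of check yourself. Here is a minimal sketch (Node.js 18+, with a placeholder URL) that fetches a page while sending the Chrome 41-era Googlebot Smartphone user-agent string and reports the HTTP status code:

```javascript
// The Chrome 41-era Googlebot Smartphone user-agent string.
const GOOGLEBOT_SMARTPHONE_UA =
  "Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5X Build/MMB29P) " +
  "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2272.96 " +
  "Mobile Safari/537.36 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)";

// Fetch a URL as Googlebot Smartphone and return the HTTP status code.
async function fetchAsGooglebotSmartphone(url) {
  const res = await fetch(url, {
    headers: { "User-Agent": GOOGLEBOT_SMARTPHONE_UA },
  });
  return res.status; // a 404 here would confirm the crawler is being blocked
}

// Usage (placeholder URL):
// fetchAsGooglebotSmartphone("https://example.com/").then(console.log);
```

If the status differs from what a modern-browser user-agent receives, the server is treating Googlebot Smartphone differently.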

On the 404 error page, we noticed that the rendered HTML displayed the message “Unsupported – You are using an older browser that does not support the latest features required to use this website.”

That might seem odd, but it is known that, in some cases, Google still uses renderers as outdated as Chrome 41. If you want to read more about that, read this fascinating blog post from DeepCrawl.

Final Conclusion

After bringing these findings to the client, they revealed that they had deployed an update to the production website that returned a 404 error to older, unsupported web browsers, asking users to upgrade their browsers to access the website. Googlebot Smartphone saw this as a noindex directive and started to deindex the website.

Now, we have a feeling that Google Search Console may use both Googlebot and Googlebot Smartphone user-agents across its various tools and reports. Without clearly defining which user-agent discovers an issue, it becomes difficult to diagnose the source of the problem. For every report and tool, Google should disclose which user-agent is being deployed.

Recovery

The website update/patch was reverted and now allows unsupported browsers to access the site, which in turn allows Googlebot Smartphone entrance. Google promptly started to reindex the website. Luckily, the client’s rankings recovered to the positions held prior to the de-indexing, and in some cases reached even higher positions, but if the error had persisted for more than a few days, we fear all could have been lost.

The prompt asking users to update their browser was a harmless update by a development team, aimed at ensuring users would have the best experience of the site by using a browser that fully supports all of its features. There is no reason a development team should have been expected to foresee the repercussions that followed.
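As a sketch of a safer pattern (our suggestion, not the client’s actual fix): keep serving the full page with a 200 status to every user-agent, detect missing features on the client, and show a dismissible notice instead of a 404. The specific feature checks below are illustrative placeholders:

```javascript
// Return true if the browser supports the features the site needs.
// These checks are placeholders; swap in whatever features the site
// actually depends on.
function supportsRequiredFeatures() {
  return typeof Promise !== "undefined" && typeof fetch === "function";
}

// Show a non-blocking upgrade notice rather than replacing the page.
// The page content, and its HTTP 200 status, stay intact for crawlers.
function showUpgradeNoticeIfNeeded() {
  if (typeof document === "undefined" || supportsRequiredFeatures()) return;
  const banner = document.createElement("div");
  banner.setAttribute("role", "alert");
  banner.textContent =
    "You are using an older browser; some features may not work. " +
    "Please consider upgrading.";
  document.body.prepend(banner);
}
```

Because the notice is layered on top of the normal page instead of replacing it, an outdated renderer like Chrome 41 still sees indexable content.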

C’mon Google get it together.