Google’s John Mueller has clarified that the “Page Indexed without content” warning in Google Search Console is usually caused by server-side or CDN-level blocking of Googlebot, not JavaScript-related problems. Below is what site owners should review if they encounter this issue.
- “Page Indexed without content” commonly signals that Googlebot is being blocked at the server or CDN level.
- Pages affected by this error may begin disappearing from Google’s index, making it a high-priority issue.
- These blocks are often IP-based, which makes them difficult to detect using external testing tools.
Google Search Advocate John Mueller addressed this issue in response to a Reddit question, explaining that the error typically results from low-level blocking rather than JavaScript failures.
The discussion began after a site owner reported their homepage falling from position 1 to position 15 shortly after the error appeared in Search Console.
What’s Happening?
Mueller corrected a widespread misunderstanding about the cause of the “Page Indexed without content” status.
He explained:
“Usually this means your server / CDN is blocking Google from receiving any content. This isn’t related to anything JavaScript. It’s usually a fairly low level block, sometimes based on Googlebot’s IP address, so it’ll probably be impossible to test from outside of the Search Console testing tools.”
The site owner had already tried several troubleshooting steps, including fetching the page with curl using a Googlebot user agent, checking for JavaScript restrictions, and running the page through Google’s Rich Results Test. Desktop URL inspections consistently returned “Something went wrong” errors, while mobile inspections appeared normal.
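For reference, that kind of user-agent comparison can be reproduced with a short script. This is only a minimal sketch, not an official diagnostic: the URL is a placeholder, and because the request still originates from your own IP address, an IP-based block of the kind Mueller describes will not show up here.

```python
# Minimal sketch: compare responses for a normal browser user agent vs. a
# Googlebot user agent. Note that IP-based blocks will NOT reproduce this way,
# because the request still comes from your own IP, not Googlebot's.
import urllib.request
import urllib.error

URL = "https://www.example.com/"  # placeholder; use the affected page

USER_AGENTS = {
    "browser": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "googlebot": ("Mozilla/5.0 (compatible; Googlebot/2.1; "
                  "+http://www.google.com/bot.html)"),
}

for label, ua in USER_AGENTS.items():
    req = urllib.request.Request(URL, headers={"User-Agent": ua})
    try:
        with urllib.request.urlopen(req, timeout=10) as resp:
            body = resp.read()
            print(f"{label}: HTTP {resp.status}, {len(body)} bytes")
    except urllib.error.HTTPError as e:
        print(f"{label}: HTTP {e.code} ({e.reason})")
    except urllib.error.URLError as e:
        print(f"{label}: request failed ({e.reason})")
```

If both variants return full content from your own network but Search Console still reports errors, that is consistent with the IP-level blocking Mueller describes.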
According to Mueller, these types of blocks often cannot be detected using standard third-party tools.
He added:
“Also, this would mean that pages from your site will start dropping out of the index (soon, or already), so it’s a good idea to treat this as something urgent.”
The affected website was built on Webflow and used Cloudflare as its CDN. The site owner noted that the homepage had previously indexed without issues and that no recent site changes had been made.
Why This Matters
This type of issue has surfaced repeatedly over the years. Server and CDN configurations can unintentionally block Googlebot while leaving normal users and common testing tools unaffected. These restrictions are often applied to specific IP ranges, making them hard to reproduce outside Google’s own systems.
When Google first introduced the “indexed without content” status in the Index Coverage report, its documentation explained that the message appears when Google cannot retrieve page content for unspecified reasons. It also clarified that this situation is not related to robots.txt blocking. In most cases, the root cause lies deeper in the server or network layer.
The involvement of Cloudflare stands out. In a previous case, Mueller advised a site owner experiencing crawl issues across multiple domains—all of which were using Cloudflare. He pointed to shared infrastructure as the likely reason. The pattern in this case appears similar.
More recently, a Cloudflare outage in November caused widespread 5xx errors that disrupted crawling across many sites. While that was a large-scale event, this situation seems more targeted—possibly triggered by a firewall rule or bot protection setting that treats Googlebot’s IP addresses differently from other traffic.
How to Diagnose the Issue
Google Search Console’s URL Inspection and Live URL Test tools remain the most effective ways to diagnose this problem. If these tools report errors while external tests show no issues, server or CDN-level blocking is the most probable explanation.
Mueller has made similar observations in the past. In August, while addressing crawl rate declines, he advised site owners to “double-check what actually happened” and determine whether a CDN had blocked Googlebot.
Looking Ahead
If you encounter the “Page Indexed without content” error, carefully review your server and CDN configurations, especially any rules that may affect Googlebot’s IP ranges. Google provides a list of its crawler IP addresses, which can help identify whether security filters are unintentionally targeting them.
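As a rough illustration, log or firewall entries can be checked against those published ranges programmatically. This sketch assumes the googlebot.json file Google publishes under developers.google.com at the time of writing; confirm the current location in Google’s crawler documentation, and treat the sample IP as a hypothetical value pulled from a log.

```python
# Sketch: check whether an IP address seen in server or CDN logs falls inside
# Google's published Googlebot ranges. The googlebot.json URL below is the
# location documented by Google at the time of writing; verify it against
# Google's current crawler documentation before relying on it.
import ipaddress
import json
import urllib.request

RANGES_URL = "https://developers.google.com/static/search/apis/ipranges/googlebot.json"

def googlebot_networks():
    with urllib.request.urlopen(RANGES_URL, timeout=10) as resp:
        data = json.load(resp)
    for entry in data["prefixes"]:
        prefix = entry.get("ipv4Prefix") or entry.get("ipv6Prefix")
        yield ipaddress.ip_network(prefix)

def is_googlebot(ip: str) -> bool:
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in googlebot_networks())

if __name__ == "__main__":
    # Hypothetical IP taken from an access log or firewall event.
    print(is_googlebot("66.249.66.1"))
```

Any security rule that matches addresses in these ranges deserves a closer look.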
The URL Inspection tool in Search Console offers the clearest view of what Google sees when it crawls your pages. External testing tools often fail to detect IP-based restrictions that apply only to requests from Google’s infrastructure.
For sites using Cloudflare, it’s especially important to review:
- Bot management settings
- Firewall rules
- IP-based access controls
In some cases, the issue may stem from automatic updates or default configuration changes rather than any manual adjustment by the site owner.
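One way to spot such a silent change, however it was introduced, is to scan recent access or firewall logs for requests that identify as Googlebot but received blocking responses. The sketch below assumes a standard combined-format access log at a hypothetical path; adapt the parsing to whatever log export your server or Cloudflare plan provides.

```python
# Minimal sketch: flag log entries where the user agent claims to be Googlebot
# but the response was a likely block (403, 429, 5xx). Assumes a combined-format
# access log at a hypothetical path; Cloudflare users would work from an
# exported log or the dashboard analytics instead.
import re

LOG_PATH = "access.log"  # hypothetical path; adjust to your setup
BLOCK_CODES = {"403", "429", "500", "502", "503"}

# Loose combined-log-format pattern: client IP, request line, status, user agent.
LINE_RE = re.compile(
    r'^(?P<ip>\S+) \S+ \S+ \[[^\]]+\] "(?P<request>[^"]*)" '
    r'(?P<status>\d{3}) \S+ "[^"]*" "(?P<agent>[^"]*)"'
)

def suspicious_lines(path):
    with open(path, encoding="utf-8", errors="replace") as handle:
        for line in handle:
            match = LINE_RE.match(line)
            if not match:
                continue
            if "Googlebot" in match["agent"] and match["status"] in BLOCK_CODES:
                yield match["ip"], match["status"], match["request"]

if __name__ == "__main__":
    for ip, status, request in suspicious_lines(LOG_PATH):
        print(f"{ip} -> {status} for {request}")
```

A cluster of blocked requests from addresses in Google’s published ranges would point to the kind of firewall or bot-protection rule described above.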