How does Google pagination crawling work in 2025?

12 May 2026
Pagination is everywhere — on product grids, blog archives, faceted search pages — you name it. Yet despite its ubiquity, Google pagination crawling remains one of the most misunderstood areas of technical SEO. Many still rely on rel="next" and rel="prev" tags in the <head>, assuming Googlebot knows how to follow the chain. But here's the catch: Google officially stopped using those tags for crawling and indexing back in 2019. So why are they still so common? Because the ecosystem hasn't caught up, and there's still confusion over whether they might work behind the scenes anyway. That uncertainty matters. If Googlebot isn't crawling the deeper pages in your pagination, valuable content may never be indexed, ranked, or found. And if your crawl budget is limited, wasting it on ineffective markup isn't doing you any favours.

We decided to test it ourselves

In early April 2025, we launched a live test to understand how Google pagination crawling actually works when URLs are linked exclusively via <link rel="next"/"prev"> tags, a practice deprecated in 2019 but still widely used in web development. What followed was a surprising, month-long crawl silence, a series of careful interventions, and eventually, selective URL discovery that revealed more about what Google ignores than what it respects. To carry out this experiment, we used our dedicated internal SEO testing site, which, among other purposes, is used for controlled crawling and indexing tests. As part of the setup, we also developed a custom-built automated Log Analyser tool to monitor and visualise Googlebot activity in real time. This provided granular insight into exactly which URLs were requested, when, and by which user agent.
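To give a flavour of the monitoring side, here is a minimal sketch of that kind of log filtering. It is not our Log Analyser (which also visualises activity in real time), just an illustrative Python example, assuming the standard Nginx/Apache "combined" access log format and a hypothetical access.log path:

```python
import re
from collections import Counter

# Combined log format: IP - - [timestamp] "METHOD path HTTP/x" status size "referer" "user-agent"
LINE_RE = re.compile(
    r'^(?P<ip>\S+) \S+ \S+ \[(?P<ts>[^\]]+)\] '
    r'"(?P<method>\S+) (?P<path>\S+) [^"]*" (?P<status>\d{3}) \S+ '
    r'"[^"]*" "(?P<ua>[^"]*)"'
)

def googlebot_hits(log_path):
    """Yield (timestamp, path, user agent) for each request whose UA mentions Googlebot."""
    with open(log_path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            m = LINE_RE.match(line)
            if m and "Googlebot" in m.group("ua"):
                yield m.group("ts"), m.group("path"), m.group("ua")

if __name__ == "__main__":
    hits = list(googlebot_hits("access.log"))  # hypothetical log file path
    for ts, path, ua in hits:
        print(ts, path, ua)
    # Per-URL counts surface patterns like the aggressive /favicon.ico fetching
    print(Counter(path for _, path, _ in hits).most_common(10))
```

In practice you would also verify that requests genuinely come from Google (for example via reverse DNS), since user-agent strings are easily spoofed.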

The hypothesis

Google announced in 2019 that it no longer uses rel="next" and rel="prev" as crawling signals. Yet these tags remain in use on many paginated experiences, and some SEOs have continued to speculate that they might still be used behind the scenes for crawl discovery.

Our test aimed to answer a simple question:

Does Google crawl URLs that are only referenced via rel="next" / rel="prev" in the <head>, with no visible links?

The setup

On a clean testing environment, we deployed a 3-page pagination chain as follows:
  • / (index page) included <link rel="next" href="/page-2.html">
  • /page-2.html included both <link rel="prev" href="/"> and <link rel="next" href="/page-3.html">
  • /page-3.html included <link rel="prev" href="/page-2.html">
There were no anchor links (<a href>) between pages, only rel tags in the head. We also built a custom server log analyser to monitor all Googlebot requests in real time.
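Concretely, the <head> of each page looked roughly like this (the rel tags match the list above; titles and surrounding boilerplate are illustrative):

```html
<!-- / (index page) -->
<head>
  <title>Pagination test: page 1</title>
  <link rel="next" href="/page-2.html">
</head>

<!-- /page-2.html -->
<head>
  <title>Pagination test: page 2</title>
  <link rel="prev" href="/">
  <link rel="next" href="/page-3.html">
</head>

<!-- /page-3.html -->
<head>
  <title>Pagination test: page 3</title>
  <link rel="prev" href="/page-2.html">
</head>
```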

Timeline of crawl behaviour and interventions

April 2025: No Hardcoded Links

  • 11 Apr: Initial setup went live and was submitted in Google Search Console (GSC)
  • 28 Apr: Added sitemap.xml (containing only the index page), enhanced the index copy, and added noindex,nofollow to the log analysis page (see the snippets after this list)
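For reference, the sitemap added on 28 April referenced only the index page. A minimal file along these lines (the host is a placeholder):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://example-test-site.com/</loc>
  </url>
</urlset>
```

The log analysis page was flagged with <meta name="robots" content="noindex,nofollow"> in its <head>, which, as noted later, did not stop Googlebot from requesting it.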

May 2025: Added Hardcoded Links

  • 08 May: Added hardcoded <a href="/page-2"> to index
  • 09 May: Removed rel="noreferrer" from link; duplicated link using relative format
  • 12 May: Updated content of /page-2, removed crawl delay from robots.txt
  • 19 May: Created /page-two.html with visible links to test a “clean slate” page
  • 29 May: Moved the hardcoded links to the top of the page (the resulting markup is sketched after this list)
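After the 29 May change, the top of the index page's <body> contained something like the following. This is a reconstruction: the anchor text, the host, and the exact href values are illustrative, not copied from the test site.

```html
<body>
  <!-- Visible, crawlable anchors moved to the top of the page on 29 May -->
  <a href="https://example-test-site.com/page-2.html">Page 2</a> <!-- absolute variant -->
  <a href="/page-2.html">Page 2</a> <!-- relative duplicate; rel="noreferrer" removed on 09 May -->
  <a href="/page-two.html">Page two</a> <!-- "clean slate" page created on 19 May -->
  <!-- ...rest of the index copy... -->
</body>
```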

June 2025: Links Finally Crawled 

  • 04 Jun: Added the sitemap to robots.txt (snippet after this list), unproxied domain via Cloudflare
  • 09 Jun: Wrapped links in a <div> and added a blog article link
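The 04 June robots.txt change amounted to one new line, the Sitemap directive (the host is again a placeholder, and the Allow rule is assumed):

```
# robots.txt after 04 June; the crawl-delay line had already been removed on 12 May
User-agent: *
Allow: /

Sitemap: https://example-test-site.com/sitemap.xml
```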

Observations and outcomes

Success: Google eventually crawled visible links

On 22 June, Googlebot finally visited:
  • /page-2.html
  • /page-two.html
  • /blog/why-angerbeef-is-so-popular-among-sheep.html
These were the URLs linked via visible <a href> tags. This confirms Google was not blocked from accessing them, but it took 6+ weeks and multiple interventions before they were discovered.

Failure: Google ignored rel="next/prev"-only URLs

At the time of writing, there is still no evidence that Googlebot ever requested /page-3.html, which remains linked only via <link rel="next"> from /page-2.html.

Other interesting findings

  • Favicons are crawled very aggressively. Googlebot-Image hit /favicon.ico dozens of times more frequently than any other asset.
  • noindex,nofollow pages remained crawl targets. Even after flagging /server-log-analysis/ as noindex + nofollow, Googlebot continued to request it for weeks.
  • Link placement may play a role. Adding <div> wrappers and repositioning links to the top may have influenced crawl prioritisation, though we can’t isolate it conclusively.
  • Cloudflare’s impact on Google crawlers remains unclear.

Raw crawl timeline (condensed)

  • 11–22 Apr: /, /index.html, /robots.txt, /favicon.ico crawled
  • 28 Apr: /sitemap.xml, /server-log-analysis/ crawled
  • 8–12 May: visible link to /page-2.html added, but not crawled
  • 22 Jun: /page-2.html, /page-two.html, /blog/why-angerbeef-is-so-popular-among-sheep.html crawled
  • 30 Jun – 3 Jul: /robots.txt, /favicon.ico crawled multiple times

What this means for pagination

While this test doesn’t confirm everything about how Google chooses to crawl, it strongly supports one thing about Google pagination crawling:

Google does not crawl URLs referenced only in rel="next"/"prev" tags.

Even with a perfect chain of clean rel-tagged pages, no crawl occurred without the presence of visible, clickable anchor links. And even then, crawl delays stretched over weeks, suggesting Google deprioritised these low-authority pages despite all signals being favourable.

Connect to learn more

Contact the team here. Connect with Charles on LinkedIn.