Comparing the source code of a website for different user agents

Source code comparison?

  • Google uses different bots: some execute JavaScript and CSS (Fetch & Render), others don't (static).
  • You can use this tool to check whether a website returns different content to different bots or user agents (a minimal sketch of such a check follows below).
  • From an SEO point of view, it is important that the content of the static and the rendered version differs as little as possible.
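For illustration, here is a minimal sketch of such a check in Python, assuming the `requests` library is available; the target URL and the browser user-agent string are placeholders you would replace with your own.

```python
import difflib

import requests

# Genuine Googlebot UA string vs. a typical desktop browser (placeholder).
USER_AGENTS = {
    "googlebot": "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)",
    "browser": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
               "(KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36",
}

def fetch(url: str, user_agent: str) -> str:
    """Fetch the raw (static) HTML of a URL while identifying as the given user agent."""
    response = requests.get(url, headers={"User-Agent": user_agent}, timeout=10)
    response.raise_for_status()
    return response.text

if __name__ == "__main__":
    url = "https://example.com"  # placeholder: the page you want to test
    versions = {name: fetch(url, ua) for name, ua in USER_AGENTS.items()}

    # An empty diff means the server returned identical HTML to both user agents.
    diff = difflib.unified_diff(
        versions["googlebot"].splitlines(),
        versions["browser"].splitlines(),
        fromfile="googlebot",
        tofile="browser",
        lineterm="",
    )
    print("\n".join(diff) or "No differences found.")
```

Note that this only compares the static responses; rendering differences are covered further below.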

What is the difference between static Googlebot and Fetch & Render?

Static Googlebot and dynamic Googlebot (Fetch & Render) are two different ways Google fetches and processes web pages. Here's a breakdown of their differences:

Static Googlebot:
The static Googlebot is the web crawler Google uses to discover and index web pages. It is responsible for fetching the HTML content of web pages, analyzing the page structure, and extracting relevant information. The static Googlebot does not run JavaScript and does not render dynamic content; instead, it focuses on the static HTML of a page. It mainly looks at text content, links, meta tags, and other HTML elements to determine a page's relevance and index it in Google's search results.

Fetch and Render:
Fetch & Render is a feature available in Google Search Console (formerly known as Google Webmaster Tools). It allows site owners to test how Googlebot renders their web pages, including how it handles and displays dynamic content such as JavaScript-generated content or AJAX calls. With Fetch & Render, you can submit a URL from your website to see how Googlebot will fetch and render it.

The Fetch & Render feature has two modes:

Fetch:
This mode retrieves the HTML content of the given URL, similar to what the static Googlebot does, fetching the page's HTML, CSS, and other resources.

Render:
In this mode, the page is fetched and rendered as Googlebot would render it, executing any JavaScript and AJAX calls. It shows the rendered output so you can see how Googlebot sees your page after processing JavaScript and other dynamic elements.
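Googlebot's exact rendering pipeline isn't publicly reproducible, but you can approximate the "Render" mode with a headless browser. Here is a sketch using the Playwright library (an assumption; any headless Chromium setup would work similarly), with a placeholder URL:

```python
from playwright.sync_api import sync_playwright

def fetch_rendered(url: str) -> str:
    """Return the DOM serialized after JavaScript has run,
    approximating what a rendering crawler sees."""
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        # "networkidle" waits until AJAX calls have settled.
        page.goto(url, wait_until="networkidle")
        html = page.content()  # the DOM after rendering, not the raw response
        browser.close()
    return html

rendered_html = fetch_rendered("https://example.com")  # placeholder URL
```

The key difference from a plain HTTP fetch is that `page.content()` returns the DOM after scripts have modified it, not the bytes the server originally sent.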

In summary, the static Googlebot is responsible for crawling and indexing the static HTML content of web pages, while Fetch & Render is a tool within Google Search Console that allows website owners to test how Googlebot renders their pages, including dynamic content.

What if the code is different?

If the source code differs between the static version and the one rendered with Fetch & Render, this may indicate a problem with how Googlebot processes and renders the page. Here are some common causes of such differences (a diff sketch follows the list):

  1. JavaScript execution:
    Fetch & Render uses a headless browser to render web pages, which means it can execute JavaScript and handle dynamic content. If your page relies heavily on JavaScript to generate or modify its content, the rendered version may differ from the static version because the static Googlebot doesn't run JavaScript.
  2. AJAX Calls:
If your webpage makes AJAX calls to fetch additional content or data after the initial page load, the static version fetched by Googlebot will not contain the results of those AJAX calls. Fetch & Render, however, executes the JavaScript and displays the page as it appears after the AJAX calls are complete. This can lead to differences between the static and rendered versions.
  3. Dynamic content generation:
    If your page generates content dynamically based on user interactions or server-side logic, the static version fetched by Googlebot only captures the initial state of the page. However, Fetch & Render renders the page and displays its dynamic content as it appears after all processing is complete.
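To spot such differences in practice, you can diff the visible text of the static and rendered versions. A sketch, assuming the BeautifulSoup parser and the `fetch`/`fetch_rendered` helpers from the earlier sketches:

```python
import difflib

from bs4 import BeautifulSoup

def visible_text(html: str) -> list[str]:
    """Reduce HTML to visible text lines so the diff is not dominated by markup."""
    soup = BeautifulSoup(html, "html.parser")
    for tag in soup(["script", "style", "noscript"]):
        tag.decompose()  # drop non-visible elements
    return [line.strip() for line in soup.get_text("\n").splitlines() if line.strip()]

def text_diff(static_html: str, rendered_html: str) -> str:
    """Unified diff of visible text between the static and rendered versions."""
    return "\n".join(
        difflib.unified_diff(
            visible_text(static_html),
            visible_text(rendered_html),
            fromfile="static",
            tofile="rendered",
            lineterm="",
        )
    )
```

Lines that appear only on the "rendered" side are content the static Googlebot never sees.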

It's important to note that Google's ability to process JavaScript and render dynamic content has improved over time. With the introduction of technologies like the Headless Chrome rendering engine, Googlebot is now better equipped to understand and render JavaScript-driven websites.
However, it is still recommended to make critical content available in static HTML rather than relying exclusively on JavaScript execution, for optimal indexing and accessibility.

Relevant SEO content should be the same in both the static and the JavaScript-rendered version (a comparison sketch follows the lists below).

This includes:
  • HTTP status codes
  • Title tag and meta description
  • Robots instructions and canonical tags
  • Hreflang tags
  • Headings (H tags)
  • Body text
  • Internal links
  • Images

You can neglect:
  • Login areas
  • Cart functions
  • Cookie consents
  • Ad pixels, tracking pixels
  • Third-party integrations such as iframes
  • Personalization
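As a sketch of how such a comparison could be automated, here is one way to extract the elements from the first list out of both versions (assuming BeautifulSoup; HTTP status codes are compared on the response objects and are omitted here):

```python
from bs4 import BeautifulSoup

def seo_snapshot(html: str) -> dict:
    """Collect the SEO-relevant elements listed above from one HTML version."""
    soup = BeautifulSoup(html, "html.parser")

    def meta(name: str):
        tag = soup.find("meta", attrs={"name": name})
        return tag.get("content") if tag else None

    canonical = soup.find("link", rel="canonical")
    return {
        "title": soup.title.get_text(strip=True) if soup.title else None,
        "meta_description": meta("description"),
        "robots": meta("robots"),
        "canonical": canonical.get("href") if canonical else None,
        "hreflang": sorted(
            (link.get("hreflang"), link.get("href", ""))
            for link in soup.find_all("link", rel="alternate", hreflang=True)
        ),
        "headings": [h.get_text(strip=True) for h in soup.find_all(["h1", "h2", "h3"])],
        # All link targets; filtering to internal links only is site-specific.
        "links": sorted({a["href"] for a in soup.find_all("a", href=True)}),
        "images": sorted({img["src"] for img in soup.find_all("img", src=True)}),
    }

def compare_versions(static_html: str, rendered_html: str) -> None:
    """Print every SEO-relevant field that differs between the two versions."""
    static, rendered = seo_snapshot(static_html), seo_snapshot(rendered_html)
    for key in static:
        if static[key] != rendered[key]:
            print(f"{key}:\n  static:   {static[key]!r}\n  rendered: {rendered[key]!r}")
```

If `compare_versions` prints nothing, the SEO-relevant content of the static and rendered versions matches.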

Ready to get started?

Dive into the world of practical insights, undiscovered opportunities, and growth
