Does Google Understand Your Website?

Shockwave Solutions LLC
4 min read · Nov 11, 2021

Getting your site ranked well by Google (and other search engines) is crucial if you want to bring in organic traffic. However, with search engines getting more complex year after year, most companies end up missing out — here's how to make sure you've got the basics set up correctly.

This blog isn't a comprehensive guide to SEO — it's about making sure that the technical foundation is in place. You can certainly achieve respectable results without that foundation, but you'll struggle to take your organic performance as far as it can go.

Fortunately, making your website accessible to search engines is relatively easy and doesn't require any visual or user-experience changes that might affect customer journeys.

Here are four essential points that’ll help you make sure that search engines can process your website correctly:

Effectively Using Sitemaps for SEO

Search engines don’t process websites in the same way that visitors do, particularly when it comes to site navigation. While a user won’t necessarily need to know where every part of your site is, search engine crawlers want to know exactly what your site includes.

A sitemap is a simple way to provide that information, listing all pages on your site, typically in XML (Extensible Markup Language) format. Most sitemaps also include information about when each page was last modified and how frequently it changes.
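For reference, a bare-bones sitemap for a two-page site might look something like the sketch below. The URLs are placeholders, and the optional lastmod and changefreq fields carry the modification-date and change-frequency information mentioned above:

```
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- One <url> entry per page you want search engines to know about -->
  <url>
    <loc>https://www.example.com/</loc>
    <lastmod>2021-11-01</lastmod>
    <changefreq>weekly</changefreq>
  </url>
  <url>
    <loc>https://www.example.com/about</loc>
    <lastmod>2021-10-15</lastmod>
    <changefreq>monthly</changefreq>
  </url>
</urlset>
```

The file is typically saved as sitemap.xml at the root of your domain, and the same structure simply repeats for every additional page.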

The process of creating a sitemap depends on the scope of your site (and the CMS you're using). If your site only consists of a few pages, you can manually create a sitemap using any one of the many templates available online. As sites get larger, creating sitemaps becomes more time-consuming, requiring the use of specialized generators, particularly once you pass the 500-page mark.

Once you’ve created your sitemap, you need to make sure that search engine crawlers can access it. It’s usually best practice to link the file unobtrusively in a footer while also submitting the sitemap to key search engines.

Understanding robots.txt Files

Should every page on your site be discoverable via search engines? The answer's almost certainly no, especially if you're running split tests, handling payments, have duplicate content, or even just use certain kinds of site search tools.

A robots.txt file is a simple series of instructions for search engine crawlers, asking them not to crawl certain parts of your website. While not every crawler will obey these instructions, all major search engines should take them into account (provided that the file is correctly uploaded at [yourdomain.com]/robots.txt).

By blocking content that doesn't need to be crawled, you show search engines which parts of your site actually matter, wasting less of your crawl budget and often improving your SEO results as a consequence.

Generating a basic robots.txt file is simple — just use a generator. Correctly blocking content can be more difficult, though. Make sure you’re familiar with robots.txt syntax before making any major changes to the file.
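As a rough illustration, a minimal robots.txt file might look something like this. The paths here are placeholders standing in for the kinds of pages mentioned earlier, such as internal search results and checkout steps:

```
# Rules below apply to all crawlers
User-agent: *

# Keep crawlers out of internal search results and checkout pages
Disallow: /search
Disallow: /checkout/

# Optionally, point crawlers at your sitemap
Sitemap: https://www.example.com/sitemap.xml
```

Be careful with broad rules: a single Disallow: / would ask crawlers to stay away from your entire site, which is exactly the sort of mistake that makes familiarity with the syntax worthwhile.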

Getting Your Site Indexed

If Google doesn’t know your site exists, your content isn’t going to get ranked. You need to get your site noticed before you’ll see any results whatsoever.

There are two ways to go about getting indexed — you should be doing both:

1: Build Great External Links — when your site is linked from a source Google is aware of, they’ll start crawling your content. This can be a slow process, but it’s essential, as external links are one of Google’s most important ranking factors.

2: Actively Index Your Site — sign up for Google Search Console as soon as you can. The platform lets you explore how Google understands your site and lets you submit your sitemap and individual URLs directly for processing. It won't immediately get you great rankings, but it is an essential step.

Using Canonical Tags Correctly

There are plenty of reasons to have duplicate content on your website (localized versions of a page or printer-friendly variants, for example), but they're almost entirely for the benefit of your visitors, not search engine crawlers.

Search engines don't like duplicate content and will only rank one version of a piece. Left to their own devices, they'll often pick the wrong one, causing serious issues for international (and similarly duplicated) content.

Fortunately, there’s a way to avoid this issue — the Canonical tag. This tag, placed in the head of your page, lets you tell search engines exactly which version of a piece they should be indexing.
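In practice, the canonical tag is a single link element. If the same article lives at several URLs, each duplicate version would include something like the line below in its head, pointing at the preferred URL (a placeholder here):

```
<link rel="canonical" href="https://www.example.com/blog/original-article" />
```

Search engines treat this as a strong hint rather than a strict command, but it's usually enough to consolidate ranking signals onto the version you actually want to appear in results.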

If you’ve got duplicate content, you absolutely need to control how search engines understand it, or you’re going to end up with subpar, confusing organic results. Make sure that you’re using Canonical tags wherever relevant.

Technical SEO can be incredibly complicated, but the basic steps are surprisingly simple. Head over to our 10 Minute Technical SEO Guide to find out more about taking the next step to improve your site’s technical performance.

By Richard Parkin
