SERPs are Bunkum


In the absence of Google Analytics or other real information, most SEOs will turn to the Search Engine Results Pages (SERPs) as a proxy for how well a website is performing.

The logic goes that the more keywords for which a site appears – especially in the top three positions – the more successful it will be, and the more traffic it will get. Logically…

However, there are several large caveats to this.

For a start, not all search phrases are created equal. You’ll probably have seen the posters stuck on lampposts or bus shelters claiming to “get your website to the top of Google for £100!” It’s possible of course, if you choose a suitably obscure keyword. “oven-baked rocking horse” anyone?

But the other side of that equation is that absolutely no-one will be looking for that term, and even if they were, they’d be unlikely to buy anything as a result.

How do you measure SEO success?

On the other hand, your website might appear for just one keyword, but that keyword may have a search volume of a million searches a week, with a conversion rate of 50%. On that basis, the SERPs tracking model would portray your site as an absolute failure, even if your order books told a different story.

One question I ask people who want to work for TMI’s Organic Search team is “How do you measure SEO success?” One of the most common answers is “appearing for lots and lots of keywords”.

But appearing for millions of very long-tail terms will actually do you very little good. Others say it’s all about the number of position ones, “Position Zeros”, or total keywords in the Top Three or Top Five. But being at position one for keywords with zero search volume is similarly pointless.

One answer lies in an algorithm which combines lots of factors – search volumes, ranking positions, conversion rate, user intent, etc. – and such “index” values are touted by more than a few of the SERPs tracking tools.
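To make that concrete, here is a minimal sketch of how such an index might be put together. The keyword figures, click-through assumptions and weighting below are all invented for illustration; they are not any tracker’s actual formula.

```python
# Toy "visibility index": hypothetical data and weights, not any vendor's formula.
ASSUMED_CTR_BY_POSITION = {1: 0.30, 2: 0.15, 3: 0.10, 4: 0.07, 5: 0.05,
                           6: 0.04, 7: 0.03, 8: 0.025, 9: 0.02, 10: 0.02}

def keyword_score(monthly_volume, position, conversion_rate):
    """Estimate the monthly conversions a single ranking might drive."""
    ctr = ASSUMED_CTR_BY_POSITION.get(position, 0.01)  # token CTR for deeper positions
    return monthly_volume * ctr * conversion_rate

def visibility_index(keywords):
    """Sum the per-keyword scores into one index figure."""
    return sum(keyword_score(k["volume"], k["position"], k["cvr"]) for k in keywords)

# Example: one obscure term at position 1 versus one big term at position 8.
keywords = [
    {"term": "oven-baked rocking horse", "volume": 10,     "position": 1, "cvr": 0.01},
    {"term": "1TB HDD",                  "volume": 120000, "position": 8, "cvr": 0.02},
]
print(round(visibility_index(keywords), 1))  # the low-volume number one barely registers
```

The point of weighting by volume, click-through and conversion is exactly the one above: a number-one ranking for a term nobody searches contributes almost nothing to the total.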

But are the SERPs worth anything at all?

Understanding the click-through rate

For a start, there are lots of different types of search result pages. They range from straight lists of individual websites – try searching for “1TB HDD” – to pages with lots of complications like maps, images, knowledge graphs, position-zero elements and especially paid search units (compare “hotels in London”, “red court shoes” and “Arnold Schwarzenegger”).

Any or all of these “distractions” change the way people interact with the search result, with an obvious knock-on effect on click-through rates (CTR).

And it doesn’t end there. Anyone who’s searched for a service or a product with a local dimension will have received a tailored result based on a geographical location. You don’t even need to specify the place: “plumbers near me” will give matching results from the Google My Business index based on your IP address.

(The search result for “plumbers near me”. TMI is based in Nine Elms, Vauxhall.)

Even if you’re not logged in, the results will be based on factors like your IP address, with varying degrees of success. Your mobile phone will show pretty good location data, but if you’re connecting via a cable provider you might find your estimated position is somewhere you aren’t expecting at all.

Still greater “accuracy” can be obtained by logging in to your Google account, which then draws on your previous search data and will probably identify your location anyway.

And it’s not just your search history that may affect your perceived geographical location. Depending on the sorts of things you look for, you’ll get tailored results, and they’re not always what you might be expecting.

One issue that SEO agencies often encounter is clients who search for their own websites and report different results from the “official” SERPs. Agencies use third-party rank-tracking programs which take big samples of data from different “locations” and IPs, and average out the results. A search from a client’s own PC, however, can produce widely variable results, either flattering performance or under-reporting it.
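As a rough sketch of the averaging a tracker does – with made-up sample positions rather than any real tracker’s data – the “official” figure is simply the mean of what several synthetic searchers see:

```python
from statistics import mean, pstdev

# Hypothetical samples: the position of the same keyword as seen from different locations/IPs.
samples = {
    "London":     3,
    "Manchester": 5,
    "Edinburgh":  4,
    "Cardiff":    7,
    "Dublin":     6,
}

average_position = mean(samples.values())
spread = pstdev(samples.values())

print(f"Reported 'official' position: {average_position:.1f} (spread of about {spread:.1f} places)")
# A client searching from one office, with their own history, can easily sit
# a few places either side of this average.
```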

Putting it to the test

In fact, you don’t even need to be logged on, or have a search history, for Google to try to tailor your search results.

Google uses Meta Search Data as part of its information gathering: all the stuff that the simple act of looking for something gives to Google without you even knowing about it – including your Operating System, browser type, configuration and plug-ins.

Taken together, this gives a fairly detailed “profile” which Google can use – alongside all the other similar profiles it holds – to make a fair “guess-timation” of who you are, where you are, and what you like.

I tested this once using anonymous proxies and different geographical locations – at the beginning and end of a flight from London to Heathrow – using a search term I had calculated to be broad enough to be ambiguous.

The result was something which reflected my geography, but also included results which were closely tied to my line of work. In other words, despite all my best attempts to conceal my identity, Google seemed to know who I was and where I was.

The question is then: what good are SERP-trackers if every search result is bespoke and personal?

And the answer is that they’re good enough: they describe the average experience across a huge population of searches, in the same way that medical treatments are based on population averages rather than on any single patient.

When the figures don’t add up

SERPs data – based on big-data averages – give at least an indication of the facts, and form a starting point for any SEO programme. Ultimately, though, the right actions can only be taken with actual hard data from Analytics or Search Console.

(June’s sharp upturn in “Visibility”, as shown by SearchMetrics, should have indicated a massive rise in traffic, but exactly the opposite happened.)

Recently, we were called in by an eCommerce website to explain why their revenue had fallen steeply, even though they were seemingly appearing for thousands of new keywords. The rise in keyword numbers was there in all the trackers, each of which predicted a jump in traffic – the sort of result that would get a prospecting agency whooping with joy.

Google Analytics, however, showed a drop in revenue, even with an increase in search impressions as reported by Search Console.

The reason for this contradiction was also in the Search Console data: the site was appearing for more keywords on desktop and mobile, but the average position on desktop was the bottom of page 2, compared to the middle of page 1 for mobile.
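A simplified sketch of the sort of check involved, assuming a Search Console-style export with impressions and average positions per device (the figures below are illustrative, not the client’s actual data):

```python
# Illustrative Search Console-style rows: (device, impressions, average position for a query).
rows = [
    ("mobile", 5000, 6.0), ("mobile", 3000, 5.0), ("mobile", 2000, 7.0),
    ("desktop", 4000, 18.0), ("desktop", 2500, 20.0), ("desktop", 1500, 17.0),
]

def weighted_average_position(rows, device):
    """Impression-weighted average position for one device."""
    device_rows = [(imps, pos) for d, imps, pos in rows if d == device]
    total_impressions = sum(imps for imps, _ in device_rows)
    return sum(imps * pos for imps, pos in device_rows) / total_impressions

for device in ("mobile", "desktop"):
    print(device, round(weighted_average_position(rows, device), 1))
# In this made-up example mobile lands mid page 1, while desktop lands on page 2,
# where hardly anyone clicks - more keywords, but far less useful traffic.
```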

Looking at where the traffic was coming from, hits on the mobile site peaked on Saturday and Sunday, while desktop’s peak was on Monday. It looks as though people were researching a potential purchase at the weekend, but when they went back to buy on their desktops they weren’t finding the site.

This time it’s personal

But if SERPs trackers – which use bulk data on millions of selected keywords – give unreliable results, and what you see in your own personal results may be completely different from what the trackers report, should we be thinking in another direction?

How about this? We take on board our client’s contention that his rankings are better/worse than the “official” SERPs because of – and not in spite of – his search history.

Our client’s potential customers are likely to have a similar search history, so won’t they be seeing a similar set of results? Perhaps we should be caring more about what searchers actually see, rather than what we think they see?

We’re currently running some tests with Virtual Machines using anonymous proxies, with a range of search profiles to see what the difference actually is, and the initial results are “interesting” and unexpected. It really might be that the client has the right idea about how he is appearing.
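For the curious, here is a heavily simplified sketch of the idea. The proxy addresses and browser profiles are placeholders, and a real test runs inside separate virtual machines and has to cope with consent screens, captchas and rendered JavaScript; this is not our actual test harness.

```python
import requests

# Hypothetical profiles: each pairs an exit proxy with a browser fingerprint.
PROFILES = [
    {"name": "vauxhall-desktop", "proxy": "http://203.0.113.10:8080",
     "user_agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)"},
    {"name": "leeds-mobile", "proxy": "http://203.0.113.20:8080",
     "user_agent": "Mozilla/5.0 (Linux; Android 13; Pixel 7)"},
]

def fetch_serp(query, profile):
    """Fetch a raw results page as one profile; parsing out the rankings is left out here."""
    response = requests.get(
        "https://www.google.co.uk/search",
        params={"q": query},
        headers={"User-Agent": profile["user_agent"]},
        proxies={"http": profile["proxy"], "https": profile["proxy"]},
        timeout=10,
    )
    return response.text

for profile in PROFILES:
    html = fetch_serp("plumbers near me", profile)
    print(profile["name"], len(html))  # compare what each synthetic "searcher" actually sees
```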

That leaves one final question: what use are SERPs at all? We quote them as an indication of the click-through rates at different positions, on the basis that more people click on result number 1 than number 2, more people click on result number 2 than number 3, and so on. Anecdotally, though, more people click on result number 10 than result number 9, and, going through to the next page, more people click on result number 11 than number 10.
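To illustrate, here is a toy click model with made-up figures that mimic that anecdote – a dip at the bottom of page one and a small bump at the top of page two. These are not published CTR benchmarks.

```python
# Made-up CTR curve illustrating the page-break effect described above.
ILLUSTRATIVE_CTR = {
    1: 0.30, 2: 0.15, 3: 0.10, 4: 0.07, 5: 0.05,
    6: 0.04, 7: 0.03, 8: 0.025, 9: 0.010, 10: 0.012,
    11: 0.015,  # top of page 2 picks up more clicks than the bottom of page 1
}

def estimated_clicks(monthly_volume, position):
    """Estimate monthly clicks for a ranking, using the illustrative curve."""
    return monthly_volume * ILLUSTRATIVE_CTR.get(position, 0.005)

for position in (9, 10, 11):
    print(position, estimated_clicks(10000, position))
```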

Good or good enough?

But does one profile of click-through rates really cover all searches? Is that even possible with all of the variations in search result pages? Back in 2011, SEO Scientist called the SERPs CTR model “a waste of time”. Since then, a lot of data has passed under the bridge and it’s pretty much accepted that there isn’t a one-size-fits-all model.

Both SearchMetrics and SEM Rush quote variable CTRs in their results, but considered alongside personalised search, this raises a further question: are click-through rates personal too?

For now, we’re probably left with a variable CTR model based on the results page and the market. It may not be perfect, but it’s good enough. If something can give you 70-80% accuracy, do you reject it because it’s not 100%?
