A modern understanding of SEO

SEO doesn't work the way it did five years ago. These days, we have to test everything. Let me explain why.

A couple of years ago, I changed the way I think about SEO. Before Google engineers started to use machine learning for "the core ranking algorithm", which must have been around 2016, SEO was like a blueprint for me. I knew how things were supposed to be for a site to rank well, and my challenge was to execute against that vision.

Today, I don't think like that anymore. I still have a rough map in my mind. I know content, backlinks, titles, and user intent are important, but how much each factor matters is in constant flux. SearchPilot's case studies do a great job of showing how many "factors" can influence a page's rank for a keyword. I recently showed how Google tests content for different keywords in the SERPs. The machine learns and iterates. That's why blueprint SEO is dead to me. I now think about zero-based SEO, which means: test everything.

Fluid ranking algorithms

The idea of zero-based SEO came from zero-based budgeting: you plan your budget according to your needs (expenses), which means you need to develop your budget from zero instead of planning your expenses based on your budget. The same is true for my view on SEO: every site is different, every situation is different. Coming in with a blueprint and saying, "this is how things should be in my mind," doesn't work anymore. But why?

The ranking algorithm has become "fluid." In fact, we shouldn't even say "the ranking algorithm" or think about a single algorithm that makes all decisions. That worldview is also outdated. We're dealing with many algorithms, probably orders of magnitude more than a hundred.

John Mueller provides a great summary on the Search Off the Record podcast, which I transcribed for you below:

"Search is not a science, I think that's really important to keep in mind, in the sense that there is no absolute truth out there with regards to which page should be ranking for which query. Rather, these are things that can change over time, these are things that people are working on to keep improving, and sometimes you can have discussions with smart people about which of these pages should be ranking first, or, if we have two very similar pages, should they both be ranking, or should only one of them be ranking. Those are, I don't know, interesting discussions to have.

But it's all based on, kind of, I don't know, opinions and kind of ambiguous information that you have from the web. So that's kind of the one thing. And the other thing is that there are so many different ways to reach a final result in search. It's not that every site has to do the same thing. I use a mental model of something like a neural network, which is not how we would do this in search, but it kind of helps me: we take the query that a user has, and we try to understand it and split it up into lots of small parts. And these small signals that we have from what the user is looking for, they go through this big network, where different nodes along the way kind of let the individual parts pass, or they kind of reroute them a little bit. And in the end, we come up with a simple ranking of the different web pages for that query. And when you have this kind of network, there are lots of different paths that could lead through it and end up with the same result.

So it's not that every site has to do the same thing, but rather, there are multiple ways to get there. And you don't have to blindly follow just one ranking factor to get to the end result. It's also not the case that any particular factor within this big network is the one deciding factor, or that you can say this factor plays a 10% role, because maybe for some sites, for some queries, it doesn't play a role at all, and maybe for other sites, for other queries, it's the deciding factor. It's really hard to say how these fit together.

So, this kind of big network also changes over time, of course, as we try to improve our search results. Essentially, we try to optimize how we understand the query, and we try to optimize the routing between the query and the search results. And these kinds of changes take place all the time. And the best way for a website to remain in a stable position, which is not guaranteed at all, is really to make sure that you have a wide variety of different factors that you work on, and to keep this diversity of your website broad.

So, similar to how you might want to improve diversity in a team to get different viewpoints, that's the same thing you'd want to see on a website, so that regardless of how things are routed through this network to find the search results, we can understand that this website is relevant in different ways, and all of these add up to telling us that it's actually relevant for a particular query.

So that's all to say that it's really, really hard to take any particular element and say this has such and such an effect on the search results. And similarly, it's pretty much impossible to go the other way around and say, well, looking at the search results, I can tell that this particular factor is this important, or more important than this other factor, because it's really not the case that you can route things backwards and say, well, looking forward it goes like this, and looking backward it's exactly the same route, because there are just so many different ways to get to the end result. So that's kind of my short monologue on ranking factors.

I think it's worthwhile to keep in mind, when you talk about ranking factors externally, that there are lots of different ways to get there. It's not something that you can deduce down to one specific element, or simplify into an ordered list of elements that you need to check off. Rather, you need to make sure that your website is good in a variety of different ways, and not just blindly focus on one particular element and then try to make that element look natural, so that, it's like, hopefully the algorithms won't think that I'm trying to do something sneaky here. Instead, just make sure that everything is kind of natural."

You should read the whole passage, but here are the key points:

  1. Google looks at many ranking signals, and they change over time
  2. It's hard, sometimes impossible, for humans to reverse engineer all factors
  3. We need to avoid making a single factor responsible for ranking position
  4. Several similar pages can rank at the top if they fulfill different user intents for the same keyword (example)
  5. As Google's understanding of queries and user intent changes, so can the ranking composition
  6. Rankings can fluctuate wildly, and sometimes there is nothing you can do about it

When I write about "fluid ranking algorithms," I mean that the process by which Google ranks web pages is so adaptable that it appears fluid. Google understands and weighs signals differently for each query.

The perfect example is E-A-T. Gary Illyes summed it up at the SF Bay Area Search meetup in 2019:

"E-A-T is a dumbed-down version of what the algorithms [are trying to] do. There’s, of course, not just one algorithm. There are probably millions of little algorithms that work together in unison. One algorithm might endorse what the scientific community thinks [Gary notes that he’s citing the search quality rater guidelines here]. The rater guidelines reflect what the algorithms are aiming for. There’s no E-A-T score."

The 23 questions Google published when Panda came out in 2011 to help webmasters improve their sites represent the same idea as E-A-T: a set of outcomes the algorithms are trying to achieve.

What's crucial to understand, and one result of the fluidity of ranking algorithms, is how user perception impacts organic rankings. Content that users perceive as low-trust or low-value diminishes a page's chance to rank well in organic search.

What that means for Organic Growth

We can still work with a high-level map in SEO: we know backlinks, content, and title tags have an impact and should be optimized. I don't think anyone argues with that. From there, we need to adopt a zero-based approach and test everything.

Simple "before and after" tests are better than nothing, but we need to build stronger muscles in a/b testing, time-spaced testing, and alpha groups. Not every site can run clean a/b tests, but that can't stop us from figuring out what works for our target queries. It's crucial.

We need to think about SEO more like medicine or investing. We have rough ideas of what works, but a single data point proves nothing. We need to make assumptions and then ruthlessly question and test them.

That's why SEO has become more like Growth, and that's why I call what I do "Organic Growth".

Dive deeper

  1. https://www.kevin-indig.com/blog/how-google-tests-new-content-in-the-search-results/
  2. https://www.kevin-indig.com/blog/my-notes-from-the-gary-illes-qa-bay-area-search/
  3. https://developers.google.com/search/blog/2011/05/more-guidance-on-building-high-quality
  4. https://www.kevin-indig.com/podcast/seo-testing-101-with-will-critchlow/
  5. https://www.kevin-indig.com/blog/how-to-rock-seo-in-a-machine-learning-world/