One passage stood out to me in Google’s latest blog post about algorithm updates:
Last month we launched an improvement we made to help people find better product reviews through Search. We have an automated system that tries to determine if a review seems to go beyond just sharing basic information about a product and instead demonstrates in-depth research or expertise. This helps people find high quality information from the content producers who are making it.
Building a system that determines how deep an article goes into a subject sounds easy but is very difficult! How could that work?
Let’s start with the outcome. The questions Google provides around the product reviews update give us an understanding of what its many algorithms are trying to achieve:
- Express expert knowledge about products where appropriate?
- Show what the product is like physically, or how it is used, with unique content beyond what’s provided by the manufacturer?
- Provide quantitative measurements about how a product measures up in various categories of performance?
- Explain what sets a product apart from its competitors?
- Cover comparable products to consider, or explain which products might be best for certain uses or circumstances?
- Discuss the benefits and drawbacks of a particular product, based on research into it?
- Describe how a product has evolved from previous models or releases to provide improvements, address issues, or otherwise help users in making a purchase decision?
- Identify key decision-making factors for the product's category and how the product performs in those areas? For example, a car review might determine that fuel economy, safety, and handling are key decision-making factors and rate performance in those areas.
- Describe key choices in how a product has been designed and their effect on the users beyond what the manufacturer says?
In the best case, a human and Google’s ranking algorithm would give the same answer to each question in the context of a website.
Many of these questions aren’t fundamentally different from Google's guidance around core updates:
"Is this content written by an expert or enthusiast who demonstrably knows the topic well?"
"Is the content free from easily-verified factual errors?"
Some answers to these questions are easier to measure, some much harder.
Let’s take a step back.
Google uses word embeddings to understand entities like people, companies, books, cities, and more. Word embeddings are the basis for knowledge graphs and entity mapping. A transformer model trained on Google's index (and Knowledge Graph) should understand the difference between content written by an expert vs. an amateur pretty well. Transformer technology allows Google to analyze sentences and understand the relationships between words:
Neural networks for machine translation typically contain an encoder reading the input sentence and generating a representation of it. A decoder then generates the output sentence word by word while consulting the representation generated by the encoder. The Transformer starts by generating initial representations, or embeddings, for each word. [...] Then, using self-attention, it aggregates information from all of the other words, generating a new representation per word informed by the entire context, represented by the filled balls. This step is then repeated multiple times in parallel for all words, successively generating new representations.
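The self-attention step described in that quote can be sketched in a few lines. This is a minimal toy illustration with identity projections and made-up vectors, not Google's implementation — real transformers use learned query/key/value projections and many heads:

```python
import numpy as np

def self_attention(X):
    """Single-head self-attention over word vectors X (seq_len x d).

    Each word's new representation is a weighted average of every
    word's vector, with weights from query-key similarity -- the
    "informed by the entire context" step in the quote above.
    """
    d = X.shape[1]
    Q, K, V = X, X, X  # toy version: real models learn these projections
    scores = Q @ K.T / np.sqrt(d)                  # pairwise similarity
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)  # softmax per word
    return weights @ V                             # context-aware vectors

# Three "words", each a hypothetical 4-dimensional embedding
X = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0],
              [1.0, 1.0, 0.0, 0.0]])
out = self_attention(X)  # same shape as X, each row now mixes context
```

Repeating this mixing step in stacked layers is what lets the model build up sentence-level relationships between words.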
Backlinks add a layer of trustworthiness, authoritativeness, and expertise on top of the entity graph. What does that make? E-A-T! In simple terms, backlinks and deep natural language understanding (across different modalities, thanks to MUM) allow Google to build an “expertise graph”.
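One crude way to picture such an "expertise graph" — my speculation, not Google's actual system — is to combine a link-based authority score (PageRank-style) with how close a site's content embedding sits to an "expert" topic centroid. All names and numbers below are hypothetical:

```python
import numpy as np

def pagerank(adj, damping=0.85, iters=50):
    """Power-iteration PageRank over an adjacency matrix (row links to col)."""
    n = adj.shape[0]
    out_deg = adj.sum(axis=1, keepdims=True)
    out_deg[out_deg == 0] = 1          # guard against dangling pages
    M = (adj / out_deg).T              # column-stochastic transition matrix
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1 - damping) / n + damping * (M @ r)
    return r

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical link graph of three sites; site 2 collects the most links.
links = np.array([[0, 1, 1],
                  [0, 0, 1],
                  [1, 0, 0]], dtype=float)
authority = pagerank(links)

# Hypothetical content embeddings vs. an "expert topic" centroid.
expert_centroid = np.array([1.0, 1.0, 0.0])
site_embeddings = [np.array([0.9, 1.1, 0.1]),
                   np.array([0.1, 0.2, 1.0]),
                   np.array([1.0, 0.9, 0.0])]

# Naive "expertise" score: link authority weighted by topical closeness.
expertise = [authority[i] * cosine(e, expert_centroid)
             for i, e in enumerate(site_embeddings)]
```

The point of the sketch: a site that writes off-topic (site 1 above) scores poorly no matter how many links it has, while topical depth plus links compounds — which is roughly what E-A-T describes.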
Expert vs amateur content
That’s bad news for content teams that try to create expert content without experts, simply by synthesizing Google’s top results. For a while, having marketers write about topics they don't understand worked well enough. The truth is that people with some experience in a field can quickly sniff out whether an article was written by an expert or an amateur - and now, so can Google.
When the Medic Update rolled out in 2018, Google introduced the world to the concept of E-A-T (expertise, authoritativeness, trustworthiness). Initially, E-A-T applied to YMYL sites (your money or your life), meaning sites in industries like health, insurance, or mortgages. Now it seems Google has expanded the concept into all topics that require expertise. The product reviews update I mentioned at the beginning of this article is a good example.
The rising cost of content
What does that mean for companies that rely on content for the majority of their SEO results? Get experts to write your content, of course! In some cases, it’s difficult to find experts who freelance or are open to contract work. In that case, it might work to have experts outline an article and an editor or marketer flesh it out. Make sure the expert provides more input than just headlines, and that the editor doesn’t edit the expertise out of the content.
You also want to avoid the trap of relying only on students in a field to write your content. It pays off to pay a bit more (4-figures) for pillar content. That’s right. The content game has become that expensive. We often attach $0 to SEO traffic because it’s free, right? Well, not when you factor in content production cost.
Creating content has lower barriers than ever before. The result is growing competition and rising content production costs. Longer content alone doesn’t cut it anymore; you need better-looking content and expert writers. In competitive industries, these costs could keep climbing with no end in sight.
To be clear, non-experts writing content that clearly needs expertise was never a good idea to begin with, but it worked. That time is coming to an end.