Image search – from text to visuals

Google announced that it will push more visuals in Search and revamp Image Search entirely:

“Since then, we’ve been working to include more imagery and videos in Search, whether it’s illustrated weather reports, live sports clips, or our visual recipe experience. We’ve been able to do this in part thanks to advancements in computer vision, which help us extract concepts from images. We model hundreds of millions of fine-grained concepts for every image and video that we have in our index. For example, an image of a tiger might generate concepts like “feline,” “animal” or “big cat.” This lets us identify a picture by looking at its pixels, without needing to be told by the words on a page.”
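The pipeline the quote describes — classify an image into fine-grained labels, then expand them into broader concepts — can be sketched with a toy "is-a" hierarchy. The labels and hierarchy below are invented for illustration; Google models hundreds of millions of concepts with learned vision models, not a hand-made dictionary:

```python
# Toy sketch of concept expansion: a fine-grained image label is mapped to
# broader concepts via a small hand-made "is-a" hierarchy (hypothetical data).
PARENTS = {
    "tiger": ["big cat"],
    "big cat": ["feline"],
    "feline": ["animal"],
    "animal": [],
}

def expand_concepts(label: str) -> set:
    """Return the label plus all broader concepts it implies."""
    concepts, frontier = set(), [label]
    while frontier:
        current = frontier.pop()
        if current not in concepts:
            concepts.add(current)
            frontier.extend(PARENTS.get(current, []))
    return concepts

print(expand_concepts("tiger"))
# contains: tiger, big cat, feline, animal — matching Google's tiger example
```

With such expansions attached to every image, a query for "big cat" can match a photo that was only ever labeled "tiger" — no alt text required.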

Google has made tremendous strides in computer vision in recent years and seems to have reached a point where it can connect images and videos to entities. It has been able to "understand" the content of videos for quite some time, as you can see in the screenshot below.

(Screenshot: Google understands video content)

This is not just a match between user intent and video title; this is Google showing you exactly the part of the video in which the solution to your problem is discussed. That's huge! No longer having to rely on signals such as alt text, file names, and surrounding content allows Google to refine image results much further and to judge whether pages truly contain helpful or irrelevant images.

“Using computer vision, we’re now able to deeply understand the content of a video and help you quickly find the most useful information in a new experience called featured videos.”

Making search more "visual" fits perfectly with the broader trends we also see on social networks. Instagram is well on its way to overtaking Facebook, in part because many things are easier to communicate visually.

The March, August, and September updates – precursors of the new Google

I want to put a bold theory out there: the hard-hitting algorithm updates we saw in March, August, and September (and potentially February and April) were roll-outs of the changes described in the announcements! It seems that "the new Google" already began its transformation in early 2018.

One paragraph in "Making visual content more useful in Search" cannot be overlooked:

“Over the last year, we’ve overhauled the Google Images algorithm to rank results that have both great images and great content on the page. For starters, the authority of a web page is now a more important signal in the ranking. If you’re doing a search for DIY shelving, the site behind the image is now more likely to be a site related to DIY projects. We also prioritize fresher content, so you’re more likely to visit a site that has been updated recently.

Also, it wasn’t long ago that if you visited an image’s web page, it might be hard to find the specific image you were looking for when you got there. We now prioritize sites where the image is central to the page, and higher up on the page. So if you’re looking to buy a specific pair of shoes, a product page dedicated to that pair of shoes will be prioritized above, say, a category page showing a range of shoe styles.”

That’s pretty clear!

This statement suggests three ranking factors that “recently” increased in weight:

  1. Certain queries demand that the page contain images, and those images have to be high up on the page (easy to find)
  2. Authority (part of E-A-T)
  3. Fresh/regular content

When we speculated that the "Medic" update was about E-A-T, we were not wrong. But we might have to rethink some of the Core Updates we've seen this year: they might all be connected to each other and driven by the three factors above, plus Natural Language Understanding.

Why “Natural Language Understanding”? Because Neural Matching!

“But we’ve now reached the point where neural networks can help us take a major leap forward from understanding words to understanding concepts. Neural embeddings, an approach developed in the field of neural networks, allow us to transform words to fuzzier representations of the underlying concepts, and then match the concepts in the query with the concepts in the document. We call this technique neural matching.”

In a Twitter thread about the announcement, Google spokesperson Danny Sullivan pointed out that Neural Matching is applied to roughly 30% of queries.

Last few months, Google has been using neural matching, –AI method to better connect words to concepts. Super synonyms, in a way, and impacting 30% of queries. Don’t know what “soap opera effect” is to search for it? We can better figure it out. pic.twitter.com/Qrwp5hKFNz

— Danny Sullivan (@dannysullivan) September 24, 2018

Neural Matching is based on "Fuzzy String Matching", which helps to understand queries that imply a concept but don't mention it explicitly. Danny Sullivan says it helps to understand synonyms, which essentially means understanding user intent. Google's understanding of what users are actually trying to achieve has significantly improved – for images and videos (see above), but also for textual search. Apparently, Neural Matching is applied to 30% of queries (massive!) and has been rolled out over the last couple of months – another hint at the big updates from this year.
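The embedding idea from Google's quote above can be sketched in a few lines: represent a query and a document as averaged word vectors and compare them with cosine similarity, so two texts can match without sharing a single word. The three-dimensional toy vectors below are invented for illustration; real models learn vectors with hundreds of dimensions from data:

```python
import math

# Invented 3-d "embeddings" — real systems learn high-dimensional vectors.
VECTORS = {
    "soap":      [0.9, 0.1, 0.0],
    "opera":     [0.8, 0.2, 0.1],
    "effect":    [0.7, 0.3, 0.2],
    "motion":    [0.75, 0.25, 0.1],
    "smoothing": [0.85, 0.15, 0.05],
    "tv":        [0.8, 0.1, 0.1],
}

def embed(text: str) -> list:
    """Average the word vectors of all known words (assumes at least one)."""
    words = [w for w in text.lower().split() if w in VECTORS]
    dims = len(next(iter(VECTORS.values())))
    return [sum(VECTORS[w][i] for w in words) / len(words) for i in range(dims)]

def cosine(a, b) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# "soap opera effect" and "motion smoothing tv" share zero words,
# but their averaged concept vectors are nearly parallel:
print(round(cosine(embed("soap opera effect"), embed("motion smoothing tv")), 3))
# close to 1.0
```

That is the gist of Danny Sullivan's "soap opera effect" example: the query and the answer page describe the same concept (motion smoothing on TVs) with entirely different words.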

Fuzzy string algorithms are not new per se; they're already applied in related searches ("did you mean…"). Neural Matching seems to be a fuzzy string algorithm on steroids, going beyond the tools we see in many search engine papers, such as n-grams and the Levenshtein distance between words. Based on what we know, Neural Matching might very well be an upgrade to Rankbrain, which Google introduced in 2015 and which was said to impact about 15% of queries. Rankbrain was said to help Google understand what a search is about – especially queries that had never occurred before – which is pretty much what Neural Matching does.
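For contrast, the classic fuzzy-matching tool mentioned above, Levenshtein distance, only counts character edits between two strings and has no notion of meaning — a textbook dynamic-programming version:

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character insertions, deletions,
    and substitutions needed to turn string a into string b."""
    prev = list(range(len(b) + 1))          # distances for empty prefix of a
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(
                prev[j] + 1,                # delete ca
                curr[j - 1] + 1,            # insert cb
                prev[j - 1] + (ca != cb),   # substitute (free if equal)
            ))
        prev = curr
    return prev[len(b)]

print(levenshtein("google", "googel"))  # → 2 (two substitutions)
print(levenshtein("shoe", "shoes"))     # → 1 (one insertion)
```

This kind of matching catches typos like "googel", but it can never connect "soap opera effect" to "motion smoothing" — which is exactly the gap neural matching closes.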

This is a look back at a big change in search but which continues to be important: understanding synonyms. How people search is often different from information that people write solutions about. pic.twitter.com/sBcR4tR4eT

— Danny Sullivan (@dannysullivan) September 24, 2018

Bottom line: we see the continuation of two important concepts: Hummingbird and Rankbrain. Hummingbird set the foundation for “Knowledge Graph 2.0” (Topic Layers) and Rankbrain for Neural Matching.

5 tips to rock the “new Google”

I want to use this opportunity to summarize a few actionable tips I derive from these announcements and from my observations of SEO over the last 12 months.

First, optimize for topics and entities instead of keywords/queries. That helps you build a body of content that covers every aspect and question, instead of a loose collection of articles.

Ben Gomes tells a story about a library in the introductory article to the announcement:

“Growing up in India, there was one good library in my town that I had access to—run by the British Council. It was modest by western standards, and I had to take two buses just to get there. But I was lucky, because for every child like me, there were many more who didn’t have access to the same information that I did. Access to information changed my life, bringing me to the U.S. to study computer science and opening up huge possibilities for me that would not have been available without the education I had.”

(Hat tip to Jimmy Daly, who used this analogy before).

The analogy of a library fits perfectly here: a library has information for beginners and experts (#Journeys), well-sorted and structured content, and people can come back to it any time.

Second, look at the Knowledge Card Tabs for entities you want to write about and make sure you cover each intent, either in one big article or in several. With Tabs, Google shows us what it considers important to know about an entity. Cover that in your content, because chances are high that it is useful outside of Knowledge Cards as well.

Third, create videos and images from your content to support the concepts you convey in it. As described above, rich media content isn't only helpful; it might be crucial to ranking high in organic search. When you add images to your articles to help visitors understand the gist of them, place those images high up on the page. As visual content becomes more important, it also pays to "repurpose" content, as I described in "The time to take SEO beyond Google is now".

Fourth, create fresh content on an ongoing basis. That’s not new and shouldn’t come as a surprise. However, Google explicitly stating that freshness has become more important and Google showing Tabs dynamically tells me to pay extra attention to it. Certain topics need constant fresh content, even though it might not seem like it. Think of products that are continually developed or new studies in scientific fields.

Fifth, understand the intent behind queries and topics. Also not new, but instead of just reverse engineering user intent, we should think about how to fit the intent behind a query into a journey. Anticipate the next steps people might want to take after visiting a certain page on your site. Look at related searches and the questions users ask most about a keyword (or entity). SEMrush, Ahrefs, and Searchmetrics have useful features for that.

tl;dr

I hinted at a lot of these concepts in "How to rock SEO in a machine learning world", but they're now taken to the next level. The transition from the old to the "new Google" began with Hummingbird, continued with Rankbrain, and has accelerated this year with better Natural Language Understanding algorithms and Computer Vision.

Paired with the 10 ranking factors we know to be true, this should give us a good idea of what’s important in SEO now and in the future.

Recommendations – building the American Toutiao

Google started pushing recommendations about a year ago, with a feed in its mobile app. That feed – now fittingly called "Discover" – is being revamped and rolled out to the mobile website as well!

“Think of it as your new mobile homepage where you can not only search, but also discover useful, relevant information and inspiration from across the web for the topics you care about most.”

I can also see this feature becoming a source of feedback about which content people engage with most, which would not only refine and personalize the recommendations in the feed but could also feed into organic search as a sort of ranking signal.

Google's shift towards recommendations reminds me a lot of Toutiao, China's leading content recommendation platform.

“Without any explicit user inputs, social graph, or product purchase history to rely on, Toutiao offers a personalized, high quality-content feed for each user that is powered by machine and deep learning algorithms.” (source: Y Combinator)

Toutiao
Source: https://blog.ycombinator.com/the-hidden-forces-behind-toutiao-chinas-content-king/

Toutiao has been growing at an insane pace: according to Y Combinator, users spend 74 minutes in the app every day – more than on Snapchat, Instagram, or Facebook. Whether Google was inspired by Toutiao is hard to say, and it's possible that Google is trying to fend off competition early by occupying that space. Either way, just like Google, Toutiao transformed from a curation engine into a recommendation engine and adopted stories as a content format relatively early.

This description of Toutiao's product would fit Google's new approach very well:

“The app uses machine and deep learning algorithms to source and surface content that users will find most interesting. Toutiao’s underlying technology learns about readers through their usage – taps, swipes, time spent on each article, time of the day the user reads, pauses, comments, interactions with the content and location – but doesn’t require any explicit input from the user and is not built on their social graph. Today, each user is measured across millions of dimensions and the result is a personalized, extensive, and high-quality content feed for every user, each time they open the app.”
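The mechanism in that quote — inferring interests purely from implicit signals such as taps and reading time, with no explicit input — can be sketched as a running interest profile that scores candidate articles. The topic tags and weights below are invented for illustration; a real system learns millions of dimensions per user:

```python
from collections import defaultdict

# Hypothetical topic tags per article; real systems use learned features.
ARTICLES = {
    "tech-review":  {"tech": 1.0, "gadgets": 0.8},
    "football-cup": {"sports": 1.0, "football": 0.9},
    "ai-explainer": {"tech": 0.9, "ai": 1.0},
}

def update_profile(profile, article_id, seconds_read):
    """Implicit feedback: longer reading time raises the weight of the
    article's topics in the user's interest profile."""
    for topic, weight in ARTICLES[article_id].items():
        profile[topic] += weight * seconds_read

def score(profile, article_id):
    """Dot product between the user's interests and the article's topics."""
    return sum(profile[t] * w for t, w in ARTICLES[article_id].items())

profile = defaultdict(float)
update_profile(profile, "tech-review", 120)  # user read a tech article for 2 min
ranked = sorted(ARTICLES, key=lambda a: score(profile, a), reverse=True)
print(ranked)  # tech articles now outrank sports, with no explicit input given
```

The point of the sketch: one reading session is enough to reorder the feed, which is exactly the feedback loop that could also funnel engagement signals back into organic rankings.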

Image search – from text to visuals

Google announced to push more visuals in search and revamp Image Search entirely:

“Since then, we’ve been working to include more imagery and videos in Search, whether it’s illustrated weather reports, live sports clips, or our visual recipe experience. We’ve been able to do this in part thanks to advancements in computer vision, which help us extract concepts from images. We model hundreds of millions of fine-grained concepts for every image and video that we have in our index. For example, an image of a tiger might generate concepts like “feline,” “animal” or “big cat.” This lets us identify a picture by looking at its pixels, without needing to be told by the words on a page.”

Google has made tremendous strides in computer vision in the past years and it seems to be at a point at which it can connect images and videos to entities. It’s able to “understand” the content in videos for quite some time, as you can see in the screenshot below.

Google computer vision videos
Google understands video content

This is not just a match between user intent and video title, this is Google showing you exactly the part of the video in which the solution to your problem is being discussed. That’s huge! Not relying on factors such as alt-tag, file name, and surrounding content anymore allows Google to refine image results much more and understand whether pages truly contain helpful or irrelevant images.

“Using computer vision, we’re now able to deeply understand the content of a video and help you quickly find the most useful information in a new experience called featured videos.”

Making search more “visual” fits perfectly into the greater trends we also see on social networks. Instagram is on the best way to overtake Facebook, in part because many things can be easier communicated through visuals.

The March, August, and September updates – precursors of the new Google

I want to put a bold theory out there: the hard-hitting algorithm updates we’ve seen in March, August, and September (potentially February and April) were roll-outs of the changes described in the announcement(s)! It seems that “the new Google” has already started its transformation in early 2018.

One paragraph in Making visual content more useful in Search cannot be overlooked:

“Over the last year, we’ve overhauled the Google Images algorithm to rank results that have both great images and great content on the page. For starters, the authority of a web page is now a more important signal in the ranking. If you’re doing a search for DIY shelving, the site behind the image is now more likely to be a site related to DIY projects. We also prioritize fresher content, so you’re more likely to visit a site that has been updated recently.

Also, it wasn’t long ago that if you visited an image’s web page, it might be hard to find the specific image you were looking for when you got there. We now prioritize sites where the image is central to the page, and higher up on the page. So if you’re looking to buy a specific pair of shoes, a product page dedicated to that pair of shoes will be prioritized above, say, a category page showing a range of shoe styles.”

That’s pretty clear!

This statement suggests three ranking factors that “recently” increased in weight:

  1. Certain queries demand that the page contains images, placed high up on the page (easy to find)
  2. Authority (part of E-A-T)
  3. Fresh/regular content
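To make these three factors concrete, here is a toy scoring sketch. The weights, decay constants, and the formula itself are entirely my own invention for illustration – Google has published no such formula:

```python
# Toy model of the three factors Google describes for Image Search ranking.
# All weights and scoring functions below are invented for illustration.
from dataclasses import dataclass

@dataclass
class Page:
    authority: float        # 0..1, e.g. a proxy for links/E-A-T signals
    days_since_update: int  # freshness proxy
    image_position: int     # pixel offset of the image from the top of the page

def image_rank_score(page: Page) -> float:
    """Combine the three signals; higher is better."""
    freshness = 1.0 / (1.0 + page.days_since_update / 30.0)   # decays over weeks
    centrality = 1.0 / (1.0 + page.image_position / 500.0)    # higher up = better
    return 0.5 * page.authority + 0.3 * centrality + 0.2 * freshness

# The shoe example from the quote: a dedicated product page (image near the
# top) should outscore a category page where the image sits far down.
product_page = Page(authority=0.8, days_since_update=5, image_position=200)
category_page = Page(authority=0.8, days_since_update=5, image_position=2500)
assert image_rank_score(product_page) > image_rank_score(category_page)
```

The point of the sketch is only that all three signals contribute and that, with authority and freshness held equal, image placement can decide the ordering.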

When we speculated that the “Medic” update was about E-A-T, we were not wrong. But we might have to rethink some of the Core Updates we’ve seen this year. They might all be connected to each other, driven by the three factors mentioned above plus Natural Language Understanding.

Why “Natural Language Understanding”? Because Neural Matching!

“But we’ve now reached the point where neural networks can help us take a major leap forward from understanding words to understanding concepts. Neural embeddings, an approach developed in the field of neural networks, allow us to transform words to fuzzier representations of the underlying concepts, and then match the concepts in the query with the concepts in the document. We call this technique neural matching.”

In a Twitter thread about the announcement, Google spokesperson Danny Sullivan pointed out that Neural Matching might be applied to 30% of queries.

Last few months, Google has been using neural matching, –AI method to better connect words to concepts. Super synonyms, in a way, and impacting 30% of queries. Don’t know what “soap opera effect” is to search for it? We can better figure it out. pic.twitter.com/Qrwp5hKFNz

— Danny Sullivan (@dannysullivan) September 24, 2018

Neural Matching is based on “Fuzzy String Matching”, which helps to understand queries that imply a concept but don’t mention it explicitly. Danny Sullivan says it helps Google understand synonyms, which ultimately comes down to user intent. Google’s understanding of what users are actually trying to achieve has significantly improved – for images and videos (see above), but also for textual search. Apparently, Neural Matching is applied to 30% of queries (massive!) and has been rolled out over the last couple of months – another hint at the big updates from this year.
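A tiny sketch of the idea behind neural matching: compare the query and the document as concept vectors rather than as strings. The three-dimensional “embeddings” below are hand-made toy values (real embeddings have hundreds of dimensions), reusing Danny Sullivan’s “soap opera effect” example:

```python
# Sketch of the neural-matching idea: match concepts, not literal words.
# The vectors are invented toy values; axis meanings are purely illustrative.
import math

EMBEDDINGS = {
    # invented axes: [display-tech, motion/interpolation, tv-drama]
    "soap opera effect": [0.9, 0.8, 0.1],
    "motion smoothing":  [0.8, 0.9, 0.0],  # the concept behind the query
    "daytime tv drama":  [0.1, 0.0, 0.9],  # shares surface words, wrong concept
}

def cosine(a, b):
    """Cosine similarity between two vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

query = EMBEDDINGS["soap opera effect"]
# The conceptually related page outscores the one sharing surface words.
assert cosine(query, EMBEDDINGS["motion smoothing"]) > \
       cosine(query, EMBEDDINGS["daytime tv drama"])
```

A user who searches “soap opera effect” without knowing the term “motion smoothing” still lands on the right concept – which is exactly the behavior Sullivan’s tweet describes.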

Fuzzy string algorithms are not new per se; they’re applied in related searches (“did you mean…”). Neural Matching seems to be a fuzzy string algorithm on steroids, going beyond the tools we see in many search engine papers, like n-grams and the Levenshtein distance between words. Based on what we know, Neural Matching might very well be an upgrade to Rankbrain, which Google introduced in 2015 and said to impact about 15% of queries. Rankbrain was said to help Google understand what a search is about – especially queries that had never occurred before – which is pretty much what Neural Matching does.
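For contrast, here is the classic Levenshtein distance mentioned above – the kind of pre-neural fuzzy string tool that catches typos but has no notion of meaning (the example words are my own):

```python
# Classic Levenshtein edit distance, computed with a rolling DP row.
# Pure string similarity: great for typos, blind to semantics.
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character edits turning a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(
                prev[j] + 1,               # deletion
                curr[j - 1] + 1,           # insertion
                prev[j - 1] + (ca != cb),  # substitution (free if chars match)
            ))
        prev = curr
    return prev[-1]

assert levenshtein("rankbrain", "rankbrian") == 2  # a swapped-letter typo
assert levenshtein("shoes", "shoe") == 1
# Semantically related words still look "far apart" to string distance:
assert levenshtein("sneakers", "shoes") == 5
```

That last assertion is the whole argument: string distance alone can never tell Google that “sneakers” and “shoes” belong to the same concept, which is the gap neural matching closes.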

This is a look back at a big change in search but which continues to be important: understanding synonyms. How people search is often different from information that people write solutions about. pic.twitter.com/sBcR4tR4eT

— Danny Sullivan (@dannysullivan) September 24, 2018

Bottom line: we see the continuation of two important concepts: Hummingbird and Rankbrain. Hummingbird set the foundation for “Knowledge Graph 2.0” (Topic Layers) and Rankbrain for Neural Matching.

5 tips to rock the “new Google”

I want to use this opportunity to summarize a couple of actionable tips I derive from these announcements and my observations of SEO over the last 12 months.

First, optimize for topics and entities instead of keywords/queries. That helps you build a body of content that covers every aspect and question, instead of a loose collection of articles.

Ben Gomes tells a story about a library in the introductory article to the announcement:

“Growing up in India, there was one good library in my town that I had access to—run by the British Council. It was modest by western standards, and I had to take two buses just to get there. But I was lucky, because for every child like me, there were many more who didn’t have access to the same information that I did. Access to information changed my life, bringing me to the U.S. to study computer science and opening up huge possibilities for me that would not have been available without the education I had.”

(Hat tip to Jimmy Daly, who used this analogy before).

The analogy of a library fits perfectly here: a library has information for beginners and experts alike (#Journeys), well-sorted and structured content, and people can come back to it any time.

Second, look at the Knowledge Card Tabs for entities you want to write about and make sure you cover each intent, whether in one big article or several smaller ones. With Tabs, Google shows us what it thinks is important to know about an entity. Cover that in your content, because chances are high that it is useful outside of Knowledge Cards as well.

Third, create videos and images from your content to support the concepts you convey in it. As described above, rich media isn’t only helpful – it might be crucial to rank high in organic search. When you add images to your articles to help visitors understand their gist, place them high up the page. As visual content becomes more important, it also helps to “repurpose” content, as I described in “The time to take SEO beyond Google is now”.
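As a rough way to audit the “high up the page” advice, here is a sketch that counts how many elements open before the first `<img>`. It is a crude proxy for rendered position (real layout depends on CSS), used here purely for illustration:

```python
# Crude "how high up is the first image?" heuristic using the stdlib parser.
# Counting preceding start tags is an invented proxy for visual position.
from html.parser import HTMLParser

class FirstImageFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.tags_seen = 0
        self.tags_before_img = None  # None means the page has no image

    def handle_starttag(self, tag, attrs):
        if tag == "img" and self.tags_before_img is None:
            self.tags_before_img = self.tags_seen
        self.tags_seen += 1

def tags_before_first_image(html: str):
    finder = FirstImageFinder()
    finder.feed(html)
    return finder.tags_before_img

# Hypothetical product vs. category markup, echoing the shoe example:
product = "<html><body><h1>Blue sneaker</h1><img src='shoe.jpg'></body></html>"
category = ("<html><body><nav>...</nav><div><p>All shoes</p></div>"
            "<img src='grid.jpg'></body></html>")
assert tags_before_first_image(product) < tags_before_first_image(category)
```

Run against your own templates, a check like this flags pages where the key image sits below a wall of navigation and text.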

Fourth, create fresh content on an ongoing basis. That’s not new and shouldn’t come as a surprise. However, Google explicitly stating that freshness has become more important and Google showing Tabs dynamically tells me to pay extra attention to it. Certain topics need constant fresh content, even though it might not seem like it. Think of products that are continually developed or new studies in scientific fields.

Fifth, understand the intent behind queries and topics. That’s also not new, but instead of just reverse-engineering user intent, we should think about how to fit the intent behind a query into a journey. Anticipate the next steps people might want to take after visiting a certain page on your site. Look at related searches and the most-asked questions users have about a keyword (or entity). SEMrush, Ahrefs, and Searchmetrics have useful features for that.

tl;dr

I hinted at a lot of these concepts in “How to rock SEO in a machine learning world”, but they’re now taken to the next level. The transition from the old to the “new Google” began with Hummingbird, continued with Rankbrain, and has been accelerated this year with better Natural Language Understanding algorithms and Computer Vision.

Paired with the 10 ranking factors we know to be true, this should give us a good idea of what’s important in SEO now and in the future.



Tabs in Google Knowledge Graph
New “Tabs” in Knowledge Cards

Notice that these tabs differ depending on what users search for (think: depending on the entity). In the article, Google mentions two examples: one for the keyword “Pugs” and one for “Yorkshire Terrier”. Both have different tabs in their respective Knowledge Cards. The Pug Knowledge Card shows tabs for “Buy or adopt”, “videos”, “names”, “health”, and “how to train”; the Terrier Knowledge Card shows “Characteristics”, “Grooming Tips”, and “History”. That difference is subtle but important. Tabs are not just a new format for Knowledge Cards – they’re an evolution of the Knowledge Graph. They might change over time, and I assume changing user behavior and intent will drive this. The displayed tabs say a lot about what information Google deems important for the keyword (entity) at that moment. That information is something we SEOs can reverse engineer to improve and guide content creation (more in the last chapter)!

Topic Layers in Knowledge Graph

This evolution of the Knowledge Graph moves the web further towards entities (see Tim Berners-Lee’s TED talk and the 5-part series on Entity Indexing by the incredibly smart Cindy Krum). The Knowledge Graph 2.0 (my naming, not Google’s) goes beyond understanding the relationships between entities (things, people, places). It’s based on Topic Layers that map the user journey across different entities, depending on the searcher’s expertise. Smart SEOs like Cyrus Shepard and AJ Kohn (and surely others) have pointed out that we need to anticipate the next questions visitors have when they come to our site, instead of just answering the one they came for. This is exactly how we need to understand the concept of “Journeys”: not just as a Google feature, but as an approach to creating content.


The search engine becomes a discovery engine. Discovery is traditionally reserved for social networks, thus, I also see this transition as an attack on Facebook & Co. This “silent war” has been going on for a long time and up until about 2 years ago, it seemed that social networks would win. Not only had Google failed with Google Plus (another attempt at creating a social network), but Facebook turned into the biggest source of traffic. Then things changed and Facebook cut its reach to the devastation of some publishers. Google became the king of traffic again. Now, it seems that the search engine is doubling down on this momentum by finding ways to build discovery into its core product.

That approach is not even new, it’s what Youtube has been doing for quite a while. Youtube is the second largest search engine but with the character of a social network: it tries to keep users on its platform instead of sending them elsewhere, measures engagement to rank and recommend videos, has user profiles and interaction through comments, and endless feeds (autoplay).

Recommendations – building the American Toutiao

Google started to push recommendations about a year ago, with a feed in the mobile app. That feed – ironically called “discovery” – is now being revamped and rolled out to the mobile web site as well!

“Think of it as your new mobile homepage where you can not only search, but also discover useful, relevant information and inspiration from across the web for the topics you care about most.”

I can also see this feature being a source of feedback about what content people most engage with, which would not only refine and personalize the recommendations given in the feed but also could funnel into organic search as a sort of ranking factor.

Google’s shift towards recommendations reminds me a lot of TouTiao, China’s leading content recommendation platform.

“Without any explicit user inputs, social graph, or product purchase history to rely on, Toutiao offers a personalized, high quality-content feed for each user that is powered by machine and deep learning algorithms.” (source: Y Combinator)

Toutiao
Source: https://blog.ycombinator.com/the-hidden-forces-behind-toutiao-chinas-content-king/

Toutiao has been growing at an insane pace: According to Y Combinator, users spend 74 minutes in the app every day – more than on Snapchat, Instagram, or Facebook. Whether Google got inspired by Toutiao or not is hard to say and it’s possible that Google tries to fend off competition early by occupying that space. Either way, just like Google, Toutiao went through a transformation from curation to recommendation engine and adopted stories as content format relatively early.

This description of Toutiao’s product would very well fit to Google’s new approach:

“The app uses machine and deep learning algorithms to source and surface content that users will find most interesting. Toutiao’s underlying technology learns about readers through their usage – taps, swipes, time spent on each article, time of the day the user reads, pauses, comments, interactions with the content and location – but doesn’t require any explicit input from the user and is not built on their social graph. Today, each user is measured across millions of dimensions and the result is a personalized, extensive, and high-quality content feed for every user, each time they open the app.”

Image search – from text to visuals

Google announced to push more visuals in search and revamp Image Search entirely:

“Since then, we’ve been working to include more imagery and videos in Search, whether it’s illustrated weather reports, live sports clips, or our visual recipe experience. We’ve been able to do this in part thanks to advancements in computer vision, which help us extract concepts from images. We model hundreds of millions of fine-grained concepts for every image and video that we have in our index. For example, an image of a tiger might generate concepts like “feline,” “animal” or “big cat.” This lets us identify a picture by looking at its pixels, without needing to be told by the words on a page.”

Google has made tremendous strides in computer vision in the past years and it seems to be at a point at which it can connect images and videos to entities. It’s able to “understand” the content in videos for quite some time, as you can see in the screenshot below.

Google computer vision videos
Google understands video content

This is not just a match between user intent and video title, this is Google showing you exactly the part of the video in which the solution to your problem is being discussed. That’s huge! Not relying on factors such as alt-tag, file name, and surrounding content anymore allows Google to refine image results much more and understand whether pages truly contain helpful or irrelevant images.

“Using computer vision, we’re now able to deeply understand the content of a video and help you quickly find the most useful information in a new experience called featured videos.”

Making search more “visual” fits perfectly into the greater trends we also see on social networks. Instagram is on the best way to overtake Facebook, in part because many things can be easier communicated through visuals.

The March, August, and September updates – precursors of the new Google

I want to put a bold theory out there: the hard-hitting algorithm updates we’ve seen in March, August, and September (potentially February and April) were roll-outs of the changes described in the announcement(s)! It seems that “the new Google” has already started its transformation in early 2018.

One paragraph in Making visual content more useful in Search cannot be overlooked:

“Over the last year, we’ve overhauled the Google Images algorithm to rank results that have both great images and great content on the page. For starters, the authority of a web page is now a more important signal in the ranking. If you’re doing a search for DIY shelving, the site behind the image is now more likely to be a site related to DIY projects. We also prioritize fresher content, so you’re more likely to visit a site that has been updated recently.

Also, it wasn’t long ago that if you visited an image’s web page, it might be hard to find the specific image you were looking for when you got there. We now prioritize sites where the image is central to the page, and higher up on the page. So if you’re looking to buy a specific pair of shoes, a product page dedicated to that pair of shoes will be prioritized above, say, a category page showing a range of shoe styles.”

That’s pretty clear!

This statement suggests three ranking factors that “recently” increased in weight:

  1. Certain queries demand the page to have images and they have to be higher up the page (easy to find)
  2. Authority (part of E-A-T)
  3. Fresh/regular content

When we speculated that the “Medic” update was about E-A-T, we were not wrong. But we might have to rethink some of the Core Updates we’ve seen this year: they might all be connected and driven by the three factors above plus Natural Language Understanding.

Why “Natural Language Understanding”? Because Neural Matching!

“But we’ve now reached the point where neural networks can help us take a major leap forward from understanding words to understanding concepts. Neural embeddings, an approach developed in the field of neural networks, allow us to transform words to fuzzier representations of the underlying concepts, and then match the concepts in the query with the concepts in the document. We call this technique neural matching.”

In a Twitter thread about the announcement, Google spokesperson Danny Sullivan pointed out that Neural Matching might be applied to 30% of queries.

Last few months, Google has been using neural matching, –AI method to better connect words to concepts. Super synonyms, in a way, and impacting 30% of queries. Don’t know what “soap opera effect” is to search for it? We can better figure it out. pic.twitter.com/Qrwp5hKFNz

— Danny Sullivan (@dannysullivan) September 24, 2018

Neural Matching is based on “Fuzzy String Matching”, which helps to understand queries that imply a concept without mentioning it explicitly. Danny Sullivan says it helps Google understand synonyms, which essentially means understanding user intent. Google’s grasp of what users are actually trying to achieve has improved significantly – for images and videos (see above), but also for textual search. Apparently, Neural Matching is applied to 30% of queries (massive!) and has been rolled out over the last couple of months – another hint at this year’s big updates.
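
To make that idea concrete, here is a minimal sketch of concept matching in an embedding space, using Sullivan’s “soap opera effect” example. The vectors, their dimensions, and the phrase set are hand-made for illustration and have nothing to do with Google’s actual models:

```python
import math

# Toy "embeddings": hand-made 3-d vectors standing in for learned ones.
# The dimensions and values are invented purely for illustration.
vectors = {
    "soap opera effect":    [0.9, 0.8, 0.0],
    "motion smoothing":     [0.8, 0.9, 0.1],  # same concept, different words
    "surround sound setup": [0.1, 0.0, 0.9],  # unrelated concept
}

def cosine(u, v):
    """Cosine similarity between two vectors: 1.0 means same direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

query = "soap opera effect"
scores = {doc: cosine(vectors[query], vec)
          for doc, vec in vectors.items() if doc != query}
best = max(scores, key=scores.get)
print(best)  # "motion smoothing" wins despite sharing no words with the query
```

The point of the sketch: the best match shares no words with the query, only a nearby position in concept space – exact-string matching would never connect the two.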

Fuzzy String Algorithms are not new per se; they’re already applied in query refinements (“did you mean…”). Neural Matching seems to be a fuzzy-string algorithm on steroids, going beyond the tools we see in many search-engine papers, such as N-grams and the Levenshtein distance between words. Based on what we know, Neural Matching might very well be an upgrade to Rankbrain, which Google introduced in 2015 and said to impact about 15% of queries. Rankbrain was said to help Google understand what a search is about – especially queries that had never occurred before – which is pretty much what Neural Matching does.
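
For reference, the Levenshtein distance mentioned above can be computed with a short dynamic-programming routine. This is the textbook algorithm, not anything Google-specific; the example strings are just for illustration:

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character edits (insert, delete,
    substitute) needed to turn string a into string b."""
    # prev holds the DP row for the previous character of a
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # delete from a
                            curr[j - 1] + 1,      # insert into a
                            prev[j - 1] + cost))  # substitute
        prev = curr
    return prev[-1]

print(levenshtein("shelving", "shelves"))  # small: related word forms
print(levenshtein("pug", "terrier"))       # large: unrelated strings
```

This kind of surface-level distance catches misspellings and close variants; the whole point of Neural Matching is to go further and relate strings that are textually far apart but conceptually close.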

This is a look back at a big change in search but which continues to be important: understanding synonyms. How people search is often different from information that people write solutions about. pic.twitter.com/sBcR4tR4eT

— Danny Sullivan (@dannysullivan) September 24, 2018

Bottom line: we see the continuation of two important concepts: Hummingbird and Rankbrain. Hummingbird set the foundation for “Knowledge Graph 2.0” (Topic Layers) and Rankbrain for Neural Matching.

5 tips to rock the “new Google”

I want to use this opportunity to summarize a couple of actionable tips I derive from these announcements and from my observations of SEO over the last 12 months.

First, optimize for topics and entities instead of keywords/queries. That helps you build a body of content that covers every aspect and question, instead of a loose collection of articles.

Ben Gomes tells a story about a library in the introductory article to the announcement:

“Growing up in India, there was one good library in my town that I had access to—run by the British Council. It was modest by western standards, and I had to take two buses just to get there. But I was lucky, because for every child like me, there were many more who didn’t have access to the same information that I did. Access to information changed my life, bringing me to the U.S. to study computer science and opening up huge possibilities for me that would not have been available without the education I had.”

(Hat tip to Jimmy Daly, who used this analogy before).

The library analogy fits perfectly here: a library has information for beginners and experts (#Journeys), well-sorted and structured content, and people can come back to it any time.

Second, look at the Knowledge Card Tabs for entities you want to write about and make sure you cover each intent, in one big article or several smaller ones. With Tabs, Google shows us what it thinks is important to know about an entity. Cover that in your content, because chances are high that it’s useful outside of Knowledge Cards as well.
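
That reverse-engineering step boils down to a simple gap analysis. Here is a sketch using the tab names Google showed for the “Pug” Knowledge Card (from the announcement); the “covered topics” set stands in for a hypothetical site’s existing content:

```python
# Tabs Google displayed for the "Pug" Knowledge Card (per the article),
# compared against topics a hypothetical site has already covered.
observed_tabs = {"buy or adopt", "videos", "names", "health", "how to train"}
covered_topics = {"health", "names", "grooming"}   # hypothetical site content

gaps = sorted(observed_tabs - covered_topics)      # intents still uncovered
extras = sorted(covered_topics - observed_tabs)    # coverage beyond the tabs

print("Write next:", gaps)
print("Already differentiated:", extras)
```

In practice the topic labels would need normalization (synonyms, casing), but even this naive set difference turns the Tabs into a concrete content backlog.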

Third, create videos and images from your content to support the concepts you convey in it. As described above, rich media content isn’t only helpful; it might be crucial for ranking high in organic search. When you add images to your articles to help visitors grasp the gist, place them high up the page. As visual content becomes more important, it helps to “repurpose” content, as I described in “The time to take SEO beyond Google is now”.

Fourth, create fresh content on an ongoing basis. That’s not new and shouldn’t come as a surprise. However, Google explicitly stating that freshness has become more important – and showing Tabs dynamically – tells me to pay extra attention to it. Certain topics need constant fresh content, even when it might not seem like it. Think of products that are continually developed or new studies in scientific fields.

Fifth, understand the intent behind queries and topics. Also not new, but instead of just reverse engineering user intent, we should think about how to place the intent behind a query into a journey. Anticipate the next steps people might want to take after visiting a certain page on your site. Look at related searches and the questions users ask most about a keyword (or entity). SEMrush, AHREFs, and Searchmetrics have useful features for that.

tl;dr

I hinted at many of these concepts in “How to rock SEO in a machine learning world”, but they’re now taken to the next level. The transition from the old to the “new Google” began with Hummingbird, continued with Rankbrain, and has accelerated this year with better Natural Language Understanding algorithms and Computer Vision.

Paired with the 10 ranking factors we know to be true, this should give us a good idea of what’s important in SEO now and in the future.


What else can you do with this new format? Show more ads! A new feature means more surface to show ads, and increased engagement means more exposure to ads. Google’s whole business model is built on ads, and I don’t expect that to change, even if Google transitions away from the classic search-engine model. This will very likely play out like so many other Google features, with ads slowly creeping in over time.

Activity Cards and Collections are not the strongest discovery feature. It’s the Knowledge Card Tabs that help users discover new aspects of a topic they searched for:

“Rather than presenting information within a set of predetermined categories, we can intelligently show the subtopics that are most relevant to what you’re searching for and make it easy to explore information from the web, all with a single search.”

Tabs in Google Knowledge Graph
New “Tabs” in Knowledge Cards

Notice that these tabs differ depending on what users search for (think: depending on the entity). In the article, Google mentions two examples, one for the keyword “Pugs” and one for “Yorkshire Terrier”, each with different tabs in its respective Knowledge Card. The Pug Knowledge Card shows tabs for “Buy or adopt”, “videos”, “names”, “health”, and “how to train”; the Terrier Knowledge Card shows “Characteristics”, “Grooming Tips”, and “History”. That difference is subtle but important. Tabs are not just a new format for Knowledge Cards; they’re an evolution of the Knowledge Graph. They will likely change over time, and I assume changing user behavior and intent will drive this. The displayed tabs say a lot about what information Google deems important for a keyword (entity) at a given moment – something we SEOs can reverse engineer to improve and guide content creation (more in the last chapter)!

Topic Layers in Knowledge Graph

This evolution of the Knowledge Graph moves the web further towards entities (see Tim Berners-Lee’s TED talk and the 5-part series on Entity Indexing by the incredibly smart Cindy Krum). The Knowledge Graph 2.0 (my naming, not Google’s) goes beyond understanding the relationships between entities (things, people, places). It’s based on Topic Layers that map the user journey across different entities depending on the searcher’s expertise. Smart SEOs like Cyrus Shepard and AJ Kohn (and surely others) have pointed out that we need to anticipate the next questions visitors have when they come to our site, instead of just answering the one they came for. This is exactly how we need to understand the concept of “Journeys”: not just as a Google feature, but as an approach to creating content.

The search engine becomes a discovery engine. Discovery is traditionally reserved for social networks, thus, I also see this transition as an attack on Facebook & Co. This “silent war” has been going on for a long time and up until about 2 years ago, it seemed that social networks would win. Not only had Google failed with Google Plus (another attempt at creating a social network), but Facebook turned into the biggest source of traffic. Then things changed and Facebook cut its reach to the devastation of some publishers. Google became the king of traffic again. Now, it seems that the search engine is doubling down on this momentum by finding ways to build discovery into its core product.

That approach isn’t even new; it’s what Youtube has been doing for quite a while. Youtube is the second-largest search engine, but with the character of a social network: it tries to keep users on its platform instead of sending them elsewhere, measures engagement to rank and recommend videos, has user profiles and interaction through comments, and offers endless feeds (autoplay).

Recommendations – building the American Toutiao

Google started pushing recommendations about a year ago with a feed in its mobile app. That feed – ironically called “discovery” – is now being revamped and rolled out to the mobile website as well!

“Think of it as your new mobile homepage where you can not only search, but also discover useful, relevant information and inspiration from across the web for the topics you care about most.”

I can also see this feature being a source of feedback about what content people most engage with, which would not only refine and personalize the recommendations given in the feed but also could funnel into organic search as a sort of ranking factor.

Google’s shift towards recommendations reminds me a lot of Toutiao, China’s leading content-recommendation platform.

“Without any explicit user inputs, social graph, or product purchase history to rely on, Toutiao offers a personalized, high quality-content feed for each user that is powered by machine and deep learning algorithms.” (source: Y Combinator)

Toutiao
Source: https://blog.ycombinator.com/the-hidden-forces-behind-toutiao-chinas-content-king/

Toutiao has been growing at an insane pace: according to Y Combinator, users spend 74 minutes in the app every day – more than on Snapchat, Instagram, or Facebook. Whether Google was inspired by Toutiao is hard to say; it’s also possible Google is trying to fend off competition early by occupying that space. Either way, just like Google, Toutiao went through a transformation from curation to recommendation engine and adopted stories as a content format relatively early.

This description of Toutiao’s product would fit Google’s new approach very well:

“The app uses machine and deep learning algorithms to source and surface content that users will find most interesting. Toutiao’s underlying technology learns about readers through their usage – taps, swipes, time spent on each article, time of the day the user reads, pauses, comments, interactions with the content and location – but doesn’t require any explicit input from the user and is not built on their social graph. Today, each user is measured across millions of dimensions and the result is a personalized, extensive, and high-quality content feed for every user, each time they open the app.”
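
The implicit-feedback ranking described in that quote can be caricatured in a few lines. The signals, weights, and article names below are invented for the sketch; real systems learn the weights from data rather than hard-coding them:

```python
# Illustrative only: rank content by implicit engagement signals
# (taps, dwell time, completed reads), in the spirit of the quote above.
# Signal names and weights are invented, not Toutiao's or Google's.
WEIGHTS = {"taps": 1.0, "dwell_seconds": 0.05, "completed": 2.0}

def engagement_score(signals: dict) -> float:
    """Weighted sum of whatever implicit signals were observed."""
    return sum(WEIGHTS[k] * signals.get(k, 0) for k in WEIGHTS)

articles = {
    "diy-shelving-guide": {"taps": 3, "dwell_seconds": 180, "completed": 1},
    "shoe-category-page": {"taps": 5, "dwell_seconds": 20,  "completed": 0},
}

ranked = sorted(articles, key=lambda a: engagement_score(articles[a]),
                reverse=True)
print(ranked)  # the long-dwell, completed read outranks the quick bounce
```

Note that no explicit rating or social graph appears anywhere in the input – that is the property the quote emphasizes.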

Image search – from text to visuals

Google announced that it will push more visuals in search and revamp Image Search entirely:

“Since then, we’ve been working to include more imagery and videos in Search, whether it’s illustrated weather reports, live sports clips, or our visual recipe experience. We’ve been able to do this in part thanks to advancements in computer vision, which help us extract concepts from images. We model hundreds of millions of fine-grained concepts for every image and video that we have in our index. For example, an image of a tiger might generate concepts like “feline,” “animal” or “big cat.” This lets us identify a picture by looking at its pixels, without needing to be told by the words on a page.”

Google has made tremendous strides in computer vision in the past years, and it seems to be at a point where it can connect images and videos to entities. It has been able to “understand” the content of videos for quite some time, as you can see in the screenshot below.

Google computer vision videos
Google understands video content

  1. https://www.blog.google/products/search/improving-search-next-20-years/
  2. https://www.blog.google/products/search/introducing-google-discover/
  3. https://www.blog.google/products/search/helping-you-along-your-search-journeys/
  4. https://www.blog.google/products/search/making-visual-content-more-useful-search/

It seems to me that this is not only a precursor of Google’s change over the next couple of years – and it’s changing fundamentally – but also an explanation for some of the hard-hitting algorithm updates we’ve seen in 2018. Understanding this transformation is crucial for Search Engine Optimization and the future internet landscape.

Nick Fox summarized it best: “All of this marks a fundamental transformation in the way Search understands interests and longer journeys to help you find information.”

3 shifts that drive the transformation

Google’s transition from a search to a destination engine is composed of three smaller changes:

Answers -> Journeys

Queries -> Recommendations

Text -> Visuals

Let’s dig a bit deeper into each of the three shifts.

Journeys – entities, ads, and the war with Facebook

One of the articles comes with the headline “Discover new information and inspiration with Search, no query required”. “No query”? One idea of the Semantic Web is to move from URLs to entities, and just a couple of weeks ago Google took a stab at getting rid of them (“Google wants to kill the URL“). I don’t think Google wants to kill queries altogether; that wouldn’t make sense. Instead, the phrase addresses another huge issue search engines struggle with: discovery. Search engines are (traditionally) not discovery platforms, because users need to have a question before using a search engine. That issue limits Google’s capabilities to grow, at least in search. Google wants to change that.

“All of this enables experiences that make it easier than ever to explore your interests, even if you don’t have your next search in mind.”

“Journeys” stand at the center of this new discovery strategy, embodied by “activity cards” that show relevant pages previously visited and queries previously googled. Content from Activity Cards can be stored in Pinterest-like “Collections”. The benefit for Google is clear: increased retention and engagement because users keep coming back to their collections. Then, there’s the benefit of additional feedback from that engagement. Activity Cards and Collections could help Google understand what content is saved most often, which then again can be used for recommendations and organic search. It’s an efficient way to determine evergreen content, for example.



Also, it wasn’t long ago that if you visited an image’s web page, it might be hard to find the specific image you were looking for when you got there. We now prioritize sites where the image is central to the page, and higher up on the page. So if you’re looking to buy a specific pair of shoes, a product page dedicated to that pair of shoes will be prioritized above, say, a category page showing a range of shoe styles.”

That’s pretty clear!

This statement suggests three ranking factors that “recently” increased in weight:

  1. Certain queries demand the page to have images and they have to be higher up the page (easy to find)
  2. Authority (part of E-A-T)
  3. Fresh/regular content

When we speculated that the “Medic” update was about E-A-T, we were not wrong. But, we might have to rethink some of the Core Updates we’ve seen this year. They might all be connected to each other and driven by the three factors mentioned above and Natural Language Understanding.

Why “Natural Language Understanding”? Because Neural Matching!

“But we’ve now reached the point where neural networks can help us take a major leap forward from understanding words to understanding concepts. Neural embeddings, an approach developed in the field of neural networks, allow us to transform words to fuzzier representations of the underlying concepts, and then match the concepts in the query with the concepts in the document. We call this technique neural matching.”

Google spokesperson Danny Sullivan pointed out that Neural Matching might be applied to 30% of queries in a Twitter thread about the announcement.

Last few months, Google has been using neural matching, –AI method to better connect words to concepts. Super synonyms, in a way, and impacting 30% of queries. Don’t know what “soap opera effect” is to search for it? We can better figure it out. pic.twitter.com/Qrwp5hKFNz

— Danny Sullivan (@dannysullivan) September 24, 2018

Neural Matching is based on “Fuzzy String Matching”, which helps to understand queries that imply a concept but don’t mention it explicitly. Danny Sullivan says it helps to understand synonyms, which basically means user intent. Google’s understanding of what users are actually trying to achieve has significantly improved, for images and videos (see above), but also for textual search. Apparently, Neural Matching is applied to 30% of queries (massive!) and has been rolled out in the last couple of months – another hint at the big updates from this year.

Fuzz String Algorithms are not new per se, they’re applied in related searches (“did you mean…”). Neural Matching seems to be a Fuzzy String Algo on steroids, going beyond the tools we see in many search engine papers like N-Grams and the Levenshtein distance between words. Based on what we know, Neural Matching might very well be an upgrade to Rankbrain, which was introduced by Google in 2015 and said to impact about 15% of queries. Rankbrain was said to help Google understand what a search is about – especially queries that never occurred before – which is pretty much was Neural Matching does.

This is a look back at a big change in search but which continues to be important: understanding synonyms. How people search is often different from information that people write solutions about. pic.twitter.com/sBcR4tR4eT

— Danny Sullivan (@dannysullivan) September 24, 2018

Bottom line: we see the continuation of two important concepts: Hummingbird and Rankbrain. Hummingbird set the foundation for “Knowledge Graph 2.0” (Topic Layers) and Rankbrain for Neural Matching.

5 tips to rock the “new Google”

I want to use the opportunity to summarize a couple of actionable tips I derive from these announcements and my observations of SEO in the last 12 months.

First, optimize for topics and entities, instead of keywords/queries. That helps you to build a body of content that covers every aspect and question, instead of a lose collection of articles.

Ben Gomes tells a story about a library in the introductory article to the announcement:

“Growing up in India, there was one good library in my town that I had access to—run by the British Council. It was modest by western standards, and I had to take two buses just to get there. But I was lucky, because for every child like me, there were many more who didn’t have access to the same information that I did. Access to information changed my life, bringing me to the U.S. to study computer science and opening up huge possibilities for me that would not have been available without the education I had.”

(Hat tip to Jimmy Daly, who used this analogy before).

The analogy of a library fits perfectly here: a library has information for beginners and experts #Journeys, well sorted and structured content, and people can come back to it any time.

Second, look at the Knowledge Card Tabs for entities you want to write about and make sure you cover each intent in one big or several articles. With Tabs, Google shows us what it thinks is important to know about an entity. Cover that in your content because chances are high that it is useful outside of Knowledge Cards as well.

Third, create videos and images from your content and to support concepts you convey in it. As described above, it seems that rich media content isn’t only helpful, it might be crucial to rank high in organic search. When you add images to your articles to help visitors understand the gist of them, place them high up the page. As visual content becomes more important, it helps to “repurpose” content as I described in “The time to take SEO beyond Google is now”.

Fourth, create fresh content on an ongoing basis. That’s not new and shouldn’t come as a surprise. However, Google explicitly stating that freshness has become more important and Google showing Tabs dynamically tells me to pay extra attention to it. Certain topics need constant fresh content, even though it might not seem like it. Think of products that are continually developed or new studies in scientific fields.

Fifth, understand the intent behind queries and topics. Also not new, but instead of just reverse engineering user intent, we should think about how we can put the intent behind a query into a journey. Anticipate what the next steps are people might want to take after visiting a certain page on your site. Look at related searches and the most asked questions users have about a keyword (or entity). SEMrush, AHREFs, and Searchmetrics have useful features for that.

tl;dr

I hinted at a lot of the mentioned concepts in “How to rock SEO in a machine learning world”, but they’re now taken to be the next level. The transition from old to the “new Google” began with Hummingbird, continued with Rankbrain and has been accelerated this year with better Natural Language Understanding algorithms and Computer Vision.

Paired with the 10 ranking factors we know to be true, this should give us a good idea of what’s important in SEO now and in the future.

[/et_pb_text][/et_pb_column][/et_pb_row][/et_pb_section]

Many – including me – have been caught off guard by Google’s recent announcements. In essence:

“As Google marks our 20th anniversary, I wanted to share a first look at the next chapter of Search, and how we’re working to make information more accessible and useful for people everywhere. This next chapter is driven by three fundamental shifts in how we think about Search:

The shift from answers to journeys: To help you resume tasks where you left off and learn new interests and hobbies, we’re bringing new features to Search that help you with ongoing information needs.

The shift from queries to providing a queryless way to get to information: We can surface relevant information related to your interests, even when you don’t have a specific query in mind.

And the shift from text to a more visual way of finding information: We’re bringing more visual content to Search and completely redesigning Google Images to help you find information more easily.”

Ben Gomes, VP of Search at Google

There is A LOT to be learned from the four articles Google published on its blog:

  1. https://www.blog.google/products/search/improving-search-next-20-years/
  2. https://www.blog.google/products/search/introducing-google-discover/
  3. https://www.blog.google/products/search/helping-you-along-your-search-journeys/
  4. https://www.blog.google/products/search/making-visual-content-more-useful-search/

It seems to me that this is not only a preview of how fundamentally Google will change over the next couple of years, but also an explanation for some of the hard-hitting algorithm updates we’ve seen in 2018. Understanding this transformation is crucial for Search Engine Optimization and for the future internet landscape.

Nick Fox summarized it best: “All of this marks a fundamental transformation in the way Search understands interests and longer journeys to help you find information.”

3 shifts that drive the transformation

Google’s transition from a search to a destination engine is composed of three smaller changes:

Answers -> Journeys

Queries -> Recommendations

Text -> Visuals

Let’s dig a bit deeper into each of the three shifts.

Journeys – entities, ads, and the war with Facebook

One of the articles comes with the headline “Discover new information and inspiration with Search, no query required”. “No query”? One idea of the Semantic Web is to move from URLs to entities, and just a couple of weeks ago, Google took a stab at getting rid of them (“Google wants to kill the URL“). I don’t think Google wants to kill queries altogether; that wouldn’t make sense. Instead, the phrase addresses another huge issue search engines struggle with: discovery. Search engines are (traditionally) not discovery platforms, because users need to have a question before using one. That issue limits Google’s ability to grow, at least in search. Google wants to change that.

“All of this enables experiences that make it easier than ever to explore your interests, even if you don’t have your next search in mind.”

“Journeys” stand at the center of this new discovery strategy, embodied by “activity cards” that show relevant pages previously visited and queries previously googled. Content from Activity Cards can be stored in Pinterest-like “Collections”. The benefit for Google is clear: increased retention and engagement, because users keep coming back to their collections. Then there’s the benefit of additional feedback from that engagement: Activity Cards and Collections could help Google understand which content is saved most often, which in turn can be used for recommendations and organic search. It’s an efficient way to determine evergreen content, for example.

What else can you do with this new format? Show more ads! A new feature means more surface on which to show ads, and increased engagement means more exposure to them. Google’s whole business model is built on ads, and I don’t expect that to change, even if Google transitions away from the classic model of a search engine. Like so many other Google formats, this will very likely start clean, with ads slowly creeping into the feature over time.

Activity Cards and Collections are not the strongest discovery feature. It’s the Knowledge Card Tabs that help users discover new aspects of a topic they searched for:

“Rather than presenting information within a set of predetermined categories, we can intelligently show the subtopics that are most relevant to what you’re searching for and make it easy to explore information from the web, all with a single search.”

Tabs in Google Knowledge Graph
New “Tabs” in Knowledge Cards

Notice that these tabs differ depending on what users search for (think: depending on the entity). In the article, Google mentions two examples, one for the keyword “Pugs” and one for “Yorkshire Terrier”. Both have different tabs in their respective Knowledge Cards: the Pug Knowledge Card shows tabs for “Buy or adopt”, “videos”, “names”, “health”, and “how to train”, while the Terrier Knowledge Card shows the tabs “Characteristics”, “Grooming Tips”, and “History”. That difference is subtle but important. Tabs are not just a new format for Knowledge Cards; they’re an evolution of the Knowledge Graph. They might change over time, and I assume changing user behavior and intent will drive this. The displayed tabs say a lot about what information Google deems important for a keyword (entity) at a given moment. That information is something we SEOs can reverse engineer to improve and guide content creation (more in the last chapter)!

Topic Layers in Knowledge Graph
Topic Layers in Knowledge Graph

This evolution of the Knowledge Graph moves the web further towards entities (see Tim Berners-Lee’s TED talk and the 5-part series on Entity Indexing by the incredibly smart Cindy Krum). The Knowledge Graph 2.0 (my naming, not Google’s) goes beyond understanding the relationships between entities (things, people, places). It’s based on Topic Layers that map the user journey across different entities, depending on the searcher’s expertise. Smart SEOs like Cyrus Shepard and AJ Kohn (and surely others) have pointed out that we need to anticipate the next questions visitors will have when they come to our site, instead of just answering the one they came for. This is exactly how we need to understand the concept of “Journeys”: not just as a Google feature, but as an approach to creating content.

The search engine becomes a discovery engine. Discovery is traditionally reserved for social networks, thus, I also see this transition as an attack on Facebook & Co. This “silent war” has been going on for a long time and up until about 2 years ago, it seemed that social networks would win. Not only had Google failed with Google Plus (another attempt at creating a social network), but Facebook turned into the biggest source of traffic. Then things changed and Facebook cut its reach to the devastation of some publishers. Google became the king of traffic again. Now, it seems that the search engine is doubling down on this momentum by finding ways to build discovery into its core product.

That approach is not even new; it’s what YouTube has been doing for quite a while. YouTube is the second-largest search engine, but with the character of a social network: it tries to keep users on its platform instead of sending them elsewhere, measures engagement to rank and recommend videos, has user profiles and interaction through comments, and offers endless feeds (autoplay).

Recommendations – building the American Toutiao

Google started to push recommendations about a year ago, with a feed in the mobile app. That feed – ironically called “discovery” – is now being revamped and rolled out to the mobile website as well!

“Think of it as your new mobile homepage where you can not only search, but also discover useful, relevant information and inspiration from across the web for the topics you care about most.”

I can also see this feature being a source of feedback about which content people engage with most, which would not only refine and personalize the recommendations in the feed but could also funnel into organic search as a sort of ranking factor.

Google’s shift towards recommendations reminds me a lot of Toutiao, China’s leading content recommendation platform.

“Without any explicit user inputs, social graph, or product purchase history to rely on, Toutiao offers a personalized, high quality-content feed for each user that is powered by machine and deep learning algorithms.” (source: Y Combinator)

Toutiao
Source: https://blog.ycombinator.com/the-hidden-forces-behind-toutiao-chinas-content-king/

Toutiao has been growing at an insane pace: according to Y Combinator, users spend 74 minutes in the app every day – more than on Snapchat, Instagram, or Facebook. Whether Google was inspired by Toutiao is hard to say, and it’s possible that Google is trying to fend off competition early by occupying that space. Either way, just like Google, Toutiao went through a transformation from curation to recommendation engine, and it adopted stories as a content format relatively early.

This description of Toutiao’s product would fit Google’s new approach very well:

“The app uses machine and deep learning algorithms to source and surface content that users will find most interesting. Toutiao’s underlying technology learns about readers through their usage – taps, swipes, time spent on each article, time of the day the user reads, pauses, comments, interactions with the content and location – but doesn’t require any explicit input from the user and is not built on their social graph. Today, each user is measured across millions of dimensions and the result is a personalized, extensive, and high-quality content feed for every user, each time they open the app.”

Image search – from text to visuals

Google announced that it will push more visuals in Search and revamp Image Search entirely:

“Since then, we’ve been working to include more imagery and videos in Search, whether it’s illustrated weather reports, live sports clips, or our visual recipe experience. We’ve been able to do this in part thanks to advancements in computer vision, which help us extract concepts from images. We model hundreds of millions of fine-grained concepts for every image and video that we have in our index. For example, an image of a tiger might generate concepts like “feline,” “animal” or “big cat.” This lets us identify a picture by looking at its pixels, without needing to be told by the words on a page.”

Google has made tremendous strides in computer vision in the past few years, and it seems to be at a point where it can connect images and videos to entities. It has been able to “understand” the content of videos for quite some time, as you can see in the screenshot below.
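The “tiger → feline, animal, big cat” idea from the quote above can be sketched in a few lines. The taxonomy below is hand-written purely for illustration (the names `CONCEPT_GRAPH` and `expand_concepts` are my own; real systems derive such relations from learned models and large knowledge graphs, not a hard-coded dict):

```python
# Hypothetical mini-taxonomy, invented for illustration only.
CONCEPT_GRAPH = {
    "tiger": ["big cat"],
    "big cat": ["feline"],
    "feline": ["animal"],
    "animal": [],
}

def expand_concepts(label):
    """Walk the taxonomy upward and collect every broader concept."""
    concepts, frontier = set(), [label]
    while frontier:
        current = frontier.pop()
        for parent in CONCEPT_GRAPH.get(current, []):
            if parent not in concepts:
                concepts.add(parent)
                frontier.append(parent)
    return concepts

# A classifier might label an image "tiger"; the broader concepts then let
# the engine match that image to queries like "big cat" or "feline" as well.
print(expand_concepts("tiger"))  # a set containing 'big cat', 'feline', 'animal'
```

The point is the direction of inference: from pixels to a fine-grained label, and from there upward to broader concepts that queries can match.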

Google computer vision videos
Google understands video content

This is not just a match between user intent and video title; this is Google showing you exactly the part of the video in which the solution to your problem is discussed. That’s huge! No longer relying on factors such as alt tags, file names, and surrounding content allows Google to refine image results much further and to understand whether pages truly contain helpful or irrelevant images.

“Using computer vision, we’re now able to deeply understand the content of a video and help you quickly find the most useful information in a new experience called featured videos.”

Making search more “visual” fits perfectly into the greater trends we also see on social networks. Instagram is well on its way to overtaking Facebook, in part because many things are easier to communicate through visuals.

The March, August, and September updates – precursors of the new Google

I want to put a bold theory out there: the hard-hitting algorithm updates we’ve seen in March, August, and September (potentially February and April) were roll-outs of the changes described in the announcement(s)! It seems that “the new Google” has already started its transformation in early 2018.

One paragraph in “Making visual content more useful in Search” cannot be overlooked:

“Over the last year, we’ve overhauled the Google Images algorithm to rank results that have both great images and great content on the page. For starters, the authority of a web page is now a more important signal in the ranking. If you’re doing a search for DIY shelving, the site behind the image is now more likely to be a site related to DIY projects. We also prioritize fresher content, so you’re more likely to visit a site that has been updated recently.

Also, it wasn’t long ago that if you visited an image’s web page, it might be hard to find the specific image you were looking for when you got there. We now prioritize sites where the image is central to the page, and higher up on the page. So if you’re looking to buy a specific pair of shoes, a product page dedicated to that pair of shoes will be prioritized above, say, a category page showing a range of shoe styles.”

That’s pretty clear!

This statement suggests three ranking factors that “recently” increased in weight:

  1. Certain queries demand that the page contain images, placed high up on the page (easy to find)
  2. Authority (part of E-A-T)
  3. Fresh/regular content
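As a thought experiment, these three factors can be folded into a toy scoring function. Everything below – the feature names, the weights, the normalization – is invented for illustration; Google has published none of these details:

```python
from datetime import date

def toy_image_rank_score(page, today=date(2018, 10, 1)):
    """Illustrative only: combines the three factors from the quoted paragraph."""
    # 1. Image prominence: images near the top of the page score higher
    prominence = 1.0 - min(page["image_position_px"] / 2000, 1.0)
    # 2. Authority of the page/site, assumed pre-normalized to 0..1
    authority = page["authority"]
    # 3. Freshness: decays with the time since the last update
    days_old = (today - page["last_updated"]).days
    freshness = 1.0 / (1.0 + days_old / 365)
    return 0.4 * prominence + 0.4 * authority + 0.2 * freshness

# Google's shoe example: a dedicated product page with a prominent, fresh
# image vs. a stale category page where the image sits far down.
product_page = {"image_position_px": 200, "authority": 0.7,
                "last_updated": date(2018, 9, 20)}
category_page = {"image_position_px": 1400, "authority": 0.7,
                 "last_updated": date(2017, 1, 5)}

print(toy_image_rank_score(product_page) > toy_image_rank_score(category_page))  # True
```

With equal authority, image prominence and freshness alone push the product page above the category page – exactly the behavior the quote describes.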

When we speculated that the “Medic” update was about E-A-T, we were not wrong. But we might have to rethink some of the Core Updates we’ve seen this year. They might all be connected to each other, driven by the three factors mentioned above and by Natural Language Understanding.

Why “Natural Language Understanding”? Because Neural Matching!

“But we’ve now reached the point where neural networks can help us take a major leap forward from understanding words to understanding concepts. Neural embeddings, an approach developed in the field of neural networks, allow us to transform words to fuzzier representations of the underlying concepts, and then match the concepts in the query with the concepts in the document. We call this technique neural matching.”

In a Twitter thread about the announcement, Google spokesperson Danny Sullivan pointed out that Neural Matching is applied to about 30% of queries.

Last few months, Google has been using neural matching, an AI method to better connect words to concepts. Super synonyms, in a way, and impacting 30% of queries. Don’t know what “soap opera effect” is to search for it? We can better figure it out. pic.twitter.com/Qrwp5hKFNz

— Danny Sullivan (@dannysullivan) September 24, 2018

Neural Matching is based on “fuzzy string matching”, which helps to understand queries that imply a concept but don’t mention it explicitly. Danny Sullivan says it helps Google understand synonyms, which ultimately means understanding user intent. Google’s grasp of what users are actually trying to achieve has improved significantly – for images and videos (see above), but also for textual search. Apparently, Neural Matching is applied to 30% of queries (massive!) and has been rolled out over the last couple of months – another hint at the big updates from this year.
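A stripped-down sketch of what matching concepts instead of words could look like, under heavy assumptions: the three-dimensional word vectors below are hand-made toys (real embeddings are learned from data and have hundreds of dimensions), texts are embedded by simple averaging, and similarity is plain cosine. Note how a page about motion smoothing can outscore a page that merely shares the literal word “soap” with the query:

```python
import math

# Hand-made toy vectors; dimensions loosely mean "TV", "motion", "cleaning".
EMBEDDINGS = {
    "soap":          [0.1, 0.2, 0.9],
    "opera":         [0.6, 0.1, 0.1],
    "effect":        [0.4, 0.7, 0.0],
    "tv":            [0.9, 0.2, 0.0],
    "motion":        [0.3, 0.9, 0.1],
    "smoothing":     [0.2, 0.8, 0.2],
    "interpolation": [0.3, 0.9, 0.0],
}

def embed(text):
    """Average the word vectors -- a crude, fuzzy concept representation."""
    vectors = [EMBEDDINGS[w] for w in text.lower().split() if w in EMBEDDINGS]
    return [sum(dim) / len(vectors) for dim in zip(*vectors)]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norms = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norms

query = embed("soap opera effect")
doc_concept = embed("tv motion smoothing interpolation")  # right concept, no shared words
doc_literal = embed("soap")                               # shared word, wrong concept

print(cosine(query, doc_concept) > cosine(query, doc_literal))  # True
```

That is the gist of Danny Sullivan’s “soap opera effect” example: the concept-level match wins even though the query and the matching page share no vocabulary.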

Fuzzy string algorithms are not new per se; they’re applied in features like “did you mean…” suggestions. Neural Matching seems to be a fuzzy string algorithm on steroids, going beyond the tools we see in many search engine papers, like n-grams and the Levenshtein distance between words. Based on what we know, Neural Matching might very well be an upgrade to Rankbrain, which Google introduced in 2015 and which was said to impact about 15% of queries. Rankbrain was said to help Google understand what a search is about – especially queries that had never occurred before – which is pretty much what Neural Matching does.
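For reference, the Levenshtein distance mentioned here is the classic textbook measure, easy to sketch (this is the standard dynamic-programming version, nothing Google-specific):

```python
def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character inserts, deletes, and
    substitutions needed to turn string a into string b."""
    prev = list(range(len(b) + 1))          # distances from "" to prefixes of b
    for i, ca in enumerate(a, start=1):
        curr = [i]                          # distance from a[:i] to ""
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # delete ca
                            curr[j - 1] + 1,      # insert cb
                            prev[j - 1] + cost))  # substitute ca -> cb
        prev = curr
    return prev[-1]

print(levenshtein("kitten", "sitting"))  # 3
```

Fuzzy matching treats small distances as “probably the same word” – useful for “did you mean…”, but far short of the concept-level matching Neural Matching aims at.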

This is a look back at a big change in search but which continues to be important: understanding synonyms. How people search is often different from information that people write solutions about. pic.twitter.com/sBcR4tR4eT

— Danny Sullivan (@dannysullivan) September 24, 2018

Bottom line: we see the continuation of two important concepts: Hummingbird and Rankbrain. Hummingbird set the foundation for “Knowledge Graph 2.0” (Topic Layers) and Rankbrain for Neural Matching.

5 tips to rock the “new Google”

I want to use this opportunity to summarize a couple of actionable tips I derive from these announcements and from my observations of SEO over the last 12 months.

First, optimize for topics and entities instead of keywords/queries. That helps you build a body of content that covers every aspect and question, instead of a loose collection of articles.

Ben Gomes tells a story about a library in the introductory article to the announcement:

“Growing up in India, there was one good library in my town that I had access to—run by the British Council. It was modest by western standards, and I had to take two buses just to get there. But I was lucky, because for every child like me, there were many more who didn’t have access to the same information that I did. Access to information changed my life, bringing me to the U.S. to study computer science and opening up huge possibilities for me that would not have been available without the education I had.”

(Hat tip to Jimmy Daly, who used this analogy before).

The analogy of a library fits perfectly here: a library has information for beginners and experts (#Journeys), well-sorted and structured content, and people can come back to it any time.

Second, look at the Knowledge Card Tabs for the entities you want to write about and make sure you cover each intent, whether in one big article or several smaller ones. With Tabs, Google shows us what it thinks is important to know about an entity. Cover that in your content, because chances are high that it is useful outside of Knowledge Cards as well.

Third, create videos and images from your content to support the concepts you convey in it. As described above, it seems that rich media content isn’t only helpful – it might be crucial for ranking high in organic search. When you add images to your articles to help visitors understand the gist of them, place them high up on the page. As visual content becomes more important, it also helps to “repurpose” content, as I described in “The time to take SEO beyond Google is now”.

Fourth, create fresh content on an ongoing basis. That’s not new and shouldn’t come as a surprise. However, the fact that Google explicitly states that freshness has become more important, and that it shows Tabs dynamically, tells me to pay extra attention to it. Certain topics need constant fresh content, even though it might not seem like it. Think of products that are continually developed, or of new studies in scientific fields.

Fifth, understand the intent behind queries and topics. This is also not new, but instead of just reverse engineering user intent, we should think about how to put the intent behind a query into a journey. Anticipate the next steps people might want to take after visiting a certain page on your site. Look at related searches and at the questions users ask most about a keyword (or entity). SEMrush, Ahrefs, and Searchmetrics have useful features for that.

tl;dr

I hinted at a lot of these concepts in “How to rock SEO in a machine learning world”, but they’re now taken to the next level. The transition from the old to the “new Google” began with Hummingbird, continued with Rankbrain, and has accelerated this year with better Natural Language Understanding algorithms and Computer Vision.

Paired with the 10 ranking factors we know to be true, this should give us a good idea of what’s important in SEO now and in the future.

[/et_pb_text][/et_pb_column][/et_pb_row][/et_pb_section][proof? _i=”0″ _address=”0″ /]

Many – including me – have been caught off guard by Google’s recent announcements. In essence:

As Google marks our 20th anniversary, I wanted to share a first look at the next chapter of Search, and how we’re working to make information more accessible and useful for people everywhere. This next chapter is driven by three fundamental shifts in how we think about Search:

The shift from answers to journeys: To help you resume tasks where you left off and learn new interests and hobbies, we’re bringing new features to Search that help you with ongoing information needs.


The shift from queries to providing a queryless way to get to information: We can surface relevant information related to your interests, even when you don’t have a specific query in mind.


And the shift from text to a more visual way of finding information: We’re bringing more visual content to Search and completely redesigning Google Images to help you find information more easily.”

Ben Gomes, VP of Search at Google

There is A LOT to be learned from the four articles Google published on its blog:

  1. https://www.blog.google/products/search/improving-search-next-20-years/
  2. https://www.blog.google/products/search/introducing-google-discover/
  3. https://www.blog.google/products/search/helping-you-along-your-search-journeys/
  4. https://www.blog.google/products/search/making-visual-content-more-useful-search/

It seems to me that this is not only a precursor of Google’s change over the next couple of years – and it’s changing fundamentally – but also an explanation for some of the hard-hitting algorithm updates we’ve seen in 2018. It’s crucial to understand this transformation for Search Engine Optimization and the future internet landscape.

Nick Fox summarized it best: “All of this marks a fundamental transformation in the way Search understands interests and longer journeys to help you find information.

3 shifts that drive the transformation

Google’s transition from a search to a destination engine is composed of three smaller changes:

Answers -> Journeys

Queries -> Recommendations

Text -> Visuals

Let’s dig a bit deeper into each of the three shifts.

Journeys – entities, ads, and the war with Facebook

One of the article comes with the headline “Discover new information and inspiration with Search, no query required”. “No query”? One idea of the Semantic Web is to move from URLs to entities and just a couple of weeks ago, Google took a stab at getting rid of them (“Google wants to kill the URL“). I don’t think Google wants to kill queries all together, that wouldn’t make sense. Instead, the phrase addresses another huge issue search engines struggle with: discovery. Search Engines are (traditionally) not discovery platforms because users need to have a question before using a search engine. That issue limits Google capabilities to grow, at least in search. Google wants to change that.

“All of this enables experiences that make it easier than ever to explore your interests, even if you don’t have your next search in mind.”

“Journeys” stand at the center of this new discovery strategy, embodied by “activity cards” that show relevant pages previously visited and queries previously googled. Content from Activity Cards can be stored in Pinterest-like “Collections”. The benefit for Google is clear: increased retention and engagement because users keep coming back to their collections. Then, there’s the benefit of additional feedback from that engagement. Activity Cards and Collections could help Google understand what content is saved most often, which then again can be used for recommendations and organic search. It’s an efficient way to determine evergreen content, for example.

What else can you do with this new format? Show more ads! A new feature means more surface on which to show ads, and increased engagement means more exposure to ads. Google’s whole business model is built on ads, and I don’t expect that to change, even if Google transitions away from the classic model of a search engine. Like so many other Google formats, this one will very likely start clean, with ads slowly creeping in over time.

Activity Cards and Collections are not the strongest discovery feature. It’s the Knowledge Card Tabs that help users discover new aspects of a topic they searched for:

“Rather than presenting information within a set of predetermined categories, we can intelligently show the subtopics that are most relevant to what you’re searching for and make it easy to explore information from the web, all with a single search.”

Tabs in Google Knowledge Graph
New “Tabs” in Knowledge Cards

Notice that these tabs differ depending on what users search for (think: depending on the entity). In the article, Google mentions two examples, one for the keyword “Pugs” and one for “Yorkshire Terrier”. Both have different tabs in their respective Knowledge Cards. The Pug Knowledge Card shows tabs for “Buy or adopt”, “videos”, “names”, “health”, and “how to train”. The Terrier Knowledge Card shows the tabs “Characteristics”, “Grooming Tips”, and “History”. That difference is subtle but important. Tabs are not a new format for Knowledge Cards; they’re an evolution of the Knowledge Graph. They might change over time, and I assume changing user behavior and intent will drive this. The displayed tabs say a lot about what information Google deems important for the keyword (entity) at that moment. That information is something we SEOs can reverse engineer to improve and guide content creation (more in the last chapter)!

Topic Layers in Knowledge Graph
Topic Layers in Knowledge Graph

This evolution of the Knowledge Graph moves the web further towards entities (see Tim Berners-Lee’s TED talk and the 5-part series on Entity Indexing by the incredibly smart Cindy Krum). The Knowledge Graph 2.0 (my naming, not Google’s) goes beyond understanding the relationship between entities (things, people, places). It’s based on Topic Layers that map the user journey across different entities depending on the expertise of the searchers. Smart SEOs like Cyrus Shepard and AJ Kohn (and surely others) have pointed out that we need to anticipate the next questions visitors have when they come to our site, instead of just answering the one they came for. This is exactly how we need to understand the concept of “Journeys”: not just as a Google feature, but as an approach to creating content.

The search engine becomes a discovery engine. Discovery is traditionally reserved for social networks, thus, I also see this transition as an attack on Facebook & Co. This “silent war” has been going on for a long time and up until about 2 years ago, it seemed that social networks would win. Not only had Google failed with Google Plus (another attempt at creating a social network), but Facebook turned into the biggest source of traffic. Then things changed and Facebook cut its reach to the devastation of some publishers. Google became the king of traffic again. Now, it seems that the search engine is doubling down on this momentum by finding ways to build discovery into its core product.

That approach is not even new; it’s what YouTube has been doing for quite a while. YouTube is the second-largest search engine, but it has the character of a social network: it tries to keep users on its platform instead of sending them elsewhere, measures engagement to rank and recommend videos, has user profiles and interaction through comments, and offers endless feeds (autoplay).

Recommendations – building the American Toutiao

Google started pushing recommendations about a year ago, with a feed in the mobile app. That feed – ironically called “Discover” – is now being revamped and rolled out to the mobile website as well!

“Think of it as your new mobile homepage where you can not only search, but also discover useful, relevant information and inspiration from across the web for the topics you care about most.”

I can also see this feature becoming a source of feedback about what content people engage with most, which would not only refine and personalize the recommendations given in the feed but could also funnel into organic search as a sort of ranking factor.

Google’s shift towards recommendations reminds me a lot of Toutiao, China’s leading content recommendation platform.

“Without any explicit user inputs, social graph, or product purchase history to rely on, Toutiao offers a personalized, high quality-content feed for each user that is powered by machine and deep learning algorithms.” (source: Y Combinator)

Toutiao
Source: https://blog.ycombinator.com/the-hidden-forces-behind-toutiao-chinas-content-king/

Toutiao has been growing at an insane pace: according to Y Combinator, users spend 74 minutes in the app every day – more than on Snapchat, Instagram, or Facebook. Whether Google was inspired by Toutiao is hard to say; it’s also possible that Google is trying to fend off competition early by occupying that space. Either way, just like Google, Toutiao went through a transformation from curation to recommendation engine and adopted stories as a content format relatively early.

This description of Toutiao’s product fits Google’s new approach very well:

“The app uses machine and deep learning algorithms to source and surface content that users will find most interesting. Toutiao’s underlying technology learns about readers through their usage – taps, swipes, time spent on each article, time of the day the user reads, pauses, comments, interactions with the content and location – but doesn’t require any explicit input from the user and is not built on their social graph. Today, each user is measured across millions of dimensions and the result is a personalized, extensive, and high-quality content feed for every user, each time they open the app.”

Image search – from text to visuals

Google announced that it will push more visuals in Search and revamp Image Search entirely:

“Since then, we’ve been working to include more imagery and videos in Search, whether it’s illustrated weather reports, live sports clips, or our visual recipe experience. We’ve been able to do this in part thanks to advancements in computer vision, which help us extract concepts from images. We model hundreds of millions of fine-grained concepts for every image and video that we have in our index. For example, an image of a tiger might generate concepts like “feline,” “animal” or “big cat.” This lets us identify a picture by looking at its pixels, without needing to be told by the words on a page.”

Google has made tremendous strides in computer vision in the past few years, and it seems to be at a point where it can connect images and videos to entities. It has been able to “understand” the content of videos for quite some time, as you can see in the screenshot below.

Google computer vision videos
Google understands video content

This is not just a match between user intent and video title; this is Google showing you exactly the part of the video in which the solution to your problem is discussed. That’s huge! No longer relying solely on factors such as alt text, file names, and surrounding content allows Google to refine image results much further and to understand whether pages truly contain helpful or irrelevant images.

“Using computer vision, we’re now able to deeply understand the content of a video and help you quickly find the most useful information in a new experience called featured videos.”

Making search more “visual” fits perfectly into the greater trends we also see on social networks. Instagram is well on its way to overtaking Facebook, in part because many things are easier to communicate through visuals.

The March, August, and September updates – precursors of the new Google

I want to put a bold theory out there: the hard-hitting algorithm updates we’ve seen in March, August, and September (and potentially February and April) were rollouts of the changes described in the announcements! It seems that “the new Google” already started its transformation in early 2018.

One paragraph in “Making visual content more useful in Search” cannot be overlooked:

“Over the last year, we’ve overhauled the Google Images algorithm to rank results that have both great images and great content on the page. For starters, the authority of a web page is now a more important signal in the ranking. If you’re doing a search for DIY shelving, the site behind the image is now more likely to be a site related to DIY projects. We also prioritize fresher content, so you’re more likely to visit a site that has been updated recently.

Also, it wasn’t long ago that if you visited an image’s web page, it might be hard to find the specific image you were looking for when you got there. We now prioritize sites where the image is central to the page, and higher up on the page. So if you’re looking to buy a specific pair of shoes, a product page dedicated to that pair of shoes will be prioritized above, say, a category page showing a range of shoe styles.”

That’s pretty clear!

This statement suggests three ranking factors that “recently” increased in weight:

  1. Certain queries demand that the page contain images, placed higher up on the page (easy to find)
  2. Authority (part of E-A-T)
  3. Fresh/regular content

When we speculated that the “Medic” update was about E-A-T, we were not wrong. But, we might have to rethink some of the Core Updates we’ve seen this year. They might all be connected to each other and driven by the three factors mentioned above and Natural Language Understanding.

Why “Natural Language Understanding”? Because Neural Matching!

“But we’ve now reached the point where neural networks can help us take a major leap forward from understanding words to understanding concepts. Neural embeddings, an approach developed in the field of neural networks, allow us to transform words to fuzzier representations of the underlying concepts, and then match the concepts in the query with the concepts in the document. We call this technique neural matching.”
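The core idea behind neural embeddings can be sketched with toy vectors: words that denote the same concept point in similar directions, and similarity between concepts is measured geometrically rather than by string overlap. Here is a minimal illustration with hand-picked 3-dimensional vectors (everything below is invented for demonstration; real embeddings are learned by a neural network and have hundreds of dimensions):

```python
import math

# Hypothetical toy "embeddings" -- invented for illustration only.
embeddings = {
    "tiger":   [0.90, 0.80, 0.10],
    "feline":  [0.80, 0.90, 0.20],
    "big cat": [0.85, 0.85, 0.15],
    "shelf":   [0.10, 0.20, 0.90],
}

def cosine(u, v):
    """Cosine similarity: close to 1.0 means same direction
    (similar concept), close to 0.0 means unrelated."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Rank the vocabulary by conceptual closeness to "tiger".
query = embeddings["tiger"]
ranked = sorted(embeddings, key=lambda w: cosine(query, embeddings[w]), reverse=True)
print(ranked)  # "feline" and "big cat" rank near "tiger"; "shelf" comes last
```

Note how “tiger” and “big cat” score as highly similar even though they share no characters – that is the leap from matching words to matching concepts that the quote describes.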

In a Twitter thread about the announcement, Google spokesperson Danny Sullivan pointed out that Neural Matching impacts about 30% of queries.

Last few months, Google has been using neural matching, –AI method to better connect words to concepts. Super synonyms, in a way, and impacting 30% of queries. Don’t know what “soap opera effect” is to search for it? We can better figure it out. pic.twitter.com/Qrwp5hKFNz

— Danny Sullivan (@dannysullivan) September 24, 2018

Neural Matching is based on “Fuzzy String Matching”, which helps Google understand queries that imply a concept but don’t mention it explicitly. Danny Sullivan says it helps to understand synonyms, which essentially means understanding user intent. Google’s understanding of what users are actually trying to achieve has improved significantly, for images and videos (see above), but also for textual search. Apparently, Neural Matching is applied to 30% of queries (massive!) and was rolled out over the last couple of months – another hint at the big updates from this year.

Fuzzy string algorithms are not new per se; they’re applied in features like spelling suggestions (“Did you mean…”). Neural Matching seems to be a fuzzy string algorithm on steroids, going beyond the tools we see in many search engine papers, such as n-grams and the Levenshtein distance between words. Based on what we know, Neural Matching might very well be an upgrade to RankBrain, which Google introduced in 2015 and which was said to impact about 15% of queries. RankBrain was said to help Google understand what a search is about – especially queries that had never occurred before – which is pretty much what Neural Matching does.
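For reference, the Levenshtein distance mentioned above counts the minimum number of single-character insertions, deletions, and substitutions needed to turn one string into another. A minimal, self-contained implementation (the example strings are my own, not Google’s):

```python
def levenshtein(a: str, b: str) -> int:
    """Classic edit distance via dynamic programming: the minimum
    number of insertions, deletions, and substitutions needed to
    turn string `a` into string `b`."""
    prev = list(range(len(b) + 1))  # distances from "" to prefixes of b
    for i, ca in enumerate(a, start=1):
        curr = [i]  # distance from a[:i] to ""
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution (or match)
        prev = curr
    return prev[-1]

print(levenshtein("shelving", "shelves"))  # → 3
```

This kind of character-level distance catches typos and close spellings, but it has no notion of meaning – “tiger” and “big cat” are maximally distant by this measure. That gap is exactly what embedding-based approaches like Neural Matching address.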

This is a look back at a big change in search but which continues to be important: understanding synonyms. How people search is often different from information that people write solutions about. pic.twitter.com/sBcR4tR4eT

— Danny Sullivan (@dannysullivan) September 24, 2018

Bottom line: we see the continuation of two important concepts: Hummingbird and RankBrain. Hummingbird laid the foundation for “Knowledge Graph 2.0” (Topic Layers), and RankBrain for Neural Matching.

5 tips to rock the “new Google”

I want to use the opportunity to summarize a couple of actionable tips I derive from these announcements and my observations of SEO in the last 12 months.

First, optimize for topics and entities instead of keywords/queries. That helps you build a body of content that covers every aspect and question, instead of a loose collection of articles.

Ben Gomes tells a story about a library in the introductory article to the announcement:

“Growing up in India, there was one good library in my town that I had access to—run by the British Council. It was modest by western standards, and I had to take two buses just to get there. But I was lucky, because for every child like me, there were many more who didn’t have access to the same information that I did. Access to information changed my life, bringing me to the U.S. to study computer science and opening up huge possibilities for me that would not have been available without the education I had.”

(Hat tip to Jimmy Daly, who used this analogy before).

The analogy of a library fits perfectly here: a library has information for beginners and experts (#Journeys), well-sorted and structured content, and people can come back to it any time.

Second, look at the Knowledge Card Tabs for entities you want to write about and make sure you cover each intent in one big or several articles. With Tabs, Google shows us what it thinks is important to know about an entity. Cover that in your content because chances are high that it is useful outside of Knowledge Cards as well.

Third, create videos and images from your content to support the concepts you convey in it. As described above, rich media content isn’t only helpful; it might be crucial to ranking high in organic search. When you add images to your articles to help visitors understand the gist, place them high up on the page. As visual content becomes more important, it helps to “repurpose” content, as I described in “The time to take SEO beyond Google is now”.

Fourth, create fresh content on an ongoing basis. That’s not new and shouldn’t come as a surprise. However, Google explicitly stating that freshness has become more important and Google showing Tabs dynamically tells me to pay extra attention to it. Certain topics need constant fresh content, even though it might not seem like it. Think of products that are continually developed or new studies in scientific fields.

Fifth, understand the intent behind queries and topics. Also not new, but instead of just reverse engineering user intent, we should think about how to put the intent behind a query into a journey. Anticipate the next steps people might want to take after visiting a certain page on your site. Look at related searches and the most-asked questions users have about a keyword (or entity). SEMrush, Ahrefs, and Searchmetrics have useful features for that.

tl;dr

I hinted at a lot of these concepts in “How to rock SEO in a machine learning world”, but they’re now taken to the next level. The transition from the old to the “new Google” began with Hummingbird, continued with RankBrain, and has accelerated this year with better Natural Language Understanding algorithms and Computer Vision.

Paired with the 10 ranking factors we know to be true, this should give us a good idea of what’s important in SEO now and in the future.