Most of what we do in SEO these days relates to user intent: content, SERP Features, mobile optimization. Intent is inevitable: if you don’t meet it, you don’t rank.
This time, Google added a section about “queries with multiple meanings” to the Quality Rater Guidelines:
“Many queries have more than one meaning. For example, the query [apple] might refer to the computer brand or the fruit. We will call these possible meanings query interpretations.” (Google Quality Rater Guidelines)
I read these lines and instantly realized that my model needed an update, but we weren’t ready. Many tools didn’t provide the necessary data to analyze queries with multiple meanings, or, as I call it, fragmented user intent.
Fast forward to today. Tools still don’t decode fragmented intent for keywords, but I was able to build a model with what we currently have. Two years later, I present the sequel to User Intent Mapping on Steroids!
However, we first need to take a step back and zoom in on a universal problem in SEO: the dichotomy between search volume and keyword competitiveness.
The relationship between search volume, competition, query length, and intent
There is a direct relationship between the search volume of a keyword and how competitive it is. Makes sense; the bigger the pie, the more people want a piece of it.
We can also say that the shorter a query or keyword, the more search volume (or demand) it has. This relationship goes back to the age-old concept of shorthead vs. longtail.
In a nutshell, more people search for shorter keywords because they refine their searches over time. If they knew exactly what they wanted, they would write their query out in full. The only other possibility is laziness. Both lead me to ambiguous queries.
Ambiguity – an old problem with new solutions
“Ambiguous queries” are keywords without a clear or single meaning (called “queries with multiple meanings” in the Quality Rater Guidelines). I prefer the term “fragmented” and will use FQI (Fragmented Query Intent) to describe a keyword with multiple meanings.
In the guidelines, Google distinguishes between dominant, common, and minor interpretations of ambiguous queries. Why should Google settle on a single meaning when a single word can have so many?
However, meaning is not static. Neither is user intent. It can be unclear at first, be manifold, and change over time. As such, ambiguity is not binary but a spectrum.
The degree of ambiguity
Keywords can be more or less ambiguous, depending on time, how their meaning changes due to events, and how well Google knows them.
Google faces a big challenge: ~15% of daily searches are new. They created technology like RankBrain to quickly understand the meaning of new queries. Sometimes those queries are more specific, sometimes more ambiguous. The more specific a query is, the easier it is for Google to show the right result from the start because it indicates a clear intent.
As such, fluctuating keyword results – especially when coming from different types of sites, e.g. publisher vs. marketplace vs. e-commerce store – can indicate keyword ambiguity that’s either caused by an event or a better understanding by Google of what the user wants.
An example: a query like “Wuhan” would change its dominant intent from learning something about the city (blue in the screenshot below) to staying up to date with the outbreak of the COVID-19 pandemic (yellow).
As you can imagine, the dominant user intent here is colored in yellow and receives more search demand than the common one (blue).
Another good example of a query with a specific intent that becomes ambiguous once a year is “Independence Day”. The dominant intent is information about the movie (“Welcome to Earth!”) starring Will Smith and Jeff Goldblum. But around July 4th, it changes to the holiday.
As you can see in the screenshot above, the rankings for that keyword fluctuate closer to the holiday because Google shows more results related to the holiday.
To help users refine their search, Google uses “Query-Refinement Bubbles” on mobile devices. They first popped up in 2018 and now seem to show up more and more. I assume Google uses them to monitor dominant, common, and minor user intents over time for ambiguous queries.
Ambiguous queries are not new. In fact, Microsoft published a paper in 2007 called Identifying Ambiguous Queries in Web Search. It demonstrates that at least 87% of ambiguous queries can be identified and understood with supervised machine learning. Keep in mind that was 13 years ago.
The paper also mentions the importance of additional words for understanding a query better. In other words, the ambiguity of a query also depends on its length. A third relationship exists between query search volume and ambiguity.
To finish this first part up, we can come to four conclusions:
- The higher the search volume of a query, the more competitive it is.
- The shorter the query, the higher its search volume.
- The higher the search volume of a query, the higher its ambiguity.
- The shorter the query, the higher its ambiguity.
Due to the direct relationship between query search volume, competitiveness, and ambiguity, we can use keyword metrics in combination with SERP Features to effectively monitor Google’s perceived user intent for a keyword, including fragmented user intent.
Why it’s important to measure Fragmented User Intent
Keywords with high search volume and competition are where the meat is. They often have a fragmented intent and, as a result, more SERP Features, as Google answers more queries directly (I explained why in Google is a victim of its own success).
The second-order effect of this trend is that users scan search results differently today than they did 10 years ago. Text without much fluff (think Wikipedia) is still scanned in an F-shaped pattern (not always a good sign, as it could mean that your content is only skimmable, not read-worthy).
Users scan modern search results, especially those with a fragmented intent, in a Pinball Pattern. Their attention bounces around like a pinball (non-sequentially) because there is so much information in the SERPs: knowledge cards, carousels, images, videos, and more. In fact, SERP Features received 74% of looks in Nielsen Norman Group’s eye-tracking study of almost 500 queries.
As a result, the click curve is flatter: the first results get fewer clicks, lower results get more. The degree to which that happens varies, which is why my model contains a “SERP Feature degree”.
How people scan content depends on their task, the layout, their assumptions, and the type of content. Nielsen Norman Group differentiates between five scanning patterns:
- F-pattern: scanning from left to right downwards
- Spotted pattern: looking at visual cues
- Layer-cake pattern: fixation on headings and subheadings
- Commitment pattern: traditional reading, not scanning
- Pinball pattern: scanning a lot of information fast
A model to understand and track fragmented User Intent
The model I came up with allows us to:
- Measure the SERP Feature degree, i.e. how clear or unclear the intent of a keyword is
- Identify FQI at scale
- Track how FQI changes over time for relevant keywords and how it impacts them
- Spot user intent opportunities based on whether Google displays a SERP Feature that fits your content or not
The model in a nutshell
In the article User Intent Mapping on Steroids, I explain how we can look at SERP Features to reverse engineer intent. The thought process is simple:
- Google wants to satisfy searches as well as possible
- It does so effectively with SERP Features, which provide even faster answers than organic results
- We can categorize SERP Features into different user intents
- SEO tools track SERP Features per keyword
- With a simple data export, we can match user intents to SERP Features and keywords
- At scale, that provides a reliable model for optimization
How does this work with fragmented intent, though?
Building your own FQI model
You don’t need much to build your own FQI model. Any rank tracker that monitors SERP Features (Ahrefs, Moz, STAT, SEMrush) will do. Google Sheets should work, but Excel is a better choice because it has more features and doesn’t break down with lots of data.
Before we start, we need to attribute SERP Features to user intents.
Here’s how I did it:
| SERP Feature | User Intent |
| --- | --- |
| Featured snippet | Short answer |
| Knowledge cards | Short answer |
| Instant answer | Short answer |
| Video carousel | In-depth answer |
| Site links | Navigate to site |
| Local pack | Navigate to location |
| Events | Navigate to location |
| Maps | Navigate to location |
| Local card | Navigate to location |
This is my own definition of user intent, my own interpretation. Feel free to customize this model if you see a different user intent. In some cases it’s crystal clear; in others there are several options for mapping an intent to a SERP Feature. Google might even show the same Feature to satisfy several intents, e.g. Top Stories.
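To make the attribution concrete, here is the table above as a minimal Python sketch. The feature names are the ones from my table; match them to whatever your rank tracker exports.

```python
# The SERP-feature-to-intent attribution table above as a Python dict.
# Feature names follow my table; adjust them to your rank tracker's naming.
SERP_FEATURE_INTENT = {
    "Featured snippet": "Short answer",
    "Knowledge cards": "Short answer",
    "Instant answer": "Short answer",
    "Video carousel": "In-depth answer",
    "Site links": "Navigate to site",
    "Local pack": "Navigate to location",
    "Events": "Navigate to location",
    "Maps": "Navigate to location",
    "Local card": "Navigate to location",
}

def intents_for(features):
    """Map a keyword's SERP features to the user intents they signal."""
    return {SERP_FEATURE_INTENT[f] for f in features if f in SERP_FEATURE_INTENT}

# Hypothetical keyword showing a featured snippet and a local pack:
print(sorted(intents_for(["Featured snippet", "Local pack"])))
# -> ['Navigate to location', 'Short answer']
```

Unknown features simply fall through, which mirrors the “Other” bucket later in this article.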
From here, it’s a simple data pull and editing process:
1. Pull the SERP Features per keyword from the rank tracker tool of your choice. Before you export, filter for top 10 positions, search volume > 10, and exclude brand keywords.
You can export a generic list of keywords a domain ranks for, but it makes more sense to pull from a defined set of keywords, i.e. your own project. That way, you narrow the analysis down to keywords you care about.
2. Import the data into an Excel or Google sheet.
If you want to, get rid of columns like “previous position”, “keyword difficulty”, “CPC”, “traffic”, “traffic %”, “competition”, “number of results”, “trends”, “timestamp” – anything else you feel isn’t necessary. I like to work with clean spreadsheets, but it’s just a preference.
3. Use split text to columns in the SERP Feature column.
4. Apply the COUNTA formula to the row of SERP Features you just created. This gives you a count of SERP Features per keyword.
5. Create a new column on the right next to it and divide the “Count” column by 10 to get the “SERP Feature degree” (explanation below). It’s a simple formula like =Y2/10 if Y is the “Count” column.
6. Create a new tab and call it “serp-features”.
7. Copy/paste my SERP attribution table (above) into the “serp-features” tab (starting in cell A1).
8. Make sure the names in the SERP Feature columns on the first tab match my names for the SERP Features on the “serp-features” tab.
In SEMrush, for example, you have to change “Video” to “Videos” on the first tab to match with my list of user intents. The reason: the function will recognize “video” within “video carousel” and duplicate the user intent for keywords that show a video and video carousels.
9. Go back to the first tab and create 14 new columns to the right of the “SERP Feature degree” column (one for each SERP Feature).
10. In those new columns, use =ARRAYFORMULA(SUM(N(REGEXMATCH(G1:O2, 'serp-features'!$B$1:$B$2)))) in each of them to check whether certain SERP Features are present.
The ranges in my formula, i.e. G1:O2 and $B$1:$B$2, are just examples and have to be adjusted to your sheet. They should cover the whole row of SERP Features to the left of the “Count” column.
The ARRAYFORMULA function basically scans the row of SERP Features and matches it against my table on the next tab. It’s important to follow the steps exactly and not reorder my table (or to adjust the ranges so they match).
11. Now you can take the average of each column to understand which user intent shows up most for your keyword set (a simple =AVERAGE(X:Y) formula).
Alternatively, create a new tab and summarize the averages.
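If you prefer scripting over spreadsheets, the core of steps 3-11 can be sketched in a few lines of Python. The export format and the sample keywords below are assumptions; the logic (split the feature cell, count, divide by 10, flag intents, average) mirrors the formulas above.

```python
from statistics import mean

# Subset of my attribution table; extend it to cover all the features you track.
INTENT_FEATURES = {
    "Short answer": {"Featured snippet", "Knowledge cards", "Instant answer"},
    "In-depth answer": {"Video carousel"},
    "Navigate to location": {"Local pack", "Events", "Maps", "Local card"},
}

# Assumed export format: (keyword, comma-separated SERP feature cell).
rows = [
    ("apple", "Featured snippet, Knowledge cards, Video carousel"),
    ("apple store", "Local pack, Maps"),
]

results = []
for keyword, cell in rows:
    features = {f.strip() for f in cell.split(",")}   # step 3: split text to columns
    count = len(features)                             # step 4: COUNTA
    degree = count / 10                               # step 5: SERP Feature degree
    flags = {intent: int(bool(features & names))      # step 10: REGEXMATCH equivalent
             for intent, names in INTENT_FEATURES.items()}
    results.append((keyword, count, degree, flags))

# Step 11: average presence of each intent across the keyword set
for intent in INTENT_FEATURES:
    print(intent, mean(r[3][intent] for r in results))
```

The per-intent averages are the same numbers the =AVERAGE(X:Y) formula produces in the sheet.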
If you want to take this analysis a step further, you can compare the data of several months to see how SERP Features and user intent change over time (optional):
12. Create a new tab and name it after the previous month (see the example below).
13. Export the SERP Features of the previous month and repeat steps 1-10.
14. Create a new tab (call it “comparison” or something similar) and use =VLOOKUP to pull in the data per SERP Feature from the previous two tabs to compare them side by side.
Voilà! Your own FQI model.
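The optional month-over-month comparison (steps 12-14) boils down to a VLOOKUP-style join. A minimal sketch with made-up intent averages:

```python
# Made-up intent averages for two months; in practice these come from the
# per-month tabs you built in steps 12-13.
last_month = {"Short answer": 0.42, "In-depth answer": 0.18, "Navigate to location": 0.11}
this_month = {"Short answer": 0.47, "In-depth answer": 0.15, "Navigate to location": 0.12}

# VLOOKUP equivalent: look up each intent in both months and show the shift.
for intent, old in last_month.items():
    new = this_month.get(intent, 0.0)
    print(f"{intent}: {old:.0%} -> {new:.0%} ({new - old:+.1%})")
```

A rising “Short answer” share, for example, tells you Google increasingly answers your keyword set directly on the SERP.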
How rank trackers report SERP Features
To optimally customize the FQI model for yourself, you need to know how your rank tracking tool reports on SERP Features.
Here are the links to the documentation of the four best-known SEO tools:
I also created an overview of naming conventions in each of the tools. This is important so you can match the tool vendor names to the user intents I defined above or define your own.
| User Intent | Names used by the tools |
| --- | --- |
| Updates | Top Stories / Top stories / News |
| Updates | Tweets box / Twitter box / Tweets |
| Short answer | Featured snippet / Answers / Featured Snippets |
| Short answer | People Also Ask / People also ask |
| Short answer | Featured Image |
| Short answer | Instant Answer / Knowledge card / Knowledge Cards |
| Short answer | Knowledge Panel / Knowledge panel / Knowledge graph / Knowledge Panels |
| Navigate to site | Site Links / Sitelinks |
| Navigate to location | Local pack / Local Packs |
| In-depth answer | Featured Video |
| In-depth answer | Video Carousel |
| In-depth answer | FAQ / Organic (FAQ) |
| Explore | Image Pack / Image pack / Images / Image Packs |
| Buy | Shopping ads / Shopping results / Shopping / Shopping Results |
| Buy | Adwords Top / AdWords (Top) |
| Buy | Adwords Bottom / AdWords (Bottom) |
| Research | Research, carousel / In-depth Articles |
| Other | AMP / AMP results / Related Questions |
| Other | Find results on |
| Other | Found on the web |
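Because every tool names the same feature differently, it helps to normalize vendor names to one canonical label before the intent matching. A sketch, using a partial, assumed extract of the naming table above:

```python
# Partial normalization map (assumed extract of the naming table above);
# keys are lowercased vendor names, values are the canonical labels I use.
CANONICAL = {
    "top stories": "Top Stories",
    "news": "Top Stories",
    "featured snippet": "Featured snippet",
    "featured snippets": "Featured snippet",
    "answers": "Featured snippet",
    "knowledge panel": "Knowledge Panel",
    "knowledge panels": "Knowledge Panel",
    "knowledge graph": "Knowledge Panel",
    "site links": "Site links",
    "sitelinks": "Site links",
    "local pack": "Local pack",
    "local packs": "Local pack",
}

def normalize(vendor_name):
    """Return the canonical feature name, or the input unchanged if unknown."""
    return CANONICAL.get(vendor_name.strip().lower(), vendor_name)

print(normalize("Knowledge graph"))  # -> Knowledge Panel
```

Exact-match lookups like this also avoid the “video” vs. “video carousel” substring problem mentioned in the SEMrush note above.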
The next step: adjusting for query syntax
We’re at a point at which we can track way more SERP Features and understand how fragmented User Intent changes. What’s left to do?
Well, there is another layer of complexity: query syntax. Especially for inventory-driven sites, query syntax should be an integral driver of the keyword strategy and technical optimization.
In future content, I’ll explain how to define a query syntax, scale it, and apply the FQI model to it.