Blockchain, Fake News, and E-A-T

Untrustworthy results are a big problem for Google, not just for social networks. The internet is changing what trust means, and trust is becoming a factor in Search, too.

When Blockchain technology became popular a couple of years ago, I didn’t expect it to affect marketing and Growth in any way. After all, Blockchain technology was built with the purpose of decentralizing systems. However, we can’t separate technological progress from Growth (technical Marketing).

Blockchain and Bitcoin came on the scene in 2008. Even though the underlying ideas go back to the 1980s, Satoshi Nakamoto published his famous whitepaper in 2008, the year of the last big recession. I can only speculate about whether that was a coincidence, but we’ve since seen growing mistrust of governments and institutions.

Fake news

Ever since then, we’ve seen smaller outbursts like the Occupy Wall Street movement, but mistrust and polarization peaked in 2016 when Donald Trump was elected POTUS. That’s when the world was introduced to the term “fake news.”

“Fake News” refers to misleading or false information from known publishers and TV networks. While Fake News exists, the term has also been used to deflect criticism. Fake News seems to be a problem especially on social media, which came to broad attention when Russia’s involvement in the 2016 election came to light. This pushed Facebook and other social networks to fact-check posts, but not without criticism.

It’s also worth mentioning that many startups built on trust gained traction over the last 5-7 years: Uber, Airbnb, Yelp, Lyft, HomeAway, Upwork, Thumbtack, TaskRabbit, Homejoy, UpCounsel, etc.

You know how the saying goes: “When I grew up, my mom told me not to get into cars with strangers. Nowadays, we literally pay strangers to drive our kids around.”

The internet changed what trust means.

The problem of untrustworthy search results

There is a passage in Trillions of Questions, No Easy Answers: A (home) movie about how Google Search works that explains the origins of untrustworthy search results:

“It used to be [...] that if you were reading a manuscript copied by someone you didn’t know, you could have a certain trust that the text you were reading was stable; was authoritative; was right.
Printing changed all of this, of course. That worried lots of people because, for example, if we don’t know who printed it, what should we think about this information? If there is an error, then everyone will get it wrong. We look now at the print revolution, which we used to think about almost in a celebratory way, and we think now that actually the anxieties people had about print resemble the anxieties people today have about fake news; about origins of information.”
The video describes a case of false information about the Holocaust:
“A few years ago, people were pointing out that for some queries like ‘did the Holocaust happen?’ we returned results that had the words but were from low-quality sites. [...] This is clearly a case of misinformation.
The fundamental reason for that is that the problems reported to us are just the tip of the iceberg.
Every query has some notion of relevance and some notion of quality. We’re constantly trading off which set of results balances these two the best. If you type in the query ‘did the Holocaust happen?’, higher-quality web pages may not bother to explicitly say that the Holocaust did happen. They take for granted that we, as informed citizens, are aware that the Holocaust happened. So, the only websites that will closely match a query like that may in fact say that the Holocaust didn’t happen; that it was all a big hoax. [...] The relevance signals were overpowering the quality signals to a degree that was resulting in low-quality results for users.
We have long recognized that there is a certain class of queries like medical queries or finance queries. In all of these cases, authoritative resources are incredibly important. And so, we emphasize expertise over relevance in those cases. We try to get you results from authoritative sources in a more significant way.
And by ‘authoritative’ we mean that it comes from trustworthy sources, that the sources themselves are reputable, that they are upfront about who they are, where the information is coming from, that they themselves are citing sources.
And so, the change we’ve made in the case of misinformation is to change the ranking function to emphasize authority a lot more. And this has made all the difference.”


Besides the insights into how Google views trustworthy sites and what it tries to achieve with the concept of E-A-T, the speakers indirectly reference an article by the Guardian titled Google is not ‘just’ a platform. It frames, shapes and distorts how we see the world. Guess what year it’s from? Right: 2016!

Spam is a threat to Google. In the video I transcribed above, Cathy Edwards mentions that 15% of daily queries are new to Google and that, in 2019, 40% of the pages Google crawled in Europe were classified as spam. The next challenge coming at Google is Fake News and untrustworthy results. Relevance, quality, and authority aren’t enough anymore; results also need to be trustworthy.

The Expert Graph

In The demise of amateur content, I pointed out that it’s going to be harder for non-experts to write and rank content for queries that demand expertise. Similar to the concept of QDF (Query Deserves Freshness), which measures whether a query needs the most up-to-date results, I expect Google to measure QDT: Query Deserves Trustworthiness.

Google looks at different signals to measure trustworthiness for Google News:

Our systems are designed to use these guiding principles when assessing if a site adheres to our transparency policy. At the article level, we consider information that helps users quickly gain context about articles or the journalists covering stories. This includes information like an article byline (that often links to a bio describing the author’s credentials and expertise), the article’s publishing date, and labeling to indicate the article type, for example Opinion or News.
At the site level, we look for information that helps readers understand a site’s purpose, its organizational structure, and the kinds of information they can expect from that site. This includes a breadth of information such as a mission statement, editorial policies and standards, staff information and bios for both editorial and business staff, non-generic contact information, and other organizational-level information like owners and/or funding sources (for example, state-sponsorship, relationship to political parties or PACs).
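The article-level signals Google describes here (a byline linking to an author bio, a publishing date, a label like Opinion or News) map closely onto the public schema.org NewsArticle vocabulary. As an illustration only, not a format Google prescribes, here is a minimal sketch of a JSON-LD payload carrying those fields; all names and URLs are made up:

```python
import json

# Hypothetical article metadata. The field names follow the public
# schema.org NewsArticle vocabulary, not any Google-internal format.
article = {
    "@context": "https://schema.org",
    "@type": "NewsArticle",
    "headline": "Example headline",
    "datePublished": "2021-03-01",
    "dateModified": "2021-03-02",
    # The byline: an author entity linking to a bio page that
    # describes the author's credentials and expertise.
    "author": {
        "@type": "Person",
        "name": "Jane Doe",
        "url": "https://example.com/authors/jane-doe",
    },
    # Labeling to indicate the article type, e.g. Opinion vs. News.
    "articleSection": "Opinion",
}

# Serialize as a JSON-LD block that could be embedded in a page's
# <script type="application/ld+json"> tag.
json_ld = json.dumps(article, indent=2)
print(json_ld)
```

The point isn’t the exact markup; it’s that every signal the policy mentions can be made machine-readable instead of living only in page copy.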


If Google can measure trust for websites, it can do so for people, too. This idea is not foreign at all: Google has been building an entity graph with the knowledge panel since 2012.

“Individual voices play an important role in the news and information we consume. With so many complex, important stories unfolding daily, people not only rely on specific publishers for the latest news, but also increasingly turn to trusted individual journalists, authors and experts.”


The problem with relying only on authoritative sites is that you might dismiss expert content on new or less authoritative sites. An “expert graph” would solve that problem.

How to leave a footprint on the web

Today, a lot of trust is determined on the site level, but Google has laid the groundwork for author-level trust. The question then becomes: how can you leave a footprint on the internet as an author to show Google that you’re a trustworthy entity?
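One concrete, well-documented starting point is to make yourself legible as an entity: schema.org Person markup with sameAs links that tie your scattered profiles together into one consistent footprint. A minimal sketch, with placeholder names and URLs, might look like this:

```python
import json

# Hypothetical author entity using the public schema.org Person
# vocabulary. The sameAs links connect the profiles that together
# form the author's footprint on the web.
author = {
    "@context": "https://schema.org",
    "@type": "Person",
    "name": "Jane Doe",
    "jobTitle": "SEO Consultant",
    "url": "https://janedoe.example.com",
    # Consistent profiles across the web help tie the entity together.
    "sameAs": [
        "https://twitter.com/janedoe",
        "https://www.linkedin.com/in/janedoe",
        "https://github.com/janedoe",
    ],
}

json_ld = json.dumps(author, indent=2)
print(json_ld)
```

Markup alone doesn’t create trust, of course; it only makes an existing body of work (bylines, bios, consistent profiles) easier for machines to connect to a single author entity.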