There are two types of Growth Marketers: those who see experiments as black or white and those who know better. Conducted properly, experiments are just data points that bring us closer to the truth. Sometimes they're very big data points and demonstrate causality. Other times they hint at something larger and show a correlation.
Every causation has a correlation somewhere. The question is how we get closer to the causation when we find a correlation. The answer is Bayes' Theorem.
The big idea behind Bayes' Theorem is to look at events, such as experiments, in context instead of in isolation. It is the basis for many applications, from Markov Chains to the Naive Bayes Classifier, a simple probabilistic model often used in Machine Learning.
Bayes defined his theorem as "the ratio between the value at which an expectation depending on the happening of the event ought to be computed, and the value of the thing expected upon its happening." Interestingly, he didn't publish his theorem. It was discovered and promoted after his death by Richard Price in 1763.
Bayes' Theorem: P(B|A) = P(A|B) x P(B) / P(A)

Where P(B|A), the left-hand side, is the probability of B (the prediction) given A (the experiment's outcome). It's the product of P(A|B), the probability of A given B, and P(B), the prior probability of B, divided by P(A), the probability of A.
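To make the formula concrete, here is a minimal sketch in Python with hypothetical numbers: B is "the tactic works," A is "the experiment shows a lift." The specific probabilities (a 30% prior, an 80% hit rate if the tactic works, 20% by noise alone) are invented for illustration.

```python
# Hypothetical inputs:
p_b = 0.30              # P(B): prior belief that the tactic works
p_a_given_b = 0.80      # P(A|B): chance of seeing a lift if it works
p_a_given_not_b = 0.20  # chance of seeing a lift by noise alone

# P(A) via the law of total probability
p_a = p_a_given_b * p_b + p_a_given_not_b * (1 - p_b)

# Bayes' Theorem: P(B|A) = P(A|B) * P(B) / P(A)
p_b_given_a = p_a_given_b * p_b / p_a
print(round(p_b_given_a, 2))  # prints 0.63
```

One positive experiment moves the belief from 30% to about 63%: a meaningful update, but far from certainty.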
The prediction in the equation is the Theorem's killer feature. That's where context, or experience, comes in. Unlike the Frequentist view, which looks at evidence in isolation, Bayesian thinking treats new evidence as an update to existing beliefs.
The Bayesian vs. Frequentist mindset
Statisticians and Data Scientists have long fought over Bayesian vs. Frequentist testing. The consensus is that both are valuable and much of the debate is hair-splitting. To me, conditioning a probability on prior events sounds more reasonable in Growth than looking at a probability in isolation.
The Bayesian mindset is to update beliefs based on new data and not to look at experiments in isolation. Evidence should update beliefs, not define them.
In A modern understanding of SEO, I explain that we need to build out a corpus of experiments. Taking the word on the street is not enough. Neither is relying 100% on our own experience, or taking a single experiment someone else conducted as the truth.
In Thriving in Ambiguity, I mention that "we have a rough idea of what works but need to think in probabilities instead of absolutes." Approaching ambiguity with Bayesian principles, we get closer to the truth as we update probabilities with new information. The most efficient decision-makers don't change their opinions radically but gradually.
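That gradual updating is exactly what repeated Bayesian updates look like in practice: each posterior becomes the prior for the next experiment. A sketch, again with hypothetical numbers (the 80%/20% likelihoods and the three-experiment run are invented for illustration):

```python
def update(prior, p_evidence_if_true, p_evidence_if_false):
    """One Bayesian update: return P(hypothesis | evidence)."""
    p_evidence = (p_evidence_if_true * prior
                  + p_evidence_if_false * (1 - prior))
    return p_evidence_if_true * prior / p_evidence

# Hypothetical: three positive experiments in a row, each with
# an 80% hit rate if the tactic works and 20% if it doesn't.
belief = 0.30  # starting prior
for _ in range(3):
    belief = update(belief, 0.80, 0.20)
print(round(belief, 2))  # prints 0.96
```

No single experiment settles the question; the belief climbs step by step, from 30% to roughly 63%, 87%, and then 96%, which is the "gradual, not radical" opinion change in numbers.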
We need to collect all available data points and use them as a catapult for our own experiments. Just as scientists don't put too much weight on a single study, SEOs and other Growth Marketers must build out a body of experimentation to regress to the mean and approximate the truth. There is a big difference between rejecting the result of an experiment and updating your beliefs.