In this blog post, I want to share tips for improving both the quantity and the quality of our experiments. At Jana, we put new features in front of our users every day, and drawing the right conclusions from those experiments, then deriving the right next steps, is important for our success.
Here, I want to expand on two tips.
Tip 1: Make analyzing experiments cheap!
At Jana, we automated the graphical analysis of our core business metrics, such as revenue and user retention, in a tool tied to each experiment. If certain analyses matter for every experiment, automating them increases both the consistency and the quality of the results.
We also automated estimating the statistical significance of each metric, using the experiment's exposure (the percentage of users who see the experiment) and our estimate of the metric's distribution. Knowing the minimum level of significance required for a good decision (which depends on the goal of the experiment, among other things) helps you avoid drawing the wrong conclusions.
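To make this concrete, here is a minimal sketch of the kind of check such a tool might automate: a two-proportion z-test comparing a conversion-style metric between control and variant. The function name and the sample numbers are hypothetical, not Jana's actual tooling, and the arm sizes stand in for whatever the experiment's exposure yields.

```python
from math import sqrt, erf

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided two-proportion z-test; returns (z, p_value).

    conv_a / conv_b: conversions in control / variant.
    n_a / n_b: users exposed to each arm.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis of no difference.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF (via erf).
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical numbers: 5,000 users per arm, 8% vs 9.2% conversion.
z, p = two_proportion_z_test(conv_a=400, n_a=5000, conv_b=460, n_b=5000)
print(z, p)
```

Running the same test for every core metric, automatically, is what keeps the analysis cheap and consistent across experiments.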
Tip 2: Avoid the multiple comparison problem
When you analyze many metrics, there is always a chance that at least one of them will appear significant when it is not, or will yield the wrong result because of a biased sample. See the concept in more detail here. Make sure that most users (ideally all) see the feature you want to test, focus on the few metrics you actually care about, and require that all of those metrics show significant results. This is not 100% bulletproof, but it greatly reduces the chances of drawing the wrong conclusions.
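A quick calculation shows why checking many metrics is dangerous. Assuming independent metrics and the usual 5% threshold (a simplification, since real metrics are often correlated), the probability that at least one looks significant purely by chance grows fast with the number of metrics:

```python
def family_wise_error(alpha: float, m: int) -> float:
    """Chance that at least one of m independent tests at level
    alpha is a false positive (family-wise error rate)."""
    return 1 - (1 - alpha) ** m

print(round(family_wise_error(0.05, 1), 3))   # one metric: 0.05
print(round(family_wise_error(0.05, 20), 3))  # twenty metrics: 0.642

def bonferroni_alpha(alpha: float, m: int) -> float:
    """A simple (conservative) fix: the Bonferroni correction,
    testing each metric at alpha / m instead of alpha."""
    return alpha / m
```

With twenty metrics, a spurious "win" is more likely than not, which is exactly why focusing on a few metrics up front matters.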
In order to experiment often and reduce decision error, make experimenting cheap and avoid complex analyses with too many metrics!