Imagine you want to increase your retention rate. You might prepare a nice tutorial so users can get familiar with what your application can do for them. However, how do you make sure that it works? Well, you run an A/B test.
You create multiple variants to test and prepare proper analytics to measure the results. In this example, I will use uninstall events as the success metric.
Firebase A/B testing helps you run the test and evaluate the results. But you need to make sure your tests are configured correctly. I recommend the article 4 Mistakes that I made while running A/B Tests with Firebase Remote Config, which describes common mistakes people make when running A/B tests, specifically with Firebase.
One of the points in that article is about not making enough use of the activation event. As the official documentation puts it, “set an activation event to ensure that only users who have first triggered some Analytics event are counted in your experiment”.
This matters because existing users can skew A/B test results. In our example, existing users who have already seen the tutorial would not see it again after being assigned to one of the test variants. Yet the uninstall event would still be attributed to that variant when they uninstall the application. So to get accurate results, I would need to exclude existing users from the measurement.
The activation event is all you need when you evaluate A/B tests through Firebase. Users are not counted in the final results until they trigger that specific event, e.g. tutorial_displayed.
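Conceptually, the activation-event filter works like this. Firebase does this for you on its backend; the sketch below is a minimal Python illustration with hypothetical event names and data shapes:

```python
# Hypothetical analytics log: (user_id, event_name) tuples.
events = [
    ("u1", "tutorial_displayed"),  # u1 triggered the activation event
    ("u1", "app_uninstalled"),
    ("u2", "app_uninstalled"),     # u2 never saw the tutorial (existing user)
]

ACTIVATION_EVENT = "tutorial_displayed"

# Only users who triggered the activation event count toward the results.
activated = {uid for uid, name in events if name == ACTIVATION_EVENT}

# u2's uninstall is ignored because u2 never activated.
uninstalls = sum(
    1 for uid, name in events
    if name == "app_uninstalled" and uid in activated
)
```

Here the existing user's uninstall is filtered out, which is exactly the skew the activation event is meant to prevent.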
However, you can also use Firebase A/B testing only to distribute the test and then evaluate the results yourself. There are multiple reasons why people do that. In my case, I needed to analyse an A/B test based on internal company tracking instead of Firebase Analytics tracking.
In that case, the activation event is not enough. When you check the data, you can see that users can be assigned a testing variant before they track the activation event. I assume Firebase uses the activation event only to decide whether to include the user in the results, not whether to assign them a variant.
That has two problems for my use case:
- I would need to track the activation event in internal tracking as well. And I would have to remember to use it to know which users to include in the results.
- Since Firebase distributes users into the testing variants before the activation event, the number of users in testing variants will be uneven.
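To make the first problem concrete: if you evaluate the test from your own tracking, you have to replicate the activation filter yourself before counting anything per variant. A minimal Python sketch, with hypothetical data shapes (variant assignments from Firebase, events from internal tracking):

```python
from collections import Counter

# Hypothetical inputs: u2 was assigned a variant but never activated.
assignments = {"u1": "A", "u2": "A", "u3": "B"}
internal_events = [
    ("u1", "tutorial_displayed"),
    ("u3", "tutorial_displayed"),
    ("u1", "app_uninstalled"),
    ("u2", "app_uninstalled"),  # must NOT count: u2 never activated
]

# Step 1: replicate the activation filter in the internal pipeline.
activated = {u for u, e in internal_events if e == "tutorial_displayed"}

# Step 2: only then compute per-variant sizes and metrics.
variant_sizes = Counter(v for u, v in assignments.items() if u in activated)
uninstalls = Counter(
    assignments[u] for u, e in internal_events
    if e == "app_uninstalled" and u in activated
)
```

Without step 1, variant A would appear to have two users and one uninstall, even though one of those users never reached the tutorial at all.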
So I had to make sure that existing users did not receive the A/B test at all. There are multiple options for doing that, but I ended up considering these three:
- Create a user audience of new users: users who tracked the automatically collected first_open event after a specific date (e.g. the day you launch your A/B test).
- Set a user property for existing users. E.g. for apps with a tutorial, track a user property after the user successfully finishes the onboarding.
- Set a user property for new users.
Because of timing, I could not use the audience. It is not enough to send the first_open event before fetching the Remote Config; in our testing, Firebase failed to include many new users in the A/B test with this method.
I could not use the property for existing users either, because, as I found out, Firebase removes users from an A/B test once they stop matching the targeting criteria. So with the second option, new users who track the user property (e.g. by successfully finishing onboarding) would no longer be included in the A/B test.
In the end, I went with the third option. On the first app open, the app tracks the current time as a user property, and we set up the test to include only users whose first open happened after we launched the test.
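A minimal sketch of that idea in Python. In a real app the property would be set through the Firebase Analytics SDK and the condition would live in the Firebase console targeting; the storage, names, and timestamp below are all hypothetical:

```python
import time

# Stand-ins for Firebase user properties and local app storage.
user_properties = {}
local_storage = {}

def on_app_open(now=None):
    """On the very first open only, record the open time as a user property."""
    now = int(now if now is not None else time.time())
    if "first_open_time" not in local_storage:  # later opens don't overwrite it
        local_storage["first_open_time"] = now
        user_properties["first_open_time"] = str(now)

TEST_LAUNCH_TIME = 1_600_000_000  # hypothetical moment the A/B test went live

def is_in_test_audience():
    """Mirror of the targeting condition you would configure in the console."""
    value = user_properties.get("first_open_time")
    return value is not None and int(value) >= TEST_LAUNCH_TIME
```

A user who first opens the app after TEST_LAUNCH_TIME passes the check; an existing user, whose recorded first open predates the launch, never enters the test, so they can neither join nor later be kicked out of it.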
It would be great if Firebase supported this use case out of the box. But until then, I hope this article helps anyone who would like to target only new users with Firebase A/B testing.
This post was originally published in September 2020 and has been completely revamped and updated for accuracy and comprehensiveness.