I find it exciting to work on a consumer product like mCent because there are always plenty of ideas that could make it a better app. Most of these potential features are well thought out, and many are inspired by features in other consumer apps. But it is often difficult to predict how they will play out in mCent, or whether they will have a positive effect on the metrics we care about. One good way to find out is to build them, but that usually requires a lot of engineering time and effort.
At Jana, we usually develop a stripped-down version of a feature first, test it out, and build the full version only if we measure a positive impact. In this post I will discuss two cases where we tested features in mere days when a fully implemented version would have required weeks.
Feature 1: Allow an mCent member to see how many of their connections had installed a specific app.
This feature was proposed after some of our users suggested that they would feel more comfortable trying an app if they knew that other people had also tried it. mCent is becoming a more social app, so it made a lot of sense to run with this idea. Our team decided to run an A/B test where the installation stats would be displayed on the home screen as shown in the image below, and three variants of the text would be used:
– x people tried this app
– x people in India tried this app
– x of your friends have tried this app
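For a test like this, each member needs to land in a stable variant bucket. A minimal sketch of deterministic variant assignment is below; the function and experiment names are hypothetical, not Jana's actual experiment framework:

```python
import hashlib

VARIANTS = (
    "{n} people tried this app",
    "{n} people in India tried this app",
    "{n} of your friends have tried this app",
)

def assign_variant(user_id, experiment_name="install_stats_copy"):
    # Hash the user id together with the experiment name so each member
    # lands in a stable bucket, and different experiments bucket users
    # independently of each other.
    digest = hashlib.md5((experiment_name + ":" + user_id).encode()).hexdigest()
    return VARIANTS[int(digest, 16) % len(VARIANTS)]
```

Because the assignment is a pure function of the user id, the same member sees the same copy on every visit, which keeps the measured behavior clean.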
Thanks to some previous engineering work, we already had the ability to show the first two variants, but we did not have the data to support version 3. Instead of committing weeks to building infrastructure for a social network, we decided to first run a test with artificial data. Doing so would allow us to move quickly without losing our ability to correctly assess the impact of the feature. We used a hashing function to generate a believable and consistent number for a given user and app:
```python
import hashlib

CHANCES = (30, 20, 10, 5, 5, 1, 2, 5, 2, 2, 5, 3, 2, 2, 2, 2, 1, 1)
POSSIBLE_NUMBER_OF_FRIENDS = (0, 1, 2, 3, 4, 5, 6, 8, 11, 12, 13, 15, 19, 20, 27, 31, 43, 63)

@classmethod
def generate_social_stats(cls, user_id, app_id):
    # Hash the (user, app) pair to a stable value in [0, 100), so the same
    # member always sees the same number for the same app.
    placement = int(hashlib.md5((user_id + app_id).encode()).hexdigest(), 16) % 100
    # Walk the weights (which sum to 100) until their running total
    # reaches the hashed value, then return the matching friend count.
    i = 0
    chances = cls.CHANCES
    numbers = cls.POSSIBLE_NUMBER_OF_FRIENDS
    while len(chances) > i and sum(chances[:i + 1]) < placement:
        i += 1
    return numbers[i]
```
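The loop is a linear scan over cumulative weights. An equivalent, self-contained way to read it uses a precomputed running total and a binary search; this is a sketch with a hypothetical function name, not our production code:

```python
import bisect
import hashlib
from itertools import accumulate

CHANCES = (30, 20, 10, 5, 5, 1, 2, 5, 2, 2, 5, 3, 2, 2, 2, 2, 1, 1)
POSSIBLE_NUMBER_OF_FRIENDS = (0, 1, 2, 3, 4, 5, 6, 8, 11, 12, 13, 15, 19, 20, 27, 31, 43, 63)

# Running totals of CHANCES; the final entry is 100, matching the mod-100 hash.
CUMULATIVE = list(accumulate(CHANCES))

def fake_friend_count(user_id, app_id):
    # Hash (user, app) to a stable value in [0, 100), then pick the first
    # bucket whose running total reaches that value.
    placement = int(hashlib.md5((user_id + app_id).encode()).hexdigest(), 16) % 100
    return POSSIBLE_NUMBER_OF_FRIENDS[bisect.bisect_left(CUMULATIVE, placement)]
```

Because the weights sum to exactly 100, every hashed value maps to some bucket, and larger friend counts are proportionally rarer.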
The code for this experiment was developed within a couple of days and we collected results for about 10 days. After the experiment was finished, our analysis showed that the feature did not have much impact on how often our members chose to download an app. As a result, we ended up not prioritizing the feature, and we saved weeks of development effort.
Feature 2: Members can see the number of unread notifications on the mCent icon, so that they know something is waiting for them in the app.
While notification count badges are a standard feature on the iPhone, they are not natively supported on Android. Some manufacturers, such as Samsung and Sony, have added support on some of their phones. Our team wanted to test whether showing notification count badges on supported devices would have an impact on how often users opened mCent.
Building a complete version of the feature would involve determining what counts as a notification, displaying the counts properly, and ensuring the counts stay synced with the server so that a user sees a consistent number across devices. We determined that this would require a lot of engineering work.
To determine whether the feature was worth building fully, we ran a test where “1” would be displayed in the icon badge if the user had any unread notifications. Even though we were always displaying “1”, the test was good enough to estimate the impact a fully implemented notification count badge would have. The entire test took less than a day to develop, and we ran the experiment for 15 days. The results showed that the badges had a significant impact on how frequently users opened mCent. With that information, we are planning to enhance the feature and show the actual number.
“Fail early, fail often”
If you’re building a consumer app, it is usually a good strategy to test lots of your ideas. When an idea would take a lot of effort to implement, consider running a test with a simpler version of the feature that is quicker to develop. These tests can keep you from spending energy on features that do not work, and they give you data to build a better version of the features that do. The examples above were specific to showing numbers, but the methodology applies to everything from tracking clicks on dummy links to measuring how readily users click on an advertisement for your app on Facebook. Just remember: there are multiple ways to collect useful intelligence before fully investing in an idea.