Fraud Squad Field Notes is a multipart series that will cover a variety of topics related to user fraud and waste within mobile app marketing. With insights directly from the mCent fraud team, we will discuss industry trends, methodologies, and share observations from our audience data.
James and the perfect quality score:
How a bit of advanced modeling and a whole lot of air travel help us find our best users.
Here on the Fraud Squad we’ve been developing models to detect fraudulent behavior and to systematically identify users who are most likely to become highly engaged, well-retained users of our clients’ apps.
One of the most promising new features we have built is an algorithm that uses our rich first-party behavioral data to score users based on their expected retention and engagement, both on our own platform and on those of our advertisers. We then use this score to target users with the offers most likely to appeal to them.
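To make the idea concrete, here is a minimal sketch of this kind of behavioral quality score. The feature names, weights, and logistic form are all hypothetical choices for illustration; the actual mCent model and its predictors are not described in this post.

```python
import math

# Hypothetical behavioral features and weights -- illustrative only.
# The real model's predictors and coefficients are not public.
WEIGHTS = {
    "offers_completed": 0.8,   # completing offers signals engagement
    "days_active_week1": 0.5,  # early retention is a strong predictor
    "referrals_sent": 1.2,     # referring friends signals advocacy
}
BIAS = -3.0

def quality_score(user: dict) -> int:
    """Map first-party behavioral features to a 0-100 quality score
    via a logistic function, a common choice for this kind of model."""
    z = BIAS + sum(WEIGHTS[f] * user.get(f, 0) for f in WEIGHTS)
    return round(100 / (1 + math.exp(-z)))

# A highly engaged user scores near 100; a dormant one scores near 0.
engaged = {"offers_completed": 12, "days_active_week1": 7, "referrals_sent": 4}
dormant = {"offers_completed": 0, "days_active_week1": 1, "referrals_sent": 0}
print(quality_score(engaged), quality_score(dormant))  # → 100 8
```

In practice the weights would be fit from historical retention data rather than hand-set, but the shape of the system is the same: behavioral signals in, a single targetable score out.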
Development of this scoring system is still in its formative stages; however, in early experiments it has shown significant promise and helped us optimize our traffic immensely.
Still, even with math on our side, looking at numbers on a spreadsheet can leave you curious about how the model is actually working. Does the score accurately reflect the “quality” of a user? Is the model relying on the right behavioral predictors? Who are the actual people on the other end of the equation?
At Jana we put a lot of weight on direct feedback from our users, and regularly send team members – product managers, engineers, and sales reps alike – into our key markets to meet with our users and learn as much as we can about the environments in which we operate.
I recently had the opportunity to travel to Indonesia and conduct 7 days of user and product testing.
On my very first day I was walking around ITC Roxy Mas (one of Jakarta’s largest cell phone shopping malls) and met James. James is 20 years old and works at one of the outlets selling Sony smartphones. He had never heard of mCent before, but after 10 minutes or so of friendly chatting he let me show him the app. He went through the registration process and gave me some helpful feedback. I thanked him for his time and we went our separate ways…
One of the objectives of my trip was to learn about user behavior and potentially assess, in the wild, if our scoring algorithm was working.
With that in mind, a few days after meeting James I queried a list of recent mCent members, and we began reaching out to the users with the highest scores (could these users really be as good as our model predicted?). Within three phone calls we had found someone willing to talk to us, and he had a perfect score of 100. Awesome!
The next day we arrived at our agreed upon meet-up place (as it happens, it was a Starbucks), and who’s sitting there? James!
As it turns out, my quick demo of our app was quite convincing. After we first met he continued to use mCent, completed many offers, and even referred several friends (exactly the type of user we like to see on our platform). Lo and behold, our model gave him a perfect score.
Behavioral modeling (especially when you have rich, first-party data) can be an incredibly effective tool for targeting users and improving in-app engagement. In the era of big data, however, it is important to step back and make sure you look at the real-world behaviors of your users.
Every model should also tell a logical story. What are the predictors you are looking at? Do they accurately capture what you are trying to measure?
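One cheap way to check that a model tells a logical story is to verify that each learned weight points in the direction domain knowledge says it should. The sketch below assumes a linear-style model; the feature names and weights are hypothetical, not the actual mCent predictors.

```python
# Sanity-check that learned weights match the story each predictor
# should tell. All names and values here are hypothetical examples.
learned_weights = {
    "offers_completed": 0.8,
    "referrals_sent": 1.2,
    "uninstalled_within_day": -2.1,
}

# Signs we expect from domain knowledge: engagement signals should
# push the score up, churn signals should push it down.
expected_sign = {
    "offers_completed": +1,
    "referrals_sent": +1,
    "uninstalled_within_day": -1,
}

# Any feature whose learned sign contradicts the expected sign is a
# red flag worth investigating before trusting the model.
surprises = [f for f, w in learned_weights.items()
             if w * expected_sign[f] < 0]
print(surprises)  # → [] (an empty list means the story holds)
```

If a predictor's sign surprises you, either the model has found something real that your intuition missed, or it has latched onto a spurious correlation; both cases deserve a closer look before the score drives targeting decisions.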
Finally, once you’ve built your model and done the requisite statistical legwork, sometimes it helps to get out there and see it for yourself!
Interested in learning more about how statistics meets real-world users? Fantastic, because we’re hiring.