
6 Ways to Improve Mobile Performance for Emerging Markets

There’s no free lunch when developing apps for emerging markets. Users are on spotty networks, and a good portion of them are using outdated phones that are several years old. On top of that, data costs represent a significant portion of their paycheck. So how do you get them to adopt your app? For us at Jana, giving people free internet is a priority, and that starts with creating an app that is accessible to as many users as possible. Performance is a huge barrier to adoption in emerging markets, so the first step is understanding these limitations and then designing an experience that takes them into consideration. For our purposes, we split how we think about performance into two buckets: in-app performance and network performance.

In-App Performance (Android):

mCent users are on some strange devices out there, and as a result we’ve had to learn how to effectively manage the phone’s resources: memory, processors, and so on. Using Facebook’s Device Year Class library, we’re able to classify our users’ devices, and we realized that many of our users are on devices up to 6 years old.
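
As a rough sketch, classifying a device with Facebook’s open-source Device Year Class library looks something like the following. This is based on the library’s documented API; the 2011 cutoff and the helper names here are purely illustrative, not our production logic.

```java
import android.content.Context;

import com.facebook.device.yearclass.YearClass;

public class DeviceClassifier {

    // Returns the device's "year class", e.g. 2010 for hardware roughly
    // comparable to a flagship phone released in 2010.
    public static int classify(Context context) {
        return YearClass.get(context);
    }

    // Illustrative helper: treat anything 2011-class or older as low-end
    // so the app can scale back expensive features accordingly.
    public static boolean isLowEndDevice(Context context) {
        return YearClass.get(context) <= 2011;
    }
}
```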

So what do these devices look like? We took one of the common devices, a Samsung Duos, and checked out the specifications: a single-core Cortex-A5 processor with a 1 GHz clock speed and 512 MB of RAM, running Android Ice Cream Sandwich. Try any app on this phone and you’ll spend most of your time pulling your hair out. Scrolling through pages is choppy, black screens appear, and the app suddenly seems unresponsive. But for many users, this is their normal experience with most apps out there. Luckily, there are some general strategies that help improve performance through better resource management.

1. Make sure that the UI Thread (otherwise known as the main thread) only does what it needs to do.

Keeping the UI thread available for tasks such as rendering and inflating views is important. It should never be responsible for long-running I/O like database calls or any complex computations; those can be done on a background thread. It’s also important to consider the number of background threads running concurrently, so that lower-end processors aren’t continually context switching from one thread to another. We use a thread pool executor where the number of threads in the pool is determined by the number of cores on the device’s processor.
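
A minimal sketch of that kind of executor is below; the sizing heuristic (one thread per core) is illustrative rather than our exact production configuration.

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class BackgroundExecutor {

    // Size the pool to the number of available cores so a single-core
    // device isn't forced to context-switch between many busy threads.
    private static final int CORES = Runtime.getRuntime().availableProcessors();

    public static final ThreadPoolExecutor EXECUTOR = new ThreadPoolExecutor(
            CORES,                      // core pool size
            CORES,                      // maximum pool size
            1L, TimeUnit.SECONDS,       // keep-alive for idle threads
            new LinkedBlockingQueue<Runnable>());

    private BackgroundExecutor() {}
}
```

Long-running work like a database read can then be submitted with EXECUTOR.execute(...) and the result posted back to the UI thread via a Handler.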

2. Monitor memory usage and avoid dynamic allocations when possible

With newer devices running ART (Android Runtime), garbage collection performance has improved, but with Dalvik (ART’s predecessor), GC can lead to choppy rendering, poor UI responsiveness, and other issues. In order to make the app behave smoothly on devices running Dalvik, we need to be aware of how often we allocate memory and when it’s cleaned up, so that GC does not impact the app’s behavior. If you’re developing in Android Studio, you can record allocations for a given time period and then see the allocations on different threads.
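
For example, a common source of avoidable garbage is allocating objects inside a hot path like a custom view’s onDraw(), which runs on every frame. A simplified sketch (the view itself is hypothetical):

```java
import android.content.Context;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.graphics.Rect;
import android.view.View;

public class BadgeView extends View {

    // Allocate once, outside the draw path, so onDraw() itself creates
    // no garbage for Dalvik's GC to chase on every frame.
    private final Paint paint = new Paint(Paint.ANTI_ALIAS_FLAG);
    private final Rect bounds = new Rect();

    public BadgeView(Context context) {
        super(context);
        paint.setColor(Color.RED);
    }

    @Override
    protected void onDraw(Canvas canvas) {
        super.onDraw(canvas);
        // Reuse the same Rect instead of calling "new Rect(...)" here.
        bounds.set(0, 0, getWidth(), getHeight());
        canvas.drawRect(bounds, paint);
    }
}
```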

3. Do as little as possible in an Activity’s onCreate, onStart, and onResume methods.

The activity lifecycle shows that the onCreate, onStart, and onResume methods need to execute before the activity runs (i.e., the user won’t see anything on their screen until these finish). If you have any long-running I/O, like a database query, in these methods, you may block the activity from starting up until it completes. We’ve seen cases where a simple SQLite SELECT statement to fetch one row can take up to 5 seconds on a Samsung Duos. Activity layouts with multiple nested layers can also take a long time to render.

A pattern we’ve adopted is to load an initial, barebones layout so that the user sees a screen and can interact with top-level navigation, then fire off an async task that does the I/O. Once the task completes, we inflate the full layout and render it, so the app stays responsive even while the task is executing.
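
A simplified sketch of that pattern is below. The layout and ViewStub IDs and the FeedStore helper are hypothetical, and this is one way to structure it rather than a drop-in from our codebase.

```java
import android.app.Activity;
import android.os.AsyncTask;
import android.os.Bundle;
import android.view.ViewStub;

public class FeedActivity extends Activity {

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        // 1. Show a barebones layout immediately: toolbar, top-level
        //    navigation, and a lightweight loading state. No database
        //    or network work happens here.
        setContentView(R.layout.activity_feed_skeleton);

        // 2. Kick the slow I/O off the UI thread.
        new LoadFeedTask().execute();
    }

    private class LoadFeedTask extends AsyncTask<Void, Void, String> {
        @Override
        protected String doInBackground(Void... params) {
            // e.g. a SQLite query that can take seconds on a low-end device
            return FeedStore.loadFeedJson(FeedActivity.this);  // hypothetical helper
        }

        @Override
        protected void onPostExecute(String feedJson) {
            // 3. Only now inflate the heavier content layout and bind the data.
            ViewStub stub = (ViewStub) findViewById(R.id.feed_content_stub);
            if (stub != null) {
                stub.inflate();
            }
            bindFeed(feedJson);
        }
    }

    private void bindFeed(String feedJson) {
        // populate the list adapter with the loaded data
    }
}
```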

Network Performance:

According to a 2015 report by Akamai, India had the slowest internet speeds in the Asia-Pacific region. When we measured download speed, we saw that on average users were getting around 10KB/s. At that rate, a 5MB app (roughly 5,120KB) takes about 512 seconds, a little over 8 minutes, to download!

To understand what network performance looks like from a user’s phone and identify any bottlenecks, we tracked different stages of a request.


Through this, we were able to prioritize several fixes:

1. Minimize the response and request payloads

Low internet bandwidth means we can’t bloat our responses or our requests with unnecessary information. Look into API endpoints that return the largest responses and make sure that you’re compressing server-side.

Similarly, if you’re sending large payloads of data to your API, you’ll also want to consider client-side compression of the request payloads. The caveat here is that this can degrade app performance, since the compression itself takes up device resources.
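
As a hedged example, gzip-compressing a JSON request body over HttpURLConnection might look like the following. The endpoint URL is made up, and this only works if your server is set up to inflate gzipped request bodies.

```java
import java.io.IOException;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.util.zip.GZIPOutputStream;

public class CompressedUpload {

    public static int postGzipped(String jsonBody) throws IOException {
        URL url = new URL("https://api.example.com/v1/events");  // hypothetical endpoint
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        try {
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            conn.setRequestProperty("Content-Type", "application/json");
            // Tell the server the body is gzipped; it has to know how to inflate it.
            conn.setRequestProperty("Content-Encoding", "gzip");

            OutputStream out = new GZIPOutputStream(conn.getOutputStream());
            try {
                out.write(jsonBody.getBytes("UTF-8"));
            } finally {
                out.close();  // finishes the gzip stream and flushes the request body
            }
            return conn.getResponseCode();
        } finally {
            conn.disconnect();
        }
    }
}
```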

2. Batch multiple requests into one

Admittedly, compressing your responses will only do so much. Latency across mobile networks is the main culprit, because the infrastructure is not as mature as it is in the US. If your app is executing multiple requests, make sure that you’re reusing connections to send them (keeping the connection alive). Another way to tackle latency is to batch requests together and then send them across the network in one go. Keep in mind the impact on request and response size, but by batching you’ll significantly reduce the number of round trips you need to make.
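
As a rough illustration, a batch could wrap several pending operations into one JSON payload and POST it in a single round trip. The "operations" field name is hypothetical; the shape of the batch depends entirely on what your API expects.

```java
import java.util.List;

import org.json.JSONArray;
import org.json.JSONException;
import org.json.JSONObject;

public class RequestBatcher {

    // Collapse many pending operations into one payload so the client pays
    // the round-trip latency once instead of once per operation.
    public static JSONObject buildBatch(List<JSONObject> pendingOps) throws JSONException {
        JSONArray ops = new JSONArray();
        for (JSONObject op : pendingOps) {
            ops.put(op);
        }
        JSONObject batch = new JSONObject();
        batch.put("operations", ops);  // hypothetical field name
        return batch;
    }
}
```

The resulting JSON can then be sent with the same kind of gzipped upload shown earlier.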

3. Cache client side

Caching client side has the benefit of saving data and making the app seem more responsive at all times. Say we have a news feed and each request to the server-side uses up 10KB of data. If we make that call 10x a day, we’ve used up 100KB of data just for the news feed. For people in emerging markets, more data means more cost.

Now let’s think about the experience when users have a spotty connection. Assume that we always make a request whenever the user opens the news feed in the app. The user has to wait until the request finishes before he or she can interact with the content.

Let’s take a second case where we’ve cached a previous, successful response. If the user’s on a spotty connection, don’t auto-update the news feed; instead, load the news feed that’s already cached. If the connection is strong, we can still load the feed from the prior cached response so the user can interact with that content right away, and then update the feed once the new response comes back. In either case, we maintain a responsive app regardless of network connectivity.

Of course, the main concern with client-side caching is invalidating stale data. A naive solution is simply placing a time-to-live (TTL) on client-side data, but the right approach depends on what data you’re storing.
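
Below is a simplified sketch of that cache-first flow with a TTL check. The SharedPreferences-backed cache and the one-hour TTL are illustrative assumptions, not what mCent actually ships.

```java
import android.content.Context;
import android.content.SharedPreferences;

public class FeedCache {

    private static final String PREFS = "feed_cache";
    private static final String KEY_BODY = "body";
    private static final String KEY_SAVED_AT = "saved_at";
    private static final long TTL_MS = 60 * 60 * 1000L;  // illustrative 1-hour TTL

    private final SharedPreferences prefs;

    public FeedCache(Context context) {
        prefs = context.getSharedPreferences(PREFS, Context.MODE_PRIVATE);
    }

    // Save the latest successful response along with when we received it.
    public void save(String responseBody) {
        prefs.edit()
                .putString(KEY_BODY, responseBody)
                .putLong(KEY_SAVED_AT, System.currentTimeMillis())
                .apply();
    }

    // Return the cached response, or null if nothing has been cached yet.
    public String getCached() {
        return prefs.getString(KEY_BODY, null);
    }

    // The TTL only decides whether we *also* refresh over the network;
    // on a spotty connection we can still show the cached copy.
    public boolean isStale() {
        long savedAt = prefs.getLong(KEY_SAVED_AT, 0L);
        return System.currentTimeMillis() - savedAt > TTL_MS;
    }
}
```

On open, the screen renders getCached() immediately; only if isStale() returns true and the connection looks healthy does it kick off a refresh and swap in the new data when it arrives.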

It’s best not to take anything for granted when it comes to developing apps for emerging markets. Understanding the device and network limitations in these countries will help you make better design decisions about how to improve the user’s experience. We’re careful not to overcomplicate or over-engineer performance enhancements, but by measuring performance and comparing fixes, we’ve made significant strides in the last few months. Happier users means more free internet 🙂

Let us know your thoughts! And as always, we’re hiring!
