When building a mobile app, there’s a temptation to release it to the market as fast as possible, but taking just a little time to evaluate what is and isn’t working for your actual users could make all the difference between success and failure.
One of the most under-utilized practices in mobile app development today is A/B testing. The technique has been used for years to create better, more effective marketing campaigns, including testing push notification strategies on mobile devices, but it hasn’t been adopted nearly as widely for the mobile app user experience itself.
A/B testing means using two versions (A and B) of a sales promotion, broadcast ad or printed collateral piece that are identical except for one variation that might affect the intended result. Version A might be an existing advertisement (the control), while Version B is modified in some respect: a different title, layout, image, color, etc. Both are shown to test audiences and, if one version elicits a better response, that version is rolled out as the full campaign.
When it comes to the user experience of a mobile app, A/B testing can ensure that the app delivers more useful and pleasing features to its intended audience. This is critically important, as more than 90 percent of apps are deleted because they fail to deliver as promised or are not very user-friendly. A/B testing removes some of the guesswork, showing product managers and developers exactly what users prefer.
The merits of A/B testing
Here’s an example of how A/B testing works in practice. Action items such as buttons need visual weight and clearly distinct labels, but sometimes their placement matters even more. Some designers settle these questions simply by reasoning from the user’s perspective; a better way is to test them. If a “buy now” button is pink instead of gray, does it increase sales? How does the shape of the button influence the purchaser? If it sits at the bottom left of the screen, is it easier for the consumer to find? Like a scientific experiment, A/B testing can answer these questions and more.
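To make this concrete, here is a minimal Kotlin sketch of how an app might assign each user to one button treatment and tag conversion events with that assignment. The variant names, user ID and logEvent helper are all hypothetical, and the sketch is not tied to any particular A/B testing SDK.

```kotlin
import kotlin.math.abs

// Hypothetical variants for the "buy now" button experiment.
enum class BuyButtonVariant { PINK_BOTTOM_LEFT, GRAY_TOP_RIGHT }

// Deterministic assignment: the same user always sees the same variant,
// so results aren't muddied by users switching between versions.
fun assignVariant(userId: String): BuyButtonVariant =
    if (abs(userId.hashCode()) % 2 == 0) BuyButtonVariant.PINK_BOTTOM_LEFT
    else BuyButtonVariant.GRAY_TOP_RIGHT

// Stand-in for whatever analytics call the app already makes;
// the key point is that every event carries the variant the user saw.
fun logEvent(name: String, variant: BuyButtonVariant) =
    println("event=$name variant=$variant")

fun main() {
    val variant = assignVariant("user-12345")
    logEvent("buy_button_shown", variant)
    // ...later, when the user actually completes a purchase:
    logEvent("purchase_completed", variant)
}
```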
There are many reasons why A/B testing should be more widely adopted by mobile app developers. Here are just a few of them:
- A/B testing can increase sales. As previously stated, by experimenting with different screen layouts, developers can learn how to boost sales conversions; sometimes, it’s as simple as moving the position of the “buy” button or changing its color. As an example, a non-profit might consider changing its “Donate Now” button to “End Hunger Now” or “Support Us” and test which phrase elicits the greatest return (a sketch of how the resulting conversion rates might be compared follows this list).
- A/B testing can deliver useful analytics. As an example, in late 2014, Facebook’s Parse began offering Push Experiments, a feature that helps developers A/B test push messaging ideas. With Push Experiments, developers receive useful data and can learn which time of day is most effective for push notifications, how many people see each one, which text was most appealing and more.
- A/B testing can ensure a better user experience. A superior user experience is the Holy Grail of mobile app development. A/B testing can help achieve that goal by showing what users actually want before the app is launched.
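Once two variants have run, the remaining question is whether the observed difference is real or just noise. As a rough illustration, the Kotlin sketch below compares two conversion rates with a simple two-proportion z-test; the visitor and conversion counts are hypothetical.

```kotlin
import kotlin.math.sqrt

// Conversion counts for each variant; the numbers here are made up.
data class VariantResult(val visitors: Int, val conversions: Int) {
    val rate: Double get() = conversions.toDouble() / visitors
}

// Two-proportion z-test: how confident can we be that B really beats A,
// rather than the difference being random variation?
fun zScore(a: VariantResult, b: VariantResult): Double {
    val pooled = (a.conversions + b.conversions).toDouble() / (a.visitors + b.visitors)
    val stdErr = sqrt(pooled * (1 - pooled) * (1.0 / a.visitors + 1.0 / b.visitors))
    return (b.rate - a.rate) / stdErr
}

fun main() {
    val donateNow = VariantResult(visitors = 5000, conversions = 150)  // "Donate Now"
    val endHunger = VariantResult(visitors = 5000, conversions = 190)  // "End Hunger Now"
    println("A: %.2f%%, B: %.2f%%".format(donateNow.rate * 100, endHunger.rate * 100))
    // A |z| above roughly 1.96 corresponds to about 95% confidence in the difference.
    println("z = %.2f".format(zScore(donateNow, endHunger)))
}
```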
Roadblocks to the wider adoption of A/B testing
With so many positive reasons for mobile app teams to use A/B testing on new products, why aren’t more of them doing it? Two reasons: time and money. Often, there is inordinate pressure on the team to rush an app to market, either because it is timely, has a limited window of usability before being replaced by the Next Big Thing, or because a competitive app is also in the works.
Yet, it’s rare that a quickly released app with a flawed UX delivers better long-term results than one that is delayed but more usable and complete. This is the tired time-to-market versus quality debate. Fortunately, A/B testing does not need to severely impact a project timeline. Developers can, and should, carefully select the elements to trial ahead of time; for example, the placement of a button on a layout is much easier and quicker to A/B test than determining how users transition between pages.
Moving A/B testing into the mainstream of mobile app development
With the benefits of increased stickiness, customer retention and the ability to drive more revenue, A/B testing for mobile apps should become more of an accepted practice in 2015. There are plenty of case studies to show the benefits: Walmart’s developers used A/B testing to improve the company’s mobile commerce conversion rates in late 2013; in 2014, Facebook, LinkedIn and other sites jumped on the bandwagon for their mobile offerings. Socialcam has outlined on its blog how it conducts A/B testing.
A telling sign of how the market is responding to mobile app A/B testing is the increasing number of vendors offering tools that make it easier for developers to run such tests. Mixpanel, for example, once offered only desktop analytics tools; it added a tool for A/B testing of mobile apps in 2014. Others appearing earlier this year include Leanplum, Apptimize and Optimizely. A/B testing is also supported by the platforms themselves. For example, Android’s Developer site explains how you can integrate Google Analytics into a mobile app and use Content Experiments to run A/B tests.
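As a rough illustration of the reporting side (not the full Content Experiments integration, which is documented on the Android Developer site), the Kotlin sketch below records the variant a user was shown as a Google Analytics custom dimension, so reports can be segmented by variant. The tracking ID and dimension index are placeholders for values configured in your own property.

```kotlin
import android.content.Context
import com.google.android.gms.analytics.GoogleAnalytics
import com.google.android.gms.analytics.HitBuilders

// Sketch only: report which variant a user saw as a Google Analytics
// custom dimension. "UA-XXXXXX-Y" and dimension index 1 are placeholders.
fun reportVariantExposure(context: Context, variant: String) {
    val tracker = GoogleAnalytics.getInstance(context).newTracker("UA-XXXXXX-Y")
    tracker.setScreenName("checkout")
    tracker.send(
        HitBuilders.ScreenViewBuilder()
            .setCustomDimension(1, variant)
            .build()
    )
}
```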
In summary, A/B testing makes sense for any developer or product manager looking to ensure that a new mobile app is optimized to achieve its goal. By learning what works and what doesn’t before launch, there’s a greater chance that users will like the app and keep it, and that the project as a whole will be successful. Given the obvious benefits of A/B testing, even if it’s applied to just a few key items, it’s time for the practice to be routinely included in product development timelines.