On Using Robots for App Testing

I gave a talk at GTAC 2015 on using robots for Android app testing. The talk drew a lot of interest, and I presented the same topic at AndroidSummit, Connect.TECH, and several meetups to share my ideas with developers and hear their feedback. Most of my audience was fascinated by the idea of having a robot test their app to increase coverage and identify failures (crashes) in the application.

My talk at the Google Test Automation Conference (GTAC) at Google’s Boston office.

We’ve come a long way since the talk. After three years of research and customer discovery with developers, two reasons stand out for why one should use robots for app testing:

  1. Scale & Agility: Manually approving each release across all devices takes a lot of time and requires many skilled people to get right.
  2. Better Error Detection: Without strong rigor, it is easy to miss many customer-facing errors, mostly in regression and compatibility testing.

A test framework must work toward a better customer experience by ensuring the highest possible app quality. Artificial intelligence seemed the most obvious way to approach this problem.

The AI we are building is not replacing human testing but supplementing it. We have built reinforcement learning algorithms that explore an app much like a human would, and deep learning algorithms that classify the bugs found along the way.
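As a purely illustrative sketch of the bug-classification side (not our actual model), here is a tiny PyTorch CNN that labels an app screenshot as normal or as one of a few bug categories; the architecture and label set are assumptions made up for this example.

```python
import torch
import torch.nn as nn

class CrashScreenClassifier(nn.Module):
    """Toy CNN that labels an app screenshot (illustrative only)."""
    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # collapse to one feature vector per image
        )
        self.head = nn.Linear(32, num_classes)

    def forward(self, screenshot: torch.Tensor) -> torch.Tensor:
        x = self.features(screenshot)   # (N, 32, 1, 1)
        return self.head(x.flatten(1))  # (N, num_classes) logits

# Hypothetical labels: 0 = normal screen, 1 = crash dialog, 2 = broken layout.
model = CrashScreenClassifier()
logits = model(torch.rand(1, 3, 224, 224))  # one fake 224x224 RGB screenshot
predicted = logits.argmax(dim=1)
```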

One common complaint we heard from our initial users was that the way real users use their app is very different from how current automated tools test apps. We are trying to bridge this gap by having the robot observe app users from varied sources to understand usage patterns, then training our algorithms so the robot mimics general human behavior.

One straightforward way to help the robot is to provide it with recorded test cases using the Barista App. Through trial and error, the robot biases the exploration of the reinforcement learning algorithm toward the user flow encoded in the test case. It then prioritizes that flow, mimicking the human behavior within it while also exploring the adjacent flows in your app; a minimal sketch of this biased exploration follows.
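The sketch below assumes a simplified Q-learning-style agent; `screen`, `q_values`, and `recorded_flow` are hypothetical stand-ins for this example, not our actual API.

```python
import random

def choose_action(screen, q_values, recorded_flow, step,
                  epsilon=0.2, bias=0.6):
    """Pick the next UI action on `screen` (illustrative sketch).

    With probability `bias`, imitate the action the recorded Barista test
    case took at this step; otherwise explore randomly (epsilon-greedy) or
    exploit the highest-value action learned so far.
    """
    actions = screen.available_actions()   # e.g. taps, scrolls, swipes
    recorded = recorded_flow.get(step)     # action the human took at this step
    if recorded in actions and random.random() < bias:
        return recorded                    # follow the demonstrated user flow
    if random.random() < epsilon:
        return random.choice(actions)      # wander into an adjacent flow
    return max(actions, key=lambda a: q_values.get((screen.id, a), 0.0))
```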

Once the robot has explored the application, it needs to communicate back and tell its story. How does it do that? What worked, what didn’t, and what errors did the robot encounter while using the application? The answer lies in visualizing the robot’s internal model, which we call the AppMap. It’s just like a site map, except the nodes are app screens instead of web pages, linked by actions (e.g., tap, scroll, swipe) instead of web links.

The robot sees the application as a model of screens and the interactions it can perform with the application to transition between them.
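One way to picture the AppMap is as a directed graph keyed by screen, with edges labeled by actions and any errors attached to the screen where they occurred; the sketch below uses made-up screen and action names.

```python
from collections import defaultdict

class AppMap:
    """Directed graph: nodes are app screens, edges are user actions."""
    def __init__(self):
        self.transitions = defaultdict(list)  # screen -> [(action, next screen)]
        self.errors = defaultdict(list)       # screen -> errors seen there

    def record_transition(self, screen, action, next_screen):
        self.transitions[screen].append((action, next_screen))

    def record_error(self, screen, error):
        self.errors[screen].append(error)

# After a crawl, the map reads like a site map of the app:
app_map = AppMap()
app_map.record_transition("LoginScreen", "tap:login_button", "HomeScreen")
app_map.record_transition("HomeScreen", "swipe:left", "SettingsScreen")
app_map.record_error("SettingsScreen", "crash: NullPointerException")
```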

This is the future of testing: a visual approach that helps developers stabilize their app quickly so they can release faster.

We are testing apps for our beta users. If you would like to have your app tested, reach out to us at [email protected].