Application stability is a key indicator of application quality. Poor stability leads to low user ratings in the Google Play Store, which in turn lowers the application’s overall rating and reduces its downloads. To help keep applications stable, the Android platform provides a tool called monkeyrunner ( https://developer.android.com/studio/test/monkeyrunner ) for testing an application’s stability.
Unfortunately, monkeyrunner is not integrated with Espresso or the UI Automator framework, which makes it almost useless for applications that require user login or for monkey tests that must start from a specific application state. Moreover, it is impossible to collect valuable test results without implementing custom tooling, which usually results in fragile output-parsing solutions.
Taking this information into account, it is clear that monkey-like tests must be much smarter and easier to control. This chapter explains how to implement your own supervised monkey tests.
The Monkeyrunner Issue and Solution
With monkeyrunner, tests suffer from several issues:
- Tests are not part of the project codebase and are not controlled by Espresso or the UI Automator test framework.
- monkeyrunner is not part of the androidx or android.support libraries.
- It is a standalone tool with its own issues and maintenance needs.
- It is hard to fetch and process test results.
- It is written in the Python programming language, which makes it harder to integrate with existing UI tests.
Implementing our own supervised monkey tests instead brings clear benefits:
- Monkey tests become part of the UI tests’ codebase, which means they are fully owned and controlled by you.
- You can combine UI tests with monkey tests (for example, use a UI test to log in and afterward start the monkey tests).
- It’s easy to fetch and process test results using the existing reporting infrastructure.
- Monkey tests can be supervised: if the tests leave the application under test, this can be detected and the application relaunched.
- Different UI events or gestures can be implemented when needed.
Monkey Tests for Instrumented and Third-Party Applications
As mentioned, the monkeyrunner tool does not satisfy our requirements for monkey tests; therefore, in this section, we will implement our own supervised monkey tests.
Identifying the Monkey Tests’ Operational Area
In short, element positions are calculated from the top down, starting from the (0, 0) coordinate in the top-left corner of the screen. It should now be clear that to calculate the zero coordinate of the desired area, we need to know the height of the status bar. The same goes for the bottom-right corner, but in this case we also need the height of the navigation bar. All of these calculations are done in the ScreenDimensions.kt file.
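The arithmetic behind this calculation can be sketched as a pure function. This is a minimal sketch, not the book’s actual listing; the names (`OperationalArea`, `operationalArea`) are assumptions. On a device, the width and height would come from `UiDevice.displayWidth`/`UiDevice.displayHeight`, and the bar heights from the `status_bar_height` and `navigation_bar_height` dimension resources.

```kotlin
// Sketch of the operational-area math; names are assumptions.
// On a device: width/height from UiDevice.displayWidth/displayHeight,
// bar heights looked up via the "status_bar_height" and
// "navigation_bar_height" dimen resource identifiers.
data class OperationalArea(val minX: Int, val minY: Int, val maxX: Int, val maxY: Int)

fun operationalArea(
    displayWidth: Int,
    displayHeight: Int,
    statusBarHeight: Int,
    navigationBarHeight: Int
): OperationalArea = OperationalArea(
    minX = 0,
    minY = statusBarHeight,                    // area starts below the status bar
    maxX = displayWidth,
    maxY = displayHeight - navigationBarHeight // and ends above the navigation bar
)
```

For example, on a 1080×1920 screen with a 63-pixel status bar and a 126-pixel navigation bar, the operational area spans from (0, 63) to (1080, 1794).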
As you can see, we are using the UiDevice instance to get the device screen’s width and height and using the application context to get the navigation bar and status bar height based on their resource identifiers.
Defining the Monkey Test Actions
- Click—This action performs a click on random coordinates (randomX, randomY) inside the area of interest marked off in Figure 13-1. The UiDevice.click(int x, int y) method is used for this purpose.
- Drag (or swipe)—Drag and swipe actions are executed between randomly chosen start (startX, startY) and end (endX, endY) coordinates using the UiDevice.drag(int startX, int startY, int endX, int endY, int steps) method. The steps parameter is the number of steps of the swipe; each step is throttled to five milliseconds, so a swipe with 100 steps takes around 0.5 seconds to complete.
- Click the system back button—The UiDevice.pressBack() method simulates a short press of the system’s back button.
- Launch the application—The approach to launching the application depends on which application is being tested. For a debug application we have access to the source code, so we can use ActivityTestRule from the android.support library or the ActivityScenario.launch(Activity.class) function from the androidx.test library. Third-party applications are launched by package name, as discussed later.
- Relaunch the application if the monkey tests left it—This reuses the implementation from the previous point. Allowing the monkey tests to leave the application makes them emulate real use cases more closely: mobile users leave an application after a certain amount of time and then launch it again.
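The random coordinates for the click and drag actions can be generated inside the operational area along these lines. This is a sketch under assumed names (`randomPoint`, `swipeDurationMs`); it also makes the step-throttling arithmetic explicit.

```kotlin
import kotlin.random.Random

// Pick a random point inside the given bounds (upper bounds exclusive),
// e.g. the operational area between the status and navigation bars.
fun randomPoint(
    minX: Int, minY: Int, maxX: Int, maxY: Int,
    rnd: Random = Random.Default
): Pair<Int, Int> = rnd.nextInt(minX, maxX) to rnd.nextInt(minY, maxY)

// UiDevice.drag() throttles each step to 5 ms, so the expected swipe duration is:
fun swipeDurationMs(steps: Int): Int = steps * 5
```

Passing a seeded `Random` makes a monkey run reproducible, which is handy when a particular random sequence exposes a crash.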
Now we move to the implementation of all the mentioned actions, which can be seen in the chapter13.Monkey.kt file.
This implementation of the monkey actions is clear and easily extendable. Even this number of actions is enough to perform good monkey tests, and extending it is as simple as introducing one more action inside the when {} block.
The dragNow and pressNowBack constants are chosen to minimize the cases where both expressions actionCount % dragNow and actionCount % pressNowBack return 0 (zero) at the same time. You can of course change them to values that suit your needs.
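One possible shape of such a when {} selector is sketched below. The concrete values of dragNow and pressNowBack here are illustrative assumptions, not the book’s values; picking co-prime numbers keeps their multiples from coinciding often.

```kotlin
enum class MonkeyActionType { CLICK, DRAG, PRESS_BACK }

// Illustrative values: co-prime, so both modulo checks rarely hit zero together.
val dragNow = 7
val pressNowBack = 13

// Most iterations produce a click; every dragNow-th a drag,
// every pressNowBack-th a back press.
fun chooseAction(actionCount: Int): MonkeyActionType = when {
    actionCount % dragNow == 0 -> MonkeyActionType.DRAG
    actionCount % pressNowBack == 0 -> MonkeyActionType.PRESS_BACK
    else -> MonkeyActionType.CLICK
}
```

A new gesture (for example, a long press) would be one more branch in the when {} block plus one more enum entry.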
In short, this condition checks whether the tests left the tested application or a crash occurred. If the monkey tests left the application, the relaunch mechanism is triggered; if an error occurred, an exception is created and thrown.
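The supervision decision can be reduced to logic like the following. This is a sketch under assumed names; the real implementation throws an Espresso PerformException carrying the last monkey action, whereas this stand-alone version uses IllegalStateException.

```kotlin
// Supervision check run after each monkey action.
// currentPackage: package currently in the foreground (null if undetectable).
// Returns true if a relaunch was triggered; throws if the app is in an error state.
fun superviseStep(
    currentPackage: String?,
    testedPackage: String,
    inErrorState: Boolean,
    relaunch: () -> Unit
): Boolean {
    if (inErrorState) {
        // Real code: build an Espresso PerformException with the last monkey action.
        throw IllegalStateException("$testedPackage is in error state (CRASH/ANR)")
    }
    if (currentPackage != testedPackage) {
        relaunch() // the monkey left the app; bring it back
        return true
    }
    return false
}
```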
Implementing Package-Dependent Functionality
- Launching or relaunching the test application in case we are testing a third-party application.
- Checking whether the test application process is in an error state.
- Creating a function that identifies the need to relaunch the test application.
All of these cases are implemented in the chapter13.PackageInfo.kt file, as shown here.
Here, the shouldRelaunchTheApp() function validates two conditions. First, it determines whether the test application is in an error state (CRASH or ANR). If it’s not, it checks whether the tested application is currently shown to the user and, if not, relaunches it. The launchPackage(packageName) function uses the test context to send the start-activity intent to the system and, with the help of the UiDevice wait mechanism, waits for the application to start. The last function, isAppInErrorState(monkeyAction, packageName), ensures that the tested application process is not currently in an error state. When an error state is identified, an Espresso PerformException is created with additional information about the last performed monkey action and the exception stacktrace. This way we use the Espresso error-reporting mechanism to fail the monkey test.
Next are the actual monkey tests for the instrumented and third-party applications. The com.google.android.dialer package (Android Phone application) is used for the third-party example.
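Independent of the concrete device APIs, the overall shape of such a supervised monkey test loop might look like the sketch below. MonkeyDevice is a hypothetical seam introduced here so the loop can be shown on its own; in the real tests these calls go straight to UiDevice and the package-info helpers.

```kotlin
// Hypothetical abstraction over UiDevice and the package helpers,
// so the supervision loop itself can be shown (and exercised) in isolation.
interface MonkeyDevice {
    fun performRandomAction()                       // click, drag, or back press
    fun isAppInForeground(packageName: String): Boolean
    fun launch(packageName: String)
}

fun runSupervisedMonkey(device: MonkeyDevice, packageName: String, steps: Int) {
    device.launch(packageName)
    repeat(steps) {
        // Supervision: if the previous action left the app, bring it back first.
        if (!device.isAppInForeground(packageName)) device.launch(packageName)
        device.performRandomAction()
    }
}
```

The same loop serves both the instrumented application and a third-party package such as com.google.android.dialer; only the launch mechanism behind it differs.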
While running these tests, we can see that the monkey actions are a bit slower than the monkeyrunner tests because the application state is checked during each test step. But this cost is negligible, keeping in mind all the pros of having the tests implemented with the native Android testing frameworks.
Exercise 28
- 1.
Check out the master branch of the TO-DO application project and migrate it to AndroidX. After migration, execute Build ➤ Clean Project. Run some tests. If there are failures, analyze and fix them by updating the ProGuard rules or updating dependencies in the build.gradle file.
- 2.
Implement a test class with a test that launches the application’s activity using ActivityScenario.launch(Activity.class) in the @Before method and then runs the test.
Summary
Unfortunately, monkey tests are not treated as very important on the Android platform. The outdated monkeyrunner Python tool is supplied for this need instead of better support via the native Android testing frameworks like UI Automator or Espresso. But even so, without too much effort it is possible to run meaningful monkey tests: to start and prepare the proper application-under-test state in an easy way, run supervised monkey tests, and report test results using the native testing frameworks’ functionality.