© Denys Zelenchuk 2019
Denys Zelenchuk, Android Espresso Revealed, https://doi.org/10.1007/978-1-4842-4315-2_13

13. Supervised Monkey Tests with Espresso and UI Automator

Denys Zelenchuk, Zürich, Switzerland

Application stability is a top indicator of application quality. Poor stability leads to low user ratings in the Google Play Store, which in turn lowers the application’s overall rating and reduces downloads. To help keep applications stable, the Android platform provides a tool called monkeyrunner ( https://developer.android.com/studio/test/monkeyrunner ) to test application stability.

Unfortunately, monkeyrunner is not integrated with Espresso or the UI Automator framework, which makes it almost useless for applications that require user login or for cases where monkey tests should start from a specific application state. Moreover, it is impossible to collect valuable test results without implementing custom result-parsing solutions.

Taking this information into account, it is clear that monkey-like tests must be much smarter and easier to control. This chapter explains how to implement your own supervised monkey tests.

The Monkeyrunner Issue and Solution

Let’s take a closer look at what makes monkeyrunner so unusable:
  • With monkeyrunner, tests are not part of the project codebase and are not controlled by Espresso or the UI Automator test framework.

  • It is not part of the androidx or android.support libraries.

  • It is a standalone tool with its own issues and maintenance needs.

  • It is hard to fetch and process test results.

  • It was written in the Python programming language, which makes it harder to integrate with existing UI tests.

This list of monkeyrunner cons forces us to implement our own solution. Luckily, we don’t need to do much to have it in place. The idea is to write monkey tests in a native test framework like Espresso or UI Automator, or both. This introduces the following advantages:
  • Monkey tests become part of the UI tests’ codebase, which means they are fully owned and controlled by you.

  • You can use UI tests in combination with monkey tests (for example, you can use a UI test to log in and afterward start the monkey tests, as sketched after this list).

  • It’s easy to fetch and process test results, using the existing reporting infrastructure.

  • Monkey tests can be supervised, which means that if the tests leave the application under test, this can be detected and the application relaunched.

  • Different UI events or gestures can be implemented when needed.
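
The second point can be illustrated with a minimal sketch: an Espresso flow logs in first and then hands control to the supervised Monkey object implemented later in this chapter. The LoginActivity class and the R.id.username_field, R.id.password_field, and R.id.login_button view IDs are hypothetical placeholders for your own application; imports are omitted, as in the other listings.
@RunWith(AndroidJUnit4::class)
class LoggedInMonkeyTest {
    @Test
    fun runsMonkeyAfterLogin() {
        // Bring the application into the desired state with a regular Espresso flow first.
        // LoginActivity and the view IDs are hypothetical and depend on your application.
        ActivityScenario.launch(LoginActivity::class.java)
        onView(withId(R.id.username_field)).perform(typeText("user"), closeSoftKeyboard())
        onView(withId(R.id.password_field)).perform(typeText("password"), closeSoftKeyboard())
        onView(withId(R.id.login_button)).perform(click())
        // Hand control over to the supervised monkey from the logged-in state.
        Monkey.run(200)
    }
}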

Monkey Tests for Instrumented and Third-Party Applications

As mentioned, the monkeyrunner tool does not satisfy our requirements for monkey tests; therefore, in this section, we will implement our own supervised monkey tests.

Identifying the Monkey Test Operational Area

We now have the tools needed to write monkey tests. Next, we have to think about how and where these monkey tests should operate. Figure 13-1 defines the areas where the monkey test should perform its actions.
Figure 13-1. Device screen areas. Red represents areas that should be ignored and blue shows the areas of interest.

According to the official Android documentation, the bar at the top of the screen is called the status bar (see Figure 13-3) and the bar at the bottom is called the navigation bar (see Figure 13-2).
Figure 13-2. Navigation bar

Figure 13-3. Status bar

Our first task is to identify the dimensions of the navigation and status bars and calculate the coordinates of the area we want our monkey tests to operate on. The ScreenDimensions class contains all the methods that perform this calculation. On top of this, it also generates random coordinates for monkey actions inside our area of interest. To fully understand how these calculations are performed, Figure 13-4 shows the device screen coordinate system.
Figure 13-4. Android screen coordinate system

In short, screen coordinates are measured from the top-left corner of the screen, starting at (0, 0) and increasing downward and to the right. Now it should be clear that to calculate the top coordinate of the desired area, we need to know the height of the status bar. The same goes for the bottom-right corner, but in this case, we also need the height of the navigation bar. All of these calculations are done in the ScreenDimensions.kt class.

chapter13.ScreenDimensions.kt Class Keeps All the Functions That Calculate Screen Dimensions and Generate Random Coordinates.
/**
 * Calculates screen dimensions, navigation, status and action bars dimensions.
 * Generates random coordinates for monkey clicks.
 */
object ScreenDimensions {
    private val heightWithoutNavigationBar: Int
    private var width = 0
    private val uiDevice = UiDevice.getInstance(InstrumentationRegistry.getInstrumentation())
    private val appContext = InstrumentationRegistry.getInstrumentation().targetContext
    private val navBarResourceId =
            appContext.resources.getIdentifier("navigation_bar_height", "dimen", "android")
    private val statusBarResourceId =
            appContext.resources.getIdentifier("status_bar_height", "dimen", "android")
    init {
        width = uiDevice.displayWidth
        heightWithoutNavigationBar = uiDevice.displayHeight - navigationBarHeight
    }
    /**
     * Calculate navigation bar height.
     */
    val navigationBarHeight : Int get() {
        return if (navBarResourceId > 0) {
            appContext.resources.getDimensionPixelSize(navBarResourceId)
        } else {
            0
        }
    }
    /**
     * Calculate status bar height.
     */
    val statusBarHeight: Int get() {
        return if (statusBarResourceId > 0) {
            appContext.resources.getDimensionPixelSize(statusBarResourceId)
        } else {
            0
        }
    }
    val randomY: Int
        get()  = (statusBarHeight..heightWithoutNavigationBar).random()
    val randomX: Int
        get() = (0..width).random()
    private fun IntRange.random() =
            Random().nextInt((endInclusive + 1) - start) +  start
}

As you can see, we are using the UiDevice instance to get the device screen’s width and height and using the application context to get the navigation bar and status bar height based on their resource identifiers.
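
To sanity-check these values on a particular device, you can run a small throwaway test like the following sketch, which uses nothing but the ScreenDimensions object above and android.util.Log, and prints the calculated bar heights plus a few generated points to Logcat:
@RunWith(AndroidJUnit4::class)
class ScreenDimensionsTest {
    @Test
    fun logsOperationalArea() {
        // Log the bar heights so they can be compared with the areas marked in Figure 13-1.
        Log.d("ScreenDimensions", "status bar height: ${ScreenDimensions.statusBarHeight} px")
        Log.d("ScreenDimensions", "navigation bar height: ${ScreenDimensions.navigationBarHeight} px")
        // Log a few random points to confirm they fall inside the area of interest.
        repeat(5) {
            Log.d("ScreenDimensions", "random point: x=${ScreenDimensions.randomX} y=${ScreenDimensions.randomY}")
        }
    }
}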

Defining the Monkey Test Actions

The next step is to define the actions that our monkey tests will perform:
  • Click action—This action performs a click at random coordinates (randomX, randomY) inside the area of interest marked in Figure 13-1. The UiDevice.click(int x, int y) action will be used for this purpose.

  • Drag (or swipe)—Drag and swipe actions should be executed based on randomly defined start (startX, startY) and end (endX, endY) coordinates. We use the UiDevice.drag(int startX, int startY, int endX, int endY, int steps) action here. The steps parameter is the number of steps for the swipe action. Each step execution is throttled to five milliseconds per step, so for 100 steps, the swipe will take around 0.5 seconds to complete.

  • Click system back button—The UiDevice.pressBack() action will be used to simulate a short press on the system’s back button.

  • Launch application—Here we will have different approaches to launching an application, depending on the application being tested. For the application under test, we have access to the source code, so we can use ActivityTestRule from the android.support library or the ActivityScenario.launch(Activity.class) function from the androidx.test library. For third-party applications, we have another way of launching applications using the package name, which will be discussed later.

  • Relaunch application in case monkey tests left it—Basically we reuse the implementation from the previous point. This allows the monkey tests to leave the application and will make the tests more closely emulate real use case scenarios, when mobile users leave an application after a certain amount of time and then launch the application again.

Now we move to the implementation of all the mentioned actions, which can be seen in the chapter13.Monkey.kt file.

chapter13.Monkey.kt.
/**
 * Class that keeps Monkey tests logic and main actions.
 */
object Monkey {
    private val uiDevice = UiDevice.getInstance(InstrumentationRegistry.getInstrumentation())
    private val appContext = InstrumentationRegistry.getInstrumentation().targetContext
    private val toDoAppPackageName = appContext.packageName
    private const val numberOfSteps = 10
    // Constant values used with the modulus operator (%) to decide which action should be performed.
    private const val dragNow = 7
    private const val pressNowBack = 13
    // Variable that will keep action description for logging/exception building purpose.
    private var monkeyAction = ""
    /**
     * Drags from start to end coordinate.
     *
     * @param startX - start x coordinate
     * @param startY - start y coordinate
     * @param endX - end x coordinate
     * @param endY - end y coordinate
     */
    private fun drag(startX: Int, startY: Int, endX: Int, endY: Int) {
        uiDevice.drag(
                startX,
                startY,
                endX,
                endY,
                numberOfSteps)
    }
    /**
     * Runs monkey tests for provided package.
     *
     * @param actionsCount - number of events to execute during monkey tests.
     * @param packageName - package name that should be tested. If not provided TO-DO application is tested.
     */
    fun run(actionsCount: Int, packageName: String = toDoAppPackageName) {
        loop@ for (i in 0 until actionsCount) {
            if (PackageInfo.shouldRelaunchTheApp(monkeyAction, packageName)) {
                relaunchApp(packageName)
            }
            val randomX = ScreenDimensions.randomX
            val randomY = ScreenDimensions.randomY
            when {
                i % dragNow == 0 -> {
                    val randomX2 = ScreenDimensions.randomX
                    val randomY2 = ScreenDimensions.randomY
                    monkeyAction = String.format(
                            "drag from: %d - %d to: %d - %d", randomX,
                            randomY, randomX2, randomY2
                    )
                    drag(randomX, randomY, randomX2, randomY2)
                    continue@loop
                }
                i % pressNowBack == 0 -> {
                    monkeyAction = "press back system button"
                    uiDevice.pressBack()
                    continue@loop
                }
                else -> {
                    monkeyAction = "click coordinate x:$randomX y:$randomY"
                    uiDevice.click(randomX, randomY)
                    continue@loop
                }
            }
        }
    }
    /**
     * Launches the application by its package name.
     * In case the package name is equal to the TO-DO application package, ActivityScenario.launch() is used.
     *
     * @param packageName - name of the package to relaunch
     */
    private fun relaunchApp(packageName: String) {
        if (packageName == toDoAppPackageName) {
            ActivityScenario.launch(TasksActivity::class.java)
        } else {
            PackageInfo.launchPackage(packageName)
        }
    }
}

This implementation of the monkey actions is clear and easily extendable. Even this small set of actions is enough to perform useful monkey tests, and it is also easy to extend by introducing one more action inside the when {} block, as sketched after the next paragraph.

The dragNow and pressNowBack constants are chosen to minimize the cases where the expressions i % dragNow and i % pressNowBack return 0 (zero) in the same iteration. You can of course change them to values suitable for your needs.
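
For instance, a long-press action could be added as follows. This is only a sketch: the longPressNow constant is a hypothetical addition (also a prime, like dragNow and pressNowBack), and the long press is emulated with UiDevice.swipe() using identical start and end coordinates, since UiDevice has no coordinate-based long-click method.
    // Hypothetical additional constant, prime like dragNow and pressNowBack.
    private const val longPressNow = 17

    // Additional branch inside the when {} block of run():
    i % longPressNow == 0 -> {
        monkeyAction = "long press coordinate x:$randomX y:$randomY"
        // Swiping from a point to itself with many steps emulates a long press.
        uiDevice.swipe(randomX, randomY, randomX, randomY, 100)
        continue@loop
    }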

An important part of the monkey test logic is handled by this condition:
if (PackageInfo.shouldRelaunchTheApp(monkeyAction, packageName)) {
      relaunchApp(packageName)
}

In short, this condition checks if the tests left the tested application or a crash occurred. If the monkey tests left an application, the relaunch mechanism is triggered. If an error occurred, an exception is created and thrown.

Implementing Package-Dependent Functionality

There are three pieces of monkey test functionality that rely on the application package name:
  • Launching or relaunching the test application in case we are testing a third-party application.

  • Checking if the test application process is in the error state.

  • Creating a function that identifies the need to relaunch the test application.

All of these cases are implemented in the chapter13.PackageInfo.kt file, as shown here.

chapter13.PackageInfo.kt.
/**
 * Provides package helper methods.
 */
object PackageInfo {
    private val uiDevice = UiDevice.getInstance(InstrumentationRegistry.getInstrumentation())
    private val testContext = InstrumentationRegistry.getInstrumentation().context
    /**
     * Checks if there is a need to relaunch the application.
     *
     * @return true when application under test is not displayed to the user.
     */
    fun shouldRelaunchTheApp(monkeyAction: String, packageName: String): Boolean {
        if (!isAppInErrorState(monkeyAction, packageName)
                && uiDevice.currentPackageName != packageName) {
            return true
        }
        return false
    }
    /**
     * Launches application based on its package name.
     * @param packageName - the name of the package to launch.
     */
    fun launchPackage(packageName: String) {
        val intent = testContext
                .packageManager
                .getLaunchIntentForPackage(packageName)!!
        testContext.startActivity(intent)
        uiDevice.wait(Until.hasObject(By
                .pkg(packageName)),
                5000)
    }
    /**
     * Checks if the target application process is in an error state and throws an exception when it is.
     *
     * @return false if the application is not in an error state; otherwise an exception is thrown and the test fails.
     */
    private fun isAppInErrorState(monkeyAction: String, packageName: String): Boolean {
        val manager = testContext.getSystemService(Context.ACTIVITY_SERVICE) as ActivityManager
        var errorDescription = ""
        // Get processes in error state; the loop body is skipped when the list is null.
        manager.processesInErrorState?.forEach {
            val isTargetPackage = it.processName.contains(packageName)
            when {
                isTargetPackage && it.condition == CRASHED ->
                    errorDescription = "Application $packageName crashed after $monkeyAction action"
                isTargetPackage && it.condition == NOT_RESPONDING ->
                    errorDescription = "Application $packageName not responding after $monkeyAction action"
            }
            /* Build and throw a new Espresso PerformException with a proper description and stacktrace,
             * but only when the process in the error state belongs to the tested package.
             * At this point the test has failed.
             */
            if (errorDescription.isNotEmpty()) {
                throw PerformException.Builder()
                        .withActionDescription(errorDescription)
                        .withCause(Throwable(it.stackTrace))
                        .build()
            }
        }
        return false
    }
}

Here, the shouldRelaunchTheApp() function validates two conditions. First, it determines whether the test application is in an error state (crash or ANR). If it is not, it checks whether the tested application is currently displayed to the user and, if it isn’t, reports that the application should be relaunched. The launchPackage(packageName) function uses the test context to send the start activity intent to the system and, with the help of the UiDevice wait mechanism, waits for the application to start. The last function, isAppInErrorState(monkeyAction, packageName), ensures that the tested application process is currently not in an error state. When an error state is identified, an Espresso PerformException is built with additional information about the last monkey action performed and the exception stacktrace. This way, we reuse the Espresso error reporting mechanism and fail the monkey test.

Next are the actual monkey tests for the instrumented and third-party applications. The com.google.android.dialer package (Android Phone application) is used for the third-party example.

chapter13.MonkeyTest.kt.
/**
 * Test class that demonstrates supervised monkey tests.
 */
@RunWith(AndroidJUnit4::class)
class MonkeyTest {
    @get:Rule
    var grantPermissionRule: GrantPermissionRule = GrantPermissionRule.grant(
            Manifest.permission.CAMERA,
            Manifest.permission.WRITE_EXTERNAL_STORAGE)
    @get:Rule
    var screenshotWatcher = ScreenshotWatcher()
    /**
     * Monkey tests will be executed against the TO-DO application.
     */
    @Test
    fun testsInstrumentedApp() {
        ActivityScenario.launch(TasksActivity::class.java)
        Monkey.run(200)
    }
    /**
     * Monkey tests will be executed against provided application package name.
     * This is an example of how to test a third-party application.
     */
    @Test
    fun testsThirdPartyApp() {
        val packageName = "com.google.android.dialer"
        PackageInfo.launchPackage(packageName)
        Monkey.run(200, packageName)
    }
}

While running these tests, you may notice that the monkey actions are a bit slower than monkeyrunner events because of the need to check the application state during each test step. But we can accept this trade-off, keeping in mind all the pros of implementing the tests with the native Android testing frameworks.

Exercise 28

Running Monkey Tests
  1. Check out the master branch of the TO-DO application project and migrate it to AndroidX. After migration, execute Build ➤ Clean Project. Run some tests. If there are failures, analyze and fix them by updating the ProGuard rules or updating dependencies in the build.gradle file.

  2. Implement a test class with a test that launches the application activity using ActivityScenario.launch(Activity.class) in the @Before method and then runs the test (a possible skeleton follows).
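
One possible starting point for the second exercise is sketched below. It simply reuses TasksActivity and the Monkey object from this chapter; the class and method names are placeholders, and imports are omitted as in the other listings.
@RunWith(AndroidJUnit4::class)
class MonkeyExerciseTest {
    @Before
    fun launchApp() {
        // Launch the activity under test before each test method.
        ActivityScenario.launch(TasksActivity::class.java)
    }
    @Test
    fun runsSupervisedMonkey() {
        Monkey.run(100)
    }
}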

Summary

Unfortunately, on the Android platform, monkey tests are not treated as very important. The outdated monkeyrunner Python tool is supplied for this purpose instead of better support in the native Android testing frameworks like UI Automator or Espresso. But even so, without too much effort, it is possible to run meaningful monkey tests: it is easy to bring the application under test into the proper starting state, run supervised monkey tests, and report test results using the native testing frameworks’ functionality.
