Chapter 13. Testing and instrumentation

In this chapter

  • Unit testing
  • Testing activities with instrumentation
  • Mock objects and stubs
  • Input testing with the Monkey tool

I tread paths by moonlight that others fear to speak of during day.

Patrick Rothfuss, The Name of the Wind

If there’s one topic that polarizes the software development community, it must be testing. Though testing is commonly understood as a true development task, as opposed to being a duty of “the QA guys,” many programmers still try to avoid it like the plague. (If you don’t fall into that group, feel free to skip the initial motivation that follows.)

Why is that? From our experience, there are two main reasons why programmers don’t write tests: ignorance and arrogance. Ignorance, the unwitting kind, is most often found with programmers who don’t come from a development background where test-oriented or test-driven development are commonplace, and who aren’t familiar with the benefits of testing methodologies (unfortunately, mobile application developers often fall into that category). Even if they’re familiar with the concepts, they often don’t cherish the value of software tests, and therefore perceive writing tests as cumbersome, something “you know you should be doing but aren’t in the mood for just now.” After all, you need to get that milestone done and report progress to your senior, so why waste time writing tests, right?

The second cause is arrogance. We programmers are proud animals. Our own code is always faster, more clever, more stable, and more beautiful than everyone else’s is. Most importantly, it’s entirely bug free. So why write code that tests your own code, which presumably is already perfect? Or, so you think.

This boils down to the central question: why bother? Because it will pay off at some point, maybe earlier than you think. With a proper test suite in place, you can employ a build server to run the test suite after every commit, which allows you to discover regressions early on during development, and fixing things earlier rather than later will ultimately raise the quality of your product. Some people go further and practice test-driven development (TDD), where a test is written before the code that it’s testing even exists. That way you start with the functional requirements (the specification, or contract) by formulating them as a test case that initially fails, and you write and change the code under test until the test succeeds. This allows you to derive your application logic and even interfaces from your test suite, which defines how the application should work. Any deviation from that behavior introduced by a subsequent commit will then be automatically detected the next time you run the test suite (as mentioned, build servers can do that automatically for you, as we’ll see in chapter 14).

Yes, writing tests takes time and effort. You have to constantly think about the two rights: writing the right tests and having them test the right things. You could rephrase that as making the correct decision about what should be tested, and making sure your tests correctly reflect the requirements. Green lights on the build server will give you comfort. But let’s face it, they can be deceptive!

In any case, we highly encourage you to adopt some form of developer testing. Once it becomes routine, you’ll be uncomfortable adding another piece to your software without backing it with a test. Comfort can be a key driver behind writing tests. Whenever you find yourself committing code that breaks the build, don’t feel guilty. Instead, feel comforted, because without the help of that test, the bug might have slipped into production. Don’t be ignorant. Don’t be arrogant. Good developers write tests.

But this book is about Android, not about testing. If you still aren’t convinced that writing tests is a good thing, we encourage you to get your hands on some books about testing and TDD. Lots of them are out there, not least because Agile development models (which heavily advertise testing practices) have become wildly popular over the last decade.

Leaving our motivation behind, here’s what this chapter has to offer: the first section will lay some fundamentals by answering the basic “whats” and “hows” of Android testing. We’ll then focus on the core material around testing: we explain how you can use Android’s instrumentation framework to write user interface tests, and how to make your tests beautiful and expressive with the help of domain-specific languages (DSLs). Toward the end of the chapter, we’ll finally take you to the more advanced levels of testing. You’ll learn how to use mock objects to achieve a higher decoupling of your tests, how to take alternative paths to Android UI testing, and even how to stress-test applications using UI exercisers.

You may have noticed from the table of contents that this chapter is longer than usual, but rest assured that this isn’t because we plan to get you entangled in too much detail. Instead, we think that testing on Android is an area that doesn’t get enough attention. When putting this chapter together, we wanted to give you an in-depth understanding of the matter and not just reiterate what can be found in the official documentation (at http://mng.bz/TM2V). So prepare for a long ride, but we promise, it’ll be worth your while.

13.1. Testing the Android

This section will start by giving you a bird’s eye view of Android’s framework capabilities. It’ll specifically answer the following questions: what kind of tests can you write for an Android application? How do you set up a test suite for your project? How are tests on Android implemented? After laying some groundwork by answering the first question, we’ll show you how to set up a test project for your application in Eclipse, how tests are structured and executed, and then wrap things up the “In Practice” way by going head-first into our first technique, which is writing a simple unit test for an Android Application class.

13.1.1. Ways to test in Android

There are many forms of software testing and many ways to classify tests. This being a book about programming, we’ll focus on developer tests—tests that you, the programmer, write and execute. Android supports two such kinds of tests, unit tests and functional tests, and two ways of running them—in the Java virtual machine, or in the emulator or device.

Unit Tests

Unit testing focuses on testing a specific code unit in isolation, usually a class. Ideally, a unit test tests only the behavior of the unit under test, while isolating any dependent or depending code units. This keeps the test focused and rules out unwanted side effects induced by code you’re not currently testing. We can almost hear you scream for an example. Here’s one. In the DealDroid application from chapter 2, we have two activities: the DealList, which creates the main screen of the application, and the DealDetails, which is the screen you see when clicking a deal in the DealList. The DealDetails Activity can only be launched from the DealList Activity, so DealDetails depends on DealList. When writing a unit test for either class, you should test only those features that are inherent to that class—for DealList, the display of deal items, and for DealDetails, the display of deal information. If you don’t obey this rule, then you may see a test for DealList fail when the cause was actually a flaw in the DealDetails class, or vice versa. Isolated testing using unit tests is a good practice. This is illustrated in figure 13.1.

Figure 13.1. Unit tests are about focusing on a single entity while isolating it from other code units. Here we have two units, DealList and DealDetails, which although closely related, are tested in isolation.

Unfortunately, isolating code units from each other can be surprisingly difficult to do. Mock objects can help (we’ll learn what mocks are and how to use them in section 13.3), but for classes to be properly testable using unit tests, you should think about this beforehand, ideally while writing the class. TDD can help by designing classes that are loosely coupled and therefore can easily be isolated in tests. Entire books can be written on how to achieve that, but let’s not digress too far here. Note that we’ll write a unit test for DealDetails in technique 75, so sit tight: a coding exercise is on its way.

Functional Tests

We use the term functional test here because Android uses this term for a specific kind of test in its documentation. To be frank, we think it’s misleading, because a unit test can also be a functional test, but one that tests an isolated piece of functionality. In general, any test that asserts that certain functionality behaves correctly with respect to a given specification is a functional test, as opposed to a nonfunctional test, which tests nonfunctional software properties such as speed or scalability. What Android calls a “functional test,” we’d personally refer to as a story test, since Android’s functional tests are mostly used for implementing user stories as tests.

That said, functional tests on Android allow you to test the behavior of your application across several code units (typically activities), and thus do full end-to-end tests of your application. In that regard, they’re the opposite of unit tests, since the code units that are part of the test don’t run in isolation, but interact with each other. This way of testing is illustrated in figure 13.2.

Figure 13.2. Functional tests on Android allow you to test scenarios involving several code units, such as transitioning from one screen to another when clicking a list item.

Testing your applications using functional tests is powerful, because it allows you to translate your user stories directly to a test suite, which can then verify that the application operates as expected. If you still don’t think that’s a compelling argument, it’s also fun to watch the screens automatically fly by while the Android emulator executes one of your functional tests!

Besides the classification into unit and functional tests, it should be said that there are also two fundamentally different ways to run tests: the standard Java way, and via Android’s instrumentation infrastructure. Both have their pros and cons, which may affect the way you lay out your test projects, so let’s quickly go through the differences.

Testing the Java Way

Running tests the standard Java way means running a test on a standard JVM (not on Dalvik), as any ordinary Java application would be. This has both benefits and drawbacks. The benefits are:

  • Speed: Tests run more quickly than instrumentation tests, since the test code doesn’t have to be deployed to the emulator or device first.
  • Flexibility: Because you don’t rely on the Android runtime, you’re free to use any testing framework you like, such as JUnit 4 or TestNG (we’ll explore JUnit on Android in a second).
  • Mock objects: You can make full use of sophisticated object mock libraries, even those relying on byte code manipulation, which wouldn’t work on Dalvik.

Though this is all nice, the major downside of executing test code on a standard JVM is that you will not have access to any Android framework classes. All methods in the android.jar file against which your application (and your test code) is compiled will throw a RuntimeException. (See the sidebar “Help, I’m a stub!” for a more in-depth explanation of this behavior.) Without further effort, this makes running tests that directly or indirectly use Android framework classes on a standard JVM impossible, because any such test would terminate with a runtime error. We’ll show you in technique 79 how you can work around this with the help of some excellent third-party libraries. Still, running tests on a JVM is perfectly reasonable when testing code that isn’t bound to any framework classes. An example would be a random number generator you’ve written.
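For such framework-free code, a plain JUnit 3 test that runs on your desktop JVM might look like the following sketch (the DiceRoller class is hypothetical and only serves as an example of code with no Android dependencies):

import junit.framework.TestCase;

public class DiceRollerTest extends TestCase {

   // DiceRoller is a hypothetical helper class with no Android dependencies,
   // so this test can run on a plain JVM without an emulator or device
   public void testRollStaysWithinBounds() {
      DiceRoller roller = new DiceRoller();
      for (int i = 0; i < 1000; i++) {
         int value = roller.roll();
         assertTrue("roll out of range: " + value, value >= 1 && value <= 6);
      }
   }
}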

 

Help, I’m a stub!

You may have noticed that the android.jar library file linked to your Android projects doesn’t actually contain the Android framework code. If you open it and look at the class files, you’ll notice that every single method is implemented as:

throw new RuntimeException("Stub!");

That doesn’t seem helpful, so what’s the deal? The reason is simple: Android applications run on a device or the emulator, where the runtime library is already provided as part of the system image. The android.jar file you see in Eclipse will not be distributed with your applications. In order to compile an Android application, the compiler doesn’t need access to method bodies, just type signatures, public members, and so forth. In return, this means that by removing the actual implementation, the size of the JAR file bundled with the SDK is reduced significantly while still ensuring that your application will only use classes and interfaces that will exist on a device running the same version of Android.

 

Testing the Android Way

The second (and from an Android point of view, preferred) way to run tests is to run them directly on the device. This requires a lot of work behind the scenes, because the application and test code must first be deployed to the device, making this a much slower approach. On the other hand, your tests will have full access to the Android platform functions, as any ordinary Android application does. This can be a major benefit, because it allows you to run virtually any test you like, be it framework dependent or not. The drawbacks that you should be aware of are:

  • Slow execution: Deploying tests to the emulator or device is slow, making this a less than ideal approach for TDD, where you want quick turnaround.
  • Technology lock-in: Running on Dalvik means you’re much less flexible in terms of testing libraries, since Android only supports the somewhat aged JUnit 3, and mock libraries designed around runtime byte-code manipulation on a JVM won’t work.

Still, you’ll likely write most tests using this approach, because it requires less fighting against the framework. It’s also the way Google’s Android team writes tests, and it’s well suited to writing UI tests (tests that involve activities).

13.1.2. Organizing tests

Now that we know what kind of tests we can write, the next step is to create an environment in which we can keep our tests. The preferred way to organize your tests is to keep them separate from your application project. That way you can isolate test code from production code and, as a result, avoid deploying your test code with your application (Android bundles all source code in your project folder into the APK, including any test code). This involves creating a separate test project that lives next to your application project in Eclipse, whose name by convention should end in *Test, for example, DealDroidTest for the DealDroid project.

 

Grab the Project: DealDroidTest

You can get the source code for this project at the Android in Practice code website. Because some code listings here are shortened to focus on specific concepts, we recommend that you download the complete source code and follow along within Eclipse (or your favorite IDE or text editor).

Note that for the purpose of this chapter, we’ve created a branch of the DealDroid that changes the visibility of a few class members so that a test case can access them, and also introduces a new export feature that we’re going to test. (Because this chapter focuses on test code, and not applications, there will be few APK files to download.)

Source: http://mng.bz/Zk0O

 

The ADT plugin for Eclipse gives you a hand by providing a special wizard for creating test projects, which you can reach from the Eclipse menu via File > New > Other and selecting Android Test Project under the Android category, as seen in figure 13.3. Let’s do this and create a test project for the DealDroid application.

Figure 13.3. In order to create a new Android test project, you can use the wizard via File > New > Other and selecting Android Test Project.

After clicking Next, you’ll see the project settings form for our new test project. Apart from the standard set of project settings, such as name and workspace location, you’ll find a new setting specific to test projects: the test target. This will be our DealDroid application, so we select it from the file browser. You’ll also notice how the wizard puts the test code into the test package with the package name of the application that we’re testing as the parent package. Figure 13.4 shows how the filled-in wizard form looks for the DealDroidTest project.

Figure 13.4. The ADT wizard to create test projects sets a few defaults, such as the Java package name, for you. Make sure that the test project uses the same build target as the application you’re testing.

Click Finish, and you’ll find the new test project in your workspace. Looking at the project, you’ll see that this is an ordinary Android project, so what’s the deal? The differences are marginal, and it should be stressed that you could arrive at the same result by going through the standard Android project wizard (or the android create project command) and doing a few things manually that the test project wizard does for you. These are:

  • Adding the application under test to both the build path and the project references
  • Setting up the manifest so that Android knows that this project should be run as a test application

The second point is the interesting one. Let’s look at the AndroidManifest.xml file that the ADT test project wizard generated for us:

<?xml version="1.0" encoding="utf-8"?>
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
      package="com.manning.aip.dealdroid.test" ...>
    <application>
       <uses-library android:name="android.test.runner" />
    </application>
    <uses-sdk android:minSdkVersion="4" />
    <instrumentation android:targetPackage="com.manning.aip.dealdroid"
       android:name="android.test.InstrumentationTestRunner" />
</manifest>

As you can see, we’re linking against a shared library (android.test.runner). That’s because we’re defining an instrumentation, a test runner defined in that shared library, which will be used to actually run our tests. We’ll learn what instrumentation is in section 13.2; for now just note that it’s there and that it’s responsible for executing our test suite when launching this test application on a device or emulator.

Apart from these differences, the project structure is identical to that of an ordinary Android project. Since a test project is an application of its own, the test code becomes the main source code and hence goes straight into the src folder, where you’re free to break it down into smaller subpackages. If you intend to write “classic” tests that will run on the JVM in addition to normal Android tests, it probably makes sense to put them into a different package or source folder, so that the distinction is clear to anyone looking at your test project structure.

13.1.3. Writing and running tests

As you may have noticed, our test project is still empty. Let’s change that and write a simple test. Android comes bundled with the JUnit 3 unit testing framework, so typically, all tests are written as JUnit 3 tests. For tests you don’t intend to run on a device, you’re free to link in whatever testing framework you like as a JAR, but tests running on the device should use the JUnit classes bundled with Android, because all of the special test classes Android defines are derived from a JUnit 3 TestCase. We won’t explore the JUnit testing framework in depth here, but only cover the basics.

In JUnit, you bundle tests into test cases. Each test case contains the tests you’d like to perform against the unit under test (typically a class from your main application). You do so by inheriting from junit.framework.TestCase, and putting all tests into that class, like so:

import junit.framework.TestCase;

public class MyTestCase extends TestCase {

   public void testTruth() {
      assertTrue(true);
   }

}

This test case contains a single test that ensures that true evaluates to true. That’s a fairly useless test, since it’ll always succeed, but it serves our purpose of explaining the anatomy of a JUnit test case. Not every method defined in a test case needs to be a test; in fact, only those methods prefixed with test* (as in testTruth) will be executed as tests at runtime. JUnit identifies these at runtime using reflection. Any other method is an ordinary method and won’t be called unless you explicitly call it. In order to make assertions in tests, JUnit provides helper methods starting with assert* (as in assertTrue). JUnit provides plenty of them already, such as assertNotNull, assertEquals, and so forth, and Android adds a few more in the MoreAsserts helper class. Using these simple building blocks, you can set up your entire test suite.
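As a small illustration of these building blocks, here’s a sketch of a test that mixes a plain JUnit assertion with one of Android’s additions from MoreAsserts (the list under test is made up purely for the example):

import java.util.ArrayList;
import java.util.List;

import android.test.MoreAsserts;
import junit.framework.TestCase;

public class ShoppingCartTestCase extends TestCase {

   public void testNewCartStartsEmpty() {
      // made-up data under test; only the assertion styles matter here
      List<String> items = new ArrayList<String>();
      assertEquals(0, items.size());      // plain JUnit
      MoreAsserts.assertEmpty(items);     // Android's MoreAsserts
   }
}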

So how are these tests executed? As mentioned earlier, there are two ways: as an ordinary Java test (on a JVM), or as an Android test (on a device). If you right-click your test project (or any test class open in the current Eclipse editor), you can choose to Run As > JUnit Test or Run As > Android JUnit Test. The preceding snippet doesn’t rely on any Android-specific framework code, so either will do fine. For the remainder of this chapter, we always assume that tests are run the “Android way,” on the device (unless we explicitly say otherwise). The outcome of a test is displayed in the Eclipse JUnit view, which will open automatically when running tests. It looks something like figure 13.5.

Figure 13.5. The standard JUnit test result view in Eclipse. Here you can see which tests were run, and which ones succeeded, failed, or exited in error.

That covers the basics of testing. If we did a good job at writing this introduction, then you know why you should write tests, what kinds of tests you can write, how to set up test projects, and even how to write and run simple JUnit tests. Hopefully you’re curious for more! Let’s proceed and write a real-world practical test. Since we’ve already set up a test project for it, the DealDroid application will be the target of our tests for the remainder of this chapter. We start with technique 74, writing a simple application unit test using Android’s ApplicationTestCase class.

Technique 74: A simple Android unit test

Roll up your sleeves, it’s time to write our first proper test on Android. So you’ve learned that every Android test case is basically a JUnit TestCase. But Android adds some functionality on top of JUnit and provides different flavors of test cases by plugging its own test case class hierarchy underneath the JUnit TestCase base class. Look at figure 13.6, which shows how Android structures its test cases into different kinds.

Figure 13.6. Every kind of test case on Android inherits from a JUnit 3 TestCase. Android further structures tests into those that require access to an instrumentation (right subtree), and those that don’t (left subtree).

For the time being, we’re going to focus on the left branch emerging from TestCase in figure 13.6, since we haven’t yet introduced instrumentation code (we’ll do that in technique 75). As you can see, Android defines three different kinds of test cases here:

  • ApplicationTestCase: Used to test an android.app.Application
  • ServiceTestCase: Used to test an android.app.Service
  • ProviderTestCase2: Used to test an android.content.ContentProvider (a refactoring of the older, now deprecated ProviderTestCase)

These inherit from AndroidTestCase, which contains a few helper methods, such as custom assertions to test for certain permissions being set, or even a method to inject custom Context objects, the latter of which is going to become important when learning about mock objects in section 13.3. The difference from InstrumentationTestCase is that AndroidTestCase doesn’t expose access to the Instrumentation, and we’ll see in section 13.2 what that means. In order to unit test the application object, services, and content providers, this isn’t required, so we’re good to go.

Problem

You’re writing a custom application class, content provider, or service, and you want to test it in a controlled way (isolated from other components in your application) using a unit test.

Solution

Although ApplicationTestCase, ServiceTestCase, and ProviderTestCase2 all have special helper methods tailored toward the specific kind of object they’re testing, working with them is largely similar, so we’re only going to look at one of them more closely as an example—ApplicationTestCase. Recall that the DealDroid defines its own application class (DealDroidApp) by deriving from android.app.Application. The application class is where you should put logic and settings that affect the entire application, so it’s always a good idea to have a few tests in place that make sure everything is configured and working the way it should be.

Looking again at the DealDroidApp class, it seems that it’s performing initialization logic in onCreate:

public class DealDroidApp extends Application {
   ...

   @Override
   public void onCreate() {
      this.cMgr = (ConnectivityManager)
           this.getSystemService(Context.CONNECTIVITY_SERVICE);
      this.parser = new DailyDealsXmlPullFeedParser();
      this.sectionList = new ArrayList<Section>(6);
      this.imageCache = new HashMap<Long, Bitmap>();
   }
   ...
}

This code tells us that the remainder of this class relies on certain objects to be fully initialized after onCreate has been called. This sounds like a good candidate for a test, so that we can always remain sure that this is the case. Moreover, we also want to add a test that makes sure the application remains properly configured with respect to its application icon and versioning scheme. More precisely, we want to assert that the application icon remains unchanged from the ddicon image in res/drawable, and that a developer working on DealDroid never uses a versioning scheme other than n.m, where n and m are digits (such as 1.0). Let’s express these requirements as a test case.

Listing 13.1. ApplicationTestCase can be used to test application classes
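The annotated listing isn’t reproduced here, but based on the discussion that follows, a sketch of it might look roughly like this (the accessor names, the third test’s name, and the exact regular expression are assumptions; the downloadable project contains the real thing):

import android.test.ApplicationTestCase;
import android.test.MoreAsserts;

import com.manning.aip.dealdroid.DealDroidApp;
import com.manning.aip.dealdroid.R;

public class DealDroidAppTest extends ApplicationTestCase<DealDroidApp> {

   private DealDroidApp app;

   public DealDroidAppTest() {
      super(DealDroidApp.class);
   }

   @Override
   protected void setUp() throws Exception {
      super.setUp();
      createApplication();   // triggers DealDroidApp.onCreate
      app = getApplication();
   }

   public void testShouldInitializeInstances() {
      assertNotNull(app.getSectionList());
      assertNotNull(app.getImageCache());
      assertNotNull(app.getParser());
   }

   public void testShouldStartWithEmptySections() {
      assertTrue(app.getSectionList().isEmpty());
   }

   public void testApplicationConfiguration() throws Exception {
      // the application icon must remain the ddicon drawable
      assertEquals(R.drawable.ddicon, app.getApplicationInfo().icon);
      // the version name must follow the n.m scheme, such as 1.0
      String versionName = app.getPackageManager()
            .getPackageInfo(app.getPackageName(), 0).versionName;
      MoreAsserts.assertMatchesRegex("\\d\\.\\d", versionName);
   }
}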

Let’s review what we’ve done here. We’ve created an ordinary class, but let it inherit from ApplicationTestCase. It’s a generic class, and we must pass it the application type we want to test. ApplicationTestCase exposes helpers specific to testing application classes—most importantly, creating a new application instance via createApplication, which will trigger the application’s onCreate handler, and getting a reference to the created application object through getApplication. We do that in the setUp method. setUp is a special test lifecycle hook exposed by JUnit, which is run before every single test method (of which we have three here), so be careful not to perform any overly expensive tasks here. setUp is typically used to load test fixtures or reset and initialize test state. Guarding expensive setup with a check on a static variable lets you effectively run it only once if needed.

 

Warning

Looking at our test code, you may get the idea that DealDroidApp.onCreate is called three times (via setUp, before every test method executes). That is not true: onCreate is called four times. That’s because InstrumentationTestRunner always calls Application.onCreate as part of its startup routine—before your tests run. Keep this in mind if you do things in onCreate that may affect the outcome of your tests, like starting AsyncTasks!

 

We then define three tests. testShouldInitializeInstances asserts that after onCreate is called, all three instances exposed via getters (sectionList, imageCache, and parser) have been fully constructed (aren’t null). The second test, testShouldStartWithEmptySections, makes sure that the application is launched with a clean state, which in this case means that no deals are loaded (the section list is empty). Finally, we test that the application icon is always set to the correct drawable, and that the version name follows the n.m scheme. JUnit doesn’t provide an assertion that can test for regular expressions, but Android fortunately comes with a few custom JUnit assertions, such as the assertMatchesRegex used here, which can all be found in the android.test.MoreAsserts helper class.

Go ahead and run this test case by right-clicking the class and selecting Run As > Android JUnit test. If everything checks out, you should see green bars in the JUnit result view! You may also want to intentionally break the test, just to see what happens. For instance, open the application’s AndroidManifest.xml file and change the android:versionName attribute value from 1.0 to v1.0. Now save the file and run the test again. Whoops! We don’t allow letters in the version name, so our test fails (see figure 13.7).

Figure 13.7. Since we’ve written a test for it, we can now detect changes to the application version name that aren’t allowed by our rules. In that case, JUnit will report an assertion failure in the JUnit result view.

Just to be clear: that the test fails in this case is a good thing—it tells us that our test captures the correct semantics, which in this case means not allowing letters in the version name. Now go back and change the version name to what it was; after all we don’t want to leave the application in a broken state, do we?

Discussion

We’d like to mention one particularly irritating thing about writing tests in JUnit 3, and that’s the order in which tests are executed. Looking at our test case from listing 13.1, we see that the test checking for fully initialized objects (testShouldInitializeInstances) is defined before the test that uses one of these objects (testShouldStartWithEmptySections). This seems to make sense, because we can then assume that whenever we enter the latter test, the former test must’ve passed, right? Wrong! The order in which you define tests in your test case is entirely irrelevant. Unfortunately, JUnit 3 doesn’t guarantee any specific order in which it executes tests, so you should make sure to never rely on order. To alleviate this, define a special test method that contains all those assertions that you believe must pass in order for any other test to pass, and give it a meaningful name such as testPreConditions. That doesn’t mean JUnit will give this specific test method any special treatment, but as soon as this one fails, you can tell that any other failing test may simply be failing because the preconditions weren’t met, and you know where you can start looking for bugs. Google uses this pattern in its test suites for the Android framework.
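In its simplest form, such a method might look like this (a sketch; put into it whatever assertions the rest of your tests silently rely on):

public void testPreConditions() {
   // if this fails, treat failures in the other tests as collateral damage
   // and start your bug hunt here
   assertNotNull(getApplication());
}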

You’ve seen how to write unit tests for a few core objects, but most of what you’ve seen was plain JUnit functionality with a bit of Android helper sugar on top. Nevertheless, everything seen so far is essential for the things to come.

Speaking of things to come: we’ve mentioned Instrumentation a few times, but never provided any details. At this point, you may have thought: JUnit and AndroidTestCase are fine and good, but we haven’t addressed the core issue of testing activities! After all, activities are what we’re spending most of our time with when developing Android applications, and they demand more in terms of test support than being able to trigger their onCreate method. What about clicking buttons? Sending key events? Firing intents or testing layouts? Looks like we’re ready to delve deeper into the Android testing framework. The next section explains how you can test your activities and user interfaces using Android’s instrumentation framework, and how to push the boundaries of test expressiveness using DSLs.

13.2. Pulling strings: Android instrumentation

So far we’ve been testing the invisible parts of your application, those parts that may play a fundamental role, but aren’t seen by the user. AndroidTestCase is sufficient for that: services, content providers, and the application object are background objects, so we don’t need any support for simulating a user interacting with them. But what about activities? We have plenty of user interface interaction here: users click buttons, type text, rotate the screen, scroll lists, and so on. How would we do all that as part of an automated test?

Android’s answer to this is instrumentation. Instead of calling methods and manipulating objects in the scope of an Activity, we take one step back and control the Activity itself; we orchestrate or instrument it. When writing normal application code, you’re confined to the internal interfaces of objects such as activities or services, and you can only react to outside system events in a passive way, for instance via lifecycle hooks such as onCreate or keyboard listeners like onKeyDown. There’s no way to manipulate anything from outside, to take control yourself. Instrumentation means breaking out of that restriction and being able to control activities and services from the outside. Think of puppets on strings: Instrumentation pulls the strings!

That being said, you’ve already seen instrumentation in action in a limited form. In technique 74 you manually created an Application object using ApplicationTestCase.createApplication. That’s instrumentation: under usual conditions, your code can be notified when the application object is created (via Application.onCreate), but it can’t control this event directly. Closely related to this, you may recall from section 13.1 that in a test project’s manifest file, you define an InstrumentationTestRunner, which will execute tests by using instrumentation. You may legitimately ask: if I’ve already seen how to use instrumentation code, what’s left to know about it? Plenty, for two main reasons. The first reason is that all types of automated tests so far inherit from AndroidTestCase, which does not give any access to a full instrumentation environment, unlike InstrumentationTestCase, which we’ll introduce in a moment. The second reason is more subtle. What we haven’t told you so far is that tests executed using an InstrumentationTestRunner aren’t executed on the main application thread, but on a separate instrumentation thread. When testing background objects such as services and content providers, that doesn’t matter; they don’t care on which thread they run. As explained in some detail in chapter 6, this does matter a lot when talking about user interface interaction, since UI events are always processed on the main application thread, and manipulating views outside that thread will fail with an error. Looks like we’d have been stuck if we only had AndroidTestCase, since that class has no means of executing UI actions in a test case and running them on the UI thread.

Technique 75: Unit testing Activities

Being able to steer user interface control flow via instrumentation opens up a new layer of complexity. You must be able to click buttons, enter text, scroll views, or open a menu item. In order to clearly separate tests that need these capabilities from those that don’t, the Android framework exposes a special set of test case classes that you can use whenever you need to write tests that rely on instrumentation. These tests are derived from the aptly named InstrumentationTestCase base class, most notably ActivityTestCase (see figure 13.8). There are more kinds of instrumentation tests than just ActivityTestCase, but they’re rather obscure and less useful than ActivityTestCase, so this is what we’ll focus on here.

Figure 13.8. InstrumentationTestCase should be used whenever tests require access to the instrumentation API. This is essential for anything related to testing Activities, especially for story driven testing via ActivityInstrumentationTestCase2.

ActivityTestCase isn’t exciting by itself, since it just handles some boilerplate code specific to testing activities, which you’d otherwise have to write yourself. The interesting distinction is between ActivityUnitTestCase and ActivityInstrumentationTestCase2. We’re going to postpone explaining the latter until we hit technique 76, and for now focus on ActivityUnitTestCase.

Using ActivityUnitTestCase, you can, unsurprisingly, unit test your activities. As explained in the introduction, unit testing an Activity means testing it in isolation, but what does that mean? Consider an ordinary Android application like our DealDroid. When it launches an Activity or transitions from one screen to the next, the runtime is busy coordinating the many components involved in these interactions: executing their lifecycle hooks (see chapter 3), drawing things on the display, and so forth. That’s the opposite of running in isolation! The problem with this is twofold. First, it’s slow. If you just want to test a single screen, why load or even consider other screens that may be parents of this one or branch off it? It would be more efficient to ignore them for this test. Second, and more importantly, testing in isolation minimizes effects other components may have on the component under test, and helps you focus on testing the expected input and output of your Activity—testing it on an interface level. Once again, let’s try to turn these findings into a concise problem description.

Problem

You want to test intrinsic properties of an Activity, and don’t want it to communicate with other platform components. This implies sacrificing its execution in a fully set up runtime environment for a significant gain in test case speed.

Solution

If your intention is to test things like “Given input X coming in with an Intent, my Activity should do Y,” or “After creating my Activity, views A and B should exist and be fully initialized,” then ActivityUnitTestCase is a perfect match. As mentioned earlier, tests defined in this kind of test case will be run “detached” from the actual system, so as to minimize dependencies on other components.

 

Remember...

Again, this doesn’t mean that your application won’t be started as part of the test run. As explained in the previous technique, the InstrumentationTestRunner will always start your application by calling its onCreate method.

 

Android leverages its instrumentation capabilities to run your Activity in a controlled way, entirely decoupled from everything else. Note that this also means that it will not go through the normal runtime lifecycle; only its onCreate method will be called when started in a unit test (more on that in a second). This is ideally suited for testing an Activity’s internal state, such as whether its views are set up correctly or what should happen at its interfaces. For instance, you could run a test that states that if the Activity isn’t started using a specific kind of Intent, it’ll output an error. That would be a test for correct input. Additionally, you could test that it constructs the correct Intent to launch another Activity (but without actually launching that other Activity as part of the test!). That would be a test for correct output. Most likely, you’ll test that its layout and views are correctly set up.

Let’s write a unit test for the DealDetails Activity. This Activity is well-suited for unit testing: based on the currently selected item, it displays information about that item on the screen. It also allows us to open the Android browser, in order to load the item’s detail page on the eBay website. Codifying these features into assertions, we may arrive at something like this (note that we’re ignoring some of the views in the DealDetails to keep the example compact).

Listing 13.2. ActivityUnitTestCase allows you to unit test a single screen
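Again, the annotated listing isn’t shown here; the following sketch approximates it based on the discussion below. The view IDs, the Item setters, and the way the dummy item reaches the Activity (through the injected application instance) are assumptions, so expect the downloadable project to differ in detail.

import android.content.Intent;
import android.test.ActivityUnitTestCase;
import android.widget.TextView;

import com.manning.aip.dealdroid.DealDetails;
import com.manning.aip.dealdroid.DealDroidApp;
import com.manning.aip.dealdroid.R;
import com.manning.aip.dealdroid.model.Item;

public class DealDetailsTest extends ActivityUnitTestCase<DealDetails> {

   private Item item;
   private Intent startIntent;

   public DealDetailsTest() {
      super(DealDetails.class);
   }

   @Override
   protected void setUp() throws Exception {
      super.setUp();
      // test fixture: a dummy deal item (the setters are assumed here)
      item = new Item();
      item.setTitle("Acme Anvil");
      item.setDealUrl("http://deals.ebay.com/anvil");

      // inject an application instance that carries the current item
      DealDroidApp app = new DealDroidApp();
      app.setCurrentItem(item);
      setApplication(app);

      startIntent = new Intent(getInstrumentation().getTargetContext(),
            DealDetails.class);
   }

   public void testPreConditions() {
      startActivity(startIntent, null, null);
      assertNotNull(getActivity().findViewById(R.id.details_title));
   }

   public void testThatAllFieldsAreSetCorrectly() {
      startActivity(startIntent, null, null);
      assertEquals(item.getTitle(), getViewText(R.id.details_title));
   }

   public void testThatItemCanBeDisplayedInBrowser() {
      startActivity(startIntent, null, null);
      getInstrumentation().invokeMenuActionSync(getActivity(),
            DealDetails.MENU_BROWSE, 0);
      Intent browseIntent = getStartedActivityIntent();
      assertNotNull(browseIntent);
      assertEquals(Intent.ACTION_VIEW, browseIntent.getAction());
      assertEquals(item.getDealUrl(), browseIntent.getData().toString());
   }

   private String getViewText(int viewId) {
      return ((TextView) getActivity().findViewById(viewId)).getText().toString();
   }
}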

As seen in the previous technique, we use the setUp method to initialize our tests. In this case, we’re setting up a test fixture—a dummy deal item holding the data that we feed to the Activity. Moreover, we use setApplication to inject a custom application instance.

Next are the actual test methods. We use the testPreConditions pattern again to have a separate test expressing that we need the given views to be valid before other tests can succeed. In testThatAllFieldsAreSetCorrectly, we then make sure that given the dummy item we set up before, the respective views are actually showing that item’s data on the screen (the getViewText method is a helper we defined to easily read the text from a TextView). Now it gets more interesting. In testThatItemCanBeDisplayedInBrowser, we test that pressing the menu button with ID MENU_BROWSE will fire an Intent to view the current deal item via its deal URL. To achieve that, we leverage instrumentation to programmatically invoke a menu action using invokeMenuActionSync, and then call getStartedActivityIntent to check whether that triggered the Intent we expected. It’s crucial to understand that we’re not actually opening the menu, clicking the button, and launching a web browser here. If that happened, this wouldn’t be a unit test, but an integration test as part of a user story. Instead, this test code makes sure that if someone were to click that menu item on their device, then an Intent of kind ACTION_VIEW carrying the item’s deal URL in its data field would be emitted. That’s all.

We haven’t explained the first line of code in each of these test methods: the call to startActivity. This will actually use instrumentation to mimic a launch of our DealDetails Activity, without really starting it. It’s an implicit call to its onCreate method, and it won’t call any other lifecycle hooks that are involved in a full Activity launch, such as onResume, onStart, and so on. If you need those methods to be called, use the getInstrumentation().callActivityOn* helper methods.

Moreover, you must call startActivity in each of your test methods; otherwise a call to getActivity to retrieve the current activity instance will return null. You have to specify the Intent used to simulate the launch yourself: this is an example of what we mean when we say that ActivityUnitTestCase runs things in a controlled way. You can even inject your own context; in this case, we use the getTargetContext helper method that constructs a normal Android Context instance for us.

Discussion

As you can imagine, instrumenting your activities is a powerful way to test. The key player here is the Instrumentation class, an instance of which is accessible from every InstrumentationTestCase via the getInstrumentation() accessor. We won’t list every method it provides, but know that it allows you to start and stop activities, send key events, run actions on the main application thread, and so forth. In fact, you’ll meet Instrumentation and a few of its more advanced features in the next technique.

Though Activity unit testing is a great way of testing things in isolation, sometimes you want to literally see the whole story, not just isolated fragments. In particular, wouldn’t it be great if you could run entire user flows spanning several screens as part of a test? It sure would be, and it’s possible to do that using the not-so-well-named ActivityInstrumentationTestCase2.

Technique 76: User stories as functional tests

As mentioned in the introduction, Android supports not only unit tests, but also what Android calls a functional test. To recap quickly: a functional test allows you to test your application (or a single component, if you like) in a fully functional runtime environment, just as if you were running the application yourself. This is fundamentally different from what we’ve seen before, where tests were run in a controlled environment, isolated from the rest of the system. In a functional test we’re allowed to cross the boundary of one Activity to launch another and continue testing that new Activity, so we can now directly map user stories to a test suite and run full end-to-end tests of our applications.

Consider again our DealDroid application. Naturally, before we coded the DealDroid, the first step was to lay out the functional requirements for the application—the set of features it must support. One way to formulate functional requirements is creating user stories, where every feature requirement is written down as a single sentence, capturing compactly what the software must accomplish. Here’s an example:

As a user, I want to get a list of deals and see detailed information about them.

If you want to be picky, you could argue that this could be broken down into two user stories (get a list of deals (1), and given a deal, see detailed information about it (2)), but this serves us well enough for our example. Fortunately, this story has already been implemented for the DealDroid. The landing screen presents us with a selection of eBay offers, and clicking on one will open a new window with more detailed information about an item (see figure 13.9).

Figure 13.9. The DealDroid application as introduced in chapter 2. The user can select from a list of deals (left image) and get more information about them (right image).

What hasn’t been implemented yet is a test case that asserts in a programmatic fashion that our implementation works. Before writing the test, we must first identify the steps the user has to take in order to reach the deal details. These are:

1.  Start the application.

2.  Wait for the deal list to load.

3.  Click on a deal to see the deal details.

Starting the application means starting the DealList Activity, since that’s our landing screen. If we want to test the entire flow, we therefore need to test the transition from the DealList to the DealDetails Activity. The crux of the matter is that we can’t use ActivityUnitTestCase anymore, since it doesn’t allow us to interact with any Activity other than the one under test.

Problem

You want to run full end-to-end tests, so you can test the flows your user can take through your application. To make that happen, you need a test case that is executed in a fully functional system environment.

Solution

We can realize our test scenario using ActivityInstrumentationTestCase2. The key difference between this class and ActivityUnitTestCase is that any test methods will be executed using the full system infrastructure. This allows you to simulate a user interacting with your application, such as pushing a button to open a new screen. This approach has the curious side effect that you can follow your tests being executed on the device or emulator, since you’ll see all interactions happening live on the screen!

 

Note

Instrumentation tests like the ones discussed here are executed using a real application environment, and any call to Activity.getApplication will return the same application instance, even across several test cases. For story tests, this is usually what you want, but if not, bear in mind that you’ll have to reset your application’s state manually before running those tests.

 

These seemingly ghostly interactions aren’t ghostly at all, but can be attributed to our old friend Instrumentation. Though an ActivityUnitTestCase was also powered by Instrumentation behind the scenes, its power couldn’t be unleashed due to the focus on a single code unit. This time around, we’ll look at some features Instrumentation exposes that enable us to do such useful things as:

  • Create and inject custom Activity or Application objects
  • Invoke Activity lifecycle methods directly
  • Monitor whether an Activity has been launched in response to actions such as button clicks
  • Dispatch key events
  • Manually execute code on the UI thread
  • Use helpers that let the test sleep until the application is idle

Coming back to the user story we want to test, there are some indispensable things on that list we’d want to use. If you’ve run the DealDroid yourself, you’ll have noticed that when opening the DealList, the application displays a progress dialog while loading data from the eBay web service. Until that dialog disappears, the UI is blocked, so we need to wait for that to happen. Moreover, we must then programmatically click a list item and assert that this will result in the DealDetails Activity being launched. Let’s see how Instrumentation allows us to do that, in the next listing.

Listing 13.3. ActivityInstrumentationTestCase2 can test flows through the application
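The annotated listing isn’t reproduced here; the sketch below follows the walkthrough in the next paragraphs. How the test obtains the running ParseFeedTask (the getParseFeedTask accessor) is an assumption made for the sketch.

import android.app.Instrumentation;
import android.app.Instrumentation.ActivityMonitor;
import android.test.ActivityInstrumentationTestCase2;
import android.test.TouchUtils;
import android.view.View;

import com.manning.aip.dealdroid.DealDetails;
import com.manning.aip.dealdroid.DealList;

public class DealListToDetailsTest
      extends ActivityInstrumentationTestCase2<DealList> {

   public DealListToDetailsTest() {
      super("com.manning.aip.dealdroid", DealList.class);
   }

   public void testDealListToDetailsUserFlow() throws Exception {
      Instrumentation instr = getInstrumentation();
      DealList dealList = getActivity();   // launches DealList for real

      // block until the feed has been parsed and the deal list is populated
      // (assumes DealList exposes its ParseFeedTask to the test)
      dealList.getParseFeedTask().waitAndUpdate();
      instr.waitForIdleSync();

      // watch for DealDetails being launched; no result, nonblocking
      ActivityMonitor monitor =
            instr.addMonitor(DealDetails.class.getName(), null, false);

      // click the first deal in the list (clickView waits for idle sync itself)
      View firstItem = dealList.getListView().getChildAt(0);
      TouchUtils.clickView(this, firstItem);

      // the transition to DealDetails should have happened exactly once
      assertEquals(1, monitor.getHits());

      instr.removeMonitor(monitor);
   }
}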

We’ve defined a single test here, testDealListToDetailsUserFlow, which implements our user story. We start by storing a reference to the Instrumentation instance powering our test case and the DealList Activity. Note that ActivityInstrumentationTestCase2’s getActivity method will first check whether the Activity has already been started, and start it if not. Unlike an Activity unit test, this will call all lifecycle hooks and properly launch the Activity.

As mentioned before, one tricky aspect is that as soon as DealList starts, it’ll fetch item data from the Web in a worker thread. You can stub out or proxy that task during test execution, but for a full end-to-end test, you may want to keep it. This means we have to block the instrumentation thread (the thread of the test runner) until that task has finished. That’s exactly what AsyncTask.get is supposed to do, but in practice this method hasn’t proven to be reliable, since it sometimes doesn’t trigger onPostExecute. With several test cases testing different activities in different ways, UI processing can sometimes become flaky in instrumentation tests. That’s why we’ve added the waitAndUpdate helper to the ParseFeedTask, which ensures that the post-execution handler is called:

public List<Section> waitAndUpdate() throws Exception {
   // block until doInBackground has finished and its result is available
   final List<Section> sections = this.get();
   // if onPostExecute hasn't run yet, invoke it manually on the main thread
   Handler handler = new Handler(Looper.getMainLooper());
   handler.post(new Runnable() {
      public void run() {
         if (!getStatus().equals(Status.FINISHED)) {
            onPostExecute(sections);
         }
      }
   });
   return sections;
}

It’s also time for Instrumentation to enter the stage: by calling Instrumentation.waitForIdleSync we make sure that the UI thread has settled (has stopped processing UI events). In particular, this ensures that the progress dialog has disappeared, the item list has been updated, and the Activity is now in an idle state in which you can interact with it. When writing an ActivityInstrumentationTestCase2 you must always keep in mind that the application is tested in a natural environment, which means that often we must ensure that everything has settled down before advancing to the next step or assertion, just as we would as a normal user of the application.

At this point, we know that the DealList activity is showing a list of deals, so we can click one. Before performing the click using the TouchUtils.clickView helper, we must first tell the Instrumentation which Activity we expect to be started after clicking that view. We do that by registering an ActivityMonitor. An ActivityMonitor is a synchronization primitive (a monitor, as the name suggests), which we can use to watch for an Activity being started. In our case, we don’t expect a result from the Activity and we want it to be a nonblocking monitor, so we pass it the null and false arguments.

 

Instrumentation Object Mutability

One thing you should always keep in mind is that the Instrumentation instance returned by getInstrumentation doesn’t change across several test cases. It’s started once and is then used throughout the entire test suite. This means that any modifications you make to it, such as adding monitors, will be visible to all tests in the same suite, not just the one in which you made the call. This can be a common source of error, for example, when you forget to remove an ActivityMonitor after adding it. A good place to clean up any changes to Instrumentation is a test case’s tearDown method, which is called after every test method.

 

With the monitor registered, we can now click a list item and wait for the system to settle (TouchUtils.clickView calls waitForIdleSync, so we don’t have to do that manually here). We can now finally check whether the DealDetails Activity hit the monitor—whether it was started. We expect it to be hit only once, hence we pass 1 here. Remember to always pair calls to addMonitor and removeMonitor. If you add a monitor and forget to remove it again, it’ll stick around for an entire test suite run and can have unwanted side effects on other tests.
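If you prefer the sidebar’s advice of cleaning up in tearDown rather than at the end of the test method, a sketch might look like this (assuming the monitor is stored in a field of the test case):

@Override
protected void tearDown() throws Exception {
   // undo changes to the shared Instrumentation instance so they
   // don't leak into other tests in the suite
   if (monitor != null) {
      getInstrumentation().removeMonitor(monitor);
      monitor = null;
   }
   super.tearDown();
}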

Discussion

One thing to watch for when writing instrumentation tests is running all UI-related actions on the UI thread (we’ve explained that in earlier chapters in some detail), and only there. The problem with that is that the instrumentation test runs on its own thread, so we’re not allowed to do anything that manipulates the user interface, not even something as simple as a button press. Wait, we just did that: we clicked a list item using TouchUtils.clickView, so surely it must work? The obvious answer is that this helper hides this technical detail from us. Without going through that helper method, we could’ve rewritten the list item click as:

instr.runOnMainSync(new Runnable() {
   public void run() {
      View firstItem = dealList.getListView().getChildAt(0);
      firstItem.performClick();
   }
});

Instrumentation.runOnMainSync will block until the UI thread is ready to process messages, and then invoke the given Runnable on it. That way, you can make sure that actions involving views are put where they belong: on the main application thread!

It’s easy to see how powerful this way of testing is. You can pour all your user flows into test cases that use ActivityInstrumentationTestCase2 and execute them all in sequence, simulating every possible path a user can take through your application and asserting that everything works as expected along the way. This becomes particularly powerful in combination with build automation, as we’ll see in the next chapter.

One thing that may have struck you about the tests we’ve written so far is the gross inelegance of the syntax and functions being used. Think about it for a minute. If we hadn’t talked about things such as activity monitors and waitForIdleSync, would you understand what this code is actually testing? We’ve dealt with a lot of boilerplate code here, such as explicitly waiting for the main application thread to become idle. Moreover, the assertion that the DealDetails Activity was started is spread over three lines of code (defining the monitor, waiting for idle sync, checking the monitor), and having to cope with synchronization primitives to do so is also not desirable. Being the esthetes we are, there’s only one answer to this: We can do better than that!

Technique 77: Beautiful tests with Robotium

Android excels in many areas, but its test framework API isn’t one of them. There are two golden rules a good test framework must obey: it should make writing tests as easy as possible, and maybe more importantly, it should make reading tests as easy as possible. If tests are difficult to write, developers will refrain from doing so. If tests are difficult to read, a fellow developer may misunderstand the purpose of a test, or not understand it at all. Moreover, if you want to project user stories onto a test suite, you ideally want to have a syntax that’s capable of closely matching the terms used to write these stories. That’s not the case for what we’ve seen so far: your test’s intention is often buried under a pile of ugly boilerplate code. It would be nice to have a testing API that was specifically designed to describe the steps a user can take through an Android application, such as press button, enter text, scroll list, or go back, and make assertions along the way.

When talking about syntax or instruction sets that are specific to a certain domain, such as Android testing and instrumentation, we’re in the realm of domain-specific languages (DSLs). DSLs come in various forms and sizes: they can be designed from scratch, like UML’s Object Constraint Language (OCL), or built on top of existing languages, like Ruby’s document builders. They can be short and cryptic, like regular expressions, or verbose and natural, like Cucumber (see http://cukes.info). You can even find DSLs in real life. Have you ever tried following a Texas Hold’em Poker tournament? The forced bet by the player next to the dealer is the big blind, and players can hold cards that are off suit. They can play the river, winning the game with Aces up. If you’re not into Poker, you’ll have no clue what these people are talking about. That’s because it’s domain-specific vocabulary.

DSLs are great for writing tests, because they allow you to describe what you expect to happen in a focused, natural way. The Cucumber language, for instance, is based on Ruby, and allows you to write test scenarios in something that comes close to spoken language:

Given I have entered 50 into the calculator
And I have entered 70 into the calculator
When I press add
Then the result should be 120 on the screen

Cucumber parses these instructions into test methods that can be executed like any other ordinary test case would be. Android testing isn’t as advanced as that yet, but the Android community has sure been busy! In this technique, we’ll see what Android testing DSLs are capable of these days.

Problem

Your test cases must be understood even by nontechnical staff, or you want to arrive at test code that’s generally easier to read and write, and better reflects the scenario-driven nature of your tests.

Solution

One noteworthy project put forth by the Android development community in the still-manageable world of testing libraries is Robotium, a free and open-source third-party library released under the Apache License 2.0. It’s deployed as an ordinary JAR file, so you can drop it into your test project and use it. (You can get it via its Google Code project site located at http://code.google.com/p/robotium.)

 

Grab the Project: DealdRoidRobotiumTest

You can get the source code for this project at the Android in Practice code website. Because some code listings here are shortened to focus on specific concepts, we recommend that you download the complete source code and follow along within Eclipse (or your favorite IDE or text editor).

Source: http://mng.bz/5745

 

Robotium isn’t so much a test framework on its own, as its project website suggests, but is more like an extension to the existing Android test framework. Think of Robotium as an add-on to Android’s instrumentation framework that makes writing even complex test scenarios a breeze. There are no Robotium “framework” classes you’d have to extend in order to write a Robotium test—your test cases still inherit from ActivityInstrumentationTestCase2. Instead, you leverage the Solo class to steer the UI flow in a test case. Any action or step the user takes is thereby invoked on an instance of that class, using an imperative style similar to the actions we mentioned earlier (press button, go back, and so forth). This approach makes it unobtrusive, and you’re free to mix calls to the Robotium Solo with calls to the standard Android framework classes.

 

Robotium Tests are Black Box Tests

Robotium was designed to write black box tests, much in the spirit of high-level test frameworks such as Cucumber. A black box test treats the test subject as an opaque entity: it assumes no knowledge of its inner structure or workings, merely feeds data into the test subject, and observes whether the output is what was expected. This is different from what we’ve seen so far, because we used implementation knowledge, such as view IDs, in our tests.

 

Figure 13.10 depicts how Robotium fits into the threesome with the Android testing framework and your own test cases.

Figure 13.10. Robotium hooks into the Android testing framework by wrapping and extending existing functionality. It’s then used in your own test cases by going through the Solo class, the central entry point into Robotium’s test helpers.

Let’s rewrite the test case from the previous technique using Robotium. Since Robotium exposes such a nice, concise syntax, we can also make the flow more complicated this time: in addition to testing the transition from the list view to the DealDetails, we test selecting different deal lists from the spinner box, too. Here’s Solo in action.

Listing 13.4. Robotium uses a DSL to write functional tests as stories

The first thing you do when writing a test case juiced up with Robotium is define a reference to the Solo object. This is the key object behind any Robotium test, and you use it to instrument activities by invoking command-like methods on it. The commands need almost no explanation, having natural names and being free of unnecessary argument bloat: clickInList clicks a list item at the given index (Robotium, by default, assumes there’s only one ListView at a time on a single screen), goBack presses the back button to return to the previous Activity, and scrollDown scrolls to the list’s bottom. If you add words like when, and, and then, you arrive at something that comes close to a full English sentence. On top of that, all the nasty code noise is gone: no manual waitForIdleSync, no Activity monitors, none of that super-technical hoopla distracting readers from the actual test.
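To make this concrete, here’s a minimal sketch of a Robotium test along those lines. The package name, indices, and flow are illustrative and don’t reproduce the book’s exact listing; we assume the first list entry opens the DealDetails Activity:

public class DealListRobotiumTest
        extends ActivityInstrumentationTestCase2<DealList> {

    private Solo solo;

    public DealListRobotiumTest() {
        super("com.manning.aip.dealdroid", DealList.class);
    }

    @Override
    protected void setUp() throws Exception {
        super.setUp();
        solo = new Solo(getInstrumentation(), getActivity());  // the single entry point
    }

    public void testBrowsingDeals() {
        solo.pressSpinnerItem(0, 1);            // pick another deal category
        solo.clickInList(1);                    // open the first deal in the list
        solo.assertCurrentActivity("Expected DealDetails", DealDetails.class);
        solo.goBack();                          // back to the deal list
        solo.scrollDown();                      // scroll to the list's bottom
    }
}

Reading the test method aloud almost gives you the user story back, which is exactly the point.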

Discussion

Robotium came as a godsend for those who embrace clear and natural test code. The test case syntax is much nicer to read and work with, though truth be told, this comes at the cost of a slight loss in test speed (well, as speedy as an instrumentation test can get), due to the higher abstractions and the frequent waits and sleeps performed by the Robotium Solo.

Because you write black box tests using Robotium, you can’t do things such as selecting views by ID. Instead, you need to write tests using only the data you can derive from what’s visible on the screen, which is view text (button labels), or tree indices when using hierarchyviewer. This means you can even use Robotium to test applications that you haven’t developed yourself, though it feels awkward when using it for your own projects.

There are already plans to make Robotium more powerful. One ongoing effort is to make Robotium more extensible so it’s easier to build extra functionality on top of it, such as the use of existing testing languages like Cucumber. Another interesting plan is to deliver an extension to Robotium called Remote Control (RC). Using the RC, a server would sit on the emulator or device, while the test code runs entirely on the client (the developer’s machine) and sends commands to the server to tell it what to do. This would result in faster and more flexible test execution.

There’s some traction in the world of testing libraries for Android at the moment. Another Android testing library that takes the same line as Robotium is Calculon. Unlike Robotium, Calculon doesn’t go through a proxy object to instrument an Activity, but extends the existing framework classes with new assertions that form a DSL. In Calculon, you write sentences that start with assertThat, and build your test from there:

assertThat(R.id.button1).click().starts(MyActivity.class);
assertThat(R.id.button2).click().implies(R.id.some_view).gone();

Even more so than Robotium, Calculon focuses on clear and concise expression of assertions and actions in a test case. But it’s in an early stage of development and has yet to prove itself in a production environment. Calculon is also open source under the terms of the Apache License 2.0, and can be found online here: https://github.com/kaeppler/calculon.

13.3. Beyond instrumentation: mocks and monkeys

In the first two parts of this chapter, we showed you what makes Android tests roll, from setting up a test project to writing both simple and more complex test cases. We’re not quite done yet. This last section will deal with the advanced themes of testing on Android, going beyond your typical instrumentation tests. We’ll start by covering mock objects and explain why and how you should use them in tests. We’ll then leave the world of Android JUnit tests and explore some alternate techniques for testing your applications that fundamentally differ from what we’ve seen so far, but that can be used to complement your ordinary Android test cases.

Technique 78: Mock objects and how to use them

There’s a golden rule when writing tests: never let the outcome of a test depend on something that’s not directly related to the entity under test, or worse, that’s beyond your control. We saw this rule in practice when we wrote a unit test for the DealDetails activity in technique 68, where the entire test environment in which the Activity was executed acted as a barrier. The test couldn’t possibly have failed due to the web browser crashing when we tested the “view in browser” functionality, since no actual web browser process was running! We merely tested an if-then scenario: if that menu item was pressed on a real device, then the browser would be started. That means we tested this piece of functionality without having to actually rely on the browser application, which is beyond our control. This is desirable, since we don’t actually care whether the browser works; we only care that if the browser works, then our application should work, too.

We didn’t do that in the instrumentation tests we wrote in techniques 69 and 70. Even though the entity under test was the DealList Activity, all sorts of entities were involved, including the DealDetails Activity and even a web service. There are two problems with this. First, for these kinds of instrumentation tests, where everything is executed in much the same way as if an ordinary user were using the application, things can slow down. The average time for the DealListTest to run on an emulator on my computer was about six seconds, with most of that time lost in the web service call. You can imagine that once your application grows large and you add more tests, the total time it takes for your test suite to run can become significant. This is a hindrance when exercising TDD, since in that case you rely more on short feedback loops to see how a change to the source code affects the application as a whole.

Second, and much worse, it’s unreliable. What if the eBay web service is down? Should our test fail? Probably not; after all we’re testing our application, not the eBay web service, which isn’t under our control. You could argue: but it was an integration test, a story test that should simulate what a user does in the application. Yes, but we could’ve achieved the same result by replacing the call to the web service with a static result (a list of Item objects), and testing the code that establishes the web service connection and parses Item elements in separate unit tests. That way, we keep everything well tested, but we’ve isolated our tests: if the unit test for the web service parser passes, and an integration test just uses its API, then we know that if we plug these things together they’ll work.

This gives rise to the question: how can we remove a test’s dependencies on components that aren’t within our control, or that should be tested elsewhere? This is where mock objects and stubs come into play.

Problem

You want to replace a piece of functionality in a test with a dummy, since its behavior would otherwise impact the test and potentially break it, even when the actual entity under test works correctly.

Solution

Mocks and stubs act as placeholders in your tests. They expose the same API as the object they’re replacing. From the test’s point of view, they’re identical, but their implementation has been replaced so as to not interfere with the test. Often this means returning a static result from a method, such as a test fixture bundled with the test project.

 

Managing Test Fixtures

Test fixtures often go hand in hand with mock objects, since test fixtures replace live data with some static, predefined data only meant to be used in a test. Consider the web service call in the DealDroid for instance: instead of doing that call and retrieving an XML document via HTTP, we could bundle a static XML file with our test project and use that for testing the DealList. Those files could live in your test project’s res/raw folder and be loaded in a test’s setUp method.
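A minimal sketch of how such a fixture could be loaded in a test’s setUp, assuming the XML lives at res/raw/deals.xml in the test project and that ItemParser is a hypothetical name for the app’s feed parser:

InputStream fixture = getInstrumentation().getContext()
        .getResources().openRawResource(R.raw.deals);   // R from the test project
try {
    List<Item> deals = new ItemParser().parse(fixture); // parse static XML, no HTTP involved
    // hand the parsed deals to the object under test...
} finally {
    fixture.close();
}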

 

This is similar to using crash test dummies: they closely resemble human bodies (they have, for example, the same shape, size, and weight), but they’re just replacements. The difference between a mock object and a stub or fake object is that whereas stubs replace method implementations to return some static or manually crafted result, mock objects also verify that method invocations have happened. This is extremely useful if you’re testing object interactions where you don’t actually care about the result of an invocation, but you want to make sure that it has definitely happened. An example would be the verification of a credit card holder’s name in an online payment process: you don’t care if the name is John Doe or Joe Blow, but you definitely care that the name is considered when verifying the payment. For simplicity, we hereafter refer only to mock objects, regardless of whether they’re true mocks or just fake objects.

To make the application of mock objects more interesting, here’s a new scenario: in the DealDroid, we’d like to add a simple deal export function. This would be a simple menu item that allows us to write all items from the current deal list to a file on the device by calling an item’s toString method (see figure 13.11).

Figure 13.11. A new export function has been added. By selecting it from the menu, the list of deals will be exported to a text file.

 

Grab the Project: DealdRoidWithExport

You can get the source code for this project, and/or the packaged APK to run it, at the Android in Practice code website. Because some code listings here are shortened to focus on specific concepts, we recommend that you download the complete source code and follow along within Eclipse (or your favorite IDE or text editor).

Source: http://mng.bz/27qZ, APK: http://mng.bz/1LX1

 

Here’s how the code for the exporter helper class might look.

Listing 13.5. The new exporter exports a list of deal items to a text file
public class DealExporter {

    private Context context;

    private List<Item> deals;

    public DealExporter(Context context, List<Item> deals) {
        this.context = context;
        this.deals = deals;
    }

    public void export() throws IOException {
        FileOutputStream fos =
                context.openFileOutput("deals.txt", Context.MODE_PRIVATE);
        for (Item item : deals) {
            fos.write(item.toString().getBytes());
        }
        fos.close();
    }
}

This looks straightforward: we’re writing to a text file opened using the openFileOutput helper method, which will create a new file called deals.txt in the application’s data/files folder on the device. We’ve also made this functionality available in the DealList by adding it to the options menu (look at the full source code for DealList.java for this chapter if you’re interested).

Now, how would we write a unit test for the DealExporter? We’re not testing an Activity, Service, or Application. We’re testing a POJO, but it depends on a Context. No suitable Android test case class provides a fully set up context that we could use to call openFileOutput. Even if we had one, we don’t want or need to test whether Android’s file I/O works; if we did that, we’d be testing the Android platform, not our class. Long story short, we want to mock out the Context this class depends on. This involves two steps. First, we define a mock context class that implements a stubbed version of the openFileOutput method. Second, since DealExporter expects a valid FileOutputStream to be returned from that method, we implement a MockOutputStream, which doesn’t write to a file, but instead records invocations of its write method and redirects any bytes written to standard out. Here’s the DealExporterTest, including the mock objects just mentioned.

Listing 13.6. Use mock objects to decouple objects under test

First, we need to define a mock that’ll mimic the file output. We do that by inheriting from FileOutputStream and configuring it to write to STDOUT by passing the FileDescriptor.out object to its constructor (note that it doesn’t matter what you pass here, as long as it’s a valid file descriptor, since we’re going to override write in the next step). We also override its write method so that it doesn’t write to a file, but instead keeps a record of the number of invocations in the itemsWritten field and makes sure that the data passed to this method is what we expect: a String-ified deal item.

The next step is to use this mocked-out FileOutputStream. Since the DealExporter writes to an output stream returned by the Context’s openFileOutput, we must stub out that method to return our MockOutputStream. Android already provides base classes for creating mock Context objects (MockContext), but all of its methods throw an UnsupportedOperationException, so you need to implement those you want to use to do something meaningful. We do that by implementing openFileOutput to return a new instance of MockOutputStream.

We haven’t yet looked at the actual test we want to run. testShouldExportItems only needs to do two things: invoke the exporter using our MyMockContext, and assert that the correct number of invocations has occurred. That’s all we require to make sure our exporter works!
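Condensed into a sketch, the test has roughly the following shape (field names are simplified compared to the full listing, and we assume Item has a no-argument constructor):

public class DealExporterTest extends AndroidTestCase {

    static class MockOutputStream extends FileOutputStream {
        int itemsWritten = 0;

        MockOutputStream() {
            super(FileDescriptor.out);     // any valid file descriptor will do
        }

        @Override
        public void write(byte[] buffer) throws IOException {
            itemsWritten++;                // record the invocation
            System.out.write(buffer);      // redirect the data to STDOUT
        }

        @Override
        public void close() {
            // nothing to close; we never opened a real file
        }
    }

    static class MyMockContext extends MockContext {
        MockOutputStream out = new MockOutputStream();

        @Override
        public FileOutputStream openFileOutput(String name, int mode) {
            return out;                    // stubbed: never touches the file system
        }
    }

    public void testShouldExportItems() throws Exception {
        MyMockContext context = new MyMockContext();
        List<Item> deals = Arrays.asList(new Item(), new Item());
        new DealExporter(context, deals).export();
        assertEquals(2, context.out.itemsWritten);
    }
}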

Discussion

In this test case we implemented a custom MockContext by inheriting from that class directly. This is what you want to do if you need customized behavior specific to your test. Sometimes you don’t have to go that far. Often it’s desirable to have a fully working Context that behaves differently in a test environment. Android defines a few of those specialized Context implementations, but they’re easy to miss, since unlike MockContext and its brethren MockApplication, MockService, and so forth, they don’t live in the android.test.mock package, but in the android.test parent package. The most notable one is RenamingDelegatingContext, a context wrapper you can use in instrumentation tests to redirect database or SharedPreferences output of the wrapped Context to dedicated test files. This is required for making sure that a test doesn’t overwrite preferences or database entries written by the actual application.

When dealing with mock objects, a general problem that arises is that of injection. If we want to replace an object with its dummy counterpart just for a test, then we need some way to do so. There are three basic ways to achieve that:

1.  Manually using setter methods and constructors

2.  Automatically using setter methods and constructors

3.  Using runtime bytecode manipulation and generation

In 1 and 2, we provide setter methods or constructors that allow us to replace the object we’re trying to mock out with an alternative implementation. That’s what we’ve done manually in this technique: we’ve configured the DealExporter with a mocked-out Context via its constructor. This can get tedious, and there are ways to do that automatically. A common approach is to use object lifecycle frameworks capable of dependency injection, such as the Spring framework or Google Guice. These frameworks let you declare dependencies on other classes or interfaces, and instead of having to resolve these dependencies yourself by calling setter functions, they wire all managed objects together automatically at runtime. This is an architectural pattern often called inversion of control (IoC), since objects don’t handle dependencies themselves—they declare dependencies and the container then takes care of connecting them.

Against the backdrop of testing and mocking, this means you can express things like the following: if in testing mode, please use this mock implementation; otherwise, use the actual implementation. You don’t need to invoke any setter manually: once your object has been initialized, the IoC container guarantees that it has all its dependencies set, mock or not! For instance, Google Guice has been adapted to Android as part of the RoboGuice project (http://code.google.com/p/roboguice/), but keep in mind that those frameworks, though comfortable to use, can leave a rather large footprint on your application, increasing startup time and memory use in general.

The third and last option for injecting mock objects is to leverage runtime bytecode manipulation and generation. This is the route most established mock object libraries in the standard Java world, such as Mockito, EasyMock, and PowerMock, take. These libraries can create mock implementations of classes and interfaces at runtime, and even modify existing methods to return or do anything you want. The secret weapon here is cglib, a Java code generation library. Although EasyMock has at least to some degree been ported to Android as part of the android-mock project (http://code.google.com/p/android-mock), libraries depending on cglib don’t work on Dalvik, since the code generated by cglib isn’t compatible with the Dalvik virtual machine. Moreover, some of them rely on the java.beans package, which isn’t part of the Android framework libraries.

One solution would be to write tests not as Android instrumentation tests, but as standard JUnit tests running on a JVM, and mock out all involved framework classes using one of these mock libraries. To give you an idea what this could look like, we could’ve implemented the mock objects from listing 13.6 using Mockito like this:

FileOutputStream mockOutput = mock(FileOutputStream.class);
Context mockContext = mock(Context.class);
when(mockContext.openFileOutput(anyString(), anyInt()))
    .thenReturn(mockOutput);
// ...run the exporter with mockContext, then verify:
verify(mockOutput, times(2)).write((byte[]) anyObject());

Mockito exposes a DSL for creating mock objects and assertions on them, which makes it easy to write and read tests that involve mocks. A key problem with this approach is that usually so many Android framework classes are involved that you’ll find yourself almost reimplementing Android using custom mock objects. The fine people at XtremeLabs and Pivotal realized this problem early on, and came up with an entirely different answer to this: Robolectric to the rescue!

Technique 79: Accelerating unit tests with Robolectric

We’ve seen many approaches to writing tests so far: plain JUnit tests with or without using mock object libraries, Android unit tests, Android functional tests, and tests using Android’s rather limited form of mock objects. Plain JUnit tests running on a JVM have the advantage that they’re quick to execute, but they require you to mock out large parts of the Android framework library, whereas instrumentation tests can leverage the platform objects, but are slow to execute and have poor support for mock objects.

If speed is what matters to you, there’s a whole new way to write tests: the Robolectric unit test framework (http://pivotal.github.com/robolectric/). Robolectric’s premise is to “de-fang the Android SDK jar” so that you can unit test your application on a standard JVM without the need to explicitly mock out every framework class used somewhere down the call tree that would otherwise crash your test case with a RuntimeException("Stub!"). The key idea behind Robolectric is that it automatically mocks out the Android framework classes itself, instead of having the developer manually do that. Robolectric can therefore be thought of as one giant Android platform mock!

Problem

You want tests that execute quickly, perhaps because you’re exercising TDD, but running tests on a plain JVM would require you to mock out large parts of the Android framework.

Solution

Under the hood, Robolectric provides what’s called a shadow class for some Android framework class. A shadow class looks and behaves like its Android counterpart, but is implemented purely using standard Java code designed to run on a standard JVM. For example, in any Robolectric test, an Android Activity instance is automatically replaced by a ShadowActivity, a class that implements the same methods as an ordinary Activity does, but is actually a big mock object. For instance, it’ll support the findViewById method. It’ll even support view inflation and return a View instance that has a show method. But Robolectric doesn’t run any graphics routines to render a layout or view. It just pretends to. This means it’s quick, while allowing you to make test assertions against views and layouts as you would in an ordinary Android test case. Checking view state, such as the visibility and position of a view, is supported, as is testing for behavior, such as starting a new Activity when clicking a view.

Any class can become a shadow of a framework class; it doesn’t need to provide the full set of interface methods. All methods not implemented by the shadow class will be rewritten by Robolectric to do nothing or return null. This approach is called partial stubbing or partial mocking.
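To give you a feel for this, here’s roughly how a shadow class is declared in the Robolectric sources of this era (the annotation and class names follow Robolectric’s conventions, but details vary between versions):

@Implements(TextView.class)
public class ShadowTextView extends ShadowView {

    private CharSequence text = "";

    @Implementation
    public void setText(CharSequence text) {
        this.text = text;   // remember the value; nothing is ever drawn
    }

    @Implementation
    public CharSequence getText() {
        return text;
    }
}

Only the methods the shadow cares about are implemented; everything else falls back to Robolectric’s do-nothing default.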

One cool aspect about all this is that most of the time you don’t have to worry about shadow classes at all. Instead, Robolectric hooks into the class-loading procedure and whenever it sees a stock Android class being requested, it silently replaces it with its shadow. Figure 13.12 depicts this.

Figure 13.12. The Robolectric test runner registers a custom class loader that will intercept any requests for classes made by the application under test (requests for the Context class, for instance). Instead of a Context instance, it’ll return a shadow implementation.

As outlined in figure 13.12, Robolectric tests are essentially JUnit 4 tests (JUnit 3 is not supported) executed using the RobolectricTestRunner. You don’t need to do any additional setup; your tests are still ordinary JUnit 4 tests, with Robolectric pulling the strings in the background.

 

Grab the Project: Dealdroidrobolectrictest

You can get the source code for this project at the Android in Practice code website. Because some code listings here are shortened to focus on specific concepts, we recommend that you download the complete source code and follow along within Eclipse (or your favorite IDE or text editor).

Source: http://mng.bz/zP3n

Note that this project doesn’t carry the Android Eclipse project nature; it’s an ordinary Java project instead.

 

Robolectric test projects, unfortunately, require a bit of work to set up properly. Detailed instructions on how to do that are on the Robolectric project website (http://pivotal.github.com/robolectric), but we’ve collected some general remarks and hints in the sidebar “Setting up Robolectric test projects.”

 

Setting up Robolectric test projects

Unlike Android instrumentation tests, Robolectric tests run on your ordinary workstation JVM. The best strategy is to create an ordinary Java project for your Robolectric tests rather than an Android test project. Assuming you use Eclipse, this means the project will be lacking the Android project nature, so you’ll have to add the dependency to the Android JAR files yourself. In Eclipse, one way to do so is through User Libraries:

Right click project > Build Path > Add Libraries... > User Library > User Libraries... > New > [enter a name] > Add JARs... > select both android.jar and maps.jar from your SDK home directory.

You’ll also have to add the robolectric-all.jar as well as JUnit 4 to the project’s build path:

JUnit 4: Right click project > Build Path > Add Libraries... > JUnit > JUnit 4.

Robolectric: copy robolectric-all.jar to a folder in your test project, and right-click > Build Path > Add to Build Path.

 

As an exercise, we’re now going to rewrite the DealDetailsTest from listing 13.2 using Robolectric and JUnit 4. At this point it probably makes sense for you to dig up that listing again and compare it directly against the Robolectric test. This will help in understanding where the differences are.

Listing 13.7. Robolectric tests can run outside a device or the emulator

As you can see, the listing isn’t dramatically different from the DealDetailsTest we’ve seen before, even though this time we’re not using Android’s testing framework at all. The differences on the test code level are mostly in the details. The most striking difference is that we’re now dealing with a JUnit 4 test, whereas Android tests are always run using the much older JUnit 3. JUnit 4 makes heavy use of Java annotations in order to decrease the intrusiveness of the test library. In that spirit, Robolectric doesn’t force us to subclass anything; instead, it provides a JUnit 4 test runner called RobolectricTestRunner via JUnit 4’s @RunWith annotation. Test methods don’t need to start with test, but are marked as such using the @Test annotation.

Our test setup code is still in setUp, although in JUnit 4 that method is allowed to have any name, as long as it carries the @Before annotation. You may notice that we create our DealDetails instance manually instead of having a getActivity method do that for us. On the other hand, we can call getApplication on our Activity under test, since that call will be intercepted by Robolectric to automatically create a DealDroidApp instance and invoke its onCreate method for us. If you wonder how Robolectric can be clever enough to find out what our application class is: it analyzes the application manifest to find the class name. It’s that clever!

One thing you always have to do in Robolectric tests is invoke component lifecycle methods explicitly. On a device, Android would do that for you, but Robolectric doesn’t handle Android’s component lifecycle management. What’s called is what you call; hence the explicit call to activity.onCreate using a null bundle.

The actual test methods remain almost unmodified, except testThatItemCanBeDisplayedInBrowser, which nicely shows some of the main differences when testing with Robolectric. Let’s reiterate what this method tests: it makes sure that when the menu item corresponding to the MENU_BROWSER menu ID is pressed, an Intent is fired that will launch the Android browser using the deal’s details page on eBay. Remember that we’re not running on a device, so we can’t actually bring up the options menu and press a button. What we can do is invoke the function that Android would call if the user brought up the menu. This is a safe assumption to make, as long as Android itself doesn’t change significantly with respect to the way it sets up and handles an application’s option menu.

With that in mind, we simulate a menu button press by invoking the onOptionsItemSelected callback directly and passing a Robolectric TestMenuItem to it that identifies itself as the Show in Browser button. This is a typical pattern for Robolectric tests: we know this method will be called by Android at runtime, so we invoke it ourselves and pass to it whatever we desire, making this a quick operation.

But how do we now test that this would result in the browser being started? An ordinary Android Activity gives us no means of seeing which other activities were started from it. But its shadow does! Robolectric records all Intents fired from an Activity under test on its corresponding shadow. That means we can get a reference to the shadow of the Activity under test and check whether the Intent we’re looking for is there. This is as simple as obtaining a reference to the shadow via Robolectric.shadowOf and calling getNextStartedActivity on the shadow. We can then do an ordinary assert on the Intent. Note that this Intent has a shadow, too, which can also be retrieved via the shadowOf helper.
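Condensed into a sketch, the interesting part of the test looks roughly like this. MENU_BROWSER stands for the app’s menu item ID, the fixture setup through the application object is omitted, and we assume your Robolectric version’s TestMenuItem can be constructed with a menu ID:

@RunWith(RobolectricTestRunner.class)
public class DealDetailsTest {

    private DealDetails activity;

    @Before
    public void setUp() {
        activity = new DealDetails();   // no getActivity() helper here
        activity.onCreate(null);        // lifecycle calls are explicit
    }

    @Test
    public void shouldLaunchBrowserWhenMenuItemIsSelected() {
        // simulate pressing the "Show in Browser" options menu entry
        activity.onOptionsItemSelected(new TestMenuItem(DealDetails.MENU_BROWSER));

        ShadowActivity shadow = Robolectric.shadowOf(activity);
        Intent intent = shadow.getNextStartedActivity();
        assertNotNull("Expected the browser to be started", intent);
        assertEquals(Intent.ACTION_VIEW, intent.getAction());
    }
}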

Discussion

There are some strong arguments for using Robolectric. First and foremost, it’s fast. Second, Robolectric doesn’t require a device or the emulator to execute tests, since it doesn’t rely on the native Android runtime. This means you won’t need to manage emulators and device images (which can be difficult on headless build servers), and you get feedback from tests much more quickly. Moreover, since Robolectric can in its own way be understood as one big mock framework for Android, you often don’t need other mock object libraries such as Mockito (although you’re free to use them if you like). Since Robolectric builds on JUnit 4 and standard Java, you can use whatever extra libraries you desire, without being bound by Dalvik’s restrictions.

Unfortunately, there are also downsides. The building blocks of Robolectric are the shadow classes that mimic their Android counterparts; they’re also Robolectric’s biggest problem. First of all, at the time of this writing, there are only 75 of them. This may sound like a lot at first, but that doesn’t even remotely cover the hundreds of Android framework classes that could potentially be involved in a test (directly or indirectly). That wouldn’t be much of a problem if there were an easy and unobtrusive way to provide your own, but even though the Robolectric authors claim the process of adding custom shadows is easy, it’s not. Instead of providing a framework API to register custom shadow classes, you need to change the library’s source code to do so. The Robolectric authors therefore encourage users to check out the Robolectric source code into a subfolder of the application under test (or the corresponding test project) and change things as desired.

One thing that struck us when working with Robolectric is its requirement for the application instance to be created. This seems odd, since Robolectric never calls component lifecycle methods for you, except on the Application instance: Robolectric will always create an instance for you and call its onCreate method. This sometimes requires you to mock out things in the application class even though you aren’t using them, for example when unit testing a Service.

Another thing to realize is that it’s generally not safe to assume that a passing test means your application is working correctly. Since tests aren’t executed against the Android runtime, but against something that mimics it, you can never be 100 percent sure that your application will behave the same when running on an actual phone. If, for instance, Google decides to change the way findViewById works, then Robolectric has to follow up with its implementation of that method; otherwise, you’ll end up testing against an implementation that doesn’t correctly reflect how Android works. On the other hand, simple tests such as testing that views exist or are visible are relatively safe to assert via Robolectric, since its view support is exhaustive already. Those things are unlikely to change in Android, so in many cases this may sound worse than it really is.

In conclusion, you should decide for yourself whether Robolectric’s benefits outweigh its disadvantages. It’s an interesting alternative to Android unit testing, but may not fit everyone’s needs.

We’ve covered a lot of ground, but one thing that all the approaches we’ve discussed so far have in common is their focus on functionality. Every test we’ve written so far was essentially asserting that the unit under test behaves correctly with respect to some sort of specification. As mentioned in the chapter opening, those aren’t the only kinds of tests you can run; you can also test for nonfunctional properties such as speed or stability. In the last technique of this chapter, we’re going to show you how you can do that with help from a monkey. A monkey? If that’s not the best reason to continue reading, we don’t know what is.

Technique 80: Stressing out with the Monkey

Isolated testing of your application’s components and story-based end-to-end testing of the application as a whole are necessary to ensure an application is behaving correctly. But it doesn’t end here. A bug-free application can still be slow. Also, applications may seem to work fine under normal conditions, but quickly become unresponsive or leak memory when put under load. In order to unearth these forms of nonfunctional defects, functional tests as seen so far in this chapter aren’t appropriate.

Things like speed or stability are difficult to test under normal conditions—conditions typical for the users of the application, such as using only a subset of the features, typing and clicking at normal speed, and so forth. Often an application’s nonfunctional defects only crop up when it’s being put under pressure, so we need a convenient way of doing that. One solution would be to install your application on a phone and give it some stress by wildly pressing buttons for a while to see if you can crash it. That’s not the level of convenience we had in mind, though. There’s a better solution: meet the Monkey.

Problem

You want to stress test your application by sending a series of random input events to it, collecting information about crashes or out-of-memory situations along the way.

Solution

As odd as it may sound, one of the best ways to test an application’s stability and reliability is to use it in ways you typically wouldn’t. Applications are always designed and developed with special paths in mind for the user to take through the application. That makes sense: you start with some form of feature description, which typically involves the user role and the interface elements being used, and then you design and implement the application according to that description. But what if the user decides to take a different path, a path that was never part of your design? Ah, surely no one would ever click that menu button when on this screen. Surely no one would ever turn the screen while the application is loading something. Or would they?

We wouldn’t go as far as to compare the average user of your applications to a monkey, but you can be sure they’ll use it in ways you wouldn’t expect. If your application becomes even slightly unresponsive, we can almost guarantee that users will start wildly tapping at the display to get any sort of response from it. What they don’t know is that they’re making things worse, since more input events are queued up on a system that’s already under heavy load. Android’s Monkey tool allows you to simulate this kind of situation. The Monkey tool is an application and UI exerciser running on an Android device, and is capable of sending a series of pseudorandom input events to stress test your application. Since it runs directly on the device or emulator, you must invoke it remotely via adb:

The adb shell command routes whatever you pass to it to the device’s command shell (see appendix A). In this case, we’re invoking the Monkey tool on the device. The Monkey tool expects two argument sets: a list of options, and the number of events it should generate (in this case 500). One option you always want to pass is the name of the application package you’re targeting, which is achieved with the -p option. Before getting into more detail, let’s run this command and see what happens (see figure 13.13).

Figure 13.13. Three snapshots taken randomly while the Monkey was exercising the DealDroid application. The Monkey tool will make its way through the application in a pseudorandom manner, pressing keys and pushing buttons with no particular goal or plan in mind.

It’s best if you run the example yourself to get a feel for how the Monkey works, but here’s what we saw: the three screen dumps in figure 13.13 were taken at random points in time while the Monkey was exercising the application, and they’re in chronological order from left to right. The stress test always starts with invoking the launcher Activity—in our case, the DealList. The DealList displays a modal dialog, so any interactions with the application will have no effect until the deals have loaded, but technically the Monkey is still busy sending events. After the list of deals had been loaded, the Monkey decided to first change the screen orientation and then press the spinner box and select the first entry. After the list had changed, it selected a deal item, so the DealDetails were started. On that screen, the Monkey spent a few seconds in the options menu, selecting different entries.

 

Package Confinement

Regardless of how long you run the Monkey, you’ll notice that it will never leave the DealDroid application. Why is that? After all, we have a Show in Browser menu item that will bring up the website for an item, but even if the Monkey clicks that, nothing happens. That’s because the -p option confines the Monkey to the given package; any event that would cause an Activity outside that package to be started will be dropped. This is great for testing your application in isolation. If you do want to include other activities and applications reachable from yours, then you must specify each such application package with an additional -p option. In order to allow the Monkey to open the browser, you’d have to run it as monkey -p com.manning.aip.dealdroid -p com.android.browser, but this means that it could just as well start the browser first and spend a while testing it before getting to the DealDroid.

 

The good news is that our application was never unresponsive, nor did it crash—the stress test succeeded. Looks like we did a good job at implementing it! Here’s the shell output:

matthias:[~]$ adb shell monkey -p com.manning.aip.dealdroid 500
Events injected: 500
## Network stats: elapsed time=22791ms (22791ms mobile, 0ms wifi, 0ms not
     connected)

We fired 500 events, the whole run took roughly 23 seconds, and we supposedly spent the same amount of time using a mobile data connection (that value is meaningless on the emulator, but can be useful when running on a device). It’s good that our application works so well, but for the fun of it, let’s break something and throw a RuntimeException in DealList.onCreate:

matthias:[~]$ adb shell monkey -p com.manning.aip.dealdroid 500
// CRASH: com.manning.aip.dealdroid (pid 1638)
// Short Msg: java.lang.RuntimeException
// Long Msg: java.lang.RuntimeException: Boom!
// Build Label: generic/google_sdk/generic/:2.2/FRF91/43546:eng/test-keys
// Build Changelist: 43546
// Build Time: 1277937122000
// java.lang.RuntimeException: Unable to start activity
     ComponentInfo{com.manning.aip.dealdroid/
     com.manning.aip.dealdroid.DealList}: java.lang.RuntimeException: Boom!
// at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2663)
//  [lengthy stack trace here]
// ... 11 more
//
** Monkey aborted due to error.
Events injected: 12
## Network stats: elapsed time=1893ms (1893ms mobile, 0ms wifi, 0ms not
     connected)
** System appears to have crashed at event 12 of 500 using seed 0

Once again we asked the Monkey to fire 500 events, but on event 12 it encountered a crash: that’s the exception we snuck in. We get all the usual information such as exception class and message, as well as a stack trace (we’ve shortened the stack trace here for better readability).

 

The Monkey Exit Code

If you intend to run the Monkey as part of an automated build (see chapter 14), be careful not to rely on its exit code to determine success or failure of the test. Typically, UNIX-compliant command-line tools indicate success by returning 0, and failure by returning a nonzero number, usually -1. The Monkey always returns 0, thus indicating success even if it aborted due to an error in the application. This issue is known and filed as ticket 13562 on the official Android issue tracker.

 

This diagnostic output tells us that the application failed with a crash, but we don’t know which event triggered it. “Event 12” is hardly useful information; it could’ve been anything. In order to get more detailed information about the events fired, you can invoke the Monkey with the -v (verbose logging) option. This will log every event that’s fired, and also include a summary detailing the distribution of event kinds that were used:

matthias:[~]$ adb shell monkey -p com.manning.aip.dealdroid -v 500
:Monkey: seed=0 count=500
:AllowPackage: com.manning.aip.dealdroid
:IncludeCategory: android.intent.category.LAUNCHER
:IncludeCategory: android.intent.category.MONKEY
// Event percentages:
//   0: 15.0%
//   1: 10.0%
//   2: 15.0%
//   3: 25.0%
//   4: 15.0%
//   5: 2.0%
//   6: 2.0%
//   7: 1.0%
//   8: 15.0%
...

Apparently the events fired by the Monkey aren’t as random as we initially suggested. That’s true: they’re pseudorandom, and they can be steered to happen more or less often, depending on the type of event. Pseudorandom in this case means that the Monkey will use a seed in order to randomize the sequence of the events fired. You can provide this seed manually via the -s option. The same seed means the Monkey will fire the exact same sequence of events. This means that when a test fails, you can reproduce it by rerunning the Monkey with the same seed.

 

Reproducible Test Runs

In order to keep your test runs reproducible if they fail with an error, you should always use a manual seed. A good seed is the current UNIX timestamp in seconds, which can be obtained from the GNU date tool:

$ adb shell monkey -p <package> -s `date +%s` -v 500

The back-ticks will execute the date tool and merge its output into the command. Don’t forget to run with the -v flag, so that the seed used to run this session is printed to the logs:

:Monkey: seed=1293818128 count=500

This will make your life a lot easier when running the Monkey as part of an automated build, something we’ll explore in chapter 14.

 

The Monkey can fire nine different types of events, and you can control how often they fire relative to each other. Table 13.1 summarizes the different kinds of events, their effect, and the corresponding command-line option (percentages are passed as values between 0 and 100).

Table 13.1. Kinds of events supported by the Monkey tool and the options used to steer them

Event type          Description                                                Option

Touch               A touchscreen press/tap (down and up)                      --pct-touch
Motion              A drag gesture (down, move, up)                            --pct-motion
Trackball           A trackball motion                                         --pct-trackball
Basic navigation    Navigation using the directional pad (DPAD)                --pct-nav
Major navigation    DPAD Center and the Menu button[*]                         --pct-majornav
System keys         Home, Back, Call, End call, Volume up, Volume down, Mute   --pct-syskeys
Activity launch     Random launches of activities for better coverage          --pct-appswitch
Orientation change  A screen orientation change[**]                            --pct-flip
Other               Anything else, such as keyboard keys                       --pct-anyevent

* This doesn’t include the back button as the official documentation suggests; the back button is classified as a system key instead.

** This option is undocumented at the time of this writing, but it’s recognized and an integral part of a test run, so it’s unlikely to disappear in future versions of the platform.

This allows us to influence a test run using the Monkey. For instance, we could decide to completely disable the menu events and increase the likelihood of orientation changes, which are known to cause instability in applications, especially if concurrency is involved (see chapter 6). So far, we’ve mostly looked at an application’s stability. The Monkey can detect other nonfunctional defects, for instance, Application Not Responding (ANR) errors. If you followed our advice in chapter 6, then you’ll never run into these problems, but let’s always be prepared and remember the two virtues we mentioned in the chapter opening: don’t be ignorant, and don’t be arrogant! When the Monkey detects an application timeout, it exits with an error message and prints some diagnostic information. If we rewrite the standard Android HelloWorld application to get stuck in an endless loop, then exercising it with the Monkey will yield something like this:

matthias:[~]$ adb shell monkey -p com.aip.test 50
// NOT RESPONDING: com.android.phone (pid 3784)
ANR in com.android.phone (com.aip.test/.HelloWorld)
Reason: keyDispatchingTimedOut
...
DALVIK THREADS:
(mutexes: tll=0 tsl=0 tscl=0 ghl=0 hwl=0 hwll=0)
"main" prio=5 tid=1 SUSPENDED
  | group="main" sCount=1 dsCount=0 obj=0x4001f1a8 self=0xce48
  | sysTid=3784 nice=0 sched=0/0 cgrp=default handle=-1345006528
  | schedstat=( 5143014534 1347433116 135 )
  at com.aip.test.HelloWorld.onCreate(HelloWorld.java:~13)
...
// meminfo status was 0
** Monkey aborted due to error.
Events injected: 2
## Network stats: elapsed time=31502ms (31502ms mobile, 0ms wifi, 0ms not
     connected)
** System appears to have crashed at event 2 of 50 using seed 0

Since Android 2.3 (Gingerbread), the diagnostic reports printed here are fairly lengthy and in-depth, but if you dig around you’ll find the stack trace that shows you where your application got stuck.

 

ANR Traces on Android 2.2 and Earlier

Note that the Monkey spits out stack traces of all threads only on Android 2.3 or newer. On older Android versions, you can find all ANR stack traces in /data/anr/traces.txt, although that directory is only accessible on the emulator or a device with root access.

 

Discussion

The Monkey is an indispensable tool for testing your applications for all sorts of nonfunctional properties such as responsiveness and stability under heavy use. One thing you should always keep in mind is that events are sent in a random fashion, so you can’t rely on full coverage of all your application’s elements. A passing Monkey test therefore doesn’t mean that your application is flawless, because the Monkey may have missed something. One way to improve coverage is to tweak the --pct-appswitch option. With higher values, the Monkey is more likely to reach all activities in your application.
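For example, a run weighted toward Activity launches, with a fixed seed for reproducibility, might look like this (the percentage is only a starting point to experiment with):

adb shell monkey -p com.manning.aip.dealdroid --pct-appswitch 40 -s `date +%s` -v 500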

13.4. Summary

We’ve come a long way. In this chapter, we explained a few basic ideas behind testing, including how to set up test projects and write simple tests using the JUnit library that comes with Android. We then introduced the notion of instrumentation and how you can leverage it to write full end-to-end tests based on user stories. We also noticed that Android’s test framework doesn’t shine when it comes to ease of use and concise syntax, so we showed you how you can use open source testing libraries such as Robotium to make your tests look nicer, improving productivity in the end. We then introduced mock objects, both the Android way and the novel JVM-based approach taken by Robolectric. Having covered plenty of functional testing approaches, we wrapped everything up by showing how to Monkey-test your applications to detect nonfunctional defects such as stability or speed problems.

What a ride! This chapter should’ve equipped you with some solid knowledge about automated testing for your applications. But a problem remains: so far, we need to always remember to run the tests that we write. Ideally, we want to run them whenever we change a piece of code, because changes may introduce bugs. Can we automate this, too? Yes we can! The answer is to use a build system that not only generates an APK file, but also runs the automated tests for us. Enter the world of build systems and continuous integration servers.
