
Post about Android testing.

master · Chris Smith, 7 years ago · parent commit d4e03d9f48

site/content/post/2017-05-16-android-tests-espresso-spoon.md (+232 −0)

---
date: 2017-05-16
title: Android testing with Espresso and Spoon
url: /2017/05/16/android-espresso-spoon/
---

I've been spending some time recently setting up automated testing for our
collection of Android apps and libraries at work. We have a mixture of unit
tests, integration tests, and UI tests for most projects, and getting them all
to run reliably and automatically has posed some interesting challenges.

### Running tests on multiple devices using Spoon

[Spoon](https://github.com/square/spoon) is a tool developed by Square that
handles distributing instrumentation tests to multiple connected devices,
aggregating the results, and making reports.

As part of our continuous integration we build both application and test APKs,
and these are pushed to the build server as build artefacts. A separate build
job then pulls these artefacts down to a Mac Mini we have in the office,
and executes Spoon with a few arguments:

{{< highlight bash >}}
java -jar spoon-runner.jar \
    --apk application.apk \
    --test-apk applicationTests.apk \
    --fail-on-failure \
    --fail-if-no-device-connected
{{< / highlight >}}

Spoon finds all devices, deploys both APKs on them, and then begins the
instrumentation tests. We use two physical devices and an emulator to cover
the form factors and API versions that are important to us; if any test fails
on any of those devices, Spoon will return an error code and the build will
fail.

For library projects, you only have a single APK containing both the tests
and the library itself. The current version of Spoon requires both `--apk` and
`--test-apk` to be specified, so we simply pass in the same APK to both. It
looks like future versions of Spoon will be
[more flexible](https://github.com/square/spoon/pull/453) in this regard.

Spoon produces HTML reports, showing the status of each test run on each device.
We have the report output folder collected as a build artefact, so the reports
can be seen right from the build server:

<img src="/res/images/android-tests/spoon.png" alt="Spoon output summary, showing results of 171 tests run on 3 devices">

### Flake-free UI testing with Espresso

[Espresso](https://developer.android.com/topic/libraries/testing-support-library/index.html#Espresso)
is an Android library that provides an API for interacting with and making
assertions about the UI of Android applications. Espresso has a very simple
interface, and does lots of clever things under the hood to ensure that your
tests only execute code when the UI is idle (and hence stable). You shouldn't
ever need to make your code sleep or wait.

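As a taste of the API, a typical interaction reads almost like a sentence. This
fragment is purely illustrative (the view IDs and text are hypothetical, not
from our apps):

{{< highlight java >}}
// Type a username, tap the login button, and assert the greeting appears.
// Espresso waits for the UI to become idle before each step.
onView(withId(R.id.username)).perform(typeText("alice"));
onView(withId(R.id.login)).perform(click());
onView(withText("Welcome")).check(matches(isDisplayed()));
{{< / highlight >}}
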
For Espresso's magic to work, it needs to know whenever some background
activity is going on that it should wait for. By default, it hooks into
Android's `AsyncTask` executor so it can wait for those tasks to finish. In our
apps, there were a few cases where we used an explicit `Thread` to do some
background work, which caused tests to fail intermittently (depending on
whether the thread performed its UI update before or after Espresso executed
the test code). Rewriting these cases to use an `AsyncTask` enabled Espresso to
figure out what was happening, and the tests started passing reliably.

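As a minimal sketch of that kind of rewrite (the data-loading method and view
here are hypothetical stand-ins, not our actual code):

{{< highlight java >}}
// The same background work as an AsyncTask: Espresso tracks the AsyncTask
// executor, so it won't run test code until doInBackground has finished
// and the main looper is idle again.
new AsyncTask<Void, Void, String>() {
    @Override
    protected String doInBackground(Void... params) {
        return loadData(); // hypothetical blocking call
    }

    @Override
    protected void onPostExecute(String result) {
        resultView.setText(result); // runs on the main thread
    }
}.execute();
{{< / highlight >}}
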
A similar problem occurred where we were using RxJava to load some data. There
are two possible ways to deal with this. Espresso has the concept of an
[idling resource](https://developer.android.com/reference/android/support/test/espresso/IdlingResource.html),
which provides a way of telling Espresso when a resource is busy, so that it
can hold off interacting with or testing the UI until the resource is finished.
In our case, the code in question was due to be rewritten soon, so we went for
a quicker and dirtier option: forcing RxJava to use the same executor as
`AsyncTask`.

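For reference, the idling resource route might have looked something like this
sketch, using the support library's `CountingIdlingResource` (the names here
are hypothetical):

{{< highlight java >}}
// Incremented before background work starts and decremented when it
// finishes; Espresso holds off while the count is non-zero.
public final class BusyTracker {
    public static final CountingIdlingResource RESOURCE =
            new CountingIdlingResource("background-work");
}

// In the test's setup:
Espresso.registerIdlingResources(BusyTracker.RESOURCE);
{{< / highlight >}}
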
To force Rx onto `AsyncTask`'s executor, we added a simple test utility class
that registers an `RxJavaSchedulersHook` overriding the schedulers used by Rx:

{{< highlight java >}}
import android.os.AsyncTask;

import rx.Scheduler;
import rx.plugins.RxJavaPlugins;
import rx.plugins.RxJavaSchedulersHook;
import rx.schedulers.Schedulers;

/**
 * Hooks in to the RxJava plugins API to force Rx work to be scheduled on the
 * AsyncTask's thread pool executor. This is a quick and dirty hack to make
 * Espresso aware of when Rx is doing work (and wait for it).
 */
public final class RxSchedulerHook {

    private static final RxJavaSchedulersHook javaHook =
            new RxJavaTestSchedulerHook();

    private RxSchedulerHook() {
        // Should not be instantiated
    }

    public static void registerHooksForTesting() {
        if (RxJavaPlugins.getInstance().getSchedulersHook() != javaHook) {
            RxJavaPlugins.getInstance().reset();
            RxJavaPlugins.getInstance().registerSchedulersHook(javaHook);
        }
    }

    private static class RxJavaTestSchedulerHook extends RxJavaSchedulersHook {
        @Override
        public Scheduler getComputationScheduler() {
            return Schedulers.from(AsyncTask.THREAD_POOL_EXECUTOR);
        }

        @Override
        public Scheduler getIOScheduler() {
            return Schedulers.from(AsyncTask.THREAD_POOL_EXECUTOR);
        }

        @Override
        public Scheduler getNewThreadScheduler() {
            return Schedulers.from(AsyncTask.THREAD_POOL_EXECUTOR);
        }
    }
}
{{< / highlight >}}

With the hook registered, Rx does all of its work on the same thread pool as
`AsyncTask`, which Espresso already knows about. It's not the best long-term
solution, but it means we don't have to spend time integrating `IdlingResource`
into code that doesn't have long to live. With the hook in place, the tests
that were flaking because of Rx started passing reliably as well.

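The hook needs registering before any Rx work kicks off; for example, each test
class (or a shared base class) can do so in a `@Before` method:

{{< highlight java >}}
@Before
public void setUp() {
    // Safe to call repeatedly: the guard means the hook is only
    // registered once per process.
    RxSchedulerHook.registerHooksForTesting();
}
{{< / highlight >}}
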
### Getting automatic screenshots of failures

Spoon provides a client library to, among other things, take a screenshot of
the device. Espresso provides a hook that can be used to change how errors
are handled. Putting the two together is very simple:

{{< highlight java >}}
// getActivity(), description and TAG come from the enclosing test class:
// the activity under test, a JUnit Description of the current test, and
// a log tag respectively.
final FailureHandler defaultHandler =
        new DefaultFailureHandler(
                InstrumentationRegistry.getTargetContext());

Espresso.setFailureHandler(new FailureHandler() {
    @Override
    public void handle(Throwable throwable, Matcher<View> matcher) {
        try {
            // Capture the state of the UI at the moment of failure.
            Spoon.screenshot(
                    getActivity(),
                    "espresso-failure",
                    description.getClassName(),
                    description.getMethodName());
        } catch (Exception ex) {
            Log.e(TAG, "Error capturing screenshot", ex);
        }
        // Delegate to Espresso's default handler, which logs the usual
        // debugging output and fails the test.
        defaultHandler.handle(throwable, matcher);
    }
});
{{< / highlight >}}

In our new error handler we simply ask Spoon to take a screenshot, then call
Espresso's original handler so that it can output its debugging information
and fail the test. The Spoon runner automatically picks up the screenshot and
adds it to the report:

<img src="/res/images/android-tests/spoon-espresso.png" alt="Spoon output details, showing a screenshot captured of the failure">

Having the screenshot, error message, and logs all presented in a clean UI
makes debugging failures much, much easier than searching through a huge build
log to try and find the exception.

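The same `Spoon.screenshot` call can be sprinkled through passing tests too,
giving a step-by-step visual record in the report. A hypothetical example:

{{< highlight java >}}
@Test
public void logsIn() {
    Spoon.screenshot(getActivity(), "before_login"); // tag shows in report
    onView(withId(R.id.login)).perform(click());     // hypothetical view ID
    Spoon.screenshot(getActivity(), "after_login");
}
{{< / highlight >}}
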
### Filtering tests based on device capabilities

Some tests won't work on every device you throw at them. We had two problems:
some tests require a higher API version than some of our devices run, and some
of the UIs under test were designed to run only at certain resolutions.

The main cause of our dependence on newer API versions was the use of
[WireMock](http://wiremock.org/), a brilliant library for stubbing out
web services. WireMock requires API 19, while our physical devices tend
to run versions older than that. Stopping these tests from running is simply
a case of applying an annotation to the class:

{{< highlight java >}}
@SdkSuppress(minSdkVersion = 19)
{{< / highlight >}}

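Applied to a whole (hypothetical) test class, everything in it is skipped on
devices below API 19, where WireMock's stub server can't run:

{{< highlight java >}}
// Illustrative only: the class, port, and endpoint are made up.
@SdkSuppress(minSdkVersion = 19)
@RunWith(AndroidJUnit4.class)
public class UserProfileTest {

    @Rule
    public WireMockRule wireMock = new WireMockRule(8089);

    @Test
    public void showsStubbedUserName() {
        stubFor(get(urlEqualTo("/api/user"))
                .willReturn(aResponse().withBody("{\"name\":\"alice\"}")));
        // ... launch the activity and assert on the UI ...
    }
}
{{< / highlight >}}
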
Screen resolution is a bit more complicated. One of our apps is designed for
a specific tablet device (and will never be used on anything else), and trying
to render the UI on smaller screens results in items overlapping, important
parts ending up below the fold, and other problems.

We'd still like all the other tests for that app to run on all of the devices,
though, so we can test them on a variety of API versions and in other
conditions. We just need to suppress the UI tests. To do this, we subclassed
Android's `ActivityTestRule` and overrode the `apply` method:

{{< highlight java >}}
@Override
public Statement apply(Statement base, final Description description) {
    if (!canRunUiTests(InstrumentationRegistry.getContext())) {
        // If we can't run UI tests, return a statement that does nothing
        // at all. With normal JUnit tests we'd just throw an assumption
        // failure and the test would be ignored, but that makes the Android
        // runner angry.
        return new Statement() {
            @Override
            public void evaluate() throws Throwable {
                // Do nothing
            }
        };
    }

    return super.apply(base, description);
}

/**
 * Checks that the screen size is close enough to that of our tablet device.
 */
@TargetApi(Build.VERSION_CODES.HONEYCOMB_MR2)
private boolean canRunUiTests(Context context) {
    if (Build.VERSION.SDK_INT < Build.VERSION_CODES.HONEYCOMB_MR2) {
        // screenWidthDp is only available from API 13 (Honeycomb MR2).
        return false;
    }
    int screenWidth = dpToPx(context,
            context.getResources().getConfiguration().screenWidthDp);
    return screenWidth >= 600;
}

private int dpToPx(Context context, int dp) {
    DisplayMetrics displayMetrics = context.getResources().getDisplayMetrics();
    return Math.round(dp * displayMetrics.density);
}
{{< / highlight >}}

When the rule is run, we check whether the screen width meets a minimum number
of pixels. If it doesn't, an empty `Statement` is returned that turns the test
into a no-op. These tests show up as passed rather than skipped in the output,
but there doesn't seem to be a nice way to signal to the Android JUnit runner
that a test is being ignored programmatically.
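
The subclass then drops in wherever the stock rule would be used; for example,
with `TabletOnlyActivityTestRule` as a hypothetical name for it:

{{< highlight java >}}
// Hypothetical names for our ActivityTestRule subclass and activity.
@Rule
public final TabletOnlyActivityTestRule<MainActivity> activityRule =
        new TabletOnlyActivityTestRule<>(MainActivity.class);
{{< / highlight >}}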

BIN site/static/res/images/android-tests/spoon-espresso.png
BIN site/static/res/images/android-tests/spoon.png

