Experimental Setup – Android GUI Test Suites

Benchmark Apps. We applied DetReduce to eighteen free apps downloaded from the Google Play store [11] and F-Droid [39]. Table 1 lists these apps along with their package name, the type of app, and the number of branches in the app (which offers a rough estimate of the size of the app). Since the apps were downloaded directly from app stores, we have access to only their bytecode. Thirteen apps were used for experimental evaluation in previous research projects [6, 9, 49]; the other apps, which we mark with asterisks, are newly selected. We excluded apps for which SwiftHand and Random saturate test coverage in less than an hour. Note that adding such apps would only improve the experimental results, because most of the traces in test suites for such apps are redundant.

Generating a Replayable Test Suite to be Used for Minimization. To generate test suites to be used as inputs to DetReduce, we first collected execution traces by running an implementation of the SwiftHand [5] and Random [5] algorithms. We ran each for eight hours, then checked whether the generated traces were replayable by re-executing each trace ten times. For each non-replayable trace, we identified a non-empty replayable prefix of the trace and retained the prefix rather than throwing the entire trace away. An app can generate a non-replayable trace for several reasons: a) the app has an external dependency (e.g., it receives messages from the outside world, depends on a timer, or reads and writes to the file system), or b) the app has inherent non-determinism due to the use of a random number generator or multi-threading. We removed dependencies on the outside world by resetting the contents of the SD card and the app data every time we restarted. Nonetheless, it is impossible to eliminate all sources of non-determinism. Therefore, we replayed each trace generated by the SwiftHand and Random algorithms ten times to remove the non-replayable suffixes of traces. We determined experimentally that eight re-executions are sufficient to detect most non-replayable traces for the benchmark apps.
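The prefix-retention step can be sketched as follows. This is a minimal illustration, not the actual implementation: `replay` is a hypothetical helper assumed to re-execute a trace on the app and return the number of leading actions whose observed coverage and screen abstraction matched the recording.

```python
def replayable_prefix(trace, replay, runs=10):
    """Re-execute `trace` `runs` times and keep only the prefix that
    replayed identically in every run.

    `replay(trace)` is a hypothetical helper assumed to return the number
    of leading actions whose observed behavior matched the recording.
    """
    prefix_len = len(trace)
    for _ in range(runs):
        # A flaky run shortens the retained prefix to its replayable part.
        prefix_len = min(prefix_len, replay(trace))
    return trace[:prefix_len]
```

The retained prefix is non-empty in practice because at least the initial actions after an app reset tend to reproduce reliably.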

Why did we not use Monkey to generate the initial test suite? Monkey [12] is a fuzz testing tool for Android apps. It is widely used to automatically find bugs in real-world Android apps. We initially attempted to use Monkey to generate inputs for DetReduce; however, we found that Monkey is not capable of generating replayable traces. We now describe our experience with Monkey.

Monkey is a simple black-box tool that reports only the sequence of actions it used to drive a testing session. Obtaining a trace would require non-trivial modifications to Monkey. Before undertaking this effort, we performed an experiment to determine whether Monkey is even capable of generating replayable traces; if Monkey cannot generate replayable traces, there is no point in the modification.

In this experiment, we used a script to generate traces with partial information from Monkey and checked whether those traces could be replayed. The script injects user actions at the rate of m actions per second, collecting branch coverage and the screen abstraction after injecting every n actions. The script picks the value of m from the set {1, 2, 5, 10, 20, 100} and the value of n from {2, 10, 50, 100, 200}. For each pair of values for m and n, the script runs Monkey until it has injected 2000 actions. By combining the sequence of actions reported by Monkey with the collected coverage information, the script can generate traces that have coverage and screen information after every n actions (instead of after every event). We call such traces partial traces.

Using this script, we collected three partial traces for each possible value of m and n using the same random seed and compared the partial traces. If the partial traces do not match, this indicates that Monkey cannot generate a replayable trace. We performed the experiment using ten apps with three different random seeds. The results of this experiment showed that Monkey passes the test for four apps when n = 2 and m = 2.
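The collection-and-comparison procedure above can be sketched as follows. The driver `run_monkey` is a hypothetical stand-in for the real script, which drives Monkey over adb; it is assumed to yield one (branch coverage, screen abstraction) snapshot per injected action.

```python
def collect_partial_trace(run_monkey, seed, m, n, total=2000):
    """Inject `total` actions at a rate of `m` actions per second using the
    given random seed, recording a snapshot after every `n` actions.

    `run_monkey(seed, rate, count)` is a hypothetical driver assumed to
    yield one (coverage, screen) snapshot per injected action.
    """
    trace = []
    for i, snapshot in enumerate(run_monkey(seed, m, total), start=1):
        if i % n == 0:
            trace.append(snapshot)
    return trace

def passes_replay_test(traces):
    # Monkey passes for an (m, n) pair only if all partial traces collected
    # with the same seed are identical.
    return all(t == traces[0] for t in traces[1:])
```

Keeping a snapshot only every n actions is what makes these traces "partial": the coverage and screen information between snapshots is unknown, but any divergence between same-seed runs still shows up as mismatched snapshots.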
For the other six apps, Monkey fails the test even when injecting one action per second. At this speed, Monkey becomes useless in practice because its power comes primarily from its ability to inject many actions quickly; it would take too long to generate a sufficiently good test suite using Monkey at this rate. Therefore, we concluded that using Monkey is not viable for generating the initial replayable test suite.

Why is Monkey testing highly non-deterministic? We found that Monkey injects actions asynchronously; that is, Monkey injects an action without checking whether the previously injected action has been fully handled. This allows Monkey to inject an order of magnitude more actions than testing tools that inject actions synchronously, but it also makes Monkey highly non-deterministic. For example, we noticed that if actions are injected while the app is unresponsive, those actions are dropped. Because the period of unresponsiveness varies from execution to execution, the number of dropped actions varies across executions.
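The dropped-action effect can be illustrated with a toy model (the names below are illustrative and not part of Monkey): the same injected action sequence yields different delivered sequences whenever the unresponsive window shifts between executions.

```python
def delivered_actions(actions, unresponsive_slots):
    """Toy model of asynchronous injection: an action injected during a
    slot in which the app is unresponsive is silently dropped.

    `unresponsive_slots` is a set of injection-slot indices; which slots
    fall in an unresponsive window varies from execution to execution.
    """
    return [a for i, a in enumerate(actions) if i not in unresponsive_slots]
```

Because the delivered sequence, not the injected one, determines the app's state, two executions with identical injected actions can still produce different traces.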