How to Use the Android API

One of Squish's most useful features is the ability to access the Java API from test scripts. This gives test engineers the flexibility to test just about any aspect of the AUT.

Java API access is available for apps that are started by Squish, either using ApplicationContext startApplication(autName) or androidobserver in combination with ApplicationContext attachToApplication(autName). For the Android desktop and for other apps, Squish can invoke touch and keyboard actions on objects that are available via the Android accessibility framework, provided the Android OS version is 4.3 or later.

With Squish's Android Object API it is possible to find and query objects, call methods, and access properties. In addition, Squish provides a convenience API (the Android Convenience API) to execute common user interface actions, such as tapping a button or typing text into a text widget. Java objects are made available in a wrapper, and the underlying objects' properties and methods are accessible through the Squish-added nativeObject property.

How to Work with Accessibility Objects

For certain tests it may be necessary to change an Android-wide setting. Other tests may launch a third-party app that cannot be instrumented with Squish, or for which using the accessibility framework is good enough to perform some tasks.

Squish cannot directly record on objects outside the apps that it instrumented and started. However, when the Remote Control dialog is used as the display for your Android device during a recording, actions on the user interface nodes provided by the accessibility framework are recorded. The Android OS version must be 4.3 or later.

As with in-app object names, prefer objects with a unique text, resource name, or description. Also look at the UiAutomator support when replaying touches has no effect.

The IDE can help with getting object names. When the IDE is at a breakpoint or recording is paused, the Application Objects view shows an extra top-level element, typically of type AccessiblePanel, if the app instrumented by Squish is not visible. Object names can then be copied via the context menu.

"Application Objects context menu"

The object snapshot viewer, called UI Browser, may help to find a particular accessibility object. Right-click an accessibility object in the Application Objects view and choose Save Object Snapshot. In the dialog that follows, include the screenshot and the object names, and select Open Snapshot in Browser After Saving. In the viewer's graphical representation of the Android UI, click the wanted object. Then right-click the selected item in the hierarchical view on the left and choose Copy real name.

"Copy real name menu item"

For subsequent object snapshots, simply leave Open Snapshot in Browser After Saving unchecked. The viewer automatically updates its content when the object snapshot output file changes on disk.

Note: Make sure to refresh the Application Objects view every time the screen content changes.

Here is an example script snippet that presses the Home button, opens the Settings app, and so on. To emphasise the differences in the object names, multi-property names are shown, but abbreviated.

tapObject(waitForObject("{... description='Apps' type='Clickable' ...}"))
tapObject(waitForObject("{... text='Settings' type='Clickable' ...}"))
tapObject(waitForObject("{... text='Accounts' type='AccessibleLabel' ...}"))
tapObject(waitForObject("{... text='Add account' type='AccessibleLabel' ...}"))
tapObject(waitForObject("{... text='Exchange' type='AccessibleLabel' ...}"))
type(waitForObject("{... type='Editable' ...}"), "")

Only one Android instrumentation can access the Android UIAutomation framework at a time. If more than one app is to be accessed by Squish test scripts, all but one should be started using the --no-ui-automation launcher option, for example:
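A minimal sketch of what this might look like. The app names are made up, and the exact way to pass launcher options to startApplication can differ between Squish versions, so check the documentation for your setup; a stub stands in for Squish's startApplication so the snippet is self-contained.

```javascript
// Stub standing in for Squish's startApplication(), which returns an
// ApplicationContext; in a real test script this function comes from Squish.
function startApplication(args) {
    return { args: args };
}

// The first app keeps access to the UIAutomation framework ...
var mainCtx = startApplication("com.example.mainapp");
// ... every further app gives it up via --no-ui-automation.
var otherCtx = startApplication("--no-ui-automation com.example.otherapp");
```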


How to Use the nativeObject Property

The nativeObject property provides access to the methods and properties of a Java object. For example, to change the text of an Android Button, we would first obtain a reference to the button object, and then call setText:

button = waitForObject(":Okay_Button")
button.nativeObject.setText("Cancel")

var button = waitForObject(":Okay_Button");
button.nativeObject.setText("Cancel");

my $button = waitForObject(":Okay_Button");
$button->nativeObject->setText("Cancel");

button = waitForObject(":Okay_Button")
button.nativeObject.setText("Cancel")

set button [waitForObject ":Okay_Button"]
invoke [property get $button nativeObject] setText "Cancel"

Here is another example that writes the method names of a Java object (in this case a Button widget) to the Squish log.

buttonclass = button.nativeObject.getClass()
methods = buttonclass.getMethods()
for method in methods:
    test.log("Button method: " + method.getName())

var buttonclass = button.nativeObject.getClass();
var methods = buttonclass.getMethods();
for (var i = 0; i < methods.length; ++i)
    test.log("Button method: " + methods[i].getName());

my $buttonclass = $button->nativeObject->getClass();
my @methods = $buttonclass->getMethods();
foreach my $method (@methods) {
    test::log("Button method: " . $method->getName());
}

buttonclass = button.nativeObject.getClass()
methods = buttonclass.getMethods()
methods.each do |method|
    Test.log("Button method: " + method.getName())
end

set buttonclass [invoke [property get $button nativeObject] getClass]
set methods [invoke $buttonclass getMethods]
foreach method $methods {
    set name [invoke $method getName]
    test log "Button method: $name"
}

Finally, here is an example of accessing a static property.

test.log("Value of View.INVISIBLE is " + str(Native.android.view.View.INVISIBLE))

test.log("Value of View.INVISIBLE is " + Native.android.view.View.INVISIBLE);

test::log("Value of View.INVISIBLE is " . Native::android::view::View->INVISIBLE);

How to Use the GestureBuilder class

An instance of this class is returned by readGesture(gesture-file). When the recorded gesture does not fit on the screen of the target device or emulator, however, it can be scaled and/or translated.

It might be useful to get the screen metrics. Here is an example of how to get the screen size, using the script bindings to the Java API. These metrics are in pixels, so they are also converted to millimeters to match the points in the GestureBuilder object.

var activity = findObject(":Your_Activity").nativeObject;
var metrics = activity.getClass().forName("android.util.DisplayMetrics").newInstance();
activity.getWindowManager().getDefaultDisplay().getMetrics(metrics);
var w = metrics.widthPixels * 25.4 / metrics.xdpi;
var h = metrics.heightPixels * 25.4 / metrics.ydpi;
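The pixel-to-millimeter conversion itself is plain arithmetic (1 inch = 25.4 mm) and can be kept in a small helper; the DPI value below is just an illustrative figure.

```javascript
// Convert a pixel distance to millimeters, given the dots-per-inch along
// the axis it is measured on (xdpi or ydpi from DisplayMetrics).
function pxToMm(px, dpi) {
    return px * 25.4 / dpi;
}

// e.g. a 1080 px wide screen at 423 dpi is roughly 65 mm wide
var widthMm = pxToMm(1080, 423);
```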

Suppose the gesture was recorded in portrait mode, but when replaying in landscape mode it is too large and too far towards the bottom-left. Then a GestureBuilder.scale(scaleX, scaleY, originX, originY) and a GestureBuilder.translate(x, y) towards the top-right is a possible solution.

"The effect of a rotation, scale and translate transformation"

For instance, scale it to 3/4 of its size and move it 5 cm to the right and 1 cm upwards. Since the gesture points are in millimeters, the latter is a translation of (50, -10).

When using the Squish IDE, you can use the Console view while stopped at a breakpoint in your script to experiment with gesture transformations.

gesture(waitForObject(":some_object"), readGesture("Gesture_1").scale(0.75).translate(50,-10));

Another approach could be to only scale, with the origin in the top-right corner.

var gst = readGesture("Gesture_1");
gesture(waitForObject(":some_object"), gst.scale(0.75, 0.75, gst.areaWidth, 0));
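To see why the top-right origin keeps the gesture on screen, consider how scaling about an origin moves an individual point. This is a plain-geometry sketch under the usual scale-about-origin convention, not the Squish implementation itself:

```javascript
// Scale point (x, y) by factors (sx, sy) about origin (ox, oy).
function scaleAbout(x, y, sx, sy, ox, oy) {
    return { x: ox + (x - ox) * sx, y: oy + (y - oy) * sy };
}

// With the origin in the top-right corner (areaWidth, 0), that corner
// stays fixed while every other point moves towards it:
var areaWidth = 800;
var corner = scaleAbout(areaWidth, 0, 0.75, 0.75, areaWidth, 0);
// corner is still (800, 0)
var p = scaleAbout(100, 1000, 0.75, 0.75, areaWidth, 0);
// p becomes (800 + (100 - 800) * 0.75, 1000 * 0.75) = (275, 750)
```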

In some cases dynamically created gestures are required, e.g. for more accurate control or because of a dependency on runtime state information. In that case the gesture creation methods of the GestureBuilder class can be used.

Here is an example of a pinch gesture: a two-finger gesture making a curved counter-clockwise movement on an 800x1280 pixel screen in one second.

var tb = new GestureBuilder(800, 1280, GestureBuilder.Pixel);
tb.addStroke( 600, 400 );
tb.curveTo(1000, 500, 300, 300, 300, 200, 400 );
tb.addStroke( 200, 800 );
tb.curveTo(1000, 300, 900, 500, 900, 600, 800);
gesture(waitForObject(":some_object"), tb);
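The two control points in each curveTo call suggest a cubic Bézier segment; under that assumption (a sketch, not confirmed from the API reference), the path of one finger can be previewed with a plain Bézier evaluator:

```javascript
// Evaluate a cubic Bezier at t in [0, 1]; p0 is the stroke's current
// position, c1 and c2 the control points, p3 the end point of curveTo.
function cubicBezier(p0, c1, c2, p3, t) {
    var u = 1 - t;
    return {
        x: u*u*u*p0.x + 3*u*u*t*c1.x + 3*u*t*t*c2.x + t*t*t*p3.x,
        y: u*u*u*p0.y + 3*u*u*t*c1.y + 3*u*t*t*c2.y + t*t*t*p3.y
    };
}

// First finger of the pinch above: from (600, 400) via control points
// (500, 300) and (300, 300) to (200, 400).
var start = { x: 600, y: 400 }, end = { x: 200, y: 400 };
var mid = cubicBezier(start, { x: 500, y: 300 }, { x: 300, y: 300 }, end, 0.5);
// halfway through, the finger is at (400, 325): above the straight line
// between start and end, which gives the curved movement
```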

And here is an example of a zoom gesture: a two-finger gesture in which the fingers move away from each other, also in one second. This time it is written as one statement.

gesture(waitForObject(":some_object"),
        new GestureBuilder(800, 1280, GestureBuilder.Pixel)
           .addStroke( 500, 400 )
           .lineTo(1000, 700, 100 )
           .addStroke( 300, 700 )
           .lineTo(1000, 100, 1000));

In the above two examples, the coordinate values are based on an area size of 800x1280. For different screen sizes, or a different size or position of the widget on which the gesture should be replayed, some calculation is needed to get these values. The following strategy can help to keep the complexity under control when dealing with that.

  • Create a gesture, given the screen dimensions, within the boundary of [-0.5, 0.5] on the x-axis and [-0.5, 0.5] on the y-axis, and with a duration of 1 s.
  • Translate it to the center of the target widget.
  • Scale it by at most the widget size, using the center of the widget as origin.
  • Adjust the duration.
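The translate-then-scale mapping of the first three steps is ordinary affine arithmetic. A small helper, with made-up widget values, shows where a normalized point ends up in screen pixels, including the y-axis flip used further below:

```javascript
// Map a normalized gesture point (x, y in [-0.5, 0.5], positive y-axis
// upwards) to screen pixels: center it on the widget, scale it by the
// smaller widget dimension, and flip the y-axis via a negative vertical
// scale factor.
function toScreen(x, y, widget) {
    var scale = Math.min(widget.width, widget.height);
    var centerX = widget.screenX + widget.width / 2;
    var centerY = widget.screenY + widget.height / 2;
    return { x: centerX + x * scale, y: centerY - y * scale };
}

// Hypothetical 400x300 widget at screen position (100, 200):
var widget = { screenX: 100, screenY: 200, width: 400, height: 300 };
var topOfFigure = toScreen(0, 0.5, widget);
// center is (300, 350), scale is 300, so the point maps to (300, 200)
```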


Here is a listing of this approach, in this case drawing an S-shaped figure.

var activity = findObject(":Your_Activity").nativeObject;
var metrics = activity.getClass().forName("android.util.DisplayMetrics").newInstance();
activity.getWindowManager().getDefaultDisplay().getMetrics(metrics);

var tb = new GestureBuilder(metrics.widthPixels, metrics.heightPixels, GestureBuilder.Pixel)
             .addStroke(0, 0.5)
             .curveTo(500, -0.5, 0.5, -0.5, 0, 0, 0)
             .curveTo(500, 0.5, 0, 0.5, -0.5, 0, -0.5);

var widget = findObject(":Some widget");
var scale = widget.width > widget.height ? widget.height : widget.width;
var centerX = widget.screenX + widget.width/2;
var centerY = widget.screenY + widget.height/2;
tb.translate(centerX, centerY)
  .scale(scale, -scale, centerX, centerY);

This example defines the figure with the positive y-axis pointing upwards. In order not to get the figure upside down, a mirroring in the x-axis is needed. The trick is to use a negative scale factor in the vertical direction.

Keeping the defined gesture within the -0.5 to 0.5 boundary has the advantage that its total size is 1. Thus it can be scaled by the widget size without ending up outside the screen boundaries. Having (0, 0) in the center makes the translation simple: just to the center of the widget.