Testing a native mobile application without any mobile device… at what cost?


This article follows up on my previous one, about the cost of testing mobile websites without any mobile device.

Fewer counterpart tools are covered here, since testing a native application de facto excludes tools such as User-Agent switchers or the browser's development toolset.

The time-saving argument for using an alternative tool rather than a mobile handset comes back stronger than ever, especially on the iOS platform with its associated IDE, Xcode: launching the latest build through the simulator, for example, avoids the certificate issues involved in installing it on a real device. If you lack the minimum knowledge – though you can acquire it online with Apple's help – those issues can take a long time to solve; in my own experience they required the contribution of fellow developers who had run into the same trouble. I used Xcode 7 for iOS 8 and earlier versions only.

User experience on a social media iOS application

On the other hand, a native application's User Experience has to be designed with a different mindset from the website version, and testing it needs conditions as close as possible to the end user's. I personally think it is too risky to establish reliable results based upon simulation or emulation tools, since there is far more at stake than applying 'Passed' or 'Failed' to each acceptance criterion: getting the most objective and honest cognitive feedback is key here – which is not easy, as 'cognitive' refers to a very subjective concept.

[As a personal opinion: besides greatly enjoying testing native applications, I find this to be a strong argument for manual testing, as it highlights the advantages and efficiency of manual testers over the machine. It brings the software closer to its end user by giving more weight to the cognitive evaluation than to the automated 'Passed' or 'Failed' check results.]

All examples mentioned in this article relate to the iOS platform, since I used to work 90% of the time on it within the mobile development team at Net-A-Porter.com.



The simulator

This tool is integrated within the Integrated Development Environment, which is platform-specific – Xcode is the iOS platform's IDE, for example. You can run multiple OS versions on it, provided that you have installed the associated Software Development Kit (SDK) version.
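For example, the simulators installed alongside each SDK can be listed and driven with Xcode's `simctl` command-line tool (shipped with Xcode since version 6); the device name, app path and bundle identifier below are hypothetical:

```shell
# List the simulator runtimes and devices installed with each SDK:
xcrun simctl list runtimes
xcrun simctl list devices

# Boot a given simulator, then install and launch a build on it
# ("iPhone 6", the .app path and the bundle id are hypothetical):
xcrun simctl boot "iPhone 6"
xcrun simctl install booted build/MyApp.app
xcrun simctl launch booted com.example.MyApp
```

This is handy when you need to cover an OS version for which no physical handset is available in the team.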

I often used the simulator for functional testing since its behaviour was overall satisfactory: it allowed me to quickly verify bug fixes and some features without using the handset.

Setting aside ergonomics and integration with the phone's hardware resources – two areas where the simulator is not very reliable, although I did discover memory leaks thanks to it – it is a HUGE advantage to be able to cover the OS configurations you don't own physically, and to work around device unavailability when handsets are shared across the whole team.

This problem must be widespread: testers need the physical handsets for their tests to be reliable, but other team members need them too. Developers require them either to test a new feature and its ergonomics, or to test a bug fix that cannot be reproduced through the simulator. And what about designers and User Experience people, who work on the human interaction with the phone and the image rendering quality? Yes, they also need the same hardware, which means negotiating your schedule for a specific mobile with a specific OS version, and can lead to long waiting times. It then turns into a bottleneck and an extra risk to the project, which the simulator can help you mitigate.

In the team we were at least twelve people potentially needing the devices: four testers, six developers and two designers. Sometimes I could not quickly get my hands on the device I needed, even though it was mandatory for testing – for example, for a bug fix that could not be reproduced on the simulator. The project's delivery schedule could be slightly impacted, which meant we sometimes had to reprioritise the tests left to execute in order to meet the deadlines.

Just like the phone when plugged into the computer, the simulator can be combined with the IDE's debugging mode, which offers the following possibilities:


  • insert breakpoints in the code matching precise steps of the functional scenario; you can then manually modify pre-conditions or data between two steps of a scenario to test edge cases,
  • visualise HTTP requests made to the web services, which can highlight missing or incorrect requests,
  • view HTTP calls made to a website in the case of a hybrid application using web views,
  • get and analyse application logs.
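As a sketch of the first point, the same breakpoint manipulation can be done from the lldb console Xcode drops into when execution pauses; `breakpoint set`, `expression` and `continue` are standard lldb commands, while the file, line and variable names below are hypothetical:

```
# Pause at a precise step of the functional scenario
# (file name and line number are hypothetical):
breakpoint set --file BasketViewController.swift --line 42

# Once stopped, modify a pre-condition before resuming,
# e.g. empty the basket to test an edge case:
expression basket.items = []

# Resume the scenario with the modified data:
continue
```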


Although those operations may also be performed with the device itself, the simulator is almost its equal on that front. In terms of variety, an analogy could be made between the IDE tools and the browser development tools used when testing a mobile website, although the latter lack the additional tools to test the interaction between the application and the mobile hardware resources.


Advantages:

  • Reliable for most functional testing,
  • Gives fast feedback on features or bug fixes by side-stepping the unavailability of physical devices,
  • The IDE tools give plenty of information for analysis and debugging.


Drawbacks:

  • Debug mode only: install, upgrade and uninstall scenarios are not representative of the customer's interaction with the AppStore,
  • No 'tap' possible, hence no way to test the ergonomics,
  • Not reliable for stress tests linked to hardware: crashes following excessive memory usage, low storage space or a low battery – see the low-battery notification picture.



The emulator

I have never tested a native application within an emulator, but I would be curious to hear your feedback.


The real device

There is no need here to convince you of how efficient the use of a real mobile device is: the point is rather to remain vigilant when testing on the simulator, as I observed important behavioural differences compared with the handset.

One of the main differences was that the application under test mostly ran in 'Debug' mode, whereas the application on the AppStore runs in 'Release' mode: the different compilation settings between those two modes explained the occurrence of bugs in 'Release' mode only. It is possible to run 'Release' mode on the simulator, but the bugs encountered in that mode made the risk of relying on it too high.

To address this issue, which we unfortunately spotted once only after going live, we performed the following tests in 'Release' mode:

  • Upgrade, first installation, and uninstallation followed by reinstallation scenarios through the TestFlight application, which simulated the behaviour of the AppStore – the customer's main road to the native application,
  • Regression tests on critical features.
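As a minimal sketch, the Release build itself can be produced from the command line with `xcodebuild` (a real Xcode tool); the scheme name and archive path below are hypothetical:

```shell
# Build with the Release configuration (the one used for AppStore
# submissions) instead of the default Debug configuration:
xcodebuild -scheme MyApp -configuration Release build

# Archive the build for distribution through TestFlight:
xcodebuild -scheme MyApp -configuration Release \
    archive -archivePath build/MyApp.xcarchive
```

Testing the archived artefact rather than a Debug build is what catches the compilation-dependent bugs mentioned above.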


Testing on a real mobile is fun, and it gives you a good excuse to do some exercise, especially with network connectivity tests and their impact upon data loading. To my knowledge, two cases cannot be tested while remaining seated, despite all the tools available nowadays:

  • Degradation or gradual improvement of the signal,
  • Disconnection and automatic reconnection to new relay masts while using the application and moving fast at the same time (for example on the French high-speed train, the TGV).

While we could test the latter scenario on the way to work, the former took us outside the office, to the shopping mall next door: we spotted different locations with different levels of connectivity for our SIM operator, and walked back and forth between those areas while performing the data-loading scenarios.


Advantages:

  • You get the customer's 'real feeling' of the ergonomics,
  • The ecosystem is perfectly representative, which brings forward the issues due to the phone's constraints: transition smoothness, loading speed, buttons compatible with fat fingers, etc.,
  • You can perform stress tests: limited memory, memory pressure from other applications, full storage space, low battery levels, etc.


Drawbacks:

  • One device is needed per OS/configuration tested,
  • Requires efficient, centralised management of the handsets to share them amongst the team members,
  • In my own experience, difficulties testing hybrid applications – a mix of web views and native technology: we had issues with iOS certificates that required IT support actions to connect to the website test environment through the native application.



I hope this first picture of the available tools for testing native mobile applications has shown you limitations you did not know about until now, and will help drive your future decisions regarding the relevance of such tools.

By the way, did these two articles shed some light – the first one being about the cost of testing mobile websites without any mobile device? Were you already using those tools a lot? Had you already chosen to avoid those traps by investing in a wide range of devices? We would be glad to learn more about your own experience through your comments.
