Alright, you’re a mobile software test engineer and you want to get some test devices in house. But your boss holds the purse strings pretty tight, as bosses sometimes do. That’s fine. It’s up to you to convince the powers that be that you need them. Let’s go into the meeting prepared with some counterpoints against the “Can’t you just test on simulators?” argument.
An emulator is a virtual device that tries to replicate not just the appearance and behaviour of the software under test but also the underlying hardware and OS.
A simulator is a virtual device that tries to replicate only the appearance and behaviour of the application as if it were running on a real device.
For the most part, when testing Android apps in a virtual environment you’re going to be using an emulator; when testing iOS apps in a virtual environment, you’re going to be using a simulator.
Let me also say that simulators and emulators have their place in testing. There are absolutely some tests that you want to run on them, depending on where you are in the dev cycle. Say you’ve got a feature in development with simple acceptance criteria: GIVEN the user has opened the app, WHEN they click on the ABC button, THEN the user should be taken to the XYZ screen. Since this functionality definition is simple and not platform dependent, we’d have a high degree of confidence that validating it in an emulator would suffice. You want to run broad-stroke tests, high-level functionality, and common gestures in emulators/simulators. But where do they fall short?
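As a rough sketch of why this kind of check is emulator-friendly: it can live in a cross-platform UI test that only talks to a driver interface, so the same code runs against an emulator or real hardware. Here’s a minimal, hedged example shaped like an Appium-style test — the driver is injected, and the button ID and screen name are made-up placeholders, not from any real app:

```python
def user_reaches_xyz_screen(driver) -> bool:
    """GIVEN the user has opened the app (an active driver session),
    WHEN they click on the ABC button,
    THEN they should be taken to the XYZ screen."""
    # "com.example:id/abc_button" and "XyzActivity" are hypothetical names;
    # with a real Appium session, driver would be the Appium webdriver.
    driver.find_element("id", "com.example:id/abc_button").click()
    # On Android, the current activity tells us which screen we landed on.
    return driver.current_activity.endswith("XyzActivity")
```

Because nothing here depends on the platform underneath the driver session, pointing it at an emulator is a perfectly reasonable way to validate this acceptance criterion.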
No matter how accurate the virtual representation is, it’s not the same hardware your application was designed to run on, and your application’s performance will vary between real device hardware and the hardware being mimicked. And that’s not to mention performance testing scenarios that aren’t reflected at all in a virtual environment. What happens when your app puts too much strain on battery consumption? Or when your application is running away with the CPU and causing the temperature to spike?
There are other testing scenarios where hardware differences make workarounds difficult, if not impossible. Maybe your app makes use of 3D Touch or Force Touch, uses Touch ID, pairs with a Bluetooth device, or just uses the front-facing camera. There’s likely a workaround for any of these in the testing arena, but they come at a development cost. Other limitations come up when testing different types of connectivity. GPS locations can be mocked, but it’s much harder to reproduce specific events like accelerating, signal loss, or real driving conditions. Likewise for NFC, casting to a Chromebox or other device, using AirPlay, etc. Accessibility testing (TalkBack and VoiceOver) is also much more efficient on real devices, as emulators require an additional APK install.
That brings me to network conditions. Yes, yes. I know. You can simulate a chatty network, dropped packets, reduced speeds. Let me share some speed tests I ran on an Android 7.0 Samsung Galaxy S8, comparing a real 3G connection to one proxied through Charles with the connection throttled down to simulate 3G network speeds.[1]
I did three benchmark speed tests with my device set to use only a 3G connection:
Average download speed: 8.30 Mbps
Average upload speed: 2.15 Mbps
Then I set my phone to use Wi-Fi, connected to a proxy (Charles) on my laptop:
Average download speed: 52.50 Mbps
Average upload speed: 30.57 Mbps
Then I set my phone to use Wi-Fi, connected to the proxy on my laptop, and throttled the connection down to what the tool says is 3G speed:
Average download speed: 40.30 Mbps
Average upload speed: 0.89 Mbps
OK, the upload speed is drastically reduced, but it’s only about 40% of an actual 3G upload speed. And what’s up with the download? Again, the speed is reduced, but it’s still nearly five times faster than real 3G.
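To put numbers on that gap, here’s the quick arithmetic on the averages measured above (the ratios are the only thing this sketch adds; the measurements are from my tests):

```python
# Measured averages from the speed tests above, in Mbps.
real_3g = {"down": 8.30, "up": 2.15}
throttled_proxy = {"down": 40.30, "up": 0.89}

# Throttled upload as a fraction of real 3G upload (~41%, i.e. "about 40%").
up_ratio = throttled_proxy["up"] / real_3g["up"]

# Throttled download relative to real 3G download (~4.9x faster, not slower).
down_ratio = throttled_proxy["down"] / real_3g["down"]

print(f"upload: {up_ratio:.0%} of real 3G; download: {down_ratio:.1f}x real 3G")
# prints: upload: 41% of real 3G; download: 4.9x real 3G
```

So the throttled proxy undershoots real 3G on upload and wildly overshoots it on download — it resembles 3G in neither direction.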
Now, for most features this is probably going to be fine. More often than not, in my experience, a feature has acceptance criteria that say something like “feature should work on slower network speeds”, so proxying in this way would, to some degree, be acceptable testing conditions. But if you want to test at 3G speeds, you have to walk outside, smell some fresh air, and test on actual 3G speeds.
And then there is design. I can’t tell you the number of times that a designer has pinged me and asked, “Hey, have you got the new Motoroid Pixie SG 9000?” From the design team’s perspective, it’s absolutely essential to have real devices for their growth and marketing, creative, and product design activities.
Without access to test devices, designers are forced to use their own personal devices. This effectively means that they are designing for a single platform and have to find workarounds when designing apps for other platforms, at the expense of speed and efficiency.
One of the best reasons I can think of for having a real device is collaboration. When you’ve found a bug while testing, it’s nice to be able to just walk over to the developer’s desk and show them the behaviour that you’re seeing. It usually eliminates the response “Works on my machine.”
The takeaway here is that we’re not designing and developing applications to be run on simulators. We’ve developed these apps to run on real devices, in the hands of real users.
[1] I understand that the speed tests were not a scientific experiment: I have no idea what else was happening on the Wi-Fi at the time, or about background processes on the device, caching between tests, etc. But I still think the point is valid. ↩