nandawon musings - Page 2 of 2 - Writings on various subjects

Mobile Apps Testing Considerations

More and more software applications and services are being developed either solely for mobile platforms or in conjunction with traditional web applications.

But first, take note of four key aspects of testing on mobile devices.

First, mobile apps are constrained by the hardware of the device they run on: battery life, memory, storage, processor and screen size.

Second, there is usually some form of interaction between the apps and the device's embedded services (phone configuration, phone dialer, contact list, accelerometer).

Third, mobile devices are restricted to a simple touch-based UI, without the mouse-over, right-click or combined keyboard-and-mouse actions that are commonly available in web-based and desktop applications.

Fourth, mobile apps rely on the device's external resources (i.e. mobile network connectivity, Wi-Fi and GPS) to connect to the outside world, so testing must account for the fact that connectivity is not guaranteed at all times.

All of these play a role when planning the testing of mobile apps, as one or more of these factors will determine the usability and effectiveness of the apps.

1 Device Constraints Testing

1.1 Battery life

Mobile devices primarily run on battery power, so we need to ensure that the app under test (AUT) does not consume more battery than necessary. This can be checked with battery-monitoring utilities, which can be downloaded from the relevant app stores.

1.2 Memory

This type of testing checks how the AUT behaves when there is an insufficient amount of memory (RAM) available on the device. This can be done by writing a simple app which intentionally 'fills up' the memory. The AUT should recognise the limited amount of memory available and quit gracefully or display an appropriate message, rather than starting and then crashing or hanging.
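The expected graceful-failure behaviour can be sketched as follows. This is illustrative only: the function name and the memory threshold are made up for the example, not taken from any real mobile SDK.

```python
def launch_app(available_mb: int, required_mb: int = 50) -> str:
    # On launch, the app checks the memory available to it and exits
    # gracefully with a message, instead of starting and then crashing.
    if available_mb < required_mb:
        return "error: insufficient memory, please free up some RAM"
    return "app started"
```

A low-memory test would then drive the device into the constrained state and assert that the message path, not the crash path, is taken.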

1.3 Storage

Some apps (such as games) write app-specific data to either the internal memory or an external micro-SD card, and some apps offer the user the option to install the app itself on either internal memory or a micro-SD card. Check that an appropriate warning is displayed when the app can't fully write its data to either the internal or external memory, rather than losing the data when the memory card is full.

If an app writes data to the SD card to store user configs, e.g. game stats, then start the app without the SD card and check whether the user configs are lost forever or restored when the SD card is put back in.
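The "warn instead of losing data" behaviour boils down to checking free space before writing. A minimal sketch, using Python's standard library (the helper name and reserve threshold are illustrative):

```python
import os
import shutil

def safe_write(path: str, data: bytes, reserve_bytes: int = 1_000_000) -> None:
    # Check free space on the target volume first, so the user can be
    # warned instead of silently losing data when the card is full.
    free = shutil.disk_usage(os.path.dirname(path) or ".").free
    if free < len(data) + reserve_bytes:
        raise OSError("storage full: warn the user before writing")
    with open(path, "wb") as f:
        f.write(data)
```

A storage test fills the card to near capacity and asserts that the warning is raised rather than a truncated file being written.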

1.4 Processor

Some apps require a certain processing speed in order to run smoothly. Check that the app runs with acceptable performance on a device with the minimum supported processor specification.

Interoperability between embedded services

Typically, web-based applications running on desktops don't suffer from interruptions in the way mobile apps do; on a mobile device, apps constantly hand off to, and receive input from, other apps and embedded services.

Some examples of app interoperability include:

  • Facebook app calling the Gallery app when uploading photos
  • From the Facebook app, capturing an image using the Camera app and returning to Facebook
  • Using Facebook credentials to log in to the Instagram app automatically
  • From the Contacts app, making a call directly with the Phone app
  • From the Phone app, searching for contacts in the Contacts app

Functional Testing

Mobile UI elements

Depending on the OS, there are different types of UI elements which are used in an app, and these need to be tested, e.g.

  • Buttons
  • Search Field
  • Secure Text field
  • Combo box
  • Date Picker
  • Text field
  • Popup button
  • Checkbox
  • Scroll bar
  • List items

App UI Navigation

User interactions
  • Swipe up/down/left/right
  • Pinch zoom (for images) or Volume control
  • Multi-touch (image rotation, zoom) especially in Tablets
  • Change in orientation

Test Considerations

  • How the app navigates from one screen to the next
  • How to go back to the previous screen
  • How to return to the home screen of the app
  • Check for consistent behaviour between screens, i.e. consistent positions of OK, Cancel or Back buttons.
  • When changing orientation, does the screen redraw correctly, with all UI elements in the correct place, albeit in a different orientation?

Device Configurations

  • Date display format (DD/MM/YY vs MM/DD/YY)
  • Currency symbol
  • Units of measure (km vs miles, meters vs ft)
  • Left-to-right vs right-to-left (Arabic) text display

Change the device configuration and ensure that the changes are reflected correctly in the app. In some cases, the app itself may have its own settings with regard to units; in this case, the app setting should override the device settings.
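The override rule above can be captured in a one-line sketch, which makes it easy to assert in a test (the function name is illustrative):

```python
from typing import Optional

def effective_setting(device_value: str, app_value: Optional[str] = None) -> str:
    # The app-level setting, when present, overrides the device-level one.
    return app_value if app_value is not None else device_value
```

For example, with the device set to kilometres and no app setting, the app should display kilometres; once the user picks miles inside the app, miles win.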

External resources

If your application depends on network access, SMS, Bluetooth, or GPS, then you should test what happens when the resource or resources are not available.

For example, if your application uses the network, it can notify the user if access is unavailable, disable network-related features, or do both. For GPS, it can switch to IP-based location awareness. It can also wait for Wi-Fi access before doing large data transfers, since Wi-Fi transfers use less battery than transfers over 3G or EDGE.
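That fallback logic is easy to express as a small, testable decision function. This is a sketch of the behaviour described above; the threshold and messages are assumptions made for the example:

```python
LARGE_TRANSFER_BYTES = 5_000_000  # illustrative threshold for a "large" transfer

def plan_transfer(network_up: bool, wifi_up: bool, size_bytes: int) -> str:
    # Decide what to do with a pending transfer given current connectivity.
    if not network_up:
        return "offline: notify user and disable network features"
    if size_bytes > LARGE_TRANSFER_BYTES and not wifi_up:
        return "defer: wait for Wi-Fi before large transfer"
    return "transfer now"
```

Testing then consists of toggling each resource off (airplane mode, Wi-Fi off, GPS off) and asserting the app takes the corresponding branch instead of hanging or crashing.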

Interruptions

Incoming calls or SMS messages, alarms and notifications should not cause the app to crash or behave erratically.
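The usual expectation is that the app saves its state on interruption and restores it on resume. A minimal sketch of that save/restore contract (class and method names are illustrative, not from any platform API):

```python
class AppSession:
    # Sketch: the app pauses and saves state on interruption,
    # then restores it on resume, instead of crashing or losing data.
    def __init__(self) -> None:
        self.state = "editing"
        self._saved = None

    def on_interruption(self, kind: str) -> None:
        # kind might be "incoming call", "SMS", "alarm", "notification"
        self._saved = self.state
        self.state = "paused"

    def on_resume(self) -> None:
        self.state = self._saved
```

An interruption test triggers a call or alarm mid-task and asserts the app returns to exactly the state the user left it in.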

Security

Most apps these days require network access to data or services hosted online, which means the app needs to store user information (e.g. username and password) to connect to the service, e.g. Facebook, Gmail, etc. This login info is stored locally on the device, so it must be stored securely enough that no other app or user can retrieve it, for example by connecting the phone to a laptop.
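One common safeguard, for credentials that only need to be verified locally, is to store a salted hash rather than the plaintext. This is a sketch using Python's standard library; secrets that must be sent to a server belong in the platform's keystore instead:

```python
import hashlib
import hmac
import os

def hash_credential(password: str):
    # Store a random salt plus a PBKDF2 hash, never the plaintext,
    # so the secret can't be read back even if the file is copied off the device.
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_credential(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(candidate, digest)
```

A security test would then inspect the app's data directory (via USB or a backup) and confirm that no plaintext credentials appear anywhere.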

What is Good Quality Software?

As software engineers we all strive to build good quality software, but along the way things don't go as planned, and we inevitably end up releasing software that is far from our original intended quality goal.

But what is good quality software (GQS)? The answer to this question seems so obvious that there appears to be little point in writing a blog post about it, but is the answer really as simple as we think it is?

On the surface, the answer seems obvious. GQS is software that just works. But is that all? As professional software testers, we need a more elaborate definition of GQS, so that we can measure the software we are testing against some quality benchmark we have set, and say with certainty whether the software has indeed achieved an acceptable level of quality.

From my experience, GQS is made up of not just one but several measurements. But let's start with the most obvious one, which even we testers use in the simplest case: GQS is software which is free from bugs.

To build a piece of software which is completely free from bugs is a utopian ideal; in the real world, the best we can do is test the software to the best of our abilities within the time we are given. As testers, this is where 90% of our working lives is spent: finding bugs and retesting them once they are fixed by the developers. But even if we were given infinite time and infinite resources to test a piece of software and prove once and for all that it is 100% free of bugs, would that make it good quality software?

The answer, as you might expect, is not quite. So, what else? The second measurement for GQS is: does it do what the user wants? This also seems an obvious question, but many seasoned developers and testers will testify to spending months (or sometimes years) developing a system, testing it to the hilt and deploying it to production, only to find that the system doesn't do what the user wanted. A perfectly working piece of software which doesn't do what the user wants is not GQS.

Finally, the last measure of GQS is somewhat subjective: how easy and 'enjoyable' the software is to use. Ease of use is difficult to quantify, since what seems easy to one user might not be so easy to another; it depends on familiarity with concepts and on past experience. For example, someone who has used Microsoft Windows all his life might find Apple's Mac OS X difficult to use because the whole concept of the desktop interface is very different, and yet Mac OS X is generally regarded as the most user-friendly of the desktop interfaces. So, taking into consideration the user's past experience with similar interfaces, the application still has to be easy to use, even for someone who has used a similar application before.

Do you know where to poke?


Whenever you tell people that you're a software tester, they always think it's an easy job: a job anyone can do with little or no training. This couldn't be further from the truth, but the perception persists both inside and outside the IT industry.

I remember a story that I heard a long time ago and I like to associate it with the kind of work I do as a tester. The story goes something like this…

A guy went to a video repair shop (remember, I heard this story over 20 years ago, before the invention of DVD players, when people only had Video Cassette Recorders) to fix his broken VCR. As he went into the shop, the doorbell chimed and he saw that the shop was full of old TVs, VCRs and radios, all piled up haphazardly on the shelves and on the floor. In the far corner of the shop was a work counter, where an old man was fixing a VCR.

The guy went up to the old man and said, ‘Um, can you take a look at my machine? It stopped working the other day.’

The old man said, 'Sure, put it on the counter', and started unscrewing the cover. Once the cover was off, he looked into the machine and then, with a screwdriver, started poking here and poking there, muttering to himself, 'Yup', 'Ah…', 'Hmm…'. After a couple of minutes of this, the old man looked up and said to the guy, 'I think I've fixed it'. He plugged the machine into the mains socket, and sure enough it whirred into life.

The guy was really happy that the old man was able to fix it so quickly. So he said, ‘Thank you so much. How much do I owe you for this?’ and the old man replied, ‘Fifty pounds’.

The guy was shocked. 'Fifty quid? No way am I going to pay you that. I saw what you did. All you did was poke your screwdriver here and there, and you didn't even change a thing.'

And the old man replied, 'Yes, that's true. I charge five pounds for poking and forty-five pounds for knowing where to poke.'

Like the old man, a good tester has the knack of knowing where the weaknesses might lie in a piece of software after using it for only a short time, and can tune the testing effort to uncover the most important bugs. To an outsider, a tester's job appears trivial, something any person could do, but the difference is that a good tester always knows where to poke. Do you?

Testing without Documentation

Any seasoned tester will have at least one story about having to test a system or a piece of software without any documentation, and those who have done so will also tell you about the many problems they faced as a consequence.

But testing a piece of software without documentation isn't something that's going to go away, regardless of how loudly we testers complain about it. The reality is that this has happened in the past, it is happening now and it will, undoubtedly, continue to happen in the future, especially with the current trend towards Agile development, where the emphasis is on closer and more frequent communication between team members and less on documentation.

So, what do we do when faced with having to test an application with nothing to compare it against? I would suggest a few pointers:

  • Use exploratory testing to discover what the application does, treating it purely as a black box.
  • Record the behaviour you observe, in as much detail as possible. This means:
    • running end-to-end scenarios of the application in the first instance,
    • for each interface/dialog, trying out all controls (e.g. text boxes, check boxes, lists, drop-down boxes, radio buttons) and noting their behaviour.
  • Make a list of all input data types/classes, how they are processed by the system and the outputs generated.

Once you have finished exploring the system and noting its behaviour, present your findings to the developers, analysts and stakeholders (i.e. end-users) and determine whether the observed behaviour is what was designed or requested by the users.

Any discrepancies identified between the system behaviour and what the user expected or what the analyst designed can then be deemed defects.

Testing in the Rational Unified Process Environment

This article describes how the Rational Unified Process (RUP) iterative development methodology differs from the traditional linear development approach (e.g. Waterfall or V-Model). The author assumes that the reader is already familiar with the principal ideas of RUP but may still be a little confused as to how RUP differs from the traditional methodologies.

This article hopes to clear up this confusion with an example project that highlights the differences. We will use the ubiquitous ATM system as an example.

The main benefits and differences of the Iterative approach are the following:

1- Testing starts early (right from the Elaboration Phase)
2- More emphasis is placed on non-functional testing at the expense of functional testing
3- More chance of meeting Return on Investment (ROI) early
4- Regression testing begins much earlier in the development cycle
5- Production-like environments must be available early to support non-functional testing

The iterative approach means that all stages of testing, e.g. unit, integration, system (functional & non-functional) and end-to-end testing, are performed several times during a project life cycle.

Let’s assume that we are to build the ATM system, which consists of various sub-systems integrated so that a user is able to withdraw cash from an ATM.

In the traditional approach, the requirements are gathered and baselined, analysis & design of the system is carried out, and during the development stage each of the system components is built separately, e.g. input validation, authentication, authorisation & settlement, withdrawal, and physical dispensing of the cash. Each of these components (or sub-systems) of the overall system is then tested separately, using unit and integration testing (in the small). The sub-systems are then integrated, full integration (in the large) is tested, followed by functional system testing and finally user acceptance testing.

Anyone who has gone through such a project will be familiar with the problems: changing requirements, integration problems between components and sub-systems (due to misunderstandings of interface specifications), and being unable to perform end-to-end system testing until very late in the project life cycle, to name just a few.

But with RUP, the approach taken is incremental development of the entire system. A RUP project cycle typically consists of four major phases: Inception, Elaboration, Construction and Transition. We will concentrate on the last three phases, as the majority of the actual work is carried out in them. Within each phase, iterations are defined with specific goals, with each iteration expanding on the result of the previous one.

So, during the Elaboration Phase, while the analysis & design models are put into shape, several design ideas are developed and tested for feasibility of implementation, in order to identify and de-risk design and architectural concepts early; the basic skeletal designs and architectures that survive are taken forward to the Construction Phase.

This gives the opportunity to try out the new design ideas by building prototypes and proof-of-concept (PoC) models, as well as to carry out feasibility studies.

These are usually built using RAD techniques, and testing these prototypes and PoCs requires a different approach from the norm. In other words, the aim of testing is not to find defects in the prototypes, but to provide the system architects, analysts and designers with technical information which will help them make sound decisions on how the system should be implemented.

Example 1: Building a prototype of a web-based front-end to an existing legacy system and running basic stress tests found a bottle-neck at the system interface level which only permitted a maximum of 30 concurrent sessions to the legacy system. This resulted in change to the design of the system.

Example 2: In building a B2B system, the system architects were weighing the options of using persistent (database) and non-persistent (memory) queues for handling high-volume transactions from external systems which are to be processed within an agreed SLA. There was a concern that the persistent queue may not be fast enough to handle the amount of traffic envisaged. A prototype was built using both types of queues and load tests were carried out and performance of the queues measured. The tests showed that the persistent queue was fast enough to meet the required load with spare capacity.

During the Construction Phase (which is where the main work is done), the skeleton of the system is developed in the first iteration by building the main flows of a set of use cases which, when strung together, provide the end-to-end functionality of the system. These are then put through the test cycle, from integration through to functional and non-functional system testing. Hence there is a need for a production-like environment at this stage of the project (to enable volume and performance testing).

The focus of the testing will be on end-to-end testing of a given set of use cases, to ensure that the main flows of the use cases function correctly, with stubs and drivers put in place for non-essential or alternate flows.

For example, the basic flow of one use case might state that an actor puts the bank card into the ATM and enters the PIN; the PIN gets validated; if the PIN is valid, withdrawal of cash is permitted; and if the withdrawal amount is within the daily or personal limit, the cash is dispensed, otherwise an error message is output.
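That basic flow can be sketched as a single decision function, which is how it would be exercised by the early end-to-end tests (function name, messages and limits are illustrative):

```python
def handle_withdrawal(entered_pin: str, card_pin: str,
                      amount: int, daily_limit: int,
                      withdrawn_today: int) -> str:
    # Basic flow of the use case: validate PIN, then check the limit,
    # then dispense; alternate flows are added in later iterations.
    if entered_pin != card_pin:
        return "error: invalid PIN"
    if withdrawn_today + amount > daily_limit:
        return "error: amount exceeds daily limit"
    return "dispense cash"
```

Iteration-one tests cover exactly these three outcomes; the exception-handling flows (stolen cards, retries, etc.) arrive in later iterations.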

So, in the first iteration, the system that contains the basic functionality of the use case is built, i.e. the hardware (dev and test environments), software, links to the bank's networks, etc. need to be in place. If links to external networks are not available, stubs and drivers are put in their place.
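A stub in this sense is just a stand-in that returns canned responses so the end-to-end flow can be exercised before the real link exists. A minimal sketch (class name and canned values are assumptions for the example):

```python
class BankNetworkStub:
    # Stands in for the real bank-network link during early iterations,
    # returning canned responses so end-to-end flows can still be tested.
    def validate_pin(self, card_number: str, pin: str) -> bool:
        return pin == "1234"  # canned approval for the test card

    def authorise(self, card_number: str, amount: int) -> bool:
        return amount <= 250  # canned daily limit
```

In a later iteration the stub is swapped for the real network client with the same interface, so the tests written against it carry forward as regression tests.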

This stage is the most challenging aspect of a RUP project (and also the point where RUP projects usually fail and revert to Waterfall): procuring the hardware needed for the infrastructure and constructing the interfaces between the various systems to achieve end-to-end functionality during the first iteration of the Construction Phase. The main benefit of this approach is that the most challenging part of the project is tackled head-on, right at the beginning of the development phase, thereby minimising the risk to the project.

In the second and subsequent iterations, more features are added to the system, i.e. the alternate flows in the use case to handle exceptions (e.g. incorrect PINs, lost or stolen cards, cards not permitted for use at this ATM, etc.) and other business requirements (e.g. withdrawal is permitted many times as long as the daily total withdrawn stays below the user's daily limit). Furthermore, the stubs and drivers are replaced with actual code or peripheral systems.

Testing in the second and subsequent iterations involves running regression tests of the test cases from previous iterations as well as new tests for the current iteration. Hence regression testing plays a key role in a RUP project, as the system's functions will be tested several times during the life of the development. Being able to automate the regression tests is therefore an important factor in meeting the targets.

And in each iteration, the full development life cycle (analysis, design, build, test) is carried out. So, in RUP, there is no longer the notion of distinct phases such as a development phase, unit test phase, system test phase and acceptance test phase: all of these so-called phases are carried out within a single iteration.

The benefit of this approach is that by the time the final iteration arrives, the system will have been regression tested end-to-end several times, and there is high confidence that it will work reliably.