Test automation of different system types 

 April 5, 2021

Different system types place different demands on successful test automation. Among other things, they determine the automation approach and also limit the possible tools.

Desktop applications

For a long time, pure desktop applications were the only type of application that could be automated. They usually consist of software alone and do not communicate with other systems. They therefore have to consider hardly any boundary conditions or interfaces, and the developers can concentrate entirely on the purpose of the software itself. In addition to component-level developer tests focused on the application's own functionality, automated testing via the user interface makes sense. Tools that support the technology of the application or its user interface can be used for this purpose. In addition, it may be necessary to access or create test data via the file system (and any file formats used by the application). Further levels of integration are not necessary, at least at the application level; the internal components are, of course, still integrated with each other according to an integration strategy. Sometimes it is helpful to include auxiliary functionality in the application for testing purposes, for example to support test preparation or the checking of results.
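
As a rough sketch of GUI-level automation for a desktop application, the following Python example drives an application through its user interface with the pywinauto library; the application path and the names of the window and controls are purely illustrative assumptions and would have to be adapted to the concrete application under test.

    from pywinauto.application import Application

    # Start the application under test (path is an assumption for illustration).
    app = Application(backend="uia").start(r"C:\Program Files\ExampleApp\example.exe")

    # Access the main window and interact with named controls (names assumed).
    main = app.window(title="ExampleApp")
    main.child_window(title="File name", control_type="Edit").type_keys("report.txt")
    main.child_window(title="Open", control_type="Button").click_input()

    # Verify that the expected result appears in the UI.
    assert main.child_window(title="Status", control_type="Text").window_text() == "Loaded"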

With this type of software system, automation is much simpler because dependencies on other software systems are limited - in all likelihood, a limited set of interfaces will therefore be sufficient to automate a functional test.

The fact that this type of system is inherently designed for only one concurrent user also avoids a number of automation problems - tests with multiple simultaneous users, for example, are unnecessary. Testing several parallel instances of the application, however, should not be skipped.

Client-server systems

The first step up in complexity for test automation came with the emergence of client-server systems. Here, data is held centrally on a server, which can also be responsible for significant parts of the system's functionality. A distinction is made between "fat clients" and "thin clients": as the name suggests, a fat client contains significantly more of the functionality, while a thin client serves mainly as a display and input screen and the functionality remains largely on the server.

An important decision for the automated testing of such systems is whether the user interface should be included, or whether manual testing is more efficient for it. Especially in the case of thin clients, it may be sufficient to let the automation work directly against the client interfaces and thus dispense with automated GUI tests. One advantage of such an approach is the generally much higher execution speed of the automated test cases. For fat clients, such an approach is usually not appropriate, since a large part of the functionality lies in the client and would therefore remain untested.
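
A minimal sketch of such an approach below the GUI, assuming the server exposes an HTTP interface; the base URL, endpoint, payload and expected response are illustrative assumptions.

    import requests

    BASE_URL = "https://test-server.example.com"  # assumed test environment URL

    def test_create_order_via_api():
        # Call the server interface directly, bypassing the (thin) client GUI.
        response = requests.post(
            f"{BASE_URL}/api/orders",
            json={"customer_id": 4711, "items": [{"sku": "A-100", "quantity": 2}]},
            timeout=10,
        )
        assert response.status_code == 201

        # Check the functional result returned by the server.
        order = response.json()
        assert order["status"] == "CREATED"
        assert len(order["items"]) == 1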

An important aspect of client-server systems is that in most cases several users work with the system in parallel. There are several possible scenarios for an automated test:

  • Multiple users via the same test computer
  • Multiple users across different physical machines
  • Multiple users across different virtualized machines

In the first case, the execution of the automated test cases must be parallelized on a single physical computer. However, most test tools do not explicitly support this, since automation via the GUI requires a degree of exclusive access to the machine. Building the functionality required for this can be very costly.
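
Where the system also offers a programmable interface, parallel users can at least be approximated below the GUI on a single machine. The following sketch simulates several users with concurrent sessions; the base URL, endpoints and credentials are assumptions.

    import concurrent.futures

    import requests

    BASE_URL = "https://test-server.example.com"  # assumed test environment URL

    def simulate_user(user_id: int) -> int:
        # Each simulated user gets its own session (own cookies and connection pool).
        with requests.Session() as session:
            session.post(
                f"{BASE_URL}/api/login",
                json={"user": f"testuser{user_id}", "password": "secret"},
                timeout=10,
            )
            response = session.get(f"{BASE_URL}/api/orders", timeout=10)
            return response.status_code

    # Run five simulated users in parallel on the same test computer.
    with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
        results = list(executor.map(simulate_user, range(1, 6)))

    assert all(status == 200 for status in results)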

The second option brings with it the difficulty of keeping the test environment under control. Configuring and maintaining multiple physical computers solely for automation purposes involves considerable effort, even if the machines are identically configured.

The most commonly used option is to test across multiple virtualized machines. This approach has the advantage that a defined machine configuration can simply be replicated across multiple instances.

Experience shows that many test teams using a virtualized environment for the first time greatly underestimate the necessary administration activities and the associated effort. Effective configuration management for the virtualized test computers is critical to success.

Web applications

In the meantime, one special case of client-server applications has become very widespread: web applications. Here there is generally no application-specific client but a generic one - the browser. Thanks to the strong standardization of the transmitted data (HTTP and HTML), specific methods can be applied that target these protocols and exploit their use (e.g. capture & replay at the protocol level). Many tools explicitly support web applications. Parallelizing test execution is also easier for web applications, since some tools do not drive the application via the physical GUI but via JavaScript, or can even run their tests entirely at the level of the underlying protocols and formats, which normally speeds up test execution considerably.
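
As a minimal sketch of a GUI-level web test, the following example uses Selenium WebDriver; the URL, element locators and expected result are illustrative assumptions.

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()  # assumes a local Chrome/ChromeDriver setup
    try:
        driver.get("https://shop.example.com")  # assumed application URL

        # Drive the application through the browser rather than the physical GUI.
        driver.find_element(By.ID, "search").send_keys("notebook")
        driver.find_element(By.ID, "search-button").click()

        # Verify the functional result in the rendered page.
        results = driver.find_elements(By.CSS_SELECTOR, ".result-item")
        assert len(results) > 0
    finally:
        driver.quit()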

One decision that significantly simplifies tool selection is answering the question of whether automated tests should run on different browsers and browser versions. The answer depends mainly on how much functionality is provided within the browser, i.e. JavaScript, Ajax and similar techniques.

Another scenario in which automated test runs across multiple browsers can be used is semi-automated testing. For example, automated functional tests can be run and screenshots captured during execution, so that after the run a tester can manually review these screenshots and check for correct rendering. This method can provide a good balance between manual testing effort and automation effort, since machine verification of the correct rendering of web pages with dynamic content cannot be implemented in a stable way, at least at the current time.
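
A possible sketch of such a semi-automated run: the same steps are executed in several browsers and a screenshot is saved after each step, so that a tester can review the rendering afterwards. The URL and element locators are assumptions, and the browser list would follow the project's browser matrix.

    from pathlib import Path

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    SCREENSHOT_DIR = Path("screenshots")
    SCREENSHOT_DIR.mkdir(exist_ok=True)

    # Browsers assumed to be installed locally.
    browsers = {"chrome": webdriver.Chrome, "firefox": webdriver.Firefox}

    for name, browser_factory in browsers.items():
        driver = browser_factory()
        try:
            driver.get("https://shop.example.com")  # assumed application URL
            driver.save_screenshot(str(SCREENSHOT_DIR / f"start_{name}.png"))

            driver.find_element(By.ID, "search").send_keys("notebook")
            driver.find_element(By.ID, "search-button").click()
            driver.save_screenshot(str(SCREENSHOT_DIR / f"results_{name}.png"))
        finally:
            driver.quit()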

Test automation of mobile applications

In general, the automation of mobile applications is conceptually comparable to the testing of client-server systems, with the difference that mobile devices instead of desktop clients handle the communication with the server. Seen in this light, the question naturally arises as to why the automation of mobile application tests should be considered separately at all. Despite the conceptual similarity, however, some special challenges arise in this area that justify a separate look. This section therefore mentions some of these special challenges in the testing of mobile applications and describes a possible procedure for such test projects.

Major challenges in mobile application automation are:

The selection of test platforms

The device landscape currently comprises a large number of potentially relevant end devices, so a major problem for test automation projects is selecting a meaningful subset of devices for test execution.

Dealing with interrupts

In mobile applications, interrupts (e.g. incoming calls, SMS, push notifications, ...) pose a great challenge to tool manufacturers as well as test automators. If emulators or simulators are used for test execution, interrupts can be simulated relatively easily; on physical devices, however, this is difficult.
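
On an Android emulator, for example, interrupts can be injected via the emulator console. The following sketch uses adb console commands for an incoming call and an SMS, assuming a running emulator with the default serial emulator-5554.

    import subprocess

    EMULATOR = "emulator-5554"  # assumed serial of the running Android emulator

    def simulate_incoming_call(phone_number: str) -> None:
        # 'adb emu' forwards console commands to the emulator; 'gsm call' triggers an incoming call.
        subprocess.run(["adb", "-s", EMULATOR, "emu", "gsm", "call", phone_number], check=True)

    def simulate_incoming_sms(phone_number: str, text: str) -> None:
        # 'sms send' delivers a text message to the emulator.
        subprocess.run(["adb", "-s", EMULATOR, "emu", "sms", "send", phone_number, text], check=True)

    # Trigger interrupts while the automated test of the app is running.
    simulate_incoming_call("+15550100")
    simulate_incoming_sms("+15550100", "Test interrupt")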

Different hardware of the end devices

Mobile devices in particular come in a multitude of variants with different hardware. The diversity in screen size, resolution and pixel density is also found in this form only on mobile devices. These factors are far less relevant for the automation of desktop applications.

Network performance and different types of network connections

Since mobile applications often have to function with different and constantly changing network connections, the question arises as to how this aspect can be taken into account in test automation. In practice, these tests are performed either manually in field tests or in a test environment with simulated network connections (WAN emulators).
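
In a Linux-based test environment, degraded network connections can be approximated with the netem queueing discipline. The sketch below wraps the corresponding tc calls; it assumes root privileges and that eth0 is the relevant network interface.

    import subprocess

    INTERFACE = "eth0"  # assumed network interface of the test environment

    def degrade_network(delay_ms: int, loss_percent: float) -> None:
        # Add artificial latency and packet loss to outgoing traffic (requires root).
        subprocess.run(
            ["tc", "qdisc", "add", "dev", INTERFACE, "root", "netem",
             "delay", f"{delay_ms}ms", "loss", f"{loss_percent}%"],
            check=True,
        )

    def restore_network() -> None:
        # Remove the artificial network conditions again.
        subprocess.run(["tc", "qdisc", "del", "dev", INTERFACE, "root", "netem"], check=True)

    degrade_network(delay_ms=300, loss_percent=2.0)
    try:
        pass  # run the automated mobile tests against the backend here
    finally:
        restore_network()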

Test automation for embedded systems

Embedded systems are characterized by the embedding of software in hardware. The system under test is therefore not the software alone, but hardware and software together. If different components with different hardware dependencies interact, the development and testing of embedded systems can become arbitrarily complex. Additional factors such as interacting third-party systems or data dependencies can increase the complexity further.

Embedded systems are also often systems with a certain safety criticality. This means that high values, such as material assets or human life, depend on the correctness of their behavior. For this purpose there are corresponding standards such as the generic IEC 61508, with domain-specific versions such as ISO 26262 for the automotive sector or EN 50128 for the railway sector.

Such standards contain method tables for the type of test approach as well as classifications for the tools used. EN 50128, for example, distinguishes between classes T1, T2 and T3: T1 covers tools that have no influence on the test object, T2 covers verification and test tools whose failures could lead to defects not being detected, and T3 covers tools that have a direct influence on the test object. The test automation tools presented here therefore fall under T2, for which separate evidence of quality becomes necessary.

Data warehouses

Data warehouses are an example of complex systems with many interfaces, large volumes of data and an often less intuitive structure. In principle, the databases and the underlying systems and standard products themselves do not differ significantly from other applications when it comes to testing - they have concrete requirements and use cases. Data warehouses, in contrast, represent central data collections from several of a company's systems, whose structure and preparation of the data allow comprehensive analyses to support management and business decisions.

A data warehouse (DWH) essentially fulfills two principles:

  • Data integration
  • Data separation

The operation of a DWH, starting with data acquisition, through the storage of data in the DWH database, to the management of data sets for subsequent data analysis and evaluation, is called "data warehousing".

Apart from the organizational problems and the infrastructure (large amounts of data, legally sensitive data, etc.), there are several other aspects that make manual testing almost impossible:

  • Many technical interfaces with many source systems for data
  • No graphical user interface
  • Complex core functionalities such as historization of data or consistency checks
  • Import, export and semantics of the data are complex and in many cases not known in detail to the test team

For a comprehensive test, a combination of automated approaches is necessary in most cases. For the core area of the data warehouse - the central data storage and the actual data warehouse functionality such as historization, references or other core functions - there are usually consistency rules that the data in the core system must satisfy. An automated check can verify whether these rules are actually adhered to, for example whether the most recent record in a data history always carries the marker for the currently valid record.
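
As a sketch of such an automated rule check, the query below verifies that each business key has exactly one record flagged as currently valid. The table and column names (customer_history, customer_id, is_current) and the use of SQLite as a stand-in for the actual DWH database are assumptions.

    import sqlite3  # stand-in for the actual DWH database driver

    connection = sqlite3.connect("dwh_core.db")  # assumed connection to the core DWH

    # Consistency rule: per business key, exactly one record in the history
    # may be flagged as the currently valid one.
    violations = connection.execute("""
        SELECT customer_id,
               SUM(CASE WHEN is_current = 1 THEN 1 ELSE 0 END) AS current_records
        FROM customer_history
        GROUP BY customer_id
        HAVING SUM(CASE WHEN is_current = 1 THEN 1 ELSE 0 END) <> 1
    """).fetchall()

    assert not violations, f"History consistency violated for: {violations}"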

Other approaches in a DWH test are:

  • Plausibility checks: Valid and invalid data records are imported. The target system must then accept the valid records, and the invalid records must appear as "rejected" in the log.
  • Re-implementation: Part of the system's functionality is implemented a second time in the automation framework for testing purposes, so that expected results can be computed independently.
  • Defined input-output pairs: A known set of test data, derived from the import and transformation rules using test design techniques, is imported and the result is compared with expected output data derived in the same way (see the sketch after this list).
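
For the last approach, a minimal sketch: after the load of a known input data set, the resulting table content is compared with an expected result derived beforehand. The file name, table and column names and the use of SQLite as a stand-in for the actual DWH database are assumptions.

    import csv
    import sqlite3  # stand-in for the actual DWH database driver

    def read_expected(path: str) -> set[tuple]:
        with open(path, newline="") as handle:
            return {tuple(row) for row in csv.reader(handle)}

    # Expected output derived from the import and transformation rules (file assumed).
    expected_rows = read_expected("expected_customers.csv")

    # Actual output after the (separately triggered) load of the known input data set.
    connection = sqlite3.connect("dwh_core.db")
    actual_rows = {
        tuple(str(value) for value in row)
        for row in connection.execute("SELECT customer_id, name, segment FROM customer_core")
    }

    missing = expected_rows - actual_rows
    unexpected = actual_rows - expected_rows
    assert not missing and not unexpected, f"Missing: {missing}, unexpected: {unexpected}"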

Cloud-based systems

The most important characteristic of cloud computing is that applications, tools, development environments, management tools, storage capacity, networks, servers, etc. are no longer provided or operated by the users themselves, but are "rented" from one or more providers who offer the IT infrastructure publicly as cloud services over a network. For the user, this has the advantage that acquisition and maintenance costs for IT infrastructure are eliminated. Only the services actually consumed are paid for, and only for the duration of use. Standardized and highly scalable services now allow many companies to use services that were previously almost unaffordable.

One problem that users of cloud services have to deal with is data security. It remains their responsibility to decide which data may leave the company, and to what extent.

The outsourcing of operating environments and the use of external services for parts of the functionality also have a significant impact on testing. Particularly in such multi-tier scenarios, in which responsibilities do not lie with a single party and functionality-relevant parts are developed and operated independently of each other, a continuous check of the systems' functionality in the sense of a frequently performed regression test is necessary.
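
Such a frequently executed regression check can be as simple as a small test suite that a scheduler or CI pipeline runs against the externally operated services. The endpoints and expected responses below are purely illustrative assumptions.

    import requests

    # Externally operated services the application depends on (URLs assumed).
    SERVICES = {
        "payment": "https://payment.provider.example.com/health",
        "geocoding": "https://geo.provider.example.com/health",
    }

    def test_external_services_available():
        for name, url in SERVICES.items():
            response = requests.get(url, timeout=10)
            assert response.status_code == 200, f"Service '{name}' not reachable"

    def test_geocoding_contract():
        # A minimal functional check that the external interface still behaves as expected.
        response = requests.get(
            "https://geo.provider.example.com/lookup",
            params={"address": "Example Street 1, Example City"},
            timeout=10,
        )
        assert response.status_code == 200
        assert {"latitude", "longitude"} <= response.json().keys()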

It is important to clearly define and agree on the scope of testing and test automation: tests that a cloud infrastructure provider must perform generally have a different focus than tests performed by a platform or software developer, or even by the customer.