The complexity of integration 

 October 1, 2019

The dictionary defines integration as "the (re)creation of a whole from differentiated parts; completion" and thus names the goal towards which the process is heading. The concrete manifestations, especially in software development, could hardly be more diverse: horizontal, vertical, microservices, APIs, loosely coupled, layers, silos, tightly meshed, integration platforms or file-based exchange, encapsulation, etc. The goals of integration are just as varied, all the more so because integration-specific aspects such as the integration strategy, including requirements for the test environment, come into play as well.

To keep an overview here, structure is needed. At this point, we would like to present a few dimensions that can help in classifying your own integration tests or in checking whether other dimensions still need to be considered.

Dimension: Test objectives

A typical objective of integration testing is to demonstrate correct communication between the objects being integrated. As is often the case, the primary goal is to minimize the risk of undetected defects and to find failures at an early stage.

The functional aspects are usually in the foreground. However, non-functional aspects also play an important role: reliability, usability, information security, compatibility, transferability, modifiability, performance/efficiency (see ISO 25010). Depending on the industry, application purpose and customer requirements, the non-functional aspects can become increasingly important and should be taken into account accordingly.

Dimension: Test objects

The test object significantly influences the design of the tests and the test environment: interfaces, services, APIs, databases, subsystems, but also infrastructure and hardware.

It is essential for the success of the integration tests that the individual integration objects have already been tested as products or subsystems in their own right, independently of the communication at the interfaces. Otherwise, when a failure occurs, it cannot be determined whether it stems from a defect at the interface or from a defect inside a component. Prior component testing therefore saves a lot of troubleshooting time.

Dimension: Test level

Depending on the test object, integration testing takes place at a different level of abstraction:

  • Units: Integration at this level is usually well supported by the development environment or the framework.
  • System components: Integrated libraries or databases support the test.
  • Systems: Even if software system interfaces are well documented, integration is complex and error-prone.
  • Integration of software and hardware: Tests at this level face the particular challenge of also covering non-functional aspects.
  • Integration of software and data: Both describe information that must fit together. Usually, the former provides the generic part and the latter the project-specific part. Here, the balancing act between generic and project-specific testing is important.

Things can also become complex across levels: if the integration takes place at the system level (often black box) but focuses on properties that are only visible in a white-box view, this balancing act is a particular challenge for many development teams.

In any case, efficient integration requires teams from different levels, with different perspectives, to work together.

Dimension: Test basis

The test basis can be, for example: Interface specifications, definitions of communication protocols, sequence diagrams, models such as statecharts, architecture descriptions, software and system designs, workflows, use cases, or descriptions of data structures.

In general, the higher the degree of formalization, the more we can rely on the results. If we can be sure that the interface specifications of the communicating products are consistent with each other, then much has already been gained.

Dimension: Typical errors

Integration testing can detect many different types of defects, for example: wrong data structures, faulty interfaces, wrong assumptions about the data passed, missing data, or problems with performance or security (e.g. encryption).

These dimensions can be combined almost arbitrarily and thus span a large field of possible integration tests. While component or system tests are rather homogeneous, integration tests differ widely in implementation, technology and methodology. Many tests, especially non-functional ones, can be executed meaningfully only with automation. The sheer number of combinations described here, all of which impact the test, also argues for increasing efficiency through test automation.

And establishing test automation is itself a major challenge, as this step often requires adjustments to the frameworks or the development of custom test tools: an effort that should not be underestimated.

Conclusion

As a test manager or tester, you may want to bury your head in the sand when faced with this abundance of possibilities. But don't be discouraged, and just start small: map out a field with the most important dimensions, check where you have already implemented integration tests well and where gaps remain in which further tests would add real value, and then start improving. Good luck!

The article was published in the 02/2019 issue of German Testing Magazin.