Testing Cloud Services 

September 1, 2012

Modern IT systems are getting bigger and more complex. Building them in the first place may be feasible with professional help, but users are often overwhelmed by the maintenance and further development of such complex systems. Because new systems are constantly being developed, the maintenance problem multiplies. Software must constantly be corrected, changed, renovated, and evolved to keep pace with operational reality. The more new software is created, the more capacity is needed to maintain it. Users therefore need ever more personnel to maintain the old systems on the one hand and to build new applications on the other.

It is no solution to produce more and more code, faster and faster. Instead, the goal must be to gain functionality without growing the code base: to limit the code mass while still providing the required functionality. This can only be achieved with external, prefabricated building blocks that users do not have to develop and maintain themselves; the responsibility for maintenance and further development lies with the provider. The advent of cloud computing promises a solution to this problem. In the cloud, users can combine the benefits of pre-built software with the benefits of in-house development. They need not solve every detailed problem themselves: they use the standard solutions and concentrate their effort on the truly business-specific functions.

Service-oriented requirements analysis

A prerequisite for the use of cloud services is service-oriented requirements analysis. The first step is to model the business process in order to define the context of the application. Then a search begins for services that can be considered for solving the problem.

The user searches for building blocks that can be used in the planned system. Only when the list of potential building blocks is long enough does the user begin to detail the requirements so that they fit the available services. The functional and non-functional requirements define the minimum acceptance criteria: only functions that absolutely must be present are described, and only quality limits that absolutely must be met are defined.
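Such minimum acceptance criteria can be checked mechanically. The following is a minimal sketch, in which the requirement names, the candidate service description, and the quality limits are all invented for illustration:

```python
# Minimum acceptance criteria: mandatory functions plus quality limits.
# All names and values here are illustrative assumptions.

REQUIRED_FUNCTIONS = {"create_order", "cancel_order"}   # must be present
QUALITY_LIMITS = {"max_response_ms": 500, "min_availability": 0.99}

def meets_minimum_criteria(service):
    """Accept a candidate service only if every mandatory function is
    offered and every quality limit is met."""
    has_functions = REQUIRED_FUNCTIONS <= set(service["operations"])
    within_limits = (service["response_ms"] <= QUALITY_LIMITS["max_response_ms"]
                     and service["availability"] >= QUALITY_LIMITS["min_availability"])
    return has_functions and within_limits

candidate = {"operations": ["create_order", "cancel_order", "list_orders"],
             "response_ms": 320, "availability": 0.995}
print(meets_minimum_criteria(candidate))  # True for this candidate
```

A service that misses even one mandatory function or violates one quality limit is rejected outright; everything beyond the minimum remains negotiable.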

The requirement specification contains the following contents:

  • Functional and non-functional requirements
  • Business objects as an indication of the data that must be made available to the service
  • Business rules as desired processing logic of the selected services
  • System actors as the subjects of the use cases
  • Process triggers that trigger use cases
  • Use cases
  • Business interfaces

This document must be automatically analyzable: on the one hand, the processing steps of the use cases must be matched against the operations of the potential services; on the other, test cases must be generated from it for testing the services. The main purpose of the requirements documentation is to serve as a test oracle. It must therefore be comparable to the interface definition and thus on the same semantic level. It also contains information linking it upward to the business process. In this respect, the document is the link between the higher-level business process and the lower-level services.
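To make the idea concrete, a requirement record carrying the contents listed above might look as follows. This is a sketch; the field names and the use case are assumptions, not a standard representation:

```python
# A machine-analyzable requirement record with the contents listed
# above; all field names and values are illustrative assumptions.

requirement = {
    "use_case": "SubmitInvoice",
    "trigger": "invoice_received",          # process trigger
    "actor": "AccountsClerk",               # system actor
    "business_objects": ["Invoice", "Vendor"],
    "business_rules": ["amount > 0", "vendor is registered"],
    "interface": "InvoiceService",          # business interface
    "steps": ["validate invoice", "post invoice"],
}

def to_test_case_stubs(req):
    """Derive one test-case stub per processing step, ready to be
    matched against service operations and filled with test values."""
    return [{"use_case": req["use_case"], "step": step, "values": None}
            for step in req["steps"]]

stubs = to_test_case_stubs(requirement)
print(len(stubs))  # one stub per processing step: 2
```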

The service test procedure

The test procedure provides for two phases:

  • Static analysis
  • Dynamic analysis

Static analysis

Static analysis compares the content of the requirements with the content of the interface definition. A text analyzer scans the specification and builds tables of use cases, processing steps, and interface data. A table of test cases is also built: a matching test case is created for each processing step, action, state, and rule, and the test cases are supplemented with information from the interface definitions. In parallel, the interface schema is analyzed. Besides checking the rules and measuring the size, complexity, and quality of the interfaces, tables are built here as well. The tables from the specification analysis are then compared with those from the interface analysis: use case processing steps are paired with operations, and business interface data is paired with the operations' parameter data. Wherever a pairing fails, an incompatibility is reported.
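The core of this comparison is a simple table match. The following sketch pairs specification steps with interface operations by normalized name and collects the leftovers as incompatibilities; the matching rule and all names are assumptions for illustration:

```python
# Table comparison from the static analysis: specification steps are
# paired with interface operations; unmatched steps are reported as
# incompatibilities. The name-based matching rule is an assumption.

def match_tables(spec_steps, interface_ops):
    """Pair each specification step with an interface operation by
    normalized name; collect the steps that find no partner."""
    def normalize(name):
        return name.lower().replace(" ", "_")
    ops = {normalize(op) for op in interface_ops}
    pairs, incompatibilities = [], []
    for step in spec_steps:
        if normalize(step) in ops:
            pairs.append((step, normalize(step)))
        else:
            incompatibilities.append(step)
    return pairs, incompatibilities

pairs, incompatible = match_tables(
    ["validate invoice", "post invoice", "archive invoice"],
    ["validate_invoice", "post_invoice"])
print(incompatible)  # ['archive invoice']
```

In practice the pairing would also compare parameter types and business interface data, not just names, but the principle is the same.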


If a service proves incompatible with the user's expectations, the user has four alternatives:

  • Reject the service and continue searching
  • Adapt the requirements to the service
  • Build a wrapper around the service that adapts its results to the requirements
  • Develop an equivalent service in-house

The user will decide case by case which of these alternatives fits best. What should be avoided is users developing their own services; that alternative should be allowed only as a last resort.

At the end of the static analysis, we know, first, whether the service is even eligible based on its interface definition; for this we have measured its size, complexity, and static quality. Second, we can compare the structure and content of the service interface definition with the structure and content we would like to have. If they are too far apart, we need not even start the second phase, the dynamic analysis.
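The go/no-go decision at this point can be reduced to a match ratio between desired and offered interface elements. The following sketch assumes a simple ratio and an invented 50% threshold; the real cut-off would be project-specific:

```python
# Go/no-go gate after static analysis: dynamic analysis is started only
# if enough interface elements matched. The threshold is an assumption.

def proceed_to_dynamic_analysis(matched, total, threshold=0.5):
    """Start phase two only if the match ratio reaches the threshold."""
    ratio = matched / total if total else 0.0
    return ratio >= threshold

print(proceed_to_dynamic_analysis(6, 10))  # True: 60% of elements matched
print(proceed_to_dynamic_analysis(2, 10))  # False: too far apart
```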

Dynamic analysis

In dynamic analysis, the service is executed and the results are recorded for comparison purposes. The starting point for the dynamic analysis is the schema of the interface and the test case table obtained from the requirements specification.

The analysis involves eight steps:

  1. The tester adds value assignments to the test case tables
  2. A test script is generated from the test case table and the interface schema
  3. The tester supplements the test scripts with further pre- and post-conditions
  4. The test scripts are compiled and test objects are formed
  5. Requests are generated from the interface schema and the preconditions of the test objects
  6. The requests are sent in sequence and the corresponding responses are intercepted
  7. The responses are validated against the postconditions of the test objects
  8. The test metric is evaluated

The test case table generated from the requirements specification is not complete; for example, the test value assignments must still be filled in by the tester. An automated tool then combines the test case table with the interface definition into a structured test script, which the tester can refine and extend.
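Step 2 of the procedure, the merge of table and schema into a script, can be sketched as follows. The script format, the operation name, and the parameter names are all invented for illustration:

```python
# Sketch of test script generation: the value-assigned test case table
# is merged with the interface schema. The script format is invented.

def generate_test_script(test_cases, schema):
    """For each test case, look up the operation's parameters in the
    interface schema and emit a script entry with empty pre-/post-
    condition slots for the tester to supplement (step 3)."""
    script = []
    for case in test_cases:
        params = schema[case["operation"]]
        script.append({
            "operation": case["operation"],
            "arguments": {p: case["values"].get(p) for p in params},
            "pre": [],   # to be supplemented by the tester
            "post": [],  # to be supplemented by the tester
        })
    return script

schema = {"post_invoice": ["invoice_id", "amount"]}
cases = [{"operation": "post_invoice",
          "values": {"invoice_id": "INV-1", "amount": 100}}]
script = generate_test_script(cases, schema)
print(script[0]["arguments"])  # {'invoice_id': 'INV-1', 'amount': 100}
```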

From here on, everything runs automatically. Test objects emerge from the scripts. The generator combines the interface schema with the preconditions to generate a series of requests for each test case. The requests are then sent by a test driver, which also receives the responses and stores them. The validator compares each response with the postconditions and reports any deviating values in a defect report.
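The driver/validator loop described above can be sketched in a few lines. Here the service under test is simulated by a local stub; the test object format and the defect report fields are assumptions:

```python
# Sketch of the automated driver/validator loop: requests are sent, the
# responses are intercepted, and deviations from the postconditions are
# written to a defect report. The service is a local stub.

def run_tests(test_objects, service):
    """Send each test object's request and validate the response
    against its postconditions; collect deviations."""
    defects = []
    for obj in test_objects:
        response = service(obj["request"])           # driver sends request
        for field, expected in obj["post"].items():  # validator checks
            if response.get(field) != expected:
                defects.append({"test": obj["name"], "field": field,
                                "expected": expected,
                                "actual": response.get(field)})
    return defects

def stub_service(request):
    """Stand-in for the real cloud service under test."""
    return {"status": "posted", "amount": request["amount"]}

test_objects = [
    {"name": "TC1", "request": {"amount": 100},
     "post": {"status": "posted", "amount": 100}},
    {"name": "TC2", "request": {"amount": 0},
     "post": {"status": "rejected"}},
]
report = run_tests(test_objects, stub_service)
print(len(report))  # 1: TC2 deviates, status 'posted' instead of 'rejected'
```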

In the last step, the test metrics are aggregated and evaluated (test coverage, correctness, performance rating, etc.).
With the help of the test metrics report, the tester can assess to what extent the behavior of the service is suitable for the target application.
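The metric names follow the text; the formulas below are assumptions chosen for illustration, with coverage as executed over planned test cases and correctness as passed over executed:

```python
# Sketch of the metric evaluation in the last step. The formulas for
# coverage and correctness are illustrative assumptions.

def evaluate_metrics(planned, executed, passed):
    """Aggregate test coverage and correctness from the run counts."""
    coverage = executed / planned if planned else 0.0
    correctness = passed / executed if executed else 0.0
    return {"coverage": round(coverage, 2), "correctness": round(correctness, 2)}

metrics = evaluate_metrics(planned=20, executed=18, passed=15)
print(metrics)  # {'coverage': 0.9, 'correctness': 0.83}
```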

The results of the test serve as an aid to decision-making, and for this they must be presented in a form that the decision-maker can easily understand and assess.


In the future, code modules will increasingly be offered as ready-to-use services that users only need to integrate into their business processes. Programming, if that term still applies, will be done in a higher-level process description language such as BPMN, S-BPM, or BPEL. From there, the interfaces are operated and the individual services are called. Users no longer need to worry about the detailed implementation. Nevertheless, they still have to test the building blocks they use: individually, in a service unit test as described here, and as a whole, in an integration test. In any case, we are facing a paradigm shift, moving from object-oriented to service-oriented software development. This should alleviate the maintenance problem addressed at the beginning of this paper, if not definitively, then at least by quite a bit.

