Testing of complex e-government systems 

September 1, 2009

In order to create broad acceptance of e-government systems, error-free functionality, good usability, security and performance are essential. But how does the customer of such a system know whether the delivered software actually fulfills all requirements? The following article describes the approach and benefits of systematic acceptance tests using the example of the e-government platform of the Free State of Saxony.

In 2005, as part of an e-government initiative, the Free State of Saxony began building a central infrastructure platform for e-government applications. It contains components that are needed to implement e-government processes: by simply integrating these basic components, new e-government applications can be assembled efficiently and made easily accessible to users. The basic components of the e-government platform include the life-situation portal Amt24, the form service, the geoportal Sachsenatlas, the integration framework, a central content management system, and components for electronic payment transactions, electronic signatures and encryption.

For those responsible for the e-government platform, one thing is clear: none of these components may go into productive operation until the acceptance tests have been completed and the acceptance criteria have been met. The value of systematic testing has been proven time and again, for custom software as well as for standard products. Since 2005, thousands of defects have been documented and corrected in the defect management system, and tens of thousands of test cases have been executed. The defects range from minor layout problems to serious errors that would have been fatal in productive operation. Testing also uncovered faulty or incomplete requirements, which could then be corrected before going live, for example via change requests.

What needs to be considered in the structured test of an e-government platform is explained below using the test process as an example.

1. Test planning

To avoid surprises later on, the planning of the test should start at the same time as the planning of the development project, with budget, staffing and scheduling specifically for the test project. The test objectives are essential for planning the test effort. For systems with a particularly high public profile, the focus is on performance and usability in addition to correct functionality. Systems that perform critical functions or process personal data must be tested for security vulnerabilities. The integration of old and new specialist systems makes a comprehensive test of the interfaces and data transmission indispensable.

Standards such as the test process according to ISTQB, software quality characteristics according to ISO 9126-1:2001 and test documentation according to IEEE 829:2008 help with effective test planning.

2. Test preparation

The test preparation phase includes the test specification, the provision of test tools and the test environment as well as the procurement or creation of test data. The establishment of a special test center has proven particularly useful for this purpose. Many e-government projects involve test specialists from IT and testers from different departments and authorities. The test center offers them not only shared workstations but also the opportunity to exchange information on methodological and technical issues and to work together effectively beyond the line organization. In this way, many questions about the target behavior of the application to be tested are often clarified quickly during the test specification.

3. Test execution

In the case of contracts for work and services, which are common in the public sector, acceptance of the deliverables often has to take place within a narrowly defined time frame. In the Free State of Saxony, it has therefore proven useful to define milestones with intermediate versions during ongoing development. A release test of each interim version provides timely information about delivery quality and makes it possible to correct any deficiencies found before the software is submitted for acceptance. During acceptance testing, only a regression test of meaningful test cases from the release tests is then performed.
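Selecting "meaningful test cases from the release tests" for the acceptance regression test can be sketched as a simple filter. This is an illustrative sketch, not the platform's actual tooling; the `TestCase` fields and selection rule (high/medium priority, passed in the release test) are assumptions:

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    id: str
    priority: str            # "high", "medium" or "low"
    passed_in_release: bool  # result from the last release test

def select_regression_suite(release_suite):
    """Pick the meaningful cases for the acceptance regression test:
    high- and medium-priority cases that passed the release test."""
    return [tc for tc in release_suite
            if tc.priority in ("high", "medium") and tc.passed_in_release]
```

In practice the selection rule would also weigh defect history and changed components, but even this crude filter keeps the acceptance window short.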

While testers from the business unit essentially concentrate on the feasibility of the most important use cases and contribute their experience with typical weak points, test specialists check the standard conformity of interfaces and ensure the test coverage of all defined requirements.
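Verifying test coverage of all defined requirements is a mechanical check once test cases are traced to requirement IDs. A minimal sketch (the data layout is an assumption, not the project's actual traceability tooling):

```python
def uncovered_requirements(requirements, test_cases):
    """Return the IDs of requirements that no test case references.

    requirements: iterable of requirement IDs
    test_cases:   mapping of test-case ID -> set of requirement IDs it covers
    """
    covered = set()
    for refs in test_cases.values():
        covered |= refs
    return sorted(set(requirements) - covered)
```

A non-empty result flags specification gaps early, before they surface as missed acceptance criteria.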

4. Test control

The project manager must be regularly informed about test progress, defect counts and any problems that arise. For example, in the event of problems with inter-agency IT communication, measures such as enabling network connections or increasing server capacity can be initiated in good time before the go-live date. Because all defects are recorded in the defect management system and can be viewed at any time, disputes about individual defect reports (defect or feature? change request?) can be resolved promptly by project management.
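The defect figures for such a progress report can be aggregated directly from the defect management system's export. A minimal sketch, assuming defects are exported as (severity, status) pairs:

```python
from collections import Counter

def open_defects_by_severity(defects):
    """Aggregate open defects per severity class for the test
    progress report to the project manager.

    defects: iterable of (severity, status) tuples, e.g. as
    exported from a defect management system."""
    return Counter(severity for severity, status in defects
                   if status == "open")
```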

5. Test completion

A description of residual risks and the evaluation of the delivered software against the acceptance criteria form the core of the test report that the test team delivers to the project manager. The acceptance criteria, which are naturally part of the work contract with the supplier, can take various forms. For the e-government platform, the required test coverage is usually combined with defect-based metrics:


  • 90% of all test cases prioritized as high or medium must be executed.
  • No blocking defects and at most 10% of the severe defects found may remain open.
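Criteria of this kind can be evaluated automatically. A minimal sketch, assuming each test case carries a priority and an executed flag, and each defect a severity class ("blocking", "severe", ...) and a status; the thresholds mirror the two example criteria above:

```python
def acceptance_met(cases, defects):
    """Check two example acceptance criteria:
    - at least 90% of high/medium-priority test cases executed
    - no open blocking defects, and at most 10% of the severe
      defects found still open

    cases:   list of (priority, executed) tuples
    defects: list of (severity, status) tuples
    """
    relevant = [executed for priority, executed in cases
                if priority in ("high", "medium")]
    executed_ratio = sum(relevant) / len(relevant) if relevant else 1.0

    blocking_open = any(sev == "blocking" and status == "open"
                        for sev, status in defects)
    severe = [status for sev, status in defects if sev == "severe"]
    severe_open_ratio = severe.count("open") / len(severe) if severe else 0.0

    return (executed_ratio >= 0.9
            and not blocking_open
            and severe_open_ratio <= 0.1)
```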

Based on the acceptance recommendation in the test report, the project manager can now decide on the final acceptance and further measures.

From practice

When the e-government platform and its first components were put out to tender in Saxony in 2004, it was clear that a standard was to be set for subsequent projects. A test center was established in which test specialists from ANECON work together with testers from the specialist departments of the Free State of Saxony to specify and execute functional test cases. In addition, security and usability tests are carried out according to common standards, e.g. BITV, BSI, W3C or OWASP. Load and performance tests are also an essential part of the test projects, as they also serve to determine response times for the service level agreements with operations.
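Determining response times for such service level agreements can be sketched as a simple timing harness. This is illustrative only; real load tests use dedicated tools with many parallel virtual users, and `request_fn` stands in for whatever call is being measured:

```python
import time
import statistics

def measure_response_times(request_fn, samples=20):
    """Time repeated calls to request_fn (e.g. an HTTP GET against a
    platform component) and return the figures SLAs are typically
    written against: median and 95th-percentile response time in seconds."""
    durations = []
    for _ in range(samples):
        start = time.perf_counter()
        request_fn()
        durations.append(time.perf_counter() - start)
    durations.sort()
    p95 = durations[int(0.95 * (len(durations) - 1))]
    return statistics.median(durations), p95
```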

Since testing activities began in the Free State of Saxony, these types of tests have been mandatory for all platform projects, as has testing against the specifications of the operating environment. The cooperation of test specialists with testers from the specialist departments in the test center has proven to be one of the key success factors, as have the change, release and defect management processes that have since become a matter of course. The effort is worthwhile: requirements such as the EU Services Directive and users' growing expectations of e-government demand an ever greater range of functions and ever closer networking of the IT systems involved. And success depends heavily on the acceptance of the users, who expect an error-free, secure and fast system.