The System Test

September 1, 2011

From requirements to proof of quality

In system testing, testers must ensure that errors in the software are found before it is delivered to users or customers. They test not only the software itself, but also the implemented business processes and the system as a whole. System testing is a complex undertaking that requires its own methods and tools.
This book provides you with practical guidance on planning, organizing, and executing system tests, whether you have purchased the systems, adopted them from open source libraries, migrated them from legacy systems, or developed them from scratch. The authors show how to set up a test project efficiently, which methods and approaches have proven practical, and how to integrate and automate the testing process. They also give an overview of useful tools for system testing.

The 3rd edition reflects current developments in system testing with two new chapters: model-based test specification and test automation.

Authors: Harry M. Sneed, Manfred Baumgartner, Richard Seidl

Publisher: Carl Hanser Verlag

ISBN: 978-3-446-42692-4

Edition: 3rd

Table of Contents

1 Introduction to System Testing

1.1 The Essence of System Testing
1.2 From Developers and Users to Testers
1.3 Why We Need to Test
1.4 Goals of System Testing
1.5 The System Testing Process
1.6 System Testing Standards
1.7 System Testing Tools
1.8 System Testers
1.9 On System Testability
1.9.1 Use Case Testability
1.9.2 User Interface Testability
1.9.3 System Interface Testability
1.9.4 Database Testability
1.9.5 Testing Without User Interfaces

2 Test Requirement Analysis

2.1 Approaches to Formulating Requirements
2.1.1 Formal Specification
2.1.2 Semiformal Specification
2.1.3 Structured Specification
2.1.4 Informal Specification
2.2 Approaches to Standardizing Requirements
2.3 The Practice of Requirements Documentation
2.4 The V-Modell-XT Requirements Specification
2.5 The Analysis of Natural Language Requirements
2.6 Requirements-Based Test Case Identification
2.7 An Example of Test Case Identification
2.8 On the Automation of Test Case Identification
2.9 Experience with Automated Requirements Analysis

3 Model-Based Test Specification

3.1 Where Does the Model Come From?
3.1.1 Adopting the Developer's Model
3.1.2 Creating Your Own Test Model
3.1.3 Obtaining a Model from the Requirements Documentation
3.1.4 Obtaining a Model from the Code
3.2 Deriving Test Cases from a UML Model
3.2.1 Test Cases from Use Case Diagrams
3.2.2 Test Cases from Sequence Diagrams
3.2.3 Test Cases from Activity Diagrams
3.2.4 Test Cases from State Diagrams
3.2.5 Unifying Test Cases
3.3 From Test Model to Test Execution
3.4 Alternatives to Model-Based Testing
3.4.1 Testing Against the Tester's Ideas
3.4.2 Testing Against the User Manual
3.4.3 Testing Against the Requirements Documentation
3.4.4 Testing Against the Existing System
3.5 Assessment of Model-Based Testing
3.5.1 Model-Based Testing Compared to Testing Against the Tester's Ideas
3.5.2 Model-Based Testing Compared to Testing Against the User Manual
3.5.3 Model-Based Testing Compared to Testing Against an Existing System
3.5.4 Testing Against a Model Compared to Testing Against the Requirements Specification
3.5.5 The Optimal Testing Approach Is Situation-Dependent

4 System Test Planning

4.1 Purpose of Test Planning
4.2 Prerequisites for Test Planning
4.3 Estimating Test Effort
4.3.1 Test Points
4.3.2 Test Productivity
4.3.3 Complexity and Quality
4.3.4 The COCOMO-II Equation
4.4 Estimating Test Duration
4.5 Test Project Organization
4.5.1 Organizing Test Resources
4.5.2 Organizing Test Personnel
4.6 Test Risk Analysis
4.7 Determining the Test End Criteria
4.8 Design of the Test Plan According to ANSI/IEEE-829
4.8.1 Test Plan ID
4.8.2 Introduction
4.8.3 Objects to Be Tested
4.8.4 Functions to Be Tested
4.8.5 Functions Not to Be Tested
4.8.6 Test Procedure
4.8.7 Test End Criteria
4.8.8 Test Abort Criteria
4.8.9 Test Results
4.8.10 Test Tasks
4.8.11 Test Environment
4.8.12 Test Responsibilities
4.8.13 Test Personnel Requirements
4.8.14 Test Schedule
4.8.15 Test Risks and Risk Management
4.8.16 Approvals
4.9 The Test Specification According to V-Modell-XT
4.9.1 Introduction
4.9.2 Test Objectives
4.9.3 Test Objects
4.9.4 Test Cases
4.9.5 Test Strategy
4.9.6 Test Criteria
4.9.7 Test Results
4.9.8 Test Tasks
4.9.9 Test Environment
4.9.10 Test Case Assignment
4.9.11 Test Effort
4.9.12 Risk Precautions

5 Specification of the Test Cases

5.1 Structure of Test Cases
5.1.1 The Test Case Identifier
5.1.2 The Test Case Purpose
5.1.3 The Test Case Source
5.1.4 The Test Requirement
5.1.5 The Test Process
5.1.6 The Test Objects
5.1.7 The Test Case Pre-states
5.1.8 The Test Case Post-states
5.1.9 The Predecessor Test Cases
5.1.10 The Successor Test Cases
5.1.11 The Test Environment
5.1.12 The Test Case Arguments
5.1.13 The Test Case Results
5.1.14 The Test Case Status
5.2 Test Case Presentation
5.2.1 Test Cases in Text Format
5.2.2 Test Cases in Table Format
5.2.3 Test Cases in XML Format
5.2.4 Test Cases in a Formal Language: TTCN
5.3 Creation of Test Cases
5.3.1 Generation of Basic Data from the Requirements Text
5.3.2 Additions to Test Cases
5.4 Storage of Test Cases
5.4.1 Test Cases as Text
5.4.2 Test Cases as Tables
5.4.3 Test Cases as XML
5.5 Quality Assurance of Test Cases
5.5.1 Test Case Quantity
5.5.2 Measurement of Test Case Complexity
5.5.3 Measurement of Test Case Quality
5.6 Conversion of Test Cases into a Test Design
5.7 Maintenance and Further Development of Test Cases

6 Provision of the Test Data

6.1 Test Data Sources
6.1.1 The Requirements Documentation as a Source of Test Data
6.1.2 The Design Model as a Source of Test Data
6.1.3 The Source Code as a Source of Test Data
6.1.4 The Legacy Test Data as a Source of Test Data
6.1.5 The Production Data as a Source of Test Data
6.1.6 The Domain-Logic Test Cases as a Source of Test Data
6.2 Test Data Objects
6.3 Test Data Creation Approaches
6.3.1 The Blind Approach to Test Data Creation
6.3.2 The Targeted Approach to Test Data Creation
6.3.3 The Combined Approach
6.3.4 The Mutation Approach
6.4 Test Data Types
6.4.1 Databases
6.4.2 System Interfaces
6.4.3 User Interfaces
6.5 Test Data Generation
6.5.1 Data Generation from Test Cases
6.5.2 Data Generation from Test Procedures
6.5.3 Data Generation from Source Code
6.5.4 Data Generation from Existing Data
6.6 Tools for Test Data Generation
6.6.1 Database Generators
6.6.2 Interface Generators
6.6.3 User Interface Generators

7 System Test Execution

7.1 System Types
7.1.1 Standalone Systems
7.1.2 Integrated Systems
7.1.3 Distributed Systems
7.1.4 Web-Based Systems
7.1.5 Service-Oriented Systems
7.1.6 Fully Automated Systems
7.1.7 Embedded Real-Time Systems
7.2 Testing Standalone Systems
7.3 Testing Integrated Systems
7.3.1 Functional Testing
7.3.2 Load Testing
7.3.3 Usability Testing
7.4 Testing Distributed Systems
7.4.1 Interaction Testing
7.4.2 Network Test Tracking
7.4.3 Security Testing
7.5 Testing Web-Based Systems
7.5.1 Testing the Web Architecture
7.5.2 Testing the Web Application
7.6 Testing Service-Oriented Systems
7.6.1 Preparing the Service Test
7.6.2 Executing the Web Service Test
7.6.3 Simulated Testing of Business Processes
7.6.4 Integration of Services with Business Processes
7.7 Testing Fully Automated Systems
7.7.1 Tools for Automated Testing
7.7.2 Testers for Automated Testing
7.8 Testing Embedded Systems
7.9 No Two Systems are the Same

8 Evaluation of the System Test

8.1 Purpose of Test Evaluation
8.2 Evaluation of Test Results
8.2.1 Visible and Invisible Results
8.2.2 Options for Result Control
8.2.3 Rationale for Result Control
8.2.4 Automated Result Control
8.3 Measuring Test Coverage
8.3.1 Test Coverage Measures
8.3.2 Function Point Coverage
8.3.3 Requirements Coverage
8.3.4 Previous Functionality Coverage
8.3.5 Fault Coverage
8.4 Fault Analysis
8.4.1 Fault Localization
8.4.2 Fault Reporting
8.5 System Test Metrics
8.5.1 Test Coverage Measures
8.5.2 Fault Analysis Measures
8.5.3 Measuring Test Effectiveness
8.6 System Test Measurement in Practice

9 Test Maintenance and Updating

9.1 Analysis of Change Requests (CRs)
9.2 Updating and Optimizing the Test Plan
9.2.1 Updating the Test Objectives
9.2.2 Updating the Test Objects
9.2.3 Updating the Functions to be Tested
9.2.4 Updating the Test Strategy and Test End Criteria
9.2.5 Updating the Test Results
9.2.6 Updating the Test Tasks
9.2.7 Updating the Personnel Plan
9.2.8 Updating the Test Risks
9.2.9 Recalculating the Test Costs
9.3 Impact Analysis of the Software
9.3.1 Static Impact Analysis
9.3.2 Dynamic Impact Analysis
9.4 Updating the Test Cases
9.4.1 Specification of New Test Cases
9.4.2 Adaptation of Existing Test Cases
9.5 Enrichment of Test Data
9.5.1 Direct Enrichment of Data
9.5.2 Indirect Enrichment of Data
9.6 Execution of the Regression Test
9.6.1 Characteristics of a Regression Test
9.6.2 The Test in Dialog Mode
9.6.3 The Test in Batch Mode
9.6.4 Test Automation in Regression Testing
9.7 Evaluation of the Regression Test
9.7.1 Checking the Regression Test Coverage
9.7.2 Checking the Regression Test Results
9.7.3 Logging the Regression Test Results
9.8 Automation of the Regression Test
9.9 The Regression Test in Migration Projects
9.9.1 Full Regression Test
9.9.2 Selective Regression Test

10 System Test Automation

10.1 A Model for Test Automation
10.1.1 Test Inputs
10.1.2 Test Outputs
10.1.3 Test Object Relationships
10.2 Test Events
10.2.1 Planning Test Events
10.2.2 Preparing Test Events
10.2.3 Executing Test Events
10.2.4 Completing Test Events
10.2.5 Summary of Test Events
10.3 On the Automation of Test Events
10.3.1 Automated Derivation of Logical Test Cases from Requirements Documentation
10.3.2 Automated Generation of a Test Plan
10.3.3 Automated Generation of a Test Design
10.3.4 Automated Generation of Test Data
10.3.5 Automated Generation of Physical Test Cases
10.3.6 Automated Generation of Test Procedures
10.3.7 Automated Instrumentation of Code
10.3.8 Automated Test Execution
10.3.9 Automated Results Checking
10.3.10 Automated Test Coverage Checking
10.3.11 Automatically Generated Test Metrics
10.4 Requirements of Test Automation
10.4.1 Formalization of Requirements Specification
10.4.2 Standardization of Test Documents
10.4.3 Definition of Data Value Ranges
10.4.4 Qualification of Testers
10.5 System Test Automation as an Independent Project
10.5.1 First Automation Level
10.5.2 Second Automation Level
10.5.3 Third Automation Level
10.5.4 Fourth Automation Level
10.5.5 Fifth Automation Level
10.6 Alternatives to Automated Testing
10.6.1 First Alternative: Less Testing
10.6.2 Second Alternative: Massive Personnel Deployment
10.7 Past and Future of Test Automation

11 Tools for System Testing

11.1 Tool Categories and Areas of Application
11.2 Functionality and Selection Criteria
11.3 Tools from Projects: The Test Workstation

12 Test Management

12.1 The Necessity of System Test Management
12.2 Main Tasks of System Test Management
12.2.1 Test Planning and Implementation of the Test Concept
12.2.2 Ongoing Control of All Test Activities
12.2.2.1 Content Control
12.2.2.2 Control of Planning Variables
12.2.2.3 Control of Test End Criteria
12.2.3 Ensuring the Quality of Test Results
12.2.3.1 Quality of Test Design and Test Cases
12.2.3.2 Quality of Test Execution Logging
12.2.3.3 Quality of Defect Management
12.3 Test Process Management
12.3.1 Test Process Design
12.3.2 Test Process Maturity
12.4 Test Team Leadership

13 Appendix

13.1 Appendix A: Test Plan According to ANSI/IEEE-829
13.2 Appendix B1: Schema for Test Case Specification
13.3 Appendix B2: Example Test Case Specification for an Order Processing Test
13.4 Appendix C1: Test Data Generation Script
13.5 Appendix C2: Test Result Validation Script