Software in Numbers

 September 1, 2010

You can only plan the maintenance and further development of a software system well if you know the system precisely and can capture and evaluate it in figures. That is just one example of the benefits of metrics. A sound inventory and assessment of your software, in terms of quantity, quality and complexity, is an essential prerequisite for almost every kind of software project.
This practical book presents methods and metrics for measuring software along the three dimensions of quantity, complexity and quality. All of the measurement approaches presented are field-tested and draw on decades of experience in measuring software products and software processes, including development, maintenance, evolution and migration.
The metrics and approaches presented will help you make projects more predictable, gain an overview of products and legacy systems, and keep project progress under better control.

Authors: Harry Sneed, Richard Seidl, Manfred Baumgartner

Publisher: Carl Hanser Verlag

ISBN: 978-3-446-42175-2

Edition: 1st edition

Table of Contents

1 Software measurement

1.1 The Essence of Software
1.2 The Purpose of Software Measurement
1.2.1 For Understanding Software
1.2.2 For Comparison of Software
1.2.3 For Prediction
1.2.4 For Project Control
1.2.5 For Interpersonal Understanding
1.3 Dimensions of Software as a Substance
1.3.1 Software Quantity Metrics
1.3.2 Software Complexity Metrics
1.3.3 Software Quality Metrics
1.4 Views of Software as a Substance
1.5 Objects of Software Measurement
1.6 Goals of Software Measurement
1.7 On the Structure of This Book

2 Software quantity

2.1 Quantity Measures
2.2 Code Sizes
2.2.1 Code Files
2.2.2 Code Lines
2.2.3 Statements
2.2.4 Procedures and Methods
2.2.5 Modules and Classes
2.2.6 Decisions
2.2.7 Logic Branches
2.2.8 Calls
2.2.9 Declared Data Elements
2.2.10 Used Data Elements (Operands)
2.2.11 Data Objects
2.2.12 Data Accesses
2.2.13 User Interfaces
2.2.14 System Messages
2.3 Design Sizes
2.3.1 Structured Design Sizes
2.3.2 Data Model Sizes
2.3.3 Object Model Sizes
2.4 Requirement Sizes
2.4.1 Requirements
2.4.2 Acceptance Criteria
2.4.3 Use Cases
2.4.4 Processing Steps
2.4.5 Interfaces
2.4.6 System Interfaces
2.4.7 System Actors
2.4.8 Relevant Objects
2.4.9 Object States
2.4.10 Conditions
2.4.11 Actions
2.4.12 Test Cases
2.5 Test Sizes
2.5.1 Test Cases
2.5.2 Test Case Attributes
2.5.3 Test Runs
2.5.4 Test Scripts/Procedures
2.5.5 Test Script Lines
2.5.6 Test Script Statements
2.5.7 Error Messages
2.6 Derived Size Measures
2.6.1 Function Points
2.6.2 Data Points
2.6.3 Object Points
2.6.4 Use Case Points
2.6.5 Test Case Points

3 Software complexity

3.1 Complexity in Software Metrics
3.1.1 Software Complexity According to the IEEE Standard
3.1.2 Software Complexity from the Perspective of Zuse
3.1.3 Software Complexity According to Fenton
3.1.4 Complexity as a Disease of Software Development
3.1.5 Complexity Measurement According to Ebert and Dumke
3.1.6 The Alpha Complexity Metric
3.2 Increasing Software Complexity
3.2.1 Code Complexity - Why Java is More Complex than COBOL
3.2.2 Design Complexity - Why Different Design Approaches are Equally Complex in the End
3.2.3 Requirements Complexity - Why the Tasks to be Solved Become More Complex
3.3 Generally Valid Measures of Software Complexity
3.3.1 Language Complexity
3.3.2 Structure Complexity
3.3.3 Algorithmic Complexity

4 The measurement of software quality

4.1 Quality Properties According to Boehm
4.1.1 Comprehensibility According to Boehm
4.1.2 Completeness According to Boehm
4.1.3 Portability According to Boehm
4.1.4 Modifiability According to Boehm
4.1.5 Testability According to Boehm
4.1.6 Usability According to Boehm
4.1.7 Reliability According to Boehm
4.1.8 Efficiency According to Boehm
4.2 Gilb and the Quantification of Quality
4.2.1 Functionality Measurement According to Gilb
4.2.2 Performance Measurement According to Gilb
4.2.3 Reliability Measurement According to Gilb
4.2.4 Data Protection Measurement According to Gilb
4.2.5 Efficiency Measurement According to Gilb
4.2.6 Availability Measurement According to Gilb
4.2.7 Maintainability Measurement According to Gilb
4.3 McCall's Quality Tree
4.4 A German View of Software Quality
4.4.1 Quality Concept
4.4.2 Quality Classification
4.4.3 Quality Measures
4.4.4 Quality Variables
4.5 Automated Software Quality Assurance
4.5.1 Automated Measurement of Requirements Quality
4.5.2 Automated Measurement of Design Quality
4.5.3 Automated Measurement of Code Quality
4.5.4 Automated Measurement of Test Quality
4.6 Goal-Directed Software Quality Assurance
4.6.1 Quality Goal Determination
4.6.2 Quality Goal Survey
4.6.3 Quality Goal Measurement
4.7 IEEE and ISO Standards for Software Quality
4.7.1 Functionality According to ISO 9126
4.7.2 Reliability According to ISO 9126
4.7.3 Usability According to ISO 9126
4.7.4 Efficiency According to ISO 9126
4.7.5 Maintainability According to ISO 9126
4.7.6 Portability According to ISO 9126
4.8 Consequences of Missing Quality Measurement

5 Requirements measurement

5.1 Tom Gilb's Impetus for Requirements Measurement
5.2 Other Approaches to Requirements Measurement
5.2.1 The Boehm Approach
5.2.2 N-Fold Inspection
5.2.3 Parnas & Weiss Requirements Inspection
5.2.4 Requirements Alignment by Fraser and Vaishnavi (Requirements Inspection)
5.2.5 Requirements Tracking by Hayes
5.2.6 Requirements Evaluation by Glinz
5.2.7 ISO Standard 25030
5.2.8 The V-Modell XT as a Reference Model for Requirements Measurement
5.3 A Requirements Metric by C. Ebert
5.3.1 Number of Requirements in a Project
5.3.2 Degree of Completion of the Requirements
5.3.3 Change Rate of the Requirements
5.3.4 Number of Change Causes
5.3.5 Completeness of the Requirements Model
5.3.6 Number of Requirements Defects
5.3.7 Number of Defect Types
5.3.8 Usefulness of Requirements
5.4 The SOPHIST Requirements Metric
5.4.1 Uniqueness of Requirements
5.4.2 Avoidance of the Passive Voice in Requirements
5.4.3 Classifiability of Requirements
5.4.4 Identifiability of Requirements
5.4.5 Readability
5.4.6 Selectability
5.5 Tools for Requirements Measurement
5.5.1 Requirements Measurement in Earlier CASE Tools
5.5.2 Requirements Measurement in the CASE Tool SoftSpec
5.5.3 Requirements Measurement in Current Requirements Management Tools
5.5.4 Requirements Metrics from the TextAudit Tool
5.5.5 Presentation of Requirements Metrics
5.6 Reasons for Requirements Measurement

6 Design measurement

6.1 Initial Approaches to a Design Metric
6.1.1 The MECCA Approach by Tom Gilb
6.1.2 The Structured Design Approach by Yourdon and Constantine
6.1.3 The Data Flow Approach by Henry and Kafura
6.1.4 The System Partitioning Approach of Belady and Evangelisti
6.2 Design Metrics of Card and Glass
6.2.1 Design Quality Measures
6.2.2 Design Complexity Measures
6.2.3 Experience with the First Design Metric
6.3 The SOFTCON Design Metric
6.3.1 Formal Completeness and Consistency Check
6.3.2 Technical Quality Measures for System Design
6.4 Object-Oriented Design Metrics
6.4.1 The OO Metric of Chidamber and Kemerer
6.4.2 MOOD Design Metric
6.5 Design Metric in UMLAudit
6.5.1 Design Quantity Metric
6.5.2 Design Complexity Metric
6.5.3 Design Quality Metric
6.5.4 Design Size Metric
6.6 Design Metric for Web Applications

7 Code metrics

7.1 Program Structure
7.2 Approaches to Measuring Code Complexity
7.2.1 Halstead's Software Science
7.2.2 McCabe's Cyclomatic Complexity
7.2.3 Chapin's Q-Complexity
7.2.4 Elshoff's Reference Complexity
7.2.5 Prather's Nesting Complexity
7.2.6 Other Code Complexity Measures
7.3 Approaches to Measuring Code Quality
7.3.1 Simon's Code Quality Index
7.3.2 Oman's Maintainability Index
7.3.3 Goal-oriented Code Quality Measurement
7.4 Code Metrics in the SoftAudit System
7.4.1 Code Quantity Metric
7.4.2 Code Complexity Metric
7.4.3 Code Quality Metric
7.5 Tools for Code Measurement
7.5.1 The First Code Measurement Tools
7.5.2 Code Measurement Tools of the 1990s
7.5.3 Today's Code Measurement Tools
7.6 Example of a Code Measurement

8 Test metrics

8.1 Test Measurement in Previous Project Practice
8.1.1 The ITS Project at Siemens
8.1.2 The Wella Migration Project
8.2 Test Metrics According to Hetzel
8.3 Test Metrics at IBM Rochester
8.4 Measures of System Testing
8.4.1 Test Time
8.4.2 Test Costs
8.4.3 Test Cases
8.4.4 Error Messages
8.4.5 System Test Coverage
8.4.6 Recommendations by Hutcheson
8.4.7 Test Points
8.5 Test Metrics in the GEOS Project
8.5.1 Measurement of Test Cases
8.5.2 Measurement of Test Coverage
8.5.3 Measurement of Error Finding
8.5.4 Evaluation of Test Metrics
8.6 Sneed and Jungmayr Test Metrics
8.6.1 Testability Metric
8.6.2 Test Planning Metric
8.6.3 Test Progress Metric
8.6.4 Test Quality Metric

9 Productivity measurement of software

9.1 Productivity Measurement - A Controversial Topic
9.2 Software Productivity in Retrospect
9.2.1 Document Measurement with the Fog Index
9.2.2 Productivity Measurement at the Standard Bank of South Africa
9.2.3 The Emergence of the Function Point Method
9.2.4 Boehm's COCOMO-I Model
9.2.5 Putnam's Software Equation
9.2.6 The Data Point Method
9.2.7 The Object Point Method
9.2.8 The Use Case Point Method
9.3 Alternative Productivity Measures
9.4 Productivity Calculation Based on Software Size
9.5 Effort Measurement
9.6 Software Productivity Types
9.6.1 Programming Productivity
9.6.2 Design Productivity
9.6.3 Analysis Productivity
9.6.4 Test Productivity
9.6.5 Total Productivity
9.7 Productivity Studies
9.7.1 Software Productivity Studies in the USA
9.7.2 Software Productivity Studies in Europe
9.7.3 Productivity Comparison Problems
9.8 Productivity Measurement by Value Contribution

10 Measuring maintenance productivity

10.1 Previous Approaches to Measuring Software Maintainability
10.1.1 Stability Measures by Yau and Collofello
10.1.2 Maintenance Survey at the U.S. Air Force
10.1.3 The Maintainability Study by Vessey and Weber
10.1.4 Software Maintainability Assessment by Berns
10.1.5 The Maintenance Survey by Gremillion
10.1.6 Maintenance Metrics at Hewlett-Packard
10.1.7 Maintenance Measurement by Rombach
10.1.8 Measuring the Maintainability of Commercial COBOL Systems
10.1.9 Oman's Maintainability Index
10.2 Approaches to Measuring the Maintainability of Object-Oriented Software
10.2.1 First Investigation of the Maintainability of Object-Oriented Programs
10.2.2 Chidamber/Kemerer's OO Metric for Maintainability
10.2.3 MOOD Metric as an Indicator of Maintainability
10.2.4 An Empirical Validation of the OO Metric for Estimating Maintenance Effort
10.2.5 The Impact of Centralized Control on the Maintainability of an OO System
10.2.6 Calculating Maintenance Effort Based on Program Complexity
10.2.7 Comparing the Maintainability of Object-Oriented and Procedural Software
10.2.8 On the Change of Maintainability in the Course of Software Evolution
10.3 Maintenance Productivity Measurement
10.3.1 First Approaches to Maintenance Productivity Measurement
10.3.2 Measurement of Program Maintainability in the ESPRIT Project MetKit
10.3.3 Maintenance Productivity Measurement in the US Navy
10.3.4 Measurement of Maintenance Productivity at Martin Marietta
10.3.5 Comparison of Maintenance Productivity of Representative Swiss Users

11 Software measurement in practice

11.1 Enduring Measurement Processes
11.1.1 Involving Stakeholders
11.1.2 Building on Existing Metrics
11.1.3 Transparency of the Process
11.2 Examples of Enduring Measurement Processes
11.2.1 Hewlett-Packard's Software Measurement Initiative
11.2.2 Process and Product Measurement at Siemens AG
11.2.3 Built-in Software Measurement in the GEOS Project
11.3 Overarching Software Cockpits and Dashboards
11.3.1 Structure and Functionality of the Software Cockpit
11.3.2 Dashboard
11.3.3 Scorecard
11.3.4 Interactive Analyses and Reports
11.4 One-off Measurement Methods
11.4.1 Agreement on Measurement Objectives
11.4.2 Selecting the Metrics
11.4.3 Providing the Measurement Tools
11.4.4 Adopting the Measurement Objects
11.4.5 Performing the Measurement
11.4.6 Evaluating the Measurement Results
11.5 Example of a One-off Measurement