Software in numbers 

September 1, 2010


You can only plan the maintenance or further development of a software system well if you know the system precisely and can capture and evaluate it in figures. This is just one example of the benefits of metrics. A good inventory and evaluation of your software, in terms of quantity, quality, and complexity, is an important prerequisite for almost every type of software project.
In this practical book, you will find methods and metrics for measuring software along the three dimensions of quantity, complexity, and quality. All measurement approaches presented are field-tested and based on decades of experience in measuring software products and software processes, including development, maintenance, evolution, and migration.
The metrics and approaches presented will help you make projects more predictable, gain an overview of products and legacy systems, and track project progress more effectively.

Authors: Harry Sneed, Richard Seidl, Manfred Baumgartner

Publisher: Carl Hanser Verlag

ISBN: 978-3-446-42175-2

Edition: 1st edition

Available in bookshops and on Amazon

Table of Contents

1 Software measurement

1.1 The Essence of Software
1.2 Meaning and Purpose of Software Measurement
1.2.1 For Understanding (Comprehension) of Software
1.2.2 For Comparison of Software
1.2.3 For Prediction
1.2.4 For Project Control
1.2.5 For Interpersonal Understanding
1.3 Dimensions of the Substance of Software
1.3.1 Quantity Metrics of Software
1.3.2 Complexity Metrics of Software
1.3.3 Quality Metrics of Software
1.4 Views of the Substance of Software
1.5 Objects of Software Measurement
1.6 Goals of Software Measurement
1.7 About the Structure of this Book

2 Software quantity

2.1 Quantity Measures
2.2 Code Sizes
2.2.1 Code Files
2.2.2 Lines of Code
2.2.3 Instructions
2.2.4 Procedures or Methods
2.2.5 Modules or Classes
2.2.6 Decisions
2.2.7 Logic Branches
2.2.8 Calls
2.2.9 Declared Data Elements
2.2.10 Used Data Elements or Operands
2.2.11 Data Objects
2.2.12 Data Accesses
2.2.13 User Interfaces
2.2.14 System Messages
2.3 Design Sizes
2.3.1 Structured Design Sizes
2.3.2 Data Model Sizes
2.3.3 Object Model Sizes
2.4 Requirement Sizes
2.4.1 Requirements
2.4.2 Acceptance Criteria
2.4.3 Use Cases
2.4.4 Processing Steps
2.4.5 Interfaces
2.4.6 System Interfaces
2.4.7 System Actors
2.4.8 Relevant Objects
2.4.9 Object States
2.4.10 Conditions
2.4.11 Actions
2.4.12 Test Cases
2.5 Test Sizes
2.5.1 Test Cases
2.5.2 Test Case Attributes
2.5.3 Test Runs
2.5.4 Test Scripts or Test Procedures
2.5.5 Test Script Lines
2.5.6 Test Script Statements
2.5.7 Error Messages
2.6 Derived Size Measures
2.6.1 Function Points
2.6.2 Data Points
2.6.3 Object Points
2.6.4 Use Case Points
2.6.5 Test Case Points

3 Software complexity

3.1 Complexity in Software Metrics
3.1.1 Software Complexity according to the IEEE Standard
3.1.2 Software Complexity from Zuse's Perspective
3.1.3 Software Complexity according to Fenton
3.1.4 Complexity as a Disease of Software Development
3.1.5 Complexity Measurement according to Ebert and Dumke
3.1.6 The Alpha Complexity Metric
3.2 Increasing Software Complexity
3.2.1 Code Complexity - Why Java Is More Complex than COBOL
3.2.2 Design Complexity - Why Different Design Approaches Are Equally Complex in the End
3.2.3 Requirements Complexity - Why the Tasks to Be Solved Are Getting More Complex
3.3 General Measures of Software Complexity
3.3.1 Language Complexity
3.3.2 Structure Complexity
3.3.3 Algorithmic Complexity

4 The measurement of software quality

4.1 Quality Properties according to Boehm
4.1.1 Comprehensibility according to Boehm
4.1.2 Completeness according to Boehm
4.1.3 Portability according to Boehm
4.1.4 Modifiability according to Boehm
4.1.5 Testability according to Boehm
4.1.6 Usability according to Boehm
4.1.7 Reliability according to Boehm
4.1.8 Efficiency according to Boehm
4.2 Gilb and the Quantification of Quality
4.2.1 Functionality Measurement according to Gilb
4.2.2 Performance Measurement according to Gilb
4.2.3 Reliability Measurement according to Gilb
4.2.4 Data Assurance Measurement according to Gilb
4.2.5 Efficiency Measurement according to Gilb
4.2.6 Availability Measurement according to Gilb
4.2.7 Maintainability Measurement according to Gilb
4.3 McCall's Quality Tree
4.4 A German View of Software Quality
4.4.1 Quality Concept
4.4.2 Quality Classification
4.4.3 Quality Measures
4.4.4 Quality Variables
4.5 Automated Software Quality Assurance
4.5.1 Automated Measurement of Requirements Quality
4.5.2 Automated Measurement of Design Quality
4.5.3 Automated Measurement of Code Quality
4.5.4 Automated Measurement of Test Quality
4.6 Goal-Directed Software Quality Assurance
4.6.1 Quality Objectives
4.6.2 Quality Objectives Survey
4.6.3 Quality Objectives Measurement
4.7 IEEE and ISO Standards for Software Quality
4.7.1 Functionality according to ISO 9126
4.7.2 Reliability according to ISO 9126
4.7.3 Usability according to ISO 9126
4.7.4 Efficiency according to ISO 9126
4.7.5 Maintainability according to ISO 9126
4.7.6 Portability according to ISO 9126
4.8 Consequences of Lack of Quality Measurement

5 Requirements measurement

5.1 Tom Gilb's Impetus for Requirements Measurement
5.2 Other Approaches to Requirements Measurement
5.2.1 The Boehm Approach
5.2.2 N-Fold Inspection
5.2.3 Parnas & Weiss Requirements Inspection
5.2.4 Matching Requirements by Fraser and Vaishnavi (Requirements Inspection)
5.2.5 Tracking Requirements by Hayes
5.2.6 Evaluating Requirements by Glinz
5.2.7 ISO Standard 25030
5.2.8 The V-Model-XT as a Reference Model for Requirements Measurement
5.3 A Metric for Requirements by C. Ebert
5.3.1 Number of All Requirements in a Project
5.3.2 Degree of Completion of Requirements
5.3.3 Rate of Change of Requirements
5.3.4 Number of Causes of Change
5.3.5 Completeness of Requirements Model
5.3.6 Number of Requirements Defects
5.3.7 Number of Defect Types
5.3.8 Usefulness of Requirements
5.4 The Sophist Requirements Metric
5.4.1 Uniqueness of Requirements
5.4.2 Exclusion of the Passive Form in Requirements
5.4.3 Classifiability of Requirements
5.4.4 Identifiability of Requirements
5.4.5 Readability
5.4.6 Selectability
5.5 Requirements Measurement Tools
5.5.1 Requirements Measurement in Previous CASE Tools
5.5.2 Requirements Measurement in the CASE Tool SoftSpec
5.5.3 Requirements Measurement in Current Requirements Management Tools
5.5.4 Requirements Metrics from the TextAudit Tool
5.5.5 Representation of Requirements Metrics
5.6 Reasons for Requirements Measurement

6 Design measurement

6.1 Initial Approaches to a Design Metric
6.1.1 The MECCA Approach by Tom Gilb
6.1.2 The Structured Design Approach by Yourdon and Constantine
6.1.3 The Data Flow Approach by Henry and Kafura
6.1.4 The System Outline Approach by Belady and Evangelisti
6.2 Design Metrics by Card and Glass
6.2.1 Design Quality Measures
6.2.2 Design Complexity Measures
6.2.3 Experience with the First Design Metric
6.3 The SOFTCON Design Metric
6.3.1 Formal Completeness and Consistency Check
6.3.2 Technical Quality Measures for System Design
6.4 Object-Oriented Design Metrics
6.4.1 The OO Metric of Chidamber and Kemerer
6.4.2 MOOD Design Metrics
6.5 Design Metrics in UMLAudit
6.5.1 Design Quantity Metric
6.5.2 Design Complexity Metric
6.5.3 Design Quality Metric
6.5.4 Design Size Metric
6.6 Design Metrics for Web Applications

7 Code metrics

7.1 Program Structure
7.2 Approaches to Measuring Code Complexity
7.2.1 Halstead's Software Science
7.2.2 McCabe's Cyclomatic Complexity
7.2.3 Chapin's Q-Complexity
7.2.4 Elshof's Reference Complexity
7.2.5 Prather's Nesting Complexity
7.2.6 Other Code Complexity Measures
7.3 Approaches to Measuring Code Quality
7.3.1 Simon's Code Quality Index
7.3.2 Oman's Maintainability Index
7.3.3 Goal-Oriented Code Quality Measurement
7.4 Code Metrics in the SoftAudit System
7.4.1 Code Quantity Metric
7.4.2 Code Complexity
7.4.3 Code Quality
7.5 Code Measurement Tools
7.5.1 The First Code Measurement Tools
7.5.2 Code Measurement Tools of the 1990s
7.5.3 Today's Code Measurement Tools
7.6 Example of Code Measurement

8 Test metrics

8.1 Test Measurement in Previous Project Practice
8.1.1 The ITS Project at Siemens
8.1.2 The Wella Migration Project
8.2 Test Metrics by Hetzel
8.3 Test Metrics at IBM Rochester
8.4 Measures of System Testing
8.4.1 Test Time
8.4.2 Test Costs
8.4.3 Test Cases
8.4.4 Error Messages
8.4.5 System Test Coverage
8.4.6 Hutcheson's Recommendations
8.4.7 Test Points
8.5 Test Metrics in the GEOS Project
8.5.1 Measurement of Test Cases
8.5.2 Measurement of Test Coverage
8.5.3 Measurement of Error Finding
8.5.4 Evaluation of Test Metrics
8.6 Test Metrics according to Sneed and Jungmayr
8.6.1 Testability Metric
8.6.2 Test Planning Metric
8.6.3 Test Progress Metric
8.6.4 Test Quality Metric

9 Productivity measurement of software

9.1 Productivity Measurement - A Controversial Topic
9.2 Software Productivity in Retrospect
9.2.1 Document Measurement with the Fog Index
9.2.2 Productivity Measurement at the Standard Bank of South Africa
9.2.3 The Emergence of the Function Point Method
9.2.4 Boehm's COCOMO-I Model
9.2.5 Putnam's Software Equation
9.2.6 The Data Point Method
9.2.7 The Object Point Method
9.2.8 The Use Case Point Method
9.3 Alternative Productivity Measures
9.4 Productivity Calculation Based on Software Size
9.5 Effort Measurement
9.6 Software Productivity Types
9.6.1 Programming Productivity
9.6.2 Design Productivity
9.6.3 Analysis Productivity
9.6.4 Test Productivity
9.6.5 Overall Productivity
9.7 Productivity Studies
9.7.1 Software Productivity Studies in the USA
9.7.2 Software Productivity Studies in Europe
9.7.3 Productivity Comparison Problems
9.8 Productivity Measurement by Value Contribution

10 Measuring maintenance productivity

10.1 Previous Approaches to Measuring Software Maintainability
10.1.1 Stability Measures by Yau and Collofello
10.1.2 Maintenance Survey by the U.S. Air Force
10.1.3 The Maintainability Survey by Vessey and Weber
10.1.4 Software Maintainability Assessment by Berns
10.1.5 The Maintenance Survey by Gremillion
10.1.6 Maintenance Metrics by Hewlett-Packard
10.1.7 Rombach's Maintenance Measurement
10.1.8 Measuring the Maintainability of Commercial COBOL Systems
10.1.9 Oman's Maintainability Index
10.2 Approaches to Measuring the Maintainability of Object-Oriented Software
10.2.1 Initial Investigation of the Maintainability of Object-Oriented Programs
10.2.2 Chidamber/Kemerer's OO Metric for Maintainability
10.2.3 MOOD Metric as an Indicator of Maintainability
10.2.4 An Empirical Validation of the OO Metric for Estimating Maintenance Effort
10.2.5 The Impact of Centralized Control on the Maintainability of an OO System
10.2.6 Calculating Maintenance Effort Based on Program Complexity
10.2.7 Comparing the Maintainability of Object-Oriented and Procedural Software
10.2.8 On the Change of Maintainability in the Course of Software Evolution
10.3 Measuring Maintenance Productivity
10.3.1 First Approaches to Measuring Maintenance Productivity
10.3.2 Measuring Program Maintainability in the ESPRIT Project MetKit
10.3.3 Measuring Maintenance Productivity in the US Navy
10.3.4 Measuring Maintenance Productivity at Martin Marietta
10.3.5 Comparing Maintenance Productivity of Representative Swiss Users

11 Software measurement in practice

11.1 Permanent Measurement Processes
11.1.1 Involvement of Stakeholders
11.1.2 Building on Existing Metrics
11.1.3 Transparency of the Process
11.2 Examples of Permanent Measurement Processes
11.2.1 Hewlett-Packard's Software Measurement Initiative
11.2.2 Process and Product Measurement at Siemens AG
11.2.3 Built-in Software Measurement in the GEOS Project
11.3 Overarching Software Cockpits and Dashboards
11.3.1 Structure and Functionality of the Software Cockpit
11.3.2 Dashboard
11.3.3 Scorecard
11.3.4 Interactive Analyses and Reports
11.4 One-Off Measurement Procedures
11.4.1 Agreement on the Measurement Objectives
11.4.2 Selection of the Metric
11.4.3 Provision of the Measurement Tools
11.4.4 Adoption of the Measurement Objects
11.4.5 Execution of the Measurement
11.4.6 Evaluation of the Measurement Results
11.5 Example of a One-Off Measurement
