SOFTWARE METRICS FOR CONTROL AND QUALITY ASSURANCE COURSE OVERVIEW

1/10/00


Table of Contents

SOFTWARE METRICS FOR CONTROL AND QUALITY ASSURANCE COURSE OVERVIEW

Course Objectives

Course Structure

Recommended Reading

LESSON 1: SOFTWARE QUALITY METRICS BASICS

Lesson 1 Objectives

How many Lines of Code?

What is software quality?

Software quality - relevance

Software Quality Models

Definition of system reliability

What is a software failure?

Human errors, faults, and failures

Processing errors

Relationship between faults and failures (Adams 1984)

The relationship between faults and failures

The ‘defect density’ measure: an important health warning

Defect density vs module size

A Study in Relative Efficiency of Testing Methods

The problem with ‘problems’

Incident Types

Generic Data

Example: Failure Data

Example: Fault Data (1) - reactive

Example: Fault Data (2) - responsive

Example: Change Request

Tracking incidents to components

Fault classifications used in Eurostar control system

Lesson 1 Summary

LESSON 2: SOFTWARE METRICS PRACTICE

Lesson 2 Objectives

Why software measurement?

From Goals to Actions

Goal Question Metric (GQM)

The Metrics Plan

The Enduring LOC Measure

Example: Software Productivity at Toshiba

Problems with LOC type measures

Fundamental software size attributes

The search for more discriminating metrics

The 1970s: Measures of Source Code

Halstead’s Software Science Metrics

McCabe’s Cyclomatic Complexity Metric v

Flowgraph based measures

The 1980s: Early Life-Cycle Measures

Software Cost Estimation

Simple COCOMO Effort Prediction

COCOMO Development Time Prediction

Regression Based Cost Modelling

Albrecht’s Function Points

Function Points: Example

Function Points: Applications

Function Points and Program Size

The 1990s: Broader Perspective

The SEI Capability Maturity Model

Results of 1987-1991 SEI Assessments

Process improvement at Motorola

IBM Space Shuttle Software Metrics Program (1)

IBM Space Shuttle Software Metrics Program (2)

IBM Space Shuttle Software Metrics Program (3)

ISO 9126 Software Product Evaluation Standard

Lesson 2 Summary

LESSON 3: SOFTWARE METRICS FRAMEWORK

Lesson 3 Objectives

Software Measurement Activities

Opposing Views on Measurement?

Definition of Measurement

Example Measures

Avoiding Mistakes in Measurement

Be Clear About Your Attribute

A Cautionary Note

Types and uses of measurement

Some Direct Software Measures

Some Indirect Software Measures

Predictive Measurement

No Short Cut to Accurate Prediction

Products, Processes, and Resources

Internal and External Attributes

The Framework Applied

Lesson 3 Summary

CASE STUDY: COMPANY OBJECTIVES

General System Information

Main Data

Case Study Components

Single Incident Close Report

Single Incident Close Report: Improved Version

Fault Classification

Missing Data

‘Reliability’ Trend

Identifying Fault-Prone Systems?

Analysis of Fault Types

Fault Types and System Areas

Maintainability Across System Areas

Maintainability Across Fault Types

Case study results with additional data: System Structure

Normalised Fault Rates (1)

Normalised Fault Rates (2)

Case Study 1 Summary

LESSON 4: SOFTWARE METRICS: MEASUREMENT THEORY AND STATISTICAL ANALYSIS

Lesson 4 Objectives

Natural Evolution of Measures

Measurement Theory Objectives

Measurement Theory: Key Components

Representation Condition

Meaningfulness in Measurement

Measurement Scale Types

Nominal Scale Measurement

Ordinal Scale Measurement

Interval Scale Measurement

Ratio Scale Measurement

Absolute Scale Measurement

Problems of measuring program ‘complexity’

Validation of Measures

Validation of Prediction Systems

Scale Types Summary

Meaningfulness and Statistics

Example: The Mean

Alternative Measures of Average

Summary of Meaningful Statistics

Non-Parametric Techniques

Box Plots

Box Plots: Examples

Scatterplots

Example Scatterplot: Length vs Effort

Determining Relationships

Causes of Outliers

Control Charts

Control Chart Example

Lesson 4 Summary

LESSON 5: EMPIRICAL RESULTS

Lesson 5 Objectives

Case study: Basic data

Hypotheses tested

Hypothesis 1a: a small number of modules contain most of the faults discovered during testing

Hypothesis 1b:

Hypothesis 2a: a small number of modules contain most of the operational faults

Hypothesis 2b

Hypothesis 3: Higher incidence of faults in function testing implies higher incidence of faults in system testing

Hypothesis 4: Higher incidence of faults pre-release implies higher incidence of faults post-release

Pre-release vs post-release faults

Are size metrics good predictors of fault- and failure-prone modules?

Plotting faults against size

Cyclomatic complexity against pre- and post-release faults

Defect density vs size

Complexity metrics vs simple size metrics

Benchmarking hypotheses

Case study conclusions

Evaluating Software Engineering Technologies through Measurement

The Uncertainty of Reliability Achievement methods

Actual Promotional Claims for Formal Methods

The Virtues of Cleanroom

The Virtues of Verification (in Cleanroom)

Use of Measurement in Evaluating Methods

Weinberg-Schulman Experiment

Empirical Evidence About Software Engineering Methods

The Case of Flowcharts vs Pseudocode (1)

The Case of Flowcharts vs Pseudocode (2)

The Evidence for Structured Programming

The Virtues of Structured Programming

Management Before Technology

Formal Methods: Rewarding ‘Quantified’ Success

IBM/PRG Project: Use of Z in CICS

CICS study: problems found during development cycle

Comprehensibility of Formal specifications

Difficulty of understanding Z

Experiment to assess effect of structuring Z on comprehension

Comparisons of scores for the different specifications

Formal Methods for Safety Critical Systems

SMARTIE Formal Methods Study: CDIS Air Traffic Control System

CDIS fault report form

Relative sizes and changes reported for each design type in delivered code

Code changes by design type for modules requiring many changes

Changes Normalised by KLOC for Delivered Code by Design Type

Faults discovered during unit testing

Changes to delivered code as a result of post-delivery problems

Post-delivery problems discovered in each problem category

Post-delivery problem rates reported in the literature

Efficacy of Formal Methods: Summary

Lesson 5 Summary

LESSON 6: SOFTWARE METRICS FOR RISK AND UNCERTAINTY

Lesson 6 Objectives

The classic size-driven approach

Predicting road fatalities

Predicting software effort

Typical software/systems assessment problem

What we really need for assessment

Bayesian Belief Nets (BBNs)

Defects BBN (simplified)

Bayes’ Theorem

Bayesian Propagation

Classic approach to defect modelling

Problems with classic defects modelling approach

Many defects pre-release, few after

Few defects pre-release, many after

Schematic of classic resource model

Problems with classic approach to resource prediction

Classic approach cannot handle questions we really want to ask

Schematic of ‘resources’ BBN

“Appropriateness of resources” Subnet

Specific values for problem size

Now we require high accuracy

Actual resources entered

Actual resource quality entered

Software defects and resource prediction summary

Conclusions: Benefits of BBNs

Author: Norman Fenton

Email: norman@agena.co.uk

Home Page: http://www.csr.city.ac.uk/people/norman.fenton/