2. Deliverables by Phase
Possible deliverables by phase:
Concept: Concept Document; Statement of Work (SOW); Project Charter; RFP & Proposal
Software Requirements Analysis: Requirements Document (Software Requirements Specification); Work Breakdown Structure (WBS); Functional Specification (Top Level Design Specification); Entity Relationship Diagram; Data Flow Diagram
Design: Detailed Design Specification; Object Diagrams; Detailed Data Model; Project Development Plan (Software Development Plan); Coding Standards; Baseline Project Plan; Quality Assurance Plan; Configuration Management Plan; Risk Management Plan; Integration Plan; Detailed SQA Test Plan; SQA Test Cases
Coding and Debugging: Working Code; Unit Tests; Acceptance Test Procedures
Systems Testing: Tested Application; Maintenance Specification
Deployment & Maintenance: Deployed Application; User Documentation; Training Plan
3. Risk management
Risk management is concerned with identifying risks and drawing up plans to minimise their effect on a project.
A risk is a probability that some adverse circumstance will occur.
• Project risks affect the schedule or resources.
• Product risks affect the quality or performance of the software being developed.
• Business risks affect the organisation developing or procuring the software.
4. Software risks
Risk | Risk type | Description
Staff turnover | Project | Experienced staff will leave the project before it is finished.
Management change | Project | There will be a change of organisational management with different priorities.
Hardware unavailability | Project | Hardware which is essential for the project will not be delivered on schedule.
Requirements change | Project and product | There will be a larger number of changes to the requirements than anticipated.
Specification delays | Project and product | Specifications of essential interfaces are not available on schedule.
Size underestimate | Project and product | The size of the system has been underestimated.
CASE tool under-performance | Product | CASE tools which support the project do not perform as anticipated.
Technology change | Business | The underlying technology on which the system is built is superseded by new technology.
Product competition | Business | A competitive product is marketed before the system is completed.
5. The Risk Management Process
• Risk identification
– Identify project, product and business risks
• Risk analysis
– Assess the likelihood and consequences of these
risks
• Risk planning
– Draw up plans to avoid or minimise the effects of
the risk
• Risk monitoring
– Monitor the risks throughout the project
6. The risk management process
Risk identification → Risk analysis → Risk planning → Risk monitoring
Stage outputs: list of potential risks → prioritised risk list → risk avoidance and contingency plans → risk assessment
8. Risks and risk types
Risk type | Possible risks
Technology | The database used in the system cannot process as many transactions per second as expected. Software components which should be reused contain defects which limit their functionality.
People | It is impossible to recruit staff with the skills required. Key staff are ill and unavailable at critical times. Required training for staff is not available.
Organisational | The organisation is restructured so that different management are responsible for the project. Organisational financial problems force reductions in the project budget.
Tools | The code generated by CASE tools is inefficient. CASE tools cannot be integrated.
Requirements | Changes to requirements which require major design rework are proposed. Customers fail to understand the impact of requirements changes.
Estimation | The time required to develop the software is underestimated. The rate of defect repair is underestimated. The size of the software is underestimated.
9. Risk analysis
• Assess probability and seriousness of each risk
• Probability may be
– very low
– low
– moderate
– high or very high
• Risk effects might be
– catastrophic
– serious
– tolerable
– insignificant
10. Risk analysis
Risk | Probability | Effects
Organisational financial problems force reductions in the project budget. | Low | Catastrophic
It is impossible to recruit staff with the skills required for the project. | High | Catastrophic
Key staff are ill at critical times in the project. | Moderate | Serious
Software components which should be reused contain defects which limit their functionality. | Moderate | Serious
Changes to requirements which require major design rework are proposed. | Moderate | Serious
The organisation is restructured so that different management are responsible for the project. | High | Serious
The database used in the system cannot process as many transactions per second as expected. | Moderate | Serious
The time required to develop the software is underestimated. | High | Serious
CASE tools cannot be integrated. | High | Tolerable
Customers fail to understand the impact of requirements changes. | Moderate | Tolerable
Required training for staff is not available. | Moderate | Tolerable
The rate of defect repair is underestimated. | Moderate | Tolerable
The size of the software is underestimated. | High | Tolerable
The code generated by CASE tools is inefficient. | Moderate | Insignificant
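Because probability and effects are ordinal scales, a risk register can be ranked mechanically once each risk has been assessed. A minimal sketch in Python; the numeric weights and the selection of risks below are illustrative assumptions, not part of the original table:

# Minimal sketch of a risk register ranked with the ordinal scales from
# slides 9-10. The numeric weights are illustrative assumptions; any
# monotonic mapping would order the list the same way.

PROBABILITY = {"very low": 1, "low": 2, "moderate": 3, "high": 4, "very high": 5}
EFFECT = {"insignificant": 1, "tolerable": 2, "serious": 3, "catastrophic": 4}

risks = [
    ("Organisational financial problems force budget reductions", "low", "catastrophic"),
    ("Cannot recruit staff with the required skills", "high", "catastrophic"),
    ("Time required to develop the software is underestimated", "high", "serious"),
    ("CASE tools cannot be integrated", "high", "tolerable"),
]

# Sort by effect first, then probability (both descending), mirroring how the
# slide groups catastrophic risks above serious and tolerable ones.
for name, prob, effect in sorted(
    risks, key=lambda r: (EFFECT[r[2]], PROBABILITY[r[1]]), reverse=True
):
    print(f"{effect:>12} / {prob:<8} {name}")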
11. Risk planning
Consider each risk and develop a strategy to manage it.
Avoidance strategies: reduce the probability that the risk will arise.
Minimisation strategies: reduce the impact of the risk on the project or product.
Contingency plans: plans prepared in advance to deal with the risk if it arises.
12. Risk management strategies
Risk | Strategy
Organisational financial problems | Prepare a briefing document for senior management showing how the project is making a very important contribution to the goals of the business.
Recruitment problems | Alert the customer to potential difficulties and the possibility of delays; investigate buying-in components.
Staff illness | Reorganise the team so that there is more overlap of work and people therefore understand each other's jobs.
Defective components | Replace potentially defective components with bought-in components of known reliability.
Requirements changes | Derive traceability information to assess requirements change impact; maximise information hiding in the design.
Organisational restructuring | Prepare a briefing document for senior management showing how the project is making a very important contribution to the goals of the business.
Database performance | Investigate the possibility of buying a higher-performance database.
Underestimated development time | Investigate buying-in components; investigate use of a program generator.
13. Risk monitoring
• Assess each identified risk regularly to decide whether it is becoming more or less probable
• Also assess whether the effects of the risk have changed
• Each key risk should be discussed at management progress meetings
14. Risk factors
Risk type | Potential indicators
Technology | Late delivery of hardware or support software; many reported technology problems
People | Poor staff morale; poor relationships amongst team members; job availability
Organisational | Organisational gossip; lack of action by senior management
Tools | Reluctance by team members to use tools; complaints about CASE tools; demands for higher-powered workstations
Requirements | Many requirements change requests; customer complaints
Estimation | Failure to meet agreed schedule; failure to clear reported defects
16. Software Measurement
Objectives
– Assessing status
• Projects
• Products for a specific project or projects
• Processes
• Resources
– Identifying trends
• Need to be able to differentiate between a healthy project and one
that’s in trouble
– Determining corrective action
• Measurements should indicate the appropriate corrective action, if
any is required.
17. Software Measurement Objectives
• Types of information required to understand,
control, and improve projects:
– Managers
• What does the process cost?
• How productive is the staff?
• How good is the code?
• Will the customer/user be satisfied?
• How can we improve?
– Engineers
• Are the requirements testable?
• Have all the faults been found?
• Have the product or process goals been met?
• What will happen in the future?
18. The Scope of Software Metrics
– Cost and effort estimation
– Productivity measures and models
– Data collection
– Quality models and measures
– Reliability models
– Performance evaluation and models
– Structural and complexity metrics
– Capability-maturity assessment
– Management by metrics
– Evaluation of methods and tools
19. The Scope of Software Metrics
• The Scope of Software Metrics – some details
– Possible productivity model
[Productivity model diagram: Productivity decomposed into Cost (personnel and resources: time, money, hardware and software environment) and Value (quality: reliability, defects; quantity: size, functionality; problem difficulty and constraints)]
20. The Scope of Software Metrics
• The Scope of Software Metrics – some details
– Software quality model
[Software quality model diagram (use → factor → criteria → metrics). Product operation factors: usability, reliability, efficiency. Product revision factors: reusability, maintainability, portability, testability. Criteria include: communicativeness, accuracy, consistency, device efficiency, accessibility, completeness, structuredness, conciseness, device independence, legibility, self-descriptiveness, traceability]
22. Measurement Basics
• Direct and Indirect Measurement
– Direct measure – relates an attribute to a number or symbol without reference to any other object or attribute (e.g., height).
– Indirect measure
• Used when an attribute must be measured by combining
several of its aspects (e.g., density)
• Requires a model of how measures are related to each
other
23. Measurement Basics
• Direct and Indirect Measures for Software – examples
– Direct
• Length of source code (lines of code)
• Duration of testing process
• Number of defects discovered during test
• Time a developer spends on a project
– Indirect
• Programmer productivity (LOC/workmonths of effort)
• Module defect density (number of defects/module size)
• Defect detection efficiency (# defects detected/total defects)
• Requirements stability (initial # requirements/total # requirements)
• Test effectiveness ratio (number of items covered/total number of items)
• System spoilage (effort spent fixing faults/total project effort)
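A minimal sketch of how the indirect measures above are derived from direct measures; the variable names and sample figures are illustrative assumptions:

# Minimal sketch: deriving indirect measures from direct measures.
loc = 12_000                 # length of source code (lines of code)
effort_months = 30.0         # developer time spent on the project (work-months)
defects_found_in_test = 45
total_defects_known = 60     # defects found in test plus those found later
fault_fix_effort = 4.5       # work-months spent fixing faults

productivity = loc / effort_months                       # LOC per work-month
defect_density = defects_found_in_test / (loc / 1000)    # defects per KLOC
detection_efficiency = defects_found_in_test / total_defects_known
system_spoilage = fault_fix_effort / effort_months

print(f"productivity        : {productivity:.0f} LOC/work-month")
print(f"defect density      : {defect_density:.2f} defects/KLOC")
print(f"detection efficiency: {detection_efficiency:.0%}")
print(f"system spoilage     : {system_spoilage:.0%}")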
25. Software product quality metrics
• The quality of a product: the "totality of characteristics that bear on its ability to satisfy stated or implied needs".
• Metrics of the external quality attributes:
– producer's perspective: "conformance to requirements"
– customer's perspective: "fitness for use" (the customer's expectations)
26. Quality metrics
• Two levels of software product quality
metrics:
Intrinsic product quality
Customer oriented metrics
27. Intrinsic product quality metrics:
Reliability: number of hours the software can run
before a failure
Defect density (rate):
number of defects contained in software, relative
to its size.
Customer oriented metrics:
Customer problems
Customer satisfaction
28. Intrinsic product quality metrics
Reliability --- Defect density
• Correlated but different!
• Both are predicted values.
• Estimated using static and dynamic models
Defect: an anomaly in the product (“bug”)
Software failure: an execution whose effect does not conform to the software specification
30. MTBF (Mean Time Between Failures):
the expected time between two successive failures of a system
expressed in hours
a key reliability metric for systems that can be repaired or restored
(repairable systems)
applicable when several system failures are expected.
For a hardware product, MTBF decreases with its age.
31. MTTF (Mean Time To Failure):
the expected time to failure of a system
a key reliability-engineering metric for non-repairable systems
non-repairable systems can fail only once; for example, a satellite is not repairable.
Mean Time To Repair (MTTR): average time to restore a system after a failure
When there are no delays in repair: MTBF = MTTF + MTTR
Software products are repairable systems!
Reliability models neglect the time needed to restore the system after a failure:
with MTTR = 0, MTBF = MTTF
Availability = MTTF / MTBF = MTTF / (MTTF + MTTR)
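A minimal sketch of these relations (MTBF = MTTF + MTTR and Availability = MTTF / MTBF); the hour values are illustrative assumptions:

# Minimal sketch of the MTBF and availability relations on this slide.
mttf_hours = 950.0   # mean time to failure
mttr_hours = 2.0     # mean time to repair/restore

mtbf_hours = mttf_hours + mttr_hours       # assumes no delay before repair starts
availability = mttf_hours / mtbf_hours     # equivalently MTTF / (MTTF + MTTR)

print(f"MTBF        : {mtbf_hours:.1f} h")
print(f"Availability: {availability:.4%}")  # with MTTR = 0 this would be 100%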
32. 3.1.2. Defect rate (density)
Number of defects per KLOC or per function point, in a given time unit.
Example:
"The latent defect rate for this product, during the next four years, is 2.0 defects per KLOC."
A crude metric: a defect may involve one or more lines of code.
Lines Of Code (LOC):
- different tools count lines differently
- a defect rate metric must therefore state the counting method used for LOC
- comparing defect rates of two products written in different languages is not recommended
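A minimal sketch of why the LOC counting method must accompany a defect rate: the same source yields different LOC counts, and hence different defects-per-KLOC figures, depending on whether blank and comment lines are counted. The snippet and defect count are illustrative assumptions:

# Minimal sketch: the same source measured with three LOC counting rules.
source = """\
# compute order total
def total(prices):

    # sum of item prices
    return sum(prices)
"""

physical_lines = source.splitlines()
non_blank = [l for l in physical_lines if l.strip()]
code_only = [l for l in non_blank if not l.strip().startswith("#")]

defects = 2
for name, lines in [("physical", physical_lines),
                    ("non-blank", non_blank),
                    ("non-comment", code_only)]:
    kloc = len(lines) / 1000
    print(f"{name:<12}: {len(lines)} LOC -> {defects / kloc:.0f} defects/KLOC")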
33. Reliability or Defect Rate ?
Reliability:
often used with safety-critical systems such as air traffic control systems, avionics, weapons.
(usage profile and scenarios are better defined)
Defect density:
in many commercial systems (systems for commercial use)
• there is no typical user profile
• development organizations use defect rate for maintenance cost and
resource estimates
• MTTF is more difficult to implement and may not be representative of all
customers.
34. Customer Oriented Metrics
Customer Problems Metric
Customer problems when using the product:
valid defects, usability problems, unclear documentation, user errors.
Problems per user month (PUM) metric:
PUM = TNP/ TNM
TNP: Total number of problems reported by customers for a time period
TNM: Total number of license-months of the software during the period
= number of installed licenses of the software x number of months in the period
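A minimal sketch of the PUM calculation defined above; the counts are illustrative assumptions:

# Minimal sketch of PUM = TNP / TNM.
total_problems = 120          # TNP: problems reported by customers in the period
installed_licenses = 2_000
months_in_period = 3
license_months = installed_licenses * months_in_period   # TNM

pum = total_problems / license_months
print(f"PUM = {pum:.3f} problems per license-month")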
35. 3.2.2. Customer satisfaction metrics
Often measured on a five-point scale (see the summary sketch at the end of this slide):
1. Very satisfied
2. Satisfied
3. Neutral
4. Dissatisfied
5. Very dissatisfied
IBM: CUPRIMDSO
(capability/functionality, usability, performance, reliability,
installability, maintainability, documentation /information,
service and overall)
Hewlett-Packard: FURPS
(functionality, usability, reliability, performance and service)
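Responses on the five-point scale above are often summarized as the percentage of customers who are satisfied or very satisfied. A minimal sketch with illustrative sample responses:

# Minimal sketch: percent-satisfied summary of five-point-scale responses.
from collections import Counter

responses = ["very satisfied", "satisfied", "neutral", "satisfied",
             "dissatisfied", "very satisfied", "satisfied", "very dissatisfied"]

counts = Counter(responses)
percent_satisfied = (counts["very satisfied"] + counts["satisfied"]) / len(responses)
print(f"Percent satisfied: {percent_satisfied:.0%}")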
36. Ishikawa's Seven Basic Tools for Quality Control
• Checklist (or Check Sheet) – to facilitate gathering data and
to arrange data so it can be easily used later
• Pareto Diagram – a frequency chart of bars in descending order; the bars are usually associated with types of problems (see the sketch after this list)
• Histogram – a graphic representation of frequency counts of
a sample or a population
• Scatter Diagram – portrays the relationship of two interval
variables; can make outliers clear
• Run Chart – tracks the performance of the parameter of
interest over time; used for trend analysis
• Control Chart – an advanced form of a run chart for situations in which the process capability can be defined
• Cause and Effect Diagram (fishbone diagram) – shows the relationship between a characteristic and the factors that affect it
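A minimal sketch of the Pareto diagram described above, using matplotlib: defect categories sorted by frequency with a cumulative-percentage line on a second axis. The category counts are illustrative assumptions:

# Minimal sketch of a Pareto diagram of defect categories.
import matplotlib.pyplot as plt

defects = {"interface": 42, "logic": 27, "data handling": 14,
           "documentation": 9, "other": 5}

categories = sorted(defects, key=defects.get, reverse=True)
counts = [defects[c] for c in categories]
total = sum(counts)
cumulative = [sum(counts[: i + 1]) / total * 100 for i in range(len(counts))]

fig, ax = plt.subplots()
ax.bar(categories, counts)            # frequency bars in descending order
ax.set_ylabel("Defect count")

ax2 = ax.twinx()                      # second axis for the cumulative line
ax2.plot(categories, cumulative, marker="o", color="black")
ax2.set_ylabel("Cumulative %")
ax2.set_ylim(0, 100)

plt.title("Pareto diagram of defect categories")
plt.tight_layout()
plt.show()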
38. Checklists
• Summarize the key points of the software
development process
• More effective than lengthy process documents
• Help ensure that all tasks are complete and the
important factors or quality characteristics of each
task are covered
• Examples of checklists are:
– Design review checklist
– Code inspection checklist
– Moderator (for review and inspection) checklist
– Pre-code-integration checklist
– Entrance and exit criteria for system tests
– Product readiness checklist